The Verge reports on efforts by OpenAI to create a language modelling program that could emulate the writing style of a real human.
This quote caught my eye:
“The writing it produces is usually easily identifiable as non-human. Although its grammar and spelling are generally correct, it tends to stray off topic, and the text it produces lacks overall coherence.”
This, to be honest, is quite frightening because anyone who has read, say, a tabloid newspaper, an academic paper, a pro-Brexit speech or most Twitter threads would say this is exactly what humans write like.
But in all seriousness, this work lands firmly in the ‘careful what you wish for’ category, along with some of the (arguably impressive) tech that Adobe, among others, is developing in the consumer sphere (i.e. you don’t need a national budget or a movie studio to get hold of it). Adobe, for example, has demonstrated software that allows you to change words in recorded speech or even fake entire sentences. The creative potential is obvious – but so too is the ‘fake news’ potential.
What’s needed here is some sort of embedded ‘watermark’, readable by apps, that can add a warning symbol to alert readers or listeners that what they’re being presented with has been altered.
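To make the watermark idea concrete, here’s a deliberately naive sketch: hiding a marker in text using zero-width characters that a reader never sees but an app can detect. The function names and the `"AI"` payload are my own inventions for illustration, and a real provenance scheme would need to be far more robust (this one vanishes the moment the text is retyped or pasted through a plain-text filter):

```python
# Toy illustration of an embedded text 'watermark' using zero-width
# characters. Purely a sketch -- not a real or tamper-proof scheme.

ZWJ = "\u200d"   # zero-width joiner (encodes a 1 bit)
ZWNJ = "\u200c"  # zero-width non-joiner (encodes a 0 bit)
MARKER = "AI"    # hypothetical watermark payload


def embed_watermark(text: str, payload: str = MARKER) -> str:
    """Hide the payload as invisible zero-width bits after the first word."""
    bits = "".join(format(ord(c), "08b") for c in payload)
    invisible = "".join(ZWJ if b == "1" else ZWNJ for b in bits)
    head, _, tail = text.partition(" ")
    return head + invisible + ((" " + tail) if tail else "")


def detect_watermark(text: str):
    """Return the hidden payload if present, else None."""
    hidden = [c for c in text if c in (ZWJ, ZWNJ)]
    if not hidden:
        return None
    bits = "".join("1" if c == ZWJ else "0" for c in hidden)
    chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits) - 7, 8)]
    return "".join(chars) or None
```

A reader app could then call `detect_watermark` on any incoming text and display a warning symbol when it returns a payload, which is roughly the workflow suggested above.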
OpenAI, however, has seen the potential for its work to be abused and has decided simply not to publish its data. Which is all well and good, but I can imagine that someone, somewhere, is figuring out a way to get hold of it.