OpenAI, a company co-founded by Elon Musk, has announced an advanced artificial intelligence that writes original texts on a given topic. The system was intended to translate texts into other languages, answer questions, and perform other useful tasks, but research showed it could be extremely dangerous: criminals could use it for the mass deception of people. For this reason, the company decided to release only a limited version and to keep the original in the strictest confidence.
The model, called GPT-2, was trained on 8 million web pages. From them it learned to generate new texts on a topic set by a human: for example, the AI can write an entire essay on the benefits of recycling, or a fantasy story in the style of “The Lord of the Rings”. However, the researchers also found that the system is good at writing fake news about celebrities, disasters, and even wars.
At the moment, the full version of the text-generation tool is available only to the developers and some writers at the MIT Technology Review. Because it could be used for the uncontrolled creation of fake news and social media posts, it was decided not to publish it openly and to keep it secret.
The ScienceAlert edition shared an example of such fake news:
Miley Cyrus was caught shoplifting at a store on Hollywood Boulevard
The 19-year-old singer was caught on camera, and security guards escorted her out of the store. The singer was wearing a black hoodie with the words “Blurred Lines” on the front and “Fashion Police” on the back.
A short essay on the disadvantages of recycling can be read on the OpenAI blog.
The developers admit that the language model is not yet perfect: the system sometimes repeats fragments of text, switches topics unnaturally, and fails at modelling the world. For example, at one stage of testing, the system wrote about a fire burning under water.
According to OpenAI policy director Jack Clark, within a few years this algorithm could turn into a dangerous weapon for misleading people on a mass scale. Some critics, however, believe the technology poses no danger and that the company is simply attracting attention.
Share your opinion on the threat of artificial intelligence in the comments. To discuss this topic further, join our Telegram chat.