OpenAI said on Thursday it had discovered and disrupted five online campaigns that used its generative artificial intelligence technology to deceptively manipulate public opinion around the world and influence geopolitics.
In a report on covert influence operations, OpenAI said the campaigns were carried out by state actors and private companies in Russia, China, Iran and Israel. They used OpenAI’s technology to generate social media posts, translate and edit articles, write headlines and debug computer programs, often to win support for political campaigns or to sway public opinion in geopolitical conflicts.
Social media researchers said OpenAI’s report was the first time a major artificial intelligence company had disclosed how its specific tools were being used for such online deception. The recent rise of generative AI has raised questions about how the technology might contribute to the spread of online misinformation, especially in a year with major elections taking place around the world.
Ben Nimmo, OpenAI’s lead researcher, said the company’s goal was to show the realities of how the technology was changing online deception, after years of speculation that generative AI would be used in such campaigns.
“Our case studies provide examples of some of the most widely reported and longest-running influence campaigns currently underway,” he said.
Nimmo said the campaigns often used OpenAI’s technology to post political content, but it was difficult for the company to determine whether they were targeting specific elections or simply trying to provoke people. He also said the campaigns failed to gain much traction and the AI tools did not appear to expand their reach or influence.
“These influence operations are still having trouble reaching audiences,” Nimmo said.
But Graham Brookie, senior director of the Atlantic Council’s Digital Forensic Research Lab, warned that the online misinformation landscape could change as generative AI technology becomes more powerful. This week, OpenAI, the maker of the ChatGPT chatbot, said it had begun training a new flagship AI model that would bring “higher levels of capability.”
“This is a new type of tool,” Brookie said. “It remains to be seen what impact it will have.”
(The New York Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to AI systems.)
OpenAI, which like Google, Meta and Microsoft offers online chatbots and other AI tools that can compose social media posts, generate realistic images and write computer programs, said in the report that its tools had been used in influence campaigns that researchers have tracked for years, including a Russian campaign known as “Doppelganger” and a Chinese campaign known as “Spamouflage.”
A post on Telegram from an influence campaign that OpenAI said was generated using its tools. via OpenAI
OpenAI said Doppelganger used its technology to post anti-Ukrainian comments in English, French, German, Italian and Polish on X. The company’s tools were also used to translate and edit articles supporting Russia’s side in the war in Ukraine into English and French, and to convert anti-Ukrainian news articles into Facebook posts.
OpenAI’s tools were also used in a previously unknown Russian campaign that targeted people in Ukraine, Moldova, the Baltic states and the United States through the Telegram messaging service, the company said. The campaign used artificial intelligence to post comments in Russian and English about the war in Ukraine, the political situation in Moldova and American politics. The effort also used OpenAI tools to debug computer code that was apparently designed to post messages to Telegram automatically.
OpenAI said the political comments received few replies and “likes.” The efforts were also sometimes sloppy: at one point, the campaign posted text that was clearly generated by artificial intelligence. “As an AI language model, I am here to help and provide the needed comments,” one post read. At other times it posted in broken English, which OpenAI described as “poor grammar.”
Spamouflage, which has long been seen as a Chinese operation, used OpenAI’s technology to debug code and to seek advice on how to analyze social media and research current events, the company said. Its tools were also used to create social media posts that disparaged critics of the Chinese government.
The Iranian activity has been linked to a group known as the International Union of Virtual Media, which used OpenAI’s tools to create and translate long-form articles and headlines aimed at spreading pro-Iranian, anti-Israeli and anti-American sentiment online.
OpenAI called the Israeli campaign “Zero Zeno” and said it was run by a firm that manages political campaigns. It used OpenAI’s technology to generate fictional personas and biographies meant to stand in for real people on social media services in Israel, Canada and the United States, and to post anti-Islamic messages.
The OpenAI report said that while today’s generative AI can help make influence campaigns more effective, the tools have not, as many AI experts predicted, created large amounts of convincing false information.
“This suggests that some of our biggest concerns about AI-enabled influence manipulation and AI-enabled disinformation have not yet materialized,” said Jack Stubbs, chief intelligence officer at Graphika, which tracks manipulation on social media services and reviewed OpenAI’s findings.