Disinformation Documented: OpenAI takes action against misuse of its models in propaganda

Published
Jun 5, 2024

OpenAI models were used in five disinformation campaigns, the company said.

What’s new: OpenAI discovered that operations based in Russia, China, Iran, and Israel had used the company’s models to create or revise text in attempts to influence international political opinion. The generated media failed to reach a mass audience, the company said. It banned the accounts involved.

How it works: Most of the groups primarily used OpenAI’s language models to generate inauthentic social media comments for posting on dummy accounts intended to create the illusion of popular support for certain causes. Some groups used the company’s models to debug code, generate text for websites, and produce images such as political cartoons. Four of the five groups were already known to disinformation researchers.

  • A Russian organization previously unknown to researchers generated large volumes of pro-Russia and anti-Ukraine comments in Russian and English and distributed them via the messaging service Telegram. The comments often included poor grammar or telltale phrases such as “As an AI model, . . .”
  • Another Russian group that researchers call Doppelganger generated pro-Russia social media comments in English, French, and German. It also used OpenAI models to translate articles from Russian into other languages for publication on websites. Doppelganger used a third-party API to circumvent OpenAI’s restrictions on Russian users. OpenAI has suspended the API.
  • A Chinese operation known to researchers as Spamouflage generated Chinese-language social media comments that supported the Chinese government. It also used OpenAI technology to debug code for a website dedicated to criticizing opponents of the government.
  • An Iranian organization called the International Union of Virtual Media (IUVM) generated English and French articles, headlines, and other text for its website. IUVM is considered a mouthpiece for the Iranian government.
  • STOIC, an Israeli company that runs political social media campaigns, generated articles and social media comments. It also created fictitious bios for inauthentic social media accounts that included images apparently created by other AI models. STOIC created pro-Israel and anti-Palestine comments as well as comments critical of India’s ruling Bharatiya Janata Party.

Behind the news: Researchers at Google and several fact-checking organizations found that AI-produced misinformation on the internet, mostly images, videos, and audio clips, rose sharply starting in the first half of 2023. By the end of that year, generative AI accounted for more than 30 percent of computer-manipulated media.

Why it matters: Many observers are concerned about the potential proliferation of political disinformation as AI models that generate realistic text, images, video, and audio become widely available. This year will see elections in at least 64 countries, including most of the world’s most populous nations, a rich opportunity for AI-savvy propagandists. While propagandists have taken advantage of OpenAI’s models, the company was able to detect their operations and shut them down. More such efforts are bound to follow.

We’re thinking: Generative AI’s potential to fuel propaganda is worth tracking and studying. But it’s also worth noting that the accounts identified by OpenAI failed to reach significant numbers of viewers or otherwise have an impact. So far, at least, distribution, not generation, continues to be the limiting factor on disinformation.
