Will AI fakery erode public trust in key social institutions?

The fear: Generative models will flood media outlets with convincing but false photos, videos, ads, and news stories. The ensuing crisis of authority will lead to widespread distrust in everything from the financial system to democracy itself.

What could go wrong: Between deepfakes of celebrities and the GPT-2 language model’s ability to churn out faux articles that convince readers they’re from the New York Times, AI is a powerful tool for propagandists, charlatans, and saboteurs. As the technology improves, its potential for social disruption only grows.

Behind the worries: Digital fakery is already on the rise in a variety of sectors.

  • Scammers using AI-generated voices that mimicked C-level executives recently tricked corporations into wiring hundreds of thousands of dollars to offshore accounts.
  • In a video that went viral in May, U.S. House Speaker Nancy Pelosi appeared to slur her speech, prompting political opponents to question her fitness for office. In fact, the clip had been manipulated to alter playback speed at key moments. Although the fakery didn’t depend on AI, it clearly demonstrated the technology’s potential to spread disinformation rapidly and persuasively.
  • In early October, researchers at Microsoft unveiled a model designed to generate fake comments on news articles. Such tools could be used to create an illusion of grassroots support or dissent around any topic.

How scared should you be: The worry is hard to gauge because little research has evaluated the impact of digital fakery on public trust. So far, deepfakes have been used mostly to harass individual women, according to one study. An optimist might argue that growing awareness of AI-generated disinformation will spur people to develop stronger social bonds and standards for truth-telling. We’re more inclined to imagine an arms race between fakers and systems designed to detect them. As in digital security, the fakers would likely have an edge as they find ways to breach each new defense.

What to do: Researchers are considering a number of countermeasures to fake media. Some propose watermarks that would establish an item’s provenance. Others argue that blockchain offers an effective way to ensure that information originated with a trusted source.
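The watermarking idea can be sketched in miniature: a publisher signs a cryptographic hash of a media file, and anyone with the verification key can confirm the bytes are unmodified. The sketch below uses a shared-secret HMAC purely for illustration; a real provenance scheme would use asymmetric signatures and a public key infrastructure, and the key and function names here are hypothetical.

```python
import hashlib
import hmac

# Illustrative stand-in for a publisher's private signing key.
SECRET_KEY = b"publisher-signing-key"

def sign(media_bytes: bytes) -> str:
    """Return a hex signature binding the content to the publisher."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, signature: str) -> bool:
    """True only if the bytes match the signed original exactly."""
    return hmac.compare_digest(sign(media_bytes), signature)

original = b"frame data of the original video"
tag = sign(original)
print(verify(original, tag))            # unmodified media passes
print(verify(original + b"!", tag))     # any edit breaks the check
```

Even this toy version shows the limitation researchers point out: a signature proves integrity and origin, not truth — it can only flag that media was altered after a trusted party vouched for it.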

