AI Spreads Disinformation

Understanding the fear of AI-powered disinformation

[Illustration: Frankenstein painting a billboard that reads "Bunnies are the real monsters"]

Will AI promote lies that deepen social divisions?

The fear: Propagandists will bait online recommendation algorithms with sensationalized falsehoods. People who snap at the clickbait will be reeled into opposing ideological silos.

Behind the worries: Consumption of online content has skyrocketed since the pandemic began. Social media platforms, especially, are known to be vectors for disinformation. Bad actors have embraced algorithmically boosted disinformation campaigns to advance their agendas.

  • This year alone, agitators have exploited these systems to widen political divisions, spread falsehoods about Covid-19, and promote irrational prejudices.
  • Russian operatives have been blamed for spreading misinformation on a vast range of topics since at least 2014, when the Kremlin flooded the internet with conspiracy theories about the shooting down of a Malaysian passenger jet over Ukraine. That campaign helped to cast doubt on official conclusions that Russian forces had destroyed the plane.
  • YouTube’s recommendation engine is primarily responsible for the growing number of people who believe that Earth is a flat disc rather than a sphere, a 2019 study found.

How scared should you be: Social media networks are getting better at spotting and blocking coordinated disinformation campaigns. But they’re still playing cat-and-mouse with propagandists.

  • Earlier this month, researchers found that Facebook users could slip previously flagged posts past the automated content moderation system by making simple alterations, such as changing the background color (see the sketch after this list).
  • Creators of social media bots are using portraits created by generative adversarial networks to make automated accounts look like they belong to human users.
  • Efforts to control disinformation occasionally backfire. Conservative media outlets in the U.S. accused Twitter of left-wing bias after it removed a tweet by President Trump that contained falsehoods about coronavirus.
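The background-color trick works because naive duplicate detection fingerprints a post's exact content, so any edit, however small, yields a different fingerprint. The following minimal sketch is illustrative only, not Facebook's actual moderation pipeline: it assumes a simple hash-based matcher and shows how it misses a previously flagged image once the background color shifts.

    import hashlib

    def fingerprint(pixels):
        """Fingerprint an image by hashing its raw RGB pixel values."""
        raw = bytes(channel for pixel in pixels for channel in pixel)
        return hashlib.sha256(raw).hexdigest()

    # A tiny 2x2 "image": white background with one red pixel, previously flagged.
    flagged_post = [(255, 255, 255), (255, 255, 255), (255, 255, 255), (255, 0, 0)]

    # The same content reposted with the background color nudged slightly.
    altered_post = [(254, 254, 254), (254, 254, 254), (254, 254, 254), (255, 0, 0)]

    # The fingerprints no longer match, so the earlier flag is never applied.
    print(fingerprint(flagged_post) == fingerprint(altered_post))  # False

Production systems use more robust perceptual matching, but the cat-and-mouse dynamic is the same: propagandists keep probing for alterations the matcher does not catch.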

What to do: No company can tell fact from fiction definitively among the infinite shades of gray. AI-driven recommendation algorithms, which generally optimize for engagement, can be designed to limit the spread of disinformation. The industry is badly in need of transparent processes designed to reach reasonable decisions that most people can get behind (like free elections in a democracy). Meanwhile, we can all be more vigilant for signs of disinformation in our feeds.
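To make the recommendation point concrete, here is a minimal, hypothetical sketch of a re-ranked feed: rather than ordering posts purely by predicted engagement, the ranker discounts each post by a classifier's estimate that it is disinformation. The post names, scores, and penalty weight are all assumptions for illustration, not any platform's real system.

    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        engagement: float  # predicted engagement; higher means "stickier"
        misinfo: float     # classifier's estimate (0-1) that the post is false

    def rank_feed(posts, penalty=0.9):
        """Order posts by engagement, discounted by likely disinformation."""
        return sorted(
            posts,
            key=lambda p: p.engagement * (1.0 - penalty * p.misinfo),
            reverse=True,
        )

    feed = [
        Post("Sensational falsehood", engagement=0.95, misinfo=0.90),
        Post("Sober reporting", engagement=0.60, misinfo=0.05),
    ]
    print([p.title for p in rank_feed(feed)])  # ['Sober reporting', 'Sensational falsehood']

Tuning that penalty is the hard part: set it too low and clickbait falsehoods still dominate; set it too high and the platform invites accusations of over-censorship, as the Twitter episode above shows.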

