Algorithms Against Disinformation: How Facebook, Twitter, and more fought disinfo in 2020.

[Illustration: a group of people in a snowball fight, taking cover behind a giant Facebook Like button]

The worldwide pandemic and a contentious U.S. election whipped up a storm of automated disinformation, and some big AI companies reaped the whirlwind.

What happened: Facing rising public pressure to block inflammatory falsehoods, Facebook, Google’s YouTube division, and Twitter scrambled to update their recommendation engines. Members of the U.S. Congress grilled the companies, a popular Netflix documentary excoriated them, and public opinion polls showed that they had lost the trust of most Americans.

Driving the story: The companies addressed the issue through various algorithmic and policy fixes — though they apparently stopped short of making changes that might seriously threaten the bottom line.

  • After discovering hundreds of fake user profiles that included head shots generated by AI, Facebook cracked down on manipulated media it deemed misleading and banned deepfake videos outright. The company continues to develop deep learning tools to detect hate speech, memes that promote bigotry, and misinformation about Covid-19.
  • YouTube developed a classifier to identify so-called borderline content: videos that comply with its rules against hate speech but promote conspiracy theories, medical misinformation, and other fringe ideas. (A minimal sketch of such a classifier follows this list.)
  • Facebook and Twitter shut down accounts they considered fronts for state-backed propaganda operations.
  • All three companies added disclaimers to content deemed to contain misleading information about the U.S. election. Twitter took its policy furthest, flagging falsehoods from President Donald Trump.
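
None of the companies has published its moderation models, but the borderline-content classifier described above is, at its core, a supervised text classifier trained on human-rated examples. Here is a minimal sketch of the idea using scikit-learn; the training snippets, labels, and demotion threshold are all invented for illustration:

```python
# Minimal sketch of a "borderline content" text classifier.
# All training examples, labels, and the threshold are hypothetical; a
# production system would train on millions of human-rated items and
# far richer signals than raw text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: video titles labeled by human raters.
texts = [
    "Doctors explain how the vaccine was tested",            # allowed
    "What they don't want you to know about 5G",             # borderline
    "Local election officials walk through vote counting",   # allowed
    "Secret cure suppressed by the government",              # borderline
]
labels = [0, 1, 0, 1]  # 1 = borderline, 0 = allowed

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new content; items above a threshold are demoted, not removed,
# since borderline content doesn't violate the platform's rules outright.
score = model.predict_proba(["Hidden truth about the vaccine"])[0, 1]
if score > 0.5:  # the cutoff is a policy choice, hypothetical here
    print(f"Demote in recommendations (borderline score: {score:.2f})")
```

The key design point, reflected in the sketch, is that borderline content is demoted in recommendations rather than deleted: it breaks no rule, so the lever is reach, not removal.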

Yes, but: The reforms may not stick. The companies have diluted some, and others have already backfired.

  • In June, the Wall Street Journal reported that some Facebook executives had squelched tools for policing extreme content. The company later reversed algorithmic changes made during the election that boosted reputable news sources. Perceptions that Facebook’s effort was halfhearted prompted some employees to resign.
  • YouTube’s algorithmic tweaks targeting disinformation have succeeded in cutting traffic to content creators who promote falsehoods. But they also boosted traffic to larger outlets, like Fox News, that often spread the same dubious information. (The toy ranking sketch below shows how that side effect can arise.)
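
The reported changes are, in essence, adjustments to recommendation ranking: demote items a classifier flags as borderline, boost sources on a vetted list. A hypothetical sketch, with invented field names and weights, shows why a per-source boost can resurface the same claims through larger outlets:

```python
# Hypothetical sketch of the kind of ranking adjustment described above:
# demote items flagged as borderline, boost authoritative sources.
# All weights and field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    engagement: float      # base relevance/engagement score
    borderline: float      # classifier output in [0, 1]
    authoritative: bool    # e.g., on a vetted news-source list

def adjusted_score(v: Video, demotion: float = 0.5, boost: float = 1.2) -> float:
    # Scale the base score down by how borderline the item looks...
    score = v.engagement * (1.0 - demotion * v.borderline)
    # ...then boost it if the source is on the vetted list.
    return score * boost if v.authoritative else score

feed = [
    Video("Fringe election theory", engagement=0.9, borderline=0.8, authoritative=False),
    Video("Network news election recap", engagement=0.7, borderline=0.1, authoritative=True),
]
ranked = sorted(feed, key=adjusted_score, reverse=True)
print([v.title for v in ranked])  # the authoritative item now outranks the fringe one
```

Because the boost applies to the source rather than the claim, a demoted falsehood can regain reach when a boosted outlet repeats it, consistent with the pattern reported above.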

Where things stand: There’s no clear way to win the online cat-and-mouse game against fakers, cranks, and propagandists. But the big cats must stay ahead or lose public trust — and regulators’ forbearance.

Learn more: For more details on using AI to stem the tide of disinformation and hate speech online, see our earlier stories on Facebook’s efforts here and here, and on YouTube’s here and here.

