Face detection being used on a person during the assault on the U.S. Capitol

Face recognition is being used to identify people involved in last week’s assault on the U.S. Capitol. It’s also being misrepresented to support their cause.

What’s new: Law enforcement agencies and online sleuths are using deep learning to put names to faces in images shot while supporters of U.S. President Trump overran the Capitol in Washington, D.C., to stop certification of his defeat in the recent national election, an attack that left several people dead and many injured. At the same time, pro-Trump propagandists are falsely claiming that the technology shows left-wing infiltrators led the attack.

What happened: Police arrested few of the perpetrators on the spot. In the aftermath, abundant images of the event have fueled AI-powered sleuthing to identify those who were allowed to leave the scene.

  • University of Toronto researcher John Scott-Railton used face recognition and image enhancement to help identify a man photographed inside the Senate chamber wearing body armor and carrying zip-tie handcuffs. The man, retired Air Force Lieutenant Colonel Larry Rendall Brock, Jr., was subsequently arrested.
  • Clearview AI, a face recognition company used by thousands of U.S. law enforcement agencies, saw a 26 percent jump in search requests following the attack. At least two police agencies have acknowledged using the service to identify perpetrators.
  • Even as face recognition determined that some of the most visible leaders of the assault were Trump supporters, the right-leaning Washington Times erroneously reported that face recognition vendor XRVision had identified individuals leading the assault as left-wing Antifa activists. XRVision called the story “outright false, misleading, and defamatory.”

Deepfakes, too: Falsehoods also circulated regarding deepfake technology. Users of 4chan and social media site Parler wrongly asserted that President Trump’s post-insurrection speech, in which he called the participants “criminals” and “unpatriotic,” was faked by AI. The White House debunked this claim.

Why it matters: The Capitol assault, apart from its aim to disrupt the democratic process (and apparently to assassinate officials), highlights that face recognition and deepfakes are two sides of the machine learning coin: one a powerful tool for uncovering facts, the other a powerful tool for inventing them. While police rely on the former, propagandists are exploiting public confusion about both by spreading believable but false claims.

We’re thinking: Paranoia about artificial intelligence once centered on fear that a malicious superintelligence would wreak havoc. It turns out that humans using AI — and lies about AI — to spread disinformation pose a more immediate threat.
