Mr. Deepfake Goes to Washington

[Image: Panel at the House Intelligence Committee]

U.S. representatives mulled over legal precedents for — and impediments to — regulating the realistic computer-generated videos known as deepfakes.

What’s new: The House Intelligence Committee, worried about the potential impact of fake videos on the 2020 election, questioned experts on AI law, policy, and technology. The panel laid to rest lawmakers' fear that deepfake technology can make anyone appear to do anything, but it highlighted just how easy it is to perpetrate digital hoaxes. The committee contemplated whether to prosecute programmers responsible for deepfake code, and whether regulating deepfakes would impinge on the constitutional right to lie (seriously).

Backstory: Efforts to create realistic-looking video using AI date back to the 1990s. Recently, though, deepfakes have become far more sophisticated. Days before the congressional hearing, activists posted a video of Mark Zuckerberg appearing to deliver a monologue, worthy of a James Bond villain, on the power he wields over popular opinion.

Why it matters: Given the disinformation that swirled around the 2016 election, many experts believe that deepfakes pose a threat to democracy. However, regulating them likely would have a chilling effect on free speech, to say nothing of AI innovation.

Legislative agenda: Congress is considering at least two bills targeting deepfakes.

  • One would make it a crime to produce maliciously manipulated media.
  • The second would require creators of synthetic media to watermark their output (a notion sketched in code below).
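
Neither bill specifies a mechanism, but the minimal sketch below suggests what the lowest-tech version of the watermarking requirement might look like: stamping a visible label on every frame of a synthetic video with OpenCV. The file names and label text are hypothetical, and a real mandate would more likely call for robust, hard-to-remove watermarks.

```python
# Illustrative only: stamp a visible "synthetic media" label on each frame.
# The bill prescribes no method; file names and label are hypothetical.
import cv2

reader = cv2.VideoCapture("deepfake.mp4")  # hypothetical input file
fps = reader.get(cv2.CAP_PROP_FPS)
size = (int(reader.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT)))
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("watermarked.mp4", fourcc, fps, size)

while True:
    ok, frame = reader.read()
    if not ok:
        break
    # Draw the label in the lower-left corner of every frame.
    cv2.putText(frame, "SYNTHETIC MEDIA", (10, size[1] - 20),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
    writer.write(frame)

reader.release()
writer.release()
```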

A bigger problem: Digital fakery isn’t just about people using neural networks to synthesize video and voices. Last week, security experts spotted a bogus LinkedIn profile purporting to represent a foreign policy specialist, its portrait photo apparently fabricated by a generative adversarial network. Then there are simple tricks like the slo-mo used to make Speaker of the House Nancy Pelosi appear to slur her words. Not to mention the disinformation potential of Photoshop, or even the humble qwerty keyboard.
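
How low is the bar for the Pelosi-style trick? The hedged sketch below re-encodes a clip at 75 percent of its original frame rate, which stretches playback and makes speech sound sluggish. It assumes OpenCV and hypothetical file names; note that OpenCV ignores audio, so an actual hoax would also have to slow the soundtrack.

```python
# Minimal sketch of the slo-mo trick: write the same frames at a reduced
# frame rate so playback runs slower. File names are hypothetical.
import cv2

reader = cv2.VideoCapture("speech.mp4")  # hypothetical input clip
fps = reader.get(cv2.CAP_PROP_FPS)
size = (int(reader.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT)))
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
# Writing at 75% of the original rate stretches the clip to 4/3 its length.
# OpenCV drops audio, so the soundtrack would need separate slowing.
writer = cv2.VideoWriter("slowed.mp4", fourcc, fps * 0.75, size)

while True:
    ok, frame = reader.read()
    if not ok:
        break
    writer.write(frame)

reader.release()
writer.release()
```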

Our take: After years of effort, social media platforms are still struggling to define what is and isn’t okay to share. Doing so in the context of deepfake technology won't be easy. House Intelligence Committee chair Adam Schiff (D-CA) hinted at tightening existing regulations — like Section 230 of the Communications Decency Act, which protects platforms from legal liability for content posted by users — to hold platforms accountable for the content they host. This could give services like Facebook and YouTube an incentive to root out malicious fakery, but it could also restrict legitimate advocacy or satire. For now, consumers of digital media will have to think twice before believing what they see.
