Deepfakes threaten to undermine law and order, perhaps democracy itself. A coalition of tech companies, nonprofits, and academics joined forces to counter potential adverse impacts.

What’s new: The Deepfake Detection Challenge aims to provide a data set of custom-built deepfakes. Funded by a $10 million grant from Facebook, it also promises a prize for developing tools that spot computer-generated videos.

The details: Facebook is producing videos with actors who have consented to having their features altered by deepfake technology.

  • A working session at the International Conference on Computer Vision in October will perform quality control.
  • Facebook plans to offer access on a limited basis, with full release to follow at the NeurIPS conference in December.
  • A competition to identify deepfakes in the data set will run until spring 2020, with the winner to be awarded an unspecified prize.
  • Other partners include Cornell Tech, Microsoft, MIT, the Partnership on AI, UC Berkeley, University at Albany-SUNY, University of Maryland College Park, University of Oxford, and WITNESS.

Behind the news: Activists goaded Facebook to action in June, when they released a synthesized video of Mark Zuckerberg rhapsodizing over his control of billions of people’s data.

Why it matters: Deepfakes often are portrayed as a potential vector for political disinformation. But, as Vice and Wired point out, the clear and present danger is the harassment of individuals, particularly women, activists, and journalists.

We’re thinking: The fact that deepfakes are created by adversaries means the data set — and resulting filters — will need to evolve as the fakers adapt to detection algorithms.

Can you spot fakes? Test your personal deepfake radar via this online guessing game.
