A supergroup of machine learning models flags manipulated photos.

What’s new: Jigsaw, a tech incubator owned by Alphabet, released a system that detects digitally altered images. The organization is testing it with a dozen media partners including Rappler in the Philippines, Animal Político in Mexico, and Code for Africa.

What’s in it: Assembler is an ensemble of six algorithms, each developed by a different team to spot a particular type of manipulation. Jigsaw put them together and trained them on a dataset from the U.S. National Institute of Standards and Technology’s Media Forensics Challenge.

  • Jigsaw contributed a component that identifies deepfakes generated by StyleGAN.
  • The University of Naples Federico II supplied two models: one that spots image splices and another that finds repeated patches of pixels, a sign that parts of an image were copied and pasted within it.
  • UC Berkeley developed a model that scans pixels for clues that they were produced by different cameras, an indication of image splicing.
  • The University of Maryland’s contribution uses color values as a baseline to look for differences in contrast and other artifacts left by image editing software.
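Jigsaw hasn’t published how Assembler fuses its detectors’ outputs, but the basic idea of an ensemble of specialist detectors can be sketched simply. In this hypothetical example, each detector maps an image to a manipulation probability, and the ensemble flags the image if any single specialist raises a confident alarm (the detector names and max-score fusion rule are illustrative assumptions, not Assembler’s actual design):

```python
from typing import Callable, Dict

# Hypothetical sketch: each detector maps raw image bytes to a
# manipulation probability in [0, 1]. Assembler's real fusion method
# is not public; here we take the maximum score, so a confident alarm
# from any one specialist flags the image.

Detector = Callable[[bytes], float]

def ensemble_score(image: bytes, detectors: Dict[str, Detector]) -> Dict[str, float]:
    """Run every detector and report per-detector and fused scores."""
    scores = {name: det(image) for name, det in detectors.items()}
    scores["ensemble"] = max(scores.values())
    return scores

# Toy stand-ins for the real models (splice, copy-move, GAN detectors).
detectors = {
    "splice": lambda img: 0.10,
    "copy_move": lambda img: 0.85,  # this specialist fires
    "stylegan": lambda img: 0.05,
}

result = ensemble_score(b"fake-image-bytes", detectors)
print(result["ensemble"])  # 0.85: flagged by the copy-move detector
```

A real system would likely learn the fusion (e.g., a weighted combination trained on labeled forgeries) rather than taking a simple maximum, but the structure — independent specialists feeding one verdict — is the same.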

Why it matters: Fake images can deepen political divides, empower scammers, and distort history.

  • In the U.S., members of Congress have tried to discredit former President Obama with fake pictures purporting to show him shaking hands with Iranian President Hassan Rouhani.
  • Scammers used doctored images of the recent Australian bushfires to solicit donations for nonexistent relief funds.
  • Disinformation is known to have influenced politics in Brazil, Kenya, the Philippines, and at least a dozen other democracies.

We’re thinking: Unfortunately, the next move for determined fakers may be to use adversarial attacks to fool this ensemble. But journalists working to keep future elections fair will need every tool they can get.
