Hoping to keep surveillance capitalists from capitalizing on your face? Safeguard your selfies with a digital countermeasure.

What’s new: Researchers at the University of Chicago and Fudan University devised a program that subtly alters portrait photos to confuse face recognition models without distorting the image to the human eye.

How it works: Named after the Guy Fawkes mask beloved by privacy advocates, Fawkes cloaks faces by superimposing patterns that, to machines, make them look like someone else.

  • Fawkes compares a portrait photo to another person’s picture, using a feature extractor to find the areas that differ most. Then it generates a perturbation pattern and uses it to alter individual pixels.
  • A penalty system balances the perturbations against a measure of user-perceived image distortion to make sure the effect is invisible to humans (see the sketch after this list).
  • An additional algorithm stress-tests Fawkes’ cloaks to make sure they fool models that use different feature extractors.
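
Conceptually, the cloak is the solution to a small optimization problem: nudge the image’s features toward a target identity while a penalty keeps the visible change under a perceptual budget. Below is a minimal sketch of that loop in PyTorch. The `cloak` function, the `embed` feature extractor, and the hyperparameters are illustrative assumptions, and a simple mean-squared-error hinge stands in for the paper’s DSSIM penalty; this is not the authors’ reference implementation.

```python
# Sketch of Fawkes-style cloaking. Assumes `image` and `target_image` are
# float tensors in [0, 1] and `embed` is a pretrained face-embedding network.
# Hyperparameter values here are placeholders, not the paper's settings.
import torch
import torch.nn.functional as F

def cloak(image, target_image, embed, rho=0.007, lam=100.0, steps=200, lr=0.01):
    """Perturb `image` so its features move toward `target_image`'s,
    while a hinge penalty caps the visible distortion at budget `rho`."""
    delta = torch.zeros_like(image, requires_grad=True)  # the perturbation pattern
    opt = torch.optim.Adam([delta], lr=lr)

    with torch.no_grad():
        target_feat = embed(target_image)  # features of the "other person"

    for _ in range(steps):
        cloaked = torch.clamp(image + delta, 0.0, 1.0)
        # Pull the cloaked image's features toward the target identity.
        feat_loss = F.mse_loss(embed(cloaked), target_feat)
        # Crude stand-in for the paper's DSSIM term: penalize only distortion
        # that exceeds the perceptual budget rho.
        distortion = F.mse_loss(cloaked, image)
        loss = feat_loss + lam * torch.clamp(distortion - rho, min=0.0)
        opt.zero_grad()
        loss.backward()
        opt.step()

    return torch.clamp(image + delta, 0.0, 1.0).detach()
```

In the paper, the distortion measure is DSSIM (structural dissimilarity) rather than mean squared error, and the cloak is optimized to transfer across multiple feature extractors, which is what the stress test in the last bullet checks.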

Results: The researchers uploaded 50 cloaked photos of the same person to face recognition services from Amazon, Megvii, and Microsoft, which trained on the data. All three failed to identify the person in 32 uncloaked validation images, a 100 percent protection rate. However, Fawkes had a hard time fooling models that had already trained on many uncloaked images of a given face. Those models developed amnesia, though, after ingesting a fake social media account that exclusively contained cloaked (and renamed) photos.

Yes, but: Fawkes isn’t foolproof.

  • The researchers were able to build models that saw through the system’s output. One such model cut Fawkes’ protection rate to 65 percent.
  • When 15 percent of a given person’s training photos were uncloaked, models trained on those photos identified the person more than half the time.

Why it matters: We need ways, whether legal or technical, to enable people to protect their privacy. The U.S. startup Clearview.ai made headlines in January when the New York Times reported that its surveillance system, trained on billions of photos scraped from social media sites without permission, was widely used by law enforcement agencies and private businesses.

We’re thinking: If this method takes off, face recognition providers likely will find ways to defeat it. It’s difficult to make images that humans can recognize but computers can’t.
