A new model designed to sharpen images tends to turn some dark faces white, igniting fresh furor over bias in machine learning.

What’s new: Built by researchers at Duke University, Photo Upsampling via Latent Space Exploration (Pulse) generates high-resolution versions of low-resolution images. It sparked controversy when it transformed a pixelated portrait of Barack Obama into a detailed picture of a white man.

How it works: Most upsampling models are trained to map low-res input directly to high-res output. Pulse works the other way around: it generates a series of high-res candidates and progressively optimizes them until, once downsampled, they match the low-res source.

  • Pulse uses StyleGAN, a pre-trained generative adversarial network, to generate a new high-res image from an input latent vector.
  • The system downsamples the generated image to the original’s resolution and compares the two.
  • Based on the differences, it adjusts the latent vector and repeats the process 100 times to arrive at its final output (see the sketch after this list).
  • Human judges scored Pulse’s output as more realistic than that of four competing models. The computer-based assessment Natural Image Quality Evaluator (NIQE) rated Pulse’s images higher than those in a database of high-res celebrity portraits.
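The loop below is a minimal sketch of that latent-space search in PyTorch, assuming a hypothetical pre-trained StyleGAN wrapper called `generator` that maps a latent vector to a high-res image; the published method adds latent constraints and regularization terms that this sketch omits.

```python
# Minimal sketch of a Pulse-style upsampling loop (not the authors' exact code).
# `generator` is a hypothetical pre-trained StyleGAN module: latent -> image.

import torch
import torch.nn.functional as F

def pulse_upsample(low_res, generator, latent_dim=512, steps=100, lr=0.1):
    """Search the generator's latent space for a high-res image whose
    downsampled version matches the low-res input."""
    # low_res: (1, 3, h, w) tensor; generator(z) -> (1, 3, H, W) with H >> h
    z = torch.randn(1, latent_dim, requires_grad=True)  # random starting latent
    optimizer = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):  # the system repeats the process ~100 times
        optimizer.zero_grad()
        high_res = generator(z)  # candidate high-res image
        # Downsample the candidate to the original's resolution...
        downsampled = F.interpolate(
            high_res, size=low_res.shape[-2:], mode="bicubic", align_corners=False
        )
        # ...and penalize the pixel-wise difference from the low-res source.
        loss = F.mse_loss(downsampled, low_res)
        loss.backward()
        optimizer.step()  # adjust the latent vector based on the differences

    return generator(z).detach()  # final high-res output

```

Because the loss compares only downsampled pixels, many different high-res faces can fit the same low-res input equally well, which is exactly where skews in the generator’s training data come into play.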

The controversy: Twitter user Chicken3gg revealed Pulse’s bias using a downsampled photo of the former U.S. president. That prompted machine learning engineer Robert Osazuwa Ness to try it on blurred images of U.S. Senator Kamala Harris, actress Lucy Liu, and other nonwhite individuals. The system whitewashed them, too, and also interpreted some female faces as male.

  • The incident triggered an online debate that drew in major figures in the AI community.
  • Pulse’s authors blamed racially imbalanced training data. When they trained their system on a more diverse dataset, its accuracy across racial groups ranged from 79 percent to 90 percent.
  • Facebook’s AI chief Yann LeCun echoed the notion that the system’s bias resulted from racially lopsided training data. Timnit Gebru, co-leader of Google’s ethical AI team, shot back that focusing on data alone downplays systemic bias in the machine learning community. As the argument grew heated, LeCun announced his withdrawal from Twitter.

Why it matters: Flawed AI leads to real harm. In January, Detroit police arrested an African-American man for theft after a face recognition system misidentified him. Such systems have been shown to misidentify Black faces at far higher rates than white ones.

We’re thinking: Upscaling powered by machine learning is sharpening images in devices from televisions to microscopes. The AI community urgently needs tests and audit procedures to ensure that such technology is trustworthy and free of bias.
