MIT withdrew a popular computer vision dataset after researchers found that it was rife with social bias.

What’s happening: Researchers found racist, misogynistic, and demeaning labels among the nearly 80 million pictures in Tiny Images, a collection of 32-by-32 pixel color photos. MIT’s Computer Science and Artificial Intelligence Lab removed Tiny Images from its website and requested that users delete their copies as well.

What the study found: Researchers at University College Dublin and UnifyID, an authentication startup, conducted an “ethical audit” of several large vision datasets, each containing many millions of images. They focused on Tiny Images as an example of how social bias proliferates in machine learning.

  • In 1985, psychologists and linguists at Princeton began compiling a database of word relationships called WordNet. Their work has served as a cornerstone of natural language processing.
  • Scientists at MIT CSAIL compiled Tiny Images in 2006 by searching the internet for images associated with words in WordNet. WordNet includes racial and gender-based slurs, so Tiny Images collected photos labeled with such terms (a rough sketch of this kind of label audit appears after this list).
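To make the problem concrete, here is a minimal sketch of the kind of label audit such findings call for: it checks a dataset’s label vocabulary against WordNet synonyms and a blocklist. This is illustrative only, not the auditors’ actual method; the blocklist entries and label list are hypothetical placeholders, and it assumes NLTK’s WordNet corpus is installed.

```python
# Minimal label-audit sketch (illustrative only, not the study's method).
# Assumes: pip install nltk  and  python -c "import nltk; nltk.download('wordnet')"
from nltk.corpus import wordnet as wn

# Placeholder blocklist -- a real audit would use a vetted list of slurs and
# derogatory terms, which we deliberately do not reproduce here.
BLOCKLIST = {"offensive_term_a", "offensive_term_b"}

def flag_labels(labels):
    """Return labels whose surface form or WordNet synonyms hit the blocklist."""
    flagged = []
    for label in labels:
        candidates = {label.lower()}
        # Expand each label with its WordNet synonyms before checking.
        for synset in wn.synsets(label):
            candidates.update(lemma.name().lower() for lemma in synset.lemmas())
        if candidates & BLOCKLIST:
            flagged.append(label)
    return flagged

# Hypothetical usage: scan a dataset's label vocabulary before training.
print(flag_labels(["bicycle", "offensive_term_a"]))
```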

What the dataset’s creators said: “Biases, offensive and prejudicial images, and derogatory terminology alienates [sic] an important part of our community — precisely those that we are making efforts to include,” the researchers who built Tiny Images said in a statement.

Behind the news: Social bias — in data and models, in the industry, and in society at large — has emerged as a major issue in the machine learning community.

  • Concerns over bias in AI reignited last week after a generative model called Pulse converted a pixelated picture of Barack Obama, who is Black, into a high-resolution image of a white man.
  • The compilers of ImageNet recently culled labels — also based on WordNet — deemed biased or offensive from the dataset’s person subtree.

Why it matters: Social biases encoded in training data become entwined with the foundations of machine learning. WordNet transmitted derogatory, stereotyped, and inaccurate labels to Tiny Images, which may have passed them along to countless real-world applications.

We’re thinking: As AI practitioners, we have a responsibility to re-examine the ways we collect and use data. For instance, CIFAR-10 and CIFAR-100 were derived from Tiny Images. We’re not aware of biases in those datasets, but because one dataset’s bias can propagate to another, it’s necessary to track data provenance and address any problems discovered in an upstream source. Recent proposals set standards for documenting models and datasets to weed out harmful biases before they take root.
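As one illustration of what such documentation might capture, here is a minimal sketch of a provenance record for a dataset derived from Tiny Images. The field names are our own illustrative choices, not a published standard.

```python
# Illustrative provenance record; field names are hypothetical, not a standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetRecord:
    name: str
    derived_from: List[str] = field(default_factory=list)   # upstream sources
    label_source: str = ""                                   # where labels came from
    collection_method: str = ""
    known_issues: List[str] = field(default_factory=list)    # audits, withdrawals

cifar10 = DatasetRecord(
    name="CIFAR-10",
    derived_from=["Tiny Images"],
    label_source="10 hand-picked object classes",
    collection_method="Hand-labeled subset drawn from Tiny Images",
    known_issues=["Upstream source Tiny Images withdrawn by MIT over biased labels"],
)

# Recording derived_from makes it possible to flag every downstream dataset
# when a problem like this one is discovered in an upstream source.
print(cifar10)
```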
