Bias

Pain Points in Black and White

A model designed to assess medical patients’ pain levels matched the patients’ own reports better than doctors’ estimates did — when the patients were Black.
Ayanna Howard
Bias

Ayanna Howard: Training in Ethical AI

As AI engineers, we have tools to design and build any technology-based solution we can dream of. But many AI developers don’t consider it their responsibility to address potential negative consequences as a part of this work.
Tree farm dataset
Bias

Representing the Underrepresented

Some of deep learning’s bedrock datasets came under scrutiny as researchers combed them for built-in biases. They found that popular datasets impart biases against socially marginalized groups to trained models due to the ways the datasets were compiled, labeled, and used.
Screen captures of AI Incident Database, a searchable collection of reports on the technology’s missteps
Bias

Cataloging AI Gone Wrong

A new database tracks failures of automated systems including machine learning models. The Partnership on AI, a nonprofit consortium of businesses and institutions, launched the AI Incident Database, a searchable collection of reports on the technology’s missteps.
Collage of self portraits
Bias

Unsupervised Prejudice

Social biases are well documented in decisions made by supervised models trained on ImageNet’s labels. But they also crept into the output of unsupervised models pretrained on the same dataset.
Illustration of a neighborhood haunted by an evil pumpkin and a black cat
Bias

Giant Models Bankrupt Research

What if AI requires so much computation that it becomes unaffordable? The fear: Training ever more capable models will become too pricey for all but the richest corporations and government agencies.
Doctor holding candy and kid dressed as a ghost on a weighing scale
Bias

Unfair Outcomes Destroy Trust

Will AI that discriminates based on race, gender, or economic status undermine the public’s confidence in the technology? Seduced by the promise of cost savings and data-driven decision making, organizations will deploy biased systems that end up doing real-world damage.
Contrast between real and synthetic datasets
Bias

Battling Bias in Synthetic Data

Synthetic datasets can inherit flaws in the real-world data they’re based on. Startups are working on solutions. Generating synthetic datasets for training machine learning systems is a booming business.
Data and examples related to IMLE-GAN
Bias

Making GANs More Inclusive

A typical GAN’s output doesn’t necessarily reflect the data distribution of its training set. Instead, GANs are prone to modeling the majority of the training distribution, sometimes ignoring rare attributes — say, faces that represent minority populations.
Woman with plenty of shopping bags
Bias

Credit Where It’s Due

A neural network is helping credit card users continue to shop even when the lender’s credit-approval network goes down. Visa developed a deep learning system that analyzes individual cardholders’ behavior in real time to predict whether credit card transactions should be approved or denied.
Protest in the UK and information about grading algorithm
Bias

AI Grading Doesn’t Make the Grade

The UK government abandoned a plan to use machine learning to assess students for higher education. The UK Department for Education discarded grades generated by an algorithm designed to predict performance on the annual Advanced Level qualifications, which had been canceled due to the pandemic.
Examples of age, gender, and race identification by face recognition
Bias

Race Recognition

Marketers are using computer vision to parse customers by skin color and other perceived racial characteristics. A number of companies are pitching race classification as a way for businesses to understand the buying habits of different groups.
Rite-Aid’s face recognition system
Bias

Retail Surveillance Revealed

A major retailer’s AI-powered surveillance program apparently targeted poor people and minorities. Rite-Aid, a U.S.-based pharmacy chain, installed face recognition systems in many of its New York and Los Angeles stores.
Series of images with graphs and data related to optimization algorithms
Bias

When Optimization Is Suboptimal

Bias arises in machine learning when we fit an overly simple function to a more complex problem. A theoretical study shows that gradient descent itself may introduce such bias and render algorithms unable to fit data properly.
Tiny Images photos and datasets
Bias

Tiny Images, Outsized Biases: MIT Researchers Delete Tiny Images Dataset

MIT withdrew a popular computer vision dataset after researchers found that it was rife with social bias. Researchers found racist, misogynistic, and demeaning labels among the nearly 80 million pictures in Tiny Images, a collection of 32-by-32 pixel color photos.
