Neighborhood being monitored by AI-powered cameras
Bias

Partners in Surveillance

Police are increasingly able to track motor vehicles throughout the U.S. using a network of AI-powered cameras — many owned by civilians. Flock, which sells automatic license plate readers, encourages law enforcement agencies to use its network to monitor cars and trucks outside their jurisdictions.
2 min read
Hiring software that evaluates candidates through simple interactive games
Bias

Who Audits the Auditors?

Auditing is a critical technique in the effort to build fair and equitable AI systems. But current auditing methods may not be up to the task. There’s no consensus on how AI should be audited, whether audits should be mandatory, and what to do with their results.
1 min read
Facebook like and dislike buttons
Bias

Social Engagement vs. Social Good

Facebook’s management blocked the architect of its recommendation algorithms from mitigating their negative social impact. The social network focused on reining in algorithmic bias against particular groups of users at the expense of efforts to reduce disinformation.
2 min read
Selected data from AI Index, an annual report from Stanford University
Bias

AI for Business Is Booming

Commercial AI research and deployments are on the rise, according to a new study. The latest edition of the AI Index, an annual report from Stanford University, documents key trends in the field, including the growing importance of private industry and the erosion of U.S. dominance in research.
2 min read
Margaret Mitchell, Marian Croak and Timnit Gebru pictured
Bias

Google Overhauls Ethical AI Team

Having dismissed two key researchers, Google restructured its efforts in AI ethics. Marian Croak, an accomplished software engineer and vice president of engineering at Google, will lead a new center of expertise in responsible AI, the company announced.
2 min read
Person in wheelchair, person in side profile, person wearing a hoodie
Bias

Human Disabilities Baffle Algorithms

Facebook’s content moderation algorithms block many advertisements aimed at disabled people. The social media platform’s automated systems regularly reject ads for clothing designed for people with physical disabilities.
2 min read
Dozens of different faces shown in a series of images
Bias

Cutting Corners to Recognize Faces

Datasets for training face recognition models have ballooned in size — while slipping in quality and respect for privacy. In a survey of 130 datasets compiled over the last four decades, researchers traced how the demand for ever larger quantities of data led dataset builders to relax their standards.
2 min read
Series of images showing a variety of medical AI products
Bias

Medical AI’s Hidden Data

U.S. government approval of medical AI products is on the upswing — but information about how such systems were built is largely unavailable. The U.S. Food and Drug Administration (FDA) has approved a plethora of AI-driven medical systems.
2 min read
Scale of justice symbol over a map of India
Bias

Fairness East and West

Western governments and institutions struggling to formulate principles of algorithmic fairness tend to focus on issues like race and gender. A new study of AI in India found a different set of key issues.
2 min read
Bias

Pain Points in Black and White

A model designed to assess medical patients’ pain levels matched the patients’ own reports better than doctors’ estimates did — when the patients were Black.
1 min read
Ayanna Howard
Bias

Ayanna Howard: Training in Ethical AI

As AI engineers, we have tools to design and build any technology-based solution we can dream of. But many AI developers don’t consider it their responsibility to address potential negative consequences as a part of this work.
2 min read
Tree farm dataset
Bias

Representing the Underrepresented

Some of deep learning’s bedrock datasets came under scrutiny as researchers combed them for built-in biases. They found that, because of the ways popular datasets were compiled, labeled, and used, those datasets impart biases against socially marginalized groups to models trained on them.
2 min read
Screen captures of AI Incident Database, a searchable collection of reports on the technology’s missteps
Bias

Cataloging AI Gone Wrong

A new database tracks failures of automated systems including machine learning models. The Partnership on AI, a nonprofit consortium of businesses and institutions, launched the AI Incident Database, a searchable collection of reports on the technology’s missteps.
1 min read
Collage of self portraits
Bias

Unsupervised Prejudice

Social biases are well documented in decisions made by supervised models trained on ImageNet’s labels. But they also crept into the output of unsupervised models pretrained on the same dataset.
2 min read
Doctor holding candy and kid dressed as a ghost on a weighing scale
Bias

Unfair Outcomes Destroy Trust

Will AI that discriminates based on race, gender, or economic status undermine the public’s confidence in the technology? Seduced by the promise of cost savings and data-driven decision making, organizations will deploy biased systems that end up doing real-world damage.
1 min read
