Selected data from AI Index, an annual report from Stanford University
Bias

AI for Business Is Booming: Stanford's 2021 AI Index shows commercial AI on the rise.

Commercial AI research and deployments are on the rise, a new study highlights. The latest edition of the AI Index, an annual report from Stanford University, documents key trends in the field including the growing importance of private industry and the erosion of U.S. dominance in research.
Margaret Mitchell, Marian Croak and Timnit Gebru pictured
Bias

Google Overhauls Ethical AI Team: What Google is doing after Timnit Gebru's departure.

Having dismissed two key researchers, Google restructured its efforts in AI ethics. Marian Croak, an accomplished software engineer and vice president of engineering at Google, will lead a new center of expertise in responsible AI, the company announced.
Dozens of different faces shown in a series of images
Bias

Cutting Corners to Recognize Faces: Research finds flaws in face recognition datasets.

Datasets for training face recognition models have ballooned in size — while slipping in quality and respect for privacy. In a survey of 130 datasets compiled over the last four decades, researchers traced how the demand for ever-larger quantities of data led dataset builders to relax their standards.
Person in wheelchair, person in side profile, person wearing a hoodie
Bias

Human Disabilities Baffle Algorithms: Facebook blocked ads aimed at people with disabilities.

Facebook’s content moderation algorithms block many advertisements aimed at disabled people. The social media platform’s automated systems regularly reject ads for clothing designed for people with physical disabilities.
Series of images showing a variety of medical AI products
Bias

Medical AI’s Hidden Data: Why many medical AI devices are black boxes.

U.S. government approval of medical AI products is on the upswing — but information about how such systems were built is largely unavailable. The U.S. Food and Drug Administration (FDA) has approved a plethora of AI-driven medical systems.
Scale of justice symbol over a map of India
Bias

Fairness East and West: Looking at issues of AI bias and fairness in India.

Western governments and institutions struggling to formulate principles of algorithmic fairness tend to focus on issues like race and gender. A new study of AI in India found a different set of key issues.
Bias

Pain Points in Black and White: Medical AI system predicts knee pain in Black patients.

A model designed to assess medical patients’ pain levels matched the patients’ own reports better than doctors’ estimates did — when the patients were Black.
Ayanna Howard
Bias

Ayanna Howard: How to teach ethics to the next generation of AI builders.

As AI engineers, we have tools to design and build any technology-based solution we can dream of. But many AI developers don’t consider it their responsibility to address potential negative consequences as a part of this work.
Tree farm dataset
Bias

Representing the Underrepresented: Many important AI datasets contain bias.

Some of deep learning’s bedrock datasets came under scrutiny as researchers combed them for built-in biases. Researchers found that popular datasets impart biases against socially marginalized groups to trained models due to the ways the datasets were compiled, labeled, and used.
Screen captures of AI Incident Database, a searchable collection of reports on the technology’s missteps
Bias

Cataloging AI Gone Wrong: The AI Incident Database tracks machine learning mistakes.

A new database tracks failures of automated systems including machine learning models. The Partnership on AI, a nonprofit consortium of businesses and institutions, launched the AI Incident Database, a searchable collection of reports on the technology’s missteps.
Collage of self portraits
Bias

Unsupervised Prejudice: Image classification models learned bias from ImageNet.

Social biases are well documented in decisions made by supervised models trained on ImageNet’s labels. But they also crept into the output of unsupervised models pretrained on the same dataset.
Illustration of a hand putting candy on a trick or treat bag
Bias

The Black Box Has Dark Corners: Will we ever understand what happens inside black box AI?

Will we ever understand what goes on inside the mind of a neural network? The fear: When AI systems go wrong, no one will be able to explain the reasoning behind their decisions.
Illustration of a neighborhood haunted by an evil pumpkin and a black cat
Bias

Giant Models Bankrupt Research: Will training AI become too expensive for most companies?

What if AI requires so much computation that it becomes unaffordable? The fear: Training ever more capable models will become too pricey for all but the richest corporations and government agencies.
Doctor holding candy and kid dressed as a ghost on a weighing scale
Bias

Unfair Outcomes Destroy Trust: What could cause widespread backlash against AI?

Will AI that discriminates based on race, gender, or economic status undermine the public’s confidence in the technology? Seduced by the promise of cost savings and data-driven decision making, organizations will deploy biased systems that end up doing real-world damage.
Contrast between real and synthetic datasets
Bias

Battling Bias in Synthetic Data: How synthetic data startups are working to avoid bias.

Synthetic datasets can inherit flaws in the real-world data they’re based on. Startups are working on solutions. Generating synthetic datasets for training machine learning systems is a booming business.