Bias

ImageNet Gets a Makeover

Computer scientists are struggling to purge bias from one of AI’s most important datasets. ImageNet’s 14 million photos are a go-to collection for training computer-vision systems, yet their descriptive labels have been rife with derogatory and stereotyped attitudes toward race, gender, and sex.

Zhi-Hua Zhou: Fresh Methods, Clear Guidelines

I have three hopes for 2020.

Oren Etzioni: Tools For Equality

In 2020, I hope the AI community will grapple with issues of fairness in ways that tangibly and directly benefit disadvantaged populations.

Google's AI Explains Itself

Google's AI platform offers a view into the mind of its machines. Explainable AI (xAI) tools show which features exerted the most influence on a model’s decision, so users can evaluate model performance and potentially mitigate biased results.
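Feature-attribution tools of this kind estimate how much each input feature sways a model’s output. As a rough illustration of the general idea (not Google’s xAI implementation), here is a minimal permutation-importance sketch; the toy model, its weights, and the data are invented for the example:

```python
import random

def predict(features):
    # Toy model: a hand-set linear scorer. Feature 0 dominates by design.
    weights = [0.9, 0.05, 0.05]
    return sum(w * x for w, x in zip(weights, features))

def permutation_importance(rows, labels, n_repeats=10, seed=0):
    """Estimate each feature's influence by shuffling its column
    and measuring how much the model's error grows."""
    rng = random.Random(seed)

    def mse(data):
        return sum((predict(r) - y) ** 2 for r, y in zip(data, labels)) / len(data)

    baseline = mse(rows)
    importances = []
    for j in range(len(rows[0])):
        deltas = []
        for _ in range(n_repeats):
            column = [r[j] for r in rows]
            rng.shuffle(column)
            shuffled = [r[:j] + [v] + r[j + 1:] for r, v in zip(rows, column)]
            deltas.append(mse(shuffled) - baseline)
        importances.append(sum(deltas) / n_repeats)
    return importances
```

Shuffling the dominant feature’s column degrades accuracy far more than shuffling a near-irrelevant one; that gap is the influence signal such tools surface to users.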

Bias Fighter

Sophisticated models trained on biased data can learn discriminatory patterns, leading to skewed decisions. A new approach aims to prevent neural networks from basing their decisions on common biases.

Bias Goes Undercover

As black-box algorithms like neural networks find their way into high-stakes fields such as transportation, healthcare, and finance, researchers have developed techniques to help explain models’ decisions. New findings show that some of these methods can be fooled.
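The fooling works because explainers such as LIME and SHAP query the model on synthetic perturbations that fall outside the real data distribution. A toy sketch of the trick (the detector here is a hypothetical hand-written rule; the actual attack trains a classifier to spot perturbed inputs):

```python
def biased_model(x):
    # Discriminatory rule the deployer wants to hide from auditors.
    return 1.0 if x["group"] == "A" else 0.0

def innocuous_model(x):
    # Treats everyone identically.
    return 0.5

def looks_synthetic(x):
    # Hypothetical detector: real records have whole-number ages,
    # while explanation tools often perturb features into unrealistic values.
    return x["age"] != int(x["age"])

def scaffolded_model(x):
    # Behave fairly on the explainer's probes, unfairly on real inputs.
    return innocuous_model(x) if looks_synthetic(x) else biased_model(x)
```

Because the explainer sees only the innocuous branch, its report shows no dependence on the sensitive feature, while real-world inputs still receive the biased treatment.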

Researchers Blocked at the Border

Foreign researchers hoping to attend one of AI’s largest conferences were denied entry into Canada, where the event was to be held. Most of those blocked came from developing nations.

Biased Data Trains Oppressive AI

Will biases in training data unwittingly turn AI into a tool for persecution? Bias encoded in software used by nominally objective institutions, such as the justice and education systems, could become impossible to root out.

AI Knows Who Labeled the Data

The latest language models are great at answering questions about a given text passage. However, these models are also powerful enough to recognize an individual writer’s style, which can clue them in to the right answers. New research measures such annotator bias in several data sets.

Smart Students, Dumb Algorithms: NLP Systems Struggle at Grading Essays

A growing number of companies that sell standardized tests are using natural language processing to assess writing skills. Critics contend that these language models don’t make the grade.
