Bias

Making GANs More Inclusive: A technique to help GANs represent their datasets fairly

A typical GAN’s output doesn’t necessarily reflect the data distribution of its training set. Instead, GANs are prone to modeling the majority of the training distribution, sometimes ignoring rare attributes — say, faces that represent minority populations.

Credit Where It’s Due: How Visa powers real-time credit card approval with AI

A neural network is helping credit card users continue to shop even when the lender’s credit-approval network goes down. Visa developed a deep learning system that analyzes individual cardholders’ behavior in real time to predict whether credit card transactions should be approved or denied.

AI Grading Doesn’t Make the Grade: The UK canceled its plan for AI-generated grading.

The UK government abandoned a plan to use machine learning to assess students for higher education. The Department for Education discarded grades generated by an algorithm designed to predict performance on the annual Advanced Level qualifications, whose exams had been canceled due to the pandemic.

Race Recognition: Face recognition companies identify people by race.

Marketers are using computer vision to parse customers by skin color and other perceived racial characteristics. A number of companies are pitching race classification as a way for businesses to understand the buying habits of different groups.

When Optimization is Suboptimal: How gradient descent can sometimes lead to model bias

Bias arises in machine learning when we fit an overly simple function to a more complex problem. A theoretical study shows that gradient descent itself may introduce such bias and render algorithms unable to fit data properly.
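The underfitting described in the first sentence can be sketched in a few lines. This is only an illustration of an overly simple model class failing on more complex data, not the paper's theoretical construction: gradient descent fits a line through the origin to data generated by a quadratic, and the error never vanishes.

```python
# Illustration only: an overly simple model (a line y = w*x) trained by
# gradient descent on data from a more complex function (y = x^2).
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [x * x for x in xs]          # quadratic ground truth

w, lr = 0.5, 0.01                 # slope parameter, learning rate
for _ in range(2000):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

mse = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(w, mse)  # w converges toward 0, yet the error stays large
```

However long training runs, no value of w can fit the curve, so the residual error reflects the model's bias rather than a lack of optimization.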

Retail Surveillance Revealed: How Rite Aid used face recognition for security

A major retailer’s AI-powered surveillance program apparently targeted poor people and minorities. Rite Aid, a U.S.-based pharmacy chain, installed face recognition systems in many of its New York and Los Angeles stores.

Tiny Images, Outsized Biases: Why MIT withdrew the Tiny Images dataset

MIT withdrew a popular computer vision dataset after researchers found it rife with social bias. Racist, misogynistic, and demeaning labels appeared among the nearly 80 million pictures in Tiny Images, a collection of 32-by-32 pixel color photos.

Image Resolution in Black and White: Behind the Pulse controversy about bias in machine learning

A new model designed to sharpen images tends to render some dark-skinned faces as white, igniting fresh furor over bias in machine learning. Photo Upsampling via Latent Space Exploration (Pulse) generates high-resolution versions of low-resolution images.

Tech Giants Face Off With Police: Amazon, IBM, and Microsoft halt face recognition for police.

Three of the biggest AI vendors pledged to stop providing face recognition services to police — but other companies continue to serve the law-enforcement market.

Baidu Leaves Partnership on AI: Chinese tech giant exits a consortium on AI bias and privacy.

Baidu backed out of a U.S.-led effort to promote ethics in AI, leaving the project without a Chinese presence. The Beijing-based search giant withdrew from the Partnership on AI, a consortium that promotes cooperation on issues like digital privacy and algorithmic bias.

Gender Bender: Double-Hard Debias helps lessen gender bias in NLP models.

AI learns human biases: In word vector space, “man is to computer programmer as woman is to homemaker,” as one paper put it. New research helps word embeddings unlearn such prejudices.
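The quoted analogy comes from simple vector arithmetic on word embeddings: the vector for “programmer” minus “man” plus “woman” lands nearest a stereotyped word. A minimal sketch using hypothetical three-dimensional toy vectors (real embeddings are learned from text and have hundreds of dimensions) shows the arithmetic:

```python
import math

# Hypothetical toy vectors, hand-skewed to mimic bias absorbed from text.
vectors = {
    "man":        [1.0, 0.2, 0.1],
    "woman":      [0.2, 1.0, 0.1],
    "programmer": [1.0, 0.3, 0.9],  # skewed toward "man"
    "homemaker":  [0.2, 1.1, 0.9],  # skewed toward "woman"
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def analogy(a, b, c):
    """Return the word whose vector is closest to vec(b) - vec(a) + vec(c)."""
    target = [vb - va + vc
              for va, vb, vc in zip(vectors[a], vectors[b], vectors[c])]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(vectors[w], target))

print(analogy("man", "programmer", "woman"))  # prints "homemaker"
```

Debiasing methods such as Double-Hard Debias post-process the learned vectors so that analogies like this no longer break along the gender direction.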

Algorithms Choose the News: MSN news service replaces some human editors with AI.

Machines took another step toward doing the work of journalists. Microsoft laid off dozens of human editors who select articles for the MSN news service and app. Going forward, AI will do the job.

Facebook Likes Extreme Content: Facebook execs rejected changes to reduce polarization.

Facebook’s leadership has thwarted changes in its algorithms aimed at making the site less polarizing, according to the Wall Street Journal. The social network’s own researchers determined that its AI software promotes divisive content.

Toward AI We Can Count On: Public trust recommendations from AI researchers

A consortium of top AI experts proposed concrete steps to help machine learning engineers secure the public’s trust. Dozens of researchers and technologists recommended actions to counter public skepticism toward artificial intelligence, fueled by issues like data privacy.

AI’s Gender Imbalance: The data behind deep learning's gender gap

Women continue to be severely underrepresented in AI. A meta-analysis of research conducted by Synced Review for Women’s History Month found that female participation in various aspects of AI typically hovers between 10 and 20 percent.
