Walking through a narrow hallway in a library
Harm

Bias By the Book: Researchers find bias in the influential NLP dataset BookCorpus.

Researchers found serious flaws in an influential language dataset, highlighting the need for better documentation of data used in machine learning.
Graph showing types of phishing attacks
Harm

24/7 Phish Fry: How Nestlé built an AI-powered phishing detector with Azure.

Foiling attackers who try to lure email users into clicking on a malicious link is a cat-and-mouse game, as phishing tactics evolve to evade detection. But machine learning models designed to recognize phishing attempts can evolve, too, through automatic retraining and checks to maintain accuracy.
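
Nestlé's pipeline isn't public, but the retrain-and-verify loop described above can be sketched in a few lines. The sketch below uses scikit-learn in place of Azure's tooling, and the 0.95 accuracy gate is an illustrative threshold, not Nestlé's:

```python
# A minimal sketch of a retrain-and-verify loop for a phishing classifier,
# assuming a labeled corpus of emails. Names and thresholds are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def retrain(emails, labels, current_model=None, min_accuracy=0.95):
    """Train a candidate phishing classifier and promote it only if it
    passes an accuracy check on held-out data."""
    X_train, X_val, y_train, y_val = train_test_split(
        emails, labels, test_size=0.2, stratify=labels, random_state=0
    )
    candidate = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    candidate.fit(X_train, y_train)
    val_accuracy = accuracy_score(y_val, candidate.predict(X_val))
    # Keep the previous model if the candidate fails the quality gate.
    if val_accuracy >= min_accuracy:
        return candidate, val_accuracy
    return current_model, val_accuracy
```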
Forbidden sign over security cameras, handprint and face recognition system
Harm

The Coming Crackdown: The 2021 draft version of the European Union's AI Act.

The European Union proposed sweeping restrictions on AI technologies and applications. The executive arm of the 27-nation EU published draft rules that aim to regulate, and in some cases ban, a range of AI systems.
Voice recognition tool "Bleep" working
Harm

Haters Gonna [Mute]: Gamers can mute offensive language with AI.

A new tool aims to let video gamers control how much vitriol they receive from fellow players. Intel announced a voice recognition tool called Bleep that the company claims can moderate voice chat automatically, allowing users to silence offensive language.
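
Intel hasn't described how Bleep works internally. One plausible shape for such a filter: transcribe each chunk of voice chat, score the text per category, and mute whenever a score exceeds the user's tolerance for that category. In the sketch below, transcribe and score_categories are hypothetical stand-ins for real speech-to-text and toxicity models:

```python
# Hypothetical sketch of a per-category voice-chat mute, not Bleep's design.
# Users set a tolerance per category; 0.0 means mute everything in it.
USER_SETTINGS = {"slur": 0.0, "profanity": 0.5, "threat": 0.0}

def should_mute(audio_chunk, transcribe, score_categories):
    """Return True if this chunk of voice chat should be silenced."""
    text = transcribe(audio_chunk)        # speech-to-text (stand-in)
    scores = score_categories(text)       # e.g. {"profanity": 0.9, ...}
    # Mute if any category score exceeds the user's tolerance for it.
    return any(
        scores.get(category, 0.0) > tolerance
        for category, tolerance in USER_SETTINGS.items()
    )
```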
Person driving a Tesla car
Harm

Tesla Safety Under Investigation: Feds investigate Tesla's Full Self-Driving and Autopilot.

U.S. authorities are investigating Tesla’s self-driving technology. Federal regulators launched a probe of nearly two dozen accidents, some of them fatal, that involved Tesla vehicles equipped for self-driving.
Graph showing key AI characteristics
Harm

On Her Majesty’s Secret Algorithm: How the UK's GCHQ will use AI to augment intelligence ops.

The UK’s electronic surveillance agency published its plan to use AI. Government Communications Headquarters (GCHQ) outlined its intention to use machine learning to combat security threats, human trafficking, and disinformation — and to do so ethically — in a new report.
Commercial about The Trevor Lifeline
Harm

Chatbots Against Depression: The Trevor Project used GPT-2 to train crisis counselors.

A language model is helping crisis-intervention volunteers practice their suicide-prevention skills. The Trevor Project, a nonprofit organization that operates a 24-hour hotline for LGBTQ youth, uses a “crisis contact simulator” to train its staff in how to talk with troubled teenagers.
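
The Trevor Project's fine-tuned model and training data are private, but the basic pattern, conditioning GPT-2 on the transcript so far so it responds in character, can be sketched with Hugging Face's transformers library. The prompt format below is invented for illustration:

```python
# Sketch of a GPT-2 dialogue simulator; the real system is fine-tuned on
# private counseling transcripts, which base gpt2 here stands in for.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def simulator_reply(dialogue_history):
    """Generate the simulated texter's next turn from the transcript so far."""
    prompt = dialogue_history + "\nTexter:"
    output = generator(
        prompt,
        max_new_tokens=40,
        do_sample=True,
        top_p=0.9,
        pad_token_id=50256,  # GPT-2's eos token; silences a padding warning
    )[0]["generated_text"]
    # Keep only the newly generated turn, cut at the first line break.
    return output[len(prompt):].split("\n")[0].strip()

history = "Counselor: Hi, I'm here to listen. How are you feeling tonight?"
print(simulator_reply(history))
```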
Facebook like and dislike buttons
Harm

Social Engagement vs. Social Good: The builder of Facebook's algorithm talks bias.

Facebook’s management blocked the architect of its recommendation algorithms from mitigating their negative social impact. The social network focused on reining in algorithmic bias against particular groups of users at the expense of efforts to reduce disinformation.
Margaret Mitchell, Marian Croak and Timnit Gebru pictured
Harm

Google Overhauls Ethical AI Team: What Google is doing after Timnit Gebru's departure.

Having dismissed two key researchers, Google restructured its efforts in AI ethics. Marian Croak, an accomplished software engineer and vice president of engineering at Google, will lead a new center of expertise in responsible AI, the company announced.
Dozens of different faces shown in a series of images
Harm

Cutting Corners to Recognize Faces: Research finds flaws in face recognition datasets.

Datasets for training face recognition models have ballooned in size — while slipping in quality and respect for privacy. In a survey of 130 datasets compiled over the last four decades, researchers traced how the need for increasing quantities of data led researchers to relax their standards.
Autonomous aircraft taking off and landing
Harm

Autonomous Weapons Gain Support: A panel led by Eric Schmidt encouraged more military AI.

A panel of AI experts appointed by the U.S. government came out against a ban on autonomous weapons. A draft report from the National Security Commission on Artificial Intelligence recommends against a proposed international prohibition of AI-enabled autonomous weapon systems.
Netradyne Driveri system used to monitor Amazon's delivery drivers
Harm

Eyes On Drivers: Amazon watches delivery drivers with AI-powered cameras.

Amazon is monitoring its delivery drivers with in-vehicle cameras that alert supervisors to dangerous behavior. The online retail giant rolled out a ceiling-mounted surveillance system that flags drivers who, say, read texts, fail to use seatbelts, exceed the speed limit, or ignore a stop sign.
Scale of justice symbol over a map of India
Harm

Fairness East and West: Looking at issues of AI bias and fairness in India.

Western governments and institutions struggling to formulate principles of algorithmic fairness tend to focus on issues like race and gender. A new study of AI in India found a different set of key issues.
Series of images related to water lines replacement and poisoned water
Harm

AI Versus Lead Poisoning: How BlueConduit uses AI to identify lead water lines.

An algorithm is helping cities locate pipes that could release highly toxic lead into drinking water. BlueConduit, a startup that focuses on water safety, is working with dozens of North American municipal governments to locate lead water lines so they can be replaced.
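
The article doesn't detail BlueConduit's features or model, but the core task is tabular classification: train on parcels whose service-line material was verified by excavation, then rank the rest by predicted lead probability. A toy sketch with invented features and data:

```python
# Toy sketch of lead service-line prediction; the feature names and data
# are invented, not BlueConduit's actual schema or model.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

parcels = pd.DataFrame({
    "year_built":   [1928, 1941, 1986, 2003, 1935, 1978],
    "assessed_val": [42e3, 51e3, 98e3, 160e3, 39e3, 87e3],
    "has_lead":     [1, 1, 0, 0, 1, 0],  # verified by excavation
})

X = parcels[["year_built", "assessed_val"]]
y = parcels["has_lead"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
# Rank parcels by estimated lead probability so crews dig where
# replacement is most likely needed first.
parcels["lead_prob"] = model.predict_proba(X)[:, 1]
print(parcels.sort_values("lead_prob", ascending=False))
```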
Gun detecting system working and alerting the police
Harm

Draw a Gun, Trigger an Algorithm: These AI-enabled security cameras automatically ID guns.

Computer vision is alerting authorities the moment someone draws a gun. Several companies offer deep learning systems that enable surveillance cameras to spot firearms and quickly notify security guards or police.
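
These vendors don't publish their models, but the standard recipe is an object detector fine-tuned on firearm images that raises an alert when a detection clears a confidence threshold. A minimal sketch with torchvision follows; the checkpoint name is hypothetical, since a stock COCO-pretrained detector has no firearm class:

```python
# Sketch of firearm detection on a camera frame with torchvision.
# "gun_detector.pth" is a hypothetical checkpoint fine-tuned on firearm
# images; num_classes=2 covers background plus a single "gun" class.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
model.load_state_dict(torch.load("gun_detector.pth"))  # hypothetical weights
model.eval()

def detect_gun(frame, threshold=0.8):
    """Return True if any detection scores above the alert threshold.
    `frame` is a CHW float tensor scaled to [0, 1]."""
    with torch.no_grad():
        detections = model([frame])[0]
    return bool((detections["scores"] > threshold).any())

frame = torch.rand(3, 480, 640)  # stand-in for a decoded camera frame
if detect_gun(frame):
    print("Alert: possible firearm detected")  # notify security here
```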
