Scale of justice symbol over a map of India

Fairness East and West

Western governments and institutions struggling to formulate principles of algorithmic fairness tend to focus on issues like race and gender. A new study of AI in India found a different set of key issues.
Series of images related to water lines replacement and poisoned water

AI Versus Lead Poisoning

An algorithm is helping cities locate pipes that could release highly toxic lead into drinking water. BlueConduit, a startup that focuses on water safety, is working with dozens of North American municipal governments to locate lead water lines so they can be replaced.
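
BlueConduit's production model isn't public, but the general approach behind such systems can be sketched in a few lines: train a classifier on parcel records whose pipe material is already known, then rank uninspected homes by predicted risk so crews dig where lead is most likely. In the sketch below, the file name and feature columns are hypothetical stand-ins.

```python
# Minimal sketch: predict which homes likely have lead service lines.
# "parcels.csv" and the feature columns are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

parcels = pd.read_csv("parcels.csv")  # hypothetical city parcel records
features = ["year_built", "assessed_value", "lot_size", "neighborhood_id"]

known = parcels[parcels["pipe_material"].notna()]    # homes already inspected
unknown = parcels[parcels["pipe_material"].isna()]   # homes not yet dug up

X = known[features]
y = (known["pipe_material"] == "lead").astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Rank uninspected homes by predicted probability of a lead service line,
# so replacement crews can visit the highest-risk addresses first.
unknown = unknown.assign(lead_risk=model.predict_proba(unknown[features])[:, 1])
print(unknown.sort_values("lead_risk", ascending=False).head())
```
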
Gun detecting system working and alerting the police

Draw a Gun, Trigger an Algorithm

Computer vision is alerting authorities the moment someone draws a gun. Several companies offer deep learning systems that enable surveillance cameras to spot firearms and quickly notify security guards or police.
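
The vendors' systems are proprietary, but the core pattern is a standard object-detection loop: grab camera frames, run a detector trained to recognize firearms, and raise an alert on any high-confidence hit. In the sketch below, the fine-tuned checkpoint gun_detector.pth is a hypothetical placeholder, and a print statement stands in for notifying security.

```python
# Illustrative sketch of a firearm-detection loop, not any vendor's product.
import cv2    # camera capture
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

THRESHOLD = 0.85  # high confidence bar to limit false alarms

# Two classes: background and firearm. The checkpoint is hypothetical.
model = fasterrcnn_resnet50_fpn(num_classes=2)
model.load_state_dict(torch.load("gun_detector.pth"))
model.eval()

def to_tensor(frame):
    """Convert an OpenCV BGR frame to a normalized CHW float tensor."""
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    return torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0

camera = cv2.VideoCapture(0)
while True:
    ok, frame = camera.read()
    if not ok:
        break
    with torch.no_grad():
        detections = model([to_tensor(frame)])[0]  # boxes, labels, scores
    if (detections["scores"] > THRESHOLD).any():
        print("ALERT: possible firearm detected")  # stand-in for paging security
```
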
Series of images describing how an AI-powered collar for elephants operates

Algorithms for Elephants

An AI-powered collar may help protect wild elephants from poachers, hunters, and other hostile humans. Ten ElephantEdge wireless tracking collars will be fitted onto African elephants next year, TechCrunch reported.
Ilya Sutskever

Ilya Sutskever: Fusion of Language and Vision

The past year was the first in which general-purpose models became economically useful. GPT-3, in particular, demonstrated that large language models have surprising linguistic competence and the ability to perform a wide variety of useful tasks.
Ayanna Howard

Ayanna Howard: Training in Ethical AI

As AI engineers, we have tools to design and build any technology-based solution we can dream of. But many AI developers don’t consider it their responsibility to address potential negative consequences as part of this work.
Group of people having a snowball fight and covering with a giant Facebook like button

Algorithms Against Disinformation

The worldwide pandemic and a contentious U.S. election whipped up a storm of automated disinformation, and some big AI companies reaped the whirlwind.
Screen captures of AI Incident Database, a searchable collection of reports on the technology’s missteps

Cataloging AI Gone Wrong

A new database tracks failures of automated systems including machine learning models. The Partnership on AI, a nonprofit consortium of businesses and institutions, launched the AI Incident Database, a searchable collection of reports on the technology’s missteps.
Security cameras with face recognition inside a building in Argentina

That Kid Looks Like a Criminal

In Argentina, a municipal face recognition system could misidentify children as suspected lawbreakers. Authorities in Buenos Aires are scanning subway riders’ faces to find offenders in a database of suspects, but the system mixes criminal records with personal information about minors.
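
The reporting highlights a failure mode built into how face matching works: each rider's face is embedded as a vector and compared against every enrolled suspect, and any similarity above a threshold triggers an alert, so whoever is in the database, including children, can be flagged. A minimal sketch, with random vectors standing in for a real face encoder's embeddings:

```python
# Sketch of the matching step in a face recognition watchlist system.
# Random vectors stand in for a real face encoder's embeddings.
import numpy as np

rng = np.random.default_rng(0)
suspect_db = rng.normal(size=(1000, 128))  # 1,000 enrolled suspect embeddings
suspect_db /= np.linalg.norm(suspect_db, axis=1, keepdims=True)

def best_match(probe, db, threshold=0.6):
    """Return (index, score) of the closest enrollee, or None below threshold."""
    probe = probe / np.linalg.norm(probe)
    scores = db @ probe                    # cosine similarity to every enrollee
    idx = int(scores.argmax())
    return (idx, float(scores[idx])) if scores[idx] >= threshold else None

probe = rng.normal(size=128)               # one subway rider's face embedding
print(best_match(probe, suspect_db))       # any hit would trigger an alert
```

Lowering the threshold to catch more offenders raises the false-match rate, which is part of what makes mixing minors' records into the watchlist so risky.
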
Doctor holding candy and kid dressed as a ghost on a weighing scale

Unfair Outcomes Destroy Trust

Will AI that discriminates based on race, gender, or economic status undermine the public’s confidence in the technology? Seduced by the promise of cost savings and data-driven decision making, organizations will deploy biased systems that end up doing real-world damage.
Forbidden sign over different potentially dangerous falsehood symbols

YouTube vs. Conspiracy Theorists

Facing a tsunami of user-generated disinformation, YouTube is scrambling to stop its recommendation algorithm from promoting videos that spread potentially dangerous falsehoods.
Face recognition system in a supermarket

Tech Giants Face Off With Police

Three of the biggest AI vendors pledged to stop providing face recognition services to police, but other companies continue to serve the law-enforcement market.
Illustration of a broken heart with a smirk in the middle

Outing Hidden Hatred: Facebook Uses NLP and Vision AI to Detect Hateful Posts

Facebook uses automated systems to block hate speech, but hateful posts can slip through when seemingly benign words and pictures combine to create a nasty message. The social network is tackling this problem by enhancing AI’s ability to recognize context.
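
Facebook's production models aren't public, but the underlying idea, often called multimodal fusion, is to embed the text and the image separately and classify them jointly, so the model can learn that a combination is hateful even when each part looks benign alone. A minimal sketch with placeholder encoder dimensions:

```python
# Minimal multimodal-fusion classifier. Dimensions are placeholders for the
# outputs of real encoders (e.g., a BERT-style text model and a CNN).
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=2048, hidden=512):
        super().__init__()
        # Project both modalities into a shared space, then classify jointly
        # so the model can learn interactions between words and pixels.
        self.text_proj = nn.Linear(text_dim, hidden)
        self.image_proj = nn.Linear(image_dim, hidden)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(2 * hidden, 2))

    def forward(self, text_emb, image_emb):
        fused = torch.cat([self.text_proj(text_emb),
                           self.image_proj(image_emb)], dim=-1)
        return self.head(fused)  # logits for [benign, hateful]

# Dummy embeddings in place of real encoder outputs.
model = FusionClassifier()
logits = model(torch.randn(1, 768), torch.randn(1, 2048))
print(logits.softmax(dim=-1))
```

Scoring text and image separately and combining the verdicts would miss exactly the cases described above; joint classification lets the model weigh the two signals together.
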
Angry emoji over dozens of Facebook like buttons

Facebook Likes Extreme Content

Facebook’s leadership has thwarted changes in its algorithms aimed at making the site less polarizing, according to the Wall Street Journal. The social network’s own researchers determined that its AI software promotes divisive content.
Road sign with the word "trust"

Toward AI We Can Count On

A consortium of top AI experts proposed concrete steps to help machine learning engineers secure the public’s trust. Dozens of researchers and technologists recommended actions to counter public skepticism toward artificial intelligence, fueled by issues like data privacy.
