AI Versus Lead Poisoning: How BlueConduit uses AI to identify lead water lines.

An algorithm is helping cities locate pipes that could release highly toxic lead into drinking water. BlueConduit, a startup that focuses on water safety, is working with dozens of North American municipal governments to locate lead water lines so they can be replaced.
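
BlueConduit's approach, as publicly described, is supervised learning over parcel records: train a classifier on homes whose service-line material has already been verified, then rank the rest by predicted lead risk so crews dig where lead is most likely. Here is a minimal sketch of that idea in scikit-learn; the parcels.csv file and its columns are hypothetical stand-ins, and this illustrates the general technique rather than BlueConduit's actual model.

# Sketch of lead service line prediction via supervised learning.
# The data file and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("parcels.csv")  # one row per home
X = pd.get_dummies(df[["year_built", "neighborhood", "home_value"]])
y = df["has_lead"]  # known only for homes already inspected

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Rank homes by predicted risk so a fixed excavation budget
# goes to the most likely lead lines first.
df["lead_risk"] = model.predict_proba(X)[:, 1]
print(df.sort_values("lead_risk", ascending=False).head())

Ranking by predicted probability, rather than simply thresholding, matters here: a city with a fixed replacement budget can work down the list from the riskiest homes.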

Draw a Gun, Trigger an Algorithm: These AI-enabled security cameras automatically ID guns.

Computer vision is alerting authorities the moment someone draws a gun. Several companies offer deep learning systems that enable surveillance cameras to spot firearms and quickly notify security guards or police.
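
Under the hood, systems like these run an object detector over every incoming frame and raise an alert when a firearm is detected above a confidence threshold. The sketch below shows that loop using OpenCV and the Ultralytics YOLO API; gun_detector.pt stands for a hypothetical custom-trained model (standard COCO-pretrained detectors have no firearm class), and the alert is simplified to a print statement.

# Sketch of a firearm-detection loop over a security camera feed.
# "gun_detector.pt" is a hypothetical custom-trained model.
import cv2
from ultralytics import YOLO

model = YOLO("gun_detector.pt")
CONFIDENCE = 0.8  # high threshold to limit false alarms

cap = cv2.VideoCapture(0)  # camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for result in model(frame):
        for box in result.boxes:
            if float(box.conf) >= CONFIDENCE:
                # A real deployment would page guards or police here.
                print("ALERT: possible firearm at", box.xyxy.tolist())
cap.release()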

Algorithms For Elephants: How AI tracking collars help protect endangered wildlife.

An AI-powered collar may help protect wild elephants from poachers, hunters, and other hostile humans. Ten ElephantEdge wireless tracking collars will be fitted onto African elephants next year, TechCrunch reported.

Ilya Sutskever: OpenAI’s co-founder on building multimodal AI models

The past year was the first in which general-purpose models became economically useful. GPT-3, in particular, demonstrated that large language models have surprising linguistic competence and the ability to perform a wide variety of useful tasks.

Ayanna Howard: How to teach ethics to the next generation of AI builders

As AI engineers, we have tools to design and build any technology-based solution we can dream of. But many AI developers don't consider it their responsibility to address potential negative consequences as part of this work.

Algorithms Against Disinformation: How Facebook, Twitter, and more fought disinfo in 2020.

The worldwide pandemic and a contentious U.S. election whipped up a storm of automated disinformation, and some big AI companies reaped the whirlwind.

Cataloging AI Gone Wrong: The AI Incident Database tracks machine learning mistakes

A new database tracks failures of automated systems, including machine learning models. The Partnership on AI, a nonprofit consortium of businesses and institutions, launched the AI Incident Database, a searchable collection of reports on the technology’s missteps.

That Kid Looks Like a Criminal: Conarc face recognition contained children's personal info.

In Argentina, a municipal face recognition system could misidentify children as suspected lawbreakers. Authorities in Buenos Aires are scanning subway riders’ faces to find offenders in a database of suspects, but the system mixes criminal records with personal information about minors.

Unfair Outcomes Destroy Trust: What could cause widespread backlash against AI?

Will AI that discriminates based on race, gender, or economic status undermine the public’s confidence in the technology? Seduced by the promise of cost savings and data-driven decision making, organizations will deploy biased systems that end up doing real-world damage.

The Black Box Has Dark Corners: Will we ever understand what happens inside black box AI?

Will we ever understand what goes on inside the mind of a neural network? The fear: When AI systems go wrong, no one will be able to explain the reasoning behind their decisions.

YouTube vs. Conspiracy Theorists: How YouTube uses AI to spot conspiracies

Facing a tsunami of user-generated disinformation, YouTube is scrambling to stop its recommendation algorithm from promoting videos that spread potentially dangerous falsehoods.
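
Mechanically, a common fix is a gate in the ranking stage: a separate classifier flags borderline videos, which then keep their place in search results but have their recommendation scores sharply downweighted. The toy sketch below illustrates that gating step; the looks_borderline stand-in and the penalty factor are assumptions, not YouTube's actual pipeline.

# Toy sketch: demote flagged videos in a recommendation ranking.
# `looks_borderline` stands in for a trained misinformation classifier.
BORDERLINE_PENALTY = 0.1  # assumed downweighting factor

def looks_borderline(video: dict) -> bool:
    # Placeholder for a model over title, transcript, and metadata.
    return "miracle cure" in video["title"].lower()

def rank(videos: list[dict]) -> list[dict]:
    for v in videos:
        penalty = BORDERLINE_PENALTY if looks_borderline(v) else 1.0
        v["score"] = v["engagement"] * penalty
    return sorted(videos, key=lambda v: v["score"], reverse=True)

print(rank([
    {"title": "Miracle Cure Doctors Hate", "engagement": 0.9},
    {"title": "How Vaccines Are Tested", "engagement": 0.6},
]))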

Tech Giants Face Off With Police: Amazon and Microsoft halt face recognition for police.

Three of the biggest AI vendors pledged to stop providing face recognition services to police — but other companies continue to serve the law-enforcement market.

Outing Hidden Hatred: How Facebook built a hate speech detector

Facebook uses automated systems to block hate speech, but hateful posts can slip through when seemingly benign words and pictures combine to create a nasty message. The social network is tackling this problem by enhancing AI’s ability to recognize context.
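
Catching those combinations calls for a multimodal classifier: embed the text and the image, fuse the embeddings, and classify the pair rather than either part alone. Below is a minimal fusion sketch built on CLIP embeddings from Hugging Face Transformers; it illustrates the general technique, not Facebook's system, and the linear head shown is untrained.

# Sketch of multimodal fusion for context-aware content moderation.
# Illustrates the general technique, not Facebook's actual system.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def fused_features(text: str, image: Image.Image) -> torch.Tensor:
    inputs = processor(text=[text], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        text_emb = clip.get_text_features(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"])
        image_emb = clip.get_image_features(pixel_values=inputs["pixel_values"])
    # Concatenate so a classifier reacts to the text-image combination,
    # not to either modality in isolation.
    return torch.cat([text_emb, image_emb], dim=-1)

# A head like this would be trained on labeled benign/hateful pairs.
classifier = torch.nn.Linear(512 + 512, 2)
logits = classifier(fused_features("example caption", Image.new("RGB", (224, 224))))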

Facebook Likes Extreme Content: Facebook execs rejected changes to reduce polarization.

Facebook’s leadership has thwarted changes in its algorithms aimed at making the site less polarizing, according to the Wall Street Journal. The social network’s own researchers determined that its AI software promotes divisive content.

Toward AI We Can Count On: Public trust recommendations from AI researchers

A consortium of top AI experts proposed concrete steps to help machine learning engineers secure the public’s trust. Dozens of researchers and technologists recommended actions to counter public skepticism toward artificial intelligence, fueled by issues like data privacy.
