Animation showing probability of children who may benefit from intervention
Harm

Child-Welfare Agency Drops AI: Oregon and Pennsylvania Halt Use of AI Tool for At-Risk Kids

Officials in charge of protecting children stopped using a machine learning model designed to help them make decisions in difficult cases. The U.S. state of Oregon halted its use of an algorithm intended to identify children who may benefit from intervention.
Theatre masks doing different facial expressions

Actors Act Against AI: UK Actors Union Launches Campaign Against AI

Performing artists are taking action to protect their earning power against scene-stealing avatars. Equity, a union of UK performing artists, launched a campaign to pressure the government to prohibit unauthorized use of a performer’s AI-generated likeness.
One person pouring a drink of poison in the company of another person

Logistic Regression: Follow the Curve — A Basic Introduction to Logistic Regression for Machine Learning

There was a moment when logistic regression was used to classify just one thing: If you drink a vial of poison, are you likely to be labeled “living” or “deceased”? Times have changed.
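The poison example maps naturally onto a logistic curve. Below is a minimal sketch of fitting one with plain NumPy gradient descent; the dose/outcome numbers are invented toy data, not from the article:

```python
import numpy as np

# Toy data: feature = vials of poison consumed, label = 1 if "deceased".
# Purely illustrative values chosen to be linearly separable.
X = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
y = np.array([0,   0,   0,   0,   1,   1,   1,   1])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit weight w and bias b by gradient descent on the log loss.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(5000):
    p = sigmoid(w * X + b)          # predicted probability of "deceased"
    grad_w = np.mean((p - y) * X)   # dLoss/dw for the log loss
    grad_b = np.mean(p - y)         # dLoss/db
    w -= lr * grad_w
    b -= lr * grad_b

# The fitted curve crosses 0.5 between the two classes (~1.75 vials here),
# so small doses classify as "living" (0) and large doses as "deceased" (1).
print(round(sigmoid(w * 1.0 + b)))  # -> 0
print(round(sigmoid(w * 3.0 + b)))  # -> 1
```

The same machinery generalizes to many features and, with a softmax in place of the sigmoid, to many classes.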
US map with locations of Planned Parenthood

When Data = Danger: Consumer Behavior App Removes Planned Parenthood Data

Amid rising social tension in the United States over reproductive freedom, a company that analyzes location data on abortion clinics stopped distributing its findings after a critical press report.
Factory workers getting ready to work

Recognizing Workplace Hazards: AI Device Helps Warehouses Avoid Workplace Injuries

A wearable device may help warehouse workers avoid injuries. Modjoul, maker of a system that evaluates risks to people engaged in physical labor, received an undisclosed sum from Amazon as part of a $1 billion investment in technologies that might enhance the retail giant’s operations.
Skeletal formula of the (S) enantiomer of VX

AI Designs Chemical Weapons: Drug Design AI Creates Poisons

It’s surprisingly easy to turn a well-intended machine learning model to the dark side. In an experiment, Fabio Urbina and colleagues at Collaborations Pharmaceuticals, who had built a drug-discovery model to design useful compounds and avoid toxic ones, retrained it to generate poisons.
RobocallGuard system architecture

Scam Definitely: AI tools to block spam phone calls.

Robocalls slip through smartphone spam filters, but a new generation of deep learning tools promises to tighten the net. Researchers have proposed fresh approaches to thwarting robocalls, and such innovations could soon be deployed in apps.
InstructGPT methods

A Kinder, Gentler Language Model: Inside InstructGPT, OpenAI's GPT-3 successor.

OpenAI unveiled a more reliable successor to its GPT-3 natural language model. InstructGPT is a version of GPT-3 fine-tuned to minimize harmful, untruthful, and biased output. It's available via an application programming interface.
Excerpts from the Department of Defense’s ethical guidelines for contractors who develop its AI systems

Ethics for an Automated Army: Department of Defense issues AI ethics guidelines.

The U.S. Department of Defense issued new ethical guidelines for contractors who develop its AI systems. The Pentagon’s Defense Innovation Unit, which awards contracts for AI and other high-tech systems, published rules that contractors must follow.
Matt Zeiler

Matt Zeiler: Advance AI for good.

There’s a reason why artificial intelligence is sometimes referred to as “software 2.0”: It represents the most significant technological advance in decades. Like any groundbreaking invention, it raises concerns about the future, and much of the media focus is on the threats it brings.
Yoav Shoham

Yoav Shoham: Language models that reason.

I believe that natural language processing in 2022 will re-embrace symbolic reasoning, harmonizing it with the statistical operation of modern neural networks. Let me explain what I mean by this.
Illustration of a woman riding a sled

Multimodal AI Takes Off: Multimodal models such as CLIP and DALL-E are taking over AI.

While models like GPT-3 and EfficientNet, which work on text and images respectively, are responsible for some of deep learning’s highest-profile successes, approaches that find relationships between text and images made impressive progress.
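The core idea behind contrastive text-image models like CLIP can be sketched with synthetic embeddings. The vectors below are random stand-ins for encoder outputs, not the output of any real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for encoder outputs: a CLIP-style model maps images
# and captions into a shared embedding space, pulling matched pairs together.
image_emb = rng.normal(size=(3, 8))
text_emb = image_emb + 0.1 * rng.normal(size=(3, 8))  # paired captions lie near their images

def normalize(v):
    # Unit-normalize along the last axis so dot products become cosines.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Cosine-similarity matrix: entry [i, j] scores image i against caption j.
sim = normalize(image_emb) @ normalize(text_emb).T

# Retrieval: each image's best-matching caption should be its own pair,
# so the argmax of each row lands on the diagonal.
best = sim.argmax(axis=1)
print(best)  # -> [0 1 2]
```

Training such a model amounts to maximizing the diagonal of this matrix relative to the off-diagonal entries (a contrastive loss); at inference, the same similarity scores power zero-shot classification and retrieval.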
Giant snowman taking over the city while helicopters try to take it down

Governments Lay Down the Law: Governments around the world increasingly regulate AI.

Legislators worldwide wrote new laws, some proposed and some enacted, to rein in the societal impacts of automation. Authorities at all levels ratcheted up regulatory pressure.
Two images showing RETRO Architecture and Gopher (280B) vs State of the Art

Large Language Models Shrink: Gopher and RETRO prove lean language models can push boundaries.

DeepMind released three papers that push the boundaries — and examine the issues — of large language models.
Timnit Gebru and the Distributed Artificial Intelligence Research Institute logo

Corporate Ethics Counterbalance: Timnit Gebru launches institute for AI fairness.

One year after her acrimonious exit from Google, ethics researcher Timnit Gebru launched an independent institute to study neglected issues in AI.
