Forbidden sign over various symbols representing potentially dangerous falsehoods
Harm

YouTube vs. Conspiracy Theorists

Facing a tsunami of user-generated disinformation, YouTube is scrambling to stop its recommendation algorithm from promoting videos that spread potentially dangerous falsehoods.
2 min read
Face recognition system in a supermarket
Harm

Tech Giants Face Off With Police

Three of the biggest AI vendors pledged to stop providing face recognition services to police — but other companies continue to serve the law-enforcement market.
1 min read
Illustration of a broken heart with a smirk in the middle
Harm

Outing Hidden Hatred

Facebook uses automated systems to block hate speech, but hateful posts can slip through when seemingly benign words and pictures combine to create a nasty message. The social network is tackling this problem by enhancing AI’s ability to recognize context.
2 min read
Angry emoji over dozens of Facebook like buttons
Harm

Facebook Likes Extreme Content

Facebook’s leadership has thwarted changes in its algorithms aimed at making the site less polarizing, according to the Wall Street Journal. The social network’s own researchers determined that its AI software promotes divisive content.
2 min read
Road sign with the word "trust"
Harm

Toward AI We Can Count On

A consortium of top AI experts proposed concrete steps to help machine learning engineers secure the public’s trust. Dozens of researchers and technologists recommended actions to counter public skepticism toward artificial intelligence, fueled by issues like data privacy.
1 min read
Map seen with computer vision
Harm

Where Are the Live Bombs?

Unexploded munitions from past wars continue to kill and maim thousands of people every year. Computer vision is helping researchers figure out where these dormant weapons are likely to be.
2 min read
Text "You only live once. #YOLO" written over an orange background
Harm

Code No Evil

A prominent AI researcher has turned his back on computer vision over ethical issues. The co-creator of the popular object-recognition network You Only Look Once (YOLO) said he no longer works on computer vision because the technology has “almost no upside and enormous downside risk.”
1 min read
Chart of the top 100 related videos for the YouTube search "global warming"
Harm

Bad Recommendations

YouTube is a great place to learn about new ideas — including some that have been thoroughly discredited. YouTube’s recommendation algorithm is helping spread misinformation about climate change, according to research by Avaaz, a self-funded activist group.
2 min read
Zhi-Hua Zhou
Harm

Zhi-Hua Zhou: Fresh Methods, Clear Guidelines

I have three hopes for 2020.
1 min read
Dawn Song
Harm

Dawn Song: Taking Responsibility for Data

Datasets are critical to AI and machine learning, and they are becoming a key driver of the economy. Collection of sensitive data is increasing rapidly, covering almost every aspect of people’s lives.
2 min read
Eric Schmidt on C-Span2
Harm

Transparency for Military AI

A U.S. federal judge ruled that the public must be able to see records from a government-chartered AI advisory group. The court decided that the National Security Commission on AI, which guides defense research into AI-powered warfighting technology, must respond to freedom-of-information requests.
1 min read
Information related to Explainable AI (xAI)
Harm

Google's AI Explains Itself

Google's AI platform offers a view into the mind of its machines. Explainable AI (xAI) tools show which features exerted the most influence on a model’s decision, so users can evaluate model performance and potentially mitigate biased results.
2 min read
Volvo car identifying a pedestrian
Harm

Blind Spot

In March 2018, one of Uber’s self-driving cars became the first autonomous vehicle reported to have killed a pedestrian. A new report by U.S. authorities suggests that the accident occurred because the car’s software was programmed to ignore jaywalkers.
2 min read
Illustration: Face of a Halloween pumpkin on a purple background
Harm

AI Goes Rogue

Could humanity be destroyed by its own creation? If binary code running on a computer awakens into sentience, it will be able to think better than humans. It may even be able to improve its own software and hardware.
1 min read
Illustration of 4 ghosts floating and 1 person dressed as a ghost
Harm

Deepfakes Wreak Havoc

Will AI fakery erode public trust in key social institutions? Generative models will flood media outlets with convincing but false photos, videos, ads, and news stories. The ensuing crisis of authority will lead to widespread distrust in everything from the financial system to democracy itself.
2 min read
