Bias

Matt Zeiler: Advance AI for good.

There’s a reason why artificial intelligence is sometimes referred to as “software 2.0”: It represents the most significant technological advance in decades. Like any groundbreaking invention, it raises concerns about the future, and much of the media focus is on the threats it brings.

Yoav Shoham: Language models that reason.

I believe that natural language processing in 2022 will re-embrace symbolic reasoning, harmonizing it with the statistical operation of modern neural networks. Let me explain what I mean by this.

Abeba Birhane: Clean up web datasets.

From language to vision models, deep neural networks show improved performance, higher efficiency, and better generalization. Yet these systems also perpetuate bias and injustice.

Governments Lay Down the Law: Governments around the world increasingly regulate AI.

Legislators worldwide wrote new laws — some proposed, some enacted — to rein in the societal impacts of automation. What happened: Authorities at all levels ratcheted up regulatory pressure…

Large Language Models Shrink: Gopher and RETRO prove lean language models can push boundaries.

DeepMind released three papers that push the boundaries — and examine the issues — of large language models.

Corporate Ethics Counterbalance: Timnit Gebru launches institute for AI fairness.

One year after her acrimonious exit from Google, ethics researcher Timnit Gebru launched an independent institute to study neglected issues in AI.

Minorities Reported: Policing AI shows bias against Blacks and Latinos.

An independent investigation found evidence of racial and economic bias in a crime-prevention model used by police departments in at least nine U.S. states.

GPT-3 for All: GPT-3 NLP Model Is Available for Select Azure Users

Microsoft is making GPT-3 available to selected customers through its Azure cloud service.

New Models Inherit Old Flaws: AI Models May Inherit Flaws From Previous Systems

Is AI becoming inbred? The fear: The best models increasingly are fine-tuned versions of a small number of so-called foundation models that were pretrained on immense quantities of data scraped from the web.

White House Supports Limits on AI: U.S. Rules Protect Citizens from AI-Powered Surveillance and Discrimination

As governments worldwide mull their AI strategies and policies, the Biden administration called for a “bill of rights” to mitigate adverse consequences. What’s new: Top advisors to the U.S. president announced a plan to issue rules…

Crawl the Web, Absorb the Bias: NLP Models Absorb Biases from Web Training Data

The emerging generation of trillion-parameter models needs datasets of billions of examples, but the most readily available source of examples on that scale — the web — is polluted with bias and antisocial expressions. A new study examines the issue.

UN Calls Out AI: UN Report Highlights AI-Related Risks for Privacy, Bias

Human rights officials called for limits on some uses of AI. What’s new: Michelle Bachelet, the UN High Commissioner for Human Rights, appealed to the organization’s member states to suspend certain…

Weak Foundations Make Weak Models: Foundation AI Models Pass Flaws to Fine-Tuned Variants

A new study examines a major strain of recent research: huge models pretrained on immense quantities of uncurated, unlabeled data and then fine-tuned on a smaller, curated corpus.

AI Sees Race in X-Rays

Researchers from Emory University, MIT, Purdue University, and other institutions found that deep learning systems trained to interpret x-rays and CT scans also were able to identify their subjects as Asian, Black, or White.

Smaller Models, Bigger Biases

Compression methods like parameter pruning and quantization can shrink neural networks for use in devices like smartphones with little impact on accuracy — but they also exacerbate a network’s bias.
