Metaverse illustration with Meta AI product names
Bias

Meta Decentralizes AI Effort: Meta Restructures its AI Research Teams

The future of Big AI may lie with product-development teams. Meta reorganized its AI division so that AI teams will report to the departments that develop key products.
Animation showing probability of children who may benefit from intervention
Bias

Child-Welfare Agency Drops AI: Oregon and Pennsylvania Halt Use of AI Tool for At-Risk Kids

Officials in charge of protecting children stopped using a machine learning model designed to help them make decisions in difficult cases. The U.S. state of Oregon halted its use of an algorithm intended to identify children who may benefit from intervention.
Indigenous Knowledge Graph
Bias

Native Processing: Intelligent Voices of Wisdom Teaches Native Culture to AI

A group of media and technology experts is working to give AI a better understanding of indigenous peoples. IVOW is a consultancy that aims to reduce machine learning bias against cultures that are underrepresented in training data by producing knowledge graphs and other resources.
InstructGPT methods
Bias

A Kinder, Gentler Language Model: Inside InstructGPT, OpenAI's GPT-3 successor.

OpenAI unveiled a more reliable successor to its GPT-3 natural language model. InstructGPT is a version of GPT-3 fine-tuned to minimize harmful, untruthful, and biased output. It's available via an application programming interface.
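For readers who want to experiment, here is a minimal sketch of querying an InstructGPT-class model through OpenAI's Python client. The model name (text-davinci-002) and prompt are illustrative assumptions, and OpenAI's current client interface may differ.

    # Minimal sketch (assumed setup): pip install openai, set OPENAI_API_KEY.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # "text-davinci-002" was an InstructGPT-series model; substitute whatever
    # model your account exposes.
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt="Rewrite this sentence so it is polite and factual: ...",
        max_tokens=100,
        temperature=0.2,
    )
    print(response.choices[0].text.strip())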
Questionnaire for evaluating AI system vendors
Bias

Standards for Hiring Algorithms: Meta, Walmart, and more agree to hiring algorithm standards.

Some of the world’s largest corporations will use standardized criteria to evaluate AI systems that influence hiring and other personnel decisions.
Red and green board game pieces
Bias

How to Overcome Societal Obstacles: How to break into AI from a disadvantaged background.

Many people at the top artificial intelligence companies earned degrees from elite educational institutions and arrived with prior work experience. Yet the world is full of people from nontraditional backgrounds.
Matt Zeiler
Bias

Matt Zeiler: Advance AI for good

There’s a reason why artificial intelligence is sometimes referred to as “software 2.0”: It represents the most significant technological advance in decades. Like any groundbreaking invention, it raises concerns about the future, and much of the media focus is on the threats it brings.
Yoav Shoham
Bias

Yoav Shoham: Language models that reason

I believe that natural language processing in 2022 will re-embrace symbolic reasoning, harmonizing it with the statistical operation of modern neural networks. Let me explain what I mean by this.
Abeba Birhane
Bias

Abeba Birhane: Clean up web datasets

From language to vision models, deep neural networks are marked by improved performance, higher efficiency, and better generalization. Yet these systems also perpetuate bias and injustice.
Giant snowman taking over the city while helicopters try to take it down
Bias

Governments Lay Down the Law: Governments around the world increasingly regulate AI.

Legislators worldwide wrote new laws — some proposed, some enacted — to rein in the societal impacts of automation. Authorities at all levels ratcheted up regulatory pressure on AI.
Two images showing RETRO Architecture and Gopher (280B) vs State of the Art
Bias

Large Language Models Shrink: Gopher and RETRO prove lean language models can push boundaries.

DeepMind released three papers that push the boundaries — and examine the issues — of large language models.
Timnit Gebru and the Distributed Artificial Intelligence Research Institute logo
Bias

Corporate Ethics Counterbalance: Timnit Gebru launches institute for AI fairness.

One year after her acrimonious exit from Google, ethics researcher Timnit Gebru launched an independent institute to study neglected issues in AI.
Geolitica screen captures
Bias

Minorities Reported: Policing AI shows bias against Blacks and Latinos.

An independent investigation found evidence of racial and economic bias in a crime-prevention model used by police departments in at least nine U.S. states.
Animation showing GPT-3 in full action
Bias

GPT-3 for All: GPT-3 NLP Model is Available for Select Azure Users

Microsoft is making GPT-3 available to selected customers through its Azure cloud service.
Halloween family portrait showing the inheritance of some spooky characteristics
Bias

New Models Inherit Old Flaws: AI Models May Inherit Flaws From Previous Systems

Is AI becoming inbred? The fear: The best models increasingly are fine-tuned versions of a small number of so-called foundation models that were pretrained on immense quantities of data scraped from the web.
