Bias

82 Posts

Douwe Kiela: Less Hype, More Caution

This year we really started to see the mainstreaming of AI. Systems like Stable Diffusion and ChatGPT captured the public imagination to an extent we haven’t seen before in our field.

Language Models, Extended: Language models grew more reliable and less biased in 2022.

Researchers pushed the boundaries of language models to address persistent problems of trustworthiness, bias, and updatability.

Synthetic Images Everywhere: 2022 was the year text-to-image AI went mainstream.

Pictures produced by AI went viral, stirred controversies, and drove investments. A new generation of text-to-image generators inspired a flood of experimentation, transforming text descriptions into mesmerizing artworks and photorealistic fantasies.

Inhuman Resources: Confronting the Fear of AI-Powered Hiring in 2022

Companies are using AI to screen and even interview job applicants. What happens when out-of-control algorithms run the human resources department?

Panopticon Down Under: Australian Prisons Adopt Face Recognition

A state in Australia plans to outfit prisons with face recognition. Corrective Services NSW, the government agency that operates nearly every prison in New South Wales, contracted the U.S.-based IT firm Unisys to replace a previous system.

AI Regulations Proceed Locally: U.S. States Enact Laws Targeting AI

The Electronic Privacy Information Center (EPIC) published a summary of AI-related laws that states and cities considered between January 2021 and August 2022.

Learning From Metadata: Descriptive Text Improves Performance for AI Image Classification Systems

Images in the wild may not come with labels, but they often include metadata. A new training method takes advantage of this information to improve contrastive learning.
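
The teaser doesn't spell out the mechanics, but work in this vein typically treats metadata as a weak label that defines which images count as positive pairs during contrastive training. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the function name and the assumption that each image's metadata reduces to a single integer ID are ours, not the paper's.

```python
import torch
import torch.nn.functional as F

def metadata_contrastive_loss(embeddings, metadata_ids, temperature=0.1):
    """Treat images that share a metadata tag (e.g., the same hashtag,
    location, or uploader ID) as positive pairs in a contrastive loss.

    embeddings:   (N, D) features from any image encoder
    metadata_ids: (N,)  integer metadata labels for the same batch
    """
    z = F.normalize(embeddings, dim=1)                  # unit-norm features
    sim = z @ z.t() / temperature                       # (N, N) scaled cosine similarities
    n = z.size(0)

    # Positive pairs share a metadata ID; self-pairs are excluded.
    pos_mask = metadata_ids.unsqueeze(0) == metadata_ids.unsqueeze(1)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = pos_mask & ~self_mask

    # Log-softmax over all non-self pairs, averaged over each anchor's positives.
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts

    # Average only over anchors that actually have a positive in the batch,
    # so batches should be sampled so that metadata groups co-occur.
    return loss[pos_mask.any(dim=1)].mean()
```

In practice, a term like this would be combined with a standard self-supervised objective, so the encoder learns both from augmentations and from images that happen to share metadata.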

Ethical AI 2.0: Microsoft Revises its Responsible AI Standards

Microsoft tightened the reins on both AI developers and customers. The tech titan revised its Responsible AI Standard and restricted access to some AI capabilities accordingly.

U.S. Acts Against Algorithmic Bias: Meta Removes Bias from its Ad Algorithms

Regulators are forcing Meta (formerly Facebook) to display certain advertisements more evenly across its membership. The United States government compelled Meta to revise its ad-placement system to deliver ads for housing to members regardless of their age, gender, or ethnicity.

Meta Decentralizes AI Effort: Meta Restructures its AI Research Teams

The future of Big AI may lie with product-development teams. Meta reorganized its AI division. Henceforth, AI teams will report to departments that develop key products.

Child-Welfare Agency Drops AI: Oregon and Pennsylvania Halt Use of AI Tool for At-Risk Kids

Officials in charge of protecting children stopped using a machine learning model designed to help them make decisions in difficult cases. The U.S. state of Oregon halted its use of an algorithm intended to identify children who may benefit from intervention.

Native Processing: Intelligent Voices of Wisdom Teaches Native Culture to AI

A group of media and technology experts is working to give AI a better understanding of indigenous peoples. IVOW is a consultancy that aims to reduce machine learning bias against cultures that are underrepresented in training data by producing knowledge graphs and other resources.

A Kinder, Gentler Language Model: Inside InstructGPT, OpenAI's GPT-3 successor.

OpenAI unveiled a more reliable successor to its GPT-3 natural language model. InstructGPT is a version of GPT-3 fine-tuned to minimize harmful, untruthful, and biased output. It's available via an application programming interface.
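
The teaser notes that InstructGPT is available through an API. As a rough illustration only, the sketch below uses the legacy openai Python SDK (0.x) and an InstructGPT-era engine name; the prompt and parameters are assumptions, and the current SDK and model lineup differ.

```python
# Minimal sketch of querying an InstructGPT-series model via OpenAI's API,
# written against the legacy openai Python SDK (0.x). Engine names and the
# SDK interface have changed since, so treat this as illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

response = openai.Completion.create(
    engine="text-davinci-001",  # an instruct-series engine name from that era; may be retired
    prompt="Explain in one sentence why human feedback can reduce harmful model outputs.",
    max_tokens=60,
    temperature=0.2,            # lower temperature for more predictable answers
)
print(response["choices"][0]["text"].strip())
```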

Standards for Hiring Algorithms: Meta, Walmart, and more agree to hiring algorithm standards.

Some of the world’s largest corporations will use standardized criteria to evaluate AI systems that influence hiring and other personnel decisions.

How to Overcome Societal Obstacles: How to break into AI from a disadvantaged background.

The top artificial intelligence companies employ many people who earned degrees at elite educational institutions and arrived with prior work experience. Yet the world is full of people from nontraditional backgrounds.
