Bias

94 Posts

Testing for Large Language Models: Meet Giskard, an automated quality manager for LLMs.

An open source tool automatically tests language and tabular-data models for social biases and other common issues. Giskard is a software framework that evaluates models using a suite of heuristics and tests based on GPT-4.
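
As a rough sketch of what the scan described above looks like in practice, the snippet below wraps a toy sentiment classifier with Giskard's documented Python wrappers and runs its automated scan. The `predict` function, its labels, and the sample data are stand-ins for illustration, not part of Giskard itself.

```python
import numpy as np
import pandas as pd
import giskard

# Stand-in prediction function (a placeholder, not part of Giskard):
# returns one [P(negative), P(positive)] row per input row.
def predict(df: pd.DataFrame) -> np.ndarray:
    return np.tile([0.3, 0.7], (len(df), 1))

# Wrap the model and data so Giskard's detectors can probe them.
model = giskard.Model(
    model=predict,
    model_type="classification",
    classification_labels=["negative", "positive"],
    feature_names=["text"],
)
dataset = giskard.Dataset(
    pd.DataFrame({"text": ["Great product!", "Terrible service."],
                  "label": ["positive", "negative"]}),
    target="label",
)

# Run the automated scan (bias, robustness, and other detectors)
# and export the findings as an HTML report.
report = giskard.scan(model, dataset)
report.to_html("scan_report.html")
```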

More Scraped Data, Greater Bias: Research shows that training on larger datasets can increase social bias.

How can we build large-scale language and vision models that don’t inherit social biases? Conventional wisdom holds that training on bigger, broader datasets yields fairer models, but new research challenges this assumption.

Stable Biases: Stable Diffusion may amplify biases in its training data.

Stable Diffusion may amplify biases in its training data in ways that promote deeply ingrained social stereotypes.

Where Is Meta’s Generative Play?: Why Meta still lacks a flagship generative AI service

While Microsoft and Google scramble to supercharge their businesses with text generation, Meta has yet to launch a flagship generative AI service. Reporters went looking for reasons why.

Generated Data Fouls Human Datasets: Some crowdworkers are using ChatGPT to generate data.

The crowdworkers you hire to provide human data may use AI to produce it. Researchers at École Polytechnique Fédérale de Lausanne found that written material supplied by workers hired via Amazon Mechanical Turk showed signs of being generated by ChatGPT.

Algorithm Investigators: All about the EU's new Centre for Algorithmic Transparency

A new regulatory body created by the European Union promises to peer inside the black boxes that drive social media recommendations. The European Centre for Algorithmic Transparency (ECAT) will study the algorithms that identify, categorize...

The Politics of Language Models: AI's political opinions differ from most Americans'.

Do language models have their own opinions about politically charged issues? Yes — and they probably don’t match yours. Shibani Santurkar and colleagues at Stanford compared opinion-poll responses of large language models with those of various human groups.

Hinton Leaves Google With Regrets: Why Geoffrey Hinton, one of the “Godfathers of AI,” resigned from Google

A pioneer of deep learning joined the chorus of AI insiders who worry that the technology is becoming dangerous, saying that part of him regrets his life’s work.

Runaway LLaMA: How Meta's LLaMA NLP model leaked

Meta’s effort to make a large language model available to researchers ended with its escape into the wild. Soon after Meta started accepting applications for developer access to LLaMA, a family of trained large language models...

Seinfeld's Twitch Moment: AI-generated sitcom Nothing, Forever booted from Twitch.

AI hobbyists created an homage to their favorite TV show . . . until it got knocked off the server. The creators of Nothing, Forever launched a fully automated, never-ending emulation of the popular TV show Seinfeld.

Guidelines for Managing AI Risk: NIST released its AI Risk Management Framework.

The United States government published guidelines designed to help organizations limit harm from AI. The National Institute of Standards and Technology, which recommends technological standards in a variety of industries, released the initial version of its AI Risk Management Framework.

Douwe Kiela: Natural language processing researcher Douwe Kiela calls for less hype, more caution.

This year we really started to see the mainstreaming of AI. Systems like Stable Diffusion and ChatGPT captured the public imagination to an extent we haven’t seen before in our field.

Language Models, Extended: Large language models grew more reliable and less biased in 2022.

Researchers pushed the boundaries of language models to address persistent problems of trustworthiness, bias, and updatability.

Synthetic Images Everywhere: 2022 was the year text-to-image AI went mainstream.

Pictures produced by AI went viral, stirred controversies, and drove investments. A new generation of text-to-image generators inspired a flood of experimentation, transforming text descriptions into mesmerizing artworks and photorealistic fantasies.

Inhuman Resources: Confronting the Fear of AI-Powered Hiring in 2022

Companies are using AI to screen and even interview job applicants. What happens when out-of-control algorithms are the human resources department?