Bias

98 Posts

Context Is Everything: Gemini 1.5 Pro, a leap in multimodal AI amid controversy over v1.0

An update of Google’s flagship multimodal model keeps track of colossal inputs, while an earlier version generated some questionable outputs.
New Leaderboards Rank Safety, More: Hugging Face introduces leaderboards to evaluate model performance and trustworthiness.

Hugging Face introduced four leaderboards to rank the performance and trustworthiness of large language models (LLMs). The open source AI repository now ranks performance on tests of workplace utility, trust and safety, tendency to generate falsehoods, and reasoning.
Seeing Darker-Skinned Pedestrians: Children and people with darker skin face higher street risks with object detectors, research finds.

In a study, models used to detect people walking on streets and sidewalks performed less well on adults with darker skin and children of all skin tones.
Amazon Joins Chatbot Fray: The pros and cons of Q, Amazon’s new enterprise chatbot

Amazon launched Q, an AI-powered assistant that enables employees to query documents and corporate systems, even as internal tests indicated potential problems.
Testing for Large Language Models: Meet Giskard, an automated quality manager for LLMs.

An open source tool automatically tests language and tabular-data models for social biases and other common issues. Giskard is a software framework that evaluates models using a suite of heuristics and tests based on GPT-4.
More Scraped Data, Greater Bias: Research shows that training on larger datasets can increase social bias.

How can we build large-scale language and vision models that don’t inherit social biases? Conventional wisdom holds that training on larger datasets reduces bias, but research challenges this assumption.
Stable Biases: Stable Diffusion may amplify biases in its training data.

Stable Diffusion may amplify biases in its training data in ways that promote deeply ingrained social stereotypes.
Where Is Meta’s Generative Play?: Why Meta still lacks a flagship generative AI service

While Microsoft and Google scramble to supercharge their businesses with text generation, Meta has yet to launch a flagship generative AI service. Reporters went looking for reasons why.
Generated Data Fouls Human Datasets: Some crowdworkers are using ChatGPT to generate data.

The crowdworkers you hire to provide human data may use AI to produce it. Researchers at École Polytechnique Fédérale de Lausanne found that written material supplied by workers hired via Amazon Mechanical Turk showed signs of being generated by ChatGPT.
Algorithm Investigators: All about the EU's new Centre for Algorithmic Transparency

A new regulatory body created by the European Union promises to peer inside the black boxes that drive social media recommendations. The European Centre for Algorithmic Transparency (ECAT) will study the algorithms that identify, categorize...
The Politics of Language Models: AI's political opinions differ from most Americans'.

Do language models have their own opinions about politically charged issues? Yes — and they probably don’t match yours. Shibani Santurkar and colleagues at Stanford compared opinion-poll responses of large language models with those of various human groups.
Hinton Leaves Google With Regrets: Why Geoffrey Hinton, one of the “Godfathers of AI,” resigned from Google

A pioneer of deep learning joined the chorus of AI insiders who worry that the technology is becoming dangerous, saying that part of him regrets his life’s work.
Runaway LLaMA: How Meta's LLaMA NLP model leaked

Meta’s effort to make a large language model available to researchers ended with its escape into the wild. Soon after Meta started accepting applications for developer access to LLaMA, a family of trained large language models...
Seinfeld's Twitch Moment: AI-generated sitcom Nothing, Forever booted from Twitch.

AI hobbyists created an homage to their favorite TV show . . . until it got knocked off the server. The creators of Nothing, Forever launched a fully automated, never-ending emulation of the popular TV show Seinfeld.
Guidelines for Managing AI Risk: NIST released its AI Risk Management Framework.

The United States government published guidelines designed to help organizations limit harm from AI. The National Institute of Standards and Technology, which recommends technological standards in a variety of industries, released the initial version of its AI Risk Management Framework.