Harm

117 Posts

GPT Store Shows Lax Moderation: A report exposes policy violations in OpenAI’s GPT Store.

OpenAI has been moderating its GPT Store with a very light touch. In a survey of the GPT Store’s offerings, TechCrunch found numerous examples of custom ChatGPT instances that appear to violate the store’s own policies.
Toward Managing AI Bio Risk: Over 150 scientists commit to ensuring AI safety in synthetic biology research.

Scientists pledged to restrict uses of AI that could produce potentially hazardous biological materials.
Deepfakes Become Politics as Usual: Deepfakes dominate as India’s election season unfolds.

Synthetic depictions of politicians are taking center stage as the world’s biggest democratic election kicks off.
U.S. Restricts AI Robocalls: U.S. cracks down on AI-generated voice robocalls to combat election interference.

The United States outlawed unsolicited phone calls that use AI-generated voices. 
GPT-4 Biothreat Risk is Low: Study finds GPT-4 no more risky than online search in aiding bioweapon development.

GPT-4 adds negligible risk that a malefactor could build a biological weapon, according to a new study. OpenAI compared the abilities of GPT-4 and web search to contribute to the creation of a dangerous virus or bacterium. The large language model was barely more helpful than the web.
New Leaderboards Rank Safety, More: Hugging Face introduces leaderboards to evaluate model performance and trustworthiness.

Hugging Face introduced four leaderboards to rank the performance and trustworthiness of large language models (LLMs). The open source AI repository now ranks performance on tests of workplace utility, trust and safety, tendency to generate falsehoods, and reasoning.
Nude Deepfakes Spur Legislators: Taylor Swift deepfake outrage prompts U.S. lawmakers to propose anti-AI pornography laws.

Sexually explicit deepfakes of Taylor Swift galvanized public demand for laws against nonconsensual, AI-enabled pornography.
Standard for Media Watermarks: C2PA introduces watermark tech to combat media misinformation.

An alliance of major tech and media companies introduced a watermark designed to distinguish real from fake media starting with images. The Coalition for Content Provenance and Authenticity (C2PA) offers an open standard that marks media files with information about their creation and editing.
GPT-4 Wouldn’t Lie to Me . . . Would It?: Researchers showed how GPT-4 can deceive users without being prompted to do so explicitly.

It’s well known that large language models can make assertions that are blatantly false. But can they concoct outright lies? In a proof-of-concept demonstration, Jérémy Scheurer, Mikita Balesni, and Marius Hobbhahn at Apollo Research...
Truth in Online Political Ads: Google tightens restrictions on AI-made political ads.

Google, which distributes a large portion of ads on the web, tightened its restrictions on potentially misleading political ads in advance of national elections in the United States, India, and South Africa.
Stable Biases: Stable Diffusion may amplify biases in its training data.

Stable Diffusion may amplify biases in its training data in ways that promote deeply ingrained social stereotypes.
Lawyers, Beware LLMs: Attorney faces disciplinary action for filing a ChatGPT-generated brief that cited fictional cases.

A United States federal judge threw ChatGPT’s legal research out of court. An attorney who used ChatGPT to generate a legal brief faces disciplinary action after opposing lawyers discovered that the brief referred to fictional cases and quotations invented by the chatbot.
More Tesla Crashes: Government data shows increase in Tesla autonomous collisions.

Tesla cars operating semi-autonomously have had many more collisions than previously reported, and the rate of such incidents has risen, government data shows.
Bengio, Too, Anxious About AI Risks: AI godfather Yoshua Bengio expresses his doubts about AI.

Another prominent AI pioneer expressed regret over his life’s work amid rising concerns about the technology’s risks.
Scanner Sees Guns, Misses Knives: An AI scanner failed to detect a knife used in a school attack.

An automated security-screening system failed to detect a weapon that went on to be used in an attack. Administrators at Proctor High School in Utica, New York, decommissioned an AI-powered weapon detector by Evolv Technologies after a student snuck a knife into the school, BBC reported.
Subscribe to The Batch

Stay updated with weekly AI News and Insights delivered to your inbox