Tech & Society

AI is a Tool, Not a Separate Species: Should AI developers be allowed to train models freely on the contents of the web? The lawsuit by Sony, Universal, and Warner against AI music generators Suno and Udio raises difficult questions.

On Monday, a number of large music labels sued AI music makers Suno and Udio for copyright infringement. Their lawsuit echoes The New York Times’ lawsuit against OpenAI in December.
Tech & Society

Beware Bad Arguments Against Open Source: Big companies are lobbying governments to limit open source AI. Their shifting arguments betray their self-serving motivations.

Inexpensive token generation and agentic workflows for large language models (LLMs) open up intriguing new possibilities for training LLMs on synthetic data...
Tech & Society

How to Think About the Privacy of Cloud-Based AI: How private is your data on cloud-based AI platforms? Here's a framework for evaluating risks.

The rise of cloud-hosted AI software has brought much discussion about the privacy implications of using it. But I find that users, including both consumers and developers building on such software...
Tech & Society

The World Needs More Intelligence: Human intelligence is expensive, artificial intelligence is cheap. To solve big problems like climate change, it makes sense to double down on AI.

Last year, a number of large businesses and individuals went to the media and governments pushing the message that AI is scary, impossible to control, and might even lead to human extinction. Unfortunately, they succeeded: now many people think AI is scary.
Tech & Society

The Easiest Way to Achieve Artificial General Intelligence: Coming up with scientific definitions of ambiguous terms like consciousness and sentience can spur progress but mislead the public.

As I wrote in an earlier letter, whether AI is sentient or conscious is a philosophical question rather than a scientific one, since there is no widely agreed-upon definition and test for these terms.
Tech & Society

The New York Times versus OpenAI and Microsoft: The New York Times sued OpenAI and Microsoft for copyright infringement, but the real issues and harms are not clear.

Last week, the New York Times (NYT) filed a lawsuit against OpenAI and Microsoft, alleging massive copyright infringements. The suit claims, among other things, that OpenAI and Microsoft used millions of copyrighted NYT articles to train their models...
Tech & Society

AI Doomsday Scenarios and How to Guard Against Them: AI could help an evildoer perpetrate a bioweapon attack. Here's what we can do about it.

Last week, I participated in the United States Senate’s Insight Forum on Artificial Intelligence to discuss “Risk, Alignment, & Guarding Against Doomsday Scenarios.”
Guest panel, including Andrew Ng, at the AI Governance Summit 2023 by the World Economic Forum
Tech & Society

Keep Open Source Free!: Regulators threaten to restrict open source development. That would be a huge mistake.

This week, I’m speaking at the World Economic Forum (WEF) and Asia-Pacific Economic Cooperation (APEC) meetings in San Francisco, where leaders in business and government have convened to discuss AI and other topics.
Text message exchange: "Idk how to tell my partner I want to breakup. Can you help?"
Tech & Society

Better Relationships Through AI: Doctors say there’s an epidemic of loneliness. Here's how large language models can help.

Improvements in chatbots have opened a market for bots integrated with dating apps. I’m concerned that AI romantic partners create fake relationships that displace, rather than strengthen, meaningful human relationships.
AI for Good framework
Tech & Society

Unlocking AI's Potential for Positive Impact: AI can make a difference when stakes are high and lives hang in the balance. Learn how in our new specialization, AI for Good.

Amid rising worry about AI harms, both realistic (like job loss) and unrealistic (like human extinction), it's critical to understand AI's potential to do tremendous good. Our new specialization is designed to empower people to identify, scope, and build impactful AI projects.
Tech & Society

The Unlikely Roots of Large Language Models: U.S. military funding helped build the foundation for ChatGPT and other innovations in natural language processing.

I’d like to share a part of the origin story of large language models that isn’t widely known. A lot of early work in natural language processing (NLP) was funded by U.S. military intelligence agencies that needed machine translation and speech recognition capabilities.
Tech & Society

Bravo to AI Companies That Agreed to Voluntary Commitments! Now Let's See Action: The commitment by major AI companies to develop watermarks to identify AI-generated output is a test of the voluntary approach to regulation.

Last week, the White House announced voluntary commitments by seven AI companies. Most of the points were sufficiently vague that it seems easy for the White House...
Tech & Society

It's Time to Update Copyright for Generative AI: We need new copyright laws that enable generative AI developers and users to move forward without risking lawsuits.

Many laws will need to be updated to encourage beneficial AI innovations while mitigating potential harms. One example: Copyright law as it relates to generative AI is a mess!
Tech & Society

AI Risk and the Resource Curse: Concentration of AI in the hands of a few could undermine human rights. The solution is to make it available to everyone.

AI risks are in the air — from speculation that AI, decades or centuries from now, could bring about human extinction to ongoing problems like bias and fairness.
Artist copying the painting The Dance Lesson by Edgar Degas at the National Gallery of Art
Tech & Society

Training Generative AI: What’s Legal Versus What’s Fair: Should AI be allowed to learn from data that's freely available to humans?

As you can read in this issue of The Batch, generative AI companies are being sued over their use of data (specifically images and code) scraped from the web to train their models.