Exaggerated Fear of AI Is Causing Real Harm: AI isn't likely to cause human extinction, but worry that it might is scaring young people away from the field.

Published Oct 25, 2023

Dear friends,

Welcome to the Halloween special issue of The Batch, where we take a look at fears associated with AI. In that spirit, I’d like to address a fear of mine: Sensationalist claims that AI could bring about human extinction will cause serious harm.

In recent months, I sought out people concerned about the risk that AI might cause human extinction. I wanted to find out how they thought it could happen. They worried about things like a bad actor using AI to create a bioweapon or an AI system inadvertently driving humans to extinction, just as humans have driven other species to extinction through lack of awareness that our actions could have that effect. 

When I try to evaluate how realistic these arguments are, I find them frustratingly vague. They boil down to “it could happen.” Trying to prove it couldn’t amounts to proving a negative. I can’t prove that AI won’t drive humans to extinction any more than I can prove that radio waves emitted from Earth won’t lead space aliens to find us and wipe us out.

Such overblown fears are already causing harm. High school students who take courses designed by Kira Learning, an AI Fund portfolio company that focuses on grade-school education, have said they are apprehensive about AI because they’ve heard it might lead to human extinction, and they don’t want to be a part of that. Are we scaring students away from careers that would be great for them and great for society?

I don’t doubt that many people who share such worries are sincere. But others have a significant financial incentive to spread fear: 

  • Individuals can gain attention, which can lead to speaking fees or other revenue.
  • Nonprofit organizations can raise funds to combat the phantoms that they’ve conjured.
  • Legislators can boost campaign contributions by acting tough on tech companies.

I firmly believe that AI has the potential to help people lead longer, healthier, more fulfilling lives. One of the few things that can stop it is regulators passing ill-advised laws that impede progress. Some lobbyists for large companies — some of which would prefer not to have to compete with open source — are trying to convince policy makers that AI is so dangerous, governments should require licenses for large AI models. If enacted, such regulation would impede open source development and dramatically slow down innovation. 

How can we combat this? Fortunately, I think the developer and scientific communities believe in spreading truthful, balanced views, and open source has a lot of supporters. I hope all of us can keep promoting a positive view of AI.

AI is far from perfect, and we have much work ahead of us to make it safer and more responsible. But it already benefits humanity tremendously and will do so even more in the future. Let’s make sure unsubstantiated fears don’t handicap that progress.

Witching you lots of learning,

Andrew

P.S. We have a Halloween treat for you! LangChain CEO Harrison Chase has created a new short course, “Functions, Tools, and Agents with LangChain.” It covers the ability of the latest large language models, including OpenAI’s, to call functions. This is very useful for handling structured data and a key building block for LLM-based agents. Sign up here!
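For readers curious what function calling looks like in practice, here is a minimal sketch, assuming the openai 0.x Python SDK that was current when this issue was published; the get_weather function and its schema are hypothetical illustrations, not material from the course. The key idea is that the model emits structured JSON arguments rather than free-form text, which your application can parse and execute.

```python
# Minimal sketch of LLM function calling with the openai 0.x Python SDK
# (current as of this issue). The API key is read from the OPENAI_API_KEY
# environment variable. get_weather is a hypothetical stand-in.
import json

import openai

# JSON Schema describing a function the model is allowed to call.
FUNCTIONS = [{
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def get_weather(city: str) -> dict:
    # Stand-in for a real weather lookup.
    return {"city": city, "forecast": "sunny", "temp_f": 72}

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    functions=FUNCTIONS,
    function_call="auto",  # let the model decide whether to call a function
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model replies with the function name and JSON arguments instead
    # of prose; the application parses them and runs the matching function.
    args = json.loads(message["function_call"]["arguments"])
    print(get_weather(**args))
else:
    print(message["content"])
```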
