Existential Risk? I Don't Get It! Prominent computer scientists fear that AI could trigger human extinction. It's time to have a real conversation about the realistic risks.


Dear friends,

Last week, safe.ai asserted that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The statement was signed by AI scientists whom I deeply respect, including Yoshua Bengio and Geoffrey Hinton, and it received widespread media coverage.

I have to admit that I struggle to see how AI could pose any meaningful risk of our extinction. AI has real risks, including bias and unfairness, inaccurate outputs, job displacement, and concentration of power. But I see AI’s net impact as a massive contribution to society. It’s saving lives by improving healthcare and making cars safer, improving education, making healthy food and numerous other goods and services more affordable, and democratizing access to information. I don’t understand how it could lead to human extinction.

A number of thoughtful commentators have also pushed back on the extinction narrative. For example:

  • Chris Manning points out that the AI community has a large, quiet majority that’s focused on building useful software and doesn’t share the views of the loud AI Safety crowd that talks about existential risks; this majority believes the risks can be mitigated.
  • Emily Bender notes that AI doomsaying is a huge distraction from the technology’s real harms, which she lists as “discrimination, surveillance, pollution of the information ecosystem, data theft, labor exploitation.”
  • In a similar vein, Matteo Wong argues in The Atlantic that “AI doomerism is a decoy.” It appears to me that time regulators spend stopping AI from autonomously launching nuclear weapons (something no nuclear power has publicly considered) is time they’re not spending passing regulations on data privacy, AI transparency, or antitrust that would be less convenient for tech companies and might negatively affect their bottom line.
  • Marc Andreessen wrote an essay on the benefits of AI. While my perspective differs from his on some points (for example, I’m more worried than he is about the negative impact of job displacement), he makes a sound argument that each time a new technology has been introduced, a predictable moral panic has taken hold. Examples are documented by the fascinating website pessimistsarchive.org (worth a look!), which describes fears of novels corrupting youth, elevators causing brain fever, cars (“the devil wagon”) being on a mission to destroy the world, and recorded sound harming babies. With the rise of deep learning about 10 years ago, Elon Musk, Bill Gates, and Stephen Hawking warned of existential risk stemming from AI. The current wave of fears about AI feels similar to me, but it’s more intense and has buy-in from prominent scientists.

I’m glad to see others presenting a sensible alternative to the narrative of AI as an extinction risk. Having said that, though, I feel an ethical responsibility to keep an open mind and make sure I really understand the risk — especially given the high regard I have for some who think AI does pose this risk.

To learn more, I’m speaking with a few people who I think may have a thoughtful perspective on how AI creates a risk of human extinction, and I will report back with my findings. In the meantime, I would love to hear your thoughts as well. Please reply to my posts on Twitter or LinkedIn if there’s someone you think I should speak with or if you’d like to share your perspective. Through this, I hope we can have a real conversation about whether AI truly poses an extinction risk.

I look forward to continuing the discussion with you,

Andrew
