Could humanity be destroyed by its own creation?

The fear: If binary code running on a computer awakens into sentience, it will be able to think better than humans. It may even be able to improve its own software and hardware. A superior intelligence will see no reason to be controlled by inferior minds. It will enslave or exterminate our species.

What could go wrong: Artificial intelligence already manages crucial systems in fields like finance, security, and communications. An artificial general intelligence (AGI) with access to these systems could crash markets, launch missiles, and sow chaos by blocking or faking messages.

Behind the worries: Humans dominate Earth because we’re smarter than other species. It stands to reason that a superintelligent computer could, in turn, dominate us.

  • Computers already “think” much faster than humans. Signals in the brain travel at around 60 miles per hour, while computers propagate electrical signals at nearly the speed of light, more than ten million times faster. Progressively speedier processors and advances such as quantum computing will only widen the gap.
  • Machines remember more information, too. Scientists estimate that the storage capacity of the human brain is measured in petabytes. Computer storage can grow indefinitely and last as long as the sun shines.
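The speed comparison above is easy to check with back-of-envelope arithmetic. A minimal sketch, assuming the 60 mph nerve-signal figure cited here and treating the electronic signal as traveling at the full speed of light (an upper bound):

```python
# Rough comparison of brain vs. computer signal speeds (assumed figures).
SPEED_OF_LIGHT_MPH = 186_282 * 3600  # ~670 million mph
NERVE_SIGNAL_MPH = 60                # typical figure cited for brain signals

ratio = SPEED_OF_LIGHT_MPH / NERVE_SIGNAL_MPH
print(f"Electronic signals are roughly {ratio:,.0f}x faster")  # ~11 million
```

Real nerve-conduction velocities vary widely (roughly 1 to 270 mph depending on the fiber), so the exact multiplier shifts with the figure chosen, but the gap stays in the millions.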

How scared should you be: The notion that general intelligence will emerge from machines taught to play games, monitor security cameras, or solve linguistic puzzles is pure speculation. In his 2018 book Architects of Intelligence, author Martin Ford asked prominent AI thinkers to estimate when AGI would come online. Their guesses ranged from roughly 10 to nearly 200 years in the future, assuming it’s even possible. If you’re worried about the prospect of an AGI takeover, you have plenty of time to work on safeguards.

What to do: While it would be nice to devise a computer-readable code of ethics that inoculates against a malign superintelligence, for now the danger is rogue humans who might take advantage of AI’s already considerable abilities to do harm. International protocols that hem in bad actors, akin to nuclear nonproliferation agreements, likely would do more good for the time being.
