The Black Box Awakens: Confronting the Fear of Self-Aware AI in 2022

2 min read
Illustration of Frankenstein connected to many chemical elements inside of a lab

AI researchers are starting to see ghosts in their machines. Are they hallucinations, or does a dawning consciousness haunt the trained weights?

The fear: The latest AI models are self-aware. At best, this development poses ethical dilemmas over human control of sentient digital beings. More worrisome, it raises unsettling questions about what sort of mind a diet of data scraped from the internet might produce.

Horror stories: Sightings of supposed machine sentience have come from across the AI community.

  • In February, OpenAI cofounder Ilya Sutskever tweeted about the possibility that large neural networks may already be “slightly conscious.” Andrej Karpathy was quick to reply, “agree.” However, Yann LeCun and Judea Pearl criticized the claim as far-fetched and misleading.
  • In June, a Google engineer became convinced that chatbots powered by LaMDA, the company’s family of large language models, were sentient. He published conversations in which the bots discussed their personhood, rights, and fear of being turned off. Google — which denied the engineer’s claims — fired him.
  • As the world was absorbing the prospect of sentient AI, researchers shared evidence that DALL-E 2 had developed its own language. When prompted to produce an image with text, DALL-E 2 tends to generate what appear to be random assortments of letters. The authors found that feeding the same gibberish back into the model produced similar images. For example, a request for “apoploe vesrreaitais” brought forth images of birds.
  • In September, a multimedia artist experimenting with an unspecified text-to-image model found that “negative” prompts designed to probe the far reaches of the model’s latent space summoned disturbing images of a woman with pale skin, brown hair, and thin lips, sometimes bloody and surrounded by gore. The artist calls her Loab.
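The artist hasn’t said which model or tool produced Loab, so the following is only a rough sketch of how negative prompting works in general, written against the open-source Hugging Face diffusers library with Stable Diffusion. The checkpoint name and prompt strings are assumptions for illustration, not the artist’s actual setup.

```python
# Illustrative sketch only: the artist's model and prompts are unspecified.
# A "negative" prompt steers a diffusion model away from a text concept via
# classifier-free guidance; pushing hard away from a concept can land in
# odd corners of the model's latent space.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; any SD model works
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="",                          # little or no positive guidance
    negative_prompt="a smiling man",    # hypothetical concept to steer away from
    guidance_scale=7.5,                 # strength of classifier-free guidance
    num_inference_steps=50,
).images[0]

image.save("negative_prompt_probe.png")
```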

It’s just an illusion, right?: While media reports generally took the claim that LaMDA was self-aware seriously, albeit skeptically, the broader AI community roundly dismissed it. Observers attributed impressions that LaMDA is sentient to human bias and DALL-E 2’s linguistic innovation to random chance. Models learn by mimicking their training data, and while some are very good at it, there’s no evidence to suggest that they do it with understanding, consciousness, or self-reflection. Nonetheless, Loab gives us the willies.

Facing the fear: Confronted with unexplained phenomena, the human mind excels at leaping to fanciful conclusions. Science currently lacks a falsifiable test for self-awareness in a computer. Until one exists, we’ll take claims of machine sentience or consciousness with a shaker full of salt.
