A chatbot persuaded at least one person that it has feelings.
What’s new: A senior engineer at Google announced his belief that the company’s latest conversational language model is sentient. Google put the engineer on administrative leave.
Is there anybody in there? LaMDA is a family of transformer-based models, pretrained on 1.56 trillion words of dialog, that range in size from 2 billion to 137 billion parameters. Google previously discussed plans to incorporate it into products like Search and Assistant.

  • After pretraining, LaMDA generates a number of candidate responses to each prompt. The developers collected a set of conversations with LaMDA and hired workers to rate how sensible, specific, interesting, and safe its responses were. Then they fine-tuned the model to append those ratings to each candidate, so LaMDA can reply with the highest-rated one. To improve its factual accuracy, they also fine-tuned it to mimic the workers' use of an external search tool: rather than return a response directly, the model can issue a search query. Given the previous input plus the search results, it produces new output, which may be a final response or a further search.
  • Researchers in Google’s Responsible AI group tested chatbots based on LaMDA to determine their propensity for hate speech and other toxic behavior. The process persuaded researcher Blake Lemoine that the model possessed self-awareness and a sense of personhood. He transcribed nine conversations between the model and Google researchers and submitted an argument that LaMDA is sentient. In one transcript, a chatbot says it believes it’s a person, discusses its rights, and expresses a fear of being turned off.
  • Google placed Lemoine on administrative leave in early June for violating confidentiality after he hired a lawyer to defend LaMDA’s right to exist and spoke to a member of the U.S. House Judiciary Committee about what he regarded as ethical violations in Google’s treatment of LaMDA. “Lemoine was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” a company spokesperson told The Washington Post.
  • In a blog post, Lemoine writes that Google’s decision to discipline him fits a pattern of unfair treatment of its AI ethics researchers: the company disregards their concerns and punishes them for speaking up. He cites the earlier dismissals of Ethical AI co-leads Timnit Gebru and Margaret Mitchell.
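The generate-and-rank loop described above can be sketched in a few lines of Python. This is an illustrative toy, not Google’s implementation: every function below is a hypothetical stub standing in for a model or tool call, and the candidate count, rating function, and search-trigger convention are all assumptions.

```python
# Toy sketch of a LaMDA-style response loop: sample candidates, pick the
# highest-rated one, and optionally route through an external search tool.
# All names here are illustrative stubs, not a real API.

def generate_candidates(prompt, n=4):
    # Stand-in for sampling n responses from the language model.
    return [f"response {i} to: {prompt}" for i in range(n)]

def rate(response):
    # Stand-in for the fine-tuned quality score (sensible, specific,
    # interesting, safe) the model learned to append to each candidate.
    return len(response) % 7  # arbitrary deterministic stub

def needs_search(response):
    # Stand-in for the model emitting a search query instead of an answer.
    return response.startswith("SEARCH:")

def external_search(query):
    # Stand-in for the external information-retrieval system.
    return f"results for {query}"

def reply(prompt, max_steps=3):
    """Pick the highest-rated candidate; if the model asks for a search,
    feed the results back in and generate again."""
    best = ""
    for _ in range(max_steps):
        candidates = generate_candidates(prompt)
        best = max(candidates, key=rate)
        if not needs_search(best):
            return best  # final response
        results = external_search(best.removeprefix("SEARCH:"))
        prompt = f"{prompt}\n{results}"  # condition the next round on results
    return best
```

With these stubs, `reply("hi")` simply returns the first sampled candidate, since no stub ever requests a search; the point is the control flow, not the stub outputs.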

What they’re saying: Many members of the AI community expressed skepticism of Lemoine’s claims via social media. Melanie Mitchell, professor at the Santa Fe Institute, said, “It's been known for forever that humans are predisposed to anthropomorphize even with only the shallowest of signals…Google engineers are human too, and not immune.”
Why it matters: The propensity to anthropomorphize machines is so strong that it has a name: the Eliza effect, named after a mid-1960s chatbot that persuaded some users it was a human psychotherapist. Beyond that, the urge to fall in love with one’s own handiwork is at least as old as the ancient Greek story of Pygmalion, a fictional sculptor who fell in love with a statue he created. We must strive to strengthen our own good judgment even as we do the same for machines.
We’re thinking: We see no reason to believe that LaMDA may be sentient. While debates over machine sentience are a distraction from the important work of harnessing AI to solve serious, persistent problems, they are a reminder to approach extraordinary claims with appropriate skepticism.
