Can an AI System Be Sentient? Ask a Philosopher


Dear friends,

A Google engineer recently announced that he believes a language model is sentient. I’m highly skeptical that any of today’s AI models are sentient. Some reporters, to their credit, also expressed skepticism. Still, I worry that widespread circulation of sensationalistic reports on this topic will mislead many people. (You'll find more about it in this issue of The Batch.)

The news does raise an interesting question: How would we know if an AI system were to become sentient?

As I discussed in an earlier letter, whether an AI system is sentient (able to feel) is a philosophical question rather than a scientific one. A scientific hypothesis must be falsifiable. Scientific questions about AI include whether a system can beat a human chess champion, accurately translate language, drive a car safely, or pass the Turing Test. These are testable questions.

On the other hand, we have no clear test for whether a system is sentient, conscious (aware of its internal state and external surroundings), or generally intelligent (able to reason across a wide variety of domains). These questions fall in the realm of philosophy instead of science.

Here are some examples of philosophical questions. Even though we haven't devised ways to quantify many of these terms, these questions are enduring and important:

  • Is the nature of humankind good or evil?
  • What is the meaning of life?
  • Is a tree/insect/fish conscious?

By the same token, many important questions that arise in discussions about AI are philosophical:

  • Can AI be sentient? Or conscious?
  • Can an AI system feel emotions?
  • Can AI be creative?
  • Can an AI system understand what it sees or reads?

I expect that developing widely accepted tests for things like sentience and consciousness would be a Herculean, perhaps impossible, task. But if any group of scientists were to succeed in doing so, it would help put to rest some of the ongoing debate.

I fully support work toward artificial general intelligence (AGI). Perhaps a future AGI system will be sentient and conscious, and perhaps not — I’m not sure. But unless we establish clear benchmarks for sentience and consciousness, I expect it will be very difficult ever to conclude whether an AI system has reached these milestones.

Keep learning!
