Bot Therapy and Informed Consent

Discord's Kokobot triggers an ethics controversy.

Screen capture of Kokobot having a conversation with a patient

An experiment in using chatbots to dispense mental-health counseling raised questions about ethics.

What’s new: Rob Morris, cofounder and CEO of Koko, a nonprofit provider of emotional-support services, shared details of an informal experiment in which his organization provided advice generated by a large language model to users without their explicit knowledge or consent.

How it works: The company’s peer-counseling service, known as Kokobot, helps social networks connect users who request counseling to other users who wish to provide it. A prospective counselor receives an anonymous message seeking help, advice, or encouragement, and the service shares the counselor’s response anonymously with the person who requested it.

  • On the social platform Discord, counselors could either write their own response or craft one “With Koko.” Selecting the latter option queried a version of OpenAI’s GPT-3 language model fine-tuned to respond supportively to mental-health-related inquiries, Morris explained in a video demo (a rough sketch of this kind of flow appears after this list). The counselor could send GPT-3’s response, edit it, or discard it. If sent, the response included a disclaimer stating that it was “written in collaboration with Kokobot.”
  • Koko offered counselors the option to let GPT-3 draft responses to 30,000 posts. Counselors accepted the offer about half of the time. Roughly 4,000 users received advice crafted by the model in whole or in part.
  • Users rated responses crafted “with Koko” significantly higher than responses written by humans alone, Morris said in a tweet. Counselors who accepted AI assistance responded twice as fast as those who didn’t.
  • Users stopped rating Kokobot-crafted messages highly once they learned the messages were not entirely human-made, Morris said. The company ended the experiment at that point.
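For readers curious about the mechanics, here is a minimal sketch of a human-in-the-loop suggestion flow like the one described above. Koko has not published its implementation, so everything here is illustrative: it assumes the OpenAI Python library’s completion endpoint, and the model ID and helper functions are hypothetical placeholders.

```python
# Illustrative sketch only; Koko's actual implementation is not public.
# Assumes the OpenAI Python library (v0.x-style Completion API) and a
# hypothetical fine-tuned GPT-3 model ID.
from typing import Optional

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder


def draft_with_koko(help_request: str) -> str:
    """Generate a suggested reply for a counselor to send, edit, or discard."""
    response = openai.Completion.create(
        model="davinci:ft-koko-support",  # hypothetical fine-tuned model ID
        prompt=f"Someone wrote: {help_request}\nA supportive reply:",
        max_tokens=150,
        temperature=0.7,
    )
    return response.choices[0].text.strip()


def counselor_reviews(suggestion: str) -> Optional[str]:
    """Stand-in for the human step: the counselor sends, edits, or discards."""
    # In the real flow, a person sees the draft and decides what to do with it.
    return suggestion  # e.g., accept the draft as-is


request = "I've been feeling really isolated lately."
draft = draft_with_koko(request)
final = counselor_reviews(draft)
if final:
    # Per Morris, sent responses carried a disclaimer like the one below.
    print(final + "\n\n(written in collaboration with Kokobot)")
```

The key design point is that the model only proposes text; a human counselor remains the gatekeeper who decides whether anything generated ever reaches the person asking for help.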

The backlash: Experts questioned the ethics of Koko’s actions.

  • John Torous, a psychiatrist at Beth Israel Deaconess Medical Center in Boston, told Gizmodo that Koko had not properly disclosed the experiment’s nature to people who sought mental-health support, an especially vulnerable population.
  • Responding to criticism that Koko had not followed the ethical principle known as informed consent, Morris said the experiment was exempt from such requirements because participants opted in, their identities were anonymized, and an intermediary evaluated the responses before they were shared with people who sought help.

Behind the news: Several companies that use chatbots to support mental health explicitly inform users that the conversation is automated, including Replika, Flow, and Woebot (a portfolio company of AI Fund, which Andrew leads). Some mental health experts question whether chatbots provide lasting benefits and point to the need for more independent studies that demonstrate their efficacy.

Why it matters: AI-powered therapy could be a low-cost alternative for people who seek mental-health counseling, especially in parts of the world where psychiatrists are few.

Moreover, interacting with a computer may help patients feel comfortable sharing issues they wouldn’t discuss with a doctor. However, therapy requires trust, and informal experiments like Koko’s could alienate people who stand to benefit.

We’re thinking: Large language models are becoming more capable by the month, leading developers to turn them loose on all manner of problems. We encourage experimentation, especially in healthcare, but experiments on human subjects must meet the highest ethical standards.
