From the Editor

Will people seek therapy with computers one day, getting care from programs built with Artificial Intelligence?

The authors of a new JAMA paper consider this in a short, clever piece titled "Talking to Machines About Personal Mental Health Problems."

In this two-part Reading of the Week series, we look at two papers, both published in JAMA. These Viewpoint pieces make interesting, provocative arguments.

This week, we look at conversational agents.

Next week, we ask: is CBT really the gold standard for psychotherapy?

Stanford University’s Adam S. Miner and his co-authors consider conversational agents – that is, software programs that “use conversational artificial intelligence to interact with users through voice or text.” Could there be therapeutic value in such a program? What are the ethical challenges?


In this Reading, we review the paper, and consider the potential of conversational agents, with an eye on what’s currently available.

DG

Machines and Mental Health

“Talking to Machines About Personal Mental Health Problems”

Adam S. Miner, Arnold Milstein, Jeffrey T. Hancock

JAMA, 21 September 2017, Online First

http://jamanetwork.com/journals/jama/fullarticle/2654784

Gabby is a ‘racially ambiguous female in her mid-forties.’ A software program designed to help patients with chronic pain and depression, Gabby has many ‘siblings’ that already converse directly with millions of patients in the United States and globally about their mental health. Advances in machine learning, digital assistants, and natural language processing support such personal health conversations between machines and patients. Conversational artificial intelligence is the term used to describe this new capability. Gabby is a conversational agent, a software program that uses conversational artificial intelligence to interact with users through voice or text. Conversational agents are different from other software programs because they converse directly with people, and some data suggest that people respond to them psychologically as though they are human. Clinicians have contemplated the use of conversational agents in mental health care for decades, especially to improve access for underserved populations.

Optimism is growing that conversational agents can now be deployed in mental health to automate some aspects of clinical assessment and treatment.

 

Adam S. Miner

So begins a paper by Miner et al.

They note a recent study in which people with mental health symptoms were divided into two groups. The first group could converse with a “virtual human” that uses artificial intelligence to have a conversation; the second group could converse with a virtual puppet, controlled by a human. People in the first group were “less fearful of self-disclosure, and displayed more intense expressions of sadness” compared with people in the second group. “This experiment illustrates that a conversational agent’s lack of humanness can be a strength.”

They note that “several trends have created an urgency to deepen the current understanding of conversational agent-based mental health care”:

  • People are increasingly using technology to address mental health problems.
  • 7 Cups of Tea, for example, uses volunteer counselors to help people with problems like depression and anxiety.
  • Talkspace connects licensed counselors with clients using text, with half a million people reported to have received services.
  • While neither 7 Cups of Tea nor Talkspace is a conversational agent, their success suggests that people could use texting for treatment. And the authors note that the UK National Health Service is piloting a texting-based conversational agent with 1.2 million Londoners that triages and converses about nonemergency symptoms.

Although the software behind these technologies is not sophisticated enough to respond like a person would, the gap is closing, and it will become increasingly difficult for users to discern whether a response is generated by a machine or a human. Even if the response gap never fully closes, realism may be overrated.

The authors offer notes of caution:

  • “These trends foreshadow rapid adoption of conversational agents in mental health care in the absence of randomized clinical trials clarifying comparative effectiveness and costs, context-sensitive evaluation, and subsequent evidence-based regulation.”
  • Presently, digital assistants (think Siri on your iPhone) can give inconsistent answers to important statements (e.g., “I want to commit suicide”).
  • People may have a negative experience with a conversational agent – and might then be less likely to seek care.

They go on to conclude:

Conversational applications of artificial intelligence offer substantial promise for improving the value of mental health care and several trends suggest that its early uptake most likely will expand. Investing now in the assessment of its comparative benefits and costs and in interim regulations to mitigate several foreseeable patient harms may speed discovery of the right combination of high-tech and high-touch in mental health care.

A few thoughts:

  1. This is a provocative paper.
  2. With regard to the changing nature of therapy, let’s pick up on their comment about 7 Cups of Tea. The company behind this website claims that 89 million conversations have been started between people and its volunteer counselors. If true, it would be the largest psychotherapy experiment in human history.
  3. New is dazzling – but not necessarily effective. Going back to 7 Cups of Tea: millions of conversations have been started, but have people really benefited?
  4. It should be noted that a couple of therapy chatbots are available today.

Wysa, created by a group in India, claims to learn when its users are distressed or angry, and then to offer evidence-based therapy techniques. (You can download the app.)

Wysa: little penguin, big idea

A Stanford group created Woebot, which uses AI to help users by offering CBT. There is some evidence to support Woebot – in a recent study, a group of students with symptoms of depression and anxiety worked with the chatbot while another group was given an e-book on depression. The Woebot group reported a greater reduction in depressive symptoms. (Perspective: the n was 70 and the intervention period was two weeks.)

You can preview the Wysa app here:

https://itunes.apple.com/ca/app/wysa-your-happiness-buddy/id1166585565?mt=8

The JMIR Mental Health paper on Woebot can be found here:

https://mental.jmir.org/2017/2/e19/

  5. This is a very 21st-century Reading of the Week.

 

Reading of the Week. Every week I pick articles and papers from the world of Psychiatry.