From the Editor

“Compared with treatment of physical conditions, the quality of care of mental health disorders remains poor, and the rate of improvement in treatment is slow. Outcomes for many mental disorders have stagnated or even declined since the original treatments were developed.”

Are there two sentences more disappointing to read? One in five Canadians will experience a mental health problem this year – and yet we have basic problems with quality (and access).

Could AI and machine learning help?

In the first selection, we consider a new JAMA Psychiatry paper which opens with the two sentences above. The University of Cambridge’s Michael P. Ewbank and his co-authors don’t simply bemoan the status quo but seek to change it – they “developed a method of objectively quantifying psychotherapy using a deep learning approach to automatically categorize therapist utterances from approximately 90 000 hours of [internet-delivered CBT]…” In other words, by breaking therapy down into a couple of dozen techniques and then employing machine learning, they attempt to match techniques with outcomes (patient improvement and engagement), with an eye on finding what works and what doesn’t. And, yes, you read that right: they drew on 90 000 hours of therapy. They show: “factors specific to CBT, as well as factors common to most psychotherapies, are associated with increased odds of reliable improvement in patient symptoms.”

Can computers (and machine learning) improve human therapy?

In the second selection, we consider the comments of University of British Columbia President Santa Ono about school and the stresses of school. Ono speaks about his own struggle with depression. “I’ve been there at the abyss.”

DG

 

“Quantifying the Association Between Psychotherapy Content and Clinical Outcomes Using Deep Learning”

Michael P. Ewbank, Ronan Cummins, Valentin Tablan, Sarah Bateup, Ana Catarino, Alan J. Martin, Andrew D. Blackwell

JAMA Psychiatry, 22 August 2019

https://jamanetwork.com/journals/jamapsychiatry/fullarticle/2748757

Compared with treatment of physical conditions, the quality of care of mental health disorders remains poor, and the rate of improvement in treatment is slow. Outcomes for many mental disorders have stagnated or even declined since the original treatments were developed. A primary reason for the gap in quality of care is the lack of systematic methods for measuring the delivery of psychotherapy. As with any evidence-based intervention, to be effective, treatment needs to be delivered as intended (also known as treatment integrity), which requires accurate measurement of treatment delivered. However, while it is relatively simple to monitor the delivery of most medical treatments (eg, the dosage of a prescribed drug), psychotherapeutic treatments are a series of private discussions between the patient and clinician. As such, monitoring the delivery of this type of treatment to the same extent as physical medicine would require infrastructure and resources beyond the scope of most health care systems.

Co-author Andy Blackwell

So begins a new paper by Ewbank et al.

Here’s what they did:

  • Data were obtained from patients receiving internet-delivered CBT for the treatment of a mental health disorder between June 2012 and March 2018.
  • “Reliable improvement was calculated based on 2 severity measures: Patient Health Questionnaire (PHQ-9) and Generalized Anxiety Disorder 7-item scale (GAD-7), corresponding to depressive and anxiety symptoms respectively…”
  • “We defined a total of 24 feature categories, informed by the CBT competences framework and the Revised Cognitive Therapy Scale. A research psychologist (M.P.E.) annotated 290 therapy session transcripts, under the guidance of a qualified clinical therapist (S.B.), tagging each therapist text-message utterance as belonging to 1 (or more) of 19 features, with 5 features tagged using regular expressions…”
  • The 24 therapy feature categories included review homework, mood check, and therapeutic praise.
  • “A deep learning model was trained on the annotated utterances and then used to automatically classify all utterances in the full data set into 1 or more of 24 feature categories.” (A toy sketch of this kind of multi-label classification follows this list.)
  • The quantity of each feature was then statistically associated with clinical outcomes (reliable improvement and engagement).
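To make the classification step concrete, here is a minimal sketch of multi-label utterance classification in Python. It is not the authors’ model – the architecture, example utterances, and training details below are illustrative assumptions – but the category names echo the paper, and the core idea is the same: each therapist utterance can belong to one or more feature categories.

```python
# Toy sketch only: multi-label classification of therapist utterances into
# feature categories. The data, bag-of-words encoder, and small network are
# illustrative; they are not the architecture used in the paper.
import torch
import torch.nn as nn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical annotated utterances: each maps to one or more categories.
utterances = [
    "Great work completing your thought record this week!",
    "Let's agree on what we want to cover in today's session.",
    "How has your mood been since we last spoke?",
]
labels = [
    ["therapeutic_praise", "review_homework"],
    ["set_agenda"],
    ["mood_check"],
]

vectorizer = CountVectorizer()
X = torch.tensor(vectorizer.fit_transform(utterances).toarray(), dtype=torch.float32)
binarizer = MultiLabelBinarizer()
Y = torch.tensor(binarizer.fit_transform(labels), dtype=torch.float32)

# One output logit per category; BCEWithLogitsLoss treats each category as an
# independent yes/no decision, which is what "1 or more of 24 categories" needs.
model = nn.Sequential(
    nn.Linear(X.shape[1], 64),
    nn.ReLU(),
    nn.Linear(64, Y.shape[1]),
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), Y)
    loss.backward()
    optimizer.step()

# Classify a new utterance: keep every category whose probability exceeds 0.5.
new = torch.tensor(
    vectorizer.transform(["Did you manage to do the homework we set?"]).toarray(),
    dtype=torch.float32,
)
probs = torch.sigmoid(model(new))
print([c for c, p in zip(binarizer.classes_, probs[0]) if p > 0.5])
```

In the study, a model of this general kind – trained on utterances from 290 annotated transcripts – was then used to label every therapist utterance across roughly 90 000 hours of therapy.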

Here’s what they found:

  • “The initial sample comprised a total of 90 934 session transcripts taken from 17 572 patients, with a reliable improvement rate of 63.4% and IAPT engagement rate of 87.3%.” (IAPT is England’s Improving Access to Psychological Therapies program.)
  • Factors associated with improvement: “The results revealed increased quantities of ‘therapeutic praise’ (OR, 1.21; 95% CI, 1.15-1.27), ‘planning for the future’ (OR, 1.12; 95% CI, 1.06-1.19), ‘perceptions of change’ (OR, 1.11; 95% CI, 1.06-1.16), ‘change methods’ (OR, 1.11; 95% CI, 1.06-1.17), ‘set agenda’ (OR, 1.08; 95% CI, 1.02-1.14), ‘elicit feedback’ (OR, 1.06; 95% CI, 1.02-1.11), ‘give feedback’ (OR, 1.05; 95% CI, 1.00-1.10), and ‘review homework’ (OR, 1.04; 95% CI, 1.00-1.09) were all associated with greater odds of reliable improvement.” (See the figure below for the odds ratios; a toy illustration of how such odds ratios are computed follows the figure.)
  • Factors associated with first-session engagement: “We found that ‘change methods’ (OR, 1.20; 95% CI, 1.12-1.27), ‘elicit feedback’ (OR, 1.09; 95% CI, 1.03-1.16), ‘set homework’ (OR, 1.09; 95% CI, 1.03-1.16), ‘arrange next session’ (OR, 1.17; 95% CI, 1.10-1.24), ‘therapeutic thanks’ (OR, 1.13; 95% CI, 1.06-1.20), and ‘formulation’ (OR, 1.10; 95% CI, 1.04-1.17) were associated with increased odds of IAPT engagement.”

[Figure: odds ratios for reliable improvement and engagement, by therapy feature]
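Where do numbers like “OR, 1.21; 95% CI, 1.15-1.27” come from? In a logistic regression, the odds ratio for a feature is the exponential of its coefficient, and the 95% CI is the exponential of the coefficient’s confidence interval. The sketch below illustrates this on entirely synthetic data – it is not the authors’ analysis, and the feature names merely echo the paper.

```python
# Illustration only: reading odds ratios off a logistic regression.
# All data here are synthetic; the feature names echo the paper.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Hypothetical per-patient quantities of two therapy features.
df = pd.DataFrame({
    "therapeutic_praise": rng.poisson(3, n),
    "review_homework": rng.poisson(2, n),
})

# Synthetic binary outcome (reliable improvement), loosely driven by the features.
linear = -0.5 + 0.19 * df["therapeutic_praise"] + 0.04 * df["review_homework"]
df["reliable_improvement"] = rng.binomial(1, 1 / (1 + np.exp(-linear)))

# Fit the logistic regression and convert coefficients to odds ratios.
X = sm.add_constant(df[["therapeutic_praise", "review_homework"]])
fit = sm.Logit(df["reliable_improvement"], X).fit(disp=0)

odds_ratios = np.exp(fit.params)    # OR = exp(coefficient)
conf_int = np.exp(fit.conf_int())   # 95% CI on the odds-ratio scale
summary = pd.concat([odds_ratios.rename("OR"), conf_int], axis=1)
summary.columns = ["OR", "2.5%", "97.5%"]
print(summary)
```

In a model like this, an odds ratio of 1.21 means that each additional unit of the feature is associated with roughly 21% greater odds of the outcome, all else being equal.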

They conclude:

At present, the detailed monitoring of therapist performance requires expensive and time-consuming procedures. We believe that this work represents a first step toward a practicable approach for quality controlled behavioral health care. Such monitoring could help arrest therapist drift, ie, the failure to deliver treatments a therapist has been trained to deliver, which may be one of the biggest factors contributing to poor delivery of treatment. Monitoring may help reverse the lower improvement rates observed in more experienced therapists.

A few thoughts:

  1. This is a good paper.
  2. This is an important paper – the authors try to find successful psychotherapy techniques by employing machine learning. The use of technology here isn’t about replacing human therapists so much as attempting to help them succeed.
  3. Taking a look at the list, we see, for example, that reviewing homework bolstered outcomes and setting homework in the first session helped with engagement – which makes sense. As the authors note: “these findings accord with evidence that out-of-session homework is important in determining outcomes in CBT.”
  4. Is there something unique about CBT (as opposed to other psychotherapies)? This question has been openly debated. The authors note: “A central issue in psychotherapy research is whether different approaches work through specific factors or factors that are common to most psychotherapies. Here, we find a positive association between improvement and/or IAPT engagement for each of 6 techniques identified as distinguishing CBT from psychodynamic therapy.”
  5. Is the study too simplistic? Therapy is broken down into just 24 feature categories, and the therapy isn’t the usual face-to-face CBT but CBT delivered online. Still, the authors drew on 90 000 hours of therapy, and the categories are thoughtfully defined.
  6. Like all studies, this one has limitations. The authors note: “it is not possible to determine whether a therapeutic feature is applied in an appropriate manner or whether a therapist adheres to the CBT protocol.”

 

“‘I’ve been there at the abyss’: UBC president shares personal mental health journey”

Simon Little

Global News, 28 August 2019

https://globalnews.ca/news/5826788/ubc-president-mental-health/

As students prepare to head back to school next week, they’re prepping for assignments, exams – and the pressure-cooker of stress that often comes along with them.

As they do so, UBC’s president is also speaking out, telling students that they’re not alone by sharing his own story of mental health challenges.

Santa Ono was just 14 the first time he tried to take his own life, due to what he said were ‘feelings of inadequacy’ from being a middle sibling between two child prodigies.

Ono never told his family about the unsuccessful attempt, and made a second, more serious attempt years later as a young academic at Johns Hopkins University.

Santa Ono

The article quotes the university president on his mental health struggles:

There are moments that were very happy for me being at university with other students my age, and there were other days where I would just be in my bed alone, not having the energy or the will to get up to feed myself or to shower.

He notes that the decision to get help was critical – as was his decision to use medications and engage in therapy.

The article also describes Ono’s efforts as UBC president to increase access to counselling services.

The article closes with Ono encouraging people to get help:

There’s nothing to be ashamed of. It’s something that’s very common, and that they can even reach out to me if they’d like.

A few thoughts:

  1. There are many “back-to-school” articles that appear at this time of year; this Global piece is one of my favourites.
  2. I’ve never met Santa Ono – but I’d like to.
  3. A quick reminder: while machine learning and other technological advances may or may not transform care, the willingness of people to speak out about mental illness remains crucially important. Stigma has faded, but it continues to impede mental health care.

 

Reading of the Week. Every week I pick articles and papers from the world of Psychiatry.