From the Editor

In March, medical assistance in dying will be expanded in Canada to include those whose sole underlying medical condition is mental illness. Not surprisingly, many people feel strongly about the change: some see it as a natural extension of basic rights; others argue it is a profound mistake.

What do patients and family members think? How does it relate to their views of suicide in general? Lisa Hawke (of the University of Toronto) and her co-authors attempt to answer these questions in a new Canadian Journal of Psychiatry paper. They conducted a qualitative analysis, interviewing 30 people with mental illness and 25 family members about medical assistance in dying when mental illness is the sole underlying medical condition (MAiD MI-SUMC). “Participants acknowledge the intersections between MAiD MI-SUMC and suicidality and the benefits of MAiD MI-SUMC as a more dignified way of ending suffering, but also the inherent complexity of considering [such] requests in the context of suicidality.” We consider the paper and its implications.

In the second selection, Dr. Scott Monteith (of Michigan State University) and his co-authors write about artificial intelligence and misinformation in a new British Journal of Psychiatry paper. They note the shift in AI – from predictive models to generative AI – and its implications for patients. “Misinformation created by generative AI about mental illness may include factual errors, nonsense, fabricated sources and dangerous advice.”

And in the third selection, writer Shannon Palus considers the rise of “mental health merch” – clothing and other merchandise that reference mental health problems and their treatments, including a pricey sweatshirt with “Lexapro” (the US brand name for escitalopram) written across the front. In Slate, Palus explains her coolness toward the trend. “As a person who struggles with her own mental health, as a Lexapro taker – well, I hate this trend, honestly! I find it cloying and infantilizing.”

Note that there will be no Reading next week.

DG

Selection 1: “Medical Assistance in Dying for Mental Illness as a Sole Underlying Medical Condition and Its Relationship to Suicide: A Qualitative Lived Experience-Engaged Study”

Lisa D. Hawke, Hamer Bastidas-Bilbao, Vivien Cappe, et al.

The Canadian Journal of Psychiatry, 26 October 2023, Online First

Assisted dying is legally permitted in a number of jurisdictions, such as Canada, the Netherlands, Belgium, Luxembourg, Switzerland, and several American states. To be eligible for medical assistance in dying (MAiD) in Canada, the requester has to be aged 18+, eligible for government-funded health services, capable of making healthcare decisions, make a voluntary request, have a ‘grievous and irremediable’ medical condition, give informed consent, and have been informed of the means available to relieve their suffering… Mental illness cannot be considered a serious or incurable illness, disease, or disability for the purposes of MAiD eligibility, until March 2024. These changes and potential future legislative changes will open MAiD eligibility to a new population and raise questions requiring reflection and deliberation.

So begins a paper by Hawke et al.

Here’s what they did:

  • “This qualitative study uses reflexive thematic analysis and is grounded in a contextualist epistemology to highlight the diversity of knowledge that patients and families create through their situated actions in lived experience contexts.”
  • Individuals with mental illness and family members “participated in interviews examining perspectives on MAiD MI-SUMC and its relationship with suicide.” 
  • “Audio recordings were transcribed and analysed…” 
  • “People with lived experience were engaged in the research process as team members.”

Here’s what they found:

  • 30 individuals with mental illness and 25 family members were interviewed.
  • Demographics. Those with mental illness: average age of 41.8 years; half were female; 60% were White; the majority had depressive disorders (63.3%). Family members: average age of 47.5 years; 80% were female; 72% were White.
  • Four themes were found. See below.

Deciding to die is an individual choice to end the ongoing intolerable suffering of people with mental illness.

“Participants considered the decision to die by people with mental illness to be founded on a wish to end an ongoing suffering. They described the intensity of ongoing suffering in relation to mental illness; the choice to die was described as a response to continuous emotional pain that seems impossible to escape from while living… Among patients, in particular, it was mentioned that dying to end suffering is a choice that individuals can make for themselves.”

MAiD MI-SUMC is the same as suicide because the end result is death, although suicide can be more impulsive.

“Participants noted that MAiD MI-SUMC and suicide are the same since the end result is death: ‘So I definitely believe that MAiD and suicide are one and the same, because it is somebody who is in this case intentionally seeking out a procedure to end their own life.’ (Patient #4) However, participants noted that suicide can be more impulsive.”

MAiD MI-SUMC is a humane, dignified, safe, nonstigmatized alternative to suicide.

“While not differentiating between MAiD and suicide, participants considered MAiD MI-SUMC to be a dignified alternative to suicide. Participants expressed that MAiD MI-SUMC is a safer and painless way to die, due to the medical management and the social accompaniment available: ‘There’s more dignity to MAiD, I think, than suicide for sure, and the chance to, you know, go with your loved ones around you.’ (Family Member #3)… Participants felt that MAiD MI-SUMC does not carry the stigma of suicide.”

Suicidality should be considered when MAiD MI-SUMC is requested, but suicidality’s role is multifaceted given its diverse manifestations.

“While many participants thought that suicidality should be discussed as part of the MAiD MI-SUMC assessment process, they highlighted the complexity of its role in decision-making given that suicidal ideation, plans, and intent are diverse manifestations that could be assessed differently… Some participants argued that the presence of suicidality points to the need for a MAiD MI-SUMC request to be approved, while others felt that it should not be a key factor in the outcome of an MAiD MI-SUMC request…”

A few thoughts:

1. This is a good paper on an important topic, drawing on unique data. It has clear relevance in Canada (obviously) but – with MAiD increasingly discussed across North America and around the world – also beyond our borders.

2. The core findings: “While participants largely equated MAiD MI-SUMC to suicide, since both resulted in death, MAiD MI-SUMC was seen as a more dignified alternative. They also noted less impulsivity and less stigma associated with MAiD MI-SUMC compared to suicide.”

3. Patient and family perspectives are often missing from discussions about MAiD for mental illness. The authors have done a real service.

4. Like all studies, there are limitations. They note several, including: “Despite efforts to maximize diversity, some perspectives may have been missed.” They suggest further research is needed with groups such as Indigenous peoples.

5. While there is debate in political circles, this study finds that, from the perspective of patients and families, suicide and MAiD are not seen as profoundly different. (!!) What are the policy implications? How does this change suicide prevention efforts?

The full CJP paper can be found here:

https://journals.sagepub.com/doi/10.1177/07067437231209658

Selection 2: “Artificial intelligence and increasing misinformation”

Scott Monteith, Tasha Glenn, John R. Geddes, et al.

The British Journal of Psychiatry, 26 October 2023, Online First

Although there is widespread excitement about the creative successes and new opportunities resulting from the recent transformative technological advancements in artificial intelligence (AI), one result is increasing patient exposure to medical misinformation. We now live in an era of synthetic media. Text, images, audio and video information can be created or altered by generative AI models based on the data used to train the model. The commercial use of automated content produced by generative AI models, including large language models (LLMs) such as ChatGPT, GPT-3 and image generation models, is expanding rapidly… But generative AI models such as ChatGPT can be unreliable, making errors of both fact and reasoning that can be spread on an unprecedented scale. The general public can easily get incorrect information from generative AI on any topic, including medicine and psychiatry.

So begins an editorial by Monteith et al.

Introduction to generative AI

“The focus of traditional AI is on predictive models to perform a specific task, such as estimate a number, classify data or select between a set of options. In contrast, the focus of generative AI is to create original content. For a given input, rather than one correct answer based on the model’s decision boundaries, generative AI models produce text, audio and visual outputs that can easily be mistakenly attributed to human authors… 

“Generative AI can create the illusion of intelligence. Although at times the output of generative AI models can seem astonishingly human-like, they do not understand the meaning of words and frequently make errors of reasoning and fact… The many types of error from generative AI models include factual errors, inappropriate or dangerous advice, nonsense, fabricated sources and arithmetical errors… One example of inappropriate or dangerous advice is a chatbot recommending calorie restriction and dieting after being told the user has an eating disorder.”

Attitudes to generative AI

“It is easy for the general public to anthropomorphise the use of LLMs, given the simplicity of conversing and the authoritative-sounding responses. The media routinely describe LLMs using words suggestive of human intelligence, such as ‘thinks’, ‘believes’ and ‘understands’. These portrayals generate public interest and trust, but also downplay the limitations of LLMs that statistically predict word sequences based on patterns learned from the training data. Researchers also anthropomorphise generative AI, referring to undesirable LLM text errors as ‘hallucinations’.”

Intentional spread of misinformation

“Without having to rely on human labour, the automated generation of misinformation drives down the cost of creating and disseminating misinformation. Misinformation created by the generative AI models may be better written and more compelling than that from human propagandists. The spread of online misinformation in all areas of medicine is particularly dangerous.”

Unique ethical issues

“There are privacy issues related to the collection and use of personal and proprietary data for training models without permission and compensation. There are legal issues that include plagiarism, copyright infringement and responsibility for errors and false accusations in generative AI output.”

A few thoughts:

1. This is a timely paper.

2. ChatGPT, it turns out, isn’t just for travel tips and résumé writing. It’s clinically relevant, too, as our patients increasingly look to AI for health information. Is quality an issue? Misinformation? The authors argue yes and yes.

3. And they close with practical advice: “Psychiatrists should realise that patients may be obtaining misinformation and making decisions based on generative AI responses in medicine, and many other topics, that may affect their lives.”

4. AI has been considered in past Readings. Last week, we highlighted comments from a Quick Takes podcast with Dr. John Torous (of Harvard University). “We want to really be cautious because these are complex pieces of software.” That Reading can be found here: 

https://davidgratzer.com/reading-of-the-week/reading-of-the-week-self-stigma-also-chatgpt-mental-health-care-and-dr-catherine-hickey-on-the-opioid-crisis/

The full BJPsych editorial can be found here:

https://tinyurl.com/4c7r7j4z

Selection 3: “Honestly, I Can’t Stand the Lexapro Sweatshirt. And I take Lexapro!”

Shannon Palus

Slate, 1 November 2023 

Want a pink sweatshirt that says LEXAPRO across the chest and costs $80? You’re out of luck – they are sold out. Perhaps because they were recently highlighted in a piece that ran in the New York Times about the trend of ‘mental health merch.’ Other items featured include tank tops that say ‘Depressed but Make It Hot!’ and a $380 cashmere crew neck embroidered with ‘It’s okay to feel blue.’ Much of the clothing featured in the piece is sold by Eileen Kelly, who hosts a podcast called Going Mental. The idea, Kelly told the paper, is to de-stigmatize mental health struggles.

So begins an essay by Palus.

She is skeptical of mental health awareness campaigns more broadly. “All over the place, you can find people trying to make a buck off ‘mental health’ as a concept without really providing anything meaningful in exchange. (Maybe, if you are lucky, a portion of profits goes to some kind of mental health fund.) There was the widely critiqued #BellLetsTalk campaign, which promoted the concept of … talking about mental health … while also promoting … the phone company. Actress Selena Gomez has spoken widely about her own mental health struggles, including in a deep and searching documentary … but if users want an easier take-home message, they might try buying her Stay Vulnerable melting blush, or lip balms in shades like Empathy or Support.”

She doubts the expensive Lexapro sweatshirt raises much awareness. “I don’t think ‘awareness’ about anxiety and depression medication is a huge issue among the $80-sweatshirt-wearers of the country.” She also makes a personal comment: “Maybe part of my resistance, the core of it, is about how I feel about my own anxiety at this point in my life. Even as much as the drug has helped me, I do not, it turns out, wish to be on Lexapro. I would rather not need it! After a decade-and-change of being on it, I am, frankly, kind of weary. Maybe I’ll take it forever. Maybe I won’t. I don’t know. It is a tool with upsides and downsides, like any other. But I’m sort of appalled at the idea that taking Lexapro could be pink, fun, and worth spending $80 to personally advertise, on my body.”

She asks: “I mean, seriously, would you seek to profit off cancer like this?”

And she notes the over-simplicity of the message. “Medication doesn’t help everyone. People often have to try a couple before they find something that does. A third of people with major depressive disorder have treatment-resistant depression. The solution to truly tough mental health conditions – those are harder to put on a sweatshirt. And when it comes to medication, there are side effects…”

Is this all too clever? She argues it is. “Like goods with environmental and political messages, cute mental health shit is acclimating us to a world where we are, on balance, perhaps a little more miserable than we need to be. The planet is dying. We’re on our phones all the time. Depressed but make it hot.”

She closes by touching on her own experiences. “Being chronically sad and anxious? It sucks. It’s horrible. I want to get better, and to be better. And to make the world a better place by doing something other than (OK, OK, in addition to) ‘telling my story.’ But really, I don’t think we’re all going to be shopping our way to a healthier future.”

A few thoughts:

1. This is a solid essay – funny, honest, compelling.

2. The cancer line is great.

3. But is she right? To play the Devil’s advocate: isn’t it a sign of success that people try to sell merchandise about environmental concerns? Likewise, after years of being in the shadows, isn’t it good that mental health problems are now so widely discussed that people can sell sweatshirts touting antidepressants?

The full Slate essay can be found here:

https://slate.com/technology/2023/11/against-lexapro-sweatshirt-mental-health-merch.html

Reading of the Week. Every week I pick articles and papers from the world of Psychiatry.