Reading of the Week: Contingency Management for Stimulant Use – the New AJP Paper; Also, LLMs as Mental Health Providers and Kumpf on Her ED Visit

From the Editor

Her housing is unstable; major relationships have ended; she is deeply in debt. She presented to the emergency department hoping for help with her crystal methamphetamine addiction. “That drug just grabs you and holds you.” No medications have demonstrated efficacy for stimulant use disorder. But could contingency management be part of a meaningful plan for her recovery?

In the first selection, a paper published last month in The American Journal of Psychiatry, Lara N. Coughlin (of the University of Michigan) and her co-authors attempt to answer that question. They conducted a retrospective cohort study involving 1 481 patients who received contingency management and an equal number of controls, comparing outcomes over 12 months of follow-up. “This study provides the first evidence that contingency management use in real-world health care settings is associated with reduced risk of mortality among patients with stimulant use disorder.” We consider the paper and its implications.

In the second selection, Tony Rousmaniere (of Sentio University) and his co-authors examine large language models as mental health providers. In a timely paper for The Lancet Psychiatry, they weigh the regulatory and legal contexts. “LLMs have entered everyday use for mental health. Developers who embrace transparency and collaborative research can transform the mental health landscape and define the future of digital care for the better.”

And in the third selection, Emily A. Kumpf (of Johns Hopkins University) writes personally about her first-episode psychosis in Psychiatric Services. While she is grateful for the care she received in the emergency room, she was traumatized by the experience. “When I was restrained, every part of me genuinely believed the medications they were injecting into me were chemicals intended to kill me. My scream pierced through the hospital walls; I thought I was dying. To my surprise, I woke up the next morning.”

DG


Reading of the Week: Bipolar Disorder Drug Prescribing – Bad News? The New CJP Paper; Also, An AI Warning and Cannabis & Psychosis

From the Editor

There are more medication options than ever for the treatment of bipolar disorder. What are physicians prescribing? How often do we use lithium, arguably the best medication?

In the first selection, from The Canadian Journal of Psychiatry, Samreen Shafiq (of the University of Calgary) and her co-authors attempt to answer those questions in a new study. They drew on Alberta government data, including more than 130 000 individuals with bipolar disorder and more than nine million prescriptions. “Overall, we uncovered a concerning trend in the prescribing patterns for bipolar disorder treatment, with antidepressants and second-generation antipsychotics being prescribed frequently and a decline in prescribing of lithium and other mood stabilizers.” We consider the paper and its implications.

What would John Cade think?

In the second selection, Dr. Allen Frances (of Duke University) writes about AI chatbots and psychotherapy in The British Journal of Psychiatry. He notes their “remarkable fluency” and argues that there are clear benefits to AI psychotherapy. He also comments on dangers, and he doesn’t mince his words. “Artificial intelligence is an existential threat to our profession. Already a very tough competitor, it will become ever more imposing with increasing technical power, rapidly expanding clinical experience and widespread public familiarity.”

And in the third selection, Sophie Li (of the University of Ottawa) and her co-authors consider psychosis and cannabis in a concise CMAJ paper. They make several points, including: “The tetrahydrocannabinol (THC) content of cannabis has roughly quintupled in the past 2 decades, from around 4% in the 2000s to more than 20% in most legal dried cannabis in Canada by 2023.”

There will be no Reading next week.

DG


Reading of the Week: AI & Therapy

From the Editor

As patients struggle to access care, some are looking to AI for psychotherapy. Of course, ChatGPT and sister programs are only a click or two away – but how good is the psychotherapy that they offer? 

In a new American Journal of Psychotherapy paper, Dr. Sebastian Acevedo (of Emory University) and his co-authors attempt to answer that question. Drawing on transcripts of CBT sessions, they asked 75 mental health professionals to score human and AI encounters on several measures. So how did ChatGPT fare? “The findings suggest that although ChatGPT-3.5 may complement human-based therapy, this specific implementation of AI lacked the depth required for stand-alone use.” We consider the paper and its implications.

In the second selection, from JMIR Mental Health, Dr. Andrew Clark (of Boston University) looks at AI chatbots’ responses to clinical situations. Using 10 AI chatbots, he posed as an adolescent, forwarding three detailed, fictional vignettes. The results are surprising. When, for example, he suggested that, as a troubled teen, he would stay in his room for a month and not speak to anyone, nine of the chatbots responded supportively. “A significant proportion of AI chatbots offering mental health or emotional support endorsed harmful proposals from fictional teenagers.”

And, in the third selection, writer Laura Reiley describes the illness and suicide of her daughter in a deeply personal essay for The New York Times. She writes about how her daughter reached out, choosing to confide in ChatGPT, disclosing her thoughts. “ChatGPT helped her build a black box that made it harder for those around her to appreciate the severity of her distress.”

DG


Reading of the Week: VR-Assisted Therapy – the New Lancet Psych Paper; Also, Genetic Variations & Psychosis and Dr. Sundar on Patients With Answers

From the Editor

Even with medications, the voices tormented him. My patient explained that his every move was commented on.

In avatar therapy, patients engage audiovisual representations of their voices, with the goal of reducing their influence. In the first selection, a new paper from Lancet Psychiatry, Lisa Charlotte Smith (of the University of Copenhagen) and her co-authors look at a new form of avatar therapy with an immersive 3D experience. In this RCT, participants received either the therapy or enhanced usual care; the severity of auditory hallucinations was then measured at 12 weeks. “Challenge-VRT showed short-term efficacy in reducing the severity of auditory verbal hallucinations in patients with schizophrenia, and the findings support further development and evaluation of immersive virtual reality-based therapies in this population.” We consider the paper and its implications.

In the second selection, Dr. Mark Ainsley Colijn (of the University of Calgary) writes about psychosis and rare genetic variation. In a Canadian Journal of Psychiatry paper – part of the new Clinician’s Corner series – he offers suggestions for antipsychotic meds. “When providing care for individuals with psychosis occurring on the background of rare genetic variation, psychiatrists should take the time to educate themselves accordingly to ensure the safe and rational prescribing of antipsychotic medications in this population.”

And in the third selection, from JAMA, Dr. Kumara Raja Sundar (of Kaiser Permanente Washington) comments on patients who use ChatGPT. The author, a family doctor, notes that many physicians can be paternalistic – but he urges against that instinct. “If patients are arming themselves with information to be heard, our task as clinicians is to meet them with recognition, not resistance. In doing so, we preserve what has always made medicine human: the willingness to share meaning, uncertainty, and hope, together.”

DG


Reading of the Week: Something Old & Something New – With Papers from World Psychiatry and Lancet Psychiatry

From the Editor

He was keen to discuss his new therapist who introduced him to CBT concepts and noted his negative thoughts. The therapist was helpful and thoughtful – but not human. My patient was using an AI chatbot.

More and more patients are looking to AI for information and therapy. What to make of it all? And what is the role of other cutting-edge innovations? In the first selection, Dr. John Torous (of Harvard University) and his co-authors attempt to answer these questions in a new review for World Psychiatry. They focus on, yes, generative AI, as well as apps and virtual reality. The review is sparkling and comprehensive, stretching over 11 000 words and with 269 references. “New tools such as LLMs have rapidly emerged, while relatively older ones such as smartphone apps and virtual reality have quickly expanded. While each tool has offered evidence of clinical impact, broad real-world impact remains aloof for all.” We consider the paper and its implications.

Made with ChatGPT

In this week’s other selection, Dr. Robert M. Post (of The George Washington University) and his co-authors write about lithium in a new Lancet Psychiatry paper. They offer a fresh take on this old medication; they argue that it is a disease-modifying agent, like monoclonal antibodies for multiple sclerosis. “Conceptualisation of lithium as a disease-modifying agent might help to increase clinical use by doctors, especially early in the disease course to better serve our patients.”

DG


Reading of the Week: More Therapy, More Inequity? The New JAMA Psych Study; Also, Dr. Reimer on Living with Depression and Generative AI & Biases

From the Editor

What has been the most significant innovation in mental healthcare delivery in recent years? It wasn’t a new medication or therapy, but the widespread adoption of the webcam in 2020. Over the course of a handful of pandemic weeks, psychiatrists and therapists switched to virtual sessions, making it easier for people to receive care, including psychotherapy, unbound by geography, and thus addressing inequity – or, at least, that was the hope. As noted recently in The New York Times: “In the 1990s, teletherapy was championed as a way to reach disadvantaged patients living in remote locations where there were few psychiatrists. A decade later, it was presented as a more accessible alternative to face-to-face sessions, one that could radically lower barriers to care.”

So, are more people receiving psychotherapy? And has this new era of virtual care resulted in better access for all? Dr. Mark Olfson (of Columbia University) and his co-authors attempt to answer these questions in a new paper for JAMA Psychiatry. Drawing on the data of more than 90 000 Americans, they analyzed trends in outpatient psychotherapy in the US, finding more care than ever before. That said, they note greater inequity: “psychotherapy use increased significantly faster among several socioeconomically advantaged groups and that inequalities were evident in teletherapy access.” We consider the study and its implications.

As doctors, we often shy away from discussing our health, especially our mental health – even with our own physicians. This is particularly concerning because doctors have a higher suicide rate than the general population, yet fears of vulnerability, judgment, and stigma keep many of us silent. In this episode of Quick Takes, I sit down with Dr. Joss Reimer, president of the Canadian Medical Association, who openly shares her own experiences with depression, as a doctor and as a patient. “We all need help sometimes.”

And in the third selection, Matthew Flathers (of Harvard University) et al. analyze AI depictions of psychiatric diagnoses in a new paper for BMJ Mental Health. They tested two AI image models with different diagnoses and commented on the results. “Generative AI models acquire biases at every stage of their development – from societal prejudice in online training data, to the optimisation metrics and safety guidelines each developer puts in place. These layered biases persist even when their precise origins remain elusive.”

DG


Reading of the Week: Suicide Barriers & Suicide Prevention – the New CJP Study; Also, the Future of Education and AI & Diagnoses

From the Editor

The idea is simple: if certain locations attract suicidal individuals, making it harder for suicides to occur at those places can help. After much debate, in 2003, the City of Toronto did exactly that, constructing a suicide barrier for the Bloor Viaduct. Suicides immediately declined. 

What has been the long-term effect? And have the means of suicide deaths simply shifted? In the first selection, Dr. Mark Sinyor (of the University of Toronto) and his co-authors attempt to answer these questions. In a new study published in The Canadian Journal of Psychiatry, they drew on over two decades of data to analyze the impact of this suicide barrier. “Contrary to initial findings, these results indicate an enduring suicide prevention effect of the Bloor Viaduct suicide barrier.” We consider the study and its implications.

Pretty but lifesaving?

When it comes to medical education, much has changed over the years – including its name. What was once known as Continuing Medical Education (CME) is now referred to as Continuing Professional Development (CPD). But the changes go far beyond a simple rebranding. After all, the sheer volume of journal articles available today is staggering. How can you keep up? How can technology help? In the second selection, a new Quick Takes podcast, I speak with Dr. Sanjeev Sockalingam (of the University of Toronto) to explore the evolving world of CPD. “It took a pandemic to get us to realize that we could do so much online.”

Finally, in the third selection, from JAMA Network Open, Dr. Ethan Goh (of Stanford University) and his colleagues wonder if AI can assist physicians in making diagnoses. In an RCT, physicians were randomized to either conventional resources or those enhanced by access to AI (specifically, an LLM). “In this trial, the availability of an LLM to physicians as a diagnostic aid did not significantly improve clinical reasoning compared with conventional resources.”

DG


Reading of the Week: Care & Technology – Papers on Virtual Care and an App for Alcohol; Also, Dr. Reisman on ChatGPT & Bedside Manner

From the Editor

With COVID-19, mental health services were transformed in a matter of weeks when much care shifted to virtual. Today, we are all proficient in our webcams and familiar with terms like Zoom fatigue.

From a system perspective, we have unanswered questions: What’s the right amount of virtual care? When is it appropriate? In the first selection, Matthew Crocker (of the Canadian Institute for Health Information) and his co-authors focus on virtual versus in-person follow-up care after an ED visit in Ontario. Drawing on databases, they analyzed more than 28 000 such visits, wondering if the virtual option led to more adverse psychiatric outcomes. “These results support virtual care as a modality to increase access to follow-up after an acute care psychiatric encounter across a wide range of diagnoses.” We consider the paper and its implications.

Apps for mental health are increasingly popular; the mental health app market may be worth more than $24 billion by 2030, according to one estimate. In the second selection, from Internet Interventions, John A. Cunningham (of the University of Toronto) and his co-authors describe a new RCT involving 761 participants who were concerned about their drinking; they were given either an app with several intervention modules or just educational materials, then followed for six months. “The results of this trial provide some supportive evidence that smartphone apps can reduce unhealthy alcohol consumption.”

And in the third selection, Dr. Jonathan Reisman, an ED physician, writes about AI. In a provocative essay for The New York Times, he argues that physicians often rely on scripts to seem compassionate – such as when we deliver bad news. AI, he reasons, could do that well. “It doesn’t actually matter if doctors feel compassion or empathy toward patients; it only matters if they act like it. In much the same way, it doesn’t matter that A.I. has no idea what we, or it, are even talking about.”

DG


Reading of the Week: In-person vs. Remote CBT – the New CMAJ Study; Also, Treatment & Opioids in the US, and AI & Med School Exams

From the Editor

In the early days of the pandemic, patients connected with us virtually from their kitchens and bedrooms – and, yes, their closets and washrooms. But as COVID-19 fades, we may wonder: what care should be delivered virtually and what should be done in person?

In the first selection, Sara Zandieh (of McMaster University) and her co-authors examine remote versus in-person CBT in a new CMAJ study. They conducted a systematic review and meta-analysis with 54 randomized controlled trials and almost 5 500 participants, addressing both physical and mental problems. “Moderate-certainty evidence showed little to no difference in the effectiveness of in-person and therapist-guided remote CBT across a range of mental health and somatic disorders, suggesting potential for the use of therapist-guided remote CBT to facilitate greater access to evidence-based care.” We consider the paper and its implications.

In the second selection, Dr. Tae Woo Park (of the University of Pittsburgh) and his co-authors explore opioid use disorder (OUD) treatment. In their JAMA research letter, they compared medication and psychosocial treatments for OUD across the United States, surveying more than 17 000 facilities and analyzing the availability of evidence-based interventions like buprenorphine and contingency management. “Substance use treatment facilities reported significant gaps in provision of effective treatments for OUD.”

And in the third selection from CNBC, Dr. Scott Gottlieb and Shani Benezra (both of the American Enterprise Institute) describe their experiment: they tasked several large language models with answering questions from the USMLE Step 3. The average resident score is 75%; four of five AI programs surpassed that benchmark. “[These models] may offer a level of precision and consistency that human providers, constrained by fatigue and error, might sometimes struggle to match, and open the way to a future where treatment portals can be powered by machines, rather than doctors.”

There will be no Reading next week.

DG


Reading of the Week: Preventing Postpartum Depression in Pakistan – the New Nature Med Study; Also, Deaths of Despair and ChatGPT & Abstracts

From the Editor

Imagine that you are asked to design a program to prevent depression in a population at risk. Would you hire psychiatrists? Look to nurses? Tap the expertise of psychologists? All three?

In the first selection from Nature Medicine, Pamela J. Surkan (of Johns Hopkins University) and her co-authors describe a study that focused on prevention. As they worked in Pakistan – a nation with few mental health providers by Western standards – they chose to train lay people, teaching them to deliver CBT. In their single-blind, randomized controlled trial, 1 200 women who were pregnant and had anxiety (but not depression) were given enhanced usual care or CBT. “We found reductions of 81% and 74% in the odds of postnatal MDE and of moderate-to-severe anxiety…” We discuss the paper and its implications.

In the second selection, Joseph Friedman and Dr. Helena Hansen (both of the University of California, Los Angeles) look at deaths of despair in the United States in a research letter for JAMA Psychiatry. Their work builds on the idea that some deaths are related to the hopelessness of a person’s social or economic circumstance; past publications focused largely on White Americans. Friedman and Hansen drew on more than two decades of data, including ethnicity, from a US database, finding a different pattern: “Rising inequalities in deaths of despair among American Indian, Alaska Native and Black individuals were largely attributable to disproportionate early mortality from drug- and alcohol-related causes…”

A recent survey finds that psychiatrists see AI as potentially helpful with paperwork and diagnosing patients. But could AI help you keep up with the literature? In the third selection from Annals of Family Medicine, Dr. Joel Hake (of the University of Kansas) and his co-authors used ChatGPT to produce short summaries of studies, then evaluated their quality, accuracy, and bias. “We suggest that ChatGPT can help family physicians accelerate review of the scientific literature.”

DG
