Tag: ChatGPT

Reading of the Week: AI & Therapy

From the Editor

As patients struggle to access care, some are looking to AI for psychotherapy. Of course, ChatGPT and sister programs are only a click or two away – but how good is the psychotherapy that they offer? 

In a new American Journal of Psychotherapy paper, Dr. Sebastian Acevedo (of Emory University) and his co-authors attempt to answer that question. Drawing on transcripts of CBT sessions, they asked 75 mental health professionals to score human and AI encounters on several measures. So how did ChatGPT fare? “The findings suggest that although ChatGPT-3.5 may complement human-based therapy, this specific implementation of AI lacked the depth required for stand-alone use.” We consider the paper and its implications.

In the second selection, from JMIR Mental Health, Dr. Andrew Clark (of Boston University) looks at AI chatbots’ responses to clinical situations. Posing as an adolescent, he presented 10 AI chatbots with three detailed, fictional vignettes. The results are surprising. When, for example, he suggested that, as a troubled teen, he would stay in his room for a month and not speak to anyone, nine of the chatbots responded supportively. “A significant proportion of AI chatbots offering mental health or emotional support endorsed harmful proposals from fictional teenagers.”

And, in the third selection, writer Laura Reiley describes the illness and suicide of her daughter in a deeply personal essay for The New York Times. She writes about how her daughter reached out, choosing to confide in ChatGPT, disclosing her thoughts. “ChatGPT helped her build a black box that made it harder for those around her to appreciate the severity of her distress.”

DG

Continue reading

Reading of the Week: How Many Steps A Day to Avoid Depression? The New Lancet Study; Also, TikTok & Med Records and Lieberman on ChatGPT Therapy

From the Editor

How much exercise is enough to prevent illness?

In the first selection, Ding Ding (of The University of Sydney) and her co-authors attempt to answer that question in a new, clever study for The Lancet Public Health. They did a systematic review and meta-analysis involving 57 studies that looked at daily step count and health outcomes, including depression. “Although 10 000 steps per day can still be a viable target for those who are more active, 7 000 steps per day is associated with clinically meaningful improvements in health outcomes and might be a more realistic and achievable target for some.” We consider the paper and its implications.

5 787 more steps needed?

In the second selection, Isabelle Toler and Lindsey Grubbs (both of Case Western Reserve University) look at medical records and language in a paper for The New England Journal of Medicine. In a unique approach, they observe themes in the TikTok videos of patients who are frustrated by what their physicians have written about them. “In the context of a system of medical documentation in which patients have little power to shape their own narratives, clinicians should respect the channels they have chosen to use to share their stories and listen to the messages they convey.”

And in the third selection, psychologist Harvey Lieberman reflects on therapy and ChatGPT in an essay for The New York Times. As a therapist and an octogenarian, he is skeptical of the therapeutic aspects of ChatGPT – but, with use, he partly changes his mind. “I concluded that ChatGPT wasn’t a therapist, although it sometimes was therapeutic. But it wasn’t just a reflection, either.”

Note: there will be no Reading next week.

DG

Continue reading

Reading of the Week: VR-Assisted Therapy – the New Lancet Psych Paper; Also, Genetic Variations & Psychosis and Dr. Sundar on Patients With Answers

From the Editor

Even with medications, the voices tormented him. My patient explained that they commented on his every move.

In avatar therapy, patients engage with audiovisual representations of their voices, with the goal of reducing their influence. In the first selection, a new paper from Lancet Psychiatry, Lisa Charlotte Smith (of the University of Copenhagen) and her co-authors look at a new form of avatar therapy with an immersive 3D experience. In this RCT, participants received either enhanced usual care or the therapy; the severity of auditory hallucinations was then measured at 12 weeks. “Challenge-VRT showed short-term efficacy in reducing the severity of auditory verbal hallucinations in patients with schizophrenia, and the findings support further development and evaluation of immersive virtual reality-based therapies in this population.” We consider the paper and its implications.

In the second selection, Dr. Mark Ainsley Colijn (of the University of Calgary) writes about psychosis and rare genetic variation. In a Canadian Journal of Psychiatry paper – part of the new Clinician’s Corner series – he offers suggestions for antipsychotic meds. “When providing care for individuals with psychosis occurring on the background of rare genetic variation, psychiatrists should take the time to educate themselves accordingly to ensure the safe and rational prescribing of antipsychotic medications in this population.”

And in the third selection, from JAMA, Dr. Kumara Raja Sundar (of Kaiser Permanente Washington) comments on patients who use ChatGPT. The author, a family doctor, notes that many physicians can be paternalistic – but he urges against that instinct. “If patients are arming themselves with information to be heard, our task as clinicians is to meet them with recognition, not resistance. In doing so, we preserve what has always made medicine human: the willingness to share meaning, uncertainty, and hope, together.”

DG

Continue reading

Reading of the Week: Care & Technology – Papers on Virtual Care and an App for Alcohol; Also, Dr. Reisman on ChatGPT & Bedside Manner

From the Editor

With COVID-19, mental health services were transformed in a matter of weeks when much care shifted to virtual. Today, we are all proficient in our webcams and familiar with terms like Zoom fatigue.

From a system perspective, we have unanswered questions: What’s the right amount of virtual care? When is it appropriate? In the first selection, Matthew Crocker (of the Canadian Institute for Health Information) and his co-authors focus on virtual versus in-person follow-up care after an ED visit in Ontario. Drawing on databases, they analyzed more than 28 000 such visits, wondering if the virtual option led to more adverse psychiatric outcomes. “These results support virtual care as a modality to increase access to follow-up after an acute care psychiatric encounter across a wide range of diagnoses.” We consider the paper and its implications.

Apps for mental health are increasingly popular; the mental health app market may be worth more than $24 billion by 2030, according to one estimate. In the second selection, from Internet Interventions, John A. Cunningham (of the University of Toronto) and co-authors describe a new RCT involving participants who were concerned about their drinking. A total of 761 participants received either an app with several intervention modules or educational materials alone. They were then followed for six months. “The results of this trial provide some supportive evidence that smartphone apps can reduce unhealthy alcohol consumption.”

And in the third selection, Dr. Jonathan Reisman, an ED physician, writes about AI. In a provocative essay for The New York Times, he argues that physicians often rely on scripts to seem compassionate – such as when we deliver bad news. AI, he reasons, could then do that well. “It doesn’t actually matter if doctors feel compassion or empathy toward patients; it only matters if they act like it. In much the same way, it doesn’t matter that A.I. has no idea what we, or it, are even talking about.”

DG

Continue reading

Reading of the Week: Preventing Postpartum Depression in Pakistan – the New Nature Med Study; Also, Deaths of Despair and ChatGPT & Abstracts

From the Editor

Imagine that you are asked to design a program to prevent depression in a population at risk. Would you hire psychiatrists? Look to nurses? Tap the expertise of psychologists? All three?

In the first selection, from Nature Medicine, Pamela J. Surkan (of Johns Hopkins University) and her co-authors describe a study that focused on prevention. As they worked in Pakistan – a nation with few mental health providers by Western standards – they chose to train lay people, teaching them to deliver CBT. In their single-blind, randomized controlled trial, 1 200 pregnant women with anxiety (but not depression) received either enhanced usual care or CBT. “We found reductions of 81% and 74% in the odds of postnatal MDE and of moderate-to-severe anxiety…” We discuss the paper and its implications.

In the second selection, Joseph Friedman and Dr. Helena Hansen (both of the University of California, Los Angeles) look at deaths of despair in the United States in a research letter for JAMA Psychiatry. Their work builds on the idea that some deaths are related to the hopelessness of a person’s social or economic circumstances; past publications focused largely on White Americans. Friedman and Hansen drew on more than two decades of data, including ethnicity, from a US database, finding a different pattern: “Rising inequalities in deaths of despair among American Indian, Alaska Native and Black individuals were largely attributable to disproportionate early mortality from drug- and alcohol-related causes…”

A recent survey finds that psychiatrists see AI as potentially helpful with paperwork and diagnosing patients. But could AI help you keep up with the literature? In the third selection from Annals of Family Medicine, Dr. Joel Hake (of the University of Kansas) and his co-authors used ChatGPT to produce short summaries of studies, then evaluated their quality, accuracy, and bias. “We suggest that ChatGPT can help family physicians accelerate review of the scientific literature.”

DG

Continue reading

Reading of the Week: Self-stigma & Depression – the new JAD Study; Also, ChatGPT & Mental Health Care, and Dr. Catherine Hickey on the Opioid Crisis

From the Editor 

Depression is the result of character weakness. So explained my patient who had a major depressive disorder and hesitated to take medications.

Though fading, stigma about mental illness continues to exist, including self-stigma, the negative thoughts and beliefs that patients have about their own disease – as with my patient. How common is self-stigma? How does its prevalence differ around the globe? What are the risk factors for it? Nan Du (of the University of Hong Kong) and co-authors attempt to answer these questions in a new Journal of Affective Disorders paper. They do a systematic review and meta-analysis of self-stigma among people with depression, drawing on 56 studies with almost 12 000 participants, with a focus on international comparisons. “The results showed that the global prevalence of depression self-stigma was 29%. Levels of self-stigma varied across regions, but this difference was not significant.” We consider the paper and its clinical implications.

In this week’s second selection, we look at ChatGPT and mental health care. Dr. John Torous (of Harvard University) joins me for a Quick Takes podcast interview. He sees potential for patients – including making clinical notes more accessible by bridging language and knowledge divides – and for physicians, who may benefit from a more holistic differential diagnosis and treatment plan based on multiple data sets. He acknowledges problems with privacy, accuracy, and ChatGPT’s tendency to “hallucinate,” a term he dislikes. “We want to really be cautious because these are complex pieces of software.” 

And in the third selection, Dr. Catherine Hickey (of Memorial University) writes about the opioid crisis for Academic Psychiatry. The paper opens personally, with Dr. Hickey describing paramedics trying to help a young man who had overdosed. She considers the role of psychiatry and contemplates societal biases. “[I]n a better world, the needless deaths of countless young people would never be tolerated, regardless of their skin color.”

DG

Continue reading

Reading of the Week: Fatal Overdoses & Drug Decriminalization – the new JAMA Psych Paper; Also, ChatGPT vs Residents, and Chang on Good Psychiatry

From the Editor

Does decriminalizing the possession of small amounts of street drugs reduce overdoses? Proponents argue yes because those who use substances can seek care – including in emergency situations – without fear of police involvement and charges. Opponents counter that decriminalization means fewer penalties for drug use, resulting in more misuse and thus more overdoses. The debate can be shrill – but lacking in data.

Spruha Joshi (of New York University) and co-authors bring numbers to the policy discussion with a new JAMA Psychiatry paper. They analyze the impact of decriminalization in two states, Oregon and Washington, contrasting overdoses there and in other US states that didn’t decriminalize. “This study found no evidence of an association between legal changes that removed or substantially reduced criminal penalties for drug possession in Oregon and Washington and fatal drug overdose rates.” We consider the paper and its implications.

In the second selection, Dr. Ashwin Nayak (of Stanford University) and his co-authors look at AI for the writing of patient histories. In a new research letter for JAMA Internal Medicine, they do a head-to-head (head-to-CPU?) comparison with ChatGPT and residents both writing patient histories (specifically, the history of present illness, or HPI). “HPIs generated by a chatbot or written by senior internal medicine residents were graded similarly by internal medicine attending physicians.”

And in the third selection, medical student Howard A. Chang (of Johns Hopkins University) wonders about “good” psychiatry in a paper for Academic Psychiatry. He reflects on the comments of surgeons, pediatricians, and obstetricians, and then mulls the role of our specialty. “I have gleaned that a good psychiatrist fundamentally sees and cares about patients with mental illness as dignified human beings, not broken brains. The good psychiatrist knows and treats the person in order to treat the disease.”

DG

Continue reading

Reading of the Week: Ethnicity, Bias, and Alcohol – the New AJP Paper; Also, Global Mental Health & AI (JAMA Psych) and Halprin on Her Mother (Globe)

From the Editor

He drinks heavily, but does he have a diagnosed alcohol use disorder?

Does the answer to that question tie to ethnicity and biases? In a new American Journal of Psychiatry paper, Rachel Vickers-Smith (of the University of Kentucky) and her co-authors suggest it does. Drawing on US Veterans Affairs’ data with over 700,000 people, they analyzed the scores of a screening tool and the diagnoses with ethnicity recorded in the EMR. “We identified a large, racialized difference in AUD diagnosis, with Black and Hispanic veterans more likely than White veterans to receive the diagnosis at the same level of alcohol consumption.” We look at the paper and mull its implications.

In the second selection, Alastair C. van Heerden (of the University of the Witwatersrand) and his co-authors consider AI and its potential for global mental health services in a new JAMA Psychiatry Viewpoint. They focus on large language models (think ChatGPT) which could do several things, including helping to train and supervise humans. “Large language models and other forms of AI will fundamentally change how we treat mental disorders, allowing us to move away from the current model in which most of the world’s population does not have access to quality mental health services.”

And, in the third selection, Paula Halprin discusses her mother’s alcohol use in an essay for The Globe and Mail. In a moving piece that touches on anger, trauma, and regret, Halprin writes about her re-examination of her mother’s life. “I now understand my mother drank not because of a weak character, but to cope with a body wearing out before its time from unremitting pregnancy and as a way to swallow her anger and disappointment. It was also a way to mourn a loss of self.”

DG

Continue reading

Reading of the Week: RCTs & Mental Health – the New CJP Paper; Also, AI and Discharge Summaries (Lancet DH), and Mehler Paperny on Action (Globe)

From the Editor

How has psychiatric research changed over time?

In the first selection, Sheng Chen (of CAMH) and co-authors attempt to answer this question by focusing on randomized controlled trials in mental health in a new paper for The Canadian Journal of Psychiatry. Using the Cochrane Database of Systematic Reviews, they look at almost 6,700 RCTs published over the past decades. They find: “the number of mental health RCTs increased exponentially from 1965 to 2009, reaching a peak in the years 2005–2009,” and observe a shift away from pharmacologic studies.

RCTs: the gold standard of research

In the second selection, Sajan B. Patel (of St Mary’s Hospital) et al. consider ChatGPT and health care in a new Lancet Digital Health Comment. Noting that discharge summaries tend to be under-prioritized, they wonder if this AI program may help in the future, freeing doctors to do other things. “The question for the future will be how, not if, we adopt this technology.”

And in the third selection, writer Anna Mehler Paperny focuses on campaigns to reduce stigma in a hard-hitting essay for The Globe and Mail. She argues that action is urgently needed to address mental health problems. She writes: “We need more than feel-good bromides. Every time someone prominent utters something about how important mental health is, the follow-up should be: So what? What are you doing about it? And when?”

DG

Continue reading

Reading of the Week: Dr. Scott Patten on ChatGPT

From the Editor

Having written only four papers, the author wouldn’t seem particularly noteworthy. Yet the work is causing a buzz. Indeed, JAMA published an Editorial about the author, the papers, and the implications.

That author is ChatGPT, which isn’t human, of course – and that’s why it has made something of a splash. More than a million people tried this AI program in the week after its November launch, using it to do everything from composing poetry to drafting essays for school assignments.

What to make of ChatGPT? What are the implications for psychiatry? And for our journals?

To the last question, some are already reacting; as noted above, last week, JAMA published an Editorial and also updated its Instructions to Authors with several changes, including: “Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship.”

This week, we feature an original essay by Dr. Scott Patten (of the University of Calgary) for the Reading of the Week. Dr. Patten, who serves as the Editor Emeritus of The Canadian Journal of Psychiatry, considers ChatGPT and these three questions, drawing on his own use of the program.

(And we note that the field is evolving quickly. Since Dr. Patten’s first draft, Microsoft has announced a chatbot for the search engine Bing.)

DG

Continue reading