Tag: ChatGPT

Reading of the Week: Preventing Postpartum Depression in Pakistan – the New Nature Med Study; Also, Deaths of Despair and ChatGPT & Abstracts

From the Editor

Imagine that you are asked to design a program to prevent depression in a population at risk. Would you hire psychiatrists? Look to nurses? Tap the expertise of psychologists? All three?

In the first selection from Nature Medicine, Pamela J. Surkan (of Johns Hopkins University) and her co-authors describe a study that focused on prevention. As they worked in Pakistan – a nation with few mental health providers by Western standards – they chose to train lay people, teaching them to deliver CBT. In their single-blind, randomized controlled trial, 1,200 women who were pregnant and had anxiety (but not depression) were randomly assigned to enhanced usual care or CBT. “We found reductions of 81% and 74% in the odds of postnatal MDE and of moderate-to-severe anxiety…” We discuss the paper and its implications.

In the second selection, Joseph Friedman and Dr. Helena Hansen (both of the University of California, Los Angeles) look at deaths of despair in the United States in a research letter for JAMA Psychiatry. Their work builds on the idea that some deaths are related to the hopelessness of a person’s social or economic circumstance; past publications focused largely on White Americans. Friedman and Hansen drew on more than two decades of data, including ethnicity, from a US database, finding a different pattern and that: “Rising inequalities in deaths of despair among American Indian, Alaska Native and Black individuals were largely attributable to disproportionate early mortality from drug- and alcohol-related causes…”

A recent survey finds that psychiatrists see AI as potentially helpful with paperwork and diagnosing patients. But could AI help you keep up with the literature? In the third selection from Annals of Family Medicine, Dr. Joel Hake (of the University of Kansas) and his co-authors used ChatGPT to produce short summaries of studies, then evaluated their quality, accuracy, and bias. “We suggest that ChatGPT can help family physicians accelerate review of the scientific literature.”

DG

Continue reading

Reading of the Week: Self-stigma & Depression – the new JAD Study; Also, ChatGPT & Mental Health Care, and Dr. Catherine Hickey on the Opioid Crisis

From the Editor 

Depression is the result of character weakness. So explained my patient who had a major depressive disorder and hesitated to take medications.

Though fading, stigma about mental illness continues to exist, including self-stigma, the negative thoughts and beliefs that patients have about their own disease – as with my patient. How common is self-stigma? How does its prevalence differ around the globe? What are risk factors for it? Nan Du (of the University of Hong Kong) and co-authors attempt to answer these questions in a new Journal of Affective Disorders paper. They do a systematic review and meta-analysis of self-stigma for people with depression, drawing on 56 studies with almost 12,000 participants, with a focus on international comparisons. “The results showed that the global prevalence of depression self-stigma was 29%. Levels of self-stigma varied across regions, but this difference was not significant.” We consider the paper and its clinical implications.

In this week’s second selection, we look at ChatGPT and mental health care. Dr. John Torous (of Harvard University) joins me for a Quick Takes podcast interview. He sees potential for patients – including making clinical notes more accessible by bridging language and knowledge divides – and for physicians, who may benefit from a more holistic differential diagnosis and treatment plan based on multiple data sets. He acknowledges problems with privacy, accuracy, and ChatGPT’s tendency to “hallucinate,” a term he dislikes. “We want to really be cautious because these are complex pieces of software.” 

And in the third selection, Dr. Catherine Hickey (of Memorial University) writes about the opioid crisis for Academic Psychiatry. The paper opens personally, with Dr. Hickey describing paramedics trying to help a young man who had overdosed. She considers the role of psychiatry and contemplates societal biases. “[I]n a better world, the needless deaths of countless young people would never be tolerated, regardless of their skin color.”

DG

Continue reading

Reading of the Week: Fatal Overdoses & Drug Decriminalization – the new JAMA Psych Paper; Also, ChatGPT vs Residents, and Chang on Good Psychiatry

From the Editor

Does decriminalizing the possession of small amounts of street drugs reduce overdoses? Proponents argue yes because those who use substances can seek care – including in emergency situations – without fear of police involvement and charges. Opponents counter that decriminalization means fewer penalties for drug use, resulting in more misuse and thus more overdoses. The debate can be shrill – but lacking in data.

Spruha Joshi (of New York University) and co-authors bring numbers to the policy discussion with a new JAMA Psychiatry paper. They analyze the impact of decriminalization in two states, Oregon and Washington, contrasting overdoses there and in other US states that didn’t decriminalize. “This study found no evidence of an association between legal changes that removed or substantially reduced criminal penalties for drug possession in Oregon and Washington and fatal drug overdose rates.” We consider the paper and its implications.

In the second selection, Dr. Ashwin Nayak (of Stanford University) and his co-authors look at AI for the writing of patient histories. In a new research letter for JAMA Internal Medicine, they do a head-to-head (head-to-CPU?) comparison with ChatGPT and residents both writing patient histories (specifically, the history of present illness, or HPI). “HPIs generated by a chatbot or written by senior internal medicine residents were graded similarly by internal medicine attending physicians.”

And in the third selection, medical student Howard A. Chang (of Johns Hopkins University) wonders about “good” psychiatry in a paper for Academic Psychiatry. He reflects on the comments of surgeons, pediatricians, and obstetricians, and then mulls the role of our specialty. “I have gleaned that a good psychiatrist fundamentally sees and cares about patients with mental illness as dignified human beings, not broken brains. The good psychiatrist knows and treats the person in order to treat the disease.”

DG

Continue reading

Reading of the Week: Ethnicity, Bias, and Alcohol – the New AJP Paper; Also, Global Mental Health & AI (JAMA Psych) and Halprin on Her Mother (Globe)

From the Editor

He drinks heavily, but does he have a diagnosed alcohol use disorder?

Does the answer to that question tie to ethnicity and biases? In a new American Journal of Psychiatry paper, Rachel Vickers-Smith (of the University of Kentucky) and her co-authors suggest it does. Drawing on US Veterans Affairs’ data with over 700,000 people, they analyzed the scores of a screening tool and the diagnoses with ethnicity recorded in the EMR. “We identified a large, racialized difference in AUD diagnosis, with Black and Hispanic veterans more likely than White veterans to receive the diagnosis at the same level of alcohol consumption.” We look at the paper and mull its implications.

In the second selection, Alastair C. van Heerden (of the University of the Witwatersrand) and his co-authors consider AI and its potential for global mental health services in a new JAMA Psychiatry Viewpoint. They focus on large language models (think ChatGPT) which could do several things, including helping to train and supervise humans. “Large language models and other forms of AI will fundamentally change how we treat mental disorders, allowing us to move away from the current model in which most of the world’s population does not have access to quality mental health services.”

And, in the third selection, Paula Halprin discusses her mother’s alcohol use in an essay for The Globe and Mail. In a moving piece that touches on anger, trauma, and regret, Halprin writes about her re-examination of her mother’s life. “I now understand my mother drank not because of a weak character, but to cope with a body wearing out before its time from unremitting pregnancy and as a way to swallow her anger and disappointment. It was also a way to mourn a loss of self.”

DG

Continue reading

Reading of the Week: RCTs & Mental Health – the New CJP Paper; Also, AI and Discharge Summaries (Lancet DH), and Mehler Paperny on Action (Globe)

From the Editor

How has psychiatric research changed over time?

In the first selection, Sheng Chen (of CAMH) and co-authors attempt to answer this question by focusing on randomized controlled trials in mental health in a new paper for The Canadian Journal of Psychiatry. Using the Cochrane Database of Systematic Reviews, they look at almost 6,700 RCTs published over the past decades. They find: “the number of mental health RCTs increased exponentially from 1965 to 2009, reaching a peak in the years 2005–2009,” and observe a shift away from pharmacologic studies.

RCTs: the gold standard of research

In the second selection, Sajan B. Patel (of St Mary’s Hospital) et al. consider ChatGPT and health care in a new Lancet Digital Health Comment. Noting that discharge summaries tend to be under-prioritized, they wonder if this AI program may help in the future, freeing doctors to do other things. “The question for the future will be how, not if, we adopt this technology.”

And in the third selection, writer Anna Mehler Paperny focuses on campaigns to reduce stigma in a hard-hitting essay for The Globe and Mail. She argues that action is urgently needed to address mental health problems. She writes: “We need more than feel-good bromides. Every time someone prominent utters something about how important mental health is, the follow should be: So what? What are you doing about it? And when?”

DG

Continue reading

Reading of the Week: Dr. Scott Patten on ChatGPT

From the Editor

Having written only four papers, the author wouldn’t seem particularly noteworthy. Yet the work is causing a buzz. Indeed, JAMA published an Editorial about the author, the papers, and the implications.

That author is ChatGPT, who isn’t human, of course – and that’s why it has made something of a splash. More than a million people tried this AI program in the week after its November launch, using it to do everything from composing poetry to drafting essays for school assignments.

What to make of ChatGPT? What are the implications for psychiatry? And for our journals?

To the last question, some are already reacting; as noted above, last week, JAMA published an Editorial and also updated its Instructions to Authors with several changes, including: “Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship.”

This week, we feature an original essay by Dr. Scott Patten (of the University of Calgary) for the Reading of the Week. Dr. Patten, who serves as the Editor Emeritus of The Canadian Journal of Psychiatry, considers ChatGPT and these three questions, drawing on his own use of the program.

(And we note that the field is evolving quickly. Since Dr. Patten’s first draft, Microsoft has announced a chatbot for the search engine Bing.)

DG

Continue reading