Tag: AI

Reading of the Week: Fatal Overdoses & Drug Decriminalization – the new JAMA Psych Paper; Also, ChatGPT vs Residents, and Chang on Good Psychiatry

From the Editor

Does decriminalizing the possession of small amounts of street drugs reduce overdoses? Proponents argue yes: people who use substances can seek care – including in emergencies – without fear of police involvement and charges. Opponents counter that decriminalization means fewer penalties for drug use, resulting in more misuse and thus more overdoses. The debate can be shrill – and often short on data.

Spruha Joshi (of New York University) and co-authors bring numbers to the policy discussion with a new JAMA Psychiatry paper. They analyze the impact of decriminalization in two states, Oregon and Washington, comparing overdose rates there with those in US states that didn’t decriminalize. “This study found no evidence of an association between legal changes that removed or substantially reduced criminal penalties for drug possession in Oregon and Washington and fatal drug overdose rates.” We consider the paper and its implications.

In the second selection, Dr. Ashwin Nayak (of Stanford University) and his co-authors look at AI for the writing of patient histories. In a new research letter for JAMA Internal Medicine, they do a head-to-head (head-to-CPU?) comparison, with ChatGPT and residents each writing patient histories (specifically, the history of present illness, or HPI). “HPIs generated by a chatbot or written by senior internal medicine residents were graded similarly by internal medicine attending physicians.”

And in the third selection, medical student Howard A. Chang (of Johns Hopkins University) wonders about “good” psychiatry in a paper for Academic Psychiatry. He reflects on the comments of surgeons, pediatricians, and obstetricians, and then mulls the role of our specialty. “I have gleaned that a good psychiatrist fundamentally sees and cares about patients with mental illness as dignified human beings, not broken brains. The good psychiatrist knows and treats the person in order to treat the disease.”

DG

Reading of the Week: Antidepressants & Bipolar – the New NEJM Paper; Also, AI & Med Ed and Humphreys on Language

From the Editor

What’s the role of antidepressants in the treatment of bipolar disorder? That question is openly debated.

In a New England Journal of Medicine paper that was just published, Dr. Lakshmi N. Yatham (of the University of British Columbia) and his co-authors try to shed light on this issue. In their study, people whose bipolar depression had remitted on an adjunctive antidepressant were randomly assigned to continue it for 52 weeks or for just 8 weeks, and were followed for a year. The study involved 209 people from three countries. “[A]djunctive treatment with escitalopram or bupropion XL that continued for 52 weeks did not show a significant benefit as compared with treatment for 8 weeks in preventing relapse of any mood episode.” We consider the paper and its implications.

In the second selection, Drs. Avraham Cooper (of Ohio State University) and Adam Rodman (of Harvard University) consider AI and medical education in The New England Journal of Medicine. They discuss earlier technological advances, including the stethoscope. AI, in their view, will change practice and ethics – with clear implications for training and education. “If we don’t shape our own future, powerful technology companies will happily shape it for us.”

And in the third selection, Keith Humphreys (of Stanford University) writes about words and word choices to describe vulnerable populations in an essay for The Atlantic. He notes historic disputes, such as the use of the term “patient.” “[M]aking these judgments in a rigorous, fact-based way would prevent experts, policy makers, and the general public from being distracted by something easy – arguing about words – when we need to focus on doing something much harder: solving massive social problems.”

DG

Reading of the Week: tDCS vs Sham for Depression – the New Lancet Paper; Also, US Ketamine Seizures and Dr. Lamas on Medical Practice & AI

From the Editor

He’s tried several medications, but still struggles with his depression. The story is too familiar. Transcranial direct current stimulation (tDCS) is an option, and increasingly the focus of research. With relatively few side effects and the possibility of at-home treatment, tDCS has clear advantages.

But how do patients taking antidepressants respond? In the first selection, from the pages of The Lancet, Dr. Gerrit Burkhardt (of the University of Munich) and his co-authors report the findings of an impressive study: triple-blinded, conducted across eight sites, and comparing active treatment with sham stimulation. “Active tDCS was not superior to sham stimulation during a 6-week period. Our trial does not support the efficacy of tDCS as an additional treatment to SSRIs in adults with MDD.” We consider the paper, an accompanying Comment, and the implications.

In the second selection, Joseph J. Palamar (of New York University) and his colleagues analyze data on US ketamine seizures in a Research Letter for JAMA Psychiatry. They view seizures as a measure of recreational and nonmedical use, and conclude: “These data suggest increasing availability of illicit ketamine.”

And in this week’s third selection, Dr. Daniela J. Lamas (of Harvard University), an internist, writes about AI for The New York Times. In thinking about medical practice, she sees artificial intelligence doing more and more, and ultimately helping with diagnosis. She also sees trade-offs. Still, she concludes: “Beyond saving us time, the intelligence in A.I. – if used well – could make us better at our jobs.”

Note that there will be no Reading next week.

DG

Reading of the Week: Lithium Prescribing – the New CJP Paper; Also, AI vs. Doctors (JAMA Int Med) and Bergner on Compulsory Mental Health (NYT)

From the Editor

For patients with bipolar disorder, lithium is an important medication, shown to reduce hospitalizations and suicides better than newer agents. But has it fallen out of fashion? International reports suggest that it has.

In the first selection, Samreen Shafiq (of the University of Calgary) and her co-authors try to answer this question with Canadian data in a new paper for The Canadian Journal of Psychiatry. They draw on a decade of Alberta prescription data with more than 580,000 lithium scripts. “This population-based study suggests that the overall number of new and prevalent lithium users is decreasing in Alberta between the years of 2009 and 2018, but the observed pattern suggests that this decrease may have stopped by the end of our study interval.” We consider the paper and its clinical implications.

In the second selection, John W. Ayers (of the University of California San Diego) and his co-authors consider AI-generated responses to health care questions posted on social media – for example, whether to seek medical attention after a minor head injury presenting with a lump and a headache. In a JAMA Internal Medicine paper, they compare ChatGPT answers with those of physicians in terms of quality and empathy. “In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed in an online forum.”

The debate over coercive care is hot, with proposals to rebalance patients’ rights actively discussed in New York City, Alberta, and California. In the third selection, author Daniel Bergner writes that we should look for alternatives to medications. In a New York Times essay, he argues that antipsychotics are problematic. “By doubling down on existing methods, we’re only beckoning more failure.”

DG

Reading of the Week: Ethnicity, Bias, and Alcohol – the New AJP Paper; Also, Global Mental Health & AI (JAMA Psych) and Halprin on Her Mother (Globe)

From the Editor

He drinks heavily, but does he have a diagnosed alcohol use disorder?

Does the answer to that question tie to ethnicity and biases? In a new American Journal of Psychiatry paper, Rachel Vickers-Smith (of the University of Kentucky) and her co-authors suggest it does. Drawing on US Veterans Affairs data on over 700,000 people, they analyzed screening-tool scores and diagnoses alongside the ethnicity recorded in the EMR. “We identified a large, racialized difference in AUD diagnosis, with Black and Hispanic veterans more likely than White veterans to receive the diagnosis at the same level of alcohol consumption.” We look at the paper and mull its implications.

In the second selection, Alastair C. van Heerden (of the University of the Witwatersrand) and his co-authors consider AI and its potential for global mental health services in a new JAMA Psychiatry Viewpoint. They focus on large language models (think ChatGPT) which could do several things, including helping to train and supervise humans. “Large language models and other forms of AI will fundamentally change how we treat mental disorders, allowing us to move away from the current model in which most of the world’s population does not have access to quality mental health services.”

And, in the third selection, Paula Halprin discusses her mother’s alcohol use in an essay for The Globe and Mail. In a moving piece that touches on anger, trauma, and regret, Halprin writes about her re-examination of her mother’s life. “I now understand my mother drank not because of a weak character, but to cope with a body wearing out before its time from unremitting pregnancy and as a way to swallow her anger and disappointment. It was also a way to mourn a loss of self.”

DG

Reading of the Week: The Cutting Edge – Pharmacotherapy for Depression, Apps for Mental Health & AI for Everything (or Maybe Not)

From the Editor

He’s been depressed for years and you are considering augmentation. Should you choose an antipsychotic? Which one?

These are good questions, especially when treating patients with treatment-resistant depression. In the first selection, Drs. Manish K. Jha (of the University of Texas) and Sanjay J. Mathew (of Baylor College of Medicine) look at four antipsychotics in an American Journal of Psychiatry paper. They review the literature for augmentation, including the use of cariprazine, which has just received FDA approval for this purpose. They find evidence, but “their long-term safety in patients with MDD is not well established, and they are potentially concerning regarding weight gain, metabolic dysfunction, extrapyramidal symptoms, and tardive dyskinesia.” We consider the paper and its clinical implications.

In the second selection, S. E. Stoeckl (of Harvard University) and her co-authors consider the evolution of mental health apps in a new paper for the Journal of Technology in Behavioral Science. Looking at hundreds of apps, they analyze data on updates, including new features. They find: “This study highlights the dynamic nature of the app store environments, revealing rapid and substantial changes that could present challenges for app selection, consumer safety, and assessing the economic value of apps.”

And in the third selection, Dr. Dhruv Khullar (of Cornell University) writes for The New Yorker about AI and mental health. In a long essay that touches on chatbots for therapy and screening tools for suicide prevention, he wonders if AI can help clinicians (and non-clinicians) overcome issues around access. “Can artificial minds heal real ones? And what do we stand to gain, or lose, in letting them try?”

Note: there will be no Readings for the next two weeks.

DG

Reading of the Week: RCTs & Mental Health – the New CJP Paper; Also, AI and Discharge Summaries (Lancet DH), and Mehler Paperny on Action (Globe)

From the Editor

How has psychiatric research changed over time?

In the first selection, Sheng Chen (of CAMH) and co-authors attempt to answer this question by focusing on randomized controlled trials in mental health in a new paper for The Canadian Journal of Psychiatry. Using the Cochrane Database of Systematic Reviews, they look at almost 6,700 RCTs published over the past several decades. They find: “the number of mental health RCTs increased exponentially from 1965 to 2009, reaching a peak in the years 2005–2009,” and observe a shift away from pharmacologic studies.

RCTs: the gold standard of research

In the second selection, Sajan B. Patel (of St Mary’s Hospital) et al. consider ChatGPT and health care in a new Lancet Digital Health Comment. Noting that discharge summaries tend to be under-prioritized, they wonder if this AI program may help in the future, freeing doctors to do other things. “The question for the future will be how, not if, we adopt this technology.”

And in the third selection, writer Anna Mehler Paperny focuses on campaigns to reduce stigma in a hard-hitting essay for The Globe and Mail. She argues that action is urgently needed to address mental health problems. She writes: “We need more than feel-good bromides. Every time someone prominent utters something about how important mental health is, the follow-up should be: So what? What are you doing about it? And when?”

DG

Reading of the Week: Dr. Scott Patten on ChatGPT

From the Editor

Having written only four papers, the author wouldn’t seem particularly noteworthy. Yet the work is causing a buzz. Indeed, JAMA published an Editorial about the author, the papers, and the implications.

That author is ChatGPT, which isn’t human, of course – and that’s why it has made something of a splash. More than a million people tried the AI program in the week after its November launch, using it to do everything from composing poetry to drafting essays for school assignments.

What to make of ChatGPT? What are the implications for psychiatry? And for our journals?

On the last question, some are already reacting. As noted above, JAMA published an Editorial last week and also updated its Instructions for Authors with several changes, including: “Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship.”

This week, we feature an original essay by Dr. Scott Patten (of the University of Calgary) for the Reading of the Week. Dr. Patten, who serves as the Editor Emeritus of The Canadian Journal of Psychiatry, considers ChatGPT and these three questions, drawing on his own use of the program.

(And we note that the field is evolving quickly. Since Dr. Patten’s first draft, Microsoft has announced a chatbot for the search engine Bing.)

DG

Reading of the Week: Suicide and Schizophrenia – Across Life Span; Also, Transgender-Inclusive Care (QT), and the NYT on Chatbots

From the Editor

This week, we have three selections.

In the first, we consider suicide and schizophrenia. In a new JAMA Psychiatry paper, Dr. Mark Olfson (of Columbia University) and his co-authors conduct a cohort study across the life span, tapping a massive database. They find: “the risk of suicide was higher compared with the general US population and was highest among those aged 18 to 34 years and lowest among those 65 years and older.” The authors see clear clinical implications: “These findings suggest that suicide prevention efforts for individuals with schizophrenia should include a focus on younger adults with suicidal symptoms and substance use disorders.”

In the second selection, we consider transgender-inclusive care, looking at a new Quick Takes podcast. Drs. June Lam and Alex Abramovich (both of the University of Toronto) comment on caring for members of this population. “Trans individuals are medically underserved and experience poor mental health outcomes, high rates of disease burden – compared to cisgender individuals.”

Finally, in our third selection from The New York Times, reporter Karen Brown writes about chatbots for psychotherapy, focusing on Woebot. The writer quotes psychologist Alison Darcy about the potential of these conversational agents: “If we can deliver some of the things that the human can deliver, then we actually can create something that’s truly scalable, that has the capability to reduce the incidence of suffering in the population.”

DG

Reading of the Week: AI & Mental Health – Gordon Parker Looks Ahead; Also, Remembering Ronald Fieve

From the Editor

As artificial intelligence advances, what role will computers play in mental health care?

Today, computers touch practically every aspect of our lives – from suggesting books that may be of interest to us on Amazon to helping fly our planes to tropical destinations. But will computers soon help us with diagnosing and treating our patients? Will some parts of clinical medicine be replaced or assisted by computers?

This week, we look at a new paper from Acta Psychiatrica Scandinavica considering AI and care. Professor Gordon Parker (of the University of New South Wales) sees a role for computers in helping clinicians with diagnosis – but not more. “[R]ather than seeking to develop a computer program that will have diagnostic superiority to an ace clinical psychiatrist, it may be more important to develop programs that complement the psychiatrist’s judgement.”

AI: The next great doctor – or just a pretty face?

And in the second selection, we look back, not forward, and consider the career and contributions of psychiatrist Ronald R. Fieve, who recently passed away. Dr. Fieve’s work helped bring lithium to North America.

DG
