From the Editor
What’s the role of antidepressants in the treatment of bipolar disorder? That question is openly debated.
In a New England Journal of Medicine paper that was just published, Dr. Lakshmi N. Yatham (of the University of British Columbia) and his co-authors try to shed light on this issue. In their study, people with bipolar I depression who were in remission were randomized to continue an adjunctive antidepressant or switch to a placebo, and then followed for a year. The study involved 209 people from three countries. “[A]djunctive treatment with escitalopram or bupropion XL that continued for 52 weeks did not show a significant benefit as compared with treatment for 8 weeks in preventing relapse of any mood episode.” We consider the paper and its implications.

In the second selection, Drs. Avraham Cooper (of Ohio State University) and Adam Rodman (of Harvard University) consider AI and medical education in The New England Journal of Medicine. They discuss earlier technological advances, including the stethoscope. AI, in their view, will change practice and ethics – with clear implications for training and education. “If we don’t shape our own future, powerful technology companies will happily shape it for us.”
And in the third selection, Keith Humphreys (of Stanford University) writes about words and word choices to describe vulnerable populations in an essay for The Atlantic. He notes historic disputes, such as the use of the term patient. “[M]aking these judgments in a rigorous, fact-based way would prevent experts, policy makers, and the general public from being distracted by something easy – arguing about words – when we need to focus on doing something much harder: solving massive social problems.”
DG
Selection 1: “Duration of Adjunctive Antidepressant Maintenance in Bipolar I Depression”
Lakshmi N. Yatham, Shyam Sundar Arumugham, Muralidharan Kesavan, et al.
The New England Journal of Medicine, 3 August 2023

Although mania is the defining feature of bipolar I disorder (as compared with bipolar II disorder), patients with this disorder have depressive symptoms three times more frequently than manic symptoms. The effect of depressive symptoms on health and functioning is at least as severe as that of manic episodes. Furthermore, some studies have shown that suicide attempts and suicide deaths are at least 18 times as common during depressive episodes as during manic episodes in bipolar I disorder…
Apart from lamotrigine, quetiapine, and lithium, other FDA-approved maintenance treatments for bipolar I disorder have limited efficacy in preventing depression, and depressive relapses are common among persons with this condition. Adjunctive antidepressants are commonly used to treat depression in patients with bipolar I disorder; as many as 57% of patients with bipolar I disorder received prescriptions for adjunctive antidepressants from 2013 through 2016, and some studies have shown that 80% of patients continue receiving antidepressants for 6 months or longer after remission.
So begins a paper by Yatham et al.
Here’s what they did:
- “We conducted a multisite, double-blind, randomized, placebo-controlled trial of maintenance of treatment with adjunctive escitalopram or bupropion XL as compared with discontinuation of antidepressant therapy in patients with bipolar I disorder who had recently had remission of a depressive episode.”
- Patients were randomly assigned to continue adjunctive antidepressant treatment for 52 weeks after remission or to switch to a placebo at 8 weeks.
- Primary outcome: time to any mood episode.
- Secondary outcomes: time to an episode of mania or hypomania and time to a depressive episode.
Here’s what they found:
- “From 2009 through 2019, a total of 238 patients were assessed for eligibility, of whom 209 entered the open-label phase and received bupropion XL or escitalopram adjunctive therapy for a depressive episode. Of the patients who received therapy during the open-label phase, 150 had remission of depression and were enrolled in the double-blind phase.” 90 continued with antidepressant treatment; 87 were switched to the placebo.
- Demographics. 12% of trial participants were White; 87% were Asian; less than 1% were Black.
- Primary outcome. “At 52 weeks, 28 of the patients in the 52-week group (31%) and 40 in the 8-week group (46%) had a primary-outcome event. The hazard ratio for time to any mood episode in the 52-week group relative to the 8-week group was 0.68…” See figure below.
- Secondary outcome. “A total of 15 patients (17%) in the 52-week group as compared with 35 patients (40%) in the 8-week group had a depressive episode within 52 weeks (hazard ratio, 0.43…), and 11 patients (12%) as compared with 5 patients (6%) had a manic or hypomanic event (hazard ratio, 2.28…).”
- No serious adverse events were observed.

A few thoughts:
1. There is much to like in this study: it’s practical and impressive, with multiple sites in three countries, a randomized design, and publication in a very, very prestigious journal.
2. But there are problems, including core issues with recruitment that culminated in early termination of the study. (!) Are the results adequately powered?
3. The findings in a sentence: “continuing adjunctive antidepressant therapy for 52 weeks as compared with discontinuing antidepressants at 8 weeks was not more beneficial with regard to the primary outcome of the occurrence of any mood episode.”
4. On Twitter (or X, as it’s now called), Dr. Yatham emphasizes the secondary outcome result – and notes that further data will be released. It should be interesting.
5. Some clinicians still regularly use antidepressants in this population, and for them the secondary-outcome result is encouraging. Is the more cautious approach to avoid antidepressants altogether?
The full NEJM paper can be found here:
https://www.nejm.org/doi/full/10.1056/NEJMoa2300184
Selection 2: “AI and Medical Education – A 21st-Century Pandora’s Box”
Avraham Cooper and Adam Rodman
The New England Journal of Medicine, 3 August 2023

ChatGPT (Chat Generative Pre-trained Transformer), OpenAI’s chatbot powered by artificial intelligence (AI), has become the fastest-growing Internet application in history. Generative AI, which includes large language models such as GPT, has the ability to produce text resembling that generated by humans and seemingly to mimic human thought. Medical trainees and clinicians already use this technology, and medical education doesn’t have the luxury of watchful waiting; the field needs to grapple now with the effects of AI. Many valid concerns have been raised about AI’s effects on medicine, including the propensity for AI to make up information that it then presents as fact (termed a ‘hallucination’), its implications for patient privacy, and the risk of biases being baked into source data. But we worry that the focus on these immediate challenges obscures many of the broader implications that AI could have for medical education…
So begins a paper by Cooper and Rodman.
“Throughout history, technology has disrupted the way physicians think.” They reach back to the 19th century, noting the invention of the stethoscope, which “helped spark the development and refinement of the physical exam, which led to the emergence of physicians’ self-conception as diagnostic detectives.”
They pivot to AI today. “ChatGPT has shown the potential to be at least as disruptive as the problem-oriented medical record, having passed both licensing and clinical reasoning exams and approximating the diagnostic thought patterns of physicians. Higher education is currently wrestling with ‘the end of the college essay,’ and medical school personal statements are sure to follow.”
They wonder about the response. “Do medical educators take an activist approach to integrating AI into physician training, deliberately preparing the physician workforce for the safe and appropriate use of this transformational technology in health care? Or do we allow external forces governed by incentives for prioritizing operational efficiency and profits to determine what that integration looks like?”
Medical school
“Medical schools face a dual challenge: they need to both teach students how to utilize AI in their practice and adapt to the emerging academic use of AI by students and faculty.”
They note that the change has begun. “Medical students are already starting to apply AI in their studying and learning, generating disease schema from chatbots and anticipating teaching points. Faculty are contemplating how AI can help them design courses and evaluations.”
And they discuss future challenges. “The whole idea of a medical school curriculum built by humans is now in doubt: How will a medical school provide quality control for components of its curriculum that didn’t originate from a human mind? How can schools maintain academic standards if students use AI to complete assignments?”
Residency
“At the graduate medical education level, residents and fellows need to be prepared for a future in which AI tools are integral components of their independent practice. Trainees will have to become comfortable working with AI and will have to understand its capabilities and limitations, both to support their own clinical skills and because their patients are already using it. For example, ChatGPT can produce advice on cancer screening in patient-friendly language, though not with 100% accuracy. AI queries by patients will inevitably lead to an evolution of the patient–doctor relationship, just as the proliferation of commercial genetic-testing products and online medical advice platforms changed discussion topics during clinic visits.”
Further challenges
“Ethical precepts are the bedrock of medical practice. What will health care look like when medicine is assisted by AI models that filter ethical decisions through opaque algorithms?”
A few thoughts:
1. This is an excellent paper with a timely topic, asking big questions.
2. By their own admission, the authors answer almost none of them.
3. The authors make many good points.
4. A particularly thoughtful one: how AI could change the doctor-patient relationship. Already, with Google, patients have access to good resources, including scientific journals, empowering them with information and, yes, misinformation. AI will continue the trend, for better and worse.
5. ChatGPT has been considered in past Readings. On the doctor-patient relationship, we looked at a JAMA Internal Medicine paper comparing ChatGPT-generated answers with physicians’ answers to basic medical questions in terms of quality and empathy. “In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed in an online forum.” That Reading can be found here:
The full NEJM paper can be found here:
https://www.nejm.org/doi/full/10.1056/NEJMp2304993
Selection 3: “The Burden of Proof Is on the Language Police”
Keith Humphreys
The Atlantic, 7 August 2023

In my work as a senior editor at a scientific journal, the most challenging arguments I mediate among reviewers, authors, other editors, and readers are not about research methods, empirical data, or subtle points of theory but about which terms describing vulnerable groups are acceptable and which are harmful. My field – addiction and drug policy – has a tradition of savage infighting over language. Are the people whom earlier generations derided as vagrants or bums more appropriately termed homeless people, people who are homeless, unsheltered persons, persons with lived experience of being unhoused, or something else? Similar arguments erupt in politics, in journalism, in the classroom, in the workplace, and between generations at the dinner table. When even sincere, well-intended people cannot agree on which words reinforce social injustice and damage human well-being, the debates can be mutually bruising.
So begins an essay by Humphreys.
He notes, for instance, that “when someone expresses clear preferences about how he or she wants to be described, that wish requires no evidentiary validation.”
He argues for respecting people’s wishes. “In some cases, honoring other people’s self-conception may mean tolerating language that well-meaning outsiders view as blunt, impolite, or even destructive. For example, some members of my field think people in recovery shouldn’t burden themselves with the terms addicts and alcoholics – words that could very well stigmatize anyone labeled as such without their consent but that are widely claimed by participants in 12-step programs. Scientists and clinicians must show respect to other people’s humanity, and that includes upholding their right to speak for and define themselves.”
He also argues for making decisions based on evidence. “[Q]uite a bit of evidence on the effects of terminology is available to guide us, and in some cases, it backs up a linguistic shift. According to one study in my field, seeing an individual described as a substance abuser rather than as having a substance-use disorder makes people more likely to view them as a safety threat and deserving of punishment.”
“But many other claims about the harmfulness or virtue of individual terms lack clear evidence, and we should therefore be humble in generalizing.” He gives a few examples:
- Elder. “One day, a white American colleague chastised me for using the allegedly demeaning term elder when discussing drug overdoses among Medicare participants, shortly before I got on a Zoom call in which Canadian colleagues of Indigenous ancestry repeatedly used the same term as a sign of respect for the longest-lived members of their community.”
- Patient. “During my clinical training as a psychologist, I was informed (without evidence) that patient was a destructively medicalized term for people seeking mental-health care, and that I should use only client. But surveys of real-life people seeking care show no consensus. In one study, individuals seeing a psychiatrist or a nurse, for example, preferred patients, whereas patients and clients were equally popular among those consulting a social worker or an occupational therapist.”
- Latinx. “Many U.S. academics quickly adopted the neologism Latinx as a more inclusive, gender-neutral alternative to Hispanic or Latino, even though the term bemuses or annoys some people of Latin American descent and survey data suggest that few use it to describe themselves.”
He closes: “A shared commitment to evidence provides a way to resolve upsetting disagreements that can otherwise fester forever, while opening up chances to learn when we have in fact caused harm and genuinely need to treat others better.”
A few thoughts:
1. This is a good essay.
2. Language tends to stoke strong responses. Humphreys suggests a reasonable approach.
3. I still use patient.
The full Atlantic essay can be found here: