From the Editor
With COVID-19, mental health services were transformed in a matter of weeks as much care shifted to virtual. Today, we are all proficient with our webcams and familiar with terms like Zoom fatigue.
From a system perspective, we have unanswered questions: What’s the right amount of virtual care? When is it appropriate? In the first selection, Matthew Crocker (of the Canadian Institute for Health Information) and his co-authors focus on virtual versus in-person follow-up care after an ED visit in Ontario. Drawing on health administrative databases, they analyzed more than 28 000 such visits, asking whether the virtual option led to more adverse psychiatric outcomes. “These results support virtual care as a modality to increase access to follow-up after an acute care psychiatric encounter across a wide range of diagnoses.” We consider the paper and its implications.

Apps for mental health are increasingly popular; the mental health app market may be worth more than $24 billion by 2030, according to one estimate. In the second selection, from Internet Interventions, John A. Cunningham (of the University of Toronto) and co-authors describe a new RCT involving participants who were concerned about their drinking. A total of 761 were given either an app with several intervention modules or educational materials alone. They were then followed for six months. “The results of this trial provide some supportive evidence that smartphone apps can reduce unhealthy alcohol consumption.”
And in the third selection, Dr. Jonathan Reisman, an ED physician, writes about AI. In a provocative essay for The New York Times, he argues that physicians often rely on scripts to seem compassionate – such as when we deliver bad news. AI, he reasons, could then do that well. “It doesn’t actually matter if doctors feel compassion or empathy toward patients; it only matters if they act like it. In much the same way, it doesn’t matter that A.I. has no idea what we, or it, are even talking about.”
DG
Selection 1: “Virtual Versus In-Person Follow-up After a Psychiatric Emergency Visit: A Population-Based Cohort Study”
Matthew Crocker, Anjie Huang, Kinwah Fung, et al.
The Canadian Journal of Psychiatry, 23 September 2024

While the emergency department (ED) is a significant point of entry for individuals experiencing acute mental health crises, up to two-thirds of those with an ED visit for a psychiatric reason are in fact not admitted to hospital at the time of their visit. For these individuals, timely outpatient follow-up mental health care – usually within 7, 14, or 30 days – is recommended to promote continuity of management plans initiated in the ED. However, there is evidence that fewer than half of individuals with a psychiatric ED visit receive mental health follow-up care in a timely manner postdischarge…
Virtual care has been occurring for some time in many jurisdictions to improve access to specialized care at a distance. During the COVID-19 pandemic, virtual care, either via telephone or video, exploded into widespread use as a health-care delivery modality in alignment with COVID-19 containment efforts. This trend persisted, due to the convenience of virtual care and its potential to enhance access to mental health care, especially in rural and remote regions, and has opened up many opportunities to improve access to care across diverse populations. Virtual care could improve access to follow-up after a psychiatric ED visit. However, the extent to which virtual care is used in follow-up in this circumstance, and whether virtual care is a safe and effective alternative to in-person care for a highly acute population such as this one, is unknown.
Here’s what they did:
- They drew on population-based health administrative data in Ontario.
- They identified adults discharged from a psychiatric ED visit who had a follow-up mental health visit within 14 days.
- “We compared those whose first follow-up visit was virtual (telephone or video) versus in-person on their risk for experiencing either a repeat psychiatric ED visit, psychiatric hospitalization, intentional self-injury, or suicide in the 15-90 days post-ED visit.”
- They then used Cox proportional hazard models to generate “adjusted hazard ratios (aHRs), adjusting for age, income quintile, psychiatric hospitalization, and intentional self-injury in the two years prior to ED visit.”
Here’s what they found:
- Of the 270 197 Ontario residents (aged 18 or older) discharged from a psychiatric ED visit, 90 547 had appropriate data and 28 232 (31.1%) had an outpatient mental health care follow-up visit within 14 days of ED discharge.
- Demographics & illness. Most participants were female (53.2%) and had a mean age of 38.7 years. Many had anxiety disorders (46.1%).
- Virtual vs. in-person. About 65% of first follow-up visits were virtual.
- Outcome. “About 13.9% and 14.6% of the virtual and in-person groups, respectively, experienced the composite outcome, corresponding to incidence rates of 60.9 versus 74.2 per 1000 person-years (aHR 0.95…).”
- Stratification. “Results were similar for individual elements of the composite outcome, when stratifying by sex and index psychiatric diagnosis, when varying exposure (7 days) and outcome periods (60 and 30 days), and comparing ‘only’ virtual versus ‘any’ in-person follow-up during the 14-day follow-up.”
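As a back-of-the-envelope check on the incidence rates quoted above, the arithmetic is simply events divided by person-time. Here is a minimal Python sketch with invented person-time denominators (the excerpt does not report these figures):

```python
def incidence_rate_per_1000py(events: int, person_years: float) -> float:
    """Events per 1,000 person-years of follow-up."""
    return events / person_years * 1000

# Invented denominators chosen so the rates echo the reported
# 60.9 vs. 74.2 per 1,000 person-years; not the study's actual data.
virtual_rate = incidence_rate_per_1000py(609, 10_000)
in_person_rate = incidence_rate_per_1000py(742, 10_000)
print(round(virtual_rate, 1), round(in_person_rate, 1))  # 60.9 74.2

# Note: the crude rate ratio (about 0.82 here) differs from the
# reported aHR of 0.95 because the latter adjusts for covariates
# such as age, income quintile, and prior hospitalization.
print(round(virtual_rate / in_person_rate, 2))
```

The gap between the crude ratio and the adjusted hazard ratio is a reminder of why the authors modelled covariates rather than comparing raw rates.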
A few thoughts:
1. This is a good and relevant study with a nice dataset in a solid journal.
2. The main finding in a sentence: “The risk of a serious adverse psychiatric outcome within 90 days post-ED discharge was not significantly different whether the outpatient follow-up visit occurred virtually or in-person.”
3. It’s reassuring to see the consistency of the results across diagnoses.
4. They make an important comment about accessibility of virtual services: “While it is often recognized that virtual care may not be accessible to everyone, particularly those of lower socioeconomic status or who live rurally, our virtual care and in-person groups were quite balanced on these variables. This may speak to the widespread availability of mechanisms for virtual care; about 96% of Canadians aged 15-44 had a cellular telephone in 2020.”
5. In terms of limitations, the authors note the lack of randomization: clinicians may be selecting (appropriately) those patients who would do better with virtual care.
6. Virtual care has been considered in past Readings, including a BJP systematic review and meta-analysis which analyzed 32 papers and 11 disorders. It found: “Telepsychiatry achieved a symptom improvement effect for various psychiatric disorders similar to that of face-to-face treatment.” That Reading can be found here:
The full CJP paper can be found here:
https://journals.sagepub.com/doi/10.1177/07067437241281068
Selection 2: “Randomized controlled trial of a smartphone app designed to reduce unhealthy alcohol consumption”
John A. Cunningham, Alexandra Godinho, Christina Schell, et al.
Internet Interventions, June 2024

Unhealthy alcohol use is a leading contributor to the preventable burden of disease. While effective treatments exist for those with alcohol use disorders, most will never seek treatment – especially those with unhealthy alcohol use that is less severe compared to those with a more severe disorder. In addition, there is substantial interest among people with unhealthy alcohol use in effective alternatives to traditional alcohol treatment to promote reductions in alcohol consumption. These points, combined with the large public health impact of unhealthy alcohol consumption, emphasize the importance of developing new means of promoting access to effective care.
Given this need, there have been substantial efforts to target unhealthy alcohol use in primary care settings, as well as through the development of assisted self-change interventions. As technology develops, these interventions have increasingly utilized computer and Internet-based platforms. One more recent approach has been smartphone applications (apps). To date, a large number of such apps have been released for public use, though the majority have not been developed with reference to theory or evidence. Research on these interventions is limited, with inconsistent evidence of efficacy.
Here’s what they did:
“Participants were recruited from across Canada using online advertisements. Eligible participants who consented to the trial were asked to download a research-specific version of the app and were provided with a code that unlocked it (a different code for each participant to prevent sharing). Those who entered the code were randomized to one of two different versions of the app: 1) the Full app containing all intervention modules; or 2) the Educational only app, containing only the educational content of the app. Participants were followed-up at 6 months. The primary outcome variable was number of standard drinks in a typical week. Secondary outcome variables were frequency of heavy drinking days and experience of alcohol-related problems.”
Here’s what they found:
- A total of 761 participants were randomized.
- Demographics and drinking. Participants had a mean age of around 42 years and were mainly female (54.4% in the intervention group). Most had 5 or more drinks weekly (67.1% in the intervention group).
- Weekly consumption. “A generalized linear mixed model revealed that participants receiving the full app reduced their typical weekly alcohol consumption to a greater extent than participants receiving the educational only app (incidence rate ratio 0.89; 95 % confidence interval 0.80 to 0.98).”
- Follow up. The follow-up rate was 81%.
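The incidence rate ratio in the weekly-consumption result can be read as a multiplicative effect on drinks per week. A toy illustration (the baseline figure is invented, not from the paper):

```python
baseline_weekly_drinks = 20.0  # invented baseline, for illustration only
irr = 0.89                     # reported ratio, full app vs. educational-only
irr_ci = (0.80, 0.98)          # reported 95% confidence interval

# Under the IRR, the full-app group's expected weekly consumption is
# about 11% lower than the education-only group's.
expected = baseline_weekly_drinks * irr
print(round(expected, 1))  # 17.8

# The CI bounds translate to a reduction somewhere between ~2% and ~20%.
low, high = (baseline_weekly_drinks * b for b in irr_ci)
print(round(low, 1), round(high, 1))  # 16.0 19.6
```

In other words, the effect is statistically significant but modest, which matches the authors’ measured framing of the result.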
A few thoughts:
1. This is a good and practical study published in a solid journal.
2. The main finding in a sentence: “Participants who received the full app reported a greater reduction in their alcohol consumption between baseline and 6-month follow-up compared to participants who only received the educational module of the app…”
3. Perspective: the app wasn’t exactly transformative. The average reduction was 2.6 drinks per week (a small effect relative to the education-only group).
4. Still, the authors see practical implications: “the strength of these type of interventions is that they can be provided at low cost and distributed widely.” That’s a nice observation. While some apps come with big claims (and little evidence), this study suggests that a thoughtful app can be part of a larger alcohol strategy. As Cunningham commented in an interview: “There is a need to integrate self-help interventions such as smartphone apps into a continuum of care for those with alcohol concerns that includes access to more formal treatment options.”
5. The app included a few evidence-based modules, including a goal-setting tool and one for self-monitoring. How might things look (in a few years) if it were supercharged with AI, allowing the experience to be better tailored to the needs of the user?
6. The authors note several limitations, including the reliance on self-reporting.
The full Internet Int paper can be found here:
https://www.sciencedirect.com/science/article/pii/S221478292400040X
Selection 3: “I’m a Doctor. ChatGPT’s Bedside Manner Is Better Than Mine.”
Jonathan Reisman
The New York Times, 5 October 2024

As a young, idealistic medical student in the 2000s, I thought my future job as a doctor would always be safe from artificial intelligence.
At the time it was already clear that machines would eventually outperform humans at the technical side of medicine. Whenever I searched Google with a list of symptoms from a rare disease, for example, the same abstruse answer that I was struggling to memorize for exams reliably appeared within the first few results.
But I was certain that the other side of practicing medicine, the human side, would keep my job safe. This side requires compassion, empathy and clear communication between doctor and patient. As long as patients were still composed of flesh and blood, I figured, their doctors would need to be, too. The one thing I would always have over A.I. was my bedside manner.
When ChatGPT and other large language models appeared, however, I saw my job security go out the window.
So begins an essay by Dr. Reisman.
He notes the rise of AI in medicine. “These new tools excel at medicine’s technical side – I’ve seen them diagnose complex diseases and offer elegant, evidence-based treatment plans. But they’re also great at bedside communication, crafting language that convinces listeners that a real, caring person exists behind the words. In one study, ChatGPT’s answers to patient questions were rated as more empathetic (and also of higher quality) than those written by actual doctors.”
He argues that the result isn’t surprising. “In medicine – as in many other areas of life – being compassionate and considerate involves, to a surprising degree, following a prepared script.”
But he thinks back to medical school and a session on breaking bad news. “Our teacher role-played a patient who had come to receive the results of a breast biopsy. We medical students took turns telling the patient that the biopsy showed cancer. Before that session, I thought breaking such news was the most daunting aspect of patient care and the epitome of medicine’s human side. Delivering bad news means turning a pathologist’s technical description of flesh under the microscope into an everyday conversation with the person whose flesh it is. I presumed that all it required of me was to be a human and to act like it.”
He describes the process as “technical.” He adds: “The teacher gave us a list of dos and don’ts… Once the news is delivered, pause for a moment to give the patient a chance to absorb it. Don’t say phrases like ‘I’m sorry,’ since the diagnosis isn’t your fault.”
“Somehow the least scientific thing I learned in medical school turned out to be the most formulaic.”
“In the years since, I’ve recited versions of the ‘bad news’ script to scores of patients while working as an emergency room doctor. For patients and their families, these conversations can be life-changing, yet for me it is just another day at work – a colossal mismatch in emotion. The worse the prognosis, the more eagerly I reach for those memorized lines to guide me.”
He thus sees a role for AI. He further argues that medical conversations aren’t so different from other conversations. “The truth is that prewritten scripts have always been deeply woven into the fabric of society. Be it greetings, prayer, romance or politics, every aspect of life has its dos and don’ts. Scripts – what you might call ‘manners’ or ‘conventions’ – lubricate the gears of society.”
He closes on a provocative note: “There are linguistic formulas for human empathy and compassion, and we should not hesitate to use good ones, no matter who — or what — is the author.”
A few thoughts:
1. This is a well-argued essay.
2. Do physicians simply follow a script – even during challenging conversations?
3. AI has been considered in past Readings, of course, including a review of the Ayers et al. study mentioned by Reisman which suggests that AI can produce more empathic responses than doctors. You can find it here:
The full NYT essay can be found here:
https://www.nytimes.com/2024/10/05/opinion/ai-chatgpt-medicine-doctor.html
Reading of the Week. Every week I pick articles and papers from the world of Psychiatry.