From the Editor

The idea is simple: if certain locations attract suicidal individuals, making it harder for suicides to occur at those places can help. After much debate, the City of Toronto did exactly that in 2003, constructing a suicide barrier for the Bloor Viaduct. Suicides there immediately declined.

What has been the long-term effect? And have the means of suicide deaths simply shifted? In the first selection, Dr. Mark Sinyor (of the University of Toronto) and his co-authors attempt to answer these questions. In a new study published in The Canadian Journal of Psychiatry, they drew on over two decades of data to analyze the impact of this suicide barrier. “Contrary to initial findings, these results indicate an enduring suicide prevention effect of the Bloor Viaduct suicide barrier.” We consider the study and its implications.

Pretty but lifesaving?

When it comes to medical education, much has changed over the years – including its name. What was once known as Continuing Medical Education (CME) is now referred to as Continuing Professional Development (CPD). But the changes go far beyond a simple rebranding. After all, the sheer volume of journal articles available today is staggering. How can you keep up? How can technology help? In the second selection, a new Quick Takes podcast, I speak with Dr. Sanjeev Sockalingam (of the University of Toronto) to explore the evolving world of CPD. “It took a pandemic to get us to realize that we could do so much online.”

Finally, in the third selection, from JAMA Network Open, Dr. Ethan Goh (of Stanford University) and his colleagues wonder if AI can assist physicians in making diagnoses. In an RCT, physicians were randomized to either conventional resources or those enhanced by access to AI (specifically, a large language model, or LLM). “In this trial, the availability of an LLM to physicians as a diagnostic aid did not significantly improve clinical reasoning compared with conventional resources.”

DG

Selection 1: “Long-Term Impact of the Bloor Viaduct Suicide Barrier on Suicides in Toronto: A Time-Series Analysis”

Mark Sinyor, Vera Yu Men, Ayal Schaffer, et al.

The Canadian Journal of Psychiatry, 5 November 2024, Online First

Restricting access to lethal means is an evidence-based suicide prevention strategy and it is 1 of 4 key population-level prevention strategies encouraged by the World Health Organization’s LIVE LIFE implementation guide. Structural interventions have long been used to prevent suicide at high-frequency sites for suicide including iconic bridges, buildings, and natural peaks. Pirkis et al. conducted a meta-analysis demonstrating that such interventions tend to decrease suicides at the frequently used locations themselves, with some evidence of location substitution at neighbouring sites but still resulting in substantial net reductions in suicides in an area. This latter finding is of crucial importance in understanding whether barriers truly lower suicide rates across a city or region; the degree to which this occurs in a sustained manner following barrier installation remains an open scientific question…

Installation of the Bloor Viaduct suicide barrier initiated a natural experiment at a high-frequency site for suicide in Toronto, Canada that provides a relatively unique opportunity to address the long-term effect of such barriers. Prior to the ‘Luminous Veil’ barrier, installed in 2003, the Bloor Viaduct had the second-highest yearly suicide counts of any bridge in North America after the Golden Gate Bridge in San Francisco. At the turn of the millennium, the 9 suicides occurring at the Bloor Viaduct each year accounted for half of suicides by jumping from bridges and 4% of all suicides in Toronto. An initial study by our group examining the first 4 years after the barrier indicated that substantial location substitution appeared to be occurring with a significant rise in suicides at other bridges that diminished the apparent effect of the barrier on overall suicides. However, a follow-up study examining a decade of data after the barrier’s creation found that this increase in suicides at other bridges was a transient observation, as the substitution effect no longer existed in later years…

So begins a paper by Sinyor et al.

Here’s what they did:

  • They conducted a retrospective observational study.
  • They examined “rates of suicides by jumping from the Bloor Viaduct, other bridges and by other methods using coroner’s records in Toronto (1998–2020).” 
  • They calculated quarterly counts of suicide deaths by jumping from bridges and other methods. “Linear growth within each 5-year period was assumed to interpolate the quarterly population in Toronto.”
  • Several statistical analyses were conducted, including “interrupted time-series Poisson regression analyses to model changes in quarterly bridge-related suicides after barrier installation.” (A minimal sketch of such a model follows this list.)
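
For readers curious about the method, here is a minimal sketch of an interrupted time-series Poisson regression in Python (statsmodels). The data file, variable names, and covariates are hypothetical – the authors’ actual model included additional covariates – but the structure (a step term for the barrier, a post-intervention trend, and the interpolated population as an offset) mirrors the approach described above.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per quarter, with columns
    #   suicides   - quarterly count of bridge-related suicides
    #   population - interpolated quarterly population of Toronto
    #   time       - quarter index (1, 2, 3, ...)
    #   barrier    - 0 before installation, 1 after (the "step")
    #   time_after - quarters elapsed since installation (the post-trend)
    df = pd.read_csv("toronto_quarterly_suicides.csv")

    # Poisson regression on counts, with log(population) as an offset so the
    # model describes rates rather than raw counts.
    model = smf.glm(
        "suicides ~ time + barrier + time_after",
        data=df,
        family=sm.families.Poisson(),
        offset=np.log(df["population"]),
    ).fit()

    # exp(coefficient) gives the incidence rate ratio (IRR): an IRR of 0.51 on
    # the step term corresponds to an immediate 49% drop, while an IRR near
    # 1.00 on the post-trend term means no rebound over time.
    print(np.exp(model.params))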

Here’s what they found:

  • There were 5,219 suicides from 1998 to 2020.
  • Suicides & bridges. 303 were by jumping from bridges. 
  • Short-term effect. “After controlling for covariates, the construction of the Luminous Veil was associated with a 49% step decrease in bridge-related suicide in the next quarter in Toronto (IRR 0.51…).”
  • Long-term effect. “The postintervention time trend indicated that there was no statistically significant rebound in bridge-related suicide after the original drop (IRR 0.99…); that is, the observed reduction persisted over time.”
  • Other methods. “There was also no associated change in suicides by other methods after the barrier (IRR = 1.04…).”

A few thoughts:

1. This is a good paper: it presents long-term data, adds nicely to the literature, and appears in a solid journal.

2. The main finding in seven words: the barrier worked and continues to work.

3. As the authors note: “Our results here concur with the findings of our earlier follow-up study examining the first decade of postbarrier data showing that suicides from bridges in Toronto had decreased with no location substitution, albeit now confirming that finding over nearly 2 decades. Specifically, our results are consistent with the notion that a high proportion of suicides that might otherwise have occurred at the Bloor Viaduct were likely truly prevented.” They calculate that 150 lives were saved.

4. The findings are similar to those of other long-term studies of suicide barriers, including one of the Duke Ellington Bridge in Washington, DC, which drew on three decades of data (though the total number of suicides was relatively small). There are, then, clear implications for public policy and for our understanding of suicide. As the authors eloquently write: “high-frequency sites for suicide should, at least in some respect, be considered distinct suicide methods that likely have their own sway and associated mental imagery for individuals. Our study is impactful by identifying an actionable way in which suicides can be prevented in cities and regions with iconic suicide locations.”

5. Like all studies, there are limitations. The authors note that the study is an “uncontrolled natural experiment and, as such, factors other than the Bloor Viaduct suicide barrier and/or covariates in the analysis may have been responsible for observed changes in suicide rates.”

6. Interested in reading more about suicide? The Lancet Public Health has a special issue on “a public health approach to suicide prevention.” It includes several papers, including one focused on economic downturns and suicide (first authored by Dr. Sinyor). You can find it here:

https://www.thelancet.com/series/suicide-prevention

For those interested in an excellent summary of these papers, Dr. Kirsten Lawson (of the Social Care Partnership NHS Trust) blogs here:

https://www.nationalelfservice.net/mental-health/suicide/suicide-prevention-expanding-the-narrative-to-preventing-the-crisis-not-just-treating-the-crisis/

The full CJP paper can be found here:

https://journals.sagepub.com/doi/full/10.1177/07067437241293978

Selection 2: “Exploring the future of education”

Sanjeev Sockalingam

Quick Takes, 27 November 2024

In this episode of Quick Takes, I speak with Dr. Sanjeev Sockalingam, the VP Education and CMO of CAMH. 

In our interview, we discuss how CPD now uses podcasts and blogs to cut through the “noise.” Our conversation covers the rise of alternative learning methods, such as microlearning. And, looking ahead, we touch on the potential role of AI.

Here, I highlight several comments:

On staying current

“You have to figure out what works for you. Conferences still have yield for me – at least for networking and being aware. Am I walking away after a conference knowing 100% of everything that I listened to and participated in? Probably not. But I might find a few tidbits or glean some insights about something I want to learn more about or integrate into my practice.

“I also use things that synthesize information like blogs, podcasts or posts. For example, Reading of the Week is a great summary of key articles.”

On microlearning

“Microlearning or doses of education – so videos and podcasts started becoming more common. Other snapshots, like online resources (for example, websites), all of those things have also increased in terms of transmitting information – all with pros and cons.”

On the globalization of CPD

“We are no longer confined to locations. We aren’t necessarily going to conferences in [specific] places or workshops in our communities. Once you go online, you can have many people across the world connecting and diversifying perspectives, sharing information. Globalization is happening.”

On learning and AI

“We know that doctors are not always so accurate on knowing where they have gaps in certain areas, and maybe believe that we might be a bit more proficient than we actually are. But wouldn’t it be amazing if we had data in front of us? That is, ‘you’re not using the right antidepressant algorithm?’ So if we could get that feedback – from electronic health records audits of how we deliver our sessions – it could inform how we actually learn. 

“You would get a report card and say, well, you need to focus a little bit on learning more about this.”

The above answers have been edited for length.

The podcast can be found here, and is just over 22 minutes long:

https://www.camh.ca/en/professionals/podcasts/quick-takes/qt-nov-2024—exploring-future-education-sanjeev-socklingham#

Selection 3: “Large Language Model Influence on Diagnostic Reasoning: A Randomized Clinical Trial”

Ethan Goh, Robert Gallo, Jason Hom, et al.

JAMA Network Open, 28 October 2024

Diagnostic errors are common, contribute to substantial patient harm, and result from a combination of cognitive and systems factors. Effective interventions to improve diagnostic performance and reduce diagnostic errors will need to focus on both systems factors and cognitive factors, often referred to as clinical reasoning. Strategies that have been advanced to improve clinical reasoning include a variety of educational, reflective, and team-based practices, as well as clinical decision support tools. The impact of these interventions has been limited, and even the most useful methods, such as reflective practice, are difficult to integrate clinically at scale. Artificial intelligence (AI) technologies have long been pursued as promising tools for assisting physicians with diagnostic reasoning.

Large language models (LLMs) – machine learning systems that produce humanlike responses from written language – have shown the ability to solve complex cases, exhibit humanlike clinical reasoning, take patient histories, and display empathetic communication. Due to their generalizable nature, LLMs are actively being integrated into multiple health care settings. Despite the impressive performance of these emerging technologies in benchmarking tasks, current integrations of LLMs require human participation, with the LLM augmenting, rather than replacing, human expertise and oversight. Understanding the implications of deploying these systems in patient care with limited workforce training and integration requires human-computer user studies with richer measures of diagnostic reasoning.

So begins a paper by Goh et al.

Here’s what they did:

  • They conducted a single-blind randomized clinical trial.
  • They recruited physicians from family medicine, internal medicine, and emergency medicine.
  • Participants were “randomized to either access the LLM in addition to conventional diagnostic resources or conventional resources only, stratified by career stage.” (A brief, hypothetical sketch of querying an LLM for a differential diagnosis follows this list.)
  • Participants were given 60 minutes to review up to six clinical vignettes.
  • The primary outcome: “performance on a standardized rubric of diagnostic performance based on differential diagnosis accuracy, appropriateness of supporting and opposing factors, and next diagnostic evaluation steps, validated and graded via blinded expert consensus.”
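
To make “access the LLM” concrete, here is a purely illustrative sketch of how a diagnostic aid might query a large language model for a differential diagnosis, using the OpenAI Python client. This is not the study’s actual setup (participants worked through a chat interface); the model name, prompt, and vignette are assumptions.

    from openai import OpenAI

    client = OpenAI()  # assumes an API key is set in the environment

    # Placeholder vignette; the trial's cases included history, examination,
    # laboratory, and imaging findings.
    vignette = "A 47-year-old presents with..."

    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": ("You are assisting a physician. Provide a prioritized "
                         "differential diagnosis with supporting and opposing "
                         "findings, and suggest next diagnostic steps.")},
            {"role": "user", "content": vignette},
        ],
    )

    print(response.choices[0].message.content)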

Here’s what they found:

  • 50 physicians participated.
  • Career stage. 26 were attendings and 24, residents. Mean years of practice: three.
  • Diagnostic reasoning. “The median diagnostic reasoning score per case was 76% for the LLM group and 74% for the conventional resources-only group, with an adjusted difference of 2 percentage points…”
  • Time. The median time spent per case for the LLM group was 519 seconds, compared with 565 seconds for the conventional resources group. 
  • AI. The LLM alone scored 16 percentage points higher than the conventional resources group. (!!)

A few thoughts:

1. This is an interesting study published in a reasonable journal. A word of caution: the total number of participants wasn’t large.

2. The main finding in five words: AI didn’t help the doctors.

3. The big plot twist: AI alone performed better than the physicians did. (!)

4. Is it time to retrain? Of course, there are reasons to be a bit skeptical of this paper. In an accompanying Editorial, Dr. Sumant R. Ranji (of the University of California) writes:

“The study’s cases were representative of common general practice diagnoses but are presented in an orderly fashion with the relevant history, physical examination, laboratory, and imaging results necessary to construct a prioritized differential diagnosis. Diagnosis in the clinical setting is an iterative – and complicated – process that takes place amid many competing demands and requires input from the patient, caregivers, and multiple clinicians in addition to objective data. Far from a linear process, diagnosis in the clinical practice setting involves progressively refining diagnoses based on new information, and the distinction between diagnosis and treatment is often blurred as clinicians incorporate treatment response into diagnostic reasoning.”

Though cool to the study, he makes an important point about its relevance.

“Achieving diagnostic excellence will require a system that potentiates clinicians’ ability to make accurate and timely diagnoses and supports patients through their diagnostic and therapeutic journey. Generative AI will be a part of this system, but successful integration of LLMs into clinical diagnosis will require technical improvements, training of clinicians, and thoughtful integration of technology into the clinical environment.” Well said.

5. So, no need to retrain just yet. But as AI comes of age, a challenge will be how to incorporate LLMs into clinical work – and how to educate providers so that they can sharpen their clinical skills.

6. AI has been considered in past Readings, including a podcast with Dr. John Torous (of Harvard University) in which we chat about ChatGPT and AI. You can find it here:

https://davidgratzer.com/reading-of-the-week/reading-of-the-week-self-stigma-also-chatgpt-mental-health-care-and-dr-catherine-hickey-on-the-opioid-crisis/

The full JAMA Netw Open paper can be found here:

https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2825395#google_vignette

Reading of the Week. Every week I pick articles and papers from the world of Psychiatry.