From the Editor
Should psychiatrists comment on the possible mental health problems of President Donald Trump? His niece thinks so.
In our first selection, we consider a Washington Post essay by Mary L. Trump. The psychologist discusses the Goldwater Rule, which prevents members of the American Psychiatric Association from commenting on political figures. Trump feels that psychiatrists should speak up: “Adopting a notionally neutral stance in this case doesn’t just create a void where professional expertise should be – it serves to normalize dysfunctional behavior.” We consider the essay and the questions it raises.
Goldwater of the Goldwater Rule
In the other selection, we pick another current topic, but this time we draw from a journal, not a newspaper, considering a new paper from The Canadian Journal of Psychiatry. Aditya Nrusimha Vaidyam (of Harvard University) and his co-authors review chatbots for mental health; that is, “digital tools capable of holding natural language conversations and mimicking human-like behavior in task-oriented dialogue with people” (think Siri or Alexa, but for mental disorders). “This review revealed few, but generally positive, outcomes regarding conversational agents’ diagnostic quality, therapeutic efficacy, or acceptability.”
Selection 1: “Mary Trump: Psychiatrists know what’s wrong with my uncle. Let them tell voters.”
The Washington Post, 22 October 2020
In 1964, Fact magazine published an unscientific survey asking psychiatrists whether they thought the Republican nominee, Barry Goldwater, was psychologically fit to serve as president of the United States. The problem wasn’t that professionals felt the need to share their views of what they considered Goldwater’s dangerous ideas; it was the irresponsible and often bizarre analyses that were in some cases based entirely on rank speculation. “Goldwater is basically a paranoid schizophrenic” who “resembles Mao Tse-tung,” one offered. Another said that he “has the same pathological make-up as Hitler, Castro, Stalin and other known schizophrenic leaders.” A third said that “a megalomaniacal, grandiose omnipotence appears to pervade Mr. Goldwater’s personality.”
Embarrassed by this debacle, the American Psychiatric Association (APA) established the “Goldwater Rule,” which barred its members from diagnosing public figures. It concluded that “it is unethical for a psychiatrist to offer a professional opinion unless he or she has conducted an examination and has been granted proper authorization for such a statement.”
So begins an essay by Trump.
She doesn’t take issue with the original Goldwater Rule so much as with its recent revision.
“In March 2017, shortly after my uncle, Donald Trump, was inaugurated, the APA didn’t just reaffirm the rule – it expanded it past the point of coherence. Not only were members prohibited from diagnosing public figures, now they could no longer offer a professional opinion of any sort, no matter how well supported or evidence-based, even if they believed that a public figure posed a threat to the country’s citizens or national security.”
She forwards two arguments:
- “While psychiatric diagnosis is a technical process, it is entirely within bounds to draw conclusions based on observable behavior. It is one thing to declare definitively that a person has anti-social personality disorder (a specific diagnostic term); it is another to point to behaviors – such as deliberately putting other people in harm’s way for no discernible reason (for example, abandoning our Kurdish allies) beyond one’s own self-interest – and express the general conclusion that it is dangerous to have somebody in the Oval Office who is incapable of empathy.”
- “The APA has also stated that ‘psychiatrists are medical doctors; evaluating mental illness is no less thorough than diagnosing diabetes or heart disease.’ That’s true – but what might a cardiologist say if a public figure kept having heart attacks?”
Trump focuses on the President’s lack of truthfulness and his impulsivity. “If we look at the past 3½ years, Donald has lied publicly in excess of 20,000 times; he has impulsively, and against all reason, gone against the advice of experts who could have helped contain the pandemic and protect the economy; he has put private citizens at risk by attacking them on Twitter because they have criticized him; he has proved himself to be incapable of accepting responsibility, changing course or exhibiting empathy.”
She then draws on her own background as a clinician:
“I am a trained clinical psychologist and have worked as a clinician. If Donald had walked into my office for an evaluation, I would have gathered less information about him from a normal intake interview than I could gather from the countless hours of video available from his decades in the public eye. Often when self-reports aren’t available – because the patient is either unable or unwilling to offer information – the clinician turns to those close to the patient in order to fill in the blanks. But none of that is necessary because examples of Donald’s disordered, impulsive, self-defeating and destructive behavior, which are unlikely to present themselves in a clinical setting, have been extensively recorded.”
A few thoughts:
- This is a lively and well-argued opinion piece.
- Is it persuasive? Readers can judge for themselves.
- The Goldwater Rule is a bit more nuanced than she describes. From section 7 of American Psychiatric Association’s Principles of Medical Ethics: “On occasion psychiatrists are asked for an opinion about an individual who is in the light of public attention or who has disclosed information about himself/herself through public media. In such circumstances, a psychiatrist may share with the public his or her expertise about psychiatric issues in general. However, it is unethical for a psychiatrist to offer a professional opinion unless he or she has conducted an examination and has been granted proper authorization for such a statement.”
- Is the 2017 “clarification” a profound change? Trump argues it is. A read of the APA’s Ethics Committee opinion suggests otherwise. You can find it here: https://www.psychiatry.org/File%20Library/Psychiatrists/Practice/Ethics/APA-Ethics-Committee-Goldwater-Opinion.pdf
- Needless to say, over the years, others have forwarded this argument. In a New Statesman article, a few psychiatrists criticize the Goldwater Rule. Yale-affiliated psychiatrist Bandy X. Lee, for example, comments: “Expert voices have been very deliberately and dangerously silenced. The detriment to the public’s health is obvious.” The full essay is here: https://www.newstatesman.com/world/2020/09/silencing-psychiatry-goldwater-rule-doing-more-harm-good-ahead-us-2020-election
- Should psychiatrists comment on politicians (like Donald Trump) lying? There may be much work ahead if we want to take on that task …
- What about the argument about public safety and, ultimately, public health? Trump makes a good case, as does Dr. Lee. But how do we judge public health? She is concerned about COVID-19. But, depending on one’s political views, it’s possible to have very different opinions about what serves public health. Drawing from recent polling data: What about better medical coverage for the poor? Gun control? Economic growth? Legalization of cannabis? (And a quick word of thanks to Dr. David Goldbloom for the conversation that helped inform these comments.)
The full Washington Post essay can be found here:
Selection 2: “Changes to the Psychiatric Chatbot Landscape: A Systematic Review of Conversational Agents in Serious Mental Illness”
Aditya Nrusimha Vaidyam, Danny Linggonegoro, John Torous
The Canadian Journal of Psychiatry, 16 October 2020 Online First
The need for digital tools in mental health is clear, with insufficient access to mental health services worldwide and clinical staff increasingly unable to meet rising demand…
Conversational agents, also known as chatbots or voice assistants, are digital tools capable of holding natural language conversations and mimicking human-like behavior in task-oriented dialogue with people. Conversational agents exist in the form of hardware devices such as Amazon Echo or Google Home as well as software apps such as Amazon Alexa, Apple Siri, and Google Assistant. It is estimated that 42% of U.S. adults use digital voice assistants on their smartphone devices, and some industry studies claim that nearly 24% of U.S. adults own at least 1 smart speaker device. With such widespread access to conversational agents, it is understandable that many are interested in their potential and role in health care. Early research has explored the use of conversational agents in a diverse range of clinical settings such as helping with diagnostic decision support, education regarding mental health, and monitoring of chronic conditions. In 2018, the most common condition that conversational agents claimed to cover was related to mental health…
In today’s evolving landscape of rapidly changing technology, growing global health concerns, and lack of access to high-quality mental health care, use and evaluation of these conversational agents of mental health continue to evolve. Since our team’s 2018 review on the topic, many new conversational agents, products, and research studies have emerged.
So begins a paper by Vaidyam et al.
Here’s what they did:
- As with their previous review, they did a search with keywords such as “conversational agent” and “chatbot” of several databases (including PubMed).
- Studies were selected that focused on serious mental illness.
- “Studies were excluded if the study protocol did not measure the direct effect of the use of a conversational agent or did not at all involve the conversational agent in diagnosis, management, or treatment of SMI.”
Here’s what they found:
- There were 247 references, many of them duplicates. Following PRISMA guidelines, seven papers were selected.
- The mean age of participants was 34.3 years; the mean number of participants: 74; the mean study duration: 4.6 weeks. Most studies involved outpatients, with one exception.
- There were no papers where participants had schizophrenia or bipolar disorder. See the table below.
- “There continued to be no inclusion of children or consideration for emergency situations in these studies as well as minimal reporting of adverse effects.”
- Two of these studies assessed diagnostic quality, three examined therapeutic efficacy, and two evaluated acceptability.
- There was some evidence for the use of chatbots. We highlight a few studies. “Three studies examined the therapeutic efficacy of different conversational agents. Fulmer et al. found that the conversational agent Tess was able to reduce self-identified symptoms of depression and anxiety in college students. Inkster et al. studied the conversational agent Wysa and found users who were more engaged with the conversational agent had significantly higher average mood improvement compared to lower engagement users. Suganuma et al. found that the conversational agent SABORI was effective in improving metrics on WHO-5, a measure of well-being, and Kessler 10, a measure of psychological distress on the anxiety-depression spectrum.”
- In terms of measuring acceptability, there were two studies: “Martínez et al. assessed acceptability, perception, and adherence of users toward HelPath, a conversational agent that is used to detect suicidal behavior. Participants perceived HelPath as emotionally competent and reported a positive level of adherence. Philip et al. found that the majority (68.2%) of patients rated the virtual medical agent positively (very satisfied)…”
A few thoughts:
- This is a good review.
- The paper suggests early, and generally positive, evidence for these interventions.
- As was the case in the previous review, participants were quite accepting of the chatbots (at least in two studies). (!)
- The authors note inconsistencies in reporting: “There were few improvements in the standardization of conversational agent evaluation in this review compared to our prior 2018 review.”
- What should you think about chatbots? There are a few that can be downloaded from the App Store, including Wysa. Why not spend some time exploring these apps? Your patients are. Based on my use, these chatbots are interesting but not incredible. Still, it’s difficult not to be intrigued.
The paper can be found here:
Reading of the Week. Every week I pick articles and papers from the world of Psychiatry.