From the Editor

Having written only four papers, the author wouldn’t seem particularly noteworthy. Yet the work is causing a buzz. Indeed, JAMA published an Editorial about the author, the papers, and the implications.

That author is ChatGPT, which isn’t human, of course – and that’s why it has made something of a splash. More than a million people tried this AI program in the week after its November launch, using it to do everything from composing poetry to drafting essays for school assignments.

What to make of ChatGPT? What are the implications for psychiatry? And for our journals?

To the last question, some are already reacting; as noted above, JAMA published an Editorial last week and also updated its Instructions to Authors with several changes, including: “Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship.”

This week, we feature an original essay by Dr. Scott Patten (of the University of Calgary) for the Reading of the Week. Dr. Patten, who serves as the Editor Emeritus of The Canadian Journal of Psychiatry, considers ChatGPT and these three questions, drawing on his own use of the program.

(And we note that the field is evolving quickly. Since Dr. Patten’s first draft, Microsoft has announced a chatbot for the search engine Bing.)

DG

“An Introduction to ChatGPT”

Scott Patten

9 February 2023


ChatGPT is a natural language processing model produced by OpenAI – an American organization dedicated to developing AI applications and making the code and patents freely available to the public. The organization was founded in 2015 (Elon Musk was among its original founders). It also has a for-profit subsidiary that has partnered with Microsoft.

But I am wasting my time here. Instead of writing this essay, why not ask ChatGPT to describe itself?

SP: “Write a brief history of ChatGPT”

ChatGPT responds:

ChatGPT is a language model developed by OpenAI, a leading artificial intelligence research organization. The model is based on the transformer architecture, a type of neural network that has revolutionized the field of natural language processing.

ChatGPT was first introduced in 2019 as an improvement over OpenAI’s GPT-2, a large language model that was capable of generating human-like text. The development of ChatGPT was a significant step forward in the advancement of AI and its ability to understand and generate human language.

Since its release, ChatGPT has been widely used in a variety of applications, including chatbots, customer service, language translation, and text generation. The model has received recognition for its ability to generate high-quality text, understand context, and answer questions accurately.

In recent years, OpenAI has continued to improve ChatGPT, making it even more powerful and versatile. The model has grown in size, with an increased number of parameters, and has been fine-tuned on a variety of tasks and languages to improve its performance.

In conclusion, ChatGPT is a state-of-the-art language model developed by OpenAI that has been widely adopted in various applications due to its ability to generate human-like text and understand context. The model has a rich history of advancements and improvements, making it a powerful tool in the field of natural language processing.

The model represents an advance over prior versions of this language model – but it has caused a much greater stir for two reasons. First, it is able to understand the question being asked. Second, its responses have qualities that we instinctively recognize as a human voice. Like previous versions (and other natural language models), it is likely to be widely used – e.g., serving a customer-service role on websites – but the human-like tone of the “speech” it produces has led to widespread concerns. One concern comes from teachers and other educators, who note that it could be a tool for plagiarism. For example, asking ChatGPT to write a brief summary of lithium safety yields information consistent with what an undergraduate student might produce. It lacks the depth of information that would be expected of an expert, but certainly could have been written by a human:

SP: “Write a brief overview of lithium safety monitoring in psychiatry”

ChatGPT responds:

I can provide a brief overview on the topic of lithium safety monitoring in psychiatry:

Lithium is a commonly used mood stabilizer in the treatment of bipolar disorder and major depression. However, its use requires careful monitoring due to its narrow therapeutic index and the potential for toxicity. To ensure the safety and effectiveness of lithium therapy, regular monitoring of serum lithium levels and kidney function is necessary.

The therapeutic range of lithium levels is typically between 0.6-1.2 mMol/L. Levels above 1.5 mMol/L can lead to toxicity, while levels below 0.5 mMol/L may be ineffective. Toxicity can manifest in a variety of symptoms, including nausea, vomiting, diarrhea, tremors, confusion, and seizures. In severe cases, lithium toxicity can lead to coma and death.

To monitor the safety and effectiveness of lithium therapy, it is recommended that patients have their serum lithium levels measured at least every 3-6 months. In addition, regular monitoring of kidney function is necessary as lithium can cause renal impairment. Creatinine clearance, serum creatinine, and electrolyte levels should also be measured to assess kidney function.

It is important to note that there are various factors that can affect lithium levels and kidney function, including changes in medication, diet, fluid intake, and underlying medical conditions. Therefore, it is important for patients to inform their healthcare provider of any changes in their health or medication regimen.

In conclusion, lithium is an effective mood stabilizer for the treatment of bipolar disorder and major depression, but its use requires careful monitoring to ensure safety and effectiveness. Regular monitoring of serum lithium levels and kidney function is necessary to prevent toxicity and ensure the optimal therapeutic response. Patients should inform their healthcare provider of any changes in their health or medication regimen to ensure proper monitoring and management of their lithium therapy.

There have also been concerns among academic journal editors that output from this kind of language model could be passed off deceptively as academic articles. This seems unlikely, since ChatGPT is a language model that, in its present version, doesn’t search the internet; its content knowledge consists of materials collected from the internet in 2021. When I ask it who won the 2022 Grey Cup, it acknowledges this limitation in a way that still sounds quite human:

As my training data only goes up until 2021, I don’t have information on the winner of the 2022 Grey Cup. The Grey Cup is the championship game of the Canadian Football League (CFL) and is held annually. I suggest checking current sources such as news websites or the CFL’s official website for up-to-date information on the winner of the 2022 Grey Cup.

Until recently, universities prohibited the use of computer-generated text in assignments and exams – but many are now giving up the fight because of ChatGPT, acknowledging that it might be a way to quickly produce preliminary drafts of commentaries and essays. Transparency about the use of such tools, rather than prohibition, is now often the stated goal.

ChatGPT’s responses sometimes include incorrect information. When I asked it questions about myself a few days ago, it incorrectly asserted that I graduated from the University of Toronto. The same question a few days later produced a response saying that it can only comment on public figures. Its responses are being refined by user feedback: instead of providing inaccurate answers, ChatGPT increasingly acknowledges that, “as a language model,” it cannot answer a question. Similarly, it cannot distinguish misinformation or propaganda from other types of information found on the internet; consequently, users have been able to elicit racist statements from ChatGPT. However, it is subject to training influences, so its responses improve with feedback. For example:

SP: “Are vaccines safe?”

ChatGPT responds:

Yes, vaccines are safe. Vaccines have been rigorously tested for safety and effectiveness before being approved for use and continuously monitored for safety after they are on the market. According to the World Health Organization (WHO), vaccines are one of the safest and most effective ways to prevent disease.

Like all medical interventions, vaccines can have side effects, but the benefits of vaccination far outweigh the risks. The side effects of vaccines are generally mild, such as soreness at the injection site, and usually go away on their own within a few days. Serious side effects from vaccines are extremely rare, and the risk of having a serious reaction is much lower than the risk of serious complications from the diseases that the vaccine helps to prevent.

In conclusion, vaccines are safe and play a critical role in protecting people against serious and potentially life-threatening diseases. It’s important to rely on credible sources of information, such as the WHO, the Centers for Disease Control and Prevention (CDC), and local public health authorities, when evaluating the safety of vaccines.

An interesting feature of ChatGPT is that in addition to natural languages, it can also write code:

SP: “How can I fit a logistic regression model in Python”

ChatGPT responds:

In Python, you can fit a logistic regression model using the scikit-learn library. Here’s a basic example of how to fit a logistic regression model in Python:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Load the data into a Pandas DataFrame
data = pd.read_csv("data.csv")

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(data.iloc[:, :-1], data.iloc[:, -1], test_size=0.2)

# Fit the logistic regression model to the training data
model = LogisticRegression()
model.fit(X_train, y_train)

# Predict the response values for the test data
y_pred = model.predict(X_test)

# Print the model's accuracy
print("Accuracy:", model.score(X_test, y_test))

In the example above, the data is loaded into a Pandas DataFrame using pd.read_csv("data.csv"). The data is then split into training and testing sets using train_test_split(), which splits the data into 80% training data and 20% testing data by default. The logistic regression model is then fit to the training data using model.fit(X_train, y_train). Finally, the model’s accuracy is calculated and printed using model.score(X_test, y_test).

Note that this is just a basic example, and you may need to modify the code to fit your specific use case. For example, you may need to perform feature scaling, regularization, or other preprocessing steps to improve the model’s performance.

The concerns are the same – here, it assumed I wanted to split my data and evaluate the model on a held-out test set. If I didn’t know what that was, I might have been pointed in the wrong direction. However, its code works and is easier to read than the corresponding help files in Python’s documentation.
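
ChatGPT’s own caveat about preprocessing is worth taking seriously. As a minimal sketch – assuming the same hypothetical “data.csv” with the outcome in the last column, carried over from its example – one could add the feature scaling it mentions and replace the single 80/20 split with cross-validation, using scikit-learn’s Pipeline:

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Load the same hypothetical dataset used in ChatGPT's example
data = pd.read_csv("data.csv")
X, y = data.iloc[:, :-1], data.iloc[:, -1]

# Chain scaling and logistic regression so the scaler is fit only on
# the training folds of each split, avoiding leakage into the test folds
model = Pipeline([
    ("scale", StandardScaler()),
    ("logit", LogisticRegression()),
])

# Five-fold cross-validated accuracy is a more stable estimate than a
# single 80/20 split
scores = cross_val_score(model, X, y, cv=5)
print("Mean accuracy:", scores.mean())

This is not ChatGPT’s output, and the file name and column layout are assumptions taken from its example; the point is simply that the preprocessing steps it alludes to fit naturally into the same scikit-learn workflow.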

In summary, ChatGPT is a natural language model that represents a major advance because of the human-like qualities of its responses to questions posed in natural language. However, its advances relate mostly to language processing: it seems human, which is striking, but in my opinion it is largely regurgitating information from the internet in a human-like way. A recent Nature Commentary (https://www.nature.com/articles/d41586-023-00288-7) broadly agrees with this opinion, but warns that it is a “premature assumption” that this state of affairs will continue. The Commentary notes that in the not-too-distant future AI may be able to “…summarize literature, draft and improve papers, as well as identify research gaps and write computer code, including statistical analyses.”

The concern is that the natural language used in communicating these activities may make it difficult to clarify the role of AI when a scientific report is published – a concern amplified by the factual errors ChatGPT often makes. Similar worries apply to student essays, although plagiarism detectors typically give high scores to ChatGPT content; it has been noted that running the text through grammar-checking software (now widely used by students) substantially lowers those scores.

The JAMA Network has stated that ChatGPT should not be listed as an author of papers, and that if it is used as a writing tool this should be clearly stated (https://jamanetwork.com/journals/jama/fullarticle/2801170). Authors retain responsibility for referencing sources and ensuring factual accuracy. This emphasis on transparency is consistent with current scientific values. A recent editorial identified four papers in which ChatGPT was listed as an author; all were commentaries, and they were generally not published in credible journals, except for one that escaped the notice of an inattentive editor (https://www.nature.com/articles/d41586-023-00107-z).

Despite all of this, if you look at the essay written by ChatGPT about its potential role in psychiatry, it seems to try to sell itself in particular roles – for example, a possible triage function in clinical settings. These claims are not evidence-based, which doesn’t seem very transparent at all:

SP: “Write a presentation on the role of ChatGPT in psychiatry for an audience of psychiatrists and psychiatry residents”

ChatGPT responds:

Good morning/afternoon everyone. Today, I would like to talk to you about a state-of-the-art technology in the field of language processing: ChatGPT.

ChatGPT is a Generative Pre-trained Transformer-based language model, created and developed by OpenAI. It was fine-tuned on a diverse range of internet text data, making it capable of generating human-like responses to a wide range of topics.

In the context of psychiatry, ChatGPT can be a valuable tool in several ways. First, it can help in triaging patients who need immediate attention. For example, if someone is in a mental health crisis, they can reach out to ChatGPT for support and receive a prompt response. This can be particularly helpful for individuals who may not have access to immediate human support or who may feel uncomfortable speaking to a human.

Second, ChatGPT can be used as a tool to educate patients on mental health topics and provide them with information that can help improve their well-being. For example, if a patient is struggling with depression and doesn’t know much about the condition, they can engage in a conversation with ChatGPT and receive helpful information and resources.

Finally, ChatGPT can also be used as a tool for research purposes. By analyzing the conversations it has with patients, researchers can gain valuable insights into patient behaviors and mental health patterns. This information can then be used to improve existing mental health treatments and develop new ones.

In conclusion, ChatGPT is a powerful technology with many applications in the field of psychiatry. Whether it’s for triaging patients, educating patients, or conducting research, ChatGPT has the potential to make a significant impact on mental health care.

Thank you for your attention. Are there any questions?

Perhaps, at least at this stage, it is best to use ChatGPT for applications that are purely linguistic rather than ones requiring true intelligence – so I’ve asked it to write a poem about psychiatric residency:

Psychiatric training, a journey untold,
A path filled with challenges, wisdom to hold.
With each patient’s story, a lesson to learn,
A journey of healing, a bridge to burn.

In the depths of the mind, lies a world so vast,
With secrets and fears, from the past.
As a psychiatry resident, one must be bold,
To navigate this realm, and find what is untold.

With compassion and care, the healing begins,
With a listening ear, and a gentle grin.
For each person is unique, with a story to share,
And a journey to embark, to find solace and repair.

Through rounds and lectures, the knowledge grows,
With guidance from mentors, the resident flows.
Learning to diagnose, and treatments to apply,
The future is bright, with no room to shy.

So here’s to the residents, on this noble quest,
With hearts full of hope, they’ll do their best.
To bring comfort and peace, to those in need,
And to make a difference, in this world indeed.

Reading of the Week. Every week I pick articles and papers from the world of Psychiatry.