
ChatGPT in psychiatry: promises and pitfalls

  • Open Access
  • 01.12.2024
  • Commentary

Abstract

ChatGPT has become a hot topic of discussion since its release in November 2022. The number of publications on the potential applications of ChatGPT in various fields is on the rise. However, viewpoints on the use of ChatGPT in psychiatry are lacking. This article aims to address this gap by examining the promises and pitfalls of using ChatGPT in psychiatric practice. While ChatGPT offers several opportunities, further research is warranted, as the use of chatbots like ChatGPT raises various technical and ethical concerns. Some practical ways of addressing the challenges for the use of ChatGPT in psychiatry are also discussed.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Abbreviations

AI: Artificial intelligence
GPT: Generative pretrained transformer
ML: Machine learning
PTSD: Post-traumatic stress disorder

Introduction

The launch of ChatGPT in November 2022 was met with unprecedented success. The chatbot took the world by storm: its user base climbed to 100 million within just 2 months of its release, making ChatGPT the fastest-growing application of its kind in history [1]. The use of artificial intelligence (AI) and chatbots in healthcare, including psychiatry, is not a new concept. As technology advances and new applications emerge, AI has brought about groundbreaking changes in the field of psychiatry. Since the release of ChatGPT, there has been an increasing number of publications on its applications in various areas such as scientific writing [2], language editing [3] and medical education [4]. However, opinions on the use of ChatGPT in psychiatry are lacking. This article aims to examine the opportunities and potential drawbacks of incorporating ChatGPT into psychiatric practice. In addition, some practical approaches to mitigating the challenges posed by the psychiatric applications of ChatGPT are explored.

What is ChatGPT?

GPT is the abbreviation for Generative Pretrained Transformer. ChatGPT is a chatbot developed by OpenAI, built on GPT version 3.5. Prior to ChatGPT, there were three generations of GPT, namely GPT-1, GPT-2 and GPT-3. ChatGPT is a sibling model of InstructGPT and uses a transformer-based architecture to generate human-like text and conversation in response to users’ inquiries [5]. On 14 March 2023, OpenAI released a newer version of GPT known as GPT-4 [6]. Unlike ChatGPT, GPT-4 is not freeware and is only available to ChatGPT Plus users who pay a subscription fee. This article focuses on ChatGPT, which is currently available to users at no cost and is therefore more relevant to this discussion.

Opportunities of using ChatGPT in psychiatry

The use of chatbots in psychiatry began long before the introduction of ChatGPT. For example, the application of earlier chatbots like Eliza and Woebot has provided valuable insights into the opportunities that ChatGPT can offer [7]. Previous studies have investigated several applications of chatbots in psychiatric practice, including patient education and disease prevention [8], mental health screening [9], detection of self-harm [10] and suicidal ideation [11], as well as patient management, such as the delivery of cognitive behavioral therapy [12]. Table 1 summarizes specific examples of using ChatGPT in psychiatry.
Table 1
Specific examples of using ChatGPT in psychiatry

Use of ChatGPT: Teaching in social psychiatry [13]
Key findings:
• ChatGPT was found to have the following functions:
  ▪ Acting as an information provider
  ▪ Serving as a tool for debates and discussions
  ▪ Creating content for course materials
  ▪ Generating hypothetical case vignettes in the area of social psychiatry

Use of ChatGPT: Clinical and ethical reasoning, diagnosis, treatment and prognosis using psychiatry case vignettes [14]
Key findings:
• The study investigated ChatGPT’s ability in clinical and ethical reasoning, diagnosis, treatment and prognosis using 100 psychiatry case vignettes, with the following performance:
  ▪ Scored grade A for 61 cases
  ▪ Scored grade B for 31 cases
  ▪ Scored grade C for 8 cases
  ▪ No responses were graded “D”

Use of ChatGPT: Generation of psychodynamic formulations for psychiatric cases [15]
Key findings:
• ChatGPT could generate psychodynamic formulations that were rated appropriate by psychiatrists
• The formulations were further improved by additional instructions such as self-psychology, ego-psychology and object relations

Use of ChatGPT: Answering questions about clinical psychiatry [16]
Key findings:
• The study compared answers from participants (psychiatrists and psychiatry residents) to clinical psychiatry questions using information from ChatGPT and other sources
• Participants who used ChatGPT outperformed those who used other sources of information
• ChatGPT performed well, with an overall score of 8 out of 10 for accuracy, nuance and completeness

Use of ChatGPT: Diagnosis and treatment recommendation [17]
Key findings:
• A hypothetical psychiatric case was presented to ChatGPT
• The performance of ChatGPT was as follows:
  ▪ Correctly gave the diagnosis of treatment-resistant schizophrenia
  ▪ Provided relevant tests to rule out other causes of psychosis
  ▪ Proposed treatment plans consistent with current standards of care, including medications
  ▪ Provided a comprehensive list of potential side effects of the proposed medications

Use of ChatGPT: Training a machine learning (ML) model to identify a psychiatric disorder [18]
Key findings:
• The study used ChatGPT to train an ML model to identify post-traumatic stress disorder (PTSD) after childbirth
• The ML model was able to identify childbirth-related PTSD from maternal childbirth narrative data
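The last example in Table 1, classifying narrative text into disorder-related and unrelated cases, can be sketched in miniature. The snippet below is a hypothetical illustration only: the toy narratives, the labels and the naive Bayes classifier are invented for demonstration and do not reflect the actual model, data or methods of the cited study [18].

```python
from collections import Counter
import math

# Hypothetical toy data. In the study cited as [18], ChatGPT was used to help
# train a model on real childbirth narratives; here both the narratives and
# the labels (1 = PTSD-related, 0 = not) are invented for illustration.
TRAIN = [
    ("I kept reliving the delivery and could not sleep", 1),
    ("flashbacks of the emergency made me avoid the hospital", 1),
    ("the birth went smoothly and I felt supported", 0),
    ("a calm delivery with no complications", 0),
]

def tokenize(text):
    return text.lower().split()

def train_nb(data):
    """Collect per-class word counts and class priors for naive Bayes."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(label for _, label in data)
    for text, label in data:
        counts[label].update(tokenize(text))
    return counts, priors

def predict(counts, priors, text):
    """Return the class with the higher log-posterior (Laplace smoothing)."""
    vocab = set(counts[0]) | set(counts[1])
    total_docs = sum(priors.values())
    best, best_score = None, float("-inf")
    for label in (0, 1):
        total_words = sum(counts[label].values())
        score = math.log(priors[label] / total_docs)
        for word in tokenize(text):
            score += math.log(
                (counts[label][word] + 1) / (total_words + len(vocab))
            )
        if score > best_score:
            best, best_score = label, score
    return best

counts, priors = train_nb(TRAIN)
print(predict(counts, priors, "flashbacks and reliving the delivery"))  # → 1
print(predict(counts, priors, "a smoothly supported calm birth"))       # → 0
```

The point of the sketch is only the pipeline shape, narrative text in, risk label out; any real clinical classifier would need validated data, a far stronger model and careful evaluation.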
The author believes that, as a robust AI-powered chatbot trained on a large corpus of data, ChatGPT possesses the capability to offer diverse mental health services, similar to other chatbots previously reported. However, it is important to note that ChatGPT should be used as a supportive tool rather than a replacement for the expert opinions and services provided by a psychiatrist. Some advantages of using ChatGPT in psychiatry include (1) its around-the-clock availability, (2) reduced stigma associated with seeking healthcare from a professional, (3) cost-effectiveness relative to the high costs of traditional psychiatric care and (4) efficiency owing to reduced waiting times and quick access to large volumes of information.

Potential pitfalls of using ChatGPT in psychiatry

The use of ChatGPT is associated with several potential drawbacks and limitations. Like any computer system, chatbots can make mistakes. One prominent example is a mistake made by Google’s Bard in a promotional material, which caused a $100-billion plunge in Alphabet’s market value [19]. This section discusses several potential pitfalls and the social and ethical concerns of using ChatGPT in psychiatry.

Limited emotional intelligence

A lack of empathy and emotional understanding is one of the disadvantages of using ChatGPT in psychiatry, as machines have difficulty comprehending complex human emotions. Unlike a psychiatrist, ChatGPT cannot interpret non-verbal cues such as facial expressions and body language. ChatGPT may therefore cause miscommunication by generating inappropriate responses that can confuse and mislead patients.

Overreliance on technology

Overdependence on AI in clinical decision making can weaken practitioners’ critical thinking skills. This, in turn, can lead to inaccurate diagnoses and inappropriate treatment plans. In addition, relying heavily on AI in psychiatry can lead to the dehumanization of mental healthcare. Consequently, this can compromise trust and communication and have a negative impact on the doctor–patient relationship.

Lack of accountability

The legal and ethical considerations of utilizing AI in healthcare have long been a subject of debate, particularly when AI makes fatal mistakes, which raises the question of who should be held responsible [20]. As ChatGPT has limited emotional intelligence, the chatbot may give a wrong diagnosis, leading to inadequate or inappropriate treatment. In one study, Elyoseph and Levkovich investigated the potential of ChatGPT in suicide risk assessment and reported that ChatGPT underestimated suicide risk and mental resilience in a hypothetical case [21]. Furthermore, machines have a limited ability to handle crises such as suicidal or violent behavior. Failure to address these situations in a timely manner may result in life-threatening consequences.

Privacy and confidentiality

Another common subject of debate concerning the use of AI in healthcare is patient privacy and confidentiality [22]. As numerous conversations with ChatGPT are generated daily, concerns about platform security arise, particularly when sensitive patient data are archived. Transparency is also a common concern, especially when there is limited understanding of the complex AI algorithms used in an application, leading to the “black box” problem and a lack of trust [23]. In addition, the training of chatbots is crucial to information accuracy. AI is prone to bias when the training data are unrepresentative [24]. Given that the training data significantly influence its output, ChatGPT may provide biased information, resulting in poor generalization, misdiagnoses and fatal outcomes.
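One concrete safeguard for the privacy concerns above is to mask identifiable details in free text before it ever reaches an external chatbot service. The sketch below is a deliberately simplified, hypothetical example, the patterns and placeholders are invented; real clinical de-identification requires validated tooling, not a handful of regular expressions.

```python
import re

# Hypothetical pre-processing step: mask obvious identifiers in a note
# before the text is sent to any external chatbot API. These patterns
# (an SSN-style ID, an email address, a 10-digit phone number) only
# illustrate the principle and are far from exhaustive.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[ID]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{10}\b"), "[PHONE]"),
]

def redact(note):
    """Replace each matched identifier pattern with a neutral placeholder."""
    for pattern, placeholder in PATTERNS:
        note = pattern.sub(placeholder, note)
    return note

print(redact("Contact john.doe@example.com, ID 123-45-6789"))
# → Contact [EMAIL], ID [ID]
```

Even with such masking, archived conversations remain sensitive, so redaction complements, rather than replaces, platform-level security and governance.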

Other ethical and social concerns

Not every patient has access to ChatGPT. Using ChatGPT requires a device such as a computer, mobile phone or tablet with internet access, yet patients in lower socioeconomic groups or low-income countries may not be able to afford such devices. This raises concerns about accessibility and equality. Furthermore, the increasing use of AI could displace jobs held by mental healthcare professionals. It is therefore essential that the use of AI does not come at the cost of human expertise.

Potential pitfalls from the patients’ perspectives

Although AI has become increasingly popular in recent years, there are several concerns from the patients’ perspectives. Patients’ views on the use of AI, including ChatGPT, are mixed and evolving. While some patients embrace AI with open arms because of its round-the-clock availability, others may have concerns about its impersonal nature. Some patients are not tech savvy and may not be able to use AI effectively, especially psychiatric patients whose conditions impair their ability to do so. Furthermore, vulnerable psychiatric patients may be subject to exploitation and manipulation by AI, especially when they lack social support or an understanding of the limitations of AI technologies like chatbots.

Overcoming the challenges for the use of ChatGPT in psychiatry

Addressing the challenges for the use of ChatGPT in psychiatry requires a multi-faceted approach:

Education and training

It is important to ensure that both mental healthcare professionals and patients receive adequate education and training on the appropriate use of AI technologies like ChatGPT in psychiatry. They need to understand the strengths and limitations of ChatGPT and use it carefully. Practitioners should be reminded that ChatGPT is a supportive tool in clinical decision making and should not replace their expertise, whereas patients should be informed of their rights regarding the confidentiality of their data and informed consent.

Optimization of technology

Enhancement of the current technology is necessary to minimize errors and biases. This can be done by ensuring that the data used to train chatbots like ChatGPT are diverse and representative, to avoid diagnostic inaccuracy as well as discriminatory practices. The emotional intelligence of AI models can be further improved by strengthening their sentiment analysis capabilities to better understand complex emotions. In addition, the development of user-friendly interfaces that expose the AI’s reasoning process can build trust and enhance transparency.
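The sentiment analysis mentioned above can be illustrated with a deliberately minimal sketch. The word lists and scoring rule below are hypothetical and chosen purely for illustration; production systems rely on trained models rather than fixed keyword lists, precisely because complex emotions resist this kind of surface matching.

```python
# A minimal, purely illustrative lexicon-based sentiment scorer. The word
# lists are hypothetical; real emotion-aware systems use trained models,
# not fixed keyword lists.
NEGATIVE = {"hopeless", "worthless", "anxious", "afraid", "alone"}
POSITIVE = {"hopeful", "calm", "supported", "better"}

def sentiment_score(text):
    """Crude polarity in [-1, 1]: (positive - negative cues) / total cues."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(sentiment_score("i feel hopeless and alone"))  # → -1.0
print(sentiment_score("feeling calm and hopeful"))   # → 1.0
```

The gap between this sketch and genuine emotional understanding, negation, sarcasm, context, non-verbal cues, is exactly why the emotional intelligence of current chatbots remains limited.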

Development of ethical guidelines and regulatory framework

Healthcare providers such as hospitals and clinics can develop clear ethical guidelines for using ChatGPT or other AI technologies in mental healthcare. These guidelines should encompass patient data privacy and confidentiality, patient autonomy and accountability. Policymakers, in turn, should develop regulatory frameworks to ensure the safe and responsible use of AI technologies in healthcare. There should also be regular monitoring and evaluation of the use of these technologies.

Rigorous research and development

Continuous research and development, including clinical trials and real-world data analysis in clinical settings, is necessary to evaluate the safety and effectiveness of chatbots such as ChatGPT. Such research should take a multi-disciplinary approach, involving psychiatrists, technologists, policymakers and ethicists, to address emerging challenges and to facilitate the integration of AI into psychiatry.

Conclusions

Embracing new technologies in psychiatry can drive innovation in mental healthcare. Considering the rising trend of AI in healthcare, ChatGPT shows great potential in psychiatry. However, many technical and ethical questions remain unanswered, and more research is necessary before ChatGPT can be widely implemented in psychiatry. Psychiatrists, ethicists, technologists and policymakers should take a multi-pronged approach to addressing key challenges. Patient and practitioner education, optimization of the current technology, and the development of regulatory frameworks and ethical guidelines can help ensure the safe and effective use of AI in psychiatry. Importantly, chatbots like ChatGPT should not replace the expertise of psychiatrists. After all, psychiatry is not only a science but also an art that requires human interaction.

Acknowledgements

Not applicable.

Declarations

Not applicable.
Not applicable.

Competing interests

The author declares no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Title
ChatGPT in psychiatry: promises and pitfalls
Author
Rebecca Shin-Yee Wong
Publication date
01.12.2024
Publisher
Springer Berlin Heidelberg
DOI
https://doi.org/10.1186/s41983-024-00791-2
References

1. Reuters. ChatGPT sets record for fastest-growing user base - analyst note. 2023. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/. Accessed 28 Mar 2023.
2. Alkaissi H, McFarlane SI. Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus. 2023;15(2):e35179. https://doi.org/10.7759/cureus.35179.
3. Salvagno M, Taccone FS, Gerli AG. Can artificial intelligence help for scientific writing? Crit Care. 2023;27(1):75. https://doi.org/10.1186/s13054-023-04380-2.
4. Mbakwe AB, Lourentzou I, Celi LA, Mechanic OJ, Dagan A. ChatGPT passing USMLE shines a spotlight on the flaws of medical education. PLOS Digit Health. 2023;2(2):e0000205. https://doi.org/10.1371/journal.pdig.0000205.
5. OpenAI. ChatGPT. 2023. https://openai.com/blog/chatgpt. Accessed 28 Mar 2023.
6. OpenAI. GPT-4. 14 March 2023. https://openai.com/research/gpt-4. Accessed 28 Mar 2023.
7. Pham KT, Nabizadeh A, Selek S. Artificial intelligence and chatbots in psychiatry. Psychiatr Q. 2022;93(1):249–53. https://doi.org/10.1007/s11126-022-09973-8.
8. Fitzsimmons-Craft EE, Chan WW, Smith AC, Firebaugh ML, Fowler LA, et al. Effectiveness of a chatbot for eating disorders prevention: a randomized clinical trial. Int J Eat Disord. 2022;55(3):343–53. https://doi.org/10.1002/eat.23662.
9. Schick A, Feine J, Morana S, Maedche A, Reininghaus U. Validity of chatbot use for mental health assessment: experimental study. JMIR Mhealth Uhealth. 2022;10(10):e28082. https://doi.org/10.2196/28082.
10. Deshpande S, Warren J. Self-harm detection for mental health chatbots. Stud Health Technol Inform. 2021;281:48–52. https://doi.org/10.3233/SHTI210118.
11. Sels L, Homan S, Ries A, Santhanam P, Scheerer H, Colla M, et al. SIMON: a digital protocol to monitor and predict suicidal ideation. Front Psychiatry. 2021;12:554811. https://doi.org/10.3389/fpsyt.2021.554811.
12. Jang S, Kim JJ, Kim SJ, Hong J, Kim S, Kim E. Mobile app-based chatbot to deliver cognitive behavioral therapy and psychoeducation for adults with attention deficit: a development and feasibility/usability study. Int J Med Inform. 2021;150:104440. https://doi.org/10.1016/j.ijmedinf.2021.104440.
13. Smith A, Hachen S, Schleifer R, Bhugra D, Buadze A, Liebrenz M. Old dog, new tricks? Exploring the potential functionalities of ChatGPT in supporting educational methods in social psychiatry. Int J Soc Psychiatry. 2023;69(8):1882–9. https://doi.org/10.1177/00207640231178451.
14. Franco D’Souza R, Amanullah S, Mathew M, Surapaneni KM. Appraising the performance of ChatGPT in psychiatry using 100 clinical case vignettes. Asian J Psychiatr. 2023;89:103770. https://doi.org/10.1016/j.ajp.2023.103770.
15. Hwang G, Lee DY, Seol S, Jung J, Choi Y, Her ES, et al. Assessing the potential of ChatGPT for psychodynamic formulations in psychiatry: an exploratory study. Psychiatry Res. 2023;331:115655. https://doi.org/10.1016/j.psychres.2023.115655.
16. Luykx JJ, Gerritse F, Habets PC, Vinkers CH. The performance of ChatGPT in generating answers to clinical questions in psychiatry: a two-layer assessment. World Psychiatr. 2023;22(3):479–80. https://doi.org/10.1002/wps.21145.
17. Galido PV, Butala S, Chakerian M, Agustines D. A case study demonstrating applications of ChatGPT in the clinical management of treatment-resistant schizophrenia. Cureus. 2023;15(4):e38166. https://doi.org/10.7759/cureus.38166.
18. Bartal A, Jagodnik KM, Chan SJ, Dekel S. ChatGPT demonstrates potential for identifying psychiatric disorders: application to childbirth-related post-traumatic stress disorder. Res Sq. 2023. https://doi.org/10.21203/rs.3.rs-3428787/v1.
19. Reuters. Alphabet shares dive after Google AI chatbot Bard flubs answer in ad. 2023. https://www.reuters.com/technology/google-ai-chatbot-bard-offers-inaccurate-information-company-ad-2023-02-08/. Accessed 29 Mar 2023.
20. Naik N, Hameed BMZ, Shetty DK, Swain D, Shah M, Paul R, et al. Legal and ethical consideration in artificial intelligence in healthcare: who takes responsibility? Front Surg. 2022;9:862322. https://doi.org/10.3389/fsurg.2022.862322.
21. Elyoseph Z, Levkovich I. Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment. Front Psychiatry. 2023;14:1213141. https://doi.org/10.3389/fpsyt.2023.1213141.
22. Murdoch B. Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Med Ethics. 2021;22(1):122. https://doi.org/10.1186/s12910-021-00687-3.
23. Sariyar M, Holm J. Medical informatics in a tension between black-box AI and trust. Stud Health Technol Inform. 2022;289:41–4. https://doi.org/10.3233/SHTI210854.
24. Norori N, Hu Q, Aellen FM, Faraci FD, Tzovara A. Addressing bias in big data and AI for health care: a call for open science. Patterns (N Y). 2021;2(10):100347. https://doi.org/10.1016/j.patter.2021.100347.
