Illusions of intelligence, connection and reality: Perils of large-language AI models for people with severe mental illness

Jeffrey C.L. Looi, Stephen Allison, Tarun Bastiampillai, Sharon Reutens, Richard C.H. Looi

Research output: Contribution to journal › Editorial

Abstract

For people with mental illnesses that impair reality testing, such as psychosis, severe depression and bipolar disorder, Artificial Intelligence (AI) Large-Language Models (LLMs) may pose threats to mental health. LLMs are unable to detect delusional beliefs, may encourage and validate delusions and cognitive distortions, miss opportunities to reinforce reality-based thinking, and exacerbate risks of self-harm and harm to others. Psychiatrists need to understand these risks of LLMs for people with severe mental illnesses, and educate patients and carers on how to avoid these potential harms. Risk assessments need to be informed by an awareness of the inputs that patients receive from LLMs.

Original language: English
Number of pages: 3
Journal: Australasian Psychiatry
DOIs
Publication status: E-pub ahead of print - 15 Sept 2025

Keywords

  • artificial intelligence
  • chatbots
  • large-language models
  • mental illness
  • self-harm
