Application of Generative Artificial Intelligence for Physician and Patient Oncology Letters - AI-OncLetters

Adel Shahnam, Udit Nindra, Nadia Hitchen, Joanne Tang, Martin Hong, Jun Hee Hong, George Au-Yeung, Wei Chua, Weng Ng, Ashley M. Hopkins, Michael J. Sorich

Research output: Contribution to journal › Article › peer-review

Abstract

PURPOSE: Although large language models (LLMs) are increasingly used in clinical practice, formal assessments of their quality, accuracy, and effectiveness in medical oncology remain limited. We aimed to evaluate the ability of ChatGPT, an LLM, to generate physician and patient letters from clinical case notes.

METHODS: Six oncologists created 29 (four training, 25 final) synthetic oncology case notes. Structured prompts for ChatGPT were iteratively developed using the four training cases; once finalized, 25 physician-directed and 25 patient-directed letters were generated. These underwent evaluation by expert consumers and oncologists for accuracy, relevance, and readability using Likert scales. The patient letters were also assessed with the Patient Education Materials Assessment Tool for Print (PEMAT-P), Flesch Reading Ease, and Simple Measure of Gobbledygook (SMOG) index.

RESULTS: Among physician-to-physician letters, 95% (119/125) of oncologist ratings agreed they were accurate, comprehensive, and relevant, with no safety concerns noted. These letters demonstrated precise documentation of history, investigations, and treatment plans and were logically and concisely structured. Patient-directed letters achieved a mean Flesch Reading Ease score of 73.3 (seventh-grade reading level) and a PEMAT-P score above 80%, indicating high understandability. Consumer reviewers found them clear and appropriate for patient communication. Some omissions of details (eg, side effects), stylistic inconsistencies, and repetitive phrasing were identified, although no clinical safety issues emerged. Seventy-two percent (90/125) of consumer ratings expressed willingness to receive artificial intelligence (AI)-generated patient letters.

CONCLUSION: ChatGPT, when guided by structured prompts, can generate high-quality letters that align with clinical and patient communication standards. No clinical safety concerns were identified, although addressing occasional omissions and improving natural language flow could enhance their utility in practice. Further studies comparing AI-generated and human-written letters are recommended.
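The two readability metrics applied to the patient letters are simple closed-form formulas over sentence, word, and syllable counts. As an illustration only, the sketch below implements both formulas in Python; the syllable counter is a crude vowel-group heuristic (a simplifying assumption, not the validated counting used by the study's assessment tools), so its scores will differ slightly from published calculators.

```python
import math
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of consecutive vowels, drop a silent
    # trailing 'e', and give every word at least one syllable.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease = 206.835 - 1.015*(words/sentences)
    #                       - 84.6*(syllables/words).
    # Higher is easier; scores in the 70-80 band correspond roughly to a
    # seventh-grade reading level, as reported for the patient letters.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

def smog_index(text: str) -> float:
    # SMOG grade = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291,
    # where polysyllables are words with three or more syllables.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    poly = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(poly * 30 / len(sentences)) + 3.1291
```

In practice, validated implementations (e.g. established readability libraries or the calculators behind the published formulas) should be preferred over an ad hoc syllable heuristic like this one.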

Original language: English
Article number: e2400323
Number of pages: 9
Journal: JCO Clinical Cancer Informatics
Volume: 9
DOIs
Publication status: Published - Jun 2025

Keywords

  • Generative artificial intelligence
  • physician and patient
  • oncology letters
  • large language models
  • clinical practice
  • medical oncology
