Generative AI’s healthcare professional role creep: a cross-sectional evaluation of publicly accessible, customised health-related GPTs

Bianca Chu, Natansh D. Modi, Bradley D. Menz, Stephen Bacchi, Ganessan Kichenadasse, Catherine Paterson, Joshua G. Kovoor, Imogen Ramsey, Jessica M. Logan, Michael D. Wiese, Ross A. McKinnon, Andrew Rowland, Michael J. Sorich, Ashley M. Hopkins

Research output: Contribution to journal › Article › peer-review

Abstract

Introduction: Generative artificial intelligence (AI) is advancing rapidly; an important consideration is the public's increasing ability to customise foundational AI models to create publicly accessible applications tailored for specific tasks. This study aimed to evaluate the accessibility and functionality descriptions of customised GPTs on the OpenAI GPT store that provide health-related information or assistance to patients and healthcare professionals.

Methods: We conducted a cross-sectional observational study of the OpenAI GPT store from September 2 to 6, 2024, to identify publicly accessible customised GPTs with health-related functions. We searched across general medicine, psychology, oncology, cardiology, and immunology applications. Identified GPTs were assessed for their name, description, intended audience, and usage. Regulatory status was checked against the U.S. Food and Drug Administration (FDA), European Union Medical Device Regulation (EU MDR), and Australian Therapeutic Goods Administration (TGA) databases.

Results: A total of 1,055 customised, health-related GPTs targeting patients and healthcare professionals were identified, which had collectively been used in over 360,000 conversations. Of these, 587 were psychology-related, 247 were in general medicine, 105 in oncology, 52 in cardiology, 30 in immunology, and 34 in other health specialties. Notably, 624 of the identified GPTs included healthcare professional titles (e.g., doctor, nurse, psychiatrist, oncologist) in their names and/or descriptions, suggesting they were taking on such roles. None of the customised GPTs identified were FDA, EU MDR, or TGA-approved.

Discussion: This study highlights the rapid emergence of publicly accessible, customised, health-related GPTs. The findings raise important questions about whether current AI medical device regulations are keeping pace with rapid technological advancements. The results also highlight the potential "role creep" of AI chatbots, where publicly accessible applications begin to perform, or claim to perform, functions traditionally reserved for licensed professionals, underscoring potential safety concerns.

Original language: English
Article number: 1584348
Number of pages: 8
Journal: Frontiers in Public Health
Volume: 13
DOIs
Publication status: Published - 2025

Keywords

  • AI health applications
  • AI regulation
  • customised GPTs
  • generative AI in healthcare
  • medical chatbots
  • OpenAI GPT store
