TY - JOUR
T1 - Generative AI’s healthcare professional role creep
T2 - a cross-sectional evaluation of publicly accessible, customised health-related GPTs
AU - Chu, Bianca
AU - Modi, Natansh D.
AU - Menz, Bradley D.
AU - Bacchi, Stephen
AU - Kichenadasse, Ganessan
AU - Paterson, Catherine
AU - Kovoor, Joshua G.
AU - Ramsey, Imogen
AU - Logan, Jessica M.
AU - Wiese, Michael D.
AU - McKinnon, Ross A.
AU - Rowland, Andrew
AU - Sorich, Michael J.
AU - Hopkins, Ashley M.
PY - 2025
Y1 - 2025
AB - Introduction: Generative artificial intelligence (AI) is advancing rapidly; an important consideration is the public’s increasing ability to customise foundational AI models to create publicly accessible applications tailored for specific tasks. This study aims to evaluate the accessibility and functionality descriptions of customised GPTs on the OpenAI GPT store that provide health-related information or assistance to patients and healthcare professionals. Methods: We conducted a cross-sectional observational study of the OpenAI GPT store from September 2 to 6, 2024, to identify publicly accessible customised GPTs with health-related functions. We searched across general medicine, psychology, oncology, cardiology, and immunology applications. Identified GPTs were assessed for their name, description, intended audience, and usage. Regulatory status was checked across the U.S. Food and Drug Administration (FDA), European Union Medical Device Regulation (EU MDR), and Australian Therapeutic Goods Administration (TGA) databases. Results: A total of 1,055 customised, health-related GPTs targeting patients and healthcare professionals were identified, which had collectively been used in over 360,000 conversations. Of these, 587 were psychology-related, 247 were in general medicine, 105 in oncology, 52 in cardiology, 30 in immunology, and 34 in other health specialties. Notably, 624 of the identified GPTs included healthcare professional titles (e.g., doctor, nurse, psychiatrist, oncologist) in their names and/or descriptions, suggesting they were taking on such roles. None of the customised GPTs identified were FDA, EU MDR, or TGA-approved. Discussion: This study highlights the rapid emergence of publicly accessible, customised, health-related GPTs. The findings raise important questions about whether current AI medical device regulations are keeping pace with rapid technological advancements. The results also highlight the potential “role creep” in AI chatbots, where publicly accessible applications begin to perform — or claim to perform — functions traditionally reserved for licensed professionals, underscoring potential safety concerns.
KW - AI health applications
KW - AI regulation
KW - customised GPTs
KW - Generative AI in healthcare
KW - medical chatbots
KW - OpenAI GPT store
UR - http://www.scopus.com/inward/record.url?scp=105005989338&partnerID=8YFLogxK
UR - http://purl.org/au-research/grants/NHMRC/2008119
UR - http://purl.org/au-research/grants/NHMRC/2030913
U2 - 10.3389/fpubh.2025.1584348
DO - 10.3389/fpubh.2025.1584348
M3 - Article
AN - SCOPUS:105005989338
SN - 2296-2565
VL - 13
JO - Frontiers in Public Health
JF - Frontiers in Public Health
M1 - 1584348
ER -