Potential of Large Language Models as Tools Against Medical Disinformation - Reply

Research output: Contribution to journal › Letter › peer-review



In Reply: We express our gratitude to Zhu et al for engaging with our work. Zhu et al correctly point out that malevolent individuals have historically used manual methods to create and disseminate health disinformation. However, the emergence of sophisticated large language models (LLMs) represents a potential tipping point. These tools possess unprecedented capabilities to personalize content targeted at individuals from diverse backgrounds. If their safeguards are inadequate, their ability to facilitate rapid, scalable, and cost-effective generation of health disinformation presents a unique and substantial challenge that should not be ignored...
Original language: English
Pages (from-to): 450-451
Number of pages: 2
Journal: JAMA Internal Medicine
Issue number: 4
Early online date: 26 Feb 2024
Publication status: Published - 1 Apr 2024


  • Health disinformation
  • Artificial intelligence (AI)
  • Vaccination
  • Vaping
