Potential of Large Language Models as Tools Against Medical Disinformation - Reply

Research output: Contribution to journal › Letter › peer-review

2 Citations (Scopus)

Abstract

In Reply: We express our gratitude to Zhu et al for engaging with our work. Zhu et al correctly point out that malevolent individuals have historically used manual methods to create and disseminate health disinformation. However, the emergence of sophisticated large language models (LLMs) represents a potential tipping point. These tools possess unprecedented capabilities to personalize content targeted at individuals from diverse backgrounds. If their safeguards are inadequate, their ability to facilitate rapid, scalable, and cost-effective generation of health disinformation presents a unique and substantial challenge that should not be ignored...
Original language: English
Pages (from-to): 450-451
Number of pages: 2
Journal: JAMA Internal Medicine
Volume: 184
Issue number: 4
Early online date: 26 Feb 2024
DOIs
Publication status: Published - 1 Apr 2024

Keywords

  • Health disinformation
  • Artificial intelligence (AI)
  • Vaccination
  • Vaping
