Experts and Non-Experts Use Diagnostic Aids Inefficiently

Megan L. Bartlett, Jason S. McCarley

Research output: Contribution to journal › Conference article › peer-review

Abstract

Studies benchmarking automation-assisted decision making against models of optimal use have shown that assisted performance is highly inefficient. However, these exercises have used novice populations performing simplified decision tasks, limiting their generalizability. Cao et al. (2023) described a machine learning algorithm capable of screening CT scans for early signs of pancreatic cancer with an extremely high sensitivity (area under the curve ≥ 98.5%) and demonstrated that the algorithm improved human clinicians’ diagnoses. We reanalyzed the data of Cao et al. (2023) to assess the efficiency of the clinicians’ aid use. Assisted performance was highly inefficient, roughly matching the predictions of a model that assumes participants randomly defer to the aid’s advice with a probability of 50%. Moreover, aid use was equally poor across varying levels of clinician expertise. The results replicate previous findings of poor decision aid use and confirm that they generalize to real-world tasks using expert decision makers.
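
For intuition, the following is a minimal simulation sketch of the random-deferral baseline described in the abstract, in which the decision maker adopts the aid's response on a random 50% of trials. It is not the authors' analysis code; the equal-variance signal-detection setup and the d' values used here are illustrative assumptions.

```python
# Illustrative sketch (assumed setup, not the authors' model code):
# compare unaided, aid-alone, and random-deferral accuracy in a simple
# equal-variance signal-detection simulation.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000
d_human, d_aid = 1.0, 2.5              # assumed sensitivities (d') of human and aid
signal = rng.integers(0, 2, n_trials)  # 1 = target present, 0 = target absent

def responses(dprime):
    # Each agent observes independent Gaussian evidence and says "present"
    # when the evidence exceeds an unbiased criterion at d'/2.
    evidence = rng.normal(loc=signal * dprime, scale=1.0)
    return (evidence > dprime / 2).astype(int)

human = responses(d_human)
aid = responses(d_aid)

# Random-deferral baseline: on each trial, flip a fair coin and take the
# aid's response on heads, the human's own response on tails.
defer = rng.random(n_trials) < 0.5
assisted = np.where(defer, aid, human)

for name, resp in [("human alone", human), ("aid alone", aid),
                   ("random deferral", assisted)]:
    print(f"{name:16s} accuracy = {np.mean(resp == signal):.3f}")
```

Under these assumptions, the random-deferral accuracy falls midway between the unaided and aid-alone accuracies, which is why matching this baseline indicates inefficient use of a highly sensitive aid.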

Original language: English
Pages (from-to): 98-103
Number of pages: 6
Journal: Proceedings of the Human Factors and Ergonomics Society
Volume: 68
Issue number: 1
DOIs
Publication status: Published - Sept 2024
Externally published: Yes
Event: 68th International Annual Meeting of the Human Factors and Ergonomics Society, HFES 2024 - Phoenix, United States
Duration: 9 Sept 2024 - 13 Sept 2024

Keywords

  • cognitive modeling
  • human-automation interaction
  • naturalistic decision making
