Abstract
Studies benchmarking automation-assisted decision making against models of optimal aid use have shown that assisted performance is highly inefficient. However, these exercises have used novice populations performing simplified decision tasks, limiting their generalizability. Cao et al. (2023) described a machine learning algorithm capable of screening CT scans for early signs of pancreatic cancer with extremely high sensitivity (area under the curve ≥ 98.5%) and demonstrated that the algorithm improved human clinicians’ diagnoses. We reanalyzed the data of Cao et al. (2023) to assess the efficiency of the clinicians’ aid use. Assisted performance was highly inefficient, roughly matching the predictions of a model that assumes participants randomly defer to the aid’s advice with a probability of 50%. Moreover, aid use was equally poor across varying levels of clinician expertise. The results replicate previous findings of poor decision aid use and confirm that they generalize to real-world tasks with expert decision makers.
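The 50%-random-deferral baseline mentioned above can be illustrated with a short sketch. This is not the authors' code or their exact model; it is a minimal illustration, assuming the baseline works as described: on each case the decision maker adopts the aid's answer with probability 0.5 and otherwise keeps their own judgment. The accuracy values used below are hypothetical, not figures from the paper.

```python
def random_deferral_accuracy(p_human: float, p_aid: float, p_defer: float = 0.5) -> float:
    """Expected accuracy when the decision maker adopts the aid's answer
    with probability p_defer and otherwise relies on their own judgment."""
    return p_defer * p_aid + (1.0 - p_defer) * p_human

# Illustrative (made-up) values: unaided clinician 80% correct, aid 95% correct.
blended = random_deferral_accuracy(0.80, 0.95)  # 0.5 * 0.95 + 0.5 * 0.80 = 0.875
print(blended)
```

Under these assumed numbers, random deferral yields 87.5% expected accuracy, whereas a decision maker who always followed the (more accurate) aid would reach 95%; the gap between the two is one way to picture the inefficiency the abstract describes.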
| | |
| --- | --- |
| Original language | English |
| Pages (from-to) | 98-103 |
| Number of pages | 6 |
| Journal | Proceedings of the Human Factors and Ergonomics Society |
| Volume | 68 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - Sept 2024 |
| Externally published | Yes |
| Event | 68th International Annual Meeting of the Human Factors and Ergonomics Society, HFES 2024, Phoenix, United States. Duration: 9 Sept 2024 → 13 Sept 2024 |
Keywords
- cognitive modeling
- human-automation interaction
- naturalistic decision making