Benchmarking automation-aided performance in a forensic face matching task

Megan L. Bartlett, Daniel J. Carragher, Peter J.B. Hancock, Jason S. McCarley

Research output: Contribution to journal › Article › peer-review


Abstract

Carragher and Hancock (2023) investigated how individuals performed in a one-to-one face matching task when assisted by an Automated Facial Recognition System (AFRS). Across five pre-registered experiments, they found evidence of suboptimal aided performance, with AFRS-assisted individuals consistently failing to reach the level of performance the AFRS achieved alone. The current study reanalyses these data (Carragher and Hancock, 2023) to benchmark automation-aided performance against a series of statistical models of collaborative decision making spanning a range of efficiency levels. Analyses using a Bayesian hierarchical signal detection model revealed that collaborative performance was highly inefficient, falling closest to the most suboptimal models of automation dependence tested. This pattern of results extends previous reports of suboptimal human-automation interaction across visual search, target detection, sensory discrimination, and numeric estimation tasks. The current study is the first to provide benchmarks of automation-aided performance in the one-to-one face matching task.
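To illustrate the kind of benchmark the abstract describes, the sketch below computes the standard equal-variance signal detection ideal for a two-member team (root-sum-square combination of independent d' values) and an efficiency ratio comparing aided performance against that ideal. The function names and numeric values are illustrative assumptions, not taken from the study or its analysis code.

```python
import math

def ideal_combined_dprime(d_human: float, d_afrs: float) -> float:
    # Under equal-variance SDT with independent evidence, the ideal
    # collaborative sensitivity is the root-sum-square of the two d's.
    return math.sqrt(d_human ** 2 + d_afrs ** 2)

def efficiency(d_aided: float, d_human: float, d_afrs: float) -> float:
    # Statistical efficiency: squared ratio of observed aided
    # sensitivity to the ideal-team benchmark (1.0 = optimal).
    return (d_aided / ideal_combined_dprime(d_human, d_afrs)) ** 2

# Hypothetical values for illustration only:
d_human, d_afrs, d_aided = 1.5, 2.5, 2.0
print(round(ideal_combined_dprime(d_human, d_afrs), 3))  # 2.915
print(round(efficiency(d_aided, d_human, d_afrs), 3))    # 0.471
```

An aided d' below the AFRS's own d', as reported in the abstract, necessarily yields an efficiency well under 1.0 on this benchmark.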

Original language: English
Article number: 104364
Number of pages: 12
Journal: Applied Ergonomics
Volume: 121
DOIs
Publication status: Published - Nov 2024
Externally published: Yes

Keywords

  • Face recognition
  • Human-automation interaction
  • Signal detection theory
