Classification Confidence in Exploratory Learning: A User’s Guide

Peter Salamon, David Salamon, V. Adrian Cantu, Michelle An, Tyler Perry, Robert A. Edwards, Anca M. Segall

Research output: Contribution to journal › Article › peer-review



This paper investigates the post-hoc calibration of confidence for “exploratory” machine learning classification problems. The difficulty in these problems stems from the continuing desire, when curating datasets, to push the boundary of which categories have enough examples to generalize from, and from uncertainty about the validity of those categories. We argue that for such problems the “one-versus-all” approach (top-label calibration) must be used rather than the “calibrate-the-full-response-matrix” approach advocated elsewhere in the literature. We introduce and test four new algorithms designed to handle the idiosyncrasies of category-specific confidence estimation using only the test set and the final model. Chief among these methods is the use of kernel density ratios for confidence calibration, including a novel algorithm for choosing the bandwidth. We test our claims and explore the limits of calibration on a bioinformatics application (PhANNs) as well as the classic MNIST benchmark. Finally, our analysis argues that post-hoc calibration should always be performed, may be performed using only the test dataset, and should be sanity-checked visually.
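To make the kernel-density-ratio idea concrete, here is a minimal sketch (not the paper's exact algorithm, and using a fixed bandwidth rather than the paper's bandwidth-selection method): estimate the density of the model's top-label scores separately for correct and incorrect test-set predictions, then convert a new score into a calibrated confidence via the ratio of the two densities weighted by their priors. All function names and parameter values below are illustrative assumptions.

```python
import numpy as np

def gaussian_kde(samples, bandwidth):
    """Return a simple 1-D Gaussian kernel density estimator."""
    samples = np.asarray(samples, dtype=float)
    def density(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        z = (x[:, None] - samples[None, :]) / bandwidth
        return np.exp(-0.5 * z**2).sum(axis=1) / (
            len(samples) * bandwidth * np.sqrt(2.0 * np.pi))
    return density

def calibrated_confidence(score, correct_scores, incorrect_scores,
                          bandwidth=0.05):
    """Estimate P(prediction correct | top-label score) via a density ratio.

    The fixed `bandwidth` is a placeholder; the paper proposes its own
    bandwidth-selection algorithm.
    """
    f_correct = gaussian_kde(correct_scores, bandwidth)
    f_incorrect = gaussian_kde(incorrect_scores, bandwidth)
    prior_correct = len(correct_scores) / (
        len(correct_scores) + len(incorrect_scores))
    num = prior_correct * f_correct(score)
    den = num + (1.0 - prior_correct) * f_incorrect(score)
    return num / den

# Toy usage: correct predictions tend to have higher top-label scores.
rng = np.random.default_rng(0)
correct = np.clip(rng.normal(0.9, 0.05, 500), 0.0, 1.0)
incorrect = np.clip(rng.normal(0.6, 0.1, 100), 0.0, 1.0)
conf = calibrated_confidence(0.95, correct, incorrect)
```

With this toy data, a high raw score maps to a calibrated confidence near 1, while scores in the region dominated by incorrect predictions map to low confidence; plotting the calibration curve over [0, 1] is the kind of visual sanity check the abstract recommends.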
  • Original language: English
  • Pages (from-to): 803–829
  • Number of pages: 27
  • Journal: Machine Learning and Knowledge Extraction
  • Issue number: 3
  • Early online date: 21 Jul 2023
  • Publication status: Published - Sept 2023


  • confidence calibration
  • top-label confidence calibration
  • bioinformatics
  • machine learning
  • exploratory machine learning


