Abstract
Cross-Validation (CV) is the primary mechanism used in Machine Learning to control generalization error in the absence of sufficiently large quantities of marked up (tagged or labelled) data to undertake independent training, testing and validation (including early stopping, feature selection, parameter tuning, boosting and/or fusion). Repeated Cross-Validation (RCV) is used to try to further improve the accuracy of our performance estimates, including compensating for outliers. Typically a Machine Learning researcher will then compare a new target algorithm against a wide range of competing algorithms on a wide range of standard datasets. The combination of many training folds, many CV repetitions, many algorithms and parameterizations, and many training sets adds up to a very large number of data points to compare, and a massive multiple testing problem that is quadratic in the number of individual test combinations. Research in Machine Learning sometimes involves basic significance testing, or provides confidence intervals, but seldom addresses the multiple testing problem, whereby a p<.05 significance threshold means we expect a spurious "significant" result in 1 of every 20 of our many test pairs. This paper defines and explores a protocol that reduces the scale of repeated CV whilst providing a principled way to control the erosion of significance due to multiple testing.
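As a back-of-the-envelope illustration of the scale the abstract describes, the sketch below counts the pairwise comparisons arising from a hypothetical benchmark and the spurious significances expected at p < .05. The figures (10 algorithms, 20 datasets) are assumed purely for illustration and do not come from the paper; the Bonferroni correction shown is one standard remedy for multiple testing, not the paper's own protocol.

```python
# Hypothetical illustration of the multiple-testing problem described in the
# abstract: pairwise comparisons grow quadratically with the number of
# algorithms, and at p < .05 roughly 1 in 20 tests is expected to appear
# "significant" by chance alone. All counts below are assumptions.

from math import comb

n_algorithms = 10      # assumed: target algorithm plus competitors
n_datasets = 20        # assumed: standard benchmark datasets
alpha = 0.05           # conventional significance threshold

# Each dataset yields a comparison for every pair of algorithms, so the
# total number of tests is quadratic in the number of algorithms.
pairs_per_dataset = comb(n_algorithms, 2)        # 45
total_tests = pairs_per_dataset * n_datasets     # 900

# Under the null hypothesis, a fraction alpha of these tests is expected
# to come out "significant" spuriously.
expected_false_positives = alpha * total_tests   # 45.0

# A Bonferroni correction controls the family-wise error rate by
# tightening the per-test threshold (a standard remedy, shown here only
# for contrast with the paper's protocol).
bonferroni_alpha = alpha / total_tests

print(f"pairwise tests: {total_tests}")
print(f"expected spurious 'significant' results at p<{alpha}: "
      f"{expected_false_positives:.0f}")
print(f"Bonferroni-corrected per-test threshold: {bonferroni_alpha:.2e}")
```

Even at this modest assumed scale, 45 of the 900 comparisons are expected to appear significant by chance, which is the erosion of significance the paper's protocol is designed to control.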
Original language | English |
---|---|
Publication status | Published - 2012 |
Event | SCET2012 - Duration: 27 May 2012 → … |
Keywords
- Accuracy
- Bias
- Bootstrapping
- Correlation
- Cross-validation
- F-measure
- Generalization error
- Holdout
- Informedness
- Kappa
- Resubstitution error
- ROC AUC