An increasing number of convolutional neural networks for fracture recognition and classification in orthopaedics: are these externally validated and ready for clinical application?

L. Oliveira E Carmo, A. van den Merkhof, J. Olczak, M. Gordon, P. C. Jutte, R. L. Jaarsma, F. F. A. Ijpma, J. N. Doornberg, J. Prijs, Machine Learning Consortium

Research output: Contribution to journal › Article › peer-review


Abstract

Aims

The number of convolutional neural networks (CNNs) available for fracture detection and classification is rapidly increasing. External validation of a CNN on a temporally separate (separated by time) or geographically separate (separated by location) dataset is crucial to assess the generalizability of the CNN before application to clinical practice in other institutions. We aimed to answer the following questions: are current CNNs for fracture recognition externally valid?; which methods are applied for external validation (EV)?; and what are the reported performances on the EV sets compared to the internal validation (IV) sets of these CNNs?

Methods

The PubMed and Embase databases were systematically searched from January 2010 to October 2020 according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The type of EV, the characteristics of the external dataset, and the diagnostic performance characteristics on the IV and EV datasets were collected and compared. Quality assessment was conducted using a seven-item checklist based on a modified Methodological Index for Non-Randomized Studies (MINORS) instrument.

Results

Out of 1,349 studies, 36 reported development of a CNN for fracture detection and/or classification. Of these, only four (11%) reported a form of EV. One study used temporal EV, one conducted both temporal and geographical EV, and two used geographical EV. When comparing each CNN's performance on the IV set versus the EV set, the following were found: AUCs of 0.967 (IV) versus 0.975 (EV), 0.976 (IV) versus 0.985 to 0.992 (EV), 0.93 to 0.96 (IV) versus 0.80 to 0.89 (EV), and F1-scores of 0.856 to 0.863 (IV) versus 0.757 to 0.840 (EV).

Conclusion

The number of externally validated CNNs in orthopaedic trauma for fracture recognition remains scarce. This greatly limits the potential for transferring these CNNs from the developing institution to another hospital while achieving similar diagnostic performance. We recommend the use of geographical EV and of reporting statements such as the Consolidated Standards of Reporting Trials–Artificial Intelligence (CONSORT-AI), the Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence (SPIRIT-AI), and the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis–Machine Learning (TRIPOD-ML) to critically appraise the performance of CNNs, improve methodological rigour and the quality of future models, and facilitate eventual implementation in clinical practice.

Original language: English
Pages (from-to): 879-885
Number of pages: 7
Journal: Bone and Joint Open
Volume: 2
Issue number: 10
DOIs
Publication status: Published - Oct 2021

Keywords

  • Artificial intelligence
  • Convolutional neural networks
  • Deep learning
  • External validation
  • Machine learning
