Crowdtruth validation: a new paradigm for validating algorithms that rely on image correspondences

Research output: Contribution to journal › Research article › Contributed › peer-review

Contributors

  • Lena Maier-Hein, German Cancer Research Center (DKFZ) (Author)
  • Daniel Kondermann, Heidelberg University (Author)
  • Tobias Roß, German Cancer Research Center (DKFZ) (Author)
  • Sven Mersmann, German Cancer Research Center (DKFZ) (Author)
  • Eric Heim, German Cancer Research Center (DKFZ) (Author)
  • Sebastian Bodenstedt, Karlsruhe Institute of Technology (Author)
  • Hannes Götz Kenngott, Heidelberg University (Author)
  • Alexandro Sanchez, Heidelberg University (Author)
  • Martin Wagner, Heidelberg University (Author)
  • Anas Preukschas, Heidelberg University (Author)
  • Anna Laura Wekerle, Heidelberg University (Author)
  • Stefanie Helfert, Heidelberg University (Author)
  • Keno März, German Cancer Research Center (DKFZ) (Author)
  • Arianeb Mehrabi, Heidelberg University (Author)
  • Stefanie Speidel, Karlsruhe Institute of Technology (Author)
  • Christian Stock, Heidelberg University (Author)

Abstract

Purpose: Feature tracking and 3D surface reconstruction are key enabling techniques for computer-assisted minimally invasive surgery. One of the major bottlenecks in training and validating new algorithms is the lack of large amounts of annotated images that fully capture the wide range of anatomical/scene variance seen in clinical practice. To address this issue, we propose a novel approach for obtaining large numbers of high-quality reference image annotations at low cost and in an extremely short period of time.

Methods: The concept is based on outsourcing the correspondence search to a crowd of anonymous users from an online community (crowdsourcing) and comprises four stages: (1) feature detection, (2) correspondence search via crowdsourcing, (3) merging of multiple annotations per feature by fitting Gaussian finite mixture models, and (4) outlier removal, using the result of the clustering as input for a second annotation task.

Results: On average, 10,000 annotations were obtained within 24 h at a cost of $100. After clustering and before outlier removal, the crowd annotations were of expert quality, with a median distance of about 1 pixel to a publicly available reference annotation. The threshold chosen for the outlier removal task directly determines the maximum annotation error, but also the number of points removed.

Conclusions: Our concept is a novel and effective method for fast, low-cost and highly accurate correspondence generation that could be adapted to various other applications related to large-scale data annotation in medical image computing and computer-assisted interventions.
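The merging stage (3) can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it assumes scikit-learn's GaussianMixture, a hypothetical merge_annotations helper, and BIC-based selection of the number of components, and it simply returns the mean of the dominant mixture component as the consensus correspondence for one feature point.

```python
"""Sketch of stage (3): merging multiple crowd annotations for one
feature point with a Gaussian finite mixture model. Function and
parameter names are illustrative, not taken from the paper."""
import numpy as np
from sklearn.mixture import GaussianMixture


def merge_annotations(points, max_components=3):
    """points: (n, 2) array of crowd-provided (x, y) clicks for a single
    feature. Returns the mean of the dominant mixture component."""
    points = np.asarray(points, dtype=float)
    if len(points) == 1:
        return points[0]

    best_gmm, best_bic = None, np.inf
    # Fit mixtures with 1..max_components components and keep the one
    # with the lowest BIC (the model-selection criterion is an assumption).
    for k in range(1, min(max_components, len(points)) + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=0).fit(points)
        bic = gmm.bic(points)
        if bic < best_bic:
            best_gmm, best_bic = gmm, bic

    # Treat the component with the largest weight as the consensus
    # cluster; its mean is the merged annotation.
    dominant = int(np.argmax(best_gmm.weights_))
    return best_gmm.means_[dominant]


if __name__ == "__main__":
    # Five simulated crowd clicks around the true location (120, 85),
    # plus one gross outlier that should fall into a minor component.
    clicks = [(119.5, 84.8), (120.2, 85.1), (120.0, 85.3),
              (119.8, 84.9), (121.0, 85.0), (300.0, 40.0)]
    print(merge_annotations(clicks))
```

In the same spirit, stage (4) could be approximated by discarding points whose distance to the dominant component mean exceeds a fixed pixel threshold, which mirrors the trade-off described in the abstract between maximum annotation error and the number of points removed.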

Details

Original language: English
Pages (from-to): 1201-1212
Number of pages: 12
Journal: International Journal of Computer Assisted Radiology and Surgery
Volume: 10
Issue number: 8
Publication status: Published - 5 Aug 2015
Peer-reviewed: Yes
Externally published: Yes

External IDs

PubMed 25895078
ORCID /0000-0002-4590-1908/work/163294048

Keywords