Why is the Winner the Best?

Research output: Contribution to book/Conference proceedings/Anthology/Report › Conference contribution › Contributed › peer-reviewed

Contributors

  • German Cancer Research Center (DKFZ)
  • University of Leeds
  • University of Applied Sciences and Arts of Western Switzerland
  • University of Lausanne
  • Technische Hochschule Ingolstadt
  • University of Pennsylvania
  • University of Washington
  • University College London
  • Autonomous University of Barcelona
  • Italian Institute of Technology
  • Polytechnic University of Milan
  • IT University of Copenhagen
  • Erasmus University Rotterdam
  • University of Copenhagen
  • Harvard University
  • King's College London (KCL)
  • University of Duisburg-Essen
  • University of Nebraska Medical Center
  • Arab Academy for Science, Technology and Maritime Transport
  • CIBM Center for Biomedical Imaging
  • Swiss Federal Institute of Technology Lausanne (EPFL)
  • Indraprastha Institute of Information Technology Delhi
  • University of Lübeck
  • The University of Tokyo
  • University of Minnesota System
  • Radboud University Nijmegen
  • Fraunhofer Institute for Digital Medicine
  • Université de Rennes 1
  • Brno University of Technology
  • Masaryk University
  • University of Zurich
  • University of Toronto
  • University of Barcelona
  • University of Oxford
  • University of Strasbourg
  • Institute of Image-Guided Surgery
  • Technical University of Munich
  • Heidelberg University 
  • TUD Dresden University of Technology

Abstract

International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multicenter study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses based on comprehensive descriptions of the submitted algorithms, linked to their rank as well as to the underlying participation strategies, revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and post-processing (66%). The 'typical' lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly ranked teams: the reflection of the metrics in the method design and a focus on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art, but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.
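The abstract reports multi-task learning as a survey statistic rather than describing a reference implementation. As a rough, hypothetical illustration of what that strategy can look like in practice (not taken from the paper; all class names, losses, and hyperparameters below are assumptions), a shared encoder feeding a segmentation head and a classification head might be sketched in PyTorch as follows:

```python
# Illustrative sketch only: a minimal multi-task model (segmentation + classification)
# with a shared encoder, in the spirit of the "multi-task learning" strategy named
# in the abstract. Names and hyperparameters are hypothetical, not from the paper.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Small convolutional encoder shared by both task heads."""
    def __init__(self, in_channels=1, features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class MultiTaskModel(nn.Module):
    """Shared encoder with a per-pixel segmentation head and an image-level classification head."""
    def __init__(self, num_classes=2, features=32):
        super().__init__()
        self.encoder = SharedEncoder(features=features)
        self.seg_head = nn.Conv2d(features, 1, 1)        # per-pixel logits
        self.cls_head = nn.Sequential(                   # image-level logits
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(features, num_classes)
        )

    def forward(self, x):
        h = self.encoder(x)
        return self.seg_head(h), self.cls_head(h)

model = MultiTaskModel()
x = torch.randn(4, 1, 64, 64)                            # batch of 4 single-channel images
seg_target = torch.randint(0, 2, (4, 1, 64, 64)).float() # dummy segmentation masks
cls_target = torch.randint(0, 2, (4,))                   # dummy image labels

seg_logits, cls_logits = model(x)
# Weighted sum of the two task losses; the 0.5 weight is an arbitrary example choice.
loss = nn.functional.binary_cross_entropy_with_logits(seg_logits, seg_target) \
     + 0.5 * nn.functional.cross_entropy(cls_logits, cls_target)
loss.backward()
```

In such a setup, the auxiliary task acts as a regularizer on the shared representation; augmentation, preprocessing, and post-processing (the other strategies listed above) would sit around this model in the data pipeline rather than inside it.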

Details

Original language: English
Title of host publication: Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023
Publisher: IEEE Computer Society
Pages: 19955-19966
Number of pages: 12
ISBN (electronic): 9798350301298
Publication status: Published - 2023
Peer-reviewed: Yes

Publication series

Series: Conference on Computer Vision and Pattern Recognition (CVPR)
Volume: 2023-June
ISSN: 1063-6919

Conference

Title: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Abbreviated title: CVPR 2023
Duration: 18 - 22 June 2023
Degree of recognition: International event
Location: Vancouver Convention Center
City: Vancouver
Country: Canada

External IDs

ORCID /0000-0002-4590-1908/work/163294011

Keywords

  • Cell microscopy
  • Medical and biological vision