Reconstruction of patient-specific confounders in AI-based radiologic image interpretation using generative pretraining

Publication: Contribution to journal › Research article › Contributed › Peer-reviewed

Contributors

  • Tianyu Han, Universitätsklinikum Aachen (Author)
  • Laura Žigutytė, Else Kröner Fresenius Zentrum für Digitale Gesundheit (Author)
  • Luisa Huck, Universitätsklinikum Aachen (Author)
  • Marc Sebastian Huppertz, Universitätsklinikum Aachen (Author)
  • Robert Siepmann, Universitätsklinikum Aachen (Author)
  • Yossi Gandelsman, University of California at Berkeley (Author)
  • Christian Blüthgen, Stanford University, Universitätsspital Zürich (Author)
  • Firas Khader, Rheinisch-Westfälische Technische Hochschule Aachen (Author)
  • Christiane Kuhl, Rheinisch-Westfälische Technische Hochschule Aachen (Author)
  • Sven Nebelung, Rheinisch-Westfälische Technische Hochschule Aachen (Author)
  • Jakob Nikolas Kather, Else Kröner Fresenius Zentrum für Digitale Gesundheit, Medizinische Klinik und Poliklinik I, Nationales Zentrum für Tumorerkrankungen (NCT) Heidelberg (Author)
  • Daniel Truhn, Universitätsklinikum Aachen (Author)

Abstract

Reliably detecting potentially misleading patterns in automated diagnostic assistance systems, such as those powered by artificial intelligence (AI), is crucial for instilling user trust and ensuring reliability. Current techniques fall short in visualizing such confounding factors. We propose DiffChest, a self-conditioned diffusion model trained on 515,704 chest radiographs from 194,956 patients across the US and Europe. DiffChest provides patient-specific explanations and visualizes confounding factors that might mislead the model. The high inter-reader agreement, with Fleiss’ kappa values of 0.8 or higher, validates its capability to identify treatment-related confounders. Confounders are accurately detected at prevalence rates ranging from 10% to 100%. The pretraining process optimizes the model for relevant imaging information, resulting in excellent diagnostic accuracy for 11 chest conditions, including pleural effusion and heart insufficiency. Our findings highlight the potential of diffusion models in medical image classification, providing insights into confounding factors and enhancing model robustness and reliability.
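The abstract reports inter-reader agreement on confounder identification using Fleiss' kappa. As a minimal, illustrative sketch (not the study's actual analysis or data), the statistic can be computed from a subject-by-reader rating matrix with statsmodels; the rating values below are hypothetical.

```python
# Minimal sketch: Fleiss' kappa for inter-reader agreement.
# The rating matrix is illustrative only, not data from the study.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical ratings: rows are radiographs, columns are readers,
# entries are categorical labels (0 = no confounder, 1 = confounder present).
ratings = np.array([
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
])

# aggregate_raters converts subject-by-rater labels into the
# subject-by-category count table that fleiss_kappa expects.
table, _ = aggregate_raters(ratings)
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa: {kappa:.2f}")  # values of 0.8 or higher indicate near-perfect agreement
```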

Details

Original language: English
Article number: 101713
Number of pages: 18
Journal: Cell Reports Medicine
Volume: 5
Issue number: 9
Publication status: Published - 17 Sept. 2024
Peer-review status: Yes

External IDs

PubMed 39241771
ORCID /0009-0000-2447-2959/work/175771151
ORCID /0000-0002-3730-5348/work/198594617

Keywords

  • confounders, counterfactual explanations, deep learning, explainability, generative models, medical imaging, self-supervised training