Reconstruction of patient-specific confounders in AI-based radiologic image interpretation using generative pretraining

Research output: Contribution to journal › Research article › peer-review

Contributors

  • Tianyu Han, University Hospital Aachen (Author)
  • Laura Žigutytė, Else Kröner Fresenius Center for Digital Health (Author)
  • Luisa Huck, University Hospital Aachen (Author)
  • Marc Sebastian Huppertz, University Hospital Aachen (Author)
  • Robert Siepmann, University Hospital Aachen (Author)
  • Yossi Gandelsman, University of California at Berkeley (Author)
  • Christian Blüthgen, Stanford University, University Hospital Zurich (Author)
  • Firas Khader, RWTH Aachen University (Author)
  • Christiane Kuhl, RWTH Aachen University (Author)
  • Sven Nebelung, RWTH Aachen University (Author)
  • Jakob Nikolas Kather, Else Kröner Fresenius Center for Digital Health, Department of Internal Medicine I, National Center for Tumor Diseases (NCT) Heidelberg (Author)
  • Daniel Truhn, University Hospital Aachen (Author)

Abstract

Reliably detecting potentially misleading patterns in automated diagnostic assistance systems, such as those powered by artificial intelligence (AI), is crucial for instilling user trust and ensuring reliability. Current techniques fall short in visualizing such confounding factors. We propose DiffChest, a self-conditioned diffusion model trained on 515,704 chest radiographs from 194,956 patients across the US and Europe. DiffChest provides patient-specific explanations and visualizes confounding factors that might mislead the model. The high inter-reader agreement, with Fleiss' kappa values of 0.8 or higher, validates its capability to identify treatment-related confounders. Confounders are accurately detected at prevalence rates ranging from 10% to 100%. The pretraining process optimizes the model for relevant imaging information, resulting in excellent diagnostic accuracy for 11 chest conditions, including pleural effusion and heart insufficiency. Our findings highlight the potential of diffusion models in medical image classification, providing insights into confounding factors and enhancing model robustness and reliability.
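The abstract reports inter-reader agreement as Fleiss' kappa values of 0.8 or higher. For readers unfamiliar with the statistic, the sketch below shows how it is computed from a table of reader ratings. This is an illustrative implementation only, not the authors' code, and the rating table in the example is invented.

```python
# Fleiss' kappa: chance-corrected agreement among a fixed number of
# readers. `table` is a hypothetical rating matrix: rows = cases,
# columns = rating categories, entries = number of readers assigning
# that category to that case (each row sums to the reader count).

def fleiss_kappa(table):
    n_raters = sum(table[0])           # readers per case (constant)
    n_cases = len(table)
    n_cats = len(table[0])

    # Mean per-case observed agreement P_bar
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in table
    ) / n_cases

    # Chance agreement P_e from marginal category proportions
    totals = [sum(row[j] for row in table) for j in range(n_cats)]
    grand = n_cases * n_raters
    p_e = sum((t / grand) ** 2 for t in totals)

    return (p_bar - p_e) / (1 - p_e)

# Perfect agreement: 3 readers, 2 cases, unanimous on each case
print(fleiss_kappa([[3, 0], [0, 3]]))  # -> 1.0
```

A kappa of 0.8 or higher, as reported for the treatment-related confounders, is conventionally read as near-perfect agreement beyond chance.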

Details

Original language: English
Article number: 101713
Journal: Cell Reports Medicine
Volume: 5
Issue number: 9
Publication status: Published - 17 Sept 2024
Peer-reviewed: Yes

External IDs

PubMed 39241771

Keywords

  • confounders, counterfactual explanations, deep learning, explainability, generative models, medical imaging, self-supervised training