Reconstruction of patient-specific confounders in AI-based radiologic image interpretation using generative pretraining
Research output: Contribution to journal › Research article › Contributed › peer-review
Abstract
Reliably detecting potentially misleading patterns in automated diagnostic assistance systems, such as those powered by artificial intelligence (AI), is crucial for instilling user trust and ensuring reliability. Current techniques fall short in visualizing such confounding factors. We propose DiffChest, a self-conditioned diffusion model trained on 515,704 chest radiographs from 194,956 patients across the US and Europe. DiffChest provides patient-specific explanations and visualizes confounding factors that might mislead the model. The high inter-reader agreement, with Fleiss’ kappa values of 0.8 or higher, validates its capability to identify treatment-related confounders. Confounders with prevalence rates of 10%–100% are accurately detected. The pretraining process optimizes the model for relevant imaging information, resulting in excellent diagnostic accuracy for 11 chest conditions, including pleural effusion and heart insufficiency. Our findings highlight the potential of diffusion models in medical image classification, providing insights into confounding factors and enhancing model robustness and reliability.
Details
| Original language | English |
|---|---|
| Article number | 101713 |
| Journal | Cell Reports Medicine |
| Volume | 5 |
| Issue number | 9 |
| Publication status | Published - 17 Sept 2024 |
| Peer-reviewed | Yes |
External IDs
| PubMed | 39241771 |
|---|---|
Keywords
- confounders, counterfactual explanations, deep learning, explainability, generative models, medical imaging, self-supervised training