Adversarial attacks and adversarial robustness in computational pathology

Research output: Contribution to journal › Research article › Contributed › peer-review

Contributors

  • Narmin Ghaffari Laleh, RWTH Aachen University (Author)
  • Daniel Truhn, RWTH Aachen University (Author)
  • Gregory Patrick Veldhuizen, Else Kröner Fresenius Center for Digital Health (Author)
  • Tianyu Han, RWTH Aachen University (Author)
  • Marko van Treeck, RWTH Aachen University (Author)
  • Roman D. Buelow, RWTH Aachen University (Author)
  • Rupert Langer, University of Bern, Kepler University Hospital (Author)
  • Bastian Dislich, University of Bern (Author)
  • Peter Boor, RWTH Aachen University (Author)
  • Volkmar Schulz, RWTH Aachen University, Fraunhofer Institute for Digital Medicine, Hyperion Hybrid Imaging Systems GmbH (Author)
  • Jakob Nikolas Kather, Department of Internal Medicine I, Else Kröner Fresenius Center for Digital Health, RWTH Aachen University, Heidelberg University, University of Leeds, University Hospital Carl Gustav Carus Dresden (Author)

Abstract

Artificial intelligence (AI) can support diagnostic workflows in oncology by aiding diagnosis and providing biomarkers directly from routine pathology slides. However, AI applications are vulnerable to adversarial attacks. Hence, it is essential to quantify and mitigate this risk before widespread clinical use. Here, we show that convolutional neural networks (CNNs) are highly susceptible to white- and black-box adversarial attacks in clinically relevant weakly supervised classification tasks. Adversarially robust training and dual batch normalization (DBN) are possible mitigation strategies but require precise knowledge of the type of attack used at inference. We demonstrate that vision transformers (ViTs) perform on par with CNNs at baseline, but are orders of magnitude more robust to white- and black-box attacks. At a mechanistic level, we show that this is associated with a more robust latent representation of clinically relevant categories in ViTs compared to CNNs. Our results are in line with previous theoretical studies and provide empirical evidence that ViTs are robust learners in computational pathology. This implies that large-scale rollout of AI models in computational pathology should rely on ViTs rather than CNN-based classifiers to provide inherent protection against perturbations of the input data, especially adversarial attacks.
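For readers unfamiliar with the attack setting, the sketch below illustrates a white-box attack of the kind the abstract refers to: projected gradient descent (PGD) perturbs an input image within a small L∞ ball so that a classifier's loss is maximized. This is a minimal illustration under stated assumptions, not the authors' pipeline; the untrained ResNet-18 stand-in, the random dummy tiles, and the eps/alpha/steps hyperparameters are all placeholders chosen for demonstration.

```python
# Minimal sketch of a white-box PGD attack, assuming PyTorch and torchvision.
# The model and inputs are placeholders, not the paper's experimental setup.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def pgd_attack(model, images, labels, eps=4 / 255, alpha=1 / 255, steps=10):
    """Maximize the classification loss within an L-infinity ball of radius eps."""
    adv = images.clone().detach()
    adv = (adv + torch.empty_like(adv).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()        # gradient ascent step
        adv = images + (adv - images).clamp(-eps, eps)  # project back into the eps-ball
        adv = adv.clamp(0, 1)                           # keep valid pixel range
    return adv.detach()

# Hypothetical usage: an untrained ResNet-18 stands in for a trained pathology CNN,
# and random tensors stand in for whole-slide-image tiles.
model = resnet18().eval()
x = torch.rand(2, 3, 224, 224)
y = torch.randint(0, 1000, (2,))
x_adv = pgd_attack(model, x, y)
print(float((x_adv - x).abs().max()))  # perturbation magnitude stays within eps
```

A black-box attacker lacks gradient access and would instead query the model to estimate gradients or attack a surrogate network, while a defense such as adversarially robust training would substitute these perturbed inputs for clean batches during training.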

Details

Original language: English
Article number: 5711
Journal: Nature Communications
Volume: 13
Issue number: 1
Publication status: Published - Dec 2022
Peer-reviewed: Yes

External IDs

PubMed: 36175413