Cascaded Cross-Attention Networks for Data-Efficient Whole-Slide Image Classification Using Transformers

Publication: Contribution in book/conference proceedings/edited volume/report › Contribution in conference proceedings › Contributed › Peer-reviewed

Contributors

  • Firas Khader, Rheinisch-Westfälische Technische Hochschule Aachen (Author)
  • Jakob Nikolas Kather, Rheinisch-Westfälische Technische Hochschule Aachen (Author)
  • Tianyu Han, Medizinische Fakultät Carl Gustav Carus Dresden (Author)
  • Sven Nebelung, Rheinisch-Westfälische Technische Hochschule Aachen (Author)
  • Christiane Kuhl, Rheinisch-Westfälische Technische Hochschule Aachen (Author)
  • Johannes Stegmaier, Rheinisch-Westfälische Technische Hochschule Aachen (Author)
  • Daniel Truhn, Rheinisch-Westfälische Technische Hochschule Aachen (Author)

Abstract

Whole-slide imaging allows for the capturing and digitization of high-resolution images of histological specimens. Automated analysis of such images using deep learning models is therefore in high demand. The transformer architecture has been proposed as a possible candidate for effectively leveraging the high-resolution information. Here, the whole-slide image is partitioned into smaller image patches, and feature tokens are extracted from these image patches. However, while the conventional transformer allows for the simultaneous processing of a large set of input tokens, its computational demand scales quadratically with the number of input tokens and thus quadratically with the number of image patches. To address this problem, we propose a novel cascaded cross-attention network (CCAN) based on the cross-attention mechanism that scales linearly with the number of extracted patches. Our experiments demonstrate that this architecture is at least on par with, and even outperforms, other attention-based state-of-the-art methods on two public datasets: on the use case of lung cancer (TCGA NSCLC), our model reaches a mean area under the receiver operating characteristic curve (AUC) of 0.970 ± 0.008, and on renal cancer (TCGA RCC) it reaches a mean AUC of 0.985 ± 0.004. Furthermore, we show that our proposed model is efficient in low-data regimes, making it a promising approach for analyzing whole-slide images in resource-limited settings. To foster research in this direction, we make our code publicly available on GitHub: https://github.com/FirasGit/cascaded_cross_attention.
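To illustrate why cross-attention scales linearly with the number of patches, the PyTorch sketch below lets a small, fixed set of learned latent tokens attend to an arbitrary number of patch tokens. Because the number of queries is constant, the attention matrix has shape (num_latents, num_patches), so compute and memory grow linearly rather than quadratically in the patch count. This is a minimal sketch under illustrative assumptions (class name CrossAttentionBlock, token counts, and dimensions are invented here), not the authors' CCAN implementation; the linked GitHub repository contains the actual code.

import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """Hypothetical single cross-attention stage: latents attend to patches."""

    def __init__(self, dim: int = 256, num_latents: int = 64, num_heads: int = 8):
        super().__init__()
        # Learned latent tokens serve as queries; patch tokens are keys/values.
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (batch, num_patches, dim); num_patches may be very large.
        b = patch_tokens.size(0)
        q = self.latents.unsqueeze(0).expand(b, -1, -1)
        # Attention weights are (num_latents, num_patches): linear in patches.
        out, _ = self.attn(q, patch_tokens, patch_tokens)
        return self.norm(out + q)  # (batch, num_latents, dim)

# Usage: 10,000 patch embeddings from one whole-slide image (hypothetical numbers).
tokens = torch.randn(1, 10_000, 256)
block = CrossAttentionBlock()
print(block(tokens).shape)  # torch.Size([1, 64, 256])

A cascaded variant, as the title suggests, would chain several such stages, with each stage's latent outputs feeding the next; the exact arrangement follows the paper and repository.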

Details

Original language: English
Title: Machine Learning in Medical Imaging
Editors: Xiaohuan Cao, Xi Ouyang, Xuanang Xu, Islem Rekik, Zhiming Cui
Publisher: Springer Science and Business Media B.V.
Pages: 417-426
Number of pages: 10
ISBN (electronic): 978-3-031-45676-3
ISBN (print): 978-3-031-45675-6
Publication status: Published - 2024
Peer-review status: Yes
Published externally: Yes

Publication series

Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14349 LNCS
ISSN: 0302-9743

Workshop

Title: 14th International Workshop on Machine Learning in Medical Imaging
Short title: MLMI 2023
Event number: 14
Date: 8 October 2023
Venue: Vancouver Convention Center
City: Vancouver
Country: Canada

External IDs

ORCID: /0000-0002-3730-5348/work/198594492

Keywords

  • Computational Pathology, Transformers, Whole-Slide Images