A Framework for Multimodal Medical Image Interaction

Publication: Journal article › Research article › Contributed › Peer-reviewed

Contributors

  • Laura Schütz, Technische Universität München (Author)
  • Sasan Matinfar, Technische Universität München (Author)
  • Gideon Schafroth, Technische Universität München (Author)
  • Navid Navab, Concordia University (Author)
  • Merle Fairhurst, Junior Professorship for Social and Affective Touch (Author)
  • Arthur Wagner, Technische Universität München (Author)
  • Benedikt Wiestler, Technische Universität München (Author)
  • Ulrich Eck, Technische Universität München (Author)
  • Nassir Navab, Technische Universität München (Author)

Abstract

Medical doctors rely on images of the human anatomy, such as magnetic resonance imaging (MRI), to localize regions of interest in the patient during diagnosis and treatment. Despite advances in medical imaging technology, information is still conveyed unimodally. This visual representation fails to capture the complexity of the real, multisensory interaction with human tissue. However, perceiving multimodal information about the patient's anatomy and disease in real time is critical for the success of medical procedures and for patient outcomes. We introduce a Multimodal Medical Image Interaction (MMII) framework that allows medical experts a dynamic, audiovisual interaction with human tissue in three-dimensional space. In a virtual reality environment, the user receives physically informed audiovisual feedback that improves the spatial perception of anatomical structures. MMII uses a model-based sonification approach to generate sounds derived from the geometry and physical properties of tissue, thereby eliminating the need for hand-crafted sound design. Two user studies involving 34 general and nine clinical experts were conducted to evaluate the proposed interaction framework's learnability, usability, and accuracy. Our results showed excellent learnability of audiovisual correspondence: the rate of correct associations improved significantly ($p < 0.001$) over the course of the study. MMII yielded superior brain tumor localization accuracy ($p < 0.05$) compared to conventional medical image interaction. Our findings substantiate the potential of this novel framework to enhance interaction with medical images, for example during surgical procedures where immediate and precise feedback is needed.
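The abstract describes model-based sonification driven by the geometry and physical properties of tissue. As a rough illustration of that general technique (a minimal sketch, not the paper's implementation), the following Python snippet uses modal synthesis: material-like parameters set the frequencies and decay rates of a bank of damped sinusoids, loosely following f ∝ √(stiffness/density) for a mass-spring system. All parameter names, constants, and mappings here are illustrative assumptions.

```python
import numpy as np

def modal_sonification(stiffness, damping, density,
                       duration=0.5, sr=44100, n_modes=4):
    """Render a short sound via modal (physical-modeling) synthesis.

    Hypothetical illustration: tissue-like material parameters are
    mapped to the frequencies, decay rates, and amplitudes of a bank
    of damped sinusoids. Not the MMII implementation.
    """
    t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
    # Higher stiffness / lower density -> higher modal frequencies,
    # loosely following f ~ sqrt(k / m) for a mass-spring system.
    f0 = 110.0 * np.sqrt(stiffness / density)
    out = np.zeros_like(t)
    for k in range(1, n_modes + 1):
        freq = f0 * k        # simple harmonic mode spacing
        decay = damping * k  # higher modes decay faster
        out += (1.0 / k) * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)
    return out / np.max(np.abs(out))  # normalize to [-1, 1]

# Assumed, illustrative parameter values: a stiffer, denser structure
# yields a brighter, higher-pitched tap than surrounding soft tissue.
tumor_like  = modal_sonification(stiffness=4.0, damping=6.0,  density=1.1)
soft_tissue = modal_sonification(stiffness=1.0, damping=12.0, density=1.0)
```

Because the sound parameters are derived from the material model rather than authored by hand, such an approach requires no per-structure sound design, which matches the motivation stated in the abstract.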

Details

Original language: English
Pages (from-to): 7419-7429
Number of pages: 11
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 30
Issue number: 11
Publication status: Published - 2024
Peer-review status: Yes

External IDs

ORCID /0000-0001-6540-5891/work/190134764

Keywords

  • Audiovisual feedback, Augmented reality, Brain surgery, Brain tumor, HCI, Human-centered design, Human-computer interaction, Medical image interaction, Medical images, Multimodal interaction, Physical modeling synthesis, Sonification, Surgical navigation, Tumor localization, Virtual reality