Gesture Recognition in Robotic Surgery With Multimodal Attention

Research output: Contribution to journal › Research article › Contributed › peer-review

Abstract

Automatically recognising surgical gestures from surgical data is an important building block of automated activity recognition and analytics, technical skill assessment, intra-operative assistance and, eventually, robotic automation. The complexity of articulated instrument trajectories and the inherent variability due to surgical style and patient anatomy make analysis and fine-grained segmentation of surgical motion patterns from robot kinematics alone very difficult. Surgical video provides crucial information from the surgical site, giving context for the kinematic data and for the interaction between the instruments and tissue. Yet sensor fusion between the robot data and the surgical video stream is non-trivial, because the data differ in frequency, dimensionality and discriminative capability. In this paper, we integrate multimodal attention mechanisms into a two-stream temporal convolutional network that computes relevance scores and dynamically weights the kinematic and visual feature representations over time, aiming to aid multimodal network training and achieve effective sensor fusion. We report the results of our system on the JIGSAWS benchmark dataset and on a new in vivo dataset of suturing segments from robotic prostatectomy procedures. Our results are promising: the multimodal prediction sequences achieve higher accuracy and better temporal structure than the corresponding unimodal solutions. Visualising the attention scores also gives physically interpretable insight into how the network weighs the strengths and weaknesses of each sensor.
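As a rough illustration of the fusion idea described in the abstract (not the paper's actual implementation), one can sketch a per-timestep modality attention in NumPy: each modality's feature sequence gets a scalar relevance score at every timestep, the scores are normalised with a softmax across modalities, and the fused representation is the attention-weighted sum. The function and parameter names (`attention_fusion`, `w_kin`, `w_vid`) are hypothetical, and the learned TCN feature extractors are stood in for by pre-computed feature arrays.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion(kin_feats, vid_feats, w_kin, w_vid):
    """Fuse kinematic and video features with per-timestep modality attention.

    kin_feats, vid_feats: (T, D) feature sequences, assumed already projected
        to a common dimension D (e.g. by each stream of a two-stream TCN).
    w_kin, w_vid: (D,) scoring vectors standing in for learned parameters.
    Returns the fused (T, D) features and the (T, 2) attention scores.
    """
    # One scalar relevance score per modality per timestep.
    scores = np.stack([kin_feats @ w_kin, vid_feats @ w_vid], axis=1)  # (T, 2)
    # Normalise across the two modalities so the weights sum to 1 per timestep.
    attn = softmax(scores, axis=1)                                     # (T, 2)
    # Attention-weighted combination of the two feature streams.
    fused = attn[:, 0:1] * kin_feats + attn[:, 1:2] * vid_feats        # (T, D)
    return fused, attn
```

Inspecting `attn` over time is what makes the mechanism interpretable: timesteps where the video weight dominates suggest the visual stream is more discriminative there, and vice versa for kinematics.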

Details

Original language: English
Pages (from-to): 1677-1687
Number of pages: 11
Journal: IEEE Transactions on Medical Imaging
Volume: 41
Issue number: 7
Publication status: Published - 1 Jul 2022
Peer-reviewed: Yes

External IDs

PubMed 35108200
ORCID /0000-0002-4590-1908/work/163293968

Keywords

  • Multimodal attention
  • Robotic surgery
  • Surgical data science
  • Surgical gesture recognition
