Automatic classification of minimally invasive instruments based on endoscopic image sequences

Research output: Contribution to book/conference proceedings/anthology/report › Conference contribution › Contributed › peer-review

Contributors

  • Stefanie Speidel, National Center for Tumor Diseases (Partners: UKD, MFD, HZDR, DKFZ), Karlsruhe Institute of Technology (Author)
  • Julia Benzko, Karlsruhe Institute of Technology (Author)
  • Sebastian Krappe, Karlsruhe Institute of Technology (Author)
  • Gunther Sudra, Karlsruhe Institute of Technology (Author)
  • Pedram Azad, Karlsruhe Institute of Technology (Author)
  • Beat Peter Müller-Stich, Heidelberg University (Author)
  • Carsten Gutt, Heidelberg University (Author)
  • Rüdiger Dillmann, Karlsruhe Institute of Technology (Author)

Abstract

Minimally invasive surgery is nowadays a frequently applied technique and can be regarded as a major breakthrough in surgery. However, the surgeon has to adopt special operating techniques and cope with difficulties such as complex hand-eye coordination and restricted mobility. To alleviate these constraints, we propose to enhance the surgeon's capabilities by providing context-aware assistance using augmented reality techniques. Analyzing the current situation for context-aware assistance requires intraoperatively acquired sensor data and a model of the intervention. A situation consists of information about the performed activity, the instruments in use, the surgical objects and the anatomical structures, and defines the state of an intervention at a given moment in time. The endoscopic images provide a rich source of information that can be used for image-based analysis. Different visual cues are observed in order to gain as much information as possible about the current situation. An important visual cue is the automatic recognition of the instruments that appear in the scene. In this paper we present the classification of minimally invasive instruments using the endoscopic images. The instruments are not modified by markers. The system segments the instruments in the current image and recognizes the instrument type based on three-dimensional instrument models.
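The abstract does not give implementation details of the segmentation and classification steps. As a rough illustration of the general idea (marker-less segmentation of metallic instruments, followed by matching against instrument models), the sketch below uses a simple color-saturation threshold for segmentation and an intersection-over-union score against precomputed binary silhouettes as a stand-in for the paper's 3D-model-based matching. All names, thresholds, and the toy 8×8 frame are illustrative assumptions, not the authors' method.

```python
import numpy as np

def segment_instrument(rgb):
    """Segment likely instrument pixels by low color saturation.

    Heuristic assumption: laparoscopic instruments are gray/metallic
    (low saturation), while tissue is reddish (high saturation).
    """
    rgb = rgb.astype(float)
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    saturation = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    return saturation < 0.2  # boolean mask of low-saturation pixels

def classify_instrument(mask, model_silhouettes):
    """Return the model name whose silhouette best overlaps the mask.

    The paper matches against three-dimensional instrument models; here we
    approximate with 2D binary silhouettes scored by intersection-over-union.
    """
    def iou(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0
    scores = {name: iou(mask, sil) for name, sil in model_silhouettes.items()}
    return max(scores, key=scores.get)

# --- toy demo on a synthetic 8x8 frame ---
frame = np.full((8, 8, 3), [180, 60, 60], dtype=np.uint8)  # reddish "tissue"
frame[2:6, 1:7] = [120, 120, 120]                          # gray "shaft"
mask = segment_instrument(frame)

silhouettes = {
    "grasper": np.zeros((8, 8), bool),   # hypothetical instrument classes
    "scissors": np.zeros((8, 8), bool),
}
silhouettes["grasper"][2:6, 1:7] = True   # overlaps the segmented region
silhouettes["scissors"][0:2, 0:8] = True  # does not overlap

print(classify_instrument(mask, silhouettes))  # → grasper
```

In practice the segmentation would work in a perceptually motivated color space and the matching would project the 3D instrument models into the image under the estimated pose, but the two-stage structure (segment, then match against models) mirrors the pipeline described in the abstract.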

Details

Original language: English
Title of host publication: Medical Imaging 2009
Publication status: Published - 2009
Peer-reviewed: Yes

Publication series

Series: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume: 7261
ISSN: 1605-7422

Conference

Title: Medical Imaging 2009: Biomedical Applications in Molecular, Structural, and Functional Imaging
Duration: 8 - 10 February 2009
City: Lake Buena Vista, FL
Country: United States of America

External IDs

ORCID /0000-0002-4590-1908/work/163294184

Keywords

  • Abdominal procedures, Endoscopic procedures, Localization & tracking technologies, Segmentation