Gaze and feet as additional input modalities for interacting with geospatial interfaces

Publication: Contribution to conference proceedings · Contributed · Peer-reviewed

Contributors

  • A. Çöltekin, University of Zurich (Author)
  • J. Hempel, IAV GmbH Ingenieurgesellschaft Auto und Verkehr (Author)
  • A. Brychtova, University of Zurich (Author)
  • I. Giannopoulos, ETH Zurich (Author)
  • S. Stellmach, Microsoft USA (Author)
  • R. Dachselt, Professur für Multimedia-Technologie (MT) (Author)

Abstract

Geographic Information Systems (GIS) are complex software environments, and working with GIS often involves multiple tasks and multiple displays. However, user input is still limited to mouse and keyboard in most workplace settings. In this project, we demonstrate how gaze and feet as additional input modalities can overcome time-consuming and annoying mode switches between frequently performed tasks. In an iterative design process, we developed gaze- and foot-based methods for zooming and panning map visualizations. We first collected appropriate gestures in a preliminary user study with a small group of experts and designed two interaction concepts based on their input. After implementation, we evaluated the two concepts comparatively in another user study to identify strengths and shortcomings of each. We found that continuous foot input combined with implicit gaze input is promising for supportive tasks.
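The combination of implicit gaze and continuous foot input described in the abstract can be pictured as gaze-anchored zooming: the map location under the user's gaze stays fixed on screen while a foot pedal drives zoom direction and speed. The sketch below is a hypothetical illustration of that idea, not the authors' implementation; the names `MapView` and `zoom_at_gaze` and the pedal range of -1..1 are assumptions.

```python
from dataclasses import dataclass


@dataclass
class MapView:
    center_x: float  # map coordinate at the screen origin (x)
    center_y: float  # map coordinate at the screen origin (y)
    scale: float     # map units per screen pixel


def zoom_at_gaze(view, gaze_x, gaze_y, pedal, dt, rate=2.0):
    """Zoom toward the gaze point (in screen pixels).

    `pedal` in [-1, 1] sets zoom direction and speed (a hypothetical
    continuous foot input); `dt` is the elapsed time in seconds and
    `rate` the zoom factor per second at full pedal deflection.
    """
    factor = rate ** (pedal * dt)
    # Map coordinates currently under the gaze point.
    wx = view.center_x + gaze_x * view.scale
    wy = view.center_y + gaze_y * view.scale
    new_scale = view.scale / factor
    # Shift the origin so the gazed-at map point stays under the gaze pixel.
    new_cx = wx - gaze_x * new_scale
    new_cy = wy - gaze_y * new_scale
    return MapView(new_cx, new_cy, new_scale)
```

With a neutral pedal (`pedal=0`) the view is unchanged; pressing forward zooms in around the gaze point without any explicit pointing, which is the "implicit gaze input" role the study highlights.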

Details

Original language: English
Title: XXIII ISPRS Congress, Commission II
Pages: 113-120
Number of pages: 8
Volume: III-2
Publication status: Published - 2 June 2016
Peer-reviewed: Yes

Publication series

Series: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
ISSN: 2194-9042

Conference

Title: 23rd International Society for Photogrammetry and Remote Sensing Congress, ISPRS 2016
Duration: 12 - 19 July 2016
City: Prague
Country: Czech Republic

External IDs

ORCID /0000-0002-2176-876X/work/159606443

Keywords

  • Foot Interaction
  • Gaze Interaction
  • GIS
  • Interfaces
  • Multimodal Input
  • Usability
  • User Interfaces