Look & Pedal: Hands-free Navigation in Zoomable Information Spaces through Gaze-supported Foot Input.

Research output: Contribution to conferences > Paper > Contributed > Peer-reviewed

Abstract

For a desktop computer, we investigate how to enhance conventional mouse and keyboard interaction by combining the input modalities gaze and foot. This multimodal approach offers the potential to fluently perform both manual input (e.g., for precise object selection) and gaze-supported foot input (for pan and zoom) in zoomable information spaces, in quick succession or even in parallel. For this, we take advantage of fast gaze input to implicitly indicate where to navigate to, combined with explicit foot input for speed control, while leaving the hands free for further manual input. This allows gaze input to be used in a subtle and unobtrusive way. We have carefully elaborated and investigated three variants of foot controls, incorporating one-, two-, and multidirectional foot pedals in combination with gaze. These were evaluated and compared to mouse-only input in a user study using Google Earth as a geographic information system. The results suggest that gaze-supported foot input is feasible for convenient, user-friendly navigation and comparable to mouse input; they encourage further investigation of gaze-supported foot controls.
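
To illustrate the general idea described in the abstract, the following Python sketch shows one plausible way to combine a gaze point (where to navigate) with a foot-pedal deflection (how fast) to update a pan/zoom viewport. This is not the authors' implementation: the Viewport type, the navigation_step function, and the rate constants are illustrative assumptions only.

```python
# Hypothetical sketch (not the paper's implementation): gaze selects the
# navigation target, a foot pedal sets the speed of pan and zoom.
from dataclasses import dataclass


@dataclass
class Viewport:
    center_x: float  # view centre in world coordinates
    center_y: float
    zoom: float      # scale factor; larger = closer in


def navigation_step(view: Viewport,
                    gaze_x: float, gaze_y: float,  # gaze point, normalized [0, 1] screen coords
                    pedal: float,                   # pedal deflection in [-1, 1]: + zoom in, - zoom out
                    dt: float,
                    pan_speed: float = 2.0,         # assumed tuning constants
                    zoom_speed: float = 1.5) -> Viewport:
    """Pan toward the gazed-at point and zoom at a rate set by the pedal."""
    # Offset of the gaze point from the screen centre, in screen fractions.
    dx = gaze_x - 0.5
    dy = gaze_y - 0.5
    speed = abs(pedal)  # pedal magnitude scales both pan and zoom rate

    # Move the view centre toward the gaze point; divide by zoom so the
    # on-screen pan speed stays roughly constant across zoom levels.
    new_cx = view.center_x + dx * pan_speed * speed * dt / view.zoom
    new_cy = view.center_y + dy * pan_speed * speed * dt / view.zoom

    # Exponential zoom: the pedal's sign selects zoom-in vs. zoom-out.
    new_zoom = view.zoom * (1.0 + zoom_speed * pedal * dt)
    return Viewport(new_cx, new_cy, max(new_zoom, 1e-6))


if __name__ == "__main__":
    view = Viewport(center_x=0.0, center_y=0.0, zoom=1.0)
    # Simulate gazing at the upper-right quadrant while pressing the pedal halfway forward.
    for _ in range(60):  # one second at 60 Hz
        view = navigation_step(view, gaze_x=0.8, gaze_y=0.3, pedal=0.5, dt=1 / 60)
    print(view)
```

In this reading of the technique, the hands remain uninvolved in navigation: gaze provides the direction implicitly, and the pedal provides an explicit, continuously adjustable speed, leaving mouse and keyboard free for selection and other manual tasks.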

Details

Original language: English
Pages: 123–130
Publication status: Published - 2015
Peer-reviewed: Yes

Conference

Title: 17th International Conference on Multimodal Interaction
Abbreviated title: ICMI 2015
Conference number: 17
Duration: 9–13 November 2015
Degree of recognition: International event
Location: Motif Hotel
City: Seattle
Country: United States of America

External IDs

ORCID /0000-0003-1467-7031/work/142253333
ORCID /0000-0002-2176-876X/work/151435335
Scopus 84959272010

Keywords

  • multimodal interaction, gaze input, eye tracking, navigation, pan, zoom, foot input, gaze-supported interaction