Look & Pedal: Hands-free Navigation in Zoomable Information Spaces through Gaze-supported Foot Input

Publication: Contribution to conferences › Paper › Contributed › Peer-reviewed

Abstract

For a desktop computer, we investigate how to enhance conventional mouse and keyboard interaction by combining the input modalities gaze and foot. This multimodal approach offers the potential to fluently perform both manual input (e.g., for precise object selection) and gaze-supported foot input (for pan and zoom) in zoomable information spaces in quick succession or even in parallel. We take advantage of fast gaze input to implicitly indicate where to navigate to, while additional explicit foot input controls the navigation speed and leaves the hands free for further manual input. In this way, gaze input is used in a subtle and unobtrusive manner. We have carefully elaborated and investigated three variants of foot controls, incorporating one-, two-, and multidirectional foot pedals in combination with gaze. These were evaluated and compared to mouse-only input in a user study using Google Earth as a geographic information system. The results suggest that gaze-supported foot input is feasible for convenient, user-friendly navigation, performs comparably to mouse input, and encourages further investigation of gaze-supported foot controls.
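
To make the described division of labor concrete, the following is a minimal Python sketch of one possible realization of the technique: the gaze point implicitly selects *where* to zoom, while a pedal deflection controls *how fast*. All identifiers (Viewport, navigate_step, the [-1, 1] pedal range, the zoom rate) are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass


@dataclass
class Viewport:
    """Visible region of the information space: world-space center and zoom factor."""
    cx: float
    cy: float
    zoom: float


def screen_to_world(view: Viewport, sx: float, sy: float,
                    width: float, height: float) -> tuple[float, float]:
    """Map a screen-space gaze point to world coordinates under the current viewport."""
    return (view.cx + (sx - width / 2) / view.zoom,
            view.cy + (sy - height / 2) / view.zoom)


def navigate_step(view: Viewport, gaze_x: float, gaze_y: float,
                  pedal: float, dt: float,
                  width: float = 1920, height: float = 1080,
                  max_zoom_rate: float = 1.5) -> Viewport:
    """One update step: gaze picks the zoom target, the pedal picks the speed.

    pedal in [-1, 1] (hypothetical range): positive deflection zooms in toward
    the gaze point, negative zooms out, 0 leaves the view untouched so the
    hands remain free for manual input.
    """
    if pedal == 0.0:
        return view
    # World point currently under the gaze cursor; it is kept fixed on screen
    # while zooming, so the view "falls into" the location the user looks at.
    wx, wy = screen_to_world(view, gaze_x, gaze_y, width, height)
    # Exponential zoom, scaled by pedal deflection and frame time.
    new_zoom = view.zoom * max_zoom_rate ** (pedal * dt)
    # Re-center so (wx, wy) stays under the gaze point after the zoom change,
    # which implicitly pans toward the gazed-at location.
    new_cx = wx - (gaze_x - width / 2) / new_zoom
    new_cy = wy - (gaze_y - height / 2) / new_zoom
    return Viewport(new_cx, new_cy, new_zoom)


if __name__ == "__main__":
    view = Viewport(cx=0.0, cy=0.0, zoom=1.0)
    # Simulate holding the pedal half-deflected for one second (60 frames)
    # while looking at the upper-right quadrant of the screen.
    for _ in range(60):
        view = navigate_step(view, gaze_x=1440, gaze_y=270, pedal=0.5, dt=1 / 60)
    print(view)
```

Keeping the gazed-at world point fixed under the gaze cursor while the zoom changes means that zooming in implicitly pans toward wherever the user looks, matching the "where to navigate" role the abstract assigns to gaze; only the speed control requires an explicit (foot) action.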

Details

Original language: English
Pages: 123–130
Publication status: Published - 2015
Peer-review status: Yes

Conference

Title: 17th International Conference on Multimodal Interaction
Short title: ICMI 2015
Event number: 17
Duration: 9–13 November 2015
Degree of recognition: International event
Location: Motif Hotel
City: Seattle
Country: United States

External IDs

ORCID /0000-0003-1467-7031/work/142253333
ORCID /0000-0002-2176-876X/work/151435335
Scopus 84959272010

Keywords

  • multimodal interaction, gaze input, eye tracking, navigation, pan, zoom, foot input, gaze-supported interaction