Gaze and feet as additional input modalities for interacting with geospatial interfaces
Research output: Contribution to book/conference proceedings/anthology/report › Conference contribution › Contributed › peer-review
Abstract
Geographic Information Systems (GIS) are complex software environments, and working with them often involves multiple tasks and multiple displays. However, in most workplace settings user input is still limited to mouse and keyboard. In this project, we demonstrate how gaze and feet, used as additional input modalities, can eliminate time-consuming and disruptive mode switches between frequently performed tasks. In an iterative design process, we developed gaze- and foot-based methods for zooming and panning map visualizations. We first collected candidate gestures in a preliminary user study with a small group of experts and designed two interaction concepts based on their input. After implementing both, we evaluated them comparatively in a second user study to identify their respective strengths and shortcomings. We found that continuous foot input combined with implicit gaze input is promising for supportive tasks.
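The combination of continuous foot input with implicit gaze input described in the abstract can be illustrated with a minimal sketch. The paper does not publish its implementation here, so all names, parameters, and formulas below are illustrative assumptions: a pedal deflection in [-1, 1] drives zoom speed, while the current gaze point acts as the zoom anchor that stays fixed under the user's eyes.

```python
# Hypothetical sketch of gaze-anchored zooming (not the authors' code):
# the foot supplies continuous zoom speed, the gaze point is the implicit
# anchor that keeps its screen position while the map scales around it.

def zoom_about_gaze(center, scale, gaze, foot_tilt, dt, gain=1.5):
    """Return the new (center, scale) after one update step.

    center:    (x, y) map coordinate shown at the screen midpoint
    scale:     map units per pixel (smaller = zoomed in)
    gaze:      (x, y) map coordinate the user is looking at
    foot_tilt: pedal deflection in [-1, 1]; positive zooms in
    dt:        time step in seconds
    gain:      assumed tuning constant for zoom speed
    """
    # Exponential scaling so zoom speed feels uniform across levels.
    factor = (1.0 + gain * dt) ** foot_tilt
    new_scale = scale / factor
    # Move the center so the gazed-at map point stays under the same pixel:
    # (gaze - new_center) / new_scale == (gaze - center) / scale
    new_center = (gaze[0] + (center[0] - gaze[0]) / factor,
                  gaze[1] + (center[1] - gaze[1]) / factor)
    return new_center, new_scale
```

With a neutral pedal (`foot_tilt = 0`) the view is unchanged; with the pedal tilted forward the scale shrinks each frame while the gaze point keeps its screen position, so the map appears to zoom toward wherever the user looks.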
Details
Original language | English |
---|---|
Title of host publication | XXIII ISPRS Congress, Commission II |
Pages | 113-120 |
Number of pages | 8 |
Volume | III-2 |
Publication status | Published - 2 Jun 2016 |
Peer-reviewed | Yes |
Publication series
Series | ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences |
---|---|
ISSN | 2194-9042 |
Conference
Title | 23rd International Society for Photogrammetry and Remote Sensing Congress, ISPRS 2016 |
---|---|
Duration | 12 - 19 July 2016 |
City | Prague |
Country | Czech Republic |
External IDs
ORCID | /0000-0002-2176-876X/work/159606443 |
---|
Keywords
- Foot Interaction
- Gaze Interaction
- GIS
- Interfaces
- Multimodal Input
- Usability
- User Interfaces