Integrating Touch, Gestures and Speech for Multi-modal Conversations with an Audio-Tactile Graphics Reader

Publication: Contribution in book/conference proceedings/anthology/report › Contribution in conference proceedings › Contributed › Peer-reviewed

Contributors

Abstract

Screen readers present one cell of a spreadsheet at a time and provide only a limited overview of a single table, such as its dimensions. Screen reader users struggle, for example, to recognize the labels that explain another cell's purpose. In a Wizard-of-Oz study with two wizards acting as voice agents that generate speech feedback, we explore a novel audio-tactile graphics reader with tactile grid-based overlays and spoken feedback for touch. It enables screen reader users to engage in a conversation about the spreadsheet with a voice assistant while using a screen reader to solve spreadsheet calculation tasks. In a pilot study with 3 and a main study with 8 BLV students, we identify multi-modal interaction patterns and confirm the importance of two separate speaker roles: a voice assistant and a screen reader. The conversation is driven by the user's multi-modal speech input and hand gestures, provided sequentially or in parallel. Verbal references to cells by spoken addresses, values, and formulas can be embodied as tangible objects to unify tactile and verbal representations.

Details

Original language: English
Title: Human-Computer Interaction – INTERACT 2025
Editors: Carmelo Ardito, Simone Diniz Junqueira Barbosa, Tayana Conte, André Freire, Isabela Gasparini, Philippe Palanque, Raquel Prates
Pages: 323-332
Number of pages: 10
ISBN (electronic): 978-3-032-05005-2
Publication status: Electronic publication ahead of print - 9 Sept. 2025
Peer-review status: Yes

Publication series

Series: Lecture Notes in Computer Science
Volume: 16110
ISSN: 0302-9743

External IDs

unpaywall: 10.1007/978-3-032-05005-2_17
dblp: conf/interact/UsabaevBTWO25
Scopus: 105017117068

Keywords

  • accessibility, multimodality, Wizard-of-Oz study, conversational user interface