External Human-Machine-Interfaces on Automated Vehicles: Which message and perspective do pedestrians in crossing situations understand best?

Research output: Conference contribution (book/conference proceedings), peer-reviewed

Abstract

Future automated vehicles (AVs) could be equipped with external human-machine interfaces (eHMIs) that are supposed to facilitate the communication of AVs with surrounding road users. It has been argued that they might support pedestrians' crossing decisions (Dey et al., 2020). However, in order to achieve that, it is key that the messages conveyed are easily understandable to all road users, under all circumstances. In that regard, there is a discussion about _what_ message should be communicated in such situations: Should the eHMI communicate both the intention to yield and the intention not to yield to the pedestrian, or just one of these messages? Another question is _how_ these messages should be communicated: Should the message refer to the pedestrian (i.e., egocentric: “You can(not) go”) or to the AV (i.e., allocentric: “I (do not) intend to yield”)? And, of course, it is vital that eHMIs are understood even under high cognitive load, as pedestrians might, for example, be distracted (Dommes, 2019).

Accordingly, an earlier study explored the effects of message and perspective on the understandability of _text_-based eHMIs while taking cognitive load into account (Eisma et al., 2021). It found that egocentric messages were understood best and that its cognitive memory task had no significant effect. To test the validity of these findings, we conducted a conceptual replication employing nonverbal eHMIs (which do not exclude persons who cannot read a certain language) and a visuospatial memory task (as visual load is prominent in traffic).

The present study examined which message and perspective of _icon_-based eHMIs pedestrians understand best in terms of comprehension speed and accuracy. Participants in an online experiment (N = 85; M(age) = 36.4) repeatedly indicated crossing decisions in reaction to images of AVs equipped with one of six different icon-based eHMIs.
The images depicted a crossing situation and were taken from the perspective of a pedestrian. The eHMIs differed in their message about the intended behavior of the AV (yielding/non-yielding) and in their perspective (egocentric/allocentric/ambiguous). Each decision task was embedded in a visuospatial memory task of varying difficulty (low/medium/high), which manipulated the participants’ cognitive load. The participants’ response times, crossing decisions, and subjective ratings of clarity for each eHMI icon were measured.

Our results indicated that pedestrians understood eHMI messages telling them to cross the street better and faster than those instructing them not to do so. In terms of perspective, egocentric eHMIs were understood considerably better, both objectively and subjectively, than allocentric and ambiguous ones; there was no difference between the ambiguous and allocentric icons. The participants’ understanding of the different eHMI icons did not differ significantly depending on cognitive load.

We conclude that icon-based eHMIs are understood as correctly and quickly as text-based ones. We advise caution regarding eHMIs that communicate that the AV is not yielding, i.e., that the pedestrian cannot cross. This study corroborates previous evidence that egocentric eHMIs are understood best. Further, eHMIs are understood equally correctly and quickly even when the observer is cognitively loaded.

Details

Original language: English
Title of host publication: Intelligent Human Systems Integration (IHSI 2022): Integrating People and Intelligent Systems. AHFE International
Publication status: Published - 2022
Peer-reviewed: Yes

External IDs

Mendeley: fe495d92-aad0-35d3-9b12-6a8fee118905
ORCID: /0000-0002-1751-3342/work/142250063