Visual abilities are important for auditory-only speech recognition: Evidence from autism spectrum disorder

Publication: Contribution to journal › Research article › Contributed › Peer-reviewed

Abstract

In auditory-only conditions, for example when we listen to someone on the phone, it is essential to recognize quickly and accurately what is said (speech recognition). Previous studies have shown that speech recognition performance in auditory-only conditions is better if the speaker is known not only by voice, but also by face. Here, we tested the hypothesis that such an improvement in auditory-only speech recognition depends on the ability to lip-read. To test this, we recruited a group of adults with autism spectrum disorder (ASD), a condition associated with difficulties in lip-reading, and typically developing controls. All participants were trained to identify six speakers by name and voice. Three speakers were learned from a video showing their face, and three others were learned in a matched control condition without a face. After training, participants performed an auditory-only speech recognition test that consisted of sentences spoken by the trained speakers. As a control condition, the test also included speaker identity recognition on the same auditory material. The results showed that, in the control group, speech recognition performance was better for speakers known by face than for speakers learned in the matched control condition without a face. The ASD group lacked such a performance benefit: for the ASD group, auditory-only speech recognition was even worse for speakers known by face than for speakers not known by face. In speaker identity recognition, the ASD group performed worse than the control group, independent of whether the speakers had been learned with or without a face. Two additional visual experiments showed that the ASD group performed worse in lip-reading, whereas face identity recognition was within the normal range. The findings support the view that auditory-only communication involves specific visual mechanisms. Further, they indicate that in ASD, speaker-specific dynamic visual information is not available to optimize auditory-only speech recognition.

Details

Original language: English
Pages (from - to): 1-11
Number of pages: 11
Journal: Neuropsychologia
Volume: 65
Publication status: Published - Dec. 2014
Peer-review status: Yes

External IDs

Scopus 84918780232
ORCID /0000-0001-7989-5860/work/142244384
ORCID /0000-0001-9298-2125/work/143074516

Keywords

  • Adult, Child Development Disorders, Pervasive/physiopathology, Face, Female, Humans, Lipreading, Male, Recognition, Psychology/physiology, Speech Perception/physiology, Visual Perception/physiology, Young Adult