How the human brain recognizes speech in the context of changing speakers

Research output: Contribution to journal › Research article › Contributed › peer-review

Contributors

  • Katharina von Kriegstein, University College London, Newcastle University, Max Planck Institute for Human Cognitive and Brain Sciences (Author)
  • David R. R. Smith, University of Hull, University of Cambridge (Author)
  • Roy D. Patterson, University of Cambridge (Author)
  • Stefan J. Kiebel, University College London, Max Planck Institute for Human Cognitive and Brain Sciences (Author)
  • Timothy D. Griffiths, University College London, Newcastle University (Author)

Abstract

We understand speech from different speakers with ease, whereas artificial speech recognition systems struggle with this task. It is unclear how the human brain solves this problem. The conventional view is that speech message recognition and speaker identification are two separate functions, with message processing located predominantly in the left hemisphere and processing of speaker-specific information in the right hemisphere. Here, we distinguish the contributions of specific cortical regions to speech recognition and speaker information processing by controlled manipulation of task and resynthesized speaker parameters. Two functional magnetic resonance imaging studies provide evidence for a dynamic speech-processing network that questions the conventional view. We found that speech recognition regions in left posterior superior temporal gyrus/superior temporal sulcus (STG/STS) also encode speaker-related vocal tract parameters, which are reflected in the amplitude peaks of the speech spectrum, along with the speech message. Right posterior STG/STS responded specifically more to a speaker-related vocal tract parameter change during a speech recognition task than during a voice recognition task. Left and right posterior STG/STS were functionally connected. Additionally, we found that speaker-related glottal fold parameters (e.g., pitch), which are not reflected in the amplitude peaks of the speech spectrum, are processed in areas immediately adjacent to primary auditory cortex, i.e., at stages of the auditory hierarchy earlier than STG/STS. Our results point to a network account of speech recognition, in which information about the speech message and the speaker's vocal tract is combined to solve the difficult task of understanding speech from different speakers.

Details

Original language: English
Pages (from-to): 629-638
Number of pages: 10
Journal: Journal of Neuroscience
Volume: 30
Issue number: 2
Publication status: Published - 13 Jan 2010
Peer-reviewed: Yes
Externally published: Yes

External IDs

PubMed: 20071527
ORCID: /0000-0001-7989-5860/work/142244399
