Artificial vocal learning guided by phoneme recognition and visual information

Publication: Journal article › Research article › Contributed › Peer-reviewed

Abstract

This paper introduces a paradigm shift for vocal learning simulations, in which the communicative function of speech acquisition determines the learning process and intelligibility is considered the primary measure of learning success. To this end, a novel approach to artificial vocal learning is presented that uses deep neural network-based phoneme recognition to compute the speech acquisition objective function. This function guides a learning framework that employs the state-of-the-art articulatory speech synthesizer VocalTractLab as the motor-to-acoustic forward model. In this way, an extensive set of German phonemes, including most consonants and all stressed vowels, was produced successfully. The synthetic phonemes were rated as highly intelligible by human listeners. Furthermore, it is shown that visual speech information, such as lip and jaw movements, can be extracted from video recordings and incorporated into the learning framework as an additional loss component during optimization. This visual loss did not increase the overall intelligibility of the phonemes; instead, it acted as a regularization mechanism that facilitated finding more biologically plausible solutions in the articulatory domain.
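The abstract describes a composite objective: a phoneme-recognition loss that measures intelligibility, plus a weighted visual loss over lip and jaw trajectories. The toy Python sketch below illustrates only this combination; all names, the stand-in functions, the random-search optimizer, and the weighting factor `lam` are illustrative assumptions, not the paper's implementation (the actual forward model is VocalTractLab and the recognizer is a deep neural network).

```python
import numpy as np

# Stand-ins for the paper's components: the forward model would be VocalTractLab,
# the recognizer a DNN phoneme classifier. Here both are dummies for illustration.
def synthesize(params):
    # Motor-to-acoustic forward model (stand-in): parameters -> "audio features".
    return np.tanh(params)

def phoneme_log_prob(audio, target):
    # Recognizer score (stand-in): higher when audio matches the target phoneme.
    return -np.sum((audio - target) ** 2)

def visual_trajectory(params):
    # Lip/jaw positions implied by the articulatory parameters (stand-in).
    return params[:2]

def vocal_learning_loss(params, target_phoneme, target_visual, lam=0.1):
    """Composite objective: intelligibility term plus weighted visual term."""
    audio = synthesize(params)
    recog_loss = -phoneme_log_prob(audio, target_phoneme)  # low when intelligible
    visual_loss = np.sum((visual_trajectory(params) - target_visual) ** 2)
    return recog_loss + lam * visual_loss

# Simple random-search loop (the paper's actual optimizer may differ).
rng = np.random.default_rng(0)
target_phoneme = rng.normal(size=8)        # hypothetical recognizer target
target_visual = np.array([0.3, -0.2])      # hypothetical lip/jaw target from video
best = rng.normal(size=8)
best_loss = vocal_learning_loss(best, target_phoneme, target_visual)
for _ in range(2000):
    cand = best + 0.05 * rng.normal(size=8)
    loss = vocal_learning_loss(cand, target_phoneme, target_visual)
    if loss < best_loss:
        best, best_loss = cand, loss
print(f"final loss: {best_loss:.4f}")
```

In this reading, increasing `lam` pulls solutions toward the observed lip and jaw trajectories without directly rewarding intelligibility, which is consistent with the regularizing (rather than intelligibility-improving) effect of the visual loss reported in the abstract.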

Details

Original language: English
Pages (from-to): 1734-1744
Number of pages: 11
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 31
Publication status: Published - 2023
Peer-review status: Yes

External IDs

Scopus 85153398480
Mendeley 5a5d4c40-82c7-3084-b861-5b371c00a7d8
ORCID /0000-0003-0167-8123/work/167214870

Keywords

  • Articulatory speech synthesis, automatic phoneme recognition, vocal learning simulation