Voice over: Audio-visual congruency and content recall in the gallery setting

Publication: Contribution to journal › Research article › Contributed › Peer-reviewed

Contributors

  • Merle T. Fairhurst, Ludwig-Maximilians-Universität München (LMU), University of London (Author)
  • Minnie Scott, Tate Gallery (Author)
  • Ophelia Deroy, School of Advanced Study, Ludwig-Maximilians-Universität München (LMU) (Author)

Abstract

Experimental research has shown that pairs of stimuli which are congruent and assumed to ‘go together’ are recalled more effectively than items presented in isolation. Will this multisensory memory benefit occur when stimuli are richer and longer, in an ecological setting? In the present study, we focused on an everyday situation of audio-visual learning and manipulated the relationship between audio guide tracks and viewed portraits in the galleries of Tate Britain. By varying the gender and narrative style of the voice-over, we examined how the perceived congruency and assumed unity of the audio guide track with painted portraits affected subsequent recall. We show that tracks perceived as best matching the viewed portraits led to greater recall of both sensory and linguistic content. We provide the first evidence that manipulating crossmodal congruence and unity assumptions can effectively impact memory in a multisensory ecological setting, even in the absence of precise temporal alignment between sensory cues.

Details

Original language: English
Article number: e0177622
Journal: PLoS ONE
Volume: 12
Issue number: 6
Publication status: Published - June 2017
Peer-review status: Yes
Published externally: Yes

External IDs

PubMed 28636667
ORCID /0000-0001-6540-5891/work/150883503

Keywords

ASJC Scopus subject areas