Voice over: Audio-visual congruency and content recall in the gallery setting
Research output: Contribution to journal › Research article › Contributed › peer-review
Abstract
Experimental research has shown that pairs of stimuli that are congruent and assumed to ‘go together’ are recalled more effectively than items presented in isolation. Will this multisensory memory benefit occur when stimuli are richer and longer, in an ecological setting? In the present study, we focused on an everyday situation of audio-visual learning and manipulated the relationship between audio guide tracks and viewed portraits in the galleries of Tate Britain. By varying the gender and narrative style of the voice-over, we examined how the perceived congruency and assumed unity of the audio guide track with painted portraits affected subsequent recall. We show that tracks perceived as best matching the viewed portraits led to greater recall of both sensory and linguistic content. We provide the first evidence that manipulating crossmodal congruence and unity assumptions can effectively impact memory in a multisensory ecological setting, even in the absence of precise temporal alignment between sensory cues.
Details
Original language | English
---|---
Article number | e0177622
Journal | PLOS ONE
Volume | 12
Issue number | 6
Publication status | Published - Jun 2017
Peer-reviewed | Yes
Externally published | Yes
External IDs
PubMed | 28636667
---|---
ORCID | /0000-0001-6540-5891/work/150883503