Visual speech speeds up auditory identification responses

Tim Paris, Jeesun Kim, Chris Wayne Davis

Research output: Contribution to conference › Paper

3 Citations (Scopus)

Abstract

Auditory speech perception is more accurate when combined with visual speech. Recent ERP studies suggest that visual speech helps 'predict' which phoneme will be heard via feedback from visual to auditory areas, with more visually salient articulations associated with greater facilitation. Two experiments tested this hypothesis with a speeded auditory identification measure. Stimuli consisted of the sounds 'apa', 'aka' and 'ata', paired with matched and mismatched videos that showed the talker's whole face or upper face (control). The percentage of matched audiovisual (AV) videos was set at 85% in Experiment 1 and 15% in Experiment 2. In both experiments, responses to matched whole-face stimuli were faster than responses to both upper-face and mismatched videos. Furthermore, salient phonemes (aPa) showed a greater reduction in reaction times than ambiguous ones (aKa). The current study supports the proposal that visual speech speeds up the processing of auditory speech.

Original language: English
Pages: 2469-2472
Number of pages: 4
Publication status: Published - 2011
Event: 12th Annual Conference of the International Speech Communication Association, INTERSPEECH 2011
Duration: 27 Aug 2011 → …

Conference

Conference: 12th Annual Conference of the International Speech Communication Association, INTERSPEECH 2011
Period: 27/08/11 → …
