TY - JOUR
T1 - Differential weighting of information during aloud and silent reading
T2 - Evidence from representational similarity analysis of fMRI data
AU - Bailey, Lyam M.
AU - Matheson, Heath E.
AU - Fawcett, Jonathon M.
AU - Bodner, Glen E.
AU - Newman, Aaron J.
PY - 2025/1/13
Y1 - 2025/1/13
N2 - Single-word reading depends on multiple types of information processing: readers must process low-level visual properties of the stimulus, form orthographic and phonological representations of the word, and retrieve semantic content from memory. Reading aloud introduces an additional type of processing wherein readers must execute an appropriate sequence of articulatory movements necessary to produce the word. To date, cognitive and neural differences between aloud and silent reading have mainly been ascribed to articulatory processes. However, it remains unclear whether articulatory information is used to discriminate unique words, at the neural level, during aloud reading. Moreover, very little work has investigated how other types of information processing might differ between the two tasks. The current work used representational similarity analysis (RSA) to interrogate fMRI data collected while participants read single words aloud or silently. RSA was implemented using a whole-brain searchlight procedure to characterise correspondence between neural data and each of five models representing a discrete type of information. Both conditions elicited decodability of visual, orthographic, phonological, and articulatory information, though to different degrees. Compared with reading silently, reading aloud elicited greater decodability of visual, phonological, and articulatory information. By contrast, silent reading elicited greater decodability of orthographic information in right anterior temporal lobe. These results support an adaptive view of reading whereby information is weighted according to its task relevance, in a manner that best suits the reader’s goals.
KW - decodability
KW - fMRI
KW - production
KW - reading aloud
KW - RSA
KW - single-word reading
UR - http://www.scopus.com/inward/record.url?scp=105000165530&partnerID=8YFLogxK
U2 - 10.1162/imag_a_00428
DO - 10.1162/imag_a_00428
M3 - Article
AN - SCOPUS:105000165530
SN - 2837-6056
VL - 3
JO - Imaging Neuroscience
JF - Imaging Neuroscience
M1 - imag_a_00428
ER -