Journal of the Acoustical Society of America | 2019

Proactive neural processing of native and non-native speech


Abstract


The impact of attention and language experience on neural speech processing is typically assessed using sentences, words, and syllables that, when presented in isolation, may not engage the cortical speech network as well as more realistic continuous speech. Here, we explore the neuromodulatory effects of attention and language experience using continuous speech. We recorded electroencephalographic responses from native speakers of English and late Chinese-English bilinguals while they listened to a story recorded in English. The story was mixed with a tone sequence, and listeners were instructed to focus either on the speech (attended-speech condition) or on the tone sequence (ignored-speech condition). We used the multivariate temporal response function and the accuracy of a machine-learning-based brain-to-speech decoder to quantify differences in cortical entrainment and speech-sound category processing. Our analyses revealed a more robust, context-independent neural encoding of speech-sound categories when they were attended and native. Interestingly, while cortical entrainment to speech was also enhanced by attention, the enhancement was stronger among non-native speakers. Our results suggest that while listeners can allocate attention to improve the neural parsing of continuous speech via cortical entrainment, the benefits of attention for speech-sound category processing can be attenuated when the sounds are not native.
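The abstract names two analysis methods: the multivariate temporal response function (mTRF), which models how the EEG tracks features of the continuous speech, and a machine-learning brain-to-speech decoder. Below is a minimal, hypothetical sketch of a forward TRF fit via ridge regression, the standard estimator behind the mTRF approach (as implemented, for example, in the mTRF Toolbox of Crosse and colleagues). The lag window, regularization value, and the use of a broadband amplitude envelope as the stimulus representation are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def lagged_design(stim, lags):
    """Stack time-lagged copies of the stimulus into a design matrix."""
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stim[: n - lag]
        else:
            X[: n + lag, j] = stim[-lag:]
    return X

def fit_forward_trf(envelope, eeg, fs, tmin=-0.1, tmax=0.4, lam=100.0):
    """
    Ridge-regression estimate of a forward TRF (speech envelope -> EEG).

    envelope : (n_samples,) speech amplitude envelope
    eeg      : (n_samples, n_channels) EEG at the same sampling rate
    fs       : sampling rate in Hz
    Returns lag times in seconds and TRF weights (n_lags, n_channels).
    """
    lags = np.arange(int(np.round(tmin * fs)), int(np.round(tmax * fs)) + 1)
    X = lagged_design(envelope, lags)
    # Closed-form ridge solution: W = (X'X + lam*I)^-1 X'Y
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)
    return lags / fs, W

# Toy usage with simulated data (shapes only; real data would be preprocessed EEG).
fs = 64
env = np.abs(np.random.randn(fs * 60))   # stand-in for a 60 s speech envelope
eeg = np.random.randn(fs * 60, 32)       # stand-in for 32-channel EEG
times, trf = fit_forward_trf(env, eeg, fs)
print(trf.shape)  # (n_lags, 32)
```

In this framework, cortical entrainment is typically quantified by predicting held-out EEG from the stimulus with the fitted weights and correlating prediction with data; the decoder the abstract mentions would run in the opposite direction (EEG features to speech-sound category), for instance a regularized classifier over comparable lagged features.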

Volume 145
Pages 1820-1820
DOI 10.1121/1.5101648
Language English
Journal Journal of the Acoustical Society of America
