Publication


Featured research published by Matthew K. Leonard.


The Journal of Neuroscience | 2012

Signed Words in the Congenitally Deaf Evoke Typical Late Lexicosemantic Responses with No Early Visual Responses in Left Superior Temporal Cortex

Matthew K. Leonard; Naja Ferjan Ramirez; Christina Torres; Katherine E. Travis; Marla Hatrak; Rachel I. Mayberry; Eric Halgren

Congenitally deaf individuals receive little or no auditory input, and when raised by deaf parents, they acquire sign as their native and primary language. We asked two questions regarding how the deaf brain in humans adapts to sensory deprivation: (1) is meaning extracted and integrated from signs using the same classical left hemisphere frontotemporal network used for speech in hearing individuals, and (2) in deafness, is superior temporal cortex encompassing primary and secondary auditory regions reorganized to receive and process visual sensory information at short latencies? Using MEG constrained by individual cortical anatomy obtained with MRI, we examined an early time window associated with sensory processing and a late time window associated with lexicosemantic integration. We found that sign in deaf individuals and speech in hearing individuals activate a highly similar left frontotemporal network (including superior temporal regions surrounding auditory cortex) during lexicosemantic processing, but only speech in hearing individuals activates auditory regions during sensory processing. Thus, neural systems dedicated to processing high-level linguistic information are used for processing language regardless of modality or hearing status, and we do not find evidence for rewiring of afferent connections from visual systems to auditory cortex.
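
The logic of the analysis is a contrast between an early sensory window and a late lexicosemantic window within an anatomical region of interest. Below is a minimal sketch of that kind of time-window contrast on synthetic single-trial source amplitudes; the window bounds, group sizes, and data are illustrative, not the authors' pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
times = np.arange(-0.1, 0.6, 0.001)            # seconds, 1 kHz (illustrative)
early = (times >= 0.05) & (times <= 0.15)      # "sensory" window
late = (times >= 0.30) & (times <= 0.45)       # "lexicosemantic" window

# Synthetic single-trial source amplitudes in a superior temporal ROI:
# speech in hearing subjects responds in both windows, sign in deaf
# subjects only in the late window (mirroring the reported result).
speech_hearing = rng.normal(size=(40, times.size)) + 1.0 * early + 1.0 * late
sign_deaf = rng.normal(size=(40, times.size)) + 1.0 * late

for name, win in [("early", early), ("late", late)]:
    a = speech_hearing[:, win].mean(axis=1)
    b = sign_deaf[:, win].mean(axis=1)
    t, p = stats.ttest_ind(a, b)
    print(f"{name} window: t={t:+.2f}, p={p:.3g}")
```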


PLOS ONE | 2011

Language proficiency modulates the recruitment of non-classical language areas in bilinguals.

Matthew K. Leonard; Christina Torres; Katherine E. Travis; Timothy T. Brown; Donald J. Hagler; Anders M. Dale; Jeffrey L. Elman; Eric Halgren

Bilingualism provides a unique opportunity for understanding the relative roles of proficiency and order of acquisition in determining how the brain represents language. In a previous study, we combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine the spatiotemporal dynamics of word processing in a group of Spanish-English bilinguals who were more proficient in their native language. We found that from the earliest stages of lexical processing, words in the second language evoke greater activity in bilateral posterior visual regions, while activity to the native language is largely confined to classical left hemisphere fronto-temporal areas. In the present study, we sought to examine whether these effects relate to language proficiency or order of language acquisition by testing Spanish-English bilingual subjects who had become dominant in their second language. Additionally, we wanted to determine whether activity in bilateral visual regions was related to the presentation of written words in our previous study, so we presented subjects with both written and auditory words. We found greater activity for the less proficient native language in bilateral posterior visual regions for both the visual and auditory modalities, which started during the earliest word encoding stages and continued through lexico-semantic processing. In classical left fronto-temporal regions, the two languages evoked similar activity. Therefore, it is the lack of proficiency rather than secondary acquisition order that determines the recruitment of non-classical areas for word processing.


NeuroImage | 2010

Spatiotemporal dynamics of bilingual word processing.

Matthew K. Leonard; Timothy T. Brown; Katherine E. Travis; Lusineh Gharapetian; Donald J. Hagler; Anders M. Dale; Jeffrey L. Elman; Eric Halgren

Studies with monolingual adults have identified successive stages occurring in different brain regions for processing single written words. We combined magnetoencephalography and magnetic resonance imaging to compare these stages between the first (L1) and second (L2) languages in bilingual adults. L1 words in a size judgment task evoked a typical left-lateralized sequence of activity first in ventral occipitotemporal cortex (VOT: previously associated with visual word-form encoding) and then ventral frontotemporal regions (associated with lexico-semantic processing). Compared to L1, words in L2 activated right VOT more strongly from approximately 135 ms; this activation was attenuated when words became highly familiar with repetition. At approximately 400 ms, L2 responses were generally later than L1, more bilateral, and included the same lateral occipitotemporal areas as were activated by pictures. We propose that acquiring a language involves the recruitment of right hemisphere and posterior visual areas that are not necessary once fluency is achieved.
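
The shift from left-lateralized L1 activity to more bilateral L2 activity can be summarized with a standard laterality index, (L - R) / (L + R), computed over homologous left and right sources. A toy illustration follows; the amplitudes and ROI are invented, not the study's measurements.

```python
import numpy as np

def laterality_index(left, right):
    """Standard laterality index: +1 = fully left-lateralized, -1 = fully right."""
    return (left - right) / (left + right)

# Made-up mean source amplitudes in ventral occipitotemporal cortex
# around 135 ms for each language (arbitrary units).
roi_amplitude = {"L1": {"left": 8.0, "right": 3.0},
                 "L2": {"left": 7.5, "right": 6.5}}

for lang, amp in roi_amplitude.items():
    li = laterality_index(amp["left"], amp["right"])
    print(f"{lang}: laterality index = {li:+.2f}")
```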


Cerebral Cortex | 2011

Spatiotemporal Neural Dynamics of Word Understanding in 12- to 18-Month-Old Infants

Katherine E. Travis; Matthew K. Leonard; Timothy T. Brown; Donald J. Hagler; Megan Curran; Anders M. Dale; Jeffrey L. Elman; Eric Halgren

Learning words is central to human development. However, lacking clear evidence for how or where language is processed in the developing brain, it is unknown whether these processes are similar in infants and adults. Here, we use magnetoencephalography in combination with high-resolution structural magnetic resonance imaging to noninvasively estimate the spatiotemporal distribution of word-selective brain activity in 12- to 18-month-old infants. Infants watched pictures of common objects and listened to words that they understood. A subset of these infants also listened to familiar words compared with sensory control sounds. In both experiments, words evoked a characteristic event-related brain response peaking ∼400 ms after word onset, which localized to left frontotemporal cortices. In adults, this activity, termed the N400m, is associated with lexico-semantic encoding. As in adults, we find that the amplitude of the infant N400m is modulated by semantic priming, being reduced for words preceded by a semantically related picture. These findings suggest that similar left frontotemporal areas are used for encoding lexico-semantic information throughout the life span, from the earliest stages of word learning. Furthermore, this ontogenetic consistency implies that the neurophysiological processes underlying the N400m may be important both for understanding already known words and for learning new words.
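
Computationally, the priming effect reported here is a comparison of mean evoked amplitude in a window around 400 ms between primed and unprimed words. A minimal sketch on synthetic evoked responses is below; the window bounds, sampling rate, and effect sizes are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
times = np.arange(-0.1, 0.8, 1 / 600)          # seconds, 600 Hz (illustrative)
n400_win = (times >= 0.35) & (times <= 0.45)   # window around ~400 ms

# Synthetic evoked responses (trials x time): words preceded by a related
# picture get a smaller deflection in the N400 window than unprimed words.
unrelated = rng.normal(size=(60, times.size)) + 2.0 * n400_win
related = rng.normal(size=(60, times.size)) + 1.0 * n400_win

amp_unrel = unrelated[:, n400_win].mean(axis=1)
amp_rel = related[:, n400_win].mean(axis=1)
t, p = stats.ttest_rel(amp_unrel, amp_rel)     # paired: same trials/conditions design
print(f"priming effect: t={t:.2f}, p={p:.4f}")
```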


Cerebral Cortex | 2014

Speech-Specific Tuning of Neurons in Human Superior Temporal Gyrus

Alexander M. Chan; Andrew R. Dykstra; Vinay Jayaram; Matthew K. Leonard; Katherine E. Travis; Brian Gygi; Janet M. Baker; Emad N. Eskandar; Leigh R. Hochberg; Eric Halgren; Sydney S. Cash

How the brain extracts words from auditory signals is an unanswered question. We recorded approximately 150 single and multi-units from the left anterior superior temporal gyrus of a patient during multiple auditory experiments. Against low background activity, 45% of units robustly fired to particular spoken words with little or no response to pure tones, noise-vocoded speech, or environmental sounds. Many units were tuned to complex but specific sets of phonemes, which were influenced by local context but invariant to speaker, and suppressed during self-produced speech. The firing of several units to specific visual letters was correlated with their response to the corresponding auditory phonemes, providing the first direct neural evidence for phonological recoding during reading. Maximal decoding of individual phoneme and word identities was attained using firing rates from approximately 5 neurons within 200 ms after word onset. Thus, neurons in human superior temporal gyrus use sparse spatially organized population encoding of complex acoustic-phonetic features to help recognize auditory and visual words.
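
Decoding word identity from the firing rates of a handful of units is, at its core, multiclass classification on spike-count features. Here is a toy sketch using a linear classifier on synthetic, word-selective Poisson spike counts; nothing in it reproduces the paper's actual decoder or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_words, trials_per_word, n_units = 4, 30, 5   # ~5 informative units (illustrative)

# Synthetic firing-rate features: spike counts in the first 200 ms after
# word onset, with each unit preferring a different word (sparse tuning).
X, y = [], []
for w in range(n_words):
    rates = np.full(n_units, 2.0)
    rates[w % n_units] = 10.0                  # word-selective unit fires more
    X.append(rng.poisson(rates, size=(trials_per_word, n_units)))
    y.append(np.full(trials_per_word, w))
X, y = np.vstack(X), np.concatenate(y)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated word decoding accuracy: {acc:.2f} (chance = {1/n_words:.2f})")
```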


Trends in Cognitive Sciences | 2014

Dynamic speech representations in the human temporal lobe

Matthew K. Leonard; Edward F. Chang

Speech perception requires rapid integration of acoustic input with context-dependent knowledge. Recent methodological advances have allowed researchers to identify underlying information representations in primary and secondary auditory cortex and to examine how context modulates these representations. We review recent studies that focus on contextual modulations of neural activity in the superior temporal gyrus (STG), a major hub for spectrotemporal encoding. Recent findings suggest a highly interactive flow of information processing through the auditory ventral stream, including influences of higher-level linguistic and metalinguistic knowledge, even within individual areas. Such mechanisms may give rise to more abstract representations, such as those for words. We discuss the importance of characterizing representations of context-dependent and dynamic patterns of neural activity in the approach to speech perception research.


The Journal of Neuroscience | 2015

Dynamic encoding of speech sequence probability in human temporal cortex.

Matthew K. Leonard; Kristofer E. Bouchard; Claire Tang; Edward F. Chang

Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning.
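
The transition probabilities at the heart of this study are simple conditional probabilities over adjacent speech sounds. A minimal sketch of how such statistics can be estimated by counting over a phoneme-transcribed lexicon follows; the toy lexicon and ARPAbet-style labels are purely illustrative, whereas the study derived its statistics from English.

```python
from collections import Counter, defaultdict

# Toy phoneme-transcribed lexicon (ARPAbet-ish; purely illustrative).
lexicon = [["S", "P", "IY", "CH"], ["S", "P", "IH", "N"],
           ["S", "T", "AA", "P"], ["P", "IY", "K"]]

pair_counts = Counter()
context_counts = defaultdict(int)
for word in lexicon:
    for a, b in zip(word, word[1:]):
        pair_counts[(a, b)] += 1
        context_counts[a] += 1

def transition_prob(a, b):
    """P(next phoneme = b | current phoneme = a), estimated by counting."""
    return pair_counts[(a, b)] / context_counts[a] if context_counts[a] else 0.0

print(transition_prob("S", "P"))   # 2/3: 'S' is followed by 'P' in 2 of 3 cases
print(transition_prob("S", "T"))   # 1/3
```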


Cerebral Cortex | 2013

Independence of Early Speech Processing from Word Meaning

Katherine E. Travis; Matthew K. Leonard; Alexander M. Chan; Christina Torres; Marisa L. Sizemore; Zhe Qu; Emad N. Eskandar; Anders M. Dale; Jeffrey L. Elman; Sydney S. Cash; Eric Halgren

We combined magnetoencephalography (MEG) with magnetic resonance imaging and electrocorticography to separate in anatomy and latency 2 fundamental stages underlying speech comprehension. The first acoustic-phonetic stage is selective for words relative to control stimuli individually matched on acoustic properties. It begins ∼60 ms after stimulus onset and is localized to middle superior temporal cortex. It was replicated in another experiment, but is strongly dissociated from the response to tones in the same subjects. Within the same task, semantic priming of the same words by a related picture modulates cortical processing in a broader network, but this does not begin until ∼217 ms. The earlier onset of acoustic-phonetic processing compared with lexico-semantic modulation was significant in each individual subject. The MEG source estimates were confirmed with intracranial local field potential and high gamma power responses acquired in 2 additional subjects performing the same task. These recordings further identified sites within superior temporal cortex that responded only to the acoustic-phonetic contrast at short latencies, or only to the lexico-semantic contrast at long latencies. The independence of the early acoustic-phonetic response from semantic context suggests a limited role for lexical feedback in early speech perception.
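
Estimating a response onset like the ∼60 ms figure here amounts to finding the first latency at which the word-versus-control difference becomes reliably sustained. Below is a simple thresholded-run heuristic on synthetic data; this is a sketch, not the paper's statistics.

```python
import numpy as np

rng = np.random.default_rng(3)
sfreq = 1000
times = np.arange(-0.1, 0.5, 1 / sfreq)

# Synthetic ROI time courses (trials x time): the effect switches on at 60 ms.
effect = (times >= 0.06).astype(float)
words = rng.normal(size=(50, times.size)) + effect
controls = rng.normal(size=(50, times.size))

# Onset = first sample where the across-trial mean difference stays above
# threshold for 20 consecutive ms (an illustrative heuristic).
diff = words.mean(axis=0) - controls.mean(axis=0)
above = diff > 0.5
run = int(0.02 * sfreq)
onsets = [i for i in range(len(above) - run) if above[i:i + run].all()]
print(f"estimated onset: {times[onsets[0]] * 1000:.0f} ms" if onsets else "no onset")
```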


Nature Communications | 2016

Perceptual restoration of masked speech in human cortex

Matthew K. Leonard; Maxime O. Baud; Matthias J. Sjerps; Edward F. Chang

Humans are adept at understanding speech despite the fact that our natural listening environment is often filled with interference. An example of this capacity is phoneme restoration, in which part of a word is completely replaced by noise, yet listeners report hearing the whole word. The neurological basis for this unconscious fill-in phenomenon is unknown, despite being a fundamental characteristic of human hearing. Here, using direct cortical recordings in humans, we demonstrate that missing speech is restored at the acoustic-phonetic level in bilateral auditory cortex, in real time. This restoration is preceded by specific neural activity patterns in a separate language area, left frontal cortex, which predicts the word that participants later report hearing. These results demonstrate that during speech perception, missing acoustic content is synthesized online from the integration of incoming sensory cues and the internal neural dynamics that bias word-level expectation and prediction.


Brain and Language | 2016

The peri-Sylvian cortical network underlying single word repetition revealed by electrocortical stimulation and direct neural recordings.

Matthew K. Leonard; Ruofan Cai; Miranda Babiak; Angela Ren; Edward F. Chang

Verbal repetition requires the coordination of auditory, memory, linguistic, and motor systems. To date, the basic dynamics of neural information processing in this deceptively simple behavior are largely unknown. Here, we examined the neural processes underlying verbal repetition using focal interruption (electrocortical stimulation) in 58 patients undergoing awake craniotomies, and neurophysiological recordings (electrocorticography) in 8 patients while they performed a single word repetition task. Electrocortical stimulation revealed that sub-components of the left peri-Sylvian network involved in single word repetition could be differentially interrupted, producing transient perceptual deficits, paraphasic errors, or speech arrest. Electrocorticography revealed the detailed spatiotemporal dynamics of cortical activation, involving a highly ordered but overlapping temporal progression of cortical high gamma (75-150 Hz) activity throughout the peri-Sylvian cortex. We observed functionally distinct serial and parallel cortical processing corresponding to successive stages of general auditory processing (posterior superior temporal gyrus), speech-specific auditory processing (middle and posterior superior temporal gyrus), working memory (inferior frontal cortex), and motor articulation (sensorimotor cortex). Together, these methods reveal the dynamics of coordinated activity across peri-Sylvian cortex during verbal repetition.
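
High gamma (75-150 Hz) activity of the kind tracked here is commonly estimated by band-pass filtering and taking the analytic amplitude. A minimal sketch of that standard procedure on a synthetic trace follows; the filter order, band edges, and sampling rate are illustrative, not the paper's exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_envelope(signal, sfreq, band=(75.0, 150.0), order=4):
    """Band-pass filter and take the analytic amplitude (Hilbert envelope).
    A standard way to estimate high-gamma activity; parameters illustrative."""
    nyq = sfreq / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, signal)
    return np.abs(hilbert(filtered))

# Synthetic ECoG-like trace: a 100 Hz burst embedded in noise.
sfreq = 1000
t = np.arange(0, 2.0, 1 / sfreq)
trace = np.random.default_rng(4).normal(size=t.size)
trace[500:900] += 3 * np.sin(2 * np.pi * 100 * t[500:900])

envelope = high_gamma_envelope(trace, sfreq)
print(f"peak envelope at {t[envelope.argmax()]:.2f} s")  # falls within 0.5-0.9 s
```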

Collaboration


Dive into Matthew K. Leonard's collaborations.

Top Co-Authors
Eric Halgren

University of California

Anders M. Dale

University of California
