Jonathan Brennan
University of Michigan
Publication
Featured research published by Jonathan Brennan.
Brain and Language | 2016
Jonathan Brennan; Edward P. Stabler; Sarah E. Van Wagenen; Wen-Ming Luh; John Hale
Neurolinguistic accounts of sentence comprehension identify a network of relevant brain regions, but do not detail the information flowing through them. We investigate syntactic information. Does brain activity implicate a computation over hierarchical grammars or does it simply reflect linear order, as in a Markov chain? To address this question, we quantify the cognitive states implied by alternative parsing models. We compare processing-complexity predictions from these states against fMRI timecourses from regions that have been implicated in sentence comprehension. We find that hierarchical grammars independently predict timecourses from left anterior and posterior temporal lobe. Markov models are predictive in these regions and across a broader network that includes the inferior frontal gyrus. These results suggest that while linear effects are widespread across the language network, certain areas in the left temporal lobe deal with abstract, hierarchical syntactic representations.
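The Markov-chain alternative mentioned in this abstract can be illustrated concretely. The sketch below, with hypothetical training sentences, computes word-by-word surprisal (negative log probability) from a first-order Markov (bigram) model with add-alpha smoothing; in the study, per-word complexity values like these serve as predictors against fMRI timecourses. This is a minimal illustration of the idea, not the paper's actual language model.

```python
import math
from collections import Counter

def bigram_surprisal(corpus_sentences, test_sentence, alpha=1.0):
    """Word-by-word surprisal, -log2 P(w_i | w_{i-1}), under a
    first-order Markov (bigram) model with add-alpha smoothing."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sent in corpus_sentences:
        words = ["<s>"] + sent.split()
        vocab.update(words)
        for prev, cur in zip(words, words[1:]):
            unigrams[prev] += 1
            bigrams[(prev, cur)] += 1
    v = len(vocab)
    out = []
    words = ["<s>"] + test_sentence.split()
    for prev, cur in zip(words, words[1:]):
        p = (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * v)
        out.append((cur, -math.log2(p)))  # higher = more surprising
    return out

# Toy corpus: "the" always opens a sentence, so it is the least surprising word.
surprisals = bigram_surprisal(["the dog ran", "the cat ran"], "the dog ran")
```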
Language and Linguistics Compass | 2016
Jonathan Brennan
The cognitive neuroscience of language relies largely on controlled experiments that are different from the everyday situations in which we use language. This review describes an approach that studies specific aspects of sentence comprehension in the brain using data collected while participants perform an everyday task, such as listening to a story. The approach uses ‘neuro-computational’ models that are based on linguistic and psycholinguistic theories. These models quantify how a specific computation, such as identifying a syntactic constituent, might be carried out by a neural circuit word-by-word. Model predictions are tested for their statistical fit with measured brain data. The paper discusses three applications of this approach: (i) to probe the location and timing of linguistic processing in the brain without requiring unnatural tasks and stimuli, (ii) to test theoretical hypotheses by comparing the fits of different models to naturalistic data, and (iii) to study neural mechanisms for language processing in populations that are poorly served by traditional methods.

1. Language comprehension inside and outside of the lab

Research in the cognitive neuroscience of language has mapped many of the relevant brain regions and has begun to reveal the dynamic interplay between these regions that underlies language comprehension and production (for reviews, see Friederici and Gierhan 2013; Hagoort and Indefrey 2014; see Kemmerer 2014 for a textbook introduction). There is growing interest in whether these results extend beyond constrained laboratory settings to natural, everyday uses of language like listening to a story or having a face-to-face conversation (see the papers collected in Willems 2015). The stimuli and tasks used in these new efforts are ‘naturalistic’ in that they are drawn from how language is used outside of the laboratory but are constrained by the equipment needed to record brain signals.
This review describes an approach to studying the neural bases of specific sub-processes of sentence processing during naturalistic comprehension. The majority of work using naturalistic stimuli has addressed language processing at a coarse-grained level. Studies have outlined a range of brain regions that are engaged during natural reading (Yarkoni et al. 2008; Speer et al. 2009; Xu et al. 2005; Wehbe et al. 2014), listening (Brennan et al. 2012; Whitney et al. 2009; Lerner et al. 2014), and audio-visual processing (Wilson et al. 2008; Skipper et al. 2009). Extensions of this work have identified neural signals associated with higher-order processes that are shared across speech rates (Lerner et al. 2014) and across production and comprehension (Stephens et al. 2010; Silbert et al. 2014). Other work using naturalistic stimulation has focused on aspects of discourse comprehension (Whitney et al. 2009; Egidi and Caramazza 2013; Kurby and Zacks 2013). These efforts have proved fruitful in building bridges between cognitive neuroscience and humanistic studies including the study of literature (Willems 2013), poetics (Jacobs 2015), and cinema (Hasson et al. 2008). In contrast, there have been relatively few studies of more fine-grained linguistic processes at the sentence level and below.
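The core step this review describes, testing a word-by-word model prediction for its statistical fit with measured brain data, can be sketched minimally. The example below uses a plain Pearson correlation on hypothetical numbers; real analyses convolve predictors with a hemodynamic response function and use regression with nuisance covariates, so this is only an illustration of the fitting logic.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between a word-by-word model predictor
    and an aligned measured brain timecourse."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical values: a per-word complexity predictor and a noisy signal.
predictor = [1, 3, 2, 5, 4, 6, 2, 7]
signal    = [0.9, 2.8, 2.2, 4.6, 4.1, 6.3, 1.7, 7.2]
fit = pearson_r(predictor, signal)
```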
Brain and Language | 2014
Jonathan Brennan; Constantine Lignos; David Embick; Timothy P.L. Roberts
Lexical access during speech comprehension comprises numerous computations, including activation, competition, and selection. The spatio-temporal profile of these processes involves neural activity in peri-auditory cortices at least as early as 200 ms after stimulation. Their oscillatory dynamics are less well understood, although reports link alpha band de-synchronization with lexical processing. We used magnetoencephalography (MEG) to examine whether these alpha-related oscillations reflect the speed of lexical access, as would be predicted if they index lexical activation. In an auditory semantic priming protocol, monosyllabic nouns were presented while participants performed a lexical decision task. Spatially-localizing beamforming was used to examine spectro-temporal effects in left and right auditory cortex time-locked to target word onset. Alpha and beta de-synchronization (10-20 Hz ERD) was attenuated for words following a related prime compared to an unrelated prime beginning about 270 ms after stimulus onset. This timing is consistent with how information about word identity unfolds incrementally in speech, quantified in information-theoretic terms. These findings suggest that alpha de-synchronization during auditory word processing is associated with early stages of lexical access.
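The alpha/beta de-synchronization (ERD) measure in this abstract has a standard quantification: band power is averaged across trials and expressed as percent change from a pre-stimulus baseline, with negative values indicating de-synchronization. Below is a minimal sketch of that computation on hypothetical per-trial power values; it is an illustration of the general ERD formula, not the study's beamformer pipeline.

```python
def erd_timecourse(trial_powers, baseline_window):
    """ERD/ERS timecourse: average band power across trials, then
    express each timepoint as percent change from the mean power in
    a pre-stimulus baseline window. Negative = desynchronization."""
    n_trials = len(trial_powers)
    n_times = len(trial_powers[0])
    avg = [sum(tr[t] for tr in trial_powers) / n_trials
           for t in range(n_times)]
    b0, b1 = baseline_window
    base = sum(avg[b0:b1]) / (b1 - b0)
    return [100.0 * (p - base) / base for p in avg]

# Hypothetical data: power halves after the baseline window (samples 0-1),
# i.e. a 50% desynchronization.
curve = erd_timecourse([[4, 4, 2, 2], [4, 4, 2, 2]], (0, 2))
```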
North American Chapter of the Association for Computational Linguistics | 2015
John Hale; David Lutz; Wen-Ming Luh; Jonathan Brennan
Neuroimaging while participants listen to audiobooks provides a rich data source for theories of incremental parsing. We compare nested regression models of these data. These mixed-effects models incorporate linguistic predictors at various grain sizes ranging from part-of-speech bigrams, through surprisal on context-free treebank grammars, to incremental node counts in trees that are derived by Minimalist Grammars. The fine-grained structures make an independent contribution over and above coarser predictors. However, this result only obtains with time courses from anterior temporal lobe (aTL). In analogous time courses from inferior frontal gyrus, only n-grams improve upon a non-syntactic baseline. These results support the idea that aTL does combinatoric processing during naturalistic story comprehension, processing that bears a systematic relationship to linguistic structure.
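The nested-model comparison this abstract relies on asks whether a fine-grained predictor improves fit over and above coarser ones. A minimal version of that test is an F statistic on the reduction in residual error between a baseline and a fuller model. The sketch below uses simple OLS on hypothetical numbers; the study itself used mixed-effects models, so this only illustrates the nested-comparison logic.

```python
def rss(y, yhat):
    """Residual sum of squares."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat))

def fit_simple(x, y):
    """Ordinary least squares fit for y = b0 + b1*x; returns fitted values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((a - mx) * (b - my) for a, b in zip(x, y))
          / sum((a - mx) ** 2 for a in x))
    b0 = my - b1 * mx
    return [b0 + b1 * a for a in x]

def f_test_nested(y, yhat_base, yhat_full, df_extra, df_resid):
    """F statistic: does the fuller model reduce residual error enough
    to justify its extra df_extra parameters?"""
    rss0, rss1 = rss(y, yhat_base), rss(y, yhat_full)
    return ((rss0 - rss1) / df_extra) / (rss1 / df_resid)

# Hypothetical timecourse that tracks a linguistic predictor almost linearly.
x = list(range(1, 9))
y = [1.1, 2.0, 2.9, 4.2, 5.1, 5.9, 7.2, 8.0]
baseline = [sum(y) / len(y)] * len(y)   # intercept-only model
fuller = fit_simple(x, y)               # adds the predictor
f_stat = f_test_nested(y, baseline, fuller, 1, len(y) - 2)
```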
Annals of the New York Academy of Sciences | 2015
Ioulia Kovelman; Neelima Wagley; Jessica F. Hay; Margaret Ugolini; Susan M. Bowyer; Renee Lajiness-O'Neill; Jonathan Brennan
New approaches to understanding language and reading acquisition propose that the human brain's ability to synchronize its neural firing rate to syllable‐length linguistic units may be important to children's ability to acquire human language. Yet, little evidence from brain imaging studies has been available to support this proposal. Here, we summarize three recent brain imaging (functional near‐infrared spectroscopy (fNIRS), functional magnetic resonance imaging (fMRI), and magnetoencephalography (MEG)) studies from our laboratories with young English‐speaking children (aged 6–12 years). In the first study (fNIRS), we used an auditory beat perception task to show that, in children, the left superior temporal gyrus (STG) responds preferentially to rhythmic beats at 1.5 Hz. In the second study (fMRI), we found correlations between children's amplitude rise–time sensitivity, phonological awareness, and brain activation in the left STG. In the third study (MEG), typically developing children outperformed children with autism spectrum disorder in extracting words from rhythmically rich foreign speech and displayed different brain activation during the learning phase. The overall findings suggest that the efficiency with which left temporal regions process slow temporal (rhythmic) information may be important for gains in language and reading proficiency. These findings carry implications for better understanding of the brain's mechanisms that support language and reading acquisition during both typical and atypical development.
Neuroreport | 2016
Jonathan Brennan; Neelima Wagley; Ioulia Kovelman; Susan M. Bowyer; Annette E. Richard; Renee Lajiness-O'Neill
Neuroscientific evidence points toward atypical auditory processing in individuals with autism spectrum disorders (ASD), and yet, the consequences of this for receptive language remain unclear. Using magnetoencephalography and a passive listening task, we test for cascading effects on speech sound processing. Children with ASD and age-matched control participants (8–12 years old) listened to nonce linguistic stimuli that either did or did not conform to the phonological rules that govern consonant sequences in English (e.g. legal ‘vimp’ vs. illegal ‘vimk’). Beamformer source analysis was used to isolate evoked responses (0.1–30 Hz) to these stimuli in the left and the right auditory cortex. Right auditory responses from participants with ASD, but not control participants, showed an attenuated response to illegal sequences relative to legal sequences that emerged around 330 ms after the onset of the critical phoneme. These results suggest that phonological processing is impacted in ASD, perhaps because of cascading effects from disrupted initial acoustic processing.
Second Language Research | 2018
Hang Wei; Julie E. Boland; Jonathan Brennan; Fang Yuan; Min Wang; Chi Zhang
Prior work has shown intriguing differences between first language (L1) and second language (L2) comprehension priming of relative clauses. We investigated English reduced relative clause priming in Chinese adult learners of English. Participants of different education levels read sentences in a self-paced, moving window paradigm. Critical sentences had a temporarily ambiguous reduced relative clause. Across lists, critical sentences were rotated, so that they occurred either as prime or as target, and had either the same or different verb as the critical sentence with which they were paired. Prime/target pairs were separated by several filler sentences, which never contained a relative clause. Mean reading times for the disambiguating region in the target sentences were faster than in the prime sentences, but only in the same-verb condition, not in the different-verb condition. This pattern of results is consistent with L1 comprehension priming research, suggesting that similar lexically specific mechanisms are involved in L1 and L2 comprehension priming of reduced relative clauses. These findings are in line with lexicalist accounts of sentence comprehension (e.g. MacDonald et al., 1994), according to which syntactic information is bound to specific words. In addition, these findings argue against theories that postulate fundamental differences in processing of L1 and L2 (e.g. Clahsen and Felser, 2006a, 2006b).
Proceedings of the Society for Computation in Linguistics | 2018
Shohini Bhattasali; John Hale; Christophe Pallier; Jonathan Brennan; Wen-Ming Luh; R. Nathan Spreng
On some level, human sentence comprehension must involve both memory retrieval and structural composition. This study differentiates these two processes using neuroimaging data collected during naturalistic listening. Retrieval is formalized in terms of “multiword expressions” while structure-building is formalized in terms of bottom-up parsing. The results most strongly implicate Anterior Temporal regions for structure-building and Precuneus Cortex for memory retrieval.
Language, Cognition and Neuroscience | 2018
Shohini Bhattasali; Murielle Fabre; Wen-Ming Luh; Hazem Al Saied; Mathieu Constant; Christophe Pallier; Jonathan Brennan; R. Nathan Spreng; John Hale
This study examines memory retrieval and syntactic composition using fMRI while participants listen to a book, The Little Prince. These two processes are quantified drawing on methods from computational linguistics. Memory retrieval is quantified via multi-word expressions that are likely to be stored as a unit, rather than built-up compositionally. Syntactic composition is quantified via bottom-up parsing that tracks tree-building work needed in composed syntactic phrases. Regression analyses localise these to spatially-distinct brain regions. Composition mainly correlates with bilateral activity in anterior temporal lobe and inferior frontal gyrus. Retrieval of stored expressions drives right-lateralised activation in the precuneus. Less cohesive expressions activate well-known nodes of the language network implicated in composition. These results help to detail the neuroanatomical bases of two widely-assumed cognitive operations in language processing.
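The bottom-up parsing predictor used here and in the preceding study can be made concrete: each word is charged for the tree-building work completed at that word, i.e. the constituents whose right edge it closes. The sketch below applies one illustrative counting convention (one step per word plus one per closing bracket) to a labeled bracketing; the papers' actual node-count definitions may differ in detail.

```python
import re

def bottomup_counts(bracketed):
    """Per-word bottom-up parser-step counts from a labeled bracketing:
    one step for the word itself, plus one for every constituent whose
    right edge that word closes (the ')' tokens that follow it)."""
    tokens = re.findall(r"[()]|[^()\s]+", bracketed)
    counts, prev_open = [], False
    for tok in tokens:
        if tok == "(":
            prev_open = True
        elif tok == ")":
            if counts:
                counts[-1] += 1   # a constituent completes at the last word
            prev_open = False
        else:
            if prev_open:
                prev_open = False  # category label, not a terminal word
            else:
                counts.append(1)   # terminal word
    return counts

# "ran" closes both the VP and the S, so it carries the most work.
counts = bottomup_counts("(S (NP the dog) (VP ran))")
```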
Autism Research | 2018
Renee Lajiness-O'Neill; Jonathan Brennan; John E. Moran; Annette E. Richard; Ana Mercedes Flores; Casey Swick; Ryan Goodcase; Tiffany Andersen; Kaitlyn McFarlane; Kenneth W. Rusiniak; Ioulia Kovelman; Neelima Wagley; Maggie Ugolini; Jeremy J. Albright; Susan M. Bowyer
Disrupted neural synchrony may be a primary electrophysiological abnormality in autism spectrum disorders (ASD), altering communication between discrete brain regions and contributing to abnormalities in patterns of connectivity within identified neural networks. Studies exploring brain dynamics to comprehensively characterize and link connectivity to large‐scale cortical networks and clinical symptoms are lagging considerably. Patterns of neural coherence within the Default Mode Network (DMN) and Salience Network (SN) during resting state were investigated in 12 children with ASD (MAge = 9.2) and 13 age- and gender‐matched neurotypicals (NT) (MAge = 9.3) with magnetoencephalography. Coherence between 231 brain region pairs within four frequency bands (theta (4–7 Hz), alpha (8–12 Hz), beta (13–30 Hz), and gamma (30–80 Hz)) was calculated. Relationships between neural coherence and social functioning were examined. ASD was characterized by lower synchronization across all frequencies, reaching clinical significance in the gamma band. Lower gamma synchrony between fronto‐temporo‐parietal regions was observed, partially consistent with diminished default mode network (DMN) connectivity. Lower gamma coherence in ASD was evident in cross‐hemispheric connections between: angular with inferior/middle frontal; middle temporal with middle/inferior frontal; and within right‐hemispheric connections between angular, middle temporal, and inferior/middle frontal cortices. Lower gamma coherence between left angular and left superior frontal, right inferior/middle frontal, and right precuneus and between right angular and inferior/middle frontal cortices was related to lower social/social‐communication functioning. Results suggest a pattern of lower gamma band coherence in a subset of regions within the DMN in ASD (angular and middle temporal cortical areas) related to lower social/social‐communicative functioning.
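The coherence measure computed between region pairs in this study has a standard definition: magnitude-squared coherence, the squared magnitude of the averaged cross-spectrum normalized by the two auto-spectra, which ranges from 0 to 1. Below is a minimal sketch at a single frequency from hypothetical per-segment Fourier coefficients (Welch-style averaging); the study's MEG pipeline involves source localization and band averaging beyond this.

```python
def msc(seg_spectra_x, seg_spectra_y):
    """Magnitude-squared coherence at one frequency bin, from complex
    Fourier coefficients of corresponding segments of two signals.
    Identical (or linearly scaled) signals yield coherence 1."""
    sxy = sum(x * y.conjugate() for x, y in zip(seg_spectra_x, seg_spectra_y))
    sxx = sum(abs(x) ** 2 for x in seg_spectra_x)
    syy = sum(abs(y) ** 2 for y in seg_spectra_y)
    return abs(sxy) ** 2 / (sxx * syy)

# Hypothetical per-segment coefficients for two "regions".
region_a = [1 + 1j, 2 - 1j, 0.5 + 0j]
region_b = [2 * v for v in region_a]   # perfectly coupled copy
```

By the Cauchy-Schwarz inequality the result is bounded by 1, which is what makes coherence comparable across region pairs and frequency bands.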