Aaron J. Newman
Dalhousie University
Publication
Featured research published by Aaron J. Newman.
Journal of Psycholinguistic Research | 2001
Aaron J. Newman; Roumyana Pancheva; Kaori Ozawa; Helen J. Neville; Michael T. Ullman
We used event-related functional magnetic resonance imaging to identify brain regions involved in syntactic and semantic processing. Healthy adult males read well-formed sentences randomly intermixed with sentences which either contained violations of syntactic structure or were semantically implausible. Reading anomalous sentences, as compared to well-formed sentences, yielded distinct patterns of activation for the two violation types. Syntactic violations elicited significantly greater activation than semantic violations primarily in superior frontal cortex. Semantically incongruent sentences elicited greater activation than syntactic violations in the left hippocampal and parahippocampal gyri, the angular gyri bilaterally, the right middle temporal gyrus, and the left inferior frontal sulcus. These results demonstrate that syntactic and semantic processing result in nonidentical patterns of activation, including greater frontal engagement during syntactic processing and larger increases in temporal and temporo-parietal regions during semantic analyses.
Journal of Cognitive Neuroscience | 2012
Aaron J. Newman; Antoine Tremblay; Emily S. Nichols; Helen J. Neville; Michael T. Ullman
We investigated the influence of English proficiency on ERPs elicited by lexical semantic violations in English sentences, in both native English speakers and native Spanish speakers who learned English in adulthood. All participants were administered a standardized test of English proficiency, and data were analyzed using linear mixed-effects (LME) modeling. Relative to native speakers, late learners showed reduced amplitude and delayed onset of the N400 component associated with reading semantic violations. In addition, following the N400, late learners showed reduced anterior negative scalp potentials and increased posterior potentials. In both native and late learners, N400 amplitudes to semantically appropriate words were larger for people with lower English proficiency. N400 amplitudes to semantic violations, however, were not influenced by proficiency. Although both N400 onset latency and the late ERP effects differed between L1 and L2 learners, neither correlated with proficiency. Different approaches to dealing with the high degree of correlation between proficiency and native/late learner group status are discussed in the context of LME modeling. The results thus indicate that proficiency can modulate ERP effects in both L1 and L2 learners, and for some measures (in this case, N400 amplitude), L1–L2 differences may be entirely accounted for by proficiency. On the other hand, not all effects of L2 learning can be attributed to proficiency; rather, the differences in N400 onset and the post-N400 violation effects appear to reflect fundamental differences in L1–L2 processing.
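The LME strategy described above, in which ERP amplitudes are modeled with fixed effects for group and proficiency plus by-subject random effects, can be sketched in code. The example below is a minimal illustration on simulated data, not the authors' actual analysis: the variable names (n400, proficiency, group) and the residualization step used to handle the group/proficiency correlation are assumptions chosen purely for demonstration.

```python
# Minimal sketch of an LME analysis of ERP amplitudes (simulated data).
# Variable names and the residualization approach are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_trials = 40, 30
subj = np.repeat(np.arange(n_subj), n_trials)

# Group membership (0 = native/L1, 1 = late/L2) and a proficiency score that
# is correlated with group, mimicking the collinearity problem discussed above.
subj_group = rng.integers(0, 2, n_subj)
subj_prof = rng.normal(100 - 10 * subj_group, 8)

# Simulated single-trial N400 amplitudes: a proficiency effect plus
# by-subject intercepts and trial-level noise.
subj_intercept = rng.normal(0, 1.5, n_subj)
n400 = (-2.0
        + 0.03 * (subj_prof[subj] - 100)
        + subj_intercept[subj]
        + rng.normal(0, 2.0, subj.size))

df = pd.DataFrame({"subject": subj,
                   "group": subj_group[subj],
                   "proficiency": subj_prof[subj],
                   "n400": n400})

# One way to separate group and proficiency effects: residualize proficiency
# against group before entering both predictors (one of several possible
# strategies; shown here only as an illustration).
df["prof_resid"] = smf.ols("proficiency ~ C(group)", data=df).fit().resid

model = smf.mixedlm("n400 ~ C(group) + prof_resid", data=df,
                    groups=df["subject"]).fit()
print(model.summary())
```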
Proceedings of the National Academy of Sciences of the United States of America | 2009
Cheryl M. Capek; Giordana Grossi; Aaron J. Newman; Susan Lloyd Mcburney; David P. Corina; Brigitte Roeder; Helen J. Neville
Studies of written and spoken language suggest that nonidentical brain networks support semantic and syntactic processing. Event-related brain potential (ERP) studies of spoken and written languages show that semantic anomalies elicit a posterior bilateral N400, whereas syntactic anomalies elicit a left anterior negativity, followed by a broadly distributed late positivity. The present study assessed whether these ERP indicators index the activity of language systems specific to the processing of aural-oral language, or whether they index neural systems underlying any natural language, including sign language. The syntax of a signed language is mediated through space, raising the question of whether the comprehension of a signed language requires neural systems specific to this kind of code. Deaf native users of American Sign Language (ASL) were presented with signed sentences that were either correct or contained either a semantic or a syntactic error (one of two types of verb agreement error). ASL sentences were presented at the natural rate of signing while the electroencephalogram was recorded. As predicted on the basis of earlier studies, an N400 was elicited by semantic violations. In addition, signed syntactic violations elicited an early frontal negativity and a later posterior positivity. Crucially, the distribution of the anterior negativity varied as a function of the type of syntactic violation, suggesting a unique involvement of spatial processing in signed syntax. Together, these findings suggest that biological constraints and experience shape the development of neural systems important for language.
Psychophysiology | 2015
Antoine Tremblay; Aaron J. Newman
In the analysis of psychological and psychophysiological data, the relationship between two variables is often assumed to be a straight line. This may be due to the prevalence of the general linear model in data analysis in these fields, which makes this assumption implicitly. However, there are many problems for which this assumption does not hold. In this paper, we show that, in the analysis of event-related potential (ERP) data, the assumption of linearity comes at a cost and may significantly affect the inferences drawn from the data. We demonstrate why the assumption of linearity should be relaxed and how to model nonlinear relationships between ERP amplitudes and predictor variables within the familiar framework of generalized linear models, using regression splines and mixed-effects modeling.
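As a rough illustration of the modeling approach described in this abstract, combining regression splines with mixed-effects modeling, the sketch below fits a linear and a spline-based mixed model to simulated data with a deliberately nonlinear predictor. The data, the variable names, and the choice of a B-spline basis with 5 degrees of freedom are assumptions for demonstration, not the paper's own analysis or example code.

```python
# Sketch: capturing a nonlinear predictor-amplitude relationship with
# regression splines inside a mixed-effects model (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_subj, n_trials = 30, 80
subj = np.repeat(np.arange(n_subj), n_trials)

# A continuous predictor (e.g., a z-scored stimulus property) and a true
# quadratic relationship that a straight line cannot capture.
x = rng.uniform(-2, 2, n_subj * n_trials)
subj_intercept = rng.normal(0, 1.0, n_subj)
amp = 1.5 * x**2 - 0.5 * x + subj_intercept[subj] + rng.normal(0, 1.5, x.size)

df = pd.DataFrame({"subject": subj, "x": x, "amp": amp})

# Fit by maximum likelihood (reml=False) so the two fixed-effects
# specifications can be compared via their log-likelihoods.
linear = smf.mixedlm("amp ~ x", data=df, groups=df["subject"]).fit(reml=False)
spline = smf.mixedlm("amp ~ bs(x, df=5)",  # patsy B-spline basis
                     data=df, groups=df["subject"]).fit(reml=False)

# The spline model should show a substantially higher log-likelihood,
# reflecting the cost of (wrongly) assuming linearity.
print(linear.llf, spline.llf)
```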
Proceedings of the National Academy of Sciences of the United States of America | 2010
Aaron J. Newman; Ted Supalla; Peter C. Hauser; Elissa L. Newport; Daphne Bavelier
An important question in understanding language processing is whether there are distinct neural mechanisms for processing specific types of grammatical structure, such as syntax versus morphology, and, if so, what the basis of the specialization might be. However, this question is difficult to study: A given language typically conveys its grammatical information in one way (e.g., English marks “who did what to whom” using word order, and German uses inflectional morphology). American Sign Language permits either device, enabling a direct within-language comparison. During functional (f)MRI, native signers viewed sentences that used only word order and sentences that included inflectional morphology. The two sentence types activated an overlapping network of brain regions, but with differential patterns. Word order sentences activated left-lateralized areas involved in working memory and lexical access, including the dorsolateral prefrontal cortex, the inferior frontal gyrus, the inferior parietal lobe, and the middle temporal gyrus. In contrast, inflectional morphology sentences activated areas involved in building and analyzing combinatorial structure, including bilateral inferior frontal and anterior temporal regions as well as the basal ganglia and medial temporal/limbic areas. These findings suggest that for a given linguistic function, neural recruitment may depend on the cognitive resources required to process specific types of linguistic cues.
Cerebral Cortex | 2008
Daphne Bavelier; Aaron J. Newman; M. Mukherjee; Peter C. Hauser; S. Kemeny; Allen R. Braun; Mrim Boutla
Short-term memory (STM), or the ability to hold verbal information in mind for a few seconds, is known to rely on the integrity of a frontoparietal network of areas. Here, we used functional magnetic resonance imaging to ask whether a similar network is engaged when verbal information is conveyed through a visuospatial language, American Sign Language, rather than speech. Deaf native signers and hearing native English speakers performed a verbal recall task, in which they had to first encode a list of letters in memory, maintain it for a few seconds, and finally recall it in the order presented. The frontoparietal network previously described as mediating STM in speakers was also observed in signers, and its recruitment appeared independent of the modality of the language. This finding supports the view that signed and spoken STM rely on similar mechanisms. However, deaf signers and hearing speakers differentially engaged key structures of the frontoparietal network as the stages of STM unfolded. In particular, deaf signers relied to a greater extent than hearing speakers on passive memory storage areas during encoding and maintenance, but on executive process areas during recall. This work opens new avenues for understanding similarities and differences in STM performance in signers and speakers.
NeuroImage | 2010
Aaron J. Newman; Ted Supalla; Peter C. Hauser; Elissa L. Newport; Daphne Bavelier
Signed languages such as American Sign Language (ASL) are natural human languages that share all of the core properties of spoken human languages but differ in the modality through which they are communicated. Neuroimaging and patient studies have suggested similar left hemisphere (LH)-dominant patterns of brain organization for signed and spoken languages, indicating that the linguistic nature of the information, rather than modality, drives brain organization for language. However, the role of the right hemisphere (RH) in sign language has been less explored. In spoken languages, the RH supports the processing of numerous types of narrative-level information, including prosody, affect, facial expression, and discourse structure. In the present fMRI study, we contrasted the processing of ASL sentences that contained these types of narrative information with similar sentences without marked narrative cues. For all sentences, Deaf native signers showed robust bilateral activation of perisylvian language cortices as well as the basal ganglia, medial frontal, and medial temporal regions. However, RH activation in the inferior frontal gyrus and superior temporal sulcus was greater for sentences containing narrative devices, including areas involved in processing narrative content in spoken languages. These results provide additional support for the claim that all natural human languages rely on a core set of LH brain regions, and extend our knowledge by showing that narrative linguistic functions typically associated with the RH in spoken languages are similarly organized in signed languages.
Neuroreport | 2001
Karsten Steinhauer; Roumyana Pancheva; Aaron J. Newman; Silvia P. Gennari; Michael T. Ullman
Nouns may refer to countable objects such as tables, or to mass entities such as rice. The mass/count distinction has been discussed in terms of both semantic and syntactic features encoded in the mental lexicon. Here we show that event-related potentials (ERPs) can reflect the processing of such lexical features, even in the absence of any feature-related violations. We demonstrate that count (vs mass) nouns elicit a frontal negativity which is independent of the N400 marker for conceptual-semantic processing, but resembles anterior negativities related to grammatical processing. This finding suggests that the brain differentiates between count and mass nouns primarily on a syntactic basis.
Brain Research | 2011
Jennifer Vannest; Elissa L. Newport; Aaron J. Newman; Daphne Bavelier
A major issue in lexical processing concerns the storage and access of lexical items. Here we make use of the base frequency effect to examine this issue. Specifically, reaction time to morphologically complex words (words made up of a base and a suffix, e.g., agree+able) typically reflects the frequency of the base element (i.e., the total frequency of all words in which agree appears) rather than surface word frequency (i.e., the frequency of agreeable itself). We term these complex words decomposable. However, a class of words termed whole-word (e.g., serenity) does not show such sensitivity to base frequency. Using an event-related fMRI design, we exploited the fact that processing low-frequency words increases BOLD activity relative to high-frequency ones, and examined effects of base frequency on brain activity for decomposable and whole-word items. Morphologically complex words, half with high and half with low base frequency, were compared to matched high- and low-frequency simple monomorphemic words using a lexical decision task. Morphologically complex words increased activation in the left inferior frontal and left superior temporal cortices relative to simple words. The only area to mirror the behavioral distinction between the decomposable and whole-word types was the thalamus. Surprisingly, most frequency-sensitive areas failed to show base frequency effects. This variety of responses to frequency and word type across brain areas supports an integrative view of multiple variables during lexical access, rather than a dichotomy between memory-based access and on-line computation. Lexical access appears best captured as the interplay of several neural processes with different sensitivities to various linguistic factors, including frequency and morphological complexity.
Proceedings of the National Academy of Sciences of the United States of America | 2015
Aaron J. Newman; Ted Supalla; Nina Fernandez; Elissa L. Newport; Daphne Bavelier
Significance: Although sign languages and nonlinguistic gesture use the same modalities, only sign languages have established vocabularies and follow grammatical principles. This is the first study (to our knowledge) to ask how the brain systems engaged by sign language differ from those used for nonlinguistic gesture matched in content, using appropriate visual controls. Signers engaged classic left-lateralized language centers when viewing both sign language and gesture; nonsigners showed activation only in areas attuned to human movement, indicating that sign language experience influences gesture perception. In signers, sign language activated left hemisphere language areas more strongly than gestural sequences. Thus, sign language constructions—even those similar to gesture—engage language-related brain systems and are not processed in the same ways that nonsigners interpret gesture.

Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: In particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual–manual modality with a nonlinguistic symbolic communicative system—gesture—further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages—supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network—demonstrating an influence of experience on the perception of nonlinguistic stimuli.