Publication


Featured research published by Karen Emmorey.


Brain and Language | 1997

Neural Systems Mediating American Sign Language: Effects of Sensory Experience and Age of Acquisition

Helen J. Neville; Sharon A. Coffey; Donald S. Lawson; Andrew Fischer; Karen Emmorey; Ursula Bellugi

ERPs were recorded from deaf and hearing native signers and from hearing subjects who acquired ASL late or not at all as they viewed ASL signs that formed sentences. The results were compared across these groups and with those from hearing subjects reading English sentences. The results suggest that there are constraints on the organization of the neural systems that mediate formal languages and that these are independent of the modality through which language is acquired. These include different specializations of anterior and posterior cortical regions in aspects of grammatical and semantic processing and a bias for the left hemisphere to mediate aspects of mnemonic functions in language. Additionally, the results suggest that the nature and timing of sensory and language experience significantly impact the development of the language systems of the brain. Effects of the early acquisition of ASL include an increased role for the right hemisphere and for parietal cortex and this occurs in both hearing and deaf native signers. An increased role of posterior temporal and occipital areas occurs in deaf native signers only and thus may be attributable to auditory deprivation.


Proceedings of the National Academy of Sciences of the United States of America | 2003

A morphometric analysis of auditory brain regions in congenitally deaf adults.

Karen Emmorey; John S. Allen; Joel Bruss; Natalie Schenker; Hanna Damasio

We investigated whether variation in auditory experience in humans during development alters the macroscopic neuroanatomy of primary or auditory association cortices. Volumetric analyses were based on MRI data from 25 congenitally deaf subjects and 25 hearing subjects, all right-handed. The groups were matched for gender and age. Gray and white matter volumes were determined for the temporal lobe, superior temporal gyrus, Heschl's gyrus (HG), and the planum temporale. Deaf and hearing subjects did not differ in the total volume or the gray matter volume of HG, which suggests that auditory deafferentation does not lead to cell loss within primary auditory cortex in humans. However, deaf subjects had significantly larger gray matter–white matter ratios than hearing subjects in HG, with deaf subjects exhibiting significantly less white matter in both left and right HG. Deaf subjects also had higher gray matter–white matter ratios in the rest of the superior temporal gyrus, but this pattern was not observed for the temporal lobe as a whole. These findings suggest that auditory deprivation from birth results in less myelination and/or fewer fibers projecting to and from auditory cortices. Finally, the volumes of planum temporale and HG were significantly larger in the left hemisphere for both groups, suggesting that leftward asymmetries within “auditory” cortices do not arise from experience with auditory processing.


Brain and Language | 1987

The neurological substrates for prosodic aspects of speech.

Karen Emmorey

The ability to comprehend and produce the stress contrast between noun compounds and noun phrases (e.g., greenhouse vs. green house) was examined for 8 nonfluent aphasics, 7 fluent aphasics, 7 right hemisphere damaged (RHD) patients, and 22 normal controls. The aphasics performed worse than normal controls on the comprehension task, and the RHD group performed as well as normals. The ability to produce stress contrasts was tested with a sentence-reading task; acoustic measurements revealed that no nonfluent aphasic used pitch to distinguish noun compounds from phrases, but two used duration. All but one of the RHD patients and all but one of the normals produced pitch and/or duration cues. These results suggest that linguistic prosody is processed by the left hemisphere and that with brain damage the ability to produce pitch and duration cues may be dissociated at the lexical level.


The Journal of Neuroscience | 2012

Towards a new neurobiology of language.

David Poeppel; Karen Emmorey; Gregory Hickok; Liina Pylkkänen

Theoretical advances in language research and the availability of increasingly high-resolution experimental techniques in the cognitive neurosciences are profoundly changing how we investigate and conceive of the neural basis of speech and language processing. Recent work closely aligns language research with issues at the core of systems neuroscience, ranging from neurophysiological and neuroanatomic characterizations to questions about neural coding. Here we highlight, across different aspects of language processing (perception, production, sign language, meaning construction), new insights and approaches to the neurobiology of language, aiming to describe promising new areas of investigation in which the neurosciences intersect with linguistic research more closely than before. This paper summarizes in brief some of the issues that constitute the background for talks presented in a symposium at the Annual Meeting of the Society for Neuroscience. It is not a comprehensive review of any of the issues that are discussed in the symposium.


Perceptual and Motor Skills | 1990

Lexical Recognition in Sign Language: Effects of Phonetic Structure and Morphology

Karen Emmorey; David P. Corina

Two experiments are reported which investigate lexical recognition in American Sign Language (ASL). Exp. 1 examined identification of monomorphemic signs and investigated how the manipulation of phonological parameters affected sign identification. Overall sign identification was much faster than what has been found for spoken language. The phonetic structure of sign (the simultaneous availability of Handshape and Location information) and the phonotactics of the ASL lexicon are argued to account for this difference. Exp. 2 compared the time course of recognition for monomorphemic and morphologically complex signs. ASL morphology is largely nonconcatenative, which raises particularly interesting questions for word recognition. We found that morphologically complex signs had longer identification times than matched monomorphemic signs. Also, although roots and affixes are often articulated simultaneously in ASL, they were not identified simultaneously. Base forms of morphologically complex signs were identified initially, followed by recognition of the morphological inflection. Finally, subjects with deaf parents (Native signers) were able to isolate signs faster than subjects with hearing parents (Late signers). This result suggests that early language experience can influence the initial stages of lexical access and sign identification.


NeuroImage | 2002

Neural Systems Underlying Spatial Language in American Sign Language

Karen Emmorey; Hanna Damasio; Stephen McCullough; Thomas J. Grabowski; Laura L. Boles Ponto; Richard D. Hichwa; Ursula Bellugi

A [¹⁵O]water PET experiment was conducted to investigate the neural regions engaged in processing constructions unique to signed languages: classifier predicates in which the position of the hands in signing space schematically represents spatial relations among objects. Ten deaf native signers viewed line drawings depicting a spatial relation between two objects (e.g., a cup on a table) and were asked either to produce a classifier construction or an American Sign Language (ASL) preposition that described the spatial relation or to name the figure object (colored red). Compared to naming objects, describing spatial relationships with classifier constructions engaged the supramarginal gyrus (SMG) within both hemispheres. Compared to naming objects, naming spatial relations with ASL prepositions engaged only the right SMG. Previous research indicates that retrieval of English prepositions engages both right and left SMG, but more inferiorly than for ASL classifier constructions. Compared to ASL prepositions, naming spatial relations with classifier constructions engaged left inferior temporal (IT) cortex, a region activated when naming concrete objects in either ASL or English. Left IT may be engaged because the handshapes in classifier constructions encode information about object type (e.g., flat surface). Overall, the results suggest more right hemisphere involvement when expressing spatial relations in ASL, perhaps because signing space is used to encode the spatial relationship between objects.


Memory & Cognition | 1997

A visuospatial “phonological loop” in working memory: Evidence from American Sign Language

Margaret Wilson; Karen Emmorey

In two experiments, the question of whether working memory could support an articulatory rehearsal loop in the visuospatial domain was investigated. Deaf subjects fluent in American Sign Language (ASL) were tested on immediate serial recall. In Experiment 1, using ASL stimuli, evidence for manual motoric coding (worse recall under articulatory suppression) was found, replicating findings of ASL-based phonological coding (worse recall for phonologically similar lists). The two effects did not interact, suggesting separate components which both contribute to performance. Stimuli in Experiment 2 were namable pictures, which had to be recoded for ASL-based rehearsal to occur. Under these conditions, articulatory suppression eliminated the phonological similarity effect. Thus, an articulatory process seems to be used in translating pictures into a phonological code for memory maintenance. These results indicate a configuration of components similar to the phonological loop for speech, suggesting that working memory can develop a language-based rehearsal loop in the visuospatial modality.


Spatial Cognition and Computation | 2000

Using space to describe space: Perspective in speech, sign, and gesture

Karen Emmorey; Barbara Tversky; Holly A. Taylor

Describing the location of a landmark in a scene typically requires taking a perspective. Descriptions of scenes with several landmarks use either a route perspective, where the viewpoint is within the scene, or a survey perspective, where the viewpoint is outside, or a mixture of both. Parallel to this, American Sign Language (ASL) uses two spatial formats: viewer space, in which the described space is conceived of as in front of the speaker, or diagrammatic space, in which the described space is conceived of as from outside, usually above. In the present study, speakers of English or ASL described one of two memorized maps. ASL signers were more likely to adopt a survey perspective than English speakers, indicating that language modality can influence perspective choice. In ASL, descriptions from a survey perspective used diagrammatic space, whereas descriptions from a route perspective used viewer space. In English, iconic gestures accompanying route descriptions used the full 3-D space, similar to viewer space, while gestures accompanying survey descriptions used a 2-D horizontal or vertical plane similar to diagrammatic space. Thus, the two modes of experiencing environments, from within and from without, are expressed naturally in speech, sign, and gesture.


Journal of Cognitive Neuroscience | 2010

Modulation of BOLD response in motion-sensitive lateral temporal cortex by real and fictive motion sentences

Ayse Pinar Saygin; Stephen McCullough; Morana Alač; Karen Emmorey

Can linguistic semantics affect neural processing in feature-specific visual regions? Specifically, when we hear a sentence describing a situation that includes motion, do we engage neural processes that are part of the visual perception of motion? How about if a motion verb was used figuratively, not literally? We used fMRI to investigate whether semantic content can “penetrate” and modulate neural populations that are selective to specific visual properties during natural language comprehension. Participants were presented audiovisually with three kinds of sentences: motion sentences (“The wild horse crossed the barren field.”), static sentences (“The black horse stood in the barren field.”), and fictive motion sentences (“The hiking trail crossed the barren field.”). Motion-sensitive visual areas (MT+) were localized individually in each participant, as well as face-selective visual regions (fusiform face area; FFA). MT+ was activated significantly more for motion sentences than the other sentence types. Fictive motion sentences also activated MT+ more than the static sentences. Importantly, no modulation of neural responses was found in FFA. Our findings suggest that the neural substrates of linguistic semantics include early visual areas specifically related to the represented semantics and that figurative uses of motion verbs also engage these neural systems, but to a lesser extent. These data are consistent with a view of language comprehension as an embodied process, with neural substrates as far reaching as early sensory brain areas that are specifically related to the represented semantics.


Memory & Cognition | 1998

A “word length effect” for sign language: Further evidence for the role of language in structuring working memory

Margaret Wilson; Karen Emmorey

We report a sign length effect in deaf users of American Sign Language that is analogous to the word length effect for speech. Lists containing long signs (signs that traverse relatively long distances) produced poorer memory performance than did lists of short signs (signs that do not change in location). Further, this length effect was eliminated by articulatory suppression (repetitive motion of the hands), and articulatory suppression produced an overall drop in performance. The pattern of results, together with previous findings (Wilson & Emmorey, 1997), provides evidence for a working memory system for sign language that consists of a phonological storage buffer and an articulatory rehearsal mechanism. This indicates a close equivalence of structure between working memory for sign language and working memory for speech. The implications of this equivalence are discussed.

Collaboration


Dive into Karen Emmorey's collaborations.

Top Co-Authors

Stephen McCullough (Salk Institute for Biological Studies)

Ursula Bellugi (Salk Institute for Biological Studies)

Jill Weisberg (San Diego State University)

Marcel R. Giezen (San Diego State University)

Hanna Damasio (University of Iowa Hospitals and Clinics)