Publications


Featured research published by Elizabeth A. Hirshorn.


Nature Neuroscience | 2010

I see where you're hearing: how cross-modal plasticity may exploit homologous brain structures

Daphne Bavelier; Elizabeth A. Hirshorn

Sensory deprivation such as deafness or blindness leads to specific functional and neural reorganization. A new study gives insight into why and how certain abilities change, while others remain unaltered after the loss of a sense.


PLOS ONE | 2013

Lexical processing in deaf readers: an fMRI investigation of reading proficiency.

David P. Corina; Laurel A. Lawyer; Peter Hauser; Elizabeth A. Hirshorn

Individuals with significant hearing loss often fail to attain competency in reading orthographic scripts which encode the sound properties of spoken language. Nevertheless, some profoundly deaf individuals do learn to read at age-appropriate levels. The question of what differentiates proficient deaf readers from less-proficient readers is poorly understood but topical, as efforts to develop appropriate and effective interventions are needed. This study uses functional magnetic resonance imaging (fMRI) to examine brain activation in deaf readers (N = 21), comparing proficient (N = 11) and less proficient (N = 10) readers’ performance in a widely used test of implicit reading. Proficient deaf readers activated left inferior frontal gyrus and left middle and superior temporal gyrus in a pattern that is consistent with regions reported in hearing readers. In contrast, the less-proficient readers exhibited a pattern of response characterized by inferior and middle frontal lobe activation (right>left) which bears some similarity to areas reported in studies of logographic reading, raising the possibility that these individuals are using a qualitatively different mode of orthographic processing than is traditionally observed in hearing individuals reading sound-based scripts. The evaluation of proficient and less-proficient readers points to different modes of processing printed English words. Importantly, these preliminary findings allow us to begin to establish the impact of linguistic and educational factors on the neural systems that underlie reading achievement in profoundly deaf individuals.


Proceedings of the National Academy of Sciences of the United States of America | 2016

Decoding and disrupting left midfusiform gyrus activity during word reading

Elizabeth A. Hirshorn; Yuanning Li; Michael Ward; R. Mark Richardson; Julie A. Fiez; Avniel Singh Ghuman

Significance: A central issue in the neurobiology of reading is a debate regarding the visual representation of words, particularly in the left midfusiform gyrus (lmFG). Direct neural recordings, electrical brain stimulation, and pre-/postsurgical neuropsychological testing provided strong evidence that the lmFG supports an orthographically specific “visual word form” system that becomes specialized for the representation of orthographic knowledge. Machine learning elucidated the dynamic role the lmFG plays, with an early processing stage organized by orthographic similarity and a later stage supporting the individuation of single words. The results suggest that there is a dynamic shift from gist-level to individuated orthographic representation in the lmFG in service of visual word recognition.

The nature of the visual representation for words has been fiercely debated for over 150 years. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing: an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation.
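As a rough illustration of what time-resolved decoding of this kind involves, the sketch below classifies word identity from simulated multichannel recordings in sliding time windows and compares an early-window neural dissimilarity structure against a stand-in orthographic-similarity model. It is not the authors' pipeline: the data are simulated, and the window length, classifier, cross-validation scheme, and placeholder orthographic model are all illustrative assumptions.

```python
# Minimal sketch of time-resolved decoding of word identity from intracranial-style
# recordings, in the spirit of the lmFG analysis described above. NOT the authors'
# pipeline: data are simulated; window size, classifier, and CV are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

n_trials, n_channels, n_times = 200, 8, 100   # hypothetical trials x electrodes x samples
n_words = 4
labels = np.repeat(np.arange(n_words), n_trials // n_words)

# Simulated data: word-specific signal emerges after "stimulus onset" (sample 40).
signal = rng.normal(0, 1, (n_trials, n_channels, n_times))
for w in range(n_words):
    signal[labels == w, :, 40:] += rng.normal(0, 1, (n_channels, 1)) * 0.8

# Time-resolved decoding: classify word identity in sliding 20-sample windows.
window = 20
for start in range(0, n_times - window + 1, window):
    X = signal[:, :, start:start + window].reshape(n_trials, -1)
    acc = cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean()
    print(f"window {start}-{start + window}: decoding accuracy = {acc:.2f}")

# Representational similarity: compare the neural dissimilarity structure in an
# early window against a hypothetical orthographic-similarity model (e.g., letter
# overlap between words), as one way to ask whether early activity is organized
# by orthography.
early = signal[:, :, 40:60].reshape(n_trials, -1)
word_means = np.array([early[labels == w].mean(axis=0) for w in range(n_words)])
neural_rdm = pdist(word_means, metric="correlation")
ortho_model_rdm = rng.random(neural_rdm.shape)   # stand-in for a real orthographic model
rho, p = spearmanr(neural_rdm, ortho_model_rdm)
print(f"neural vs. orthographic model RDM: rho = {rho:.2f}, p = {p:.3f}")
```

In a real analysis the stand-in model would be replaced by a dissimilarity matrix derived from the actual word stimuli (e.g., letter overlap), and decoding would be evaluated against chance with an appropriate permutation test.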


Frontiers in Psychology | 2015

The contribution of phonological knowledge, memory, and language background to reading comprehension in deaf populations

Elizabeth A. Hirshorn; Matthew W. G. Dye; Peter C. Hauser; Ted Supalla; Daphne Bavelier

While reading is challenging for many deaf individuals, some become proficient readers. Little is known about the component processes that support reading comprehension in these individuals. Speech-based phonological knowledge is one of the strongest predictors of reading comprehension in hearing individuals, yet its role in deaf readers is controversial. This could reflect the highly varied language backgrounds among deaf readers, as well as the difficulty of disentangling the relative contributions of phonological versus orthographic knowledge of the spoken language (in this case English) in this population. Here we assessed the impact of language experience on reading comprehension in deaf readers by recruiting oral deaf individuals, who use spoken English as their primary mode of communication, and deaf native signers of American Sign Language. First, to address the contribution of spoken English phonological knowledge in deaf readers, we present novel tasks that evaluate phonological versus orthographic knowledge. Second, we evaluated the impact of this knowledge, as well as of memory measures that rely differentially on phonological (serial recall) and semantic (free recall) processing, on reading comprehension. The best predictor of reading comprehension differed as a function of language experience, with free recall being a better predictor in deaf native signers than in oral deaf individuals. In contrast, measures of English phonological knowledge, independent of orthographic knowledge, best predicted reading comprehension in oral deaf individuals. These results suggest that successful reading strategies differ across deaf readers as a function of their language experience, and they highlight a possible alternative route to literacy in deaf native signers.

Highlights:
1. Deaf individuals vary in their orthographic and phonological knowledge of English as a function of their language experience.
2. Reading comprehension was best predicted by different factors in oral deaf individuals and deaf native signers.
3. Free recall memory (primacy effect) better predicted reading comprehension in deaf native signers than in oral deaf or hearing individuals.
4. Language experience should be taken into account when considering the cognitive processes that mediate reading in deaf individuals.


Cognitive Neuropsychology | 2012

Routes to short-term memory indexing: Lessons from deaf native users of American Sign Language

Elizabeth A. Hirshorn; Nina Fernandez; Daphne Bavelier

Models of working memory (WM) have been instrumental in understanding foundational cognitive processes and sources of individual differences. However, current models cannot conclusively explain the consistent group differences between deaf signers and hearing speakers on a number of short-term memory (STM) tasks. Here we take the perspective that these results are not due to a temporal order-processing deficit in deaf individuals, but rather reflect different biases in how different types of memory cues are used to do a given task. We further argue that the main driving force behind the shifts in relative biasing is a consequence of language modality (sign vs. speech) and the processing they afford, and not deafness, per se.


Journal of Neurolinguistics | 2014

Using artificial orthographies for studying cross-linguistic differences in the cognitive and neural profiles of reading

Elizabeth A. Hirshorn; Julie A. Fiez

Reading and writing are cultural inventions that have become vital skills to master in modern society. Unfortunately, writing systems are not equally learnable and many individuals struggle to become proficient readers. Languages and their writing systems often have co-varying characteristics, due to both psycholinguistic and socio-cultural forces. This makes it difficult to determine the source of cross-linguistic differences in reading and writing. Nonetheless, it is important to make progress on this issue: a more precise understanding of the factors that affect reading disparities should improve reading instruction theory and practice, and the diagnosis and treatment of reading disorders. In this review, we consider the value of artificial orthographies as a tool for unpacking the factors that create cognitive and neural differences in reading acquisition and skill. We do so by focusing on one dimension that differs among writing systems: grain size. Grain size, or the unit of spoken language that is mapped onto a visual graph, is thought to affect learning, but its impact is still not well understood. We review relevant literature about cross-linguistic writing system differences, the benefits of using artificial orthographies as a research tool, and our recent work with an artificial alphasyllabic writing system for English. We conclude that artificial orthographies can be used to elucidate cross-linguistic principles that affect reading and writing.


Frontiers in Human Neuroscience | 2014

Neural networks mediating sentence reading in the deaf

Elizabeth A. Hirshorn; Matthew W. G. Dye; Peter C. Hauser; Ted Supalla; Daphne Bavelier

The present work addresses the neural bases of sentence reading in deaf populations. To better understand the relative roles of deafness and spoken language knowledge in shaping the neural networks that mediate sentence reading, three populations with different degrees of English knowledge and depth of hearing loss were included: deaf signers, oral deaf, and hearing individuals. The three groups were matched for reading comprehension and scanned while reading sentences. A similar neural network of left perisylvian areas was observed across groups, supporting the view of a shared network of areas for reading despite differences in hearing and English knowledge. However, differences were observed, in particular in the auditory cortex, with deaf signers and oral deaf showing the greatest bilateral superior temporal gyrus (STG) recruitment as compared to hearing individuals. Importantly, within deaf individuals, the same STG area in the left hemisphere showed greater recruitment as hearing loss increased. To further understand the functional role of such auditory cortex reorganization after deafness, connectivity analyses were performed from the STG regions identified above. Connectivity from the left STG toward areas typically associated with semantic processing (BA45 and thalami) was greater in deaf signers and in oral deaf as compared to hearing individuals. In contrast, connectivity from the left STG toward areas identified with speech-based processing was greater in hearing and in oral deaf individuals as compared to deaf signers. These results support the growing literature indicating recruitment of auditory areas after congenital deafness for visually mediated language functions, and they establish that both auditory deprivation and language experience shape its functional reorganization. Implications for the differential reliance on semantic versus phonological pathways during reading in the three groups are discussed.
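For readers unfamiliar with seed-based connectivity, the minimal sketch below shows one common way such an analysis can be set up: correlate a seed region's time course with a target region's time course for each subject, Fisher-transform the correlations, and compare group means. The data, region labels, coupling values, and group sizes are simulated assumptions, not the study's actual method (which may have used a different connectivity model) or results.

```python
# Minimal sketch of a seed-based connectivity comparison, in the spirit of the
# left-STG connectivity analysis described above. Simulated data only; group
# sizes, coupling strengths, and region names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints = 180          # hypothetical fMRI volumes per subject
groups = {"deaf signers": 12, "oral deaf": 12, "hearing": 12}

def seed_connectivity(n_subjects, coupling):
    """Per-subject correlation between a left-STG seed and a target region
    (e.g., BA45), simulated with a given level of shared signal."""
    rs = []
    for _ in range(n_subjects):
        shared = rng.normal(0, 1, n_timepoints)
        seed = shared * coupling + rng.normal(0, 1, n_timepoints)
        target = shared * coupling + rng.normal(0, 1, n_timepoints)
        rs.append(np.corrcoef(seed, target)[0, 1])
    # Fisher z-transform so connectivity values can be averaged and compared.
    return np.arctanh(rs)

# Illustrative group-level comparison of left STG -> BA45 connectivity.
coupling_by_group = {"deaf signers": 0.8, "oral deaf": 0.7, "hearing": 0.4}
for group, n in groups.items():
    z = seed_connectivity(n, coupling_by_group[group])
    print(f"{group}: mean Fisher-z connectivity = {z.mean():.2f}")
```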


Journal of Cognitive Neuroscience | 2016

Fusiform gyrus laterality in writing systems with different mapping principles: An artificial orthography training study

Elizabeth A. Hirshorn; Alaina Wrencher; Corrine Durisko; Michelle W. Moore; Julie A. Fiez

Writing systems vary in many ways, making it difficult to account for cross-linguistic neural differences. For example, orthographic processing of Chinese characters activates the mid-fusiform gyri (mFG) bilaterally, whereas the processing of English words predominantly activates the left mFG. Because Chinese and English vary in visual processing (holistic vs. analytical) and linguistic mapping principle (morphosyllabic vs. alphabetic), either factor could account for mFG laterality differences. We used artificial orthographies representing English to investigate the effect of mapping principle on mFG lateralization. The fMRI data were compared for two groups that acquired foundational proficiency: one for an alphabetic and one for an alphasyllabic artificial orthography. Greater bilateral mFG activation was observed in the alphasyllabic versus alphabetic group. The degree of bilaterality correlated with reading fluency for the learned orthography in the alphasyllabic but not alphabetic group. The results suggest that writing systems with a syllable-based mapping principle recruit bilateral mFG to support orthographic processing. Implications for individuals with left mFG dysfunction will be discussed.
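The degree of bilaterality in studies like this is commonly quantified with a laterality index, LI = (L - R) / (L + R), where L and R are left- and right-hemisphere activation magnitudes. The sketch below computes this index per subject and correlates it with fluency scores; the values are simulated assumptions, not data from the study, and the paper's exact analysis may differ.

```python
# Minimal sketch of quantifying mid-fusiform laterality and relating it to reading
# fluency via the standard laterality index LI = (L - R) / (L + R). Illustrative
# simulated values only; not the study's actual data or analysis.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_subjects = 20

# Hypothetical mFG activation magnitudes (e.g., mean beta values) per hemisphere.
left_mfg = rng.uniform(0.5, 2.0, n_subjects)
right_mfg = rng.uniform(0.2, 2.0, n_subjects)

# Laterality index: +1 = fully left-lateralized, 0 = bilateral, -1 = right-lateralized.
li = (left_mfg - right_mfg) / (left_mfg + right_mfg)

# Hypothetical fluency scores; here greater bilaterality (lower |LI|) is made to
# track fluency, mirroring the alphasyllabic-group result described above.
fluency = 100 - 30 * np.abs(li) + rng.normal(0, 5, n_subjects)

r, p = pearsonr(np.abs(li), fluency)
print(f"correlation between |LI| and fluency: r = {r:.2f}, p = {p:.3f}")
```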


Journal of Neurolinguistics | 2018

Do adults acquire a second orthography using their native reading network?

Lea Martin; Elizabeth A. Hirshorn; Corrine Durisko; Michelle W. Moore; Robert M. Schwartz; Yihao Zheng; Julie A. Fiez

Adult second language learners typically aim to acquire both spoken and written proficiency in the second language (L2). It is widely assumed that adults fully retain the capacity they used to acquire literacy as children for their native language (L1). However, given basic principles of neural plasticity and a limited body of empirical evidence, this assumption merits investigation. Accordingly, the current work used an artificial orthography approach to investigate behavioral and neural measures of learning as adult participants acquired a second orthography for English across six weeks of training. One group learned HouseFont, an alphabetic system in which house images are used to represent English phonemes, and the other learned Faceabary, an alphasyllabic system in which face images are used to represent English syllables. The findings demonstrate that adults have considerable capacity to learn a second orthography, even when it involves perceptually atypical graphs, as evidenced by performance improvements that were sustained across weeks of training. They also demonstrate that this learning involves assimilation into the same reading network that supports native literacy, as evidenced by learning-related changes in orthographic, phonological, and semantic regions associated with English word reading. Additionally, we found learning patterns that varied across the two orthographies. Faceabary induced bilateral learning effects in the mid-fusiform gyrus and showed greater engagement of regions associated with semantic processing. We speculate that the large graph inventories of non-alphabetic systems may create visual-perceptual challenges that increase reliance on holistic reading procedures and associated neural substrates.


Journal of Neurolinguistics | 2018

Chinese-English bilinguals transfer L1 lexical reading procedures and holistic orthographic coding to L2 English

Gal Ben-Yehudah; Elizabeth A. Hirshorn; Travis Simcox; Charles A. Perfetti; Julie A. Fiez

Collaboration


Dive into Elizabeth A. Hirshorn's collaborations.

Top Co-Authors

Julie A. Fiez
University of Pittsburgh

Michael Ward
University of Pittsburgh

Peter C. Hauser
National Technical Institute for the Deaf

Ted Supalla
University of Rochester

Yuanning Li
University of Pittsburgh