Jennifer A.F. Petrich
San Diego State University
Publications
Featured research published by Jennifer A.F. Petrich.
Journal of Deaf Studies and Deaf Education | 2012
Karen Emmorey; Jennifer A.F. Petrich
Two lexical decision experiments are reported that investigate whether the same segmentation strategies are used for reading printed English words and fingerspelled words (in American Sign Language). Experiment 1 revealed that both deaf and hearing readers performed better when written words were segmented with respect to an orthographically defined syllable (the Basic Orthographic Syllable Structure [BOSS]) than with respect to a phonologically defined syllable. Correlation analyses revealed that better deaf readers were more sensitive to orthographic syllable representations, whereas segmentation strategy did not differentiate the better hearing readers. In contrast to Experiment 1, Experiment 2 revealed better performance by deaf participants when fingerspelled words were segmented at the phonological syllable boundary. We suggest that the English mouthings that often accompany fingerspelled words promote a phonological parsing preference for fingerspelled words. In addition, fingerspelling ability was significantly correlated with reading comprehension and vocabulary skills. This pattern of results indicates that the association between fingerspelling and print for adult deaf readers is not based on shared segmentation strategies. Rather, we suggest that both good readers and good fingerspellers have established strong representations of English and that fingerspelling may aid in the development and maintenance of English vocabulary.
Journal of Deaf Studies and Deaf Education | 2013
Karen Emmorey; Jennifer A.F. Petrich; Tamar H. Gollan
The frequency-lag hypothesis proposes that bilinguals have slowed lexical retrieval relative to monolinguals and in their nondominant language relative to their dominant language, particularly for low-frequency words. These effects arise because bilinguals divide their language use between 2 languages and use their nondominant language less frequently. We conducted a picture-naming study with hearing American Sign Language (ASL)-English bilinguals (bimodal bilinguals), deaf signers, and English-speaking monolinguals. As predicted by the frequency-lag hypothesis, bimodal bilinguals were slower and less accurate, and exhibited a larger frequency effect, when naming pictures in ASL as compared with English (their dominant language) and as compared with deaf signers. For English there was no difference in naming latencies, error rates, or frequency effects for bimodal bilinguals as compared with monolinguals. Neither age of ASL acquisition nor interpreting experience affected the results; picture-naming accuracy and frequency effects were equivalent for deaf signers and English monolinguals. Larger frequency effects in ASL relative to English for bimodal bilinguals suggest that they are affected by a frequency lag in ASL. The absence of a lag for English could reflect the use of mouthing and/or code-blending, which may shield bimodal bilinguals from the lexical slowing observed for spoken language bilinguals in the dominant language.
Brain and Language | 2013
Karen Emmorey; Jill Weisberg; Stephen McCullough; Jennifer A.F. Petrich
We examined word-level reading circuits in skilled deaf readers whose primary language is American Sign Language, and hearing readers matched for reading ability (college level). During fMRI scanning, participants performed a semantic decision task (is it a concrete concept?), a phonological decision task (does it have two syllables?), and a false-font control task (is the string underlined?). The groups performed equally well on the semantic task, but hearing readers performed better on the phonological task. Semantic processing engaged similar left frontotemporal language circuits in deaf and hearing readers. However, phonological processing elicited increased neural activity in deaf relative to hearing readers in the left precentral gyrus, suggesting greater reliance on articulatory phonological codes, and in bilateral parietal cortex, suggesting increased phonological processing effort. Deaf readers also showed stronger anterior-posterior functional segregation between semantic and phonological processes in left inferior prefrontal cortex. Finally, weaker phonological decoding ability did not alter activation in the visual word form area for deaf readers.
Vision Research | 2012
Rain G. Bosworth; Jennifer A.F. Petrich; Karen R. Dobkins
To investigate differences in the effects of spatial attention between the left visual field (LVF) and the right visual field (RVF), we employed a full/poor attention paradigm with stimuli presented in the LVF vs. the RVF. In addition, to investigate differences in the effects of spatial attention between the dorsal and ventral processing streams, we obtained motion thresholds (motion coherence thresholds and fine direction discrimination thresholds) and orientation thresholds, respectively. The results showed negligible effects of attention on the orientation task in either the LVF or the RVF. In contrast, for both motion tasks there was a significant effect of attention in the LVF, but not in the RVF. These data provide psychophysical evidence for greater effects of spatial attention in the LVF/right hemisphere, specifically for motion processing in the dorsal stream.
Acta Psychologica | 2017
Karen Emmorey; Marcel R. Giezen; Jennifer A.F. Petrich; Erin Spurgeon; Lucinda O'Grady Farnady
This study investigated the relation between linguistic and spatial working memory (WM) resources and language comprehension for signed compared to spoken language. Sign languages are both linguistic and visual-spatial, and therefore provide a unique window on modality-specific versus modality-independent contributions of WM resources to language processing. Deaf users of American Sign Language (ASL), hearing monolingual English speakers, and hearing ASL-English bilinguals completed several spatial and linguistic serial recall tasks. Additionally, their comprehension of spatial and non-spatial information in ASL and spoken English narratives was assessed. Results from the linguistic serial recall tasks revealed that the often-reported advantage for speakers on linguistic short-term memory tasks does not extend to complex WM tasks with a serial recall component. For English, linguistic WM predicted retention of non-spatial information, and both linguistic and spatial WM predicted retention of spatial information. For ASL, spatial WM predicted retention of spatial (but not non-spatial) information, and linguistic WM did not predict retention of either spatial or non-spatial information. Overall, our findings argue against strong assumptions of independent domain-specific subsystems for the storage and processing of linguistic and spatial information, and furthermore suggest a less important role for serial encoding in signed than in spoken language comprehension.
Journal of Memory and Language | 2012
Karen Emmorey; Jennifer A.F. Petrich; Tamar H. Gollan
Brain and Cognition | 2013
Rain G. Bosworth; Jennifer A.F. Petrich; Karen R. Dobkins
Journal of Deaf Studies and Deaf Education | 2017
Zed Sevcikova Sehyr; Jennifer A.F. Petrich; Karen Emmorey
Cortex | 2007
Jennifer A.F. Petrich; Margaret L. Greenwald; Rita Sloan Berndt
Applied Psycholinguistics | 2018
Zed Sevcikova Sehyr; Brenda Nicodemus; Jennifer A.F. Petrich; Karen Emmorey