Stephen McCullough
San Diego State University
Publications
Featured research published by Stephen McCullough.
Journal of Cognitive Neuroscience | 2010
Ayse Pinar Saygin; Stephen McCullough; Morana Alač; Karen Emmorey
Can linguistic semantics affect neural processing in feature-specific visual regions? Specifically, when we hear a sentence describing a situation that includes motion, do we engage neural processes that are part of the visual perception of motion? How about if a motion verb was used figuratively, not literally? We used fMRI to investigate whether semantic content can “penetrate” and modulate neural populations that are selective to specific visual properties during natural language comprehension. Participants were presented audiovisually with three kinds of sentences: motion sentences (“The wild horse crossed the barren field.”), static sentences (“The black horse stood in the barren field.”), and fictive motion sentences (“The hiking trail crossed the barren field.”). Motion-sensitive visual areas (MT+) were localized individually in each participant as well as face-selective visual regions (fusiform face area; FFA). MT+ was activated significantly more for motion sentences than the other sentence types. Fictive motion sentences also activated MT+ more than the static sentences. Importantly, no modulation of neural responses was found in FFA. Our findings suggest that the neural substrates of linguistic semantics include early visual areas specifically related to the represented semantics and that figurative uses of motion verbs also engage these neural systems, but to a lesser extent. These data are consistent with a view of language comprehension as an embodied process, with neural substrates as far reaching as early sensory brain areas that are specifically related to the represented semantics.
Frontiers in Psychology | 2014
Karen Emmorey; Stephen McCullough; Sonya Mehta; Thomas J. Grabowski
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H₂¹⁵O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
Brain and Language | 2013
Karen Emmorey; Jill Weisberg; Stephen McCullough; Jennifer A.F. Petrich
We examined word-level reading circuits in skilled deaf readers whose primary language is American Sign Language, and hearing readers matched for reading ability (college level). During fMRI scanning, participants performed a semantic decision (concrete concept?), a phonological decision (two syllables?), and a false-font control task (string underlined?). The groups performed equally well on the semantic task, but hearing readers performed better on the phonological task. Semantic processing engaged similar left frontotemporal language circuits in deaf and hearing readers. However, phonological processing elicited increased neural activity in deaf, relative to hearing readers, in the left precentral gyrus, suggesting greater reliance on articulatory phonological codes, and in bilateral parietal cortex, suggesting increased phonological processing effort. Deaf readers also showed stronger anterior-posterior functional segregation between semantic and phonological processes in left inferior prefrontal cortex. Finally, weaker phonological decoding ability did not alter activation in the visual word form area for deaf readers.
Journal of Cognitive Neuroscience | 2013
Karen Emmorey; Stephen McCullough; Sonya Mehta; Laura L. Boles Ponto; Thomas J. Grabowski
Biological differences between signed and spoken languages may be most evident in the expression of spatial information. PET was used to investigate the neural substrates supporting the production of spatial language in American Sign Language as expressed by classifier constructions, in which handshape indicates object type and the location/motion of the hand iconically depicts the location/motion of a referent object. Deaf native signers performed a picture description task in which they overtly named objects or produced classifier constructions that varied in location, motion, or object type. In contrast to the expression of location and motion, the production of both lexical signs and object type classifier morphemes engaged left inferior frontal cortex and left inferior temporal cortex, supporting the hypothesis that unlike the location and motion components of a classifier construction, classifier handshapes are categorical morphemes that are retrieved via left hemisphere language regions. In addition, lexical signs engaged the anterior temporal lobes to a greater extent than classifier constructions, which we suggest reflects increased semantic processing required to name individual objects compared with simply indicating the type of object. Both location and motion classifier constructions engaged bilateral superior parietal cortex, with some evidence that the expression of static locations differentially engaged the left intraparietal sulcus. We argue that bilateral parietal activation reflects the biological underpinnings of sign language. To express spatial information, signers must transform visual–spatial representations into a body-centered reference frame and reach toward target locations within signing space.
Language and Cognitive Processes | 2011
Karen Emmorey; Stephen McCullough; Sonya Mehta; Laura L. Boles Ponto; Thomas J. Grabowski
We investigated the functional organisation of neural systems supporting language production when the primary language articulators are also used for meaningful, but nonlinguistic, expression such as pantomime. Fourteen hearing nonsigners and 10 deaf native users of American Sign Language (ASL) participated in an H₂¹⁵O positron emission tomography study in which they generated action pantomimes or ASL verbs in response to pictures of tools and manipulable objects. For pantomime generation, participants were instructed to “show how you would use the object”. For verb generation, signers were asked to “generate a verb related to the object”. The objects for this condition were selected to elicit handling verbs that resemble pantomime (e.g., TO-HAMMER, hand configuration and movement mimic the act of hammering) and nonhandling verbs that do not (e.g., POUR-SYRUP, produced with a “Y” handshape). For the baseline task, participants viewed pictures of manipulable objects and an occasional nonmanipulable object and decided whether the objects could be handled, gesturing “yes” (thumbs up) or “no” (hand wave). Relative to baseline, generation of ASL verbs engaged left inferior frontal cortex, but when nonsigners produced pantomimes for the same objects, no frontal activation was observed. Both groups recruited left parietal cortex during pantomime production. However, for deaf signers the activation was more extensive and bilateral, which may reflect a more complex and integrated neural representation of hand actions. We conclude that the production of pantomime vs. ASL verbs (even those that resemble pantomime) engages partially segregated neural systems that support praxic vs. linguistic functions.
Language, Cognition and Neuroscience | 2015
Karen Emmorey; Stephen McCullough; Jill Weisberg
We used functional magnetic resonance imaging to identify the neural regions that support comprehension of fingerspelled words, printed words, and American Sign Language (ASL) signs in deaf ASL–English bilinguals. Participants made semantic judgements (concrete–abstract?) to each lexical type, and hearing non-signers served as controls. All three lexical types engaged a left frontotemporal circuit associated with lexical semantic processing. Both printed and fingerspelled words activated the visual word form area for deaf signers only. Fingerspelled words were more left lateralised than signs, paralleling the difference between reading and listening for spoken language. Greater activation in left supramarginal gyrus was observed for signs compared to fingerspelled words, supporting its role in processing sign-specific phonology. Fingerspelling ability was negatively correlated with activation in left occipital cortex, while ASL ability was negatively correlated with activation in right angular gyrus. Overall, the results reveal both overlapping and distinct neural regions for comprehension of signs, text, and fingerspelling.
Brain and Language | 2016
Karen Emmorey; Stephen McCullough; Jill Weisberg
We investigated word-level reading circuits in skilled deaf readers (N = 14; mean reading age = 19.5 years) and less skilled deaf readers (N = 14; mean reading age = 12 years) who were all highly proficient users of American Sign Language. During fMRI scanning, participants performed a semantic decision (concrete concept?), a phonological decision (two syllables?), and a false-font control task (string underlined?). No significant group differences were observed with the full participant set. However, an analysis with the 10 most and 10 least skilled readers revealed that for the semantic task (vs. control task), proficient deaf readers exhibited greater activation in left inferior frontal and middle temporal gyri than less proficient readers. No group differences were observed for the phonological task. Whole-brain correlation analyses (all participants) revealed that for the semantic task, reading ability correlated positively with neural activity in the right inferior frontal gyrus and in a region associated with the orthography-semantics interface, located anterior to the visual word form area. Reading ability did not correlate with neural activity during the phonological task. Accuracy on the semantic task correlated positively with neural activity in left anterior temporal lobe (a region linked to conceptual processing), while accuracy on the phonological task correlated positively with neural activity in left posterior inferior frontal gyrus (a region linked to syllabification processes during speech production). Finally, reading comprehension scores correlated positively with vocabulary and print exposure measures, but not with phonological awareness scores.
Brain and Language | 2015
Jill Weisberg; Stephen McCullough; Karen Emmorey
Code-blends (simultaneous words and signs) are a unique characteristic of bimodal bilingual communication. Using fMRI, we investigated code-blend comprehension in hearing native ASL-English bilinguals who made a semantic decision (edible?) about signs, audiovisual words, and semantically equivalent code-blends. English and ASL recruited a similar fronto-temporal network with expected modality differences: stronger activation for English in auditory regions of bilateral superior temporal cortex, and stronger activation for ASL in bilateral occipitotemporal visual regions and left parietal cortex. Code-blend comprehension elicited activity in a combination of these regions, and no cognitive control regions were additionally recruited. Furthermore, code-blends elicited reduced activation relative to ASL presented alone in bilateral prefrontal and visual extrastriate cortices, and relative to English alone in auditory association cortex. Consistent with behavioral facilitation observed during semantic decisions, the findings suggest that redundant semantic content induces more efficient neural processing in language and sensory regions during bimodal language integration.
Brain and Language | 2016
Karen Emmorey; Sonya Mehta; Stephen McCullough; Thomas J. Grabowski
Signing differs from typical non-linguistic hand actions because movements are not visually guided, finger movements are complex (particularly for fingerspelling), and signs are not produced as holistic gestures. We used positron emission tomography to investigate the neural circuits involved in the production of American Sign Language (ASL). Different types of signs (one-handed (articulated in neutral space), two-handed (neutral space), and one-handed body-anchored signs) were elicited by asking deaf native signers to produce sign translations of English words. Participants also fingerspelled (one-handed) printed English words. For the baseline task, participants indicated whether a word contained a descending letter. Fingerspelling engaged ipsilateral motor cortex and cerebellar cortex in contrast to both one-handed signs and the descender baseline task, which may reflect greater timing demands and complexity of handshape sequences required for fingerspelling. Greater activation in the visual word form area was also observed for fingerspelled words compared to one-handed signs. Body-anchored signs engaged bilateral superior parietal cortex to a greater extent than the descender baseline task and neutral space signs, reflecting the motor control and proprioceptive monitoring required to direct the hand toward a specific location on the body. Less activation in parts of the motor circuit was observed for two-handed signs compared to one-handed signs, possibly because, for half of the signs, handshape and movement goals were spread across the two limbs. Finally, the conjunction analysis comparing each sign type with the descender baseline task revealed common activation in the supramarginal gyrus bilaterally, which we interpret as reflecting phonological retrieval and encoding processes.
Neuropsychologia | 2018
Laurie S. Glezer; Jill Weisberg; Cindy O’Grady Farnady; Stephen McCullough; Katherine J. Midgley; Phillip J. Holcomb; Karen Emmorey
People who are born deaf often have difficulty learning to read. Recently, several studies have examined the neural substrates involved in reading in deaf people and found a left lateralized reading system similar to hearing people involving temporo-parietal, inferior frontal, and ventral occipito-temporal cortices. Previous studies in typical hearing readers show that within this reading network there are separate regions that specialize in processing orthography and phonology. We used fMRI rapid adaptation in deaf adults who were skilled readers to examine neural selectivity in three functional ROIs in the left hemisphere: temporoparietal cortex (TPC), inferior frontal gyrus (IFG), and the visual word form area (VWFA). Results show that in deaf skilled readers, the left VWFA showed selectivity for orthography similar to what has been reported for hearing readers, the TPC showed less sensitivity to phonology than previously reported for hearing readers using the same paradigm, and the IFG showed selectivity to orthography, but not phonology (similar to what has been reported previously for hearing readers). These results provide evidence that while skilled deaf readers demonstrate coarsely tuned phonological representations in the TPC, they develop finely tuned representations for the orthography of written words in the VWFA and IFG. This result suggests that phonological tuning in the TPC may have little impact on the neural network associated with skilled reading for deaf adults.
Highlights: An fMRI rapid adaptation paradigm explored neural tuning in skilled deaf readers. Both the left and right VWFA showed selectivity to whole-word orthography. Sensitivity, but not selectivity, to phonology was found in temporoparietal cortex. Skilled deaf readers develop finely tuned neural representations of written words. Phonological tuning may have little impact on reading success for deaf adults.