Ted Supalla
University of Rochester
Publications
Featured research published by Ted Supalla.
Nature Neuroscience | 2004
Mrim Boutla; Ted Supalla; Elissa L. Newport; Daphne Bavelier
Short-term memory (STM), or the ability to hold information in mind for a few seconds, is thought to be limited in its capacity to about 7 ± 2 items. Notably, the average STM capacity when using American Sign Language (ASL) rather than English is only 5 ± 1 items. Here we show that, contrary to previous interpretations, this difference cannot be attributed to phonological factors, item duration or reduced memory abilities in deaf people. We also show that, despite this difference in STM span, hearing speakers and deaf ASL users have comparable working memory resources during language use, indicating similar abilities to maintain and manipulate linguistic information. The shorter STM span in ASL users therefore confirms the view that the spoken span of 7 ± 2 is an exception, probably owing to the reliance of speakers on auditory-based rather than visually-based representations in linguistic STM, and calls for adjustments in the norms used with deaf individuals.
Psychological Science | 2006
Daphne Bavelier; Elissa L. Newport; Matthew L. Hall; Ted Supalla; Mrim Boutla
Short-term memory (STM) is thought to be limited in capacity to about 7 ± 2 items for linguistic materials and 4 ± 1 items for visuospatial information (Baddeley & Logie, 1999; Cowan, 2001). Recently, we (Boutla, Supalla, Newport, & Bavelier, 2004) challenged this dichotomy between linguistic and visuospatial STM by showing that STM capacity in users of American Sign Language (ASL) is also limited to about 4 or 5 items. This finding suggests that although longer spans appear for speech, spans are not necessarily longer for linguistic materials across all modalities. Wilson and Emmorey (2006) responded that because we evaluated span using digits for English speakers and letters for ASL signers, the difference we reported might have stemmed from stimulus selection rather than language modality. Here we address this claim by reporting an experiment in which we reexamined STM span in English speakers using letters and compared the outcome with results we and Wilson and Emmorey have obtained for ASL signers. It is important to note that the discrepancy between our previous results and those of Wilson and Emmorey is not in the obtained span in signers, which all parties found to be around 4 to 5 items (Fig. 1a). Rather, Wilson and Emmorey disputed whether the digit span of 7 ± 2 in speakers is an appropriate benchmark for comparison with signers. Using letters to measure span in speakers, they found a span of only 5.3, comparable to that of signers. They suggested that there is no difference in span between the two languages. Here we show that their result is not due to their use of letters instead of digits. Rather, in selecting letters that are translations of one another in English and ASL, Wilson and Emmorey failed to control the stimuli in each language for phonological factors known to affect span size. One such crucial factor is phonological similarity. The finding that span is longer for digits than for letters in English speakers is not new (Cavanagh, 1972). However, this difference has been attributed to the greater phonological similarity of letter names than digit names in English (Conrad & Hull, 1964; Mueller, Seymour, Kieras, & Meyer, 2003). In our previous study, we used letters with signers and digits with speakers to match stimuli in this important regard. Finger-spelled letters are less phonologically similar than number signs in ASL and therefore are more comparable to digits in English speakers. To demonstrate that there is nothing special about letters versus digits, other than the fact that many letter names are highly similar in English (e.g., bee, dee, ee, gee) and thus prone to produce shorter spans, we show here that when phonologically controlled letter materials are used with English speakers, the span of speakers returns to the typical 7 ± 2 range and continues to contrast with the span of signers.
Proceedings of the National Academy of Sciences of the United States of America | 2010
Aaron J. Newman; Ted Supalla; Peter C. Hauser; Elissa L. Newport; Daphne Bavelier
An important question in understanding language processing is whether there are distinct neural mechanisms for processing specific types of grammatical structure, such as syntax versus morphology, and, if so, what the basis of the specialization might be. However, this question is difficult to study: A given language typically conveys its grammatical information in one way (e.g., English marks “who did what to whom” using word order, and German uses inflectional morphology). American Sign Language permits either device, enabling a direct within-language comparison. During functional (f)MRI, native signers viewed sentences that used only word order and sentences that included inflectional morphology. The two sentence types activated an overlapping network of brain regions, but with differential patterns. Word order sentences activated left-lateralized areas involved in working memory and lexical access, including the dorsolateral prefrontal cortex, the inferior frontal gyrus, the inferior parietal lobe, and the middle temporal gyrus. In contrast, inflectional morphology sentences activated areas involved in building and analyzing combinatorial structure, including bilateral inferior frontal and anterior temporal regions as well as the basal ganglia and medial temporal/limbic areas. These findings suggest that for a given linguistic function, neural recruitment may depend upon the cognitive resources required to process specific types of linguistic cues.
NeuroImage | 2010
Aaron J. Newman; Ted Supalla; Peter C. Hauser; Elissa L. Newport; Daphne Bavelier
Signed languages such as American Sign Language (ASL) are natural human languages that share all of the core properties of spoken human languages but differ in the modality through which they are communicated. Neuroimaging and patient studies have suggested similar left hemisphere (LH)-dominant patterns of brain organization for signed and spoken languages, indicating that the linguistic nature of the information, rather than modality, drives brain organization for language. However, the role of the right hemisphere (RH) in sign language has been less explored. In spoken languages, the RH supports the processing of numerous types of narrative-level information, including prosody, affect, facial expression, and discourse structure. In the present fMRI study, we contrasted the processing of ASL sentences that contained these types of narrative information with similar sentences without marked narrative cues. For all sentences, Deaf native signers showed robust bilateral activation of perisylvian language cortices as well as the basal ganglia, medial frontal, and medial temporal regions. However, RH activation in the inferior frontal gyrus and superior temporal sulcus was greater for sentences containing narrative devices, including areas involved in processing narrative content in spoken languages. These results provide additional support for the claim that all natural human languages rely on a core set of LH brain regions, and extend our knowledge to show that narrative linguistic functions typically associated with the RH in spoken languages are similarly organized in signed languages.
Proceedings of the National Academy of Sciences of the United States of America | 2015
Aaron J. Newman; Ted Supalla; Nina Fernandez; Elissa L. Newport; Daphne Bavelier
Significance: Although sign languages and nonlinguistic gesture use the same modalities, only sign languages have established vocabularies and follow grammatical principles. This is the first study (to our knowledge) to ask how the brain systems engaged by sign language differ from those used for nonlinguistic gesture matched in content, using appropriate visual controls. Signers engaged classic left-lateralized language centers when viewing both sign language and gesture; nonsigners showed activation only in areas attuned to human movement, indicating that sign language experience influences gesture perception. In signers, sign language activated left hemisphere language areas more strongly than gestural sequences. Thus, sign language constructions—even those similar to gesture—engage language-related brain systems and are not processed in the same ways that nonsigners interpret gesture.

Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: In particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual–manual modality with a nonlinguistic symbolic communicative system—gesture—further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages—supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network—demonstrating an influence of experience on the perception of nonlinguistic stimuli.
Frontiers in Psychology | 2015
Elizabeth A. Hirshorn; Matthew W. G. Dye; Peter C. Hauser; Ted Supalla; Daphne Bavelier
While reading is challenging for many deaf individuals, some become proficient readers. Little is known about the component processes that support reading comprehension in these individuals. Speech-based phonological knowledge is one of the strongest predictors of reading comprehension in hearing individuals, yet its role in deaf readers is controversial. This could reflect the highly varied language backgrounds among deaf readers as well as the difficulty of disentangling the relative contribution of phonological versus orthographic knowledge of spoken language, in our case English, in this population. Here we assessed the impact of language experience on reading comprehension in deaf readers by recruiting oral deaf individuals, who use spoken English as their primary mode of communication, and deaf native signers of American Sign Language. First, to address the contribution of spoken English phonological knowledge in deaf readers, we present novel tasks that evaluate phonological versus orthographic knowledge. Second, the impact of this knowledge, as well as memory measures that rely differentially on phonological (serial recall) and semantic (free recall) processing, on reading comprehension was evaluated. The best predictor of reading comprehension differed as a function of language experience, with free recall being a better predictor in deaf native signers than in oral deaf. In contrast, the measures of English phonological knowledge, independent of orthographic knowledge, best predicted reading comprehension in oral deaf individuals. These results suggest successful reading strategies differ across deaf readers as a function of their language experience, and highlight a possible alternative route to literacy in deaf native signers.

Highlights:
1. Deaf individuals vary in their orthographic and phonological knowledge of English as a function of their language experience.
2. Reading comprehension was best predicted by different factors in oral deaf and deaf native signers.
3. Free recall memory (primacy effect) better predicted reading comprehension in deaf native signers as compared to oral deaf or hearing individuals.
4. Language experience should be taken into account when considering cognitive processes that mediate reading in deaf individuals.
Frontiers in Human Neuroscience | 2014
Elizabeth A. Hirshorn; Matthew W. G. Dye; Peter C. Hauser; Ted Supalla; Daphne Bavelier
The present work addresses the neural bases of sentence reading in deaf populations. To better understand the relative role of deafness and spoken language knowledge in shaping the neural networks that mediate sentence reading, three populations with different degrees of English knowledge and depth of hearing loss were included—deaf signers, oral deaf and hearing individuals. The three groups were matched for reading comprehension and scanned while reading sentences. A similar neural network of left perisylvian areas was observed, supporting the view of a shared network of areas for reading despite differences in hearing and English knowledge. However, differences were observed, in particular in the auditory cortex, with deaf signers and oral deaf showing greater bilateral superior temporal gyrus (STG) recruitment as compared to hearing individuals. Importantly, within deaf individuals, the same STG area in the left hemisphere showed greater recruitment as hearing loss increased. To further understand the functional role of such auditory cortex re-organization after deafness, connectivity analyses were performed from the STG regions identified above. Connectivity from the left STG toward areas typically associated with semantic processing (BA45 and thalami) was greater in deaf signers and in oral deaf as compared to hearing individuals. In contrast, connectivity from left STG toward areas identified with speech-based processing was greater in hearing and in oral deaf as compared to deaf signers. These results support the growing literature indicating recruitment of auditory areas after congenital deafness for visually-mediated language functions, and establish that both auditory deprivation and language experience shape this functional reorganization. Implications for differential reliance on semantic vs. phonological pathways during reading in the three groups are discussed.
Cognition | 2008
Daphne Bavelier; Elissa L. Newport; Matthew L. Hall; Ted Supalla; Mrim Boutla
Archive | 2000
Elissa L. Newport; Ted Supalla
Sign Language & Linguistics | 1999
Yutaka Osugi; Ted Supalla; Rebecca Webb