Robin L. Thompson
University College London
Publications
Featured research published by Robin L. Thompson.
Frontiers in Psychology | 2010
Pamela M. Perniss; Robin L. Thompson; Gabriella Vigliocco
Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings found in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity must also be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to “hook up” to motor, perceptual, and affective experience.
Psychological Science | 2005
Robin L. Thompson; Karen Emmorey; Tamar H. Gollan
The “tip of the fingers” phenomenon (TOF) for sign language parallels the “tip of the tongue” phenomenon (TOT) for spoken language. During a TOF, signers are sure they know a sign but cannot retrieve it. Although some theories collapse semantics and phonology in sign language and thus predict that TOFs should not occur, TOFs were elicited in the current study. Like TOTs, TOFs often resolve spontaneously, commonly involve targets that are proper names, and frequently include partial access to phonology. Specifically, signers were more likely to retrieve a target sign's handshape, location, and orientation than to retrieve its movement. Signers also frequently recalled the first letter of a finger-spelled word. The existence of TOFs supports two-stage retrieval and a division between semantics and phonology in American Sign Language. The partial phonological information available during TOFs suggests that phonological features are accessed more simultaneously during lexical access for signed language than during lexical access for spoken language.
Journal of Experimental Psychology: Learning, Memory and Cognition | 2009
Robin L. Thompson; David P. Vinson; Gabriella Vigliocco
Signed languages exploit iconicity (the transparent relationship between meaning and form) to a greater extent than spoken languages, where it is largely limited to onomatopoeia. In a picture-sign matching experiment measuring reaction times, the authors examined the potential advantage of iconicity both for 1st- and 2nd-language learners of American Sign Language (ASL). The results show that native ASL signers are faster to respond when a specific property iconically represented in a sign is made salient in the corresponding picture, thus providing evidence that a closer mapping between meaning and form can aid in lexical retrieval. While late 2nd-language learners appear to use iconicity as an aid to learning sign (R. Campbell, P. Martin, & T. White, 1992), they did not show the same facilitation effect as native ASL signers, suggesting that the task tapped into more automatic language processes. Overall, the findings suggest that completely arbitrary mappings between meaning and form may not be more advantageous in language and that, rather, arbitrariness may simply be an accident of modality.
Psychological Science | 2012
Robin L. Thompson; David P. Vinson; Bencie Woll; Gabriella Vigliocco
An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.
Journal of Deaf Studies and Deaf Education | 2008
Karen Emmorey; Robin L. Thompson; Rachael Colvin
An eye-tracking experiment investigated where deaf native signers (N = 9) and hearing beginning signers (N = 10) look while comprehending a short narrative and a spatial description in American Sign Language produced live by a fluent signer. Both groups fixated primarily on the signer's face (more than 80% of the time) but differed with respect to fixation location. Beginning signers fixated on or near the signer's mouth, perhaps to better perceive English mouthing, whereas native signers tended to fixate on or near the eyes. Beginning signers shifted gaze away from the signer's face more frequently than native signers, but the pattern of gaze shifts was similar for both groups. When a shift in gaze occurred, the sign narrator was almost always looking at his or her hands and was most often producing a classifier construction. We conclude that joint visual attention and attention to mouthing (for beginning signers), rather than linguistic complexity or processing load, affect gaze fixation patterns during sign language comprehension.
Journal of Experimental Psychology: Learning, Memory and Cognition | 2010
Robin L. Thompson; David P. Vinson; Gabriella Vigliocco
Signed languages exploit the visual/gestural modality to create iconic expression across a wide range of basic conceptual structures in which the phonetic resources of the language are built up into an analogue of a mental image (Taub, 2001). Previously, we demonstrated a processing advantage when iconic properties of signs were made salient in a corresponding picture during a picture and sign matching task (Thompson, Vinson, & Vigliocco, 2009). The current study investigates the extent of iconicity effects with a phonological decision task (does the sign involve straight or curved fingers?) in which the meaning of the sign is irrelevant. The results show that iconicity is a significant predictor of response latencies and accuracy, with more iconic signs leading to slower responses and more errors. We conclude that meaning is activated automatically for highly iconic properties of a sign, and this leads to interference in making form-based decisions. Thus, the current study extends previous work by demonstrating that iconicity effects permeate the entire language system, arising automatically even when access to meaning is not needed.
Psychological Science | 2010
David P. Vinson; Robin L. Thompson; Robert Skinner; Neil Fox; Gabriella Vigliocco
In contrast to the single-articulatory system of spoken languages, sign languages employ multiple articulators, including the hands and the mouth. We asked whether manual components and mouthing patterns of lexical signs share a semantic representation, and whether their relationship is affected by the differing language experience of deaf and hearing native signers. We used picture-naming tasks and word-translation tasks to assess whether the same semantic effects occur in manual production and mouthing production. Semantic errors on the hands were more common in the English-translation task than in the picture-naming task, but errors in mouthing patterns showed a different trend. We conclude that mouthing is represented and accessed through a largely separable channel, rather than being bundled with manual components in the sign lexicon. Results were comparable for deaf and hearing signers; differences in language experience did not play a role. These results provide novel insight into coordinating different modalities in language production.
Behavior Research Methods | 2013
Stefan L. Frank; Irene Fernandez Monsalve; Robin L. Thompson; Gabriella Vigliocco
We make available word-by-word self-paced reading times and eye-tracking data over a sample of English sentences from narrative sources. These data are intended to form a gold standard for the evaluation of computational psycholinguistic models of sentence comprehension in English. We describe stimuli selection and data collection and present descriptive statistics, as well as comparisons between the two sets of reading times.
Language and Linguistics Compass | 2011
Robin L. Thompson
That linguistic form should be arbitrarily linked to meaning is generally taken as a fundamental feature of language. However, this paper explores the role of iconicity, or non-arbitrary form-meaning mappings for both language processing and language acquisition. Evidence from signed language research is presented showing that sign language users exploit iconicity in language processing. Further, iconicity may be at work in language acquisition serving to bridge the gap between conceptual representations and linguistic form. Signed languages are taken as a starting point since they tend to encode a higher degree of iconic form-meaning mappings than is found for spoken languages, but the findings are more broadly applicable. Specifically, the emerging evidence argues against the dominant view that connections between linguistic form and meaning need be primarily arbitrary. Instead both arbitrariness and iconicity have a role to play in language.
Bilingualism: Language and Cognition | 2009
Robin L. Thompson; Karen Emmorey; Robert Kluender
In American Sign Language (ASL), native signers use eye gaze to mark agreement (Thompson, Emmorey and Kluender, 2006). Such agreement is unique (it is articulated with the eyes) and complex (it occurs with only two out of three verb types, and marks verbal arguments according to a noun phrase accessibility hierarchy). In a language production experiment using head-mounted eye-tracking, we investigated the extent to which eye gaze agreement can be mastered by late second-language (L2) learners. The data showed that proficient late learners (with an average of 18.8 years signing experience) mastered a cross-linguistically prevalent pattern (NP-accessibility) within the eye gaze agreement system but ignored an idiosyncratic feature (marking agreement on only a subset of verbs). Proficient signers produced a grammar for eye gaze agreement that diverged from that of native signers but was nonetheless consistent with language universals. A second experiment examined the eye gaze patterns of novice signers with less than two years of ASL exposure and of English-speaking non-signers. The results provided further evidence that the pattern of acquisition found for proficient L2 learners is directly related to language learning, and does not stem from more general cognitive processes for eye gaze outside the realm of language.