Publications


Featured research published by Pamela M. Perniss.


Frontiers in Psychology | 2010

Iconicity as a General Property of Language: Evidence from Spoken and Signed Languages

Pamela M. Perniss; Robin L. Thompson; Gabriella Vigliocco

Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings found in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to “hook up” to motor, perceptual, and affective experience.


Philosophical Transactions of the Royal Society B | 2014

The bridge of iconicity: from a world of experience to the experience of language

Pamela M. Perniss; Gabriella Vigliocco

Iconicity, a resemblance between properties of linguistic form (both in spoken and signed languages) and meaning, has traditionally been considered to be a marginal, irrelevant phenomenon for our understanding of language processing, development and evolution. Rather, the arbitrary and symbolic nature of language has long been taken as a design feature of the human linguistic system. In this paper, we propose an alternative framework in which iconicity in face-to-face communication (spoken and signed) is a powerful vehicle for bridging between language and human sensori-motor experience, and, as such, iconicity provides a key to understanding language evolution, development and processing. In language evolution, iconicity might have played a key role in establishing displacement (the ability of language to refer beyond what is immediately present), which is core to what language does; in ontogenesis, iconicity might play a critical role in supporting referentiality (learning to map linguistic labels to objects, events, etc., in the world), which is core to vocabulary development. Finally, in language processing, iconicity could provide a mechanism to account for how language comes to be embodied (grounded in our sensory and motor systems), which is core to meaningful communication.


Philosophical Transactions of the Royal Society B | 2014

Language as a multimodal phenomenon: implications for language learning, processing and evolution

Gabriella Vigliocco; Pamela M. Perniss; David P. Vinson

Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language is composed wholly of an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms; and, finally, iconicity is the norm, rather than the exception, in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages.


Archive | 2007

Space and iconicity in German sign language (DGS)

Pamela M. Perniss

This dissertation investigates the expression of spatial relationships in German Sign Language (Deutsche Gebärdensprache, DGS). The analysis focuses on linguistic expression in the spatial domain in two types of discourse: static scene description (location) and event narratives (location and motion). Its primary theoretical objectives are to characterize the structure of locative descriptions in DGS; to explain the use of frames of reference and perspective in the expression of location and motion; to clarify the interrelationship between the systems of frames of reference, signing perspective, and classifier predicates; and to characterize the interplay between iconicity principles, on the one hand, and grammatical and discourse constraints, on the other hand, in the use of these spatial devices. In more general terms, the dissertation provides a usage-based account of iconic mapping in the visual-spatial modality. The use of space in sign language expression is widely assumed to be guided by iconic principles, which are furthermore assumed to hold in the same way across sign languages. Thus, there has been little expectation of variation between sign languages in the use of spatial devices. Consequently, perhaps, there has been little systematic investigation of linguistic expression in the spatial domain in individual sign languages, and even less investigation of spatial language in extended signed discourse. This dissertation provides an investigation of spatial expressions in DGS by investigating the impact of different constraints on iconicity in sign language structure. The analyses have important implications for our understanding of the role of iconicity in the visual-spatial modality, the possible language-specific variation within the spatial domain in the visual-spatial modality, the structure of spatial language in both natural language modalities, and the relationship between spatial language and cognition.


Trends in Linguistics 188 | 2007

Visible Variation: Comparative Studies on Sign Language Structure

Pamela M. Perniss; Roland Pfau; Markus Steinbach

This volume brings together work by scholars engaging in comparative sign linguistics research. The articles discuss data from many different signed and spoken languages. They focus on empirical and descriptive aspects of sign language variation and cover a wide range of topics from different areas of grammar. In addition to this, they address psycholinguistic issues, aspects of language change, and issues concerning data collection in sign languages.


Topics in Cognitive Science | 2015

Visible cohesion: A comparison of reference tracking in sign, speech, and co-speech gesture

Pamela M. Perniss

Establishing and maintaining reference is a crucial part of discourse. In spoken languages, differential linguistic devices mark referents occurring in different referential contexts, that is, introduction, maintenance, and re-introduction contexts. Speakers using gestures as well as users of sign languages have also been shown to mark referents differentially depending on the referential context. This article investigates the modality-specific contribution of the visual modality in marking referential context by providing a direct comparison between sign language (German Sign Language; DGS) and co-speech gesture with speech (German) in elicited narratives. Across all forms of expression, we find that referents in subject position are referred to with more marking material in re-introduction contexts compared to maintenance contexts. Furthermore, we find that spatial modification is used as a modality-specific strategy in both DGS and German co-speech gesture, and that the configuration of referent locations in sign space and gesture space corresponds in an iconic and consistent way to the locations of referents in the narrated event. However, we find that spatial modification is used in different ways for marking re-introduction and maintenance contexts in DGS and German co-speech gesture. The findings are discussed in relation to the unique contribution of the visual modality to reference tracking in discourse when it is used in a unimodal system with full linguistic structure (i.e., as in sign) versus in a bimodal system that is a composite of speech and gesture.


Discourse Processes | 2013

Gestural viewpoint signals referent accessibility

Sandra Debreslioska; Marianne Gullberg; Pamela M. Perniss

The tracking of entities in discourse is known to be a bimodal phenomenon. Speakers achieve cohesion in speech by alternating between full lexical forms, pronouns, and zero anaphora as they track referents. They also track referents in co-speech gestures. In this study, we explored how viewpoint is deployed in reference tracking, focusing on representations of animate entities in German narrative discourse. We found that gestural viewpoint systematically varies depending on discourse context. Speakers predominantly use character viewpoint in maintained contexts and observer viewpoint in reintroduced contexts. Thus, gestural viewpoint seems to function as a cohesive device in narrative discourse. The findings expand on and provide further evidence for the coordination between speech and gesture on the discourse level that is crucial to understanding the tight link between the two modalities.


Topics in Cognitive Science | 2015

The Influence of the Visual Modality on Language Structure and Conventionalization: Insights From Sign Language and Gesture

Pamela M. Perniss; Gary Morgan

For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems.


Spatial Cognition and Computation | 2015

Viewpoint in the Visual-Spatial Modality: The Coordination of Spatial Perspective

Jennie E. Pyers; Pamela M. Perniss; Karen Emmorey

Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically but must conventionalize from whose viewpoint the spatial relation is being described, the signer's or the perceiver's. In Experiment 1, ASL signers and sign-naïve gesturers expressed viewpoint-dependent relations egocentrically, but only signers successfully interpreted the descriptions nonegocentrically, suggesting that viewpoint convergence in the visual modality emerges with language conventionalization. In Experiment 2, we observed that the cost of adopting a nonegocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. We suggest that nonlinguistic cognitive factors such as visual perspective-taking and motor embodiment may constrain viewpoint convergence in the visual-spatial modality.


Developmental Science | 2018

Mapping language to the world: the role of iconicity in the sign language input

Pamela M. Perniss; Jenny Lu; Gary Morgan; Gabriella Vigliocco

Most research on the mechanisms underlying referential mapping has assumed that learning occurs in ostensive contexts, where label and referent co-occur, and that form and meaning are linked by arbitrary convention alone. In the present study, we focus on iconicity in language, that is, resemblance relationships between form and meaning, and on non-ostensive contexts, where label and referent do not co-occur. We approach the question of language learning from the perspective of the language input. Specifically, we look at child-directed language (CDL) in British Sign Language (BSL), a language rich in iconicity due to the affordances of the visual modality. We ask whether child-directed signing exploits iconicity in the language by highlighting the similarity mapping between form and referent. We find that CDL modifications occur more often with iconic signs than with non-iconic signs. Crucially, for iconic signs, modifications are more frequent in non-ostensive contexts than in ostensive contexts. Furthermore, we find that pointing dominates in ostensive contexts, and suggest that caregivers adjust the semiotic resources recruited in CDL to context. These findings offer first evidence for a role of iconicity in the language input and suggest that iconicity may be involved in referential mapping and language learning, particularly in non-ostensive contexts.

Collaboration


Explore Pamela M. Perniss's collaborations.

Top Co-Authors

Roland Pfau
University of Amsterdam

Inge Zwitserlood
Radboud University Nijmegen

David P. Vinson
University College London

Ulrike Zeshan
University of Central Lancashire

Gary Morgan
City University London

Neil Fox
University College London