Publications

Featured research published by Kashmiri Stec.


Cognitive Semiotics | 2014

Co-constructing referential space in multimodal narratives

Kashmiri Stec; Mike Huiskes

Abstract: Meaning-making is a situated, multimodal process. Although most research has focused on conceptualization in individuals, recent work points to the way dynamic processes can affect both conceptualization and expression in multiple individuals (e.g. Özyürek 2002; Fusaroli and Tylén 2012; Narayan 2012). In light of this, we investigate the co-construction of referential space in dyadic multimodal communication. Referential space is the association of a referent with a particular spatial location (McNeill and Pedelty 1995). We focus on the multimodal means by which dyads collaboratively co-construct or co-use referential space, and use it to answer questions related to its use and stability in communication. Whereas previous work has focused on an individual's use of referential space (So et al. 2009), our data suggest that spatial locations are salient to both speakers and addressees: referents assigned to particular spatial locations can be mutually accessible to both participants, as well as stable across longer stretches of discourse.


Open Linguistics | 2015

Multimodal analysis of quotation in oral narratives

Kashmiri Stec; Mike Huiskes; Gisela Redeker

Abstract: We investigate direct speech quotation in informal oral narratives by analyzing the contribution of bodily articulators (character viewpoint gestures, character facial expression, character intonation, and the meaningful use of gaze) in three quote environments, or quote sequences – single quotes, quoted monologues and quoted dialogues – and in initial vs. non-initial position within those sequences. Our analysis draws on findings from the linguistic and multimodal realization of quotation, where multiple articulators are often observed to be co-produced with single direct speech quotes (e.g. Thompson & Suzuki 2014), especially on the so-called left boundary of the quote (Sidnell 2006). We use logistic regression to model multimodal quote production across and within quote sequences, and find unique sets of multimodal articulators accompanying each quote sequence type. We do not, however, find unique sets of multimodal articulators which distinguish initial from non-initial utterances; utterance position is instead predicted by type of quote and presence of a quoting predicate. Our findings add to the growing body of research on multimodal quotation, and suggest that the multimodal production of quotation is more sensitive to the number of characters and utterances which are quoted than to the difference between introducing and maintaining a quoted character's perspective.


Cognitive Linguistics | 2016

Linguistic, gestural, and cinematographic viewpoint: An analysis of ASL and English narrative

Fey Parrill; Kashmiri Stec; David Quinto-Pozos; Sebastian Rimehaug

Abstract: Multimodal narrative can help us understand how conceptualizers schematize information when they create mental representations of films and may shed light on why some cinematic conventions are easier or harder for viewers to integrate. This study compares descriptions of a shot/reverse shot sequence (a sequence of camera shots from the viewpoints of different characters) across users of English and American Sign Language (ASL). We ask which gestural and linguistic resources participants use to narrate this event. Speakers and signers tended to represent the same characters via the same point of view and to show a single perspective rather than combining multiple perspectives simultaneously. Neither group explicitly mentioned the shift in cinematographic perspective. We argue that encoding multiple points of view might be a more accurate visual description, but is avoided because it does not create a better narrative.


Gesture | 2012

Meaningful shifts: A review of viewpoint markers in co-speech gesture and sign language

Kashmiri Stec


Proceedings of GESPIN 2011: Gesture and speech in interaction | 2013

Aktionsarten, Speech and Gesture

Raymond Becker; Alan Cienki; Austin Bennett; Christina Cudina; Camille Debras; Zuzanna Fleischer; Michael Haaheim; Torsten Müller; Kashmiri Stec; Alessandra Zarcone


Gesture | 2018

Seeing first person changes gesture but saying first person does not

Fey Parrill; Kashmiri Stec


Glossa | 2017

Multimodal character viewpoint in quoted dialogue sequences

Kashmiri Stec; Mike Huiskes; Martijn Wieling; Gisela Redeker


The Mind Research Repository | 2016

Multimodal quotation: Role shift practices in spoken narratives

Kashmiri Stec; Mike Huiskes; Gisela Redeker


Gesture | 2015

Reporting practices in multimodal viewpoint research

Kashmiri Stec


Archive | 2014

Mental timelines and temporal order cues in multimodal communication

Raymond Becker; Alan Cienki; Kashmiri Stec

Collaboration


Dive into Kashmiri Stec's collaborations.

Top Co-Authors

Mike Huiskes
University of Groningen

Fey Parrill
Case Western Reserve University

Alan Cienki
Moscow State Linguistic University

Austin Bennett
Case Western Reserve University

David Quinto-Pozos
University of Texas at Austin