Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sotaro Kita is active.

Publication


Featured research published by Sotaro Kita.


Journal of Memory and Language | 2003

What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking

Sotaro Kita

Gestures that spontaneously accompany speech convey information coordinated with the concurrent speech. There has been considerable theoretical disagreement about the process by which this informational coordination is achieved. Some theories predict that the information encoded in gesture is not influenced by how information is verbally expressed. However, others predict that gestures encode only what is encoded in speech. This paper investigates this issue by comparing informational coordination between speech and gesture across different languages. Narratives in Turkish, Japanese, and English were elicited using an animated cartoon as the stimulus. It was found that gestures used to express the same motion events were influenced simultaneously by (1) how features of motion events were expressed in each language, and (2) spatial information in the stimulus that was never verbalized. From this, it is concluded that gestures are generated from spatio-motoric processes that interact on-line with the speech production process. Through the interaction, spatio-motoric information to be expressed is packaged into chunks that are verbalizable within a processing unit for speech formulation. In addition, we propose a model of speech and gesture production as one of a class of frameworks that are compatible with the data.


Trends in Cognitive Sciences | 2004

Can language restructure cognition? The case for space

Asifa Majid; Melissa Bowerman; Sotaro Kita; Daniel B. M. Haun; Stephen C. Levinson

Frames of reference are coordinate systems used to compute and specify the location of objects with respect to other objects. These have long been thought of as innate concepts, built into our neurocognition. However, recent work shows that the use of such frames in language, cognition and gesture varies cross-culturally, and that children can acquire different systems with comparable ease. We argue that language can play a significant role in structuring, or restructuring, a domain as fundamental as spatial cognition. This suggests we need to rethink the relation between the neurocognitive underpinnings of spatial cognition and the concepts we use in everyday thinking, and, more generally, to work out how to account for cross-cultural cognitive diversity in core cognitive domains.


Language | 1998

Semantic typology and spatial conceptualization

Eric Pederson; Eve Danziger; David P. Wilkins; Stephen C. Levinson; Sotaro Kita; Gunter Senft

This project collected linguistic data for spatial relations across a typologically and genetically varied set of languages. In the linguistic analysis, we focus on the ways in which propositions may be functionally equivalent across the linguistic communities while nonetheless representing semantically quite distinctive frames of reference. Running nonlinguistic experiments on subjects from these language communities, we find that a population's cognitive frame of reference correlates with the linguistic frame of reference within the same referential domain.


Journal of Cognitive Neuroscience | 2007

On-line Integration of Semantic Information from Speech and Gesture: Insights from Event-related Brain Potentials

Roel M. Willems; Sotaro Kita; Peter Hagoort

During language comprehension, listeners use the global semantic representation from the previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated into the previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the differences in modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches were found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than verbal semantics alone. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.


Cognition | 2002

Returning the tables: Language affects spatial reasoning

Stephen C. Levinson; Sotaro Kita; Daniel B. M. Haun; Björn Rasch

Li and Gleitman (Turning the tables: language and spatial reasoning. Cognition, in press) seek to undermine a large-scale cross-cultural comparison of spatial language and cognition which claims to have demonstrated that language and conceptual coding in the spatial domain covary (see, for example, Space in language and cognition: explorations in linguistic diversity. Cambridge: Cambridge University Press, in press; Language 74 (1998) 557): the most plausible interpretation is that different languages induce distinct conceptual codings. Arguing against this, Li and Gleitman attempt to show that in an American student population they can obtain any of the relevant conceptual codings just by varying spatial cues, holding language constant. They then argue that our findings are better interpreted in terms of ecologically-induced distinct cognitive styles reflected in language. Linguistic coding, they argue, has no causal effects on non-linguistic thinking--it simply reflects antecedently existing conceptual distinctions. We here show that Li and Gleitman did not make a crucial distinction between frames of spatial reference relevant to our line of research. We report a series of experiments designed to show that they have, as a consequence, misinterpreted the results of their own experiments, which are in fact in line with our hypothesis. Their attempts to reinterpret the large cross-cultural study, and to enlist support from animal and infant studies, fail for the same reasons. We further try to discern exactly what theory drives their presumption that language can have no cognitive efficacy, and conclude that their position is undermined by a wide range of considerations.


Cognition | 2008

Sound symbolism facilitates early verb learning.

Mutsumi Imai; Sotaro Kita; Miho Nagumo; Hiroyuki Okada

Some words are sound-symbolic in that they involve a non-arbitrary relationship between sound and meaning. Here, we report that 25-month-old children are sensitive to cross-linguistically valid sound-symbolic matches in the domain of action and that this sound symbolism facilitates verb learning in young children. We constructed a set of novel sound-symbolic verbs whose sounds were judged to match certain actions better than others, as confirmed by Japanese- and English-speaking adults and by 2- and 3-year-old Japanese-speaking children. These sound-symbolic verbs, together with other novel non-sound-symbolic verbs, were used in a verb learning task with 3-year-old Japanese children. In line with the previous literature, 3-year-olds could not generalize the meaning of novel non-sound-symbolic verbs on the basis of the sameness of action. However, 3-year-olds could correctly generalize the meaning of novel sound-symbolic verbs. These results suggest that iconic scaffolding by means of sound symbolism plays an important role in early verb learning.


Proceedings of the International Gesture Workshop on Gesture and Sign Language in Human-Computer Interaction | 1997

Movement Phase in Signs and Co-Speech Gestures, and Their Transcriptions by Human Coders

Sotaro Kita; Ingeborg van Gijn; Harry van der Hulst

The previous literature has suggested that the hand movement in co-speech gestures and signs consists of a series of phases with qualitatively different dynamic characteristics. In this paper, we propose a syntagmatic rule system for movement phases that applies to both co-speech gestures and signs. Descriptive criteria for the rule system were developed for the analysis of video-recorded continuous production of signs and gestures. The analysis involves segmenting a stream of body movement into phases and identifying different phase types. Two human coders used the criteria to analyze signs and co-speech gestures produced in natural discourse. It was found that the criteria yielded good inter-coder reliability. These criteria can be used in the automatic recognition of signs and co-speech gestures, in order to segment continuous production and identify the potentially meaning-bearing phases.


Journal of Experimental Psychology: General | 2011

The Nature of Gestures' Beneficial Role in Spatial Problem Solving

Mingyuan Chu; Sotaro Kita

Co-thought gestures are hand movements produced in silent, noncommunicative, problem-solving situations. In this study, we investigated whether and how such gestures enhance performance in spatial visualization tasks such as a mental rotation task and a paper folding task. We found that participants gestured more often when they had difficulties solving mental rotation problems (Experiment 1). The gesture-encouraged group solved more mental rotation problems correctly than did the gesture-allowed and gesture-prohibited groups (Experiment 2). Gestures produced by the gesture-encouraged group enhanced performance in the very trials in which they were produced (Experiments 2 & 3). Furthermore, gesture frequency decreased as the participants in the gesture-encouraged group solved more problems (Experiments 2 & 3). In addition, the advantage of the gesture-encouraged group persisted into subsequent spatial visualization problems in which gesturing was prohibited: another mental rotation block (Experiment 2) and a newly introduced paper folding task (Experiment 3). The results indicate that when people have difficulty solving spatial visualization problems, they spontaneously produce gestures to help them, and gestures can indeed improve performance. As they solve more problems, the spatial computation supported by gestures becomes internalized, and gesture frequency decreases. The benefit of gestures persists even in subsequent spatial visualization problems in which gesture is prohibited. Moreover, the beneficial effect of gesturing generalizes to a different spatial visualization task when the two tasks require similar spatial transformation processes. We conclude that gestures enhance performance on spatial visualization tasks by improving the internal computation of spatial transformations.


Language and Cognitive Processes | 2009

Cross-cultural variation of speech-accompanying gesture: A review

Sotaro Kita

This article reviews the literature on cross-cultural variation of gestures. Four factors governing the variation were identified. The first factor is the culture-specific convention for form-meaning associations. This factor is involved in well-known cross-cultural differences in emblem gestures (e.g., the OK-sign), as well as pointing gestures. The second factor is culture-specific spatial cognition. Representational gestures (i.e., iconic and deictic gestures) that express spatial contents or metaphorically express temporal concepts differ across cultures, reflecting the cognitive differences in how direction, relative location and different axes in space are conceptualised and processed. The third factor is linguistic differences. Languages have different lexical and syntactic resources to express spatial information. This linguistic difference is reflected in how gestures express spatial information. The fourth factor is culture-specific gestural pragmatics, namely the principles under which gesture is used in communication. The culture-specificity in politeness of gesture use, the role of nodding in conversation, and the use of gesture space are discussed.


Philosophical Transactions of the Royal Society B | 2014

The sound symbolism bootstrapping hypothesis for language acquisition and language evolution

Mutsumi Imai; Sotaro Kita

Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture.

Collaboration


Dive into Sotaro Kita's collaborations.

Top Co-Authors

Martha W. Alibali

University of Wisconsin-Madison
