Judith Holler
Max Planck Society
Publications
Featured research published by Judith Holler.
Language and Cognitive Processes | 2009
Judith Holler; Katie Wilkin
Much research has been carried out into the effects of mutually shared knowledge (or common ground) on verbal language use. The present study investigates how common ground affects human communication when language is regarded as consisting of both speech and gesture. A semantic feature approach was used to capture the range of information represented in speech and gesture. Overall, utterances were found to contain less semantic information when interlocutors had mutually shared knowledge, even when the information represented in both modalities, speech and gesture, was considered. However, when the gestures were considered on their own, they were found to represent only marginally less information. The findings also show that speakers gesture at a higher rate when common ground exists. It appears, therefore, that gestures play an important communicative function, even when speakers convey information that is already known to their addressee.
Philosophical Transactions of the Royal Society B | 2014
Stephen C. Levinson; Judith Holler
One reason for the apparent gulf between animal and human communication systems is that the focus has been on the presence or the absence of language as a complex expressive system built on speech. But language normally occurs embedded within an interactional exchange of multi-modal signals. If this larger perspective takes central focus, then it becomes apparent that human communication has a layered structure, where the layers may be plausibly assigned different phylogenetic and evolutionary origins—especially in the light of recent thoughts on the emergence of voluntary breathing and spoken language. This perspective helps us to appreciate the different roles that the different modalities play in human communication, as well as how they function as one integrated system despite their different roles and origins. It also offers possibilities for reconciling the ‘gesture-first hypothesis’ with that of gesture and speech having evolved together, hand in hand—or hand in mouth, rather—as one system.
Frontiers in Psychology | 2015
Judith Holler; Kobin H. Kendrick
One of the most intriguing aspects of human communication is its turn-taking system. It requires the ability to process on-going turns at talk while planning the next, and to launch this next turn without considerable overlap or delay. Recent research has investigated the eye movements of observers of dialogs to gain insight into how we process turns at talk. More specifically, this research has focused on the extent to which we are able to anticipate the end of current and the beginning of next turns. At the same time, there has been a call for shifting experimental paradigms exploring social-cognitive processes away from passive observation toward on-line processing. Here, we present research that responds to this call by situating state-of-the-art technology for tracking interlocutors’ eye movements within spontaneous, face-to-face conversation. Each conversation involved three native speakers of English. The analysis focused on question–response sequences involving just two of those participants, thus rendering the third momentarily unaddressed. Temporal analyses of the unaddressed participants’ gaze shifts from current to next speaker revealed that unaddressed participants are able to anticipate next turns, and moreover, that they often shift their gaze toward the next speaker before the current turn ends. However, an analysis of the complex structure of turns at talk revealed that the planning of these gaze shifts virtually coincides with the points at which the turns first become recognizable as possibly complete. We argue that the timing of these eye movements is governed by an organizational principle whereby unaddressed participants shift their gaze at a point that appears interactionally most optimal: It provides unaddressed participants with access to much of the visual, bodily behavior that accompanies both the current speaker’s and the next speaker’s turn, and it allows them to display recipiency with regard to both speakers’ turns.
Developmental Science | 2009
Evan Kidd; Judith Holler
We report on a study investigating 3- to 5-year-old children's use of gesture to resolve lexical ambiguity. Children were told three short stories that contained two senses of a homonym; for example, bat (flying mammal) and bat (sports equipment). They were then asked to re-tell these stories to a second experimenter. The data were coded for the means children used during attempts at disambiguation: speech, gesture, or a combination of the two. The results indicated that the 3-year-old children rarely disambiguated the two senses, mainly using deictic pointing gestures during attempts at disambiguation. In contrast, the 4-year-old children attempted to disambiguate the two senses more often, using a larger proportion of iconic gestures than the other children. The 5-year-old children used fewer iconic gestures than the 4-year-olds but, unlike the 3-year-olds, were able to disambiguate the senses through the verbal channel. The results highlight the value of gesture for the development of children's language and communication skills.
Quarterly Journal of Experimental Psychology | 2007
Andrew J. Stewart; Judith Holler; Evan Kidd
Two self-paced reading-time experiments examined how ambiguous pronouns are interpreted under conditions that encourage shallow processing. In Experiment 1 we show that sentences containing ambiguous pronouns are processed at the same speed as those containing unambiguous pronouns under shallow processing, but more slowly under deep processing. We outline three possible models to account for the shallow processing of ambiguous pronouns. Two involve an initial commitment followed by possible revision, and the other involves a delay in interpretation. In Experiment 2 we provide evidence that supports the delayed model of ambiguous pronoun resolution under shallow processing. We found no evidence to support a processing system that makes an initial commitment to an interpretation of the pronoun when it is encountered. We extend the account of pronoun resolution proposed by Rigalleau, Caplan, and Baudiffier (2004) to include the treatment of ambiguous pronouns under shallow processing.
Language | 2013
Suzanne Hall; Lisa Rumney; Judith Holler; Evan Kidd
The present study investigated the developmental interrelationships between play, gesture use and spoken language development in children aged 18–31 months. The children completed two tasks: (i) a structured measure of pretend (or ‘symbolic’) play and (ii) a measure of vocabulary knowledge in which children have been shown to gesture. Additionally, their productive spoken language knowledge was measured via parental report. The results indicated that symbolic play is positively associated with children’s gesture use, which in turn is positively associated with spoken language knowledge over and above the influence of age. The tripartite relationship between gesture, play and language development is discussed with reference to current developmental theory.
Social Cognitive and Affective Neuroscience | 2015
Judith Holler; Idil Kokal; Ivan Toni; Peter Hagoort; Spencer D. Kelly
Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture. Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech-gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts.
Psychonomic Bulletin & Review | 2015
Spencer D. Kelly; Meghan L. Healey; Judith Holler
Hand gestures and speech form a single integrated system of meaning during language comprehension, but is gesture processed with speech in a unique fashion? We had subjects watch multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half of the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information contents were congruent, and for the other half, they were incongruent. For all subjects, stimuli in which the gestures and actions were incongruent with the speech produced more errors and longer response times than did stimuli that were congruent, but this effect was less prominent for speech–action stimuli than for speech–gesture stimuli. However, subjects focusing on visual targets were more accurate when processing actions than gestures. These results suggest that although actions may be easier to process than gestures, gestures may be more tightly tied to the processing of accompanying speech.
Information: An International Interdisciplinary Journal | 2011
Spencer D. Kelly; Kelly Byrne; Judith Holler
Theorists of language have argued that co-speech hand gestures are an intentional part of social communication. The present study provides evidence for these claims by showing that speakers adjust their gesture use according to the perceived relevance of the information to their audience. Participants were asked to read about items that were and were not useful in a wilderness survival scenario, under the pretense that they would then explain (on camera) what they had learned to one of two different audiences. For one audience (a group of college students in a dormitory orientation activity), the stakes of successful communication were low; for the other audience (a group of students preparing for a rugged camping trip in the mountains), the stakes were high. In their explanations to the camera, participants in the high stakes condition produced three times as many representational gestures, and spent three times as much time gesturing, as participants in the low stakes condition. This study extends previous research by showing that the anticipated consequences of one's communication, namely the degree to which the information may be useful to the intended recipient, influence speakers' use of gesture.
Psychonomic Bulletin & Review | 2013
Zhenguang G. Cai; Louise Connell; Judith Holler
Much evidence has suggested that people conceive of time as flowing directionally in transverse space (e.g., from left to right for English speakers). However, this phenomenon has never been tested in a fully nonlinguistic paradigm in which neither the stimuli nor the task uses linguistic labels, which raises the possibility that time is directional only when reading/writing direction has been evoked. In the present study, English-speaking participants viewed a video in which an actor sang a note while gesturing and then reproduced the duration of the sung note by pressing a button. Results showed that the perceived duration of the note was increased by a long-distance gesture, relative to a short-distance gesture. This effect was equally strong for gestures moving from left to right and from right to left and was not dependent on gestures depicting movement through space; a weaker version of the effect emerged with static gestures depicting spatial distance. Since both our gesture stimuli and our temporal reproduction task were nonlinguistic, we conclude that the spatial representation of time is nondirectional: movement contributes to, but is not necessary for, the representation of temporal information on a transverse timeline.