Publication


Featured research published by David Peeters.


Journal of Experimental Psychology: Learning, Memory, and Cognition | 2014

Asymmetrical switch costs in bilingual language production induced by reading words

David Peeters; Elin Runnqvist; Daisy Bertrand; Jonathan Grainger

We examined language-switching effects in French-English bilinguals using a paradigm where pictures are always named in the same language (either French or English) within a block of trials, and on each trial, the picture is preceded by a printed word from the same language or from the other language. Participants had to either make a language decision on the word or categorize it as an animal name or not. Picture-naming latencies in French (Language 1 [L1]) were slower when pictures were preceded by an English word than by a French word, independently of the task performed on the word. There were no language-switching effects when pictures were named in English (L2). This pattern replicates asymmetrical switch costs found with the cued picture-naming paradigm and shows that the asymmetrical pattern can be obtained (a) in the absence of artificial (nonlinguistic) language cues, (b) when the switch involves a shift from comprehension in one language to production in another, and (c) when the naming language is blocked (univalent response). We concluded that language switch costs in bilinguals cannot be reduced to effects driven by task control or response-selection mechanisms.
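For readers who want to see how the switch-cost asymmetry in such a design is typically quantified, the sketch below computes, for each naming language, the mean latency difference between switch trials and repeat trials. It is a minimal illustration under stated assumptions, not the authors' analysis; the data file and column names are hypothetical.

```python
# Minimal sketch (hypothetical data file and column names) of computing the
# switch-cost asymmetry from raw picture-naming latencies: for each naming
# language, mean latency on switch trials (preceding printed word in the
# other language) minus repeat trials (preceding word in the same language).
import pandas as pd

rt = pd.read_csv("naming_latencies.csv")  # columns: naming_lang, prime_lang, rt_ms

# A trial counts as a "switch" when the printed word's language differs
# from the naming language of the block.
rt["switch"] = rt["naming_lang"] != rt["prime_lang"]

costs = (rt.groupby(["naming_lang", "switch"])["rt_ms"].mean()
           .unstack("switch"))
costs["switch_cost_ms"] = costs[True] - costs[False]
print(costs["switch_cost_ms"])  # the asymmetry: a larger cost is expected for L1
```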


Journal of Cognitive Neuroscience | 2015

Electrophysiological and kinematic correlates of communicative intent in the planning and production of pointing gestures and speech

David Peeters; Mingyuan Chu; Judith Holler; Peter Hagoort

In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction.


Behavior Research Methods | 2018

The combined use of virtual reality and EEG to study language processing in naturalistic environments

Johanne Tromp; David Peeters; Antje S. Meyer; Peter Hagoort

When we comprehend language, we often do this in rich settings where we can use many cues to understand what someone is saying. However, it has traditionally been difficult to design experiments with rich three-dimensional contexts that resemble our everyday environments, while maintaining control over the linguistic and nonlinguistic information that is available. Here we test the validity of combining electroencephalography (EEG) and virtual reality (VR) to overcome this problem. We recorded electrophysiological brain activity during language processing in a well-controlled three-dimensional virtual audiovisual environment. Participants were immersed in a virtual restaurant while wearing EEG equipment. In the restaurant, participants encountered virtual restaurant guests. Each guest was seated at a separate table with an object on it (e.g., a plate with salmon). The restaurant guest would then produce a sentence (e.g., “I just ordered this salmon.”). The noun in the spoken sentence could either match (“salmon”) or mismatch (“pasta”) the object on the table, creating a situation in which the auditory information was either appropriate or inappropriate in the visual context. We observed a reliable N400 effect as a consequence of the mismatch. This finding validates the combined use of VR and EEG as a tool to study the neurophysiological mechanisms of everyday language comprehension in rich, ecologically valid settings.
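For readers unfamiliar with how an N400 mismatch effect of this kind is conventionally quantified, the sketch below uses the open-source MNE-Python library. It is a minimal illustration, not the authors' actual pipeline; the file name, trigger codes, and the channel and time-window choices are hypothetical placeholders.

```python
# Minimal sketch of a conventional N400 analysis with MNE-Python.
# Not the authors' pipeline; the file name, trigger codes, and the
# channel/window choices below are hypothetical placeholders.
import mne

raw = mne.io.read_raw_fif("vr_restaurant_raw.fif", preload=True)  # hypothetical recording
events = mne.find_events(raw)
event_id = {"match": 1, "mismatch": 2}  # assumed trigger codes at noun onset

# Epoch from 200 ms before to 1 s after noun onset, baseline-corrected
# on the prestimulus interval.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=1.0,
                    baseline=(None, 0), preload=True)

def mean_amp(evoked, ch="Pz", tmin=0.3, tmax=0.5):
    """Mean amplitude (in volts) in the classic 300-500 ms N400 window."""
    return evoked.copy().pick(ch).crop(tmin, tmax).data.mean()

# The N400 effect is the mismatch-minus-match difference in that window.
effect = mean_amp(epochs["mismatch"].average()) - mean_amp(epochs["match"].average())
print(f"N400 mismatch effect at Pz: {effect * 1e6:.2f} microvolts")
```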


Neuropsychologia | 2017

Linking language to the visual world: Neural correlates of comprehending verbal reference to objects through pointing and visual cues

David Peeters; Tineke M. Snijders; Peter Hagoort

In everyday communication speakers often refer in speech and/or gesture to objects in their immediate environment, thereby shifting their addressee's attention to an intended referent. The neurobiological infrastructure involved in the comprehension of such basic multimodal communicative acts remains unclear. In an event-related fMRI study, we presented participants with pictures of a speaker and two objects while they concurrently listened to her speech. In each picture, one of the objects was singled out, either through the speaker's index-finger pointing gesture or through a visual cue that made the object perceptually more salient in the absence of gesture. A mismatch (compared to a match) between speech and the object singled out by the speaker's pointing gesture led to enhanced activation in left IFG and bilateral pMTG, showing the importance of these areas in conceptual matching between speech and referent. Moreover, a match (compared to a mismatch) between speech and the object made salient through a visual cue led to enhanced activation in the mentalizing system, arguably reflecting an attempt to converge on a jointly attended referent in the absence of pointing. These findings shed new light on the neurobiological underpinnings of the core communicative process of comprehending a speaker's multimodal referential act and stress the power of pointing as an important natural device to link speech to objects.

Highlights:
- We investigate the comprehension of everyday object reference in speech and gesture.
- A pointing gesture-induced speech-object mismatch elicited LIFG and pMTG activation.
- The mentalizing system was involved in comprehending speech without pointing.
- The findings extend our knowledge of comprehending everyday multimodal communication.


Behavior Research Methods | 2018

Language-driven anticipatory eye movements in virtual reality

Nicole Eichert; David Peeters; Peter Hagoort

Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eye tracking in rich and multimodal 3-D virtual environments.
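The standard dependent measure in such visual-world studies is the proportion of gaze samples on the target object in the anticipatory window between verb onset and noun onset. The sketch below shows one generic way to compute it; the CSV file, its columns, and the window boundaries are hypothetical placeholders, not the authors' materials.

```python
# Minimal sketch of the standard visual-world analysis: the proportion of
# gaze samples on the target object during the anticipatory window between
# verb onset and noun onset. The data file, its columns, and the assumed
# 0-1000 ms window are hypothetical placeholders.
import pandas as pd

gaze = pd.read_csv("vr_gaze_samples.csv")  # columns: trial, time_ms, roi

# Keep samples in the assumed anticipatory window after verb onset.
window = gaze[(gaze["time_ms"] >= 0) & (gaze["time_ms"] < 1000)]

# Per-trial proportion of samples whose region of interest is the target,
# averaged across trials; above-chance values indicate anticipation.
per_trial = window.groupby("trial")["roi"].apply(lambda r: (r == "target").mean())
print(f"Mean anticipatory target-fixation proportion: {per_trial.mean():.3f}")
```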


bioRxiv | 2018

Do we predict upcoming speech content in naturalistic environments?

Evelien Heyselaar; David Peeters; Peter Hagoort

The ability to predict upcoming actions is a characteristic hallmark of cognition and therefore not surprisingly a central topic in cognitive science. It remains unclear, however, whether the predictive behaviour commonly observed in strictly controlled lab environments generalizes to rich, everyday settings. In four virtual reality experiments, we tested whether a well-established marker of linguistic prediction (i.e. anticipatory eye movements as observed in the visual world paradigm) replicated when increasing the naturalness of the paradigm by means of i) immersing participants in naturalistic everyday scenes, ii) increasing the number of distractor objects present, iii) manipulating the location of referents in central versus peripheral vision, and iv) modifying the proportion of predictable noun-referents in the experiment. Robust anticipatory eye movements were observed, even in the presence of 10 objects (hereby testing working memory) and when only 25% of all sentences contained a visually present referent (hereby testing error-based learning). The anticipatory effect disappeared, however, when referents were placed in peripheral vision. Together, our findings suggest that working memory may play an important role in predictive processing in everyday communication, but only in contexts where upcoming referents have been explicitly attended to prior to encountering the spoken referential act. Methodologically, our study confirms that ecological validity and experimental control may go hand in hand in future studies of human predictive behaviour.


Behavior Research Methods | 2018

A standardized set of 3-D objects for virtual reality research and applications

David Peeters

The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.
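As an aside for readers planning their own norming studies: name agreement of the kind reported here is conventionally summarized by the percentage of participants producing the modal name and by the information-theoretic H statistic. The sketch below computes both for toy data; it is a generic illustration, not code or data from the article.

```python
# Generic sketch (not from the article) of two conventional name-agreement
# measures for a normed object: the percentage of participants giving the
# modal name, and the H statistic H = sum_i p_i * log2(1 / p_i), where p_i
# is the proportion of participants producing name i (H = 0 means perfect
# agreement; larger H means more diverse naming responses).
import math
from collections import Counter

def name_agreement(responses):
    counts = Counter(responses)
    total = len(responses)
    modal_pct = 100 * counts.most_common(1)[0][1] / total
    h = sum((n / total) * math.log2(total / n) for n in counts.values())
    return modal_pct, h

# Toy data: 18 of 20 participants call the object "apple", 2 say "fruit".
modal_pct, h = name_agreement(["apple"] * 18 + ["fruit"] * 2)
print(f"Name agreement: {modal_pct:.0f}%, H = {h:.2f} bits")
```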


Frontiers for Young Minds | 2014

The scientific significance of sleep-talking

David Peeters; Martin Dresler

WHAT IS SLEEP TALKING?

Sleep talking (or somniloquy) can be considered part of a larger family of "sleep utterances," such as mumbling, laughing, groaning, and whistling during sleep. The ancient Greek philosopher Heraclitus of Ephesus already observed someone sleep talking about 2,500 years ago, so it is not a very recent discovery. It happens at all ages (provided that one is capable of speaking!) and may occur during all parts of the night. Sleep talking is said to be more common in children than in adults. However, it might also be the case that sleeping kids are simply more often overheard (for instance, by their parents) than adults. Sometimes, sleep talking is placed among involuntary behaviors that may happen during sleep (called "parasomnias"), such as sleepwalking, teeth grinding, and even types of sleep behavior disorders in which patients may injure themselves or their bed partner when involuntarily making dangerous movements during sleep. However, sleep talking is often very innocent and generally does not require any treatment. An exception is the sleep talking that may start to occur after a traumatic experience, as in the case of soldiers who fought in a war.


Journal of Memory and Language | 2013

The representation and processing of identical cognates by late bilinguals: RT and ERP effects

David Peeters; Ton Dijkstra; Jonathan Grainger


Cognition | 2015

Electrophysiological evidence for the role of shared space in online comprehension of spatial demonstratives

David Peeters; Peter Hagoort

Collaboration


Dive into David Peeters's collaborations.

Top Co-Authors

Ton Dijkstra

Radboud University Nijmegen

Daisy Bertrand

Aix-Marseille University

Elin Runnqvist

Aix-Marseille University
