
Publication


Featured research published by Martijn Goudbeek.


Journal of the Acoustical Society of America | 2010

Beyond arousal: valence and potency/control cues in the vocal expression of emotion.

Martijn Goudbeek; Klaus R. Scherer

The important role of arousal in determining vocal parameters in the expression of emotion is well established. There is less evidence for the contribution of emotion dimensions such as valence and potency/control to vocal emotion expression. Here, an acoustic analysis of the newly developed Geneva Multimodal Emotional Portrayals corpus is presented to examine the role of dimensions other than arousal. This corpus contains twelve emotions that systematically vary with respect to valence, arousal, and potency/control. The emotions were portrayed by professional actors coached by a stage director. The extracted acoustic parameters were first compared with those obtained from a similar corpus [Banse and Scherer (1996). J. Pers. Soc. Psychol. 70, 614-636] and shown to largely replicate the earlier findings. Based on a principal component analysis, seven composite scores were calculated and used to determine the relative contribution of the respective vocal parameters to the emotional dimensions arousal, valence, and potency/control. The results show that although arousal dominates for many vocal parameters, it is possible to identify parameters, in particular spectral balance and spectral noise, that are specifically related to valence and potency/control.


I-perception | 2015

Normal-Hearing Listeners’ and Cochlear Implant Users’ Perception of Pitch Cues in Emotional Speech

Steven Gilbers; Christina Fuller; Dicky Gilbers; Mirjam Broersma; Martijn Goudbeek; Rolien Free; Deniz Başkent

In cochlear implants (CIs), acoustic speech cues, especially for pitch, are delivered in a degraded form. This study’s aim is to assess whether due to degraded pitch cues, normal-hearing listeners and CI users employ different perceptual strategies to recognize vocal emotions, and, if so, how these differ. Voice actors were recorded pronouncing a nonce word in four different emotions: anger, sadness, joy, and relief. These recordings’ pitch cues were phonetically analyzed. The recordings were used to test 20 normal-hearing listeners’ and 20 CI users’ emotion recognition. In congruence with previous studies, high-arousal emotions had a higher mean pitch, wider pitch range, and more dominant pitches than low-arousal emotions. Regarding pitch, speakers did not differentiate emotions based on valence but on arousal. Normal-hearing listeners outperformed CI users in emotion recognition, even when presented with CI simulated stimuli. However, only normal-hearing listeners recognized one particular actor’s emotions worse than the other actors’. The groups behaved differently when presented with similar input, showing that they had to employ differing strategies. Considering the respective speaker’s deviating pronunciation, it appears that for normal-hearing listeners, mean pitch is a more salient cue than pitch range, whereas CI users are biased toward pitch range cues.


Cognitive Science | 2013

The Effect of Scene Variation on the Redundant Use of Color in Definite Reference

Ruud Koolen; Martijn Goudbeek; Emiel Krahmer

This study investigates to what extent the amount of variation in a visual scene causes speakers to mention the attribute color in their definite target descriptions, focusing on scenes in which this attribute is not needed for identification of the target. The results of our three experiments show that speakers are more likely to redundantly include a color attribute when the scene variation is high as compared with when this variation is low (even if this leads to overspecified descriptions). We argue that these findings are problematic for existing algorithms that aim to automatically generate psychologically realistic target descriptions, such as the Incremental Algorithm, as these algorithms make use of a fixed preference order per domain and do not take visual scene variation into account.
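The Incremental Algorithm criticized above (Dale & Reiter, 1995) selects attributes one by one from a fixed, domain-wide preference order, including an attribute only if it rules out at least one remaining distractor. A minimal Python sketch illustrates why it cannot produce the variation-driven redundant color use reported in this study; the scene, attribute names, and preference order are illustrative, not taken from the paper:

```python
def incremental_algorithm(target, distractors, preference_order):
    """Select attributes for a distinguishing description of `target`,
    following a fixed preference order (Dale & Reiter, 1995)."""
    description = {}
    remaining = list(distractors)
    for attr in preference_order:
        value = target.get(attr)
        if value is None:
            continue
        # Include the attribute only if it rules out some distractor.
        if any(d.get(attr) != value for d in remaining):
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:  # target uniquely identified
            break
    return description

# Illustrative scene: a small red chair among two distractor objects.
target = {"type": "chair", "color": "red", "size": "small"}
distractors = [
    {"type": "table", "color": "red", "size": "large"},
    {"type": "chair", "color": "blue", "size": "small"},
]
# Hypothetical preference order: type > color > size.
print(incremental_algorithm(target, distractors, ["type", "color", "size"]))
# → {'type': 'chair', 'color': 'red'}
```

Because an attribute is added only when it still has discriminatory power, the sketch never mentions color in a scene where type alone identifies the target, regardless of how much the scene varies; this is the fixed-preference behavior the study argues is psychologically unrealistic.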


Topics in Cognitive Science | 2012

Alignment in Interactive Reference Production: Content Planning, Modifier Ordering, and Referential Overspecification

Martijn Goudbeek; Emiel Krahmer

Psycholinguistic studies often look at the production of referring expressions in interactive settings, but so far few referring expression generation algorithms have been developed that are sensitive to earlier references in an interaction. Rather, such algorithms tend to rely on domain-dependent preferences for both content selection and linguistic realization. We present three experiments showing that humans may opt for dispreferred attributes and dispreferred modifier orderings when these were primed in a preceding interaction (without speakers being consciously aware of this). In addition, we show that speakers are more likely to produce overspecified references, including dispreferred attributes (although minimal descriptions with preferred attributes would suffice), when these were similarly primed.


Language, cognition and neuroscience | 2016

Can you handle this? The impact of object affordances on how co-speech gestures are produced

Ingrid Masson-Carro; Martijn Goudbeek; Emiel Krahmer

Hand gestures are tightly coupled with speech and with action. Hence, recent accounts have emphasised the idea that simulations of spatio-motoric imagery underlie the production of co-speech gestures. In this study, we suggest that action simulations directly influence the iconic strategies used by speakers to translate aspects of their mental representations into gesture. Using a classic referential paradigm, we investigate how speakers respond gesturally to the affordances of objects, by comparing the effects of describing objects that afford action performance (such as tools) and those that do not, on gesture production. Our results suggest that affordances play a key role in determining the amount of representational (but not non-representational) gestures produced by speakers, and the techniques chosen to depict such objects. To our knowledge, this is the first study to systematically show a connection between object characteristics and representation techniques in spontaneous gesture production during the depiction of static referents.


affective computing and intelligent interaction | 2009

Emotion attribution to basic parametric static and dynamic stimuli

Valentijn Visch; Martijn Goudbeek

This research investigates the effect of basic visual stimuli on the attribution of basic emotions by the viewer. In an empirical study (N = 33) we used two groups of visually minimal expressive stimuli: dynamic and static. The dynamic stimuli consisted of an animated circle moving according to a structured set of movement parameters derived from the emotion expression literature. The parameters are direction, expansion, velocity variation, fluency, and corner bending. The static stimuli consisted of the minimal visual form of a smiley. The varied parameters were mouth openness, mouth curvature, and eye rotation. The findings describing the effect of the parameters on attributed emotions are presented. This paper shows how specific viewer affect attribution can be included in human-machine interaction using minimal visual material.


IEEE Transactions on Computational Intelligence and Ai in Games | 2015

Past Our Prime: A Study of Age and Play Style Development in Battlefield 3

Shoshannah Tekofsky; Pieter Spronck; Martijn Goudbeek; Aske Plaat; H. Jaap van den Herik

In recent decades, video games have come to appeal to people of all ages. The effect of age on how people play games is not fully understood. In this paper, we delve into the question of how age relates to an individual's play style. "Play style" is defined as any (set of) patterns in game actions performed by a player. Based on data from 10,416 Battlefield 3 players, we found that age strongly correlates with how people start out playing a game (initial play style), and with how they change their play style over time (play style development). Our data show three major trends: 1) correlations between age and initial play style peak around the age of 20; 2) performance decreases with age; and 3) speed of play decreases with age. The relationship between age and play style may be explained by the neurocognitive effects of aging: as people grow older, their cognitive performance decays, their personalities shift to a more conscientious style, and their gaming motivations become less achievement-oriented.


Cognition & Emotion | 2014

Robust anger: Recognition of deteriorated dynamic bodily emotion expressions

Valentijn T. Visch; Martijn Goudbeek; Marcello Mortillaro

In two studies, the robustness of anger recognition of bodily expressions is tested. In the first study, video recordings of an actor expressing four distinct emotions (anger, despair, fear, and joy) were structurally manipulated as to image impairment and body segmentation. The results show that anger recognition is more robust than other emotions to image impairment and to body segmentation. Moreover, the study showed that arms expressing anger were more robustly recognised than arms expressing other emotions. Study 2 added face blurring as a variable to the bodily expressions and showed that it decreased accurate emotion recognition—but more for recognition of joy and despair than for anger and fear. In sum, the paper indicates the robustness of anger recognition in multileveled deteriorated bodily expressions.


Journal of Nonverbal Behavior | 2017

Children’s Nonverbal Displays of Winning and Losing: Effects of Social and Cultural Contexts on Smiles

Phoebe H. C. Mui; Martijn Goudbeek; Marc Swerts; Arpine Hovasapian

We examined the effects of social and cultural contexts on smiles displayed by children during gameplay. Eight-year-old Dutch and Chinese children either played a game alone or teamed up to play in pairs. Activation and intensity of facial muscles corresponding to Action Unit (AU) 6 and AU 12 were coded according to Facial Action Coding System. Co-occurrence of activation of AU 6 and AU 12, suggesting the presence of a Duchenne smile, was more frequent among children who teamed up than among children who played alone. Analyses of the intensity of smiles revealed an interaction between social and cultural contexts. Whereas smiles, both Duchenne and non-Duchenne, displayed by Chinese children who teamed up were more intense than those displayed by Chinese children who played alone, the effect of sociality on smile intensity was not observed for Dutch children. These findings suggest that the production of smiles by children in a competitive context is susceptible to both social and cultural factors.


international conference on natural language generation | 2016

The Multilingual Affective Soccer Corpus (MASC): Compiling a biased parallel corpus on soccer reportage in English, German and Dutch

Nadine Braun; Martijn Goudbeek; Emiel Krahmer

The emergence of the internet has made it possible to collect not only large but also highly specific text corpora for linguistic research. This paper introduces the Multilingual Affective Soccer Corpus (MASC), a collection of soccer match reports in English, German and Dutch. Parallel texts are collected manually from the involved soccer clubs' homepages with the aim of investigating the role of affect in sports reportage in different languages and cultures, taking into account the different perspectives of the teams and the possible outcomes of a match. The analyzed aspects of emotional language will open up new approaches for biased automatic generation of texts.

Collaboration


An overview of Martijn Goudbeek's collaborations.

Top Co-Authors

Caixia Liu

Eindhoven University of Technology
