Eva Krumhuber
Jacobs University Bremen
Publications
Featured research published by Eva Krumhuber.
Emotion | 2009
Eva Krumhuber; Antony Stephen Reid Manstead
We investigated the value of the Duchenne (D) smile as a spontaneous sign of felt enjoyment. Participants either smiled spontaneously in response to amusing material (spontaneous condition) or were instructed to pose a smile (deliberate condition). Similar amounts of D and non-Duchenne (ND) smiles were observed in these 2 conditions (Experiment 1). When subsets of these smiles were presented to other participants, they generally rated spontaneous and deliberate D and ND smiles differently. Moreover, they distinguished between D smiles of varying intensity within the spontaneous condition (Experiment 2). Such a differentiation was also made when seeing the upper or lower face only (Experiment 3), but was impaired for static compared with dynamic displays (Experiment 4). The predictive value of the D smile in these judgment studies was limited compared with other features such as asymmetry, apex duration, and nonpositive facial actions, and was only significant for ratings of the upper face and static displays. These findings raise doubts about the reliability and validity of the D smile and question the usefulness of facial descriptions in identifying true feelings of enjoyment.
Emotion Review | 2013
Eva Krumhuber; Arvid Kappas; Antony Stephen Reid Manstead
A key feature of facial behavior is its dynamic quality. However, most previous research has been limited to the use of static images of prototypical expressive patterns. This article explores the role of facial dynamics in the perception of emotions, reviewing relevant empirical evidence demonstrating that dynamic information improves coherence in the identification of affect (particularly for degraded and subtle stimuli), leads to higher emotion judgments (i.e., intensity and arousal), and helps to differentiate between genuine and fake expressions. The findings underline that using static expressions not only poses problems of ecological validity, but also limits our understanding of what facial activity does. Implications for future research on facial activity, particularly for social neuroscience and affective computing, are discussed.
International Conference on Computer Vision | 2011
Darren Cosker; Eva Krumhuber; Adrian Hilton
This paper presents the first dynamic 3D FACS data set for facial expression research, containing 10 subjects performing between 19 and 97 different AUs both individually and in combination. In total the corpus contains 519 AU sequences. The peak expression frame of each sequence has been manually FACS coded by certified FACS experts. This provides a ground truth for 3D FACS based AU recognition systems. In order to use this data, we describe the first framework for building dynamic 3D morphable models. This includes a novel Active Appearance Model (AAM) based 3D facial registration and mesh correspondence scheme. The approach overcomes limitations in existing methods that require facial markers or are prone to optical flow drift. We provide the first quantitative assessment of such 3D facial mesh registration techniques and show how our proposed method provides more reliable correspondence.
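As background for the morphable-model framework mentioned in this abstract, the following is a minimal sketch of the standard PCA construction of a 3D morphable model from registered meshes. It is not the paper's pipeline (which adds AAM-based registration and dense correspondence); all names are hypothetical, and the code assumes meshes are already in dense correspondence.

    # Minimal PCA-based morphable model from registered meshes (hypothetical sketch).
    # Assumes every mesh has the same vertex count and ordering (dense correspondence).
    import numpy as np

    def build_morphable_model(meshes, n_components=10):
        """meshes: array of shape (n_meshes, n_vertices, 3)."""
        X = meshes.reshape(len(meshes), -1)           # flatten each mesh to one row
        mean = X.mean(axis=0)                         # mean face
        U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
        basis = Vt[:n_components]                     # principal deformation modes
        return mean, basis

    def synthesize(mean, basis, coeffs):
        """Generate a new face from model coefficients."""
        flat = mean + coeffs @ basis
        return flat.reshape(-1, 3)

    # Example: 20 registered meshes of 5,000 vertices each (random stand-in data).
    meshes = np.random.rand(20, 5000, 3)
    mean, basis = build_morphable_model(meshes, n_components=5)
    face = synthesize(mean, basis, np.array([0.5, -0.2, 0.0, 0.1, 0.0]))

Once such a linear basis exists, each FACS-coded sequence in the corpus can be summarized by its trajectory of model coefficients, which is what makes the dataset useful for AU recognition systems.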
Emotion | 2011
Eva Krumhuber; Klaus R. Scherer
Affect bursts consist of spontaneous and short emotional expressions in which facial, vocal, and gestural components are highly synchronized. Although the vocal characteristics have been examined in several recent studies, the facial modality remains largely unexplored. This study investigated the facial correlates of affect bursts that expressed five different emotions: anger, fear, sadness, joy, and relief. Detailed analysis of 59 facial actions with the Facial Action Coding System revealed a reasonable degree of emotion differentiation for individual action units (AUs). However, less convergence was shown for specific AU combinations for a limited number of prototypes. Moreover, expression of facial actions peaked in a cumulative-sequential fashion with significant differences in their sequential appearance between emotions. When testing for the classification of facial expressions within a dimensional approach, facial actions differed significantly as a function of the valence and arousal level of the five emotions, thereby allowing further distinction between joy and relief. The findings cast doubt on the existence of fixed patterns of facial responses for each emotion, resulting in unique facial prototypes. Rather, the results suggest that each emotion can be portrayed by several different expressions that share multiple facial actions.
Applied Perception in Graphics and Visualization | 2010
Darren Cosker; Eva Krumhuber; Adrian Hilton
In this paper we present the first Facial Action Coding System (FACS) valid model to be based on dynamic 3D scans of human faces for use in graphics and psychological research. The model consists of FACS Action Unit (AU) based parameters and has been independently validated by FACS experts. Using this model, we explore the perceptual differences between linear facial motions -- represented by a linear blend shape approach -- and real facial motions that have been synthesized through the 3D facial model. Through numerical measures and visualizations, we show that this latter type of motion is geometrically nonlinear in terms of its vertices. In experiments, we explore the perceptual benefits of nonlinear motion for different AUs. Our results are insightful for designers of animation systems both in the entertainment industry and in scientific research. They reveal a significant overall benefit to using captured nonlinear geometric vertex motion over linear blend shape motion. However, our findings suggest that not all motions need to be animated nonlinearly. The advantage may depend on the type of facial action being produced and the phase of the movement.
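To make the linear/nonlinear contrast in this abstract concrete, here is a minimal sketch of linear blend shape interpolation: every vertex travels along a straight line from the neutral face to the expression peak, which is exactly the geometric linearity the study compares against captured motion. The data and names are illustrative, not taken from the paper.

    # Linear blend shape interpolation (illustrative sketch, not the paper's code).
    # Each vertex moves on a straight line from neutral to peak as the weight
    # goes from 0 to 1; captured performances deviate from these straight paths.
    import numpy as np

    def blend_shape(neutral, peak, weight):
        """neutral, peak: (n_vertices, 3) arrays; weight in [0, 1]."""
        return (1.0 - weight) * neutral + weight * peak

    neutral = np.zeros((4, 3))                    # toy 4-vertex "face"
    peak = np.array([[0.0, 1.0, 0.0]] * 4)        # peak of one Action Unit
    frames = [blend_shape(neutral, peak, w) for w in np.linspace(0.0, 1.0, 5)]
    # Every intermediate frame lies exactly on the neutral-to-peak line,
    # so vertex trajectories are linear by construction.

Measuring how far captured vertex trajectories depart from such straight-line paths is one simple way to quantify the nonlinearity the paper reports.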
Cognition & Emotion | 2009
Eva Krumhuber; Antony Stephen Reid Manstead
We investigated the effects of smiling on perceptions of positive, neutral and negative verbal statements. Participants viewed computer-generated movies of female characters who made angry, disgusted, happy or neutral statements and then showed either one of two temporal forms of smile (slow vs. fast onset) or a neutral expression. Smiles significantly increased the perceived positivity of the message by making negative statements appear less negative and neutral statements appear more positive. However, these smiles led the character to be seen as less genuine than when she showed a neutral expression. Disgust + smile messages led to higher judged happiness than did anger + smile messages, suggesting that smiles were seen as reflecting humour when combined with disgust statements, but as masking negative affect when combined with anger statements. These findings provide insights into the ways that smiles moderate the impact of verbal statements.
International Conference on Human-Computer Interaction | 2013
Felix Kistler; Elisabeth André; Samuel Mascarenhas; André Silva; Ana Paiva; Nick Degens; Gert Jan Hofstede; Eva Krumhuber; Arvid Kappas; Ruth Aylett
In this paper, we describe a cultural training system based on an interactive storytelling approach and a culturally adaptive agent architecture, for which a user-defined gesture set was created. We analyzed 251 full-body gestures from 22 users to find intuitive gestures for the in-game actions in our system. After the analysis, we integrated the gestures into our application using our framework for full-body gesture recognition. We further integrated a second interaction type, which uses a graphical interface controlled with freehand swiping gestures.
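As a rough illustration of the kind of freehand swipe detection the abstract mentions (the system's actual full-body recognition framework is considerably richer; this sketch and its names are hypothetical), a swipe can be classified from tracked hand positions alone:

    # Toy freehand swipe detector (hypothetical sketch, not the paper's framework).
    def detect_swipe(hand_x, min_distance=0.3):
        """hand_x: hand x-coordinates (meters) over one short time window.
        Returns 'left', 'right', or None if no swipe occurred."""
        displacement = hand_x[-1] - hand_x[0]
        if displacement > min_distance:
            return "right"
        if displacement < -min_distance:
            return "left"
        return None

    # Example: hand moves about 0.4 m to the right across the window.
    print(detect_swipe([0.0, 0.1, 0.25, 0.4]))    # -> "right"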
Proceedings of the 3rd Symposium on Facial Analysis and Animation | 2012
Aleksandra Swiderska; Eva Krumhuber; Arvid Kappas
Masahiro Mori, who introduced the concept of the Uncanny Valley, recommended settling for moderate levels of human likeness in robotic design in order to avoid the eeriness felt upon encountering entities that closely resemble humans [Mori 1970]. However, the pursuit of highly humanlike design continues, nowadays particularly in areas such as computer graphics and animation, inspiring research on what is and what is not perceived as a human being.
Affective Computing and Intelligent Interaction | 2013
Aleksandra Swiderska; Eva Krumhuber; Arvid Kappas
Out-group members are commonly viewed as being less human than in-group members. They are denied certain human characteristics and in turn become associated with machines or automata. Specifically, out-groups are attributed less naturally and uniquely human traits, and they are also seen as being less able to experience complex emotions in comparison to the in-group. Such dissociations have been demonstrated with real human faces but in our study, we aimed to test whether similar effects generalize to their artificial versions. Caucasian participants were presented with images of male Caucasian and Indian faces. Their task was to evaluate to what extent naturally and uniquely human traits, as well as primary and secondary emotions, can be attributed to them. In line with previous research, it was found that positive naturally human traits were attributed to a greater degree to the in-group than to the out-group, applying to both real and artificial faces. Moreover, negative naturally human traits and negative primary emotions were attributed more to the out-group. This indicates a positive bias towards the in-group and subtle out-group derogation. The results extend prior research based on real human faces and show that intergroup processes emerge similarly in response to artificial faces, which may have implications for the fields of computer graphics and animation. That is, even the most realistic face recognized as belonging to an out-group member may convey less humanness than that of an in-group member.
Proceedings of the SSPNET 2nd International Symposium on Facial Analysis and Animation | 2010
Darren Cosker; Eva Krumhuber; Adrian Hilton
The Facial Action Coding System (FACS) [Ekman et al. 2002] has become a popular reference for creating fully controllable facial models that allow the manipulation of single actions, or so-called Action Units (AUs). For example, realistic 3D models based on FACS have been used for investigating the perceptual effects of moving faces, and for character expression mapping in recent movies. However, since none of the facial actions (AUs) in these models have been validated by FACS experts, it is unclear how valid such a model would be in situations where the accurate production of an AU is essential [Krumhuber and Tamarit 2010]. Moreover, previous work has employed motion capture data representing only sparse 3D facial positions, which does not include dense surface deformation detail.