Publications


Featured research published by Marcello Mortillaro.


Emotion | 2012

Introducing the Geneva Multimodal expression corpus for experimental research on emotion perception.

Tanja Bänziger; Marcello Mortillaro; Klaus R. Scherer

Research on the perception of emotional expressions in faces and voices is exploding in psychology, the neurosciences, and affective computing. This article provides an overview of some of the major emotion expression (EE) corpora currently available for empirical research and introduces a new, dynamic, multimodal corpus of emotion expressions, the Geneva Multimodal Emotion Portrayals Core Set (GEMEP-CS). The design features of the corpus are outlined and justified, and detailed validation data for the core set selection are presented and discussed. Finally, an associated database with microcoded facial, vocal, and body action elements, as well as observer ratings, is introduced.


Emotion | 2012

Emotion Expression in Body Action and Posture

Nele Dael; Marcello Mortillaro; Klaus R. Scherer

Emotion communication research strongly focuses on the face and voice as expressive modalities, leaving the rest of the body relatively understudied. Contrary to the early assumption that body movement only indicates emotional intensity, recent studies have shown that body movement and posture also convey emotion-specific information. However, a deeper understanding of the underlying mechanisms is hampered by a lack of production studies informed by a theoretical framework. In this research we adopted the Body Action and Posture (BAP) coding system to examine the types and patterns of body movement that are employed by 10 professional actors to portray a set of 12 emotions. We investigated to what extent these expression patterns support explicit or implicit predictions from basic emotion theory, bidimensional theory, and componential appraisal theory. The overall results showed partial support for the different theoretical approaches. They revealed that several patterns of body movement systematically occur in portrayals of specific emotions, allowing emotion differentiation. Although a few emotions were prototypically expressed by one particular pattern, most emotions were variably expressed by multiple patterns, many of which can be explained as reflecting functional components of emotion such as modes of appraisal and action readiness. It is concluded that further work in this largely underdeveloped area should be guided by an appropriate theoretical framework to allow a more systematic design of experiments and clear hypothesis testing.
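A minimal sketch of the kind of pattern-by-emotion analysis described above, in Python. The file name and column names (emotion, pattern, one row per coded body action) are assumptions for illustration, not the authors' actual BAP coding output.

```python
# Tabulate how often each body-movement pattern occurs per portrayed emotion,
# and test whether pattern use depends on the emotion being portrayed.
import pandas as pd
from scipy.stats import chi2_contingency

codes = pd.read_csv("bap_codes.csv")          # hypothetical: one row per coded body action

# Contingency table: rows = emotions, columns = BAP movement patterns
table = pd.crosstab(codes["emotion"], codes["pattern"])

# Global chi-square test: does the distribution of patterns differ across emotions?
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.4f}")

# Proportion of each emotion's portrayals that used each pattern
print(table.div(table.sum(axis=1), axis=0).round(2))
```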


IEEE Transactions on Affective Computing | 2011

Toward a Minimal Representation of Affective Gestures

Donald Glowinski; Nele Dael; Antonio Camurri; Gualtiero Volpe; Marcello Mortillaro; Klaus R. Scherer

This paper presents a framework for analysis of affective behavior starting with a reduced amount of visual information related to human upper-body movements. The main goal is to individuate a minimal representation of emotional displays based on nonverbal gesture features. The GEMEP (Geneva multimodal emotion portrayals) corpus was used to validate this framework. Twelve emotions expressed by 10 actors form the selected data set of emotion portrayals. Visual tracking of trajectories of head and hands was performed from a frontal and a lateral view. Postural/shape and dynamic expressive gesture features were identified and analyzed. A feature reduction procedure was carried out, resulting in a 4D model of emotion expression that effectively classified/grouped emotions according to their valence (positive, negative) and arousal (high, low). These results show that emotionally relevant information can be detected and measured from the dynamic qualities of gesture. The framework was implemented as software modules (plug-ins) extending the EyesWeb XMI Expressive Gesture Processing Library and is going to be used in user-centric, networked media applications, including future mobile devices, characterized by low computational resources and limited sensor systems.
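A sketch of the idea behind a minimal gesture representation: reduce many postural and dynamic gesture features to a 4-D space and check how well that space separates portrayals by arousal. The feature matrix and labels below are random placeholders, not the EyesWeb pipeline or GEMEP features used in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 25))              # 120 portrayals x 25 gesture features (dummy)
arousal = rng.integers(0, 2, size=120)      # 0 = low, 1 = high (dummy labels)

X4 = PCA(n_components=4).fit_transform(X)   # reduced 4-D representation

# How well does the reduced space recover the arousal split?
acc = cross_val_score(LinearDiscriminantAnalysis(), X4, arousal, cv=5).mean()
print(f"cross-validated arousal accuracy in 4-D space: {acc:.2f}")
```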


Frontiers in Psychology | 2013

On the Acoustics of Emotion in Audio: What Speech, Music, and Sound have in Common

Felix Weninger; Florian Eyben; Björn W. Schuller; Marcello Mortillaro; Klaus R. Scherer

Without doubt, there is emotional information in almost any kind of sound received by humans every day: be it the affective state of a person transmitted by means of speech; the emotion intended by a composer while writing a musical piece, or conveyed by a musician while performing it; or the affective state connected to an acoustic event occurring in the environment, in the soundtrack of a movie, or in a radio play. In the field of affective computing, there is currently some loosely connected research concerning each of these phenomena, but a holistic computational model of affect in sound is still lacking. In turn, for tomorrow’s pervasive technical systems, including affective companions and robots, it is expected to be highly beneficial to understand the affective dimensions of “the sound that something makes,” in order to evaluate the system’s auditory environment and its own audio output. This article aims at a first step toward a holistic computational model: starting from standard acoustic feature extraction schemes in the domains of speech, music, and sound analysis, we interpret the worth of individual features across these three domains, considering four audio databases with observer annotations in the arousal and valence dimensions. In the results, we find that by selection of appropriate descriptors, cross-domain arousal and valence regression is feasible, achieving significant correlations with the observer annotations of up to 0.78 for arousal (training on sound and testing on enacted speech) and 0.60 for valence (training on enacted speech and testing on music). The high degree of cross-domain consistency in encoding the two main dimensions of affect may be attributable to the co-evolution of speech and music from multimodal affect bursts, including the integration of nature sounds for expressive effects.
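A sketch of a cross-domain regression experiment of the kind reported here: train an arousal regressor on one audio domain and test it on another, scoring by Pearson correlation with observer annotations. The feature matrices below are random placeholders; in practice they would come from a standard acoustic feature extractor.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X_sound, y_sound = rng.normal(size=(300, 60)), rng.normal(size=300)    # training domain (e.g., sound events)
X_speech, y_speech = rng.normal(size=(200, 60)), rng.normal(size=200)  # test domain (e.g., enacted speech)

model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
model.fit(X_sound, y_sound)                 # train on one domain
pred = model.predict(X_speech)              # test on the other

r, p = pearsonr(pred, y_speech)
print(f"cross-domain arousal correlation: r={r:.2f} (p={p:.3f})")
```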


acm multimedia | 2010

The voice of personality: mapping nonverbal vocal behavior into trait attributions

Gelareh Mohammadi; Alessandro Vinciarelli; Marcello Mortillaro

This paper reports preliminary experiments on automatic attribution of personality traits based on nonverbal vocal behavioral cues. In particular, the work shows how prosodic features can be used to predict, with an accuracy of up to 75% depending on the trait, the personality assessments performed by human judges on a collection of 640 speech samples. The assessments are based on a short version of the Big Five Inventory, one of the most widely used questionnaires for personality assessment. The judges did not understand the language spoken in the speech samples, so the influence of the verbal content is limited. To the best of our knowledge, this is the first work aimed at automatically inferring traits attributed by judges rather than traits self-reported by subjects.
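A minimal sketch of mapping prosodic features to judge-attributed traits: binarize each Big Five score at the median and report cross-validated classification accuracy per trait. All feature and trait values are simulated; the actual study used prosodic cues extracted from the 640 speech clips.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(640, 12))                         # 640 clips x 12 prosodic features (dummy)
traits = {t: rng.normal(size=640) for t in
          ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]}

for name, scores in traits.items():
    y = (scores > np.median(scores)).astype(int)       # high vs. low attributed trait
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10).mean()
    print(f"{name:17s} accuracy: {acc:.2f}")
```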


Social Psychological and Personality Science | 2011

Subtly Different Positive Emotions Can Be Distinguished by Their Facial Expressions

Marcello Mortillaro; Marc Mehu; Klaus R. Scherer

Positive emotions are crucial to social relationships and social interaction. Although smiling is a frequently studied facial action, investigations of positive emotional expressions are underrepresented in the literature. This may be partly because of the assumption that all positive emotions share the smile as a common signal but lack specific facial configurations. The present study investigated prototypical expressions of four positive emotions—interest, pride, pleasure, and joy. The Facial Action Coding System was used to microcode facial expression of representative samples of these emotions taken from the Geneva Multimodal Emotion Portrayal corpus. The data showed that the frequency and duration of several action units differed between emotions, indicating that actors did not use the same pattern of expression to encode them. The authors argue that an appraisal perspective is suitable to describe how subtly differentiated positive emotional states differ in their prototypical facial expressions.
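A short sketch of the descriptive comparison described above: for each of the four positive emotions, how often a given facial action unit (AU) occurs and how long it lasts. The CSV file and column names are assumptions about a FACS-coded event table, not the authors' actual data.

```python
import pandas as pd

events = pd.read_csv("facs_events.csv")    # hypothetical: one row per AU occurrence: emotion, au, duration_s
summary = (events
           .groupby(["emotion", "au"])
           .agg(count=("duration_s", "size"),
                mean_duration_s=("duration_s", "mean"))
           .round(2))
print(summary)    # e.g., does AU12 occur more often, or last longer, in pleasure than in pride?
```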


Emotion Review | 2013

Understanding the Mechanisms Underlying the Production of Facial Expression of Emotion: A Componential Perspective

Klaus R. Scherer; Marcello Mortillaro; Marc Mehu

We highlight the need to focus on the underlying determinants and production mechanisms to fully understand the nature of facial expression of emotion and to settle the theoretical debate about the meaning of motor expression. Although emotion theorists have generally remained rather vague about the details of the process, this has been a central concern of componential appraisal theories. We describe the fundamental assumptions and predictions of this approach regarding the patterning of facial expressions for different emotions. We also review recent evidence for the assumption that specific facial muscle movements may be reliable symptoms of certain appraisal outcomes and that facial expressions unfold over time on the basis of a sequence of appraisal check results.


International Journal of Synthetic Emotions | 2012

Advocating a Componential Appraisal Model to Guide Emotion Recognition

Marcello Mortillaro; Ben Meuleman; Klaus R. Scherer

Most models of automatic emotion recognition use a discrete perspective and a black-box approach, i.e., they output an emotion label chosen from a limited pool of candidate terms, on the basis of purely statistical methods. Although these models are successful in emotion classification, a number of practical and theoretical drawbacks limit the range of possible applications. In this paper, the authors suggest the adoption of an appraisal perspective in modeling emotion recognition. The authors propose to use appraisals as an intermediate layer between expressive features input and emotion labeling output. The model would then be made of two parts: first, expressive features would be used to estimate appraisals; second, resulting appraisals would be used to predict an emotion label. While the second part of the model has already been the object of several studies, the first is unexplored. The authors argue that this model should be built on the basis of both theoretical predictions and empirical results about the link between specific appraisals and expressive features. For this purpose, the authors suggest using the component process model of emotion, which includes detailed predictions of efferent effects of appraisals on facial expression, voice, and body movements.
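A minimal sketch of the proposed two-layer architecture: expressive features are first mapped to continuous appraisal estimates, and those estimates are then mapped to an emotion label. The appraisal dimensions and all data below are placeholders, and generic regressors stand in for the theory-informed mappings the paper calls for.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 30))           # expressive features (face/voice/body), dummy
appraisals = rng.normal(size=(500, 4))   # e.g., novelty, pleasantness, control, norm compatibility (dummy)
labels = rng.integers(0, 5, size=500)    # emotion categories, dummy

stage1 = MultiOutputRegressor(Ridge()).fit(X, appraisals)             # features -> appraisals
stage2 = LogisticRegression(max_iter=1000).fit(appraisals, labels)    # appraisals -> emotion label

# At prediction time the intermediate appraisal layer stays inspectable
est_appraisals = stage1.predict(X[:5])
print(est_appraisals.round(2))
print(stage2.predict(est_appraisals))
```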


Emotion | 2012

Reliable Facial Muscle Activation Enhances Recognizability and Credibility of Emotional Expression

Marc Mehu; Marcello Mortillaro; Tanja Bänziger; Klaus R. Scherer

We tested Ekman's (2003) suggestion that movements of a small number of reliable facial muscles are particularly trustworthy cues to experienced emotion because they tend to be difficult to produce voluntarily. On the basis of theoretical predictions, we identified two subsets of facial action units (AUs): reliable AUs and versatile AUs. A survey on the controllability of facial AUs confirmed that reliable AUs indeed seem more difficult to control than versatile AUs, although the distinction between the two sets of AUs should be understood as a difference in degree of controllability rather than a discrete categorization. Professional actors enacted a series of emotional states using method acting techniques, and their facial expressions were rated by independent judges. The effect of the two subsets of AUs (reliable AUs and versatile AUs) on identification of the emotion conveyed, its perceived authenticity, and perceived intensity was investigated. Activation of the reliable AUs had a stronger effect than that of versatile AUs on the identification, perceived authenticity, and perceived intensity of the emotion expressed. We found little evidence, however, for specific links between individual AUs and particular emotion categories. We conclude that reliable AUs may indeed convey trustworthy information about emotional processes but that most of these AUs are likely to be shared by several emotions rather than providing information about specific emotions. This study also suggests that the issue of reliable facial muscles may generalize beyond the Duchenne smile.
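A sketch of the core comparison: does activation of "reliable" AUs predict correct emotion identification more strongly than activation of "versatile" AUs? The portrayal-level table and its columns are assumptions, not the study's data, and a plain logistic regression stands in for the analyses reported in the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical portrayal-level table: counts of activated reliable and versatile AUs,
# plus whether judges identified the intended emotion (0/1).
df = pd.read_csv("portrayal_ratings.csv")
model = smf.logit("identified ~ reliable_au_count + versatile_au_count", data=df).fit()
print(model.summary())    # compare the two coefficients
```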


affective computing and intelligent interaction | 2005

A multimodal database as a background for emotional synthesis, recognition and training in e-learning systems

Luigi Anolli; Fabrizia Mantovani; Marcello Mortillaro; Antonietta Vescovo; A Agliati; Linda Confalonieri; Olivia Realdon; Valentino Zurloni; Alessandro Sacchi

This paper presents a multimodal database developed within the EU-funded project MYSELF. The project aims at developing an e-learning platform endowed with affective computing capabilities for the training of relational skills through interactive simulations. The database includes data coming from 34 participants and concerning physiological parameters, vocal nonverbal features, facial expression and posture. Ten different emotions were considered (anger, joy, sadness, fear, contempt, shame, guilt, pride, frustration and boredom), ranging from primary to self-conscious emotions of particular relevance in learning processes and interpersonal relationships. Preliminary results and analyses are presented, together with directions for future work.

Collaboration


Dive into Marcello Mortillaro's collaborations.

Top Co-Authors


Nele Dael

University of Lausanne
