
Publication


Featured research published by Lori Malatesta.


Artificial Intelligence Applications and Innovations | 2007

Multimodal emotion recognition from expressive faces, body gestures and speech

George Caridakis; Ginevra Castellano; Loic Kessous; Amaryllis Raouzaiou; Lori Malatesta; Stylianos Asteriadis; Kostas Karpouzis

In this paper we present a multimodal approach to the recognition of eight emotions that integrates information from facial expressions, body movement and gestures, and speech. We trained and tested a model with a Bayesian classifier, using a multimodal corpus with eight emotions and ten subjects. First, individual classifiers were trained for each modality; then the data were fused at the feature level and at the decision level. Fusing the multimodal data considerably increased the recognition rates compared with the unimodal systems: the multimodal approach gave an improvement of more than 10% with respect to the most successful unimodal system. Furthermore, fusion performed at the feature level showed better results than fusion performed at the decision level.
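A minimal sketch of the two fusion strategies contrasted in the abstract, using a Gaussian naive Bayes classifier from scikit-learn. The feature arrays below are random placeholders standing in for the face, gesture and speech feature vectors; they are assumptions for illustration, not the corpus used in the paper.

```python
# Sketch: feature-level vs decision-level fusion with a naive Bayes classifier.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n = 200
face = rng.normal(size=(n, 30))      # placeholder facial-expression features
gesture = rng.normal(size=(n, 12))   # placeholder body/gesture features
speech = rng.normal(size=(n, 20))    # placeholder speech features
y = rng.integers(0, 8, size=n)       # eight emotion classes

# Feature-level fusion: concatenate the modality features into one vector.
fused = np.hstack([face, gesture, speech])
feature_level_clf = GaussianNB().fit(fused, y)
feature_level_pred = feature_level_clf.predict(fused)

# Decision-level fusion: one classifier per modality, then combine their
# class posteriors (here by simple averaging) and take the argmax.
modalities = (face, gesture, speech)
clfs = [GaussianNB().fit(X, y) for X in modalities]
posteriors = np.mean([clf.predict_proba(X) for clf, X in zip(clfs, modalities)], axis=0)
decision_level_pred = posteriors.argmax(axis=1)
```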


Language Resources and Evaluation | 2007

Virtual agent multimodal mimicry of humans

George Caridakis; Amaryllis Raouzaiou; Elisabetta Bevacqua; Maurizio Mancini; Kostas Karpouzis; Lori Malatesta; Catherine Pelachaud

This work concerns multimodal, expressive synthesis on virtual agents, based on the analysis of actions performed by human users. As input we consider the image sequence of the recorded human behavior. Computer vision and image processing techniques are incorporated in order to detect the cues needed for expressivity feature extraction. The multimodality of the approach lies in the fact that both facial and gestural aspects of the user's behavior are analyzed and processed. The mimicry consists of perception, interpretation, planning and animation of the expressions shown by the human, resulting not in an exact duplicate but rather in an expressive model of the user's original behavior.
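The mimicry loop described above can be read as a four-stage pipeline. The sketch below only shows the data flow between stages with placeholder bodies; the stage names follow the abstract, while the types and return values are invented for illustration and are not the paper's implementation.

```python
# Sketch of the perception -> interpretation -> planning -> animation loop.
from dataclasses import dataclass
from typing import List

@dataclass
class ExpressivityCues:
    facial_features: List[float]   # e.g. tracked facial deformations
    gesture_features: List[float]  # e.g. overall activation, spatial extent

def perceive(frames) -> ExpressivityCues:
    """Extract facial and gestural cues from the recorded image sequence (placeholder)."""
    return ExpressivityCues(facial_features=[0.0], gesture_features=[0.2])

def interpret(cues: ExpressivityCues) -> dict:
    """Map low-level cues to an expressive description of the behaviour (placeholder)."""
    return {"arousal": sum(cues.gesture_features), "valence": 0.0}

def plan(state: dict) -> dict:
    """Decide which expressive behaviour the agent should display (placeholder)."""
    return {"expression": "smile" if state["valence"] >= 0 else "frown",
            "intensity": abs(state["arousal"])}

def animate(command: dict) -> None:
    """Drive the virtual agent (placeholder: just print the command)."""
    print("agent displays", command)

def mimic(frames) -> None:
    animate(plan(interpret(perceive(frames))))
```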


Applied Intelligence | 2009

Towards modeling embodied conversational agent character profiles using appraisal theory predictions in expression synthesis

Lori Malatesta; Amaryllis Raouzaiou; Kostas Karpouzis; Stefanos D. Kollias

Appraisal theories in psychology study facial expressions in order to deduce information about the underlying emotion elicitation processes. Scherer's component process model provides predictions regarding the particular face muscle deformations that are interpreted as reactions to the cognitive appraisal of stimuli in the study of emotion episodes. In the current work, MPEG-4 facial animation parameters are used to evaluate these theoretical predictions for the intermediate and final expressions of a given emotion episode. We manipulate parameters such as the intensity and temporal evolution of the synthesized facial expressions. In emotion episodes originating from identical stimuli, by varying the cognitive appraisals of the stimuli and mapping them to different expression intensities and timings, various behavioral patterns can be generated and thus different agent character profiles can be defined. The results of the synthesis process are then applied to Embodied Conversational Agents (ECAs), aiming to render their interaction with humans, or other ECAs, more affective.
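A rough illustration of how varying the intensity and timing of the same appraisal-driven target expression can yield different character profiles. The FAP ids, amplitudes, profile values and the linear onset ramp are illustrative assumptions, not the mappings used in the paper.

```python
# Sketch: scaling the intensity and timing of a target FAP configuration to
# produce different agent "character profiles" from the same emotion episode.
target_faps = {3: 120, 5: 80, 33: 60}   # hypothetical FAP id -> peak amplitude

profiles = {
    "exuberant": {"intensity": 1.0, "onset_frames": 5,  "hold_frames": 20},
    "reserved":  {"intensity": 0.4, "onset_frames": 15, "hold_frames": 8},
}

def synthesize(profile_name: str):
    p = profiles[profile_name]
    frames = []
    # Linear onset ramp followed by a hold at the profile-scaled peak.
    for t in range(p["onset_frames"]):
        scale = p["intensity"] * (t + 1) / p["onset_frames"]
        frames.append({fap: amp * scale for fap, amp in target_faps.items()})
    frames += [{fap: amp * p["intensity"] for fap, amp in target_faps.items()}] * p["hold_frames"]
    return frames  # one dict of FAP values per animation frame

expressive_frames = synthesize("exuberant")
```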


Ubiquitous Computing | 2009

MPEG-4 facial expression synthesis

Lori Malatesta; Amaryllis Raouzaiou; Kostas Karpouzis; Stefanos D. Kollias

The current work describes an approach to synthesizing expressions, including intermediate ones, via the tools provided in the MPEG-4 standard, based on real measurements and on universally accepted assumptions about their meaning, taking into account the results of Whissel's study. Additionally, MPEG-4 facial animation parameters are used to evaluate theoretical predictions for intermediate expressions of a given emotion episode, based on Scherer's appraisal theory. MPEG-4 FAPs and action units are combined in modeling the effects of appraisal checks on facial expressions, and temporal evolution issues of facial expressions are investigated. The results of the synthesis process can then be applied to Embodied Conversational Agents (ECAs), rendering their interaction with humans, or other ECAs, more affective.
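One common way to realise an intermediate expression with FAPs is to blend the FAP profiles of two archetypal expressions; the sketch below assumes that reading of the approach and uses invented FAP ids and values rather than the measurements from the paper.

```python
# Sketch: deriving an intermediate expression as a weighted blend of the FAP
# profiles of two archetypal expressions. FAP ids/values are illustrative only.
anger_faps = {3: 100, 31: -40, 32: -40}
fear_faps  = {3: 160, 31:  60, 32:  60}

def intermediate(profile_a, profile_b, w):
    """Blend two FAP profiles; w=0 gives profile_a, w=1 gives profile_b."""
    keys = set(profile_a) | set(profile_b)
    return {k: (1 - w) * profile_a.get(k, 0) + w * profile_b.get(k, 0) for k in keys}

# An expression lying between anger and fear on the activation/evaluation plane.
worried = intermediate(anger_faps, fear_faps, 0.7)
```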


Artificial Intelligence | 2009

Affective intelligence: the human face of AI

Lori Malatesta; Kostas Karpouzis; Amaryllis Raouzaiou

Affective computing has been an extremely active research and development area for some years now, with some of the early results already starting to be integrated into human-computer interaction systems. Driven mainly by research initiatives in Europe, the USA and Japan, and accelerated by the abundance of processing power and by low-cost, unintrusive sensors such as cameras and microphones, affective computing functions in an interdisciplinary fashion, sharing concepts from diverse fields such as signal processing and computer vision, psychology and behavioral sciences, human-computer interaction and design, and machine learning. In order to relate low-level input signals and features to high-level concepts such as emotions or moods, one needs to take into account the multitude of psychology and representation theories, and the research findings related to them, and deploy machine learning techniques to form computational models of those concepts. This chapter elaborates on the concepts related to affective computing, how they can be connected to measurable features via representation models, and how they can be integrated into human-centric applications.


Artificial Intelligence Applications and Innovations | 2006

MPEG-4 Facial Expression Synthesis based on Appraisal Theory

Lori Malatesta; Amaryllis Raouzaiou; Stefanos D. Kollias

MPEG-4 facial animation parameters are used to evaluate theoretical predictions for intermediate expressions of a given emotion episode, based on Scherer's appraisal theory. MPEG-4 FAPs and action units are combined in modelling the effects of appraisal checks on facial expressions, and temporal evolution issues of facial expressions are investigated. The results of the synthesis process can then be applied to Embodied Conversational Agents (ECAs), rendering their interaction with humans, or other ECAs, more affective.


Engineering Applications of Artificial Intelligence | 2016

Associating gesture expressivity with affective representations

Lori Malatesta; Stylianos Asteriadis; George Caridakis; Asimina Vasalou; Kostas Karpouzis

Affective computing researchers adopt a variety of methods for analysing or synthesizing aspects of human behaviour. The choice of method depends on which behavioural cues are considered salient or straightforward to capture and comprehend, as well as on the overall context of the interaction. Thus, each approach focuses on modelling certain information and results in dedicated representations. However, analysis or synthesis is usually done by following label-based representations, which usually have a direct mapping to a feature vector. The goal of the presented work is to introduce an interim representational mechanism that associates low-level gesture expressivity parameters with a high-level dimensional representation of affect. More specifically, it introduces a novel methodology for associating easily extracted, low-level gesture data with the affective dimensions of activation and evaluation. For this purpose, a user perception test was carried out in order to properly annotate a dataset, by asking participants to assess each gesture in terms of the perceived activation (active/passive) and evaluation (positive/negative) levels. In affective behaviour modelling, the contribution of the proposed association methodology is twofold: on the one hand, when analysing affective behaviour, it enables the fusion of expressivity parameters alongside any other modalities coded in higher-level affective representations, leading to scalable multimodal analysis; on the other hand, it supports the process of synthesizing composite human behaviour (e.g. facial expressions, gestures and body posture), since it allows dimensional values of affect to be translated into synthesized expressive gestures.
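A minimal sketch of the kind of association the paper targets: fitting regressors from low-level expressivity parameters to the perceived activation and evaluation scores collected in a perception test. The feature layout, the random placeholder data and the ridge regressor are assumptions for illustration, not the method or corpus reported in the paper.

```python
# Sketch: learning a mapping from gesture expressivity parameters to the
# activation/evaluation plane, using annotated (placeholder) gesture data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
# Columns stand in for expressivity parameters such as overall activation,
# spatial extent, temporal, fluidity and power.
expressivity = rng.normal(size=(150, 5))
activation = rng.uniform(-1, 1, size=150)   # perceived active/passive rating
evaluation = rng.uniform(-1, 1, size=150)   # perceived positive/negative rating

act_model = Ridge(alpha=1.0).fit(expressivity, activation)
eva_model = Ridge(alpha=1.0).fit(expressivity, evaluation)

def gesture_to_affect(params):
    """Map one gesture's expressivity vector to (activation, evaluation)."""
    p = np.asarray(params).reshape(1, -1)
    return float(act_model.predict(p)[0]), float(eva_model.predict(p)[0])
```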


Archive | 2009

Emotion Modelling and Facial Affect Recognition in Human-Computer and Human-Robot Interaction

Lori Malatesta; John Murray; Amaryllis Raouzaiou; Antoine Hiolle; Lola Cañamero; Kostas Karpouzis

As research has revealed the deep role that emotion and emotion expression play in human social interaction, researchers in human-computer interaction have proposed that more effective human-computer interfaces can be realized if the interface models the user's emotion as well as expresses emotions itself. Affective computing was defined by Rosalind Picard (1997) as computing that relates to, arises from, or deliberately influences emotion or other affective phenomena. According to Picard's pioneering book, if we want computers to be genuinely intelligent and to interact naturally with us, we must give computers the ability to recognize, understand, and even to have and express emotions. These positions have become the foundations of research in the area and have been investigated in great depth since they were first postulated. Emotion is fundamental to human experience, influencing cognition, perception, and everyday tasks such as learning, communication, and even rational decision making. Affective computing aspires to bridge the gap left by typical human-computer interaction, which has largely ignored affect and thus created an often frustrating experience for people, in part because affect had been overlooked or was hard to measure. In order to take these ideas a step further, towards the objectives of practical applications, we need to adapt methods of modelling affect to the requirements of particular showcases. To do so, it is fundamental to review prevalent psychology theories on emotion, to disambiguate their terminology and to identify the fitting computational models that can allow for affective interactions in the desired environments.


2012 Seventh International Workshop on Semantic and Social Media Adaptation and Personalization | 2012

Natural Interaction Multimodal Analysis: Expressivity Analysis towards Adaptive and Personalized Interfaces

Stylianos Asteriadis; George Caridakis; Lori Malatesta; Kostas Karpouzis

Intelligent personalized systems often ignore the affective aspect of human behavior and focus more on tactile cues of user activity. A complete user model, though, should also incorporate cues such as facial expressions, speech prosody, and gesture or body posture expressivity features, in order to dynamically profile the user, fusing all available modalities, since these qualitative affective cues contain significant information about the user's non-verbal behavior and communication. Towards this direction, this work focuses on the automatic extraction of gestural and head expressivity features and the related statistical processing. The perspective of adopting a common formalization of expressivity features across a multitude of visual, emotional modalities is explored and grounded through an overview of experiments on appropriate corpora and the corresponding analysis.
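To make the expressivity features concrete, here is one plausible way to compute a few of them from a tracked 2-D hand trajectory. The exact definitions vary across the related papers, so these formulas are illustrative approximations rather than the authors' implementation.

```python
# Sketch: simple expressivity features from a tracked 2-D hand trajectory,
# given as one (x, y) point per frame. Definitions are approximations.
import numpy as np

def expressivity_features(trajectory: np.ndarray) -> dict:
    velocity = np.diff(trajectory, axis=0)          # frame-to-frame displacement
    speed = np.linalg.norm(velocity, axis=1)
    acceleration = np.diff(speed)
    return {
        # Overall activation: total amount of movement over the gesture.
        "overall_activation": float(speed.sum()),
        # Spatial extent: diagonal of the bounding box swept by the hand.
        "spatial_extent": float(np.linalg.norm(trajectory.max(0) - trajectory.min(0))),
        # Temporal: average speed, i.e. how quickly the gesture unfolds.
        "temporal": float(speed.mean()),
        # Fluidity: lower jerkiness means smoother movement.
        "fluidity": float(-np.abs(acceleration).mean()),
    }

features = expressivity_features(np.cumsum(np.random.randn(60, 2), axis=0))
```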


Intelligent Technologies for Interactive Entertainment | 2009

Affective Interface Adaptations in the Musickiosk Interactive Entertainment Application

Lori Malatesta; Amaryllis Raouzaiou; L. Pearce; Kostas Karpouzis

The current work presents the affective interface adaptations in the Musickiosk application. Adaptive interaction poses several open questions, since there is no unique way of mapping affective factors of user behaviour to the output of the system. Musickiosk uses a non-contact interface and implicit interaction through emotional affect, rather than explicit interaction where a gesture, sound or other input maps directly to an output behaviour, as in traditional entertainment applications. The PAD model is used for characterizing the different affective states and emotions.
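A small sketch of how a PAD (pleasure-arousal-dominance) representation can drive an interface adaptation: the system keeps the user's state as a PAD triple and selects the output mapped to the nearest labelled reference point. The reference states and output behaviours below are invented for illustration, not Musickiosk's actual mapping.

```python
# Sketch: characterising affective states in PAD space and selecting an output
# behaviour by nearest labelled reference point. Values are illustrative only.
import math

# Hypothetical reference points: (pleasure, arousal, dominance) -> output mode.
reference_states = {
    "relaxed":    ((0.6, -0.4, 0.2),  "slow ambient music"),
    "excited":    ((0.7,  0.8, 0.4),  "fast rhythmic music"),
    "frustrated": ((-0.6, 0.5, -0.3), "calming soundscape"),
}

def adapt(pad):
    """Return the label and output whose reference PAD point is closest."""
    label, (_, output) = min(reference_states.items(),
                             key=lambda kv: math.dist(pad, kv[1][0]))
    return label, output

print(adapt((0.5, 0.7, 0.3)))  # -> ('excited', 'fast rhythmic music')
```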

Collaboration


Dive into Lori Malatesta's collaborations.

Top Co-Authors

Amaryllis Raouzaiou
National Technical University of Athens

Kostas Karpouzis
National Technical University of Athens

George Caridakis
National Technical University of Athens

Stefanos D. Kollias
National Technical University of Athens

Paraskevi K. Tzouveli
National Technical University of Athens

Antoine Hiolle
University of Hertfordshire