Publications


Featured research published by Amaryllis Raouzaiou.


EURASIP Journal on Advances in Signal Processing | 2002

Parameterized facial expression synthesis based on MPEG-4

Amaryllis Raouzaiou; Nicolas Tsapatsoulis; Kostas Karpouzis; Stefanos D. Kollias

In the framework of MPEG-4, one can include applications where virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech, help systems become accustomed to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework, with implementation details being an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human-computer interaction, focusing on the analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen, 1978). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized facial expression analysis and synthesis notions compatible with the MPEG-4 standard.
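
A loose illustration of the parameterization idea: if an archetypal expression is stored as a set of FAP displacements, an intermediate expression can be obtained by scaling those displacements with an activation level. The sketch below uses hypothetical FAP names and magnitudes; it does not reproduce the paper's actual profiles.

```python
# Hypothetical FAP profile for an archetypal "joy" expression.
# FAP names and magnitudes are illustrative, not taken from the paper.
JOY_PROFILE = {
    "raise_l_cornerlip": 120,
    "raise_r_cornerlip": 120,
    "close_t_l_eyelid": -40,
    "close_t_r_eyelid": -40,
}

def intermediate_expression(profile: dict, activation: float) -> dict:
    """Scale an archetypal FAP profile by an activation level in [0, 1]."""
    activation = max(0.0, min(1.0, activation))
    return {fap: round(value * activation) for fap, value in profile.items()}

# A mild smile: roughly 40% of the archetypal joy expression.
print(intermediate_expression(JOY_PROFILE, 0.4))
```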


Artificial Intelligence Applications and Innovations | 2007

Multimodal emotion recognition from expressive faces, body gestures and speech

George Caridakis; Ginevra Castellano; Loic Kessous; Amaryllis Raouzaiou; Lori Malatesta; Stylianos Asteriadis; Kostas Karpouzis

In this paper we present a multimodal approach for the recognition of eight emotions that integrates information from facial expressions, body movement and gestures, and speech. We trained and tested a model with a Bayesian classifier, using a multimodal corpus with eight emotions and ten subjects. First, individual classifiers were trained for each modality; then the data were fused at the feature level and at the decision level. Fusing the multimodal data substantially increased the recognition rates in comparison with the unimodal systems: the multimodal approach gave an improvement of more than 10% with respect to the most successful unimodal system. Furthermore, fusion performed at the feature level showed better results than fusion performed at the decision level.
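
The two fusion schemes compared above can be sketched as follows, with synthetic features and scikit-learn's GaussianNB standing in for the paper's Bayesian classifier; the data, dimensions, and accuracy printouts are purely illustrative.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-ins for per-modality feature matrices and emotion labels.
rng = np.random.default_rng(0)
n, n_classes = 200, 8
y = rng.integers(0, n_classes, n)
face = rng.normal(size=(n, 10)) + 0.3 * y[:, None]
body = rng.normal(size=(n, 6)) + 0.2 * y[:, None]
speech = rng.normal(size=(n, 4)) + 0.1 * y[:, None]

# Feature-level fusion: concatenate modalities, train one classifier.
fused = np.hstack([face, body, speech])
feature_clf = GaussianNB().fit(fused, y)

# Decision-level fusion: one classifier per modality, then combine
# the class posteriors (here, by simple averaging).
clfs = [GaussianNB().fit(X, y) for X in (face, body, speech)]
posteriors = np.mean(
    [clf.predict_proba(X) for clf, X in zip(clfs, (face, body, speech))], axis=0
)
decision_pred = posteriors.argmax(axis=1)

print("feature-level accuracy:", feature_clf.score(fused, y))
print("decision-level accuracy:", (decision_pred == y).mean())
```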


Language Resources and Evaluation | 2007

Virtual agent multimodal mimicry of humans

George Caridakis; Amaryllis Raouzaiou; Elisabetta Bevacqua; Maurizio Mancini; Kostas Karpouzis; Lori Malatesta; Catherine Pelachaud

This work concerns multimodal and expressive synthesis on virtual agents, based on the analysis of actions performed by human users. As input we consider the image sequence of the recorded human behavior. Computer vision and image processing techniques are incorporated in order to detect the cues needed for expressivity feature extraction. The multimodality of the approach lies in the fact that both facial and gestural aspects of the user's behavior are analyzed and processed. The mimicry consists of perception, interpretation, planning and animation of the expressions shown by the human, resulting not in an exact duplicate but rather in an expressive model of the user's original behavior.
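
As a rough sketch of what expressivity feature extraction from tracked motion might look like, the snippet below derives a few commonly used expressivity measures from a hand trajectory; the specific feature definitions are simplifications for illustration, not the paper's exact formulations.

```python
import numpy as np

def expressivity_features(hand_xy: np.ndarray) -> dict:
    """Derive simple expressivity measures from a (T, 2) array of
    per-frame hand positions."""
    velocity = np.diff(hand_xy, axis=0)          # frame-to-frame motion
    speed = np.linalg.norm(velocity, axis=1)
    return {
        # Overall activation: total amount of motion in the sequence.
        "overall_activation": float(speed.sum()),
        # Spatial extent: area of the bounding box swept by the hand.
        "spatial_extent": float(np.prod(hand_xy.max(0) - hand_xy.min(0))),
        # Temporal aspect: mean speed (fast vs. slow movement).
        "temporal": float(speed.mean()),
        # Fluidity: lower speed variance suggests smoother movement.
        "fluidity": float(1.0 / (1.0 + speed.var())),
    }

# Example: a synthetic rightward-sweeping gesture over 30 frames.
t = np.linspace(0, 1, 30)
trajectory = np.stack([200 * t, 50 * np.sin(2 * np.pi * t)], axis=1)
print(expressivity_features(trajectory))
```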


Computer Animation and Virtual Worlds | 2006

Emotional face expression profiles supported by virtual human ontology

Alejandra García-Rojas; Frédéric Vexo; Daniel Thalmann; Amaryllis Raouzaiou; Kostas Karpouzis; Stefanos D. Kollias; Laurent Moccozet; Nadia Magnenat-Thalmann

Expressive facial animation synthesis for human-like characters has been approached in many ways, with good results, and the MPEG-4 standard has served as the basis for many of those approaches. In this paper we lay out the knowledge of some of those approaches inside an ontology in order to support the modeling of emotional facial animation in virtual humans (VH). Inside this ontology we present MPEG-4 facial animation concepts and their relationship with emotion through expression profiles that utilize psychological models of emotions. The ontology allows storing, indexing and retrieving prerecorded synthetic facial animations that can express a given emotion, and can also be used as a refined knowledge base for emotional facial animation creation. The ontology is built using the Web Ontology Language, and the results are presented as answered queries.
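
A toy version of the store-and-query idea, using rdflib and SPARQL; the namespace, class, and property names below are invented for illustration and do not reproduce the paper's actual ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF

VH = Namespace("http://example.org/vh#")   # hypothetical namespace
g = Graph()

# Store one prerecorded facial animation annotated with its emotion.
g.add((VH.anim01, RDF.type, VH.FacialAnimation))
g.add((VH.anim01, VH.expressesEmotion, VH.Joy))
g.add((VH.anim01, VH.hasFAPFile, Literal("joy_smile_01.fap")))

# Retrieve every stored animation that can express a given emotion.
results = g.query(
    """
    SELECT ?anim ?file WHERE {
        ?anim a vh:FacialAnimation ;
              vh:expressesEmotion vh:Joy ;
              vh:hasFAPFile ?file .
    }
    """,
    initNs={"vh": VH},
)
for anim, fap_file in results:
    print(anim, fap_file)
```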


Applied Intelligence | 2009

Towards modeling embodied conversational agent character profiles using appraisal theory predictions in expression synthesis

Lori Malatesta; Amaryllis Raouzaiou; Kostas Karpouzis; Stefanos D. Kollias

Appraisal theories in psychology study facial expressions in order to deduce information regarding the underlying emotion elicitation processes. Scherer's component process model provides predictions regarding the particular face muscle deformations that occur as reactions to cognitive appraisal stimuli in the study of emotion episodes. In the current work, MPEG-4 facial animation parameters are used to evaluate these theoretical predictions for intermediate and final expressions of a given emotion episode. We manipulate parameters such as the intensity and temporal evolution of synthesized facial expressions. In emotion episodes originating from identical stimuli, by varying the cognitive appraisals of the stimuli and mapping them to different expression intensities and timings, various behavioral patterns can be generated and thus different agent character profiles can be defined. The results of the synthesis process are then applied to Embodied Conversational Agents (ECAs), aiming to render their interaction with humans, or other ECAs, more affective.
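
One way to picture the character-profile idea: the same appraisal-check sequence is rendered with profile-specific intensity gains and timings. The check names follow Scherer's component process model, but the numeric mappings below are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class CharacterProfile:
    name: str
    intensity_gain: float   # scales expression intensities
    tempo: float            # scales expression onset/offset durations

def render_episode(checks: list[tuple[str, float]], profile: CharacterProfile):
    """Turn (appraisal_check, strength) pairs into intermediate
    expression steps with profile-specific dynamics."""
    return [
        {
            "check": check,
            "intensity": min(1.0, strength * profile.intensity_gain),
            "duration_s": 0.5 * profile.tempo,  # illustrative base duration
        }
        for check, strength in checks
    ]

episode = [("novelty", 0.8), ("intrinsic_pleasantness", 0.3),
           ("goal_conduciveness", 0.6)]
calm = CharacterProfile("calm", intensity_gain=0.5, tempo=1.5)
excitable = CharacterProfile("excitable", intensity_gain=1.2, tempo=0.6)
for profile in (calm, excitable):
    print(profile.name, render_episode(episode, profile))
```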


Ubiquitous Computing | 2009

MPEG-4 facial expression synthesis

Lori Malatesta; Amaryllis Raouzaiou; Kostas Karpouzis; Stefanos D. Kollias

The current work describes an approach to synthesizing expressions, including intermediate ones, via the tools provided in the MPEG-4 standard, based on real measurements and on universally accepted assumptions about their meaning, taking into account the results of Whissell's study. Additionally, MPEG-4 facial animation parameters are used to evaluate theoretical predictions for intermediate expressions of a given emotion episode, based on Scherer's appraisal theory. MPEG-4 FAPs and action units are combined in modeling the effects of appraisal checks on facial expressions, and temporal evolution issues of facial expressions are investigated. The results of the synthesis process can then be applied to Embodied Conversational Agents (ECAs), rendering their interaction with humans, or other ECAs, more affective.
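
A minimal sketch of how successive appraisal checks might compose a final expression: each check contributes a set of FAP displacements (standing in for its action-unit prediction), and contributions accumulate over the episode, yielding the intermediate expressions along the way. The check-to-FAP table is illustrative, not taken from the paper.

```python
# Illustrative mapping from appraisal checks to FAP displacements.
CHECK_TO_FAPS = {
    "novelty":          {"raise_l_i_eyebrow": 60, "raise_r_i_eyebrow": 60},
    "unpleasantness":   {"lower_t_midlip": 30, "squeeze_l_eyebrow": 40},
    "goal_obstruction": {"squeeze_l_eyebrow": 80, "squeeze_r_eyebrow": 80},
}

def accumulate_expression(checks: list) -> dict:
    """Sum the FAP displacements contributed by each successive check."""
    fap_values = {}
    for check in checks:
        for fap, delta in CHECK_TO_FAPS.get(check, {}).items():
            fap_values[fap] = fap_values.get(fap, 0) + delta
    return fap_values

# Intermediate expressions after each check of an "anger-like" episode.
episode = ["novelty", "unpleasantness", "goal_obstruction"]
for i in range(1, len(episode) + 1):
    print(episode[:i], "->", accumulate_expression(episode[:i]))
```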


Perception and Interactive Technologies | 2006

Multimodal sensing, interpretation and copying of movements by a virtual agent

Elisabetta Bevacqua; Amaryllis Raouzaiou; Christopher E. Peters; George Caridakis; Kostas Karpouzis; Catherine Pelachaud; Maurizio Mancini

We present a scenario whereby an agent senses, interprets and copies a range of facial and gestural expressions from a person in the real world. Input is obtained via a video camera and processed initially using computer vision techniques. It is then processed further in a framework for agent perception, planning and behaviour generation in order to perceive, interpret and copy a number of gestures and facial expressions corresponding to those made by the human. By perceive, we mean that the copied behaviour may not be an exact duplicate of the behaviour made by the human and sensed by the agent, but may rather be based on some level of interpretation of the behaviour. Thus, the copied behaviour may be altered and need not share all of the characteristics of the original made by the human.
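
The interpret-then-copy distinction can be sketched as follows: the sensed behaviour is first classified into a discrete expression label, and the agent then re-synthesizes that label with its own templates rather than replaying the raw input. All names and values here are illustrative.

```python
# Agent-side synthesis templates (illustrative FAP sets per label).
AGENT_TEMPLATES = {
    "smile": {"raise_l_cornerlip": 100, "raise_r_cornerlip": 100},
    "frown": {"squeeze_l_eyebrow": 80, "squeeze_r_eyebrow": 80},
}

def interpret(sensed_faps: dict) -> str:
    """Map sensed FAP values to the closest known expression label."""
    def distance(template: dict) -> float:
        keys = set(sensed_faps) | set(template)
        return sum((sensed_faps.get(k, 0) - template.get(k, 0)) ** 2
                   for k in keys)
    return min(AGENT_TEMPLATES, key=lambda label: distance(AGENT_TEMPLATES[label]))

def copy_behaviour(sensed_faps: dict) -> dict:
    """Re-synthesize the interpreted label instead of echoing raw values."""
    return AGENT_TEMPLATES[interpret(sensed_faps)]

# A noisy, asymmetric smile is still interpreted and copied as "smile".
print(copy_behaviour({"raise_l_cornerlip": 70, "raise_r_cornerlip": 110}))
```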


International Conference on Artificial Neural Networks | 2003

An intelligent scheme for facial expression recognition

Amaryllis Raouzaiou; Spiros Ioannou; Kostas Karpouzis; Nicolas Tsapatsoulis; Stefanos D. Kollias; Roddy Cowie

This paper addresses the problem of emotion recognition in faces through an intelligent neuro-fuzzy system, which is capable of analysing facial features extracted following the MPEG-4 standard, associating these features with symbolic fuzzy predicates, and reasoning on the latter so as to classify facial images according to the underlying emotional states. Results are presented which illustrate the capability of the developed system to analyse and recognise facial expressions in human-computer interaction applications.
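
A toy version of the features-to-fuzzy-predicates-to-rules pipeline: continuous facial features are fuzzified through triangular membership functions, and simple rules over the resulting predicates score candidate emotions. The feature names, membership breakpoints, and rules are all invented for illustration.

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(features: dict) -> dict:
    """Map normalized features in [0, 1] to fuzzy predicates."""
    return {
        "mouth_open_high": triangular(features["mouth_open"], 0.4, 0.7, 1.0),
        "brows_raised_high": triangular(features["brow_raise"], 0.4, 0.7, 1.0),
        "lip_corners_up": triangular(features["lip_corner"], 0.4, 0.7, 1.0),
    }

def classify(features: dict) -> str:
    p = fuzzify(features)
    # Rules: min acts as fuzzy AND over the predicates.
    scores = {
        "surprise": min(p["mouth_open_high"], p["brows_raised_high"]),
        "joy": p["lip_corners_up"],
    }
    return max(scores, key=scores.get)

print(classify({"mouth_open": 0.8, "brow_raise": 0.75, "lip_corner": 0.2}))
```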


Archive | 2011

Fundamentals of Agent Perception and Attention Modelling

Christopher E. Peters; Ginevra Castellano; Matthias Rehm; Elisabeth André; Amaryllis Raouzaiou; Kostas Rapantzikos; Kostas Karpouzis; Gualtiero Volpe; Antonio Camurri; Asimina Vasalou

Perception and attention mechanisms are of great importance for entities situated within complex dynamic environments. With roles extending far beyond passive information services about the external environment, such mechanisms actively prioritise, augment and expedite information to ensure that potentially relevant information is made available so appropriate action can take place. Here, we describe the rationale behind endowing artificial entities, or virtual agents, with real-time perception and attention systems, and we cover the fundamentals of designing and building such systems. Once equipped, the resulting agents can achieve a more substantial connection with their environment for the purposes of reacting, planning, decision making and, ultimately, behaving.
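
A bare-bones illustration of attention-style prioritisation: percepts are scored for salience and the most salient are attended to first, rather than all input being handled passively. The scoring weights below are invented for the example.

```python
import heapq

def salience(percept: dict) -> float:
    """Score a percept: motion and proximity make it more attention-worthy."""
    return 0.6 * percept["motion"] + 0.4 * (1.0 - percept["distance"])

percepts = [
    {"id": "waving_person", "motion": 0.9, "distance": 0.3},
    {"id": "static_chair",  "motion": 0.0, "distance": 0.2},
    {"id": "passing_car",   "motion": 0.7, "distance": 0.9},
]

# Max-heap via negated salience: most salient percepts are attended first.
queue = [(-salience(p), p["id"]) for p in percepts]
heapq.heapify(queue)
while queue:
    score, pid = heapq.heappop(queue)
    print(f"attend to {pid} (salience {-score:.2f})")
```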


Artificial Intelligence | 2009

Affective intelligence: the human face of AI

Lori Malatesta; Kostas Karpouzis; Amaryllis Raouzaiou

Affective computing has been an extremely active research and development area for some years now, with some of the early results already starting to be integrated into human-computer interaction systems. Driven mainly by research initiatives in Europe, the USA and Japan, and accelerated by the abundance of processing power and low-cost, unintrusive sensors like cameras and microphones, affective computing functions in an interdisciplinary fashion, sharing concepts from diverse fields such as signal processing and computer vision, psychology and behavioral sciences, human-computer interaction and design, and machine learning. In order to relate low-level input signals and features to high-level concepts such as emotions or moods, one needs to take into account the multitude of psychology and representation theories and the research findings related to them, and deploy machine learning techniques to actually form computational models of them. This chapter elaborates on the concepts related to affective computing, how these can be connected to measurable features via representation models, and how they can be integrated into human-centric applications.

Collaboration


Dive into Amaryllis Raouzaiou's collaborations.

Top Co-Authors

Kostas Karpouzis (National Technical University of Athens)
Stefanos D. Kollias (National Technical University of Athens)
Lori Malatesta (National Technical University of Athens)
Nicolas Tsapatsoulis (Cyprus University of Technology)
George Caridakis (National Technical University of Athens)
Spiros Ioannou (National Technical University of Athens)
Athanasios I. Drosopoulos (National Technical University of Athens)
Daniel Thalmann (École Polytechnique Fédérale de Lausanne)
Frédéric Vexo (École Polytechnique Fédérale de Lausanne)
Manolis Wallace (University of Peloponnese)