Publications


Featured research published by Sarkis Abrilian.


Affective Computing and Intelligent Interaction | 2007

The HUMAINE Database: Addressing the Collection and Annotation of Naturalistic and Induced Emotional Data

Ellen Douglas-Cowie; Roddy Cowie; Ian Sneddon; Cate Cox; Orla Lowry; Margaret McRorie; Jean-Claude Martin; Laurence Devillers; Sarkis Abrilian; Anton Batliner; Noam Amir; Kostas Karpouzis

The HUMAINE project is concerned with developing interfaces that will register and respond to emotion, particularly pervasive emotion (forms of feeling, expression and action that colour most of human life). The HUMAINE Database provides naturalistic clips which record that kind of material, in multiple modalities, and labelling techniques that are suited to describing it.
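For illustration, a minimal sketch of what a clip record combining modalities, context, and emotion labels could look like; the field names and values are assumptions for this example, not the actual HUMAINE Database schema.

```python
from dataclasses import dataclass, field

# Hypothetical clip record; illustrative only, not the HUMAINE Database schema.
@dataclass
class EmotionClip:
    clip_id: str
    modalities: list[str]            # e.g. ["audio", "video", "gesture"]
    context: str                     # free-text description of the situation
    labels: dict[str, float] = field(default_factory=dict)  # label -> intensity in [0, 1]

clip = EmotionClip(
    clip_id="clip_042",
    modalities=["audio", "video"],
    context="TV interview, spontaneous reaction",
    labels={"anger": 0.6, "sadness": 0.3},   # naturalistic data often carries mixed emotions
)
```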


Affective Computing and Intelligent Interaction | 2005

Representing real-life emotions in audiovisual data with non basic emotional patterns and context features

Laurence Devillers; Sarkis Abrilian; Jean-Claude Martin

The modeling of realistic emotional behavior is needed for various applications in multimodal human-machine interaction, such as emotion detection in a surveillance system or the design of natural Embodied Conversational Agents. Yet building such models requires an appropriate definition of the levels of representation: the emotional context, the emotion itself, and the observed multimodal behaviors. This paper presents the multi-level emotion and context coding scheme that was defined following the annotation of fifty-one videos of TV interviews. Analysis of the annotations shows the complexity and richness of the real-life data: around 50% of the clips feature mixed emotions with conflicting multimodal cues. A typology of mixed emotional patterns is proposed, showing that cause-effect conflicts and masked acted emotions are perceptually difficult to annotate on the valence dimension.
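To make the idea of a multi-level coding scheme concrete, here is a minimal sketch of an annotation for one clip, with a context level, a blended emotion level, and per-modality cues that can conflict. The field names, labels, and the conflict heuristic are illustrative assumptions, not the authors' actual scheme.

```python
# Illustrative multi-level annotation for one video clip (not the authors' coding scheme).
annotation = {
    "context": {
        "setting": "TV interview",
        "topic": "flood victim testimony",
    },
    "emotion": {
        # blended ("non-basic") emotion: two labels with relative strengths
        "labels": [("despair", 0.7), ("anger", 0.3)],
        "valence": -0.8,
        "intensity": 0.9,
    },
    "modalities": {
        # cues may conflict across modalities (e.g. smiling while the voice trembles)
        "speech": {"cue": "trembling voice", "valence": -0.7},
        "face": {"cue": "smile", "valence": 0.4},
        "gesture": {"cue": "self-touching", "valence": -0.3},
    },
}

def has_conflicting_cues(ann, threshold=0.5):
    """Flag clips whose per-modality valence cues disagree by more than a threshold."""
    valences = [m["valence"] for m in ann["modalities"].values()]
    return max(valences) - min(valences) > threshold

print(has_conflicting_cues(annotation))  # True for this example
```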


Intelligent Virtual Agents | 2006

Perception of blended emotions: from video corpus to expressive agent

Stéphanie Buisine; Sarkis Abrilian; Radoslaw Niewiadomski; Jean-Claude Martin; Laurence Devillers; Catherine Pelachaud

Real-life emotions are often blended and involve several simultaneous superposed or masked emotions. This paper reports on a study of the perception of multimodal emotional behaviors in Embodied Conversational Agents. The experimental study evaluates whether people properly detect the signs of emotions in different modalities (speech, facial expressions, gestures) when these signs appear superposed or masked. We compared the perception of emotional behaviors annotated in a corpus of TV interviews and replayed by an expressive agent at different levels of abstraction. The results provide insights into the use of such protocols for studying the effect of various models and modalities on the perception of complex emotions.


Intelligent Virtual Agents | 2005

Levels of representation in the annotation of emotion for the specification of expressivity in ECAs

Jean-Claude Martin; Sarkis Abrilian; Laurence Devillers; Myriam Lamolle; Maurizio Mancini; Catherine Pelachaud

In this paper we present a two-step approach towards the creation of affective Embodied Conversational Agents (ECAs): annotation of a real-life, non-acted emotional corpus and animation by copy-synthesis. The basis of our approach is to study how coders perceive and annotate, at several levels, the emotions observed in a corpus of emotionally rich TV video interviews. We use their annotations to specify the expressive behavior of an agent at several levels. We explain how such an approach can be useful for providing knowledge as input for the specification of non-basic patterns of emotional behaviors to be displayed by the ECA (e.g. which perceptual cues and levels of annotation are required to enable proper recognition of the emotions).
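The copy-synthesis step can be pictured as a mapping from the multi-level annotations onto a behavior specification the agent can replay. The sketch below reuses the hypothetical annotation structure from the earlier example; it is an assumption about the shape of such a mapping, not the authors' actual specification language.

```python
# Hypothetical sketch of the copy-synthesis idea: map multi-level annotations of an
# observed behavior onto a specification an expressive agent could replay.
def annotations_to_agent_spec(annotation):
    spec = {
        # high level: which emotion(s) to express, with relative strengths
        "emotion": annotation["emotion"]["labels"],
        # mid level: which modalities carry the expression
        "active_modalities": list(annotation["modalities"].keys()),
        # low level: concrete behaviors copied from the corpus annotation
        "behaviors": [
            {"modality": name, "behavior": cue["cue"]}
            for name, cue in annotation["modalities"].items()
        ],
    }
    return spec
```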


From Brows to Trust | 2004

Evaluation of multimodal behaviour of embodied agents

Stéphanie Buisine; Sarkis Abrilian; Jean-Claude Martin

The individuality of Embodied Conversational Agents (ECAs) may depend both on the look of the agent and on the way it combines modalities such as speech and gesture. In this chapter, we describe a study in which male and female users listened to three short technical presentations made by ECAs. Three multimodal strategies for combining arm gestures with speech were compared: redundancy, complementarity, and speech-specialization. These strategies were randomly assigned to different-looking 2D ECAs in order to test the effects of multimodal strategy and ECA appearance independently. The variables we examined were subjective impressions and recall performance. Multimodal strategy influenced subjective ratings of the quality of explanation, particularly for male users. Appearance, on the other hand, affected likeability but also recall performance. These results stress the importance of both multimodal strategy and appearance for ensuring the pleasantness and effectiveness of presentation ECAs.
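As a rough illustration of the three strategies, the sketch below shows one way a list of content units could be distributed between speech and gesture, and how strategies might be randomly paired with agents so that strategy and appearance vary independently. The content splitting and the assignment procedure are assumptions, not the experimental material.

```python
import random

# Illustrative sketch of the three multimodal strategies (not the study's actual stimuli).
def realize(info_items, strategy):
    """info_items: list of content units to convey (e.g. steps of a technical explanation)."""
    if strategy == "redundancy":              # every item expressed in both modalities
        return {"speech": info_items, "gesture": info_items}
    if strategy == "complementarity":         # items split across the two modalities
        half = len(info_items) // 2
        return {"speech": info_items[:half], "gesture": info_items[half:]}
    if strategy == "speech-specialization":   # all content in speech; gestures carry no content
        return {"speech": info_items, "gesture": []}
    raise ValueError(f"unknown strategy: {strategy}")

# Strategies randomly paired with different-looking agents, so the two factors
# (multimodal strategy, appearance) can be tested independently.
agents = ["agent_A", "agent_B", "agent_C"]
strategies = ["redundancy", "complementarity", "speech-specialization"]
random.shuffle(strategies)
assignment = dict(zip(agents, strategies))
```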


Ubiquitous Computing | 2009

Manual annotation and automatic image processing of multimodal emotional behaviors: validating the annotation of TV interviews

Jean-Claude Martin; George Caridakis; Laurence Devillers; Kostas Karpouzis; Sarkis Abrilian

There has been a great deal of psychological research on emotion and nonverbal communication, yet these studies were based mostly on acted basic emotions. This paper explores how manual annotation and image processing can cooperate towards the representation of spontaneous emotional behavior in low-resolution videos from TV. We describe a corpus of TV interviews and the manual annotations that have been defined. We explain the image-processing algorithms that have been designed for the automatic estimation of movement quantity. Finally, we explore how image processing can be used to validate the manual annotations.
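A common way to estimate movement quantity from low-resolution video is frame differencing: sum the absolute pixel changes between consecutive frames. The sketch below uses that generic technique for illustration; it is not necessarily the algorithm designed by the authors.

```python
import numpy as np

# Generic frame-differencing estimate of movement quantity (illustrative sketch).
def movement_quantity(frames: np.ndarray) -> np.ndarray:
    """frames: array of shape (num_frames, height, width), grayscale values in [0, 255].
    Returns one movement value per frame transition, normalized to [0, 1]."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    per_transition = diffs.mean(axis=(1, 2))   # mean absolute change per pixel
    return per_transition / 255.0

# Example: random frames stand in for a decoded low-resolution video segment.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(100, 120, 160))
print(movement_quantity(frames).shape)  # (99,)
```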


Contexts | 2005

Contextual factors and adaptative multimodal human-computer interaction: multi-level specification of emotion and expressivity in embodied conversational agents

Myriam Lamolle; Maurizio Mancini; Catherine Pelachaud; Sarkis Abrilian; Jean-Claude Martin; Laurence Devillers

In this paper we present an Embodied Conversational Agent (ECA) model able to display rich verbal and non-verbal behaviors. The selection of these behaviors should depend not only on factors related to her individuality, such as her culture, her social and professional role, and her personality, but also on a set of contextual variables (such as her interlocutor and the social conversation setting) and other dynamic variables (beliefs, goals, emotions). We describe the representation scheme and the computational model of behavior expressivity of the Expressive Agent System that we have developed. We explain how the multi-level annotation of a corpus of emotionally rich TV video interviews can provide context-dependent knowledge as input for the specification of the ECA (e.g. which contextual cues and levels of representation are required to enable proper recognition of the emotions).
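For a rough picture of what a behavior expressivity model manipulates, here is a sketch of a parameter set of the kind commonly used for expressive ECAs, with a toy context adjustment. The parameter names, ranges, and the scaling rule are assumptions for illustration; the actual parameters and semantics of the Expressive Agent System may differ.

```python
from dataclasses import dataclass

# Hypothetical expressivity parameter set (values assumed to lie in [-1, 1]).
@dataclass
class Expressivity:
    spatial_extent: float = 0.0    # amplitude of movements
    temporal_extent: float = 0.0   # speed of movements
    fluidity: float = 0.0          # smoothness / continuity
    power: float = 0.0             # weak vs. strong dynamics
    repetition: float = 0.0        # tendency to repeat strokes

def apply_context(base: Expressivity, restraint: float) -> Expressivity:
    """Toy contextual adjustment: a restrained setting (e.g. a formal interview)
    scales down amplitude and power relative to the base profile."""
    k = 1.0 - restraint
    return Expressivity(
        spatial_extent=base.spatial_extent * k,
        temporal_extent=base.temporal_extent,
        fluidity=base.fluidity,
        power=base.power * k,
        repetition=base.repetition,
    )
```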


Artificial Intelligence Applications and Innovations | 2006

Manual Annotation and Automatic Image Processing of Multimodal Emotional Behaviors in TV Interviews

Jean-Claude Martin; George Caridakis; Laurence Devillers; Kostas Karpouzis; Sarkis Abrilian

Designing affective human-computer interfaces such as Embodied Conversational Agents requires modeling the relations between spontaneous emotions and behaviors in several modalities. There has been a great deal of psychological research on emotion and nonverbal communication, yet these studies were based mostly on acted basic emotions. This paper explores how manual annotation and image processing might cooperate towards the representation of spontaneous emotional behavior in low-resolution videos from TV. We describe a corpus of TV interviews and the manual annotations that have been defined. We explain the image-processing algorithms that have been designed for the automatic estimation of movement quantity. Finally, we explore several ways to compare the manual annotations with the cues extracted by image processing.
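One simple way to compare manual annotations with automatically extracted cues is to correlate, per clip, the annotated movement or activation level with the estimated movement quantity once both are sampled on the same time base. The sketch below uses a generic Pearson correlation with hypothetical data; it is not the paper's exact validation procedure.

```python
import numpy as np

# Generic comparison of a manual annotation track with an automatically extracted cue.
def compare(annotated: np.ndarray, estimated: np.ndarray) -> float:
    """Both arrays sampled on the same time base (one value per frame or segment)."""
    if annotated.std() == 0 or estimated.std() == 0:
        return float("nan")                    # correlation undefined for constant signals
    return float(np.corrcoef(annotated, estimated)[0, 1])

# Hypothetical example: coarse 3-level manual annotation vs. continuous estimate.
manual = np.array([0, 0, 1, 2, 2, 1, 0, 0], dtype=float)
auto = np.array([0.1, 0.2, 0.5, 0.9, 0.8, 0.4, 0.2, 0.1])
print(round(compare(manual, auto), 2))
```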


Revue d'Intelligence Artificielle | 2006

Du corpus vidéo à l'agent expressif. Utilisation des différents niveaux de représentation multimodale et émotionnelle [From video corpus to expressive agent: using the different levels of multimodal and emotional representation]

Jean-Claude Martin; Sarkis Abrilian; Laurence Devillers; Myriam Lamolle; Maurizio Mancini; Catherine Pelachaud

Relations between emotions and multimodal behaviors have mostly been studied in the case of acted basic emotions. In this paper, we describe two experiments studying these relations with a copy-synthesis approach. We start from video clips of TV interviews containing real-life behaviors. A protocol and a coding scheme have been defined for annotating these clips at several levels (context, emotion, multimodality). The first experiment enabled us to manually identify the levels of representation required for replaying the annotated behaviors with an expressive agent. The second experiment involved automatic extraction of information from the multimodal annotations. Such an approach makes it possible to study the complex relations between emotion and multimodal behaviors.


Conference of the International Speech Communication Association | 2005

Multimodal databases of everyday emotion: facing up to complexity.

Ellen Douglas-Cowie; Laurence Devillers; Jean-Claude Martin; Roddy Cowie; Suzie Savvidou; Sarkis Abrilian; Cate Cox

Collaboration


Sarkis Abrilian's top co-authors and their affiliations.

Catherine Pelachaud

Centre national de la recherche scientifique

Kostas Karpouzis

National Technical University of Athens

George Caridakis

National Technical University of Athens
