Magalie Ochs
Aix-Marseille University
Publications
Featured research published by Magalie Ochs.
Advances in Computer Entertainment Technology | 2013
Keith Anderson; Elisabeth André; Tobias Baur; Sara Bernardini; Mathieu Chollet; Evi Chryssafidou; Ionut Damian; Cathy Ennis; Arjan Egges; Patrick Gebhard; Hazaël Jones; Magalie Ochs; Catherine Pelachaud; Kaska Porayska-Pomsta; Paola Rizzo; Nicolas Sabouret
The TARDIS project aims to build a scenario-based serious-game simulation platform for NEETs and job-inclusion associations that supports social training and coaching in the context of job interviews. This paper presents the general architecture of the TARDIS job interview simulator, and the serious game paradigm that we are developing.
Intelligent Virtual Agents | 2013
Brian Ravenet; Magalie Ochs; Catherine Pelachaud
Humans’ non-verbal behavior may convey different meanings: it can reflect one’s emotional states and communicative intentions, but also one’s social relations with another person, i.e. one’s interpersonal attitude. In order to determine the non-verbal behavior that a virtual agent should display to convey particular interpersonal attitudes, we have collected a corpus of virtual agents’ non-verbal behaviors created directly by users. Based on an analysis of this corpus, we propose a Bayesian model to automatically compute the virtual agent’s non-verbal behavior conveying interpersonal attitudes.
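A minimal sketch of the idea behind such a Bayesian behavior model: given a target interpersonal attitude, select the nonverbal signals with the highest conditional probability, as would be estimated from corpus counts. The attitude and signal labels and all probabilities below are illustrative assumptions, not values from the paper's corpus.

```python
# P(signal | attitude) tables, as would be estimated from a user-built
# corpus of agent behaviors. Values here are invented for illustration.
COND_PROB = {
    "dominant":  {"gaze_direct": 0.8, "gaze_averted": 0.2,
                  "smile": 0.3, "no_smile": 0.7},
    "friendly":  {"gaze_direct": 0.6, "gaze_averted": 0.4,
                  "smile": 0.9, "no_smile": 0.1},
}

def most_likely_behavior(attitude, signal_groups):
    """For each group of mutually exclusive signals, keep the most probable one."""
    table = COND_PROB[attitude]
    return [max(group, key=lambda s: table.get(s, 0.0))
            for group in signal_groups]

groups = [("gaze_direct", "gaze_averted"), ("smile", "no_smile")]
print(most_likely_behavior("friendly", groups))  # ['gaze_direct', 'smile']
```

The full model would condition on more variables (e.g. conversational context) and sample rather than maximize, but the selection principle is the same.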
Autonomous Agents and Multi-Agent Systems | 2012
Magalie Ochs; David Sadek; Catherine Pelachaud
Recent research has shown that virtual agents expressing empathic emotions toward users have the potential to enhance human–machine interaction. To provide empathic capabilities to a rational dialog agent, we propose a formal model of emotions based on an empirical and theoretical analysis of the users’ conditions of emotion elicitation. The emotions are represented by particular mental states of the agent, composed of beliefs, uncertainties and intentions. This semantically grounded formal representation enables a rational dialog agent to identify from a dialogical situation the empathic emotion that it should express. An implementation and an evaluation of an empathic rational dialog agent have enabled us to validate the proposed model of empathy.
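A hedged sketch of representing an emotion as a mental state built from beliefs, uncertainties, and intentions, in the spirit of the BDI-style formal model described above. The predicates and the trigger rules below are illustrative assumptions, not the paper's formalization.

```python
from dataclasses import dataclass, field

@dataclass
class MentalState:
    """Agent mental state: the three components named in the abstract."""
    beliefs: set = field(default_factory=set)
    intentions: set = field(default_factory=set)
    uncertainties: set = field(default_factory=set)

def empathic_emotion(agent: MentalState, user_goal: str) -> str:
    """Derive the empathic emotion the agent should express toward the user."""
    if f"achieved({user_goal})" in agent.beliefs:
        return "joy"        # user's goal believed satisfied
    if f"failed({user_goal})" in agent.beliefs:
        return "sadness"    # user's goal believed unachievable
    if f"threatened({user_goal})" in agent.uncertainties:
        return "fear"       # goal achievement uncertain
    return "neutral"

state = MentalState(beliefs={"failed(book_ticket)"})
print(empathic_emotion(state, "book_ticket"))  # sadness
```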
Cognitive Processing | 2012
Magalie Ochs; Radoslaw Niewiadomski; Paul M. Brunet; Catherine Pelachaud
A smile may communicate different communicative intentions depending on subtle characteristics of the facial expression. In this article, we propose an algorithm to determine the morphological and dynamic characteristics of a virtual agent’s smiles of amusement, politeness, and embarrassment. The algorithm was defined based on a corpus of virtual agent smiles constructed by users and analyzed with a decision tree classification technique. An evaluation of the resulting smiles in different contexts has enabled us to validate the proposed algorithm.
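An illustrative sketch of the classification step: learning a mapping from smile morphology to smile type with a decision tree. The features (lip-corner amplitude, duration, cheek raising) and the toy training examples are invented for demonstration; the paper's corpus features differ.

```python
from sklearn.tree import DecisionTreeClassifier

# Each row: [lip-corner amplitude (0-1), duration (s), cheek raising (0/1)]
X = [[0.9, 2.0, 1], [0.8, 1.8, 1],   # amused: large, long, cheeks raised
     [0.4, 0.8, 0], [0.3, 0.7, 0],   # polite: small, short, no cheek raising
     [0.2, 1.5, 0], [0.3, 1.6, 0]]   # embarrassed: small but long
y = ["amused", "amused", "polite", "polite", "embarrassed", "embarrassed"]

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(tree.predict([[0.85, 1.9, 1]])[0])  # amused
```

Once trained, the tree's decision rules can be inverted to synthesize the characteristic feature values for each smile type, which is essentially what the proposed algorithm does with its corpus.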
Intelligent Virtual Agents | 2014
Mathieu Chollet; Magalie Ochs; Catherine Pelachaud
In this paper, we present a model, and its evaluation, for expressing attitudes through sequences of non-verbal signals for Embodied Conversational Agents. To build our model, a corpus of job interviews was annotated at two levels: both the recruiters’ non-verbal behavior and their expressed attitudes were annotated. Using a sequence mining method, sequences of non-verbal signals characterizing different interpersonal attitudes were automatically extracted from the corpus. From this data, a probabilistic graphical model was built and is used to select the most appropriate sequences of non-verbal signals that an ECA should display to convey a particular attitude. The results of a perceptive evaluation of sequences generated by the model show that such a model can be used to express interpersonal attitudes.
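A hedged sketch of the extraction step: count short ordered signal subsequences per annotated attitude, then select the subsequence most characteristic of a target attitude. The corpus contents and signal names below are made up, and the paper uses a proper sequence-mining algorithm plus a graphical model rather than this naive counting.

```python
from collections import Counter
from itertools import combinations

corpus = [  # (annotated attitude, ordered nonverbal signals of one turn)
    ("hostile",  ["frown", "lean_back", "gaze_away"]),
    ("hostile",  ["frown", "gaze_away"]),
    ("friendly", ["smile", "nod", "lean_forward"]),
    ("friendly", ["smile", "lean_forward"]),
]

def mine(attitude, length=2):
    """Count ordered subsequences of a given length for one attitude."""
    counts = Counter()
    for att, signals in corpus:
        if att == attitude:
            counts.update(combinations(signals, length))
    return counts

def best_sequence(attitude):
    """Most frequent mined subsequence for the target attitude."""
    return mine(attitude).most_common(1)[0][0]

print(best_sequence("friendly"))  # ('smile', 'lean_forward')
```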
Intelligent Virtual Agents | 2015
Brian Ravenet; Angelo Cafaro; Beatrice Biancardi; Magalie Ochs; Catherine Pelachaud
In this paper we propose a computational model for the real time generation of nonverbal behaviors supporting the expression of interpersonal attitudes for turn-taking strategies and group formation in multi-party conversations among embodied conversational agents. Starting from the desired attitudes that an agent aims to express towards every other participant, our model produces the nonverbal behavior that should be exhibited in real time to convey such attitudes while managing the group formation and attempting to accomplish the agent’s own turn-taking strategy. We also propose an evaluation protocol for similar multi-agent configurations. We conducted a study following this protocol to evaluate our model. Results showed that subjects properly recognized the attitudes expressed by the agents through their nonverbal behavior and turn taking strategies generated by our system.
Intelligent Virtual Agents | 2015
Atef Ben Youssef; Mathieu Chollet; Hazaël Jones; Nicolas Sabouret; Catherine Pelachaud; Magalie Ochs
This paper presents a socially adaptive virtual agent that can adapt its behaviour according to social constructs (e.g. attitude, relationship) that are updated depending on the behaviour of its interlocutor. We consider the context of job interviews, with the virtual agent playing the role of the recruiter. The evaluation of our approach is based on a comparison of the socially adaptive agent to a simple scripted agent and to an emotionally reactive one. Videos of these three agents in interaction were created and evaluated by 83 participants. This subjective evaluation shows that the simulation and expression of social attitude is perceived by users and impacts the evaluation of the agent’s credibility. We also found that while the virtual agent’s emotion expression has an immediate impact on the user’s experience, the impact of the virtual agent’s attitude expression grows stronger after a few speaking turns.
Intelligent Virtual Agents | 2010
Magalie Ochs; Radoslaw Niewiadomski; Catherine Pelachaud
A smile may communicate different meanings depending on subtle characteristics of the facial expression. In this article, we have studied the morphological and dynamic characteristics of amused, polite, and embarrassed smiles displayed by a virtual agent. A web application was developed to collect a corpus of virtual agent smile descriptions constructed directly by users. Based on this corpus and using a decision tree classification technique, we propose an algorithm to determine the characteristics of each type of smile that a virtual agent may express. The proposed algorithm enables one to generate a variety of facial expressions corresponding to polite, embarrassed, and amused smiles.
Proceedings of the 2nd International Workshop on Social Signal Processing | 2010
Radoslaw Niewiadomski; Ken Prepin; Elisabetta Bevacqua; Magalie Ochs; Catherine Pelachaud
The smile is one of the most frequently used nonverbal signals. Depending on when, how, and where it is displayed, it may convey various meanings. We believe that introducing a variety of smiles may improve the communicative skills of embodied conversational agents. In this paper, we present ongoing research on the role of the smile in embodied conversational agents. In particular, we analyze the significance of smiling while the agent is either speaking or listening. We also show how the agent may communicate different messages, such as amusement, embarrassment, and politeness, through different smile morphologies and dynamics.
Revue d'Intelligence Artificielle | 2006
Magalie Ochs; Radoslaw Niewiadomski; Catherine Pelachaud; David Sadek
We propose a computational model of emotions that takes into account two aspects of emotions: the emotions triggered by an event and the expressed emotions (the displayed ones), which may differ in real life. More particularly, we present a formalization of emotion-eliciting events based on a model of the agent’s mental state composed of beliefs, choices, and uncertainties, which enables one to identify the emotional state of an agent at any time. We also introduce a fuzzy-logic-based model that computes blended facial expressions for the different kinds of emotions. Finally, examples of facial expressions resulting from the implementation of our model are shown.
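An illustrative blending sketch: each active emotion contributes to facial action units with a degree proportional to its intensity, and the displayed expression is the weighted combination. The action-unit names and prototype weights below are invented for the example; the paper's fuzzy rules are more elaborate.

```python
def blend_expression(emotions):
    """emotions: {emotion: intensity in [0, 1]} -> blended AU activations."""
    AU_PROFILES = {  # prototypical action-unit activations per emotion
        "joy":     {"AU6_cheek_raiser": 1.0, "AU12_lip_corner": 1.0},
        "sadness": {"AU1_inner_brow": 1.0, "AU15_lip_depressor": 0.8},
    }
    blended = {}
    total = sum(emotions.values()) or 1.0  # normalize by total intensity
    for emotion, intensity in emotions.items():
        for au, level in AU_PROFILES[emotion].items():
            blended[au] = blended.get(au, 0.0) + level * intensity / total
    return blended

mix = blend_expression({"joy": 0.6, "sadness": 0.4})
print(round(mix["AU12_lip_corner"], 2))  # 0.6
```

Superposing action units this way lets a joy/sadness mix appear on different face regions at once, which is the intuition behind blended expressions of triggered versus displayed emotions.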