Elisabetta Bevacqua
University of Paris
Publications
Featured research published by Elisabetta Bevacqua.
Intelligent Virtual Agents | 2005
Christopher E. Peters; Catherine Pelachaud; Elisabetta Bevacqua; Maurizio Mancini; Isabella Poggi
One of the major problems in users' interaction with Embodied Conversational Agents (ECAs) is keeping the conversation going for more than a few seconds: after being amused and intrigued by the ECA, users may quickly run into the restrictions and limitations of the dialogue system, perceive the repetitiveness of the ECA's animation, find the ECA's behaviours inconsistent and implausible, and so on. We believe that special links, or bonds, have to be established between users and ECAs during interaction. It is our view that showing and/or perceiving interest is the necessary premise for establishing such a relationship. In this paper we present a model of an ECA able to establish, maintain and end a conversation based on its perception of its interlocutor's level of interest.
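A minimal sketch of the idea of driving the conversation from the perceived interest level. The state names, thresholds and update rule below are assumptions for illustration, not the authors' actual model.

```python
# Illustrative sketch only: an interest-driven dialogue controller.
# Thresholds and state names are hypothetical, not taken from the paper.

from dataclasses import dataclass

@dataclass
class InterestEstimate:
    level: float  # perceived interlocutor interest in [0, 1]

def next_dialogue_move(state: str, interest: InterestEstimate) -> str:
    """Choose the agent's next conversational move from the perceived interest."""
    if state == "opening":
        # Only try to establish the conversation if some interest is perceived.
        return "establish" if interest.level > 0.3 else "wait"
    if state == "ongoing":
        if interest.level < 0.2:
            return "close"          # interlocutor has disengaged
        if interest.level < 0.5:
            return "re_engage"      # e.g. ask a question, change topic
        return "maintain"
    return "idle"

if __name__ == "__main__":
    print(next_dialogue_move("ongoing", InterestEstimate(level=0.15)))  # -> close
```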
Language Resources and Evaluation | 2007
George Caridakis; Amaryllis Raouzaiou; Elisabetta Bevacqua; Maurizio Mancini; Kostas Karpouzis; Lori Malatesta; Catherine Pelachaud
This work concerns multimodal, expressive synthesis for virtual agents, based on the analysis of actions performed by human users. As input we consider the image sequence of the recorded human behaviour. Computer vision and image processing techniques are used to detect the cues needed to extract expressivity features. The approach is multimodal in that both facial and gestural aspects of the user's behaviour are analysed and processed. The mimicry consists of perception, interpretation, planning and animation of the expressions shown by the human, resulting not in an exact duplicate but rather in an expressive model of the user's original behaviour.
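A rough sketch of the analysis-to-synthesis idea: derive expressivity parameters from tracked motion and reuse them to drive the agent, rather than copying the trajectory itself. The feature definitions below are simplified stand-ins, not the paper's exact formulas.

```python
# Simplified expressivity-feature extraction from a tracked point trajectory.
# Feature definitions are illustrative approximations, not the published ones.

import numpy as np

def expressivity_features(trajectory: np.ndarray, fps: float = 25.0) -> dict:
    """trajectory: (T, 2) image coordinates of a tracked point (e.g. a hand)."""
    velocity = np.diff(trajectory, axis=0) * fps
    acceleration = np.diff(velocity, axis=0) * fps
    return {
        # How much space the movement covers (spatial extent).
        "spatial_extent": float(np.ptp(trajectory, axis=0).mean()),
        # How energetic the movement is (power).
        "power": float(np.linalg.norm(acceleration, axis=1).max()),
        # How smooth the movement is (fluidity): low jerk -> high fluidity.
        "fluidity": float(1.0 / (1.0 + np.abs(np.diff(acceleration, axis=0)).mean())),
    }

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 50)
    circle = np.stack([np.cos(t) * 40, np.sin(t) * 40], axis=1)  # a circular gesture
    print(expressivity_features(circle))
```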
Intelligent Virtual Agents | 2008
Elisabetta Bevacqua; Maurizio Mancini; Catherine Pelachaud
Within the Sensitive Artificial Listening Agent project, we propose a system that computes the behaviour of a listening agent. Such an agent must exhibit behaviour variations that depend not only on its mental state towards the interaction (e.g., whether or not it agrees with the speaker) but also on the agent's characteristics, such as its emotional traits and its behaviour style. Our system computes the behaviour of the listening agent in real time.
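A small sketch of how a listener's reaction could be chosen from its mental state and then rendered according to a behaviour style. The signal names and the style table are assumptions, not the system's actual parameters.

```python
# Illustrative sketch: mental state (agreement) selects the signal,
# behaviour style scales how it is displayed. All values are hypothetical.

STYLE = {
    "expressive": {"head_nod_amplitude": 1.0, "smile_intensity": 0.8},
    "subdued":    {"head_nod_amplitude": 0.4, "smile_intensity": 0.2},
}

def listener_reaction(agrees: bool, style: str) -> dict:
    """Pick a backchannel signal and scale it by the agent's behaviour style."""
    signal = "head_nod" if agrees else "head_shake"
    params = STYLE.get(style, STYLE["subdued"])
    return {"signal": signal,
            "amplitude": params["head_nod_amplitude"],
            "smile": params["smile_intensity"] if agrees else 0.0}

if __name__ == "__main__":
    print(listener_reaction(agrees=True, style="expressive"))
```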
Computer Animation and Virtual Worlds | 2004
Elisabetta Bevacqua; Catherine Pelachaud
We aim at the realization of an Embodied Conversational Agent able to interact naturally and emotionally with the user. In particular, the agent should behave expressively. Specifying, for a given emotion, its corresponding facial expression is not enough to produce the sensation of expressivity; one also needs to specify parameters such as intensity, tension and movement properties. Moreover, emotion also affects lip shapes during speech, and simply adding the facial expression of emotion to the lip shape does not produce readable lip movement. In this paper we present a model based on real data captured from a speaker wearing passive markers; the data cover natural as well as emotional speech. We present an algorithm that determines the appropriate viseme and applies coarticulation and correlation rules to take into account the vocalic and consonantal contexts, as well as muscular phenomena such as lip compression and lip stretching. Expressive qualifiers are then used to modulate the expressivity of lip movement. Our model of lip movement is applied to a 3D facial model compliant with the MPEG-4 standard.
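A minimal sketch of the general idea of viseme selection, coarticulation with the neighbouring context, and an expressivity qualifier. The viseme table, blending rule and articulation factor below are illustrative placeholders, not the paper's measured data or MPEG-4 FAP values.

```python
# Toy viseme + coarticulation + expressivity sketch. Values are hypothetical.

# A tiny phoneme-to-viseme lip-opening table (made-up values in [0, 1]).
VISEME_OPENING = {"a": 0.9, "i": 0.4, "u": 0.3, "m": 0.0, "b": 0.0, "t": 0.2}

def lip_opening(phonemes: list[str], articulation: float = 1.0) -> list[float]:
    """Return one lip-opening value per phoneme.

    Each target is influenced by its neighbours (a crude stand-in for
    coarticulation); `articulation` scales the result, e.g. < 1 to mimic
    lip compression in sadness, > 1 for wider articulation in joy.
    """
    targets = [VISEME_OPENING.get(p, 0.5) for p in phonemes]
    out = []
    for i, value in enumerate(targets):
        prev_v = targets[i - 1] if i > 0 else value
        next_v = targets[i + 1] if i + 1 < len(targets) else value
        # Blend the target with its context so closed consonants open
        # slightly between open vowels, and vice versa.
        blended = 0.5 * value + 0.25 * prev_v + 0.25 * next_v
        out.append(min(1.0, blended * articulation))
    return out

if __name__ == "__main__":
    print(lip_opening(list("mama"), articulation=0.7))  # a "sad" rendition
```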
Intelligent Virtual Agents | 2007
Dirk Heylen; Elisabetta Bevacqua; Marion Tellier; Catherine Pelachaud
Embodied conversational agents should be able to provide feedback on what a human interlocutor is saying. We are compiling a list of facial feedback expressions that signal attention and interest, grounding, and attitude. As expressions need to serve many functions at the same time and most of the component signals are ambiguous, it is important to get a better idea of the many-to-many mappings between displays and functions. We asked people to label several dynamic expressions as a probe into this semantic space. We compare simple signals and combined signals in order to find out whether a combination of signals can have a meaning of its own, i.e., whether the meaning of the single signals differs from the meaning attached to their combination. Results show that in some cases a combination of signals alters the perceived meaning of the backchannel.
Intelligent Virtual Agents | 2010
Elisabetta Bevacqua; Sathish Pammi; Sylwia Julia Hyniewska; Marc Schröder; Catherine Pelachaud
One of the most desirable characteristics of an Embodied Conversational Agent (ECA) is the capability to interact with users in a human-like manner. While listening to a user, an ECA should be able to provide backchannel signals through the visual and acoustic modalities. In this work we propose an improvement of our previous system for generating multimodal backchannel signals on the visual and acoustic modalities. A perceptual study has been performed to understand how context-free multimodal backchannels are interpreted by users.
Emotion-oriented Systems: The Humaine Handbook | 2011
Dirk Heylen; Elisabetta Bevacqua; Catherine Pelachaud; Isabella Poggi; Jonathan Gratch; Marc Schröder
In face-to-face conversations listeners provide feedback and comments at the same time as speakers are uttering their words and sentences. This ‘talk’ in the backchannel provides speakers with information about the reception and acceptance – or lack thereof – of their speech. Listeners, through short verbalisations and non-verbal signals, show how they are engaged in the dialogue. The lack of incremental, real-time processing has hampered the creation of conversational agents that can respond to the human interlocutor in real time as the speech is being produced. The need for such feedback in conversational agents is, however, undeniable, for reasons of naturalism and believability, to increase the efficiency of communication, and to show engagement and build rapport. In this chapter, the joint activity of speakers and listeners that constitutes a conversation is examined more closely, and the work devoted to the construction of agents that are able to show that they are listening is reviewed. Two issues are dealt with in more detail: the first is the search for appropriate responses for an agent to display; the second is the study of how listening responses may increase rapport between agents and their human partners in conversation.
Journal on Multimodal User Interfaces | 2012
Elisabetta Bevacqua; Etienne de Sevin; Sylwia Julia Hyniewska; Catherine Pelachaud
We present a computational model that generates listening behaviour for a virtual agent. It triggers backchannel signals according to the user's visual and acoustic behaviour. The appropriateness of the backchannel algorithm in a user-agent storytelling situation was evaluated by naïve participants, who judged the algorithm-driven timing of backchannels more positively than random timing. The system can generate different types of backchannels; the choice of the type and frequency of the backchannels to display takes into account the agent's personality traits. The personality of the agent is defined in terms of two dimensions, extroversion and neuroticism. We link agents with a higher level of extroversion to a higher tendency to perform backchannels than introverted ones, and we link neuroticism to less mimicry and to more response and reactive signals. We ran a perception study to test these relations in agent-user interactions, as evaluated by third parties. We find that the backchannel frequency selected by our algorithm contributes to the correct interpretation of the agent's behaviour in terms of personality traits.
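A sketch of the high-level idea only: personality traits modulate how often the agent backchannels and which kind of signal it prefers. The numeric mapping below is an assumption for illustration, not the published model.

```python
# Hypothetical personality-to-backchannel mapping; probabilities are made up.

import random

def backchannel_decision(extroversion: float, neuroticism: float,
                         trigger: bool, rng: random.Random) -> str | None:
    """Decide whether and how to backchannel when the user's behaviour offers a chance.

    extroversion, neuroticism: personality scores in [0, 1].
    trigger: True when the user's acoustic/visual behaviour triggers an opportunity.
    """
    if not trigger:
        return None
    # More extroverted agents respond to a larger share of opportunities.
    if rng.random() > 0.3 + 0.6 * extroversion:
        return None
    # More neurotic agents mimic less and send more reactive/response signals.
    mimicry_prob = 0.5 * (1.0 - neuroticism)
    return "mimicry" if rng.random() < mimicry_prob else "reactive_signal"

if __name__ == "__main__":
    rng = random.Random(0)
    print([backchannel_decision(0.9, 0.2, True, rng) for _ in range(10)])
```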
International Conference on 3D Web Technology | 2011
Radoslaw Niewiadomski; Mohammad Obaid; Elisabetta Bevacqua; Julian Looser; Le Quoc Anh; Catherine Pelachaud
We have developed a general-purpose, modular architecture for an embodied conversational agent (ECA). Our agent is able to communicate using verbal and non-verbal channels such as gaze, facial expressions and gestures. The architecture follows the SAIBA framework, which defines a three-step process and the communication protocols between the steps. In our implementation of the SAIBA architecture we focus on flexibility and introduce different levels of customization. In particular, our system can display the same communicative intention with different embodiments, be it a virtual agent or a robot. Moreover, our framework is independent of the animation player technology: agent animations can be displayed across different media, such as a web browser, virtual reality or augmented reality. In this paper we present our agent architecture and its main features.
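A sketch of the SAIBA-style separation of concerns: communicative intent is planned independently of the behaviours that express it, and behaviours are realised by interchangeable players (virtual agent, robot, web view). The class names, message shapes and functions below are illustrative, not the system's actual API.

```python
# Toy SAIBA-like pipeline: intent planning -> behaviour planning -> realization.
# FML/BML are approximated by plain dicts purely for illustration.

from abc import ABC, abstractmethod

def plan_intent(user_input: str) -> dict:
    """Intent planner: produce an FML-like description of what to communicate."""
    return {"intent": "greet", "emotion": "joy"}

def plan_behaviour(fml: dict) -> list[dict]:
    """Behaviour planner: turn intent into BML-like multimodal signals."""
    if fml["intent"] == "greet":
        return [{"modality": "speech", "text": "Hello!"},
                {"modality": "face", "expression": fml["emotion"]},
                {"modality": "gesture", "name": "wave"}]
    return []

class BehaviourRealizer(ABC):
    """Realizer interface: swap the embodiment without touching the planners."""
    @abstractmethod
    def play(self, bml: list[dict]) -> None: ...

class ConsoleRealizer(BehaviourRealizer):
    def play(self, bml: list[dict]) -> None:
        for signal in bml:
            print("render:", signal)

if __name__ == "__main__":
    realizer: BehaviourRealizer = ConsoleRealizer()
    realizer.play(plan_behaviour(plan_intent("hi there")))
```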
Affective Computing and Intelligent Interaction | 2009
Marc Schröder; Elisabetta Bevacqua; Florian Eyben; Hatice Gunes; Dirk Heylen; Mark ter Maat; Sathish Pammi; Maja Pantic; Catherine Pelachaud; Björn W. Schuller; Etienne de Sevin; Michel F. Valstar; Martin Wöllmer
Sensitive Artificial Listeners (SAL) are virtual dialogue partners who, despite their very limited verbal understanding, aim to engage the user in a conversation by paying attention to the user's emotions and non-verbal expressions. The SAL characters have their own emotionally defined personality and attempt to draw the user towards their dominant emotion through a combination of verbal and non-verbal expression. The demonstrator shows an early version of the fully autonomous SAL system based on audiovisual analysis and synthesis.