Isabella Poggi
Roma Tre University
Publications
Featured research published by Isabella Poggi.
IEEE Transactions on Affective Computing | 2012
Alessandro Vinciarelli; Maja Pantic; Dirk Heylen; Catherine Pelachaud; Isabella Poggi; Francesca D'Errico; Marc Schroeder
Social Signal Processing is the research domain aimed at bridging the social intelligence gap between humans and machines. This paper is the first survey of the domain that jointly considers its three major aspects, namely, modeling, analysis, and synthesis of social behavior. Modeling investigates laws and principles underlying social interaction, analysis explores approaches for automatic understanding of social exchanges recorded with different sensors, and synthesis studies techniques for the generation of social behavior via various forms of embodiment. For each of the above aspects, the paper includes an extensive survey of the literature, points to the most important publicly available resources, and outlines the most fundamental challenges ahead.
Springer Berlin Heidelberg | 2004
Berardina De Carolis; Catherine Pelachaud; Isabella Poggi; Mark Steedman
Developing an embodied conversational agent able to exhibit humanlike behavior while communicating with other virtual or human agents requires enriching the dialogue of the agent with non-verbal information. Our agent, Greta, is defined as two components: a Mind and a Body. Her mind reflects her personality, her social intelligence, as well as her emotional reaction to events occurring in the environment. Her body corresponds to her physical appearance, which is able to display expressive behaviors. We designed a Mind-Body interface that takes as input a specification of a discourse plan in an XML language (DPML) and enriches this plan with the communicative meanings that have to be attached to it, producing an input to the Body in a new XML language (APML). Moreover, we have developed a language to describe facial expressions. It combines basic facial expressions with operators to create complex facial expressions. The purpose of this chapter is to describe these languages and to illustrate our approach to the generation of behavior of an agent able to act consistently with her goals and with the context of the interaction.
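A minimal sketch, in Python, of the Mind-Body enrichment step described above: a DPML discourse-plan fragment is wrapped in an APML-like tag carrying a communicative meaning. The tag and attribute names ("node", "affect", "type") are illustrative assumptions, not the published DPML/APML definitions.

import xml.etree.ElementTree as ET

def enrich(dpml_node, meaning, value):
    """Attach a communicative meaning to a discourse-plan node."""
    apml_node = ET.Element(meaning, {"type": value})   # e.g. <affect type="joy">
    apml_node.append(dpml_node)                        # wrap the plan fragment
    return apml_node

plan = ET.fromstring("<node>I am glad to see you</node>")  # assumed DPML fragment
print(ET.tostring(enrich(plan, "affect", "joy"), encoding="unicode"))
# -> <affect type="joy"><node>I am glad to see you</node></affect>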
Adaptive Agents and Multi-Agent Systems | 2002
Catherine Pelachaud; Valeria Carofiglio; Berardina De Carolis; Fiorella de Rosis; Isabella Poggi
We aim at building a new human-computer interface for Information Delivering applications: the conversational agent that we have developed is a multimodal believable agent able to converse with the User by exhibiting synchronized and coherent verbal and nonverbal behavior. The agent is provided with a personality and a social role, which allow her to show her emotion or to refrain from showing it, depending on the context in which the conversation takes place. The agent is provided with a face and a mind. The mind is designed according to a BDI structure that depends on the agent's personality; it evolves dynamically during the conversation, according to the User's dialog moves and to emotions triggered as a consequence of the Interlocutor's moves; such cognitive features are then translated into facial behaviors. In this paper, we describe the overall architecture of our system and its various components; in particular, we present our dynamic model of emotions. We illustrate our results with a dialog example that runs throughout the paper. We pay particular attention to the generation of verbal and nonverbal behaviors and to the way they are synchronized and combined with each other. We also discuss how these acts are translated into facial expressions.
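A hedged sketch of the mind-to-face pipeline the abstract describes: a BDI-style mental state is updated by the User's dialog move, the appraisal triggers an emotion, and the emotion is mapped to a facial behavior. All rules, names, and mappings here are assumed for illustration, not taken from the authors' implementation.

from dataclasses import dataclass, field

@dataclass
class Mind:
    goals: set = field(default_factory=lambda: {"be_helpful"})
    emotion: str = "neutral"

    def react(self, user_move):
        # Toy appraisal rule: a move that thwarts an active goal triggers distress.
        if user_move == "complain" and "be_helpful" in self.goals:
            self.emotion = "sorry-for"
        elif user_move == "thank":
            self.emotion = "joy"
        return self.emotion

# Assumed emotion-to-face mapping.
FACE = {"neutral": "relaxed face", "sorry-for": "inner eyebrows raised", "joy": "smile"}

mind = Mind()
print(FACE[mind.react("complain")])  # -> inner eyebrows raised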
Journal of Visualization and Computer Animation | 2002
Catherine Pelachaud; Isabella Poggi
Our goal is to develop a believable embodied agent able to dialogue with a user. In particular, we aim at making an agent that can combine facial expressions in a complex and subtle way, just as a human agent does. We first review a taxonomy of communicative functions that our agent is able to express non-verbally, but we point out that, due to the complexity of communication, in some cases different information can be provided at once by different parts and actions of an agent's face. In this paper we are interested in assessing and treating what happens, at the meaning and signal levels of behaviour, when different communicative functions have to be displayed at the same time and necessarily have to make use of the same expressive resources. In some of these cases the complexity of the agent's communication can give rise to conflicts between the parts or movements of the face. We propose a way to manage the possible conflicts between different modalities of communication through the tool of belief networks, and we show how this tool allows us to combine facial expressions of different communicative functions and to display complex and subtle expressions.
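The following toy example shows, under heavy simplification, the kind of conflict the paper addresses: two communicative functions compete for the same expressive resource (the eyebrows), and beliefs about which signal best serves each function are combined to pick one action. A belief network proper would propagate probabilities through a graph; this single-step weighted choice, with assumed numbers, only illustrates the idea.

# P(eyebrow action | communicative function); the values are assumptions.
P_SIGNAL = {
    "emphasis":    {"raise": 0.8, "frown": 0.2},
    "uncertainty": {"raise": 0.3, "frown": 0.7},
}

def resolve(functions):
    """Pick the eyebrow action with the highest combined belief."""
    actions = {"raise": 1.0, "frown": 1.0}
    for f in functions:
        for a in actions:
            actions[a] *= P_SIGNAL[f][a]  # naive independent combination
    return max(actions, key=actions.get)

print(resolve(["emphasis", "uncertainty"]))  # -> "raise" (0.24 vs 0.14)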
Intelligent Virtual Agents | 2005
Christopher E. Peters; Catherine Pelachaud; Elisabetta Bevacqua; Maurizio Mancini; Isabella Poggi
One of the major problems of users' interaction with Embodied Conversational Agents (ECAs) is having the conversation last more than a few seconds: after being amused and intrigued by the ECA, users may rapidly discover the restrictions and limitations of the dialog system, they may perceive the repetitiveness of the ECA's animations, they may find the behaviors of ECAs to be inconsistent and implausible, etc. We believe that some special links, or bonds, have to be established between users and ECAs during interaction. It is our view that showing and/or perceiving interest is the necessary premise to establish a relationship. In this paper we present a model of an ECA able to establish, maintain and end the conversation based on its perception of the level of interest of its interlocutor.
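A minimal sketch of the interest-driven dialog policy the abstract outlines: the agent accumulates an estimate of the interlocutor's interest from observed cues and decides whether to continue, re-engage, or close the conversation. Cue weights and thresholds are illustrative assumptions.

# Assumed contribution of each observed cue to the interest estimate.
CUE_WEIGHTS = {"gaze_at_agent": +0.2, "gaze_away": -0.3, "backchannel": +0.1}

def update_interest(interest, cues):
    for cue in cues:
        interest += CUE_WEIGHTS.get(cue, 0.0)
    return max(0.0, min(1.0, interest))  # clamp to [0, 1]

def dialog_move(interest):
    if interest < 0.2:
        return "close_conversation"
    if interest < 0.5:
        return "re_engage"        # e.g. ask the user a question
    return "continue_topic"

level = update_interest(0.6, ["gaze_away"])
print(round(level, 2), dialog_move(level))  # -> 0.3 re_engage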
Archive | 1990
Cristiano Castelfranchi; Isabella Poggi
The aim of this chapter is to consider the social and biological functions of shame and the communicative value of its most typical expression, blushing, while arguing against Darwin's theory of blushing, which would deny it any specific function.
Multimodal Intelligent Information Presentation | 2005
Isabella Poggi; Catherine Pelachaud; Fiorella de Rosis; Valeria Carofiglio; Berardina De Carolis
1. INTELLIGENT BELIEVABLE EMBODIED CONVERSATIONAL AGENTS

A wide area of research on Autonomous Agents is presently devoted to the construction of ECAs, Embodied Conversational Agents (Cassell et al. 2000; Pelachaud & Poggi, 2001). An ECA is a virtual Agent that interacts with a User or another Agent through multimodal communicative behavior. It has a realistic or cartoon-like body and it can produce spoken discourse and dialogue, use voice with appropriate prosody and intonation, exhibit the visemes corresponding to the words uttered, make gestures, assume postures, and produce facial expressions and communicative gaze behavior. An ECA is generally a Believable Agent, that is, one able to express emotion (Bates, 1994) and to exhibit a given personality (Loyall & Bates, 1997). But, according to recent literature (Trappl & Payr, in press; de Rosis et al., in press a), an Agent is even more believable if it can behave in ways typical of given cultures, and if it has a personal communicative style (Canamero & Aylett, in press; Ruttkay et al., in press). This is, in fact, what makes a human a human. Moreover, an ECA must be interactive, that is, take User and context into account, so as to tailor interaction to the particular User and context at hand. In an ECA that fulfils these constraints, the communicative output, that is, the particular combination of multimodal communicative signals displayed (words, prosody, gesture, face, gaze, body posture and movements), is determined by different aspects: (a) contents to communicate, (b) emotions, (c) personality, (d) culture, (e) style, (f) context and User sensitivity. At each moment of a communicative interaction, all of these aspects combine with each other to determine what the Agent will say, and how. In this paper we show how these aspects of an ECA can be modeled in terms of a belief and goal view of human communicative behavior. We then illustrate Greta, an ECA following these principles, which is being implemented in the context of the EU project MagiCster.
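To make the combination of aspects (a)-(f) concrete, here is an illustrative sketch (not the MagiCster implementation) in which each aspect re-weights the candidate signals available for one communicative meaning; all signals, weights, and rules are assumed.

def select_signals(meaning, profile):
    # Candidate signals per meaning, with base strengths (assumed values).
    base = {"joy": {"smile": 0.9, "raised_eyebrows": 0.5, "open_gesture": 0.7}}
    weights = dict(base[meaning])
    if profile.get("context") == "formal":
        weights["open_gesture"] *= 0.3    # context damps expansive gestures
    if profile.get("personality") == "extrovert":
        weights["open_gesture"] *= 1.5    # personal style amplifies them
    return [s for s, w in weights.items() if w >= 0.6]

print(select_signals("joy", {"context": "formal", "personality": "introvert"}))
# -> ['smile']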
Applied Artificial Intelligence | 2006
Maria Miceli; Fiorella de Rosis; Isabella Poggi
A relevant issue in the domain of natural argumentation and persuasion is the interaction (synergic or conflicting) between “rational” or “cognitive” modes of persuasion and “irrational” or “emotional” ones. This work provides a model of persuasion in general and of emotional persuasion in particular. We examine two basic modes for appealing to emotions, arguing that emotional persuasion does not necessarily coincide with irrational persuasion, and showing how the appeal to emotions is grounded on the strict and manifold relationship between emotions and goals, which is, so to speak, “exploited” by a persuader. We describe various persuasion strategies, propose a method to formalize and represent them as oriented graphs, and show how emotional and non-emotional strategies (and also emotional and non-emotional components within the same strategy) may interact with and strengthen each other. Finally, we address the role of uncertainty in persuasion strategies and show how it can be represented in persuasion graphs.
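A sketch of the graph representation the abstract mentions: a persuasion strategy as an oriented graph whose edges carry the persuader's uncertainty, with the strength of a whole chain computed from its steps. Node labels and confidence values are invented for illustration.

import math

# Edges: (premise, conclusion) -> persuader's confidence in the step.
STRATEGY = {
    ("you value health", "smoking harms health"): 0.9,   # non-emotional step
    ("smoking harms health", "fear of illness"): 0.6,    # appeal to emotion
    ("fear of illness", "goal: quit smoking"): 0.7,      # emotion-to-goal link
}

def path_strength(path):
    """Combine edge confidences along a persuasion chain (naive independence)."""
    return math.prod(STRATEGY[(a, b)] for a, b in zip(path, path[1:]))

chain = ["you value health", "smoking harms health",
         "fear of illness", "goal: quit smoking"]
print(round(path_strength(chain), 3))  # -> 0.378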
Visual Analysis of Humans | 2011
Maja Pantic; Roderick Cowie; Francesca D'Errico; Dirk Heylen; Marc Mehu; Catherine Pelachaud; Isabella Poggi; Marc Schroeder; Alessandro Vinciarelli
The exploration of how we react to the world and interact with it and each other remains one of the greatest scientific challenges. Recent research trends in the cognitive sciences argue that our common view of intelligence is too narrow, ignoring a crucial range of abilities that matter immensely for how people do in life. This range of abilities is called social intelligence and includes the ability to express and recognise social signals produced during social interactions, such as agreement, politeness, empathy, friendliness, conflict, etc., coupled with the ability to manage them in order to get along well with others while winning their cooperation. Social Signal Processing (SSP) is the new research domain that aims at understanding and modelling social interactions (human-science goals) and at providing computers with similar abilities in human-computer interaction scenarios (technological goals). SSP is in its infancy, and the journey towards artificial social intelligence and socially-aware computing is still long. This research agenda is twofold: a discussion of how the field is understood by people who are currently active in it, and a discussion of the issues that researchers in this formative field face.
International Workshop on Affective Interactions | 2001
Isabella Poggi; Catherine Pelachaud
This paper shows that emotional information conveyed by facial expression is often contained not only in the expression of emotions per se, but also in other communicative signals, namely the performatives of communicative acts. An analysis is provided of the performatives of suggesting, warning, ordering, imploring, approving and praising, both on the side of their cognitive structure and on the side of their facial expression, and it is shown that the meaning and the expression of emotions like sadness, anger, worry, uncertainty, happiness and surprise are contained in them. We also show that a common core of meaning is present in an emotion (surprise) as well as in other kinds of communicative signals (emphasis, back channel of doubt, adversative signals). We then discuss how the cognitive and expressive analyses of these communicative acts may be applied in the construction of expressive animated faces.
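As a rough illustration of the paper's central claim, the lookup below pairs some of the performatives analyzed with an emotion embedded in their expression; both the specific pairings and the facial actions are assumed examples, not the authors' analysis.

# performative -> (embedded emotion, assumed facial expression)
PERFORMATIVE_EMOTION = {
    "warning":   ("worry",     "raised inner eyebrows"),
    "imploring": ("sadness",   "oblique eyebrows, head tilt"),
    "ordering":  ("anger",     "frown, fixed gaze"),
    "approving": ("happiness", "smile, nod"),
}

def face_for(performative):
    emotion, action = PERFORMATIVE_EMOTION[performative]
    return f"{performative}: express {emotion} via {action}"

print(face_for("warning"))  # -> warning: express worry via raised inner eyebrows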