Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Catherine Pelachaud is active.

Publication


Featured research published by Catherine Pelachaud.


International Conference on Computer Graphics and Interactive Techniques | 1994

Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents

Justine Cassell; Catherine Pelachaud; Norman I. Badler; Mark Steedman; Brett Achorn; Tripp Becket; Brett Douville; Scott Prevost; Matthew Stone

We describe an implemented system that automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversation is created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive the generators for facial expressions, lip motions, eye gaze, head motion, and arm gestures. Coordinated arm, wrist, and hand motions are invoked to create semantically meaningful gestures. Throughout, we use examples from an actual synthesized, fully animated conversation.
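
As an illustration of the kind of rule-based pipeline the abstract describes, the sketch below (not the authors' implementation) shows how a dialogue planner's output, words annotated with timing and pitch accents, could drive synchronized nonverbal events. The `Word` and `BehaviorEvent` structures and the rules themselves are hypothetical.

```python
# A minimal sketch, assuming invented data structures and rules, of how
# planner output (text + intonation) might drive synchronized nonverbal behavior.

from dataclasses import dataclass

@dataclass
class Word:
    text: str
    accented: bool = False      # pitch-accented (new/rheme information)
    start: float = 0.0          # seconds, from the speech synthesizer
    end: float = 0.0

@dataclass
class BehaviorEvent:
    channel: str                # "gaze", "brow", "gesture", ...
    action: str
    time: float

def generate_nonverbal(words, speaker_gives_turn: bool):
    """Toy rules: accented words get a beat gesture and a brow raise;
    giving the turn ends with gaze toward the listener."""
    events = []
    for w in words:
        if w.accented:
            events.append(BehaviorEvent("gesture", "beat", w.start))
            events.append(BehaviorEvent("brow", "raise", w.start))
    if speaker_gives_turn and words:
        events.append(BehaviorEvent("gaze", "look_at_listener", words[-1].end))
    return events

if __name__ == "__main__":
    utterance = [Word("you", False, 0.0, 0.2),
                 Word("owe", True, 0.2, 0.5),
                 Word("me", False, 0.5, 0.7)]
    for e in generate_nonverbal(utterance, speaker_gives_turn=True):
        print(e)
```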


Cognitive Science | 1996

Generating facial expressions for speech

Catherine Pelachaud; Norman I. Badler; Mark Steedman

This article reports results from a program that produces high-quality animation of facial expressions and head movements as automatically as possible in conjunction with meaning-based speech synthesis, including spoken intonation. The goal of the research is as much to test and define our theories of the formal semantics for such gestures as to produce convincing animation. Towards this end, we have produced a high-level programming language for three-dimensional (3-D) animation of facial expressions. We have been concerned primarily with expressions conveying information correlated with the intonation of the voice: this includes the differences of timing, pitch, and emphasis that are related to such semantic distinctions of discourse as "focus," "topic," and "comment," "theme" and "rheme," or "given" and "new" information. We are also interested in the relation of affect or emotion to facial expression. Until now, systems have not embodied such rule-governed translation from spoken utterance meaning to facial expressions. Our system embodies rules that describe and coordinate these relations: intonation/information, intonation/affect, and facial expressions/affect. A meaning representation includes discourse information: what is contrastive/background information in the given context, and what is the "topic" or "theme" of the discourse? The system maps the meaning representation into how accents and their placement are chosen, how they are conveyed over facial expression, and how speech and facial expressions are coordinated. This determines a sequence of functional groups: lip shapes, conversational signals, punctuators, regulators, and manipulators. Our algorithms then impose synchrony, create coarticulation effects, and determine affectual signals, eye and head movements. The lowest level representation is the Facial Action Coding System (FACS), which makes the generation system portable to other facial models. We would like to thank Steve Platt for his facial model and for very useful comments. We would like to thank Soetjianto and Khairol Yussof, who have improved the facial model. We are also very grateful to Jean Griffin, Francisco Azuola, and Mike Edwards, who developed part of the animation software. All the work related to the voice synthesizer, speech, and intonation was done by Scott Prevost. We are very grateful to him. Finally, we would like to thank all the members of the graphics laboratory, especially Cary Phillips and Jianmin Zhao, for their helpful comments.
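
The sketch below illustrates the mapping the article describes, from discourse/intonation features through functional groups to FACS-level actions. The AU numbers are standard FACS codes (AU1 inner brow raiser, AU2 outer brow raiser), but the rule table and the token format are assumptions for illustration, not the article's actual rule set.

```python
# A minimal sketch, assuming an invented rule table:
# discourse/intonation features -> functional groups -> FACS action units.

FUNCTIONAL_GROUPS = {
    # conversational signals accompany accented (new/rheme) words
    "conversational_signal": {"accented_word": ["AU1", "AU2"]},   # brow raise
    # punctuators occur at phrase boundaries (e.g., a blink)
    "punctuator": {"phrase_boundary": ["blink"]},
}

def facial_plan(tokens):
    """tokens: list of (word, is_accented, is_phrase_final) tuples."""
    plan = []
    for word, accented, final in tokens:
        actions = []
        if accented:
            actions += FUNCTIONAL_GROUPS["conversational_signal"]["accented_word"]
        if final:
            actions += FUNCTIONAL_GROUPS["punctuator"]["phrase_boundary"]
        plan.append((word, actions))
    return plan

print(facial_plan([("I", False, False), ("do", True, False), ("know", False, True)]))
```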


IEEE Transactions on Affective Computing | 2012

Bridging the Gap between Social Animal and Unsocial Machine: A Survey of Social Signal Processing

Alessandro Vinciarelli; Maja Pantic; Dirk Heylen; Catherine Pelachaud; Isabella Poggi; Francesca D'Errico; Marc Schroeder

Social Signal Processing is the research domain aimed at bridging the social intelligence gap between humans and machines. This paper is the first survey of the domain that jointly considers its three major aspects, namely, modeling, analysis, and synthesis of social behavior. Modeling investigates laws and principles underlying social interaction, analysis explores approaches for automatic understanding of social exchanges recorded with different sensors, and synthesis studies techniques for the generation of social behavior via various forms of embodiment. For each of the above aspects, the paper includes an extensive survey of the literature, points to the most important publicly available resources, and outlines the most fundamental challenges ahead.


Springer Berlin Heidelberg | 2004

APML, a Markup Language for Believable Behavior Generation

Berardina De Carolis; Catherine Pelachaud; Isabella Poggi; Mark Steedman

Developing an embodied conversational agent able to exhibit human-like behavior while communicating with other virtual or human agents requires enriching the dialogue of the agent with non-verbal information. Our agent, Greta, is defined as two components: a Mind and a Body. Her mind reflects her personality, her social intelligence, as well as her emotional reaction to events occurring in the environment. Her body corresponds to her physical appearance, which is able to display expressive behaviors. We designed a Mind-Body interface that takes as input a specification of a discourse plan in an XML language (DPML) and enriches this plan with the communicative meanings that have to be attached to it, producing an input to the Body in a new XML language (APML). Moreover, we have developed a language to describe facial expressions. It combines basic facial expressions with operators to create complex facial expressions. The purpose of this chapter is to describe these languages and to illustrate our approach to the generation of behavior of an agent able to act consistently with her goals and with the context of the interaction.
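
The sketch below illustrates the Mind-Body idea in miniature: the Mind emits an APML-like document and the Body walks it to schedule expressive behaviors. The tag and attribute names are illustrative stand-ins, not the published APML DTD, and the scheduling rules are assumptions.

```python
# A minimal sketch of a Body consuming an APML-like document.
# Tag/attribute names and the behavior rules are illustrative assumptions.

import xml.etree.ElementTree as ET

apml_like = """
<apml>
  <performative type="inform" affect="joy">
    <theme>Your flight</theme>
    <rheme emphasis="strong">has been confirmed</rheme>
  </performative>
</apml>
"""

def body_schedule(doc: str):
    """Translate communicative meanings into (channel, value) behavior slots."""
    root = ET.fromstring(doc)
    behaviors = []
    for perf in root.iter("performative"):
        if perf.get("affect"):
            behaviors.append(("face", "expression:" + perf.get("affect")))
        for rheme in perf.iter("rheme"):
            if rheme.get("emphasis") == "strong":
                behaviors.append(("head", "nod"))
                behaviors.append(("brow", "raise"))
    return behaviors

print(body_schedule(apml_like))
```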


GW'05: Proceedings of the 6th International Conference on Gesture in Human-Computer Interaction and Simulation | 2005

Implementing expressive gesture synthesis for embodied conversational agents

Björn Hartmann; Maurizio Mancini; Catherine Pelachaud

We aim at creating an expressive Embodied Conversational Agent (ECA) and address the problem of synthesizing expressive agent gestures. In our previous work, we have described the gesture selection process. In this paper, we present a computational model of gesture quality. Once a certain gesture has been chosen for execution, how can we modify it to carry a desired expressive content while retaining its original semantics? We characterize bodily expressivity with a small set of dimensions derived from a review of psychology literature. We provide a detailed description of the implementation of these dimensions in our animation system, including our gesture modeling language. We also demonstrate animations with different expressivity settings in our existing ECA system. Finally, we describe two user studies that evaluate the appropriateness of our implementation for each dimension of expressivity as well as the potential of combining these dimensions to create expressive gestures that reflect communicative intent.
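
To make the idea of expressivity dimensions concrete, the sketch below shows how parameters such as spatial and temporal extent could modulate a stored gesture before playback while leaving its shape (and thus its semantics) unchanged. The key-frame format and scaling rules are assumptions for illustration, not the paper's implementation.

```python
# A minimal sketch, assuming an invented key-frame format, of expressivity
# parameters scaling a gesture's amplitude and duration.

from dataclasses import dataclass

@dataclass
class KeyFrame:
    t: float            # seconds
    wrist_xyz: tuple    # metres, relative to the shoulder

def apply_expressivity(keys, spatial_extent=1.0, temporal_extent=1.0):
    """Scale amplitude by spatial_extent and duration by temporal_extent."""
    return [KeyFrame(k.t * temporal_extent,
                     tuple(c * spatial_extent for c in k.wrist_xyz))
            for k in keys]

beat = [KeyFrame(0.0, (0.0, 0.0, 0.2)), KeyFrame(0.3, (0.0, 0.15, 0.3))]
print(apply_expressivity(beat, spatial_extent=1.4, temporal_extent=0.8))
```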


Intelligent Virtual Agents | 2006

Towards a common framework for multimodal generation: the behavior markup language

Stefan Kopp; Brigitte Krenn; Stacy Marsella; Andrew N. Marshall; Catherine Pelachaud; Hannes Pirker; Kristinn R. Thórisson; Hannes Högni Vilhjálmsson

This paper describes an international effort to unify a multimodal behavior generation framework for Embodied Conversational Agents (ECAs). We propose a three-stage model we call SAIBA, where the stages represent intent planning, behavior planning, and behavior realization. A Function Markup Language (FML), describing intent without referring to physical behavior, mediates between the first two stages, and a Behavior Markup Language (BML), describing desired physical realization, mediates between the last two stages. In this paper we focus on BML. The hope is that this abstraction and modularization will help ECA researchers pool their resources to build more sophisticated virtual humans.
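
The sketch below illustrates the SAIBA layering: a BML-like description of desired behavior with cross-modal synchronization anchors. The element names loosely follow published BML examples, but this snippet is only an illustration, not a validated BML document, and the realizer step is merely hinted at.

```python
# A minimal sketch: listing the behaviors and sync anchors in a BML-like document.
# A real realizer would resolve references such as "s1:start" against the
# speech timeline produced by the text-to-speech engine.

import xml.etree.ElementTree as ET

bml_like = """
<bml id="bml1">
  <speech id="s1">Welcome to the lab.</speech>
  <gesture id="g1" type="beat" stroke="s1:start"/>
  <gaze id="z1" target="listener" start="s1:start"/>
</bml>
"""

root = ET.fromstring(bml_like)
for el in root:
    anchors = {k: v for k, v in el.attrib.items() if k in ("start", "stroke")}
    print(el.tag, el.get("id"), anchors or "(no explicit sync)")
```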


Adaptive Agents and Multi-Agent Systems | 2002

Embodied contextual agent in information delivering application

Catherine Pelachaud; Valeria Carofiglio; Berardina De Carolis; Fiorella de Rosis; Isabella Poggi

We aim at building a new human-computer interface for information-delivering applications: the conversational agent that we have developed is a multimodal believable agent able to converse with the User by exhibiting synchronized and coherent verbal and nonverbal behavior. The agent is provided with a personality and a social role that allow her to show her emotion or to refrain from showing it, depending on the context in which the conversation takes place. The agent is provided with a face and a mind. The mind is designed according to a BDI structure that depends on the agent's personality; it evolves dynamically during the conversation, according to the User's dialogue moves and to emotions triggered as a consequence of the Interlocutor's moves; such cognitive features are then translated into facial behaviors. In this paper, we describe the overall architecture of our system and its various components; in particular, we present our dynamic model of emotions. We illustrate our results with a dialogue example that runs throughout the paper. We pay particular attention to the generation of verbal and nonverbal behaviors and to the way they are synchronized and combined with each other. We also discuss how these acts are translated into facial expressions.
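
The sketch below is not the authors' model; it only illustrates the general idea that a BDI-style mind updates its emotional state from dialogue events relative to its goals, and that personality or social role may regulate whether the felt emotion is displayed. The update rules, decay factor, and function names are assumptions.

```python
# A minimal sketch, assuming invented rules, of goal-driven emotion dynamics
# plus display regulation by social role.

def update_emotion(emotions, event, goals):
    """Raise joy when a goal is achieved, fear when a goal is threatened;
    intensities decay toward neutral over time."""
    emotions = {k: v * 0.9 for k, v in emotions.items()}        # decay
    if event.get("achieves") in goals:
        emotions["joy"] = min(1.0, emotions.get("joy", 0.0) + 0.5)
    if event.get("threatens") in goals:
        emotions["fear"] = min(1.0, emotions.get("fear", 0.0) + 0.5)
    return emotions

def displayed_expression(emotions, role_allows_display=True):
    """Personality/role may suppress the felt emotion (display regulation)."""
    felt = max(emotions, key=emotions.get) if emotions else "neutral"
    return felt if role_allows_display else "neutral"

state = {"joy": 0.1, "fear": 0.0}
state = update_emotion(state, {"threatens": "reassure_user"}, goals={"reassure_user"})
print(state, displayed_expression(state, role_allows_display=False))
```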


ACM Multimedia | 2005

Multimodal expressive embodied conversational agents

Catherine Pelachaud

In this paper we present our work toward the creation of a multimodal expressive Embodied Conversational Agent (ECA). Our agent, called Greta, exhibits nonverbal behaviors synchronized with speech. We are using the taxonomy of communicative functions developed by Isabella Poggi [22] to specify the behavior of the agent. Based on this taxonomy, a representation language, the Affective Presentation Markup Language (APML), has been defined to drive the animation of the agent [4]. Lately, we have been working on creating not a generic agent but an agent with individual characteristics. We have concentrated on the behavior specification for an individual agent. In particular, we have defined a set of parameters to change the expressivity of the agent's behaviors. Six parameters have been defined and implemented to encode gesture and face expressivity. We have performed perceptual studies of our expressivity model.
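
The sketch below only illustrates how a small set of expressivity parameters could give agents individual behavior styles on top of a shared behavior repertoire. The six dimension names follow those commonly cited in this line of work on gesture expressivity, but the presets and helper are invented for illustration.

```python
# A minimal sketch, assuming invented presets, of per-agent expressivity profiles.

DEFAULT = {"overall_activation": 0.5, "spatial_extent": 0.5, "temporal_extent": 0.5,
           "fluidity": 0.5, "power": 0.5, "repetition": 0.5}

def make_agent_style(**overrides):
    """Build an individual agent's expressivity profile from the defaults."""
    style = dict(DEFAULT)
    style.update(overrides)
    return style

# Two hypothetical individual agents built from the same behavior repertoire.
vigorous = make_agent_style(overall_activation=0.9, power=0.8, spatial_extent=0.8)
subdued = make_agent_style(overall_activation=0.2, fluidity=0.8, repetition=0.1)
print(vigorous)
print(subdued)
```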


Journal of Visualization and Computer Animation | 2002

Subtleties of facial expressions in embodied agents

Catherine Pelachaud; Isabella Poggi

Our goal is to develop a believable embodied agent able to dialogue with a user. In particular, we aim at making an agent that can also combine facial expressions in a complex and subtle way, just like a human agent does. We first review a taxonomy of communicative functions that our agent is able to express non-verbally; but we point out that, due to the complexity of communication, in some cases different information can be provided at once by different parts and actions of an agent's face. In this paper we are interested in assessing and treating what happens, at the meaning and signal levels of behaviour, when different communicative functions have to be displayed at the same time and necessarily have to make use of the same expressive resources. In some of these cases the complexity of the agent's communication can give rise to conflicts between the parts or movements of the face. We propose a way to manage the possible conflicts between different modalities of communication through the tool of belief networks, and we show how this tool allows us to combine facial expressions of different communicative functions and to display complex and subtle expressions.
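
The sketch below illustrates the conflict the paper addresses: two communicative functions competing for the same facial resource (here, the eyebrows). The paper resolves such conflicts with belief networks; the code is only a simplified weighted-priority stand-in to make the problem concrete, not their method, and the request table is invented.

```python
# A minimal sketch of conflict resolution over shared facial resources.
# (communicative function, facial region, requested action, importance)
REQUESTS = [
    ("emphasis",   "brows", "raise", 0.6),
    ("surprise",   "brows", "raise", 0.8),
    ("concern",    "brows", "frown", 0.7),
    ("turn_offer", "gaze",  "look_at_listener", 0.9),
]

def resolve(requests):
    """Per facial region, keep the most important action; compatible actions
    on different regions are simply combined."""
    chosen = {}
    for func, region, action, weight in requests:
        if region not in chosen or weight > chosen[region][1]:
            chosen[region] = (action, weight, func)
    return {region: action for region, (action, _, _) in chosen.items()}

print(resolve(REQUESTS))
```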


Archive | 2002

Greta: A Simple Facial Animation Engine

Stefano Pasquariello; Catherine Pelachaud

In this paper, we present a 3D facial model compliant with the MPEG-4 specification; our aim was the realization of an animated model able to simulate the dynamic aspects of the human face in a rapid and believable manner.
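
The sketch below illustrates MPEG-4-style facial animation in the abstract: a small set of animation parameters displaces the mesh vertices they influence. The influence table, parameter names, and vertex data are invented for illustration; a real MPEG-4 player works with the standard's Facial Animation Parameters (FAPs), normalized by face measurements (FAPUs).

```python
# A minimal sketch, assuming invented data, of parameter-driven mesh deformation.

# vertex id -> rest position (x, y, z)
MESH = {0: (0.0, 1.0, 0.0), 1: (0.2, 1.0, 0.0), 2: (0.0, 0.5, 0.1)}

# parameter -> list of (vertex id, displacement direction, weight)
INFLUENCE = {
    "raise_left_inner_brow": [(0, (0.0, 1.0, 0.0), 1.0), (1, (0.0, 1.0, 0.0), 0.4)],
    "open_jaw":              [(2, (0.0, -1.0, 0.0), 1.0)],
}

def deform(mesh, param_values):
    """Return deformed vertex positions for one frame of parameter values."""
    out = {vid: list(pos) for vid, pos in mesh.items()}
    for param, value in param_values.items():
        for vid, direction, weight in INFLUENCE.get(param, []):
            for axis in range(3):
                out[vid][axis] += value * weight * direction[axis]
    return out

print(deform(MESH, {"raise_left_inner_brow": 0.02, "open_jaw": 0.05}))
```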

Collaboration


Dive into Catherine Pelachaud's collaborations.

Top Co-Authors
Norman I. Badler

University of Pennsylvania
