Justine Cassell
Carnegie Mellon University
Publications
Featured research published by Justine Cassell.
international conference on computer graphics and interactive techniques | 2001
Justine Cassell; Hannes Högni Vilhjálmsson; Timothy W. Bickmore
The Behavior Expression Animation Toolkit (BEAT) allows animators to input typed text that they wish to be spoken by an animated human figure, and to obtain as output appropriate and synchronized nonverbal behaviors and synthesized speech in a form that can be sent to a number of different animation systems. The nonverbal behaviors are assigned on the basis of actual linguistic and contextual analysis of the typed text, relying on rules derived from extensive research into human conversational behavior. The toolkit is extensible, so that new rules can be quickly added. It is designed to plug into larger systems that may also assign personality profiles, motion characteristics, scene constraints, or the animation styles of particular animators.
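For illustration, the extensible rule-driven pipeline the abstract describes can be pictured as a registry of rules that inspect the analyzed text and propose time-aligned nonverbal behaviors. The sketch below is a minimal Python illustration of that structure; the rule names and heuristics are invented for this example and are not BEAT's actual rules or output format.

```python
# Minimal sketch (not BEAT's actual code): an extensible registry of rules that
# map linguistic features of input text to synchronized nonverbal behaviors.
from dataclasses import dataclass

@dataclass
class Behavior:
    kind: str        # e.g. "beat_gesture", "eyebrow_raise" (illustrative names)
    word_index: int  # word the behavior is synchronized with

RULES = []

def rule(fn):
    """Register a behavior rule; new rules can be added without touching the pipeline."""
    RULES.append(fn)
    return fn

@rule
def emphasize_capitalized_words(tokens):
    # Hypothetical heuristic: put a beat gesture on capitalized content words.
    return [Behavior("beat_gesture", i) for i, t in enumerate(tokens) if t.istitle() and i > 0]

@rule
def raise_brows_on_question(tokens):
    # Hypothetical heuristic: raise eyebrows at the start of a question.
    return [Behavior("eyebrow_raise", 0)] if tokens and tokens[-1].endswith("?") else []

def annotate(text):
    """Run every registered rule and return behaviors sorted by word position."""
    tokens = text.split()
    behaviors = [b for r in RULES for b in r(tokens)]
    return tokens, sorted(behaviors, key=lambda b: b.word_index)

if __name__ == "__main__":
    print(annotate("Did you see the Boston skyline?"))
```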
international conference on computer graphics and interactive techniques | 1994
Justine Cassell; Catherine Pelachaud; Norman I. Badler; Mark Steedman; Brett Achorn; Tripp Becket; Brett Douville; Scott Prevost; Matthew Stone
We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversation is created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive facial expressions, lip motions, eye gaze, head motion, and arm gestures generators. Coordinated arm, wrist, and hand motions are invoked to create semantically meaningful gestures. Throughout we will use examples from an actual synthesized, fully animated conversation.
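As a rough illustration of the architecture described here, a single planned turn (text plus intonation) can fan out to several per-channel generators that all read the same plan, which is what keeps the channels synchronized. The sketch below is an assumption-laden toy, not the paper's actual system; the generator names and data fields are invented.

```python
# Illustrative sketch only: the dialogue planner's output (text plus intonation)
# drives several channel generators. Names and fields are assumptions.

def plan_turn(speaker, listener, content):
    # Hypothetical planner result: words plus the index of a pitch-accented word.
    return {"speaker": speaker, "listener": listener,
            "words": content.split(), "accents": [0]}

CHANNEL_GENERATORS = {
    "gaze":    lambda turn: f"{turn['speaker']} gazes at {turn['listener']} at turn start",
    "gesture": lambda turn: f"beat gesture on word '{turn['words'][turn['accents'][0]]}'",
    "lips":    lambda turn: f"lip motion for {len(turn['words'])} words",
}

def animate(turn):
    """Each channel generator reads the same planned turn, keeping channels synchronized."""
    return {name: gen(turn) for name, gen in CHANNEL_GENERATORS.items()}

print(animate(plan_turn("Gilbert", "George", "I need some money")))
```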
human factors in computing systems | 1999
Justine Cassell; Timothy W. Bickmore; Mark Billinghurst; Lee W. Campbell; K. Chang; Hannes Högni Vilhjálmsson; Hao Yan
In this paper, we argue for embodied conversational characters as the logical extension of the metaphor of human-computer interaction as a conversation. We argue that the only way to fully model the richness of human face-to-face communication is to rely on conversational analysis that describes sets of conversational behaviors as fulfilling conversational functions, both interactional and propositional. We demonstrate how to implement this approach in Rea, an embodied conversational agent that is capable of both multimodal input understanding and output generation in a limited application domain. Rea supports both social and task-oriented dialogue. We discuss issues that need to be addressed in creating embodied conversational agents, and describe the architecture of the Rea interface.
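The core idea of mapping observed behaviors to conversational functions (and functions back to output behaviors) can be sketched as two lookup tables, one for input understanding and one for output generation. This is only a hedged illustration of the functional approach the abstract describes; the behavior and function names below are invented and are not Rea's actual vocabulary or architecture.

```python
# A minimal sketch of the "conversational function" idea: multimodal behaviors are
# interpreted as discourse functions (interactional or propositional) rather than
# as raw inputs. All names here are illustrative assumptions.

FUNCTION_OF_BEHAVIOR = {
    ("gaze_at_agent", "pause"):     ("interactional", "give_turn"),
    ("head_nod", None):             ("interactional", "acknowledge"),
    ("pointing_gesture", "speech"): ("propositional", "reference_object"),
}

def interpret(behavior, context=None):
    """Map an observed user behavior (plus optional context) to a conversational function."""
    return FUNCTION_OF_BEHAVIOR.get((behavior, context), ("unknown", behavior))

BEHAVIORS_FOR_FUNCTION = {
    ("interactional", "take_turn"):   ["gaze_at_user", "start_speaking"],
    ("interactional", "acknowledge"): ["head_nod"],
}

def realize(function):
    """Realize a conversational function as one or more synchronized output behaviors."""
    return BEHAVIORS_FOR_FUNCTION.get(function, [])

print(interpret("gaze_at_agent", "pause"))       # -> ('interactional', 'give_turn')
print(realize(("interactional", "take_turn")))   # -> ['gaze_at_user', 'start_speaking']
```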
IEEE Intelligent Systems | 2002
Jonathan Gratch; Jeff Rickel; Elisabeth André; Justine Cassell; Eric Petajan; Norman I. Badler
Discusses some of the key issues that must be addressed in creating virtual humans, or androids. As a first step, we overview the issues and available tools in three key areas of virtual human research: face-to-face conversation, emotions and personality, and human figure animation. Assembling a virtual human is still a daunting task, but the building blocks are getting bigger and better every day.
human factors in computing systems | 2001
Timothy W. Bickmore; Justine Cassell
Building trust with users is crucial in a wide range of applications, such as financial transactions, and some minimal degree of trust is required in all applications to even initiate and maintain an interaction with a user. Humans use a variety of relational conversational strategies, including small talk, to establish trusting relationships with each other. We argue that such strategies can also be used by interface agents, and that embodied conversational agents are ideally suited for this task given the myriad cues available to them for signaling trustworthiness. We describe a model of social dialogue, an implementation in an embodied conversational agent, and an experiment in which social dialogue was shown to affect trust for users with an extroverted disposition.
Communications of The ACM | 2000
Justine Cassell
More than another friendly face, Rea knows how to have a conversation with living, breathing human users with a wink, a nod, and a sidelong glance. Animals and humans all manifest social qualities and skills. Dogs recognize dominance and submission, stand corrected by their superiors, demonstrate consistent personalities, and so forth. On the other hand, only humans communicate through language and carry on conversations with one another. The skills involved in human conversation have developed in such a way as to exploit all the special characteristics of the human body. We make complex representational gestures with our prehensile hands, gaze away and toward one another out of the corners of our centrally set eyes, and use the pitch and melody of our flexible voices to emphasize and clarify what we are saying. Perhaps because conversation is so defining of humanness and human interaction, the metaphor of face-to-face conversation has been applied to human-computer interface design for quite some time. One of the early arguments for the utility of this metaphor pointed to the application of the features of face-to-face conversation in human-computer interaction, including mixed initiative, nonverbal communication, sense of presence, and the rules involved in transferring control [9]. However, although these features have gained widespread recognition, human-computer conversation has only recently become more than a metaphor. That is, only recently have human-computer interface designers taken the metaphor seriously enough to attempt to design a computer that could hold up its end of the conversation with a human user. Here, I describe some of the features of human-human conversation being implemented in this new genre of embodied conversational agent, exploring a notable embodied conversational agent, named Rea, based on these features. Because conversation is such a primary skill for humans and learned so early in life (practiced, in fact, between infants and their mothers taking turns cooing and burbling) …
ubiquitous computing | 2001
Justine Cassell; Kimiko Ryokai
Fantasy play and storytelling serve an important role in young children’s development. While computers are increasingly present in the world of young children, there is a lack of computational tools to support children’s voices in everyday storytelling, particularly in the context of fantasy play. We believe that there is a need for computational systems that engage in story-listening rather than story-telling. This paper introduces StoryMat, a system that supports and listens to children’s voices in their own storytelling play. StoryMat offers a child-driven, story-listening space by recording and recalling children’s narrating voices, and the movements they make with their stuffed animals on a colourful story-evoking quilt. Empirical research with children shows that StoryMat fosters developmentally advanced forms of storytelling of the kind that has been shown to provide a bridge to written literacy, and provides a space where children engage in fantasy storytelling collaboratively with or without a playmate. The paper addresses the importance of supporting young children’s fantasy play and suggests a new way for technology to play an integral part in that activity.
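The record-and-recall behavior described here can be pictured, very roughly, as storing narrated stories keyed by where the stuffed animal moved on the mat, then playing back a previously recorded story whose path begins nearby. The sketch below is a hedged assumption about that idea, not StoryMat's actual data structures or matching logic.

```python
# Illustrative sketch only: recordings keyed by the toy's path on the mat,
# recalled by proximity of the starting point. Fields and logic are assumptions.
import math

stories = []  # each entry: {"path": [(x, y), ...], "audio": "<recorded narration>"}

def record(path, audio):
    stories.append({"path": path, "audio": audio})

def recall(current_path):
    """Return the previously recorded story whose starting point is nearest."""
    if not stories:
        return None
    start = current_path[0]
    return min(stories, key=lambda s: math.dist(start, s["path"][0]))["audio"]

record([(0.1, 0.2), (0.4, 0.5)], "Once upon a time, the bunny hopped to the house...")
print(recall([(0.15, 0.25)]))
```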
meeting of the association for computational linguistics | 2003
Yukiko I. Nakano; Gabe Reinstein; Tom Stocky; Justine Cassell
We investigate the verbal and nonverbal means for grounding, and propose a design for embodied conversational agents that relies on both kinds of signals to establish common ground in human-computer interaction. We analyzed eye gaze, head nods and attentional focus in the context of a direction-giving task. The distribution of nonverbal behaviors differed depending on the type of dialogue move being grounded, and the overall pattern reflected a monitoring of lack of negative feedback. Based on these results, we present an ECA that uses verbal and nonverbal grounding acts to update dialogue state.
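The grounding policy described here, monitoring for the absence of negative feedback before treating a contribution as common ground, can be sketched as a simple dialogue-state update. The code below is an illustrative assumption, not the paper's actual model; the signal names and the negative-evidence test are invented.

```python
# Small sketch of nonverbal grounding: after each dialogue move, the agent checks
# listener feedback and, absent negative evidence, adds the move to common ground.
# Signal names and the threshold logic are assumptions for illustration.

def update_grounding(dialogue_state, move, feedback):
    """feedback: observed listener signals, e.g. {'gaze': 'map', 'nod': True}."""
    negative = feedback.get("gaze") == "away" and not feedback.get("nod", False)
    if negative:
        dialogue_state["pending"].append(move)        # repeat or elaborate later
    else:
        dialogue_state["common_ground"].append(move)  # no negative feedback: grounded
    return dialogue_state

state = {"common_ground": [], "pending": []}
state = update_grounding(state, "Turn left at the student center",
                         {"gaze": "map", "nod": True})
print(state)
```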
Communications of The ACM | 2000
Justine Cassell; Timothy W. Bickmore
This article is about the kind of trust that is demonstrated in human face-to-face interaction, and approaches to and benefits of having our computer interfaces depend on these same manifestations of trustworthiness. In making technology that is actually trustworthy, your morals can really be your only guide. But, assuming that you’re a good person, and have built a technology that does what it promises, or that represents people who do what they promise, then read on. We’re taking as a point of departure our earlier work on the effects of representing the computer as a human body. Here we are going to argue that interaction rituals among humans, such as greetings, small talk and conventional leave-takings, along with their manifestations in speech and in embodied conversational behaviors, can lead the users of technology to judge the technology as more reliable, competent and knowledgeable – to trust the technology more.
User Modeling and User-adapted Interaction | 2003
Justine Cassell; Timothy W. Bickmore
Building a collaborative trusting relationship with users is crucial in a wide range of applications, such as advice-giving or financial transactions, and some minimal degree of cooperativeness is required in all applications to even initiate and maintain an interaction with a user. Despite the importance of this aspect of human–human relationships, few intelligent systems have tried to build user models of trust, credibility, or other similar interpersonal variables, or to influence these variables during interaction with users. Humans use a variety of kinds of social language, including small talk, to establish collaborative trusting interpersonal relationships. We argue that such strategies can also be used by intelligent agents, and that embodied conversational agents are ideally suited for this task given the myriad multimodal cues available to them for managing conversation. In this article we describe a model of the relationship between social language and interpersonal relationships, a new kind of discourse planner that is capable of generating social language to achieve interpersonal goals, and an actual implementation in an embodied conversational agent. We discuss an evaluation of our system in which the use of social language was demonstrated to have a significant effect on users’ perceptions of the agent’s knowledgeableness and ability to engage users, and on their trust, credibility, and how well they felt the system knew them, for users manifesting particular personality traits.
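A discourse planner that pursues interpersonal goals alongside task goals can be pictured, very loosely, as interleaving small-talk moves until an estimated level of interpersonal closeness is high enough to raise task topics. The sketch below is a toy under stated assumptions (the closeness estimate, its update, and the move lists are all invented), not the planner described in the article.

```python
# Illustrative sketch only: interleave social and task moves based on an assumed
# estimate of interpersonal closeness. Thresholds and effects are invented.

SMALL_TALK = ["Comment on the weather", "Ask about the user's neighborhood"]
TASK_MOVES = ["Ask about the user's budget", "Show an apartment listing"]

def plan_next_move(state):
    """state: {'closeness': float in [0, 1], 'small_talk_done': int, 'task_done': int}"""
    if state["closeness"] < 0.5 and state["small_talk_done"] < len(SMALL_TALK):
        move = SMALL_TALK[state["small_talk_done"]]
        state["small_talk_done"] += 1
        state["closeness"] += 0.3   # assumed effect of a successful social move
    else:
        move = TASK_MOVES[state["task_done"] % len(TASK_MOVES)]
        state["task_done"] += 1
    return move

state = {"closeness": 0.0, "small_talk_done": 0, "task_done": 0}
for _ in range(4):
    print(plan_next_move(state))
```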