Timothy W. Bickmore
Northeastern University
Publication
Featured research published by Timothy W. Bickmore.
ACM Transactions on Computer-Human Interaction | 2005
Timothy W. Bickmore; Rosalind W. Picard
This research investigates the meaning of “human-computer relationship” and presents techniques for constructing, maintaining, and evaluating such relationships, based on research in social psychology, sociolinguistics, communication and other social sciences. Contexts in which relationships are particularly important are described, together with specific benefits (like trust) and task outcomes (like improved learning) known to be associated with relationship quality. We especially consider the problem of designing for long-term interaction, and define relational agents as computational artifacts designed to establish and maintain long-term social-emotional relationships with their users. We construct the first such agent, and evaluate it in a controlled experiment with 101 users who were asked to interact daily with an exercise adoption system for a month. Compared to an equivalent task-oriented agent without any deliberate social-emotional or relationship-building skills, the relational agent was respected more, liked more, and trusted more, even after four weeks of interaction. Additionally, users expressed a significantly greater desire to continue working with the relational agent after the termination of the study. We conclude by discussing future directions for this research together with ethical and other ramifications of this work for HCI designers.
International Conference on Computer Graphics and Interactive Techniques | 2001
Justine Cassell; Hannes Högni Vilhjálmsson; Timothy W. Bickmore
The Behavior Expression Animation Toolkit (BEAT) allows animators to input typed text that they wish to be spoken by an animated human figure, and to obtain as output appropriate and synchronized nonverbal behaviors and synthesized speech in a form that can be sent to a number of different animation systems. The nonverbal behaviors are assigned on the basis of actual linguistic and contextual analysis of the typed text, relying on rules derived from extensive research into human conversational behavior. The toolkit is extensible, so that new rules can be quickly added. It is designed to plug into larger systems that may also assign personality profiles, motion characteristics, scene constraints, or the animation styles of particular animators.
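The abstract above describes a rule-based pipeline that maps linguistic features of input text to synchronized nonverbal behaviors. A minimal sketch of that idea follows; the rule names, behavior labels, and trigger words here are illustrative assumptions, not BEAT's actual rule set.

```python
# Illustrative sketch of a BEAT-style rule pipeline. The specific
# rules and behavior names below are hypothetical examples, not the
# actual rules used by the BEAT toolkit.

def tag_behaviors(words, known_words):
    """Annotate each word with nonverbal behaviors via simple rules."""
    annotated = []
    for w in words:
        behaviors = []
        # Rule: words new to the discourse get a beat gesture.
        if w.lower() not in known_words:
            behaviors.append("beat_gesture")
            known_words.add(w.lower())
        # Rule: contrastive connectives trigger an eyebrow raise.
        if w.lower() in {"but", "however", "although"}:
            behaviors.append("eyebrow_raise")
        annotated.append((w, behaviors))
    return annotated

result = tag_behaviors("I like tea but prefer coffee".split(), set())
```

Because the toolkit is extensible, each rule in a real system would be a separate module consulting linguistic and contextual analysis rather than hard-coded word lists.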
Human Factors in Computing Systems | 1999
Justine Cassell; Timothy W. Bickmore; Mark Billinghurst; Lee W. Campbell; K. Chang; Hannes Högni Vilhjálmsson; Hao Yan
In this paper, we argue for embodied conversational characters as the logical extension of the metaphor of human-computer interaction as a conversation. We argue that the only way to fully model the richness of human face-to-face communication is to rely on conversational analysis that describes sets of conversational behaviors as fulfilling conversational functions, both interactional and propositional. We demonstrate how to implement this approach in Rea, an embodied conversational agent that is capable of both multimodal input understanding and output generation in a limited application domain. Rea supports both social and task-oriented dialogue. We discuss issues that need to be addressed in creating embodied conversational agents, and describe the architecture of the Rea interface.
International World Wide Web Conference | 1997
Timothy W. Bickmore; Bill N. Schilit
Digestor is a software system which automatically re-authors arbitrary documents from the world-wide web to display appropriately on small-screen devices such as PDAs and cellular phones, providing device-independent access to the web. Digestor is implemented as an HTTP proxy which dynamically re-authors requested web pages using a heuristic planning algorithm and a set of structural page transformations to achieve the best-looking document for a given display size.
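The heuristic planning idea above can be sketched as a greedy loop that applies page transformations until the estimated page size fits the display. The transformation names, size savings, and information-loss scores below are assumptions for illustration, not Digestor's actual cost model.

```python
# Hedged sketch of a Digestor-style heuristic planner. Transformation
# names and the cost model are illustrative assumptions.

def reauthor(page_size, transforms, budget):
    """Greedily apply transformations, least-destructive first,
    until the estimated page size fits the display budget."""
    applied = []
    # Sort so transforms with the lowest information loss go first.
    for name, saving, loss in sorted(transforms, key=lambda t: t[2]):
        if page_size <= budget:
            break
        page_size -= saving
        applied.append(name)
    return page_size, applied

size, plan = reauthor(
    page_size=12000,
    transforms=[("elide_text", 5000, 3),
                ("shrink_images", 4000, 1),
                ("outline_sections", 6000, 2)],
    budget=4000,
)
```

A full planner would search over transformation sequences rather than commit greedily, but the stopping criterion (fit the target display) is the same.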
Human Factors in Computing Systems | 2001
Timothy W. Bickmore; Justine Cassell
Building trust with users is crucial in a wide range of applications, such as financial transactions, and some minimal degree of trust is required in all applications to even initiate and maintain an interaction with a user. Humans use a variety of relational conversational strategies, including small talk, to establish trusting relationships with each other. We argue that such strategies can also be used by interface agents, and that embodied conversational agents are ideally suited for this task given the myriad cues available to them for signaling trustworthiness. We describe a model of social dialogue, an implementation in an embodied conversational agent, and an experiment in which social dialogue was demonstrated to have an effect on trust for users with an extroverted disposition.
Archive | 2005
Timothy W. Bickmore; Justine Cassell
The functions of social dialogue between people in the context of performing a task are discussed, as well as approaches to modelling such dialogue in embodied conversational agents. A study of an agent’s use of social dialogue is presented, comparing embodied interactions with similar interactions conducted over the phone, assessing the impact these media have on a wide range of behavioural, task and subjective measures. Results indicate that subjects’ perceptions of the agent are sensitive to both interaction style (social vs. task-only dialogue) and medium.
Computational Intelligence | 1990
Steven A. Vere; Timothy W. Bickmore
A basic agent has been constructed which integrates limited natural language understanding and generation, temporal planning and reasoning, plan execution, simulated symbolic perception, episodic memory, and some general world knowledge. The agent is cast as a robot submarine operating in a two‐dimensional simulated “Seaworld” about which it has only partial knowledge. It can communicate with people in a vocabulary of about 800 common English words using a medium coverage grammar. The agent maintains an episodic memory of events in its life and has a limited ability to reflect on those events. A person can make statements to the agent, ask it questions, and give it commands. In response to commands, a temporal task planner is invoked to synthesize a plan, which is then executed at an appropriate future time. A large variety of temporal references in natural language are interpreted with respect to agent time. The agent can form and retain compound future plans, and replan in response to new information or new commands. Natural language verbs are represented in a state transition semantics for compatibility with the planner. The agent is able to give terse answers to questions about its past experiences, present activities and perceptions, future intentions, and general knowledge. No other artificial intelligence artifact with this range of capabilities has previously been constructed.
Communications of The ACM | 2000
Justine Cassell; Timothy W. Bickmore
Introduction This article is about the kind of trust that is demonstrated in human face-to-face interaction, and approaches to and benefits of having our computer interfaces depend on these same manifestations of trustworthiness. In making technology that is actually trustworthy, your morals can really be your only guide. But, assuming that you’re a good person, and have built a technology that does what it promises, or that represents people who do what they promise, then read on. We’re taking as a point of departure our earlier work on the effects of representing the computer as a human body. Here we are going to argue that interaction rituals among humans, such as greetings, small talk and conventional leave-takings, along with their manifestations in speech and in embodied conversational behaviors, can lead the users of technology to judge the technology as more reliable, competent and knowledgeable – to trust the technology more.
User Modeling and User-adapted Interaction | 2003
Justine Cassell; Timothy W. Bickmore
Building a collaborative trusting relationship with users is crucial in a wide range of applications, such as advice-giving or financial transactions, and some minimal degree of cooperativeness is required in all applications to even initiate and maintain an interaction with a user. Despite the importance of this aspect of human–human relationships, few intelligent systems have tried to build user models of trust, credibility, or other similar interpersonal variables, or to influence these variables during interaction with users. Humans use a variety of kinds of social language, including small talk, to establish collaborative trusting interpersonal relationships. We argue that such strategies can also be used by intelligent agents, and that embodied conversational agents are ideally suited for this task given the myriad multimodal cues available to them for managing conversation. In this article we describe a model of the relationship between social language and interpersonal relationships, a new kind of discourse planner that is capable of generating social language to achieve interpersonal goals, and an actual implementation in an embodied conversational agent. We discuss an evaluation of our system in which the use of social language was demonstrated to have a significant effect on users’ perceptions of the agent’s knowledgeableness and ability to engage users, and on their trust, credibility, and how well they felt the system knew them, for users manifesting particular personality traits.
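The discourse planner described above generates social language to achieve interpersonal goals alongside task goals. A toy sketch of that interleaving follows; the move names, trust thresholds, and the simple additive trust model are all assumptions for illustration, not the planner the authors built.

```python
# Toy sketch of a social-dialogue planner in the spirit of the work
# described above: small talk is inserted whenever the estimated
# trust level falls below what the next task move requires. The
# trust model and move names are hypothetical.

def plan_dialogue(task_moves, trust=0.0, gain=0.2):
    """Return a move sequence, inserting small talk until the trust
    estimate meets each task move's required threshold."""
    plan = []
    for move, required_trust in task_moves:
        while trust < required_trust:
            plan.append("small_talk")
            trust += gain  # each small-talk move raises estimated trust
        plan.append(move)
    return plan

plan = plan_dialogue([("greet", 0.0), ("ask_income", 0.5)])
```

A real planner would also condition on user personality (as the evaluation above did, finding effects for particular traits) rather than using a single scalar trust estimate.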
Interacting with Computers | 2005
Timothy W. Bickmore; Lisa B. Caruso; Kerri M. Clough-Gorr; Timothy Heeren
Relational agents, computational artifacts designed to build and maintain long-term social-emotional relationships with users, may provide an effective interface modality for older adults. This is especially true when the agents use simulated face-to-face conversation as the primary communication medium, and for applications in which repeated interactions over long time periods are required, such as in health behavior change. In this article, we discuss the design of a relational agent for older adults that plays the role of an exercise advisor, and report on the results of a longitudinal study involving 21 adults aged 62-84, half of whom interacted with the agent daily for 2 months in their homes and half of whom served as a standard-of-care control. Results indicate the agent was accepted and liked, and was significantly more efficacious at increasing physical activity (daily steps walked) than the control.