Publication


Featured research published by Cynthia Breazeal.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2003

Emotion and sociable humanoid robots

Cynthia Breazeal

This paper focuses on the role of emotion and expressive behavior in regulating social interaction between humans and expressive anthropomorphic robots, either in communicative or teaching scenarios. We present the scientific basis underlying our humanoid robot's emotion models and expressive behavior, and then show how these scientific viewpoints have been adapted to the current implementation. Our robot is also able to recognize affective intent through tone of voice, the implementation of which is inspired by the scientific findings of the developmental psycholinguistics community. We first evaluate the robot's expressive displays in isolation. Next, we evaluate the robot's overall emotive behavior (i.e. the coordination of the affective recognition system, the emotion and motivation systems, and the expression system) as it socially engages naïve human subjects face-to-face.


Computation for Metaphors, Analogy, and Agents | 1999

The Cog project: building a humanoid robot

Rodney A. Brooks; Cynthia Breazeal; Matthew Marjanović; Brian Scassellati; Matthew M. Williamson

To explore issues of developmental structure, physical embodiment, integration of multiple sensory and motor systems, and social interaction, we have constructed an upper-torso humanoid robot called Cog. The robot has twenty-one degrees of freedom and a variety of sensory systems, including visual, auditory, vestibular, kinesthetic, and tactile senses. This chapter gives a background on the methodology that we have used in our investigations, highlights the research issues that have been raised during this project, and provides a summary of both the current state of the project and our long-term goals. We report on a variety of implemented visual-motor routines (smooth-pursuit tracking, saccades, binocular vergence, and vestibular-ocular and opto-kinetic reflexes), orientation behaviors, motor control techniques, and social behaviors (pointing to a visual target, recognizing joint attention through face and eye finding, imitation of head nods, and regulating interaction through expressive feedback). We further outline a number of areas for future research that will be necessary to build a complete embodied system.
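As an illustration of the visual-motor routines listed above, smooth-pursuit tracking can be reduced to a simple closed-loop velocity controller that drives the eye at a rate proportional to the retinal error. This is a minimal sketch under an assumed gain and frame rate, not Cog's actual controller.

```python
# Minimal sketch of a smooth-pursuit tracking loop. The gain kp and the
# 30 Hz frame time are illustrative assumptions, not values from Cog.

def smooth_pursuit(target_positions, kp=0.4, dt=0.033):
    """Proportional velocity control: the eye velocity command is
    proportional to the retinal error (target minus gaze position)."""
    gaze = 0.0
    trace = []
    for target in target_positions:
        error = target - gaze       # retinal slip
        velocity = kp * error / dt  # velocity command
        gaze += velocity * dt       # integrate eye motion over one frame
        trace.append(gaze)
    return trace

# Over repeated frames, the gaze converges toward a stationary target.
trace = smooth_pursuit([10.0] * 20)
```

In practice the error term would come from a visual target detector rather than a known position, but the control structure is the same.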


Robotics and Autonomous Systems | 2003

Toward sociable robots

Cynthia Breazeal

This paper explores the topic of social robots—the class of robots that people anthropomorphize in order to interact with them. From the diverse and growing number of applications for such robots, a few distinct modes of interaction are beginning to emerge. We distinguish four such classes: socially evocative, social interface, socially receptive, and sociable. For the remainder of the paper, we explore a few key features of sociable robots that distinguish them from the others. We use the vocal turn-taking behavior of our robot, Kismet, as a case study to highlight these points.
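The vocal turn-taking behavior used as the case study can be pictured as a small state machine: the robot yields the floor while the human speaks, waits through a short pause to avoid interrupting, and only then takes its turn. The states, frame counts, and function below are illustrative assumptions, not Kismet's implementation.

```python
# Illustrative turn-taking state machine; states and timing are
# assumptions for illustration, not Kismet's actual mechanism.

SPEAK, LISTEN, PAUSE = "speak", "listen", "pause"

def turn_taking_step(state, human_speaking, silence_frames, max_silence=5):
    """Return the robot's next (state, silence_frame_count).

    The robot listens while the human speaks, then waits through
    max_silence quiet frames before taking the floor itself."""
    if human_speaking:
        return LISTEN, 0            # human holds the floor
    if state == LISTEN:
        return PAUSE, 0             # human just stopped: start waiting
    if state == PAUSE:
        if silence_frames + 1 >= max_silence:
            return SPEAK, 0         # floor is free: take the turn
        return PAUSE, silence_frames + 1
    return state, silence_frames    # already speaking: keep the turn
```

Feeding the function a run of speaking frames followed by enough silent frames moves the robot from listening, through a pause, to speaking.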


Adaptive Behavior | 2000

Infant-like social interactions between a robot and a human caregiver

Cynthia Breazeal; Brian Scassellati

From birth, human infants are immersed in a social environment that allows them to learn by leveraging the skills and capabilities of their caregivers. A critical precursor to this type of social learning is the ability to maintain interaction levels that are neither overwhelming nor under-stimulating. In this paper, we present a mechanism for an autonomous robot to regulate the intensity of its social interactions with a human. Similar to the feedback from infant to caregiver, the robot uses expressive displays to modulate the interaction intensity. This mechanism is integrated within a general framework that combines perception, attention, drives, emotions, behavior selection, and motor acts. We present a specific implementation of this architecture that enables the robot to react appropriately to both social stimuli (faces) and non-social stimuli (moving toys) while maintaining a suitable interaction intensity. We present results from both face-to-face interactions and interactions mediated through a toy. Note: This paper was submitted in June 1998.
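The regulation mechanism described above can be caricatured as homeostatic control: when a stimulation measure leaves a comfortable band, the robot emits an expressive display intended to push the interaction back toward it. The thresholds and display names below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of interaction-intensity regulation via a homeostatic
# band; thresholds and display names are illustrative assumptions.

def regulate_interaction(stimulation, low=0.3, high=0.7):
    """Map a normalized stimulation level to an expressive display."""
    if stimulation < low:
        return "show_interest"      # under-stimulated: invite engagement
    if stimulation > high:
        return "show_displeasure"   # overwhelmed: ask the human to back off
    return "engage"                 # within the homeostatic regime
```

The human reads the display and adjusts their behavior, closing the loop the way an infant's expressions regulate a caregiver.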


Trends in Cognitive Sciences | 2002

Robots that imitate humans

Cynthia Breazeal; Brian Scassellati

The study of social learning in robotics has been motivated by both scientific interest in the learning process and practical desires to produce machines that are useful, flexible, and easy to use. In this review, we introduce the social and task-oriented aspects of robot imitation. We focus on methodologies for addressing two fundamental problems. First, how does the robot know what to imitate? And second, how does the robot map that perception onto its own action repertoire to replicate it? In the future, programming humanoid robots to perform new tasks might be as simple as showing them.


Intelligent Robots and Systems | 1999

How to build robots that make friends and influence people

Cynthia Breazeal; Brian Scassellati

In order to interact socially with a human, a robot must convey intentionality, that is, the human must believe that the robot has beliefs, desires, and intentions. We have constructed a robot which exploits natural human social tendencies to convey intentionality through motor actions and facial expressions. We present results on the integration of perception, attention, motivation, behavior, and motor systems which allow the robot to engage in infant-like interactions with a human caregiver.


Systems, Man, and Cybernetics | 2004

Social interactions in HRI: the robot view

Cynthia Breazeal

This paper explores the topic of human-robot interaction (HRI) from the perspective of designing sociable autonomous robots: robots designed to interact with people in a human-like way. There are a growing number of applications for robots that people can engage as capable creatures or as partners rather than tools, yet little is understood about how best to design robots that interact with people in this way. The related field of human-computer interaction (HCI) offers important insights; however, autonomous robots are a very different technology from desktop computers. In this paper, we look at the field of HRI from an HCI perspective, pointing out important similarities yet significant differences that may ultimately make HRI a distinct area of inquiry. One outcome of this discussion is that it is important to view the design and evaluation problem from the robot's perspective as well as that of the human. Taken as a whole, this paper provides a framework with which to design and evaluate sociable robots from an HRI perspective.


Intelligent Robots and Systems | 2004

Effect of a robot on user perceptions

Cory D. Kidd; Cynthia Breazeal

Social robots, which help people as capable partners rather than as tools, are believed to be of greatest use for applications in entertainment, education, and healthcare because of their potential to be perceived as trusting, helpful, reliable, and engaging. This paper explores how a robot's physical presence influences a person's perception of these characteristics. The first study reported here demonstrates the differences between a robot and an animated character in terms of a person's engagement and perceptions of the robot and character. The second study shows that this difference is a result of the physical presence of the robot and that a person's reactions would be similar even if the robot is not physically collocated. Implications for the design of socially communicative and interactive robots are discussed.


Artificial Intelligence | 2008

Teachable robots: Understanding human teaching behavior to build more effective robot learners

Andrea Lockerd Thomaz; Cynthia Breazeal

While Reinforcement Learning (RL) is not traditionally designed for interactive supervisory input from a human teacher, several works in both robot and software agents have adapted it for human input by letting a human trainer control the reward signal. In this work, we experimentally examine the assumption underlying these works, namely that the human-given reward is compatible with the traditional RL reward signal. We describe an experimental platform with a simulated RL robot and present an analysis of real-time human teaching behavior found in a study in which untrained subjects taught the robot to perform a new task. We report three main observations on how people administer feedback when teaching a Reinforcement Learning agent: (a) they use the reward channel not only for feedback, but also for future-directed guidance; (b) they have a positive bias to their feedback, possibly using the signal as a motivational channel; and (c) they change their behavior as they develop a mental model of the robotic learner. Given this, we made specific modifications to the simulated RL robot, and analyzed and evaluated its learning behavior in four follow-up experiments with human trainers. We report significant improvements on several learning measures. This work demonstrates the importance of understanding the human-teacher/robot-learner partnership in order to design algorithms that support how people want to teach and simultaneously improve the robot's learning behavior.
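To make the setup concrete, the following is a hedged sketch (not the paper's code) of tabular Q-learning in which the scalar reward at each step is supplied by a human trainer rather than by the task environment; the two-action world and all constants are assumptions for illustration.

```python
# Hedged sketch: tabular Q-learning with a human-supplied reward.
# The action set, state names, and constants are illustrative assumptions.

ACTIONS = ["left", "right"]

def q_update(q, state, action, human_reward, next_state,
             alpha=0.5, gamma=0.9):
    """One Q-learning step; the reward comes from a human trainer."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (human_reward + gamma * best_next - old)

# A trainer who consistently rewards "right" (+1) shapes the policy:
q = {}
for _ in range(200):
    q_update(q, "s0", "right", 1.0, "s0")
# q now strongly prefers "right" in state "s0".
```

The paper's observations (guidance, positive bias, an evolving mental model) are about how the `human_reward` stream actually behaves, which is exactly where it departs from the stationary reward this textbook update assumes.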


Autonomous Robots | 2002

Recognition of Affective Communicative Intent in Robot-Directed Speech

Cynthia Breazeal; Lijin Aryananda

Human speech provides a natural and intuitive interface for both communicating with humanoid robots as well as for teaching them. In general, the acoustic pattern of speech contains three kinds of information: who the speaker is, what the speaker said, and how the speaker said it. This paper focuses on the question of recognizing affective communicative intent in robot-directed speech without looking into the linguistic content. We present an approach for recognizing four distinct prosodic patterns that communicate praise, prohibition, attention, and comfort to preverbal infants. These communicative intents are well matched to teaching a robot, since praise, prohibition, and directing the robot's attention to relevant aspects of a task could be used by a human instructor to intuitively facilitate the robot's learning process. We integrate this perceptual ability into our robot's "emotion" system, thereby allowing a human to directly manipulate the robot's affective state. This has a powerful organizing influence on the robot's behavior, and will ultimately be used to socially communicate affective reinforcement. Communicative efficacy has been tested with people very familiar with the robot as well as with naïve subjects.
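A toy sketch of the recognition idea: real systems extract pitch (F0) and energy contours from the audio signal, but here two hand-set summary features and threshold rules stand in for the trained classifier. The thresholds, feature scaling, and rules are illustrative assumptions, not the paper's model.

```python
# Hedged sketch of mapping coarse prosodic features to the four intent
# classes; all thresholds are illustrative assumptions.

def classify_intent(mean_pitch, pitch_variance):
    """Map normalized (0-1) prosodic summary features to an intent."""
    if mean_pitch > 0.7 and pitch_variance > 0.5:
        return "praise"        # high, exaggerated pitch contour
    if mean_pitch < 0.3 and pitch_variance < 0.3:
        return "prohibition"   # low, clipped delivery
    if pitch_variance > 0.5:
        return "attention"     # rising, attention-getting contour
    return "comfort"           # low-energy, soothing speech
```

A real recognizer would be trained on labeled utterances rather than using fixed thresholds, but the feature-to-intent mapping has this shape.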

Collaboration


An overview of Cynthia Breazeal's collaborations.

Top Co-Authors

Jesse Gray

Massachusetts Institute of Technology


Andrea Lockerd Thomaz

University of Texas at Austin


Cory D. Kidd

Massachusetts Institute of Technology


Jin Joo Lee

Massachusetts Institute of Technology


Matt Berlin

Massachusetts Institute of Technology


Walter Dan Stiehl

Massachusetts Institute of Technology


Andrew G. Brooks

Massachusetts Institute of Technology


Jacqueline Kory Westlund

Massachusetts Institute of Technology
