Publication


Featured research published by Andrew G. Brooks.


International Journal of Humanoid Robotics | 2004

Tutelage and collaboration for humanoid robots

Cynthia Breazeal; Andrew G. Brooks; Jesse Gray; Guy Hoffman; Cory D. Kidd; Hans Lee; Jeff Lieberman; Andrea Lockerd; David Chilongo

This paper presents an overview of our work towards building socially intelligent, cooperative humanoid robots that can work and learn in partnership with people. People understand each other in social terms, allowing them to engage others in a variety of complex social interactions including communication, social learning, and cooperation. We present our theoretical framework, a novel combination of Joint Intention Theory and Situated Learning Theory, and demonstrate how this framework can be applied to develop our sociable humanoid robot, Leonardo. We demonstrate the robot's ability to learn quickly and effectively from natural human instruction using gesture and dialog, and then cooperate to perform a learned task jointly with a person. Such issues must be addressed to enable many new and exciting applications for robots that require them to play a long-term role in people's daily lives.


Robotics and Autonomous Systems | 2006

Using perspective taking to learn from ambiguous demonstrations

Cynthia Breazeal; Matt Berlin; Andrew G. Brooks; Jesse Gray; Andrea Lockerd Thomaz

This paper addresses an important issue in learning from demonstrations that are provided by “naive” human teachers—people who do not have expertise in the machine learning algorithms used by the robot. We therefore entertain the possibility that, whereas the average human user may provide sensible demonstrations from a human’s perspective, these same demonstrations may be insufficient, incomplete, ambiguous, or otherwise “flawed” from the perspective of the training set needed by the learning algorithm to generalize properly. To address this issue, we present a system where the robot is modeled as a socially engaged and socially cognitive learner. We illustrate the merits of this approach through an example where the robot is able to correctly learn from “flawed” demonstrations by taking the visual perspective of the human instructor to clarify potential ambiguities.
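
As a rough illustration of the perspective-taking idea, a demonstration can be filtered down to the state the teacher could actually perceive before it is used as a training example. The Python sketch below is a hypothetical simplification: the object representation, the field-of-view visibility test, and the poses are assumptions for illustration, not the authors' implementation.

# A minimal sketch of perspective-filtered demonstration learning, assuming
# hypothetical object/pose representations (not the paper's actual code).
from dataclasses import dataclass
import math

@dataclass
class SceneObject:
    name: str
    x: float
    y: float

def visible_from(viewer_x, viewer_y, heading_rad, obj, fov_rad=math.radians(90)):
    """Crude visibility test: is the object inside the viewer's field of view?"""
    angle = math.atan2(obj.y - viewer_y, obj.x - viewer_x)
    diff = (angle - heading_rad + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= fov_rad / 2

def filter_demonstration(objects, teacher_pose):
    """Keep only the objects the teacher could see; state the teacher could
    not perceive is excluded from the training example, resolving ambiguity."""
    x, y, heading = teacher_pose
    return [o for o in objects if visible_from(x, y, heading, o)]

scene = [SceneObject("button_left", 1.0, 0.5), SceneObject("button_hidden", -1.0, 0.0)]
print(filter_demonstration(scene, (0.0, 0.0, math.atan2(0.5, 1.0))))  # only button_left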


Autonomous Robots | 2007

Behavioral overlays for non-verbal communication expression on a humanoid robot

Andrew G. Brooks; Ronald C. Arkin

This research details the application of non-verbal communication display behaviors to an autonomous humanoid robot, including the use of proxemics, which to date has been seldom explored in the field of human-robot interaction. In order to allow the robot to communicate information non-verbally while simultaneously fulfilling its existing instrumental behavior, a “behavioral overlay” model that encodes this data onto the robot's pre-existing motor expression is developed and presented. The state of the robot's system of internal emotions and motivational drives is used as the principal data source for non-verbal expression, but in order for the robot to display this information in a natural and nuanced fashion, an additional para-emotional framework has been developed to support the individuality of the robot's interpersonal relationships with humans and of the robot itself. An implementation on the Sony QRIO is described which overlays QRIO's existing EGO architecture and situated schema-based behaviors with a mechanism for communicating this framework through modalities that encompass posture, gesture and the management of interpersonal distance.
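
The overlay idea lends itself to a compact illustration: expressive state modulates an instrumental motor command rather than replacing it. The following Python sketch is hypothetical; the command fields, emotion parameters, and scaling factors are assumptions for illustration, not the actual interfaces of QRIO's EGO architecture.

# A minimal sketch of a "behavioral overlay": emotional and para-emotional
# state is layered onto a pre-existing motor command as modifications,
# rather than as separate display behaviors. All names are illustrative.
from dataclasses import dataclass

@dataclass
class MotorCommand:
    joint_angles: dict          # e.g. {"torso_pitch": 0.1, ...}
    speed: float                # nominal execution speed (0..1)
    standoff_distance: float    # proxemics: distance kept from the human, in m

def overlay(cmd, arousal, valence, intimacy):
    """Overlay expressive state onto an instrumental command: posture droops
    with negative valence, speed scales with arousal, and the interpersonal
    distance shrinks as the modeled relationship grows closer."""
    angles = dict(cmd.joint_angles)
    angles["torso_pitch"] = angles.get("torso_pitch", 0.0) - 0.2 * max(0.0, -valence)
    return MotorCommand(
        joint_angles=angles,
        speed=cmd.speed * (0.5 + 0.5 * arousal),
        standoff_distance=max(0.5, cmd.standoff_distance - 0.4 * intimacy),
    )

base = MotorCommand({"torso_pitch": 0.0}, speed=1.0, standoff_distance=1.2)
print(overlay(base, arousal=0.3, valence=-0.8, intimacy=0.9))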


Human-Robot Interaction | 2006

Working with robots and objects: revisiting deictic reference for achieving spatial common ground

Andrew G. Brooks; Cynthia Breazeal

Robust joint visual attention is necessary for achieving a common frame of reference between humans and robots interacting multimodally in order to work together on real-world spatial tasks involving objects. We make a comprehensive examination of one component of this process that is often otherwise implemented in an ad hoc fashion: the ability to correctly determine the object referent from deictic reference including pointing gestures and speech. From this we describe the development of a modular spatial reasoning framework based around decomposition and resynthesis of speech and gesture into a language of pointing and object labeling. This framework supports multimodal and unimodal access in both real-world and mixed-reality workspaces, accounts for the need to discriminate and sequence identical and proximate objects, assists in overcoming inherent precision limitations in deictic gesture, and assists in the extraction of those gestures. We further discuss an implementation of the framework that has been deployed on two humanoid robot platforms to date.
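
To make the resynthesis idea concrete, referent resolution can be framed as scoring candidate objects jointly on gesture and speech, so that a spoken label disambiguates proximate objects that a noisy pointing gesture alone cannot separate. The Python sketch below is illustrative; the scene, cost weights, and label matching are assumptions, not the paper's framework.

# A minimal sketch of multimodal referent resolution: each candidate object
# is scored by its angular distance from the pointing ray and by a match
# against the spoken label, compensating for imprecise deictic gesture.
import math

def angular_error(origin, direction_rad, target):
    """Angle between the pointing ray and the ray from origin to the target."""
    to_target = math.atan2(target[1] - origin[1], target[0] - origin[0])
    return abs((to_target - direction_rad + math.pi) % (2 * math.pi) - math.pi)

def resolve_referent(objects, point_origin, point_dir, spoken_label,
                     w_gesture=1.0, w_speech=2.0):
    """Pick the object minimizing a weighted cost over gesture and speech."""
    def cost(obj):
        name, pos = obj
        label_mismatch = 0.0 if spoken_label and spoken_label in name else 1.0
        return (w_gesture * angular_error(point_origin, point_dir, pos)
                + w_speech * label_mismatch)
    return min(objects, key=cost)

# Two proximate objects the gesture alone cannot separate; speech decides:
scene = [("red_block", (1.0, 0.1)), ("blue_block", (1.0, -0.1))]
print(resolve_referent(scene, (0.0, 0.0), 0.0, "blue"))  # -> blue_block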


Robot and Human Interactive Communication | 2005

Action parsing and goal inference using self as simulator

Jesse Gray; Cynthia Breazeal; Matt Berlin; Andrew G. Brooks; Jeff Lieberman

The ability to understand a teammate's actions in terms of goals and other mental states is an important element of cooperative behavior. Simulation theory argues in favor of an embodied approach whereby humans reuse parts of their cognitive structure not only for generating behavior, but also for simulating the mental states responsible for generating that behavior in others. We present our simulation-theoretic approach and demonstrate its performance in a collaborative task scenario. The robot offers its human teammate assistance by either inferring the human's belief states to anticipate their informational needs, or inferring the human's goal states to physically help the human achieve those goals.
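
A toy version of self-as-simulator goal inference: the robot reuses its own behavior generator as a forward model, simulates the actions it would take under each candidate goal, and picks the goal whose simulated behavior best matches what the teammate actually did. The one-dimensional generator and goals below are illustrative assumptions, not the paper's architecture.

# A minimal sketch of simulation-theoretic goal inference.

def generate_behavior(state, goal, steps=5):
    """The robot's own (toy) behavior generator: greedy 1-D motion toward
    the goal position, one clipped step at a time."""
    trajectory, pos = [], state
    for _ in range(steps):
        pos += max(-1, min(1, goal - pos))  # step toward goal, clipped to +/-1
        trajectory.append(pos)
    return trajectory

def infer_goal(observed_trajectory, start_state, candidate_goals):
    """Simulate each candidate goal 'as if I were them' and return the goal
    minimizing the mismatch with the observed actions."""
    def mismatch(goal):
        simulated = generate_behavior(start_state, goal, len(observed_trajectory))
        return sum(abs(s - o) for s, o in zip(simulated, observed_trajectory))
    return min(candidate_goals, key=mismatch)

observed = [1, 2, 3, 3, 3]  # teammate moved right, then stopped at position 3
print(infer_goal(observed, 0, candidate_goals=[0, 3, 8]))  # -> 3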


IEEE-RAS International Conference on Humanoid Robots | 2004

Working collaboratively with humanoid robots

Cynthia Breazeal; Andrew G. Brooks; David Chilongo; Jesse Gray; Guy Hoffman; Cory D. Kidd; Hans Lee; Jeff Lieberman; Andrea Lockerd

This paper presents an overview of our work towards building humanoid robots that can work alongside people as cooperative teammates. We present our theoretical framework based on a novel combination of joint intention theory and collaborative discourse theory, and demonstrate how it can be applied to allow a human to work cooperatively with a humanoid robot on a joint task using speech, gesture, and expressive cues. Such issues must be addressed to enable many new and exciting applications for humanoid robots that require them to assist ordinary people in daily activities or to work as capable members of human-robot teams.


IEEE-RAS International Conference on Humanoid Robots | 2004

Building an autonomous humanoid tool user

William Bluethmann; Robert O. Ambrose; Myron A. Diftler; Eric Huber; Andrew H. Fagg; Michael T. Rosenstein; Robert Platt; Roderic A. Grupen; Cynthia Breazeal; Andrew G. Brooks; Andrea Lockerd; Richard Alan Peters; Odest Chadwicke Jenkins; Maja J. Matarić; Magdalena D. Bugajska

To make the transition from technological curiosities to productive tools, humanoid robots will require key advances in many areas, including mechanical design, sensing, embedded avionics, power, and navigation. Using the NASA Johnson Space Center's Robonaut as a testbed, the DARPA Mobile Autonomous Robot Software (MARS) humanoids team is investigating technologies that will enable humanoid robots to work effectively with humans and to work autonomously with tools. A novel learning approach is being applied that enables the robot to learn both from a remote human teleoperating the robot and from an adjacent human giving instruction. When the remote human performs tasks teleoperatively, the robot learns the salient sensory-motor features of executing the task. Once learned, the task may be carried out autonomously, fusing the skills required to perform it, guided by on-board sensing. The adjacent human then sequences the execution of these previously learned skills. Preliminary results from initial experiments using a drill to tighten lug nuts on a wheel are discussed.
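
The teach-then-sequence pattern described here can be sketched as a skill library: skills are captured while a remote human teleoperates, and an adjacent human later sequences them by name. Everything in the Python sketch below (class, method names, skill contents) is a hypothetical simplification, not Robonaut's software.

# A minimal sketch of learning skills from teleoperation and sequencing them
# under instruction. Skill contents are toy placeholders.

class SkillLibrary:
    def __init__(self):
        self._skills = {}

    def learn_from_teleoperation(self, name, recorded_frames):
        """Store the salient sensory-motor trace captured during teleoperation."""
        self._skills[name] = list(recorded_frames)

    def execute(self, name):
        """Replay a learned skill; real on-board sensing would adapt each step."""
        for frame in self._skills[name]:
            print(f"[{name}] executing: {frame}")

library = SkillLibrary()
library.learn_from_teleoperation("grasp_drill", ["reach", "close_gripper"])
library.learn_from_teleoperation("tighten_lug_nut", ["align", "spin_drill"])

# The adjacent human sequences previously learned skills by instruction:
for skill in ["grasp_drill", "tighten_lug_nut"]:
    library.execute(skill)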


Intelligent Robots and Systems | 2003

Interactive robot theatre

Cynthia Breazeal; Andrew G. Brooks; Jesse Gray; Matt Hancher; Cory D. Kidd; John McBean; Dan Stiehl; Joshua Strickon

This work motivates interactive robot theatre as an interesting test bed to explore research issues in the development of sociable robots and to investigate the relationship between autonomous robots and intelligent environments. We present the implementation of our initial exploration in this area, highlighting three core technologies: first, an integrated show-control software development platform for the design and control of an intelligent stage; second, a stereo vision system that tracks multiple features on multiple audience participants in real time; and third, an interactive, autonomous robot performer with natural and expressive movement that combines techniques from character animation and robot control.


Advances in Computer Entertainment Technology | 2004

Robot's play: interactive games with sociable machines

Andrew G. Brooks; Jesse Gray; Guy Hoffman

Personal robots for human entertainment form a new class of computer-based entertainment that is beginning to become commercially and computationally practical. We expect a principal manifestation of their entertainment capabilities will be socially interactive game playing. We describe this form of gaming and summarize our current efforts in this direction on our lifelike, expressive, autonomous humanoid robot. Our focus is on teaching the robot via playful interaction using natural social gesture and language. We detail this in terms of two broad categories: teaching as play and teaching with play.


Advances in Computer Entertainment Technology | 2005

Untethered robotic play for repetitive physical tasks

Andrew G. Brooks; Matthew R. Berlin; Jesse Gray

Personal robots are an increasingly promising new platform for human entertainment. In particular, socially interactive game playing can be used as a mechanism for imparting knowledge and skills to both the robot and the human player. Simultaneous advances in untethered sensing of human activity have widened the scope for inclusion of natural physical movement in these games. In particular, this places certain human health applications within the purview of entertainment robots. Socially responsive automata equipped with the ability to physically monitor unencumbered humans can help to motivate them to perform suitable repetitions of exercise and physical therapy tasks. We demonstrate this concept with two untethered playful interactions: arm exercise mediated by play with a physical robot, and facial exercise mediated by expression-based operation of a popular video game console.
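
For the exercise-monitoring use case, counting repetitions from an untethered, vision-derived joint-angle stream can be sketched with simple hysteresis thresholds. The signal and thresholds in the Python sketch below are illustrative assumptions; the paper's sensing pipeline is not reproduced here.

# A minimal sketch of repetition counting for exercise monitoring.

def count_repetitions(angle_stream, raise_thresh=60.0, lower_thresh=20.0):
    """One repetition = arm raised above raise_thresh, then lowered below
    lower_thresh. The hysteresis gap avoids double-counting jittery frames."""
    reps, raised = 0, False
    for angle in angle_stream:
        if not raised and angle > raise_thresh:
            raised = True
        elif raised and angle < lower_thresh:
            raised = False
            reps += 1
    return reps

# Two full arm raises, with measurement noise around the thresholds:
stream = [10, 35, 65, 70, 40, 15, 12, 58, 62, 66, 30, 18]
print(count_repetitions(stream))  # -> 2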

Collaboration


Dive into Andrew G. Brooks's collaborations.

Top Co-Authors

Cynthia Breazeal, Massachusetts Institute of Technology
Jesse Gray, Massachusetts Institute of Technology
Cory D. Kidd, Massachusetts Institute of Technology
Andrea Lockerd, Massachusetts Institute of Technology
Jeff Lieberman, Massachusetts Institute of Technology
Dan Stiehl, Massachusetts Institute of Technology
David Chilongo, Massachusetts Institute of Technology
Hans Lee, Massachusetts Institute of Technology
John McBean, Massachusetts Institute of Technology