Publications


Featured research published by Guy Hoffman.


International Journal of Humanoid Robotics | 2004

Tutelage and Collaboration for Humanoid Robots

Cynthia Breazeal; Andrew G. Brooks; Jesse Gray; Guy Hoffman; Cory D. Kidd; Hans Lee; Jeff Lieberman; Andrea Lockerd; David Chilongo

This paper presents an overview of our work towards building socially intelligent, cooperative humanoid robots that can work and learn in partnership with people. People understand each other in social terms, allowing them to engage others in a variety of complex social interactions including communication, social learning, and cooperation. We present our theoretical framework, a novel combination of Joint Intention Theory and Situated Learning Theory, and demonstrate how this framework can be applied to develop our sociable humanoid robot, Leonardo. We demonstrate the robot's ability to learn quickly and effectively from natural human instruction using gesture and dialog, and then cooperate to perform a learned task jointly with a person. Such issues must be addressed to enable many new and exciting applications for robots that require them to play a long-term role in people's daily lives.


Adaptive Agents and Multi-Agent Systems | 2004

Teaching and Working with Robots as a Collaboration

Cynthia Breazeal; Guy Hoffman; Andrea Lockerd

New applications for autonomous robots bring them into the human environment, where they are to serve as helpful assistants to untrained users in the home or office, or work as capable members of human-robot teams for security, military, and space efforts. These applications require robots to be able to quickly learn how to perform new tasks from natural human instruction, and to perform tasks collaboratively with human teammates. Using joint intention theory as our theoretical framework, our approach integrates learning and collaboration through a goal-based task structure. Specifically, we use collaborative discourse with accompanying gestures and social cues to teach a humanoid robot a structurally complex task. Having learned the representation for the task, the robot then performs it shoulder-to-shoulder with a human partner, using social communication acts to dynamically mesh its plans with those of its partner, according to the relative capabilities of the human and the robot.


IEEE Transactions on Robotics | 2007

Cost-Based Anticipatory Action Selection for Human–Robot Fluency

Guy Hoffman; Cynthia Breazeal

A crucial skill for fluent action meshing in human team activity is a learned and calculated selection of anticipatory actions. We believe that the same holds for robotic teammates, if they are to perform in a similarly fluent manner with their human counterparts. In this work, we describe a model for human-robot joint action, and propose an adaptive action selection mechanism for a robotic teammate, which makes anticipatory decisions based on the confidence of their validity and their relative risk. We conduct an analysis of our method, predicting an improvement in task efficiency compared to a purely reactive process. We then present results from a study involving untrained human subjects working with a simulated version of a robot using our system. We show a significant improvement in best-case task efficiency when compared to a group of users working with a reactive agent, as well as a significant difference in the perceived commitment of the robot to the team and its contribution to the team's fluency and success. By way of explanation, we raise a number of fluency metric hypotheses, and evaluate their significance between the two study conditions.
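The cost-based decision the abstract describes can be pictured as an expected-cost comparison. The sketch below is illustrative only, not the paper's implementation: the function name and the specific cost parameters are assumptions. The robot commits to the anticipated action only when its expected cost, weighted by the confidence that the anticipation is valid, beats the cost of waiting to react.

```python
# Hypothetical sketch of confidence- and risk-weighted anticipatory action
# selection. All names and cost values are illustrative assumptions.

def select_action(confidence: float,
                  cost_wrong: float = 5.0,
                  benefit_right: float = 2.0,
                  cost_wait: float = 1.0) -> str:
    """Pick 'anticipate' when its expected cost beats purely reacting."""
    # Acting early: pay cost_wrong if the guess is wrong,
    # gain benefit_right (a negative cost) if it is right.
    expected_anticipate = (1 - confidence) * cost_wrong - confidence * benefit_right
    # Reacting always pays the cost of waiting for the partner's action.
    expected_react = cost_wait
    return "anticipate" if expected_anticipate < expected_react else "react"
```

Under this framing, a high-confidence, low-risk anticipation is taken early, while an uncertain or costly one falls back to reactive behavior.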


Human-Robot Interaction | 2007

Effects of anticipatory action on human-robot teamwork efficiency, fluency, and perception of team

Guy Hoffman; Cynthia Breazeal

A crucial skill for fluent action meshing in human team activity is a learned and calculated selection of anticipatory actions. We believe that the same holds for robotic teammates, if they are to perform in a similarly fluent manner with their human counterparts. In this work, we propose an adaptive action selection mechanism for a robotic teammate, making anticipatory decisions based on the confidence of their validity and their relative risk. We predict an improvement in task efficiency and fluency compared to a purely reactive process. We then present results from a study involving untrained human subjects working with a simulated version of a robot using our system. We show a significant improvement in best-case task efficiency when compared to a group of users working with a reactive agent, as well as a significant difference in the perceived commitment of the robot to the team and its contribution to the team's fluency and success. By way of explanation, we propose a number of fluency metrics that differ significantly between the two study groups.


AIAA 1st Intelligent Systems Technical Conference | 2004

Collaboration in Human-Robot Teams

Guy Hoffman; Cynthia Breazeal

Many new applications for robots require them to work alongside people as capable members of human-robot teams. These include—in the long term—robots for homes, hospitals, and offices, but already exist in more advanced settings, such as space exploration. The work reported in this paper is part of an ongoing collaboration with NASA JSC to develop Robonaut, a humanoid robot envisioned to work with human astronauts on maintenance operations for space missions. To date, work with Robonaut has mainly investigated performing a joint task with a human in which the robot is being teleoperated. However, perceptive disorientation, sensory noise, and control delays make teleoperation cognitively exhausting even for a highly skilled operator. Control delays in long range teleoperation also make shoulder-to-shoulder teamwork difficult. These issues motivate our work to make robots collaborating with people more autonomous. Our work focuses on a scenario of a human and an autonomous humanoid robot working together shoulder-to-shoulder, sharing the workspace and the objects required to complete a task. A robotic member of such a team must be able to work towards a shared goal, and be in agreement with the human as to the sequence of actions that will be required to reach that goal, as well as dynamically adjust its plan according to the human’s actions. Human-robot collaboration of this nature is an important yet relatively unexplored kind of human-robot interaction. This paper describes our work towards building a dynamic collaborative framework enabling such an interaction. We discuss our architecture and its implementation for controlling a humanoid robot, working on a task with a human partner. Our approach stems from Joint Intention Theory, which shows that for joint action to emerge, teammates must communicate to maintain a set of shared beliefs and to coordinate their actions towards the shared plan. 
In addition, they must demonstrate commitment to doing their own part, to the others doing theirs, to providing mutual support, and finally—to a mutual belief as to the state of the task. We argue that to this end, the concept of task and action goals is central. We therefore present a goal-driven hierarchical task representation, and a resulting collaborative turn-taking system, implementing many of the above-mentioned requirements of a robotic teammate. Additionally, we show the implementation of relevant social skills supporting our collaborative framework. Finally, we present a demonstration of our system for collaborative execution of a hierarchical object manipulation task by a robot-human team. Our humanoid robot is able to divide the task between the participants while taking into consideration the collaborator’s actions when deciding what to do next. It is capable of asking for mutual support in the cases where it is unable to perform a certain action. To facilitate this interaction, the robot actively maintains a clear and intuitive channel of communication to synchronize goals, task states, and actions, resulting in a fluid, efficient collaboration.
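The goal-driven hierarchical task representation described above can be sketched as a tree of goal-checked task nodes. The structure below is an assumed illustration, not the paper's code: a composite task is considered done when its goal holds or all subtasks are done, and execution recurses into the first unmet subgoal.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch (names and structure are assumptions): a goal-driven
# hierarchical task tree in which completion is defined by goals, not by
# which agent performed each action.

@dataclass
class TaskNode:
    name: str
    goal_met: bool = False              # in a real system, evaluated from world state
    subtasks: List["TaskNode"] = field(default_factory=list)

    def done(self) -> bool:
        # A task is done when its own goal holds, or all subtasks are done.
        if self.goal_met:
            return True
        return bool(self.subtasks) and all(t.done() for t in self.subtasks)

    def next_action(self) -> Optional["TaskNode"]:
        """Return the next primitive task to work on, or None if finished."""
        if self.done():
            return None
        for t in self.subtasks:
            if not t.done():
                return t.next_action() or t
        return self  # primitive task with an unmet goal
```

Because completion is tracked at the goal level, a subtask achieved by the human partner counts toward the shared plan just as one achieved by the robot, which is what allows the turn-taking division of labor described above.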


Robot and Human Interactive Communication | 2006

Reinforcement Learning with Human Teachers: Understanding How People Want to Teach Robots

Andrea Lockerd Thomaz; Guy Hoffman; Cynthia Breazeal

While reinforcement learning (RL) is not traditionally designed for interactive supervisory input from a human teacher, several works in both robot and software agents have adapted it for human input by letting a human trainer control the reward signal. In this work, we experimentally examine the assumption underlying these works, namely that the human-given reward is compatible with the traditional RL reward signal. We describe an experimental platform with a simulated RL robot and present an analysis of real-time human teaching behavior found in a study in which untrained subjects taught the robot to perform a new task. We report three main observations on how people administer feedback when teaching a robot a task through reinforcement learning: (a) they use the reward channel not only for feedback, but also for future-directed guidance; (b) they have a positive bias to their feedback, possibly using the signal as a motivational channel; and (c) they change their behavior as they develop a mental model of the robotic learner. In conclusion, we discuss future extensions to RL to accommodate these lessons.
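The setup being examined, an RL agent whose scalar reward comes from a human trainer rather than the environment, can be sketched with a standard tabular Q-learning update. This is an assumed illustration, not the paper's experimental platform; the point is that the human's click is dropped directly into the slot the algorithm reserves for environmental reward, which is exactly the compatibility assumption the study tests.

```python
from collections import defaultdict

# Hedged sketch (names are assumptions): one Q-learning step in which the
# reward signal is supplied by a human trainer instead of the environment.

def q_update(q, state, action, human_reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """Standard Q-learning update driven by a human-given scalar reward."""
    best_next = max(q[(next_state, a)] for a in actions)
    td_target = human_reward + gamma * best_next
    q[(state, action)] += alpha * (td_target - q[(state, action)])

# Usage: a human "good job" click becomes human_reward = +1.0.
q = defaultdict(float)
q_update(q, "at_shelf", "grasp", 1.0, "holding", actions=["grasp", "wait"])
```

The paper's observations (guidance, positive bias, adaptation to the learner) all describe ways real human rewards deviate from what this update assumes.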


Autonomous Robots | 2010

Effects of anticipatory perceptual simulation on practiced human-robot tasks

Guy Hoffman; Cynthia Breazeal

With the aim of attaining increased fluency and efficiency in human-robot teams, we have developed a cognitive architecture for robotic teammates based on the neuro-psychological principles of anticipation and perceptual simulation through top-down biasing. An instantiation of this architecture was implemented on a non-anthropomorphic robotic lamp, performing a repetitive human-robot collaborative task. In a human-subject study in which the robot works on a joint task with untrained subjects, we find our approach to be significantly more efficient and fluent than in a comparable system without anticipatory perceptual simulation. We also show the robot and the human to improve their relative contribution at a similar rate, possibly playing a part in the human’s “like-me” perception of the robot. In self-report, we find significant differences between the two conditions in the sense of team fluency, the team’s improvement over time, the robot’s contribution to the efficiency and fluency, the robot’s intelligence, and in the robot’s adaptation to the task. We also find differences in verbal attitudes towards the robot: most notably, subjects working with the anticipatory robot attribute more human qualities to the robot, such as gender and intelligence, as well as credit for success, but we also find increased self-blame and self-deprecation in these subjects’ responses. We believe that this work lays the foundation towards modeling and evaluating artificial practice for robots working in collaboration with humans.
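Top-down perceptual biasing can be pictured as raising the prior of the percept the anticipated next task step predicts, so that ambiguous sensor evidence resolves faster in the expected direction. The formulation below is an assumption for illustration, not the architecture from the paper.

```python
# Hedged sketch of top-down perceptual biasing (assumed formulation): the
# anticipated next step boosts the prior of the expected percept before
# combining it with bottom-up sensor likelihoods.

def biased_posterior(likelihoods, priors, expected, bias=2.0):
    """Combine sensor likelihoods with priors, boosting the anticipated percept."""
    scores = {}
    for label, lik in likelihoods.items():
        prior = priors.get(label, 1.0)
        if label == expected:
            prior *= bias  # top-down bias from the anticipated task step
        scores[label] = lik * prior
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}
```

With equal bottom-up evidence, the anticipated percept dominates the posterior, which is what lets the robot begin responding before perception fully disambiguates.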


Robot and Human Interactive Communication | 2008

A hybrid control system for puppeteering a live robotic stage actor

Guy Hoffman; Rony Kubat; Cynthia Breazeal

This paper describes a robotic puppeteering system used in a theatrical production involving one robot and two human performers on stage. We draw from acting theory and human-robot interaction to develop a hybrid-control puppeteering interface which combines reactive expressive gestures and parametric behaviors with a point-of-view eye contact module. Our design addresses two core considerations: allowing a single operator to puppeteer the robot’s full range of behaviors, and allowing for gradual replacement of human-controlled modules by autonomous subsystems. We wrote a play specifically for a performance between two humans and one of our research robots, a robotic lamp which embodied a lead role in the play. We staged three performances with the robot as part of a local festival of new plays. Though we have yet to perform a formal statistical evaluation of the system, we interviewed the actors and director and present their feedback about working with the system.


IEEE-RAS International Conference on Humanoid Robots | 2004

Working collaboratively with humanoid robots

Cynthia Breazeal; Andrew G. Brooks; David Chilongo; Jesse Gray; Guy Hoffman; Cory D. Kidd; Hans Lee; Jeff Lieberman; Andrea Lockerd

This paper presents an overview of our work towards building humanoid robots that can work alongside people as cooperative teammates. We present our theoretical framework based on a novel combination of joint intention theory and collaborative discourse theory, and demonstrate how it can be applied to allow a human to work cooperatively with a humanoid robot on a joint task using speech, gesture, and expressive cues. Such issues must be addressed to enable many new and exciting applications for humanoid robots that require them to assist ordinary people in daily activities or to work as capable members of human-robot teams.


Autonomous Robots | 2011

Interactive improvisation with a robotic marimba player

Guy Hoffman; Gil Weinberg

Shimon is an interactive robotic marimba player, developed as part of our ongoing research in Robotic Musicianship. The robot listens to a human musician and continuously adapts its improvisation and choreography, while playing simultaneously with the human. We discuss the robot’s mechanism and motion-control, which uses physics simulation and animation principles to achieve both expressivity and safety. We then present an interactive improvisation system based on the notion of physical gestures for both musical and visual expression. The system also uses anticipatory action to enable real-time improvised synchronization with the human player. We describe a study evaluating the effect of embodiment on one of our improvisation modules: antiphony, a call-and-response musical synchronization task. We conducted a 3×2 within-subject study manipulating the level of embodiment, and the accuracy of the robot’s response. Our findings indicate that synchronization is aided by visual contact when uncertainty is high, but that pianists can resort to internal rhythmic coordination in more predictable settings. We find that visual coordination is more effective for synchronization in slow sequences; and that occluded physical presence may be less effective than audio-only note generation. Finally, we test the effects of visual contact and embodiment on audience appreciation. We find that visual contact in joint Jazz improvisation makes for a performance in which audiences rate the robot as playing better, more like a human, as more responsive, and as more inspired by the human. They also rate the duo as better synchronized, more coherent, communicating, and coordinated; and the human as more inspired and more responsive.

Collaboration


Dive into Guy Hoffman's collaborations.

Top Co-Authors

Cynthia Breazeal (Massachusetts Institute of Technology)
Oren Zuckerman (Interdisciplinary Center Herzliya)
Andrea Lockerd Thomaz (University of Texas at Austin)
Gil Weinberg (Georgia Institute of Technology)
Cory D. Kidd (Massachusetts Institute of Technology)
Andrew G. Brooks (Massachusetts Institute of Technology)
Jesse Gray (Massachusetts Institute of Technology)
Andrea Lockerd (Massachusetts Institute of Technology)
Gurit E. Birnbaum (Interdisciplinary Center Herzliya)