Publication


Featured research published by Bradley Hayes.


Intelligent Robots and Systems | 2014

Discovering task constraints through observation and active learning

Bradley Hayes; Brian Scassellati

Effective robot collaborators that work with humans require an understanding of the underlying constraint network of any joint task to be performed. Discovering this network allows an agent to more effectively plan around co-worker actions or unexpected changes in its environment. To maximize the practicality of collaborative robots in real-world scenarios, humans should not be assumed to have an abundance of time, patience, or prior insight into the underlying structure of a task when relied upon to provide the training required to impart proficiency and understanding. This work introduces and experimentally validates two demonstration-based active learning strategies that a robot can utilize to accelerate context-free task comprehension. These strategies are derived from the action-space graph, a dual representation of a Semi-Markov Decision Process graph that acts as a constraint network and informs query generation. We present a pilot study showcasing the effectiveness of these active learning algorithms across three representative classes of task structure. Our results show an increased effectiveness of active learning when utilizing feature-based query strategies, especially in multi-instructor scenarios, achieving better task comprehension from a relatively small quantity of training demonstrations. We further validate our results by creating virtual instructors from a model of our pilot study participants and applying them to a set of 12 more complex, real-world food preparation tasks, with similar results.
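The query-generation idea lends itself to a small illustration. The sketch below is not the paper's action-space-graph algorithm; it is a minimal, hypothetical stand-in that tallies pairwise action orderings across demonstrations and asks the instructor about the most ambiguous pair, which captures the general shape of demonstration-based active learning of task constraints. All function names and the toy demonstrations are ours.

```python
from collections import defaultdict
from itertools import combinations

def pairwise_order_counts(demonstrations):
    """Count how often action a precedes action b across demonstrations.

    Each demonstration is a list of action labels, e.g. ["pick", "align"].
    """
    counts = defaultdict(int)
    for demo in demonstrations:
        for i, a in enumerate(demo):
            for b in demo[i + 1:]:
                counts[(a, b)] += 1
    return counts

def most_uncertain_ordering(demonstrations):
    """Pick the action pair whose ordering constraint is least certain.

    A pair observed in both orders about equally often is maximally
    ambiguous, so querying the instructor about it should be informative.
    """
    counts = pairwise_order_counts(demonstrations)
    actions = {a for demo in demonstrations for a in demo}
    best_pair, best_uncertainty = None, -1.0
    for a, b in combinations(sorted(actions), 2):
        ab, ba = counts[(a, b)], counts[(b, a)]
        total = ab + ba
        if total == 0:
            continue
        p = ab / total
        uncertainty = p * (1 - p)  # peaks at p = 0.5
        if uncertainty > best_uncertainty:
            best_uncertainty, best_pair = uncertainty, (a, b)
    return best_pair  # e.g. ask: "Must {a} come before {b}?"

demos = [["pick", "align", "screw"], ["align", "pick", "screw"]]
print(most_uncertain_ordering(demos))  # ('align', 'pick') is ambiguous
```

A real system would score candidate queries against the constraint network itself rather than raw pairwise counts, but the acquire-demonstrations, find-ambiguity, generate-query loop is the same.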


Human-Robot Interaction | 2013

Are you looking at me?: perception of robot attention is mediated by gaze type and group size

Henny Admoni; Bradley Hayes; David J. Feil-Seifer; Daniel Ullman; Brian Scassellati

Studies in HRI have shown that people follow and understand robot gaze. However, only a few studies to date have examined the time-course of a meaningful robot gaze, and none have directly investigated what type of gaze is best for eliciting the perception of attention. This paper investigates two types of gaze behaviors, short frequent glances and long infrequent stares, to find which is better at conveying a robot's visual attention. We describe the development of a programmable research platform from MyKeepon toys, and the use of these programmable robots to examine the effects of gaze type and group size on the perception of attention. In our experiment, participants viewed a group of MyKeepon robots executing random motions, occasionally fixating on various points in the room or directly on the participant. We varied the type of gaze fixations within participants and group size between participants. Results show that people are more accurate at recognizing shorter, more frequent fixations than longer, less frequent ones, and that their performance improves as group size decreases. From these results, we conclude that multiple short gazes are preferable to one long gaze for indicating attention, and that the visual search for robot attention is susceptible to group size effects.


Human-Robot Interaction | 2017

Improving Robot Controller Transparency Through Autonomous Policy Explanation

Bradley Hayes; Julie A. Shah

Shared expectations and mutual understanding are critical facets of teamwork. Achieving these in human-robot collaborative contexts can be especially challenging, as humans and robots are unlikely to share a common language to convey intentions, plans, or justifications. Even in cases where human co-workers can inspect a robot's control code, and particularly when statistical methods are used to encode control policies, there is no guarantee that meaningful insights into a robot's behavior can be derived or that a human will be able to efficiently isolate the behaviors relevant to the interaction. We present a series of algorithms and an accompanying system that enables robots to autonomously synthesize policy descriptions and respond to both general and targeted queries by human collaborators. We demonstrate applicability to a variety of robot controller types, including those that utilize conditional logic, tabular reinforcement learning, and deep reinforcement learning, synthesizing informative policy descriptions for collaborators and facilitating fault diagnosis by non-experts.
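As a rough illustration of policy-description synthesis, the sketch below summarizes a toy tabular policy by grouping states that map to the same action and describing each group via the predicates its states share. The predicates, toy policy, and function names are hypothetical stand-ins of ours, not the authors' system.

```python
def summarize_policy(policy, predicates):
    """policy: dict state -> action; predicates: dict name -> (state -> bool).

    Groups states by chosen action, then describes each group with the
    predicates that hold for every state in it.
    """
    groups = {}
    for state, action in policy.items():
        groups.setdefault(action, []).append(state)
    summaries = {}
    for action, states in groups.items():
        shared = [name for name, test in predicates.items()
                  if all(test(s) for s in states)]
        summaries[action] = shared or ["(no shared predicate found)"]
    return summaries

# Toy gridworld: states are (x, y); the policy moves right until x == 3.
policy = {(x, y): ("stop" if x == 3 else "right")
          for x in range(4) for y in range(2)}
predicates = {
    "at goal column": lambda s: s[0] == 3,
    "left of goal": lambda s: s[0] < 3,
}
for action, desc in summarize_policy(policy, predicates).items():
    print(f"I take action '{action}' when: {' and '.join(desc)}")
```

Running this prints "I take action 'right' when: left of goal" and "I take action 'stop' when: at goal column", i.e. a human-readable answer to the query "when do you do X?".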


International Conference on Robotics and Automation | 2016

Autonomously constructing hierarchical task networks for planning and human-robot collaboration

Bradley Hayes; Brian Scassellati

Collaboration between humans and robots requires solutions to an array of challenging problems, including multi-agent planning, state estimation, and goal inference. Feasible solutions already exist for many of these challenges, but they depend upon having rich task models. In this work, we detail a novel type of Hierarchical Task Network we call a Clique/Chain HTN (CC-HTN), alongside an algorithm for autonomously constructing one from topological properties derived from graphical task representations. Because the presented method relies only on the structure of the task itself, it requires no particular symbolic insight into motor primitives or environmental representation, making it applicable to a wide variety of use cases critical to human-robot interaction. We present evaluations within a multi-resolution goal inference task and a transfer learning application that show the utility of our approach.
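To make the "chain" half of the CC-HTN idea concrete, here is a minimal sketch under our own assumptions: it finds maximal fixed-order runs in a directed task graph, each of which could be collapsed into a composite subtask. Clique detection (grouping interchangeable actions) is the other half and is omitted here; the graph and names are invented for illustration.

```python
def find_chains(edges):
    """edges: dict node -> set of successor nodes (a directed task graph).

    Returns maximal chains: runs of nodes with at most one successor and
    at most one predecessor, which can be merged into a single subtask.
    """
    preds = {}
    for u, vs in edges.items():
        for v in vs:
            preds.setdefault(v, set()).add(u)

    def chainable(n):
        return len(edges.get(n, ())) <= 1 and len(preds.get(n, ())) <= 1

    chains, seen = [], set()
    for node in edges:
        if node in seen or not chainable(node):
            continue
        # Start a chain only where the predecessor can't extend it backwards.
        ps = preds.get(node, set())
        if len(ps) == 1 and chainable(next(iter(ps))):
            continue
        chain, succs = [node], edges.get(node, set())
        seen.add(node)
        while len(succs) == 1:
            nxt = next(iter(succs))
            if not chainable(nxt) or nxt in seen:
                break
            chain.append(nxt)
            seen.add(nxt)
            succs = edges.get(nxt, set())
        if len(chain) > 1:
            chains.append(chain)
    return chains

# "sand" must precede "paint" must precede "dry"; other tasks branch.
task_graph = {"cut": {"sand", "drill"}, "sand": {"paint"},
              "paint": {"dry"}, "dry": set(), "drill": set()}
print(find_chains(task_graph))  # [['sand', 'paint', 'dry']]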


Intelligent Robots and Systems | 2015

Effective robot teammate behaviors for supporting sequential manipulation tasks

Bradley Hayes; Brian Scassellati

In this work, we present an algorithm for improving collaborator performance on sequential manipulation tasks. Our agent-decoupled, optimization-based task and motion planning approach merges considerations derived from both symbolic and geometric planning domains. This results in the generation of supportive behaviors that enable a teammate to reduce cognitive and kinematic burdens during task completion. We describe our algorithm alongside representative use cases, with an evaluation based on solving complex circuit-building problems. We conclude with a discussion of applications and extensions to human-robot teaming scenarios.
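As a toy analogue of supportive-behavior generation, the sketch below has the robot choose which parts to pre-stage so that the human's remaining fetch cost over a circuit-building sequence is minimized. The cost model, staging budget, and names are hypothetical assumptions of ours; the paper's planner couples symbolic and geometric reasoning far more deeply than this.

```python
from itertools import combinations

def staging_plan(task_sequence, reach_cost, staged_cost, budget):
    """Pick up to `budget` parts to pre-stage, minimizing total human cost.

    task_sequence: parts in the order the human needs them
    reach_cost[p]: human's cost to fetch part p from its current location
    staged_cost:   cost to grab a pre-staged part (assumed uniform)
    """
    parts = set(task_sequence)
    best_set, best_cost = frozenset(), float("inf")
    for k in range(min(budget, len(parts)) + 1):
        for staged in combinations(sorted(parts), k):
            staged = frozenset(staged)
            cost = sum(staged_cost if p in staged else reach_cost[p]
                       for p in task_sequence)
            if cost < best_cost:
                best_set, best_cost = staged, cost
    return best_set, best_cost

seq = ["resistor", "led", "resistor", "battery"]
reach = {"resistor": 5.0, "led": 2.0, "battery": 8.0}
print(staging_plan(seq, reach, staged_cost=1.0, budget=2))
# -> (frozenset({'battery', 'resistor'}), 5.0): staging the twice-used
#    resistor and the distant battery saves the most human effort
```

The brute-force search is only viable for toy part counts; the point is the objective, offloading teammate burden, rather than the search strategy.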


Robot and Human Interactive Communication | 2014

People help robots who help others, not robots who help themselves

Bradley Hayes; Daniel Ullman; Emma Alexander; Caroline Bank; Brian Scassellati

Robots that engage in social behaviors benefit greatly from possessing tools that allow them to manipulate the course of an interaction. Using a non-anthropomorphic social robot and a simple counting game, we examine the effects that empathy-generating robot dialogue has on participant performance across three conditions. In the self-directed condition, the robot petitions the participant to reduce his or her performance so that the robot can avoid punishment. In the externally-directed condition, the robot petitions on behalf of its programmer so that the programmer can avoid punishment. The control condition does not involve any petitions for empathy. We find that externally-directed petitions from the robot are more likely to motivate the participant to sacrifice his or her own performance to help, at the expense of incurring negative social effects. We also find that experiencing these emotional dialogue events can have complex and difficult-to-predict effects, driving some participants to antipathy, leaving some unaffected, and manipulating others into feeling empathy towards the robot.


AI Matters | 2014

Human-robot collaboration

Brian Scassellati; Bradley Hayes

Human-robot teaming scenarios in unstructured environments, such as collaborative furniture assembly tasks, are at the center of a growing field of robotics and AI research: Human-Robot Collaboration. This domain combines a variety of technical areas within Human-Robot Interaction, with the goal of enabling safe, seamless, effective teamwork between groups of humans and robots. Central to this research are a host of challenges in task planning, motion planning, intention recognition, user modeling, scene recognition, and human-robot communication. These systems are expected to safely and efficiently perform complex actions, assisting humans and independently completing tasks, in a diverse range of scenarios with highly dynamic and uncertain environments.


Robotics: Science and Systems | 2016

Robotic Assistance in Coordination of Patient Care

Matthew C. Gombolay; Xi Jessie Yang; Bradley Hayes; Nicole Seo; Zixi Liu; Samir Wadhwania; Tania Yu; Neel Shah; Toni Golen; Julie A. Shah

We conducted a study to investigate trust in and dependence upon robotic decision support among nurses and doctors on a labor and delivery floor. There is evidence that suggestions provided by embodied agents engender inappropriate degrees of trust and reliance among humans. This concern represents a critical barrier that must be addressed before fielding intelligent hospital service robots that take initiative to coordinate patient care. We conducted our experiment with nurses and physicians, and evaluated the subjects' levels of trust in and dependence upon high- and low-quality recommendations issued by robotic versus computer-based decision support. The decision support, generated through action-driven learning from expert demonstration, produced high-quality recommendations that were accepted by nurses and physicians at a compliance rate of 90%. Rates of Type I and Type II errors were comparable between robotic and computer-based decision support. Furthermore, embodiment appeared to benefit performance, as indicated by a higher degree of appropriate dependence after the quality of recommendations changed over the course of the experiment. These results support the notion that a robotic assistant may be able to safely and effectively assist with patient care. Finally, we conducted a pilot demonstration in which a robot assisted resource nurses on a labor and delivery floor at a tertiary care center.
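For readers unfamiliar with the error framing, the sketch below shows one common way such acceptance data is scored. This is our framing of the terms, not necessarily the paper's exact definitions: a Type I error here means complying with a low-quality recommendation, a Type II error means rejecting a high-quality one.

```python
def dependence_metrics(trials):
    """trials: list of (recommendation_quality, accepted) pairs,
    where quality is 'high' or 'low' and accepted is a bool."""
    type1 = sum(1 for q, acc in trials if q == "low" and acc)
    type2 = sum(1 for q, acc in trials if q == "high" and not acc)
    low = sum(1 for q, _ in trials if q == "low")
    high = sum(1 for q, _ in trials if q == "high")
    return {
        "compliance": sum(acc for _, acc in trials) / len(trials),
        "type_I_rate": type1 / low if low else 0.0,    # overreliance
        "type_II_rate": type2 / high if high else 0.0,  # underreliance
    }

# Hypothetical data: 10 high-quality and 4 low-quality recommendations.
trials = ([("high", True)] * 9 + [("high", False)]
          + [("low", False)] * 3 + [("low", True)])
print(dependence_metrics(trials))
```

Comparable Type I rates across the robotic and computer-based conditions are what let the authors argue that embodiment did not induce extra overreliance.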


Human-Robot Interaction | 2016

Robot Nonverbal Behavior Improves Task Performance In Difficult Collaborations

Henny Admoni; Thomas Weng; Bradley Hayes; Brian Scassellati

Nonverbal behaviors increase task efficiency and improve collaboration between people and robots. In this paper, we introduce a model for generating nonverbal behavior and investigate whether the usefulness of nonverbal behaviors changes based on task difficulty. First, we detail a robot behavior model that accounts for top-down and bottom-up features of the scene when deciding when and how to perform deictic references (looking or pointing). Then, we analyze how a robot's deictic nonverbal behavior affects people's performance on a memorization task under differing difficulty levels. We manipulate difficulty in two ways: by adding steps to memorize, and by introducing an interruption. We find that when the task is easy, the robot's nonverbal behavior has little influence over recall and task completion. However, when the task is challenging, because the memorization load is high or because the task is interrupted, a robot's nonverbal behaviors mitigate the negative effects of these challenges, leading to higher recall accuracy and lower completion times. In short, nonverbal behavior may be even more valuable for difficult collaborations than for easy ones.
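A minimal sketch of the flavor of such a behavior model, under our own assumed weights and thresholds: combine a bottom-up salience score with a top-down task-relevance score for each object, then escalate from no reference, to gaze, to pointing as the risk of the human resolving the reference incorrectly grows. The scene, scores, and cutoffs are illustrative, not the paper's model.

```python
def deictic_reference(objects, target, w_salience=0.5, w_relevance=0.5):
    """objects: dict name -> (salience, task_relevance), both in [0, 1].

    Returns which deictic behavior to use when referring to `target`.
    """
    score = {name: w_salience * s + w_relevance * r
             for name, (s, r) in objects.items()}
    distractor = max(v for name, v in score.items() if name != target)
    ambiguity = distractor - score[target]  # > 0: a distractor outshines it
    if ambiguity > 0.2:
        return "point"   # strong disambiguation needed
    elif ambiguity > -0.2:
        return "look"    # mild ambiguity: gaze suffices
    return "none"        # target already stands out

scene = {"red block": (0.9, 0.2), "blue block": (0.4, 0.9), "tape": (0.3, 0.1)}
print(deictic_reference(scene, target="blue block"))
# -> "look": the target is only slightly ahead of the salient red block
```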


Human-Robot Interaction | 2018

Workshop on Longitudinal Human-Robot Teaming

Joachim de Greeff; Bradley Hayes; Matthew C. Gombolay; Matthew Johnson; Mark A. Neerincx; Jurriaan van Diggelen; Melissa Cefkin; Ivana Kruijff-Korbayová

As robots that share working and living environments with humans proliferate, human-robot teamwork (HRT) becomes more relevant every day. By necessity, these HRT dynamics develop over time; teamwork can hardly happen only in the moment. What theories, algorithms, tools, computational models, and design methodologies enable effective and safe longitudinal human-robot teaming? To address this question, we propose a half-day workshop on longitudinal human-robot teaming. The workshop seeks to bring together researchers from a wide array of disciplines with the shared focus of enabling humans and robots to work together better in real-life settings and over the long term. Sessions will consist of a mix of plenary talks by invited speakers and contributed papers/posters, and will encourage discussion and exchange of ideas amongst participants through breakout groups and a panel discussion.

Collaboration


Dive into Bradley Hayes's collaborations.

Top Co-Authors

Julie A. Shah

Massachusetts Institute of Technology

Matthew C. Gombolay

Massachusetts Institute of Technology


Matthew Johnson

Florida Institute for Human and Machine Cognition
