
Publication


Featured research published by Laura M. Hiatt.


Proceedings of the IEEE | 2006

Coordinated Multiagent Teams and Sliding Autonomy for Large-Scale Assembly

Brennan Sellner; Frederik W. Heger; Laura M. Hiatt; Reid G. Simmons; Sanjiv Singh

Recent research in human-robot interaction has investigated the concept of Sliding, or Adjustable, Autonomy, a mode of operation bridging the gap between explicit teleoperation and complete robot autonomy. This work has largely been in single-agent domains, involving only one human and one robot, and has not examined the issues that arise in multiagent domains. We discuss the issues involved in adapting Sliding Autonomy concepts to coordinated multiagent teams. In our approach, remote human operators have the ability to join, or leave, the team at will to assist the autonomous agents with their tasks (or aspects of their tasks) while not disrupting the team's coordination. Agents model their own and the human operator's performance on subtasks to enable them to determine when to request help from the operator. To validate our approach, we present the results of two experiments. The first evaluates the human/multirobot team's performance under four different collaboration strategies, including complete teleoperation, pure autonomy, and two distinct versions of Sliding Autonomy. The second experiment compares a variety of user interface configurations to investigate how quickly a human operator can attain situational awareness when asked to help. The results of these studies support our belief that by incorporating a remote human operator into multiagent teams, the team as a whole becomes more robust and efficient.
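The help-request mechanism described in the abstract, where agents model their own and the operator's per-subtask performance and request help when the operator is expected to do better, can be sketched as a simple expected-cost comparison. This is an illustrative reconstruction, not the paper's actual model; the geometric retry assumption and all names are hypothetical.

```python
# Hypothetical sketch of the help-request decision: each agent keeps
# running estimates of its own and the remote operator's success rate
# and average attempt duration on a subtask, and asks for help when the
# operator's expected completion time is lower.

def expected_cost(success_rate: float, avg_duration: float) -> float:
    """Expected time to finish, retrying on failure (geometric model)."""
    return avg_duration / success_rate

def should_request_help(robot_stats, operator_stats) -> bool:
    """Request that the operator take over when their expected
    completion time on this subtask beats the robot's own."""
    return expected_cost(*operator_stats) < expected_cost(*robot_stats)

# e.g. robot: 60% success, 30 s per attempt; operator: 95% success, 40 s
print(should_request_help((0.6, 30.0), (0.95, 40.0)))  # prints True
```

In practice such estimates would be updated online from observed outcomes; the point here is only the comparison that triggers a request.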


Human-Robot Interaction | 2013

ACT-R/E: an embodied cognitive architecture for human-robot interaction

J. Gregory Trafton; Laura M. Hiatt; Anthony M. Harrison; Franklin P. Tamborello; Sangeet Khemlani; Alan C. Schultz

We present ACT-R/E (Adaptive Character of Thought-Rational / Embodied), a cognitive architecture for human-robot interaction. Our reason for using ACT-R/E is two-fold. First, ACT-R/E enables researchers to build good embodied models of people to understand how and why people think the way they do. Second, we leverage that knowledge of people by using it to predict what a person will do in different situations; e.g., that a person may forget something and may need to be reminded, or that a person cannot see everything the robot sees. We also discuss methods of how to evaluate a cognitive architecture and show numerous empirically validated examples of ACT-R/E models.


Space | 2005

The Peer-to-Peer Human-Robot Interaction Project

Terrence Fong; Illah R. Nourbakhsh; Clayton Kunz; John Schreiner; Robert Ambrose; Robert R. Burridge; Reid G. Simmons; Laura M. Hiatt; Alan C. Schultz; J. Gregory Trafton; Magda Bugajska; Jean Scholtz

The Peer-to-Peer Human-Robot Interaction (P2P-HRI) project is developing techniques to improve task coordination and collaboration between human and robot partners. Our hypothesis is that peer-to-peer interaction can enable robots to collaborate in a competent, non-disruptive (i.e., natural) manner with users who have limited training, experience, or knowledge of robotics. Specifically, we believe that failures and limitations of autonomy (in planning, in execution, etc.) can be compensated for using human-robot interaction. In this paper, we present an overview of P2P-HRI, describe our development approach and discuss our evaluation methodology.


Conference of the European Chapter of the Association for Computational Linguistics | 2003

Targeted help for spoken dialogue systems: intelligent feedback improves naive users' performance

Beth Ann Hockey; Oliver Lemon; Ellen Campana; Laura M. Hiatt; Gregory Aist; James Hieronymus; Alexander Gruenstein; John Dowding

We present experimental evidence that providing naive users of a spoken dialogue system with immediate help messages related to their out-of-coverage utterances improves their success in using the system. A grammar-based recognizer and a Statistical Language Model (SLM) recognizer are run simultaneously. If the grammar-based recognizer succeeds, the less accurate SLM recognizer hypothesis is not used. When the grammar-based recognizer fails and the SLM recognizer produces a recognition hypothesis, this result is used by the Targeted Help agent to give the user feedback on what was recognized, a diagnosis of what was problematic about the utterance, and a related in-coverage example. The in-coverage example is intended to encourage alignment between user inputs and the language model of the system. We report on controlled experiments on a spoken dialogue system for command and control of a simulated robotic helicopter.
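The two-recognizer control flow the abstract describes, where the grammar-based result is preferred and the SLM hypothesis only drives targeted help, can be sketched as follows. The function names and the stub diagnosis/example helpers are hypothetical stand-ins, not the system's actual API.

```python
# Illustrative control flow for the targeted-help setup: prefer the
# grammar-based recognizer; on its failure, use the SLM hypothesis to
# build feedback (recognition, diagnosis, in-coverage example).

def diagnose(hypothesis: str) -> str:
    # Stub: a real system would analyze why the utterance fell
    # outside the command grammar.
    return f"'{hypothesis}' is outside the command grammar"

def in_coverage_example(hypothesis: str) -> str:
    # Stub: a real system would pick a related in-coverage utterance.
    return "fly to the tower"

def process_utterance(grammar_result, slm_result):
    if grammar_result is not None:      # in-coverage: execute directly
        return ("execute", grammar_result)
    if slm_result is not None:          # out-of-coverage: targeted help
        feedback = {
            "recognized": slm_result,
            "diagnosis": diagnose(slm_result),
            "example": in_coverage_example(slm_result),
        }
        return ("help", feedback)
    return ("reject", None)             # neither recognizer produced output
```

The key design point is that the less accurate SLM output never triggers an action; it is used only to explain the failure back to the user.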


Autonomous Robots | 2007

Socially Distributed Perception: GRACE plays social tag at AAAI 2005

Marek P. Michalowski; Selma Sabanovic; Carl F. DiSalvo; Dídac Busquets; Laura M. Hiatt; Nik A. Melchior; Reid G. Simmons

This paper presents a robot search task (social tag) that uses social interaction, in the form of asking for help, as an integral component of task completion. Socially distributed perception is defined as a robot's ability to augment its limited sensory capacities through social interaction. We describe the task of social tag and its implementation on the robot GRACE for the AAAI 2005 Mobile Robot Competition & Exhibition. We then discuss our observations and analyses of GRACE's performance as a situated interaction with conference participants. Our results suggest we were successful in promoting a form of social interaction that allowed people to help the robot achieve its goal. Furthermore, we found that different social uses of the physical space had an effect on the nature of the interaction. Finally, we discuss the implications of this design approach for effective and compelling human-robot interaction, considering its relationship to concepts such as dependency, mixed initiative, and socially distributed cognition.


International Joint Conference on Artificial Intelligence | 2011

Accommodating human variability in human-robot teams through theory of mind

Laura M. Hiatt; Anthony M. Harrison; J. Gregory Trafton

The variability of human behavior during plan execution poses a difficult challenge for human-robot teams. In this paper, we use the concepts of theory of mind to enable robots to account for two sources of human variability during team operation. When faced with an unexpected action by a human teammate, a robot uses a simulation analysis of different hypothetical cognitive models of the human to identify the most likely cause for the human's behavior. This allows the cognitive robot to account for variances due to both different knowledge and beliefs about the world, as well as different possible paths the human could take with a given set of knowledge and beliefs. An experiment showed that cognitive robots equipped with this functionality are viewed as more natural and more intelligent teammates, compared to robots that either say nothing when presented with human variability or simply point out any discrepancies between the human's expected and actual behavior. Overall, this analysis leads to an effective, general approach for determining what thought process is leading to a human's actions.
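The simulation analysis the abstract describes, scoring hypothetical cognitive models of the human by how well each explains an observed action and keeping the best, reduces to a maximum-likelihood selection. The toy models below are illustrative stand-ins, not ACT-R/E models, and all names are hypothetical.

```python
# Minimal sketch of model selection by simulation: each hypothesis
# about the human (e.g., "forgot a step", "following a different
# plan") assigns a likelihood to the observed action; the robot keeps
# the hypothesis that best explains what it saw.

def most_likely_model(models, observed_action):
    """Return the hypothesis whose simulation best explains the action."""
    return max(models, key=lambda m: m["simulate"](observed_action))

models = [
    {"name": "forgot step",
     "simulate": lambda a: 0.7 if a == "skip" else 0.1},
    {"name": "different plan",
     "simulate": lambda a: 0.6 if a == "reorder" else 0.2},
]

print(most_likely_model(models, "skip")["name"])  # prints forgot step
```

In the paper's setting the "simulate" step would run a full cognitive model forward rather than evaluate a fixed table; the selection rule is the same.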


Systems, Man and Cybernetics | 2006

A Preliminary Study of Peer-to-Peer Human-Robot Interaction

Terry Fong; Jean Scholtz; Julie A. Shah; L. Fluckiger; Clayton Kunz; David Lees; J. Schreiner; M. Siegel; Laura M. Hiatt; Illah R. Nourbakhsh; Reid G. Simmons; R. Ambrose; Robert R. Burridge; Brian Antonishek; Magdalena D. Bugajska; Alan C. Schultz; J. G. Trafton

The Peer-To-Peer Human-Robot Interaction (P2P-HRI) project is developing techniques to improve task coordination and collaboration between human and robot partners. Our work is motivated by the need to develop effective human-robot teams for space mission operations. A central element of our approach is creating dialogue and interaction tools that enable humans and robots to flexibly support one another. In order to understand how this approach can influence task performance, we recently conducted a series of tests simulating a lunar construction task with a human-robot team. In this paper, we describe the tests performed, discuss our initial results, and analyze the effect of intervention on task performance.


Human-Robot Interaction | 2006

Attaining situational awareness for sliding autonomy

Brennan Sellner; Laura M. Hiatt; Reid G. Simmons; Sanjiv Singh

We are interested in the problems of a human operator who is responsible for rapidly and accurately responding to requests for help from an autonomous robotic construction team. A difficult aspect of this problem is gaining an awareness of the requesting robot's situation quickly enough to avoid slowing the whole team down. One approach to speeding the initial acquisition of situational awareness is to maintain a buffer of data and play it back for the human when their help is needed. We report here on an experiment to determine how the composition and length of this buffer affect the human's speed and accuracy in our multi-robot construction domain. The experiments show that, for our scenario, 5-10 seconds of one raw video feed led to the fastest operator attainment of situational awareness, while accuracy was maximized by viewing 10 seconds of three video feeds. These results are necessarily specific to our scenario, but we feel that they indicate general trends which may be of use in other situations. We discuss the interacting effects of buffer composition and length on operator speed and accuracy, and draw several conclusions from this experiment which may generalize to other scenarios.


Archive | 2005

Cognition and Multi-Agent Interaction: Communicating and Collaborating with Robotic Agents

J. Gregory Trafton; Alan C. Schultz; Nicholas L. Cassimatis; Laura M. Hiatt; Dennis Perzanowski; Derek Brock; Magdalena D. Bugajska; William Adams

Introduction For the last few years, our lab has been attempting to build robots that are similar to humans in a variety of ways. Our goal has been to build systems that think and act like a person rather than look like a person since the state of the art is not sufficient for a robot to look (even superficially) like a human person. We believe that there are at least two reasons to build robots that think and act like a human. First, how an artificial system acts has a profound effect on how people act toward the system. Second, how an artificial system thinks has a profound effect on how people interact with the system.


Human-Robot Interaction | 2006

Socially distributed perception

Marek P. Michalowski; Carl F. DiSalvo; Dídac Busquets; Laura M. Hiatt; Nik A. Melchior; Reid G. Simmons; Selma Sabanovic

This paper presents a robot search task (social tag) that uses social interaction, in the form of asking for help, as an integral component of task completion. We define socially distributed perception as a robot's ability to augment its limited sensory capacities through social interaction.

Collaboration


Dive into Laura M. Hiatt's collaborations.

Top Co-Authors

Reid G. Simmons, Carnegie Mellon University
J. Gregory Trafton, United States Naval Research Laboratory
Brennan Sellner, Carnegie Mellon University
Alan C. Schultz, United States Naval Research Laboratory
Frederik W. Heger, Carnegie Mellon University
Sanjiv Singh, Carnegie Mellon University
Nik A. Melchior, Carnegie Mellon University
Anthony M. Harrison, United States Naval Research Laboratory