Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Susan G. Hill is active.

Publication


Featured research published by Susan G. Hill.


human-robot interaction | 2007

A field experiment of autonomous mobility: operator workload for one and two robots

Susan G. Hill; Barry A. Bodt

An experiment was conducted on aspects of human-robot interaction in a field environment using the U.S. Army's eXperimental Unmanned Vehicle (XUV). Goals of this experiment were to examine the use of scalable interfaces and to examine operator span of control when controlling one versus two autonomous unmanned ground vehicles. We collected workload ratings from two Soldiers after they had performed missions that included monitoring, downloading, and reporting on simulated reconnaissance, surveillance, and target acquisition (RSTA) images, and responding to unplanned operator intervention requests from the XUV. Several observations are made based on workload data, experimenter notes, and informal interviews with operators.


intelligent robots and systems | 2009

Effective robot team control methodologies for battlefield applications

MaryAnne Fields; Ellen Haas; Susan G. Hill; Christopher Stachowiak; Laura E. Barnes

In this paper, we present algorithms and display concepts that allow Soldiers to efficiently interact with a robotic swarm that is participating in a representative convoy mission. A critical aspect of swarm control, especially in disrupted or degraded conditions, is Soldier-swarm interaction: the Soldier must be kept cognizant of swarm operations through an interface that allows him or her to monitor status and/or institute corrective actions. We provide a control method for the swarm that adapts easily to changing battlefield conditions, metrics and supervisory algorithms that enable swarm members to economically monitor changes in swarm status as they execute the mission, and display concepts that can efficiently and effectively communicate swarm status to Soldiers in challenging battlefield environments.


human robot interaction | 2016

Trust Calibration within a Human-Robot Team: Comparing Automatically Generated Explanations

Ning Wang; David V. Pynadath; Susan G. Hill

Trust is a critical factor for achieving the full potential of human-robot teams. Researchers have theorized that people will more accurately trust an autonomous system, such as a robot, if they have a more accurate understanding of its decision-making process. Studies have shown that hand-crafted explanations can help maintain trust when the system is less than 100% reliable. In this work, we leverage existing agent algorithms to provide a domain-independent mechanism for robots to automatically generate such explanations. To measure the explanation mechanism's impact on trust, we collected self-reported survey data and behavioral data in an agent-based online testbed that simulates a human-robot team task. The results demonstrate that the added explanation capability led to improvement in transparency, trust, and team performance. Furthermore, by observing the different outcomes due to variations in the robot's explanation content, we gain valuable insight that can help lead to refinement of explanation algorithms to further improve human-robot trust calibration.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

Field Assessment of Multimodal Communication for Dismounted Human-Robot Teams

Daniel Barber; Julian Abich; Elizabeth Phillips; Andrew B. Talone; Florian Jentsch; Susan G. Hill

A field assessment of multimodal communication (MMC) was conducted as part of a program integration demonstration to support and enable bi-directional communication between a dismounted Soldier and a robot teammate. In particular, the assessment was focused on utilizing auditory and visual/gesture based communications. The task involved commanding a robot using semantically-based MMC. Initial participant data indicates a positive experience with the multimodal interface (MMI) prototype. The results of the experiment inform recommendations for multimodal designers regarding perceived usability and functionality of the currently implemented MMI.


human robot interaction | 2015

Achieving the Vision of Effective Soldier-Robot Teaming: Recent Work in Multimodal Communication

Susan G. Hill; Daniel Barber; Arthur W. Evans

The U.S. Army Research Laboratory (ARL) Autonomous Systems Enterprise has a vision for the future of effective Soldier-robot teaming. Our research program focuses on three primary thrust areas: communications, teaming, and shared cognition. Here we discuss a recent study in communications, where we collected data using a multimodal interface comprised of speech, gesture, touch and a visual display to command a robot to perform semantically-based tasks. Observations on usability and participant expectations with respect to the interaction with the robot were obtained. Initial observations are reported, showing that the speech-gesture-visual multimodal interface was liked and performed well. Areas for improvement were noted.


Human Factors and Ergonomics Society Annual Meeting Proceedings | 2009

Designing and evaluating a multimodal interface for soldier-swarm interaction

Ellen Haas; Susan G. Hill; Christopher Stachowiak; Mary Anne Fields

Traditional unmanned vehicle and robotic displays often use the visual modality alone to provide information and warnings. In previous studies we found that multimodal (auditory and/or tactile) cues that supplement visual displays can increase user performance and decrease workload in a variety of settings. In this latest study, we examined the use of visual, spatial auditory, and tactile presentation of geospatial and warning information in a Soldier-robotic swarm interface. Sixteen male Marines with a mean age of 19 years from a Marine Detachment at Aberdeen Proving Ground, Maryland, acted as volunteer participants. Results showed that workload decreased and performance, as measured by reduced response time, increased with multimodal displays. These results are consistent with previous studies. The findings from this study have implications for the design of multimodal interfaces in complex, data-rich domains such as the human-swarm interface.


performance metrics for intelligent systems | 2007

Assessing the impact of bi-directional information flow in UGV operation: a pilot study

Marshal Childers; Barry A. Bodt; Susan G. Hill; Robert Dean; William F. Dodson; Lyle G. Sutton

In June 2007, the Robotics Program Office of the U.S. Army Research Laboratory and General Dynamics Robotics Systems (GDRS) engaged in an exploratory assessment of how bi-directional information flow impacts Unmanned Ground Vehicle (UGV) operation. The purposes of the pilot study were to frame scenarios, protocol, infrastructure, and metrics for a more formal experiment planned for the fall of 2007 while providing current data feedback for the architecture developers. The study was conducted at Fort Indiantown Gap, PA over two distinct areas of rolling vegetated terrain using the eXperimental Unmanned Vehicle (XUV). In this paper, we share the preliminary findings of the impact of bi-directional information flow on observed robotic behavior, discuss the associated impact on the operator, and relate lessons learned to the planning of our fall 2007 experiment.


intelligent virtual agents | 2017

The Dynamics of Human-Agent Trust with POMDP-Generated Explanations

Ning Wang; David V. Pynadath; Susan G. Hill; Chirag Merchant

Partially Observable Markov Decision Processes (POMDPs) enable optimized decision making by robots, agents, and other autonomous systems. This quantitative optimization can also be a limitation in human-agent interaction, as the resulting autonomous behavior, while possibly optimal, is often impenetrable to human teammates, leading to improper trust and, subsequently, disuse or misuse of such systems [1].
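The POMDP framing above rests on maintaining a probability distribution (a belief) over hidden states and updating it from noisy observations. As a purely illustrative sketch, not drawn from the paper, the Bayesian belief update at the core of any POMDP can be written in a few lines; the two-state "route clear vs. blocked" model, the "scan" action, and the 80%-accurate sensor below are invented for the example.

```python
def belief_update(belief, action, observation, T, O):
    """Bayesian belief update: b'(s') is proportional to
    O(o | s', a) * sum over s of T(s' | s, a) * b(s)."""
    states = list(belief)
    new_belief = {}
    for s2 in states:
        # Predict: push the current belief through the transition model.
        prior = sum(T[(s, action)][s2] * belief[s] for s in states)
        # Correct: weight by the likelihood of the observation.
        new_belief[s2] = O[(s2, action)][observation] * prior
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

# Toy two-state model: is the route "clear" or "blocked"?
T = {  # T[(s, a)][s'] = P(s' | s, a); scanning does not change the state.
    ("clear", "scan"): {"clear": 1.0, "blocked": 0.0},
    ("blocked", "scan"): {"clear": 0.0, "blocked": 1.0},
}
O = {  # O[(s', a)][o] = P(o | s', a); the sensor is 80% accurate.
    ("clear", "scan"): {"sees_clear": 0.8, "sees_blocked": 0.2},
    ("blocked", "scan"): {"sees_clear": 0.2, "sees_blocked": 0.8},
}

b = {"clear": 0.5, "blocked": 0.5}       # start maximally uncertain
b = belief_update(b, "scan", "sees_clear", T, O)
print(b["clear"])  # 0.8
```

A POMDP policy chooses actions as a function of this belief rather than of any single observation, which is exactly what can make the resulting behavior opaque to human teammates: the action depends on a distribution the human never sees.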


international conference on virtual, augmented and mixed reality | 2015

Exploring the Implications of Virtual Human Research for Human-Robot Teams

Jonathan Gratch; Susan G. Hill; Louis-Philippe Morency; David V. Pynadath; David R. Traum

This article briefly explores potential synergies between the fields of virtual human and human-robot interaction research. We consider challenges in advancing the effectiveness of human-robot teams and make recommendations for enhancing that effectiveness by facilitating synergies between robotics and virtual human research.


Advances in intelligent systems and computing | 2017

Five Requisites for Human-Agent Decision Sharing in Military Environments

Michael J. Barnes; Jessie Y. C. Chen; Kristin E. Schaefer; Troy D. Kelley; Cheryl Giammanco; Susan G. Hill

Working with industry, universities, and other government agencies, the U.S. Army Research Laboratory has been engaged in multi-year programs to understand the role of humans working with autonomous and robotic systems. The purpose of this paper is to present an overview of the research themes in order to abstract five research requirements for effective human-agent decision-making. Supporting research for each of the five requirements is discussed to elucidate the issues involved and to make recommendations for future research. The requirements include: (a) a direct link between the operator and a supervisory agent, (b) interface transparency, (c) appropriate trust, (d) cognitive architectures to infer intent, and (e) a common language between humans and agents.

Collaboration


Dive into Susan G. Hill's collaborations.

Top Co-Authors

Matthew Marge (Carnegie Mellon University)
Claire Bonial (University of Colorado Boulder)
David V. Pynadath (University of Southern California)
David R. Traum (University of Southern California)
Ning Wang (University of Southern California)
Ron Artstein (University of Southern California)
Daniel Barber (University of Central Florida)