Publication


Featured research published by Jill L. Drury.


Systems, Man and Cybernetics | 2003

Awareness in human-robot interactions

Jill L. Drury; Jean Scholtz; Holly A. Yanco

This paper provides a set of definitions that form a framework for describing the types of awareness that humans have of robot activities and the knowledge that robots have of the commands given them by humans. As a case study, we applied this human-robot interaction (HRI) awareness framework to our analysis of the HRI approaches used at an urban search and rescue competition. We determined that most of the critical incidents (e.g., damage done by robots to the test arena) were directly attributable to lack of one or more kinds of HRI awareness.


Systems, Man and Cybernetics | 2004

Classifying human-robot interaction: an updated taxonomy

Holly A. Yanco; Jill L. Drury

This paper extends the taxonomy of human-robot interaction (HRI) introduced in 2002 to include additional categories as well as updates to the categories from the original taxonomy. New classifications include measures of the social nature of the task (human interaction roles and human-robot physical proximity), task type, and robot morphology.


International Conference on Robotics and Automation | 2004

Evaluation of human-robot interaction awareness in search and rescue

Jean Scholtz; J. D. Young; Jill L. Drury; Holly A. Yanco

In this paper we report on an analysis of critical incidents during an urban search and rescue robot competition, where a critical incident is defined as a situation in which the robot could potentially cause damage to itself, the victim, or the environment. We examine the features of the human-robot interfaces that contributed to success in the different tasks needed in urban search and rescue, and we present guidelines for human-robot interaction design.


Human-Robot Interaction | 2006

A decomposition of UAV-related situation awareness

Jill L. Drury; Laurel D. Riek; Nathan Rackliffe

This paper presents a fine-grained decomposition of situation awareness (SA) as it pertains to the use of unmanned aerial vehicles (UAVs), and uses this decomposition to understand the types of SA attained by operators of the Desert Hawk UAV. Since UAVs are airborne robots, we adapt a definition previously developed for human-robot awareness after learning about the SA needs of operators through observations and interviews. We describe the applicability of UAV-related SA for people in three roles: UAV operators, air traffic controllers, and pilots of manned aircraft in the vicinity of UAVs. Using our decomposition, UAV interaction designers can specify SA needs and analysts can evaluate a UAV interface's SA support with greater precision and specificity than can be attained using other SA definitions.


Human-Robot Interaction | 2007

LASSOing HRI: analyzing situation awareness in map-centric and video-centric interfaces

Jill L. Drury; Brenden Keyes; Holly A. Yanco

Good situation awareness (SA) is especially necessary when robots and their operators are not collocated, such as in urban search and rescue (USAR). This paper compares how SA is attained in two systems: one that emphasizes video and another that emphasizes a three-dimensional map. We performed a within-subjects study with eight USAR domain experts. To analyze the utterances made by the participants, we developed an SA analysis technique, called LASSO, which includes five awareness categories: location, activities, surroundings, status, and overall mission. Using our analysis technique, we show that a map-centric interface is more effective in providing good location and status awareness, while a video-centric interface is more effective in providing good surroundings and activities awareness.


Autonomous Robots | 2007

Rescuing interfaces: A multi-year study of human-robot interaction at the AAAI Robot Rescue Competition

Holly A. Yanco; Jill L. Drury

This paper presents results from three years of studying human-robot interaction in the context of the AAAI Robot Rescue Competition. We discuss our study methodology, the competitors’ systems and performance, and suggest ways to improve human-robot interaction in urban search and rescue (USAR) as well as other remote robot operations.


Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises | 2002

A framework for role-based specification and evaluation of awareness support in synchronous collaborative applications

Jill L. Drury; Marian G. Williams

The contribution of the paper is a framework for specifying and evaluating awareness-related features of synchronous collaborative computing applications. While previous work acknowledges that roles are important to understanding awareness needs, no method has yet been developed to provide a fine-grained, role-based approach to both specifying the awareness-related characteristics of collaborative computing applications and evaluating whether the application meets the awareness requirements. We have been developing a means of specifying and evaluating awareness needs in synchronous collaborative systems based on the framework presented in the paper. We feel this framework can be used by other researchers, as well, to develop methods of specifying and evaluating the ability of collaborative applications to support awareness.


Journal of Field Robotics | 2007

Evolving interface design for robot search tasks

Holly A. Yanco; Brenden Keyes; Jill L. Drury; Curtis W. Nielsen; Douglas A. Few; David J. Bruemmer

This paper describes two steps in the evolution of human-robot interaction designs developed by the University of Massachusetts Lowell (UML) and the Idaho National Laboratory to support urban search and rescue tasks. Usability tests were conducted to compare the two interfaces, one of which emphasized three-dimensional mapping while the other design emphasized the video feed. We found that participants desired a combination of the interface design approaches. As a result, the UML system was changed to augment its heavy emphasis on video with a map view of the area immediately around the robot. The changes were tested in a follow-up user study and the results from that experiment suggest that performance, as measured by the number of collisions with objects in the environment and time on task, is better with the new interaction techniques. Throughout the paper, we describe how we applied human-computer interaction principles and techniques to benefit the evolution of the human-robot interaction designs. While the design work is situated in the urban search and rescue domain, the results can be generalized to domains that involve other search or monitoring tasks using remotely located robots.


Human-Robot Interaction | 2006

A video game-based framework for analyzing human-robot interaction: characterizing interface design in real-time interactive multimedia applications

Justin Richer; Jill L. Drury

There is growing interest in mining the world of video games to find inspiration for human-robot interaction (HRI) design. This paper segments video game interaction into domain-independent components which together form a framework that can be used to characterize real-time interactive multimedia applications in general and HRI in particular. We provide examples of using the components in both the video game and the Unmanned Aerial Vehicle (UAV) domains (treating UAVs as airborne robots). Beyond characterization, the framework can be used to inspire new HRI designs and compare different designs; we provide an example comparison of two UAV ground station applications.


Archive | 2010

Improving Human-Robot Interaction through Interface Evolution

Brenden Keyes; Mark Micire; Jill L. Drury; Holly A. Yanco

In remote robot operations, the human operator(s) and robot(s) work in different locations that are not within line of sight of each other. In this situation, the human’s knowledge of the robot’s surroundings, location, activities, and status is gathered solely through the interface. Depending on the work context, having a good understanding of the robot’s state can be critical. Insufficient knowledge in an urban search and rescue (USAR) situation, for example, may result in the operator driving the robot into a shaky support beam, causing a secondary collapse. While the robot’s sensors and autonomy modes should help avoid collisions, in some cases the human must direct the robot’s operation. If the operator does not have good awareness of the robot’s state, the robot can be more of a detriment to the task than a benefit.

The human’s comprehension of the robot’s state and environment is known as situation awareness (SA). Endsley developed the most generally accepted definition of SA: “The perception of elements in the environment within a volume of time and space [Level 1 SA], the comprehension of their meaning [Level 2 SA] and the projection of their status in the near future [Level 3 SA]” (Endsley, 1988). Drury, Scholtz, and Yanco (2003) adapted this definition to make it more specific to robot operations, breaking it into five categories: human-robot awareness (the human’s understanding of the robot), human-human awareness, robot-human awareness (the robot’s information about the human), robot-robot awareness, and the humans’ overall mission awareness. In this chapter, we focus on two of the five types of awareness that relate to a case in which one human operator is working with one robot: human-robot awareness and the human’s overall mission awareness. Adams (2007) discusses the implications for human-unmanned vehicle SA at each of the three levels of SA (perception, comprehension, and projection).

In Drury, Keyes, and Yanco (2007), human-robot awareness is further decomposed into five types to aid in assessing the operator’s understanding of the robot: location awareness, activity awareness, surroundings awareness, status awareness, and overall mission awareness (LASSO). The two types primarily addressed in this chapter are location awareness and surroundings awareness. Location awareness is the operator’s knowledge of where the robot is situated on a larger scale (e.g., knowing where the robot is relative to where it started, or that it is at a certain point on a map). Surroundings awareness is the knowledge the operator has of the robot’s circumstances in a local sense, such as when there is an
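The five LASSO categories lend themselves to a simple coding scheme for operator utterances. The sketch below is purely illustrative: the category names come from the LASSO framework described above, but the enum, the `tally` helper, and the sample utterances are hypothetical and are not part of the authors' actual analysis tooling.

```python
from enum import Enum
from collections import Counter

class Lasso(Enum):
    """The five LASSO awareness categories (names from the framework above)."""
    LOCATION = "location"          # where the robot is on a larger scale
    ACTIVITIES = "activities"      # what the robot is currently doing
    SURROUNDINGS = "surroundings"  # the robot's local circumstances
    STATUS = "status"              # e.g., battery level, damage, autonomy mode
    MISSION = "overall mission"    # progress toward the overall task goal

def tally(coded_utterances):
    """Count how many coded operator utterances fall into each category."""
    return Counter(code for _, code in coded_utterances)

# Hypothetical utterances, hand-coded into LASSO categories.
utterances = [
    ("I think I'm back at the start point", Lasso.LOCATION),
    ("There's a beam just to my left",      Lasso.SURROUNDINGS),
    ("Battery is getting low",              Lasso.STATUS),
]
counts = tally(utterances)
```

Comparing such per-category counts across interfaces is one plausible way to quantify, for example, whether a map-centric design elicits more location-awareness utterances than a video-centric one.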

Collaboration


Dive into Jill L. Drury's collaborations.

Top Co-Authors

Holly A. Yanco
University of Massachusetts Lowell

Jean Scholtz
Pacific Northwest National Laboratory

Loretta D. More
Pennsylvania State University

Mark Micire
University of Massachusetts Lowell

Munjal Desai
University of Massachusetts Lowell