Michael J. Barnes
United States Army Research Laboratory
Publications
Featured research published by Michael J. Barnes.
Systems, Man and Cybernetics | 2007
Jessie Y. C. Chen; Ellen Haas; Michael J. Barnes
In the future, it will become more common for humans to team up with robotic systems to perform tasks that humans cannot realistically accomplish alone. Even for autonomous and semiautonomous systems, teleoperation will be an important default mode. However, teleoperation can be a challenging task because the operator is remotely located; as a result, the operator's situation awareness of the remote environment can be compromised and mission effectiveness can suffer. This paper presents a detailed examination of more than 150 papers covering human performance issues and suggested mitigation solutions. The paper summarizes the performance decrements caused by video image bandwidth, time lags, frame rates, lack of proprioception, frame of reference, two-dimensional views, attention switches, and motion effects. Suggested solutions and their limitations include stereoscopic displays, synthetic overlays, multimodal interfaces, and various predictive and decision support systems.
IEEE Transactions on Human-Machine Systems | 2014
Jessie Y. C. Chen; Michael J. Barnes
The human factors literature on intelligent systems was reviewed in relation to the following: efficient human supervision of multiple robots, appropriate human trust in automated systems, maintenance of human operators' situation awareness, individual differences in human-agent (H-A) interaction, and retention of human decision authority. A number of approaches, from flexible automation to autonomous agents, were reviewed, and their advantages and disadvantages were discussed. In addition, two key human performance issues (trust and situation awareness) related to H-A teaming for multirobot control were discussed, along with some promising user interface design solutions to address them. Some major individual-differences factors (operator spatial ability, attentional control ability, and gaming experience) were identified that may impact H-A teaming in the context of robotics control.
Systems, Man and Cybernetics | 2011
Jessie Y. C. Chen; Michael J. Barnes; Michelle Harper-Sciarini
The purpose of this paper is to review research pertaining to the limitations and advantages of supervisory control for unmanned systems. We identify and discuss results showing technologies that mitigate the observed problems, such as specialized interfaces and adaptive systems. In the report, we first present an overview of definitions and important terms of supervisory control and human-agent teaming. We then discuss human performance issues in supervisory control of multiple robots with regard to operator multitasking performance, trust in automation, situation awareness, and operator workload. In the following sections, we review research findings for specific areas of supervisory control of multiple ground robots, aerial robots, and heterogeneous robots (using different types of robots in the same mission). In the last section, we review innovative techniques and technologies designed to enhance operator performance and reduce potential performance degradations identified in the literature.
Human Factors | 2012
Jessie Y. C. Chen; Michael J. Barnes
Objective: A military multitasking environment was simulated to examine the effects of an intelligent agent, RoboLeader, on the performance of robotics operators. Background: The participants’ task was to manage a team of ground robots with the assistance of RoboLeader, an intelligent agent capable of coordinating the robots and changing their routes on the basis of battlefield developments. Method: In the first experiment, RoboLeader was perfectly reliable; in the second experiment, RoboLeader’s recommendations were manipulated to be either false-alarm prone or miss prone, with a reliability level of either 60% or 90%. The visual density of the targeting environment was manipulated by the presence or absence of friendly soldiers. Results: RoboLeader, when perfectly reliable, was helpful in reducing the overall mission times. The type of RoboLeader imperfection (false-alarm vs. miss prone) affected operators’ performance of tasks involving visual scanning (target detection, route editing, and situation awareness). There was a consistent effect of visual density (clutter of the visual scene) for multiple performance measures. Participants’ attentional control and video gaming experience affected their overall multitasking performance. In both experiments, participants with greater spatial ability consistently outperformed their low-spatial-ability counterparts in tasks that required effective visual scanning. Conclusion: Intelligent agents, such as RoboLeader, can benefit overall human-robot teaming performance. However, the type of agent unreliability, tasking requirements, and individual differences have complex effects on human-agent interaction. Application: The current results will facilitate the implementation of robots in military settings and will provide useful data for the design of systems for multirobot control.
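The false-alarm-prone versus miss-prone reliability manipulation can be sketched as follows; this is a minimal illustration under assumed semantics (a false-alarm-prone agent flags every real target but errs on non-target events, while a miss-prone agent never raises false alarms but drops some targets), and the function and its names are hypothetical, not taken from the study.

```python
import random

def agent_alerts(events, reliability=0.9, bias="fa", seed=0):
    """Simulate an imperfect agent's alerts over a sequence of boolean target events.

    bias="fa":   every real target is flagged, but the agent false-alarms
                 on a (1 - reliability) fraction of non-target events.
    bias="miss": the agent never false-alarms, but it misses a
                 (1 - reliability) fraction of real targets.
    """
    rng = random.Random(seed)
    alerts = []
    for is_target in events:
        correct = rng.random() < reliability
        if bias == "fa":
            # Targets always alerted; errors show up as spurious alerts.
            alerts.append(True if is_target else not correct)
        else:
            # Non-targets never alerted; errors show up as missed targets.
            alerts.append(correct if is_target else False)
    return alerts
```

Under this framing, the two error types demand different operator behavior: false alarms force visual verification of each alert, while misses force continuous independent scanning, which is consistent with the scanning-task differences reported above.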
Human Factors | 2016
Joseph E. Mercado; Michael A. Rupp; Jessie Y. C. Chen; Michael J. Barnes; Daniel Barber; Katelyn Procci
Objective: We investigated the effects of level of agent transparency on operator performance, trust, and workload in a context of human–agent teaming for multirobot management. Background: Participants played the role of a heterogeneous unmanned vehicle (UxV) operator and were instructed to complete various missions by giving orders to UxVs through a computer interface. An intelligent agent (IA) assisted the participant by recommending two plans—a top recommendation and a secondary recommendation—for every mission. Method: A within-subjects design with three levels of agent transparency was employed in the present experiment. There were eight missions in each of three experimental blocks, grouped by level of transparency. During each experimental block, the IA was incorrect three out of eight times due to external information (e.g., commander’s intent and intelligence). Operator performance, trust, workload, and usability data were collected. Results: Results indicate that operator performance, trust, and perceived usability increased as a function of transparency level. Subjective and objective workload data indicate that participants’ workload did not increase as a function of transparency. Furthermore, response time did not increase as a function of transparency. Conclusion: Unlike previous research, which showed that increased transparency resulted in increased performance and trust calibration at the cost of greater workload and longer response time, our results support the benefits of transparency for performance effectiveness without additional costs. Application: The current results will facilitate the implementation of IAs in military settings and will provide useful data to the design of heterogeneous UxV teams.
Ergonomics | 2012
Jessie Y. C. Chen; Michael J. Barnes
A military targeting environment was simulated to examine the effects of an intelligent route-planning agent, RoboLeader, which could support dynamic robot re-tasking based on battlefield developments, on the performance of robotics operators. We manipulated the levels of assistance (LOAs) provided by RoboLeader as well as the presence of a visualisation tool that provided feedback to the participants on their primary task (target encapsulation) performance. Results showed that the participants’ primary task benefited from RoboLeader under all LOA conditions compared to manual performance; however, visualisation had little effect. Frequent video gamers demonstrated significantly better situation awareness of the mission environment than did infrequent gamers. Those participants with higher spatial ability performed better on a secondary target detection task than did those with lower spatial ability. Finally, participants’ workload assessments were significantly lower when they were assisted by RoboLeader than when they performed the target encapsulation task manually. Practitioner Summary: This study demonstrated the utility of an intelligent agent for enhancing robotics operators’ supervisory control performance as well as reducing their workload during a complex urban scenario involving moving targets. The results furthered the understanding of the interplay among level of autonomy, multitasking performance, and individual differences in military tasking environments.
Systems, Man and Cybernetics | 2010
Jeonghwan Jin; Ling Rothrock; Patricia L. McDermott; Michael J. Barnes
This paper investigates the impact of framing and time pressure on human judgment performance in a complex multiattribute judgment task. We focus on the decision process of human participants who must choose between pairwise alternatives in a resource-allocation task. We used the Analytic Hierarchy Process (AHP) to calculate the relative weights of the four alternatives (i.e., C1, C2, C3, and C4) and the judgment consistency. Using the AHP, we examined two sets of hypotheses that address the impact of task conditions on the weight prioritization of choice alternatives and the internal consistency of the judgment behavior under varying task conditions. The experiment simulated the allocation of robotic assets across the battlefield to collect data about an enemy. Participants had to make a judgment about which asset to allocate to a new area by taking into account three criteria related to the likelihood of success. We manipulated the information frame and the nature of the task. We found that, in general, participants gave significantly different weights to the same alternatives under different frames and task conditions. Specifically, in terms of ln-transformed priority weights, participants gave significantly lower weights to C2 and C4 and a higher weight to C3 under the gain frame than under the loss frame; also, across task conditions (i.e., Tasks #1, #2, and #3), participants gave a significantly higher weight to C4 in Task #1; lower weights to C1 and C4 and a higher weight to C3 in Task #2; and a lower weight to C3 in Task #3. Furthermore, we found that the internal consistency of the decision behavior was worse, first, in the loss frame than in the gain frame and, second, under time pressure. Our methodology complements utility-theoretic frameworks by assessing judgment consistency without requiring the use of task-performance outcomes. This work is a step toward establishing a coherence criterion to investigate judgment under naturalistic conditions. The results will be useful for the design of multiattribute interfaces and decision-aiding tools for real-time judgments in time-pressured task environments.
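As a concrete illustration of the AHP machinery the study relies on, the sketch below computes priority weights (via the common geometric-mean approximation to the principal eigenvector) and Saaty's consistency ratio for four alternatives. The pairwise comparison matrix is invented for illustration; it is not the experiment's data.

```python
import math

# Hypothetical 4x4 pairwise comparison matrix over alternatives C1..C4,
# using Saaty's 1-9 scale (A[i][j] = how much Ci is preferred over Cj).
A = [
    [1,     3, 1 / 2, 2],
    [1 / 3, 1, 1 / 4, 1],
    [2,     4, 1,     3],
    [1 / 2, 1, 1 / 3, 1],
]

def ahp_weights(matrix):
    """Priority weights via the geometric-mean (row) method, normalized to sum to 1."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(matrix, weights):
    """Saaty consistency ratio CR = CI / RI; CR < 0.10 is conventionally acceptable."""
    n = len(matrix)
    # Estimate the principal eigenvalue lambda_max from A @ w.
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / weights[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)                 # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index for size n
    return ci / ri
```

The same consistency index is what lets the study quantify degraded internal consistency under the loss frame and under time pressure, independently of task outcomes.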
Collaboration Technologies and Systems | 2010
Mark G. Snyder; Zhihua Qu; Jessie Y. C. Chen; Michael J. Barnes
Autonomous teams of robotic vehicles are gaining significant importance in military applications in the realm of reconnaissance and the execution of vital tasks. As teams of robotic vehicles grow in size and mission complexity, an ever-increasing burden is placed on the human operators charged with overseeing such operations. In order to increase future mission complexity and success, it is imperative that a certain amount of workload be removed from the operator and a greater level of autonomy be given to the unmanned systems. A modular architecture has been developed that allows components to be added to the RoboLeader utility, expanding the capability of the design with little modification to the existing architecture. Currently, the RoboLeader utility assesses intelligence on ground conditions and provides the human operator with alternate path plans for a series of situations that require different path solutions.
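The modular add-on pattern described above might look something like the following sketch; the class and module names are hypothetical illustrations of the idea, not the actual RoboLeader implementation.

```python
class RoboLeaderCore:
    """Minimal plug-in registry: capability modules are added without
    modifying the core or the existing modules."""

    def __init__(self):
        self._modules = []

    def register(self, module):
        """Attach a new capability module to the utility."""
        self._modules.append(module)

    def propose_plans(self, situation):
        """Collect alternate path plans from every registered module."""
        plans = []
        for module in self._modules:
            plans.extend(module.plan(situation))
        return plans


class GroundConditionModule:
    """Example module: reacts to assessed ground conditions."""

    def plan(self, situation):
        if situation.get("road_blocked"):
            return ["reroute via secondary road"]
        return ["proceed on planned route"]


core = RoboLeaderCore()
core.register(GroundConditionModule())
```

A new capability (say, a threat-avoidance planner) would be another class with a `plan` method, registered the same way, which is the sense in which the design expands with little modification to what already exists.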
Winter Simulation Conference | 2000
Brett Walters; Jon French; Michael J. Barnes
The field element of the US Army Research Lab (ARL) at Fort Huachuca, Arizona, is concerned with the manning required to operate the close-range Tactical Unmanned Aerial Vehicle (TUAV). The operational requirements of the TUAV operators may include extended duty days, reduced crew size, and varying shift schedules. These conditions are likely to reduce operator effectiveness due to fatigue. The objective of the study was to analyze how fatigue, crew size, and rotation schedule affect operator workload and performance during the control of a TUAV. The conclusions from executing the models indicate that reducing the number of operators currently recommended for the control of TUAVs results in: 1) 33% more aerial vehicle (AV) mishaps during emergencies, 2) a 13% increase in the time it takes to search for targets, and 3) an 11% decrease in the number of targets detected. Over 400 mission scenario replications of the model were executed, allowing statistically reliable predictions to be made of the effect of operator fatigue on performance. Discrete event simulation (DES) models may provide a cost-effective means to estimate the impact of human limitations on military systems and highlight performance areas needing attention.
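A discrete-event model of this kind can be sketched in a few lines. The toy below only illustrates the approach; the arrival rate, service times, and fatigue multiplier are invented, not the study's parameters.

```python
import random

def simulate(num_operators, shift_hours=8, seed=0):
    """Toy discrete-event model: targets appear at random intervals, and
    handling a target takes longer as fatigue builds over the shift.
    Targets that appear while every operator is busy are missed.
    Returns the fraction of targets detected."""
    rng = random.Random(seed)
    horizon = shift_hours * 60.0            # shift length in minutes
    free_at = [0.0] * num_operators         # time each operator becomes free
    detected = total = 0
    t = 0.0
    while True:
        t += rng.expovariate(1 / 5.0)       # next target, roughly every 5 min
        if t > horizon:
            break
        total += 1
        i = min(range(num_operators), key=lambda k: free_at[k])
        if free_at[i] <= t:                 # an operator is available
            fatigue = 1.0 + t / horizon     # service time grows up to 2x by shift end
            free_at[i] = t + rng.expovariate(1 / (2.0 * fatigue))
            detected += 1
    return detected / total

def mean_detection(num_operators, reps=100):
    """Average over many replications, as the study did, to stabilize the estimate."""
    return sum(simulate(num_operators, seed=s) for s in range(reps)) / reps
```

Comparing `mean_detection(1)` with `mean_detection(2)` reproduces the qualitative finding: cutting the crew lowers the detection rate, and the fatigue term makes late-shift targets the most likely to be missed.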
Theoretical Issues in Ergonomics Science | 2018
Jessie Y. C. Chen; Shan G. Lakhmani; Kimberly Stowers; Anthony R. Selkowitz; Julia L. Wright; Michael J. Barnes
Effective collaboration between humans and agents depends on humans maintaining an appropriate understanding of, and calibrated trust in, the judgment of their agent counterparts. The Situation Awareness-based Agent Transparency (SAT) model was proposed to support human awareness in human–agent teams. As agents transition from tools to artificial teammates, an expansion of the model is necessary to support teamwork paradigms, which require bidirectional transparency. We propose that an updated model can better inform human–agent interaction in paradigms involving more advanced agent teammates. This paper describes the model's use in three programmes of research, which exemplify its utility in different contexts: an autonomous squad member, a mediator between a human and multiple subordinate robots, and a plan recommendation agent. Through this review, we show that the SAT model continues to be an effective tool for facilitating shared understanding and proper calibration of trust in human–agent teams.