Publications


Featured research published by Julie A. Adams.


International Conference on Swarm Intelligence | 2018

Learning Based Leadership in Swarm Navigation

Ovunc Tuzel; Gilberto Marcon dos Santos; Chloe Fleming; Julie A. Adams

Collective migration in biological species is often guided by distributed leaders that modulate their peers' motion behaviors. Distributed leadership is important for artificial swarms, but designing the leaders' controllers is difficult. A swarm control strategy that leverages trained leaders to influence the collective's trajectory in spatial navigation tasks was formulated. A neuro-evolutionary, learning-based control method was used to train a small number of leaders to influence motion behaviors. The leadership control strategy was applied to a rally task with varying swarm sizes and leadership percentages. Increasing the leadership representation improved task performance. Leaders moved quickly when the swarm had a higher percentage of leaders and slowly when the percentage was small.
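The paper describes the full evolved controller; purely as an illustration of the leader-influenced navigation idea (not the authors' implementation), the hypothetical sketch below lets a small fraction of leader agents bias their heading toward a rally point while the remaining agents follow simple cohesion and alignment rules, so the leader fraction can be varied and its effect on convergence observed.

```python
import numpy as np

# Hypothetical illustration of leader-influenced swarm navigation.
# Followers use simple cohesion + alignment; leaders additionally bias
# their heading toward a rally point (a stand-in for the evolved neural
# controller described in the paper). All parameters are illustrative.

RNG = np.random.default_rng(0)

def step(pos, vel, is_leader, rally, neighbor_radius=5.0, speed=0.5,
         w_cohesion=0.05, w_align=0.3, w_rally=0.6):
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < neighbor_radius) & (d > 0)
        if nbrs.any():
            cohesion = pos[nbrs].mean(axis=0) - pos[i]   # move toward neighbors
            align = vel[nbrs].mean(axis=0)               # match neighbor heading
            new_vel[i] += w_cohesion * cohesion + w_align * align
        if is_leader[i]:
            # Leaders steer toward the rally point; followers only react to
            # neighbors, so leadership spreads through local interactions.
            new_vel[i] += w_rally * (rally - pos[i])
        norm = np.linalg.norm(new_vel[i])
        if norm > 0:
            new_vel[i] = speed * new_vel[i] / norm
    return pos + new_vel, new_vel

def simulate(n=50, leader_fraction=0.1, steps=400, rally=np.array([50.0, 50.0])):
    pos = RNG.uniform(0, 10, size=(n, 2))
    vel = RNG.uniform(-0.5, 0.5, size=(n, 2))
    is_leader = np.zeros(n, dtype=bool)
    is_leader[: max(1, int(leader_fraction * n))] = True
    for _ in range(steps):
        pos, vel = step(pos, vel, is_leader, rally)
    return np.linalg.norm(pos - rally, axis=1).mean()  # mean distance to rally

if __name__ == "__main__":
    for frac in (0.05, 0.1, 0.25):
        print(f"leader fraction {frac:.2f}: "
              f"mean final distance {simulate(leader_fraction=frac):.2f}")
```

Sweeping the leader fraction in this toy simulation is only meant to mirror the kind of comparison the paper makes, not to reproduce its results.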


Human-Robot Interaction | 2018

A Diagnostic Human Workload Assessment Algorithm for Human-Robot Teams

Jamison Heard; Rachel Heald; Caroline E. Harriott; Julie A. Adams

High-stress environments, such as a NASA control room, require optimal task performance, as a single mistake may cause monetary loss or the loss of human life. Robots can partner with humans in a collaborative or supervisory paradigm. Such teaming paradigms require the robot to appropriately interact with the human without decreasing either's task performance. Workload is directly correlated with task performance; thus, a robot may use a human's workload state to modify its interactions with the human. A diagnostic workload assessment algorithm that accurately estimates workload using results from two evaluations, one peer-based and one supervisory-based, is presented.
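The paper presents the actual diagnostic algorithm; as a rough, hypothetical sketch of the general approach (estimating a workload state from sensed features so a robot can adapt its interactions), the example below trains an off-the-shelf regressor on simulated physiological and task features. The feature set, data, and model are illustrative assumptions, not the published method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical sketch only: map windowed physiological/behavioral features
# (e.g., heart rate, heart-rate variability, response time, task rate) to an
# overall workload estimate. Data and model are illustrative stand-ins.

rng = np.random.default_rng(1)

# Simulated training data: rows are time windows, columns are features
# [heart_rate, heart_rate_variability, response_time_s, task_rate].
X_train = rng.normal(loc=[80, 50, 1.2, 4.0],
                     scale=[10, 15, 0.4, 1.5], size=(200, 4))
# Simulated "ground truth" workload ratings (e.g., a 0-100 subjective scale).
y_train = np.clip(0.5 * X_train[:, 0] - 0.3 * X_train[:, 1]
                  + 20 * X_train[:, 2] + rng.normal(0, 5, 200), 0, 100)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# At run time, the robot would compute the same features over a sliding
# window and query the model for the current workload estimate, which could
# then drive how it interacts with the human.
current_window = np.array([[95, 30, 2.0, 6.0]])
print(f"estimated workload: {model.predict(current_window)[0]:.1f}")
```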


Human-Robot Interaction | 2018

Swarm Transparency

Julie A. Adams; Jessie Y. C. Chen; Michael A. Goodrich

A key element of system transparency is allowing humans to calibrate their trust in a system, given the inherent uncertainty, emergent behaviors, etc. As robotic swarms progress towards real-world missions, such transparency becomes increasingly necessary in order to reduce the disuse, misuse, and errors humans make when influencing and directing the swarm. However, achieving this objective requires addressing the complex challenges associated with providing transparency. Two swarm transparency challenge categories, with exemplar challenges, are provided.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2018

Analysis of Human-Swarm Visualizations

Karina A. Roundtree; Matthew D. Manning; Julie A. Adams

Interest in robotic swarms has increased rapidly. Prior research determined that humans perceive biological swarm motions as a single entity, rather than perceiving the individuals. An open question is how the swarm's visual representation and the associated task impact human performance when identifying current swarm tasks. The majority of existing swarm visualizations present each robot individually. Swarms typically incorporate large numbers of individuals, where the individuals exhibit simple behaviors, but the swarm appears to exhibit more intelligent behavior. As the swarm size increases, it becomes increasingly difficult for the human operator to understand the swarm's current state and emergent behaviors, and to predict future outcomes. Alternative swarm visualizations are one means of mitigating high operator workload and the risk of human error. Five visualizations were evaluated for two tasks, go to and avoid, in the presence or absence of obstacles. The results indicate that visualizations incorporating representations of individual agents resulted in higher accuracy when identifying tasks.


Applied Ergonomics | 2018

Human performance measures for the evaluation of process control human-system interfaces in high-fidelity simulations

Jie Xu; Shilo Anders; Arisa Pruttianan; Nathan Lau; Julie A. Adams; Matthew B. Weinger

We reviewed the literature on measuring human performance to evaluate human-system interfaces (HSIs), focusing on high-fidelity simulations of industrial process control systems, to identify best practices and future directions for research and operations. We searched the literature and then conducted an in-depth review, structured coding, and analysis of 49 articles, which described 42 studies. Human performance measures were classified across six dimensions: task performance, workload, situation awareness, teamwork/collaboration, plant performance, and other cognitive performance indicators. Many studies measured performance in more than one dimension, but few addressed more than three. Only a few measures demonstrated acceptable levels of reliability, validity, and sensitivity in the reviewed studies. More research is required to assess the measurement qualities of the commonly used measures. The results can guide future research and practice for human performance measurement in process control HSI design and deployment.


Robot and Human Interactive Communication | 2017

Towards reaction and response time metrics for real-world human-robot interaction

Caroline E. Harriott; Julie A. Adams

Reaction time and response time have been successfully measured in laboratory settings. As robots move into the real world, such metrics are needed for human-robot team deployments when evaluating interaction naturalness, the ability to maintain safety, and task performance. Potential real-world reaction and response time metrics for peer-based teams are presented. Primary and secondary task reaction and response times were measured via video and auditory coding for a subset of first response tasks. The successful application of the metrics showed that primary task reaction and response times were longer for human-robot teams.
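As a minimal illustration of how such metrics can be computed from coded observations (the event names, timestamps, and coding scheme below are assumptions for the example, not the paper's protocol), reaction time is taken here as the interval from stimulus onset to the first observable reaction, and response time as the interval from stimulus onset to the completed response.

```python
from dataclasses import dataclass
from statistics import mean

# Illustrative computation of reaction and response times from coded video
# events. Definitions: reaction time = stimulus onset to first observable
# reaction; response time = stimulus onset to completed response.

@dataclass
class CodedEvent:
    stimulus_s: float   # time the stimulus (e.g., a verbal request) occurred
    reaction_s: float   # time the first observable reaction was coded
    response_s: float   # time the completed response was coded

# Hypothetical coded events from a single trial.
events = [
    CodedEvent(12.0, 13.1, 18.4),
    CodedEvent(45.5, 46.0, 51.2),
    CodedEvent(90.3, 92.1, 99.8),
]

reaction_times = [e.reaction_s - e.stimulus_s for e in events]
response_times = [e.response_s - e.stimulus_s for e in events]

print(f"mean reaction time: {mean(reaction_times):.2f} s")
print(f"mean response time: {mean(response_times):.2f} s")
```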


Human Factors and Ergonomics Society Annual Meeting Proceedings | 2009

Comparing, Merging, and Adapting Methods of Cognitive Task Analysis

Robert R. Hoffman; Julie A. Adams; Ann M. Bisantz; Birsen Donmez; David B. Kaber; Emilie Roth

Cognitive systems engineering projects have found a need for and value in combining particular methods of cognitive task analysis (CTA). A principal reason given is that different CTA methods have different strengths in how they inform the study, analysis, or design of cognitive work systems. Focus questions include: What specific CTA methods have been combined or merged for a particular research project? What was the rationale for the selection of methods? How did researchers do the combining? Were methods actually merged into a single procedure, or were multiple methods conducted separately, with the resulting analyses or representations merged subsequently?


IEEE Transactions on Human-Machine Systems | 2018

The Underpinnings of Workload in Unmanned Vehicle Systems

Becky L. Hooey; David B. Kaber; Julie A. Adams; Terrence Fong; Brian F. Gore


IEEE Transactions on Human-Machine Systems | 2018

A Survey of Workload Assessment Algorithms

Jamison Heard; Caroline E. Harriott; Julie A. Adams


Adaptive Agents and Multi-Agent Systems | 2018

Task Fusion Heuristics for Coalition Formation and Planning

Gilberto Marcon dos Santos; Julie A. Adams

Collaboration


Top co-authors of Julie A. Adams.

Caroline E. Harriott
Charles Stark Draper Laboratory

David B. Kaber
North Carolina State University