
Publication


Featured research published by Kimberly Stowers.


Theoretical Issues in Ergonomics Science | 2018

Situation awareness-based agent transparency and human-autonomy teaming effectiveness

Jessie Y. C. Chen; Shan G. Lakhmani; Kimberly Stowers; Anthony R. Selkowitz; Julia L. Wright; Michael J. Barnes

Effective collaboration between humans and agents depends on humans maintaining an appropriate understanding of and calibrated trust in the judgment of their agent counterparts. The Situation Awareness-based Agent Transparency (SAT) model was proposed to support human awareness in human–agent teams. As agents transition from tools to artificial teammates, an expansion of the model is necessary to support teamwork paradigms, which require bidirectional transparency. We propose that an updated model can better inform human–agent interaction in paradigms involving more advanced agent teammates. This paper describes the model's use in three programmes of research, which exemplify the utility of the model in different contexts – an autonomous squad member, a mediator between a human and multiple subordinate robots, and a plan recommendation agent. Through this review, we show that the SAT model continues to be an effective tool for facilitating shared understanding and proper calibration of trust in human–agent teams.


International Conference on Engineering Psychology and Cognitive Ergonomics | 2016

Trajectory Recovery System: Angle of Attack Guidance for Inflight Loss of Control

Nicholas Kasdaglis; Tiziano Bernard; Kimberly Stowers

This paper describes the design and development of an ecological display to aid pilots in recovering from an In-Flight Loss of Control event due to a Stall (ILOC-S). The Trajectory Recovery System (TRS) provides a stimulus–response interaction between the pilot and the primary flight display. The display is intended to provide directly perceivable and actionable aerodynamic performance state information together with the requisite recovery guidance representation. In an effort to reduce cognitive tunneling, TRS mediates the interaction between pilot and aircraft display systems by deploying cognitive countermeasures that remove display representations unnecessary to the recovery task. Reported here are the development and initial human-centered design activities for a functional, integrated TRS display in a 737 flight-training device.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2016

Intelligent Agent Transparency: The Design and Evaluation of an Interface to Facilitate Human and Intelligent Agent Collaboration

Kimberly Stowers; Nicholas Kasdaglis; Olivia B. Newton; Shan G. Lakhmani; Ryan Wohleber; Jessie Y. C. Chen

We evaluated the usability and utility of an unmanned vehicle management interface that was developed based on the Situation awareness–based Agent Transparency model. We sought to examine the effect of increasing levels of agent transparency on operator task performance and perceived usability of the agent. Usability and utility were assessed through flash testing, a focus group, and experimental testing. While usability appeared to decrease with the portrayal of uncertainty, operator performance and reliance on key parts of the interface increased. Implications and next steps are discussed.


Human Factors | 2017

A Framework to Guide the Assessment of Human–Machine Systems

Kimberly Stowers; James M. Oglesby; Shirley C. Sonesh; Kevin Leyva; Chelsea Iwig; Eduardo Salas

Objective: We have developed a framework for guiding measurement in human–machine systems. Background: The assessment of safety and performance in human–machine systems often relies on direct measurement, such as tracking reaction time and accidents. However, safety and performance emerge from the combination of several variables. The assessment of precursors to safety and performance is thus an important part of predicting and improving outcomes in human–machine systems. Method: As part of an in-depth literature analysis involving peer-reviewed, empirical articles, we located and classified variables important to human–machine systems, giving a snapshot of the state of science on human–machine system safety and performance. Using this information, we created a framework of safety and performance in human–machine systems. Results: This framework details several inputs and processes that collectively influence safety and performance. Inputs are divided into human, machine, and environmental inputs. Processes are divided into attitudes, behaviors, and cognitive variables. Each class of inputs influences the processes and, subsequently, the outcomes that emerge in human–machine systems. Conclusion: This framework offers a useful starting point for understanding the current state of the science and measuring many of the complex variables relating to safety and performance in human–machine systems. Application: This framework can be applied to the design, development, and implementation of automated machines in spaceflight, military, and health care settings. We present a hypothetical example of how it can be used to aid in project success.
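As a rough, illustrative sketch only (not taken from the paper), the framework's input/process/outcome structure could be organized as a simple data model when planning measurements; every class and field name below is hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch of the framework's structure: classes of inputs feed
# processes (attitudes, behaviors, cognition), which in turn shape outcomes.

@dataclass
class Inputs:
    human: Dict[str, float] = field(default_factory=dict)        # e.g., fatigue, expertise
    machine: Dict[str, float] = field(default_factory=dict)      # e.g., reliability, transparency
    environment: Dict[str, float] = field(default_factory=dict)  # e.g., task demands, noise

@dataclass
class Processes:
    attitudes: Dict[str, float] = field(default_factory=dict)    # e.g., trust in automation
    behaviors: Dict[str, float] = field(default_factory=dict)    # e.g., reliance, monitoring
    cognition: Dict[str, float] = field(default_factory=dict)    # e.g., situation awareness

@dataclass
class Outcomes:
    safety: List[str] = field(default_factory=list)              # e.g., incident counts
    performance: List[str] = field(default_factory=list)         # e.g., task accuracy, completion time

@dataclass
class HumanMachineAssessment:
    """One assessment plan: which inputs and processes are measured, and which outcomes they feed."""
    inputs: Inputs
    processes: Processes
    outcomes: Outcomes

# Example: a minimal, made-up assessment plan for a supervisory-control task.
plan = HumanMachineAssessment(
    inputs=Inputs(machine={"automation_reliability": 0.90}),
    processes=Processes(attitudes={"trust_rating": 4.2}, cognition={"situation_awareness": 0.75}),
    outcomes=Outcomes(performance=["decision accuracy", "response time"]),
)
```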


Intelligent User Interfaces | 2016

Human-Autonomy Teaming and Agent Transparency

Jessie Y. C. Chen; Michael J. Barnes; Anthony R. Selkowitz; Kimberly Stowers; Shan G. Lakhmani; Nicholas Kasdaglis

We developed the user interfaces for two Human-Robot Interaction (HRI) tasking environments: dismounted infantry interacting with a ground robot (Autonomous Squad Member) and human interaction with an intelligent agent to manage a team of heterogeneous robotic vehicles (IMPACT). These user interfaces were developed based on the Situation awareness-based Agent Transparency (SAT) model. User testing showed that as agent transparency increased, so did overall human-agent team performance. Participants were able to calibrate their trust in the agent more appropriately as agent transparency increased.


Advances in Intelligent Systems and Computing | 2017

Insights into Human-Agent Teaming: Intelligent Agent Transparency and Uncertainty

Kimberly Stowers; Nicholas Kasdaglis; Michael A. Rupp; Jessie Y. C. Chen; Daniel Barber; Michael J. Barnes

This paper discusses two studies testing the effects of agent transparency in joint cognitive systems involving supervisory control and decision-making. Specifically, we examine the impact of agent transparency on operator performance (decision accuracy), response time, perceived workload, perceived usability of the agent, and operator trust in the agent. Transparency has a positive impact on operator performance, usability, and trust, yet the depiction of uncertainty has potentially negative effects on usability and trust. Guidelines and considerations for displaying transparency in joint cognitive systems are discussed.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

Space Flight Task Contexts for Long Distance and Duration Exploration Missions: Application to Measurement of Human Automation Interaction

Chelsea Iwig; James M. Oglesby; Misa Shimono; Kimberly Stowers; Kevin Leyva; Eduardo Salas

An effort is currently underway to determine methods for measuring the safety and performance of human-automation systems to improve their functioning for future long-duration space flights. However, an important step in system evaluation is understanding the contexts in which these systems operate. Identifying those contexts helps target which variables may relate to the overall system's effectiveness. A review of NASA documents and literature has resulted in the identification of four categories of task contexts that are believed to be important for future Long Distance and Duration Exploration Missions (LDDEM) to Mars and beyond. These four categories are (1) spacecraft navigation, (2) robotic/habitat operations, (3) systems monitoring, and (4) mission planning and scheduling. Within each of these task categories there exist varying task demands and environmental conditions that affect the user's interaction with the automation and, subsequently, the types of measurement that are appropriate for analyzing performance and safety within the human-automation system.


Advances in Intelligent Systems and Computing | 2017

Human-Robot Interaction: Proximity and Speed—Slowly Back Away from the Robot!

Keith R. MacArthur; Kimberly Stowers; Peter A. Hancock

This experiment was designed to evaluate the effects of proximity and speed of approach on trust in human-robot interaction (HRI). The experiment used a 2 (Speed) × 2 (Proximity) mixed factorial design, and trust levels were measured by self-report on the Human Robot Trust Scale and the Trust in Automation Scale. Data analyses indicate that proximity [F(2, 146) = 6.842, p < 0.01, partial η² = 0.086] and speed of approach [F(2, 146) = 2.885, p = 0.059, partial η² = 0.038] are significant factors contributing to changes in trust levels.
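As an arithmetic check (not part of the published abstract), partial eta squared can be recovered from an F ratio and its degrees of freedom; plugging in the reported values reproduces the stated effect sizes:

\[
\eta_p^2 = \frac{F \cdot df_{\text{effect}}}{F \cdot df_{\text{effect}} + df_{\text{error}}}, \qquad
\frac{6.842 \times 2}{6.842 \times 2 + 146} \approx 0.086, \qquad
\frac{2.885 \times 2}{2.885 \times 2 + 146} \approx 0.038.
\]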


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

The Influence of Robot Form on Trust

Tracy Sanders; William Volante; Kimberly Stowers; Theresa Kessler; Katharina Gabracht; Brandon Harpold; Paul Oppold; Peter A. Hancock

Assistive robotics is a rapidly progressing field of study that contains facets yet to be fully understood. Here we examine the effect of robot form on the level of trust users place in the robot. Form-based trust was evaluated in this study by comparing participant trust ratings across four robot designs: Lego Mindstorm, Keepon, Sphero, and Ozzy. The first view of the robot and the interactions with the robots were examined with pre- and post-measurements of trust. Sphero and Lego received consistently higher trust ratings than Keepon and Ozzy. Pre-post measures reveal a difference between the initial measure of trust based on form and the second measure of trust based on the observation of robot function.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2014

Cognition and Physiological Response: Towards a Model of Validated Physiological Measurement

Ashley M. Hughes; William Volante; Kimberly Stowers; Kevin Leyva; James M. Oglesby; Tiffany Bisbey; Eduardo Salas; Benjamin A. Knott; Michael A. Vidulich

Complex tasks in large and error-prone environments require unobtrusive, unbiased and real-time measurement of cognitive variables to promote safety and to achieve optimal performance. Despite the prevalence of physiological measurement of cognitive constructs and cognitive performance, such as workload, little has been done to justify the inference of cognitive states from physiological measures. We develop a framework based on the extant literature to provide the groundwork for further validation of physiological measurement. Specifically, we leverage theoretically-grounded conditions of measurement to aid in investigating the logical sampling and construct validity for use of such metrics. Further meta-analytic investigation is warranted to validate the model and justify use of physiological measures.

Collaboration


An overview of Kimberly Stowers's collaborations.

Top Co-Authors

Kevin Leyva, University of Central Florida

James M. Oglesby, University of Central Florida

Nicholas Kasdaglis, Florida Institute of Technology

Peter A. Hancock, University of Central Florida

Shan G. Lakhmani, University of Central Florida

William Volante, University of Central Florida

Shirley C. Sonesh, University of Central Florida

Theresa Kessler, University of Central Florida