Publications


Featured research published by Kristin E. Schaefer.


Human Factors | 2016

A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Understanding Autonomy in Future Systems.

Kristin E. Schaefer; Jessie Y. C. Chen; James L. Szalma; Peter A. Hancock

Objective: We used meta-analysis to assess research concerning human trust in automation to understand the foundation upon which future autonomous systems can be built. Background: Trust is increasingly important in the growing need for synergistic human–machine teaming. Thus, we expand on our previous meta-analytic foundation in the field of human–robot interaction to include all of automation interaction. Method: We used meta-analysis to assess trust in automation. Thirty studies provided 164 pairwise effect sizes, and 16 studies provided 63 correlational effect sizes. Results: The overall effect size of all factors on trust development was ḡ = +0.48, and the correlational effect was r̄ = +0.34, each of which represented medium effects. Moderator effects were observed for the human-related (ḡ = +0.49; r̄ = +0.16) and automation-related (ḡ = +0.53; r̄ = +0.41) factors. Moderator effects specific to environmental factors proved insufficient in number to calculate at this time. Conclusion: Findings provide a quantitative representation of factors influencing the development of trust in automation as well as identify additional areas of needed empirical research. Application: This work has important implications for the enhancement of current and future human–automation interaction, especially in high-risk or extreme performance environments.
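The pooled values above (ḡ and r̄) are weighted averages of per-study effect sizes. As a minimal sketch of how a bias-corrected Hedges' g and a fixed-effect pooled ḡ are commonly computed (the estimator choice and all numbers below are illustrative assumptions, not data from this meta-analysis):

```python
import math

def hedges_g(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Bias-corrected standardized mean difference (Hedges' g) for one study."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
    d = (mean_a - mean_b) / sp               # Cohen's d
    j = 1 - 3 / (4 * (n_a + n_b) - 9)        # small-sample correction factor
    return j * d

def pooled_effect(effects, variances):
    """Fixed-effect pooled estimate: inverse-variance weighted mean of study effects."""
    weights = [1.0 / v for v in variances]
    return sum(w * g for w, g in zip(weights, effects)) / sum(weights)

# Hypothetical per-study summary statistics, for illustration only
g1 = hedges_g(4.2, 3.6, 1.1, 1.0, 30, 30)
g2 = hedges_g(5.0, 4.1, 1.3, 1.2, 25, 25)
print(round(pooled_effect([g1, g2], [0.07, 0.09]), 2))  # pooled g-bar for the two studies
```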


Cognitive Systems Research | 2017

Communicating intent to develop shared situation awareness and engender trust in human-agent teams

Kristin E. Schaefer; Edward R. Straub; Jessie Y. C. Chen; Joe Putney; Arthur W. Evans

This paper addresses issues related to integrating autonomy-enabled, intelligent agents into collaborative, human-machine teams. Interaction with intelligent machine agents capable of making independent, goal-directed decisions in human-machine teaming operations constitutes a major change from traditional human-machine interaction involving teleoperation. Communicating the machine agent's intent to human counterparts becomes increasingly important as independent machine decisions become subject to human trust and mental models. The authors present findings from their research suggesting that existing user display technologies, tailored with context-specific information and the human's knowledge level of the machine agent's decision process, can mitigate misperceptions of the appropriateness of agent behavioral responses. This is important because misperceptions on the part of human team members increase the likelihood of trust degradation and unnecessary interventions, ultimately leading to disuse of the agent. Examples of possible issues associated with communicating agent intent, as well as potential implications for trust calibration, are provided.


IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support | 2016

Will passengers trust driverless vehicles? Removing the steering wheel and pedals

Kristin E. Schaefer; Edward R. Straub

Driverless passenger vehicles are an emerging technology and a near-term eventuality. As such, the role of someone onboard the vehicle will change from the active role of a driver to the passive role of a passenger. The goal of this work is to provide an initial assessment of this interaction, with a specific focus on the impact of different available control interfaces on trust, usability, and performance. Participants interacted with two simulated driverless passenger vehicles that were designed to mirror a real-world prototype vehicle for Soldier transit on a U.S. military installation. Vehicle 1 had a traditional wheel and pedal control interface, as well as two buttons to disengage or re-engage the vehicle's automation system. Vehicle 2 had only the button system, which could be used to disengage the automation, bring the vehicle to a safe stop in the simulation, and then re-engage it. Both vehicles were designed to function optimally throughout the virtual environment. Findings suggested equal trust and usability ratings between the two vehicles. However, participants tended to intervene more often with the traditional control interface. Individual differences and preference ratings are reported.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

Individual Differences, Trust, and Vehicle Autonomy: A pilot study

Kristin E. Schaefer; David Scribner

Automotive technology has evolved over the last century to accommodate various levels of vehicle autonomy in order to advance safety, enhance comfort, and mitigate human error. This evolution has culminated in the development of a robotic, or self-driving, passenger vehicle. Yet there is still much to understand with respect to the changing human-vehicle dynamic in order to develop effective, efficient, and trusted autonomous passenger vehicles. This work presents a theoretical review of the changing dynamic of the human-vehicle system, with a specific focus on individual differences related to trust and performance. A pilot study is reported to illustrate the importance of the human to the development and calibration of trust within the human-vehicle cooperative relationship.


International Conference on Collaboration Technologies and Systems | 2016

Relinquishing Manual Control: Collaboration Requires the Capability to Understand Robot Intent

Kristin E. Schaefer; Ralph W. Brewer; Joe Putney; Edward Mottern; Jeffrey Barghout; Edward R. Straub

Collaboration between robots and humans means different things to different people in different applications. Collaboration could range from a robot and a human simply operating in the same area, to operations that require complex, interdependent decisions based on joint goals. Regardless of the level of coordination, all effective collaborations require understanding the control allocation processes and human engagement or reengagement strategies. This is especially true as humans begin to relinquish manual control and the robot becomes a team member rather than just a tool. The importance of this paper lies in understanding the implications of relinquishing direct control and allowing the robot to make decisions that could affect the safety of users or bystanders. Also important is understanding when and how to facilitate appropriate engagement strategies. To build appropriate reliance and to calibrate trust, the robot should have a means to convey its reasoning processes or intent. Our research begins to show how user displays can facilitate the development of a shared situation awareness (SA) of the mission space. Shared SA can enhance the teaming effort, which engenders and calibrates trust in the robotic system. This paper addresses a number of collaboration issues related to control allocation, including issues specific to relinquishing user control, reengagement strategies, and robot authority. Research specific to the US Army Applied Robotics for Installations and Base Operations (ARIBO) driverless vehicle project is provided to advance understanding of these control allocation issues. Specific findings have shown a relationship between reliance on a robot and access to different user controls. User reports have provided insight into the benefits and limitations of integrating user displays to facilitate communication of robot intent. Our research on the impact of interfaces advances the science of human-robot interaction by extending the theory of shared mental models beyond the concept of human-only teams to human-robot teams.


Human-Robot Interaction | 2016

Human-animal teams as an analog for future human-robot teams: influencing design and fostering trust

Elizabeth Phillips; Kristin E. Schaefer; Deborah R. Billings; Florian Jentsch; Peter A. Hancock

Our work posits that existing human-animal teams can serve as an analog for developing effective human-robot teams. Existing knowledge of human-animal partnerships can be readily applied to the HRI domain to foster accurate mental models and appropriately calibrated trust in future human-robot teams. Human-animal relationships are examined in terms of the beneficial roles animals can play in enabling effective teaming, as well as the level of team interdependency and team communication, with the goal of developing applications for future human-robot teams.


Cognitive Systems Research | 2017

Situation awareness in human-machine interactive systems

Tom Ziemke; Kristin E. Schaefer; Mica Endsley

This special issue brings together six papers on situation awareness in human-machine interactive systems, in particular in teams of collaborating humans and artificial agents. The editorial provides a brief introduction and overviews the contributions, addressing issues such as team and shared situation awareness, trust, transparency, timing, engagement, and ethical aspects.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2017

Mental Model Consensus and Shifts During Navigation System-Assisted Route Planning

B. S. Perelman; A. W. Evans; Kristin E. Schaefer

A major barrier to effective spatial decision-making in human-agent teams is that humans and algorithms use different mechanisms to solve spatial problems, frequently leading them to produce different solutions. Incongruity between algorithm-generated solutions and human spatial mental models results in higher workload in mixed-initiative systems, and potential breakdowns in trust and team situation awareness. Although these performance effects are well understood, few methods exist for quantifying and comparing human spatial mental models and algorithm-generated solutions. To address these problems, 27 participants completed solutions to five spatial planning problems before and after receiving assistance from two navigation algorithms. A novel path mapping and clustering approach provided a means of quantifying consensus in human mental models, and shifts in those mental models after viewing the algorithm-suggested routes. Human solutions clustered into a small number of shared mental models. Individual differences in trust in each algorithm predicted acceptance of that algorithm's route.
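The abstract does not detail the path mapping and clustering approach; the sketch below is one plausible way to group route solutions into shared mental models, assuming each route is a sequence of 2-D waypoints that is resampled to a fixed length and compared by mean point-to-point distance before average-linkage clustering (the distance measure and function names are assumptions for illustration, not the paper's method).

```python
# One plausible route-clustering sketch (not the paper's actual method): resample each
# route to k points, compute pairwise mean point-to-point distances, then cluster.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def resample(path, k=50):
    """Resample an (n, 2) array of waypoints to k points evenly spaced by arc length."""
    path = np.asarray(path, dtype=float)
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative arc length
    t = np.linspace(0.0, s[-1], k)
    return np.column_stack([np.interp(t, s, path[:, 0]),
                            np.interp(t, s, path[:, 1])])

def path_distance(a, b, k=50):
    """Mean Euclidean distance between corresponding resampled points of two routes."""
    return float(np.linalg.norm(resample(a, k) - resample(b, k), axis=1).mean())

def cluster_routes(routes, n_clusters=3):
    """Assign each route to one of n_clusters groups via average-linkage clustering."""
    n = len(routes)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = path_distance(routes[i], routes[j])
    return fcluster(linkage(squareform(d), method="average"),
                    t=n_clusters, criterion="maxclust")
```

Routes assigned to the same cluster label can then be treated as instances of one shared mental model, and re-clustering after exposure to algorithm-suggested routes shows whether participants shifted toward the suggested solutions.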


international conference on virtual, augmented and mixed reality | 2018

Quantifying Human Decision-Making: Implications for Bidirectional Communication in Human-Robot Teams

Kristin E. Schaefer; Brandon S. Perelman; Ralph W. Brewer; Julia L. Wright; Nicholas Roy; Derya Aksaray

A goal for future robotic technologies is to advance autonomy capabilities for independent and collaborative decision-making with human team members during complex operations. However, if human behavior does not match the robots' models or expectations, trust can degrade, impeding team performance in ways that may only be mitigated through explicit communication. Therefore, the effectiveness of the team is contingent on the accuracy of the models of human behavior, which can be informed by the transparent bidirectional communication needed to develop common ground and a shared understanding. For this work, we specifically characterize human decision-making, especially its variability, with the eventual goal of incorporating this model within a bidirectional communication system. Thirty participants completed an online game in which they controlled a human avatar through a 14 × 14 grid room in order to move boxes to their target locations. Each level of the game increased in environmental complexity through the number of boxes. Two trials were completed to compare path planning under conditions of known versus unknown information. Path analysis techniques were used to quantify human decision-making and to draw implications for bidirectional communication.
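The abstract does not spell out the path analysis techniques; below is a minimal sketch of one such measure for a grid like the 14 × 14 room described above, comparing an observed avatar path against a BFS-optimal path (the grid encoding, the excess-length ratio, and all names are hypothetical illustrations, not the study's actual analysis).

```python
# A minimal sketch of one grid path-analysis measure: how much longer the observed
# path is than the shortest possible path between its endpoints.
from collections import deque

def shortest_path_length(grid, start, goal):
    """BFS shortest-path length in steps; grid[r][c] == 1 marks a blocked cell."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # goal unreachable

def excess_length_ratio(observed_path, grid):
    """Observed path length (in steps) divided by the optimal length between its endpoints."""
    optimal = shortest_path_length(grid, observed_path[0], observed_path[-1])
    return (len(observed_path) - 1) / optimal if optimal else float("inf")

# Example: an empty 14 x 14 room and an observed path that takes a short detour
room = [[0] * 14 for _ in range(14)]
observed = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (3, 0)]
print(round(excess_length_ratio(observed, room), 2))  # 5 steps vs. 3 optimal -> 1.67
```

A ratio of 1.0 indicates an optimal route; higher values quantify how far a participant's decision-making deviated from the shortest solution, which is one way variability across participants and conditions could be summarized.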


Advances in intelligent systems and computing | 2017

Five Requisites for Human-Agent Decision Sharing in Military Environments

Michael J. Barnes; Jessie Y. C. Chen; Kristin E. Schaefer; Troy D. Kelley; Cheryl Giammanco; Susan G. Hill

Working with industry, universities, and other government agencies, the U.S. Army Research Laboratory has been engaged in multi-year programs to understand the role of humans working with autonomous and robotic systems. The purpose of this paper is to present an overview of the research themes in order to abstract five research requirements for effective human-agent decision-making. Supporting research for each of the five requirements is discussed to elucidate the issues involved and to make recommendations for future research. The requirements include: (a) a direct link between the operator and a supervisory agent, (b) interface transparency, (c) appropriate trust, (d) cognitive architectures to infer intent, and (e) a common language between humans and agents.

Collaboration


Dive into Kristin E. Schaefer's collaborations.

Top Co-Authors

E. Ray Pursel, Naval Surface Warfare Center
Florian Jentsch, University of Central Florida
Peter A. Hancock, University of Central Florida
Deborah R. Billings, University of Central Florida
Elizabeth Phillips, University of Central Florida
James L. Szalma, University of Central Florida
Nicholas Roy, Massachusetts Institute of Technology
Shane T. Mueller, Michigan Technological University