Publication


Featured research published by Ewart de Visser.


Human Factors | 2011

A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction

Peter A. Hancock; Deborah R. Billings; Kristin E. Schaefer; Jessie Y. C. Chen; Ewart de Visser; Raja Parasuraman

Objective: We evaluate and quantify the effects of human, robot, and environmental factors on perceived trust in human-robot interaction (HRI). Background: To date, reviews of trust in HRI have been qualitative or descriptive. Our quantitative review provides a fundamental empirical foundation to advance both theory and practice. Method: Meta-analytic methods were applied to the available literature on trust and HRI. A total of 29 empirical studies were collected, of which 10 met the selection criteria for correlational analysis and 11 for experimental analysis. These studies provided 69 correlational and 47 experimental effect sizes. Results: The overall correlational effect size for trust was r̄ = +0.26, with an experimental effect size of d̄ = +0.71. The effects of human, robot, and environmental characteristics were examined with an especial evaluation of the robot dimensions of performance and attribute-based factors. The robot performance and attributes were the largest contributors to the development of trust in HRI. Environmental factors played only a moderate role. Conclusion: Factors related to the robot itself, specifically, its performance, had the greatest current association with trust, and environmental factors were moderately associated. There was little evidence for effects of human-related factors. Application: The findings provide quantitative estimates of human, robot, and environmental factors influencing HRI trust. Specifically, the current summary provides effect size estimates that are useful in establishing design and training guidelines with reference to robot-related factors of HRI trust. Furthermore, results indicate that improper trust calibration may be mitigated by the manipulation of robot design. However, many future research needs are identified.
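
For readers unfamiliar with how pooled values such as the reported r̄ are obtained, a mean correlational effect size is commonly computed by Fisher z-transforming each study's r, taking a sample-size-weighted average, and back-transforming. The sketch below is a generic illustration of that standard approach, not the authors' analysis code, and the study values in it are made up.

```python
import math

def mean_correlation(studies):
    """Pool per-study correlations with Fisher's z transform.

    studies: list of (r, n) tuples; each study is weighted by n - 3,
    the inverse variance of z under the usual approximation.
    """
    num = 0.0
    den = 0.0
    for r, n in studies:
        z = math.atanh(r)          # Fisher z transform of the correlation
        w = n - 3                  # inverse-variance weight
        num += w * z
        den += w
    z_bar = num / den
    return math.tanh(z_bar)        # back-transform the mean z to r

# Hypothetical study values, for illustration only
print(mean_correlation([(0.35, 40), (0.20, 60), (0.30, 25)]))
```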


Military Psychology | 2009

Adaptive automation for human supervision of multiple uninhabited vehicles: Effects on change detection, situation awareness, and mental workload.

Raja Parasuraman; Keryl Cosenzo; Ewart de Visser

Human operators supervising multiple uninhabited air and ground vehicles (UAVs and UGVs) under high task load must be supported appropriately in context by automation. Two experiments examined the efficacy of such adaptive automation in a simulated high workload reconnaissance mission involving four subtasks: (a) UAV target identification; (b) UGV route planning; (c) communications, with embedded verbal situation awareness probes; and (d) change detection. The results of the first “baseline” experiment established the sensitivity of a change detection procedure to transient and nontransient events in a complex, multi-window, dynamic display. Experiment 1 also set appropriate levels of low and high task load for use in Experiment 2, in which three automation conditions were compared: manual; static automation, in which an automated target recognition (ATR) system was provided for the UAV task; and adaptive automation, in which individual operator change detection performance was assessed in real time and used to invoke the ATR if and only if change detection accuracy was below a threshold. Change detection accuracy and situation awareness were higher and workload was lower for both automation conditions compared to manual performance. In addition, these beneficial effects on change detection and workload were significantly greater for adaptive compared to static automation. The results point to the efficacy of adaptive automation for supporting the human operator tasked with supervision of multiple uninhabited vehicles under high workload conditions.
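
The invocation rule described above (engage the ATR only when the operator's real-time change detection accuracy falls below a threshold) is essentially a rolling-accuracy check. The sketch below is a schematic reconstruction of that rule under assumed threshold and window values, not the simulation's actual code.

```python
from collections import deque

class AdaptiveATRTrigger:
    """Enable automated target recognition (ATR) only when the operator's
    recent change detection accuracy falls below a threshold."""

    def __init__(self, threshold=0.6, window=10):
        self.threshold = threshold          # accuracy cutoff (assumed value)
        self.recent = deque(maxlen=window)  # rolling record of hits and misses

    def record_trial(self, detected: bool) -> None:
        self.recent.append(1 if detected else 0)

    def atr_enabled(self) -> bool:
        if not self.recent:
            return False                    # no evidence yet: stay manual
        accuracy = sum(self.recent) / len(self.recent)
        return accuracy < self.threshold    # adaptive invocation rule
```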


Journal of Cognitive Engineering and Decision Making | 2011

Adaptive Aiding of Human-Robot Teaming: Effects of Imperfect Automation on Performance, Trust, and Workload

Ewart de Visser; Raja Parasuraman

In many emerging civilian and military operations, human operators are increasingly being tasked to supervise multiple robotic uninhabited vehicles (UVs) with the support of automation. As 100% automation reliability cannot be assured, it is important to understand the effects of automation imperfection on performance. In addition, adaptive aiding may help counter any adverse effects of static (fixed) automation. Using a high-fidelity multi-UV simulation involving both air and ground vehicles, two experiments examined the effects of automation reliability and adaptive automation on human-system performance with different levels of task load. In Experiment 1, participants performed a reconnaissance mission while assisted with an automatic target recognition (ATR) system whose reliability was low, medium, or high. Overall human-robot team performance was higher than with either human or ATR performance alone. In Experiment 2, participants performed a similar reconnaissance mission with no ATR, static automation, or with adaptive automation keyed to task load. Participant trust and self-confidence were higher and workload was lower for adaptive automation compared with the other conditions. The results show that human-robot teams can benefit from imperfect static automation even in high task load conditions and that adaptive automation can provide additional benefits in trust and workload.
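
In contrast to the performance-triggered rule sketched earlier, adaptive automation keyed to task load amounts to a load-based switch. The one-function sketch below illustrates that idea with hypothetical inputs and thresholds; it is not the study's implementation.

```python
def aiding_enabled(active_tasks: int, events_per_minute: float,
                   task_threshold: int = 3, event_threshold: float = 10.0) -> bool:
    """Task-load-keyed adaptive aiding: enable automation support only when
    current load exceeds assumed thresholds (all values are illustrative)."""
    return active_tasks >= task_threshold or events_per_minute >= event_threshold

# Example: low load stays manual, high load turns the aid on
print(aiding_enabled(active_tasks=2, events_per_minute=4.0))
print(aiding_enabled(active_tasks=4, events_per_minute=12.0))
```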


Journal of Experimental Psychology: Applied | 2009

Detecting Threat-Related Intentional Actions of Others: Effects of Image Quality, Response Mode, and Target Cuing on Vigilance

Raja Parasuraman; Ewart de Visser; Ellen Clarke; W. Ryan McGarry; Elizabeth Hussey; Tyler H. Shaw; James C. Thompson

Three experiments examined the vigilance performance of participants watching videos depicting intentional actions of an individual's hand reaching for and grasping an object (transporting or using either a gun or a hairdryer) in order to detect infrequent threat-related actions. Participants indicated detection of target actions either manually or by withholding response. They also rated their subjective mental workload before and after each vigilance task. Irrespective of response mode, the detection rate of intentional threats declined over time on task and subjective workload increased, but only under visually degraded viewing conditions. This vigilance decrement was attenuated by temporal cues that were 75% valid in predicting a subsequent target action and eliminated with 100% valid cues. The findings indicate that detection of biological motion targets, and threat-related intentional actions in particular, although not attention sensitive under normal viewing conditions, is subject to vigilance decrement under degraded viewing conditions. The results are compatible with the view that the decrement in detecting threat-related intentional actions reflects increasing failure of attention allocation processes over time.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2012

The world is not enough: Trust in cognitive agents

Ewart de Visser; Frank Krueger; Patrick E. McKnight; Steven Scheid; Melissa A. Smith; Stephanie Chalk; Raja Parasuraman

Researchers have assumed a dichotomy between human-human trust (HHT) and human-automation trust (HAT). With the advent of cognitive agents, entities that are neither machine nor human, it is important to revisit this theory. Some claim that HHT and HAT are the same concept and propose that people respond socially to more human automation. Others say that HHT and HAT are fundamentally different and propose models that indicate differences in initial perception, automation monitoring performance, and judgments that lead to differences in trust. In this study, we varied humanness on a cognitive spectrum and investigated trust and performance with these different types of cognitive agents. Results showed that increasing the humanness of the automation increased trust calibration and appropriate compliance with an automated aid leading to better overall performance and trust, especially during unreliable conditions. Automated aids that exhibit human characteristics may be more resilient to human disuse in the face of sub-optimal machine performance.


Human Factors | 2014

Team Performance in Networked Supervisory Control of Unmanned Air Vehicles: Effects of Automation, Working Memory, and Communication Content

Ryan McKendrick; Tyler H. Shaw; Ewart de Visser; Haneen Saqer; Brian Kidwell; Raja Parasuraman

Objective: Assess team performance within a networked supervisory control setting while manipulating automated decision aids and monitoring team communication and working memory ability. Background: Networked systems such as multi–unmanned air vehicle (UAV) supervision have complex properties that make prediction of human-system performance difficult. Automated decision aids can provide valuable information to operators, individual abilities can limit or facilitate team performance, and team communication patterns can alter how effectively individuals work together. We hypothesized that reliable automation, higher working memory capacity, and increased communication rates of task-relevant information would offset performance decrements attributed to high task load. Method: Two-person teams performed a simulated air defense task with two levels of task load and three levels of automated aid reliability. Teams communicated and received decision aid messages via chat window text messages. Results: Task Load × Automation effects were significant across all performance measures. Reliable automation limited the decline in team performance with increasing task load. Average team spatial working memory was a stronger predictor than other measures of team working memory. Frequency of team rapport and enemy location communications was positively related to team performance, and word count was negatively related to team performance. Conclusion: Reliable decision aiding mitigated team performance decline during increased task load in multi-UAV supervisory control. Team spatial working memory, communication of spatial information, and team rapport predicted team success. Application: An automated decision aid can improve team performance under high task load. Assessment of spatial working memory and the communication of task-relevant information can help in operator and team selection in supervisory control systems.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2010

Modeling Human-Automation Team Performance in Networked Systems: Individual Differences in Working Memory Count

Ewart de Visser; Tyler H. Shaw; Amira Mohamed-Ameen; Raja Parasuraman

As human-machine systems grow in size and complexity, there is a need to understand and model how human attentional limitations affect system performance, especially in large networks. As a first step, human-in-the-loop experiments can provide the requisite data. Secondly, such data can be modeled to provide insights by predicting performance with a large number of vehicles. Accordingly, we first carried out an experiment examining human-UAV system performance under low and high levels of task load. We also examined the effects of a networked environment on performance by manipulating the number and relevance of network message traffic from automated agents. Results showed that in conditions of high task load, performance degraded. Moreover, performance increased with the help of relevant messages and decreased with irrelevant noise messages. Furthermore, a simple correlation showed a fairly strong connection between working memory scores and our collected performance data. Using regression to model these data revealed that a simple linear equation does not provide very accurate modeling of different aspects of decision-making performance. However, inclusion of the OSPAN working memory capacity measure improves prediction capability considerably. Together, the results of this study show that human-automation team performance metrics can be modeled and used to predict performance under varying levels of traffic, probability of assistance, and working memory capacity in a complex networked environment.
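
A minimal sketch of the modeling idea: fit a linear model of a performance metric with and without a working memory covariate (an OSPAN score) and compare explained variance. The synthetic data, variable names, and coefficients below are illustrative assumptions, not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60

# Synthetic predictors (illustrative only): task load, message relevance, OSPAN score
task_load = rng.integers(0, 2, n)        # 0 = low load, 1 = high load
relevance = rng.uniform(0, 1, n)         # proportion of relevant messages
ospan = rng.normal(50, 10, n)            # working memory capacity score

# Synthetic performance with an OSPAN contribution baked in (made-up coefficients)
performance = 70 - 8 * task_load + 5 * relevance + 0.4 * ospan + rng.normal(0, 3, n)

def r_squared(X, y):
    """Ordinary least squares fit with intercept; returns R^2."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

base = r_squared(np.column_stack([task_load, relevance]), performance)
with_ospan = r_squared(np.column_stack([task_load, relevance, ospan]), performance)
print(f"R^2 without OSPAN: {base:.2f}, with OSPAN: {with_ospan:.2f}")
```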


Journal of Experimental Psychology: Applied | 2016

Almost human: Anthropomorphism increases trust resilience in cognitive agents.

Ewart de Visser; Samuel S. Monfort; Ryan McKendrick; Melissa A. Smith; Patrick E. McKnight; Frank Krueger; Raja Parasuraman

We interact daily with computers that appear and behave like humans. Some researchers propose that people apply the same social norms to computers as they do to humans, suggesting that social psychological knowledge can be applied to our interactions with computers. In contrast, theories of human–automation interaction postulate that humans respond to machines in unique and specific ways. We believe that anthropomorphism—the degree to which an agent exhibits human characteristics—is the critical variable that may resolve this apparent contradiction across the formation, violation, and repair stages of trust. Three experiments were designed to examine these opposing viewpoints by varying the appearance and behavior of automated agents. Participants received advice that deteriorated gradually in reliability from a computer, avatar, or human agent. Our results showed (a) that anthropomorphic agents were associated with greater trust resilience, a higher resistance to breakdowns in trust; (b) that these effects were magnified by greater uncertainty; and (c) that incorporating human-like trust repair behavior largely erased differences between the agents. Automation anthropomorphism is therefore a critical variable that should be carefully incorporated into any general theory of human–agent trust as well as novel automation design.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2008

Designing an Adaptive Automation System for Human Supervision of Unmanned Vehicles: A Bridge from Theory to Practice

Ewart de Visser; Melanie LeGoullon; Amos Freedy; Elan Freedy; Gershon Weltman; Raja Parasuraman

Careful consideration must be given to the implementation of automation into complex systems. Much research in adaptive automation has identified challenges for system implementation. A key focus of this research has surrounded the methods of automation invocation including critical events, measurement, and modeling techniques. However, little consideration has been given to selecting and implementing appropriate techniques for a given system as a guide to designers of adaptive automation. This paper proposes such a methodology. We demonstrate the use of this methodology by describing a case study about a system designed to support effective communication and collaboration between the commander and vehicle operator in an unmanned aerial vehicle (UAV) system.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2010

Evaluating the Benefits and Potential Costs of Automation Delegation for Supervisory Control of Multiple UAVs

Tyler H. Shaw; Adam Emfield; Andre Garcia; Ewart de Visser; Chris Miller; Raja Parasuraman; Lisa Fern

Previous studies have begun exploring the possibility that “adaptable” automation, in which tasks are delegated to intelligent automation by the user, can preserve the benefits of automation while minimizing its costs. One approach to adaptable automation is the Playbook® interface, which has been used in previous research and has shown performance enhancements as compared to other automation approaches. However, additional investigations are warranted to evaluate both benefits and potential costs of adaptable automation. The present study incorporated a delegation interface into a new display and simulation system, the multiple unmanned aerial vehicle simulator (MUSIM), to allow for flexible control over three unmanned aerial vehicles (UAVs) at three levels of delegation abstraction. Task load was manipulated by increasing the frequency of primary and secondary task events. Additionally, participants experienced an unanticipated event that was not a good fit for the higher levels of delegation abstraction. Treatment of this poor “automation fit” event, termed a “Non-Optimal Play Environment” event (NOPE event), required the use of manual control. Results showed advantages when access to the highest levels of delegation abstraction was provided, as long as operators also had the flexibility to revert to manual control. Performance was better across the two task load conditions, and reaction time to respond to the NOPE event was fastest in this condition. The results extend previous findings showing benefits of flexible delegation of tasks to automation using the Playbook interface and suggest that Playbook remains robust even in the face of poor “automation-fit” events.
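
As a loose illustration of delegation-style (adaptable) automation, the sketch below models an operator choosing among delegation levels and reverting to manual control when the situation falls outside the available plays. The level names and the "play fits" check are hypothetical and are not the Playbook API.

```python
from enum import Enum

class DelegationLevel(Enum):
    MANUAL = 0          # operator handles the vehicles directly
    TASK_LEVEL = 1      # delegate individual tasks ("scan this area")
    PLAY_LEVEL = 2      # delegate a whole play ("area reconnaissance")

def choose_control(level: DelegationLevel, play_fits_situation: bool) -> DelegationLevel:
    """Keep the delegated level while the current play fits the situation;
    otherwise fall back to manual control (a NOPE-style event)."""
    if level is DelegationLevel.MANUAL or play_fits_situation:
        return level
    return DelegationLevel.MANUAL

# Example: a poor automation-fit event forces reversion to manual control
print(choose_control(DelegationLevel.PLAY_LEVEL, play_fits_situation=False))
```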

Collaboration


Dive into Ewart de Visser's collaborations.

Top Co-Authors

Raja Parasuraman
National Institute on Drug Abuse

Amos Freedy
University of California

Haneen Saqer
George Mason University

Adam Emfield
George Mason University