Publication


Featured research published by Garrett G. Sadler.


Archive | 2017

Shaping Trust Through Transparent Design: Theoretical and Experimental Guidelines

Joseph B. Lyons; Garrett G. Sadler; Kolina Koltai; Henri Battiste; Nhut Ho; Lauren C. Hoffmann; David E. Smith; Walter W. Johnson; Robert J. Shively

The current research discusses transparency as a means to enable trust of automated systems. Commercial pilots (N = 13) interacted with an automated aid for emergency landings. The automated aid provided decision support during a complex task where pilots were instructed to land several aircraft simultaneously. Three transparency conditions were used to examine the impact of transparency on pilots’ trust of the tool. The conditions were: baseline (i.e., the existing tool interface), value (where the tool provided a numeric value for the likely success of a particular airport for that aircraft), and logic (where the tool provided the rationale for the recommendation). Trust was highest in the logic condition, which is consistent with prior studies in this area. Implications for design are discussed in terms of promoting understanding of the rationale for automated recommendations.


Military Psychology | 2016

Trust of an automatic ground collision avoidance technology: A fighter pilot perspective.

Joseph B. Lyons; Nhut Ho; William E. Fergueson; Garrett G. Sadler; Samantha D. Cals; Casey Richardson; Mark Wilkins

The present study examined the antecedents of trust among operational Air Force fighter pilots for an automatic ground collision avoidance technology. This technology offered a platform with high face validity for studying trust in automation because it is an automatic system currently being used in operations by the Air Force. Pilots (N = 142) responded to an online survey which asked about their attitudes toward the technology and assessed a number of psychological factors. Consistent with prior research on trust in automation, a number of trust antecedents were identified which corresponded to human factors, learned trust factors, and situational factors. Implications for the introduction of novel automatic systems into the military are discussed.


International Conference on Applied Human Factors and Ergonomics | 2017

Beyond Point Design: General Pattern to Specific Implementations

Joel Lachter; Summer L. Brandt; Garrett G. Sadler; R. Jay Shively

Elsewhere we have discussed a number of problems typical of highly automated systems and proposed tenets for addressing these problems based on Human-Autonomy Teaming (HAT) [1]. We have examined these principles in the context of aviation [2, 3]. Here we discuss the generality of these tenets by examining how they might be applied to photography and automotive navigation. While these domains are very different, we find application of our HAT tenets provides a number of opportunities for improving interaction between human operators and automation. We then illustrate how the generalities found across aviation, photography and navigation can be captured in a design pattern.


Journal of Cognitive Engineering and Decision Making | 2017

A Longitudinal Field Study of Auto-GCAS Acceptance and Trust: First-Year Results and Implications

Nhut Ho; Garrett G. Sadler; Lauren C. Hoffmann; Kevin Zemlicka; Joseph B. Lyons; William E. Fergueson; Casey Richardson; Artemio Cacanindin; Samantha D. Cals; Mark Wilkins

In this paper we describe results from the first year of a field study examining U.S. Air Force (USAF) F-16 pilots’ trust of the Automatic Ground Collision Avoidance System (Auto-GCAS). Using semistructured interviews focusing on opinion development and evolution, system transparency and understanding, the pilot–vehicle interface, stories and reputation, usability, and the impact on behavior, we identified factors that positively and negatively influence trust, using data analysis methods based in grounded theory. Overall, Auto-GCAS is an effective life- and aircraft-saving technology that is generally well received and trusted appropriately, with trust evolving based on factors such as maintaining a healthy skepticism of the system, attributing system faults to hardware problems, and informing trust with reliable performance (e.g., lives saved). Unanticipated findings included pilots reporting that their reputation was not negatively affected by system activations and that an interface anticipation cue had the potential to change operational flight behavior. We discuss emergent research avenues in the areas of transparency and culture, and the value of conducting trust research with operators of real-world systems that have high levels of autonomy.


International Conference on Applied Human Factors and Ergonomics | 2017

Exploring Trust Barriers to Future Autonomy: A Qualitative Look

Joseph B. Lyons; Nhut Ho; Anna Lee Van Abel; Lauren C. Hoffmann; W. Eric Fergueson; Garrett G. Sadler; Michelle A. Grigsby; Amy Burns

Autonomous systems dominate future Department of Defense (DoD) strategic perspectives, yet little is known regarding the trust barriers of these future systems as few exemplars exist from which to appropriately baseline reactions. Most extant DoD systems represent “automated” versus “autonomous” systems, which adds complexity to our understanding of user acceptance of autonomy. The trust literature posits several key trust antecedents to automated systems, with few field applications of these factors in the context of DoD systems. The current paper will: (1) review the trust literature as relevant to acceptance of future autonomy, (2) present the results of a qualitative analysis of trust barriers for two future DoD technologies (Automatic Air Collision Avoidance System [AACAS] and Autonomous Wingman [AW]), and (3) discuss knowledge gaps for implementing future autonomous systems within the DoD. The study team interviewed over 160 fighter pilots from 4th Generation (e.g., F-16) and 5th Generation (e.g., F-22) fighter platforms to gauge their trust barriers to AACAS and AW. Results show that the trust barriers discussed by the pilots corresponded fairly well to the existing trust challenges identified in the literature, though some nuances were revealed that may be unique to DoD technologies/operations. Some of the key trust barriers included: concern about interference during operational requirements; the need for transparency of intent, function, status, and capabilities/limitations; concern regarding the flexibility and adaptability of the technology; cyber security/hacking potential; concern regarding the added workload associated with the technology; concern for the lack of human oversight/decision making capacity; and doubts regarding the systems’ operational effectiveness. Additionally, the pilots noted several positive aspects of the proposed technologies including: added protection during last ditch evasive maneuvers; positive views of existing fielded technologies such as the Automatic Ground Collision Avoidance System; the potential for added operational capabilities; the potential to transfer risk to the robotic asset and reduce risk to pilots; and the potential for AI to participate in the entire mission process (planning-execution-debriefing). This paper will discuss the results for each technology and offer suggestions for implementing future autonomy into the DoD.


IEEE/AIAA Digital Avionics Systems Conference | 2017

Application of human-autonomy teaming to an advanced ground station for reduced crew operations

Nhut Ho; Walter W. Johnson; Karanvir Panesar; Kenny Wakeland; Garrett G. Sadler; Nathan Wilson; Bao Nguyen; Joel Lachter; Summer L. Brandt

Within human factors there is burgeoning interest in the “human-autonomy teaming” (HAT) concept as a way to address the challenges of interacting with complex, increasingly autonomous systems. The HAT concept comes out of an aspiration to interact with increasingly autonomous systems as a team member, rather than simply use automation as a tool. The authors, and others, have proposed core tenets for HAT that include bi-directional communication, automation and system transparency, and advanced coordination between human and automated teammates via predefined, dynamic task sequences known as “plays.” It is believed that, with proper implementation, HAT should foster appropriate teamwork, thus increasing trust and reliance on the system, which in turn will reduce workload, increase situation awareness, and improve performance. To this end, HAT has been demonstrated and/or studied in multiple applications including search and rescue operations, healthcare and medicine, autonomous vehicles, photography, and aviation. The current paper presents one such effort to apply HAT. It details the design of a HAT agent, developed by Human Automation Teaming Solutions, Inc., to facilitate teamwork between the automation and the human operator of an advanced ground dispatch station. This dispatch station was developed to support a NASA project investigating a concept called Reduced Crew Operations (RCO); consequently, we have named the agent R-HATS. Part of the RCO concept involves a ground operator providing enhanced support to a large number of aircraft with a single pilot on the flight deck. When assisted by R-HATS, operators can monitor and support or manage a large number of aircraft and use plays to respond in real time to complicated, workload-intensive events (e.g., an airport closure). A play is a plan that encapsulates goals, tasks, and a task allocation strategy appropriate for a particular situation. In the current implementation, when a play is initiated by a user, R-HATS determines what tasks need to be completed and has the ability to autonomously execute them (e.g., determining diversion options and uplinking new routes to aircraft) when it is safe and appropriate. R-HATS has been designed to support both end users and researchers in RCO and HAT. Additionally, R-HATS and its underlying architecture were developed with generalizability in mind as modular software applicable outside of the RCO/aviation domains. This paper will also discuss further development and testing of R-HATS.
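
The “play” concept described in this abstract lends itself to a compact illustration. The sketch below is a hypothetical rendering, in Python, of a play as a bundle of a goal, tasks, and a simple task-allocation rule; the names (Play, Task, execute, assignee) and the structure are assumptions made for illustration and are not drawn from the actual R-HATS implementation.

```python
# Hypothetical illustration of a "play": a plan bundling a goal, tasks, and a
# task-allocation strategy. Names and structure are assumptions for this sketch
# and are not taken from the actual R-HATS software.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Task:
    name: str
    assignee: str = "automation"  # who performs the task: "automation" or "operator"
    action: Callable[[], None] = lambda: None  # placeholder for the real behavior

    def run(self) -> None:
        print(f"[{self.assignee}] executing: {self.name}")
        self.action()


@dataclass
class Play:
    goal: str
    tasks: List[Task] = field(default_factory=list)

    def execute(self) -> None:
        # When the operator initiates the play, automation-assigned tasks run
        # autonomously; operator-assigned tasks are surfaced for human action.
        print(f"Play initiated: {self.goal}")
        for task in self.tasks:
            if task.assignee == "automation":
                task.run()
            else:
                print(f"[operator] action required: {task.name}")


# Example: a play for the airport-closure scenario mentioned in the abstract.
divert_play = Play(
    goal="Divert affected aircraft after an airport closure",
    tasks=[
        Task("Identify affected aircraft"),
        Task("Determine diversion options"),
        Task("Approve diversion routes", assignee="operator"),
        Task("Uplink new routes to aircraft"),
    ],
)

if __name__ == "__main__":
    divert_play.execute()
```

In the paper’s terms, the task-allocation strategy here is reduced to the assignee field; a fuller treatment would also capture the condition that autonomous execution occurs only when it is “safe and appropriate.”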


Ergonomics in Design | 2017

Comparing Trust in Auto-GCAS Between Experienced and Novice Air Force Pilots

Joseph B. Lyons; Nhut Ho; Anna Lee Van Abel; Lauren C. Hoffmann; Garrett G. Sadler; William E. Fergueson; Michelle A. Grigsby; Mark Wilkins

We examined F-16 pilots’ trust of the Automatic Ground Collision Avoidance System (Auto-GCAS), an automated system fielded on the F-16 to reduce the occurrence of controlled flight into terrain. We looked at the impact of experience (i.e., number of flight hours) as a predictor of trust perceptions and complacency potential among pilots. We expected that novice pilots would report higher trust and greater potential for complacency in relation to Auto-GCAS; this expectation was partly supported. Novice pilots reported trust perceptions equivalent to those of experienced pilots but greater complacency potential.


International Conference on Engineering Psychology and Cognitive Ergonomics | 2016

Application of Human-Autonomy Teaming (HAT) Patterns to Reduced Crew Operations (RCO)

R. Jay Shively; Summer L. Brandt; Joel Lachter; Michael Matessa; Garrett G. Sadler; Henri Battiste

Unmanned aerial systems, advanced cockpits, and air traffic management are all seeing dramatic increases in automation. However, while automation may take on some tasks previously performed by humans, humans will still be required to remain in the system for the foreseeable future. The collaboration between humans and these increasingly autonomous systems will begin to resemble cooperation between teammates, rather than simple task allocation. It is critical to understand this human-autonomy teaming (HAT) to optimize these systems in the future. One methodology to understand HAT is by identifying recurring patterns of HAT that have similar characteristics and solutions. This paper applies a methodology for identifying HAT patterns to an advanced cockpit project.


International Conference on Augmented Cognition | 2018

Trust in Sensing Technologies and Human Wingmen: Analogies for Human-Machine Teams.

Joseph B. Lyons; Nhut Ho; Lauren C. Hoffmann; Garrett G. Sadler; Anna Lee Van Abel; Mark Wilkins

The true value of a human-machine team (HMT) consisting of a capable human and an automated or autonomous system will depend, in part, on the richness and dynamic nature of the interactions and the degree of shared awareness between the human and the technology. Contemporary views of HMTs emphasize the notion of bidirectional transparency, one type of which is Robot-of-Human (RoH) transparency. Technologies that are capable of RoH transparency may have awareness of human physiological and cognitive states, and adapt their behavior based on these states, thus providing augmentation to operators. Yet despite the burgeoning presence of health monitoring devices, little is known about how humans feel about an automated system using sensing capabilities to augment them in a work environment. The current study provides some preliminary data on user acceptance of sensing capabilities on automated systems. The present research examines the Perfect Automation Schema, an emerging trust-in-automation construct, as a predictor of trust in these sensing capabilities. Additionally, the current study examines trust of a human wingman as an analogy for looking at trust within the context of an HMT. The findings suggest that the Perfect Automation Schema is related to some facets of sensing technology acceptance. Further, trust of a human wingman is contingent on familiarity and experience.


Military Psychology | 2017

Trust of a Military Automated System in an Operational Context

Nhut Ho; Garrett G. Sadler; Lauren C. Hoffmann; Joseph B. Lyons; Walter W. Johnson

Within this descriptive article, we examine the drivers of human trust of automation using a fielded military technology as the focus area. In contrast to the laboratory, real-life interactions between humans and automation often take place in settings characterized by high complexity that potentially obscure the antecedents of trust. We approach this complexity through a case study, which captures the richness and variety of the operational context in which humans interact with automation. In particular, we utilize and substantiate a theoretical and conceptual trust model synthesized by Lee and See (2004) and examine how well it captures the dynamic nature of trust by using a sample of U.S. Air Force F-16 pilots, engineers, and managers of the Automatic Ground Collision Avoidance System (Auto-GCAS). Our results show the Lee and See model succeeds in capturing most trust factors in the case of these Auto-GCAS stakeholders, and we present areas for enhancement of the model. We conclude by elaborating on lessons learned and hypotheses generated regarding factors affecting trust in Auto-GCAS, providing recommendations for future trust research in field work, and discussing the value of working with an operational community while examining trust evolution over several years.

Collaboration


Dive into Garrett G. Sadler's collaborations.

Top Co-Authors

Nhut Ho, California State University
Joseph B. Lyons, Air Force Research Laboratory
Lauren C. Hoffmann, California State University
Mark Wilkins, Office of the Secretary of Defense
Anna Lee Van Abel, Air Force Research Laboratory
Henri Battiste, California State University
William E. Fergueson, Wright-Patterson Air Force Base