
Publications


Featured research published by Kathleen L. Mosier.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1999

Does automation bias decision-making?

Linda J. Skitka; Kathleen L. Mosier; Mark D. Burdick

Computerized system monitors and decision aids are increasingly common additions to critical decision-making contexts such as intensive care units, nuclear power plants, and aircraft cockpits. These aids are introduced with the ubiquitous goal of “reducing human error”. The present study compared error rates in a simulated flight task with and without a computer that monitored system states and made decision recommendations. Participants in non-automated settings out-performed their counterparts with a very, but not perfectly, reliable automated aid on a monitoring task. Participants with an aid made errors of omission (missed events when not explicitly prompted about them by the aid) and commission (did what an automated aid recommended, even when it contradicted their training and other 100% valid and available indicators). Possible causes and consequences of automation bias are discussed.
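The omission/commission taxonomy this abstract defines is concrete enough to sketch in code. The following is a minimal illustration, not material from the paper: the trial fields, labels, and example values are hypothetical.

```python
# Hypothetical sketch of the omission/commission error taxonomy described
# in the abstract above; field names and the example are invented.
from dataclasses import dataclass

@dataclass
class Trial:
    event_occurred: bool      # a system irregularity was actually present
    aid_flagged: bool         # the automated aid announced the event
    aid_recommendation: str   # action the aid recommended ("none" if silent)
    operator_action: str      # what the participant actually did
    correct_action: str       # action supported by training and raw indicators

def classify_error(t: Trial) -> str:
    """Label a trial per the abstract's definitions of omission/commission."""
    if t.event_occurred and not t.aid_flagged and t.operator_action == "none":
        return "omission"     # missed an event the aid never prompted about
    if (t.operator_action == t.aid_recommendation
            and t.operator_action != t.correct_action):
        return "commission"   # followed the aid against valid contrary cues
    return "correct" if t.operator_action == t.correct_action else "other"

# Example: the aid wrongly recommends an engine shutdown and the crew complies,
# even though other (100% valid) indicators support continuing.
print(classify_error(Trial(True, True, "shutdown_engine_1",
                           "shutdown_engine_1", "continue")))  # -> commission
```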


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2000

Accountability and automation bias

Linda J. Skitka; Kathleen L. Mosier; Mark D. Burdick

Although generally introduced to guard against human error, automated devices can fundamentally change how people approach their work, which in turn can lead to new and different kinds of error. The present study explored the extent to which errors of omission (failures to respond to system irregularities or events because automated devices fail to detect or indicate them) and commission (when people follow an automated directive despite contradictory information from other more reliable sources of information because they either fail to check or discount that information) can be reduced under conditions of social accountability. Results indicated that making participants accountable for either their overall performance or their decision accuracy led to lower rates of “automation bias”. Errors of omission proved to be the result of cognitive vigilance decrements, whereas errors of commission proved to be the result of a combination of a failure to take into account information and a belief in the superior judgement of automated aids.


The International Journal of Aviation Psychology | 2001

Aircrews and Automation Bias: The Advantages of Teamwork?

Kathleen L. Mosier; Linda J. Skitka; Melisa Dunbar; Lori McDonnell

A series of recent studies on automation bias, the use of automation as a heuristic replacement for vigilant information seeking and processing, has investigated omission and commission errors in highly automated decision environments. Most of the research on this phenomenon has been conducted in a single-person performance configuration. This study followed up on that research to investigate whether the error rates found with single pilots and with teams of students would hold in the context of an aircraft cockpit, with a professional aircrew. It also investigated the efficacy of possible interventions involving explicit automation bias training and display prompts to verify automated information. Results demonstrated the persistence of automation bias in crews compared with solo performers. No effects were found for either training or display prompts. Pilot performance during the experimental legs was most strongly predicted by performance on the control leg and by event importance. The previously found phantom memory phenomenon associated with a false engine fire event persisted in crews.


Human Factors | 2007

What You Don't Know Can Hurt You: Factors Impacting Diagnosis in the Automated Cockpit

Kathleen L. Mosier; Nikita Sethi; Shane McCauley; Len Khoo; Judith Orasanu

Objective: We examined the impact of operational variables on diagnosis and decision-making processes, focusing on information search.

Background: Arguably, the “best” decision-making processes in high-technology cockpits would be those that are both correspondent (objectively accurate) and coherent (rationally sound). In the electronic world, coherence in terms of identification and incorporation of all relevant information is both a prerequisite to and a limiting factor for accurate diagnosis and decision making.

Method: Regional carrier pilots (N = 93) responded to six scenarios by accessing information to determine a diagnosis and decision.

Results: Time pressure, a common operational variable, had a strong negative effect on information search and diagnosis accuracy, and the presence of noncongruent information heightened these negative effects. Unexpectedly, source of initial information (automated or other) did not impact any of the dependent variables. Diagnosis confidence was unrelated to accuracy and was negatively related to amount of information accessed.

Conclusion: Results confirm both the need for coherence in diagnostic processes and the difficulty of maintaining it under time pressure.

Application: One implication of the results of this study is that pilots in high-technology cockpits must be trained to utilize coherent diagnostic processes as standard operating procedure. Additionally, because thorough information search for diagnosis in an automated environment is essential, automated systems must be designed to foster coherent, and thus accurate, diagnostic processes.
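The correlational pattern in the Results (confidence unrelated to accuracy, negatively related to amount of information accessed) is a matter of simple Pearson correlations. The sketch below reproduces that pattern on synthetic data; the variables and values are invented for illustration and are not the study's dataset.

```python
# Illustrative only: Pearson correlations on synthetic data shaped to mimic
# the reported pattern. None of these numbers come from the study.
import numpy as np

rng = np.random.default_rng(0)
n = 93                                    # matches the reported sample size
info_accessed = rng.integers(2, 12, n)    # pieces of information viewed
accuracy = rng.integers(0, 2, n)          # 1 = correct diagnosis (independent)
# Confidence constructed to fall as information search rises, per the pattern.
confidence = 10.0 - 0.5 * info_accessed + rng.normal(0.0, 1.5, n)

def pearson_r(x, y):
    return np.corrcoef(x, y)[0, 1]

print(f"confidence vs. accuracy:      r = {pearson_r(confidence, accuracy):+.2f}")
print(f"confidence vs. info accessed: r = {pearson_r(confidence, info_accessed):+.2f}")
```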


Reviews of Human Factors and Ergonomics | 2010

Judgment and decision making by individuals and teams: issues, models, and applications

Kathleen L. Mosier; Ute Fischer

Consistent with technological advances, the role of the operator in many human factors domains has evolved from one characterized primarily by sensory and motor skills to one characterized primarily by cognitive skills and decision making. Decision making is a primary component in problem solving, human-automation interaction, response to alarms and warnings, and error mitigation. In this chapter we discuss decision making in terms of both front-end judgment processes (e.g., attending to and evaluating the significance of cues and information, formulating a diagnosis, or assessing the situation) and back-end decision processes (e.g., retrieving a course of action, weighing one's options, or mentally simulating a possible response). Two important metatheories—correspondence (empirical accuracy) and coherence (rationality and consistency)—provide ways to assess the goodness of each phase (e.g., Hammond, 1996, 2000; Mosier, 2009). We present several models of decision making, including Brunswik's lens model,...
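The chapter's reference to Brunswik's lens model can be made concrete with the standard lens model equation (Tucker's formulation). This is the common textbook form, added here for orientation rather than quoted from the chapter:

```latex
% Standard lens model equation: achievement (r_a, the judgment-criterion
% correlation) decomposed into knowledge (G), environmental predictability
% (R_e), cognitive control (R_s), and an unmodeled component (C).
r_a = G\,R_e\,R_s + C\sqrt{1 - R_e^{2}}\,\sqrt{1 - R_s^{2}}
```

In the abstract's terms, correspondence (empirical accuracy) is captured by r_a, the correlation between a judge's estimates and the environmental criterion.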


Journal of Cognitive Engineering and Decision Making | 2010

The Role of Affect in Naturalistic Decision Making

Kathleen L. Mosier; Ute Fischer

The field of naturalistic decision making (NDM) assumes a “cold” cognitive model in that nonemotional, valence-neutral cues and information are predicted to influence decision making in identifiable ways. Judgment and decision-making research over the past 10 to 15 years, however, has greatly enhanced knowledge of the ways in which affect that is present at the time of decision making influences how people make decisions—specifically, how they process information, how they respond to risk, and which outcomes they prefer. The purpose of this article is to review relevant aspects of the literature on affect and decision making and to present the argument that NDM researchers need to be cognizant of the potential impact of affect on decision processes to adequately describe and predict expert decision making.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 1996

Automation Bias, Accountability, and Verification Behaviors

Kathleen L. Mosier; Linda J. Skitka; Mark D. Burdick; Susan T. Heers

Automated procedural and decision aids may in some cases have the paradoxical effect of increasing errors rather than eliminating them. Results of recent research investigating the use of automated systems have indicated the presence of automation bias, a term describing errors made when decision makers rely on automated cues as a heuristic replacement for vigilant information seeking and processing (Mosier & Skitka, in press). Automation commission errors, i.e., errors made when decision makers take inappropriate action because they over-attend to automated information or directives, and automation omission errors, i.e., errors made when decision makers do not take appropriate action because they are not informed of an imminent problem or situation by automated aids, can result from this tendency. A wide body of social psychological research has found that many cognitive biases and resultant errors can be ameliorated by imposing pre-decisional accountability, which sensitizes decision makers to the need to construct compelling justifications for their choices and how they make them. To what extent these effects generalize to performance situations has yet to be empirically established. The two studies presented represent concurrent efforts, with student and “glass cockpit” pilot samples, to determine the effects of accountability pressures on automation bias and on verification of the accurate functioning of automated aids. Students (Experiment 1) and commercial pilots (Experiment 2) performed simulated flight tasks using automated aids. In both studies, participants who perceived themselves “accountable” for their strategies of interaction with the automation were significantly more likely to verify its correct functioning, and committed significantly fewer automation-related errors than those who did not report this perception.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 1999

Automation Use and Automation Bias

Kathleen L. Mosier; Linda J. Skitka

The availability of automation and automated decision aids feeds into a general human tendency to travel the road of least cognitive effort. A series of studies on “automation bias,” the tendency to use automation as a heuristic replacement for vigilant information seeking and processing, has identified several factors associated with this bias. We have examined expert vs. novice (student) performance, and found that automation error rates are comparable across populations, and are not significantly different between 1- and 2-person crews. Professional pilots were sensitive to the importance of correctness for critical flight tasks, and made fewer errors on events involving altitude and heading errors than frequency discrepancies. Training for automation bias reduced commission errors for students, suggesting the importance of early intervention and training on this issue. In other studies, participants in a non-automated condition out-performed those using an automated aid during equivalent failure events; and participants tended to exhibit an “action bias” when any source of information recommended action, whether or not it was appropriate. The intent of this presentation is to provide an overview and summary of the findings of this research program, and to place it within the context of work that has been done in areas such as human-centered design, and training for automation use.


international conference on engineering psychology and cognitive ergonomics | 2015

Effectiveness of Advanced Collaboration Tools on Crew Communication in Reduced Crew Operations

Sarah V. Ligda; Ute Fischer; Kathleen L. Mosier; Michael Matessa; Vernol Battiste; Walter W. Johnson

The present research examines operational performance and verbal communication in airline flight crews under reduced crew operations (RCO). Eighteen two-pilot crews flew six scenarios under three conditions: one condition involved current-day operations, while two involved RCO. In RCO flights, the Captain initially operated the simulated aircraft alone but could request remote crewmember support as off-nominal events occurred and workload was expected to increase. In one of the two RCO conditions, crewmembers were provided with advanced prototype collaboration tools designed to alleviate difficulties in crew coordination. Crews successfully resolved all challenging events without accident, and analyses of operational performance did not reveal any differences among the three conditions. In RCO flights, crew communication increased when tools were available relative to flights in which they were not; specifically, there were more acknowledgements and decision-making communications. These results suggest that the collaboration tools enable higher degrees of crewmember awareness and/or coordination during distributed operations.


The International Journal of Aviation Psychology | 2013

Pilot–ATC Communication Conflicts: Implications for NextGen

Kathleen L. Mosier; Paula Rettenmaier; Matthew McDearmid; Jordan Wilson; Stanton Mak; Lakshmi Raj; Judith Orasanu

In the planned NextGen aviation operations, it will be critical to ensure shared situational understanding and cooperative problem solving between aircrews and air traffic controllers (ATC). A first step in predicting how future changes will impact flight crews and ATC is to examine the current system and to pinpoint problematic areas that could be ameliorated or exacerbated by advanced automation and heavier traffic density. In this study, we coded Aviation Safety Reporting System (ASRS) reports identified as having communication conflicts between pilots and ATC. Results describe types of conflict, operational context, phase of flight, operator states, and situations conducive to communication conflicts, risk perception differences, and inappropriate resolution strategies. Reports suggest that high workload approach and landing phases are conducive to communication conflicts, that different interpretations of the same information might lead to conflict, and that operator state could impact communication and collaboration between flight crews and ATC. A specific problem was noted when reporters felt that the affective response of the other party was not appropriate to the situation. Although this study reflects the limitations inherent in ASRS data, it can provide insights into potential problem areas and conflict triggers in NextGen operations. This research will enable us to better predict NextGen aircrew–ATC communication breakdowns and conflicts resulting from specific situations or operator states.

Collaboration


Dive into Kathleen L. Mosier's collaboration.

Top Co-Authors

Ute Fischer (Georgia Institute of Technology)
Linda J. Skitka (University of Illinois at Chicago)
Francis T. Durso (Georgia Institute of Technology)
Jordan Wilson (San Francisco State University)
Karen M. Feigh (Georgia Institute of Technology)
Matthew McDearmid (San Francisco State University)
Paula Rettenmaier (San Francisco State University)
Shane McCauley (San Francisco State University)