
Publications


Featured research published by Angelia Sebok.


Proceedings of the Human Factors and Ergonomics Society 54th Annual Meeting, HFES 2010 | 2010

Stages and Levels of Automation: An Integrated Meta-analysis

Christopher D. Wickens; Huiyang Li; Amy Santamaria; Angelia Sebok; Nadine B. Sarter

Function allocation between human and automation can be represented in terms of the stages and levels taxonomy proposed by Parasuraman, Sheridan, and Wickens (2000). Higher degrees of automation (DOA) are achieved both by later stages (e.g., automation decision aiding rather than diagnostic aiding) and by higher levels within stages (e.g., executing a choice unless vetoed, versus offering the human several choices). A meta-analysis based on data from 14 experiments examines the mediating effects of DOA on routine system performance, performance when the automation fails, workload, and situation awareness. The effects of DOA on these four measures are summarized by level of statistical significance. We found: (1) an inverse relationship between routine performance and workload as automation is introduced and DOA increases; (2) a weak positive relationship between routine performance and failure performance, as mediated by DOA; and (3) a strong mediating role of situation awareness in improving both routine and failure performance.
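The stages and levels ordering in the abstract can be made concrete with a minimal sketch. The four stage names follow Parasuraman, Sheridan, and Wickens (2000); the numeric level scale and the example designs below are hypothetical illustrations, not values from the paper.

```python
from dataclasses import dataclass

# The four information-processing stages from Parasuraman, Sheridan &
# Wickens (2000); later stages correspond to higher degrees of automation.
STAGES = [
    "information acquisition",
    "information analysis",
    "decision selection",
    "action implementation",
]

@dataclass(frozen=True)
class Automation:
    stage: int   # index into STAGES (0-3)
    level: int   # level within the stage, e.g. 1 (low) to 10 (high); illustrative scale

def degree_of_automation(a: Automation) -> tuple:
    """Order automation designs by DOA: later stage first, then higher level within stage."""
    return (a.stage, a.level)

# Hypothetical example: decision aiding that executes a choice unless vetoed
# outranks analysis aiding that merely offers the human several choices.
veto_executor = Automation(stage=2, level=7)
choice_offerer = Automation(stage=1, level=4)
assert degree_of_automation(veto_executor) > degree_of_automation(choice_offerer)
```

The tuple ordering captures the abstract's claim that DOA rises both across stages and within a stage.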


Human Factors | 2009

Identifying Black Swans in NextGen: Predicting Human Performance in Off-Nominal Conditions

Christopher D. Wickens; Becky L. Hooey; Brian F. Gore; Angelia Sebok; Corey S. Koenicke

Objective: The objective is to validate a computational model of visual attention against empirical data—derived from a meta-analysis—of pilots’ failure to notice safety-critical unexpected events. Background: Many aircraft accidents have resulted, in part, because of failure to notice nonsalient unexpected events outside of foveal vision, illustrating the phenomenon of change blindness. A model of visual noticing, N-SEEV (noticing— salience, expectancy, effort, and value), was developed to predict these failures. Method: First, 25 studies that reported objective data on miss rate for unexpected events in high-fidelity cockpit simulations were identified, and their miss rate data pooled across five variables (phase of flight, event expectancy, event location, presence of a head-up display, and presence of a highway-in-the-sky display). Second, the parameters of the N-SEEV model were tailored to mimic these dichotomies. Results: The N-SEEV model output predicted variance in the obtained miss rate (r = .73). The individual miss rates of all six dichotomous conditions were predicted within 14%, and four of these were predicted within 7%. Conclusion: The N-SEEV model, developed on the basis of an independent data set, was able to successfully predict variance in this safety-critical measure of pilot response to abnormal circumstances, as collected from the literature. Applications: As new technology and procedures are envisioned for the future airspace, it is important to predict if these may compromise safety in terms of pilots’ failing to notice unexpected events. Computational models such as N-SEEV support cost-effective means of making such predictions.
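The SEEV weighting that underlies N-SEEV can be sketched generically: salience, expectancy, and value attract attention while the effort of moving gaze deters it. The coefficients, area-of-interest names, and numbers below are hypothetical; the paper's actual parameterization mimics the five pooled dichotomies.

```python
def seev_weight(salience, expectancy, value, effort,
                w_s=1.0, w_ex=1.0, w_v=1.0, w_ef=1.0):
    """Attention-attracting weight of one area of interest (AOI):
    salience, expectancy, and value add; effort to reach it subtracts."""
    return w_s * salience + w_ex * expectancy + w_v * value - w_ef * effort

def noticing_probabilities(aois):
    """Normalize nonnegative SEEV weights into a probability of attending each AOI."""
    weights = {name: max(seev_weight(*p), 0.0) for name, p in aois.items()}
    total = sum(weights.values()) or 1.0
    return {name: w / total for name, w in weights.items()}

# Hypothetical cockpit AOIs as (salience, expectancy, value, effort) on a 0-1 scale.
aois = {
    "HUD event":      (0.9, 0.6, 0.8, 0.1),
    "peripheral CDU": (0.2, 0.3, 0.7, 0.8),
}
probs = noticing_probabilities(aois)
# A nonsalient, unexpected event far from foveal vision gets little attention,
# which is the mechanism behind the predicted miss rates.
assert probs["HUD event"] > probs["peripheral CDU"]
```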


Human Factors | 2015

The Impact of Sleep Disruption on Complex Cognitive Tasks: A Meta-Analysis

Christopher D. Wickens; Shaun Hutchins; Lila Laux; Angelia Sebok

Objective: We aimed to extend the state of knowledge about the impacts of sleep disruption to the domain of complex cognitive task performance for three types of sleep disruption: total sleep deprivation, sleep restriction, and circadian cycle. Background: Sleep disruption affects human performance by increasing the likelihood of errors or the time it takes to complete tasks, such as the Psychomotor Vigilance Task. It is not clear whether complex tasks are affected in the same way. Understanding the impact of sleep disruption on complex cognitive tasks is important for, and in some instances more relevant to, professional workers confronted with unexpected, catastrophic failures following a period of disrupted sleep. Method: Meta-analytic review methods were applied to each of the three different areas of sleep disruption research. Results: Complex cognitive task performance declines over consecutive hours of continuous wakefulness as well as consecutive days of restricted sleep, is worse for severely restricted sleep (4 or fewer hours in bed), is worse during the circadian nadir than apex, and appears less degraded than simple task performance. Conclusion: The reviews suggest that complex cognitive task performance may not be impacted by disrupted sleep as severely as simple cognitive task performance. Application: Findings apply to predicting effects of sleep disruption on workers in safety-critical environments, such as health care, aviation, the military, process control, and in particular, safety-critical environments involving shiftwork or long-duration missions.


Human Factors | 2013

Supporting Interruption Management and Multimodal Interface Design: Three Meta-Analyses of Task Performance as a Function of Interrupting Task Modality

Sara A. Lu; Christopher D. Wickens; Julie C. Prinet; Shaun Hutchins; Nadine Sarter; Angelia Sebok

Objective: The aim of this study was to integrate empirical data showing the effects of interrupting task modality on the performance of an ongoing visual-manual task and the interrupting task itself. The goal is to support interruption management and the design of multimodal interfaces. Background: Multimodal interfaces have been proposed as a promising means to support interruption management. To ensure the effectiveness of this approach, their design needs to be based on an analysis of empirical data concerning the effectiveness of individual and redundant channels of information presentation. Method: Three meta-analyses were conducted to contrast performance on an ongoing visual task and interrupting tasks as a function of interrupting task modality (auditory vs. tactile, auditory vs. visual, and single modality vs. redundant auditory-visual). In total, 68 studies were included and six moderator variables were considered. Results: The main findings from the meta-analyses are that response times are faster for tactile interrupting tasks in case of low-urgency messages. Accuracy is higher with tactile interrupting tasks for low-complexity signals but higher with auditory interrupting tasks for high-complexity signals. Redundant auditory-visual combinations are preferable for communication tasks during high workload and with a small visual angle of separation. Conclusion: The three meta-analyses contribute to the knowledge base in multimodal information processing and design. They highlight the importance of moderator variables in predicting the effects of interruption task modality on ongoing and interrupting task performance. Applications: The findings from this research will help inform the design of multimodal interfaces in data-rich, event-driven domains.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2011

Informing the Design of Multimodal Displays: A Meta-Analysis of Empirical Studies Comparing Auditory and Tactile Interruptions

Sara A. Lu; Christopher D. Wickens; Nadine Sarter; Angelia Sebok

The expected air traffic growth will introduce new tasks and automation technologies. As a result, the amount of mostly visual cockpit information will increase significantly, leading to more interruptions and a greater risk of data overload. One promising means of addressing this challenge is the use of multimodal interfaces, which distribute information across sensory channels. To inform the design of such interfaces, a meta-analysis was conducted on the effectiveness and performance effects of auditory versus tactile interruption signals. From the 23 studies, ratio scores were computed to compare performance between the two modalities. The impact of six moderator variables was also examined. Overall, this analysis shows faster responses to tactile interruptions. However, more complex and very urgent interruption signals are better presented via the auditory modality. The findings add to our knowledge base in multimodal information processing and can inform modality choices in display design for complex data-rich domains.
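The ratio-score approach mentioned above can be sketched as follows. All response times and study entries here are invented for illustration; the paper pooled real data from 23 studies and six moderator variables.

```python
from statistics import mean

def ratio_scores(studies):
    """Per-study ratio of auditory to tactile response time.
    Ratios above 1 mean the tactile interruption was answered faster."""
    return [s["auditory_rt"] / s["tactile_rt"] for s in studies]

# Hypothetical response times (seconds) from three illustrative studies.
studies = [
    {"auditory_rt": 1.40, "tactile_rt": 1.10},
    {"auditory_rt": 0.95, "tactile_rt": 0.90},
    {"auditory_rt": 1.20, "tactile_rt": 1.00},
]
pooled = mean(ratio_scores(studies))
# A pooled ratio above 1.0 is consistent with the meta-analytic finding
# of faster responses to tactile interruptions.
assert pooled > 1.0
```

Expressing each study as a within-study ratio lets results be pooled across studies that used different tasks and absolute time scales.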


Human Factors | 2015

Complacency and automation bias in the use of imperfect automation

Christopher D. Wickens; Benjamin A. Clegg; Alex Z. Vieane; Angelia Sebok

Objective: We examine the effects of two different kinds of decision-aiding automation errors on human–automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Background: Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Method: Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. Results: The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but “automation wrong” had a much greater effect on accuracy, reflecting the automation bias, than did “automation gone,” reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Conclusions: Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Implications: Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2013

A Computational Model of Task Overload Management and Task Switching

Christopher D. Wickens; Amy Santamaria; Angelia Sebok

We describe a computational model that predicts the decision aspect of sequential multitasking. We investigate how people choose to switch tasks or continue performing an ongoing task when they are in overload conditions where concurrent performance of tasks is impossible. The model is based on a meta-analytic integration of 46 experiments from two literatures: interruption management and applied task switching. Consistent trends from the meta-analysis are used to set parameters in the mathematical model, which is then implemented in a task network model.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2011

Auditory-Visual Redundancy in Vehicle Control Interruptions: Two Meta-Analyses

Christopher D. Wickens; Julie C. Prinet; Shaun Hutchins; Nadine B. Sarter; Angelia Sebok

Two novel versions of a meta-analysis were employed to assess the conditions of ongoing vehicle control task simulations in which (1) auditory presentation of an interrupting task was beneficial over visual presentation and (2) redundant (auditory-visual) presentation was better than single-modality presentation (providing redundancy gain). Altogether, 29 studies were identified. The results revealed that the interrupting task benefited from auditory presentation, but the ongoing task (visual vehicle control task) generally did not. Performance of the visual interrupting task was slightly hindered by separation from the ongoing task. The redundancy analysis revealed that the interrupting task benefited from redundancy when it involved spatial localization, alerting, and the accuracy of verbal communications, but suffered when the speed of the verbal communications response was measured and when the two visual channels were separated. Implications for multimodal presentation of information on vehicle workstations are discussed.


Human Factors | 2014

Stages and Levels of Automation in Support of Space Teleoperations

Huiyang Li; Christopher D. Wickens; Nadine Sarter; Angelia Sebok

Objective: This study examined the impact of stage of automation on the performance and perceived workload during simulated robotic arm control tasks in routine and off-nominal scenarios. Background: Automation varies with respect to the stage of information processing it supports and its assigned level of automation. Making appropriate choices in terms of stages and levels of automation is critical to ensure robust joint system performance. To date, this issue has been empirically studied in domains such as aviation and medicine but not extensively in the context of space operations. Method: A total of 36 participants played the role of a payload specialist and controlled a simulated robotic arm. Participants performed fly-to tasks with two types of automation (camera recommendation and trajectory control automation) of varying stage. Tasks were performed during routine scenarios and in scenarios in which either the trajectory control automation or a hazard avoidance automation failed. Results: Increasing the stage of automation progressively improved performance and lowered workload when the automation was reliable, but incurred severe performance costs when the system failed. Conclusion: The results from this study support concerns about automation-induced complacency and automation bias when later stages of automation are introduced. The benefits of such automation are offset by the risk of catastrophic outcomes when system failures go unnoticed or become difficult to recover from. Application: A medium stage of automation seems preferable as it provides sufficient support during routine operations and helps avoid potentially catastrophic outcomes in circumstances when the automation fails.


Human Factors | 2015

Using modeling and simulation to predict operator performance and automation-induced complacency with robotic automation: a case study and empirical validation

Christopher D. Wickens; Angelia Sebok; Huiyang Li; Nadine Sarter; Andrew M. Gacy

Objective: The aim of this study was to develop and validate a computational model of the automation complacency effect, as operators work on a robotic arm task, supported by three different degrees of automation. Background: Some computational models of complacency in human–automation interaction exist, but those are formed and validated within the context of fairly simplified monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without need for human-in-the-loop (HITL) experimentation, merits and shortcomings of different automation degrees. Method: We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. Results: The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance and predicted the responses to automation failures after complacency developed. However, the scanning models do not account for the entire attention allocation effects of complacency. Applications: Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development.

Collaboration


Explore Angelia Sebok's collaboration network.

Top Co-Authors

Huiyang Li, Georgia Institute of Technology
Shaun Hutchins, Alion Science and Technology
Brett Walters, Alion Science and Technology
Alex Z. Vieane, Colorado State University