Publications


Featured research published by Nadine Sarter.


Human Factors | 2008

Tactile displays: guidance for their design and application.

Lynette A. Jones; Nadine Sarter

Objective: This article provides an overview of tactile displays. Its goal is to assist human factors practitioners in deciding when and how to employ the sense of touch for the purpose of information representation. The article also identifies important research needs in this area. Background: First attempts to utilize the sense of touch as a medium for communication date back to the late 1950s. For the next 35 years progress in this area was relatively slow, but recent years have seen a surge in the interest and development of tactile displays and the integration of tactile signals in multimodal interfaces. A thorough understanding of the properties of this sensory channel and its interaction with other modalities is needed to ensure the effective and robust use of tactile displays. Methods: First, an overview of vibrotactile perception is provided. Next, the design of tactile displays is discussed with respect to available technologies. The potential benefit of including tactile cues in multimodal interfaces is discussed. Finally, research needs in the area of tactile information presentation are highlighted. Results: This review provides human factors researchers and interface designers with the requisite knowledge for creating effective tactile interfaces. It describes both potential benefits and limitations of this approach to information presentation. Conclusion: The sense of touch represents a promising means of supporting communication and coordination in human-human and human-machine systems. Application: Tactile interfaces can support numerous functions, including spatial orientation and guidance, attention management, and sensory substitution, in a wide range of domains.


Human Factors | 2007

Pilots' monitoring strategies and performance on automated flight decks: an empirical study combining behavioral and eye-tracking data

Nadine Sarter; Randall J. Mumaw; Christopher D. Wickens

Objective: The objective of the study was to examine pilots' automation monitoring strategies and performance on highly automated commercial flight decks. Background: A considerable body of research and operational experience has documented breakdowns in pilot-automation coordination on modern flight decks. These breakdowns are often considered symptoms of monitoring failures even though, to date, only limited and mostly anecdotal data exist concerning pilots' monitoring strategies and performance. Method: Twenty experienced B-747-400 airline pilots flew a 1-hr scenario involving challenging automation-related events on a full-mission simulator. Behavioral, mental model, and eye-tracking data were collected. Results: The findings from this study confirm that pilots monitor basic flight parameters to a much greater extent than visual indications of the automation configuration. More specifically, they frequently fail to verify manual mode selections or notice automatic mode changes. In other cases, they do not process mode annunciations in sufficient depth to understand their implications for aircraft behavior. Low system observability and gaps in pilots' understanding of complex automation modes were shown to contribute to these problems. Conclusion: Our findings describe and explain shortcomings in pilots' automation monitoring strategies and performance based on converging behavioral, eye-tracking, and mental model data. They confirm that monitoring failures are one major contributor to breakdowns in pilot-automation interaction. Application: The findings from this research can inform the design of improved training programs and automation interfaces that support more effective system monitoring.


Human Factors | 2006

Supporting Trust Calibration and the Effective Use of Decision Aids by Presenting Dynamic System Confidence Information

John M. McGuirl; Nadine Sarter

Objective: To examine whether continually updated information about a system's confidence in its ability to perform assigned tasks improves operators' trust calibration in, and use of, an automated decision support system (DSS). Background: The introduction of decision aids often leads to performance breakdowns that are related to automation bias and trust miscalibration. This can be explained, in part, by the fact that operators are informed about overall system reliability only, which makes it impossible for them to decide on a case-by-case basis whether to follow the system's advice. Method: The application for this research was a neural net-based decision aid that assists pilots with detecting and handling in-flight icing encounters. A multifactorial experiment was carried out with two groups of 15 instructor pilots each flying a series of 28 approaches in a motion-base simulator. One group was informed about the system's overall reliability only, whereas the other group received updated system confidence information. Results: Pilots in the updated group experienced significantly fewer icing-related stalls and were more likely to reverse their initial response to an icing condition when it did not produce desired results. Their estimates of the system's accuracy were more accurate than those of the fixed group. Conclusion: The presentation of continually updated system confidence information can improve trust calibration and thus lead to better performance of the human-machine team. Application: The findings from this research can inform the design of decision support systems in a variety of event-driven high-tempo domains.


Human Factors | 2008

Cross-Modal Links Among Vision, Audition, and Touch in Complex Environments

Thomas K. Ferris; Nadine Sarter

Objectives: This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Background: Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. Method: A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Results: Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. Conclusions: The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. Application: The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.


Human Factors | 2004

Not now! Supporting interruption management by indicating the modality and urgency of pending tasks

Chih Yuan Ho; Mark I. Nikolic; Molly J. Waters; Nadine Sarter

Operators in complex event-driven domains must coordinate competing attentional demands in the form of multiple tasks and interactions. This study examined the extent to which this requirement can be supported more effectively through informative interruption cueing (in this case, partial information about the nature of pending tasks). The 48 participants performed a visually demanding air traffic control (ATC) task. They were randomly assigned to 1 of 3 experimental groups that differed in the availability of information (not available, available upon request, available automatically) about the urgency and modality of pending interruption tasks. Within-subject variables included ATC-related workload and the modality, frequency, and priority of interruption tasks. The results show that advance knowledge about the nature of pending tasks led participants to delay visual interruption tasks the longest, which allowed them to avoid intramodal interference and scanning costs associated with performing these tasks concurrently with ATC tasks. The 3 experimental groups did not differ significantly in terms of their interruption task performance; however, the group that automatically received task-related information showed better ATC performance, thus experiencing a net performance gain. Actual or potential applications of this research include the design of interfaces in support of attention and interruption management in a wide range of event-driven environments.


Human Factors | 2009

Using Informative Peripheral Visual and Tactile Cues to Support Task and Interruption Management

Shameem Hameed; Thomas K. Ferris; Swapnaa Jayaraman; Nadine Sarter

Objective: This study examined the effectiveness of using informative peripheral visual and tactile cues to support task switching and interruption management. Background: Effective support for the allocation of limited attentional resources is needed for operators who must cope with numerous competing task demands and frequent interruptions in data-rich, event-driven domains. One prerequisite for meeting this need is to provide information that allows them to make informed decisions about, and before, (re)orienting their attentional focus. Method: Thirty participants performed a continuous visual task. Occasionally, they were presented with a peripheral visual or tactile cue that indicated the need to attend to a separate visual task. The location, frequency, and duration parameters of these cues represented the domain, importance, and expected completion time, respectively, of the interrupting task. Results: The findings show that the informative cues were detected and interpreted reliably. Information about the importance (rather than duration) of the task was used by participants to decide whether to switch attention to the interruption, indicating adherence to experimenter instructions. Erroneous task-switching behavior (nonadherence to experimenter instructions) was mostly caused by misinterpretation of cues. Conclusion: The effectiveness of informative peripheral visual and tactile cues for supporting interruption management was validated in this study. However, the specific implementation of these cues requires further work and needs to be tailored to specific domain requirements. Application: The findings from this research can inform the design of more effective notification systems for a variety of complex event-driven domains, such as aviation, medicine, or process control.
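The abstract above describes a three-way mapping from physical cue parameters to task attributes (location → domain, frequency → importance, duration → expected completion time). A minimal sketch of such an encoding is shown below; the specific parameter values, thresholds, and domain labels are illustrative assumptions, not those used in the study:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PeripheralCue:
    """Hypothetical encoding of an interruption cue: each physical
    parameter carries one attribute of the pending task."""
    location: str       # spatial position -> task domain (assumed labels)
    pulse_hz: float     # pulse frequency -> task importance
    duration_s: float   # cue duration -> expected completion time

def decode(cue: PeripheralCue) -> dict:
    """Translate cue parameters back into task attributes.
    All thresholds below are illustrative only."""
    domain = {"left": "communications", "right": "navigation"}.get(cue.location, "unknown")
    importance = "high" if cue.pulse_hz >= 4.0 else "low"
    expected = "long" if cue.duration_s >= 1.0 else "short"
    return {"domain": domain, "importance": importance, "expected_completion": expected}

attrs = decode(PeripheralCue(location="left", pulse_hz=5.0, duration_s=0.5))
print(attrs)  # {'domain': 'communications', 'importance': 'high', 'expected_completion': 'short'}
```

The point of the scheme is that an operator (or here, a decoder) can recover task attributes before switching attention, which is what allows informed deferral of low-importance interruptions.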


Human Factors | 2011

Continuously informing vibrotactile displays in support of attention management and multitasking in anesthesiology.

Thomas K. Ferris; Nadine Sarter

Objective: A novel vibrotactile display type was investigated to determine the potential benefits for supporting the attention and task management of anesthesiologists. Background: Recent research has shown that physiological monitoring and multitasking performance can benefit from displaying patient data via alarm-like tactile notifications and via continuously informing auditory displays (e.g., sonifications). The current study investigated a novel combination of these two approaches: continuously informing tactile displays. Method: A tactile alarm and two continuously informing tactile display designs were evaluated in an anesthesia induction simulation with anesthesiologists as participants. Several performance measures were collected for two tasks: physiological monitoring and anesthesia induction. A multitask performance score equivalently weighted components from each task, normalized across experimental scenarios. Subjective rankings of the displays were also collected. Results: Compared to the baseline (visual and auditory only) display configuration, each tactile display significantly improved performance in several objective measures, including multitask performance score. The continuously informing display that encoded the severity of patient health into the salience of its signals supported significantly better performance than the other two tactile displays. Contrasting the objective results, participants subjectively ranked the tactile alarm display highest. Conclusion: Continuously informing tactile displays with alarm-like properties (e.g., salience mapping) can better support anesthesiologists' physiological monitoring and multitasking performance under the high task demands of anesthesia induction. Adaptive display mechanisms may improve user acceptance. Application: This study can inform display design to support multitasking performance of anesthesiologists in the clinical setting and other supervisory control operators in work domains characterized by high demands for visual and auditory resources.
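The salience mapping that distinguished the best-performing display, encoding patient-state severity into signal salience, amounts to a monotone scaling of vibrotactile parameters. The sketch below illustrates the idea; the parameter ranges and the choice of amplitude and pulse rate as salience dimensions are assumptions for illustration, not values from the study:

```python
def salience_params(severity: float) -> tuple[float, float]:
    """Map a normalized patient-state severity (0.0 = nominal,
    1.0 = critical) to vibrotactile signal parameters.
    Output ranges are illustrative assumptions:
      amplitude: 0.2..1.0 (fraction of actuator maximum)
      pulse_hz:  0.5..5.0 (pulses per second)
    """
    s = min(max(severity, 0.0), 1.0)  # clamp severity to [0, 1]
    amplitude = 0.2 + 0.8 * s         # more severe -> stronger vibration
    pulse_hz = 0.5 + 4.5 * s          # more severe -> faster pulsing
    return amplitude, pulse_hz
```

A mapping like this keeps the display continuously informative: a mild deviation produces a faint, slow signal that need not interrupt, while a critical state produces an intense, rapid signal with alarm-like salience.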


Human Factors | 2008

Investigating mode errors on automated flight decks: Illustrating the problem-driven, cumulative, and interdisciplinary nature of human factors research

Nadine Sarter

Objective: The goal of this article is to illustrate the problem-driven, cumulative, and highly interdisciplinary nature of human factors research by providing a brief overview of the work on mode errors on modern flight decks over the past two decades. Background: Mode errors on modern flight decks were first reported in the late 1980s. Poor feedback, inadequate mental models of the automation, and the high degree of coupling and complexity of flight deck systems were identified as main contributors to these breakdowns in human-automation interaction. Various improvements of design, training, and procedures were proposed to address these issues. Methods: The author describes when and why the problem of mode errors surfaced, summarizes complementary research activities that helped identify and understand the contributing factors to mode errors, and describes some countermeasures that have been developed in recent years. Results: This brief review illustrates how one particular human factors problem in the aviation domain enabled various disciplines and methodological approaches to contribute to a better understanding of, as well as provide better support for, effective human-automation coordination. Conclusion: Converging operations and interdisciplinary collaboration over an extended period of time are hallmarks of successful human factors research. Application: The reported body of research can serve as a model for future research and as a teaching tool for students in this field of work.


Theoretical Issues in Ergonomics Science | 2010

Capturing the dynamics of attention control from individual to distributed systems: The shape of models to come

David D. Woods; Nadine Sarter

New technology presents opportunities for enhancing the performance of human systems that are tasked to meet multiple, often competing demands. Yet, mistakes in designing and deploying these technologies can create complexities that make human systems more brittle. To many stakeholders, the answer to this challenge is to increase situation awareness. But what these advocates refer to when they talk about enhancing situation awareness varies tremendously. Over 15 years ago, the authors commented on how the label was ill-defined. Today, the label is more popular than ever but the range of situations and the kinds of awareness are now so diverse that the label is better referred to as multiply defined. This paper returns to basic concepts and findings about human perception and the control of attention and the critical role that these processes play in individual as well as joint and distributed activity–how people know where to focus next in changing situations. This paper also briefly reviews recent studies on the neurobiology of the control of attention that help explain how people find what is relevant despite the fact that this is highly context sensitive. Together, the findings from this research can be synthesised into new models that capture how human systems can fluently and dynamically shift focus as context, goals and situations change. These models are needed to be able to understand, predict and support the processes involved in assessing situations and achieving situation awareness. They can be scaled up to address environments where technology is used to extend human perception into distant scenes and where technology connects multiple interdependent agents (both human groups and machine agents) over new temporal and spatial scales.


50th Annual Meeting of the Human Factors and Ergonomics Society, HFES 2006 | 2006

The implications of crossmodal links in attention for the design of multimodal interfaces: A driving simulation study

Thomas K. Ferris; Robert Penfold; Shameem Hameed; Nadine Sarter

The design of multimodal interfaces rarely takes into consideration recent data suggesting the existence of considerable crossmodal spatial and temporal links in attention. This can be partly explained by the fact that crossmodal links have been studied almost exclusively in spartan laboratory settings with simple cues and tasks. As a result, it is not clear whether they scale to more complex settings. To examine this question, participants in this experiment drove a simulated military vehicle and were periodically presented with lateralized visual indications marking locations of roadside mines and safe areas of travel. Valid and invalid auditory and tactile cues preceded these indications at varying stimulus-onset asynchronies (SOAs). The findings confirm that the location and timing of crossmodal cue combinations affect response time and accuracy in complex domains as well. In particular, presentation of crossmodal cues at SOAs below 500 ms and tactile cuing resulted in lower accuracy and longer response times.

Collaboration


Dive into Nadine Sarter's collaborations.

Top Co-Authors

Angelia Sebok

Alion Science and Technology


Huiyang Li

Georgia Institute of Technology


Sara A. Lu

University of Michigan
