Publication


Featured research published by Sara L. Riggs.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2016

Anesthesia maintenance and vigilance: Examining task switching

Scott M. Betza; Katherina Jurewicz; David M. Neyens; Sara L. Riggs; James H. Abernathy; Scott Reeves

Limited research has focused on vigilance during the maintenance phase of anesthesia work. The goal of this study was to identify anesthesia maintenance tasks and the transitions between these tasks from the perspective of the vigilance paradigm. In this study, three bariatric surgeries were recorded and analyzed using a task categorization structure. Across the surgeries, the primary anesthesia provider spent 71% of their time on patient- or display-monitoring tasks. Task frequency and transition visualizations were generated to identify trends in task switching. Transitions between the task categories occurred approximately once every nine seconds for the primary anesthesia provider. Additionally, it appears that, regardless of the task, transitions frequently moved to the visual displays and then from the visual displays to the patient. The results of this study emphasize the importance of vigilance for anesthesia display design.
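
As an illustration of the kind of analysis described above, the sketch below computes the share of time per task category and counts transitions between consecutive tasks from a coded task log. All category labels and durations are hypothetical; the study's own coding scheme and data are not reproduced here.

```python
from collections import Counter

# Hypothetical task log for the primary anesthesia provider:
# (task category, duration in seconds). The study coded tasks from video of
# three bariatric surgeries; these labels and durations are illustrative only.
task_log = [
    ("patient_monitoring", 12.0),
    ("display_monitoring", 8.5),
    ("documentation", 20.0),
    ("display_monitoring", 6.0),
    ("patient_monitoring", 15.5),
]

# Fraction of total time spent in each task category.
total_time = sum(duration for _, duration in task_log)
time_share = {
    category: sum(d for c, d in task_log if c == category) / total_time
    for category, _ in task_log
}

# Counts of transitions between consecutive task categories, the raw material
# for the transition visualizations mentioned in the abstract.
transitions = Counter((a[0], b[0]) for a, b in zip(task_log, task_log[1:]))

# Average time between task transitions (the study reports roughly one every 9 s).
mean_gap = total_time / max(len(task_log) - 1, 1)

print(time_share)
print(transitions.most_common())
print(f"mean time between transitions: {mean_gap:.1f} s")
```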


Human Factors | 2016

The Development and Evaluation of Countermeasures to Tactile Change Blindness

Sara L. Riggs; Nadine Sarter

Objective: The goal of the present study was to develop and empirically evaluate three countermeasures to tactile change blindness (where a tactile signal is missed in the presence of a tactile transient). Each of these countermeasures relates to a different cognitive step involved in successful change detection. Background: To date, change blindness has been studied primarily in vision, but there is limited empirical evidence that the tactile modality may also be subject to this phenomenon. Change blindness raises concerns regarding the robustness of tactile and multimodal interfaces. Method: Three countermeasures to tactile change blindness were evaluated in the context of a highly demanding monitoring task. One countermeasure was proactive (alerting the participant to a possible change before it occurred) whereas the other two were adaptive (triggered after the change upon an observed miss). Performance and subjective data were collected. Results: Compared to the baseline condition, all countermeasures improved intramodal tactile change detection. Adaptive measures resulted in the highest detection rates, specifically when signal gradation was employed (i.e., when the intensity of the tactile signal was increased after a miss was observed). Conclusion: Adaptive displays can be used to counter the effects of change blindness and ensure that tactile information is reliably detected. Increasing the tactile intensity after a missed change appears most promising and was the preferred countermeasure. Application: The findings from this study can inform the design of interfaces employing the tactile modality to support monitoring and attention management in data-rich domains.
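
The adaptive "signal gradation" countermeasure can be pictured as a simple escalation rule: each time a change is missed, the cue is re-presented at a higher intensity. The sketch below is a minimal illustration with assumed step size, ceiling, and intensity scale, not the study's actual schedule.

```python
def next_cue_intensity(current, step=0.2, ceiling=1.0):
    """Raise the tactile cue intensity after an observed miss (signal gradation).

    The step size, ceiling, and intensity scale are illustrative assumptions;
    the study does not prescribe specific values here.
    """
    return round(min(current + step, ceiling), 2)

# A cue missed at intensity 0.4 would be re-presented at 0.6, then 0.8, then 1.0.
intensity = 0.4
for _ in range(3):
    intensity = next_cue_intensity(intensity)
    print(intensity)
```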


The International Journal of Aerospace Psychology | 2017

Multimodal Information Presentation in Support of NextGen Operations

Sara L. Riggs; Christopher D. Wickens; Nadine Sarter; Lisa C. Thomas; Mark I. Nikolic; Angelia Sebok

Objective: This study examined the effectiveness of visual, auditory, tactile, and redundant auditory-visual information presentation in the context of a medium-fidelity Next Generation Air Transportation System (NextGen) flight simulation. Background: Data overload, especially in the visual channel, and associated breakdowns in monitoring represent a major challenge in aviation. These problems are expected to worsen with NextGen, which will require pilots to manage increased amounts of data and adopt new responsibilities. The introduction of multimodal interfaces (interfaces that distribute information across multiple sensory channels) has been proposed as a means to offload the overburdened visual channel and thus address data overload. Method: Experienced commercial airline pilots completed 2 scenarios using a medium-fidelity flight simulator. For each scenario, NextGen tasks and events were presented either using technology that is currently available (visual and auditory displays) or technology proposed as part of NextGen design concepts (i.e., tactile and redundant displays). Performance was measured based on response time and accuracy. Results: Faster responses were observed for redundant displays, compared to either vision or audition alone. No significant benefit of redundancy was found for accuracy, and workload did not mediate redundancy effects. For traffic events, there were faster response times with tactile displays, but higher response accuracy with auditory displays. Conclusion: The findings from this research add to the knowledge base in multimodal information processing and can inform the design of displays for NextGen operations.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2017

The effect of movement and cue complexity on tactile change detection

Scott M. Betza; Scott Reeves; James H. Abernathy; Sara L. Riggs

There is a growing interest in using touch to offload the often overburdened visual channel, as its merit has been demonstrated in various work domains. However, more work is needed to understand the perceptual limitations of the tactile modality, including how it is affected by change blindness (i.e., failure to detect changes due to transients), as the majority of work on change blindness has been in vision. This study examines how movement and cue complexity affect the ability to detect tactile changes. The findings indicate the ability to detect changes is affected by: 1) movement (walking resulted in worse change detection rates compared to sitting) and 2) cue complexity (high-complexity cues had worse change detection rates compared to low-complexity cues). Overall, this work adds to the knowledge base of tactile perception and can inform the design of tactile displays for multiple work domains such as anesthesiology.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2018

The “up-side” and “down-side” of tactile parameters: An evaluation of increases and decreases in tactile cue magnitude to support anesthesia monitoring

Kylie M. Gomes; Scott Reeves; Sara L. Riggs

There is a need to find alternative ways to present information to alleviate data overload in the complex environment of anesthesia monitoring in the operating room. The tactile modality has been shown to be a promising means of supporting this effort; however, to develop effective tactile displays, it is important to ensure that the tactile parameters convey information that is both meaningful and easy to learn. This work aimed to address this gap by evaluating how increases and decreases in tactile parameter magnitude could be mapped to physiological variables to support anesthesia monitoring. It was found that increases in magnitude for the intensity and temporal parameters mapped to urgent changes in physiological variables, whereas decreases in magnitude for these parameters mapped to less urgent changes (e.g., a physiological variable is normalizing). This work provides preliminary support towards a better understanding of which tactile parameters should be used to represent information in the operating room to support anesthesia monitoring. Furthermore, it demonstrates how increases and decreases in tactile parameters can be interpreted differently, emphasizing the importance of carefully considering how to utilize changes in tactile parameters to convey information in complex domains.
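
A minimal sketch of the mapping reported above: urgent physiological changes correspond to increases in tactile cue magnitude, and normalizing trends to decreases. The function name, baselines, and scaling factors are assumptions for illustration only.

```python
def tactile_cue_for_trend(urgent, base_intensity=0.5, base_pulse_rate_hz=1.0):
    """Map the direction of a physiological trend to tactile cue parameters.

    Following the abstract's finding, urgent changes map to increases in the
    intensity and temporal (pulse-rate) parameters, and normalizing trends map
    to decreases. The scaling factors and baselines are illustrative only.
    """
    factor = 1.5 if urgent else 0.5
    return {
        "intensity": base_intensity * factor,
        "pulse_rate_hz": base_pulse_rate_hz * factor,
    }

print(tactile_cue_for_trend(urgent=True))   # e.g., blood pressure falling rapidly
print(tactile_cue_for_trend(urgent=False))  # e.g., a variable returning to normal
```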


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2018

Connecting the Big Five Taxonomies: Understanding how Individual Traits Contribute to Team Adaptability under Workload Transitions

Shannon P. Devlin; Jake R. Flynn; Sara L. Riggs

Dynamic and data-rich domains, like those found in the military, rely heavily on teamwork for their operations. Previous work has attempted to understand how the personality of individuals contributes to overall team performance, but specific links between individual traits and team dimensions are needed. This study aims to link the dimensions from the original Big Five Trait Taxonomy to the Big Five in teamwork. Specifically, the focus was identifying which dimensions in the Big Five Trait Taxonomy influenced the Big Five in teamwork's core component of adaptability. Ten pairs of participants completed a simulated Unmanned Aerial Vehicle control task. The best and worst performing pairs were identified and further analyzed to assess how pairs enabled adaptability when workload transitioned. The findings showed the best performing pairs enabled team adaptability effectively, had high levels of extraversion and lower levels of diversity across all dimensions, and adopted collaborative strategies to complete all the tasks. These findings suggest operational standards, technology, and training programs should be developed to foster these personality traits and collaboration-based strategies.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2018

The effect of video game experience on the ability to handle workload and workload transitions

Shannon P. Devlin; Sara L. Riggs

Dynamic and complex domains, like those found in the military and healthcare, require operators to experience and cope with changes in task demands, i.e., workload transitions. The literature has shown individual differences affect a person's ability to handle workload transitions, but the role of individual experience has not been thoroughly examined. This work aims to understand how video game experience affects an individual's ability to handle workload transitions in the context of simulated unmanned aerial vehicle (UAV) control. Twenty-one participants completed a UAV simulation under four workload scenarios: low, high, a gradual shift from low to high, and a sudden shift from low to high. Performance was compared across self-reported video game players (VGPs) and non-video game players (NVGPs). Overall accuracy for VGPs was higher than for NVGPs. This line of work provides the foundation to understand the effect of video game experience, which can help inform training programs and workplace design for operators in various data-rich environments.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2017

Analyzing eye tracking data using a Markovian framework to assess differences in scan patterns

Shannon P. Devlin; Sara L. Riggs

Data overload, especially in the visual channel, represents a major challenge with regard to display design in data-rich domains. One promising means of addressing data overload is to use eye tracking technology to better understand an operator's transition methods between tasks in order to support operators in real time. The goal of this study is to develop a Markovian framework for analyzing eye movements across different display panels during a simulated Unmanned Aerial Vehicle (UAV) control task, the chosen application for this study. Across ten participants, an increase in workload adversely affected performance but did not change individual scan patterns, as characterized by the Markovian framework. However, across participants, eye tracking data revealed five distinct scan patterns, each with varying levels of success in terms of response time and accuracy. The top four performers all adopted different scan patterns. The findings show that eye tracking can provide unique insights to explain performance differences between individuals. Overall, the findings provide the foundation for developing an algorithm that optimizes performance while accounting for individual differences.
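
A first-order Markovian treatment of scan patterns can be sketched as a transition matrix over display panels, estimated from the sequence of fixated panels. The panel labels and fixation sequence below are hypothetical; only the general technique is illustrated, not the study's actual display layout or data.

```python
import numpy as np

def transition_matrix(fixation_sequence, panels):
    """First-order Markov transition matrix over display panels.

    fixation_sequence: ordered panel labels, one per fixation. Rows are
    normalized to transition probabilities; rows with no outgoing transitions
    are left as zeros.
    """
    index = {panel: i for i, panel in enumerate(panels)}
    counts = np.zeros((len(panels), len(panels)))
    for src, dst in zip(fixation_sequence, fixation_sequence[1:]):
        counts[index[src], index[dst]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Hypothetical panel labels for a UAV control display and a short fixation sequence.
panels = ["map", "uav_status", "tasking"]
sequence = ["map", "uav_status", "map", "tasking", "map", "uav_status"]
print(transition_matrix(sequence, panels))
```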


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2017

The Effect of Age on Crossmodal Matching using Auditory Frequency

Kylie M. Gomes; Sara L. Riggs

A limited number of multimodal studies conduct crossmodal matching, a step to ensure cues are perceived to be of equal intensity across sensory modalities. The majority of work on crossmodal matching was conducted by Stevens in the 1950s and 1960s, and there has been limited work since on developing a reliable crossmodal matching method for more recent multimodal studies. A few studies have contributed to this goal; however, little consideration has been given to identifying which parameters map between each modality and whether age significantly contributes to the between-subject variability. The goal of the current study is to investigate how auditory pitch and age affect crossmodal matching and the variability in the matches made. The findings of this study revealed that when auditory pitch is varied, there is a significant effect of age, especially when participants were able to control the intensity of the auditory modality. Additionally, there was significant variability between different modality combinations across both age groups. The findings demonstrate the importance of considering the appropriate parameters to be used across different sensory modalities and the effect age has on crossmodal matching.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2017

Analyzing Visual Search Techniques using Eye Tracking for a Computerized Provider Order Entry (CPOE) Task

Kylie M. Gomes; Sara L. Riggs

A challenge with current Computerized Provider Order Entry (CPOE) systems is patient identification errors, i.e., when an incorrect patient's record is referenced. These types of errors can lead to patient safety issues such as administering medication to the incorrect patient. Eye tracking technology can provide insights into the visual search patterns of healthcare professionals and shed light on how patient identification errors occur. This study investigates whether there are differences in visual search metrics, response time, and accuracy when searching for a patient by one of two identifiers (name or date of birth) from a list of patients with similar names. The findings revealed no effect of search strategy on speed or accuracy; however, there was an effect on fixation duration and number of fixations within specific areas of interest. Across both search strategies, there were more fixations on names. This demonstrates the importance of a patient's name regardless of search strategy and is an important consideration to take into account if multiple patients share the same name. This study shows that eye tracking technology can be used to investigate the visual search patterns employed during patient identification and provide insights as to how patient identification errors occur. It also demonstrates a need to develop alternative methods for preventing patient identification errors apart from relying on healthcare professionals to verify patient identity.
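
The fixation metrics reported above (number of fixations and dwell time within areas of interest) can be computed from raw fixation data along the lines of the sketch below. The AOI labels and durations are hypothetical, not the study's data.

```python
from collections import defaultdict

def aoi_fixation_metrics(fixations):
    """Fixation count and total dwell time per area of interest (AOI).

    fixations: iterable of (aoi_label, duration_ms) pairs.
    """
    metrics = defaultdict(lambda: {"count": 0, "total_ms": 0.0})
    for aoi, duration_ms in fixations:
        metrics[aoi]["count"] += 1
        metrics[aoi]["total_ms"] += duration_ms
    return dict(metrics)

# Illustrative fixation data for two AOIs on a CPOE patient list.
fixations = [("name", 220), ("date_of_birth", 180), ("name", 250), ("name", 200)]
print(aoi_fixation_metrics(fixations))
```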

Collaboration


Dive into Sara L. Riggs's collaborations.

Top Co-Authors

James H. Abernathy

Medical University of South Carolina


Angelia Sebok

Alion Science and Technology
