Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kristin M. Divis is active.

Publication


Featured research published by Kristin M. Divis.


International Conference on Augmented Cognition | 2015

Determining the Optimal Time on X-Ray Analysis for Transportation Security Officers

Ann Speed; Austin Silva; Derek Trumbo; David J. Stracuzzi; Christina E. Warrender; Michael Christopher Stefan Trumbo; Kristin M. Divis

The Transportation Security Administration has a large workforce of Transportation Security Officers (TSOs), most of whom perform interrogation of x-ray images at the passenger checkpoint. To date, TSOs on the x-ray have been limited to a 30-minute session at a time; however, it is unclear where this limit originated. The current paper outlines methods for empirically determining whether that 30-minute duty cycle is optimal and whether there are differences between individual TSOs. This work can inform the scheduling of TSOs at the checkpoint and can also inform whether TSOs should continue to be cross-trained (i.e., performing all six checkpoint duties) or whether specialization makes more sense.
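
The duty-cycle question above is ultimately a time-on-task analysis. As a rough illustration of the kind of empirical test involved (not the paper's actual method), the following sketch bins simulated screening decisions by minutes into a session and checks for a vigilance decrement; all data, field names, and the decline rate are invented.

```python
# Hypothetical sketch: does TSO detection performance decline over a
# 30-minute x-ray session? Data here are simulated, not from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-trial records: minutes into the session and whether a
# planted threat item was correctly flagged (True) or missed (False).
time_on_task = rng.uniform(0, 30, size=600)        # minutes
p_hit = 0.90 - 0.004 * time_on_task                # assumed mild vigilance decrement
hits = rng.random(600) < p_hit

# Compare hit rates in 10-minute bins.
bins = np.digitize(time_on_task, [10, 20])
for b, label in enumerate(["0-10 min", "10-20 min", "20-30 min"]):
    print(label, hits[bins == b].mean().round(3))

# Point-biserial correlation between hit/miss outcome and time on task.
r, p = stats.pointbiserialr(hits, time_on_task)
print(f"r = {r:.3f}, p = {p:.4f}")
```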


International Conference on Virtual, Augmented and Mixed Reality | 2016

Modeling Human Comprehension of Data Visualizations

Michael Joseph Haass; Andrew T. Wilson; Laura E. Matzen; Kristin M. Divis

A critical challenge in data science is conveying the meaning of data to human decision makers. While working with visualizations, decision makers are engaged in a visual search for information to support their reasoning process. As sensors proliferate and high performance computing becomes increasingly accessible, the volume of data decision makers must contend with is growing continuously and driving the need for more efficient and effective data visualizations. Consequently, researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles to assess the effectiveness of data visualizations. In this paper, we compare the performance of three different saliency models across a common set of data visualizations. This comparison establishes a performance baseline for assessment of new data visualization saliency models.
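
For readers unfamiliar with how such a baseline comparison is scored, the sketch below applies one standard measure, the similarity (SIM) metric, to rank candidate saliency maps against a human fixation density map. The maps are random stand-ins and the three model names are placeholders, not the models evaluated in the paper.

```python
# Illustrative sketch: scoring several model saliency maps against a
# human fixation density map with the similarity (SIM) metric.
import numpy as np

def sim(saliency: np.ndarray, fixation_map: np.ndarray) -> float:
    """Histogram intersection between two maps normalized to sum to 1."""
    s = saliency / saliency.sum()
    f = fixation_map / fixation_map.sum()
    return float(np.minimum(s, f).sum())

rng = np.random.default_rng(1)
fixations = rng.random((64, 64))   # stand-in for an empirical fixation density

models = {name: rng.random((64, 64)) for name in ["model_A", "model_B", "model_C"]}
for name, saliency_map in models.items():
    print(name, round(sim(saliency_map, fixations), 3))
```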


IEEE Transactions on Visualization and Computer Graphics | 2018

Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations

Laura E. Matzen; Michael Joseph Haass; Kristin M. Divis; Zhiyuan Wang; Andrew T. Wilson

Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g., color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. Finally, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
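
As an illustration of how saliency maps are compared to eye tracking data, the following sketch computes an AUC-style score that asks how well a map's values separate fixated pixels from the rest of the image. The saliency values and fixation coordinates are synthetic; this is a generic evaluation recipe, not the DVS model or the paper's exact protocol.

```python
# Minimal sketch: AUC scoring of a saliency map against fixations,
# treating fixated pixels as positives. All inputs are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
saliency = rng.random((48, 64))           # model output, one value per pixel

# Fixation locations from eye tracking, as (row, col) pixel coordinates.
fix_rows = rng.integers(0, 48, size=40)
fix_cols = rng.integers(0, 64, size=40)

labels = np.zeros(saliency.shape, dtype=int)
labels[fix_rows, fix_cols] = 1            # 1 = fixated pixel

auc = roc_auc_score(labels.ravel(), saliency.ravel())
print(f"AUC = {auc:.3f}")                 # ~0.5 for an uninformative map
```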


Journal of Human Performance in Extreme Environments | 2018

Physiological and Cognitive Factors Related to Human Performance During the Grand Canyon Rim-to-Rim Hike

Kristin M. Divis; Clifford Anderson-Bergman; Robert G. Abbott; Victoria Newton; Glory Emmanuel-Aviña

Exposure to extreme environments is both mentally and physically taxing, leading to suboptimal performance and even life-threatening emergencies. Physiological and cognitive monitoring could provide the earliest indicator of performance decline and inform appropriate therapeutic intervention, yet little research has explored the relationship between these markers in strenuous settings. The Rim-to-Rim Wearables at the Canyon for Health (R2R WATCH) study is a research project at Sandia National Laboratories, funded by the Defense Threat Reduction Agency, to identify which physiological and cognitive phenomena collected by non-invasive wearable devices are the most related to performance in extreme environments. In a pilot study, data were collected from civilians and military warfighters hiking the Rim-to-Rim trail at the Grand Canyon. Each participant wore a set of devices collecting physiological, cognitive, and environmental data such as heart rate, memory performance, and ambient temperature. Promising preliminary results identified correlations between physiological markers recorded by the wearable devices and decline in cognitive abilities, although further work is required to refine those measurements. Planned follow-up studies will validate these findings and further explore outstanding questions.
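
The kind of marker-to-cognition relationship described above is commonly quantified with a simple correlation. The sketch below is a hypothetical example along those lines; the marker (core temperature rise), the cognitive measure, and all values are invented and are not the study's data.

```python
# Hedged sketch: correlating a wearable-derived physiological marker
# with change in a cognitive score. Data are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 30                                            # hypothetical participants
core_temp_rise = rng.normal(1.0, 0.4, n)          # deg C over baseline (assumed)
cog_decline = 0.5 * core_temp_rise + rng.normal(0, 0.3, n)  # drop in task score

r, p = stats.pearsonr(core_temp_rise, cog_decline)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```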


International Conference on Augmented Cognition | 2017

Rim-to-Rim Wearables at the Canyon for Health (R2R WATCH): Experimental Design and Methodology

Glory Emmanuel Aviña; Robert G. Abbott; Cliff Anderson-Bergman; Catherine Branda; Kristin M. Divis; Lucie Jelinkova; Victoria Newton; Emily Pearce; Jon K. Femling

The Rim-to-Rim Wearables At The Canyon for Health (R2R WATCH) study examines metrics recordable on commercial off-the-shelf (COTS) devices that are most relevant and reliable for the earliest possible indication of a health or performance decline. This is accomplished through collaboration between Sandia National Laboratories (SNL) and The University of New Mexico (UNM), where the two organizations team up to collect physiological, cognitive, and biological markers from volunteer hikers who attempt the Rim-to-Rim (R2R) hike at the Grand Canyon. Three forms of data are collected as hikers travel from rim to rim: physiological data through wearable devices, cognitive data through a cognitive task taken every 3 hours, and blood samples obtained before and after completing the hike. Data is collected from both civilian and warfighter hikers. Once the data is obtained, it is analyzed to understand the effectiveness of each COTS device and the validity of the data collected. We also aim to identify which physiological and cognitive phenomena collected by wearable devices are the most related to overall health and task performance in extreme environments, and of these, ascertain which markers provide the earliest yet reliable indication of health decline. Finally, we analyze the data for significant differences between civilians' and warfighters' markers and the relationship to performance. The main portion of the R2R WATCH study is funded by the Defense Threat Reduction Agency (DTRA, Project CB10359), while UNM funds all activities related to bloodwork (SAND2017-1872 C). This paper describes the experimental design and methodology for the first year of the R2R WATCH project.


International Conference on Augmented Cognition | 2017

Patterns of Attention: How Data Visualizations Are Read

Laura E. Matzen; Michael Joseph Haass; Kristin M. Divis; Mallory Stites

Data visualizations are used to communicate information to people in a wide variety of contexts, but few tools are available to help visualization designers evaluate the effectiveness of their designs. Visual saliency maps that predict which regions of an image are likely to draw the viewer’s attention could be a useful evaluation tool, but existing models of visual saliency often make poor predictions for abstract data visualizations. These models do not take into account the importance of features like text in visualizations, which may lead to inaccurate saliency maps. In this paper we use data from two eye tracking experiments to investigate attention to text in data visualizations. The data sets were collected under two different task conditions: a memory task and a free viewing task. Across both tasks, the text elements in the visualizations consistently drew attention, especially during early stages of viewing. These findings highlight the need to incorporate additional features into saliency models that will be applied to visualizations.
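
Quantifying "attention to text" typically reduces to counting fixations inside text areas of interest (AOIs). The sketch below shows one plausible way to do that and to split the result into early versus late viewing; the AOI rectangles, fixation data, and the 2-second early-viewing cutoff are all assumptions for illustration.

```python
# Illustrative sketch: fraction of fixations landing inside text AOIs,
# split by viewing stage. AOI boxes and fixations are invented.
import numpy as np

rng = np.random.default_rng(4)

# Text AOIs as (x0, y0, x1, y1) rectangles in screen pixels (hypothetical).
text_aois = [(100, 40, 500, 90), (120, 600, 700, 650)]

# Fixations: x, y, and onset time in seconds since stimulus onset.
fx = rng.uniform(0, 1024, 200)
fy = rng.uniform(0, 768, 200)
ft = np.sort(rng.uniform(0, 10, 200))

in_text = np.zeros(200, dtype=bool)
for x0, y0, x1, y1 in text_aois:
    in_text |= (fx >= x0) & (fx <= x1) & (fy >= y0) & (fy <= y1)

early = ft < 2.0                          # "early viewing" cutoff (assumed)
print("early fixations on text:", in_text[early].mean().round(3))
print("late fixations on text: ", in_text[~early].mean().round(3))
```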


Pervasive and Mobile Computing | 2018

Physiological state in extreme environments

Glory Emmanuel-Aviña; Kristin M. Divis; Robert G. Abbott

Commercial off-the-shelf (COTS) wearable devices are used to quantify physiology during physical activities, to monitor levels of fitness, and to prevent overexertion. We argue that there are limitations and challenges to measuring physiological data with current state-of-the-art wearable devices, both with the hardware and with the data itself. These limitations and challenges are exacerbated when wearable devices are used in extreme climate environments. We discuss these through empirical findings from our study in which hikers are outfitted with wearable technologies as they cross the Grand Canyon. We discuss the performance of various wearable technologies in the extreme environment of the canyon as well as concerns with the downloaded data. These findings highlight the needs and opportunities for the wearable devices market, specifically how wearable technologies could mature to quantify performance and fatigue through real-time data collection and analysis.


Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR IX | 2018

Sensor operators as technology consumers: What do users really think about that radar?

Laura A. McNamara; Kristin M. Divis; J. Daniel Morrow

Many companies rely on user experience metrics, such as Net Promoter scores, to monitor changes in customer attitudes toward their products. This paper suggests that similar metrics can be used to assess the user experience of the pilots and sensor operators who are tasked with using our radar, EO/IR, and other remote sensing technologies. As we have previously discussed, the problem of making our national security remote sensing systems useful, usable, and adoptable is a human-system integration problem that does not get the sustained attention it deserves, particularly given the high-throughput, information-dense task environments common to military operations. In previous papers, we have demonstrated how engineering teams can adopt well-established human-computer interaction principles to fix significant usability problems in radar operational interfaces. In this paper, we describe how we are using a combination of Situation Awareness design methods, along with techniques from the consumer sector, to identify opportunities for improving human-system interactions. We explain why we believe that all stakeholders in remote sensing – including program managers, engineers, and operational users – can benefit from systematically incorporating some of these measures into the evaluation of our national security sensor systems. We also provide examples of our own experience adapting consumer user experience metrics in operator-focused evaluation of currently deployed radar interfaces.
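
The Net Promoter Score mentioned above has a fixed, public definition: the percentage of promoters (ratings 9 or 10 on a 0-10 recommendation scale) minus the percentage of detractors (ratings 0 through 6). A minimal implementation, with made-up operator ratings:

```python
# Standard Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
def net_promoter_score(ratings: list[int]) -> float:
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical operator responses to "would you recommend this interface?"
print(net_promoter_score([10, 9, 8, 7, 6, 9, 3, 10, 8, 5]))  # -> 10.0
```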


International Conference on Augmented Cognition | 2017

Eye Tracking for Dynamic, User-Driven Workflows

Laura A. McNamara; Kristin M. Divis; J. Daniel Morrow; David Nikolaus Perkins

Researchers at Sandia National Laboratories in Albuquerque, New Mexico, are engaged in the empirical study of human-information interaction in high-consequence national security environments. This focus emerged from our longstanding interactions with military and civilian intelligence analysts working across a broad array of domains, from signals intelligence to cybersecurity to geospatial imagery analysis. In this paper, we discuss how several years of work with Synthetic Aperture Radar (SAR) imagery analysts revealed the limitations of eye tracking systems for capturing gaze events in the dynamic, user-driven problem-solving strategies characteristic of geospatial analytic workflows, and we explain the need for eye tracking systems capable of supporting inductive study of such workflows. We then discuss an ongoing project in which we are leveraging some of the unique properties of SAR image products to develop a prototype eye tracking data collection and analysis system that will support inductive studies of visual workflows in SAR image analysis environments.


International Conference on Augmented Cognition | 2015

Through a Scanner Quickly: Elicitation of P3 in Transportation Security Officers Following Rapid Image Presentation and Categorization

Michael Christopher Stefan Trumbo; Laura E. Matzen; Austin Silva; Michael Joseph Haass; Kristin M. Divis; Ann Speed

Numerous domains, ranging from medical diagnostics to intelligence analysis, involve visual search tasks in which people must find and identify specific items within large sets of imagery. These tasks rely heavily on human judgment, making fully automated systems infeasible in many cases. Researchers have investigated methods for combining human judgment with computational processing to increase the speed at which humans can triage large image sets. One such method is rapid serial visual presentation (RSVP), in which images are presented in rapid succession to a human viewer. While viewing the images and looking for targets of interest, the participant’s brain activity is recorded using electroencephalography (EEG). The EEG signals can be time-locked to the presentation of each image, producing event-related potentials (ERPs) that provide information about the brain’s response to those stimuli. The participants’ judgments about whether or not each set of images contained a target and the ERPs elicited by target and non-target images are used to identify subsets of images that merit close expert scrutiny [1]. Although the RSVP/EEG paradigm holds promise for helping professional visual searchers to triage imagery rapidly, it may be limited by the nature of the target items. Targets that do not vary a great deal in appearance are likely to elicit usable ERPs, but more variable targets may not. In the present study, we sought to extend the RSVP/EEG paradigm to the domain of aviation security screening, and in doing so to explore the limitations of the technique for different types of targets. Professional Transportation Security Officers (TSOs) viewed bag X-rays that were presented using an RSVP paradigm. The TSOs viewed bursts of images containing 50 segments of bag X-rays that were presented for 100 ms each. Following each burst of images, the TSOs indicated whether or not they thought there was a threat item in any of the images in that set. EEG was recorded during each burst of images and ERPs were calculated by time-locking the EEG signal to the presentation of images containing threats and matched images that were identical except for the presence of the threat item. Half of the threat items had a prototypical appearance and half did not. We found that the bag images containing threat items with a prototypical appearance reliably elicited a P300 ERP component, while those without a prototypical appearance did not. These findings have implications for the application of the RSVP/EEG technique to real-world visual search domains.
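
The core ERP computation described above (time-locking EEG to image onsets and averaging epochs) can be summarized in a few lines. The sketch below uses synthetic data and an assumed sampling rate; a real analysis would work from recorded EEG, typically with a dedicated package such as MNE-Python.

```python
# Minimal sketch: time-lock EEG to RSVP image onsets and average epochs
# for target vs. non-target presentations. The signal is synthetic.
import numpy as np

fs = 250                                   # sampling rate, Hz (assumed)
eeg = np.random.default_rng(5).normal(0, 1, fs * 120)  # 2 min of fake EEG

# Image onset samples from the RSVP stream (100 ms per image -> 25 samples).
onsets = np.arange(fs * 2, fs * 110, 25)
is_target = np.zeros(len(onsets), dtype=bool)
is_target[::10] = True                     # every 10th image contains a threat

def erp(signal, onset_samples, fs, tmax=0.8):
    """Average signal epochs time-locked to the given onsets."""
    n = int(tmax * fs)
    epochs = np.stack([signal[s:s + n] for s in onset_samples])
    return epochs.mean(axis=0)

target_erp = erp(eeg, onsets[is_target], fs)
nontarget_erp = erp(eeg, onsets[~is_target], fs)

# P300 amplitude: mean difference in the 300-500 ms post-stimulus window.
w = slice(int(0.3 * fs), int(0.5 * fs))
print("P3 window difference:", (target_erp[w] - nontarget_erp[w]).mean())
```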

Collaboration


Dive into Kristin M. Divis's collaborations.

Top Co-Authors

Laura E. Matzen (Sandia National Laboratories)
Michael Joseph Haass (Sandia National Laboratories)
Andrew T. Wilson (Sandia National Laboratories)
Laura A. McNamara (Sandia National Laboratories)
Robert G. Abbott (Sandia National Laboratories)
Ann Speed (Sandia National Laboratories)
Austin Silva (Sandia National Laboratories)
J. Daniel Morrow (Sandia National Laboratories)