Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Susan Marie Stevens-Adams is active.

Publication


Featured research published by Susan Marie Stevens-Adams.


International Conference on Augmented Cognition | 2013

Enhanced Training for Cyber Situational Awareness

Susan Marie Stevens-Adams; Armida Carbajal; Austin Silva; Kevin S. Nauer; Benjamin John Anderson; Theodore Reed; J. Chris Forsythe

A study was conducted in which participants received either tool-based or narrative-based training and then completed challenges associated with network security threats. Three teams were formed: (1) Tool-Based, for which each participant received tool-based training; (2) Narrative-Based, for which each participant received narrative-based training and (3) Combined, for which three participants received tool-based training and two received narrative-based training. Results showed that the Narrative-Based team recognized the spatial-temporal relationship between events and constructed a timeline that was a reasonable approximation of ground truth. In contrast, the Combined team produced a linear sequence of events that did not encompass the relationships between different adversaries. Finally, the Tool-Based team demonstrated little appreciation of either the spatial or temporal relationships between events. These findings suggest that participants receiving Narrative-Based training were able to use the software tools in a way that allowed them to gain a greater level of situation awareness.


International Conference on Augmented Cognition | 2013

Human Dimension in Cyber Operations Research and Development Priorities

J. Chris Forsythe; Austin Silva; Susan Marie Stevens-Adams; Jeffrey M. Bradshaw

Within cyber security, the human element represents one of the greatest untapped opportunities for increasing the effectiveness of network defenses. However, there has been little research to understand the human dimension in cyber operations. To better understand the needs and priorities for research and development to address these issues, a workshop was conducted August 28-29, 2012 in Washington DC. A synthesis was developed that captured the key issues and associated research questions.


International Conference on Augmented Cognition | 2015

Effects of Professional Visual Search Experience on Domain-General and Domain-Specific Visual Cognition

Laura E. Matzen; Michael Joseph Haass; Laura A. McNamara; Susan Marie Stevens-Adams; Stephanie N. McMichael

Vision is one of the dominant human senses and most human-computer interfaces rely heavily on the capabilities of the human visual system. An enormous amount of effort is devoted to finding ways to visualize information so that humans can understand and make sense of it. By studying how professionals engage in these visual search tasks, we can develop insights into their cognitive processes and the influence of experience on those processes. This can advance our understanding of visual cognition in addition to providing information that can be applied to designing improved data visualizations or training new analysts.


Archive | 2013

A Literature Review of Safety Culture

Kerstan Suzanne Cole; Susan Marie Stevens-Adams; Caren A. Wenner

Workplace safety has been historically neglected by organizations in order to enhance profitability. Over the past 30 years, safety concerns and attention to safety have increased due to a series of disastrous events occurring across many different industries (e.g., Chernobyl, Upper Big-Branch Mine, Davis-Besse, etc.). Many organizations have focused on promoting a healthy safety culture as a way to understand past incidents, and to prevent future disasters. There is an extensive academic literature devoted to safety culture, and the Department of Energy has also published a significant number of documents related to safety culture. The purpose of the current endeavor was to conduct a review of the safety culture literature in order to understand definitions, methodologies, models, and successful interventions for improving safety culture. After reviewing the literature, we observed four emerging themes. First, it was apparent that although safety culture is a valuable construct, it has some inherent weaknesses. For example, there is no common definition of safety culture and no standard way for assessing the construct. Second, it is apparent that researchers know how to measure particular components of safety culture, with specific focus on individual and organizational factors. Such existing methodologies can be leveraged for future assessments. Third, based on the published literature, the relationship between safety culture and performance is tenuous at best. There are few empirical studies that examine the relationship between safety culture and safety performance metrics. Further, most of these studies do not include a description of the implementation of interventions to improve safety culture, or do not measure the effect of these interventions on safety culture or performance. Fourth, safety culture is best viewed as a dynamic, multi-faceted overall system composed of individual, engineered and organizational models. By addressing all three components of safety culture, organizations have a better chance of understanding, evaluating, and making positive changes towards safety within their own organization.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2010

Using After-Action Review Based on Automated Performance Assessment to Enhance Training Effectiveness

Susan Marie Stevens-Adams; Justin Derrick Basilico; Robert G. Abbott; Charlie J. Gieseler; Chris Forsythe

Training simulators have become increasingly popular tools for instructing humans on performance in complex environments. However, how to provide individualized and scenario-specific assessment and feedback to students remains largely an open question. In this work, we follow up on previous evaluations of the Automated Expert Modeling and Automated Student Evaluation (AEMASE) system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain. The current study provides a rigorous empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, we found that students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback on two out of three domain-specific performance metrics.
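The core idea described in this abstract, scoring a new performance by comparison to previously observed examples of good and bad performance, can be sketched in a heavily simplified form as a nearest-neighbor rule. The feature vectors, labels, and `assess` function below are invented for illustration and are not part of the actual AEMASE system.

```python
# Illustrative sketch only: example-based performance assessment in the
# spirit of AEMASE-style scoring. A new performance trace is labeled by
# its similarity to stored examples of good and bad performance.
# All features and thresholds here are hypothetical.
from math import dist

good = [(0.9, 0.1), (0.8, 0.2)]   # example "good" performance features
bad = [(0.2, 0.9), (0.3, 0.8)]    # example "bad" performance features

def assess(trace):
    """Label a new performance trace by its nearest labeled example."""
    nearest_good = min(dist(trace, g) for g in good)
    nearest_bad = min(dist(trace, b) for b in bad)
    return "good" if nearest_good < nearest_bad else "bad"

print(assess((0.85, 0.15)))  # nearest to the "good" examples
print(assess((0.25, 0.85)))  # nearest to the "bad" examples
```

A debrief tool built on such a model could then highlight the specific trace segments that drove a "bad" label, which is the kind of targeted feedback the study evaluates.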


International Conference on Augmented Cognition | 2015

Methodology for Knowledge Elicitation in Visual Abductive Reasoning Tasks

Michael Joseph Haass; Laura E. Matzen; Susan Marie Stevens-Adams; Allen R. Roach

The potential for bias to affect the results of knowledge elicitation studies is well recognized. Researchers and knowledge engineers attempt to control for bias through careful selection of elicitation and analysis methods. Recently, the development of a wide range of physiological sensors, coupled with fast, portable and inexpensive computing platforms, has added an additional dimension of objective measurement that can reduce bias effects. In the case of an abductive reasoning task, bias can be introduced through design of the stimuli, cues from researchers, or omissions by the experts. We describe a knowledge elicitation methodology robust to various sources of bias, incorporating objective and cross-referenced measurements. The methodology was applied in a study of engineers who use multivariate time series data to diagnose the performance of devices throughout the production lifecycle. For visual reasoning tasks, eye tracking is particularly effective at controlling for biases of omission by providing a record of the subject’s attention allocation.


International Conference on Augmented Cognition | 2015

Ethnographic Methods for Experimental Design: Case Studies in Visual Search

Laura A. McNamara; Kerstan Suzanne Cole; Michael Joseph Haass; Laura E. Matzen; J. Daniel Morrow; Susan Marie Stevens-Adams; Stephanie N. McMichael

Researchers at Sandia National Laboratories are integrating qualitative and quantitative methods from anthropology, human factors and cognitive psychology in the study of military and civilian intelligence analyst workflows in the United States’ national security community. Researchers who study human work processes often use qualitative theory and methods, including grounded theory, cognitive work analysis, and ethnography, to generate rich descriptive models of human behavior in context. In contrast, experimental psychologists typically do not receive training in qualitative induction, nor are they likely to practice ethnographic methods in their work, since experimental psychology tends to emphasize generalizability and quantitative hypothesis testing over qualitative description. However, qualitative frameworks and methods from anthropology, sociology, and human factors can play an important role in enhancing the ecological validity of experimental research designs.


Reliability Engineering & System Safety | 2015

Challenges in leveraging existing human performance data for quantifying the IDHEAS HRA method

Huafei N. Liao; Katrina M. Groth; Susan Marie Stevens-Adams

This article documents an exploratory study for collecting and using human performance data to inform human error probability (HEP) estimates for a new human reliability analysis (HRA) method, the IntegrateD Human Event Analysis System (IDHEAS). The method is based on cognitive models and mechanisms underlying human behaviour and employs a framework of 14 crew failure modes (CFMs) to represent human failures typical of human performance in nuclear power plant (NPP) internal, at-power events [1]. A decision tree (DT) was constructed for each CFM to assess the probability of the CFM occurring in different contexts. Data needs for IDHEAS quantification are discussed. The data collection framework and process are then described, and the use of the collected data to inform HEP estimation is illustrated with two examples. Next, five major technical challenges are identified for leveraging human performance data for IDHEAS quantification. These challenges reflect the data needs specific to IDHEAS. More importantly, they also represent general issues with current human performance data and can provide insight for a path forward to support HRA data collection, use, and exchange for HRA method development, implementation, and validation.
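The decision-tree quantification described here (one DT per CFM, with branches over contextual factors leading to an HEP) can be sketched as a simple conditional walk. The branch factors and HEP values below are invented for illustration and are not taken from the IDHEAS method or its data.

```python
# Illustrative sketch only: a toy decision tree assigning a human error
# probability (HEP) to one hypothetical crew failure mode (CFM), in the
# spirit of DT-based HRA quantification. Branch factors and HEP values
# are hypothetical, not IDHEAS values.

def hep_for_cfm(time_pressure: bool, indication_clarity: str) -> float:
    """Walk a two-factor decision tree to a nominal HEP estimate."""
    if time_pressure:
        # Degraded-conditions branch: higher error probabilities
        return 1e-1 if indication_clarity == "poor" else 1e-2
    # Nominal-conditions branch
    return 1e-2 if indication_clarity == "poor" else 1e-3

print(hep_for_cfm(True, "poor"))   # worst-case context
print(hep_for_cfm(False, "good"))  # nominal context
```

The data-collection challenge the article describes amounts to finding enough observed human performance data, binned by context, to justify the probability assigned to each leaf of such a tree.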


International Conference on Foundations of Augmented Cognition | 2011

Individual differences and the science of human performance

Michael Christopher Stefan Trumbo; Susan Marie Stevens-Adams; Stacey Langfitt Hendrickson; Robert G. Abbott; Michael Joseph Haass; J. Chris Forsythe

This study comprises the third year of the Robust Automated Knowledge Capture (RAKC) project. In the previous two years, preliminary research was conducted by collaborators at the University of Notre Dame and the University of Memphis. The focus of this preliminary research was to identify relationships between cognitive performance aptitudes (e.g., short-term memory capacity, mental rotation) and strategy selection for laboratory tasks, as well as tendencies to maintain or abandon these strategies. The current study extends initial research by assessing electrophysiological correlates with individual tendencies in strategy selection. This study identifies regularities within individual differences and uses this information to develop a model to predict and understand the relationship between these regularities and cognitive performance.


International Conference on Human-Computer Interaction | 2011

Evaluating Information Visualizations with Working Memory Metrics

Alisa Bandlow; Laura E. Matzen; Kerstan Suzanne Cole; Courtney C. Dornburg; Charles J. Geiseler; John A. Greenfield; Laura A. McNamara; Susan Marie Stevens-Adams

Information visualization tools are being promoted to aid decision support. These tools assist in the analysis and comprehension of ambiguous and conflicting data sets. Formal evaluations are necessary to demonstrate the effectiveness of visualization tools, yet conducting these studies is difficult. Objective metrics that allow designers to compare the amount of work required for users to operate a particular interface are lacking. This in turn makes it difficult to compare workload across different interfaces, which is problematic for complicated information visualization and visual analytics packages. We believe that measures of working memory load can provide a more objective and consistent way of assessing visualizations and user interfaces across a range of applications. We present initial findings from a study using measures of working memory load to compare the usability of two graph representations.

Collaboration


Dive into Susan Marie Stevens-Adams's collaborations.

Top Co-Authors (all at Sandia National Laboratories):

Michael Joseph Haass
Kerstan Suzanne Cole
Laura E. Matzen
Laura A. McNamara
Austin Silva
Laurie Burnham
Robert G. Abbott
Chris Forsythe
J. Chris Forsythe