
Publications


Featured research published by Michael Joseph Haass.


Human Factors | 2013

Individual Differences in Multitasking Ability and Adaptability

Brent Morgan; Sidney K. D'Mello; Robert G. Abbott; Gabriel A. Radvansky; Michael Joseph Haass; Andrea K. Tamplin

Objective: The aim of this study was to identify the cognitive factors that predict multitasking ability and adaptability during multitasking with a flight simulator. Background: Multitasking has become increasingly prevalent as most professions require individuals to perform multiple tasks simultaneously. Considerable research has been undertaken to identify the characteristics of people (i.e., individual differences) that predict multitasking ability. Although working memory is a reliable predictor of general multitasking ability (i.e., performance in normal conditions), there is the question of whether different cognitive faculties are needed to rapidly respond to changing task demands (adaptability). Method: Participants first completed a battery of cognitive individual differences tests followed by multitasking sessions with a flight simulator. After a baseline condition, difficulty of the flight simulator was incrementally increased via four experimental manipulations, and performance metrics were collected to assess multitasking ability and adaptability. Results: Scholastic aptitude and working memory predicted general multitasking ability (i.e., performance at baseline difficulty), but spatial manipulation (in conjunction with working memory) was a major predictor of adaptability (performance in difficult conditions after accounting for baseline performance). Conclusion: Multitasking ability and adaptability may be overlapping but separate constructs that draw on overlapping (but not identical) sets of cognitive abilities. Application: The results of this study are applicable to practitioners and researchers in human factors to assess multitasking performance in real-world contexts and with realistic task constraints. We also present a framework for conceptualizing multitasking adaptability on the basis of five adaptability profiles derived from performance on tasks with consistent versus increased difficulty.


International Conference on Augmented Cognition | 2015

Effects of Professional Visual Search Experience on Domain-General and Domain-Specific Visual Cognition

Laura E. Matzen; Michael Joseph Haass; Laura A. McNamara; Susan Marie Stevens-Adams; Stephanie N. McMichael

Vision is one of the dominant human senses and most human-computer interfaces rely heavily on the capabilities of the human visual system. An enormous amount of effort is devoted to finding ways to visualize information so that humans can understand and make sense of it. By studying how professionals engage in these visual search tasks, we can develop insights into their cognitive processes and the influence of experience on those processes. This can advance our understanding of visual cognition in addition to providing information that can be applied to designing improved data visualizations or training new analysts.


Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications | 2016

A new method for categorizing scanpaths from eye tracking data

Michael Joseph Haass; Laura E. Matzen; Karin Butler; Mikaela Lea Armenta

From the seminal work of Yarbus [1967] on the relationship of eye movements to vision, scanpath analysis has been recognized as a window into the mind. Computationally, characterizing the scanpath, the sequential and spatial dependencies between eye positions, has been demanding. We sought a method that could extract scanpath trajectory information from raw eye movement data without assumptions defining fixations and regions of interest. We adapted a set of libraries that perform multidimensional clustering on geometric features derived from large volumes of spatiotemporal data to eye movement data in an approach we call GazeAppraise. To validate the capabilities of GazeAppraise for scanpath analysis, we collected eye tracking data from 41 participants while they completed four smooth pursuit tracking tasks. Unsupervised cluster analysis on the features revealed that 162 of 164 recorded scanpaths were categorized into one of four clusters and the remaining two scanpaths were not categorized (recall/sensitivity=98.8%). All of the categorized scanpaths were grouped only with other scanpaths elicited by the same task (precision=100%). GazeAppraise offers a unique approach to the categorization of scanpaths that may be particularly useful in dynamic environments and in visual search tasks requiring systematic search strategies.
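As a rough illustration of the idea described above, the following sketch extracts geometric features directly from raw gaze samples, with no fixation detection and no regions of interest, so that trajectory shapes can be grouped by clustering. The specific features and synthetic data are illustrative assumptions, not the published GazeAppraise implementation:

```python
# Hypothetical sketch: derive geometric features from raw gaze samples
# (no fixations, no regions of interest). Feature vectors like these could
# then be fed to an unsupervised clustering step to categorize scanpaths.
import math

def scanpath_features(points):
    """points: list of (x, y) gaze samples in screen coordinates."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # Total trajectory length: summed distance between successive samples.
    length = sum(math.dist(points[i], points[i + 1])
                 for i in range(len(points) - 1))
    # Bounding-box aspect ratio separates, e.g., horizontal from vertical
    # smooth pursuit trajectories.
    width = (max(xs) - min(xs)) or 1e-9
    height = (max(ys) - min(ys)) or 1e-9
    return (length, width / height)

# Two synthetic pursuit trajectories: a horizontal sweep and a vertical sweep.
horizontal = [(float(i), 100.0 + 0.5 * (i % 2)) for i in range(50)]
vertical = [(100.0 + 0.5 * (i % 2), float(i)) for i in range(50)]

f_h = scanpath_features(horizontal)
f_v = scanpath_features(vertical)
# The aspect-ratio feature alone already distinguishes the two shapes.
print(f_h[1] > 1.0, f_v[1] < 1.0)  # → True True
```

In this toy setting a single feature separates the two tasks; the paper's approach applies multidimensional clustering over a richer set of geometric features, which is what allows unlabeled scanpaths to fall into task-consistent clusters.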


International Conference on Augmented Cognition | 2015

Exploratory Analysis of Visual Search Data

David J. Stracuzzi; Ann Speed; Austin Silva; Michael Joseph Haass; Derek Trumbo

Visual search data describe people’s performance on the common perceptual problem of identifying target objects in a complex scene. Technological advances in areas such as eye tracking now provide researchers with a wealth of data not previously available. The goal of this work is to support researchers in analyzing this complex and multimodal data and in developing new insights into visual search techniques. We discuss several methods drawn from the statistics and machine learning literature for integrating visual search data derived from multiple sources and performing exploratory data analysis. We ground our discussion in a specific task performed by officers at the Transportation Security Administration and consider the applicability, likely issues, and possible adaptations of several candidate analysis methods.


International Conference on Virtual, Augmented and Mixed Reality | 2016

Modeling Human Comprehension of Data Visualizations

Michael Joseph Haass; Andrew T. Wilson; Laura E. Matzen; Kristin M. Divis

A critical challenge in data science is conveying the meaning of data to human decision makers. While working with visualizations, decision makers are engaged in a visual search for information to support their reasoning process. As sensors proliferate and high performance computing becomes increasingly accessible, the volume of data decision makers must contend with is growing continuously and driving the need for more efficient and effective data visualizations. Consequently, researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles to assess the effectiveness of data visualizations. In this paper, we compare the performance of three different saliency models across a common set of data visualizations. This comparison establishes a performance baseline for assessment of new data visualization saliency models.


International Conference on Augmented Cognition | 2015

Methodology for Knowledge Elicitation in Visual Abductive Reasoning Tasks

Michael Joseph Haass; Laura E. Matzen; Susan Marie Stevens-Adams; Allen R. Roach

The potential for bias to affect the results of knowledge elicitation studies is well recognized. Researchers and knowledge engineers attempt to control for bias through careful selection of elicitation and analysis methods. Recently, the development of a wide range of physiological sensors, coupled with fast, portable and inexpensive computing platforms, has added an additional dimension of objective measurement that can reduce bias effects. In the case of an abductive reasoning task, bias can be introduced through design of the stimuli, cues from researchers, or omissions by the experts. We describe a knowledge elicitation methodology robust to various sources of bias, incorporating objective and cross-referenced measurements. The methodology was applied in a study of engineers who use multivariate time series data to diagnose the performance of devices throughout the production lifecycle. For visual reasoning tasks, eye tracking is particularly effective at controlling for biases of omission by providing a record of the subject’s attention allocation.


International Conference on Augmented Cognition | 2015

Ethnographic Methods for Experimental Design: Case Studies in Visual Search

Laura A. McNamara; Kerstan Suzanne Cole; Michael Joseph Haass; Laura E. Matzen; J. Daniel Morrow; Susan Marie Stevens-Adams; Stephanie N. McMichael

Researchers at Sandia National Laboratories are integrating qualitative and quantitative methods from anthropology, human factors and cognitive psychology in the study of military and civilian intelligence analyst workflows in the United States’ national security community. Researchers who study human work processes often use qualitative theory and methods, including grounded theory, cognitive work analysis, and ethnography, to generate rich descriptive models of human behavior in context. In contrast, experimental psychologists typically do not receive training in qualitative induction, nor are they likely to practice ethnographic methods in their work, since experimental psychology tends to emphasize generalizability and quantitative hypothesis testing over qualitative description. However, qualitative frameworks and methods from anthropology, sociology, and human factors can play an important role in enhancing the ecological validity of experimental research designs.


IEEE Transactions on Visualization and Computer Graphics | 2018

Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations

Laura E. Matzen; Michael Joseph Haass; Kristin M. Divis; Zhiyuan Wang; Andrew T. Wilson

Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g., color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. Finally, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
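To make the evaluation step concrete, the following is a minimal sketch of one standard way to score a saliency model against eye tracking data: the correlation coefficient (CC) between a predicted saliency map and an empirical fixation-density map. The toy maps and the choice of metric are illustrative assumptions; the paper's own evaluation pipeline and metrics may differ:

```python
# Score a predicted saliency map against a fixation-density map using the
# Pearson correlation coefficient (the "CC" metric common in saliency
# benchmarking). Maps are given as equal-sized 2D lists of rows.
import math

def correlation(map_a, map_b):
    """Pearson correlation between two equal-sized 2D maps."""
    a = [v for row in map_a for v in row]
    b = [v for row in map_b for v in row]
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (sd_a * sd_b)

# Toy 3x3 maps: viewers fixated the center; the model predicts the same.
fixation_density = [[0.0, 0.1, 0.0],
                    [0.1, 1.0, 0.1],
                    [0.0, 0.1, 0.0]]
good_model = [[0.1, 0.2, 0.1],
              [0.2, 0.9, 0.2],
              [0.1, 0.2, 0.1]]

score = correlation(fixation_density, good_model)
print(round(score, 3))  # close to 1.0 for a well-matched prediction
```

A model whose saliency map tracks where viewers actually looked scores near 1, while a poorly matched prediction scores near 0 or below, which is how competing saliency models can be ranked on a common set of visualizations.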


International Conference on Foundations of Augmented Cognition | 2011

Individual differences and the science of human performance

Michael Christopher Stefan Trumbo; Susan Marie Stevens-Adams; Stacey Langfitt Hendrickson; Robert G. Abbott; Michael Joseph Haass; J. Chris Forsythe

This study comprises the third year of the Robust Automated Knowledge Capture (RAKC) project. In the previous two years, preliminary research was conducted by collaborators at the University of Notre Dame and the University of Memphis. The focus of this preliminary research was to identify relationships between cognitive performance aptitudes (e.g., short-term memory capacity, mental rotation) and strategy selection for laboratory tasks, as well as tendencies to maintain or abandon these strategies. The current study extends initial research by assessing electrophysiological correlates with individual tendencies in strategy selection. This study identifies regularities within individual differences and uses this information to develop a model to predict and understand the relationship between these regularities and cognitive performance.


International Conference on Augmented Cognition | 2017

Patterns of Attention: How Data Visualizations Are Read

Laura E. Matzen; Michael Joseph Haass; Kristin M. Divis; Mallory Stites

Data visualizations are used to communicate information to people in a wide variety of contexts, but few tools are available to help visualization designers evaluate the effectiveness of their designs. Visual saliency maps that predict which regions of an image are likely to draw the viewer’s attention could be a useful evaluation tool, but existing models of visual saliency often make poor predictions for abstract data visualizations. These models do not take into account the importance of features like text in visualizations, which may lead to inaccurate saliency maps. In this paper we use data from two eye tracking experiments to investigate attention to text in data visualizations. The data sets were collected under two different task conditions: a memory task and a free viewing task. Across both tasks, the text elements in the visualizations consistently drew attention, especially during early stages of viewing. These findings highlight the need to incorporate additional features into saliency models that will be applied to visualizations.

Collaboration


Dive into Michael Joseph Haass's collaborations.

Top Co-Authors

Laura E. Matzen, Sandia National Laboratories
Laura A. McNamara, Sandia National Laboratories
Austin Silva, Sandia National Laboratories
Kristin M. Divis, Sandia National Laboratories
Robert G. Abbott, Sandia National Laboratories
Robert K. Rowe, Sandia National Laboratories
James C. Forsythe, Sandia National Laboratories
Kerstan Suzanne Cole, Sandia National Laboratories