Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Leslie M. Blaha is active.

Publication


Featured research published by Leslie M. Blaha.


international conference on augmented cognition | 2017

Interface Metaphors for Interactive Machine Learning.

Robert J. Jasper; Leslie M. Blaha

To promote more interactive and dynamic machine learning, we revisit the notion of user-interface metaphors. User-interface metaphors provide intuitive constructs for supporting user needs through interface design elements. A user-interface metaphor provides a visual or action pattern that leverages a user’s knowledge of another domain. Metaphors suggest both the visual representations that should be used in a display and the interactions that should be afforded to the user. We argue that user-interface metaphors can also offer a method of extracting interaction-based user feedback for use in machine learning. Metaphors offer indirect, context-based information that can be used in addition to explicit user inputs, such as user-provided labels. Implicit information from user interactions with metaphors can augment explicit user input for active learning paradigms, or it can be leveraged in systems where explicit user inputs are more challenging to obtain. Each interaction with the metaphor provides an opportunity to gather data and learn. We argue this approach is especially important in streaming applications, where we desire machine learning systems that can adapt to dynamic, changing data.
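
The pattern the abstract describes, implicit signals from metaphor interactions supplementing explicit labels, can be made concrete with a weighted incremental learner. The following Python sketch is purely illustrative, not the authors' implementation; the feature vectors, the weights, and the drag-to-folder event are assumptions.

```python
# A minimal sketch (not the paper's system) of folding implicit,
# metaphor-derived feedback into an incremental learner alongside
# explicit labels. Weights and events are illustrative assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="log_loss")  # incremental model suited to streams
classes = np.array([0, 1])

def update(features, label, explicit):
    # Explicit labels (e.g., user-typed annotations) get full weight;
    # implicit signals (e.g., dragging an item onto a "trash" metaphor)
    # are noisier, so they get a smaller, hedged weight.
    w = 1.0 if explicit else 0.3
    clf.partial_fit(features.reshape(1, -1), [label],
                    classes=classes, sample_weight=[w])

# e.g., a drag-to-folder interaction implies a tentative positive label:
update(np.random.rand(8), label=1, explicit=False)
```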


Systems Factorial Technology: A Theory Driven Methodology for the Identification of Perceptual and Cognitive Mechanisms | 2017

An Examination of Task Demands on the Elicited Processing Capacity

Leslie M. Blaha

Why do we not observe supercapacity more often? Although it has always seemed a little counterintuitive, it is plausible that increasing information or cognitive load in a task might result in more efficient processing. This is particularly the case when experimental participants might leverage Gestalt perception or holistic representations. However, it seems rare that supercapacity is consistently observed in experimental settings. In this chapter, I review applications of capacity coefficient analysis, particularly in double-factorial paradigm studies, to determine the relative prevalence of various capacity results. I explore the design choices that seem to support or encourage supercapacity versus those that might act to limit capacity through stimulus or response structures. I attempt to determine whether there are guiding principles for experimental designs that do or do not reliably produce supercapacity processing.
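
For readers new to the method, the capacity coefficient compared across these studies is, in its standard OR-task form, C_OR(t) = H_AB(t) / (H_A(t) + H_B(t)), where H(t) = -ln S(t) is the integrated hazard estimated from the response-time survivor function, and C_OR(t) > 1 indicates supercapacity. A minimal Python sketch of that computation, with simulated response times standing in for experimental data:

```python
# OR capacity coefficient: C_OR(t) = H_AB(t) / (H_A(t) + H_B(t)),
# where H(t) = -ln S(t) is the integrated hazard of the RT distribution.
# C_OR(t) > 1 indicates supercapacity; < 1 indicates limited capacity.
import numpy as np

def integrated_hazard(rts, t):
    # H(t) = -ln S(t), with S(t) the empirical survivor function.
    s = np.mean(np.asarray(rts) > t)
    return -np.log(s) if s > 0 else np.inf

def capacity_or(rt_double, rt_single_a, rt_single_b, t):
    denom = integrated_hazard(rt_single_a, t) + integrated_hazard(rt_single_b, t)
    return integrated_hazard(rt_double, t) / denom if denom > 0 else np.nan

# Simulated RTs in seconds (illustrative only; real analyses use data
# from single- and double-target conditions of a double-factorial design).
rng = np.random.default_rng(1)
ab = rng.gamma(4, 0.1, 500)   # double-target condition
a = rng.gamma(5, 0.1, 500)    # single-target A
b = rng.gamma(5, 0.1, 500)    # single-target B
print(capacity_or(ab, a, b, t=0.4))
```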


Archive | 2018

Four Perspectives on Human Bias in Visual Analytics

Emily Wall; Leslie M. Blaha; Celeste Lyn Paul; Kristin A. Cook; Alex Endert

Visual analytic systems, especially mixed-initiative systems, can steer analytical models and adapt views by making inferences from users’ behavioral patterns with the system. Because such systems rely on incorporating implicit and explicit user feedback, they are particularly susceptible to the injection and propagation of human biases. To ultimately guard against the potentially negative effects of systems biased by human users, we must first qualify what we mean by the term bias. Thus, in this chapter we describe four different perspectives on human bias that are particularly relevant to visual analytics. We discuss the interplay of human and computer system biases, particularly their roles in mixed-initiative systems. Given that the term bias is used to describe several different concepts, our goal is to facilitate a common language in research and development efforts by encouraging researchers to mindfully choose the perspective(s) considered in their work.


visualization for computer security | 2017

Toward a visualization-supported workflow for cyber alert management using threat models and human-centered design

Lyndsey Franklin; Meg Pirrung; Leslie M. Blaha; Michelle Dowling; Mi Feng

Cyber network analysts follow complex processes in their investigations of potential threats to their network. Much research is dedicated to providing automated decision support in the effort to make their tasks more efficient, accurate, and timely. Support tools come in a variety of implementations, from machine learning algorithms that monitor streams of data to visual analytic environments for exploring rich and noisy data sets. Cyber analysts, however, need tools which help them merge the data they already have and help them establish appropriate baselines against which to compare anomalies. Furthermore, existing threat models that cyber analysts regularly use to structure their investigation are not often leveraged in support tools. We report on our work with cyber analysts to understand the analytic process and how one such model, the MITRE ATT&CK Matrix [42], is used to structure their analytic thinking. We present our efforts to map specific data needed by analysts into this threat model to inform our visualization designs. We leverage this expert knowledge elicitation to identify capability gaps that might be filled with visual analytic tools. We propose a prototype visual analytic-supported alert management workflow to aid cyber analysts working with threat models.
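
One way to picture the mapping step the abstract describes is a simple lookup from alert types to ATT&CK tactics that a matrix-style view could then render. The sketch below is a hypothetical illustration, not the paper's system; the alert names and the mapping itself are invented, and real mappings would be analyst-curated.

```python
# Hypothetical sketch: bucket raw alerts by MITRE ATT&CK tactic so a
# visualization can organize them by stage of an attack.
from collections import defaultdict

ALERT_TO_TACTIC = {              # invented mapping for illustration
    "port_scan": "Reconnaissance",
    "brute_force_login": "Credential Access",
    "new_admin_account": "Persistence",
    "outbound_beacon": "Command and Control",
}

def bucket_alerts(alerts):
    """Group alert records by ATT&CK tactic for display in a matrix view."""
    buckets = defaultdict(list)
    for alert in alerts:
        tactic = ALERT_TO_TACTIC.get(alert["type"], "Unmapped")
        buckets[tactic].append(alert)
    return buckets

alerts = [{"type": "port_scan", "src": "10.0.0.5"},
          {"type": "outbound_beacon", "src": "10.0.0.9"}]
print({k: len(v) for k, v in bucket_alerts(alerts).items()})
```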


international conference on augmented cognition | 2017

CHISSL: A Human-Machine Collaboration Space for Unsupervised Learning

Dustin Arendt; Caner Komurlu; Leslie M. Blaha

We developed CHISSL, a human-machine interface that utilizes interactive supervision to help the user group unlabeled instances according to her own mental model. The user primarily interacts via correction (moving a misplaced instance into its correct group) or confirmation (accepting that an instance is placed in its correct group). Concurrent with the user’s interactions, CHISSL trains a classification model guided by the user’s grouping of the data. It then predicts the group of unlabeled instances and arranges some of these alongside the instances manually organized by the user. We hypothesize that this mode of human and machine collaboration is more effective than Active Learning, wherein the machine decides for itself which instances should be labeled by the user. We found supporting evidence for this hypothesis in a pilot study where we applied CHISSL to organize a collection of handwritten digits.
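
The correction/confirmation loop can be approximated in a few lines: user-placed instances act as labeled anchors, and unlabeled instances are assigned to the group of their nearest anchor. This is a hedged sketch of the interaction pattern only; CHISSL's actual model, features, and distance computations may differ.

```python
# Sketch of correction/confirmation interactions driving a grouping model,
# approximated with a 1-nearest-neighbor rule over user-anchored exemplars.
import numpy as np

class GroupingSession:
    def __init__(self):
        self.anchors, self.groups = [], []   # user-placed exemplars

    def correct(self, x, group):
        # User drags instance x into its proper group: a strong label.
        self.anchors.append(np.asarray(x))
        self.groups.append(group)

    confirm = correct  # accepting a suggested placement yields the same label

    def predict(self, X):
        # Assign each unlabeled instance to its nearest anchored exemplar.
        A = np.stack(self.anchors)
        d = np.linalg.norm(np.asarray(X)[:, None, :] - A[None, :, :], axis=2)
        return [self.groups[i] for i in d.argmin(axis=1)]

s = GroupingSession()
s.correct([0.1, 0.2], "digits-0")
s.confirm([0.9, 0.8], "digits-1")
print(s.predict([[0.15, 0.25], [0.85, 0.9]]))
```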


international conference on human-computer interaction | 2018

Evaluation of Visualization Heuristics.

Ryan Williams; Jean Scholtz; Leslie M. Blaha; Lyndsey Franklin; Zhuanyi Huang

Multiple sets of heuristics have been developed and studied in the Human Computer Interaction (HCI) domain as a method for fast, lightweight evaluation of usability problems. However, none of these heuristics have been adopted by the information visualization or visual analytics communities. Our literature review looked at heuristic sets developed by Nielsen and Molich [7] and Forsell and Johansson [1] to understand how these heuristics were developed and their intended applications. We also reviewed heuristic studies conducted by Hearst and colleagues [2] and Vaataja and colleagues [10] to determine how individuals apply heuristics to evaluating visualization systems. While each study noted potential issues with the heuristic descriptions and the evaluators' familiarity with the heuristics, no direct connections were made. Our research seeks to understand how individuals with domain expertise in information visualization and visual analytics could use heuristics to discover usability problems and evaluate visualizations. By empirically evaluating visualization heuristics, we can identify the key ways that these heuristics can be used to inform the visual analytics design process. Further, they may help to identify usability problems that are and are not task specific. We hope to use this process to also identify missing heuristics that may apply to designs for different analytic purposes.


international conference on augmented cognition | 2018

Human Machine Interactions: Velocity Considerations.

Joseph A. Cottam; Leslie M. Blaha; Kris Cook; Mark A. Whiting

Measuring change is increasingly a computational task, but understanding change and its implications is a fundamentally human challenge. Successful human/machine teams for streaming data analysis effectively balance data velocity with people’s capacity to ingest, reason about, and act upon the data. Computational support is critical to aiding humans with finding what is needed when it is needed. This is particularly evident in supporting complex sensemaking, situation awareness, and decision making in streaming contexts. Herein, we conceptualize human/machine teams as interacting streams of data, generated from the interactions that are core to the human/machine team activity. These streams capture the relative velocities of the human and machine activities, which allows the machine to balance the capabilities of the two halves of the system. We review the known challenges in handling interacting streams that have been distilled in computational systems, and we use this perspective to understand some of the open challenges in designing effective human/machine systems that support the disparate velocities of humans and machines.
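
One concrete mechanism for balancing the two velocities is a bounded, priority-ordered buffer between the fast machine stream and the slow human consumer, shedding the least important items rather than blocking. The sketch below is illustrative only; the priority scheme and capacity are assumptions, not the chapter's design.

```python
# Sketch: a bounded buffer mediating mismatched producer/consumer velocities.
# When the machine outpaces the human, the least important items are dropped.
import heapq

class VelocityBuffer:
    def __init__(self, capacity):
        self.capacity, self.heap = capacity, []  # min-heap keyed on priority

    def machine_push(self, priority, item):
        # Machine-side stream: fast. Evict the lowest-priority item on overflow.
        heapq.heappush(self.heap, (priority, item))
        if len(self.heap) > self.capacity:
            heapq.heappop(self.heap)             # shed least important

    def human_pop(self):
        # Human-side stream: slow. Surface and remove the most important item.
        if not self.heap:
            return None
        best = max(self.heap)
        self.heap.remove(best)
        heapq.heapify(self.heap)
        return best

buf = VelocityBuffer(capacity=3)
for p, it in [(5, "alert-a"), (1, "alert-b"), (9, "alert-c"), (4, "alert-d")]:
    buf.machine_push(p, it)
print(buf.human_pop())   # -> (9, 'alert-c'); 'alert-b' was shed on overflow
```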


international conference on augmented cognition | 2018

Improving Automation Transparency: Addressing Some of Machine Learning’s Unique Challenges

Corey K. Fallon; Leslie M. Blaha

A variety of factors can affect one’s reliance on an automated aid. Some of these factors include one’s perception of the system’s trustworthiness, such as perceived reliability of the system or one’s ability to understand the system’s underlying reasoning. A mismatch between the operator’s perception and the true capabilities and characteristics of the system can lead to inappropriate reliance on the tool. This improper use of the system can manifest as either underutilization of the technology or complacency resulting from over-trusting the system. Increasing an automated tool’s transparency is one approach that enables the operator to rely on the technology more appropriately. Transparent automated systems provide additional information that allows the user to see the system’s intent and understand its underlying processes and capabilities. Several researchers have developed frameworks to support the design of more transparent automation. However, these frameworks may not fully consider the particular challenges to transparency design introduced by automation that leverages machine learning. Like all automation, these systems can benefit from transparency. However, artificial intelligence poses new challenges that must be considered when designing for transparency. Unique considerations must be made regarding the type, amount, and level of transparency information conveyed to the user.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2018

Interfacing the Modifiable Multitasking Environment with ACT-R for Computational Cognitive Modeling of Complex Tasks

Leslie M. Blaha; Leif C. Carlsen; Tim Halverson; Brad Reynolds

We demonstrate a set of software tools designed to facilitate computational cognitive modeling of multitasking performance. The Modifiable Multitasking Environment (ModME) offers a flexible, browser-based platform for creating multitasking experiments. Simplified Interfacing for Modeling Cognition–JavaScript (SIMCog-JS) provides communication between the browser-based experiments in ModME and the Java implementation of the ACT-R cognitive architecture. The baseline configuration of these software packages enables an ACT-R model to perform pilot-like multitasking in the modified Multi-Attribute Task Battery, which is implemented as the baseline task available in ModME. We show how this combination facilitates the development of models for assessing multitasking workload. In this demonstration, we will explain the software packages and allow attendees to interact with system elements, particularly the ModME graphical user interfaces. All software is available open source for attendees to try themselves.
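
The abstract does not give the wire format, but bridges of this kind typically exchange structured percept and motor-command messages between the task environment and the cognitive model. The following is a purely hypothetical Python sketch of such an exchange over a WebSocket; the message fields, URL, and port are invented and may not match SIMCog-JS.

```python
# Hypothetical sketch of a task/model bridge: JSON percepts go from the
# browser task to the model, JSON motor commands come back. Requires a
# model-side server listening at the (invented) address below.
import asyncio
import json
import websockets  # pip install websockets

async def exchange(ws):
    # Send the model a hypothetical description of what is on screen.
    await ws.send(json.dumps({
        "type": "percept",
        "objects": [{"id": "gauge-1", "kind": "dial", "value": 0.7}],
    }))
    # Receive a hypothetical motor command, e.g., a click, from the model.
    cmd = json.loads(await ws.recv())
    print("model requested:", cmd.get("action"), cmd.get("target"))

async def main():
    async with websockets.connect("ws://localhost:8765") as ws:  # invented
        await exchange(ws)

asyncio.run(main())
```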


Archive | 2018

Bias by Default

Joseph A. Cottam; Leslie M. Blaha

Systems have biases. Their interfaces naturally guide a user toward specific patterns of action. For example, modern word processors and spreadsheets are both capable of handling word wrapping, checking spelling, and calculating formulas. You could write a paper in a spreadsheet or do simple business modeling in a word processor. However, their interfaces naturally communicate the function for which they are designed. Visual analytic interfaces also have biases. We outline why simple Markov models are a plausible tool for investigating that bias, even prior to user interactions, and how they might be applied to understand a priori system biases. We also discuss some anticipated difficulties in such modeling and touch briefly on what some Markov model extensions might provide.
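
To make the Markov-model idea concrete: treat interface states as a chain whose transition probabilities reflect only the affordances the design exposes, then inspect the stationary distribution for built-in skew. The states and probabilities below are hypothetical, and reading the stationary distribution as a priori bias is one plausible operationalization, not necessarily the chapter's.

```python
# Sketch: a Markov chain over hypothetical interface states, with transition
# probabilities implied by the layout alone (before any user data). The
# stationary distribution reveals where the design pulls users by default.
import numpy as np

states = ["overview", "detail", "filter", "export"]
# Row-stochastic transitions; e.g., prominent buttons get higher probability.
# All values are illustrative.
P = np.array([[0.2, 0.5, 0.25, 0.05],
              [0.4, 0.3, 0.25, 0.05],
              [0.3, 0.5, 0.15, 0.05],
              [0.6, 0.2, 0.15, 0.05]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
for s, p in zip(states, pi):
    print(f"{s:9s} {p:.3f}")  # skew toward 'detail' is design-induced bias
```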

Collaboration


Dive into Leslie M. Blaha's collaboration.

Top Co-Authors

Joseph A. Cottam, Pacific Northwest National Laboratory
Lyndsey Franklin, Pacific Northwest National Laboratory
Alex Endert, Georgia Institute of Technology
Caner Komurlu, Illinois Institute of Technology
Corey K. Fallon, Pacific Northwest National Laboratory
Dimitri Zarzhitsky, Pacific Northwest National Laboratory
Dustin Arendt, Pacific Northwest National Laboratory
Elliott Skomski, Western Washington University