Alex Endert
Pacific Northwest National Laboratory
Publications
Featured research published by Alex Endert.
Human Factors in Computing Systems | 2010
Christopher Andrews; Alex Endert; Chris North
Space supports human cognitive abilities in a myriad of ways. The note attached to the side of the monitor, the papers spread out on the desk, diagrams scrawled on a whiteboard, and even the keys left out on the counter are all examples of using space to recall, reveal relationships, and think. Technological advances have made it possible to construct large display environments in which space has real meaning. This paper examines how increased space affects the way displays are regarded and used within the context of the cognitively demanding task of sensemaking. A pair of studies was conducted, demonstrating how the spatial environment supports sensemaking by becoming part of the distributed cognitive process, providing both external memory and a semantic layer.
Human Factors in Computing Systems | 2012
Alex Endert; Patrick Fiaux; Chris North
Visual analytics emphasizes sensemaking of large, complex datasets through interactively exploring visualizations generated by statistical models. For example, dimensionality reduction methods use various similarity metrics to visualize textual document collections in a spatial metaphor, where similarities between documents are approximately represented through their relative spatial distances to each other in a 2D layout. This metaphor is designed to mimic analysts' mental models of the document collection and support their analytic processes, such as clustering similar documents together. However, in current methods, users must interact with such visualizations using controls external to the visual metaphor, such as sliders, menus, or text fields, to directly control underlying model parameters that they do not understand and that do not relate to their analytic process occurring within the visual metaphor. In this paper, we present the opportunity for a new design space for visual analytic interaction, called semantic interaction, which seeks to enable analysts to spatially interact with such models directly within the visual metaphor using interactions that derive from their analytic process, such as searching, highlighting, annotating, and repositioning documents. Further, we demonstrate how semantic interactions can be implemented using machine learning techniques in a visual analytic tool, called ForceSPIRE, for interactive analysis of textual data within a spatial visualization. Analysts can express their expert domain knowledge about the documents by simply moving them, which guides the underlying model to improve the overall layout, taking the user's feedback into account.
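The abstract describes semantic interaction conceptually rather than with code. As a rough illustration of the core idea only, the sketch below interprets a document drag as evidence about shared keywords and up-weights them; the function names, bag-of-words model, and update rule are hypothetical, not ForceSPIRE's actual implementation.

```python
# Minimal sketch of semantic interaction: when the analyst drags one
# document next to another, up-weight the terms the two documents share
# so the underlying similarity model pulls them together.
# All names and constants here are illustrative, not from ForceSPIRE.

from collections import Counter

def shared_terms(doc_a: Counter, doc_b: Counter) -> set:
    """Terms appearing in both documents (bag-of-words counts)."""
    return set(doc_a) & set(doc_b)

def apply_drag(weights: dict, dragged: Counter, target: Counter,
               boost: float = 0.1) -> dict:
    """Interpret 'dragged was moved next to target' as evidence that
    their shared terms matter to the analyst; increase those weights."""
    for term in shared_terms(dragged, target):
        weights[term] = weights.get(term, 1.0) + boost
    return weights

def similarity(doc_a: Counter, doc_b: Counter, weights: dict) -> float:
    """Weighted term-overlap similarity consumed by the layout model."""
    return sum(weights.get(t, 1.0) * min(doc_a[t], doc_b[t])
               for t in shared_terms(doc_a, doc_b))

# Example: two documents about the same event drift together after a drag.
d1 = Counter("bank robbery suspect fled downtown".split())
d2 = Counter("downtown bank witness statement".split())
w = {}
before = similarity(d1, d2, w)
w = apply_drag(w, d1, d2)
after = similarity(d1, d2, w)
print(before, "->", after)  # shared terms ('bank', 'downtown') now weigh more
```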
Information Visualization | 2011
Christopher Andrews; Alex Endert; Beth Yost; Chris North
Larger, higher-resolution displays are becoming accessible to a greater number of users as display technologies decrease in cost and software for the displays improves. The additional pixels are especially useful for information visualization, where scalability has typically been limited by the number of pixels available on a display. But how will visualizations for larger displays need to fundamentally differ from visualizations on desktop displays? Are the basic visualization design principles different? With this potentially new design paradigm come questions such as whether the relative effectiveness of various graphical encodings is different on large displays, which visualizations and datasets benefit the most, and how interaction with visualizations on large, high-resolution displays will need to change. As we explore these possibilities, we shift away from the technical limitations of scalability imposed by traditional displays (e.g., the number of pixels) to studying the human abilities that emerge when these limitations are removed. There is much potential for information visualizations to benefit from large, high-resolution displays, but this potential will only be realized through understanding the interaction between visualization design, perception, interaction techniques, and the display technology. In this paper we present critical design issues and outline some of the challenges and future opportunities for designing visualizations for large, high-resolution displays. We hope that these issues, challenges, and opportunities will provide guidance for future research in this area.
Visual Analytics Science and Technology | 2011
Alex Endert; Chao Han; Dipayan Maiti; Leanna House; Scotland Leman; Chris North
In visual analytics, sensemaking is facilitated through interactive visual exploration of data. Throughout this dynamic process, users combine their domain knowledge with the dataset to create insight. Therefore, visual analytic tools aid sensemaking by providing interaction techniques that allow users to change the visual representation by adjusting parameters of the underlying statistical model. However, we postulate that the process of sensemaking is focused not on a series of parameter adjustments, but instead on a series of perceived connections and patterns within the data. Thus, how can models for visual analytic tools be designed so that users can express their reasoning on observations (the data), instead of directly on the model or its tunable parameters? "Observation level" (and thus "observation") in this paper refers to the data points within a visualization. In this paper, we explore two possible observation-level interactions, namely exploratory and expressive, within the context of three statistical methods: Probabilistic Principal Component Analysis (PPCA), Multidimensional Scaling (MDS), and Generative Topographic Mapping (GTM). We discuss the importance of these two types of observation-level interaction in terms of how they occur within the sensemaking process. Further, we present use cases for GTM, MDS, and PPCA, illustrating how observation-level interaction can be incorporated into visual analytic tools.
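As a concrete, simplified illustration of the expressive variant for MDS only (not the paper's actual model-inversion machinery for PPCA, MDS, or GTM), one can infer per-dimension weights from a "these two points belong together" gesture and recompute the layout. The sketch below assumes scikit-learn's MDS on a precomputed distance matrix; the weight-update heuristic is entirely hypothetical.

```python
# Sketch of expressive observation-level interaction for MDS: the analyst
# drags two points together, the system infers new dimension weights so the
# weighted distance metric agrees with that feedback, then relays out.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

def weighted_distances(X: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Pairwise Euclidean distances with per-dimension weights w."""
    return squareform(pdist(X * np.sqrt(w)))

def infer_weights(X: np.ndarray, i: int, j: int, w: np.ndarray,
                  rate: float = 0.5) -> np.ndarray:
    """Analyst said points i and j belong together: up-weight dimensions
    on which they agree, down-weight those on which they differ."""
    diff = np.abs(X[i] - X[j])
    agreement = 1.0 - diff / (diff.max() + 1e-9)  # 1 = identical on that dim
    w = w * (1.0 - rate) + agreement * rate       # blend toward the feedback
    return w / w.sum() * len(w)                   # renormalize

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                      # 30 observations, 5 dims
w = np.ones(5)

# Initial layout, then a layout after points 0 and 1 are dragged together.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
layout0 = mds.fit_transform(weighted_distances(X, w))
w = infer_weights(X, 0, 1, w)
layout1 = mds.fit_transform(weighted_distances(X, w))
print("distance(0,1) before:", np.linalg.norm(layout0[0] - layout0[1]))
print("distance(0,1) after: ", np.linalg.norm(layout1[0] - layout1[1]))
```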
Visualization for Computer Security | 2009
Glenn A. Fink; Chris North; Alex Endert; Stuart J. Rose
The goal of cyber security visualization is to help analysts increase the safety and soundness of our digital infrastructures by providing effective tools and workspaces. Visualization researchers must make visual tools more usable and compelling than the text-based tools that currently dominate cyber analysts' tool chests. A cyber analytics work environment should enable multiple, simultaneous investigations and information foraging, as well as provide a solution space for organizing data. We describe our study of cyber-security professionals and visualizations in a large, high-resolution display work environment and the analytic tasks this environment can support. We articulate a set of design principles for usable cyber analytic workspaces that our studies have brought to light. Finally, we present prototypes designed to meet our guidelines and a usability evaluation of the environment.
IEEE Transactions on Visualization and Computer Graphics | 2012
Alex Endert; Patrick Fiaux; Chris North
Visual analytic tools aim to support the cognitively demanding task of sensemaking. Their success often depends on the ability to leverage capabilities of mathematical models, visualization, and human intuition through flexible, usable, and expressive interactions. Spatially clustering data is one effective metaphor for users to explore similarity and relationships between information, adjusting the weighting of dimensions or characteristics of the dataset to observe the change in the spatial layout. Semantic interaction is an approach to user interaction in such spatializations that couples these parametric modifications of the clustering model with users' analytic operations on the data (e.g., direct document movement in the spatialization, highlighting text, search, etc.). In this paper, we present results of a user study exploring the ability of semantic interaction in a visual analytic prototype, ForceSPIRE, to support sensemaking. We found that semantic interaction captures the analytical reasoning of the user through keyword weighting, and aids the user in co-creating a spatialization based on the user's reasoning and intuition.
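Complementing the keyword-weighting sketch shown after the 2012 semantic interaction entry above, a bare-bones force-directed step shows how such weighted similarities could drive the spatialization itself. The structure and constants below are illustrative guesses, not ForceSPIRE's layout algorithm.

```python
# One illustrative force-directed iteration: similar documents attract,
# all pairs mildly repel. A real system would add damping, cooling, and
# pinning of user-moved documents; this sketch omits those.

import numpy as np

def layout_step(pos: np.ndarray, sim: np.ndarray,
                attraction: float = 0.01, repulsion: float = 0.5) -> np.ndarray:
    """Nudge each document: spring pull scaled by similarity, plus repulsion."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            delta = pos[j] - pos[i]
            dist = np.linalg.norm(delta) + 1e-9
            direction = delta / dist
            forces[i] += attraction * sim[i, j] * dist * direction  # spring pull
            forces[i] -= repulsion / dist * direction               # repulsion
    return pos + forces

rng = np.random.default_rng(1)
sim = rng.random((10, 10))
sim = (sim + sim.T) / 2            # symmetric document similarities
pos = rng.normal(size=(10, 2))     # random initial spatialization
for _ in range(200):               # iterate toward equilibrium
    pos = layout_step(pos, sim)
```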
Intelligent Information Systems | 2014
Alex Endert; Shahriar H. Hossain; Naren Ramakrishnan; Chris North; Patrick Fiaux; Christopher D. Andrews
Visual analytics is the science of marrying interactive visualizations and analytic algorithms to support exploratory knowledge discovery in large datasets. We argue for a shift from a ‘human in the loop’ philosophy for visual analytics to a ‘human is the loop’ viewpoint, where the focus is on recognizing analysts’ work processes, and seamlessly fitting analytics into that existing interactive process. We survey a range of projects that provide visual analytic support contextually in the sensemaking loop, and outline a research agenda along with future challenges.
International Conference on Human-Computer Interaction | 2011
Katherine Vogt; Lauren Bradel; Christopher Andrews; Chris North; Alex Endert; Duke Hutchings
This study adapts existing tools (Jigsaw and a text editor) to support multiple input devices, which were then used in a co-located collaborative intelligence analysis study conducted on a large, high-resolution display. To explore the sensemaking process and user roles in pairs of analysts, the two-hour study used a fictional data set of 50 short textual documents containing a terrorist plot, with subject pairs who had experience working together. The large display facilitated the paired sensemaking process, allowing teams to spatially arrange information and conduct individual work as needed. We discuss how the space and the tools affected the approach to the analysis, how the teams collaborated, and the user roles that developed. Using these findings, we suggest design guidelines for future co-located collaborative tools.
Human Factors in Computing Systems | 2011
Chris North; Remco Chang; Alex Endert; Wenwen Dou; Richard May; Bill Pike; Glenn A. Fink
Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces. One key aspect that separates visual analytics from other related fields (InfoVis, SciVis, HCI) is the focus on analytical reasoning. While the final products generated from an analytical process are of great value, research has shown that the analysis processes themselves are just as important, if not more so. These processes not only contain information on the individual insights discovered, but also on how users arrive at those insights. This area of research, which focuses on understanding a user's reasoning process through the study of their interactions with a visualization, is called Analytic Provenance, and it has demonstrated great potential to become a foundation of the science of visual analytics. The goal of this workshop is to provide a forum for researchers and practitioners from academia, national labs, and industry to share methods for capturing, storing, and reusing user interactions and insights. We aim to develop a research agenda for how to better study analytic provenance and utilize the results to assist users in solving real-world problems.
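The workshop abstract mentions capturing, storing, and reusing interactions without prescribing any format. Purely as a sketch of what one possible provenance log might look like (the schema below is entirely hypothetical, not a workshop artifact):

```python
# Minimal sketch of capturing analytic provenance: log each interaction as
# a structured event so the reasoning trail can be stored and replayed.

import json
import time
from dataclasses import asdict, dataclass, field

@dataclass
class InteractionEvent:
    action: str                                  # e.g. "search", "highlight", "move"
    target: str                                  # id of the document/glyph acted on
    params: dict = field(default_factory=dict)   # action-specific details
    timestamp: float = field(default_factory=time.time)

class ProvenanceLog:
    def __init__(self):
        self.events: list[InteractionEvent] = []

    def record(self, event: InteractionEvent) -> None:
        self.events.append(event)

    def save(self, path: str) -> None:
        """Persist the session trail as JSON for later analysis or replay."""
        with open(path, "w") as f:
            json.dump([asdict(e) for e in self.events], f, indent=2)

log = ProvenanceLog()
log.record(InteractionEvent("search", "corpus", {"query": "wire transfer"}))
log.record(InteractionEvent("move", "doc-17", {"to": [412, 105]}))
log.save("session_provenance.json")
```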
IEEE Transactions on Visualization and Computer Graphics | 2014
Eli T. Brown; Alvitta Ottley; Helen Zhao; Quan Lin; Richard Souvenir; Alex Endert; Remco Chang
Visual analytics is inherently a collaboration between human and computer. However, in current visual analytics systems, the computer has limited means of knowing about its users and their analysis processes. While existing research has shown that a user's interactions with a system reflect a large amount of the user's reasoning process, there has been limited advancement in developing automated, real-time techniques that mine interactions to learn about the user. In this paper, we demonstrate that we can accurately predict a user's task performance and infer some user personality traits by using machine learning techniques to analyze interaction data. Specifically, we conduct an experiment in which participants perform a visual search task, and apply well-known machine learning algorithms to three encodings of the users' interaction data. We achieve, depending on algorithm and encoding, between 62% and 83% accuracy at predicting whether each user will be fast or slow at completing the task. Beyond predicting performance, we demonstrate that, using the same techniques, we can infer aspects of the users' personality factors, including locus of control, extraversion, and neuroticism. Further analyses show that strong results can be attained with limited observation time: in one case, 95% of the final accuracy is gained after a quarter of the average task completion time. Overall, our findings show that interactions can provide information to the computer about its human collaborator, and they establish a foundation for realizing mixed-initiative visual analytics systems.
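The paper's exact features, encodings, and models are not reproduced here, but the shape of such a pipeline (encode an interaction stream as a feature vector, train a standard classifier, cross-validate) can be sketched. The data below is synthetic and the frequency-count encoding is one plausible choice, not necessarily the paper's.

```python
# Sketch of the pipeline shape: encode each participant's interaction
# stream as a feature vector (here, simple action counts), then train an
# off-the-shelf classifier to predict fast vs. slow task completion.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

ACTIONS = ["click", "drag", "zoom", "hover"]

def encode_counts(events: list) -> np.ndarray:
    """Frequency-count encoding of one user's interaction log."""
    return np.array([events.count(a) for a in ACTIONS], dtype=float)

# Synthetic stand-in: suppose fast users click more and hover less.
rng = np.random.default_rng(42)
logs, labels = [], []
for _ in range(40):
    fast = rng.random() < 0.5
    weights = [0.4, 0.2, 0.1, 0.3] if fast else [0.2, 0.2, 0.1, 0.5]
    events = list(rng.choice(ACTIONS, size=200, p=weights))
    logs.append(encode_counts(events))
    labels.append(int(fast))

X, y = np.vstack(logs), np.array(labels)
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```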