Erik W. Anderson
University of Utah
Publications
Featured research published by Erik W. Anderson.
IEEE VGTC Conference on Visualization | 2011
Erik W. Anderson; Kristin Potter; Laura E. Matzen; Jason F. Shepherd; Gilbert A. Preston; Cláudio T. Silva
Effectively evaluating visualization techniques is a difficult task, often assessed through feedback from user studies and expert evaluations. This work presents an alternative approach to visualization evaluation in which brain activity is passively recorded using electroencephalography (EEG). These measurements are used to compare different visualization techniques in terms of the burden they place on a viewer's cognitive resources. In this paper, EEG signals and response times are recorded while users interpret different representations of data distributions. This information is processed to provide insight into the cognitive load imposed on the viewer. This paper describes the design of the user study performed, the extraction of cognitive load measures from EEG data, and how those measures are used to quantitatively evaluate the effectiveness of visualizations.
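A minimal sketch of how a cognitive-load measure might be extracted from an EEG channel, assuming a common band-power workload proxy (the theta/alpha power ratio); this is illustrative and not the paper's exact measure:

```python
import numpy as np
from scipy.signal import welch

def cognitive_load_index(eeg, fs):
    """Simple cognitive-load index from one EEG channel: the ratio of
    theta (4-7 Hz) to alpha (8-12 Hz) band power, a common workload
    proxy (illustrative assumption, not the paper's exact measure)."""
    f, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 512))
    df = f[1] - f[0]

    def band_power(lo, hi):
        mask = (f >= lo) & (f <= hi)
        return psd[mask].sum() * df

    return band_power(4.0, 7.0) / band_power(8.0, 12.0)

# Synthetic 10 s recording: strong alpha plus weaker theta and noise.
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = (0.5 * np.sin(2 * np.pi * 6 * t)      # theta component
       + 1.0 * np.sin(2 * np.pi * 10 * t)   # alpha component
       + 0.1 * rng.standard_normal(t.size))
load = cognitive_load_index(eeg, fs)        # < 1 here: alpha dominates
```

In a real study the index would be computed per trial and per electrode site and then compared across visualization conditions alongside response times.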
IEEE Transactions on Visualization and Computer Graphics | 2010
Joel Daniels; Erik W. Anderson; Luis Gustavo Nonato; Cláudio T. Silva
We introduce a flexible technique for interactive exploration of vector field data through classification derived from user-specified feature templates. Our method is founded on the observation that, while similar features within the vector field may be spatially disparate, they share similar neighborhood characteristics. Users generate feature-based visualizations by interactively highlighting well-accepted and domain-specific representative feature points. Feature exploration begins with the computation of attributes that describe the neighborhood of each sample within the input vector field. Compilation of these attributes forms a representation of the vector field samples in the attribute space. We project the attribute points onto the canonical 2D plane to enable interactive exploration of the vector field using a painting interface. The projection encodes the similarities between vector field points within the distances computed between their associated attribute points. The proposed method runs at interactive rates for an enhanced user experience and is completely flexible, as showcased by the simultaneous identification of diverse feature types.
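The attribute-space idea above can be sketched as follows, with assumed attribute choices (mean direction and magnitude statistics over a small window, projected with PCA) rather than the paper's actual attribute set and projection:

```python
import numpy as np

def neighborhood_attributes(vx, vy, r=1):
    """For each interior sample of a 2D vector field, describe its
    (2r+1)x(2r+1) neighborhood by a few statistics (assumed, illustrative)."""
    h, w = vx.shape
    attrs = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            nx = vx[i - r:i + r + 1, j - r:j + r + 1]
            ny = vy[i - r:i + r + 1, j - r:j + r + 1]
            mag = np.hypot(nx, ny)
            attrs.append([nx.mean(), ny.mean(),     # mean direction
                          mag.mean(), mag.std()])   # magnitude stats
    return np.asarray(attrs)

def project_2d(attrs):
    """PCA via SVD: project attribute points onto the top-2 components,
    so nearby 2D points correspond to similar neighborhoods."""
    centered = attrs - attrs.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

# Toy field: a vortex on a 16x16 grid.
ys, xs = np.mgrid[-1:1:16j, -1:1:16j]
vx, vy = -ys, xs
points2d = project_2d(neighborhood_attributes(vx, vy))
```

The resulting 2D scatter is what a painting interface would expose: brushing a cluster selects all field samples whose neighborhoods share those characteristics, wherever they sit spatially.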
Computer Graphics Forum | 2011
Cláudio T. Silva; Erik W. Anderson; Emanuele Santos; Juliana Freire
Over the last 20 years, visualization courses have been developed and offered at universities around the world. Many of these courses use established visualization libraries and tools (e.g. VTK, ParaView, AVS, VisIt) as a way to provide students a hands‐on experience, allowing them to prototype and explore different visualization techniques. In this paper, we describe our experiences using VisTrails as a platform to teach scientific visualization. VisTrails is an open‐source system that was designed to support exploratory computational tasks such as visualization and data analysis. Unlike previous scientific workflow and visualization systems, VisTrails provides a comprehensive provenance management infrastructure. We discuss how different features of the system, and in particular, the provenance information have changed the dynamics of the Scientific Visualization course we offer at the University of Utah. We also describe our initial attempts at using the provenance information to better assess our teaching techniques and student performance.
IEEE International Conference on eScience | 2008
Bill Howe; Peter W. Lawson; Renee M. Bellinger; Erik W. Anderson; Emanuele Santos; Juliana Freire; Carlos Eduardo Scheidegger; António M. Baptista; Cláudio T. Silva
Data analysis tasks at an Ocean Observatory require integrative and domain-specialized use of database, workflow, and visualization systems. We describe a platform to support these tasks, developed as part of the cyberinfrastructure at the NSF Science and Technology Center for Coastal Margin Observation and Prediction, integrating a provenance-aware workflow system, 3D visualization, and a remote query engine for large-scale ocean circulation models. We show how these disparate tools complement each other and give examples of real scientific insights delivered by the integrated system. We conclude that data management solutions for eScience require this kind of holistic, integrative approach, explain how our approach may be generalized, and recommend a broader, application-oriented research agenda to explore relevant architectures.
Journal of Cognitive Neuroscience | 2010
Gilbert A. Preston; Erik W. Anderson; Cláudio T. Silva; Terry Goldberg; Eric M. Wassermann
Working memory (WM) has been described as short-term retention of information that is no longer accessible in the environment, and the manipulation of this information for subsequent use in guiding behavior. WM is viewed as a cognitive process underlying higher-order cognitive functions. Evidence supports a critical role for PFC in mediating WM performance. Studies show psychomotor processing speed and accuracy account for considerable variance in neural efficiency (Ne). This study compared the relative effects of active and sham 10 Hz rTMS applied to dorsolateral prefrontal cortex (DLPFC) on indices of Ne in healthy participants performing a WM paradigm that models the association between WM load and task behavior [Sternberg, S. High-speed scanning in human memory. Science, 153, 652–654, 1966]. Previous studies identified a relationship between diminished Ne and impaired WM across a broad array of clinical disorders. In the present study, we predicted there would be a main effect of stimulation group (STM) on accuracy (SCR) and processing speed (RT), hence, Ne. We observed a main effect of STM for RT without an effect on SCR; even so, there was a robust effect of STM on Ne.
Computing in Science and Engineering | 2008
Erik W. Anderson; Cláudio T. Silva; James P. Ahrens; Katrin Heitmann; Salman Habib
Provenance - the logging of information about how data came into being and how it was processed - is an essential aspect of managing large-scale simulation and data-intensive projects. Using a cosmology code comparison project as an example, this article presents how a provenance system can play a key role in such applications.
Statistical and Scientific Database Management | 2008
Lauro Didier Lins; David Koop; Erik W. Anderson; Steven P. Callahan; Emanuele Santos; Carlos Eduardo Scheidegger; Juliana Freire; Cláudio T. Silva
Provenance (also referred to as audit trail, lineage, and pedigree) captures information about the steps used to generate a given data product. Such information provides documentation that is key to determining data quality and authorship, and necessary for preserving, reproducing, sharing and publishing the data. Workflow design, in particular for exploratory tasks (e.g., creating a visualization, mining a data set), requires an involved, trial-and-error process. To solve a problem, a user has to iteratively refine a workflow to experiment with different techniques and try different parameter values, as she formulates and tests hypotheses. The maintenance of detailed provenance (or history) of this process has many benefits that go beyond documentation and result reproducibility. Notably, it supports several operations that facilitate exploration, including the ability to return to a previous workflow version in an intuitive way, to undo bad changes, to compare different workflows, and to be reminded of the actions that led to a particular result [2].
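The exploration-support operations above can be sketched with a minimal version tree, in which every edit creates a new node, so undo just moves to the parent and every previous workflow state stays reachable (an illustrative toy, not VisTrails' actual data model):

```python
class VersionTree:
    """Toy provenance version tree: each commit records an action and
    points at its parent, so the full edit history forms a tree."""

    def __init__(self):
        self.parent = {0: None}            # version id -> parent id
        self.action = {0: "empty workflow"}
        self.head = 0
        self._next = 1

    def commit(self, action):
        vid = self._next
        self._next += 1
        self.parent[vid] = self.head
        self.action[vid] = action
        self.head = vid
        return vid

    def undo(self):
        """Return to the previous version; the newer one is kept."""
        if self.parent[self.head] is not None:
            self.head = self.parent[self.head]
        return self.head

    def history(self, vid=None):
        """Actions from the root to a version, oldest first -- the
        'actions that led to a particular result'."""
        vid = self.head if vid is None else vid
        out = []
        while vid is not None:
            out.append(self.action[vid])
            vid = self.parent[vid]
        return out[::-1]

tree = VersionTree()
v1 = tree.commit("add reader module")
v2 = tree.commit("add isosurface")
tree.undo()                            # back to the reader-only version
v3 = tree.commit("add volume render")  # branches off v1; v2 still exists
```

Comparing two workflows then amounts to diffing the two root-to-node action paths, and no exploratory dead end is ever lost.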
Brazilian Symposium on Computer Graphics and Image Processing | 2007
Erik W. Anderson; Steven P. Callahan; Carlos Eduardo Scheidegger; John M. Schreiner; Cláudio T. Silva
Medical image registration is a difficult problem. Not only does a registration algorithm need to capture both large- and small-scale image deformations, it also has to deal with global and local image intensity variations. In this paper we describe a new multiresolution elastic image registration method that addresses these difficulties. To capture large- and small-scale image deformations, we use both global and local affine transformation algorithms. To address global and local image intensity variations, we apply an image intensity standardization algorithm that transforms image intensities into a standard intensity scale, which allows highly accurate registration of medical images.

Unstructured volume grids are ubiquitous in scientific computing, and have received substantial interest from the scientific visualization community. In this paper, we take a point-based approach to rendering unstructured grids. In particular, we present a novel method of approximating these irregular elements with point-based primitives amenable to existing hardware acceleration techniques. To improve interactivity with large datasets, we adopt a level-of-detail strategy. We use a well-known quantitative metric to analyze the image quality achieved by the final rendering.
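The multiresolution idea behind such registration methods can be sketched in a much simpler setting: a coarse-to-fine, pure-translation search with a sum-of-squared-differences cost (an assumed simplification for illustration, not the paper's elastic method). Large displacements are found cheaply on a downsampled image, and that estimate seeds a small local search at full resolution:

```python
import numpy as np

def downsample2(img):
    """Halve resolution by 2x2 block averaging."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def best_shift(fixed, moving, candidates):
    """Pick the (dy, dx) candidate minimizing the SSD after (circularly)
    shifting the moving image back onto the fixed image."""
    def ssd(dy, dx):
        return np.sum((np.roll(moving, (-dy, -dx), axis=(0, 1)) - fixed) ** 2)
    return min(candidates, key=lambda s: ssd(*s))

def register(fixed, moving):
    # Coarse level: brute-force search over a wide displacement range.
    fc, mc = downsample2(fixed), downsample2(moving)
    coarse = [(dy, dx) for dy in range(-4, 5) for dx in range(-4, 5)]
    cy, cx = best_shift(fc, mc, coarse)
    # Fine level: refine around the upsampled coarse estimate only.
    fine = [(2 * cy + dy, 2 * cx + dx)
            for dy in range(-2, 3) for dx in range(-2, 3)]
    return best_shift(fixed, moving, fine)

# Toy example: a Gaussian blob shifted by (3, 2) pixels (circularly).
ys, xs = np.mgrid[0:32, 0:32]
fixed = np.exp(-((ys - 16) ** 2 + (xs - 16) ** 2) / 30.0)
moving = np.roll(fixed, (3, 2), axis=(0, 1))
shift = register(fixed, moving)   # recovers (3, 2)
```

An elastic method replaces the single translation with locally varying transforms, but the same pyramid structure keeps the search tractable at each level.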
Workshop on Beyond Time and Errors | 2012
Erik W. Anderson
In this position paper, we discuss the problems and advantages of using physiological measurements to estimate cognitive load in order to evaluate scientific visualization methods. We present various techniques and technologies designed to measure cognitive load and how they may be leveraged in the context of user evaluation studies for scientific visualization. We also discuss the challenges of experiments designed to use these physiological measurements.
Computing in Science and Engineering | 2010
Erik W. Anderson; Gilbert A. Preston; Cláudio T. Silva
Applying Python to a neuroscience project let developers put complex data processing and advanced visualization techniques together in a coherent framework. The Python programming language provides a development environment suitable to both computational and visualization tasks. One of Python's key advantages is that it lets developers use packages that extend the language to provide advanced capabilities, such as array and matrix manipulation, image processing, digital signal processing, and visualization. Several popular data exploration and visualization tools have been built in Python, including VisIt (www.llnl.gov/visit), ParaView (www.paraview.org), Climate Data Analysis Tools (CDAT; www2-pcmdi.llnl.gov/cdat), and VisTrails (www.vistrails.org). In our work, we use VisTrails; however, nearly any Python-enabled application can produce similar results. The neuroscience field often uses both multimodal data and computationally complex algorithms to analyze data collected from study participants. Here, we investigate a study in which magnetic resonance imaging (MRI) is combined with electroencephalography (EEG) data to examine working memory.
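The kind of pipeline such a Python framework chains together can be sketched with a few composable steps, using assumed processing stages (band-pass filtering, stimulus-locked epoching, averaging) rather than the study's actual modules:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(data, fs, lo=1.0, hi=30.0, order=4):
    """Zero-phase band-pass filter along the time axis."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=-1)

def epochs(data, onsets, length):
    """Stack fixed-length windows starting at each stimulus onset."""
    return np.stack([data[:, s:s + length] for s in onsets])

def evoked(data, fs, onsets, length):
    """Chain the steps: filter, epoch, then average across trials."""
    return epochs(bandpass(data, fs), onsets, length).mean(axis=0)

# Toy data: 8 EEG channels, 20 s at 128 Hz, one event every 2 s.
fs = 128
rng = np.random.default_rng(1)
data = rng.standard_normal((8, 20 * fs))
onsets = np.arange(0, 18 * fs, 2 * fs)
erp = evoked(data, fs, onsets, fs)   # 8 channels x 1 s evoked average
```

In a workflow system like VisTrails each of these functions would become a module in a dataflow, with the provenance of every parameter choice recorded automatically.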