Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Sean Ahern is active.

Publication


Featured research published by Sean Ahern.


International Conference on Computer Graphics and Interactive Techniques | 2002

Chromium: a stream-processing framework for interactive rendering on clusters

Greg Humphreys; Mike Houston; Ren Ng; Randall J. Frank; Sean Ahern; P. D. Kirchner; James T. Klosowski

We describe Chromium, a system for manipulating streams of graphics API commands on clusters of workstations. Chromium's stream filters can be arranged to create sort-first and sort-last parallel graphics architectures that, in many cases, support the same applications while using only commodity graphics accelerators. In addition, these stream filters can be extended programmatically, allowing the user to customize the stream transformations performed by nodes in a cluster. Because our stream processing mechanism is completely general, any cluster-parallel rendering algorithm can be either implemented on top of or embedded in Chromium. In this paper, we give examples of real-world applications that use Chromium to achieve good scalability on clusters of workstations, and describe other potential uses of this stream processing technology. By completely abstracting the underlying graphics architecture, network topology, and API command processing semantics, we allow a variety of applications to run in different environments.
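
The stream-filter idea can be illustrated with a minimal sketch: each filter consumes a stream of graphics commands, transforms or culls them, and passes the rest downstream, so filters can be chained to build sort-first or sort-last style pipelines. The names below (Command, TileCullFilter, run_chain) are hypothetical and purely illustrative; they are not Chromium's actual SPU API.

```python
# Minimal, hypothetical sketch of chained stream filters over graphics commands.
from dataclasses import dataclass
from typing import Iterable, Iterator, List

@dataclass
class Command:
    name: str          # e.g. "clear", "draw"
    args: tuple = ()

class Filter:
    """A stream filter: consumes commands, emits (possibly transformed) commands."""
    def process(self, stream: Iterable[Command]) -> Iterator[Command]:
        raise NotImplementedError

class LoggingFilter(Filter):
    """Pass-through filter that records every command it sees."""
    def __init__(self):
        self.log: List[str] = []
    def process(self, stream):
        for cmd in stream:
            self.log.append(cmd.name)
            yield cmd

class TileCullFilter(Filter):
    """Sort-first flavour: drop draw commands whose bounds miss this node's tile."""
    def __init__(self, tile):
        self.x0, self.y0, self.x1, self.y1 = tile
    def process(self, stream):
        for cmd in stream:
            if cmd.name == "draw":
                bx0, by0, bx1, by1 = cmd.args
                if bx1 < self.x0 or bx0 > self.x1 or by1 < self.y0 or by0 > self.y1:
                    continue          # bounding box misses this tile: cull it
            yield cmd

def run_chain(filters: List[Filter], stream: Iterable[Command]) -> List[Command]:
    """Compose filters into a pipeline and drain it."""
    for f in filters:
        stream = f.process(stream)
    return list(stream)

if __name__ == "__main__":
    cmds = [Command("clear"),
            Command("draw", (0, 0, 10, 10)),
            Command("draw", (90, 90, 99, 99))]
    log = LoggingFilter()
    out = run_chain([log, TileCullFilter((0, 0, 50, 50))], cmds)
    print([c.name for c in out], log.log)   # the off-tile draw is culled
```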


Lawrence Berkeley National Laboratory | 2009

FastBit: interactively searching massive data

Kesheng Wu; Sean Ahern; Edward W Bethel; Jacqueline H. Chen; Hank Childs; E. Cormier-Michel; Cameron Geddes; Junmin Gu; Hans Hagen; Bernd Hamann; Wendy S. Koegler; Jerome Lauret; Jeremy S. Meredith; Peter Messmer; Ekow J. Otoo; V Perevoztchikov; A. M. Poskanzer; Prabhat; Oliver Rübel; Arie Shoshani; Alexander Sim; Kurt Stockinger; Gunther H. Weber; W. M. Zhang

As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.
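
The key ingredients named in the abstract (binning plus bitmap indexing) can be shown with a toy sketch. This is not FastBit's code or API: bitmaps are plain Python integers, and when query bounds do not align with bin edges a real system would refine the candidate rows against the raw values.

```python
# Toy illustration of binned bitmap indexing; not FastBit's implementation.
# Bit i of a bin's bitmap is set when row i falls in that bin.
import bisect

def build_index(values, bin_edges):
    """One bitmap per bin, built in a single pass over the data."""
    bitmaps = [0] * (len(bin_edges) + 1)
    for row, v in enumerate(values):
        bitmaps[bisect.bisect_right(bin_edges, v)] |= 1 << row
    return bitmaps

def range_query(bitmaps, bin_edges, lo, hi):
    """Rows with lo <= value < hi, answered as a bitwise OR of candidate bins."""
    first = bisect.bisect_right(bin_edges, lo)
    last = bisect.bisect_right(bin_edges, hi)
    result = 0
    for b in range(first, last + 1):
        result |= bitmaps[b]
    return result

def rows_of(bitmap):
    return [i for i in range(bitmap.bit_length()) if bitmap >> i & 1]

if __name__ == "__main__":
    temps = [310, 950, 1500, 720, 1800, 400]
    press = [1.0, 2.5, 3.1, 0.9, 2.2, 1.4]
    t_idx = build_index(temps, bin_edges=[500, 1000, 1500])
    p_idx = build_index(press, bin_edges=[1.0, 2.0, 3.0])
    # "temperature >= 1000 AND pressure >= 2.0" as an AND of two bitmap scans
    hits = range_query(t_idx, [500, 1000, 1500], 1000, 10**9) & \
           range_query(p_idx, [1.0, 2.0, 3.0], 2.0, 10**9)
    print(rows_of(hits))   # rows 2 and 4
```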


IEEE Computer Graphics and Applications | 2010

Extreme Scaling of Production Visualization Software on Diverse Architectures

Hank Childs; David Pugmire; Sean Ahern; Brad Whitlock; Mark Howison; Prabhat; Gunther H. Weber; E. Wes Bethel

This article presents the results of experiments studying how the pure-parallelism paradigm scales to massive data sets, including 16,000 or more cores on trillion-cell meshes, the largest data sets published to date in the visualization literature. The findings on scaling characteristics and bottlenecks contribute to understanding how pure parallelism will perform in the future.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2009

Scalable computation of streamlines on very large datasets

David Pugmire; Hank Childs; Christoph Garth; Sean Ahern; Gunther H. Weber

Understanding vector fields resulting from large scientific simulations is an important and often difficult task. Streamlines, curves that are tangential to a vector field at each point, are a powerful visualization method in this context. Application of streamline-based visualization to very large vector field data represents a significant challenge due to the non-local and data-dependent nature of streamline computation, and requires careful balancing of computational demands placed on I/O, memory, communication, and processors. In this paper we review two parallelization approaches based on established parallelization paradigms (static decomposition and on-demand loading) and present a novel hybrid algorithm for computing streamlines. Our algorithm is aimed at good scalability and performance across the widely varying computational characteristics of streamline-based problems. We perform performance and scalability studies of all three algorithms on a number of prototypical application problems and demonstrate that our hybrid scheme is able to perform well in different settings.
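
The on-demand loading strategy reviewed in the paper can be sketched in a toy, serial form: the domain is split into blocks, and a block's vector data is read (and cached) only when a streamline enters it. Everything below is illustrative; the paper's algorithms are parallel, and the block layout, field, and names are assumptions.

```python
# Toy serial analogue of on-demand block loading for streamline tracing.
BLOCK = 0.5   # block edge length in a [0, 1]^2 domain

def load_block(bid):
    """Stand-in for reading one block's vector data from disk (here: analytic)."""
    print(f"loading block {bid}")
    return lambda x, y: (-(y - 0.5), (x - 0.5))   # simple rotational field

def block_of(x, y):
    return (int(x // BLOCK), int(y // BLOCK))

def trace(seed, steps=200, h=0.01):
    cache = {}                       # blocks loaded only when the curve enters them
    x, y = seed
    curve = [(x, y)]
    for _ in range(steps):
        if not (0.0 <= x < 1.0 and 0.0 <= y < 1.0):
            break                    # streamline left the domain
        bid = block_of(x, y)
        if bid not in cache:
            cache[bid] = load_block(bid)
        vx, vy = cache[bid](x, y)    # Euler step through the locally loaded field
        x, y = x + h * vx, y + h * vy
        curve.append((x, y))
    return curve, cache.keys()

if __name__ == "__main__":
    curve, touched = trace((0.9, 0.5))
    print(len(curve), "points; blocks touched:", sorted(touched))
```

Static decomposition instead fixes the block-to-processor assignment up front and forwards particles between processors; the hybrid scheme in the paper balances between the two depending on workload.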


IEEE International Conference on High Performance Computing, Data, and Analytics | 2008

High performance multivariate visual data exploration for extremely large data

Oliver Rübel; Prabhat; Kesheng Wu; Hank Childs; Jeremy S. Meredith; Cameron Geddes; E. Cormier-Michel; Sean Ahern; Gunther H. Weber; Peter Messmer; Hans Hagen; Bernd Hamann; E. Wes Bethel

One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates both for visual information display and as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system.
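
The core of histogram-based parallel coordinates can be sketched as follows: per pair of adjacent axes, a 2D histogram of binned values replaces per-record polylines, and a range query selects the subset whose histograms are overlaid as a focus view. The sketch below is a conceptual illustration only; the variable names and data are invented, and the full-scale pipeline answers the range query with an index/query system rather than a boolean mask.

```python
# Sketch of histogram-based parallel coordinates: adjacent-axis 2D histograms
# stand in for per-record polylines.  Data and names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
data = {                       # three toy "particle" variables
    "x":  rng.uniform(0, 1, n),
    "px": rng.normal(0, 1, n),
    "E":  rng.exponential(1.0, n),
}
axes = ["x", "px", "E"]
bins = 32

def pairwise_histograms(mask=None):
    """One 2D histogram per pair of adjacent parallel-coordinate axes."""
    hists = {}
    for a, b in zip(axes, axes[1:]):
        va, vb = data[a], data[b]
        if mask is not None:
            va, vb = va[mask], vb[mask]
        hists[(a, b)], _, _ = np.histogram2d(va, vb, bins=bins)
    return hists

# "Context" view over all records; "focus" view over records passing a query.
context = pairwise_histograms()
selection = (data["E"] > 2.0) & (data["px"] > 0.5)
focus = pairwise_histograms(selection)

print("selected records:", selection.sum())
print("non-empty (x, px) bins, context vs focus:",
      int((context[("x", "px")] > 0).sum()), int((focus[("x", "px")] > 0).sum()))
```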


IEEE Transactions on Visualization and Computer Graphics | 2008

Visualizing Temporal Patterns in Large Multivariate Data using Modified Globbing

Markus Glatter; Jian Huang; Sean Ahern; Jamison Daniel; Aidong Lu

Extracting and visualizing temporal patterns in large scientific data is an open problem in visualization research. First, there are few proven methods to flexibly and concisely define general temporal patterns for visualization. Second, with large time-dependent data sets, as typical with todaypsilas large-scale simulations, scalable and general solutions for handling the data are still not widely available. In this work, we have developed a textual pattern matching approach for specifying and identifying general temporal patterns. Besides defining the formalism of the language, we also provide a working implementation with sufficient efficiency and scalability to handle large data sets. Using recent large-scale simulation data from multiple application domains, we demonstrate that our visualization approach is one of the first to empower a concept driven exploration of large-scale time-varying multivariate data.
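
One way to make the "textual pattern matching" idea concrete is a hypothetical analogue (not the paper's actual language): quantize each variable's time series into a string of symbols, then express temporal patterns as regular expressions over those symbols.

```python
# Hypothetical analogue of textual temporal-pattern matching: quantize a time
# series into symbols, then search for patterns with a regular expression.
import re

def symbolize(series, thresholds=(0.33, 0.66), symbols="lmh"):
    """Map each timestep's value to a symbol: low / medium / high."""
    return "".join(symbols[sum(v > t for t in thresholds)] for v in series)

def find_pattern(series, pattern):
    """Return (start, end) timestep ranges where the temporal pattern occurs."""
    text = symbolize(series)
    return [(m.start(), m.end()) for m in re.finditer(pattern, text)]

if __name__ == "__main__":
    temperature = [0.1, 0.2, 0.5, 0.9, 0.95, 0.4, 0.2, 0.7, 0.8, 0.1]
    # "a sustained high period (2+ steps) followed eventually by a return to low"
    print(symbolize(temperature))                       # llmhhmlhhl
    print(find_pattern(temperature, r"h{2,}[ml]*l"))    # [(3, 7), (7, 10)]
```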


IEEE Transactions on Visualization and Computer Graphics | 2008

Chromium Renderserver: Scalable and Open Remote Rendering Infrastructure

B. Paul; Sean Ahern; E.W. Bethel; E. Brugger; R. Cook; Jamison Daniel; K. Lewis; J. Owen; D. Southard

Chromium Renderserver (CRRS) is a software infrastructure that provides the ability for one or more users to run and view image output from unmodified, interactive OpenGL and X11 applications on a remote parallel computational platform equipped with graphics hardware accelerators via industry-standard Layer-7 network protocols and client viewers. The new contributions of this work include a solution to the problem of synchronizing X11 and OpenGL command streams, remote delivery of parallel hardware-accelerated rendering, and a performance analysis of several different optimizations that are generally applicable to a variety of rendering architectures. CRRS is fully operational, open source software.
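
The image-delivery half of this architecture can be sketched at toy scale: a render "server" produces frames and ships them length-prefixed over a socket to a viewer. This is only a minimal illustration under invented names; CRRS itself renders unmodified OpenGL/X11 applications on parallel graphics hardware and delivers images over industry-standard protocols.

```python
# Minimal sketch of remote image delivery: render frames, send them to a viewer.
import socket
import struct
import threading

W, H = 64, 48   # toy framebuffer size

def render_frame(t):
    """Stand-in for hardware-accelerated rendering: a moving grayscale gradient."""
    return bytes(((x + y + t) % 256) for y in range(H) for x in range(W))

def recv_exact(conn, n):
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed early")
        buf += chunk
    return buf

def render_server(conn, frames=3):
    """Render frames and ship each one length-prefixed to the viewer."""
    for t in range(frames):
        pixels = render_frame(t)
        conn.sendall(struct.pack("!III", W, H, len(pixels)) + pixels)
    conn.close()

def viewer(conn, frames=3):
    for _ in range(frames):
        w, h, n = struct.unpack("!III", recv_exact(conn, 12))
        pixels = recv_exact(conn, n)
        print(f"received {w}x{h} frame ({len(pixels)} bytes)")

if __name__ == "__main__":
    a, b = socket.socketpair()
    threading.Thread(target=render_server, args=(a,)).start()
    viewer(b)
    b.close()
```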


Architectural Support for Programming Languages and Operating Systems | 2012

A distributed data-parallel framework for analysis and visualization algorithm development

Jeremy S. Meredith; Robert Sisneros; David Pugmire; Sean Ahern

The coming generation of supercomputing architectures will require fundamental changes in programming models to effectively make use of the expected million to billion way concurrency and thousand-fold reduction in per-core memory. Most current parallel analysis and visualization tools achieve scalability by partitioning the data, either spatially or temporally, and running serial computational kernels on each data partition, using message passing as needed. These techniques lack the necessary level of data parallelism to execute effectively on the underlying hardware. This paper introduces a framework that enables the expression of analysis and visualization algorithms with memory-efficient execution in a hybrid distributed and data parallel manner on both multi-core and many-core processors. We demonstrate results on scientific data using CPUs and GPUs in scalable heterogeneous systems.
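
The programming-model idea described here, expressing analysis kernels as data-parallel primitives rather than serial loops over a partition, can be sketched in a hypothetical form (not the framework's actual API): a map functor applied per cell followed by a reduction, which a runtime could dispatch to CPU threads or a GPU.

```python
# Hypothetical sketch of a map/reduce data-parallel style for analysis kernels.
import numpy as np

def map_cells(functor, *fields):
    """Apply an elementwise functor over per-cell fields (vectorized here; a
    real runtime could execute the same functor on many-core hardware)."""
    return functor(*fields)

def reduce_cells(op, field):
    """Combine per-cell values into a single result (e.g. max, sum)."""
    return op(field)

if __name__ == "__main__":
    # Per-cell density and velocity components on a toy 1M-cell mesh.
    rng = np.random.default_rng(1)
    density = rng.uniform(0.5, 2.0, 1_000_000)
    vx, vy, vz = (rng.normal(0, 1, 1_000_000) for _ in range(3))

    # Map: per-cell kinetic energy.  Reduce: its global maximum.
    ke = map_cells(lambda d, a, b, c: 0.5 * d * (a*a + b*b + c*c),
                   density, vx, vy, vz)
    print("max kinetic energy:", reduce_cells(np.max, ke))
```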


International Conference on Conceptual Structures | 2010

Coupling visualization and data analysis for knowledge discovery from multi-dimensional scientific data

Oliver Rübel; Sean Ahern; E. Wes Bethel; Mark D. Biggin; Hank Childs; E. Cormier-Michel; Angela H. DePace; Michael B. Eisen; Charless C. Fowlkes; Cameron Geddes; Hans Hagen; Bernd Hamann; Min-Yu Huang; Soile V.E. Keranen; David W. Knowles; Chris L. Luengo Hendriks; Jitendra Malik; Jeremy S. Meredith; Peter Messmer; Prabhat; Daniela Ushizima; Gunther H. Weber; Kesheng Wu

Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management, supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.


IEEE Computer | 2013

Data Analysis and Visualization in High-Performance Computing

Amy F. Szczepański; Jian Huang; Troy Baer; Yashema C. Mack; Sean Ahern

Because data analysis and visualization jobs are highly diverse in terms of their size, measured by core count, memory use, and requisite software, sophisticated high-performance monitoring tools are needed to improve user support and facilitate resource allocation.

Collaboration


Dive into Sean Ahern's collaborations.

Top Co-Authors

Bernd Hamann

University of California

E. Wes Bethel

Lawrence Berkeley National Laboratory

George Ostrouchov

Oak Ridge National Laboratory

Kenneth I. Joy

University of California


Jeremy S. Meredith

Oak Ridge National Laboratory
