Publications


Featured research published by Hank Childs.


IEEE Visualization | 2005

A contract based system for large data visualization

Hank Childs; Eric Brugger; Kathleen S. Bonnell; Jeremy S. Meredith; Mark C. Miller; Brad Whitlock; Nelson L. Max

VisIt is a richly featured visualization tool that is used to visualize some of the largest simulations ever run. The scale of these simulations requires that optimizations be incorporated into every operation VisIt performs. But the set of applicable optimizations depends on the types of operations being performed. Complicating the issue, VisIt has a plugin capability that allows new, unforeseen components to be added, making it even harder to determine which optimizations can be applied. We introduce the concept of a contract to the standard data flow network design. This contract enables each component of the data flow network to modify the set of optimizations used. In addition, the contract allows for new components to be accommodated gracefully within VisIt's data flow network system.
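
The contract mechanism this abstract describes can be sketched in a few lines: before execution, an object travels upstream through the data flow network so every component, including unforeseen plugins, can declare what it needs and veto optimizations it cannot tolerate. The sketch below is a hypothetical Python illustration with invented component and field names; VisIt's actual contracts are C++ and carry far more state.

```python
class Contract:
    """Accumulates the optimizations the pipeline may legally apply."""
    def __init__(self):
        self.needed_fields = set()
        self.can_subset_spatially = True  # e.g., read only some domains

class Component:
    """Base pipeline component: amends the contract, then executes."""
    def modify_contract(self, contract):
        return contract  # default: no changes
    def execute(self, data):
        return data

class SliceFilter(Component):
    def modify_contract(self, contract):
        # A slice touches only data near its plane, so spatial
        # subsetting stays legal; it just declares the field it needs.
        contract.needed_fields.add("pressure")
        return contract

class StreamlineFilter(Component):
    def modify_contract(self, contract):
        # Streamlines can wander anywhere in the volume, so this
        # component must veto spatial subsetting for correctness.
        contract.can_subset_spatially = False
        contract.needed_fields.add("velocity")
        return contract

def execute_pipeline(components, read_source):
    # Pass 1: propagate the contract from sink back to source so each
    # component adjusts the set of usable optimizations.
    contract = Contract()
    for comp in reversed(components):
        contract = comp.modify_contract(contract)
    # Pass 2: the source honors the final contract (e.g., reads only
    # the needed fields), then data flows forward as usual.
    data = read_source(contract)
    for comp in components:
        data = comp.execute(data)
    return data

result = execute_pipeline(
    [StreamlineFilter(), SliceFilter()],
    lambda c: {"fields": sorted(c.needed_fields),
               "spatial_subset": c.can_subset_spatially},
)
```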


Lawrence Berkeley National Laboratory | 2009

FastBit: interactively searching massive data

Kesheng Wu; Sean Ahern; Edward W Bethel; Jacqueline H. Chen; Hank Childs; E. Cormier-Michel; Cameron Geddes; Junmin Gu; Hans Hagen; Bernd Hamann; Wendy S. Koegler; Jerome Lauret; Jeremy S. Meredith; Peter Messmer; Ekow J. Otoo; V Perevoztchikov; A. M. Poskanzer; Prabhat; Oliver Rübel; Arie Shoshani; Alexander Sim; Kurt Stockinger; Gunther H. Weber; W. M. Zhang

As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.
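
A toy version of the binning-plus-bitmap idea is easy to show: build one bitmap per bin, and answer a range query by OR-ing the bitmaps of the bins it covers instead of scanning the raw column. This hypothetical numpy sketch omits FastBit's compression (WAH) and its candidate check of the partial boundary bins.

```python
import numpy as np

def build_bitmap_index(values, bin_edges):
    """One boolean bitmap per bin: bitmaps[i][j] is True iff
    values[j] falls into bin i (np.digitize convention)."""
    bin_ids = np.digitize(values, bin_edges)
    return [bin_ids == i for i in range(len(bin_edges) + 1)]

def range_query(bitmaps, bin_edges, lo, hi):
    """OR together the bitmaps of bins lying entirely inside
    [lo, hi); a real system also checks the partial boundary
    bins against the raw data."""
    lo_bin = np.searchsorted(bin_edges, lo, side="left") + 1
    hi_bin = np.searchsorted(bin_edges, hi, side="right")
    hit = np.zeros_like(bitmaps[0])
    for b in range(lo_bin, hi_bin):
        hit |= bitmaps[b]
    return np.nonzero(hit)[0]

temps = np.random.uniform(0.0, 100.0, size=1_000_000)
edges = np.linspace(0.0, 100.0, 11)            # ten 10-degree bins
index = build_bitmap_index(temps, edges)
rows = range_query(index, edges, 20.0, 50.0)   # no full-column scan
```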


IEEE Computer Graphics and Applications | 2010

Extreme Scaling of Production Visualization Software on Diverse Architectures

Hank Childs; David Pugmire; Sean Ahern; Brad Whitlock; Mark Howison; Prabhat; Gunther H. Weber; E. Wes Bethel

This article presents the results of experiments studying how the pure-parallelism paradigm scales to massive data sets, including 16,000 or more cores on trillion-cell meshes, the largest data sets published to date in the visualization literature. The findings on scaling characteristics and bottlenecks contribute to understanding how pure parallelism will perform in the future.


Eurographics Workshop on Parallel Graphics and Visualization | 2006

A scalable, hybrid scheme for volume rendering massive data sets

Hank Childs; Mark A. Duchaineau; Kwan-Liu Ma

We introduce a parallel, distributed memory algorithm for volume rendering massive data sets. The algorithm's scalability has been demonstrated up to 400 processors, rendering one hundred million unstructured elements in under one second. The heart of the algorithm is a hybrid approach that parallelizes over both the elements of the input data and the pixels of the output image. At each stage of the algorithm, there are strong limits on how much work each processor performs, ensuring good parallel efficiency. The algorithm is sample-based. We present two techniques for calculating the sample points: a 3D rasterization technique and a kernel-based technique, which trade off between speed and generality. Finally, the algorithm is very flexible: it can be deployed in general purpose visualization tools and supports diverse mesh types, ranging from structured grids to curvilinear and unstructured meshes to point clouds.
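
The two-phase structure the abstract outlines, parallelizing first over input elements and then over output pixels, can be shown schematically. In this hypothetical mpi4py sketch, element sampling is stubbed with random (pixel, depth, value) tuples; the real algorithm rasterizes mesh elements and composites color and opacity per pixel.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

WIDTH, HEIGHT = 64, 64
npixels = WIDTH * HEIGHT

def sample_local_elements(n=1000):
    """Phase 1 stub: rasterizing this rank's elements would yield
    (pixel_id, depth, value) sample tuples; random data stands in."""
    rng = np.random.default_rng(rank)
    return np.column_stack([
        rng.integers(0, npixels, n).astype(float),  # pixel id
        rng.random(n),                              # depth along the ray
        rng.random(n),                              # sampled scalar value
    ])

samples = sample_local_elements()

# Phase 2: route every sample to the rank that owns its pixel range,
# so the compositing work is parallelized over output pixels.
owner = (samples[:, 0].astype(int) * size) // npixels
mine = np.concatenate(comm.alltoall([samples[owner == r] for r in range(size)]))

# Composite my pixels' samples front to back (sort by pixel, then depth).
mine = mine[np.lexsort((mine[:, 1], mine[:, 0]))]
# ... per-pixel color/opacity accumulation would go here ...
```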


IEEE Transactions on Visualization and Computer Graphics | 2011

Streamline Integration Using MPI-Hybrid Parallelism on a Large Multicore Architecture

David Camp; Christoph Garth; Hank Childs; David Pugmire; Kenneth I. Joy

Streamline computation in a very large vector field data set represents a significant challenge due to the nonlocal and data-dependent nature of streamline integration. In this paper, we conduct a study of the performance characteristics of hybrid parallel programming and execution as applied to streamline integration on a large, multicore platform. With multicore processors now prevalent in clusters and supercomputers, there is a need to understand the impact of these hybrid systems in order to make the best implementation choice. We use two MPI-based distribution approaches based on established parallelization paradigms, parallelize over seeds and parallelize over blocks, and present a novel MPI-hybrid algorithm for each approach to compute streamlines. Our findings indicate that the work sharing between cores in the proposed MPI-hybrid parallel implementation results in much improved performance and consumes less communication and I/O bandwidth than a traditional, nonhybrid distributed implementation.
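
The "parallelize over seeds" level of the hybrid design can be sketched as MPI ranks splitting the seed set, with the cores of each node advancing their streamlines as threads that share one in-memory copy of the field. Everything here is illustrative: the analytic field stands in for loaded data blocks, and forward Euler stands in for a production integrator.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def velocity(p):
    # Analytic stand-in for a vector field read from disk.
    return np.array([-p[1], p[0], 0.1])

def advance(seed, h=0.01, steps=500):
    # Forward Euler for brevity; real codes use RK4 or adaptive schemes.
    curve = [np.asarray(seed, dtype=float)]
    for _ in range(steps):
        curve.append(curve[-1] + h * velocity(curve[-1]))
    return np.array(curve)

all_seeds = [np.array([np.cos(t), np.sin(t), 0.0])
             for t in np.linspace(0, 2 * np.pi, 64, endpoint=False)]
my_seeds = all_seeds[rank::size]          # MPI level: distribute seeds

# Node level: threads share the rank's data, so each block and each
# communication endpoint exists once per node rather than once per core.
with ThreadPoolExecutor(max_workers=8) as pool:
    curves = list(pool.map(advance, my_seeds))
```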


IEEE International Conference on High Performance Computing, Data and Analytics | 2009

Scalable computation of streamlines on very large datasets

David Pugmire; Hank Childs; Christoph Garth; Sean Ahern; Gunther H. Weber

Understanding vector fields resulting from large scientific simulations is an important and often difficult task. Streamlines, curves that are tangential to a vector field at each point, are a powerful visualization method in this context. Application of streamline-based visualization to very large vector field data represents a significant challenge due to the non-local and data-dependent nature of streamline computation, and requires careful balancing of the computational demands placed on I/O, memory, communication, and processors. In this paper we review two parallelization approaches based on established paradigms (static decomposition and on-demand loading) and present a novel hybrid algorithm for computing streamlines. Our algorithm aims for good scalability and performance across the widely varying computational characteristics of streamline-based problems. We conduct performance and scalability studies of all three algorithms on a number of prototypical application problems and demonstrate that our hybrid scheme performs well in different settings.
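
Complementing the seed-parallel sketch above, the block-oriented view behind static decomposition can be illustrated serially: integrate with RK4 only while the particle remains in the current block, then hand the streamline to whichever processor owns the next block (the hand-off and termination logic are elided). The cubical block decomposition and all names are assumptions, not the paper's code.

```python
import numpy as np

BLOCK = 1.0  # assumed decomposition: cubical blocks of side 1.0

def block_of(p):
    return tuple(np.floor(p / BLOCK).astype(int))

def rk4_step(v, p, h):
    # Classic fourth-order Runge-Kutta step along the vector field.
    k1 = v(p)
    k2 = v(p + 0.5 * h * k1)
    k3 = v(p + 0.5 * h * k2)
    k4 = v(p + h * k3)
    return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_in_block(v, p, h=0.01, max_steps=10_000):
    """Advance until the streamline leaves its starting block; the
    caller would then forward it to the owner of the next block."""
    home, curve = block_of(p), [p]
    for _ in range(max_steps):
        p = rk4_step(v, p, h)
        curve.append(p)
        if block_of(p) != home:
            break
    return np.array(curve), block_of(p)

v = lambda p: np.array([-p[1], p[0], 0.05])    # analytic test field
curve, next_block = integrate_in_block(v, np.array([0.5, 0.5, 0.5]))
```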


IEEE Transactions on Visualization and Computer Graphics | 2012

Hybrid Parallelism for Volume Rendering on Large-, Multi-, and Many-Core Systems

Mark Howison; E.W. Bethel; Hank Childs

With the computing industry trending toward multi- and many-core processors, we study how a standard visualization algorithm, raycasting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with data sets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.
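
The communication saving the article measures comes from shrinking the set of compositing participants, which a short sketch can make concrete: threads fill a node's partial image in shared memory, and a single MPI rank per node joins the reduction. Raycasting is stubbed here, and an elementwise sum stands in for depth-ordered compositing; all names and sizes are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
W, H, THREADS = 256, 256, 8

def raycast_rows(bounds):
    y0, y1 = bounds
    # Stub: march rays through this rank's data brick for rows y0..y1.
    return y0, np.random.rand(y1 - y0, W)

# Shared-memory level: threads cooperatively fill one partial image.
partial = np.empty((H, W))
chunks = [(i * H // THREADS, (i + 1) * H // THREADS) for i in range(THREADS)]
with ThreadPoolExecutor(max_workers=THREADS) as pool:
    for y0, rows in pool.map(raycast_rows, chunks):
        partial[y0:y0 + rows.shape[0]] = rows

# Distributed-memory level: one compositing participant per node,
# not per core, which is where the communication cost shrinks.
final = np.empty((H, W)) if comm.Get_rank() == 0 else None
comm.Reduce(partial, final, op=MPI.SUM, root=0)
```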


IEEE International Conference on High Performance Computing, Data and Analytics | 2008

High performance multivariate visual data exploration for extremely large data

Oliver Rübel; Prabhat; Kesheng Wu; Hank Childs; Jeremy S. Meredith; Cameron Geddes; E. Cormier-Michel; Sean Ahern; Gunther H. Weber; Peter Messmer; Hans Hagen; Bernd Hamann; E. Wes Bethel

One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system.
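
The histogram-based parallel coordinates idea reduces millions of polylines to fixed-size bin counts: one 2D histogram per adjacent axis pair drives the rendering, and a query (served in the paper by index/query technology such as FastBit) selects the subset to overlay. The numpy sketch below is a hypothetical illustration with invented data.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(1_000_000, 4))   # rows = particles, cols = variables
BINS = 64

def pairwise_histograms(arr, bins=BINS):
    """One 2D histogram per adjacent axis pair; this fixed-size
    summary replaces raw per-record polylines on the plot."""
    hists = []
    for a in range(arr.shape[1] - 1):
        h, _, _ = np.histogram2d(arr[:, a], arr[:, a + 1], bins=bins)
        hists.append(h)
    return hists

context = pairwise_histograms(data)                       # all records
selected = data[(data[:, 0] > 1.5) & (data[:, 2] < 0.0)]  # query subset
focus = pairwise_histograms(selected)                     # overlaid highlight
```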


IEEE Computer | 2013

Research Challenges for Visualization Software

Hank Childs; Berk Geveci; William J. Schroeder; Jeremy S. Meredith; Kenneth Moreland; Christopher M. Sewell; Torsten W. Kuhlen; E.W. Bethel

As the visualization research community reorients its software to address upcoming challenges, it must successfully deal with diverse processor architectures, distributed systems, various data sources, massive parallelism, multiple input and output devices, and interactivity.


IEEE Computer | 2013

Ultrascale Visualization of Climate Data

Dean N. Williams; T. Bremer; Charles Doutriaux; John Patchett; Sean Williams; Galen M. Shipman; Ross Miller; Dave Pugmire; B. Smith; Chad A. Steed; E. W. Bethel; Hank Childs; H. Krishnan; P. Prabhat; M. Wehner; Cláudio T. Silva; Emanuele Santos; David Koop; Tommy Ellqvist; Jorge Poco; Berk Geveci; Aashish Chaudhary; Andrew C. Bauer; Alexander Pletzer; David A. Kindig; Gerald Potter; Thomas Maxwell

Collaboration across research, government, academic, and private sectors is integrating more than 70 scientific computing libraries and applications through a tailorable provenance framework, empowering scientists to exchange and examine data in novel ways.

Collaboration


Dive into Hank Childs's collaborations.

Top Co-Authors

Kenneth I. Joy, University of California
E. Wes Bethel, Lawrence Berkeley National Laboratory
Bernd Hamann, University of California
Sean Ahern, Oak Ridge National Laboratory
Jeremy S. Meredith, Oak Ridge National Laboratory
Gunther H. Weber, Lawrence Berkeley National Laboratory
David Pugmire, Oak Ridge National Laboratory
George Ostrouchov, Oak Ridge National Laboratory