
Publications


Featured research published by David Pugmire.


IEEE Computer Graphics and Applications | 2010

Extreme Scaling of Production Visualization Software on Diverse Architectures

Hank Childs; David Pugmire; Sean Ahern; Brad Whitlock; Mark Howison; Prabhat; Gunther H. Weber; E. Wes Bethel

This article presents the results of experiments studying how the pure-parallelism paradigm scales to massive data sets, including 16,000 or more cores on trillion-cell meshes, the largest data sets published to date in the visualization literature. The findings on scaling characteristics and bottlenecks contribute to understanding how pure parallelism will perform in the future.


IEEE Transactions on Visualization and Computer Graphics | 2011

Streamline Integration Using MPI-Hybrid Parallelism on a Large Multicore Architecture

David Camp; Christoph Garth; Hank Childs; David Pugmire; Kenneth I. Joy

Streamline computation in a very large vector field data set represents a significant challenge due to the nonlocal and data-dependent nature of streamline integration. In this paper, we conduct a study of the performance characteristics of hybrid parallel programming and execution as applied to streamline integration on a large, multicore platform. With multicore processors now prevalent in clusters and supercomputers, there is a need to understand the impact of these hybrid systems in order to make the best implementation choice. We use two MPI-based distribution approaches based on established parallelization paradigms, parallelize over seeds and parallelize over blocks, and present a novel MPI-hybrid algorithm for each approach to compute streamlines. Our findings indicate that the work sharing between cores in the proposed MPI-hybrid parallel implementation results in much improved performance and consumes less communication and I/O bandwidth than a traditional, nonhybrid distributed implementation.
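The core operation being parallelized above is numerical integration of a streamline through a vector field. As a minimal illustration (not the paper's implementation), the following sketch traces a streamline with fourth-order Runge-Kutta steps through an analytic 2D field; the function names and the example rotational field are assumptions chosen for clarity.

```python
import numpy as np

def advect_streamline(velocity, seed, h=0.01, steps=1000):
    """Trace a streamline from `seed` through a steady vector field
    using fourth-order Runge-Kutta (RK4) integration."""
    p = np.asarray(seed, dtype=float)
    curve = [p.copy()]
    for _ in range(steps):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * h * k1)
        k3 = velocity(p + 0.5 * h * k2)
        k4 = velocity(p + h * k3)
        p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        curve.append(p.copy())
    return np.array(curve)

# Example: a rotational 2D field whose streamlines are circles.
def rotation(p):
    return np.array([-p[1], p[0]])

# 628 steps of size 0.01 is roughly one full revolution (2*pi).
curve = advect_streamline(rotation, seed=(1.0, 0.0), h=0.01, steps=628)
```

In a production setting the velocity function would instead interpolate within a (possibly out-of-core) mesh block, which is precisely what makes the computation data-dependent and hard to load balance.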


IEEE International Conference on High Performance Computing, Data, and Analytics | 2009

Scalable computation of streamlines on very large datasets

David Pugmire; Hank Childs; Christoph Garth; Sean Ahern; Gunther H. Weber

Understanding vector fields resulting from large scientific simulations is an important and often difficult task. Streamlines, curves that are tangential to a vector field at each point, are a powerful visualization method in this context. Application of streamline-based visualization to very large vector field data represents a significant challenge due to the non-local and data-dependent nature of streamline computation, and requires careful balancing of computational demands placed on I/O, memory, communication, and processors. In this paper we review two parallelization approaches based on established parallelization paradigms (static decomposition and on-demand loading) and present a novel hybrid algorithm for computing streamlines. Our algorithm is aimed at good scalability and performance across the widely varying computational characteristics of streamline-based problems. We perform performance and scalability studies of all three algorithms on a number of prototypical application problems and demonstrate that our hybrid scheme is able to perform well in different settings.
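The two established paradigms named in the abstract differ in what is partitioned: seeds or data blocks. The sketch below illustrates that distinction only; the helper names are hypothetical, and a real implementation would use MPI ranks, block I/O, and particle routing rather than these toy maps.

```python
import numpy as np

def partition_over_seeds(seeds, nranks):
    """Parallelize-over-seeds: each rank owns a fixed subset of the seed
    points and loads whatever data blocks its particles wander into."""
    return {r: seeds[r::nranks] for r in range(nranks)}

def owning_rank(point, block_size, blocks_per_axis, nranks):
    """Parallelize-over-blocks: the data is statically decomposed into
    blocks, and a particle is handed to whichever rank owns the block
    containing its current position."""
    ij = tuple(int(c // block_size) for c in point)
    block_id = ij[0] * blocks_per_axis + ij[1]
    return block_id % nranks  # simple round-robin block-to-rank map

seeds = [(x, 0.0) for x in np.linspace(0.0, 1.0, 8)]
assignment = partition_over_seeds(seeds, nranks=4)
```

Parallelizing over seeds minimizes communication but may load blocks redundantly; parallelizing over blocks reads each block once but must forward particles between ranks, which is the tension the paper's hybrid algorithm balances.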


IEEE Transactions on Visualization and Computer Graphics | 2010

Analysis of Recurrent Patterns in Toroidal Magnetic Fields

Allen Sanderson; Guoning Chen; Xavier Tricoche; David Pugmire; Scott Kruger; Joshua Breslau

In the development of magnetic confinement fusion, which is potentially a future source of low-cost power, physicists must be able to analyze the magnetic field that confines the burning plasma. While the magnetic field can be described as a vector field, traditional techniques for analyzing the field's topology cannot be used because of its Hamiltonian nature. In this paper we describe a technique, developed as a collaboration between physicists and computer scientists, that determines the topology of a toroidal magnetic field using fieldlines with near-minimal lengths. More specifically, we analyze the Poincaré map of the sampled fieldlines in a Poincaré section, including identifying critical points and other topological features of interest to physicists. The technique has been deployed in an interactive, parallel visualization tool which physicists are using to gain new insight into simulations of magnetically confined burning plasmas.
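A Poincaré map reduces a fieldline to the discrete set of points where it punctures a fixed cross-sectional plane. As a minimal sketch (assuming a simple analytic helical field, not the fusion simulations analyzed in the paper), the code below integrates a fieldline and records its upward crossings of the half-plane y = 0, x > 0:

```python
import numpy as np

def poincare_punctures(velocity, seed, h=0.005, steps=20000):
    """Integrate a fieldline with RK4 and record where it punctures the
    half-plane y = 0, x > 0 -- a simple Poincaré section."""
    p = np.asarray(seed, dtype=float)
    punctures = []
    for _ in range(steps):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * h * k1)
        k3 = velocity(p + 0.5 * h * k2)
        k4 = velocity(p + h * k3)
        q = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        # Detect an upward crossing of y = 0 and interpolate the hit point.
        if p[1] < 0.0 <= q[1] and q[0] > 0.0:
            t = -p[1] / (q[1] - p[1])
            hit = p + t * (q - p)
            punctures.append((hit[0], hit[2]))  # an (R, Z)-like pair
        p = q
    return punctures

def helical(p):
    # Fieldlines wind around the z-axis while drifting upward.
    return np.array([-p[1], p[0], 0.1])

pts = poincare_punctures(helical, seed=(1.0, 0.0, 0.0))
```

For this field each revolution takes 2*pi time units, so successive punctures share the same radius and are spaced 0.1 * 2*pi apart in z; in a confinement simulation the pattern of punctures reveals magnetic islands and other topological features.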


IEEE Computer Graphics and Applications | 2016

VTK-m: Accelerating the Visualization Toolkit for Massively Threaded Architectures

Kenneth Moreland; Christopher M. Sewell; William Usher; Li-Ta Lo; Jeremy S. Meredith; David Pugmire; James Kress; Hendrik A. Schroots; Kwan-Liu Ma; Hank Childs; Matthew Larsen; Chun-Ming Chen; Robert Maynard; Berk Geveci

One of the most critical challenges for high-performance computing (HPC) scientific visualization is execution on massively threaded processors. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. Our current production scientific visualization software is not designed for these new types of architectures. To address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms on new and future computer architectures.


Architectural Support for Programming Languages and Operating Systems | 2012

A distributed data-parallel framework for analysis and visualization algorithm development

Jeremy S. Meredith; Robert Sisneros; David Pugmire; Sean Ahern

The coming generation of supercomputing architectures will require fundamental changes in programming models to effectively make use of the expected million- to billion-way concurrency and thousand-fold reduction in per-core memory. Most current parallel analysis and visualization tools achieve scalability by partitioning the data, either spatially or temporally, and running serial computational kernels on each data partition, using message passing as needed. These techniques lack the level of data parallelism necessary to execute effectively on the underlying hardware. This paper introduces a framework that enables the expression of analysis and visualization algorithms with memory-efficient execution in a hybrid distributed- and data-parallel manner on both multi-core and many-core processors. We demonstrate results on scientific data using CPUs and GPUs in scalable heterogeneous systems.
<imports></imports>
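The key idea of such frameworks is to express algorithms as per-element functors applied over arrays, so that one expression of the algorithm can be scheduled on serial CPUs, multi-core CPUs, or GPUs. The sketch below mimics that style in miniature (the names are illustrative, and NumPy vectorization stands in for a real device backend):

```python
import numpy as np

def data_parallel_map(functor, *arrays):
    """Apply a per-element functor across input arrays. A framework can
    dispatch this same pattern to CPU threads or GPU kernels; here we
    simply vectorize with NumPy as a stand-in backend."""
    return functor(*[np.asarray(a) for a in arrays])

# A per-point analysis kernel: velocity magnitude from its components.
def magnitude(u, v, w):
    return np.sqrt(u * u + v * v + w * w)

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 2.0, 2.0])
w = np.array([0.0, 0.0, 1.0])
speeds = data_parallel_map(magnitude, u, v, w)
```

Writing the kernel without explicit loops or shared mutable state is what makes it portable across the multi-core and many-core processors the paper targets.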


Eurographics Workshop on Parallel Graphics and Visualization | 2013

GPU acceleration of particle advection workloads in a parallel, distributed memory setting

David Camp; Harinarayan Krishnan; David Pugmire; Christoph Garth; Ian Johnson; E. Wes Bethel; Kenneth I. Joy; Hank Childs

Although there has been significant research in GPU acceleration, both of parallel simulation codes (i.e., GPGPU) and of single GPU visualization and analysis algorithms, there has been relatively little research devoted to visualization and analysis algorithms on GPU clusters. This oversight is significant: parallel visualization and analysis algorithms have markedly different characteristics -- computational load, memory access pattern, communication, idle time, etc. -- than the other two categories. In this paper, we explore the benefits of GPU acceleration for particle advection in a parallel, distributed-memory setting. As performance properties can differ dramatically between particle advection use cases, our study operates over a variety of workloads, designed to reveal insights about underlying trends. This work has a three-fold aim: (1) to map a challenging visualization and analysis algorithm -- particle advection -- to a complex system (a cluster of GPUs), (2) to inform its performance characteristics, and (3) to evaluate the advantages and disadvantages of using the GPU. In our performance study, we identify which factors are and are not relevant for obtaining a speedup when using GPUs. In short, this study informs the following question: if faced with a parallel particle advection problem, should you implement the solution with CPUs, with GPUs, or does it not matter?


IEEE Symposium on Large Data Analysis and Visualization | 2012

Parallel stream surface computation for large data sets

David Camp; Hank Childs; Christoph Garth; David Pugmire; Kenneth I. Joy

Parallel stream surface calculation, while highly related to other particle advection-based techniques such as streamlines, has its own unique characteristics that merit independent study. Specifically, stream surfaces require new integral curves to be added continuously during execution to ensure surface quality and accuracy; performance can be improved by specifically accounting for these additional particles. We present an algorithm for generating stream surfaces in a distributed-memory parallel setting. The algorithm incorporates multiple schemes for parallelizing particle advection and we study which schemes work best. Further, we explore speculative calculation and how it can improve overall performance. In total, this study informs the efficient calculation of stream surfaces in parallel for large data sets, based on existing integral curve functionality.
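The surface-quality requirement described above amounts to a refinement rule: when adjacent integral curves on the advancing front drift too far apart, a new curve must be seeded between them. A minimal sketch of that rule (the function and threshold are assumptions for illustration, not the paper's algorithm):

```python
import numpy as np

def refine_front(points, max_gap):
    """Stream-surface refinement sketch: where adjacent curves on the
    advancing front are farther apart than `max_gap`, insert a new
    curve (seeded here at the midpoint) to preserve surface accuracy."""
    refined = [points[0]]
    for a, b in zip(points[:-1], points[1:]):
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        if np.linalg.norm(b - a) > max_gap:
            refined.append(tuple((a + b) / 2.0))  # new integral curve seed
        refined.append(tuple(b))
    return refined

front = [(0.0, 0.0), (0.1, 0.0), (0.5, 0.0)]
new_front = refine_front(front, max_gap=0.25)  # inserts one midpoint seed
```

Because these insertions happen continuously during execution, the particle population grows unpredictably, which is why stream surfaces stress parallelization schemes differently than fixed-seed streamline computation.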


Proceedings of the First Workshop on In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization | 2015

Loosely Coupled In Situ Visualization: A Perspective on Why It's Here to Stay

James Kress; Scott Klasky; Norbert Podhorszki; Jong Youl Choi; Hank Childs; David Pugmire

In this position paper, we argue that the loosely coupled in situ processing paradigm will play an important role in high performance computing for the foreseeable future. Loosely coupled in situ is an enabling technique that addresses many of the current issues with tightly coupled in situ, including ease of integration, usability, and fault tolerance. We survey the prominent positives and negatives of both tightly coupled and loosely coupled in situ and present our recommendation as to why loosely coupled in situ is an enabling technique that is here to stay. We then report on some recent experiences with loosely coupled in situ processing, in an effort to explore each of the discussed factors in a real-world environment.


International Parallel and Distributed Processing Symposium | 2016

Visualization and Analysis for Near-Real-Time Decision Making in Distributed Workflows

David Pugmire; James Kress; Jong Choi; Scott Klasky; Tahsin M. Kurç; R.M. Churchill; Matthew Wolf; Greg Eisenhauer; Hank Childs; Kesheng Wu; Alexander Sim; Junmin Gu; Jonathan Low

Data-driven science is becoming increasingly common and complex, and is placing tremendous stress on visualization and analysis frameworks. Data sources producing 10 GB per second (and more) are becoming increasingly commonplace in simulation, sensor, and experimental sciences. These data sources, which are often distributed around the world, must be analyzed by teams of scientists that are also distributed. Enabling scientists to view, query, and interact with such large volumes of data in near real time requires a rich fusion of visualization and analysis techniques, middleware, and workflow systems. This paper discusses initial research into visualization and analysis of distributed data workflows that enables scientists to make near-real-time decisions about large volumes of time-varying data.

Collaboration


Dive into David Pugmire's collaborations.

Top Co-Authors

Scott Klasky, Oak Ridge National Laboratory
Norbert Podhorszki, Oak Ridge National Laboratory
Matthew Wolf, Georgia Institute of Technology
Jong Youl Choi, Oak Ridge National Laboratory
Mark Kim, Oak Ridge National Laboratory
Qing Liu, Oak Ridge National Laboratory
E. Suchyta, Oak Ridge National Laboratory