Publication


Featured research published by John Patchett.


IEEE Symposium on Parallel and Large-Data Visualization and Graphics (PVG) | 2003

SLIC: scheduled linear image compositing for parallel volume rendering

Aleksander Stompel; Kwan-Liu Ma; Eric B. Lum; James P. Ahrens; John Patchett

Parallel volume rendering offers a feasible solution to the large data visualization problem by distributing both the data and rendering calculations among multiple computers connected by a network. In sort-last parallel volume rendering, each processor generates an image of its assigned subvolume, which is blended together with other images to derive the final image. Improving the efficiency of this compositing step, which requires interprocessor communication, is the key to scalable, interactive rendering. The recent trend of using hardware-accelerated volume rendering demands further acceleration of the image compositing step. We present a new optimized parallel image compositing algorithm and its performance on a PC cluster. Our test results show that this new algorithm offers significant savings over previous algorithms in both communication and compositing costs. On a 64-node PC cluster with a 100BaseT network interconnect, we can achieve interactive rendering rates for images at resolutions up to 1024x1024 pixels at several frames per second.
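
SLIC's scheduling is specific to the paper, but the sort-last compositing step it accelerates can be illustrated with a minimal sketch: each processor renders its subvolume to an RGBA image, and the partial images are blended front to back with the over operator. The blend helper, depth ordering, and random test images below are assumptions for illustration, not the published algorithm.

```python
import numpy as np

def over(front, back):
    """Blend two premultiplied RGBA images with the 'over' operator."""
    alpha_f = front[..., 3:4]
    return front + (1.0 - alpha_f) * back

def composite(partial_images, depth_order):
    """Sort-last compositing: blend per-processor images front to back.

    partial_images: list of HxWx4 premultiplied RGBA arrays (one per rank)
    depth_order:    rank indices sorted from nearest to farthest subvolume
    """
    result = np.zeros_like(partial_images[0])   # fully transparent start
    for rank in depth_order:
        result = over(result, partial_images[rank])
    return result

# Illustrative use: 4 ranks, 256x256 images, arbitrary depth order.
images = [np.random.rand(256, 256, 4).astype(np.float32) for _ in range(4)]
final = composite(images, depth_order=[2, 0, 3, 1])
```

In a real cluster the blending would be interleaved with interprocessor exchange of image pieces; the sketch only shows the blending math that the communication schedule serves.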


IEEE International Conference on High Performance Computing, Data and Analytics | 2014

An image-based approach to extreme scale in situ visualization and analysis

James P. Ahrens; Sébastien Jourdain; Patrick O'Leary; John Patchett; David H. Rogers; Mark R. Petersen

Extreme scale scientific simulations are leading a charge to exascale computation, and data analytics runs the risk of being a bottleneck to scientific discovery. Due to power and I/O constraints, we expect in situ visualization and analysis will be a critical component of these workflows. Options for extreme scale data analysis are often presented as a stark contrast: write large files to disk for interactive, exploratory analysis, or perform in situ analysis to save detailed data about phenomena that a scientist knows about in advance. We present a novel framework for a third option: a highly interactive, image-based approach that promotes exploration of simulation results, and is easily accessed through extensions to widely used open source tools. This in situ approach supports interactive exploration of a wide range of results, while still significantly reducing data movement and storage.
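
A minimal sketch of the general image-based idea, not the actual framework or its on-disk format: render a batch of images in situ over a set of camera angles for each timestep and record a small metadata index, so the much smaller image database can be explored interactively later. The render_view stub, file layout, and JSON index here are invented for illustration.

```python
import json, os
import numpy as np

def render_view(field, phi, theta):
    """Stand-in renderer: in a real in situ pipeline this would be an actual
    render of the simulation state; here it just returns a dummy image."""
    return (np.clip(field, 0, 1) * 255).astype(np.uint8)

def write_image_database(field, step, out_dir="image_db",
                         phis=(0, 90, 180, 270), thetas=(-45, 0, 45)):
    """Render a batch of camera angles for one timestep and record an index."""
    os.makedirs(out_dir, exist_ok=True)
    index = []
    for phi in phis:
        for theta in thetas:
            img = render_view(field, phi, theta)
            name = f"step{step:05d}_phi{phi}_theta{theta}.npy"
            np.save(os.path.join(out_dir, name), img)
            index.append({"step": step, "phi": phi, "theta": theta, "file": name})
    with open(os.path.join(out_dir, f"index_{step:05d}.json"), "w") as f:
        json.dump(index, f, indent=2)

# Illustrative use on a synthetic 2D field.
write_image_database(np.random.rand(128, 128), step=42)
```

The storage cost scales with the number of images and their resolution rather than with the simulation mesh, which is where the data reduction comes from.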


IEEE Conference on Mass Storage Systems and Technologies | 2012

Jitter-free co-processing on a prototype exascale storage stack

John M. Bent; Sorin Faibish; James P. Ahrens; Gary Grider; John Patchett; Percy Tzelnic; Jon Woodring

In the petascale era, the storage stack used by the extreme scale high performance computing community is fairly homogeneous across sites. On the compute edge of the stack, file system clients or IO forwarding services direct IO over an interconnect network to a relatively small set of IO nodes. These nodes forward the requests over a secondary storage network to a spindle-based parallel file system. Unfortunately, this architecture will become unviable in the exascale era. As the density growth of disks continues to outpace increases in their rotational speeds, disks are becoming increasingly cost-effective for capacity but decreasingly so for bandwidth. Fortunately, new storage media such as solid state devices are filling this gap; although not cost-effective for capacity, they are so for performance. This suggests that the storage stack at exascale will incorporate solid state storage between the compute nodes and the parallel file systems. There are three natural places into which to position this new storage layer: within the compute nodes, the IO nodes, or the parallel file system. In this paper, we argue that the IO nodes are the appropriate location for HPC workloads and show results from a prototype system that we have built accordingly. Running a pipeline of computational simulation and visualization, we show that our prototype system reduces total time to completion by up to 30%.
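
A toy illustration of placing solid state storage between the compute nodes and the parallel file system, with invented paths and a deliberately simple drain policy: checkpoints land on a fast SSD tier, and a background thread copies them to the slower parallel file system so the simulation is not exposed to disk-induced jitter.

```python
import os, queue, shutil, threading

BURST_BUFFER = "/local/ssd/checkpoints"       # assumed fast SSD tier (illustrative path)
PARALLEL_FS  = "/lustre/project/checkpoints"  # assumed slower shared tier (illustrative path)

pending = queue.Queue()

def drain_worker():
    """Background copy from the burst buffer to the parallel file system."""
    while True:
        path = pending.get()
        if path is None:          # sentinel: shut down
            break
        shutil.copy(path, PARALLEL_FS)
        os.remove(path)           # free SSD space once safely drained
        pending.task_done()

threading.Thread(target=drain_worker, daemon=True).start()

def write_checkpoint(step, data: bytes):
    """Simulation-side write: returns as soon as the SSD write completes."""
    path = os.path.join(BURST_BUFFER, f"ckpt_{step:06d}.bin")
    with open(path, "wb") as f:
        f.write(data)
    pending.put(path)             # hand off draining to the background thread
```

The paper argues for doing this staging (and co-processing such as visualization) at the IO nodes rather than on the compute nodes or inside the file system; the sketch only conveys the asynchronous two-tier write path.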


IEEE Computer | 2013

Ultrascale Visualization of Climate Data

Dean N. Williams; T. Bremer; Charles Doutriaux; John Patchett; Sean Williams; Galen M. Shipman; Ross Miller; Dave Pugmire; B. Smith; Chad A. Steed; E. W. Bethel; Hank Childs; H. Krishnan; P. Prabhat; M. Wehner; Cláudio T. Silva; Emanuele Santos; David Koop; Tommy Ellqvist; Jorge Poco; Berk Geveci; Aashish Chaudhary; Andrew C. Bauer; Alexander Pletzer; David A. Kindig; Gerald Potter; Thomas Maxwell

Collaboration across research, government, academic, and private sectors is integrating more than 70 scientific computing libraries and applications through a tailorable provenance framework, empowering scientists to exchange and examine data in novel ways.


Proceedings of the 2009 Workshop on Ultrascale Visualization | 2009

Interactive remote large-scale data visualization via prioritized multi-resolution streaming

James P. Ahrens; Jonathan Woodring; David E. DeMarle; John Patchett; Mathew Maltrud

The simulations that run on petascale and future exascale supercomputers pose a difficult challenge for scientists who need to visualize and analyze their results remotely. Interactive visualization is limited mainly by the network bandwidth available for sending and reading large data at a distance. To tackle this issue, we provide a generalized distance visualization architecture for large remote data that aims to provide interactive analysis. We achieve this through a prioritized, multi-resolution streaming architecture. Since the original data is several orders of magnitude larger than what current display and network technologies can handle, we stream downsampled representations over time to complete a visualization using fast local rendering. This technique provides the necessary interactivity and full-resolution results dynamically on demand while maintaining a full-featured visualization framework.
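
A minimal sketch of prioritized multi-resolution streaming under assumed choices (strided subsampling for the resolution levels, per-block variance as the priority heuristic): the coarsest version of every block is sent first in priority order, and blocks are then progressively refined.

```python
import heapq
import numpy as np

NUM_LEVELS = 3  # assumed number of resolution levels

def subsample(block, level):
    """Strided subsample of a 3D block; level 0 is coarsest, NUM_LEVELS-1 is full."""
    stride = 2 ** (NUM_LEVELS - 1 - level)
    return block[::stride, ::stride, ::stride]

def stream(blocks):
    """Yield (block_id, level, sample) so that coarse levels of every block go
    out before any finer level, and within a level the most 'important'
    blocks (here: highest variance, a placeholder heuristic) go first."""
    heap = []
    for bid, block in enumerate(blocks):
        heapq.heappush(heap, (0, -float(np.var(block)), bid))
    while heap:
        level, neg_priority, bid = heapq.heappop(heap)
        yield bid, level, subsample(blocks[bid], level)     # "transmit" this piece
        if level + 1 < NUM_LEVELS:
            heapq.heappush(heap, (level + 1, neg_priority, bid))

# Illustrative use: four random 64^3 blocks; a client would render each sample.
for bid, level, sample in stream([np.random.rand(64, 64, 64) for _ in range(4)]):
    pass
```

The ordering gives a usable coarse picture quickly and spends remaining bandwidth refining the regions the priority measure deems most interesting.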


IEEE Transactions on Visualization and Computer Graphics | 2016

In Situ Eddy Analysis in a High-Resolution Ocean Climate Model

Jonathan Woodring; Mark R. Petersen; Andre Schmeißer; John Patchett; James P. Ahrens; Hans Hagen

An eddy is a feature associated with a rotating body of fluid, surrounded by a ring of shearing fluid. In the ocean, eddies are 10 to 150 km in diameter, are spawned by boundary currents and baroclinic instabilities, may live for hundreds of days, and travel for hundreds of kilometers. Eddies are important in climate studies because they transport heat, salt, and nutrients through the world's oceans and are vessels of biological productivity. The study of eddies in global ocean-climate models requires large-scale, high-resolution simulations. This poses a problem for feasible (timely) eddy analysis, as ocean simulations generate massive amounts of data, causing a bottleneck for traditional analysis workflows. To enable eddy studies, we have developed an in situ workflow for the quantitative and qualitative analysis of MPAS-Ocean, a high-resolution ocean climate model, in collaboration with the ocean model research and development process. Planned eddy analysis at high spatial and temporal resolutions will not be possible with a postprocessing workflow due to various constraints, such as storage size and I/O time, but the in situ workflow enables it and scales well to ten thousand processing elements.
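
One common ingredient of such an eddy census, shown here only as an illustrative stand-in for the paper's actual in situ analysis, is the Okubo-Weiss parameter W = s_n^2 + s_s^2 - omega^2: regions where W is strongly negative are vorticity dominated and can be labeled as candidate eddy cores.

```python
import numpy as np
from scipy import ndimage

def okubo_weiss(u, v, dx=1.0, dy=1.0):
    """Okubo-Weiss parameter W = s_n^2 + s_s^2 - omega^2 on a 2D velocity slice."""
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    s_n = du_dx - dv_dy          # normal strain
    s_s = dv_dx + du_dy          # shear strain
    omega = dv_dx - du_dy        # relative vorticity
    return s_n**2 + s_s**2 - omega**2

def label_eddy_cores(u, v, threshold_std=0.2):
    """Label connected regions where W is strongly negative (vorticity dominated).
    The threshold expressed in standard deviations of W is an assumed choice."""
    w = okubo_weiss(u, v)
    mask = w < -threshold_std * np.std(w)
    labels, count = ndimage.label(mask)
    return labels, count

# Illustrative use on a synthetic velocity field.
u, v = np.random.randn(2, 200, 200)
labels, n_eddies = label_eddy_cores(u, v)
```

Running a census like this in situ, per timestep, avoids writing full-resolution velocity fields to disk just to locate eddies afterward.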


IEEE Symposium on Large Data Analysis and Visualization | 2014

ADR visualization: A generalized framework for ranking large-scale scientific data using Analysis-Driven Refinement

Boonthanome Nouanesengsy; Jonathan Woodring; John Patchett; Kary Myers; James P. Ahrens

Prioritization of data is necessary for managing large-scale scientific data, as the scale of the data implies that there are only enough resources available to process a limited subset of the data. For example, data prioritization is used during in situ triage to scale with bandwidth bottlenecks, and used during focus+context visualization to save time during analysis by guiding the user to important information. In this paper, we present ADR visualization, a generalized analysis framework for ranking large-scale data using Analysis-Driven Refinement (ADR), which is inspired by Adaptive Mesh Refinement (AMR). A large-scale data set is partitioned in space, time, and variable, using user-defined importance measurements for prioritization. This process creates a prioritization tree over the data set. Using this tree, selection methods can generate sparse data products for analysis, such as focus+context visualizations or sparse data sets.
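
A minimal sketch of the refinement loop under assumed choices (2D data, a variance placeholder for the user-defined importance measure, splitting along the longest axis): a priority queue always refines the most important block next, and the sequence of splits defines the prioritization tree.

```python
import heapq
import numpy as np

def importance(block):
    """User-defined importance measure; variance is just a placeholder."""
    return float(np.var(block))

def adr_rank(data, max_leaves=16, min_size=8):
    """Analysis-Driven Refinement on a 2D array: repeatedly split the most
    important block along its longest axis. Returns leaf regions as
    (row, col, height, width), roughly highest priority first."""
    heap = [(-importance(data), 0, (0, 0) + data.shape)]
    counter, leaves = 1, []
    while heap and len(leaves) + len(heap) < max_leaves:
        _, _, (r0, c0, h, w) = heapq.heappop(heap)
        if max(h, w) <= min_size:           # too small to split further
            leaves.append((r0, c0, h, w))
            continue
        if h >= w:                          # split along the longest axis
            children = [(r0, c0, h // 2, w), (r0 + h // 2, c0, h - h // 2, w)]
        else:
            children = [(r0, c0, h, w // 2), (r0, c0 + w // 2, h, w - w // 2)]
        for r, c, hh, ww in children:
            score = -importance(data[r:r + hh, c:c + ww])
            heapq.heappush(heap, (score, counter, (r, c, hh, ww)))
            counter += 1
    leaves.extend(region for _, _, region in sorted(heap))
    return leaves

# Illustrative use: rank regions of a noisy 2D field by variance.
regions = adr_rank(np.random.rand(128, 128))
```

Downstream selection methods can then keep only the top-ranked regions, which is what makes sparse products such as focus+context views possible.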


Computer Graphics Forum | 2012

Interface Exchange as an Indicator for Eddy Heat Transport

Sean Williams; Mark R. Petersen; Matthew W. Hecht; Mathew Maltrud; John Patchett; James P. Ahrens; Bernd Hamann

The ocean contains many large‐scale, long‐lived vortices, called mesoscale eddies, that are believed to have a role in the transport and redistribution of salt, heat, and nutrients throughout the ocean. Determining this role, however, has proven to be a challenge, since the mechanics of eddies are only partly understood; a standard definition for these ocean eddies does not exist and, therefore, scientifically meaningful, robust methods for eddy extraction, characterization, tracking and visualization remain a challenge. To shed light on the nature and potential roles of eddies, we extend our previous work on eddy identification and tracking to construct a new metric to characterize the transfer of water into and out of eddies across their boundary, and produce several visualizations of this new metric to provide clues about the role eddies play in the global ocean.
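
A simplified sketch of the boundary-exchange idea, not the paper's exact metric: approximate the transfer of water across a closed eddy contour by summing the velocity component normal to each boundary segment, keeping inflow and outflow separate.

```python
import numpy as np

def boundary_exchange(u, v, contour):
    """Approximate water exchange across a closed eddy boundary.

    u, v:    2D velocity components indexed as [y, x]
    contour: (N, 2) array of (x, y) points tracing the boundary counter-clockwise
    Returns (inflow, outflow) in grid units; their difference is the net flux."""
    inflow = outflow = 0.0
    n = len(contour)
    for i in range(n):
        p0 = contour[i]
        p1 = contour[(i + 1) % n]
        dx, dy = p1 - p0
        length = np.hypot(dx, dy)
        if length == 0.0:
            continue
        nx, ny = dy / length, -dx / length   # outward normal of a CCW segment
        x, y = int(round(p0[0])), int(round(p0[1]))
        flux = (u[y, x] * nx + v[y, x] * ny) * length
        if flux > 0.0:
            outflow += flux
        else:
            inflow -= flux
    return inflow, outflow

# Illustrative use: a circular boundary on a synthetic velocity field.
u, v = np.random.randn(2, 100, 100)
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([50 + 10 * np.cos(theta), 50 + 10 * np.sin(theta)], axis=1)
inflow, outflow = boundary_exchange(u, v, circle)
```

In practice the contour comes from an eddy identification step and the metric is tracked over an eddy's lifetime; the sketch only shows the per-snapshot flux accounting.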


2008 Workshop on Ultrascale Visualization | 2008

Petascale visualization: Approaches and initial results

James P. Ahrens; Li-Ta Lo; Boonthanome Nouanesengsy; John Patchett; Allen McPherson

With the advent of the first petascale supercomputer, Los Alamos's Roadrunner, there is a pressing need to address how to visualize petascale data. The crux of the petascale visualization performance problem is interactive rendering, since it is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. In this work, we evaluated the rendering performance of multi-core CPU and GPU-based processors. To achieve high performance on multi-core processors, we tested multi-core optimized raytracing engines for rendering. For real-world performance testing, and to prepare for petascale visualization tasks, we interfaced these rendering engines with VTK and ParaView. Initial results show that rendering software optimized for multi-core CPU processors provides competitive performance to GPUs for the parallel rendering of massive data. The current architectural multi-core trend suggests multi-core based supercomputers are able to provide interactive visualization and rendering support now and in the future.


International Supercomputing Conference | 2017

Extreme Event Analysis in Next Generation Simulation Architectures

Stephen Hamilton; Randal C. Burns; Charles Meneveau; Perry L. Johnson; Peter Lindstrom; John Patchett; Alexander S. Szalay

Numerical simulations present challenges because they generate petabyte-scale data that must be extracted and reduced during the simulation. We demonstrate a seamless integration of feature extraction for a simulation of turbulent fluid dynamics. The simulation produces on the order of 6 TB per timestep. In order to analyze and store this data, we extract velocity data from a dilated volume of the strong vortical regions and also store a lossy compressed representation of the data. Both reduce data by one or more orders of magnitude. We extract data from user checkpoints in transit while they reside on temporary burst buffer SSD stores. In this way, analysis and compression algorithms are designed to meet specific time constraints so they do not interfere with simulation computations. Our results demonstrate that we can perform feature extraction on a world-class direct numerical simulation of turbulence while it is running and gather meaningful scientific data for archival and post analysis.
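
A minimal sketch of the two reductions described above, with arbitrary thresholds and a simple uniform quantizer standing in for the real floating-point compressor: keep velocity samples only inside a dilated mask of strong-vorticity cells, and store a coarsely quantized copy of the full field.

```python
import numpy as np
from scipy import ndimage

def extract_vortical_regions(velocity, vorticity_mag, threshold, dilation=2):
    """Keep velocity samples only inside a dilated mask of strong vorticity.

    velocity:      (3, X, Y, Z) array of velocity components
    vorticity_mag: (X, Y, Z) array of vorticity magnitude
    Returns the boolean mask and the sparse velocity samples it selects."""
    mask = vorticity_mag > threshold
    mask = ndimage.binary_dilation(mask, iterations=dilation)
    return mask, velocity[:, mask]

def quantize(field, bits=8):
    """Toy lossy compression: uniform quantization to 2**bits levels,
    a stand-in for a real floating-point compressor."""
    lo, hi = field.min(), field.max()
    scale = (2**bits - 1) / (hi - lo)
    codes = np.round((field - lo) * scale).astype(np.uint8 if bits <= 8 else np.uint16)
    return codes, lo, scale   # enough to reconstruct an approximation later

# Illustrative use on synthetic data.
vel = np.random.randn(3, 64, 64, 64).astype(np.float32)
vort = np.abs(np.random.randn(64, 64, 64)).astype(np.float32)
mask, sparse_vel = extract_vortical_regions(vel, vort, threshold=2.0)
codes, lo, scale = quantize(vel)
```

In the paper these reductions run against checkpoints staged on burst buffer SSDs, within a time budget that keeps them off the simulation's critical path; the sketch only shows the data transformations themselves.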

Collaboration


Dive into John Patchett's collaborations.

Top Co-Authors

James P. Ahrens (Los Alamos National Laboratory)
Jonathan Woodring (Los Alamos National Laboratory)
Christopher Mitchell (Los Alamos National Laboratory)
Christopher M. Sewell (Los Alamos National Laboratory)
Sean Williams (Los Alamos National Laboratory)
David H. Rogers (Los Alamos National Laboratory)
Dean N. Williams (Lawrence Livermore National Laboratory)
Galen M. Shipman (Oak Ridge National Laboratory)