John Clyne
National Center for Atmospheric Research
Publications
Featured research published by John Clyne.
New Journal of Physics | 2007
John Clyne; Pablo D. Mininni; Alan Norton; Mark Peter Rast
The ever-increasing processing capabilities of the supercomputers available to computational scientists today, combined with the need for higher and higher resolution computational grids, have resulted in deluges of simulation data. Yet the computational resources and tools required to make sense of these vast numerical outputs through subsequent analysis are often far from adequate, making such analysis of the data a painstaking, if not a hopeless, task. In this paper, we describe a new tool for the scientific investigation of massive computational datasets. This tool (VAPOR) employs data reduction, advanced visualization, and quantitative analysis operations to permit the interactive exploration of vast datasets using only a desktop PC equipped with a commodity graphics card. We describe VAPOR's use in the study of two problems. The first, motivated by stellar envelope convection, investigates the hydrodynamic stability of compressible thermal starting plumes as they descend through a stratified layer of increasing density with depth. The second looks at current sheet formation in an incompressible helical magnetohydrodynamic flow to understand the early spontaneous development of quasi two-dimensional (2D) structures embedded within the 3D solution. Both of the problems were studied at sufficiently high spatial resolution, a grid of 504² × 2048 points for the first and 1536³ points for the second, to overwhelm the interactive capabilities of typically available analysis resources.
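To give a rough sense of the data volumes involved (a back-of-envelope estimate assuming one single-precision scalar field per time step, not a figure quoted in the paper), a single field on these grids occupies roughly

$$
504^2 \times 2048 \times 4\,\mathrm{bytes} \approx 2.1\times10^{9}\,\mathrm{bytes} \;(\approx 1.9\,\mathrm{GiB}),
\qquad
1536^3 \times 4\,\mathrm{bytes} \approx 1.4\times10^{10}\,\mathrm{bytes} \;(\approx 13.5\,\mathrm{GiB}),
$$

per variable per time step, before any time history or additional fields are taken into account.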
IEEE Visualization | 2001
Eric B. Lum; Kwan-Liu Ma; John Clyne
In this paper we present a hardware-assisted rendering technique coupled with a compression scheme for the interactive visual exploration of time-varying scalar volume data. A palette-based decoding technique and an adaptive bit allocation scheme are developed to fully utilize the texturing capability of a commodity 3-D graphics card. Using a single PC equipped with a modest amount of memory, a texture capable graphics card, and an inexpensive disk array, we are able to render hundreds of time steps of regularly gridded volume data (up to 45 million voxels per time step) at interactive rates, permitting the visual exploration of large scientific data sets in both the temporal and spatial domains.
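A minimal sketch of the palette (index texture plus lookup table) idea behind such a scheme, written in Python/NumPy rather than as the hardware texturing path; the uniform 8-bit quantizer and all names here are illustrative assumptions and do not reproduce the paper's adaptive bit allocation:

```python
import numpy as np

def palette_encode(volume, n_entries=256):
    """Quantize a scalar volume to small integer indices plus a palette.

    The index array can be uploaded as a texture; decoding is a per-voxel
    palette lookup. Here the quantizer is uniform, whereas the paper
    allocates bits adaptively across time steps.
    """
    lo, hi = float(volume.min()), float(volume.max())
    # Palette: representative scalar value for each index.
    palette = np.linspace(lo, hi, n_entries, dtype=np.float32)
    # Map each voxel to the nearest palette entry (index texture).
    scale = (n_entries - 1) / (hi - lo) if hi > lo else 0.0
    indices = np.round((volume - lo) * scale).astype(np.uint8)
    return indices, palette

def palette_decode(indices, palette):
    """Reconstruct an approximate volume by palette lookup."""
    return palette[indices]

# Example: one 64^3 time step reduced to one byte per voxel plus a 256-entry palette.
step = np.random.rand(64, 64, 64).astype(np.float32)
idx, pal = palette_encode(step)
approx = palette_decode(idx, pal)
print("max abs error:", np.abs(step - approx).max())
```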
Visualization and Data Analysis | 2005
John Clyne; Mark Peter Rast
Scientific visualization is routinely promoted as an indispensable component of the knowledge discovery process in a variety of scientific and engineering disciplines. However, our experiences with visualization at the National Center for Atmospheric Research (NCAR) differ somewhat from those described by many in the visualization community. Visualization at NCAR is used with great success to convey highly complex results to a wide variety of audiences, but the technology only rarely plays an active role in the day-to-day scientific discovery process. We believe that one reason for this is the mismatch between the size of the primary simulation data sets produced and the capabilities of the software and visual computing facilities generally available for their analysis. Here we describe preliminary results of our efforts to facilitate visual as well as non-visual analysis of terascale scientific data sets with the aim of realizing greater scientific return from such large-scale computational efforts.
VisSym | 1999
John Clyne; John M. Dennis
Previous efforts aimed at improving direct volume rendering performance have focused largely on time-invariant, 3D data. Little work has been done in the area of interactive direct volume rendering of time-varying data, such as is commonly found in Computational Fluid Dynamics (CFD) simulations. Until recently, the additional costs imposed by time-varying data have made consideration of interactive direct volume rendering impractical. We present a volume rendering system based on a parallel implementation of the Shear-Warp Factorization algorithm that is capable of rendering time-varying 128³ data at interactive speeds.
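Stated loosely, the Shear-Warp Factorization (due to Lacroute and Levoy) on which such a renderer is built splits the viewing transform so that compositing can sweep through the volume slice by slice in memory order:

$$
M_{\mathrm{view}} \;=\; M_{\mathrm{warp}} \cdot P \cdot M_{\mathrm{shear}},
$$

where $M_{\mathrm{shear}}$ shears the volume so the viewing rays become perpendicular to the slices, $P$ projects the sheared slices onto an intermediate image, and $M_{\mathrm{warp}}$ is an inexpensive 2D warp from that intermediate image to the final image. This is a paraphrase of the general technique, not the paper's notation or its parallel decomposition.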
Eurographics | 2005
Hiroshi Akiba; Kwan-Liu Ma; John Clyne
We present a systematic approach for direct volume rendering of terascale-sized data that are time-varying, and possibly non-uniformly sampled, using only a single commodity graphics PC. Our method employs a data reduction scheme that combines lossless, wavelet-based progressive data access with a user-directed, hardware-accelerated data packing technique. Data packing is achieved by discarding data blocks with values outside the data interval of interest and encoding the remaining data in a structure that can be efficiently decoded in the GPU. The compressed data can be transferred between disk, main memory, and video memory more efficiently, leading to more effective data exploration in both spatial and temporal domains. Furthermore, our texture-map based volume rendering system is capable of correctly displaying data that are sampled on a stretched, Cartesian grid. To study the effectiveness of our technique we used data sets generated from a large solar convection simulation, computed on a non-uniform, 504 × 504 × 2048 grid.
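A minimal sketch of the block-discarding ("data packing") step in Python/NumPy, assuming fixed-size blocks and a user-specified scalar interval of interest; the wavelet access layer and GPU-side decoding described in the paper are omitted:

```python
import numpy as np

def pack_blocks(volume, block=32, interval=(0.45, 0.55)):
    """Keep only blocks whose value range intersects the interval of interest.

    A per-block min/max test is enough to decide whether a block can
    contribute; retained blocks are returned with their grid coordinates
    so they can be reassembled (or decoded on the GPU) later.
    """
    lo, hi = interval
    kept = {}
    nz, ny, nx = volume.shape
    for k in range(0, nz, block):
        for j in range(0, ny, block):
            for i in range(0, nx, block):
                b = volume[k:k+block, j:j+block, i:i+block]
                if b.max() >= lo and b.min() <= hi:
                    kept[(k, j, i)] = b
    return kept

# Example: a radial field; only blocks touching a thin spherical shell survive.
z, y, x = np.mgrid[0:128, 0:128, 0:128] / 127.0
vol = np.sqrt((x - 0.5)**2 + (y - 0.5)**2 + (z - 0.5)**2).astype(np.float32)
kept = pack_blocks(vol, block=32, interval=(0.45, 0.55))
print(f"kept {len(kept)} of {(128 // 32) ** 3} blocks")
```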
New Journal of Physics | 2008
Pablo D. Mininni; Ed Lee; Alan Norton; John Clyne
Accurately interpreting three-dimensional (3D) vector quantities output as solutions to high-resolution computational fluid dynamics (CFD) simulations can be an arduous, time-consuming task. Scientific visualization of these fields can be a powerful aid in their understanding. However, numerous pitfalls present themselves, ranging from computational performance to the challenge of generating insightful visual representations of the data. In this paper, we briefly survey current practices for visualizing 3D vector fields, placing particular emphasis on those data arising from CFD simulations of turbulence. We describe the capabilities of a vector field visualization system that we have implemented as part of an open source visual data analysis environment. We also describe a novel algorithm we have developed for illustrating the advection of one vector field by a second flow field. We demonstrate these techniques in the exploration of two sets of runs. The first comprises an ideal and a resistive magnetohydrodynamic (MHD) simulation. This set is used to test the validity of the advection scheme. The second corresponds to a simulation of MHD turbulence. We show the formation of structures in the flows, the evolution of magnetic field lines, and how field line advection can be used effectively to track structures therein.
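A minimal sketch of the basic field line advection idea, i.e. advecting seed points with the velocity field and then re-integrating magnetic field lines from the displaced seeds. The arrays are assumed to be shaped (3, nz, ny, nx) with components ordered to match the index axes and velocities expressed in grid units; the interpolation, step control, and the paper's specific algorithm are not reproduced:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample(field, pts):
    """Trilinearly sample a (3, nz, ny, nx) vector field at points pts of shape (n, 3)."""
    return np.stack([map_coordinates(c, pts.T, order=1, mode='nearest') for c in field],
                    axis=1)

def advect(points, velocity, dt, n_steps):
    """Move seed points with the flow (forward Euler for brevity; RK4 would be better)."""
    for _ in range(n_steps):
        points = points + dt * sample(velocity, points)
    return points

def field_line(seed, b_field, ds=0.5, n_steps=200):
    """Integrate a field line of B starting from a single seed point (index space)."""
    line, p = [seed], seed.copy()
    for _ in range(n_steps):
        b = sample(b_field, p[None, :])[0]
        norm = np.linalg.norm(b)
        if norm == 0.0:
            break
        p = p + ds * b / norm
        line.append(p.copy())
    return np.array(line)

# Usage idea: seeds advected from time t to t+dt by the velocity field, then
# field lines re-traced from the displaced seeds in B(t+dt).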
IEEE Symposium on Large Data Analysis and Visualization | 2015
Shaomeng Li; Kenny Gruchalla; Kristin Potter; John Clyne; Hank Childs
I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variations in these factors across wavelet configurations have been explored in image processing, they are not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.
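A minimal sketch of the evaluate-on-compressed-data methodology, with PyWavelets standing in for the paper's compressor; the wavelet, thresholding rule, and error metrics below are illustrative assumptions:

```python
import numpy as np
import pywt

def lossy_wavelet(data, wavelet='bior4.4', level=3, keep=0.05):
    """Wavelet-transform, keep the largest `keep` fraction of coefficients, invert."""
    coeffs = pywt.wavedecn(data, wavelet, level=level)
    arr, layout = pywt.coeffs_to_array(coeffs)
    cutoff = np.quantile(np.abs(arr), 1.0 - keep)          # magnitude threshold
    arr = np.where(np.abs(arr) >= cutoff, arr, 0.0)
    recon = pywt.waverecn(pywt.array_to_coeffs(arr, layout, output_format='wavedecn'),
                          wavelet)
    return recon[tuple(slice(0, s) for s in data.shape)]   # trim boundary padding

def compare(original, reconstructed):
    """Simple stand-ins for the paper's accuracy measures."""
    err = original - reconstructed
    rng = original.max() - original.min()
    return {'rmse': float(np.sqrt(np.mean(err ** 2))),
            'max_rel': float(np.abs(err).max() / rng)}

field = np.random.rand(64, 64, 64)          # placeholder for a turbulence field
print(compare(field, lossy_wavelet(field, keep=0.05)))
```

The same analysis routine is then run on both versions of the field and the outcomes compared, rather than comparing the raw voxel values alone.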
Eurographics Workshop on Parallel Graphics and Visualization | 2017
Shaomeng Li; Nicole Marsaglia; Vincent Chen; Christopher M. Sewell; John Clyne; Hank Childs
We consider the problem of wavelet compression in the context of portable performance over multiple architectures. We contribute a new implementation of the wavelet transform algorithm that uses data parallel primitives from the VTK-m library. Because of the data parallel primitives approach, our algorithm is hardware-agnostic and yet can run on many-core architectures. We also study the efficacy of this implementation over multiple architectures against hardware-specific comparators. Results show that our performance is portable, scales well, and is comparable to native implementations. Finally, we argue that compression times for large data sets are likely fast enough to fit within in situ constraints, adding to the evidence that wavelet transformation could be an effective in situ compression operator.
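For intuition only, one Haar lifting step written as whole-array (map-style) operations in NumPy; in the paper the analogous structure is expressed with VTK-m data parallel primitives and longer biorthogonal filters, so this is a toy stand-in, not the implementation:

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar transform as two element-wise 'map' operations.

    Each output element depends only on an independent pair of inputs,
    which is why the same structure maps naturally onto data parallel
    primitives and many-core hardware.
    """
    even, odd = signal[0::2], signal[1::2]
    approx = (even + odd) / np.sqrt(2.0)    # low-pass half
    detail = (even - odd) / np.sqrt(2.0)    # high-pass half
    return approx, detail

def inverse_haar_step(approx, detail):
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.random.rand(1024)
a, d = haar_step(x)
print(np.allclose(x, inverse_haar_step(a, d)))  # True: the step is invertible
```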
IEEE Transactions on Visualization and Computer Graphics | 2013
John Clyne; Pablo D. Mininni; Alan Norton
Numerical simulations of turbulent fluid flow in areas ranging from solar physics to aircraft design are dominated by the presence of repeating patterns known as coherent structures. These persistent features are not yet well understood, but are believed to play an important role in the dynamics of turbulent fluid motion, and are the subject of study across numerous scientific and engineering disciplines. To facilitate their investigation a variety of techniques have been devised to track the paths of these structures as they evolve through time. Heretofore, all such feature tracking methods have largely ignored the physics governing the motion of these objects, resulting in error-prone and often computationally expensive solutions. In this paper, we present a feature path prediction method that is based on the physics of the underlying solutions to the equations of fluid motion. To the knowledge of the authors, the accuracy of these predictions is superior to methods reported elsewhere. Moreover, the precision of these forecasts for many applications is sufficiently high to enable the use of only the most rudimentary and inexpensive forms of correspondence matching. We also provide insight into the relationship between the internal time stepping used in a CFD simulation and the evolution of coherent structures that we believe is of benefit to any feature tracking method applicable to CFD. Finally, our method is easy to implement, and computationally inexpensive to execute, making it well suited for very high-resolution simulations.
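A minimal sketch of the general idea of physics-based path prediction: advect a feature's centroid with the velocity field across the gap between saved outputs, then match the prediction against features extracted at the next output. The helper names, the midpoint integrator, and the nearest-neighbour matching are illustrative assumptions, not the paper's method:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def velocity_at(velocity, p):
    """Sample a (3, nz, ny, nx) velocity field at one point p (index space, grid units)."""
    return np.array([map_coordinates(c, p[:, None], order=1, mode='nearest')[0]
                     for c in velocity])

def predict_position(centroid, velocity, dt, n_sub=8):
    """Advect a feature centroid over the gap between saved outputs (midpoint rule)."""
    p, h = centroid.astype(float), dt / n_sub
    for _ in range(n_sub):
        k1 = velocity_at(velocity, p)
        k2 = velocity_at(velocity, p + 0.5 * h * k1)
        p = p + h * k2
    return p

def match(predicted, candidates):
    """Correspondence by nearest neighbour to the predicted position."""
    d = np.linalg.norm(candidates - predicted, axis=1)
    return int(np.argmin(d)), float(d.min())

# Usage idea (hypothetical variable names):
# predicted = predict_position(centroid_t, velocity_t, dt=output_interval)
# j, dist = match(predicted, centroids_at_next_output)
```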
International Conference on Cluster Computing | 2017
Shaomeng Li; Sudhanshu Sane; Leigh Orf; Pablo D. Mininni; John Clyne; Hank Childs
Data reduction through compression is emerging as a promising approach to ease I/O costs for simulation codes on supercomputers. Typically, this compression is achieved by techniques that operate on individual time slices. However, as simulation codes advance in time, outputting multiple time slices as they go, the opportunity for compression incorporating the time dimension has not been extensively explored. Moreover, recent supercomputers are increasingly equipped with deeper memory hierarchies, including solid state drives and burst buffers, which creates the opportunity to temporarily store multiple time slices and then apply compression to them all at once, i.e., spatiotemporal compression. This paper explores the benefits of incorporating the time dimension into existing wavelet compression, including studying its key parameters and demonstrating its benefits along three axes: storage, accuracy, and temporal resolution. Our results demonstrate that temporal compression can improve each of these axes, and that the impact on performance for real systems, including tradeoffs in memory usage and execution time, is acceptable. We also demonstrate the benefits of spatiotemporal wavelet compression with real-world visualization use cases and tailored evaluation metrics.
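A minimal sketch of the spatiotemporal idea: buffer several saved time slices and apply an n-dimensional wavelet transform that includes the time axis. PyWavelets stands in for the paper's compressor, and the buffer length, wavelet, and threshold are illustrative assumptions:

```python
import numpy as np
import pywt

def spacetime_lossy(slices, wavelet='bior4.4', level=1, keep=0.02):
    """Jointly compress and reconstruct a buffered stack of time slices (time is axis 0)."""
    data = np.stack(slices, axis=0)                    # shape (t, nz, ny, nx)
    coeffs = pywt.wavedecn(data, wavelet, level=level)  # transform across all four axes
    arr, layout = pywt.coeffs_to_array(coeffs)
    cutoff = np.quantile(np.abs(arr), 1.0 - keep)
    arr = np.where(np.abs(arr) >= cutoff, arr, 0.0)
    recon = pywt.waverecn(pywt.array_to_coeffs(arr, layout, output_format='wavedecn'),
                          wavelet)
    return recon[tuple(slice(0, s) for s in data.shape)]

# e.g. sixteen consecutive 64^3 outputs buffered on an SSD or burst buffer
t = np.linspace(0.0, 1.0, 16)
z, y, x = np.mgrid[0:64, 0:64, 0:64] / 63.0
slices = [np.sin(2 * np.pi * (x + ti)).astype(np.float32) for ti in t]
recon = spacetime_lossy(slices)
```

For temporally coherent data, the detail coefficients along the time axis tend to be small, so more of them fall below the cutoff than when each slice is transformed on its own.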