Jonathan Woodring
Los Alamos National Laboratory
Publications
Featured research published by Jonathan Woodring.
IEEE Visualization | 2003
Jonathan Woodring; Chaoli Wang; Han-Wei Shen
We present an alternative method for viewing time-varying volumetric data. We consider such data as a four-dimensional data field, rather than considering space and time as separate entities. If we treat the data in this manner, we can apply high dimensional slicing and projection techniques to generate an image hyperplane. The user is provided with an intuitive user interface to specify arbitrary hyperplanes in 4D, which can be displayed with standard volume rendering techniques. From the volume specification, we are able to extract arbitrary hyperslices, combine slices together into a hyperprojection volume, or apply a 4D raycasting method to generate the same results. In combination with appropriate integration operators and transfer functions, we are able to extract and present different space-time features to the user.
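As a rough sketch of the axis-aligned special case (the paper supports arbitrary oblique 4D hyperplanes; all array and function names here are illustrative, not from the authors' system), a time-varying volume treated as one 4D field can be hypersliced and projected with NumPy:

```python
import numpy as np

# Treat a time-varying volume as a single 4D field indexed (t, z, y, x).
data4d = np.random.rand(8, 16, 16, 16)

def hyperslice(field, axis, index):
    """Extract a 3D hyperslice by fixing one of the four coordinates."""
    return np.take(field, index, axis=axis)

def hyperprojection(field, axis, op=np.mean):
    """Integrate (project) the 4D field along one axis into a 3D volume."""
    return op(field, axis=axis)

vol_at_t3 = hyperslice(data4d, axis=0, index=3)   # an ordinary time step
zt_slice  = hyperslice(data4d, axis=3, index=0)   # a space-time slab
time_avg  = hyperprojection(data4d, axis=0)       # integration over time
```

Fixing the time axis recovers conventional volume rendering of one step, while fixing a spatial axis yields a space-time volume; other integration operators (max, variance) stand in for different transfer-function choices.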
IEEE Transactions on Visualization and Computer Graphics | 2006
Jonathan Woodring; Han-Wei Shen
Time-varying, multi-variate, and comparative data sets are not easily visualized due to the amount of data that is presented to the user at once. By combining several volumes together with different operators into one visualized volume, the user is able to compare values from different data sets in space over time, run, or field without having to mentally switch between different renderings of individual data sets. In this paper, we propose using a volume shader where the user is given the ability to easily select and operate on many data volumes to create comparison relationships. The user specifies an expression with set and numerical operations over her data to see relationships between data fields. Furthermore, we render the contextual information of the volume shader by converting it to a volume tree. We visualize the different levels and nodes of the volume tree so that the user can see the results of suboperations. This gives the user a deeper understanding of the final visualization, by seeing how the parts of the whole are operationally constructed.
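A minimal, hypothetical sketch of such a volume expression tree, where evaluating any node exposes the result of its suboperation (the class and variable names are invented for illustration, not taken from the paper's shader):

```python
import numpy as np

class Leaf:
    """A leaf node holds one data volume."""
    def __init__(self, volume):
        self.volume = volume
    def evaluate(self):
        return self.volume

class Op:
    """An internal node applies a numerical or set-like operator to
    its children; evaluating it yields that suboperation's result,
    which could be visualized on its own."""
    def __init__(self, fn, left, right):
        self.fn, self.left, self.right = fn, left, right
    def evaluate(self):
        return self.fn(self.left.evaluate(), self.right.evaluate())

a = Leaf(np.array([1.0, 2.0, 3.0]))   # stand-ins for full 3D volumes
b = Leaf(np.array([3.0, 2.0, 1.0]))
tree = Op(np.maximum, Op(np.subtract, a, b), b)   # max(a - b, b)
result = tree.evaluate()
```

Rendering intermediate nodes (here, the `a - b` subtree) alongside the root is what gives the contextual view of how the final volume was constructed.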
IEEE Transactions on Visualization and Computer Graphics | 2009
Jonathan Woodring; Han-Wei Shen
Time-varying data is usually explored by animation or arrays of static images. Neither is particularly effective for classifying data by different temporal activities. Important temporal trends can be missed due to the lack of ability to find them with current visualization methods. In this paper, we propose a method to explore data at different temporal resolutions to discover and highlight data based upon time-varying trends. Using the wavelet transform along the time axis, we transform data points into multi-scale time series curve sets. The time curves are clustered so that data of similar activity are grouped together, at different temporal resolutions. The data are displayed to the user in a global time view spreadsheet where she is able to select temporal clusters of data points, and filter and brush data across temporal scales. With our method, a user can interact with data based on time activities and create expressive visualizations.
Eurographics | 2003
Jonathan Woodring; Han-Wei Shen
We present a new method for displaying time varying volumetric data. The core of the algorithm is an integration through time producing a single view volume that captures the essence of multiple time steps in a sequence. The resulting view volume then can be viewed with traditional raycasting techniques. With different time integration functions, we can generate several kinds of resulting chronovolumes, which illustrate differing types of time varying features to the user. By utilizing graphics hardware and texture memory, the integration through time can be sped up, allowing the user interactive control over the temporal transfer function and exploration of the data.
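The temporal integration idea can be sketched with two illustrative operators folding all time steps into one view volume (these particular operators are assumptions for the sketch, not necessarily the paper's integration functions):

```python
import numpy as np

# Stack T time steps of a volume into shape (T, z, y, x); here each toy
# step is a constant volume equal to its time index.
steps = np.stack([np.full((4, 4, 4), t, dtype=float) for t in range(5)])

# One chronovolume choice: the brightest value each voxel reaches over time.
chrono_max = steps.max(axis=0)

# Another: a weighted blend that emphasizes later time steps.
weights = np.linspace(0.2, 1.0, len(steps))
chrono_blend = np.tensordot(weights / weights.sum(), steps, axes=1)
```

Either result is a single 3D volume that standard raycasting can display; swapping the integration function changes which time-varying features survive into the final image.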
IEEE Transactions on Visualization and Computer Graphics | 2013
Ayan Biswas; Soumya Dutta; Han-Wei Shen; Jonathan Woodring
Information theory provides a theoretical framework for measuring information content for an observed variable, and has attracted much attention from visualization researchers for its ability to quantify saliency and similarity among variables. In this paper, we present a new approach towards building an exploration framework based on information theory to guide the users through the multivariate data exploration process. In our framework, we compute the total entropy of the multivariate data set and identify the contribution of individual variables to the total entropy. The variables are classified into groups based on a novel graph model where a node represents a variable and the links encode the mutual information shared between the variables. The variables inside the groups are analyzed for their representativeness and an information-based importance is assigned. We exploit specific information metrics to analyze the relationship between the variables and use the metrics to choose isocontours of selected variables. For a chosen group of points, parallel coordinates plots (PCP) are used to show the states of the variables and provide an interface for the user to select values of interest. Experiments with different data sets demonstrate the effectiveness of our proposed framework in depicting interesting regions of the data sets while taking into account the interaction among the variables.
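The entropy and mutual-information building blocks can be sketched with simple histogram estimators (the bin count and variable names are illustrative choices, not the paper's):

```python
import numpy as np

def entropy(x, bins=8):
    """Shannon entropy (bits) of a variable, from a histogram estimate."""
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=8):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), from a joint histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy[pxy > 0] / pxy.sum()
    h_xy = -np.sum(pxy * np.log2(pxy))
    return entropy(x, bins) + entropy(y, bins) - h_xy

rng = np.random.default_rng(0)
x = rng.normal(size=10000)
noisy_copy = x + 0.1 * rng.normal(size=10000)   # strongly dependent on x
independent = rng.normal(size=10000)             # unrelated to x
```

A dependent pair shares far more information than an independent one, which is exactly the signal the graph model's links encode when grouping variables.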
IEEE Symposium on Large Data Analysis and Visualization | 2011
Jonathan Woodring; Susan M. Mniszewski; Christopher M. Brislawn; David E. DeMarle; James P. Ahrens
We revisit wavelet compression by using a standards-based method to reduce large-scale data sizes for production scientific computing. Many of the bottlenecks in visualization and analysis come from limited bandwidth in data movement, from storage to networks. The majority of the processing time for visualization and analysis is spent reading or writing large-scale data, or moving data from a remote site in a distance-visualization scenario. Using wavelet compression in JPEG 2000, we provide a mechanism to vary data transfer time versus data quality, so that a domain expert can improve data transfer time while quantifying compression effects on their data. By using a standards-based method, we are able to provide scientists with the state-of-the-art wavelet compression from the signal processing and data compression community, suitable for use in a production computing environment. To quantify compression effects, we focus on measuring bit rate versus maximum error as a quality metric to provide precision guarantees for scientific analysis on remotely compressed POP (Parallel Ocean Program) data.
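The paper relies on JPEG 2000; as a toy stand-in, a one-level Haar transform with coefficient thresholding shows the same bit-rate-versus-maximum-error trade-off (the threshold, test signal, and function names are illustrative assumptions):

```python
import numpy as np

def haar1d(x):
    """One-level Haar transform: pairwise averages and half-differences."""
    avg = 0.5 * (x[0::2] + x[1::2])
    dif = 0.5 * (x[0::2] - x[1::2])
    return avg, dif

def ihaar1d(avg, dif):
    """Exact inverse of haar1d."""
    out = np.empty(avg.size * 2)
    out[0::2], out[1::2] = avg + dif, avg - dif
    return out

signal = np.sin(np.linspace(0, 4 * np.pi, 64))
avg, dif = haar1d(signal)
dif_q = np.where(np.abs(dif) > 0.05, dif, 0.0)   # drop small detail coeffs
recon = ihaar1d(avg, dif_q)

kept = np.count_nonzero(dif_q) + avg.size   # surviving coefficients (bit rate proxy)
max_err = np.abs(signal - recon).max()      # L-infinity error for a precision guarantee
```

Dropping a detail coefficient of magnitude at most 0.05 perturbs each affected sample by at most 0.05, so the maximum error stays bounded while the coefficient count (a crude bit-rate proxy) shrinks; JPEG 2000 makes this trade tunable and standardized.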
IEEE Computer Graphics and Applications | 2010
James P. Ahrens; Katrin Heitmann; Mark R. Petersen; Jonathan Woodring; Sean Williams; Patricia K. Fasel; Christine Ahrens; Chung-Hsing Hsu; Berk Geveci
This article presents a visualization-assisted process that verifies scientific-simulation codes. Code verification is necessary because scientists require accurate predictions to interpret data confidently. This verification process integrates iterative hypothesis verification with comparative, feature, and quantitative visualization. Following this process can help identify differences in cosmological and oceanographic simulations.
Astrophysical Journal Supplement Series | 2011
Jonathan Woodring; Katrin Heitmann; James P. Ahrens; Patricia K. Fasel; Chung-Hsing Hsu; Salman Habib; Adrian Pope
The advent of large cosmological sky surveys—ushering in the era of precision cosmology—has been accompanied by ever larger cosmological simulations. The analysis of these simulations, which currently encompass tens of billions of particles and up to a trillion particles in the near future, is often as daunting as carrying out the simulations in the first place. Therefore, the development of very efficient analysis tools combining qualitative and quantitative capabilities is a matter of some urgency. In this paper, we introduce new analysis features implemented within ParaView, a fully parallel, open-source visualization toolkit, to analyze large N-body simulations. A major aspect of ParaView is that it can live and operate on the same machines and utilize the same parallel power as the simulation codes themselves. In addition, data movement is a serious bottleneck now and will become even more of an issue in the future; an interactive visualization and analysis tool that can handle data in situ is fast becoming essential. The new features in ParaView include particle readers and a very efficient halo finder that identifies friends-of-friends halos and determines common halo properties, including spherical overdensity properties. In combination with many other functionalities already existing within ParaView, such as histogram routines or interfaces to programming languages like Python, this enhanced version enables fast, interactive, and convenient analyses of large cosmological simulations. In addition, development paths are available for future extensions.
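The friends-of-friends idea can be sketched as a union-find over pairwise distances (this O(n²) toy is nothing like the efficient, parallel finder described for ParaView; all names are illustrative):

```python
import math

def fof_halos(points, linking_length):
    """Toy friends-of-friends grouping: particles closer than the linking
    length are 'friends', and halos are the connected components."""
    parent = list(range(len(points)))

    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= linking_length:
                parent[find(i)] = find(j)

    halos = {}
    for i in range(len(points)):
        halos.setdefault(find(i), []).append(i)
    return list(halos.values())

# Three particles form a chain (each link within range), one is isolated.
pts = [(0, 0, 0), (0.1, 0, 0), (0.2, 0, 0), (5, 5, 5)]
groups = fof_halos(pts, linking_length=0.15)
```

Note the transitivity: the first and third particles are farther apart than the linking length, yet belong to one halo through their shared friend, which is the defining property of FOF halos. A production finder uses spatial trees to avoid the all-pairs loop.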
Proceedings of the 2009 Workshop on Ultrascale Visualization | 2009
James P. Ahrens; Jonathan Woodring; David E. DeMarle; John Patchett; Mathew Maltrud
Simulations running on petascale and future exascale supercomputers pose a difficult challenge for scientists who must visualize and analyze their results remotely. Their ability to interactively visualize their data is limited mainly by the network bandwidth available for sending and reading large data at a distance. To tackle this issue, we provide a generalized distance visualization architecture for large remote data that aims to provide interactive analysis. We achieve this through a prioritized, multi-resolution, streaming architecture. Since the original data size is several orders of magnitude greater than what display and network technologies can handle, we stream downsampled versions of representation data over time to complete a visualization using fast local rendering. This technique provides the necessary interactivity and full-resolution results dynamically on demand while maintaining a full-featured visualization framework.
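The coarsest-first streaming idea can be sketched as a generator that yields progressively finer downsamplings (strided subsampling here is a simplistic stand-in for the architecture's prioritized representation data):

```python
import numpy as np

def stream_resolutions(volume, levels):
    """Yield a volume coarsest-first, so a remote client can render a
    low-resolution result immediately and refine it as data arrives."""
    for level in reversed(range(levels + 1)):
        step = 2 ** level
        yield level, volume[::step, ::step, ::step]

full = np.zeros((32, 32, 32))
sizes = [v.shape for _, v in stream_resolutions(full, levels=3)]
# The 4x4x4 version arrives first; the full 32x32x32 volume arrives last.
```

Because each coarse version is a small fraction of the full data, the first interactive picture appears after transferring a tiny amount of data, and refinement continues only as long as the user keeps looking at that view.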
IEEE VGTC Conference on Visualization | 2009
Jonathan Woodring; Han-Wei Shen
When creating transfer functions for time-varying data, it is not clear what range of values to use for classification, as data value ranges and distributions change over time. In order to generate time-varying transfer functions, we search the data for classes that have similar behavior over time, assuming that data points that behave similarly belong to the same feature. We utilize a method we call temporal clustering and sequencing to find dynamic features in value space and create a corresponding transfer function. First, clustering finds groups of data points that have the same value space activity over time. Then, sequencing derives a progression of clusters over time, creating chains that follow value distribution changes. Finally, the cluster sequences are used to create transfer functions, as sequences describe the value range distributions over time in a data set.
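A toy version of the clustering-then-sequencing pipeline: bin points into value-range clusters at each time step, then chain each cluster to its best-overlapping successor (the binning scheme and all names are invented for illustration; the paper clusters on full value-space activity, not single values):

```python
def cluster_by_value(values, edges):
    """Assign each data point to a value-range cluster at one time step."""
    clusters = {}
    for idx, v in enumerate(values):
        band = sum(v >= e for e in edges)   # which value range v falls in
        clusters.setdefault(band, set()).add(idx)
    return clusters

def sequence(steps, edges):
    """Chain each cluster to the next step's cluster sharing the most
    members, tracing how value distributions drift over time."""
    per_step = [cluster_by_value(s, edges) for s in steps]
    chains = []
    for t in range(len(per_step) - 1):
        nxt_clusters = per_step[t + 1]
        for band, members in per_step[t].items():
            nxt = max(nxt_clusters, key=lambda b: len(members & nxt_clusters[b]))
            chains.append((t, band, nxt))
    return chains

steps = [[0.1, 0.2, 0.9], [0.3, 0.4, 0.8]]   # 3 points over 2 time steps
chains = sequence(steps, edges=[0.5])         # low band follows low, high follows high
```

Each chain records which value range a feature occupies at every time step, which is exactly the information a time-varying transfer function needs in order to keep classifying the same feature as its values drift.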