Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Nathan D. Fabian is active.

Publication


Featured research published by Nathan D. Fabian.


IEEE Symposium on Large Data Analysis and Visualization | 2011

The ParaView Coprocessing Library: A scalable, general purpose in situ visualization library

Nathan D. Fabian; Kenneth Moreland; David C. Thompson; Andrew C. Bauer; Pat Marion; Berk Geveci; Michel Rasquin; Kenneth E. Jansen

As high performance computing approaches exascale, CPU capability far outpaces disk write speed, and in situ visualization becomes an essential part of an analyst's workflow. In this paper, we describe the ParaView Coprocessing Library, a framework for in situ visualization and analysis coprocessing. We describe how coprocessing algorithms (building on many from VTK) can be linked and executed directly from within a scientific simulation or other applications that need visualization and analysis. We also describe how the ParaView Coprocessing Library can write out partially processed, compressed, or extracted data readable by a traditional visualization application for interactive post-processing. Finally, we demonstrate the library's scalability in a number of real-world scenarios.
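The coprocessing pattern the abstract describes, calling analysis hooks from inside the solver loop and keeping only small extracts instead of writing full fields, can be sketched in a few lines of Python. This is a minimal illustration of the pattern, not the library's actual API; `simulate_step` and `coprocess` are invented names.

```python
import numpy as np

def simulate_step(step, n=64):
    """Toy stand-in for a solver update: returns the full field for this step."""
    x = np.linspace(0.0, 1.0, n)
    return np.sin(2 * np.pi * (x + 0.01 * step))

def coprocess(field, threshold=0.9):
    """In situ hook: reduce the full field to a tiny extract instead of writing it all."""
    return {"max": float(field.max()),
            "cells_above": int((field > threshold).sum())}

def run(num_steps=10):
    extracts = []
    for step in range(num_steps):
        field = simulate_step(step)        # full data lives only in memory
        extracts.append(coprocess(field))  # only the reduced extract is kept
    return extracts

extracts = run()
```

The point of the pattern is that the full field never hits the disk; only the per-step extracts, orders of magnitude smaller, are retained for post-processing.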


Proceedings of the 2nd International Workshop on Petascale Data Analytics: Challenges and Opportunities | 2011

Examples of in transit visualization

Kenneth Moreland; Ron A. Oldfield; Pat Marion; Sébastien Jourdain; Norbert Podhorszki; Venkatram Vishwanath; Nathan D. Fabian; Ciprian Docan; Manish Parashar; Mark Hereld; Michael E. Papka; Scott Klasky

One of the most pressing issues with petascale analysis is the transport of simulation results to meaningful analysis. The traditional workflow prescribes storing the simulation results to disk and later retrieving them for analysis and visualization. However, at petascale this storage of the full results is prohibitive. A solution to this problem is to run the analysis and visualization concurrently with the simulation and bypass the storage of the full results. One mechanism for doing so is in transit visualization, in which analysis and visualization are run on I/O nodes that receive the full simulation results but write out only information from the analysis or provide run-time visualization. This paper describes work in progress on three in transit visualization solutions, each using a different transport mechanism.
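In transit differs from in situ in that the full results do leave the solver, but only as far as dedicated staging resources, where they are reduced before anything is written. A toy sketch of that staging pattern, using a bounded queue and a thread as a stand-in "I/O node" (all names are hypothetical; the paper's actual transport mechanisms operate over the network between compute and I/O nodes):

```python
import queue
import threading
import numpy as np

def in_transit_demo(num_steps=5, n=1024):
    """Toy in transit pipeline: the 'simulation' hands full fields to a staging
    queue; a separate 'I/O node' thread reduces them and keeps only summaries."""
    staging = queue.Queue(maxsize=2)   # bounded, like limited staging memory
    summaries = []

    def analysis():                    # runs on the 'I/O node'
        while True:
            field = staging.get()
            if field is None:          # sentinel: simulation finished
                break
            summaries.append(float(field.mean()))

    worker = threading.Thread(target=analysis)
    worker.start()
    rng = np.random.default_rng(0)
    for step in range(num_steps):
        staging.put(rng.standard_normal(n))  # full results leave the solver...
    staging.put(None)
    worker.join()
    return summaries                   # ...but only reductions are stored

summaries = in_transit_demo()
```

The bounded queue mimics the back-pressure a real staging area imposes: if the analysis falls behind, the simulation blocks rather than overrunning staging memory.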


Proceedings of the First Workshop on In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization | 2015

ParaView Catalyst: Enabling In Situ Data Analysis and Visualization

Utkarsh Ayachit; Andrew C. Bauer; Berk Geveci; Patrick O'Leary; Kenneth Moreland; Nathan D. Fabian; Jeffrey Mauldin

Computer simulations are growing in sophistication and producing results of ever greater fidelity. This trend has been enabled by advances in numerical methods and increasing computing power. Yet these advances come with several costs, including massive increases in data size, difficulties examining output data, challenges in configuring simulation runs, and difficulty debugging running codes. Interactive visualization tools, like ParaView, have been used for post-processing of simulation results. However, the increasing data sizes and limited storage and bandwidth make high fidelity post-processing impractical. In situ analysis is recognized as one of the ways to address these challenges. In situ analysis moves some of the post-processing tasks in line with the simulation code, thus short-circuiting the need to communicate the data between the simulation and analysis via storage. ParaView Catalyst is a data processing and visualization library that enables in situ analysis and visualization. Built on, and designed to interoperate with, the Visualization Toolkit (VTK) and the ParaView application, Catalyst enables simulations to intelligently perform analysis, generate relevant output data, and visualize results concurrently with a running simulation. In this paper, we provide an overview of the Catalyst framework and some of its success stories.


International Conference on Supercomputing | 2014

Evaluation of methods to integrate analysis into a large-scale shock physics code

Ron A. Oldfield; Kenneth Moreland; Nathan D. Fabian; David H. Rogers

Exascale supercomputing will embody many revolutionary changes in the hardware and software of high-performance computing. For example, projected limitations in power and I/O-system performance will fundamentally change visualization and analysis workflows. A traditional post-processing workflow involves storing simulation results to disk and later retrieving them for visualization and data analysis; however, at exascale, post-processing approaches will not be able to capture the volume or granularity of data necessary for analysis of these extreme-scale simulations. As an alternative, researchers are exploring ways to integrate analysis and simulation without using the storage system. In situ and in transit are two options, but there has not been an adequate evaluation of these approaches to identify strengths, weaknesses, and trade-offs at large scale. This paper provides a detailed performance and scaling analysis of a large-scale shock physics code using traditional post-processing, in situ, and in transit analysis to detect material fragments from a simulated explosion.


Archive | 2012

Report of experiments and evidence for ASC L2 milestone 4467: Demonstration of a legacy application's path to exascale

Matthew L. Curry; Kurt Brian Ferreira; Kevin Pedretti; Vitus J. Leung; Kenneth Moreland; Gerald Fredrick Lofstead; Ann C. Gentile; Ruth Klundt; H. Lee Ward; James H. Laros; Karl Scott Hemmert; Nathan D. Fabian; Michael J. Levenhagen; Ronald B. Brightwell; Richard Frederick Barrett; Kyle Bruce Wheeler; Suzanne M. Kelly; Arun F. Rodrigues; James M. Brandt; David C. Thompson; John P. VanDyke; Ron A. Oldfield; Thomas Tucker

This report documents thirteen of Sandia's contributions to the Computational Systems and Software Environment (CSSE) within the Advanced Simulation and Computing (ASC) program between fiscal years 2009 and 2012, and describes their impact on ASC applications. Most contributions are implemented in lower software levels, allowing for application improvement without source code changes. Improvements are identified in such areas as reduced run time, characterizing power usage, and Input/Output (I/O). Other experiments are more forward looking, demonstrating potential bottlenecks using mini-application versions of the legacy codes and simulating their network activity on exascale-class hardware.

The purpose of this report is to demonstrate that the team has completed milestone 4467, Demonstration of a Legacy Application's Path to Exascale. Cielo is expected to be the last capability system on which existing ASC codes can run without significant modifications. This assertion is tested to determine where the breaking point is for an existing highly scalable application. The goal is to stretch the performance boundaries of the application by applying recent CSSE R&D in areas such as resilience, power, I/O, visualization services, SMARTMAP, lightweight kernels (LWKs), virtualization, simulation, and feedback loops. Dedicated system time reservations and/or CCC allocations are used to quantify the impact of system-level changes to extend the life and performance of the ASC code base. Finally, a simulation of anticipated exascale-class hardware is performed using SST to supplement the calculations.

Determining where the breaking point is for an existing highly scalable application: Chapter 15 presents the CSSE work that sought to identify the breaking point in two ASC legacy applications, Charon and CTH. Their mini-app versions were also employed to complete the task. There is no single breaking point, as more than one issue was found with the two codes. The results show that applications can expect to encounter performance issues related to the computing environment, system software, and algorithms. Careful profiling of runtime performance will be needed to identify the source of an issue, in combination with knowledge of system software and application source code.


IEEE Symposium on Large Data Analysis and Visualization | 2012

In situ fragment detection at scale

Nathan D. Fabian

We explore the problem of characterizing fragments in situ, using ParaView with an explosion simulation. By running in situ we can obtain a much higher temporal resolution view of the data, as well as potentially compress the output to only those statistics about fragments we care about. However, the fragment finding must scale as well as the simulation does. In order to reach the necessary scales, we borrow operations the simulation is already performing and take advantage of them within ParaView, demonstrating the resulting improvement in scaling performance.
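Fragment characterization of this kind is, at its core, connected-component labeling on a thresholded field followed by per-fragment statistics. A minimal sketch using SciPy's `ndimage.label` on a toy 2-D grid (the field and threshold are invented for illustration; the paper's implementation runs inside ParaView on distributed simulation data):

```python
import numpy as np
from scipy import ndimage

def find_fragments(density, threshold=0.5):
    """Label connected regions above a density threshold and report per-fragment
    statistics (here just the cell count), rather than the full field."""
    mask = density > threshold
    labels, num = ndimage.label(mask)               # connected-component labeling
    sizes = ndimage.sum(mask, labels, index=range(1, num + 1))
    return num, [int(s) for s in sizes]

# Two separated blobs on a small 2-D grid.
field = np.zeros((8, 8))
field[1:3, 1:3] = 1.0    # fragment of 4 cells
field[5:8, 5:7] = 1.0    # fragment of 6 cells
num, sizes = find_fragments(field)
```

At scale the expensive parts are the distributed labeling and the reductions across ranks, which is where reusing operations the simulation already performs pays off.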


Intelligent User Interfaces | 2015

Data Privacy and Security Considerations for Personal Assistants for Learning (PAL)

Elaine M. Raybourn; Nathan D. Fabian; Warren L. Davis; Raymond C. Parks; Jonathan T. McClain; Derek Trumbo; Damon Regan; Paula J. Durlach

A hypothetical scenario is utilized to explore privacy and security considerations for intelligent systems, such as a Personal Assistant for Learning (PAL). Two categories of potential concerns are addressed: factors facilitated by user models, and factors facilitated by systems. Among the strategies presented for risk mitigation is a call for ongoing, iterative dialog among privacy, security, and personalization researchers during all stages of development, testing, and deployment.


European Conference on Parallel Processing | 2015

Canaries in a Coal Mine: Using Application-level Checkpoints to Detect Memory Failures

Patrick M. Widener; Kurt Brian Ferreira; Scott Levy; Nathan D. Fabian

Memory failures in future extreme scale applications are a significant concern in the high-performance computing community and have attracted much research attention. We contend in this paper that using application checkpoint data to detect memory failures has potential benefits and is preferable to examining application memory. To support this contention, we describe the application of machine learning techniques to evaluate the veracity of checkpoint data. Our preliminary results indicate that supervised decision tree machine learning approaches can effectively detect corruption in restart files, suggesting that future extreme-scale applications and systems may benefit from incorporating such approaches in order to cope with memory failures.
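The supervised approach the abstract describes can be illustrated with scikit-learn: derive a few summary features per "restart file" and train a decision tree to separate clean from corrupted checkpoints. The synthetic data, injected spikes, and features below are invented stand-ins, not the paper's actual data or feature set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_checkpoints(n, corrupt):
    """Synthetic 'restart files': smooth fields, with large spikes injected
    into the corrupted ones to mimic memory bit flips."""
    x = np.linspace(0, 1, 100)
    data = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal((n, 100))
    if corrupt:
        rows = np.arange(n)
        cols = rng.integers(0, 100, size=n)
        data[rows, cols] += rng.choice([-1.0, 1.0], size=n) * 50.0  # bit-flip-like spike
    # Simple per-checkpoint features: max |value|, std, max |neighbor difference|
    return np.column_stack([np.abs(data).max(axis=1),
                            data.std(axis=1),
                            np.abs(np.diff(data, axis=1)).max(axis=1)])

X = np.vstack([make_checkpoints(200, False), make_checkpoints(200, True)])
y = np.array([0] * 200 + [1] * 200)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
accuracy = clf.score(X, y)
```

The appeal of checkpoint-level features is that they are cheap to compute at write time and do not require scanning live application memory.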


Archive | 2009

Detecting Combustion and Flow Features In Situ Using Principal Component Analysis

David C. Thompson; Ray W. Grout; Nathan D. Fabian; Janine C. Bennett

This report presents progress on identifying and classifying features involving combustion in turbulent flow using principal component analysis (PCA) and k-means clustering within an in situ analysis framework. We describe a process for extracting temporally and spatially varying information from the simulation, classifying the information, and then applying the classification algorithm either to other portions of the simulation not used for training the classifier or to further simulations. Because the regions classified as being of interest take up a small portion of the overall simulation domain, it will consume fewer resources to perform further analysis on, or save, these regions at a higher fidelity than previously possible. The implementation of this process is partially complete, and results obtained from PCA of test data are presented that indicate the process may have merit: the basis vectors that PCA provides are significantly different in regions where combustion is occurring, and even when all 21 species of a lifted flame simulation are correlated, the computational cost of PCA is minimal. What remains to be determined is whether k-means (or other) clustering techniques will be able to identify combined combustion and flow features with an accuracy that makes further characterization of these regions feasible and meaningful.
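The PCA-plus-k-means pipeline can be sketched with scikit-learn on synthetic stand-in data; the two "regimes" and their covariance structure below are invented for illustration and are not the report's combustion data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic stand-in for per-cell species data: two regimes ('burning' vs
# 'non-burning') that differ in how the species variables co-vary.
n = 500
quiet = 0.1 * rng.standard_normal((n, 5))
t = rng.standard_normal(n)
burning = np.column_stack([t, 2 * t, -t, 0.5 * t, t]) + 0.1 * rng.standard_normal((n, 5))
samples = np.vstack([quiet, burning + 3.0])   # offset so the regimes separate

# Reduce with PCA, then cluster in the reduced space.
coords = PCA(n_components=2).fit_transform(samples)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# Agreement with the true regime split (up to label permutation):
truth = np.array([0] * n + [1] * n)
agree = max((labels == truth).mean(), (labels != truth).mean())
```

Clustering in the PCA-reduced space rather than on the raw species vectors is what keeps the in situ cost low when many species are carried.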


Data Science and Engineering | 2018

Optimal Compressed Sensing and Reconstruction of Unstructured Mesh Datasets

Maher Salloum; Nathan D. Fabian; David M. Hensinger; Jina Lee; Elizabeth M. Allendorf; Ankit Bhagatwala; Myra L. Blaylock; Jacqueline H. Chen; Jeremy A. Templeton; Irina Kalashnikova Tezaur

Exascale computing promises quantities of data too large to efficiently store and transfer across networks for analysis and visualization. We investigate compressed sensing (CS) as an in situ method to reduce the size of the data as it is being generated during a large-scale simulation. CS works by sampling the data on the computational cluster within an alternative function space such as wavelet bases and then reconstructing back to the original space on visualization platforms. While much work has gone into exploring CS on structured datasets, such as image data, we investigate its usefulness for point clouds such as unstructured mesh datasets often found in finite element simulations. We sample using a technique that exhibits low coherence with tree wavelets found to be suitable for point clouds. We reconstruct using the stagewise orthogonal matching pursuit algorithm, which we improved to facilitate automated use in batch jobs. We analyze the achievable compression ratios and the quality and accuracy of reconstructed results at each compression ratio. In the considered case studies, we are able to achieve compression ratios up to two orders of magnitude with reasonable reconstruction accuracy and minimal visual deterioration in the data. Our results suggest that, compared to other compression techniques, CS is attractive in cases where the compression overhead has to be minimized and where the reconstruction cost is not a significant concern.
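The CS workflow, sampling with a low-coherence matrix in situ and reconstructing with a matching-pursuit solver afterwards, can be sketched with scikit-learn's `OrthogonalMatchingPursuit`. This uses plain OMP rather than the paper's stagewise (StOMP) variant, and a signal that is sparse directly rather than in a tree-wavelet basis; the dimensions are toy values.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# A k-sparse signal (sparse directly here, for brevity; the paper uses
# tree-wavelet bases suited to unstructured point clouds).
n, k, m = 256, 5, 100          # signal length, sparsity, number of samples
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# Random Gaussian sampling matrix (low coherence with the sparsity basis).
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x                     # compressed measurements taken 'in situ'

# Reconstruct on the 'visualization platform' with orthogonal matching pursuit.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
x_hat = omp.coef_
rel_error = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

The asymmetry the abstract notes is visible even in the sketch: the in situ side is a single matrix product, while the reconstruction side pays for an iterative solve.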

Collaboration


Dive into Nathan D. Fabian's collaborations.

Top Co-Authors

Kenneth Moreland (Sandia National Laboratories)
Ron A. Oldfield (Sandia National Laboratories)
David C. Thompson (University of Texas at Austin)
Elaine M. Raybourn (Sandia National Laboratories)
Janine C. Bennett (Sandia National Laboratories)
Bradley Carvey (Sandia National Laboratories)
David H. Rogers (Sandia National Laboratories)
David M. Hensinger (Sandia National Laboratories)
Maher Salloum (Sandia National Laboratories)