Publications


Featured research published by Boonthanome Nouanesengsy.


International Parallel and Distributed Processing Symposium | 2011

A Study of Parallel Particle Tracing for Steady-State and Time-Varying Flow Fields

Tom Peterka; Robert B. Ross; Boonthanome Nouanesengsy; Teng-Yok Lee; Han-Wei Shen; Wesley Kendall; Jian Huang

Particle tracing for streamline and pathline generation is a common method of visualizing vector fields in scientific data, but it is difficult to parallelize efficiently because of demanding and widely varying computational and communication loads. In this paper we scale parallel particle tracing for visualizing steady and unsteady flow fields well beyond previously published results. We configure the 4D domain decomposition into spatial and temporal blocks that combine in-core and out-of-core execution in a flexible way that favors either faster run time or a smaller memory footprint. We also compare static and dynamic partitioning approaches. Strong and weak scaling curves are presented for tests conducted on an IBM Blue Gene/P machine at up to 32K processes using a parallel flow visualization library that we are developing. Datasets are derived from computational fluid dynamics simulations of thermal hydraulics, liquid mixing, and combustion.
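
The kernel at the heart of both streamline and pathline tracing is numerical integration of seed points through the vector field. Below is a minimal sketch of that step, assuming a toy analytic 2D field and fourth-order Runge-Kutta integration; the field, step size, and step count are illustrative stand-ins, not the paper's parallel implementation.

```python
# Minimal sketch of particle tracing with RK4 in a steady 2D vector field.
import numpy as np

def velocity(p):
    """Toy steady vector field: simple circular flow around the origin."""
    x, y = p
    return np.array([-y, x])

def advect_rk4(seed, dt=0.01, steps=500):
    """Trace one streamline from `seed` with RK4 steps of size `dt`."""
    path = [np.asarray(seed, dtype=float)]
    p = path[0]
    for _ in range(steps):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * dt * k1)
        k3 = velocity(p + 0.5 * dt * k2)
        k4 = velocity(p + dt * k3)
        p = p + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(p)
    return np.array(path)

streamline = advect_rk4(seed=(1.0, 0.0))
print(streamline.shape)  # (501, 2)
```

In the parallel setting the paper studies, each process runs this kind of loop over its assigned blocks and hands particles off when they leave the local block, which is what makes the load so irregular.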


IEEE Transactions on Visualization and Computer Graphics | 2011

Load-Balanced Parallel Streamline Generation on Large Scale Vector Fields

Boonthanome Nouanesengsy; Teng-Yok Lee; Han-Wei Shen

Because of the ever-increasing size of output data from scientific simulations, supercomputers are increasingly relied upon to generate visualizations. One use of supercomputers is to generate field lines from large-scale flow fields. When generating field lines in parallel, the vector field is generally decomposed into blocks, which are then assigned to processors. Since various regions of the vector field can have different flow complexity, processors will require varying amounts of computation time to trace their particles, causing load imbalance and thus limiting the performance speedup. To achieve load-balanced streamline generation, we propose a workload-aware partitioning algorithm to decompose the vector field into partitions with near-equal workloads. Since actual workloads are unknown beforehand, we propose a workload estimation algorithm to predict the workload in the local vector field. A graph-based representation of the vector field is employed to generate these estimates. Once the workloads have been estimated, our partitioning algorithm is hierarchically applied to distribute the workload to all partitions. We examine the performance of our workload estimation and workload-aware partitioning algorithms in several timing studies, which demonstrate that these methods achieve better scalability with little overhead.
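
The paper's partitioner works hierarchically from graph-based workload estimates. As a much simpler stand-in for the same idea, the sketch below balances hypothetical per-block workload estimates across partitions with a greedy longest-processing-time heuristic; the random workloads replace the paper's estimation algorithm.

```python
# Greedy balanced partitioning over per-block workload estimates.
import heapq
import numpy as np

rng = np.random.default_rng(0)
block_workloads = rng.exponential(scale=10.0, size=64)  # fake estimates

def balanced_partition(workloads, num_partitions):
    """Assign each block (heaviest first) to the currently lightest partition."""
    heap = [(0.0, p, []) for p in range(num_partitions)]
    heapq.heapify(heap)
    for block in np.argsort(workloads)[::-1]:        # heaviest first
        total, pid, blocks = heapq.heappop(heap)     # lightest partition
        blocks.append(int(block))
        heapq.heappush(heap, (total + workloads[block], pid, blocks))
    return sorted(heap, key=lambda t: t[1])

for total, pid, blocks in balanced_partition(block_workloads, 4):
    print(f"partition {pid}: {len(blocks)} blocks, workload {total:.1f}")
```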


IEEE International Conference on High Performance Computing, Data and Analytics | 2012

Parallel particle advection and FTLE computation for time-varying flow fields

Boonthanome Nouanesengsy; Teng-Yok Lee; Kewei Lu; Han-Wei Shen; Tom Peterka

Flow fields are an important product of scientific simulations. One popular flow visualization technique is particle advection, in which seeds are traced through the flow field. One use of these traces is to compute a powerful analysis tool called the Finite-Time Lyapunov Exponent (FTLE) field, but no existing particle tracing algorithms scale to the particle injection frequency required for high-resolution FTLE analysis. In this paper, a framework to trace the massive number of particles necessary for FTLE computation is presented. A new approach is explored, in which processes are divided into groups, and are responsible for mutually exclusive spans of time. This pipelining over time intervals reduces overall idle time of processes and decreases I/O overhead. Our parallel FTLE framework is capable of advecting hundreds of millions of particles at once, with performance scaling up to tens of thousands of processes.
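
For reference, the FTLE at a grid point is sigma = (1/|T|) ln sqrt(lambda_max(C)), where C is the Cauchy-Green tensor of the flow map over time span T. The sketch below evaluates that standard definition on a toy analytic flow map; the map and the grid resolution are hypothetical stand-ins for the massive parallel advection output the paper produces.

```python
# FTLE from a 2D flow map via the Cauchy-Green tensor.
import numpy as np

T = 1.0
x = np.linspace(0, 1, 128)
X, Y = np.meshgrid(x, x, indexing="ij")
# Fake flow map (shear plus swirl), standing in for real advection output.
PX = X + 0.3 * np.sin(2 * np.pi * Y) * T
PY = Y + 0.2 * np.cos(2 * np.pi * X) * T

def ftle(px, py, dx, t):
    """FTLE = (1/|t|) * ln(sqrt(max eigenvalue of C)), with C = J^T J."""
    dpx_dx, dpx_dy = np.gradient(px, dx, dx)
    dpy_dx, dpy_dy = np.gradient(py, dx, dx)
    # Entries of the symmetric 2x2 Cauchy-Green tensor.
    c11 = dpx_dx**2 + dpy_dx**2
    c12 = dpx_dx * dpx_dy + dpy_dx * dpy_dy
    c22 = dpx_dy**2 + dpy_dy**2
    # Largest eigenvalue of a symmetric 2x2 matrix, in closed form.
    tr, det = c11 + c22, c11 * c22 - c12**2
    lam_max = tr / 2 + np.sqrt(np.maximum(tr**2 / 4 - det, 0))
    return np.log(np.sqrt(np.maximum(lam_max, 1e-12))) / abs(t)

field = ftle(PX, PY, x[1] - x[0], T)
print(field.shape, field.max())
```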


IEEE Symposium on Large Data Analysis and Visualization | 2014

ADR visualization: A generalized framework for ranking large-scale scientific data using Analysis-Driven Refinement

Boonthanome Nouanesengsy; Jonathan Woodring; John Patchett; Kary Myers; James P. Ahrens

Prioritization of data is necessary for managing large-scale scientific data, as the scale of the data implies that there are only enough resources available to process a limited subset of it. For example, data prioritization is used during in situ triage to cope with bandwidth bottlenecks, and during focus+context visualization to save analysis time by guiding the user to important information. In this paper, we present ADR visualization, a generalized analysis framework for ranking large-scale data using Analysis-Driven Refinement (ADR), which is inspired by Adaptive Mesh Refinement (AMR). A large-scale data set is partitioned across space, time, and variables, using user-defined importance measurements for prioritization. This process creates a prioritization tree over the data set. Using this tree, selection methods can generate sparse data products for analysis, such as focus+context visualizations or sparse data sets.
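
As a rough single-machine sketch of the refinement loop (not the paper's framework), the code below repeatedly splits whichever leaf of the prioritization tree scores highest under a user-defined importance measure; variance is used as a stand-in measure, and the 1D data is hypothetical.

```python
# Toy Analysis-Driven Refinement: greedily refine the most important leaf.
import heapq
import numpy as np

def importance(block):
    return float(np.var(block))  # user-defined measure; variance as stand-in

def adr_tree(data, max_leaves=8):
    """Repeatedly split the highest-importance leaf extent in half."""
    leaves = [(-importance(data), (0, data.shape[0]))]
    while len(leaves) < max_leaves:
        neg_imp, (lo, hi) = heapq.heappop(leaves)
        if hi - lo < 2:                        # cannot split further
            heapq.heappush(leaves, (neg_imp, (lo, hi)))
            break
        mid = (lo + hi) // 2
        for a, b in ((lo, mid), (mid, hi)):
            heapq.heappush(leaves, (-importance(data[a:b]), (a, b)))
    return sorted(leaves)                      # most important extents first

rng = np.random.default_rng(1)
signal = np.concatenate([rng.normal(0, 0.1, 512), rng.normal(0, 2.0, 512)])
for neg_imp, extent in adr_tree(signal):
    print(extent, -neg_imp)
```

The high-variance half of the signal ends up refined into smaller extents, which is exactly the behavior a selection method would exploit to emit a sparse data product.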


IEEE Symposium on Large Data Analysis and Visualization | 2012

Flow-guided file layout for out-of-core pathline computation

Chun-Ming Chen; Boonthanome Nouanesengsy; Teng-Yok Lee; Han-Wei Shen

As CPUs become more powerful and storage capacities increase, performing data-intensive visualization computations involving large data on a desktop computer becomes an increasingly viable option. Desktops, though, usually lack the memory capacity required to load such large data at once, and thus the cost of I/O becomes a major bottleneck for the necessary out-of-core computation. Among techniques that reduce runtime I/O cost, reordering the file layout to increase data locality has become popular in recent years. However, file layout techniques for time-varying scientific data, especially time-varying flow fields, have rarely been discussed. In this paper, we evaluate the performance impact of utilizing a file layout method for out-of-core time-varying flow visualization. We extend a graph-based representation of flow fields, originally developed for static vector fields, to time-varying flow fields, and apply a graph layout algorithm to order data blocks to be written to disk. Benefits from the generated file layouts are evaluated using various parameters and seeding scenarios.
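
A minimal sketch of the flow-guided idea, under strong simplifications: treat blocks as graph nodes weighted by how often particles cross between them, and order the blocks so strongly connected ones land near each other on disk. The transition counts below are random stand-ins, and the greedy walk replaces the paper's graph layout algorithm.

```python
# Order data blocks by flow-transition affinity to improve I/O locality.
import numpy as np

rng = np.random.default_rng(2)
n_blocks = 16
transitions = rng.integers(0, 50, size=(n_blocks, n_blocks))  # fake counts

def flow_guided_order(weights):
    """Greedy walk: always append the unvisited block with the strongest
    transition edge from the current one."""
    order = [int(np.argmax(weights.sum(axis=1)))]   # start at busiest block
    remaining = set(range(len(weights))) - set(order)
    while remaining:
        cur = order[-1]
        nxt = max(remaining, key=lambda b: weights[cur, b])
        order.append(nxt)
        remaining.remove(nxt)
    return order

print(flow_guided_order(transitions))
```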


Workshop on Ultrascale Visualization | 2008

Petascale visualization: Approaches and initial results

James P. Ahrens; Li-Ta Lo; Boonthanome Nouanesengsy; John Patchett; Allen McPherson

With the advent of the first petascale supercomputer, Los Alamos's Roadrunner, there is a pressing need to address how to visualize petascale data. The crux of the petascale visualization performance problem is interactive rendering, since it is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processing units (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. In this work, we evaluated the rendering performance of multi-core CPU and GPU-based processors. To achieve high performance on multi-core processors, we tested multi-core-optimized raytracing engines for rendering. For real-world performance testing, and to prepare for petascale visualization tasks, we interfaced these rendering engines with VTK and ParaView. Initial results show that rendering software optimized for multi-core CPU processors provides performance competitive with GPUs for the parallel rendering of massive data. The current multi-core architectural trend suggests that multi-core-based supercomputers can provide interactive visualization and rendering support now and in the future.
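
A rough sketch of this style of rendering benchmark using VTK's Python bindings (assuming the vtk package is available); the geometry, image size, and frame count are arbitrary stand-ins for the paper's massive datasets and engines.

```python
# Time off-screen rendering throughput of a simple VTK pipeline.
import time
import vtk

source = vtk.vtkSphereSource()
source.SetThetaResolution(512)    # enough geometry to make timing meaningful
source.SetPhiResolution(512)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(source.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.SetOffScreenRendering(1)   # benchmark without opening a window
window.AddRenderer(renderer)
window.SetSize(1024, 1024)

window.Render()                   # first frame pays pipeline-setup cost
camera = renderer.GetActiveCamera()
frames = 50
start = time.perf_counter()
for _ in range(frames):
    camera.Azimuth(360.0 / frames)  # rotate so every frame does real work
    window.Render()
elapsed = time.perf_counter() - start
print(f"{frames / elapsed:.1f} frames/sec")
```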


Eurographics Workshop on Parallel Graphics and Visualization | 2011

Revisiting parallel rendering for shared memory machines

Boonthanome Nouanesengsy; James P. Ahrens; Jonathan Woodring; Han-Wei Shen

Increasing CPU core counts to improve computational performance has been a significant trend for the better part of a decade. This has led to an unprecedented availability of large shared memory machines. Programming paradigms and systems are shifting to take advantage of this architectural change, so that intra-node parallelism can be fully utilized. Algorithms designed for parallel execution on distributed systems will also need to be modified to scale in these new shared and hybrid memory systems. In this paper, we reinvestigate parallel rendering algorithms with the goal of finding one that achieves favorable performance in this new environment. We test and analyze various methods, including sort-first, sort-last, and a hybrid scheme, to find an optimal parallel algorithm that maximizes shared memory performance.
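
For context, the compositing step that dominates sort-last rendering can be sketched in a few lines: each worker produces a full-size color and depth buffer for its share of the data, and the buffers are merged per pixel by depth test. The tiny random buffers below are stand-ins for real framebuffers.

```python
# Sort-last depth compositing of per-worker color and depth buffers.
import numpy as np

rng = np.random.default_rng(3)
h, w, workers = 4, 4, 3
colors = rng.random((workers, h, w, 3))   # per-worker RGB buffers
depths = rng.random((workers, h, w))      # per-worker z-buffers

def depth_composite(colors, depths):
    """Keep, per pixel, the color from the worker with the nearest depth."""
    nearest = np.argmin(depths, axis=0)   # (h, w) index of winning worker
    return np.take_along_axis(
        colors, nearest[None, :, :, None], axis=0)[0]

final = depth_composite(colors, depths)
print(final.shape)  # (4, 4, 3)
```

In a shared memory setting this merge can be done in place over threads' buffers, which is part of what makes revisiting the classic sort-first/sort-last trade-off worthwhile.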


IEEE VGTC Conference on Visualization | 2009

Visual analysis of brain activity from fMRI data

Firdaus Janoos; Boonthanome Nouanesengsy; Raghu Machiraju; Han-Wei Shen; Steffen Sammet; Michael V. Knopp

Classically, analysis of the time-varying data acquired during fMRI experiments is done using static activation maps, obtained by testing voxels for the presence of significant activity using statistical methods. The models used in these analysis methods have a number of parameters, which profoundly impact the detection of active brain areas. It is also hard to study the temporal dependencies and cascading effects of brain activation from these static maps. In this paper, we propose a methodology to visually analyze the time dimension of brain function with a minimum amount of processing, allowing neurologists to verify the correctness of the analysis results and develop a better understanding of the temporal characteristics of functional behavior. The system allows studying time-series data through specific volumes of interest in the brain cortex, the selection of which is guided by a hierarchical clustering algorithm performed in the wavelet domain. We also demonstrate the utility of this tool by presenting results on a real dataset.
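
A small sketch of the clustering idea, assuming PyWavelets and SciPy are available: transform each voxel time series into wavelet coefficients, then hierarchically cluster the coefficient vectors. The synthetic anti-phase signals stand in for real fMRI data, and the wavelet and linkage choices are illustrative, not the paper's.

```python
# Hierarchical clustering of voxel time series in the wavelet domain.
import numpy as np
import pywt
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(4)
n_voxels, n_timepoints = 40, 128
base = np.sin(np.linspace(0, 8 * np.pi, n_timepoints))
signals = np.stack([
    base * (1 if v < 20 else -1) + 0.3 * rng.standard_normal(n_timepoints)
    for v in range(n_voxels)
])

# Wavelet features: concatenated multilevel DWT coefficients per voxel.
features = np.stack([
    np.concatenate(pywt.wavedec(s, "db4", level=3)) for s in signals
])

tree = linkage(features, method="ward")    # hierarchical clustering
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)                              # recovers the two anti-phase groups
```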


IEEE Symposium on Visual Analytics Science and Technology | 2009

Using projection and 2D plots to visually reveal genetic mechanisms of complex human disorders

Boonthanome Nouanesengsy; Sang-Cheol Seok; Han-Wei Shen; Veronica J. Vieland

Gene mapping is a statistical method used to localize human disease genes to particular regions of the human genome. When performing such analysis, a genetic likelihood space is generated and sampled, which results in a multidimensional scalar field. Researchers are interested in exploring this likelihood space through visualization. Previous efforts at visualizing this space, though, were slow and cumbersome, showing only a small portion of the space at a time and thus requiring the user to keep a mental picture of several views. We have developed a new technique that displays much more data at once by projecting the multidimensional data into several 2D plots. One plot is created for each parameter, showing how the likelihood changes along that parameter. A radial projection is used to create another plot that provides an overview of the high-dimensional surface from the perspective of a single point. Linking and brushing between all the plots is used to determine relationships between parameters. We demonstrate our techniques on real-world autism data, showing how to visually examine features of the high-dimensional space.
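
A toy sketch of the radial projection idea: evaluate the scalar field along random unit directions from a selected point and plot value against distance, giving a one-point overview of the high-dimensional surface. The quadratic likelihood here is a hypothetical stand-in for a genetic likelihood space.

```python
# Radial projection: field values along random rays from a chosen point.
import numpy as np
import matplotlib.pyplot as plt

def likelihood(p):
    return -np.sum(p**2, axis=-1)      # fake unimodal likelihood surface

rng = np.random.default_rng(5)
dim, n_rays, n_steps = 6, 64, 50
center = np.zeros(dim)
dirs = rng.standard_normal((n_rays, dim))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
radii = np.linspace(0, 2, n_steps)

# Sample the field at center + r * direction for every ray and radius.
points = center + radii[None, :, None] * dirs[:, None, :]
values = likelihood(points)            # shape (n_rays, n_steps)

plt.plot(radii, values.T, color="steelblue", alpha=0.3)
plt.xlabel("distance from selected point")
plt.ylabel("log-likelihood (toy)")
plt.savefig("radial_projection.png")
```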


Archive | 2016

2016 CSSE L3 Milestone: Deliver In Situ to XTD End Users

John Patchett; Boonthanome Nouanesengsy; Patricia K. Fasel; James P. Ahrens

This report summarizes FY16 activities toward satisfying the CSSE 2016 L3 milestone to deliver in situ analysis to XTD end users of EAP codes. The milestone was accomplished, with ongoing work to ensure the capability is maintained and developed. Two XTD end users used the in situ capability in Rage. Two new capabilities were added to ParaView in support of an EAP in situ workflow, and we worked with various support groups at the lab to deploy a production ParaView in the LANL environment for both desktop and HPC systems. In addition, for this milestone, we moved two VTK-based filters from research objects into the production ParaView code to support a variety of standard visualization pipelines for our EAP codes.

Collaboration


Dive into Boonthanome Nouanesengsy's collaborations.

Top Co-Authors

James P. Ahrens, Los Alamos National Laboratory
John Patchett, Los Alamos National Laboratory
Dean N. Williams, Lawrence Livermore National Laboratory
Charles Doutriaux, Lawrence Livermore National Laboratory
Jeffrey F. Painter, Lawrence Livermore National Laboratory