Publication


Featured research published by Teng-Yok Lee.


IEEE Transactions on Visualization and Computer Graphics | 2010

An Information-Theoretic Framework for Flow Visualization

Lijie Xu; Teng-Yok Lee; Han-Wei Shen

The process of visualization can be seen as a visual communication channel where the input to the channel is the raw data, and the output is the result of a visualization algorithm. From this point of view, we can evaluate the effectiveness of visualization by measuring how much information in the original data is being communicated through the visual communication channel. In this paper, we present an information-theoretic framework for flow visualization with a special focus on streamline generation. In our framework, a vector field is modeled as a distribution of directions from which Shannon's entropy is used to measure the information content in the field. The effectiveness of the streamlines displayed in visualization can be measured by first constructing a new distribution of vectors derived from the existing streamlines, and then comparing this distribution with that of the original data set using the conditional entropy. The conditional entropy between these two distributions indicates how much information in the original data remains hidden after the selected streamlines are displayed. The quality of the visualization can be improved by progressively introducing new streamlines until the conditional entropy converges to a small value. We describe the key components of our framework with detailed analysis, and show that the framework can effectively visualize 2D and 3D flow data.
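The entropy measure described above can be sketched in a few lines: quantize the vector directions into angular bins and compute Shannon's entropy of the resulting distribution. The bin count, the 2D restriction, and the test field below are illustrative assumptions, not the paper's exact discretization.

```python
import numpy as np

def direction_entropy(vx, vy, n_bins=60):
    """Shannon entropy (in bits) of the direction histogram of a 2D vector field."""
    angles = np.arctan2(vy, vx).ravel()               # directions in [-pi, pi]
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()                             # empirical direction distribution
    p = p[p > 0]                                      # skip empty bins (0 * log 0 = 0)
    return float(-np.sum(p * np.log2(p)))

# A uniform field concentrates all directions in a single bin, so its entropy is zero;
# more complex fields spread mass over many bins and score higher.
vx, vy = np.ones((32, 32)), np.zeros((32, 32))
print(direction_entropy(vx, vy) == 0.0)  # True
```

A field with two opposing directions splits its mass over two bins and scores one bit, matching the intuition that entropy grows with directional complexity.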


International Parallel and Distributed Processing Symposium | 2011

A Study of Parallel Particle Tracing for Steady-State and Time-Varying Flow Fields

Tom Peterka; Robert B. Ross; Boonthanome Nouanesengsy; Teng-Yok Lee; Han-Wei Shen; Wesley Kendall; Jian Huang

Particle tracing for streamline and path line generation is a common method of visualizing vector fields in scientific data, but it is difficult to parallelize efficiently because of demanding and widely varying computational and communication loads. In this paper we scale parallel particle tracing for visualizing steady and unsteady flow fields well beyond previously published results. We configure the 4D domain decomposition into spatial and temporal blocks that combine in-core and out-of-core execution in a flexible way that favors faster run time or smaller memory. We also compare static and dynamic partitioning approaches. Strong and weak scaling curves are presented for tests conducted on an IBM Blue Gene/P machine at up to 32 K processes using a parallel flow visualization library that we are developing. Datasets are derived from computational fluid dynamics simulations of thermal hydraulics, liquid mixing, and combustion.
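The core operation being parallelized above is the advection of individual particles through a vector field. A minimal serial sketch using fourth-order Runge-Kutta integration is shown below; the rotational field is an illustrative stand-in for the simulation data the paper uses, not part of its method.

```python
import numpy as np

def velocity(p):
    """Steady 2D test field: counter-clockwise rotation about the origin."""
    x, y = p
    return np.array([-y, x])

def advect(seed, dt=0.01, n_steps=628):
    """Trace one particle with RK4 steps; returns the polyline of positions."""
    trace = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        p = trace[-1]
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * dt * k1)
        k3 = velocity(p + 0.5 * dt * k2)
        k4 = velocity(p + dt * k3)
        trace.append(p + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(trace)

# In a pure rotation, particles stay on circles: the radius is preserved.
trace = advect([1.0, 0.0])
print(np.allclose(np.linalg.norm(trace, axis=1), 1.0, atol=1e-6))  # True
```

In the parallel setting, each process runs this loop for the particles inside its block and hands particles to neighboring blocks when they cross a boundary, which is where the communication load the paper studies comes from.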


IEEE Transactions on Visualization and Computer Graphics | 2011

Load-Balanced Parallel Streamline Generation on Large Scale Vector Fields

Boonthanome Nouanesengsy; Teng-Yok Lee; Han-Wei Shen

Because of the ever increasing size of output data from scientific simulations, supercomputers are increasingly relied upon to generate visualizations. One use of supercomputers is to generate field lines from large scale flow fields. When generating field lines in parallel, the vector field is generally decomposed into blocks, which are then assigned to processors. Since various regions of the vector field can have different flow complexity, processors will require varying amounts of computation time to trace their particles, causing load imbalance, and thus limiting the performance speedup. To achieve load-balanced streamline generation, we propose a workload-aware partitioning algorithm to decompose the vector field into partitions with near equal workloads. Since actual workloads are unknown beforehand, we propose a workload estimation algorithm to predict the workload in the local vector field. A graph-based representation of the vector field is employed to generate these estimates. Once the workloads have been estimated, our partitioning algorithm is hierarchically applied to distribute the workload to all partitions. We examine the performance of our workload estimation and workload-aware partitioning algorithm in several timing studies, which demonstrate that by employing these methods, better scalability can be achieved with little overhead.
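To make the load-balancing goal concrete, the sketch below assigns blocks with estimated workloads to processors using a simple greedy longest-processing-time heuristic. This is not the paper's hierarchical, graph-based partitioner; it only illustrates the objective of equalizing per-processor workload given the estimates.

```python
import heapq

def partition_blocks(workloads, n_procs):
    """Greedily assign blocks (heaviest first) to the currently least-loaded processor."""
    heap = [(0.0, p) for p in range(n_procs)]        # (current load, processor id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_procs)]
    for block, w in sorted(enumerate(workloads), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)                # least-loaded processor
        assignment[p].append(block)
        heapq.heappush(heap, (load + w, p))
    return assignment

loads = [9, 7, 6, 5, 4, 3]                           # hypothetical per-block estimates
parts = partition_blocks(loads, 2)
print([sum(loads[b] for b in blocks) for blocks in parts])  # [17, 17]
```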


IEEE Transactions on Visualization and Computer Graphics | 2009

Visualization and Exploration of Temporal Trend Relationships in Multivariate Time-Varying Data

Teng-Yok Lee; Han-Wei Shen

We present a new algorithm to explore and visualize multivariate time-varying data sets. We identify important trend relationships among the variables based on how the values of the variables change over time and how those changes are related to each other in different spatial regions and time intervals. The trend relationships can be used to describe the correlation and causal effects among the different variables. To identify the temporal trends from a local region, we design a new algorithm called SUBDTW to estimate when a trend appears and vanishes in a given time series. Based on the beginning and ending times of the trends, their temporal relationships can be modeled as a state machine representing the trend sequence. Since a scientific data set usually contains millions of data points, we propose an algorithm to extract important trend relationships in linear time complexity. We design novel user interfaces to explore the trend relationships, to visualize their temporal characteristics, and to display their spatial distributions. We use several scientific data sets to test our algorithm and demonstrate its utility.
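The idea of locating when a trend appears in a longer series can be illustrated with textbook subsequence dynamic time warping, where the match is allowed to start at any position. This is a simplified stand-in for SUBDTW, whose exact formulation may differ; the pattern and series below are illustrative.

```python
import numpy as np

def subsequence_dtw_end(pattern, series):
    """Return (cost, end index) of the best-matching subsequence of series."""
    m, n = len(pattern), len(series)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, :] = 0.0                                    # the match may start anywhere
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = abs(pattern[i - 1] - series[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    end = int(np.argmin(D[m, 1:]))                   # 0-based end position in series
    return float(D[m, 1 + end]), end

series = [0, 0, 0, 1, 2, 3, 0, 0]                    # a rising trend spans t = 3..5
cost, end = subsequence_dtw_end([1, 2, 3], series)
print(cost, end)  # 0.0 5
```

Reading off the end position (and, with backtracking, the start position) of the optimal alignment is what turns a distance computation into an estimate of when the trend appears and vanishes.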


IEEE Pacific Visualization Symposium | 2011

View point evaluation and streamline filtering for flow visualization

Teng-Yok Lee; Oleg Mishchenko; Han-Wei Shen; Roger Crawfis

Visualization of flow fields with geometric primitives is often challenging due to occlusion that is inevitably introduced by 3D streamlines. In this paper, we present a novel view-dependent algorithm that can minimize occlusion and reveal important flow features for three dimensional flow fields. To analyze regions of higher importance, we utilize Shannon's entropy as a measure of vector complexity. An entropy field in the form of a three dimensional volume is extracted from the input vector field. To utilize this view-independent complexity measure for view-dependent calculations, we introduce the notion of a maximal entropy projection (MEP) framebuffer, which stores maximal entropy values as well as the corresponding depth values for a given viewpoint. With this information, we develop a view-dependent streamline selection algorithm that can evaluate and choose streamlines that will cause minimum occlusion to regions of higher importance. Based on a similar concept, we also propose a viewpoint selection algorithm that works hand-in-hand with our streamline selection algorithm to maximize the visibility of high complexity regions in the flow field.
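For an orthographic view down the z axis, the MEP framebuffer reduces to a per-pixel maximum over the ray with the depth where that maximum occurs. The sketch below assumes that simplified camera setup and a synthetic entropy volume; the paper's renderer handles arbitrary viewpoints.

```python
import numpy as np

def mep_framebuffer(entropy_volume):
    """Maximal entropy projection for a z-axis orthographic view.

    Returns (max entropy image, depth image) over the z axis of a
    volume indexed as (x, y, z).
    """
    max_entropy = entropy_volume.max(axis=2)         # per-pixel maximum along the ray
    depth = entropy_volume.argmax(axis=2)            # z index where the maximum occurs
    return max_entropy, depth

vol = np.zeros((4, 4, 8))
vol[1, 2, 5] = 0.9                                   # one high-entropy voxel
mep, depth = mep_framebuffer(vol)
print(mep[1, 2], depth[1, 2])  # 0.9 5
```

Streamline selection can then test candidate streamlines against this buffer: a streamline whose projected depth lies in front of a high-entropy pixel would occlude an important region and is penalized.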


IEEE International Conference on High Performance Computing, Data and Analytics | 2012

Parallel particle advection and FTLE computation for time-varying flow fields

Boonthanome Nouanesengsy; Teng-Yok Lee; Kewei Lu; Han-Wei Shen; Tom Peterka

Flow fields are an important product of scientific simulations. One popular flow visualization technique is particle advection, in which seeds are traced through the flow field. One use of these traces is to compute a powerful analysis tool called the Finite-Time Lyapunov Exponent (FTLE) field, but no existing particle tracing algorithms scale to the particle injection frequency required for high-resolution FTLE analysis. In this paper, a framework to trace the massive number of particles necessary for FTLE computation is presented. A new approach is explored, in which processes are divided into groups, and are responsible for mutually exclusive spans of time. This pipelining over time intervals reduces overall idle time of processes and decreases I/O overhead. Our parallel FTLE framework is capable of advecting hundreds of millions of particles at once, with performance scaling up to tens of thousands of processes.
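The per-seed FTLE computation that the massive particle traces feed into can be sketched on a small steady 2D field: advect a grid of seeds, take finite differences of the flow map, and extract the largest eigenvalue of the Cauchy-Green tensor. The Euler integrator, grid resolution, and linear saddle field are assumptions for illustration; the paper targets time-varying fields at scale.

```python
import numpy as np

def ftle_field(velocity, xs, ys, T=1.0, n_steps=100):
    """FTLE on a seed grid for a steady 2D field, via forward Euler advection."""
    dt = T / n_steps
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    px, py = X.copy(), Y.copy()
    for _ in range(n_steps):
        vx, vy = velocity(px, py)
        px, py = px + dt * vx, py + dt * vy
    # Flow-map gradient by finite differences on the seed grid.
    dxdx = np.gradient(px, xs, axis=0); dxdy = np.gradient(px, ys, axis=1)
    dydx = np.gradient(py, xs, axis=0); dydy = np.gradient(py, ys, axis=1)
    # Largest eigenvalue of the Cauchy-Green tensor F^T F at each seed.
    c11 = dxdx**2 + dydx**2
    c12 = dxdx * dxdy + dydx * dydy
    c22 = dxdy**2 + dydy**2
    lam = 0.5 * (c11 + c22) + np.sqrt(0.25 * (c11 - c22)**2 + c12**2)
    return np.log(np.sqrt(lam)) / abs(T)

xs = ys = np.linspace(-1, 1, 21)
f = ftle_field(lambda x, y: (x, -y), xs, ys)         # linear saddle: stretching in x
print(np.allclose(f, 1.0, atol=1e-2))  # True (FTLE of this saddle is 1)
```

The need for dense seed grids and frequent particle injection in this computation is exactly why the paper pipelines process groups over time intervals.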


IEEE Pacific Visualization Symposium | 2013

Exploring vector fields with distribution-based streamline analysis

Kewei Lu; Abon Chaudhuri; Teng-Yok Lee; Han-Wei Shen; Pak Chung Wong

Streamline-based techniques are designed based on the idea that properties of streamlines are indicative of features in the underlying field. In this paper, we show that statistical distributions of measurements along the trajectory of a streamline can be used as a robust and effective descriptor to measure the similarity between streamlines. With the distribution-based approach, we present a framework for interactive exploration of 3D vector fields with streamline query and clustering. Streamline queries allow us to rapidly identify streamlines that share similar geometric features to the target streamline. Streamline clustering allows us to group together streamlines of similar shapes. Based on users selection, different clusters with different features at different levels of detail can be visualized to highlight features in 3D flow fields. We demonstrate the utility of our framework with simulation data sets of varying nature and size.
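A toy version of the distribution descriptor: summarize each streamline by the histogram of a pointwise measurement along its trajectory (here, segment direction angle) and compare streamlines by a histogram distance. The choice of measurement, bin count, and total-variation distance are assumptions for this sketch, not the paper's exact design.

```python
import numpy as np

def streamline_histogram(points, n_bins=16):
    """Normalized histogram of segment direction angles along a 2D polyline."""
    seg = np.diff(np.asarray(points, dtype=float), axis=0)
    angles = np.arctan2(seg[:, 1], seg[:, 0])
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    return hist / hist.sum()

def histogram_distance(h1, h2):
    """Total variation distance between two normalized histograms."""
    return float(0.5 * np.abs(h1 - h2).sum())

t = np.linspace(0, 2 * np.pi, 200)
circle = np.c_[np.cos(t), np.sin(t)]                 # closed loop: all directions occur
line = np.c_[t, np.zeros_like(t)]                    # straight line: one direction
d_same = histogram_distance(streamline_histogram(line), streamline_histogram(line))
d_diff = histogram_distance(streamline_histogram(line), streamline_histogram(circle))
print(d_same, 0.5 < d_diff < 1.0)  # 0.0 True
```

Queries and clustering then operate in this descriptor space: nearest neighbors under the histogram distance are streamlines of similar shape.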


IEEE Pacific Visualization Symposium | 2009

Visualizing time-varying features with TAC-based distance fields

Teng-Yok Lee; Han-Wei Shen

To analyze time-varying data sets, tracking features over time is often necessary to better understand the dynamic nature of the underlying physical process. Tracking 3D time-varying features, however, is non-trivial when the boundaries of the features cannot be easily defined. In this paper, we propose a new framework to visualize time-varying features and their motion without explicit feature segmentation and tracking. In our framework, a time-varying feature is described by a time series or Time Activity Curve (TAC). To compute the distance, or similarity, between a voxel's time series and the feature, we use the Dynamic Time Warping (DTW) distance metric. The purpose of DTW is to compare the shape similarity between two time series with an optimal warping of time so that the phase shift of the feature in time can be accounted for. After applying DTW to compare each voxel's time series with the feature, a time-invariant distance field can be computed. The amount of time warping required for each voxel to match the feature provides an estimate of the time when the feature is most likely to occur. Based on the TAC-based distance field, several visualization methods can be derived to highlight the position and motion of the feature. We present several case studies to demonstrate and compare the effectiveness of our framework.
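The DTW distance at the heart of the framework is the standard textbook recurrence; the implementation below is a minimal sketch of that metric, not the paper's exact code, and the pulse series are illustrative.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1D time series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# A phase-shifted copy of a pulse matches with zero cost under warping,
# which is exactly why DTW can absorb the feature's phase shift in time;
# a flat series cannot be warped into the pulse.
pulse = [0, 0, 1, 2, 1, 0, 0]
shifted = [0, 1, 2, 1, 0, 0, 0]
print(dtw_distance(pulse, shifted), dtw_distance(pulse, [0] * 7))  # 0.0 4.0
```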


IEEE Transactions on Visualization and Computer Graphics | 2013

Efficient Local Statistical Analysis via Integral Histograms with Discrete Wavelet Transform

Teng-Yok Lee; Han-Wei Shen

Histograms computed from local regions are commonly used in many visualization applications, and allowing the user to query histograms interactively in regions of arbitrary locations and sizes plays an important role in feature identification and tracking. Computing histograms in regions with arbitrary location and size, nevertheless, can be time consuming for large data sets since it involves expensive I/O and scan of data elements. To achieve both performance- and storage-efficient query of local histograms, we present a new algorithm called WaveletSAT, which utilizes integral histograms, an extension of the summed area tables (SAT), and discrete wavelet transform (DWT). Similar to SAT, an integral histogram is the histogram computed from the area between each grid point and the grid origin, which can be pre-computed to support fast query. Nevertheless, because one histogram contains multiple bins, it will be very expensive to store one integral histogram at each grid point. To reduce the storage cost for large integral histograms, WaveletSAT treats the integral histograms of all grid points as multiple SATs, each of which can be converted into a sparse representation via DWT, allowing the reconstruction of axis-aligned region histograms of arbitrary sizes from a limited number of wavelet coefficients. In addition, we present an efficient wavelet transform algorithm for SATs that can operate on each grid point separately in logarithmic time complexity, which can be extended to parallel GPU-based implementation. With theoretical and empirical demonstration, we show that WaveletSAT can achieve fast preprocessing and smaller storage overhead than the conventional integral histogram approach with close query performance.
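The integral-histogram query that WaveletSAT compresses is easy to see in 2D: keep one summed area table per histogram bin, and assemble any axis-aligned region's histogram from four corner lookups per bin. The tiny data array below is illustrative; WaveletSAT's contribution is storing these SATs sparsely via the DWT rather than densely as here.

```python
import numpy as np

def build_integral_histogram(data, n_bins):
    """Per-bin summed area tables for an integer-valued 2D array."""
    binned = np.eye(n_bins)[data]                    # one-hot encode: (H, W, n_bins)
    return binned.cumsum(axis=0).cumsum(axis=1)      # cumulative counts per bin

def region_histogram(ih, r0, r1, c0, c1):
    """Histogram of data[r0:r1, c0:c1] from four corner lookups (exclusive upper bounds)."""
    h = ih[r1 - 1, c1 - 1].copy()
    if r0 > 0: h -= ih[r0 - 1, c1 - 1]
    if c0 > 0: h -= ih[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: h += ih[r0 - 1, c0 - 1]
    return h

data = np.array([[0, 1], [2, 1]])
ih = build_integral_histogram(data, n_bins=3)
print(region_histogram(ih, 0, 2, 0, 2))  # [1. 2. 1.]
```

The query cost is constant per bin regardless of region size, which is the property that makes interactive exploration feasible; the storage blow-up of one SAT per bin is what the wavelet representation addresses.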


IEEE Symposium on Large Data Analysis and Visualization | 2012

Salient time steps selection from large scale time-varying data sets with dynamic time warping

Xin Tong; Teng-Yok Lee; Han-Wei Shen

Empowered by the rapid advance of high-performance computer architectures and software, it is now possible for scientists to perform high resolution simulations with unprecedented accuracy. Nowadays, the total size of data from a large-scale simulation can easily exceed hundreds of terabytes or even petabytes, distributed over a large number of time steps. The sheer size of data makes it difficult to perform post analysis and visualization after the computation is completed. Frequently, large amounts of valuable data produced from simulations are discarded, or left on disk unanalyzed. In this paper, we present a novel technique that can retrieve the most salient time steps, or key time steps, from large scale time-varying data sets. To achieve this goal, we develop a new time warping technique with an efficient dynamic programming scheme to map the whole sequence into an arbitrary number of time steps specified by the user. A novel contribution of our dynamic programming scheme is that the mapping between the whole time sequence and the key time steps is globally optimal, and hence the information loss is minimum. We propose a high performance algorithm to solve the dynamic programming problem that makes the selection of key times run in real time. Based on the technique, we create a visualization system that allows the user to browse time varying data at arbitrary levels of temporal detail. Because of the low computational complexity of this algorithm, the tool can help the user explore time varying data interactively and hierarchically. We demonstrate the utility of our algorithm by showing results from different time-varying data sets.
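A heavily simplified sketch of globally optimal key time step selection via dynamic programming: split the sequence into k contiguous segments and represent each by its best medoid, minimizing total representation error. Scalar per-step signatures and an absolute-difference cost are assumptions here; the paper's formulation is a time-warping mapping, not this segmentation.

```python
import numpy as np

def segment_cost(x, i, j):
    """Minimum total |x[t] - rep| over t in [i, j), across candidate reps in the segment."""
    seg = x[i:j]
    return min(float(np.abs(seg - r).sum()) for r in seg)

def select_key_steps(x, k):
    """Globally optimal split of x into k segments; returns (medoid indices, total cost)."""
    n = len(x)
    D = np.full((k + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    split = np.zeros((k + 1, n + 1), dtype=int)
    for s in range(1, k + 1):
        for j in range(s, n + 1):
            for i in range(s - 1, j):
                c = D[s - 1, i] + segment_cost(x, i, j)
                if c < D[s, j]:
                    D[s, j], split[s, j] = c, i
    # Backtrack segment boundaries, then pick each segment's medoid time step.
    bounds, j = [], n
    for s in range(k, 0, -1):
        i = split[s, j]; bounds.append((i, j)); j = i
    keys = []
    for i, j in reversed(bounds):
        reps = [float(np.abs(x[i:j] - x[t]).sum()) for t in range(i, j)]
        keys.append(i + int(np.argmin(reps)))
    return keys, float(D[k, n])

x = np.array([0.0, 0.0, 0.0, 5.0, 5.0, 5.0])         # two constant regimes
keys, cost = select_key_steps(x, 2)
print(keys, cost)  # [0, 3] 0.0
```

Because the tables are filled exhaustively, the chosen boundaries minimize the total cost over all possible splits, mirroring the global-optimality claim above on this simplified objective.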

Collaboration


Dive into Teng-Yok Lee's collaborations.

Top Co-Authors

Tom Peterka, Argonne National Laboratory
Kewei Lu, Ohio State University
Pak Chung Wong, Pacific Northwest National Laboratory
Xin Tong, Ohio State University
Lijie Xu, Ohio State University
Samson Hagos, Pacific Northwest National Laboratory