Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Chongke Bi is active.

Publication


Featured research published by Chongke Bi.


IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV) | 2013

Proper orthogonal decomposition based parallel compression for visualizing big data on the K computer

Chongke Bi; Kenji Ono; Kwan-Liu Ma; Haiyuan Wu; Toshiyuki Imamura

The development of supercomputers has greatly helped us to carry out large-scale computing for a wide range of problems through simulation and analysis. Visualization is an indispensable tool for understanding the properties of the data produced by supercomputers. In particular, interactive visualization lets us analyze data from various viewpoints and discover small, local, but important features. However, it remains difficult to interactively visualize such big data directly because of slow file I/O and limited memory. To address these problems, we propose a parallel compression method that reduces data size at low computational cost; its fast linear decompression is a further merit for interactive visualization. Our method compresses data with proper orthogonal decomposition (POD), which effectively extracts important features and yields compressed data that can be decompressed linearly. Our implementation achieves high parallel efficiency with a binary load-distributed approach similar to the binary-swap image composition used in parallel volume rendering [2]. This approach allows us to utilize all processing nodes effectively and to reduce interprocessor communication cost throughout the parallel compression. Test results on the K computer demonstrate the superior performance of our design and implementation.
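
A minimal single-node sketch of the POD idea is shown below, assuming a snapshot matrix with one column per time step; it illustrates the two properties the abstract relies on, feature extraction via dominant modes and linear decompression, but not the paper's binary load-distributed parallelization. All array sizes and the retained mode count are illustrative.

```python
import numpy as np

# Minimal single-node sketch of POD-style compression (not the paper's
# parallel implementation). A time-varying scalar field is stored as a
# snapshot matrix X with one column per time step.
rng = np.random.default_rng(0)
n_points, n_steps = 10_000, 64          # illustrative sizes (assumption)
X = rng.standard_normal((n_points, n_steps))

# POD via thin SVD: the columns of U are the POD modes (dominant features).
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Keep only the leading r modes; the compressed form is (basis, coefficients).
r = 8                                    # number of retained modes (assumption)
basis = U[:, :r]                         # n_points x r
coeffs = np.diag(s[:r]) @ Vt[:r, :]      # r x n_steps

# Linear decompression: a single matrix product reconstructs all snapshots.
X_approx = basis @ coeffs

compressed_size = basis.size + coeffs.size
print("compression ratio:", X.size / compressed_size)
# Random data compresses poorly; real simulation fields usually have
# low-rank structure, so the error below would be much smaller in practice.
print("relative error:", np.linalg.norm(X - X_approx) / np.linalg.norm(X))
```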


International Symposium on Visual Computing | 2010

Interpolating 3D diffusion tensors in 2D planar domain by locating degenerate lines

Chongke Bi; Shigeo Takahashi; Issei Fujishiro

Interpolating diffusion tensor fields is a key technique for visualizing the continuous behavior of biological tissues such as nerves and muscle fibers. However, this remains a challenging task due to the difficulty of handling possible degeneracy, that is, the rotational inconsistency caused by degenerate points. This paper presents an approach to interpolating 3D diffusion tensors in 2D planar domains by aggressively locating possible degeneracy while fully respecting the underlying transition of tensor anisotropy. The primary idea behind this approach is to identify the degeneracy using a minimum spanning tree-based clustering algorithm and to resolve it by optimizing the associated rotational transformations. Degenerate lines are generated in this process to retain smooth transitions of anisotropic features. Comparisons with existing interpolation schemes are also provided to demonstrate the technical advantages of the proposed approach.
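
The MST-based degeneracy detection and the degenerate-line construction are specific to the paper and are not reproduced below; the sketch only illustrates the underlying idea of interpolating two 3D diffusion tensors through their eigenstructure (eigenvalues and orientation blended separately) rather than componentwise, which is where rotational consistency becomes an issue. The example tensors and the interpolation parameter are made up.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def eigen_interpolate(D0, D1, t):
    """Interpolate two symmetric positive-definite 3x3 tensors by
    separately blending their eigenvalues and their eigenvector frames.
    Illustration of rotation-aware interpolation only, not the paper's
    degeneracy-aware scheme."""
    w0, V0 = np.linalg.eigh(D0)
    w1, V1 = np.linalg.eigh(D1)
    # Make the eigenvector frames proper rotations (determinant +1).
    if np.linalg.det(V0) < 0:
        V0[:, 0] *= -1
    if np.linalg.det(V1) < 0:
        V1[:, 0] *= -1
    rots = Rotation.from_matrix([V0, V1])
    R_t = Slerp([0.0, 1.0], rots)(t).as_matrix()   # blended orientation
    w_t = (1 - t) * w0 + t * w1                     # blended eigenvalues
    return R_t @ np.diag(w_t) @ R_t.T

# Two anisotropic example tensors (hypothetical values).
Rz = Rotation.from_euler("z", 60, degrees=True).as_matrix()
D0 = np.diag([3.0, 1.0, 0.5])
D1 = Rz @ D0 @ Rz.T
print(eigen_interpolate(D0, D1, 0.5))
```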


International Conference on Systems | 2014

2-3-4 Decomposition Method for Large-Scale Parallel Image Composition with Arbitrary Number of Nodes

Jorji Nonaka; Chongke Bi; Kenji Ono; Masahiro Fujita

Visual data exploration helps users gain better insight into their data and has become an indispensable tool for computational scientists. Sort-last parallel rendering is a proven approach for large-scale scientific visualization, but it requires a costly parallel image composition at its final stage. Because this composition requires interprocess communication among all nodes, it usually dominates the total cost of a parallel rendering process. Efficient image composition algorithms for power-of-two numbers of nodes have already been proposed; however, handling a non-power-of-two number of nodes requires additional processing that incurs a performance penalty. The simplest approach executes this additional processing in the initial stage, or in parts throughout the entire parallel image composition process. The latter causes a smaller penalty per stage, but because it adds overhead at every stage of the composition, it can suffer in large-scale image composition where tens or even hundreds of thousands of nodes are involved. In this paper, we propose a decomposition approach for non-power-of-two numbers of nodes named 2-3-4 Decomposition. It generates exactly a power-of-two number of groups of 2, 3, or 4 nodes; by compositing each of these groups independently, we obtain a power-of-two number of partial results that can be combined with any existing image composition algorithm for power-of-two numbers of nodes. The decomposition works as a pre-processing step, and the performance penalty is limited to the overhead of compositing three or four images; this penalty can be further reduced depending on the image composition algorithm applied in the next stage. Our experimental results are promising and make this method a potential candidate for large-scale image composition with an arbitrary number of nodes.
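
As a rough illustration of the grouping rule described above (a power-of-two number of groups, each of 2, 3, or 4 nodes), the following sketch computes one valid decomposition for an arbitrary node count; the authors' concrete assignment strategy may differ.

```python
def decompose_234(n: int) -> list[int]:
    """Split n nodes (n >= 2) into a power-of-two number of groups,
    each holding 2, 3, or 4 nodes. Returns the list of group sizes.
    Illustrative only; the paper's actual assignment may differ."""
    if n < 2:
        raise ValueError("need at least 2 nodes")
    groups = 1
    while 4 * groups < n:        # smallest power of two with 2g <= n <= 4g
        groups *= 2
    base, extra = divmod(n, groups)      # base ends up as 2, 3, or 4
    return [base + 1] * extra + [base] * (groups - extra)

for n in (5, 8, 13, 1000):
    sizes = decompose_234(n)
    print(f"{n} nodes -> {len(sizes)} groups: {sizes}")
```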


Pacific-Rim Symposium on Image and Video Technology | 2010

Sophisticated Construction and Search of 2D Motion Graphs for Synthesizing Videos

Jun Kobayashi; Chongke Bi; Shigeo Takahashi

This paper presents an intuitive method for synthesizing videos by directly manipulating video objects without using 3D models. The proposed method extracts a video object from each video frame and creates locally consistent video sequences using a 2D motion graph, in which each node corresponds to an extracted video object and each edge represents a motion transition between a pair of nodes. Our primary contributions are a sophisticated construction of the 2D motion graph using shape matching techniques, and a search over the graph that allows a new video sequence to be synthesized intuitively by manipulating feature points extracted from the video objects in the 2D screen space. The method further employs a deformation technique to interpolate between video objects with relatively different shapes, thereby increasing the available motion transitions by inserting intervening video objects into the 2D motion graph. Several examples demonstrate that this approach can easily create user-intended motions of video objects by clicking and dragging the feature points.
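
The shape-matching and deformation machinery of the paper is not reproduced here; the sketch below only illustrates the data structure involved: frames become graph nodes, an edge is added wherever a (placeholder) shape distance between video objects falls below a threshold, and synthesizing a sequence amounts to a path search through the graph. The shape_distance function, the threshold, and the feature arrays are all hypothetical.

```python
import numpy as np
from collections import deque

def shape_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Placeholder shape dissimilarity between two video objects,
    here just an L2 distance between feature-point arrays."""
    return float(np.linalg.norm(a - b))

def build_motion_graph(objects, threshold):
    """Nodes are frame indices; an edge i -> j means the transition from
    the object in frame i to the one in frame j looks plausible."""
    n = len(objects)
    graph = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(n):
            if i != j and shape_distance(objects[i], objects[j]) < threshold:
                graph[i].append(j)
    return graph

def find_transition(graph, start, goal):
    """Breadth-first search for a short chain of motion transitions."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical feature points (8 points x 2 coordinates) for 20 frames.
rng = np.random.default_rng(1)
objects = [rng.random((8, 2)) for _ in range(20)]
graph = build_motion_graph(objects, threshold=1.0)
print(find_transition(graph, start=0, goal=19))
```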


Asian Conference on Pattern Recognition | 2015

Person re-identification using color enhancing feature

Peng Li; Haiyuan Wu; Qian Chen; Chongke Bi

In this paper, we propose a novel feature descriptor, called the Water-Drop Render Box (WDRB), for person re-identification without using personal information. The WDRB is computed in three steps from the target color and its histogram: registration of the target color, transformation of a distance map, and enhancement of the color using the target histogram. To compute the WDRB, top-view images of persons entering and leaving a room are captured with a bird's-eye view camera system. Person re-identification is then carried out by estimating the Bhattacharyya distance between a database image (of a person entering or leaving the room) and an input image. Finally, the effectiveness of our WDRB descriptor is demonstrated through several person re-identification experiments. The experimental results indicate that the WDRB method can also be used for object re-identification.
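
The WDRB descriptor itself is not reproduced here; the sketch below only shows the matching step named in the abstract, a Bhattacharyya distance between two normalized color histograms, using made-up histogram data. The sqrt(1 - BC) form used here is one common convention (e.g., in OpenCV); the paper may use another.

```python
import numpy as np

def bhattacharyya_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Bhattacharyya distance between two histograms (normalized to sum to 1)."""
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))               # Bhattacharyya coefficient in [0, 1]
    return float(np.sqrt(max(0.0, 1.0 - bc)))  # -log(bc) is another common form

# Hypothetical color histograms of a "database" person and an "input" person.
rng = np.random.default_rng(2)
hist_db = rng.random(64)
hist_in = hist_db + 0.1 * rng.random(64)       # a similar appearance
print(bhattacharyya_distance(hist_db, hist_in))  # small value -> likely same person
```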


International Congress on Big Data | 2014

Parallel POD Compression of Time-Varying Big Datasets Using m-Swap on the K Computer

Chongke Bi; Kenji Ono; Lu Yang

Thanks to supercomputers, more and more complicated simulations are being carried out successfully. At the same time, analyzing and understanding the intrinsic properties of the resulting big datasets is an urgent task for scientists, and the explosive size of these datasets makes it difficult. Reducing the size of big datasets has therefore become an important topic, in which data compression and parallel computing are the two key techniques. In this paper, we present a parallel data compression approach to reduce the size of time-varying big datasets. First, we employ the proper orthogonal decomposition (POD) method for compression. POD extracts the underlying features of a dataset and greatly reduces its size, and the compressed data can be decompressed linearly, which helps scientists interactively visualize big datasets for analysis. Second, we introduce a novel m-swap method to effectively parallelize the POD compression algorithm. The m-swap method reaches high performance by fully using all parallel computing processors; in other words, no processor is idle during the parallel compression. Furthermore, m-swap greatly reduces the cost of interprocessor communication by controlling the data transfer among 2m processors to obtain the best balance of computation cost among these processors. Finally, the effectiveness of our method is demonstrated by compressing several time-varying big datasets on the K computer with tens of thousands of processors.
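
The m-swap parallelization is not sketched here; the snippet below only illustrates the linear-decompression property emphasized in the abstract: once a dataset is stored as a POD basis plus per-time-step coefficients, any single time step can be reconstructed with one small matrix product, which is what makes interactive browsing cheap. The shapes and mode count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_points, n_steps, r = 100_000, 128, 16       # illustrative sizes (assumptions)

# Compressed representation produced elsewhere by a (parallel) POD step:
basis = rng.standard_normal((n_points, r))    # POD modes
coeffs = rng.standard_normal((r, n_steps))    # one coefficient vector per step

def decompress_step(t: int) -> np.ndarray:
    """Reconstruct time step t with a single, cheap linear operation."""
    return basis @ coeffs[:, t]

frame = decompress_step(42)                   # ready for rendering
print(frame.shape)
```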


IEEE Pacific Visualization Symposium | 2014

2-3-4 Combination for Parallel Compression on the K Computer

Chongke Bi; Kenji Ono

The development of supercomputers has enabled us to carry out complicated simulations that produce datasets of exploding size. For visualizing such large-scale datasets, reducing the data size with compression methods is one of the most useful approaches. Moreover, parallelizing the compression algorithm can greatly improve efficiency and overcome the limitation of memory size. In parallel compression algorithms, however, interprocessor communication is indispensable and becomes a bottleneck, especially in the general case where the number of processors is not a power of two. The parallel POD (proper orthogonal decomposition) compression algorithm is such an example: the number of time steps must be a power of two for the binary swap scheme. A method that can fully resolve this problem at low computational cost is therefore highly desirable. In this paper, we propose such an approach, called the 2-3-4 combination approach, which is simple to implement and reaches the high performance of parallel computing algorithms. Furthermore, our method obtains the best balance among all parallel computing processors by transforming the non-power-of-two problem into a power-of-two problem, so that the best-balance property of the binary swap method can be fully exploited. We evaluate our approach by applying it to the parallel POD compression algorithm on the K computer.


Visualization and Data Analysis | 2012

Degeneracy-aware interpolation of 3D diffusion tensor fields

Chongke Bi; Shigeo Takahashi; Issei Fujishiro

Visual analysis of 3D diffusion tensor fields has become an important topic, especially in medical imaging, for understanding the microscopic structures and physical properties of biological tissues. However, it is still difficult to continuously track the underlying features from discrete tensor samples, because appropriate interpolation schemes are missing, in the sense of handling possible degeneracy while fully respecting the smooth transition of tensor anisotropic features; such degeneracy may cause rotational inconsistency of tensor anisotropy. This paper presents such an approach to interpolating 3D diffusion tensor fields. The primary idea behind our approach is to resolve possible degeneracy by optimizing the rotational transformation between a pair of neighboring tensors through analysis of their associated eigenstructures, while the degeneracy itself is identified by applying a minimum spanning tree-based clustering algorithm to the original tensor samples. Comparisons with existing interpolation schemes are provided to demonstrate the advantages of our scheme, together with several results of tracking white matter fiber bundles in a human brain.


Scientific Visualization: Interactions, Features, Metaphors | 2011

Previewing Volume Decomposition Through Optimal Viewpoints

Shigeo Takahashi; Issei Fujishiro; Yuriko Takeshima; Chongke Bi

Understanding a volume dataset through a 2D display is a complex task because the dataset usually contains multi-layered inner structures that inevitably cause undesirable overlaps when projected onto the display. This requires us to identify feature subvolumes embedded in the given volume and then visualize them on the display so that their relative positions become clear. This article therefore introduces a new feature-driven approach to previewing volumes that respects both the 3D nested structures of the feature subvolumes and their 2D arrangement in the projection by minimizing their occlusions. The process begins by tracking the topological transitions of isosurfaces with respect to the scalar field in order to decompose the given volume dataset into feature components called interval volumes while extracting their nested structures. The volume dataset is then projected from the optimal viewpoint, which achieves the best-balanced visibility of the decomposed components. The position of the optimal viewpoint is updated each time an outer component is peeled off with our interface, by calculating the sum of the viewpoint optimality values for the remaining components. Several previewing examples demonstrate that the present approach offers an effective means of traversing volumetric inner structures in both an interactive and an automatic fashion with the interface.
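
The interval-volume decomposition and the paper's actual viewpoint optimality measure are not reproduced; the sketch below only illustrates the selection loop described above: score candidate viewpoints by summing a per-component optimality value, pick the best, and re-score whenever a component is peeled off. component_visibility is a placeholder for whatever measure is actually used.

```python
import numpy as np

def component_visibility(viewpoint: np.ndarray, component: dict) -> float:
    """Placeholder for a per-component viewpoint optimality value; here just
    a made-up score favoring viewpoints aligned with the component's normal."""
    return float(np.dot(viewpoint, component["normal"]))

def best_viewpoint(viewpoints, components):
    """Pick the candidate viewpoint maximizing the summed optimality of the
    components that have not yet been peeled off."""
    scores = [sum(component_visibility(v, c) for c in components) for v in viewpoints]
    return viewpoints[int(np.argmax(scores))]

# Candidate viewpoints: unit vectors sampled on a sphere (hypothetical).
rng = np.random.default_rng(4)
viewpoints = rng.standard_normal((100, 3))
viewpoints /= np.linalg.norm(viewpoints, axis=1, keepdims=True)

# Hypothetical nested components, outermost first.
components = [{"normal": rng.standard_normal(3)} for _ in range(4)]

view = best_viewpoint(viewpoints, components)
components.pop(0)                               # peel off the outer component ...
view = best_viewpoint(viewpoints, components)   # ... and update the viewpoint
print(view)
```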


Journal of Visualization | 2017

Compression-based integral curve data reuse framework for flow visualization

Fan Hong; Chongke Bi; Hanqi Guo; Kenji Ono; Xiaoru Yuan

Currently, integral curves are by default repeatedly re-computed in different flow visualization applications, such as FTLE field computation and source-destination queries, leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves that greatly reduces their retrieval cost, especially in resource-limited environments. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives: high compression ratio, controllable error, and low decompression cost. Specifically, we combine a digitized sparse representation of curves, floating-point data compression, and octree space partitioning to achieve these objectives adaptively. Results show that our data reuse framework can achieve tens of times acceleration in resource-limited environments compared to on-the-fly particle tracing while keeping information loss controllable. Moreover, our method provides fast integral curve retrieval for more complex data, such as unstructured mesh data.
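
The full hierarchical and hybrid scheme (digitized curve representation, floating-point compression, octree partitioning) is not reproduced; the sketch below illustrates one ingredient in spirit only: delta-encoding and quantizing an integral curve so it can be stored compactly and reconstructed with a controllable error. The curve, step size, and quantization scale are illustrative assumptions.

```python
import numpy as np

def quantize_curve(points: np.ndarray, scale: float = 1e-3):
    """Delta-encode a polyline and quantize the deltas to small integers.
    Per-step error is bounded by scale/2, but it accumulates along the curve."""
    deltas = np.diff(points, axis=0)
    q = np.round(deltas / scale).astype(np.int16)    # compact integer steps
    return points[0], q, scale

def dequantize_curve(start, q, scale):
    """Reconstruct the polyline from the starting point and quantized deltas."""
    return start + np.concatenate([[np.zeros(3)], np.cumsum(q * scale, axis=0)])

# A hypothetical integral curve: 1000 points along a smooth 3D trajectory.
t = np.linspace(0, 4 * np.pi, 1000)
curve = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)

start, q, scale = quantize_curve(curve)
restored = dequantize_curve(start, q, scale)
print("max error:", np.abs(curve - restored).max())   # grows with accumulation
```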

Collaboration


Dive into Chongke Bi's collaboration.

Top Co-Authors

Kwan-Liu Ma

University of California
