Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Liang Wan is active.

Publication


Featured research published by Liang Wan.


International Conference on Computer Graphics and Interactive Techniques | 2008

Intrinsic colorization

Xiaopei Liu; Liang Wan; Yingge Qu; Tien-Tsin Wong; Stephen Lin; Chi-Sing Leung; Pheng-Ann Heng

In this paper, we present an example-based colorization technique robust to illumination differences between grayscale target and color reference images. To achieve this goal, our method performs color transfer in an illumination-independent domain that is relatively free of shadows and highlights. It first recovers an illumination-independent intrinsic reflectance image of the target scene from multiple color references obtained by web search. The reference images from the web search may be taken from different vantage points, under different illumination conditions, and with different cameras. Grayscale versions of these reference images are then used in decomposing the grayscale target image into its intrinsic reflectance and illumination components. We transfer color from the color reflectance image to the grayscale reflectance image, and obtain the final result by relighting with the illumination component of the target image. We demonstrate via several examples that our method generates results with excellent color consistency.
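
A minimal sketch of the recoloring pipeline described above, in Python. The paper recovers the reflectance from multiple web-searched references; the single-image decomposition below (Gaussian smoothing as the illumination estimate) is a crude stand-in, and every helper name is an assumption, not the authors' code.

import numpy as np
from scipy.ndimage import gaussian_filter

def crude_intrinsic_decompose(gray, sigma=15.0, eps=1e-4):
    # A heavily smoothed copy stands in for the slowly varying
    # illumination; the residual ratio stands in for reflectance.
    illum = gaussian_filter(gray, sigma) + eps
    return gray / illum, illum

def recolor(gray_target, color_reflectance):
    # Transfer color in the (approximate) reflectance domain,
    # then relight with the target's illumination component.
    _, illum = crude_intrinsic_decompose(gray_target)
    return np.clip(color_reflectance * illum[..., None], 0.0, 1.0)

gray = np.random.rand(64, 64)            # stand-in grayscale target
refl = np.random.rand(64, 64, 3)         # stand-in color reflectance
out = recolor(gray, refl)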


IEEE Transactions on Multimedia | 2009

The Rhombic Dodecahedron Map: An Efficient Scheme for Encoding Panoramic Video

Chi-Wing Fu; Liang Wan; Tien-Tsin Wong; Chi-Sing Leung

Omnidirectional videos are usually mapped to a planar domain for encoding with off-the-shelf video compression standards. However, existing work typically neglects the effect of the sphere-to-plane mapping. In this paper, we show that by carefully designing the mapping, we can improve the visual quality, stability, and compression efficiency of encoding omnidirectional videos. Here we propose a novel mapping scheme, known as the rhombic dodecahedron map (RD map), to represent data over the spherical domain. By using a family of skew great circles as the subdivision kernel, the RD map not only produces a sampling pattern with very low discrepancy, it can also support a highly efficient data indexing mechanism over the spherical domain. Since the proposed map is quad-based, geodesic-aligned, and of very low area and shape distortion, we can reliably apply 2-D wavelet-based and DCT-based encoding methods originally designed for planar perspective videos. Finally, we perform a series of analyses and experiments to investigate and verify the effectiveness of the proposed method; with its ultra-fast data indexing capability, we show that we can play back omnidirectional videos at very high frame rates on conventional PCs with GPU support.
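
The coarse face-indexing step can be illustrated in a few lines of Python: a rhombic dodecahedron has 12 faces whose normals are the sign/axis combinations of (±1, ±1, 0), so a direction lands on the face whose normal it most aligns with. This is only a sketch of the face-selection idea; the paper's skew-great-circle subdivision and intra-face indexing are not reproduced here.

import numpy as np

normals = []
for i in range(3):
    for j in range(i + 1, 3):
        for si in (-1.0, 1.0):
            for sj in (-1.0, 1.0):
                n = np.zeros(3)
                n[i], n[j] = si, sj
                normals.append(n / np.sqrt(2.0))
normals = np.array(normals)              # (12, 3) face normals

def face_index(direction):
    # The face whose normal best aligns with the (unit) direction.
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    return int(np.argmax(normals @ d))

print(face_index([0.3, 0.9, 0.1]))       # one of 0..11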


Eurographics Symposium on Rendering Techniques | 2005

Spherical Q2-tree for sampling dynamic environment sequences

Liang Wan; Tien-Tsin Wong; Chi-Sing Leung

Previous methods for environment map sampling seldom consider a sequence of dynamic environment maps. The sampling patterns generated for such a sequence may not maintain temporal illumination consistency, resulting in choppy animation. In this paper, we propose a novel approach, the spherical Q2-tree, to address this consistency problem. The locally adaptive nature of the proposed method suppresses abrupt changes in the generated sampling patterns over time and hence ensures smooth and consistent illumination. By partitioning the spherical surface with simple curvilinear equations, we construct a quadrilateral-based quadtree over the sphere. This Q2-tree allows us to adaptively sample the environment based on an importance metric and generates low-discrepancy sampling patterns. No time-consuming relaxation is required. The sampling patterns of a dynamic sequence are rapidly generated by making use of the summed area table and exploiting the coherence of consecutive frames. In our experiments, the rendering quality of our sampling pattern for a static environment map is comparable to that of previous methods. Moreover, our method produces smooth and consistent animation for a sequence of dynamic environment maps, even when the number of samples is kept constant over time.
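
The summed-area-table trick is easy to demonstrate. The Python sketch below subdivides a flat 2-D importance map instead of the paper's curvilinear spherical partition, but it shows how the table makes each subdivision decision an O(1) lookup; the names and thresholds are illustrative assumptions.

import numpy as np

def rect_sum(sat, y0, x0, y1, x1):
    # Integral of img[y0:y1, x0:x1], recovered from prefix sums.
    s = sat[y1 - 1, x1 - 1]
    if y0 > 0: s -= sat[y0 - 1, x1 - 1]
    if x0 > 0: s -= sat[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0: s += sat[y0 - 1, x0 - 1]
    return s

def adaptive_samples(importance, frac=0.005, max_depth=8):
    # Split a cell while it holds more than `frac` of the total
    # importance; each leaf yields one sample at its center.
    sat = importance.cumsum(0).cumsum(1)
    h, w = importance.shape
    thresh = frac * sat[-1, -1]
    samples, stack = [], [(0, 0, h, w, 0)]
    while stack:
        y0, x0, y1, x1, d = stack.pop()
        if (d < max_depth and y1 - y0 > 1 and x1 - x0 > 1
                and rect_sum(sat, y0, x0, y1, x1) > thresh):
            ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
            stack += [(y0, x0, ym, xm, d + 1), (y0, xm, ym, x1, d + 1),
                      (ym, x0, y1, xm, d + 1), (ym, xm, y1, x1, d + 1)]
        else:
            samples.append(((y0 + y1) / 2, (x0 + x1) / 2))
    return samples

env = np.random.rand(128, 256)           # stand-in env-map luminance
pts = adaptive_samples(env)              # denser where importance is high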


IEEE Transactions on Visualization and Computer Graphics | 2007

Isocube: Exploiting the Cubemap Hardware

Liang Wan; Tien-Tsin Wong; Chi-Sing Leung

This paper proposes a novel six-face spherical map, the isocube, that fully utilizes the cubemap hardware built into most GPUs. Unlike the cubemap, the proposed isocube uniformly samples the unit sphere (uniformly distributed), and all samples span the same solid angle (equally important). Its mapping computation incurs only a small overhead. By feeding the cubemap hardware with the six-face isocube map, the isocube can exploit all built-in texturing operators tailored for the cubemap and achieve a very high frame rate. In addition, we develop an anisotropic filtering technique that compensates for aliasing artifacts due to texture magnification. This technique extends the existing hardware anisotropic filtering and can be applied not only to the proposed isocube, but also to other texture mapping applications.
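
The uniformity claim rests on a classical fact (Archimedes' hat-box theorem): sampling the z-coordinate uniformly gives uniform area on the sphere. The Python lines below demonstrate that equal-area principle only; they are not the isocube's actual six-face parameterization.

import numpy as np

def equal_area_dir(u, v):
    # Uniform (u, v) in the unit square -> uniform direction on the
    # sphere, because z = 2v - 1 is sampled uniformly.
    z = 2.0 * v - 1.0
    phi = 2.0 * np.pi * u
    r = np.sqrt(np.maximum(0.0, 1.0 - z * z))
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=-1)

u, v = np.random.rand(2, 10000)
dirs = equal_area_dir(u, v)              # ~uniform points on the sphere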


Computer Vision and Pattern Recognition | 2013

Maximum Cohesive Grid of Superpixels for Fast Object Localization

Liang Li; Wei Feng; Liang Wan; Jiawan Zhang

This paper addresses the challenging problem of regularizing arbitrary superpixels into an optimal grid structure, which may significantly extend current low-level vision algorithms by allowing them to use superpixels (SPs) as conveniently as pixels. For this purpose, we aim at constructing a maximum cohesive SP-grid, which is composed of real nodes, i.e., SPs, and dummy nodes that carry no image content and serve only as placeholders in the grid. For a given set of image SPs and a proper number of dummy nodes, we first dynamically align them into a grid based on the centroid localities of the SPs. We then define the SP-grid coherence as the sum of edge weights, with SP locality and appearance encoded, along all direct paths connecting any two nearest neighboring real nodes in the grid. We finally maximize the SP-grid coherence via cascade dynamic programming. Our approach can take regional objectness as an optional constraint to produce more semantically reliable SP-grids. Experiments on object localization show that our approach outperforms state-of-the-art methods in terms of both detection accuracy and speed. We also find that with the same searching strategy and features, object localization at the SP level is about 100-500 times faster than at the pixel level, usually with better detection accuracy.
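
The coherence objective can be sketched in Python for the simple case where edges connect 4-neighboring real nodes directly; the paper's path-based definition through dummy nodes, its exact weight form, and the cascade dynamic programming maximization are more involved, so treat every formula below as an illustrative assumption.

import numpy as np

def grid_coherence(pos, feat, real):
    # pos: (H, W, 2) SP centroids, feat: (H, W, C) appearance,
    # real: (H, W) bool mask (False marks dummy placeholder nodes).
    total = 0.0
    h, w = real.shape
    for dy, dx in ((0, 1), (1, 0)):      # right and down edges, once each
        a = (slice(0, h - dy), slice(0, w - dx))
        b = (slice(dy, h), slice(dx, w))
        both = real[a] & real[b]
        loc = np.linalg.norm(pos[a] - pos[b], axis=-1)
        app = np.linalg.norm(feat[a] - feat[b], axis=-1)
        total += np.sum(np.exp(-(loc / 50.0 + app)) * both)
    return total

pos = np.random.rand(8, 10, 2) * 100
feat = np.random.rand(8, 10, 3)
real = np.random.rand(8, 10) > 0.2
print(grid_coherence(pos, feat, real))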


IEEE Transactions on Multimedia | 2013

Cube2Video: Navigate Between Cubic Panoramas in Real-Time

Qiang Zhao; Liang Wan; Wei Feng; Jiawan Zhang; Tien-Tsin Wong

Online virtual navigation systems enable users to hop from one 360° panorama to another, but these panoramas form a sparse point-to-point collection, resulting in a less pleasant viewing experience. In this paper, we present a novel method, namely Cube2Video, to support navigating between cubic panoramas in a video-viewing mode. Our method circumvents the intrinsic challenge of cubic panoramas, i.e., the discontinuities between cube faces, in an efficient way. The proposed method extends the matching-triangulation-interpolation procedure with special consideration of the spherical domain. A triangle-to-triangle homography-based warping is developed to achieve physically plausible and visually pleasant interpolation results. The temporal smoothness of the synthesized video sequence is improved by means of a compensation transformation. As experimental results demonstrate, our method can synthesize pleasant video sequences in real time, thus mimicking walking or driving navigation.
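
A planar triangle warp conveys the flavor of the interpolation step. The OpenCV sketch below uses an affine map between triangles as a simplified stand-in; the paper's warp is homography-based and operates with special consideration of the spherical domain.

import cv2
import numpy as np

def warp_triangle(src, src_tri, dst, dst_tri):
    # Map one triangle of `src` onto the corresponding triangle of `dst`.
    m = cv2.getAffineTransform(np.float32(src_tri), np.float32(dst_tri))
    warped = cv2.warpAffine(src, m, (dst.shape[1], dst.shape[0]))
    mask = np.zeros(dst.shape[:2], np.uint8)
    cv2.fillConvexPoly(mask, np.int32(dst_tri), 1)
    dst[mask == 1] = warped[mask == 1]   # composite only inside the triangle

src = np.random.randint(0, 255, (100, 100, 3), np.uint8)
dst = np.zeros((100, 100, 3), np.uint8)
warp_triangle(src, [(10, 10), (90, 20), (40, 80)],
              dst, [(12, 15), (88, 25), (42, 78)])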


International Conference on Acoustics, Speech, and Signal Processing | 2013

Image co-saliency detection by propagating superpixel affinities

Zhiyu Tan; Liang Wan; Wei Feng; Chi-Man Pun

Image co-saliency detection is a valuable technique for highlighting perceptually salient regions in image pairs. In this paper, we propose a self-contained co-saliency detection algorithm based on a superpixel affinity matrix. We first compute both intra- and inter-image similarities of the superpixels in an image pair. Bipartite graph matching is applied to determine the most reliable inter-image similarities. To update the similarity score between every pair of superpixels, we next employ a GPU-based all-pair SimRank algorithm to propagate scores over the affinity matrix. Based on the inter-superpixel affinities, we derive a co-saliency measure that evaluates the foreground cohesiveness and locality compactness of superpixels within one image. The effectiveness of our method is demonstrated through experimental evaluation.
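
SimRank propagation itself is compact in matrix form. The NumPy sketch below is a CPU stand-in for the GPU-based all-pair algorithm mentioned above, run on a random symmetric affinity matrix; the decay constant and iteration count are assumptions.

import numpy as np

def simrank(affinity, c=0.8, iters=10):
    # Column-normalize, then iterate S <- c * W^T S W, pinning the
    # diagonal: a node is always maximally similar to itself.
    w = affinity / np.maximum(affinity.sum(axis=0, keepdims=True), 1e-12)
    s = np.eye(affinity.shape[0])
    for _ in range(iters):
        s = c * (w.T @ s @ w)
        np.fill_diagonal(s, 1.0)
    return s

aff = np.random.rand(50, 50)
aff = (aff + aff.T) / 2.0                # symmetric superpixel affinities
sim = simrank(aff)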


International Journal of Computer Vision | 2015

SPHORB: A Fast and Robust Binary Feature on the Sphere

Qiang Zhao; Wei Feng; Liang Wan; Jiawan Zhang

In this paper, we propose SPHORB, a new fast and robust binary feature detector and descriptor for spherical panoramic images. In contrast to state-of-the-art spherical features, our approach stems from the geodesic grid, a nearly equal-area hexagonal grid parametrization of the sphere used in climate modeling. It enables us to directly build fine-grained pyramids and construct robust features on the hexagonal spherical grid, thus avoiding the costly computation of spherical harmonics and their associated bandwidth limitation. We further study how to achieve scale and rotation invariance for the proposed SPHORB feature. Extensive experiments show that SPHORB consistently outperforms other existing spherical features in accuracy, efficiency and robustness to camera movements. The superior performance of SPHORB has also been validated by real-world matching tests.
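
Because SPHORB descriptors are binary, downstream matching works exactly like ORB or BRIEF: brute-force Hamming distance. The sketch below assumes 32-byte descriptors and substitutes random ones for real SPHORB output.

import cv2
import numpy as np

des1 = np.random.randint(0, 256, (500, 32), dtype=np.uint8)
des2 = np.random.randint(0, 256, (500, 32), dtype=np.uint8)

# Cross-checked Hamming matching, the standard choice for binary features.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "cross-checked matches")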


IEEE Transactions on Multimedia | 2013

Example-Based Color Transfer for Gradient Meshes

Yi Xiao; Liang Wan; Chi-Sing Leung; Yu-Kun Lai; Tien-Tsin Wong

Editing a photo-realistic gradient mesh is a tough task. Even editing only the colors of an existing gradient mesh can be exhausting and time-consuming. To facilitate user-friendly color editing, we develop an example-based color transfer method for gradient meshes, which transfers the color characteristics of an example image to a gradient mesh. We start by exploiting the constraints of the gradient mesh, and accordingly propose a linear-operator-based color transfer framework. Our framework operates only on the colors and color gradients of the mesh points and preserves the topological structure of the gradient mesh. With this framework in mind, we build our approach on PCA-based color transfer. After addressing the color range problem, we incorporate a fusion-based optimization scheme to improve the color similarity between the reference image and the recolored gradient mesh. Finally, a multi-swatch transfer scheme is provided to enable more user control. Our approach is simple, effective, and much faster than applying color transfer directly to a rasterized gradient mesh. The experimental results also show that our method can generate pleasing recolored gradient meshes.
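
The PCA-based building block the method starts from is a linear remapping that matches the mean and covariance of two color distributions. A minimal NumPy version follows; the paper's gradient-mesh constraints, fusion-based optimization, and multi-swatch control sit on top of this and are not shown.

import numpy as np

def pca_color_transfer(src_colors, ref_colors):
    # Whiten the source color distribution, then re-color it with the
    # reference's statistics: (x - mu_s) Cs^(-1/2) Cr^(1/2) + mu_r.
    mu_s, mu_r = src_colors.mean(0), ref_colors.mean(0)
    cov_s = np.cov(src_colors.T) + 1e-8 * np.eye(3)
    cov_r = np.cov(ref_colors.T) + 1e-8 * np.eye(3)
    def sqrtm(c):                        # SPD matrix square root
        vals, vecs = np.linalg.eigh(c)
        return vecs @ np.diag(np.sqrt(np.maximum(vals, 0.0))) @ vecs.T
    t = np.linalg.inv(sqrtm(cov_s)) @ sqrtm(cov_r)
    return (src_colors - mu_s) @ t + mu_r

src = np.random.rand(1000, 3)            # N x 3 source colors
ref = np.random.rand(1000, 3) * 0.5 + 0.25
out = pca_color_transfer(src, ref)       # matches ref's mean/covariance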


IEEE Transactions on Visualization and Computer Graphics | 2010

Evolving Mazes from Images

Liang Wan; Xiaopei Liu; Tien-Tsin Wong; Chi-Sing Leung

We propose a novel reaction-diffusion (RD) simulator to evolve image-resembling mazes. The evolved mazes faithfully preserve the salient interior structures in the source images. Since it is difficult to control the generation of desired patterns with traditional reaction diffusion, we develop our RD simulator on a different computational platform, cellular neural networks. Based on the proposed simulator, we can generate mazes that exhibit both regular and organic appearance, with uniform and/or spatially varying passage spacing. Our simulator also provides high controllability of maze appearance. Users can directly and intuitively "paint" to modify the appearance of mazes in a spatially varying manner via a set of brushes. In addition, the evolutionary nature of our method naturally generates mazes without any obvious seams, even when the input image is a composite of multiple sources. The final maze is obtained by determining a solution path that follows a user-specified guiding curve. We validate our method by evolving several interesting mazes from different source images.
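
Classic reaction-diffusion already yields labyrinthine stripes and is easy to run. The Gray-Scott system below is a standard baseline, not the paper's cellular-neural-network simulator, and its feed/kill parameters are illustrative; the paper's contribution is precisely the controllability this baseline lacks.

import numpy as np

def laplacian(a):
    # 5-point Laplacian with wrap-around boundaries.
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

n = 128
u, v = np.ones((n, n)), np.zeros((n, n))
u[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50   # seed a perturbation
v[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25
du, dv, feed, kill = 0.16, 0.08, 0.035, 0.060   # stripe-forming regime
for _ in range(10000):
    uvv = u * v * v
    u += du * laplacian(u) - uvv + feed * (1.0 - u)
    v += dv * laplacian(v) + uvv - (feed + kill) * v
# `v` now holds a maze-like stripe pattern; view it with any image tool.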

Collaboration


Dive into Liang Wan's collaborations.

Top Co-Authors

Tien-Tsin Wong
The Chinese University of Hong Kong

Chi-Sing Leung
City University of Hong Kong

Qiang Zhao
Chinese Academy of Sciences

Zhi-Qiang Liu
City University of Hong Kong