

Publications


Featured research published by Oscar Chi Lim Au.


Multimedia Signal Processing | 2013

Depth map denoising using graph-based transform and group sparsity

Wei Hu; Xin Li; Gene Cheung; Oscar Chi Lim Au

Depth maps, characterizing the per-pixel physical distance between objects in a 3D scene and the capturing camera, can now be readily acquired using inexpensive active sensors such as the Microsoft Kinect. However, the acquired depth maps are often corrupted by surface reflection or sensor noise. In this paper, we build on two previously developed ideas from the image denoising literature to restore single depth maps, jointly exploiting the local smoothness and nonlocal self-similarity of a depth map. Specifically, we propose to first cluster similar patches in a depth image and compute an average patch, from which we deduce a graph describing correlations among adjacent pixels. Then we transform the similar patches into the same graph-based transform (GBT) domain, where the GBT basis vectors are learned from the derived correlation graph. Finally, we perform an iterative thresholding procedure in the GBT domain to enforce group sparsity. Experimental results show that for single depth maps corrupted with additive white Gaussian noise (AWGN), our proposed NLGBT denoising algorithm can outperform state-of-the-art image denoising methods such as BM3D by up to 2.37 dB in PSNR.
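
For readers who want a concrete picture of the GBT step, the following NumPy sketch builds a 4-connected grid graph over a patch, weights its edges by intensity similarity in the cluster's average patch, takes the Laplacian eigenvectors as the GBT basis, and hard-thresholds the coefficients of each similar patch. The Gaussian weighting kernel, the threshold value, and the single (non-iterated) pass are simplifying assumptions, not the authors' exact NLGBT procedure.

```python
import numpy as np

def gbt_basis(avg_patch, sigma=10.0):
    """Graph-based transform (GBT) basis from a 4-connected grid graph whose
    edge weights reflect intensity similarity in the cluster's average patch
    (assumed Gaussian weighting; not necessarily the authors' exact kernel)."""
    h, w = avg_patch.shape
    n = h * w
    W = np.zeros((n, n))
    idx = lambda r, c: r * w + c
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):              # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    d = float(avg_patch[r, c]) - float(avg_patch[rr, cc])
                    wgt = np.exp(-d * d / (2.0 * sigma ** 2))
                    W[idx(r, c), idx(rr, cc)] = W[idx(rr, cc), idx(r, c)] = wgt
    L = np.diag(W.sum(axis=1)) - W                       # combinatorial graph Laplacian
    _, U = np.linalg.eigh(L)                             # eigenvectors = GBT basis
    return U

def denoise_patch_group(patches, avg_patch, tau=15.0):
    """Hard-threshold the GBT coefficients of a group of similar patches in the
    shared GBT domain (a single pass; the paper iterates this procedure)."""
    U = gbt_basis(avg_patch)
    restored = []
    for p in patches:
        coeff = U.T @ p.astype(np.float64).ravel()
        coeff[np.abs(coeff) < tau] = 0.0                 # enforce sparsity
        restored.append((U @ coeff).reshape(p.shape))
    return restored
```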


International Conference on Image Processing | 2012

Depth map compression using multi-resolution graph-based transform for depth-image-based rendering

Wei Hu; Gene Cheung; Xin Li; Oscar Chi Lim Au

Depth map compression is important for efficient network transmission of 3D visual data in texture-plus-depth format, where the observer can synthesize an image at a freely chosen viewpoint via depth-image-based rendering (DIBR), using received neighboring texture and depth maps as anchors. Unlike texture maps, depth maps exhibit unique characteristics, such as smooth interior surfaces and sharp edges, that can be exploited for coding gain. In this paper, we propose a multi-resolution approach to depth map compression using the previously proposed graph-based transform (GBT). The key idea is to treat the smooth surfaces and sharp edges of large code blocks separately and encode them at different resolutions: edges are encoded at the original high resolution (HR) to preserve sharpness, while smooth surfaces are low-pass filtered, down-sampled, and encoded at low resolution (LR) to save coding bits. Because the GBT does not filter across edges, it produces small or zero high-frequency components when coding smooth-surface depth maps, leading to a compact representation in the transform domain. By encoding down-sampled surface regions with an LR GBT, we achieve a compact representation for a large block without the high computational complexity associated with an adaptive large-block GBT. At the decoder, encoded LR surfaces are up-sampled and interpolated while preserving the encoded HR edges. Experimental results show that our proposed multi-resolution approach using GBT reduces the bitrate by 68% compared to native H.264 intra coding (DCT) of the original HR depth maps, and by 55% compared to single-resolution GBT coding of small blocks.
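
The compactness argument (the GBT does not filter across edges) can be illustrated with a small sketch: disconnect graph edges wherever neighbouring depth values jump, and a piecewise-constant block is then captured by only a couple of low-frequency coefficients. The edge threshold and block size below are illustrative assumptions, not the paper's codec.

```python
import numpy as np

def gbt_basis_with_edges(block, edge_thresh=8.0):
    """GBT basis for a depth block: a 4-connected grid graph in which pixel
    pairs separated by a large depth jump are disconnected, so the transform
    never filters across an edge (illustrative sketch, not the paper's codec)."""
    h, w = block.shape
    W = np.zeros((h * w, h * w))
    idx = lambda r, c: r * w + c
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < h and cc < w and abs(float(block[r, c]) - float(block[rr, cc])) < edge_thresh:
                    W[idx(r, c), idx(rr, cc)] = W[idx(rr, cc), idx(r, c)] = 1.0
    L = np.diag(W.sum(axis=1)) - W
    _, U = np.linalg.eigh(L)
    return U

# Two flat surfaces separated by a sharp vertical edge.
block = np.zeros((8, 8))
block[:, 4:] = 100.0
coeff = gbt_basis_with_edges(block).T @ block.ravel()
print(np.count_nonzero(np.abs(coeff) > 1e-6))   # at most 2 non-zero coefficients
```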


International Conference on Image Processing | 2012

Novel temporal domain hole filling based on background modeling for view synthesis

Wenxiu Sun; Oscar Chi Lim Au; Lingfeng Xu; Yujun Li; Wei Hu

View synthesis is a technique for generating images/videos at a virtual viewpoint. In this paper, the dis-occlusion/hole problem in view synthesis is addressed in the temporal domain. Based on the observation that dis-occlusions belong to the background, we first build an online background model using a newly designed Switchable Gaussian Model (SGM), chosen for its computational simplicity and scene adaptivity. Real textures in the dis-occluded regions can then be recovered from the built background. Experimental results verify improvements in both rendering quality and computational complexity compared to conventional spatial filling methods and other temporal filling methods.
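
A minimal sketch of the background-modelling idea is given below: a per-pixel running-Gaussian model that flags background pixels, updates their statistics, and copies background texture into dis-occluded holes. The switching logic of the paper's SGM is omitted, grayscale frames are assumed, and the learning rate and thresholds are assumed values.

```python
import numpy as np

class GaussianBackground:
    """Per-pixel running-Gaussian background model for grayscale frames, a
    simplified stand-in for the paper's Switchable Gaussian Model (the
    switching logic is omitted; learning rate and thresholds are assumptions)."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, 15.0 ** 2)
        self.alpha, self.k = alpha, k

    def update(self, frame):
        """Update background statistics for pixels that match the model."""
        frame = frame.astype(np.float64)
        d2 = (frame - self.mean) ** 2
        is_bg = d2 < (self.k ** 2) * self.var
        self.mean[is_bg] += self.alpha * (frame[is_bg] - self.mean[is_bg])
        self.var[is_bg] += self.alpha * (d2[is_bg] - self.var[is_bg])
        return is_bg

    def fill_holes(self, synthesized, hole_mask):
        """Copy background texture into dis-occluded (hole) pixels."""
        out = synthesized.astype(np.float64).copy()
        out[hole_mask] = self.mean[hole_mask]
        return out
```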


International Conference on Image Processing | 2014

Graph-based joint denoising and super-resolution of generalized piecewise smooth images

Wei Hu; Gene Cheung; Xin Li; Oscar Chi Lim Au

Images are often decoded with noise at the receiver due to capture errors and/or signal quantization during compression. Further, it is often necessary to display a decoded image at a higher resolution than the captured one, given an available high-resolution (HR) display or a need to zoom in for detailed examination. In this paper, we address the problems of image denoising and super-resolution (SR) jointly in one unified graph-based framework, focusing on a special class of signals called generalized piecewise smooth (GPWS) images. GPWS images are composed mostly of smooth regions connected by transition regions, and represent an important subclass of images, including cartoons, sub-regions of video frames with captions, graphics images in video games, etc. As in our previous work on piecewise smooth (PWS) images, GPWS images admit simple graph representations in the pixel domain, so that suitable graph-based filtering techniques can be readily applied. Specifically, leveraging previous work on graph spectral analysis, for a given low-resolution (LR) pixel block we first use the second eigenvector of a computed graph Laplacian matrix to identify a hard boundary, and then use the third eigenvector to identify two piecewise smooth regions and the transition region that separates them. The LR hard boundary is then super-resolved to HR via a procedure based on local self-similarity, while the graph weights of the LR transition region are mapped to those of the HR transition region via polynomial fitting. Using the computed HR boundary and the weights in the transition region, we construct a suitable HR graph corresponding to the LR counterpart, and perform joint denoising/SR using a graph smoothness prior. Experimental results show that our proposed algorithm outperforms two representative separable denoising/SR schemes in both subjective and objective quality.
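
The final reconstruction can be read as a quadratic problem with a graph smoothness prior. The sketch below solves min_x ||y - Hx||^2 + lam * x^T L x for a small block, with H an explicit block-averaging downsampling matrix; the boundary detection via the second/third Laplacian eigenvectors and the transition-region weight mapping are not reproduced, and the degradation model H is an assumption of this sketch.

```python
import numpy as np

def block_downsampling_matrix(hr_size, factor):
    """Explicit matrix H that averages each factor-by-factor HR block into one
    LR pixel (a simple assumed degradation model, not the paper's)."""
    lr = hr_size // factor
    H = np.zeros((lr * lr, hr_size * hr_size))
    for r in range(lr):
        for c in range(lr):
            for dr in range(factor):
                for dc in range(factor):
                    H[r * lr + c, (r * factor + dr) * hr_size + (c * factor + dc)] = 1.0 / factor ** 2
    return H

def graph_joint_denoise_sr(y_lr, L_hr, factor, lam=0.5):
    """Joint denoising / SR with a graph smoothness prior:
    minimise ||y - H x||^2 + lam * x^T L x, i.e. solve (H^T H + lam L) x = H^T y.
    L_hr is a graph Laplacian on the HR pixel grid (e.g. built as in the GBT
    sketches earlier on this page)."""
    hr_size = y_lr.shape[0] * factor
    H = block_downsampling_matrix(hr_size, factor)
    x = np.linalg.solve(H.T @ H + lam * L_hr, H.T @ y_lr.ravel())
    return x.reshape(hr_size, hr_size)
```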


International Conference on Multimedia and Expo | 2012

Depth Map Super-Resolution Using Synthesized View Matching for Depth-Image-Based Rendering

Wei Hu; Gene Cheung; Xin Li; Oscar Chi Lim Au

In the texture-plus-depth format for 3D visual data, texture and depth maps of multiple viewpoints are coded and transmitted at the sender. At the receiver, the decoded texture and depth maps of two neighboring viewpoints are used to synthesize a desired intermediate view via depth-image-based rendering (DIBR). In this paper, to enable transmission of depth maps at low resolution for bit savings, we propose a novel super-resolution (SR) algorithm that increases the resolution of the received depth map at the decoder to match the corresponding received high-resolution texture map for DIBR. Unlike previous depth map SR techniques that only utilize the texture map of the same view 0 to interpolate missing depth pixels of view 0, we use texture maps of both the same and a neighboring viewpoint, 0 and 1, so that the error between the original texture map of view 1 and the synthesized image of view 1 (interpolated using the texture and depth maps of view 0) can be used as a regularization term during depth map SR of view 0. Further, piecewise smoothness of the reconstructed depth map is enforced by computing only the lowest-frequency coefficients in the graph-based transform (GBT) domain for each interpolated block. Experimental results show that our SR scheme outperforms a previous scheme by up to 1.7 dB in synthesized view quality in terms of PSNR.
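
One ingredient, enforcing piecewise smoothness by keeping only the lowest-frequency GBT coefficients of each interpolated block, is easy to sketch; the synthesized-view-matching regularization term is not reproduced, and the number of retained coefficients is an assumed parameter. A basis U can be built as in the GBT sketches earlier on this page.

```python
import numpy as np

def lowpass_gbt_project(block, U, keep=4):
    """Enforce piecewise smoothness of an interpolated depth block by keeping
    only its 'keep' lowest-graph-frequency GBT coefficients. U is a GBT basis
    with columns ordered by increasing graph frequency (as returned by eigh)."""
    coeff = U.T @ block.astype(np.float64).ravel()
    coeff[keep:] = 0.0                      # drop all higher-frequency components
    return (U @ coeff).reshape(block.shape)
```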


International Conference on Image and Graphics | 2011

Image Interpolation Using Autoregressive Model and Gauss-Seidel Optimization

Ketan Tang; Oscar Chi Lim Au; Lu Fang; Zhiding Yu; Yuanfang Guo

In this paper we propose a simple yet effective image interpolation algorithm based on an autoregressive model. Unlike existing algorithms, which rely on low-resolution pixels to estimate the interpolation coefficients, we optimize the interpolation coefficients and high-resolution pixel values jointly within a single optimization problem. Although the two sets of variables are coupled in the cost function, the problem can be effectively solved using the Gauss-Seidel method, and we prove that the iterations are guaranteed to converge. Experiments show that on average we obtain over 3 dB gain compared to bicubic interpolation and over 0.1 dB gain compared to SAI.
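
The coupled problem described here is solved with Gauss-Seidel sweeps; below is a generic Gauss-Seidel iteration for a linear system A x = b, showing the in-place, most-recent-value update the method relies on. It is not the paper's specific coupled coefficient/pixel update, and the convergence conditions noted in the comment are the standard ones, not a restatement of the paper's proof.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, iters=100, tol=1e-8):
    """Generic Gauss-Seidel iteration for A x = b: each unknown is updated in
    place using the most recent values of the other unknowns. Convergence is
    guaranteed for, e.g., strictly diagonally dominant or symmetric positive
    definite A (standard conditions)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(np.float64).copy()
    for _ in range(iters):
        x_prev = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x
```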


IEEE Transactions on Image Processing | 2014

Seamless View Synthesis Through Texture Optimization

Wenxiu Sun; Oscar Chi Lim Au; Lingfeng Xu; Yujun Li; Wei Hu

In this paper, we present a novel view synthesis method named Visto, which uses a reference input view to generate synthesized views at nearby viewpoints. We formulate the problem as a joint optimization of inter-view texture and depth map similarity, a framework that differs significantly from traditional approaches. As such, Visto tends to implicitly inherit the image characteristics of the reference view without the explicit use of image priors or texture modeling. Visto assumes that each patch is available in both the synthesized and reference views, and thus can be applied to the common area between the two views but not to the out-of-region area at the border of the synthesized view. Visto uses a Gauss-Seidel-like iterative approach to minimize the energy function. Simulation results suggest that Visto can generate seamless virtual views and outperforms other state-of-the-art methods.
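
A rough sketch of the kind of inter-view patch-similarity energy described here: each synthesized-view patch is matched to its closest reference patch under a joint texture-plus-depth distance, and the residuals are summed. The brute-force search, patch size, and weighting are assumptions, and the Gauss-Seidel-like minimization itself is not implemented.

```python
import numpy as np

def interview_patch_energy(synth_tex, synth_dep, ref_tex, ref_dep,
                           psize=8, stride=8, lam=1.0):
    """Evaluate an inter-view texture-plus-depth patch-similarity energy: every
    patch of the synthesized view is matched to its closest reference patch
    (brute-force search) and the squared differences are summed. Grayscale
    images are assumed; the minimisation itself is not implemented here."""
    def patches(img):
        img = img.astype(np.float64)
        h, w = img.shape
        return [img[r:r + psize, c:c + psize]
                for r in range(0, h - psize + 1, stride)
                for c in range(0, w - psize + 1, stride)]
    ref = list(zip(patches(ref_tex), patches(ref_dep)))
    energy = 0.0
    for pt, pd in zip(patches(synth_tex), patches(synth_dep)):
        energy += min(np.sum((pt - qt) ** 2) + lam * np.sum((pd - qd) ** 2)
                      for qt, qd in ref)
    return energy
```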


International Conference on Acoustics, Speech, and Signal Processing | 2011

Error compensation and reliability based view synthesis

Wenxiu Sun; Oscar Chi Lim Au; Lingfeng Xu; Sung Him Chui; Chun Wing Kwok; Yujun Li

View synthesis offers great flexibility for generating free-viewpoint television (FTV) and 3D video (3DV). However, the depth-image-based view synthesis approach is very sensitive to errors in the camera parameters or poorly estimated depth maps (also called depth images). Because of these errors, three kinds of artifacts (blurring, contours, holes) can be introduced during the general synthesis process. Unlike conventional methods, which perform view synthesis only under ideal conditions, in this paper we propose an error-compensation and reliability-based view synthesis system in which these potential errors are taken into account. The main contributions are as follows. Firstly, camera parameter errors are compensated by a global homography transformation matrix. Secondly, the depth maps are classified into reliable and unreliable regions, and reliability-based weighting masks are built to blend the images synthesized from the two views. Finally, a hole-filling technique based on the reliability and depth maps is used to fill the remaining holes. The experimental results demonstrate that these artifacts are efficiently reduced in the synthesized images.
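
The reliability-weighted blending step might look like the sketch below: two warped views are mixed according to their reliability masks, and pixels with no reliable contribution from either view are treated as holes and, if a background estimate is available, filled from it. Single-channel images and [0, 1] reliability maps are assumptions; the homography compensation and the reliability/depth-based hole filling of the paper are not reproduced.

```python
import numpy as np

def blend_views(warp_left, warp_right, rel_left, rel_right, background=None):
    """Blend two warped (synthesized) single-channel images using reliability
    weighting masks in [0, 1]; pixels with no reliable contribution from either
    view are treated as holes and, if a background estimate is supplied, filled
    from it. Only the blending/hole-filling step is sketched here."""
    wl = rel_left.astype(np.float64)
    wr = rel_right.astype(np.float64)
    total = wl + wr
    holes = total < 1e-6                  # no reliable contribution from either view
    total[holes] = 1.0                    # avoid division by zero
    out = (wl * warp_left + wr * warp_right) / total
    if background is not None:
        out[holes] = background[holes]
    return out, holes
```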


International Conference on Acoustics, Speech, and Signal Processing | 2013

Arbitrary factor image interpolation using geodesic distance weighted 2D autoregressive modeling

Ketan Tang; Oscar Chi Lim Au; Yuanfang Guo; Jiahao Pang; Jiali Li

Least-squares regression has been widely used in image interpolation. Some existing regression-based interpolation methods use ordinary least squares (OLS) to formulate their cost functions; these methods usually have difficulties at object boundaries because OLS is sensitive to outliers. Weighted least squares (WLS) is therefore adopted to address the outlier problem, and several weighting schemes have been proposed in the literature. In this paper we propose to use geodesic distance weighting, since geodesic distance simultaneously measures both spatial distance and color difference. Another contribution of this paper is an optimization scheme that can handle arbitrary-factor interpolation. The idea is to separate the problem into two parts: an adaptive pixel correlation model and a convolution-based image degradation model. A geodesic-distance-weighted 2D autoregressive model is used to model the pixel correlation, which preserves local geometry, while the convolution-based image degradation model provides the flexibility to handle an arbitrary interpolation factor. The entire problem is formulated as a WLS problem constrained by a linear equality.
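
A hedged sketch of the two pieces named here: an approximate geodesic weight obtained by accumulating the spatial step and intensity changes along a straight-line path (a crude stand-in for a true geodesic distance), and the closed-form weighted-least-squares fit of the AR coefficients. The parameter values and the path approximation are assumptions of this sketch.

```python
import numpy as np

def geodesic_weight(img, center, sample, gamma=0.2, sigma=10.0):
    """Approximate geodesic distance from pixel 'center' to pixel 'sample' by
    accumulating the spatial step and intensity changes along the straight line
    between them (a crude approximation), then map the distance to a weight."""
    steps = 20
    rs = np.round(np.linspace(center[0], sample[0], steps)).astype(int)
    cs = np.round(np.linspace(center[1], sample[1], steps)).astype(int)
    vals = img[rs, cs].astype(np.float64)
    dist = np.hypot(sample[0] - center[0], sample[1] - center[1]) \
        + gamma * np.sum(np.abs(np.diff(vals)))
    return np.exp(-dist / sigma)

def weighted_ar_fit(X, y, w):
    """Weighted least squares for the AR coefficients a:
    minimise sum_i w_i * (y_i - x_i^T a)^2, i.e. a = (X^T W X)^{-1} X^T W y."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```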


International Conference on Image Processing | 2013

Arbitrary factor image interpolation by convolution kernel constrained 2-D autoregressive modeling

Ketan Tang; Oscar Chi Lim Au; Yuanfang Guo; Jiahao Pang; Jiali Li; Lu Fang

Among existing interpolation methods, convolution-based methods can perform arbitrary-factor interpolation but usually produce blurry or jaggy results, while adaptive interpolation methods can reduce blur and jaggedness but cannot handle arbitrary interpolation factors. In this paper we propose an arbitrary-factor adaptive interpolation algorithm that combines 2-D piecewise autoregressive (PAR) modeling with a convolution kernel constraint. The PAR model ensures that local geometry is well preserved, so the resulting image is neither blurry nor jaggy. The convolution kernel constraint ensures that the recovered high-resolution image is consistent with the low-resolution image and also provides the flexibility to handle an arbitrary interpolation factor. Experimental results show that our algorithm achieves state-of-the-art performance for any interpolation factor.
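
The convolution-based degradation model can be sketched as blur followed by sampling, which is what ties the recovered HR image to the LR observation (e.g. as a consistency constraint on degrade(x, kernel, factor) versus the observed LR image). The reflect boundary handling and the flooring of non-integer sample positions are assumptions of this sketch, not the paper's formulation.

```python
import numpy as np
from scipy.ndimage import convolve

def degrade(hr, kernel, factor):
    """Convolution-based degradation model assumed in this sketch: the LR image
    is the HR image blurred by 'kernel' and then sampled on a grid with spacing
    'factor' (non-integer factors handled by flooring the sample positions)."""
    blurred = convolve(hr.astype(np.float64), kernel, mode='reflect')
    rows = np.floor(np.arange(0, hr.shape[0], factor)).astype(int)
    cols = np.floor(np.arange(0, hr.shape[1], factor)).astype(int)
    return blurred[np.ix_(rows, cols)]
```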

Collaboration


Dive into Oscar Chi Lim Au's collaborations.

Top Co-Authors

Wei Hu, Hong Kong University of Science and Technology
Lingfeng Xu, Hong Kong University of Science and Technology
Wenxiu Sun, Hong Kong University of Science and Technology
Yujun Li, Hong Kong University of Science and Technology
Gene Cheung, National Institute of Informatics
Xin Li, West Virginia University
Ketan Tang, Hong Kong University of Science and Technology
Yuanfang Guo, Hong Kong University of Science and Technology
Chun Wing Kwok, Hong Kong University of Science and Technology
Jiahao Pang, Hong Kong University of Science and Technology