Publication


Featured research published by Yifan Zuo.


Optoelectronic Imaging and Multimedia Technology III | 2014

High accuracy hole filling for Kinect depth maps

Jianxin Wang; Ping An; Yifan Zuo; Zhixiang You; Zhaoyang Zhang

Hole filling of depth maps is a core technology of Kinect-based visual systems. In this paper, we propose a hole-filling algorithm for Kinect depth maps that repairs the foreground and background separately. The algorithm proceeds in two stages. First, a fast pre-processing of the depth-map holes is performed: we fill the background holes with a deepest-depth image, constructed by combining the spatio-temporal information of the pixels in the Kinect depth map with the corresponding color information in the Kinect color image. The second stage enhances the pre-processed depth maps. We propose a depth-enhancement algorithm based on joint geometry and color information. Since the geometry information is more robust than the color, we correct the depth by an affine transform before exploiting the color cues. We then determine the filter parameters adaptively from the local features of the color image, which avoids the texture-copy problem and protects fine structures. Since L1-norm optimization is more robust to data outliers than L2-norm optimization, we force the filtered value to be the solution of an L1-norm optimization. Experimental results show that the proposed algorithm keeps the foreground depth intact, improves depth accuracy at object edges, and eliminates depth flickering at object edges. In addition, the proposed algorithm can effectively fill the large depth-map holes caused by optical reflection.
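The L1-norm filtering step admits a concrete illustration: the minimizer of a weighted sum of absolute deviations is the weighted median, whereas the L2 minimizer is the weighted mean, which an outlier drags away. A minimal sketch (in the actual algorithm the weights would come from the guidance color image; uniform weights are assumed here):

```python
import numpy as np

def weighted_median(values, weights):
    """Return the value minimizing sum_i w_i * |x - v_i| (weighted L1 cost).

    The weighted median is robust to outliers; the weighted mean,
    which minimizes the L2 cost, is pulled toward them instead.
    """
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(w)
    # First index where cumulative weight reaches half the total weight.
    idx = int(np.searchsorted(cum, 0.5 * cum[-1]))
    return float(v[idx])

# Depth candidates in a local window; 5.0 is an outlier (e.g. a wrong edge depth).
depths = [1.0, 1.1, 0.9, 1.05, 5.0]
print(weighted_median(depths, [1.0] * 5))  # 1.05 -- outlier ignored
print(np.mean(depths))                     # ~1.81 -- mean dragged upward
```

This is why the L1 solution preserves sharp depth edges: a few wrong samples across a discontinuity cannot shift the filtered value.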


IEEE Transactions on Circuits and Systems for Video Technology | 2018

Explicit Edge Inconsistency Evaluation Model for Color-Guided Depth Map Enhancement

Yifan Zuo; Qiang Wu; Jian Zhang; Ping An

Color-guided depth enhancement refines depth maps under the assumption that the depth edges and the color edges at corresponding locations are consistent. Among methods for such low-level vision tasks, the Markov random field (MRF), including its variants, is one of the major approaches and has dominated this area for several years. However, the assumption above is not always true. To tackle the problem, state-of-the-art solutions adjust the weighting coefficient inside the smoothness term of the MRF model. These methods lack an explicit evaluation model to quantitatively measure the inconsistency between the depth edge map and the color edge map, so they cannot adaptively control the strength of the guidance from the color image, leading to defects such as texture-copy artifacts and blurred depth edges. In this paper, we propose a quantitative measurement of such inconsistency and explicitly embed it into the smoothness term. The proposed method demonstrates promising experimental results compared with benchmark and state-of-the-art methods on the Middlebury, ToF-Mark, and NYU data sets.
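As a schematic illustration only (the paper defines its own evaluation model; the exponential weight below is a hypothetical placeholder), the idea of scaling the MRF smoothness term by a measured edge inconsistency can be sketched as:

```python
import numpy as np

def mrf_energy(depth, observed, color_edge, depth_edge, lam=0.5):
    """Schematic MRF energy for color-guided depth enhancement.

    The data term keeps the refined depth close to the observation.
    The smoothness term between horizontal neighbors is scaled by a
    weight that decays where the color and depth edge maps disagree,
    limiting guidance from unreliable color edges (hypothetical form).
    """
    data = np.sum((depth - observed) ** 2)
    # Inconsistency is large where one edge map fires and the other does not.
    inconsistency = np.abs(color_edge - depth_edge)
    w = np.exp(-inconsistency)              # per-pixel guidance weight
    w_edge = 0.5 * (w[:, :-1] + w[:, 1:])   # weight per horizontal pixel pair
    dx = np.diff(depth, axis=1)
    return data + lam * np.sum(w_edge * dx ** 2)
```

Under this toy energy, a smooth depth map matching its observation scores zero, and a depth jump coinciding with an inconsistent color edge is penalized less than the same jump under fully consistent edge maps.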


international conference on image and graphics | 2015

Depth Map Upsampling Using Segmentation and Edge Information

Shuai Zheng; Ping An; Yifan Zuo; Xuemei Zou; Jianxin Wang

A depth-upsampling method based on Markov random fields is proposed that considers both depth and color information. First, since the initial interpolated depth map is inaccurate and over-smooth, we use a rectangular window centered on each pixel to search for the maximum and minimum depth values and thereby identify the edge pixels. Then we use the depth information to guide the segmentation of the color image and build different data terms and smoothness terms for edge and non-edge pixels. The resulting depth map is piecewise smooth with sharp edges. Moreover, the result remains good where the color information is consistent but the depth is not, or where the depth information is consistent but the color is not. Experiments show that the proposed method outperforms other upsampling methods in terms of mean absolute difference (MAD).
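The max-min window test for edge pixels can be sketched as follows; the window radius and depth-range threshold are illustrative assumptions, not the paper's values:

```python
import numpy as np

def find_edge_pixels(depth, win=1, thresh=10.0):
    """Mark pixels whose local depth range (max - min) exceeds a threshold.

    An initial interpolated depth map is over-smooth near discontinuities,
    so a large depth range inside the (2*win+1)^2 window centered on a
    pixel signals an edge pixel that needs its own data/smoothness terms.
    """
    h, w = depth.shape
    padded = np.pad(depth, win, mode='edge')
    # Stack every shifted view of the window; axis 0 runs over window offsets.
    views = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(2 * win + 1)
                      for dx in range(2 * win + 1)])
    return views.max(axis=0) - views.min(axis=0) > thresh

depth = np.zeros((5, 5))
depth[:, 3:] = 100.0  # vertical depth discontinuity between columns 2 and 3
print(find_edge_pixels(depth)[0])  # [False False  True  True False]
```

Only the two pixel columns straddling the discontinuity are flagged, so the edge-specific MRF terms apply exactly where the interpolation is unreliable.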


international conference on multimedia and expo | 2016

Explicit modeling on depth-color inconsistency for color-guided depth up-sampling

Yifan Zuo; Qiang Wu; Jian Zhang; Ping An

Color-guided depth up-sampling enhances the resolution of a depth map under the assumption that depth discontinuities and color-image edges at corresponding locations are consistent. Among all reported methods, the MRF, including its variants, is one of the major approaches and has dominated this area for several years. However, the assumption above is not always true. The usual solution is to adjust the weighting inside the smoothness term of the MRF model, but no existing method explicitly considers the inconsistency between a depth discontinuity and the corresponding color edge. In this paper, we propose a quantitative measurement of such inconsistency and explicitly embed it into the weighting of the smoothness term; such a solution has not been reported in the literature. The improved depth up-sampling based on the proposed method is evaluated on the Middlebury and ToFMark datasets and demonstrates promising results.


international conference on image processing | 2016

Explicit measurement on depth-color inconsistency for depth completion

Yifan Zuo; Qiang Wu; Ping An; Jian Zhang

Color-guided depth completion refines depth maps captured by structured-light sensing by filling missing depth structure and denoising, based on the assumption that depth discontinuities and color edges at corresponding locations are consistent. Among the proposed methods, MRF-based methods, including their variants, form one of the major approaches. However, the assumption above is not always true, which causes texture-copy artifacts and blurred depth discontinuities. State-of-the-art solutions usually modify the weighting inside the smoothness term of the MRF model. Because no method explicitly considers the inconsistency between a depth discontinuity and the corresponding color edge, they cannot adaptively control the effect of the guidance from the color image when completing the depth map. In this paper, we propose a quantitative measurement of such inconsistency and explicitly embed it into the weighting of the smoothness term. The proposed method is evaluated on the NYU Kinect dataset and demonstrates promising results.


international conference on image processing | 2015

Depth upsampling method via Markov random fields without edge-misaligned artifacts

Yifan Zuo; Ping An; Shuai Zheng; Zhaoyang Zhang

Recently, the widespread use of time-of-flight sensors has made it possible to capture depth information for dynamic scenes in real time, which promotes the development of many 3D image and video processing applications. However, such depth maps are noisy and have low resolution. In this paper, we propose an edge-based depth-map super-resolution method that solves a labeling optimization problem in an MRF. The inputs are a low-quality depth map and the corresponding high-resolution color image. The proposed method not only avoids texture-copy artifacts but also preserves depth edges that do not exist in the color image. We compare our algorithm with the state of the art on the benchmark dataset. The experimental results demonstrate the validity and robustness of our approach.


2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology | 2015

Depth map super resolution and edge enhancement by utilizing RGB information

Xu-le Yan; Ping An; Shuai Zheng; Yifan Zuo; Zhixiang You

This paper presents a depth-map super-resolution method whose core is a novel edge-enhancement algorithm. An auto-regressive algorithm is applied to generate an initial upsampled depth map before edge enhancement. Besides the low-resolution depth map, an intensity image derived from the high-resolution color image is also used to extract accurate depth edges, which are finally rectified by combining color, depth, and intensity information. The experimental results show that our approach is able to recover high-resolution (HR) depth maps with high quality. Moreover, in comparison with previous state-of-the-art algorithms, our approach generally achieves better results.


Optoelectronic Imaging and Multimedia Technology III | 2014

Applications of just-noticeable depth difference model in joint multiview video plus depth coding

Chao Liu; Ping An; Yifan Zuo; Zhaoyang Zhang

A new multiview just-noticeable-depth-difference (MJNDD) model is presented and applied to compress joint multiview video plus depth. Many video coding algorithms remove spatial, temporal, and statistical redundancies, but they are not capable of removing perceptual redundancy. Since the final receptor of video is the human eye, we can remove perceptual redundancy according to the properties of the human visual system (HVS) to gain higher compression efficiency. The traditional just-noticeable-distortion (JND) model in the pixel domain covers luminance contrast and spatio-temporal masking effects, which describe perceptual redundancy quantitatively. Because the HVS is also very sensitive to depth information, the MJNDD model is constructed by combining the traditional JND model with a just-noticeable-depth-difference (JNDD) model. The texture video is divided into background and foreground areas using depth information, and different JND thresholds are assigned to the two parts. The MJNDD model is then used to encode the texture video in JMVC. When encoding the depth video, the JNDD model is applied to remove block artifacts and protect edges. We then use VSRS 3.5 (View Synthesis Reference Software) to generate the intermediate views. Experimental results show that our model can tolerate more noise and improves compression efficiency by 25.29 percent on average and by up to 54.06 percent compared with JMVC while maintaining subjective quality. Hence it achieves a high compression ratio at a low bit rate.
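The foreground/background threshold assignment can be sketched as below; the depth split point and the JND values are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def assign_jnd_thresholds(depth, depth_split=128, jnd_fg=3.0, jnd_bg=8.0):
    """Assign per-pixel just-noticeable-distortion thresholds.

    The HVS is more sensitive to distortion in foreground regions, so
    they get a smaller JND threshold; background pixels tolerate more
    coding distortion without visible quality loss.  Depth convention
    assumed here: larger depth value means nearer to the camera.
    """
    foreground = depth >= depth_split
    return np.where(foreground, jnd_fg, jnd_bg)

depth = np.array([[10, 200], [90, 250]])
print(assign_jnd_thresholds(depth))  # foreground pixels (>=128) get 3.0, others 8.0
```

An encoder can then skip or coarsen residuals whose magnitude stays below the per-pixel threshold, which is where the bit-rate saving comes from.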


IFTC | 2012

Edge-Based Algorithm for Multi-view Depth Map Generation

Yifan Zuo; Ping An; Zhaoyang Zhang

Normalized cross-correlation (NCC) is a common matching measure that is insensitive to radiometric differences between stereo images. However, traditional rectangle-based NCC tends to expand depth discontinuities. An efficient edge-based algorithm using NCC for multi-view depth-map generation is proposed in this paper, which preserves depth discontinuities while retaining the robustness to radiometric differences. In addition, all pixels of the initial result are classified as uncovered, occluded, reliable, or unreliable by exploiting the left-right consistency (LRC) constraint and a sequential consistency constraint. Since a voting scheme leads to errors when matching windows lack reliable information, and a joint-trilateral filter blurs the depth map when a fixed window size is used, especially at depth discontinuities, we combine the voting scheme and the joint-trilateral filter to obtain a better result. The experimental results show that our method achieves competitive performance.
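NCC's robustness to radiometric differences comes from mean subtraction (cancelling additive brightness offsets) and norm division (cancelling multiplicative gain); a minimal patch-level sketch:

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Normalized cross-correlation between two equal-size patches.

    Subtracting each patch's mean cancels additive brightness offsets,
    and dividing by the norms cancels multiplicative gain, so the score
    depends only on the structural similarity of the two patches.
    """
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)) + eps
    return float(np.sum(a * b) / denom)

left = np.array([[1.0, 2.0], [3.0, 4.0]])
right = 2.0 * left + 10.0  # same structure under a gain and offset change
print(round(ncc(left, right), 4))  # 1.0 -- the radiometric change does not matter
```

The discontinuity-expansion problem arises because the rectangular support straddles depth edges, which motivates the edge-based window shaping described above.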


Journal of Electronic Imaging | 2018

Integrated cosparse analysis model with explicit edge inconsistency measurement for guided depth map upsampling

Yifan Zuo; Qiang Wu; Ping An; Xiwu Shang
