Network


Latest external collaborations at the country level. Click on a dot to dive into the details.

Hotspot


Dive into the research topics where Jingyu Yang is active.

Publication


Featured research published by Jingyu Yang.


IEEE Transactions on Image Processing | 2014

Color-guided depth recovery from RGB-D data using an adaptive autoregressive model

Jingyu Yang; Xinchen Ye; Kun Li; Chunping Hou; Yao Wang

This paper proposes an adaptive color-guided autoregressive (AR) model for high-quality depth recovery from low-quality measurements captured by depth cameras. We observe and verify that the AR model tightly fits depth maps of generic scenes. The depth recovery task is formulated as a minimization of AR prediction errors subject to measurement consistency. The AR predictor for each pixel is constructed according to both the local correlation in the initial depth map and the nonlocal similarity in the accompanying high-quality color image. We analyze the stability of our method from a linear-system point of view and design a parameter adaptation scheme to achieve stable and accurate depth recovery. Quantitative and qualitative evaluations against ten state-of-the-art schemes show the effectiveness and superiority of our method. Being able to handle various types of depth degradation, the proposed method is versatile for mainstream depth sensors such as time-of-flight cameras and the Kinect, as demonstrated by experiments on real systems.
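The core optimization described in the abstract (AR prediction errors balanced against measurement consistency) can be illustrated with a minimal 1-D sketch. Everything here is an assumption for illustration only: a quadratic penalty instead of a hard consistency constraint, bilateral-style color weights over two neighbors, and the function name `ar_depth_recovery`; the paper's actual predictor combines local and nonlocal terms in 2-D.

```python
import numpy as np

def ar_depth_recovery(depth_lq, color, mask, sigma_c=0.05, lam=10.0):
    """Toy 1-D color-guided AR depth recovery.

    Solves min_u ||(I - A) u||^2 + lam * ||M (u - d)||^2, where the
    AR weights in A come from color similarity (a much-simplified
    stand-in for the paper's local + nonlocal predictor)."""
    n = len(depth_lq)
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
        w = np.array([np.exp(-(color[i] - color[j]) ** 2 / (2 * sigma_c ** 2))
                      for j in nbrs])
        w /= w.sum()                      # AR weights sum to one
        for j, wj in zip(nbrs, w):
            A[i, j] = wj
    M = np.diag(mask.astype(float))       # measurement-consistency selector
    # Normal equations of the quadratic energy.
    H = (np.eye(n) - A).T @ (np.eye(n) - A) + lam * M
    return np.linalg.solve(H, lam * M @ depth_lq)
```

Because the AR weights nearly vanish across a color edge, a missing depth sample is filled from its own side of the edge rather than blurred across it.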


IEEE Transactions on Multimedia | 2013

Cloud-Based Image Coding for Mobile Devices—Toward Thousands to One Compression

Huanjing Yue; Xiaoyan Sun; Jingyu Yang; Feng Wu

Current image coding schemes make it hard to utilize external images for compression even if highly correlated images can be found in the cloud. To solve this problem, we propose a cloud-based image coding method that differs from current image coding even on the ground: it no longer compresses images pixel by pixel, but instead describes images and reconstructs them from a large-scale image database via the descriptions. First, we describe an input image by its down-sampled version and local feature descriptors. The descriptors are used to retrieve highly correlated images in the cloud and to identify corresponding patches. The down-sampled image serves as a target for stitching the retrieved image patches together. Second, the down-sampled image is compressed using current image coding. The feature vectors of the local descriptors are predicted from the corresponding vectors extracted from the decoded down-sampled image, and the prediction residuals are compressed by transform, quantization, and entropy coding. Experimental results show that the visual quality of the reconstructed images is significantly better than that of intra-frame coding in HEVC and JPEG at thousands-to-one compression.


IEEE Transactions on Circuits and Systems for Video Technology | 2009

Image and Video Denoising Using Adaptive Dual-Tree Discrete Wavelet Packets

Jingyu Yang; Yao Wang; Wenli Xu; Qionghai Dai

We investigate image and video denoising using adaptive dual-tree discrete wavelet packets (ADDWP), an extension of the dual-tree discrete wavelet transform (DDWT). With ADDWP, DDWT subbands are further decomposed into wavelet packets with anisotropic decomposition, so that the resulting wavelets have elongated support regions and more orientations than DDWT wavelets. To determine the decomposition structure, we develop a greedy basis selection algorithm for ADDWP that has significantly lower computational complexity than a previously developed optimal basis selection algorithm, with only slight performance loss. To denoise the ADDWP coefficients, a statistical model is used to exploit the dependency between the real and imaginary parts of the coefficients. The proposed scheme outperforms several state-of-the-art DDWT-based schemes for images with rich directional features, and shows promising results in video denoising without using motion estimation. The visual quality of images and videos denoised by the proposed scheme is also superior.
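The dependency between real and imaginary coefficient parts can be exploited, in the simplest case, by shrinking complex coefficients by their joint magnitude rather than attenuating the two parts independently. The sketch below is a toy stand-in (the function name and the plain soft-shrinkage rule are assumptions; the paper uses a bivariate statistical model on actual dual-tree coefficients):

```python
import numpy as np

def joint_shrink(coeffs, thr):
    """Shrink complex coefficients by magnitude, so the real and
    imaginary parts are attenuated jointly: a coefficient survives
    only if its joint magnitude exceeds the threshold."""
    mag = np.abs(coeffs)
    scale = np.maximum(mag - thr, 0.0) / np.maximum(mag, 1e-12)
    return coeffs * scale
```

A coefficient like 0.1 + 0.1j is zeroed outright by a threshold of 1, while 3 + 4j (magnitude 5) keeps its phase and loses only the thresholded share of its magnitude.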


IEEE Transactions on Image Processing | 2008

Image Coding Using Dual-Tree Discrete Wavelet Transform

Jingyu Yang; Yao Wang; Wenli Xu; Qionghai Dai

In this paper, we explore the application of the 2-D dual-tree discrete wavelet transform (DDWT), a directional and redundant transform, to image coding. Three methods for sparsifying DDWT coefficients, i.e., matching pursuit, basis pursuit, and noise shaping, are compared. We find that noise shaping achieves the best nonlinear approximation efficiency with the lowest computational complexity. The interscale, intersubband, and intrasubband dependencies among the DDWT coefficients are analyzed. Three subband coding methods, i.e., SPIHT, EBCOT, and TCE, are evaluated for coding DDWT coefficients. Experimental results show that TCE has the best performance. In spite of the redundancy of the transform, our DDWT TCE scheme outperforms JPEG2000 by up to 0.70 dB at low bit rates and is comparable to JPEG2000 at high bit rates. The DDWT TCE scheme also outperforms two other image coders that are based on directional filter banks. To further improve coding efficiency, we extend the DDWT to anisotropic dual-tree discrete wavelet packets (ADDWP), which incorporate adaptive and anisotropic decomposition into the DDWT. The ADDWP subbands are coded with the TCE coder. Experimental results show that ADDWP TCE provides up to 1.47 dB improvement over the DDWT TCE scheme, outperforming JPEG2000 by up to 2.00 dB. Reconstructed images of our coding schemes are visually more appealing than those of DWT-based coding schemes thanks to the directionality of the wavelets.


European Conference on Computer Vision | 2012

Depth recovery using an adaptive color-guided auto-regressive model

Jingyu Yang; Xinchen Ye; Kun Li; Chunping Hou

This paper proposes an adaptive color-guided auto-regressive (AR) model for high-quality depth recovery from low-quality measurements captured by depth cameras. We formulate the depth recovery task as a minimization of AR prediction errors subject to measurement consistency. The AR predictor for each pixel is constructed according to both the local correlation in the initial depth map and the nonlocal similarity in the accompanying high-quality color image. Experimental results show that our method outperforms existing state-of-the-art schemes and is versatile for both mainstream depth sensors: ToF cameras and the Kinect.


IEEE Transactions on Systems, Man, and Cybernetics | 2015

Nonrigid Structure From Motion via Sparse Representation

Kun Li; Jingyu Yang; Jianmin Jiang

This paper proposes a new approach for nonrigid structure from motion with occlusion, based on sparse representation. We address the occlusion problem using a recent development in sparse representation: matrix completion, which can recover an observation matrix with a high percentage of missing data and can also reduce noise and outliers in the known elements. We introduce a sparse transform into the joint estimation of 3-D shapes and motions: the 3-D shape trajectory space is fitted with a wavelet basis to better model complex motion. Experimental results on datasets with and without occlusion show that our method estimates 3-D shapes and motions more accurately than state-of-the-art algorithms.
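The matrix-completion step that handles missing observations can be sketched with a simple alternating-projection scheme: project onto low-rank matrices via the SVD, then restore the observed entries. This is a simplified stand-in, not the authors' exact solver; the fixed target rank and the function name are assumptions.

```python
import numpy as np

def complete_matrix(M, mask, rank=1, n_iter=200):
    """Fill missing entries (mask == 0) of M by alternating between
    a rank-r projection and re-imposing the observed entries."""
    X = mask * M                        # start from the observed data
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                  # project onto rank-r matrices
        X = U @ np.diag(s) @ Vt
        X = mask * M + (1 - mask) * X   # keep observed entries fixed
    return X
```

For an observation matrix that is genuinely low-rank, the missing entries are driven to the values consistent with the observed ones.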


IEEE Transactions on Image Processing | 2013

Landmark Image Super-Resolution by Retrieving Web Images

Huanjing Yue; Xiaoyan Sun; Jingyu Yang; Feng Wu

This paper proposes a new super-resolution (SR) scheme for landmark images that retrieves correlated web images, which significantly improves exemplar-based SR. Given a low-resolution (LR) image, we extract local descriptors from its up-sampled version and bundle the descriptors according to their spatial relationships to retrieve correlated high-resolution (HR) images from the web. Though similar in content, the retrieved images are usually taken with different illumination, focal lengths, and shooting perspectives, resulting in uncertainty in the HR detail approximation. To solve this problem, we first align these images to the up-sampled LR image through a global registration, which identifies the corresponding regions in these images and reduces mismatching. Second, we propose a structure-aware matching criterion and adaptive block sizes to improve the mapping accuracy between LR and HR patches. Finally, the matched HR patches are blended together by solving an energy minimization problem to recover the desired HR image. Experimental results demonstrate that our SR scheme achieves significant improvement over four state-of-the-art schemes in terms of both subjective and objective quality.


Science in China Series F: Information Sciences | 2009

Ways to sparse representation: An overview

Jingyu Yang; Yigang Peng; Wenli Xu; Qionghai Dai

Many algorithms have been proposed to find sparse representations over redundant dictionaries or transforms. This paper gives an overview of these algorithms, classifying them into three categories: greedy pursuit algorithms, lp-norm regularization-based algorithms, and iterative shrinkage algorithms. We summarize their pros and cons as well as their connections. Based on recent evidence, we conclude that the algorithms of the three categories share the same root: the lp-norm regularized inverse problem. Finally, several topics that deserve further investigation are discussed.
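A representative member of the iterative shrinkage family surveyed above is the iterative shrinkage-thresholding algorithm (ISTA), which alternates a gradient step on the data term with soft thresholding. A minimal sketch for the l1 case (p = 1; the function name is assumed):

```python
import numpy as np

def ista(D, b, lam, n_iter=200):
    """Solve min_x 0.5 * ||D x - b||_2^2 + lam * ||x||_1
    by iterative shrinkage-thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - b)              # gradient of the data term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

With an orthonormal dictionary the iteration reaches its fixed point in one step, which makes the shrinkage effect easy to see: each coefficient of b is pulled toward zero by lam.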


IEEE Transactions on Image Processing | 2015

Image Denoising by Exploring External and Internal Correlations

Huanjing Yue; Xiaoyan Sun; Jingyu Yang; Feng Wu

Single-image denoising suffers from the limited data available within a noisy image. In this paper, we propose a novel image denoising scheme that explores both internal and external correlations with the help of web images. For each noisy patch, we build internal and external data cubes by finding similar patches in the noisy image and in web images, respectively. We then reduce noise in a two-stage strategy using different filtering approaches. In the first stage, since the noisy patch may lead to inaccurate patch selection, we propose a graph-based optimization method to improve patch-matching accuracy in external denoising, while internal denoising applies frequency truncation to the internal cubes. Combining the internal and external denoising results yields a preliminary estimate. In the second stage, we filter the external and internal cubes in the transform domain; here the preliminary estimate not only enhances patch-matching accuracy but also provides reliable estimates of the filtering parameters. The final denoised image is obtained by fusing the external and internal filtering results. Experimental results show that our method consistently outperforms state-of-the-art denoising schemes in both subjective and objective quality measurements, e.g., it achieves a gain of more than 2 dB over BM3D across a wide range of noise levels.
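The internal-denoising step, frequency truncation on a cube of similar patches, can be sketched as follows. The 3-D FFT and the keep-fraction are assumptions standing in for the paper's actual transform and truncation rule: after transforming the cube, only the largest coefficients (which carry the shared patch structure) are kept, and the small ones (mostly noise) are zeroed.

```python
import numpy as np

def truncate_cube(cube, keep=0.1):
    """Denoise a stack of similar patches by keeping only the
    largest `keep` fraction of 3-D transform coefficients."""
    C = np.fft.fftn(cube)
    mag = np.abs(C)
    thr = np.quantile(mag, 1.0 - keep)   # magnitude cut-off
    C[mag < thr] = 0.0                   # zero the small (noisy) coefficients
    return np.real(np.fft.ifftn(C))
```

On a cube of nearly identical patches, the signal concentrates in a few coefficients, so truncation removes most of the noise energy while preserving the common content.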


IEEE Transactions on Circuits and Systems for Video Technology | 2015

Foreground–Background Separation From Video Clips via Motion-Assisted Matrix Restoration

Xinchen Ye; Jingyu Yang; Xin Sun; Kun Li; Chunping Hou; Yao Wang

Separating video clips into foreground and background components is a useful and important technique that makes recognition, classification, and scene analysis more efficient. In this paper, we propose a motion-assisted matrix restoration (MAMR) model for foreground-background separation in video clips. In the MAMR model, the backgrounds across frames are modeled by a low-rank matrix, while the foreground objects are modeled by a sparse matrix. To facilitate efficient separation, a dense motion field is estimated for each frame and mapped into a weighting matrix that indicates the likelihood that each pixel belongs to the background. Anchor frames are selected in the dense motion estimation to overcome the difficulty of detecting slowly moving and camouflaged objects. In addition, we extend our model to a robust MAMR model that is resilient to noise for practical applications. Evaluations on challenging datasets demonstrate that our method outperforms many state-of-the-art methods and is versatile for a wide range of surveillance videos.
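The low-rank-plus-sparse decomposition at the heart of this model can be sketched with a simplified ADMM-style loop in which the per-pixel weights scale the sparsity penalty (high weight = likely background, so foreground there is penalized more). This is a toy sketch, not the paper's exact solver; the function name, the fixed penalty parameter mu, and the default lam = 1/sqrt(max(m, n)) (the common robust-PCA choice) are assumptions.

```python
import numpy as np

def mamr_sketch(D, W, lam=None, mu=1.0, n_iter=100):
    """Split D (pixels x frames) into low-rank background B and
    sparse foreground F, with weights W scaling the l1 penalty."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    B = np.zeros_like(D)
    F = np.zeros_like(D)
    Y = np.zeros_like(D)                   # scaled dual variable
    for _ in range(n_iter):
        # Background update: singular-value thresholding.
        U, s, Vt = np.linalg.svd(D - F + Y / mu, full_matrices=False)
        B = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Foreground update: weighted soft-thresholding.
        Z = D - B + Y / mu
        F = np.sign(Z) * np.maximum(np.abs(Z) - lam * W / mu, 0.0)
        Y += mu * (D - B - F)              # dual ascent on D = B + F
    return B, F
```

On a toy sequence with a rank-1 background and a single bright foreground pixel, the sparse component isolates the outlier while the low-rank component absorbs the smooth background.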

Collaboration


Dive into Jingyu Yang's collaborations.

Top Co-Authors


Feng Wu

University of Science and Technology of China


Xinchen Ye

Dalian University of Technology
