Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Lu Yu is active.

Publication


Featured research published by Lu Yu.


IEEE Signal Processing Letters | 2011

Binocular Just-Noticeable-Difference Model for Stereoscopic Images

Yin Zhao; Zhenzhong Chen; Ce Zhu; Yap-Peng Tan; Lu Yu

Conventional 2-D Just-Noticeable-Difference (JND) models measure the perceptible distortion of a visual signal based on monocular vision properties, with a single image presented to both eyes. However, they are not applicable to stereoscopic displays, in which a pair of stereoscopic images is presented to a viewer's left and right eyes, respectively. Some unique binocular vision properties, e.g., binocular combination and rivalry, need to be considered in the development of a JND model for stereoscopic images. In this letter, we propose a binocular JND (BJND) model based on psychophysical experiments conducted to model the basic binocular vision properties in response to asymmetric noise in a pair of stereoscopic images. The first experiment measures the joint visibility thresholds arising from the luminance masking effect and the binocular combination of noise. The second experiment examines the reduction of visual sensitivity in binocular vision due to the contrast masking effect. Based on these experiments, the developed BJND model measures the perceptible distortion of binocular vision for stereoscopic images. Subjective evaluations on stereoscopic images validate the proposed BJND model.
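As a point of reference for how JND thresholds are typically computed, the sketch below implements a textbook-style 2-D luminance-masking threshold of the Chou-Li form in Python. It is only an illustrative monocular component with commonly quoted constants, not the binocular BJND model developed in this letter.

import numpy as np

def luminance_masking_threshold(background_luminance):
    # Textbook-style 2-D luminance-adaptation visibility threshold (Chou-Li form);
    # NOT the binocular BJND model from the letter.
    # background_luminance: mean background luminance values in 0..255.
    bg = np.asarray(background_luminance, dtype=np.float64)
    dark = 17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0    # dark regions tolerate more noise
    bright = 3.0 / 128.0 * (bg - 127.0) + 3.0          # thresholds rise slowly in bright regions
    return np.where(bg <= 127.0, dark, bright)

# Example: thresholds for dark, mid-gray, and bright backgrounds.
print(luminance_masking_threshold([10, 127, 240]))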


IEEE Transactions on Broadcasting | 2011

Boundary Artifact Reduction in View Synthesis of 3D Video: From Perspective of Texture-Depth Alignment

Yin Zhao; Ce Zhu; Zhenzhong Chen; Dong Tian; Lu Yu

3D Video (3DV) with depth-image-based view synthesis is a promising candidate for next-generation broadcasting applications. However, the synthesized views in 3DV are often contaminated by annoying artifacts, most notably around object boundaries, due to imperfect depth maps (e.g., produced by state-of-the-art stereo matching algorithms or compressed lossily). In this paper, we first review some representative methods for boundary artifact reduction in view synthesis, and make an in-depth investigation into the underlying mechanisms of boundary artifact generation from a new perspective of texture-depth alignment in boundary regions. Three forms of texture-depth misalignment are identified as the causes of different boundary artifacts, which mainly present themselves as scattered noise in the background and object erosion in the foreground. Based on the insights gained from this analysis, we propose a novel solution of suppression of misalignment and alignment enforcement between texture and depth (denoted as SMART) to reduce background noise and foreground erosion, respectively, among the different types of boundary artifacts. SMART is developed as a three-step pre-processing stage in view synthesis. Experiments on view synthesis with original and compressed texture/depth data consistently demonstrate the superior performance of the proposed method as compared with other relevant boundary artifact reduction schemes.
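The texture-depth alignment perspective can be visualized with a small check: flag depth-edge pixels that have no texture edge nearby. The sketch below is an illustrative diagnostic with hypothetical thresholds and assumes co-registered texture and depth arrays; it is not the SMART pre-processing itself.

import numpy as np
from scipy import ndimage

def misaligned_boundary_mask(texture_gray, depth,
                             depth_edge_thr=8.0, texture_edge_thr=20.0,
                             tolerance=2):
    # Flag depth-edge pixels with no texture edge in a small neighborhood.
    # Such pixels mark texture-depth misalignment, the kind of inconsistency
    # that produces boundary artifacts in synthesized views.
    tex = texture_gray.astype(np.float64)
    dep = depth.astype(np.float64)
    depth_mag = np.hypot(ndimage.sobel(dep, axis=0), ndimage.sobel(dep, axis=1))
    tex_mag = np.hypot(ndimage.sobel(tex, axis=0), ndimage.sobel(tex, axis=1))
    depth_edges = depth_mag > depth_edge_thr
    texture_edges = tex_mag > texture_edge_thr
    # Dilate texture edges so a small offset still counts as "aligned".
    texture_near = ndimage.binary_dilation(texture_edges, iterations=tolerance)
    return depth_edges & ~texture_near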


IEEE Transactions on Image Processing | 2011

Depth No-Synthesis-Error Model for View Synthesis in 3-D Video

Yin Zhao; Ce Zhu; Zhenzhong Chen; Lu Yu

Currently, 3-D video targets the application of disparity-adjustable stereoscopic video, where view synthesis based on depth-image-based rendering (DIBR) is employed to generate virtual views. Distortions in depth information may introduce geometry changes or occlusion variations in the synthesized views. In practice, depth information is stored in 8-bit grayscale format, whereas the disparity range for a visually comfortable stereo pair is usually much smaller than 256 levels. Thus, several depth levels may correspond to the same integer (or sub-pixel) disparity value in DIBR-based view synthesis, so that some depth distortions do not result in geometry changes in the synthesized view. From this observation, we develop a depth no-synthesis-error (D-NOSE) model to examine the allowable depth distortions in rendering a virtual view without introducing any geometry changes. We further show that depth distortions within the proposed D-NOSE profile also do not compromise the occlusion order in view synthesis. Therefore, a virtual view can be synthesized losslessly if depth distortions stay within the D-NOSE specified thresholds. Our simulations validate the proposed D-NOSE model in lossless view synthesis and demonstrate the gain obtained with the model in depth coding.
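The D-NOSE idea follows directly from the quantization of disparity: with the standard 3-DV depth-to-disparity conversion, a contiguous range of 8-bit depth levels maps to the same rendered (e.g., quarter-pel) disparity, and distortions inside that range cause no geometry change. A minimal sketch with hypothetical camera parameters:

import numpy as np

# Hypothetical camera parameters: focal length in pixels, baseline in metres,
# and the near/far clipping depths used to quantize depth into 8 bits.
FOCAL = 1732.9
BASELINE = 0.05
Z_NEAR, Z_FAR = 34.5, 2760.5
PRECISION = 4  # quarter-pel rendering precision

def depth_level_to_disparity(v):
    # Map an 8-bit depth level v (0..255) to a disparity in pixels.
    inv_z = v / 255.0 * (1.0 / Z_NEAR - 1.0 / Z_FAR) + 1.0 / Z_FAR
    return FOCAL * BASELINE * inv_z

def quantized_disparity(v):
    # Disparity rounded to the rendering precision (e.g., quarter-pel).
    return round(depth_level_to_disparity(v) * PRECISION)

def no_synthesis_error_range(v):
    # Largest interval of depth levels around v that maps to the same
    # quantized disparity, i.e., distortions inside it cause no geometry change.
    target = quantized_disparity(v)
    lo = v
    while lo > 0 and quantized_disparity(lo - 1) == target:
        lo -= 1
    hi = v
    while hi < 255 and quantized_disparity(hi + 1) == target:
        hi += 1
    return lo, hi

if __name__ == "__main__":
    for v in (10, 64, 128, 200):
        print(v, no_synthesis_error_range(v))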


IEEE Signal Processing Letters | 2014

Pixel-Based Inter Prediction in Coded Texture Assisted Depth Coding

Shuai Li; Jianjun Lei; Ce Zhu; Lu Yu; Chunping Hou

This letter presents a pixel-based motion estimation scheme assisted by the coded texture video for depth inter-prediction, exploiting the motion similarity between depth and texture video. The proposed scheme achieves higher inter-prediction gain without transmitting any motion vectors, since the motion is derived from the coded texture available at both the encoder and the decoder. Coupled with the depth-texture structure similarity, the inter-prediction method is further extended to an integrated prediction approach that makes use of both intra and inter information. Experimental results show that the proposed method achieves superior rate-distortion performance.
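The motion-derivation idea can be illustrated with a small search over the decoded texture: because the coded texture is available at both the encoder and the decoder, the same displacement can be re-derived on both sides and applied to the depth reference, so no motion vector needs to be sent. The sketch below uses a small block search for simplicity (the letter operates at the pixel level) and hypothetical block and search sizes.

import numpy as np

def texture_derived_motion(cur_tex, ref_tex, y, x, block=4, search=8):
    # Find the displacement that best matches the coded texture block at (y, x)
    # in the current frame against the coded reference texture. Both sides have
    # the decoded texture, so the decoder can repeat this search and no motion
    # vector has to be transmitted.  (Illustrative sketch only.)
    h, w = cur_tex.shape
    cur_blk = cur_tex[y:y + block, x:x + block].astype(np.int64)
    best, best_cost = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                continue
            cand = ref_tex[yy:yy + block, xx:xx + block].astype(np.int64)
            cost = np.abs(cur_blk - cand).sum()          # SAD in the texture domain
            if best_cost is None or cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

def predict_depth_block(ref_depth, y, x, motion, block=4):
    # Predict the current depth block by applying the texture-derived motion
    # to the reference depth frame.
    dy, dx = motion
    return ref_depth[y + dy:y + dy + block, x + dx:x + dx + block]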


IEEE Signal Processing Letters | 2017

Weighted-to-Spherically-Uniform Quality Evaluation for Omnidirectional Video

Yule Sun; Ang Lu; Lu Yu

Omnidirectional video records a scene in all directions around one central position, allowing users to freely select their viewing direction. Assuming that viewing directions are uniformly distributed, the isotropic observation space can be regarded as a sphere. Omnidirectional video is commonly represented by different projection formats with one or multiple planes. To measure the objective quality of omnidirectional video in the observation space more accurately, a weighted-to-spherically-uniform quality evaluation method is proposed in this letter. The error of each pixel on the projection planes is multiplied by a weight that accounts for its mapped spherical area in the observation space, so that pixels covering equal spherical areas have the same influence on the distortion measurement. Our method makes the quality evaluation results more accurate and reliable, since it avoids the error propagation caused by converting from the resampled representation space to the observation space. As an example of such a quality evaluation method, the weighted-to-spherically-uniform peak signal-to-noise ratio (WS-PSNR) is described and its performance is experimentally analyzed.
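For the common equirectangular projection (ERP), the spherical-area weight reduces to a cosine of latitude per row. The sketch below assumes single-channel 8-bit frames and follows that definition.

import numpy as np

def ws_psnr_erp(ref, dist, max_val=255.0):
    # Weighted-to-spherically-uniform PSNR for an equirectangular (ERP) frame.
    # Each row is weighted by the spherical area its pixels cover, which for
    # ERP is proportional to cos(latitude).
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    h, w = ref.shape
    rows = np.arange(h)
    weights = np.cos((rows + 0.5 - h / 2.0) * np.pi / h)   # per-row weight
    weight_map = np.repeat(weights[:, None], w, axis=1)    # per-pixel weight map
    wmse = np.sum(weight_map * (ref - dist) ** 2) / np.sum(weight_map)
    return 10.0 * np.log10(max_val ** 2 / wmse)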


IEEE Transactions on Broadcasting | 2016

Texture-Aware Depth Prediction in 3D Video Coding

Ce Zhu; Shuai Li; Jianhua Zheng; Yanbo Gao; Lu Yu

3D video has attracted great interest in the last decade, and a new 3D video coding standard, known as the 3D extension of High Efficiency Video Coding (3D-HEVC), has been developed. The standard addresses the coding of multiview video plus depth, which consists of texture videos and depth videos of multiple views. Depth video, as a description of the geometry of a scene, is generally composed of large flat regions separated by sharp edges. Conventional video coding may fail to generate an accurate prediction for units with sharp edges, because its block-based prediction cannot compensate well for (minor) boundary changes. To address this problem, a new texture-aware depth inter-prediction method is proposed, which incorporates pixel-oriented weighting in the bi-prediction process by exploiting the motion and structure similarities between texture and depth videos. Furthermore, this pixel-oriented weighting scheme can be extended to the uni-prediction process by considering additional prediction blocks with small motion vector displacements. Experimental results demonstrate that the adapted 3D-HEVC codec with the proposed method achieves better rate-distortion performance compared with the original 3D-HEVC standard codec.
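One way to realize the pixel-oriented weighting described above is to weight the two depth prediction blocks pixel by pixel according to how well the co-located coded texture matches each reference. The sketch below is only an illustration of that idea with a hypothetical weight choice, not the exact 3D-HEVC modification proposed in the paper.

import numpy as np

def texture_aware_bi_prediction(depth_pred0, depth_pred1,
                                tex_cur, tex_ref0, tex_ref1, eps=1.0):
    # Combine two depth predictions with pixel-wise weights derived from the
    # coded texture (illustrative; the weighting function is a hypothetical choice).
    # A reference whose texture is closer to the current coded texture at a
    # pixel gets a larger weight for that pixel.
    d0 = np.abs(tex_cur.astype(np.float64) - tex_ref0) + eps
    d1 = np.abs(tex_cur.astype(np.float64) - tex_ref1) + eps
    w0 = 1.0 / d0
    w1 = 1.0 / d1
    return (w0 * depth_pred0 + w1 * depth_pred1) / (w0 + w1)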


IEEE Transactions on Image Processing | 2018

Local and Global Feature Learning for Blind Quality Evaluation of Screen Content and Natural Scene Images

Wujie Zhou; Lu Yu; Yang Zhou; Weiwei Qiu; Ming-Wei Wu; Ting Luo

The blind quality evaluation of screen content images (SCIs) and natural scene images (NSIs) has become an important yet very challenging issue. In this paper, we present an effective blind quality evaluation technique for SCIs and NSIs based on a dictionary of learned local and global quality features. First, a local dictionary is constructed from local normalized image patches using conventional K-means clustering. With this local dictionary, the learned local quality features are obtained using locality-constrained linear coding with max pooling. To extract the learned global quality features, the histogram representations of binary patterns are concatenated to form a global dictionary. The collaborative representation algorithm is used to efficiently code the learned global quality features of the distorted images with this dictionary. Finally, kernel-based support vector regression is used to integrate these features into an overall quality score. Extensive experiments demonstrate that, in comparison with the most relevant metrics, the proposed blind metric yields significantly higher consistency with subjective fidelity ratings.
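The local-feature branch described above (patch dictionary via K-means, coding against the dictionary, max pooling, and kernel SVR) can be sketched with scikit-learn. The coding step below uses a simple distance-based soft assignment as a stand-in for locality-constrained linear coding, and all sizes and names are hypothetical.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

def extract_patches(img, patch=8, stride=8):
    # Collect locally normalized patches from a grayscale image (illustrative).
    h, w = img.shape
    patches = [img[i:i + patch, j:j + patch].ravel()
               for i in range(0, h - patch + 1, stride)
               for j in range(0, w - patch + 1, stride)]
    patches = np.asarray(patches, dtype=np.float64)
    patches -= patches.mean(axis=1, keepdims=True)        # remove local mean
    return patches

def local_quality_features(img, kmeans, knn=5):
    # Code each patch against the learned dictionary and max-pool over patches.
    # Simplified stand-in for locality-constrained linear coding: each patch
    # activates its knn nearest codewords with distance-based weights.
    patches = extract_patches(img)
    dists = np.linalg.norm(patches[:, None, :] - kmeans.cluster_centers_[None], axis=2)
    codes = np.zeros_like(dists)
    nearest = np.argsort(dists, axis=1)[:, :knn]
    rows = np.arange(len(patches))[:, None]
    codes[rows, nearest] = 1.0 / (dists[rows, nearest] + 1e-6)
    return codes.max(axis=0)                               # max pooling

def train(train_imgs, train_mos, dict_size=128):
    # Hypothetical training loop: fit the dictionary, then regress quality scores
    # with a kernel SVR on the pooled local features.
    all_patches = np.vstack([extract_patches(im) for im in train_imgs])
    kmeans = KMeans(n_clusters=dict_size, n_init=4, random_state=0).fit(all_patches)
    feats = np.array([local_quality_features(im, kmeans) for im in train_imgs])
    reg = SVR(kernel="rbf").fit(feats, train_mos)
    return kmeans, reg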


IEEE Signal Processing Letters | 2014

Block-Based In-Loop View Synthesis for 3-D Video Coding

Yichen Zhang; Yin Zhao; Lu Yu

View synthesis prediction (VSP) employs a synthesized picture as a reference picture for current-view texture coding, acting as an advanced form of disparity-compensated prediction. However, picture-based view synthesis demands huge complexity, especially at the decoder. Therefore, we propose a block-based in-loop view synthesis scheme which generates VSP samples only for blocks coded in VSP modes (called target blocks). For a target block, a window in the reference view is estimated. Then, pixels within the window are warped to the current view, producing VSP samples for the target block. The proposed method turns picture-level VSP sample generation into a macroblock-level process and significantly reduces the complexity of the VSP module while maintaining coding efficiency.
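The block-level warping can be sketched as follows: for a target block, the maximum disparity defines a window in the reference view, and each reference pixel is shifted by its depth-derived disparity into the current view. This is a simplified, purely horizontal, nearest-pixel illustration (the sign of the shift depends on the relative camera positions), not the scheme's normative process; depth_to_disp is an assumed callable such as the standard 3-DV depth-to-disparity conversion.

import numpy as np

def synthesize_vsp_block(ref_tex, ref_depth, x0, y0, block, depth_to_disp):
    # Fill a target block in the current view by warping a window of the
    # reference view (illustrative 1-D horizontal warp, nearest pixel).
    h, w = ref_tex.shape
    vsp_block = np.zeros((block, block), dtype=ref_tex.dtype)
    max_disp = int(np.ceil(depth_to_disp(255)))
    # Window in the reference view that can project into the target block.
    x_lo, x_hi = max(0, x0 - max_disp), min(w, x0 + block + max_disp)
    for y in range(y0, y0 + block):
        for xr in range(x_lo, x_hi):
            xt = xr - int(round(depth_to_disp(ref_depth[y, xr])))  # warp to current view
            if x0 <= xt < x0 + block:
                vsp_block[y - y0, xt - x0] = ref_tex[y, xr]
    return vsp_block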


Data Compression Conference | 2016

General Synthesized View Distortion Estimation for Depth Map Compression of FTV

Ang Lu; Yichen Zhang; Lu Yu



Archive | 2013

METHOD AND DEVICE FOR GENERATING PREDICTED PICTURES

Lu Yu; Yichen Zhang; Yin Zhao; Yingjie Hong; Ming Li


Collaboration


Dive into Lu Yu's collaboration.

Top Co-Authors

Ce Zhu
University of Electronic Science and Technology of China

Shuai Li
University of Electronic Science and Technology of China

Ang Lu
Zhejiang University