Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Wei Liu is active.

Publication


Featured research published by Wei Liu.


IEEE Transactions on Broadcasting | 2012

Coding Distortion Elimination of Virtual View Synthesis for 3D Video System: Theoretical Analyses and Implementation

Hui Yuan; Ju Liu; Hongji Xu; Zhibin Li; Wei Liu

For a three-dimensional video (3DV) system, coding distortion of texture videos and depth maps can propagate to virtual views during the view synthesis procedure. In this paper, this coding distortion is accounted for in virtual view synthesis. To reduce the coding distortion and improve the quality of synthesized virtual views, a Wiener filter is employed in the 3DV system. To show that the Wiener filter can be applied to virtual views, the principles of virtual view synthesis and Wiener filtering are presented and analyzed in detail, and an implementation of the Wiener filter is then designed for the 3DV system. In the proposed method, the Wiener filter coefficients of each frame are calculated at the 3DV server and then transmitted to the 3DV terminal. Using these coefficients, post-filtering can be applied to the virtual views after the view synthesis procedure. Experimental results demonstrate that a maximum PSNR gain of 0.742 dB can be achieved when the proposed method (taking into account the coding bits of the depth maps, texture videos, and Wiener filter coefficients) is compared with virtual view synthesis without the Wiener filter at the same coding bit rate of a 3DV system.
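The server-side computation of per-frame Wiener coefficients can be sketched as a least-squares FIR filter design, since the server has both the synthesized (degraded) view and the reference view. This is an illustrative NumPy sketch, not the paper's exact formulation; the filter size `k` and the plain least-squares solver are assumptions:

```python
import numpy as np

def wiener_coeffs(degraded, reference, k=3):
    # Least-squares k x k FIR filter mapping the degraded view toward the
    # reference view; computed at the server, where the reference exists.
    pad = k // 2
    d = np.pad(degraded, pad, mode='edge')
    h, w = degraded.shape
    # Design matrix: one column per tap position in the k x k window
    cols = [d[i:i + h, j:j + w].ravel() for i in range(k) for j in range(k)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, reference.ravel(), rcond=None)
    return coeffs.reshape(k, k)

def apply_coeffs(degraded, coeffs):
    # Terminal-side post-filtering using the transmitted coefficients
    k = coeffs.shape[0]
    pad = k // 2
    d = np.pad(degraded, pad, mode='edge')
    h, w = degraded.shape
    out = np.zeros((h, w))
    for i in range(k):
        for j in range(k):
            out += coeffs[i, j] * d[i:i + h, j:j + w]
    return out
```

Only the k×k coefficient array needs to be transmitted to the terminal, which keeps the bit-rate overhead small relative to the texture and depth streams.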


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2011

DIBR based view synthesis for free-viewpoint television

Xiaohui Yang; Ju Liu; Jiande Sun; Xinchao Li; Wei Liu; Yuling Gao

We propose an effective virtual view synthesis approach that utilizes depth-image-based rendering (DIBR). In our scheme, two reference color images and their associated depth maps are used to generate an arbitrary virtual viewpoint. First, the main and auxiliary viewpoint images are warped to the virtual viewpoint. After that, cracks and error points are removed to enhance image quality. Then, the disocclusions in the virtual viewpoint image warped from the main viewpoint are complemented with the help of the auxiliary viewpoint. To reduce color discontinuity in the virtual view, the brightness of the two reference viewpoint images is adjusted. Finally, the remaining holes are filled by a depth-assisted asymmetric dilation inpainting method. Simulations show that the view synthesis approach is effective and reliable in both subjective and objective evaluations.
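The core 3D warping step of DIBR can be sketched as a horizontal disparity shift driven by the depth map, applied back-to-front so nearer pixels overwrite farther ones and unwritten pixels are flagged as holes. The depth-to-disparity conversion below assumes the common 8-bit convention (255 = nearest) and a rectified horizontal camera setup; it is illustrative, not the paper's exact camera model:

```python
import numpy as np

def dibr_warp(color, depth, baseline_focal, z_near, z_far):
    # Convert 8-bit depth to metric depth (255 = z_near, 0 = z_far),
    # then to a per-pixel horizontal disparity in pixels.
    z = 1.0 / (depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    disparity = np.round(baseline_focal / z).astype(int)
    h, w = color.shape[:2]
    virt = np.zeros_like(color)
    hole = np.ones((h, w), dtype=bool)  # hole-flag map
    # Warp farthest-first so nearer pixels overwrite farther ones
    order = np.argsort(z, axis=None)[::-1]
    for idx in order:
        r, c = divmod(idx, w)
        tc = c - disparity[r, c]
        if 0 <= tc < w:
            virt[r, tc] = color[r, c]
            hole[r, tc] = False
    return virt, hole
```

The returned hole-flag map is exactly what the subsequent crack-removal and disocclusion-filling stages operate on.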


IET Information Security | 2011

Step-projection-based spread transform dither modulation

Xinchao Li; Ju Liu; Jiande Sun; Xiaohui Yang; Wei Liu

Quantisation index modulation (QIM) is an important class of watermarking methods that has been widely used in blind watermarking applications. It is well known that spread transform dither modulation (STDM), an extension of QIM, performs well in robustness against random noise and re-quantisation. However, the quantisation step sizes used in STDM are random numbers that do not take image features into account. The authors present a step-projection-based approach to incorporate a perceptual model into the STDM framework. Four implementations of the proposed algorithm are further presented, corresponding to different modified versions of the perceptual model. Experimental results indicate that the step-projection-based approach incorporates the perceptual model into the STDM framework more effectively, thereby providing a significant improvement in image fidelity. Compared with previously proposed modified STDM schemes, the authors' best-performing implementation provides strong resistance against common attacks, especially Gaussian noise, salt-and-pepper noise, and JPEG compression.
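Baseline STDM embedding and detection, independent of the paper's step-projection refinement, can be sketched as follows: the host vector is projected onto a spreading direction, the projection is quantized with a dither offset that encodes the bit, and the detector picks whichever dither lattice lies closest to the received projection. The fixed step size `delta` here is exactly the image-independent choice the paper sets out to improve:

```python
import numpy as np

def stdm_embed(x, bit, u, delta):
    # Project the host vector onto the (unit-norm) spreading direction u
    p = x @ u
    # Dither: bit 0 -> offset 0, bit 1 -> offset delta/2
    d = 0.0 if bit == 0 else delta / 2.0
    # Quantize the projection with the dithered uniform quantizer
    q = np.round((p - d) / delta) * delta + d
    # Move the host along u so its projection lands exactly on q
    return x + (q - p) * u

def stdm_detect(y, u, delta):
    p = y @ u
    # Distance to the nearest point of each bit's dither lattice
    err0 = abs(p - np.round(p / delta) * delta)
    err1 = abs(p - (np.round((p - delta / 2) / delta) * delta + delta / 2))
    return 0 if err0 <= err1 else 1
```

Detection is blind: it needs only the shared spreading direction and step size, not the original host vector.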


World Congress on Intelligent Control and Automation | 2010

A gaze tracking scheme for eye-based intelligent control

Xiaohui Yang; Jiande Sun; Ju Liu; Jinyu Chu; Wei Liu; Yuling Gao

In this paper, an intelligent control scheme based on remote gaze tracking is proposed. First, eye-movement video of the user is captured by an ordinary-resolution camera under the illumination of near-infrared light sources; then the images of the eye region and the pupil region are extracted by processing the video in real time. The pupil-region image is processed to obtain the coordinates of the pupil center and of the corneal glints produced by the infrared light sources. The coordinates of the point on the screen that the user is observing are computed by a gaze-tracking algorithm based on the cross-ratio invariant, and a calibration procedure is needed to eliminate the error caused by the deviation between the optical and visual axes of the eyeball. Finally, the gaze is tracked in real time. The results show that the accuracy of the gaze tracking system is about 0.327 degrees horizontally and 0.300 degrees vertically, which is better than most gaze tracking systems reported in other papers.
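The cross-ratio-invariant mapping from the pupil-glint configuration to screen coordinates can be sketched as a projective transform: with four infrared sources at the screen corners, the four corneal glints correspond to those corners, and the pupil center is mapped through the induced homography. This DLT-based sketch is illustrative and omits the calibration step the paper uses to compensate for the optical/visual axis deviation:

```python
import numpy as np

def homography(src, dst):
    # Direct Linear Transform: solve for H mapping src -> dst
    # from four point correspondences (null space of A via SVD)
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    return vt[-1].reshape(3, 3)

def gaze_point(glints, screen_corners, pupil):
    # Map the pupil center through the glint-to-screen homography
    H = homography(glints, screen_corners)
    p = H @ np.array([pupil[0], pupil[1], 1.0])
    return p[:2] / p[2]
```

Because the mapping is projective, it stays valid under moderate head movement as long as all four glints remain visible on the cornea.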


IEEE Signal Processing Letters | 2011

Robust Video Hashing Based on Double-Layer Embedding

Xiushan Nie; Ju Liu; Jiande Sun; Wei Liu

A robust video hashing scheme for video content identification and authentication, called the Double-Layer Embedding scheme, is proposed. Intra-cluster Locally Linear Embedding (LLE) and inter-cluster Multi-Dimensional Scaling (MDS) are used in the scheme. During hashing, dispersive frames of the video are first selected through a graph model, and the video is partitioned into clusters based on the dispersive frames and the K-Nearest Neighbor method. Then, intra-cluster LLE and inter-cluster MDS are used to generate local and global hash sequences that inherently describe the corresponding video. Experimental results show that the video hashing is resistant to geometric attacks on frames and channel impairments during transmission.
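Classical MDS, the inter-cluster component of the scheme, embeds items from their pairwise distance matrix via double centering and an eigendecomposition. A minimal sketch of that component alone, not the paper's full double-layer pipeline:

```python
import numpy as np

def classical_mds(D, dim=2):
    # Classical MDS: recover dim-dimensional coordinates whose pairwise
    # Euclidean distances approximate the given distance matrix D
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J            # double centering
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]     # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))
```

In the hashing context, each cluster would contribute one item, and the low-dimensional coordinates form the global hash sequence.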


International Conference on Signal Processing | 2010

A gray difference-based pre-processing for gaze tracking

Caixia Yang; Jiande Sun; Ju Liu; Xiaohui Yang; Dichangsheng Wang; Wei Liu

In practical applications of gaze tracking systems, glasses and accessories often degrade the accuracy of eye detection and gaze tracking, so pre-processing to eliminate these factors is essential. In this paper, a gray-difference-based pre-processing scheme is proposed, which utilizes the gray difference between the face, the pupils, and the reflecting points on the cornea to detect the eyes. When the proposed scheme is tested in a cross-ratio-invariant-based gaze tracking system with subjects wearing glasses and accessories, the adaptability and practicality of the test system are improved greatly. The experimental results show that the system can achieve average accuracies of about 0.5 degrees in a viewing field of 14×20×10 cm, which is almost the same as most commercial gaze tracking systems. This demonstrates that the proposed pre-processing scheme is effective and promising for adoption into gaze tracking systems.
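The gray-difference idea, a dark pupil and bright corneal reflections against a mid-gray face, can be sketched as two-sided thresholding followed by centroid extraction. The threshold values below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def find_pupil_and_glints(gray, pupil_thresh=40, glint_thresh=220):
    # Pupil: much darker than the face; glints: much brighter (IR reflections)
    pupil_mask = gray < pupil_thresh
    glint_mask = gray > glint_thresh

    def centroid(mask):
        # Mean (x, y) position of the masked pixels, or None if empty
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            return None
        return (xs.mean(), ys.mean())

    return centroid(pupil_mask), centroid(glint_mask)
```

Glare from glasses would produce spurious bright regions; the paper's contribution is precisely the pre-processing that suppresses such distractors before this kind of detection runs.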


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2011

Virtual view synthesis without preprocessing depth image for depth image based rendering

Lu Wang; Ju Liu; Jiande Sun; Yannan Ren; Wei Liu; Yuling Gao

Virtual view synthesis is a crucial technique in three-dimensional television (3DTV) display, and depth-image-based rendering (DIBR) is a key technology for it. To improve virtual image quality, a method that does not preprocess the depth image is proposed. During synthesis, the hole-flag map is fully utilized. A Horizontal, Vertical and Diagonal Extrapolation (HVDE) algorithm using depth information is also proposed for filling tiny cracks. After blending, the main virtual view image is obtained. Then the image generated by filtering the depth image is used as an assistant image to fill small holes in the main image. Experimental results show that the proposed method achieves better performance in both subjective quality and objective evaluation.
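The crack-filling step can be sketched as extrapolating each hole pixel from its horizontal, vertical, and diagonal neighbors, preferring the neighbor with the smaller depth value, i.e. the background, assuming the common 8-bit convention where larger values are nearer. This is a simplified stand-in for the paper's HVDE algorithm, not its exact formulation:

```python
import numpy as np

def fill_cracks(img, depth, hole):
    # Fill flagged crack pixels from horizontal, vertical and diagonal
    # neighbours, preferring background (smaller 8-bit depth) so foreground
    # colours do not bleed into disoccluded areas.
    out = img.copy()
    h, w = img.shape
    dirs = [(-1, 0), (1, 0), (0, -1), (0, 1),
            (-1, -1), (1, 1), (-1, 1), (1, -1)]
    ys, xs = np.nonzero(hole)
    for r, c in zip(ys, xs):
        best = None
        for dr, dc in dirs:
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not hole[rr, cc]:
                if best is None or depth[rr, cc] < depth[best]:
                    best = (rr, cc)
        if best is not None:
            out[r, c] = img[best]
    return out
```

Preferring background neighbors matters because disocclusions, by construction, expose background that was hidden behind foreground objects in the reference view.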


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2011

Depth generation method for 2D to 3D conversion

Fengli Yu; Ju Liu; Yannan Ren; Jiande Sun; Yuling Gao; Wei Liu

An efficient depth map generation method is presented for static scenes with moving objects. First, the static background scene is reconstructed, and its depth map is extracted by linear perspective. Then, moving objects are segmented precisely, and depth values are assigned to them according to their positions in the static scene. Finally, the depth values of the static background scene and the moving objects are integrated into one depth map. Experimental results show that the proposed method can generate smooth and reliable depth maps.


International Conference on Signal Processing | 2010

LLE-based video hashing for video identification

Xiushan Nie; Jianping Qiao; Ju Liu; Jiande Sun; Xinchao Li; Wei Liu

As web video databases tend to contain immense numbers of copies with the explosive growth of online videos, effective and efficient copy identification techniques are required for content management and copyright protection. To this end, this paper presents a novel video hashing method for video copy identification based on Locally Linear Embedding (LLE). It maps the video to a low-dimensional space through LLE, which is invariant to translation, rotation, and rescaling; the points mapped from the video then serve as a robust hash. Meanwhile, to detect copies that are parts of original videos or contain a clip from an original video, a dynamic sliding window is applied for matching. Experimental results show that the video hashing offers good robustness and discrimination.
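The sliding-window matching step can be sketched as scanning the query hash sequence across the reference sequence and reporting the offset with the smallest mean frame-hash distance, which locates copies that are only clips of the original. A minimal sketch, assuming per-frame hash vectors rather than the paper's exact LLE output:

```python
import numpy as np

def match_clip(query_hash, ref_hash):
    # Slide the query over the reference hash sequence; return the offset
    # with the smallest mean per-frame distance, plus that distance.
    m, n = len(query_hash), len(ref_hash)
    best, best_off = np.inf, -1
    for off in range(n - m + 1):
        d = np.linalg.norm(query_hash - ref_hash[off:off + m], axis=1).mean()
        if d < best:
            best, best_off = d, off
    return best_off, best
```

A distance threshold on the returned minimum then decides whether the query is declared a copy at all.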


International Symposium on Broadband Multimedia Systems and Broadcasting | 2012

A novel key-frame extraction method for semi-automatic 2D-to-3D video conversion

Dichangsheng Wang; Ju Liu; Jiande Sun; Wei Liu; Yujun Li

Semi-automatic 2D-to-3D conversion has become very popular in 3D content creation due to its advantage in balancing the tradeoff between labor cost and 3D conversion quality. However, key-frame extraction, although a very important step, has not been specifically addressed in existing systems. In this paper, a novel key-frame extraction method based on cumulative occlusion is proposed for 2D-to-3D systems. An input color video is first segmented into several shots using the block-based histogram difference. Then shot filtering is performed according to several principles specific to 2D-to-3D conversion. After shot segmentation, the cumulative occlusion curve is computed for each shot, and key-frames are selected according to it. Objective evaluation shows that, compared with previous methods, the proposed key-frame selection algorithm keeps the depth errors of all frames in the whole video at a lower level. The well-propagated depth maps indicate that the proposed scheme can be used in most semi-automatic 2D-to-3D systems.
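The block-based histogram difference used for shot segmentation can be sketched as summing per-block histogram distances between consecutive frames and declaring a shot cut when the sum exceeds a threshold. The block count, bin count, and threshold below are illustrative assumptions:

```python
import numpy as np

def block_hist_diff(f1, f2, blocks=4, bins=16):
    # Split each grayscale frame into blocks x blocks regions and sum the
    # absolute histogram differences; blocks localize the comparison so a
    # small moving object does not trigger a false cut.
    h, w = f1.shape
    bh, bw = h // blocks, w // blocks
    diff = 0.0
    for i in range(blocks):
        for j in range(blocks):
            r1 = f1[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            r2 = f2[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            h1, _ = np.histogram(r1, bins=bins, range=(0, 256))
            h2, _ = np.histogram(r2, bins=bins, range=(0, 256))
            diff += np.abs(h1 - h2).sum()
    return diff

def segment_shots(frames, threshold):
    # A new shot starts wherever consecutive frames differ too much
    cuts = [0]
    for k in range(1, len(frames)):
        if block_hist_diff(frames[k - 1], frames[k]) > threshold:
            cuts.append(k)
    return cuts
```

The cumulative-occlusion key-frame selection then operates within each of the shots this segmentation produces.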

Collaboration


Dive into Wei Liu's collaborations.

Top Co-Authors


Ju Liu

Shandong University

Top Co-Authors


Jiande Sun

Shandong Normal University
