
Publication


Featured research published by Lianghao Wang.


intelligent information technology application | 2009

A Depth Extraction Method Based on Motion and Geometry for 2D to 3D Conversion

Xiaojun Huang; Lianghao Wang; Junjun Huang; Dongxiao Li; Ming Zhang

With the development of 3DTV, the conversion of existing 2D videos to 3D has become an important component of 3D content production. One of the key steps in 2D-to-3D conversion is generating a dense depth map. In this paper, we propose a novel depth extraction method based on motion and geometric information for 2D-to-3D conversion, which consists of two major depth extraction modules: depth from motion and depth from geometric perspective. The H.264 motion estimation result is combined with moving-object detection to diminish blocking artifacts and generate a motion-based depth map. In parallel, a geometry-based depth map is generated by edge detection and the Hough transform. Finally, the motion-based and geometry-based depth maps are integrated into a single depth map by a depth fusion algorithm.
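The abstract does not spell out the fusion step; purely as a rough illustration, here is a minimal sketch of blending a motion-based and a geometry-based depth map with a per-pixel weight. The confidence map and the linear blend are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def fuse_depth_maps(depth_motion, depth_geometry, motion_confidence):
    """Blend two depth hypotheses into a single map.

    depth_motion, depth_geometry : float arrays in [0, 1], same shape.
    motion_confidence            : hypothetical per-pixel weight in [0, 1]
                                   expressing how reliable the motion cue is.
    """
    w = np.clip(motion_confidence, 0.0, 1.0)
    fused = w * depth_motion + (1.0 - w) * depth_geometry
    return np.clip(fused, 0.0, 1.0)

# Toy example: trust the motion cue only in the upper half of the frame.
h, w = 4, 6
d_motion = np.random.rand(h, w)
d_geom = np.tile(np.linspace(1.0, 0.0, h)[:, None], (1, w))  # farther toward the top
conf = np.zeros((h, w)); conf[:h // 2] = 1.0
print(fuse_depth_maps(d_motion, d_geom, conf))
```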


IEEE Signal Processing Letters | 2013

Full-Image Guided Filtering for Fast Stereo Matching

Qingqing Yang; Dongxiao Li; Lianghao Wang; Ming Zhang

A novel full-image guided filtering method is proposed. Unlike many existing neighborhood filters, the proposed approach employs all input elements during filtering. In addition, a novel scheme called weight propagation is proposed to compute support weights; it meets the requirements of edge preservation and low complexity. The filter is applied to cost-volume filtering in the local stereo matching framework. The algorithm using the proposed filtering method is currently one of the best local algorithms on the Middlebury stereo testbed in terms of both speed and accuracy.
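The full-image guided filter itself is not described in enough detail here to reproduce; the sketch below only illustrates the surrounding cost-volume filtering framework for local stereo matching, with an ordinary box filter standing in for the proposed filter. The window size and the absolute-difference cost are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stereo_matching(left, right, max_disp=16, radius=4):
    """Winner-take-all local stereo matching over a filtered cost volume.

    left, right : grayscale float images of identical shape, values in [0, 1].
    A plain box filter stands in for the paper's full-image guided filter.
    Returns an integer disparity map for the left image.
    """
    h, w = left.shape
    cost = np.empty((max_disp, h, w))
    for d in range(max_disp):
        # Absolute-difference matching cost for disparity d.
        raw = np.abs(left - np.roll(right, d, axis=1))
        raw[:, :d] = 1.0                       # no valid match near the left border
        # Cost aggregation: smooth each cost slice over a local window.
        cost[d] = uniform_filter(raw, size=2 * radius + 1)
    return np.argmin(cost, axis=0)

left = np.random.rand(32, 48)
right = np.roll(left, -3, axis=1)              # synthetic pair, true disparity ~3
print(local_stereo_matching(left, right)[:, 8:-8].mean())
```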


Eurasip Journal on Image and Video Processing | 2013

Depth-image-based rendering with spatial and temporal texture synthesis for 3DTV

Ming Xi; Lianghao Wang; Qingqing Yang; Dongxiao Li; Ming Zhang

A depth-image-based rendering (DIBR) method with spatial and temporal texture synthesis is presented in this article. In principle, the DIBR algorithm can generate arbitrary virtual views of the same scene in a three-dimensional television system. However, disoccluded areas, which are occluded in the original views and become visible in the virtual views, make it very difficult to obtain high image quality in extrapolated views. The proposed view synthesis method combines the temporally stationary scene information extracted from the input video with the spatial texture in the current frame to fill the disoccluded areas in the virtual views. First, the current texture image and a stationary scene image extracted from the input video are warped to the same virtual viewpoint by the DIBR method. Then, the two virtual images are merged to reduce the hole regions and maintain the temporal consistency of these areas. Finally, an oriented exemplar-based inpainting method is used to eliminate the remaining holes. Experimental results demonstrate the performance and advantage of the proposed method compared with other view synthesis methods.
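For readers unfamiliar with DIBR, the sketch below illustrates only the basic warping step: shifting pixels horizontally by a disparity derived from depth and resolving occlusions with a z-buffer. The linear depth-to-disparity mapping and the hole marker are simplifying assumptions, and the paper's spatial and temporal hole filling is not reproduced here.

```python
import numpy as np

def dibr_warp(color, depth, max_disparity=20):
    """Forward-warp a color image to a virtual view using its depth map.

    color : (H, W, 3) float array.
    depth : (H, W) float array in [0, 1], where 1 is nearest to the camera.
    Pixels are shifted horizontally by depth * max_disparity (a simplification
    of the camera geometry); occlusions are resolved with a z-buffer, and
    disoccluded pixels are left marked as holes (negative values).
    """
    h, w, _ = color.shape
    virtual = np.full_like(color, -1.0)          # -1 marks a hole
    zbuf = np.full((h, w), -np.inf)
    disparity = (depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            xv = x + disparity[y, x]
            if 0 <= xv < w and depth[y, x] > zbuf[y, xv]:
                zbuf[y, xv] = depth[y, x]        # nearer pixels win
                virtual[y, xv] = color[y, x]
    return virtual, virtual[..., 0] < 0          # warped image and hole mask

color = np.random.rand(24, 32, 3)
depth = np.tile(np.linspace(0.0, 1.0, 32), (24, 1))
img, holes = dibr_warp(color, depth)
print("hole pixels:", int(holes.sum()))
```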


IEEE Transactions on Broadcasting | 2010

An Asymmetric Edge Adaptive Filter for Depth Generation and Hole Filling in 3DTV

Lianghao Wang; Xiaojun Huang; Ming Xi; Dongxiao Li; Ming Zhang

An asymmetric edge adaptive filter (AEAF) is proposed in this paper to partially address two problems in 3DTV: depth generation and hole filling. Unlike other similar processing methods, a single AEAF operation simultaneously achieves edge correction and pre-processing of the depth map, so the computational complexity is greatly reduced. On the one hand, starting from an initial depth map obtained by simple algorithms, AEAF can produce depth maps with comparatively accurate object edges, avoiding the high computation introduced by depth generation methods based on image and video segmentation. On the other hand, AEAF can reduce the area of holes in rendered views via asymmetric smoothing of depth maps, promising improved image quality with reduced artifacts and distortions. Experimental results in 2D-to-3D conversion and stereo matching applications show that AEAF strikes a balance between these two conflicting requirements.
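The exact form of the AEAF is defined in the paper; purely to illustrate the general idea of asymmetric depth-map smoothing used in the DIBR literature (stronger smoothing in one direction than the other to shrink holes while limiting geometric distortion), here is a sketch using an anisotropic Gaussian. The kernel sigmas are arbitrary, and this is not the authors' filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def asymmetric_smooth(depth, sigma_vertical=6.0, sigma_horizontal=1.0):
    """Smooth a depth map more strongly along one direction than the other.

    This anisotropic Gaussian is only a stand-in for the paper's AEAF; it
    reduces sharp depth discontinuities (which cause disocclusion holes after
    DIBR) while the weaker smoothing in the other direction limits distortion.
    """
    return gaussian_filter(depth, sigma=(sigma_vertical, sigma_horizontal))

depth = np.zeros((64, 64))
depth[:, 32:] = 1.0                      # a sharp vertical depth edge
smoothed = asymmetric_smooth(depth)
print(smoothed[32, 30:35].round(3))      # the edge is only slightly softened horizontally
```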


IEEE/OSA Journal of Display Technology | 2015

3D Synthesis and Crosstalk Reduction for Lenticular Autostereoscopic Displays

Dongxiao Li; Dongning Zang; Xiaotian Qiao; Lianghao Wang; Ming Zhang

Novel image processing methods are presented in this work for 3D synthesis of multiview images and crosstalk reduction, to improve the perceived image quality of lenticular autostereoscopic displays. First, to optimize the intensity of a screen subpixel mapped to a fractional view number, a weighting method is proposed that blends the intensities of the corresponding subpixels from the two neighboring integer-view images by minimizing the mean square error. Experimental results show that, compared with the conventional rounding method, the proposed weighting method effectively reduces ghosting artifacts and sharpens object boundaries when viewing at optimal integer viewpoints. Second, the crosstalk among vertically neighboring subpixels is modeled as a shift-invariant low-pass filter, and a novel computationally efficient inverse filtering method is proposed for crosstalk reduction by applying the fast Fourier transform (FFT) to each column of subpixels in the multiview interlaced screen image. In addition, a novel filtering method is proposed for determining the maximum input dynamic range of screen subpixel intensities. Experimental results demonstrate that the ghost image from neighboring views can be substantially diminished by the proposed inverse filtering method.
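As a rough illustration of column-wise inverse filtering (not the paper's exact method), the sketch below models vertical crosstalk as a small shift-invariant blur on each subpixel column and removes it by regularized division in the frequency domain. The crosstalk kernel and the regularization constant are assumptions.

```python
import numpy as np

def reduce_vertical_crosstalk(column, kernel, eps=1e-2):
    """Regularized inverse filtering of one subpixel column.

    column : 1D array of subpixel intensities along a screen column.
    kernel : 1D crosstalk kernel (assumed shift-invariant, low-pass), centered.
    The division is damped by eps so that frequencies where the kernel's
    response is near zero do not amplify noise (a simple Wiener-like fix).
    """
    n = len(column)
    k = np.zeros(n)
    half = len(kernel) // 2
    for i, v in enumerate(kernel):
        k[(i - half) % n] = v                 # center the kernel at index 0 (circular)
    C = np.fft.fft(column)
    K = np.fft.fft(k)
    restored = np.fft.ifft(C * np.conj(K) / (np.abs(K) ** 2 + eps)).real
    return np.clip(restored, 0.0, 1.0)

# Toy check: blur a column with the kernel, then try to undo the blur.
kernel = np.array([0.1, 0.8, 0.1])
col = np.random.rand(240)
blurred = np.convolve(col, kernel, mode="same")
print(np.abs(reduce_vertical_crosstalk(blurred, kernel) - col).mean())
```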


international conference on audio, language and image processing | 2012

An automatic 2D to 3D conversion algorithm using multi-depth cues

Pan Ji; Lianghao Wang; Dongxiao Li; Ming Zhang

The 2D-to-3D conversion technique plays a crucial role in the development and promotion of three-dimensional television (3DTV), because it can provide an adequate supply of high-quality 3D program material. In this paper, a novel automatic 2D-to-3D conversion method using multiple depth cues is presented. The depth cues used in our system, which are integrated into one depth map according to the type of 2D scene, include perspective geometry, defocus, visual saliency and adaptive depth models. After the depth maps are extracted, the original 2D image or video is converted to a stereoscopic one for display on 3D devices. Our method is verified on various sequences, and the experimental results show that the resulting images and videos are both realistic and visually pleasing.
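The scene classification and the exact cue weighting are not given in the abstract; the following sketch only illustrates the general pattern of combining several normalized depth-cue maps with scene-dependent weights. The cue names, scene types and weights are hypothetical.

```python
import numpy as np

# Hypothetical per-scene weighting of depth cues; the actual cue set and the
# scene classification rules of the paper are not reproduced here.
SCENE_WEIGHTS = {
    "outdoor_landscape": {"geometry": 0.6, "defocus": 0.1, "saliency": 0.1, "model": 0.2},
    "indoor_closeup":    {"geometry": 0.2, "defocus": 0.4, "saliency": 0.3, "model": 0.1},
}

def combine_depth_cues(cues, scene_type):
    """Linearly combine normalized depth-cue maps for a given scene type.

    cues : dict mapping cue name -> (H, W) array in [0, 1].
    """
    weights = SCENE_WEIGHTS[scene_type]
    fused = sum(weights[name] * cues[name] for name in weights)
    return np.clip(fused, 0.0, 1.0)

h, w = 8, 8
cues = {name: np.random.rand(h, w) for name in ("geometry", "defocus", "saliency", "model")}
print(combine_depth_cues(cues, "indoor_closeup").shape)
```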


international conference on image and graphics | 2011

GPU Based Implementation of 3DTV System

Lianghao Wang; Jing Zhang; Shao-Jun Yao; Dongxiao Li; Ming Zhang

This paper focuses on a near real-time implementation of an end-to-end 3DTV system. It is specifically designed for the generation of high-quality disparity maps and depth-image-based rendering (DIBR) on the graphics processing unit (GPU) through the CUDA (Compute Unified Device Architecture) API. We propose novel methods, including stereo matching with adaptive windows and an asymmetric edge adaptive filter (AEAF), for industrial application. These algorithms are structured to expose as much data parallelism as possible, and the power of shared memory and data-parallel programming on the GPU is exploited. We evaluate the proposed methods and implementation on the Middlebury benchmark, and the experimental results show that our method offers a favorable trade-off between accuracy and execution speed. Running on an NVIDIA Quadro FX4800 graphics card, for 480×375 stereo images with 60 disparity levels, the proposed system takes about 146 ms for stereo matching, and DIBR takes 5.7 ms to render one view or 14 ms to render eight views.
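The adaptive-window matcher and the AEAF are not reproduced here; as a generic illustration of how a per-pixel matching-cost computation maps onto a data-parallel CUDA kernel, here is a sketch using Numba (requires a CUDA-capable GPU). The SAD cost and the launch configuration are illustrative choices, not the authors' implementation.

```python
import numpy as np
from numba import cuda

@cuda.jit
def sad_cost_kernel(left, right, cost, max_disp):
    # One thread per pixel: each thread fills the cost column for its pixel.
    x, y = cuda.grid(2)
    h, w = left.shape
    if y < h and x < w:
        for d in range(max_disp):
            if x - d >= 0:
                cost[d, y, x] = abs(left[y, x] - right[y, x - d])
            else:
                cost[d, y, x] = 1e6            # no valid match near the left border

h, w, max_disp = 375, 480, 60
left = np.random.rand(h, w).astype(np.float32)
right = np.random.rand(h, w).astype(np.float32)
cost = cuda.device_array((max_disp, h, w), dtype=np.float32)

threads = (16, 16)
blocks = ((w + threads[0] - 1) // threads[0], (h + threads[1] - 1) // threads[1])
sad_cost_kernel[blocks, threads](cuda.to_device(left), cuda.to_device(right), cost, max_disp)
disparity = cost.copy_to_host().argmin(axis=0)   # winner-take-all on the host
print(disparity.shape)
```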


Journal of Zhejiang University Science C | 2012

Accurate real-time stereo correspondence using intra- and inter-scanline optimization

Li Yao; Dongxiao Li; Jing Zhang; Lianghao Wang; Ming Zhang

This paper presents a novel stereo algorithm that can generate accurate dense disparity maps in real time. The algorithm employs an effective cross-based variable support aggregation strategy within a scanline optimization framework. Rather than matching intensities directly, the adaptive support aggregation allows precise handling of weakly textured regions as well as depth discontinuities. To improve the disparity results with global reasoning, we reformulate the energy function on a tree structure over the whole 2D image area, as opposed to dynamic programming on individual scanlines. By applying both intra- and inter-scanline optimization, the algorithm reduces the typical 'streaking' artifact while maintaining high computational efficiency. The results are evaluated on the Middlebury stereo dataset, showing that our approach is among the best real-time approaches. We implement the algorithm on a commodity graphics card with the CUDA architecture, running at about 35 frames/s for a typical stereo pair with a resolution of 384×288 and 16 disparity levels.
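The cross-based aggregation and the tree-structured optimization are specific to the paper; the sketch below shows only a standard single-scanline dynamic-programming pass of the kind such methods build on. The smoothness penalties and the matching cost are illustrative assumptions.

```python
import numpy as np

def scanline_dp(left_row, right_row, max_disp=16, p1=0.05, p2=0.5):
    """Cost aggregation along a single scanline (semi-global style).

    left_row, right_row : 1D arrays (one image row each), values in [0, 1].
    p1 penalizes small disparity changes, p2 large jumps; both values are
    illustrative, not taken from the paper. Returns a per-pixel disparity.
    """
    w = len(left_row)
    cost = np.full((w, max_disp), 1e6)
    for d in range(max_disp):
        cost[d:, d] = np.abs(left_row[d:] - right_row[: w - d])
    agg = cost.copy()
    for x in range(1, w):
        prev = agg[x - 1]
        best_prev = prev.min()
        for d in range(max_disp):
            candidates = [prev[d], best_prev + p2]
            if d > 0:
                candidates.append(prev[d - 1] + p1)
            if d + 1 < max_disp:
                candidates.append(prev[d + 1] + p1)
            agg[x, d] = cost[x, d] + min(candidates) - best_prev
    return agg.argmin(axis=1)

row = np.random.rand(64)
print(scanline_dp(row, np.roll(row, -4))[8:-8].mean())   # true disparity ~4
```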


international conference on wireless communications and signal processing | 2015

Deep convolutional architecture for natural image denoising

Xuejiao Wang; Qiuyan Tao; Lianghao Wang; Dongxiao Li; Ming Zhang

Natural images are an important source of information, but observed image signals are often corrupted during acquisition or transmission. As an important step in image preprocessing, denoising has a significant influence on the follow-up procedures. Unlike traditional methods that use spatial- or transform-domain features of a single image, we propose a deep learning method for natural image denoising. Our method directly learns an end-to-end mapping from a noisy image to the corresponding denoised image. It is based on a deep convolutional architecture with rectified linear units and local response normalization. The experimental results show that the proposed deep convolutional architecture learns various features from noisy images and achieves high-quality denoising within a short time, making it practical for real usage.
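The network in the abstract is not specified layer by layer; the sketch below is a small PyTorch stand-in showing a convolutional noisy-to-clean mapping with ReLU and local response normalization, as the abstract describes. The depth, channel widths and LRN parameters here are guesses.

```python
import torch
import torch.nn as nn

class ConvDenoiser(nn.Module):
    """A small convolutional noisy-to-clean mapping (illustrative only)."""

    def __init__(self, channels=1, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5),
            nn.Conv2d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, kernel_size=3, padding=1),
        )

    def forward(self, noisy):
        return self.net(noisy)

# One gradient computation on a synthetic noisy/clean pair.
model = ConvDenoiser()
clean = torch.rand(4, 1, 32, 32)
noisy = clean + 0.1 * torch.randn_like(clean)
loss = nn.functional.mse_loss(model(noisy), clean)
loss.backward()
print(float(loss))
```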


international conference on image and graphics | 2011

Hierarchical Joint Bilateral Filtering for Depth Post-Processing

Qingqing Yang; Lianghao Wang; Dongxiao Li; Ming Zhang

Various 3D applications require accurate and smooth depth maps, and post-processing is necessary for depth maps generated directly by correspondence algorithms. A hierarchical joint bilateral filtering method is proposed to improve the coarse depth map. Depth confidence is first measured, and pixels are placed into different categories according to their matching confidence. The initial coarse depth map is then down-sampled together with the corresponding confidence map, and the depth map is progressively refined during multi-step up-sampling. Unlike many filtering approaches, confident matches are propagated to unconfident regions by suppressing outliers in a hierarchical structure. Experimental results show that the proposed method achieves significant improvement of the initial depth map with low computational complexity.
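The hierarchical, confidence-driven part of the method is not reproduced here; the sketch below shows only a plain joint (cross) bilateral filter of a depth map guided by an intensity image, which is the basic building block. The sigmas and window radius are illustrative, and border handling is simplified by wrap-around.

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=5, sigma_s=3.0, sigma_r=0.1):
    """Joint (cross) bilateral filtering of a depth map guided by an image.

    depth : (H, W) coarse depth map.
    guide : (H, W) grayscale guidance image in [0, 1].
    The range weight is computed from the guide image, not from the noisy
    depth, so depth edges follow the image edges.
    """
    acc = np.zeros_like(depth, dtype=float)
    norm = np.zeros_like(depth, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            shifted_depth = np.roll(depth, (dy, dx), axis=(0, 1))
            shifted_guide = np.roll(guide, (dy, dx), axis=(0, 1))
            rng = np.exp(-((guide - shifted_guide) ** 2) / (2 * sigma_r ** 2))
            wgt = spatial * rng
            acc += wgt * shifted_depth
            norm += wgt
    return acc / norm

guide = np.tile(np.linspace(0, 1, 64), (64, 1))
depth = (guide > 0.5).astype(float) + 0.05 * np.random.randn(64, 64)
print(joint_bilateral_filter(depth, guide).std())
```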
