Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Miao Liao is active.

Publication


Featured research published by Miao Liao.


British Machine Vision Conference | 2006

Real-time Global Stereo Matching Using Hierarchical Belief Propagation.

Qingxiong Yang; Liang Wang; Ruigang Yang; Shengnan Wang; Miao Liao; David Nistér

In this paper, we present a belief propagation (BP) based global algorithm that generates high-quality results while maintaining real-time performance. To our knowledge, it is the first BP-based global method that runs at real-time speed. Our efficiency gains come mainly from the parallelism of graphics hardware, which yields a 45-times speedup over the CPU implementation. To quantify the accuracy of our approach, we evaluate experimental results on the Middlebury data sets, showing that our approach is among the best (ranked first in the new evaluation system) of all real-time approaches. In addition, since the running time of general BP is linear in the number of iterations, adopting a large number of iterations is not feasible for practical applications. Hence, a novel approach is proposed to adaptively update pixel costs. Unlike general BP methods, the running time of our proposed algorithm converges rapidly.
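As background for the method above, the sketch below shows the standard min-sum belief-propagation message update for a stereo MRF. It assumes a truncated-linear smoothness term, a common choice in BP stereo; the paper's hierarchical GPU scheduling and exact potentials are not reproduced here.

```python
import numpy as np

def bp_message(data_cost, incoming, smooth_weight, trunc):
    """Min-sum BP message from pixel p to neighbour q over disparity labels.

    data_cost: (L,) matching cost of p for each disparity.
    incoming:  list of (L,) messages into p from its neighbours other than q.
    The truncated-linear smoothness term is an assumption; the paper's exact
    potentials and hierarchical GPU scheduling are not reproduced here.
    """
    h = data_cost + sum(incoming)                  # belief at the sender p
    L = len(h)
    labels = np.arange(L)
    msg = np.empty(L)
    for d_q in range(L):                           # minimise over p's labels
        smooth = smooth_weight * np.minimum(np.abs(labels - d_q), trunc)
        msg[d_q] = np.min(h + smooth)
    return msg - msg.min()                         # normalise to avoid drift

# toy usage: 16 disparity levels, three incoming messages
rng = np.random.default_rng(0)
m = bp_message(rng.random(16), [rng.random(16) for _ in range(3)], 1.0, 2.0)
print(m.shape)  # (16,)
```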


International Conference on Computer Vision | 2009

Modeling deformable objects from a single depth camera

Miao Liao; Qing Zhang; Huamin Wang; Ruigang Yang; Minglun Gong

We propose a novel approach to reconstructing complete 3D deformable models over time using a single depth camera, provided that most parts of the model are observed by the camera at least once. The core of the algorithm rests on the assumption that the deformation is continuous and predictable over a short temporal interval. While the camera can only capture part of the whole surface at any time instant, partial surfaces reconstructed at different times are assembled to form a complete 3D surface for each time instant, even when the shape is under severe deformation. A mesh warping algorithm based on linear mesh deformation is used to align the different partial surfaces. A volumetric method is then used to combine partial surfaces, fill missing holes, and smooth alignment errors. Our experiments show that this approach is able to reconstruct visually plausible 3D surface deformation with a single camera.
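To illustrate the volumetric combination step mentioned above, here is a minimal sketch of weighted truncated-signed-distance (TSDF) fusion, a common way to merge partial surfaces into one implicit model. The function name, truncation value, and weighting are assumptions, not the paper's exact procedure.

```python
import numpy as np

def fuse_partial_surface(tsdf, weight, new_sdf, new_valid, trunc=0.05):
    """Fold one partial surface (given as a signed-distance grid) into a
    running weighted average. Repeating this over all partial surfaces and
    extracting the zero level set yields a single merged mesh. A sketch of
    generic TSDF fusion, not the paper's exact volumetric method."""
    d = np.clip(new_sdf, -trunc, trunc)            # truncate far distances
    w = new_valid.astype(float)                    # 1 where observed
    total = weight + w
    tsdf[:] = np.where(total > 0,
                       (tsdf * weight + d * w) / np.maximum(total, 1e-9),
                       tsdf)
    weight += w
    return tsdf, weight

# toy usage on a 32^3 grid
grid = np.zeros((32, 32, 32)); wgt = np.zeros_like(grid)
sdf = np.random.default_rng(1).normal(size=grid.shape)
grid, wgt = fuse_partial_surface(grid, wgt, sdf, np.abs(sdf) < 0.2)
```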


IEEE Transactions on Visualization and Computer Graphics | 2012

Video Stereolization: Combining Motion Analysis with User Interaction

Miao Liao; Jizhou Gao; Ruigang Yang; Minglun Gong

We present a semiautomatic system that converts conventional videos into stereoscopic videos by combining motion analysis with user interaction, aiming to transfer as much labeling work as possible from the user to the computer. In addition to the widely used structure-from-motion (SFM) techniques, we develop two new methods that analyze the optical flow to provide additional qualitative depth constraints. They remove the camera-movement restriction imposed by SFM, so that general motions can be used for scene depth estimation, the central problem in mono-to-stereo conversion. With these algorithms, the user's labeling task is significantly simplified. We further develop a quadratic programming approach that incorporates both quantitative depth and qualitative depth constraints (such as those from user scribbles) to recover dense depth maps for all frames, from which stereoscopic views can be synthesized. In addition to visual results, we present a user study showing that our approach is more intuitive and less labor intensive, while producing a 3D effect comparable to that of current state-of-the-art interactive algorithms.
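As a rough illustration of mixing quantitative depth targets with qualitative ordering constraints in one quadratic program, here is a tiny 1-D sketch. The objective, ordering margin, and solver choice are assumptions, far simpler than the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

def solve_depths(n, quant, qual, smooth=1.0):
    """Recover n depths from mixed constraints.

    quant: list of (i, depth) targets, e.g. from SFM.
    qual:  list of (i, j) pairs meaning pixel i should be nearer than j,
           e.g. from user scribbles or optical-flow analysis.
    The quadratic objective plus linear inequalities form a small QP,
    solved here with SciPy's SLSQP (an assumption, not the paper's solver).
    """
    def objective(d):
        fit = sum((d[i] - z) ** 2 for i, z in quant)   # match known depths
        reg = smooth * np.sum(np.diff(d) ** 2)         # neighbour smoothness
        return fit + reg

    cons = [{"type": "ineq", "fun": (lambda d, i=i, j=j: d[j] - d[i] - 0.01)}
            for i, j in qual]                          # enforce depth ordering
    res = minimize(objective, np.full(n, 0.5), constraints=cons)
    return res.x

# toy scanline: pixel 0 known near, pixel 4 known far, user says 2 nearer than 3
print(solve_depths(5, quant=[(0, 0.1), (4, 0.9)], qual=[(2, 3)]))
```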


International Conference on Computer Graphics and Interactive Techniques | 2009

Physically guided liquid surface modeling from videos

Huamin Wang; Miao Liao; Qing Zhang; Ruigang Yang; Greg Turk

We present an image-based reconstruction framework for modeling real water scenes captured by stereoscopic video. In contrast to many image-based modeling techniques that rely on user interaction to obtain high-quality 3D models, we instead apply automatically calculated, physically based constraints to refine the initial model. The combination of image-based reconstruction with physically based simulation allows us to model complex and dynamic objects such as fluids. Using a depth map sequence as initial conditions, we apply a physically based approach that automatically fills in missing regions, removes outliers, and refines the geometric shape so that the final 3D model is consistent with both the input video data and the laws of physics. Physically guided modeling also makes interpolation or extrapolation in the space-time domain possible, and even allows the fusion of depth maps taken at different times or from different viewpoints. We demonstrate the effectiveness of our framework on a number of real scenes, all captured using only a single pair of cameras.
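One concrete piece of the refinement described above is filling missing depth regions. The sketch below relaxes unknown pixels toward the average of their neighbours (a discrete Laplace solve); it illustrates only the fill-in step under simple assumptions, not the paper's physically based machinery.

```python
import numpy as np

def fill_missing_depth(depth, valid, iters=500):
    """Jacobi relaxation: unknown pixels converge to the harmonic fill of
    their surroundings while observed pixels stay fixed. Iteration count
    and boundary handling (wrap-around via roll) are simplifications."""
    d = np.where(valid, depth, depth[valid].mean())
    for _ in range(iters):
        avg = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
               np.roll(d, 1, 1) + np.roll(d, -1, 1)) / 4.0
        d = np.where(valid, depth, avg)     # re-impose observed values
    return d

# toy usage: a 64x64 depth map with a square hole
z = np.ones((64, 64)); mask = np.ones_like(z, dtype=bool)
mask[20:40, 20:40] = False
filled = fill_missing_depth(z, mask)
```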


Computer Vision and Pattern Recognition | 2009

Joint depth and alpha matte optimization via fusion of stereo and time-of-flight sensor

Jiejie Zhu; Miao Liao; Ruigang Yang; Zhigeng Pan

We present a new approach that iteratively estimates both a high-quality depth map and an alpha matte from a single image or a video sequence. Scene depth, which is invariant to illumination changes, color similarity, and motion ambiguity, provides a natural and robust cue for foreground/background segmentation, a prerequisite for matting. The image mattes, on the other hand, encode rich information near boundaries, where either passive or active sensing performs poorly. We develop a method that exploits the complementary nature of scene depth and alpha mattes to mutually enhance their quality. We formulate depth inference as a global optimization problem in which information from passive stereo, an active range sensor, and the matte is merged. The depth map is used in turn to enhance the matting. In addition, we extend this approach to video matting by incorporating temporal coherence, which reduces flickering in the composite video. We show that these techniques lead to improved accuracy and robustness for both static and dynamic scenes.
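To sketch the flavor of merging cues in one optimization, here is a hypothetical per-pixel fusion of a stereo cost volume with a time-of-flight prior, down-weighted near matte boundaries where active sensing is unreliable. The weighting scheme and winner-take-all readout are illustrative assumptions, far simpler than the paper's global optimization.

```python
import numpy as np

def fused_disparity(stereo_cost, tof_cost, matte_conf):
    """stereo_cost, tof_cost: (H, W, D) per-pixel cost volumes.
    matte_conf: (H, W) in [0, 1]; low near uncertain matte boundaries,
    where the active (ToF) cue is down-weighted. Winner-take-all readout
    stands in for the paper's global optimization."""
    fused = stereo_cost + matte_conf[..., None] * tof_cost
    return fused.argmin(axis=-1)

# toy usage
rng = np.random.default_rng(2)
disp = fused_disparity(rng.random((4, 4, 8)), rng.random((4, 4, 8)),
                       rng.random((4, 4)))
```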


Computer Vision and Pattern Recognition | 2007

Light Fall-off Stereo

Miao Liao; Liang Wang; Ruigang Yang; Minglun Gong

We present light fall-off stereo (LFS), a new method for computing depth from scenes beyond Lambertian reflectance and texture. LFS takes a number of images from a stationary camera as the illumination source moves away from the scene. Based on the inverse square law for light intensity, the ratio images are directly related to scene depth from the perspective of the light source. Using this as the invariant, we develop both local and global methods for depth recovery. Compared to previous reconstruction methods for non-Lambertian scenes, LFS needs as few as two images and requires no calibrated cameras, calibrated light sources, or reference objects in the scene. We demonstrate the effectiveness of LFS on a variety of real-world scenes.
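The inverse-square relation above admits a closed form when the light steps straight back from the scene by a known baseline b: with I ∝ 1/r², the ratio R = I_near/I_far = ((r + b)/r)² gives r = b / (√R − 1), with albedo and foreshortening cancelling in the ratio. The sketch below assumes that simple geometry; the function name is illustrative.

```python
import numpy as np

def lfs_depth(img_near, img_far, baseline):
    """Depth from two images as the light moves back by `baseline` along
    its axis. With I proportional to 1/r**2, the per-pixel ratio
    R = I_near / I_far = ((r + baseline) / r)**2 inverts to
    r = baseline / (sqrt(R) - 1); albedo and foreshortening cancel in R."""
    ratio = img_near / np.maximum(img_far, 1e-6)
    return baseline / np.maximum(np.sqrt(ratio) - 1.0, 1e-6)

# toy check: a point 2 m away, light stepped back 0.5 m
r, b = 2.0, 0.5
print(lfs_depth(np.array([1.0 / r**2]), np.array([1.0 / (r + b)**2]), b))  # ~2.0
```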


Journal of Real-Time Image Processing | 2014

Real-time stereo using approximated joint bilateral filtering and dynamic programming

Liang Wang; Ruigang Yang; Minglun Gong; Miao Liao

We present a stereo algorithm that estimates scene depth with high accuracy and in real time. The key idea is to employ an adaptive cost-volume filtering stage within a dynamic programming optimization framework. The per-pixel matching costs are aggregated via a separable implementation of the bilateral filtering technique. Our separable approximation offers comparable edge-preserving filtering capability and leads to a significant reduction in computational complexity compared to the traditional 2D filter. This cost aggregation step resolves the disparity inconsistency between scanlines, a typical problem for conventional dynamic-programming-based stereo approaches. Our algorithm is driven by two design goals: real-time performance and high-accuracy depth estimation. For computational efficiency, we utilize the vector processing capability and parallelism of commodity graphics hardware to speed up the aggregation process by over two orders of magnitude. Our current implementation achieves over 90 million disparity evaluations per second (MDE/s; the product of the number of pixels, the disparity range, and the frame rate, which captures the performance of a stereo algorithm in a single number). In terms of quality, quantitative evaluation on data sets with ground-truth disparities shows that our approach is among the state-of-the-art real-time stereo algorithms.
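Below is a minimal sketch of one 1-D pass of a joint (cross) bilateral filter over a cost slice, guided by image intensity; applying it along rows and then columns gives a separable approximation of the kind the paper describes. The Gaussian kernels and parameter values are assumptions.

```python
import numpy as np

def bilateral_1d(cost, guide, sigma_s=5.0, sigma_r=0.1, radius=7):
    """One 1-D pass of a joint bilateral filter: each cost sample is
    averaged with neighbours weighted by spatial distance and by
    similarity of the guide (image intensity), preserving depth edges.
    cost, guide: 1-D arrays of equal length."""
    n = len(cost)
    out = np.zeros_like(cost)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        x = np.arange(lo, hi)
        w = (np.exp(-((x - i) ** 2) / (2 * sigma_s ** 2)) *
             np.exp(-((guide[x] - guide[i]) ** 2) / (2 * sigma_r ** 2)))
        out[i] = np.sum(w * cost[lo:hi]) / np.sum(w)
    return out

# usage: smooth one cost row under an intensity guide
cost = np.random.default_rng(3).random(64)
guide = np.linspace(0.0, 1.0, 64)
smoothed = bilateral_1d(cost, guide)
```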


Computer Vision and Pattern Recognition | 2011

Interreflection removal for photometric stereo by using spectrum-dependent albedo

Miao Liao; Xinyu Huang; Ruigang Yang

We present a novel method that can separate m-bounced light and remove interreflections in a photometric stereo setup. Under the assumption of a uniformly colored Lambertian surface, the intensity of a point in the scene is the sum of the 1-bounced through m-bounced light rays. By the law of diffuse reflection, whenever a light ray is bounced by the surface, its intensity is attenuated by a factor of the albedo ρ. This implies that the measured intensity can be written as a polynomial function of ρ, with the contribution of the m-bounced light rays expressed by the ρ^m term. Therefore, when we change the surface albedo, the intensity of the m-bounced light changes with the m-th power of the albedo. This non-linearity makes it possible to separate the m-bounced light. In practice, we illuminate the scene with different light colors to effectively simulate different surface albedos, since albedo is spectrum dependent. Once the m-bounced light rays are separated, we can run the photometric stereo algorithm on the 1-bounced (direct lighting) images to produce the 3D shape without the impact of interreflections. Experiments show that we obtain significantly improved scene reconstructions from as few as two color images.
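To make the polynomial model concrete, truncate it at two bounces, I(ρ) = aρ + bρ²; two measurements under different effective albedos then give a 2×2 linear system per pixel whose solution recovers the direct term. The truncation at m = 2 and the numbers below are illustrative assumptions.

```python
import numpy as np

def direct_light(I1, I2, rho1, rho2):
    """Per-pixel separation of direct (1-bounce) light from two images
    taken under different effective albedos rho1, rho2 (e.g. two light
    colours), using I(rho) = a*rho + b*rho**2 truncated at two bounces.
    Solves the 2x2 system by Cramer's rule and returns the direct image
    under rho1. A sketch; the paper handles general m."""
    det = rho1 * rho2**2 - rho2 * rho1**2
    a = (I1 * rho2**2 - I2 * rho1**2) / det     # 1-bounce coefficient
    return a * rho1                              # direct image under rho1

# toy pixel: a=0.8 direct, b=0.3 interreflected, albedos 0.9 and 0.5
a_true, b_true = 0.8, 0.3
I1 = a_true * 0.9 + b_true * 0.9**2
I2 = a_true * 0.5 + b_true * 0.5**2
print(direct_light(I1, I2, 0.9, 0.5))           # ~0.72 = 0.8 * 0.9
```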


International Conference on Image Processing | 2008

Real-time Light Fall-off Stereo

Miao Liao; Liang Wang; Ruigang Yang; Minglun Gong

We present a real-time depth recovery system using light fall-off stereo (LFS). Our system contains two co-axial point light sources (LEDs) synchronized with a video camera. The video camera captures the scene under the two LEDs in complementary states (e.g., one on, one off). Based on the inverse square law for light intensity, the depth can be solved directly from the pixel ratio of two consecutive frames. We demonstrate the effectiveness of our approach on a number of real-world scenes. Quantitative evaluation shows that our system compares favorably to commercial real-time 3D range sensors, particularly in textured areas. We believe our system offers a low-cost, high-resolution alternative for depth sensing under controlled lighting.


International Conference on Intelligent Computing | 2010

Complete 3D model reconstruction using two types of depth sensors

Guangyu Mu; Miao Liao; Ruigang Yang; Dantong Ouyang; Zhiwen Xu; Xiaoxin Guo

We propose a fast and simple system for 3D model reconstruction. We acquire range images using a combination of a regular camera and two types of depth sensors. The reconstruction of a 3D model consists of four key steps: (i) Initial alignment: either feature tracking or the 4-points congruent sets algorithm is used to align surfaces captured at different frames. (ii) The iterative closest point (ICP) method is applied to further align the piecewise surfaces from the previous step. (iii) The surfaces are merged into a whole 3D model using a volumetric method. (iv) In the refinement step, we fill holes and produce a complete 3D model that approximates the original, using robust repair of polygonal models. Finally, we present experimental results showing that the error between our reconstructed models and the ground truth is less than 1%.
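Step (ii) above is classical; the sketch below implements one point-to-point ICP iteration with brute-force nearest neighbours and the closed-form SVD (Kabsch) alignment. Real pipelines add correspondence rejection and point-to-plane terms; this is a generic illustration, not the paper's implementation.

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: match each source point to its
    nearest destination point, then solve the best-fit rigid transform in
    closed form (Kabsch/SVD). src: (N, 3), dst: (M, 3)."""
    # nearest-neighbour correspondences (brute force for clarity)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # closed-form rigid alignment of src onto matched
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # fix reflection case
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return src @ R.T + t, R, t

# toy usage: align a slightly rotated, shifted copy of a random cloud
rng = np.random.default_rng(4)
pts = rng.random((50, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
aligned, R, t = icp_step(pts @ Rz.T + 0.05, pts)
```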

Collaboration


Dive into Miao Liao's collaborations.

Top Co-Authors

Minglun Gong, Memorial University of Newfoundland
Liang Wang, University of Kentucky
Qing Zhang, University of Kentucky