Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Yuncai Liu is active.

Publication


Featured research published by Yuncai Liu.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1990

Determination of camera location from 2-D to 3-D line and point correspondences

Yuncai Liu; Thomas S. Huang; Olivier D. Faugeras

A method for the determination of camera location from two-dimensional (2-D) to three-dimensional (3-D) straight line or point correspondences is presented. With this method, the computations of the rotation matrix and the translation vector of the camera are separable. First, the rotation matrix is found by a linear algorithm using eight or more line correspondences, or by a nonlinear algorithm using three or more line correspondences, where the line correspondences are either given or derived from point correspondences. Then, the translation vector is obtained by solving a set of linear equations based on three or more line correspondences, or two or more point correspondences. Eight 2-D to 3-D line correspondences or six 2-D to 3-D point correspondences are needed for the linear approach; three 2-D to 3-D line or point correspondences for the nonlinear approach. Good results can be obtained in the presence of noise if more than the minimum required number of correspondences are used.
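
As a rough illustration of the second stage: once the rotation R is known, each 2-D to 3-D point correspondence contributes two linear equations in the translation, so two or more points suffice. The sketch below is our own minimal formulation in normalized image coordinates, not the paper's exact equations.

```python
# Minimal sketch (not the paper's exact formulation): recovering the camera
# translation t by linear least squares once the rotation R is known, using
# two or more 2-D <-> 3-D point correspondences in normalized image coordinates.
import numpy as np

def translation_from_points(R, pts_3d, pts_2d):
    """R: 3x3 rotation, pts_3d: Nx3 world points, pts_2d: Nx2 normalized image points."""
    A, b = [], []
    for P, (u, v) in zip(pts_3d, pts_2d):
        q = R @ P                      # rotated 3-D point, camera frame up to translation
        # u = (q_x + t_x) / (q_z + t_z)  ->  t_x - u*t_z = u*q_z - q_x
        A.append([1.0, 0.0, -u]); b.append(u * q[2] - q[0])
        # v = (q_y + t_y) / (q_z + t_z)  ->  t_y - v*t_z = v*q_z - q_y
        A.append([0.0, 1.0, -v]); b.append(v * q[2] - q[1])
    t, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return t
```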


Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing | 1988

Estimation of rigid body motion using straight line correspondences

Yuncai Liu; Thomas S. Huang

An algorithm for the estimation of rigid body motion using straight line correspondences is presented in this paper. In the case of pure translation, we present a linear algorithm using 5 line correspondences over 3 frames. In the case of general motion, it is found that the rotation and the translation parts are separable. The rotation part can be computed by the iterative solution of nonlinear equations based on 6 or more line correspondences over 3 frames. After the rotation is found, the translation part is determined just as in the pure translation case. For the special case of constant rotation, the convergence range of the iterative method is wide enough so that global search can be used to estimate the rotation matrix. However, for the case of variable rotation, global search appears computationally infeasible at present.
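
For readers unfamiliar with line-based motion models, the small helper below (illustrative only, not taken from the paper) shows how a 3-D line, given as a point plus a unit direction, is transported by a rigid motion (R, T); line correspondences across frames constrain exactly this transformation.

```python
# A tiny illustrative helper (not from the paper): how a 3-D line, represented by a
# point on the line and a unit direction, moves under a rigid motion (R, T).
import numpy as np

def move_line(point, direction, R, T):
    """Apply the rigid motion x' = R @ x + T to a 3-D line (point, unit direction)."""
    new_point = R @ point + T          # any point on the line is transported by (R, T)
    new_direction = R @ direction      # directions are only rotated, never translated
    return new_point, new_direction / np.linalg.norm(new_direction)
```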


IEEE Signal Processing Letters | 2010

Visual Saliency Detection via Sparsity Pursuit

Junchi Yan; Mengyuan Zhu; Huanxi Liu; Yuncai Liu

The saliency mechanism has been considered crucial in the human visual system and helpful to object detection and recognition. This paper addresses a novel feature-based model for visual saliency detection. It consists of two steps: first, learned overcomplete sparse bases are used to represent image patches; then, saliency information is estimated via low-rank and sparse matrix decomposition. We compare our model with previous methods on natural images. Experimental results on both natural images and psychological patterns show that our model performs competitively on the visual saliency detection task and suggest the potential of matrix decomposition and convex optimization for image analysis.
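
The decomposition step can be approximated with a standard robust PCA: the low-rank part models the redundant background and the sparse part picks out salient deviations. Below is a generic sketch of such a decomposition (an inexact augmented Lagrange multiplier scheme with illustrative parameter choices), not the authors' exact solver or feature pipeline.

```python
# Illustrative robust PCA (low-rank + sparse) sketch. In the paper's pipeline the
# input matrix would hold sparse codes of image patches; here D is any data matrix.
# Function and parameter names are ours, not the authors'.
import numpy as np

def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(D, lam=None, mu=None, n_iter=200, tol=1e-7):
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))          # standard weight on the sparse term
    mu = mu or 0.25 / (np.abs(D).mean() + 1e-12)   # penalty parameter heuristic
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # sparse update: entrywise soft thresholding
        S = soft_threshold(D - L + Y / mu, lam / mu)
        resid = D - L - S
        Y += mu * resid
        if np.linalg.norm(resid) <= tol * np.linalg.norm(D):
            break
    return L, S   # S carries the "salient" deviations from the low-rank background
```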


Pattern Recognition | 2008

A new calibration model of camera lens distortion

Jianhua Wang; Fanhuai Shi; Jing Zhang; Yuncai Liu

Lens distortion is one of the main factors affecting camera calibration. In this paper, a new model of camera lens distortion is presented, according to which lens distortion is governed by the coefficients of radial distortion and a transform from the ideal image plane to the real sensor array plane. The transform is determined by two angular parameters describing the pose of the real sensor array plane with respect to the ideal image plane, and two linear parameters locating the real sensor array with respect to the optical axis. Experiments show that the new model has about the same correcting effect upon lens distortion as the conventional model that includes radial, decentering, and prism distortion. Compared with the conventional model, the new model has fewer parameters to calibrate and a more explicit physical meaning.
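
To make the structure of such a model concrete, the sketch below composes radial distortion with a projection onto a sensor plane tilted by two angles and shifted by two offsets. It is a hedged approximation of the model family described above, with our own parameterization rather than the paper's equations.

```python
# Hedged sketch of the model flavor described above, not the paper's exact equations:
# radial distortion (k1, k2) followed by a mapping onto a sensor plane tilted by two
# angles (ax, ay) and shifted by two offsets (dx, dy). Parameter names are illustrative.
import numpy as np

def distort(x, y, k1, k2, ax, ay, dx, dy):
    # 1) radial distortion in normalized (ideal) image coordinates
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    xd, yd = x * scale, y * scale
    # 2) re-project the distorted ray onto a sensor plane tilted about the x and y axes
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [ 0,          1, 0         ],
                   [-np.sin(ay), 0, np.cos(ay)]])
    p = Ry @ Rx @ np.array([xd, yd, 1.0])
    xs, ys = p[0] / p[2], p[1] / p[2]
    # 3) offset of the sensor array with respect to the optical axis
    return xs + dx, ys + dy
```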


IEEE Transactions on Image Processing | 2011

Text From Corners: A Novel Approach to Detect Text and Caption in Videos

Xu Zhao; Kai-Hsiang Lin; Yun Fu; Yuxiao Hu; Yuncai Liu; Thomas S. Huang

Detecting text and captions in videos is important and in great demand for video retrieval, annotation, indexing, and content analysis. In this paper, we present a corner-based approach to detect text and captions in videos. The approach is inspired by the observation that corner points occur densely and in an orderly fashion in characters, especially in text and captions. We use several discriminative features to describe the text regions formed by the corner points. These features are used in a flexible manner and can thus be adapted to different applications. Language independence is an important advantage of the proposed method. Moreover, based upon the text features, we further develop a novel algorithm to detect moving captions in videos. In this algorithm, motion features extracted by optical flow are combined with text features to detect moving caption patterns, and a decision tree is adopted to learn the classification criteria. Experiments conducted on a large volume of real video shots demonstrate the efficiency and robustness of our proposed approaches and the real-world system. Our text and caption detection system was recently highlighted in a worldwide multimedia retrieval competition, Star Challenge, achieving superior performance with a top ranking.
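
The core intuition, dense and orderly corners inside text, can be prototyped in a few lines: detect corners, measure their local density, and keep blocks where the density is high. The sketch below uses OpenCV with illustrative thresholds and omits the paper's discriminative features and moving-caption logic.

```python
# Rough sketch of the corner-density idea only, not the paper's full pipeline.
import cv2
import numpy as np

def text_candidate_mask(frame_bgr, block=16, min_corners_per_block=6):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=2000,
                                      qualityLevel=0.01, minDistance=3)
    density = np.zeros(gray.shape, dtype=np.float32)
    if corners is not None:
        for x, y in corners.reshape(-1, 2):
            density[int(y), int(x)] = 1.0
    # corner density per local block: a box filter counts corners in a neighborhood
    counts = cv2.boxFilter(density, -1, (block, block), normalize=False)
    mask = (counts >= min_corners_per_block).astype(np.uint8) * 255
    # close small gaps so character-level responses merge into text-line regions
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 15), np.uint8))
    return mask
```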


Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing | 1988

A linear algorithm for motion estimation using straight line correspondences

Yuncai Liu; Thomas S. Huang

This paper presents a linear algorithm for determining 3D motion parameters of a rigid object based on straight line correspondences. The algorithm requires a minimum number of thirteen line correspondences over three frames. It includes three steps: first, three intermediate matrices are computed; then, several candidate solutions of the rotation matrices and translation vectors are obtained from the intermediate matrices; finally, motion parameters are uniquely determined by the physical constraints of 3D rotations and translations. Some simulation results are also given.


Computer Vision and Pattern Recognition | 1988

Determination of camera location from 2D to 3D line and point correspondences

Yuncai Liu; Thomas S. Huang; Olivier D. Faugeras

A novel method for the determination of camera location from 2-D to 3-D line or point correspondences is presented. Using this method, the computation of the rotation matrix and the translation vector of the camera are separable. First, the rotation matrix is found by a linear algorithm using eight or more line correspondences, or by a nonlinear algorithm using three or more line correspondences, where the line correspondences are either given or derived from point correspondences. Then, the translation vector can be obtained by solving a set of linear equations based on three or more line correspondences, or two or more point correspondences. Eight 2-D-to-3-D line or point correspondences or six 2-D-to-3-D point correspondences are needed for the linear approach; three 2-D-to-3-D line or point correspondences for the nonlinear approach. Good results can be obtained in the presence of noise if more than the minimum required number of correspondences are used.


Pattern Recognition Letters | 2005

An improved snake model for building detection from urban aerial images

Jing Peng; Dong Zhang; Yuncai Liu

This paper proposes an improved snake model focused on building detection from high-resolution gray-level aerial images. Based on the radiometric and geometric behavior of buildings, the traditional snake model is modified in two respects: the criteria for selecting initial seeds and the external energy function. Moreover, a post-processing step that incorporates illumination information reduces the constraints on the initial snake and the interference caused by illumination, sharply reducing the number of iterations. Compared with the traditional snake model, this algorithm converges to the true building contours more quickly and stably in complex environments.
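
A plain snake, without the paper's modified seed selection and external energy, can be run with scikit-image as a baseline; the sketch below initializes a circular contour around a candidate seed and lets the standard active-contour energy pull it toward building edges. Parameter values are illustrative.

```python
# Minimal classical snake (active contour) around a building candidate in a
# gray-level aerial image; scikit-image's standard energy, not the paper's variant.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def fit_building_contour(gray_image, center_rc, radius, n_points=200):
    # circular initial contour around a seed point (row, col)
    t = np.linspace(0, 2 * np.pi, n_points)
    init = np.stack([center_rc[0] + radius * np.sin(t),
                     center_rc[1] + radius * np.cos(t)], axis=1)
    smoothed = gaussian(gray_image, sigma=2, preserve_range=True)
    # alpha/beta control elasticity/rigidity; w_edge pulls the snake toward edges
    snake = active_contour(smoothed, init, alpha=0.015, beta=10.0,
                           w_line=0.0, w_edge=1.0, gamma=0.001)
    return snake   # (n_points, 2) array of (row, col) contour coordinates
```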


International Conference on Intelligent Transportation Systems | 2009

Automatic generation of road network map from massive GPS vehicle trajectories

Wenhuan Shi; Shuhan Shen; Yuncai Liu

Intelligent transportation systems (ITS) and navigation systems usually demand good timeliness of the vector road network maps they use. Unfortunately, existing methods for road network map generation, such as surveying and satellite image digitization, have difficulty producing up-to-date road network maps. This paper presents a method for the automatic generation of road network maps from massive GPS vehicle trajectories, which can produce up-to-date maps from up-to-date GPS vehicle trajectory data that are widely available. Given a set of GPS vehicle trajectories, the input is first processed to construct a road network bitmap depicting the road network. Then the road network skeleton is computed on the bitmap. Finally, road network graph extraction is performed on the skeleton to generate the vector road network map data. To evaluate the presented method, we implemented and tested it on massive GPS vehicle trajectory data collected in Jilin City, China. The test results demonstrate that the presented method is promising for practical applications.
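
The bitmap-and-skeleton portion of this pipeline can be sketched compactly: rasterize the GPS samples onto a grid, keep cells hit by enough trajectories, and thin the result to centerlines. The code below does that with scikit-image; the final graph-extraction step is omitted, and the cell size and hit threshold are simplified assumptions.

```python
# Condensed sketch of the bitmap-then-skeleton idea; thresholds are illustrative.
import numpy as np
from skimage.morphology import skeletonize

def road_skeleton(lons, lats, cell_deg=1e-4, min_hits=5):
    """lons/lats: 1-D arrays of GPS sample coordinates from many trajectories."""
    # 1) rasterize GPS samples onto a grid: each cell counts how many samples hit it
    xi = ((lons - lons.min()) / cell_deg).astype(int)
    yi = ((lats - lats.min()) / cell_deg).astype(int)
    grid = np.zeros((yi.max() + 1, xi.max() + 1), dtype=np.int32)
    np.add.at(grid, (yi, xi), 1)
    # 2) a cell is "road" if enough trajectories passed through it
    bitmap = grid >= min_hits
    # 3) thin the road bitmap to a one-pixel-wide skeleton (centerlines)
    return skeletonize(bitmap)
```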


Pattern Recognition | 2008

Generative tracking of 3D human motion by hierarchical annealed genetic algorithm

Xu Zhao; Yuncai Liu

We present a generative method for reconstructing 3D human motion from single images and monocular image sequences. Inadequate observation information in monocular images and the complicated nature of human motion make 3D human pose reconstruction challenging. In order to mine more prior knowledge about human motion, we extract the motion subspace by performing conventional principal component analysis (PCA) on a small sample set of motion capture data. In doing so, we also reduce the problem dimensionality so that generative pose recovery can be performed more effectively. Moreover, the extracted subspace is naturally hierarchical, which allows us to explore the solution space efficiently. We design an annealed genetic algorithm (AGA) and a hierarchical annealed genetic algorithm (HAGA) for human motion analysis that search for optimal solutions by exploiting the hierarchical structure of the state space. In the tracking scenario, we embed the evolutionary mechanism of AGA into the framework of evolution strategies to adapt to the local characteristics of the fitness function. We adopt the robust shape contexts descriptor to construct the matching function. Our methods are demonstrated on different motion types and different image sequences. Results of human motion estimation show that our novel generative method achieves viewpoint-invariant 3D pose reconstruction.
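
The subspace step is ordinary PCA on pose vectors; the sketch below (our notation, not the authors' hierarchical decomposition or the annealed genetic search) shows how a low-dimensional search vector is mapped back to a full pose hypothesis.

```python
# Sketch of the PCA subspace step only; the annealed genetic search is not reproduced.
import numpy as np

def pose_subspace(poses, n_components=8):
    """poses: (n_samples, n_dofs) matrix of joint-angle vectors from motion capture."""
    mean = poses.mean(axis=0)
    centered = poses - mean
    # principal directions via SVD of the centered data matrix
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:n_components]            # (n_components, n_dofs)
    return mean, basis

def decode(mean, basis, z):
    """Map a low-dimensional search vector z back to a full pose hypothesis."""
    return mean + z @ basis
```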

Collaboration


Dive into Yuncai Liu's collaborations.

Top Co-Authors

Xu Zhao, Shanghai Jiao Tong University
Shuhan Shen, Shanghai Jiao Tong University
Junchi Yan, Shanghai Jiao Tong University
Xiong Li, Shanghai Jiao Tong University
Chenhao Wang, Shanghai Jiao Tong University
Fanhuai Shi, Shanghai Jiao Tong University
Jing Zhang, Shanghai Jiao Tong University
Qing-Jie Kong, Shanghai Jiao Tong University
Jianhua Wang, Shanghai Jiao Tong University
Yun Fu, Northeastern University