Publication


Featured research published by Yihong Wu.


Journal of Mathematical Imaging and Vision | 2006

PnP Problem Revisited

Yihong Wu; Zhanyi Hu

Perspective-n-Point camera pose determination, or the PnP problem, has attracted much attention in the literature. This paper presents a systematic investigation of the PnP problem from both geometric and algebraic standpoints, with the following contributions. Firstly, we rigorously prove that the PnP problem under the distance-based definition is equivalent to the PnP problem under the orthogonal-transformation-based definition when n > 3, and equivalent to the PnP problem under the rotation-transformation-based definition when n = 3. Secondly, we obtain upper bounds on the number of solutions for the PnP problem under the different definitions. In particular, we show that for any three non-collinear control points, we can always find a location of the optical center such that the P3P problem formed by these three control points and the optical center has 4 solutions, its upper bound; additionally, a geometric way to construct these 4 solutions is provided. Thirdly, we introduce a depth-ratio-based approach to represent the solutions of the whole PnP problem, which is shown to be advantageous over traditional elimination techniques. Lastly, degenerate cases with coplanar or collinear control points are also discussed. Surprisingly, if all the control points are collinear, the PnP problem under the distance-based definition has a unique solution, but the PnP problem under the transformation-based definition is only determined up to one free parameter.
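The distance-based P3P formulation the paper analyzes reduces to three law-of-cosines constraints relating the unknown depths of the control points to the angles between the viewing rays. A minimal numerical sketch of those constraints (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def p3p_residuals(depths, cosines, dists):
    """Law-of-cosines constraints of the distance-based P3P problem.

    depths  : (d1, d2, d3) distances from the optical center to the control points
    cosines : (c12, c13, c23) cosines of the angles between the viewing rays
    dists   : (D12, D13, D23) known inter-point distances
    Returns the residuals d_i^2 + d_j^2 - 2*d_i*d_j*c_ij - D_ij^2,
    which vanish exactly at a true solution.
    """
    d1, d2, d3 = depths
    c12, c13, c23 = cosines
    D12, D13, D23 = dists
    return np.array([
        d1**2 + d2**2 - 2*d1*d2*c12 - D12**2,
        d1**2 + d3**2 - 2*d1*d3*c13 - D13**2,
        d2**2 + d3**2 - 2*d2*d3*c23 - D23**2,
    ])

# Synthetic check: optical center at the origin, three non-collinear
# control points in front of the camera.
P = np.array([[1.0, 0.0, 4.0], [0.0, 1.0, 5.0], [1.0, 1.0, 6.0]])
depths = np.linalg.norm(P, axis=1)
rays = P / depths[:, None]
cosines = (rays[0] @ rays[1], rays[0] @ rays[2], rays[1] @ rays[2])
dists = (np.linalg.norm(P[0] - P[1]),
         np.linalg.norm(P[0] - P[2]),
         np.linalg.norm(P[1] - P[2]))
res = p3p_residuals(depths, cosines, dists)  # ~0 at the true depths
```

Solving this polynomial system for the depths (up to 4 solutions) is what the paper's geometric construction and depth-ratio approach address.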


Pattern Recognition | 2007

Self-recalibration of a structured light system via plane-based homography

B. Zhang; Youfu Li; Yihong Wu

Self-recalibration of the relative pose in a vision system plays a very important role in many applications, and much research has been conducted on this issue over the years. However, most existing methods require information about points in general three-dimensional positions for the calibration, a requirement that is hard to meet in many practical applications. In this paper, we present a new method for the self-recalibration of a structured light system from a single image in the presence of a planar surface in the scene. Assuming that the intrinsic parameters of the camera and the projector are known from an initial calibration, we show that their relative position and orientation can be determined automatically from four projection correspondences between the image and the projection plane. In this method, analytical solutions are obtained from second-order equations in a single variable, and the optimization process is very fast. Another advantage is the enhanced robustness gained by using an over-constrained system. Computer simulations and real-data experiments are carried out to validate our method.
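The four correspondences determine a plane-induced homography; the paper's closed-form pose recovery builds on such a homography. A generic direct-linear-transform (DLT) estimate from four point pairs, shown here as an illustrative sketch rather than the authors' solution:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from >= 4 point pairs.

    Each correspondence gives two linear equations in the 9 entries of H;
    H is the right singular vector of the smallest singular value.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u*x, u*y, u])
        A.append([0, 0, 0, -x, -y, -1, v*x, v*y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale ambiguity

# Synthetic check: map 4 points through a known homography and recover it.
H_true = np.array([[1.1, 0.02, 3.0],
                   [-0.01, 0.95, -2.0],
                   [1e-4, 2e-4, 1.0]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
p = np.c_[src, np.ones(4)] @ H_true.T
dst = p[:, :2] / p[:, 2:]
H_est = homography_dlt(src, dst)
```

With noise-free data, four correspondences in general position recover H exactly up to scale; the relative pose is then extracted from H together with the known intrinsics.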


European Conference on Computer Vision | 2004

Camera calibration from the quasi-affine invariance of two parallel circles

Yihong Wu; Haijiang Zhu; Zhanyi Hu; Fuchao Wu

In this paper, a new camera calibration algorithm is proposed, based on the quasi-affine invariance of two parallel circles. Two parallel circles here means two circles lying in one plane or in two parallel planes; such configurations are quite common in everyday scenes.


Image and Vision Computing | 2006

Coplanar circles, quasi-affine invariance and calibration

Yihong Wu; Xinju Li; Fuchao Wu; Zhanyi Hu

We define the lines associated with two coplanar circles and give the distributions of any two coplanar circles and their associated lines. Further, we prove that the distribution of two coplanar circles with no real intersection and their associated lines is a quasi-affine invariant. The results are then applied to camera calibration. The calibration method has two advantages: (1) it is based on conic fitting; (2) it does not need any point matching. Experiments with two separate circles validate the quasi-affine invariance and show that the estimated camera intrinsic parameters are as good as those obtained by Zhang's (2000) method.
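Since the method is built on conic fitting, each imaged circle is first fitted with a general conic a x² + b xy + c y² + d x + e y + f = 0. A standard least-squares sketch of that step (illustrative only, not the paper's full pipeline):

```python
import numpy as np

def fit_conic(pts):
    """Least-squares conic a*x^2 + b*xy + c*y^2 + d*x + e*y + f = 0
    through 2D points, returned as the unit null vector (a, b, c, d, e, f)."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)          # null vector of the design matrix
    return Vt[-1]

# Synthetic check: points on the circle (x-2)^2 + (y+1)^2 = 9,
# i.e. x^2 + y^2 - 4x + 2y - 4 = 0.
t = np.linspace(0.0, 2*np.pi, 30, endpoint=False)
pts = np.column_stack([2 + 3*np.cos(t), -1 + 3*np.sin(t)])
a, b, c, d, e, f = fit_conic(pts)
coeffs = np.array([a, b, c, d, e, f]) / a   # normalize so a = 1
```

The recovered conic coefficients (here [1, 0, 1, -4, 2, -4] after normalization) are the inputs from which the associated lines and invariants are computed.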


International Conference on Computer Vision | 2005

Geometric invariants and applications under catadioptric camera model

Yihong Wu; Zhanyi Hu

This paper presents geometric invariants of points and their applications under the central catadioptric camera model. Although images undergo severe distortions under this model, we establish accurate projective geometric invariants of scene points and their image points. These invariants, being functions of the principal point, are useful: from them, a method for calibrating the camera principal point and a method for recovering planar scene structures are proposed. The main advantage of using these invariants for plane reconstruction is that neither the camera motion nor the intrinsic parameters, except for the principal point, are needed. The theoretical correctness of the established invariants and the robustness of the proposed methods are demonstrated by experiments. In addition, our results are found to be applicable to some camera models more general than the catadioptric one.


Pattern Recognition | 2008

A new linear algorithm for calibrating central catadioptric cameras

Fuchao Wu; Fuqing Duan; Zhanyi Hu; Yihong Wu

In this paper, a novel linear calibration algorithm based on lines is presented for central catadioptric cameras. We first derive the relationship between the projection of a space point on the viewing sphere and its catadioptric image. Using this relationship, we then establish a group of linear constraints on the catadioptric parameters from the catadioptric projections of spatial lines. With these linear constraints, any central catadioptric camera can be fully calibrated from a single view of three or more lines, without prior knowledge of the camera. Extensive experiments show that this algorithm improves the robustness of the calibration.
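The projection onto the viewing sphere that these constraints build on is usually written with the unified sphere model for central catadioptric cameras (mirror parameter ξ). A textbook-style sketch of that forward model, not the authors' calibration algorithm:

```python
import numpy as np

def catadioptric_project(X, K, xi):
    """Unified sphere model for central catadioptric projection.

    1) project the 3D point X onto the unit viewing sphere;
    2) reproject from a center shifted by xi along the z axis
       (perspective division by Xs_z + xi);
    3) apply the intrinsic matrix K.
    xi = 0 reduces to the ordinary pinhole model.
    """
    Xs = X / np.linalg.norm(X)      # point on the viewing sphere
    m = Xs / (Xs[2] + xi)           # perspective division from (0, 0, -xi)
    m[2] = 1.0
    return (K @ m)[:2]

K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
X = np.array([0.5, -0.2, 2.0])
pin = catadioptric_project(X, K, 0.0)   # pinhole case: (420, 200)
cat = catadioptric_project(X, K, 1.0)   # para-catadioptric case (xi = 1)
```

Under this model, a space line projects to a conic in the image, which is what makes line-based linear constraints on the catadioptric parameters possible.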


Intelligent Robots and Systems | 2006

Easy Calibration for Para-catadioptric-like Camera

Yihong Wu; Youfu Li; Zhanyi Hu

For omnidirectional cameras, most previous calibration methods based on lines use conic fitting. This paper presents a calibration method for para-catadioptric-like cameras from lines under a single view, without conic fitting. We establish equations on the five camera intrinsic parameters. These equations are linear in the focal lengths and skew factor once the principal point is known. The principal point can be well approximated in practice by the center of the imaged mirror contour, or accurately estimated from quadratic equations. After obtaining the principal point, we propose an algorithm to calibrate the focal lengths and skew factor. The algorithm needs neither prior structure knowledge nor conic fitting and is linear, which makes it easy to implement. The presented method can also be used with other omnidirectional cameras if high accuracy is not required. Experiments demonstrate the efficiency of the proposed algorithm.


European Conference on Computer Vision | 2006

Euclidean structure from N ≥ 2 parallel circles: theory and algorithms

Pierre Gurdjos; Peter F. Sturm; Yihong Wu

Our problem is that of recovering, in one view, the 2D Euclidean structure induced by the projections of N parallel circles. This structure is a prerequisite for camera calibration and pose computation. Until now, no general method has been described for N > 2. The main contribution of this work is to state the problem in terms of a system of linear equations to solve. We give a closed-form solution as well as bundle-adjustment-like refinements, increasing the technical applicability and numerical stability. Our theoretical approach generalizes and extends all those described in existing works for N = 2 in several respects, as we can treat pairs of orthogonal lines and pairs of circles simultaneously within a unified framework. The proposed algorithm may be easily implemented using well-known numerical algorithms. Its performance is illustrated by simulations and experiments with real images.


International Conference on Pattern Recognition | 2006

A Novel Framework for Urban Change Detection Using VHR Satellite Images

Weiming Li; Xiaoming Li; Yihong Wu; Zhanyi Hu

We present a novel framework for detecting urban changes from a pair of very high resolution (VHR) satellite images, such as those taken by the Quickbird-II or IKONOS satellites. Image differences due to variations of imaging conditions, such as view angle and illumination, are distinguished from significant urban changes in the scene. First, we adopt a new image registration method, which makes several useful geometrical constraints available. Then we find line segments that changed over time. After that, we match scale-invariant feature transform (SIFT) points and generate corresponding regions in order to exclude changed line segments due to parallax. We perform shadow detection to exclude changed line segments due to shadow change. Finally, we group the remaining changed line segments into clusters, among which the significant ones form the changed regions output by the framework. Our experiments with real Quickbird-II images show that the proposed method detects significant urban changes well.


Medical Image Analysis | 2018

A deep learning model integrating FCNNs and CRFs for brain tumor segmentation

Xiaomei Zhao; Yihong Wu; Guidong Song; Zhenye Li; Yazhuo Zhang; Yong Fan

Highlights:
- A deep learning model integrating FCNNs and CRFs for brain tumor segmentation.
- The integration of FCNNs and CRF-RNN improves the segmentation robustness.
- A segmentation model with Flair, T1c, and T2 scans achieves competitive performance.

Abstract: Accurate and reliable brain tumor segmentation is a critical component in cancer diagnosis, treatment planning, and treatment outcome evaluation. Built upon successful deep learning techniques, a novel brain tumor segmentation method is developed by integrating fully convolutional neural networks (FCNNs) and Conditional Random Fields (CRFs) in a unified framework to obtain segmentation results with appearance and spatial consistency. We train a deep learning based segmentation model using 2D image patches and image slices in the following steps: 1) training FCNNs using image patches; 2) training CRFs as Recurrent Neural Networks (CRF-RNN) using image slices, with the parameters of the FCNNs fixed; and 3) fine-tuning the FCNNs and the CRF-RNN using image slices. In particular, we train three segmentation models using 2D image patches and slices obtained in the axial, coronal and sagittal views respectively, and combine them to segment brain tumors using a voting-based fusion strategy. Our method segments brain images slice by slice, much faster than methods based on image patches. We have evaluated our method on imaging data provided by the Multimodal Brain Tumor Image Segmentation Challenge (BRATS) 2013, BRATS 2015 and BRATS 2016. The experimental results demonstrate that our method can build a segmentation model with Flair, T1c, and T2 scans and achieve performance competitive with models built with Flair, T1, T1c, and T2 scans.
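The voting-based fusion across the axial, coronal and sagittal models can be sketched as a per-voxel majority vote over the three predicted label maps. A generic illustrative sketch (label values and array shapes are made up for the example):

```python
import numpy as np

def majority_vote(*label_maps):
    """Fuse segmentations by per-voxel majority vote.

    label_maps : integer label arrays of identical shape (0 = background).
    Ties are resolved in favor of the smallest label, so background wins
    a tie against a tumor label.
    """
    stacked = np.stack(label_maps)                        # (n_models, ...)
    n_labels = int(stacked.max()) + 1
    # Count, for each label k, how many models voted for it at each voxel.
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy "view" segmentations of the same 2x3 slice.
axial    = np.array([[0, 1, 1], [2, 2, 0]])
coronal  = np.array([[0, 1, 2], [2, 0, 0]])
sagittal = np.array([[1, 1, 1], [2, 2, 1]])
fused = majority_vote(axial, coronal, sagittal)
```

Each voxel of `fused` takes the label predicted by at least two of the three view-specific models, which is the intuition behind combining models trained on different anatomical views.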

Collaboration


Dive into Yihong Wu's collaboration.

Top Co-Authors

Zhanyi Hu (Chinese Academy of Sciences)
Fuchao Wu (Chinese Academy of Sciences)
Qiulei Dong (Chinese Academy of Sciences)
Youji Feng (Chinese Academy of Sciences)
Youfu Li (City University of Hong Kong)
Xiaoming Deng (Chinese Academy of Sciences)
Fuqing Duan (Beijing Normal University)
Guidong Song (Capital Medical University)
Heping Li (Chinese Academy of Sciences)