Publication


Featured research published by Zhanyi Hu.


Pattern Recognition | 2003

A New Easy Camera Calibration Technique Based on Circular Points

Xiaoqiao Meng; Zhanyi Hu

Inspired by Zhang’s work, a new easy technique for calibrating a camera based on circular points is proposed. The proposed technique only requires the camera to observe a newly designed planar calibration pattern (referred to as the model plane hereinafter), consisting of a circle and a pencil of lines passing through the circle’s center, at a few (at least three) different unknown orientations; all five intrinsic parameters can then be determined linearly. The main point of our new technique is that it needs neither metric measurements on the model plane nor correspondences between points on the model plane and their images, so calibration can be carried out fully automatically. The proposed technique is particularly useful for people who are not familiar with computer vision. Experiments with simulated data as well as with real images show that our new technique is robust and accurate.
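
Although the paper is algebraic rather than code-based, the core constraint is easy to state numerically. Below is a minimal sketch, assuming the imaged circular points (complex 3-vectors) have already been recovered from each view of the model plane; the pattern detection and intersection steps of the paper are not shown, and all function names are illustrative.

import numpy as np

def constraint_rows(m):
    """Two real linear constraints on the image of the absolute conic (IAC)
    from one imaged circular point m (complex 3-vector): Re(m^T w m) = 0 and
    Im(m^T w m) = 0, with the symmetric IAC parameterised by (w1..w6)."""
    x, y, z = m
    c = np.array([x * x, 2 * x * y, y * y, 2 * x * z, 2 * y * z, z * z])
    return np.vstack([c.real, c.imag])

def calibrate_from_circular_points(imaged_circular_points):
    """imaged_circular_points: one complex 3-vector per view (>= 3 views)."""
    A = np.vstack([constraint_rows(m) for m in imaged_circular_points])
    _, _, vt = np.linalg.svd(A)
    w1, w2, w3, w4, w5, w6 = vt[-1]            # null vector = IAC up to scale
    W = np.array([[w1, w2, w4], [w2, w3, w5], [w4, w5, w6]])
    if W[0, 0] < 0:                            # fix the arbitrary sign so W can be positive definite
        W = -W
    L = np.linalg.cholesky(W)                  # W = K^{-T} K^{-1} = L L^T, hence K = (L^T)^{-1}
    K = np.linalg.inv(L.T)
    return K / K[2, 2]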


Pattern Recognition | 2009

MSLD: A robust descriptor for line matching

Zhiheng Wang; Fuchao Wu; Zhanyi Hu

Line matching plays an important role in many applications, such as image registration, 3D reconstruction, object recognition and video understanding. However, compared with other features such as points and regions, line matching has made little progress in recent years. In this paper, we investigate the problem of matching line segments automatically from their neighborhood appearance alone, without resorting to any other constraints or a priori knowledge. A novel line descriptor, the mean-standard deviation line descriptor (MSLD), is proposed for this purpose. It is constructed in three steps: (1) for each pixel on the line segment, its pixel support region (PSR) is defined and divided into non-overlapping sub-regions; (2) the gradient description matrix (GDM) of the line is formed by characterizing each sub-region as a vector; (3) MSLD is built by computing the mean and standard deviation of the GDM column vectors. Extensive experiments on real images show that the MSLD descriptor is highly distinctive for line matching under rotation, illumination change, image blur, viewpoint change, noise, JPEG compression and partial occlusion. In addition, the concept behind MSLD can be extended to a curve descriptor (the mean-standard deviation curve descriptor, MSCD), and promising MSCD-based results for both curve and region matching are also demonstrated in this work.
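
A compact sketch of the mean/standard-deviation pooling that gives MSLD its name, assuming the gradient description matrix (GDM) for a line segment has already been built (one column per pixel on the line, rows holding the stacked sub-region features); the PSR construction and sub-region description of the paper are not reproduced, and the normalisation shown is a common choice rather than necessarily the paper's exact one.

import numpy as np

def msld_from_gdm(gdm):
    """gdm: gradient description matrix of shape (feature_dim, n_pixels),
    one column per pixel on the line segment.  MSLD concatenates the mean
    and standard deviation of the column vectors, so the descriptor length
    does not depend on the number of pixels on the line."""
    mean = gdm.mean(axis=1)
    std = gdm.std(axis=1)
    mean /= np.linalg.norm(mean) + 1e-12   # per-part normalisation (illustrative)
    std /= np.linalg.norm(std) + 1e-12
    return np.concatenate([mean, std])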


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2004

Catadioptric camera calibration using geometric invariants

Xianghua Ying; Zhanyi Hu

Central catadioptric cameras are imaging devices that use mirrors to enhance the field of view while preserving a single effective viewpoint. In this paper, we propose a novel method for the calibration of central catadioptric cameras using geometric invariants. Lines and spheres in space are both projected into conics in the catadioptric image plane. We prove that the projection of a line provides three invariants whereas the projection of a sphere provides only two. From these invariants, constraint equations for the intrinsic parameters of the catadioptric camera are derived. Accordingly, the method has two variants: the first uses projections of lines and the second uses projections of spheres. In general, the projections of two lines or three spheres are sufficient to achieve catadioptric camera calibration. One important conclusion of this paper is that the variant based on projections of spheres is more robust and more accurate than the one based on projections of lines. The performance of our method is demonstrated by both simulations and experiments with real images.
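
Both variants start from conics observed in the image: lines and spheres project to conics, so the constraints above are extracted from conics fitted to edge points. The sketch below shows only that standard preliminary fitting step, not the paper's invariant computation; the function name is illustrative.

import numpy as np

def fit_conic(points):
    """Fit a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to 2D edge
    points (n x 2 array) by minimising algebraic error with unit-norm
    coefficients, and return it in symmetric 3x3 matrix form."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(A)
    a, b, c, d, e, f = vt[-1]
    return np.array([[a, b / 2, d / 2],
                     [b / 2, c, e / 2],
                     [d / 2, e / 2, f]])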


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Rotationally Invariant Descriptors Using Intensity Order Pooling

Bin Fan; Fuchao Wu; Zhanyi Hu

This paper proposes a novel method for interest region description which pools local features based on their intensity orders in multiple support regions. Pooling by intensity orders is not only invariant to rotation and monotonic intensity changes, but also encodes ordinal information into a descriptor. Two kinds of local features are used in this paper, one based on gradients and the other on intensities; hence, two descriptors are obtained: the Multisupport Region Order-Based Gradient Histogram (MROGH) and the Multisupport Region Rotation and Intensity Monotonic Invariant Descriptor (MRRID). Thanks to the intensity order pooling scheme, the two descriptors are rotation invariant without estimating a reference orientation, which appears to be a major error source for most of the existing methods, such as Scale Invariant Feature Transform (SIFT), SURF, and DAISY. Promising experimental results on image matching and object recognition demonstrate the effectiveness of the proposed descriptors compared to state-of-the-art descriptors.
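
A small sketch of the intensity-order pooling idea underlying MROGH and MRRID: sample points in a support region are partitioned into bins by the rank of their intensities rather than by spatial location, and each bin accumulates a local feature, so no reference orientation is needed. Function and parameter names are illustrative; the paper's exact sampling scheme, local features and multi-region aggregation are omitted.

import numpy as np

def pool_by_intensity_order(intensities, features, n_bins=6):
    """intensities: (n,) sample intensities in one support region.
    features:    (n, d) local feature vectors (gradient- or intensity-based).
    Samples are grouped into n_bins by intensity rank and features are summed
    per group; the result is unchanged under rotation of the region."""
    order = np.argsort(np.argsort(intensities))       # rank of each sample
    bin_ids = (order * n_bins) // len(intensities)    # equal-size order bins
    pooled = np.zeros((n_bins, features.shape[1]))
    for b, f in zip(bin_ids, features):
        pooled[b] += f
    desc = pooled.ravel()
    return desc / (np.linalg.norm(desc) + 1e-12)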


European Conference on Computer Vision | 2004

Can We Consider Central Catadioptric Cameras and Fisheye Cameras within a Unified Imaging Model

Xianghua Ying; Zhanyi Hu

There are two kinds of omnidirectional cameras often used in computer vision: central catadioptric cameras and fisheye cameras. Previous work uses different imaging models to describe them separately; a unified imaging model is presented in this paper. The unified model can be considered an extension of the unified imaging model for central catadioptric cameras proposed by Geyer and Daniilidis. We show that our unified model covers some existing models for fisheye cameras and fits many actual fisheye cameras used in previous work well. Under our unified model, central catadioptric cameras and fisheye cameras can be classified by the model’s characteristic parameter, and a fisheye image can be transformed into a central catadioptric one, and vice versa. An important merit of the new unified model is that existing calibration methods for central catadioptric cameras can be applied directly to fisheye cameras. Furthermore, metric calibration from a single fisheye image using only projections of lines becomes possible via our unified model, whereas the existing methods for fisheye cameras in the literature are all non-metric under the same conditions. Experimental results of calibration from central catadioptric and fisheye images confirm the validity and usefulness of the new unified model.
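
For orientation, here is a minimal sketch of the sphere model that the paper extends, in the spirit of Geyer and Daniilidis: a 3D point is projected onto the unit viewing sphere and then re-projected from a second centre displaced along the optical axis, with the displacement playing the role of the characteristic parameter mentioned above. The exact parameterisation of the paper's extension to fisheye lenses is not reproduced, and the names below are illustrative.

import numpy as np

def unified_projection(P, xi, K):
    """Project a 3D point P (3-vector in the camera frame) with the unified
    sphere model: normalise onto the unit sphere, re-project from a second
    projection centre displaced by xi along the optical axis (at (0, 0, -xi)
    in the convention used here), then apply the intrinsic matrix K.
    xi = 0 gives a pinhole camera, xi = 1 a para-catadioptric camera."""
    Ps = P / np.linalg.norm(P)                 # point on the viewing sphere
    x, y, z = Ps
    m = np.array([x / (z + xi), y / (z + xi), 1.0])
    u = K @ m
    return u[:2] / u[2]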


International Journal of Computer Vision | 2010

Rejecting Mismatches by Correspondence Function

Xiangru Li; Zhanyi Hu

A novel method, ICF (Identifying point correspondences by Correspondence Function), is proposed for rejecting mismatches from given putative point correspondences. By analyzing the underlying idea of homography, we introduce the concept of a correspondence function for two images of a general 3D scene, which captures the relationship between corresponding points by mapping a point in one image to its corresponding point in the other. Since the correspondence functions are unknown in real applications, we also study how to estimate them from given putative correspondences, and propose an algorithm, IECF (Iteratively Estimate Correspondence Function), based on a diagnostic technique and support vector machines (SVM). The proposed ICF method is then able to reject mismatches by checking whether they are consistent with the estimated correspondence functions. Extensive experiments on real images demonstrate the excellent performance of the proposed method. In addition, ICF is a general method for rejecting mismatches, applicable to images of rigid objects as well as images of non-rigid objects with unknown deformation.
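
A rough sketch of the idea, using scikit-learn's support vector regression as the function estimator: regressors mapping points of image 1 to the two coordinates in image 2 are fitted to the putative matches, poorly explained matches are discarded, the fit is repeated, and matches consistent with the final functions are kept. The thresholds, the keep ratio and the exact diagnostic rule here are illustrative assumptions, not the paper's settings.

import numpy as np
from sklearn.svm import SVR

def reject_mismatches(pts1, pts2, n_iters=3, keep_ratio=0.8, final_tol=5.0):
    """pts1, pts2: (n, 2) putative corresponding points.  Returns a boolean
    mask of correspondences consistent with the estimated correspondence
    functions (x', y') = f(x, y)."""
    idx = np.arange(len(pts1))
    for _ in range(n_iters):
        fx = SVR(kernel="rbf", C=100.0).fit(pts1[idx], pts2[idx, 0])
        fy = SVR(kernel="rbf", C=100.0).fit(pts1[idx], pts2[idx, 1])
        pred = np.column_stack([fx.predict(pts1[idx]), fy.predict(pts1[idx])])
        resid = np.linalg.norm(pred - pts2[idx], axis=1)
        # keep the best-explained fraction and re-estimate (diagnostic step)
        keep = np.argsort(resid)[: max(4, int(keep_ratio * len(idx)))]
        idx = idx[keep]
    pred_all = np.column_stack([fx.predict(pts1), fy.predict(pts1)])
    return np.linalg.norm(pred_all - pts2, axis=1) < final_tol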


Computer Vision and Pattern Recognition | 2011

Aggregating gradient distributions into intensity orders: A novel local image descriptor

Bin Fan; Fuchao Wu; Zhanyi Hu

A novel local image descriptor is proposed in this paper, which combines intensity orders and gradient distributions in multiple support regions. The novelty lies in three aspects: 1) the gradient is calculated in a rotation-invariant way within a given support region; 2) the rotation-invariant gradients are adaptively pooled spatially based on intensity orders in order to encode spatial information; 3) multiple support regions are used to construct the descriptor, which further improves its discriminative ability. The proposed descriptor therefore encodes not only gradient information but also the relative ordering of intensities as well as spatial information. In addition, it is truly rotation invariant in theory, without the need to compute a dominant orientation, which is a major error source of most existing methods, such as SIFT. Results on the standard Oxford dataset and 3D objects show a significant improvement over state-of-the-art methods under various image transformations.
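
A short sketch of point 1 above: a gradient can be made rotation invariant by expressing it in a local frame aligned with the radial direction from the region centre to the sample, so no dominant orientation is needed. The function and argument names are illustrative, and the pooling of point 2 is not shown here.

import numpy as np

def rotation_invariant_gradient(center, sample, grad):
    """Express the image gradient `grad` = (gx, gy) at `sample` in a local
    frame whose x-axis is the radial direction from the region `center` to
    the sample; the resulting 2-vector is unchanged when the whole patch
    rotates, because both the frame and the gradient rotate together."""
    grad = np.asarray(grad, float)
    radial = np.asarray(sample, float) - np.asarray(center, float)
    radial /= np.linalg.norm(radial) + 1e-12
    tangent = np.array([-radial[1], radial[0]])       # 90-degree rotation
    return np.array([np.dot(grad, radial), np.dot(grad, tangent)])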


Pattern Recognition | 2005

Camera calibration with moving one-dimensional objects

Fuchao Wu; Zhanyi Hu; Haijiang Zhu

In this paper, we show that the rotating 1D calibration object used in the literature is in essence equivalent to a familiar 2D planar calibration object. In addition, we show that when the 1D object undergoes a planar motion rather than rotating around a fixed point, this equivalence still holds but the traditional approach fails to handle it. Experiments are carried out to verify the theoretical correctness and numerical robustness of our results.


Journal of Mathematical Imaging and Vision | 2006

PnP Problem Revisited

Yihong Wu; Zhanyi Hu

Perspective-n-Point camera pose determination, or the PnP problem, has attracted much attention in the literature. This paper gives a systematic investigation of the PnP problem from both geometric and algebraic standpoints, and makes the following contributions. Firstly, we rigorously prove that the PnP problem under the distance-based definition is equivalent to the PnP problem under the orthogonal-transformation-based definition when n > 3, and equivalent to the PnP problem under the rotation-transformation-based definition when n = 3. Secondly, we obtain upper bounds on the number of solutions of the PnP problem under the different definitions. In particular, we show that for any three non-collinear control points we can always find a location of the optical center such that the P3P problem formed by these three control points and the optical center has 4 solutions, its upper bound; additionally, a geometric way to construct these 4 solutions is provided. Thirdly, we introduce a depth-ratio-based approach to represent the solutions of the whole PnP problem, which is shown to be advantageous over traditional elimination techniques. Lastly, degenerate cases with coplanar or collinear control points are also discussed. Surprisingly, it is shown that if all the control points are collinear, the PnP problem under the distance-based definition has a unique solution, while the PnP problem under the transformation-based definition is determined only up to one free parameter.
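
For concreteness, the distance-based definition referred to above can be written out for the P3P case: with control points P_1, P_2, P_3, unknown distances d_i from the optical center to P_i, and angles \theta_{ij} between the viewing rays of P_i and P_j (known from the image and the intrinsic parameters), the law of cosines gives the system

\[
d_i^{2} + d_j^{2} - 2\, d_i d_j \cos\theta_{ij} \;=\; \lVert P_i - P_j \rVert^{2},
\qquad (i,j) \in \{(1,2),(1,3),(2,3)\}, \quad d_i > 0,
\]

whose positive real solutions correspond to the up to 4 poses mentioned above.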


Computer Vision and Pattern Recognition | 2010

Line matching leveraged by point correspondences

Bin Fan; Fuchao Wu; Zhanyi Hu

A novel method for line matching is proposed. The basic idea is to use tentative point correspondences, which can be easily obtained by keypoint matching methods, to significantly improve line matching performance, even when the point correspondences are severely contaminated by outliers. When matching a pair of image lines, a group of corresponding points that may be coplanar with these lines in 3D space is first obtained from all corresponding image points in the local neighborhoods of the lines. Given such a group of corresponding points, the similarity between the pair of lines is then calculated based on an affine invariant derived from one line and two points. The similarity is defined using a median statistic in order to handle the inevitable incorrect correspondences in the group. Furthermore, the rotation between the reference and query images is estimated from all corresponding points to filter out pairs of lines that cannot possibly match, which speeds up the matching process and further improves its robustness. Extensive experiments on real images demonstrate the good performance of the proposed method as well as its superiority to state-of-the-art methods.
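
A small sketch of one way an affine invariant can be formed from one line and two points, as a ratio of signed triangle areas (an affine map scales every area by the same factor, so the ratio cancels). This is an illustrative formulation under that assumption, not necessarily the paper's exact invariant, and the median-based aggregation is not shown.

import numpy as np

def signed_area(a, b, c):
    """Twice the signed area of triangle (a, b, c); it scales by det(A)
    under an affine map x -> A x + t."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def line_two_point_invariant(line_p, line_q, x1, x2):
    """Quantity built from a line segment (line_p, line_q) and two points
    x1, x2 not on the line: the area ratio is preserved by any affine
    transformation relating the two images."""
    return signed_area(line_p, line_q, x1) / signed_area(line_p, line_q, x2)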

Collaboration


Dive into Zhanyi Hu's collaborations.

Top Co-Authors

Fuchao Wu (Chinese Academy of Sciences)
Yihong Wu (Chinese Academy of Sciences)
Shuhan Shen (Chinese Academy of Sciences)
Qiulei Dong (Chinese Academy of Sciences)
Wei Gao (Chinese Academy of Sciences)
Hainan Cui (Chinese Academy of Sciences)
Xiang Gao (Chinese Academy of Sciences)
Hung-Tat Tsui (The Chinese University of Hong Kong)
Songde Ma (Chinese Academy of Sciences)