Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Fuchao Wu is active.

Publication


Featured research published by Fuchao Wu.


International Conference on Computer Vision | 2011

Local Intensity Order Pattern for feature description

Zhenhua Wang; Bin Fan; Fuchao Wu

This paper presents a novel method for feature description based on intensity order. Specifically, a Local Intensity Order Pattern (LIOP) is proposed to encode the local ordinal information of each pixel, while the overall ordinal information is used to divide the local patch into subregions in which the LIOPs are accumulated. Both local and overall intensity ordinal information of the local patch are therefore captured by the proposed LIOP descriptor, making it highly discriminative. It is shown that the descriptor is not only invariant to monotonic intensity changes and image rotation but also robust to many other geometric and photometric transformations such as viewpoint change, image blur and JPEG compression. The proposed descriptor has been evaluated on the standard Oxford dataset and four additional image pairs with complex illumination changes. The experimental results show that it obtains a significant improvement over the existing state-of-the-art descriptors.
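The two levels of ordinal information can be sketched in a few lines. This is a minimal illustration of the idea, not the paper's implementation: it uses three axis-aligned neighbours instead of the paper's circular sampling, and quantile-based ordinal subregions; the function name and parameters are hypothetical.

```python
import itertools
import numpy as np

def liop_descriptor(patch, n_bins=4):
    """Sketch of the LIOP idea: each pixel contributes a pattern index
    derived from the ranks of its neighbours' intensities (local ordinal
    information), accumulated in a sub-histogram selected by the pixel's
    own intensity quantile (overall ordinal information)."""
    # Map each permutation of the three neighbour ranks to a pattern index.
    perms = {p: i for i, p in enumerate(itertools.permutations(range(3)))}
    h, w = patch.shape
    inner = patch[1:-1, 1:-1].ravel()
    # Overall ordinal information: quantile edges split the patch into
    # n_bins ordinal sub-regions.
    edges = np.quantile(inner, np.linspace(0, 1, n_bins + 1)[1:-1])
    hist = np.zeros((n_bins, len(perms)))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Local ordinal information: ranks of neighbour intensities.
            nbrs = (patch[y, x - 1], patch[y - 1, x], patch[y, x + 1])
            ranks = tuple(np.argsort(np.argsort(nbrs)))
            region = int(np.searchsorted(edges, patch[y, x]))
            hist[region, perms[ranks]] += 1
    d = hist.ravel()
    return d / (np.linalg.norm(d) + 1e-12)
```

Because only ranks and quantiles are used, any monotonic intensity change leaves the descriptor unchanged, which is the invariance the abstract claims.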


Pattern Recognition | 2009

MSLD: A robust descriptor for line matching

Zhiheng Wang; Fuchao Wu; Zhanyi Hu

Line matching plays an important role in many applications, such as image registration, 3D reconstruction, object recognition and video understanding. However, compared with other feature matching (such as point or region matching), it has made little progress in recent years. In this paper, we investigate the problem of matching line segments automatically from their neighborhood appearance alone, without resorting to any other constraints or prior knowledge. A novel line descriptor, the mean-standard deviation line descriptor (MSLD), is proposed for this purpose. It is constructed in three steps: (1) for each pixel on the line segment, its pixel support region (PSR) is defined and divided into non-overlapping sub-regions; (2) the gradient description matrix (GDM) of the line is formed by characterizing each sub-region as a vector; (3) MSLD is built by computing the mean and standard deviation of the GDM column vectors. Extensive experiments on real images show that the MSLD descriptor is highly distinctive for line matching under rotation, illumination change, image blur, viewpoint change, noise, JPEG compression and partial occlusion. In addition, the concept behind MSLD can be extended to a curve descriptor (the mean-standard deviation curve descriptor, MSCD), and promising MSCD-based results for both curve and region matching are also demonstrated in this work.
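The three steps can be sketched as follows, assuming the PSR gradients have already been sampled; the array layout (one row of m sub-regions of r gradient samples per line pixel) and the four-way positive/negative gradient characterization are hypothetical simplifications, not the paper's exact construction.

```python
import numpy as np

def msld_descriptor(gradients, m=4, r=5):
    """Sketch of MSLD from per-pixel support-region gradients.
    `gradients` has shape (n_pixels, m*r, 2): for each of n_pixels points
    on the line, a PSR of m sub-regions holding r gradient vectors each."""
    n = gradients.shape[0]
    sub = gradients.reshape(n, m, r, 2)
    # Step 2: Gradient Description Matrix, one column per line pixel;
    # each sub-region is characterized by 4 accumulated components
    # (positive/negative parts of dx and dy).
    gdm = np.zeros((4 * m, n))
    for j in range(m):
        dx, dy = sub[:, j, :, 0], sub[:, j, :, 1]
        gdm[4 * j + 0] = np.maximum(dx, 0).sum(axis=1)
        gdm[4 * j + 1] = np.maximum(-dx, 0).sum(axis=1)
        gdm[4 * j + 2] = np.maximum(dy, 0).sum(axis=1)
        gdm[4 * j + 3] = np.maximum(-dy, 0).sum(axis=1)
    # Step 3: the descriptor is the normalized mean and standard
    # deviation of the GDM columns; its length is fixed regardless of
    # how long the line is.
    mean, std = gdm.mean(axis=1), gdm.std(axis=1)
    return np.concatenate([mean / (np.linalg.norm(mean) + 1e-12),
                           std / (np.linalg.norm(std) + 1e-12)])
```

Averaging over the GDM columns is what makes the descriptor length independent of the line's length, so lines of different lengths remain directly comparable.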


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Rotationally Invariant Descriptors Using Intensity Order Pooling

Bin Fan; Fuchao Wu; Zhanyi Hu

This paper proposes a novel method for interest region description which pools local features based on their intensity orders in multiple support regions. Pooling by intensity orders is not only invariant to rotation and monotonic intensity changes, but also encodes ordinal information into a descriptor. Two kinds of local features are used in this paper, one based on gradients and the other on intensities; hence, two descriptors are obtained: the Multisupport Region Order-Based Gradient Histogram (MROGH) and the Multisupport Region Rotation and Intensity Monotonic Invariant Descriptor (MRRID). Thanks to the intensity order pooling scheme, the two descriptors are rotation invariant without estimating a reference orientation, which appears to be a major error source for most of the existing methods, such as Scale Invariant Feature Transform (SIFT), SURF, and DAISY. Promising experimental results on image matching and object recognition demonstrate the effectiveness of the proposed descriptors compared to state-of-the-art descriptors.
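The pooling scheme at the heart of both descriptors can be illustrated briefly. This is a hypothetical simplification of MROGH (one support region, generic per-point features); the point of the sketch is that grouping sample points by intensity rank needs no reference orientation, because ranks are unchanged when the region is rotated.

```python
import numpy as np

def order_pooled_descriptor(intensities, features, n_groups=4):
    """Sketch of intensity order pooling: sample points in a support
    region are partitioned into groups by the rank of their intensity,
    and per-point feature vectors are accumulated within each group."""
    order = np.argsort(intensities)           # ranks survive rotation
    groups = np.array_split(order, n_groups)  # equal-size ordinal groups
    pooled = np.concatenate([features[g].sum(axis=0) for g in groups])
    return pooled / (np.linalg.norm(pooled) + 1e-12)
```

A rotation of the patch merely reorders the sample points; since the same points end up in the same ordinal groups, the pooled descriptor is identical, which is why no dominant orientation has to be estimated.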


Computer Vision and Pattern Recognition | 2011

Aggregating gradient distributions into intensity orders: A novel local image descriptor

Bin Fan; Fuchao Wu; Zhanyi Hu

A novel local image descriptor is proposed in this paper, which combines intensity orders and gradient distributions in multiple support regions. The novelty lies in three aspects: 1) the gradient is calculated in a rotation-invariant way in a given support region; 2) the rotation-invariant gradients are adaptively pooled spatially based on intensity orders, so as to encode spatial information; 3) multiple support regions are used for constructing the descriptor, which further improves its discriminative ability. The proposed descriptor therefore encodes not only gradient information but also the relative relationships of intensities as well as spatial information. In addition, it is truly rotation invariant in theory, without the need to compute a dominant orientation, which is a major error source in most existing methods such as SIFT. Results on the standard Oxford dataset and 3D objects show a significant improvement over the state-of-the-art methods under various image transformations.
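Aspect 1 deserves a short illustration. One plausible way to compute a gradient in a rotation-invariant manner (a reading of the abstract, not necessarily the paper's exact formulation) is to express it in the local radial/tangential frame defined by the sample point's position relative to the region center:

```python
import numpy as np

def rotation_invariant_gradient(center, point, grad):
    """Project the image gradient at a sample point onto the local
    radial/tangential frame defined by the direction from the region
    center to that point.  Rotating the patch rotates both the frame
    and the gradient, so the two components are unchanged."""
    d = np.asarray(point, float) - np.asarray(center, float)
    r = d / np.linalg.norm(d)            # radial direction
    t = np.array([-r[1], r[0]])          # tangential direction
    return np.array([grad @ r, grad @ t])
```

Under a rotation about the center, both the point and its gradient rotate by the same angle, so the projections onto the co-rotating frame are invariant.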


Pattern Recognition | 2005

Camera calibration with moving one-dimensional objects

Fuchao Wu; Zhanyi Hu; Haijiang Zhu

In this paper, we show that the rotating 1D calibrating object used in the literature is in essence equivalent to a familiar 2D planar calibration object. In addition, we also show that when the 1D object undergoes a planar motion rather than rotating around a fixed point, such equivalence still holds but the traditional way fails to handle it. Experiments are carried out to verify the theoretical correctness and numerical robustness of our results.


Computer Vision and Pattern Recognition | 2010

Line matching leveraged by point correspondences

Bin Fan; Fuchao Wu; Zhanyi Hu

A novel method for line matching is proposed. The basic idea is to use tentative point correspondences, which can be easily obtained by keypoint matching methods, to significantly improve line matching performance, even when the point correspondences are severely contaminated by outliers. When matching a pair of image lines, a group of corresponding points that may be coplanar with these lines in 3D space is first obtained from all corresponding image points in the local neighborhoods of the lines. Given such a group of corresponding points, the similarity between the pair of lines is calculated based on an affine invariant constructed from one line and two points. The similarity is defined on the basis of a median statistic in order to handle the inevitable incorrect correspondences in the group. Furthermore, the rotation between the reference and query images is estimated from all corresponding points to filter out pairs of lines that cannot possibly be matches, which speeds up the matching process and further improves its robustness. Extensive experiments on real images demonstrate the good performance of the proposed method as well as its superiority to the state-of-the-art methods.
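The two ingredients, an invariant from one line and two points and a median-based similarity, can be sketched as follows. The invariant shown here (the ratio of signed distances of two points to a line, which any affine transformation preserves) is one plausible form, not necessarily the paper's exact construction; the function names are hypothetical.

```python
import numpy as np

def affine_invariant(line, p, q):
    """Ratio of signed distances of points p and q to a line, with
    `line` given as (a, b, c) for ax + by + c = 0.  An affine map sends
    line and points together, leaving this ratio unchanged."""
    a, b, c = line
    return (a * p[0] + b * p[1] + c) / (a * q[0] + b * q[1] + c)

def line_similarity(line1, line2, point_pairs):
    """Median-based similarity for a candidate line pair: compare the
    invariant over all pairs of corresponding points on both sides;
    taking the median of the residuals tolerates a minority of wrong
    point correspondences."""
    pts = list(point_pairs)
    residuals = [abs(affine_invariant(line1, pts[i][0], pts[j][0]) -
                     affine_invariant(line2, pts[i][1], pts[j][1]))
                 for i in range(len(pts)) for j in range(i + 1, len(pts))]
    return -float(np.median(residuals))   # higher means more similar
```

If the two lines truly correspond and most point pairs are correct, the residual median stays near zero even when some point correspondences are wrong, which is exactly the robustness the median statistic buys.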


European Conference on Computer Vision | 2004

Camera calibration from the quasi-affine invariance of two parallel circles

Yihong Wu; Haijiang Zhu; Zhanyi Hu; Fuchao Wu

In this paper, a new camera calibration algorithm is proposed, based on the quasi-affine invariance of two parallel circles. Two parallel circles here mean two circles in one plane, or in two parallel planes; such configurations are quite common in everyday scenes.


Image and Vision Computing | 2005

Single view metrology from scene constraints

Guanghui Wang; Zhanyi Hu; Fuchao Wu; Hung-Tat Tsui

The problem of how to retrieve Euclidean entities of a 3D scene from a single uncalibrated image is studied in this paper. We first present two methods to compute the camera projection matrix from the homography of a reference space plane and its vertical vanishing point. Then, we show how to use the projection matrix and some available scene constraints to retrieve geometrical entities of the scene, such as the height of an object on the reference plane, measurements on a vertical or arbitrary plane with respect to the reference plane, the distance from a point to a line, etc. In particular, the method is further employed to compute the volume and surface area of some regular and symmetric objects from a single image, an undertaking for which, to our knowledge, no similar report exists in the literature. In addition, all the algorithms are formulated in an explicit geometric framework, and the involved computation is linear. Finally, extensive experiments on simulated data and real images, as well as a comparative test with a closely related method in the literature, validate our proposed methods.
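The height-measurement step can be sketched under the paper's stated setup: a homography H from the reference plane (Z = 0, world coordinates (X, Y)) to the image, a vertical vanishing point v, and one object of known height to fix the metric scale. All function and argument names are hypothetical; points are image points in homogeneous coordinates.

```python
import numpy as np

def _scale_solve(t, b, v):
    # Solve t x (b + s*v) = 0 for s in least squares: the image of the
    # object's top t must lie on the line through its base image b and v.
    c1, c2 = np.cross(t, b), np.cross(t, v)
    return -float(c1 @ c2) / float(c2 @ c2)

def height_from_single_view(H, v, ref_base, ref_top, ref_height,
                            base, top):
    """Measure the height of an object standing on the reference plane
    from a single view, given one reference object of known height."""
    def plane_point(img_pt):
        # Back-project an image point onto the reference plane and
        # re-image it at canonical scale, so that top ~ B + height*a*v.
        w = np.linalg.solve(H, img_pt)
        return H @ (w / w[2])
    # Fix the scale a of the vertical direction from the reference
    # object: ref_top ~ B_ref + a*ref_height*v.
    a = _scale_solve(ref_top, plane_point(ref_base), v) / ref_height
    # Height of the query object: top ~ B + Z*a*v.
    return _scale_solve(top, plane_point(base), v) / a
```

The computation is linear throughout, matching the abstract's claim: each unknown scale is recovered from a cross-product incidence equation.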


Image and Vision Computing | 2005

Camera calibration and 3D reconstruction from a single view based on scene constraints

Guanghui Wang; Hung-Tat Tsui; Zhanyi Hu; Fuchao Wu

This paper mainly focuses on the problem of camera calibration and 3D reconstruction from a single view of a structured scene. It is well known that three constraints on the intrinsic parameters of a camera can be obtained from the vanishing points of three mutually orthogonal directions. However, there usually exist one or several pairs of line segments in structured scenes that are mutually orthogonal and lie in the pencil of planes defined by two of the vanishing directions. It is proved in this paper that a new independent constraint on the image of the absolute conic can be obtained if such a pair of line segments is of equal length or has a known length ratio in space. The constraint is further studied both in terms of the vanishing points and the images of the circular points. Hence, four independent constraints on a camera are obtained from one image, and the camera can be calibrated under the widely accepted assumption of zero skew. This paper also presents a simple method for the recovery of the camera extrinsic parameters and projection matrix with respect to a given world coordinate system. Furthermore, several methods are presented to estimate the positions and poses of planar surfaces in space from the recovered projection matrix and scene constraints. Thus, a scene structure can be reconstructed by combining the planar patches. Extensive experiments on simulated data and real images, as well as a comparative test with other methods in the literature, validate our proposed methods.


Pattern Recognition | 2012

Robust line matching through line-point invariants

Bin Fan; Fuchao Wu; Zhanyi Hu

This paper addresses line matching by means of line-point invariants, which encode local geometric information between a line and its neighboring points. Specifically, two kinds of line-point invariants are introduced: an affine invariant constructed from one line and two points, and a projective invariant constructed from one line and four points. The basic idea of the proposed line matching methods is to use cheaply obtainable matched points to boost line matching via line-point invariants, even if the matched points are susceptible to severe outlier contamination. To deal with the inevitable mismatches among the matched points, two line similarity measures are proposed, one based on the maximum and the other on the maximal median. Combining the two line-point invariants with the two similarity measures thus yields four line matching methods, whose performances are evaluated by extensive experiments. The results show that the proposed methods outperform the state-of-the-art methods and are robust to mismatches in the matched points used for line matching.
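A one-line-four-points projective invariant can be sketched as follows. The form below is one plausible construction, not necessarily the paper's exact one: a ratio of line-point incidence terms and point determinants chosen so that every homogeneous scale, and the determinant of the homography, cancels.

```python
import numpy as np

def projective_invariant(l, x1, x2, x3, x4):
    """Invariant of a line l and four points (all in homogeneous
    coordinates).  Under a homography H, l -> H^-T l and x -> H x, so
    each incidence term l.x is unchanged and each 3x3 determinant picks
    up det(H), which cancels between numerator and denominator."""
    num = (l @ x1) * np.linalg.det(np.column_stack([x2, x3, x4]))
    den = (l @ x2) * np.linalg.det(np.column_stack([x1, x3, x4]))
    return num / den
```

Each point and the line appear exactly once in both numerator and denominator, so rescaling any homogeneous vector leaves the value unchanged as well.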

Collaboration


Dive into Fuchao Wu's collaborations.

Top Co-Authors

Zhanyi Hu (Chinese Academy of Sciences)
Bin Fan (Chinese Academy of Sciences)
Yihong Wu (Chinese Academy of Sciences)
Fuqing Duan (Beijing Normal University)
Zhenhua Wang (Nanyang Technological University)
A-Li Luo (Chinese Academy of Sciences)
Xiaoming Deng (Chinese Academy of Sciences)
Zhiheng Wang (Chinese Academy of Sciences)
Jian-Nan Zhang (Chinese Academy of Sciences)