Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Chengfei Zhu is active.

Publication


Featured research published by Chengfei Zhu.


Image and Vision Computing | 2010

Adaptive pyramid mean shift for global real-time visual tracking

Shuxiao Li; Hongxing Chang; Chengfei Zhu

Tracking objects in videos using the mean shift technique has attracted considerable attention. In this work, a novel approach for global target tracking based on the mean shift technique is proposed. The proposed method represents the model and the candidate in terms of a background-weighted histogram and a color-weighted histogram, respectively, which allows the object size to be obtained precisely and adaptively with low computational complexity. To track targets whose displacements between two successive frames are relatively large, we implement the mean shift procedure in a coarse-to-fine manner for global maximum seeking. This procedure is termed adaptive pyramid mean shift, because it uses pyramid analysis and determines the pyramid level adaptively to decrease the number of iterations required to achieve convergence. Experimental results on various tracking videos and an application to a tracking-and-pointing subsystem show that the proposed method successfully copes with situations such as camera motion, camera vibration, camera zoom and focus changes, high-speed object tracking, partial occlusions, and target scale variations.
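The weighted-centroid iteration at the core of mean shift tracking can be sketched as follows. This is a minimal numpy illustration of a single-level mean shift loop over a weight (backprojection) image, not the authors' pyramid implementation; the function names are hypothetical.

```python
import numpy as np

def mean_shift_step(weights, center, half_win):
    """One mean shift iteration: move the window center to the
    weighted centroid of the pixels inside it."""
    y, x = center
    h, w = weights.shape
    y0, y1 = max(0, y - half_win), min(h, y + half_win + 1)
    x0, x1 = max(0, x - half_win), min(w, x + half_win + 1)
    win = weights[y0:y1, x0:x1]
    total = win.sum()
    if total == 0:
        return center
    ys, xs = np.mgrid[y0:y1, x0:x1]
    return (int(round((ys * win).sum() / total)),
            int(round((xs * win).sum() / total)))

def track(weights, start, half_win, max_iter=20):
    """Iterate mean shift until the window center stops moving."""
    c = start
    for _ in range(max_iter):
        nc = mean_shift_step(weights, c, half_win)
        if nc == c:
            break
        c = nc
    return c
```

In the paper's coarse-to-fine scheme, this loop would run first on a downsampled (pyramid) level to cover large displacements cheaply, then be refined at full resolution.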


Computer Vision and Image Understanding | 2009

Fast curvilinear structure extraction and delineation using density estimation

Shuxiao Li; Hongxing Chang; Chengfei Zhu

Detection and delineation of lines is important for many applications. However, most existing algorithms have high computational cost and cannot meet on-board real-time processing requirements. This paper presents a novel method for curvilinear structure extraction and delineation using kernel-based density estimation. The method rests on efficient pixel-wise density estimation over an input feature image, termed local weighted features (LWF). For gray-level and binary images, the LWF can be computed efficiently with an integral image and an accumulated image, respectively. Detectors for small objects and centerlines based on LWF are developed, and the selection of density estimation kernels is illustrated. The algorithm is very fast, achieving 50 fps on a Pentium IV 2.4 GHz processor. Evaluation results on a number of images and videos demonstrate the satisfactory performance of the proposed method, along with its high stability and adaptability.
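The integral-image trick that makes per-pixel density estimation cheap can be sketched as follows. This is a generic uniform-kernel (box) stand-in for the paper's LWF, with hypothetical function names; any box sum over the summed-area table costs four lookups regardless of box size.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row / left column for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_density(ii, r):
    """Per-pixel sum over a (2r+1)x(2r+1) box, i.e. a uniform-kernel
    density estimate, in O(1) per pixel via four table lookups."""
    h, w = ii.shape[0] - 1, ii.shape[1] - 1
    out = np.empty((h, w))
    for y in range(h):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        for x in range(w):
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    return out
```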


international conference on vehicular electronics and safety | 2010

Vision-aided UAV navigation using GIS data

Duoyu Gu; Chengfei Zhu; Jiang Guo; Shuxiao Li; Hongxing Chang

This paper proposes a novel vision-aided navigation architecture to aid the inertial navigation system (INS) for accurate unmanned aerial vehicle (UAV) localization. Unlike previous image-based localization methods such as scene matching and terrain contour matching, our approach registers meaningful object-level features extracted from real-time aerial imagery with geographic information system (GIS) data. First, we extract from the aerial images widely distributed object features, including roads, rivers, road intersections, villages, and bridges. Then, the extracted image features are delineated as geometric points and vectors, which coincide with the representation of GIS data. Finally, a GIS model is constructed from the corresponding geographic object information in the GIS data, and the visual geometric features are registered with the GIS model to obtain the absolute position of the image. The proposed method adopts GIS as reference data, so its storage requirement is lower than that of scene matching. In addition, all steps of the approach can be computed efficiently, whereas the computational cost of terrain contour matching is very high. Simulation results demonstrate the feasibility of the proposed method for UAV localization.


Computer Vision and Image Understanding | 2014

Visual object tracking using spatial Context Information and Global tracking skills

Shuxiao Li; Ou Wu; Chengfei Zhu; Hongxing Chang

Tracking objects in videos by the mean shift algorithm with color-weighted histograms has received much attention in recent years. However, the stability of the weights in mean shift still needs improvement, especially in low-contrast scenes with complex motions. This paper presents a new type of color cue, which produces stable weights for mean shift tracking and can be computed efficiently pixel by pixel. The proposed color cue employs global tracking techniques to overcome these drawbacks of the mean shift algorithm. It represents a target candidate at a larger scale than that of the target model, so that the model is much more precise than the candidate. We show that the weights obtained this way are more reliable across various scenes. To further suppress surrounding clutter, we establish a new spatial context model whose optimization results are a set of weights that can be computed pixel by pixel. The proposed color cue is called CIG since it computes the weights based on spatial Context Information and Global tracking skills. Experimental results on various tracking videos show that weight images produced by CIG have higher stability and precision than those of current methods, especially in low-contrast scenes with complex motions.


asia-pacific conference on information processing | 2009

Matching Road Networks Extracted from Aerial Images to GIS Data

Chengfei Zhu; Shuxiao Li; Hongxing Chang; JiXiang Zhang

In aerial images, the road network is the most salient artificial object and provides a wealth of geographical information. It is therefore very valuable for navigation systems, e.g. cruise missile or UAV (Unmanned Aerial Vehicle) navigation systems. Since road networks are also the most common data in GIS, the position of an aerial image can be located by matching it against a model generated from GIS. Many algorithms have been presented for matching images with a model, but most of them are computationally expensive and cannot meet on-line processing requirements. In this paper, we first represent the GIS data as a model (referred to as the GIS model below). Then, a method for registering the road network extracted from aerial images with the GIS model is described. Several aerial images are used to test the effectiveness of our method. Experimental results demonstrate that both the correctness and accuracy are satisfactory, and the cost of our method is acceptable.


international conference on image and graphics | 2011

Automatic Bridge Extraction for Optical Images

Duoyu Gu; Chengfei Zhu; Hao Shen; Jin-Zong Hu; Hongxing Chang

This paper describes a novel hierarchical algorithm for extracting bridges over water in optical images. To reduce missed bridges when searching along edges, we extract the river regions that contain the bridges. First, we segment the optical image to obtain coarse water bodies using iterative thresholding, then eliminate noise regions and add missing regions based on k-means clustering with texture information and spatial coherence. Next, gaps are connected based on shape features, and candidate bridge regions are segmented from the river regions. Finally, the bridges are verified using geometric information and the spatial relationship between bridges and rivers. The results show that this approach is efficient and effective for extracting bridges in satellite images from Google Earth and in aerial optical images acquired by an unmanned aerial vehicle.
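The coarse water-body segmentation step relies on iterative thresholding; a classic isodata-style version of that idea can be sketched as follows. This is a generic stand-in under the assumption of a roughly bimodal intensity histogram, not the paper's exact procedure.

```python
import numpy as np

def iterative_threshold(img, eps=0.5):
    """Isodata-style global threshold: split pixels at t, then reset t
    to the midpoint of the two class means, until t stops moving."""
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        if lo.size == 0 or hi.size == 0:
            return t
        nt = 0.5 * (lo.mean() + hi.mean())
        if abs(nt - t) < eps:
            return nt
        t = nt
```

On a water/land image, pixels below the returned threshold would form the coarse water mask that the later clustering and shape steps refine.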


asia-pacific conference on information processing | 2009

Robust Foreground Segmentation Using Subspace Based Background Model

JiXiang Zhang; Yuan Tian; YiPing Yang; Chengfei Zhu

Robust foreground segmentation is an essential step in many computer vision applications such as visual surveillance and behavior analysis. This paper proposes a subspace-based background modeling and foreground segmentation algorithm that improves incremental background subspace learning in a robust manner. It efficiently reduces the influence of foreground pixels, which are undesired in the background updating procedure, while adapting well to background variations. Furthermore, a novel subspace initialization method based on L1-minimization is proposed to efficiently construct the subspace background model using global information, without requiring an empty scene. Experimental results demonstrate the robustness and effectiveness of the algorithm.
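The subspace-background idea can be sketched in a few lines: fit a low-rank basis to vectorized frames, then flag pixels with large reconstruction residual as foreground. This is a simplified batch SVD stand-in for the paper's incremental, L1-based construction; function names are hypothetical.

```python
import numpy as np

def fit_background_subspace(frames, k):
    """Fit a rank-k background subspace from vectorized frames
    (one column per frame) via batch SVD."""
    mean = frames.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, U[:, :k]

def foreground_mask(frame, mean, basis, thresh):
    """Project a frame onto the subspace; pixels with large
    reconstruction residual are flagged as foreground."""
    d = frame.reshape(-1, 1) - mean
    recon = basis @ (basis.T @ d)
    residual = np.abs(d - recon).reshape(frame.shape)
    return residual > thresh
```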


asian conference on pattern recognition | 2015

Specific changes detection in visible-band VHR images using classification likelihood space

Feimo Li; Shuxiao Li; Chengfei Zhu; Xiaosong Lan; Hongxing Chang

Object-based post-classification change detection methods are effective for very high resolution images, but their effectiveness is limited by incomplete class hierarchies and complex image-object comparison. In this paper, a novel Classification Likelihood Space (CLS) is proposed to combine effective object-based image analysis with easy-to-implement post-classification comparison, serving as a good tradeoff between performance and complexity. The proposed algorithm is tested on a dataset comprising 102 pairs of visible-band very high resolution real satellite images, and a substantial improvement is observed over traditional post-classification comparison.


international conference on digital image processing | 2013

Corner detector using invariant analysis

Chengfei Zhu; Shuxiao Li; Yi Song; Hongxing Chang

Corner detection has been shown to be very useful in many computer vision applications. Some valid approaches have been proposed, but few of them are simultaneously accurate, efficient, and suitable for constrained platforms (such as DSPs). In this paper, a corner detector using invariant analysis is proposed. The new detector assumes that an ideal corner in a gray-level image has a well-defined corner structure within an annulus mask. An invariant function is put forward whose value for an ideal corner is constant; candidate corners can then be verified by comparing their invariant function values with this constant. Experiments show that the new corner detector is accurate and efficient and, owing to its simple calculation, can be used in complex applications.
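The abstract does not give the invariant itself, but the general annulus idea can be loosely illustrated: sample intensities on a ring around a candidate pixel and check that the profile looks like an ideal step corner (two near-constant arcs separated by exactly two large jumps). This is an assumed, simplified check for illustration only, not the paper's invariant function.

```python
import numpy as np

def ring_profile(img, y, x, radius, n=16):
    """Sample intensities on a circle (annulus of width 1) around (y, x)."""
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    ys = np.clip(np.round(y + radius * np.sin(angles)).astype(int),
                 0, img.shape[0] - 1)
    xs = np.clip(np.round(x + radius * np.cos(angles)).astype(int),
                 0, img.shape[1] - 1)
    return img[ys, xs]

def is_corner_like(img, y, x, radius=3, contrast=20):
    """An ideal step corner splits the ring into two near-constant arcs
    of different intensity: count the large circular transitions."""
    p = ring_profile(img, y, x, radius).astype(float)
    jumps = np.abs(np.diff(np.append(p, p[0]))) > contrast
    return int(jumps.sum()) == 2
```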


Remote Sensing | 2017

Cost-Effective Class-Imbalance Aware CNN for Vehicle Localization and Categorization in High Resolution Aerial Images

Feimo Li; Shuxiao Li; Chengfei Zhu; Xiaosong Lan; Hongxing Chang

Joint vehicle localization and categorization in high-resolution aerial images can provide useful information for applications such as traffic flow analysis. To retain sufficient features for recognizing small-scale vehicles, a regions with convolutional neural network features (R-CNN)-like detection structure is employed. In this setting, cascaded localization error can be averted by treating the negatives and the differently typed positives equally as a multi-class classification task, but the problem of class imbalance remains. To address this issue, a cost-effective network extension scheme is proposed, in which the convolution and connection costs incurred during extension are reduced by feature map selection and bi-partite main-side network construction, realized with the assistance of a novel feature-map class-importance measurement and a new class-imbalance-sensitive main-side loss function. Using an image classification dataset built from true-color aerial images with 0.13 m ground sampling distance, taken from a height of 1000 m by an imaging system composed of non-metric cameras, the effectiveness of the proposed network extension is verified by comparison with similarly shaped strong counterparts. Experiments show equivalent or better performance while requiring the smallest parameter and memory overheads.
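A common way to make a classification loss class-imbalance aware is to weight each example's term by its class frequency; a minimal numpy sketch of class-weighted softmax cross-entropy is below. This is a generic remedy for illustration, not the paper's main-side loss.

```python
import numpy as np

def weighted_cross_entropy(logits, labels, class_weights):
    """Class-weighted softmax cross-entropy: rare classes get larger
    weights so their gradient contribution is not drowned out."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    w = class_weights[labels]  # per-example weight from its class
    return -(w * log_probs[np.arange(len(labels)), labels]).mean()
```

Weights are often set inversely proportional to class counts, so a rare vehicle type contributes as much to the loss as a common background patch.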

Collaboration


Dive into Chengfei Zhu's collaborations.

Top Co-Authors

Hongxing Chang
Chinese Academy of Sciences

Shuxiao Li
Chinese Academy of Sciences

Xiaosong Lan
Chinese Academy of Sciences

Feimo Li
Chinese Academy of Sciences

Yi Song
Chinese Academy of Sciences

Yiping Shen
Chinese Academy of Sciences

Hao Shen
Chinese Academy of Sciences

Jinglan Zhang
Queensland University of Technology

Duoyu Gu
Chinese Academy of Sciences

JiXiang Zhang
Chinese Academy of Sciences