Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Qifeng Yu is active.

Publication


Featured research published by Qifeng Yu.


IEEE Geoscience and Remote Sensing Letters | 2007

An Adaptive Contoured Window Filter for Interferometric Synthetic Aperture Radar

Qifeng Yu; Xia Yang; Sihua Fu; Xiaolin Liu; Xiangyi Sun

An adaptive contoured window filter is proposed in this letter to remove noise from phase images of interferometric synthetic aperture radar (InSAR). The contoured windows best satisfy the requirement that the phase signal stay nearly constant inside each window, so that low-pass filtering within the window removes noise while preserving the fringe phases. The contoured windows are determined by tracing along the local fringe orientation, and an algorithm is also proposed that adapts the window size to the fringe density. Theoretical analysis and experiments show that the proposed filter greatly reduces decorrelation noise while preserving the fringe phase well, even for fringes with strong curvature in InSAR processing.
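
The abstract outlines the mechanism at a high level; the minimal numpy sketch below only illustrates the general idea of orientation-guided, density-adaptive phase filtering. It is not the authors' algorithm: the straight-line sampling along the fringe direction, the `density_gain` parameter and the synthetic fringe demo are illustrative assumptions, whereas the paper's windows are contoured along the fringes themselves.

```python
import numpy as np

def contoured_window_filter(phase, max_half=7, density_gain=1.5):
    """Toy sketch: average the complex phase along the local fringe direction,
    with a window length that shrinks where fringes are dense."""
    z = np.exp(1j * phase)
    # Wrapped phase gradients, taken on the complex field to avoid wrap errors.
    gx = np.angle(z * np.conj(np.roll(z, 1, axis=1)))
    gy = np.angle(z * np.conj(np.roll(z, 1, axis=0)))
    mag = np.hypot(gx, gy) + 1e-6
    # The iso-phase (fringe) direction is perpendicular to the phase gradient.
    tx, ty = -gy / mag, gx / mag
    # Larger half-windows where the fringe density (gradient magnitude) is low.
    half = np.clip(np.round(density_gain / mag), 1, max_half).astype(int)

    rows, cols = phase.shape
    rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    acc = np.zeros_like(z)
    for m in range(-max_half, max_half + 1):
        r = np.clip(np.round(rr + m * ty).astype(int), 0, rows - 1)
        c = np.clip(np.round(cc + m * tx).astype(int), 0, cols - 1)
        acc += np.where(np.abs(m) <= half, z[r, c], 0)
    return np.angle(acc)

if __name__ == "__main__":
    y, x = np.mgrid[0:128, 0:128]
    clean = np.angle(np.exp(1j * 0.15 * (x + 0.5 * y)))          # wrapped fringes
    noisy = np.angle(np.exp(1j * (clean + 0.8 * np.random.randn(128, 128))))
    filtered = contoured_window_filter(noisy)
    print("residual std:", np.angle(np.exp(1j * (filtered - clean))).std())
```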


Applied Optics | 2009

Fold-ray videometrics method for the deformation measurement of nonintervisible large structures.

Qifeng Yu; Guangwen Jiang; Sihua Fu; Zhichao Chao; Yang Shang; Xiangyi Sun

An optical measurement method, the fold-ray videometrics method, that is applicable to the deformation measurement of large structures is proposed. Through an illustration of ship deformation, the principle of fold-ray videometrics and the composition of the deformation measurement system are introduced. The fold-ray videometrics method is able to transfer or relay three-dimensional geometric information with a fold-ray optical path and thus is capable of real-time measurement of three-dimensional positions, attitudes, and deformations between nonintervisible objects and those of intervisible objects with a very large angle of view. The proposed method therefore has the potential to be applied in deformation measurement of large structures.


Applied Optics | 2012

Robust camera pose estimation from unknown or known line correspondences

Xiaohu Zhang; Zheng Zhang; You Li; Xianwei Zhu; Qifeng Yu; Jianliang Ou

We address the model-to-image registration problem with line features in the following two ways. (a) We present a robust solution to simultaneously recover the camera pose and the three-dimensional-to-two-dimensional line correspondences. With weak pose priors, our approach progressively verifies the pose guesses with a Kalman filter by using a subset of recursively found match hypotheses. Experiments show our method is robust to occlusions and clutter. (b) We propose a new line feature based pose estimation algorithm, which iteratively optimizes the objective function in the object space. Experiments show that the algorithm has strong robustness to noise and outliers and that it can attain very accurate results efficiently.
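
As a rough illustration of line-based pose estimation with an object-space error, the sketch below refines a perturbed pose so that both endpoints of each 3-D line lie on the interpretation plane defined by its matched image line. This is a generic interpretation-plane formulation solved with SciPy's least-squares routine, not the paper's algorithm; the intrinsic matrix, the synthetic lines and the initial guess are invented for the demo.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical intrinsics for the demo.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

def project(P, rvec, t):
    """Project 3-D points (N,3) into the image with pose (rvec, t)."""
    Pc = Rotation.from_rotvec(rvec).apply(P) + t
    p = (K @ Pc.T).T
    return p[:, :2] / p[:, 2:]

def line_residuals(params, lines3d, lines2d):
    """Object-space style residuals: each image line defines an interpretation
    plane through the camera centre; both 3-D endpoints must lie on it."""
    rvec, t = params[:3], params[3:]
    R = Rotation.from_rotvec(rvec).as_matrix()
    res = []
    for (P1, P2), (p1, p2) in zip(lines3d, lines2d):
        d1 = np.linalg.inv(K) @ np.append(p1, 1.0)
        d2 = np.linalg.inv(K) @ np.append(p2, 1.0)
        n = np.cross(d1, d2)                 # interpretation-plane normal
        n /= np.linalg.norm(n)
        for P in (P1, P2):
            res.append(n @ (R @ P + t))
    return np.array(res)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rvec_true, t_true = np.array([0.1, -0.2, 0.05]), np.array([0.2, -0.1, 4.0])
    lines3d = [(rng.uniform(-1, 1, 3), rng.uniform(-1, 1, 3)) for _ in range(8)]
    lines2d = []
    for P1, P2 in lines3d:
        p = project(np.vstack([P1, P2]), rvec_true, t_true)
        lines2d.append((p[0], p[1]))
    x0 = np.concatenate([rvec_true + 0.1, t_true + 0.3])   # perturbed guess
    sol = least_squares(line_residuals, x0, args=(lines3d, lines2d))
    print("recovered pose:", sol.x.round(3))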


Applied Optics | 1998

Generalized spin filtering and an improved derivative-sign binary image method for the extraction of fringe skeletons

Qifeng Yu; Xiaolin Liu; Xiangyi Sun

Generalized spin filters, including several directional filters such as the directional median filter and the directional binary filter, are proposed for removing noise from fringe patterns and for extracting fringe skeletons with the help of fringe-orientation maps (FOMs). The generalized spin filters remove noise from fringe patterns and binary fringe patterns efficiently without distorting the fringe features. A quadrantal angle filter is developed to denoise the FOM. With these new filters, the derivative-sign binary image (DSBI) method for extracting fringe skeletons is improved considerably; the improved DSBI method can extract high-density skeletons as well as skeletons of ordinary density.
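
The short sketch below shows a directional (spin-type) median filter of the kind the abstract describes: samples are taken along the local fringe direction estimated from smoothed gradients, and their median replaces the pixel. The orientation estimator, window length and synthetic fringe pattern are assumptions for illustration; the paper's generalized spin filters and quadrantal angle filter are more elaborate.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def directional_median(img, half_len=4):
    """Sketch of a directional median filter: the median is taken along the
    local fringe direction estimated from smoothed image gradients."""
    gy, gx = np.gradient(gaussian_filter(img, 2.0))
    mag = np.hypot(gx, gy) + 1e-9
    tx, ty = -gy / mag, gx / mag              # fringe (iso-intensity) direction
    rows, cols = img.shape
    rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    stack = []
    for m in range(-half_len, half_len + 1):
        r = np.clip(np.round(rr + m * ty).astype(int), 0, rows - 1)
        c = np.clip(np.round(cc + m * tx).astype(int), 0, cols - 1)
        stack.append(img[r, c])
    return np.median(np.stack(stack), axis=0)

if __name__ == "__main__":
    y, x = np.mgrid[0:128, 0:128]
    fringes = np.cos(0.2 * x + 0.1 * y)
    noisy = fringes + 0.5 * np.random.randn(128, 128)
    print("noise std before/after:",
          (noisy - fringes).std(), (directional_median(noisy) - fringes).std())
```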


Journal of Electronic Imaging | 2014

Instantaneous video stabilization for unmanned aerial vehicles

Jing Dong; Yang Xia; Qifeng Yu; Ang Su; Wang Hou

Video stabilization is a critical step for improving the quality of videos captured by unmanned aerial vehicles. However, the complicated scenarios in such videos and the need to present a stabilized image instantaneously pose significant challenges to existing methods. In this work, an instantaneous video stabilization method for unmanned aerial vehicles is proposed. The approach serves several purposes: smoothing the video motion in both two-dimensional and three-dimensional (3-D) scenes, decreasing the response lag, and instantaneously providing the stabilized image to users. For each input frame, the approach regenerates four short motion trajectories by applying the interframe transformations to the four corners of the image rectangle. An adaptive filter is then applied to smooth the motion trajectories and suppress the response lag simultaneously. Finally, at the image composition stage, image quality is taken into account to select a visually plausible stabilized video. Experiments show that the approach can stabilize various videos without user interaction or costly 3-D reconstruction, and that it works as an instant process for videos from an online source.
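
A toy sketch of the corner-trajectory idea follows: four corner trajectories are accumulated frame by frame, smoothed with an adaptive exponential filter whose strength drops when the camera moves quickly (a simplified stand-in for the paper's adaptive filter), and the per-frame correction is the homography mapping the observed corners onto the smoothed ones. The jitter model, thresholds and frame size are invented for the demo.

```python
import numpy as np

def homography_from_points(src, dst):
    """Exact homography mapping 4 source corners to 4 destination corners (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, V = np.linalg.svd(np.array(A))
    H = V[-1].reshape(3, 3)
    return H / H[2, 2]

def adaptive_smooth(traj, slow=0.9, fast=0.5, thresh=5.0):
    """Exponential smoothing whose strength drops when the camera moves fast,
    which keeps the lag small for intentional motion."""
    out = [traj[0]]
    for p in traj[1:]:
        speed = np.linalg.norm(p - out[-1], axis=1).mean()
        a = slow if speed < thresh else fast
        out.append(a * out[-1] + (1 - a) * p)
    return np.array(out)

if __name__ == "__main__":
    w, h = 640, 480
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], float)
    rng = np.random.default_rng(1)
    pan = np.arange(1, 101)[:, None, None] * np.array([1.0, 0.2])  # intentional pan
    shake = np.cumsum(rng.normal(0, 2.0, (100, 1, 2)), axis=0)     # random shake
    traj = corners + pan + shake                                   # (frames, 4, 2)
    smooth = adaptive_smooth(traj)
    # Per-frame stabilizing warp maps observed corners onto smoothed corners.
    H50 = homography_from_points(traj[50], smooth[50])
    print("frame 50 correction:\n", H50.round(3))
```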


EURASIP Journal on Advances in Signal Processing | 2013

Multi-modal image matching based on local frequency information

Xiaochun Liu; Zhihui Lei; Qifeng Yu; Xiaohu Zhang; Yang Shang; Wang Hou

This paper addresses the problem of matching multi-modal images that share similar physical structures but differ in appearance. To emphasize the common structural information while suppressing the illumination- and sensor-dependent information between multi-modal images, two image representations, the Mean Local Phase Angle (MLPA) and Frequency Spread Phase Congruency (FSPC), are proposed using local frequency information in the log-Gabor wavelet transform space. A confidence-aided similarity (CAS), consisting of a confidence component and a similarity component, is designed to establish correspondence between multi-modal images. Both representations are invariant to contrast reversal and non-homogeneous illumination variation and require no derivative or thresholding operation. Because the CAS integrates MLPA with FSPC tightly rather than treating them separately, it gives more weight to the common structures emphasized by FSPC and therefore further eliminates the influence of differing sensor properties. We demonstrate the accuracy and robustness of our method by comparing it with popular multi-modal image matching methods. Experimental results show that our method improves on traditional multi-modal image matching and works robustly even in quite challenging situations (e.g., SAR and optical images).
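
To make the frequency-domain ingredients concrete, here is a hypothetical re-creation of MLPA- and FSPC-style maps from a small log-Gabor filter bank. The filter-bank parameters, the simple phase-congruency-style normalization and the test image are assumptions; the paper's exact definitions and the confidence-aided similarity are not reproduced.

```python
import numpy as np

def loggabor_bank(shape, nscale=3, norient=4, min_wl=6, mult=2.0,
                  sigma_on_f=0.65, dtheta_sigma=1.2):
    """Build a small bank of log-Gabor filters in the frequency domain."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    radius[0, 0] = 1.0                           # avoid log(0) at DC
    theta = np.arctan2(-fy, fx)
    filters = []
    for o in range(norient):
        angle = o * np.pi / norient
        dtheta = np.arctan2(np.sin(theta - angle), np.cos(theta - angle))
        spread = np.exp(-dtheta**2 / (2 * (dtheta_sigma * np.pi / norient)**2))
        for s in range(nscale):
            f0 = 1.0 / (min_wl * mult**s)        # centre frequency of this scale
            radial = np.exp(-(np.log(radius / f0))**2 /
                            (2 * np.log(sigma_on_f)**2))
            radial[0, 0] = 0.0                   # kill the DC component
            filters.append(radial * spread)
    return filters

def mlpa_fspc(img, **kw):
    """Hypothetical MLPA / FSPC-style maps from local even/odd responses."""
    img = np.asarray(img, float)
    F = np.fft.fft2(img)
    sum_even = np.zeros_like(img)
    sum_odd = np.zeros_like(img)
    sum_amp = np.zeros_like(img)
    for g in loggabor_bank(img.shape, **kw):
        resp = np.fft.ifft2(F * g)               # analytic-like response
        e, o = resp.real, resp.imag
        sum_even += e
        sum_odd += o
        sum_amp += np.sqrt(e**2 + o**2)
    mlpa = np.arctan2(sum_odd, sum_even)         # mean local phase angle
    energy = np.sqrt(sum_even**2 + sum_odd**2)
    fspc = energy / (sum_amp + 1e-6)             # phase-congruency-like map
    return mlpa, fspc

if __name__ == "__main__":
    img = np.zeros((128, 128)); img[40:90, 40:90] = 1.0
    mlpa, fspc = mlpa_fspc(img)
    print(mlpa.shape, float(fspc.min()), float(fspc.max()))
```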


Applied Optics | 2014

Smeared star spot location estimation using directional integral method

Wang Hou; Haibo Liu; Zhihui Lei; Qifeng Yu; Xiaochun Liu; Jing Dong

Image smearing significantly affects the accuracy of attitude determination for most star sensors. To ensure the accuracy and reliability of a star sensor under image smearing conditions, a novel directional integral method is presented for high-precision star spot location estimation, improving the accuracy of attitude determination. Simulations based on the orbit data of the Challenging Minisatellite Payload (CHAMP) satellite were performed. The results demonstrate that the proposed method exhibits high performance and good robustness, indicating that it can be applied effectively.
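
The abstract does not spell out the directional integral, so the sketch below only illustrates the underlying idea under a strong assumption: the smear direction is known and axis-aligned, and the spot centre is taken from the centroids of the intensity integrals along and across that direction. The simulated smeared spot and noise level are invented for the demo.

```python
import numpy as np

def smeared_spot_center(img, axis=1):
    """Toy directional-integral estimate of a smeared star spot centre,
    assuming the smear direction is known and aligned with `axis`."""
    img = np.clip(img - np.median(img), 0, None)   # crude background removal
    along = img.sum(axis=axis)        # integral along the smear: sharp peak
    across = img.sum(axis=1 - axis)   # integral across the smear: plateau
    r = (np.arange(along.size) * along).sum() / along.sum()
    c = (np.arange(across.size) * across).sum() / across.sum()
    return (r, c) if axis == 1 else (c, r)

if __name__ == "__main__":
    y, x = np.mgrid[0:64, 0:64]
    spot = np.zeros((64, 64))
    for dx in np.linspace(-8, 8, 33):   # simulate a smear along x around (30, 25)
        spot += np.exp(-((x - (25 + dx)) ** 2 + (y - 30) ** 2) / (2 * 1.5 ** 2))
    noisy = spot + 0.01 * np.random.randn(64, 64)
    print("estimated centre (row, col):", np.round(smeared_spot_center(noisy), 2))
```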


Applied Optics | 2010

Study of a pose-relay videometric method using a parallel camera series

Zhichao Chao; Qifeng Yu; Guangwen Jiang; Sihua Fu

This study proposes a pose-relay videometric method that uses a parallel camera series and is applicable to measuring deformation between nonintervisible objects, as well as between intervisible objects with a very wide angle of view. The measuring system is constructed by adding symmetrical cameras at the pose-relay stations of a single-camera measuring system to improve its robustness and precision. An adjustment-based data fusion method is suggested to take full advantage of the data redundancy among neighboring relay stations in the proposed system. Simulation results show that the adjustment method improves the measuring precision over the classic weighted-average data fusion method, owing to its use of the constraint conditions inherent in the system.
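
As a loose, one-dimensional analogue of adjustment-based fusion versus plain weighted averaging, the sketch below fuses redundant link observations along a relay chain, together with an end-to-end closure observation, by least squares. The three link angles, noise levels and observation layout are invented for illustration and are not the paper's measurement model.

```python
import numpy as np

# Hypothetical 1-D relay chain: three link angles, each observed twice (the
# symmetric camera pair), plus one end-to-end observation constraining their sum.
true = np.array([2.0, -1.0, 0.5])
rng = np.random.default_rng(0)
rows, obs = [], []
for i in range(3):
    for _ in range(2):                       # two redundant observations per link
        e = np.zeros(3); e[i] = 1.0
        rows.append(e); obs.append(true[i] + rng.normal(0, 0.05))
rows.append(np.ones(3))                      # end-to-end closure observation
obs.append(true.sum() + rng.normal(0, 0.05))

A, b = np.array(rows), np.array(obs)
adjusted, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares adjustment
averaged = b[:6].reshape(3, 2).mean(axis=1)        # plain per-link averaging
print("per-link averaging error:", np.abs(averaged - true).sum().round(4))
print("adjustment error:        ", np.abs(adjusted - true).sum().round(4))
```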


Journal of Electronic Imaging | 2014

Small infrared target detection using frequency-spatial cues in a single image

Xiaoliang Sun; Wang Hou; Qifeng Yu; Xiaolin Liu; Yang Shang

This paper describes an approach for small infrared (IR) target detection using frequency-spatial cues. We model the background as spikes of the amplitude spectrum in the frequency domain. Target regions are highlighted through background suppression, which is realized by convolving the amplitude spectrum with a low-pass Gaussian kernel of an appropriate scale. A theoretical analysis of the convolution in the frequency domain is presented. We note that the remaining high values are attributable to sharp gradients in the IR image; to highlight the target region uniformly, the proposed algorithm therefore introduces image segmentation cues in the spatial domain, so that targets are completely preserved in the final result. An image database is built and used to test the proposed algorithm. Results show that our algorithm detects small IR targets effectively, with performance competitive with some state-of-the-art techniques, even for images with cluttered backgrounds. In addition, it is able to detect multiple targets of varied sizes, which is challenging for existing algorithms.
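
A minimal sketch of the frequency-domain step is shown below, assuming the suppression works by reconstructing the image from the original phase and the Gaussian-smoothed amplitude spectrum, so that narrow background spikes are flattened while the broadband small target survives. The spatial segmentation cues are omitted, and the synthetic IR scene and kernel scale are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_saliency(img, sigma=3.0):
    """Sketch of amplitude-spectrum suppression for small-target saliency."""
    img = np.asarray(img, float)
    img = img - img.mean()                    # drop the DC component
    F = np.fft.fft2(img)
    amp, phase = np.abs(F), np.angle(F)
    amp_s = gaussian_filter(amp, sigma)       # flatten background spikes
    recon = np.fft.ifft2(amp_s * np.exp(1j * phase))
    return gaussian_filter(np.abs(recon) ** 2, 2.0)   # light spatial smoothing

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y, x = np.mgrid[0:128, 0:128]
    scene = 0.5 + 0.3 * np.sin(0.05 * x) + 0.05 * rng.standard_normal((128, 128))
    scene[60:63, 80:83] += 1.0                # small 3x3 target
    sal = frequency_saliency(scene)
    print("saliency peak at:", np.unravel_index(sal.argmax(), sal.shape))
```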


Image and Vision Computing | 2014

The effects of temperature variation on videometric measurement and a compensation method

Qifeng Yu; Zhichao Chao; Guangwen Jiang; Yang Shang; Sihua Fu; Xiaolin Liu; Xianwei Zhu; Haibo Liu

When a videometric system operates over a long period, temperature variations in the camera and its environment affect the measurement results and cannot be ignored. How to eliminate or compensate for the effects of such temperature variations is a pressing problem. Starting from the image-drift phenomenon, this paper presents an image-drift model that analyzes the relationship between variations in the camera parameters and drift in the image coordinates. A simplified model is then introduced by analyzing the coupling relationships among the variations in the camera parameters. Furthermore, a model of the relationship between the camera parameters and temperature is established with a system identification method. Finally, several image-drift compensation experiments are carried out, using the parameter-temperature model calibrated on one data set to compensate the others. The analyses and experiments demonstrate the feasibility and efficiency of the proposed method.
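
The identification-and-compensation idea can be illustrated with a very small sketch: fit a drift-versus-temperature model on one calibration data set and use it to compensate an independent session. A plain linear model and synthetic temperature/drift data are assumed here; the paper identifies a richer camera-parameter model rather than raw image drift.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_session(n=200):
    """Synthetic session: temperature sweep and temperature-driven image drift."""
    temp = 20 + 10 * np.sin(np.linspace(0, 3, n)) + rng.normal(0, 0.2, n)
    drift = 0.08 * (temp - 20) + 0.3 + rng.normal(0, 0.02, n)   # pixels
    return temp, drift

t_cal, d_cal = make_session()          # calibration data set
t_new, d_new = make_session()          # independent working data set

# Identify a linear drift(T) model on the calibration set.
A = np.column_stack([t_cal, np.ones_like(t_cal)])
coef, *_ = np.linalg.lstsq(A, d_cal, rcond=None)

# Compensate the other session with the identified model.
compensated = d_new - (coef[0] * t_new + coef[1])
print("drift std before/after compensation:",
      d_new.std().round(3), compensated.std().round(3))
```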

Collaboration


Dive into Qifeng Yu's collaboration.

Top Co-Authors

Yang Shang, National University of Defense Technology
Xiaohu Zhang, National University of Defense Technology
Sihua Fu, National University of Defense Technology
Xiaolin Liu, National University of Defense Technology
Guangwen Jiang, National University of Defense Technology
Zhihui Lei, National University of Defense Technology
Xiangyi Sun, National University of Defense Technology
Xia Yang, National University of Defense Technology
Haibo Liu, National University of Defense Technology
Xianwei Zhu, National University of Defense Technology