
Publication


Featured research published by Feihu Qi.


IEEE Transactions on Medical Imaging | 2008

Segmenting Lung Fields in Serial Chest Radiographs Using Both Population-Based and Patient-Specific Shape Statistics

Yonghong Shi; Feihu Qi; Zhong Xue; Liya Chen; Kyoko Ito; Hidenori Matsuo; Dinggang Shen

This paper presents a new deformable model that uses both population-based and patient-specific shape statistics to segment lung fields from serial chest radiographs. The proposed deformable model has two novelties. First, a modified scale-invariant feature transform (SIFT) local descriptor, which is more distinctive than general intensity and gradient features, is used to characterize the image features in the vicinity of each pixel. Second, the deformable contour is constrained by both population-based and patient-specific shape statistics, which yields more robust and accurate segmentation of lung fields in serial chest radiographs. In particular, for segmenting the initial time-point images, the population-based shape statistics is used to constrain the deformable contour; as more images of the same patient are acquired, the patient-specific shape statistics, collected online from the previous segmentation results, gradually takes a larger role. This patient-specific shape statistics is updated each time a new segmentation result is obtained, and it is further used to refine the segmentation results of all the available time-point images. Experimental results show that the proposed method is more robust and accurate than other active shape models in segmenting the lung fields from serial chest radiographs.
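The abstract describes the weighting between the two shape priors only qualitatively. As a rough illustration (the function name, the linear schedule, and the mean-shape blend are assumptions for this sketch, not the paper's actual formulation), a patient-specific prior could be phased in like this:

```python
import numpy as np

def blended_mean_shape(pop_shapes, patient_shapes, n_full=10):
    """Blend population-based and patient-specific shape statistics.

    The weight on the patient-specific statistics grows with the number
    of segmented time points of the same patient; the linear schedule
    here is purely illustrative.
    """
    pop_shapes = np.asarray(pop_shapes, dtype=float)
    mean_pop = pop_shapes.mean(axis=0)
    k = len(patient_shapes)
    if k == 0:
        return mean_pop                      # no patient data yet
    w = min(1.0, k / n_full)                 # patient weight in [0, 1]
    mean_pat = np.asarray(patient_shapes, dtype=float).mean(axis=0)
    return (1.0 - w) * mean_pop + w * mean_pat
```

Each newly segmented radiograph would append its contour to `patient_shapes`, so the prior gradually shifts from the population mean toward the patient's own mean shape.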


IEEE Transactions on Medical Imaging | 2006

Learning-based deformable registration of MR brain images

Guorong Wu; Feihu Qi; Dinggang Shen

This paper presents a learning-based method for deformable registration of magnetic resonance (MR) brain images. The proposed registration method has two novelties. First, a set of best-scale geometric features is selected for each point in the brain to facilitate correspondence detection during the registration procedure. This is achieved by optimizing an energy function that requires each point's best-scale geometric features to be consistent with those of the corresponding points in the training samples, and at the same time distinctive from those of nearby points in its neighborhood. Second, the active points used to drive the brain registration are hierarchically selected during the registration procedure, based on their saliency and consistency measures. That is, image points with salient features that are consistent across different individuals drive the initial registration of the two images, while less salient and less consistent points join the registration procedure later. By incorporating these two strategies into the framework of the HAMMER registration algorithm, registration accuracy is improved on simulated brain data, and visible improvement is also observed on real brain data, particularly in the cortical regions.
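The trade-off between consistency across training samples and distinctiveness from neighbors can be sketched per point as a simple score over candidate scales (the scoring function, its equal weighting, and all names below are assumptions for illustration; the paper optimizes a global energy):

```python
import numpy as np

def best_scale(features, neighbor_features):
    """Pick, for one point, the scale whose feature is most consistent
    across training samples and most distinctive from nearby points.

    features:          (n_scales, n_samples, d) descriptors of the point
                       across training brains
    neighbor_features: (n_scales, n_neighbors, d) descriptors of nearby points
    """
    scores = []
    for s in range(features.shape[0]):
        f = features[s]
        # Consistency: low variance of the descriptor across samples.
        consistency = -f.var(axis=0).sum()
        # Distinctiveness: mean distance of neighbors from the sample mean.
        center = f.mean(axis=0)
        distinct = np.linalg.norm(neighbor_features[s] - center, axis=1).mean()
        scores.append(consistency + distinct)
    return int(np.argmax(scores))
```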


Pattern Recognition | 2008

Multimodality image registration by maximization of quantitative-qualitative measure of mutual information

Hongxia Luan; Feihu Qi; Zhong Xue; Liya Chen; Dinggang Shen

This paper presents a novel image similarity measure, referred to as the quantitative-qualitative measure of mutual information (Q-MI), for multimodality image registration. Conventional information measures, e.g., Shannon's entropy and mutual information (MI), reflect only the quantitative aspects of information because they consider only the probabilities of events. In fact, each event has its own utility for the fulfillment of the underlying goal, which can be independent of its probability of occurrence. Thus, it is important to consider both quantitative (i.e., probability) and qualitative (i.e., utility) measures of information in order to fully capture the characteristics of events. Accordingly, in multimodality image registration, Q-MI should be used to integrate the information obtained from both the image intensity distributions and the utilities of voxels in the images. Different voxels can have different utilities; for example, in brain images, two voxels can have the same intensity value but different utilities, e.g., a white matter (WM) voxel near the cortex can have higher utility than a WM voxel inside a large uniform WM region. In Q-MI, the utility of each voxel in an image is determined by the regional saliency value calculated from the scale-space map of that image. Since voxels with higher utility (or saliency) values contribute more when measuring the Q-MI of the two images, the Q-MI-based registration method is much more robust than conventional MI-based registration methods. The Q-MI-based registration method also provides a smoother registration function with a relatively larger capture range. In this paper, the proposed Q-MI has been validated and applied to the rigid registration of clinical brain images, such as MR, CT, and PET images.
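The abstract does not reproduce the estimator, but the idea of weighting each voxel's histogram contribution by its utility can be sketched as follows (the function names and the weighted-histogram estimator are illustrative assumptions, with `utility` standing in for the regional saliency map):

```python
import numpy as np

def q_mi(img_a, img_b, utility, bins=32):
    """Sketch of a quantitative-qualitative mutual information (Q-MI).

    Each voxel pair contributes to the joint intensity histogram with a
    weight given by its utility, so salient voxels influence the
    similarity measure more than voxels in uniform regions.
    """
    a, b, w = img_a.ravel(), img_b.ravel(), utility.ravel()
    # Utility-weighted joint histogram of the two intensity images.
    joint, _, _ = np.histogram2d(a, b, bins=bins, weights=w)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of image B
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```

With uniform utility this reduces to an ordinary histogram-based MI estimate; a saliency map concentrates the measure on informative structures.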


Information Processing in Medical Imaging | 2007

Learning best features and deformation statistics for hierarchical registration of MR brain images

Guorong Wu; Feihu Qi; Dinggang Shen

This paper presents a fully learning-based framework for deformable registration of MR brain images. In this framework, the entire brain is first adaptively partitioned into a number of brain regions, and the best features are then learned for each of these regions. To obtain better overall performance for both steps, they are integrated into a single framework and solved together by iteratively performing region partition and learning the best features for each partitioned region. In particular, the learned best features for each brain region are required to be identical, maximally salient, and consistent over all individual brains, thus facilitating correspondence detection between individual brains during the registration procedure. Moreover, the importance of each brain point in registration is evaluated according to the distinctiveness and consistency of its best features, so that salient points with distinctive and consistent features can be hierarchically selected to steer the registration process and reduce the risk of being trapped in local minima. Finally, the statistics of inter-brain deformations, represented by multi-level B-splines, is also hierarchically captured to effectively constrain the brain deformations estimated during the registration procedure. With this learning-based registration framework, more accurate and robust registration results are achieved in experiments on both real and simulated data.


International Conference on Medical Imaging and Augmented Reality | 2006

A general learning framework for non-rigid image registration

Guorong Wu; Feihu Qi; Dinggang Shen

This paper presents a general learning framework for non-rigid registration of MR brain images. Given a set of training MR brain images, three major types of information are learned and incorporated into the HAMMER registration algorithm to improve registration performance. First, the best features are learned from different types of local image descriptors for each part of the brain, so that the learned best features are consistent across corresponding points in individual brains but different at non-corresponding points. Moreover, the statistics of the selected best features is learned from the training samples and used to guide feature matching during image registration. Second, to avoid local minima in the registration, the points hierarchically selected to drive image registration are determined by the learned consistency and distinctiveness of their best features. Third, deformation fields are adaptively represented by B-splines, with more control points placed on regions with large shape variations across individual brains or on regions with consistent and distinctive best features. The statistics of the B-spline-based deformations is also captured and used to regularize the brain registration. Finally, by incorporating all the learned information into the HAMMER registration framework, promising results are obtained on both real and simulated data.
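For context on the B-spline representation of deformation fields, here is a generic 1-D cubic free-form-deformation evaluation (a textbook sketch, not the paper's adaptive multi-level implementation; all names are illustrative):

```python
import numpy as np

def bspline_basis(u):
    """The four cubic B-spline basis functions at parameter u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u**3 - 6 * u**2 + 4) / 6.0,
        (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6.0,
        u**3 / 6.0,
    ])

def ffd_displacement_1d(x, control, spacing):
    """Displacement at x under a 1-D cubic B-spline free-form deformation.

    `control` holds control-point displacements on a uniform grid with
    the given spacing; enough padding control points are assumed so the
    four indices used below stay in range.
    """
    i = int(np.floor(x / spacing))
    u = x / spacing - i
    b = bspline_basis(u)
    return float(sum(b[k] * control[i + k] for k in range(4)))
```

Because the cubic basis functions sum to one, a uniform control-point displacement reproduces a pure translation, and each control point only influences a local 4-cell neighborhood, which is what makes dense control grids attractive in high-variation regions.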


Medical Image Computing and Computer-Assisted Intervention | 2005

Learning best features for deformable registration of MR brains

Guorong Wu; Feihu Qi; Dinggang Shen

This paper presents a learning method that selects the best geometric features for deformable brain registration. The best geometric features are selected for each brain location and used to reduce the ambiguity in image matching during deformable registration. They are obtained by solving an energy minimization problem that requires the features of corresponding points in the training samples to be similar, and the features of a point to be different from those of nearby points. By incorporating the learned best features into the framework of the HAMMER registration algorithm, we achieved about 10% improvement in accuracy when estimating simulated deformation fields, compared to HAMMER alone. On real MR brain images, we also found visible improvement of registration in cortical regions.


International Conference on Computer Vision | 2005

Multi-modal image registration by quantitative-qualitative measure of mutual information (Q-MI)

Hongxia Luan; Feihu Qi; Dinggang Shen

This paper presents a novel measure of image similarity, called the quantitative-qualitative measure of mutual information (Q-MI), for multi-modal image registration. The conventional information measure, i.e., Shannon's entropy, is a quantitative measure of information, since it considers only the probabilities of events, not their utilities. In fact, each event has its own utility for the fulfillment of the underlying goal, which can be independent of its probability of occurrence. Therefore, it is important to consider both quantitative and qualitative (i.e., utility) information simultaneously for image registration. To achieve this, salient voxels, such as white matter (WM) voxels near the brain cortex, are assigned higher utilities than WM voxels inside large WM regions, according to the regional saliency values calculated from the scale-space map of the brain image. Thus, voxels with higher utilities contribute more to the mutual information of the two images under registration. We use this novel measure of mutual information (Q-MI) for the registration of multi-modality brain images, and find that the success rate of our registration method is much higher than that of a conventional mutual-information-based registration method.


Medical Image Computing and Computer-Assisted Intervention | 2006

Segmenting lung fields in serial chest radiographs using both population and patient-specific shape statistics

Yonghong Shi; Feihu Qi; Zhong Xue; Kyoko Ito; Hidenori Matsuo; Dinggang Shen

This paper presents a new deformable model using both population-based and patient-specific shape statistics to segment lung fields from serial chest radiographs. First, a modified scale-invariant feature transform (SIFT) local descriptor is used to characterize the image features in the vicinity of each pixel, so that the deformable model deforms toward regions with similar SIFT local descriptors. Second, the deformable model is constrained by both population-based and patient-specific shape statistics. Initially, the population-based shape statistics plays the leading role while the number of serial images is small; gradually, the patient-specific shape statistics plays an increasingly important role once a sufficient number of segmentation results on the same patient have been obtained. The proposed deformable model can adapt to the shape variability of different patients and obtain more robust and accurate segmentation results.


International Symposium on Biomedical Imaging | 2006

Improve brain registration using machine learning methods

Guorong Wu; Feihu Qi; Dinggang Shen

A machine learning method is introduced here to improve the accuracy of brain registration. In general, different brain regions may need different types or sets of features for registration, and these can be determined and learned from brain samples by a machine learning method. In this paper, we focus on learning the best geometric features required by different brain regions, in order to match correspondences and guide the registration procedure hierarchically. Compared to conventional registration methods that employ no learning, our learning-based registration method produces not only more consistent registration on serial images of the same subject, but also more accurate registration on a simulated dataset.


Pattern Recognition | 2004

Shape and motion from simultaneous equations with closed-loop solution

Zhaozhong Wang; Feihu Qi

This paper proposes a method for simultaneously estimating 2D image motion and 3D object shape and motion from only two frames. The problem is formulated as a system of equations, including the differential epipolar constraint, a newly derived optical flow equation, and a surface normal constraint, under the assumptions of perspective projection, rigid motion, Lambertian reflectance, and distant lighting. A closed-loop solver built on the simultaneous equations produces accurate estimates of optical flow as well as dense shape and motion. Experimental results are also provided.
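For context, the classical brightness-constancy optical flow constraint, which the paper's newly derived flow equation builds on, can be stated as:

```latex
% Brightness constancy of the image intensity I(x, y, t)
% under image velocity (u, v):
I_x u + I_y v + I_t = 0
```

where $I_x$, $I_y$, and $I_t$ are the spatial and temporal image derivatives; the paper couples such flow constraints with the differential epipolar and surface normal constraints into one solvable system.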

Collaboration


Dive into Feihu Qi's collaborations.

Top Co-Authors

Dinggang Shen (University of North Carolina at Chapel Hill)
Guorong Wu (University of North Carolina at Chapel Hill)
Hongxia Luan (Shanghai Jiao Tong University)
Lin Quan (Shanghai Jiao Tong University)
Liya Chen (Shanghai Jiao Tong University)
Zhaozhong Wang (Shanghai Jiao Tong University)