Publication


Featured research published by Yonghong Shi.


IEEE Transactions on Medical Imaging | 2008

Segmenting Lung Fields in Serial Chest Radiographs Using Both Population-Based and Patient-Specific Shape Statistics

Yonghong Shi; Feihu Qi; Zhong Xue; Liya Chen; Kyoko Ito; Hidenori Matsuo; Dinggang Shen

This paper presents a new deformable model that uses both population-based and patient-specific shape statistics to segment lung fields from serial chest radiographs. The proposed model has two novelties. First, a modified scale-invariant feature transform (SIFT) local descriptor, which is more distinctive than general intensity and gradient features, is used to characterize the image features in the vicinity of each pixel. Second, the deformable contour is constrained by both population-based and patient-specific shape statistics, which yields more robust and accurate segmentation of lung fields in serial chest radiographs. In particular, for segmenting the initial time-point images, the population-based shape statistics are used to constrain the deformable contour; as more images of the same patient are acquired, the patient-specific shape statistics, collected online from the previous segmentation results, gradually play a larger role. These patient-specific shape statistics are updated each time a new segmentation result is obtained and are further used to refine the segmentation results of all available time-point images. Experimental results show that the proposed method is more robust and accurate than other active shape models in segmenting the lung fields from serial chest radiographs.
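
The shifting population-to-patient-specific constraint can be sketched as below, assuming shapes are flattened landmark vectors and an n/(n + tau) weighting ramp (the ramp and the value of tau are assumptions, not the paper's exact scheme):

```python
import numpy as np

def fit_pca_shape_model(shapes, var_keep=0.98):
    """Fit a PCA shape model to rows of flattened (x1, y1, ..., xm, ym) contours."""
    mean = shapes.mean(axis=0)
    _, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    var = s ** 2
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep)) + 1
    return mean, Vt[:k]                      # mean shape and top-k variation modes

def constrain(shape, mean, modes):
    """Project a contour onto the model subspace (the ASM shape constraint)."""
    b = modes @ (shape - mean)
    return mean + modes.T @ b

def blended_constraint(shape, pop_model, pat_model, n_patient, tau=5.0):
    """Blend population and patient-specific constraints; the weight shifts
    toward the patient-specific model as more time points of the same
    patient (n_patient) are segmented."""
    w = n_patient / (n_patient + tau)
    return (1.0 - w) * constrain(shape, *pop_model) + w * constrain(shape, *pat_model)
```

At n_patient = 0 the constraint is purely population-based; as segmented time points accumulate, the patient-specific model dominates.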


IEEE Transactions on Medical Imaging | 2014

Hierarchical Lung Field Segmentation With Joint Shape and Appearance Sparse Learning

Yeqin Shao; Yaozong Gao; Yanrong Guo; Yonghong Shi; Xin Yang; Dinggang Shen

Lung field segmentation in the posterior-anterior (PA) chest radiograph is important for pulmonary disease diagnosis and hemodialysis treatment. Due to high shape variation and boundary ambiguity, accurate lung field segmentation from chest radiographs is still a challenging task. To tackle these challenges, we propose a joint shape and appearance sparse learning method for robust and accurate lung field segmentation. The main contributions of this paper are: 1) a robust shape initialization method is designed to achieve an initial shape that is close to the lung boundary under segmentation; 2) a set of local sparse shape composition models is built from local lung shape segments to overcome the high shape variation; 3) a set of local appearance models is similarly built using sparse representation to capture the appearance characteristics of local lung boundary segments, thus effectively dealing with lung boundary ambiguity; 4) a hierarchical deformable segmentation framework is proposed to integrate the scale-dependent shape and appearance information for robust and accurate segmentation. Our method is evaluated on 247 PA chest radiographs from a public dataset. The experimental results show that the proposed local shape and appearance models outperform conventional shape and appearance models. Our method also shows higher accuracy than most state-of-the-art lung field segmentation methods under comparison, comparable to the inter-observer annotation variation.
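
The local sparse shape composition idea, in a minimal form: a (possibly corrupted) boundary segment is re-expressed as a sparse combination of aligned training segments, so deviations the training set cannot express are suppressed. The Lasso coder and the alpha value are assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_shape_refine(segment, train_segments, alpha=0.01):
    """Sparse shape composition for one local boundary segment: represent the
    segment as a sparse combination of aligned training segments and return
    the reconstruction, which lies in the span of the training variations."""
    mean = train_segments.mean(axis=0)
    D = (train_segments - mean).T            # columns are centered training segments
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, segment - mean)
    return mean + D @ coder.coef_
```

A segment with an outlier point is pulled back toward the shapes the training set can express.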


Medical Image Computing and Computer Assisted Intervention (MICCAI) | 2008

Hierarchical Shape Statistical Model for Segmentation of Lung Fields in Chest Radiographs

Yonghong Shi; Dinggang Shen

The standard Active Shape Model (ASM) generally uses the whole population to train a single PCA-based shape model for segmenting all testing samples. Since some testing samples may be similar to only a sub-population of training samples, it is more effective to guide image segmentation with the particular shape statistics extracted from the respective sub-population. Accordingly, we design a set of hierarchical shape statistical models, including a whole-population shape model and a series of sub-population models. The whole-population shape model guides the initial segmentation of the testing sample, and the initial segmentation result is then used to select a suitable sub-population shape model according to the shape similarity between the testing sample and each sub-population. The selected sub-population shape model is then used to refine the segmentation result. To achieve this segmentation process, the following steps are designed. First, all linearly aligned samples in the whole population are used to generate a whole-population shape model. Second, an affinity propagation method is used to cluster all linearly aligned samples into several clusters, to determine which samples belong to the same sub-populations. Third, the original samples of each sub-population are linearly aligned to their own mean shape, and the respective sub-population shape model is built from the newly aligned samples in that sub-population. With these three steps, we generate hierarchical shape statistical models to guide image segmentation. Experimental results show that the proposed method significantly improves segmentation performance compared to the conventional ASM.
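
The clustering and per-cluster modeling steps can be sketched with scikit-learn's affinity propagation, assuming shapes are already linearly aligned and flattened (the per-cluster re-alignment step is omitted, and n_modes is an assumed cap on retained modes):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def build_sub_population_models(aligned_shapes, n_modes=3):
    """Cluster linearly aligned shapes with affinity propagation, then fit
    one PCA shape model (mean + top modes) per sub-population."""
    ap = AffinityPropagation(random_state=0).fit(aligned_shapes)
    models = {}
    for label in np.unique(ap.labels_):
        members = aligned_shapes[ap.labels_ == label]
        mean = members.mean(axis=0)
        _, _, Vt = np.linalg.svd(members - mean, full_matrices=False)
        models[label] = (mean, Vt[:n_modes])
    return ap, models

def select_sub_model(initial_shape, ap, models):
    """Pick the sub-population model nearest to the shape produced by the
    whole-population model's initial segmentation."""
    return models[ap.predict(initial_shape[None, :])[0]]
```

The selected sub-population model would then replace the whole-population model in the refinement pass.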


Medical Image Analysis | 2015

Predict brain MR image registration via sparse learning of appearance and transformation

Qian Wang; Minjeong Kim; Yonghong Shi; Guorong Wu; Dinggang Shen

We propose a new approach to register a subject image with a template by leveraging a set of intermediate images that are pre-aligned with the template. We argue that, if points in the subject and the intermediate images share similar local appearances, they may have a common correspondence in the template. We therefore learn the sparse representation of a given subject point to reveal several similar candidate points in the intermediate images. Each selected intermediate candidate can bridge the correspondence from the subject point to the template space, thus predicting the transformation associated with the subject point at a confidence level related to the learned sparse coefficient. Following this strategy, we first predict transformations at selected key points, retaining multiple predictions per key point instead of allowing only a single correspondence. Then, using all key points and their predictions with varying confidences, we adaptively reconstruct the dense transformation field that warps the subject to the template. We further embed this prediction-reconstruction protocol into a multi-resolution hierarchy. Finally, we refine the estimated transformation field with an existing registration method. We apply our method to registering brain MR images and conclude that the proposed framework substantially improves registration performance.


Pattern Recognition | 2017

Robust multi-atlas label propagation by deep sparse representation

Chen Zu; Zhengxia Wang; Daoqiang Zhang; Peipeng Liang; Yonghong Shi; Dinggang Shen; Guorong Wu

Recently, multi-atlas patch-based label fusion has achieved many successes in the medical imaging field. The basic assumption in current state-of-the-art approaches is that the image patch at a target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images; the label at the target image point can therefore be determined by fusing the labels of atlas image patches with similar anatomical structures. However, this assumption does not always hold in label fusion since (1) the image content within the patch may be corrupted by noise and artifacts; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced, such that majority patterns can dominate the label fusion result over minority patterns. Violating these basic assumptions can significantly undermine label fusion accuracy. To overcome these issues, we first form a label-specific group for the atlas patches with the same label. Then, we replace the conventional flat and shallow dictionary with a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patch-wise residual information at different scales. Label fusion then follows the representation consensus across representative dictionaries. The representation of the target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns, and by using all residual patterns across groups collaboratively to compensate for groups that lack certain variation patterns present in the target image patch. Promising segmentation results have been achieved in labeling the hippocampus on the ADNI dataset, as well as basal ganglia and brainstem structures, compared to other label fusion methods.
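
For contrast with the deep multi-layer dictionary above, a minimal flat sparse-representation label fusion step might look like this (the Lasso coder, atom normalization, and nearest-patch fallback are assumptions):

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_label_fusion(target_patch, atlas_patches, atlas_labels, alpha=0.01):
    """Flat (single-layer) sparse-representation label fusion: code the target
    patch over the normalized atlas-patch dictionary, then fuse atlas labels
    weighted by the non-negative sparse coefficients."""
    D = atlas_patches / np.linalg.norm(atlas_patches, axis=1, keepdims=True)
    coder = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=5000)
    coder.fit(D.T, target_patch)             # columns are atlas-patch atoms
    w = coder.coef_
    if w.sum() == 0.0:                       # degenerate code: nearest atlas patch
        return atlas_labels[int(np.argmin(
            np.linalg.norm(atlas_patches - target_patch, axis=1)))]
    votes = {}
    for label, wi in zip(atlas_labels, w):
        votes[label] = votes.get(label, 0.0) + wi
    return max(votes, key=votes.get)
```

The paper's method layers label-specific and residual dictionaries on top of exactly this kind of coding step.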


Medical Image Computing and Computer Assisted Intervention (MICCAI) | 2006

Segmenting lung fields in serial chest radiographs using both population and patient-specific shape statistics

Yonghong Shi; Feihu Qi; Zhong Xue; Kyoko Ito; Hidenori Matsuo; Dinggang Shen

This paper presents a new deformable model using both population-based and patient-specific shape statistics to segment lung fields from serial chest radiographs. First, a modified scale-invariant feature transform (SIFT) local descriptor is used to characterize the image features in the vicinity of each pixel, so that the deformable model deforms to seek the region with similar SIFT local descriptors. Second, the deformable model is constrained by both population-based and patient-specific shape statistics. Initially, the population-based shape statistics play a leading role when the number of serial images is small; gradually, the patient-specific shape statistics play an increasingly important role once a sufficient number of segmentation results for the same patient have been obtained. The proposed deformable model can adapt to the shape variability of different patients and obtain more robust and accurate segmentation results.


Medical Image Computing and Computer Assisted Intervention (MICCAI) | 2012

Dense Deformation Reconstruction via Sparse Coding

Yonghong Shi; Guorong Wu; Zhijian Song; Dinggang Shen

Many image registration algorithms need to interpolate dense deformations from a small set of sparse deformations or correspondences established on landmark points. Previous methods generally use a pre-defined deformation model, e.g., B-spline or thin-plate spline, for dense deformation interpolation, which may affect the final registration accuracy since the actual deformation may not exactly follow the pre-defined model. To address this issue, we propose a novel learning-based method to represent the to-be-estimated dense deformations as a linear combination of sample dense deformations in a pre-constructed dictionary, with the combination coefficients computed from the sparse representation of their respective correspondences on the same set of landmarks. Specifically, in the training stage, we register each training image to the selected template by a certain registration method and obtain correspondences on a fixed set of landmarks in the template, as well as the respective dense deformation field. We then build two dictionaries that store, in the same indexing order, the landmark correspondences and the dense deformations from all training images. In the application stage, after estimating the landmark correspondences for a new subject, we first represent them by the instances in the dictionary of landmark correspondences. The estimated sparse coefficients are then used to reconstruct the dense deformation field of the new subject by fusing the corresponding instances in the dictionary of dense deformations. We demonstrate the advantage of the proposed deformation interpolation method in two applications: CT prostate registration in radiotherapy and MR brain registration in neuroscience studies. In both applications, our learning-based method achieves higher accuracy and potentially faster computation than the conventional method.
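
The two-dictionary reconstruction in the application stage can be sketched as follows, assuming landmark correspondences and dense fields are flattened vectors and using orthogonal matching pursuit as the sparse coder (the paper's exact solver is not specified here; k is an assumed sparsity level):

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def reconstruct_dense_field(subject_landmarks, landmark_dict, dense_dict, k=3):
    """Sparse-coding deformation interpolation: code the subject's landmark
    correspondences over the training landmark dictionary, then reuse the
    same coefficients to fuse the paired dense deformation fields. Both
    dictionaries share one indexing order, as in the training stage."""
    # columns of the design matrix are training landmark-correspondence vectors
    alpha = orthogonal_mp(landmark_dict.T, subject_landmarks, n_nonzero_coefs=k)
    return alpha @ dense_dict
```

Because the coefficients transfer unchanged from one dictionary to the other, no parametric interpolation model (B-spline, TPS) is needed.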


Medical Image Computing and Computer Assisted Intervention (MICCAI) | 2016

Multi-Atlas Based Segmentation of Brainstem Nuclei from MR Images by Deep Hyper-Graph Learning

Pei Dong; Yanrong Guo; Yue Gao; Peipeng Liang; Yonghong Shi; Qian Wang; Dinggang Shen; Guorong Wu

Accurate segmentation of brainstem nuclei (red nucleus and substantia nigra) is very important in neuroimaging applications such as deep brain stimulation and the investigation of imaging biomarkers for Parkinson's disease (PD). Due to iron deposition during aging, image contrast in the brainstem is very low in Magnetic Resonance (MR) images. The resulting ambiguity of patch-wise similarity makes it difficult for the recently successful multi-atlas patch-based label fusion methods to perform as competitively as they do when segmenting cortical and sub-cortical regions from MR images. To address this challenge, we propose a novel multi-atlas brainstem nuclei segmentation method using deep hyper-graph learning. Our contributions are threefold. First, we employ a hyper-graph to combine the advantage of maintaining spatial coherence from graph-based segmentation approaches with the benefit of harnessing population priors from the multi-atlas framework. Second, besides low-level image appearance, we also extract high-level context features to measure the complex patch-wise relationships. Since the context features are calculated on a tentatively estimated label probability map, our hyper-graph learning based label propagation becomes a deep, self-refining model. Third, since anatomical labels on some voxels (usually located in uniform regions) can be identified much more reliably than on others (usually located at the boundary between two regions), we allow these reliable voxels to propagate their labels to nearby difficult-to-label voxels. This hierarchical strategy makes our proposed label fusion method deep and dynamic. We evaluate the method on segmenting the substantia nigra (SN) and red nucleus (RN) from 3.0 T MR images, where it achieves significant improvement over state-of-the-art label fusion methods.
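
The reliable-voxels-first idea (the third contribution) can be sketched on a toy 2-D probability map; the thresholds and the 4-neighborhood majority vote are assumptions, and the hyper-graph model itself is not reproduced:

```python
import numpy as np

def hierarchical_propagate(prob, hi=0.8, lo=0.2):
    """Fix confidently labeled voxels first (prob >= hi or <= lo), then
    spread labels to neighboring undecided voxels by majority vote."""
    label = np.full(prob.shape, -1, dtype=int)   # -1 = undecided
    label[prob >= hi] = 1
    label[prob <= lo] = 0
    while (label == -1).any():
        updated = False
        for i, j in zip(*np.where(label == -1)):
            votes = []
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < label.shape[0] and 0 <= nj < label.shape[1] \
                        and label[ni, nj] != -1:
                    votes.append(label[ni, nj])
            if votes:
                label[i, j] = int(round(np.mean(votes)))
                updated = True
        if not updated:   # isolated undecided region: fall back to probability
            label[label == -1] = (prob[label == -1] >= 0.5).astype(int)
    return label
```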


International Conference on Machine Learning | 2011

Learning statistical correlation of prostate deformations for fast registration

Yonghong Shi; Shu Liao; Dinggang Shen

This paper presents a novel fast registration method for aligning the planning image onto each treatment image of a patient for adaptive radiation therapy of prostate cancer. Specifically, an online correspondence interpolation method is presented to learn the statistical correlation between the deformations of the prostate boundary and non-boundary regions from a population of training patients, as well as from the online-collected treatment images of the same patient. With this learned statistical correlation, the estimated boundary deformations can be used to rapidly predict the regional deformations between the prostates in the planning and treatment images. In particular, the population-based correlation is initially used to interpolate the dense correspondences when the number of available treatment images from the current patient is small. As more treatment images of the current patient are acquired, the patient-specific information gradually plays a more important role in reflecting the prostate shape changes of the current patient during treatment. Eventually, once a sufficient number of treatment images have been acquired and segmented from the current patient, only the patient-specific correlation is used to guide the regional correspondence prediction. Experimental results show that the proposed method achieves much faster registration with accuracy comparable to the thin-plate spline (TPS) based interpolation approach.
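
A minimal stand-in for the learned boundary-to-region correlation, using a ridge-regularized linear map (the paper's exact estimator may differ):

```python
import numpy as np
from sklearn.linear_model import Ridge

def learn_boundary_to_region_map(boundary_defs, region_defs, reg=1e-2):
    """Learn the statistical correlation between boundary and regional
    deformations from training pairs (rows are flattened deformation vectors)."""
    return Ridge(alpha=reg).fit(boundary_defs, region_defs)

def predict_region_deformation(model, boundary_def):
    """Rapidly predict the regional deformation from a newly estimated
    boundary deformation, in place of dense TPS interpolation."""
    return model.predict(boundary_def[None, :])[0]
```

Once the map is learned, prediction is a single matrix-vector product, which is where the speedup over per-image TPS interpolation would come from.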


International Conference on Medical Imaging and Augmented Reality | 2008

Learning Longitudinal Deformations for Adaptive Segmentation of Lung Fields from Serial Chest Radiographs

Yonghong Shi; Dinggang Shen

We previously developed a deformable model for segmenting lung fields in serial chest radiographs using both population-based and patient-specific shape statistics, and obtained higher accuracy compared to other methods. However, that method uses an ad hoc way to evenly partition the boundary of the lung fields into short segments, in order to capture the patient-specific shape statistics from a small number of samples by principal component analysis (PCA). This ad hoc partition can produce a segment containing points with different amounts of longitudinal deformation, making it difficult to capture the principal variations from a small number of samples using PCA. In this paper, we propose a learning technique to adaptively partition the boundary of the lung fields into short segments according to the longitudinal deformations learned for each boundary point. All points in the same short segment then share similar longitudinal deformations, and thus small variations across all longitudinal samples of a patient, which enables effective capture of patient-specific shape statistics by PCA. Experimental results show the improved performance of the proposed method in segmenting the lung fields from serial chest radiographs.
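
The adaptive partition can be sketched as a greedy grouping of contiguous boundary points with similar longitudinal deformation profiles (the greedy rule and the tolerance are assumptions, not the paper's learned criterion):

```python
import numpy as np

def partition_boundary(longitudinal_defs, tol=0.5):
    """Greedily partition boundary points into contiguous segments so that
    points in a segment have similar longitudinal deformation profiles.
    longitudinal_defs: (n_points, n_timepoints) deformation magnitudes."""
    segments, current = [], [0]
    mean = longitudinal_defs[0].astype(float)
    for i in range(1, len(longitudinal_defs)):
        if np.linalg.norm(longitudinal_defs[i] - mean) <= tol:
            current.append(i)                          # similar profile: extend
            mean = longitudinal_defs[current].mean(axis=0)
        else:
            segments.append(current)                   # profile changed: cut here
            current, mean = [i], longitudinal_defs[i].astype(float)
    segments.append(current)
    return segments
```

Each resulting segment would then get its own small PCA model, as the abstract describes.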

Collaboration


Top co-authors of Yonghong Shi, with affiliations:

Dinggang Shen, University of North Carolina at Chapel Hill
Guorong Wu, University of North Carolina at Chapel Hill
Feihu Qi, Shanghai Jiao Tong University
Peipeng Liang, Capital Medical University
Qian Wang, Shanghai Jiao Tong University
Shu Liao, University of North Carolina at Chapel Hill
Chen Zu, Nanjing University of Aeronautics and Astronautics
Daoqiang Zhang, Nanjing University of Aeronautics and Astronautics