Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yaozong Gao is active.

Publication


Featured research published by Yaozong Gao.


NeuroImage | 2014

Segmentation of neonatal brain MR images using patch-driven level sets

Li Wang; Feng Shi; Gang Li; Yaozong Gao; Weili Lin; John H. Gilmore; Dinggang Shen

Segmentation of neonatal brain MR images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) is challenging due to low spatial resolution, severe partial volume effect, high image noise, and the dynamic myelination and maturation processes. Atlas-based methods have been widely used for guiding neonatal brain segmentation. Existing brain atlases were generally constructed by equally averaging all the aligned template images from a population. However, such population-based atlases might not be representative of a testing subject in regions with high inter-subject variability and thus often provide little guidance for segmentation in those regions. Recently, patch-based sparse representation techniques have been proposed to effectively select the most relevant elements from a large group of candidates, which can be used to generate a subject-specific representation with rich local anatomical details for guiding the segmentation. Accordingly, in this paper, we propose a novel patch-driven level set method for the segmentation of neonatal brain MR images that takes advantage of sparse representation techniques. Specifically, we first build a subject-specific atlas from a library of aligned, manually segmented images by using sparse representation in a patch-based fashion. Then, the spatial consistency of the probability maps from the subject-specific atlas is further enforced by considering the similarities of a patch with its neighboring patches. Finally, the probability maps are integrated into a coupled level set framework for more accurate segmentation. The proposed method has been extensively evaluated on 20 training subjects using leave-one-out cross-validation, as well as on 132 additional testing subjects. Our method achieved high accuracy, with Dice ratios of 0.919±0.008 for white matter and 0.901±0.005 for gray matter for the overlap between the automated and manual segmentations in the cortical region.
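The core computational step here, selecting a sparse set of library patches to represent each target patch and propagating their labels, can be sketched with an off-the-shelf LASSO solver. A minimal illustration under assumed inputs (vectorized patches, one-hot tissue labels; all array names are hypothetical, and the paper's coupled level set stage is omitted):

```python
import numpy as np
from sklearn.linear_model import Lasso

def patch_label_propagation(target_patch, library_patches, library_labels, alpha=0.01):
    """Estimate tissue probabilities for one target patch.

    target_patch   : (d,) vectorized intensity patch from the test image
    library_patches: (n, d) vectorized patches from aligned, segmented templates
    library_labels : (n, k) one-hot tissue labels (e.g., WM/GM/CSF) of patch centers
    """
    # Represent the target patch as a sparse non-negative combination of
    # library patches: min ||x - D^T w||^2 + alpha * ||w||_1, with w >= 0.
    lasso = Lasso(alpha=alpha, positive=True, max_iter=5000)
    lasso.fit(library_patches.T, target_patch)
    w = lasso.coef_                      # (n,) sparse weights, mostly zero
    if w.sum() == 0:                     # degenerate case: fall back to uniform
        return np.full(library_labels.shape[1], 1.0 / library_labels.shape[1])
    # Propagate the labels of the selected patches, weighted by their coefficients.
    probs = w @ library_labels
    return probs / probs.sum()

# Toy usage: 5x5x5 patches (d=125), 200 library patches, 3 tissue classes.
rng = np.random.default_rng(0)
D = rng.normal(size=(200, 125))
L = np.eye(3)[rng.integers(0, 3, size=200)]
x = D[17] + 0.05 * rng.normal(size=125)  # target resembles library patch 17
print(patch_label_propagation(x, D, L))
```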


NeuroImage | 2015

LINKS: Learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images

Li Wang; Yaozong Gao; Feng Shi; Gang Li; John H. Gilmore; Weili Lin; Dinggang Shen

Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matter of the infant brain undergoes dramatic changes. In particular, the contrast is inverted around 6-8 months of age, when white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. Most previous studies used a multi-atlas label fusion strategy, which has the limitation of treating the different available image modalities equally and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images for tissue segmentation. Here, the multi-source images initially include only the multi-modality (T1, T2, and FA) images, and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge, where the proposed method was ranked first among all competing methods. Moreover, to alleviate possible anatomical errors, our method can also be combined with an anatomically constrained multi-atlas labeling approach to further improve segmentation accuracy.
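The integration loop described above, a random forest over concatenated multi-source features where the forest's own tissue probability estimates are fed back as additional features, can be sketched voxel-wise as follows. This is a simplified stand-in with hypothetical flat feature arrays; the paper pools 3D Haar-like features from the probability maps rather than using the raw probabilities:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def auto_context_segmentation(feats_train, y_train, feats_test, n_rounds=3):
    """feats_*: (n_voxels, d) features pooled from T1/T2/FA patches;
    y_train: (n_voxels,) tissue labels (0=CSF, 1=GM, 2=WM), all classes present."""
    ctx_train = np.zeros((len(y_train), 3))    # tissue probability context,
    ctx_test = np.zeros((len(feats_test), 3))  # empty in the first round
    for _ in range(n_rounds):
        rf = RandomForestClassifier(n_estimators=100, random_state=0)
        # Append the current probability estimates as extra "source" features.
        rf.fit(np.hstack([feats_train, ctx_train]), y_train)
        # Refresh both context maps for the next round.
        ctx_test = rf.predict_proba(np.hstack([feats_test, ctx_test]))
        ctx_train = rf.predict_proba(np.hstack([feats_train, ctx_train]))
    return ctx_test.argmax(axis=1)
```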


NeuroImage | 2014

Integration of sparse multi-modality representation and anatomical constraint for isointense infant brain MR image segmentation

Li Wang; Feng Shi; Yaozong Gao; Gang Li; John H. Gilmore; Weili Lin; Dinggang Shen

Segmentation of infant brain MR images is challenging due to poor spatial resolution, severe partial volume effect, and the ongoing maturation and myelination processes. During the first year of life, the brain image contrast between white and gray matter undergoes dramatic changes. In particular, the image contrast inverts around 6-8 months of age, when white and gray matter tissues are isointense in T1- and T2-weighted images and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a general framework that adopts sparse representation to fuse multi-modality image information and further incorporates anatomical constraints for brain tissue segmentation. Specifically, we first derive an initial segmentation from a library of aligned images with ground-truth segmentations by using sparse representation in a patch-based fashion for the multi-modality T1, T2, and FA images. The segmentation result is then iteratively refined by integrating the anatomical constraint. The proposed method was evaluated on 22 infant brain MR images acquired at around 6 months of age by using leave-one-out cross-validation, as well as on 10 other unseen testing subjects. Our method achieved high Dice ratios, which measure the volume overlap between automated and manual segmentations: 0.889±0.008 for white matter and 0.870±0.006 for gray matter.
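The Dice ratio reported throughout these evaluations is simply twice the overlap divided by the total volume of the two segmentations. A minimal reference implementation:

```python
import numpy as np

def dice_ratio(auto_seg, manual_seg, label):
    """Dice = 2|A ∩ B| / (|A| + |B|) for one tissue label in two 3D label maps."""
    a = (auto_seg == label)
    b = (manual_seg == label)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Example: identical segmentations give Dice = 1.0.
seg = np.random.default_rng(0).integers(0, 3, size=(32, 32, 32))
print(dice_ratio(seg, seg, label=2))  # -> 1.0
```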


Medical Image Computing and Computer-Assisted Intervention | 2013

Representation Learning: A Unified Deep Learning Framework for Automatic Prostate MR Segmentation

Shu Liao; Yaozong Gao; Aytekin Oto; Dinggang Shen

Image representation plays an important role in medical image analysis. The success of different medical image analysis algorithms depends heavily on how we represent the input data, namely the features used to characterize the input image. In the literature, feature engineering remains an active research topic, and many novel hand-crafted features have been designed, such as Haar wavelets, histograms of oriented gradients, and local binary patterns. However, such features are not designed with the guidance of the underlying dataset at hand. To this end, we argue that the most effective features should be designed in a learning-based manner, namely representation learning, which can be adapted to different patient datasets at hand. In this paper, we introduce a deep learning framework to achieve this goal. Specifically, a stacked independent subspace analysis (ISA) network is adopted to learn the most effective features in a hierarchical and unsupervised manner. The learnt features are adapted to the dataset at hand and encode high-level semantic anatomical information. The proposed method is evaluated on automatic prostate MR segmentation. Experimental results show that significant segmentation accuracy improvements can be achieved by the proposed deep learning method compared to other state-of-the-art segmentation approaches.
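ISA features pool the squared responses of linear filters grouped into subspaces, yielding activations that are invariant to variation within each subspace. A minimal sketch of the feature-extraction step only, assuming the filters W have already been learned (the unsupervised training, which minimizes the summed activations under an orthonormality constraint, and the stacking of a second layer are omitted):

```python
import numpy as np

def isa_features(patches, W, subspace_size=2):
    """patches: (n, d) vectorized image patches;
    W: (m, d) learned filters, grouped consecutively into subspaces."""
    responses = patches @ W.T                        # (n, m) linear filter responses
    n, m = responses.shape
    grouped = responses.reshape(n, m // subspace_size, subspace_size)
    # Each ISA feature is the L2 norm of the responses within one subspace:
    # p_j(x) = sqrt(sum over i in S_j of (w_i^T x)^2)
    return np.sqrt((grouped ** 2).sum(axis=2))

# Toy usage: 100 random 8x8 patches, 16 orthonormal filters -> 8 features each.
rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.normal(size=(64, 64)))
print(isa_features(rng.normal(size=(100, 64)), W[:16]).shape)  # (100, 8)
```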


IEEE Transactions on Medical Imaging | 2013

Sparse Patch-Based Label Propagation for Accurate Prostate Localization in CT Images

Shu Liao; Yaozong Gao; J Lian; Dinggang Shen

In this paper, we propose a new prostate computed tomography (CT) segmentation method for image-guided radiation therapy. The main contributions of our method lie in the following aspects. 1) Instead of using voxel intensity information alone, a patch-based representation in the discriminative feature space selected by logistic sparse LASSO is used as the anatomical signature to deal with the low-contrast problem in prostate CT images. 2) Based on the proposed patch-based signature, a new multi-atlas label fusion method formulated under a sparse representation framework is designed to segment the prostate in a new treatment image, with guidance from the previously segmented images of the same patient. This method estimates the prostate likelihood of each voxel in the new treatment image from its nearby candidate voxels in the previously segmented images, based on the nonlocal mean principle and a sparsity constraint. 3) A hierarchical labeling strategy is further designed to perform label fusion, where voxels with high confidence are labeled first to provide useful context information for labeling the remaining voxels in the same image. 4) An online update mechanism is finally adopted to progressively collect more patient-specific information from newly segmented treatment images of the same patient, for adaptive and more accurate segmentation. The proposed method has been extensively evaluated on a prostate CT image database of 24 patients, each with more than 10 treatment images, and further compared with several state-of-the-art prostate CT segmentation algorithms using various evaluation metrics. Experimental results demonstrate that the proposed method consistently achieves higher segmentation accuracy than all other methods under comparison.
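The anatomical signature in contribution 1) rests on an L1-regularized (sparse) logistic regression that keeps only the patch features that discriminate prostate from background voxels. A hedged sketch using scikit-learn, with hypothetical arrays; the subsequent sparse label fusion, hierarchical labeling, and online update stages are not shown:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_discriminative_features(patch_feats, labels, C=0.1):
    """patch_feats: (n_voxels, d) patch-based features from planning images;
    labels: (n_voxels,) 1 = prostate, 0 = background."""
    # The L1 penalty drives most coefficients to exactly zero, so the
    # surviving features form the discriminative space used as signature.
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    clf.fit(patch_feats, labels)
    selected = np.flatnonzero(clf.coef_[0])
    return clf, selected

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 80))
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)   # only 2 of 80 features matter
_, kept = select_discriminative_features(X, y)
print(kept)  # mostly features 3 and 7 survive the sparsity constraint
```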


IEEE Transactions on Medical Imaging | 2016

Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching

Yanrong Guo; Yaozong Gao; Dinggang Shen

Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications, such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around the prostate boundary, and (2) large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method that unifies deep feature learning with sparse patch matching. First, instead of directly using handcrafted features, we propose to learn the latent feature representation from prostate MR images by a stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than handcrafted features in describing the underlying data. To improve the discriminability of the learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map to achieve the final segmentation. The proposed method has been extensively evaluated on a dataset of 66 T2-weighted prostate MR images. Experimental results show that the deep-learned features are more effective than handcrafted features in guiding MR prostate segmentation, and that our method outperforms other state-of-the-art segmentation methods.
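The SSAE feature learning in the first step can be approximated by greedy layer-wise auto-encoder training: each layer learns to reconstruct its input, and its hidden activations feed the next layer. A rough sketch using MLPRegressor as a plain auto-encoder; note this omits the sparsity penalty on hidden activations that an SSAE uses, as well as the supervised fine-tuning, and all names are illustrative:

```python
import numpy as np
from scipy.special import expit  # logistic sigmoid
from sklearn.neural_network import MLPRegressor

def train_stacked_autoencoder(X, layer_sizes=(64, 32)):
    """Greedy layer-wise training: each auto-encoder reconstructs its input."""
    encoders, H = [], X
    for size in layer_sizes:
        ae = MLPRegressor(hidden_layer_sizes=(size,), activation="logistic",
                          max_iter=300, random_state=0)
        ae.fit(H, H)                                 # target = input
        W, b = ae.coefs_[0], ae.intercepts_[0]       # keep the encoder half
        encoders.append((W, b))
        H = expit(H @ W + b)                         # hidden code feeds next layer
    return encoders

def encode(X, encoders):
    for W, b in encoders:
        X = expit(X @ W + b)
    return X

# Toy usage: learn a 32-dim code for 13x13 MR patches (d = 169).
patches = np.random.default_rng(0).normal(size=(300, 169))
codes = encode(patches, train_stacked_autoencoder(patches))
print(codes.shape)  # (300, 32)
```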


IEEE Transactions on Medical Imaging | 2016

Estimating CT Image From MRI Data Using Structured Random Forest and Auto-Context Model

Tri Huynh; Yaozong Gao; Jiayin Kang; Li Wang; Pei Zhang; J Lian; Dinggang Shen

Computed tomography (CT) imaging is an essential tool in various clinical diagnoses and radiotherapy treatment planning. Since CT image intensities are directly related to positron emission tomography (PET) attenuation coefficients, they are indispensable for attenuation correction (AC) of PET images. However, due to the relatively high dose of radiation exposure in a CT scan, it is advisable to limit the acquisition of CT images. In addition, in the new combined PET and magnetic resonance (MR) imaging scanners, only MR images are available, which are unfortunately not directly applicable to AC. These issues strongly motivate the development of methods for reliably estimating a CT image from the corresponding MR image of the same subject. In this paper, we propose a learning-based method to tackle this challenging problem. Specifically, we first partition a given MR image into a set of patches. Then, for each patch, we use a structured random forest to directly predict a CT patch as a structured output, where a new ensemble model is also used to ensure robust prediction. Image features are crafted to achieve multi-level sensitivity, with spatial information integrated through only rigid-body alignment to help avoid the error-prone inter-subject deformable registration. Moreover, we use an auto-context model to iteratively refine the prediction. Finally, we combine all of the predicted CT patches to obtain the final prediction for the given MR image. We demonstrate the efficacy of our method on two datasets: human brain and prostate images. Experimental results show that our method can accurately predict CT images in various scenarios, even for images undergoing large shape variation, and that it outperforms two state-of-the-art methods.
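The patch-wise prediction can be imitated with a multi-output regression forest: MR patch features in, a whole flattened CT patch out, with overlapping predictions averaged in the volume. This is a simplified stand-in for the paper's structured random forest and its custom ensemble model, on hypothetical toy data, with the auto-context iteration reduced to a comment:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Train: each sample maps an MR patch (flattened) to the CT patch at the
# same rigidly aligned location. The arrays below are synthetic surrogates.
rng = np.random.default_rng(0)
mr_patches = rng.normal(size=(1000, 125))               # 5x5x5 MR patches
ct_patches = mr_patches @ rng.normal(size=(125, 125))   # surrogate CT targets

forest = RandomForestRegressor(n_estimators=50, random_state=0)
forest.fit(mr_patches, ct_patches)          # structured (multi-output) target

# Predict CT patches for a new subject.
pred_patches = forest.predict(rng.normal(size=(10, 125)))   # (10, 125)
# In the full method, the predicted patches are placed back into the volume,
# overlapping voxels are averaged, and the resulting tentative CT map is fed
# back as extra context features for the next auto-context iteration.
print(pred_patches.shape)
```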


IEEE Transactions on Medical Imaging | 2014

Learning to Rank Atlases for Multiple-Atlas Segmentation

Gerard Sanroma; Guorong Wu; Yaozong Gao; Dinggang Shen

Recently, multiple-atlas segmentation (MAS) has achieved great success in the medical imaging area. The key assumption is that multiple atlases have a greater chance of correctly labeling a target image than a single atlas. However, the problem of atlas selection remains largely unexplored. Traditionally, image similarity is used to select a set of atlases. Unfortunately, this heuristic criterion is not necessarily related to the final segmentation performance. To solve this seemingly simple but critical problem, we propose a learning-based atlas selection method to pick the atlases that are most likely to lead to an accurate segmentation. Our main idea is to learn the relationship between the pairwise appearance of observed instances (i.e., a pair of atlas and target images) and their final labeling performance (e.g., measured by the Dice ratio). In this way, we select the best atlases based on their expected labeling accuracy. Our atlas selection method is general enough to be integrated with any existing MAS method. We show the advantages of our atlas selection method in an extensive experimental evaluation on the ADNI, SATA, IXI, and LONI LPBA40 datasets. As shown in the experiments, our method can boost the performance of three widely used MAS methods, outperforming other learning-based and image-similarity-based atlas selection methods.
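The selection idea reduces to a prediction task: from appearance features of an (atlas, target) pair, estimate the Dice ratio that fusing that atlas would yield, then keep the top-ranked atlases. A pointwise sketch (the paper learns a ranking; a plain regressor on hypothetical features stands in here):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def train_atlas_ranker(pair_feats, observed_dice):
    """pair_feats: (n_pairs, d) appearance features of atlas-target pairs;
    observed_dice: (n_pairs,) labeling accuracy measured on training data."""
    model = GradientBoostingRegressor(random_state=0)
    model.fit(pair_feats, observed_dice)
    return model

def select_atlases(model, candidate_feats, k=5):
    """Rank candidate atlases for a new target by expected Dice; keep top k."""
    expected = model.predict(candidate_feats)
    return np.argsort(expected)[::-1][:k], expected

rng = np.random.default_rng(0)
model = train_atlas_ranker(rng.normal(size=(200, 10)), rng.uniform(0.6, 0.95, 200))
top_idx, scores = select_atlases(model, rng.normal(size=(30, 10)))
print(top_idx)  # indices of the 5 most promising atlases for this target
```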


Medical Image Computing and Computer-Assisted Intervention | 2013

Unsupervised Deep Feature Learning for Deformable Registration of MR Brain Images

Guorong Wu; Minjeong Kim; Qian Wang; Yaozong Gao; Shu Liao; Dinggang Shen

Establishing accurate anatomical correspondences is critical for medical image registration. Although many hand-engineered features have been proposed for correspondence detection in various registration applications, no features are general enough to work well for all image data. Many learning-based methods have been developed to help select the best features for guiding correspondence detection across subjects with large anatomical variations, but they are often limited by requiring known correspondences (often presumably estimated by certain registration methods) as the ground truth for training. To address this limitation, we propose an unsupervised deep learning approach to directly learn the basis filters that can effectively represent all observed image patches. The coefficients of these learnt basis filters in representing a particular image patch can then be regarded as the morphological signature for correspondence detection during image registration. Specifically, a stacked two-layer convolutional network is constructed to seek hierarchical representations for each image patch, where the high-level features are inferred from the responses of the low-level network. By replacing the hand-engineered features with our learnt data-adaptive features for image registration, we achieve promising registration results, demonstrating that a general approach can be built to improve image registration by using data-adaptive features obtained through unsupervised deep learning.
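Once basis filters are learned, registration-time correspondence detection amounts to comparing patch signatures, i.e., the coefficient vectors of each patch under those filters. A minimal sketch in which PCA stands in for the unsupervised filter learning (an acknowledged simplification of the paper's stacked two-layer convolutional network):

```python
import numpy as np
from sklearn.decomposition import PCA

# Learn basis filters from all observed patches (PCA here; the paper uses
# an unsupervised stacked convolutional network instead).
rng = np.random.default_rng(0)
train_patches = rng.normal(size=(2000, 121))          # 11x11 patches, flattened
pca = PCA(n_components=32).fit(train_patches)

def signature(patches):
    """Morphological signature = coefficients under the learnt basis filters."""
    return pca.transform(patches)

def best_correspondence(ref_patch, candidate_patches):
    """Match a reference patch to the candidate with the closest signature."""
    d = np.linalg.norm(signature(candidate_patches) - signature(ref_patch[None]), axis=1)
    return int(np.argmin(d))

candidates = rng.normal(size=(50, 121))
noisy = candidates[7] + 0.01 * rng.normal(size=121)
print(best_correspondence(noisy, candidates))  # -> 7
```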


Medical Physics | 2014

Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

Li Wang; Ken Chung Chen; Yaozong Gao; Feng Shi; Shu Liao; Gang Li; Steve Guofang Shen; Jin Yan; Philip K. M. Lee; Ben Chow; Nancy X. Liu; James J. Xia; Dinggang Shen

PURPOSE: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step in generating three-dimensional (3D) models for the diagnosis and treatment planning of these patients. However, due to poor image quality, including a very low signal-to-noise ratio and widespread artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems.

METHODS: The authors propose a fully automated CBCT segmentation method that uses patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject, and a sparse-based label propagation strategy is then employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation.

RESULTS: The proposed method has been evaluated on a dataset of 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparison with the traditional registration strategy and a population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy among the compared state-of-the-art segmentation methods.

CONCLUSIONS: The authors have proposed a new CBCT segmentation method based on patch-based sparse representation and convex optimization, which achieves considerably accurate segmentation results on the 15-patient dataset.
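The final stage combines the patient-specific atlas (as a spatial prior) with an intensity likelihood in a maximum a posteriori framework. A voxel-wise sketch with Gaussian class likelihoods; the convex relaxation and spatial regularization of the actual framework are omitted, so this reduces to independent per-voxel MAP decisions on hypothetical inputs:

```python
import numpy as np
from scipy.stats import norm

def map_labeling(intensities, priors, means, stds, eps=1e-8):
    """intensities: (n_voxels,) CBCT values; priors: (n_voxels, k) patient-
    specific atlas probabilities for k classes (e.g., soft tissue/mandible/
    maxilla); means, stds: (k,) Gaussian intensity model per class."""
    # log posterior (up to a constant) = log likelihood + log prior
    loglik = norm.logpdf(intensities[:, None], loc=means, scale=stds)
    return np.argmax(loglik + np.log(priors + eps), axis=1)

# Toy usage with 2 classes and a flat prior.
rng = np.random.default_rng(0)
I = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)])
prior = np.full((200, 2), 0.5)
labels = map_labeling(I, prior, means=np.array([0.0, 5.0]), stds=np.array([1.0, 1.0]))
print(labels[:5], labels[-5:])  # first voxels -> class 0, last -> class 1
```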

Collaboration


Dive into Yaozong Gao's collaborations.

Top Co-Authors

Dinggang Shen, University of North Carolina at Chapel Hill
Li Wang, University of North Carolina at Chapel Hill
Guorong Wu, University of North Carolina at Chapel Hill
Feng Shi, University of North Carolina at Chapel Hill
Gang Li, University of North Carolina at Chapel Hill
Weili Lin, University of North Carolina at Chapel Hill
Shu Liao, University of North Carolina at Chapel Hill
Yanrong Guo, University of North Carolina at Chapel Hill
Qian Wang, Shanghai Jiao Tong University
Sang Hyun Park, University of North Carolina at Chapel Hill