Publication


Featured research published by Meiyan Huang.


IEEE Transactions on Biomedical Engineering | 2014

Brain Tumor Segmentation Based on Local Independent Projection-Based Classification

Meiyan Huang; Wei Yang; Yao Wu; Jun Jiang; Wufan Chen; Qianjin Feng

Brain tumor segmentation is an important procedure for early tumor diagnosis and radiotherapy planning. Although numerous brain tumor segmentation methods have been presented, enhancing tumor segmentation methods is still challenging because brain tumor MRI images exhibit complex characteristics, such as high diversity in tumor appearance and ambiguous tumor boundaries. To address this problem, we propose a novel automatic tumor segmentation method for MRI images. This method treats tumor segmentation as a classification problem, using the local independent projection-based classification (LIPC) method to classify each voxel into different classes. A novel classification framework is derived by introducing the local independent projection into the classical classification model. Locality is important in calculating the local independent projections for LIPC; it also motivates the choice of local anchor embedding, which is more applicable than other coding methods for solving the linear projection weights. Moreover, LIPC considers the data distribution of different classes by learning a softmax regression model, which can further improve classification performance. In this study, 80 brain tumor MRI images with ground truth data are used as training data and 40 images without ground truth data are used as testing data. The segmentation results of the testing data are evaluated by an online evaluation tool. The average Dice similarities of the proposed method for segmenting the complete tumor, tumor core, and contrast-enhancing tumor on real patient data are 0.84, 0.685, and 0.585, respectively. These results are comparable to those of other state-of-the-art methods.
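
The voxel-labeling core of LIPC, assigning each voxel to the class whose local dictionary reconstructs its feature vector best, can be illustrated in a few lines. The sketch below is a simplified stand-in, assuming per-class training dictionaries: plain least squares replaces local anchor embedding, the softmax regression stage is omitted, and all names and parameters are illustrative rather than the authors' implementation.

```python
# Hedged sketch of local independent projection-based classification
# (LIPC); illustrative only, not the authors' code.
import numpy as np

def lipc_label(x, class_dicts, k=10):
    """Assign x to the class whose k nearest atoms reconstruct it best."""
    errors = []
    for D in class_dicts:                # D: (n_atoms, n_features)
        # locality: keep only the k atoms nearest to x
        idx = np.argsort(np.linalg.norm(D - x, axis=1))[:k]
        A = D[idx].T                     # (n_features, k)
        # least-squares weights (stand-in for local anchor embedding,
        # which would add simplex constraints on w)
        w, *_ = np.linalg.lstsq(A, x, rcond=None)
        errors.append(np.linalg.norm(x - A @ w))
    return int(np.argmin(errors))        # smallest projection error wins

# toy usage: two classes of 20-D features, test sample near class 1
rng = np.random.default_rng(0)
dicts = [rng.normal(0, 1, (50, 20)), rng.normal(3, 1, (50, 20))]
print(lipc_label(rng.normal(3, 1, 20), dicts))  # likely prints 1
```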


Computerized Medical Imaging and Graphics | 2013

3D brain tumor segmentation in multimodal MR images based on learning population- and patient-specific feature sets

Jun Jiang; Yao Wu; Meiyan Huang; Wei Yang; Wufan Chen; Qianjin Feng

Brain tumor segmentation is a clinical requirement for brain tumor diagnosis and radiotherapy planning. Automating this process is challenging due to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this paper, we propose a method that constructs a graph by learning population- and patient-specific feature sets of multimodal magnetic resonance (MR) images and applies a graph cut to achieve the final segmentation. The probability that each pixel belongs to the foreground (tumor) or the background is estimated by global and custom classifiers, trained on the population- and patient-specific feature sets, respectively. The proposed method is evaluated using 23 glioma image sequences, and the segmentation results are compared with other approaches. The encouraging evaluation results, i.e., DSC (84.5%), Jaccard (74.1%), sensitivity (87.2%), and specificity (83.1%), show that the proposed method can effectively make use of both population- and patient-specific information.
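
As a rough illustration of how the two classifiers could feed the graph cut, the sketch below fuses a population-trained and a patient-specific foreground-probability map into negative log-likelihood unary costs; the linear fusion rule, the weight alpha, and all names are assumptions rather than the paper's exact formulation.

```python
# Hedged sketch: fuse two tumor-probability maps into graph-cut
# unary terms; illustrative, not the paper's formulation.
import numpy as np

def unary_terms(p_population, p_patient, alpha=0.5, eps=1e-6):
    """Return per-pixel costs for labeling tumor and background."""
    p_fg = alpha * p_population + (1 - alpha) * p_patient
    cost_fg = -np.log(p_fg + eps)        # low cost where tumor is likely
    cost_bg = -np.log(1 - p_fg + eps)
    return cost_fg, cost_bg

# these unary costs would be combined with pairwise boundary terms
# and passed to a standard max-flow/min-cut solver (e.g., PyMaxflow).
```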


IEEE Transactions on Medical Imaging | 2014

Prostate Segmentation Based on Variant Scale Patch and Local Independent Projection

Yao Wu; Guoqing Liu; Meiyan Huang; Jiacheng Guo; Jun Jiang; Wei Yang; Wufan Chen; Qianjin Feng

Accurate segmentation of the prostate in computed tomography (CT) images is important in image-guided radiotherapy; however, the task remains difficult. In this study, an automatic framework is designed for prostate segmentation in CT images. We propose a novel image feature extraction method, namely, the variant scale patch, which provides rich image information in a low-dimensional feature space. We assume that the samples from different classes lie on different nonlinear submanifolds and design a new segmentation criterion called local independent projection (LIP). In our method, a dictionary containing training samples is constructed, and an online update strategy keeps this dictionary current with the latest image information. In the proposed LIP, locality is emphasized rather than sparsity, and local anchor embedding is performed to determine the dictionary coefficients. Several morphological operations are performed to refine the achieved results. The proposed method has been evaluated on 330 3-D images of 24 patients. Results show that the proposed method is robust and effective in segmenting the prostate in CT images.
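
The sketch below shows one plausible reading of the variant scale patch: neighborhoods of growing radius around a voxel, each subsampled to the same 3x3x3 grid, so spatial context grows while the feature dimension stays fixed. The radii and subsampling scheme are assumptions made for illustration, not the paper's settings.

```python
# Hedged sketch of a multi-scale ("variant scale") patch descriptor;
# radii and subsampling are illustrative assumptions.
import numpy as np

def variant_scale_patch(img, center, radii=(1, 2, 4)):
    """Stack neighborhoods of radius r, each subsampled with step r,
    so every scale contributes a 3x3x3 block (27 values)."""
    z, y, x = center
    feats = []
    for r in radii:
        block = img[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
        feats.append(block[::r, ::r, ::r].ravel())
    return np.concatenate(feats)

img = np.random.rand(32, 32, 32)
print(variant_scale_patch(img, (16, 16, 16)).shape)  # (81,)
```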


NeuroImage | 2014

Brain extraction based on locally linear representation-based classification.

Meiyan Huang; Wei Yang; Jun Jiang; Yao Wu; Yu Zhang; Wufan Chen; Qianjin Feng

Brain extraction is an important procedure in brain image analysis. Although numerous brain extraction methods have been presented, enhancing brain extraction methods remains challenging because brain MRI images exhibit complex characteristics, such as anatomical variability and intensity differences across different sequences and scanners. To address this problem, we present a Locally Linear Representation-based Classification (LLRC) method for brain extraction. A novel classification framework is derived by introducing the locally linear representation into the classical classification model. Under this framework, a common label fusion approach can be considered a special case and thoroughly interpreted. Locality is important in calculating the fusion weights for LLRC; it also makes Local Anchor Embedding more applicable than other linear representation approaches for solving the locally linear coefficients. Moreover, LLRC provides a way to learn the optimal classification scores of the training samples in the dictionary to obtain accurate classification. The International Consortium for Brain Mapping and the Alzheimer's Disease Neuroimaging Initiative databases were used to build a training dataset containing 70 scans. To evaluate the proposed method, we used four publicly available datasets (IBSR1, IBSR2, LPBA40, and ADNI3T, with a total of 241 scans). Experimental results demonstrate that the proposed method outperforms four common brain extraction methods (BET, BSE, GCUT, and ROBEX) and is comparable to BEaST, while being more accurate on some datasets.
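
A minimal sketch of the label-fusion idea behind LLRC, assuming a patch dictionary with brain/non-brain labels: reconstruct the test patch from its nearest dictionary patches, then transfer the reconstruction weights to the labels. Non-negative least squares stands in here for Local Anchor Embedding, and the learned classification scores are reduced to fixed binary labels.

```python
# Hedged sketch of locally linear representation-based label fusion;
# NNLS is a stand-in for Local Anchor Embedding.
import numpy as np
from scipy.optimize import nnls

def llrc_score(x, D, labels, k=15):
    """x: (d,) test patch; D: (n, d) dictionary; labels: (n,) in {0, 1}."""
    idx = np.argsort(np.linalg.norm(D - x, axis=1))[:k]   # locality
    w, _ = nnls(D[idx].T, x)          # non-negative reconstruction weights
    w /= w.sum() + 1e-12              # normalize to a weighted vote
    return float(w @ labels[idx])     # > 0.5 suggests a brain voxel
```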


NeuroImage | 2015

FVGWAS: Fast voxelwise genome wide association analysis of large-scale imaging genetic data

Meiyan Huang; Thomas E. Nichols; Chao Huang; Yang Yu; Zhaohua Lu; Rebecca C. Knickmeyer; Qianjin Feng; Hongtu Zhu

Large-scale imaging genetic studies are increasingly being conducted to collect rich sets of imaging, genetic, and clinical data to detect putative genes for complexly inherited neuropsychiatric and neurodegenerative disorders. Several major big-data challenges arise from testing genome-wide associations (NC > 12 million known variants) with signals at millions of locations (NV ≈ 10^6) in the brain from thousands of subjects (n ≈ 10^3). The aim of this paper is to develop a Fast Voxelwise Genome Wide Association analysiS (FVGWAS) framework to efficiently carry out whole-genome analyses of whole-brain data. FVGWAS consists of three components: a heteroscedastic linear model, a global sure independence screening (GSIS) procedure, and a detection procedure based on wild bootstrap methods. Specifically, for standard linear association, the computational complexity is O(n NV NC) for the voxelwise genome-wide association analysis (VGWAS) method, compared with O((NC + NV) n^2) for FVGWAS. Simulation studies show that FVGWAS is an efficient method of searching for sparse signals in an extremely large search space while controlling the family-wise error rate. Finally, we have successfully applied FVGWAS to a large-scale imaging genetic analysis of ADNI data with 708 subjects, 193,275 voxels in RAVENS maps, and 501,584 SNPs; the total processing time was 203,645 s on a single CPU. FVGWAS may be a valuable statistical toolbox for large-scale imaging genetic analysis as the field rapidly advances with ultra-high-resolution imaging and whole-genome sequencing.
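
The screening idea can be pictured as scoring every SNP with a cheap statistic aggregated over all voxels and keeping only a small top fraction for exact follow-up testing. The sketch below is schematic: it uses a squared cross-covariance score rather than the paper's GSIS statistic, and the names are illustrative.

```python
# Schematic sketch of SNP screening for voxelwise GWAS; not the
# paper's GSIS statistic.
import numpy as np

def screen_snps(Y, G, keep=0.001):
    """Y: (n, n_voxels) imaging traits; G: (n, n_snps) genotypes.
    Returns indices of the SNPs with the largest aggregate score."""
    Yc = Y - Y.mean(axis=0)
    Gc = G - G.mean(axis=0)
    # all SNP-by-voxel cross-covariances in one matrix product;
    # at real scale this would be computed in SNP chunks
    C = Gc.T @ Yc / len(Y)                  # (n_snps, n_voxels)
    score = (C ** 2).sum(axis=1)            # aggregate over voxels
    m = max(1, int(keep * G.shape[1]))
    return np.argsort(score)[-m:]           # top candidates only
```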


Computational and Mathematical Methods in Medicine | 2012

Retrieval of brain tumors with region-specific bag-of-visual-words representations in contrast-enhanced MRI images.

Meiyan Huang; Wei Yang; Mei Yu; Zhentai Lu; Qianjin Feng; Wufan Chen

A content-based image retrieval (CBIR) system is proposed for the retrieval of T1-weighted contrast-enhanced MRI (CE-MRI) images of brain tumors. In this CBIR system, spatial information in the bag-of-visual-words model and domain knowledge of the brain tumor images are incorporated into the representation of brain tumor images. A similarity metric is learned through a distance metric learning algorithm to reduce the gap between the visual features and the semantic concepts in an image. When a query image is submitted to the CBIR system, the learned similarity metric is used to measure the similarity between two images and retrieve the most similar images in the dataset. The retrieval performance of the proposed method is evaluated on a brain CE-MRI dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). The experimental results demonstrate that the mean average precision values of the proposed method range from 90.4% to 91.5% for different views (transverse, coronal, and sagittal), with an average value of 91.0%.
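
To make the representation concrete, the sketch below builds a region-specific bag-of-visual-words signature: local descriptors are quantized against a codebook, and separate histograms are pooled inside and outside the tumor region. The two-region pooling and the normalization are assumptions made for illustration.

```python
# Hedged sketch of a region-specific bag-of-visual-words signature;
# the pooling scheme is an assumption, not the paper's exact design.
import numpy as np

def bovw_signature(features, tumor_mask, codebook):
    """features: (n, d) local descriptors; tumor_mask: (n,) bool;
    codebook: (k, d) visual words learned beforehand (e.g., k-means)."""
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                # nearest visual word
    k = len(codebook)
    h_tumor = np.bincount(words[tumor_mask], minlength=k)
    h_other = np.bincount(words[~tumor_mask], minlength=k)
    sig = np.concatenate([h_tumor, h_other]).astype(float)
    return sig / (sig.sum() + 1e-12)         # L1-normalized signature
```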


PLOS ONE | 2016

Retrieval of Brain Tumors by Adaptive Spatial Pooling and Fisher Vector Representation.

Jun Cheng; Wei Yang; Meiyan Huang; Wei Huang; Jun Jiang; Yujia Zhou; Ru Yang; Jie Zhao; Yanqiu Feng; Qianjin Feng; Wufan Chen

Content-based image retrieval (CBIR) techniques have gained increasing popularity in the medical field because they can use numerous and valuable archived images to support clinical decisions. In this paper, we concentrate on developing a CBIR system for retrieving brain tumors in T1-weighted contrast-enhanced MRI images. Specifically, when the user roughly outlines the tumor region of a query image, brain tumor images in the database of the same pathological type are expected to be returned. We propose a novel feature extraction framework to improve the retrieval performance. The proposed framework consists of three steps. First, we augment the tumor region and use the augmented tumor region as the region of interest to incorporate informative contextual information. Second, the augmented tumor region is split into subregions by an adaptive spatial division method based on intensity orders; within each subregion, we extract raw image patches as local features. Third, we apply the Fisher kernel framework to aggregate the local features of each subregion into a single vector representation and concatenate these per-subregion vectors to obtain an image-level signature. After feature extraction, a closed-form metric learning algorithm is applied to measure the similarity between the query image and database images. Extensive experiments are conducted on a large dataset of 3604 images with three types of brain tumors, namely, meningiomas, gliomas, and pituitary tumors. The mean average precision can reach 94.68%. Experimental results demonstrate the power of the proposed algorithm against related state-of-the-art methods on the same dataset.
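
The third step, Fisher-kernel aggregation of one subregion's local features, can be sketched with a scikit-learn Gaussian mixture as the generative model. This simplified version keeps only the gradients with respect to the component means and assumes a diagonal-covariance GMM; it is schematic rather than the paper's full pipeline.

```python
# Simplified Fisher-vector aggregation (mean gradients only);
# schematic, not the paper's exact implementation.
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(patches, gmm):
    """patches: (n, d) local features from one subregion; gmm must be
    fitted with covariance_type="diag", e.g.
    GaussianMixture(16, covariance_type="diag").fit(train_patches)."""
    q = gmm.predict_proba(patches)                  # (n, K) assignments
    diff = patches[:, None, :] - gmm.means_[None]   # (n, K, d)
    sig = np.sqrt(gmm.covariances_)                 # (K, d) std devs
    fv = (q[..., None] * diff / sig[None]).sum(0)   # (K, d)
    fv /= len(patches) * np.sqrt(gmm.weights_)[:, None]
    return fv.ravel()                               # K*d-dim signature
```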


Medical Image Computing and Computer-Assisted Intervention | 2015

Prediction of CT Substitutes from MR Images Based on Local Sparse Correspondence Combination

Yao Wu; Wei Yang; Lijun Lu; Zhentai Lu; Liming Zhong; Ru Yang; Meiyan Huang; Yanqiu Feng; Wufan Chen; Qianjin Feng

Prediction of CT substitutes from MR images is clinically desired for dose planning in MR-based radiation therapy and attenuation correction in PET/MR. Considering that there is no global relation between intensities in MR and CT images, we propose local sparse correspondence combination (LSCC) for the prediction of CT substitutes from MR images. In LSCC, we assume that MR and CT patches are located on two nonlinear manifolds and the mapping from the MR manifold to the CT manifold approximates a diffeomorphism under a local constraint. Several techniques are used to constrain locality: (1) for each patch in the testing MR image, a local search window is used to extract patches from the training MR/CT pairs to construct MR and CT dictionaries; (2) k-Nearest Neighbors is used to constrain locality in the MR dictionary; (3) outlier detection is performed to constrain locality in the CT dictionary; (4) Local Anchor Embedding is used to solve the MR dictionary coefficients when representing the MR testing sample. Under these local constraints, the coefficient weights are linearly transferred from MR to CT and used to combine the samples in the CT dictionary to generate CT predictions. The proposed method has been evaluated for brain images on a dataset of 13 subjects. Each subject has T1- and T2-weighted MR images as well as a CT image, for a total of 39 images. Results show the effectiveness of the proposed method, which provides CT predictions with a mean absolute error of 113.8 HU compared with real CT images.
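
The weight-transfer step at the heart of LSCC can be sketched compactly: represent a test MR patch with its nearest training MR patches, then reuse the same weights to combine the paired CT patches. In the sketch below, clipped least squares stands in for Local Anchor Embedding, and the search-window and outlier-detection steps are omitted.

```python
# Hedged sketch of MR-to-CT weight transfer; least squares stands in
# for Local Anchor Embedding.
import numpy as np

def predict_ct_patch(mr_patch, mr_dict, ct_dict, k=10):
    """mr_dict, ct_dict: (n_atoms, d) paired MR and CT patches."""
    idx = np.argsort(np.linalg.norm(mr_dict - mr_patch, axis=1))[:k]
    w, *_ = np.linalg.lstsq(mr_dict[idx].T, mr_patch, rcond=None)
    w = np.clip(w, 0, None)
    w /= w.sum() + 1e-12               # crude convex-combination weights
    return w @ ct_dict[idx]            # same weights applied on CT side
```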


Scientific Reports | 2017

Hippocampus Segmentation Based on Local Linear Mapping

Shumao Pang; Jun Jiang; Zhentai Lu; Xueli Li; Wei Yang; Meiyan Huang; Yu Zhang; Yanqiu Feng; Wenhua Huang; Qianjin Feng

We propose local linear mapping (LLM), a novel distance field (DF) fusion framework for automatic hippocampus segmentation. A k-means clustering method is proposed for constructing magnetic resonance (MR) and DF dictionaries. In LLM, we assume that the MR and DF samples are located on two nonlinear manifolds and that the mapping from the MR manifold to the DF manifold is differentiable and locally linear. We combine the MR dictionary using local linear representation to represent the test sample, and combine the DF dictionary using the corresponding coefficients derived from the local linear representation procedure to predict the DF of the test sample. We then merge the overlapping predicted DF patches to obtain the DF value of each point in the test image via a confidence-based weighted average method. This approach enables us to estimate the label of the test image according to the predicted DF. The proposed method was evaluated on brain images of 35 subjects obtained from the SATA dataset. Results indicate the effectiveness of the proposed method, which yields mean Dice similarity coefficients of 0.8697, 0.8770, and 0.8734 for the left, right, and bilateral hippocampus, respectively.
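
The final fusion step, merging overlapping predicted DF patches by confidence-weighted averaging and thresholding the merged field, might look like the sketch below. The confidence values are taken as given, and the sign convention (negative DF inside the hippocampus) is an assumption for illustration.

```python
# Hedged sketch of confidence-weighted fusion of distance-field
# patches; the sign convention is an assumption.
import numpy as np

def fuse_df_patches(shape, patches, origins, confidences, size):
    """patches: list of (size, size, size) DF predictions; origins:
    matching (z, y, x) corners; confidences: one weight per patch."""
    acc = np.zeros(shape)
    wsum = np.zeros(shape)
    for df, (z, y, x), c in zip(patches, origins, confidences):
        sl = (slice(z, z + size), slice(y, y + size), slice(x, x + size))
        acc[sl] += c * df                 # confidence-weighted sum
        wsum[sl] += c
    df_vol = acc / np.maximum(wsum, 1e-12)
    return df_vol < 0                     # label: inside the hippocampus
```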


International Symposium on Biomedical Imaging | 2016

Predict CT image from MRI data using KNN-regression with learned local descriptors

Liming Zhong; Liyan Lin; Zhentai Lu; Yao Wu; Zixiao Lu; Meiyan Huang; Wei Yang; Qianjin Feng

Accurate prediction of CT images from MRI data is clinically desired for attenuation correction in PET/MR hybrid imaging systems and for dose planning in MR-based radiation therapy. We present a k-nearest neighbor (KNN) regression method to predict CT images from MRI data. In this method, the nearest neighbors of each MR image patch are searched within a constrained spatial range. To improve the accuracy and efficiency of CT prediction, we propose to use supervised descriptor learning based on low-rank approximation and manifold regularization to optimize the local descriptor of an MR image patch and to reduce its dimensionality. The proposed method is evaluated on a dataset consisting of 13 subjects with paired brain MRI and CT images. Results show that the proposed method can effectively predict CT images from MRI data and outperforms two state-of-the-art methods for CT prediction.
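
A minimal sketch of the prediction step, KNN regression in the learned descriptor space, is given below. The projection matrix W is treated as the output of the supervised descriptor learning stage, and the distance-weighted averaging is an assumed design choice rather than the paper's stated rule.

```python
# Hedged sketch of KNN regression for CT intensity prediction; the
# learned projection W is assumed given.
import numpy as np

def knn_predict_ct(desc, train_desc, train_ct, W, k=5):
    """desc: (d,) MR patch descriptor; W: (d, d') learned projection;
    train_desc: (n, d) training descriptors; train_ct: (n,) CT values."""
    q = desc @ W                              # map into learned space
    dist = np.linalg.norm(train_desc @ W - q, axis=1)
    idx = np.argsort(dist)[:k]                # k nearest neighbors
    w = 1.0 / (dist[idx] + 1e-6)              # closer neighbors weigh more
    return float(w @ train_ct[idx] / w.sum())
```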

Collaboration


Dive into Meiyan Huang's collaborations.

Top Co-Authors

Qianjin Feng, Southern Medical University
Wei Yang, Southern Medical University
Wufan Chen, Southern Medical University
Yao Wu, Southern Medical University
Zhentai Lu, Southern Medical University
Jun Jiang, Southern Medical University
Yanqiu Feng, Southern Medical University
Liming Zhong, Southern Medical University
Mei Yu, Southern Medical University
Guoqing Liu, Southern Medical University