Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Chen Zu is active.

Publication


Featured research published by Chen Zu.


Brain Imaging and Behavior | 2016

Label-aligned multi-task feature learning for multimodal classification of Alzheimer’s disease and mild cognitive impairment

Chen Zu; Biao Jie; Mingxia Liu; Songcan Chen; Dinggang Shen; Daoqiang Zhang

Multimodal classification methods using different modalities of imaging and non-imaging data have recently shown great advantages over traditional single-modality-based ones for diagnosis and prognosis of Alzheimer’s disease (AD), as well as its prodromal stage, i.e., mild cognitive impairment (MCI). However, to the best of our knowledge, most existing methods focus on mining the relationship across multiple modalities of the same subjects, while ignoring the potentially useful relationship across different subjects. Accordingly, in this paper, we propose a novel learning method for multimodal classification of AD/MCI by fully exploring the relationships across both modalities and subjects. Specifically, our proposed method consists of two sequential components, i.e., label-aligned multi-task feature selection and multimodal classification. In the first step, feature selection from each modality is treated as a separate learning task and a group sparsity regularizer is imposed to jointly select a subset of relevant features. Furthermore, to utilize the discriminative information among labeled subjects, a new label-aligned regularization term is added to the objective function of standard multi-task feature selection, where label alignment means that all multi-modality subjects with the same class labels should be closer in the new feature-reduced space. In the second step, a multi-kernel support vector machine (SVM) is adopted to fuse the selected features from multi-modality data for final classification. To validate our method, we perform experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using baseline MRI and FDG-PET imaging data. The experimental results demonstrate that our proposed method achieves better classification performance compared with several state-of-the-art methods for multimodal classification of AD/MCI.
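As a sketch of the second step (illustrative notation only, not necessarily the paper's exact formulation), a multi-kernel SVM fuses the M modalities through a convex combination of per-modality kernels:

    k(x_i, x_j) = \sum_{m=1}^{M} \beta_m \, k_m\big(x_i^{(m)}, x_j^{(m)}\big), \qquad \beta_m \ge 0, \ \sum_{m=1}^{M} \beta_m = 1,

where x_i^{(m)} denotes the selected features of subject i from modality m, k_m is a base kernel (e.g., linear or RBF) computed on that modality, and the combined kernel k is plugged into a standard SVM; the weights \beta_m are typically chosen on validation data.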


Pattern Recognition | 2017

Robust multi-atlas label propagation by deep sparse representation

Chen Zu; Zhengxia Wang; Daoqiang Zhang; Peipeng Liang; Yonghong Shi; Dinggang Shen; Guorong Wu

Recently, multi-atlas patch-based label fusion has achieved many successes in the medical imaging area. The basic assumption in the current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing the labels of atlas image patches with similar anatomical structures. However, such an assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted due to noise and artifact; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced such that the majority patterns can dominate the label fusion result over other minority patterns. The violation of the above basic assumptions could significantly undermine the label fusion accuracy. To overcome these issues, we first form a label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patch-wise residual information at different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. The representation of the target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns, and also using all residual patterns across groups collaboratively to overcome the issue that some groups might lack certain variation patterns present in the target image patch. Promising segmentation results have been achieved in labeling the hippocampus on the ADNI dataset, as well as basal ganglia and brainstem structures, compared with counterpart label fusion methods.
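For background (this is the conventional flat formulation that the paper generalizes, written in illustrative notation), sparse-representation label fusion encodes the target patch y over an atlas-patch dictionary D and fuses labels with the resulting coefficients:

    \min_{\alpha}\ \| y - D\alpha \|_2^2 + \lambda \| \alpha \|_1, \qquad \hat{\ell}(y) = \arg\max_{\ell} \sum_{j:\, \ell(d_j) = \ell} \alpha_j,

where each column d_j of D is an atlas patch with known label \ell(d_j). The proposed method replaces the single flat dictionary D with label-specific dictionaries plus hierarchically organized residual dictionaries.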


Neural Processing Letters | 2018

Automatic Tumor Segmentation with Deep Convolutional Neural Networks for Radiotherapy Applications

Yan Wang; Chen Zu; Guangliang Hu; Yong Luo; Zongqing Ma; Kun He; Xi Wu; Jiliu Zhou

Accurate tumor delineation in medical images is of great importance in guiding radiotherapy. In nasopharyngeal carcinoma (NPC), due to its high variability, low contrast and discontinuous boundaries in magnetic resonance images (MRI), the tumor margin is especially difficult to identify, making radiotherapy planning more challenging. The objective of this paper is to develop an automatic segmentation method for NPC in MRI for radiosurgery applications. To this end, we propose to segment NPC using a deep convolutional neural network. Specifically, to obtain spatial consistency as well as accurate feature details for segmentation, multiple convolution kernel sizes are employed. The network contains a large number of trainable parameters which capture the relationship between the MRI intensity images and the corresponding label maps. When trained on subjects with pre-labeled MRI, the network can estimate the label class of each voxel for a testing subject given only the intensity image. To demonstrate the segmentation performance, we apply our method to the T1-weighted images of 15 NPC patients and compare the segmentation results against the radiologist’s reference outline. Experimental results show that the proposed method outperforms traditional segmentation methods based on hand-crafted features. The method presented in this paper could be useful for NPC diagnosis and helpful for guiding radiotherapy.
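As an illustrative sketch only (the abstract does not specify the architecture, so layer names and sizes here are assumptions), a block that applies several convolution kernel sizes in parallel and concatenates the results can be written in PyTorch as:

    import torch
    import torch.nn as nn

    class MultiKernelBlock(nn.Module):
        # Parallel convolutions with different kernel sizes, concatenated along the
        # channel dimension, so later layers can combine coarse and fine features.
        def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Sequential(nn.Conv2d(in_ch, out_ch, k, padding=k // 2), nn.ReLU(inplace=True))
                for k in kernel_sizes
            ])

        def forward(self, x):
            return torch.cat([branch(x) for branch in self.branches], dim=1)

    # Example: a single-channel 256 x 256 MRI slice
    x = torch.randn(1, 1, 256, 256)
    y = MultiKernelBlock(1, 16)(x)  # shape: (1, 48, 256, 256)

Stacking blocks of this kind and ending with a per-voxel classifier yields a segmentation network in the spirit described above.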


Medical Image Computing and Computer Assisted Intervention | 2016

Progressive Graph-Based Transductive Learning for Multi-modal Classification of Brain Disorder Disease

Zhengxia Wang; Xiaofeng Zhu; Ehsan Adeli; Yingying Zhu; Chen Zu; Feiping Nie; Dinggang Shen; Guorong Wu

Graph-based Transductive Learning (GTL) is a powerful tool in computer-assisted diagnosis, especially when the training data is not sufficient to build reliable classifiers. Conventional GTL approaches first construct a fixed subject-wise graph based on the similarities of observed features (i.e., extracted from imaging data) in the feature domain, and then follow the established graph to propagate the existing labels from training to testing data in the label domain. However, such a graph is exclusively learned in the feature domain and may not be necessarily optimal in the label domain. This may eventually undermine the classification accuracy. To address this issue, we propose a progressive GTL (pGTL) method to progressively find an intrinsic data representation. To achieve this, our pGTL method iteratively (1) refines the subject-wise relationships observed in the feature domain using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined subject-wise relationships, and (3) verifies the intrinsic data representation on the training data, in order to guarantee an optimal classification on the new testing data. Furthermore, we extend our pGTL to incorporate multi-modal imaging data, to improve the classification accuracy and robustness as multi-modal imaging data can provide complementary information. Promising classification results in identifying Alzheimer’s disease (AD), Mild Cognitive Impairment (MCI), and Normal Control (NC) subjects are achieved using MRI and PET data.
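For context (this is the standard graph-based transductive objective that pGTL refines, written in illustrative notation), label propagation over a subject-wise affinity graph W typically solves

    \min_{F}\ \sum_{i,j} W_{ij} \, \| F_i - F_j \|_2^2 \;+\; \mu \sum_{i \in \text{labeled}} \| F_i - Y_i \|_2^2,

where F holds the predicted label scores for all (training and testing) subjects and Y the known labels. Conventional GTL fixes W from the observed features; pGTL instead iterates between updating an intrinsic data representation in the label domain and refining the subject-wise relationships with it.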


NeuroImage | 2018

3D conditional generative adversarial networks for high-quality PET image estimation at low dose

Yan Wang; Biting Yu; Lei Wang; Chen Zu; David S. Lalush; Weili Lin; Xi Wu; Jiliu Zhou; Dinggang Shen; Luping Zhou

Positron emission tomography (PET) is a widely used imaging modality, providing insight into both the biochemical and physiological processes of the human body. Usually, a full-dose radioactive tracer is required to obtain high-quality PET images for clinical needs. This inevitably raises concerns about potential health hazards. On the other hand, dose reduction may increase noise in the reconstructed PET images, which impacts the image quality to a certain extent. In this paper, in order to reduce the radiation exposure while maintaining the high quality of PET images, we propose a novel method based on 3D conditional generative adversarial networks (3D c-GANs) to estimate high-quality full-dose PET images from low-dose ones. Generative adversarial networks (GANs) include a generator network and a discriminator network which are trained simultaneously with the goal of one beating the other. Similar to GANs, in the proposed 3D c-GANs, we condition the model on an input low-dose PET image and generate a corresponding output full-dose PET image. Specifically, to render the same underlying information between the low-dose and full-dose PET images, a 3D U-net-like deep architecture which can combine hierarchical features by using skip connections is designed as the generator network to synthesize the full-dose image. In order to guarantee the synthesized PET image is close to the real one, we take into account the estimation error loss in addition to the discriminator feedback to train the generator network. Furthermore, a concatenated 3D c-GANs based progressive refinement scheme is also proposed to further improve the quality of estimated images. Validation was done on a real human brain dataset including both normal subjects and subjects diagnosed with mild cognitive impairment (MCI). Experimental results show that our proposed 3D c-GANs method outperforms the benchmark methods and achieves much better performance than the state-of-the-art methods in both qualitative and quantitative measures.
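As a minimal sketch of how discriminator feedback and an estimation-error term are commonly combined in conditional image synthesis (function names and the weighting factor are assumptions, not taken from the paper):

    import torch
    import torch.nn.functional as F

    def generator_loss(discriminator, low_dose, fake_full_dose, real_full_dose, lam=100.0):
        # Adversarial term: the discriminator scores the (condition, synthesized) pair,
        # and the generator is pushed to make that score look "real".
        pred_fake = discriminator(torch.cat([low_dose, fake_full_dose], dim=1))
        adv_loss = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
        # Estimation-error term: keeps the synthesized volume voxel-wise close to the
        # real full-dose image; lam trades off the two terms.
        est_loss = F.l1_loss(fake_full_dose, real_full_dose)
        return adv_loss + lam * est_loss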


Neural Information Processing Systems | 2014

Label-Alignment-Based Multi-Task Feature Selection for Multimodal Classification of Brain Disease

Chen Zu; Biao Jie; Songcan Chen; Daoqiang Zhang

Recently, multi-task feature selection methods have been applied to jointly identify the disease-related brain regions for fusing information from multiple modalities of neuroimaging data. However, most of those approaches ignore the complementary label information across modalities. To address this issue, in this paper, we present a novel label-alignment-based multi-task feature selection method to jointly select the most discriminative features from multi-modality data. Specifically, the feature selection procedure of each modality is treated as a task and a group sparsity regularizer (i.e., the ℓ2,1-norm) is adopted to ensure that only a small number of features are selected jointly. In addition, we introduce a new regularization term to preserve label relatedness. The function of the proposed regularization term is to align paired within-class subjects from multiple modalities, i.e., to minimize their distance in the corresponding low-dimensional feature space. The experimental results on the magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET) data of the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset demonstrate that our proposed method achieves better performance than state-of-the-art methods on multimodal classification of Alzheimer’s disease (AD) and mild cognitive impairment (MCI).
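A generic form of such an objective (illustrative notation; the paper's exact loss and pair weighting may differ) is

    \min_{W}\ \sum_{m=1}^{M} \| y - X^{(m)} w^{(m)} \|_2^2 \;+\; \lambda_1 \| W \|_{2,1} \;+\; \lambda_2 \sum_{m \ne m'} \sum_{i,j:\, y_i = y_j} \| X_i^{(m)} w^{(m)} - X_j^{(m')} w^{(m')} \|_2^2,

where X^{(m)} is the feature matrix of modality m, W = [w^{(1)}, ..., w^{(M)}] stacks the per-modality weight vectors, the ℓ2,1-norm \| W \|_{2,1} encourages the same small set of features (rows) to be selected across modalities, and the last term pulls same-class subjects from different modalities closer together in the projected space.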


Medical Image Computing and Computer Assisted Intervention | 2016

Identifying high order brain connectome biomarkers via learning on hypergraph

Chen Zu; Yue Gao; Brent C. Munsell; Minjeong Kim; Ziwen Peng; Yingying Zhu; Wei Gao; Daoqiang Zhang; Dinggang Shen; Guorong Wu

The functional connectome has gained increased attention in the neuroscience community. In general, most network connectivity models are based on correlations between discrete time-series signals and only connect two different brain regions. However, these bivariate region-to-region models do not involve three or more brain regions that form a subnetwork. Here we propose a learning-based method to explore subnetwork biomarkers that are significantly distinguishable between two clinical cohorts. Learning on hypergraphs is employed in our work. Specifically, we construct a hypergraph by exhaustively inspecting all possible subnetworks for all subjects, where each hyperedge connects a group of subjects demonstrating highly correlated functional connectivity behavior throughout the underlying subnetwork. The objective of hypergraph learning is to jointly optimize the weights of all hyperedges so that the separation of the two groups by the learned data representation is in the best consensus with the observed clinical labels. We deploy our method to find high-order childhood autism biomarkers from rs-fMRI images. Promising results have been obtained from a comprehensive evaluation of the discriminative power and generality in the diagnosis of autism.
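A common form of learning on a hypergraph (illustrative notation following the standard formulation; the paper's joint objective may differ in its details) is

    \min_{f, w}\ f^{\top} \Delta(w) f \;+\; \mu \| f - y \|_2^2 \quad \text{s.t. } \sum_{e} w_e = 1,\ w_e \ge 0,

where H is the subject-by-hyperedge incidence matrix, \Delta(w) = I - D_v^{-1/2} H \,\mathrm{diag}(w)\, D_e^{-1} H^{\top} D_v^{-1/2} is the normalized hypergraph Laplacian with vertex and hyperedge degree matrices D_v and D_e, f is the learned data representation (predicted labels), y holds the observed clinical labels, and the hyperedge weights w are optimized jointly with f. Hyperedges (subnetworks) that receive large weights act as the candidate biomarkers.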


Medical Image Computing and Computer Assisted Intervention | 2018

Locality Adaptive Multi-modality GANs for High-Quality PET Image Synthesis

Yan Wang; Luping Zhou; Lei Wang; Biting Yu; Chen Zu; David S. Lalush; Weili Lin; Xi Wu; Jiliu Zhou; Dinggang Shen

Positron emission tomography (PET) has been widely used in recent years. To minimize the potential health risks caused by the tracer radiation inherent to PET scans, it is of great interest to synthesize a high-quality full-dose PET image from the low-dose one to reduce the radiation exposure while maintaining the image quality. In this paper, we propose a locality adaptive multi-modality generative adversarial networks model (LA-GANs) to synthesize the full-dose PET image from both the low-dose one and the accompanying T1-weighted MRI, which incorporates anatomical information for better PET image synthesis. This paper has the following contributions. First, we propose a new mechanism to fuse multi-modality information in deep neural networks. Different from traditional methods that treat each image modality as an input channel and apply the same kernel to convolve the whole image, we argue that the contributions of different modalities could vary at different image locations, and therefore a unified kernel for a whole image is not appropriate. To address this issue, we propose a method that is locality adaptive for multi-modality fusion. Second, to learn this locality adaptive fusion, we utilize a 1 × 1 × 1 kernel so that the number of additional parameters incurred by our method is kept to a minimum. This also naturally produces a fused image which acts as a pseudo input for the subsequent learning stages. Third, the proposed locality adaptive fusion mechanism is learned jointly with the PET image synthesis in an end-to-end trained 3D conditional GANs model developed by us. Our 3D GANs model generates high-quality PET images by employing large-sized image patches and hierarchical features. Experimental results show that our method outperforms the traditional multi-modality fusion methods used in deep networks, as well as the state-of-the-art PET estimation approaches.
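A minimal sketch of fusing two modalities with voxel-wise weights produced by a 1 × 1 × 1 convolution (module and variable names are assumptions; the paper's actual fusion mechanism may be organized differently):

    import torch
    import torch.nn as nn

    class LocalityAdaptiveFusion(nn.Module):
        # Hypothetical sketch: fuse low-dose PET and T1 MRI with voxel-wise weights
        # from a 1x1x1 convolution, so each location can weight the two modalities
        # differently instead of using one shared kernel for the whole image.
        def __init__(self):
            super().__init__()
            # Two input channels (PET, MRI) -> two weight maps, one per modality.
            self.weight_conv = nn.Conv3d(2, 2, kernel_size=1)

        def forward(self, pet, mri):
            x = torch.cat([pet, mri], dim=1)               # (N, 2, D, H, W)
            w = torch.softmax(self.weight_conv(x), dim=1)  # voxel-wise weights summing to 1
            return w[:, :1] * pet + w[:, 1:] * mri         # pseudo fused input for the GAN

    # Example with toy 3D volumes
    pet = torch.randn(1, 1, 16, 16, 16)
    mri = torch.randn(1, 1, 16, 16, 16)
    fused = LocalityAdaptiveFusion()(pet, mri)             # shape: (1, 1, 16, 16, 16)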


Archive | 2018

Multi-modality Feature Learning in Diagnoses of Alzheimer’s Disease

Daoqiang Zhang; Chen Zu; Biao Jie; Tingting Ye

Many machine learning and pattern classification methods have been applied to the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). Recently, multi-task feature selection methods have typically been used for joint selection of common features across multiple modalities. In this chapter, we review several recent multi-modality feature learning works on the diagnosis of AD. Specifically, multi-task feature selection (MTFS) is proposed to jointly select the common subset of relevant features for multiple variables from each modality. Based on MTFS, a manifold regularized multi-task feature learning method (M2TFS) is used to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution information in each modality. However, most existing methods focus on mining the relationship across multiple modalities of the same subjects, while ignoring the potentially useful relationship across different subjects. To overcome this issue, label-aligned multi-task feature selection (LAMTFS), which can fully explore the relationships across both modalities and subjects, is proposed. Then a discriminative multi-task feature selection method is proposed to select the most discriminative features for multi-modality based classification. The experimental results on the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database demonstrate the effectiveness of the above methods.


Brain Imaging and Behavior | 2018

Identifying disease-related subnetwork connectome biomarkers by sparse hypergraph learning

Chen Zu; Yue Gao; Brent C. Munsell; Minjeong Kim; Ziwen Peng; Jessica R. Cohen; Daoqiang Zhang; Guorong Wu

The functional brain network has gained increased attention in the neuroscience community because of its ability to reveal the underlying architecture of the human brain. In general, the majority of work on functional network connectivity is built on correlations between discrete time-series signals that link only two different brain regions. However, these simple region-to-region connectivity models do not capture complex connectivity patterns between three or more brain regions that form a connectivity subnetwork, or subnetwork for short. To overcome this current limitation, a hypergraph learning-based method is proposed to identify subnetwork differences between two different cohorts. To achieve our goal, a hypergraph is constructed, where each vertex represents a subject and each hyperedge encodes a subnetwork with similar functional connectivity patterns between different subjects. Unlike previous learning-based methods, our approach is designed to jointly optimize the weights of all hyperedges such that the learned representation is in consensus with the distribution of phenotype data, i.e., clinical labels. In order to suppress spurious subnetwork biomarkers, we further enforce a sparsity constraint on the hyperedge weights, where a larger hyperedge weight indicates a subnetwork with greater capability of identifying the disorder condition. We apply our hypergraph learning-based method to identify subnetwork biomarkers in Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD). A comprehensive quantitative and qualitative analysis is performed, and the results show that our approach can correctly classify ASD and ADHD subjects from normal controls with accuracies of 87.65% and 65.08%, respectively.
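In terms of the hypergraph learning objective sketched for the MICCAI 2016 paper above, the sparsity constraint described here corresponds (again in illustrative notation, not the paper's exact formulation) to adding an ℓ1 penalty on the hyperedge weights,

    \min_{f, w}\ f^{\top} \Delta(w) f \;+\; \mu \| f - y \|_2^2 \;+\; \gamma \| w \|_1, \qquad w_e \ge 0,

so that most hyperedge (subnetwork) weights shrink toward zero and only a small set of subnetworks survives as candidate biomarkers.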

Collaboration


Dive into Chen Zu's collaborations.

Top Co-Authors

Daoqiang Zhang
Nanjing University of Aeronautics and Astronautics

Dinggang Shen
University of North Carolina at Chapel Hill

Biao Jie
Nanjing University of Aeronautics and Astronautics

Jiliu Zhou
Chengdu University of Information Technology

Xi Wu
Chengdu University of Information Technology

Guorong Wu
University of North Carolina at Chapel Hill

Lei Wang
Information Technology University

Luping Zhou
Information Technology University

Tingting Ye
Nanjing University of Aeronautics and Astronautics