
Disease-Image Specific Generative Adversarial Network for Brain Disease Diagnosis with Incomplete Multi-modal Neuroimages

Abstract


The incomplete-data problem is unavoidable in automated brain disease diagnosis using multi-modal neuroimages (e.g., MRI and PET). To utilize all available subjects for training diagnostic models, deep networks have been proposed that directly impute missing neuroimages, treating all voxels in a 3D volume equally. These methods are not diagnosis-oriented, as they ignore the disease-image specific information conveyed in multi-modal neuroimages, i.e., (1) disease may cause abnormalities only in local brain regions, and (2) different modalities may highlight different disease-associated regions. In this paper, we propose a unified disease-image specific deep learning framework for joint image synthesis and disease diagnosis using incomplete multi-modal neuroimaging data. Specifically, using whole-brain images as input, we design a disease-image specific neural network (DSNN) that implicitly models disease-image specificity in MRI/PET scans via a spatial cosine kernel. Moreover, we develop a feature-consistent generative adversarial network (FGAN) to synthesize missing images, encouraging the DSNN feature maps of synthetic images to be consistent with those of their respective real images. The DSNN and FGAN can be trained jointly, so that missing images are imputed in a task-oriented manner for brain disease diagnosis. Experimental results on 1,466 subjects suggest that our method not only generates reasonable neuroimages but also achieves state-of-the-art performance in both Alzheimer’s disease (AD) identification and mild cognitive impairment (MCI) conversion prediction.
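To make the feature-consistency idea concrete, the following is a minimal, hypothetical PyTorch sketch of the kind of constraint the abstract describes: the feature maps that a diagnosis-oriented network (here a stand-in `TinyDSNN`) extracts from a synthesized modality are pushed toward those extracted from the corresponding real image. The class and function names, the use of an L1 penalty, and the choice to detach the real-image features are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a feature-consistency loss for modality synthesis.
# Not the authors' code; TinyDSNN and feature_consistency_loss are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyDSNN(nn.Module):
    """Stand-in 3D feature extractor playing the role of the diagnosis network."""

    def __init__(self, in_ch: int = 1, feat_ch: int = 8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(feat_ch, feat_ch, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Feature maps that a downstream classifier would consume.
        return self.conv(x)


def feature_consistency_loss(dsnn: nn.Module,
                             real_img: torch.Tensor,
                             synth_img: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between DSNN features of real and synthetic scans.

    L1 is used here only as an example distance; the abstract does not specify
    the exact form of the consistency term.
    """
    with torch.no_grad():
        feat_real = dsnn(real_img)  # treat real-image features as the target
    feat_synth = dsnn(synth_img)
    return F.l1_loss(feat_synth, feat_real)


if __name__ == "__main__":
    dsnn = TinyDSNN()
    real_pet = torch.randn(2, 1, 16, 16, 16)   # e.g., an available real PET patch
    fake_pet = torch.randn(2, 1, 16, 16, 16)   # e.g., PET synthesized from MRI
    print(feature_consistency_loss(dsnn, real_pet, fake_pet).item())
```

In a joint training setup of the kind the abstract outlines, a term like this would be added to the generator's adversarial loss and to the diagnosis loss, so that the synthesized images are optimized for the classification task rather than for voxel-wise fidelity alone.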

Pages 137-145
DOI 10.1007/978-3-030-32248-9_16
Language English
