Interpretation of 3D CNNs for Brain MRI Data Classification
Maxim Kan, Ruslan Aliev, Anna Rudenko, Nikita Drobyshev, Nikita Petrashen, Ekaterina Kondrateva, Maxim Sharaev, Alexander Bernstein, Evgeny Burnaev
Interpretable Deep Learning for Pattern Recognition in Brain Differences Between Men and Women
Maxim Kan, Ruslan Aliev, Anna Rudenko, Nikita Drobyshev, Nikita Petrashen, Ekaterina Kondrateva, Maxim Sharaev, Alexander Bernstein, and Evgeny Burnaev
Skolkovo Institute of Science and Technology, Moscow, Russia
Abstract.
Deep learning shows high potential for many medical image analysis tasks. Neural networks work with full-size data without extensive preprocessing and feature generation and, thus, without information loss. Recent work has shown that morphological differences between specific brain regions can be found on MRI with deep learning techniques. We consider a pattern recognition task based on a large open-access dataset of healthy subjects: an exploration of brain differences between men and women. However, interpretation of the recently proposed models is based on a region of interest and cannot be extended to pixel-wise or voxel-wise image interpretation, which is considered to be more informative. In this paper, we confirm the previous findings in sex differences from diffusion-tensor imaging on T1-weighted brain MRI scans. We compare the results of three voxel-based 3D CNN interpretation methods: Meaningful Perturbations, GradCAM and Guided Backpropagation, and provide the open-source code.
Keywords:
MRI · Neural Networks · Deep Learning · 3D CNN · CNN interpretation · Meaningful Perturbations · GradCAM
Deep learning has recently found many applications in medical diagnostics and image processing [21], [17]. For example, processing Magnetic Resonance Images (MRI) with a convolutional neural network (CNN) allows the dose of gadolinium used for contrast to be reduced by an order of magnitude [10]. Another example is the detection of cerebral microbleeds using a 3D CNN [6]. Tissue segmentation in MR images plays an important role in modern medical research. One of the most common image segmentation tasks in brain MRI is the segmentation of Gray Matter (GM), White Matter (WM), and Cerebrospinal Fluid (CSF). One possible approach to this segmentation task is proposed in [29]: the authors apply convolutional networks to multi-modal (T1, T2 and FA) MRI images in order to segment infant brain tissue images into GM, WM, and CSF. CNNs are also applied to a variety of regression tasks, see [18].
Finally, CNNs are used in early-stage Alzheimer's disease detection in MRI and PET images [24].

Conventionally, the brain data is first processed to obtain lower-dimensional meaningful features [14]; for diffusion tensor imaging (DTI) these are fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD) values [27], and for functional T2* MRI images these are functional connectivity features, spectral features, etc. Classifier construction follows this feature extraction step. However, deep learning approaches, especially those for processing 3D data, are shown to be more accurate in many applications [27], as they use full-sized data without the information loss caused by extensive preprocessing.

Working on deep learning model interpretation in MRI implies training on large databases of healthy subjects. One of the most common and highly explored open-access databases is the Human Connectome Project (HCP). A conventional task extensively explored on this database is the recognition of sex-related patterns distinguishing men and women. Men and women do many things, such as encoding memories, sensing emotions, recognizing faces, solving certain problems, and making decisions, in different ways. Since the brain controls cognition and behavior, these sex-related functional differences may be associated with the sex-specific structure of the brain [5]. Recent studies also indicate that sex may affect human cognitive functions, such as emotion, memory, perception, etc. [3].

However, previous studies on morphological differences between specific brain regions provide interpretation only at the feature or region-of-interest level. In contrast, state-of-the-art deep learning interpretation methods allow visualisation of the decision rule in a pixel-wise fashion [27] or, in the case of 3D convolutional models, voxel-wise [16]. The contributions of this paper are as follows:
– we reproduce the state-of-the-art 3D CNN model [27] to investigate the difference between men's and women's brains on T1 images and confirm the previous findings on DTI;
– we are the first, to the best of our knowledge, to apply several network interpretation methods to the 3D CNN model (Meaningful Perturbations, GradCAM and Guided Backpropagation) to find sex-specific patterns and compare their performances;
– we compare these results to conventional machine learning classification models trained on morphometry data of the same subjects.
The source code is open and available at https://github.com/maxs-kan/InterpretableNeuroDL

The Human Connectome Project (HCP) database (https://db.humanconnectome.org) contains MRI data from 1113 subjects, including 507 men and 606 women. We explored T1 images preprocessed with the HCP pipelines (https://github.com/Washington-University/HCPpipelines). For the morphometry data analysis we used FreeSurfer-preprocessed features (https://surfer.nmr.mgh.harvard.edu/) from the Expanded FreeSurfer Data section for the same 1113 subjects. The morphometry characteristics (volumes, surface areas, thicknesses, etc.) are calculated for 34 cortical regions according to the Desikan-Killiany Atlas and for 45 subcortical areas according to the automatic subcortical segmentation [8], resulting in a vector of 935 features for each subject.
We used the morphometry data classification as a baseline machine learning model. The best performing model is chosen among different classifiers (XGBoost, k-Nearest Neighbors (KNN) and Logistic Regression) via a grid search. All considered models were validated with the 10-fold cross-validation technique, and the most important features were selected via model feature scoring.
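A minimal sketch of this baseline selection is given below; it assumes the 935 morphometry features are already assembled into an array X with binary sex labels y (random stand-ins here), and the hyperparameter grids are illustrative rather than the exact ones used in the paper.

```python
# Baseline model selection sketch: grid search over three classifiers
# with stratified 10-fold cross-validation on morphometry features.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

# Stand-in data: replace with the real (n_subjects, 935) HCP feature table.
X = np.random.randn(100, 935)
y = np.random.randint(0, 2, size=100)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

candidates = {
    "logreg": (make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
               {"logisticregression__C": [0.01, 0.1, 1, 10]}),
    "knn": (make_pipeline(StandardScaler(), KNeighborsClassifier()),
            {"kneighborsclassifier__n_neighbors": [5, 11, 21]}),
    "xgb": (XGBClassifier(n_estimators=300, eval_metric="logloss"),
            {"max_depth": [3, 5], "learning_rate": [0.05, 0.1]}),
}

for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=cv, scoring="accuracy")
    search.fit(X, y)
    print(name, search.best_score_, search.best_params_)
```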
In this work we used the state-of-the-art 3D CNN model architecture from [27]. The neural network consists of three hidden layers and a linear layer that integrates the outputs of the hidden layers into the inputs of the terminal softmax activation layer. The first layer is a convolutional layer which convolves the input tensor with a 3 × 3 × 3 kernel. The network is trained with the Adam optimizer with L2 weight regularization. We also used a scheduler strategy, which reduces the step size as the epoch number increases, and early stopping to prevent overfitting. The batch size for training is set to 45. Due to the low amount of data, we performed 10-fold cross-validation (stratified strategy) to estimate model performance. We compared the results of the 3D CNN network to a support vector machine classifier with an RBF kernel (SVM), trained on the full-size data reshaped to a 1-dimensional vector, as proposed in [27].

We analyzed features in the first hidden layer, as they are less abstract [27] and can represent the structural features of MRI images. There are 32 feature maps after the first hidden layer, according to the proposed architecture. Firstly, we computed the mean of voxel values for each feature map for men and women with confidence intervals and used a two-sample t-test to determine significant differences between these values. Secondly, for each individual we normalized each feature map so that all its elements are integers in the range [0, N] and computed its entropy

H = − ∑_{i=0}^{N} p_i · log p_i ,   (1)

where p_i shows how often a voxel with value i appears in the image.
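The snippet below is a rough sketch of this feature-map analysis (group comparison of mean activations and the entropy of Eq. 1). The activation arrays and the quantization range are illustrative stand-ins, since the exact normalization constant is not specified above.

```python
# Sketch: per-feature-map mean comparison (two-sample t-test) and entropy.
import numpy as np
from scipy import stats

def feature_map_means(activations):
    """activations: (n_maps, H, W, D) -> mean voxel value per feature map."""
    return activations.reshape(activations.shape[0], -1).mean(axis=1)

def feature_map_entropy(fmap, n_levels=256):
    """Entropy (Eq. 1) of one feature map quantized to integer levels."""
    fmap = fmap - fmap.min()
    if fmap.max() > 0:
        fmap = fmap / fmap.max()
    levels = np.round(fmap * (n_levels - 1)).astype(int)
    p = np.bincount(levels.ravel(), minlength=n_levels).astype(float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Stand-in first-layer activations: (subjects, 32 maps, H, W, D).
men = np.random.rand(10, 32, 16, 16, 16)
women = np.random.rand(10, 32, 16, 16, 16)

# Two-sample t-test on the mean voxel value of feature map 0.
men_means = np.array([feature_map_means(a)[0] for a in men])
women_means = np.array([feature_map_means(a)[0] for a in women])
t, p_value = stats.ttest_ind(men_means, women_means)
print(t, p_value, feature_map_entropy(men[0, 0]))
```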
Meaningful Perturbations for 3D CNN results interpretation. The goal of the method [9] is to perturb the smallest possible region of the MRI such that the model significantly changes its output probability for the MR image class, which means that this region is the most important for the model decision and is the most informative part of the image. In this work we perturb the original image x by replacing the corresponding region with a Gaussian blur of the image.
Let m : Λ → [0, 1] be a mask associating each voxel u ∈ Λ of the input image with a scalar value m(u). Then the perturbation operator is

P(x; m) = x ⊙ m + (g_σ ∗ x) ⊙ (1 − m),   (2)

where g_σ is a 3D Gaussian kernel. Our goal is to find the smallest deletion mask m that causes the score f_c(P(x; m)) ≪ f_c(x), where f_c(·) is the probability of belonging to class c. To avoid artifacts [13], we pad x with j zeros and apply the mask m to the jittered crop x_K:

x_K = x[K : H + j + K, K : W + j + K, K : D + j + K]   (3)

with the integer K drawn from the discrete uniform distribution on [0, j), where H, W, D are the image dimensions. Also, to obtain a mask more representative of natural perturbations, we encourage it to have a simple structure. We do so by regularizing m in the total-variation (TV) norm, upsampling it to the image size by a factor s from a low-resolution version, and applying a Gaussian filter to the upsampled mask. Let us denote M = g_{σ_m} ∗ Up(m, s), where g_{σ_m} is a 3D Gaussian kernel and Up(·, s) is a trilinear upsampling algorithm by factor s. Finding the mask m_c for class c can then be formulated as the following optimization problem:

m_c = argmin_m E_{K ∼ U[0, j)} [ f_c(P(x_K, M)) + λ_1 ∑_u ‖∇M(u)‖^β + λ_2 ‖1 − m‖_1 ].   (4)

For our experiments σ = σ_m = 10, λ_1 = 3, λ_2 = 1, β = 7, s = 4, j = 5, and jittering is repeated 10 times. The score is optimized with the Adam optimizer
with learning rate α = 0.3 and exponential decay rates β_1, β_2 for the moment estimates.
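A hedged PyTorch sketch of this optimization is given below. It assumes `model` maps a (1, 1, H, W, D) volume to class probabilities; the jittering of Eq. (3) is omitted for brevity, and the Gaussian kernel size and step count are illustrative choices rather than values from the paper.

```python
# Meaningful Perturbations sketch: optimize a low-resolution deletion mask.
import torch
import torch.nn.functional as F

def gaussian_kernel3d(sigma, size=11):
    """Normalized 3D Gaussian kernel shaped for F.conv3d."""
    ax = torch.arange(size).float() - size // 2
    g = torch.exp(-ax ** 2 / (2 * sigma ** 2))
    k = g[:, None, None] * g[None, :, None] * g[None, None, :]
    return (k / k.sum()).view(1, 1, size, size, size)

def blur(x, kernel):
    return F.conv3d(x, kernel, padding=kernel.shape[-1] // 2)

def meaningful_perturbation(model, x, target, steps=300, lam1=3.0, lam2=1.0,
                            beta=7.0, scale=4, sigma=10.0, lr=0.3):
    kernel = gaussian_kernel3d(sigma)
    x_blur = blur(x, kernel)                             # g_sigma * x
    low_res = [s // scale for s in x.shape[2:]]
    m = torch.ones(1, 1, *low_res, requires_grad=True)   # low-resolution mask
    opt = torch.optim.Adam([m], lr=lr)
    for _ in range(steps):
        m_up = F.interpolate(m.clamp(0, 1), size=x.shape[2:],
                             mode="trilinear", align_corners=False)
        M = blur(m_up, kernel)                           # smoothed mask
        perturbed = x * M + x_blur * (1 - M)             # Eq. (2)
        prob = model(perturbed)[0, target]
        tv = ((M[..., 1:, :, :] - M[..., :-1, :, :]).abs().pow(beta).sum()
              + (M[..., :, 1:, :] - M[..., :, :-1, :]).abs().pow(beta).sum()
              + (M[..., :, :, 1:] - M[..., :, :, :-1]).abs().pow(beta).sum())
        # Eq. (4): class probability + TV regularizer + L1 deletion penalty
        loss = prob + lam1 * tv + lam2 * (1 - m.clamp(0, 1)).abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return m.detach().clamp(0, 1)
```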
Guided Backpropagation for 3D CNN results interpretation. In order to obtain saliency maps of an input MRI from our network, we use the Guided Backpropagation method [23]. This approach computes the gradient of the score y_c for class c with respect to the input image x:

m_c = ∂y_c / ∂x.   (5)

The gradient is computed with a specific backpropagation rule through the ReLU non-linearity: in Guided Backpropagation we backpropagate only the positive values of the gradient, setting the negative ones to zero, and, as usual, we backpropagate only the gradient values that correspond to positive inputs of the ReLU. Let G^l be the gradient backpropagated through layer l and f^{l+1}_i = ReLU(f^l_i); then

G^l_i = (f^l_i > 0) · (G^{l+1}_i > 0) · G^{l+1}_i.   (6)
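Below is a minimal sketch of Guided Backpropagation with PyTorch backward hooks; it assumes the ReLU non-linearities are nn.ReLU modules (functional F.relu calls would not be caught by the hooks). Since the hook already receives the ReLU-gated gradient, clamping it at zero implements Eq. (6).

```python
# Guided Backpropagation sketch: modify the ReLU backward pass via hooks.
import torch
import torch.nn as nn

def guided_backprop(model, x, target):
    handles = []

    def relu_hook(module, grad_input, grad_output):
        # grad_input is already zero where the forward input was negative;
        # clamping additionally zeroes negative gradients (Eq. 6).
        return (torch.clamp(grad_input[0], min=0.0),)

    for module in model.modules():
        if isinstance(module, nn.ReLU):
            handles.append(module.register_full_backward_hook(relu_hook))

    x = x.clone().requires_grad_(True)
    score = model(x)[0, target]          # y_c
    model.zero_grad()
    score.backward()
    for h in handles:
        h.remove()
    return x.grad.detach()               # saliency map, Eq. (5)
```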
GradCAM for 3D CNN results interpretation. GradCAM [20] interprets the model assuming that the deep CNN layers capture higher-level visual constructs [4]: in these layers, neurons attend to the parts of the object responsible for the class decision. GradCAM computes the gradient of the score y_c for class c before the terminal layer with respect to the filter activations F^k of the last convolutional layer. It then computes an importance weight for each filter:

α^c_k = 1 / (H · W · D) ∑_{i=1}^{H} ∑_{j=1}^{W} ∑_{l=1}^{D} ∂y_c / ∂F^k_{i,j,l},   (7)

where H, W, D are the dimensions of the filter activation tensor; α^c_k captures the "importance" of filter k for the target class c. To obtain the class-discriminative localization mask m_c, we compute a weighted combination of the filter activations followed by a ReLU:

m_c = ReLU( ∑_k α^c_k · F^k ).   (8)

We also upsample m_c to the input image resolution using trilinear interpolation.
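A short PyTorch sketch of Eqs. (7)-(8) follows, assuming `last_conv` is the last 3D convolutional module of the network; hooks capture its activations and gradients in a single forward-backward pass.

```python
# GradCAM sketch: gradient-weighted combination of last-layer filter maps.
import torch
import torch.nn.functional as F

def grad_cam(model, last_conv, x, target):
    store = {}

    def fwd_hook(module, inp, out):
        store["act"] = out                       # F^k

    def bwd_hook(module, grad_in, grad_out):
        store["grad"] = grad_out[0]              # dy_c / dF^k

    h1 = last_conv.register_forward_hook(fwd_hook)
    h2 = last_conv.register_full_backward_hook(bwd_hook)

    score = model(x)[0, target]                  # y_c
    model.zero_grad()
    score.backward()
    h1.remove()
    h2.remove()

    alpha = store["grad"].mean(dim=(2, 3, 4), keepdim=True)        # Eq. (7)
    cam = F.relu((alpha * store["act"]).sum(dim=1, keepdim=True))  # Eq. (8)
    cam = F.interpolate(cam, size=x.shape[2:], mode="trilinear",
                        align_corners=False)
    return cam.detach()
```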
The results of the 10-fold cross-validation for 1113 subjects are given in Table 1, and the feature importances (β scores) for the Logistic Regression model chosen via grid search are presented in Fig. 1.

Fig. 1.
Feature importances for the Logistic Regression model.
Table 1.
Results for baseline morphometry data classification models (10-fold cross-validation).

                 XGB    KNN    Logistic regression
Mean accuracy    0.89   0.85   0.92
STD              0.02   0.04   0.03
As can be seen from Fig. 1, the most important features for the Logistic Regression model are the volumes and intensities of the following brain regions: the corpus callosum, the left and right insula and thalamic regions, as well as whole-brain metrics for white matter hyperintensities and the intensity of cerebrospinal fluid.
On 10-fold cross-validation, the 3D CNN model with the proposed architecture slightly outperforms the standard SVM (standard deviations of 0.03 and 0.02, respectively).
We analyzed features in the first hidden layer. Mean voxel values for 31 features show a significant difference between the men and women groups, with 10 features larger for women and 21 features larger for men (see Fig. 2). The structural features extracted from the 3D CNN reflect the brain structure differences between men and women. In the first hidden layer of the 3D CNN model, we found 25 features with a significant difference between men and women in voxel values. Moreover, using the entropy measure, we found a range of features with higher complexity in the men's brain, as reflected by significantly higher entropy values. These results indicate that the gender-related differences are likely to exist across the whole brain, including both white and gray matter. We would like to highlight that these results are in line with the previous results from [27], where the authors showed on the same dataset that men's brains have more complex features and, thus, higher entropy.
Fig. 2. (a) Mean voxel values for each feature in the male/female groups. Features that are significantly larger for men are marked with *, features that are significantly larger for women are marked with +. (b) Mean entropy values for each feature in the male/female groups.
For the two target classes in the sex classification task, the Meaningful Perturbations algorithm yields two different attention masks. The two masks for men and women appear to highlight different regions of interest and were therefore explored separately. We performed 10-fold cross-validation to check the 3D CNN performance on images restricted to the masks: every validation sample was multiplied voxel-wise by the average male mask, by the average female mask, or by the sum of these masks. Judging by the resulting accuracies (with standard deviations of 0.13 for the male mask, 0.09 for the female mask, and 0.11 for the conjoined mask), we can conclude that all the information necessary for the classification problem is contained in the male and female masks taken together (in their conjunction). The difference between the male and female masks may be explained by the specifics of the algorithm: we search for the smallest region of the input image whose deletion decreases the probability of the specific class. In Fig. 3a we show the final mask, which contains the regions for men and women.

Next, we segmented each MR image into 246 gray matter regions according to the Human Brainnetome Atlas [7] and 50 white matter regions according to the ICBM-81 White-Matter Labels Atlas [15].
Fig. 3.
Cross-sectional view of the three attention maps for 3D CNN interpretation obtained with: a. Meaningful Perturbations (conjoined male and female attention mask), b. Guided Backpropagation, c. GradCAM. The greater the voxel's value in each mask, the more important this voxel is for classification.

For each region of each brain atlas, we estimated the fraction of the region's voxels included in the mask obtained via Meaningful Perturbations. We normalized these fractions so that the values over all regions sum up to 1. The top-5 regions of each atlas with the largest scores, as proposed in [27], are presented in Table 2.
Table 2.
The most discriminative regions of each atlas obtained with the Meaningful Perturbations method.

ICBM White-Matter Labels Atlas
Region                           Score
Corticospinal tract right        0.1273
Corticospinal tract left         0.0927
Anterior corona radiata right    0.0594
Pontine crossing tract           0.0580
Cerebral peduncle left           0.0488

Human Brainnetome Atlas
Region                           Male score
OrG R 6 5                        0.0131
OrG R 6 3                        0.0124
IPL R 6 2                        0.0120
MFG R 7 1                        0.0118
PhG L 6 5                        0.0117
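A rough sketch of this region scoring is shown below. It assumes the atlas label volume and the (thresholded) Meaningful Perturbations mask are NIfTI files in the same space; the file names and the binarization threshold are placeholders.

```python
# Per-region scoring of an attention mask against a labeled brain atlas.
import numpy as np
import nibabel as nib

def region_scores(mask_path, atlas_path, threshold=0.5):
    mask = nib.load(mask_path).get_fdata() > threshold
    atlas = nib.load(atlas_path).get_fdata().astype(int)
    scores = {}
    for label in np.unique(atlas):
        if label == 0:                        # 0 is background
            continue
        region = atlas == label
        scores[label] = mask[region].mean()   # fraction of region inside mask
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}

# Example (hypothetical file names):
# scores = region_scores("mp_mask.nii.gz", "brainnetome_atlas.nii.gz")
# top5 = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:5]
```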
These findings partially overlap with the morphometry results, showing common white matter regions in the corpus callosum (Anterior corona radiata) as well as the cerebellum (Cerebral peduncle). The common gray matter region overlaps the frontal gyri (MFG R).
We computed a saliency map for every person in the dataset and then took the mean over the dataset. As we have two classes in our dataset, the final map contains the regions of interest for each class, see Fig. 3b.
We computed the corresponding localization masks, containing information about both the male and female regions of interest. The cross-sectional view of the result is shown in Fig. 3c.
We reproduced the state-of-the-art 3D CNN model from the DTI study [27] and found similar differences on T1-weighted MRI images. The model's mean 10-fold cross-validation accuracy is slightly higher than that of the morphometry data classification, with less variance. We are the first, to the best of our knowledge, to apply several network interpretation methods to the 3D CNN model (Meaningful Perturbations, GradCAM and Guided Backpropagation) to find gender-specific patterns and to compare their performances. We observed similar results, which means that the masks computed with all three methods reveal similar patterns and are thus trustworthy. We found that GradCAM is the fastest and a ready plug-and-play method, while Meaningful Perturbations is the slowest one, yet it produces the most anatomically plausible attention maps. Our deep learning results are in line with the conventional machine learning classification results on morphometric data. We also publish the code as an open-source library for public use. The proposed interpretation tool could be successfully used in various MRI pathology detection applications such as epilepsy detection, Alzheimer's disease diagnosis, Autism Spectrum Disorder classification, and others.
In the current work we aimed at studying sex-related differences in the human brain. In order to localize the most informative brain areas for the classification task, we created attention maps for the 3D CNN output in three different ways. Using these maps we were able to determine which brain regions play the most important role in the sex classification task. For men, the brain region with the highest classification accuracy was the Parietal Lobe, namely the Superior parietal lobule and the Inferior parietal lobule (Brodmann areas 5 and 7), in line with previous studies [19], where it was shown that parietal lobe activity is biased to the right hemisphere in men. Comparing this result to [27], we can notice that the region with the highest classification accuracy in their study, the left precuneus (BA 31), is cytoarchitecturally bounded by the superior and inferior parietal lobules, so we may suppose sex-related structural differences in this region. Moreover, the Medial frontal gyrus (BA 6, 8, 10, 46) contributed significantly to the classification task, which could be explained by the morphological asymmetry of the medial frontal gyrus in the male brain [22].

Female brain analysis shows different brain regions with high classification accuracy. In line with the previous study [27], we detected the orbital gyrus (BA 13, 14) to be essential for sex detection. Moreover, in the female brain the parahippocampal gyrus also appears to affect the classification accuracy. Our results show that the Amygdala and Hippocampus (part of the subcortical nuclei) turned out to be regions of high classification accuracy, although previous studies report that both these regions are not sexually dimorphic [25], [12].

In white matter we found the following regions in the male brain with the highest classification accuracy: cingulate gyrus, middle cerebellar peduncle, Anterior corona radiata left, Posterior thalamic radiation and corpus callosum, which is in line with previous studies [26], [1]. We also found that the middle cerebellar peduncle is informative for the classification task, in line with previous studies [11]. The cingulate gyrus as well as the
Posterior thalamic radiation were both detected to have sex differences [2]. We also found regions in the limbic-thalamo-cortical circuitry which exhibit gender-related differences (cingulate gyrus and Anterior corona radiata), which also coincides with the results of the previous study [27]. It is also worth noting that the attention maps in Fig. 3a show the spatial pattern of the frontoparietal resting-state brain network, which was initially discovered from resting-state fMRI activity and is thought to be involved in a wide variety of tasks by initiating and modulating cognitive control abilities [28]. It might be interesting in future research to look specifically at this network and explore it in terms of gender-related brain differences.
References
1. Bava, S., Boucquey, V., Goldenberg, D., Thayer, R.E., Ward, M., Jacobus, J., Tapert, S.F.: Sex differences in adolescent white matter architecture. Brain Research, 41–48 (2011)
2. Brun, C.C., Lepore, N., Luders, E., Chou, Y.Y., Madsen, S.K., Toga, A.W., Thompson, P.M.: Sex differences in brain structure in auditory and cingulate regions. Neuroreport (10), 930 (2009)
3. Cahill, L.: Why sex matters for neuroscience. Nature Reviews Neuroscience (6), 477–484 (2006)
4. Chen, X., Fang, H., Lin, T.Y., Vedantam, R., Gupta, S., Dollár, P., Zitnick, C.L.: Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325 (2015)
5. Cosgrove, K.P., Mazure, C.M., Staley, J.K.: Evolving knowledge of sex differences in brain structure, function, and chemistry. Biological Psychiatry (8), 847–855 (2007)
6. Dou, Q., Chen, H., Yu, L., Zhao, L., Qin, J., Wang, D., Mok, V.C., Shi, L., Heng, P.A.: Automatic detection of cerebral microbleeds from MR images via 3D convolutional neural networks. IEEE Transactions on Medical Imaging (5), 1182–1195 (2016)
7. Fan, L., Li, H., Zhuo, J., Zhang, Y., Wang, J., Chen, L., Yang, Z., Chu, C., Xie, S., Laird, A.R., et al.: The Human Brainnetome Atlas: a new brain atlas based on connectional architecture. Cerebral Cortex (8), 3508–3526 (2016)
8. Fischl, B.: FreeSurfer. NeuroImage (2), 774–781 (2012)
9. Fong, R.C., Vedaldi, A.: Interpretable explanations of black boxes by meaningful perturbation. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 3429–3437 (2017)
10. Gong, E., Pauly, J.M., Wintermark, M., Zaharchuk, G.: Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI. Journal of Magnetic Resonance Imaging (2), 330–340 (2018)
11. Kanaan, R.A., Chaddock, C., Allin, M., Picchioni, M.M., Daly, E., Shergill, S.S., McGuire, P.K.: Gender influence on white matter microstructure: a tract-based spatial statistics analysis. PLoS One (3) (2014)
12. Kret, M.E., De Gelder, B.: A review on sex differences in processing emotional signals. Neuropsychologia (7), 1211–1221 (2012)
13. Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 (2016)
14. Lasič, S., Szczepankiewicz, F., Eriksson, S., Nilsson, M., Topgaard, D.: Microanisotropy imaging: quantification of microscopic diffusion anisotropy and orientational order parameter by diffusion MRI with magic-angle spinning of the q-vector. Frontiers in Physics, 11 (2014)
15. Mori, S., Wakana, S., Nagae-Poetscher, L., Van Zijl, P.: MRI atlas of human white matter. American Journal of Neuroradiology (6), 1384 (2006)
16. Pawlowski, N., Glocker, B.: Is texture predictive for age and sex in brain MRI? arXiv preprint arXiv:1907.10961 (2019)
17. Pominova, M., Artemov, A., Sharaev, M., Kondrateva, E., Bernstein, A., Burnaev, E.: Voxelwise 3D convolutional and recurrent neural networks for epilepsy and depression diagnostics from structural and functional MRI data. In: 2018 IEEE International Conference on Data Mining Workshops (ICDMW). pp. 299–307. IEEE (2018)
18. Pominova, M., Kuzina, A., Kondrateva, E., Sushchinskaya, S., Burnaev, E., Yarkin, V., Sharaev, M.: Ensemble of 3D CNN regressors with data fusion for fluid intelligence prediction. In: Challenge in Adolescent Brain Cognitive Development Neurocognitive Prediction. pp. 158–166. Springer (2019)
19. Rescher, B., Rappelsberger, P.: Gender dependent EEG-changes during a mental rotation task. International Journal of Psychophysiology (3), 209–222 (1999)
20. Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: Visual explanations from deep networks via gradient-based localization. International Journal of Computer Vision (2), 336–359 (Oct 2019). https://doi.org/10.1007/s11263-019-01228-7
21. Sharaev, M., Artemov, A., Kondrateva, E., Sushchinskaya, S., Burnaev, E., Bernstein, A., Akzhigitov, R., Andreev, A.: MRI-based diagnostics of depression concomitant with epilepsy: in search of the potential biomarkers. In: 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA). pp. 555–564. IEEE (2018)
22. Spasojević, G., Stojanović, Z., Šuščević, D., Malobabić, S., Rafajlovski, S., Tatić, V.: Asymmetry and sexual dimorphism of the medial frontal gyrus visible surface in humans. Vojnosanitetski Pregled (2), 123–127 (2010)
23. Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806 (2014)
24. Suk, H.I., Lee, S.W., Shen, D., the Alzheimer's Disease Neuroimaging Initiative, et al.: Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis. NeuroImage, 569–582 (2014)
25. Tan, A., Ma, W., Vira, A., Marwha, D., Eliot, L.: The human hippocampus is not sexually-dimorphic: meta-analysis of structural MRI volumes. NeuroImage, 350–366 (2016)
26. Westerhausen, R., Walter, C., Kreuder, F., Wittling, R.A., Schweiger, E., Wittling, W.: The influence of handedness and gender on the microstructure of the human corpus callosum: a diffusion-tensor magnetic resonance imaging study. Neuroscience Letters (2), 99–102 (2003)
27. Xin, J., Zhang, X.Y., Tang, Y., Yang, Y.: Brain differences between men and women: Evidence from deep learning. Frontiers in Neuroscience, 185 (2019)
28. Zanto, T.P., Gazzaley, A.: Fronto-parietal network: flexible hub of cognitive control. Trends in Cognitive Sciences, 602–603 (2013)
29. Zhang, W., Li, R., Deng, H., Wang, L., Lin, W., Ji, S., Shen, D.: Deep convolutional neural networks for multi-modality isointense infant brain image segmentation. NeuroImage 108