GSR-Net: Graph Super-Resolution Network for Predicting High-Resolution from Low-Resolution Functional Brain Connectomes
Megi Isallari and Islem Rekik*
BASIRA Lab, Faculty of Computer and Informatics, Istanbul Technical University, Istanbul, Turkey
Abstract.
Catchy but rigorous deep learning architectures were tailored for image super-resolution (SR); however, these fail to generalize to non-Euclidean data such as brain connectomes. Specifically, building generative models for super-resolving a low-resolution brain connectome at a higher resolution (i.e., adding new graph nodes/edges) remains unexplored, although this would circumvent the need for costly data collection and manual labelling of anatomical brain regions (i.e., parcellation). To fill this gap, we introduce GSR-Net (Graph Super-Resolution Network), the first super-resolution framework operating on graph-structured data that generates high-resolution brain graphs from low-resolution graphs.
First, we adopt a U-Net like architecture based on graph convolution, pooling and unpooling operations specific to non-Euclidean data. However, unlike conventional U-Nets where graph nodes represent samples and node features are mapped to a low-dimensional space (encoding and decoding node attributes or sample features), our GSR-Net operates directly on a single connectome: a fully connected graph where, conventionally, a node denotes a brain region, nodes have no features, and edge weights denote brain connectivity strength between two regions of interest (ROIs). In the absence of original node features, we initially assign identity feature vectors to each brain ROI (node) and then leverage the learned local receptive fields to learn node feature representations. Specifically, for each ROI, we learn a node feature embedding by locally averaging the features of its neighboring nodes based on their connectivity weights.
Second, inspired by spectral theory, we break the symmetry of the U-Net architecture by topping it up with a graph super-resolution (GSR) layer and two graph convolutional network layers to predict a HR (high-resolution) graph while preserving the characteristics of the LR (low-resolution) input. Our proposed GSR-Net framework outperformed its variants for predicting high-resolution brain functional connectomes from low-resolution connectomes. Our Python GSR-Net code is available on BASIRA GitHub at https://github.com/basiralab/GSR-Net .

* Corresponding author: [email protected], http://basira-lab.com . This work is accepted for publication in the Machine Learning in Medical Imaging workshop Springer proceedings, in conjunction with MICCAI 2020.

Introduction
Remarkable progress in diagnosing brain disorders and exploring brain anatomy has been made using neuroimaging modalities (such as MRI (magnetic resonance imaging) or DTI (diffusion tensor imaging)). Recent advances in ultra-high field (7 Tesla) MRI help show fine-grained variations in brain structure and function. However, MRI data at submillimeter resolutions is very scarce due to the limited number and high cost of ultra-high field scanners. To circumvent this issue, several works explored the prospect of super-resolution to map a brain intensity image of low resolution to an image of higher resolution [1,2,3]. In recent years, advances in deep learning have inspired a multitude of works in image super-resolution, ranging from early approaches using Convolutional Neural Networks (CNNs) (e.g., SRCNN [4]) to state-of-the-art methods such as Generative Adversarial Nets (GANs) (e.g., SRGAN [5]). For instance, [6] used Convolutional Neural Networks to generate 7T-like MRI images from 3T MRI and, more recently, [7] used ensemble learning to synergize high-resolution GANs of MRI differentially enlarged with complementary priors. While a significant number of image super-resolution methods have been proposed for MRI super-resolution, super-resolving brain connectomes (i.e., brain graphs) remains largely unexplored. Typically, a brain connectome is the product of a very complex neuroimage processing pipeline that integrates MRI images into pre-processing and analysis steps, from skull stripping to cortical thickness estimation, tissue segmentation, and registration to a brain atlas [8]. To generate brain connectomes at different resolutions, one conventionally uses an image brain atlas (template) to define the parcellation of the brain into N (depending on the resolution) anatomical regions of interest (ROIs).
A typical brain connectome comprises N nodes, where a node denotes a brain ROI and edge weights denote brain connectivity strength between two ROIs (e.g., correlation between neural activity or similarity in brain morphology) [9,10]. However, this process has two main drawbacks: (1) the computational time per subject is very high, and (2) pre-processing steps such as registration and label propagation are highly prone to variability and bias [11,12]. Alternatively, given a low-resolution (LR) connectome, one can devise a systematic method to automatically generate a high-resolution (HR) connectome and thus circumvent the need for costly neuroimage processing pipelines.
However, such a method would have to address two major challenges.
First, standard downsampling/upsampling techniques are not easily generalizable to non-Euclidean data due to the complexity of network data. The high computational complexity, low parallelizability, and inapplicability of machine learning methods to geometric data render image super-resolution algorithms ineffective [13].
Second, upsampling (super-resolution) in particular is a notoriously ill-posed problem, since the LR connectome can be mapped to a variety of possible solutions in HR space. Furthermore, while unpooling (deconvolution) is a recurring concept in graph embedding approaches, it typically focuses on graph embedding reconstruction rather than on expanding the topology of the graph [14]. Two recent pioneering works have tackled the problem of graph super-resolution [15,16]; however, both share the dichotomized aspect of the engineered learning-based GSR framework, which is composed of independent blocks that cannot co-learn together to better solve the target super-resolution problem. Besides, both resort to first vectorizing LR brain graphs at the beginning of the learning process, thereby spoiling the rich topology of the brain as a connectome. To address these limitations, we propose GSR-Net: the first geometric deep learning framework that attempts to solve the problem of predicting a high-resolution connectome from a low-resolution connectome. The key idea of GSR-Net can be summarized in three fundamental steps: (i) learning feature embeddings for each brain ROI (node) in the LR connectome; (ii) designing a graph super-resolution operation that predicts an HR connectome from the LR connectivity matrix and the feature embeddings of the LR connectome computed in (i); (iii) learning node feature embeddings for each node in the super-resolved (HR) graph obtained in (ii). First, we adopt a U-Net like architecture and introduce the Graph U-Autoencoder. Specifically, we leverage the Graph U-Net proposed in [14]: an encoder-decoder architecture based on graph convolution, pooling and unpooling operations that specifically work on non-Euclidean data. However, as with most graph embedding methods, the Graph U-Net focuses on typical graph analytic tasks such as link prediction or node classification rather than super-resolution.
Particularly, the conventional Graph U-Net is a node-focused architecture where a node n represents a sample, and mapping the node n to an m-dimensional space (i.e., a simpler representation) depends on the node and its attributes [17]. Our Graph U-Autoencoder, on the other hand, is a graph-focused architecture where a sample is represented by a connectome: a fully connected graph where, conventionally, nodes have no features and edge weights denote brain connectivity strength between two nodes. We unify both these concepts by learning a mapping of the node n to an m-dimensional space that translates the topological relationships between the nodes in the connectome into node features. Namely, we initially assign identity feature vectors to each brain ROI and learn node feature embeddings by locally averaging the features of each node's neighboring nodes based on their connectivity weights. Second, we break the symmetry of the U-Net architecture by adding a GSR layer to generate an HR connectome from the node feature embeddings of the LR connectome learned in the Graph U-Autoencoder block. Specifically, in our GSR block, we propose a layer-wise propagation rule for super-resolving low-resolution brain graphs, rooted in spectral graph theory. Third, we stack two additional graph convolutional network layers to learn node feature embeddings for each brain ROI in the super-resolved graph.

Problem Definition.
A connectome can be represented as C = {V, E, X}, where V is a set of nodes and E is a set of edges connecting pairs of nodes. The network nodes are defined as brain ROIs. The connectivity (adjacency) matrix A is an N × N matrix (N is the number of nodes), where A_ij denotes the connectivity weight between two ROIs i and j using a specific metric (e.g., correlation between neural activity or similarity in brain morphology). Let X ∈ R^{N×F} denote the feature matrix, where N is the number of nodes and F is the number of features (i.e., connectivity weights) per node. Each training subject s in our dataset is represented by two connectivity matrices in the LR and HR domains, denoted as C_l = {V_l, E_l, X_l} and C_h = {V_h, E_h, X_h}, respectively. Given a brain graph C_l, our objective is to learn a mapping f : (A_l, X_l) ↦ (A_h, X_h), which maps C_l onto C_h.

Fig. 1: Proposed framework of Graph Super-Resolution Network (GSR-Net) for super-resolving low-resolution brain connectomes. (A) Graph U-Autoencoder Block. Our Graph U-Autoencoder is built by stacking two encoding modules and two decoding modules. An encoding module contains a graph pooling layer and a graph convolutional network (GCN), and its inverse operation is a decoding module comprised of a graph unpooling layer and a GCN. Here, we integrate a self-reconstruction loss L_rec that guides the learning of node feature embeddings for each brain ROI in the LR connectome. (B) Super-Resolution Block. The GSR layer super-resolves both the topological structure of the LR connectome (connectivity matrix A_l) and the feature matrix of the LR connectome (X_l). To super-resolve A_l, we propose the layer-wise propagation rule Ã_h = W S_d U* Z_l, where W is a matrix of trainable filters that we enforce to match the eigenvector matrix of the HR graph via an eigen-decomposition loss L_eig, S_d is the concatenation of two identity matrices, U is the eigenvector matrix of A_l, and Z_l is the matrix of node feature embeddings of the LR brain graph generated in (A). The propagation rule for the feature matrix super-resolution is X̃_h = Ã_h Ã_h^T. (C) Loss function. Our GSR-Net loss comprises a self-reconstruction loss L_rec, a super-resolution loss L_hr, and an eigen-decomposition loss L_eig to optimize learning the predicted HR connectome from a LR connectome.
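To make the problem setup above concrete, a toy LR connectome can be built as follows (an illustrative numpy sketch; the sizes and random weights are ours, not from the paper):

```python
import numpy as np

# Toy LR connectome with N = 4 ROIs: a symmetric connectivity matrix with
# no self-connections, and identity node features (nodes have no attributes
# of their own, so X_l = I_N as described in the text).
rng = np.random.default_rng(0)
N = 4
A_l = rng.random((N, N))
A_l = (A_l + A_l.T) / 2        # connectivity strengths are symmetric
np.fill_diagonal(A_l, 0.0)     # an ROI has no edge to itself
X_l = np.eye(N)                # identity feature matrix I_N
```

The goal is then to learn the mapping f : (A_l, X_l) ↦ (A_h, X_h) to a larger HR connectome.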
Overall Framework. In Fig. 1, we illustrate the proposed GSR-Net architecture, including: (i) an asymmetric Graph U-Autoencoder to learn the feature embedding matrix Z_l of a LR brain graph by f_l : (A_l, X_l) ↦ Z_l; (ii) a graph super-resolution (GSR) layer mapping the LR graph embeddings Z_l and the LR connectivity matrix to a HR feature matrix and connectivity matrix by f_h : (A_l, Z_l) ↦ (Ã_h, X̃_h); (iii) learning the HR feature embeddings Z_h by stacking two graph convolutional layers as f_z : (Ã_h, X̃_h) ↦ Z_h; and (iv) computing the loss function L.
1. Graph U-Autoencoder.
U-Net architectures have long achieved state-of-the-art performance in various tasks thanks to their encoding-decoding nature for high-level feature extraction and embedding. In the first step of our GSR-Net, we adopt the concept of Graph U-Nets [14] based on learning node representations from node attributes, and we extend this idea to learning node representations from topological relationships between nodes. To learn node feature embeddings of a given LR connectome C_l = {V_l, E_l, X_l}, we propose a Graph U-Autoencoder comprising a Graph U-Encoder and a Graph U-Decoder.

Graph U-Encoder. The Graph U-Encoder inputs the adjacency matrix A_l ∈ R^{N×N} of C_l = {V_l, E_l, X_l} (N is the number of nodes of C_l) as well as the feature matrix capturing the node content of the graph, X_l ∈ R^{N×F}. In the absence of original node features, we assign an identity matrix I_N ∈ R^{N×N} to the feature matrix X_l, so the encoder is only informed of the identity of each node. We build the Graph U-Encoder by stacking multiple encoding modules, each containing a graph pooling layer followed by a graph convolutional layer. Each encoding block is intuitively expected to encode high-level features by downsampling the connectome and aggregating content from each node's local topological neighborhood. However, as a graph-focused approach where the sample is represented by a connectome and the connectome's nodes are featureless, our Graph U-Encoder defines the notion of locality by edge weights rather than node features. Specifically, the pooling layer adaptively selects a few nodes to form a smaller brain graph in order to increase the local receptive field and, for each node, the GCN layer aggregates (locally averages) the features of its neighboring nodes based on their connectivity weights.

Graph Pooling Layer.
The layer's propagation rule can be defined as follows:

v = X_l^(l) u^(l) / ||u^(l)||; indices = rank(v, k); ṽ = sigmoid(v(indices)); X̃_l^(l) = X_l^(l)(indices, :); A_l^(l+1) = A_l^(l)(indices, indices); X_l^(l+1) = X̃_l^(l) ⊙ (ṽ 1_F^T)

The graph pooling layer adaptively selects a subset of nodes to form a new smaller graph based on their scalar projection values onto a trainable projection vector u. First, we find the scalar projection of X_l onto u, which yields a one-dimensional vector v, where v_i is the scalar projection of node i onto u. We select the k largest values in v, whose positions are saved as the indices of the nodes retained in the new downsampled graph. According to these indices, we extract the feature matrix rows of the selected nodes (X_l^(l)(indices, :)) as well as the respective adjacency matrix rows and columns to obtain the adjacency matrix of the downsampled graph: A_l^(l+1) = A_l^(l)(indices, indices). Hence, this reduces the graph size from N to k: A_l^(l+1) ∈ R^{k×k}. In the end, by applying a sigmoid mapping to the selected entries of v, we obtain the gate vector ṽ ∈ R^k, which we multiply with 1_F^T (a one-dimensional vector with all F elements equal to 1). The product ṽ 1_F^T is then multiplied element-wise with X̃_l^(l) to gate the information of the selected nodes and obtain the new feature matrix of the downsampled graph X_l^(l+1) ∈ R^{k×F}.

Graph U-Decoder. Similarly to the Graph U-Encoder, the Graph U-Decoder is built by stacking multiple decoding modules, each comprising a graph unpooling layer followed by a graph convolutional layer. Each decoding module acts as the inverse operation of its encoding counterpart by gradually upsampling and aggregating neighborhood information for each node.
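The top-k pooling rule above, together with the inverse relocation it enables in the unpooling layer, can be sketched in numpy (an illustrative reimplementation with hypothetical function names, not the authors' code; in practice the projection vector u is trained):

```python
import numpy as np

def graph_pool(A, X, u, k):
    """Top-k pooling: project features onto u, keep the k highest-scoring
    nodes, gate their features with sigmoid(v), and slice A accordingly."""
    v = X @ u / np.linalg.norm(u)              # scalar projection per node
    indices = np.argsort(v)[::-1][:k]          # indices = rank(v, k)
    gate = 1.0 / (1.0 + np.exp(-v[indices]))   # gate = sigmoid(v(indices))
    A_new = A[np.ix_(indices, indices)]        # k x k downsampled adjacency
    X_new = X[indices, :] * gate[:, None]      # element-wise gating over F
    return A_new, X_new, indices

def graph_unpool(X_small, indices, N):
    """Inverse relocation: put the pooled rows back at their saved indices,
    leaving the remaining rows empty (zero)."""
    X_full = np.zeros((N, X_small.shape[1]))
    X_full[indices, :] = X_small
    return X_full

# Toy run: keep 2 of 4 nodes, then relocate their features back.
rng = np.random.default_rng(0)
A = rng.random((4, 4)); A = (A + A.T) / 2
X, u = np.eye(4), rng.random(4)
A2, X2, idx = graph_pool(A, X, u, k=2)
X_back = graph_unpool(X2, idx, N=4)
```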
Graph Unpooling Layer.
The graph unpooling layer retracts the graph pooling operation by relocating the nodes to their original positions according to the saved indices of the nodes selected in the pooled graph. Formally, we write X_l^(l+1) = relocate(0_{N×F}, X_l^(l), indices), where 0_{N×F} is the initially empty (zero) N × F feature matrix of the new graph, X_l^(l) ∈ R^{k×F} is the feature matrix of the current downsampled graph, and the relocate operation assigns the row vectors of X_l^(l) into the N × F feature matrix according to their corresponding indices stored in indices.

Graph U-Autoencoder for super-resolution.
Next, we introduce our Graph U-Autoencoder, which first includes a GCN to learn an initial node representation of the LR connectome. This first GCN layer takes as input (A_l, X_l) and outputs Z ∈ R^{N×NK}: a node feature embedding matrix with NK features per node, where K is the factor by which the resolution increases when we predict the HR graph from a LR graph (F is specifically chosen to be NK for reasons we explore in greater detail in the next section). The transformation can be defined as follows: Z = σ(D̂^{-1/2} Â D̂^{-1/2} X_l W_l), where D̂ is the diagonal node degree matrix, Â = A + I is the adjacency matrix with added self-loops, σ is the activation function, and W_l is a matrix of trainable filter parameters to learn. Next, we apply two encoding blocks followed by two decoding blocks, outputting Z_l ∈ R^{N×NK}: Z_l = GraphUAutoencoder(Â_l, Z).

Optimization. To improve and regularize the training of our graph autoencoder model such that the LR connectome embeddings preserve the topological structure A_l and node content information X_l of the original LR connectome, we enforce the learned LR node feature embedding Z_l to match the initial node feature embedding Z of the LR connectome. In our loss function, we integrate a self-reconstruction regularization term which minimizes the mean squared error (MSE) between the node representation Z and the output of the Graph U-Autoencoder Z_l: L_rec = (λ/N) Σ_{i=1}^N ||Z_i − Z_l_i||².
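The GCN propagation rule and the self-reconstruction term above can be sketched as follows (illustrative numpy code with toy sizes; `gcn_layer` and `rec_loss` are our names, and ReLU stands in for the unspecified activation σ):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer: Z = ReLU(D^{-1/2} (A + I) D^{-1/2} X W),
    i.e. symmetric normalization with self-loops, then a linear map."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(0.0, A_norm @ X @ W)     # ReLU activation

def rec_loss(Z, Z_l, lam=16.0):
    """Self-reconstruction term: (lam / N) * sum_i ||Z_i - Z_l_i||^2."""
    return lam / Z.shape[0] * np.sum((Z - Z_l) ** 2)

# Toy LR graph: N = 4 nodes, identity features, embedding dimension NK = 8.
rng = np.random.default_rng(0)
A_l = rng.random((4, 4)); A_l = (A_l + A_l.T) / 2
Z = gcn_layer(A_l, np.eye(4), rng.random((4, 8)))
```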
2. Proposed GSR layer.
Super-resolution plays an important role for grid-like data, but standard image operations are not directly applicable to graph data. In particular, there is no spatial locality information among nodes in graphs. In this section, we present a mathematical formalization of the GSR layer, which is the key operation for predicting a high-resolution graph C_h from the low-resolution brain graph C_l. Recently, [18] proposed a novel upsampling method rooted in graph Laplacian decomposition that aims to upsample a graph signal while retaining the frequency-domain characteristics of the original signal defined in the time/spatial domain. To define our GSR layer, we leverage this spectral upsampling concept to expand the size of the graph while preserving the local information of each node and the global structure of the graph using the spectrum of its graph Laplacian.

Suppose L_l ∈ R^{N×N} and L_h ∈ R^{NK×NK} are the graph Laplacians of the original low-resolution graph and the high-resolution (upsampled) graph, respectively (K is the factor by which the resolution of the graph increases). Given L_l and L_h, their respective eigendecompositions are L_l = U_l Λ_l U_l* and L_h = U_h Λ_h U_h*, where U_l ∈ R^{N×N} and U_h ∈ R^{NK×NK}. In matrix form, our graph upsampling can be defined as x_u = U_h S_d U_l* x, where S_d = [I_{N×N} I_{N×N}]^T, x is a signal on the input graph, and x_u denotes the upsampled signal. We can generalize the matrix form to a signal X_l ∈ R^{N×F} with F input channels (i.e., an F-dimensional vector for every node) as follows: Ã_h = U_h S_d U_l* X_l. To generate an NK × NK resolution graph, the number of input channels F of X_l should be set to NK. This is why the output of the Graph U-Autoencoder Z_l (which becomes the input X_l of the GSR layer) is specified to be of dimensions N × NK.

Super-resolving the graph structure. To predict Ã_h, we first predict the eigenvectors U_h of the ground truth high-resolution A_h.
We formalize the learnable parameters in this GSR layer as a matrix W ∈ R^{NK×NK}, learned such that the distance error between the weights and the eigenvectors U_h of the ground truth high-resolution A_h is minimized. Hence, the propagation rule for our layer is Ã_h = W S_d U_l* Z_l.

Super-resolving the graph node features. To super-resolve the feature matrix, or assign feature vectors to the new nodes (at this point, the new nodes do not have meaningful representations), we again leverage the concept of translating topological relationships between nodes into node features. By adding new nodes and edges while attempting to retain the characteristics of the original low-resolution brain graph, it is highly probable that some new nodes and edges will remain isolated, which might cause loss of information in the subsequent layers. To avoid this, we initialize the target feature matrix X̃_h as follows: X̃_h = Ã_h Ã_h^T. This operation links nodes at a maximum two-hop distance and increases connectivity between nodes [19]. Each node is then assigned a feature vector that satisfies this property. Notably, both the adjacency and feature matrices are converted to symmetric matrices, mimicking realistic predictions: Ã_h = (Ã_h + Ã_h^T)/2 and X̃_h = (X̃_h + X̃_h^T)/2.

Optimization.
To learn trainable filters which enforce the super-resolved connectome's eigendecomposition to match that of the ground truth HR connectome (i.e., preserving both local and global topologies), we further add the eigen-decomposition loss: the MSE between the weights and the eigenvectors U_h of the ground truth high-resolution A_h: L_eig = (1/N) Σ_{i=1}^N ||W_i − U_h_i||².
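Putting the GSR layer together, here is a minimal numpy sketch of its forward pass (our reimplementation, not the released code; we take K = 2 so that S_d exactly doubles the graph size, and a random matrix stands in for the trained filters W):

```python
import numpy as np

def gsr_layer(A_l, Z_l, W):
    """Sketch of the GSR layer: A_h = W S_d U_l* Z_l, symmetrized, then the
    two-hop feature initialization X_h = A_h A_h^T described above."""
    N = A_l.shape[0]
    L_l = np.diag(A_l.sum(axis=1)) - A_l      # LR graph Laplacian
    _, U_l = np.linalg.eigh(L_l)              # L_l = U_l Lambda_l U_l*
    S_d = np.vstack([np.eye(N), np.eye(N)])   # (2N, N) identity concatenation
    A_h = W @ S_d @ U_l.T @ Z_l               # predicted HR adjacency (NK, NK)
    A_h = (A_h + A_h.T) / 2                   # symmetrize the prediction
    X_h = A_h @ A_h.T                         # two-hop feature initialization
    return A_h, (X_h + X_h.T) / 2

# Toy sizes: N = 4 LR nodes, K = 2, so the HR graph has NK = 8 nodes.
rng = np.random.default_rng(1)
N, K = 4, 2
A_l = rng.random((N, N)); A_l = (A_l + A_l.T) / 2
Z_l = rng.random((N, N * K))                  # LR embeddings with F = NK
W = rng.random((N * K, N * K))                # in training, pushed toward U_h
A_h, X_h = gsr_layer(A_l, Z_l, W)
```

During training, W would be optimized against L_eig so that it approximates the HR eigenvector matrix U_h.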
3. Additional graph embedding layers.
Following the GSR layer, we learn more representative ROI-specific feature embeddings of the super-resolved graph by stacking two additional GCNs: Z̃_h = GCN(Ã_h, X̃_h) and Z_h = GCN(Ã_h, Z̃_h). For each node, these embedding layers aggregate the feature vectors of its neighboring nodes, thus fully translating the connectivity weights into node features of the new super-resolved graph. The output of this third step constitutes the final GSR-Net prediction of the HR connectome from the input LR connectome. However, our predictions of the HR graph Z_h are of size NK × NK, and our target HR graph size might not satisfy such a multiplicity rule. In such a case, we can add isotropic padding to the HR adjacency matrix during the training stage and remove the extra padding in the loss evaluation step and in the final prediction.
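The padding step can be sketched as follows (a sketch under the assumption that isotropic padding means equal zero-padding of rows and columns; function names are ours):

```python
import numpy as np

def pad_hr(A, target):
    """Zero-pad an (n, n) ground-truth HR adjacency to (target, target) so
    that its size matches the NK x NK prediction during training."""
    pad = target - A.shape[0]
    return np.pad(A, ((0, pad), (0, pad)))

def crop_hr(A_pred, n):
    """Remove the extra padding before loss evaluation / final prediction."""
    return A_pred[:n, :n]

# E.g., a 3-node ground truth padded to a 4-node target size.
A = np.ones((3, 3))
A_padded = pad_hr(A, 4)
```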
Optimization. Our training process is primarily guided by the super-resolution loss, which minimizes the MSE between our super-resolved brain connectomes and the ground truth HR ones. The total GSR-Net loss function comprises the self-reconstruction loss, the eigen-decomposition loss, and the super-resolution loss, and is computed as follows:

L = L_hr + L_eig + λ L_rec = (1/N) Σ_{i=1}^N ||Z_h_i − A_h_i||² + (1/N) Σ_{i=1}^N ||W_i − U_h_i||² + (λ/N) Σ_{i=1}^N ||Z_i − Z_l_i||²

Connectomic dataset and parameter setting.
We used 5-fold cross-validation to evaluate our framework on 277 subjects from the Southwest University Longitudinal Imaging Multimodal (SLIM) study [20]. For each subject, two separate functional brain networks with 160 × 160 (LR) and 268 × 268 (HR) resolutions were produced using two groupwise whole-brain parcellation approaches proposed in [21] and [22], respectively. Our GSR-Net uses the Adam optimizer. We empirically set the parameter λ of the self-reconstruction regularization loss to 16.

Evaluation and comparison methods.
We benchmark the performance of our GSR-Net against different baseline methods: (1) GSR Layer: a variant of GSR-Net where we remove both the Graph U-Autoencoder and the additional GCN layers. (2) Deep GSR: in this variant, first, the node feature embedding matrix Z_l of the LR connectome is learned through two GCN layers; second, Z_l is inputted to the GSR layer; and third, we learn the node feature embeddings of the output of the GSR layer (i.e., the super-resolved graph) leveraging two more GCN layers and a final inner-product decoder layer. (3) GSR-Autoencoder: a variant of GSR-Net where we remove only the additional GCN layers.

Fig. 2: Comparison between the ground truth HR graph and the predicted HR graph of a representative subject. We display in (A) the residual error matrix computed using the mean squared error (MSE) between the ground truth and predicted super-resolved brain graphs. We plot in (B) the MSE results for each of the three baseline methods and our proposed GSR-Net.

Such a framework could eventually help chart the multi-scale landscape of brain dysconnectivity in a wide spectrum of disorders [10].
In this paper, we proposed GSR-Net, the first geometric deep learning framework for super-resolving low-resolution functional brain connectomes. Our method achieved the best graph super-resolution results in comparison with its ablated versions and other variants. However, there are a few limitations we need to address. To circumvent the high computational cost of a graph Laplacian, we can well-approximate the eigenvalue vector by a truncated expansion in terms of Chebyshev polynomials [24]. Future work includes refining our spectral upsampling theory towards fast computation, enhancing the scalability and interpretability of our GSR-Net architecture with recent advancements in geometric deep learning, and extending its applicability to large-scale multi-resolution brain connectomes [12]. Besides, we aim to condition the learning of the HR brain graph on a population-driven connectional brain template [25] to enforce the super-resolution of more biologically sound brain connectomes.
We provide three supplementary items on GSR-Net for reproducible and open science:
1. A 12-min YouTube video explaining how GSR-Net works on the BASIRA YouTube channel at https://youtu.be/xwHKRxgMaEM .
2. GSR-Net code in Python on GitHub at https://github.com/basiralab/GSR-Net .
3. A GitHub video code demo on the BASIRA YouTube channel at https://youtu.be/GahVu9NeOIg .

This project has been funded by the 2232 International Fellowship for Outstanding Researchers Program of TUBITAK (Project No: 118C288, http://basira-lab.com/reprime/ ) supporting I. Rekik. However, all scientific contributions made in this project are owned and approved solely by the authors.

References
1. Bahrami, K., Shi, F., Rekik, I., Gao, Y., Shen, D.: 7T-guided super-resolution of 3T MRI. Medical Physics (2017) 1661–1677
2. Chen, Y., Xie, Y., Zhou, Z., Shi, F., Christodoulou, A.G., Li, D.: Brain MRI super resolution using 3D deep densely connected neural networks. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE (2018) 739–742
3. Ebner, M., Wang, G., Li, W., Aertsen, M., Patel, P.A., Aughwane, R., Melbourne, A., Doel, T., Dymarkowski, S., De Coppi, P., et al.: An automated framework for localization, segmentation and super-resolution reconstruction of fetal brain MRI. NeuroImage (2020) 116324
4. Dong, C., Loy, C.C., He, K., Tang, X.: Image super-resolution using deep convolutional networks (2014)
5. Ledig, C., Theis, L., Huszar, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., Shi, W.: Photo-realistic single image super-resolution using a generative adversarial network (2016)
6. Bahrami, K., Shi, F., Rekik, I.: Convolutional neural network for reconstruction of 7T-like images from 3T MRI using appearance and anatomical features. International Conference on Medical Image Computing and Computer-Assisted Intervention (2016)
7. Lyu, Q., Shan, H., Wang, G.: MRI super-resolution with ensemble learning and complementary priors. IEEE Transactions on Computational Imaging (2020) 615–624
8. Bassett, D.S., Sporns, O.: Network neuroscience. Nature Neuroscience (2017) 353
9. Fornito, A., Zalesky, A., Breakspear, M.: The connectomics of brain disorders. Nature Reviews Neuroscience (2015) 159–172
10. Van den Heuvel, M.P., Sporns, O.: A cross-disorder connectome landscape of brain dysconnectivity. Nature Reviews Neuroscience (2019) 435–446
11. Qi, S., Meesters, S., Nicolay, K., ter Haar Romeny, B.M., Ossenblok, P.: The influence of construction methodology on structural brain network measures: A review. Journal of Neuroscience Methods (2015) 170–182
12. Bressler, S.L., Menon, V.: Large-scale brain networks in cognition: emerging methods and principles. Trends in Cognitive Sciences (2010) 277–290
13. Cui, P., Wang, X., Pei, J., Zhu, W.: A survey on network embedding (2017)
14. Gao, H., Ji, S.: Graph U-Nets. In Chaudhuri, K., Salakhutdinov, R., eds.: Proceedings of the 36th International Conference on Machine Learning. Volume 97 of Proceedings of Machine Learning Research, Long Beach, California, USA, PMLR (2019) 2083–2092
15. Cengiz, K., Rekik, I.: Predicting high-resolution brain networks using hierarchically embedded and aligned multi-resolution neighborhoods. International Workshop on PRedictive Intelligence In MEdicine (2019) 115–124
16. Mhiri, I., Khalifa, A.B., Mahjoub, M.A., Rekik, I.: Brain graph super-resolution for boosting neurological disorder diagnosis using unsupervised multi-topology connectional brain template learning. Medical Image Analysis (2020) 101768
17. Scarselli, F., Gori, M., Tsoi, A.C., Hagenbuchner, M., Monfardini, G.: The graph neural network model. IEEE Transactions on Neural Networks (2009) 61–80
18. Tanaka, Y.: Spectral domain sampling of graph signals. IEEE Transactions on Signal Processing (2018) 3752–3767
19. Chepuri, S.P., Leus, G.: Subsampling for graph power spectrum estimation (2016)
20. Liu, W., Wei, D., Chen, Q., Yang, W., Meng, J., Wu, G., Bi, T., Zhang, Q., Zuo, X.N., Qiu, J.: Longitudinal test-retest neuroimaging data from healthy young adults in southwest China. Scientific Data (2017)
21. Dosenbach, N.U., Nardos, B., Cohen, A.L., Fair, D.A., Power, J.D., Church, J.A., Nelson, S.M., Wig, G.S., Vogel, A.C., Lessov-Schlaggar, C.N., et al.: Prediction of individual brain maturity using fMRI. Science (2010) 1358–1361
22. Shen, X., Tokoglu, F., Papademetris, X., Constable, R.: Groupwise whole-brain parcellation from resting-state fMRI data for network node identification. NeuroImage (2013) 403–415
23. Van den Heuvel, M.P., Bullmore, E.T., Sporns, O.: Comparative connectomics. Trends in Cognitive Sciences (2016) 345–361
24. Hammond, D.K., Vandergheynst, P., Gribonval, R.: Wavelets on graphs via spectral graph theory (2009)
25. Dhifallah, S., Rekik, I., Initiative, A.D.N., et al.: Estimation of connectional brain templates using selective multi-view network normalization. Medical Image Analysis 59