Multi-scale brain networks
Richard F. Betzel and Danielle S. Bassett∗
Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, 19104 and
Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, 19104
(Dated: November 7, 2016)
∗ Corresponding author: dsb@seas.upenn.edu

The network architecture of the human brain has become a feature of increasing interest to the neuroscientific community, largely because of its potential to illuminate human cognition, its variation over development and aging, and its alteration in disease or injury. Traditional tools and approaches to study this architecture have largely focused on single scales – of topology, time, and space. Expanding beyond this narrow view, we focus this review on pertinent questions and novel methodological advances for the multi-scale brain. We separate our exposition into content related to multi-scale topological structure, multi-scale temporal structure, and multi-scale spatial structure. In each case, we recount empirical evidence for such structures, survey network-based methodological approaches to reveal these structures, and outline current frontiers and open questions. Although predominantly peppered with examples from human neuroimaging, we hope that this account will offer an accessible guide to any neuroscientist aiming to measure, characterize, and understand the full richness of the brain's multi-scale network structure – irrespective of species, imaging modality, or spatial resolution.
I. INTRODUCTION
Over the past decade, the neuroimaging community has witnessed a paradigm shift. The view that localized populations of neurons and individual brain regions support cognition and behavior has gradually given way to the realization that connectivity matters [1–5]. The complex spatiotemporal activity patterns that have been associated with cognition are underpinned by expansive networks of anatomical connections [6–8]. This shift has occurred in parallel with the maturation of another field, network science, which has made available a large set of analytic tools and frameworks for characterizing the organization of complex networks [9–11].

As with any new field, the best practices for constructing and analyzing brain networks are still evolving. Among recent developments is the understanding that brain networks are fundamentally multi-scale entities [12]. The meaning of "scale" can vary depending on context; here we focus on three possible definitions relevant to the study of brain networks. First, a network's spatial scale refers to the granularity at which its nodes and edges are defined and can range from that of individual cells and synapses [13–16] to brain regions and large-scale fiber tracts [3]. Second, networks can be characterized over temporal scales with precision ranging from sub-millisecond [17, 18] to that of the entire lifespan [19–21], to evolutionary changes across different species [22]. Finally, networks can be analyzed at different topological scales ranging from individual nodes to the network as a whole [23–25]. Collectively, these scales define the axes of a three-dimensional space in which any analysis of brain network data lives (Fig. 1). Most brain network analyses exist as points in this space – i.e., they focus on networks defined singularly at one spatial, temporal, and topological scale.
We argue that, while such studies have proven illuminating, in order to better understand the brain's true multi-scale, multi-modal nature, it is essential that our network analyses begin to form bridges that link different scales to one another.

In this review, we focus on two specific aspects of the multi-scale brain. First, we present and discuss variations of network algorithms (particularly, community detection) that make it possible to describe a network at multiple topological scales [26, 27]. We choose to focus on community detection – which we define carefully in the next section – because it encompasses one of the most frequently used sets of tools capable of extracting and characterizing network organization across a continuous range of scales. We do, of course, make mention of other alternatives. Next, we discuss the topic of multi-scale temporal networks and a set of multi-layer techniques for exploring brain networks at different temporal resolutions. In this section, we draw particular focus to the topic of multi-slice/layer community detection and its role in characterizing time-varying connectivity. Throughout both sections, we also comment on the methodological limitations of these methods, the best practices for their application, and possible future directions. This review is written for the neuroimaging community, and so the literature we cover and the examples that we present are selected to be especially relevant for researchers working with MRI data (whether functional, diffusion, or structural). Nonetheless, our frank discussion of multi-scale methods and views is broadly relevant and applicable to researchers working with other data modalities (including EEG, MEG, ECoG, and fNIRS) and at other spatial scales in humans or other species.
FIG. 1. The multi-scale brain.
Brain networks are organized across multiple spatiotemporal scales and can also be analyzed at topological (network) scales ranging from individual nodes to the network as a whole. Images of neuronal ensemble recordings, segmented axons, brain evolution, and gray-matter development adapted with permission from [28–31].
II. FUNCTIONAL AND STRUCTURAL BRAIN NETWORKS
With MRI data, network nodes are almost always parcels of gray-matter voxels (sometimes the voxels themselves are used as nodes [32]). Brain networks come in two basic flavors that differ from one another based on how connections are defined among nodes.
Structural or anatomical connectivity (SC) networks refer to nodes linked by physical connections. With MRI data, these connections usually reflect white-matter fiber tracts reconstructed from the application of tractography algorithms to diffusion images. Functional connectivity (FC) networks, on the other hand, refer to the strength of the statistical relationship between nodes' activity over time [33]. Usually this statistical relationship is operationalized as a Fisher-transformed correlation coefficient [34] or a coherence measure [35]. Both SC and FC networks are represented with a connectivity matrix, A, whose element A_ij is equal to the connection weight between regions i and j.

III. MULTI-SCALE NETWORK ANALYSIS
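Before proceeding, a minimal sketch of how the FC matrix defined in the previous section might be built – a Fisher-transformed correlation of regional time series. This is an illustrative sketch, not the pipeline of any particular study; the array `ts` (regions × time points) and the function name are assumptions introduced here.

```python
import numpy as np

def functional_connectivity(ts):
    """Build a functional connectivity matrix from regional time series.

    ts : array of shape (n_regions, n_timepoints)
    Returns an (n_regions, n_regions) matrix of Fisher-transformed
    (arctanh) Pearson correlations, with zeros on the diagonal.
    """
    r = np.corrcoef(ts)        # pairwise Pearson correlations
    np.fill_diagonal(r, 0.0)   # drop self-correlations before the transform
    return np.arctanh(r)       # Fisher r-to-z transform

# toy example: 4 regions, 100 time points of synthetic data
rng = np.random.default_rng(0)
ts = rng.standard_normal((4, 100))
A = functional_connectivity(ts)
```

In practice the time series would come from preprocessed BOLD data, and many studies additionally threshold or correct the resulting matrix before analysis.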
Network analysis is the process of interrogating an SC or FC network using tools derived from graph theory in order to better understand its character. It is important to note that this type of analysis takes explicit account of the network architecture of SC and FC – i.e., that the collective organization and configuration of connections gives rise to system-level behavior. It is therefore distinct from other techniques that examine SC and FC connection weights in isolation [36]. Network science, which existed as a field long before the advent of network neuroscience, has contributed a large number of network measurements that can help reveal a network's function, highlight influential nodes, and identify features that contribute to its robustness and vulnerability. The topological scale at which a network is described depends upon what features of the network these measures highlight. Some measures are simple; a node's degree (or its weighted analog, strength) simply counts the number of connections incident on that node and can be interpreted as a measure of the node's influence, with high-degree nodes exhibiting the greatest influence [37]. Degree is an example of a strictly local measure – it characterizes only a single node. At the opposite end of the spectrum are measures that describe the organization of the network as a whole. A network's characteristic path length, for example, is the average number of steps it takes to go from one node to another.
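The two measures just described – degree and characteristic path length – can be sketched as follows for a binary, undirected network stored as an adjacency matrix (a toy sketch; the function names are illustrative, and the path-length routine assumes a connected, unweighted network):

```python
import numpy as np
from collections import deque

def degrees(A):
    """Node degrees of a binary, undirected adjacency matrix."""
    return A.sum(axis=1)

def characteristic_path_length(A):
    """Average shortest-path length over all ordered node pairs,
    computed by breadth-first search from each node."""
    n = len(A)
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != s)
        pairs += n - 1
    return total / pairs

# toy example: a 4-node path graph 0-1-2-3
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])
deg = degrees(A)
cpl = characteristic_path_length(A)
```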
Short path lengths imply, at least in theory, that information can be quickly shared across the network [38, 39].

Degree and path length, along with other local and global network measures, are useful for characterizing networks at their most extreme topological scales: at the level of a network's most commonly studied fundamental units (its nodes; although see [40] and [41] for alternatives) and at the level of the network as a collective. Between these two scales lies a mesoscale, an intermediate scale at which a network can be characterized not in terms of local and global properties, but rather in terms of differently sized clusters of nodes that adopt different types of configurations. It is at this mesoscale that we can observe community structure [27], cores and peripheries [42], and rich clubs [43]. It is essential to note that the mesoscale, unlike the local and global scales, is defined as a range of scales situated between two extremes. Therefore, mesoscale structures have the capacity to emerge, persist, and dissolve over multiple topological scales. In general, the detection of such structures is performed algorithmically, usually through the application of tools designed to detect specific types of mesoscale structure. As a simple illustration, consider a network with community structure. In the context of networks, communities refer to sub-networks (clusters of nodes and their edges) that are internally dense (many within-community edges) and externally sparse (few between-community edges) [26, 44]. One intuitive (and quite palatable) hypothesis is that brain networks are organized into hierarchical communities, meaning that communities at any particular scale can be sub-divided into smaller communities, which in turn can be further sub-divided, and so on [45–47]. This hierarchy can be "cut" at any particular level to obtain a single-scale description of the network's communities, but doing so ignores the richness engendered by the hierarchical nature of the communities.
Similar arguments can be applied to other types of mesoscale organization, such as core-periphery structure [48] and rich clubs [49].

In the following subsections, we review analysis techniques for the detection of mesoscale structure in brain networks, focusing on communities due to their inherent multi-scale nature. We pay particular attention to techniques that make it possible to detect community structure over a range of topological scales, thereby uncovering a richer, more detailed multi-scale description of brain networks.
A. Multi-scale community structure
Local and global properties of networks are straightforward to compute because the units of analysis – individual nodes and the whole network – are immediately evident and require no additional search. Mesoscale structure, however, is not always evident. Its presence or absence in a network depends on the configuration of edges among the network's nodes – that is, the network's topology. Real-world networks are composed of many nodes and edges arranged in complex patterns that can obscure structural regularities. Due to this complexity, if one wishes to observe mesoscale structure in networks, one must search for it algorithmically. In the case of community structure [45, 50], there is no shortage of algorithms for doing so. They range both in terms of how they define communities and in their computational complexity [51–55]. Whether the plurality of methods is viewed as a shortcoming or an advantage, the enterprise of community detection is one of the better-developed and continually-growing sub-fields of network analysis [27, 56].

While each community detection technique offers its own unique perspective on how to identify communities in networks, the method that is most widely used and arguably the most versatile is modularity maximization [57]. Modularity maximization partitions a network's nodes into communities so as to maximize an objective function known as the modularity (or just "Q"). The modularity function compares the observed pattern of connections in a network against the pattern that would be expected under a specified null model of network connectivity. That is, the weight of each existing edge is directly compared against the weight of the same edge if connections were formed under the null model. Some of the observed connections will be unlikely to exist under the null model or will be stronger than the null model would predict.
Modularity maximization tries to place as many of the stronger-than-expected connections within communities as possible.

More formally, if the weights of the observed and expected connections between nodes i and j are given by A_ij and P_ij, respectively, and σ_i ∈ {1, ..., K} indicates to which of K communities node i is assigned, then the modularity can be calculated as:

Q = \sum_{ij} [A_{ij} - P_{ij}] \delta(\sigma_i, \sigma_j),   (1)

where δ(·,·) is the Kronecker delta function and is equal to 1 if its arguments are the same and 0 otherwise. Multiple methods exist to actually maximize Q, but in the end they all result in an estimate of a network's community structure: a partition of the network nodes into communities.
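Eq. (1) can be sketched directly in code. The sketch below assumes the widely used Newman–Girvan configuration null model, P_ij = k_i k_j / 2m (the source does not fix a particular P_ij, so this choice and the names are illustrative):

```python
import numpy as np

def modularity(A, labels):
    """Modularity Q of a partition, as in Eq. (1), using the
    Newman-Girvan null model P_ij = k_i * k_j / (2m). Note that
    Eq. (1) as written is unnormalized; dividing by 2m gives the
    familiar variant bounded above by 1."""
    k = A.sum(axis=1)            # node degrees/strengths
    P = np.outer(k, k) / k.sum() # expected edge weights under the null model
    same = labels[:, None] == labels[None, :]   # Kronecker delta of labels
    return ((A - P) * same).sum()

# toy example: two 3-node cliques joined by a single bridge edge
A = np.zeros((6, 6))
for i, j in [(0,1),(0,2),(1,2),(3,4),(3,5),(4,5),(2,3)]:
    A[i, j] = A[j, i] = 1
labels = np.array([0, 0, 0, 1, 1, 1])   # one community per clique
Q = modularity(A, labels)
```

Maximizing Q over all possible label vectors – rather than evaluating it for a fixed partition as above – is the hard part, and is what heuristics such as the Louvain algorithm approximate.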
FIG. 2. Schematic figure illustrating multi-scale community detection.
Networks exhibit community structure over a range of different topological scales. In panels (A) and (B) we show communities detected in a structural connectivity network at two different topological scales (the colors in the surface plots indicate the community to which each region is assigned). We investigate these scales by tuning the resolution parameter in modularity maximization (a common community detection approach) to γ = 1 and γ = 2.5. In panel (C) we illustrate the multi-resolution approach for "sweeping" through a range of resolution parameters to detect communities at different scales, this time using a synthetic network constructed to have hierarchical community structure (hierarchical levels that divide the network into 2, 4, and 8 communities). To identify topological scales of interest (ranges of γ), we calculated the mean pairwise variation of information (VI) of all partitions detected at each value of γ. Low values of VI indicate that, on average, the detected partitions were similar to one another. The metric VI achieves local minima at scales that uncover the planted hierarchical communities; at values of γ where none of the planted hierarchical communities are detected, VI takes on non-zero values, indicating a lack of consensus across detected partitions and highlighting values of γ at which community structure is not present.

The number and size of communities in the partition with the biggest Q represent the communities present in the network, right? Unfortunately, the answer to this question is "not always." Modularity and other similar quality functions exhibit a "resolution limit" that limits the size of detectable communities [58]; communities smaller than some size, even if they otherwise adhere to our intuition of a community, are mathematically undetectable. In order to detect communities of all sizes, modularity has been extended in recent years to include a resolution parameter, γ, that can be tuned to uncover communities of different sizes [59]. The augmented modularity equation then reads:

Q(γ) = \sum_{ij} [A_{ij} - γ P_{ij}] \delta(\sigma_i, \sigma_j).   (2)

The resolution parameter was initially introduced as a technique for circumventing the resolution limit. Inadvertently, it has also contributed to the versatility of the modularity measure.
The resolution parameter effectively acts as a tuning knob, making it possible to obtain estimates of small communities at one setting and larger communities at another: when γ is large or small, maximizing modularity will return correspondingly small or large communities. If we smoothly tune the resolution parameter from one extreme to the other, we can obtain estimates of a network's community structure at every scale, from the coarsest, at which all network nodes fall into the same community, to the finest, at which network nodes form singleton communities. Varying the resolution parameter to highlight communities of different sizes is known as multi-scale community detection [60]. It should be noted that there exist possible definitions of modularity functions that do not suffer from resolution limits in the first place [61]. A full discussion of these functions is beyond the scope of this review.
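The sweep-and-compare procedure described above (and illustrated in Fig. 2C) can be sketched as follows. Here `detect_communities(A, gamma)` is a placeholder for any modularity-maximization routine with a resolution parameter (it is not a function from a specific library), and partitions are compared using the variation of information:

```python
import math
from collections import Counter

def variation_of_information(x, y):
    """Variation of information between two partitions, given as
    equal-length label lists: VI = H(X) + H(Y) - 2*I(X;Y).
    VI = 0 iff the partitions are identical."""
    n = len(x)
    px, py = Counter(x), Counter(y)
    pxy = Counter(zip(x, y))
    vi = 0.0
    for (cx, cy), nxy in pxy.items():
        r = nxy / n
        # each joint cell contributes -r * [log(r/p_x) + log(r/p_y)]
        vi -= r * (math.log(r / (px[cx] / n)) + math.log(r / (py[cy] / n)))
    return vi

# identical partitions have VI = 0; maximally crossed ones do not
vi_same = variation_of_information([0, 0, 1, 1], [0, 0, 1, 1])
vi_diff = variation_of_information([0, 0, 1, 1], [0, 1, 0, 1])

# sketch of the multi-scale sweep (detect_communities is a placeholder):
# for gamma in (0.5, 1.0, 1.5, 2.0, 2.5):
#     partitions = [detect_communities(A, gamma) for _ in range(100)]
#     compute the mean pairwise VI over partitions; low values flag
#     gamma ranges where community structure is well-defined
```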
1. Multi-scale community structure in the neuroimaging literature
Multi-scale analyses of real-world networks have revealed known structural motifs in proteins [54, 62], dynamic patterns in financial systems [60, 63], and "force chains" in physical systems of particles [64]. Most studies of community structure in brain networks, however, have focused on communities at a single scale [6, 21, 65] or, in the event that investigators wished to examine multiple scales, have resorted to heuristics such as recursive partitioning [46, 66], edge thresholding [65], or accepting sub-optimal solutions through the modification of existing algorithms [67]. The multi-scale modularity maximization approach and related techniques [54, 68, 69] can seamlessly scan all topological scales by tuning the resolution parameter, which entails no additional assumptions. While single-scale approaches to community detection are not fundamentally wrong, they miss out on the richness that may be present at other scales. For example, a single-scale estimate of the community structure of a hierarchically modular network would detect only one of the hierarchical scales present in the system. Nonetheless, there is a growing number of studies that have employed multi-scale community detection techniques [70]. Some of these studies used the multi-scale approach to identify single-scale modules, but at a resolution parameter that differs from the default (γ = 1) [71–73]. In other words, they obtained estimates of community structure over multiple scales and defined a secondary objective function that, when optimized, identified from among that set of partitions a scale on which to focus. Other approaches have explicitly set out to compare community structure detected at different resolutions. In the aging literature, for example, a number of studies have reported that communities become less segregated across the human lifespan [20, 74].
In a recent study, however, the authors analyzed the community structure of resting-state FC networks across the lifespan and at different values of γ [75]. They showed that community structure, and specifically the extent to which communities are segregated from one another, exhibits an interaction between age and scale; smaller communities become less segregated with age, while larger communities become increasingly segregated. Had the authors only explored community structure at a single topological scale, they would never have observed the reported interaction.

Other studies have estimated multi-scale community structure towards more theoretical ends. For example, in [76], the authors characterize different spatial and topological properties of anatomical brain networks as a function of γ, and use a measure of community radius [77] to show that large communities (as measured by the number of nodes) are embedded in large physical spaces. This mapping of a large topological entity to a large physical entity is not required of networked systems [78], and its existence suggests the presence of non-trivial constraints on the embedding of the brain's network architecture within the confines of the human skull [79]. Indeed, the multi-scale nature of the brain's modular architecture is strikingly similar to the hierarchical modularity observed in large-scale integrated circuits, whose abstract (and rather complex) topology has been mapped cost-efficiently (meaning with a predominance of short wires) into the two-dimensional space of a computer chip [46, 80]. This efficient mapping can be uncovered by testing for the presence of Rentian scaling [81], a property by which the number of edges crossing the boundary of a spatial parcel of the network scales as a power of the number of nodes inside the parcel (a linear relationship on logarithmic axes). Hierarchically modular networks – including the human brain, the C. elegans neuronal network, and even the London underground – that have been efficiently embedded into physical space commonly display Rentian scaling, while those that have not been efficiently embedded do not show this property.
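A rough sketch of how physical Rentian scaling might be estimated, assuming node coordinates `coords` and an adjacency matrix `A`. The box-sampling scheme and all names here are illustrative and not the exact procedure of [81]:

```python
import numpy as np

def rent_exponent(A, coords, n_boxes=500, seed=0):
    """Sample random axis-aligned boxes; for each, count the nodes
    inside (n) and the edges crossing its boundary (e). Under Rentian
    scaling, log(e) grows roughly linearly with log(n), and the fitted
    slope estimates the Rent exponent p."""
    rng = np.random.default_rng(seed)
    lo, hi = coords.min(axis=0), coords.max(axis=0)
    edges = np.array(np.triu(A).nonzero()).T   # undirected edge list
    ns, es = [], []
    for _ in range(n_boxes):
        c1, c2 = rng.uniform(lo, hi), rng.uniform(lo, hi)
        bmin, bmax = np.minimum(c1, c2), np.maximum(c1, c2)
        inside = np.all((coords >= bmin) & (coords <= bmax), axis=1)
        n = inside.sum()
        if n < 2:
            continue
        # an edge crosses the boundary if exactly one endpoint is inside
        e = np.sum(inside[edges[:, 0]] != inside[edges[:, 1]])
        if e > 0:
            ns.append(n)
            es.append(e)
    return np.polyfit(np.log(ns), np.log(es), 1)[0]   # slope = p

# toy example: a 10x10 planar grid, which embeds efficiently in 2D
n = 10
coords = np.array([[i, j] for i in range(n) for j in range(n)], dtype=float)
A = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        u = i * n + j
        if i + 1 < n: A[u, u + n] = A[u + n, u] = 1
        if j + 1 < n: A[u, u + 1] = A[u + 1, u] = 1
p = rent_exponent(A, coords)
```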
2. Implementation and practical considerations
Community detection, generally, is easy to do but difficult to do well [56]. Modularity maximization for community detection begins with the assumption that the network is modular [82], and as a technique it is prone to false positives [83]. Moreover, detecting the globally optimal partition is computationally intractable [84], the most popular algorithm for maximizing modularity generates variable output [85], and the composition of detected communities can be biased by the overall density of the network [58]. These are issues associated with modularity maximization before sweeping γ. Adding the resolution parameter can further amplify these complications; these issues are manifest at every level of γ. How, then, can multi-scale modularity maximization be performed in a principled, careful, and thoughtful way?

Selecting the resolution parameter
One of the most important issues is to select the topological scale(s) of interest, which is tantamount to focusing on a subset of γ values. Without prior knowledge of the number and size of communities, there is no good rationale for preferring one value of γ over another (including γ = 1). There are, however, a few approaches described in the existing literature for selecting a scale of interest from among the communities detected over a range of γ values. Intuitively, if a network's organization at a particular scale is truly well-described by communities, then we might also believe that our algorithms will easily detect this organization. In this case, the known variability in the output of some modularity maximization techniques [85] can actually work in our favor. When variability is low – i.e., the algorithm converges to similar community structure estimates over multiple runs – it might be indicative of especially well-defined communities. Under this assumption, we repeatedly maximize modularity at different values of γ and calculate the pairwise similarity of the detected communities [77]. We can then focus on community structure detected at γ values where the similarity is great (and the variability low) (see [71, 72, 86, 87] for examples where this approach has been applied). Similarity of partitions can be estimated using a number of measures, such as normalized mutual information [88], variation of information [89], or the z-score of the Rand coefficient [90].

Other approaches have also been suggested. One possibility is to use statistical arguments to focus on specific scales of γ. For example, we could estimate the probability of observing a community of a particular size by chance, and then focus on the scale where the detected communities' sizes deviate most from chance [91]. Another possibility assumes that "good" community structure is not fleeting – i.e., that it should persist over some range of γ [60].
Under this assumption, we can calculate the average similarity between partitions detected at every pair of γ values and cluster the resulting similarity (or distance) matrix. The clusters correspond to collections of detected partitions that are all highly similar to one another – the absence of clusters suggests that if community structure exists at different scales, then it is short-lived and possibly of less interest [92]. At the very least, in the event that one does not wish to scan multiple topological scales, a good method for demonstrating the robustness of a result that depends upon the composition of detected communities is to vary γ slightly from the selected value and verify that the community structure is consistent (see, for example, [93]).

Consensus community structure and communities of interest
Choosing the γ value(s) at which to analyze a network's community structure is the first hurdle. There remain the unresolved questions of how to define consensus communities that are representative of a group of partitions, and how to determine whether all (or just some) of the detected consensus communities are of interest (the group of partitions could come from multiple optimizations of a modularity maximization algorithm or from a collection of partitions obtained from many individuals). There are now multiple approaches for choosing a consensus partition, including "similarity maximization" (choosing the consensus partition as the one with the greatest average similarity to the other partitions) [77] and variants of the "association-recluster" framework (using a clustering algorithm to find consensus communities in a co-occurrence or association matrix that stores the frequency with which nodes co-occur in a community over an ensemble of partitions) [72, 94, 95]. Because these approaches are now well-known and widely used, we will not discuss them further here.

We do, however, find it prudent to discuss the final question: "should we analyze all the communities in the partition?" The notion of defining a partition in which all nodes get assigned to one community or another presupposes that this type of structure exists in the first place. Is this a reasonable assumption? The presence of hubs [6] and rich clubs [49] suggests that at least some brain network nodes fail to strictly adhere to the community template – hub nodes, by definition, are highly connected and span multiple modules. In short, maximizing modularity always partitions the network into clusters, but are all the clusters really communities? There are multiple ways to address this question. One possibility is, again, to invoke a statistical argument and ignore communities with properties consistent with what you might expect by chance.
For instance, you could calculate the modularity contribution made by each community (defined in [96] and applied in [20, 72]) and compare the observed values against a random null model (e.g., permute the community labels and recalculate modularity contributions, or optimize modularity for rewired networks and compare the observed contributions to those of the randomized networks). The gold-standard technique, however, would be a tool that does not force all nodes into a community and only detects communities that are inconsistent with a random null model. Such a tool exists in the form of the OSLOM algorithm [82], which works by first identifying the worst node in a community (i.e., the one with the fewest within-community connections). Next, the community is assigned a "C-score," defined as the probability of observing a node in the same community that makes more within-community connections than expected in a random network. To the best of the authors' knowledge and at the time of writing this review, OSLOM has not yet been applied to brain network data.

In this section, we highlighted the fact that networks can exhibit non-random organization across a range of topological scales, from that of individual nodes up to the entire network. To develop a more complete understanding of a network's organization and deeper insight into its function, we argue that it is essential to focus not on one single scale alone, but to embrace the multi-scale topological nature of brain networks and to characterize them using appropriately multi-scale tools. The result is a richer picture of a brain network. That added richness may be necessary to form a deeper understanding of how brain network structure is associated with human behavior and cognition, and ultimately how it is altered in disease.

B. Multi-scale rich club and core-periphery organization
In addition to community structure, networks can exhibit a range of mesoscale organizations. These include rich club and core-periphery structure, both of which have been investigated in the context of brain networks.
FIG. 3.
Multi-scale rich club and core-periphery analysis. (A) The rich club coefficient, φ_bin, for the observed network (black) and the mean over an ensemble of random networks (gray) as a function of node degree, k. The ratio of these two measures defines the normalized rich club coefficient, φ_norm. Values of k for which the observed rich club coefficient is statistically greater than that of a random network define the rich club regime. (B) Most studies focus on a rich club defined at a single k value and use it to classify edges as "rich club" (rich node to rich node), "feeder" (rich node to non-rich node), or "non-rich club" (non-rich node to non-rich node). The number of edges assigned to each class is highly dependent upon the k at which the rich club is defined. (C) We show edge classifications at three different values of k, in order to highlight that classifications (and the subsequent interpretation) can vary dramatically, even across statistically significant rich clubs. (D) Core-periphery classification can be performed using a parameterized model [97]. The parameters (α, β) determine the size of the core relative to the periphery and how sharply the two are divided from one another [48]. At different parameter values the model identifies different cores and different peripheries, and assigns each node a "coreness" score. (E) As an example, we show two sets of coreness scores ordered from smallest to largest. The two sets vary in terms of the core's size and constitution. (F) For the same two sets, we show the topographic distribution of coreness scores. Note: in both the rich club and core-periphery examples, the network studied was a structural network used in a previous study [98].
While not the explicit focus of this review, we felt that we would be remiss not to briefly mention the available tools to study multi-scale rich club and core-periphery organization.

We recall that a rich club is a group of hubs (high-degree, high-strength nodes) that are also densely interconnected with one another [43, 99]. Rich clubs are hypothesized to act as integrative structures in SC networks, linking modules to one another and facilitating rapid transmission of information [49]. Core-periphery structure is a related concept, which assumes that the network consists of one (or a few) dense cores, with which peripheral nodes interact, though the peripheral nodes rarely interact with one another [42, 97, 100]. Similar to rich clubs, cores play an integrative role, serving as a locus for different brain regions to link up and exchange information.

As with communities, there is a tendency in the network science literature to concoct binary assignments of nodes as either belonging or not belonging to a network's cores and rich clubs. This dichotomy aids in the interpretation of results, but ultimately belies the complexity and richness of core-periphery and rich club organization in a network, both of which can persist over multiple topological scales. Whereas communities can be identified by maximizing a modularity function, rich clubs are detected by calculating a rich-club coefficient, φ(k), which measures the density of connections among nodes with degree k or greater (Fig. 3A). If this coefficient is greater than what would be expected under a random network model, there is evidence that the rich club is statistically significant. In practice, there is nearly always a plurality of statistically significant rich clubs, and hence a plurality of rich club nodes. The absence of a singular rich club gives rise to multiple complementary views of how hub nodes interact with one another and how they contribute to brain function (Fig. 3B,C).
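The binary rich-club coefficient φ(k) described above can be sketched as follows (a toy illustration using the common convention of counting nodes with degree strictly greater than k; the statistical comparison against degree-preserving random networks is omitted, and all names are illustrative):

```python
import numpy as np

def rich_club_coefficient(A, k):
    """Binary rich-club coefficient: among nodes with degree > k,
    the fraction of possible edges that actually exist."""
    deg = A.sum(axis=1)
    rich = np.flatnonzero(deg > k)       # the candidate "rich" nodes
    n = len(rich)
    if n < 2:
        return np.nan                    # coefficient undefined
    e = A[np.ix_(rich, rich)].sum() / 2  # edges among rich nodes
    return 2 * e / (n * (n - 1))         # density of the rich subgraph

# toy example: three mutually connected hubs, each with one leaf node
A = np.zeros((6, 6))
for i, j in [(0,1),(0,2),(1,2),(0,3),(1,4),(2,5)]:
    A[i, j] = A[j, i] = 1
phi = rich_club_coefficient(A, k=1)   # nodes 0, 1, 2 have degree 3
```

In a full analysis, φ(k) would be computed across all k and normalized by the mean coefficient of an ensemble of rewired networks, as in Fig. 3A.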
A similar argument applies for core-periphery structure, where nodes can be more or less core-like or periphery-like in a graded sense, defying the dichotomy of being one or the other (Fig. 3D-F).

Is there a practical way to assess these types of multi-scale rich clubs and core-periphery structures? In the case of rich clubs, one natural solution is to report the range of statistically significant rich clubs and characterize the composition of rich clubs across that range. In the case of core-periphery organization, one can study a parameterized landscape of core-periphery architecture, offering a continuous description of cores of different sizes and with differing softness of the boundary between the core-like and periphery-like nodes [48, 97] (Fig. 3D-F). These and other approaches that are similar in spirit may offer additional insights into the multi-scale architecture of the brain in a manner that complements the assessment of hierarchical community structure described in detail in earlier sections.

C. Multi-scale temporal networks
At this point, we take a step back and note that brainnetworks, both functional and structural, are not staticbut instead fluctuate over timescales ranging from thesub-second [101, 102] to the lifespan [103]. These fluc-tuations in network organization, especially over shorttimescales ( < that of a single scan session), have becomefrequent topics of investigation [104–108].How do we study a network that changes over multi-ple timescales? One promising approach is to use multi-layer network models of temporal networks [109, 110].The multi-layer network model is flexible enough to dealwith networks that vary along dimensions other thantime [111], but when applied to temporal networks ittreats estimates of the network’s topology at differenttime points as “layers”. For example, a layer could repre-sent a functional network estimated from a few minutesof observational data acquired during an fMRI BOLDscan [112] or it could represent the structural connectiv-ity of an individual participant at a particular age in adevelopmental or lifespan study [75]. Whereas traditionalnetwork analysis would characterize each layer indepen-dently of one another, multi-layer network analysis treatsthe collection of layers as a single object, characterizingits structure as a whole to explicitly bridge multiple tem-poral scales. Equally important, the multi-layer networkmodel is agnostic (from a mathematical perspective) to the timescales represented by the layers, and can there-fore accommodate virtually any timescale made accessi-ble using neuroimaging technologies.
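One common concrete representation of such a multi-layer temporal network is a “supra-adjacency” matrix: the layer adjacency matrices are placed along the block diagonal, and each node is linked to itself across layers with some weight ω. The sketch below is a minimal illustration of this construction, supporting both ordinal coupling (each layer linked only to its temporal neighbors) and categorical coupling (every pair of layers linked); the function name and interface are ours, not a standard library API.

```python
import numpy as np

def supra_adjacency(layers, omega=1.0, coupling="ordinal"):
    """Stack T layer adjacency matrices (each N x N) into one NT x NT
    supra-adjacency matrix, adding inter-layer links of weight omega
    from each node to itself in other layers.

    coupling="ordinal"     : couple layer s only to layers s-1 and s+1
    coupling="categorical" : couple every pair of layers
    """
    T, N = len(layers), layers[0].shape[0]
    S = np.zeros((N * T, N * T))
    idx = np.arange(N)
    for s, A in enumerate(layers):
        S[s * N:(s + 1) * N, s * N:(s + 1) * N] = A   # intra-layer block
    for s in range(T):
        for r in range(T):
            if s != r and (coupling == "categorical" or abs(s - r) == 1):
                S[s * N + idx, r * N + idx] = omega   # node-to-itself links
    return S
```

Standard single-layer algorithms can then be run on the supra-adjacency matrix, which is essentially how multi-layer generalizations of familiar measures are often implemented.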
1. Multi-scale, multi-layer network analysis
Most of the familiar network measurements have been generalized so that they can be computed on a multi-layer network. For example, path length, clustering, and some centrality measures are all easily calculated [109]. While a few recent studies have begun to investigate these measures in multi-frequency brain networks [113–115], the most widely used multi-layer measure in network neuroscience is that of multi-layer, multi-scale community detection [116]. Though there are several different approaches for detecting communities in temporal networks, including non-negative matrix factorization [117, 118] and hidden Markov models [119], the most popular is multi-layer modularity maximization, which represents a powerful extension of the standard modularity maximization framework that makes it possible to uncover communities across layers (i.e., time, in the case of temporal networks). The multi-layer analog resolves several important issues. First, it confers further flexibility to the multi-layer network model by making familiar methods accessible. Communities can, of course, be calculated for each layer independently, but this unfortunately gives rise to ambiguities regarding the continuation of communities from one layer to the next. The second advantage of the multi-layer model is that, by estimating the community structure of all layers simultaneously, such ambiguities are effectively resolved. Third, it opens the possibility of defining new measures for characterizing the flow of communities across layers [87, 95, 120]. For example, the measure “flexibility” quantifies how frequently a brain region changes its community assignment from one layer to the next [104]. Increased flexibility has been associated with learning [104], increased executive function [121], aging [75], and positive mood, novelty of experience, and fatigue [93].
Additionally, it can be used to reveal a temporally stable core of primary sensory systems along with a flexible periphery of higher-order cognitive systems [48], offering an architecture thought to be particularly conducive to flexible cognitive control [122]. Other statistics, including “promiscuity”, offer distinct quantifications of meso-scale network reconfiguration [120]. Importantly, multi-layer modularity maximization includes a resolution parameter, γ, that functions in an analogous manner to the resolution parameter in single-layer community detection. In conjunction with the multi-layer framework, which facilitates the investigation of temporal networks, the resolution parameter gives a researcher the option of incorporating multiple topological scales into a temporal analysis of networks.

FIG. 4.
Schematic figure illustrating multi-layer network construction and community detection.
Individual networks can be combined in a meaningful way to form a multi-layer network. In panel (A) we show four example networks, each of which contains the same 25 nodes but arranged in different configurations. The links in these networks could represent fluctuating functional connections over time (e.g., within a single scan or over development), connections estimated during different tasks, different frequency bands, or different connection modalities (e.g., structural connections weighted by streamline count or fractional anisotropy or functional connections measured as correlations or coherence). (B)
To combine individual layers, links are added from node i to itself across layers. These links can be added ordinally, linking a node to itself in adjacent layers, or categorically, linking a node to itself across all layers. The result is a multi-layer network. (C) Multi-layer networks can be analyzed using many now-standard measures in network science, including – but not limited to – community detection algorithms. The resulting estimate of communities allows us to track the formation and dissolution of communities across layers and report properties of individual nodes – e.g., their flexibility, which measures how frequently a node changes its community assignment.
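The flexibility statistic mentioned in the caption admits a very simple implementation once per-node, per-layer community labels are in hand (a minimal sketch; in practice the labels would come from multi-layer modularity maximization or a similar method):

```python
import numpy as np

def flexibility(labels):
    """Flexibility of each node: the fraction of adjacent layer pairs
    in which the node's community assignment changes.

    labels : (N nodes x T layers) integer array of community labels.
    Returns a length-N vector; 0 = same community in every layer,
    1 = community changes between every pair of adjacent layers.
    """
    labels = np.asarray(labels)
    changes = labels[:, 1:] != labels[:, :-1]  # True where assignment switches
    return changes.mean(axis=1)
```

Because modularity maximization is stochastic and degenerate, flexibility is typically averaged over many optimization runs before being compared across subjects or conditions.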
2. Practical considerations
The multi-layer model can accommodate many different types of data collected over multiple timescales. This freedom comes at a cost, however. In order to consider all layers as forming a single multi-layer network object, it is currently a necessity to, either manually or in some data-driven way, add artificial links between layers. Broadly, there are two strategies for this approach. The first assumes that layers are not ordinally related to one another – i.e. layers have no temporal precedence with respect to one another; a permutation of the order of layers results in effectively the same network. If these assumptions hold (e.g., if layers represent connectivity matrices obtained from different task states), then it makes sense to categorically link layers to one another [87, 123]. If the layers exhibit an ordinal relationship, then it makes more sense to link node i in layer s to its temporally adjacent layers, s − 1 and s + 1 [86]. The decision to choose one approach over the other can, of course, influence whatever measurement is being made on the network. Currently, it is standard practice (at least in network neuroscience) to add ordinal links when dealing with temporal networks. Even with sound rationale for selecting one linking procedure over the other, there still remains the difficult decision of how to assign the inter-layer links a weight. Again, how these weights are selected can have an effect on whatever measure is being computed. Without strong evidence to select one weighting scheme over another, inter-layer links are usually assigned the same value, ω, that is sometimes varied over a narrow range. Ideally, there would be a principled, data-driven approach for selecting this value.

D. Multi-scale spatial networks
The explosion of network science into different scientific communities can be attributed, in part, to the fact that it provides a set of tools that can be applied to network data of all types. In this review, we focused on brain networks derived from functional and diffusion MRI, the modalities most often used in the neuroimaging community. The networks constructed from these data span spatial scales ranging from that of individual voxels up to that of the whole brain. The nature of MRI, however, makes it virtually impossible to construct brain networks at finer scales, such as the level of individual cells or that of neuronal populations. Other spatial scales are, of course, accessible using alternative imaging modalities. For example, optical imaging has delineated cellular-level networks of mouse retina [16, 124] as well as of model organisms like the nematode,
C. elegans [13], or Drosophila [125]. Large-scale tract-tracing and fluorescent labeling techniques have proven useful in uncovering networks at an intermediate scale – detecting axonal projections between local processing units in Drosophila [126], and brain areas in mouse [127] and macaque [128]. Additionally, meta-analytic studies that aggregate and summarize the results of individual tract-tracing experiments have produced convergent maps of macaque [129] and rat [130] network architecture. At these scales, the details of what each node and edge represent differ from that of whole-brain human networks. Nonetheless, the same network analysis tools can be brought to bear on these networks to reveal their organization and gain insight into their function. As microscale imaging tools become more common, and existing tools more refined, capable of handling higher throughput, and imaging greater volumes, they will be able to offer novel insights into how the multi-scale spatial network structure of the brain relates to cognition and behavior. An important step in advancing the field of network neuroscience is understanding, specifically, how network properties at one spatial scale are related to properties at another [131]. Presently, of course, the analysis of human brain networks is limited by the spatial granularity of the individual voxel. Even with this lower bound on the size of brain network nodes, it is possible to probe multiple spatial scales using MRI data. The most obvious manner in which spatial scale can be examined is in the choice of brain parcellation. MRI acquisitions return observations at the level of individual voxels. Voxels may be noisy, suffer from signal dropout, and due to their large number may present computational challenges to conduct analyses at that scale.
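A common remedy is to aggregate voxels into parcels and average their time series before estimating connectivity. A minimal sketch of that averaging step, assuming voxel-level signals and an arbitrary atlas labeling (the function name and interface are ours, for illustration only):

```python
import numpy as np

def parcel_timeseries(voxel_ts, labels):
    """Average voxel-level time series within parcels.

    voxel_ts : (V voxels x T timepoints) array of, e.g., BOLD signals.
    labels   : length-V integer vector assigning each voxel to a parcel
               (e.g., from an anatomical or functional atlas).
    Returns a (P parcels x T) array of parcel-mean time series,
    ordered by sorted unique parcel label.
    """
    parcels = np.unique(labels)
    return np.vstack([voxel_ts[labels == p].mean(axis=0) for p in parcels])
```

Parcel-scale functional connectivity can then be estimated, e.g., as `np.corrcoef(parcel_ts)`, and repeating the procedure with coarser or finer atlases probes different spatial scales.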
For these reasons, it has become common to aggregate voxels into parcels or regions of interest; rather than focus on any particular voxel, this allows us to focus on the average properties of parcels [132]. The number of alternative parcellations is ever-growing, with each new parcellation presenting a new criterion – e.g., spatial variation in functional connectivity, myelination, cytoarchitectonics, etc. – for grouping voxels together into regions [7, 133–136]. The number of parcels ranges from ≈

IV. CONCLUSION AND FUTURE DIRECTIONS
This review deals with the topic of multi-scale brain networks. We discuss tools for performing multi-scale network analysis, their application to time-resolved networks that highlight network-level fluctuations across multiple temporal resolutions, and finally touch briefly on how different spatial scales of analysis are making an impact on the field of network neuroscience. The results of network analyses at different scales can be seen as both redundant and complementary. In some sense, we expect to find similar network properties across scales [22] – the same energetic and spatial constraints that shape network structure at the scale of brain regions and areas are at play at the cellular level [145–147]. On the other hand, the function of network nodes and circuits as well as their biophysical attributes likely depend critically upon the scale at which a network is constructed and analyzed. Accordingly, we might also expect networks to be optimized to perform scale-specific functions [148], and studying a particular scale gives us a unique insight into the network architecture underpinning those functions. Ultimately, network neuroscience will need both approaches – an understanding of network function and organization at specific scales, as well as a map that bridges multiple different spatial, temporal, and topological scales.
V. ACKNOWLEDGEMENTS
We thank Arian Ashourvan and Lia Papadopoulos for helpful feedback on an earlier version of this manuscript. DSB would also like to acknowledge support from the John D. and Catherine T. MacArthur Foundation, the Alfred P. Sloan Foundation, the Army Research Laboratory and the Army Research Office through contract numbers W911NF-10-2-0022 and W911NF-14-1-0679, the National Institute of Health (2-R01-DC-009209-11, 1R01HD086888-01, R01-MH107235, R01-MH107703, and R21-MH-106799), the Office of Naval Research, and the National Science Foundation (BCS-1441502, BCS-1430087, PHY-1554488, and BCS-1631550). The content is solely the responsibility of the authors and does not necessarily represent the official views of any of the funding agencies. Depiction of gray-matter development in Fig. 1 was reproduced from [31]. Copyright (2004) National Academy of Sciences, U.S.A.
REFERENCES [1] D. S. Bassett and E. Bullmore, Neuroscientist ,512523 (2006).[2] O. Sporns, Networks of the Brain (MIT press, 2011).[3] E. T. Bullmore and D. S. Bassett, Annual review ofclinical psychology , 113 (2011).[4] S. L. Bressler and V. Menon, Trends in cognitive sci-ences , 277 (2010).[5] H.-J. Park and K. Friston, Science , 1238411 (2013).[6] P. Hagmann, L. Cammoun, X. Gigandet, R. Meuli, C. J.Honey, V. J. Wedeen, and O. Sporns, PLoS Biol , e159(2008).[7] A. M. Hermundstad, D. S. Bassett, K. S. Brown, E. M.Aminoff, D. Clewett, S. Freeman, A. Frithsen, A. John-son, C. M. Tipper, M. B. Miller, et al. , Proceedings ofthe National Academy of Sciences , 6169 (2013).[8] J. Go˜ni, M. P. van den Heuvel, A. Avena-Koenigsberger,N. V. de Mendizabal, R. F. Betzel, A. Griffa, P. Hag-mann, B. Corominas-Murtra, J.-P. Thiran, andO. Sporns, Proceedings of the National Academy of Sci-ences , 833 (2014).[9] M. E. Newman, SIAM review , 167 (2003).[10] S. P. Borgatti, A. Mehra, D. J. Brass, and G. Labianca,science , 892 (2009).[11] A.-L. Barab´asi, Network science (Cambridge UniversityPress, 2016).[12] D. S. Bassett and F. Siebenhuhner, “Multiscale networkorganization in the human brain,” in
Multiscale Analy-sis and Nonlinear Dynamics: From Genes to the Brain (Wiley, 2013).[13] T. A. Jarrell, Y. Wang, A. E. Bloniarz, C. A. Brittin,M. Xu, J. N. Thomson, D. G. Albertson, D. H. Hall,and S. W. Emmons, Science , 437 (2012).[14] M. Shimono and J. M. Beggs, Cerebral Cortex , 3743(2015).[15] M. S. Schroeter, P. Charlesworth, M. G. Kitzbichler,O. Paulsen, and E. T. Bullmore, The Journal of Neu-roscience , 5459 (2015).[16] W.-C. A. Lee, V. Bonin, M. Reed, B. J. Graham,G. Hood, K. Glattfelder, and R. C. Reid, Nature ,370 (2016).[17] A. N. Khambhati, K. A. Davis, B. S. Oommen, S. H.Chen, T. H. Lucas, B. Litt, and D. S. Bassett, PLoSComput Biol , e1004608 (2015).[18] S. P. Burns, S. Santaniello, R. B. Yaffe, C. C. Jouny,N. E. Crone, G. K. Bergey, W. S. Anderson, and S. V.Sarma, Proc Natl Acad Sci U S A , E5321E5330(2014).[19] X.-N. Zuo, C. Kelly, A. Di Martino, M. Mennes, D. S.Margulies, S. Bangaru, R. Grzadzinski, A. C. Evans,Y.-F. Zang, F. X. Castellanos, et al. , The Journal ofneuroscience , 15034 (2010). [20] R. F. Betzel, L. Byrge, Y. He, J. Go˜ni, X.-N. Zuo, andO. Sporns, Neuroimage , 345 (2014).[21] S. Gu, T. D. Satterthwaite, J. D. Medaglia, M. Yang,R. E. Gur, R. C. Gur, and D. S. Bassett, Proceedingsof the National Academy of Sciences , 13681 (2015).[22] M. P. van den Heuvel, E. T. Bullmore, and O. Sporns,Trends in cognitive sciences , 345 (2016).[23] C. J. Stam and J. C. Reijneveld, Nonlinear biomedicalphysics , 1 (2007).[24] E. Bullmore and O. Sporns, Nature Reviews Neuro-science , 186 (2009).[25] M. Rubinov and O. Sporns, Neuroimage , 1059(2010).[26] M. A. Porter, J.-P. Onnela, and P. J. Mucha, Noticesof the AMS , 1082 (2009).[27] S. Fortunato, Physics reports , 75 (2010).[28] G. Buzs´aki, Nature neuroscience , 446 (2004).[29] J. Beyer, M. Hadwiger, A. Al-Awami, W.-K. Jeong,N. Kasthuri, J. W. Lichtman, and H. Pfister, IEEEcomputer graphics and applications , 50 (2013).[30] L. Krubitzer, Annals of the New York Academy of Sci-ences , 44 (2009).[31] N. 
Gogtay, J. N. Giedd, L. Lusk, K. M. Hayashi,D. Greenstein, A. C. Vaituzis, T. F. Nugent, D. H. Her-man, L. S. Clasen, A. W. Toga, et al. , Proceedings ofthe National academy of Sciences of the United Statesof America , 8174 (2004).[32] M. P. van den Heuvel, C. J. Stam, M. Boersma, andH. E. Hulshoff Pol, Neuroimage , 528539 (2008).[33] K. J. Friston, Brain Connect , 1336 (2011).[34] A. Zalesky, A. Fornito, and E. Bullmore, Neuroimage , 2096 (2012).[35] Z. Zhang, Q. K. Telesford, C. Giusti, K. O. Lim, andD. S. Bassett, PLoS One , e0157243 (2016).[36] S. L. Simpson and P. J. Laurienti, Brain Connect ,9598 (2016).[37] H. Takeuchi, Y. Taki, R. Nouchi, A. Sekiguchi,H. Hashizume, Y. Sassa, Y. Kotozaki, C. M. Miyauchi,R. Yokoyama, K. Iizuka, S. Nakagawa, T. Nagase,K. Kunitoki, and R. Kawashima, Neuroimage ,197209 (2015).[38] E. Santarnecchi, G. Galli, N. R. Polizzotto, A. Rossi,and S. Rossi, Hum Brain Mapp , 45664582 (2014).[39] Y. Li, Y. Liu, J. Li, W. Qin, K. Li, C. Yu, and T. Jiang,PLoS Comput Biol , e1000395 (2009).[40] C. Giusti, R. Ghrist, and D. S. Bassett, J ComputNeurosci , 1 (2016).[41] D. S. Bassett, N. F. Wymbs, M. A. Porter, P. J. Mucha,and S. T. Grafton, Chaos , 013112 (2014).[42] S. P. Borgatti and M. G. Everett, Social networks ,375 (2000). [43] V. Colizza, A. Flammini, M. A. Serrano, and A. Vespig-nani, Nature physics , 110 (2006).[44] M. E. Newman, Nature Physics , 25 (2012).[45] D. Meunier, R. Lambiotte, A. Fornito, K. D. Ersche,and E. T. Bullmore, Hierarchy and dynamics in neuralnetworks , 2 (2010).[46] D. S. Bassett, D. L. Greenfield, A. Meyer-Lindenberg,D. R. Weinberger, S. W. Moore, and E. T. Bullmore,PLoS Comput Biol , e1000748 (2010).[47] C. C. Hilgetag and M.-T. H¨utt, Trends in cognitive sci-ences , 114 (2014).[48] D. S. Bassett, N. F. Wymbs, M. P. Rombach, M. A.Porter, P. J. Mucha, and S. T. Grafton, PLoS ComputBiol , e1003171 (2013).[49] M. P. van den Heuvel and O. Sporns, The Journal ofneuroscience , 15775 (2011).[50] O. Sporns and R. F. 
Betzel, Annual review of psychol-ogy , 613 (2016).[51] G. Palla, I. Der´enyi, I. Farkas, and T. Vicsek, Nature , 814 (2005).[52] Y.-Y. Ahn, J. P. Bagrow, and S. Lehmann, Nature ,761 (2010).[53] M. Rosvall and C. T. Bergstrom, Proceedings of theNational Academy of Sciences , 1118 (2008).[54] J.-C. Delvenne, S. N. Yaliraki, and M. Barahona,Proceedings of the National Academy of Sciences ,12755 (2010).[55] B. Karrer and M. E. Newman, Physical Review E ,016107 (2011).[56] S. Fortunato and D. Hric, arXiv preprintarXiv:1608.00163 (2016).[57] M. E. Newman and M. Girvan, Physical review E ,026113 (2004).[58] S. Fortunato and M. Barthelemy, Proceedings of theNational Academy of Sciences , 36 (2007).[59] J. Reichardt and S. Bornholdt, Physical Review E ,016110 (2006).[60] D. J. Fenn, M. A. Porter, M. McDonald, S. Williams,N. F. Johnson, and N. S. Jones, Chaos: An Interdisci-plinary Journal of Nonlinear Science , 033119 (2009).[61] V. A. Traag, P. Van Dooren, and Y. Nesterov, PhysicalReview E , 016114 (2011).[62] A. Delmotte, E. W. Tate, S. N. Yaliraki, and M. Bara-hona, Physical biology , 055010 (2011).[63] D. J. Fenn, M. A. Porter, P. J. Mucha, M. McDonald,S. Williams, N. F. Johnson, and N. S. Jones, Quanti-tative Finance , 1493 (2012).[64] D. S. Bassett, E. T. Owens, M. A. Porter, M. L. Man-ning, and K. E. Daniels, Soft Matter , 2731 (2015).[65] J. D. Power, A. L. Cohen, S. M. Nelson, G. S. Wig, K. A.Barnes, J. A. Church, A. C. Vogel, T. O. Laumann,F. M. Miezin, B. L. Schlaggar, et al. , Neuron , 665(2011).[66] Y. He, J. Wang, L. Wang, Z. J. Chen, C. Yan, H. Yang,H. Tang, C. Zhu, Q. Gong, Y. Zang, et al. , PloS one ,e5226 (2009).[67] D. Meunier, R. Lambiotte, and E. T. Bullmore, Fron-tiers in neuroscience , 200 (2010).[68] M. T. Schaub, J.-C. Delvenne, S. N. Yaliraki, andM. Barahona, PloS one , e32210 (2012).[69] M. Kheirkhahzadeh, A. Lancichinetti, and M. Rosvall,Physical Review E , 032309 (2016).[70] M. Rubinov, R. J. Ypma, C. Watson, and E. T. 
Bull-more, Proceedings of the National Academy of Sciences , 10032 (2015).[71] S. Gu, F. Pasqualetti, M. Cieslak, Q. K. Telesford, B. Y.Alfred, A. E. Kahn, J. D. Medaglia, J. M. Vettel, M. B.Miller, S. T. Grafton, et al. , Nature communications (2015).[72] R. F. Betzel, J. D. Medaglia, L. Papadopoulos,G. Baum, R. Gur, R. Gur, D. Roalf, T. D.Satterthwaite, and D. S. Bassett, arXiv preprintarXiv:1608.01161 (2016).[73] C. Nicolini and A. Bifone, Scientific reports (2016).[74] M. Y. Chan, D. C. Park, N. K. Savalia, S. E. Petersen,and G. S. Wig, Proceedings of the National Academy ofSciences , E4997 (2014).[75] R. F. Betzel, B. Miˇsi´c, Y. He, J. Rumschlag, X.-N. Zuo,and O. Sporns, arXiv preprint arXiv:1510.08045 (2015).[76] C. Lohse, D. S. Bassett, K. O. Lim, and J. M. Carlson,PLoS Comput Biol , e1003712 (2014).[77] K. W. Doron, D. S. Bassett, and M. S. Gazzaniga,Proceedings of the National Academy of Sciences ,18661 (2012).[78] M. Barthelemy, Physics Reports , 1 (2011).[79] E. Bullmore and O. Sporns, Nature Reviews Neuro-science , 336 (2012).[80] F. Klimm, D. S. Bassett, J. M. Carlson, and P. J.Mucha, PLoS Comput Biol , e1003491 (2014).[81] M. M. Sperry, Q. K. Telesford, F. Klimm, and D. S.Bassett, Journal of Complex Networks (2016).[82] A. Lancichinetti, F. Radicchi, J. J. Ramasco, andS. Fortunato, PloS one , e18961 (2011).[83] R. Guimera, M. Sales-Pardo, and L. A. N. Amaral,Physical Review E , 025101 (2004).[84] B. H. Good, Y. A. de Montjoye, and A. Clauset, PhysRev E Stat Nonlin Soft Matter Phys , 046106 (2010).[85] V. D. Blondel, J.-L. Guillaume, R. Lambiotte, andE. Lefebvre, Journal of statistical mechanics: theoryand experiment , P10008 (2008).[86] L. Chai, M. G. Mattar, I. A. Blank, E. Fedorenko, andD. S. Bassett, Cerebral Cortex In Press (2016).[87] M. G. Mattar, M. W. Cole, S. L. Thompson-Schill, andD. S. Bassett, PLoS Comput Biol , e1004533 (2015).[88] A. Lancichinetti, S. Fortunato, and J. Kert´esz, NewJournal of Physics , 033015 (2009).[89] M. 
Meil˘a, in Learning theory and kernel machines (Springer, 2003) pp. 173–187.[90] A. L. Traud, E. D. Kelsic, P. J. Mucha, and M. A.Porter, SIAM review , 526 (2011).[91] V. A. Traag, G. Krings, and P. Van Dooren, Scientificreports (2013).[92] R. Lambiotte, J.-C. Delvenne, and M. Barahona, IEEETransactions on Network Science and Engineering , 76(2014).[93] R. F. Betzel, T. D. Satterthwaite, J. I. Gold, and D. S.Bassett, arXiv preprint arXiv:1601.07881 (2016).[94] A. Lancichinetti and S. Fortunato, Scientific reports (2012).[95] D. S. Bassett, M. A. Porter, N. F. Wymbs, S. T.Grafton, J. M. Carlson, and P. J. Mucha, Chaos: An In-terdisciplinary Journal of Nonlinear Science , 013142(2013).[96] D. S. Bassett, E. T. Owens, K. E. Daniels, and M. A.Porter, Phys. Rev. E , 041306 (2012).[97] M. P. Rombach, M. A. Porter, J. H. Fowler, and P. J.Mucha, SIAM Journal on Applied mathematics , 167(2014). [98] R. F. Betzel, S. Gu, J. D. Medaglia, F. Pasqualetti, andD. S. Bassett, arXiv preprint arXiv:1603.05261 (2016).[99] T. Opsahl, V. Colizza, P. Panzarasa, and J. J. Ramasco,Physical review letters , 168702 (2008).[100] P. Holme, Physical Review E , 046111 (2005).[101] N. J. Kopell, H. J. Gritton, M. A. Whittington, andM. A. Kramer, Neuron , 1319 (2014).[102] V. D. Calhoun, R. Miller, G. Pearlson, and T. Adalı,Neuron , 262 (2014).[103] A. Di Martino, D. A. Fair, C. Kelly, T. D. Satterthwaite,F. X. Castellanos, M. E. Thomason, R. C. Craddock,B. Luna, B. L. Leventhal, X.-N. Zuo, et al. , Neuron ,1335 (2014).[104] D. S. Bassett, N. F. Wymbs, M. A. Porter, P. J. Mucha,J. M. Carlson, and S. T. Grafton, Proceedings of theNational Academy of Sciences , 7641 (2011).[105] E. A. Allen, E. Damaraju, S. M. Plis, E. B. Erhardt,T. Eichele, and V. D. Calhoun, Cerebral cortex , bhs352(2012).[106] D. S. Bassett, M. Yang, N. F. Wymbs, and S. T.Grafton, Nature neuroscience , 744 (2015).[107] A. Zalesky, A. Fornito, L. Cocchi, L. L. Gollo, andM. 
Breakspear, Proceedings of the National Academyof Sciences , 10341 (2014).[108] R. F. Betzel, M. Fukushima, Y. He, X.-N. Zuo, andO. Sporns, NeuroImage , 287 (2016).[109] M. Kivel¨a, A. Arenas, M. Barthelemy, J. P. Gleeson,Y. Moreno, and M. A. Porter, Journal of complex net-works , 203 (2014).[110] M. De Domenico, A. Sol´e-Ribalta, E. Cozzo, M. Kivel¨a,Y. Moreno, M. A. Porter, S. G´omez, and A. Arenas,Physical Review X , 041022 (2013).[111] S. F. Muldoon and D. S. Bassett, Philosophy of Science In Press (2016).[112] Q. K. Telesford, M.-E. Lynall, J. Vettel, M. B. Miller,S. T. Grafton, and D. S. Bassett, NeuroImage (2016).[113] M. De Domenico, S. Sasai, and A. Arenas, arXivpreprint arXiv:1603.05897 (2016).[114] M. J. Brookes, P. K. Tewarie, B. A. Hunt, S. E. Robson,L. E. Gascoyne, E. B. Liddle, P. F. Liddle, and P. G.Morris, NeuroImage , 425 (2016).[115] F. Battiston, V. Nicosia, M. Chavez, and V. Latora,arXiv preprint arXiv:1606.09115 (2016).[116] P. J. Mucha, T. Richardson, K. Macon, M. A. Porter,and J.-P. Onnela, science , 876 (2010).[117] L. Gauvin, A. Panisson, and C. Cattuto, PloS one ,e86028 (2014).[118] A. Ponce-Alvarez, G. Deco, P. Hagmann, G. L. Romani,D. Mantini, and M. Corbetta, PLoS Comput Biol ,e1004100 (2015).[119] L. F. Robinson, L. Y. Atlas, and T. D. Wager, Neu-roImage , 274 (2015).[120] L. Papadopoulos, J. Puckett, K. E. Daniels, and D. S.Bassett, arXiv preprint arXiv:1603.08159 (2016).[121] U. Braun, A. Sch¨afer, H. Walter, S. Erk, N. Romanczuk-Seiferth, L. Haddad, J. I. Schweiger, O. Grimm,A. Heinz, H. Tost, et al. , Proceedings of the NationalAcademy of Sciences , 11678 (2015).[122] E. Fedorenko and S. L. Thompson-Schill, Trends CognSci , 120126 (2014).[123] M. W. Cole, D. S. Bassett, J. D. Power, T. S. Braver,and S. E. Petersen, Neuron , 238 (2014).[124] M. Helmstaedter, K. L. Briggman, S. C. Turaga,V. Jain, H. S. Seung, and W. Denk, Nature , 168 (2013).[125] S.-y. Takemura, A. Bharioke, Z. Lu, A. Nern, S. Vita-ladevuni, P. K. Rivlin, W. T. 
Katz, D. J. Olbris, S. M.Plaza, P. Winston, et al. , Nature , 175 (2013).[126] C.-T. Shih, O. Sporns, S.-L. Yuan, T.-S. Su, Y.-J. Lin,C.-C. Chuang, T.-Y. Wang, C.-C. Lo, R. J. Greenspan,and A.-S. Chiang, Current Biology , 1249 (2015).[127] S. W. Oh, J. A. Harris, L. Ng, B. Winslow, N. Cain,S. Mihalas, Q. Wang, C. Lau, L. Kuan, A. M. Henry, et al. , Nature , 207 (2014).[128] N. Markov, M. Ercsey-Ravasz, A. R. Gomes, C. Lamy,L. Magrou, J. Vezoli, P. Misery, A. Falchier, R. Quilo-dran, M. Gariel, et al. , Cerebral Cortex , bhs270 (2012).[129] K. E. Stephan, L. Kamper, A. Bozkurt, G. A. Burns,M. P. Young, and R. K¨otter, Philosophical Transactionsof the Royal Society B: Biological Sciences , 1159(2001).[130] M. Bota, O. Sporns, and L. W. Swanson, Proceedingsof the National Academy of Sciences , E2093 (2015).[131] M. P. van den Heuvel, L. H. Scholtens, L. F. Barrett,C. C. Hilgetag, and M. A. de Reus, The Journal ofNeuroscience , 13943 (2015).[132] M. A. de Reus and M. P. Van den Heuvel, Neuroimage , 397 (2013).[133] C. Destrieux, B. Fischl, A. Dale, and E. Halgren, Neu-roimage , 1 (2010).[134] B. T. Yeo, F. M. Krienen, J. Sepulcre, M. R. Sabuncu,D. Lashkari, M. Hollinshead, J. L. Roffman, J. W.Smoller, L. Z¨ollei, J. R. Polimeni, et al. , Journal of neu-rophysiology , 1125 (2011).[135] E. M. Gordon, T. O. Laumann, B. Adeyemo, J. F. Huck-ins, W. M. Kelley, and S. E. Petersen, Cerebral cortex, bhu239 (2014).[136] M. Glasser, T. Coalson, E. Robinson, C. Hacker, J. Har-well, E. Yacoub, K. Ugurbil, J. Anderson, C. Beckmann,M. Jenkinson, et al. , Nature (2015).[137] L. Cammoun, X. Gigandet, D. Meskaldji, J. P. Thi-ran, O. Sporns, K. Q. Do, P. Maeder, R. Meuli, andP. Hagmann, Journal of neuroscience methods , 386(2012).[138] I. Diez, P. Bonifazi, I. Escudero, B. Mateos, M. A.Mu˜noz, S. Stramaglia, and J. M. Cortes, Scientific re-ports (2015).[139] J. Wang, L. Wang, Y. Zang, H. Yang, H. Tang, Q. Gong,Z. Chen, C. Zhu, and Y. He, Human brain mapping ,1511 (2009).[140] A. 
Zalesky, A. Fornito, I. H. Harding, L. Cocchi,M. Y¨ucel, C. Pantelis, and E. T. Bullmore, Neuroimage , 970 (2010).[141] D. S. Bassett, J. A. Brown, V. Deshpande, J. M. Carl-son, and S. T. Grafton, Neuroimage , 12621279(2011).[142] P. Bellec, P. Rosa-Neto, O. C. Lyttelton, H. Benali, andA. C. Evans, Neuroimage , 1126 (2010).[143] G. Rosenthal, O. Sporns, and G. Avidan, Cerebral Cor-tex (2016).[144] D. A. Dawson, J. Lam, L. B. Lewis, F. Carbonell, J. D.Mendola, and A. Shmuel, Brain connectivity , 57(2016).[145] R. F. Betzel, A. Avena-Koenigsberger, J. Go˜ni, Y. He,M. A. De Reus, A. Griffa, P. E. V´ertes, B. Miˇsic, J.-P. Thiran, P. Hagmann, et al. , Neuroimage , 1054(2016).[146] S. Henriksen, R. Pang, and M. Wronkiewicz, eLife , e12366 (2016).[147] P. E. V´ertes, A. F. Alexander-Bloch, N. Gogtay, J. N.Giedd, J. L. Rapoport, and E. T. Bullmore, Proceed-ings of the National Academy of Sciences109