Multiscale unfolding of real networks by geometric renormalization
Guillermo García-Pérez,¹,² Marián Boguñá,¹,² and M. Ángeles Serrano¹,²,³

¹ Departament de Física de la Matèria Condensada, Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona, Spain
² Universitat de Barcelona Institute of Complex Systems (UBICS), Universitat de Barcelona, Barcelona, Spain
³ ICREA, Pg. Lluís Companys 23, E-08010 Barcelona, Spain
(Dated: June 2, 2017)

Multiple scales coexist in complex networks. However, the small-world property makes them strongly entangled, which turns the elucidation of length scales and symmetries into a defiant challenge. Here, we define a geometric renormalization group for complex networks and use the technique to investigate networks as viewed at different scales. We find that real networks embedded in a hidden metric space show geometric scaling, in agreement with the renormalizability of the underlying geometric model. This allows us to unfold real scale-free networks into a self-similar multilayer shell which unveils the coexisting scales and their interplay. The multiscale unfolding offers a basis for a new approach to explore critical phenomena and universality in complex networks, and affords us immediate practical applications, such as high-fidelity smaller-scale replicas of large networks and a multiscale navigation protocol in hyperbolic space which boosts the success of single-layer versions.
I. INTRODUCTION
Symmetries permeate reality and our theories to understand it. From very simple to very subtle, all of them denote invariance under a transformation, and thus similarity or even exact correspondence between different parts of a system, or between the system and itself when observed at different scales of length or of some other variable. As paradigmatic examples, fractals are geometric objects showing physical scale invariance and self-similarity [1]. Moreover, these properties can also apply to phenomenological behaviours, such as the dynamics of systems near the critical points of phase transitions [2].

In complex networks, multiple scales coexist, but they are so entangled that the definition of self-similarity and scale invariance has been limited by the lack of a valid source of geometric length-scale transformations. Previous efforts to study these symmetries are based on topology and include coarse-graining to preserve the large-scale behaviour of random walks [3], or box-covering procedures based on shortest path lengths between nodes [4–9]. The latter revealed that certain real networks have finite fractal dimensions and exhibit self-similarity, although scaling in the topological properties was not observed beyond the degree distribution and the maximum and average degrees. However, the collection of shortest paths, albeit a well-defined metric, is a poor source of length-based scaling factors in networks due to the small-world [10] or even ultrasmall-world [11] property, and the problem remained controversial.
Other studies have approached the multiscale structure of network models in a somewhat more geometric way [12, 13], but their findings cannot be directly applied to real-world networks. The development in recent years of plausible models of complex networks based on an underlying metric space [14, 15] now opens the door to a proper geometric definition of self-similarity and scale invariance, and to an unfolding of the different scales present in the connectivity structure of real networks. Hidden metric space network models couple the topology of a network to an underlying geometry through a universal connectivity law which combines popularity and similarity dimensions [14, 16, 17], such that more popular and similar nodes have a higher chance to interact. Naturally, the geometricalization of networks provides a reservoir of distance scales, so that we can borrow concepts and techniques from the renormalization group in statistical physics [18, 19], which has been used to study systems where widely different length scales are present simultaneously. By recursive averaging over short-distance degrees of freedom, the renormalization group has successfully explained, for instance, the universality properties of critical behaviour in phase transitions [20].

In this work, we introduce a geometric renormalization group for complex networks (RGN). The method is based on a geometric embedding of the networks to construct renormalized versions of their structure by coarse-graining neighbouring nodes into supernodes and defining a new map which progressively selects longer-range connections by identifying the relevant interactions at each scale. The RGN technique is inspired by the block-spin renormalization group devised by L. P. Kadanoff [18].
II. EVIDENCE OF GEOMETRIC SCALING IN REAL NETWORKS
The map of a complex network embedded in a hidden metric space, M(T, G), contains information about both its topology T and its geometry G (in terms of the positions of the nodes in the hidden metric space). Given M(T, G), we define a geometric renormalization operator F_r of resolution r which coarse-grains the original network by a factor r and defines a new topology T' and a new geometry G' conforming the renormalized map M',
\[ M(T, G) \xrightarrow{\;F_r\;} M'(T', G'). \tag{1} \]
The transformation zooms out by changing the minimum length scale from that of the original network to a larger value. This operation can be iterated starting from the original network at l = 0,
\[ M^{(l+1)}\big(T^{(l+1)}, G^{(l+1)}\big) = F_r\big[M^{(l)}\big(T^{(l)}, G^{(l)}\big)\big]. \tag{2} \]
In the limit N → ∞, it can be applied up to any desired scale of observation, whereas it is bounded to O(log N) iterations in systems with a finite number of nodes N.

The simplest hidden metric space that can embed a network is a one-dimensional sphere on which nodes have specific angular positions {θ_i; i = 1, ..., N}. In this space, the transformation proceeds by, first, defining non-overlapping blocks of consecutive nodes of size r along the circle and, second, coarse-graining the blocks into supernodes, regardless of whether or not they are connected to each other. Each supernode is then placed within the angular region defined by the corresponding block, so that the order of nodes in the original embedding is preserved in the renormalization process. All the links between some node in one supernode and some node in the other, if any, are renormalized into a single link between the two supernodes. Figure 1 illustrates the process.
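As an illustration, the block construction and link-union rule just described can be sketched in a few lines of code. This is a minimal sketch with our own function name and data representation (angles as a dictionary, links as a set of node pairs), not part of the paper:

```python
from collections import defaultdict

def renormalize_step(theta, edges, r=2):
    """One geometric renormalization step with resolution r.

    theta : dict node -> angular coordinate on the circle
    edges : set of frozensets {i, j}
    Nodes are grouped into blocks of r consecutive angular positions;
    each block becomes a supernode, and two supernodes are linked iff
    any pair of their constituent nodes is linked.
    """
    # Order nodes by angle and cut the sequence into blocks of size r
    ordered = sorted(theta, key=theta.get)
    block_of = {v: idx // r for idx, v in enumerate(ordered)}

    # A supernode inherits a position inside its block's angular region;
    # the plain mean used here preserves the ordering (the paper uses a
    # hidden-degree-weighted rule instead, Eq. (4))
    members = defaultdict(list)
    for v in ordered:
        members[block_of[v]].append(v)
    theta_new = {b: sum(theta[v] for v in vs) / len(vs)
                 for b, vs in members.items()}

    # Link two supernodes iff at least one inter-block link existed
    edges_new = {frozenset((block_of[i], block_of[j]))
                 for i, j in map(tuple, edges)
                 if block_of[i] != block_of[j]}
    return theta_new, edges_new
```

Iterating the function reproduces the multiscale shell layer by layer; links internal to a block simply disappear, while any number of inter-block links collapses into one.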
This coarse-graining procedure is not restricted to equal-size blocks and can be defined in different ways, as long as the angular distance between the nodes inside the blocks is smaller than the distance between nodes in different blocks. For instance, one could divide the circle into equally sized sectors of a certain arc length such that they contain, on average, a constant number of nodes. The geometric renormalization operator has an abelian semigroup structure with respect to composition, meaning that a certain number of iterations of a given resolution is equivalent to a single transformation of higher resolution, as shown in Fig. 1 [21]. Finally, the set of renormalized network layers l, each r^l times smaller than the original one, forms a multiscale shell of the network.

In this work, we apply the RGN to six different real scale-free networks from very different domains: technology (Internet), transportation (Airports), biology (Cell metabolism and Proteome) and scripts (Music and Words); see Appendix A for details. Many real networks can be embedded in the one-dimensional sphere using the S¹ model [14], which places nodes on a circle and connects every pair with a probability that decreases with their distance along the circle, as a measure of their similarity, and increases with the product of their hidden degrees {κ_i}, as a measure of their popularity (see Appendix A). The hidden degrees are well approximated by the observed degrees in the network [14, 22], and the embedding method uses statistical inference techniques to identify the angular coordinates which maximize the likelihood that the topology of the real network is reproduced by the model [23, 24]. Once the hidden degrees and coordinates of the real scale-free networks considered in our study are known, we apply the coarse-graining by defining blocks of r = 2 consecutive nodes in the circle, and place the supernodes within the coordinates of their corresponding nodes, with the only restriction of preserving the original ordering. We iterate the process so that at each coarse-graining step the size of the system is reduced by half.

FIG. 1. Geometric renormalization transformation for complex networks. Each layer is obtained after a renormalization step with resolution r starting from the original network in l = 0. Each node i in red is placed at an angular position θ_i^(l) on the S¹ circle and has a size proportional to the logarithm of its hidden degree κ_i^(l). Straight solid lines represent the links in each layer. Coarse-graining blocks correspond to the blue shadowed areas, and dashed lines connect nodes to their supernodes in layer l + 1. Two supernodes in layer l + 1 are connected if and only if, in layer l, some node in one supernode is connected to some node in the other (blue links give an example).

The resulting topological features of the renormalized networks are shown in Fig. 2 (see also Fig. 6 in Appendix B). We observe that the degree distributions, the degree-degree correlations (as measured by the average nearest-neighbour degree) and the clustering spectra all show self-similar behaviour, with the curves for the different renormalized layers collapsing if the degrees in the layers are rescaled by their average degree. Also, for every layer l we obtained a partition into communities, P^(l), using the Louvain method [25]; Fig. 2 bottom shows their modularities Q^(l). We also defined the partition induced by P^(l) on the original network, P^(l,0), obtained by considering that two nodes i and j of the original network are in the same community in P^(l,0) if and only if the supernodes of i and j in layer l belong to the same community in P^(l). Both the modularity Q^(l,0) of P^(l,0) and the normalized mutual information nMI^(l,0) between the partitions P^(0) and P^(l,0) are shown in Fig. 2 bottom.
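The induced partition P^(l,0) is straightforward to compute from the node-to-supernode maps produced by the coarse-graining. A minimal sketch, with our own representation (one dictionary per renormalization step):

```python
def induced_partition(block_maps, partition_l):
    """Partition P^(l,0) induced on layer 0 by a partition of layer l.

    block_maps  : list of dicts; block_maps[m] maps each node of layer m
                  to its supernode in layer m+1 (m = 0, ..., l-1)
    partition_l : dict mapping nodes of layer l to community labels
    Two original nodes end up in the same community iff their supernodes
    in layer l share a community.
    """
    induced = {}
    for v in block_maps[0]:
        w = v
        for bm in block_maps:   # follow v up through the layers
            w = bm[w]
        induced[v] = partition_l[w]
    return induced
```

Comparing the resulting labelling against the communities detected directly in layer 0 (e.g., via normalized mutual information) gives the nMI^(l,0) curves of Fig. 2.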
Strikingly, the community structure is preserved along the flow to the extent of allowing us to find high-modularity partitions of the original network from much smaller versions of it. This property suggests a new and efficient multiscale community detection algorithm [26–28].

FIG. 2. Self-similarity of real networks along the RGN flow. Each column shows the RGN flow with r = 2 of different topological features of the Internet AS network (left), the Human Metabolic network (middle) and the Music network (right). Top: complementary cumulative distribution of rescaled degrees k_res^(l) = k^(l)/⟨k^(l)⟩. Middle: degree-dependent clustering coefficient over rescaled-degree classes. The inset shows the normalized average nearest-neighbour degree \(\bar{k}_{nn,n}(k_{res}^{(l)}) = \bar{k}_{nn}(k_{res}^{(l)}) \langle k^{(l)} \rangle / \langle (k^{(l)})^2 \rangle\). Bottom: RGN flow of the community structure; Q^(l) stands for the modularity in layer l, Q^(l,0) is the modularity that the community structure of layer l induces in the original network, and nMI^(l,0) is the normalized mutual information between the latter and the community structure detected directly in the original network. The number of layers in each system is determined by its original size.

III. GEOMETRIC RENORMALIZATION OF THE S¹ MODEL
The self-similarity exhibited by real-world networks can be understood in terms of their congruency with the underlying hidden metric space S¹ model. As we show analytically (see Appendix C for details), the model is renormalizable in a geometric sense, which means that real scale-free networks with a geometric structure, i.e., which admit a good embedding, necessarily display the same scaling behaviour.

To see why the S¹ model exhibits this self-similarity, we need to consider the renormalization transformation of the geometric layout as well, that is, of the hidden degrees, the angular positions, μ, R and β. As we show in Appendix C, by assigning a new hidden degree κ_i^(l+1) to supernode i in layer l + 1 as a function of the hidden degrees of the nodes it contains in layer l according to
\[ \kappa_i^{(l+1)} = \left[ \sum_{j=1}^{r} \left( \kappa_j^{(l)} \right)^{\beta} \right]^{1/\beta}, \tag{3} \]
as well as an angular coordinate θ_i^(l+1) given by
\[ \theta_i^{(l+1)} = \left[ \frac{\sum_{j=1}^{r} \left( \theta_j^{(l)} \kappa_j^{(l)} \right)^{\beta}}{\sum_{j=1}^{r} \left( \kappa_j^{(l)} \right)^{\beta}} \right]^{1/\beta}, \tag{4} \]
and by rescaling the global parameters as μ^(l+1) = μ^(l)/r, R^(l+1) = R^(l)/r and β^(l+1) = β^(l), the renormalized networks remain maximally congruent with the hidden metric space model. This means that the probability p_ij^(l+1) for two supernodes i and j to be connected in layer l + 1 (which, according to the RGN procedure, is given by the probability that at least one link exists between some node in i and some node in j in layer l) maintains its original form, Eq. (A1), as shown in Fig. 3A. This applies both to the model and to real networks as long as they admit a good embedding; see also Fig. 7 in Appendix B. In addition, notice that the transformation of the geometric layout also has the abelian semigroup structure.

Since the networks remain congruent with the S¹ model, hidden degrees κ^(l) remain proportional to observed degrees k^(l), which allows us to explore the degree distribution of the renormalized layers analytically. It can be shown that, if the original distribution of hidden degrees is a power law with characteristic exponent γ, the hidden degree distribution in the renormalized layers is asymptotically also a power law with the same exponent, as long as (γ − 1)/2 < β (see Appendix C). Interestingly, the global parameter controlling the clustering coefficient, β, does not change along the flow, which explains the self-similarity of the clustering spectra. Finally, the transformation for the angles, Eq. (4), preserves the ordering of nodes and the heterogeneity in their angular density; as a consequence, the community structure is preserved in the flow [23, 29, 30]. The model is therefore renormalizable, and RGN realizations at any scale belong to the same ensemble with a different average degree, which should be rescaled to produce self-similar replicas.

A good approximation to the behaviour of the average degree for very large networks can be calculated by taking into account the transformation of hidden degrees in the RG flow, Eq. (3) (see Appendix C for details). We obtain ⟨k⟩^(l+1) = r^ν ⟨k⟩^(l), with a scaling factor ν depending on the connectivity structure of the original network. If 0 < γ − β ≤ 1, the flow is dominated by the exponent of the degree distribution γ, and the scaling factor is given by
\[ \nu = \frac{2}{\gamma - 1} - 1, \tag{5} \]
whereas the flow is dominated by the strength of clustering if 1 ≤ γ − β < 2, and
\[ \nu = \frac{2}{\beta} - 1. \tag{6} \]
Therefore, if γ < 3 or β < 2 (phase I in Fig. 3B), then ν > 0 and the flow produces increasingly denser networks with the fully connected graph as the fixed point; ν = 0 for γ = 3 and β ≥ 2, or for β = 2 and γ ≥ 3, which indicates that the network is at the edge of the transition between the small-world and non-small-world phases; and ν < 0 for γ > 3 and β > 2, causing the RGN flow to produce sparser networks approaching a one-dimensional ring structure as a fixed point (phase II in Fig. 3B). In this case, the renormalized layers eventually lose the small-world property.
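A direct transcription of Eqs. (3) and (4) for a single coarse-graining block can be written as follows (a sketch with our own variable names):

```python
def renormalize_block(kappas, thetas, beta):
    """Supernode hidden degree, Eq. (3), and angular coordinate, Eq. (4),
    from the r nodes of one coarse-graining block in layer l.

    kappas, thetas : hidden degrees and angles of the r nodes in the block
    beta           : clustering parameter of the S1 model (invariant
                     under the flow)
    """
    norm = sum(k ** beta for k in kappas)          # sum of kappa_j^beta
    kappa_new = norm ** (1.0 / beta)               # Eq. (3)
    theta_new = (sum((t * k) ** beta               # Eq. (4)
                     for k, t in zip(kappas, thetas)) / norm) ** (1.0 / beta)
    return kappa_new, theta_new
```

Because (κ')^β equals the sum of the κ_j^β, and (θ'κ')^β equals the sum of the (θ_j κ_j)^β, composing two r = 2 steps reproduces a single r = 4 step exactly, which is the abelian semigroup property mentioned above.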
FIG. 3. RGN of the S¹ model. A: Empirical connection probability in a synthetic S¹ network, measured as the fraction of connected pairs of nodes as a function of χ_ij^(l) = R^(l) Δθ_ij^(l)/(μ^(l) κ_i^(l) κ_j^(l)) in the renormalized layers, from l = 0 to l = 8, with r = 2. The original synthetic network has γ = 2.5 and β = 1.5. The black dashed line shows the theoretical curve, Eq. (A1). The inset shows the invariance of the mean local clustering along the RGN flow. B: Real networks in the connectivity phase diagram; the synthetic network above is also shown. Darker blue (green) in the shaded areas represents higher values of the exponent ν. The dashed line separates the γ-dominated region from the β-dominated region. In phase I, ν > 0; in phase II, ν < 0; the boundary between them has ν = 0 and, hence, marks the transition between the small-world and non-small-world phases. In region III, the degree distribution loses its scale-freeness along the flow. The inset shows the exponential increase of the average degree of the renormalized real networks ⟨k^(l)⟩ with respect to l.

In Fig. 3B, several real networks are displayed in the connectivity space. All of them lie in the region having the fully connected network as the fixed point, meaning that the RGN flow progressively selects more and more long-range connections as a consequence of their small-worldness (see Appendix C). Furthermore, all of them, except the Internet and the Airports networks, belong to the β-dominated region. The inset also shows the behaviour of the average degree of every layer l, ⟨k^(l)⟩; as predicted, it grows exponentially in all cases.

Interestingly, global properties of the model, like those reflected in the spectra of eigenvalues of both the adjacency and Laplacian matrices, and quantities like the diffusion time and the restabilization time [31], show a dependence on γ and β which is in consonance with the one displayed by the RGN flow of the average degree; see the results in Figs. 10, 11 and 12 of Appendix C for synthetic networks. The S¹ model seems to be more sensitive to small changes in degree heterogeneity in the region 0 < γ − β ≤ 1, whereas changes in clustering are better reflected when 1 ≤ γ − β ≤ 2.

IV. APPLICATIONS
The RGN enables us to unfold scale-free complex networks into a self-similar multilayer shell which unveils the coexisting scales and their interplay. Beyond the theoretical implications of the discovery that self-similarity under the RGN flow seems to be a ubiquitous symmetry in real networks, the multiscale unfolding can be exploited in immediate practical applications. Next, we propose two among many others: one which singles out a specific scale, and another which exploits multiple scales simultaneously.
A. Mini-me network replicas
The self-similarity unveiled by the RGN in real networks allows the construction of high-fidelity reduced versions that we call Mini-me network replicas. The downscaling of the topology of large real-world complex networks finds useful applications, for instance, in networked communication systems like the Internet, as a reduced testbed to analyze the performance of new routing protocols [32–35]. However, the success of such a program rests upon the quality of the downscaled version of the original network, which should reproduce not only local properties but also the mesoscopic structure of the network. Mini-me replicas can also be used to perform finite-size scaling of critical phenomena taking place on real networks, so that critical exponents can be evaluated starting from a single-size network instance. Mini-me networks can be produced at any scale in the range in which self-similarity is preserved. For their construction, we exploit the fact that, under renormalization, a scale-free network remains self-similar and congruent with the underlying geometric model over the whole self-similarity range of the multilayer shell. The idea is to single out a specific scale after a certain number of renormalization steps.

Typically, the renormalized average degree of real networks increases along the flow, since they belong to the small-world phase (see the inset in Fig. 3B), meaning that the network layer at the selected scale is more densely connected. To reduce the density to the level of the original network, we apply a pruning of links; see Appendix A. Basically, we readjust the parameter μ, which controls the number of links in the underlying geometric S¹ model, so that the expected average degree in the renormalized version is that of the original network, which in turn modifies the connection probability, Eq. (A1). We keep in the Mini-me network only the links present in the renormalized layer which are consistent with the readjusted connection probability.
In this way, we obtain a reduced version of the real network which, to a very good approximation, is statistically equivalent to the original.

To illustrate the high fidelity that Mini-me network replicas can achieve, we use them to reproduce the behaviour of dynamical processes in real networks. We selected three different dynamical processes: the classic ferromagnetic Ising model, the susceptible-infected-susceptible (SIS) epidemic spreading model, and the Kuramoto model of synchronization; see Appendix A for details. We test these dynamics on all the self-similar network layers of the real networks analysed in this work. Results are shown in Fig. 4 and in Fig. 13 in Appendix D. Quite remarkably, for all dynamics and all networks, we observe very similar results between the original networks and the Mini-me replicas at all scales. This is particularly interesting as these dynamics have a strong dependence on the mesoscale structure of the underlying networks. This strongly supports our claim that both the micro- and mesoscales are preserved in the downscaled replicas, as expected given the self-similarity of the network layers in the RGN flow.

B. Multiscale navigation
FIG. 4. Dynamics on the Mini-me replicas. Each column shows the order parameters versus the control parameters of different dynamical processes on the original and Mini-me replicas of the Internet AS network (left), the Human Metabolic network (middle) and the Music network (right) with r = 2, that is, every value of l identifies a network 2^l times smaller than the original one. All points show the results averaged over 100 simulations. Error bars indicate the fluctuations of the order parameters. Top: magnetization ⟨|m|⟩^(l) of the Ising model as a function of the inverse temperature 1/T. Middle: prevalence ⟨ρ⟩^(l) of the SIS model as a function of the infection rate λ. Bottom: coherence ⟨r⟩^(l) of the Kuramoto model as a function of the coupling strength σ. In all cases, the curves of the smaller-scale replicas are extremely similar to the results obtained on the original networks.

Applications that simultaneously exploit more than one, or even all, of the layers in the self-similar multiscale shell are also possible. Next, we introduce a new multiscale navigation protocol for networks embedded in hyperbolic space, which improves single-layer results [23]. To this end, we exploit the quasi-isomorphism between the S¹ model and the H² model in hyperbolic space [16, 36] to produce a purely geometric representation of the multiscale shell (see Appendix A). In hyperbolic space, each node is characterised by a radial coordinate directly related to its degree, and an angular coordinate identical to that on the circle. The connection probability becomes a decreasing function of the hyperbolic distance between nodes and, therefore, the most likely path connecting two distant nodes is typically the topological shortest path.

The multiscale protocol is based on greedy routing, in which a source node transmitting information or a packet to a target node sends it to its neighbour closest to the destination in the metric space. As performance metrics, we consider the success rate (fraction of successful greedy paths) and the stretch of successful paths (ratio between the number of hops in the greedy path and the topological shortest path). Notice that, in general, greedy routing cannot guarantee the existence of a successful greedy path between all pairs of nodes in the network; the packet can get trapped in a loop if it is sent to an already visited node. In this case, the multiscale protocol can find alternative paths by taking advantage of the increased efficiency of greedy forwarding in the coarse-grained layers. When node i needs to send a packet to some destination node j, node i performs a virtual greedy forwarding step in the highest possible layer to find which supernode should be next in the greedy path. Based on this, node i then forwards the packet to its physical neighbour in the real network which guarantees that it will eventually reach that supernode. The process is depicted in Fig. 5A (full details can be found in Appendix A). To guarantee navigation inside supernodes, we require an extra condition in the renormalization process and only consider blocks of connected consecutive nodes. A single node can be left alone, forming a supernode by itself, so blocks are of size one or two nodes. Notice that the new requirement alters neither the self-similarity of the renormalized networks forming the multiscale shell (Figs. 14 and 15 in Appendix E) nor their congruency with the hidden metric space (Fig. 16 in Appendix E).

Figure 5B shows the increase of the success rate as the number of layers L used in the navigation process is increased for the different real networks considered in this work. Interestingly, as seen in Fig. 5C, this improvement alters the stretch of successful paths only mildly.
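The single-layer greedy step that the multiscale protocol builds on can be sketched as follows. This is a simplified sketch with our own data structures (the multiscale fallback through coarse-grained layers is omitted for brevity); distances are computed with the exact hyperbolic law of cosines, Eq. (A5), and a revisited node signals failure:

```python
import math

def hyperbolic_distance(u, v):
    """Hyperbolic law of cosines, Eq. (A5); u, v are (r, theta) pairs."""
    (r1, t1), (r2, t2) = u, v
    dtheta = math.pi - abs(math.pi - abs(t1 - t2) % (2 * math.pi))
    arg = (math.cosh(r1) * math.cosh(r2)
           - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
    return math.acosh(max(arg, 1.0))  # guard against rounding below 1

def greedy_path(adj, coords, source, target):
    """Single-layer greedy routing: forward to the neighbour closest to
    the target in hyperbolic space; fail (None) on a revisited node."""
    path, current, visited = [source], source, {source}
    while current != target:
        nxt = min(adj[current],
                  key=lambda n: hyperbolic_distance(coords[n], coords[target]))
        if nxt in visited:
            return None        # packet trapped in a loop: unsuccessful path
        path.append(nxt)
        visited.add(nxt)
        current = nxt
    return path
```

In the multiscale variant described above, a `None` at the original layer would trigger a virtual greedy step among supernodes in the highest available layer before forwarding to a physical neighbour.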
The multiscale navigation protocol boosts the success rate by finding paths that are, on average, only slightly longer than those of standard greedy routing in the original network, in almost all cases; see the inset in Fig. 5C. The improvement comes at the expense of adding information about the supernodes to the knowledge needed for standard greedy routing in single-layer networks. However, the trade-off between improvement and information overload is advantageous, as for many systems the addition of just one or two renormalized layers already produces a notable effect.

V. DISCUSSION
FIG. 5. Multiscale navigation. A: Illustration of the navigation protocol. Red arrows show the unsuccessful greedy path in the original layer of a message attempting to reach the target yellow node from the source node. Green arrows show the successful greedy path from the same source using both layers. B: Success rate as a function of the number of layers used in the process, computed over randomly selected source–target pairs. C: Average stretch ⟨l_g/l_s⟩, where l_g is the topological length of a path found by the algorithm and l_s is the actual shortest path length in the network. The inset shows the average geometric stretch ⟨l_g/l_g^(0)⟩, where l_g^(0) is the topological length of a path found by the classical single-layer navigation protocol.

Hidden metric space network models [14, 16, 17] are able to explain non-trivial structural features of real networks, including scale-free degree distributions, clustering, and the self-similarity of the nested hierarchy of subgraphs produced by degree pruning [37], as well as fundamental mechanisms like preferential attachment in growing networks [17] and the emergence of communities [30]. Interestingly, the existence of a metric space underlying complex networks allows us to define a geometric renormalization group that reveals the multiscale nature of these systems. Quite strikingly, models of scale-free networks are shown to be self-similar under such renormalization, revealing different structural properties depending on the level of coupling with the metric space and on degree heterogeneity. The importance of these results, however, stems from the observed self-similarity under geometric renormalization as a ubiquitous symmetry of real-world scale-free networks, which moreover stands as new evidence in favour of the conjecture that hidden metric spaces underlie real networks.

The renormalization group presented in this work is similar in spirit to the topological renormalization studied in [4]. However, it has clear advantages. First, the ordering in the construction of the boxes is dictated by the embedding of the original network in the underlying space.
Second, the congruency between real scale-free networks and the underlying metric space explains the self-similarity of real systems and reveals a multiscale organization that preserves the mesoscopic structure across different observation scales. In the case of topological renormalization, on the other hand, the lack of an underlying model implies that it is not obvious to anticipate, before applying the transformation, whether the network will be self-similar and whether or not the mesoscopic structure will be maintained.

From a fundamental point of view, the geometric renormalization group introduced here has proven to be an exceptional tool to unravel the global organization of complex networks across scales and promises to become a standard methodology to analyze real complex networks. It can also help in areas like the study of metapopulation models, in which transportation fluxes or population movements happen on both a local and a global scale [38]. From a practical point of view, we envision many applications besides the two studied in this paper: for instance, the development of a new community detection method that would use the mesoscopic information encoded in the different observation scales, and the use of downscaled versions of the network to perform finite-size scaling. This last application would allow for the determination of critical exponents of real complex networks, a task that is not possible with current methods.

ACKNOWLEDGMENTS
We acknowledge support from a James S. McDonnell Foundation Scholar Award in Complex Systems; the ICREA Academia prize, funded by the Generalitat de Catalunya; Ministerio de Economía y Competitividad of Spain projects no. FIS2013-47282-C2-1-P and no. FIS2016-76830-C2-2-P (AEI/FEDER, UE); and Generalitat de Catalunya grant no. 2014SGR608.
AUTHOR CONTRIBUTIONS
G. G.-P., M. B., and M. Á. S. contributed to the design and implementation of the research, to the analysis of the results, and to the writing of the manuscript.
ADDITIONAL INFORMATION
Competing financial interests:
The authors declare no competing financial interests.
Appendix A: Methods

1. Real networks data
The real networks analyzed in this paper are:

• The Internet at the Autonomous Systems level. The data was collected by the Cooperative Association for Internet Data Analysis (CAIDA) [39] and corresponds to mid 2009.
• The Airports network, obtained from Refs. [40, 41]. Directed links represent flights by airlines. We consider the undirected version obtained by keeping bidirectional edges only.
• The one-mode projection onto metabolites of the human metabolic network at the cell level, as used in Ref. [29].
• The human HI-II-14 interactome. This proteome network was obtained from Ref. [42]. We removed self-loops.
• The Music network. Nodes are chords—sets of musical notes played in a single beat—and connections represent observed transitions among them in a set of songs, see Ref. [43]. The original network is weighted, directed and very dense. Hence, we applied the disparity filter [44] with α = 0.01 to obtain a sparser network. Finally, we kept bidirectional edges only to construct the undirected network.
• The network of adjacency between words in Darwin's book "On the Origin of Species", from Ref. [45].

In all cases, we only considered the largest connected components.

2. The S¹ model and the transformation to H²

The S¹ model [14] places the nodes of a network on a one-dimensional sphere of radius R and connects every pair of nodes i, j with probability

p_ij = 1/(1 + χ_ij^β) = 1/[1 + (d_ij/(μ κ_i κ_j))^β],   (A1)

where μ controls the average degree of the network, β its clustering, and d_ij = R Δθ_ij is the distance between the nodes, separated by an angle Δθ_ij; R is set to N/(2π), where N is the number of nodes, so that the density of nodes along the circle is equal to 1. The hidden degrees κ_i and κ_j are proportional to the degrees of nodes i and j, respectively.

The S¹ model is isomorphic to a purely geometric model, the H² model [16], in which nodes are placed in a two-dimensional hyperbolic disk of radius

R_H² = 2 ln(2R/(μ κ_0²)),   (A2)

where κ_0 = min{κ_i}. By mapping every hidden degree κ_i to a radial coordinate r_i according to

r_i = R_H² − 2 ln(κ_i/κ_0),   (A3)

the connection probability, Eq. (A1), becomes

p_ij = 1/(1 + e^{(β/2)(x_ij − R_H²)}),   (A4)

where x_ij = r_i + r_j + 2 ln(Δθ_ij/2) is a good approximation to the hyperbolic distance between two points with coordinates (r_i, θ_i) and (r_j, θ_j) in the native representation of hyperbolic space. The exact hyperbolic distance d_H² is given by the hyperbolic law of cosines,

d_H² = acosh(cosh r_i cosh r_j − sinh r_i sinh r_j cos Δθ_ij).   (A5)
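To make the model concrete, Eq. (A1) can be sampled directly. The following is a minimal sketch; the function name, parameter choices and finite-size handling are our own, not part of the paper:

```python
import numpy as np

def sample_s1_network(kappa, beta, mu, rng=None):
    """Draw one network from the S^1 ensemble of Eq. (A1).

    kappa : array of hidden degrees (one per node)
    beta  : clustering parameter
    mu    : parameter controlling the average degree
    """
    rng = np.random.default_rng(rng)
    n = len(kappa)
    R = n / (2 * np.pi)                      # density of nodes along the circle = 1
    theta = rng.uniform(0.0, 2 * np.pi, n)   # uniform angular coordinates
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            # angular separation on the circle, in [0, pi]
            dtheta = np.pi - abs(np.pi - abs(theta[i] - theta[j]))
            chi = R * dtheta / (mu * kappa[i] * kappa[j])
            if rng.random() < 1.0 / (1.0 + chi ** beta):
                adj[i, j] = adj[j, i] = 1
    return theta, adj
```

In the thermodynamic limit and for β > 1, choosing μ = β sin(π/β)⟨k⟩/(2π⟨κ⟩²) yields a target average degree ⟨k⟩; for finite networks this is only approximate.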
3. Adjusting the average degree of Mini-me network replicas
To reduce the average degree of a renormalized network to the level of the original one, we prune links using the underlying metric model, with which the networks in all layers are congruent. The procedure is detailed in this section.

The renormalized network in layer l has an average degree ⟨k^(l)⟩ that is generally larger (in phase I) than that of the original network, ⟨k^(0)⟩. Moreover, the new network is congruent with the underlying hidden metric space with a parameter μ^(l) = μ^(0)/r^l controlling its average degree. The main idea is to decrease the value of μ^(l) to a new value μ^(l)_new, which implies that the connection probability of every pair of nodes (i, j), p^(l)_ij, decreases to p^(l)_ij,new. We then prune the existing links by keeping each of them with probability

q^(l)_ij = p^(l)_ij,new / p^(l)_ij.   (A6)

Therefore, the probability for a link to exist in the pruned network reads

P{a^(l)_ij,new = 1} = p^(l)_ij q^(l)_ij = p^(l)_ij,new,   (A7)

whereas the probability for it not to exist is

P{a^(l)_ij,new = 0} = 1 − p^(l)_ij + p^(l)_ij (1 − q^(l)_ij) = 1 − p^(l)_ij,new,   (A8)

that is, the pruned network has a lower average degree and is also congruent with the underlying metric space model with the new value μ^(l)_new. Hence, we only need to find the right value of μ^(l)_new such that ⟨k^(l)_new⟩ = ⟨k^(0)⟩. In the thermodynamic limit, the average degree of an S¹ network is proportional to μ, so we could simply set

μ^(l)_new = (⟨k^(0)⟩/⟨k^(l)⟩) μ^(l).   (A9)

However, since we consider real-world networks, finite-size effects play an important role and we need to correct the value of μ^(l)_new in Eq. (A9). To this end, we use a correcting factor c, initially set to c = 1, and use μ^(l)_new = c (⟨k^(0)⟩/⟨k^(l)⟩) μ^(l); for every value of c, we prune the network.
If ⟨k^(l)_new⟩ > ⟨k^(0)⟩, we update c → c − 0.1u, where u is a random variable uniformly distributed between 0 and 1. Similarly, if ⟨k^(l)_new⟩ < ⟨k^(0)⟩, we update c → c + 0.1u. The process ends when |⟨k^(l)_new⟩ − ⟨k^(0)⟩| falls below a given threshold (in our case, we set it to 0.1).
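The pruning step of Eqs. (A6)–(A8) can be sketched in a few lines; the function name is ours, and the calibration loop over c described above would simply wrap repeated calls to it:

```python
import numpy as np

def prune_links(adj, p_old, p_new, rng=None):
    """Keep every existing link (i, j) independently with probability
    q_ij = p_new[i, j] / p_old[i, j] (Eq. (A6)).  The pruned network is then
    statistically equivalent to one generated directly with probabilities
    p_new (Eqs. (A7)-(A8))."""
    rng = np.random.default_rng(rng)
    n = adj.shape[0]
    pruned = adj.copy()
    for i in range(n):
        for j in range(i + 1, n):
            if pruned[i, j] and rng.random() >= p_new[i, j] / p_old[i, j]:
                pruned[i, j] = pruned[j, i] = 0
    return pruned
```

Because links are only ever removed, the result is a subgraph of the input, and its average degree decreases monotonically with μ^(l)_new, which is what makes the random-step calibration of c converge.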
4. Simulation of dynamical processes
The Ising model is an equilibrium model of interacting spins [46]. Every node i is assigned a variable s_i with two possible values, s_i = ±1, and the energy of the system is, in the absence of an external field, given by the Hamiltonian

H = −Σ_{m<n} a_mn s_m s_n.

We simulate the dynamics with the Metropolis algorithm: at every step, a randomly chosen spin is flipped; if the resulting energy change fulfils ΔH < 0, we accept the change; otherwise, we accept it with probability e^{−ΔH/T}, where T is the temperature acting as a control parameter. The order parameter is the absolute magnetization per spin |m|, where m = (1/N) Σ_i s_i; if all spins point in the same direction, |m| = 1, whereas |m| = 0 if half the spins point in each direction.

In the SIS dynamical model of epidemic spreading [47], every node i can present two states at a given time t, susceptible (n_i(t) = 0) or infected (n_i(t) = 1). Both infection and recovery are Poisson processes. An infected node recovers with rate 1, whereas infected nodes infect their susceptible neighbours at rate λ. We simulate this process using the continuous-time Gillespie algorithm with all nodes initially infected. The order parameter is the prevalence or fraction of infected nodes, ρ(t) = (1/N) Σ_i n_i(t).

The Kuramoto model is a dynamical model for coupled oscillators. Every node i is described by a natural frequency ω_i and a time-dependent phase θ_i(t). A node's phase evolves according to

θ̇_i = ω_i + σ Σ_j a_ij sin(θ_j − θ_i),

where σ is the coupling strength; natural frequencies and initial phases are drawn from uniform distributions, as in Ref. [48]. The order parameter r(t) = (1/N)|Σ_j e^{iθ_j(t)}| measures the phase coherence of the set of nodes; if all nodes oscillate in phase, r(t) = 1, whereas r(t) → 0 for incoherent phases.

5. Multiscale navigation

Given a network and its embedding (layer 0), we merge pairs of consecutive nodes only if they are connected, which guarantees navigation inside supernodes; this process generates layer 1. We repeat the process to generate L layers. The multiscale navigation protocol requires every node i to be provided with the following local information:

1. The coordinates (r^(l)_i, θ^(l)_i) of node i in every layer l.
2. The list of (super)neighbours of node i in every layer as well as their coordinates.
3. Let SuperN(i, l) be the supernode to which i belongs in layer l. If SuperN(i, l) is connected to SuperN(k, l) in layer l, at least one of the (super)nodes in layer l − 1 belonging to SuperN(i, l) must be connected to at least one of the (super)nodes in layer l − 1 belonging to SuperN(k, l); such a node is called a gateway. For every superneighbour of node SuperN(i, l) in layer l, node i knows which (super)node or (super)nodes in layer l − 1 are gateways. Notice that both SuperN(i, l − 1) and the gateways belong to SuperN(i, l) in layer l so, in layer l − 1, they must either be the same (super)node or different but connected (super)nodes.
4. If SuperN(i, l − 1) is a gateway reaching some supernode s, at least one of its (super)neighbours in layer l − 1 belongs to s; node i knows which.

This information allows us to navigate the network as follows. Let j be the destination node to which i wants to forward a message, and let node i know j's coordinates in all L layers, (r^(l)_j, θ^(l)_j). In order to decide which of its physical neighbours (i.e., in layer 0) should be next in the message-forwarding process, node i must first check whether it is connected to j; in that case, the decision is clear. If it is not, it must:

1. Find the highest layer l_max in which SuperN(i, l_max) and SuperN(j, l_max) still have different coordinates. Set l = l_max.
2. Perform a standard step of greedy routing in layer l: find the neighbour of SuperN(i, l) closest to SuperN(j, l). This is the current target SuperT(l).
3. While l > 0, look into layer l − 1:
   – Set l = l − 1.
   – If SuperN(i, l) is a gateway connecting to some (super)node within SuperT(l + 1), node i sets as new current target SuperT(l) its (super)neighbour belonging to SuperT(l + 1) closest to SuperN(j, l).
   – Else, node i sets as new target SuperT(l) the gateway in SuperN(i, l + 1) connecting to SuperT(l + 1) (its (super)neighbour belonging to SuperN(i, l + 1)).
4. In layer l = 0, SuperT(0) belongs to the real network and is a neighbour of i, so node i forwards the message to SuperT(0).

Name      | Type           | Nodes              | N     | γ    | β     | ⟨k⟩   | ⟨c⟩
Internet  | Technological  | Autonomous systems | 23748 | 2.17 | 1.44  | 4.92  | 0.61
Metabolic | Biological     | Metabolites        | 1436  | 2.6  | 1.3   | 6.57  | 0.54
Music     | Script         | Chords             | 2476  | 2.27 | 1.1   | 16.66 | 0.82
Airports  | Transportation | World airports     | 3397  | 1.88 | 1.7   | 11.32 | 0.63
Proteome  | Biological     | Proteins           | 4100  | 2.25 | 1.001 | 6.52  | 0.09
Words     | Script         | Words              | 7377  | 2.25 | 1.01  | 11.99 | 0.47

TABLE I. Overview of the considered real-world networks. Details for each dataset can be found in Appendix A.

Appendix B: Evidence of geometric scaling in real networks

The global topological parameters of all six networks are given in Table I. Fig. 2 compares the topological properties of the renormalized networks for three real networks. We show the equivalent results for the Airports, Proteome and Words networks in Fig. 6.

[Fig. 6 panels: degree distributions, clustering spectra and community-structure flows for the Airports (l = 0–2), Proteome (l = 0–2) and Words (l = 0–3) networks.]

FIG. 6. Self-similarity along the RGN flow. Each column shows the RGN flows of different topological features of the Airports network (left), the Proteome network (middle) and the Words network (right) with r = 2. Top: Complementary cumulative distribution of rescaled degrees k^(l)_res = k^(l)/⟨k^(l)⟩.
Middle: Local clustering averaged over rescaled-degree classes. The inset shows the normalized average nearest-neighbour degree k̄^(l)_nn,n(k^(l)_res) = k̄^(l)_nn(k^(l)_res)⟨k^(l)⟩/⟨(k^(l))²⟩. Bottom: RGN flow of the community structure; Q^(l) stands for the modularity in every layer l, Q^(l,0) is the modularity that the community structure of layer l induces in the original network, and nMI^(l,0) is the normalized mutual information between both partitions. The number of layers in each system is determined by its original size.

In Fig. 7, we show the empirical connection probabilities of the six real-world networks considered in this paper as well as their renormalized versions.

[Fig. 7 panels: empirical connection probabilities for the Internet (l = 0–5), Metabolic, Music, Airports, Proteome (l = 0–2) and Words (l = 0–3) networks.]

FIG. 7. Empirical connection probabilities. Fraction of connected pairs within a given range of χ^(l)_ij for the six real-world networks and their renormalized versions. The black curve is the theoretical connection probability.

Appendix C: The Geometric Renormalization Group

This section contains the calculations related to the theoretical aspects of the geometric renormalization transformation. In particular, we show the semigroup structure of the transformation, derive the corresponding recurrence relations for the renormalization of the S¹ model, and calculate the flow of the average degree. We also discuss the connection with statistical mechanics by using the isomorphism between the S¹ and H² models and, finally, we include some numerical results regarding the relation between global properties of the networks generated by the model and the flow of the average degree.
1. The semigroup structure of the coarse-graining step

It is easy to show that the geometric coarse-graining presented in this paper has a semigroup structure. To this end, we need to see that node i is mapped to the same supernode whether we apply the coarse-graining with r = r_1 first and then a second time with r = r_2, or just once with r = r_1 r_2. In the first case, the step with r = r_1 maps i to supernode m = ⌊i/r_1⌋ (where ⌊x⌋ represents the integer part of x), and then m is mapped to n = ⌊m/r_2⌋ = ⌊⌊i/r_1⌋/r_2⌋ in the second step. In the second case, i is mapped to supernode s = ⌊i/(r_1 r_2)⌋. Notice that s = ⌊(i/r_1)/r_2⌋ = ⌊(⌊i/r_1⌋ + α)/r_2⌋ = ⌊(m + α)/r_2⌋, where α = (i mod r_1)/r_1 < 1. Now,

(m + α)/r_2 = ⌊m/r_2⌋ + (m mod r_2)/r_2 + α/r_2,   (C1)

so

⌊(m + α)/r_2⌋ = ⌊m/r_2⌋ ⟺ (m mod r_2)/r_2 + α/r_2 < 1,   (C2)

which is always fulfilled since α < 1 implies

(m mod r_2 + α)/r_2 ≤ (r_2 − 1 + α)/r_2 = 1 + (α − 1)/r_2 < 1.   (C3)

Thus, s = n, and node i is mapped to the same supernode in both cases. It follows immediately from this result that both processes yield the same final link structure.

2. Selecting long-range connections

As we apply the renormalization transformation, some links are integrated inside the supernodes, so they do not contribute to the topology of the renormalized network. In Fig. 8, we show that links joining nodes separated by a large angular distance Δθ_ij require larger values of r to be integrated; in other words, the connections in a renormalized network represent long-range connections in the original graph.

[Fig. 8: ⟨Δθ_ij⟩/π as a function of the layer l for the Internet, Metabolic, Music, Airports, Proteome, Words and a synthetic network.]

FIG. 8. Connection range in renormalized layers. Normalized angular distance ⟨Δθ_ij⟩ averaged over all links that are not integrated inside a supernode in layer l, with r = 2.
3. Geometric renormalization of the S¹ model

In this subsection, we derive the RG equations of the S¹ model. In order to simplify the notation, all unprimed quantities refer to layer l − 1, whereas primed ones correspond to layer l. Moreover, we consider the particular case in which all supernodes contain the same number of nodes (r) for simplicity, although the following calculations are also valid for supernodes of different sizes.

Consider the probability p'_ij for two supernodes i and j in layer l to be connected, which is given by the probability of at least one link existing between a pair of the nodes within the supernodes in layer l − 1,

p'_ij = 1 − Π_{e=1}^{r²} (1 − p_e),   (C4)

where e runs over all pairs of nodes (m, n) with m in supernode i and n in supernode j. The term p_e is the probability for m and n to be connected in layer l − 1,

p_e = 1/[1 + (R Δθ_e/(μ (κ_m κ_n)_e))^β].   (C5)

Eq. (C4) takes the same functional form as Eq. (C5),

p'_ij = 1 − Π_{e=1}^{r²} 1/[1 + (μ (κ_m κ_n)_e/(R Δθ_e))^β] = 1 − 1/(1 + Φ'_ij) = 1/(1 + Φ'_ij^{−1}),   (C6)

with

Φ'_ij = Σ_{e=1}^{r²} (μ (κ_m κ_n)_e/(R Δθ_e))^β + Σ_{e=1}^{r²−1} Σ_{f=e+1}^{r²} (μ (κ_m κ_n)_e/(R Δθ_e))^β (μ (κ_m κ_n)_f/(R Δθ_f))^β + …   (C7)

Since the angular distance between the nodes inside each block is generally much smaller than the distance between i and j, all the Δθ_e are approximately equal (Δθ_e ≈ Δθ), so we can write

Φ'_ij ≈ (μ/(R Δθ))^β Σ_{e=1}^{r²} (κ_m κ_n)_e^β + (μ/(R Δθ))^{2β} Σ_{e=1}^{r²−1} Σ_{f=e+1}^{r²} (κ_m κ_n)_e^β (κ_m κ_n)_f^β + …   (C8)

The S¹ model assumes a uniform density of nodes δ = 1, which means that R = N/(2π), whereas μ is a constant independent of N. Hence, μ/R ≪ 1, so the first term dominates Eq. (C8) in most cases.
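The dominance of the first term in Eq. (C8) is easy to check numerically by comparing the full product expansion with its leading sum. The following is a small self-contained check; function and variable names are ours:

```python
import numpy as np

def phi_terms(kappas_i, kappas_j, mu_over_Rdtheta, beta):
    """Return (exact, first_order) for Phi'_ij.

    The exact value follows from the product in Eq. (C6),
    prod_e (1 + chi_e^{-beta}) - 1, which expands into the leading sum of
    Eq. (C8) plus all cross terms; first_order keeps only the leading sum.
    """
    terms = np.array([(mu_over_Rdtheta * ki * kj) ** beta
                      for ki in kappas_i for kj in kappas_j])
    exact = np.prod(1.0 + terms) - 1.0   # contains all cross terms
    return exact, terms.sum()
```

For μ/(R Δθ) ≪ 1 the two values agree to high relative accuracy, which is the regime in which the truncation leading to Eq. (C9) is justified.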
Thus,

Φ'_ij ≈ (μ/(R Δθ))^β Σ_{e=1}^{r²} (κ_m κ_n)_e^β.   (C9)

Introducing this result into Eq. (C6),

p'_ij ≈ 1/[1 + (R Δθ/μ)^β (Σ_{e=1}^{r²} (κ_m κ_n)_e^β)^{−1}],   (C10)

we see that, in order for the resulting expression to be congruent with the model, we need a set of equations that transform the parameters according to

(R Δθ/μ)^β (Σ_{e=1}^{r²} (κ_m κ_n)_e^β)^{−1} = (R' Δθ'_ij/(μ' κ'_i κ'_j))^{β'}.   (C11)

Let us now assume that the angular coordinate of a supernode is some generalized center of mass of the nodes it integrates, so the separation between the two renormalized nodes Δθ'_ij is approximately equal to the angular separation between the nodes that belong to different blocks, i.e. Δθ'_ij ≈ Δθ; thus, β' = β. The choice δ = 1 leads to R' = R/r, that is, to the rescaling step. Setting μ' = μ/r, Eq. (C11) further requires

(κ'_i κ'_j)^β = Σ_{e=1}^{r²} (κ_m κ_n)_e^β,   (C12)

which is fulfilled if

κ'_i = (Σ_{j=1}^{r} κ_j^β)^{1/β}.   (C13)

The transformation of hidden degrees preserves the semigroup structure exactly, since

κ''_i = (Σ_{j=1}^{r_2} (κ'_j)^β)^{1/β} = (Σ_{j=1}^{r_2} Σ_{k=1}^{r_1} κ_{j,k}^β)^{1/β},   (C14)

which coincides with the hidden degree obtained in a single step with r = r_1 r_2. We should require the transformation of angles to preserve it as well. This can be achieved using the following generalized center of mass,

θ'_i = [Σ_{j=1}^{r} (θ_j κ_j)^β / Σ_{j=1}^{r} κ_j^β]^{1/β},   (C15)

given that

θ''_i = [(1/(κ''_i)^β) Σ_{j=1}^{r_2} (θ'_j κ'_j)^β]^{1/β} = [(1/(κ''_i)^β) Σ_{j=1}^{r_2} Σ_{k=1}^{r_1} (θ_{j,k} κ_{j,k})^β]^{1/β},   (C16)

which again coincides with the single-step result.
4. RG flow of the average degree

As discussed in the previous subsection, as we renormalize, we move in the space of realizations of the S¹ model, always keeping the congruency between the network and the hidden metric space, i.e. Eq. (C5). Therefore, we can use the S¹ model to compute the average degree ⟨k'⟩ of the renormalized networks. According to Ref. [16],

⟨k'⟩ = C μ' ⟨κ'⟩²,   (C17)

where C does not change as we renormalize. We thus need to compute ⟨κ'⟩, where κ' is given by Eq. (C13) and the original distribution of hidden degrees is assumed to be a power law,

ρ(κ) = (1 − γ)/(κ_c^{1−γ} − κ_0^{1−γ}) κ^{−γ},  κ ∈ [κ_0, κ_c].   (C18)

The strategy to compute ⟨κ'⟩ is as follows. We define z ≡ κ^β and find its distribution ρ_z(z). We then calculate ρ̂_z^r(s) (where ρ̂_z(s) is the Laplace transform of ρ_z(z)); according to the convolution theorem, this is the Laplace transform of the variable z' ≡ Σ_{j=1}^{r} z_j = κ'^β. Finally, we compute ⟨κ'⟩ as the 1/β-th moment of z', that is, ⟨κ'⟩ = ⟨z'^{1/β}⟩, from ρ̂_z^r(s). From Eq. (C18),

ρ(κ)dκ = (1 − γ)/(κ_c^{1−γ} − κ_0^{1−γ}) κ^{−γ} dκ = (1 − γ)/(β(κ_c^{1−γ} − κ_0^{1−γ})) z^{(1−γ)/β − 1} dz,   (C19)

so

ρ_z(z) = (1 − γ)/(β(κ_c^{1−γ} − κ_0^{1−γ})) z^{−η},   (C20)

where η = (γ − 1)/β + 1. If γ < 2β + 1, then η < 3, which means that z' and, consequently, κ' are also power-law distributed, since the central limit theorem does not apply (the opposite case corresponds to phase III in Fig. 2B) [49]. The Laplace transform of Eq. (C20) is given by

ρ̂_z(s) = ∫_{κ_0^β}^{κ_c^β} ρ_z(z) e^{−sz} dz = (1 − γ) s^{η−1} (Γ(1 − η, sκ_0^β) − Γ(1 − η, sκ_c^β))/(β(κ_c^{1−γ} − κ_0^{1−γ})),   (C21)

where Γ(a, b) is the incomplete gamma function,

Γ(a, b) = ∫_b^∞ t^{a−1} e^{−t} dt.   (C22)
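Before deploying the Laplace-transform machinery, the target quantity can be estimated directly by Monte Carlo: sample r hidden degrees from Eq. (C18), combine them through Eq. (C13), and average. The following is a minimal sketch with our own function names, useful as a cross-check of the closed-form results:

```python
import numpy as np

def mean_kappa_prime_mc(gamma, beta, r, kappa0=1.0, kappa_c=100.0,
                        n_samples=100000, rng=None):
    """Monte-Carlo estimate of <kappa'> = <(sum_{j=1}^r kappa_j^beta)^(1/beta)>,
    with kappa drawn from the truncated power law rho(kappa) of Eq. (C18)
    via inverse-CDF sampling."""
    rng = np.random.default_rng(rng)
    a = 1.0 - gamma
    u = rng.random((n_samples, r))
    # inverse CDF of rho(kappa) ~ kappa^{-gamma} on [kappa0, kappa_c]
    kappa = (kappa0 ** a + u * (kappa_c ** a - kappa0 ** a)) ** (1.0 / a)
    return (np.sum(kappa ** beta, axis=1) ** (1.0 / beta)).mean()
```

For β > 1 and r = 2, the estimate must lie between ⟨κ⟩ and 2⟨κ⟩, since max(κ_1, κ_2) ≤ κ' ≤ κ_1 + κ_2.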
From this result, it follows that

ρ̂_{z'}(s) = [(1 − γ) s^{η−1} (Γ(1 − η, sκ_0^β) − Γ(1 − η, sκ_c^β))/(β(κ_c^{1−γ} − κ_0^{1−γ}))]^r.   (C23)

We need to compute

⟨z'^{1/β}⟩ = ∫_0^∞ z'^{1/β} ρ_{z'}(z') dz'.   (C24)

To do so, consider the integral

I = C̃ ∫_0^∞ s^α ρ̂^{(n)}_{z'}(s) ds = C̃ ∫_0^∞ s^α ∫_0^∞ (−1)^n z'^n ρ_{z'}(z') e^{−sz'} dz' ds,   (C25)

where ρ̂^{(n)}_{z'}(s) stands for the n-th derivative of ρ̂_{z'}(s). Taking into account that, for α > −1,

∫_0^∞ s^α e^{−sz'} ds = z'^{−1−α} Γ(1 + α),   (C26)

we see that

I = C̃ (−1)^n Γ(1 + α) ∫_0^∞ z'^{n−1−α} ρ_{z'}(z') dz'.   (C27)

Now, setting C̃ = [(−1)^n Γ(1 + α)]^{−1} and n − 1 − α = 1/β, we obtain I = ⟨z'^{1/β}⟩. Since α = n − 1 − 1/β > −1 for all n ∈ ℕ, the smallest n we can choose is n = 1, so α = −1/β. Finally, we can write

⟨κ'⟩ = −(1/Γ(1 − 1/β)) ∫_0^∞ s^{−1/β} ρ̂'_{z'}(s) ds,   (C28)

where ρ̂_{z'}(s) is given in Eq. (C23).

Particular case r = 2

To start solving Eq. (C28), let us first take the limit N → ∞ ⇒ κ_c → ∞, in which ρ̂_z(s) becomes

ρ̂_z(s) = C s^{η−1} Γ(1 − η, sκ_0^β),  C = (γ − 1) κ_0^{γ−1}/β.   (C29)

Using the same change of variable as in Eq. (C21), we see that

ρ̂'_z(s) = −∫_{κ_0^β}^∞ z ρ_z(z) e^{−sz} dz = −C s^{η−2} Γ(2 − η, sκ_0^β).   (C30)

Let us now evaluate ρ̂'_{z'}(s),

ρ̂'_{z'}(s) = r ρ̂_z^{r−1}(s) ρ̂'_z(s) = 2 ρ̂_z(s) ρ̂'_z(s) = −2C² s^{2η−3} Γ(1 − η, sκ_0^β) Γ(2 − η, sκ_0^β),   (C31)

and introduce this result into Eq. (C28),

⟨κ'⟩ = (2C²/Γ(1 − 1/β)) ∫_0^∞ s^{2η−3−1/β} Γ(1 − η, sκ_0^β) Γ(2 − η, sκ_0^β) ds
     = (2C²/Γ(1 − 1/β)) κ_0^{−β(2η−2−1/β)} ∫_0^∞ ω^{2η−3−1/β} Γ(1 − η, ω) Γ(2 − η, ω) dω
     = (2(γ − 1)² κ_0/(β² Γ(1 − 1/β))) ∫_0^∞ ω^{2η−3−1/β} Γ(1 − η, ω) Γ(2 − η, ω) dω.   (C32)

We thus need to solve an integral of the form

I(ν̃, s_1, s_2) = ∫_0^∞ x^{ν̃} Γ(s_1, x) Γ(s_2, x) dx,  ν̃ > −1.   (C33)

In our case, ν̃ = 2η − 3 − 1/β = (2γ − 3)/β − 1 > −1 ⟺ γ > 3/2. Integrating by parts,

I(ν̃, s_1, s_2) = [x^{ν̃+1} Γ(s_1, x) Γ(s_2, x)/(ν̃ + 1)]_0^∞ + (1/(ν̃ + 1)) ∫_0^∞ x^{ν̃+1} (Γ(s_2, x) x^{s_1−1} e^{−x} + Γ(s_1, x) x^{s_2−1} e^{−x}) dx = (1/(ν̃ + 1)) ∫_0^∞ (Γ(s_2, x) x^{ν̃+s_1} e^{−x} + Γ(s_1, x) x^{ν̃+s_2} e^{−x}) dx.   (C34)

We can find a recurrence relation for the integrals in the last expression,

I'(α̃, s) = ∫_0^∞ Γ(s, x) x^{α̃} e^{−x} dx = Γ(α̃ + s + 1)/((α̃ + 1) 2^{α̃+s+1}) + I'(α̃ + 1, s)/(α̃ + 1).   (C35)

Iterating yields

I'(α̃, s) = Σ_{n=1}^∞ Γ(α̃ + 1) Γ(α̃ + s + n)/(Γ(α̃ + n + 1) 2^{α̃+s+n}).   (C36)

Introducing this result into Eq. (C34),

I(ν̃, s_1, s_2) = (1/(ν̃ + 1)) (I'(ν̃ + s_1, s_2) + I'(ν̃ + s_2, s_1)) = (1/((ν̃ + 1) 2^{ν̃+s_1+s_2})) Σ_{n=1}^∞ (Γ(ν̃ + s_1 + s_2 + n)/2^n) (Γ(ν̃ + s_1 + 1)/Γ(ν̃ + s_1 + n + 1) + Γ(ν̃ + s_2 + 1)/Γ(ν̃ + s_2 + n + 1)).   (C37)

Finally, Eq. (C32) becomes

⟨κ'⟩ = (2(γ − 1)² κ_0/(β² Γ(1 − 1/β))) I(2η − 3 − 1/β, 1 − η, 2 − η)
     = (2^{1/β} (γ − 1)² κ_0/(β (2γ − 3) Γ(1 − 1/β))) Σ_{n=1}^∞ (Γ(n + 1 − 1/β)/2^n) (Γ((γ − 2)/β + 1)/Γ((γ − 2)/β + n + 1) + Γ((γ − 2)/β)/Γ((γ − 2)/β + n)).   (C38)

Using this result, Eq. (C17) and κ_0 = ⟨κ⟩(γ − 2)/(γ − 1), we can write an expression for the exponent ν (defined by ⟨k'⟩ = r^ν ⟨k⟩):

ν = (2/ln 2) ln[(2^{1/β} (γ − 1)(γ − 2)/(β (2γ − 3) Γ(1 − 1/β))) Σ_{n=1}^∞ (Γ(n + 1 − 1/β)/2^n) (Γ((γ − 2)/β + 1)/Γ((γ − 2)/β + n + 1) + Γ((γ − 2)/β)/Γ((γ − 2)/β + n))] − 1.   (C39)

The above result is shown in Fig. 9.

[Fig. 9: heat map of ν in the (β, γ) plane.]

FIG. 9. Connectivity phase diagram. Exact value of ν as a function of β and γ according to Eq. (C39). The exact solution agrees with the solution in the power-law approximation (see next subsection) for large values of β or γ.

Solution in the power-law approximation

From Eq. (C38), we see that the exact solution for large r can be extremely convoluted, thus making the limit r → ∞ inaccessible. However, if we consider that ρ_{κ'}(κ') is a power law (which is a reasonable approximation if η < 3, as discussed above), the computation of ⟨κ'⟩ becomes simpler. Under this assumption, z' is also power-law distributed with exponent −η, that is,

ρ_{z'}(z') = C' z'^{−η},  C' = (γ − 1) κ'_0^{γ−1}/β.   (C40)

We study two cases separately:

i. 1 < η < 2: In this case, we determine the value of C' and, with it, ⟨κ'⟩ = κ'_0 (γ − 1)/(γ − 2). If the assumption in Eq. (C40) is correct, ρ̂_{z'}(s) must behave as [50]

ρ̂_{z'}(s) = 1 + C' Γ(1 − η) s^{η−1},  s → 0⁺.   (C41)

According to Eqs. (C23) and (C29),

ρ̂_{z'}(s) = [C s^{η−1} Γ(1 − η, sz_0)]^r = [C Γ(1 − η) (s^{η−1} − z_0^{1−η} e^{−sz_0} Σ_{n=0}^∞ (sz_0)^n/Γ(2 − η + n))]^r → [C Γ(1 − η) (s^{η−1} − z_0^{1−η} Σ_{n=0}^∞ (sz_0)^n/Γ(2 − η + n))]^r,  s → 0⁺,   (C42)

where z_0 = κ_0^β. In the above expression, the term that does not depend on s is given by the product of the r factors with n = 0, whereas the term of order s^{η−1} is given by the sum of the r products of s^{η−1} with the remaining r − 1 factors with n = 0. Thus, we find

ρ̂_{z'}(s) = 1 + r (η − 1) z_0^{η−1} Γ(1 − η) s^{η−1}.   (C43)

We can now identify C' as

C' = (γ − 1) κ'_0^{γ−1}/β = r (η − 1) z_0^{η−1} = r (γ − 1) κ_0^{γ−1}/β,   (C44)

so

κ'_0 = r^{1/(γ−1)} κ_0   (C45)

and

⟨κ'⟩ = r^{1/(γ−1)} ⟨κ⟩.   (C46)

Finally, plugging this result into Eq. (C17),

⟨k'⟩ = C (μ/r) r^{2/(γ−1)} ⟨κ⟩² = r^{2/(γ−1)−1} ⟨k⟩,   (C47)

which diverges for γ < 3, remains constant for γ = 3 and vanishes for γ > 3.

ii. 2 < η < 3: This case is much simpler, since ⟨z⟩ and hence ⟨z'⟩ are finite and can be easily computed.
Indeed, given that ⟨z'⟩ = r⟨z⟩, we see that

⟨κ'⟩ = (γ − 1)/(γ − 2) κ'_0 = (γ − 1)/(γ − 2) (z'_0)^{1/β} = (γ − 1)/(γ − 2) ((η − 2)/(η − 1) ⟨z'⟩)^{1/β} = (γ − 1)/(γ − 2) ((η − 2)/(η − 1) r ⟨z⟩)^{1/β} = (γ − 1)/(γ − 2) (r (η − 2)/(η − 1) (η − 1)/(η − 2) z_0)^{1/β} = (γ − 1)/(γ − 2) (r κ_0^β)^{1/β} = r^{1/β} ⟨κ⟩.   (C48)

This result and Eq. (C17) together imply

⟨k'⟩ = C (μ/r) r^{2/β} ⟨κ⟩² = r^{2/β−1} ⟨k⟩,   (C49)

which diverges for β < 2, remains constant for β = 2 and vanishes for β > 2. The two regimes meet at η = 2, since

η = 2 ⇒ (γ − 1)/β = 1 ⇒ β = γ − 1.   (C50)

Therefore, we can conclude that the network flows towards a fully connected graph if γ < 3 or β < 2; the lines γ = 3, β > 2 and β = 2, γ > 3 keep the average degree constant, whereas ⟨k⟩ → 0 if γ > 3 and β > 2. Notice that this assertion is only valid under the assumption in Eq. (C40), which is not true in general. However, we expect it to be a good approximation of the flow's behaviour as r → ∞.

5. Mapping to hyperbolic space and the partition function

In this section, we show how the RGN presented in this work can be described in the formalism of statistical physics. As explained in Appendix A, using the mapping to hyperbolic space, the connection probability, Eq. (C5), becomes

p_mn = 1/(1 + e^{(β/2)(x_mn − R_H²)}),   (C51)

where x_mn = r_m + r_n + 2 ln(Δθ_mn/2) is a good approximation to the hyperbolic distance between two points with coordinates (r_m, θ_m) and (r_n, θ_n) in the native representation of hyperbolic space.

Now, let a_mn = 1 if the link between nodes m and n exists and a_mn = 0 otherwise; Eq. (C51) can be written as

p_mn ≡ P(a_mn) = e^{−(β/2) a_mn (x_mn − R_H²)}/(1 + e^{−(β/2)(x_mn − R_H²)}),   (C52)

which means that, in the H² model, every pair of nodes represents a fermionic state of energy x_mn/2 with chemical potential R_H²/2 at inverse temperature β. Given the set of occupation numbers {a_mn}, the likelihood of a given network is

P({a_mn}) = Π_{m<n} p_mn^{a_mn} (1 − p_mn)^{1−a_mn}.

The contribution to the partition function of the pairs of nodes that are merged into the same supernode factorizes; for nodes at the minimal separation, R Δθ ≈ 1, so

Π_{i=1}^{⌊N/r⌋} Π_{t=1}^{r(r−1)/2} (1 + e^{−(β/2)(x_t − R_H²)}) ≈ e^{Σ_{i=1}^{⌊N/2⌋} ln(1 + (μ (κ_m κ_n)_i)^β)} ≈ e^{(N/2) ⟨ln(1 + (μ κ_m κ_n)^β)⟩}.   (C60)

Defining

ζ ≡ e^{⟨ln(1 + (μ κ_m κ_n)^β)⟩} = e^{∫∫ ln(1 + (μ κ_m κ_n)^β) ρ(κ_m) ρ(κ_n) dκ_m dκ_n},   (C61)

we can write Eq. (C55) as

Z = ζ^{N/2} Z',   (C62)

where Z' = Σ_{a_ij} e^{−βH'({a_ij})}.

6. Local vs. global properties

In the S¹ model, we impose three parameters, γ, β and ⟨κ⟩, all of them related to local properties of nodes (degree and clustering). However, the RG flow of observables like the average degree should be related to global properties of the system; indeed, we would expect two networks with similar average-degree flows to exhibit similarities at the global scale as well, whereas two networks with very different RG trajectories (even in the same phase, i.e., flowing towards the same fixed point) should be easier to distinguish by looking at their global properties. To check this hypothesis, we have generated synthetic networks with different values of γ and β and compared the eigenvalues of both the adjacency and Laplacian matrices. The results are shown in Figs. 10, 11 and 12. As we see, the RG analysis of the model allows us to assess the stability of the global properties of networks against perturbations of their local ones, and hence the importance of clustering and degree heterogeneity in a given system.
[Fig. 10 panels: histograms of adjacency-matrix eigenvalues for different combinations of (γ, β).]

FIG. 10. Eigenvalues of adjacency matrices. Every plot represents a histogram of the eigenvalues (divided by √⟨κ⟩) of the adjacency matrices of 100 synthetic networks of size N = 1000 and ⟨κ⟩ = 5 for a particular set of values (γ, β). The order of the plots corresponds to that of the phase diagram in Fig. 3B. Notice how the RG analysis correctly predicts the dependence of the spectra on γ only in the top-left corner of the figure, as well as the independence of γ in the bottom-right region.

[Fig. 11 panels: histograms of Laplacian-matrix eigenvalues for the same combinations of (γ, β).]

FIG. 11.
Eigenvalues of Laplacian matrices. Every plot represents a histogram of the eigenvalues (divided by √⟨κ⟩) of the Laplacian matrices of 100 synthetic networks of size N = 1000 and ⟨κ⟩ = 5 for a particular set of values (γ, β). The order of the plots corresponds to that of the phase diagram in Fig. 3B. Notice how the RG analysis correctly predicts the dependence of the spectra on γ only in the top-left corner of the figure, as well as their independence of γ in the bottom-right region.

FIG. 12. Diffusion time and synchronization stability. Left: Logarithm of the diffusion time (inverse of the algebraic connectivity, the first non-zero eigenvalue λ₂ of the Laplacian) of networks of size N = 1000 and ⟨κ⟩ = 5, averaged over 100 realizations. Right: Logarithm of the quotient λ_N/λ₂ (this quantity is related to the stability of synchronization processes on networks; it gives the time that the system needs to return to the stable synchronized state after a perturbation). In both plots, we can see the similarities with Fig. 3B.

Appendix D: Mini-me network replicas

FIG. 13. Dynamics on the Mini-me replicas.
Each column shows the order parameters versus the control parameters of different dynamical processes on the original and Mini-me replicas of the Airports network (left), the Proteome network (middle) and the Words network (right) with r = 2, that is, every value of l identifies a network 2^l times smaller than the original one. All points show the results averaged over 100 simulations. Error bars indicate the fluctuations of the order parameters. Top: Magnetization ⟨|m|⟩^(l) of the Ising model as a function of the inverse temperature 1/T. Middle: Prevalence ⟨ρ⟩^(l) of the SIS model as a function of the infection rate λ. Bottom: Coherence ⟨r⟩^(l) of the Kuramoto model as a function of the coupling strength σ. In all cases, the curves of the smaller-scale replicas are extremely similar to the results obtained on the original networks.

Appendix E: Multiscale navigation

This section includes some results showing the topological properties of the networks coarse-grained for navigation; Fig. 14 shows the complementary cumulative degree distributions, whereas Fig. 15 contains their clustering spectra.

FIG. 14. Complementary cumulative degree distributions. Every curve represents the complementary cumulative degree distribution of a given layer in the multiscale navigation shell.

FIG. 15.
Clustering spectra. Every curve represents the clustering spectrum of a given layer in the multiscale navigation shell.

We also present in Fig. 16 the empirical connection probabilities of the networks after the coarse-graining for navigation (in which pairs of nodes are merged together into a supernode only if they are connected). Notice that the congruency with the underlying metric space is preserved even if the sizes of the blocks are different.

FIG. 16. Empirical connection probabilities. Fraction of connected pairs within a given range of χ^(l)_ij for the six real-world networks and their versions coarse-grained for navigation. The black curve is the theoretical connection probability.

[1] B. Mandelbrot, in Proceedings of the Twelve Symposia in Applied Mathematics, Roman Jakobson editor. Structure of Language and its Mathematical Aspects, New York, USA (1961) pp. 190–219.
[2] H. E. Stanley, Introduction to Phase Transitions and Critical Phenomena (Oxford Univ. Press, Oxford, 1971).
[3] D. Gfeller and P. De Los Rios, Phys. Rev. Lett., 038701 (2007).
[4] C. Song, S. Havlin, and H. A. Makse, Nature, 392 (2005).
[5] K. I. Goh, G. Salvi, B. Kahng, and D. Kim, Phys. Rev. Lett., 018701 (2006).
[6] C. Song, S. Havlin, and H. A. Makse, Nature Physics, 275 (2006).
[7] J. S. Kim, K. I. Goh, B. Kahng, and D. Kim, New J. Phys., 177 (2007).
[8] F. Radicchi, J. J. Ramasco, A. Barrat, and S. Fortunato, Phys. Rev. Lett., 148701 (2008).
[9] H. D. Rozenfeld, C. Song, and H. A. Makse, Phys. Rev. Lett., 025701 (2010).
[10] D. J. Watts and S. H. Strogatz, Nature, 440 (1998).
[11] R. Cohen and S. Havlin, Phys. Rev. Lett., 058701 (2003).
[12] M. Newman and D.
Watts, Phys. Lett. A, 341 (1999).
[13] S. Boettcher, Frontiers in Physiology, 102 (2011).
[14] M. Á. Serrano, D. Krioukov, and M. Boguñá, Phys. Rev. Lett., 078701 (2008).
[15] M. Boguñá, F. Papadopoulos, and D. Krioukov, Nat. Commun., 62 (2010).
[16] D. Krioukov, F. Papadopoulos, M. Kitsak, A. Vahdat, and M. Boguñá, Phys. Rev. E, 036106 (2010).
[17] F. Papadopoulos, M. Kitsak, M. Á. Serrano, M. Boguñá, and D. Krioukov, Nature, 537 (2012).
[18] L. P. Kadanoff, Statistical Physics: Statics, Dynamics and Renormalization (World Scientific, Singapore, 2000).
[19] K. G. Wilson, Rev. Mod. Phys., 773 (1975).
[20] K. G. Wilson, Rev. Mod. Phys., 583 (1983).
[21] For instance, in Fig. 1 the same transformation with r = 4 leads from l = 0 to l = 2 in a single step. Whenever the number of nodes is not divisible by r, the last supernode in a layer contains fewer than r nodes, as in the example at l = 1; however, the RGN equations are valid for uneven supernode sizes as well. Notice that the set of transformations F_r does not include an inverse element to reverse the process.
[22] M. Boguñá and R. Pastor-Satorras, Phys. Rev. E, 036112 (2003).
[23] M. Boguñá, F. Papadopoulos, and D. Krioukov, Nat. Commun., 62 (2010).
[24] F. Papadopoulos, R. Aldecoa, and D. Krioukov, Phys. Rev. E, 022807 (2015).
[25] V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and É. Lefebvre, J. Stat. Mech., P10008 (2008).
[26] A. Arenas, A. Fernández, and S. Gómez, New J. Phys., 053039 (2008).
[27] P. Ronhovde and Z. Nussinov, Phys. Rev. E, 016109 (2009).
[28] Y.-Y. Ahn, J. P. Bagrow, and S. Lehmann, Nature, 761 (2010).
[29] M. Á. Serrano, M. Boguñá, and F. Sagués, Mol. BioSyst., 843 (2012).
[30] K. Zuev, M. Boguñá, G. Bianconi, and D. Krioukov, Sci. Rep., 9421 (2015).
[31] P. Van Mieghem, Graph Spectra for Complex Networks (Cambridge University Press, New York, NY, USA, 2011).
[32] F. Papadopoulos, K. Psounis, and R.
Govindan, IEEE Journal on Selected Areas in Communications, 2313 (2006).
[33] F. Papadopoulos and K. Psounis, SIGCOMM Comput. Commun. Rev., 39 (2007).
[34] W. M. Yao and S. Fahmy, in (2008) pp. 1–6.
[35] W. M. Yao and S. Fahmy, in (2011) pp. 299–309.
[36] D. Krioukov, F. Papadopoulos, A. Vahdat, and M. Boguñá, Phys. Rev. E, 035101 (2009).
[37] M. Á. Serrano, D. Krioukov, and M. Boguñá, Phys. Rev. Lett., 048701 (2011).
[38] V. Colizza, R. Pastor-Satorras, and A. Vespignani, Nat. Phys., 276 (2007).
[39] K. Claffy, Y. Hyun, K. Keys, M. Fomenkov, and D. Krioukov, in CATCH Proc. Int. Conf. on World Wide Web Companion (2013) pp. 1343–1350.
[42] T. Rolland et al., Cell, 1212 (2014).
[43] J. Serrà, A. Corral, M. Boguñá, M. Haro, and J. L. Arcos, Sci. Rep. (2012), 10.1038/srep00521.
[44] M. Á. Serrano, M. Boguñá, and A. Vespignani, Proc. Natl. Acad. Sci. USA, 6483 (2009).
[45] R. Milo, S. Itzkovitz, N. Kashtan, R. Levitt, S. Shen-Orr, I. Ayzenshtat, M. Sheffer, and U. Alon, Science, 1538 (2004).
[46] S. N. Dorogovtsev, A. V. Goltsev, and J. F. F. Mendes, Phys. Rev. E, 016104 (2002).
[47] R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett., 3200 (2001).
[48] Y. Moreno and A. F. Pacheco, EPL (Europhysics Letters), 603 (2004).
[49] B. Gnedenko and A. Kolmogorov, Limit Distributions for Sums of Independent Random Variables, Addison-Wesley Series in Statistics (Addison-Wesley, 1968).
[50] R. A. Handelsman and J. S. Lew, SIAM Journal on Mathematical Analysis.