Dynamics and Processing in Finite Self-Similar Networks
SIMON DEDEO† AND DAVID C. KRAKAUER‡,⋆

Abstract.
A common feature of biological networks is the geometric property of self-similarity. Molecular regulatory networks through to circulatory systems, nervous systems, social systems and ecological trophic networks show self-similar connectivity at multiple scales. We analyze the relationship between topology and signaling in contrasting classes of such topologies. We find that networks differ in their ability to contain or propagate signals between arbitrary nodes in a network depending on whether they possess branching or loop-like features. Networks also differ in how they respond to noise, such that one allows for greater integration at high noise, and this performance is reversed at low noise. Surprisingly, small-world topologies, with diameters logarithmic in system size, have slower dynamical timescales, and may be less integrated (more modular) than networks with longer path lengths. All of these phenomena are essentially mesoscopic, vanishing in the infinite limit but producing strong effects at sizes and timescales relevant to biology.
Biological networks exhibit a wide range of structural features at multiple spatial scales [1–3]. These include local circuitry reflecting the logic of regulation among small numbers of elements [4], and motifs of statistically over-represented patterns within larger networks of interactions [5], through to macroscopic properties of complete networks, including the description of the degree distributions and the large-scale geometric features of networks [2]. Among the most interesting geometric properties of biological networks is the property of self-similarity or scale invariance [6, 7], in which characteristic topological features are present at all scales, from the local organization of individual nodes through to aggregations at the largest network scales.

For genetic and proteomic regulatory networks, as well as social networks and a variety of distribution networks, including respiratory and circulatory networks, the mechanisms generating self-similar structures have been well explored [8–17]. A growing body of empirical work investigates

† Santa Fe Institute, Santa Fe, NM 87501, USA
‡ Wisconsin Institute for Discovery, University of Wisconsin, Madison, WI 53706, USA
⋆
Department of Genetics, University of Wisconsin, Madison, WI 53706, USA
E-mail addresses: [email protected], [email protected]. Date: November 4, 2018.

self-similar network structures, including motif overabundances at different coarse-grained scales. The topology of networks under coarse-graining (agglomeration) of nodes has formed a central focus in both empirical [6] and theoretical [18–24] work. However, the functional implications of these topological properties remain poorly understood.

Functional explanations of self-similarity tend to fall into one of three broad classes. Robustness explanations consider the connectivity properties under perturbation, and contrast, for example, scale-free and exponential degree distributions [25–27]. Adaptive optimization theories argue that self-similarity provides an efficient means of provisioning densely distributed resource sinks with a minimum of cable cost [28, 29]. Hence networks such as the circulatory system can efficiently provide energy-rich compounds to the cells of the body, and neural networks can efficiently integrate information from a large variety of sensory inputs [30, 31].

Finally, neutral theories suggest that self-similarity is not in itself an optimized property of biological networks, but a consequence of highly conserved developmental processes with local rules of assembly that generate characteristic macroscopic properties [32–35]. Mathematical studies have shown how motif abundances can be the consequences of constraints on large-scale topological properties [36]; conversely, large-scale topological features might arise from constraints on a single local property [37].
In either case, the connection between the large- and small-scale properties of a network may have emerged first without functional meaning.

In this contribution we investigate the functional implications of self-similar assembly, as the nodes of a system adjust their internal states in response to their neighbours and in the presence of environmental noise. We find a tension between the small-world properties of a network and the rapidity of the transition to an ordered phase. For a fixed number of vertices and links, self-similar networks with small-world properties tend to show more gradual transitions, both dynamically and as a function of noise. The nested-hub structure of such networks provides a bottleneck restricting the possible paths to distant parts of the network. By contrast, hierarchical assemblies, characterized by nesting and a more open structure, have a sharper transition to the ordered state.

Our most surprising results show that while there is some advantage to small-diameter, small-world networks in the high-noise regime, a completely different architecture – that of nested networks, which eliminates bottlenecks at the expense of longer average paths – provides greater integration in the low-noise regime. Further, small-world networks produced by branching show dramatically longer dynamical timescales than their nested counterparts. Both features of the nested architecture are driven by the presence of multiple paths between points, as we establish both by simulation and by analytic calculation of the graphical structures that underlie the problem.

A central theme of our investigation is the difference between these constructions in the mesoscopic regime: N ≫ 1, but finite. As we shall see,
Figure 1. Branching (left) and nested (right) iterations on a simple graph.

various properties that vanish in the infinite-size limit lead to pronounced differences in behavior at the finite scales relevant to biology. Our use of both analytic and numerical techniques allows us to investigate two distinct regimes relevant to this mesoscopic phase: analytic results describe the finite-size–infinite-time equilibrium, while numerical simulations show the finite-size–finite-time properties, relevant in the case of strong non-equilibrium effects.

1. Constructing self-similar networks
We first introduce a deterministic, algorithmic approach for constructing hierarchical, self-similar networks. Our methods use the notion of a construction template, or base motif, that provides the seed for self-similar construction. Alternative stochastic approaches include defining hierarchical assemblies in terms of correlations in an otherwise random network [38], through biases introduced into an ensemble [39], or through high-dimensional generalizations of deterministic constructions [40, 41]. The pseudofractal [42] and the “flower” graphs of [19] are an alternative deterministic construction. An advantage of the deterministic assemblies is that exact calculations of critical behavior become possible.
The self-similar networks we describe take two forms, depending on their assembly mechanisms – see Fig. 1. The assembly mechanism is stated formally in Appendix A; it relies on the specification of (1) a motif pattern M (in Fig. 1, for example, the triangle), and (2) a method f of replacing nodes in the pattern by new, “smaller scale,” copies of the original motif. This method can then be iterated, deterministically, to produce networks of increasing size and complexity.

Visually, our constructions possess fractal-like properties, with self-similarity upon coarse-graining. Our formal definition of the construction of these networks amounts, in the reverse direction, to a specification of a renormalization group transformation [43].

The two simplest choices of node replacement lead to two different kinds of network: a branching topology, characterized by the absence of large-scale loops, and a nested topology, where the loop structure of the base motif is replicated on all scales. We consider the scaling of the average network diameter, ⟨d⟩, the geodesic distance averaged over all distinct pairs of nodes.

1.1. Branching Assembly.
As a network grows, a particular unit may preserve the same “local-structural” relationship at each level of the hierarchy. For example, the central node of a star may be the central node of the network at all levels of iteration. These networks are characteristic of circulatory and vascular networks, where each node, regardless of its position in a hierarchy, tends to perform the same function [44, 45].

In the formalism of Appendix A, such a mapping is provided when f(i, j) is equal to i. Iterations increase inequality in the network, producing degree distributions characterized by a motif scale, with an exponential tail of nodes with a “runaway” influence on the rest of the system. Biological networks with this property include the neural network of C. elegans [46] and the small-world networks of Ref. [47]. Exponential tails to the degree distribution are found also in the original Erdős–Rényi random graph.

An illustration of a branching iteration on the triangle is shown in the left column of Fig. 1. As the order increases, loops, loops with free loops, and so forth are produced; the highest vertex degree increases exponentially with the number of iterations – these are nodes on the largest “super-loop.” All loops, or, in general, subgraphs, may be detached by a single cut.

1.2. Nested Assembly.
In contrast to branching iterations, a nested iteration is when the unit – vertex or subgraph – takes on the characteristics of its neighbors. A node is no longer restricted to a single local structure, but participates in structures at multiple spatial scales. This is common in communication and computational networks, characterized by extensive feedback loops and connectivity to topologically distinct regions.
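Appendix A is not reproduced in this excerpt, so the following is a toy sketch of one way such motif-replacement iterations can be implemented; the function names and the port-attachment rule (`mode`) are illustrative assumptions, not the paper's exact algorithm. The node and edge counts, however, are fixed by the replacement scheme itself: four iterations on the triangle give the 243 nodes and 363 bonds quoted later for the networks of Fig. 5.

```python
def iterate(edges, n_nodes, motif_edges, n_motif, mode):
    """One self-similar step: replace every node of the current graph
    with a copy of the motif, then re-attach each old edge between the
    corresponding copies.

    mode "branching": every re-attached edge lands on port 0 of its
    copy, so one node per copy accumulates all external links
    (hub-forming; a stand-in for f(i, j) = i).
    mode "nested": the attachment port depends on the other endpoint,
    spreading external links over the copy (a stand-in for f(i, j) = j).
    """
    new_edges = set()
    for v in range(n_nodes):              # internal edges of each motif copy
        for a, b in motif_edges:
            new_edges.add((v * n_motif + a, v * n_motif + b))
    for u, v in edges:                    # re-attach old edges between copies
        if mode == "branching":
            pu, pv = 0, 0
        else:
            pu, pv = v % n_motif, u % n_motif
        new_edges.add((u * n_motif + pu, v * n_motif + pv))
    return new_edges, n_nodes * n_motif


def build(motif_edges, n_motif, iterations, mode):
    """Iterate the replacement `iterations` times, starting from the motif."""
    edges, n = set(motif_edges), n_motif
    for _ in range(iterations):
        edges, n = iterate(edges, n, motif_edges, n_motif, mode)
    return edges, n


if __name__ == "__main__":
    triangle = [(0, 1), (1, 2), (0, 2)]
    for mode in ("branching", "nested"):
        edges, n = build(triangle, 3, 4, mode)
        print(mode, n, len(edges))  # 243 nodes, 363 edges for either mode
```

Because each iteration multiplies the vertex count by the motif size and adds one motif copy per old vertex, the counts N_i = 3^(i+1) and E_i = 3 N_{i-1} + E_{i-1} hold regardless of the attachment rule; only the resulting topology differs between the two modes.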
Figure 2. Two generations of nested assembly for a common E. coli motif; the base graph M is shown in the upper-right corner. The large-scale pattern is topologically equivalent to the base motif.

In the formalism of Appendix A, this second mode of network assembly is provided by f(i, j) equal to j. While branching structures look tree-like, nested networks are characterized by the replication of subgraph loop structures on increasingly larger scales.

Nested networks are shown in the right-hand column of Fig. 1; a more complicated example is that of Fig. 2, where two iterations of a motif overabundant in E. coli [48] are shown. As shown in Fig. 3, nested graphs have larger diameters; they lack the “small-world” property of logarithmic scaling of diameter with size found through replication.

1.3. Topological Properties.
The two cases we have considered, pure branching or nesting of a motif pattern M, can be considered extremes of how a network might assemble. Branching networks tend to increase inequalities in the degree distributions of vertices while keeping loops at an approximately constant density, whereas nested networks create many more loops while reproducing (almost) the degree distribution of the lower levels.

Another difference between the assembly rules is the scaling of the average diameter. As can be seen in Fig. 3, branching, with its tree-like hierarchy of central hubs, produces small-world graphs where the network diameter scales only as the logarithm of network size [47]. (Formally: they have logarithmic scaling of diameter, and no loops on scales above the motif size.)

Figure 3. “Small world” behavior in hierarchies. Shown is the scaling of the average diameter ⟨d⟩ with N, the number of vertices, for the branching (solid line), nested (dashed line), and mixed (intermediate, dotted line) hierarchies on the triangle motif. Branching hierarchies have the small-world property, ⟨d⟩ ∼ ln N, while nested hierarchies scale as a power law with index roughly that of the motif diameter: ⟨d⟩ ∼ N^⟨d_M⟩. Mixed hierarchies – shown here, those built of alternating branching and nested iterations – also scale as a power law, but at a slower rate than the nested case.

This can be understood by considering successive construction steps. The increase in the number of vertices at each iteration leads to an exponential scaling of system size with iteration number:

(1) N_i − N_{i−1} = (n − 1) N_{i−1},

where n is the number of vertices in the base motif and N_i is the total number of vertices at iteration i. The tree structure of branching networks, however, means that the distance between nodes on the perimeter (i.e., those nodes with the greatest separations on the graph) only increases by a constant, proportional to n. The diameter, in other words, increases linearly at each iteration, and so the network as a whole has only a logarithmic scaling between diameter and system size.

The diameter increases much more rapidly for the nested structures. Crossing such a structure requires crossing the nested subgraphs, and so the separation between distant points increases proportional to N_i as well as n. This leads to a power-law scaling, with diameter increasing as a power of the number of vertices. The index of the average diameter scaling is the average diameter of the base motif. All of these relationships are shown in Fig. 3.

When two formation patterns are mixed, so that a graph may switch from branched to nested, the result is networks as in Fig. 4. As a means of self-organization, mixed iterations lead to a kind of self-dissimilarity, where coarse-graining reveals different organizational principles at different levels.
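The contrast between logarithmic and power-law diameter scaling can be checked with a short breadth-first-search routine for the average geodesic distance. The graphs below – a balanced binary tree and a simple path – are generic stand-ins with known scalings (⟨d⟩ ∼ ln N and ⟨d⟩ ∼ N respectively), not the branching and nested hierarchies of Fig. 3; they illustrate only the qualitative difference in growth.

```python
from collections import deque

def avg_distance(adj):
    """Average geodesic distance over all ordered pairs, via BFS from
    every node; adj maps node -> list of neighbours."""
    n = len(adj)
    total = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(dist.values())
    return total / (n * (n - 1))

def binary_tree(levels):
    """Balanced binary tree with 2**levels - 1 nodes: ⟨d⟩ grows like log N."""
    adj = {0: []}
    for v in range(1, 2 ** levels - 1):
        parent = (v - 1) // 2
        adj[v] = [parent]
        adj[parent].append(v)
    return adj

def path(n):
    """Simple path on n nodes: ⟨d⟩ grows linearly with N."""
    adj = {v: [] for v in range(n)}
    for v in range(n - 1):
        adj[v].append(v + 1)
        adj[v + 1].append(v)
    return adj

if __name__ == "__main__":
    for levels in (6, 8, 10):
        n = 2 ** levels - 1
        print(n, avg_distance(binary_tree(levels)), avg_distance(path(n)))
```

Running the loop shows the tree's average distance creeping up additively as N grows sixteen-fold, while the path's grows in proportion to N.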
Figure 4. Mixed iterations on the three-vertex loop. Branching then nesting (left); nesting then branching (right). Subsequent iterations define connectivity on increasingly larger scales.

Evidence for self-dissimilarity under coarse-graining has been found in both biological and engineered systems [49].

2. Signaling, Modularity and Noise
Whereas some network features and motif structures could arise through simple genetic or developmental stochastic processes, non-essential conservation rules, or due to the local constraints of physics and chemistry, we show that two extremes of network structure can still have important functional implications for the ways in which different parts of a network become correlated, or exchange information.

A crucial concept for this work is that of noise, which accounts for the influence of random events and unobserved degrees of freedom in a system. Particular examples of noise might include the small-number fluctuations in reactants that affect metabolic processes, the coupling of observed neurons to part of the larger, unobserved network, or the use of mixed strategies in a game-theoretic system. In the absence of strong theories for the noise properties of a particular case, we use a maximum-entropy model, as described below.

In particular, for our dynamics, we take nodes to have two states (“on” or “off”), approximating the discrete switching events observed in a number of systems from the cellular [50] to the social [51]. We follow recent work showing the dominance of pairwise interactions in system behavior [52–54], and consider networks with pairwise constraints described by a maximum entropy model. This amounts to requiring the full state of the system – the switch-state of all N vertices – be given by the Boltzmann distribution. This is then the Ising model on an arbitrary graph.
We can then write the Boltzmann distribution of spins P({σ}), as given by the set of pairwise constraints J_ij and external fields h_i:

(2) E({σ}) = (1/2) Σ_ij J_ij σ_i σ_j,  P({σ}) = (1/Z) exp(−β [E({σ}) + Σ_i h_i σ_i])

The J_ij are simply the edges of the different networks we consider. The system is in the maximum entropy state with only one observable – average total energy, or number of satisfied pairs – fixed [55]. Usually, the h_i are taken to be zero; when they are non-zero, this amounts to external constraints acting on single nodes – such as one might expect in a network partially devoted to sensing external conditions.

The most important parameter for this study is the overall factor of β. Large values of β correspond to the low-noise regime; conversely, as β goes to zero, the coupling between nodes is swamped by random fluctuations. We refer to β as the inverse noise, and focus on how changing β leads to changes in how the network correlates and processes information in both equilibrium and non-equilibrium situations.

Determining the correlational and information-theoretic properties of the networks involves finding the joint probabilities of the states of the network, P({σ}). In general, we are interested in quantities such as P(σ_i, σ_j), the joint probability of two nodes i and j being in the same, or opposite, switching states.

There are many different approaches to finding, or approximating, P({σ}); they are valid in different regimes. For the construction rules we consider, for motif structures with maximum degree of two (i.e., chains), exact solutions of the Ising model are possible via a renormalization group transformation, and for structures with maximum degree of three, an exact solution for the partition function in zero field is generally possible [56].
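For small graphs, P({σ}) in Eq. 2 can be evaluated by brute-force enumeration of all 2^N states. The sketch below does this for the pair correlation ⟨σ_i σ_j⟩, taking J_ij = −1 on each edge so that low noise favors aligned neighbors and h_i = 0; the sign convention is an assumption here, since Eq. 2 leaves it to the definition of the J_ij.

```python
from itertools import product
from math import exp, tanh

def pair_correlation(edges, n, beta, i, j):
    """⟨σ_i σ_j⟩ under P({σ}) ∝ exp(−β E), with
    E = ½ Σ_ab J_ab σ_a σ_b, J_ab = −1 on edges (both orders counted),
    so E = −Σ_edges σ_a σ_b; zero external field."""
    Z = 0.0
    corr = 0.0
    for sigma in product((-1, +1), repeat=n):
        E = -sum(sigma[a] * sigma[b] for a, b in edges)
        w = exp(-beta * E)
        Z += w
        corr += w * sigma[i] * sigma[j]
    return corr / Z

if __name__ == "__main__":
    # A single bond gives the textbook result ⟨σ0 σ1⟩ = tanh β.
    print(pair_correlation([(0, 1)], 2, 0.5, 0, 1), tanh(0.5))
```

The single-bond case reproduces the leading-order (tanh β)-per-link behavior discussed below; on graphs with loops, the enumeration automatically sums all paths of influence.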
For arbitrary motifs, however, the partition function for the i-th iteration can no longer be written as the partition function for the (i − 1)-th iteration under a coupling transformation J → J′.

In this paper, we adapt the “direct configurational” method (DCM; see, e.g., Ref. [57]), which allows exact computation in small, finite networks. The self-similar properties of our networks allow these computations to be extended to graphs with many hundreds of nodes.

The computation of P({σ}) can be done via the normalizing term, or “partition function,” Z, in the denominator of Eq. 2. Derivatives of Z with respect to h then give moments of P({σ}). These computations not only provide an exact solution, but decompose graphically into sums of “paths of influence” closely related to the Feynman diagrams of condensed matter and particle physics. Appendix B describes these calculations, which provide a rigorous basis for the qualitative discussion of how multiple paths lead to critical phenomena (see, e.g., Ref. [58]).

3. Phase Transitions and the Mesoscopic Regime
Despite the simplicity of Eq. 2, the model has a rich set of behaviors, including (depending on the graph structure) a critical point, β_c. The characterization of critical phenomena has been a central theme of the study of complex networks [59]. With exact expressions for the correlations in hand (see Appendix B), we can study the nature of the order-disorder transition on the different hierarchies presented here.

Despite the length of the expansions – ratios of two power series in tanh β to O(N) – the general behavior of the correlation functions for different networks is similar, with a monotonic rise from the disordered to the ordered state. The leading-order behavior in the high-noise limit as β → 0 is (tanh β)^r_min, where r_min is the shortest distance between the two vertices under consideration. The failure of this approximation is due to the increasing number of paths of influence available, which can allow longer paths to dominate if they increase in number quickly enough to offset the exponential suppression in signal.

For branching networks, the tree-like structure suggests that a phase transition in the bulk is prevented at non-zero noise by the nucleation of boundary spins, as happens in the Cayley tree (as distinguished from the Bethe lattice) [60]. Since nested graphs can also be detached by a constant number of cuts at any iteration – even when N → ∞ – the critical point in the thermodynamic limit is also expected to be zero [61], similar to the kind of ferromagnetic frustration found in random graphs [62].

For these reasons, it is useful to define a critical point for a finite system without reference to a thermodynamic limit, but through the behavior of various correlations that, though never mathematically singular, do show the existence of a transition between two distinct behaviors.

For the particular example of the Cayley tree, Ref. [63] introduced the notion of a cross-over noise, β_g.
Decreasing noise, which pushed β above β_g, was associated with the emergence of non-Gaussianity, a slowdown of dynamics, and glassy behaviors such as aging; the critical noise parameter β_g goes to its infinite-size limit (1/β_g goes to zero) very slowly (as log(log N)), so that the thermodynamic limit is not representative even for very large systems. The slow approach to thermodynamic limits is often found in finite ramification structures such as the Cayley tree [64].

We investigate these systems below using analytic tools (Sec. 3.1) and numerical simulation (Sec. 3.2), on networks of a few hundred nodes. This size, well within the mesoscopic regime, is qualitatively different from the infinite-size limit. The networks we study here are in this regime – but so are much larger networks, and system sizes nine orders of magnitude larger, for example, are expected to have properties that differ by only a factor of two.
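The quoted slowness can be checked with a one-line estimate: if the cross-over scale grows as log(log N), then going from N ∼ 10² to N ∼ 10¹¹ – nine orders of magnitude – changes it by only about a factor of two.

```python
from math import log

# log(log N) growth across nine orders of magnitude in system size
n_small, n_large = 1e2, 1e11
ratio = log(log(n_large)) / log(log(n_small))
print(round(ratio, 2))  # ≈ 2.1
```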
Figure 5. Stationary Aspects. Critical behavior, as measured by the heat capacity (top) and correlation length (bottom) for a large network composed of branching (solid line) and nested (dotted line) iterations on the triangle motif. The networks are all four-stage iterations, with 243 nodes and 363 bonds. In the heat capacity measure, nested constructions show a greater concentration of accessible states at the transition point. The correlation length for branching networks is initially larger than for nested, but around the cross-over noise the nested structures show a rapid rise. Both these effects are driven by the existence of multiple paths between points in nested networks. Vertical gray lines show where the correlation length exceeds the average network diameter, leading to an undamped pathway. In both cases, this happens near the peak of the heat capacity.

3.1. Stationary Aspects.
We measure two quantities related to the stationary, equilibrium properties of the two networks, focusing on their critical phenomena. First we consider the heat capacity, C (see, e.g., Ref. [65] for an example of its use in biological systems). At constant external field, we have

(3) C = (T/V) ∂S/∂T = (1/V) ∂ ln N / ∂ ln T = (β²/N) ∂² ln Z / ∂β²,

where S is the entropy, V the volume (here, the number of nodes), and N the weighted number of states accessible to the system. The heat capacity measures the (logarithmic) number of states accessible per (log) unit noise. It has a maximum, more or less sharply peaked, as a function of β. As one heats the system through this point, the number of accessible states increases dramatically, and the intensity and variety of the cooperative phenomena in the transition can be quantified by the height of the peak. Nested networks are characterized by a greater concentration of states, as can be seen in the top panel of Fig. 5; they can be said to have sharper transitions to the ordered state.

The approach to a phase transition is often defined in terms of a transition between an exponential, and power-law, decline in the correlation function as a function of distance. If we measure the correlation between pairs of nodes separated by a distance Δr, we can define a correlation length, D,

(4) ⟨σ(r) σ(r + Δr)⟩ ∝ χ^(−Δr/D),

where on these inhomogeneous networks we take Δr to be the geodesic distance between points. The bottom panel of Fig. 5 shows the transition that occurs as one passes into the low-noise regime: nested networks, with multiple paths between distant points, allow distant parts of the network to correlate at β ∼ 1. At the q = 4 iteration, the nested networks have a maximum diameter of 32, compared to 9 for the more tightly structured branching networks. This means that at high noise (small β), nested networks allow for greater modularity – distant parts of the network are less correlated. The transition that occurs at β ∼ 1, where D for nested networks becomes much larger, reverses this property; nested hierarchies at low noise have stronger long-range correlations. We return to the question of modularity in Sec. 3.3, where we address it through simulation.

The correlation length D exceeds the average diameter at β roughly 0.8 (branching) and 0.9 (nested). By analogy with infinite-limit, and homogeneous, systems, one can consider this noise level as where the effective mass of long-range fluctuations becomes zero. In contrast with the standard infinite-limit phase transition, this critical point occurs near, but not precisely at, the maximum of the network heat capacity.
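Equation 3 can be evaluated without numerical differentiation, since β² ∂² ln Z/∂β² equals β² times the variance of the energy under the Boltzmann distribution. The sketch below enumerates states for a small graph – again assuming the ferromagnetic convention J_ij = −1 on edges, with zero field – and exhibits the interior peak of C(β) discussed above.

```python
from itertools import product
from math import exp

def heat_capacity(edges, n, beta):
    """C = (β²/N) Var(E), with E = −Σ_edges σ_a σ_b (J = −1, h = 0);
    this is the fluctuation form of (β²/N) ∂² ln Z / ∂β²."""
    weights, energies = [], []
    for sigma in product((-1, +1), repeat=n):
        E = -sum(sigma[a] * sigma[b] for a, b in edges)
        energies.append(E)
        weights.append(exp(-beta * E))
    Z = sum(weights)
    mean = sum(w * E for w, E in zip(weights, energies)) / Z
    var = sum(w * (E - mean) ** 2 for w, E in zip(weights, energies)) / Z
    return beta ** 2 * var / n

if __name__ == "__main__":
    triangle = [(0, 1), (1, 2), (0, 2)]
    betas = [0.05 * k for k in range(1, 61)]
    cs = [heat_capacity(triangle, 3, b) for b in betas]
    peak = betas[cs.index(max(cs))]
    print(peak)  # an interior maximum in beta, as for the finite-size transition
```

Even a bare triangle shows the finite-system signature: C vanishes at both β → 0 and β → ∞ and peaks in between, with no true singularity.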
3.2. Dynamical Aspects.

Heat capacity and correlation length are both static measures of modularity and signaling. We also expect dynamical signatures of the cross-over in finite networks. In this section, we show that though branching networks are much smaller in diameter (Fig. 3) than nested networks, they have much longer timescales (Fig. 6). Nodes are actually less coupled compared to the higher-diameter nested networks, where multiple paths between nodes exist.

In general there are many dynamics compatible with the stationary distributions of Eq. 2. We take the standard Glauber dynamics [66, 67], with each update step being associated with a different randomly chosen node. (We expect other local update rules, such as Metropolis [68], to have similar dynamical properties, with differences appearing only on introduction of non-local rules such as those of Ref. [69].)

Given a sufficiently long time series for any pair of nodes, we can then measure the timescales of their dynamics. We focus here on the decay of the overlap,

(5) C(t, t_w) = (1/N) Σ_{i=1}^{N} σ_i(t) σ_i(t + t_w),

where a single step, Δt equal to one, is an update of a randomly chosen spin. The function C(t, t_w) decays from unity (at t_w equal to zero) down to (at noises below the glassy phase) a noise floor given by the Poisson statistics of uncorrelated spins. It can be used to measure a number of different properties, including that of aging below the spin glass transition [63]. Here we measure τ_w, the time it takes C(t, t_w) to cross a particular threshold. In Fig. 6, the threshold is taken to be 0.5, so that τ_w is the average time for a node to flip with 25% probability. Because of the long-tailed distribution of relaxation times, we follow Ref. [70] in estimating τ_w by the median, instead of the mean.

Relaxation times for spin-glass systems are themselves time-dependent – the longer one lets the system run, the longer the correlation time becomes. This is referred to in the physics literature as ‘aging’ [71] – correlational properties depend on the age (time since initialization by random initial conditions) of the system. We also see evidence for time-dependent correlation functions past the critical point, similar to that found by Ref. [63], but focus here on the contrasting behavior of the relaxation time at constant age. We are here in the strongly out-of-equilibrium regime (long timescales on a newly-initialized network).

The top panel of Fig. 6 shows how τ_w scales with β. The strongest differences between the two networks emerge in the low-noise (high-β) regime. In particular, branching networks, with their hub-and-spoke topologies, show timescales more than two orders of magnitude longer than their nested counterparts.

The differences are due to bottlenecks to communication that exist between distant parts of the network in the branching case. Since all paths between distant nodes must pass through a single point, the speed of communication is limited by the timescale for that single point to change state.
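A minimal sketch of the update rule and the overlap of Eq. 5 follows – again with the assumed ferromagnetic convention J = −1 on edges and zero field; the ring graph, seed, and sweep counts are illustrative choices, not the paper's setup.

```python
import math
import random

def glauber_sweeps(adj, spins, beta, sweeps, rng):
    """In-place Glauber dynamics: each step picks a random node and sets
    it to +1 with probability 1 / (1 + exp(−2 β h_loc)), where h_loc is
    the sum of its neighbours' spins (ferromagnetic J = −1 convention)."""
    n = len(spins)
    for _ in range(sweeps * n):
        i = rng.randrange(n)
        h_loc = sum(spins[j] for j in adj[i])
        p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h_loc))
        spins[i] = 1 if rng.random() < p_up else -1

def overlap(a, b):
    """Eq. 5: C(t, t + t_w) = (1/N) Σ_i σ_i(t) σ_i(t + t_w)."""
    return sum(x * y for x, y in zip(a, b)) / len(a)

if __name__ == "__main__":
    rng = random.Random(0)
    n = 100
    ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    start = spins[:]
    glauber_sweeps(ring, spins, beta=0.0, sweeps=50, rng=rng)
    print(overlap(start, spins))  # at beta = 0, near the uncorrelated noise floor
```

Measuring τ_w then amounts to recording how many sweeps pass before the overlap with a reference configuration crosses the chosen threshold, and taking the median over runs.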
Figure 6. Dynamical Aspects. Top: The relaxation time, τ_w, as a function of inverse noise β. Timescales are shown for branching (solid line) and nested (dashed line) networks. Network parameters here are the same as in Fig. 5. As the noise drops (β increases), the relaxation times diverge for both networks. At noises below the glassy transition, it is the branching networks that show a stronger slowdown, caused by bottleneck-frustration similar in nature to that of the Cayley tree. Ranges between the thinner lines enclose 50% of samples. Bottom: distribution of τ_w at β = 1.

That small-world networks, if they rely on hub-and-spoke topologies, are actually slower, is connected to the emergence of long-lived metastable states. The analogs of domain walls for inhomogeneous networks – separated parts of the system that fall into opposite states of local consensus – emerge at low noise. These walls propagate through the system until they meet bottlenecks – places where disparate parts of the network connect via a single node – and are effectively pinned for long periods. Multiple paths, by contrast, increase the number of points of contact between different neighborhoods.

The dispersion in behavior (lower panel of Fig. 6) for both networks is large. This shows the effect of non-equilibrium dynamics and an (effective, since finite-time) breaking of ergodicity. In some cases, the initial conditions, after burn-in, may lead to a particularly ordered configuration from which the system departs with only vanishing probability. In other cases, metastable states are not as long-lived, and relaxation can happen quickly.

That one can achieve dispersion in behavioral timescales of nearly five orders of magnitude from a system with only a few hundred nodes is remarkable. The dispersion, which itself sees an exponential rise at β ∼ 1, is another indication of the presence of a finite-size critical phase, present only in the mesoscopic regime.

3.3. Fluctuation Localization.
Though branching networks are smaller – nodes are, on average, closer to each other – we have shown by simulation that the timescales of dynamical change are much longer (Fig. 6). Meanwhile, we can determine how many new configurations become accessible as the noise declines from our analytic determination of the heat capacity (Fig. 5).

In this section, we examine features relevant to information-processing, which depends on both the stationary properties of the network (how many configurations are accessible) and the dynamical ones (how quickly one configuration turns into another).

In particular, we ask about the entropy of the system over finite time, and how and where that information is stored: locally (in single nodes), on the motif scale, or non-locally, across widely separated motifs. Such questions are essential to biological function: distinct substructures must not only process information by means of local motif patterns, but also communicate the results of that processing to more distant nodes. Anomalous concentrations of a metabolic product, say, may be detected by influences on one part of the system, but may need to trigger a transcriptional cascade in a different module.

For systems where bits are largely independent, the multi-information is close to zero, indicating that very little information is exchanged between subgroups. When nodes come to process information in complex ways, however, the multi-information becomes larger, indicating that apparent randomness at the local scale becomes pattern exchange on larger
scales. Formally, the multi-information at any particular scale is the decrease in entropy seen when the distributions taken by the smaller scales are combined into a joint probability distribution.

We measure the multi-information (see, e.g., [72]), a generalization of mutual information used to describe cooperative information-processing [73, 74]. For the case of three subsystems, whose internal states are represented by a vector, we have for the multi-information

(6) $I_{nl} = \left( \sum_{i=1}^{3} H(P[\vec{x}_i]) \right) - H(P[\vec{x}_1, \vec{x}_2, \vec{x}_3])$

We consider the multi-information between the motif and global scale, with the three most widely separated, but equidistant, triangle motifs chosen as the x_i sets. In words, the first term of Eq. 6 is the total entropy of the subsystems considered in isolation of each other; if there were no long-range synchronization, this would be equal to the second term, and the multi-information would be zero. Conversely, since the maximum amount of information in the subsystem is nine bits, the maximum amount of multi-information is six bits (all three distant motifs perfectly correlated).

Fig. 7 shows these results, computed directly from simulation. We estimate the multi-information using the NSB estimator [75, 76] – we find it leads to good estimates of simulated datasets with the dramatically lower entropies one expects past the mesoscopic critical points.

Both the branching and nested structures show a distinct window at which long-range synchronization is strongest and roughly half a bit can be communicated between distant parts of the system. At first, as noise decreases, distant nodes become more correlated (as in Fig. 5), and the multi-information rises; however, at low noise (large β), fluctuations on all scales are frozen in.
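The subtraction in Eq. 6 can be made concrete with a short sketch. This is an illustration only: it uses a naive plug-in entropy estimate in place of the NSB estimator used in the text, and the `multi_information` helper and its sample format are hypothetical constructions of ours, not the paper's code.

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Plug-in (maximum-likelihood) entropy estimate, in bits."""
    counts = Counter(samples)
    n = sum(counts.values())
    return -sum((c / n) * log2(c / n) for c in counts.values())

def multi_information(motif_samples):
    """Eq. 6: sum of marginal motif entropies minus the joint entropy.

    motif_samples: one tuple per time step, each holding the state
    (itself a tuple of node values) of every motif.
    """
    k = len(motif_samples[0])
    marginals = sum(entropy([s[i] for s in motif_samples]) for i in range(k))
    joint = entropy([tuple(s) for s in motif_samples])
    return marginals - joint

# Perfectly synchronized motifs: all three 3-node motifs copy the same state.
states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
synced = [(s, s, s) for s in states]
print(multi_information(synced))  # 6.0 bits: the maximum quoted in the text
```

With three perfectly correlated three-bit motifs, the marginals contribute nine bits while the joint distribution carries only three, reproducing the six-bit ceiling stated above; statistically independent motifs would instead give a value near zero.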
In both cases, this window appears around the same noise level as the peak of the heat capacity; this provides additional support for the description of a mesoscopic phase transition, since more conventional thermodynamic systems are known to have maximum multi-information at the critical point [77].

4. Discussion
In contrast with the regular lattices of field theory, complex networks are characterized by both small-scale pattern and large-scale structural diversity. On small scales, repeating network motifs [78] indicate strong local inhomogeneity. On large scales, networks may be characterized by modularity or by large-scale motifs visible under coarse-graining or aggregation of vertices [79, 80]. The study of such transformations on complex networks has uncovered evidence for self-similarity [6], and small-scale and large-scale network structures, for example, are found to be correlated in cellular networks [81].

(Footnote: When used to estimate the multi-information by subtraction, we find that the estimator is not unbiased; this effect is overwhelmed, however, for multi-information measurements larger than 10− bits, by the intrinsic dispersion of simulation runs.)

Figure 7. Fluctuation Delocalization. Top: The multi-information between the local (motif-scale) and global scales for branching (solid) and nested (dashed) structures. Also shown are the upper and lower ranges for 50% of the networks studied. In both cases, greater non-local correlations (high multi-information) are seen as the noise is reduced (β increases) – until a critical point at which multi-information declines to zero, indicating that information processing has become local again. The heavy line at ≈ 0.01 indicates the 1σ errors associated with the NSB estimator. Bottom: distributions near the peak of the multi-information (branching at β = 0.9; nested at β = 1.1) showing the dispersion in measurements.
Table 1. Summary of Results. The behaviors of contrasting self-similar networks in the mesoscopic regime.

                          Branching        Nested
Stationary:
  diameter                small-world      polynomial
  correlations            short distance   long distance
  phase transitions       soft             hard
Dynamical:
  timescales              slow             rapid
  low-noise processing    local            global

In this contribution, we compared two alternative topologies – networks of branching motifs (with one pattern replicated many times) and networks of nested motifs (where patterns play the role of templates). Branching networks have a familiar tree-like structure and possess the small-world property; their benefits include efficient signal propagation at high noise. Nested networks retain self-similarity but without small-world scaling, and confer benefits such as redundant paths between distant nodes at the cost of longer path lengths.

A central theme has been the difference at the onset of a mesoscopic version of a phase transition. Phase transitions in general occur in networks when the exponential fading of a correlation along a particular path is balanced by the exponential increase in the number of paths between the two points [58]. In complex networks, this implies that structural inhomogeneity on a range of different scales will be relevant for critical behavior analogous to that found in more regular systems.

Our investigation has uncovered a number of counter-intuitive properties of small-world systems. Smaller diameter networks adjust more slowly, have shorter correlation lengths, and cannot achieve the levels of non-local integration seen in nested systems. Our analytic exposition of the problem shows explicitly how the onset of correlations is driven by the existence of multiple paths between points; our simulations show how the existence of such paths allows for the more rapid dissipation of inhomogeneity. Multiple paths are thus central for both information processing and the timescales of coordination.

In some cases, the characteristic features of the small-world topology listed in Table 1 are desirable. They can lead to greater modularity, and longer timescales, than those of more “open” topologies with longer path lengths.
At low noise, their fluctuations are more localized, meaning thatfluctuations in distant structures are increasingly independent, and disjointmemories do not merge and fade as fast. Depending on the nature of com-putation, these may be desirable properties – as they are, for example, inthe case of the liquid state model [82].
The existence of such paths also bears on the question of network robustness – particularly under targeted attack [83]. When all correlational information between two nodes must travel along a single path, the failure of any intermediate node is catastrophic. Conversely, robustness to node deletion will, in general, increase as the number of distinct paths between points increases, even if the number of edges remains constant.

We suggest that our work is particularly relevant to the study of information processing in the brain [84, 85]. On the one hand, the maximum entropy model of Sec. 2 has formed the basis of a powerful set of models for the description of observed neural correlations [86], and the information-theoretic quantities we have investigated are directly related to the Tononi φ measure [87] and the C_N measure of Ref. [88]. These latter measures consider various bi-partitions; the fractal structure of our networks naturally suggests extensions of these measures to the tower of higher-order correlations as described in Ref. [72].

On the other hand, the multi-scale structure of the brain – from scales of 50 µm to centimeters – is well-established [89, and refs. therein]. The topological and dynamical properties of certain random and deterministic self-similar wirings, relevant to neuroscience, have been under recent investigation [89, 90]. Our work has direct bearing on explicit models of cortical network architecture [91], and in particular suggests that small-world path lengths may not be the only way in which a network might optimize information processing.

Self-similar network properties have proven relevant to the study of a vast range of other natural systems, from gene-regulatory [10, 11] and metabolic networks [12], all the way up to food webs [17] and human social networks [13–16]. In the case of social networks, for example, branching networks with complete-graph motifs are small-world examples of the robust social quilts studied by Ref.
[92], while “span of control” theories [93] address the consequences of hierarchy for information processing and dynamics [94]. Hierarchical structure may also be associated with the emergence of long timescales associated with strategic information processing in animal systems [95].

In parallel, the maximum-entropy models we consider here have proven useful not only in studies of neural functioning, but also in studies of the immune system [96], and animal behavior [97, 98]. In many cases, such systems are found at criticality [99], making it important to understand the mesoscopic regime.

The analysis of this paper suggests that statistics related to the existence of multiple paths in a network may be an important way to determine how relevant structural features have been organized to achieve the contrasting properties found in Table 1. It may not be necessary to compute all the relevant Feynman diagrams in a graph to answer central questions about the nature of the critical point and ordered phase. When calibrated against the exactly-solved models of this paper, statistics related to the scaling of the number of paths between vertices as a function of distance may be sufficient to study both the nature of the critical point and the existence of non-equilibrium effects. We leave this question for future investigation.

Most theoretical studies have focused on comparing the functional implications of self-similar and non-self-similar networks. We have found none that consider the functional implications of alternative self-similar networks. If it proves to be true that constraints of development account for, and impose, widespread network self-similarity, then variations on a fractal theme will become the principal means by which development might tinker with functionally important properties.

5. Acknowledgements
SD thanks Van Savage, Tanmoy Bhattacharya, Simeon Hellerman, George Bezerra, and the Institute for the Physics and Mathematics of the Universe, University of Tokyo, Japan, and acknowledges the support of an Omidyar Fellowship. SD and DCK acknowledge support of National Science Foundation Grant 1137929, “The Small Number Limit of Biological Information Processing.” DCK acknowledges a John Templeton Foundation award on the Origins and Evolution of Regulation in Biological Systems.

Appendix A: Formal Definitions of Branching and Nested Networks
Beginning with a motif, M, with N(M) vertices, we build up, by iteration, a larger structure, S(q, M), where q is the number of iterations and S(q = 0, M) is M. The motif directs the assembly of increasingly larger structures, in a recursive fashion, providing the graph with both small and large scale inhomogeneity.

At the qth iteration, replace the vertices in S(q−1, M) by separate copies of M, and rewire the system while maintaining the local motif structure. One might take the vertices of a triangle, for example, and replace each of them by a copy of the same three-node structure. The different ways to accomplish this model how a network may develop the internal structure of its subsystems; going the other direction, a particular choice defines a coarse-graining operation that might form an element of a renormalization group.

More formally, for each vertex i in M, replace the vertex by a copy of M, M_i. For each vertex j in M, connect the free edges – those remaining from the previous iteration that were attached to vertex i – to the internal vertices of M_i, by some mapping f(i, j) (generally not symmetric).

At the qth iteration, take M and replace each vertex i in M with copies of S(q−1, M). The rewiring now takes an edge from the jth subunit to the ith subunit, and attaches it to the f(i, j) vertex in S(q−1, M). The f(i, j) vertex for S(2, M) is defined as the f(i, j)th vertex in the f(i, j)th subunit, and so forth for higher values of q.

Graph S(q, M) has $N(M)^{q+1}$ vertices and $n(M) \sum_{i=0}^{q} N(M)^i$, or $n(N^{q+1} - 1)/(N - 1)$, edges. The average degree of S is always close to that of the local graph M, so that sparse networks remain sparse; however, the higher moments of the degree distribution may grow dramatically depending on the choice of f.

Going from S(q, M) to S(q−1, M) is a form of renormalization [43].
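The recursion can be sketched concretely. The sketch below is an illustration under our reading of the rewiring rule (an edge of M between i and j attaches to the recursively-defined f(i, j) vertex of copy i and the f(j, i) vertex of copy j); `build` and its tuple vertex labels are hypothetical constructions of ours, shown for the two simplest rules, f(i, j) = i (branching) and f(i, j) = j (nested):

```python
def build(motif_edges, n_nodes, q, f):
    """Edge list of S(q, M): vertices are tuples (a_1, ..., a_{q+1}) of motif labels.

    f(i, j) names the vertex inside copy i that receives the edge from copy j;
    the f(i, j) vertex of S(q-1, M) is taken recursively, i.e. f(i, j) repeated.
    """
    if q == 0:
        return [((i,), (j,)) for i, j in motif_edges]
    sub = build(motif_edges, n_nodes, q - 1, f)
    edges = []
    # replicate S(q-1, M) once per motif vertex
    for i in range(n_nodes):
        edges += [(u + (i,), v + (i,)) for u, v in sub]
    # rewire each motif edge to the recursively-defined f(i, j) vertices
    for i, j in motif_edges:
        edges.append(((f(i, j),) * q + (i,), (f(j, i),) * q + (j,)))
    return edges

triangle = [(0, 1), (1, 2), (2, 0)]
branching = build(triangle, 3, 2, lambda i, j: i)
nested = build(triangle, 3, 2, lambda i, j: j)
# Both have n(N^{q+1} - 1)/(N - 1) = 3 * (27 - 1)/2 = 39 edges on 27 vertices.
print(len(branching), len(nested))  # 39 39
```

Either rule yields the same vertex and edge counts quoted above; only the wiring, and hence the existence of bottlenecks versus redundant paths, differs.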
Once M is chosen, the remaining choice is that of the assembly rule, f(i, j); we consider the two simplest cases: f(i, j) equal to i (branching assembly), and to j (nested assembly). These operations are easier to see graphically; for the example of a triangle motif being replicated at multiple scales by the two methods, see Fig. 1. For a more explicit example of how the f(i, j) rule works, see Fig. 8.

7. Appendix B: Ising Solutions in the Direct Configurational Method
In Sec. 3.1 we examined stationary properties of the system. Because of the divergence of timescales discussed in Sec. 3.2, it is difficult to obtain reliable measurements of these properties from simulation. Here we discuss the “Direct Configurational Method” (DCM), which allows one to write down expressions for these properties analytically. The expressions are long,
Figure 8. An example of how the f(i, j) function specifies which fine-grained node to connect to, for the first iteration of the nested case, f(i, j) = j.

but tractable by analytic methods that use computer algebra. They enable us to separate finite-size–finite-time effects (accessible by simulation) from finite-size–infinite-time effects associated with equilibrium.

A number of different expansions for the correlations can be written in the high-noise (i.e., β ≪ β_c) limit. The most common, known as the linked-cluster expansion [100], has formed the center of studies of the Ising model on regular lattices [101–103].

Because of the attention paid to lattices with great amounts of symmetry and of infinite extent, less often used are the exact solutions, expressible as a power series with a finite number of terms $(\tanh \beta J)^n$, available for lattices of finite size. This “direct configurational method” (Ch. 2, Ref. [57]) allows one to write an expression for the partition function of a graph directly, by enumerating all of the subgraphs (including disconnected subgraphs) of the original lattice with all vertices even. Similar expressions, with some vertices “rooted” in various ways, allow one to determine correlation functions through partial derivatives of Z.

For a system of any appreciable size, enumerating the disconnected graphs is a nearly impossible computational task. Finding the free energy, F, equal to ln Z, turns such a sum of disconnected graphs into a far shorter sum involving only connected graphs, with multiple bonds between vertices allowed, weighted in a new fashion (Ch. 20, Ref. [104]). These are the usual Feynman diagrams, and allow one to handle an arbitrarily large lattice to finite order in β.
When the lattice has translation symmetries, bond- and vertex-renormalization [100, 105–107] becomes possible, allowing computations to very high order (currently around 20th order [108]).

In the case of a biological network, however, many of these techniques become impractical; the standard renormalization procedures are frustrated by the strong inhomogeneity in the network, and the unrenormalized graphs are far more numerous and still require computation of the symmetry factors. When a network is characterized by repeating motifs within a larger lattice, however, the enumeration of subgraphs becomes plausible.

In the DCM, to compute the partition function, Z, on a graph G, we take all subgraphs g of G with vertices even; this set is written E(G) and includes disconnected subgraphs. We can then write

(7) $Z = 2^{N(G)} (\cosh \beta J)^{n(G)} \sum_{g \in E(G)} v^{n(g)}$,

where n(g) is the number of edges in graph (or subgraph) g, N(g) the number of vertices, and v is tanh βJ. We take E(G) to include the “null graph” with no edges. Finding the derivatives of Z with respect to a set of external fields amounts to allowing some vertices to be odd. We write, for example,

(8) $P_{a,b} = 2^{N(G)} (\cosh \beta J)^{n(G)} \sum_{g \in E(G,a,b)} v^{n(g)}$,

where E(G, a, b) are the subgraphs with all vertices even, when the effective number of edges coming in to vertices a and b are both incremented by one (note that E(G, a, a) is the same as E(G)). Then,

(9) $\langle \sigma_a \sigma_b \rangle = \frac{1}{Z} \frac{\partial^2 Z}{\partial h_a \partial h_b} = \frac{P_{a,b}}{Z}$,

and higher-order (connected) correlations yet can be computed as

(10) $\langle \sigma_{i_1} \sigma_{i_2} \cdots \sigma_{i_k} \rangle = \frac{\partial^k \ln Z}{\partial h_{i_1} \cdots \partial h_{i_k}}$.

Direct enumeration of all possible disconnected subgraphs rapidly becomes prohibitive, since computation time is exponential in the number of edges. For the motifs, however, with small n(M) (less than 10, say), the computation can be done on a modern desktop machine.
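Eqs. 7–9 can be checked by brute force on a small graph. The sketch below (an illustration; the helper functions are our own) compares the direct spin sum with the even-subgraph expansion of Eq. 7, and the rooted sum of Eq. 8 against the brute-force correlation, for a single triangle motif:

```python
import itertools
import math

def brute(edges, n, bJ):
    """Z and <sigma_0 sigma_1> by summing over all 2^n spin configurations."""
    Z = corr = 0.0
    for s in itertools.product((-1, 1), repeat=n):
        w = math.exp(bJ * sum(s[i] * s[j] for i, j in edges))
        Z += w
        corr += s[0] * s[1] * w
    return Z, corr / Z

def dcm(edges, n, bJ, roots=()):
    """Sum v^{n(g)} over subgraphs even everywhere except at `roots` (Eqs. 7, 8)."""
    v = math.tanh(bJ)
    total = 0.0
    for keep in itertools.product((0, 1), repeat=len(edges)):
        sub = [e for e, k in zip(edges, keep) if k]
        deg = [0] * n
        for i, j in sub:
            deg[i] += 1
            deg[j] += 1
        for r in roots:
            deg[r] += 1  # rooted vertices get one extra (external) edge
        if all(d % 2 == 0 for d in deg):  # keep only fully even subgraphs
            total += v ** len(sub)
    return 2 ** n * math.cosh(bJ) ** len(edges) * total

triangle = [(0, 1), (1, 2), (2, 0)]
Z, c01 = brute(triangle, 3, 0.7)
print(abs(Z - dcm(triangle, 3, 0.7)) < 1e-9)                      # Eq. 7: True
print(abs(c01 - dcm(triangle, 3, 0.7, roots=(0, 1)) / Z) < 1e-9)  # Eq. 9: True
```

For the triangle, the only even subgraphs are the null graph and the full triangle, so Eq. 7 collapses to $Z = 8 \cosh^3 \beta J \, (1 + v^3)$, and the rooted sum gives $\langle \sigma_0 \sigma_1 \rangle = (v + v^2)/(1 + v^3)$.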
Our general method will be to compose the partition function for S(q, M) from the partition function for S(q−1, M).

7.1. Branching Networks in the Ising Model.
Determining the partition function for the branching assembly rule is reasonably straightforward. It is aided by the tree-like hierarchy that arises as the graph is built up; all disconnected, even graphs at any stage can be decomposed into the union
of the set of disconnected graphs on the N(M) subgraphs S(q−1, M) and the disconnected graphs on the additional motif M that now forms the “highest level” of the network:

(11) $Z_q = 2^{N(M)} (\cosh \beta J)^{n(M)} Z_{q-1}^{N(M)} \sum_{m \in E(M)} v^{n(m)}$.

The free energy per vertex, $\ln Z_q / N^{q+1}$, is a slowly decreasing function of q.

Computing the correlation function of such a system is again aided by the tree-like hierarchy. At stage q, copies of the S(q−1, M) graph are placed at the N(M) locations. A vertex A on one of those copies can then be referenced by a string of q numbers $\{a_1, \ldots, a_q\}$, where $a_q$ is the vertex number of M into which the S(q−1, M) graph containing A is placed.

Consider, to begin with, the correlation function between vertex A, $\{a_1, a_2\}$, and B, $\{b_1, b_2\}$, in S(2, M). When the roots are found on different subgraphs (i.e., $a_2 \neq b_2$),

(12) $\langle \sigma_A \sigma_B \rangle = \frac{1}{Z_2} \frac{\partial^2 Z_2}{\partial h_A \partial h_B} = \frac{1}{Z_2} P_{1,a_1 a_2} P_{0,a_2 b_2} P_{1,b_2 b_1} Z_1^{N(M)-2}$,

where $P_{0,ab}$ is the sum of all graphs on M even in all vertices except at vertices a and b, which are odd (“subgraphs of M rooted at a and b”):

(13) $P_{0,ab} = \sum_{m \in E(M, \{a,b\})} v^{n(m)}$.

In words, the path from A to B requires leaving the subgraph containing A at $a_2$, crossing M, and entering the subgraph containing B at $b_2$.
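The tree-like decomposition behind Eq. 11 (even subgraphs factorize into even subgraphs on the subunits and on the top-level copy of M) can be checked by brute force on the first branching iteration of a triangle. The wiring below follows the f(i, j) = i rule of Appendix A; the vertex labeling and `Z_brute` helper are our own illustrative choices:

```python
import itertools
import math

# Branching S(1, triangle): three triangle copies (vertices v + 3k in copy k),
# with the hub vertices (0, 4, 8) wired into a top-level triangle.
copies = [(v + 3 * k, (v + 1) % 3 + 3 * k) for k in range(3) for v in range(3)]
edges = copies + [(0, 4), (4, 8), (8, 0)]

def Z_brute(edges, n, bJ):
    """Partition function by direct summation over all 2^n spin configurations."""
    return sum(math.exp(bJ * sum(s[i] * s[j] for i, j in edges))
               for s in itertools.product((-1, 1), repeat=n))

bJ = 0.5
v = math.tanh(bJ)
# Factorized form: each of the three subunit triangles and the top-level
# triangle independently contributes a factor (1 + v^3) to the even-subgraph sum.
Z_factored = 2 ** 9 * math.cosh(bJ) ** 12 * (1 + v ** 3) ** 4
print(abs(Z_brute(edges, 9, bJ) / Z_factored - 1) < 1e-9)  # True
```

The agreement reflects the absence of loops above the motif scale: any even subgraph of the assembled network restricts to an even subgraph on each triangle separately, which is what makes the recursive composition of partition functions possible.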
The additional factors of $Z_1$, the partition function on the subgraphs, come from those subgraphs that, if they are entered, must be left from the same vertex. The generalization to n roots is straightforward.

The general form for P can be written

(14) $P_{q,\{a\}\{b\}} = P_{q-1,\{a_1 \ldots a_{q-1}\}\{a_q \ldots a_q\}} \, P_{0,a_q b_q} \, P_{q-1,\{b_q \ldots b_q\}\{b_1 \ldots b_{q-1}\}} \, Z_{q-1}^{N(M)-2}$,

or, in words: one must get to the most connected node on one’s subgraph, and from there travel over the highest-level M to the most connected node of the destination subgraph.

We consider two vertices $\{a\}$ and $\{b\}$ to be separated by a copy distance d, where d is the number of subgraphs one must traverse to reach B from A (formally, if $a_d \neq b_d$ but either d is the generation of the graph or $a_{d+1} = b_{d+1}$). The correlation function has the form of an exponential cutoff:

(15) $\chi(d) = \frac{1}{|\mathcal{P}(d)|} \sum_{\{A,B\} \in \mathcal{P}(d)} \langle \sigma_A \sigma_B \rangle \sim \bar{\chi}^{\,d}$,

where $\mathcal{P}(d)$ is the set of all vertex pairs separated by copy distance d, and $\bar{\chi}$ is the average correlation between different pairs in M.

As the hierarchy deepens, local interactions are increasingly less aware of the larger structures in which they are embedded; as copy distance increases, correlations die exponentially. Furthermore, the correlation between two vertices depends only on their relative positions in the hierarchy; the pair is insensitive to the extent of the rest of the graph.

These effects are due to the way in which the subgraphs are wired together; all interactions between different subgraphs pass through single-vertex bottlenecks that restrict the number of paths. In the next section, we shall see how the nested construction opens these bottlenecks – at the cost of larger graph diameters – and alters the critical behavior.

7.2. Nested Networks and the General Form.
The branching computations were reasonably simple because of the absence of redundant paths, or loops, above the motif scale. (Formally, the difference – the set of unshared edges – between two paths decomposes into a union of even subgraphs on the motif M.) The absence of larger redundant paths has many implications in addition to how it affects the correlation functions; for example, connections between distant nodes may be cut by removal of a single vertex.

The nested rule partition function appears harder to compute because of the existence of loops and redundant paths on all scales. However, a general algorithm for the computation of an arbitrary $P_{q,\{a\},\{b\}}$ may be specified. One decomposes the problem into two parts. One first considers how to traverse the “coarse-grained” graph, at the highest level; and then considers how to travel “within” each coarse-grained vertex to complete the path. The difference between nested and branching then amounts simply to which particular node address on subgraph A allows you to jump to subgraph B. More formally, $P_{q,\{a\},\{b\}}$ is the sum over subgraphs on the motif M in the following way:

(1) At level q, one has a set of roots, $\{a_1, \ldots, a_q\}$, $\{b_1, \ldots, b_q\}$, .... Each of these roots corresponds to a root in one of the S(q−1, M) copies. For example, the copy number $a_q$ has a root $\{a_1, \ldots, a_{q-1}\}$.

(2) Consider in turn each subgraph m in motif M (where m can be disconnected or connected, odd or even).

(3) Each edge of that subgraph gives two additional roots, one associated with each end of the edge. For example, an edge between nodes $a_q$ and $b_q$ leads to two new roots, one for the copy $a_q$, and one for the copy $b_q$.

(4) If the graph has been constructed by branching, the additional root for the $a_q$ copy is $\{a_q \ldots a_q\}$ (a list q−1 long); if by nesting, the additional root for the $a_q$ copy is $\{b_q \ldots b_q\}$ (a list q−1 long).

(5) For each S(q−1, M) copy, we then have a set of roots, $r_i$.

(6) Add to the sum the term $v^{n(m)}$ multiplied by the product of the N(M) factors $P_{q-1,r_i}$.

Note that ensuring the final path is even is deferred to the bottom level, when $P_{0,r_i}$ is computed.
Eq. 7 is the basis of the direct configurational method; some examples of this expression for small graphs can be found in Ref. [109]. While enumeration of graphs much larger than 30 bonds is impossible, using the methods described in the text it is possible to build up much larger graphs with branching and nested properties of interest. With these equations, and Eqs. 11 and 14, an arbitrary hierarchy may be constructed, since there is no restriction on the form of $P_{q-1}$. We have checked the central formulae explicitly through subgraph enumeration on Fig. 2; for three iterations, we have checked the results through seventh order in β, and thus in v, by an unrenormalized linked-cluster expansion, using Feynman diagrams in the standard fashion [100, 110].

References

[1] S. H. Strogatz. Exploring complex networks. Nature, 410:268–276, 2001.
[2] Mark Newman, Albert-László Barabási, and Duncan J. Watts. The Structure and Dynamics of Networks. Princeton University Press, 2006.
[3] M. Madan Babu, Nicholas M. Luscombe, L. Aravind, Mark Gerstein, and Sarah A. Teichmann. Structure and evolution of transcriptional regulatory networks. Current Opinion in Structural Biology, 14, 2004.
[4] E. H. Davidson. The Regulatory Genome: Gene Regulatory Networks in Development and Evolution. Academic Press, 2006.
[5] U. Alon. Network motifs: theory and experimental approaches. Nature Reviews Genetics, 8:450–461, 2007.
[6] Chaoming Song, Shlomo Havlin, and Hernán A. Makse. Self-similarity of complex networks. Nature, 433:392, 2005.
[7] Reka Albert. Scale-free networks in cell biology. J Cell Sci, 118(21):4947–4957, 2005.
[8] A. Goldberger. Fractal mechanisms. IEEE Eng. Med. Biol. Mag., 11(2), 1992.
[9] R. Orbach. Dynamics of fractal networks. Science, 231:814–819, 1986.
[10] Patrick C. Phillips. Epistasis – the essential role of gene interactions in the structure and evolution of genetic systems. Nature Reviews Genetics, 9:855–867, 2008.
[11] Preston R. Aldrich, Robert K. Horsley, Yousuf A. Ahmed, Joseph J. Williamson, and Stefan M. Turcic. Fractal topology of gene promoter networks at phase transitions. Gene Regul Syst Bio, 4:75–82, 2010.
[12] E. Ravasz, A. L. Somera, D. A. Mongru, Z. N. Oltvai, and A.-L. Barabási. Hierarchical organization of modularity in metabolic networks. Science, 297(5586):1551–1555, 2002.
[13] R. Guimerà, L. Danon, A. Díaz-Guilera, F. Giralt, and A. Arenas. Self-similar community structure in a network of human interactions. Phys. Rev. E, 68:065103, 2003.
[14] M. C. González, H. J. Herrmann, J. Kertész, and T. Vicsek. Community structure and ethnic preferences in school friendship networks. Physica A: Statistical Mechanics and its Applications, 379(1):307–316, 2007.
[15] Marcus J. Hamilton, Bruce T. Milne, Robert S. Walker, Oskar Burger, and James H. Brown. The complex structure of hunter-gatherer social networks. Proceedings of the Royal Society B: Biological Sciences, 274(1622):2195–2203, 2007.
[16] William R. Burnside, James H. Brown, Oskar Burger, Marcus J. Hamilton, Melanie Moses, and Luis M. A. Bettencourt. Human macroecology: linking pattern and process in big-picture human ecology. Biological Reviews, 87(1):194–208, 2012.
[17] Jennifer A. Dunne. The network structure of food webs. In Mercedes Pascual and Jennifer A. Dunne, editors,
Ecological Networks: Linking Structure to Dynamics in Food Webs, page 27. Oxford University Press, 2006.
[18] Hernán D. Rozenfeld and Daniel Ben-Avraham. Percolation in hierarchical scale-free nets. Phys. Rev. E, 75:61102, 2007.
[19] Hernán D. Rozenfeld, Shlomo Havlin, and Daniel Ben-Avraham. Fractal and transfractal recursive scale-free nets. New Journal of Physics, 9:175, 2007.
[20] Filippo Radicchi, José J. Ramasco, Alain Barrat, and Santo Fortunato. Complex networks renormalization: Flows and fixed points. Physical Review Letters, 101:148701, 2008.
[21] Filippo Radicchi, Alain Barrat, Santo Fortunato, and José J. Ramasco. Renormalization flows in complex networks. Physical Review E, 79:26104, 2009.
[22] Hernán D. Rozenfeld, Chaoming Song, and Hernán A. Makse. Small-world to fractal transition in complex networks: A renormalization group approach. Physical Review Letters, 104:25701, 2010.
[23] Golnoosh Bizhani, Peter Grassberger, and Maya Paczuski. Random sequential renormalization and agglomerative percolation in networks: Application to Erdős-Rényi and scale-free graphs. Phys. Rev. E, 84:66111, 2011.
[24] Golnoosh Bizhani, Vishal Sood, Maya Paczuski, and Peter Grassberger. Random sequential renormalization of networks: Application to critical trees. Phys. Rev. E, 83:36110, 2011.
[25] E. Estrada. Network robustness to targeted attacks. The interplay of expansibility and degree distribution. European Physical Journal B, 52:563–574, 2006.
[26] Anthony H. Dekker and Bernard D. Colbert. Network robustness and graph topology. In Proceedings of the 27th Australasian Conference on Computer Science - Volume 26, ACSC '04, pages 359–368, Dunedin, New Zealand, 2004.
[27] Duncan S. Callaway, M. E. J. Newman, Steven H. Strogatz, and Duncan J. Watts. Network robustness and fragility: Percolation on random graphs. Phys. Rev. Lett., 85:5468, 2000.
[28] G. B. West, J. H. Brown, and B. J. Enquist. The fourth dimension of life: Fractal geometry and allometric scaling of organisms. Science, 284:1677, 1999.
[29] L. K. Gallos, C. Song, S. Havlin, and H. A. Makse. Scaling theory of transport in complex biological networks. Proc. Natl. Acad. Sci. USA, 104:7746–7751, 2007.
[30] C. F. Stevens. An evolutionary scaling law for the primate visual system and its basis in cortical function. Nature, 411:193–195, 2001.
[31] Dmitri B. Chklovskii, Thomas Schikorski, and Charles F. Stevens. Wiring optimization in cortical circuits. Neuron, 34(3):341–347, 2002.
[32] A. Raval. Some asymptotic properties of duplication graphs.
Phys.Rev. E , 68(6):066119, 2003.[33] K. Evlampiev and H. Isambert. Evolution of Protein Interaction Net-works by Whole Genome Duplication and Domain Shuffling. arXiv ,q-bio/0606036, 2006.[34] A Barab´asi and R Albert. Emergence of scaling in random networks.
Science , 286(5439):509, 1999.[35] G Whitesides and B Grzybowski. Self-assembly at all scales.
Science ,295(5564):2418, 2002.[36] A V´azquez, R Dobrin, D Sergi, J-P Eckmann, Z N Oltvai, and A-L Barab´asi. The topological relationship between the large-scale at-tributes and local interaction patterns of complex networks.
Proc.Natl. Acad. Sci. USA , 101(52):17940–5, 2004.[37] Juyong Park and M. E Newman. Solution of the two-star model of anetwork.
Phys. Rev. E , 70:66146, 2004.[38] Chaoming Song, Shlomo Havlin, and Hern´an A Makse. Origins offractality in the growth of complex networks.
Nature Physics , 2:275,2006.[39] Juyong Park and M. E Newman. Statistical mechanics of networks.
Phys. Rev. E , 70:66117, 2004.[40] Zhongzhi Zhang, Lili Rong, and Francesc Comellas. High-dimensionalrandom apollonian networks.
Physica A , 364:610, 2006.[41] Zhongzhi Zhang, Lichao Chen, Shuigeng Zhou, Lujun Fang, JihongGuan, and Tao Zou. Analytical solution of average path length forapollonian networks.
Phys. Rev. E , 77(1):017102, 2008.[42] S. N. Dorogovtsev, A. V. Goltsev, and J. F. F. Mendes. Pseudofractalscale-free web.
Phys. Rev. E , 65:066122, 2002.[43] David R Nelson and Michael E Fisher. Soluble renormalization groupsand scaling fields for low-dimensional Ising systems.
Ann. Phys., 91:226, 1975.
[44] Van M. Savage, Eric J. Deeds, and Walter Fontana. Sizing up allometric scaling theory. PLoS Comput Biol, 4(9):e1000171, 2008.
[45] V. M. Savage, L. P. Bentley, B. J. Enquist, J. S. Sperry, D. D. Smith, P. B. Reich, and E. I. von Allmen. Hydraulic trade-offs and space filling enable better predictions of vascular structure and function in plants. Proc. Natl. Acad. Sci. USA, 107(52):22722–22727, 2010.
[46] L. A. N. Amaral, A. Scala, M. Barthélémy, and H. E. Stanley. Classes of small-world networks. Proc. Natl. Acad. Sci. USA, 97:11149, 2000.
[47] Duncan J. Watts and Steven H. Strogatz. Collective dynamics of 'small-world' networks. Nature, 393:440, 1998.
[48] Kim Baskerville and Maya Paczuski. Subgraph ensembles and motif discovery using an alternative heuristic for graph isomorphism. Phys. Rev. E, 74:051903, 2006.
[49] Shalev Itzkovitz, Reuven Levitt, Nadav Kashtan, Ron Milo, Michael Itzkovitz, and Uri Alon. Coarse-graining and self-dissimilarity of complex networks. Phys. Rev. E, 71:016127, 2005.
[50] Javier Macía, Stefanie Widder, and Ricard Solé. Why are cellular switches Boolean? General conditions for multistable genetic circuits. J Theor Biol, 261(1):126–35, 2009.
[51] S. DeDeo, D. Krakauer, and J. Flack. Inductive game theory and the dynamics of animal conflict. PLoS Computational Biology, 6(5):e1000782, 2010.
[52] E. Schneidman, M. J. Berry II, R. Segev, and W. Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087):1007, 2006.
[53] William Bialek and Rama Ranganathan. Rediscovering the power of pairwise interactions. arXiv, q-bio.QM/0712.4397, 2007.
[54] Jeffrey D. Fitzgerald and Tatyana O. Sharpee. Maximally informative pairwise interactions in networks. Phys. Rev. E, 80:031914, 2009.
[55] E. T. Jaynes. Information theory and statistical mechanics. Phys. Rev., 106:620, 1957.
[56] Michael E. Fisher. Transformations of Ising models. Phys. Rev., 113:969, 1959.
[57] Jaan Oitmaa, Christopher Hamer, and Weihong Zheng. Series expansion methods for strongly interacting lattice models. Cambridge University Press, 2006.
[58] H. Eugene Stanley. Scaling, universality, and renormalization: Three pillars of modern critical phenomena. Rev. Mod. Phys., 71:358, 1999.
[59] S. Dorogovtsev, A. Goltsev, and J. Mendes. Critical phenomena in complex networks. Rev. Mod. Phys., 80:1275–1335, 2008.
[60] E. Müller-Hartmann and J. Zittartz. New type of phase transition. Phys. Rev. Lett., 33:893, 1974.
[61] Y. Gefen, A. Aharony, Y. Shapir, and B. B. Mandelbrot. Phase transitions on fractals. II. Sierpinski gaskets. J. Phys. A, 17:435, 1984.
[62] Pontus Svenson. Freezing in random graph ferromagnets. Phys. Rev. E, 64(3):036122, 2001.
[63] R. Mélin, J. C. Anglès d'Auriac, P. Chandra, and B. Douçot. Glassy behaviour in the ferromagnetic Ising model on a Cayley tree. Journal of Physics A, 29:5773, 1996.
[64] Tatijana Stošić, Borko D. Stošić, and Ivon P. Fittipaldi. Anomalous behavior of the zero field susceptibility of the Ising model on the Cayley tree.
Physica A, 320:443, 2003.
[65] Gasper Tkacik, Elad Schneidman, Michael J. Berry, and William Bialek. Ising models for networks of real neurons. arXiv, q-bio/0611072, 2006.
[66] Roy J. Glauber. Time-dependent statistics of the Ising model. Journal of Mathematical Physics, 4:294, 1963.
[67] Pavel L. Krapivsky, Sidney Redner, and Eli Ben-Naim. A Kinetic View of Statistical Physics. Cambridge University Press, 2010.
[68] Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller. Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21:1087, 1953.
[69] Robert H. Swendsen and Jian-Sheng Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Phys. Rev. Lett., 58:86–88, 1987.
[70] Alain Billoire and Enzo Marinari. Letter to the editor: Correlation timescales in the Sherrington-Kirkpatrick model. Journal of Physics A, 34:L727, 2001.
[71] C. D. Dominicis and I. Giardina. Random fields and spin glasses: a field theory approach. Cambridge University Press, 2006.
[72] E. Schneidman, S. Still, M. J. Berry, and W. Bialek. Network information and connected correlations. Phys. Rev. Lett., 91(23):238701, 2003.
[73] W. J. McGill. Multivariate information transmission. Psychometrika, 19:97–116, 1954.
[74] A. Bell. The co-information lattice. In S. Amari, A. Cichocki, S. Makino, and N. Murata, editors, Proceedings of the Fourth International Workshop on Independent Component Analysis and Blind Signal Separation, Nara, Japan, 2003.
[75] Ilya Nemenman, Fariel Shafee, and William Bialek. Entropy and inference, revisited. arXiv, physics/0108025, 2001.
[76] Ilya Nemenman, William Bialek, and Rob de Ruyter van Steveninck. Entropy and information in neural spike trains: Progress on the sampling problem. Phys. Rev. E, 69:056111, 2004.
[77] Ionas Erb and Nihat Ay. Multi-information in the thermodynamic limit. J Stat Phys, 115:949, 2004.
[78] Shai S. Shen-Orr, Ron Milo, Shmoolik Mangan, and Uri Alon. Network motifs in the transcriptional regulation network of Escherichia coli. Nature Genetics, 31(1):64–8, 2002.
[79] L. H. Hartwell, J. J. Hopfield, S. Leibler, and A. W. Murray. From molecular to modular cell biology. Nature, 402(6761):C47–C52, 1999.
[80] M. E. J. Newman. Modularity and community structure in networks. Proc. Natl. Acad. Sci. USA, 103(23):8577–82, 2006.
[81] A. Vázquez, R. Dobrin, D. Sergi, J.-P. Eckmann, Z. N. Oltvai, and A.-L. Barabási. The topological relationship between the large-scale attributes and local interaction patterns of complex networks.
Proc. Natl. Acad. Sci. USA, 101:17940, 2004.
[82] Wolfgang Maass, Thomas Natschläger, and Henry Markram. Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput, 14(11):2531–60, 2002.
[83] R. Albert, H. Jeong, and A. Barabási. Error and attack tolerance of complex networks. Nature, 406(6794):378–82, 2000.
[84] Changsong Zhou, Lucia Zemanová, Gorka Zamora, Claus C. Hilgetag, and Jürgen Kurths. Hierarchical organization unveiled by functional connectivity in complex brain networks. Phys. Rev. Lett., 97:238103, 2006.
[85] E. Bullmore and O. Sporns. Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 2009.
[86] Greg J. Stephens, Leslie C. Osborne, and William Bialek. Searching for simplicity in the analysis of neurons and behavior. Proc. Natl. Acad. Sci. USA, 108 Suppl 3:15565–71, 2011.
[87] Giulio Tononi. An information integration theory of consciousness. BMC Neurosci, 5:42, 2004.
[88] G. Tononi, O. Sporns, and G. M. Edelman. A measure for brain complexity: relating functional segregation and integration in the nervous system. Proc. Natl. Acad. Sci. USA, 91(11):5033–7, 1994.
[89] Olaf Sporns. Small-world connectivity, motif composition, and complexity of fractal neuronal connections. Biosystems, 85(1):55–64, 2006.
[90] Marcus Kaiser and Claus C. Hilgetag. Optimal hierarchical modular topologies for producing limited sustained activation of neural networks. Front Neuroinform, 4:8, 2010.
[91] P. Robinson, J. Henderson, E. Matar, P. Riley, and R. Gray. Dynamical reconnection and stability constraints on cortical network architecture. Phys. Rev. Lett., 103(10):108104, 2009.
[92] Matthew O. Jackson, Tomás Rodríguez Barraquer, and Xu Tan. Social capital and social quilts: Network patterns of favor exchange. American Economic Review, 2011. Forthcoming. http://ssrn.com/paper=1657130.
[93] Michael Keren and David Levhari. The optimum span of control in a pure hierarchy. Management Science, 25(11):1162–1172, 1979.
[94] Andrea Patacconi. Coordination and delay in hierarchies. The RAND Journal of Economics, 40(1):190–208, 2009.
[95] S. DeDeo, D. Krakauer, and J. Flack. Evidence of strategic periodicities in collective conflict dynamics. Journal of the Royal Society Interface, 8(62):1260–1273, 2011.
[96] T. Mora, A. M. Walczak, W. Bialek, and C. G. Callan. Maximum entropy models for antibody diversity. Proc. Natl. Acad. Sci. USA, 107:5405, 2010.
[97] William Bialek, Andrea Cavagna, Irene Giardina, Thierry Mora, Edmondo Silvestri, Massimiliano Viale, and Aleksandra M. Walczak. Statistical mechanics for natural flocks of birds. arXiv, 1107.0604, 2011.
[98] B. C. Daniels, D. C. Krakauer, and J. C. Flack. Sparse coding of conflict time series identifies kin groups and policers as predictable conflict participants. 2012. Submitted.
[99] Thierry Mora and William Bialek. Are biological systems poised at criticality? J Stat Phys, 144:268, 2011.
[100] Michael Wortis. Linked cluster expansion. Phase Transitions and Critical Phenomena, 3:114–178, 1974.
[101] Hildegard Meyer-Ortmanns and Thomas Reisz. Critical phenomena with convergent series expansions in a finite volume. J. Stat. Phys., 87:755, 1997.
[102] Thomas Reisz. High temperature critical O(N) field models by LCE series. Phys. Lett. B, 360:77, 1995.
[103] Bernie G. Nickel and J. J. Rehr. High-temperature series for scalar-field lattice models: Generation and analysis. J. Stat. Phys., 61:1, 1990.
[104] James Glimm and Arthur Jaffe. Quantum physics: a functional integral point of view. Springer-Verlag, second edition, 1987.
[105] R. Brout. Statistical mechanical theory of ferromagnetism. High density behavior. Phys. Rev., 118:1009, 1960.
[106] Gerald Horwitz and Herbert B. Callen. Diagrammatic expansion for the Ising model with arbitrary spin and range of interaction. Phys. Rev., 124:1757, 1961.
[107] F. Englert. Linked cluster expansions in the statistical theory of ferromagnetism. Phys. Rev., 129(2):567–577, 1963.
[108] Massimo Campostrini. Linked-cluster expansion of the Ising model. J. Stat. Phys., 103(1/2):369, 2001.
[109] M. F. Sykes and D. L. Hunter. On the determination of weights for the high temperature star cluster expansion of the free energy of the Ising model in zero magnetic field. Journal of Physics A, 7:1589, 1974.
[110] B. Mühlschlegel and H. Zittartz. Gaussian average method in the statistical theory of the Ising model.