Self-organized criticality in neural networks from activity-based rewiring
Stefan Landmann, Lorenz Baumgarten, and Stefan Bornholdt∗
Institut für Theoretische Physik, Universität Bremen, Bremen, Germany
(Dated: September 25, 2020)

Neural systems process information in a dynamical regime between silence and chaotic dynamics. This has led to the criticality hypothesis, which suggests that neural systems reach such a state by self-organizing towards the critical point of a dynamical phase transition. Here, we study a minimal neural network model that exhibits self-organized criticality in the presence of stochastic noise, using a rewiring rule which only utilizes local information. For network evolution, incoming links are added to a node or deleted, depending on the node's average activity. Based on this rewiring rule only, the network evolves towards a critical state, showing typical power-law distributed avalanche statistics. The observed exponents are in accord with criticality as predicted by dynamical scaling theory, as well as with the observed exponents of neural avalanches. The critical state of the model is reached autonomously without need for parameter tuning, is independent of initial conditions, is robust under stochastic noise, and is independent of details of the implementation, as different variants of the model indicate. We argue that this supports the hypothesis that real neural systems may utilize similar mechanisms to self-organize towards criticality, especially during early developmental stages.

∗ [email protected]
PACS numbers: 05.65.+b, 64.60.av, 84.35.+i, 87.18.Sn
I. INTRODUCTION
Neural systems, in order to efficiently process information, have to operate at an intermediate level of activity, avoiding both a chaotic regime and silence. It has long been speculated that neural systems may operate close to a dynamical phase transition that is naturally located between chaotic and ordered dynamics [1–4]. Indeed, recent experimental results support the criticality hypothesis, most prominently the so-called neuronal avalanches, specific neuronal patterns in the resting state of cortical tissue which are power-law distributed in their sizes and durations [5–9]. Studies suggesting that neural systems exhibit optimal computational properties at criticality [10–12] further support the criticality hypothesis. However, which mechanisms could drive such complex systems towards a critical state? Ideally, criticality is reached by a decentralized, self-organized mechanism, an idea known as self-organized criticality (SOC) [13–15]. Models for self-organized criticality in neural networks were discussed even before experimental indications of neural criticality [5], including a self-organized critical adaptive network model [3, 16], as well as an adaptation of the Olami-Feder-Christensen SOC model for earthquakes [17] in the context of neural networks by Eurich et al. [18]. The seminal paper of Beggs and Plenz [5] eventually inspired a multitude of self-organized critical neural network models, often with a particular focus on biological details in the self-organizing mechanisms. Some of these mechanisms are based on short-term synaptic plasticity [19], spike timing dependent plasticity [20], or long-term plasticity [21], while others rely on Hebbian-like rules [22–24] or anti-Hebbian rules [25].
For recent reviews on criticality in neural systems see [26–29].

In this paper we revisit the earliest self-organized critical adaptive network [3] in the wake of the observation of neural avalanches and ask two questions: Does this general model still self-organize to criticality when adapted to the particular properties of neural networks? How do its avalanche statistics compare to experimental data? Our aim is to formulate the simplest possible model, namely an autonomous dynamical system that generates avalanche statistics without external parameters and without any parameter tuning.

The original SOC network model [3] is formulated as a spin system, with binary nodes of values σ ∈ {−1, 1}, corresponding to inactive and active states, respectively. In order to study avalanches in the critical state, a translation to Boolean state nodes σ(t) ∈ {0, 1} is necessary, as has been formulated in [30]. That model demonstrated self-organized criticality based on the correlation between neighboring nodes, resulting in avalanche statistics in accordance with criticality [30]. Nevertheless, its algorithmic implementation falls short of a fully autonomous dynamical system: its adaptation rule still uses data from different simulation runs in order to determine the synaptic change to be performed. Therefore, we here formulate our model as a fully autonomous system with adaptation dynamics based on solely local information. It uses Boolean state nodes on a network without a predefined topology. The network topology changes by link adaptations (addition and removal of links) based on local information only, namely the temporally averaged activity of single nodes. Neither information about the global state of the system nor information about neighboring nodes, e.g. activity correlations [30] or retro-synaptic signals [21], is needed.

II. THE MODEL
Let us now define our model in detail. Consider a directed graph with N nodes with binary states σ(t) ∈ {0, 1} representing resting and firing nodes. Signals are transmitted between nodes i and j via activating or inhibiting links c_ij ∈ {−1, 1}. If there is no connection between i and j we set c_ij = 0. Besides the fast dynamical variables σ(t) of the network, the connections c_ij form a second set of dynamical variables of the system, which evolve on a considerably slower time scale than the node states σ(t). Let us define these two dynamical processes, activity dynamics and network evolution, separately.

A. Activity dynamics
The state σ_i(t + Δt) of node i depends on the input

f_i(t) = Σ_{j=1}^{N} c_ij σ_j(t)   (1)

at some earlier time t. For simplicity of simulation we here choose a time step of Δt = 1 and perform parallel updates, such that this time step corresponds to one sweep where each node is updated exactly once. Please note that random sequential update, as well as an autonomous update of each node according to a given internal time scale, is possible as well and does not change our results. Having received the input f_i(t), node i will be active at t + 1 with a probability

Prob[σ_i(t + 1) = 1] = 1 / (1 + exp[−β(f_i(t) − 0.5)]).   (2)

Here, β is an inverse temperature, solely serving the purpose of quantifying the amount of noise in the model. In the low-temperature limit β → ∞ the probability (2) becomes a step function which equals 0 for f_i < 0.5 and 1 for f_i > 0.5. This function broadens for decreasing β, also allowing for nodes to be active once in a while without receiving any input. Such idling activity is observed in cortical tissue and will play a role in the evolutionary dynamics as defined in the following.

This model attempts to formulate the simplest possible rules for the activity dynamics, i.e. with the fewest states of the nodes and the fewest parameters. Thus the dynamics considers neither a refractory time nor a non-zero activation threshold. Nevertheless, as shown in Sec. V, the mechanism driving the network towards criticality works in very different biological implementations of the model. This suggests that, despite being a coarse simplification of a real biological system, the model is able to represent basic mechanisms which can also be at work in the more complex real neuronal systems.

B. Network evolution
Following the natural time scale separation between fast neuron dynamics and slow change of their connectivity, we here implement changes of the network structure itself on a well-separated slow time scale. In every time step, each node i is chosen with a small probability µ/N ≪ 1, and its average activity A_i = ⟨σ_i⟩_W over the time window of the last W time steps is evaluated. Its linkage is then changed according to the following rules:

• A_i = 0: add a new incoming link c_ij = 1 from another randomly chosen node j.
• A_i = 1: add a new incoming link c_ij = −1 from another randomly chosen node j.
• A_i ∉ {0, 1}: remove one incoming link of i.

Thus, inactive (i.e. non-switching) nodes receive new links, while active (i.e. switching) nodes lose links. These rules prevent the system from reaching both an ordered phase where all nodes are permanently frozen, and a chaotic regime with abundant switching activity. In particular, the system is driven towards a dynamical phase transition between a globally ordered and a globally chaotic phase. Note that the sign of an added link is determined by the nature of the frozen state (on or off), unlike the original SOC network model, where the sign of new links had been chosen randomly [3].

Note that rewiring is based on locally available information only. To simulate the way a single cell could keep a running average, we also implemented the average activity of a node as A_i(t + 1) = σ_i(t + 1)·(1 − α) + A_i(t)·α, the basic principle by which a biochemical average would be taken. Here, the parameter α ∈ [0, 1] determines the temporal memory of the nodes (instead of the averaging time window parameter W). Since the newly defined A_i can only approach but never attain 0 or 1, we have to reformulate the criteria which determine the type of rewiring to be performed. The condition for a node to receive an activating link is transformed from A_i = 0 to A_i < ε with ε ≪ 1, and the other criteria are changed correspondingly. With these changes, we find that the model works accordingly.

For practical purposes, we perform the rewiring of only one randomly chosen node i after every T = 1/µ sweeps, instead of selecting every node with a certain probability µ/N at each time step. Both implementations yield the same results.

The proposed rules for the network evolution are inspired by synaptic wiring and rewiring as observed in early developmental stages of neural populations or during the rewiring of dissociated cortical cultures [31]. In these systems, homeostatic plasticity mechanisms are at work, which lead to an increasing activity of overly inactive neurons and vice versa. In [32] it was found that the application of inhibitory neurotransmitters to pyramidal neurons in isolated cell cultures, and thus a decrease of activity, leads to an increased outgrowth of neurites. In contrast, if excitatory neurotransmitters are applied, a degeneration of dendritic structures is induced [32–34]. These observations were confirmed in experiments where electrical stimulation of neurons was shown to inhibit dendritic outgrowth [35] and blocking of activity resulted in an increased growth of dendrites [36, 37]. Thus, if a neuron is overly inactive or active, it "grows and retracts its dendrites to find an optimal level of input ..." [38], which is mimicked by the proposed rewiring rules. Similar homeostatic adaptation rules have been successfully used to model cortical rewiring after deafferentation [39].

The rewiring rules of the model use information on the level of single nodes only and are robust, despite their simplicity. To show that the mechanism also works in different biological implementations, we examined modified versions of this model; see Sec. V for details. For example, criticality is also reached if the strength of the links is allowed to assume continuous values. Furthermore, introducing inhibiting neurons instead of inhibiting links does not change the dynamics of the model.

III. EVOLUTION OF THE NETWORK STRUCTURE
The evolution of the network starts with a specified initial configuration of links c(t = 0) and the state of all nodes set to σ(t = 0) = 0. Doing so, all activity originates from small perturbations caused by stochastic noise. Applying the rewiring rules, the system then evolves towards a dynamical steady state with characteristic average numbers of activating and inhibiting links.

As a convenient observable of the dynamical state of the network, and an approximate indicator of a possible critical state of the network, we measure the branching parameter ⟨λ⟩ by calculating, for every node i, how many neighbor nodes λ on average change their state at time t + 1 if the state of i is changed at time t. Averaging λ over the network indicates the dynamical regime of the network, where ⟨λ⟩ = 1 is often used as an indicator of criticality. Note that, by construction, ⟨λ⟩ depends on the connectivity matrix c_ij(t) and on the state vector σ(t) and, therefore, has to be considered with some caution. For example, its critical value may differ from one when the evolved networks develop community structure or degree correlations between in- and out-links or between nodes [10]. Therefore, we will here use the branching parameter for a qualitative assessment of the network evolution only, and analyze criticality with tools from dynamical scaling theory below.

Let us now turn to the evolutionary dynamics of the model, starting from a random network c(t = 0) with only the average connectivity specified at t = 0. Figure 1a shows the time series of the average number of incoming activating and inhibiting links per node, ⟨k+⟩ and ⟨k−⟩, starting from a fully unconnected network.
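The branching-parameter measurement described above can be sketched as follows. This is a minimal illustration, assuming the deterministic β → ∞ limit of Eq. (2) (a node fires iff its input exceeds 0.5) and a small hand-built toy network rather than an evolved one:

```python
import numpy as np

def next_state(C, sigma):
    """Deterministic update (beta -> infinity limit of Eq. (2)): node i fires iff f_i > 0.5."""
    return (C @ sigma > 0.5).astype(int)

def branching_parameter(C, sigma):
    """Average number of nodes at t+1 whose state changes when a single node is flipped at t."""
    base = next_state(C, sigma)
    lambdas = []
    for i in range(len(sigma)):
        perturbed = sigma.copy()
        perturbed[i] = 1 - perturbed[i]                    # flip node i
        lambdas.append(np.sum(next_state(C, perturbed) != base))
    return float(np.mean(lambdas))

# Toy example: a directed chain 0 -> 1 -> 2, with C[i, j] = 1 meaning an activating link from j to i.
C = np.zeros((3, 3))
C[1, 0] = 1.0
C[2, 1] = 1.0
sigma = np.array([0, 0, 0])
print(branching_parameter(C, sigma))  # flips of nodes 0 and 1 each change one downstream node -> 2/3
```

Here flipping node 2 changes nothing downstream, so the average over the three flips is 2/3; on an evolved network the same measurement, averaged over states, yields the ⟨λ⟩ traces shown in Fig. 1.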
The figure also shows the temporal evolution of the branching parameter ⟨λ⟩. In the beginning of the network evolution, there are only few links between the nodes, and noise-induced activity dies out very fast. Therefore, the activity is very low and only activating links are added. As a result, the branching parameter increases. When the value of ⟨k+⟩ approaches one, activity starts to propagate through the network and some nodes become permanently active. This causes the rewiring algorithm to insert inhibiting links. After some transient time, the average connectivities become stationary and fluctuate around a mean value. The branching parameter also becomes stationary and fluctuates around a value near one, indicating possible critical behavior. The ratio of inhibiting to activating links attains an approximately constant value ⟨k−⟩/⟨k+⟩.

Figure 1b shows the network evolution starting from a densely connected random network with ⟨k+⟩ = ⟨k−⟩ = 2. In contrast to the starting configuration in Fig. 1a, the network is densely connected and the nodes change their states often. Since the nodes rarely stay in the same state during the averaging time W, links are preferentially deleted in the beginning. After a transient period the system reaches a stationary steady state similar to the one already observed in Fig. 1a, indicating independence from initial conditions.

This scenario is reminiscent of synaptic pruning during adolescence, where in some regions of the brain approximately 50% of synaptic connections are lost [41]. It is hypothesized that this process contributes to the observed increase in efficiency of the brain during adolescence [42]. In the proposed model, starting with the densely connected network shown in Fig. 1b, the branching parameter is considerably larger than one. In this state information transmission and processing are difficult, since already small perturbations percolate through the entire network.
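The complete dynamics, the stochastic update of Eq. (2) together with the three rewiring rules of Sec. II B, can be sketched as a short simulation loop. This is a minimal illustration only: the parameter values, the schedule of one rewiring per W sweeps, and the possibility of self-links or overwritten links are simplifications, not the settings used for the figures:

```python
import numpy as np

rng = np.random.default_rng(0)

N, beta, W = 100, 10.0, 50      # illustrative values (the figures use e.g. N = 1000, W = 1000)
C = np.zeros((N, N))            # C[i, j]: link from node j to node i, in {-1, 0, +1}
sigma = np.zeros(N, dtype=int)  # all nodes start resting
history = np.zeros((W, N), dtype=int)

def sweep(C, sigma):
    """One parallel update: node i fires with probability 1/(1 + exp(-beta*(f_i - 0.5))), Eq. (2)."""
    f = C @ sigma
    p = 1.0 / (1.0 + np.exp(-beta * (f - 0.5)))
    return (rng.random(N) < p).astype(int)

def rewire(C, activity_window):
    """Rewire one randomly chosen node based on its time-averaged activity A_i."""
    i = rng.integers(N)
    A = activity_window[:, i].mean()
    j = rng.integers(N)              # for brevity, j = i and overwriting an existing link are allowed
    if A == 0.0:                     # frozen inactive: add an activating incoming link
        C[i, j] = 1.0
    elif A == 1.0:                   # frozen active: add an inhibiting incoming link
        C[i, j] = -1.0
    else:                            # switching: remove one randomly chosen incoming link
        links = np.flatnonzero(C[i])
        if links.size > 0:
            C[i, rng.choice(links)] = 0.0

for t in range(20 * W):              # short demonstration run
    sigma = sweep(C, sigma)
    history[t % W] = sigma
    if (t + 1) % W == 0:             # slow time scale: one rewiring per W sweeps in this sketch
        rewire(C, history)

print(np.abs(C).sum(axis=1).mean())  # average in-degree after the (short) run
```

Even over such a short run, the loop reproduces the qualitative behavior described above: starting from the unconnected network, activating links accumulate until activity begins to propagate.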
The decrease in the number of links leads to a network with a branching parameter close to one, much better suited for information processing tasks.

In order to explore the parameter dependence of the model, let us now ask how the steady-state averages of the connectivities and of the branching parameter depend on the system parameters (β, W, N). Figure 2a shows the average connectivity of activating incoming links over a broad range of parameter space. A prominent feature is the sub-critical region (upper left corner) where the algorithm fails to create connected graphs and the average connectivity of incoming links is far below one. This is due to nodes being predominantly activated by noise, instead of signal transmission. If a node i has no incoming links, its probability to be turned on at least once by noise during the W time steps is given by

Prob(A_i > 0) = 1 − (1 − 1/(1 + e^{β/2}))^W.   (3)

Therefore, demanding that on average not more than half of the nodes should be turned on by noise during W steps gives an upper bound for the time window W,

W_max = −log 2 / log(1 − 1/(1 + e^{β/2})).   (4)

This boundary is shown as a white dashed line in Fig. 2a, obviously being a good approximation for the boundary of the sub-critical region. Most importantly, we see that if β is sufficiently large, i.e. if the noise is sufficiently small, there always is a region in which connected networks emerge. Since W_max is independent of the system size N, this also holds for large systems. Figure 2b shows the average branching parameter for the same range of (β, W) as Fig. 2a. Note that ⟨λ⟩ is close to a value slightly larger than one over a wide range of noise levels and averaging times. In order to explore whether this indicates criticality (with a critical branching parameter value larger than one for the evolved networks), let us now explore other criteria of criticality.

FIG. 1. (Color online) (a) Time series of the average in-connectivities and branching parameter for N = 1000, β = 10, T = 1000, starting from a completely unconnected network. After a transient period, the average connectivities and the branching parameter become stationary. The branching parameter fluctuates around ⟨λ⟩ ≈ 1 (± 0.11), indicating possible criticality. (b) Evolution starts with an average connectivity of ⟨k⟩ = 2 for activating and inhibiting links. Even though starting from a very different initial configuration, the system evolves towards a similar steady state as found in (a).

IV. CRITICALITY
An important feature of critical systems is scale-independent behavior, meaning that close to a phase transition similar patterns can be observed on all scales. Near criticality, correlations between distant parts of the system do not vanish, and microscopic perturbations can exert influence on all scales. This also implies that power laws occur in many observables, e.g. in the size distribution of fluctuations.
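As an illustration of the latter point, a power-law exponent such as τ in Prob(S) ∼ S^{−τ} can be estimated from sampled avalanche sizes by maximum likelihood. The sketch below uses the standard continuous-distribution estimator with lower cutoff s_min on synthetic data; it is a generic illustration, not the fitting procedure used for the figures:

```python
import numpy as np

def powerlaw_mle(samples, s_min=1.0):
    """Continuous maximum-likelihood estimate of tau in P(s) ~ s^-tau for s >= s_min:
    tau_hat = 1 + n / sum(log(s_i / s_min))."""
    s = np.asarray(samples, dtype=float)
    s = s[s >= s_min]
    return 1.0 + len(s) / np.sum(np.log(s / s_min))

# Self-check on synthetic power-law data with a known exponent (inverse-transform sampling):
rng = np.random.default_rng(1)
tau_true = 1.6
u = rng.random(200_000)
samples = (1.0 - u) ** (-1.0 / (tau_true - 1.0))   # distributed as P(s) ~ s^-1.6 for s >= 1
print(round(powerlaw_mle(samples), 2))             # ≈ 1.6
```

The estimator's standard error is (τ − 1)/√n, so with 2·10^5 samples the true exponent is recovered to within roughly 10^−3; binning-based least-squares fits of log-log histograms are considerably more biased.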
A. Avalanches of perturbation spreading
Let us now investigate the statistics of avalanches of perturbations spreading on the networks. Note that the network evolution drives the system towards a state where activity never dies out. Therefore, we cannot consider avalanches of activity spreading with one perturbation at a time, as usually done in numerical experiments. The problem of persistent activity could be circumvented by introducing an activity threshold which defines the start and the end of avalanches, as done in [43]. This procedure, nevertheless, is not reliable, since the introduction of an activity threshold can generate power-law-like scaling from uncorrelated stochastic processes, as was shown in [44]. Instead, showing that the size and duration of the fluctuations are power-law distributed is a more reliable procedure commonly used in statistical physics [45]. This method is related to the determination of the Boolean Lyapunov exponent, which was used e.g. in [46] in order to examine the critical behavior of neural networks.

First, we let the system evolve until the branching parameter and the average connectivities reach steady average values. Then, a copy σ_c of the network is made. One node of this copy is chosen at random and its state is flipped: if it was active, it is turned inactive and vice versa. By comparing the temporal evolution of the unperturbed system σ and the perturbed system σ_c one can examine the spreading of this perturbation. For quantifying the 'difference' between the two copies it is convenient to use the Hamming distance of the state vectors, d_H(σ, σ_c), which is defined as the number of differing entries in σ and σ_c, i.e. the number of nodes which deviate from each other in their states. During the examination of one perturbation the rewiring algorithm is not in action.

FIG. 2. (Color online) (a) The average connectivity of activating incoming links ⟨k+⟩ for different values of the averaging window length W and the inverse temperature β. The white dotted line is the upper bound of W given by Eq. (4). (b) The average branching parameter ⟨λ⟩ for different values of the averaging window length W and the inverse temperature β. The average branching parameter is close to a typical value near one over a broad range of (β, W). The data were obtained by averaging over 30000 evolution steps; system size is N = 1000.

Performing simulations, we found that in most cases d_H(σ, σ_c) → 0; for β = 10 and W = 1000 this was observed in more than 90% of all perturbations. It is straightforward to define the avalanche duration T as the time between the start of the perturbation and the return of σ_c to the same attractor as σ, and the avalanche size S as the cumulative sum of the Hamming distances between σ and σ_c during the avalanche:

S = Σ_{t=0}^{T} d_H(σ(t), σ_c(t)).   (5)

From universal scaling theory [47] it is expected that these observables exhibit power-law scaling at criticality,

Prob(S) ∼ S^{−τ},   (6)
Prob(T) ∼ T^{−α}.   (7)

Furthermore, it should also hold that the relation between the average avalanche size and the avalanche duration shows power-law scaling,

⟨S⟩(T) ∼ T^{γ},   (8)

with the exponents fulfilling the relation

(α − 1)/(τ − 1) = γ.   (9)

To further verify criticality it is possible to explicitly show the scale-freeness of the avalanche dynamics. This can be done by determining the average avalanche profiles (avalanche size over time) for different avalanche durations. Scaled properly, these shapes should collapse onto one universal curve if the system is critical.

B. Results
Figure 3 shows the distributions of avalanche sizes and durations, as well as the collapse of avalanche profiles for avalanches of different durations. Panel (a) shows that the avalanche duration scales with an exponent of α = 2.0 ± 0.03 up to the square root of the system size. Panel (b) reveals a power-law scaling of the avalanche size with an exponent of τ = 1.6 ± 0.01. Both exponents α and τ are in line with experimental results [5, 8]. Panel (c) shows that the relation between average avalanche size and avalanche duration also exhibits power-law scaling up to the square root of the system size, with an exponent of γ = 1.7 ± 0.03. These exponents fulfill the relation

(α − 1)/(τ − 1) = (1.0 ± 0.03)/(0.6 ± 0.01) ≈ 1.7 ≈ γ,   (10)

strongly suggesting that the system is critical. Panel (d) shows the collapse of the activity curves onto one universal shape, as it was also found in experiments [8], reflecting the fractal structure of the avalanche dynamics.

FIG. 3. (Color online) Avalanche statistics and collapse of avalanche profiles. (a) Avalanche duration distribution. (b) Avalanche size distribution. (c) Average avalanche size over avalanche duration. The red dots show logarithmically binned data. (d) Collapse of avalanche shapes. The curves show the activity during avalanches of length 6 (red), 10 (black), 16 (blue), 30 (green), 36 (yellow). Parameters: N = 2000, W = 1000, β = 10.

A further verification of criticality can be found in Figure 4, which shows the avalanche size distributions for different system sizes N. With increasing system size the power-law regions of the distributions grow, showing that the cut-off is only a finite-size effect.

V. OTHER VERSIONS OF THE MODEL
The main goal of this work is to present a minimal adaptive network model which exhibits self-organized critical behavior. At the same time the model is supposed to be plausible, in the sense that only local information is used to approach the critical state. While we simulate this model on a von Neumann computer, a fully autonomous implementation is possible. To further demonstrate that our model represents a general mechanism and does not depend on particular features of the implementation, we also tested variants of the model.
A. Inhibiting nodes
We tested a variant which uses inhibiting nodes instead of inhibiting links. In this modified model, nodes are connected by unweighted links. Before starting the evolution of the network, a fraction of all nodes is chosen to be inhibitory. Here we typically chose 20–30%, as is often used as a rough approximation for real neural systems [40]. Simulations revealed that within the frame of the model the exact number is not of importance. If an inhibitory node is active, it contributes a signal of −1 to the input of each node it connects to. Accordingly, the rewiring rule

• A_i = 1: add a new incoming link c_ij = −1 from another randomly chosen node j

is changed to

• A_i = 1: add a new incoming link from another randomly chosen inhibitory node j.

FIG. 4. Scaling of the avalanche size distribution with increasing system size N (distributions shown for N = 50, 250, 1000, together with a power law ∝ S^{−1.6} for comparison). Each distribution is obtained from 90000 avalanches. During one avalanche each node can only contribute once to the avalanche size. Parameters: β = 10, W = 1000.

Defining the rewiring rules in this way, the statistical properties of the inhibiting nodes do not differ from those of the activating nodes. Therefore, the dynamics of this modified version should be the same as the dynamics of the original model, as indeed has been confirmed by simulations.
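The input computation of this variant can be sketched as follows. This is a hypothetical minimal encoding, assuming an unweighted 0/1 adjacency matrix and a node-sign vector in place of signed links; the specific matrices are illustrative only:

```python
import numpy as np

# Unweighted adjacency: adj[i, j] = 1 if there is a link from node j to node i.
adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [1, 0, 0]])
node_sign = np.array([1, 1, -1])   # node 2 is inhibitory: its signal counts as -1 when it fires
sigma = np.array([0, 1, 1])        # current states; nodes 1 and 2 are firing

# Input of node i: sum over presynaptic nodes j of sign_j * sigma_j (inhibitory nodes subtract).
f = adj @ (node_sign * sigma)
print(f)  # node 0 receives +1 - 1 = 0, node 1 receives -1, node 2 receives 0
```

Since the sign now lives on the sending node rather than on the link, the same update rule of Eq. (2) can be applied to f unchanged, which is why the variant's dynamics match the original model.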
B. Continuous link weights
Choosing discrete link weights c_ij ∈ {−1, 0, 1} allows for a minimalistic description of the model and for formulating simple rules for the network evolution. However, in order to mimic the varying synaptic strengths of a real neural system, a version with continuous link weights has also been examined. We find that the following continuous rewiring rules lead to critical behavior as well. In the same way as in the original model, after every W time steps one node i is chosen at random. Depending on its average activity A_i, its linkage is changed as described in the following:

• A_i = 0: randomly choose another node j. If c_ij = 0, add a new incoming link with a weight drawn from c_ij ∈ [0, ∆]. If c_ij ≠ 0, multiply the link weight by a factor (1 + δ sign(c_ij)).
• A_i = 1: randomly choose another node j. If c_ij = 0, add a new incoming link with a weight drawn from c_ij ∈ [−∆, 0]. If c_ij ≠ 0, multiply the link weight by a factor (1 − δ sign(c_ij)).
• A_i ∉ {0, 1}: randomly choose one incoming link of i. If |c_ij| falls below a small cutoff, set c_ij = 0; otherwise decrease the link weight by a factor (1 − δ).

Hereby, the additional parameters δ and ∆ should be chosen such that δ ≪ 1 and ∆ > 0.

VI. CONCLUSION AND OUTLOOK
In this paper we studied a minimal neural network model that self-organizes towards a critical state, reproducing detailed features of criticality observed in real biological neural systems. The model involves only three parameters: the inverse temperature β, determining the amount of noise in the model; the averaging time W, defining the time scale on which the network evolution is performed; and the system size N. None of them is in need of fine tuning, and they can be varied over a considerable range. We reformulated the earliest network SOC model [3] with Boolean variables and modified the evolution rule accordingly, resulting in a mechanism which drives the autonomous network model towards criticality whenever spontaneous activity is present. Note that this mechanism is based on the temporally averaged activity of single nodes only. Thus, it is shown that neither information about the global state of the network, nor information about neighboring nodes, is necessary for obtaining neural networks showing self-organized critical behavior.

In contrast to classical systems exhibiting self-organized criticality, such as the sandpile model [13], the model shows critical features over a broad range of noise. Indeed, it even utilizes noise in order to sustain activity permanently. The source of robustness of this class of SOC models is the fact that the criticality of the system is stored in separate variables, namely the links between the nodes, rather than in the dynamical variables, the node states, themselves.
Classical SOC models are more vulnerable to noise, as can be seen, for example, in the forest fire model, where criticality emerges as a fractal distribution of tree states that is easily disturbed. In our self-organized critical adaptive network model, in contrast, noise can be varied over a broad range.

We have demonstrated that the rewiring mechanism also works in different versions of the model where, for example, inhibiting nodes instead of inhibiting links are implemented, or continuous link weights are used. This illustrates that the observed self-organized critical characteristics arise as stable phenomena independent of even major features of the system, depending only on the structure of the rewiring algorithm. This independence and robustness gives strong support to the hypothesis that real biological neural systems could also take advantage of this simple and robust way to self-tune close to a phase transition, in order to stay away from both frozen and chaotic dynamical regimes.

Further work on minimal neural network models showing critical behavior could focus on how criticality influences learning, as has already been touched upon, e.g., in [21, 24]. Further studies on self-organized critical network models could not only help us in our understanding of biological neural systems, but also motivate new ways to generate artificial neural networks through morphogenesis.

ACKNOWLEDGMENTS
We thank Gorm Gruner Jensen for discussions and comments on the manuscript.

[1] C. G. Langton, "Computation at the edge of chaos: phase transitions and emergent computation," Physica D: Nonlinear Phenomena, vol. 42, no. 1-3, pp. 12–37, 1990.
[2] A. V. Herz and J. J. Hopfield, "Earthquake cycles and neural reverberations: collective oscillations in systems with pulse-coupled threshold elements," Physical Review Letters, vol. 75, no. 6, p. 1222, 1995.
[3] S. Bornholdt and T. Rohlf, "Topological evolution of dynamical networks: Global criticality from local dynamics," Physical Review Letters, vol. 84, no. 26, p. 6114, 2000.
[4] P. Bak and D. R. Chialvo, "Adaptive learning by extremal dynamics and negative feedback," Physical Review E, vol. 63, no. 3, p. 031912, 2001.
[5] J. M. Beggs and D. Plenz, "Neuronal avalanches in neocortical circuits," The Journal of Neuroscience, vol. 23, no. 35, pp. 11167–11177, 2003.
[6] J. M. Beggs and D. Plenz, "Neuronal avalanches are diverse and precise activity patterns that are stable for many hours in cortical slice cultures," The Journal of Neuroscience, vol. 24, no. 22, pp. 5216–5229, 2004.
[7] T. Petermann, T. C. Thiagarajan, M. A. Lebedev, M. A. Nicolelis, D. R. Chialvo, and D. Plenz, "Spontaneous cortical activity in awake monkeys composed of neuronal avalanches," Proceedings of the National Academy of Sciences USA, vol. 106, no. 37, pp. 15921–15926, 2009.
[8] N. Friedman, S. Ito, B. A. Brinkman, M. Shimono, R. L. DeVille, K. A. Dahmen, J. M. Beggs, and T. C. Butler, "Universal critical dynamics in high resolution neuronal avalanche data," Physical Review Letters, vol. 108, no. 20, p. 208102, 2012.
[9] W. L. Shew, W. P. Clawson, J. Pobst, Y. Karimipanah, N. C. Wright, and R. Wessel, "Adaptation to sensory input tunes visual cortex to criticality," Nature Physics, vol. 11, no. 8, pp. 659–663, 2015.
[10] D. B. Larremore, W. L. Shew, and J. G. Restrepo, "Predicting criticality and dynamic range in complex networks: effects of topology," Physical Review Letters, vol. 106, no. 5, p. 058101, 2011.
[11] W. L. Shew, H. Yang, S. Yu, R. Roy, and D. Plenz, "Information capacity and transmission are maximized in balanced cortical networks with neuronal avalanches," Journal of Neuroscience, vol. 31, no. 1, pp. 55–63, 2011.
[12] W. L. Shew, H. Yang, T. Petermann, R. Roy, and D. Plenz, "Neuronal avalanches imply maximum dynamic range in cortical networks at criticality," The Journal of Neuroscience, vol. 29, no. 49, pp. 15595–15600, 2009.
[13] P. Bak, C. Tang, and K. Wiesenfeld, "Self-organized criticality," Physical Review A, vol. 38, no. 1, p. 364, 1988.
[14] B. Drossel and F. Schwabl, "Self-organized critical forest-fire model," Physical Review Letters, vol. 69, no. 11, p. 1629, 1992.
[15] P. Bak and K. Sneppen, "Punctuated equilibrium and criticality in a simple model of evolution," Physical Review Letters, vol. 71, no. 24, p. 4083, 1993.
[16] S. Bornholdt and T. Röhl, "Self-organized critical neural networks," Physical Review E, vol. 67, no. 6, p. 066118, 2003.
[17] Z. Olami, H. J. S. Feder, and K. Christensen, "Self-organized criticality in a continuous, nonconservative cellular automaton modeling earthquakes," Physical Review Letters, vol. 68, pp. 1244–1247, 1992.
[18] C. W. Eurich, J. M. Herrmann, and U. A. Ernst, "Finite-size effects of avalanche dynamics," Physical Review E, vol. 66, no. 6, p. 066137, 2002.
[19] A. Levina, J. M. Herrmann, and T. Geisel, "Dynamical synapses causing self-organized criticality in neural networks," Nature Physics, vol. 3, no. 12, pp. 857–860, 2007.
[20] C. Meisel and T. Gross, "Adaptive self-organization in a realistic neural network model," Physical Review E, vol. 80, no. 6, p. 061917, 2009.
[21] V. Hernandez-Urbina and J. M. Herrmann, "Self-organised criticality via retro-synaptic signals," Frontiers in Physics, vol. 4, p. 54, 2016.
[22] L. de Arcangelis, C. Perrone-Capano, and H. J. Herrmann, "Self-organized criticality model for brain plasticity," Physical Review Letters, vol. 96, no. 2, p. 028107, 2006.
[23] G. L. Pellegrini, L. de Arcangelis, H. J. Herrmann, and C. Perrone-Capano, "Activity-dependent neural network model on scale-free networks," Physical Review E, vol. 76, no. 1, p. 016107, 2007.
[24] L. de Arcangelis and H. J. Herrmann, "Learning as a phenomenon occurring in a critical state," Proceedings of the National Academy of Sciences USA, vol. 107, no. 9, pp. 3977–3981, 2010.
[25] M. O. Magnasco, O. Piro, and G. A. Cecchi, "Self-tuned critical anti-hebbian networks,"
Physical Review Letters ,vol. 102, no. 25, p. 258102, 2009.[26] W. L. Shew and D. Plenz, “The functional benefits ofcriticality in the cortex,”
The Neuroscientist , vol. 19,no. 1, pp. 88–100, 2013.[27] D. Markovi´c and C. Gros, “Power laws and self-organized criticality in theory and nature,”
Physics Re-ports , vol. 536, no. 2, pp. 41–74, 2014.[28] J. Hesse and T. Gross, “Self-organized criticality as afundamental property of neural systems,”
Frontiers inSystems Neuroscience , 2015.[29] V. Hernandez-Urbina and J. Michael Herrmann, “Neu-ronal avalanches in complex networks,”
Cogent Physics ,vol. 3, no. 1, p. 1150408, 2016.[30] M. Rybarsch and S. Bornholdt, “Avalanches in self-organized critical neural networks: A Minimal model forthe neural SOC universality class,”
PloS one , vol. 9, no. 4,p. e93090, 2014.[31] Y. Yada, T. Mita, A. Sanada, R. Yano, R. Kanzaki, D. J.Bakkum, A. Hierlemann, and H. Takahashi, “Develop-ment of neural population activity toward self-organizedcriticality,”
Neuroscience , vol. 343, pp. 55–65, 2017. [32] M. P. Mattson and S. Kater, “Excitatory and in-hibitory neurotransmitters in the generation and degen-eration of hippocampal neuroarchitecture,”
Brain Re-search , vol. 478, no. 2, pp. 337–348, 1989.[33] M. Mattson, P. Dou, and S. Kater, “Outgrowth-regulating actions of glutamate in isolated hippocam-pal pyramidal neurons,”
Journal of Neuroscience , vol. 8,no. 6, pp. 2087–2100, 1988.[34] P. Haydon, D. McCobb, and S. Kater, “The regulationof neurite outgrowth, growth cone motility, and electricalsynaptogenesis by serotonin,”
Developmental Neurobiol-ogy , vol. 18, no. 2, pp. 197–215, 1987.[35] C. S. Cohan and S. B. Kater, “Suppression of neuriteelongation and growth cone mentality by electrical activ-ity,”
Science , vol. 232, pp. 1638–1641, 1986.[36] R. D. Fields, E. A. Neale, and P. G. Nelson, “Ef-fects of patterned electrical activity on neurite outgrowthfrom mouse sensory neurons,”
Journal of Neuroscience ,vol. 10, no. 9, pp. 2950–2964, 1990.[37] F. Van Huizen and H. Romijn, “Tetrodotoxin enhancesinitial neurite outgrowth from fetal rat cerebral cortexcells in vitro,”
Brain Research , vol. 408, no. 1, pp. 271–274, 1987.[38] M. Fauth and C. Tetzlaff, “Opposing effects of neu-ronal activity on structural plasticity,”
Frontiers in Neu-roanatomy , vol. 10, 2016.[39] M. Butz, A. Van Ooyen, and F. W¨org¨otter, “A modelfor cortical rewiring following deafferentation and focalstroke,”
Frontiers in Computational Neuroscience , vol. 3,2009. [40] H. Markram, M. Toledo-Rodriguez, Y. Wang, A. Gupta,G. Silberberg, and C. Wu, “Interneurons of the neocor-tical inhibitory system,”
Nature Reviews. Neuroscience ,vol. 5, no. 10, p. 793, 2004.[41] P. Rakic, J.-P. Bourgeois, and P. S. Goldman-Rakic,“Synaptic development of the cerebral cortex: implica-tions for learning, memory, and mental illness,”
Progressin Brain Research , vol. 102, pp. 227–243, 1994.[42] L. P. Spear, “Adolescent neurodevelopment,”
Journal ofAdolescent Health , vol. 52, no. 2, pp. S7–S13, 2013.[43] S.-S. Poil, R. Hardstone, H. D. Mansvelder, andK. Linkenkaer-Hansen, “Critical-state dynamics ofavalanches and oscillations jointly emerge from balancedexcitation/inhibition in neuronal networks,”
Journal ofNeuroscience , vol. 32, no. 29, pp. 9817–9823, 2012.[44] J. Touboul and A. Destexhe, “Can power-law scaling andneuronal avalanches arise from stochastic dynamics?,”
PloS one , vol. 5, no. 2, p. e8982, 2010.[45] H. E. Stanley,
Phase transitions and critical phenomena .Clarendon Press, Oxford, 1971.[46] C. Haldeman and J. M. Beggs, “Critical branching cap-tures activity in living neural networks and maximizesthe number of metastable states,”
Physical Review Let-ters , vol. 94, no. 5, p. 058101, 2005.[47] J. P. Sethna, K. A. Dahmen, and C. R. Myers, “Cracklingnoise,”
Nature , vol. 410, no. 6825, pp. 242–250, 2001.[48] F. Lombardi, H. Herrmann, C. Perrone-Capano,D. Plenz, and L. De Arcangelis, “Balance between excita-tion and inhibition controls the temporal organization ofneuronal avalanches,”