Critical behaviour at the onset of synchronization in a neuronal model
Amin Safaeesirat and Saman Moghimi-Araghi
Physics department, Sharif University of Technology, P.O. Box 11155-9161, Tehran, Iran
The presence of both critical behaviour and oscillating patterns in brain dynamics is a very interesting issue. In this paper, we consider a model for a neuron population where each neuron is modeled by an over-damped rotator. We find that in the space of external parameters there exist some regions where the system shows synchronization. Interestingly, just at the transition point, the avalanche statistics show a power-law behaviour. Also, in the case of small systems, (partial) synchronization and power-law behavior can happen at the same time.
I. INTRODUCTION
Scale-free and critical behavior has long been observed in animal and human brain experiments, both in vitro and in vivo [1–6]. Neuronal avalanches, the bursts of activity and spiking of neurons that spread over large areas of the brain network, exhibit power-law forms in their size and duration distributions. Other data, such as human electroencephalography (EEG) recordings, show similar patterns in the background activity of the brain [7, 8]. The criticality of neuronal systems is supported by the existence of power-law behavior and data collapse. Being at criticality is thought to play a crucial role in some of the brain's fundamental properties, such as optimal response and the storage and transfer of information [9–13]. As there is no need to tune any parameter to arrive at criticality, it was suggested that the brain is an example of a Self-Organized Critical (SOC) system [14]. The idea of self-organized criticality was introduced by Bak, Tang, and Wiesenfeld [15], who used a simple sandpile model to show that some complex systems can organize themselves towards their critical point, so that there is no need to regulate external parameters. A range of models was introduced to show that self-organized criticality is plausible in neural systems [16–22]. However, being at criticality does not necessarily imply that the system arrives there through a self-organizing mechanism. Rather, it has been argued that neural systems may not be an example of SOC systems [23]. Also, for the brain network, which has a highly hierarchical and modular structure, the critical points are generalized to critical regions, similar to those of Griffiths phases [24, 25], and there may be no need to fine-tune a parameter. Actually, it seems that a long-term evolutionary mechanism would be enough to explain why the brain works at criticality. Therefore, some efforts have been made to find how power laws can emerge in models of the neuronal system even when some external parameters need to be tuned [26].
Yet it remains to understand what phase transition determines the critical state. One of the most promising candidates is the synchronization transition. It has been found that the brain is more or less maintained at the transition point to synchronization [12, 27, 28], and the criticality in avalanche statistics can happen at this transition point [29, 30]. Also, a general framework based on Landau-Ginzburg theory has been proposed to describe the dynamics of the cortex, and it was found that at the edge of synchronization scale-free avalanches appear [31]. In similar studies, it has been found that the bifurcation that leads to successive firings of a neuron might have an effect on the overall dynamics []. In this paper, we study a network of neurons modeled by over-damped rotators to mimic type I neurons, rather than the usual integrate-and-fire model that more or less mimics type II neurons' behaviour. Our setup is therefore similar to that of the Kuramoto model [32], and hence each oscillator is identified by a single phase variable. This helps us to study the synchronization of the system more easily. Of course, the dynamics, especially the way two neighbours affect each other, is very different from the Kuramoto model and therefore has completely different features. We explore the phase space of external parameters and find the transition lines of synchronization. The main parameters we have explored are the external stimulation power, the relative strength of inhibitory neurons to excitatory ones, and the axonal time delay. We analyze the system's avalanche statistics at different points of the parameter space and, interestingly, observe that right at the transition the avalanche size and period statistics obey power laws.
II. MODELS AND METHODS: THE SINGLE NEURON MODEL
There are lots of models for single neuron dynamics. However, for simulating relatively large networks, simple models should be chosen. A simple model only keeps the main characteristics of the system and throws away other details. For a neuron, this main characteristic may be stated as follows: if a relatively small stimulation is applied to a neuron, it does not react considerably. However, a neuron that receives a slowly varying current throws off a fairly regular set of spikes, and in this situation it may be regarded as an oscillator. In the language of dynamical systems, when a neuron is sufficiently stimulated, the stable fixed point is lost and its movement in phase space can be regarded as a limit cycle, at least at short times. Therefore, for the single neuron model, it is good to consider a simple model that has such a cycle when it is stimulated. The simplest model might be the well-known leaky integrate-and-fire (LIF) model. However, as we will discuss later, this model more or less mimics the behavior of Type II neurons; that is, there is a "discontinuity" in the frequency of successive firings of the modeled neuron, or its activity, at the onset of activity. In fact, the activity is continuous but with a logarithmic singularity, which in some sense is equivalent to a discontinuity. In contrast, Type I neurons are characterized mainly by the appearance of oscillations with arbitrarily low frequency as the current is injected. In our study, we prefer to use a simple model that behaves more closely to a type I neuron.

A suitable model for type I neurons is given by the equation

dφ/dt = I − sin φ,   (1)

where I is the external stimulation and φ is the dynamical variable. The model has been considered before as a simple model to simulate the dynamics of neurons and is closely related to the quadratic non-linear integrate-and-fire model [33–35].

Equation (1) can be interpreted as an over-damped rotator in the following way: consider a rod pendulum of mass m and length l driven by a constant torque in a viscous medium.

FIG. 1. Two fixed points of the system for I < 1. Here T is the external torque exerted on the system from outside (see Eq. 2). The system will stand at its stable fixed point if there is no external stimulation. If the stimulation is large enough that the system passes its unstable fixed point, it will return to the stable fixed point counterclockwise. We define this as a "spike" in our system.
The equation of motion of such a pendulum is given by

ml^2 d^2φ/dt^2 + b dφ/dt + mgl sin φ = T,   (2)

where φ is the angle of the rotating rod with the downward vertical, b is the viscous damping constant, g is the acceleration due to gravity, and T is the external torque. Equation (2) is a second-order system; however, in the limit of very large b, where the rotator is over-damped, it may be approximated by a first-order system. Neglecting the inertia term ml^2 d^2φ/dt^2 and non-dimensionalizing the resulting equation, one arrives at equation (1) [36]. Although equation (2) has very rich dynamics [36], the over-damped limit is sufficient for modeling single neuron dynamics. It shows a saddle-node bifurcation at I = 1: when I < 1 there are two fixed points, one stable and one unstable, while when I > 1 there are no fixed points and the rotator overturns continually. The rest state of the neuron corresponds to the stable fixed point, and when I > 1 the neuron fires repeatedly. It is convenient to shift the phase variable as θ = φ − π/2, so that the dynamics becomes

dθ/dt = I − cos θ.   (3)

Consider the case I < 1. The two equilibrium points are shown in figure 1. When the system is at rest, it stands at the stable fixed point. If the rotator is stimulated slightly, it returns back to the stable fixed point. However, if the stimulation is large enough to take the rotator beyond the unstable fixed point, or the threshold, it will rotate a complete circle and then arrive at the stable fixed point. We call this complete rotation of the rotator a spike. Of course, it is usually considered that the "neuron" spikes when θ = π, that is, it produces an action potential right at that moment. This model can be mapped to the quadratic integrate-and-fire neuron by the transformation u = tan(θ/2). When I > 1, the neuron spikes repeatedly. The inter-spike interval (ISI) is determined via

ISI = ∫_{t_f}^{t_f+ISI} dt = ∫_{θ_f}^{θ_f+2π} dθ/(I − cos θ) = 2π/√(I² − 1),   (4)

where t_f is the spike time and θ_f is the spike angle. Therefore, the gain function is

f(I) = 1/ISI = √(I² − 1)/(2π),   (5)

and hence it grows as √(I − 1) near the onset. If the gain function grows as (I − 1)^µ with µ > 0, the function is clearly continuous, and for the case of µ = 0 it is usually regarded as having a jump and is therefore discontinuous. In the case of the leaky integrate-and-fire model, with the dynamics given by v̇ = I − v and the threshold potential v_Th = 1, the gain function is proportional to −1/log(I − 1) ≃ (I − 1)⁰, which resembles a discontinuity, although the function is actually continuous. Therefore, to mimic the type I neurons, we prefer to choose the over-damped rotator as our single-neuron model.

III. MODELS AND METHODS: THE NETWORK OF NEURONS
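Before moving to the network, the single-neuron model of the previous section can be checked numerically: Euler-integrating Eq. (3) for I > 1 and measuring the inter-spike interval should reproduce the closed form of Eq. (4). The sketch below is only an illustration; the step size and spike count are our own choices, not values from the paper.

```python
import math

def measured_isi(I, dt=1e-3, n_spikes=10):
    """Euler-integrate dtheta/dt = I - cos(theta) (Eq. 3) for I > 1 and
    return the mean inter-spike interval; a spike is the crossing theta = pi."""
    theta, t, spikes = 0.0, 0.0, []
    while len(spikes) < n_spikes:
        theta += dt * (I - math.cos(theta))
        t += dt
        if theta >= math.pi:
            spikes.append(t)
            theta -= 2.0 * math.pi      # wrap to start the next cycle
    gaps = [b - a for a, b in zip(spikes, spikes[1:])]
    return sum(gaps) / len(gaps)

I = 1.2
predicted = 2.0 * math.pi / math.sqrt(I * I - 1.0)   # Eq. 4: 2*pi/sqrt(I^2 - 1)
```

For I = 1.2 the measured interval agrees with the prediction to well under a percent, and the divergence of the ISI as I → 1⁺ is the type-I signature discussed above.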
We have considered a network composed of N neurons modeled by over-damped rotators, where 80% of them are randomly chosen to be excitatory neurons and the rest are inhibitory ones. Each neuron, regardless of being excitatory or inhibitory, receives a constant number of internal connections, that is, from a fixed number of neurons within the network. We denote this number by C and write C = C_E + C_I, where C_E and C_I are the numbers of connections from excitatory and inhibitory neurons, respectively. We have taken the ratio of inhibitory inputs to excitatory ones to be the same as the population ratio, that is, C_I/C_E = 1/4. The neurons also receive C_ext connections from external neurons; C_ext is taken to be equal to the number of excitatory connections from the network (C_ext = C_E). The external neurons fire randomly so that the intervals between two successive firings have a Poisson distribution with average T_ext = 1/ν_ext. These external neurons effectively play the role of an external (noisy) current; therefore, we set I = 0 in the dynamics of the over-damped rotators (Eq. 3) and control the external current by tuning ν_ext. When the rotator i arrives at θ = π, it fires and hence changes the phase of its neighbour j by the amount J_ij. For presynaptic excitatory neurons J_ij = J, and for presynaptic inhibitory ones J_ij = −gJ. The same strength J is considered for external (excitatory) neurons. Also, we assume that if the neuron i fires at time t, the spike arrives at the neuron j at t + D; that is, an axonal time delay is introduced to the system. Now we can write the dynamics of the neuron i in the following form:

∂θ_i/∂t = −cos θ_i + Σ_j J_ij Σ_k δ(t − t_j^k − D),   (6)

where the first summation is over the presynaptic neurons connected to the neuron i, and the second summation is over the spikes of the neuron j. In fact, the delta function δ(t − t_j^k − D) represents the k-th spike from the neuron j arriving at t = t_j^k + D.

Usually, models describing networks of neurons have a handful of parameters and are difficult to analyze completely. In our model, ν_ext, g, D, J, and N are the five adjustable parameters. The role of each parameter is obvious: ν_ext controls the strength of the external drive, g controls the relative strength of inhibition to excitation within the network, D is the axonal time delay, J controls the strength of the synapses, and N is simply the total number of neurons. To make the parameter space smaller, we have fixed J = 0.015 and C = 100. We try to investigate the dynamical characteristics of the system by exploring considerable parts of the remaining 3-dimensional space.

FIG. 2. The plot illustrates the effective potential for the rotator. The rotator tends to stay at the local minima of the function, at θ = 2kπ − cos⁻¹ I, in which k is an integer. Due to the outside stimulation, over time the rotator reaches the local maximum, consequently spikes, and goes to the next local minimum. The plot is sketched for I = 0.

IV. SYNCHRONIZATION OF THE NEURONS
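The delayed-kick dynamics of Eq. (6) can be sketched as a direct Euler simulation. The code below is an illustrative reimplementation, not the authors' code: the network size, in-degree and run length are scaled down for speed, and the external input is realized as Poisson-distributed excitatory kicks of strength J.

```python
import numpy as np

def simulate(N=400, C_E=80, g=4.0, J=0.015, D=1.0, nu_ext=1.5,
             dt=0.01, steps=5000, seed=0):
    """Euler simulation of Eq. 6: over-damped rotators (I = 0) coupled by
    kicks of size J (excitatory) or -g*J (inhibitory) arriving after delay D."""
    rng = np.random.default_rng(seed)
    C_I = C_E // 4                     # C_I / C_E = 1/4, as in the text
    n_exc = int(0.8 * N)               # 80% excitatory neurons
    W = np.zeros((N, N))               # W[i, j]: kick on i when j spikes
    for i in range(N):
        W[i, rng.choice(n_exc, C_E, replace=False)] = J
        W[i, n_exc + rng.choice(N - n_exc, C_I, replace=False)] = -g * J
    delay = int(round(D / dt))
    kick_buf = np.zeros((delay + 1, N))        # ring buffer of pending kicks
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    lam = nu_ext * C_E * dt                    # mean external spikes per step
    activity = np.zeros(steps, dtype=int)
    for t in range(steps):
        k = t % (delay + 1)
        ext = rng.poisson(lam, N) * J          # external Poisson kicks
        theta += -np.cos(theta) * dt + kick_buf[k] + ext
        kick_buf[k] = 0.0
        spiking = theta >= np.pi               # crossing theta = pi: a spike
        activity[t] = spiking.sum()
        theta[spiking] -= 2.0 * np.pi          # wrap back for the next cycle
        kick_buf[(t + delay) % (delay + 1)] += W @ spiking  # arrive at t + D
    return activity
```

The returned activity time series A(t) is the network quantity analyzed throughout the following sections.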
We have simulated populations of the neurons to find out whether synchronization happens throughout the system or not. It has been observed that in such neuron populations synchronization might happen [30, 37, 40, 41]. For example, in [37] a network similar to ours is studied, and it has been observed that in the presence of delay one can find some regions where the system shows synchronization. In that paper, Brunel studied the network for different values of the external drive and of the relative strength of inhibition to excitation, and found regions in this space where synchronization happens. Also, in [38], using a Kuramoto-like model, it has been shown that in the presence of delay a transition from asynchrony to synchrony happens when the level of input to the network is changed. Note that although we have rotators as our single neuron model, our model differs from the Kuramoto model both in the single-agent dynamics and in the interactions between two adjacent neurons: it has a preferred angle θ (corresponding to the rest state of the neurons) and, when a neuron fires, it kicks the post-synaptic neuron. In the Kuramoto model, there is no preferred angle, and the interactions come from a term proportional to the sine of the angle difference of the two neurons. Quite recently, a model has been proposed in which all the above terms are present: the intrinsic angular velocity of the rotator and the sine terms of the Kuramoto model, together with the preferred rest point and the kicks of our model [39]. Within this model, both synchronous and asynchronous phases have been observed, and a rich phase diagram has been obtained. Also, at some transition points, scaling laws in the avalanche statistics are found. To distinguish the synchronous phase from the asynchronous one, we need a suitable order parameter that vanishes when the system is not synchronous and takes a non-zero value otherwise.
In the context of Kuramoto-type rotators [32], the order parameter is defined as r = |(1/N) Σ_{k=1}^{N} e^{iθ_k}|, where θ_k is the phase of the k-th rotator. If all the rotators are in phase, r will be unity, and if the rotators are all independent, it will be of the order of 1/√N, which becomes negligible for sufficiently large systems. Since we have phases as our dynamical variables, at first sight one thinks of the same order parameter. However, as we discuss below, we find this definition inappropriate for our model. Our single-neuron model is given by Eq. 3. To have better insight, let us interpret the same equation as a Langevin-type equation:

dθ_i/dt = I − cos θ_i + η_i = −∂V(θ_i)/∂θ_i + η_i,   (7)

where we have collected the effect of network spikes and external spikes in the noise term η_i and have defined the potential V as

V(θ) = −Iθ + sin θ.   (8)

This potential function is sketched schematically in figure 2 for I = 0. At any moment, a considerable fraction of the rotators can be found near the minima of this potential, where cos θ = I, or equivalently, the neuron is at rest. In such a situation, even if the firings of the neurons are irregular and asynchronous, the order parameter can take a non-vanishing value. In fact, if the ratio of the neurons at rest to the total number of neurons is µ, the order parameter will be µ + O(1/√N). Therefore, especially for small external stimulation, this order parameter is useless: it does not measure synchrony of the "activity". If we want to see whether the "activity" is synchronous, it is better to consider just the neurons that are firing. In the case of rotators, we may consider only the rotators with the phase parameter in the interval [π/2, 3π/2]. Within this interval, the real part of e^{iθ} (that is, cos θ) is always negative and its average will never be zero. On the other hand, within the same interval, the imaginary part of e^{iθ} (sin θ) ranges from −1 to 1. Consider the quantity r = (1/N_a) Σ_k sin θ_k, where N_a is the number of spiking rotators and the summation is just over the set of spiking rotators.
If the network activity is irregular, then r will be of the order of 1/√N_a; however, if all the rotators spike synchronously, that is, θ_k(t) ≃ f(t) for all spiking rotators, then r = sin(f(t)). This function fluctuates in the interval [−1, 1]. Therefore, we define the order parameter as

m = ⟨( (1/N_a) Σ_{k=1}^{N_a} sin θ_k )²⟩,   (9)

where ⟨·⟩ means time averaging over a period comparable with the spike period. In this way, if the network activity is irregular, m = O(1/N_a), and in the case of synchronous activity, m = O(1). We just have to check that N_a is not very small, and as we will see in our model, in all the regions we are interested in the activity is large enough.

In the simulations, we have considered several samples with sizes ranging from 1000 neurons to 16000. Also, we have considered several random configurations of links with the above-mentioned statistics. As for the initial condition, we have assigned a random phase between 0 and 2π to each rotator and have numerically integrated the differential equations. The time step was set to ∆t = 0.01. After the system arrives at the steady state, which typically happens in no more than 20000 temporal steps, we begin our actual measurements.

A central quantity of a neuron population is its activity. The activity time series A(t, δt) is defined as the number of spikes in the network between t and t + δt. The parameter δt should be taken much smaller than the average period of a single neuron's successive firings, but not so small that only single firings are found within each interval. We have taken δt of the order of ∆t, which seems appropriate for our systems; therefore we fix δt = ∆t and will simply denote A(t, δt) by A(t).

Within the same networks, we have calculated the order parameter m (Eq. 9) for different values of ν_ext, the external stimulation strength, g, the relative strength of inhibitory neurons to excitatory ones, and the delay parameter D. Figure 3 shows the order parameter in the ν_ext-g plane for four different values of D. We have considered ν_ext > 1, N = 4000 and J = 0.015, and D takes the values 0.5, 1, 1.5 and 2.5 in sub-figures (a) to (d), respectively. For all values of D, there are two distinct regions where the order parameter takes non-zero values. These regions are pushed towards the line ν_ext = 1 as D is increased; in the case D = 0.5, they appear at the largest values of ν_ext. We have also checked the case of D = 0 within a larger area (up to ν_ext = 60) and observed no synchronous phases. In other words, the presence of the delay in the system plays a crucial role in this collective behaviour. For a larger delay, a smaller external stimulation is needed to arrive at synchronization. We will denote the left region by L-Synch. and the right one by R-Synch. In the L-Synch. region the excitatory neurons dominate, and in the R-Synch. region the inhibitory neurons dominate.

To see how the order parameter actually distinguishes synchronous activity from asynchronous activity, we have plotted the activity pattern of the system for a few points in the phase graph of Fig. 3(c), that is, for D = 1.5. The points selected are a = (g, ν_ext) = (1, …), e = (9, ….5) and d = (4, ….5), which lie in the L-Synch. region, the R-Synch. region and the region where m ≃ 0, respectively. Actually, the order parameters of these points are m_a = 0.…, m_e = 0.66 and m_d = 0.001, so m is completely sensitive to synchronization. We have checked similar graphs for many different points and found the order parameter consistent with the activity patterns.

The next point to be explored is whether the synchronous regions change when the system size is altered. We have done simulations for sizes from N = 1000 up to 16000 along the cross-sections g = 8, g = 1 and ν_ext = 1.5. It is observed that the order parameter shows little dependence on the size of the system; the only difference is that the fluctuations are stronger in smaller systems. Also, as is clearly seen in Fig. 5(c), larger networks have sharper transitions, and in the limit N → ∞ the derivative of the order parameter will become discontinuous, which is a hallmark of a second-order phase transition. However, as we are dealing with systems with sizes up to N = 16000, we have to be careful about finite-size effects.

One last point is worth mentioning: the order parameter does depend on the number of synapses per neuron, but as we have kept this quantity constant in all the simulations shown here, this effect is not observed. This dependence can be expected; for example, if we reduce this number to zero there will be no synchronization, and on the other hand, if all the neurons are connected to each other, synchronization happens much more easily.

FIG. 3. The contour plots show the order parameter (Eq. 9) for systems of 4000 neurons and different values of g (inhibitory strength), ν_ext (average external stimulation frequency), and delay. The brighter areas in the plots are the synchronous ones; the color varies between blue and pink so that the higher the order parameter, the closer the color is to pink. Not only is the delay necessary for the existence of the synchronization, but different values of the delay also change the location of the synchronous regions: as the delay increases, they are pushed to lower ν_ext and dwindle. a) D = 0.5, b) D = 1, c) D = 1.5, d) D = 2.5. The resolution of the graphs is ∆g = 0.15 and ∆ν_ext = 0.06. The points shown in plot (c) are selected to be representatives of different states for this system; the activity patterns and the corresponding histograms of avalanche size and period for these points are shown in Fig. 4 and Fig. 6, respectively.
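Equation (9) translates directly into code. The sketch below applies it to synthetic phase histories (coherent versus independent phases, not actual simulation output) and shows that m is of order one for synchronous activity and of order 1/N_a otherwise.

```python
import numpy as np

def order_parameter(theta_t):
    """m of Eq. 9: time average of the squared mean of sin(theta),
    taken over the 'spiking' rotators with phase in [pi/2, 3*pi/2]."""
    vals = []
    for theta in theta_t:                       # phases of all rotators at one time
        phase = np.mod(theta, 2.0 * np.pi)
        spiking = (phase >= np.pi / 2) & (phase <= 3 * np.pi / 2)
        if spiking.any():
            vals.append(np.mean(np.sin(theta[spiking])) ** 2)
    return float(np.mean(vals)) if vals else 0.0

rng = np.random.default_rng(1)
T, N = 400, 2000
coherent = np.tile(np.linspace(0, 2 * np.pi, T)[:, None], (1, N))  # theta_k(t) = f(t)
independent = rng.uniform(0, 2 * np.pi, (T, N))                    # irregular activity

m_sync = order_parameter(coherent)      # O(1)
m_async = order_parameter(independent)  # O(1/N_a)
```

For fully coherent phases m tends to the average of sin²θ over the spiking window, about 1/2, while for independent phases it scales down with the number of spiking rotators, exactly as argued in the text.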
V. RESULTS: SYNCHRONIZATION AND POWER-LAW BEHAVIOUR
The phase diagram of the system (Fig. 3) suggests something like a continuous phase transition between the synchronous and asynchronous regions. To assess this transition and identify the possible critical behaviour, we investigate neural avalanches in the network. In the literature, the existence of scale-free neural avalanches is the most important indicator of criticality in neural systems [1]. Generally, the network displays successive activities of various sizes s and durations d, which are called avalanches. If the system is critical, the avalanche size and duration distributions show power-law behaviour, i.e., p(s) ∼ s^{−γ_s} and p(d) ∼ d^{−γ_d} [1].

Identification of successive avalanches is a challenging task. One may separate consecutive avalanches by requiring at least one quiescent temporal step ∆t between them. However, this definition depends on ∆t; additionally, for very large systems with many neurons, such a quiescent period may happen rarely. A better mechanism for identifying the avalanches is to consider the activity of the network and to regard each period of time within which the activity is higher than the time average of the network's activity as an avalanche. This definition has been used before in some similar systems [30, 42]. To avoid incorrect statistics, we have counted only the activity above the threshold towards the avalanche size [43]. Probability distributions of avalanche size and avalanche period, for networks of 4000 neurons and various values of g and ν_ext, are shown in Fig. 6. Asynchronous systems, whose order parameter is nearly zero, show sub-critical behaviour with a Poisson-like distribution, whereas synchronous systems that lie within the L-Synch. or R-Synch. regions display super-critical distributions. However, right at the transition curve, the probability distribution clearly shows a power law, though there is a bump at the end that is usually referred to as a sign of super-critical behavior. We will come back to this issue later.

Consider a cross-section like the one shown in Fig. 5(a). Increasing ν_ext from unity, the shape of the probability distribution function changes from a Poisson-like curve to a power law and then again to a Poisson-like curve, but with a bump at very large values of s. Let us call the point at which the system exhibits power-law behaviour "the scaling point". This scaling point happens to be near the transition point of the system from the asynchronous region to the synchronous one. Identification of the exact value of ν_ext for the scaling point in such a cross-section is hard, because the probability distribution function changes gradually as we increase ν_ext.

FIG. 4. The activity of the system for different points located at different parts of the phase diagram (Fig. 3(c)). For each point, the raster plot and the activity of the system are sketched, and the average activity of the network is shown by a red dashed line. The order parameters for a = (g, ν_ext) = (1, …), b = (2, …), c = (8, …), d = (4, ….5) and e = (9, ….5) are 0.…, 0.…, 0.…, 0.001 and 0.66, respectively, which are completely consistent with their corresponding activity patterns. Point c is right at the margin of the synchronous and asynchronous states, where the system shows power-law behaviour; it is investigated elaborately in the next part in terms of criticality.
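The avalanche definition just described — contiguous stretches where the activity exceeds its time average, with sizes counted above the threshold — can be sketched in a few lines:

```python
import numpy as np

def avalanches(activity):
    """Split an activity time series into avalanches: maximal runs where
    A(t) exceeds its time average; size = activity summed above the threshold."""
    a = np.asarray(activity, dtype=float)
    thr = a.mean()
    sizes, durations = [], []
    start = None
    for t, above in enumerate(a > thr):
        if above and start is None:
            start = t                                   # avalanche begins
        elif not above and start is not None:
            sizes.append(float((a[start:t] - thr).sum()))
            durations.append(t - start)
            start = None                                # avalanche ends
    if start is not None:                               # avalanche at series end
        sizes.append(float((a[start:] - thr).sum()))
        durations.append(len(a) - start)
    return sizes, durations

# toy series with mean 1.25: two avalanches, sizes [5.5, 0.75], durations [2, 1]
sizes, durations = avalanches([0, 0, 3, 5, 0, 0, 2, 0])
```

The lists of sizes and durations are what the histograms of Fig. 6 are built from.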
In fact, moving from the asynchronous region to the synchronous ones, a power-law behaviour is first found without the bump at the end. Very quickly, with a little change in the external parameters, the bump forms; the probability distribution function keeps this form over a small interval of the external parameters, and then the scaling behavior fades away. Although in the literature on neural systems the presence of a bump is usually considered a sign of super-criticality, this is not the case in the literature on critical systems and self-organized criticality. The presence of the bump can be a general feature of probability distribution functions, required to keep the total probability equal to unity. In addition to this reasoning, in our case, when the bump is present, the higher number of large events gives a finer tail with a smaller statistical error, which allows us to check criticality through data collapse. Therefore, we consider the bumpy probability distribution to investigate the critical properties of the scaling point. We have also calculated moment-ratio indicators built from ⟨s⟩ and higher moments of s [46] to identify the scaling point; however, such parameters also show a relatively broad peak at the transition, except for our largest system sizes. For the cross-section g = 8 and for systems with different sizes, the approximate values of ν_ext at the scaling point are given in Table I. For smaller systems, the scaling point is not so close to the transition point, but in the case of large systems the scaling point turns out to be right at the transition point.

In fact, it can be shown that this is a finite-size effect. When the usual statistical mechanics models are simulated, a size-dependent (pseudo)critical point is found. If we denote the (pseudo)critical point by ν_C(N) and the actual critical point by ν_∞, then we have

|ν_C(N) − ν_∞| ∼ N^{−λ},   (10)

where λ is the shifting exponent. Fig. 7 shows a log-log plot of ν_C(N) − ν_∞ as a function of the system size N. The parameters ν_∞ and λ are obtained by the best fit. A very good linear fit is observed, with ν_∞ = 1.… ± 0.005 and λ = 0.… ± 0.1. The value obtained for ν_∞ is, within our precision, the same as the value of ν_ext at which the transition to synchronization occurs; in fact, for the line g = 8 we have ν_ext = 1.… ± 0.005 for the transition point. We have checked the same phenomenon for a few other cross-sections, some of them horizontal and some in the R-Synch. region, and in all cases the scaling point coincides with the transition point. Also, the scaling exponents obtained in the following are the same within error bars.

Let us focus on the probability distribution function at the scaling point. Fig. 8 shows the log-log plots of the probability distribution functions of avalanche sizes and avalanche periods for different system sizes. Both plots show a linear part, which is the scaling region, and a cut-off that is clearly size-dependent, which is a hallmark of criticality. As said before, we have considered external parameters for which the probability distributions have a bump at the end of the plot. This consideration has no effect on the scaling exponents; however, it gives a smaller statistical error because of the finer data at the tail of the distribution. Note that the probability of avalanches at the bump is extremely small (for example, of order 10^{−…} for system size N = 16000 in the case of the avalanche size distribution) and does not play an important role in the avalanche size statistics. The next interesting point is that, in the case of the avalanche size distribution, the linear parts of the plots for systems with different sizes do not form a single line: for a larger system, the line stands slightly above the line of a smaller system. This phenomenon, however, does not happen in the case of the avalanche periods. We will discuss it later and will see that this dependence causes some interesting phenomena.

Despite this size dependency, the scaling exponent τ_s does not depend on the size. We have obtained the exponents using the technique explained in [44] and found τ_s = 1.86 ± 0.06. Also, for the avalanche periods, we have τ_d = 2.… ± 0.15. The values obtained from brain activity data are 1.5 for the size and 2 for the duration [1]. The period exponent τ_d is not much different from the experimental value; however, τ_s is completely different. As we show in the following, this exponent is an apparent exponent, and finite-size scaling reveals another (real) exponent for the system.

TABLE I. The approximate values ν_c(N) of the scaling point for the cross-section g = 8 and different system sizes N: 1.… ± 0.09, 1.… ± 0.04, 1.… ± 0.015, 1.… ± 0.007 and 1.… ± 0.…, together with the extrapolated value ν_∞ = 1.… ± 0.005.

FIG. 5. Three cross-sections of the order parameter for systems with D = 1.5. a) The g = 8 cross-section. b) The g = 1 cross-section. c) The ν_ext = 1.5 cross-section. The cross-sections, and hence the phase diagram, remain approximately unchanged as we change the size of the system.

FIG. 6. Avalanche distributions of both size (a) and period (b) for some points of the phase diagram shown in Fig. 3(c). For the sub-critical points (b, d), the distributions in both plots illustrate Poissonian behavior. For the super-critical points (a, e), in addition to the Poissonian behavior, there is a bump at the end of the distributions due to the synchronization. Right between these two regimes, point c shows a power-law distribution, which is the indicator of criticality. This suggests a second-order phase transition between the synchronous and asynchronous states at this point.

FIG. 7. The log-log plot of ν_ext at the critical point as a function of N, the number of neurons. The system shows good scaling with the shifting exponent λ = 0.… ± 0.1. The best fit gives ν_∞ = 1.… ± 0.005, which is the critical ν_ext for an infinitely large system.

FIG. 8. Probability distribution functions of size (a) and period (b) for systems of various sizes at their critical points. The exponents of the distributions are τ_s = 1.86 ± 0.06 and τ_d = 2.… ± 0.15, respectively, for size and period.

FIG. 9. Finite-size scaling for a) the avalanche size and b) the avalanche period probability distribution functions shown in Fig. 8. The scaling parameters are τ̃_s = 1.… ± 0.05 and φ_s = 0.… ± 0.04 for the avalanche size, and τ̃_d = 2.… ± 0.15 and φ_d = 0.… ± 0.05 for the avalanche period.
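The shift of the pseudocritical point, Eq. (10), is estimated by a straight-line fit in log-log coordinates. A minimal sketch with synthetic numbers (the ν values below are made up for illustration and are not Table I's data):

```python
import numpy as np

def shifting_exponent(Ns, nu_c, nu_inf):
    """Slope of log|nu_c(N) - nu_inf| versus log N gives -lambda (Eq. 10)."""
    x = np.log(np.asarray(Ns, dtype=float))
    y = np.log(np.abs(np.asarray(nu_c, dtype=float) - nu_inf))
    slope, _intercept = np.polyfit(x, y, 1)
    return -slope

# synthetic pseudocritical points obeying Eq. 10 exactly (hypothetical values)
Ns = np.array([1000, 2000, 4000, 8000, 16000])
nu_inf_true, lam_true = 1.3, 0.5
nu_c = nu_inf_true + Ns ** (-lam_true)
```

In practice ν_∞ is not known in advance; one would fit it jointly with λ, e.g. by scanning ν_∞ for the value that makes the log-log data most linear.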
To study the scaling behavior of the system more thoroughly, we have performed a finite-size scaling (FSS) analysis for both the avalanche sizes and their periods. If a system is critical and hence scale-invariant, the probability distribution function should be of the form

p(x) = x^{-τ̃} f(x / N^{φ}),   (11)

where x can be the avalanche size or its period and τ̃ and φ are the scaling exponents. Of course, the above equation is valid only for x > x_0, where x_0 is a lower cut-off. The function f, usually called the scaling function, depends only on the ratio x / N^{φ}.

To check whether a system obeys the above equation, we perform a data collapse. This is achieved by plotting x^{τ̃} p(x) against x / N^{φ} for the different system sizes. If, for all x greater than a lower cut-off, the data of different system sizes N collapse onto a single curve, the probability distribution function manifestly has the scaling property. Fig. 9 shows such a data collapse for the avalanche size and the avalanche period. The scaling exponents are obtained to be τ̃_s = 1.… ± 0.05 and φ_s = 0.… ± 0.04 for the avalanche size, and τ̃_d = 2.… ± 0.15 and φ_d = 0.… ± 0.05 for the avalanche period.

Interestingly, the value obtained for the scaling exponent τ̃_s is different from τ_s. Such a phenomenon has been observed before [47]. In fact, it is usually assumed that the scaling function f(u) has a finite value in the limit u → 0. In some cases, however, the scaling function behaves as u^{α} in this limit. If this is the case, we can write f(u) = u^{α} g(u), where lim_{u→0} g(u) = g_0 is a finite number. Therefore, we have

p(s) = s^{-τ̃_s} (s / N^{φ_s})^{α} g(s / N^{φ_s}) = s^{-τ̃_s + α} N^{-α φ_s} g(s / N^{φ_s}).   (12)

In the limit of large system sizes, for avalanche sizes much smaller than N^{φ_s}, the quantity s / N^{φ_s} becomes very small and we can effectively set the argument of g to zero. The probability distribution function is then proportional to s^{-τ̃_s + α}, and hence the apparent scaling exponent τ_s = τ̃_s − α is observed. As seen in Fig. 9, the scaling function f(u) does not have a finite value as its argument tends to zero; rather, it behaves as u^{α} with α = −0.… ± 0.03, that is, the scaling function becomes very large as u → 0. Note that for each system size there is a lower cut-off, equal to s_0 / N^{φ_s}, below which the scaling function deviates from g(u) ∼ u^{α}. This analysis gives the apparent scaling exponent τ_s ≃ 1.86, the value obtained before by fitting in Fig. 8.
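The collapse test described above can be sketched in a few lines. Everything below is a synthetic stand-in: the exponents τ̃ = 1.5 and φ = 0.8 and the toy scaling function f(u) = e^{-u} are illustrative choices, not the model's measured values:

```python
import numpy as np

def collapse(x, p, N, tau_tilde, phi):
    """Rescale a distribution p(x) measured at system size N: plotting
    x**tau_tilde * p against x / N**phi should give a single master
    curve f(u) for all N if the scaling form of Eq. 11 holds."""
    return x / N**phi, x**tau_tilde * p

tau_tilde, phi = 1.5, 0.8            # illustrative exponents
f = lambda u: np.exp(-u)             # toy scaling function
system_sizes = [1000, 4000, 16000]

master = []
for N in system_sizes:
    x = np.logspace(0.0, phi * np.log10(N), 60)      # avalanche sizes
    p = x ** (-tau_tilde) * f(x / N**phi)            # "measured" p(x)
    master.append(collapse(x, p, N, tau_tilde, phi))

# With the correct exponents, all rescaled curves coincide; the maximum
# deviation between curves (up to interpolation error) is tiny.
u_ref, y_ref = master[0]
deviation = max(
    np.max(np.abs(np.interp(u_ref, u, y) - y_ref)) for u, y in master[1:]
)
```

In practice one searches over (τ̃, φ) for the pair that minimizes such a deviation measure; wrong exponents leave the curves fanned out instead of collapsed.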
FIG. 10. Average avalanche size versus period, scaled by N^{ρ}. We obtain ρ = 0.… and γ = 1.… ± 0.08 for the best fit.
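The quantity plotted in Fig. 10 is the conditional average ⟨s⟩_d. As a hypothetical sketch of how it is extracted, one can bin avalanches by period and read off the slope on a log-log scale; the avalanche data and the value γ = 1.3 below are synthetic, not the model's measured exponent:

```python
import numpy as np

def conditional_mean_size(sizes, periods):
    """<s>_d: average avalanche size conditioned on the period d."""
    sizes = np.asarray(sizes, dtype=float)
    periods = np.asarray(periods)
    ds = np.unique(periods)
    means = np.array([sizes[periods == d].mean() for d in ds])
    return ds, means

# Synthetic avalanches obeying <s>_d ~ d^gamma with gamma = 1.3.
rng = np.random.default_rng(1)
gamma = 1.3
periods = rng.integers(1, 100, size=50_000)
sizes = periods**gamma * rng.lognormal(0.0, 0.1, size=periods.shape)

d, s_mean = conditional_mean_size(sizes, periods)
# The slope of log <s>_d versus log d recovers gamma.
slope, _ = np.polyfit(np.log(d), np.log(s_mean), 1)
```

With enough avalanches per period bin, the fitted slope reproduces the input γ closely; sparse bins at large d would need to be trimmed or weighted before fitting.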
This size dependence of the probability distribution function has further consequences. As an example, consider the relation between the mean avalanche size and the avalanche period. Assuming narrow joint distributions, s and d are functions of each other and we may write

⟨s⟩_d ≃ d^{γ},   (13)

where ⟨s⟩_d is the average of s conditioned on d. In fact, Eq. 13 follows from two assumptions: first, the narrowness of the joint distribution, and second, the fact that both probability distribution functions of s and d are given by Eq. 11. One may also find a relation among the scaling exponents τ̃_s, τ̃_d and γ: matching p_s(s) |ds| = p_d(d) |dd| with s ∼ d^{γ}, the powers of d balance only if γ(1 − τ̃_s) = 1 − τ̃_d, that is,

γ = (1 − τ̃_d) / (1 − τ̃_s).   (14)

However, this derivation assumes a finite value for lim_{u→0} f(u). With simple algebra, it can be shown that if f(u) ∼ u^{α} as u → 0, the relation between the mean avalanche size and the avalanche period becomes

⟨s⟩_d ≃ N^{ρ} d^{γ}   (15)

with

γ = (1 − τ̃_d) / (1 − τ̃_s + α) = (1 − τ̃_d) / (1 − τ_s)   (16)

and

ρ = α φ_s / (1 − τ_s).   (17)

With the values obtained for the scaling exponents we find γ = 1.… ± 0.… and ρ = 0.… ± 0.06. Fig. 10 shows a log-log plot of ⟨s⟩_d for different system sizes as a function of d, scaled by N^{ρ}. In this graph, we have assumed ρ = 0.… and obtained γ = 1.… ± 0.08 for the best fit, which is consistent with previously obtained values and experimental data [48], although it originates from different scaling exponents τ and α.

VI. SUMMARY AND CONCLUSION
In this paper, we studied the behavior of neural networks consisting of over-damped rotators as a model for the dynamics of type-I single neurons. Tuning the external stimulation, inhibitory strength, and axonal delay time, we have identified both synchronous and asynchronous regions in the phase diagram of our system. Through finite-size analysis, we have proposed that the transition is actually a second-order phase transition between the synchronous and asynchronous states of the system. Interestingly, we have observed that in the vicinity of the critical lines the probability distribution functions of both avalanche size and period follow power laws, and at these lines they show a very good data collapse. The interesting point is that for smaller systems, the point where the power-law behavior emerges falls inside the synchronous region. Therefore, in these systems it is possible to observe both criticality and synchronization. For a very large system, however, the phase transition happens right at the synchronization transition point.

Yet there remain some unanswered questions. First of all, the transition line hardly touches the line on which the balance between inhibitory and excitatory neurons happens. Also, we have not proposed a mechanism that takes the system towards the critical line. One answer might be the evolution of synapses, which is absent in our theory. For example, if we devise a mechanism that strengthens the excitatory neurons in the synchronous phase and strengthens the inhibitory ones in the asynchronous phase, the transition line to the R-Synch. region would become stable. At this point, however, we have not implemented any kind of synapse dynamics in the system.

[1] Beggs, John M., and Dietmar Plenz. "Neuronal avalanches in neocortical circuits." Journal of Neuroscience 23, no. 35 (2003): 11167-11177.
[2] Petermann, Thomas, Tara C. Thiagarajan, Mikhail A. Lebedev, Miguel A. L. Nicolelis, Dante R.
Chialvo, and Dietmar Plenz. "Spontaneous cortical activity in awake monkeys composed of neuronal avalanches." Proceedings of the National Academy of Sciences 106, no. 37 (2009): 15921-15926.
[3] Gireesh, Elakkat D., and Dietmar Plenz. "Neuronal avalanches organize as nested theta- and beta/gamma-oscillations during development of cortical layer 2/3." Proceedings of the National Academy of Sciences 105, no. 21 (2008): 7576-7581.
[4] Pasquale, V., P. Massobrio, L. L. Bologna, M. Chiappalone, and S. Martinoia. "Self-organization and neuronal avalanches in networks of dissociated cortical neurons." Neuroscience 153, no. 4 (2008): 1354-1369.
[5] Hahn, Gerald, Thomas Petermann, Martha N. Havenith, Shan Yu, Wolf Singer, Dietmar Plenz, and Danko Nikolić. "Neuronal avalanches in spontaneous activity in vivo." Journal of Neurophysiology 104, no. 6 (2010): 3312-3322.
[6] Priesemann, V., M. Wibral, M. Valderrama, R. Pröpper, M. Le Van Quyen, T. Geisel, J. Triesch, D. Nikolic, and M. H. J. Munk. "Spike avalanches in vivo suggest a driven, slightly subcritical brain state." Frontiers in Systems Neuroscience 8 (2014): 108.
[7] Freeman, Walter J., Linda J. Rogers, Mark D. Holmes, and Daniel L. Silbergeld. "Spatial spectral analysis of human electrocorticograms including the alpha and gamma bands." Journal of Neuroscience Methods 95, no. 2 (2000): 111-121.
[8] Shriki, Oren, Jeff Alstott, Frederick Carver, Tom Holroyd, Richard N. A. Henson, Marie L. Smith, Richard Coppola, Edward Bullmore, and Dietmar Plenz. "Neuronal avalanches in the resting MEG of the human brain." Journal of Neuroscience 33, no. 16 (2013): 7079-7090.
[9] Shew, Woodrow L., Hongdian Yang, Thomas Petermann, Rajarshi Roy, and Dietmar Plenz. "Neuronal avalanches imply maximum dynamic range in cortical networks at criticality." Journal of Neuroscience 29, no. 49 (2009): 15595-15600.
[10] Shew, Woodrow L., and Dietmar Plenz. "The functional benefits of criticality in the cortex." The Neuroscientist 19, no. 1 (2013): 88-100.
[11] Kinouchi, Osame, and Mauro Copelli. "Optimal dynamical range of excitable networks at criticality." Nature Physics 2, no. 5 (2006): 348-351.
[12] Gautam, Shree Hari, Thanh T. Hoang, Kylie McClanahan, Stephen K. Grady, and Woodrow L. Shew. "Maximizing sensory dynamic range by tuning the cortical state to criticality." PLoS Computational Biology 11, no. 12 (2015): e1004576.
[13] Legenstein, Robert, and Wolfgang Maass. "What makes a dynamical system computationally powerful?" New Directions in Statistical Signal Processing: From Systems to Brain (2007): 127-154.
[14] Hesse, Janina, and Thilo Gross. "Self-organized criticality as a fundamental property of neural systems." Frontiers in Systems Neuroscience 8 (2014): 166.
[15] Bak, Per, Chao Tang, and Kurt Wiesenfeld. "Self-organized criticality: An explanation of the 1/f noise." Physical Review Letters 59, no. 4 (1987): 381.
[16] Bornholdt, Stefan, and Thimo Rohlf. "Topological evolution of dynamical networks: Global criticality from local dynamics." Physical Review Letters 84, no. 26 (2000): 6114.
[17] Levina, Anna, J. Michael Herrmann, and Theo Geisel. "Dynamical synapses causing self-organized criticality in neural networks." Nature Physics 3, no. 12 (2007): 857-860.
[18] Meisel, Christian, and Thilo Gross. "Adaptive self-organization in a realistic neural network model." Physical Review E 80, no. 6 (2009): 061917.
[19] Tetzlaff, Christian, Samora Okujeni, Ulrich Egert, Florentin Wörgötter, and Markus Butz. "Self-organized criticality in developing neuronal networks." PLoS Computational Biology 6, no. 12 (2010): e1001013.
[20] de Arcangelis, Lucilla, Carla Perrone-Capano, and Hans J. Herrmann. "Self-organized criticality model for brain plasticity." Physical Review Letters 96, no. 2 (2006): 028107.
[21] Millman, Daniel, Stefan Mihalas, Alfredo Kirkwood, and Ernst Niebur. "Self-organized criticality occurs in non-conservative neuronal networks during 'up' states." Nature Physics 6, no. 10 (2010): 801-805.
[22] Kossio, Felipe Yaroslav Kalle, Sven Goedeke, Benjamin van den Akker, Borja Ibarz, and Raoul-Martin Memmesheimer. "Growing critical: self-organized criticality in a developing neural system." Physical Review Letters 121, no. 5 (2018): 058301.
[23] Bonachela, Juan A., Sebastiano De Franciscis, Joaquín J. Torres, and Miguel A. Muñoz. "Self-organization without conservation: are neuronal avalanches generically critical?" Journal of Statistical Mechanics: Theory and Experiment 2010, no. 02 (2010): P02015.
[24] Moretti, Paolo, and Miguel A. Muñoz. "Griffiths phases and the stretching of criticality in brain networks." Nature Communications 4, no. 1 (2013): 1-10.