Boundary solution based on rescaling method: recoup the first and second-order statistics of neuron network dynamics
Cecilia Romaro, Antonio Carlos Roque, Jose Roberto Castilho Piqueira
Physics Department and Escola Politécnica, University of São Paulo.
February 7, 2020
Abstract
There is a strong link between network size and the computational resources available, which may constrain a neuroscience study. At the same time, rescaling a network while maintaining its behavior is not a trivial task. Additionally, modeling patterns of connection under topographic organization presents an extra challenge: solving the network boundaries without introducing unwished behavior. This behavior could be, for example, an onset of oscillation due to the torus solution, or unbalanced neurons due to a lack (or overdose) of connections. We detail the network rescaling method, able to sustain the behavior statistics, used in [20] and present a boundary solution method based on the same statistics-recovery idea.
1 Introduction

Understanding the brain is challenging, given both its complex mechanisms and its inaccessibility. Modeling in neuroscience has typically been used to understand neurons [11, 7] and neuronal systems [23, 17, 10]. Computational resources [3, 13, 4, 5] continually contribute to the study and understanding of neuronal pathways [6], channels [19], proteins and other discovered mechanisms [16]; however, computational resources [9] still pose a challenge to network dynamics studies in neuroscience [14, 18, 27, 12, 23]. Consequently, there seems to be a compromise between the increase in detail, or in size, of network models and the computational resources available. To make more detailed simulations computationally feasible we could therefore reduce the size of the network.

Rescaling the network to decrease (or increase) its size is, however, a challenging process. For example, as we reduce the number of neurons, an increase in the number of connections or in the synaptic weights is needed to balance the external inputs. However, this can lead to undesired spiking synchrony and regularity [2, 24, 8].

An additional challenge arises in the modeling of somatotopic regions or networks with boundary conditions. The topographic pattern of connection is interrupted at the network edges, changing the activity at the network boundary [14, 15]. A classic solution to this problem is the torus connection, which introduces undesired oscillations to the network [22, 21, 25].

The purpose of this work is to explain a method for rescaling a network while recouping its first and second-order statistics and, additionally, to present a boundary solution method for topographic networks based on the rescaling model previously presented.

This paper is organized as follows. In Section 2 we present the rescaling method. In Subsection 2.1 we give a description of the algorithm and then, in Subsection 2.2, we present applied examples and their results.
Finally, in Subsection 2.3, we present the mathematical explanation and discuss model requirements as well as method limitations. Following this pattern, in Section 3 we present the boundary correction method. In Subsection 3.1 we give a description of the algorithm and then, in Subsection 3.2, we present applied examples. Finally, in Subsection 3.3, we discuss the sufficient conditions of the model for the method application.

2 Rescaling Method

A neuron network structure can be defined by the number of neurons N, a function of connection F(o_pre, o_post) between a pre-synaptic neuron o_pre and a post-synaptic neuron o_post, and the synaptic strength w_pre,post. This network can be sliced into inner- and inter-connected subsets of neurons (populations or layers). Other neuron-model-dependent parameters, such as the firing threshold V_th, the reset potential after a spike V_res and the absolute refractory period τ_ref, or synapse-model-dependent parameters, such as the synapse time constant τ_syn and the synaptic transmission delays Δ_syn, can be part of the model. Those parameters definitely bias the neuron network activity and behavior, but they do not compose the network structure parameters.

Our rescaling depends on a single positive parameter k in the interval ]0, ∞[, which is used to resize down (k ∈ ]0, 1[) or up (k ∈ ]1, ∞[) the numbers of network neurons, connections, external inputs and synaptic weights, while keeping fixed the function of connection F(o_pre, o_post) and the proportions of cells per subset of neurons.

This method is able to maintain the first and second-order statistics and, therefore, the layer-specific average firing rates, the synchrony, the irregularity features and the network behavior similar to the ones observed in the full version. That happens essentially because it holds fixed the probability and the pattern of connections, keeps the average random input [24], and keeps fixed the proportion between the firing threshold and the square root of the number of connections [26].
2.1 Rescaling Algorithm

The algorithm of the rescaling method can be found in any of the example applications in Section 2.2, also available on GitHub (https://github.com/ceciliaromaro/recoup-the-first-and-second-order-statistics-of-neuron-network-dynamics), and is informally described, as previously in [20], as follows:

• Step 1: Decrease the number of neurons and the external input per neuron by multiplying them by the scale factor, while keeping the proportions of cells per population fixed;

• Step 2: Decrease the number of connections per population by multiplying them by the square of the scale factor, while keeping the functions of connections (probabilities) between populations unchanged;

• Step 3: Increase the synaptic weights by dividing them by the square root of the scale factor;

• Step 4: Provide each cell with a DC input current with a value corresponding to the total input lost due to rescaling.

The first three steps keep the proportional balance of the network across neurons, external inputs and layers. The fourth step adjusts the input relative to the threshold to guarantee the neuron/layer activity.
Algorithm 1 Rescaling method for a model with 1 set of neurons

N: the number of neurons.
C: the probability of connection (F(o_pre, o_post) = C).
X: the total number of connections (x = X/N: the average number of connections per neuron).
X_ext: the average number of external neurons connected to each neuron in N.
w (pA or mV): the weight of synaptic strength.
k: the rescaling factor.
f_ext (Hz): the average firing rate of the external input.
f (Hz): the average firing rate of the set of neurons.
τ_syn (ms): the synapse time constant.

1. NUMBER OF NEURONS
  N' ← k * N
  X'_ext ← k * X_ext
2. NUMBER OF CONNECTIONS
  C' ← C
  X' ← k² * X    ▷ corollary of X = C * N_pre * N_post
3. SYNAPTIC STRENGTH
  w' ← w / √k
4. THRESHOLD ADJUSTMENT
  q_sum = w * f * x
  q_ext = w * f_ext * X_ext
  I'_DC = τ_syn * ((1 − √k) * (q_sum + q_ext))    ▷ extra DC (pA or mV) input to compensate the resizing
Done!
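The single-population algorithm above can be sketched in Python (a minimal sketch: the function name and the returned dictionary are illustrative, not the published implementation):

```python
from math import sqrt

def rescale_single_population(N, X, X_ext, w, k, f, f_ext, tau_syn):
    """Rescale a one-population network by factor k (Algorithm 1 sketch).

    w in pA, rates in Hz, tau_syn in ms; returns the rescaled parameters
    plus the compensating DC current I_dc.
    """
    x = X / N                      # average connections per neuron
    # Step 1: neurons and external inputs scale with k
    N_new = k * N
    X_ext_new = k * X_ext
    # Step 2: total connections scale with k^2 (X = C * N_pre * N_post)
    X_new = k**2 * X
    # Step 3: synaptic weights scale with 1/sqrt(k)
    w_new = w / sqrt(k)
    # Step 4: DC current compensating the mean input lost to rescaling
    # (note: computed with the UNSCALED parameters, as in the algorithm)
    q_sum = w * f * x
    q_ext = w * f_ext * X_ext
    I_dc = tau_syn * (1 - sqrt(k)) * (q_sum + q_ext)
    return {"N": N_new, "X": X_new, "X_ext": X_ext_new, "w": w_new, "I_dc": I_dc}
```

With k = 0.25, for instance, the weights double (1/√0.25 = 2) while the per-neuron in-degree X'/N' becomes k·x, so the product x·w² (the second-order invariant discussed in Section 2.3) is unchanged.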
Notice that step 4 uses the parameters without resizing. Notice also that if w is given in mV, it is not necessary to multiply the DC input by τ_syn in the threshold adjustment (step 4); its last line instead becomes:

V'_DC = (1 − √k) * (q_sum + q_ext)    ▷ extra DC (mV) input to compensate the resizing

Notice that τ_m and C_m are neuron parameters, not network parameters.

The same idea applied to 1 layer is applied recursively to all layers. The one point of attention is to calculate the compensation threshold current correctly: a weighted average over the connection number, frequency and weight of each presynaptic layer.

Algorithm 2
Rescaling method for a model with n sets of neurons

n: the number of layers/sets of neurons.
N_i: the number of presynaptic neurons in layer i (i ∈ n).
N_j: the number of postsynaptic neurons in layer j (j ∈ n).
C_ij: the probability of connection from layer i to layer j (F(i, j) = C_ij).
X_ij: the total number of connections between layer i and layer j (x_j = X_ij/N_j: the average number of received connections per neuron).
X_ext,j: the average number of external neurons connected to each neuron of layer j.
w_ij (pA or mV): the average weight of synaptic strength from layer i to layer j.
w_ext,j (pA or mV): the average weight of synaptic strength from the external input to layer j.
k: the rescaling factor.
f_ext,j (Hz): the average firing rate of the external input targeting set j.
f_i (Hz): the average firing rate of the presynaptic neurons of layer i.
τ_syn (ms): the synapse time constant.

1. NUMBER OF NEURONS
  for each j in n do
    N'_j ← k * N_j
    X'_ext,j ← k * X_ext,j
2. NUMBER OF CONNECTIONS
  for each j in n do
    for each i in n do
      C'_ij ← C_ij
      X'_ij ← k² * X_ij    ▷ corollary of X_ij = C_ij * N_i * N_j
3. SYNAPTIC STRENGTH
  for each j in n do
    for each i in n do
      w'_ij ← w_ij / √k
    w'_ext,j ← w_ext,j / √k
4. THRESHOLD ADJUSTMENT
  for each j in n do
    q_sum,j = Σ_{i=1..n} w_ij * f_i * X_ij / N_j
    q_ext,j = w_ext,j * f_ext,j * X_ext,j
    I'_DC,j = τ_syn * ((1 − √k) * (q_sum,j + q_ext,j))    ▷ DC (pA or mV) input to compensate the resizing
Done!

2.2 Rescaling Applied
All applications of this method presented in this publication were implemented in Python (with Brian2 or NetPyNE) and can be found on GitHub (https://github.com/ceciliaromaro/recoup-the-first-and-second-order-statistics-of-neuron-network-dynamics). The neuron model is the Leaky Integrate-and-Fire (LIF).
2.2.1 Sparse random connected inhibitory neurons network

For any pre-synaptic neuron i and any post-synaptic neuron j in the network, a fixed probability p of a connection i-j is called a random inner connection. For a probability p lower than 0.1 we can say it is sparse [26]. The first illustrative application of this method is a network of inhibitory neurons with a sparse random inner connection (p « 1) and a Poisson external input.

In this application we rescale the network down to 1%. The network parameters before and after rescaling are available in Table 1. Figure 1 presents the raster plots for the full network (1A) and for the network rescaled to 1% (1B) and presents, for the network rescaled to different sizes (120%, 100%, 80%, 50%, 30%, 20%, 10%, 5%, 1%): the average firing rate, a first-order statistic (1C); and the inter-spike interval (ISI), a second-order statistic (1D). The difference between average frequencies is less than 8.5%, and the difference in ISI is less than 1%.

Parameter description                       Variable         Full scale   Rescaling
Factor of rescaling                         k                -            0.01
Number of inhibitory neurons                N⁻
Number of external inputs to each neuron    X_ext
Total number of inner connections           X                1,000,000    100
Weight of synaptic strength                 w (pA)           30           300
Probability of connection                   p                0.05         0.05
Absolute refractory period                  τ_ref (ms)       2            2
Synapse time constant                       τ_syn (ms)       0.5          0.5
Membrane time constant                      τ_m (ms)         10           10
Synaptic transmission delays                Δt (ms)          1.5          1.5
Membrane capacitance                        C_m (pF)         250          250
Inhibitory/excitatory synaptic strength     g                -2           -2
Reset potential                             V_ret (mV)       -65          -65
Fixed firing threshold                      V_th (mV)        -45          -45
Average firing rate of the external input   f_ext (Hz)       8            8

Table 1: Sparse random connected inhibitory neurons network model specification before and after the rescaling: parameters and metrics.
Figure 1: Sparse random connected inhibitory neurons network model. Raster plot for (A) the full network and (B) the network rescaled to 1% of the total number of neurons. (C) Average firing rate and (D) inter-spike interval (ISI) for different scale factors k (120%, 100%, 80%, 50%, 30%, 20%, 10%, 5%, 1%). Each 'x' is the value for a single simulation run and the bar is the average over the set of runs.
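As a reference for the dynamics behind these raster plots, a minimal forward-Euler LIF integrator (the neuron constants follow Table 1; the drive currents and the time step are illustrative assumptions, not the published setup):

```python
def simulate_lif(I_dc, T=1000.0, dt=0.1, tau_m=10.0, C_m=250.0,
                 V_rest=-65.0, V_reset=-65.0, V_th=-45.0, t_ref=2.0):
    """Forward-Euler LIF; time in ms, voltage in mV, I_dc in pA, C_m in pF.

    Note that pA/pF = mV/ms, so I_dc/C_m is directly a voltage slope.
    Returns the list of spike times (ms).
    """
    V = V_rest
    spikes = []
    refractory_until = 0.0
    for step in range(int(T / dt)):
        t = step * dt
        if t < refractory_until:
            continue  # hold V at reset during the refractory period
        # dV/dt = -(V - V_rest)/tau_m + I_dc/C_m
        V += dt * (-(V - V_rest) / tau_m + I_dc / C_m)
        if V >= V_th:
            spikes.append(t)
            V = V_reset
            refractory_until = t + t_ref
    return spikes
```

With these constants the rheobase current is C_m·(V_th − V_rest)/τ_m = 500 pA: a constant drive above it makes the neuron fire tonically, while below it the neuron stays silent.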
2.2.2 Excitatory-inhibitory interconnected neurons network with avalanches

An avalanche can be defined as a rise of activity above some basal or threshold level [1]. This rise is triggered by the activation of a few, or even a single, neuron, producing a cascade of firings that returns below threshold after some time. This process can have particular statistical properties, like power-law distributions of size and duration. In other words, an avalanche is a quick rise in the network activity, local or systemic, followed by a sudden drop of the activity back to the previous equilibrium. This rise in activity is triggered by the activation of a few neurons with feedback connections able to change the average activity.

The second application of this method is a 2-layer excitatory-inhibitory neurons network with a sparse random inner connection (p « 1) and a Poisson external input. This network is similar to the first one, but uses excitatory neurons with inner connections able to produce avalanches. In this application we rescale the network down to 50%. The network parameters before and after rescaling are available in Table 2. Figure 2 presents the raster plots and spike histograms for the full network and for the network rescaled to 50%. It is possible to see the avalanches in both cases.

Parameter description                       Variable         Full scale   Rescaling
Factor of rescaling                         k                -            0.5
Number of excitatory neurons                N⁺
Number of inhibitory neurons                N⁻
Number of external inputs to each neuron    X_ext            50           25
Total number of inner connections           X
Weight of synaptic strength                 w ± δw (pA)      400 ± 40     566 ± 57
Probability of connection                   p
Absolute refractory period                  τ_ref (ms)       2            2
Synapse time constant                       τ_syn (ms)       0.5          0.5
Membrane time constant                      τ_m (ms)         10           10
Synaptic transmission delays                Δt (ms)          1.5          1.5
Membrane capacitance                        C_m (pF)         250          250
Inhibitory/excitatory synaptic strength     g                -4           -4
Reset potential                             V_ret (mV)       -65          -65
Fixed firing threshold                      V_th (mV)        -50          -50
Average firing rate of neurons              f (Hz)
Average firing rate of the external input   f_ext (Hz)       8            8

Table 2: Inhibitory-excitatory neurons network model specification before and after the rescaling: parameters and metrics.

Figure 2: Excitatory and inhibitory interconnected neurons network with avalanches. (A) Raster plot and (B) spike histogram for the full network; (C) raster plot and (D) spike histogram for the network rescaled to 50% of the total number of neurons.

2.2.3 PD [18] network: eight-layer excitatory-inhibitory interconnected network
The PD [18] model is a four-layer excitatory-inhibitory interconnected (eight sets of neurons) network with external Poisson input and some parameters based on biological data. This model is able to reproduce the average firing rates of the somatosensory cortex observed in vivo.

The rescaling of this network was implemented, discussed in detail and published in [20]. The following (Table 3 and Figure 3) shows an overview of the PD network rescaled down to 30% of the full version (k = 0.3), which means that less than 10% of the total number of connections (k²·X) remained, while all network behavior, layer-specific firing rates and irregularity metrics were maintained. In [20] this reduction reaches 1% of the total number of neurons (k = 0.01), 10 neurons in layer 5i and 0.01% of the total number of inner connections (k²·X), with its limitations discussed.

Table 3 presents an overview of the network dimensions and parameters before and after rescaling to 30%. Figure 3 presents the raster plots, the average firing rate per layer (a first-order statistic) and the inter-spike interval (ISI) per layer (a second-order statistic) for the full-scale network and the network rescaled to 30%.

Parameter description                       Variable         Full scale   Rescaling
Factor of rescaling                         k                -            0.3
Number of excitatory neurons                N⁺
Number of inhibitory neurons                N⁻
Number of external inputs to each neuron    X_ext
Total number of inner connections           X
Weight of synaptic strength                 w (pA)           87.8         160.3
Probability of connection                   p                C*           C*
Absolute refractory period                  τ_ref (ms)       2            2
Synapse time constant                       τ_syn (ms)       0.5          0.5
Membrane time constant                      τ_m (ms)         10           10
Synaptic transmission delays                Δt (ms)          Δt*          Δt*
Membrane capacitance                        C_m (pF)         250          250
Inhibitory/excitatory synaptic strength     g                -4           -4
Reset potential                             V_ret (mV)       -65          -65
Fixed firing threshold                      V_th (mV)        -50          -50
Average firing rate of neurons              f (Hz)           f*           f*
Average firing rate of the external input   f_ext (Hz)       8            8

Table 3: PD [18] eight-layer excitatory-inhibitory interconnected network model specification before and after the rescaling: parameters and metrics. (*) The value varies for each layer or type of neuron: see the specification in Table 5 of the original article [18].
Figure 3: Reproduction of Figure 6 of [18] (A, B, C) and the network rescaled to 30% of the total number of neurons (D, E, F). (A) Raster plot of the spiking activity of the eight neuron populations: 2/3, 4, 5 and 6, excitatory and inhibitory neurons. The number of neurons per layer shown is proportional to the full scale of the network, resulting in a total of approximately 1850 neurons plotted. (B) Boxplot of 60 seconds of single-unit firing rate for each population. (C) Irregularity estimated by the coefficient of variation of the inter-spike interval of a 60-second simulation. (D) Raster plot and (E, F) statistics as in (A, B, C). The simulation times and numbers of neurons plotted were chosen as in the full scale. Adapted from [20].
2.2.4 Brunel [2] network: two-layer excitatory-inhibitory network

The Brunel [2] network model is a 2-layer excitatory-inhibitory neurons network with a sparse random inner connection (p = 0.1) and a DC external input. The network behavior depends on the proportion g between the inhibitory and excitatory weights of synaptic strength, the proportion between the DC external input Θ and the fixed firing threshold V_th. The variation of those proportions is able to change the average firing rate, the synchrony and the irregularity of the network.

Table 4 presents the general network parameters of Figure 8 in the original article [2]. Figure 4 presents the firing rate and Figure 5 presents the ISI for the network rescaled to different sizes (120%, 100%, 80%, 50%, 30%, 20%, 10%, 5%) and four parameter combinations from Figure 8 of the original article [2]: (A) g = 3 and Θ = 2·V_th; (B) g = 6 and Θ = 4·V_th; (C) g = 5 and Θ = 2·V_th; and (D) g = 4.5 and Θ = 0.9·V_th. For the (D) simulations (g = 4.5 and Θ = 0.9·V_th), Θ was replaced by an equivalent Poisson input with the same w, the weight of the excitatory synaptic strength.

Figure 6 presents the raster plots and spike histograms for the full network and for the network rescaled to 25%, for the configurations of Figure 8B (right side of Figure 6) and Figure 8C (left side of Figure 6) in the Brunel original article [2]. The oscillation present in Figure 6B is not as strongly marked in 6D, at least not visually. This network configuration (g = 6 and Θ = 4·V_th) presents a large variation in irregularity with rescaling (see Figure 5B) and the highest (15%) average firing rate variation under rescaling, resembling the firing rate of the (D: 14.5%) configuration but higher, even with Θ replaced by a Poisson external input (in D).
Those features are explained at the end of Section 2.3: Model requirements, mathematical explanation and method limitations.

Parameter description                       Variable         Full scale   Rescaling
Factor of rescaling                         k                -            0.05
Number of excitatory neurons                N⁺
Number of inhibitory neurons                N⁻
Number of external inputs to each neuron    X_ext
Total number of inner connections           X
Weight of synaptic strength                 w (mV)           0.1          0.45
Probability of connection                   p                0.1          0.1
Absolute refractory period                  τ_ref (ms)       2            2
Membrane time constant                      τ_m (ms)         20           20
Synaptic transmission delays                Δt (ms)          1.5          1.5
Reset potential                             V_ret (mV)       10           10
Fixed firing threshold                      V_th (mV)        20           20

Table 4: Brunel [2] model specification before and after the rescaling: parameters and metrics.
Figure 4: Average firing rate of the Brunel [2] network for different rescaling sizes (120%, 100%, 80%, 50%, 30%, 20%, 10%, 5%), run 30 times for 10 s each. (A) g = 3 and Θ = 2·V_th; (B) g = 6 and Θ = 4·V_th; (C) g = 5 and Θ = 2·V_th; and (D) g = 4.5 and Θ = 0.9·V_th. Each 'x' is the value for a single simulation run and the bar is the average over the set of runs.

Figure 5: Average irregularity of single-unit spiking, calculated as the coefficient of variation of the ISI of the Brunel [2] network neurons, for different rescaling sizes (120%, 100%, 80%, 50%, 30%, 20%, 10%, 5%), run 30 times for 10 s each. (A) g = 3 and Θ = 2·V_th; (B) g = 6 and Θ = 4·V_th; (C) g = 5 and Θ = 2·V_th; and (D) g = 4.5 and Θ = 0.9·V_th. Each 'x' is the value for a single simulation run and the bar is the average over the set of runs.
Figure 6: Right side: raster plots (A: full scale and C: resized to 25%) and histograms (B: full scale and D: resized to 25%) for the Brunel [2] network with the g = 6 and Θ = 4·V_th configuration. Left side: raster plots (E: full scale and G: resized to 25%) and histograms (F: full scale and H: resized to 25%) for the Brunel [2] network with the g = 5 and Θ = 2·V_th configuration.

2.3 Model requirements, mathematical explanation and method limitations

This method works in any network where:

• the weight of synaptic strength makes a small contribution compared to the firing threshold (w « V_th − V_ret);

• there is a low probability of connection (p « 1).

The mathematical reason is that, in those networks, the second-order statistics depend on the product of the number of received connections, x, and the square of the synaptic strength, w². The rescaling method gives:

w' = w / √k,    (2.1)
x' = k·x = k²·X / (k·N).    (2.2)

This maintains x·w² and, therefore, the second-order statistics. The first-order statistics depend on the number of received connections, x = X/N, and the synaptic strength, w.
So, the fourth step of the method provides a DC input to supply the loss of (1 − √k) in the first-order statistics.

More formally, this method works in any model that can be approximated by a sparse randomly connected network where the neuron activity can be approximated by an average part plus a fluctuating Gaussian part:

V(t) = µ(t) + σ·η(t),    (2.3)

where η is a Gaussian white noise, µ(t) is the average part, σ is the standard deviation and, therefore, σ·η(t) is the fluctuating part, with:

µ(t) = µ_int(t) + µ_ext(t),    (2.4)
σ²(t) = σ²_int(t) + σ²_ext(t).    (2.5)

For any network with n sets of neurons, where each set may contain inhibitory (w_pre < 0) or excitatory (w_pre > 0) neurons:

µ_post(t) = Σ_{pre=1..n} (x_pre,post · w_pre,post · f_pre · τ_pre,post) + X_ext,post · w_ext,post · f_ext,post · τ_ext,post,    (2.6)

σ²_post(t) = Σ_{pre=1..n} (x_pre,post · w²_pre,post · f_pre · τ_pre,post) + X_ext,post · w²_ext,post · f_ext,post · τ_ext,post.    (2.7)

Hence, replacing 2.1 and 2.2 in 2.7,

σ'²_post(t) = Σ_{pre=1..n} ((k·x_pre,post) · (w_pre,post/√k)² · f_pre · τ_pre,post) + (k·X_ext,post) · (w_ext,post/√k)² · f_ext,post · τ_ext,post = σ²_post(t),    (2.8)

given that, the second-order statistics are granted. Going back to the first-order statistics, the fourth step of the method grants a DC where:

DC_post = (1 − √k) · (Σ_{pre=1..n} (x_pre,post · w_pre,post · f_pre · τ_pre,post) + X_ext,post · w_ext,post · f_ext,post · τ_ext,post).    (2.9)

Thus, the new µ'(t) is given by:

µ'(t) = µ'_int(t) + µ'_ext(t) + DC,    (2.10)

and, replacing 2.1, 2.2 and 2.6 in 2.10:

µ'_post(t) = Σ_{pre=1..n} ((k·x_pre,post) · (w_pre,post/√k) · f_pre · τ_pre,post) + (k·X_ext,post) · (w_ext,post/√k) · f_ext,post · τ_ext,post + DC_post,    (2.11)

and replacing DC:

µ'_post(t) = √k · (Σ_{pre=1..n} (x_pre,post · w_pre,post · f_pre · τ_pre,post) + X_ext,post · w_ext,post · f_ext,post · τ_ext,post) + (1 − √k) · (Σ_{pre=1..n} (x_pre,post · w_pre,post · f_pre · τ_pre,post) + X_ext,post · w_ext,post · f_ext,post · τ_ext,post) = µ_post(t),    (2.12)

given that, the first-order statistics are granted too.

For the Brunel network, with x the average number of received connections per neuron, x_e the average number of received excitatory connections per neuron, and x_i the average number of received inhibitory connections per neuron, we have:

X/N = x = x_e + x_i,    (2.13)

and the mean µ(t) and the variance σ²(t) can be detailed as:

µ(t) = x·w·(1 − g·x_i/x_e)·f(t − Δt)·τ + X_ext·w·f_ext·τ,    (2.14)
σ²(t) = x·w²·(1 + g²·x_i/x_e)·f(t − Δt)·τ + X_ext·w²·f_ext·τ.    (2.15)

Hence, replacing 2.1 and 2.2 in 2.15,

σ'²(t) = (k·x)·(w/√k)²·(1 + g²·x_i/x_e)·f(t − Δt)·τ + (k·X_ext)·(w/√k)²·f_ext·τ = σ²(t),    (2.16)

given that, the second-order statistics are granted. Going back to the first-order statistics, the fourth step of the method grants a DC where:

DC = (1 − √k)·[x·w·(1 − g·x_i/x_e)·f(t − Δt)·τ + X_ext·w·f_ext·τ].    (2.17)

Thus, the new µ'(t) is given by:

µ'(t) = µ'_int(t) + µ'_ext(t) + DC,    (2.18)

and, replacing 2.1, 2.2 and 2.14 in 2.18:

µ'(t) = (k·x)·(w/√k)·(1 − g·x_i/x_e)·f(t − Δt)·τ + (k·X_ext)·(w/√k)·f_ext·τ + DC,    (2.19)

and replacing DC:

µ'(t) = √k·[x·w·(1 − g·x_i/x_e)·f(t − Δt)·τ + X_ext·w·f_ext·τ] + (1 − √k)·[x·w·(1 − g·x_i/x_e)·f(t − Δt)·τ + X_ext·w·f_ext·τ] = µ(t),    (2.20)

given that, the first-order statistics are granted too.
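The cancellation in equations 2.16 and 2.20 can be checked numerically; a small sketch (all parameter values are illustrative, not taken from any of the tables):

```python
from math import sqrt

def mu_sigma2(x, w, g, gamma, f, X_ext, w_ext, f_ext, tau, dc=0.0):
    """Mean and variance of the summed input, following eqs. 2.14-2.15
    (gamma = x_i/x_e; dc is the step-4 compensation term of eq. 2.17)."""
    mu = x * w * (1 - g * gamma) * f * tau + X_ext * w_ext * f_ext * tau + dc
    sigma2 = x * w**2 * (1 + g**2 * gamma) * f * tau + X_ext * w_ext**2 * f_ext * tau
    return mu, sigma2

# full-scale network (illustrative numbers)
x, w, g, gamma = 1000.0, 0.1, 5.0, 0.25
f, X_ext, f_ext, tau = 10.0, 1000.0, 8.0, 0.02
mu0, s20 = mu_sigma2(x, w, g, gamma, f, X_ext, w, f_ext, tau)

# rescaled network: x -> k*x, X_ext -> k*X_ext, w -> w/sqrt(k),
# plus the compensating DC of eq. 2.17
k = 0.25
dc = (1 - sqrt(k)) * (x * w * (1 - g * gamma) * f * tau + X_ext * w * f_ext * tau)
mu1, s21 = mu_sigma2(k * x, w / sqrt(k), g, gamma, f, k * X_ext, w / sqrt(k), f_ext, tau, dc)

# both first- and second-order statistics are preserved
assert abs(mu1 - mu0) < 1e-9 and abs(s21 - s20) < 1e-9
```

Dropping the `dc` term leaves the variance unchanged but shrinks the mean by the factor √k, which is exactly the loss the fourth step compensates.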
The size limit of the rescaling is reached when w becomes so large that the first model requirement (w « V_th − V_ret) stops being satisfied.

In case the model stops working at a smaller scale, one solution is to increase the random input, that is, to artificially add an external random input and compensate for it on the threshold (see Figures 4D and 5D). A massive random external input keeps the network operating at a stable point because it reduces the perturbation caused by under- or over-connected spiking activity: it reduces the ratio between the inter-connection mean µ_int (or standard deviation σ_int) and the total mean µ (or total standard deviation σ), and thereby avoids changing the previous balance point of the network activity.

Additionally, this method does not introduce any resonance or oscillation. Instead, it tends to prevent oscillations, as in the application of Section 2.2.1. This is due to the reduction of the ratio between the inter-connection mean µ_int and the total standard deviation σ. In other words, and more formally, following equation 30 of [2]:

G = x·w·τ·f·(g·x_i/x_e − 1) / σ = −µ_int / σ,    (2.21)
H = x·w²·τ·f·(g²·x_i/x_e + 1) / σ² = σ²_int / σ²,    (2.22)

where µ_int is the mean and σ²_int is the variance due to the internal connections. The oscillation in Figure 6B (and in Figure 8B of [2]) appears when H → 1 and G exceeds √(τ/Δt). Since this method does not change the ratio H, nor σ, but decreases µ_int, the condition on G is not satisfied anymore and, consequently, neither is the oscillation sustained in Figure 6D.

3 Boundary Correction Method
The key to this method is to apply the rescaling idea to each neuron o in the network or boundary region. The rescaling factor k_o for each neuron o aims to compensate the number of connections lost due to the cut, i.e., relative to the case in which the boundary did not exist and the network had an infinite number of neurons. The key quantity is the normalized connection density.

Normalized connection density: for a given neuron o, the number of received connections normalized by the total number of received connections the neuron would have if the network were infinite (no boundary). For example, in a square i × j network, a neuron o_ij in the corner receives at least 0.25 of the connections it would receive if the network were infinite, i.e., did not end at i × j.

3.1 Boundary correction algorithm

The boundary correction method essentially estimates numerically the normalized connection density function in the first step, then weights each neuron's connections based on this density and finally balances the threshold to guarantee the neuron/layer activity. The laborious part of this method is to derive the normalized connection density function, since it depends on the pattern of connection of each model. Below is a simple procedure that works for any pattern of connection. However, if the normalized connection density function is analytically known, one can use it and start the boundary correction algorithm at step 2.

The algorithm of the boundary correction method can be found in any of the example applications in Section 3.2, also available on GitHub (https://github.com/ceciliaromaro/recoup-the-first-and-second-order-statistics-of-neuron-network-dynamics), and is described as follows:

• Step 1: Calculate the scale factor for each neuron o in the network based on the normalized connection density;

• Step 2: Increase the synaptic weights by dividing them by the square root of the scale factor;

• Step 3: Provide each cell with a DC input current with a value corresponding to the total input lost due to the network edge (boundary cut).
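For a concrete picture of these steps, a sketch for neurons on a 1-D line with a Gaussian connection kernel (the discretization, the window size and all parameter values are illustrative assumptions, not the published implementation):

```python
from math import exp, sqrt

def gaussian_kernel(d, sigma):
    """Unnormalized Gaussian connection probability at distance d."""
    return exp(-d * d / (2.0 * sigma * sigma))

def boundary_scale_factors(n, sigma, window=None):
    """Step 1: normalized connection density k'_o for n neurons on a line.

    k'_o = (in-degree of neuron o in the finite network) /
           (in-degree it would have if the line were unbounded).
    """
    if window is None:
        window = 8 * int(round(sigma))  # wide enough to approximate infinity
    # reference in-degree of a neuron with no boundary in sight
    ref = sum(gaussian_kernel(d, sigma) for d in range(-window, window + 1))
    ks = []
    for o in range(n):
        # actual in-degree: only presynaptic positions 0..n-1 exist
        deg = sum(gaussian_kernel(p - o, sigma) for p in range(n))
        ks.append(deg / ref)
    return ks

def boundary_correct(ks, w, rate_times_indegree, tau_syn):
    """Steps 2-3: per-neuron weight boost and compensating DC current."""
    w_corr = [w / sqrt(k) for k in ks]
    dc = [tau_syn * (1.0 - sqrt(k)) * w * rate_times_indegree for k in ks]
    return w_corr, dc
```

A neuron in the middle of a long line has k'_o ≈ 1 (no correction), while a neuron at the very end loses roughly half of its inputs, k'_o ≈ 0.5, so its weights are boosted by about √2 and it receives the corresponding DC.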
More formally, our method can be described by the following pseudo-algorithm:

Algorithm 3
Rescaling method for boundary correction for a model with n layers

n: the number of sets in the model.
N_j: the (finite) set of postsynaptic neurons (j ∈ n).
N_i: the (finite) set of presynaptic neurons (i ∈ n).
X_ij: the average number of connections between N_i and one neuron in N_j if the model had no boundary.
x_oj: the number of synapses connected to each neuron o of N_j.
w_oij (pA or mV): the average weight of synaptic strength from the set N_i to the neuron o in N_j.
k'_oj: the rescaling factor of the neuron o from the set N_j (to be calculated).
f_i (Hz): the average firing rate of the set of neurons N_i.
τ_syn (ms): the synapse time constant.

1. CALCULATION OF THE NORMALIZED CONNECTION DENSITY
  for each layer j in n do
    for each neuron o in N_j do
      k'_oj ← x_oj / Σ_{i=1..n} X_ij
2. SYNAPTIC STRENGTH
  for each layer i in n do
    for each layer j in n do
      for each neuron o in N_j do
        w'_oij ← w_oij / √k'_oj
3. THRESHOLD ADJUSTMENT
  for each layer j in n do
    for each neuron o in N_j do
      c_sum,oj = Σ_{i=1..n} w_oij * f_i * X_ij
      I'_DC,oj = τ_syn * (1 − √k'_oj) * c_sum,oj    ▷ DC (pA or mV) input to compensate the boundary cut
Done!

3.2 Boundary correction applied
We applied the boundary solution to the models presented in Sections 2.2.1, 2.2.3 and 2.2.4. In order to raise the boundary problem, a model with a topographic pattern of connection is needed. Therefore, we assigned a spatial position to each neuron, applied a Gaussian with width σ_g as the pattern of connection, and then ran the network with and without the boundary solution. All the applications of this method presented in this publication were implemented in Python (with Brian2) and can be found on GitHub (https://github.com/ceciliaromaro/recoup-the-first-and-second-order-statistics-of-neuron-network-dynamics).

3.2.1 Sparse random connected inhibitory neurons network

All neurons from the model presented in Section 2.2.1 were homogeneously distributed in space (positions in mm) and σ_g = 0.25 mm was used. Figure 7 presents the average firing rate per neuron before and after the boundary correction, the mean firing rate of the neurons in the core (around 50% of all neurons) and on the boundary (the complementary 50% of all neurons), and the network irregularity. Visually, the boundary neurons spike more than the core neurons in the model without boundary correction, due to the lack of inhibitory connections.

Figure 7: Topographic sparse random connected inhibitory neurons network model with σ_g = 0.25 mm. Average firing rate per neuron (A) without boundary correction and (D) with boundary correction. (B) Core and boundary average firing rate and (C) irregularity without boundary correction; (E) average firing rate and (F) irregularity with boundary correction.

3.2.2 Somatosensory S1 network: eight-layer excitatory-inhibitory interconnected network

All neurons from the full version of the model presented in Section 2.2.3 were homogeneously distributed in space (positions in mm) with a Gaussian pattern of connection of width σ_g. The same was done for the network rescaled to 50%. Figures 8A to 8D present the average firing rate per neuron. Each dot represents the position of a neuron and the size of the dot is proportional to the average firing rate of that neuron.
Figures 8A and 8B correspond to the excitatory layer L2 without and with boundary correction, respectively. Figures 8C and 8D present the excitatory layer L5 without and with boundary correction, respectively. Figures 8E and 8G present the core (around 50% of all neurons) and boundary (the complementary 50% of all neurons) layer-averaged firing rates without boundary correction, and Figures 8F and 8H the same with boundary correction. Figure 9 presents the same as Figure 8 for the network rescaled to 50% of its original size, showing that it is possible to combine both methods. In all cases, the boundary neurons visually spike more than the core neurons in the model without boundary correction, due to the lack of inhibitory connections.

Figure 8: The PD full-scale average firing rate per neuron and per layer. Neurons from L2 excitatory (A) without and (B) with boundary correction. Neurons from L5 excitatory (C) without and (D) with boundary correction. Each dot represents the position of a neuron and the size of the dot is proportional to the average firing rate of that neuron. The core (around 50% of all neurons) layer-averaged firing rate (E) without and (F) with boundary correction. The boundary (the complementary 50% of all neurons) layer-averaged firing rate (G) without and (H) with boundary correction.
Figure 9: The PD rescaled to 50% average firing rate per neuron and per layer. Neurons from L2 excitatory (A) without and (B) with boundary correction. Neurons from L5 excitatory (C) without and (D) with boundary correction. Each dot represents the position of a neuron, and the size of the dot is proportional to the average firing rate of that neuron. The core (around 50% of all neurons) layer average firing rates (E) without and (F) with boundary correction. The boundary (the complementary 50% of all neurons) layer average firing rates (G) without and (H) with boundary correction.
All neurons of the full version of the model presented in 2.2.4 were homogeneously distributed on mm, and σ_g = 0. mm was used. We ran the model for the g = 6 and Θ = 4 V_th configuration (Figure 10), for the g = 5 and Θ = 2 V_th configuration (Figure 11), and for the g = 3, Θ = 2 V_th and g = 4, Θ = 1. V_th configurations (Figure 12), corresponding respectively to the Figure 8B, C, A and D network configurations of [2].

Figures 10A to 10D present the average firing rate per neuron. Each dot represents the neuron's position, and the size of the dot is proportional to the average firing rate of that neuron. Figures 10A and 10B correspond to the excitatory layer without and with boundary correction, respectively; Figures 10C and 10D to the inhibitory layer without and with boundary correction, respectively. Figures 10E to 10G present the core (around 50% of all neurons) and boundary (the complementary 50% of all neurons) layer average firing rates and the network irregularity without boundary correction. Figures 10H and 10I are the network average firing rate and network irregularity with boundary correction. Figure 10J presents the raster plot and Figure 10K the spike histogram of the network. All of these are for the g = 6 and Θ = 4 V_th configuration. For the g = 5 and Θ = 2 V_th configuration, the same plots as for the g = 6 and Θ = 4 V_th configuration can be found in Figure 11.

Note that the oscillation present in Figure 6B, which vanished in Figure 6D, is visibly back in Figure 10K. This phenomenon is explained in Section 3.3 - Model requirements, mathematical explications and method limitations.

Figure 12 presents the results of reproducing the Figure 8A (g = 3 and Θ = 2 V_th) and Figure 8D (g = 4 and Θ = 1. V_th) network configurations of [2]. Figures 12A and 12B present the average firing rate and irregularity with boundary correction for a 5 s run simulation. Figure 12C presents the spike histogram of the network.
All of these are for the g = 3 and Θ = 2 V_th configuration. For the g = 4 and Θ = 1. V_th configuration, the same can be found in Figures 12D to 12F, but with Θ replaced by an equivalent Poisson input.

Figure 10: The topographic Brunel [2] model average firing rate per neuron and per layer for the g = 6 and Θ = 4 V_th configuration. Neurons from the excitatory layer (A) without and (B) with boundary correction. Neurons from the inhibitory layer (C) without and (D) with boundary correction. Each dot represents the position of a neuron, and the size of the dot is proportional to the average firing rate of that neuron. (E) The core (around 50% of all neurons) layer average firing rate and (F) the boundary (the complementary 50% of all neurons) layer average firing rate, both without boundary correction. The average single-unit irregularity, calculated as the coefficient of variation of the interspike intervals (ISI), (G) without and (I) with boundary correction. (H) The average firing rate of all neurons with boundary correction. (J) Raster plot and (K) histogram of the network for a 200 ms run. All other graphics show results for a 5 s run.
Figure 11: The topographic Brunel [2] model average firing rate per neuron and per layer for the g = 5 and Θ = 2 V_th configuration. Neurons from the excitatory layer (A) without and (B) with boundary correction. Neurons from the inhibitory layer (C) without and (D) with boundary correction. Each dot represents the position of a neuron, and the size of the dot is proportional to the average firing rate of that neuron. (E) The core (around 50% of all neurons) layer average firing rate and (F) the boundary (the complementary 50% of all neurons) layer average firing rate, both without boundary correction. The average single-unit irregularity, calculated as the coefficient of variation of the interspike intervals (ISI), (G) without and (I) with boundary correction. (H) The average firing rate of all neurons with boundary correction. (J) Raster plot and (K) histogram of the network for a 200 ms run. All other graphics show results for a 5 s run.
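The irregularity reported in panels (G) and (I) is the coefficient of variation of the interspike intervals. A short sketch of that measure (the helper name `isi_cv` is our own; a perfectly regular train gives CV = 0, a Poisson train gives CV ≈ 1):

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of the interspike intervals (ISI):
    std(ISI) / mean(ISI). Returns nan for fewer than two spikes."""
    t = np.sort(np.asarray(spike_times, dtype=float))
    if t.size < 2:
        return float("nan")
    isi = np.diff(t)
    return isi.std() / isi.mean()

rng = np.random.default_rng(2)
regular = np.arange(0.0, 5.0, 0.01)              # clock-like train: CV ~ 0
poisson = np.cumsum(rng.exponential(0.01, 500))  # Poisson-like train: CV ~ 1
cv_reg, cv_poi = isi_cv(regular), isi_cv(poisson)
```

Averaging this single-unit quantity over all neurons yields the network irregularity curves shown in the figures.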
Figure 12: The topographic Brunel [2] model average firing rate, irregularity and spike histogram (A, B, C) for the g = 3 and Θ = 2 V_th configuration and (D, E, F) for the g = 4 and Θ = 1. V_th configuration. Average firing rate (A, D), irregularity (B, E) and spike histogram (C, F) of neurons from the excitatory and inhibitory layers with boundary correction. Average firing rate and irregularity for a 5 s run; histogram of the network for a 200 ms run.

This method works in any network that satisfies the rescaling condition (Section 2.3) for a reduction to 25%, the minimum rescaling to which a corner neuron can be subjected. This is a sufficient, though not necessary, condition. Note that this boundary solution was able to retrieve the firing rate of:

- Figure 7E back to Figure 1C, lost in Figure 7B;
- Figures 8F and 8H and Figures 9F and 9H back to Figure 3B, lost in Figures 8E and 8G and Figures 9E and 9G;
- Figure 10H back to Figure 4B, lost in Figures 10E and 10F;
- Figure 11H back to Figure 4C, lost in Figures 11E and 11F;

to retain the firing rate of:

- Figure 4A in Figure 12A;
- Figure 4D in Figure 12D;

to retrieve the irregularity of:

- Figure 7F back to Figure 1D, lost in Figure 7C;
- Figure 11I back to Figure 5C, lost in Figure 11G;

and to retain the irregularity of:

- Figure 5A in Figure 12B;
- Figure 5D in Figure 12E.

It is possible that even networks that would lose some synchrony under a rescaling to 25% (see Figure 6D) retain that synchrony when the boundary correction is applied (see Figures 11K and 6B). This is due to the weighting of the neurons in the network whose µ_int was reduced: if this reduction is small enough not to perturb the G of the system (see Equation 2.21), the network's oscillation properties remain valid.

Acknowledgments

This work was produced as part of the activities of the FAPESP Research, Dissemination and Innovation Center for Neuromathematics (Grant 2013/07699-0, São Paulo Research Foundation).
The author is the recipient of PhD scholarships from the Brazilian Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). The author is thankful to Mauricio Girardi-Schappo, who encouraged her to explain the method in a paper, and to the author's little sister, Cinthia Romaro, who found time to comment on the manuscript even during her Harvard MBA.
References

[1] Beggs, J. M., and Plenz, D. Neuronal avalanches in neocortical circuits. Journal of Neuroscience 23, 35 (2003), 11167–11177.
[2] Brunel, N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience 8, 3 (2000), 183–208.
[3] Carnevale, N. T., and Hines, M. L. The NEURON Book. Cambridge University Press, 2006.
[4] Gewaltig, M.-O., and Diesmann, M. NEST (NEural Simulation Tool). Scholarpedia 2, 4 (2007), 1430.
[5] Goodman, D. F., and Brette, R. The Brian simulator. Frontiers in Neuroscience 3 (2009), 26.
[6] Hines, M. L., Morse, T., Migliore, M., Carnevale, N. T., and Shepherd, G. M. ModelDB: a database to support computational neuroscience. Journal of Computational Neuroscience 17, 1 (2004), 7–11.
[7] Hodgkin, A. L., and Huxley, A. F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 4 (1952), 500–544.
[8] Iyer, R., Menon, V., Buice, M., Koch, C., and Mihalas, S. The influence of synaptic weight distribution on neuronal population dynamics. PLoS Computational Biology 9, 10 (2013), e1003248.
[9] Jordan, J., Ippen, T., Helias, M., Kitayama, I., Sato, M., Igarashi, J., Diesmann, M., and Kunkel, S. Extremely scalable spiking neuronal network simulation code: from laptops to exascale computers. Frontiers in Neuroinformatics 12 (2018), 2.
[10] Kriener, B., Helias, M., Rotter, S., Diesmann, M., and Einevoll, G. T. How pattern formation in ring networks of excitatory and inhibitory spiking neurons depends on the input current regime. Frontiers in Computational Neuroscience 7 (2014), 187.
[11] Lapicque, L. Recherches quantitatives sur l'excitation electrique des nerfs traitee comme une polarization. Journal de Physiologie et de Pathologie Generale 9 (1907), 620–635.
[12] Lee, J. H., Koch, C., and Mihalas, S. A computational analysis of the function of three inhibitory cell types in contextual visual processing. Frontiers in Computational Neuroscience 11 (2017), 28.
[13] Lytton, W. W., Seidenstein, A. H., Dura-Bernal, S., McDougal, R. A., Schürmann, F., and Hines, M. L. Simulation neurotechnologies for advancing brain research: parallelizing large networks in NEURON. Neural Computation 28, 10 (2016), 2063–2090.
[14] Markram, H., Muller, E., Ramaswamy, S., Reimann, M. W., Abdellah, M., Sanchez, C. A., Ailamaki, A., Alonso-Nanclares, L., Antille, N., Arsever, S., et al. Reconstruction and simulation of neocortical microcircuitry. Cell 163, 2 (2015), 456–492.
[15] Mazza, M., De Pinho, M., Piqueira, J. R. C., and Roque, A. C. A dynamical model of fast cortical reorganization. Journal of Computational Neuroscience 16, 2 (2004), 177–201.
[16] McDougal, R. A., Hines, M. L., and Lytton, W. W. Reaction-diffusion in the NEURON simulator. Frontiers in Neuroinformatics 7 (2013), 28.
[17] Millman, D., Mihalas, S., Kirkwood, A., and Niebur, E. Self-organized criticality occurs in non-conservative neuronal networks during 'up' states. Nature Physics 6, 10 (2010), 801.
[18] Potjans, T. C., and Diesmann, M. The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. Cerebral Cortex 24, 3 (2012), 785–806.
[19] Ranjan, R., Khazen, G., Gambazzi, L., Ramaswamy, S., Hill, S. L., Schürmann, F., and Markram, H. Channelpedia: an integrative and interactive database for ion channels. Frontiers in Neuroinformatics 5 (2011), 36.
[20] Romaro, C., Najman, F., Dura-Bernal, S., and Roque, A. C. Implementation of the Potjans-Diesmann cortical microcircuit model in NetPyNE/NEURON with rescaling option. BMC Neuroscience, 64 (2018), sup–2.
[21] Senk, J., Hagen, E., van Albada, S. J., and Diesmann, M. Reconciliation of weak pairwise spike-train correlations and highly coherent local field potentials across space. arXiv preprint arXiv:1805.10235 (2018).
[22] Senk, J., Korvasová, K., Schuecker, J., Hagen, E., Tetzlaff, T., Diesmann, M., and Helias, M. Conditions for traveling waves in spiking neural networks. arXiv preprint arXiv:1801.06046 (2018).
[23] Tsodyks, M., Pawelzik, K., and Markram, H. Neural networks with dynamic synapses. Neural Computation 10, 4 (1998), 821–835.
[24] van Albada, S. J., Helias, M., and Diesmann, M. Scalability of asynchronous networks is limited by one-to-one mapping between effective connectivity and correlations. PLoS Computational Biology 11, 9 (2015), e1004490.
[25] Veltz, R., and Sejnowski, T. J. Periodic forcing of inhibition-stabilized networks: nonlinear resonances and phase-amplitude coupling. Neural Computation 27, 12 (2015), 2477–2509.
[26] van Vreeswijk, C., and Sompolinsky, H. Chaotic balanced state in a model of cortical circuits. Neural Computation 10, 6 (1998), 1321–1371.
[27] Wagatsuma, N., Potjans, T. C., Diesmann, M., Sakai, K., and Fukai, T. Spatial and feature-based attention in a layered cortical microcircuit model.