Cortical oscillations implement a backbone for sampling-based computation in spiking neural networks
Agnes Korcsak-Gorzo, Michael G. Müller, Andreas Baumbach, Luziwei Leng, Oliver Julien Breitwieser, Sacha J. van Albada, Walter Senn, Karlheinz Meier, Robert Legenstein, Mihai A. Petrovici
Michael G. Müller, Robert Legenstein
Institute of Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria
{mueller,robert.legenstein}@igi.tugraz.at
June 22, 2020
Abstract
Various data suggest that the brain carries out probabilistic inference. Models that perform inference through sampling are particularly appealing since, instead of requiring networks to perform sophisticated mathematical operations, they can simply exploit stochasticity in neuron behavior. However, sampling from complex distributions is a hard problem. In particular, mixing behavior is often very sensitive to the temperature parameter which controls the stochasticity of the sampler. We propose that background oscillations, a ubiquitous phenomenon throughout the brain, can mitigate this issue and thus implement the backbone for sampling-based computations in spiking neural networks. We first show that in both current-based and conductance-based neuron models, the level of background activity effectively defines the sampling temperature of the network. This mechanism allows brain networks to flexibly control the sampling behavior, either favoring convergence to local optima or promoting mixing. We then demonstrate that background oscillations can in this way structure stochastic computations into discrete sampling episodes. In each such episode, solutions are first explored at high temperatures before annealing to low temperatures favors convergence to a good solution.
Behavioral data show that humans act in a probabilistically optimal way in many scenarios, suggesting that the brain performs probabilistic inference (Pouget et al., 2013; Körding and Wolpert, 2004). This observation has inspired neural network models capable of carrying out probabilistic inference computations. Sampling-based models are especially attractive as they have a number of functional advantages (Fiser et al., 2010) as well as experimental support (Berkes et al., 2011).

Sampling-based computations in neural networks necessitate stochasticity of neuronal activity. In networks of idealized spiking neuron models with stochastic spiking behavior, the network's stationary distribution can be exactly characterized mathematically (Buesing et al., 2011). Although such a characterization is not possible for more biologically realistic models of neural networks, it has been shown that most networks do have a stationary distribution, as long as they are stochastic in some way (Habenschuss et al., 2013).

One source of stochasticity on the level of individual neurons is the large number of inputs that cortical neurons typically receive, resulting in membrane potential fluctuations which give rise to stochastic firing behavior. Theoretical models often assume the limit of infinitely large background firing rates to arrive at Gaussian noise currents (Gerstner et al., 2014). However, firing rates in the brain as well as the number of presynaptic partners of individual neurons are limited. Moreover, activity levels in the brain are rarely constant, and often show cyclic behavior in different frequency bands (Buzsaki, 2006).
Figure 1: Temperature effects in the neural sampling model of Buesing et al. (2011). a) Effect of the temperature T on the escape rate ρ_k for low and high temperatures. b) Spikes from a small network with mutual inhibition (w_ij < 0 for i ≠ j) and heterogeneous biases b for low and high temperatures. Shaded areas show times when exclusively one particular neuron has fired within the last 10 ms, i.e. the network is in the state with the highest probability for these parameters. c) Energy landscape for low and high temperatures. Inset shows the neuron biases. For each of the four neurons, the probability (Eq. 2) that the neuron is exclusively active is shown, as well as the probability of mixed states. At high temperatures, the landscape becomes flatter.

We investigate in this work the effect of limited, varying levels of background input on networks of leaky integrate-and-fire (LIF) neurons. We show that the level of background input shapes the network activity by defining the sampling temperature, an important parameter of sampling models controlling the shape of the distribution of network states. High temperatures flatten the distribution, allowing the network to easily traverse the state space, while low temperatures lead to more pronounced maxima, resulting in more time being spent there. If background activity is oscillatory, then phase-dependent stationary distributions emerge (Habenschuss et al., 2013). We show how its effect on the temperature allows oscillatory background activity to structure sampling-based computations in spiking neural networks (SNNs) into discrete episodes, in which a high-temperature search phase is followed by a low-temperature phase of convergence to a good solution, which also provides a reference time at which solutions can be read out. These observations establish a novel link between the findings of the experimental neurosciences and the understanding of brain computations with theoretical models.
Several SNN models for probabilistic inference have been proposed. The neural sampling model by Buesing et al. (2011) considers idealized stochastic spiking neurons with rectangular post-synaptic potentials (PSPs) of length τ, spiking with instantaneous firing intensity

ρ_k(t) = (1/τ) exp(u_k(t) / T).   (1)

Here, u_k(t) is the membrane potential of neuron k and T is the "temperature", a parameter that scales the stochasticity of the neuron. Using exponential functions as firing intensity functions (also called escape rate functions) has been shown to fit the behavior of cortical neurons very well (Jolivet et al., 2006). Let z(t) denote the state vector of the network at time t, where the state z_k(t) of a neuron k is 1 if the neuron has spiked within [t − τ, t], and 0 otherwise. Let p(z) denote the distribution over states that are visited by the network over time. Buesing et al. (2011) proved that under certain conditions on the membrane potentials, recurrent networks of such neurons generate samples from a Boltzmann distribution

p(z) = (1/Z) exp( (1/T) (zᵀ W z + bᵀ z) ),   (2)

where Z is the normalization constant and W and b are synaptic weights and neuronal biases. Clearly, the temperature T influences the sampled distribution. At high values of T, the distribution is flatter (i.e., the probability differences between different states decrease), and mixing is easier, while at low values of T, energy maxima are more pronounced and the state will tend to stay around modes. Hence, in this model (Eq. 1), any factor that multiplicatively scales u(t) directly sets the sampling temperature (see Fig. 1a for the effect of T on ρ_k(t)). The temperature changes the network behavior (see Fig. 1b for an example with a small network), controlling the energy landscape (see Fig. 1c).

In this work, we use more realistic LIF neurons instead of the abstract neuron model used by Buesing et al. (2011).
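The temperature effect of Eqs. 1 and 2 can be illustrated with a simple Gibbs sampler over binary states; this is not code from the paper, and the weights, biases, and the ½ energy convention below are illustrative assumptions in the spirit of the small network of Fig. 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: four units with mutual inhibition, one unit with a stronger
# bias (all values illustrative, not taken from the paper).
W = -4.0 * (np.ones((4, 4)) - np.eye(4))   # symmetric, zero diagonal
b = np.array([2.0, 1.0, 1.0, 1.0])

def gibbs_sample(W, b, T, steps=20000):
    """Sample z ~ p(z) ∝ exp((z^T W z / 2 + b^T z) / T), cf. Eq. 2."""
    z = np.zeros(len(b))
    samples = np.empty((steps, len(b)))
    for s in range(steps):
        k = rng.integers(len(b))
        # change of the exponent when setting z_k = 1 (W has zero diagonal)
        du = (W[k] @ z + b[k]) / T
        z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-du)))
        samples[s] = z
    return samples

probs = {}
for T in (0.5, 1.5):
    s = gibbs_sample(W, b, T)
    # probability that exactly the biased unit is active
    probs[T] = np.mean((s[:, 0] == 1) & (s[:, 1:].sum(axis=1) == 0))
    print(f"T={T}: P(only biased unit active) = {probs[T]:.2f}")
```

At low T the sampler concentrates on the highest-probability state; at high T the distribution flattens and mixing between states becomes easier.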
In these neuron models, there is no explicit temperature parameter that scales the exponent in a probability distribution (but see Petrovici et al., 2016, for a way of relating LIF neuron spiking to sampling from Boltzmann distributions). We next discuss how multiplicative scaling similar to the T factor in Eq. 1 arises for LIF neurons when background input is present.

For a given input current, cortical neurons are thought to behave rather deterministically (Mainen and Sejnowski, 1995). However, when subject to large numbers of inputs as is the case in vivo, they exhibit the stochastic behavior necessary for networks to carry out sampling tasks (Jolivet et al., 2006). This synaptic bombardment causes fluctuations of the membrane potential. The relation of stochastic input and the resulting membrane fluctuations to the firing behavior of current-based neuron models has been investigated in different ways.

Mensi et al. (2011) fitted a stochastic spike response model (SRM) to a current-based adaptive exponential integrate-and-fire (AdEx) neuron using colored noise current as simulated background input. The resulting SRM can reproduce the AdEx behavior with high accuracy using an escape rate function of the form ρ_k(t) = ρ_0 exp((u_k(t) − u_T) / Δu), where ρ_0 is a scaling parameter, u_T sets the (soft) threshold of the SRM, and Δu defines the degree of stochasticity. Both of the latter parameters are functions of σ, the standard deviation of the white noise input current used to drive the neuron. For increasing levels of σ, both u_T and Δu monotonically increase (Mensi et al., 2011, Fig. 4). Hence, the level of stochastic background input influences the stochasticity of the neuron model.

Analytical expressions for the escape rate function of current-based LIF neurons have also been developed, e.g. by Plesser and Gerstner (2000).
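The role of Δu can be made concrete with a small numerical sketch (illustrative values, not the fitted parameters of Mensi et al., 2011): increasing Δu flattens the escape rate curve, acting exactly like the temperature T in Eq. 1.

```python
import numpy as np

def escape_rate(u, rho0=1.0, u_T=-50.0, delta_u=2.0):
    """SRM escape rate rho(u) = rho0 * exp((u - u_T) / delta_u);
    delta_u (which grows with the background noise level sigma)
    plays the role of a temperature."""
    return rho0 * np.exp((u - u_T) / delta_u)

# With larger background noise, delta_u grows and the rate curve flattens:
# the ratio of firing intensities between two voltages shrinks.
u_low, u_high = -54.0, -48.0
ratios = []
for delta_u in (1.0, 2.0, 4.0):
    r = escape_rate(u_high, delta_u=delta_u) / escape_rate(u_low, delta_u=delta_u)
    ratios.append(r)
    print(f"delta_u={delta_u}: rho(u_high)/rho(u_low) = {r:.1f}")
```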
The results of Plesser and Gerstner (2000) show that the variance of the membrane potential fluctuations multiplicatively scales the membrane potential within the escape rate function. As these fluctuations can be caused by fluctuations of the input current, this similarly suggests an influence of the background input on the stochastic behavior of the neuron.

Given that background activity can influence neuronal stochasticity, we first asked whether background activity does in fact change the sampling behavior of spiking neural networks in a way similar to the influence of the temperature T in idealized neural sampling networks. It is expected that the linear relationship between the input currents and the membrane potential leads to a more direct and more pronounced effect in current-based neuron models (which were also considered in the studies discussed in Sec. 2.2). We therefore first discuss the effect of background activity on networks of LIF neurons with current-based synapses before moving to the more realistic conductance-based case.

Assume that neurons receive background input via exponential current-based synapses (see
Methods 5.1) from Poisson processes with rates ν_e (excitatory input) and ν_i (inhibitory input). Changes of this background input may have a strong effect on a neuron's behavior, potentially placing it in drastically different regimes. To avoid this, it is often assumed that the input is balanced, i.e. changing rates do not alter the mean membrane potential.
Figure 2: Background input rate sets the sampling temperature in networks of current-based LIF neurons. Excitatory and inhibitory background firing rates (a), mean membrane potential ⟨u⟩ (b), and variance σ_u² of the membrane potential (c) for different levels of background input α. d) Illustrative sampling task showing the effect of background input (see Methods 5.5). Four neurons receiving different input currents I_1, …, I_4 (bottom: values of input currents) are connected with lateral inhibition in a winner-take-all (WTA) fashion. Additionally, all neurons receive spiking input from a fifth neuron (f = 75 Hz). e) Probability of each neuron being exclusively active for low (α = 1) and high (α = 5) temperatures (see Methods 5.5). The probability of mixed states is shown in gray. At high temperatures, the energy landscape becomes flatter. f) Probability of each neuron being exclusively active as α increases. g) Normalized energy landscape: at each value of α, the probabilities are divided by the maximum occurring value to offset overall activity changes. Towards high temperatures, responses become more similar. h) Neuron firing rate in response to the stimulus used for fitting (see Methods 5.6). i) Temperature values resulting from fitting an SRM to the LIF neuron at various α. j) Soft threshold values from fitting. k) Firing intensity curves ρ(u) resulting from fitting for α ∈ {1, 3, 5}. With increasing background input rate, the firing intensity becomes flatter.

In current-based models, such a balanced state can easily be achieved if both excitatory and inhibitory input rates change in a linear fashion, i.e.

ν_e = α ν_{e,0} and ν_i = α ν_{i,0},   (3)

where ν_{e,0} and ν_{i,0} are some excitatory and inhibitory base rates, and α is a factor scaling the background input. In this case, a balanced state can be achieved for arbitrary firing rates and synaptic time constants by appropriately setting the weights for excitatory and inhibitory inputs (see Methods 5.2 for details). Ignoring the firing threshold, this results in fluctuations of the membrane potential u(t) with zero mean over time (i.e. ⟨u⟩ = E_L, where E_L is the resting potential) and with variance

σ_u² = α (c_e ν_{e,0} + c_i ν_{i,0}),   (4)

where c_e and c_i are constants (see Methods 5.2). We see that the variance linearly increases with the background input rates, and an arbitrarily high variance can be reached if the background input rates are high enough (Fig. 2a–c shows firing rates, mean of u, and variance of u as functions of α for a LIF neuron with ν_{e,0} = ν_{i,0}). As the variance of u(t) has been shown to determine neuron stochasticity, this suggests that linearly increasing background activity rates (i.e. α) result in linearly increasing temperatures in networks of such neurons.

To show that this is indeed the case, we investigated how the behavior of a network with lateral inhibition (Fig. 2d, see Methods 5.5 for details) changes with increasing background frequency. We find that for high background activity rates, the energy landscape is flatter than for low rates (Fig. 2e). This trend (Fig.
2f) is particularly visible if the state probabilities are compared in a normalized way, offsetting the change of the maximum state probability (Fig. 2g).

To quantify the change of the sampling temperature, we fitted an SRM model to the LIF neuron for different values of α using the fitting method by Jolivet et al. (2006) (see Methods 5.6). We used an SRM model that was identical to the LIF neuron model except for a stochastic firing criterion with instantaneous firing intensity

ρ(u) = (1/Δt) exp((u − u_T) / T),   (5)

where T is the temperature, u_T is the soft threshold (i.e. the value of u where the firing intensity reaches 1/Δt), and Δt is the resolution of the discrete-time simulation. Fig. 2h shows the firing rate of the neuron in response to the stimulus. As the input is balanced, the firing rate does not change with the level of background activity. The temperature values resulting from the fitting procedure (see Methods 5.6 for details) are shown in Fig. 2i. We find that the temperature increases linearly with increasing levels of background activity. The soft threshold u_T shows a slightly increasing linear trend (Fig. 2j). The resulting firing intensity curves are well-aligned, as in the theoretical model (Fig. 2k, cf. Fig. 1a). This shows that balanced background activity can control the sampling temperature without altering the overall level of neuronal responses in current-based models.

The effect of background input on neuron behavior is much more complicated in conductance-based neuron models. In particular, since input currents depend on the membrane potential, background currents can only be balanced at a particular voltage. Nevertheless, the impact of background input on membrane potential fluctuations of conductance-based LIF neurons can be calculated (Zerlaut et al., 2018) (see
Methods 5.3). Generally, both the mean ⟨u⟩ =: µ_u(ν_e, ν_i) and the variance σ_u²(ν_e, ν_i) are bounded and converge towards a finite value as α increases. This suggests a more complex relationship between background rates and the sampling temperature. To see whether background input can still control the sampling temperature over a reasonable range, we investigated four different scenarios (Fig. 3, see Methods 5.4). These scenarios arise from different choices of synaptic conductances for background inputs and different ways of changing the background activity levels:
Unbalanced:
In this scenario, we do not care about the balance of background input. This can result from many parameter settings; for simplicity, we choose ν_e = ν_i. As a result, both µ_u and σ_u² increase with α (Fig. 3a–c, first column).

Balanced at the resting potential E_L: As in the current-based case, we can balance the input at µ_u(ν_e, ν_i) ≡ E_L for ν_e = ν_i (Fig. 3a–c, second column) using the same parameters as in the previous case except for a specifically chosen inhibitory synaptic input conductance (see Methods 5.3).

Balanced at −
55 mV: This mimics the regime of cortical up-states, where neurons have membrane potentials close to the firing threshold (here: u_th = −50 mV). For a given choice of synaptic input conductances, this results from a special way of increasing ν_i with α (i.e. ν_i ≠ ν_e here, see Methods 5.3). Note that balancing here requires some minimal background input level (Fig. 3a–c, third column).
Approximately balanced with high variance:
A different regime results from choosing larger synaptic conductances. This regime is approximately balanced (small change of µ_u), but differs from the previous scenarios in two significant ways: the overall variance is much higher, and the variance decreases with α (Fig. 3a–c, last column). Such a regime might lead to different temperature behavior (e.g. T might decrease with increasing α if the variance defines the temperature).

The different behaviors of membrane potential mean and variance raise the question whether (and what kind of) temperature effects can be seen in these cases. We performed the same analysis as before, now using conductance-based neurons (see Methods 5.1), for the four cases described above. The resulting state probabilities (Fig. 3d) show that temperature effects are indeed present. The effect is strong in the unbalanced and the exactly balanced cases. In the approximately

Figure 3:
Background input rate sets the sampling temperature for conductance-based LIF neurons. Columns show different scenarios of conductance-based background input (from left to right: unbalanced input, input balanced at the resting potential E_L, input balanced at a mean membrane potential value close to the firing threshold, and approximately balanced input with high mean and large variance of u). a) Excitatory and inhibitory background input rates for a given scaling value α. b) Mean membrane potential resulting from background input scaled by α. c) Variance of membrane potential fluctuations resulting from background input scaled by α. d) Probabilities of each neuron being exclusively active (estimated in windows of fixed length) from simulating the network (as in Fig. 2d) for
100 s. Insets show probabilities for low (α = 1) and high (α = 5) temperatures with the mixed state probability shown in gray. e) Normalized state probabilities (as in Fig. 2g).

balanced case, the effect is small but shows the same trend (flatter distribution with increasing α, see Fig. 3d), which is surprising as the variance of u here decreases as α is increased. This can be explained as follows: even as the variance decreases, the overall synaptic conductance evoked by background input grows. Therefore, as α increases, the effect of the background input grows stronger relative to the input from the recurrent network connections, thus leading to more equal responses.

These results suggest that conductance-based models show temperature effects similar to current-based models. To quantify the effect, we repeated the fitting procedure described above. Again, we solely replaced the firing mechanism (see Methods 5.1) without changing the integration of inputs. The resulting model thus is not an SRM, but rather a conductance-based LIF neuron with stochastic firing.

We again performed the fitting for different values of α. The results (Fig. 4) confirm the temperature effect in conductance-based models, with some differences to the current-based case. Fig. 4a shows the neuronal firing rates in response to the stimulus used for fitting. In the unbalanced case, the mean membrane potential increases with α, resulting in increased firing rates. In the balanced and approximately balanced cases, however, the firing rates decrease even though the mean membrane potential stays constant. This marks an important difference to the current-based case. Fig. 4b shows the temperature values resulting from fitting. In all four scenarios, T increases with α, as expected from the responses in the sampling network (cf. Fig. 3d). Fig. 4c shows the soft threshold values.
In contrast to the current-based case, u_T here changes significantly over the range of α values in all cases (as expected from the changing firing rates, Fig. 4a). This shows

Figure 4:
Fitting stochastic models to conductance-based LIF neurons with background input. a)
Firing rate in response to the stimulus used for fitting. Even if the input is balanced or approximately balanced, the firing rates change substantially for different values of α. b) Temperature values resulting from fitting stochastic models to the LIF neurons. Temperatures increase with increasing α in all cases. c) Soft thresholds resulting from fitting. Values change substantially in some cases. d) Firing intensity values ρ(u) resulting from fitting for α ∈ {1, 3, 5}. Behavior is different in each scenario, but the exponentials are never aligned as in the current-based case.

that even in balanced regimes, increased background activity changes the level of neuronal activity in the conductance-based case. This is also visible when comparing the resulting firing intensity functions (Fig. 4d) to those from the current-based case (Fig. 2j). Here, the intensity functions for different values of α do not seem aligned at first glance. However, as both T and u_T increase in a linear fashion with α, there always is a voltage value where all curves meet (see Methods 5.7), as in the theoretical neural sampling model by Buesing et al. (2011). By interpolating between the unbalanced scenario and the scenario balanced at E_L (which only differ in the synaptic conductance for inhibitory background input), we can find parameter values for which the intensities are aligned in a similar point as in the current-based case (see Methods 5.8).

To summarize, we found that the strength of background activity can control the sampling behavior of neural networks. Since background activity levels exhibit oscillatory changes across many different brain areas (Buzsaki, 2006), this effect could have profound implications for the organization of computations in the brain.
The proposed link between activity levels and sampling temperature suggests that brain networks alternate between sampling at high temperatures, allowing rapid traversal of the state space for good mixing, and low temperatures, promoting convergence to states of high probability. We next describe possible functional advantages of such oscillatory sampling networks.

We considered a network consisting of 15 assemblies, each consisting of 3 strongly recurrently connected neurons (Fig. 5a). These assemblies were organized in three groups of five assemblies each (blue, green, and red group in Fig. 5a). Assemblies within a group were connected through inhibitory connections, i.e. each group implements a winner-take-all (WTA) motif of five assemblies. In addition, the n-th assembly in group 2 was linked with excitatory connections to the n-th assembly in group 1 and group 3 (Fig. 5a). Hence, the network distribution consists of five high-probability states where one interlinked assembly triple is active; we term such a high-probability state a solution of the sampling problem. This triple will then inhibit other assemblies due to the WTA

Figure 5:
Background oscillations structure computations into sampling episodes. a)
Network setup. Three winner-take-all groups contain five recurrently connected assemblies each, defining a sampling task with five equal solutions (see text). b) Network activity with oscillating background input. Top: background activity scaling factor α is varied in [0.5, 5] at f = 10 Hz. Middle: spikes from each assembly in
500 ms of a simulation. Bottom: number of distinct solutions the network has visited. Green bars show times when the network state defines a valid solution. c) Probability of the network state encoding a solution depending on the oscillation phase, estimated over t = 100 s. Gray horizontal lines show the estimated probability for constant α values. Inset shows the change of α. d) Mean time until the network has visited all five solutions for oscillating and constant background activity (α ∈ {0.5, 2.5, 5}) for N = 100 runs (whiskers denote SD). Significance: ∗∗ marks p below the significance threshold (Wilcoxon rank-sum test). e) Time between choosing distinct solutions (see
Methods 5.9) for t = 100 s. ∗ marks significance as in d).

structures. We say that the network has found a valid solution if one linked assembly triple is active (more than a fixed fraction of the neurons per assembly fired within the last
10 ms) while all other assemblies remain silent (fewer than a fixed fraction of neurons fired; see Methods 5.9 for details). As the recurrent connectivity within each assembly is rather strong, the network tends to lock into one such solution, making mixing hard. However, the goal of the sampling process is to visit all solutions in a reasonable amount of time. We compared the behavior of the network when α was oscillating in the range considered before (α ∈ [0.5, 5], i.e., total background rates in [2500, 25000] Hz; network activity shown in Fig. 5b) with the behavior when α was constant. To obtain similar levels of activity for low and high background activity, we chose a parameter set between the first and second scenario from above (Fig. 3) which resulted in minimal change of the neurons' firing rates for any value of α (see Methods 5.8).

Fig. 5c shows that background oscillations structure sampling-based computations by defining times when good solutions can be read out from the network. We quantified this by the probability that the current state is a valid solution over the phase of the oscillation (Fig. 5c). At times of low temperature, oscillating networks are much more likely to provide a solution compared to high (dotted line) or medium (dashed line) temperature networks. Networks with constant low temperature provide higher probabilities for valid solutions (dashed-dotted line). However, these networks tend to converge to one solution and stay there for a long time, and thus exhibit much worse mixing behavior. We quantified mixing by measuring (i) the time it takes the network to visit each solution at least once (Fig. 5d), and (ii) the time it takes on average to move from one solution to another one (Fig. 5e). On both measures, the best performance is achieved with oscillatory background activity or intermediate constant background activity levels.
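The trade-off quantified in Fig. 5 can be reproduced in a much-reduced toy model (an illustrative sketch, not the paper's simulation): five equally good solution states connected through a high-energy "mixed" state, sampled by a Metropolis chain whose temperature is either constant or oscillates. The energies, barrier height, and oscillation period below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Five solution states with low energy, connected via one high-energy
# "mixed" state (a stand-in for transitional network states).
N_SOL = 5
MIXED = N_SOL                       # index of the mixed (non-solution) state
E = np.array([0.0] * N_SOL + [3.0])  # energies; barrier height is illustrative

def run(T_of_t, steps=100000):
    """Metropolis sampling with time-dependent temperature T_of_t(t).
    Returns (step at which all solutions were visited or None,
             fraction of time spent in a valid solution)."""
    state, visited, t_all, in_solution = 0, {0}, None, 0
    for t in range(steps):
        # solutions only connect to the mixed state and vice versa
        prop = MIXED if state != MIXED else rng.integers(N_SOL)
        if rng.random() < np.exp(-(E[prop] - E[state]) / T_of_t(t)):
            state = prop
        if state != MIXED:
            in_solution += 1
            visited.add(state)
            if t_all is None and len(visited) == N_SOL:
                t_all = t
    return t_all, in_solution / steps

results = {}
for name, T in [("low T", lambda t: 0.5),
                ("high T", lambda t: 2.5),
                ("oscillating", lambda t: 1.5 + np.sin(2 * np.pi * t / 500))]:
    t_all, p_sol = run(T)
    results[name] = (t_all, p_sol)
    print(f"{name}: visited all solutions by step {t_all}, P(solution) = {p_sol:.2f}")
```

Constant low temperature yields a high solution probability but slow exploration; constant high temperature explores quickly but spends much time in the mixed state; the oscillating schedule combines fast exploration with phases of reliable readout.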
In summary, oscillatory network activity structures sampling-based computations in spiking neural networks. This oscillatory structure can provide good solutions with high probability while inheriting the good mixing properties of high-temperature networks.
We have shown that background input effectively sets the sampling temperature in networks of LIF neurons, both in current-based and in more realistic conductance-based models, and how this effect leads to functional advantages in sampling networks when oscillatory background input is present. Cyclic background activity thus structures sampling-based computations in spiking neural networks by allowing the network to alternate between periods of high and low temperature, performing a kind of annealing process (Kirkpatrick et al., 1983). High temperatures allow traversing the state space rapidly, giving rise to good mixing behavior, while low temperatures promote momentary convergence to good solutions. Such a form of computing in discrete steps in brain networks has previously been suggested based on phase-locked shifts of attention in the visual stream (Buschman and Miller, 2010). Low-temperature periods furthermore provide a reference time for reading out good network solutions with high probability, reminiscent of phase-based neural codes in which firing at particular phases conveys information (O'Keefe and Recce, 1993).

Habenschuss et al. (2013) have previously shown that in the presence of periodic input, spiking networks have a phase-specific stationary distribution which is influenced by the network parameters and the properties of the inputs. We have identified in this work the specific nature of the phase-dependent distributions by showing that changing background input levels result in temperature changes without otherwise altering responses.

The general functional benefits of background oscillations shown in this work are intriguing as cyclic activity is prevalent throughout the brain. It has recently been suggested that although cyclic activity is routinely separated into distinct frequency bands, oscillations in fact have a similar function throughout the cortex (Lundqvist et al., 2020). We have used a modulation frequency of
10 Hz, corresponding to high theta-band or low alpha-band oscillations, but in principle, cyclic sampling episodes can take place at any frequency (e.g. beta-band oscillations, prominent in the frontal lobe).

A number of functional roles have been proposed for oscillations (Lengyel et al., 2005). Aitchison and Lengyel (2016) showed how oscillations between excitatory and inhibitory populations implement Hamiltonian Monte Carlo. Such excitatory-inhibitory oscillations are usually quite fast (i.e. gamma-band). We focus here on activity on longer time scales. It has furthermore been suggested on theoretical grounds that during the hippocampal theta cycle, modulation of GABA_B synapses performs a process similar to simulated annealing in a model of population dynamics (Sohal and Hasselmo, 1998b). Such a mechanism was shown to be advantageous for sequence disambiguation (Sohal and Hasselmo, 1998a). In this work we propose that temperature control takes place on the level of individual neurons via input regardless of the synapse type; thus, the mechanism we propose has a much more general scope.

The benefit of providing a particular time window for reading out good solutions suggests that using temperature oscillations might enhance any-time computations with neuromorphic computing platforms, which often use sampling-based computations. There, while theory guarantees that spiking networks converge towards a stationary distribution (Habenschuss et al., 2013) which is shaped so that solution states have a high probability (Jonke et al., 2016), it is not clear at any given point in time whether the current state is a solution candidate or a transitional state. Temperature oscillations, e.g. provided by activity oscillations, could enhance such networks by mitigating this problem.

Methods
In the current-based LIF model, the membrane potential u(t) is updated according to

C_m du/dt = -g_L (u - E_L) + I_e + I_i + I_ext, (6)

where C_m is the membrane capacitance, g_L is the leak conductance, E_L is the leak reversal potential, I_e(t) and I_i(t) are the synaptic currents from excitatory and inhibitory input at time t, and I_ext is an externally injected current. The synaptic currents are modeled as exponentials and updated from synaptic input via

dI_e/dt = -I_e/tau_e + sum_{j in PRE_e} w_j S_j(t),    dI_i/dt = -I_i/tau_i + sum_{j in PRE_i} w_j S_j(t), (7)

where S_j(t) = sum_f delta(t - t_j^(f)) is the spike train of presynaptic neuron j, w_j is the synaptic weight from presynaptic neuron j, and tau_e and tau_i are the time constants of excitatory and inhibitory synapses, respectively. The sums run over the sets of all excitatory and inhibitory presynaptic partners. The LIF neuron generates a spike every time u(t) reaches a threshold u_th; this also triggers a reset:

if u(t) >= u_th: u <- u_reset. (8)

After spiking, neurons are clamped to u_reset for the duration of an absolute refractory period Delta_abs.

The stochastic model we fitted to data generated from such a LIF neuron is identical to the LIF neuron except for the deterministic spike generation mechanism, which is replaced by a stochastic spike criterion with an instantaneous firing intensity of

rho(t) = (1/Delta_t) exp((u(t) - u_T)/T), (9)

where T and u_T are parameters (temperature and soft threshold) obtained from the fitting method. Spikes are drawn from a Poisson process with this instantaneous intensity. In our discrete-time simulations, we calculate the probability of a spike within each simulation time step Delta_t, which is

Pr(spike in [t, t + Delta_t] | u(t)) = 1 - exp(-rho(t) Delta_t), (10)

and draw spikes accordingly. The neuron parameters are given in Tab.
1. In the conductance-based LIF model, u(t) evolves according to

C_m du/dt = -g_L (u - E_L) - g_e (u - E_e) - g_i (u - E_i) + I_ext, (11)

where g_e(t) and g_i(t) are the excitatory and inhibitory conductances at time t, and E_e and E_i are the excitatory and inhibitory reversal potentials, respectively. The conductances are modeled as exponentials and updated from synaptic input via

dg_e/dt = -g_e/tau_e + sum_{j in PRE_e} g_j S_j(t),    dg_i/dt = -g_i/tau_i + sum_{j in PRE_i} g_j S_j(t), (12)

where g_j is the synaptic conductance of the synapse with presynaptic neuron j (other terms as in Eq. 7). Spike generation in the deterministic conductance-based model was identical to the deterministic current-based model (Eq. 8), and the stochastic model similarly only replaces the spike generation mechanism with stochastic firing (Eq. 9). The neuron parameters are given in Tab. 1.

Background activity was provided to each deterministic LIF neuron via Poisson input at excitatory rate nu_e and inhibitory rate nu_i. In the current-based case, the input rates are nu_e = nu_i = alpha*nu with nu = 5000 Hz, and the input is scaled by synaptic weights w_e and w_i (see below, Methods). In the conductance-based case, the input rates nu_e and nu_i and the synaptic conductances g_{e,bg} = gbar_{e,bg} g_L and g_{i,bg} = gbar_{i,bg} g_L depend on the scenario (see below, Methods).

parameter          C_m  g_L  E_L  E_e  E_i  tau_e  tau_i  w    u_th  u_reset  Delta_abs
unit               pF   nS   mV   mV   mV   ms     ms     pA   mV    mV       ms
current-based      250  25   -65  -    -    2      3      500  -50   -65      3
conductance-based  250  25   -65  0    -80  2      3      -    -50   -65      3

Table 1: Neuron parameters.

Assume a LIF neuron receives background Poisson input in the form of an excitatory spike train S_e(t) and an inhibitory spike train S_i(t), with rates nu_e (excitatory input) and nu_i (inhibitory input). The inputs are scaled by synaptic weights w_e (excitation) and w_i (inhibition).
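The neuron models above (Eqs. 6-10) can be sketched in a few lines of standalone NumPy code. This is our own sketch, not the paper's Brian 2 implementation; it uses the current-based parameters of Tab. 1, drops all synaptic input in favor of a constant I_ext, and the values of T and u_T are placeholders for fit results.

```python
import numpy as np

# Current-based parameters from Tab. 1; T and u_T below are placeholders
# for values that would come out of the fitting procedure described later.
C_m, g_L, E_L = 250.0, 25.0, -65.0        # pF, nS, mV
u_reset, d_abs, dt = -65.0, 3.0, 0.05     # mV, ms, ms

def run_stochastic_lif(I_ext=0.0, T=2.0, u_T=-52.0, t_max=1000.0, seed=1):
    """Euler integration of Eq. 6 (constant I_ext only, no synapses)
    with the escape-rate spiking mechanism of Eqs. 9-10."""
    rng = np.random.default_rng(seed)
    u, refrac, n_spikes = E_L, 0.0, 0
    for _ in range(int(t_max / dt)):
        if refrac > 0.0:                            # absolute refractory period
            refrac -= dt
            continue
        u += dt / C_m * (-g_L * (u - E_L) + I_ext)  # Eq. 6; pA*ms/pF = mV
        rho = np.exp((u - u_T) / T) / dt            # Eq. 9
        if rng.random() < 1.0 - np.exp(-rho * dt):  # Eq. 10
            n_spikes += 1
            u, refrac = u_reset, d_abs              # reset, Eq. 8
    return n_spikes
```

Raising the temperature T flattens the escape rate around u_T, so the neuron fires even when its mean potential stays below threshold; a stronger drive I_ext has the same qualitative effect.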
The resulting input current to the neuron is

I_bg(t) = w_e int_0^inf eps_e(s) S_e(t - s) ds + w_i int_0^inf eps_i(s) S_i(t - s) ds, (13)

where eps_e(t) and eps_i(t) are the unweighted responses to a single excitatory and inhibitory spike, respectively. Applying Campbell's theorem (Papoulis and Pillai, 2002) gives the expected value of this current,

<I_bg(t)> = w_e nu_e epsbar_e + w_i nu_i epsbar_i, (14)

where epsbar_e and epsbar_i are the integrals over a single unweighted excitatory and inhibitory synaptic response, respectively. If the excitatory and inhibitory background input rates are both linearly scaled versions of some base firing rate, i.e.

nu_e = alpha * nu_{e,0} and nu_i = alpha * nu_{i,0}, (15)

we get a balanced state by setting

w_e = w / (nu_{e,0} epsbar_e) and w_i = -w / (nu_{i,0} epsbar_i), (16)

where w is some scalar.

We are interested in the effect of such background input for different input rates. A linear change of both excitatory and inhibitory rates, i.e. an increase or decrease of alpha, adds no offset to the membrane potential u(t), as the mean current is always zero. To calculate the variance of the membrane potential fluctuations, we assume exponential synapses (as in Eq. 7) with time constants tau_e (excitation) and tau_i (inhibition); for unit-amplitude exponential kernels, epsbar_e = tau_e and epsbar_i = tau_i. In this case, the postsynaptic current and the effect of leaky integration by the LIF neuron can be combined into a single filter describing the postsynaptic potential (with double-exponential shape). This filter can be used to calculate the membrane potential fluctuations resulting from the background input (again using Campbell's theorem) for a passive membrane, i.e. a neuron without a spike generation mechanism. The variance is given by

<u^2> = alpha (w/g_L)^2 [ 1/(2 nu_{e,0} (tau_m + tau_e)) + 1/(2 nu_{i,0} (tau_m + tau_i)) ] (17)

for the choice of w_e and w_i described above (here, tau_m = C_m/g_L).
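Eqs. 14-17 can be checked by Monte Carlo. The sketch below is ours, and the rates, w, and time constants are illustrative: with the balancing weights of Eq. 16, the mean background current stays at zero for any alpha, while the variance of the filtered input grows linearly with alpha.

```python
import numpy as np

def bg_current_stats(alpha, nu_e0=2500.0, nu_i0=2500.0, w=1.0,
                     tau_e=2e-3, tau_i=3e-3, dt=2e-4, t_max=20.0, seed=0):
    """Mean and variance of the summed background current I_bg (Eq. 13)
    for excitatory/inhibitory Poisson input at rates alpha*nu_e0 and
    alpha*nu_i0, with the balancing weights of Eq. 16 (epsbar = tau for
    unit-amplitude exponential kernels)."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    w_e, w_i = w / (nu_e0 * tau_e), -w / (nu_i0 * tau_i)
    counts_e = rng.poisson(alpha * nu_e0 * dt, n)   # spikes per time step
    counts_i = rng.poisson(alpha * nu_i0 * dt, n)
    I = np.empty(n)
    I_e = I_i = 0.0
    for k in range(n):                              # exponential synapses, Eq. 7
        I_e = I_e * (1.0 - dt / tau_e) + w_e * counts_e[k]
        I_i = I_i * (1.0 - dt / tau_i) + w_i * counts_i[k]
        I[k] = I_e + I_i
    return I.mean(), I.var()
```

For instance, quadrupling alpha should leave the mean near zero but roughly quadruple the variance, in line with Eq. 17.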
This can be written in the form <u^2> = alpha (c_e/nu_{e,0} + c_i/nu_{i,0}) (see text), where c_e = (w/g_L)^2 / (2(tau_m + tau_e)) and c_i = (w/g_L)^2 / (2(tau_m + tau_i)).

The impact of input in the form of excitatory and inhibitory Poisson spikes on conductance-based LIF neurons has been investigated by Zerlaut et al. (2018), who have derived approximations of the mean and variance of the membrane potential.

Table 2: Background input parameters (nu_e, nu_i, gbar_{e,bg}, gbar_{i,bg}, g_{stim,max}) for the four scenarios: unbalanced, balanced at E_L, balanced at mubar_u = -55 mV (nu_i from Eq. 20), and approximately balanced. The rates scale linearly with alpha in all scenarios.
The mean membrane potential is approximated as

mu_u(nu_e, nu_i) = (nu_e tau_e g_{e,bg} E_e + nu_i tau_i g_{i,bg} E_i + g_L E_L) / mu_G(nu_e, nu_i), (18)

with the total conductance mu_G(nu_e, nu_i) = nu_e tau_e g_{e,bg} + nu_i tau_i g_{i,bg} + g_L. Here, g_{e,bg} and g_{i,bg} are the synaptic conductances of synapses for excitatory and inhibitory background input, respectively. The variance of the membrane potential fluctuations is

sigma^2(nu_e, nu_i) = nu_e ( g_{e,bg}/mu_G(nu_e,nu_i) * (E_e - mu_u(nu_e,nu_i)) tau_e )^2 / ( 2 (C_m/mu_G(nu_e,nu_i) + tau_e) )
                    + nu_i ( g_{i,bg}/mu_G(nu_e,nu_i) * (E_i - mu_u(nu_e,nu_i)) tau_i )^2 / ( 2 (C_m/mu_G(nu_e,nu_i) + tau_i) ). (19)

In this work, we use nu_e = alpha * nu_{e,0} for the excitatory rate. The inhibitory rate is nu_i = alpha * nu_{i,0} unless we balance the input at a value != E_L. In that case, we set mu_u to some target value mubar_u. Assuming given values of nu_e, g_{e,bg}, g_{i,bg}, tau_e and tau_i, such a balance can be obtained by choosing the inhibitory rate as

nu_i = nu_e tau_e g_{e,bg} (E_e - mubar_u) / (tau_i g_{i,bg} (mubar_u - E_i)) + g_L (E_L - mubar_u) / (tau_i g_{i,bg} (mubar_u - E_i)). (20)

If we wish to balance the input at mubar_u = E_L, we can again choose nu_i = alpha * nu_{i,0} and balance the input by choosing the conductance of inhibitory background input depending on the other parameters as

g_{i,bg} = g_{e,bg} tau_e (E_L - E_e) / (tau_i (E_i - E_L)). (21)

We investigate four scenarios for conductance-based background input, chosen to cover a range of behaviors of mu_u(nu_e, nu_i) and sigma^2(nu_e, nu_i) for increasing background input rates nu_e and nu_i (see text). There are multiple degrees of freedom which can be used to achieve balanced or unbalanced dynamics (see Methods 5.3). For simplicity, we restrict ourselves to the case of nu_e = nu_i where possible (in all cases except when balancing at a membrane potential != E_L, where changing the rates is required).
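Eqs. 18-20 translate directly into code. This is our own sketch with the conductance-based parameters of Tab. 1; the background conductances are illustrative placeholders, since the actual values depend on the scenario (Tab. 2).

```python
C_m, g_L = 250.0, 25.0                 # pF, nS
E_L, E_e, E_i = -65.0, 0.0, -80.0      # mV
tau_e, tau_i = 2e-3, 3e-3              # s
g_e_bg, g_i_bg = 0.5, 0.5              # nS, placeholder background conductances

def mu_G(nu_e, nu_i):
    """Total input conductance (denominator of Eq. 18); rates in Hz."""
    return nu_e * tau_e * g_e_bg + nu_i * tau_i * g_i_bg + g_L

def mu_u(nu_e, nu_i):
    """Mean membrane potential, Eq. 18."""
    return (nu_e * tau_e * g_e_bg * E_e + nu_i * tau_i * g_i_bg * E_i
            + g_L * E_L) / mu_G(nu_e, nu_i)

def sigma2(nu_e, nu_i):
    """Variance of the membrane potential fluctuations, Eq. 19."""
    mg, mu = mu_G(nu_e, nu_i), mu_u(nu_e, nu_i)
    tau_eff = C_m / mg * 1e-3          # pF/nS = ms, converted to s
    return sum(nu * (g / mg * (E - mu) * tau) ** 2 / (2.0 * (tau_eff + tau))
               for nu, g, E, tau in ((nu_e, g_e_bg, E_e, tau_e),
                                     (nu_i, g_i_bg, E_i, tau_i)))

def balancing_nu_i(nu_e, mu_bar):
    """Inhibitory rate that pins the mean potential at mu_bar, Eq. 20."""
    return (nu_e * tau_e * g_e_bg * (E_e - mu_bar)
            + g_L * (E_L - mu_bar)) / (tau_i * g_i_bg * (mu_bar - E_i))
```

Plugging the rate from Eq. 20 back into Eq. 18 returns exactly the target mean, which makes for a convenient consistency check.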
Otherwise, we simply vary the inhibitory conductance while leaving the excitatory conductance unchanged in the first three scenarios. The approximately balanced case features high variance, which requires larger values of both the excitatory and the inhibitory background input conductances. All parameters are given in Tab. 2.

To illustrate the effect of the background input activity on the sampling behavior of an SNN, we used a simple sampling task (see Fig. 2d). The SNN consisted of 4 neurons receiving bias input by injecting currents of amplitudes I_ext = [40, , ,
40] pA, respectively. Each neuron additionally received input from an external neuron (Poisson spiking at f = 75 Hz; synaptic weight w_in = 3000 pA in the current-based case; synaptic conductance g_in = g_{stim,max} in the conductance-based case, see Tab. 2 and Methods 5.6). Neurons had inhibitory lateral connections (weight w_inh = -w_in or conductance g_inh = 3 g_in). We ran this network for
100 s. From the spikes of each neuron, we computed network states as proposed by Berkes et al. (2011) by setting the state z_j of each neuron j to 1 if the neuron fired within a preceding time window and otherwise to 0. This allowed us to estimate the fraction of time in which one neuron was exclusively active (i.e. z_j = 1 for some j = j0 and z_k = 0 for all k != j0) and in which the state was mixed (i.e. z_j = 1 for more than one j).

To fit neuron models with stochastic spike generation to data from deterministic neurons, we followed the method for fitting escape-rate functions described by Jolivet et al. (2006). We estimated the spiking probability given the membrane potential u using a stimulus consisting of 100 inputs (80% excitatory), each firing according to a Poisson process with f_stim = 5 Hz. Each input had a synaptic weight drawn from a uniform distribution in [0, ] (current-based model) or [0, g_{stim,max}] (conductance-based case, given for each scenario in Tab. 2). This stimulus was presented to the deterministic LIF models, recording the resulting spike times. It was then presented to a passive version of the stochastic neurons (i.e. without a firing mechanism), which were reset at every spike of the deterministic model. Binned histograms of u of the passive model at all times and at spike times of the original model allow estimating the firing probability p(spike | u) (see Jolivet et al., 2006). To fit the model, we insert Eq. 9 into Eq. 10 and reformulate the result to get

(u - u_T)/T = log( -log( 1 - p(spike | u) ) ), (22)

where we perform linear regression on the right-hand side to get values of T and u_T. The shape of p(spike | u) is approximately Gaussian.
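The linearization in Eq. 22 can be sanity-checked on synthetic data: generating spike probabilities from known T and u_T via Eqs. 9-10 and regressing log(-log(1 - p)) against u recovers the parameters. The sketch below is ours and skips the histogram-based estimation of p(spike | u); note that the factor Delta_t cancels between Eqs. 9 and 10.

```python
import numpy as np

def fit_escape_rate(u, p_spike):
    """Recover temperature T and soft threshold u_T by linear regression
    on Eq. 22: log(-log(1 - p)) = (u - u_T) / T."""
    y = np.log(-np.log(1.0 - p_spike))
    slope, intercept = np.polyfit(u, y, 1)   # y = u/T - u_T/T
    T = 1.0 / slope
    return T, -intercept * T

# Synthetic probabilities from known parameters (Eqs. 9 and 10;
# rho * dt = exp((u - u_T)/T), so dt drops out of Eq. 10)
T_true, uT_true = 2.5, -52.0
u = np.linspace(-60.0, -54.0, 50)
p = 1.0 - np.exp(-np.exp((u - uT_true) / T_true))
T_fit, uT_fit = fit_escape_rate(u, p)
```

On real histogram estimates, the regression is restricted to the sub-threshold branch of p(spike | u), as described below.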
We found that the best fits result from using only the values of p(spike | u) for u < arg max_u p(spike | u), except for the case in which we balance at E_L, where we used all values of p(spike | u) until they were no longer convex (starting from low values of u).

The stochastic models were evaluated by simulating 1000 deterministic and 1000 stochastic versions of the model using a new stimulus. From these runs, the time-varying firing intensities nu_LIF(t) and nu_fit(t) were estimated. A criterion measuring the quality of fit was used to assess how well the fitted models match the originals; it was calculated as

M_d = 2 int nu_LIF(t) nu_fit(t) dt / ( int nu_LIF(t)^2 dt + int nu_fit(t)^2 dt ) (23)

for every model. This similarity criterion, which determines how well the firing intensities match, is inspired by Mensi et al. (2011). We found that the stochastic models were generally capable of reproducing the LIF behavior reasonably well (i.e. high values of M_d in most scenarios, Fig. 6). The fitting method has difficulties (and the quality of fit decreases) if either the variance is high (large alpha in all scenarios, all models in the approximately balanced scenario) or if the mean membrane potential is far from the threshold (scenario balanced at E_L). We used the results from fitting only for elucidating the temperature effect in networks of LIF neurons with background input, so any errors resulting from imperfect fits did not carry over to the experiments showing the functional advantages of background oscillations (where we again used LIF neurons and spiking background input for our simulations).

The M_d criterion as stated by Mensi et al. (2011, Eq. 16) seems to contain an error; it is therefore slightly adapted here so that its properties match those discussed by Mensi et al., i.e. a value of 1 indicates a perfect match, while a value of 0 indicates no match (e.g. if nu_fit(t) = 0 everywhere).
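Eq. 23 in code; the transcription is direct, but the rectangular discretization of the integrals is our choice:

```python
import numpy as np

def M_d(nu_lif, nu_fit, dt=1.0):
    """Similarity of two firing-intensity traces, Eq. 23: 1 for a perfect
    match, 0 for no match (e.g. when nu_fit is identically zero)."""
    num = 2.0 * np.sum(nu_lif * nu_fit) * dt
    den = np.sum(nu_lif ** 2) * dt + np.sum(nu_fit ** 2) * dt
    return num / den if den > 0.0 else 0.0
```

For non-negative traces, the Cauchy-Schwarz inequality together with the AM-GM inequality bounds M_d between 0 and 1, with 1 attained only for identical traces.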
Figure 6: Quality of fit for each alpha value in each of the considered scenarios. The quality criterion M_d was calculated according to Eq. 23. A value of M_d = 1 indicates a perfect match of the estimated time-varying firing intensities of the original LIF model and the fitted stochastic model, while M_d = 0 indicates no match between the firing intensities.

In the theoretical model of Buesing et al. (2011), the firing intensities rho(u) are aligned in some point rho_0 regardless of the temperature value T. We show here that the same holds for the intensity curves of conductance-based LIF neurons when the temperature T and the soft threshold u_T both scale linearly with alpha (cf. Fig. 4). Assume that the relationship between alpha and both T and u_T is linear, i.e.

T(alpha) =: T_1 alpha + T_0 and u_T(alpha) =: u_{T,1} alpha + u_{T,0}. (24)

Then, for firing intensities of the form

rho(u; alpha) = (1/Delta_t) exp( (u - u_T(alpha)) / T(alpha) ) = (1/Delta_t) exp( (u - u_{T,1} alpha - u_{T,0}) / (T_1 alpha + T_0) ), (25)

the intensities are always aligned in some intensity value rho_0 := rho(u_0; alpha) at some constant u_0 for all values of alpha. This point is given by

rho_0 = (1/Delta_t) exp(-u_{T,1}/T_1) and u_0 = -u_{T,1} T_0 / T_1 + u_{T,0}. (26)

Note that rho_0 might be very small or large depending on the slopes of T(alpha) and u_T(alpha).

The unbalanced and balanced at E_L scenarios differ only in the inhibitory conductance g_{i,bg}. We see in Fig. 4 that in the former case, increasing alpha leads to an offset of the rho-curves towards smaller values of u, while in the latter, larger values of alpha correspond to rho-curves shifted towards larger u values. This suggests that there are scenarios in between where the rho-curves overlap in a similar fashion to the current-based case. When interpolating between these two conditions, linearly changing g_{i,bg} as well as g_{stim,max}, it is possible to obtain scenarios where the rho-curves align at a particular value of rho_0, or any other value.
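Eq. 26 is easy to verify numerically: for linear T(alpha) and u_T(alpha), the curves rho(u; alpha) all pass through the point (u_0, rho_0). A small check, with arbitrary illustrative coefficients rather than fitted values:

```python
import numpy as np

dt = 0.05  # ms

def rho(u, alpha, T1, T0, uT1, uT0):
    """Firing intensity of Eq. 25 with linear T(alpha) and u_T(alpha)."""
    return np.exp((u - uT1 * alpha - uT0) / (T1 * alpha + T0)) / dt

def alignment_point(T1, T0, uT1, uT0):
    """Common crossing point of the intensity curves, Eq. 26."""
    u0 = uT0 - uT1 * T0 / T1
    rho0 = np.exp(-uT1 / T1) / dt
    return u0, rho0

# Illustrative coefficients (not fitted values)
T1, T0, uT1, uT0 = 0.5, 1.0, -1.0, -52.0
u0, rho0 = alignment_point(T1, T0, uT1, uT0)
```

Evaluating rho(u0, alpha) for several alpha values returns the same intensity rho0 each time, confirming the alignment.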
It is also possible to choose parameters resulting in an (almost) constant u_T, resulting in alignment around rho_0 = 1/Delta_t.

For the sampling task described below (Methods 5.9), we found it advantageous to minimize the change of the firing frequency over the range of alpha. This ensures that when comparing oscillating background input to constant background levels, the latter show similar firing rates for all scenarios

Figure 7: Parameters used for the sampling task with background oscillations, tuned to have similar firing rates for any value of alpha. a) Background input rates per alpha. b) Mean membrane potential. c) Variance of membrane potential fluctuations. d) Firing rate in response to the stimulus used for fitting. e) Temperature values resulting from fitting, as well as linear fit (gray). f)
Soft threshold values resulting from fitting, as well as linear fit (gray). g) Firing intensities resulting from fitting. Due to noise in the fitting procedure (resulting in a too large value of u_T at alpha = 5), the curves do not seem aligned. h) Firing intensities using linear fits of the values of T and u_T (see gray lines in panels e and f), showing good alignment.

(i.e. leading to fair conditions for comparing the behavior; for the proposed computational function of oscillations, it is not problematic if there is little or no network activity during some part of the cycle). For the stimulus used above (for fitting), this was the case for a scaling factor of gbar_{i,bg} = 0. with g_{stim,max} = 17 nS (see Fig. 7).

To illustrate the advantage of oscillatory background input, we constructed a sampling task with several equivalent solutions which are far apart in the state space, thus making mixing hard. The circuit consisted of 3 winner-take-all (WTA) groups (Fig. 5a). Each group contained 5 assemblies, each formed by 3 neurons with strong recurrent connectivity (all neuron pairs bidirectionally connected) and lateral inhibition (bidirectional connections between all neuron pairs that are not part of the same assembly). Between groups, the n-th assemblies were bidirectionally linked in the form of a chain: one neuron of the n-th assembly in the first group was connected to one neuron of the n-th assembly in the second group, and a different neuron in this assembly was connected to one neuron of the n-th assembly of the third group (see Fig. 5a). Synapse parameters are given in Tab. 3. As there are no input units providing activity to the network, it was necessary to inject a current of I_ext = 400 pA into each neuron so that neurons did not remain silent. The other neuron parameters used for this task are described above (Methods 5.8).

This circuit defines a sampling task with 5 equally probable solutions.
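The WTA chain described above can be laid out as a weight matrix. The indexing scheme and matrix representation below are our own sketch; we take the lateral inhibition to connect neurons of different assemblies within the same group.

```python
import numpy as np

G, A, N = 3, 5, 3                        # groups, assemblies/group, neurons/assembly
g_rec, g_inh, g_link = 8.5, -17.0, 8.5   # nS, from Tab. 3

def idx(g, a, k):
    """Flat index of neuron k in assembly a of group g."""
    return (g * A + a) * N + k

W = np.zeros((G * A * N, G * A * N))
for g in range(G):
    for a in range(A):
        for k1 in range(N):              # recurrent excitation inside the assembly
            for k2 in range(N):
                if k1 != k2:
                    W[idx(g, a, k1), idx(g, a, k2)] = g_rec
        for a2 in range(A):              # lateral inhibition between assemblies
            if a2 != a:
                for k1 in range(N):
                    for k2 in range(N):
                        W[idx(g, a, k1), idx(g, a2, k2)] = g_inh
for a in range(A):                       # chain links between matching assemblies
    W[idx(0, a, 0), idx(1, a, 0)] = W[idx(1, a, 0), idx(0, a, 0)] = g_link
    W[idx(1, a, 1), idx(2, a, 0)] = W[idx(2, a, 0), idx(1, a, 1)] = g_link
```

The matrix is symmetric, since all connections are bidirectional; each of the 15 assemblies contributes 6 excitatory recurrent entries, each group contributes 180 inhibitory entries, and the chain adds 20 link entries.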
A network state is defined to encode the n-th solution if the n-th assembly in each group is simultaneously active and all other assemblies are inactive. An assembly is regarded as active if enough of its neurons fired within the last
10 ms; otherwise, it is regarded as inactive. This definition allows characterizing the network state at each time step of the discrete-time simulation as either a solution state (with a certain solution id) or a state not encoding a solution.

connection   g     E_syn  delays
unit         nS    mV     ms
recurrent    8.5   0      ~ U(1, )
inhibitory   -17   -80    0.1
link         8.5   0      ~ U(1, )

Table 3: Connection parameters for the oscillation experiment. Excitatory connections have random synaptic delays (U(a, b) denotes a uniform distribution in [a, b]).

Figure 8: Comparison of network activity for different background input conditions (as in Fig. 5b); see Fig. 5c-e for statistics over many runs. a) Oscillating background activity with alpha in [0. , ]. b) Constant background activity with alpha = 0. . c) Constant background activity with alpha = 2. . d) Constant background activity with alpha = 5.

Background input was provided to the network via Poisson spikes. Each neuron received independent background input, with rates scaled by alpha as in previous experiments. The scaling factor alpha was sinusoidally modulated over time, with alpha(t) in [0. , ] and modulation frequency f = 10 Hz (see Fig. 5b, top). We compare the results in this case to the results when alpha is kept constant (alpha in {0. , 2. , 5}). Fig. 8 shows sample behavior for the different cases, highlighting the different behavioral regimes (e.g. locking into one solution for constant alpha = 0. , see text).

To estimate the probability of the network state encoding a solution, we ran the network for
100 s in each case. For the constant-alpha cases, we report the fraction of network states that encode one of the 5 solutions. For the oscillatory case, we estimated the phase-aligned fractions of solution states (Fig. 5c).

We tested the mixing behavior in two ways. First, we estimated the time it took to switch between solutions over
100 s of simulation time. Switching times were defined as the difference between the time the network state changed to any solution state and the time the network state next changed to a solution state for a different solution (i.e. the difference between solution state onset times; Fig. 5e shows mean and SD). We also estimated the time it took to find all solutions by running the network N = 100 times for
20 s for each of the four background input setups. We recorded how long it took for the network state to visit each of the 5 solutions in each of these simulations (Fig. 5d shows mean and SD). In some rare cases, the simulation time was not enough for the network to visit all solutions; these runs were discarded. Significance values were calculated using the Wilcoxon rank-sum test.
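Under one plausible reading of this definition (onset-to-onset, with the reference restarting at every solution onset; the helper name is ours), the switching-time computation looks like this:

```python
def switching_times(onsets):
    """Switching times from a chronological list of (time, solution_id)
    pairs marking solution-state onsets: the interval from the most recent
    solution onset to the next onset of a *different* solution."""
    times = []
    t_ref, id_ref = onsets[0]
    for t, sid in onsets[1:]:
        if sid != id_ref:
            times.append(t - t_ref)
        t_ref, id_ref = t, sid       # every onset restarts the reference
    return times
```

For onsets [(0, 1), (5, 1), (12, 2), (20, 3)], this yields the switching times [7, 8]: the repeated onset of solution 1 at t = 5 restarts the clock.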
All simulations were performed with Brian 2 (Stimberg et al., 2019) with a resolution of Delta_t = 0.05 ms.

This work was supported by the Austrian Science Fund (FWF): I 3251-N33 and the European Union project
References
L. Aitchison and M. Lengyel. The Hamiltonian brain: efficient probabilistic inference with excitatory-inhibitory neural circuit dynamics. PLoS Computational Biology, 12(12), 2016.

P. Berkes, G. Orbán, M. Lengyel, and J. Fiser. Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science, 331(6013):83-87, 2011.

L. Buesing, J. Bill, B. Nessler, and W. Maass. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons. PLoS Computational Biology, 7(11):e1002211, 2011.

T. Buschman and E. Miller. Shifting the spotlight of attention: evidence for discrete computations in cognition. Frontiers in Human Neuroscience, 4:194, 2010. doi: 10.3389/fnhum.2010.00194.

G. Buzsaki. Rhythms of the Brain. Oxford University Press, 2006.

J. Fiser, P. Berkes, G. Orbán, and M. Lengyel. Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences, 14(3):119-130, 2010.

W. Gerstner, W. M. Kistler, R. Naud, and L. Paninski. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge University Press, 2014.

S. Habenschuss, Z. Jonke, and W. Maass. Stochastic computations in cortical microcircuit models. PLoS Computational Biology, 9(11), 2013.

R. Jolivet, A. Rauch, H.-R. Lüscher, and W. Gerstner. Predicting spike timing of neocortical pyramidal neurons by simple threshold models. Journal of Computational Neuroscience, 21(1):35-49, 2006.

Z. Jonke, S. Habenschuss, and W. Maass. Solving constraint satisfaction problems with networks of spiking neurons. Frontiers in Neuroscience, 10:118, 2016.

S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671-680, 1983.

K. P. Körding and D. M. Wolpert. Bayesian integration in sensorimotor learning. Nature, 427(6971):244-247, 2004.

M. Lengyel, Z. Huhn, and P. Érdi. Computational theories on the function of theta oscillations. Biological Cybernetics, 92(6):393-408, 2005.

M. K. Lundqvist, A. M. Bastos, and E. K. Miller. Preservation and changes in oscillatory dynamics across the cortex. bioRxiv, 2020.

Z. F. Mainen and T. J. Sejnowski. Reliability of spike timing in neocortical neurons. Science, 268(5216):1503-1506, 1995.

S. Mensi, R. Naud, and W. Gerstner. From stochastic nonlinear integrate-and-fire to generalized linear models. In Advances in Neural Information Processing Systems, pages 1377-1385, 2011.

J. O'Keefe and M. L. Recce. Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus, 3(3):317-330, 1993. doi: 10.1002/hipo.450030307. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/hipo.450030307.

A. Papoulis and S. U. Pillai. Probability, Random Variables, and Stochastic Processes. Tata McGraw-Hill Education, 2002.

M. A. Petrovici, J. Bill, I. Bytschok, J. Schemmel, and K. Meier. Stochastic inference with spiking neurons in the high-conductance state. Physical Review E, 94(4), 2016. doi: 10.1103/physreve.94.042312. URL http://dx.doi.org/10.1103/PhysRevE.94.042312.

H. E. Plesser and W. Gerstner. Noise in integrate-and-fire neurons: from stochastic input to escape rates. Neural Computation, 12(2):367-384, 2000.

A. Pouget, J. M. Beck, W. J. Ma, and P. E. Latham. Probabilistic brains: knowns and unknowns. Nature Neuroscience, 16(9):1170, 2013.

V. S. Sohal and M. E. Hasselmo. Changes in GABA_B modulation during a theta cycle may be analogous to the fall of temperature during annealing. Neural Computation, 10(4):869-882, 1998a.

V. S. Sohal and M. E. Hasselmo. GABA_B modulation improves sequence disambiguation in computational models of hippocampal region CA3. Hippocampus, 8(2):171-193, 1998b.

M. Stimberg, R. Brette, and D. F. Goodman. Brian 2, an intuitive and efficient neural simulator. eLife, 8:e47314, 2019. doi: 10.7554/eLife.47314. URL https://doi.org/10.7554/eLife.47314.

Y. Zerlaut, S. Chemla, F. Chavane, and A. Destexhe. Modeling mesoscopic cortical dynamics using a mean-field model of conductance-based networks of adaptive exponential integrate-and-fire neurons. Journal of Computational Neuroscience, 44(1):45-61, 2018.