Impact of network structure and cellular response on spike time correlations
James Trousdale, Yu Hu, Eric Shea-Brown, Krešimir Josić

Department of Mathematics, University of Houston, Houston, TX, USA; Department of Biology and Biochemistry, University of Houston, Houston, TX, USA; Department of Applied Mathematics and Program in Neurobiology and Behavior, University of Washington, Seattle, WA, USA
Abstract
Novel experimental techniques reveal the simultaneous activity of larger and larger numbers of neurons. As a result there is increasing interest in the structure of cooperative, or correlated, activity in neural populations, and in the possible impact of such correlations on the neural code. A fundamental theoretical challenge is to understand how the architecture of network connectivity along with the dynamical properties of single cells shape the magnitude and timescale of correlations. We provide a general approach to this problem by extending prior techniques based on linear response theory. We consider networks of general integrate-and-fire cells with arbitrary architecture, and provide explicit expressions for the approximate cross-correlation between constituent cells. These correlations depend strongly on the operating point (input mean and variance) of the neurons, even when connectivity is fixed. Moreover, the approximations admit an expansion in powers of the matrices that describe the network architecture. This expansion can be readily interpreted in terms of paths between different cells. We apply our results to large excitatory-inhibitory networks, and demonstrate first how precise balance, or lack thereof, between the strengths and timescales of excitatory and inhibitory synapses is reflected in the overall correlation structure of the network. We then derive explicit expressions for the average correlation structure in randomly connected networks. These expressions help to identify the important factors that shape coordinated neural activity in such networks.
Author summary
Is neural activity more than the sum of its individual parts? What is the impact of cooperative, or correlated, spiking among multiple cells? We can start addressing these questions, as rapid advances in experimental techniques allow simultaneous recordings from ever-increasing populations. However, we still lack a general understanding of the origin and consequences of the joint activity that is revealed. The challenge is compounded by the fact that both the intrinsic dynamics of single cells and the correlations among them vary depending on the overall state of the network. Here, we develop a toolbox that addresses this issue. Specifically, we show how linear response theory allows for the expression of correlations explicitly in terms of the underlying network connectivity and known single-cell properties, and that the predictions of this theory accurately match simulations of a touchstone, nonlinear model in computational neuroscience, the general integrate-and-fire cell. Thus, our theory should help unlock the relationship between network architecture, single-cell dynamics, and correlated activity in diverse neural circuits.

Introduction

New multielectrode and imaging techniques are revealing the simultaneous activity of neural ensembles and, in some cases, entire neural populations [1–4]. This has thrust upon the computational biology community the challenge of characterizing a potentially complex set of interactions, or correlations, among pairs and groups of neurons. Beyond important and rich challenges for statistical modeling [5], the emerging data promises new perspectives on the neural encoding of information [6]. The structure of correlations in the activity of neuronal populations is of central importance in understanding the neural code [7–13]. However, theoretical [9–11, 14–16] and empirical studies [17–19] do not provide a consistent set of general principles about the impact of correlated activity.
This is largely because the presence of correlations can either strongly increase or decrease the fidelity of encoded information, depending on both the structure of correlations across a population and how their impact is assessed.

A basic mechanistic question underlies the investigation of the role of collective activity in coding and signal transmission: How do single-cell dynamics, connection architecture, and synaptic dynamics combine to determine patterns of network activity? Systematic answers to this question would allow us to predict how empirical data from one class of stimuli will generalize to other stimulus classes and recording sites. Moreover, a mechanistic understanding of the origin of correlations, and knowledge of the patterns we can expect to see under different assumptions about the underlying networks, will help resolve recent controversies about the strength and pattern of correlations in mammalian cortex [1, 20–22]. Finally, understanding the origin of correlations will inform the more ambitious aim of inferring properties of network architecture from observed patterns of activity [23–25].

Here, we examine the link between network properties and correlated activity. We develop a theoretical framework that accurately predicts the structure of correlated spiking that emerges in a widely used model: recurrent networks of general integrate-and-fire cells. The theory naturally captures the role of single-cell and synaptic dynamics in shaping the magnitude and timescale of spiking correlations. We focus on the exponential integrate-and-fire model, which has been shown to capture membrane and spike responses of cortical neurons [26]; however, the general approach we take can be applied to a much broader class of neurons, a point we return to in the Discussion. Our approach is based on an extension of linear response theory to networks [25, 27]. We start with a linear approximation of a neuron's response to an input.
This approximation can be obtained explicitly for many neuron models [28–30], and is directly related to the spike-triggered average [31]. The correlation structure of the network is then estimated using an iterative approach. As in prior work [32–34], the resulting expressions admit an expansion in terms of paths through the network. We apply this theory to networks with precisely balanced inhibition and excitation in the inputs to individual cells. In this state individual cells receive a combination of excitatory and inhibitory inputs with mean values that largely cancel. We show that, when timescales and strengths of excitatory and inhibitory connections are matched, only local interactions between cells contribute to correlations. Moreover, our theory allows us to explain how correlations are altered when this precise balance is broken. In particular, we show how strengthening inhibition may synchronize the spiking activity in the network. Finally, we derive results which allow us to gain an intuitive understanding of the factors shaping average correlation structure in randomly connected networks of neurons.

Network model and linear response theory
Our goal is to understand how the architecture of a network shapes the statistics of its activity. We show how correlations between spike trains of cells can be approximated using response characteristics of individual cells along with information about synaptic dynamics and the structure of the network. We start by briefly reviewing linear response theory of neuronal responses [29, 35, 36], and then use it to approximate the correlation structure of a network.
Network model
To illustrate the results we consider a network of $N$ nonlinear integrate-and-fire (IF) neurons with membrane potentials modeled by

$$\tau_i \dot{v}_i = -(v_i - E_{L,i}) + \psi(v_i) + E_i + \sqrt{\sigma_i \tau_i}\,\xi_i(t) + f_i(t) + \eta_i(t). \tag{1}$$

Here $E_i$ represents the mean synaptic input current from parts of the system not explicitly modeled. A spike-generating current $\psi(v_i)$ may be included to emulate the rapid onset of action potentials. Unless otherwise specified, we utilize the exponential IF model (EIF), so that $\psi(v) \equiv \Delta_T \exp[(v - v_T)/\Delta_T]$ [26]. Cells are subject to internally induced fluctuations due to channel noise [37], and externally induced fluctuations due to inputs not explicitly modeled [38]. We model both by independent, Gaussian, white noise processes, $\sqrt{\sigma_i \tau_i}\,\xi_i(t)$ [39]. An external signal to cell $i$ is represented by $\eta_i(t)$.

Upon reaching a threshold $v_{th}$, an action potential is generated, and the membrane potential is reset to $v_r$, where it is held constant for an absolute refractory period $\tau_{ref}$. The output of cell $i$ is characterized by the times, $t_{i,k}$, at which its membrane potential reaches threshold, resulting in an output spike train $y_i(t) = \sum_k \delta(t - t_{i,k})$. Synaptic interactions are modeled by delayed $\alpha$-functions,

$$f_i(t) = \sum_j (J_{ij} * y_j)(t), \quad \text{where} \quad J_{ij}(t) = \begin{cases} W_{ij}\, \dfrac{t - \tau_{D,j}}{\tau_{S,j}^2}\, \exp\!\left[-\dfrac{t - \tau_{D,j}}{\tau_{S,j}}\right], & t \geq \tau_{D,j}, \\[4pt] 0, & t < \tau_{D,j}. \end{cases} \tag{2}$$

The $N \times N$ matrix $\mathbf{J}$ contains the synaptic kernels, while the matrix $\mathbf{W}$ contains the synaptic weights, and hence defines the network architecture. In particular, $W_{ij} = 0$ represents the absence of a synaptic connection from cell $j$ to cell $i$. Table 1 provides an overview of all parameters and variables.
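The network dynamics of Eqs. (1)–(2) can be explored numerically. Below is a minimal Euler-Maruyama sketch; all parameter values are illustrative placeholders rather than the paper's, the synaptic delay is omitted for brevity, and the $\alpha$-function synapse is realized as a pair of coupled exponentials.

```python
import numpy as np

def simulate_eif_network(W, T=0.5, dt=1e-4, tau=0.02, E_L=-0.06, E=0.05,
                         sigma=0.0, v_T=-0.05, Delta_T=0.002, v_th=-0.03,
                         v_r=-0.06, tau_ref=0.002, tau_S=0.005, seed=0):
    """Euler-Maruyama integration of the EIF network of Eq. (1), with
    alpha-function synapses (Eq. (2)) and no synaptic delay.
    All parameter values are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    v = np.full(N, v_r)
    refrac = np.zeros(N)           # remaining refractory time per cell
    syn = np.zeros(N)              # synaptic input f_i(t)
    aux = np.zeros(N)              # auxiliary variable: two coupled
                                   # exponentials realize the alpha kernel
    spikes = [[] for _ in range(N)]
    for step in range(int(T / dt)):
        psi = Delta_T * np.exp((v - v_T) / Delta_T)   # EIF spike current
        noise = np.sqrt(sigma * tau * dt) * rng.standard_normal(N) / tau
        dv = dt * (-(v - E_L) + psi + E + syn) / tau + noise
        active = refrac <= 0.0
        v[active] += dv[active]
        refrac[~active] -= dt
        fired = v >= v_th
        for i in np.flatnonzero(fired):
            spikes[i].append(step * dt)
        # each presynaptic spike injects an impulse into the alpha synapse
        aux += (W @ fired.astype(float)) / tau_S
        syn += dt * (aux - syn) / tau_S
        aux -= dt * aux / tau_S
        v[fired] = v_r
        refrac[fired] = tau_ref
    return spikes
```

With the default DC drive, an isolated cell fires regularly; coupling is switched on by populating `W`.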
Measures of spike time correlation

We quantify dependencies between the responses of cells in the network using the spike train auto- and cross-correlation functions [40]. For a pair of spike trains, $y_i(t), y_j(t)$, the cross-correlation function $C_{ij}(\tau)$ is defined as

$$C_{ij}(\tau) = \mathrm{cov}(y_i(t+\tau), y_j(t)).$$

The auto-correlation function $C_{ii}(\tau)$ is the cross-correlation between a spike train and itself, and $\mathbf{C}(\tau)$ is the matrix of cross-correlation functions. Denoting by $N_{y_i}(t_0, t_1) = \int_{t_0}^{t_1} y_i(s)\, ds$ the number of spikes over a time window $[t_0, t_1]$, the spike count correlation, $\rho_{ij}(T)$, over windows of length $T$ is defined as

$$\rho_{ij}(T) = \frac{\mathrm{cov}\!\left(N_{y_i}(t, t+T),\, N_{y_j}(t, t+T)\right)}{\sqrt{\mathrm{var}\!\left(N_{y_i}(t, t+T)\right) \mathrm{var}\!\left(N_{y_j}(t, t+T)\right)}}.$$
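The spike count correlation $\rho_{ij}(T)$ can be estimated directly from recorded spike times by binning; a simple estimator sketch (assuming stationarity, as in the text):

```python
import numpy as np

def spike_count_corr(t_i, t_j, T_total, T):
    """Empirical estimate of the spike count correlation rho_ij(T):
    bin two spike-time arrays into disjoint windows of length T and
    correlate the resulting counts."""
    edges = np.arange(0.0, T_total + T / 2, T)
    n_i, _ = np.histogram(t_i, bins=edges)
    n_j, _ = np.histogram(t_j, bins=edges)
    cov = np.cov(n_i, n_j)               # 2x2 count covariance matrix
    denom = np.sqrt(cov[0, 0] * cov[1, 1])
    return cov[0, 1] / denom if denom > 0 else 0.0
```

For identical trains the estimator returns 1; for counts that deviate in exactly opposite ways it returns -1.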
We assume stationarity of the spiking processes (that is, the network has reached a steady state), so that $\rho_{ij}(T)$ does not depend on $t$. We also use the total correlation coefficient $\rho_{ij}(\infty) = \lim_{T\to\infty} \rho_{ij}(T)$ to characterize dependencies between the processes $y_i$ and $y_j$ over arbitrarily long timescales. The spike count covariance is related to the cross-correlation function by [7, 41]

$$\mathrm{cov}\!\left(N_{y_i}(t, t+T),\, N_{y_j}(t, t+T)\right) = \int_{-T}^{T} C_{ij}(s)\,(T - |s|)\, ds.$$

We can relate the cross-correlation to the conditional probability that cell $j$ spikes at time $t + \tau$ given that cell $i$ spiked at time $t$. The conditional firing rate,

$$H_{ij}(\tau) = \lim_{\Delta t \to 0} \frac{1}{\Delta t} \Pr\!\left(N_{y_j}(t+\tau, t+\tau+\Delta t) > 0 \,\middle|\, N_{y_i}(t, t+\Delta t) > 0\right),$$

is the firing rate of cell $j$ conditioned on a spike in cell $i$ at $\tau$ units of time in the past, and

$$C_{ij}(\tau) = r_i\left(H_{ij}(\tau) - r_j\right).$$

Linear response of individual cells
Neuronal network models are typically described by a complex system of coupled nonlinear stochastic differential equations. Their behavior is therefore difficult to analyze directly. We will use linear response theory [29, 35, 36, 40] to approximate the cross-correlations between the outputs of neurons in a network. We first review the linear approximation to the response of a single cell. We illustrate the approach using current-based IF neurons, and explain how it can be generalized to other models in the Discussion.

The membrane potential of an IF neuron receiving input $\epsilon X(t)$, with vanishing temporal average, $\langle X(t) \rangle = 0$, evolves according to

$$\tau \dot{v} = -(v - E_L) + \psi(v) + E + \sqrt{\sigma\tau}\,\xi(t) + \epsilon X(t). \tag{3}$$

The time-dependent firing rate, $r(t)$, is determined by averaging the resulting spike train, $y(t) = \sum_j \delta(t - t_j)$, across different realizations of the noise, $\xi(t)$, for fixed $X(t)$. Using linear response theory, we can approximate the firing rate by

$$r(t) = r_0 + (A * \epsilon X)(t), \tag{4}$$

where $r_0$ is the (stationary) firing rate when $\epsilon = 0$. The linear response kernel, $A(t)$, characterizes the firing rate response to first order in $\epsilon$. A rescaling of the function $A(t)$ gives the spike-triggered average of the cell, to first order in input strength, and is hence equivalent to the optimal Wiener kernel in the presence of the signal $\xi(t)$ [40, 42]. In Fig. 1, we compare the approximate firing rate obtained from Eq. (4) to that obtained numerically from Monte Carlo simulations.

The linear response kernel $A(t)$ depends implicitly on model parameters, but is independent of the input signal, $\epsilon X(t)$, when $\epsilon$ is small relative to the noise $\sqrt{\sigma\tau}\,\xi(t)$. In particular, $A(t)$ is sensitive to the value of the mean input current, $E$. We emphasize that the presence of the background noise, $\xi$, in Eq.
(3) is essential to the theory, as noise linearizes the transfer function that maps input to output.

Linear response in recurrent networks

The linear response kernel can be used to approximate the response of a cell to an external input. However, the situation is more complicated in a network, where a neuron can affect its own activity through recurrent connections. To extend the linear response approximation to networks we follow the approach introduced by [27]. Instead of using the linear response kernel to approximate the firing rate of a cell, we use it to approximate a realization of its output,

$$y(t) \approx y^1(t) = y^0(t) + (A * X)(t). \tag{5}$$

Here $y^0(t)$ represents a realization of the spike train generated by an integrate-and-fire neuron obeying Eq. (3) with $X(t) = 0$.

The central assumption we make is that a cell acts approximately as a linear filter of its inputs. Note that Eq. (5) defines a mixed point and continuous process, but averaging $y^1(t)$ in Eq. (5) over realizations of $y^0$ gives Eq. (4). Hence, Eq. (5) can be viewed as a natural generalization of Eq. (4) where the unperturbed output of the cell is represented as a point process, $y^0(t)$, instead of the firing rate, $r_0$.

We first use Eq. (5) to describe spontaneously evolving networks where $\eta_i(t) = 0$. Equation (1) can then be rewritten as

$$\tau_i \dot{v}_i = -(v_i - E_{L,i}) + \psi(v_i) + E'_i + \sqrt{\sigma_i \tau_i}\,\xi_i(t) + (f_i(t) - E[f_i]), \tag{6}$$

where $E'_i = E_i + E[f_i]$ and $E[\cdot]$ represents the temporal average.

As a first approximation of the spiking output of cells in the coupled network, we start with realizations of spike trains, $y^0_i$, generated by IF neurons obeying Eq. (6) with $f_i(t) = E[f_i]$. This is equivalent to considering neurons isolated from the network, with adjusted DC inputs (due to mean network interactions). Following the approximation given by Eq.
(5), we use a frozen realization of all $y^0_i$ to find a correction to the output of each cell, with $X(t)$ set to the mean-adjusted synaptic input, $X(t) = f_i(t) - E[f_i]$. As noted previously, the linear response kernel is sensitive to changes in the mean input current. It is therefore important to include the average synaptic input $E[f_i]$ in the definition of the effective mean input, $E'_i$.

The input from cell $j$ to cell $i$ is filtered by the synaptic kernel $J_{ij}(t)$. The linear response of cell $i$ to a spike in cell $j$ is therefore captured by the interaction kernel $K_{ij}$, defined by

$$K_{ij}(t) \equiv (A_i * J_{ij})(t).$$

The output of cell $i$ in response to mean-adjusted input, $y^0_j(t) - r_j$, from cell $j$ can be approximated to first order in input strength using the linear response correction

$$y^1_i(t) = y^0_i(t) + \sum_j (K_{ij} * [y^0_j - r_j])(t). \tag{7}$$

We explain how to approximate the stationary rates, $r_j$, in the Methods section. The cross-correlation between the processes $y^1_i(t)$ in Eq. (7) gives a first approximation to the cross-correlation function between the cells (see Methods),

$$C_{ij}(\tau) \approx C^1_{ij}(\tau) = E\left[ (y^1_i(t+\tau) - r_i)(y^1_j(t) - r_j) \right] = \delta_{ij} C^0_{ii}(\tau) + (K_{ij} * C^0_{jj})(\tau) + (K^-_{ji} * C^0_{ii})(\tau) + \sum_k (K_{ik} * K^-_{jk} * C^0_{kk})(\tau), \tag{8}$$

where we used $f^-(t) = f(-t)$. [25] obtained an approximation closely related to Eq. (8). They first obtained the cross-correlation between a pair of neurons which either receive a common input or share a monosynaptic connection. This can be done using Eq. (4), without the need to introduce the mixed process given in Eq. (5). [25] then implicitly assumed that correlations not due to one of these two submotifs could be disregarded.
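The interaction kernel $K_{ij} = A_i * J_{ij}$ can be computed numerically once $A_i$ is known. A minimal sketch, with an assumed exponential response kernel standing in for $A_i$ (in practice $A_i$ is obtained from Fokker-Planck calculations or fits):

```python
import numpy as np

dt = 1e-4
t = np.arange(0, 0.2, dt)
# assumed single-cell response kernel A_i (exponential; illustrative only)
A = np.exp(-t / 0.01) / 0.01
# alpha-function synaptic kernel J_ij of Eq. (2), with W_ij = 1, no delay
tau_S = 0.005
J = (t / tau_S**2) * np.exp(-t / tau_S)
# interaction kernel K_ij = (A_i * J_ij)(t), discretized convolution
K = np.convolve(A, J)[:len(t)] * dt
```

A useful sanity check: the integral of $K_{ij}$ equals the product of the integrals of $A_i$ and $J_{ij}$, i.e. the DC gain $\tilde{A}_i(0)\tilde{J}_{ij}(0)$.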
The correlation between pairs of cells which were mutually coupled (or were unidirectionally coupled with common input) was approximated by the sum of correlations introduced by each submotif individually.

Equation (8) provides a first approximation to the joint spiking statistics of cells in a recurrent network. However, it captures only the effects of direct synaptic connections, represented by the second and third terms, and common input, represented by the last term in Eq. (8). The impact of larger network structures, such as loops and chains, is not captured, although they may significantly affect cross-correlations [43–45]. Experimental studies have also shown that local cortical connectivity may not be fully random [46–48]. It is therefore important to understand the effects of network architecture on correlations.

To capture the impact of the full network structure, we propose an iterative approach which accounts for successively larger connectivity patterns in the network [33, 34]. We again start with $y^0_i(t)$, a realization of a single spike train in isolation. Successive approximations to the output of cells in a recurrent network are defined by

$$y^{n+1}_i(t) = y^0_i(t) + \sum_j (K_{ij} * [y^n_j - r_j])(t), \quad n \geq 0. \tag{9}$$

To compute the correction to the output of a neuron, in the first iteration we assume that its inputs come from a collection of isolated cells: when $n = 1$, Eq. (9) takes into account only inputs from immediate neighbors, treating each as disconnected from the rest of the network. The corrections in the second iteration are computed using the approximate cell responses obtained from the first iteration. Thus, with $n = 2$, Eq. (9) also accounts for the impact of next-nearest neighbors. Successive iterations include the impact of directed chains of increasing length: the isolated output from an independent collection of neurons is filtered through $n$ stages to produce the corrected response (See Fig.
2.)

Notation is simplified when this iterative construction is recast in matrix form to obtain

$$\mathbf{y}^{n+1}(t) = \mathbf{y}^0(t) + (\mathbf{K} * [\mathbf{y}^n - \mathbf{r}])(t) = \mathbf{y}^0(t) + \sum_{k=1}^{n+1} (\mathbf{K}^{(k)} * [\mathbf{y}^0 - \mathbf{r}])(t), \quad n \geq 0, \tag{10}$$

where $\mathbf{y}^n(t) = [y^n_i(t)]$ and $\mathbf{r} = [r_i]$ are length-$N$ column vectors, and $\mathbf{K}^{(k)}$ represents a $k$-fold matrix convolution of $\mathbf{K}$ with itself. Let $\mathbf{X}(t) = [X_{ij}(t)]$ and $\mathbf{Y}(t) = [Y_{ij}(t)]$ be $n_1 \times n_2$ and $n_2 \times n_3$ matrices of functions, respectively. We define the convolution of matrices, $(\mathbf{X} * \mathbf{Y})(t)$, to be the $n_1 \times n_3$ matrix of functions with entries

$$(\mathbf{X} * \mathbf{Y})_{ij}(t) = \sum_k (X_{ik} * Y_{kj})(t).$$

Expectations and convolutions commute for matrix convolutions, as matrix expectations are taken entry-wise: each entry of a matrix convolution is a linear combination of scalar convolutions, which commute with expectations. Additionally, we adopt the convention that the zeroth power of the interaction matrix, $\mathbf{K}^{(0)}(t)$, is the diagonal matrix with $K^{(0)}_{ij}(t) = \delta(t)$ when $i = j$, so that $\mathbf{K}^{(0)}$ acts as the identity under matrix convolution.

The $n$th approximation to the matrix of cross-correlations can be written in terms of the interaction kernels, $K_{ij}$, and the autocorrelations of the base processes $\mathbf{y}^0$ as (see Methods)

$$\mathbf{C}(\tau) \approx \mathbf{C}^n(\tau) = E\left[ (\mathbf{y}^n(t+\tau) - \mathbf{r})(\mathbf{y}^n(t) - \mathbf{r})^T \right] = \sum_{k,l=0}^{n} \left(\mathbf{K}^{(k)} * \mathbf{C}^0 * (\mathbf{K}^-)^{(lT)}\right)(\tau), \quad n \geq 0, \tag{11}$$

where $\mathbf{K}^-(t) = \mathbf{K}(-t)$, $\mathbf{X}^{(kT)} = (\mathbf{X}^{(k)})^T$, and $\mathbf{X}^{(k)}$ is the $k$-fold matrix convolution of $\mathbf{X}$ with itself. If we apply the Fourier transform, $\tilde{f}(\omega) = \mathcal{F}[f(t)](\omega) \equiv \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i \omega t}\, dt$, to Eq. (11), we find that for each $\omega$,

$$\tilde{\mathbf{C}}^n(\omega) = E[\tilde{\mathbf{y}}^n(\omega) \tilde{\mathbf{y}}^{n*}(\omega)] = \sum_{k,l=0}^{n} \tilde{\mathbf{K}}^k(\omega)\, E[\tilde{\mathbf{y}}^0(\omega) \tilde{\mathbf{y}}^{0*}(\omega)]\, (\tilde{\mathbf{K}}^*)^l(\omega) = \left( \sum_{k=0}^{n} \tilde{\mathbf{K}}^k(\omega) \right) E\left[ \tilde{\mathbf{y}}^0(\omega) \tilde{\mathbf{y}}^{0*}(\omega) \right] \left( \sum_{l=0}^{n} (\tilde{\mathbf{K}}^*)^l(\omega) \right), \tag{12}$$

where $\mathbf{X}^*$ denotes the conjugate transpose of $\mathbf{X}$.
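At a fixed frequency, the order-$n$ truncation of Eq. (12) is a finite matrix computation, and its convergence toward the closed form of Eq. (13) can be checked directly. A sketch with a random interaction matrix (values illustrative; the rescaling enforces the norm condition discussed below):

```python
import numpy as np

def cross_spectra_order_n(K, C0, n):
    """Order-n approximation of Eq. (12) at a single frequency:
    (sum_{k<=n} K^k) C0 (sum_{l<=n} (K*)^l), with K* the conjugate
    transpose. K and C0 are complex N x N matrices."""
    S = sum(np.linalg.matrix_power(K, k) for k in range(n + 1))
    return S @ C0 @ S.conj().T

# random interaction matrix, rescaled so ||K|| = 0.5 < 1 (series converges)
rng = np.random.default_rng(1)
N = 4
K = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
K *= 0.5 / np.linalg.norm(K, 2)
C0 = np.diag(rng.uniform(0.5, 1.5, N)).astype(complex)  # uncoupled spectra
I = np.eye(N)
exact = np.linalg.inv(I - K) @ C0 @ np.linalg.inv(I - K).conj().T  # Eq. (13)
```

The truncation error decays geometrically in $n$ when the spectral norm of $\tilde{\mathbf{K}}$ is below one.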
The zero-mean Fourier transforms $\tilde{y}^n_i$ of the spiking processes $y^n_i$ are defined by $\tilde{y}^n_i = \mathcal{F}[y^n_i - r_i]$, and $\tilde{f} = \mathcal{F}(f)$ for all other quantities. For a suitable matrix norm $\|\cdot\|$, when $\|\tilde{\mathbf{K}}\| <$
1, we can take the limit $n \to \infty$ in Eq. (12) [49] to obtain an approximation to the full array of cross-spectra,

$$\tilde{\mathbf{C}}(\omega) \approx \tilde{\mathbf{C}}^\infty(\omega) = \lim_{n\to\infty} \tilde{\mathbf{C}}^n(\omega) = \left(I - \tilde{\mathbf{K}}(\omega)\right)^{-1} \tilde{\mathbf{C}}^0(\omega) \left(I - \tilde{\mathbf{K}}^*(\omega)\right)^{-1}. \tag{13}$$

This equation can also be obtained by generalizing the approach of [27] (see also [13]). In the limit $n \to \infty$, directed paths of arbitrary length contribute to the approximation. Equation (13) therefore takes into account the full recurrent structure of the network. We will use the spectral norm $\|\cdot\|$, and assume that in the networks we study $\|\tilde{\mathbf{K}}\| <$
1. This condition is confirmed numerically when we use Eq. (13).

Finally, consider the network response to external signals, $\eta_i(t)$, with zero mean and finite variance. The response of the neurons in the recurrent network can be approximated iteratively by

$$\mathbf{y}^{n+1} = \mathbf{y}^0 + \mathbf{K} * [\mathbf{y}^n - \mathbf{r}] + \mathbf{A} * \boldsymbol{\eta},$$

where $\mathbf{A} = \mathrm{diag}(A_i)$ and $\boldsymbol{\eta}(t) = [\eta_i(t)]$. External signals and recurrent synaptic inputs are both linearly filtered to approximate a cell's response, consistent with a generalization of Eq. (4). As in Eq. (11), the $n$th approximation to the matrix of correlations is

$$\mathbf{C}(\tau) \approx \mathbf{C}^n(\tau) = \sum_{k,l=0}^{n} \left(\mathbf{K}^{(k)} * \mathbf{C}^0 * (\mathbf{K}^-)^{(lT)}\right)(\tau) + \sum_{k,l=0}^{n-1} \left(\mathbf{K}^{(k)} * \mathbf{A} * \mathbf{C}^\eta * (\mathbf{A}^-)^T * (\mathbf{K}^-)^{(lT)}\right)(\tau),$$

where $\mathbf{C}^\eta(\tau) = E\left[\boldsymbol{\eta}(t+\tau)\boldsymbol{\eta}(t)^T\right]$ is the covariance matrix of the external signals. We can again take the Fourier transform and the limit $n \to \infty$, and solve for $\tilde{\mathbf{C}}(\omega)$. If $\|\tilde{\mathbf{K}}\| < 1$,

$$\tilde{\mathbf{C}}^\infty(\omega) = \left(I - \tilde{\mathbf{K}}(\omega)\right)^{-1} \left( \tilde{\mathbf{C}}^0(\omega) + \tilde{\mathbf{A}}(\omega) \tilde{\mathbf{C}}^\eta(\omega) \tilde{\mathbf{A}}^*(\omega) \right) \left(I - \tilde{\mathbf{K}}^*(\omega)\right)^{-1}. \tag{14}$$

When the signals comprising $\boldsymbol{\eta}$ are white (and possibly correlated), corrections must be made to account for the change in spectrum and response properties of the isolated cells [27, 50, 51] (see Methods).

We note that Eq. (10), which is the basis of our iterative approach, provides an approximation to the network's output which is of higher than first order in connection strength. This may seem at odds with a theory that provides a linear correction to a cell's response, cf. Eq. (4). However, Eq. (10) does not capture nonlinear corrections to the response of individual cells, as the output of each cell is determined linearly from its input. It is the input that can contain terms of any order in connection strength stemming from directed paths of different lengths through the network.
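Equation (14) is, at each frequency, a small matrix computation. A sketch of the closed form, with two limiting cases that serve as sanity checks:

```python
import numpy as np

def cross_spectra_full(K, C0, A, C_eta):
    """Closed-form cross-spectra of Eq. (14) at one frequency:
    (I - K)^{-1} (C0 + A C_eta A*) (I - K*)^{-1}.
    A is the diagonal matrix of single-cell susceptibilities."""
    P = np.linalg.inv(np.eye(K.shape[0]) - K)
    return P @ (C0 + A @ C_eta @ A.conj().T) @ P.conj().T
```

Setting $\tilde{\mathbf{C}}^\eta = 0$ recovers Eq. (13); setting $\tilde{\mathbf{K}} = 0$ leaves only the uncoupled spectra plus the linearly filtered signal correlations.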
Results
We use the theoretical framework developed above to analyze the statistical structure of the spiking activity in a network of IF neurons described by Eq. (1). We first show that the cross-correlation functions between cells in two small networks can be studied in terms of contributions from directed paths through the network. We use a similar approach to understand the structure of correlations in larger all-to-all and random networks. We show that in networks where inhibition and excitation are tuned for exact balance, only local interactions contribute to correlations. When such balance is broken by a relative elevation of inhibition, the result may be increased synchrony in the network. The theory also allows us to obtain averages of cross-correlation functions conditioned on connectivity between pairs of cells in random networks. Such averages can provide a tractable yet accurate description of the joint statistics of spiking in these networks.

The correlation structure is determined by the response properties of cells together with synaptic dynamics and network architecture. Network interactions are described by the matrix of synaptic filters, $\mathbf{J}$, given in Eq. (2), while the response of cell $i$ to an input is approximated using its linear response kernel $A_i$. Synaptic dynamics, architecture, and cell responses are all combined in the matrix $\mathbf{K}$, where $K_{ij}$ describes the response of cell $i$ to an input from cell $j$. The correlation structure of network activity is approximated in Eq. (13) using the Fourier transforms of the interaction matrix, $\mathbf{K}$, and the matrix of unperturbed autocorrelations $\mathbf{C}^0$.

Statistics of the response of microcircuits
We first consider a pair of simple microcircuits to highlight some features of the theory. We start with the three-cell model of feedforward inhibition shown in Fig. 3A [52]. With cells indexed in the order $E_1, E_2, I$, the interaction matrix, $\tilde{\mathbf{K}}(\omega)$, has the form

$$\tilde{\mathbf{K}}(\omega) = \begin{pmatrix} 0 & 0 & 0 \\ \tilde{K}_{E_2E_1}(\omega) & 0 & \tilde{K}_{E_2I}(\omega) \\ \tilde{K}_{IE_1}(\omega) & 0 & 0 \end{pmatrix}.$$

To simplify notation, we omit the dependence of $\tilde{\mathbf{K}}(\omega)$ and other spectral quantities on $\omega$ below. Note that $\tilde{\mathbf{K}}$ is nilpotent, and the inverse of $(I - \tilde{\mathbf{K}})$ may be expressed as

$$(I - \tilde{\mathbf{K}})^{-1} = I + \tilde{\mathbf{K}} + \tilde{\mathbf{K}}^2 = \begin{pmatrix} 1 & 0 & 0 \\ \tilde{K}_{E_2E_1} + \tilde{K}_{E_2I}\tilde{K}_{IE_1} & 1 & \tilde{K}_{E_2I} \\ \tilde{K}_{IE_1} & 0 & 1 \end{pmatrix}. \tag{15}$$

Substituting Eq. (15) into Eq. (13) yields an approximation to the matrix of cross-spectra. For instance,

$$\tilde{C}^\infty_{E_2I} = \underbrace{\tilde{K}_{E_2I}\tilde{C}^0_I}_{\text{I}} + \underbrace{\tilde{K}_{E_2E_1}\tilde{K}^*_{IE_1}\tilde{C}^0_{E_1}}_{\text{II}} + \underbrace{\tilde{K}_{E_2I}\,|\tilde{K}_{IE_1}|^2\,\tilde{C}^0_{E_1}}_{\text{III}} = (\tilde{A}_{E_2}\tilde{J}_{E_2I})\tilde{C}^0_I + (\tilde{A}_{E_2}\tilde{J}_{E_2E_1})(\tilde{A}_I\tilde{J}_{IE_1})^*\tilde{C}^0_{E_1} + (\tilde{A}_{E_2}\tilde{J}_{E_2I})\,|\tilde{A}_I\tilde{J}_{IE_1}|^2\,\tilde{C}^0_{E_1}, \tag{16}$$

where $\tilde{C}^0_X$ is the uncoupled power spectrum of cell $X$. Figure 3B shows that these approximations closely match numerically obtained cross-correlations.

Equation (16) gives insight into how the joint response of cells in this circuit is shaped by the features of the network. The three terms in Eq. (16) are directly related to the architecture of the microcircuit: term I represents the correlating effect of the direct input to cell $E_2$ from cell $I$. Term II captures the effect of the common input from cell $E_1$. Finally, term III represents the interaction of the indirect input from $E_1$ to $E_2$ through $I$ with the input from $E_1$ to $I$ (see Fig. 3C). A change in any single parameter may affect multiple terms. However, the individual contributions of all three terms are apparent.

To illustrate the impact of synaptic properties on the cross-correlation between cells $E_2$ and $I$, we varied the inhibitory time constant, $\tau_I$ (see Fig. 3B and C).
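Because the feedforward circuit has no directed loops, the nilpotency behind Eq. (15) can be verified numerically at any frequency. A sketch with illustrative complex gains standing in for the kernels:

```python
import numpy as np

# interaction matrix of the FFI circuit at one frequency, cells ordered
# (E1, E2, I); the nonzero entries are illustrative complex gains
K_E2E1, K_E2I, K_IE1 = 0.3 - 0.1j, -0.4 + 0.2j, 0.5 + 0.1j
K = np.array([[0,      0, 0],
              [K_E2E1, 0, K_E2I],
              [K_IE1,  0, 0]], dtype=complex)
I3 = np.eye(3)
inv_direct = np.linalg.inv(I3 - K)
inv_series = I3 + K + K @ K          # Eq. (15): the series terminates
```

Since $\tilde{\mathbf{K}}^3 = 0$, the Neumann series stops at the quadratic term, and the $(E_2, I)$ entry of $(I-\tilde{\mathbf{K}})^{-1}\tilde{\mathbf{C}}^0(I-\tilde{\mathbf{K}}^*)^{-1}$ reproduces exactly the three terms of Eq. (16).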
Such a change is primarily reflected in the shape of the first order term, I: multiplication by $\tilde{J}_{E_2I}$ is equivalent to convolution with the inhibitory synaptic filter, $J_{E_2I}$. The shape of this filter is determined by $\tau_I$ (see Eq. (2)), and a shorter time constant leads to a tighter timing dependency between the spikes of the two cells [25, 53–56]. In particular, Ostojic et al. made similar observations using a related approximation. In the FFI circuit, the first and second order terms, I and II, are dominant (red and dark orange, Fig. 3B). The relative magnitude of the third order term, III (light orange, Fig. 3B), is small. The next example shows that even in a simple recurrent circuit, terms of order higher than two may be significant.

More generally, the interaction matrices, $\tilde{\mathbf{K}}$, of recurrent networks are not nilpotent. Consider two reciprocally coupled excitatory cells, $E_1$ and $E_2$ (see Fig. 4A, left). In this case,

$$\tilde{\mathbf{K}} = \begin{pmatrix} 0 & \tilde{K}_{E_1E_2} \\ \tilde{K}_{E_2E_1} & 0 \end{pmatrix}, \quad \text{so that} \quad (I - \tilde{\mathbf{K}})^{-1} = \frac{1}{1 - \tilde{K}_{E_1E_2}\tilde{K}_{E_2E_1}}\,(I + \tilde{\mathbf{K}}),$$

and

$$\tilde{\mathbf{C}}^\infty = \frac{1}{|1 - \tilde{K}_{E_1E_2}\tilde{K}_{E_2E_1}|^2}\,(I + \tilde{\mathbf{K}}) \begin{pmatrix} \tilde{C}^0_{E_1} & 0 \\ 0 & \tilde{C}^0_{E_2} \end{pmatrix} (I + \tilde{\mathbf{K}}^*). \tag{17}$$

In contrast to the previous example, this approximation does not terminate at finite order in interaction strength. After expanding, the cross-spectrum between cells $E_1$ and $E_2$ is approximated by

$$\tilde{C}^\infty_{E_1E_2} = \sum_{k,l=0}^{\infty} \left(\tilde{K}_{E_1E_2}\tilde{K}_{E_2E_1}\right)^k \left(\tilde{K}^*_{E_1E_2}\tilde{K}^*_{E_2E_1}\right)^l \left( \tilde{K}^*_{E_2E_1}\tilde{C}^0_{E_1} + \tilde{K}_{E_1E_2}\tilde{C}^0_{E_2} \right). \tag{18}$$

Directed paths beginning at $E_1$ and ending at $E_2$ (or vice versa) are of odd length. Hence, this approximation contains only odd powers of the kernels $\tilde{K}_{E_iE_j}$, each corresponding to a directed path from one cell to the other. Likewise, the approximate power spectra contain only even powers of the kernels, corresponding to directed paths that connect a cell to itself (see Fig. 4A).

The contributions of different sub-motifs to the cross- and auto-correlations are shown in Figs. 4C, D when the isolated cells are in a near-threshold excitable state (CV ≈ …). The operating point of a cell is determined by its cellular parameters ($\tau_i$, $E_{L,i}$, etc.) and the statistics of its input ($E_i$, $\sigma_i$). A change in operating point can significantly change a cell's response to an input. Using linear response theory, these changes are reflected in the response functions $A_i$, and the power spectra of the isolated cells, $\tilde{\mathbf{C}}^0$. To highlight the role that the operating point plays in the approximation of the correlation structure given by Eq. (13), we elevated the mean and decreased the variance of the background noise by increasing $E_i$ and decreasing $\sigma_i$ in Eq. (1). With the chosen parameters the isolated cells are in a super-threshold, low noise regime and fire nearly periodically (CV ≈ …).

Orders of coupling interactions:
It is often useful to expand Eq. (13) in terms of powers of $\tilde{\mathbf{K}}$ [32]. The term $\tilde{\mathbf{K}}^n \tilde{\mathbf{C}}^0 (\tilde{\mathbf{K}}^*)^m$ in the expansion is said to be of order $n + m$. Equivalently, in the expansion of $\tilde{C}^\infty_{ij}$, the order of a term refers to the sum of the powers of all constituent interaction kernels $\tilde{K}_{ab}$. We can also associate a particular connectivity submotif with each term. In particular, $n$th order terms of the form

$$\tilde{K}_{ia_{n-1}} \tilde{K}_{a_{n-1}a_{n-2}} \cdots \tilde{K}_{a_1 j}\, \tilde{C}^0_{jj}$$

are associated with a directed path $j \to a_1 \to \cdots \to a_{n-2} \to a_{n-1} \to i$ from cell $j$ to cell $i$. Similarly, the term $\tilde{C}^0_{ii}\, \tilde{K}^*_{ia_1} \cdots \tilde{K}^*_{a_{n-2}a_{n-1}} \tilde{K}^*_{a_{n-1}j}$ corresponds to an $n$-step path from cell $i$ to cell $j$. An $(n+m)$th order term of the form

$$\tilde{K}_{ia_{n-1}} \tilde{K}_{a_{n-1}a_{n-2}} \cdots \tilde{K}_{a_1 a_0}\, \tilde{C}^0_{a_0 a_0}\, \tilde{K}^*_{a_0 b_1} \cdots \tilde{K}^*_{b_{m-2}b_{m-1}} \tilde{K}^*_{b_{m-1}j}$$

corresponds to a motif in which a source cell $a_0$ is $n$ steps removed from cell $i$ and $m$ steps removed from cell $j$. This is a submotif of the form $i \leftarrow a_{n-1} \leftarrow \cdots \leftarrow a_0 \rightarrow b_1 \rightarrow \cdots \rightarrow b_{m-1} \rightarrow j$, consisting of two branches originating at cell $a_0$ (see Fig. 5, and also Fig. 6A and the discussion around Eqs. (16) and (18)).
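The bookkeeping of path contributions described above reduces, for a binary connectivity matrix, to counting paths with matrix powers. A minimal illustration on a three-cell directed cycle (adjacency chosen purely for illustration):

```python
import numpy as np

# binary adjacency with W[i, j] = 1 for a synapse j -> i; here a directed
# 3-cycle 0 -> 1 -> 2 -> 0 (purely illustrative)
W = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
# (W^n)[i, j] counts the directed n-step paths from cell j to cell i;
# each such path contributes one n-th order term
# K_{i a_{n-1}} ... K_{a_1 j} C0_{jj} to the expansion of Eq. (13)
paths = {n: np.linalg.matrix_power(W, n) for n in (1, 2, 3)}
```

On this cycle every cell reaches itself only through loops of length three, so third-order terms are the lowest-order contributions to the power spectra beyond the uncoupled term.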
Statistics of the response of large networks

The full power of the present approach becomes evident when analyzing the activity of larger networks. We again illustrate the theory using several examples. In networks where inhibition and excitation are tuned to be precisely balanced, the theory shows that only local interactions contribute to correlations. When this balance is broken, terms corresponding to longer paths through the network shape the cross-correlation functions. One consequence is that a relative increase in inhibition can lead to elevated network synchrony. We also show how to obtain tractable and accurate approximations of the average correlation structure in random networks.
A symmetric, all–to–all network of excitatory and inhibitory neurons
We begin with an all-to-all coupled network of $N$ identical cells. Of these cells, $N_E$ make excitatory, and $N_I$ make inhibitory synaptic connections. The excitatory cells are assigned indices $1, \ldots, N_E$, and the inhibitory cells indices $N_E + 1, \ldots, N$. All excitatory (inhibitory) synapses have weight $W_E = G_E/N_E$ ($W_I = G_I/N_I$), and timescale $\tau_E$ ($\tau_I$). The interaction matrix $\tilde{\mathbf{K}}$ may then be written in block form, $\tilde{\mathbf{K}} = \tilde{A}\tilde{\mathbf{J}}$, where

$$\tilde{\mathbf{J}} = \begin{pmatrix} \tilde{J}_E\, \mathbf{1}_{N_E \times N_E} & \tilde{J}_I\, \mathbf{1}_{N_E \times N_I} \\ \tilde{J}_E\, \mathbf{1}_{N_I \times N_E} & \tilde{J}_I\, \mathbf{1}_{N_I \times N_I} \end{pmatrix}.$$

Here $\mathbf{1}_{N_1 \times N_2}$ is the $N_1 \times N_2$ matrix of ones, $\tilde{J}_X$ is the weighted synaptic kernel for cells of class $X$ (assumed identical within each class), and $\tilde{A}$ is the susceptibility function for each cell in the network. Although the effect of autaptic connections (those from a cell to itself) is negligible (see SI Fig. 2), their inclusion significantly simplifies the resulting expressions.

We define $\tilde{\mu}_E = N_E \tilde{J}_E$, $\tilde{\mu}_I = N_I \tilde{J}_I$, and $\tilde{\mu} = \tilde{\mu}_E + \tilde{\mu}_I$. Using induction, we can show that $\tilde{\mathbf{K}}^k = \tilde{A}^k \tilde{\mu}^{k-1} \tilde{\mathbf{J}}$. Direct matrix multiplication yields $\tilde{\mathbf{J}}\tilde{\mathbf{J}}^* = \tilde{\mu}_c\, \mathbf{1}_{N \times N}$, where $\tilde{\mu}_c = N_E |\tilde{J}_E|^2 + N_I |\tilde{J}_I|^2$, which allows us to calculate the products $\tilde{\mathbf{K}}^k (\tilde{\mathbf{K}}^*)^l$ when $k, l \neq 0$:

$$\tilde{\mathbf{K}}^k (\tilde{\mathbf{K}}^*)^l = \tilde{A}^k (\tilde{A}^*)^l\, \tilde{\mu}^{k-1} (\tilde{\mu}^*)^{l-1}\, \tilde{\mu}_c\, \mathbf{1}_{N \times N}.$$

An application of Eq. (13) then gives an approximation to the matrix of cross-spectra:

$$\tilde{\mathbf{C}}^\infty = \tilde{C}^0 \sum_{k,l=0}^{\infty} \tilde{\mathbf{K}}^k (\tilde{\mathbf{K}}^*)^l = \tilde{C}^0 \left[ \left( \frac{\tilde{A}}{1 - \tilde{A}\tilde{\mu}} \right) \tilde{\mathbf{J}} + \left( \frac{\tilde{A}}{1 - \tilde{A}\tilde{\mu}} \right)^{\!*} \tilde{\mathbf{J}}^* + \left| \frac{\tilde{A}}{1 - \tilde{A}\tilde{\mu}} \right|^2 \tilde{\mu}_c\, \mathbf{1}_{N \times N} + I_N \right]. \tag{19}$$

The cross-spectrum between two cells in the network is therefore given by

$$[\tilde{\mathbf{C}}^\infty_{ij}]_{i \in X,\, j \in Y} = \tilde{C}^0 \left[ \left( \frac{\tilde{A}}{1 - \tilde{A}\tilde{\mu}} \right) \frac{\tilde{\mu}_Y}{N_Y} + \left( \frac{\tilde{A}}{1 - \tilde{A}\tilde{\mu}} \right)^{\!*} \frac{\tilde{\mu}^*_X}{N_X} + \left| \frac{\tilde{A}}{1 - \tilde{A}\tilde{\mu}} \right|^2 \tilde{\mu}_c + \delta_{ij} \right], \tag{20}$$

where $X, Y \in \{E, I\}$. In Eq.
(20), the first two terms represent the effects of all unidirectional chains originating at cell $j$ and terminating at cell $i$, and vice versa. The third term represents the effects of direct and indirect common inputs to the two neurons. In Fig. 6A, we highlight a few of these contributing motifs.

Interestingly, when excitation and inhibition are tuned for precise balance (so that $\tilde{\mu} = \tilde{\mu}_E + \tilde{\mu}_I = 0$), Eq. (20) reduces to
$$
[\tilde{\mathbf{C}}^\infty_{ij}]_{i \in X, j \in Y} = \tilde{C}^0 \left[ \tilde{A} \frac{\tilde{\mu}_Y}{N_Y} + \tilde{A}^* \frac{\tilde{\mu}_X^*}{N_X} + |\tilde{A}|^2 \tilde{\mu}_c + \delta_{ij} \right]. \quad (21)
$$
Effects of direct connections between the cells are captured by the first two terms, while those of direct common inputs to the pair are captured by the third term. Contributions from other paths do not appear. In other words, in the precisely balanced case only local interactions contribute to correlations.

To understand this cancellation intuitively, consider the contribution of directed chains originating at a given excitatory neuron, $j$. For $\tau >$
0, the cross-correlation function, $C_{ij}(\tau)$, is determined by the change in firing rate of cell $i$ at time $\tau$ given a spike in cell $j$ at time 0. By the symmetry of the all-to-all connectivity and by stationarity, the firing of cell $j$ has an equal probability of eliciting a spike in any excitatory or inhibitory cell in the network. Due to the precise synaptic balance, the postsynaptic current generated by the elicited spikes in the excitatory population will, on average, cancel the postsynaptic current due to elicited spikes in the inhibitory population. The contributions of other motifs cancel in a similar way.

In Fig. 6B, we show the impact of breaking this excitatory-inhibitory balance on cross-correlation functions. We increased the strength and speed of the inhibitory synapses relative to excitatory synapses, while holding constant, for the sake of comparison, the long window correlation coefficients, $\rho(\infty)$, at $\approx 0.$
05. Moreover, the degree of network synchrony, characterized by the short window correlation coefficients, is increased (see Fig. 6B, inset). Intuitively, a spike in one of the excitatory cells transiently increases the likelihood of spiking in all other cells in the network. Since inhibition in the network is stronger and faster than excitation, these additional spikes will transiently decrease the likelihood of spiking in twice removed cells.

Linear response theory allows us to confirm this heuristic observation, and to quantify the impact of the imbalance on second order statistics. Expanding Eq. (20) for two excitatory cells to second order in coupling strength, we find
$$
\tilde{C}^\infty_{E_i E_j} = \tilde{C}^0 \left[ \tilde{A} \frac{\tilde{\mu}_E}{N_E} + \tilde{A}^* \frac{\tilde{\mu}_E^*}{N_E} + |\tilde{A}|^2 \tilde{\mu}_c + \underline{\tilde{A}^2 \tilde{\mu} \frac{\tilde{\mu}_E}{N_E}} + \underline{(\tilde{A}^*)^2 \tilde{\mu}^* \frac{\tilde{\mu}_E^*}{N_E}} + \delta_{ij} \right] + O(\|\tilde{\mathbf{K}}\|^3). \quad (22)
$$
The complete cancellation between contributions of chains involving excitatory and inhibitory cells no longer takes place, and the two underlined terms appear as a consequence (compare with Eq. (21)). These underlined terms capture the effects of all length two chains between cells $E_i$ and $E_j$, starting at one and terminating at the other. The relative strengthening of inhibition implies that chains of length two provide a negative contribution to the cross-correlation function at short times (cf. [57]). Additionally, the impact of direct common input to cells $E_i$ and $E_j$ on correlations is both larger in magnitude (because we increased the strength of both connection types) and sharper (the faster inhibitory time constant means that common inhibitory inputs induce sharper correlations). These changes are reflected in the second order term $|\tilde{A}|^2 \tilde{\mu}_c$ in Eq. (22). In sum, unbalancing excitatory and inhibitory connections via stronger, faster inhibitory synapses enhances synchrony, moving a greater proportion of the covariance mass closer to $\tau = 0$ (see Fig. 6B).
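The complete cancellation under precise balance can also be checked directly against the resummed matrix form of Eq. (13). Below is a minimal single-frequency sketch (all kernel values are hypothetical samples, not parameters from the text). When $\tilde{\mu} = 0$ one has $\tilde{\mathbf{J}}^2 = 0$, so the matrix series truncates and only the local terms of Eq. (21) survive:

```python
import numpy as np

# Hypothetical single-frequency samples of the kernels and susceptibility.
N_E, N_I = 3, 2
N = N_E + N_I
A = 0.1 - 0.05j                       # susceptibility at this frequency
J_E = 0.2 + 0.1j                      # excitatory kernel sample
J_I = -(N_E / N_I) * J_E              # tuned so mu = N_E*J_E + N_I*J_I = 0
C0 = 1.0                              # uncoupled (flat) power spectrum

# All-to-all block structure (autapses included, as in the text).
J = np.hstack([np.full((N, N_E), J_E), np.full((N, N_I), J_I)])
K = A * J                             # interaction matrix K = A * J

# Fully resummed cross-spectral matrix: (I - K)^{-1} C0 (I - K)^{-*}.
R = np.linalg.inv(np.eye(N) - K)
full = C0 * R @ R.conj().T

# Eq. (21) for an E-E pair (i != j): only the local terms survive.
mu_E, mu_I = N_E * J_E, N_I * J_I
mu_c = N_E * abs(J_E) ** 2 + N_I * abs(J_I) ** 2
local_EE = C0 * (A * mu_E / N_E + np.conj(A * mu_E) / N_E + abs(A) ** 2 * mu_c)
assert np.allclose(full[0, 1], local_EE)
```

Here `full` is the resummed matrix evaluated entrywise; the assertion confirms that, with $\tilde{\mu} = 0$, its E-E entries reduce to the three local terms of Eq. (21).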
To illustrate this effect in terms of underlying connectivity motifs, we show the contributions of length two chains and common input in both the precisely tuned and non-precisely tuned cases in Fig. 6C. A similar approach would allow us to understand the impact of a wide range of changes in cellular or synaptic dynamics on the structure of correlations across networks.

Random, fixed in-degree networks of homogeneous excitatory and inhibitory neurons
Connectivity in cortical neuronal networks is typically sparse, and connection probabilities can follow distinct rules depending on area and layer [58]. The present theory allows us to consider arbitrary architectures, as we now illustrate.

We consider a randomly connected network of $N_E$ excitatory and $N_I$ inhibitory cells coupled with probability $p$. To simplify the analysis, every cell receives exactly $pN_E$ excitatory and $pN_I$ inhibitory inputs. Thus, with a fixed in-degree, each cell receives an identical level of mean synaptic input. In addition, we continue to assume that cells are identical; the response of each cell in the network is therefore described by the same linear response kernel. The excitatory and inhibitory connection strengths are $G_E/(pN_E)$ and $G_I/(pN_I)$, respectively. The timescales of excitation and inhibition may differ, but are again identical for cells within each class.

The approximation of network correlations (Eq. (13)) depends on the realization of the connectivity matrix. For a fixed realization, the underlying equations can be solved numerically to approximate the correlation structure (see Fig. 7A). However, the cross-correlation between a pair of cells of given types has a form which is easy to analyze when only leading order terms in $1/N$ are retained. Specifically, the average cross-spectrum for two cells of given types is (see SI Section 1)
$$
E\left\{ \tilde{\mathbf{C}}^\infty_{ij} \right\}_{i \in X, j \in Y} = \tilde{C}^0 \left[ \left( \frac{\tilde{A}}{1 - \tilde{A}\tilde{\mu}} \right) \frac{\tilde{\mu}_Y}{N_Y} + \left( \frac{\tilde{A}}{1 - \tilde{A}\tilde{\mu}} \right)^* \frac{\tilde{\mu}_X^*}{N_X} + \left| \frac{\tilde{A}}{1 - \tilde{A}\tilde{\mu}} \right|^2 \tilde{\mu}_c \right] + O(1/N^2), \quad (23)
$$
when $i \neq j$. This shows that, to leading order in $1/N$, the mean cross-spectrum between two cells in given classes equals that in the all-to-all network (see Eq. (20)).
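The fixed in-degree wiring rule described above is straightforward to realize. A minimal sketch (the function name and the excitatory-first column ordering are our own conventions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def fixed_indegree_adjacency(N_E, N_I, p):
    """Binary adjacency matrix W[i, j] = 1 if j -> i, with every cell
    receiving exactly p*N_E excitatory and p*N_I inhibitory inputs.
    Columns 0..N_E-1 are excitatory cells, the rest inhibitory."""
    N = N_E + N_I
    kE, kI = int(p * N_E), int(p * N_I)
    W = np.zeros((N, N), dtype=int)
    for i in range(N):
        # Choose presynaptic partners of each class without replacement.
        W[i, rng.choice(N_E, size=kE, replace=False)] = 1
        W[i, N_E + rng.choice(N_I, size=kI, replace=False)] = 1
    return W

W = fixed_indegree_adjacency(N_E=80, N_I=20, p=0.2)
# Every row receives exactly p*N_E + p*N_I = 16 + 4 = 20 inputs:
assert (W.sum(axis=1) == 20).all()
```

Out-degrees, in contrast, fluctuate across realizations, which is why the averages in Eq. (23) hold only to leading order in $1/N$.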
Therefore our previous discussion relating network architecture to the shape of cross-correlations in the all-to-all network extends to the average correlation structure in the random network for large $N$.

Similar expressions for the correlation functions in networks of interacting Hawkes processes [59, 60] were derived in [32] by assuming either that the network is regular (i.e., both in- and out-degrees are fixed) or that it has a sufficiently narrow degree distribution. Our analysis depends on having fixed in-degrees, and we do not assume that networks are fully regular. Both approaches lead to results that hold approximately (for large enough $N$) when the in-degree is not fixed.

Average correlations between cells in the random network conditioned on first order connectivity
As Fig. 7B shows, there is large variability around the mean excitatory-inhibitory cross-correlation function given by the leading order term of Eq. (23). Understanding the average cross-correlation between cells of given types therefore does not necessarily provide much insight into the mechanisms that shape correlations at the level of individual cell pairs. Instead, we examine the average correlation between a pair of cells conditioned on their first order (direct) connectivity. We derive expressions for first order conditional averages correct to order $1/N$ (see SI Sec. 2). The average cross-spectrum for a pair of cells with indices $i \neq j$, conditioned on the value of the direct connections between them, is
$$
E\left\{ \tilde{\mathbf{C}}^\infty_{ij} \,\middle|\, \tilde{\mathbf{J}}_{ij}, \tilde{\mathbf{J}}_{ji} \right\}_{i \in X, j \in Y} = \tilde{C}^0 \left[ \underline{\tilde{A} \tilde{\mathbf{J}}_{ij}} + \underline{\tilde{A}^* \tilde{\mathbf{J}}_{ji}^*} + \left( \frac{\tilde{A}^2 \tilde{\mu}}{1 - \tilde{A}\tilde{\mu}} \right) \frac{\tilde{\mu}_Y}{N_Y} + \left( \frac{\tilde{A}^2 \tilde{\mu}}{1 - \tilde{A}\tilde{\mu}} \right)^* \frac{\tilde{\mu}_X^*}{N_X} + \left| \frac{\tilde{A}}{1 - \tilde{A}\tilde{\mu}} \right|^2 \tilde{\mu}_c \right] + O(1/N^2). \quad (24)
$$
Here we set $\tilde{\mathbf{J}}_{ij} = 0$ if we condition on the absence of a connection $j \to i$, and $\tilde{\mathbf{J}}_{ij} = \tilde{J}_Y / p$ if we condition on its presence. The term $\tilde{\mathbf{J}}_{ji}$ is set similarly.

Although Eq. (24) appears significantly more complicated than the cell-type averages given in Eq. (23), the two expressions differ only in the underlined, first order terms. The magnitude of expected contributions from all higher order motifs is unchanged and coincides with those in the all-to-all network.

Figure 7C shows the mean cross-correlation function for mutually coupled excitatory-inhibitory pairs. Taking the mutual coupling into account significantly reduces variability (compare with Fig. 7B). To quantify this reduction, we calculate the mean reduction in variability when correlation functions are computed conditioned on the connectivity between the cells.
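That the conditioned average of Eq. (24) differs from the unconditioned average of Eq. (23) only through its first order terms can be checked numerically: subtracting the two expressions at a single frequency leaves exactly the replacement of the mean first order contribution by the conditioned one. A sketch with hypothetical single-frequency kernel samples (none of the numerical values below come from the text):

```python
import numpy as np

# Hypothetical single-frequency samples.
A, mu = 0.05 - 0.02j, 0.8 + 0.3j          # susceptibility, summed kernel
mu_Y, mu_X = 0.5 + 0.2j, 0.3 + 0.1j       # class-summed kernels
mu_c, N_X, N_Y = 0.02, 100, 100
J_ij, J_ji = 0.04 + 0.01j, 0.0            # condition: j -> i present, i -> j absent

G = A / (1 - A * mu)                      # resummed chain factor of Eq. (23)
eq23 = G * mu_Y / N_Y + np.conj(G) * np.conj(mu_X) / N_X + abs(G) ** 2 * mu_c

H = A ** 2 * mu / (1 - A * mu)            # G with its first order piece A removed
eq24 = (A * J_ij + np.conj(A * J_ji)
        + H * mu_Y / N_Y + np.conj(H) * np.conj(mu_X) / N_X
        + abs(G) ** 2 * mu_c)

# The difference is exactly the first order replacement, since G - H = A.
diff = A * (J_ij - mu_Y / N_Y) + np.conj(A) * np.conj(J_ji - mu_X / N_X)
assert np.isclose(eq24 - eq23, diff)
```

The identity $G - H = A$ used in the last step follows from $\frac{\tilde{A}}{1-\tilde{A}\tilde{\mu}} - \frac{\tilde{A}^2\tilde{\mu}}{1-\tilde{A}\tilde{\mu}} = \tilde{A}$.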
For a single network, the relative decrease in variability can be quantified using
$$
\mu_{\mathrm{error}} = \frac{1}{N_T} \sum_{\substack{(i,j) \in T \\ i > j}} \frac{\| C_{ij}(\tau) - C^{\mathrm{FOC}}_T(\tau) \|}{\| C_{ij}(\tau) - C^{\mathrm{CT}}_T(\tau) \|},
$$
where $\|\cdot\|$ is the norm defined by $\|f\| = \left( \int |f|^2 \right)^{1/2}$, $T$ represents pairs of cells of a given type and connection (in the present example these are reciprocally coupled excitatory-inhibitory pairs), $N_T$ is the number of pairs of that type in the network, $C^{\mathrm{CT}}_T(\tau)$ is the leading order approximation of average correlations given only the type of the cells in $T$ (as in Eq. (23)), and $C^{\mathrm{FOC}}_T(\tau)$ is the leading order approximation to average correlations conditioned on the first order connectivity of class $T$ (as in Eq. (24)). Figure 7D shows $\mu_{\mathrm{error}}$ averaged over twenty networks. In particular, compare the reduction in variability when conditioning on bidirectional coupling between excitatory-inhibitory pairs, shown in Figs. 7B,C, with the corresponding relative error in Fig. 7D (circled in red).

Discussion
We have developed a general theoretical framework that can be used to describe the correlation structure in a network of spiking cells. This theory allows us to find tractable approximations of cross-correlation functions in terms of the network architecture and single cell response properties. The approach was originally used to study the population response of the electrosensory lateral line lobe of weakly electric fish [27]. The key approximation relies on the assumption that the activity of cells in the network can be represented by a mixed point and continuous stochastic process, as given
in Eq. (8). An iterative construction then leads to the expressions for approximate cross-correlations between pairs of cells given by Eq. (13).

Formulas for cross-correlations corresponding to the first step in this iterative construction, given in Eq. (8), were obtained in [25]. Their approach captures corrections due to direct coupling (first order terms) and direct common input (second order terms involving second powers of the interaction kernels). Our approach can be viewed as a generalization that also accounts for length two directed chains, along with all higher order corrections. As Fig. 4 illustrates, these additional terms can yield significant contributions to the structure of cross-correlations. Another distinction between our approach and that of [25] is that ours also allows us to calculate corrected auto-correlations, whereas the framework of [25] did not allow for adjustments to auto-correlations.

[32] analyzed the correlation structure in networks of interacting
Hawkes processes [59, 60] using an approach similar to the one presented here. In both cases, correlations between cell pairs are represented in terms of contributions of different connectivity motifs. However, the methods differ in important ways. While their expressions are exact for Hawkes processes, [32] did not attempt to match their results to more physiological cell models, and did not account for the response properties of individual cells, although it may be possible to approximately fit Hawkes models to such cell models. Moreover, [32] examined only total spike count covariances, which equal the integrals of the cross-correlation functions. This leads to a loss of information about the temporal structure of correlations. However, as they note, their approach can be extended to obtain complete cross-correlation functions for the Hawkes model.

To illustrate the power of the theory in analyzing the factors that shape correlations, we considered a number of simple examples for which the approximation in Eq. (13) is tractable. We showed how the theory can be used both to obtain intuition for the effects that shape correlations, and to quantify their impact. In particular, we explained how only local connections affect correlations in a precisely tuned all-to-all network, and how strengthening inhibition may synchronize spiking activity.

Linear response methods are perturbative. For Eq. (5) to be valid, neurons need to respond approximately linearly to their inputs. This will only be true if inputs are "weak" relative to the dynamical operating point of the cell. We assume the presence of a white noise background which linearizes the transfer function, although it is possible to extend the present methods to colored background noise [26, 61].

It may be surprising that an approach based on linear response theory can provide corrections to cross-correlations of arbitrary order in network connectivity.
Corrections to firing activity which are higher order in network connectivity emerge from the linear correction in Eq. (10). A full expansion of firing activity would include terms arising from corrections to the input-output transfer of the individual cells beyond those captured by the linear response approximation. Formally, including such terms would be necessary to capture all contributions of a given order in network connectivity [33, 34]. However, the high accuracy demonstrated by our method indicates that, at least in some cases, these additional terms are quite small. In short, our approximation neglects higher-order corrections to the input-output transfer of individual cells, which is compatible with the assumption that the presence of background noise causes this transfer to be close to linear.

As expected from the preceding discussion, simulations suggest that, for IF neurons, our approximations become less accurate as cells receive progressively stronger inputs. The "physical" reason for this loss of accuracy could be related to interactions between the "hard threshold" and incoming synaptic inputs with short timescales. While the theory works for short synaptic timescales, it improves for slower synaptic dynamics, becoming essentially exact in the limit of arbitrarily long synaptic time constants (note the improvement in the approximation for the FFI circuit at the slower timescale, as exhibited in Fig. 3). However, as we have shown, the theory remains applicable in a wide range of dynamical regimes, including relatively low noise, superthreshold regimes where cells exhibit strong oscillatory behavior. Moreover, the theory can yield accurate approximations of strong correlations due to coupling: for the bidirectionally coupled excitatory circuit of Fig. 4, the approximate cross-correlations match numerically obtained results even when the correlation coefficients $\rho_{E_1 E_2}(\infty)$ are large.

Methods
Numerical methods
Simulations were run in C++, and the stochastic differential equations were integrated with a standard Euler method with a time-step of 0.01 ms. General parameter values were as follows: $\tau_i = 20$ ms, $E_{L,i} + E_i = -$… mV, $\sigma_i = $ …, $v_{th} = 20$ mV, $v_r = -$… mV, $\tau_{ref} = 2$ ms, $V_T = -$… mV, $\Delta_T = 1.$… mV, $\tau_E = 10$ ms, $\tau_I = 5$ ms, $\tau_{D,i} = 1$ ms. Marginal statistics (firing rates, uncoupled power spectra and response functions) were obtained using the threshold integration method of [30] in MATLAB. All code is available upon request.
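For orientation, the integration scheme can be sketched for a single leaky integrate-and-fire neuron driven by white background noise. The original simulations were in C++; the following is a hypothetical Python analogue, not the authors' code, and the reset value, mean drive, noise intensity, and noise scaling convention are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Parameters from the Methods where recoverable (tau = 20 ms, v_th = 20 mV,
# tau_ref = 2 ms, dt = 0.01 ms); the rest are assumed for illustration.
tau, v_th, v_r = 20.0, 20.0, 0.0        # ms, mV
E0, sigma = 18.0, 5.0                   # assumed mean input and noise (mV)
dt, T, t_ref = 0.01, 2000.0, 2.0        # ms

v, refr, spikes, t = v_r, 0.0, [], 0.0
while t < T:
    if refr > 0:
        refr -= dt                      # absolute refractory period
    else:
        # Euler step of the leak plus a white-noise increment.
        v += dt * (E0 - v) / tau + sigma * np.sqrt(dt / tau) * rng.standard_normal()
        if v >= v_th:                   # threshold crossing: spike and reset
            spikes.append(t)
            v, refr = v_r, t_ref
    t += dt

rate = len(spikes) / (T / 1000.0)       # firing rate in Hz
```

Averaging many such realizations yields the Monte Carlo rates and spectra against which the linear response predictions are compared in the figures.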
Calculation of stationary rates in a recurrent network
The stationary firing rate of an IF neuron can be computed as a function of the mean and intensity of internal noise ($E_i$, $\sigma_i$) and other cellular parameters ($\tau_i$, $E_{L,i}$, etc.) [64]. Denote the stationary firing rate of cell $i$ in the network by $r_i$, and by $r_{i,0}(E, \sigma)$ the stationary firing rate in the presence of white noise with mean $E$ and variance $\sigma^2$. We keep the dependence on other parameters implicit. The stationary rates, $r_i$, in the recurrent network without external input are determined self-consistently by
$$
r_i = r_{i,0}(E_i', \sigma_i) = r_{i,0}\Big( E_i + \sum_j W_{ij} r_j,\; \sigma_i \Big), \qquad i = 1, \ldots, N,
$$
where we used $E[f_i] = \sum_j W_{ij} E[y_j] = \sum_j W_{ij} r_j$. This equality holds because the synaptic kernels, $J_{ij}$, were normalized to have area $W_{ij}$. These equations can typically be solved by fixed-point iteration.

Note that this provides an effective mean input, $E_i'$, to each cell, but does not give adjustments to the variance, $\sigma_i$. We assume that the major impact of recurrent input is reflected in $E_i'$, and ignore corrections to the cell response involving higher order statistics of the input. This approach is valid as long as fluctuations in the recurrent input to each cell are small compared to $\sigma_i$, and may break down otherwise [28].

Derivation of Eq. (8): Equation (8) can be obtained by a direct calculation:
$$
\begin{aligned}
E\big[(y_i^1(t+\tau) - r_i)(y_j^1(t) - r_j)\big]
&= E\big[(y_i^0(t+\tau) - r_i)(y_j^0(t) - r_j)\big]
+ \sum_k E\big[(K_{ik} * [y_k^0 - r_k])(t+\tau)\,(y_j^0(t) - r_j)\big] \\
&\quad + \sum_k E\big[(y_i^0(t+\tau) - r_i)\,(K_{jk} * [y_k^0 - r_k])(t)\big]
+ \sum_{k,l} E\big[(K_{ik} * [y_k^0 - r_k])(t+\tau)\,(K_{jl} * [y_l^0 - r_l])(t)\big] \\
&= \delta_{ij} C_{ii}^0(\tau) + (K_{ij} * C_{jj}^0)(\tau) + (K_{ji}^- * C_{ii}^0)(\tau) + \sum_k (K_{ik} * K_{jk}^- * C_{kk}^0)(\tau).
\end{aligned}
$$

Verification of Eq. (11): First, Eq.
(10) directly implies that
$$
\mathbf{y}^n(t) = \mathbf{y}^0(t) + \sum_{k=1}^{n} (\mathbf{K}^k * [\mathbf{y}^0 - \mathbf{r}])(t), \qquad n \geq 1,
$$
which we may use to find, for each $n \geq 1$,
$$
\begin{aligned}
\mathbf{C}^n(\tau) &\equiv E\big[ (\mathbf{y}^n(t+\tau) - \mathbf{r})(\mathbf{y}^n(t) - \mathbf{r})^T \big] \\
&= E\big[ (\mathbf{y}^0(t+\tau) - \mathbf{r})(\mathbf{y}^0(t) - \mathbf{r})^T \big] + \sum_{k=1}^{n} E\big[ (\mathbf{K}^k * [\mathbf{y}^0 - \mathbf{r}])(t+\tau)(\mathbf{y}^0(t) - \mathbf{r})^T \big] \\
&\quad + \sum_{k=1}^{n} E\big[ (\mathbf{y}^0(t+\tau) - \mathbf{r})(\mathbf{K}^k * [\mathbf{y}^0 - \mathbf{r}])^T(t) \big] + \sum_{k,l=1}^{n} E\big[ (\mathbf{K}^k * [\mathbf{y}^0 - \mathbf{r}])(t+\tau)(\mathbf{K}^l * [\mathbf{y}^0 - \mathbf{r}])^T(t) \big] \\
&= \mathbf{C}^0(\tau) + \sum_{k=1}^{n} (\mathbf{K}^k * \mathbf{C}^0)(\tau) + \sum_{k=1}^{n} (\mathbf{C}^0 * (\mathbf{K}^-)^{kT})(\tau) + \sum_{k,l=1}^{n} (\mathbf{K}^k * \mathbf{C}^0 * (\mathbf{K}^-)^{lT})(\tau). \quad (25)
\end{aligned}
$$
Since $\mathbf{K}^0_{ij}(t) = \delta_{ij}\delta(t)$, Eq. (25) is equivalent to Eq. (11).

Correction to statistics in the presence of an external white noise signal.
Expression (14) can be used to compute the statistics of the network response to inputs $\eta_i(t)$ of finite variance. As noted by [27], when inputs have infinite variance additional corrections are necessary. As a particular example, consider the case where the processes are correlated white noise, i.e., when $\eta_i(t) = \sqrt{c}\, x_c(t) + \sqrt{1 - c}\, x_i(t)$, where $x_c, x_i$ are independent white noise processes with variance $\sigma_e^2$. Then each $\eta_i$ is also a white noise process with intensity $\sigma_{e_i}^2$, but
$$
E[\eta_i(t+\tau)\eta_j(t)] = \big[ \delta_{ij}\delta(\tau) + (1 - \delta_{ij})\, c\, \delta(\tau) \big] \sigma_{e_i}^2.
$$
The firing rate of cell $i$ in response to this input is $r_i = r_0\big(E_i', \sqrt{\sigma_i^2 + \sigma_{e_i}^2}\big)$, and the point around which the response of the cell is linearized needs to be adjusted.

Finally, we may apply an additional correction to the linear response approximation of auto-correlations. For simplicity, we ignore coupling in Eq. (14) (so that $\tilde{\mathbf{K}} = 0$). Linear response predicts that $\tilde{C}_{ii}(\omega) = \tilde{C}^0_{ii}(\omega; \sigma_i^2) + \sigma_{e_i}^2 |\tilde{A}_i(\omega)|^2$, where we have introduced explicit dependence on $\sigma_i^2$, the variance of the white noise received by an IF neuron with power spectrum $\tilde{C}^0_{ii}(\omega; \sigma_i^2)$, in the absence of the external signal. The approximation may be improved in this case by making the following substitution in Eq. (14) [27, 51]:
$$
\tilde{C}^0_{ii}(\omega; \sigma_i^2) + \sigma_{e_i}^2 |\tilde{A}_i(\omega)|^2 \;\to\; \tilde{C}^0_{ii}(\omega; \sigma_i^2 + \sigma_{e_i}^2).
$$
The response function $A$ should be adjusted likewise.

Acknowledgements
The authors thank Robert Rosenbaum, Brent Doiron and Srdjan Ostojic for many helpful discussions. This work was supported by NSF grants DMS-0817649, DMS-1122094, and a Texas ARP/ATP award to KJ, as well as NSF grants DMS-1122106, DMS-0818153, and a Burroughs Wellcome Career Award at the Scientific Interface to ESB.
References
1. Cohen M, Kohn A (2011) Measuring and interpreting neuronal correlations. Nat Neurosci 14: 811–819.
2. Ohki K, Chung S, Ch'ng Y, Kara P, Reid R (2005) Functional imaging with cellular resolution reveals precise micro-architecture in visual cortex. Nature 433: 597–603.
3. Field G, Gauthier J, Sher A, Greschner M, Machado T, et al. (2010) Functional connectivity in the retina at the resolution of photoreceptors. Nature 467: 673–677.
4. Jia H, Rochefort N, Chen X, Konnerth A (2010) In vivo two-photon imaging of sensory-evoked dendritic calcium signals in cortical neurons. Nat Protoc 6: 28–35.
5. Brown EN, Kass RE, Mitra P (2004) Multiple neural spike train data analysis: state-of-the-art and future challenges. Nat Neurosci 7: 456–61.
6. Averbeck B, Latham P, Pouget A (2006) Neural correlations, population coding and computation. Nat Rev Neurosci 7: 358–366.
7. Shadlen M, Newsome W (1998) The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J Neurosci 18: 3870.
8. Panzeri S, Schultz S, Treves A, Rolls E (1999) Correlations and encoding of information in the nervous system. Proc R Soc Lond B 266: 1001–1012.
9. Zohary E, Shadlen MN, Newsome WT (1994) Correlated neuronal discharge rate and its implication for psychophysical performance. Nature 370: 140–143.
10. Abbott LF, Dayan P (1999) The effect of correlated variability on the accuracy of a population code. Neural Comp 11: 91–101.
11. Sompolinsky H, Yoon H, Kang K, Shamir M (2001) Population coding in neuronal systems with correlated noise. Phys Rev E 64: 051904.
12. Panzeri S, Petersen R, Schultz S, Lebedev M, Diamond M (2001) The role of spike timing in the coding of stimulus location in rat somatosensory cortex. Neuron 29: 769–777.
13. Beck J, Bejjanki V, Pouget A (2011) Insights from a simple expression for linear Fisher information in a recurrently connected population of spiking neurons. Neural Comp: 1–19.
14.
Josić K, Shea-Brown E, Doiron B, de la Rocha J (2009) Stimulus-dependent correlations and population codes. Neural Comp 21: 2774–2804.
15. Schneidman E, Bialek W, Berry M (2003) Synergy, redundancy, and independence in population codes. J Neurosci 23: 11539–11553.
16. Latham PE, Nirenberg S (2005) Synergy, redundancy, and independence in population codes, revisited. J Neurosci 25: 5195–5206.
17. Nirenberg S, Carcieri SM, Jacobs AL, Latham PE (2001) Retinal ganglion cells act largely as independent encoders. Nature 411: 698–701.
18. Cohen M, Newsome W (2009) Estimates of the contribution of single neurons to perception depend on timescale and noise correlation. J Neurosci 29: 6635–6648.
19. Romo R, Hernandez A, Zainos A, Salinas E (2003) Correlated neuronal discharges that increase coding efficiency during perceptual discrimination. Neuron 38: 649–657.
20. Renart A, de la Rocha J, et al. (2010) The asynchronous state in cortical circuits. Science 327: 587–590.
21. Ecker A, Berens P, Keliris G, Bethge M, Logothetis N, et al. (2010) Decorrelated neuronal firing in cortical microcircuits. Science 327: 584.
22. Dragoi V (2011). Personal communication.
23. Paninski L, Ahmadian Y, Ferreira D, Koyama S, Rahnama Rad K, et al. (2010) A new look at state-space models for neural data. J Comput Neurosci 29: 107–126.
24. Nykamp D (2007) A mathematical framework for inferring connectivity in probabilistic neuronal networks. Math Biosci 205: 204–251.
25. Ostojic S, Brunel N, Hakim V (2009) How connectivity, background activity, and synaptic properties shape the cross-correlation between spike trains. J Neurosci 29: 10234–10253.
26. Fourcaud-Trocmé N, Hansel D, Van Vreeswijk C, Brunel N (2003) How spike generation mechanisms determine the neuronal response to fluctuating inputs. J Neurosci 23: 11628.
27. Lindner B, Doiron B, Longtin A (2005) Theory of oscillatory firing induced by spatially correlated noise and delayed inhibitory feedback. Phys Rev E 72: 061919.
28.
Brunel N, Hakim V (1999) Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comp 11: 1621–1671.
29. Lindner B, Schimansky-Geier L (2001) Transmission of noise coded versus additive signals through a neuronal ensemble. Phys Rev Lett 86: 2934–2937.
30. Richardson M (2007) Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drive. Phys Rev E 76: 021919.
31. Gabbiani F, Koch C (1998) Methods in Neuronal Modeling. MIT Press. pp. 313–360.
32. Pernice V, Staude B, Cardanobile S, Rotter S (2011) How structure determines correlations in neuronal networks. PLoS Comput Biol 7: e1002059.
33. Rangan A (2009) Diagrammatic expansion of pulse-coupled network dynamics. Phys Rev Lett 102: 158101.
34. Rangan A (2009) Diagrammatic expansion of pulse-coupled network dynamics in terms of subnetworks. Phys Rev E 80: 036101.
35. Risken H (1996) The Fokker-Planck Equation: Methods of Solution and Applications. Springer Verlag.
36. Brunel N, Chance F, Fourcaud N, Abbott L (2001) Effects of synaptic noise and filtering on the frequency response of spiking neurons. Phys Rev Lett 86: 2186–2189.
37. White JA, Rubinstein JT, Kay AR (2000) Channel noise in neurons. Trends Neurosci 23: 131–7.
38. Renart A, Brunel N, Wang X (2004) Mean-field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks. In: Computational Neuroscience: A Comprehensive Approach.
39. Burkitt A (2006) A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biol Cybern 95: 1–19.
40. Gabbiani F, Cox S (2010) Mathematics for Neuroscientists. Academic Press.
41. Bair W, Zohary E, Newsome W (2001) Correlated firing in macaque visual area MT: time scales and relationship to behavior. J Neurosci 21: 1676.
42. Barreiro A, Shea-Brown E, Thilo E (2010) Time scales of spike-train correlation for neural oscillators with common drive. Phys Rev E 81: 011916.
43.
Roxin A (2011) The role of degree distribution in shaping the dynamics in networks of sparsely connected spiking neurons. Front Comput Neurosci 5.
44. Zhao L, Beverlin B, Netoff T, Nykamp DQ (2011) Synchronization from second order network connectivity statistics. Front Comput Neurosci 5: 1–16.
45. Bullmore E, Sporns O (2009) Complex brain networks: graph theoretical analysis of structural and functional systems. Nat Rev Neurosci 10: 186–198.
46. Song S, Sjöström P, Reigl M, Nelson S, Chklovskii D (2005) Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol 3: e68.
47. Oswald A, Doiron B, Rinzel J, Reyes A (2009) Spatial profile and differential recruitment of GABAB modulate oscillatory activity in auditory cortex. J Neurosci 29: 10321.
48. Perin R, Berger T, Markram H (2011) A synaptic organizing principle for cortical neuronal groups. P Natl Acad Sci USA 108: 5419.
49. Kato T (1995) Perturbation Theory for Linear Operators, volume 132. Springer Verlag.
50. de la Rocha J, Doiron B, Shea-Brown E, Josić K, Reyes A (2007) Correlation between neural spike trains increases with firing rate. Nature 448: 802–806.
51. Vilela R, Lindner B (2009) Comparative study of different integrate-and-fire neurons: spontaneous activity, dynamical response, and stimulus-induced correlation. Phys Rev E 80: 031909.
52. Kremkow J, Perrinet L, Masson G, Aertsen A (2010) Functional consequences of correlated excitatory and inhibitory conductances in cortical networks. J Comp Neurosci 28: 579–594.
53. Veredas F, Vico F, Alonso J (2005) Factors determining the precision of the correlated firing generated by a monosynaptic connection in the cat visual pathway. J Physiol 567: 1057.
54. Kirkwood P (1979) On the use and interpretation of cross-correlation measurements in the mammalian central nervous system. J Neurosci Meth 1: 107.
55. Fetz E, Gustafsson B (1983) Relation between shapes of post-synaptic potentials and changes in firing probability of cat motoneurones.
J Physiol 341: 387.
56. Herrmann A, Gerstner W (2001) Noise and the PSTH response to current transients: I. General theory and application to the integrate-and-fire neuron. J Comput Neurosci 11: 135–151.
57. Vreeswijk C, Abbott L, Bard Ermentrout G (1994) When inhibition not excitation synchronizes neural firing. J Comput Neurosci 1: 313–321.
58. Shepherd G (1991) Foundations of the Neuron Doctrine. Oxford Univ Press.
59. Hawkes A (1971) Spectra of some self-exciting and mutually exciting point processes. Biometrika 58: 83.
60. Hawkes A (1971) Point spectra of some mutually exciting point processes. J Roy Stat Soc B Met: 438–443.
61. Alijani A, Richardson M (2011) Rate response of neurons subject to fast or frozen noise: from stochastic and homogeneous to deterministic and heterogeneous populations. Phys Rev E 84: 011919.
62. Richardson M (2009) Dynamics of populations and networks of neurons with voltage-activated and calcium-activated currents. Phys Rev E 80: 021928.
63. Toyoizumi T, Rad K, Paninski L (2009) Mean-field approximations for coupled populations of generalized linear model spiking neurons with Markov refractoriness. Neural Comp 21: 1203–1243.
64. Ricciardi L, Sacerdote L (1979) The Ornstein-Uhlenbeck process as a model for neuronal activity. Biol Cybern 35: 1–9.
Figure 1. Illustrating Eq. (4). (A) The input to the post-synaptic cell is a fixed spike train which is convolved with a synaptic kernel. (B) A sample voltage path for the post-synaptic cell receiving the input shown in (A) in the presence of background noise. (C) Raster plot of 100 realizations of output spike trains of the post-synaptic cell. (D) The output firing rate, $r(t)$, obtained by averaging over realizations of the output spike trains in (C). The rate obtained using Monte Carlo simulations (shaded in gray) matches predictions of linear response theory obtained using Eq. (4) (black).
Figure 2. Iterative construction of the linear approximation to network activity. (A) An example recurrent network. (B)-(D) A sequence of graphs determines the successive approximations to the output of neuron 1. Processes defined by the same iteration of Eq. (10) have equal color. (B) In the first iteration of Eq. (10), the output of neuron 1 is approximated using the unperturbed outputs of its neighbors. (C) In the second iteration the results of the first iteration are used to define the inputs to the neuron. For instance, the process $y_2^1$ depends on the base process $y_1^0$, which represents the unperturbed output of neuron 1. Neuron 4 receives no inputs from the rest of the network, and all approximations involve only its unperturbed output, $y_4^0$. (D) Cells 3 and 4 are not part of recurrent paths, and their contributions to the approximation are fixed after the second iteration. However, the recurrent connection between cells 1 and 2 implies that subsequent approximations involve contributions of directed chains of increasing length.
Figure 3. The relation between correlation structure and response statistics in a feed-forward inhibitory microcircuit. (A) The FFI circuit (left) can be decomposed into three submotifs. Equation (16) shows that each submotif provides a specific contribution to the cross-correlation between cells E and I. (B) Comparison of the theoretical prediction with the numerically computed cross-correlation between cells E and I. Results are shown for two different values of the inhibitory time constant, τ_I (τ_I = 5 ms, solid line; τ_I = 10 ms, dashed line). (C) The contributions of the different submotifs in panel A are shown for both τ_I = 5 ms (solid) and τ_I = 10 ms (dashed). The inset shows the corresponding change in the inhibitory synaptic filter. The present color scheme is used in subsequent figures. Connection strengths were ±40 mV · ms for excitatory and inhibitory connections. In each case, the long window correlation coefficient ρ_EI between the two cells was ≈ −0.18.
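The inset of Fig. 3C shows how lengthening τ_I reshapes the inhibitory synaptic filter. The exact filter is not specified in this excerpt; assuming, for illustration, an alpha-function kernel normalized so that its area matches the connection strength of −40 mV · ms, doubling τ from 5 ms to 10 ms delays the filter's peak and broadens it while leaving its integral unchanged, consistent with the stretched, delayed submotif contributions in panel C.

```python
import numpy as np

def alpha_filter(t, tau, G=-40.0):
    """Hypothetical inhibitory synaptic filter: an alpha function
    g(t) = (G / tau**2) * t * exp(-t / tau), normalized so that its
    integral over [0, inf) equals the connection strength G (mV*ms)."""
    return (G / tau**2) * t * np.exp(-t / tau)

t = np.linspace(0.0, 200.0, 200001)   # time in ms; dt = 0.001 ms
dt = t[1] - t[0]
for tau in (5.0, 10.0):               # the two values used in Fig. 3
    g = alpha_filter(t, tau)
    area = np.sum(g) * dt             # total filter area (mV*ms)
    peak_time = t[np.argmin(g)]       # filter is inhibitory, so its peak is a minimum
    print(f"tau = {tau:4.1f} ms: area = {area:7.2f} mV*ms, peak at {peak_time:.2f} ms")
```

The peak of an alpha function sits at t = τ, so the longer time constant shifts the bulk of the inhibition later in time without changing the total synaptic weight.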
Figure 4. The relation between correlation structure and response statistics for two bidirectionally coupled, excitatory cells. (A) The cross-correlation between the two cells can be represented in terms of contributions from an infinite sequence of submotifs (see Eq. (18)). Though we show only a few "chain" motifs in one direction, chain motifs in the reverse direction, as well as indirect common input motifs, also contribute to the cross-correlation (see the discussion of Fig. 5). (B), (E) Linear response kernels in the excitable (B) and oscillatory (E) regimes. (C), (F) The cross-correlation function with first and third order contributions computed using Eq. (17) in the excitable (C) and oscillatory (F) regimes. (D), (G) The auto-correlation function with zeroth and second order contributions computed using Eq. (17) in the excitable (D) and oscillatory (G) regimes. In the oscillatory regime, higher order contributions were small relative to first order contributions and are therefore not shown. The network's symmetry implies that cross-correlations are symmetric, and we only show them for positive times. All cross-correlations are nearly indistinguishable from those obtained from simulations (see SI Fig. 3 for comparisons). Connection strengths were 40 mV · ms. The long window correlation coefficient ρ_EE(∞) between the two cells was ≈ . ≈ .
Figure 5. The motifs giving rise to terms in the expansion of Eq. (13). (A)
Terms containing only unconjugated (or only conjugated) interaction kernels K̃_ab correspond to directed chains. (B) Terms containing both unconjugated and conjugated interaction kernels K̃_ab correspond to direct or indirect common input motifs.
Figure 6. All–to–all networks and the importance of higher order motifs. (A) Some of the submotifs contributing to correlations in the all–to–all network. (B) Cross-correlations between two excitatory cells in an all–to–all network (N_E = 80, N_I = 20) obtained using Eq. (19) (solid – precisely tuned network with μ̃ ≡ 0 [G_E = 175 mV · ms, G_I = −(N_E/N_I)G_E = −700 mV · ms, τ_E = τ_I = 10 ms]; dashed – non-precisely tuned network with μ̃ ≠ 0 [G_E = 210 mV · ms, G_I = − mV · ms, τ_E = 10 ms, τ_I = 5 ms]). (C) Comparison of first and second order contributions to the cross-correlation function in panel B in the precisely tuned (left) and non-precisely tuned (right) network. In both cases, the long window correlation coefficient ρ(∞) was fixed at 0.05.
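Precise tuning in Fig. 6 sets G_I = −(N_E/N_I) G_E, which makes the summed excitatory and inhibitory synaptic weights cancel. Assuming, for illustration, that the tuned quantity μ̃ is proportional to N_E G_E + N_I G_I, a one-line check confirms the cancellation:

```python
# Check the precise-tuning condition of Fig. 6: G_I = -(N_E / N_I) * G_E
# makes the summed excitatory and inhibitory weights cancel. (Assuming, as
# the relation suggests, that the tuned quantity is N_E*G_E + N_I*G_I.)
N_E, N_I = 80, 20
G_E = 175.0                      # mV*ms
G_I = -(N_E / N_I) * G_E         # = -700 mV*ms, as in the tuned network
net = N_E * G_E + N_I * G_I      # total excitation plus total inhibition
print(G_I, net)                  # -700.0 0.0
```

With the dashed parameters of panel B this sum no longer vanishes, which is what distinguishes the non-precisely tuned network.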
Figure 7. Correlations in random, fixed in-degree networks. (A) A comparison of numerically obtained excitatory-inhibitory cross-correlations to the approximation given by Eq. (24). (B) Mean and standard deviation of the distribution of cross-correlation functions for excitatory-inhibitory pairs of cells (solid line – mean cross-correlation; shaded area – one standard deviation from the mean, calculated using bootstrapping in a single network realization). (C) Mean and standard deviation of the distribution of cross-correlation functions, conditioned on cell type and first order connectivity, for a reciprocally coupled excitatory-inhibitory pair of cells (solid line – mean cross-correlation function; shaded area – one standard deviation from the mean, found by bootstrapping). (D) Average reduction in L^2 error between cross-correlation functions and their respective first-order conditioned averages, relative to the error between the cross-correlations and their cell-type averages. Blue circles give results for a precisely tuned network, and red squares for a network with stronger, faster inhibition. Error bars indicate two standard errors above and below the mean. G_E, G_I, τ_E and τ_I for panels A-C are as in the precisely tuned network of Fig. 6, and the two networks of panel D are as in the networks of the same figure.

Notation:
v_i, τ_i, E_{L,i}, σ_i – Membrane potential, membrane time constant, leak reversal potential, and noise intensity of cell i.
E_i, σ_i – Mean and standard deviation of the background noise for cell i.
v_th, v_r, τ_ref – Membrane potential threshold, reset, and absolute refractory period for cells.
ψ(v), V_T, Δ_T – Spike generating current, soft threshold and spike shape parameters for the EIF model [26].
f_i(t), η_i(t) – Synaptic input from other cells in the network, and external input to cell i.
τ_{S,i}, τ_{D,i}
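The bootstrap behind the shaded bands of Fig. 7B-C can be sketched generically. The data below are fabricated stand-ins (smooth curves with random amplitudes), not simulation output; the point is the resampling pattern: cross-correlation functions from many cell pairs in a single network realization are resampled with replacement to estimate the mean function and its uncertainty, while the pair-to-pair standard deviation gives the width of the shaded band.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in data: 500 sampled cross-correlation functions (one per E-I cell
# pair), evaluated at 101 lags. Real data would come from the network
# simulation; here we fabricate smooth curves for shape only.
lags = np.linspace(-50.0, 50.0, 101)
n_pairs = 500
base = np.exp(-lags**2 / (2 * 10.0**2))              # common correlation shape
amps = rng.normal(loc=1.0, scale=0.3, size=n_pairs)  # pair-to-pair variability
ccfs = amps[:, None] * base[None, :]

# Bootstrap over cell pairs within this single "network realization".
n_boot = 1000
boot_means = np.empty((n_boot, lags.size))
for b in range(n_boot):
    idx = rng.integers(0, n_pairs, size=n_pairs)  # resample pairs w/ replacement
    boot_means[b] = ccfs[idx].mean(axis=0)

mean_ccf = boot_means.mean(axis=0)   # bootstrap estimate of the mean function
sem_ccf = boot_means.std(axis=0)     # bootstrap standard error of the mean
sd_ccf = ccfs.std(axis=0)            # pair-to-pair spread (shaded band)
```

Conditioning on first order connectivity, as in panel C, amounts to running the same resampling separately within each connectivity class (e.g., reciprocally coupled pairs) rather than over all pairs at once.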