An information-theoretic framework to measure the dynamic interaction between neural spike trains
Gorana Mijatovic, Yuri Antonacci, Tatjana Loncar-Turukalo, Ludovico Minati, Luca Faes
Faculty of Technical Sciences, University of Novi Sad, Serbia
Department of Physics and Chemistry "Emilio Segrè", University of Palermo, Italy
Center for Mind/Brain Sciences (CIMeC), University of Trento, 38123 Trento, Italy, and the Institute of Innovative Research, Tokyo Institute of Technology, Yokohama 226-8503, Japan
Department of Engineering, University of Palermo, Italy
(Dated: December 17, 2020)

Understanding the interaction patterns among simultaneous recordings of spike trains from multiple neuronal units is a key topic in neuroscience. However, an optimal approach for assessing these interactions has not been established, as existing methods either do not consider the inherent point-process nature of spike trains or are based on parametric assumptions that may lead to wrong inferences if not met. This work presents a framework, grounded in the field of information dynamics, for the model-free, continuous-time estimation of both undirected (symmetric) and directed (causal) interactions between pairs of spike trains. The framework decomposes the overall information exchanged dynamically between two point processes X and Y as the sum of the dynamic mutual information (dMI) between the histories of X and Y, plus the transfer entropy (TE) along the directions X → Y and Y → X. Building on recent work which derived theoretical expressions and consistent estimators for the TE in continuous time, we develop algorithms for efficiently estimating all measures in our framework through nearest-neighbor statistics. These algorithms are validated in simulations of independent and coupled spike-train processes, showing the accuracy of dMI and TE in the assessment of undirected and directed interactions even for weakly coupled and short realizations, and proving the superiority of the continuous-time estimator over the discrete-time method.
Then, the usefulness of the framework is illustrated in a real-data scenario of recordings from in-vitro preparations of spontaneously-growing cultures of cortical neurons, where we show the ability of dMI and TE to identify how the networks of undirected and directed spike-train interactions change their topology through maturation of the neuronal cultures.

PACS numbers: 02.50.Ey, 05.45.Tp, 87.10.Mn, 92.70.Gt
I. INTRODUCTION
Multi-electrode recording techniques, which have become a standard in neuroscience, provide large amounts of data about the neural activity measured at different temporal and spatial scales. In particular, simultaneous recordings of the firing activity of hundreds of neurons in various regions of the brain are nowadays widely available, and such availability is raising more and more the interest of neuroscientists in how groups of neurons influence each other's firing and act in concert to determine the function of a given brain region [1].

Thrust by this interest, the field of computational neuroscience has witnessed a continuous development of tools and algorithms to quantify the degree of interaction between two or more simultaneously recorded spike trains. The large body of work in this context has evolved along two main directions: the development of symmetric (undirected) measures of coupling between pairs of spike trains, commonly denoted as synchrony measures [2], and the design of causal (directed) interaction measures, typically based on the statistical concept of Granger causality (GC) [3]. Synchrony measures are essentially aimed at detecting pairwise correlations in spike trains [4], and are based on standard tools like cross-correlation [5] and coherence [6] or on specific statistics applied to the inter-spike intervals (ISIs) extracted from the two analyzed trains [2, 7]. On the other hand, directed measures attempt to infer the causal influence that one neural unit exerts over another by adopting different strategies for applying to spike-train data the concept of predictability improvement inherent in the definition of GC [8, 9].

∗ [email protected]; † [email protected]; ‡ [email protected]; § [email protected]; ¶ [email protected]
Up to now, these two approaches have been pursued as alternatives to one another, and unifying frameworks embedding directed and undirected measures of spike-train interaction as complementary aspects of the overall dynamic interaction between spike trains are lacking.

A major issue with the estimation of both undirected and directed interactions between neural spike trains is that the underlying tools usually do not consider the point-process nature of neural spike-train data. In fact, interaction measures like correlation and GC are typically defined for continuously valued signals uniformly sampled in time (i.e., time series), and their computation for point processes poses both theoretical and practical challenges. The application of these measures to neural spike trains has been performed by transforming the spike-train data either into discrete-time sequences through binning of the temporal axis (e.g., [10]), or into continuously valued signals through convolution with smoothing kernels (e.g., [11]). In either case, the alteration of the nature of the observed data causes a loss of information (in the case of time discretization) or the addition of distorting information (in the case of smoothing) which makes the estimation of the desired measure intrinsically suboptimal. Moreover, the parameters associated with the transformation (e.g., width of the time bin, kernel type) are difficult to set and impact substantially on the estimated interaction measure. Alternatives which avoid data transformation have been proposed in the context of parametric modeling, for example in [8, 12], where linear parametric models were introduced to compute GC specifically for point processes.
However, parametric approaches are limited in that they set a priori the type of interaction to be studied, and may miss important aspects of such interaction if the data do not fit the assumed model.

To overcome the above limitations, the present study introduces a framework for the computation of directed and undirected measures of interaction between neural spike trains, which is directly applicable to point processes evaluated in continuous time and does not require a model of the interactions. The framework is defined in the emerging field of information dynamics [13, 14], and exploits well-known concepts like mutual information (MI) [15] and transfer entropy (TE) [16]. The MI is a model-free undirected measure of the 'static' interaction between two processes, while the TE is a dynamic and directed measure that implements the probabilistic notion of GC in a non-parametric fashion. Here, we consider a measure of the overall information exchanged dynamically between two spike-train processes X and Y, and expand it as the sum of the dynamic MI (dMI) between the histories of X and Y and the TE along the two directions X → Y and Y → X. Importantly, we build on the theoretical formulation of the TE for point processes [17] and on the recent introduction of a fully model-free estimator based on nearest-neighbor statistics [18] to compute all measures in our framework in continuous time for spike-train processes. The resulting dMI and TE measures are first validated in simulated scenarios of independent and coupled spike trains, and then used to identify the networks of undirected and directed interactions underlying the spiking activity of spontaneously-growing cultures of cortical neurons.

The Matlab software relevant to this work is available for free download from the GitHub repository https://github.com/mijatovicg/TEMI.

II. METHODS

A. Information-theoretic framework to assess the dynamic interaction between stochastic processes
Given two possibly coupled dynamical systems X and Y, we assume that their evolution over time is described by the stochastic processes X = {X_t} and Y = {Y_t}, t ∈ R. To evaluate the dynamic interaction between the two systems in the information-theoretic domain, we consider a measure quantifying the amount of information shared between the present and past states of the related processes. Formally, if X_t, Y_t denote the present state of the two processes and X_t^-, Y_t^- denote their past history, the overall dynamic information shared between X and Y is defined as:

$$I_{X;Y} = I(X_t^-; Y_t^-) + I(X_t; Y_t^- \mid X_t^-) + I(Y_t; X_t^- \mid Y_t^-), \quad (1)$$

where I(·;·) and I(·;·|·) denote mutual information (MI) and conditional MI (CMI). In particular, the MI term in (1) is a measure of the information shared by the past states of the two analyzed processes, which we denote as the dynamic mutual information (dMI) between the histories of X and Y, I_{X^-;Y^-} = I(X_t^-; Y_t^-); the two CMI terms correspond to the information transfer from X to Y and from Y to X, T_{X→Y} = I(Y_t; X_t^- | Y_t^-) and T_{Y→X} = I(X_t; Y_t^- | X_t^-), quantified according to the well-known notion of transfer entropy (TE) [16]. Note that in the definitions of I_{X^-;Y^-}, T_{X→Y} and T_{Y→X} the time index is dropped under the hypothesis of stationarity of the processes. Eq. (1) is explained graphically in the Venn diagram representation of Fig. 1, and evidences how the overall dynamic interaction measure I_{X;Y} is decomposed as the sum of the two directional TE measures, T_{X→Y} and T_{Y→X}, plus the symmetric measure of dynamic MI, I_{X^-;Y^-}.

The information-theoretic measures defined above are typically computed for discrete-time stochastic processes, i.e. processes defined at discrete time instants t_n = nΔt, n ∈ Z, where Δt is the interval between time samples expressed in units of time.
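To make the decomposition in (1) concrete, the following toy sketch (not the estimator developed in this paper) evaluates the three terms by plug-in estimation on binary sequences with one-step histories; the coupled process and all function names are purely illustrative.

```python
import random
from collections import Counter
from math import log

def entropy(samples):
    """Plug-in Shannon entropy (nats) of a list of hashable outcomes."""
    n = len(samples)
    return -sum((c / n) * log(c / n) for c in Counter(samples).values())

def mi(a, b):
    """Plug-in mutual information I(A;B)."""
    return entropy(a) + entropy(b) - entropy(list(zip(a, b)))

def cmi(a, b, c):
    """Plug-in conditional mutual information I(A;B|C)."""
    return (entropy(list(zip(a, c))) + entropy(list(zip(b, c)))
            - entropy(list(zip(a, b, c))) - entropy(c))

random.seed(1)
n = 20000
x = [random.randint(0, 1) for _ in range(n)]
# y copies the previous value of x with 10% bit-flip noise: coupling X -> Y
y = [0] + [xi if random.random() > 0.1 else 1 - xi for xi in x[:-1]]

# one-step histories as proxies for X_t^-, Y_t^-
xt, xp = x[1:], x[:-1]
yt, yp = y[1:], y[:-1]

dMI  = mi(xp, yp)        # I(X^-; Y^-)
TEyx = cmi(xt, yp, xp)   # T_{Y->X} = I(X_t; Y^- | X^-)
TExy = cmi(yt, xp, yp)   # T_{X->Y} = I(Y_t; X^- | Y^-)
I_total = dMI + TEyx + TExy
print(dMI, TExy, TEyx, I_total)
```

With this coupling only T_{X→Y} is substantially positive, while T_{Y→X} and the dMI term stay near zero, mirroring the interpretation of the three terms in (1).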
In discrete time, information-dynamic measures are well established and a number of practical approaches exist to estimate dMI and TE measures starting from realizations of the processes X and Y provided in the form of synchronous time series of finite length [19]. The definition and subsequent computation of information-dynamic measures in continuous time, i.e. in the limit Δt → 0, is much more cumbersome and has been proposed only recently [17, 18]. In either case, to ensure convergence in the limit of small time-bin size it is important to compute the information-dynamic measures as rates, normalizing them to the width of the time bins. Using information rates, the decomposition of the overall dynamic interaction between X and Y becomes:

$$\dot{I}_{X;Y} = \dot{I}_{X^-;Y^-} + \dot{T}_{X\to Y} + \dot{T}_{Y\to X}, \quad (2)$$

FIG. 1. Venn diagram representation of the proposed measure of dynamic mutual information and of its decomposition. (a) Entropy of the present states (H(X_t), H(Y_t)) and of the past states (H(X_t^-), H(Y_t^-)) of the two processes X and Y. (b) Overall dynamic information shared between X and Y, I_{X;Y}. (c) Decomposition of I_{X;Y} revealing the directional transfer entropies T_{X→Y} and T_{Y→X}, and the symmetric dynamic mutual information I_{X^-;Y^-}.

where the dMI and TE rates are obtained normalizing the corresponding terms in (1) by the time-bin width (e.g., İ_{X^-;Y^-} = (1/Δt) I_{X^-;Y^-}, Ṫ_{Y→X} = (1/Δt) T_{Y→X}). In discrete time, setting t_n as the present time step, the random variables describing the present and past states of the processes X and Y are X_{t_n}, Y_{t_n} and X_{t_n}^-, Y_{t_n}^-, respectively.
With this notation, the dMI and TE rates appearing in (2) are formulated as follows:

$$\dot{I}_{X^-;Y^-} = \lambda \cdot E\left[\ln \frac{p(x_n^-, y_n^-)}{p(x_n^-)\,p(y_n^-)}\right] = \lim_{T\to\infty} \frac{1}{T}\sum_{n=1}^{N} \ln \frac{p(x_n^-, y_n^-)}{p(x_n^-)\,p(y_n^-)}, \quad (3)$$

$$\dot{T}_{Y\to X} = \lambda \cdot E\left[\ln \frac{p(x_n \mid x_n^-, y_n^-)}{p(x_n \mid x_n^-)}\right] = \lim_{T\to\infty} \frac{1}{T}\sum_{n=1}^{N} \ln \frac{p(x_n \mid x_n^-, y_n^-)}{p(x_n \mid x_n^-)}, \quad (4)$$

where λ = 1/Δt = N/T is the sampling rate of the discrete-time processes, T = NΔt is the duration of a time window containing N samples of the processes, x_n, y_n and x_n^- = [x_{n−1}, x_{n−2}, ...], y_n^- = [y_{n−1}, y_{n−2}, ...] are realizations of X_{t_n}, Y_{t_n} and of X_{t_n}^-, Y_{t_n}^-, and p(·), p(·,·) and p(·|·) denote marginal, joint and conditional probability densities. Note that the equivalences in (3) and (4) hold for ergodic processes where time averages can replace ensemble averages. In the following section, we show how similar formulations can be obtained for a particular class of continuous-time processes, i.e., spike trains.

B. Formulation of the framework for spike train processes
In this section, the framework introduced above for the dynamic analysis of generic stochastic processes is formalized for spike-train processes. Spike trains are continuous-time processes described by the occurrence at specific non-overlapping time instants of indistinguishable events, or spikes. As such, spike trains are typically observed as series of time points corresponding to the event times; here, we consider the spike trains X = {x_i}, i = 1, ..., N_X, and Y = {y_j}, j = 1, ..., N_Y, where x_i and y_j are real numbers denoting the time points of the i-th spike of the train X and of the j-th spike of the train Y. We stress that this is in contrast with the discrete-time formulation of stochastic processes, where realizations of the processes X and Y are time series of values measured at synchronously sampled time points.

While for discrete-time processes the system state is mapped by the time-series values, in the case of point processes the state is defined at each time instant by the so-called counting process, i.e. by the continuous-time process that counts the number of spikes which have occurred up to the present time; for the spike train X, the counting process is N_{X,t} = n : x_n ≤ t < x_{n+1} (with N_{X,t} = 0 ∀ t < x_1 and N_{X,t} = N_X ∀ t ≥ x_{N_X}). Then, the firing rate measures the probability for the train X to fire in a time interval [u, u + Δu] relative to the duration Δu of the interval; given the counting process, the instantaneous firing rate computed at time u for the process X is λ_{X,u} = lim_{Δu→0} p_u(N_{X,u+Δu} − N_{X,u} = 1)/Δu, where p_u refers to a probability density evaluated in continuous time (i.e., at any time u).
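The counting process and the empirical firing rate can be illustrated with a few lines of code; this is a sketch under the assumption of a homogeneous Poisson train (as used in the simulations below), with hypothetical helper names.

```python
import random
import bisect

def poisson_spike_train(rate, duration, rng):
    """Homogeneous Poisson spike train: successive exponential ISIs."""
    spikes, t = [], rng.expovariate(rate)
    while t < duration:
        spikes.append(t)
        t += rng.expovariate(rate)
    return spikes

def counting_process(spikes, t):
    """N_{X,t}: number of spikes occurred up to (and including) time t."""
    return bisect.bisect_right(spikes, t)

rng = random.Random(7)
rate, duration = 1.0, 5000.0   # 1 spike/s, as in the simulations of Sect. III
x = poisson_spike_train(rate, duration, rng)

# the empirical average rate N_X / T should approach the nominal rate
print(len(x) / duration)
```

The counting function is a non-decreasing step function of t, and N_X/T converges to the nominal rate λ as the duration grows.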
With this formalism, the TE rate from the source spike train Y to the target spike train X is defined as [17]:

$$\dot{T}_{Y\to X} = \lambda_X\, E_{p_x}\!\left[\ln \frac{\lambda_{X,x_i \mid X_{x_i}^-, Y_{x_i}^-}}{\lambda_{X,x_i \mid X_{x_i}^-}}\right] = \lim_{T\to\infty} \frac{1}{T}\sum_{i=1}^{N_X} \ln \frac{\lambda_{X,x_i \mid X_{x_i}^-, Y_{x_i}^-}}{\lambda_{X,x_i \mid X_{x_i}^-}}, \quad (5)$$

where N_X is the number of spikes occurring in X during the period T and λ_X = N_X/T is the average firing rate of X. In (5), λ_{X,x_i|X_{x_i}^-} and λ_{X,x_i|X_{x_i}^-,Y_{x_i}^-} are the instantaneous firing rates of the target process X evaluated at the time of its i-th spike x_i, respectively conditioned on the past history of the target process only and on the past histories of both the target and the source. Importantly, while the probability density p_u defining the instantaneous firing rate is taken at any arbitrary time point (unconditionally on events in any process), the probabilities defining the conditional firing rates used to assess the TE rate are taken at the times of the spiking events in the target. This aspect, which is stressed by making explicit in (5) that the expectation is taken over the distribution p_x, is crucial for the further developments that lead to the estimation of the TE rate in continuous time. Indeed, expressing the conditional firing rates in terms of p_u, making a Bayes inversion and observing that lim_{Δu→0} p_u(· | N_{X,u+Δu} − N_{X,u} = 1) = p_x(·), (5) can be rewritten as [18]:

$$\dot{T}_{Y\to X} = \lambda_X\, E_{p_x}\!\left[\ln\left(\frac{p_x(X_{x_i}^-, Y_{x_i}^-)}{p_u(X_{x_i}^-, Y_{x_i}^-)} \cdot \frac{p_u(X_{x_i}^-)}{p_x(X_{x_i}^-)}\right)\right], \quad (6)$$

which provides the basis for the estimation of the TE rate in continuous time reported in Sect. II C. The TE rate from X to Y is defined in a straightforward way by inverting the role of source and target spike trains in the derivations above.
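For completeness, the Bayes inversion invoked above can be spelled out as follows (a sketch of the intermediate step, using the stated identity lim_{Δu→0} p_u(· | N_{X,u+Δu} − N_{X,u} = 1) = p_x(·)):

```latex
% Conditional firing rate expressed through p_u and Bayes' rule:
\lambda_{X,u \mid X_u^-,Y_u^-}
  = \lim_{\Delta u \to 0}
    \frac{p_u\!\left(N_{X,u+\Delta u}-N_{X,u}=1 \mid X_u^-, Y_u^-\right)}{\Delta u}
  = \lambda_{X,u}\,\frac{p_x\!\left(X_u^-, Y_u^-\right)}{p_u\!\left(X_u^-, Y_u^-\right)},
% and analogously for the rate conditioned on the target history only:
\lambda_{X,u \mid X_u^-} = \lambda_{X,u}\,\frac{p_x\!\left(X_u^-\right)}{p_u\!\left(X_u^-\right)}.
% Taking the ratio at the spike times x_i, the unconditional rate cancels:
\frac{\lambda_{X,x_i \mid X_{x_i}^-,Y_{x_i}^-}}{\lambda_{X,x_i \mid X_{x_i}^-}}
  = \frac{p_x(X_{x_i}^-,Y_{x_i}^-)}{p_u(X_{x_i}^-,Y_{x_i}^-)}
    \cdot \frac{p_u(X_{x_i}^-)}{p_x(X_{x_i}^-)},
% which substituted into (5) yields (6).
```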
As regards the dMI rate quantifying the symmetric interactions between the past histories of the two processes, we define it extending (3) to spike-train processes in analogy to how (4) was extended to yield (5). Specifically, given the joint and marginal probability densities of the past histories of the two processes evaluated in continuous time, p_u(X_u^-, Y_u^-), p_u(X_u^-) and p_u(Y_u^-), u ∈ R, the dMI rate is defined as:

$$\dot{I}_{X^-;Y^-} = \lambda_U\, E_{p_u}\!\left[\ln \frac{p_u(X_u^-, Y_u^-)}{p_u(X_u^-)\,p_u(Y_u^-)}\right] = \lim_{T\to\infty} \frac{1}{T}\sum_{i=1}^{N_U} \ln \frac{p_u(X_{u_i}^-, Y_{u_i}^-)}{p_u(X_{u_i}^-)\,p_u(Y_{u_i}^-)}, \quad (7)$$

where the N_U time points u_1, ..., u_{N_U} are assumed to occur with intensity λ_U = N_U/T during a period of duration T. Similarly to (6), the derivation in (7) provides a means to estimate the dMI rate for spike trains.

C. Continuous-time estimation
This section presents the estimation of the TE and dMI measures composing the overall dynamic information shared by two spike trains X and Y. The measures are introduced in (1) for generic stochastic processes and are made explicit as rates in (6) and (7) for the case of spike-train processes. The estimation approach is based first on building realizations of the past of the two processes from the available data by means of a history-embedding strategy [18], and then on applying to such realizations the nearest-neighbor entropy estimator [20] to compute the different entropy terms that compose the TE and dMI rates to be estimated; entropy estimation is performed following a strategy that favors compensation of the bias of the individual entropy terms when they are summed to get the desired measure [18, 21].

FIG. 2. Example of history embeddings reconstructed with embedding length l = 3. (a) Joint history embedding J^l_{x_i} = [X^l_{x_i}, Y^l_{x_i}] at a target event x_i. (b) Joint history embedding J^l_{u_i} = [X^l_{u_i}, Y^l_{u_i}] at a random event u_i.

History embedding is performed in order to approximate the past of the two spike trains referred to specific time points such as the spike times x_i or y_j, or to arbitrary time points u_i sampled in continuous time. In the first case, the histories needed to compute the TE rate (6) are constructed, from the set of target spike times x_i, i = 1, ..., N_X, as illustrated in Fig. 2a: the history of the target train X referred to the i-th spike time x_i is approximated taking l inter-spike intervals as X_{x_i}^- ≈ X^l_{x_i} = [X_{x_i,1}, ..., X_{x_i,l}], where X_{x_i,k} = x_{i−k+1} − x_{i−k}, k = 1, ..., l; the history of the driver train Y referred to x_i is approximated taking first the interval from the most recent driver spike (i.e., y_p : y_p < x_i, y_{p+1} ≥ x_i) to x_i and then the preceding l − 1 inter-spike intervals of Y: Y_{x_i}^- ≈ Y^l_{x_i} = [x_i − y_p, Y^{l−1}_{y_p}]; the joint history at x_i, J_{x_i}^- = [X_{x_i}^-, Y_{x_i}^-], is approximated by the 2l-dimensional vector J^l_{x_i} = [X^l_{x_i}, Y^l_{x_i}]. In the second case, the histories needed to compute the dMI rate (7) are constructed from a set of arbitrary times u_i, i = 1, ..., N_U, as illustrated in Fig. 2b: in this case the histories of both spike trains referred to u_i are approximated taking the interval from the most recent spike to u_i followed by l − 1 inter-spike intervals: X_{u_i}^- ≈ X^l_{u_i} = [u_i − x_p, X^{l−1}_{x_p}], Y_{u_i}^- ≈ Y^l_{u_i} = [u_i − y_p, Y^{l−1}_{y_p}], J_{u_i}^- ≈ J^l_{u_i} = [X^l_{u_i}, Y^l_{u_i}]; the times u_i are placed randomly over the time axis, according to criteria determined depending on the application. Note that here we assume the same length l for all history embeddings, but this can be optimized for each embedding separately [18].

The history embeddings built as described above at target spike times form the data matrices X^l_x ∈ R^(N'_X × l) and J^l_x ∈ R^(N'_X × 2l), which contain in the n-th row respectively the vectors X^l_{x_{n+l}} and J^l_{x_{n+l}}, n = 1, ..., N'_X, while the history embeddings at arbitrary times form the data matrices X^l_u ∈ R^(N'_U × l) and J^l_u ∈ R^(N'_U × 2l), which contain in the n-th row respectively the vectors X^l_{u_n} and J^l_{u_n}, n = 1, ..., N'_U (N'_X and N'_U are in general lower than N_X and N_U, see supplementary material). These data matrices are passed as input to the algorithms for the estimation of TE and dMI rates, whose pseudo-codes and codes are provided in the supplementary material.
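The construction of the joint embedding at target spike times (Fig. 2a) can be sketched as follows; this is our illustrative implementation, not the pseudo-code of the supplementary material, and it simply discards target spikes whose histories are not fully available (one reason why N'_X < N_X).

```python
import bisect

def embed_at_target(x, y, l):
    """Joint history embedding J^l at each usable target spike x[i] (Fig. 2a).

    x, y: sorted lists of spike times; l: embedding length.
    Returns a list of 2l-dimensional rows [X-part, Y-part]."""
    rows = []
    for i in range(l, len(x)):
        xi = x[i]
        # l inter-spike intervals of the target preceding x[i]
        x_part = [x[i - k + 1] - x[i - k] for k in range(1, l + 1)]
        # index p of the most recent driver spike strictly before x[i]
        p = bisect.bisect_left(y, xi) - 1
        if p < l - 1:          # need l-1 driver ISIs before y[p]
            continue
        y_part = [xi - y[p]] + [y[p - k + 1] - y[p - k] for k in range(1, l)]
        rows.append(x_part + y_part)
    return rows

# toy check: targets at 1, 2, 4, 7 and drivers at 0.5, 3, 5 with l = 2
print(embed_at_target([1, 2, 4, 7], [0.5, 3, 5], 2))
```

For the target spike at time 7, for instance, the X-part is [7−4, 4−2] and the Y-part is [7−5, 5−3], matching the construction described above.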
Estimation of the TE rate and of the dMI rate is performed by first expanding (6) and (7) in differential entropy terms, to obtain the equations Ṫ_{Y→X} = λ_X T_{Y→X} and İ_{X^-;Y^-} = λ_U I_{X^-;Y^-}, where

$$T_{Y\to X} = H_{p_u}(X_{x_i}^-, Y_{x_i}^-) - H(X_{x_i}^-, Y_{x_i}^-) + H(X_{x_i}^-) - H_{p_u}(X_{x_i}^-), \quad (8)$$

$$I_{X^-;Y^-} = H(X_{u_i}^-) + H(Y_{u_i}^-) - H(X_{u_i}^-, Y_{u_i}^-); \quad (9)$$

the notations H(·) and H_{p_u}(·) refer to 'standard' differential entropy, where the expectation is taken over the same probability distribution for which the log-likelihood is estimated (e.g., H(X_{x_i}^-) = −E_{p_x}[ln p_x(X_{x_i}^-)]), and to 'cross-entropy', where the two distributions differ (e.g., H_{p_u}(X_{x_i}^-) = −E_{p_x}[ln p_u(X_{x_i}^-)]) [18]. Then, each entropy term is estimated using the well-known Kozachenko-Leonenko (KL) method [20]. Starting from N realizations of a generic d-dimensional variable W forming the data matrix W ∈ R^(N×d), this method estimates the differential entropy H(W) = −E_{p_w}[ln p_w(w)] as:

$$\hat{H}(W) = \ln(N-1) - \psi(k) + \ln c_d + \frac{d}{N}\sum_{i=1}^{N} \ln \epsilon_{w_i,k,\mathbf{W}}, \quad (10)$$

where ψ(·) is the digamma function, c_d is the volume of the d-dimensional unit ball under a given norm (c_d = 1 for the maximum norm used in this work), and ε_{w_i,k,W} is twice the distance between the i-th realization of W and its k-th nearest neighbor taken from W. To estimate a cross-entropy term H_{p_v}(W) = −E_{p_w}[ln p_v(w)] from two data matrices W ∈ R^(N×d) and V ∈ R^(M×d), eq. (10) is modified as

$$\hat{H}_{p_v}(W) = \ln(M) - \psi(k) + \ln c_d + \frac{d}{N}\sum_{i=1}^{N} \ln \epsilon_{w_i,k,\mathbf{V}}, \quad (11)$$

where ε_{w_i,k,V} is twice the distance between the vector w_i ∈ W and its k-th nearest neighbor taken from V.

The formulations of the differential entropy and cross-entropy are exploited to compute the four entropy terms composing the TE in (8) and the three entropy terms composing the dMI in (9), using each time the data matrices resulting from history embedding in place of the generic matrices W and V. While a naive estimator would fix the parameter k and use (10) and (11) for each entropy term composing Ṫ_{Y→X} and İ_{X^-;Y^-}, here we adopt the bias-compensation strategies proposed in [18, 22], whereby the number of neighbors k is changed at each data sample in order to use the same range of distances in spaces of different dimensions and ultimately reduce the bias in the estimation of sums of entropies. The two strategies are implemented in Algorithm 1 and Algorithm 2 described in the supplementary material. Both algorithms start with a fixed parameter k_global that will be the minimum number of nearest neighbors in any search space. At the i-th iteration, corresponding to a realization w_i of the data matrix W, the algorithms perform both neighbor searches, whereby k is fixed and the distance ε_{w_i,k,W} between w_i and its k-th nearest neighbor within W (or within a different data matrix V in case of cross-entropy estimation) is computed, and range searches, whereby the number of neighbors k_{w_i,W} of w_i found inside W (or inside V in case of cross-entropy) is counted.
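A minimal brute-force sketch of the naive estimator (10) (fixed k, maximum norm, c_d = 1) is given below; the digamma helper is a rough stdlib approximation, and the check on uniform data relies on the fact that U(0,1) has zero differential entropy. The adaptive-k bias compensation of Algorithms 1 and 2 is not reproduced here.

```python
import math
import random

def digamma(x):
    """Digamma via recurrence plus asymptotic series (adequate for x >= 1)."""
    r = 0.0
    while x < 6:
        r -= 1.0 / x
        x += 1.0
    return r + math.log(x) - 1/(2*x) - 1/(12*x**2) + 1/(120*x**4)

def kl_entropy(W, k=5):
    """Kozachenko-Leonenko estimate of H(W) in nats, eq. (10).

    Maximum norm, so ln c_d = 0. W: list of d-dimensional points."""
    N, d = len(W), len(W[0])
    total = 0.0
    for i, wi in enumerate(W):
        # max-norm distances from w_i to all other points, brute force
        dists = sorted(max(abs(a - b) for a, b in zip(wi, wj))
                       for j, wj in enumerate(W) if j != i)
        eps = 2.0 * dists[k - 1]   # twice the k-th nearest-neighbor distance
        total += math.log(eps)
    return math.log(N - 1) - digamma(k) + (d / N) * total

random.seed(3)
sample = [[random.random()] for _ in range(1000)]
h = kl_entropy(sample)
print(h)   # should be close to 0, the differential entropy of U(0,1)
```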
Applying (8) and (9) with entropy and cross-entropy as in (10) and (11), where the role of w_i is taken by X^l_{x_i}, J^l_{x_i}, X^l_{u_i}, Y^l_{u_i} or J^l_{u_i}, leads finally to estimating the TE rate as the output of Algorithm 1 [18], and the dMI rate as the output of Algorithm 2 [22], as follows:

$$\dot{T}_{Y\to X} = \frac{\lambda_X}{N'_X} \sum_{i=1}^{N'_X} \left[ \psi(k_{X^l_{x_i},\mathbf{X}^l_u}) - \psi(k_{X^l_{x_i},\mathbf{X}^l_x}) + \psi(k_{J^l_{x_i},\mathbf{J}^l_x}) - \psi(k_{J^l_{x_i},\mathbf{J}^l_u}) + l \ln \frac{\epsilon_{X^l_{x_i},k_{X^l_{x_i},\mathbf{X}^l_x},\mathbf{X}^l_x} \cdot \epsilon_{J^l_{x_i},k_{J^l_{x_i},\mathbf{J}^l_x},\mathbf{J}^l_x}}{\epsilon_{X^l_{x_i},k_{X^l_{x_i},\mathbf{X}^l_u},\mathbf{X}^l_u} \cdot \epsilon_{J^l_{x_i},k_{J^l_{x_i},\mathbf{J}^l_u},\mathbf{J}^l_u}} \right], \quad (12)$$

$$\dot{I}_{X^-;Y^-} = \lambda_U \left[ \psi(k_{\mathrm{global}}) + \ln(N'_U - 1) - \frac{1}{N'_U} \sum_{i=1}^{N'_U} \left( \psi(k_{X^l_{u_i},\mathbf{X}^l_u}) + \psi(k_{Y^l_{u_i},\mathbf{Y}^l_u}) \right) \right]. \quad (13)$$

III. VALIDATION ON SIMULATED SPIKE TRAINS
This section reports the application of the proposed analysis framework to synthetic spike trains simulated under controlled conditions of firing and synchrony. Two different simulation scenarios are designed to reproduce the spiking dynamics of uncoupled processes and of coupled processes with different direction and intensity of interaction. In both simulations, the continuous-time measures of dMI rate and TE rate defined in Sect. II B and estimated as reported in Sect. II C are compared with the discrete-time estimates of the same measures; the latter are obtained dividing the temporal axis into time bins, building the discrete-time process that counts the number of spikes falling into each time bin, and applying Eqs. (3) and (4) to the resulting sequences of natural numbers.

Both simulations are relevant to Poisson spike trains with mean firing rate λ = 1 spike/s. The first scenario considers pairs of independent Poisson trains X and Y, for which the ground-truth value of the overall dynamic information exchanged between the processes is zero (İ_{X^-;Y^-} = Ṫ_{X→Y} = Ṫ_{Y→X} = 0). Process realizations are generated with variable duration, to simulate a number of target events N_X = N_Y ∈ {100, 300, 500, 1000} spikes. The second scenario examines Poisson spike trains which can be coupled unidirectionally or without a preferential direction of interaction. This is achieved generating a master process X as a Poisson spike train, and a driven process Y such that, for each spike occurring at time x_i in X, a spike in Y is placed at the time point y_i = x_i + τ + u_i, where τ is a constant time delay and u_i is a random time jitter sampled from the uniform distribution U(−δ, δ). With this setting, the parameter δ is inversely related to the coupling between the two trains, and by changing the delay τ it is possible to achieve different coupling configurations: τ = 0 corresponds to the absence of a preferential coupling direction, while τ < 0 and τ > 0 correspond to unidirectional coupling from Y to X and from X to Y, respectively. Here we investigate the behavior of the dMI and TE rates when τ = aδ, with a ∈ {−1, −0.5, 0, 0.5, 1}, reproducing conditions from very strong coupling (δ of a few milliseconds) to very weak coupling (δ ≈ 2 s).

In all simulations, the continuous-time estimator is implemented choosing a number of arbitrary time points for entropy estimation equal to the number of simulated spikes, N_U = N_X = N_Y, and drawing such points from the uniform distribution U(0, T); the length of the history embedding and the initial parameter for nearest-neighbor analysis were set as l = 1 and k_global = 5. The discrete-time estimator is implemented using combinations of the time-bin width and embedding length set to cover the average duration of the inter-spike intervals (Δt · l = 1 s). Statistical significance is assessed by means of surrogate data: a measure (Ṫ_{Y→X}, Ṫ_{X→Y}, or İ_{X^-;Y^-}) is deemed statistically significant if its value on the original trains exceeds the 95th percentile of its distribution on surrogates.

The comparison between the performances of continuous- and discrete-time estimators on independent processes is illustrated in Fig. 3. For each measure, the performance can be inferred in terms of bias (i.e., the deviation of the average value across realizations from the expected zero level) and variance across realizations. The discrete-time estimates of both TE and dMI exhibit a substantial bias for all the reported data lengths, while the continuous-time estimates are always very close to the true value. The variance is also lower for the continuous-time estimates, especially as regards the dMI measure. As expected, both estimators improve their performance with increasing data length. As regards the statistical significance of the detected interactions, both estimators rejected the null hypothesis of uncoupling in a limited number of realizations, compatible with the nominal rate of false positives.

In Fig. 4 the two estimators are compared for pairs of spike trains of length T = 300 s, interacting along the direction X → Y (τ = δ) with varying coupling strength modulated inversely by the parameter δ. The progressive de-coupling of the two trains obtained by increasing δ is reflected by the TE rate from X to Y estimated in continuous time, while the discrete-time estimates exhibit a less interpretable non-monotonic behavior (Fig. 4a). Along the uncoupled direction, the discrete-time TE rates deviate substantially from the expected zero level and display a bias dependent on δ, while the continuous-time estimates are consistently null (Fig. 4b). The MI rate decreases with δ for both estimators, but the discrete-time estimates seem to be again biased, as they stabilize above zero when δ reaches the highest values corresponding to maximum de-coupling (Fig. 4c).

Fig. 5 provides an exhaustive description of the scenario with coupled Poisson spike trains investigated by the proposed continuous-time estimator. Simulations of length T = 300 s are iterated to reproduce conditions of strong (τ = ±δ) and intermediate (τ = ±0.5δ) coupling from X to Y (τ > 0) and from Y to X (τ < 0), as well as the absence of a preferential direction (τ = 0), each with coupling strength decreasing progressively as the de-coupling parameter increases from δ = 0.005 to δ = 2. The overall dynamic information exchanged between the two trains (İ_{X;Y}, Fig. 5a) is insensitive to the coupling direction, as seen by the overlap of its average values for τ = ±δ and for τ = ±0.5δ, and reflects the coupling strength, as seen by its monotonic decrease observed at increasing δ.
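The second simulation scenario can be sketched as follows (illustrative parameter values; spikes of the driven train falling outside the observation window are simply discarded):

```python
import random

def coupled_poisson_pair(rate, duration, tau, delta, rng):
    """Master Poisson train X and driven train Y with y_i = x_i + tau + U(-delta, delta)."""
    x, t = [], rng.expovariate(rate)
    while t < duration:
        x.append(t)
        t += rng.expovariate(rate)
    y = sorted(xi + tau + rng.uniform(-delta, delta) for xi in x)
    y = [yi for yi in y if 0.0 <= yi < duration]   # keep Y inside [0, T)
    return x, y

rng = random.Random(42)
# strong unidirectional coupling X -> Y: tau = delta (a = 1)
x, y = coupled_poisson_pair(rate=1.0, duration=300.0, tau=0.05, delta=0.05, rng=rng)
print(len(x), len(y))
```

By construction every retained spike of Y lies within ±δ of some x_i + τ, so decreasing δ tightens the lag distribution around τ and strengthens the coupling, as exploited in the simulations above.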
FIG. 3. Comparison between discrete-time (a, orange) and continuous-time (b, blue) estimators of the TE and dMI rates applied to independent Poisson spike trains. Plots depict the mean (circles) and standard deviation (shades) of each measure, computed over 100 simulation runs as a function of the spike-train duration. The number of realizations detected as statistically significant using JODI surrogates is reported for each measure and duration.

FIG. 4. Comparison between discrete-time (orange) and continuous-time (blue) estimators of the TE and dMI rates applied to interacting Poisson spike trains coupled from X to Y. Plots depict the mean (circles) and standard deviation (shades) of each measure, computed over 100 simulation runs as a function of the de-coupling parameter δ.

The TE rates along the two directions of interaction between X and Y (Fig. 5b,c) exhibit symmetric trends, with Ṫ_{Y→X} decreasing progressively and Ṫ_{X→Y} increasing progressively as the preferential coupling direction shifts from Y → X (τ = −δ) to X → Y (τ = δ). Both TE rates decrease with increasing the de-coupling parameter and start losing statistical significance for δ higher than ≃ 0.9, becoming largely non-significant for δ = 2; the analysis of surrogate data also indicates the absence of directed interactions along the uncoupled direction, as documented by the small rate of detection of significant Ṫ_{Y→X} values when τ = δ and of significant Ṫ_{X→Y} values when τ = −δ. Finally, the rate of dynamic information shared by the process histories (İ_{X^-;Y^-}, Fig. 5d) decreases monotonically with δ and is almost insensitive to τ; the dMI rate remains high and statistically significant also when the coupling direction is not unequivocal (τ = ±0.5δ) or not established (τ = 0), providing in these conditions higher detection rates than the TE even when the coupling is weak (high values of δ).

IV. APPLICATION
To test the usefulness of the presented framework in a real-world scenario of interacting neural spike trains, we considered data from a public repository of in-vitro cultures of dissociated cells grown on multi-electrode arrays (MEAs) [24, 25]. In this experimental setting, neocortical cells were harvested from the brains of rat embryos and plated on glass culture wells. While at the beginning the cells are electrically quiescent, after 30 days in vitro (DIV) they start to degenerate [24]. Accordingly, we considered 25 cultures analyzed through three stages of maturation, labeled as early, developing (∼15 DIV) and mature (∼25 DIV).

FIG. 5. Continuous-time estimation of the overall dynamic information shared between coupled Poisson spike trains (a) and of its TE rate (b,c) and dMI rate (d) components. Plots depict the distribution (mean ± SD) of each measure, computed over 100 simulation runs as a function of the de-coupling parameter δ, for different values of the parameter τ determining the coupling direction. Bar plots report the number of realizations for which the considered measure is detected as statistically significant using JODI surrogates.

For each culture and stage, the analysis was performed as depicted in Fig. 6 and explained in the following. First, bursts were detected at the level of single electrodes through temporal clustering (Fig. 6a), according to an empirical criterion whereby sequences of at least four consecutive spikes were grouped into a burst if all ISIs were lower than a threshold, empirically set as the minimum between 1/(4λ) and 0.1 s; the threshold was thus set adaptively, with the value 0.1 s empirically accounting for a worst-case scenario of the timing of axonal propagation among neurons [28]. Afterwards, the timing of each burst was identified as the center of mass of the temporal distribution of the spikes within the burst. We chose to perform this coarse-graining procedure because setting a temporal mesoscale is in this case more representative of a continuous process, since the individual spikes are binned by the temporal resolution of the analog-to-digital converter. Without this procedure, our analysis performed at the level of individual spikes did not yield significant dMI or TE values; this can be explained considering that spikes outside the bursts are largely stochastic and unrelated to connectivity, while spikes within a burst represent a variety of complex phenomena which cannot be fully disentangled by MEA electrodes, each sampling many neurons [28], even if complex features are otherwise visible [29, 30]. Consequently, the burst spike trains obtained by temporal clustering (see Fig. 6a) were analyzed in pairs, estimating the proposed information-theoretic measures in continuous time to obtain a symmetric dMI rate matrix and an asymmetric TE rate matrix (Fig. 6b).
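As an illustration, the temporal-clustering criterion described above can be sketched in Python as follows; the function name and interface are hypothetical, and this is a minimal sketch rather than the implementation actually used in the analysis:

```python
import numpy as np

def detect_bursts(spikes, min_spikes=4, max_isi_cap=0.1):
    """Group spikes (sorted times, in s) into bursts via temporal clustering.

    A run of at least `min_spikes` consecutive spikes forms a burst if all
    its inter-spike intervals (ISIs) fall below a threshold set adaptively
    as min(1 / (4 * lambda), max_isi_cap), with lambda the mean firing rate.
    Each burst is summarized by the center of mass of its spike times.
    """
    spikes = np.asarray(spikes, dtype=float)
    rate = len(spikes) / (spikes[-1] - spikes[0])   # mean firing rate lambda
    thr = min(1.0 / (4.0 * rate), max_isi_cap)
    bursts, run = [], [spikes[0]]
    for prev, cur in zip(spikes[:-1], spikes[1:]):
        if cur - prev < thr:
            run.append(cur)                          # extend the current run
        else:
            if len(run) >= min_spikes:
                bursts.append(np.mean(run))          # center of mass of burst
            run = [cur]                              # start a new run
    if len(run) >= min_spikes:
        bursts.append(np.mean(run))
    return np.array(bursts)
```

The returned burst timings then play the role of events in the coarse-grained spike trains analyzed in pairs.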
Estimation was performed setting N_U = N_X and k_global = 5 as in the simulations, and using l = 1 intervals for history embedding; while the choice of the embedding dimension remains crucial in real-data analysis, here the investigation of longer uniform embeddings (l = 2, l = 3) did not lead to evident differences (results not shown for brevity). The statistical significance of each measure (dMI or TE) estimated from a pair of spike trains was assessed by generating 100 JODI surrogate trains, using inverse percentiles to derive the probability that the surrogate measure is larger than the original measure, and comparing this probability with a critical significance level set to α = 0.01/M (M = 59 is the network size); this achieved a Bonferroni correction for the inference of non-isolated network nodes (i.e., for each node, there is a 0.01 chance that it is connected to at least another node under the null hypothesis of disconnection). Statistical testing provided a threshold to validate connections in the dMI and TE rate matrices; after binarization of the undirected dMI network and of the directed TE network (Fig. 6c), we computed the percentages of active nodes and of significant links in each network. Moreover, we studied the distributions of the weighted degree of the dMI and TE rates (Fig. 6d), as well as the distributions of the weighted in-degree and out-degree of the TE rate (Fig. 6e), in terms of statistical comparison (paired Wilcoxon signed-rank test) and of histogram representation (normalized entropy).

FIG. 6. Analysis of dynamic interactions between neuronal spike trains performed for a representative culture in the mature stage (25 days in vitro). (a) Raster plot of the recorded spike trains (dots) for an exemplary portion of the recording, with network bursts detected through temporal clustering denoted by squares. (b) Color-coded matrix representation of the undirected (dMI) and directed (TE) interaction measures computed between each pair of trains for the whole-duration recordings. (c) Binary networks of the statistically significant dMI and TE links. (d,e) Histograms (with median (m) and entropy (H) values) of the weighted dMI degree and TE degree, and of the weighted in-degree and out-degree of TE, assessed across nodes; note that histograms consider only the active nodes.

The results shown for the culture in Fig. 6 generalize to the 25 analyzed cultures, as depicted in the network representation of Fig. 7 for two arbitrarily chosen but representative cultures studied across stages of maturation, as well as in the complete results reported in the supplementary material. We found that, in line with previous observations showing increasing network activity and connectivity with the age of the cultures [28, 29, 31], the number of non-isolated nodes and of significant network links increased moving from the early to the developing and mature stages. The progression across the three stages was highlighted clearly using dMI (Fig. 7a), while TE revealed rich network structures eliciting nodes acting as sources and sinks of information flow in the mature stage (Fig. 7b); the two measures detected a comparable percentage of active nodes, whereas the network connectivity was higher using dMI than TE (Fig. 7 and Fig. S3a,b).
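The surrogate-based validation of network links described above can be sketched as follows. This is a minimal illustration: the JODI surrogate generation is not re-implemented here, so a generic ISI-shuffling surrogate stands in for it (an assumption), and `is_significant` and its arguments are illustrative names:

```python
import numpy as np

rng = np.random.default_rng(42)

def isi_shuffle(spikes):
    """Surrogate train with the same inter-spike intervals in random order
    (a generic stand-in for the JODI surrogates used in the paper)."""
    isis = rng.permutation(np.diff(spikes))
    return spikes[0] + np.concatenate(([0.0], np.cumsum(isis)))

def is_significant(measure, x, y, n_surr=100, alpha=0.01 / 59):
    """Inverse-percentile surrogate test with Bonferroni-corrected level
    alpha = 0.01 / M (M = 59 electrodes in the analyzed networks): a link
    is validated if the fraction of surrogates whose measure reaches the
    original value stays below alpha."""
    orig = measure(x, y)
    surr = np.array([measure(isi_shuffle(x), y) for _ in range(n_surr)])
    p = np.mean(surr >= orig)        # inverse percentile of the original
    return bool(p < alpha)
```

Applying such a test to every pair of trains yields the binary dMI and TE networks of Fig. 6c.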
Moving from the early to the developing and mature stages, the weighted node degree became higher and more dispersed across nodes (increasing median and entropy of the degree distribution, Fig. S3c,d, Fig. S4); upon maturation, the weighted degree derived from dMI was stronger (higher median) and more dispersed across nodes (higher entropy) than the degree derived from TE (Fig. 6d and Figs. S3c,e, Fig. S4a,b). The distributions of the weighted in-degree and out-degree of the TE rate also became more skewed and with higher entropy through the maturation process, but did not differ significantly from each other (Fig. S3d,f and Fig. S4c,d). These results document that both the dMI rate and the TE rate can detect the expected larger involvement of the neuronal units in the establishment of networked functional interactions occurring as the neural cultures spontaneously develop their anatomical connections. Mirroring our simulations, the undirected interactions captured by dMI are generally of higher intensity than the directed interactions captured by TE. The dMI was better able to describe the gradually rising heterogeneity of the nodal strength also observed in previous studies for these spontaneously growing neuronal networks [29, 31]. In the mature stage, however, both dMI and TE highlighted the emergent structural organization of the functional connections between neurons, showing fatter-tailed node degree distributions with the appearance of groups of high-strength nodes. The emergence with maturation of a significant proportion of highly connected nodes (hubs) was revealed also in a previous extensive analysis of the firing activity of these cultures, where it was related to the small-world topology of the underlying functional networks [29].
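The degree statistics discussed above (median m and entropy H of the weighted degree distribution) can be computed, for instance, as in the sketch below; the exact normalization of the entropy is an assumption of this sketch, not taken from the paper:

```python
import numpy as np

def degree_stats(W):
    """Median (m) and normalized entropy (H) of the weighted node degree
    of an interaction-rate matrix W (significant dMI or TE rates, zeros
    for absent links). For a directed TE matrix, row sums give the
    out-degree and column sums the in-degree; row sums are used here.
    H is the Shannon entropy of the degree distribution over active
    nodes, normalized by its maximum log(number of active nodes)."""
    deg = np.asarray(W, dtype=float).sum(axis=1)
    deg = deg[deg > 0]                    # active (non-isolated) nodes only
    p = deg / deg.sum()                   # degree distribution across nodes
    H = float(-np.sum(p * np.log(p)) / np.log(len(p)))
    return float(np.median(deg)), H
```

With this convention, H approaches 1 when the weighted degree is evenly spread across active nodes and decreases as a few hubs concentrate the connectivity.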
Small-world networks have an architecture which supports efficient information transfer [32], and the increasing appearance of highly connected nodes in older cultures suggests that such hubs may play a role in organizing the information transfer, possibly acting as sources or sinks for the network activity. This hypothesis is supported in our analysis by the heavy-tailed distributions of the in- and out-degree of the TE rate in the mature cultures, where high degrees were observed for a small but non-negligible number of nodes. Of note, our analysis could not establish the prevalence of sinks or sources of information transfer, as we found a high similarity between the distributions of the weighted in- and out-degree of the TE rate. Such similarity may be explained methodologically: with the rise of synchronized activity, pairwise interactions tend to appear more bidirectional due to the increasing influence of spurious effects (e.g., cascade, common driver) arising from processes excluded from the bivariate analysis [33, 34]. An extension of the TE rate to multivariate spike trains [18] is recommended to face this issue and to explore more efficiently the topological properties of large-scale and densely connected networks like those emerging from the activity of these neuronal cultures.

FIG. 7. Undirected dMI networks (a) and directed TE networks (b) obtained through information-theoretic analysis of spike trains from two neuronal cultures (rows) considered at different ages (early; developing, ∼15 DIV; mature, ∼25 DIV).

V. DISCUSSION
Although information-theoretic methods are widely used for the analysis of multivariate time series in computational neuroscience and physiology [13, 14, 19, 35], their application to spike train data is far less popular. The main reason for this is methodological, related to the difficulty of reliably implementing in continuous time methods intrinsically defined for discrete-time processes, such as MI, TE, and many other measures of information dynamics. In fact, while a continuous-time formalism can be avoided in the study of processes which intrinsically occur in discrete time (e.g., cardiovascular variability series [14, 35]) or can be reasonably represented through sampling techniques (e.g., electromagnetic neural signals [36, 37]), it becomes of utmost importance when the information carried by the analyzed process relies on its continuous-time nature, as happens for point processes and for neural spike trains in particular [18]. In these processes, the analysis based on time discretization, though common [38–40], is impractical due to issues of estimation bias, data requirements, and the inability to capture interactions deployed over multiple time scales [18]. These issues are observed also in our simulations, where we confirm the high bias and strong dependence on data size (number of spikes required) and analysis parameters (temporal bin size and history embedding length) of the discrete-time estimator of the TE rate, and find similar issues for the dMI rate. On the contrary, the adoption of a continuous-time formalism implemented with an accurate nearest-neighbor estimator, developed as in [18] for the TE rate and newly designed here for the dMI rate, overcomes the shortcomings of the traditional estimates, yielding low-bias, data-efficient and essentially parameter-free measures of directed information flow and neural synchrony.
In addition, being based on the non-parametric estimation of information measures, our approach is model-free and thus inherently nonlinear. This stands in contrast to most of the methods explicitly devised to capture directed interactions between point processes, which make use of linear parametric models [8, 12]. Our developments build on recent work offering for the first time the possibility to assess in continuous time the directed transfer of information (TE) between event-based data [18]. This approach, bringing the computational reliability discussed above, opens unique possibilities for the description of brain activities in terms of information flow when such flow is encoded by neural spikes [1]. Furthermore, following a dominant trend in neuroscience whereby the degree of concurrent firing of neural spike trains is assessed to quantify the general concept of neural synchrony [4], we frame into our continuous-time information-theoretic analysis also the evaluation of a symmetric measure of correlation between pairs of trains. Indeed, besides computing the TE, we define and quantify also the dMI as a dynamic form of mutual information. This measure addresses symmetric interactions that are not captured by the TE, providing the complementary information needed to evaluate thoroughly the whole dynamic interaction between two processes (i.e., the interaction involving different temporal states of the processes, see Fig. 1). Importantly, the undirected dynamic information provided by dMI turns out to be a useful complement to the directional information provided by the TE. In our simulations, we found that dMI can capture better than TE conditions of weakly coupled processes in which the coupling direction does not emerge clearly.
These conditions are encountered often in the analysis of neural spike trains, where the synchronous firing of different neural units is not clearly directional and is not consistent in time (e.g., due to the complex properties of neuronal firing and/or to inaccuracies of spike sorting). In the analysis of real data, the dMI rate captured the development of the neuronal cultures better than the TE rate in terms of the properties of the weighted node degree distribution, showing how network structures containing some densely connected nodes are formed upon maturation.

In summary, the proposed information-theoretic framework provides principled measures to assess pairwise interactions in point-process data in a more robust and flexible way than the discrete-time or parametric approaches previously proposed, and thus has the potential to provide new physiological insight into the functional coupling among neural spike trains. To further improve this potential, future work is envisaged to extend the bivariate approach and embrace a fully multivariate perspective on the analysis of spike train data. The definition of a whole set of measures, hierarchically organized to quantify self-interactions in a spike train (e.g., via information storage [13]) as well as pairwise and higher-order interactions among multiple spike trains (e.g., via conditional TE or MI measures [19, 22], or via synergy/redundancy measures [14, 41]), will offer the possibility to study generalized network structures where interactions of different orders are represented [42]. The interest in these enhanced representations of spike train networks extends to other modalities for brain monitoring (e.g., fMRI [43, 44]), and reaches far beyond neuroscience, encompassing research fields as diverse as physiology, social systems, seismology, and finance [45–48].
ACKNOWLEDGMENTS
The Authors acknowledge Joseph T. Lizier and David P. Shorten for fruitful discussions.

[1] E. N. Brown, R. E. Kass, and P. P. Mitra, Nature Neuroscience, 456 (2004).
[2] T. Kreuz, J. S. Haas, A. Morelli, H. D. Abarbanel, and A. Politi, Journal of Neuroscience Methods, 151 (2007).
[3] C. W. Granger, Econometrica: Journal of the Econometric Society, 424 (1969).
[4] C. S. Cutts and S. J. Eglen, Journal of Neuroscience, 14288 (2014).
[5] V. M. Eguiluz, D. R. Chialvo, G. A. Cecchi, M. Baliki, and A. V. Apkarian, Physical Review Letters, 018102 (2005).
[6] R. Salvador, J. Suckling, C. Schwarzbauer, and E. Bullmore, Philosophical Transactions of the Royal Society B: Biological Sciences, 937 (2005).
[7] T. Kreuz, D. Chicharro, C. Houghton, R. G. Andrzejak, and F. Mormann, Journal of Neurophysiology, 1457 (2013).
[8] S. Kim, D. Putrino, S. Ghosh, and E. N. Brown, PLoS Computational Biology, e1001110 (2011).
[9] A. G. Nedungadi, G. Rangarajan, N. Jain, and M. Ding, Journal of Computational Neuroscience, 55 (2009).
[10] V. Pasquale, P. Massobrio, L. Bologna, M. Chiappalone, and S. Martinoia, Neuroscience, 1354 (2008).
[11] L. Zhu, Y.-C. Lai, F. C. Hoppensteadt, and J. He, Neural Computation, 2359 (2003).
[12] G. Valenza, L. Faes, L. Citi, M. Orini, and R. Barbieri, IEEE Transactions on Biomedical Engineering, 1077 (2017).
[13] J. T. Lizier, The Local Information Dynamics of Distributed Computation in Complex Systems (Springer Science & Business Media, 2012).
[14] L. Faes, A. Porta, G. Nollo, and M. Javorka, Entropy, 5 (2017).
[15] T. M. Cover and J. A. Thomas, Elements of Information Theory (New York: Wiley, 1991).
[16] T. Schreiber, Physical Review Letters, 461 (2000).
[17] R. E. Spinney, M. Prokopenko, and J. T. Lizier, Physical Review E, 032319 (2017).
[18] D. Shorten, R. Spinney, and J. Lizier, bioRxiv (2020).
[19] M. Wibral, R. Vicente, and J. T. Lizier, Directed Information Measures in Neuroscience (Springer, 2014).
[20] L. Kozachenko and N. N. Leonenko, Problemy Peredachi Informatsii, 9 (1987).
[21] A. Kraskov, H. Stögbauer, and P. Grassberger, Physical Review E, 066138 (2004).
[22] L. Faes, D. Kugiumtzis, G. Nollo, F. Jurysta, and D. Marinazzo, Physical Review E, 032904 (2015).
[23] L. Ricci, M. Castelluzzo, L. Minati, and A. Perinelli, Chaos: An Interdisciplinary Journal of Nonlinear Science, 121102 (2019).
[24] F. O. Morin, Y. Takamura, and E. Tamiya, Journal of Bioscience and Bioengineering, 131 (2005).
[25] D. A. Wagenaar, J. Pine, and S. M. Potter, BMC Neuroscience, 11 (2006).
[26] G. W. Gross and J. M. Kowalski, Journal of Intelligent Material Systems and Structures, 558 (1999).
[27] R. O. Wong, M. Meister, and C. J. Shatz, Neuron, 923 (1993).
[28] L. Minati, H. Ito, A. Perinelli, L. Ricci, L. Faes, N. Yoshimura, Y. Koike, and M. Frasca, IEEE Access, 174793 (2019).
[29] J. H. Downes, M. W. Hammond, D. Xydas, M. C. Spencer, V. M. Becerra, K. Warwick, B. J. Whalley, and S. J. Nasuto, PLoS Computational Biology, e1002522 (2012).
[30] P. Massobrio, V. Pasquale, and S. Martinoia, Scientific Reports, 10578 (2015).
[31] L. Minati, J. Winkel, A. Bifone, P. Oswiecimka, and J. Jovicich, Chaos: An Interdisciplinary Journal of Nonlinear Science, 043115 (2017).
[32] V. Latora and M. Marchiori, Physical Review Letters, 198701 (2001).
[33] D. A. Smirnov, Physical Review E, 042917 (2013).
[34] L. Novelli and J. T. Lizier, arXiv preprint arXiv:2007.07500 (2020).
[35] A. Porta and L. Faes, Proceedings of the IEEE, 282 (2015).
[36] M. Wibral, B. Rahm, M. Rieder, M. Lindner, R. Vicente, and J. Kaiser, Progress in Biophysics and Molecular Biology, 80 (2011).
[37] J. T. Lizier, J. Heinzle, A. Horstmann, J.-D. Haynes, and M. Prokopenko, Journal of Computational Neuroscience, 85 (2011).
[38] B. Gourévitch and J. J. Eggermont, Journal of Neurophysiology, 2533 (2007).
[39] S. A. Neymotin, K. M. Jacobs, A. A. Fenton, and W. W. Lytton, Journal of Computational Neuroscience, 69 (2011).
[40] J. J. Harris, E. Engl, D. Attwell, and R. B. Jolivet, PLoS Computational Biology, e1007226 (2019).
[41] M. Wibral, C. Finn, P. Wollstadt, J. T. Lizier, and V. Priesemann, Entropy, 494 (2017).
[42] F. Battiston, G. Cencetti, I. Iacopini, V. Latora, M. Lucas, A. Patania, J.-G. Young, and G. Petri, Physics Reports (2020).
[43] E. Tagliazucchi, P. Balenzuela, D. Fraiman, and D. R. Chialvo, Frontiers in Physiology, 15 (2012).
[44] G.-R. Wu, W. Liao, S. Stramaglia, J.-R. Ding, H. Chen, and D. Marinazzo, Medical Image Analysis, 365 (2013).
[45] Y. Ogata, in Seismicity Patterns, Their Statistical Significance and Physical Meaning (Springer, 1999) pp. 471–507.
[46] R. Barbieri, E. C. Matten, A. A. Alabi, and E. N. Brown, American Journal of Physiology-Heart and Circulatory Physiology, H424 (2005).
[47] C. G. Bowsher, Journal of Econometrics, 876 (2007).
[48] G. Ver Steeg and A. Galstyan, in