Network


Latest external collaborations at the country level. Click on a dot to explore the details.

Hotspot


Dive into the research topics where Lionel Barnett is active.

Publications


Featured research published by Lionel Barnett.


Physical Review Letters | 2009

Granger Causality and Transfer Entropy Are Equivalent for Gaussian Variables

Lionel Barnett; Anil K. Seth

Granger causality is a statistical notion of causal influence based on prediction via vector autoregression. Developed originally in the field of econometrics, it has since found application in a broader arena, particularly in neuroscience. More recently transfer entropy, an information-theoretic measure of time-directed information transfer between jointly dependent processes, has gained traction in a similarly wide field. While it has been recognized that the two concepts must be related, the exact relationship has until now not been formally described. Here we show that for Gaussian variables, Granger causality and transfer entropy are entirely equivalent, thus bridging autoregressive and information-theoretic approaches to data-driven causal inference.
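
For reference, the paper's central identity can be stated compactly. The notation below follows common conventions (Geweke-style log-ratio of residual covariances, and transfer entropy as a difference of conditional entropies) rather than the paper's exact symbols:

```latex
% Granger causality of Y -> X: log-ratio of residual (generalized) variances
% of the reduced regression (past of X only) vs. the full regression (past of X and Y)
\[
  \mathcal{F}_{Y \to X}
    = \ln \frac{\det \Sigma\bigl(X_t \mid X_t^{-}\bigr)}
               {\det \Sigma\bigl(X_t \mid X_t^{-},\, Y_t^{-}\bigr)}
\]

% Transfer entropy of Y -> X: difference of conditional entropies
\[
  \mathcal{T}_{Y \to X}
    = H\bigl(X_t \mid X_t^{-}\bigr) - H\bigl(X_t \mid X_t^{-},\, Y_t^{-}\bigr)
\]

% For jointly Gaussian variables, H = (1/2) ln[(2 pi e)^k det Sigma],
% so the two measures coincide up to a factor of two:
\[
  \mathcal{F}_{Y \to X} = 2\, \mathcal{T}_{Y \to X}
\]
```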


The Journal of Neuroscience | 2015

Granger Causality Analysis in Neuroscience and Neuroimaging

Anil K. Seth; Lionel Barnett

A key challenge in neuroscience and, in particular, neuroimaging, is to move beyond identification of regional activations toward the characterization of functional circuits underpinning perception, cognition, behavior, and consciousness. Granger causality (G-causality) analysis …


Philosophical Transactions of the Royal Society A | 2011

Causal density and integrated information as measures of conscious level.

Anil K. Seth; Lionel Barnett

An outstanding challenge in neuroscience is to develop theoretically grounded and practically applicable quantitative measures that are sensitive to conscious level. Such measures should be high for vivid alert conscious wakefulness, and low for unconscious states such as dreamless sleep, coma and general anaesthesia. Here, we describe recent progress in the development of measures of dynamical complexity, in particular causal density and integrated information. These and similar measures capture in different ways the extent to which a system's dynamics are simultaneously differentiated and integrated. Because conscious scenes are distinguished by the same dynamical features, these measures are therefore good candidates for reflecting conscious level. After reviewing the theoretical background, we present new simulation results demonstrating similarities and differences between the measures, and we discuss remaining challenges in the practical application of the measures to empirically obtained data.


congress on evolutionary computation | 2001

Netcrawling-optimal evolutionary search with neutral networks

Lionel Barnett

Several studies have demonstrated that in the presence of a high degree of selective neutrality, in particular on fitness landscapes featuring neutral networks, evolution is qualitatively different from that of the more common model of rugged/correlated fitness landscapes often (implicitly) assumed by GA researchers. We characterise evolutionary dynamics on fitness landscapes with neutral networks and argue that, if a certain correlation-like statistical property holds, the most efficient strategy for evolutionary search is not population-based, but rather a population-of-one "netcrawler", a variety of hill-climber. We derive quantitative estimates for expected waiting times to discovery of fitter genotypes and discuss implications for evolutionary algorithm design, including a proposal for an adaptive variant of the netcrawler.
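
As a rough illustration of the idea only (not the paper's actual algorithm, mutation operator, or parameter choices), a netcrawler can be read as a population-of-one search that accepts any mutant at least as fit as the current genotype, so that it drifts along neutral networks while still climbing when a fitter variant appears. A minimal sketch on an arbitrary bit-string fitness function:

```python
import random

def netcrawler(fitness, genome_len=64, max_evals=10_000, seed=0):
    """Population-of-one search: mutate one bit at a time and accept the
    mutant whenever it is at least as fit (neutral moves are accepted,
    which lets the search drift along neutral networks)."""
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(genome_len)]
    current_fit = fitness(current)
    for _ in range(max_evals):
        mutant = current[:]
        i = rng.randrange(genome_len)
        mutant[i] ^= 1                      # single-point mutation
        mutant_fit = fitness(mutant)
        if mutant_fit >= current_fit:       # fitter OR neutral: accept
            current, current_fit = mutant, mutant_fit
    return current, current_fit

# Toy usage: a fitness function with large neutral plateaus
# (fitness depends only on the number of ones, in coarse steps).
best, best_fit = netcrawler(lambda g: sum(g) // 8)
print(best_fit)
```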


Physical Review E | 2015

Granger causality for state-space models.

Lionel Barnett; Anil K. Seth

Granger causality has long been a prominent method for inferring causal interactions between stochastic variables for a broad range of complex physical systems. However, it has been recognized that a moving average (MA) component in the data presents a serious confound to Granger causal analysis, as routinely performed via autoregressive (AR) modeling. We solve this problem by demonstrating that Granger causality may be calculated simply and efficiently from the parameters of a state-space (SS) model. Since SS models are equivalent to autoregressive moving average models, Granger causality estimated in this fashion is not degraded by the presence of an MA component. This is of particular significance when the data has been filtered, downsampled, observed with noise, or is a subprocess of a higher dimensional process, since all of these operations (commonplace in application domains as diverse as climate science, econometrics, and the neurosciences) induce an MA component. We show how Granger causality, conditional and unconditional, in both time and frequency domains, may be calculated directly from SS model parameters via solution of a discrete algebraic Riccati equation. Numerical simulations demonstrate that Granger causality estimators thus derived have greater statistical power and smaller bias than AR estimators. We also discuss how the SS approach facilitates relaxation of the assumptions of linearity, stationarity, and homoscedasticity underlying current AR methods, thus opening up potentially significant new areas of research in Granger causal analysis.
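
A schematic of the quantities involved, in generic state-space notation (not necessarily the paper's own symbols): for an innovations-form model, the residual covariances needed for Granger causality come from the stabilizing solution of a discrete algebraic Riccati equation (DARE) for the relevant (sub)model.

```latex
% Generic innovations-form state-space model for the joint process (X, Y)
\[
  z_{t+1} = A z_t + K \varepsilon_t, \qquad
  \begin{pmatrix} x_t \\ y_t \end{pmatrix} = C z_t + \varepsilon_t, \qquad
  \operatorname{Cov}(\varepsilon_t) = \Sigma
\]

% Granger causality Y -> X is again a log-ratio of innovations covariances:
% the full-model block Sigma_xx versus the innovations covariance Sigma^R_xx
% of the reduced state-space model for the X-subprocess, the latter obtained
% by solving a DARE for the subprocess's Kalman filter.
\[
  \mathcal{F}_{Y \to X} = \ln \frac{\det \Sigma^{R}_{xx}}{\det \Sigma_{xx}}
\]
```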


Physical Review Letters | 2012

Transfer Entropy as a Log-likelihood Ratio

Lionel Barnett; Terry Bossomaier

Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ² distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
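
In outline, and in schematic notation rather than the paper's exact statement: if θ̂ and θ̂₀ are maximum-likelihood parameter estimates of the predictive model with and without the source-history terms, the abstract's two claims read roughly as follows.

```latex
% Transfer entropy estimator as a scaled log-likelihood ratio over N observations
\[
  \hat{\mathcal{T}}_{Y \to X}
    = \frac{1}{N} \ln \frac{L\bigl(\hat\theta\bigr)}{L\bigl(\hat\theta_0\bigr)}
\]

% Under the null hypothesis of zero transfer entropy, standard large-sample
% likelihood-ratio theory gives an asymptotic chi-squared distribution, with
% degrees of freedom equal to the difference in parameter counts:
\[
  2N\, \hat{\mathcal{T}}_{Y \to X} \;\xrightarrow{d}\; \chi^{2}(d),
  \qquad d = \dim\Theta - \dim\Theta_0
\]
```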


Frontiers in Neuroinformatics | 2013

Granger causality is designed to measure effect, not mechanism.

Lionel Barnett

In their recent paper, Hu et al. (2011) make the claim that Granger causality (GC) does not capture how strongly one time series influences another. Given the sizeable literature on GC, this claim could be considered radical. We examined this claim, and found that it is based essentially on semantics. Hu et al. (2011) would like a measure of causal interaction to explicitly quantify an underlying causal mechanism, and point out that GC values do not consistently reflect the relative sizes of explicit interaction coefficients in a corresponding generative model. However, GC is, by design and purpose, not interested in this. Rather, it is a measure of causal effect, namely the reduction in prediction error when the causal interaction is taken into account, as compared to when it is ignored. [According to one version of neuroscience terminology (Friston, 2011), which attempts to draw a distinction between the different conceptions of connectivity, GC measures of causal effect yield directed “functional connectivity” maps when applied to neuroimaging data. In contrast, “effective connectivity” maps represent the effective mechanism generating the observed data, and provide interaction coefficients. Neither functional nor effective connectivity representations necessarily map univocally onto the underlying anatomical (structural) connectivity.]

Multiple properties of GC make it an elegant measure of causal effect. It satisfies crucial symmetry properties, including that GC from Y to X is invariant under rescalings of Y and X, as well as the addition of a multiple of X to Y, consistent with the measuring of independent predictive information about X contained in Y (Geweke, 1982; Hosoya, 1991; Barrett et al., 2010). Such transformations do, however, change the relative magnitudes of regression coefficients; thus it is not possible to simultaneously measure causal mechanism and causal effect. Further, for the case of Gaussian variables, GC is equivalent to transfer entropy, enabling an explicit interpretation in terms of Shannon information flow (Barnett et al., 2009). The GC from one multivariate variable to another multivariate variable has a decomposition into the sum of independent contributions from each predictor to each predictee [equation 18 in Barrett et al. (2010)]. The same defence of GC applies in the frequency domain, with spectral GC from Y to X at frequency f capturing the proportion of power of X at frequency f that results from its interaction with Y (Geweke, 1982). The fact that time domain GC is the mean spectral GC over all frequencies up to the Nyquist frequency provides further justification.

As far as statistical inference is concerned, for the reasons above, GC can indeed be used to compare the magnitude of causal interactions between different sets of time series. Contrary to Hu et al.'s (2011) interpretation, the fact that some regression coefficients contribute more to GC than others (and some not at all) is actually an indication that GC analysis adds to our understanding of a system, even when the generative model is known a priori. A further pragmatic advantage of the GC method is that, in sample, time-domain GC asymptotically follows distributions that are known analytically (chi-squared family), thus facilitating hypothesis testing (Geweke, 1982).

Note on Redundancy: A specific argument Hu et al. (2011) make against GC comes from the behaviour of the measure for the system given by their equation 10. Hu et al. (2011) compute the GC from X2 to X1 to be zero when the residual η2 associated with X2 has zero variance, but claim that a measure of causal influence should be non-zero in this case. However, in this case, X2 is a redundant variable, being fully determined by the past of X1, and therefore does not influence X1 once the past of X1 is taken into account. In other words, X2 has no independent causal influence on X1, and it is therefore entirely consistent for GC to be zero in this case.

GC is not a perfect measure for all stochastic time series: if the true process is not a straightforward multivariate autoregressive process with white-noise residuals, then it becomes only an approximate measure of causal influence. In each real-world scenario, discretion is required in deciding if confounds such as non-linearity and correlations in the noise are mild enough for the measure to remain applicable. In these scenarios, it can be useful to consider a range of different measures such as the Phase Slope Index (Nolte et al., 2008), Partial Directed Coherence (Baccala and Sameshima, 2001), and the Directed Transfer Function (Kaminski and Blinowska, 1991; Kaminski et al., 2001).

In summary, GC measures causal effect in a clear and unambiguous way on stationary multivariate autoregressive processes. We believe that the measure is rightly being widely applied in neuroscience as a measure of directed functional connectivity whenever such models provide a reasonable fit to data. Hu et al.'s (2011) “new causality” compares regression model coefficients rather than prediction errors, and is therefore a measure of causal mechanism. New causality sets out to achieve a different aim from GC, and the divergence of the two measures is not a problem.
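
The "reduction in prediction error" reading of GC is easy to make concrete. Below is a minimal, hedged numerical sketch (synthetic data and an order-1 regression chosen purely for illustration, not taken from the article): the time-domain GC from Y to X is the log-ratio of residual variances of a regression of X on its own past only versus on the past of both X and Y.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic bivariate VAR(1) in which Y drives X but not vice versa.
N = 5000
x = np.zeros(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.7 * y[t - 1] + rng.normal()
    x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + rng.normal()

def residual_variance(target, predictors):
    """Residual variance of an ordinary least-squares regression."""
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    resid = target - predictors @ beta
    return resid.var()

xt, xp, yp = x[1:], x[:-1], y[:-1]          # target and lag-1 predictors
var_reduced = residual_variance(xt, np.column_stack([xp]))       # past of X only
var_full    = residual_variance(xt, np.column_stack([xp, yp]))   # past of X and Y

F_y_to_x = np.log(var_reduced / var_full)   # Granger causality Y -> X
print(f"GC(Y -> X) ~ {F_y_to_x:.3f}")
```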


Complex Adaptive Systems Modeling | 2013

Information and phase transitions in socio-economic systems

Terence Bossomaier; Lionel Barnett; Michael Harré

We examine the role of information-based measures in detecting and analysing phase transitions. We contend that phase transitions have a general character, visible in transitions in systems as diverse as classical flocking models, human expertise, and social networks. Information-based measures such as mutual information and transfer entropy are particularly suited to detecting the change in scale and range of coupling in systems that herald a phase transition in progress, but their use is not necessarily straightforward: accurate estimation is difficult owing to limited sample sizes and the complexities of analysing non-stationary time series. These difficulties are surmountable with careful experimental choices. Their effectiveness in revealing unexpected connections between diverse systems makes them a promising tool for future research.


Complexity | 2010

Spatial embedding and the structure of complex networks

Seth Bullock; Lionel Barnett; E. Di Paolo

We review and discuss the structural consequences of embedding a random network within a metric space such that nodes distributed in this space tend to be connected to those nearby. We find that where the spatial distribution of nodes is maximally symmetrical some of the structural properties of the resulting networks are similar to those of random nonspatial networks. However, where the distribution of nodes is inhomogeneous in some way, this ceases to be the case, with consequences for the distribution of neighborhood sizes within the network, the correlation between the number of neighbors of connected nodes, and the way in which the largest connected component of the network grows as the density of edges is increased. We present an overview of these findings in an attempt to convey the ramifications of spatial embedding to those studying real-world complex systems.
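
The construction described here is straightforward to experiment with. A small, hedged sketch (using networkx's built-in generators, with arbitrary parameter choices rather than anything from the paper) contrasts a spatially embedded random network, in which nodes connect to others within a fixed radius, against a non-spatial random graph with a comparable number of edges:

```python
import networkx as nx

n = 500
# Spatially embedded: nodes placed uniformly in the unit square,
# connected when within a given radius of one another.
G_spatial = nx.random_geometric_graph(n, radius=0.08, seed=1)

# Non-spatial reference: Erdos-Renyi graph with a roughly matched edge count.
p = 2 * G_spatial.number_of_edges() / (n * (n - 1))
G_random = nx.gnp_random_graph(n, p, seed=1)

for name, G in [("spatial", G_spatial), ("non-spatial", G_random)]:
    giant = max(nx.connected_components(G), key=len)
    print(name,
          "edges:", G.number_of_edges(),
          "clustering:", round(nx.average_clustering(G), 3),
          "largest component:", len(giant))
```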


Neuroscience of Consciousness | 2017

Global and local complexity of intracranial EEG decreases during NREM sleep

Michael Schartner; Andrea Pigorini; Steve A. Gibbs; Gabriele Arnulfo; Simone Sarasso; Lionel Barnett; Lino Nobili; Marcello Massimini; Anil K. Seth

Key to understanding the neuronal basis of consciousness is the characterization of the neural signatures of changes in level of consciousness during sleep. Here we analysed three measures of dynamical complexity on spontaneous depth electrode recordings from 10 epilepsy patients during wakeful rest (WR) and different stages of sleep: (i) Lempel–Ziv complexity, which is derived from how compressible the data are; (ii) amplitude coalition entropy, which measures the variability over time of the set of channels active above a threshold; (iii) synchrony coalition entropy, which measures the variability over time of the set of synchronous channels. When computed across sets of channels that are broadly distributed across multiple brain regions, all three measures decreased substantially in all participants during early-night non-rapid eye movement (NREM) sleep. This decrease was partially reversed during late-night NREM sleep, while the measures scored similar to WR during rapid eye movement (REM) sleep. This global pattern was in almost all cases mirrored at the local level by groups of channels located in a single region. In testing for differences between regions, we found elevated signal complexity in the frontal lobe. These differences could not be attributed solely to changes in spectral power between conditions. Our results provide further evidence that the level of consciousness correlates with neural dynamical complexity.
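
For orientation only, here is a hedged, simplified sketch of the first of the three measures: Lempel–Ziv complexity of a binarized signal, estimated with an LZ78-style dictionary parse rather than whatever specific implementation the study used. The signals, threshold, and parse are illustrative assumptions; amplitude and synchrony coalition entropies are not reproduced here.

```python
import numpy as np

def lz_complexity(signal):
    """Crude Lempel-Ziv-style complexity: binarize the signal around its
    mean, then count the distinct phrases produced by an LZ78-style
    incremental dictionary parse of the resulting binary string."""
    threshold = np.mean(signal)
    binary = "".join("1" if s > threshold else "0" for s in signal)
    phrases, current = set(), ""
    for bit in binary:
        current += bit
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases)

rng = np.random.default_rng(0)
noise = rng.normal(size=2000)                    # broadband, hard to compress
t = np.arange(2000)
slow_wave = np.sin(2 * np.pi * t / 200)          # regular, highly compressible
print("noise:", lz_complexity(noise), "slow wave:", lz_complexity(slow_wave))
```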

Collaboration


Dive into Lionel Barnett's collaborations.

Top Co-Authors

Joshua Brown

Charles Sturt University
