Featured Research

Neurons And Cognition

If deep learning is the answer, then what is the question?

Neuroscience research is undergoing a minor revolution. Recent advances in machine learning and artificial intelligence (AI) research have opened up new ways of thinking about neural computation. Many researchers are excited by the possibility that deep neural networks may offer theories of perception, cognition and action for biological brains. This perspective has the potential to radically reshape our approach to understanding neural systems, because the computations performed by deep networks are learned from experience, not endowed by the researcher. If so, how can neuroscientists use deep networks to model and understand biological brains? What is the outlook for neuroscientists who seek to characterise computations or neural codes, or who wish to understand perception, attention, memory, and executive functions? In this Perspective, our goal is to offer a roadmap for systems neuroscience research in the age of deep learning. We discuss the conceptual and methodological challenges of comparing behaviour, learning dynamics, and neural representation in artificial and biological systems. We highlight new research questions that have emerged for neuroscience as a direct consequence of recent advances in machine learning.

Neurons And Cognition

Implementing Inductive bias for different navigation tasks through diverse RNN attractors

Navigation is crucial for animal behavior and is assumed to require an internal representation of the external environment, termed a cognitive map. The precise form of this representation is often considered to be a metric representation of space. An internal representation, however, is judged by its contribution to performance on a given task, and may thus vary between different types of navigation tasks. Here we train a recurrent neural network that controls an agent performing several navigation tasks in a simple environment. To focus on internal representations, we split learning into a task-agnostic pre-training stage that modifies internal connectivity and a task-specific Q-learning stage that controls the network's output. We show that pre-training shapes the attractor landscape of the networks, leading to either a continuous attractor, discrete attractors, or a disordered state. These structures induce bias on the Q-learning phase, leading to a performance pattern across the tasks corresponding to metric and topological regularities. By combining the two types of networks in a modular structure, we obtained better performance for both regularities. Our results show that, in recurrent networks, inductive bias takes the form of attractor landscapes -- which can be shaped by pre-training and analyzed using dynamical systems methods. Furthermore, we demonstrate that non-metric representations are useful for navigation tasks, and that their combination with metric representations leads to flexible multiple-task learning.
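A minimal sketch of the two-stage scheme described above, assuming a vanilla rate RNN with a linear Q-value readout; all sizes and hyperparameters here are hypothetical. Pre-training would act on W_rec (shaping the attractor landscape), while the task-specific stage updates only W_out:

```python
import numpy as np

rng = np.random.default_rng(0)
N, N_IN, N_ACT = 128, 4, 4      # hidden units, inputs, actions (hypothetical sizes)

W_rec = rng.normal(0, 1 / np.sqrt(N), (N, N))  # shaped by task-agnostic pre-training
W_in = rng.normal(0, 0.1, (N, N_IN))
W_out = np.zeros((N_ACT, N))                   # Q-value readout, trained per task

def rnn_step(h, x):
    """One step of the recurrent dynamics whose attractor landscape
    the pre-training stage is said to shape."""
    return np.tanh(W_rec @ h + W_in @ x)

def q_update(h, a, r, h_next, gamma=0.9, lr=1e-2):
    """Semi-gradient TD update applied to the linear readout only,
    leaving the (pre-trained) internal connectivity untouched."""
    td_error = r + gamma * np.max(W_out @ h_next) - (W_out @ h)[a]
    W_out[a] += lr * td_error * h
```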

Neurons And Cognition

Improving Functional Connectome Fingerprinting with Degree-Normalization

Functional connectivity quantifies the statistical dependencies between the activity of brain regions, measured using neuroimaging data such as functional MRI BOLD time series. The network representation of functional connectivity, called a Functional Connectome (FC), has been shown to contain an individual fingerprint allowing participant identification across consecutive testing sessions. Recently, researchers have focused on the extraction of these fingerprints, with potential applications in personalized medicine. Here, we show that a mathematical operation termed degree-normalization can improve the extraction of FC fingerprints. Degree-normalization has the effect of reducing the excessive influence of strongly connected brain areas in the whole-brain network. We adopt the differential identifiability framework and apply it to both original and degree-normalized FCs of 409 individuals from the Human Connectome Project, in resting state and in 7 fMRI tasks. Our results indicate that degree-normalization systematically improves three fingerprinting metrics, namely differential identifiability, identification rate, and matching rate. Moreover, the results related to the matching rate metric suggest that individual fingerprints are embedded in a low-dimensional space. The results suggest that low-dimensional functional fingerprints lie in part in weakly connected subnetworks of the brain, and that degree-normalization helps uncover them. This work introduces a simple mathematical operation that could lead to significant improvements in future FC fingerprinting studies.
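The abstract does not spell out the normalization formula; a common symmetric form, which this sketch assumes, divides each edge weight by the square root of the product of its endpoints' strengths (weighted degrees):

```python
import numpy as np

def degree_normalize(fc):
    """Symmetric degree-normalization of a functional connectome.

    fc : (n, n) symmetric matrix of non-negative connectivity weights.
    Returns D^{-1/2} FC D^{-1/2}, which downweights edges attached to
    strongly connected regions.
    """
    strength = fc.sum(axis=1)                          # weighted degree per region
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(strength, 1e-12))
    return fc * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Usage on a toy FC matrix (absolute correlations, zero diagonal):
ts = np.random.default_rng(1).normal(size=(200, 90))   # 200 time points, 90 regions
fc = np.abs(np.corrcoef(ts.T))
np.fill_diagonal(fc, 0.0)
fc_norm = degree_normalize(fc)
```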

Neurons And Cognition

Improving J-divergence of brain connectivity states by graph Laplacian denoising

Functional connectivity (FC) can be represented as a network, and is frequently used to better understand the neural underpinnings of complex tasks such as motor imagery (MI) detection in brain-computer interfaces (BCIs). However, errors in the estimation of connectivity can affect detection performance. In this work, we address the problem of denoising common connectivity estimates to improve the detectability of different connectivity states. Specifically, we propose a denoising algorithm that acts on the network graph Laplacian and leverages recent graph signal processing results. Further, we derive a novel formulation of the Jensen divergence for the denoised Laplacian under different states. Numerical simulations on synthetic data show that the denoising method improves the Jensen divergence of connectivity patterns corresponding to different task conditions. Furthermore, we apply the Laplacian denoising technique to brain networks estimated from real EEG data recorded during MI-BCI experiments. Using our novel formulation of the J-divergence, we are able to quantify the distance between the FC networks in the motor imagery and resting states, as well as to understand the contribution of each Laplacian variable to the total J-divergence between the two states. Experimental results on real MI-BCI EEG data demonstrate that the Laplacian denoising improves the separation of motor imagery and resting mental states, and shortens the time interval required for connectivity estimation. We conclude that the approach shows promise for the robust detection of connectivity states while being appealing for implementation in real-time BCI applications.
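As an illustration of the general idea (not the paper's own derivation), one can model each connectivity state as a zero-mean Gaussian whose precision is a regularized graph Laplacian, and compute the classical symmetrized (Jeffreys) J-divergence between the two states. The regularization constant eps is an assumption, needed because the Laplacian is singular:

```python
import numpy as np

def laplacian(adj):
    """Combinatorial graph Laplacian L = D - A of a weighted adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def j_divergence(adj_a, adj_b, eps=1e-3):
    """Symmetrized (Jeffreys) divergence between two connectivity states,
    modelled as zero-mean Gaussians with regularized Laplacian precisions.
    A sketch of the idea; the paper derives its own J-divergence formulation."""
    n = adj_a.shape[0]
    La = laplacian(adj_a) + eps * np.eye(n)   # eps lifts the zero eigenvalue
    Lb = laplacian(adj_b) + eps * np.eye(n)
    return 0.5 * np.trace(La @ np.linalg.inv(Lb) + Lb @ np.linalg.inv(La)) - n
```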

Neurons And Cognition

Independent components of human brain morphology

Quantification of brain morphology has become an important cornerstone in understanding brain structure. Measures of cortical morphology, such as thickness and surface area, are frequently used to compare groups of subjects or to characterise longitudinal changes. However, such measures are often treated as independent of each other. A recently described scaling law, derived from a statistical physics model of cortical folding, demonstrates that there is a tight covariance between three commonly used cortical morphology measures: cortical thickness, total surface area, and exposed surface area. We show that assuming the independence of cortical morphology measures can hide features and potentially lead to misinterpretations. Using the scaling law, we account for the covariance between cortical morphology measures and derive novel independent measures of cortical morphology. Applying these new measures, we show that new information can be gained: in our example, distinct morphological alterations underlie healthy ageing compared to temporal lobe epilepsy, even at the coarse level of a whole hemisphere. We thus provide a conceptual framework for characterising cortical morphology in a statistically valid and interpretable manner, based on theoretical reasoning about the shape of the cortex.
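For concreteness, the scaling law referred to here is the published relation A_t * sqrt(T) proportional to A_e^(5/4), with A_t the total cortical surface area, A_e the exposed surface area, and T the thickness. The sketch below computes only the corresponding log-offset K, one component of the orthogonal measure set the paper derives:

```python
import numpy as np

def morphology_K(total_area, exposed_area, thickness):
    """Scaling-law offset K = log10(A_t) + (1/2) log10(T) - (5/4) log10(A_e).

    Derived from the cortical folding scaling law A_t * sqrt(T) ~ A_e^(5/4);
    K is constant under the law, so deviations in K carry information that
    is independent of overall size. Only one component of the paper's full
    orthogonal measure set is shown here.
    """
    return (np.log10(total_area)
            + 0.5 * np.log10(thickness)
            - 1.25 * np.log10(exposed_area))
```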

Neurons And Cognition

Inferring network properties from time series using transfer entropy and mutual information: validation of multivariate versus bivariate approaches

Functional and effective networks inferred from time series are at the core of network neuroscience. Interpreting their properties requires inferred network models to reflect key underlying structural features; however, even a few spurious links can distort network measures, posing a challenge for functional connectomes. We study the extent to which micro- and macroscopic properties of underlying networks can be inferred by algorithms based on mutual information and bivariate/multivariate transfer entropy. The validation is performed on two macaque connectomes and on synthetic networks with various topologies (regular lattice, small-world, random, scale-free, modular). Simulations are based on a neural mass model and on autoregressive dynamics (employing Gaussian estimators for direct comparison to functional connectivity and Granger causality). We find that multivariate transfer entropy captures key properties of all networks given longer time series. Bivariate methods can achieve higher recall (sensitivity) for shorter time series, but are unable to control false positives (lower specificity) as the amount of available data increases. This leads to overestimated clustering, small-world, and rich-club coefficients, underestimated shortest path lengths and hub centrality, and fattened degree distribution tails. Caution should therefore be used when interpreting network properties of functional connectomes obtained via correlation or pairwise statistical dependence measures, rather than more holistic (yet data-hungry) multivariate models.
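Under the Gaussian assumption mentioned in the abstract, bivariate transfer entropy reduces to half the linear Granger causality, which can be estimated with ordinary least squares. This sketch shows the bivariate case only (the multivariate variant additionally conditions on the remaining nodes), with a hypothetical history length k:

```python
import numpy as np

def gaussian_te(x, y, k=1):
    """Bivariate transfer entropy X -> Y with a Gaussian estimator:
    TE = 0.5 * ln( var[y_t | y_past] / var[y_t | y_past, x_past] ),
    i.e. half the linear Granger causality, with history length k.
    A sketch of the bivariate case, not the paper's full pipeline."""
    n = len(y)
    target = y[k:]
    y_past = np.column_stack([y[k - 1 - l: n - 1 - l] for l in range(k)])
    x_past = np.column_stack([x[k - 1 - l: n - 1 - l] for l in range(k)])

    def resid_var(predictors):
        design = np.column_stack([np.ones(len(target)), predictors])
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ beta)

    return 0.5 * np.log(resid_var(y_past) / resid_var(np.hstack([y_past, x_past])))
```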

Neurons And Cognition

Inferring population statistics of receptor neurons sensitivities and firing-rates from general functional requirements

On the basis of the evident ability of neuronal olfactory systems to evaluate the intensity of an odorous stimulus and, at the same time, to recognise the identity of the odorant over a large range of concentrations, a few biologically realistic hypotheses on some of the underlying neural processes are made. In particular, it is assumed that the receptor neurons' mean firing-rates scale monotonically with odorant intensity, and that the receptor sensitivities range widely across odorants and receptor neurons, hence leading to highly distributed representations of the stimuli. The mathematical implementation of these phenomenological postulates allows explicit functional relationships between some measurable quantities to be inferred. It follows that both the dependence of the mean firing-rate on odorant concentration and the statistical distribution of receptor sensitivity across the neuronal population are power-laws, whose respective exponents stand in an arithmetic, testable relationship. To test quantitatively the prediction of a power-law dependence of population mean firing-rate on odorant concentration, a probabilistic model is created to extract information from data available in the experimental literature. The values of the free parameters of the model are estimated by an info-geometric Bayesian maximum-likelihood inference which takes into account the prior distribution of the parameters. The resulting goodness of fit is quantified by means of a distribution-independent test. [CONTINUES]
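As a simpler stand-in for the paper's Bayesian procedure, the predicted power-law dependence nu(c) ~ c^alpha can be illustrated with a log-log least-squares fit on hypothetical dose-response data:

```python
import numpy as np

# Hypothetical dose-response data: odorant concentrations and population
# mean firing-rates. A power law nu(c) ~ c**alpha is a straight line on
# log-log axes, so its exponent can be read off a linear fit. (The paper
# itself uses a Bayesian maximum-likelihood procedure with priors.)
conc = np.array([1e-6, 1e-5, 1e-4, 1e-3, 1e-2])   # hypothetical concentrations
rate = np.array([0.8, 1.6, 3.1, 6.0, 12.2])       # spikes/s, hypothetical

alpha, log_k = np.polyfit(np.log10(conc), np.log10(rate), 1)
print(f"estimated power-law exponent alpha = {alpha:.2f}")
```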

Neurons And Cognition

Influence of Various Temporal Recoding on Pavlovian Eyeblink Conditioning in The Cerebellum

We consider Pavlovian eyeblink conditioning (EBC) via repeated presentation of a paired conditioned stimulus (tone) and unconditioned stimulus (airpuff). The influence of various temporal recoding of granule cells on the EBC is investigated in a cerebellar network where the connection probability p_c from Golgi to granule cells is varied. In an optimal case of p_c^* (= 0.029), individual granule cells show various well- and ill-matched firing patterns relative to the unconditioned stimulus. These variously-recoded signals are fed into the Purkinje cells (PCs) through parallel fibers (PFs). In the case of well-matched PF-PC synapses, their synaptic weights are strongly depressed through strong long-term depression (LTD). In contrast, practically no LTD occurs for the ill-matched PF-PC synapses. This type of "effective" depression at the PF-PC synapses coordinates the firings of PCs effectively, which in turn exert effective inhibitory coordination on the cerebellar nucleus neuron [which elicits the conditioned response (CR; eyeblink)]. When the learning trial passes a threshold, acquisition of the CR begins. In this case, the timing degree T_d of the CR becomes good due to the presence of the ill-matched firing group, which plays the role of a protection barrier for the timing. With further trials, the strength S of the CR (corresponding to the amplitude of eyelid closure) increases due to strong LTD in the well-matched firing group. Thus, as the learning trials proceed, the (overall) learning efficiency degree L_e for the CR (taking into consideration both timing and strength of the CR) increases, and eventually it saturates. By changing p_c from p_c^*, we also investigate the influence of various temporal recoding on the EBC. We thus find that the more diverse the temporal recoding, the more effective the learning for the Pavlovian EBC.
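A rate-based caricature of the LTD mechanism described above (not the paper's spiking model): parallel-fiber synapses whose granule-cell activity is well matched to the US-driven climbing-fiber signal are depressed, while ill-matched synapses are largely spared. All names and shapes here are hypothetical:

```python
import numpy as np

def pf_pc_ltd(weights, gc_activity, cf_signal, lr=0.05):
    """Toy LTD rule at parallel-fiber -> Purkinje-cell synapses.

    gc_activity : (n_gc, T) granule-cell firing traces over a trial.
    cf_signal   : (T,) climbing-fiber (US-driven) teaching signal.
    Synapses whose granule-cell activity overlaps the CF signal are
    depressed in proportion to that overlap; ill-matched inputs are
    nearly untouched. A conceptual sketch, not the paper's model."""
    match = gc_activity @ cf_signal / len(cf_signal)   # per-synapse match degree
    return np.clip(weights - lr * match * weights, 0.0, None)
```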

Neurons And Cognition

Influence of autapses on synchronisation in neural networks with chemical synapses

A great deal of research has been devoted to the investigation of neural dynamics in various network topologies. However, only a few studies have focused on the influence of autapses, synapses from a neuron onto itself via closed loops, on neural synchronisation. Here, we build a random network of adaptive exponential integrate-and-fire neurons coupled with chemical synapses, equipped with autapses, to study the effect of the latter on synchronous behaviour. We consider time delay in the conductance of the pre-synaptic neuron for excitatory and inhibitory connections. Interestingly, in neural networks consisting of both excitatory and inhibitory neurons, we uncover that synchronous behaviour depends on the type of the autaptic synapse. Our results provide evidence of the synchronous and desynchronised activities that emerge in random neural networks with chemical, inhibitory and excitatory synapses where neurons are equipped with autapses.
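The abstract does not name its synchronisation measure; one standard choice for spiking networks, assumed in this sketch, is the Kuramoto order parameter computed from linearly interpolated spike phases:

```python
import numpy as np

def kuramoto_order(spike_times, t_grid):
    """Kuramoto order parameter R(t) from per-neuron spike times.

    spike_times : list of sorted spike-time arrays, one per neuron.
    Each neuron's phase grows by 2*pi between consecutive spikes;
    R close to 1 indicates synchronous firing, R near 0 asynchrony.
    One standard synchrony measure; the paper's choice is unstated."""
    phases = []
    for spikes in spike_times:
        phi = np.interp(t_grid, spikes, 2 * np.pi * np.arange(len(spikes)))
        phases.append(phi)
    z = np.mean(np.exp(1j * np.array(phases)), axis=0)  # population phase coherence
    return np.abs(z)
```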

Neurons And Cognition

Influence of inhibitory synapses on the criticality of excitable neuronal networks

In this work, we study the dynamic range of a neuronal network of excitable neurons with excitatory and inhibitory synapses. We obtain an analytical expression for the critical point as a function of the excitatory and inhibitory synaptic intensities. We also determine an analytical expression for the critical point at which the maximal dynamic range occurs. Depending on the mean connection degree and coupling weights, the critical points can exhibit ceasing or ceaseless dynamics. However, the dynamic range is equal in both cases. We observe that the external stimulus masks some effects of self-sustained activity (ceaseless dynamics) in the region where the dynamic range is calculated. In these regions, the firing rate is the same for ceaseless dynamics and ceasing activity. Furthermore, we verify that excitatory and inhibitory inputs are approximately equal for a network with a large number of connections, showing an excitatory-inhibitory balance as reported experimentally.
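The dynamic range in such studies is conventionally defined as Delta = 10 * log10(s_90 / s_10), where s_10 and s_90 are the stimulus intensities evoking 10% and 90% of the response span. A sketch assuming a monotonically increasing response curve:

```python
import numpy as np

def dynamic_range(stimulus, response):
    """Dynamic range Delta = 10 * log10(s_90 / s_10) of a response curve.

    s_10 and s_90 are the stimulus intensities producing 10% and 90% of
    the response span between baseline F_0 and saturation F_max -- the
    standard definition in excitable-network studies. Assumes `response`
    increases monotonically with `stimulus`."""
    f0, fmax = response.min(), response.max()
    f10 = f0 + 0.10 * (fmax - f0)
    f90 = f0 + 0.90 * (fmax - f0)
    # invert the monotonic response curve by interpolation
    s10 = np.interp(f10, response, stimulus)
    s90 = np.interp(f90, response, stimulus)
    return 10.0 * np.log10(s90 / s10)
```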

