Featured Researches

Neurons And Cognition

External Electromagnetic Wave Excitation of a PreSynaptic Neuron Based on LIF model

Interaction of electromagnetic (EM) waves with human tissue has been a longstanding research topic for electrical and biomedical engineers. However, few publications discuss the impact of external EM waves on neural stimulation and communication through the nervous system. In fact, complex biological neural channels are a main barrier to intact and comprehensive analyses in this area. One of the ever-present challenges in neural communication is the dependency of vesicle release probability on the input spiking pattern. In this regard, this study sheds light on the consequences of changing the frequency of external EM-wave excitation on the post-synaptic neuron's spiking rate. It is assumed that the penetration depth of the wave in the brain does not reach the postsynaptic neuron. Consequently, we model neurotransmission of a bipartite chemical synapse. In addition, the way that external stimulation affects neurotransmission is examined. Unlike EM waves with multiple frequency components, a monochromatic incident wave does not undergo frequency shift and distortion in dispersive media. Accordingly, a single-frequency signal is added as an external current in the modified leaky integrate-and-fire (LIF) model. The results demonstrate the existence of a node equilibrium point in the first-order dynamical system of the LIF model. A fold bifurcation (for the presupposed LIF model values) occurs when the external excitation frequency is near 200 Hz. The outcomes provided in this paper enable us to select a proper excitation frequency for neural signaling. Correspondingly, the dependence of the cut-off frequency on the element values in the LIF circuit is found.
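The core idea of driving an LIF neuron with a single-frequency external current can be sketched in a few lines. The following is a minimal illustration, not the paper's modified LIF model: all parameter values (time constant, threshold, drive amplitude) are our own illustrative choices, and the low-pass filtering by the membrane time constant is what makes the firing rate frequency-dependent.

```python
import numpy as np

def lif_spike_rate(freq_hz, amp=2.0, i_dc=1.6, t_max=2.0, dt=1e-4,
                   tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate an LIF neuron driven by a DC bias plus a single-frequency
    sinusoidal current (a monochromatic external excitation) and return
    the mean firing rate in Hz. All parameter values are illustrative."""
    n = int(t_max / dt)
    t = np.arange(n) * dt
    i_ext = i_dc + amp * np.sin(2 * np.pi * freq_hz * t)
    v = v_rest
    spikes = 0
    for k in range(n):
        # dv/dt = (-(v - v_rest) + I_ext) / tau  (forward-Euler step)
        v += dt * (-(v - v_rest) + i_ext[k]) / tau
        if v >= v_thresh:
            v = v_reset
            spikes += 1
    return spikes / t_max

# Low-frequency drive modulates the firing rate strongly, while very high
# frequencies are attenuated by the membrane time constant (low-pass
# behaviour), so the rate approaches that of the DC-only drive.
```

Sweeping `freq_hz` with such a sketch is one way to visualise a frequency-dependent change in spiking regime of the kind the abstract describes.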

Read more
Neurons And Cognition

External noise removed from magnetoencephalographic signal using Independent Component Analyses of reference channels

Background: Many magnetoencephalographs (MEG) contain, in addition to data channels, a set of reference channels positioned relatively far from the head that provide information on magnetic fields not originating from the brain. This information is used to subtract sources of non-neural origin, with either geometrical or least mean squares (LMS) methods. LMS methods in particular tend to be biased toward more constant noise sources and are often unable to remove intermittent noise. New Method: To better identify and eliminate external magnetic noise, we propose performing ICA directly on the MEG reference channels. This in most cases produces several components that are clear summaries of external noise sources with distinct spatio-temporal patterns. We present two algorithms for identifying and removing such noise components from the data, which can in many cases significantly improve data quality. Results: We performed simulations using forward models that contained both brain sources and external noise sources. First, traditional LMS-based methods were applied. While this removed a large amount of noise, a significant portion still remained. In many cases, this portion could be removed using the proposed technique, with few or no false positives. Comparison with existing method(s): The proposed method removes significant amounts of noise to which existing LMS-based methods tend to be insensitive. Conclusions: The proposed method complements and extends traditional reference-based noise correction with little extra computational cost and low chances of false positives. Any MEG system with reference channels could profit from its use, particularly in labs with intermittent noise sources.
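The general pipeline (ICA on reference channels, then regressing the resulting noise components out of the data channels) can be sketched on synthetic signals. This is a simplified stand-in, not the paper's two algorithms: the sources, mixing matrices, and the plain least-squares removal step are all our own illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_t = 2000
t = np.arange(n_t) / 1000.0
# Two external noise sources: a 50 Hz line artifact and an intermittent burst.
line = np.sin(2 * np.pi * 50 * t)
burst = (t > 1.0) * np.sin(2 * np.pi * 7 * t)
sources = np.c_[line, burst]                      # (n_t, 2)

# Reference channels see only external noise; data channels see brain + noise.
ref = sources @ rng.normal(size=(2, 8))           # 8 reference channels
brain = 0.5 * rng.normal(size=(n_t, 4))
data = brain + sources @ rng.normal(size=(2, 4))  # 4 data channels

# ICA on the reference channels summarises the external noise sources.
ica = FastICA(n_components=2, random_state=0)
noise_comp = ica.fit_transform(ref)               # (n_t, 2)

# Least-squares regression of the ICA components out of the data channels.
beta, *_ = np.linalg.lstsq(noise_comp, data, rcond=None)
cleaned = data - noise_comp @ beta
```

Because the reference channels contain no brain signal, projecting out their ICA components removes the external-noise subspace, including the intermittent burst that a bias-toward-constant-noise method might miss.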

Read more
Neurons And Cognition

Extracting low-dimensional psychological representations from convolutional neural networks

Deep neural networks are increasingly being used in cognitive modeling as a means of deriving representations for complex stimuli such as images. While the predictive power of these networks is high, it is often not clear whether they also offer useful explanations of the task at hand. Convolutional neural network representations have been shown to be predictive of human similarity judgments for images after appropriate adaptation. However, these high-dimensional representations are difficult to interpret. Here we present a method for reducing these representations to a low-dimensional space which is still predictive of similarity judgments. We show that these low-dimensional representations also provide insightful explanations of factors underlying human similarity judgments.

Read more
Neurons And Cognition

Falsification and consciousness

The search for a scientific theory of consciousness should result in theories that are falsifiable. However, here we show that falsification is especially problematic for theories of consciousness. We formally describe the standard experimental setup for testing these theories. Based on a theory's application to some physical system, such as the brain, testing requires comparing a theory's predicted experience (given some internal observables of the system like brain imaging data) with an inferred experience (using report or behavior). If there is a mismatch between inference and prediction, a theory is falsified. We show that if inference and prediction are independent, it follows that any minimally informative theory of consciousness is automatically falsified. This is deeply problematic since the field's reliance on report or behavior to infer conscious experiences implies such independence, so this fragility affects many contemporary theories of consciousness. Furthermore, we show that if inference and prediction are strictly dependent, it follows that a theory is unfalsifiable. This affects theories which claim consciousness to be determined by report or behavior. Finally, we explore possible ways out of this dilemma.

Read more
Neurons And Cognition

Fast and Accurate Langevin Simulations of Stochastic Hodgkin-Huxley Dynamics

Fox and Lu introduced a Langevin framework for discrete-time stochastic models of randomly gated ion channels such as the Hodgkin-Huxley (HH) system. They derived a Fokker-Planck equation with state-dependent diffusion tensor D and suggested a Langevin formulation with noise coefficient matrix S such that S Sᵀ = D. Subsequently, several authors introduced a variety of Langevin equations for the HH system. In this paper, we present a natural 14-dimensional dynamics for the HH system in which each directed edge in the ion channel state transition graph acts as an independent noise source, leading to a 14×28 noise coefficient matrix S. We show that (i) the corresponding 14D system of ordinary differential equations is consistent with the classical 4D representation of the HH system; (ii) the 14D representation leads to a noise coefficient matrix S that can be obtained cheaply on each timestep, without requiring a matrix decomposition; (iii) sample trajectories of the 14D representation are pathwise equivalent to trajectories of Fox and Lu's system, as well as trajectories of several existing Langevin models; (iv) our 14D representation (and those equivalent to it) gives the most accurate interspike-interval distribution, not only with respect to moments but under both the L¹ and L∞ metric-space norms; and (v) the 14D representation gives an approximation to exact Markov chain simulations that is as fast and as efficient as all equivalent models. Our approach goes beyond existing models in that it supports a stochastic shielding decomposition that dramatically simplifies S with minimal loss of accuracy under both voltage- and current-clamp conditions.
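The edge-based noise construction can be illustrated on the simplest possible case: a single two-state gating variable, where each of the two directed transitions carries its own independent Wiener increment. This is a 1D analogue of the paper's 14-dimensional, 28-edge construction, not the HH system itself; rates, channel count, and step size below are illustrative.

```python
import numpy as np

def two_state_langevin(alpha=1.0, beta=0.5, n_ch=1000, t_max=50.0,
                       dt=1e-3, seed=0):
    """Euler-Maruyama simulation of a two-state gating variable x (fraction
    of open channels) in which each *directed* transition (closed->open at
    rate alpha, open->closed at rate beta) is an independent noise source.
    A 1D analogue of the edge-based 14x28 noise matrix construction."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    x = np.empty(n)
    x[0] = alpha / (alpha + beta)      # start at the deterministic fixed point
    for k in range(n - 1):
        fluxes = np.array([alpha * (1 - x[k]), beta * x[k]])  # per-edge rates
        drift = fluxes[0] - fluxes[1]
        # One independent Wiener increment per directed edge.
        dw = rng.normal(0.0, np.sqrt(dt), size=2)
        noise = np.sqrt(np.maximum(fluxes, 0.0) / n_ch) @ (dw * np.array([1, -1]))
        x[k + 1] = np.clip(x[k] + drift * dt + noise, 0.0, 1.0)
    return x
```

The stationary mean sits at alpha/(alpha+beta) and the stationary variance scales as x(1-x)/n_ch, matching the channel-number scaling a Langevin approximation to the underlying Markov chain should reproduce.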

Read more
Neurons And Cognition

Fast simulations of highly-connected spiking cortical models using GPUs

Over the past decade there has been a growing interest in the development of parallel hardware systems for simulating large-scale networks of spiking neurons. Compared to other highly parallel systems, GPU-accelerated solutions have the advantage of relatively low cost and great versatility, thanks in part to the availability of the CUDA C/C++ programming languages. NeuronGPU is a GPU library for large-scale simulations of spiking neural network models, written in the C++ and CUDA C++ programming languages, based on a novel spike-delivery algorithm. This library includes simple LIF (leaky-integrate-and-fire) neuron models as well as several multisynapse AdEx (adaptive-exponential-integrate-and-fire) neuron models with current- or conductance-based synapses, user-definable models and different devices. The numerical solution of the differential equations of the dynamics of the AdEx models is performed through a parallel implementation, written in CUDA C++, of the fifth-order Runge-Kutta method with adaptive step-size control. In this work we evaluate the performance of this library on the simulation of a cortical microcircuit model, based on LIF neurons and current-based synapses, and on a balanced network of excitatory and inhibitory neurons, using AdEx neurons and conductance-based synapses. On these models, we show that the proposed library achieves state-of-the-art performance in terms of simulation time per second of biological activity. In particular, using a single NVIDIA GeForce RTX 2080 Ti GPU board, the full-scale cortical-microcircuit model, which includes about 77,000 neurons and 3×10⁸ connections, can be simulated at a speed very close to real time, while the simulation time of a balanced network of 1,000,000 AdEx neurons with 1,000 connections per neuron was about 70 s per second of biological activity.
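The kind of update loop such a library parallelises can be sketched on the CPU with NumPy. This is a generic current-based LIF network, not NeuronGPU's spike-delivery algorithm; network size, weights, and time constants are illustrative choices of ours.

```python
import numpy as np

def simulate_lif_net(n=200, p_conn=0.1, t_max=0.5, dt=1e-4, seed=0):
    """Vectorised current-based LIF network: a CPU (NumPy) sketch of the
    per-timestep update that GPU libraries execute in parallel over
    neurons and synapses. Parameters are illustrative."""
    rng = np.random.default_rng(seed)
    tau_m, tau_s = 0.02, 0.005         # membrane and synaptic time constants
    v_th, v_reset = 1.0, 0.0
    # Sparse random excitatory connectivity, weight w[i, j] from j to i.
    w = (rng.random((n, n)) < p_conn) * rng.normal(0.05, 0.01, (n, n))
    v = rng.random(n) * v_th           # random initial membrane potentials
    i_syn = np.zeros(n)
    i_ext = 1.2                        # suprathreshold constant drive
    spike_count = 0
    for _ in range(int(t_max / dt)):
        v += dt * (-(v - v_reset) + i_ext + i_syn) / tau_m
        i_syn += dt * (-i_syn / tau_s)
        fired = v >= v_th
        v[fired] = v_reset
        i_syn += w @ fired             # deliver spikes to postsynaptic targets
        spike_count += fired.sum()
    return spike_count / (n * t_max)   # mean firing rate in Hz
```

On a GPU, the membrane update is trivially parallel over neurons, while the spike delivery (`w @ fired`) is where algorithmic choices such as the library's novel spike-delivery scheme matter most for performance.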

Read more
Neurons And Cognition

Fate of Duplicated Neural Structures

Statistical mechanics determines the abundance of different arrangements of matter depending on cost-benefit balances. Its formalism and phenomenology percolate throughout biological processes and set limits to effective computation. Under specific conditions, self-replicating and computationally complex patterns become favored, yielding life, cognition, and Darwinian evolution. Neurons and neural circuits sit at a crossroads between statistical mechanics, computation, and (through their role in cognition) natural selection. Can we establish a statistical physics of neural circuits? Such a theory would tell us what kinds of brains to expect under set energetic, evolutionary, and computational conditions. With this big picture in mind, we focus on the fate of duplicated neural circuits. We look at examples from central nervous systems, with a stress on computational thresholds that might prompt this redundancy. We also study a naive cost-benefit balance for duplicated circuits implementing complex phenotypes. From this we derive phase diagrams and (phase-like) transitions between single and duplicated circuits, which constrain evolutionary paths to complex cognition. Back to the big picture, similar phase diagrams and transitions might constrain I/O and internal connectivity patterns of neural circuits at large. The formalism of statistical mechanics seems a natural framework for this worthy line of research.
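To make "phase diagram" concrete, here is a toy cost-benefit balance of our own construction (not the paper's model): a circuit of reliability r and cost c yields benefit b when it works, and a duplicated circuit succeeds if either copy does, at double the cost.

```python
def duplication_favored(b, c, r):
    """Naive cost-benefit toy (an illustration, not the paper's model):
    a single circuit gives expected payoff b*r - c; a duplicated circuit
    succeeds with probability 1 - (1-r)**2 at cost 2c. Duplication wins when
        b*(1 - (1-r)**2) - 2*c > b*r - c,
    which simplifies to b*r*(1-r) > c."""
    return b * r * (1 - r) > c

# Phase boundary in the (r, c/b) plane: c/b = r*(1-r), maximal at r = 1/2,
# so redundancy pays most for circuits of intermediate reliability.
```

Sweeping (r, c/b) over a grid with such a predicate yields a two-region diagram with a phase-like boundary between single and duplicated circuits, the qualitative structure the abstract describes.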

Read more
Neurons And Cognition

Feedback Gains modulate with Motor Memory Uncertainty

A sudden change in dynamics produces large errors leading to increases in muscle co-contraction and feedback gains during early adaptation. We previously proposed that internal model uncertainty drives these changes, whereby the sensorimotor system reacts to the change in dynamics by up-regulating stiffness and feedback gains to reduce the effect of model errors. However, these feedback gain increases have also been suggested to represent part of the adaptation mechanism. Here, we investigate this by examining changes in visuomotor feedback gains during gradual or abrupt force field adaptation. Participants grasped a robotic manipulandum and reached while a curl force field was introduced gradually or abruptly. Abrupt introduction of dynamics elicited large initial increases in kinematic error, muscle co-contraction and visuomotor feedback gains, while gradual introduction showed little initial change in these measures despite evidence of adaptation. After adaptation had plateaued, there was a change in the co-contraction and visuomotor feedback gains relative to null field movements, but no differences (apart from the final muscle activation pattern) between the abrupt and gradual introduction of dynamics. This suggests that the initial increase in feedback gains is not part of the adaptation process, but instead an automatic reactive response to internal model uncertainty. In contrast, the final level of feedback gains is a predictive tuning of the feedback gains to the external dynamics as part of the internal model adaptation. Together, the reactive and predictive feedback gains explain the wide variety of previous experimental results of feedback changes during adaptation.

Read more
Neurons And Cognition

Fighting seizures with seizures: diffusion and stability in neural systems

Seizure activity is a ubiquitous and pernicious pathophysiology that, in principle, should yield to mathematical treatments of (neuronal) ensemble dynamics - and therefore interventions on stochastic chaos. A seizure can be characterised as a deviation of neural activity from a stable dynamical regime, i.e. one in which signals fluctuate only within a limited range. In silico treatments of neural activity are an important tool for understanding how the brain can achieve stability, as well as how pathology can lead to seizures and potential strategies for mitigating instabilities, e.g. via external stimulation. Here, we demonstrate that the (neuronal) state equation used in Dynamic Causal Modelling generalises to a Fokker-Planck formalism when propagation of neuronal activity along structural connections is considered. Using the Jacobian of this generalised state equation, we show that an initially unstable system can be rendered stable via a reduction in diffusivity (i.e., connectivity that disperses neuronal fluctuations). We show, for neural systems prone to epileptic seizures, that such a reduction can be achieved via external stimulation. Specifically, we show that this stimulation should be applied in such a way as to temporarily mirror epileptic activity in the areas adjoining an affected brain region - thus 'fighting seizures with seizures'. We offer proof of principle using simulations based on functional neuroimaging data collected from patients with idiopathic generalised epilepsy, in which we successfully suppress pathological activity in a distinct sub-network. Our hope is that this technique can form the basis for real-time monitoring and intervention devices that are capable of suppressing or even preventing seizures in a non-invasive manner.
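The stability argument can be illustrated with a schematic Jacobian. This is a stand-in, not the DCM state equation or its Fokker-Planck generalisation: we take a linear system with self-decay plus excitatory coupling whose strength d plays the role of diffusivity, and check the sign of the largest real eigenvalue.

```python
import numpy as np

def max_real_eig(d, w, a=1.0):
    """Largest real part of the eigenvalues of the Jacobian J = -a*I + d*W,
    where W couples regions and d scales how strongly activity disperses
    between them. A schematic stand-in for the paper's stability analysis."""
    n = w.shape[0]
    j = -a * np.eye(n) + d * w
    return np.linalg.eigvals(j).real.max()

n = 6
# Uniform excitatory coupling between six regions (no self-coupling).
w = 0.5 * (np.ones((n, n)) - np.eye(n))

# Strong dispersion: a positive real eigenvalue, i.e. instability.
unstable = max_real_eig(1.0, w) > 0
# Reduced diffusivity: all eigenvalues in the left half-plane, i.e. stable.
stable = max_real_eig(0.1, w) < 0
```

In this caricature the system crosses from unstable to stable as d falls below a/λmax(W), mirroring the abstract's claim that an initially unstable system can be stabilised by reducing effective diffusivity, for instance via external stimulation.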

Read more
Neurons And Cognition

Fine-grain atlases of functional modes for fMRI analysis

Population imaging markedly increased the size of functional-imaging datasets, shedding new light on the neural basis of inter-individual differences. Analyzing such large datasets entails new computational and statistical scalability challenges. For this reason, brain images are typically summarized in a few signals, for instance reducing voxel-level measures with brain atlases or functional modes. A good choice of the corresponding brain networks is important, as most data analyses start from these reduced signals. We contribute finely-resolved atlases of functional modes, comprising from 64 to 1024 networks. These dictionaries of functional modes (DiFuMo) are trained on millions of fMRI functional brain volumes of total size 2.4TB, spanning 27 studies and many research groups. We demonstrate the benefits of extracting reduced signals on our fine-grain atlases for many classic functional data analysis pipelines: stimuli decoding from 12,334 brain responses, standard GLM analysis of fMRI across sessions and individuals, extraction of resting-state functional-connectome biomarkers for 2,500 individuals, and data compression and meta-analysis over more than 15,000 statistical maps. In each of these analysis scenarios, we compare the performance of our functional atlases with that of other popular references, and with a simple voxel-level analysis. Results highlight the importance of using high-dimensional "soft" functional atlases to represent and analyse brain activity while capturing its functional gradients. Analyses on high-dimensional modes achieve similar statistical performance as at the voxel level, but with much reduced computational cost and higher interpretability. In addition to making them available, we provide meaningful names for these modes, based on their anatomical location. This will facilitate the reporting of results.
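The reduction step itself, i.e. turning voxel-level data into per-mode time series given a "soft" (overlapping, continuous) atlas, can be sketched generically. This is our simplified illustration with synthetic data, not the DiFuMo pipeline; dimensions, noise level, and the plain least-squares fit are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_vox, n_modes, n_tr = 5000, 64, 100

# Soft spatial modes: continuous, overlapping, non-negative voxel weights.
maps = np.abs(rng.normal(size=(n_vox, n_modes)))      # (n_voxels, n_modes)
true_ts = rng.normal(size=(n_modes, n_tr))            # ground-truth signals
# Synthetic "BOLD" data generated from the modes plus voxel noise.
bold = maps @ true_ts + 0.01 * rng.normal(size=(n_vox, n_tr))

# Least-squares reduction: solve bold ~ maps @ ts for the mode time series,
# collapsing 5000 voxel signals into 64 mode signals.
ts, *_ = np.linalg.lstsq(maps, bold, rcond=None)      # (n_modes, n_tr)
```

Downstream pipelines (decoding, GLMs, connectomes) then operate on the 64 mode signals instead of thousands of voxel signals, which is where the computational savings the abstract reports come from.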

Read more