Featured Research

Neurons And Cognition

Metastable attractors explain the variable timing of stable behavioral action sequences

Natural animal behavior displays rich lexical and temporal dynamics, even in a stable environment. This implies that behavioral variability arises from sources within the brain, but the origin and mechanics of these processes remain largely unknown. Here, we focus on the observation that the timing of self-initiated actions shows large variability even when they are executed in stable, well-learned sequences. Could this mix of reliability and stochasticity arise within the same circuit? We trained rats to perform a stereotyped sequence of self-initiated actions and recorded neural ensemble activity in secondary motor cortex (M2), which is known to reflect trial-by-trial fluctuations in action timing. Using hidden Markov models, we established a robust and accurate dictionary between ensemble activity patterns and actions. We then showed that metastable attractors, representing activity patterns with the requisite combination of reliable sequential structure and high transition-timing variability, could be produced by reciprocally coupling a high-dimensional recurrent network and a low-dimensional feedforward one. Transitions between attractors were generated by correlated variability arising from the feedback loop between the two networks. This mechanism predicted a specific structure of low-dimensional noise correlations, which we verified empirically in M2 ensemble dynamics. This work suggests a robust network motif as a novel mechanism supporting critical aspects of animal behavior and establishes a framework for investigating its circuit origins via correlated variability.
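The paper's decoding pipeline is not reproduced in this listing, but the general idea, fitting an HMM to binned ensemble activity and reading out a discrete state sequence, can be sketched as follows. This is a minimal illustration using hmmlearn's GaussianHMM on simulated firing rates; the state count, rates, and dwell times are placeholder assumptions, not values from the study.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

# Simulate 3 metastable states, each a distinct firing-rate pattern across
# 20 neurons, visited in a fixed order with variable dwell times.
n_neurons, n_states = 20, 3
means = rng.uniform(2, 20, size=(n_states, n_neurons))
true_states, chunks = [], []
for _ in range(50):                          # 50 "trials" of the sequence
    for s in range(n_states):
        dwell = int(rng.integers(10, 40))    # variable transition timing
        true_states += [s] * dwell
        chunks.append(means[s] + rng.normal(0, 1.5, size=(dwell, n_neurons)))
X = np.vstack(chunks)

# Fit an HMM and decode the most likely hidden-state sequence.
hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
hmm.fit(X)
decoded = hmm.predict(X)
print("true occupancies:   ", np.bincount(true_states) / len(true_states))
print("decoded occupancies:", np.bincount(decoded) / len(decoded))
```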

Neurons And Cognition

Microcircuit synchronization and heavy tailed synaptic weight distribution in preBötzinger Complex contribute to generation of breathing rhythm

The preBötzinger Complex (preBötC), the mammalian inspiratory rhythm generator, encodes inspiratory time as motor pattern. Spike synchronization throughout this sparsely connected network generates inspiratory bursts, albeit with variable latencies after preinspiratory activity onset in each breathing cycle. Using minimal models of the preBötC rhythmogenic microcircuit, we examined the variability in the probability and latency to burst, mimicking experiments. Among various physiologically plausible graphs of 1000 point neurons with experimentally determined neuronal and synaptic parameters, directed Erdős–Rényi graphs best captured the experimentally observed dynamics. Mechanistically, preBötC (de)synchronization and oscillatory dynamics are regulated by the efferent connectivity of spiking neurons, which gates the amplification of modest preinspiratory activity through input convergence. Furthermore, a lognormal distribution of synaptic weights was necessary to replicate experiments, augmenting the efficacy of convergent coincident inputs. These mechanisms enable exceptionally robust yet flexible preBötC attractor dynamics that, we postulate, represent universal temporal-processing and decision-making computational motifs throughout the brain.
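As a rough illustration of the two structural ingredients highlighted above, directed Erdős–Rényi connectivity and heavy-tailed (lognormal) synaptic weights, the sketch below builds such a network as a weight matrix. The connection probability and lognormal parameters are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p_connect = 1000, 0.0125              # hypothetical connection probability

# Directed Erdős–Rényi graph: each ordered pair (i, j) is connected
# independently with probability p_connect.
adjacency = rng.random((N, N)) < p_connect
np.fill_diagonal(adjacency, False)       # no self-connections

# Heavy-tailed (lognormal) synaptic weights on existing connections.
weights = np.where(adjacency,
                   rng.lognormal(mean=-0.7, sigma=1.0, size=(N, N)),
                   0.0)

out_degree = adjacency.sum(axis=1)
w = weights[adjacency]
print(f"mean out-degree: {out_degree.mean():.1f}")
print(f"weight tail: 99th percentile / median = "
      f"{np.percentile(w, 99) / np.median(w):.1f}")
```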

Neurons And Cognition

Minimax Dynamics of Optimally Balanced Spiking Networks of Excitatory and Inhibitory Neurons

Excitation-inhibition (E-I) balance is ubiquitously observed in the cortex. Recent studies suggest an intriguing link between balance on fast timescales, tight balance, and efficient information coding with spikes. We further this connection by taking a principled approach to optimal balanced networks of excitatory (E) and inhibitory (I) neurons. By deriving E-I spiking neural networks from greedy spike-based optimizations of constrained minimax objectives, we show that tight balance arises from correcting for deviations from the minimax optima. We predict specific neuron firing rates in the network by solving the minimax problem, going beyond statistical theories of balanced networks. Finally, we design minimax objectives for reconstruction of an input signal, associative memory, and storage of manifold attractors, and derive from them E-I networks that perform the computation. Overall, we present a novel normative modeling approach for spiking E-I networks, going beyond the widely-used energy minimizing networks that violate Dale's law. Our networks can be used to model cortical circuits and computations.
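The constrained minimax derivation itself is beyond an abstract, but the flavor of greedy spike-based optimization can be sketched with the standard reconstruction objective: a neuron fires only when its spike reduces the squared readout error, which yields tightly balanced dynamics. This is a generic single-population sketch of that principle (cf. efficient balanced-network models), not the paper's E-I-separated minimax formulation; all constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, dim, T, dt = 40, 2, 2000, 1e-3
D = rng.normal(size=(dim, n_neurons))
D /= np.linalg.norm(D, axis=0)           # unit-norm decoding weights
lam = 10.0                               # readout decay rate (1/s)

t = np.linspace(0, 4 * np.pi, T)
x = np.stack([np.sin(t), np.cos(t)])     # 2-D target signal
r = np.zeros(n_neurons)                  # filtered spike trains
spikes = np.zeros((n_neurons, T), dtype=bool)

for k in range(T):
    # Greedy rule: a spike by neuron i adds D[:, i] to the readout, so it
    # reduces ||x - D r||^2 iff D[:, i] . error > ||D[:, i]||^2 / 2 = 1/2.
    error = x[:, k] - D @ r
    gains = D.T @ error - 0.5
    i = int(np.argmax(gains))
    if gains[i] > 0:
        spikes[i, k] = True
        r[i] += 1.0
    r -= dt * lam * r                    # leaky readout decay
print(f"mean firing rate: {spikes.mean() / dt:.1f} Hz")
```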

Neurons And Cognition

Mining the Mind: Linear Discriminant Analysis of MEG source reconstruction time series supports dynamic changes in deep brain regions during meditation sessions

Practitioners have long claimed that meditation practices have a positive effect on the regulation of mood and emotion, and in recent times there has been a sustained effort to describe more precisely the changes that meditation induces in the human brain. Longitudinal studies have reported morphological changes in cortical thickness and volume in selected brain regions due to meditation practice, which is interpreted as evidence for its effectiveness beyond subjective self-reporting. Evidence based on real-time monitoring of the meditating brain by functional imaging modalities such as MEG or EEG remains a challenge. In this article we consider MEG data collected during meditation sessions of experienced Buddhist monks practicing focused-attention (Samatha) and open-monitoring (Vipassana) meditation, contrasted with a resting state with eyes closed. The MEG data are first mapped to time series of brain activity averaged over brain regions corresponding to the standard Destrieux brain atlas, and then, by bootstrapping and spectral analysis, to data matrices representing a random sample of power spectral densities over bandwidths corresponding to the α, β, γ, and θ bands. We demonstrate using linear discriminant analysis (LDA) that the samples corresponding to different meditative or resting states contain enough fingerprints of the brain state to allow a separation between states, and we identify the brain regions that appear to contribute to the separation. Our findings suggest that the cingulate cortex, insular cortex and some internal structures, most notably the accumbens, caudate and putamen nuclei, the thalamus and the amygdalae, stand out as separating regions, which correlates well with earlier findings from longitudinal studies.
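A hedged sketch of the final classification step: LDA applied to per-region band-power features, with synthetic data standing in for the bootstrapped MEG power spectral densities over the Destrieux atlas. Shapes, effect sizes and the three-state setup are illustrative assumptions only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_regions, n_bands = 148, 4              # Destrieux regions x (alpha..theta)
n_per_state = 200

# Three "brain states": rest, focused attention, open monitoring.
# Each state gets a small shift of its band-power profile.
base = rng.normal(size=(3, n_regions * n_bands)) * 0.3
X = np.vstack([base[k] + rng.normal(size=(n_per_state, n_regions * n_bands))
               for k in range(3)])
y = np.repeat([0, 1, 2], n_per_state)

lda = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(lda, X, y, cv=5).mean())

# Features (region x band) contributing most to the separation.
lda.fit(X, y)
top = np.argsort(np.abs(lda.coef_).max(axis=0))[::-1][:5]
print("top feature indices:", top)
```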

Neurons And Cognition

Model Order Reduction in Neuroscience

The human brain contains approximately 10^9 neurons, each with approximately 10^3 connections (synapses) with other neurons. Most sensory, cognitive and motor functions of our brains depend on the interaction of a large population of neurons. In recent years, many technologies have been developed for recording large numbers of neurons, either sequentially or simultaneously. Increases in computational power and algorithmic developments have enabled advanced analyses of neuronal populations in parallel with the rapid growth in the quantity and complexity of recorded neuronal activity. Recent studies have made use of dimensionality and model order reduction techniques to extract coherent features that are not apparent at the level of individual neurons. It has been observed that neuronal activity evolves on low-dimensional subspaces. The aim of model reduction of large-scale neuronal networks is accurate and fast prediction of patterns and their propagation in different areas of the brain. Spatiotemporal features of brain activity are identified on low-dimensional subspaces with methods such as dynamic mode decomposition (DMD), proper orthogonal decomposition (POD), discrete empirical interpolation (DEIM) and combined parameter and state reduction. In this paper, we give an overview of the dimensionality reduction and model order reduction techniques currently used in neuroscience. This work will be featured as a chapter in the upcoming Handbook on Model Order Reduction (P. Benner, S. Grivet-Talocia, A. Quarteroni, G. Rozza, W. H. A. Schilders, L. M. Silveira, eds., to appear, De Gruyter).
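Of the methods surveyed, dynamic mode decomposition is perhaps the easiest to illustrate compactly. The sketch below runs exact DMD (via the SVD) on a synthetic recording matrix with a planted low-dimensional oscillation; it shows the generic algorithm, not any particular study's code.

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons, T = 50, 400
t = np.linspace(0, 8 * np.pi, T)

# Low-dimensional latent oscillation embedded in high-dimensional noise.
latent = np.stack([np.sin(t), np.cos(t)])
mixing = rng.normal(size=(n_neurons, 2))
data = mixing @ latent + 0.05 * rng.normal(size=(n_neurons, T))

X, Y = data[:, :-1], data[:, 1:]          # snapshot pairs x_k -> x_{k+1}
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 2                                     # truncation rank
U_r, s_r, V_r = U[:, :r], s[:r], Vh[:r].conj().T
A_tilde = U_r.conj().T @ Y @ V_r / s_r    # reduced linear operator
eigvals, W = np.linalg.eig(A_tilde)
modes = Y @ V_r / s_r @ W                 # exact DMD modes

dt = t[1] - t[0]
print("continuous-time frequencies (rad/s):", np.log(eigvals).imag / dt)
```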

Neurons And Cognition

Model Reduction Captures Stochastic Gamma Oscillations on Low-Dimensional Manifolds

Gamma-frequency oscillations (25-140 Hz), observed in neural activity within many brain regions, have long been regarded as a physiological basis of many brain functions, such as memory and attention. Among numerous theoretical and computational modeling studies, gamma oscillations have been found in biologically realistic spiking network models of the primary visual cortex. However, due to the high dimensionality and strong nonlinearity of such models, it is generally difficult to perform detailed theoretical analysis of the emergent gamma dynamics. Here we propose a suite of Markovian model reduction methods with varying levels of complexity and apply them to spiking network models exhibiting heterogeneous dynamical regimes, ranging from homogeneous firing to strong synchrony in the gamma band. The reduced models not only successfully reproduce the gamma-band oscillations of the full model, but also exhibit the same dynamical features as parameters are varied. Most remarkably, the invariant measure of the coarse-grained Markov process reveals a two-dimensional surface in state space on which the gamma dynamics mainly reside. Our results suggest that the statistical features of gamma oscillations strongly depend on the subthreshold neuronal distributions. Because of the generality of the Markovian assumptions, our dimension reduction methods offer a powerful toolbox for theoretical examination of many other complex cortical spatio-temporal behaviors observed in both neurophysiological experiments and numerical simulations.
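The reduction itself is model-specific, but the generic recipe behind the invariant-measure analysis can be sketched: discretize an observed network state into cells, estimate a Markov transition matrix from the observed transitions, and take its leading left eigenvector as the invariant measure. The surrogate synchrony trace and binning below are illustrative assumptions, not the paper's coarse-graining of the spiking V1 model.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 20000

# Surrogate 1-D "population synchrony" trace: a noisy oscillation.
phase = np.cumsum(0.05 + 0.02 * rng.normal(size=T))
x = np.sin(phase) + 0.2 * rng.normal(size=T)

n_bins = 30
states = np.clip(np.digitize(x, np.linspace(-1.5, 1.5, n_bins - 1)),
                 0, n_bins - 1)

# Count transitions and row-normalize into a stochastic matrix P.
P = np.zeros((n_bins, n_bins))
np.add.at(P, (states[:-1], states[1:]), 1.0)
P /= np.maximum(P.sum(axis=1, keepdims=True), 1e-12)

# Invariant measure: left eigenvector of P with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = np.abs(pi) / np.abs(pi).sum()
print("invariant measure peaks at bins:", np.argsort(pi)[-3:])
```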

Neurons And Cognition

Modeling state-transition dynamics in brain signals by memoryless Gaussian mixtures

Recent studies have proposed that brain activity can be summarized as dynamics among a relatively small number of hidden states and that such an approach is a promising tool for revealing brain function. Hidden Markov models (HMMs) are a prevalent approach to inferring such neural dynamics among discrete brain states. However, the validity of modeling neural time series data with HMMs has not been established. Here, to address this situation and examine the performance of the HMM, we compare it with the Gaussian mixture model (GMM), a statistically simpler model that makes no assumption of Markovianity, by applying both models to synthetic and empirical resting-state functional magnetic resonance imaging (fMRI) data. We find that the GMM allows us to interpret the sequence of estimated hidden states as a temporally structured time series, and that it is often better than the HMM in terms of the accuracy and consistency of estimating the time course of the hidden state. These results suggest that GMMs can be a model of first choice for investigating hidden-state dynamics, even when the time series is apparently not memoryless.
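A hedged sketch of this comparison: fit a GMM (no Markov assumption) and a Gaussian HMM to the same synthetic sequence and score how well each recovers the hidden-state time course. Synthetic Gaussian data stand in for the fMRI time series, and the sticky transition matrix is an arbitrary choice.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from hmmlearn.hmm import GaussianHMM
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(6)
n_states, dim, T = 3, 5, 3000
means = rng.normal(scale=2.0, size=(n_states, dim))

# Markovian ground truth with sticky states.
P = np.full((n_states, n_states), 0.05)
np.fill_diagonal(P, 0.90)
z = np.zeros(T, dtype=int)
for t in range(1, T):
    z[t] = rng.choice(n_states, p=P[z[t - 1]])
X = means[z] + rng.normal(size=(T, dim))

gmm = GaussianMixture(n_components=n_states, random_state=0).fit(X)
hmm = GaussianHMM(n_components=n_states, covariance_type="full",
                  n_iter=100, random_state=0).fit(X)

# Adjusted Rand index is invariant to label permutation.
print("GMM state-recovery ARI:", adjusted_rand_score(z, gmm.predict(X)))
print("HMM state-recovery ARI:", adjusted_rand_score(z, hmm.predict(X)))
```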

Neurons And Cognition

Modeling the Hallucinating Brain: A Generative Adversarial Framework

This paper looks into the modeling of hallucination in the human brain. Hallucinations are known to be causally associated with malfunctions in the interaction of different brain areas involved in perception. Focusing on visual hallucination and its underlying causes, we identify an adversarial mechanism between the parts of the brain responsible for visual perception. We then show how the characterized adversarial interactions in the brain can be modeled by a generative adversarial network.
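To make the proposed mapping concrete, here is a deliberately minimal GAN in PyTorch: the generator stands in for circuitry producing percepts from internal noise, the discriminator for circuitry checking candidate percepts against sensory evidence. This is a generic two-layer toy, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
dim, z_dim = 2, 4

G = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, dim))
D = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, dim) * 0.5 + 2.0   # "sensory evidence"
    fake = G(torch.randn(64, z_dim))          # internally generated percept

    # Discriminator: tell evidence-driven from internally generated activity.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: produce percepts the discriminator accepts as evidence-driven.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated mean:", G(torch.randn(512, z_dim)).mean(0).detach())
```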

Neurons And Cognition

Modelling Drosophila Motion Vision Pathways for Decoding the Direction of Translating Objects Against Cluttered Moving Backgrounds

Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, remains a challenging problem. In nature, lightweight, low-powered flying insects use motion vision to detect moving targets in highly variable environments during flight, offering excellent paradigms for learning motion perception strategies. This paper investigates the motion vision pathways of the fruit fly Drosophila and presents a computational model based on recent physiological research. The proposed visual system model features bio-plausible ON and OFF pathways and wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are twofold: 1) the proposed model articulates, in a feed-forward manner, the formation of both direction-selective (DS) and direction-opponent (DO) responses, which are principal features of motion perception neural circuits; 2) it shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via modeling of spatiotemporal dynamics that combines motion pre-filtering mechanisms with ensembles of local correlators inside both the ON and OFF pathways, which effectively suppresses irrelevant background motion and distractors and improves the dynamic response. The direction of translating objects is then decoded from the global responses of the HS and VS systems, with positive or negative output indicating preferred-direction (PD) or null-direction (ND) translation. Experiments have verified the effectiveness of the proposed neural system model and demonstrated its responsive preference for faster-moving, higher-contrast and larger targets embedded in cluttered moving backgrounds.
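The "local correlators" mentioned above are, in fly vision models, typically Hassenstein-Reichardt-style elementary motion detectors, whose opponent output is direction-selective. The sketch below demonstrates that core ingredient on a 1-D toy stimulus; it omits the model's ON/OFF separation, pre-filtering, and HS/VS integration.

```python
import numpy as np

def lowpass(signal, tau, dt=1.0):
    """First-order low-pass filter (the delay arm of the correlator)."""
    out = np.zeros_like(signal)
    a = dt / (tau + dt)
    for t in range(1, len(signal)):
        out[t] = out[t - 1] + a * (signal[t] - out[t - 1])
    return out

def emd_output(left, right, tau=8.0):
    """Opponent correlator: delayed-left * right minus left * delayed-right."""
    return lowpass(left, tau) * right - left * lowpass(right, tau)

# Drifting 1-D grating sampled at two neighbouring photoreceptors;
# the sign of the phase lag sets the motion direction.
t = np.arange(500)
for direction in (+1, -1):
    left = np.sin(0.2 * t)
    right = np.sin(0.2 * t - direction * 0.8)
    print(f"direction {direction:+d}: mean EMD response "
          f"{emd_output(left, right).mean():+.3f}")
```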

Neurons And Cognition

Models Currently Implemented in MIIND

This is a living document that will be updated when appropriate. MIIND [1, 2] is a population-level neural simulator. It is based on population density techniques, like DIPDE [3]. Unlike DIPDE, however, MIIND is agnostic to the underlying neuron model used in its populations, so any one-, two- or three-dimensional model can be set up with minimal effort. The resulting populations can then be grouped into large networks, e.g. the Potjans-Diesmann model [4]. The MIIND website (this http URL) contains training materials and help with setting up MIIND, either via virtual machines, a Docker image, or directly from source code.
