Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Rodrigo Echeveste is active.

Publication


Featured research published by Rodrigo Echeveste.


Frontiers in Robotics and AI | 2014

Generating functionals for computational intelligence: the Fisher information as an objective function for self-limiting Hebbian learning rules

Rodrigo Echeveste; Claudius Gros

Generating functionals may guide the evolution of a dynamical system and constitute a possible route for handling the complexity of neural networks as relevant for computational intelligence. We propose and explore a new objective function, which allows one to obtain plasticity rules for the afferent synaptic weights. The adaptation rules are Hebbian, self-limiting, and result from the minimization of the Fisher information with respect to the synaptic flux. We perform a series of simulations examining the behavior of the new learning rules in various circumstances. The vector of synaptic weights aligns with the principal direction of input activities, whenever one is present. A linear discrimination is performed when there are two or more principal directions; directions having bimodal firing-rate distributions, characterized by a negative excess kurtosis, are preferred. We find robust performance, and full homeostatic adaptation of the synaptic weights results as a by-product of the synaptic flux minimization. This self-limiting behavior allows for stable online learning over arbitrary durations. The neuron acquires new information when the statistics of the input activities are changed at a certain point of the simulation, showing, however, a distinct resilience against unlearning previously acquired knowledge. Learning is fast when starting with randomly drawn synaptic weights and substantially slower when the synaptic weights are already fully adapted.
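A minimal sketch of the self-limiting behavior described above, in Python with NumPy: a Hebbian update x*y whose magnitude is damped at large membrane potentials, so the weight vector stays bounded while aligning with the principal direction of the inputs. The damping factor, learning rate, and input statistics are illustrative placeholders, not the published rule derived from the Fisher information.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_steps, eta = 10, 20_000, 1e-3
w = rng.normal(scale=0.1, size=n_inputs)      # afferent synaptic weights

# Inputs with one dominant (principal) direction.
principal = rng.normal(size=n_inputs)
principal /= np.linalg.norm(principal)

for _ in range(n_steps):
    y = rng.normal(size=n_inputs) + 2.0 * rng.normal() * principal
    x = w @ y                                 # membrane potential
    # Hebbian term x*y, damped by a saturating factor so that the
    # weight vector cannot grow without bound (self-limitation).
    w += eta * x * y * (1.0 - np.tanh(x) ** 2)

w_hat = w / np.linalg.norm(w)
print("alignment with principal direction:", abs(w_hat @ principal))
```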


Neural Computation | 2015

Two-trace model for spike-timing-dependent synaptic plasticity

Rodrigo Echeveste; Claudius Gros

We present an effective model for spike-timing-dependent plasticity (STDP) in terms of two interacting traces, corresponding to the fraction of activated NMDA receptors and the Ca2+ concentration in the dendritic spine of the postsynaptic neuron. This model aims to bridge the worlds of existing simplistic phenomenological rules and highly detailed models, thus constituting a practical tool for studying the interplay of neural activity and synaptic plasticity in extended spiking neural networks. For isolated pairs of pre- and postsynaptic spikes, the standard pairwise STDP rule is reproduced, with appropriate parameters determining the respective weights and timescales of the causal and the anticausal contributions. The model otherwise contains only three free parameters, which can be adjusted to reproduce triplet nonlinearities in hippocampal culture and cortical slices. We also investigate the transition from time-dependent to rate-dependent plasticity occurring for both correlated and uncorrelated spike patterns.
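The following toy sketch illustrates the two-trace mechanism in spirit: one trace driven by presynaptic spikes (standing in for NMDA receptor activation), a second driven by postsynaptic spikes and gated by the first (standing in for the Ca2+ concentration), with the weight change following the second trace. Time constants, amplitudes, and the weight kinetics are made-up placeholders, not the model's fitted parameters.

```python
import numpy as np

dt, T = 0.1, 200.0                 # ms
tau_nmda, tau_ca = 30.0, 20.0      # decay time constants (placeholders)
t_pre, t_post = 50.0, 60.0         # one causal pre->post pair (+10 ms)

w, nmda, ca = 0.5, 0.0, 0.0
for step in range(int(T / dt)):
    t = step * dt
    nmda += dt * (-nmda / tau_nmda)          # NMDA trace decays
    ca   += dt * (-ca / tau_ca)              # Ca trace decays
    if abs(t - t_pre) < dt / 2:              # presynaptic spike opens NMDARs
        nmda += 1.0
    if abs(t - t_post) < dt / 2:             # postsynaptic spike: Ca influx
        ca += nmda                           # gated by the NMDA trace
    w += dt * 1e-3 * ca                      # toy weight kinetics

print("final weight after one causal pairing:", w)
```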


Entropy | 2015

The Fisher Information as a Neural Guiding Principle for Independent Component Analysis

Rodrigo Echeveste; Samuel Eckmann; Claudius Gros

The Fisher information constitutes a natural measure for the sensitivity of a probability distribution with respect to a set of parameters. An implementation of the stationarity principle for synaptic learning in terms of the Fisher information results in a Hebbian self-limiting learning rule for synaptic plasticity. In the present work, we study the dependence of the solutions to this rule on the moments of the input probability distribution and find a preference for non-Gaussian directions, making it a suitable candidate for independent component analysis (ICA). We confirm in a numerical experiment that a neuron trained under these rules is able to find the independent components in the non-linear bars problem. The specific form of the plasticity rule depends on the transfer function used, becoming a simple cubic polynomial of the membrane potential for the case of the rescaled error function. The cubic learning rule is also an excellent approximation for other transfer functions, such as the standard sigmoidal, and can be used to show analytically that the proposed plasticity rules are selective for directions in the space of presynaptic neural activities characterized by a negative excess kurtosis.
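Since the abstract notes that the rule reduces to a cubic polynomial of the membrane potential for the rescaled error function, here is a hedged illustration of what such a cubic Hebbian gain can look like; the coefficients a and b are arbitrary placeholders, not the published ones.

```python
import numpy as np

def cubic_rule(x, a=1.0, b=1.0):
    """Toy cubic update factor G(x): small |x| is potentiated,
    large |x| depressed, bounding the effective Hebbian gain."""
    return x * (a - b * x ** 2)

rng = np.random.default_rng(1)
x = rng.laplace(size=5)            # sample membrane potentials
print(cubic_rule(x))               # per-sample Hebbian gain
```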


Frontiers in Computational Neuroscience | 2016

Drifting States and Synchronization Induced Chaos in Autonomous Networks of Excitable Neurons

Rodrigo Echeveste; Claudius Gros

The study of balanced networks of excitatory and inhibitory neurons has led to several open questions. On the one hand, it is still unclear whether the asynchronous state observed in the brain is autonomously generated, or whether it results from the interplay between external drivings and internal dynamics. It is also not known which kinds of network variability lead to irregular spiking and which to synchronous firing states. Here we show how isolated networks of purely excitatory neurons generically show asynchronous firing whenever a minimal level of structural variability is present together with a refractory period. Our autonomous networks are composed of excitable units, in the form of leaky integrators that spike only in response to driving currents and remain quiet otherwise. For a non-uniform network, composed exclusively of excitatory neurons, we find a rich repertoire of self-induced dynamical states. We show in particular that asynchronous drifting states may be stabilized in purely excitatory networks whenever a refractory period is present. The other states found are either fully synchronized or mixed, containing both drifting and synchronized components. The individual neurons considered are excitable and hence do not possess intrinsic natural firing frequencies. An effective network-wide distribution of natural frequencies is, however, generated autonomously through self-consistent feedback loops. The asynchronous drifting state is, additionally, amenable to an analytic solution. We find two types of asynchronous activity, with the individual neurons spiking regularly in the pure drifting state, albeit with a continuous distribution of firing frequencies. The activity of the drifting component, however, becomes irregular in the mixed state, due to the periodic driving by the synchronized component. We propose a new tool for the study of chaos in spiking neural networks, consisting of an analysis of the time series of pairs of consecutive interspike intervals. In this space, we show that a strange attractor with a fractal dimension of about 1.8 forms in the mixed state mentioned above.
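The proposed diagnostic, a time series of pairs of consecutive interspike intervals, is easy to state in code. The sketch below builds the (isi_k, isi_{k+1}) point cloud from a synthetic spike train; the fractal-dimension estimate reported in the paper (about 1.8) would then be computed on such a cloud.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic spike times in ms (toy data, not a simulated network).
spike_times = np.cumsum(rng.exponential(scale=10.0, size=1000))

isi = np.diff(spike_times)                      # interspike intervals
pairs = np.column_stack([isi[:-1], isi[1:]])    # (isi_k, isi_{k+1}) points
print(pairs[:5])
```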


bioRxiv | 2018

Energetic substrate availability regulates synchronous activity in an excitatory neural network

David S. Tourigny; Muhammad Kaiser Abdul Karim; Rodrigo Echeveste; Mark R. N. Kotter; John S. O'Neill

Neural networks are required to meet significant metabolic demands associated with performing sophisticated computational tasks in the brain. The necessity for efficient transmission of information imposes stringent constraints on the metabolic pathways that can be used for energy generation at the synapse, and energetic substrate availability has been shown to regulate the efficacy of synaptic function. Here we take a combined experimental-computational approach to study the effects of energetic substrate availability on global neural network behavior and find that glucose alone can sustain the excitatory neurotransmission required to generate the high-frequency synchronous bursting that emerges in culture. In contrast, obligatory oxidative energetic substrates such as lactate and pyruvate are unable to substitute for glucose, indicating that glycolysis is the primary metabolic pathway underlying coordinated network activity. Our experimental results and computational modelling therefore support recent suggestions that glycolysis serves as the predominant source of ATP for synaptic vesicle recycling at presynaptic nerve terminals.

Significance Statement

The metabolic demands of neurons in the waking human brain are extremely high, accounting for almost a quarter of the body’s overall ATP turnover, and precise regulation of neuronal metabolism is therefore essential for energy-efficient encoding of information in the cortex. Using multi-electrode arrays we show that excitatory human cortical neuronal cultures, directly derived from ES cells, develop synchronised network behaviour as they mature over several weeks. We use this robust and reproducible platform to understand the metabolic underpinnings of neuronal networks. Exploiting a combined experimental-computational model of metabolic regulation, we provide evidence that glycolytic ATP production is required to sustain the high rates of synaptic vesicle turnover that are essential for coordinated network behaviour.
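As a purely hypothetical illustration of the kind of synchrony measure involved (not the paper's actual analysis pipeline), one can flag synchronous population bursts in multi-electrode data by thresholding the binned population firing rate:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy raster: 60 electrodes, spike times in seconds (random stand-in data).
spikes = [np.sort(rng.uniform(0, 300, size=rng.integers(200, 400)))
          for _ in range(60)]

bin_s, t_end = 0.05, 300.0
edges = np.arange(0.0, t_end + bin_s, bin_s)
# Population rate: summed spike counts across electrodes per time bin.
pop_rate = sum(np.histogram(s, bins=edges)[0] for s in spikes)

# Flag bins where the population rate far exceeds its baseline.
threshold = pop_rate.mean() + 3 * pop_rate.std()
burst_bins = np.flatnonzero(pop_rate > threshold)
print(f"{burst_bins.size} candidate burst bins above threshold")
```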


Trends in Neurosciences | 2018

The Redemption of Noise: Inference with Neural Populations

Rodrigo Echeveste; Máté Lengyel

In 2006, Ma et al. presented an elegant theory for how populations of neurons might represent uncertainty to perform Bayesian inference. Critically, according to this theory, neural variability is no longer a nuisance, but rather a vital part of how the brain encodes probability distributions and performs computations with them.
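The core computation in that theory fits in a few lines. Under independent Poisson variability with tuning curves f_i(s), the log posterior over a stimulus s is, up to a constant, sum_i r_i log f_i(s) - sum_i f_i(s), so spike counts linearly weight log tuning curves. The sketch below assumes toy Gaussian tuning curves and a flat prior.

```python
import numpy as np

s_grid = np.linspace(-10, 10, 201)              # candidate stimulus values
centers = np.linspace(-8, 8, 17)                # preferred stimuli
# Toy Gaussian tuning curves f_i(s) with a small baseline rate.
f = 5.0 * np.exp(-0.5 * (s_grid[None, :] - centers[:, None]) ** 2) + 0.1

rng = np.random.default_rng(4)
true_s = 2.0
rates = 5.0 * np.exp(-0.5 * (true_s - centers) ** 2) + 0.1
r = rng.poisson(rates)                          # observed spike counts

# Poisson log likelihood: sum_i [ r_i log f_i(s) - f_i(s) ] (flat prior).
log_post = r @ np.log(f) - f.sum(axis=0)
log_post -= log_post.max()
post = np.exp(log_post)
post /= post.sum()
print("posterior mean:", s_grid @ post)
```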


Scientific Reports | 2018

E-I balance emerges naturally from continuous Hebbian learning in autonomous neural networks

Philip Trapp; Rodrigo Echeveste; Claudius Gros

Spontaneous brain activity is characterized in part by a balanced asynchronous chaotic state. Cortical recordings show that the excitatory (E) and inhibitory (I) drivings in the E-I balanced state are substantially larger than the overall input. We show that such a state arises naturally in fully adapting networks which are deterministic, autonomously active, and not subject to stochastic external or internal drivings. Temporary imbalances between excitatory and inhibitory inputs lead to large but short-lived activity bursts that stabilize irregular dynamics. We simulate autonomous networks of rate-encoding neurons for which all synaptic weights are plastic and subject to a Hebbian plasticity rule, the flux rule, which can be derived from the stationarity principle of statistical learning. Moreover, the average firing rate is regulated individually via a standard homeostatic adaptation of the bias of each neuron’s non-linear input-output function. Additionally, networks with and without short-term plasticity are considered. E-I balance may arise only when the mean excitatory and inhibitory weights are themselves balanced, modulo the overall activity level. We show that synaptic weight balance, which has hitherto been considered as given, arises naturally in autonomous neural networks when the self-limiting Hebbian synaptic plasticity rule considered here is continuously active.
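A minimal sketch of the two ingredients described above: all-to-all plastic weights updated by a self-limiting Hebbian term, plus per-neuron homeostatic adaptation of the bias toward a target mean rate. The specific update used here is a generic stand-in, not the flux rule itself, and all parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
n, eta_w, eta_b, target = 50, 1e-3, 1e-2, 0.3
W = rng.normal(scale=0.5 / np.sqrt(n), size=(n, n))
np.fill_diagonal(W, 0.0)                       # no self-connections
b = np.zeros(n)                                # per-neuron biases
y = rng.uniform(size=n)                        # initial firing rates

for _ in range(5_000):
    x = W @ y - b                              # membrane potentials
    y = 1.0 / (1.0 + np.exp(-x))               # sigmoidal firing rates
    # Toy self-limiting Hebbian update on all weights.
    W += eta_w * np.outer(x * (1.0 - np.tanh(x) ** 2), y)
    np.fill_diagonal(W, 0.0)
    # Homeostatic bias adaptation: push each rate toward the target.
    b += eta_b * (y - target)

print("mean rate:", y.mean(), "(target", target, ")")
```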


BMC Neuroscience | 2015

Should Hebbian learning be selective for negative excess kurtosis?

Claudius Gros; Samuel Eckmann; Rodrigo Echeveste

Within the Hebbian learning paradigm, synaptic plasticity results in potentiation whenever pre- and postsynaptic activities are correlated, and in depression otherwise. This requirement is, however, not sufficient to determine the precise functional form of Hebbian learning, and a range of distinct formulations have been proposed hitherto. They differ, in particular, in the way runaway synaptic growth is avoided: either by imposing a hard upper bound on the synaptic strength, by overall synaptic scaling, or by additive synaptic decay [1]. Here we propose [2] a multiplicative Hebbian learning rule which is, at the same time, self-limiting and selective for negative excess kurtosis (for the case of symmetric input distributions). Hebbian learning results naturally in a principal component analysis (PCA), whenever one is present. Alternative formulations of the Hebbian learning paradigm differ, however, in other properties. Importantly, they may or may not perform an independent component analysis (ICA), whenever one is feasible. The ICA may be achieved by maximizing (minimizing) the excess kurtosis, whenever the latter is positive (negative) [3]. Noting that naturally occurring individual-cell lifetime firing rates are, however, characterized by both a large kurtosis and a large skewness [4], we investigate in this paper the effect of skewness and kurtosis on performing both PCA and ICA (see Figure 1) with several Hebbian learning rules. A particular emphasis is placed on the differences between additive and multiplicative schemes. We find that multiplicative Hebbian plasticity rules select both for small excess kurtosis and large skewness, allowing them to perform an ICA, in contrast to additive rules.

Figure 1: The learning rule's ability to perform an independent component analysis is tested with the non-linear bars problem. An input set consisting of a random number of horizontal and vertical bars is fed to the neuron which, after training, becomes selective ...
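The two statistics this comparison hinges on are easy to compute. The sketch below evaluates skewness and excess kurtosis (both zero for a Gaussian) on a symmetric bimodal sample, which has negative excess kurtosis, and on a lognormal sample, which has large positive skewness and kurtosis:

```python
import numpy as np

def skew_and_excess_kurtosis(x):
    """Standardized third and fourth moments; Gaussian -> (0, 0)."""
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean(), (z ** 4).mean() - 3.0

rng = np.random.default_rng(6)
bimodal = np.concatenate([rng.normal(-2, 0.5, 5000),
                          rng.normal(2, 0.5, 5000)])
heavy = rng.lognormal(mean=0.0, sigma=1.0, size=10000)

print("bimodal  :", skew_and_excess_kurtosis(bimodal))   # kurtosis < 0
print("lognormal:", skew_and_excess_kurtosis(heavy))     # skewed, kurtosis > 0
```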


BMC Neuroscience | 2015

A simple effective model for STDP: from spike pairs and triplets to rate-encoding plasticity

Rodrigo Echeveste; Claudius Gros

In the present work [1] we propose an effective model formulating synaptic potentiation and depression in terms of two interacting traces, representing the fraction of open NMDA receptors and the Ca2+ concentration in the post-synaptic neuron, respectively. These two traces then determine the evolution of the synaptic strength. We first confirm that the standard pairwise STDP curve is obtained for low-frequency trains of pairs of pre- and post-synaptic spikes, and we then evaluate triplet effects (see Figure 1), comparing the model's results to experimental data from hippocampal culture [2,3]. Finally, we evaluate the model's predictions for spike trains of different frequencies and degrees of correlation, observing that a BCM-like rule for plasticity as a function of the pre- and post-synaptic firing rates is recovered when employing uncorrelated Poisson trains of pre- and postsynaptic spikes. Having a low number of parameters and being composed only of polynomial differential equations, the model is nonetheless able to reproduce key features of LTP and LTD. Moreover, since the parameters of the model are easily related to the dynamical properties of the synapse, we believe the model constitutes a useful tool for studying extended neural networks from a dynamical systems point of view.

Figure 1: Model prediction and comparison to experimental results from hippocampal culture. A: The standard pairwise STDP curve is recovered by the model; blue lines indicate the model's results and red circles the experimental data [2]. B: Triplets, consisting ...
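For intuition on the BCM-like dependence mentioned above, the sketch below writes down a generic BCM-style gain (depression below a threshold rate, potentiation above it); this illustrative form is not the rule derived from the two-trace model.

```python
import numpy as np

def bcm_like(y_post, x_pre=1.0, theta=10.0):
    """BCM signature: weight change flips sign at threshold theta."""
    return x_pre * y_post * (y_post - theta)

rates = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # postsynaptic rates, Hz
print(bcm_like(rates))    # negative below theta, positive above
```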


arXiv: Neurons and Cognition | 2015

An objective function for self-limiting neural plasticity rules

Rodrigo Echeveste; Claudius Gros

Collaboration


Dive into Rodrigo Echeveste's collaborations.

Top Co-Authors

Claudius Gros
Goethe University Frankfurt

Samuel Eckmann
Goethe University Frankfurt

Laura Martin
Goethe University Frankfurt

Philip Trapp
Goethe University Frankfurt

Tim Jahn
Goethe University Frankfurt

David S. Tourigny
Laboratory of Molecular Biology

John S. O'Neill
Laboratory of Molecular Biology