Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Cristina Savin is active.

Publication


Featured research published by Cristina Savin.


PLOS Computational Biology | 2010

Independent Component Analysis in Spiking Neurons

Cristina Savin; Prashant Joshi; Jochen Triesch

Although models based on independent component analysis (ICA) have been successful in explaining various properties of sensory coding in the cortex, it remains unclear how networks of spiking neurons using realistic plasticity rules can realize such computation. Here, we propose a biologically plausible mechanism for ICA-like learning with spiking neurons. Our model combines spike-timing dependent plasticity and synaptic scaling with an intrinsic plasticity rule that regulates neuronal excitability to maximize information transmission. We show that a stochastically spiking neuron learns one independent component for inputs encoded either as rates or using spike-spike correlations. Furthermore, different independent components can be recovered when the activity of different neurons is decorrelated by adaptive lateral inhibition.
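The core mechanism can be caricatured in a rate-based form (a sketch only; the paper's model is a spiking network with STDP, synaptic scaling, and intrinsic plasticity). Below, the average of a cubic Hebbian update, with weight renormalization standing in for synaptic scaling, recovers a single independent component from whitened mixtures; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent, super-Gaussian (Laplacian) sources, linearly mixed.
n = 20000
S = rng.laplace(size=(n, 2)) / np.sqrt(2)     # unit-variance sources
A = np.array([[2.0, 1.0], [1.0, 2.0]])        # mixing matrix
X = S @ A.T

# Whiten the mixtures (zero mean, identity covariance).
X = X - X.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(X.T))
Xw = X @ eigvec * eigval ** -0.5

# Averaged cubic Hebbian rule with renormalization: w <- E[(w.x)^3 x],
# then rescale to unit norm (the stand-in for synaptic scaling).
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(100):
    y = Xw @ w
    w = Xw.T @ y ** 3 / n
    w /= np.linalg.norm(w)

# The output should align with exactly one of the independent sources.
y = Xw @ w
corrs = sorted(abs(np.corrcoef(y, S[:, i])[0, 1]) for i in range(2))
```

With super-Gaussian sources this iteration converges to a single source direction; which source is recovered depends on the initialization, which is why decorrelating different neurons (e.g., via lateral inhibition) is needed to extract several components.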


Journal of the Royal Society Interface | 2009

Epileptogenesis due to glia-mediated synaptic scaling

Cristina Savin; Jochen Triesch; Michael Meyer-Hermann

Homeostatic regulation of neuronal activity is fundamental for the stable functioning of the cerebral cortex. One form of homeostatic synaptic scaling has been recently shown to be mediated by glial cells that interact with neurons through the diffusible messenger tumour necrosis factor-α (TNF-α). Interestingly, TNF-α is also used by the immune system as a pro-inflammatory messenger, suggesting potential interactions between immune system signalling and the homeostatic regulation of neuronal activity. We present the first computational model of neuron–glia interaction in TNF-α-mediated synaptic scaling. The model shows how under normal conditions the homeostatic mechanism is effective in balancing network activity. After chronic immune activation or TNF-α overexpression by glia, however, the network develops seizure-like activity patterns. This may explain why under certain conditions brain inflammation increases the risk of seizures. Additionally, the model shows that TNF-α diffusion may be responsible for epileptogenesis after localized brain lesions.


PLOS Computational Biology | 2012

Feedforward inhibition and synaptic scaling – two sides of the same coin?

Christian Keck; Cristina Savin; Jörg Lücke

Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.
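The probabilistic picture can be sketched with a plain EM fit of such a normalized Poisson mixture (a didactic reduction, not the paper's neural circuit; the data and dimensions are made up). Because every component has the same total rate, the normalizing constants cancel in the posterior, and the M-step's renormalization plays the role of synaptic scaling:

```python
import numpy as np

rng = np.random.default_rng(1)

# Observations come from one of two Poisson components whose rate vectors
# are normalized to the same total rate A -- the input-normalization
# constraint of the model.
A = 20.0
W_true = np.array([[0.7, 0.2, 0.05, 0.05],
                   [0.05, 0.05, 0.2, 0.7]]) * A
n = 5000
z = rng.integers(0, 2, size=n)
X = rng.poisson(W_true[z])

# EM under the normalization constraint (uniform prior over components).
W = rng.random((2, 4)) + 0.5
W = W / W.sum(axis=1, keepdims=True) * A
for _ in range(50):
    # E-step: responsibilities; the total-rate and factorial terms cancel
    # because all components share the same total rate.
    log_r = X @ np.log(W + 1e-12).T
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: Hebbian-like weighted average of the inputs, then rescale
    # each component's total rate back to A ("synaptic scaling").
    W = r.T @ X / r.sum(axis=0)[:, None]
    W = W / W.sum(axis=1, keepdims=True) * A

# Learned components should match the ground truth up to permutation.
err = min(np.abs(W - W_true).max(), np.abs(W[::-1] - W_true).max())
```

The constrained M-step (weighted average followed by rescaling) is exactly the maximum-likelihood update when every component's total rate is pinned to A, which is the sense in which scaling plus Hebbian averaging approximates the optimal solution.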


PLOS Computational Biology | 2014

Optimal Recall from Bounded Metaplastic Synapses: Predicting Functional Adaptations in Hippocampal Area CA3

Cristina Savin; Peter Dayan; Máté Lengyel

A venerable history of classical work on autoassociative memory has significantly shaped our understanding of several features of the hippocampus, and most prominently of its CA3 area, in relation to memory storage and retrieval. However, existing theories of hippocampal memory processing ignore a key biological constraint affecting memory storage in neural circuits: the bounded dynamical range of synapses. Recent treatments based on the notion of metaplasticity provide a powerful model for individual bounded synapses; however, their implications for the ability of the hippocampus to retrieve memories well and the dynamics of neurons associated with that retrieval are both unknown. Here, we develop a theoretical framework for memory storage and recall with bounded synapses. We formulate the recall of a previously stored pattern from a noisy recall cue and limited-capacity (and therefore lossy) synapses as a probabilistic inference problem, and derive neural dynamics that implement approximate inference algorithms to solve this problem efficiently. In particular, for binary synapses with metaplastic states, we demonstrate for the first time that memories can be efficiently read out with biologically plausible network dynamics that are completely constrained by the synaptic plasticity rule, and the statistics of the stored patterns and of the recall cue. Our theory organises into a coherent framework a wide range of existing data about the regulation of excitability, feedback inhibition, and network oscillations in area CA3, and makes novel and directly testable predictions that can guide future experiments.
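A drastically simplified illustration of the recall-as-inference idea, using a Willshaw-style binary plasticity rule rather than the paper's metaplastic cascade (all sizes are arbitrary): when inactive units are noiseless, the optimal readout reduces to thresholding each unit's dendritic sum at the number of surviving cue units.

```python
import numpy as np

rng = np.random.default_rng(2)

N, M, k = 200, 15, 10   # neurons, stored patterns, active units per pattern

# Bounded two-state synapses, Willshaw-style: a synapse switches ON iff its
# pre- and postsynaptic units were coactive in some stored pattern.
patterns = np.zeros((M, N), dtype=int)
for m in range(M):
    patterns[m, rng.choice(N, size=k, replace=False)] = 1
J = (patterns.T @ patterns > 0).astype(int)
np.fill_diagonal(J, 0)

# Recall cue: the first stored pattern with half its active units deleted.
target = patterns[0]
cue = target.copy()
cue[rng.choice(np.flatnonzero(cue), size=k // 2, replace=False)] = 0

# MAP-style readout: a unit is inferred active iff it is supported by every
# surviving cue unit (counting a unit's own cue activity as support).
drive = J @ cue + cue
recalled = (drive >= cue.sum()).astype(int)
```

For this storage rule every true pattern unit is connected to every surviving cue unit, so thresholding at `cue.sum()` recovers the full pattern as long as the memory load is low enough that spurious coincidences stay below threshold.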


International Conference on Computer as a Tool (EUROCON) | 2007

A Hybrid Algorithm for Medical Diagnosis

Camelia Vidrighin Bratu; Cristina Savin; Rodica Potolea

Medical diagnosis and prognosis are emblematic examples of classification problems. Machine learning could provide invaluable support for automatically inferring diagnostic rules from descriptions of past cases, making the diagnosis process more objective and reliable. Since the problem involves both test and misclassification costs, we have analyzed ICET, the most prominent approach in the literature for complex cost problems. The hybrid algorithm tries to avoid the pitfalls of traditional greedy induction by performing a heuristic search in the space of possible decision trees through evolutionary mechanisms. Our implementation solves some of the problems of the initial ICET algorithm, showing it to be a viable solution for the problem considered.


Frontiers in Computational Neuroscience | 2015

Editorial: Emergent Neural Computation from the Interaction of Different Forms of Plasticity

Matthieu Gilson; Cristina Savin; Friedemann Zenke

More than 60 years after its formulation, Hebb's prophecy "neurons that fire together wire together" (Hebb, 1949; Shatz, 1992) prevails as one of the cornerstones of modern neuroscience. Nonetheless, it is becoming increasingly evident that there is more to neural plasticity than the strengthening of synapses between co-active neurons. Experiments have revealed a plethora of synaptic and cellular plasticity mechanisms acting simultaneously in neural circuits. How such diverse forms of plasticity collectively give rise to neural computation remains poorly understood. The present Research Topic approaches this question by bringing together recent advances in the modeling of different forms of synaptic and neuronal plasticity. Taken together, these studies argue that the concerted interaction of diverse forms of plasticity is critical for circuit formation and function.

A first insight from this Research Topic underscores the importance of the time scale of homeostatic plasticity for avoiding runaway Hebbian dynamics. While known homeostatic processes act slowly, on the timescale of hours to days, existing theoretical models invariably use fast homeostasis. Yger and Gilson (2015) review a body of theoretical work arguing that rapid forms of homeostatic control are in fact critical for stable learning and thus should also exist in biological circuits. Following a similar line of thought, Chistiakova et al. (2015) review experimental and theoretical literature suggesting that the role of rapid homeostasis could be filled by heterosynaptic plasticity. Alternatively, other mechanisms can achieve a similar stabilizing effect, as long as they are fast, for instance the rapid homeostatic sliding threshold in Guise et al. (2015). These findings raise questions concerning the purpose of slow homeostasis and metaplasticity. Since non-modulated plasticity leads to "interference" between memories when confronted with rich environmental stimuli (Chrol-Cannon and Jin, 2015), it is tempting to hypothesize that certain slow homeostatic mechanisms may correct for this (Yger and Gilson, 2015).

The second development reflected in this Research Topic concerns the interactions between excitatory and inhibitory (E/I) plasticity. Multiple studies independently stress the importance of such interactions for shaping circuit selectivity and decorrelating network activity during learning. Kleberg et al. (2014) demonstrate how spike-timing-dependent plasticity at excitatory (eSTDP) and inhibitory (iSTDP) synapses drives the formation of selective signaling pathways in feed-forward networks. Together they ensure excitatory-inhibitory balance and sharpen neuronal responses to salient inputs. Moreover, by systematically exploring different iSTDP windows, the authors show that anti-symmetric plasticity, in which pre-post spike pairs lead to potentiation of an inhibitory synapse, is most efficient at establishing pathway-specific balance. Zheng and Triesch (2014) confirm the relevance of e/iSTDP for propagating information in a recurrent network. Their model also highlights the importance of other forms of plasticity, in particular intrinsic and structural plasticity, for robust synfire-chain learning. Beyond information propagation, Duarte and Morrison (2014) show that E/I plasticity allows recurrent neural networks to form internal representations of the external world and to perform non-linear computations with them. They find that the decorrelating action of inhibitory plasticity pushes the network away from states with poor discriminability. These results are corroborated by Srinivasa and Cho (2014), who show that such representations can be efficiently picked up by downstream layers: networks shaped by both e- and iSTDP learn to discriminate between neural activity patterns in a self-organized fashion, whereas networks with only one form of plasticity perform worse. Binas et al. (2014) show that the interplay of E/I plasticity in recurrent neural networks can form robust winner-take-all (WTA) circuits, important for solving a range of behaviorally relevant tasks (e.g., categorization or decision making). Using a novel mean-field theory for network dynamics and plasticity, the authors characterize parameter regions in which stable WTA circuits emerge autonomously through the interaction of E/I plasticity.

While most work presented here focuses on long-term plasticity, Esposito et al. (2015) study the interactions between Hebbian and short-term plasticity (STP) at excitatory synapses. The authors postulate a form of metaplasticity that adjusts the properties of STP to minimize circuit error. This model provides a normative interpretation for experimentally observed variability in STP properties across neural circuits and its close link to network connectivity motifs. While detailed error computation as assumed here is biologically implausible, reward-related information could be provided by neuromodulators (in particular, dopamine), which are known to regulate circuit dynamics and plasticity. The functional importance of neuromodulation is explored in two papers. First, Aswolinskiy and Pipa (2015) systematically compare reward-dependent vs. supervised and unsupervised learning across a broad range of tasks. They find that, when combined with suitable homeostatic plasticity mechanisms, reward-dependent synaptic plasticity can yield performance similar to abstract supervised learning. Second, Savin and Triesch (2014) use a similar circuit model to study how reward-dependent learning shapes random recurrent networks into working memory circuits. They show that the interaction between dopamine-modulated STDP and homeostatic plasticity is sufficient to explain a broad range of experimental findings regarding the coding properties of neurons in prefrontal circuits. More generally, these results reinforce the idea that reward-dependent learning is critical for shifting limited neural resources toward the computations that matter most in terms of behavioral outcomes.

Taken together, the contributions to this Research Topic suggest that circuit-level function emerges from the complex but well-orchestrated interplay of different forms of neural plasticity. To learn how neuronal circuits self-organize and how computation emerges in the brain, it is therefore vital to focus on interacting forms of plasticity. This sets the scene for exciting future research in both theoretical and experimental neuroscience.


Frontiers in Computational Neuroscience | 2014

Emergence of task-dependent representations in working memory circuits

Cristina Savin; Jochen Triesch

A wealth of experimental evidence suggests that working memory circuits preferentially represent information that is behaviorally relevant. Still, we are missing a mechanistic account of how these representations come about. Here we provide a simple explanation for a range of experimental findings, in light of prefrontal circuits adapting to task constraints by reward-dependent learning. In particular, we model a neural network shaped by reward-modulated spike-timing dependent plasticity (r-STDP) and homeostatic plasticity (intrinsic excitability and synaptic scaling). We show that the experimentally-observed neural representations naturally emerge in an initially unstructured circuit as it learns to solve several working memory tasks. These results point to a critical, and previously unappreciated, role for reward-dependent learning in shaping prefrontal cortex activity.


Current Opinion in Neurobiology | 2017

Maximum entropy models as a tool for building precise neural controls

Cristina Savin; Gašper Tkačik

Neural responses are highly structured, with population activity restricted to a small subset of the astronomical range of possible activity patterns. Characterizing these statistical regularities is important for understanding circuit computation, but challenging in practice. Here we review recent approaches based on the maximum entropy principle used for quantifying collective behavior in neural activity. We highlight recent models that capture population-level statistics of neural data, yielding insights into the organization of the neural code and its biological substrate. Furthermore, the MaxEnt framework provides a general recipe for constructing surrogate ensembles that preserve aspects of the data, but are otherwise maximally unstructured. This idea can be used to generate a hierarchy of controls against which rigorous statistical tests are possible.
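The control-ensemble idea is easy to illustrate at first order (a toy sketch with made-up numbers): the maximum-entropy distribution constrained only by single-neuron firing rates is independent Bernoulli spiking, which preserves the rates while destroying all correlations, yielding a null model against which the observed pairwise structure can be tested.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "population recording": three binary neurons correlated through a
# shared latent input.
T = 20000
shared = rng.random(T) < 0.2
X = np.array([(rng.random(T) < 0.1) | shared for _ in range(3)]).T.astype(int)

# First-order MaxEnt surrogate: independent Bernoulli spiking with matched
# single-neuron firing rates.
rates = X.mean(axis=0)
surrogate = (rng.random(X.shape) < rates).astype(int)

def pairwise_corr(Y):
    """Upper-triangular pairwise correlation coefficients."""
    C = np.corrcoef(Y.T)
    return C[np.triu_indices_from(C, k=1)]

obs = pairwise_corr(X)            # strongly positive (shared input)
null = pairwise_corr(surrogate)   # near zero by construction
```

Higher-order controls in the hierarchy (e.g., pairwise models) are fit the same way in spirit: constrain more statistics of the data, keep everything else maximally unstructured, and compare.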


Scientific Reports | 2018

Optimal neural inference of stimulus intensities

Travis Monk; Cristina Savin; Jörg Lücke

In natural data, the class and intensity of stimuli are correlated. Current machine learning algorithms ignore this ubiquitous statistical property of stimuli, usually by requiring normalized inputs. From a biological perspective, it remains unclear how neural circuits may account for these dependencies in inference and learning. Here, we use a probabilistic framework to model class-specific intensity variations, and we derive approximate inference and online learning rules which reflect common hallmarks of neural computation. Concretely, we show that a neural circuit equipped with specific forms of synaptic and intrinsic plasticity (IP) can learn the class-specific features and intensities of stimuli simultaneously. Our model provides a normative interpretation of IP as a critical part of sensory learning and predicts that neurons can represent nontrivial input statistics in their excitabilities. Computationally, our approach yields improved statistical representations for realistic datasets in the visual and auditory domains. In particular, we demonstrate the utility of the model in estimating the contrastive stress of speech.
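The joint inference of class and intensity can be sketched in a linear least-squares toy version (a simplification; the paper derives neural inference and learning rules in a probabilistic framework, and all features and numbers here are invented): for each class, the best-fitting intensity is the projection of the input onto the class feature, and the class is picked by residual error.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two classes with fixed, normalized feature vectors; each stimulus is a
# class feature scaled by a random intensity, plus a little noise.
F = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
F = F / F.sum(axis=1, keepdims=True)
n = 1000
c = rng.integers(0, 2, size=n)
z = rng.gamma(5.0, 2.0, size=n)           # class-independent intensities
X = z[:, None] * F[c] + 0.02 * rng.normal(size=(n, 4))

def infer(x):
    """Jointly infer (class, intensity) for one stimulus."""
    z_hat = F @ x / (F * F).sum(axis=1)   # per-class best-fit intensity
    resid = ((x - z_hat[:, None] * F) ** 2).sum(axis=1)
    k = int(np.argmin(resid))
    return k, z_hat[k]

preds, z_hats = zip(*(infer(x) for x in X))
acc = float(np.mean(np.array(preds) == c))
```

The point of the sketch is that class and intensity must be inferred jointly: the class decision uses the intensity estimate, rather than requiring the inputs to be normalized beforehand.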


Nature Precedings | 2011

Optimal storage and recall with biologically plausible synapses

Cristina Savin; Máté Lengyel

Synaptic plasticity is widely accepted to underlie learning and memory. Yet, models of associative networks with biologically plausible synapses fail to match brain performance: memories stored in such networks are quickly overwritten by ongoing plasticity (Amit & Fusi 1996, Fusi et al 2007). Metaplasticity – the process by which neural activity changes the ability of synapses to exhibit further plasticity – is believed to increase memory capacity (Fusi et al 2005). However, it remains unclear if neurons can make use of this additional information during recall. In particular, previous attempts at reading out information in metaplastic synapses using heuristic recall dynamics led to rather poor performance (Huang & Amit 2010). Here, we developed a theoretical framework for storage and recall with finite-state synapses that allowed us to find neural and synaptic dynamics that maximize the efficiency of autoassociative recall. Since information storage by synaptic plasticity is lossy, we formulated the problem of recalling a previously stored pattern from a noisy cue as probabilistic inference (Lengyel et al 2005) and derived neural dynamics efficiently implementing such inferences. Our approach is general and can be applied to any synaptic plasticity model which involves stochastic transitions between a finite set of states. We show how synaptic plasticity rules need to be matched to the statistics of stored patterns, and how recall dynamics need to be matched both to input statistics and to the plasticity rule itself in order to achieve optimal performance. In particular, for binary synapses with metaplastic states we demonstrate for the first time that memories can be efficiently read out with biologically plausible network dynamics that we derive directly from the synaptic metaplasticity rule with virtually no free parameters.

Collaboration


Dive into Cristina Savin's collaborations.

Top Co-Authors

Jochen Triesch

Frankfurt Institute for Advanced Studies


Peter Dayan

University College London


Jörg Lücke

University of Oldenburg


Iosif Ignat

Technical University of Cluj-Napoca


József Fiser

Central European University
