Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Friedemann Zenke is active.

Publications


Featured research published by Friedemann Zenke.


Nature Communications | 2015

Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks

Friedemann Zenke; Everton J. Agnes; Wulfram Gerstner

Synaptic plasticity, the putative basis of learning and memory formation, manifests in various forms and across different timescales. Here we show that the interaction of Hebbian homosynaptic plasticity with rapid non-Hebbian heterosynaptic plasticity is, when complemented with slower homeostatic changes and consolidation, sufficient for assembly formation and memory recall in a spiking recurrent network model of excitatory and inhibitory neurons. In the model, assemblies were formed during repeated sensory stimulation and characterized by strong recurrent excitatory connections. Even days after formation, and despite ongoing network activity and synaptic plasticity, memories could be recalled through selective delay activity following the brief stimulation of a subset of assembly neurons. Blocking any component of plasticity prevented stable functioning as a memory network. Our modelling results suggest that the diversity of plasticity phenomena in the brain is orchestrated towards achieving common functional goals.
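
How these mechanisms could act together can be illustrated with a toy weight update (a minimal rate-based sketch, not the paper's spiking implementation; the learning rates, the target rate kappa, and the quartic burst detector are illustrative assumptions):

```python
import numpy as np

# Minimal rate-based sketch: fast Hebbian plasticity, fast heterosynaptic
# depression, and slow homeostasis combined in one weight update.
# All parameters are illustrative assumptions.

rng = np.random.default_rng(0)
n = 100
w = rng.uniform(0.0, 0.1, (n, n))               # recurrent excitatory weights
kappa = 5.0                                     # assumed homeostatic target rate (Hz)
eta_hebb, eta_het, eta_home = 1e-2, 1e-4, 1e-6  # fast, fast, slow learning rates

def plasticity_step(w, pre, post, dt=1.0):
    hebb = np.outer(post, pre)          # Hebbian co-activity term (unstable alone)
    het = -(post**4)[:, None] * w       # heterosynaptic depression, engaged by high postsynaptic activity (fast, stabilizing)
    home = np.outer(kappa - post, pre)  # slow homeostatic drift toward the target rate
    w = w + dt * (eta_hebb * hebb + eta_het * het + eta_home * home)
    return np.clip(w, 0.0, 1.0)

rates = rng.uniform(0.0, 10.0, n)       # toy stationary firing rates
w = plasticity_step(w, rates, rates)
```

The point of the sketch is the separation of roles: the Hebbian term alone is destabilizing, the supralinear heterosynaptic term reacts on the same fast timescale, and homeostasis only nudges rates on a much slower one.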


Philosophical Transactions of the Royal Society B | 2017

Hebbian plasticity requires compensatory processes on multiple timescales

Friedemann Zenke; Wulfram Gerstner

We review a body of theoretical and experimental research on Hebbian and homeostatic plasticity, starting from a puzzling observation: while homeostasis of synapses found in experiments is a slow compensatory process, most mathematical models of synaptic plasticity use rapid compensatory processes (RCPs). Even worse, with the slow homeostatic plasticity reported in experiments, simulations of existing plasticity models cannot maintain network stability unless further control mechanisms are implemented. To solve this paradox, we suggest that in addition to slow forms of homeostatic plasticity there are RCPs which stabilize synaptic plasticity on short timescales. These rapid processes may include heterosynaptic depression triggered by episodes of high postsynaptic firing rate. While slower forms of homeostatic plasticity are not sufficient to stabilize Hebbian plasticity, they are important for fine-tuning neural circuits. Taken together we suggest that learning and memory rely on an intricate interplay of diverse plasticity mechanisms on different timescales which jointly ensure stability and plasticity of neural circuits. This article is part of the themed issue ‘Integrating Hebbian and homeostatic plasticity’.
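
The central claim, that slow homeostasis cannot stabilize fast Hebbian plasticity while a rapid compensatory process can, can be reproduced in a few lines with a BCM-like toy model (a minimal sketch; the rates, time constants, and initial perturbation are illustrative assumptions):

```python
import numpy as np

def simulate(tau_theta, t_max=2000.0, dt=0.1):
    """BCM-like toy model: Hebbian growth gated by a sliding threshold theta.

    tau_theta sets how quickly the compensatory threshold tracks activity.
    Returns the weight trajectory; all parameters are illustrative.
    """
    pre = 1.0                  # constant presynaptic rate
    w, theta = 1.2, 1.0        # weight starts slightly above the fixed point
    ws = []
    for _ in range(int(t_max / dt)):
        post = w * pre                                # linear rate neuron
        w += dt * 0.01 * pre * post * (post - theta)  # Hebbian term, destabilizing when post > theta
        theta += dt * (post**2 - theta) / tau_theta   # compensatory sliding threshold
        w = min(max(w, 0.0), 1e6)                     # cap to keep the runaway finite
        ws.append(w)
    return np.array(ws)

print(simulate(tau_theta=1.0)[-1])    # rapid compensation: settles near 1
print(simulate(tau_theta=500.0)[-1])  # slow homeostasis: runaway to the cap
```

A short stability argument explains the output: around the fixed point the threshold must track activity faster than the Hebbian growth rate, which is exactly the timescale requirement the review argues for.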


Current Opinion in Neurobiology | 2017

The temporal paradox of Hebbian learning and homeostatic plasticity

Friedemann Zenke; Wulfram Gerstner; Surya Ganguli

Hebbian plasticity, a synaptic mechanism which detects and amplifies co-activity between neurons, is considered a key ingredient underlying learning and memory in the brain. However, Hebbian plasticity alone is unstable, leading to runaway neuronal activity, and therefore requires stabilization by additional compensatory processes. Traditionally, a diversity of homeostatic plasticity phenomena found in neural circuits is thought to play this role. However, recent modelling work suggests that the slow evolution of homeostatic plasticity, as observed in experiments, is insufficient to prevent instabilities originating from Hebbian plasticity. To remedy this situation, we suggest that homeostatic plasticity is complemented by additional rapid compensatory processes, which rapidly stabilize neuronal activity on short timescales.
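
The instability at the heart of the paradox can be made explicit in a minimal rate model (an illustrative back-of-the-envelope derivation, not text from the paper):

```latex
% A linear rate neuron, \nu = w \, \nu_{\mathrm{pre}}, with a plain Hebbian rule:
\tau_w \frac{dw}{dt} = \nu_{\mathrm{pre}} \, \nu = \nu_{\mathrm{pre}}^{2} \, w
\quad\Longrightarrow\quad
w(t) = w(0)\, e^{t/\tau_{\mathrm{run}}},
\qquad
\tau_{\mathrm{run}} = \frac{\tau_w}{\nu_{\mathrm{pre}}^{2}}.
% Weights grow exponentially on the runaway timescale \tau_run. Any
% compensatory process must therefore act on a timescale comparable to or
% faster than \tau_run; homeostatic mechanisms operating over hours to days
% cannot catch an instability that unfolds in seconds to minutes.
```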


Neural Computation | 2018

SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks

Friedemann Zenke; Surya Ganguli

A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
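
The core of the learning rule is a three-factor product of an output error, a surrogate standing in for the undefined derivative of the spike train, and a filtered presynaptic trace. Below is a minimal sketch under stated assumptions (a single layer, a fast-sigmoid surrogate, and a single low-pass filter in place of the paper's double-exponential eligibility kernel; all names and constants are illustrative):

```python
import numpy as np

def surrogate_grad(u, beta=10.0):
    """Fast-sigmoid surrogate for the derivative of the spiking nonlinearity."""
    return 1.0 / (1.0 + beta * np.abs(u))**2

def superspike_update(w, pre_spikes, u, err, dt=1e-3,
                      tau_pre=5e-3, tau_elig=10e-3, lr=1e-3):
    """One pass of a SuperSpike-style three-factor update for a single layer.

    pre_spikes: (T, n_pre) binary spike trains
    u:          (T, n_post) membrane potentials relative to threshold
    err:        (T, n_post) output error signal (e.g., filtered target minus
                actual spike trains); w: (n_post, n_pre) weights
    """
    n_post, n_pre = w.shape
    pre_trace = np.zeros(n_pre)        # low-pass filtered presynaptic activity
    elig = np.zeros((n_post, n_pre))   # slowly filtered eligibility trace
    dw = np.zeros_like(w)
    for t in range(len(pre_spikes)):
        pre_trace += dt / tau_pre * (-pre_trace)
        pre_trace += pre_spikes[t]
        hebb = np.outer(surrogate_grad(u[t]), pre_trace)  # surrogate post-factor x pre-trace
        elig += dt / tau_elig * (hebb - elig)
        dw += dt * lr * err[t][:, None] * elig            # third factor: the error signal
    return w + dw
```

Because the surrogate replaces the spike train's ill-defined derivative only inside the gradient, the forward dynamics remain those of deterministic integrate-and-fire neurons.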


Frontiers in Computational Neuroscience | 2015

Editorial: Emergent Neural Computation from the Interaction of Different Forms of Plasticity

Matthieu Gilson; Cristina Savin; Friedemann Zenke

More than 60 years after its formulation, Hebb's prophecy "neurons that fire together wire together" (Hebb, 1949; Shatz, 1992) prevails as one of the cornerstones of modern neuroscience. Nonetheless, it is becoming increasingly evident that there is more to neural plasticity than the strengthening of synapses between co-active neurons. Experiments have revealed a plethora of synaptic and cellular plasticity mechanisms acting simultaneously in neural circuits. How such diverse forms of plasticity collectively give rise to neural computation remains poorly understood. The present Research Topic approaches this question by bringing together recent advances in the modeling of different forms of synaptic and neuronal plasticity. Taken together, these studies argue that the concerted interaction of diverse forms of plasticity is critical for circuit formation and function.

A first insight from this Research Topic underscores the importance of the timescale of homeostatic plasticity in avoiding runaway dynamics of Hebbian plasticity. While known homeostatic processes act slowly, on the timescale of hours to days, existing theoretical models invariably use fast homeostasis. Yger and Gilson (2015) review a body of theoretical work arguing that rapid forms of homeostatic control are in fact critical for stable learning and thus should also exist in biological circuits. Following a similar line of thought, Chistiakova et al. (2015) review experimental and theoretical literature suggesting that the role of rapid homeostasis could be filled by heterosynaptic plasticity. Alternatively, other mechanisms can achieve a similar stabilizing effect, as long as they are fast, for instance the rapid homeostatic sliding threshold in Guise et al. (2015). These findings raise questions concerning the purpose of slow homeostasis and metaplasticity. Since non-modulated plasticity leads to "interference" between memories when confronted with rich environmental stimuli (Chrol-Cannon and Jin, 2015), it is tempting to hypothesize that certain slow homeostatic mechanisms may correct for this (Yger and Gilson, 2015).

The second development reflected in this Research Topic concerns the interactions between excitatory and inhibitory (E/I) plasticity. Multiple studies independently stress the importance of such interactions for shaping circuit selectivity and decorrelating network activity during learning. Kleberg et al. (2014) demonstrate how spike-timing-dependent plasticity at excitatory (eSTDP) and inhibitory (iSTDP) synapses drives the formation of selective signaling pathways in feed-forward networks. Together they ensure excitatory-inhibitory balance and sharpen neuronal responses to salient inputs. Moreover, by systematically exploring different iSTDP windows, the authors show that anti-symmetric plasticity, in which pre-post spike pairs lead to potentiation of an inhibitory synapse, is most efficient at establishing pathway-specific balance. Zheng and Triesch (2014) confirm the relevance of e/iSTDP for propagating information in a recurrent network. Their model also highlights the importance of other forms of plasticity, in particular intrinsic plasticity and structural plasticity, for robust synfire-chain learning. Beyond information propagation, Duarte and Morrison (2014) show that E/I plasticity allows recurrent neural networks to form internal representations of the external world and to perform non-linear computations with them. They find that the decorrelating action of inhibitory plasticity pushes the network away from states with poor discriminability. These results are corroborated by Srinivasa and Cho (2014), who show that such representations can be efficiently picked up by downstream layers. Networks shaped by both e- and iSTDP learn to discriminate between neural activity patterns in a self-organized fashion, whereas networks with only one form of plasticity perform worse. Binas et al. (2014) show that the interplay of E/I plasticity in recurrent neural networks can form robust winner-take-all (WTA) circuits, important for solving a range of behaviorally relevant tasks (e.g., categorization or decision making). Using a novel mean-field theory for network dynamics and plasticity, the authors characterize parameter regions in which stable WTA circuits emerge autonomously through the interaction of E/I plasticity.

While most work presented here focuses on long-term plasticity, Esposito et al. (2015) study the interactions between Hebbian and short-term plasticity (STP) at excitatory synapses. The authors postulate a form of metaplasticity that adjusts the properties of STP to minimize circuit error. This model provides a normative interpretation for the experimentally observed variability in STP properties across neural circuits and its close link to network connectivity motifs. While detailed error computation as assumed here is biologically implausible, reward-related information could be provided by neuromodulators (in particular, dopamine), which are known to regulate circuit dynamics and plasticity. The functional importance of neuromodulation is explored in two papers. First, Aswolinskiy and Pipa (2015) systematically compare reward-dependent vs. supervised and unsupervised learning across a broad range of tasks. They find that, when combined with suitable homeostatic plasticity mechanisms, reward-dependent synaptic plasticity can yield performance similar to abstract supervised learning. Second, Savin and Triesch (2014) use a similar circuit model to study how reward-dependent learning shapes random recurrent networks into working memory circuits. They show that the interaction between dopamine-modulated STDP and homeostatic plasticity is sufficient to explain a broad range of experimental findings regarding the coding properties of neurons in prefrontal circuits. More generally, these results reinforce the idea that reward-dependent learning is critical for shifting limited neural resources toward the computations that matter most in terms of behavioral outcomes.

Taken together, the contributions to this Research Topic suggest that circuit-level function emerges from the complex but well-orchestrated interplay of different forms of neural plasticity. To learn how neuronal circuits self-organize and how computation emerges in the brain, it is therefore vital to focus on interacting forms of plasticity. This sets the scene for exciting future research in both theoretical and experimental neuroscience.


The Journal of Physiology | 2018

Specific synaptic input strengths determine the computational properties of excitation–inhibition integration in a sound localization circuit

Enida Gjoni; Friedemann Zenke; Brice Bouhours; Ralf Schneggenburger

During the computation of sound localization, neurons of the lateral superior olive (LSO) integrate synaptic excitation arising from the ipsilateral ear with inhibition from the contralateral ear. We characterized the functional connectivity of the inhibitory and excitatory inputs onto LSO neurons in terms of unitary synaptic strength and convergence. Unitary IPSCs can generate large conductances, although their strength varies over a 10‐fold range in a given recording. By contrast, excitatory inputs are relatively weak. The conductance associated with IPSPs needs to be at least 2‐fold stronger than the excitatory one to guarantee effective inhibition of action potential (AP) firing. Computational modelling showed that strong unitary inhibition ensures an appropriate slope and midpoint of the tuning curve of LSO neurons. Conversely, weak but numerous excitatory inputs filter out spontaneous AP firing from upstream auditory neurons.
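
The "at least 2-fold" requirement can be illustrated with a toy conductance-based integrate-and-fire neuron driven by a step of simultaneous excitation and inhibition (a minimal sketch with assumed, round-number parameters, not the paper's fitted LSO model):

```python
import numpy as np

def count_spikes(g_exc, g_inh, t_stim=0.02, dt=1e-5):
    """Spikes fired during a 20 ms step of joint E and I conductance."""
    E_L, E_E, E_I = -65e-3, 0.0, -75e-3    # leak, excitatory, inhibitory reversals (V)
    g_L, C, V_th = 10e-9, 200e-12, -50e-3  # leak conductance, capacitance, threshold
    v, n_spikes = E_L, 0
    for _ in range(int(t_stim / dt)):
        I = -g_L * (v - E_L) - g_exc * (v - E_E) - g_inh * (v - E_I)
        v += dt * I / C
        if v >= V_th:
            n_spikes += 1
            v = E_L                        # instantaneous reset after a spike
    return n_spikes

g_exc = 15e-9                              # assumed excitatory conductance (15 nS)
for ratio in (0.0, 1.0, 2.0, 4.0):
    print(f"g_inh/g_exc = {ratio}: {count_spikes(g_exc, ratio * g_exc)} spikes")
```

With these toy numbers the steady-state voltage falls below threshold once the inhibitory conductance reaches roughly twice the excitatory one, so firing is vetoed for ratios of 2 and above; because the inhibitory reversal sits close to rest, merely matching conductances is not enough.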


International Conference on Machine Learning | 2017

Continual Learning Through Synaptic Intelligence

Friedemann Zenke; Ben Poole; Surya Ganguli
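
No abstract is reproduced here, but the approach named in the title is well known: each synapse accumulates an online estimate of its importance to past tasks, and a quadratic penalty then anchors important parameters while later tasks are learned. A minimal sketch of that bookkeeping follows (class and variable names, and the constants c and xi, are illustrative assumptions):

```python
import numpy as np

class SynapticIntelligence:
    """Per-parameter importance tracking for continual learning (sketch)."""

    def __init__(self, n_params, c=0.1, xi=1e-3):
        self.c, self.xi = c, xi
        self.omega = np.zeros(n_params)      # running path integral for the current task
        self.Omega = np.zeros(n_params)      # consolidated importance across tasks
        self.theta_ref = np.zeros(n_params)  # parameters at the last task boundary

    def accumulate(self, grad, dtheta):
        """Call every optimizer step: credit each weight with -grad * its update."""
        self.omega += -grad * dtheta

    def penalty_grad(self, theta):
        """Gradient of the quadratic surrogate loss anchoring important weights."""
        return 2.0 * self.c * self.Omega * (theta - self.theta_ref)

    def consolidate(self, theta):
        """Call at a task boundary: fold omega into the running importance."""
        delta = theta - self.theta_ref
        self.Omega += self.omega / (delta**2 + self.xi)
        self.omega[:] = 0.0
        self.theta_ref = theta.copy()
```

In training, `accumulate` runs alongside ordinary gradient descent and `penalty_grad` is added to the task gradient, so consolidation costs only one extra vector of bookkeeping per parameter.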


Archive | 2017

Improved multitask learning through synaptic intelligence.

Friedemann Zenke; Ben Poole; Surya Ganguli


Archive | 2016

Emergent Neural Computation from the Interaction of Different Forms of Plasticity

Cristina Savin; Matthieu Gilson; Friedemann Zenke



Collaboration


Dive into Friedemann Zenke's collaboration.

Top Co-Authors

Wulfram Gerstner

École Polytechnique Fédérale de Lausanne

Everton J. Agnes

Universidade Federal do Rio Grande do Sul

Brice Bouhours

École Polytechnique Fédérale de Lausanne

Enida Gjoni

École Polytechnique Fédérale de Lausanne

Ralf Schneggenburger

École Polytechnique Fédérale de Lausanne
