Jean-Pascal Pfister
University of Zurich
Publications
Featured research published by Jean-Pascal Pfister.
The Journal of Neuroscience | 2006
Jean-Pascal Pfister; Wulfram Gerstner
Classical experiments on spike timing-dependent plasticity (STDP) use a protocol based on pairs of presynaptic and postsynaptic spikes repeated at a given frequency to induce synaptic potentiation or depression. Therefore, standard STDP models have expressed the weight change as a function of pairs of presynaptic and postsynaptic spikes. Unfortunately, such pair-based STDP models cannot account for the dependence on the repetition frequency of the spike pairs. Moreover, these STDP models cannot reproduce recent triplet and quadruplet experiments. Here, we examine a triplet rule (i.e., a rule that considers sets of three spikes: two pre and one post, or one pre and two post) and compare it to classical pair-based STDP learning rules. With such a triplet rule, it is possible to fit experimental data from visual cortical slices as well as from hippocampal cultures. Moreover, when assuming stochastic spike trains, the triplet learning rule can be mapped to a Bienenstock–Cooper–Munro learning rule.
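The triplet rule described above keeps two presynaptic traces and two postsynaptic traces and updates the weight at every spike. A minimal time-stepped sketch is given below; the variable names and parameter values are illustrative placeholders, not the fitted values reported in the paper.

```python
# Minimal sketch of the pair + triplet STDP update (after Pfister & Gerstner, 2006).
# Parameter values are placeholders, not the fitted values from the paper.
import numpy as np

def triplet_stdp(pre_spikes, post_spikes, w0=0.5, dt=1e-4, T=1.0,
                 tau_plus=16.8e-3, tau_minus=33.7e-3, tau_x=101e-3, tau_y=125e-3,
                 A2p=5e-3, A2m=7e-3, A3p=6e-3, A3m=2e-4):
    """pre_spikes, post_spikes: lists of spike times (s). Returns the final weight."""
    r1 = r2 = o1 = o2 = 0.0            # pre traces (r1, r2) and post traces (o1, o2)
    w = w0
    pre = set(np.round(np.asarray(pre_spikes) / dt).astype(int).tolist())
    post = set(np.round(np.asarray(post_spikes) / dt).astype(int).tolist())
    for step in range(int(T / dt)):
        # exponential decay of all traces
        r1 -= dt * r1 / tau_plus;  r2 -= dt * r2 / tau_x
        o1 -= dt * o1 / tau_minus; o2 -= dt * o2 / tau_y
        if step in pre:                # presynaptic spike: pair + triplet depression
            w -= o1 * (A2m + A3m * r2)
            r1 += 1.0; r2 += 1.0
        if step in post:               # postsynaptic spike: pair + triplet potentiation
            w += r1 * (A2p + A3p * o2)
            o1 += 1.0; o2 += 1.0
    return w
```

The triplet terms use the slow traces r2 and o2 evaluated just before the current spike increments them, which is why the trace updates follow the weight update. In the paper's minimal variant some of the four amplitudes are set to zero; the sketch keeps all of them for generality.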
Neural Computation | 2006
Jean-Pascal Pfister; Taro Toyoizumi; David Barber; Wulfram Gerstner
In timing-based neural codes, neurons have to emit action potentials at precise moments in time. We use a supervised learning paradigm to derive a synaptic update rule that optimizes by gradient ascent the likelihood of postsynaptic firing at one or several desired firing times. We find that the optimal strategy of up- and downregulating synaptic efficacies depends on the relative timing between presynaptic spike arrival and desired postsynaptic firing. If the presynaptic spike arrives before the desired postsynaptic spike timing, our optimal learning rule predicts that the synapse should become potentiated. The dependence of the potentiation on spike timing directly reflects the time course of an excitatory postsynaptic potential. However, our approach gives no unique reason for synaptic depression under reversed spike timing. In fact, the presence and amplitude of depression of synaptic efficacies for reversed spike timing depend on how constraints are implemented in the optimization problem. Two different constraints, control of postsynaptic rates and control of temporal locality, are studied. The relation of our results to spike-timing-dependent plasticity and reinforcement learning is discussed.
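Schematically, for an escape-noise neuron with membrane potential u(t) = Σ_i w_i PSP_i(t) and firing intensity ρ(t) = g(u(t)), the gradient-ascent rule on the likelihood of a set of desired firing times {t^des} takes the form below (generic notation, not the paper's exact symbols):

```latex
\log P \;=\; \sum_{t^{\mathrm{des}}} \log \rho\bigl(t^{\mathrm{des}}\bigr) \;-\; \int_0^T \rho(t)\,dt,
\qquad
\Delta w_i \;\propto\; \frac{\partial \log P}{\partial w_i}
  \;=\; \sum_{t^{\mathrm{des}}} \frac{g'\bigl(u(t^{\mathrm{des}})\bigr)}{g\bigl(u(t^{\mathrm{des}})\bigr)}\,\mathrm{PSP}_i\bigl(t^{\mathrm{des}}\bigr)
  \;-\; \int_0^T g'\bigl(u(t)\bigr)\,\mathrm{PSP}_i(t)\,dt.
```

The first term potentiates synapses whose EPSPs overlap the desired firing times, which is why the potentiation window mirrors the EPSP time course; the second term lowers the firing intensity at all other times.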
Proceedings of the National Academy of Sciences of the United States of America | 2011
Julijana Gjorgjieva; Claudia Clopath; Juliette Audet; Jean-Pascal Pfister
Synaptic strength depresses for low and potentiates for high activation of the postsynaptic neuron. This feature is a key property of the Bienenstock–Cooper–Munro (BCM) synaptic learning rule, which has been shown to maximize the selectivity of the postsynaptic neuron, and thereby offers a possible explanation for experience-dependent cortical plasticity such as orientation selectivity. However, the BCM framework is rate-based and a significant amount of recent work has shown that synaptic plasticity also depends on the precise timing of presynaptic and postsynaptic spikes. Here we consider a triplet model of spike-timing–dependent plasticity (STDP) that depends on the interactions of three precisely timed spikes. Triplet STDP has been shown to describe plasticity experiments that the classical STDP rule, based on pairs of spikes, has failed to capture. In the case of rate-based patterns, we show a tight correspondence between the triplet STDP rule and the BCM rule. We analytically demonstrate the selectivity property of the triplet STDP rule for orthogonal inputs and perform numerical simulations for nonorthogonal inputs. Moreover, in contrast to BCM, we show that triplet STDP can also induce selectivity for input patterns consisting of higher-order spatiotemporal correlations, which exist in natural stimuli and have been measured in the brain. We show that this sensitivity to higher-order correlations can be used to develop direction and speed selectivity.
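The rate-based correspondence can be sketched as follows. With independent Poisson pre- and postsynaptic trains of rates ρ_x and ρ_y, and a minimal all-to-all triplet model retaining only pair depression and triplet potentiation, the expected weight drift is (symbols generic):

```latex
\Bigl\langle \frac{dw}{dt} \Bigr\rangle
 \;=\; -A_2^- \,\tau_-\, \rho_x \rho_y \;+\; A_3^+ \,\tau_+ \tau_y\, \rho_x \rho_y^2
 \;=\; A_3^+ \tau_+ \tau_y\, \rho_x\, \rho_y \bigl(\rho_y - \theta\bigr),
\qquad
\theta \;=\; \frac{A_2^- \tau_-}{A_3^+ \tau_+ \tau_y},
```

which has the BCM form of depression below and potentiation above a threshold θ; letting the depression amplitude (and hence θ) track the recent average of ρ_y² recovers the sliding threshold.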
Neural Computation | 2007
Taro Toyoizumi; Jean-Pascal Pfister; Kazuyuki Aihara; Wulfram Gerstner
We studied the hypothesis that synaptic dynamics is controlled by three basic principles: (1) synapses adapt their weights so that neurons can effectively transmit information, (2) homeostatic processes stabilize the mean firing rate of the postsynaptic neuron, and (3) weak synapses adapt more slowly than strong ones, while maintenance of strong synapses is costly. Our results show that a synaptic update rule derived from these principles shares features with spike-timing-dependent plasticity, is sensitive to correlations in the input, and is useful for synaptic memory. Moreover, input selectivity (sharply tuned receptive fields) of postsynaptic neurons develops only if stimuli with strong features are presented. Sharply tuned neurons can coexist with unselective ones, and the distribution of synaptic weights can be unimodal or bimodal. The formulation of synaptic dynamics through an optimality criterion provides a simple graphical argument for the stability of synapses, necessary for synaptic memory.
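The three principles can be summarized, schematically, as gradient ascent on a single objective of the form below; the exact functional forms used in the paper differ in detail, so this is only a sketch:

```latex
L(\mathbf{w}) \;=\; \underbrace{I(X;Y)}_{\text{(1) information transmission}}
\;-\; \gamma \, \underbrace{D\!\left(\nu_{\mathrm{post}} \,\|\, \tilde{\nu}\right)}_{\text{(2) homeostasis of the mean rate}}
\;-\; \lambda \, \underbrace{C(\mathbf{w})}_{\text{(3) cost of strong synapses}},
\qquad
\Delta w_i \;\propto\; \frac{\partial L}{\partial w_i}.
```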
The Journal of Neuroscience | 2013
Johanni Brea; Walter Senn; Jean-Pascal Pfister
Storing and recalling spiking sequences is a general problem the brain needs to solve. It is, however, unclear what type of biologically plausible learning rule is suited to learn a wide class of spatiotemporal activity patterns in a robust way. Here we consider a recurrent network of stochastic spiking neurons composed of both visible and hidden neurons. We derive a generic learning rule that is matched to the neural dynamics by minimizing an upper bound on the Kullback–Leibler divergence from the target distribution to the model distribution. The derived learning rule is consistent with spike-timing-dependent plasticity in that a presynaptic spike preceding a postsynaptic spike elicits potentiation while otherwise depression emerges. Furthermore, the learning rule for synapses that target visible neurons can be matched to the recently proposed voltage-triplet rule. The learning rule for synapses that target hidden neurons is modulated by a global factor, which shares properties with astrocytes and gives rise to testable predictions.
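An upper bound of the kind used here can be written in the standard variational form: for a model p_θ(v, h) over visible patterns v and hidden patterns h, and any auxiliary distribution q(h|v), (generic notation, not the paper's symbols)

```latex
D_{\mathrm{KL}}\bigl(p^*(v) \,\|\, p_\theta(v)\bigr)
\;\le\;
\mathbb{E}_{p^*(v)\,q(h\mid v)}\!\left[\log \frac{p^*(v)\, q(h\mid v)}{p_\theta(v, h)}\right],
```

with equality when q(h|v) matches the model posterior p_θ(h|v). Descending the right-hand side with respect to the synaptic weights yields the locally computable rule for visible-targeting synapses and the globally modulated rule for hidden-targeting synapses described above.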
International Conference on Artificial Neural Networks | 2003
Jean-Pascal Pfister; David Barber; Wulfram Gerstner
Many activity-dependent learning rules have been proposed in order to model long-term potentiation (LTP). Our aim is to derive a spike-timing-dependent learning rule from a probabilistic optimality criterion. Our approach allows us to obtain quantitative results in terms of a learning window. This is done by maximizing a given likelihood function with respect to the synaptic weights. The resulting weight adaptation is compared with experimental results.
The Journal of Neuroscience | 2014
Sigrid Marie Blom; Jean-Pascal Pfister; Mirko Santello; Walter Senn; Thomas Nevian
Neuropathic pain caused by peripheral nerve injury is a debilitating neurological condition of high clinical relevance. On the cellular level, the elevated pain sensitivity is induced by plasticity of neuronal function along the pain pathway. Changes in cortical areas involved in pain processing contribute to the development of neuropathic pain. Yet, it remains elusive which plasticity mechanisms occur in cortical circuits. We investigated the properties of neural networks in the anterior cingulate cortex (ACC), a brain region mediating affective responses to noxious stimuli. We performed multiple whole-cell recordings from neurons in layer 5 (L5) of the ACC of adult mice after chronic constriction injury of the sciatic nerve of the left hindpaw and observed a striking loss of connections between excitatory and inhibitory neurons in both directions. In contrast, no significant changes in synaptic efficacy in the remaining connected pairs were found. These changes were reflected on the network level by a decrease in the mEPSC and mIPSC frequency. Additionally, nerve injury resulted in a potentiation of the intrinsic excitability of pyramidal neurons, whereas the cellular properties of interneurons were unchanged. Our set of experimental parameters allowed us to construct a neuronal network model of L5 in the ACC, revealing that the modification of inhibitory connectivity had the most profound effect on increased network activity. Thus, our combined experimental and modeling approach suggests that cortical disinhibition is a fundamental pathological modification associated with peripheral nerve damage. These changes at the cortical network level might therefore contribute to the neuropathic pain condition.
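As a toy illustration of the modeling conclusion (not the authors' L5 ACC network model), a two-population rate model shows how weakening the excitatory-inhibitory coupling raises steady-state excitatory activity; all weights and the transfer function below are arbitrary choices.

```python
# Toy two-population rate model (not the authors' L5 ACC model) illustrating how
# weakening excitatory<->inhibitory connectivity ("disinhibition") raises activity.
import numpy as np

def f(x):                        # threshold-linear transfer function
    return np.maximum(x, 0.0)

def steady_state(w_ei, w_ie, w_ee=1.2, w_ii=1.0, ext=1.0, tau=10e-3, dt=1e-4, T=2.0):
    E = I = 0.0
    for _ in range(int(T / dt)):
        dE = (-E + f(w_ee * E - w_ei * I + ext)) / tau   # excitatory population
        dI = (-I + f(w_ie * E - w_ii * I)) / tau         # inhibitory population
        E += dt * dE
        I += dt * dI
    return E, I

print(steady_state(w_ei=2.0, w_ie=2.0))   # intact E<->I coupling
print(steady_state(w_ei=1.0, w_ie=1.0))   # reduced E<->I coupling: higher E activity
```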
Nature Neuroscience | 2010
Jean-Pascal Pfister; Peter Dayan; Máté Lengyel
The trajectory of the somatic membrane potential of a cortical neuron exactly reflects the computations performed on its afferent inputs. However, the spikes of such a neuron are a very low-dimensional and discrete projection of this continually evolving signal. We explored the possibility that the neuron's efferent synapses perform the critical computational step of estimating the membrane potential trajectory from the spikes. We found that short-term changes in synaptic efficacy can be interpreted as implementing an optimal estimator of this trajectory. Short-term depression arose when presynaptic spiking was sufficiently intense to reduce the uncertainty associated with the estimate; short-term facilitation reflected structural features of the statistics of the presynaptic neuron, such as up and down states. Our analysis provides a unifying account of a powerful, but puzzling, form of plasticity.
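The estimation problem itself can be illustrated with a brute-force particle filter that infers a hidden membrane-potential trajectory from the spikes it emits. The Ornstein-Uhlenbeck prior, the exponential spiking link, and all parameters below are illustrative assumptions; the paper instead derives an analytical estimator whose dynamics resemble short-term synaptic plasticity.

```python
# Illustrative particle filter: infer a hidden Ornstein-Uhlenbeck "membrane potential"
# u(t) from the spikes it emits (a brute-force stand-in, not the paper's estimator).
import numpy as np

rng = np.random.default_rng(0)
dt, T, tau, sigma = 1e-3, 2.0, 50e-3, 2.0       # OU prior on u
g0, beta = 20.0, 4.0                            # spiking intensity rho(u) = g0 * exp(beta * u)
n_steps, n_part = int(T / dt), 2000

# --- simulate the hidden potential and the observed spike train ---
u = np.zeros(n_steps)
spikes = np.zeros(n_steps, dtype=bool)
for t in range(1, n_steps):
    u[t] = u[t-1] - dt * u[t-1] / tau + sigma * np.sqrt(dt) * rng.standard_normal()
    spikes[t] = rng.random() < g0 * np.exp(beta * u[t]) * dt

# --- particle filter: posterior mean of u given the spikes observed so far ---
particles = np.zeros(n_part)
u_hat = np.zeros(n_steps)
for t in range(1, n_steps):
    particles = particles - dt * particles / tau + sigma * np.sqrt(dt) * rng.standard_normal(n_part)
    rate = g0 * np.exp(beta * particles) * dt
    w = rate if spikes[t] else 1.0 - rate       # likelihood of spike / no spike
    w = np.clip(w, 1e-12, None)
    w /= w.sum()
    particles = particles[rng.choice(n_part, size=n_part, p=w)]   # resample
    u_hat[t] = particles.mean()                 # online estimate of the trajectory

print("mean squared estimation error:", np.mean((u_hat - u) ** 2))
```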
Frontiers in Computational Neuroscience | 2010
Jean-Pascal Pfister; Peter Tass
Highly synchronized neural networks can be the source of various pathologies such as Parkinson's disease or essential tremor. Therefore, it is crucial to better understand the dynamics of such networks and the conditions under which a high level of synchronization can be observed. One of the key factors that influences the level of synchronization is the type of learning rule that governs synaptic plasticity. Most of the existing work on synchronization in recurrent networks with synaptic plasticity is based on numerical simulations, and there is a clear lack of a theoretical framework for studying the effects of various synaptic plasticity rules. In this paper we derive analytically the conditions for spike-timing-dependent plasticity (STDP) to lead a network into a synchronized or a desynchronized state. We also show that under appropriate conditions bistability occurs in recurrent networks governed by STDP. Indeed, a pathological regime with strong connections and therefore strongly synchronized activity is found to coexist with a physiological regime with weaker connections and lower levels of synchronization. Furthermore, we show that with appropriate stimulation, the network dynamics can be pushed to the low-synchronization stable state. This type of therapeutic stimulation is very different from existing high-frequency deep brain stimulation, since once the stimulation is stopped the network stays in the low-synchronization regime.
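As a loose toy illustration of the question studied here (does plasticity in a recurrent network drive it toward or away from synchrony), one can couple phase oscillators with a phase-difference-dependent coupling update and monitor the Kuramoto order parameter. This is not the spiking framework or the STDP rule analyzed in the paper, and all parameters below are arbitrary.

```python
# Toy phase-oscillator network with a phase-difference-dependent (Hebbian-like)
# coupling update; the Kuramoto order parameter R in [0, 1] tracks synchrony.
# An STDP-derived update would be an asymmetric function of the phase difference;
# this symmetric version is only meant to illustrate the setup.
import numpy as np

rng = np.random.default_rng(1)
N, dt, T = 50, 1e-2, 200.0
omega = rng.normal(1.0, 0.5, N)                  # natural frequencies
theta0 = rng.uniform(0, 2 * np.pi, N)

def simulate(k0, eta=5e-4, k_max=2.0):
    K = np.full((N, N), k0)
    np.fill_diagonal(K, 0.0)
    th = theta0.copy()
    for _ in range(int(T / dt)):
        diff = th[None, :] - th[:, None]          # diff[i, j] = theta_j - theta_i
        th = th + dt * (omega + (K * np.sin(diff)).mean(axis=1))
        # strengthen couplings between nearly in-phase units, weaken the rest
        K = np.clip(K + eta * np.cos(diff), 0.0, k_max)
        np.fill_diagonal(K, 0.0)
    R = np.abs(np.exp(1j * th).mean())            # order parameter
    return R, K.mean()

print(simulate(k0=1.5))   # strong initial coupling: tends to stay synchronized
print(simulate(k0=0.1))   # weak initial coupling: tends to stay desynchronized
```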
Frontiers in Computational Neuroscience | 2010
Guillaume Hennequin; Wulfram Gerstner; Jean-Pascal Pfister
Spike-frequency adaptation is known to enhance the transmission of information in sensory spiking neurons by rescaling the dynamic range for input processing, matching it to the temporal statistics of the sensory stimulus. Achieving maximal information transmission has also been recently postulated as a role for spike-timing-dependent plasticity (STDP). However, the link between optimal plasticity and STDP in cortex remains loose, as does the relationship between STDP and adaptation processes. We investigate how STDP, as described by recent minimal models derived from experimental data, influences the quality of information transmission in an adapting neuron. We show that a phenomenological model based on triplets of spikes yields almost the same information rate as an optimal model specially designed to this end. In contrast, the standard pair-based model of STDP does not improve information transmission as much. This result holds not only for additive STDP with hard weight bounds, known to produce bimodal distributions of synaptic weights, but also for weight-dependent STDP in the context of unimodal but skewed weight distributions. We analyze the similarities between the triplet model and the optimal learning rule, and find that the triplet effect is an important feature of the optimal model when the neuron is adaptive. If STDP is optimized for information transmission, it must take into account the dynamical properties of the postsynaptic cell, which might explain the target-cell specificity of STDP. In particular, it accounts for the differences found in vitro between STDP at excitatory synapses onto principal cells and those onto fast-spiking interneurons.
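The "adapting neuron" referred to above can be illustrated with a generic leaky integrate-and-fire model carrying a spike-triggered adaptation current (a textbook model, not necessarily the one used in the paper); under constant drive the interspike intervals lengthen as the adaptation current builds up.

```python
# Generic leaky integrate-and-fire neuron with a spike-triggered adaptation current,
# illustrating spike-frequency adaptation (not the specific neuron model of the paper).
import numpy as np

def adaptive_lif(I_ext, dt=1e-4, tau_m=20e-3, tau_a=200e-3, R=1.0,
                 v_th=1.0, v_reset=0.0, b=0.2):
    """I_ext: array of input current values; returns an array of spike times (s)."""
    v, a, spikes = 0.0, 0.0, []
    for t, I in enumerate(I_ext):
        v += dt * (-v + R * (I - a)) / tau_m    # membrane potential
        a += dt * (-a) / tau_a                  # adaptation current decays
        if v >= v_th:
            spikes.append(t * dt)
            v = v_reset
            a += b                              # spike-triggered increment
    return np.array(spikes)

# constant drive: interspike intervals lengthen as the adaptation current builds up
print(np.diff(adaptive_lif(np.full(20000, 2.0))))
```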