Publication


Featured research published by Stefano Fusi.


Neuron | 2005

Cascade models of synaptically stored memories.

Stefano Fusi; Patrick J. Drew; L. F. Abbott

Storing memories of ongoing, everyday experiences requires a high degree of plasticity, but retaining these memories demands protection against changes induced by further activity and experience. Models in which memories are stored through switch-like transitions in synaptic efficacy are good at storing but bad at retaining memories if these transitions are likely, and poor at storage but good at retention if they are unlikely. We construct and study a model in which each synapse has a cascade of states with different levels of plasticity, connected by metaplastic transitions. This cascade model combines high levels of memory storage with long retention times and significantly outperforms alternative models. We therefore suggest that memory storage requires synapses with multiple states exhibiting dynamics over a wide range of timescales, and we propose experimental tests of this hypothesis.
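
The cascade idea is easy to simulate. Below is a minimal sketch, not the paper's exact parameterization: each synapse carries a binary efficacy and a depth in a cascade of n states, with transition probabilities assumed to halve at each level; same-sign plasticity events push the synapse deeper (metaplasticity), while opposite-sign events flip the efficacy and reset it to the most plastic level.

    import numpy as np

    # Minimal cascade-synapse sketch (illustrative parameters:
    # 5 levels, transition probabilities halving with depth).
    rng = np.random.default_rng(0)
    n_levels = 5
    q = 0.5 ** np.arange(n_levels)  # plasticity probability at each depth

    def update(eff, depth, potentiate):
        """Apply one plasticity event to a synapse (eff in {0, 1})."""
        if (eff == 1) == potentiate:
            # same-sign event: metaplastic move to a less plastic state
            if depth < n_levels - 1 and rng.random() < q[depth + 1]:
                depth += 1
        elif rng.random() < q[depth]:
            # opposite-sign event: flip efficacy, reset to most plastic level
            eff, depth = 1 - eff, 0
        return eff, depth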


Neural Computation | 1994

Learning in neural networks with material synapses

Daniel J. Amit; Stefano Fusi

We discuss the long-term maintenance of acquired memory in the synaptic connections of a perpetually learning electronic device. This is achieved by ascribing to each synapse a finite number of stable states in which it can persist for indefinitely long periods. Learning uncorrelated stimuli is expressed as a stochastic process produced by the neural activities on the synapses. In several interesting cases the stochastic process can be analyzed in detail, leading to a clarification of the performance of the network, as an associative memory, during the process of uninterrupted learning. The stochastic nature of the process and the existence of an asymptotic distribution for the synaptic values in the network generically imply that the memory is a palimpsest, but that capacity is as low as log N for a network of N neurons. The only way we find to avoid this tight constraint is to let the parameters governing the learning process (the coding level of the stimuli, the transition probabilities for potentiation and depression, and the number of stable synaptic levels) depend on the number of neurons. It is shown that a network whose synapses have two stable states can dynamically learn with optimal storage efficiency, be a palimpsest, and maintain its (associative) memory for an indefinitely long time, provided the coding level is low and depression is equilibrated against potentiation. We suggest that an option so easily implementable in material devices would not have been overlooked by biology. Finally, we discuss stochastic learning on synapses with a variable number of stable synaptic states.
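
The palimpsest behavior is straightforward to reproduce numerically. The sketch below, with illustrative parameters, writes a sequence of random binary patterns onto two-state synapses with transition probability q and tracks how the trace of the first pattern is overwritten:

    import numpy as np

    # Palimpsest forgetting with two-state synapses (illustrative numbers).
    rng = np.random.default_rng(1)
    n_syn, n_stimuli, q = 10_000, 100, 0.1
    w = rng.integers(0, 2, n_syn)                      # binary efficacies
    patterns = rng.integers(0, 2, (n_stimuli, n_syn))

    trace = []
    for t in range(n_stimuli):
        flip = (w != patterns[t]) & (rng.random(n_syn) < q)
        w[flip] = patterns[t][flip]                    # stochastic write
        trace.append(np.mean(w == patterns[0]) - 0.5)  # signal of pattern 0

The signal decays roughly exponentially in the number of subsequent stimuli, which is the palimpsest property; lowering q slows both learning and forgetting.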


Neural Computation | 2007

Learning Real-World Stimuli in a Neural Network with Spike-Driven Synaptic Dynamics

Joseph M. Brader; Walter Senn; Stefano Fusi

We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with constant leak and a floor. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to classify correctly a large number (thousands) of highly overlapping patterns (300 classes of preprocessed LaTeX characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performance better than or comparable to that of artificial neural networks. Finally, we show that the synaptic dynamics is compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons).
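
A compressed sketch of the plasticity rule described above, with placeholder thresholds rather than the paper's fitted values: on each presynaptic spike the internal synaptic variable X is pushed up or down depending on the postsynaptic depolarization and a calcium-like trace of postsynaptic spikes, and between events X drifts toward one of its two stable states.

    # Placeholder constants (not the fitted values from the paper)
    theta_V = 0.8               # depolarization threshold
    up_lo, up_hi = 0.3, 1.0     # calcium window permitting potentiation
    dn_lo, dn_hi = 0.3, 0.6     # calcium window permitting depression
    a, b = 0.1, 0.1             # jump sizes
    alpha, beta = 0.02, 0.02    # bistable drift rates

    def on_pre_spike(X, V_post, C):
        """Update the synaptic variable X on a presynaptic spike."""
        if V_post > theta_V and up_lo < C < up_hi:
            X = min(1.0, X + a)    # push toward the potentiated state
        elif V_post <= theta_V and dn_lo < C < dn_hi:
            X = max(0.0, X - b)    # push toward the depressed state
        return X

    def drift(X, dt):
        """Between spikes, X relaxes toward 0 or 1 (bistability)."""
        return X + (alpha if X > 0.5 else -beta) * dt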


Neural Computation | 2000

Spike-Driven Synaptic Plasticity: Theory, Simulation, VLSI Implementation

Stefano Fusi; Mario Annunziato; Davide Badoni; Andrea Salamon; Daniel J. Amit

We present a model for spike-driven dynamics of a plastic synapse, suited for a VLSI implementation. The synaptic device behaves as a capacitor on short timescales and preserves the memory of two stable states (efficacies) on long timescales. The transitions (LTP/LTD) are stochastic because both the number and the distribution of neural spikes in any finite (stimulation) interval fluctuate, even at fixed pre- and postsynaptic spike rates. The dynamics of the single synapse is studied analytically by extending the solution to a classic problem in queuing theory (the Takács process). The model of the synapse is implemented in VLSI and consists of only 18 transistors. It is also directly simulated. The simulations indicate that the LTP/LTD probabilities versus rates are robust to fluctuations of the electronic parameters over a wide range of rates. The solutions for these probabilities are in very good agreement with both the simulations and the measurements. Moreover, the probabilities can readily be manipulated by varying the chip's parameters, even in ranges where they are very small. The tests of the electronic device cover the range from spontaneous activity (3-4 Hz) to stimulus-driven rates (50 Hz). Low transition probabilities can be maintained in all ranges, even though the intrinsic time constants of the device are short (~100 ms). Synaptic transitions are triggered by elevated presynaptic rates: for low presynaptic rates, there are essentially no transitions. The synaptic device can preserve its memory for years in the absence of stimulation. Stochasticity of learning is a result of the variability of interspike intervals; noise is a feature of the distributed dynamics of the network. The fact that the synapse is binary on long timescales solves the stability problem of synaptic efficacies in the absence of stimulation. Yet stochastic learning theory ensures that this does not affect the collective behavior of the network, provided the transition probabilities are low and LTP is balanced against LTD.
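
The stochastic-transition behavior can be illustrated with a toy simulation (a caricature of the dynamics, not the circuit equations): Poisson pre- and postsynaptic spike trains drive jumps of an internal variable that is refreshed toward one of two stable values, and the LTP probability is estimated as the fraction of trials in which the synapse crosses to the potentiated state.

    import numpy as np

    rng = np.random.default_rng(2)

    def ltp_prob(rate_pre, rate_post, T=0.5, dt=1e-3, trials=200):
        """Estimate P(LTP) for Poisson spiking at the given rates (Hz)."""
        hits = 0
        for _ in range(trials):
            v, u = 0.0, 0.0                # synaptic variable, post. depolarization
            for _ in range(int(T / dt)):
                u -= u * dt / 0.02         # 20 ms membrane-like decay
                if rng.random() < rate_post * dt:
                    u += 1.0               # postsynaptic spike
                if rng.random() < rate_pre * dt:
                    v += 0.1 if u > 0.5 else -0.1  # pre spike triggers a jump
                v += (0.02 if v > 0.5 else -0.02) * dt  # refresh toward 0 or 1
                v = min(max(v, 0.0), 1.0)
            hits += v > 0.9                # ended in the potentiated state
        return hits / trials

    # e.g. ltp_prob(50, 50) is appreciable, while ltp_prob(4, 4) is near zero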


Nature Neuroscience | 2007

Limits on the memory storage capacity of bounded synapses

Stefano Fusi; L. F. Abbott

Memories maintained in patterns of synaptic connectivity are rapidly overwritten and destroyed by ongoing plasticity related to the storage of new memories. Short memory lifetimes arise from the bounds that must be imposed on synaptic efficacy in any realistic model. We explored whether memory performance can be improved by allowing synapses to traverse a large number of states before reaching their bounds, or by changing the way these bounds are imposed. In the case of hard bounds, memory lifetimes grow in proportion to the square of the number of synaptic states, but only if potentiation and depression are precisely balanced. Improved performance can be obtained without fine-tuning by imposing soft bounds, but this improvement is only linear in the number of synaptic states. We explored several other possibilities and conclude that improving memory performance requires a more radical modification of the standard model of memory storage.
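
The hard-bound scaling can be checked with a small simulation. In the sketch below (an illustrative setup), synapses perform a balanced random walk between n hard bounds; the memory signal is the mean difference between the synapses potentiated by the tracked memory and the rest, and the time for it to decay by a factor of e grows roughly as n squared.

    import numpy as np

    rng = np.random.default_rng(3)

    def lifetime(n_states, n_syn=20_000, p_up=0.5):
        """Steps until the tracked memory signal decays by a factor of e."""
        w = rng.integers(0, n_states, n_syn).astype(float)
        tracked = rng.random(n_syn) < 0.5   # synapses the memory potentiates
        w[tracked] = np.minimum(w[tracked] + 1, n_states - 1)
        s0 = w[tracked].mean() - w[~tracked].mean()
        for t in range(1, 100_000):
            step = np.where(rng.random(n_syn) < p_up, 1.0, -1.0)
            w = np.clip(w + step, 0, n_states - 1)   # hard bounds
            if w[tracked].mean() - w[~tracked].mean() < s0 / np.e:
                return t
        return None

    # lifetime(8) comes out roughly four times lifetime(4) when p_up = 0.5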


Neural Computation | 1999

Collective behavior of networks with linear (VLSI) integrate-and-fire neurons

Stefano Fusi; Maurizio Mattia

We analyze in detail the statistical properties of the spike emission process of a canonical integrate-and-fire neuron with a linear integrator and a lower bound on the depolarization, as often used in VLSI implementations (Mead, 1989). The spike statistics of such neurons appear to be qualitatively similar to those of conventional (exponential) integrate-and-fire neurons, which exhibit a wide variety of characteristics observed in cortical recordings. We also show that, contrary to current opinion, the dynamics of a network composed of such neurons has two stable fixed points, even in the purely excitatory network, corresponding to two different states of reverberating activity. The analytical results are compared with numerical simulations and are found to be in good agreement.
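
For concreteness, a minimal sketch of such a neuron (parameters illustrative): the depolarization integrates its input minus a constant leak, is reflected at a lower bound of zero, and is reset after crossing threshold.

    import numpy as np

    rng = np.random.default_rng(4)

    def linear_if(mu, sigma, theta=1.0, leak=0.5, T=10.0, dt=1e-4):
        """Spike times of a constant-leak IF neuron with a floor at 0."""
        v, spikes = 0.0, []
        for i in range(int(T / dt)):
            v += (mu - leak) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            v = max(v, 0.0)          # reflecting lower bound (the 'floor')
            if v >= theta:
                spikes.append(i * dt)
                v = 0.0              # reset after spike emission
        return spikes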


Biological Cybernetics | 2002

Hebbian spike-driven synaptic plasticity for learning patterns of mean firing rates

Stefano Fusi

Synaptic plasticity is believed to underlie the formation of appropriate patterns of connectivity that stabilize stimulus-selective reverberations in the cortex. Here we present a general quantitative framework for studying the process of learning and memorizing patterns of mean spike rates. General considerations based on the limitations of material (biological or electronic) synaptic devices show that most learning networks share the palimpsest property: old stimuli are forgotten to make room for new ones. In order to prevent too-fast forgetting, one can introduce a stochastic mechanism that selects only a small fraction of synapses to be changed upon the presentation of a stimulus. Such a mechanism can be easily implemented by exploiting the noisy fluctuations in the pre- and postsynaptic activities to be encoded. The spike-driven synaptic dynamics described here can implement such a selection mechanism to achieve slow learning, which is shown to maximize the performance of the network as an associative memory.
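
The trade-off controlled by the stochastic selection can be stated in one line. Assuming, as a simplification, that each new stimulus overwrites a random fraction q*f of the synapses relevant to a stored pattern, the expected memory trace after t further stimuli is:

    def trace(t, q, f=0.05, s0=1.0):
        """Expected memory trace after t subsequent stimuli.

        q: fraction of candidate synapses actually selected for update,
        f: coding level of the stimuli (both illustrative assumptions).
        """
        return s0 * (1.0 - q * f) ** t

Small q therefore slows forgetting (long memory lifetimes) at the price of requiring repeated presentations to learn, which is the slow-learning regime the paper argues is optimal.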


Frontiers in Computational Neuroscience | 2010

Internal Representation of Task Rules by Recurrent Dynamics: The Importance of the Diversity of Neural Responses

Mattia Rigotti; Daniel B. Rubin; Xiao Jing Wang; Stefano Fusi

Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context-dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context-dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation.
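
A toy version of the random-connectivity argument (the sizes and the task are illustrative): a context-dependent, XOR-like rule is not linearly separable in the raw (stimulus, context) inputs, but becomes separable after a layer of randomly connected threshold units with mixed selectivity.

    import numpy as np

    rng = np.random.default_rng(5)
    inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # (stimulus, context)
    labels = inputs[:, 0] != inputs[:, 1]     # context-dependent (XOR-like) rule

    n_mixed = 12                              # randomly connected units
    W = rng.standard_normal((n_mixed, 2))
    b = rng.standard_normal(n_mixed)
    h = (inputs @ W.T + b > 0).astype(float)  # mixed-selective responses

    # a simple least-squares readout of the expanded representation
    readout, *_ = np.linalg.lstsq(h, labels * 2.0 - 1.0, rcond=None)
    print(((h @ readout > 0) == labels).all())  # True for most random draws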


Neural Computation | 2004

Minimal Models of Adapted Neuronal Response to In Vivo–Like Input Currents

Giancarlo La Camera; Alexander Rauch; Hans-Rudolf Lüscher; Walter Senn; Stefano Fusi

Rate models are often used to study the behavior of large networks of spiking neurons. Here we propose a procedure for deriving rate models that take into account the fluctuations of the input current and firing-rate adaptation, two ubiquitous features of the central nervous system that have previously been overlooked in constructing rate models. The procedure is general and applies to any model of a firing unit. As examples, we apply it to the leaky integrate-and-fire (IF) neuron, the leaky IF neuron with reversal potentials, and the quadratic IF neuron. Two mechanisms of adaptation are considered: one due to an afterhyperpolarization current and the other to an adapting threshold for spike emission. The parameters of these simple models can be tuned to match experimental data obtained from neocortical pyramidal neurons. Finally, we show how the stationary model can be used to predict the time-varying activity of a large population of adapting neurons.
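
The procedure's end product is a rate model of the following general shape. This sketch uses a threshold-linear transfer function as a stand-in for the fitted response function, with illustrative parameters: the rate is read out from the adapted input, and a slow variable integrating the rate feeds back subtractively, as an afterhyperpolarization-like current would.

    import numpy as np

    def phi(mu):
        """Stand-in transfer function (threshold-linear)."""
        return np.maximum(mu, 0.0)

    def adapting_rate(mu_in, T=2.0, dt=1e-3, tau_a=0.5, g_a=0.2):
        """Firing-rate trace of an adapting rate unit driven by mu_in."""
        a, rates = 0.0, []
        for _ in range(int(T / dt)):
            nu = phi(mu_in - g_a * a)   # rate from the adapted input current
            a += (nu - a) * dt / tau_a  # slow adaptation integrates the rate
            rates.append(nu)
        return rates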


Journal of Physiology-Paris | 2003

Modelling the formation of working memory with networks of integrate-and-fire neurons connected by plastic synapses

Paolo Del Giudice; Stefano Fusi; Maurizio Mattia

In this paper we review a series of works concerning models of spiking neurons interacting via spike-driven, plastic, Hebbian synapses, meant to implement stimulus-driven, unsupervised formation of working memory (WM) states. Starting from a summary of the experimental evidence emerging from delayed matching to sample (DMS) experiments, we briefly review the attractor picture proposed to underlie WM states. We then describe a general framework for a theoretical approach to learning with synapses subject to realistic constraints and outline some general requirements to be met by a mechanism of Hebbian synaptic structuring. We argue that a stochastic selection of the synapses to be updated allows for optimal memory storage, even if the number of stable synaptic states is reduced to the extreme (bistable synapses). A description follows of models of spike-driven synapses that implement the stochastic selection by exploiting the high irregularity in the pre- and postsynaptic activity. Reasons are listed why dynamic learning, that is, the process by which the synaptic structure develops under the sole guidance of neural activities, driven in turn by stimuli, is hard to accomplish. We provide a feasibility proof of the dynamic formation of WM states by showing how an initially unstructured network autonomously develops a synaptic structure that simultaneously supports stable spontaneous and WM states; in this context the beneficial role of short-term depression (STD) is illustrated. After summarizing heuristic indications emerging from the study performed, we conclude by briefly discussing open problems and critical issues still to be clarified.

Collaboration


Dive into Stefano Fusi's collaborations.

Top Co-Authors

Daniel J. Amit

Hebrew University of Jerusalem


Davide Badoni

Sapienza University of Rome


Daniel B. Rubin

Brigham and Women's Hospital
