Publications


Featured research published by Daniel J. Amit.


Annals of Physics | 1987

Statistical mechanics of neural networks near saturation

Daniel J. Amit; Hanoch Gutfreund; Haim Sompolinsky

The Hopfield model of a neural network is studied near its saturation, i.e., when the number p of stored patterns increases with the size of the network N, as p = αN. The mean-field theory for this system is described in detail. The system possesses, at low α, both a spin-glass phase and 2p dynamically stable degenerate ferromagnetic phases. The latter have essentially full macroscopic overlaps with the memorized patterns, and provide effective associative memory, despite the spin-glass features. The network can retrieve patterns, at T = 0, with an error of less than 1.5% for α < αc = 0.14. At αc the ferromagnetic (FM) retrieval states disappear discontinuously. Numerical simulations show that even above αc the overlaps with the stored patterns are not zero, but the level of error precludes meaningful retrieval. The difference between the statistical mechanics and the simulations is discussed. As α decreases below 0.05 the FM retrieval states become ground states of the system, and for α < 0.03 mixture states appear. The level of storage creates noise, akin to temperature at finite p. Replica symmetry breaking is found to be salient in the spin-glass state, but in the retrieval states it appears at extremely low temperatures, and is argued to have a very weak effect. This is corroborated by simulations. The study is extended to survey the phase diagram of the system in the presence of stochastic synaptic noise (temperature), and the effect of external fields (neuronal thresholds) coupled to groups of patterns. It is found that a field coupled to many patterns has a very limited utility in enhancing their learning. Finally, we discuss the robustness of the network to the relaxation of various underlying assumptions, as well as some new trends in the study of neural networks.
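
The headline numbers are easy to probe numerically. Below is a minimal sketch (not code from the paper; the network size, sweep count, and asynchronous zero-temperature update rule are illustrative choices) that stores p = αN random patterns with the Hebb rule and measures the retrieval overlap m, whose error rate is (1 − m)/2:

```python
import numpy as np

rng = np.random.default_rng(0)

def retrieval_overlap(N, alpha, sweeps=20):
    """Zero-temperature retrieval in a Hopfield network with p = alpha*N patterns."""
    p = max(1, int(alpha * N))
    xi = rng.choice([-1, 1], size=(p, N))      # stored random patterns
    J = (xi.T @ xi).astype(float) / N          # Hebbian couplings
    np.fill_diagonal(J, 0.0)
    s = xi[0].copy()                           # start at pattern 0
    for _ in range(sweeps):                    # asynchronous updates
        for i in rng.permutation(N):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return (xi[0] @ s) / N                     # overlap m with pattern 0

for alpha in (0.05, 0.12, 0.16):
    m = retrieval_overlap(N=1000, alpha=alpha)
    print(f"alpha={alpha:.2f}  overlap={m:.3f}  error={(1 - m) / 2:.3%}")
```

For α safely below αc ≈ 0.14 the overlap should stay near 1 (error under 1.5%), while above αc it collapses to a small residual value, as the abstract describes.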


Network: Computation In Neural Systems | 1991

Quantitative study of attractor neural network retrieving at low spike rates: I. Substrate—spikes, rates and neuronal gain

Daniel J. Amit; Misha Tsodyks

We discuss the conversion of the description of the dynamics of a neural network from a temporal variation of synaptic currents driven by point spikes and modulated by a synaptic structure to a description of the current dynamics driven by spike rates. The conditions for the validity of such a conversion are discussed in detail and are shown to be quite realistic in cortical conditions. This is done in preparation for a discussion of a scenario of an attractor neural network, based on the interaction of synaptic currents and neural spike rates. The spike rates are then expressed in terms of the currents themselves to provide a closed set of dynamical equations for the currents. The current-rate relation is expressed as a neuronal gain function, converting currents into spike rates. It describes an integrate-and-fire element with noisy inputs, under explicit quantitative conditions which we argue to be plausible in a cortical situation. In particular, it is shown that the gain of the current to rate transduct...
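
A standard way to realize such a current-to-rate gain function is the diffusion-approximation (mean first-passage time) formula for a noisy integrate-and-fire neuron. The sketch below uses that textbook form with illustrative membrane parameters; it is an assumption about the kind of transduction function meant here, not the paper's exact expression (requires scipy):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def lif_gain(mu, sigma, tau=0.020, tau_ref=0.002, theta=20.0, v_reset=10.0):
    """Mean firing rate (Hz) of a leaky integrate-and-fire neuron receiving
    Gaussian input with mean mu and noise sigma (diffusion approximation)."""
    integrand = lambda u: np.exp(u**2) * (1.0 + erf(u))
    lo, hi = (v_reset - mu) / sigma, (theta - mu) / sigma
    integral, _ = quad(integrand, lo, hi)
    return 1.0 / (tau_ref + tau * np.sqrt(np.pi) * integral)

for mu in (15.0, 19.0, 21.0, 25.0):  # mean input current (mV equivalent)
    print(f"mu={mu:5.1f}  ->  rate={lif_gain(mu, sigma=5.0):6.1f} Hz")
```

Note that the gain remains finite and graded even for subthreshold mean input (mu below theta), which is what permits the low-rate operation discussed in this pair of papers.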


Neural Computation | 1994

Learning in neural networks with material synapses

Daniel J. Amit; Stefano Fusi

We discuss the long term maintenance of acquired memory in synaptic connections of a perpetually learning electronic device. This is effected by ascribing to each synapse a finite number of stable states in which it can remain for indefinitely long periods. Learning uncorrelated stimuli is expressed as a stochastic process produced by the neural activities on the synapses. In several interesting cases the stochastic process can be analyzed in detail, leading to a clarification of the performance of the network, as an associative memory, during the process of uninterrupted learning. The stochastic nature of the process and the existence of an asymptotic distribution for the synaptic values in the network imply generically that the memory is a palimpsest, but capacity is as low as log N for a network of N neurons. The only way we find for avoiding this tight constraint is to allow the parameters governing the learning process (the coding level of the stimuli, the transition probabilities for potentiation and depression, and the number of stable synaptic levels) to depend on the number of neurons. It is shown that a network with synapses that have two stable states can dynamically learn with optimal storage efficiency, be a palimpsest, and maintain its (associative) memory for an indefinitely long time provided the coding level is low and depression is equilibrated against potentiation. We suggest that an option so easily implementable in material devices would not have been overlooked by biology. Finally, we discuss stochastic learning on synapses with a variable number of stable synaptic states.
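
The palimpsest behavior can be illustrated with a toy population of two-state synapses learning a stream of uncorrelated low-coding-level stimuli. The coding level f and the transition probabilities below are illustrative, and the "signal" (excess potentiation among the synapses tagged by the first stimulus) is a simplified stand-in for the paper's memory measure:

```python
import numpy as np

rng = np.random.default_rng(1)

def memory_signal(n_syn=20000, n_stimuli=1000, f=0.05, q_pot=0.5, q_dep=0.05):
    """Signal for the first stimulus in a population of binary synapses
    while a stream of uncorrelated stimuli is learned on top of it."""
    J = rng.integers(0, 2, n_syn)                # binary efficacies, random start
    pre = rng.random((n_stimuli, n_syn)) < f     # presynaptic activity per stimulus
    post = rng.random((n_stimuli, n_syn)) < f    # postsynaptic activity per stimulus
    tagged = pre[0] & post[0]                    # synapses potentiated by stimulus 0
    signal = []
    for t in range(n_stimuli):
        potentiate = pre[t] & post[t] & (rng.random(n_syn) < q_pot)
        depress = pre[t] & ~post[t] & (rng.random(n_syn) < q_dep)
        J[potentiate] = 1
        J[depress] = 0
        signal.append(J[tagged].mean() - J[~tagged].mean())
    return signal

s = memory_signal()
print(f"signal after stimulus 0: {s[0]:.3f}; after 1000 stimuli: {s[-1]:.3f}")
```

The trace of the first stimulus decays as later stimuli overwrite the same synapses: newer memories are retained at the expense of older ones, which is the palimpsest property.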


Neural Computation | 2000

Spike-Driven Synaptic Plasticity: Theory, Simulation, VLSI Implementation

Stefano Fusi; Mario Annunziato; Davide Badoni; Andrea Salamon; Daniel J. Amit

We present a model for spike-driven dynamics of a plastic synapse, suited for a VLSI implementation. The synaptic device behaves as a capacitor on short timescales and preserves the memory of two stable states (efficacies) on long timescales. The transitions (LTP/LTD) are stochastic because both the number and the distribution of neural spikes in any finite (stimulation) interval fluctuate, even at fixed pre- and postsynaptic spike rates. The dynamics of the single synapse is studied analytically by extending the solution to a classic problem in queuing theory (the Takács process). The model of the synapse is implemented in VLSI and consists of only 18 transistors. It is also directly simulated. The simulations indicate that LTP/LTD probabilities versus rates are robust to fluctuations of the electronic parameters in a wide range of rates. The solutions for these probabilities are in very good agreement with both the simulations and measurements. Moreover, the probabilities are readily manipulable by variations of the chip's parameters, even in ranges where they are very small. The tests of the electronic device cover the range from spontaneous activity (3–4 Hz) to stimulus-driven rates (50 Hz). Low transition probabilities can be maintained in all ranges, even though the intrinsic time constants of the device are short (~100 ms). Synaptic transitions are triggered by elevated presynaptic rates: for low presynaptic rates, there are essentially no transitions. The synaptic device can preserve its memory for years in the absence of stimulation. Stochasticity of learning is a result of the variability of interspike intervals; noise is a feature of the distributed dynamics of the network. The fact that the synapse is binary on long timescales solves the stability problem of synaptic efficacies in the absence of stimulation. Yet stochastic learning theory ensures that it does not affect the collective behavior of the network, if the transition probabilities are low and LTP is balanced against LTD.
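
A much-simplified software analogue of such a device: an internal analogue variable is kicked up or down by Poisson presynaptic spikes depending on a noisy postsynaptic depolarization, drifts back toward one of two stable values between spikes, and an LTP transition is scored when it ends above threshold. All constants below are illustrative, not the chip's:

```python
import numpy as np

rng = np.random.default_rng(2)

def ltp_probability(rate_pre, rate_post, T=0.5, dt=1e-3,
                    jump_up=0.12, jump_dn=0.10, drift=0.4, n_trials=1000):
    """Probability that the internal variable X, starting in the low stable
    state, ends in the high state after one stimulation of length T."""
    transitions, steps = 0, int(T / dt)
    for _ in range(n_trials):
        x = 0.0
        for _ in range(steps):
            if rng.random() < rate_pre * dt:            # presynaptic spike
                # postsynaptic depolarization high with prob. growing with rate
                if rng.random() < rate_post / (rate_post + 20.0):
                    x += jump_up                        # push toward LTP
                else:
                    x -= jump_dn                        # push toward LTD
            x += drift * dt * (1.0 if x > 0.5 else -1.0)  # bistable drift
            x = min(1.0, max(0.0, x))
        transitions += x > 0.5
    return transitions / n_trials

for nu in (4.0, 20.0, 50.0):
    print(f"pre=post={nu:4.1f} Hz  ->  P(LTP) = {ltp_probability(nu, nu):.3f}")
```

Run as is, the transition probability should be negligible at spontaneous rates and appreciable at stimulus-driven rates, which is the stochastic-selection mechanism the abstract describes.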


Network: Computation In Neural Systems | 1997

Dynamics of a recurrent network of spiking neurons before and following learning

Daniel J. Amit; Nicolas Brunel

Extensive simulations of large recurrent networks of integrate-and-fire excitatory and inhibitory neurons in realistic cortical conditions (before and after Hebbian unsupervised learning of uncorrelated stimuli) exhibit a rich phenomenology of stochastic neural spike dynamics and, in particular, coexistence between two types of stable states: spontaneous activity upon stimulation by an unlearned stimulus, and ‘working memory’ states strongly correlated with learned stimuli. Firing rates have very wide distributions, due to the variability in the connectivity from neuron to neuron. ISI histograms are exponential, except for small intervals. Thus the spike emission processes are well approximated by a Poisson process. The variability of the spike emission process is effectively controlled by the magnitude of the post-spike reset potential relative to the mean depolarization of the cell. Cross-correlations (CC) exhibit a central peak near zero delay, flanked by damped oscillations. The magnitude of the centr...
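
The Poisson-like statistics and the role of the reset potential can be checked on a single noise-driven integrate-and-fire neuron. The sketch below (illustrative parameters, Euler-Maruyama integration) estimates the firing rate and the coefficient of variation (CV) of the interspike intervals for several reset levels; a CV near 1 indicates Poisson-like emission:

```python
import numpy as np

rng = np.random.default_rng(3)

def isi_stats(v_reset, theta=20.0, mu=19.0, sigma=6.0, tau=0.020,
              dt=1e-4, t_max=100.0):
    """Rate and ISI coefficient of variation of a noisy leaky
    integrate-and-fire neuron (Euler-Maruyama integration)."""
    n_steps = int(t_max / dt)
    noise = sigma * np.sqrt(dt / tau) * rng.standard_normal(n_steps)
    v, t_last, isis = v_reset, 0.0, []
    for step in range(n_steps):
        v += dt * (mu - v) / tau + noise[step]
        if v >= theta:                     # spike: record ISI and reset
            t = step * dt
            isis.append(t - t_last)
            t_last, v = t, v_reset
    isis = np.array(isis)
    return 1.0 / isis.mean(), isis.std() / isis.mean()

for v_reset in (0.0, 10.0, 17.0):
    rate, cv = isi_stats(v_reset)
    print(f"reset={v_reset:5.1f}  rate={rate:5.1f} Hz  CV={cv:.2f}")
```

Moving the reset further below the mean depolarization adds a nearly deterministic recharge time to each interval and lowers the CV, consistent with the abstract's claim that the reset level controls spike-train variability.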


Network: Computation In Neural Systems | 1991

Quantitative study of attractor neural networks retrieving at low spike rates: II. Low-rate retrieval in symmetric networks

Daniel J. Amit; Misha Tsodyks

A network of current-rate dynamics with a symmetric synaptic matrix is analysed and simulated for its low-rate attractor structure. The dynamics is deterministic, with the noise included in a realistic current-rate transduction function (discussed in part I). The analysis is carried out in mean-field theory. It is shown that at low loading the network retrieves without errors, with uniform low rates, that there are no simple spurious states, and that the low-rate attractors, retrieving single patterns, are stable to the admixture of additional patterns. The analysis of the attractors in a network with an extensive number of patterns is carried out in the replica symmetric approximation. The results for the dependence of the retrieval rates on the loading level, for the distribution of rates among neurons, and for the storage capacity are compared with simulations. Simulations also show that retrieval performance is very robust to random elimination of synapses. Moreover, errors in the stimulus, relative ...
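
The object being analysed is a closed current-rate equation whose attractors are fixed points. A toy single-population caricature (the sigmoidal gain and all constants are invented stand-ins, not the realistic transduction function of part I) still shows the generic bistability:

```python
import numpy as np

def gain(I, nu_max=300.0, theta=5.0, beta=1.0):
    """Illustrative sigmoidal current-to-rate transduction (spikes/s)."""
    return nu_max / (1.0 + np.exp(-beta * (I - theta)))

def fixed_point(nu0, J=0.05, I_ext=1.0, n_iter=200):
    """Iterate the closed current-rate equation nu = gain(J*nu + I_ext)."""
    nu = nu0
    for _ in range(n_iter):
        nu = gain(J * nu + I_ext)
    return nu

print("from low start :", round(fixed_point(0.0), 1), "spikes/s")
print("from high start:", round(fixed_point(300.0), 1), "spikes/s")
```

With this crude sigmoid the elevated state lands near saturation; the point of the paper is that with the realistic neuronal gain of part I the retrieval attractor instead sits at uniform low rates.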


Journal of Physics A | 1980

Renormalisation group analysis of the phase transition in the 2D Coulomb gas, Sine-Gordon theory and XY-model

Daniel J. Amit; Y Y Goldschmidt; S Grinstein

A systematic renormalisation group technique for studying the 2D sine-Gordon theory (Coulomb gas, XY model) near its phase transition is presented. The new results are (a) higher-order terms in the flow equations beyond those of Kosterlitz (1974) give rise to a new universal quantity; (b) this in turn gives the universal form as well as the relative coefficient of the next-to-leading term in the correlation function of the XY model; (c) the free energy (1PI vacuum sum) is calculated after the singularity at β² = 4π is treated; (d) vortices with multiple charges are shown to be irrelevant; (e) symmetry-breaking fields are analysed systematically. The main idea is that the sine-Gordon theory can be defined as a double expansion in α (fugacity) and δ = β²/8π − 1 (distance from the critical temperature at α = 0). Wave-function and coupling-constant (α) renormalisations are necessary and sufficient, around β² = 8π where cos φ acquires dimension 2, for functions of elementary SG fields. This gives rise to renormalisation of β. The renormalisability is proved to the order calculated in the context of the SG theory, and in general, by using the equivalence to the Thirring-Schwinger model. The renormalised β² plays a role analogous to the dimension in a φ⁴ theory, 8π being the critical dimension. β² > 8π gives an infrared asymptotically free theory which leads to the well-known fixed line. The infrared properties are understood by analogy with the non-linear σ model.
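
For orientation, the lowest-order flow equations in this double expansion read schematically as below. This is the standard Kosterlitz form rewritten in the variables α and δ = β²/8π − 1, with c a positive, convention-dependent constant; the paper's new universal quantity arises from the higher-order terms omitted here:

```latex
% Lowest-order RG flow in the fugacity \alpha and \delta = \beta^2/8\pi - 1
% (c > 0 is convention-dependent; higher-order terms are omitted)
\begin{aligned}
\frac{d\alpha}{dl} &= -2\,\delta\,\alpha + O(\alpha^{3}),\\[2pt]
\frac{d\delta}{dl}  &= -c\,\alpha^{2} + O(\alpha^{2}\delta).
\end{aligned}
```

The trajectories are hyperbolae in the (δ, α) plane: for δ > 0 the fugacity is irrelevant and flows to the fixed line α = 0, while for δ < 0 it grows, which is the Kosterlitz-Thouless picture the higher-order analysis refines.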


Neural Computation | 2003

Spike-driven synaptic dynamics generating working memory states

Daniel J. Amit; Gianluigi Mongillo

The collective behavior of a network, modeling a cortical module of spiking neurons connected by plastic synapses, is studied. A detailed spike-driven synaptic dynamics is simulated in a large network of spiking neurons, implementing the full double dynamics of neurons and synapses. The repeated presentation of a set of external stimuli is shown to structure the network to the point of sustaining working memory (selective delay activity). When the synaptic dynamics is analyzed as a function of pre- and postsynaptic spike rates in functionally defined populations, it reveals a novel variation of the Hebbian plasticity paradigm: in any functional set of synapses between pairs of neurons (e.g., stimulated→stimulated, stimulated→delay, stimulated→spontaneous), there is a finite probability of potentiation as well as of depression. This leads to a saturation of potentiation or depression at the level of the ratio of the two probabilities. When one of the two probabilities is very high relative to the other, the familiar Hebbian mechanism is recovered. But where correlated working memory is formed, it prevents overlearning. Constraints relevant to the stability of the acquired synaptic structure and the regimes of global activity allowing for structuring are expressed in terms of the parameters describing the single-synapse dynamics. The synaptic dynamics is discussed in the light of experiments observing precise spike timing effects and related issues of biological plausibility.
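
The saturation statement has a one-line Markov-chain reading: if each stimulus presentation potentiates a synapse with probability q+ and depresses it with probability q−, the fraction of potentiated synapses relaxes to q+/(q+ + q−), and the familiar Hebbian limit is the regime where one probability dominates. A minimal numerical check (probabilities illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def asymptotic_potentiated_fraction(q_pot, q_dep, n_syn=10000, n_steps=500):
    """Fraction of binary synapses left potentiated after many presentations,
    each potentiating with prob q_pot or depressing with prob q_dep."""
    J = np.zeros(n_syn, dtype=int)
    for _ in range(n_steps):
        u = rng.random(n_syn)
        J[u < q_pot] = 1              # potentiation event
        J[u > 1.0 - q_dep] = 0        # depression event (disjoint from above)
    return J.mean()

for q_pot, q_dep in ((0.02, 0.02), (0.05, 0.01), (0.10, 0.001)):
    sim = asymptotic_potentiated_fraction(q_pot, q_dep)
    pred = q_pot / (q_pot + q_dep)
    print(f"q+={q_pot:.3f} q-={q_dep:.3f}  simulated={sim:.3f}  predicted={pred:.3f}")
```

When the two probabilities are comparable, the structure saturates well short of full potentiation, which is how the mechanism prevents overlearning.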


Network: Computation In Neural Systems | 1990

Attractor neural networks with biological probe records

Daniel J. Amit; M R Evans; Moshe Abeles

We present an attractor neural network which can associatively retrieve a variety of activity patterns encoded in the synaptic matrix between the excitatory neurons. The neurons are characterized by an absolute refractory period of 2 ms and would at saturation emit spikes at a rate of 500 s⁻¹, yet the collective operation of the network allows stable retrieval performance at rates as low as 20–25 s⁻¹. The network is presented as a model of increasingly realistic neurons assembled in a network with increasingly realistic output structures, on which a variety of experiments can be carried out. The types of features included are: continuous dynamics of the membrane potential except at spike emission; differentiation of excitatory and inhibitory operation; relative refractory period, due to post-spike hyperpolarization; membrane potential decay constants; uniform or random spike transmission delays; inhibition by hyperpolarization or by shunting; short and persistent stimuli, represented as synaptic currents i...


Neural Computation | 1997

Paradigmatic working memory (attractor) cell in IT cortex

Daniel J. Amit; Stefano Fusi; Volodya Yakovlev

We discuss paradigmatic properties of the activity of single cells comprising an attractor, i.e., a developed stable delay activity distribution. To demonstrate these properties and a methodology for measuring their values, we present a detailed account of the spike activity recorded from a single cell in the inferotemporal cortex of a monkey performing a delayed match-to-sample (DMS) task of visual images. In particular, we discuss and exemplify (1) the relation between spontaneous activity and activity immediately preceding the first stimulus in each trial during a series of DMS trials, (2) the effect on the visual response (i.e., activity during stimulation) of stimulus degradation (moving in the space of IT afferents), (3) the behavior of the delay activity (i.e., activity following visual stimulation) under stimulus degradation (attractor dynamics and the basin of attraction), and (4) the propagation of information between trials, the vehicle for the formation of (contextual) correlations by learning a fixed stimulus sequence (Miyashita, 1988). In the process of the discussion and demonstration, we expose effective tools for the identification and characterization of attractor dynamics. A color version of this article is found on the Web at: http://www.fiz.huji.ac.il/staff/acc/faculty/damita

Collaboration

Top co-authors of Daniel J. Amit and their affiliations:

Misha Tsodyks
Weizmann Institute of Science

Sandro Romani
Howard Hughes Medical Institute

Hanoch Gutfreund
Hebrew University of Jerusalem

Volodya Yakovlev
Interdisciplinary Center for Neural Computation

Davide Badoni
Sapienza University of Rome

Haim Sompolinsky
Hebrew University of Jerusalem

Shaul Hochstein
Hebrew University of Jerusalem