Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Haim Sompolinsky is active.

Publication


Featured research published by Haim Sompolinsky.


Conference on Learning Theory | 1992

Query by committee

H. S. Seung; M. Opper; Haim Sompolinsky

We propose an algorithm called query by committee, in which a committee of students is trained on the same data set. The next query is chosen according to the principle of maximal disagreement. The algorithm is studied for two toy models: the high-low game and perceptron learning of another perceptron. As the number of queries goes to infinity, the committee algorithm yields asymptotically finite information gain. This leads to generalization error that decreases exponentially with the number of examples. This is in marked contrast to learning from randomly chosen inputs, for which the information gain approaches zero and the generalization error decreases with a relatively slow inverse power law. We suggest that asymptotically finite information gain may be an important characteristic of good query algorithms.
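
The committee principle is easy to state concretely. Below is a minimal sketch of query by committee for the perceptron-learning-a-perceptron toy model; the two-student committee, the classic perceptron training rule, and the random candidate pool are illustrative choices, not the paper's exact Gibbs-training setup.

```python
# Minimal query-by-committee sketch for perceptron learning of a teacher
# perceptron. Committee size, training rule, and candidate pool are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 50                                   # input dimension
teacher = rng.standard_normal(N)         # target perceptron

def train_student(X, y, epochs=50):
    """Train a zero-threshold perceptron on the labeled set (X, y)."""
    w = rng.standard_normal(N)           # independent random start per student
    for _ in range(epochs):
        for x, label in zip(X, y):
            if np.sign(w @ x) != label:
                w += label * x
    return w

X = np.empty((0, N)); y = np.empty(0)
for query in range(100):
    committee = np.array([train_student(X, y) for _ in range(2)])
    # Principle of maximal disagreement: pick the candidate input on which
    # the committee's predictions differ most (for two students, any point
    # they label with opposite signs).
    candidates = rng.standard_normal((200, N))
    votes = np.sign(candidates @ committee.T)        # shape (200, 2)
    x_new = candidates[np.argmax(np.ptp(votes, axis=1))]
    X = np.vstack([X, x_new]); y = np.append(y, np.sign(teacher @ x_new))

# Generalization error of a trained student is the angle to the teacher / pi.
w = train_student(X, y)
print(np.arccos(w @ teacher / (np.linalg.norm(w) * np.linalg.norm(teacher))) / np.pi)
```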


Science | 1996

Chaos in Neuronal Networks with Balanced Excitatory and Inhibitory Activity

C. van Vreeswijk; Haim Sompolinsky

Neurons in the cortex of behaving animals show temporally irregular spiking patterns. The origin of this irregularity and its implications for neural processing are unknown. The hypothesis that the temporal variability in the firing of a neuron results from an approximate balance between its excitatory and inhibitory inputs was investigated theoretically. Such a balance emerges naturally in large networks of excitatory and inhibitory neuronal populations that are sparsely connected by relatively strong synapses. The resulting state is characterized by strongly chaotic dynamics, even when the external inputs to the network are constant in time. Such a network exhibits a linear response, despite the highly nonlinear dynamics of single neurons, and reacts to changing external stimuli on time scales much smaller than the integration time constant of a single neuron.
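
A minimal simulation conveys the mechanism. The sketch below implements sparse, randomly connected excitatory and inhibitory binary units with synapses of order 1/√K, where K is the mean number of inputs per cell; the population sizes, coupling constants, and external drives are illustrative choices, not the paper's.

```python
# Minimal sketch of a balanced network: sparse random connectivity with
# strong (order 1/sqrt(K)) synapses. Sizes and couplings are illustrative.
import numpy as np

rng = np.random.default_rng(1)
NE, NI, K = 1600, 400, 40                    # population sizes, mean in-degree
JEE, JEI, JIE, JII = 1.0, -2.0, 1.0, -1.0    # O(1) coupling constants
EE, EI = 0.4, 0.05                           # constant external drives
theta = 0.0                                  # firing threshold

def block(n_post, n_pre, J):
    """Each synapse present with probability K/n_pre, strength J/sqrt(K)."""
    return (rng.random((n_post, n_pre)) < K / n_pre) * (J / np.sqrt(K))

W = np.block([[block(NE, NE, JEE), block(NE, NI, JEI)],
              [block(NI, NE, JIE), block(NI, NI, JII)]])
ext = np.concatenate([np.full(NE, EE), np.full(NI, EI)]) * np.sqrt(K)

s = (rng.random(NE + NI) < 0.3).astype(float)
for _ in range(100 * (NE + NI)):             # random asynchronous updates
    i = rng.integers(NE + NI)
    s[i] = float(W[i] @ s + ext[i] > theta)
# The O(sqrt(K)) excitatory and inhibitory inputs cancel on average, and the
# O(1) residual fluctuations make single units flip irregularly even though
# the external drive is constant in time.
```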


Neural Computation | 1998

Chaotic balanced state in a model of cortical circuits

C. van Vreeswijk; Haim Sompolinsky

The nature and origin of the temporal irregularity in the electrical activity of cortical neurons in vivo are not well understood. We consider the hypothesis that this irregularity is due to a balance of excitatory and inhibitory currents into the cortical cells. We study a network model with excitatory and inhibitory populations of simple binary units. The internal feedback is mediated by relatively large synaptic strengths, so that the magnitude of the total excitatory and inhibitory feedback is much larger than the neuronal threshold. The connectivity is random and sparse. The mean number of connections per unit is large, though small compared to the total number of cells in the network. The network also receives a large, temporally regular input from external sources. We present an analytical solution of the mean-field theory of this model, which is exact in the limit of large network size. This theory reveals a new cooperative stationary state of large networks, which we term a balanced state. In this state, a balance between the excitatory and inhibitory inputs emerges dynamically for a wide range of parameters, resulting in a net input whose temporal fluctuations are of the same order as its mean. The internal synaptic inputs act as a strong negative feedback, which linearizes the population responses to the external drive despite the strong nonlinearity of the individual cells. This feedback also greatly stabilizes the system's state and enables it to track a time-dependent input on time scales much shorter than the time constant of a single cell. The spatiotemporal statistics of the balanced state are calculated. It is shown that the autocorrelations decay on a short time scale, yielding approximately Poissonian temporal statistics. The activity levels of single cells are broadly distributed, and their distribution exhibits a skewed shape with a long power-law tail. The chaotic nature of the balanced state is revealed by showing that the evolution of the microscopic state of the network is extremely sensitive to small deviations in its initial conditions. The balanced state generated by the sparse, strong connections is an asynchronous chaotic state. It is accompanied by weak spatial cross-correlations, the strength of which vanishes in the limit of large network size. This is in contrast to the synchronized chaotic states exhibited by more conventional network models with high connectivity of weak synapses.
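
The extreme sensitivity to initial conditions can be demonstrated directly: evolve two copies of the same network from states differing in a single unit and watch the Hamming distance grow. The sketch below uses a single mixed population with signed random synapses for brevity, an illustrative simplification that ignores the E/I split; the parameters are not the paper's.

```python
# Minimal sketch of chaotic sensitivity in a balanced binary network: two
# copies differing in one unit's initial state diverge rapidly. The single
# mixed population and all parameters are illustrative simplifications.
import numpy as np

rng = np.random.default_rng(2)
N, K, theta = 1000, 50, 0.0
# Sparse strong synapses: present with prob K/N, value of order 1/sqrt(K),
# with a net inhibitory mean so the population activity stays balanced.
J = rng.choice([1.0, -1.5], size=(N, N)) / np.sqrt(K)
J *= rng.random((N, N)) < K / N
ext = 0.1 * np.sqrt(K)                    # constant external drive

def sweep(s, order):
    for i in order:                       # asynchronous updates
        s[i] = float(J[i] @ s + ext > theta)

s1 = (rng.random(N) < 0.4).astype(float)
s2 = s1.copy()
s2[0] = 1.0 - s2[0]                       # microscopic perturbation: one flip
for step in range(20):
    order = rng.permutation(N)            # identical update order for both
    sweep(s1, order); sweep(s2, order)
    print(step, np.mean(s1 != s2))        # Hamming distance grows to O(1)
```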


Annals of Physics | 1987

Statistical mechanics of neural networks near saturation

Daniel J. Amit; Hanoch Gutfreund; Haim Sompolinsky

The Hopfield model of a neural network is studied near its saturation, i.e., when the number p of stored patterns increases with the size of the network N, as p = αN. The mean-field theory for this system is described in detail. The system possesses, at low α, both a spin-glass phase and 2p dynamically stable degenerate ferromagnetic phases. The latter have essentially full macroscopic overlaps with the memorized patterns, and provide effective associative memory, despite the spin-glass features. The network can retrieve patterns, at T = 0, with an error of less than 1.5% for α < αc = 0.14. At αc the ferromagnetic (FM) retrieval states disappear discontinuously. Numerical simulations show that even above αc the overlaps with the stored patterns are not zero, but the level of error precludes meaningful retrieval. The difference between the statistical mechanics and the simulations is discussed. As α decreases below 0.05 the FM retrieval states become ground states of the system, and for α < 0.03 mixture states appear. The level of storage creates noise, akin to temperature at finite p. Replica symmetry breaking is found to be salient in the spin-glass state, but in the retrieval states it appears at extremely low temperatures, and is argued to have a very weak effect. This is corroborated by simulations. The study is extended to survey the phase diagram of the system in the presence of stochastic synaptic noise (temperature), and the effect of external fields (neuronal thresholds) coupled to groups of patterns. It is found that a field coupled to many patterns has a very limited utility in enhancing their learning. Finally, we discuss the robustness of the network to the relaxation of various underlying assumptions, as well as some new trends in the study of neural networks.
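
The saturation behavior is easy to reproduce numerically. A minimal sketch follows, with the network size and the sampled α values as illustrative choices.

```python
# Minimal sketch of Hopfield retrieval near saturation: Hebbian storage of
# p = alpha*N random patterns, zero-temperature asynchronous dynamics.
import numpy as np

rng = np.random.default_rng(3)
N = 500

def retrieval_overlap(alpha, sweeps=10):
    p = max(1, int(alpha * N))
    xi = rng.choice([-1, 1], size=(p, N))      # random stored patterns
    J = (xi.T @ xi) / N                        # Hebb rule
    np.fill_diagonal(J, 0.0)
    s = xi[0].copy()                           # start at a stored pattern
    for _ in range(sweeps):                    # T = 0 asynchronous updates
        for i in rng.permutation(N):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return (s @ xi[0]) / N                     # overlap with that pattern

for alpha in (0.05, 0.10, 0.14, 0.20):
    print(alpha, retrieval_overlap(alpha))
# Below alpha_c the overlap stays near 1 (retrieval error under 1.5%);
# above it the ferromagnetic retrieval states disappear and the overlap drops.
```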


Nature Neuroscience | 2006

The tempotron: a neuron that learns spike timing–based decisions

Robert Gütig; Haim Sompolinsky

The timing of action potentials in sensory neurons contains substantial information about the eliciting stimuli. Although the computational advantages of spike timing–based neuronal codes have long been recognized, it is unclear whether, and if so how, neurons can learn to read out such representations. We propose a new, biologically plausible supervised synaptic learning rule that enables neurons to efficiently learn a broad range of decision rules, even when information is embedded in the spatiotemporal structure of spike patterns rather than in mean firing rates. The number of categorizations of random spatiotemporal patterns that a neuron can implement is several times larger than the number of its synapses. The underlying nonlinear temporal computation allows neurons to access information beyond single-neuron statistics and to discriminate between inputs on the basis of multineuronal spike statistics. Our work demonstrates the high capacity of neural systems to learn to decode information embedded in distributed patterns of spike synchrony.
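
The learning rule itself is compact. Below is a minimal sketch of tempotron learning on random single-spike patterns; the kernel time constants, learning rate, and the omission of post-spike shunting are illustrative simplifications.

```python
# Minimal tempotron sketch: a leaky integrator classifies spatiotemporal
# spike patterns; on errors, weights move the voltage at its maximum toward
# the correct side of threshold. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n_syn, n_pat, T = 100, 20, 100.0          # synapses, patterns, window (ms)
tau_m, tau_s, lam, V_thr = 15.0, 3.75, 0.02, 1.0
grid = np.arange(0.0, T, 0.5)

def psp(dt):
    """Double-exponential postsynaptic potential kernel; zero for dt <= 0."""
    return np.where(dt > 0, np.exp(-dt / tau_m) - np.exp(-dt / tau_s), 0.0)

V0 = 1.0 / psp(np.linspace(0.01, T, 10000)).max()   # normalize peak to 1

spikes = rng.uniform(0, T, size=(n_pat, n_syn))     # one spike per synapse
labels = rng.choice([1, -1], size=n_pat)            # fire vs. stay silent
w = rng.normal(0.0, 0.1, n_syn)

for epoch in range(200):
    for mu in rng.permutation(n_pat):
        # V(t) = sum_i w_i * K(t - t_i), evaluated on the time grid.
        Kmat = V0 * psp(grid[:, None] - spikes[mu][None, :])
        V = Kmat @ w
        if (V.max() > V_thr) != (labels[mu] == 1):  # classification error
            t_max = np.argmax(V)                    # time of maximal potential
            w += lam * labels[mu] * Kmat[t_max]     # tempotron update
```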


Current Opinion in Neurobiology | 1997

New perspectives on the mechanisms for orientation selectivity

Haim Sompolinsky; Robert Shapley

Since the discovery of orientation selectivity by Hubel and Wiesel, the mechanisms responsible for this remarkable operation in the visual cortex have been controversial. Experimental studies over the past year have highlighted the contribution of feedforward thalamo-cortical afferents, as proposed originally by Hubel and Wiesel, but they have also indicated that this contribution alone is insufficient to account for the sharp orientation tuning observed in the visual cortex. Recent advances in understanding the functional architecture of local cortical circuitry have led to new proposals for the involvement of intracortical recurrent excitation and inhibition in orientation selectivity. Establishing how these two mechanisms work together remains an important experimental and theoretical challenge.
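
A toy ring model illustrates how weakly tuned feedforward input can be sharpened by recurrent excitation and inhibition, in the spirit of the recurrent proposals discussed above; the parameters below are illustrative, not drawn from the review.

```python
# Minimal ring-model sketch of recurrent sharpening of orientation tuning:
# broadly tuned feedforward input plus tuned recurrent excitation and
# untuned inhibition. All parameters are illustrative.
import numpy as np

N = 180
theta = np.linspace(-np.pi / 2, np.pi / 2, N, endpoint=False)  # preferred angles
theta0 = 0.0                                                   # stimulus angle

h_ff = 1.0 + 0.2 * np.cos(2 * (theta - theta0))  # weakly tuned thalamic drive
J0, J2 = -2.0, 1.8                   # uniform inhibition, tuned excitation
J = (J0 + J2 * np.cos(2 * (theta[:, None] - theta[None, :]))) / N

r = np.zeros(N)
for _ in range(2000):                # relax rate dynamics: r' = -r + [h + J r]_+
    r += 0.05 * (-r + np.maximum(h_ff + J @ r, 0.0))

# The recurrent loop amplifies the tuned component of the input (linear gain
# 1/(1 - J2/2) = 10 here) while suppressing the untuned mean, so the steady
# rates r are far more sharply tuned around theta0 than the feedforward drive.
```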


Physical Review Letters | 2001

Equilibrium Properties of Temporally Asymmetric Hebbian Plasticity

Jonathan E. Rubin; Daniel D. Lee; Haim Sompolinsky

A theory of temporally asymmetric Hebb rules, which depress or potentiate synapses depending upon whether the postsynaptic cell fires before or after the presynaptic one, is presented. Using the Fokker-Planck formalism, we show that the equilibrium synaptic distribution induced by such rules is highly sensitive to the manner in which bounds on the allowed range of synaptic values are imposed. In a biologically plausible multiplicative model, the synapses in asynchronous networks reach a distribution that is invariant to the firing rates of either the presynaptic or postsynaptic cells. When these cells are temporally correlated, the synaptic strength varies smoothly with the degree and phase of their synchrony.
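
The sensitivity to how bounds are imposed can be seen in a few lines. Below is a minimal sketch comparing an additive rule with hard bounds to a multiplicative rule whose updates scale with the synaptic strength; the pairing statistics, amplitudes, and depression bias are illustrative choices.

```python
# Minimal sketch of STDP equilibria under additive (hard-bound) versus
# multiplicative (soft-bound) rules, with random pre/post spike pairings.
import numpy as np

rng = np.random.default_rng(5)
n, steps = 5000, 20000
A_plus, A_minus = 0.01, 0.0105        # potentiation/depression amplitudes

def equilibrium(multiplicative):
    w = np.full(n, 0.5)
    for _ in range(steps):
        k = np.exp(-rng.exponential(1.0, n))   # kernel value exp(-|dt|/tau)
        pot = rng.random(n) < 0.5              # pre-before-post pairings
        if multiplicative:                     # updates scale with w: soft bounds
            dw = np.where(pot, A_plus * (1 - w) * k, -A_minus * w * k)
        else:                                  # additive rule, hard bounds
            dw = np.where(pot, A_plus * k, -A_minus * k)
        w = np.clip(w + dw, 0.0, 1.0)
    return w

w_add, w_mul = equilibrium(False), equilibrium(True)
print("additive: fraction at the bounds =",
      np.mean((w_add < 0.02) | (w_add > 0.98)))
print("multiplicative: mean, std =", w_mul.mean(), w_mul.std())
# The additive rule drives synapses onto the bounds, while the multiplicative
# rule settles into a smooth unimodal distribution near A_plus/(A_plus+A_minus).
```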


Nature Neuroscience | 2005

Bistability of cerebellar Purkinje cells modulated by sensory stimulation

Yonatan Loewenstein; Séverine Mahon; Paul Chadderton; Kazuo Kitamura; Haim Sompolinsky; Yosef Yarom; Michael Häusser

A persistent change in neuronal activity after brief stimuli is a common feature of many neuronal microcircuits. This persistent activity can be sustained by ongoing reverberant network activity or by the intrinsic biophysical properties of individual cells. Here we demonstrate that rat and guinea pig cerebellar Purkinje cells in vivo show bistability of membrane potential and spike output on the time scale of seconds. The transition between membrane potential states can be bidirectionally triggered by the same brief current pulses. We also show that sensory activation of the climbing fiber input can switch Purkinje cells between the two states. The intrinsic nature of Purkinje cell bistability and its control by sensory input can be explained by a simple biophysical model. Purkinje cell bistability may have a key role in the short-term processing and storage of sensory information in the cerebellar cortex.
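
Intrinsic bistability of the kind invoked here can be captured by a one-variable membrane model: a leak current plus a persistent inward current with sigmoidal activation yields two stable resting potentials that brief pulses switch between. The sketch below is a generic illustration with made-up parameters, not the paper's biophysical model.

```python
# Minimal sketch of membrane bistability: leak plus a persistent inward
# current with sigmoidal activation gives two stable resting states that
# brief current pulses switch between. All parameters are illustrative.
import numpy as np

dt, T = 0.1, 2000.0                          # time step and duration (ms)
g_L, E_L = 0.1, -70.0                        # leak conductance and reversal
g_P, E_P, V_h, k = 0.12, 30.0, -45.0, 4.0    # persistent inward current

def m_inf(V):                                # sigmoidal steady-state activation
    return 1.0 / (1.0 + np.exp(-(V - V_h) / k))

t = np.arange(0.0, T, dt)
I_ext = np.zeros_like(t)
I_ext[(t > 500) & (t < 520)] = 3.0           # brief depolarizing pulse: down -> up
I_ext[(t > 1200) & (t < 1220)] = -3.0        # brief hyperpolarizing pulse: up -> down

V = np.empty_like(t); V[0] = -70.0
for i in range(1, len(t)):                   # forward-Euler integration, C = 1
    I_ion = -g_L * (V[i-1] - E_L) + g_P * m_inf(V[i-1]) * (E_P - V[i-1])
    V[i] = V[i-1] + dt * (I_ion + I_ext[i])
# V rests near -70 mV (down state) or on a depolarized plateau (up state),
# and each state persists long after the 20 ms pulse ends.
```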


Proceedings of the National Academy of Sciences of the United States of America | 2008

Memory traces in dynamical systems

Surya Ganguli; Dongsung Huh; Haim Sompolinsky

To perform nontrivial, real-time computations on a sensory input stream, biological systems must retain a short-term memory trace of their recent inputs. It has been proposed that generic high-dimensional dynamical systems could retain a memory trace for past inputs in their current state. This raises important questions about the fundamental limits of such memory traces and the properties required of dynamical systems to achieve these limits. We address these issues by applying Fisher information theory to dynamical systems driven by time-dependent signals corrupted by noise. We introduce the Fisher Memory Curve (FMC) as a measure of the signal-to-noise ratio (SNR) embedded in the dynamical state relative to the input SNR. The integrated FMC indicates the total memory capacity. We apply this theory to linear neuronal networks and show that the capacity of networks with normal connectivity matrices is exactly 1 and that of any network of N neurons is, at most, N. A nonnormal network achieving this bound is subject to stringent design constraints: It must have a hidden feedforward architecture that superlinearly amplifies its input for a time of order N, and the input connectivity must optimally match this architecture. The memory capacity of networks subject to saturating nonlinearities is further limited, and cannot exceed √N. This limit can be realized by feedforward structures with divergent fan out that distributes the signal across neurons, thereby avoiding saturation. We illustrate the generality of the theory by showing that memory in fluid systems can be sustained by transient nonnormal amplification due to convective instability or the onset of turbulence.
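
The central quantities can be computed directly for small linear networks. Below is a sketch of the FMC, J(k) = v^T (W^k)^T C^{-1} W^k v with C the stationary noise covariance, comparing a normal matrix to a plain (non-optimized) feedforward delay line; both example matrices are illustrative.

```python
# Minimal sketch of the Fisher Memory Curve for a linear network
# x(t+1) = W x(t) + v s(t) + noise. The example matrices are illustrative.
import numpy as np

rng = np.random.default_rng(6)
N, horizon = 50, 200

def fmc(W, v):
    C, Wl = np.zeros((N, N)), np.eye(N)      # C = sum_l W^l (W^l)^T
    for _ in range(500):
        C += Wl @ Wl.T
        Wl = W @ Wl
    Cinv = np.linalg.inv(C)
    J, Wk_v = [], v.copy()
    for _ in range(horizon):
        J.append(Wk_v @ Cinv @ Wk_v)         # J(k) for k = 0, 1, ...
        Wk_v = W @ Wk_v
    return np.array(J)

v = np.zeros(N); v[0] = 1.0
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))   # random orthogonal matrix
J_normal = fmc(0.9 * Q, v)                         # normal connectivity
J_chain = fmc(np.diag(np.ones(N - 1), -1), v)      # feedforward delay line

print("total capacity, normal:", J_normal.sum())   # = 1, as the theory states
print("total capacity, chain: ", J_chain.sum())    # harmonic sum, ~ ln N
# A plain delay line already beats any normal network; optimized nonnormal
# networks with superlinear amplification approach the upper bound N.
```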


Neural Computation | 2003

Rate models for conductance-based cortical neuronal networks

Oren Shriki; David Hansel; Haim Sompolinsky

Population rate models provide powerful tools for investigating the principles that underlie the cooperative function of large neuronal systems. However, biophysical interpretations of these models have been ambiguous. Hence, their applicability to real neuronal systems and their experimental validation have been severely limited. In this work, we show that conductance-based models of large cortical neuronal networks can be described by simplified rate models, provided that the network state does not possess a high degree of synchrony. We first derive a precise mapping between the parameters of the rate equations and those of the conductance-based network models for time-independent inputs. This mapping is based on the assumption that the effect of increasing the cell's input conductance on its f-I curve is mainly subtractive. This assumption is confirmed by a single-compartment Hodgkin-Huxley-type model with a transient potassium A-current. This approach is applied to the study of a network model of a hypercolumn in primary visual cortex. We also explore extensions of the rate model to the dynamic domain by studying the firing-rate response of our conductance-based neuron to time-dependent noisy inputs. We show that the dynamics of this response can be approximated by a time-dependent second-order differential equation. This phenomenological single-cell rate model is used to calculate the response of a conductance-based network to time-dependent inputs.
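
The mapping's key assumption, that added input conductance shifts the f-I curve subtractively, translates into a compact rate model. A minimal sketch with threshold-linear units follows; all gains, thresholds, and weights are illustrative, not the fitted Hodgkin-Huxley values.

```python
# Minimal sketch of the conductance-to-rate mapping: threshold-linear f-I
# curves whose effective threshold rises with the synaptic conductance
# (the subtractive effect). All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(7)
N = 100
beta, I_c0, theta_g = 50.0, 0.5, 0.3    # gain, base threshold, shift per conductance
tau, dt = 10e-3, 1e-4                   # rate relaxation time and step (s)
W = rng.uniform(0.0, 2e-4, (N, N))      # synaptic conductance per unit rate
I_ext = rng.uniform(0.5, 1.5, N)        # steady external currents

r = np.zeros(N)
for _ in range(5000):
    g_syn = W @ r                       # total synaptic conductance per cell
    I_syn = 2.0 * g_syn                 # synaptic current, fixed driving force
    # Increased conductance shifts the threshold but leaves the gain intact.
    f = beta * np.maximum(I_ext + I_syn - (I_c0 + theta_g * g_syn), 0.0)
    r += (dt / tau) * (-r + f)
# r holds the reduced model's steady-state rates; equations of this form,
# with parameters read off the cell's f-I curve, stand in for the full
# conductance-based network when its state is asynchronous.
```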

Collaboration


Dive into Haim Sompolinsky's collaborations.

Top Co-Authors

David Hansel | Centre national de la recherche scientifique
Uri Rokni | Hebrew University of Jerusalem
Markus Meister | California Institute of Technology
H. S. Seung | Hebrew University of Jerusalem
Maoz Shamir | Ben-Gurion University of the Negev
David Golomb | Ben-Gurion University of the Negev