Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David Hansel is active.

Publication


Featured research published by David Hansel.


Neural Computation | 1995

Synchrony in excitatory neural networks

David Hansel; Germán Mato; Claude Meunier

Synchronization properties of fully connected networks of identical oscillatory neurons are studied, assuming purely excitatory interactions. We analyze their dependence on the time course of the synaptic interaction and on the response of the neurons to small depolarizations. Two types of responses are distinguished. In the first type, neurons always respond to small depolarization by advancing the next spike. In the second type, an excitatory postsynaptic potential (EPSP) received after the refractory period delays the firing of the next spike, while an EPSP received at a later time advances the firing. For these two types of responses we derive general conditions under which excitation destabilizes in-phase synchrony. We show that excitation is generally desynchronizing for neurons with a response of type I but can be synchronizing for responses of type II when the synaptic interactions are fast. These results are illustrated with three models of neurons: the Lapicque integrate-and-fire model, the model of Connor et al., and the Hodgkin-Huxley model. The latter exhibits a type II response, at variance with the first two models, which have type I responses. We then examine the consequences of these results for large networks, focusing on the states of partial coherence that emerge. Finally, we study the Lapicque model and the model of Connor et al. at large coupling and show that excitation can be desynchronizing even beyond the weak coupling regime.
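The type I behavior described above can be illustrated in a few lines. The sketch below assumes a Lapicque (leaky integrate-and-fire) neuron with purely illustrative parameters rather than those used in the paper: it delivers a small voltage kick at different times after a spike and measures how much the next spike is advanced; for this model the advance is never negative.

import numpy as np

# Leaky integrate-and-fire (Lapicque) neuron: tau*dV/dt = -V + I,
# spike when V >= V_th, reset to V_reset. Illustrative parameters.
tau, I, V_th, V_reset, dt = 10.0, 1.5, 1.0, 0.0, 0.001

def next_spike_time(epsp_time=None, epsp_size=0.05):
    """Time of the next spike after reset, with an optional small voltage
    kick (EPSP) delivered at epsp_time."""
    V, t = V_reset, 0.0
    while V < V_th:
        if epsp_time is not None and abs(t - epsp_time) < dt / 2:
            V += epsp_size                 # small depolarization
        V += dt * (-V + I) / tau           # Euler step
        t += dt
    return t

T0 = next_spike_time()                     # unperturbed inter-spike interval
for t_epsp in np.linspace(0.5, T0 - 0.5, 8):
    advance = T0 - next_spike_time(epsp_time=t_epsp)
    # For this model the advance is always >= 0: an EPSP advances the next
    # spike whatever its timing, i.e. a type I response.
    print(f"EPSP at t = {t_epsp:5.2f}  advance of next spike = {advance:.3f}")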


The Journal of Neuroscience | 2006

Competition between Feedback Loops Underlies Normal and Pathological Dynamics in the Basal Ganglia

Arthur Leblois; Thomas Boraud; Wassilios G. Meissner; Hagai Bergman; David Hansel

Experiments performed in normal animals suggest that the basal ganglia (BG) are crucial in motor program selection. BG are also involved in movement disorders. In particular, BG neuronal activity in parkinsonian animals and patients is more oscillatory and more synchronous than in normal individuals. We propose a new model for the function and dysfunction of the motor part of BG. We hypothesize that the striatum, the subthalamic nucleus, the internal pallidum (GPi), the thalamus, and the cortex are involved in closed feedback loops. The direct (cortex–striatum–GPi–thalamus–cortex) and the hyperdirect loops (cortex–subthalamic nucleus–GPi–thalamus–cortex), which have different polarities, play a key role in the model. We show that the competition between these two loops provides the BG–cortex system with the ability to perform motor program selection. Under the assumption that dopamine potentiates corticostriatal synaptic transmission, we demonstrate that, in our model, moderate dopamine depletion leads to a complete loss of action selection ability. High depletion can lead to synchronous oscillations. These modifications of the network dynamical state stem from an imbalance between the feedback in the direct and hyperdirect loops when dopamine is depleted. Our model predicts that the loss of selection ability occurs before the appearance of oscillations, suggesting that Parkinson's disease motor impairments are not directly related to abnormal oscillatory activity. Another major prediction of our model is that synchronous oscillations driven by the hyperdirect loop appear in BG after inactivation of the striatum.
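The selection mechanism can be caricatured by a toy rate model. The sketch below is not the published network model (which represents the striatum, subthalamic nucleus, GPi, thalamus and cortex explicitly); it only retains the two ingredients emphasized above, a channel-specific positive loop standing in for the direct pathway and a diffuse negative loop standing in for the hyperdirect pathway, with made-up gains and inputs.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def steady_activity(w_direct, w_hyper=0.6, inputs=(1.0, 0.9), T=200.0, dt=0.1):
    """Toy two-channel rate model: each channel feeds back on itself with
    gain w_direct (a stand-in for the direct loop) and both channels share
    a diffuse negative feedback with gain w_hyper (a stand-in for the
    hyperdirect loop). All numbers are made up."""
    c = np.zeros(2)
    for _ in range(int(T / dt)):
        drive = np.array(inputs) + w_direct * c - w_hyper * c.sum()
        c += dt * (-c + relu(drive))
    return np.round(c, 3)

print("strong corticostriatal gain:", steady_activity(w_direct=0.95))
print("reduced gain ('dopamine depleted'):", steady_activity(w_direct=0.60))
# With the illustrative gains above, the first case amplifies the channel
# with the larger input and silences the other (selection); lowering the
# direct-loop gain leaves both channels active, i.e. selection is lost.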


Neural Computation | 2003

Rate models for conductance-based cortical neuronal networks

Oren Shriki; David Hansel; Haim Sompolinsky

Population rate models provide powerful tools for investigating the principles that underlie the cooperative function of large neuronal systems. However, biophysical interpretations of these models have been ambiguous. Hence, their applicability to real neuronal systems and their experimental validation have been severely limited. In this work, we show that conductance-based models of large cortical neuronal networks can be described by simplified rate models, provided that the network state does not possess a high degree of synchrony. We first derive a precise mapping between the parameters of the rate equations and those of the conductance-based network models for time-independent inputs. This mapping is based on the assumption that the effect of increasing the cell's input conductance on its f-I curve is mainly subtractive. This assumption is confirmed by a single compartment Hodgkin-Huxley type model with a transient potassium A-current. This approach is applied to the study of a network model of a hypercolumn in primary visual cortex. We also explore extensions of the rate model to the dynamic domain by studying the firing-rate response of our conductance-based neuron to time-dependent noisy inputs. We show that the dynamics of this response can be approximated by a time-dependent second-order differential equation. This phenomenological single-cell rate model is used to calculate the response of a conductance-based network to time-dependent inputs.
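The subtractive assumption in the mapping can be written down in a few lines. The sketch below uses a threshold-linear f-I curve whose threshold current is shifted in proportion to the added input conductance; the gain, threshold, and shift values are illustrative, not those fitted in the paper.

import numpy as np

# Threshold-linear rate function whose threshold current is shifted by the
# added input conductance (the "mainly subtractive" effect assumed above).
# Gain, threshold, and shift-per-conductance values are illustrative.
beta, I_c0, k = 35.0, 0.8, 2.0

def rate(I, delta_g=0.0):
    """Firing rate for input current I and extra input conductance delta_g."""
    return beta * np.maximum(I - (I_c0 + k * delta_g), 0.0)

currents = np.linspace(0.5, 2.0, 4)
for dg in (0.0, 0.1, 0.2):
    print(f"delta_g = {dg:.1f}:", [round(float(rate(I, dg)), 1) for I in currents])
# Each added conductance shifts the whole f-I curve to the right without
# changing its slope, which is what makes the rate description tractable.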


Journal of Computational Neuroscience | 1996

Chaos and synchrony in a model of a hypercolumn in visual cortex

David Hansel; Haim Sompolinsky

Neurons in cortical slices emit spikes or bursts of spikes regularly in response to a suprathreshold current injection. This behavior is in marked contrast to the behavior of cortical neurons in vivo, whose response to electrical or sensory input displays a strong degree of irregularity. Correlation measurements show a significant degree of synchrony in the temporal fluctuations of neuronal activities in cortex. We explore the hypothesis that these phenomena are the result of the synchronized chaos generated by the deterministic dynamics of local cortical networks. A model of a “hypercolumn” in the visual cortex is studied. It consists of two populations of neurons, one inhibitory and one excitatory. The dynamics of the neurons is based on a Hodgkin-Huxley type model of excitable voltage-clamped cells with several cellular and synaptic conductances. A slow potassium current is included in the dynamics of the excitatory population to reproduce the observed adaptation of the spike trains emitted by these neurons. The pattern of connectivity has a spatial structure which is correlated with the internal organization of hypercolumns in orientation columns. Numerical simulations of the model show that in an appropriate parameter range, the network settles in a synchronous chaotic state, characterized by a strong temporal variability of the neural activity which is correlated across the hypercolumn. Strong inhibitory feedback is essential for the stabilization of this state. These results show that the cooperative dynamics of large neuronal networks are capable of generating variability and synchrony similar to those observed in cortex. Auto-correlation and cross-correlation functions of neuronal spike trains are computed, and their temporal and spatial features are analyzed. In other parameter regimes, the network exhibits two additional states: synchronized oscillations and an asynchronous state. We use our model to study cortical mechanisms for orientation selectivity. It is shown that in a suitable parameter regime, when the input is not oriented, the network has a continuum of states, each representing an inhomogeneous population activity which is peaked at one of the orientation columns. As a result, when a weakly oriented input stimulates the network, it yields a sharp orientation tuning. The properties of the network in this regime, including the appearance of virtual rotations and broad stimulus-dependent cross-correlations, are investigated. The results agree with the predictions of the mean field theory which was previously derived for a simplified model of stochastic, two-state neurons. The relation between the results of the model and experiments in visual cortex is discussed.
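The abstract mentions auto- and cross-correlation functions of spike trains; the sketch below shows one standard way to estimate such a correlogram from binned spike trains. The data are synthetic (one train copies a fraction of the other's spikes) and have nothing to do with the model itself.

import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.001, 50.0                      # 1 ms bins, 50 s of synthetic data
n_bins = int(T / dt)

# Synthetic spike trains: s2 copies roughly 30% of the spikes of s1, so the
# cross-correlogram should show a sharp peak at zero lag (toy data only).
s1 = rng.random(n_bins) < 20.0 * dt
s2 = (rng.random(n_bins) < 15.0 * dt) | (s1 & (rng.random(n_bins) < 0.3))

def correlogram(a, b, max_lag=50):
    """Covariance of two binned spike trains as a function of lag (in bins)."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, np.array([np.dot(a, np.roll(b, k)) for k in lags]) / n_bins

lags, cc = correlogram(s1, s2)
print("the covariance peaks at lag", lags[np.argmax(cc)], "bins")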


Neural Computation | 1998

On numerical simulations of integrate-and-fire neural networks

David Hansel; Germán Mato; Claude Meunier; L. Neltner

It is shown that very small time steps are required to reproduce correctly the synchronization properties of large networks of integrate-and-fire neurons when the differential system describing their dynamics is integrated with the standard Euler or second-order Runge-Kutta algorithms. The reason for that behavior is analyzed, and a simple improvement of these algorithms is proposed.
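The kind of sub-step spike-time treatment at issue can be sketched for a single integrate-and-fire neuron. Linear interpolation of the threshold crossing, used below, is one common variant of such a correction; the specific scheme proposed in the paper may differ in its details, and all parameter values are illustrative.

import math

def lif_spike_times(I=2.0, T=40.0, dt=0.5, tau=10.0, v_th=1.0, v_reset=0.0,
                    interpolate=True):
    """Euler integration of tau*dV/dt = -V + I. Without interpolation, a spike
    is assigned to the end of the step in which V crossed threshold, so all
    spike times sit on the integration grid. With interpolation, the crossing
    time inside the step is estimated linearly and the membrane potential is
    reset accordingly."""
    v, t, spikes = v_reset, 0.0, []
    while t < T:
        v_new = v + dt * (-v + I) / tau
        if v_new >= v_th:
            if interpolate:
                frac = (v_th - v) / (v_new - v)   # fraction of the step at which V crossed
                spikes.append(t + frac * dt)
                # let V evolve from reset for the remainder of the step
                v = v_reset + (1.0 - frac) * dt * (-v_reset + I) / tau
            else:
                spikes.append(t + dt)
                v = v_reset
        else:
            v = v_new
        t += dt
    return spikes

print("exact inter-spike interval:", round(10.0 * math.log(2.0), 3))   # tau*ln(I/(I - v_th))
print("plain Euler, dt = 0.5:     ", [round(s, 2) for s in lif_spike_times(interpolate=False)])
print("with interpolation:        ", [round(s, 2) for s in lif_spike_times()])
# The plain-Euler spike times are all multiples of dt, i.e. quantized to the
# integration grid; interpolation recovers sub-step timing, which is what
# matters when the quantity of interest is the synchrony of a large network.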


EPL | 1993

Phase Dynamics for Weakly Coupled Hodgkin-Huxley Neurons

David Hansel; G. Mato; Claude Meunier

Hodgkin-Huxley model neurons coupled by weak excitatory interactions are studied by a phase reduction technique. All the information about the coupling between the neurons and their synchronization is then contained in an effective interaction between their phases. We show analytically that an excitatory coupling can result in an effective inhibition between the neurons reducing their firing rates. Systems of two neurons exhibit bistability and out-of-phase locking. It is suggested that these features may have significant consequences for networks.
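The phase-reduction framework is easy to reproduce for a toy interaction function. In the sketch below, Gamma is an arbitrary illustrative choice (not the function derived from the Hodgkin-Huxley model in the paper); the phase difference of two identical oscillators evolves according to the odd part of Gamma, its stable zeros give the locked states, and for this particular Gamma those are a pair of out-of-phase states.

import numpy as np

# Two identical oscillators coupled through an effective phase interaction:
#   dphi1/dt = omega + eps * Gamma(phi1 - phi2), and symmetrically for phi2.
# The phase difference psi = phi1 - phi2 then obeys
#   dpsi/dt = eps * (Gamma(psi) - Gamma(-psi)),
# so the locked states are the zeros of the odd part of Gamma, and those with
# a negative slope are stable. The Gamma below is purely illustrative, not the
# one derived from the Hodgkin-Huxley model in the paper.

def gamma(psi):
    return -np.sin(psi) + 0.6 * np.sin(2.0 * psi)

def gamma_odd(psi):
    return gamma(psi) - gamma(-psi)

psi = np.linspace(0.0, 2.0 * np.pi, 100001)
vals = gamma_odd(psi)
stable = np.where(np.diff(np.sign(vals)) < 0)[0]    # downward zero crossings
print("stable phase differences (rad):", np.round(psi[stable], 3))
# For this Gamma the in-phase state (psi = 0) is unstable and the system locks
# at a pair of symmetric out-of-phase states.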


Neural Computation | 2000

The Number of Synaptic Inputs and the Synchrony of Large, Sparse Neuronal Networks

David Golomb; David Hansel

The prevalence of coherent oscillations in various frequency ranges in the central nervous system raises the question of the mechanisms that synchronize large populations of neurons. We study synchronization in models of large networks of spiking neurons with random sparse connectivity. Synchrony occurs only when the average number of synapses, M, that a cell receives is larger than a critical value, Mc. Below Mc, the system is in an asynchronous state. In the limit of weak coupling, assuming identical neurons, we reduce the model to a system of phase oscillators that are coupled via an effective interaction, Γ. In this framework, we develop an approximate theory for sparse networks of identical neurons to estimate Mc analytically from the Fourier coefficients of Γ. Our approach relies on the assumption that the dynamics of a neuron depend mainly on the number of cells that are presynaptic to it. We apply this theory to compute Mc for a model of inhibitory networks of integrate-and-fire (I&F) neurons as a function of the intrinsic neuronal properties (e.g., the refractory period Tr), the synaptic time constants, and the strength of the external stimulus, Iext. The number Mc is found to vary nonmonotonically with the strength of Iext. For Tr = 0, we estimate the minimum value of Mc over all the parameters of the model to be 363.8. Above Mc, the neurons tend to fire in smeared one-cluster states at high firing rates and smeared two-or-more-cluster states at low firing rates. Refractoriness decreases Mc at intermediate and high firing rates. These results are compared to numerical simulations. We show numerically that systems with different sizes, N, behave in the same way provided the connectivity, M, is such that 1/Meff = 1/M - 1/N remains constant when N varies. This allows extrapolating the large-N behavior of a network from numerical simulations of networks of relatively small sizes (N = 800 in our case). We find that our theory predicts with remarkable accuracy the value of Mc and the patterns of synchrony above Mc, provided the synaptic coupling is not too large. We also study the strong coupling regime of inhibitory sparse networks. All of our simulations demonstrate that increasing the coupling strength reduces the level of synchrony of the neuronal activity. Above a critical coupling strength, the network activity is asynchronous. We point out a fundamental limitation for the mechanisms of synchrony relying on inhibition alone, if heterogeneities in the intrinsic properties of the neurons and spatial fluctuations in the external input are also taken into account.
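The finite-size scaling quoted above is straightforward to use in practice. The sketch below computes the connectivity M that keeps 1/Meff = 1/M - 1/N fixed as the network size N grows, and builds a random connectivity matrix in which every neuron receives exactly M inputs; the numbers and the construction are illustrative, not taken from the paper.

import numpy as np

def m_for_fixed_meff(m_eff, n):
    """Connectivity M such that 1/Meff = 1/M - 1/N stays constant when the
    network size N changes (the scaling quoted in the abstract)."""
    return 1.0 / (1.0 / m_eff + 1.0 / n)

m_eff = 400.0                                     # illustrative value
for n in (800, 3200, 12800):
    print(f"N = {n:6d}  ->  M = {m_for_fixed_meff(m_eff, n):.1f}")

def random_in_degree(n, m, seed=0):
    """Random sparse connectivity in which every neuron receives exactly m
    inputs chosen among the other neurons (illustrative construction)."""
    rng = np.random.default_rng(seed)
    C = np.zeros((n, n), dtype=bool)
    for i in range(n):
        presyn = rng.choice([j for j in range(n) if j != i], size=m, replace=False)
        C[i, presyn] = True
    return C

C = random_in_degree(800, int(m_for_fixed_meff(m_eff, 800)))
print("mean number of inputs per neuron:", C.sum(axis=1).mean())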


Neural Computation | 2001

Patterns of Synchrony in Neural Networks with Spike Adaptation

C. van Vreeswijk; David Hansel

We study the emergence of synchronized burst activity in networks of neurons with spike adaptation. We show that networks of tonically firing adapting excitatory neurons can evolve to a state where the neurons burst in a synchronized manner. The mechanism leading to this burst activity is analyzed in a network of integrate-and-fire neurons with spike adaptation. The dependence of this state on the different network parameters is investigated, and it is shown that this mechanism is robust against inhomogeneities, sparseness of the connectivity, and noise. In networks of two populations, one excitatory and one inhibitory, we show that decreasing the inhibitory feedback can cause the network to switch from a tonically active, asynchronous state to the synchronized bursting state. Finally, we show that the same mechanism also causes synchronized burst activity in networks of more realistic conductance-based model neurons.
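The single-neuron ingredient of this mechanism, spike-frequency adaptation, can be sketched directly. The code below simulates one integrate-and-fire neuron with a spike-triggered adaptation current and shows the inter-spike intervals lengthening; the parameters are made up, and the synchronized bursting itself only appears once many such neurons are coupled by recurrent excitation, as in the paper.

import numpy as np

def adapting_lif(I=2.0, T=200.0, dt=0.1, tau=10.0, tau_a=100.0, b=0.2,
                 v_th=1.0, v_reset=0.0):
    """Integrate-and-fire neuron with a spike-triggered adaptation current a:
    tau*dV/dt = -V + I - a,  tau_a*da/dt = -a,  and a -> a + b at every spike.
    All parameter values are illustrative."""
    v, a, spikes = v_reset, 0.0, []
    for step in range(int(T / dt)):
        v += dt * (-v + I - a) / tau
        a += dt * (-a) / tau_a
        if v >= v_th:
            v = v_reset
            a += b
            spikes.append(step * dt)
    return spikes

isis = np.diff(adapting_lif())
print("first inter-spike intervals:", np.round(isis[:5], 1))
# The intervals lengthen as the adaptation variable builds up (spike-frequency
# adaptation). The synchronized bursting analyzed in the paper appears when
# many such adapting neurons are coupled through recurrent excitation.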


Journal of Computational Neuroscience | 1997

Traveling waves and the processing of weakly tuned inputs in a cortical network module

Rani Ben-Yishai; David Hansel; Haim Sompolinsky

Recent studies have shown that local cortical feedback can have an important effect on the response of neurons in primary visual cortex to the orientation of visual stimuli. In this work, we study the role of the cortical feedback in shaping the spatiotemporal patterns of activity in cortex. Two questions are addressed: first, what are the limitations on the ability of cortical neurons to lock their activity to rotating oriented stimuli within a single receptive field? Second, can the local architecture of visual cortex lead to the generation of spontaneous traveling pulses of activity? We study these issues analytically by a population-dynamic model of a hypercolumn in visual cortex. The order parameter that describes the macroscopic behavior of the network is the time-dependent population vector of the network. We first study the network dynamics under the influence of a weakly tuned input that slowly rotates within the receptive field. We show that if the cortical interactions have strong spatial modulation, the network generates a sharply tuned activity profile that propagates across the hypercolumn in a path that is completely locked to the stimulus rotation. The resultant rotating population vector maintains a constant angular lag relative to the stimulus, the magnitude of which grows with the stimulus rotation frequency. Beyond a critical frequency the population vector does not lock to the stimulus but executes a quasi-periodic motion with an average frequency that is smaller than that of the stimulus. In the second part we consider the stable intrinsic state of the cortex under the influence of isotropic stimulation. We show that if the local inhibitory feedback is sufficiently strong, the network does not settle into a stationary state but develops spontaneous traveling pulses of activity. Unlike recent models of wave propagation in cortical networks, the connectivity pattern in our model is spatially symmetric; hence, the direction of propagation of these waves is arbitrary. The interaction of these waves with an external oriented stimulus is studied. It is shown that the system can lock to a weakly tuned rotating stimulus if the stimulus frequency is close to the frequency of the intrinsic wave.
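A minimal version of the locking behavior can be simulated with a rate model on a ring of orientation columns. The sketch below is only loosely inspired by the population-dynamic model described above: the connectivity, the stimulus, and all parameter values are invented for illustration, and the run simply shows the population vector following a slowly rotating, weakly tuned stimulus with a constant angular lag.

import numpy as np

# Rate model on a ring of orientation columns driven by a weakly tuned
# stimulus whose orientation rotates slowly. Connectivity, stimulus and all
# parameter values are invented for illustration.
N = 180
theta = np.pi * np.arange(N) / N                      # preferred orientations in [0, pi)
J0, J2 = -1.0, 1.5                                    # uniform inhibition + tuned excitation
J = (J0 + J2 * np.cos(2.0 * (theta[:, None] - theta[None, :]))) / N

tau, dt, eps, C, Omega = 1.0, 0.02, 0.1, 1.0, 0.05    # Omega: stimulus rotation speed
m = np.zeros(N)
for step in range(int(60.0 / dt)):
    t = step * dt
    stim = C * (1.0 + eps * np.cos(2.0 * (theta - Omega * t)))
    m += dt / tau * (-m + np.maximum(stim + J @ m, 0.0))

# Population vector (the order parameter): angle of sum_j m_j * exp(2i*theta_j).
pop_angle = 0.5 * np.angle(np.sum(m * np.exp(2j * theta))) % np.pi
stim_angle = (Omega * t) % np.pi
print("stimulus orientation:   ", round(np.degrees(stim_angle), 1), "deg")
print("population-vector angle:", round(np.degrees(pop_angle), 1), "deg (trails the stimulus)")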


The Journal of Neuroscience | 2012

The Mechanism of Orientation Selectivity in Primary Visual Cortex without a Functional Map

David Hansel; Carl van Vreeswijk

Neurons in primary visual cortex (V1) display substantial orientation selectivity even in species where V1 lacks an orientation map, such as in mice and rats. The mechanism underlying orientation selectivity in V1 with such a salt-and-pepper organization is unknown; it is unclear whether a connectivity that depends on feature similarity is required, or whether a random connectivity suffices. Here we argue for the latter. We study the response to a drifting grating of a network model of layer 2/3 with random recurrent connectivity and feedforward input from layer 4 neurons with random preferred orientations. We show that even though the total feedforward and total recurrent excitatory and inhibitory inputs all have a very weak orientation selectivity, strong selectivity emerges in the neuronal spike responses if the network operates in the balanced excitation/inhibition regime. This is because in this regime the (large) untuned components in the excitatory and inhibitory contributions approximately cancel. As a result, the untuned part of the input into a neuron as well as its modulation with orientation and time all have a size comparable to the neuronal threshold. However, the tunings of the F0 and F1 components of the input are uncorrelated and the high-frequency fluctuations are not tuned. This is reflected in the subthreshold voltage response. Remarkably, due to the nonlinear voltage-firing rate transfer function, the preferred orientations of the F0 and F1 components of the spike response are highly correlated.
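The cancellation argument can be illustrated with toy numbers. In the sketch below, the excitatory and inhibitory inputs are each large and only weakly tuned with orientation; their untuned parts nearly cancel, so the net input is strongly tuned, and a threshold nonlinearity sharpens the tuning further. All quantities are arbitrary and only meant to convey orders of magnitude.

import numpy as np

# Toy illustration of the cancellation argument: excitation and inhibition are
# each large and only weakly tuned, but their untuned parts nearly cancel, so
# the net input is strongly tuned. All numbers are arbitrary.
theta = np.linspace(0.0, np.pi, 180, endpoint=False)         # stimulus orientations
K = 1000                                                     # many inputs per neuron
exc = np.sqrt(K) * 1.00 * (1.0 + 0.03 * np.cos(2 * theta))   # weakly tuned excitation
inh = np.sqrt(K) * 0.97 * (1.0 + 0.01 * np.cos(2 * theta))   # weakly tuned inhibition
net = exc - inh

def f1_over_f0(x):
    """Modulation of a tuning curve relative to its mean."""
    return 2.0 * np.abs(np.sum(x * np.exp(2j * theta))) / np.sum(x)

print("tuning of excitation alone:", round(f1_over_f0(exc), 3))
print("tuning of the net input:   ", round(f1_over_f0(net), 3))
rates = np.maximum(net - 0.8 * net.mean(), 0.0)              # threshold near the mean input
print("tuning of the spike output:", round(f1_over_f0(rates), 3))
# The untuned excitatory and inhibitory components (both of order sqrt(K))
# cancel almost exactly, and the threshold nonlinearity sharpens the
# remaining modulation further.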

Collaboration


Dive into David Hansel's collaborations.

Top Co-Authors

Haim Sompolinsky | Hebrew University of Jerusalem
Germán Mato | National Scientific and Technical Research Council
Carl van Vreeswijk | Centre national de la recherche scientifique
David Golomb | Ben-Gurion University of the Negev
Arthur Leblois | Centre national de la recherche scientifique
Alex Roxin | Northwestern University
Wassilios G. Meissner | Centre national de la recherche scientifique