Publications


Featured research published by Sandro Romani.


Nature Neuroscience | 2015

Theta sequences are essential for internally generated hippocampal firing fields.

Yingxue Wang; Sandro Romani; Brian Lustig; Anthony Leonardo; Eva Pastalkova

Sensory cue inputs and memory-related internal brain activities govern the firing of hippocampal neurons, but which specific firing patterns are induced by either of the two processes remains unclear. We found that sensory cues guided the firing of neurons in rats on a timescale of seconds and supported the formation of spatial firing fields. Independently of the sensory inputs, the memory-related network activity coordinated the firing of neurons not only on a second-long timescale, but also on a millisecond-long timescale, and was dependent on medial septum inputs. We propose a network mechanism that might coordinate this internally generated firing. Overall, we suggest that two independent mechanisms support the formation of spatial firing fields in hippocampus, but only the internally organized system supports short-timescale sequential firing and episodic memory.


Science | 2017

Behavioral time scale synaptic plasticity underlies CA1 place fields

Katie C. Bittner; Aaron D. Milstein; Christine Grienberger; Sandro Romani; Jeffrey C. Magee

A different form of synaptic plasticity: How do synaptic or other neuronal changes support learning? This subject has been dominated by Hebb's postulate of synaptic change. Although there is strong experimental support for Hebbian plasticity in a number of preparations, alternative ideas have also been developed over the years. Bittner et al. provide in vivo, in vitro, and modeling data to support the view that non-Hebbian plasticity may underlie the formation of hippocampal place fields (see the Perspective by Krupic). Instead of multiple pairings, a single strong Ca2+ plateau potential in neuronal dendrites paired with spatial inputs may be sufficient to produce place cells. Science, this issue p. 1033; see also p. 974.

A particular type of long-time-scale plasticity shapes the formation of stable place fields in the hippocampus. Learning is primarily mediated by activity-dependent modifications of synaptic strength within neuronal circuits. We discovered that place fields in hippocampal area CA1 are produced by a synaptic potentiation notably different from Hebbian plasticity. Place fields could be produced in vivo in a single trial by potentiation of input that arrived seconds before and after complex spiking. The potentiated synaptic input was not initially coincident with action potentials or depolarization. This rule, named behavioral time scale synaptic plasticity, abruptly modifies inputs that were neither causal nor close in time to postsynaptic activation. In slices, five pairings of subthreshold presynaptic activity and calcium (Ca2+) plateau potentials produced a large potentiation with an asymmetric seconds-long time course. This plasticity efficiently stores entire behavioral sequences within synaptic weights to produce predictive place cell activity.


European Journal of Neuroscience | 2005

Learning in realistic networks of spiking neurons and spike‐driven plastic synapses

Gianluigi Mongillo; Emanuele Curti; Sandro Romani; Daniel J. Amit

We have used simulations to study the learning dynamics of an autonomous, biologically realistic recurrent network of spiking neurons connected via plastic synapses, subjected to a stream of stimulus–delay trials, in which one of a set of stimuli is presented followed by a delay. Long‐term plasticity, produced by the neural activity experienced during training, structures the network and endows it with active (working) memory, i.e. enhanced, selective delay activity for every stimulus in the training set. Short‐term plasticity produces transient synaptic depression. Each stimulus used in training excites a selective subset of neurons in the network, and stimuli can share neurons (overlapping stimuli). Long‐term plasticity dynamics are driven by presynaptic spikes and coincident postsynaptic depolarization; stability is ensured by a refresh mechanism. In the absence of stimulation, the acquired synaptic structure persists for a very long time. The dependence of long‐term plasticity dynamics on the characteristics of the stimulus response (average emission rates, time course and synchronization), and on the single‐cell emission statistics (coefficient of variation) is studied. The study clarifies the specific roles of short‐term synaptic depression, NMDA receptors, stimulus representation overlaps, selective stimulation of inhibition, and spike asynchrony during stimulation. Patterns of network spiking activity before, during and after training reproduce most of the in vivo physiological observations in the literature.
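
The long-term plasticity described above — jumps driven by presynaptic spikes and coincident postsynaptic depolarization, with a refresh mechanism ensuring stability — can be caricatured by a bistable synaptic variable. A minimal sketch, with parameter names and values that are illustrative assumptions rather than the paper's:

```python
import numpy as np

def step_synapse(X, pre_spike, V_post, dt=1e-3,
                 theta_V=0.7, theta_X=0.5, a=0.1, b=0.1, drift=0.5):
    """One update of a bistable internal efficacy variable X in [0, 1].

    On a presynaptic spike, X jumps up if the postsynaptic depolarization
    V_post exceeds theta_V and jumps down otherwise. Between spikes, X
    drifts toward the nearest stable state (0 or 1) — the 'refresh'
    mechanism that keeps efficacies effectively binary on long time scales.
    All thresholds and step sizes here are invented for the demo.
    """
    if pre_spike:
        X += a if V_post > theta_V else -b
    else:
        X += dt * (drift if X > theta_X else -drift)
    return float(np.clip(X, 0.0, 1.0))
```

Repeated pairings of presynaptic spikes with high depolarization push X above threshold, after which the drift alone consolidates the potentiated state.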


Journal of Computational Neuroscience | 2006

Mean-field analysis of selective persistent activity in presence of short-term synaptic depression

Sandro Romani; Daniel J. Amit; Gianluigi Mongillo

Mean-field theory is extended to recurrent networks of spiking neurons endowed with short-term depression (STD) of synaptic transmission. The extension involves the use of the distribution of interspike intervals of an integrate-and-fire neuron receiving a Gaussian current, with a given mean and variance, in input. This, in turn, is used to obtain an accurate estimate of the resulting postsynaptic current in the presence of STD. The stationary states of the network are obtained requiring self-consistency for the currents—those driving the emission processes and those generated by the emitted spikes. The model network stores, in the distribution of two-state efficacies of excitatory-to-excitatory synapses, a randomly composed set of external stimuli. The resulting synaptic structure allows the network to exhibit selective persistent activity for each stimulus in the set. Theory predicts the onset of selective persistent, or working memory (WM), activity upon varying the constitutive parameters (e.g. potentiated/depressed long-term efficacy ratio, parameters associated with STD), and provides the average emission rates in the various steady states. Theoretical estimates are in remarkably good agreement with data “recorded” in computer simulations of the microscopic model.
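
The effect of STD on the mean postsynaptic current can be illustrated with the standard short-term depression equations (Tsodyks–Markram form): the available resource fraction x obeys dx/dt = (1 − x)/τ_D − U·x·r for presynaptic rate r, so the effective drive saturates at high rates. A rough sketch with illustrative parameter values, not those of the paper:

```python
# Sketch of steady-state short-term depression (Tsodyks-Markram form).
# U, tau_D, and J are illustrative values, not fitted parameters.

def steady_state_x(rate, U=0.5, tau_D=0.8):
    """Stationary available fraction x* for Poisson presynaptic rate (Hz).

    Setting dx/dt = (1 - x)/tau_D - U * x * rate = 0 gives
    x* = 1 / (1 + U * tau_D * rate): resources deplete as rate grows.
    """
    return 1.0 / (1.0 + U * tau_D * rate)

def effective_current(rate, J=1.0, U=0.5, tau_D=0.8):
    """Mean postsynaptic drive J * U * x* * rate.

    Grows with rate at low rates but saturates at J / tau_D, which is the
    property a depression-aware mean-field treatment must capture.
    """
    return J * U * steady_state_x(rate, U, tau_D) * rate
```

The saturation bound J/τ_D is what distinguishes these self-consistency equations from the classical mean-field case with static synapses.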


The Journal of Neuroscience | 2008

Universal Memory Mechanism for Familiarity Recognition and Identification

Volodya Yakovlev; Daniel J. Amit; Sandro Romani; Shaul Hochstein

Macaque monkeys were tested on a delayed-match-to-multiple-sample task, with either a limited set of well trained images (in randomized sequence) or with never-before-seen images. They performed much better with novel images. False positives were mostly limited to catch-trial image repetitions from the preceding trial. This result implies extremely effective one-shot learning, resembling Standing's finding that people detect familiarity for 10,000 once-seen pictures (with 80% accuracy) (Standing, 1973). Familiarity memory may differ essentially from identification, which embeds and generates contextual information. When encountering another person, we can say immediately whether his or her face is familiar. However, it may be difficult for us to identify the same person. To accompany the psychophysical findings, we present a generic neural network model reproducing these behaviors, based on the same conservative Hebbian synaptic plasticity that generates delay activity identification memory. Familiarity becomes the first step toward establishing identification. Adding an inter-trial reset mechanism limits false positives for previous-trial images. The model, unlike previous proposals, relates repetition–recognition with enhanced neural activity, as recently observed experimentally in 92% of differential cells in prefrontal cortex, an area directly involved in familiarity recognition. There may be an essential functional difference between enhanced responses to novel versus familiar images: The maximal signal from temporal cortex is for novel stimuli, facilitating additional sensory processing of newly acquired stimuli. The maximal signal for familiar stimuli arising in prefrontal cortex facilitates the formation of selective delay activity, as well as additional consolidation of the memory of the image in an upstream cortical module.


Neural Computation | 2013

Scaling laws of associative memory retrieval

Sandro Romani; Itai Pinkoviezky; Alon Rubin; Misha Tsodyks

Most people have great difficulty in recalling unrelated items. For example, in free recall experiments, lists of more than a few randomly selected words cannot be accurately repeated. Here we introduce a phenomenological model of memory retrieval inspired by theories of neuronal population coding of information. The model predicts nontrivial scaling behaviors for the mean and standard deviation of the number of recalled words for lists of increasing length. Our results suggest that associative information retrieval is a dominating factor that limits the number of recalled items.
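
As a toy illustration of similarity-driven retrieval — a deliberately simplified caricature, not the paper's model — a deterministic walk that always jumps to the most similar remaining item soon falls into a cycle, so only part of a longer list is ever recalled:

```python
import numpy as np

def recalled_count(L, dim=40, rng=None):
    """Toy recall walk over L random items.

    Items are random vectors; 'recall' jumps from the current item to its
    most similar other item, avoiding an immediate back-step. Because the
    walk is deterministic it eventually revisits a transition and cycles,
    at which point recall stops. The list length L and dimensionality dim
    are arbitrary demo choices.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    v = rng.standard_normal((L, dim))
    sim = v @ v.T
    np.fill_diagonal(sim, -np.inf)       # never "recall" the current item
    prev, cur = -1, 0
    visited = {0}
    seen_transitions = set()
    while True:
        order = np.argsort(sim[cur])[::-1]          # most similar first
        nxt = int(order[0]) if int(order[0]) != prev else int(order[1])
        if (cur, nxt) in seen_transitions:          # entered a cycle: stop
            break
        seen_transitions.add((cur, nxt))
        prev, cur = cur, nxt
        visited.add(cur)
    return len(visited)
```

Running this for increasing L shows the recalled count growing much more slowly than the list itself, the qualitative behavior the phenomenological model quantifies.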


Nature Neuroscience | 2017

Inhibitory suppression of heterogeneously tuned excitation enhances spatial coding in CA1 place cells

Christine Grienberger; Aaron D. Milstein; Katie C. Bittner; Sandro Romani; Jeffrey C. Magee

Place cells in the CA1 region of the hippocampus express location-specific firing despite receiving a steady barrage of heterogeneously tuned excitatory inputs that should compromise output dynamic range and timing. We examined the role of synaptic inhibition in countering the deleterious effects of off-target excitation. Intracellular recordings in behaving mice demonstrate that bimodal excitation drives place cells, while unimodal excitation drives weaker or no spatial tuning in interneurons. Optogenetic hyperpolarization of interneurons had spatially uniform effects on place cell membrane potential dynamics, substantially reducing spatial selectivity. These data and a computational model suggest that spatially uniform inhibitory conductance enhances rate coding in place cells by suppressing out-of-field excitation and by limiting dendritic amplification. Similarly, we observed that inhibitory suppression of phasic noise generated by out-of-field excitation enhances temporal coding by expanding the range of theta phase precession. Thus, spatially uniform inhibition allows proficient and flexible coding in hippocampal CA1 by suppressing heterogeneously tuned excitation.


Hippocampus | 2015

Short‐term plasticity based network model of place cells dynamics

Sandro Romani; Misha Tsodyks

Rodent hippocampus exhibits strikingly different regimes of population activity in different behavioral states. During locomotion, hippocampal activity oscillates at theta frequency (5–12 Hz) and cells fire at specific locations in the environment, the place fields. As the animal runs through a place field, spikes are emitted at progressively earlier phases of the theta cycles. During immobility, hippocampus exhibits sharp irregular bursts of activity, with occasional rapid orderly activation of place cells expressing a possible trajectory of the animal. The mechanisms underlying this rich repertoire of dynamics are still unclear. We developed a novel recurrent network model that accounts for the observed phenomena. We assume that the network stores a map of the environment in its recurrent connections, which are endowed with short‐term synaptic depression. We show that the network dynamics exhibits two different regimes that are similar to the experimentally observed population activity states in the hippocampus. The operating regime can be solely controlled by external inputs. Our results suggest that short‐term synaptic plasticity is a potential mechanism contributing to shape the population activity in hippocampus.


PLOS Computational Biology | 2014

Continuous attractor network model for conjunctive position-by-velocity tuning of grid cells.

Bailu Si; Sandro Romani; Misha Tsodyks

The spatial responses of many of the cells recorded in layer II of rodent medial entorhinal cortex (MEC) show a triangular grid pattern, which appears to provide an accurate population code for animal spatial position. In layer III, V and VI of the rat MEC, grid cells are also selective to head-direction and are modulated by the speed of the animal. Several putative mechanisms of grid-like maps were proposed, including attractor network dynamics, interactions with theta oscillations or single-unit mechanisms such as firing rate adaptation. In this paper, we present a new attractor network model that accounts for the conjunctive position-by-velocity selectivity of grid cells. Our network model is able to perform robust path integration even when the recurrent connections are subject to random perturbations.
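
The attractor mechanism can be caricatured in one dimension: a bump of activity on a ring of rate neurons is pushed around by a velocity-weighted asymmetric component of the recurrent weights. A minimal sketch with invented parameters (the paper's model is two-dimensional and conjunctive, unlike this demo):

```python
import numpy as np

def simulate_ring(velocity, n=100, steps=100, dt=0.1, tau=1.0, gain=5.0):
    """1-D ring attractor with a velocity-scaled asymmetric weight term.

    Recurrent weights cos(d) + velocity * sin(d) equal a cosine kernel
    rotated by atan(velocity), so the self-sustained activity bump drifts
    at a rate set by the velocity input — a crude path integrator.
    Returns the bump's peak angle and the final rate vector.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    d = theta[:, None] - theta[None, :]
    W = gain * (np.cos(d) + velocity * np.sin(d)) / n
    r = np.exp(np.cos(theta))            # seed a bump at angle 0
    for _ in range(steps):
        r += dt / tau * (-r + np.clip(W @ r, 0.0, 1.0))
    return theta[int(np.argmax(r))], r
```

With zero velocity the bump is self-sustained and stationary; a nonzero velocity input makes it travel, integrating velocity into position.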


PLOS Computational Biology | 2010

Continuous Attractors with Morphed/Correlated Maps

Sandro Romani; Misha Tsodyks

Continuous attractor networks are used to model the storage and representation of analog quantities, such as position of a visual stimulus. The storage of multiple continuous attractors in the same network has previously been studied in the context of self-position coding. Several uncorrelated maps of environments are stored in the synaptic connections, and a position in a given environment is represented by a localized pattern of neural activity in the corresponding map, driven by a spatially tuned input. Here we analyze networks storing a pair of correlated maps, or a morph sequence between two uncorrelated maps. We find a novel state in which the network activity is simultaneously localized in both maps. In this state, a fixed cue presented to the network does not determine uniquely the location of the bump, i.e. the response is unreliable, with neurons not always responding when their preferred input is present. When the tuned input varies smoothly in time, the neuronal responses become reliable and selective for the environment: the subset of neurons responsive to a moving input in one map changes almost completely in the other map. This form of remapping is a non-trivial transformation between the tuned input to the network and the resulting tuning curves of the neurons. The new state of the network could be related to the formation of direction selectivity in one-dimensional environments and hippocampal remapping. The applicability of the model is not confined to self-position representations; we show an instance of the network solving a simple delayed discrimination task.

Collaboration


Dive into Sandro Romani's collaborations.

Top Co-Authors

Misha Tsodyks
Weizmann Institute of Science

Daniel J. Amit
Hebrew University of Jerusalem

Mikhail Katkov
Weizmann Institute of Science

Hidehiko K. Inagaki
California Institute of Technology

Jeffrey C. Magee
Howard Hughes Medical Institute

Karel Svoboda
Howard Hughes Medical Institute

Katie C. Bittner
Howard Hughes Medical Institute

Shaul Hochstein
Hebrew University of Jerusalem

Bailu Si
Chinese Academy of Sciences