Yasser Roudi
Norwegian University of Science and Technology
Publications
Featured research published by Yasser Roudi.
Nature Neuroscience | 2013
Tora Bonnevie; Benjamin Dunn; Marianne Fyhn; Torkel Hafting; Dori Derdikman; John L Kubie; Yasser Roudi; Edvard I. Moser; May-Britt Moser
To determine how hippocampal backprojections influence spatially periodic firing in grid cells, we recorded neural activity in the medial entorhinal cortex (MEC) of rats after temporary inactivation of the hippocampus. We report two major changes in entorhinal grid cells. First, hippocampal inactivation gradually and selectively extinguished the grid pattern. Second, the same grid cells that lost their grid fields acquired substantial tuning to the direction of the rat's head. This transition in firing properties was contingent on a drop in the average firing rate of the grid cells and could be replicated by the removal of an external excitatory drive in an attractor network model in which grid structure emerges by velocity-dependent translation of activity across a network with inhibitory connections. These results point to excitatory drive from the hippocampus, and possibly other regions, as one prerequisite for the formation and translocation of grid patterns in the MEC.
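The model class invoked here can be summarized by a generic rate equation (a schematic in our notation, not the paper's exact formulation):

$$\tau \frac{dr_i}{dt} = -r_i + \Big[\, b_i + \sum_j W_{ij}\, r_j \,\Big]_+ , \qquad W_{ij} \le 0,$$

where $r_i$ is the firing rate of grid cell $i$, $[\cdot]_+$ denotes rectification, $W_{ij}$ are the recurrent inhibitory weights, and $b_i$ is the external excitatory drive. Lowering $b_i$ below the level needed to sustain the patterned attractor state is the analogue of hippocampal inactivation in the experiment.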
Nature Neuroscience | 2013
Jonathan J. Couey; Aree Witoelar; Sheng-Jia Zhang; Kang Zheng; Jing Ye; Benjamin Dunn; Rafał Czajkowski; May-Britt Moser; Edvard I. Moser; Yasser Roudi; Menno P. Witter
Grid cells in layer II of the medial entorhinal cortex form a principal component of the mammalian neural representation of space. The firing pattern of a single grid cell has been hypothesized to be generated through attractor dynamics in a network with a specific local connectivity including both excitatory and inhibitory connections. However, experimental evidence supporting the presence of such connectivity among grid cells in layer II is limited. Here we report recordings from more than 600 neuron pairs in rat entorhinal slices, demonstrating that stellate cells, the principal cell type in the layer II grid network, are mainly interconnected via inhibitory interneurons. Using a model attractor network, we demonstrate that stable grid firing can emerge from a simple recurrent inhibitory network. Our findings thus suggest that the observed inhibitory microcircuitry between stellate cells is sufficient to generate grid-cell firing patterns in layer II of the medial entorhinal cortex.
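As a concrete illustration of this model class, the sketch below simulates a rate-based neural sheet with all-or-none inhibition within a fixed radius plus a uniform excitatory drive. It is a minimal sketch in the spirit of the model described above, not the authors' code, and all parameter values are illustrative. With sufficient drive, a hexagonal activity pattern self-organizes on the sheet; setting `drive` near zero lets the pattern collapse, which also illustrates the drive-dependence reported in the preceding study.

```python
import numpy as np

def simulate_inhibitory_sheet(n=64, steps=500, drive=1.0, radius=8.0, w=-0.05):
    """Rate dynamics on an n-by-n sheet (periodic boundaries) with uniform
    all-or-none inhibition within `radius` and a constant excitatory drive.
    Illustrative parameters; a sketch, not the paper's simulation."""
    rng = np.random.default_rng(0)
    r = rng.uniform(0.0, 0.1, size=(n, n))          # small random initial rates
    xs = np.arange(n) - n // 2
    X, Y = np.meshgrid(xs, xs)
    d = np.sqrt(X**2 + Y**2)
    W = np.where((d > 0) & (d <= radius), w, 0.0)   # inhibitory disc, no self-coupling
    W_hat = np.fft.fft2(np.fft.ifftshift(W))        # circular convolution via FFT
    dt, tau = 0.1, 1.0
    for _ in range(steps):
        rec = np.real(np.fft.ifft2(W_hat * np.fft.fft2(r)))
        r += (dt / tau) * (-r + np.maximum(drive + rec, 0.0))
    return r        # hexagonal bumps for drive=1.0; near-silent for drive~0.0
```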
Nature Reviews Neuroscience | 2014
Edvard I. Moser; Yasser Roudi; Menno P. Witter; Clifford G. Kentros; Tobias Bonhoeffer; May-Britt Moser
One of the grand challenges in neuroscience is to comprehend neural computation in the association cortices, the parts of the cortex that have shown the largest expansion and differentiation during mammalian evolution and that are thought to contribute profoundly to the emergence of advanced cognition in humans. In this Review, we use grid cells in the medial entorhinal cortex as a gateway to understand network computation at a stage of cortical processing in which firing patterns are shaped not primarily by incoming sensory signals but to a large extent by the intrinsic properties of the local circuit.
Physical Review E | 2009
Yasser Roudi; Joanna Tyrcha; John Hertz
We study pairwise Ising models for describing the statistics of multineuron spike trains, using data from a simulated cortical network. We explore efficient ways of finding the optimal couplings in these models and examine their statistical properties. To do this, we extract the optimal couplings for subsets of size up to 200 neurons, essentially exactly, using Boltzmann learning. We then study the quality of several approximate methods for finding the couplings by comparing their results with those found from Boltzmann learning. Two of these methods (inversion of the Thouless-Anderson-Palmer equations and an approximation proposed by Sessak and Monasson) are remarkably accurate. Using these approximations for larger subsets of neurons, we find that extracting couplings using data from a subset smaller than the full network tends systematically to overestimate their magnitude. This effect is described qualitatively by infinite-range spin-glass theory for the normal phase. We also show that a globally correlated input to the neurons in the network leads to a small increase in the average coupling. However, the pair-to-pair variation in the couplings is much larger than this and reflects intrinsic properties of the network. Finally, we study the quality of these models by comparing their entropies with that of the data. We find that they perform well for small subsets of the neurons in the network, but the fit quality starts to deteriorate as the subset size grows, signaling the need to include higher-order correlations to describe the statistics of large networks.
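The two accurate approximations can be written down compactly. For an Ising model with means $m_i = \langle s_i \rangle$ and connected correlations $C_{ij}$, the naive mean-field inversion gives $J_{ij} = -(C^{-1})_{ij}$ off the diagonal, and the TAP inversion adds a per-pair quadratic correction. The sketch below implements both from these standard formulas; it is our illustration, not the authors' code (the Sessak-Monasson approximation is omitted for brevity).

```python
import numpy as np

def infer_couplings(m, C):
    """Approximate Ising couplings from means m and connected correlations C
    via naive mean-field (nMF) and TAP inversion; a sketch, not the
    authors' code."""
    Cinv = np.linalg.inv(C)
    J_nmf = -Cinv.copy()
    np.fill_diagonal(J_nmf, 0.0)                  # diagonal is not a coupling
    mm = np.outer(m, m)
    # TAP: (C^-1)_ij = -J_ij - 2 J_ij^2 m_i m_j for i != j; take the root
    # of the quadratic that reduces to nMF as m_i m_j -> 0
    disc = np.sqrt(np.maximum(1.0 - 8.0 * mm * Cinv, 0.0))
    with np.errstate(divide="ignore", invalid="ignore"):
        J_tap = (disc - 1.0) / (4.0 * mm)
    J_tap = np.where(np.abs(mm) < 1e-12, J_nmf, J_tap)  # fallback where m_i m_j ~ 0
    np.fill_diagonal(J_tap, 0.0)
    return J_nmf, J_tap
```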
PLOS Computational Biology | 2005
Yasser Roudi; P.E. Latham
A fundamental problem in neuroscience is understanding how working memory—the ability to store information at intermediate timescales, like tens of seconds—is implemented in realistic neuronal networks. The most likely candidate mechanism is the attractor network, and a great deal of effort has gone toward investigating it theoretically. Yet, despite almost a quarter century of intense work, attractor networks are not fully understood. In particular, there are still two unanswered questions. First, how is it that attractor networks exhibit irregular firing, as is observed experimentally during working memory tasks? And second, how many memories can be stored under biologically realistic conditions? Here we answer both questions by studying an attractor neural network in which inhibition and excitation balance each other. Using mean-field analysis, we derive a three-variable description of attractor networks. From this description it follows that irregular firing can exist only if the number of neurons involved in a memory is large. The same mean-field analysis also shows that the number of memories that can be stored in a network scales with the number of excitatory connections, a result that has been suggested for simple models but never shown for realistic ones. Both of these predictions are verified using simulations with large networks of spiking neurons.
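Written as scaling statements in our own notation (these restate the abstract's claims, not a derivation): irregular, balanced firing in a memory state requires

$$ f N \gg 1, \qquad\text{and}\qquad p_{\max} \propto K_E, $$

where $f$ is the coding level (the fraction of neurons participating in a memory), $N$ the network size, $p_{\max}$ the number of storable memories, and $K_E$ the number of excitatory connections per neuron.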
Frontiers in Computational Neuroscience | 2009
Yasser Roudi; Erik Aurell; John Hertz
Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the mean values and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models.
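The quantities a pairwise model is fitted to are just the binned means and correlations, so the time-bin dependence enters at the very first step. Below is a minimal sketch of that step, with a hypothetical data layout (one array of spike times per neuron); refitting the model for a range of `dt` values then exposes how the inferred parameters and the fit quality depend on the bin size.

```python
import numpy as np

def binned_statistics(spike_times, n_neurons, t_max, dt):
    """Bin spike trains into +/-1 spin variables and return the means and
    connected correlations a pairwise (Ising) model is fitted to.
    `spike_times`: list of per-neuron arrays of spike times in seconds
    (a hypothetical layout); `dt`: the time bin under study."""
    n_bins = int(t_max / dt)
    s = -np.ones((n_neurons, n_bins))
    for i, times in enumerate(spike_times):
        idx = (times[times < n_bins * dt] / dt).astype(int)
        s[i, idx] = 1.0                  # a bin with >= 1 spike maps to +1
    m = s.mean(axis=1)                   # magnetizations <s_i>
    C = np.cov(s)                        # connected correlations C_ij
    return m, C
```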
Annual Review of Neuroscience | 2016
David C. Rowland; Yasser Roudi; May-Britt Moser; Edvard I. Moser
The medial entorhinal cortex (MEC) creates a neural representation of space through a set of functionally dedicated cell types: grid cells, border cells, head direction cells, and speed cells. Grid cells, the most abundant functional cell type in the MEC, have hexagonally arranged firing fields that tile the surface of the environment. These cells were discovered only in 2005, but after 10 years of investigation, we are beginning to understand how they are organized in the MEC network, how their periodic firing fields might be generated, how they are shaped by properties of the environment, and how they interact with the rest of the MEC network. The aim of this review is to summarize what we know about grid cells and point out where our knowledge is still incomplete.
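The hexagonal field arrangement described here is commonly idealized as a rectified sum of three plane waves whose wave vectors are 60 degrees apart. The sketch below generates such an idealized rate map; it is a textbook construction for illustration, not a model from the review.

```python
import numpy as np

def grid_rate_map(size=100, spacing=30.0, phase=(0.0, 0.0)):
    """Idealized hexagonal grid-cell rate map: rectified sum of three
    plane waves with wave vectors 60 degrees apart. `spacing` is the
    grid spacing in the same (arbitrary) units as the map coordinates."""
    xs = np.arange(size)
    X, Y = np.meshgrid(xs, xs)
    k = 4 * np.pi / (np.sqrt(3) * spacing)   # wavenumber for the given spacing
    rate = np.zeros_like(X, dtype=float)
    for a in np.deg2rad([0.0, 60.0, 120.0]):
        kx, ky = k * np.cos(a), k * np.sin(a)
        rate += np.cos(kx * (X - phase[0]) + ky * (Y - phase[1]))
    return np.maximum(rate, 0.0)             # rectify to a nonnegative rate
```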
Philosophical Transactions of the Royal Society B | 2013
Edvard I. Moser; May-Britt Moser; Yasser Roudi
One of the major breakthroughs in neuroscience is the emerging understanding of how signals from the external environment are extracted and represented in the primary sensory cortices of the mammalian brain. The operational principles of the rest of the cortex, however, have essentially remained in the dark. The discovery of grid cells, and their functional organization, opens the door to some of the first insights into the workings of the association cortices, at a stage of neural processing where firing properties are shaped not primarily by the nature of incoming sensory signals but rather by internal self-organizing principles. Grid cells are place-modulated neurons whose firing locations define a periodic triangular array overlaid on the entire space available to a moving animal. The unclouded firing pattern of these cells is rare within the association cortices. In this paper, we shall review recent advances in our understanding of the mechanisms of grid-cell formation, which suggest that the pattern originates through competitive network interactions, and we shall relate these ideas to new insights regarding the organization of grid cells into functionally segregated modules.
PLOS Computational Biology | 2005
Yasser Roudi; Alessandro Treves
Behaving in the real world requires flexibly combining and maintaining information about both continuous and discrete variables. In the visual domain, several lines of evidence show that neurons in some cortical networks can simultaneously represent information about the position and identity of objects, and maintain this combined representation when the object is no longer present. The underlying network mechanism for this combined representation is, however, unknown. In this paper, we approach this issue through a theoretical analysis of recurrent networks. We present a model of a cortical network that can retrieve information about the identity of objects from incomplete transient cues, while simultaneously representing their spatial position. Our results show that two factors are important in making this possible: (a) a metric organisation of the recurrent connections, and (b) a spatially localised change in the linear gain of neurons. Metric connectivity enables a localised retrieval of information about object identity, while gain modulation ensures localisation in the correct position. Importantly, we find that the amount of information that the network can retrieve and retain about identity is strongly affected by the amount of information it maintains about position. This balance can be controlled by global signals that change the neuronal gain. These results show that anatomical and physiological properties, which have long been known to characterise cortical networks, naturally endow them with the ability to maintain a conjunctive representation of the identity and location of objects.
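A minimal sketch of the two ingredients in our own notation: threshold-linear units on a ring, Hebbian couplings damped by a Gaussian distance kernel (the metric connectivity), a localized multiplicative gain increase (the gain modulation), and global activity normalization standing in for inhibitory feedback. This is an illustrative one-dimensional simplification, not the paper's model.

```python
import numpy as np

def retrieve_where_and_what(N=256, P=5, sigma=20.0, g_pos=128, g_width=15.0,
                            steps=50, mean_rate=0.1):
    """Conjunctive retrieval sketch: activity correlated with memory 0
    emerges, peaked near the high-gain location g_pos. Illustrative
    1D simplification with assumed parameters."""
    rng = np.random.default_rng(1)
    xi = rng.binomial(1, 0.2, (P, N)).astype(float)        # sparse binary memories
    pos = np.arange(N)
    d = np.abs(pos[:, None] - pos[None, :])
    d = np.minimum(d, N - d)                               # distance on the ring
    J = np.exp(-d**2 / (2 * sigma**2)) * (xi.T @ xi) / N   # metric Hebbian weights
    np.fill_diagonal(J, 0.0)
    dg = np.minimum(np.abs(pos - g_pos), N - np.abs(pos - g_pos))
    g = 1.0 + 0.5 * np.exp(-dg**2 / (2 * g_width**2))      # localized gain bump
    r = xi[0] * (rng.random(N) < 0.5)                      # partial cue for memory 0
    for _ in range(steps):
        r = np.maximum(g * (J @ r), 0.0)                   # threshold-linear update
        r *= mean_rate * N / (r.sum() + 1e-12)             # global inhibition
    return r
```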
Journal of Statistical Mechanics: Theory and Experiment | 2004
Yasser Roudi; Alessandro Treves
We investigate the properties of an autoassociative network of threshold-linear units whose synaptic connectivity is spatially structured and asymmetric. Since the methods of equilibrium statistical mechanics cannot be applied to such a network due to the lack of a Hamiltonian, we approach the problem through a signal-to-noise analysis, which we adapt to spatially organized networks. We analyse the conditions for the appearance of stable, spatially non-uniform activity profiles with large overlaps with one of the stored patterns. It is also shown, with simulations and analytic results, that the storage capacity does not decrease much when the connectivity of the network becomes short range. In addition, the method used here enables us to calculate exactly the storage capacity of a randomly connected network with arbitrary degree of dilution.
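In the notation commonly used for such models (ours, not necessarily the paper's), the threshold-linear units and the diluted-capacity statement read

$$ r_i = g\,\big[h_i - \theta\big]_+ , \qquad h_i = \sum_j c_{ij} J_{ij}\, r_j , \qquad p_{\max} = \alpha_c\, C, $$

where $g$ is the linear gain above the threshold $\theta$, $c_{ij} \in \{0,1\}$ is the dilution mask, $C$ is the mean number of connections per unit, and $\alpha_c$ is a capacity coefficient that depends on the gain and the sparseness of the stored patterns.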