Publication


Featured research published by Omri Barak.


Science | 2008

Synaptic Theory of Working Memory

Gianluigi Mongillo; Omri Barak; Misha Tsodyks

It is usually assumed that enhanced spiking activity in the form of persistent reverberation for several seconds is the neural correlate of working memory. Here, we propose that working memory is sustained by calcium-mediated synaptic facilitation in the recurrent connections of neocortical networks. In this account, the presynaptic residual calcium is used as a buffer that is loaded, refreshed, and read out by spiking activity. Because of the long time constants of calcium kinetics, the refresh rate can be low, resulting in a mechanism that is metabolically efficient and robust. The duration and stability of working memory can be regulated by modulating the spontaneous activity in the network.
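The buffer mechanism described above can be sketched with the standard Tsodyks–Markram short-term plasticity equations, in which a facilitation variable u (tracking residual presynaptic calcium) and a resource variable x jointly set the synaptic efficacy u·x. A minimal simulation, with illustrative parameter values rather than ones taken from the paper:

```python
import numpy as np

# Tsodyks-Markram short-term plasticity: u is facilitation (residual calcium),
# x is the available synaptic resource. Parameter values are illustrative.
U, tau_f, tau_d = 0.2, 1.5, 0.2   # baseline release prob.; facilitation/depression time constants (s)
dt = 1e-3

def simulate(spike_times, t_max=3.0):
    """Return the synaptic efficacy u*x sampled every dt for a presynaptic spike train."""
    n = int(t_max / dt)
    spikes = np.zeros(n, dtype=bool)
    spikes[np.round(np.asarray(spike_times) / dt).astype(int)] = True
    u, x = U, 1.0
    efficacy = np.empty(n)
    for i in range(n):
        u += dt * (U - u) / tau_f      # facilitation decays slowly to baseline
        x += dt * (1.0 - x) / tau_d    # resources recover quickly
        if spikes[i]:
            u += U * (1.0 - u)         # calcium influx boosts release probability
            x -= u * x                 # vesicle use depletes resources
        efficacy[i] = u * x
    return efficacy

# A brief burst loads the buffer; because tau_f >> tau_d, the efficacy
# stays elevated for seconds after the burst ends.
eff = simulate(spike_times=[0.1, 0.12, 0.14, 0.16, 0.18])
```

Since tau_f is much longer than tau_d, u stays elevated long after the burst while x recovers, so the stored item can later be refreshed or read out by sparse spiking, which is what makes the proposed mechanism metabolically cheap.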


The Journal of Neuroscience | 2010

Neuronal Population Coding of Parametric Working Memory

Omri Barak; Misha Tsodyks; Ranulfo Romo

Comparing two sequentially presented stimuli is a widely used experimental paradigm for studying working memory. The delay activity of many single neurons in the prefrontal cortex (PFC) of monkeys was found to be stimulus-specific; however, the population dynamics of stimulus representation have not been elucidated. We analyzed the population state of a large number of PFC neurons during a somatosensory discrimination task. Using the tuning curves of the neurons, we derived a compact characterization of the population state. Stimulus representation by the population was found to degrade after stimulus termination and to emerge in a different form toward the end of the delay. Specifically, the tuning properties of neurons were found to change during the task. We suggest a mechanism whereby information about the stimulus is contained in activity-dependent synaptic facilitation of recurrent connections.


Current Opinion in Neurobiology | 2014

Working models of working memory.

Omri Barak; Misha Tsodyks

Working memory is a system that maintains and manipulates information for several seconds during the planning and execution of many cognitive tasks. Traditionally, it was believed that the neuronal underpinning of working memory is stationary persistent firing of selective neuronal populations. Recent advances introduced new ideas regarding possible mechanisms of working memory, such as short-term synaptic facilitation, precise tuning of recurrent excitation and inhibition, and intrinsic network dynamics. These ideas are motivated by computational considerations and careful analysis of experimental data. Taken together, they may point to a plethora of different processes underlying working memory in the brain.


Neuron | 2007

Stochastic Emergence of Repeating Cortical Motifs in Spontaneous Membrane Potential Fluctuations In Vivo

Alik Mokeichev; Michael Okun; Omri Barak; Yonatan Katz; Ohad Ben-Shahar; Ilan Lampl

It was recently discovered that subthreshold membrane potential fluctuations of cortical neurons can precisely repeat during spontaneous activity, seconds to minutes apart, both in brain slices and in anesthetized animals. These repeats, also called cortical motifs, were suggested to reflect a replay of sequential neuronal firing patterns. We searched for motifs in spontaneous activity, recorded from the rat barrel cortex and from the cat striate cortex of anesthetized animals, and found numerous repeating patterns of high similarity and repetition rates. To test their significance, various statistics were compared between physiological data and three different types of stochastic surrogate data that preserve dynamical characteristics of the recorded data. We found no evidence for the existence of deterministically generated cortical motifs. Rather, the stochastic properties of cortical motifs suggest that they appear by chance, as a result of the constraints imposed by the coarse dynamics of subthreshold ongoing activity.


PLOS Computational Biology | 2005

Persistent Activity in Neural Networks with Dynamic Synapses

Omri Barak; Misha Tsodyks

Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One of the possible mechanisms that can underlie persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli.
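The emergence of attractor states from recurrent excitation with dynamic synapses can be sketched with a single self-exciting rate unit whose recurrent weight is scaled by a depression variable. All parameter values below are illustrative assumptions, not those of the paper's analysis:

```python
import numpy as np

def phi(h):
    # Sigmoidal rate function with threshold 4 (illustrative)
    return 1.0 / (1.0 + np.exp(-(h - 4.0)))

def run(U, tau_d=0.5, w=8.0, tau_r=0.02, dt=1e-3, t_max=4.0):
    """Self-exciting rate unit r with recurrent weight w scaled by a
    depression variable x. A transient input pulse (0.5-1.0 s) tries to
    switch the unit into a persistent high-rate (attractor) state."""
    n = int(t_max / dt)
    r, x = 0.0, 1.0
    rs = np.empty(n)
    for i in range(n):
        t = i * dt
        I = 6.0 if 0.5 <= t < 1.0 else 0.0           # transient stimulus
        r += dt / tau_r * (-r + phi(w * x * r + I))  # rate dynamics
        x += dt * ((1.0 - x) / tau_d - U * r * x)    # synaptic depression
        rs[i] = r
    return rs

weak = run(U=0.2)    # mild depression: activity persists after the stimulus
strong = run(U=0.5)  # strong depression: the persistent state collapses
```

With mild depression the transient stimulus switches the unit into a self-sustained high-rate state; with stronger depression the same high-rate fixed point no longer exists and activity collapses once the stimulus is removed, illustrating how synaptic dynamics gate the emergence of attractor states.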


Neural Computation | 2013

Opening the black box: Low-dimensional dynamics in high-dimensional recurrent neural networks

David Sussillo; Omri Barak

Recurrent neural networks (RNNs) are useful tools for learning nonlinear relationships between time-varying inputs and outputs with complex temporal dependencies. Recently developed algorithms have been successful at training RNNs to perform a wide variety of tasks, but the resulting networks have been treated as black boxes: their mechanism of operation remains unknown. Here we explore the hypothesis that fixed points, both stable and unstable, and the linearized dynamics around them, can reveal crucial aspects of how RNNs implement their computations. Further, we explore the utility of linearization in areas of phase space that are not true fixed points but merely points of very slow movement. We present a simple optimization technique that is applied to trained RNNs to find the fixed and slow points of their dynamics. Linearization around these slow regions can be used to explore, or reverse-engineer, the behavior of the RNN. We describe the technique, illustrate it using simple examples, and finally showcase it on three high-dimensional RNN examples: a 3-bit flip-flop device, an input-dependent sine wave generator, and a two-point moving average. In all cases, the mechanisms of trained networks could be inferred from the sets of fixed and slow points and the linearized dynamics around them.


The Journal of Neuroscience | 2013

The sparseness of mixed selectivity neurons controls the generalization-discrimination trade-off.

Omri Barak; Mattia Rigotti; Stefano Fusi

Intelligent behavior requires integrating several sources of information in a meaningful fashion, be it context with stimulus or shape with color and size. This requires the underlying neural mechanism to respond differently to similar inputs (discrimination) while maintaining a consistent response to noisy variations of the same input (generalization). We show that neurons that mix information sources via random connectivity can form an easy-to-read representation of input combinations. Using analytical and numerical tools, we show that the coding level, or sparseness, of these neurons' activity controls a trade-off between generalization and discrimination, with the optimal level depending on the task at hand. In all realistic situations that we analyzed, the optimal fraction of inputs to which a neuron responds is close to 0.1. Finally, we predict a relation between a measurable property of the neural representation and task performance.
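The trade-off can be sketched with a layer of randomly connected threshold neurons: the threshold sets the coding level, and sparser codes decorrelate noisy variants of the same input (sharper discrimination at the cost of generalization). Layer sizes, thresholds, and the noise level below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_mix = 50, 2000

# Each of n_mix neurons thresholds a random weighted sum of the inputs;
# the threshold theta controls the coding level (fraction of active neurons).
W = rng.standard_normal((n_mix, n_in))

def mixed_response(x, theta):
    return (W @ x > theta).astype(float)

def overlap(a, b):
    """Cosine similarity between two binary population responses."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

x = rng.standard_normal(n_in)
x_noisy = x + 0.8 * rng.standard_normal(n_in)   # noisy variant of the same input

dense = {k: mixed_response(v, theta=0.0) for k, v in [("x", x), ("noisy", x_noisy)]}
sparse = {k: mixed_response(v, theta=10.0) for k, v in [("x", x), ("noisy", x_noisy)]}

# Dense codes generalize: the noisy input evokes almost the same response.
# Sparse codes discriminate: input changes decorrelate the response.
gen_dense = overlap(dense["x"], dense["noisy"])
gen_sparse = overlap(sparse["x"], sparse["noisy"])
```

With these illustrative numbers the sparse code's coding level comes out roughly near 0.1, the region the paper reports as optimal in realistic settings; tuning theta trades the two regimes against each other.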


Progress in Neurobiology | 2013

From fixed points to chaos: Three models of delayed discrimination

Omri Barak; David Sussillo; Ranulfo Romo; Misha Tsodyks; L. F. Abbott

Working memory is a crucial component of most cognitive tasks. Its neuronal mechanisms are still unclear despite intensive experimental and theoretical explorations. Most theoretical models of working memory assume both time-invariant neural representations and precise connectivity schemes based on the tuning properties of network neurons. A different, more recent class of models assumes randomly connected neurons that have no tuning to any particular task, and bases task performance purely on adjustment of network readout. Intermediate between these schemes are networks that start out random but are trained by a learning scheme. Experimental studies of a delayed vibrotactile discrimination task indicate that some of the neurons in prefrontal cortex are persistently tuned to the frequency of a remembered stimulus, but the majority exhibit more complex relationships to the stimulus that vary considerably across time. We compare three models, ranging from a highly organized line attractor model to a randomly connected network with chaotic activity, with data recorded during this task. The random network does a surprisingly good job of both performing the task and matching certain aspects of the data. The intermediate model, in which an initially random network is partially trained to perform the working memory task by tuning its recurrent and readout connections, provides a better description, although none of the models matches all features of the data. Our results suggest that prefrontal networks may begin in a random state relative to the task and initially rely on modified readout for task performance. With further training, however, more tuned neurons with less time-varying responses should emerge as the networks become more structured.


Journal of Computational Neuroscience | 2008

Slow oscillations in neural networks with facilitating synapses

Ofer Melamed; Omri Barak; Gilad Silberberg; Henry Markram; Misha Tsodyks

The synchronous oscillatory activity characterizing many neurons in a network is often considered to be a mechanism for representing, binding, conveying, and organizing information. A number of models have been proposed to explain high-frequency oscillations, but the mechanisms that underlie slow oscillations are still unclear. Here, we show by means of analytical solutions and simulations that facilitating excitatory (Ef) synapses onto interneurons in a neural network play a fundamental role, not only in shaping the frequency of slow oscillations, but also in determining the form of the up and down states observed in electrophysiological measurements. Short time constants and strong Ef synapse-connectivity were found to induce rapid alternations between up and down states, whereas long time constants and weak Ef synapse connectivity prolonged the time between up states and increased the up state duration. These results suggest a novel role for facilitating excitatory synapses onto interneurons in controlling the form and frequency of slow oscillations in neuronal circuits.


Neuron | 2015

Dynamic Control of Response Criterion in Premotor Cortex during Perceptual Detection under Temporal Uncertainty

Federico Carnevale; Victor de Lafuente; Ranulfo Romo; Omri Barak; Néstor Parga

Under uncertainty, the brain uses previous knowledge to transform sensory inputs into the percepts on which decisions are based. When the uncertainty lies in the timing of sensory evidence, however, the mechanism underlying the use of previously acquired temporal information remains unknown. We study this issue in monkeys performing a detection task with variable stimulation times. We use the neural correlates of false alarms to infer the subject's response criterion and find that it modulates over the course of a trial. Analysis of premotor cortex activity shows that this modulation is represented by the dynamics of population responses. A trained recurrent network model reproduces the experimental findings and demonstrates a neural mechanism to benefit from temporal expectations in perceptual detection. Previous knowledge about the probability of stimulation over time can be intrinsically encoded in the neural population dynamics, allowing flexible control of the response criterion over time.

Collaboration

Top co-authors of Omri Barak and their affiliations:

Misha Tsodyks (Weizmann Institute of Science)
Ranulfo Romo (National Autonomous University of Mexico)
Dori Derdikman (Weizmann Institute of Science)
Ehud Ahissar (Weizmann Institute of Science)
Knarik Bagdasarian (Weizmann Institute of Science)
Oded Barzelay (Technion – Israel Institute of Technology)
Federico Carnevale (Autonomous University of Madrid)
Néstor Parga (Autonomous University of Madrid)