Jeffrey M. Beck
Duke University
Publications
Featured research published by Jeffrey M. Beck.
Neuron | 2008
Jeffrey M. Beck; Wei Ji Ma; Roozbeh Kiani; Timothy D. Hanks; Anne K. Churchland; Jamie D. Roitman; Michael N. Shadlen; P.E. Latham; Alexandre Pouget
When making a decision, one must first accumulate evidence, often over time, and then select the appropriate action. Here, we present a neural model of decision making that can perform both evidence accumulation and action selection optimally. More specifically, we show that, given a Poisson-like distribution of spike counts, biological neural networks can accumulate evidence without loss of information through linear integration of neural activity and can select the most likely action through attractor dynamics. This holds for arbitrary correlations, any tuning curves, continuous and discrete variables, and sensory evidence whose reliability varies over time. Our model predicts that the neurons in the lateral intraparietal cortex involved in evidence accumulation encode, on every trial, a probability distribution which predicts the animal's performance. We present experimental evidence consistent with this prediction and discuss other predictions applicable to more general settings.
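A minimal sketch of the linear-integration idea, under assumptions not taken from the paper (independent Poisson spiking, Gaussian tuning curves, illustrative parameters): with Poisson-like variability the log posterior over the stimulus is linear in the spike counts, so simply summing population activity across time windows accumulates evidence, and an argmax readout stands in for the attractor-based action selection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 64 neurons with Gaussian tuning over a stimulus s.
s_grid = np.linspace(-90.0, 90.0, 181)        # candidate stimulus values
preferred = np.linspace(-90.0, 90.0, 64)      # preferred stimuli
width, gain = 15.0, 4.0

def tuning(s):
    """Mean spike count of each neuron in one time window, given stimulus s."""
    return gain * np.exp(-0.5 * ((s - preferred) / width) ** 2)

f = np.array([tuning(s) for s in s_grid])     # tuning curves, shape (181, 64)

true_s, n_windows = 12.0, 20
counts = np.zeros(len(preferred))
for _ in range(n_windows):
    counts += rng.poisson(tuning(true_s))     # linear integration of spikes

# For independent Poisson noise, log P(counts | s) is linear in the counts:
# log P = counts . log f(s) - n_windows * sum_i f_i(s) + const.
log_post = counts @ np.log(f).T - n_windows * f.sum(axis=1)
print("MAP estimate:", s_grid[np.argmax(log_post)])
```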
Nature Neuroscience | 2013
Alexandre Pouget; Jeffrey M. Beck; Wei Ji Ma; P.E. Latham
There is strong behavioral and physiological evidence that the brain both represents probability distributions and performs probabilistic inference. Computational neuroscientists have started to shed light on how these probabilistic representations and computations might be implemented in neural circuits. One particularly appealing aspect of these theories is their generality: they can be used to model a wide range of tasks, from sensory processing to high-level cognition. To date, however, these theories have only been applied to very simple tasks. Here we discuss the challenges that will emerge as researchers start focusing their efforts on real-life computations, with a focus on probabilistic learning, structural learning and approximate inference.
Nature Neuroscience | 2014
Rubén Moreno-Bote; Jeffrey M. Beck; Ingmar Kanitscheider; Xaq Pitkow; P.E. Latham; Alexandre Pouget
Computational strategies used by the brain strongly depend on the amount of information that can be stored in population activity, which in turn strongly depends on the pattern of noise correlations. In vivo, noise correlations tend to be positive and proportional to the similarity in tuning properties. Such correlations are thought to limit information, which has led to the suggestion that decorrelation increases information. In contrast, we found, analytically and numerically, that decorrelation does not imply an increase in information. Instead, the only information-limiting correlations are what we refer to as differential correlations: correlations proportional to the product of the derivatives of the tuning curves. Unfortunately, differential correlations are likely to be very small and buried under correlations that do not limit information, making them particularly difficult to detect. We found, however, that the effect of differential correlations on information can be detected with relatively simple decoders.
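A numerical sketch of the key claim (cosine tuning and unit private noise are illustrative assumptions, not the paper's setup): adding a differential-correlation component, proportional to the outer product of the tuning-curve derivatives, caps the linear Fisher information near 1/epsilon, no matter how much information the uncorrelated code carried.

```python
import numpy as np

# Linear Fisher information: I = f'(s)^T Sigma^{-1} f'(s).
n = 200
theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
f_prime = -10.0 * np.sin(theta)               # derivative of cosine tuning at s = 0

sigma0 = np.eye(n)                            # private, non-limiting noise
eps = 0.01
sigma_diff = sigma0 + eps * np.outer(f_prime, f_prime)  # differential correlations

info0 = f_prime @ np.linalg.solve(sigma0, f_prime)
info_diff = f_prime @ np.linalg.solve(sigma_diff, f_prime)

print(f"without differential correlations: I = {info0:.1f}")
print(f"with differential correlations:    I = {info_diff:.1f} (bound 1/eps = {1/eps:.0f})")
```

By the Sherman-Morrison identity the second value equals I0 / (1 + eps * I0), which approaches 1/eps as the uncorrelated information I0 grows, so the information saturates.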
Neuron | 2012
Jeffrey M. Beck; Wei Ji Ma; Xaq Pitkow; P.E. Latham; Alexandre Pouget
Behavior varies from trial to trial even when the stimulus is maintained as constant as possible. In many models, this variability is attributed to noise in the brain. Here, we propose that there is another major source of variability: suboptimal inference. Importantly, we argue that in most tasks of interest, and particularly complex ones, suboptimal inference is likely to be the dominant component of behavioral variability. This perspective explains a variety of intriguing observations, including why variability appears to be larger on the sensory than on the motor side, and why our sensors are sometimes surprisingly unreliable.
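A toy illustration of the argument, under assumptions that are not the paper's model (two Gaussian cues about one stimulus): an observer who weights cues equally instead of by reliability shows far more trial-to-trial variability than the sensory noise alone would predict, even though the noise is identical in both cases.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two noisy cues about the same stimulus, with very different reliabilities.
s, sd1, sd2, trials = 0.0, 1.0, 4.0, 100_000
x1 = rng.normal(s, sd1, trials)
x2 = rng.normal(s, sd2, trials)

# Optimal inference: weight each cue by its inverse variance.
w1 = (1 / sd1**2) / (1 / sd1**2 + 1 / sd2**2)
optimal = w1 * x1 + (1 - w1) * x2

# Suboptimal inference: same sensory noise, but equal weighting.
suboptimal = 0.5 * (x1 + x2)

print("response variance, optimal:   ", optimal.var())     # ~0.94
print("response variance, suboptimal:", suboptimal.var())  # ~4.25
```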
Nature Neuroscience | 2011
Vikranth R. Bejjanki; Jeffrey M. Beck; Zhong-Lin Lu; Alexandre Pouget
Extensive training on simple tasks such as fine orientation discrimination results in large improvements in performance, a form of learning known as perceptual learning. Previous models have argued that perceptual learning is due to either sharpening and amplification of tuning curves in early visual areas or to improved probabilistic inference in later visual areas (at the decision stage). However, early theories are inconsistent with the conclusions of psychophysical experiments manipulating external noise, whereas late theories cannot explain the changes in neural responses that have been reported in cortical areas V1 and V4. Here we show that we can capture both the neurophysiological and behavioral aspects of perceptual learning by altering only the feedforward connectivity in a recurrent network of spiking neurons so as to improve probabilistic inference in early visual areas. The resulting network shows modest changes in tuning curves, in line with neurophysiological reports, along with a marked reduction in the amplitude of pairwise noise correlations.
Current Opinion in Neurobiology | 2008
Wei Ji Ma; Jeffrey M. Beck; Alexandre Pouget
Systems neuroscience traditionally conceptualizes a population of spiking neurons as merely encoding the value of a stimulus. Yet, psychophysics has revealed that people take into account stimulus uncertainty when performing sensory or motor computations and do so in a nearly Bayes-optimal way. This suggests that neural populations do not encode just a single value but an entire probability distribution over the stimulus. Several such probabilistic codes have been proposed, including one that utilizes the structure of neural variability to enable simple neural implementations of probabilistic computations such as optimal cue integration. This approach provides a quantitative link between Bayes-optimal behaviors and specific neural operations. It suggests novel ways to evaluate probabilistic codes and yields predictions for physiological population recordings.
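A sketch of that cue-integration result under assumed conditions (independent Poisson noise, two populations with the same Gaussian tuning shape but different gains; all parameters illustrative): because the log posterior is linear in the spike counts, simply adding the two populations' counts yields a code whose implied posterior is the product of the two single-cue posteriors, i.e., Bayes-optimal integration.

```python
import numpy as np

rng = np.random.default_rng(2)

s_grid = np.linspace(-90.0, 90.0, 181)
preferred = np.linspace(-90.0, 90.0, 64)

def tuning(s, gain):
    """Shared Gaussian tuning shape; gain encodes cue reliability."""
    return gain * np.exp(-0.5 * ((s - preferred) / 15.0) ** 2)

def log_posterior(counts, gain):
    f = np.array([tuning(s, gain) for s in s_grid])
    return counts @ np.log(f).T - f.sum(axis=1)

true_s = 10.0
r_visual = rng.poisson(tuning(true_s, gain=8.0))    # reliable cue (high gain)
r_auditory = rng.poisson(tuning(true_s, gain=2.0))  # unreliable cue (low gain)

# Summing counts gives the combined-cue posterior directly; the gains add too.
lp = log_posterior(r_visual + r_auditory, gain=10.0)
print("combined estimate:", s_grid[np.argmax(lp)])
```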
The Journal of Neuroscience | 2011
Jeffrey M. Beck; P.E. Latham; Alexandre Pouget
A wide range of computations performed by the nervous system involves a type of probabilistic inference known as marginalization. This computation comes up in seemingly unrelated tasks, including causal reasoning, odor recognition, motor control, visual tracking, coordinate transformations, visual search, decision making, and object recognition, to name just a few. The question we address here is: how could neural circuits implement such marginalizations? We show that when spike trains exhibit a particular type of statistics—associated with constant Fano factors and gain-invariant tuning curves, as is often reported in vivo—some of the more common marginalizations can be achieved with networks that implement a quadratic nonlinearity and divisive normalization, the latter being a type of nonlinear lateral inhibition that has been widely reported in neural circuits. Previous studies have implicated divisive normalization in contrast gain control and attentional modulation. Our results raise the possibility that it is involved in yet another, highly critical, computation: near optimal marginalization in a remarkably wide range of tasks.
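A schematic of the two network operations the abstract names, with illustrative weights and sizes rather than the paper's actual circuit: a quadratic nonlinearity applied to the feedforward drive, followed by divisive normalization, the form of nonlinear lateral inhibition referred to above.

```python
import numpy as np

def quadratic_divisive_normalization(r_in, W, sigma=1.0):
    """Quadratic feedforward drive, divided by the summed population activity."""
    drive = (W @ r_in) ** 2                  # quadratic nonlinearity
    return drive / (sigma**2 + drive.sum())  # divisive normalization

rng = np.random.default_rng(3)
W = rng.normal(size=(32, 64)) / np.sqrt(64)  # illustrative random weights
r_in = rng.poisson(5.0, size=64).astype(float)
r_out = quadratic_divisive_normalization(r_in, W)
print(r_out.sum())   # bounded below 1 by construction
```

This shows only the canonical operation, not the derivation by which it performs near-optimal marginalization.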
Nature Neuroscience | 2011
Wei Ji Ma; Vidhya Navalpakkam; Jeffrey M. Beck; Ronald van den Berg; Alexandre Pouget
The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance.
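A hedged sketch of what such a nonlinear integration rule looks like, under simplified assumptions (Gaussian measurement noise, a single target location when present, illustrative parameters): the optimal observer converts each item's evidence into a local likelihood ratio weighted by that item's own reliability, then averages the ratios rather than the raw evidence.

```python
import numpy as np

rng = np.random.default_rng(4)

N, s_target, s_distractor, trials = 4, 1.0, 0.0, 20_000
present = rng.random(trials) < 0.5
sd = rng.choice([0.5, 2.0], size=(trials, N))   # per-item reliability varies
loc = rng.integers(0, N, trials)                # target location, if present

means = np.full((trials, N), s_distractor)
means[np.arange(trials), loc] = np.where(present, s_target, s_distractor)
x = rng.normal(means, sd)                       # noisy measurements

# Local likelihood ratios, each weighted by its item's own reliability:
d = np.exp((x * (s_target - s_distractor)
            - 0.5 * (s_target**2 - s_distractor**2)) / sd**2)
# Optimal (nonlinear) global rule: average the local ratios.
decision = d.mean(axis=1) > 1.0
print("accuracy:", (decision == present).mean())
```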
Neural Computation | 2011
Jeffrey M. Beck; Vikranth R. Bejjanki; Alexandre Pouget
A simple expression for a lower bound of Fisher information is derived for a network of recurrently connected spiking neurons that have been driven to a noise-perturbed steady state. We call this lower bound linear Fisher information, as it corresponds to the Fisher information that can be recovered by a locally optimal linear estimator. Unlike recent similar calculations, the approach used here includes the effects of nonlinear gain functions and correlated input noise and yields a surprisingly simple and intuitive expression that offers substantial insight into the sources of information degradation across successive layers of a neural network. Here, this expression is used to (1) compute the optimal (i.e., information-maximizing) firing rate of a neuron, (2) demonstrate why sharpening tuning curves by either thresholding or the action of recurrent connectivity is generally a bad idea, (3) show how a single cortical expansion is sufficient to instantiate a redundant population code that can propagate across multiple cortical layers with minimal information loss, and (4) show that optimal recurrent connectivity strongly depends on the covariance structure of the inputs to the network.
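The standard form of this bound (with assumed notation: f(s) the mean population response, Sigma(s) the response covariance) is the information recoverable by a locally optimal linear estimator:

```latex
I_{\text{lin}}(s) \;=\; \mathbf{f}'(s)^{\top}\,\Sigma(s)^{-1}\,\mathbf{f}'(s),
\qquad
\operatorname{Var}\!\left(\hat{s}_{\text{lin}}\right) \;\ge\; \frac{1}{I_{\text{lin}}(s)}.
```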
Journal of Physiology-Paris | 2006
Janneke F. M. Jehee; Constantin A. Rothkopf; Jeffrey M. Beck; Dana H. Ballard
Previously, it was suggested that feedback connections from higher- to lower-level areas carry predictions of lower-level neural activities, whereas feedforward connections carry the residual error between the predictions and the actual lower-level activities [Rao, R.P.N., Ballard, D.H., 1999. Nature Neuroscience 2, 79-87.]. A computational model implementing the hypothesis learned simple cell receptive fields when exposed to natural images. Here, we use predictive feedback to explain tuning properties in medial superior temporal area (MST). We implement the hypothesis using a new, biologically plausible, algorithm based on matching pursuit, which retains all the features of the previous implementation, including its ability to efficiently encode input. When presented with natural images, the model developed receptive field properties as found in primary visual cortex. In addition, when exposed to visual motion input resulting from movements through space, the model learned receptive field properties resembling those in MST. These results corroborate the idea that predictive feedback is a general principle used by the visual system to efficiently encode natural input.
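A minimal sketch of the matching-pursuit step the model is built on (the dictionary and sizes here are illustrative, not the paper's): greedily encode an input with a few best-matching dictionary atoms, leaving a residual, the quantity the feedforward connections are hypothesized to carry.

```python
import numpy as np

def matching_pursuit(x, D, n_atoms=5):
    """Greedy sparse code: repeatedly project the residual onto the best atom."""
    residual, code = x.copy(), np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual              # correlation with each atom
        k = np.argmax(np.abs(corr))        # best-matching atom
        code[k] += corr[k]
        residual -= corr[k] * D[:, k]      # subtract that atom's contribution
    return code, residual

rng = np.random.default_rng(5)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
x = rng.normal(size=64)
code, residual = matching_pursuit(x, D)
print("residual energy fraction:", np.linalg.norm(residual) / np.linalg.norm(x))
```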