Publications


Featured research published by P.E. Latham.


Nature Reviews Neuroscience | 2006

Neural correlations, population coding and computation

Bruno B. Averbeck; P.E. Latham; Alexandre Pouget

How the brain encodes information in population activity, and how it combines and manipulates that activity as it carries out computations, are questions that lie at the heart of systems neuroscience. During the past decade, with the advent of multi-electrode recording and improved theoretical models, these questions have begun to yield answers. However, a complete understanding of neuronal variability, and, in particular, how it affects population codes, is missing. This is because variability in the brain is typically correlated, and although the exact effects of these correlations are not known, it is known that they can be large. Here, we review studies that address the interaction between neuronal noise and population codes, and discuss their implications for population coding in general.
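As a minimal illustration of why the structure of correlations matters, the sketch below (our own toy example, not taken from the review) computes the linear Fisher information I(s) = f'(s)^T C^{-1} f'(s) for a pair of Gaussian-noise neurons with and without a noise correlation rho; the tuning-curve slopes and variances are assumed values.

```python
import numpy as np

def linear_fisher_info(f_prime, C):
    """Linear Fisher information for Gaussian noise with covariance C."""
    return f_prime @ np.linalg.solve(C, f_prime)

f_prime = np.array([1.0, -1.0])   # tuning-curve slopes of two oppositely tuned neurons (assumed)
var = np.array([1.0, 1.0])        # single-neuron noise variances (assumed)

for rho in (0.0, 0.3, -0.3):      # noise correlation between the two neurons
    C = np.diag(var)
    C[0, 1] = C[1, 0] = rho * np.sqrt(var[0] * var[1])
    print(f"rho = {rho:+.1f}  ->  Fisher information = {linear_fisher_info(f_prime, C):.2f}")
```

For this oppositely tuned pair, positive correlations increase the information and negative ones decrease it; with similarly tuned neurons the effect reverses. That sign dependence is one reason correlations can either help or hurt a population code.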


Neuron | 2008

Probabilistic Population Codes for Bayesian Decision Making

Jeffrey M. Beck; Wei Ji Ma; Roozbeh Kiani; Timothy D. Hanks; Anne K. Churchland; Jamie D. Roitman; Michael N. Shadlen; P.E. Latham; Alexandre Pouget

When making a decision, one must first accumulate evidence, often over time, and then select the appropriate action. Here, we present a neural model of decision making that can perform both evidence accumulation and action selection optimally. More specifically, we show that, given a Poisson-like distribution of spike counts, biological neural networks can accumulate evidence without loss of information through linear integration of neural activity and can select the most likely action through attractor dynamics. This holds for arbitrary correlations, any tuning curves, continuous and discrete variables, and sensory evidence whose reliability varies over time. Our model predicts that the neurons in the lateral intraparietal cortex involved in evidence accumulation encode, on every trial, a probability distribution which predicts the animal's performance. We present experimental evidence consistent with this prediction and discuss other predictions applicable to more general settings.
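A toy sketch of the linear-accumulation claim, under our own simplifying assumptions (independent Poisson spike counts with made-up rates, two discrete hypotheses, not the paper's full network): for hypotheses A and B, the log-likelihood ratio contributed by one time bin is sum_i r_i log(fA_i/fB_i) minus sum_i (fA_i - fB_i), so evidence accumulates through a fixed weighted sum of spike counts.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_bins = 50, 20
fA = rng.uniform(2.0, 10.0, n_neurons)   # assumed rates (spikes/bin) under stimulus A
fB = rng.uniform(2.0, 10.0, n_neurons)   # assumed rates (spikes/bin) under stimulus B

weights = np.log(fA / fB)                # fixed, synaptic-weight-like read-out
bias = np.sum(fA - fB)                   # constant offset per time bin

llr = 0.0
for t in range(n_bins):
    r = rng.poisson(fA)                  # stimulus A is actually shown
    llr += weights @ r - bias            # linear integration of population activity

print(f"accumulated log-likelihood ratio: {llr:.2f}  ->  decide {'A' if llr > 0 else 'B'}")
```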


Nature Neuroscience | 1999

Reading population codes: a neural implementation of ideal observers.

Sophie Denève; P.E. Latham; Alexandre Pouget

Many sensory and motor variables are encoded in the nervous system by the activities of large populations of neurons with bell-shaped tuning curves. Extracting information from these population codes is difficult because of the noise inherent in neuronal responses. In most cases of interest, maximum likelihood (ML) is the best read-out method and would be used by an ideal observer. Using simulations and analysis, we show that a close approximation to ML can be implemented in a biologically plausible model of cortical circuitry. Our results apply to a wide range of nonlinear activation functions, suggesting that cortical areas may, in general, function as ideal observers of activity in preceding areas.
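The sketch below illustrates, under our own assumptions (independent Poisson neurons, von Mises-like bell-shaped tuning curves, grid-search decoding), what the ideal-observer read-out amounts to: maximum-likelihood estimation of the stimulus from one noisy population response. The paper's contribution is showing that a biologically plausible recurrent circuit can approximate this computation.

```python
import numpy as np

rng = np.random.default_rng(1)
prefs = np.deg2rad(np.linspace(-180, 180, 64, endpoint=False))   # preferred directions
kappa, peak, base = 2.0, 20.0, 1.0                               # assumed tuning parameters

def tuning(s_deg):
    """Bell-shaped (circular) tuning curves evaluated at stimulus s (degrees)."""
    return base + peak * np.exp(kappa * (np.cos(prefs - np.deg2rad(s_deg)) - 1.0))

s_true = 37.0
r = rng.poisson(tuning(s_true))                                  # one noisy population response

grid = np.linspace(-180.0, 180.0, 721)
loglik = np.array([r @ np.log(tuning(s)) - tuning(s).sum() for s in grid])
s_ml = grid[np.argmax(loglik)]
print(f"true stimulus: {s_true:.1f} deg, ML estimate: {s_ml:.1f} deg")
```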


Nature Neuroscience | 2001

Efficient computation and cue integration with noisy population codes

Sophie Denève; P.E. Latham; Alexandre Pouget

The brain represents sensory and motor variables through the activity of large populations of neurons. It is not understood how the nervous system computes with these population codes, given that individual neurons are noisy and thus unreliable. We focus here on two general types of computation, function approximation and cue integration, as these are powerful enough to handle a range of tasks, including sensorimotor transformations, feature extraction in sensory systems and multisensory integration. We demonstrate that a particular class of neural networks, basis function networks with multidimensional attractors, can perform both types of computation optimally with noisy neurons. Moreover, neurons in the intermediate layers of our model show response properties similar to those observed in several multimodal cortical areas. Thus, basis function networks with multidimensional attractors may be used by the brain to compute efficiently with population codes.
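For cue integration specifically, the statistical target such a network would need to implement is the standard inverse-variance weighted average of the single-cue estimates; the brief sketch below works through one numerical example with assumed cue variances.

```python
# Optimal combination of two independent, unbiased cues (standard result,
# not specific to this paper). All numbers are illustrative assumptions.
sigma1, sigma2 = 2.0, 4.0          # noise standard deviations of the two cues
cue1, cue2 = 10.0, 14.0            # the two single-cue estimates

w1 = (1 / sigma1**2) / (1 / sigma1**2 + 1 / sigma2**2)
w2 = 1 - w1
combined = w1 * cue1 + w2 * cue2
combined_var = 1 / (1 / sigma1**2 + 1 / sigma2**2)

print(f"combined estimate = {combined:.2f}, combined variance = {combined_var:.2f}")
# The combined variance (3.2) is smaller than either single-cue variance (4 and 16),
# which is the sense in which integration with noisy population codes can be optimal.
```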


Science | 2010

Optimally interacting minds.

Bahador Bahrami; Karsten Olsen; P.E. Latham; Andreas Roepstorff; Geraint Rees; Chris Frith

Two Heads Are Better Than One: When two people peer into the distance and try to figure out if a faint number is a three or an eight, classical signal detection theory states that the joint decision can only be as good as that of the person with higher visual acuity. Bahrami et al. (p. 1081; see the Perspective by Ernst) propose that a discussion not only of what each person perceives but also of the degree of confidence in those assignments can improve the overall sensitivity of the decision. Using a traditional contrast-detection task, they showed that, when the individuals did not differ too much in their powers of visual discrimination, collective decision-making significantly improved sensitivity. The model offered here formalizes debates held since the Enlightenment about whether collective thinking can outperform that of elite individuals. Sharing choices and confidence in those choices can be an empowering experience.

In everyday life, many people believe that two heads are better than one. Our ability to solve problems together appears to be fundamental to the current dominance and future survival of the human species. But are two heads really better than one? We addressed this question in the context of a collective low-level perceptual decision-making task. For two observers of nearly equal visual sensitivity, two heads were definitely better than one, provided they were given the opportunity to communicate freely, even in the absence of any feedback about decision outcomes. But for observers with very different visual sensitivities, two heads were actually worse than the better one. These seemingly discrepant patterns of group behavior can be explained by a model in which two heads are Bayes optimal under the assumption that individuals accurately communicate their level of confidence on every trial.
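One simple confidence-sharing rule consistent with the abstract (a hedged sketch, not necessarily the paper's exact fitted model) is to combine the observers' noisy internal samples weighted by their reliabilities and decide from the sign of the sum. The short simulation below, with assumed noise levels, reproduces the qualitative pattern: a dyad of equally sensitive observers beats either individual, while a very unequal dyad does worse than its better member.

```python
import numpy as np

rng = np.random.default_rng(2)

def accuracies(sigmas, n_trials=200_000, signal=1.0):
    """Individual and dyad accuracies for two observers with noise levels `sigmas`."""
    sigmas = np.array(sigmas)
    s = rng.choice([-signal, signal], size=n_trials)             # which interval holds the target
    x = s + rng.normal(0.0, 1.0, (len(sigmas), n_trials)) * sigmas[:, None]
    individual = (np.sign(x) == np.sign(s)).mean(axis=1)         # each observer deciding alone
    dyad_dv = (x / sigmas[:, None]).sum(axis=0)                  # confidence-weighted sharing
    dyad = (np.sign(dyad_dv) == np.sign(s)).mean()
    return individual, dyad

for sigmas in [(1.0, 1.0), (1.0, 5.0)]:
    ind, dyad = accuracies(sigmas)
    print(f"sigmas={sigmas}: individual accuracies={np.round(ind, 3)}, dyad accuracy={dyad:.3f}")
```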


Nature | 2001

Retinal ganglion cells act largely as independent encoders

Sheila Nirenberg; S. M. Carcieri; Adam L. Jacobs; P.E. Latham

Correlated firing among neurons is widespread in the visual system. Neighbouring neurons, in areas from retina to cortex, tend to fire together more often than would be expected by chance. The importance of this correlated firing for encoding visual information is unclear and controversial. Here we examine its importance in the retina. We present the retina with natural stimuli and record the responses of its output cells, the ganglion cells. We then use information theoretic techniques to measure the amount of information about the stimuli that can be obtained from the cells under two conditions: when their correlated firing is taken into account, and when their correlated firing is ignored. We find that more than 90% of the information about the stimuli can be obtained from the cells when their correlated firing is ignored. This indicates that ganglion cells act largely independently to encode information, which greatly simplifies the problem of decoding their activity.
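A toy example in the spirit of the information-theoretic comparison described above (the probabilities are invented, and the measure follows the Delta-I approach used in related work by these authors): it computes the mutual information between stimulus and joint responses, the cost Delta-I of decoding with an independence assumption, and the fraction of information retained when correlations are ignored.

```python
import numpy as np
from itertools import product

stimuli = [0, 1]
p_s = np.array([0.5, 0.5])

# Joint response probabilities P(r1, r2 | s) for two binary "ganglion cells";
# the numbers are made up and include a stimulus-dependent correlation.
p_r_given_s = {
    0: np.array([[0.40, 0.15], [0.15, 0.30]]),   # rows: r1 = 0/1, cols: r2 = 0/1
    1: np.array([[0.10, 0.25], [0.25, 0.40]]),
}

def posterior(likelihood):
    """Posterior over stimuli given per-stimulus likelihoods of the observed response."""
    post = p_s * likelihood
    return post / post.sum()

info_full, delta_i = 0.0, 0.0
for s, r1, r2 in product(stimuli, [0, 1], [0, 1]):
    p_joint = p_s[s] * p_r_given_s[s][r1, r2]                       # P(s, r)
    lik_true = np.array([p_r_given_s[t][r1, r2] for t in stimuli])  # uses correlations
    lik_ind = np.array([p_r_given_s[t][r1].sum() * p_r_given_s[t][:, r2].sum()
                        for t in stimuli])                          # ignores correlations
    post_true, post_ind = posterior(lik_true), posterior(lik_ind)
    info_full += p_joint * np.log2(post_true[s] / p_s[s])           # mutual information I(S;R)
    delta_i += p_joint * np.log2(post_true[s] / post_ind[s])        # cost of ignoring correlations

print(f"I(S;R) = {info_full:.3f} bits, Delta I = {delta_i:.4f} bits, "
      f"fraction retained = {1 - delta_i / info_full:.1%}")
```

In the retinal recordings the corresponding fraction came out above 90%; the toy merely shows how such a comparison is computed.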


Nature | 2010

Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex.

Michael London; Arnd Roth; Lisa Beeren; Michael Häusser; P.E. Latham

It is well known that neural activity exhibits variability, in the sense that identical sensory stimuli produce different responses, but it has been difficult to determine what this variability means. Is it noise, or does it carry important information—about, for example, the internal state of the organism? Here we address this issue from the bottom up, by asking whether small perturbations to activity in cortical networks are amplified. Based on in vivo whole-cell patch-clamp recordings in rat barrel cortex, we find that a perturbation consisting of a single extra spike in one neuron produces approximately 28 additional spikes in its postsynaptic targets. We also show, using simultaneous intra- and extracellular recordings, that a single spike in a neuron produces a detectable increase in firing rate in the local network. Theoretical analysis indicates that this amplification leads to intrinsic, stimulus-independent variations in membrane potential of the order of ±2.2–4.5 mV—variations that are pure noise, and so carry no information at all. Therefore, for the brain to perform reliable computations, it must either use a rate code, or generate very large, fast depolarizing events, such as those proposed by the theory of synfire chains. However, in our in vivo recordings, we found that such events were very rare. Our findings are thus consistent with the idea that cortex is likely to use primarily a rate code.


Nature Neuroscience | 2013

Probabilistic brains: knowns and unknowns

Alexandre Pouget; Jeffrey M. Beck; Wei Ji Ma; P.E. Latham

There is strong behavioral and physiological evidence that the brain both represents probability distributions and performs probabilistic inference. Computational neuroscientists have started to shed light on how these probabilistic representations and computations might be implemented in neural circuits. One particularly appealing aspect of these theories is their generality: they can be used to model a wide range of tasks, from sensory processing to high-level cognition. To date, however, these theories have only been applied to very simple tasks. Here we discuss the challenges that will emerge as researchers start focusing their efforts on real-life computations, with a focus on probabilistic learning, structural learning and approximate inference.


Neural Computation | 1998

Statistically efficient estimation using population coding

Alexandre Pouget; Kechen Zhang; Sophie Denève; P.E. Latham

Coarse codes are widely used throughout the brain to encode sensory and motor variables. Methods designed to interpret these codes, such as population vector analysis, are either inefficient (the variance of the estimate is much larger than the smallest possible variance) or biologically implausible, like maximum likelihood. Moreover, these methods attempt to compute a scalar or vector estimate of the encoded variable. Neurons are faced with a similar estimation problem. They must read out the responses of the presynaptic neurons, but, by contrast, they typically encode the variable with a further population code rather than as a scalar. We show how a nonlinear recurrent network can be used to perform estimation in a near-optimal way while keeping the estimate in a coarse code format. This work suggests that lateral connections in the cortex may be involved in cleaning up uncorrelated noise among neurons representing similar variables.
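To make "statistically efficient" concrete, the sketch below (assumed tuning parameters, independent Poisson noise) compares the mean squared error of the classical population-vector read-out with the Cramer-Rao bound 1/I_Fisher that an efficient estimator such as maximum likelihood can attain; the gap between the two is the inefficiency the recurrent network is meant to close while keeping the estimate in population-code form.

```python
import numpy as np

rng = np.random.default_rng(3)
prefs = np.linspace(-np.pi, np.pi, 100, endpoint=False)   # preferred angles (rad)
kappa, peak, base = 8.0, 15.0, 2.0                        # assumed tuning parameters

def tuning(s):
    return base + peak * np.exp(kappa * (np.cos(prefs - s) - 1.0))

s_true = 0.3
f = tuning(s_true)
fp = peak * kappa * np.sin(prefs - s_true) * np.exp(kappa * (np.cos(prefs - s_true) - 1.0))
crb = 1.0 / np.sum(fp**2 / f)                             # Cramer-Rao bound for Poisson noise

# Monte-Carlo mean squared error of the classical population-vector read-out
errs = []
for _ in range(20_000):
    r = rng.poisson(f)
    s_pv = np.angle(np.sum(r * np.exp(1j * prefs)))
    errs.append(s_pv - s_true)
pv_mse = np.mean(np.square(errs))

print(f"population-vector MSE = {pv_mse:.2e} rad^2,  Cramer-Rao bound = {crb:.2e} rad^2")
# With narrow tuning and a background firing rate, the population-vector error
# comes out well above the bound, which is the inefficiency referred to above.
```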


Nature Neuroscience | 2004

Tuning curve sharpening for orientation selectivity: coding efficiency and the impact of correlations

Peggy Seriès; P.E. Latham; Alexandre Pouget

Several studies have shown that the information conveyed by bell-shaped tuning curves increases as their width decreases, leading to the notion that sharpening of tuning curves improves population codes. This notion, however, is based on assumptions that the noise distribution is independent among neurons and independent of the tuning curve width. Here we reexamine these assumptions in networks of spiking neurons by using orientation selectivity as an example. We compare two principal classes of model: one in which the tuning curves are sharpened through cortical lateral interactions, and one in which they are not. We report that sharpening through lateral interactions does not improve population codes but, on the contrary, leads to a severe loss of information. In addition, the sharpening models generate complicated codes that rely extensively on pairwise correlations. Our study generates several experimental predictions that can be used to distinguish between these two classes of model.
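The baseline intuition the paper re-examines comes from the independent-Poisson case, where the Fisher information I(s) = sum_i f_i'(s)^2 / f_i(s) grows as the tuning curves narrow. The sketch below reproduces that textbook calculation with assumed parameters; the paper's point is that this conclusion can reverse once the correlations induced by the sharpening mechanism itself are taken into account.

```python
import numpy as np

prefs = np.linspace(-np.pi, np.pi, 180, endpoint=False)   # preferred stimulus angles (rad)
peak, base, s = 30.0, 0.5, 0.0                            # assumed tuning parameters

def fisher_info(kappa):
    """Fisher information at s for independent Poisson neurons with von Mises-like tuning."""
    f = base + peak * np.exp(kappa * (np.cos(prefs - s) - 1.0))
    fp = peak * kappa * np.sin(prefs - s) * np.exp(kappa * (np.cos(prefs - s) - 1.0))
    return np.sum(fp**2 / f)

for kappa in (1.0, 2.0, 4.0, 8.0):                        # larger kappa = narrower tuning
    print(f"kappa = {kappa:4.1f}  ->  Fisher information = {fisher_info(kappa):9.1f}")
```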

Collaboration


Dive into P.E. Latham's collaborations.

Top Co-Authors

Sophie Denève
École Normale Supérieure

Bahador Bahrami
University College London

J. Rodgers
United States Naval Research Laboratory

M. Blank
United States Naval Research Laboratory