Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Alexandre Pouget is active.

Publication


Featured research published by Alexandre Pouget.


Nature Reviews Neuroscience | 2006

Neural correlations, population coding and computation

Bruno B. Averbeck; P.E. Latham; Alexandre Pouget

How the brain encodes information in population activity, and how it combines and manipulates that activity as it carries out computations, are questions that lie at the heart of systems neuroscience. During the past decade, with the advent of multi-electrode recording and improved theoretical models, these questions have begun to yield answers. However, a complete understanding of neuronal variability, and, in particular, how it affects population codes, is missing. This is because variability in the brain is typically correlated, and although the exact effects of these correlations are not known, it is known that they can be large. Here, we review studies that address the interaction between neuronal noise and population codes, and discuss their implications for population coding in general.


Neuron | 2008

Probabilistic Population Codes for Bayesian Decision Making

Jeffrey M. Beck; Wei Ji Ma; Roozbeh Kiani; Timothy D. Hanks; Anne K. Churchland; Jamie D. Roitman; Michael N. Shadlen; P.E. Latham; Alexandre Pouget

When making a decision, one must first accumulate evidence, often over time, and then select the appropriate action. Here, we present a neural model of decision making that can perform both evidence accumulation and action selection optimally. More specifically, we show that, given a Poisson-like distribution of spike counts, biological neural networks can accumulate evidence without loss of information through linear integration of neural activity and can select the most likely action through attractor dynamics. This holds for arbitrary correlations, any tuning curves, continuous and discrete variables, and sensory evidence whose reliability varies over time. Our model predicts that the neurons in the lateral intraparietal cortex involved in evidence accumulation encode, on every trial, a probability distribution which predicts the animal's performance. We present experimental evidence consistent with this prediction and discuss other predictions applicable to more general settings.
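The key observation in this abstract, that Poisson-like spike counts make evidence accumulation linear, can be illustrated with a minimal sketch (not the authors' model; the tuning-curve shapes, parameters, and variable names below are illustrative assumptions). For independent Poisson counts, the log likelihood is linear in the counts, so summing activity across time steps loses no information:

```python
import numpy as np

# Minimal sketch: independent Poisson spike counts r_i with tuning curves
# f_i(s) give a log likelihood
#     log p(r | s) = sum_i [ r_i * log f_i(s) - f_i(s) ] + const,
# which is linear in r. A running sum of counts over time therefore
# accumulates evidence without loss of information.
rng = np.random.default_rng(0)
s_grid = np.linspace(-10, 10, 201)      # candidate stimulus values
centers = np.linspace(-10, 10, 50)      # preferred stimuli of 50 neurons

def tuning(s):
    """Gaussian tuning curves (mean rate in spikes per time step)."""
    return 5.0 * np.exp(-0.5 * ((s - centers[:, None]) / 2.0) ** 2) + 0.1

f = tuning(s_grid)                      # shape: (neurons, stimuli)

true_s = 3.0
rates = tuning(np.array([true_s]))[:, 0]

# Accumulate spike counts over T time steps; the log posterior only needs
# the running sum of counts, i.e. linear integration of neural activity.
T = 20
total_counts = np.zeros_like(centers)
for _ in range(T):
    total_counts += rng.poisson(rates)

log_post = total_counts @ np.log(f) - T * f.sum(axis=0)
s_hat = s_grid[np.argmax(log_post)]
print(s_hat)  # maximum-likelihood estimate, near the true stimulus
```

Selecting the action associated with the peak of this log posterior is the discrete-choice analogue of the attractor-based action selection the abstract describes.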


Nature Neuroscience | 2005

Reference frames for representing visual and tactile locations in parietal cortex

Marie Avillac; Sophie Denève; Etienne Olivier; Alexandre Pouget; Jean-René Duhamel

The ventral intraparietal area (VIP) receives converging inputs from visual, somatosensory, auditory and vestibular systems that use diverse reference frames to encode sensory information. A key issue is how VIP combines those inputs together. We mapped the visual and tactile receptive fields of multimodal VIP neurons in macaque monkeys trained to gaze at three different stationary targets. Tactile receptive fields were found to be encoded in a single somatotopic, or head-centered, reference frame, whereas visual receptive fields were widely distributed between eye- and head-centered coordinates. These findings are inconsistent with a remapping of all sensory modalities into a common frame of reference. Instead, they support an alternative model of multisensory integration based on multidirectional sensory predictions (such as predicting the location of a visual stimulus given where it is felt on the skin and vice versa). This approach can also explain related findings in other multimodal areas.


Journal of Cognitive Neuroscience | 1997

Spatial transformations in the parietal cortex using basis functions

Alexandre Pouget; Terrence J. Sejnowski

Sensorimotor transformations are nonlinear mappings of sensory inputs to motor responses. We explore here the possibility that the responses of single neurons in the parietal cortex serve as basis functions for these transformations. Basis function decomposition is a general method for approximating nonlinear functions that is computationally efficient and well suited for adaptive modification. In particular, the responses of single parietal neurons can be approximated by the product of a Gaussian function of retinal location and a sigmoid function of eye position, called a gain field. A large set of such functions forms a basis set that can be used to perform an arbitrary motor response through a direct projection. We compare this hypothesis with other approaches that are commonly used to model population codes, such as computational maps and vectorial representations. Neither of these alternatives can fully account for the responses of parietal neurons, and they are computationally less efficient for nonlinear transformations. Basis functions also have the advantage of not depending on any coordinate system or reference frame. As a consequence, the position of an object can be represented in multiple reference frames simultaneously, a property consistent with the behavior of hemineglect patients with lesions in the parietal cortex.
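The gain-field idea in this abstract, units responding as a Gaussian of retinal location multiplied by a sigmoid of eye position, with nonlinear transformations read out linearly from the resulting basis set, can be sketched as follows (a toy illustration under assumed parameters, not the paper's simulations; the grid sizes, widths, and the target mapping x + e are assumptions for the example):

```python
import numpy as np

# Sketch of a basis-function layer of gain-field units: each unit's response
# is a Gaussian of retinal location x times a sigmoid of eye position e.
x_pref = np.linspace(-20, 20, 15)       # retinal preferences
e_pref = np.linspace(-20, 20, 15)       # eye-position thresholds

def basis(x, e):
    """Responses of the full grid of Gaussian-by-sigmoid units."""
    g = np.exp(-0.5 * ((x - x_pref) / 6.0) ** 2)      # retinal Gaussian
    s = 1.0 / (1.0 + np.exp(-(e - e_pref) / 6.0))     # eye-position sigmoid
    return np.outer(g, s).ravel()                     # 225 gain-field units

# Fit a linear readout mapping basis activity to a head-centered location
# x + e (least squares on a training grid).
samples = [(x, e) for x in np.linspace(-15, 15, 12)
                  for e in np.linspace(-15, 15, 12)]
A = np.array([basis(x, e) for x, e in samples])
y = np.array([x + e for x, e in samples])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Direct projection from the basis layer approximates the nonlinear mapping.
pred = basis(5.0, -3.0) @ w   # should approximate 5 + (-3) = 2
print(pred)
```

Because the basis layer itself is frame-independent, a second weight vector trained on a different target (say, an eye-centered coordinate) could read out another reference frame from the same population, which is the multiple-frames property the abstract emphasizes.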


Nature Neuroscience | 1999

Reading population codes: a neural implementation of ideal observers

Sophie Denève; P.E. Latham; Alexandre Pouget

Many sensory and motor variables are encoded in the nervous system by the activities of large populations of neurons with bell-shaped tuning curves. Extracting information from these population codes is difficult because of the noise inherent in neuronal responses. In most cases of interest, maximum likelihood (ML) is the best read-out method and would be used by an ideal observer. Using simulations and analysis, we show that a close approximation to ML can be implemented in a biologically plausible model of cortical circuitry. Our results apply to a wide range of nonlinear activation functions, suggesting that cortical areas may, in general, function as ideal observers of activity in preceding areas.


Nature Neuroscience | 2001

Efficient computation and cue integration with noisy population codes

Sophie Denève; P.E. Latham; Alexandre Pouget

The brain represents sensory and motor variables through the activity of large populations of neurons. It is not understood how the nervous system computes with these population codes, given that individual neurons are noisy and thus unreliable. We focus here on two general types of computation, function approximation and cue integration, as these are powerful enough to handle a range of tasks, including sensorimotor transformations, feature extraction in sensory systems and multisensory integration. We demonstrate that a particular class of neural networks, basis function networks with multidimensional attractors, can perform both types of computation optimally with noisy neurons. Moreover, neurons in the intermediate layers of our model show response properties similar to those observed in several multimodal cortical areas. Thus, basis function networks with multidimensional attractors may be used by the brain to compute efficiently with population codes.
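For the cue-integration side of this abstract, the Bayes-optimal rule that such networks implement can be stated in a few lines (a generic sketch of reliability-weighted combination under a Gaussian assumption, not the paper's network model; the function name and example numbers are illustrative):

```python
# Sketch of optimal cue integration: two independent noisy cues about the
# same variable (e.g. visual and auditory location) are combined by
# weighting each estimate by its reliability (inverse variance).
def integrate(mu_v, var_v, mu_a, var_a):
    """Combine two Gaussian cues; returns (mean, variance) of the posterior."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    mu = w_v * mu_v + (1.0 - w_v) * mu_a
    var = 1.0 / (1.0 / var_v + 1.0 / var_a)
    return mu, var

# A reliable visual cue (variance 1) and a noisier auditory cue (variance 4):
mu, var = integrate(mu_v=0.0, var_v=1.0, mu_a=5.0, var_a=4.0)
print(mu, var)  # combined estimate is pulled toward the reliable cue
```

The combined variance is always smaller than either cue's alone, which is the signature of optimal integration that the basis-function networks in the paper reproduce with noisy neurons.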


Nature Neuroscience | 2000

Computational approaches to sensorimotor transformations

Alexandre Pouget; Lawrence H. Snyder

Behaviors such as sensing an object and then moving your eyes or your hand toward it require that sensory information be used to help generate a motor command, a process known as a sensorimotor transformation. Here we review models of sensorimotor transformations that use a flexible intermediate representation that relies on basis functions. The use of basis functions as an intermediate is borrowed from the theory of nonlinear function approximation. We show that this approach provides a unifying insight into the neural basis of three crucial aspects of sensorimotor transformations, namely, computation, learning and short-term memory. This mathematical formalism is consistent with the responses of cortical neurons and provides a fresh perspective on the issue of frames of reference in spatial representations.


Neural Information Processing Systems | 1996

Probabilistic Interpretation of Population Codes

Richard S. Zemel; Peter Dayan; Alexandre Pouget

We present a general encoding-decoding framework for interpreting the activity of a population of units. A standard population code interpretation method, the Poisson model, starts from a description as to how a single value of an underlying quantity can generate the activities of each unit in the population. In casting it in the encoding-decoding framework, we find that this model is too restrictive to describe fully the activities of units in population codes in higher processing areas, such as the medial temporal area. Under a more powerful model, the population activity can convey information not only about a single value of some quantity but also about its whole distribution, including its variance, and perhaps even the certainty the system has in the actual presence in the world of the entity generating this quantity. We propose a novel method for forming such probabilistic interpretations of population codes and compare it to the existing method.


Annual Review of Neuroscience | 2012

Brain plasticity through the life span: learning to learn and action video games

Daphne Bavelier; C. Shawn Green; Alexandre Pouget; Paul R. Schrater

The ability of the human brain to learn is exceptional. Yet, learning is typically quite specific to the exact task used during training, a limiting factor for practical applications such as rehabilitation, workforce training, or education. The possibility of identifying training regimens that have a broad enough impact to transfer to a variety of tasks is thus highly appealing. This work reviews how complex training environments such as action video game play may actually foster brain plasticity and learning. This enhanced learning capacity, termed learning to learn, is considered in light of its computational requirements and putative neural mechanisms.


Nature Neuroscience | 2013

Probabilistic brains: knowns and unknowns

Alexandre Pouget; Jeffrey M. Beck; Wei Ji Ma; P.E. Latham

There is strong behavioral and physiological evidence that the brain both represents probability distributions and performs probabilistic inference. Computational neuroscientists have started to shed light on how these probabilistic representations and computations might be implemented in neural circuits. One particularly appealing aspect of these theories is their generality: they can be used to model a wide range of tasks, from sensory processing to high-level cognition. To date, however, these theories have only been applied to very simple tasks. Here we discuss the challenges that will emerge as researchers start focusing their efforts on real-life computations, with a focus on probabilistic learning, structural learning and approximate inference.

Collaboration


Dive into Alexandre Pouget's collaborations.

Top Co-Authors

P.E. Latham
University College London

Sophie Denève
École Normale Supérieure

Terrence J. Sejnowski
Salk Institute for Biological Studies

Dora E. Angelaki
Baylor College of Medicine

Ingmar Kanitscheider
University of Texas at Austin