
Publication


Featured research published by Sophie Denève.


Nature Neuroscience | 1999

Reading population codes: a neural implementation of ideal observers.

Sophie Denève; P.E. Latham; Alexandre Pouget

Many sensory and motor variables are encoded in the nervous system by the activities of large populations of neurons with bell-shaped tuning curves. Extracting information from these population codes is difficult because of the noise inherent in neuronal responses. In most cases of interest, maximum likelihood (ML) is the best read-out method and would be used by an ideal observer. Using simulations and analysis, we show that a close approximation to ML can be implemented in a biologically plausible model of cortical circuitry. Our results apply to a wide range of nonlinear activation functions, suggesting that cortical areas may, in general, function as ideal observers of activity in preceding areas.
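
As a rough illustration of the ideal-observer read-out described in this abstract, the sketch below decodes a stimulus from the Poisson spike counts of neurons with bell-shaped tuning curves by maximizing the log-likelihood over a stimulus grid. It is not the paper's recurrent cortical model; the neuron count, tuning width and peak rate are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 64
prefs = np.linspace(-40, 40, n_neurons)   # preferred stimuli (deg)
sigma = 8.0                               # tuning-curve width (deg)
peak = 20.0                               # peak mean spike count

def tuning(s):
    """Mean spike counts of the population for stimulus s."""
    return peak * np.exp(-(s - prefs) ** 2 / (2 * sigma ** 2))

# One noisy population response to a true stimulus.
s_true = 3.7
counts = rng.poisson(tuning(s_true))

# ML read-out: maximize the Poisson log-likelihood over a stimulus grid.
grid = np.linspace(-40, 40, 2001)
rates = peak * np.exp(-(grid[:, None] - prefs) ** 2 / (2 * sigma ** 2))
log_lik = (counts * np.log(rates) - rates).sum(axis=1)
s_ml = grid[np.argmax(log_lik)]

print(f"true stimulus {s_true:.2f} deg, ML estimate {s_ml:.2f} deg")
```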


Nature Neuroscience | 2001

Efficient computation and cue integration with noisy population codes

Sophie Denève; P.E. Latham; Alexandre Pouget

The brain represents sensory and motor variables through the activity of large populations of neurons. It is not understood how the nervous system computes with these population codes, given that individual neurons are noisy and thus unreliable. We focus here on two general types of computation, function approximation and cue integration, as these are powerful enough to handle a range of tasks, including sensorimotor transformations, feature extraction in sensory systems and multisensory integration. We demonstrate that a particular class of neural networks, basis function networks with multidimensional attractors, can perform both types of computation optimally with noisy neurons. Moreover, neurons in the intermediate layers of our model show response properties similar to those observed in several multimodal cortical areas. Thus, basis function networks with multidimensional attractors may be used by the brain to compute efficiently with population codes.
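
A minimal sketch of the function-approximation side of this work, under simplified assumptions: a layer of Gaussian basis function units tuned jointly to two input variables, with a linear read-out fitted by least squares to compute a function such as x + y. The network size and the offline least-squares fit are illustrative choices, not the paper's attractor dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)

centers = np.linspace(-20, 20, 15)
cx, cy = np.meshgrid(centers, centers)   # 15 x 15 grid of basis units
cx, cy = cx.ravel(), cy.ravel()
sigma = 4.0

def basis(x, y):
    """Responses of Gaussian basis function units to the input pair (x, y)."""
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

# Fit linear read-out weights so the output approximates f(x, y) = x + y.
xs = rng.uniform(-15, 15, 500)
ys = rng.uniform(-15, 15, 500)
B = np.array([basis(x, y) for x, y in zip(xs, ys)])
w, *_ = np.linalg.lstsq(B, xs + ys, rcond=None)

# The same basis layer could feed many read-outs (other functions of x, y).
print(basis(3.0, -7.0) @ w)   # close to 3 + (-7) = -4
```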


Neural Computation | 1998

Statistically efficient estimation using population coding

Alexandre Pouget; Kechen Zhang; Sophie Denève; P.E. Latham

Coarse codes are widely used throughout the brain to encode sensory and motor variables. Methods designed to interpret these codes, such as population vector analysis, are either inefficient (the variance of the estimate is much larger than the smallest possible variance) or biologically implausible, like maximum likelihood. Moreover, these methods attempt to compute a scalar or vector estimate of the encoded variable. Neurons are faced with a similar estimation problem. They must read out the responses of the presynaptic neurons, but, by contrast, they typically encode the variable with a further population code rather than as a scalar. We show how a nonlinear recurrent network can be used to perform estimation in a near-optimal way while keeping the estimate in a coarse code format. This work suggests that lateral connections in the cortex may be involved in cleaning up uncorrelated noise among neurons representing similar variables.
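
The inefficiency of population vector analysis relative to maximum likelihood can be illustrated with a small simulation (a sketch, not the paper's recurrent network): both decoders are applied to the same Poisson population responses, and the spreads of their estimates are compared. The tuning shape, spontaneous rate and trial count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

n, gain, base, width = 60, 10.0, 0.5, 0.5
prefs = np.linspace(0, 2 * np.pi, n, endpoint=False)   # preferred directions

def rates(theta):
    """Gaussian-shaped tuning on the circle, with spontaneous activity."""
    d = np.angle(np.exp(1j * (theta - prefs)))          # wrapped difference
    return base + gain * np.exp(-d ** 2 / (2 * width ** 2))

theta_true = 1.0
grid = np.linspace(0, 2 * np.pi, 720, endpoint=False)
grid_rates = np.array([rates(g) for g in grid])

pv_err, ml_err = [], []
for _ in range(2000):
    counts = rng.poisson(rates(theta_true))
    # Population vector: angle of the count-weighted sum of unit vectors.
    pv = np.angle(np.sum(counts * np.exp(1j * prefs)))
    # Maximum likelihood: maximize the Poisson log-likelihood over the grid.
    ll = (counts * np.log(grid_rates) - grid_rates).sum(axis=1)
    ml = grid[np.argmax(ll)]
    pv_err.append(np.angle(np.exp(1j * (pv - theta_true))))
    ml_err.append(np.angle(np.exp(1j * (ml - theta_true))))

print("population vector std:", np.std(pv_err))   # typically the larger
print("maximum likelihood std:", np.std(ml_err))
```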


Journal of Physiology-paris | 2004

Bayesian multisensory integration and cross-modal spatial links

Sophie Denève; Alexandre Pouget

Our perception of the world is the result of combining information from several senses, such as vision, audition and proprioception. These sensory modalities use widely different frames of reference to represent the properties and locations of objects. Moreover, multisensory cues come with different degrees of reliability, and the reliability of a given cue can change in different contexts. The Bayesian framework--which we describe in this review--provides an optimal solution for combining cues that are not equally reliable. However, this approach does not address the issue of frames of reference. We show that this problem can be solved by creating cross-modal spatial links in basis function networks. Finally, we show how the basis function approach can be combined with the Bayesian framework to yield networks that can perform optimal multisensory combination. On the basis of this theory, we argue that multisensory integration is a dialogue between sensory modalities rather than the convergence of all sensory information onto a supra-modal area.
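
The reliability-weighted combination rule at the heart of the Bayesian framework can be stated in a few lines. The sketch below assumes two independent Gaussian cues; the numbers are illustrative.

```python
def combine(mu_a, var_a, mu_b, var_b):
    """Posterior mean and variance for two independent Gaussian cues."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)  # weight = relative reliability
    mu = w_a * mu_a + (1 - w_a) * mu_b
    var = 1 / (1 / var_a + 1 / var_b)            # combined cue is more reliable
    return mu, var

# A reliable visual cue at 0 deg and a noisier auditory cue at 10 deg:
print(combine(0.0, 1.0, 10.0, 4.0))   # (2.0, 0.8): pulled toward vision
```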


Neural Computation | 1999

Narrow versus wide tuning curves: what's best for a population code?

Alexandre Pouget; Sophie Denève; Jean-Christophe Ducom; P.E. Latham

Neurophysiologists are often faced with the problem of evaluating the quality of a code for a sensory or motor variable, either to relate it to the performance of the animal in a simple discrimination task or to compare the codes at various stages along the neuronal pathway. One common belief that has emerged from such studies is that sharpening of tuning curves improves the quality of the code, although only to a certain point; sharpening beyond that is believed to be harmful. We show that this belief relies on either problematic technical analysis or improper assumptions about the noise. We conclude that one cannot tell, in the general case, whether narrow tuning curves are better than wide ones; the answer depends critically on the covariance of the noise. The same conclusion applies to other manipulations of the tuning curve profiles such as gain increase.
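
The dependence on noise assumptions can be made concrete with Fisher information, the standard measure behind such analyses. The sketch below computes J(theta) = sum_i f_i'(theta)^2 / f_i(theta) for independent Poisson neurons with circular tuning of several widths; under these particular assumptions narrower tuning helps, and the paper's point is that correlated noise can overturn conclusions of this kind. All parameters are illustrative.

```python
import numpy as np

n = 100
prefs = np.linspace(0, 2 * np.pi, n, endpoint=False)

def fisher_info(theta, kappa, gain=10.0):
    """J(theta) for independent Poisson neurons with von-Mises-like tuning."""
    f = gain * np.exp(kappa * (np.cos(theta - prefs) - 1))
    fprime = -gain * kappa * np.sin(theta - prefs) * \
        np.exp(kappa * (np.cos(theta - prefs) - 1))
    return np.sum(fprime ** 2 / f)

for kappa in [0.5, 2.0, 8.0]:   # larger kappa means narrower tuning
    print(f"kappa = {kappa}: J = {fisher_info(0.0, kappa):.1f}")
```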


The Journal of Neuroscience | 2007

Optimal Sensorimotor Integration in Recurrent Cortical Networks: A Neural Implementation of Kalman Filters

Sophie Denève; Jean-René Duhamel; Alexandre Pouget

Several behavioral experiments suggest that the nervous system uses an internal model of the dynamics of the body to implement a close approximation to a Kalman filter. This filter can be used to perform a variety of tasks nearly optimally, such as predicting the sensory consequence of motor action, integrating sensory and body posture signals, and computing motor commands. We propose that the neural implementation of this Kalman filter involves recurrent basis function networks with attractor dynamics, a kind of architecture that can be readily mapped onto cortical circuits. In such networks, the tuning curves to variables such as arm velocity are remarkably noninvariant in the sense that the amplitude and width of the tuning curves of a given neuron can vary greatly depending on other variables such as the position of the arm or the reliability of the sensory feedback. This property could explain some puzzling properties of tuning curves in the motor and premotor cortex, and it leads to several new predictions.
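
For reference, the computation the proposed networks approximate is the classic Kalman filter. Below is a minimal scalar sketch (not the paper's basis function network): predict the latent state from its dynamics, then correct the prediction by the reliability-weighted innovation. The dynamics and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

a, q, r = 0.95, 0.1, 0.5    # dynamics, process noise var, observation noise var
x = 0.0                     # true latent state (e.g. arm position)
mu, P = 0.0, 1.0            # posterior mean and variance

for t in range(50):
    x = a * x + rng.normal(0.0, np.sqrt(q))    # latent dynamics
    y = x + rng.normal(0.0, np.sqrt(r))        # noisy observation
    mu_pred, P_pred = a * mu, a * a * P + q    # predict from the internal model
    K = P_pred / (P_pred + r)                  # Kalman gain: relative reliability
    mu = mu_pred + K * (y - mu_pred)           # correct with the innovation
    P = (1 - K) * P_pred

print(f"final true state {x:.2f}, estimate {mu:.2f} +/- {np.sqrt(P):.2f}")
```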


PLOS Computational Biology | 2013

Predictive coding of dynamical variables in balanced spiking networks.

Martin Boerlin; Christian K. Machens; Sophie Denève

Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
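
A toy version of the core spike rule conveys the flavor of the approach: each neuron's voltage tracks the error between the signal and its linear read-out, and a neuron fires only when its spike would reduce that error. The sketch below is a simplified 1-D variant with hand-picked decoding weights, without the paper's full derivation (no slow recurrent connections, no arbitrary linear dynamical systems).

```python
import numpy as np

dt, lam = 1e-3, 10.0                                 # time step, read-out decay
N = 20
Gamma = np.where(np.arange(N) < N // 2, 0.1, -0.1)   # decoding weights
thresh = Gamma ** 2 / 2                              # spike iff error shrinks

T = int(2.0 / dt)
x = np.sin(2 * np.pi * np.linspace(0, 2, T))         # target signal (1 Hz)
r = np.zeros(N)                                      # filtered spike trains
xhat = np.zeros(T)

for t in range(T):
    err = x[t] - Gamma @ r          # prediction error driving the voltages
    V = Gamma * err                 # each neuron sees its projection of the error
    i = np.argmax(V - thresh)       # most super-threshold neuron
    if V[i] > thresh[i]:
        r[i] += 1.0                 # a spike updates the linear read-out
    r -= dt * lam * r               # leak of the filtered spike trains
    xhat[t] = Gamma @ r

print("mean |x - xhat|:", np.mean(np.abs(x - xhat)))
```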


Nature Neuroscience | 2016

Efficient codes and balanced networks

Sophie Denève; Christian K. Machens

Recent years have seen a growing interest in inhibitory interneurons and their circuits. A striking property of cortical inhibition is how tightly it balances excitation. Inhibitory currents not only match excitatory currents on average, but track them on a millisecond time scale, whether they are caused by external stimuli or spontaneous fluctuations. We review, together with experimental evidence, recent theoretical approaches that investigate the advantages of such tight balance for coding and computation. These studies suggest a possible revision of the dominant view that neurons represent information with firing rates corrupted by Poisson noise. Instead, tight excitatory/inhibitory balance may be a signature of a highly cooperative code, orders of magnitude more precise than a Poisson rate code. Moreover, tight balance may provide a template that allows cortical neurons to construct high-dimensional population codes and learn complex functions of their inputs.


Current Opinion in Neurobiology | 2011

Neural processing as causal inference

Timm Lochmann; Sophie Denève

Perception is about making sense, that is, understanding what events in the outside world caused the sensory observations. Consistent with this intuition, many aspects of human behavior confronting noise and ambiguity are well explained by principles of causal inference. Extending these insights, recent studies have applied the same powerful set of tools to perceptual processing at the neural level. According to these approaches, microscopic neural structures solve elementary probabilistic tasks and can be combined to construct hierarchical predictive models of the sensory input. This framework suggests that variability in neural responses reflects the inherent uncertainty associated with sensory interpretations and that sensory neurons are active predictors rather than passive filters of their inputs. Causal inference can account parsimoniously and quantitatively for non-linear dynamical properties in single synapses, single neurons and sensory receptive fields.


Neural Computation | 2008

Online learning with hidden Markov models

Gianluigi Mongillo; Sophie Denève

We present an online version of the expectation-maximization (EM) algorithm for hidden Markov models (HMMs). The sufficient statistics required for parameter estimation are computed recursively in time, that is, online rather than with the batch forward-backward procedure. This computational scheme is generalized to the case where the model parameters can change with time by introducing a discount factor into the recurrence relations. The resulting algorithm is equivalent to the batch EM algorithm for an appropriate discount factor and scheduling of parameter updates. On the other hand, the online algorithm is able to deal with dynamic environments, i.e., when the statistics of the observed data change with time. The implications of the online algorithm for probabilistic modeling in neuroscience are briefly discussed.
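
A simplified, filtering-based variant of the idea can be sketched in a few lines: accumulate discounted sufficient statistics for transitions and emissions from the filtered posterior, and re-normalize them into parameter estimates at each step. This is not the paper's exact recursion for the expected complete-data statistics; all symbols and constants below are local to the sketch, and the estimates are recovered only up to a relabeling of the hidden states.

```python
import numpy as np

rng = np.random.default_rng(5)

K, M = 2, 3                                            # states, symbols
A_true = np.array([[0.9, 0.1], [0.2, 0.8]])            # true transition matrix
B_true = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])  # true emission matrix

A = rng.dirichlet(np.ones(K), size=K)   # initial transition estimate
B = rng.dirichlet(np.ones(M), size=K)   # initial emission estimate
S_A = np.ones((K, K))                   # discounted transition counts
S_B = np.ones((K, M))                   # discounted emission counts
alpha = np.full(K, 1.0 / K)             # filtered state posterior
gamma = 0.999                           # discount factor

z = 0
for t in range(20000):
    z = rng.choice(K, p=A_true[z])      # simulate the hidden chain
    y = rng.choice(M, p=B_true[z])      # simulate an observation
    # Online E-step: posterior over (previous state, current state) given y.
    joint = alpha[:, None] * A * B[:, y][None, :]
    joint /= joint.sum()
    alpha = joint.sum(axis=0)
    # Discounted accumulation of sufficient statistics.
    S_A = gamma * S_A + joint
    S_B = gamma * S_B
    S_B[:, y] += alpha
    # Online M-step: re-normalize the statistics into parameter estimates.
    A = S_A / S_A.sum(axis=1, keepdims=True)
    B = S_B / S_B.sum(axis=1, keepdims=True)

print("estimated transitions:\n", np.round(A, 2))
print("estimated emissions:\n", np.round(B, 2))
```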

Collaboration


Dive into Sophie Denève's collaborations.

Top Co-Authors

Renaud Jardri, École Normale Supérieure
P.E. Latham, University College London
Jean-René Duhamel, Centre national de la recherche scientifique
Boris Gutkin, École Normale Supérieure
David Barrett, École Normale Supérieure
Timm Lochmann, École Normale Supérieure
Ralph Bourdoukan, École Normale Supérieure