
Publication


Featured research published by Christian K. Machens.


The Journal of Neuroscience | 2004

Linearity of Cortical Receptive Fields Measured with Natural Sounds

Christian K. Machens; Michael S. Wehr; Anthony M. Zador

How do cortical neurons represent the acoustic environment? This question is often addressed by probing with simple stimuli such as clicks or tone pips. Such stimuli have the advantage of yielding easily interpreted answers, but have the disadvantage that they may fail to uncover complex or higher-order neuronal response properties. Here, we adopt an alternative approach, probing neuronal responses with complex acoustic stimuli, including animal vocalizations. We used in vivo whole-cell methods in the rat auditory cortex to record subthreshold membrane potential fluctuations elicited by these stimuli. Most neurons responded robustly and reliably to the complex stimuli in our ensemble. Using regularization techniques, we estimated the linear component, the spectrotemporal receptive field (STRF), of the transformation from the sound (as represented by its time-varying spectrogram) to the membrane potential of the neuron. We find that the STRF has a rich dynamical structure, including excitatory regions positioned in general accord with the prediction of the classical tuning curve. However, whereas the STRF successfully predicts the responses to some of the natural stimuli, it surprisingly fails completely to predict the responses to others; on average, only 11% of the response power could be predicted by the STRF. Therefore, most of the response of the neuron cannot be predicted by the linear component, although the response is deterministically related to the stimulus. Analysis of the systematic errors of the STRF model shows that this failure cannot be attributed to simple nonlinearities such as adaptation to mean intensity, rectification, or saturation. Rather, the highly nonlinear response properties of auditory cortical neurons must be attributable to nonlinear interactions between sound frequencies and time-varying properties of the neural encoder.
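At its core, the regularized STRF estimate described above is a ridge-penalized linear regression from time-lagged spectrogram frames to the membrane potential. The following is a minimal sketch on synthetic data; the toy stimulus, dimensions, and the plain ridge penalty are illustrative assumptions, not the paper's actual data or regularization scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a spectrogram S (time x frequency) drives a membrane potential v.
T, F, lags = 2000, 16, 20
S = rng.standard_normal((T, F))

# Design matrix: each row is a time-lagged spectrogram snippet.
X = np.stack([S[t - lags:t].ravel() for t in range(lags, T)])

# A known "true" STRF generates the toy response, plus noise.
true_strf = rng.standard_normal(lags * F)
v = X @ true_strf + 0.5 * rng.standard_normal(T - lags)

# Ridge-regularized least squares: strf = (X'X + lam*I)^(-1) X'v.
lam = 10.0
strf = np.linalg.solve(X.T @ X + lam * np.eye(lags * F), X.T @ v)

# Fraction of response power captured by the linear prediction.
pred = X @ strf
r2 = 1.0 - np.var(v - pred) / np.var(v)
```

On this linear toy problem the STRF recovers most of the response power; the paper's point is that for real cortical responses to natural sounds the same linear model captures only about 11% of it.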


The Journal of Neuroscience | 2010

Functional, But Not Anatomical, Separation of “What” and “When” in Prefrontal Cortex

Christian K. Machens; Ranulfo Romo; Carlos D. Brody

How does the brain store information over a short period of time? Typically, the short-term memory of items or values is thought to be stored in the persistent activity of neurons in higher cortical areas. However, the activity of these neurons often varies strongly in time, even if time is unimportant for whether or not rewards are received. To elucidate this interaction of time and memory, we reexamined the activity of neurons in the prefrontal cortex of monkeys performing a working memory task. As often observed in higher cortical areas, different neurons have highly heterogeneous patterns of activity, making interpretation of the data difficult. To overcome these problems, we developed a method that finds a new representation of the data in which heterogeneity is much reduced, and time- and memory-related activities become separate and easily interpretable. This new representation consists of a few fundamental activity components that capture 95% of the firing rate variance of >800 neurons. Surprisingly, the memory-related activity components account for <20% of this firing rate variance. The observed heterogeneity of neural responses results from random combinations of these fundamental components. Based on these components, we constructed a generative linear model of the network activity. The model suggests that the representations of time and memory are maintained by separate mechanisms, even while sharing a common anatomical substrate. Testable predictions of this hypothesis are proposed. We suggest that our method may be applied to data from other tasks in which neural responses are highly heterogeneous across neurons, and dependent on more than one variable.
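The variance-capture part of this analysis — a few components explaining almost all firing-rate variance despite heterogeneous single-neuron responses — can be illustrated with plain PCA on a synthetic pseudo-population. This sketch only shows that step; the paper's method additionally separates time- from memory-related variance, which simple PCA does not do.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy pseudo-population: every neuron is a random mixture of a few
# underlying signals (e.g. time-related and memory-related components).
n_neurons, n_times, n_sources = 100, 50, 3
sources = rng.standard_normal((n_sources, n_times))
mixing = rng.standard_normal((n_neurons, n_sources))
rates = mixing @ sources + 0.1 * rng.standard_normal((n_neurons, n_times))

# PCA via SVD of the mean-centered firing rates.
Xc = rates - rates.mean(axis=1, keepdims=True)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

# A handful of components capture nearly all of the variance,
# even though single-neuron responses look heterogeneous.
top3 = float(var_explained[:3].sum())
```

Because each neuron is a random combination of the same few sources, the apparent heterogeneity collapses onto a low-dimensional set of components.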


Nature Neuroscience | 2003

Single auditory neurons rapidly discriminate conspecific communication signals

Christian K. Machens; Hartmut Schütze; Astrid Franz; Olga Kolesnikova; Martin B. Stemmler; B. Ronacher; Andreas V. M. Herz

Animals that rely on acoustic communication to find mates, such as grasshoppers, are astonishingly accurate in recognizing song patterns that are specific to their own species. This raises the question of whether they can also solve a far more complicated task that might provide a basis for mate preference and sexual selection: to distinguish individual songs by detecting slight variations around the common species-specific theme. Using spike-train discriminability to quantify the precision of neural responses from the auditory periphery of a model grasshopper species, we show that information sufficient to distinguish songs is readily available at the single-cell level when the spike trains are analyzed on a millisecond time scale.
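One common way to quantify spike-train discriminability at millisecond resolution is to smooth each spike train with a short kernel (a van Rossum-style distance) and classify trials by their nearest neighbor. The sketch below uses that approach on synthetic "songs"; the kernel, jitter level, and classifier are illustrative assumptions, not the paper's exact metric.

```python
import numpy as np

rng = np.random.default_rng(2)

def smooth(train, tau=5):
    # Exponential filter, in the spirit of a van Rossum-style distance.
    kernel = np.exp(-np.arange(5 * tau) / tau)
    return np.convolve(train, kernel)[: len(train)]

# Toy setup: two "songs" evoke distinct spike-timing patterns (1 ms bins);
# each trial jitters the spike times by up to +/- 2 ms.
T, n_trials = 500, 10
templates = [rng.random(T) < 0.02 for _ in range(2)]
trials, labels = [], []
for song, tpl in enumerate(templates):
    for _ in range(n_trials):
        times = np.flatnonzero(tpl) + rng.integers(-2, 3, int(tpl.sum()))
        train = np.zeros(T)
        train[np.clip(times, 0, T - 1)] = 1.0
        trials.append(smooth(train))
        labels.append(song)
trials, labels = np.array(trials), np.array(labels)

# Leave-one-out nearest-neighbor classification on spike-train distances.
correct = 0
for i in range(len(trials)):
    d = np.linalg.norm(trials - trials[i], axis=1)
    d[i] = np.inf
    correct += int(labels[np.argmin(d)] == labels[i])
accuracy = correct / len(trials)
```

With millisecond binning and a short smoothing time constant, single-trial responses to the same song stay mutually closer than responses to different songs, so discrimination succeeds at the single-cell level.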


PLOS Computational Biology | 2013

Predictive coding of dynamical variables in balanced spiking networks

Martin Boerlin; Christian K. Machens; Sophie Denève

Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
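The core spiking rule — a neuron fires only when doing so improves the linear readout, so its voltage is a projected prediction error — can be sketched for a one-dimensional signal. The decoding weights, thresholds, and the tracked sine wave below are toy choices; the paper derives the general network for arbitrary linear dynamical systems.

```python
import numpy as np

rng = np.random.default_rng(3)

# N leaky integrate-and-fire neurons jointly track a 1-D signal x(t).
# Each neuron's voltage is its projected prediction error G_i*(x - xhat);
# it spikes only when spiking would reduce that error (V_i > G_i^2 / 2).
N, dt, lam = 20, 1e-3, 10.0
G = rng.choice([-1.0, 1.0], N) * 0.1       # decoding weights (toy values)
thresh = G**2 / 2

T = 2000
x = np.sin(np.linspace(0, 4 * np.pi, T))   # signal to be tracked
r = np.zeros(N)                            # leaky readout of spike trains
xhat = np.zeros(T)
for t in range(1, T):
    r *= 1.0 - lam * dt                    # readout leak
    err = x[t] - G @ r                     # population-level prediction error
    V = G * err                            # each neuron's projected error
    i = int(np.argmax(V - thresh))
    if V[i] > thresh[i]:
        r[i] += 1.0                        # one spike; readout jumps by G_i
    xhat[t] = G @ r

tracking_error = float(np.mean((x - xhat) ** 2))
```

Each spike moves the readout by the spiking neuron's decoding weight, which keeps the error confined to a narrow band around zero; which neuron fires on any given trial is nearly arbitrary, so single-unit spike trains look variable even though the population readout is precise.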


Nature Neuroscience | 2016

Efficient codes and balanced networks

Sophie Denève; Christian K. Machens

Recent years have seen a growing interest in inhibitory interneurons and their circuits. A striking property of cortical inhibition is how tightly it balances excitation. Inhibitory currents not only match excitatory currents on average, but track them on a millisecond time scale, whether they are caused by external stimuli or spontaneous fluctuations. We review, together with experimental evidence, recent theoretical approaches that investigate the advantages of such tight balance for coding and computation. These studies suggest a possible revision of the dominant view that neurons represent information with firing rates corrupted by Poisson noise. Instead, tight excitatory/inhibitory balance may be a signature of a highly cooperative code, orders of magnitude more precise than a Poisson rate code. Moreover, tight balance may provide a template that allows cortical neurons to construct high-dimensional population codes and learn complex functions of their inputs.


Neural Computation | 2002

Energy-efficient coding with discrete stochastic events

Susanne Schreiber; Christian K. Machens; Andreas V. M. Herz; Simon B. Laughlin

We investigate the energy efficiency of signaling mechanisms that transfer information by means of discrete stochastic events, such as the opening or closing of an ion channel. Using a simple model for the generation of graded electrical signals by sodium and potassium channels, we find optimum numbers of channels that maximize energy efficiency. The optima depend on several factors: the relative magnitudes of the signaling cost (current flow through channels), the fixed cost of maintaining the system, the reliability of the input, additional sources of noise, and the relative costs of upstream and downstream mechanisms. We also analyze how the statistics of input signals influence energy efficiency. We find that energy-efficient signal ensembles favor a bimodal distribution of channel activations and contain only a very small fraction of large inputs when energy is scarce. We conclude that when energy use is a significant constraint, trade-offs between information transfer and energy can strongly influence the number of signaling molecules and synapses used by neurons and the manner in which these mechanisms represent information.
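The information-versus-energy trade-off can be made concrete with a toy model: a graded input sets the opening probability of N independent channels, information is the mutual information between the input and the number of open channels, and the cost combines a signaling term (mean open channels) with a fixed maintenance term. All parameter values below are illustrative assumptions, not the paper's model details.

```python
import numpy as np
from math import comb

def channel_info(N, p_states):
    # Mutual information (bits) between a discrete input (channel opening
    # probability) and the number of open channels K ~ Binomial(N, p).
    ks = np.arange(N + 1)
    P_k_s = np.array([[comb(N, k) * p**k * (1 - p) ** (N - k) for k in ks]
                      for p in p_states])
    P_s = np.full(len(p_states), 1.0 / len(p_states))
    P_k = P_s @ P_k_s
    ratio = np.where(P_k_s > 0, P_k_s / P_k, 1.0)
    return float(np.sum(P_s[:, None] * P_k_s * np.log2(ratio)))

# Graded input: 8 equiprobable opening probabilities. Energy combines a
# signaling cost (mean number of open channels) and a fixed maintenance
# cost independent of N; both numbers are illustrative.
p_states = np.linspace(0.05, 0.95, 8)
fixed_cost = 5.0
Ns = list(range(1, 61))
efficiency = [channel_info(N, p_states) / (N * p_states.mean() + fixed_cost)
              for N in Ns]
best_N = Ns[int(np.argmax(efficiency))]
```

Because information grows sublinearly with N while the signaling cost grows linearly, a nonzero fixed cost pushes the energy-efficiency optimum to an intermediate number of channels, mirroring the paper's qualitative conclusion.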


Current Opinion in Neurobiology | 2014

Variability in neural activity and behavior

Alfonso Renart; Christian K. Machens

Neural activity and behavior in laboratory experiments are surprisingly variable across trials. This variability and its potential causes have been the focus of a spirited debate. Here we review recent research that has shed light on the sources of neural variability and its impact on behavior. We explain how variability may arise from incomplete knowledge about an animal's internal states and its environment. We discuss the problem of incomplete knowledge both from the experimenter's point of view and from the animal's point of view. Both viewpoints are illustrated through several examples from the literature. We furthermore consider both mechanistic and normative models that explain how neural and behavioral variability may be linked. Finally, we review why variability may confer an adaptive advantage to organisms.


eLife | 2016

Demixed principal component analysis of neural population data

Dmitry Kobak; Wieland Brendel; Christos Constantinidis; Claudia E. Feierstein; Adam Kepecs; Zachary F. Mainen; Xue-Lian Qi; Ranulfo Romo; Naoshige Uchida; Christian K. Machens

Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure. DOI: http://dx.doi.org/10.7554/eLife.10989.001


Progress in Neurobiology | 2013

Population-wide distributions of neural activity during perceptual decision-making

Adrien Wohrer; Mark D. Humphries; Christian K. Machens

Cortical activity involves large populations of neurons, even when it is limited to functionally coherent areas. Electrophysiological recordings, on the other hand, involve comparatively small neural ensembles, even when modern-day techniques are used. Here we review results which have started to fill the gap between these two scales of inquiry, by shedding light on the statistical distributions of activity in large populations of cells. We put our main focus on data recorded in awake animals that perform simple decision-making tasks and consider statistical distributions of activity throughout cortex, across sensory, associative, and motor areas. We transversally review the complexity of these distributions, from distributions of firing rates and metrics of spike-train structure, through distributions of tuning to stimuli or actions and of choice signals, and finally the dynamical evolution of neural population activity and the distributions of (pairwise) neural interactions. This approach reveals shared patterns of statistical organization across cortex, including: (i) long-tailed distributions of activity, in which quasi-silence seems to be the rule for a majority of neurons, and which are barely distinguishable between spontaneous and active states; (ii) distributions of tuning parameters for sensory (and motor) variables, which show an extensive extrapolation and fragmentation of their representations in the periphery; and (iii) population-wide dynamics that reveal rotations of internal representations over time, whose traces can be found both in stimulus-driven and internally generated activity. We discuss how these insights are leading us away from the notion of discrete classes of cells, and are acting as powerful constraints on theories and models of cortical organization and population coding.


Frontiers in Computational Neuroscience | 2010

Demixing Population Activity in Higher Cortical Areas

Christian K. Machens

Neural responses in higher cortical areas often display a baffling complexity. In animals performing behavioral tasks, single neurons will typically encode several parameters simultaneously, such as stimuli, rewards, decisions, etc. When dealing with this large heterogeneity of responses, cells are conventionally classified into separate response categories using various statistical tools. However, this classical approach usually fails to account for the distributed nature of representations in higher cortical areas. Alternatively, principal component analysis (PCA) or related techniques can be employed to reduce the complexity of a data set while retaining the distributional aspect of the population activity. These methods, however, fail to explicitly extract the task parameters from the neural responses. Here we suggest a coordinate transformation that seeks to ameliorate these problems by combining the advantages of both methods. Our basic insight is that variance in neural firing rates can have different origins (such as changes in a stimulus, a reward, or the passage of time), and that, instead of lumping them together, as PCA does, we need to treat these sources separately. We present a method that seeks an orthogonal coordinate transformation such that the variance captured from different sources falls into orthogonal subspaces and is maximized within these subspaces. Using simulated examples, we show how this approach can be used to demix heterogeneous neural responses. Our method may help to lift the fog of response heterogeneity in higher cortical areas.
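The key idea — splitting firing-rate variance by its source before any dimensionality reduction — can be sketched with simple marginalization averages over a toy neuron × stimulus × time array. The weights and signals below are synthetic, and this sketch stops at the variance split; the actual method additionally finds optimal orthogonal projections within each marginalization.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy population: each neuron mixes a pure time signal and a pure
# stimulus signal with random weights, so single cells look mixed.
n, S, T = 50, 4, 100
time_sig = np.sin(np.linspace(0, np.pi, T))   # identical across stimuli
stim_sig = np.linspace(-1, 1, S)              # constant over time
w_t = rng.standard_normal(n)
w_s = rng.standard_normal(n)
X = (w_t[:, None, None] * time_sig[None, None, :]
     + w_s[:, None, None] * stim_sig[None, :, None]
     + 0.05 * rng.standard_normal((n, S, T)))

# Marginalization: averaging over one variable isolates the other's
# variance, so the two sources fall into separate parts of the data.
Xc = X - X.mean(axis=(1, 2), keepdims=True)   # center each neuron
X_time = Xc.mean(axis=1, keepdims=True)       # time-related part
X_stim = Xc.mean(axis=2, keepdims=True)       # stimulus-related part

var_total = float(np.sum(Xc**2))
var_time = float(np.sum(X_time**2)) * S       # broadcast over S stimuli
var_stim = float(np.sum(X_stim**2)) * T       # broadcast over T time bins
demixed_fraction = (var_time + var_stim) / var_total
```

Because the time- and stimulus-marginalized parts are orthogonal by construction, their variances add up to nearly the total; dPCA then reduces dimensionality within each marginalization separately instead of lumping all sources together as PCA does.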

Collaboration


Dive into Christian K. Machens's collaborations.

Top Co-Authors

Sophie Denève
École Normale Supérieure

Ranulfo Romo
National Autonomous University of Mexico

Adrien Wohrer
École Normale Supérieure

David Barrett
École Normale Supérieure

Wieland Brendel
École Normale Supérieure

Anthony M. Zador
Cold Spring Harbor Laboratory

Ralph Bourdoukan
École Normale Supérieure