
Publications

Featured research published by Martin Stemmler.


PLOS Computational Biology | 2010

Action potential energy efficiency varies among neuron types in vertebrates and invertebrates

Biswa Sengupta; Martin Stemmler; Simon B. Laughlin; Jeremy E. Niven

The initiation and propagation of action potentials (APs) places high demands on the energetic resources of neural tissue. Each AP forces ATP-driven ion pumps to work harder to restore the ionic concentration gradients, thus consuming more energy. Here, we ask whether the ionic currents underlying the AP can be predicted theoretically from the principle of minimum energy consumption. A long-held supposition that APs are energetically wasteful, based on theoretical analysis of the squid giant axon AP, has recently been overturned by studies that measured the currents contributing to the AP in several mammalian neurons. In the single-compartment models studied here, AP energy consumption varies greatly among vertebrate and invertebrate neurons, with several mammalian neuron models using close to the capacitive minimum of energy needed. Strikingly, energy consumption can increase by more than ten-fold simply by changing the overlap of the Na+ and K+ currents during the AP without changing the AP's shape. As a consequence, the height and width of the AP are poor predictors of energy consumption. In the Hodgkin–Huxley model of the squid axon, optimizing the kinetics or number of Na+ and K+ channels can whittle down the number of ATP molecules needed for each AP by a factor of four. In contrast to the squid AP, the temporal profiles of the currents underlying the APs of some mammalian neurons are nearly perfectly matched to the optimized properties of ionic conductances so as to minimize the ATP cost.
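The ten-fold effect of current overlap can be illustrated with a back-of-the-envelope sketch. All numbers below are toy values of my own, not the paper's models; the only facts used are that the Na+/K+ pump extrudes 3 Na+ ions per ATP, and that Na+ charge cancelled by simultaneous K+ current is wasted without changing the net (capacitive) current that sets the AP shape.

```python
E_CHARGE = 1.602e-19   # elementary charge in coulombs
NA_PER_ATP = 3         # the Na+/K+ pump extrudes 3 Na+ ions per ATP

def atp_per_ap(q_na):
    """ATP molecules needed to pump back the Na+ charge q_na (in coulombs)."""
    return q_na / (NA_PER_ATP * E_CHARGE)

def na_charge(q_net, overlap):
    """Total Na+ charge delivered during the AP.

    q_net   -- net capacitive charge that sets the AP shape
    overlap -- fraction of the Na+ charge cancelled by simultaneous K+ current
    """
    return q_net / (1.0 - overlap)

q_net = 1e-9  # 1 nC of capacitive charge (toy value)
efficient = atp_per_ap(na_charge(q_net, overlap=0.0))  # currents fully separated
wasteful = atp_per_ap(na_charge(q_net, overlap=0.9))   # 90% of Na+ wasted

print(f"cost ratio: {wasteful / efficient:.1f}x")  # prints "cost ratio: 10.0x"
```

The AP shape (set by the net charge) is identical in both cases; only the degree of overlap differs, which is why height and width are poor predictors of energetic cost.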


Neural Computation | 2012

Optimal population codes for space: Grid cells outperform place cells

Alexander Mathis; Andreas V. M. Herz; Martin Stemmler

Rodents use two distinct neuronal coordinate systems to estimate their position: place fields in the hippocampus and grid fields in the entorhinal cortex. Whereas place cells spike at only one particular spatial location, grid cells fire at multiple sites that correspond to the points of an imaginary hexagonal lattice. We study how to best construct place and grid codes, taking the probabilistic nature of neural spiking into account. Which spatial encoding properties of individual neurons confer the highest resolution when decoding the animal's position from the neuronal population response? A priori, estimating a spatial position from a grid code could be ambiguous, as regular periodic lattices possess translational symmetry. The solution to this problem requires lattices for grid cells with different spacings; the spatial resolution crucially depends on choosing the right ratios of these spacings across the population. We compute the expected error in estimating the position in both the asymptotic limit, using Fisher information, and for low spike counts, using maximum likelihood estimation. Achieving high spatial resolution and covering a large range of space in a grid code leads to a trade-off: the best grid code for spatial resolution is built of nested modules with different spatial periods, one inside the other, whereas maximizing the spatial range requires distinct spatial periods that are pairwise incommensurate. Optimizing the spatial resolution predicts two grid cell properties that have been experimentally observed. First, short lattice spacings should outnumber long lattice spacings. Second, the grid code should be self-similar across different lattice spacings, so that the grid field always covers a fixed fraction of the lattice period. If these conditions are satisfied and the spatial “tuning curves” for each neuron span the same range of firing rates, then the resolution of the grid code easily exceeds that of the best possible place code with the same number of neurons.
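Why nested periodic scales resolve position at all can be seen in a noiseless toy sketch of my own (it is not the paper's probabilistic model): each module reports position only modulo its spatial period, but reading the modules coarse-to-fine pins the position down like successive digits of a number.

```python
def encode(x, scales):
    """Each module reports only the phase of x within its own period."""
    return [x % s for s in scales]

def decode(phases, scales):
    """Coarse-to-fine readout: start at the coarsest module and refine."""
    estimate = phases[0]  # the coarsest module fixes the rough position
    for phase, s in zip(phases[1:], scales[1:]):
        # snap the running estimate to the nearest point with the right phase
        k = round((estimate - phase) / s)
        estimate = k * s + phase
    return estimate

scales = [64.0, 16.0, 4.0, 1.0]  # nested periods (toy values, ratio 4)
x_true = 37.25

phases = encode(x_true, scales)
print(decode(phases, scales))  # recovers 37.25 exactly in this noiseless toy

# Finer modules even correct coarse-module noise, provided the error stays
# below half of the next module's period:
phases_noisy = [phases[0] + 1.7] + phases[1:]
print(decode(phases_noisy, scales))  # still 37.25
```

The snapping step is where the ratio of adjacent periods matters: the larger the ratio, the fewer modules are needed, but the less coarse-module noise the readout can tolerate before snapping to the wrong lattice point.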


PLOS Computational Biology | 2013

Information and efficiency in the nervous system-a synthesis

Biswa Sengupta; Martin Stemmler; K. J. Friston

In systems biology, questions concerning the molecular and cellular makeup of an organism are of utmost importance, especially when trying to understand how unreliable components—like genetic circuits, biochemical cascades, and ion channels, among others—enable reliable and adaptive behaviour. The repertoire and speed of biological computations are limited by thermodynamic or metabolic constraints: an example can be found in neurons, where fluctuations in biophysical states limit the information they can encode; an estimated 20–60% of the brain's total energy budget is used for signalling purposes, either via action potentials or by synaptic transmission. Here, we consider the imperatives for neurons to optimise computational and metabolic efficiency, wherein benefits and costs trade off against each other in the context of self-organised and adaptive behaviour. In particular, we try to link information-theoretic (variational) and thermodynamic (Helmholtz) free-energy formulations of neuronal processing and show how they are related in a fundamental way through a complexity-minimisation lemma.


Science | 2015

Connecting multiple spatial scales to decode the population activity of grid cells

Martin Stemmler; Alexander Mathis; Andreas V. M. Herz

Reading the neural code for space: discrete scales of grid-cell activity enable goal-directed navigation and localization. Mammalian grid cells fire when an animal crosses the points of an imaginary hexagonal grid tessellating the environment. We show how animals can navigate by reading out a simple population vector of grid cell activity across multiple spatial scales, even though neural activity is intrinsically stochastic. This theory of dead reckoning explains why grid cells are organized into discrete modules within which all cells have the same lattice scale and orientation. The lattice scale changes from module to module and should form a geometric progression with a scale ratio of around 3/2 to minimize the risk of making large-scale errors in spatial localization. Such errors should also occur if intermediate-scale modules are silenced, whereas knocking out the module at the smallest scale will only affect spatial precision. For goal-directed navigation, the allocentric grid cell representation can be readily transformed into the egocentric goal coordinates needed for planning movements. The goal location is set by nonlinear gain fields that act on goal vector cells. This theory predicts neural and behavioral correlates of grid cell readout that transcend the known link between grid cells of the medial entorhinal cortex and place cells of the hippocampus.


Journal of Neuroscience Methods | 2012

Automated optimization of a reduced layer 5 pyramidal cell model based on experimental data

Armin Bahl; Martin Stemmler; Andreas V. M. Herz; Arnd Roth

The construction of compartmental models of neurons involves tuning a set of parameters to make the model neuron behave as realistically as possible. While the parameter space of single-compartment models or other simple models can be exhaustively searched, the introduction of dendritic geometry causes the number of parameters to balloon. As parameter tuning is a daunting and time-consuming task when performed manually, reliable methods for automatically optimizing compartmental models are desperately needed, as only optimized models can capture the behavior of real neurons. Here we present a three-step strategy to automatically build reduced models of layer 5 pyramidal neurons that closely reproduce experimental data. First, we reduce the pattern of dendritic branches of a detailed model to a set of equivalent primary dendrites. Second, the ion channel densities are estimated using a multi-objective optimization strategy to fit the voltage trace recorded under two conditions - with and without the apical dendrite occluded by pinching. Finally, we tune dendritic calcium channel parameters to model the initiation of dendritic calcium spikes and the coupling between soma and dendrite. More generally, this new method can be applied to construct families of models of different neuron types, with applications ranging from the study of information processing in single neurons to realistic simulations of large-scale network dynamics.
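The general idea of automated model construction — tune parameters until a simulated voltage trace matches a recorded one — can be sketched in a few lines. This toy of my own fits a passive (leaky) single-compartment membrane by seeded random search; the paper itself uses multi-objective optimization on reduced multi-compartment pyramidal-cell models with active conductances.

```python
import random

def simulate(g_leak, cm, i_inj=0.5, e_leak=-70.0, dt=0.1, steps=400):
    """Forward-Euler integration of C dV/dt = -g_leak * (V - E_leak) + I."""
    v = e_leak
    trace = []
    for _ in range(steps):
        v += (-g_leak * (v - e_leak) + i_inj) / cm * dt
        trace.append(v)
    return trace

def sq_error(params, target):
    """Sum of squared differences between model and target voltage traces."""
    return sum((a - b) ** 2 for a, b in zip(simulate(*params), target))

# "Experimental" target trace, produced here with known parameters so the
# search has a ground truth to recover
target = simulate(g_leak=0.05, cm=1.0)

random.seed(0)
best, best_err = None, float("inf")
for _ in range(5000):
    candidate = (random.uniform(0.01, 0.1), random.uniform(0.5, 2.0))
    err = sq_error(candidate, target)
    if err < best_err:
        best, best_err = candidate, err

print(f"g_leak={best[0]:.3f}, cm={best[1]:.3f}, error={best_err:.2e}")
```

Real compartmental models replace the random search with evolutionary or multi-objective algorithms precisely because, as the abstract notes, dendritic geometry makes the parameter space far too large for exhaustive or naive sampling.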


Proceedings of the IEEE | 2014

Power Consumption During Neuronal Computation

Biswa Sengupta; Martin Stemmler

Maintaining the ability of the nervous system to perceive, remember, process, and react to the outside world requires a continuous energy supply. Yet the overall power consumption is remarkably low, which has inspired engineers to mimic nervous systems in designing artificial cochlea, retinal implants, and brain-computer interfaces (BCIs) to improve the quality of life in patients. Such neuromorphic devices are both energy efficient and increasingly able to emulate many functions of the human nervous system. We examine the energy constraints of neuronal signaling within biology, review the quantitative tradeoff between energy use and information processing, and ask whether the biophysics and design of nerve cells minimizes energy consumption.


Proceedings of the National Academy of Sciences of the United States of America | 2012

Grid cells in rat entorhinal cortex encode physical space with independent firing fields and phase precession at the single-trial level

Eric T. Reifenstein; Richard Kempter; Susanne Schreiber; Martin Stemmler; Andreas V. M. Herz

When a rat moves, grid cells in its entorhinal cortex become active in multiple regions of the external world that form a hexagonal lattice. As the animal traverses one such “firing field,” spikes tend to occur at successively earlier theta phases of the local field potential. This phenomenon is called phase precession. Here, we show that spike phases provide 80% more spatial information than spike counts and that they improve position estimates from single neurons down to a few centimeters. To understand what limits the resolution and how variable spike phases are across different field traversals, we analyze spike trains run by run. We find that the multiple firing fields of a grid cell operate as independent elements for encoding physical space. In addition, phase precession is significantly stronger than the pooled-run data suggest. Despite the inherent stochasticity of grid-cell firing, phase precession is therefore a robust phenomenon at the single-trial level, making a theta-phase code for spatial navigation feasible.
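The readout that makes phase so informative can be sketched with a linear phase-position model (hypothetical numbers of my own, not the paper's fitted values): within a firing field, spike theta phase advances roughly linearly with distance travelled, so a single spike's phase can be inverted into a position estimate far finer than the field itself.

```python
FIELD_START = 0.0    # cm, entry edge of the firing field (toy value)
FIELD_WIDTH = 40.0   # cm, field extent (toy value)
PHASE_ENTRY = 360.0  # degrees at field entry
PHASE_EXIT = 0.0     # degrees at field exit (one full precession cycle)

def phase_at(x):
    """Linear phase-position model inside the field."""
    frac = (x - FIELD_START) / FIELD_WIDTH
    return PHASE_ENTRY + (PHASE_EXIT - PHASE_ENTRY) * frac

def position_from_phase(phase):
    """Invert the linear model: spike theta phase -> position estimate."""
    frac = (phase - PHASE_ENTRY) / (PHASE_EXIT - PHASE_ENTRY)
    return FIELD_START + frac * FIELD_WIDTH

x_true = 13.0
x_hat = position_from_phase(phase_at(x_true))
print(x_hat)  # a single spike's phase pins position within the 40 cm field
```

A spike count alone would only say "the animal is somewhere in this field"; the run-by-run analysis in the paper quantifies how much of this idealized phase information survives the trial-to-trial variability.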


Neuron | 2015

Local Postsynaptic Voltage-Gated Sodium Channel Activation in Dendritic Spines of Olfactory Bulb Granule Cells

Wolfgang Georg Bywalez; Dinu Patirniche; Vanessa Rupprecht; Martin Stemmler; Andreas V. M. Herz; Dénes Pálfi; Balázs Rózsa; Veronica Egger

Neuronal dendritic spines have been speculated to function as independent computational units, yet evidence for active electrical computation in spines is scarce. Here we show that strictly local voltage-gated sodium channel (Nav) activation can occur during excitatory postsynaptic potentials in the spines of olfactory bulb granule cells, which we mimic and detect via combined two-photon uncaging of glutamate and calcium imaging in conjunction with whole-cell recordings. We find that local Nav activation boosts calcium entry into spines through high-voltage-activated calcium channels and accelerates postsynaptic somatic depolarization, without affecting NMDA receptor-mediated signaling. Hence, Nav-mediated boosting promotes rapid output from the reciprocal granule cell spine onto the lateral mitral cell dendrite and thus can speed up recurrent inhibition. This striking example of electrical compartmentalization both adds to the understanding of olfactory network processing and broadens the general view of spine function.


Physical Review E | 2013

Multiscale codes in the nervous system: The problem of noise correlations and the ambiguity of periodic scales

Alexander Mathis; Andreas V. M. Herz; Martin Stemmler

Encoding information about continuous variables using noisy computational units is a challenge; nonetheless, asymptotic theory shows that combining multiple periodic scales for coding can be highly precise despite the corrupting influence of noise [Mathis, Herz, and Stemmler, Phys. Rev. Lett. 109, 018103 (2012)]. Indeed, the cortex seems to use periodic, multiscale grid codes to represent position accurately. Here we show how such codes can be read out without taking the long-term limit; even on short time scales, the precision of such codes scales exponentially in the number N of neurons. Does this finding also hold for neurons that are not firing in a statistically independent fashion? To assess the extent to which biological grid codes are subject to statistical dependences, we first analyze the noise correlations between pairs of grid code neurons in behaving rodents. We find that if the grids of two neurons align and have the same length scale, the noise correlations between the neurons can reach values as high as 0.8. For increasing mismatches between the grids of the two neurons, the noise correlations fall rapidly. Incorporating such correlations into a population coding model reveals that the correlations lessen the resolution, but the exponential scaling of resolution with N is unaffected.


Journal of Comparative Physiology A-neuroethology Sensory Neural and Behavioral Physiology | 2011

Neuronal precision and the limits for acoustic signal recognition in a small neuronal network

Daniela Neuhofer; Martin Stemmler; Bernhard Ronacher

Recognition of acoustic signals may be impeded by two factors: extrinsic noise, which degrades sounds before they arrive at the receiver’s ears, and intrinsic neuronal noise, which reveals itself in the trial-to-trial variability of the responses to identical sounds. Here we analyzed how these two noise sources affect the recognition of acoustic signals from potential mates in grasshoppers. By progressively corrupting the envelope of a female song, we determined the critical degradation level at which males failed to recognize a courtship call in behavioral experiments. Using the same stimuli, we recorded intracellularly from auditory neurons at three different processing levels, and quantified the corresponding changes in spike train patterns by a spike train metric, which assigns a distance between spike trains. Unexpectedly, for most neurons, intrinsic variability accounted for the main part of the metric distance between spike trains, even at the strongest degradation levels. At consecutive levels of processing, intrinsic variability increased, while the sensitivity to external noise decreased. We followed two approaches to determine critical degradation levels from spike train dissimilarities, and compared the results with the limits of signal recognition measured in behaving animals.

Collaboration

Top co-authors of Martin Stemmler:

Biswa Sengupta, Wellcome Trust Centre for Neuroimaging
Bernhard Ronacher, Humboldt University of Berlin
Eric T. Reifenstein, Humboldt University of Berlin
Susanne Schreiber, Humboldt University of Berlin
Veronica Egger, University of Regensburg
Balázs Rózsa, Hungarian Academy of Sciences
Dénes Pálfi, Pázmány Péter Catholic University