Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Robert Heinz Haslinger is active.

Publication


Featured research published by Robert Heinz Haslinger.


Physical Review Letters | 2004

Quantifying self-organization with optimal predictors.

Cosma Rohilla Shalizi; Kristina Lisa Shalizi; Robert Heinz Haslinger

Despite broad interest in self-organizing systems, there are few quantitative, experimentally applicable criteria for self-organization. The existing criteria all give counter-intuitive results for important cases. In this Letter, we propose a new criterion, namely, an internally generated increase in the statistical complexity, the amount of information required for optimal prediction of the system's dynamics. We precisely define this complexity for spatially extended dynamical systems, using the probabilistic ideas of mutual information and minimal sufficient statistics. This leads to a general method for predicting such systems and a simple algorithm for estimating statistical complexity. The results of applying this algorithm to a class of models of excitable media (cyclic cellular automata) strongly support our proposal.
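The statistical complexity named in this abstract can be illustrated on a toy process: for a binary Markov chain, the causal states coincide with the chain's states, so the complexity reduces to the entropy of the stationary distribution. A minimal Python sketch (the transition matrix is an arbitrary illustration, not from the paper, which treats spatially extended cellular automata):

```python
import numpy as np

# Toy illustration (not the paper's CA algorithm): for a binary Markov
# chain the causal states are the two chain states, so the statistical
# complexity is the entropy of the stationary distribution.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])  # assumed transition matrix, for illustration

# Stationary distribution: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()           # here pi = (0.8, 0.2)

complexity = -np.sum(pi * np.log2(pi))  # statistical complexity in bits
```

For this chain the complexity is about 0.72 bits; a more predictable chain would concentrate the stationary distribution and drive the complexity toward zero.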


Physical Review E | 2006

Automatic filters for the detection of coherent structure in spatiotemporal systems.

Cosma Rohilla Shalizi; Robert Heinz Haslinger; Jean-Baptiste Rouquier; Kristina Lisa Klinkner; Cristopher Moore

Most current methods for identifying coherent structures in spatially extended systems rely on prior information about the form which those structures take. Here we present two approaches to automatically filter the changing configurations of spatial dynamical systems and extract coherent structures. One, local sensitivity filtering, is a modification of the local Lyapunov exponent approach suitable to cellular automata and other discrete spatial systems. The other, local statistical complexity filtering, calculates the amount of information needed for optimal prediction of the system's behavior in the vicinity of a given point. By examining the changing spatiotemporal distributions of these quantities, we can find the coherent structures in a variety of pattern-forming cellular automata, without needing to guess or postulate the form of that structure. We apply both filters to elementary and cyclic cellular automata (ECA and CCA) and find that they readily identify particles, domains, and other more complicated structures. We compare the results from ECA with earlier ones based upon the theory of formal languages and the results from CCA with a more traditional approach based on an order parameter and free energy. While sensitivity and statistical complexity are equally adept at uncovering structure, they are based on different system properties (dynamical and probabilistic, respectively) and provide complementary information.


The Journal of Neuroscience | 2010

Rapid Structural Remodeling of Thalamocortical Synapses Parallels Experience-Dependent Functional Plasticity in Mouse Primary Visual Cortex

Jason E. Coleman; Marc Nahmani; Jeffrey P. Gavornik; Robert Heinz Haslinger; Arnold J. Heynen; Alev Erisir; Mark F. Bear

Monocular lid closure (MC) causes a profound shift in the ocular dominance (OD) of neurons in primary visual cortex (V1). Anatomical studies in both cat and mouse V1 suggest that large-scale structural rearrangements of eye-specific thalamocortical (TC) axons in response to MC occur much more slowly than the shift in OD. Consequently, there has been considerable debate as to whether the plasticity of TC synapses, which transmit competing visual information from each eye to V1, contributes to the early functional consequences of MC or is simply a feature of long-term deprivation. Here, we used quantitative immuno-electron microscopy to examine the possibility that alterations of TC synapses occur rapidly enough to impact OD after brief MC. The effect of short-term deprivation on TC synaptic structure was examined in male C57BL/6 mice that underwent 3 and 7 d of MC or monocular retinal inactivation (MI) with tetrodotoxin. The data show that 3 d of MC is sufficient to induce substantial remodeling of TC synapses. In contrast, 3 d of MI, which alters TC activity but does not shift OD, does not significantly affect the structure of TC synapses. Our results support the hypothesis that the rapid plasticity of TC synapses is a key step in the sequence of events that shift OD in visual cortex.


Nature Neuroscience | 2009

Thalamic activity that drives visual cortical plasticity

Monica L. Linden; Arnold J. Heynen; Robert Heinz Haslinger; Mark F. Bear

Manipulations of activity in one retina can profoundly affect binocular connections in the visual cortex. Retinal activity is relayed to the cortex by the dorsal lateral geniculate nucleus (dLGN). We compared the quality and amount of activity in the dLGN following monocular eyelid closure and monocular retinal inactivation in awake mice. Our findings substantially alter the interpretation of previous studies and define the afferent activity patterns that trigger cortical plasticity.


Neural Computation | 2010

The computational structure of spike trains

Robert Heinz Haslinger; Kristina Lisa Klinkner; Cosma Rohilla Shalizi

Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing its complexity. Starting from spike trains, our approach finds their causal state models (CSMs), the minimal hidden Markov models or stochastic automata capable of generating statistically identical time series. We then use these CSMs to objectively quantify both the generalizable structure and the idiosyncratic randomness of the spike train. Specifically, we show that the expected algorithmic information content (the information needed to describe the spike train exactly) can be split into three parts describing (1) the time-invariant structure (complexity) of the minimal spike-generating process, which describes the spike train statistically; (2) the randomness (internal entropy rate) of the minimal spike-generating process; and (3) a residual pure noise term not described by the minimal spike-generating process. We use CSMs to approximate each of these quantities. The CSMs are inferred nonparametrically from the data, making only mild regularity assumptions, via the causal state splitting reconstruction algorithm. The methods presented here complement more traditional spike train analyses by describing not only spiking probability and spike train entropy, but also the complexity of a spike train's structure. We demonstrate our approach using both simulated spike trains and experimental data recorded in rat barrel cortex during vibrissa stimulation.


Neural Computation | 2010

Discrete time rescaling theorem: Determining goodness of fit for discrete time statistical models of neural spiking

Robert Heinz Haslinger; Gordon Pipa; Emery N. Brown

One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
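The classical continuous-time rescaling this paper adapts can be sketched in a few lines: under a correct rate model, rescaled interspike intervals are Exponential(1), which a KS test can check. A minimal Python sketch with a constant-rate Poisson model on synthetic data (not the paper's V1 recordings, and without the discrete-time correction the paper develops):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rate = 20.0                                # Hz; the (correct) model rate
isis = rng.exponential(1.0 / rate, 5000)   # simulated interspike intervals

# Time-rescaling: integrating a constant rate over each ISI gives
# rate * ISI, which is Exponential(1) when the model matches the data.
rescaled = rate * isis

# KS test of the rescaled ISIs against the unit exponential distribution.
ks_stat, p_value = stats.kstest(rescaled, "expon")
```

With a wrong model rate the rescaled intervals deviate from Exponential(1) and the KS statistic grows; the paper's point is that naive rescaling of time-binned models produces exactly such spurious deviations even for a correct model.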


PLOS ONE | 2012

Context Matters: The Illusive Simplicity of Macaque V1 Receptive Fields

Robert Heinz Haslinger; Gordon Pipa; Bruss Lima; Wolf Singer; Emery N. Brown; Sergio Neuenschwander

Even in V1, where neurons have well characterized classical receptive fields (CRFs), it has been difficult to deduce which features of natural-scene stimuli they actually respond to. Forward models based upon CRF stimuli have had limited success in predicting the response of V1 neurons to natural scenes. As natural scenes exhibit complex spatial and temporal correlations, this could be due to surround effects that modulate the sensitivity of the CRF. Here, instead of attempting a forward model, we quantify the importance of the natural-scene surround for awake macaque monkeys by modeling it non-parametrically. We also quantify the influence of two forms of trial-to-trial variability. The first is related to the neuron's own spike history. The second is related to ongoing mean field population activity reflected by the local field potential (LFP). We find that the surround produces strong temporal modulations in the firing rate that can be both suppressive and facilitative. Further, the LFP is found to induce a precise timing in spikes, which tend to be temporally localized on sharp LFP transients in the gamma frequency range. Using the pseudo-R2 as a measure of model fit, we find that during natural scene viewing the CRF dominates, accounting for 60% of the fit, but that taken collectively the surround, spike history and LFP are almost as important, accounting for 40%. However, overall only a small proportion of V1 spiking statistics could be explained (R2 ∼ 5%), even when the full stimulus, spike history and LFP were taken into account. This suggests that under natural scene conditions, the dominant influence on V1 neurons is not the stimulus, nor the mean field dynamics of the LFP, but the complex, incoherent dynamics of the network in which neurons are embedded.


Neural Computation | 2013

Encoding through patterns: Regression tree-based neuronal population models

Robert Heinz Haslinger; Gordon Pipa; Laura D. Lewis; Danko Nikolić; Ziv Williams; Emery N. Brown

Although the existence of correlated spiking between neurons in a population is well known, the role such correlations play in encoding stimuli is not. We address this question by constructing pattern-based encoding models that describe how time-varying stimulus drive modulates the expression probabilities of population-wide spike patterns. The challenge is that large populations may express an astronomical number of unique patterns, and so fitting a unique encoding model for each individual pattern is not feasible. We avoid this combinatorial problem using a dimensionality-reduction approach based on regression trees. Using the insight that some patterns may, from the perspective of encoding, be statistically indistinguishable, the tree divisively clusters the observed patterns into groups whose member patterns possess similar encoding properties. These groups, corresponding to the leaves of the tree, are much smaller in number than the original patterns, and the tree itself constitutes a tractable encoding model for each pattern. Our formalism can detect an extremely weak stimulus-driven pattern structure and is based on maximizing the data likelihood, not making a priori assumptions as to how patterns should be grouped. Most important, by comparing pattern encodings with independent neuron encodings, one can determine if neurons in the population are driven independently or collectively. We demonstrate this method using multiple unit recordings from area 17 of anesthetized cat in response to a sinusoidal grating and show that pattern-based encodings are superior to those of independent neuron models. The agnostic nature of our clustering approach allows us to investigate encoding by the collective statistics that are actually present rather than those (such as pairwise) that might be presumed.


Neural Computation | 2011

Applying the multivariate time-rescaling theorem to neural population models

Felipe Gerhard; Robert Heinz Haslinger; Gordon Pipa

Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based on the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models that neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem and provide a practical step-by-step procedure for applying it to testing the sufficiency of neural population models. Using several simple analytically tractable models and more complex simulated and real data sets, we demonstrate that important features of the population activity can be detected only using the multivariate extension of the test.


Frontiers in Computational Neuroscience | 2013

Missing mass approximations for the partition function of stimulus driven Ising models

Robert Heinz Haslinger; Demba Ba; Ralf A. W. Galuske; Ziv Williams; Gordon Pipa

Ising models are routinely used to quantify the second-order, functional structure of neural populations. With some recent exceptions, they generally do not include the influence of time-varying stimulus drive. Yet if the dynamics of network function are to be understood, time-varying stimuli must be taken into account. Inclusion of stimulus drive carries a heavy computational burden because the partition function becomes stimulus dependent and must be separately calculated for all unique stimuli observed. This potentially increases computation time by the length of the data set. Here we present an extremely fast, yet simply implemented, method for approximating the stimulus dependent partition function in minutes or seconds. Noting that the most probable spike patterns (which are few) occur in the training data, we sum partition function terms corresponding to those patterns explicitly. We then approximate the sum over the remaining patterns (which are improbable, but many) by casting it in terms of the stimulus modulated missing mass (total stimulus dependent probability of all patterns not observed in the training data). We use a product of conditioned logistic regression models to approximate the stimulus modulated missing mass. This method has complexity of roughly O(L N Npat), where L is the data length, N the number of neurons, and Npat the number of unique patterns in the data, contrasting with the O(L 2^N) complexity of alternate methods. Using multiple unit recordings from rat hippocampus, macaque DLPFC and cat Area 18 we demonstrate our method requires orders of magnitude less computation time than Monte Carlo methods and can approximate the stimulus driven partition function more accurately than either Monte Carlo methods or deterministic approximations. This advance allows stimuli to be easily included in Ising models making them suitable for studying population based stimulus encoding.
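The core idea, summing the few high-probability patterns explicitly and treating the remainder as missing mass, can be illustrated at a scale where the exact partition function is still computable. A hedged Python sketch (the field values, couplings, and cutoff of 100 patterns are arbitrary choices for illustration, and the logistic-regression approximation of the missing mass itself is omitted):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
N = 10                                  # small population: exact sum feasible
J = 0.1 * rng.standard_normal((N, N))
J = (J + J.T) / 2                       # symmetric pairwise couplings
h = -2.0 * np.ones(N)                   # bias toward silence (sparse spiking)

def log_weight(x):
    # Ising log-probability of binary pattern x, up to log Z.
    return h @ x + x @ J @ x

patterns = np.array(list(product([0, 1], repeat=N)))
weights = np.exp([log_weight(x) for x in patterns])
Z_exact = weights.sum()                 # exact partition function (2^N terms)

# Sum the most probable patterns explicitly (the ones a training set would
# actually contain); the remainder is the "missing mass" the paper
# approximates with conditioned logistic regression models.
order = np.argsort(weights)[::-1]
Z_top = weights[order[:100]].sum()
missing_mass = 1.0 - Z_top / Z_exact
```

With a silence-biased population, the 100 most probable patterns already carry most of the probability, leaving only a small missing mass to approximate; this is what makes the explicit sum over observed patterns tractable for large N.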

Collaboration


Dive into Robert Heinz Haslinger's collaborations.

Top Co-Authors

Gordon Pipa
University of Osnabrück

Emery N. Brown
Massachusetts Institute of Technology

Felipe Gerhard
École Polytechnique Fédérale de Lausanne

Laura D. Lewis
Massachusetts Institute of Technology

Mark F. Bear
Massachusetts Institute of Technology