Il Memming Park
University of Texas at Austin
Publications
Featured research published by Il Memming Park.
Nature Neuroscience | 2014
Il Memming Park; Miriam L. R. Meister; Alexander C. Huk; Jonathan W. Pillow
It has been suggested that the lateral intraparietal area (LIP) of macaques plays a fundamental role in sensorimotor decision-making. We examined the neural code in LIP at the level of individual spike trains using a statistical approach based on generalized linear models. We found that LIP responses reflected a combination of temporally overlapping task- and decision-related signals. Our model accounts for the detailed statistics of LIP spike trains and accurately predicts spike trains from task events on single trials. Moreover, we derived an optimal decoder for heterogeneous, multiplexed LIP responses that could be implemented in biologically plausible circuits. In contrast with interpretations of LIP as providing an instantaneous code for decision variables, we found that optimal decoding requires integrating LIP spikes over two distinct timescales. These analyses provide a detailed understanding of neural representations in LIP and a framework for studying the coding of multiplexed signals in higher brain areas.
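To make the statistical approach concrete, below is a minimal sketch of a Poisson generalized linear model for a single spike train with stimulus and spike-history covariates, fit by maximum likelihood. The simulated covariate, filter shapes, and bin size are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: a Poisson GLM with task/stimulus and spike-history covariates,
# fit by maximum likelihood. All simulated quantities below are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T, dt = 5000, 0.01                          # number of 10 ms bins (assumed)
stim = rng.standard_normal(T)               # stand-in task/stimulus covariate
k_stim = np.array([0.8, 0.5, 0.2])          # stimulus filter (illustrative)
k_hist = np.array([-2.0, -1.0, -0.5])       # refractory-like history filter
bias = np.log(10 * dt)                      # ~10 spikes/s baseline

# simulate spikes from the GLM
spikes = np.zeros(T)
for t in range(3, T):
    drive = bias + stim[t-3:t] @ k_stim[::-1] + spikes[t-3:t] @ k_hist[::-1]
    spikes[t] = rng.poisson(np.exp(drive))

# design matrix: bias, lagged stimulus, lagged spike history
cols = [np.ones(T)]
for lag in range(1, 4):
    cols.append(np.roll(stim, lag))
    cols.append(np.roll(spikes, lag))
X = np.column_stack(cols)

# maximum-likelihood fit of the Poisson GLM
fit = sm.GLM(spikes, X, family=sm.families.Poisson()).fit()
print(np.round(fit.params, 2))              # roughly recovers bias and filters
```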
IEEE Signal Processing Magazine | 2013
Il Memming Park; Sohan Seth; António R. C. Paiva; Lin Li; Jose C. Principe
Over the last decade, several positive-definite kernels have been proposed to treat spike trains as objects in Hilbert space. However, for the most part, such attempts still remain a mere curiosity for both computational neuroscientists and signal processing experts. This tutorial illustrates why kernel methods can, and have already started to, change the way spike trains are analyzed and processed. The presentation incorporates simple mathematical analogies and convincing practical examples in an attempt to show the yet unexplored potential of positive definite functions to quantify point processes. It also provides a detailed overview of the current state of the art and future challenges with the hope of engaging the readers in active participation.
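As a small illustration of the kernel viewpoint described above, the sketch below implements one simple positive-definite kernel on spike trains (inner products of smoothed intensities, sometimes called the memoryless cross-intensity kernel) and a radial kernel built from it. Kernel widths and spike times are arbitrary choices for demonstration.

```python
# Hedged illustration: a positive-definite kernel on spike trains obtained by
# smoothing each train into an intensity function and taking inner products.
import numpy as np

def mci_kernel(s1, s2, tau=0.05):
    """Memoryless cross-intensity kernel: inner product of spike trains
    smoothed with a Laplacian kernel of width tau (seconds)."""
    s1, s2 = np.asarray(s1), np.asarray(s2)
    if len(s1) == 0 or len(s2) == 0:
        return 0.0
    return np.exp(-np.abs(s1[:, None] - s2[None, :]) / tau).sum()

def radial_kernel(s1, s2, tau=0.05, sigma=1.0):
    """Gaussian-like kernel on the squared distance induced by the mCI kernel;
    positive definite by Schoenberg's theorem."""
    d2 = mci_kernel(s1, s1, tau) + mci_kernel(s2, s2, tau) - 2 * mci_kernel(s1, s2, tau)
    return np.exp(-d2 / sigma**2)

# two toy spike trains (spike times in seconds)
a = [0.10, 0.35, 0.52, 0.80]
b = [0.12, 0.33, 0.55, 0.78, 0.95]
print(mci_kernel(a, b), radial_kernel(a, b))
```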
IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2013
Lin Li; Il Memming Park; Austin J. Brockmeier; Badong Chen; Sohan Seth; Joseph T. Francis; Justin C. Sanchez; Jose C. Principe
The precise control of spiking in a population of neurons via applied electrical stimulation is a challenge due to the sparseness of spiking responses and neural system plasticity. We pose neural stimulation as a system control problem where the system input is a multidimensional time-varying signal representing the stimulation, and the output is a set of spike trains; the goal is to drive the output such that the elicited population spiking activity is as close as possible to some desired activity, where closeness is defined by a cost function. If the neural system can be described by a time-invariant (homogeneous) model, then offline procedures can be used to derive the control procedure; however, for arbitrary neural systems this is not tractable. Furthermore, standard control methodologies are not suited to directly operate on spike trains that represent both the target and elicited system response. In this paper, we propose a multiple-input multiple-output (MIMO) adaptive inverse control scheme that operates on spike trains in a reproducing kernel Hilbert space (RKHS). The control scheme uses an inverse controller to approximate the inverse of the neural circuit. The proposed control system takes advantage of the precise timing of the neural events by using a Schoenberg kernel defined directly in the space of spike trains. The Schoenberg kernel maps the spike train to an RKHS and allows a linear algorithm to control the nonlinear neural system without the danger of converging to local minima. During operation, the adaptation of the controller minimizes a difference defined in the spike train RKHS between the system and the target response and keeps the inverse controller close to the inverse of the current neural circuit, which enables adapting to neural perturbations. The results on a realistic synthetic neural circuit show that the inverse controller based on the Schoenberg kernel outperforms the decoding accuracy of other models based on the conventional rate representation of the neural signal (i.e., the Spikernel and the generalized linear model). Moreover, after a significant perturbation of the neural circuit, the control scheme can successfully drive the elicited responses close to the original target responses.
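The sketch below is a heavily simplified analogue of the adaptive inverse control loop: a kernel least-mean-squares (KLMS) learner estimates the inverse of an unknown nonlinear plant online, and the learned inverse is then used as a pre-controller. Real-valued signals, a Gaussian kernel, the toy plant, and the step size stand in for the spike-train RKHS and Schoenberg kernel used in the paper.

```python
# Simplified, hypothetical sketch of adaptive inverse control with a KLMS learner.
import numpy as np

rng = np.random.default_rng(1)

def plant(u):
    """Unknown nonlinear system to be inverted (toy stand-in for the neural circuit)."""
    return np.tanh(1.5 * u) + 0.05 * rng.standard_normal()

def gauss_k(x, y, sigma=0.5):
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

# KLMS inverse model: maps observed plant output back to the input that caused it
centers, coeffs, eta = [], [], 0.3

def inverse(y):
    return sum(a * gauss_k(c, y) for c, a in zip(centers, coeffs))

# online adaptation: excite the plant, observe (input u, output y), learn y -> u
for _ in range(500):
    u = rng.uniform(-1, 1)
    y = plant(u)
    err = u - inverse(y)            # prediction error of the inverse model
    centers.append(y)               # KLMS update: new kernel center ...
    coeffs.append(eta * err)        # ... weighted by step size times error

# control: pass the desired output through the learned inverse, then the plant
target = 0.6
u_cmd = inverse(target)
print("achieved:", plant(u_cmd), "target:", target)
```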
Neural Computation | 2017
Yuan Zhao; Il Memming Park
When governed by underlying low-dimensional dynamics, the interdependence of simultaneously recorded populations of neurons can be explained by a small number of shared factors, or a low-dimensional trajectory. Recovering these latent trajectories, particularly from single-trial population recordings, may help us understand the dynamics that drive neural computation. However, due to the biophysical constraints and noise in the spike trains, inferring trajectories from data is a challenging statistical problem in general. Here, we propose a practical and efficient inference method, the variational latent Gaussian process (vLGP). The vLGP combines a generative model with a history-dependent point process observation, together with a smoothness prior on the latent trajectories. The vLGP improves on earlier methods for recovering latent trajectories, which assume either observation models inappropriate for point processes or linear dynamics. We compare and validate vLGP on both simulated data sets and population recordings from the primary visual cortex. In the V1 data set, we find that vLGP achieves substantially higher performance than previous methods for predicting omitted spike trains, as well as capturing both the toroidal topology of the visual stimulus space and the noise correlation. These results show that vLGP is a robust method with the potential to reveal hidden neural dynamics from large-scale neural recordings.
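To clarify what a "latent trajectory" means here, the sketch below simulates from the kind of generative model vLGP targets: a low-dimensional trajectory drawn from a Gaussian process, mapped through a loading matrix to per-neuron log firing rates, with Poisson spike counts. Dimensions and kernel parameters are made up, and the vLGP inference algorithm itself is not reproduced.

```python
# Illustrative sketch of a latent-trajectory generative model (not the vLGP fit).
import numpy as np

rng = np.random.default_rng(2)
T, D, N = 200, 2, 30                      # time bins, latent dims, neurons (assumed)
t = np.arange(T)

# squared-exponential GP covariance over time for each latent dimension
ell, var = 20.0, 1.0
K = var * np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / ell ** 2)
latents = rng.multivariate_normal(np.zeros(T), K + 1e-6 * np.eye(T), size=D).T  # (T, D)

C = 0.5 * rng.standard_normal((D, N))     # loading matrix (latent -> neurons)
b = np.log(0.05) * np.ones(N)             # baseline log rate per bin
rates = np.exp(latents @ C + b)           # (T, N) firing rates
spikes = rng.poisson(rates)               # observed single-trial spike counts

print(spikes.shape, spikes.mean())
```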
The Journal of Neuroscience | 2014
Il Memming Park; Yuriy V. Bobkov; Barry W. Ache; Jose C. Principe
The spatial and temporal characteristics of the visual and acoustic sensory input are indispensable attributes for animals to perform scene analysis. In contrast, research in olfaction has focused almost exclusively on how the nervous system analyzes the quality and quantity of the sensory signal and has largely ignored the spatiotemporal dimension, especially on longer time scales. Yet, detailed analyses of the turbulent, intermittent structure of water- and air-borne odor plumes strongly suggest that spatiotemporal information on longer time scales can provide major cues for olfactory scene analysis in animals. We show that a bursting subset of primary olfactory receptor neurons (bORNs) in lobster has the unexpected capacity to encode the temporal properties of intermittent odor signals. Each bORN is tuned to a specific range of stimulus intervals, and collectively bORNs can instantaneously encode a wide spectrum of intermittencies. Our theory argues for the existence of a novel peripheral mechanism for encoding the temporal pattern of odors that potentially serves as a neural substrate for olfactory scene analysis.
Nature Neuroscience | 2017
Jacob L. Yates; Il Memming Park; Leor N. Katz; Jonathan W. Pillow; Alexander C. Huk
During perceptual decision-making, responses in the middle temporal (MT) and lateral intraparietal (LIP) areas appear to map onto theoretically defined quantities, with MT representing instantaneous motion evidence and LIP reflecting the accumulated evidence. However, several aspects of the transformation between the two areas have not been empirically tested. We therefore performed multistage systems identification analyses of the simultaneous activity of MT and LIP during individual decisions. We found that monkeys based their choices on evidence presented in early epochs of the motion stimulus and that substantial early weighting of motion was present in MT responses. LIP responses recapitulated MT early weighting and contained a choice-dependent buildup that was distinguishable from motion integration. Furthermore, trial-by-trial variability in LIP did not depend on MT activity. These results identify important deviations from idealizations of MT and LIP and motivate inquiry into sensorimotor computations that may intervene between MT and LIP.
IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2012
Lin Li; Il Memming Park; Sohan Seth; Justin C. Sanchez; Jose C. Principe
This paper quantifies and comparatively validates functional connectivity between neurons by measuring the statistical dependence between their firing rates. Based on statistical analysis of the pairwise functional connectivity, we estimate, exclusively from neural data, the neural assembly functional connectivity for a given behavioral task, which provides a quantifiable representation of its dynamic nature during the task. Because of the time scale of behavior (100-1000 ms), a statistical method that yields robust estimators for this small sample size is desirable. In this work, the temporal resolutions of four estimators of functional connectivity are compared on both simulated data and real neural ensemble recordings. The comparison highlights how the properties and assumptions of statistics-based and phase-based metrics affect the interpretation of connectivity. Simulation results show that mean square contingency (MSC) and mutual information (MI) provide more robust quantification of functional connectivity under identical conditions than cross correlation (CC) and phase synchronization (PhS) when the sample size corresponds to 1 s of data. The analysis of the simulated data is extended to real neuronal recordings to assess the functional connectivity in monkey cortex corresponding to three movement states in a food-reaching task, and to construct, exclusively from statistical dependencies in the neural data, the assembly graph for a given movement state and the activation degree of a state-related assembly over time. The activation degree of a given state-related assembly repeatedly peaks when the corresponding movement state occurs, which also reveals that the network of interactions among the neurons is key to the execution of a specific behavior.
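For concreteness, the sketch below computes two of the dependence measures compared above, zero-lag cross-correlation and histogram-based mutual information, on binned counts from a simulated neuron pair. The coupling used to simulate the pair and the bin counts are arbitrary illustrative choices.

```python
# Hedged sketch: two dependence measures on binned firing rates of a neuron pair.
import numpy as np

rng = np.random.default_rng(3)
n_bins = 100                                  # e.g. 100 x 10 ms bins ~ 1 s of data
drive = rng.gamma(2.0, 1.0, n_bins)           # shared modulation
x = rng.poisson(drive)                        # neuron 1 counts
y = rng.poisson(0.5 * drive + 0.5)            # neuron 2 counts, coupled via drive

# zero-lag cross-correlation (Pearson)
cc = np.corrcoef(x, y)[0, 1]

# mutual information from a joint histogram of the two count series
def mutual_info(x, y, bins=8):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

print("CC:", round(cc, 3), "MI (nats):", round(mutual_info(x, y), 3))
```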
International IEEE/EMBS Conference on Neural Engineering | 2017
David Hocker; Il Memming Park
Generalized linear models (GLMs) are useful tools to capture the characteristic features of spiking neurons; however, the long-term prediction of an autoregressive GLM inferred through maximum likelihood (ML) can be subject to runaway self-excitation. We explain here that this runaway excitation is a consequence of the one-step-ahead ML inference used in estimating the parameters of the GLM. Alternatively, inference techniques that incorporate the likelihood of spiking multiple steps ahead in the future can alleviate this instability. We formulate a multi-step log-likelihood (MSLL) as an alternative objective for fitting spiking data. We maximize MSLL to infer an autoregressive GLM for individual spiking neurons recorded from the lateral intraparietal (LIP) area of monkeys during a perceptual decision-making task. While ML inference is shown to produce a GLM with poor fits of the neurons' interspike intervals and autocorrelation, in addition to its runaway excitation, MSLL-fit models show a substantial improvement in interval statistics and stable spiking.
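The sketch below caricatures the contrast drawn above: the standard one-step-ahead Poisson log-likelihood of an autoregressive GLM versus a Monte Carlo multi-step variant in which the model's own simulated spikes replace the observed history for intermediate steps. The filter, step counts, and averaging scheme are illustrative assumptions, not the MSLL objective as defined in the paper.

```python
# Simplified, hypothetical sketch: one-step vs. multi-step spiking log-likelihood.
import numpy as np

rng = np.random.default_rng(4)
b, h = np.log(0.1), np.array([-1.5, -0.8, -0.3])   # bias and 3-bin history filter
L = len(h)

def rate(history):                                  # conditional intensity per bin
    return np.exp(b + history @ h[::-1])

# simulate a spike train from the model itself
T = 1000
y = np.zeros(T)
for t in range(L, T):
    y[t] = rng.poisson(rate(y[t - L:t]))

def one_step_ll(y):
    ll = 0.0
    for t in range(L, len(y)):
        lam = rate(y[t - L:t])                      # observed history only
        ll += y[t] * np.log(lam) - lam
    return ll / (len(y) - L)

def multi_step_ll(y, steps=3, n_mc=20):
    ll = 0.0
    for t in range(L + steps, len(y)):
        lam_mc = 0.0
        for _ in range(n_mc):                       # roll the model forward on its
            hist = y[t - steps - L:t - steps].copy()  # own sampled spikes
            for _ in range(steps):
                hist = np.append(hist[1:], rng.poisson(rate(hist)))
            lam_mc += rate(hist)
        lam = lam_mc / n_mc
        ll += y[t] * np.log(lam) - lam
    return ll / (len(y) - L - steps)

print(one_step_ll(y), multi_step_ll(y))
```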
bioRxiv | 2017
Il Memming Park; Jonathan W. Pillow
The efficient coding hypothesis, which proposes that neurons are optimized to maximize information about the environment, has provided a guiding theoretical framework for sensory and systems neuroscience. More recently, a theory known as the Bayesian Brain hypothesis has focused on the brain's ability to integrate sensory and prior sources of information in order to perform Bayesian inference. However, there is as yet no comprehensive theory connecting these two theoretical frameworks. We bridge this gap by formalizing a Bayesian theory of efficient coding. We define Bayesian efficient codes in terms of four basic ingredients: (1) a stimulus prior distribution; (2) an encoding model; (3) a capacity constraint, specifying a neural resource limit; and (4) a loss function, quantifying the desirability or undesirability of various posterior distributions. Classic efficient codes can be seen as a special case in which the loss function is the posterior entropy, leading to a code that maximizes mutual information, but alternate loss functions give solutions that differ dramatically from information-maximizing codes. In particular, we show that decorrelation of sensory inputs, which is optimal under classic efficient codes in low-noise settings, can be disadvantageous for loss functions that penalize large errors. Bayesian efficient coding therefore enlarges the family of normatively optimal codes and provides a more general framework for understanding the design principles of sensory systems. We examine Bayesian efficient codes for linear receptive fields and nonlinear input-output functions, and show that our theory invites reinterpretation of Laughlin's seminal analysis of efficient coding in the blowfly visual system.

One of the primary goals of theoretical neuroscience is to understand the functional organization of neurons in the early sensory pathways and the principles governing them. Why do sensory neurons amplify some signals and filter out others? What can explain the particular configurations and types of neurons found in early sensory systems? What general principles can explain the solutions evolution has selected for extracting signals from the sensory environment? Two of the most influential theories for addressing these questions are the “efficient coding” hypothesis and the “Bayesian brain” hypothesis. The efficient coding hypothesis, introduced by Attneave and Barlow more than fifty years ago, uses ideas from Shannon’s information theory to formulate a theory of normatively optimal neural coding [1, 2]. The Bayesian brain hypothesis, on the other hand, focuses on the brain’s ability to perform Bayesian inference, and can be traced back to ideas from Helmholtz about optimal perceptual inference [3–7]. A substantial literature has sought to alter or expand the original efficient coding hypothesis [5, 8–18], and a large number of papers have considered optimal codes in the context of Bayesian inference [19–26]. However, the two theories have never been formally connected within a single, comprehensive theoretical framework. Here we propose to fill this gap by formulating a general Bayesian theory of efficient coding that unites the two hypotheses. We begin by reviewing the key elements of each theory and then describe a framework for unifying them. Our approach involves combining a prior and model-based likelihood function with a neural resource constraint and a loss functional that quantifies what makes for a “good” posterior distribution.
We show that classic efficient codes arise when we use information-theoretic quantities for these ingredients, but that a much larger family of Bayesian efficient codes can be constructed by allowing these ingredients to vary. We explore Bayesian efficient codes for several important cases of interest, namely linear receptive fields and nonlinear response functions. The latter case was examined in an influential paper by Laughlin on contrast coding in the blowfly large monopolar cells (LMCs) [27]; we reanalyze data from this paper and argue that LMC responses are in fact better described as minimizing the average square-root error than as maximizing mutual information.
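The toy numerical sketch below illustrates the four ingredients listed above: a discrete stimulus prior, an encoder restricted to a fixed number of noisy response levels (a crude capacity constraint), and two candidate losses, posterior entropy versus posterior mean squared error, evaluated on the induced posteriors. All numbers are made up for illustration; this is not the analysis from the paper.

```python
# Toy illustration of prior + encoder + capacity constraint + loss function.
import numpy as np

xs = np.linspace(-3, 3, 61)
prior = np.exp(-0.5 * xs**2); prior /= prior.sum()        # (1) stimulus prior

def encoder(thresholds, noise=0.5, n_levels=4):
    """(2) encoding model + (3) capacity constraint: the stimulus is quantized
    into n_levels responses, then flipped to a neighboring level with
    probability `noise` (a crude neural noise model)."""
    levels = np.digitize(xs, thresholds)
    p_r_given_x = np.zeros((n_levels, len(xs)))
    for i, lv in enumerate(levels):
        p_r_given_x[lv, i] += 1 - noise
        p_r_given_x[min(lv + 1, n_levels - 1), i] += noise / 2
        p_r_given_x[max(lv - 1, 0), i] += noise / 2
    return p_r_given_x

def expected_losses(p_r_given_x):
    """(4) loss functions: posterior entropy (infomax) vs. posterior MSE."""
    joint = p_r_given_x * prior                            # p(r, x)
    p_r = joint.sum(axis=1, keepdims=True)
    ent, mse = 0.0, 0.0
    for r in range(joint.shape[0]):
        if p_r[r, 0] == 0:
            continue
        post = joint[r] / p_r[r]                           # p(x | r)
        ent += p_r[r, 0] * -(post[post > 0] * np.log(post[post > 0])).sum()
        mean = (post * xs).sum()
        mse += p_r[r, 0] * (post * (xs - mean) ** 2).sum()
    return ent, mse

# two candidate codes: equally spaced vs. prior-quantile thresholds
for name, thr in [("uniform", np.array([-1.5, 0.0, 1.5])),
                  ("quantile", np.array([-0.67, 0.0, 0.67]))]:
    print(name, np.round(expected_losses(encoder(thr)), 3))
```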
bioRxiv | 2018
Kathleen Esfahany; Isabel Siergiej; Yuan Zhao; Il Memming Park
The mammalian visual system consists of several anatomically distinct areas, layers, and cell types. To understand the role of these subpopulations in visual information processing, we analyzed neural signals recorded from excitatory neurons across various anatomical and functional structures. For each of 186 mice, one of six genetically tagged cell types and one of six visual areas were targeted while the mouse was passively viewing various visual stimuli. We trained linear classifiers to decode one of six visual stimulus categories with distinct spatiotemporal structures from the population neural activity. We found that neurons in both the primary visual cortex and secondary visual areas show varying degrees of stimulus-specific decodability, and neurons in superficial layers tend to be more informative about the stimulus categories. Additional decoding analyses of directional motion were consistent with these findings. We observed synergy in the population code of direction in several visual areas suggesting area-specific organization of information representation across neurons. These differences in decoding capacities shed light on the specialized organization of neural information processing across anatomically distinct subpopulations, and further establish the mouse as a model for understanding visual perception.
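A minimal sketch of the style of decoding analysis described above: a linear classifier trained to predict a stimulus category from trial-wise population activity, scored by cross-validation. The simulated "recordings" (neuron count, tuning, trial numbers) are stand-ins, not the data analyzed in the paper.

```python
# Hedged sketch: linear population decoding of stimulus category.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_neurons, n_trials, n_classes = 50, 300, 6
labels = rng.integers(n_classes, size=n_trials)            # stimulus category per trial

# each class has a mean population response pattern; trials add Poisson-like noise
class_means = rng.gamma(2.0, 2.0, size=(n_classes, n_neurons))
responses = rng.poisson(class_means[labels])               # (trials, neurons)

clf = LogisticRegression(max_iter=2000)                    # linear decoder
acc = cross_val_score(clf, responses, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f} (chance = {1/n_classes:.2f})")
```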