David P. Nguyen
Massachusetts Institute of Technology
Publications
Featured research published by David P. Nguyen.
Proceedings of the National Academy of Sciences of the United States of America | 2001
Emery N. Brown; David P. Nguyen; Loren M. Frank; Matthew A. Wilson; Victor Solo
Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields.
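The sketch below illustrates the flavor of the instantaneous steepest-descent update described above for a one-dimensional Gaussian place field with intensity exp(alpha - (x - mu)^2 / (2 sigma^2)). The parameterization, initial values, and learning rates are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def adaptive_place_field_filter(spikes, position, dt=0.001, eps=(0.02, 0.05, 0.05)):
    """Minimal sketch: track (alpha, mu, sigma) of a 1-D Gaussian place field.

    spikes   : 0/1 spike indicator per time bin (dN_k)
    position : animal position per time bin (same length as spikes)
    eps      : learning rates for (alpha, mu, sigma); illustrative values
    """
    alpha, mu, sigma = np.log(5.0), float(position[0]), 10.0  # assumed initial guesses
    track = np.zeros((len(spikes), 3))
    for k, (dN, x) in enumerate(zip(spikes, position)):
        lam = np.exp(alpha - (x - mu) ** 2 / (2.0 * sigma ** 2))  # conditional intensity
        innov = dN - lam * dt                                     # spike-count innovation
        # gradient of log(lambda) with respect to (alpha, mu, sigma)
        g = np.array([1.0, (x - mu) / sigma ** 2, (x - mu) ** 2 / sigma ** 3])
        # gradient step on the instantaneous point-process log likelihood
        alpha += eps[0] * g[0] * innov
        mu    += eps[1] * g[1] * innov
        sigma += eps[2] * g[2] * innov
        track[k] = (alpha, mu, sigma)
    return track
```

Because the parameters are updated every bin, the estimates can follow place-field migration and rate changes on a millisecond time scale, at the cost of choosing the learning rates.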
Neural Computation | 2004
Riccardo Barbieri; Loren M. Frank; David P. Nguyen; Michael C. Quirk; Victor Solo; Matthew A. Wilson; Emery N. Brown
Neural spike train decoding algorithms and techniques to compute Shannon mutual information are important methods for analyzing how neural systems represent biological signals. Decoding algorithms are also one of several strategies being used to design controls for brain-machine interfaces. Developing optimal strategies to design decoding algorithms and compute mutual information are therefore important problems in computational neuroscience. We present a general recursive filter decoding algorithm based on a point process model of individual neuron spiking activity and a linear stochastic state-space model of the biological signal. We derive from the algorithm new instantaneous estimates of the entropy, entropy rate, and the mutual information between the signal and the ensemble spiking activity. We assess the accuracy of the algorithm by computing, along with the decoding error, the true coverage probability of the approximate 0.95 confidence regions for the individual signal estimates. We illustrate the new algorithm by reanalyzing the position and ensemble neural spiking activity of CA1 hippocampal neurons from two rats foraging in an open circular environment. We compare the performance of this algorithm with a linear filter constructed by the widely used reverse correlation method. The median decoding error for Animal 1 (2) during 10 minutes of open foraging was 5.9 (5.5) cm, the median entropy was 6.9 (7.0) bits, the median information was 9.4 (9.4) bits, and the true coverage probability for 0.95 confidence regions was 0.67 (0.75) using 34 (32) neurons. These findings improve significantly on our previous results and suggest an integrated approach to dynamically reading neural codes, measuring their properties, and quantifying the accuracy with which encoded information is extracted.
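As a rough illustration of such a recursive point-process decoder, the sketch below decodes a one-dimensional signal from an ensemble of neurons with Gaussian place fields and a scalar random-walk state model. The field shape, the state-model parameters, and the Gaussian posterior approximation are assumptions made for illustration rather than the paper's exact formulation, and the entropy and mutual-information estimates are omitted.

```python
import numpy as np

def point_process_decoder(spikes, mu, sigma, rate_max, dt=0.001,
                          f=0.999, q=1.0, x0=0.0, w0=10.0):
    """Minimal 1-D sketch of a recursive point-process decoding filter.

    spikes                : (T, C) array of spike indicators, one column per neuron
    mu, sigma, rate_max   : length-C Gaussian place-field parameters (assumed form)
    f, q                  : state model x_k = f * x_{k-1} + noise(var q); illustrative
    """
    T, _ = spikes.shape
    x, w = x0, w0
    est = np.zeros(T)
    for k in range(T):
        # one-step prediction from the linear state-space model
        x_pred, w_pred = f * x, f * f * w + q
        lam = rate_max * np.exp(-(x_pred - mu) ** 2 / (2.0 * sigma ** 2))
        dlog = (mu - x_pred) / sigma ** 2      # d log(lambda_c) / dx
        d2log = -1.0 / sigma ** 2              # d^2 log(lambda_c) / dx^2
        # Gaussian approximation to the posterior: variance then mean update
        w_inv = 1.0 / w_pred + np.sum(dlog ** 2 * lam * dt
                                      - (spikes[k] - lam * dt) * d2log)
        w = 1.0 / w_inv
        x = x_pred + w * np.sum(dlog * (spikes[k] - lam * dt))
        est[k] = x
    return est
```

The posterior variance from each step is what defines the approximate 0.95 confidence regions whose coverage is assessed in the paper.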
Journal of Visualized Experiments | 2009
Fabian Kloosterman; Thomas J. Davidson; Stephen N. Gomperts; Stuart P. Layton; Gregory J Hale; David P. Nguyen; Matthew A. Wilson
Chronic recording of large populations of neurons is a valuable technique for studying the function of neuronal circuits in awake, behaving rats. Lightweight recording devices carrying a high-density array of tetrodes allow for the simultaneous monitoring of the activity of tens to hundreds of individual neurons. Here we describe a protocol for the fabrication of a micro-drive array with twenty-one independently movable micro-drives. This device has been used successfully to record from hippocampal and cortical neurons in our lab. We show how to prepare a custom-designed, 3-D printed plastic base that will hold the micro-drives. We demonstrate how to construct the individual micro-drives and how to assemble the complete micro-drive array. Further preparation of the drive array for surgical implantation, such as the fabrication of tetrodes, loading of tetrodes into the drive array, and gold-plating, is covered in a subsequent video article.
Journal of Visualized Experiments | 2009
David P. Nguyen; Stuart P. Layton; Gregory A. Hale; Stephen N. Gomperts; Thomas J. Davidson; Fabian Kloosterman; Matthew A. Wilson
The tetrode, a bundle of four electrodes, has proven to be a valuable tool for the simultaneous recording of multiple neurons in-vivo. The differential amplitude of action potential signatures over the channels of a tetrode allows for the isolation of single-unit activity from multi-unit signals. The ability to precisely control the stereotaxic location and depth of the tetrode is critical for studying coordinated neural activity across brain regions. In combination with a micro-drive array, it is possible to achieve precise placement and stable control of many tetrodes over the course of days to weeks. In this protocol, we demonstrate how to fabricate and condition tetrodes using basic tools and materials, install the tetrodes into a multi-drive tetrode array for chronic in-vivo recording in the rat, make ground wire connections to the micro-drive array, and attach a protective cone onto the micro-drive array in order to protect the tetrodes from physical contact with the environment.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2008
David P. Nguyen; Riccardo Barbieri; Matthew A. Wilson; Emery N. Brown
EEG and LFP activity reflect the dynamic and organized interactions of neural ensembles; therefore, it may be possible to use the features of brain rhythms to determine the computational state of a neuronal network. When neuronal networks are activated, physical principles predict that the frequency content of the field potential should reflect the network state per se, and therefore the state transitions. A novel way of characterizing brain states is to quantify the temporal structure of AM and FM activity (changes in amplitude and frequency over time) for brain rhythms of interest. The concept of AM and FM, in the quantitative sense, is virtually unexplored in systems neuroscience. This is not surprising, considering that estimation of FM activity requires fine temporal resolution and precise estimation of instantaneous frequency; for AM activity, the absolute value of the Hilbert transform is sufficient. Here, we outline a practical pole-tracking algorithm that uses a Kalman filter for univariate AR processes to estimate instantaneous frequency. We demonstrate the filter's performance using simulated chirp data and real EEG/LFP data recorded from the rat hippocampus, and show that AM/FM activity in EEG/LFP is temporally structured and dependent on behavioral and cognitive state. This algorithm has the potential to be a practical tool for characterizing fundamental structure in electrophysiology data and classifying computational states in the brain.
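A minimal sketch of the idea, assuming an AR(2) signal model whose complex-conjugate pole pair carries the instantaneous frequency; the noise variances and the random-walk model for the AR coefficients are illustrative choices, not the paper's tuning.

```python
import numpy as np

def track_instantaneous_frequency(y, fs, q=1e-4, r=1.0):
    """Sketch of pole tracking with a Kalman filter on a univariate AR(2) model.

    y  : band-limited signal (e.g., theta-filtered LFP)
    fs : sampling rate in Hz
    q, r : process / observation noise variances; illustrative tuning values
    """
    theta = np.zeros(2)            # AR(2) coefficients [a1, a2]
    P = np.eye(2)                  # coefficient covariance
    Q = q * np.eye(2)
    freq = np.full(len(y), np.nan)

    for k in range(2, len(y)):
        H = np.array([y[k - 1], y[k - 2]])        # regression vector
        P = P + Q                                 # random-walk prediction
        S = H @ P @ H + r                         # innovation variance
        K = P @ H / S                             # Kalman gain
        theta = theta + K * (y[k] - H @ theta)    # coefficient update
        P = P - np.outer(K, H) @ P
        # pole angle of z^2 - a1*z - a2 = 0 gives the instantaneous frequency
        a1, a2 = theta
        disc = a1 ** 2 + 4 * a2
        if disc < 0:                              # complex-conjugate pole pair
            omega = np.arctan2(np.sqrt(-disc) / 2.0, a1 / 2.0)
            freq[k] = omega * fs / (2.0 * np.pi)
    return freq
```

The AM envelope can then be taken from the magnitude of the analytic signal (Hilbert transform), as noted in the abstract, while the pole angle supplies the FM trace.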
International Conference of the IEEE Engineering in Medicine and Biology Society | 2004
Riccardo Barbieri; Loren M. Frank; David P. Nguyen; Michael C. Quirk; Victor Solo; Matthew A. Wilson; Emery N. Brown
Developing optimal strategies for constructing and testing decoding algorithms is an important question in computational neuroscience. In this field, decoding algorithms are mathematical methods that model ensemble neural spiking activity as it dynamically represents a biological signal. We present a recursive decoding algorithm based on a Bayesian point process model of individual neuron spiking activity and a linear stochastic state-space model of the biological signal. We assess the accuracy of the algorithm by computing, along with the decoding error, the true coverage probability of the approximate 0.95 confidence regions for the individual signal estimates. We illustrate the new algorithm by analyzing the position and ensemble neural spiking activity of CA1 hippocampal neurons from a rat foraging in an open circular environment. The median decoding error during 10 minutes of open foraging was 5.5 cm, and the true coverage probability for 0.95 confidence regions was 0.75 using 32 neurons. These findings improve significantly on our previous results and suggest an approach to dynamically reading information represented in ensemble neural spiking activity.
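A small sketch of how the reported coverage probability can be checked, shown for a one-dimensional signal with a Gaussian posterior approximation (the paper's environment is two-dimensional, so this is a simplification, and the variable names are hypothetical):

```python
import numpy as np

def coverage_probability(x_true, x_hat, var_hat, z=1.96):
    """Fraction of time bins in which the true signal falls inside the
    approximate 0.95 confidence region of the decoded estimate (1-D case)."""
    half_width = z * np.sqrt(var_hat)             # half-width of the 95% interval
    inside = np.abs(x_true - x_hat) <= half_width
    return inside.mean()

# Usage with hypothetical arrays from a decoder run:
# coverage_probability(position, decoded, posterior_variance)
# The paper reports a true coverage of about 0.75 for nominal 0.95 regions.
```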
Neural Computation | 2011
Sridevi V. Sarma; David P. Nguyen; Gabriela Czanner; Sylvia Wirth; Matthew A. Wilson; Wendy A. Suzuki; Emery N. Brown
Characterizing neural spiking activity as a function of intrinsic and extrinsic factors is important in neuroscience. Point process models are valuable for capturing such information; however, the process of fully applying these models is not always obvious. A complete model application has four broad steps: specification of the model, estimation of model parameters given observed data, verification of the model using goodness of fit, and characterization of the model using confidence bounds. Of these steps, only the first three have been applied widely in the literature, suggesting the need to dedicate a discussion to how the time-rescaling theorem, in combination with parametric bootstrap sampling, can be generally used to compute confidence bounds of point process models. In our first example, we use a generalized linear model of spiking propensity to demonstrate that confidence bounds derived from bootstrap simulations are consistent with those computed from closed-form analytic solutions. In our second example, we consider an adaptive point process model of hippocampal place field plasticity for which no analytical confidence bounds can be derived. We demonstrate how to simulate bootstrap samples from adaptive point process models, how to use these samples to generate confidence bounds, and how to statistically test the hypothesis that neural representations at two time points are significantly different. These examples have been designed as useful guides for performing scientific inference based on point process models.
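The sketch below illustrates the two ingredients in simplified form: time-rescaling of inter-spike intervals under a fitted intensity, and generation of parametric bootstrap spike trains (here by thinning, one convenient construction). Function names and the thinning step are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def rescaled_intervals(spike_times, lam, t, dt):
    """Time-rescaling: integrate the fitted intensity between successive spikes.
    Under a correct model the rescaled intervals are i.i.d. Exponential(1),
    which can be checked with a KS plot."""
    cum = np.cumsum(lam) * dt                     # Lambda(t) on the time grid t
    L = np.interp(spike_times, t, cum)
    return np.diff(L)

def bootstrap_spike_train(lam, t, dt, rng):
    """One parametric bootstrap sample: simulate a spike train from the fitted
    intensity by thinning a homogeneous Poisson process."""
    lam_max = lam.max()
    n = rng.poisson(lam_max * (t[-1] - t[0]))
    cand = np.sort(rng.uniform(t[0], t[-1], n))
    keep = rng.uniform(0.0, lam_max, n) < np.interp(cand, t, lam)
    return cand[keep]

# Refitting the model to many bootstrap_spike_train() samples and taking the
# 2.5th / 97.5th percentiles of the refit parameters gives approximate 95% bounds,
# the step for which no closed-form solution exists in the adaptive case.
```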
PLOS ONE | 2014
Dennis A. Dean; Gail K. Adler; David P. Nguyen; Elizabeth B. Klerman
We present a novel approach for analyzing biological time-series data using a context-free language (CFL) representation that allows the extraction and quantification of important features from the time-series. This representation results in Hierarchically AdaPtive (HAP) analysis, a suite of multiple complementary techniques that enables rapid analysis of data and does not require the user to set parameters. HAP analysis generates hierarchically organized parameter distributions that allow multi-scale components of the time-series to be quantified and includes a data analysis pipeline that applies recursive analyses to generate hierarchically organized results that extend traditional outcome measures such as pharmacokinetics and inter-pulse interval. Pulsicons, a novel text-based time-series representation also derived from the CFL approach, are introduced as an objective qualitative comparison nomenclature. We apply HAP to the analysis of 24 hours of frequently sampled pulsatile cortisol hormone data, which has known analysis challenges, from 14 healthy women. HAP analysis generated results in seconds and produced dozens of figures for each participant. The results quantify the observed qualitative features of cortisol data as a series of pulse clusters, each consisting of one or more embedded pulses, and identify two ultradian phenotypes in this dataset. HAP analysis is designed to be robust to individual differences and to missing data and may be applied to other pulsatile hormones. Future work can extend HAP analysis to other time-series data types, including oscillatory and other periodic physiological signals.
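The sketch below is only a simplified illustration of the hierarchical pulse-and-cluster idea (pulses grouped into clusters by inter-pulse interval); the thresholds are invented for illustration, and it does not implement the CFL grammar or the HAP pipeline itself.

```python
import numpy as np

def pulses_and_clusters(values, times, rise=1.0, gap=90.0):
    """Toy hierarchical segmentation: detect candidate pulses as local maxima
    with a minimum rise, then start a new cluster whenever the interval to the
    previous pulse exceeds `gap` (same units as `times`). Thresholds are
    illustrative only; NOT the HAP / CFL method."""
    peaks = [i for i in range(1, len(values) - 1)
             if values[i] > values[i - 1] and values[i] >= values[i + 1]
             and values[i] - min(values[i - 1], values[i + 1]) >= rise]
    clusters, current = [], []
    for i in peaks:
        if current and times[i] - times[current[-1]] > gap:
            clusters.append(current)              # close the previous cluster
            current = []
        current.append(i)
    if current:
        clusters.append(current)
    return peaks, clusters
```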
Nature Materials | 2004
Chun Wang; Qing Ge; David T. Ting; David P. Nguyen; Hui Rong Shen; Jianzhu Chen; Herman N. Eisen; Jorge Heller; Robert Langer; David Putnam
Network: Computation in Neural Systems | 2003
David P. Nguyen; Loren M. Frank; Emery N. Brown