Jonathan H. Huggins
Massachusetts Institute of Technology
Publications
Featured research published by Jonathan H. Huggins.
Journal of Computational and Graphical Statistics | 2014
Eftychios A. Pnevmatikakis; Kamiar Rahnama Rad; Jonathan H. Huggins; Liam Paninski
Kalman filtering-smoothing is a fundamental tool in statistical time-series analysis. However, standard implementations of the Kalman filter-smoother require O(d³) time and O(d²) space per time step, where d is the dimension of the state variable, and are therefore impractical in high-dimensional problems. In this article we note that if a relatively small number of observations are available per time step, the Kalman equations may be approximated in terms of a low-rank perturbation of the prior state covariance matrix in the absence of any observations. In many cases this approximation may be computed and updated very efficiently (often in just O(k²d) or O(k²d + kd log d) time and space per time step, where k is the rank of the perturbation and in general k ≪ d), using fast methods from numerical linear algebra. We justify our approach and give bounds on the rank of the perturbation as a function of the desired accuracy. For the case of smoothing, we also quantify the error our algorithm incurs due to the low-rank approximation and show that it can be made arbitrarily small at the expense of a moderate computational cost. We describe applications involving smoothing of spatiotemporal neuroscience data. This article has online supplementary material.
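The core observation above can be illustrated with a minimal sketch (not the paper's algorithm): when only k ≪ d observations arrive per step, the Kalman posterior covariance equals the prior covariance minus a rank-k perturbation, so only a k-dimensional innovation system ever needs to be solved. Dimensions and noise levels below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 50, 3                      # state dimension, observations per step

P = np.eye(d)                     # prior state covariance (no observations)
H = rng.standard_normal((k, d))   # observation matrix, k << d
R = 0.1 * np.eye(k)               # observation noise covariance

# The innovation covariance S is only k x k, so this solve costs
# O(k^3 + k^2 d) rather than the O(d^3) of a dense covariance update.
S = H @ P @ H.T + R
U = P @ H.T                       # d x k cross term
P_post = P - U @ np.linalg.solve(S, U.T)

# The correction P - P_post = U S^{-1} U^T has rank at most k.
perturbation = P - P_post
rank = np.linalg.matrix_rank(perturbation)
print(rank)
```

The low-rank structure is what the fast methods exploit: instead of storing and updating a dense d × d covariance, one propagates only the k-dimensional factors of the perturbation.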
Journal of Computational Neuroscience | 2014
Ari Pakman; Jonathan H. Huggins; Carl Smith; Liam Paninski
We present fast methods for filtering voltage measurements and performing optimal inference of the location and strength of synaptic connections in large dendritic trees. Given noisy, subsampled voltage observations we develop fast ℓ1-penalized regression methods for Kalman state-space models of the neuron voltage dynamics. The value of the ℓ1-penalty parameter is chosen using cross-validation or, for low signal-to-noise ratio, a Mallows' Cp-like criterion. Using low-rank approximations, we reduce the inference runtime from cubic to linear in the number of dendritic compartments. We also present an alternative, fully Bayesian approach to the inference problem using a spike-and-slab prior. We illustrate our results with simulations on toy and real neuronal geometries. We consider observation schemes that either scan the dendritic geometry uniformly or measure linear combinations of voltages across several locations with random coefficients. For the latter, we show how to choose the coefficients to offset the correlation between successive measurements imposed by the neuron dynamics. This results in a "compressed sensing" observation scheme, with an important reduction in the number of measurements required to infer the synaptic weights.
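The compressed-sensing idea above can be sketched with a toy example: a sparse weight vector (a few active synapses among many compartments) is recovered from a small number of random-coefficient linear measurements by ℓ1-penalized least squares, here solved with plain iterative soft-thresholding (ISTA). This is an illustrative stand-in, not the paper's Kalman-based method, and the dimensions and penalty value are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_comp, n_obs = 200, 60           # dendritic compartments, measurements

# Sparse ground truth: 5 active synapses out of 200 compartments.
w_true = np.zeros(n_comp)
w_true[rng.choice(n_comp, 5, replace=False)] = rng.standard_normal(5)

# Random-coefficient observation scheme: each measurement is a random
# linear combination of voltages across compartments.
A = rng.standard_normal((n_obs, n_comp)) / np.sqrt(n_obs)
y = A @ w_true + 0.01 * rng.standard_normal(n_obs)

# ISTA for the l1-penalized least-squares objective
# 0.5 * ||A w - y||^2 + lam * ||w||_1.
lam = 0.02                        # penalty (would be chosen by cross-validation)
step = 1.0 / np.linalg.norm(A, 2) ** 2
w = np.zeros(n_comp)
for _ in range(500):
    z = w - step * (A.T @ (A @ w - y))            # gradient step
    w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

rel_err = np.linalg.norm(w - w_true) / np.linalg.norm(w_true)
print(round(rel_err, 3))
```

With far fewer measurements than compartments (60 versus 200), the sparse weights are still recovered accurately, which is the "important reduction in the number of measurements" the abstract refers to.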
arXiv: Statistics Theory | 2015
Jonathan H. Huggins; Daniel M. Roy
Neural Information Processing Systems | 2016
Jonathan H. Huggins; Trevor Campbell; Tamara Broderick
International Conference on Artificial Intelligence and Statistics | 2017
Jonathan H. Huggins; James Zou
International Conference on Machine Learning | 2015
Jonathan H. Huggins; Karthik Narasimhan; Ardavan Saeedi; Vikash K. Mansinghka
International Conference on Machine Learning | 2015
Jonathan H. Huggins; Joshua B. Tenenbaum
arXiv: Statistics Theory | 2016
Trevor Campbell; Jonathan H. Huggins; Jonathan P. How; Tamara Broderick
arXiv: Machine Learning | 2016
Ryan Giordano; Tamara Broderick; Rachael Meager; Jonathan H. Huggins; Michael I. Jordan