Publication


Featured research published by Kamiar Rahnama Rad.


Journal of Computational Neuroscience | 2010

A new look at state-space models for neural data

Liam Paninski; Yashar Ahmadian; Daniel Gil Ferreira; Shinsuke Koyama; Kamiar Rahnama Rad; Michael Vidne; Joshua T. Vogelstein; Wei Wu

State space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on certain approximations which are not always accurate. Here we review direct optimization methods that avoid these approximations, but that nonetheless retain the computational efficiency of the approximate methods. We discuss a variety of examples, applying these direct optimization techniques to problems in spike train smoothing, stimulus decoding, parameter estimation, and inference of synaptic properties. Along the way, we point out connections to some related standard statistical methods, including spline smoothing and isotonic regression. Finally, we note that the computational methods reviewed here do not in fact depend on the state-space setting at all; instead, the key property we are exploiting involves the bandedness of certain matrices. We close by discussing some applications of this more general point of view, including Markov chain Monte Carlo methods for neural decoding and efficient estimation of spatially-varying firing rates.
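The banded-matrix observation at the end of this abstract is easy to illustrate. Below is a minimal sketch (not the authors' code; the AR(1)-plus-Poisson model, parameter values, and function names are my own illustrative assumptions) of MAP smoothing of a one-dimensional latent state from spike counts, where each Newton step solves a tridiagonal system in O(T) time with a banded solver.

```python
import numpy as np
from scipy.linalg import solveh_banded

def prior_grad(x, a, q):
    # Gradient of the negative log AR(1) prior: x_t = a x_{t-1} + N(0, q), x_1 ~ N(0, q).
    g = np.zeros_like(x)
    g[0] += x[0] / q
    g[1:] += (x[1:] - a * x[:-1]) / q
    g[:-1] -= a * (x[1:] - a * x[:-1]) / q
    return g

def map_smooth(y, a=0.95, q=0.1, n_iter=20):
    # MAP path for y_t ~ Poisson(exp(x_t)). The log-posterior Hessian is
    # tridiagonal, so each Newton step is a banded solve costing O(T).
    T = len(y)
    x = np.zeros(T)
    d_prior = np.full(T, (1 + a**2) / q)   # diagonal of the prior precision
    d_prior[-1] = 1.0 / q
    off_prior = np.full(T - 1, -a / q)     # off-diagonal of the prior precision
    for _ in range(n_iter):
        lam = np.exp(x)
        grad = (y - lam) - prior_grad(x, a, q)
        ab = np.zeros((2, T))              # symmetric banded storage, upper form
        ab[0, 1:] = off_prior
        ab[1, :] = lam + d_prior           # negative Hessian: diag(lam) + prior precision
        x = x + solveh_banded(ab, grad)    # one O(T) Newton step
    return x

# Example: xhat = map_smooth(np.random.default_rng(0).poisson(1.0, 200))
```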


Neural Computation | 2009

Mean-field approximations for coupled populations of generalized linear model spiking neurons with Markov refractoriness

Taro Toyoizumi; Kamiar Rahnama Rad; Liam Paninski

There has recently been a great deal of interest in inferring network connectivity from the spike trains in populations of neurons. One class of useful models that can be fit easily to spiking data is based on generalized linear point process models from statistics. Once the parameters for these models are fit, the analyst is left with a nonlinear spiking network model with delays, which in general may be very difficult to understand analytically. Here we develop mean-field methods for approximating the stimulus-driven firing rates (in both the time-varying and steady-state cases), auto- and cross-correlations, and stimulus-dependent filtering properties of these networks. These approximations are valid when the contributions of individual network coupling terms are small and, hence, the total input to a neuron is approximately Gaussian. These approximations lead to deterministic ordinary differential equations that are much easier to solve and analyze than direct Monte Carlo simulation of the network activity. These approximations also provide an analytical way to evaluate the linear input-output filter of neurons and how the filters are modulated by network interactions and some stimulus feature. Finally, in the case of strong refractory effects, the mean-field approximations in the generalized linear model become inaccurate; therefore, we introduce a model that captures strong refractoriness, retains all of the easy fitting properties of the standard generalized linear model, and leads to much more accurate approximations of mean firing rates and cross-correlations that retain fine temporal behaviors.
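As a toy illustration of the mean-field idea, here is a sketch (mine, not the paper's derivation; the exponential nonlinearity, the Campbell-theorem variance term, and all names are assumptions) of a damped fixed-point iteration for steady-state rates when each neuron's summed input is treated as Gaussian, so that E[exp(input)] = exp(mu + sigma^2/2).

```python
import numpy as np

def mean_field_rates(b, W, n_iter=200, damp=0.5):
    # Self-consistent steady-state rates for f(u) = exp(u): neuron i's input
    # is approximated as Gaussian with mean mu_i = b_i + sum_j W_ij r_j and
    # variance sig2_i = sum_j W_ij^2 r_j (filtered-Poisson input variance).
    r = np.exp(b)                        # initial guess: uncoupled rates
    for _ in range(n_iter):
        mu = b + W @ r                   # mean of the Gaussian-approximated input
        sig2 = (W**2) @ r                # variance of the summed coupling terms
        r_new = np.exp(mu + 0.5 * sig2)  # Gaussian expectation of exp(.)
        r = (1 - damp) * r_new + damp * r
    return r

# Example: three weakly coupled neurons (weights small, as the approximation requires).
rng = np.random.default_rng(0)
b = np.array([-1.0, -1.2, -0.8])
W = 0.05 * rng.standard_normal((3, 3))
print(mean_field_rates(b, W))
```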


IEEE Transactions on Information Theory | 2011

Nearly Sharp Sufficient Conditions on Exact Sparsity Pattern Recovery

Kamiar Rahnama Rad

Consider the n-dimensional vector y = Xβ + ε, where β ∈ ℝ^p has only k nonzero entries and ε ∈ ℝ^n is Gaussian noise. This can be viewed as a linear system with sparsity constraints corrupted by noise, where the objective is to estimate the sparsity pattern of β given the observation vector y and the measurement matrix X. First, we derive a nonasymptotic upper bound on the probability that a specific wrong sparsity pattern is identified by the maximum-likelihood estimator. We find that this probability depends (inversely) exponentially on the difference between ‖Xβ‖₂ and the ℓ₂-norm of Xβ projected onto the range of the columns of X indexed by the wrong sparsity pattern. Second, when X is randomly drawn from a Gaussian ensemble, we calculate a nonasymptotic upper bound on the probability of the maximum-likelihood decoder not declaring (partially) the true sparsity pattern. Consequently, we obtain sufficient conditions on the sample size n that guarantee almost surely the recovery of the true sparsity pattern. We find that the required growth rate of the sample size n matches the growth rate of previously established necessary conditions.
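The maximum-likelihood decoder analyzed here has a direct, if exponentially expensive, implementation: over all supports of size k, choose the one whose column span captures the most of y. A hedged toy sketch (variable names and problem sizes are illustrative, not from the paper):

```python
import itertools
import numpy as np

def ml_support(y, X, k):
    # Exhaustive ML decoding under Gaussian noise: maximize the norm of y
    # projected onto span(X_S), i.e., minimize the least-squares residual.
    best_S, best_fit = None, -np.inf
    for S in itertools.combinations(range(X.shape[1]), k):
        Q, _ = np.linalg.qr(X[:, list(S)])   # orthonormal basis for span(X_S)
        fit = np.linalg.norm(Q.T @ y)        # norm of the projection of y
        if fit > best_fit:
            best_S, best_fit = S, fit
    return set(best_S)

# Toy check: n = 30 Gaussian measurements of a p = 10, k = 2 sparse vector.
rng = np.random.default_rng(1)
n, p, k = 30, 10, 2
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[[2, 7]] = [1.5, -2.0]
y = X @ beta + 0.1 * rng.standard_normal(n)
print(ml_support(y, X, k))  # expect {2, 7}
```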


Journal of Computational and Graphical Statistics | 2014

Fast Kalman Filtering and Forward–Backward Smoothing via a Low-Rank Perturbative Approach

Eftychios A. Pnevmatikakis; Kamiar Rahnama Rad; Jonathan H. Huggins; Liam Paninski

Kalman filtering-smoothing is a fundamental tool in statistical time-series analysis. However, standard implementations of the Kalman filter-smoother require O(d³) time and O(d²) space per time step, where d is the dimension of the state variable, and are therefore impractical in high-dimensional problems. In this article we note that if a relatively small number of observations are available per time step, the Kalman equations may be approximated in terms of a low-rank perturbation of the prior state covariance matrix in the absence of any observations. In many cases this approximation may be computed and updated very efficiently (often in just O(k²d) or O(k²d + kd log d) time and space per time step, where k is the rank of the perturbation and in general k ≪ d), using fast methods from numerical linear algebra. We justify our approach and give bounds on the rank of the perturbation as a function of the desired accuracy. For the case of smoothing, we also quantify the error of our algorithm due to the low-rank approximation and show that it can be made arbitrarily low at the expense of a moderate computational cost. We describe applications involving smoothing of spatiotemporal neuroscience data. This article has online supplementary material.
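The low-rank structure the article exploits is already visible in the standard measurement update: with k observation rows, the posterior covariance is the prior covariance minus a rank-k term, and the only matrix inverted is k × k. A minimal sketch (the article's algorithm avoids ever forming the d × d covariance; this toy, with names and sizes of my choosing, only exhibits the rank-k perturbation):

```python
import numpy as np

def kalman_measurement_update(x_prior, P_prior, H, R, y):
    # Standard update, arranged to show that P_post differs from P_prior
    # by a rank-k matrix when H has k rows (k << d).
    S = H @ P_prior @ H.T + R              # k x k innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)   # d x k Kalman gain
    x_post = x_prior + K @ (y - H @ x_prior)
    P_post = P_prior - K @ (H @ P_prior)   # rank-k perturbation of the prior
    return x_post, P_post

# Example with d = 100 states and k = 3 observations.
rng = np.random.default_rng(2)
d, k = 100, 3
P = np.eye(d)
H = rng.standard_normal((k, d))
R = 0.1 * np.eye(k)
x, P = kalman_measurement_update(np.zeros(d), P, H, R, rng.standard_normal(k))
print(np.linalg.matrix_rank(np.eye(d) - P))   # = 3: a rank-k perturbation
```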


Network: Computation In Neural Systems | 2010

Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods

Kamiar Rahnama Rad; Liam Paninski

Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural error bars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
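A much-simplified sketch of the Gaussian-process idea (the paper works with point-process likelihoods and fast solvers; this toy, with hypothetical names and parameters, smooths binned counts by plain GP regression under an RBF kernel, which at least shows the role of the smoothness hyperparameters):

```python
import numpy as np

def rbf_kernel(A, B, ell=1.0, sig2=1.0):
    # Squared-exponential kernel between two sets of 2-D locations.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sig2 * np.exp(-0.5 * d2 / ell**2)

def gp_rate_map(locs, counts, grid, ell=0.2, sig2=1.0, noise=0.5):
    # Posterior mean rate surface at `grid` from counts observed at `locs`;
    # `ell` plays the role of the prior smoothness hyperparameter.
    K = rbf_kernel(locs, locs, ell, sig2) + noise * np.eye(len(locs))
    Ks = rbf_kernel(grid, locs, ell, sig2)
    return Ks @ np.linalg.solve(K, counts - counts.mean()) + counts.mean()

# Toy usage: 200 random positions in the unit square with Poisson counts.
rng = np.random.default_rng(5)
locs = rng.random((200, 2))
counts = rng.poisson(np.exp(1 + np.sin(4 * locs[:, 0]))).astype(float)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 20),
                            np.linspace(0, 1, 20)), -1).reshape(-1, 2)
rate = gp_rate_map(locs, counts, grid)
```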


Information Theory Workshop | 2010

Sparse superposition codes for Gaussian vector quantization

Ioannis Kontoyiannis; Kamiar Rahnama Rad; Savvas Gitzenis

A new method is presented for the optimal or near-optimal quantization of memoryless Gaussian data. The basic construction of the codebook is motivated by related ideas in the statistical framework of sparse recovery in linear regression. Similarly, the encoding is performed by a convex-hull iterative algorithm. Preliminary theoretical results establish the optimality of the proposed algorithm for a certain range of the parameter values. Experimental results demonstrate these theoretical findings on simulated data. The performance of the proposed algorithm is compared with that of trellis-coded quantization and of other recently proposed algorithms. Depending on the choice of the relevant design parameters, the complexity of the encoding algorithm varies, and is generally polynomial in the data block-length. The present results are, in part, motivated by analogous channel coding results.
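For intuition, here is a hedged toy (not the paper's convex-hull encoder; the greedy section-by-section rule, the scaling, and all names are my substitutions) of sparse superposition quantization: one column is selected from each section of a Gaussian dictionary, and their scaled superposition approximates the source vector.

```python
import numpy as np

def encode(y, X, L, M, scale):
    # Greedily pick one column per section, each time taking the column
    # most correlated with the current residual.
    residual = y.astype(float).copy()
    chosen = []
    for l in range(L):
        section = X[:, l * M:(l + 1) * M]
        j = np.argmax(section.T @ residual)   # best column in this section
        chosen.append(l * M + j)
        residual -= scale * section[:, j]
    return chosen, residual

rng = np.random.default_rng(3)
n, L, M = 64, 8, 32
X = rng.standard_normal((n, L * M)) / np.sqrt(n)   # unit-norm-ish columns
y = rng.standard_normal(n)
idx, res = encode(y, X, L, M, scale=np.linalg.norm(y) / np.sqrt(L))
print(np.linalg.norm(res) / np.linalg.norm(y))     # distortion ratio < 1
```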


Conference on Information Sciences and Systems | 2009

Sharp upper bound on error probability of exact sparsity recovery

Kamiar Rahnama Rad

Imagine the vector y = Xβ + ε, where β ∈ ℝ^m has only k nonzero entries and ε ∈ ℝ^n is Gaussian noise. This can be viewed as a linear system with sparsity constraints corrupted by noise. We find a non-asymptotic upper bound on the error probability of exact recovery of the sparsity pattern given any generic measurement matrix X. By drawing X from a Gaussian ensemble, as an example, to ensure exact recovery, we obtain asymptotically sharp sufficient conditions on n which meet the necessary conditions introduced in (Wang et al., 2008).
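The claim lends itself to a quick Monte Carlo check (the bound itself is analytic; this toy, with parameters of my choosing, merely estimates how the exact-recovery probability of the exhaustive ML decoder grows with n):

```python
import itertools
import numpy as np

def recovers(n, m=8, k=2, snr=1.0, rng=None):
    # One trial: does exhaustive ML search return the true support {0,...,k-1}?
    rng = rng or np.random.default_rng()
    X = rng.standard_normal((n, m))
    beta = np.zeros(m)
    beta[:k] = snr
    y = X @ beta + rng.standard_normal(n)
    best = max(itertools.combinations(range(m), k),
               key=lambda S: np.linalg.norm(np.linalg.qr(X[:, list(S)])[0].T @ y))
    return set(best) == set(range(k))

rng = np.random.default_rng(4)
for n in (10, 20, 40):
    print(n, np.mean([recovers(n, rng=rng) for _ in range(200)]))
    # success probability should rise toward 1 as n grows
```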


Conference on Information Sciences and Systems | 2012

Robust particle filters via sequential pairwise reparameterized Gibbs sampling

Liam Paninski; Kamiar Rahnama Rad; Michael Vidne

Sequential Monte Carlo (“particle filtering”) methods provide a powerful set of tools for recursive optimal Bayesian filtering in state-space models. However, these methods are based on importance sampling, which is known to be non-robust in several key scenarios, and therefore standard particle filtering methods can fail in these settings. We present a filtering method which solves the key forward recursion using a reparameterized Gibbs sampling method, thus sidestepping the need for importance sampling. In many cases the resulting filter is much more robust and efficient than standard importance-sampling particle filter implementations. We illustrate the method with an application to a nonlinear, non-Gaussian model from neuroscience.
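For context, this is the importance-sampling baseline the paper improves on: a minimal bootstrap particle filter (the model, parameters, and names are my illustrative assumptions), whose weights can degenerate in exactly the scenarios the abstract mentions.

```python
import numpy as np

def bootstrap_pf(y, n_particles=500, a=0.95, q=0.1, rng=None):
    # Bootstrap filter for x_t = a x_{t-1} + N(0, q), y_t ~ Poisson(exp(x_t)).
    rng = rng or np.random.default_rng()
    T = len(y)
    x = rng.normal(0, np.sqrt(q), n_particles)
    means = np.empty(T)
    for t in range(T):
        x = a * x + rng.normal(0, np.sqrt(q), n_particles)  # propose from prior
        logw = y[t] * x - np.exp(x)            # Poisson log-likelihood, up to a constant
        w = np.exp(logw - logw.max())
        w /= w.sum()                           # normalized importance weights
        means[t] = w @ x                       # filtered posterior mean
        x = rng.choice(x, n_particles, p=w)    # multinomial resampling
    return means
```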


Conference on Decision and Control | 2010

Distributed parameter estimation in networks

Kamiar Rahnama Rad; Alireza Tahbaz-Salehi


Neural Information Processing Systems | 2011

Information Rates and Optimal Decoding in Large Neural Populations

Kamiar Rahnama Rad; Liam Paninski

Collaboration


Dive into Kamiar Rahnama Rad's collaborations.

Top Co-Authors

Ali Jadbabaie, Massachusetts Institute of Technology
Jonathan H. Huggins, Massachusetts Institute of Technology
Pooya Molavi, University of Pennsylvania
Shinsuke Koyama, Carnegie Mellon University
Timothy A. Machado, Salk Institute for Biological Studies