Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Natesh S. Pillai is active.

Publication


Featured research published by Natesh S. Pillai.


Bernoulli | 2013

Optimal tuning of the hybrid Monte Carlo algorithm

Alexandros Beskos; Natesh S. Pillai; Gareth O. Roberts; Jesus M. Sanz-Serna; Andrew M. Stuart

We investigate the properties of the Hybrid Monte Carlo algorithm (HMC) in high dimensions. HMC develops a Markov chain reversible with respect to a given target distribution $\pi$ by using separable Hamiltonian dynamics with potential $-\log \pi$. The additional momentum variables are chosen at random from the Boltzmann distribution, and the continuous-time Hamiltonian dynamics are then discretised using the leapfrog scheme. The induced bias is removed via a Metropolis–Hastings accept/reject rule. In the simplified scenario of independent, identically distributed components, we prove that, to obtain an $\mathcal{O}(1)$ acceptance probability as the dimension $d$ of the state space tends to $\infty$, the leapfrog step size $h$ should be scaled as $h = \ell \times d^{-1/4}$. Therefore, in high dimensions, HMC requires $\mathcal{O}(d^{1/4})$ steps to traverse the state space. We also identify analytically the asymptotically optimal acceptance probability, which turns out to be 0.651 (to three decimal places). This is the choice which optimally balances the cost of generating a proposal, which decreases as $\ell$ increases (because fewer steps are required to reach the desired final integration time), against the cost related to the average number of proposals required to obtain acceptance, which increases as $\ell$ increases.
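As a minimal illustration of the scheme described above, the following sketch runs HMC with the $h = \ell \times d^{-1/4}$ step-size scaling on an i.i.d. standard Gaussian target. The target, the total integration time, and the value of $\ell$ are illustrative choices, not taken from the paper.

```python
import numpy as np

def leapfrog(q, p, grad_U, h, n_steps):
    """Leapfrog discretisation of Hamiltonian dynamics with potential U."""
    q, p = q.copy(), p.copy()
    p -= 0.5 * h * grad_U(q)                 # initial half step for momentum
    for _ in range(n_steps - 1):
        q += h * p                           # full step for position
        p -= h * grad_U(q)                   # full step for momentum
    q += h * p
    p -= 0.5 * h * grad_U(q)                 # final half step for momentum
    return q, p

def hmc_acceptance(d, n_iters=2000, ell=1.0, T=1.0, seed=0):
    """HMC for an iid standard Gaussian target, U(q) = |q|^2 / 2."""
    rng = np.random.default_rng(seed)
    U = lambda q: 0.5 * np.dot(q, q)         # potential -log(pi), up to a constant
    grad_U = lambda q: q
    h = ell * d ** (-0.25)                   # the h = ell * d^(-1/4) scaling
    n_steps = max(1, round(T / h))           # so O(d^(1/4)) leapfrog steps per proposal
    q = rng.standard_normal(d)               # start in stationarity
    accepts = 0
    for _ in range(n_iters):
        p = rng.standard_normal(d)           # momentum from the Boltzmann distribution
        q_new, p_new = leapfrog(q, p, grad_U, h, n_steps)
        # Metropolis-Hastings correction removes the discretisation bias
        dH = U(q_new) + 0.5 * np.dot(p_new, p_new) - U(q) - 0.5 * np.dot(p, p)
        if np.log(rng.uniform()) < -dH:
            q, accepts = q_new, accepts + 1
    return accepts / n_iters

print(hmc_acceptance(d=1000))  # acceptance stays O(1) as d grows for fixed ell
```

Tuning $\ell$ so that the empirical acceptance rate sits near 0.651 is, per the result above, the asymptotically optimal choice.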


Journal of the American Statistical Association | 2015

Dirichlet–Laplace Priors for Optimal Shrinkage

Anirban Bhattacharya; Debdeep Pati; Natesh S. Pillai; David B. Dunson

Penalized regression methods, such as L1 regularization, are routinely used in high-dimensional applications, and there is a rich literature on optimality properties under sparsity assumptions. In the Bayesian paradigm, sparsity is routinely induced through two-component mixture priors having a probability mass at zero, but such priors encounter daunting computational problems in high dimensions. This has motivated continuous shrinkage priors, which can be expressed as global-local scale mixtures of Gaussians, facilitating computation. In contrast to the frequentist literature, little is known about the properties of such priors and the convergence and concentration of the corresponding posterior distribution. In this article, we propose a new class of Dirichlet–Laplace priors, which possess optimal posterior concentration and lead to efficient posterior computation. Finite sample performance of Dirichlet–Laplace priors relative to alternatives is assessed in simulated and real data examples.
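As a concrete illustration, the sketch below draws coefficient vectors from the Dirichlet–Laplace hierarchy as it is commonly stated: $\phi \sim \mathrm{Dirichlet}(a, \ldots, a)$, $\tau \sim \mathrm{Gamma}(na, 1/2)$, and $\theta_j \mid \phi, \tau$ Laplace with scale $\phi_j \tau$. The hyperparameter values are illustrative.

```python
import numpy as np

def sample_dirichlet_laplace(n, a=0.5, size=1, seed=0):
    """Draw theta ~ DL_a: phi ~ Dir(a,...,a), tau ~ Gamma(n*a, rate 1/2),
    theta_j | phi, tau ~ Laplace(scale = phi_j * tau)."""
    rng = np.random.default_rng(seed)
    phi = rng.dirichlet(np.full(n, a), size=size)             # local weights, sum to 1
    tau = rng.gamma(shape=n * a, scale=2.0, size=(size, 1))   # global scale (rate 1/2)
    return rng.laplace(loc=0.0, scale=phi * tau)              # conditionally Laplace

theta = sample_dirichlet_laplace(n=1000, a=0.5)
# Small a pushes most of phi onto a few coordinates, so draws look sparse:
print(np.sort(np.abs(theta[0]))[-5:])  # a handful of entries dominate
```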


Annals of Applied Probability | 2014

Universality of covariance matrices

Natesh S. Pillai; Jun Yin

In this paper we prove the universality of covariance matrices of the form $H_{N \times N} = X^{\dagger} X$, where $X$ is an $M \times N$ rectangular matrix with independent real-valued entries $x_{ij}$ satisfying $\mathbb{E}\,x_{ij} = 0$ and $\mathbb{E}\,x_{ij}^2 = \frac{1}{M}$, as $N, M \to \infty$. Furthermore, it is assumed that these entries have sub-exponential tails. We study the asymptotics in the regime $N/M = d_N \in (0, \infty)$, $\lim_{N \to \infty} d_N \neq 1$. Our main result states that the Stieltjes transform of the empirical eigenvalue distribution of $H$ is given by the Marchenko–Pastur law uniformly up to the edges of the spectrum, with an error of order $(N\eta)^{-1}$, where $\eta$ is the imaginary part of the spectral parameter in the Stieltjes transform. From this strong local Marchenko–Pastur law, we derive the following results. 1. The rigidity of eigenvalues: if $\gamma_j = \gamma_{j,N}$ denotes the classical location of the $j$-th eigenvalue under the Marchenko–Pastur law …
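A quick numerical sanity check of the global Marchenko–Pastur law in this setup (the paper's result is the far stronger local version, down to spectral scales of order $(N\eta)^{-1}$); the dimensions below are illustrative.

```python
import numpy as np

# Empirical spectrum of H = X^T X, with X an M x N matrix of iid entries
# of mean 0 and variance 1/M, and ratio d = N/M bounded away from 1.
M, N = 4000, 1000
d = N / M
rng = np.random.default_rng(0)
X = rng.standard_normal((M, N)) / np.sqrt(M)     # E x_ij = 0, E x_ij^2 = 1/M
eigvals = np.linalg.eigvalsh(X.T @ X)            # spectrum of the N x N matrix H

# Marchenko-Pastur edges: the spectrum concentrates on [(1-sqrt(d))^2, (1+sqrt(d))^2]
lam_minus, lam_plus = (1 - np.sqrt(d)) ** 2, (1 + np.sqrt(d)) ** 2
print(eigvals.min(), "vs lower edge", lam_minus)
print(eigvals.max(), "vs upper edge", lam_plus)

# Marchenko-Pastur density on the bulk, for comparison with a histogram of eigvals
x = np.linspace(lam_minus + 1e-6, lam_plus - 1e-6, 200)
rho = np.sqrt((lam_plus - x) * (x - lam_minus)) / (2 * np.pi * d * x)
```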


Annals of Statistics | 2014

Posterior contraction in sparse Bayesian factor models for massive covariance matrices

Debdeep Pati; Anirban Bhattacharya; Natesh S. Pillai; David B. Dunson

Sparse Bayesian factor models are routinely implemented for parsimonious dependence modeling and dimensionality reduction in high-dimensional applications. We provide theoretical understanding of such Bayesian procedures in terms of posterior convergence rates in inferring high-dimensional covariance matrices where the dimension can be larger than the sample size. Under relevant sparsity assumptions on the true covariance matrix, we show that commonly used point mass mixture priors on the factor loadings lead to consistent estimation in the operator norm even when $p \gg n$. One of our major contributions is to develop a new class of continuous shrinkage priors and provide insights into their concentration around sparse vectors. Using such priors for the factor loadings, we obtain similar rates of convergence as obtained with point mass mixture priors. To obtain the convergence rates, we construct test functions to separate points in the space of high-dimensional covariance matrices using insights from random matrix theory; the tools developed may be of independent interest. We also derive minimax rates and show that the Bayesian posterior rates of convergence coincide with the minimax rates up to a $\sqrt{\log n}$ term.
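For context, a sketch of the model class under study: a $p \times p$ covariance matrix $\Sigma = \Lambda \Lambda^{\top} + \sigma^2 I$ with a sparse $p \times k$ loadings matrix $\Lambda$, in a regime where $p$ exceeds the sample size $n$. All dimensions and sparsity levels below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
p, k, n, sigma2 = 500, 5, 100, 1.0               # p >> n: more variables than samples

# Sparse loadings: each column of Lambda has only a few nonzero entries
Lambda = np.zeros((p, k))
for j in range(k):
    support = rng.choice(p, size=10, replace=False)
    Lambda[support, j] = rng.standard_normal(10)

Sigma = Lambda @ Lambda.T + sigma2 * np.eye(p)   # true covariance: rank-k plus noise

# n iid samples y_i ~ N(0, Sigma), generated via the factor representation
eta = rng.standard_normal((n, k))                # latent factors
Y = eta @ Lambda.T + np.sqrt(sigma2) * rng.standard_normal((n, p))

# With p >> n the raw sample covariance is a poor operator-norm estimate,
# which is what motivates shrinkage priors on the loadings.
S = Y.T @ Y / n
print(np.linalg.norm(S - Sigma, 2))              # operator-norm error
```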


Annals of Applied Probability | 2012

Diffusion limits of the random walk Metropolis algorithm in high dimensions

Jonathan C. Mattingly; Natesh S. Pillai; Andrew M. Stuart

Diffusion limits of MCMC methods in high dimensions provide a useful theoretical tool for studying computational complexity. In particular, they lead directly to precise estimates of the number of steps required to explore the target measure, in stationarity, as a function of the dimension of the state space. However, to date such results have mainly been proved for target measures with a product structure, severely limiting their applicability. The purpose of this paper is to study diffusion limits for a class of naturally occurring high-dimensional measures found from the approximation of measures on a Hilbert space which are absolutely continuous with respect to a Gaussian reference measure. The diffusion limit of a random walk Metropolis algorithm to an infinite-dimensional Hilbert space valued SDE (or SPDE) is proved, facilitating understanding of the computational complexity of the algorithm.
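A toy sketch of the algorithm being analyzed: random walk Metropolis for a target with density proportional to $\exp(-\Psi(x))$ with respect to an $N$-dimensional Gaussian approximation $N(0, C)$, with proposal variance scaled as $1/N$. The covariance eigenvalues, the functional $\Psi$, and the constant $\ell$ below are illustrative stand-ins, not taken from the paper.

```python
import numpy as np

def rwm_acceptance(N, n_iters=5000, ell=2.0, seed=0):
    """Random walk Metropolis for pi(x) proportional to exp(-Psi(x)) under N(0, C)."""
    rng = np.random.default_rng(seed)
    lam = 1.0 / np.arange(1, N + 1) ** 2          # eigenvalues of C: trace-class decay
    Psi = lambda x: 0.25 * np.sum(x ** 4)         # toy non-Gaussian change of measure
    log_target = lambda x: -Psi(x) - 0.5 * np.sum(x ** 2 / lam)
    delta = ell / N                                # step scaling: proposal variance O(1/N)
    x = np.sqrt(lam) * rng.standard_normal(N)      # start from the Gaussian reference
    accepts = 0
    for _ in range(n_iters):
        y = x + np.sqrt(2 * delta * lam) * rng.standard_normal(N)  # xi ~ N(0, C)
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x, accepts = y, accepts + 1
    return accepts / n_iters

print(rwm_acceptance(N=100))  # with this scaling, O(N) steps explore the target
```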


Annals of Applied Probability | 2012

Optimal scaling and diffusion limits for the Langevin algorithm in high dimensions

Natesh S. Pillai; Andrew M. Stuart; Alexandre H. Thiery

The Metropolis-adjusted Langevin algorithm (MALA) is a sampling algorithm which makes local moves by incorporating information about the gradient of the logarithm of the target density. In this paper we study the efficiency of MALA on a natural class of target measures supported on an infinite-dimensional Hilbert space. These natural measures have density with respect to a Gaussian random field measure and arise in many applications such as Bayesian nonparametric statistics and the theory of conditioned diffusions. We prove that, started in stationarity, a suitably interpolated and scaled version of the Markov chain corresponding to MALA converges to an infinite-dimensional diffusion process. Our results imply that, in stationarity, the MALA algorithm applied to an $N$-dimensional approximation of the target will take $\mathcal{O}(N^{1/3})$ steps to explore the invariant measure.
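A minimal sketch of MALA with the $N^{-1/3}$ step-size scaling this result points to, on an i.i.d. standard Gaussian target; the target and the constant $\ell$ are illustrative.

```python
import numpy as np

def mala_acceptance(N, n_iters=5000, ell=1.0, seed=0):
    """MALA for an iid standard Gaussian target, step size h = ell * N^(-1/3)."""
    rng = np.random.default_rng(seed)
    log_pi = lambda x: -0.5 * np.dot(x, x)
    grad_log_pi = lambda x: -x
    h = ell * N ** (-1.0 / 3.0)                    # the N^(-1/3) scaling
    # log-density of the Langevin proposal kernel q(b -> a)
    log_q = lambda a, b: -np.sum((a - b - 0.5 * h * grad_log_pi(b)) ** 2) / (2 * h)
    x = rng.standard_normal(N)                     # start in stationarity
    accepts = 0
    for _ in range(n_iters):
        # local move using the gradient of log pi
        y = x + 0.5 * h * grad_log_pi(x) + np.sqrt(h) * rng.standard_normal(N)
        log_alpha = log_pi(y) + log_q(x, y) - log_pi(x) - log_q(y, x)
        if np.log(rng.uniform()) < log_alpha:
            x, accepts = y, accepts + 1
    return accepts / n_iters

print(mala_acceptance(N=1000))  # acceptance stays O(1) with the N^(-1/3) scaling
```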


Journal of the American Statistical Association | 2016

Accelerating Asymptotically Exact MCMC for Computationally Intensive Models via Local Approximations

Patrick R. Conrad; Youssef M. Marzouk; Natesh S. Pillai; Aaron Smith



Annals of Statistics | 2012

Edge universality of correlation matrices

Natesh S. Pillai; Jun Yin



Bayesian Analysis | 2015

Bayesian Nonparametric Weighted Sampling Inference

Yajuan Si; Natesh S. Pillai; Andrew Gelman



Statistics Surveys | 2015

Statistical inference for dynamical systems: A review

Kevin McGoff; Sayan Mukherjee; Natesh S. Pillai


Collaboration


Dive into Natesh S. Pillai's collaborations.

Top Co-Authors

Andrew M. Stuart, California Institute of Technology
Debdeep Pati, Florida State University
Alexandre H. Thiery, National University of Singapore
Martin Lysy, University of Waterloo
David B. Hill, University of North Carolina at Chapel Hill
M. Gregory Forest, University of North Carolina at Chapel Hill