
Publication


Featured research published by Subhashis Ghosal.


Annals of Statistics | 2007

Convergence rates of posterior distributions for non-i.i.d. observations

Subhashis Ghosal; Aad van der Vaart

We consider the asymptotic behavior of posterior distributions and Bayes estimators based on observations which are required to be neither independent nor identically distributed. We give general results on the rate of convergence of the posterior measure relative to distances derived from a testing criterion. We then specialize our results to independent, nonidentically distributed observations, Markov processes, stationary Gaussian time series and the white noise model. We apply our general results to several examples of infinite-dimensional statistical models including nonparametric regression with normal errors, binary regression, Poisson regression, an interval censoring model, Whittle estimation of the spectral density of a time series and a nonlinear autoregressive model.
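As a reminder of what a "rate of convergence of the posterior" means in this framework (the standard formulation, stated in notation of my choosing, not taken from the abstract): a sequence ε_n is a posterior contraction rate relative to a semimetric d_n if

```latex
\Pi_n\left( \theta : d_n(\theta, \theta_0) \ge M_n \varepsilon_n \,\middle|\, X^{(n)} \right) \longrightarrow 0
```

in probability under the true distribution, for every sequence M_n → ∞, where Π_n denotes the posterior based on the first n observations and θ_0 the true parameter.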


Annals of Statistics | 2007

Posterior convergence rates of Dirichlet mixtures at smooth densities

Subhashis Ghosal; Aad van der Vaart

We study the rates of convergence of the posterior distribution for Bayesian density estimation with Dirichlet mixtures of normal distributions as the prior. The true density is assumed to be twice continuously differentiable. The bandwidth is given a sequence of priors which is obtained by scaling a single prior by an appropriate order. In order to handle this problem, we derive a new general rate theorem by considering a countable covering of the parameter space whose prior probabilities satisfy a summability condition together with certain individual bounds on the Hellinger metric entropy. We apply this new general theorem on posterior convergence rates by computing bounds for Hellinger (bracketing) entropy numbers for the involved class of densities, the error in the approximation of a smooth density by normal mixtures and the concentration rate of the prior. The best obtainable rate of convergence of the posterior turns out to be equivalent to the well-known frequentist rate for integrated mean squared error, n^{-2/5}, up to a logarithmic factor.
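For context, the standard minimax calibration behind that rate (my notation, not from the abstract): for a β-smooth density the optimal contraction rate is n^{-β/(2β+1)} up to logarithmic factors, so twice continuous differentiability (β = 2) yields the n^{-2/5} rate quoted above:

```latex
\varepsilon_n = n^{-\beta/(2\beta+1)} (\log n)^{\kappa}, \qquad
\beta = 2 \;\Longrightarrow\; \varepsilon_n = n^{-2/5} (\log n)^{\kappa}
```

for some constant κ > 0 depending on the prior.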


Annals of Statistics | 2006

Posterior consistency of Gaussian process prior for nonparametric binary regression

Subhashis Ghosal; Anindya Roy

Consider binary observations whose response probability is an unknown smooth function of a set of covariates. Suppose that a prior on the response probability function is induced by a Gaussian process mapped to the unit interval through a link function. In this paper we study consistency of the resulting posterior distribution. If the covariance kernel has derivatives up to a desired order and the bandwidth parameter of the kernel is allowed to take arbitrarily small values, we show that the posterior distribution is consistent in the L1-distance. As an auxiliary result to our proofs, we show that, under certain conditions, a Gaussian process assigns positive probabilities to the uniform neighborhoods of a continuous function. This result may be of independent interest in the literature on small ball probabilities of Gaussian processes.
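The prior construction described above can be simulated directly: draw a latent Gaussian process path on a covariate grid and push it through a link function to obtain a random response-probability curve. The sketch below is illustrative only; the squared-exponential kernel, the bandwidth value, and the logistic link are my assumptions, not choices made in the paper.

```python
import numpy as np

def squared_exp_kernel(x, bandwidth=0.2):
    # Gram matrix of the squared-exponential covariance kernel on grid x
    diff = x[:, None] - x[None, :]
    return np.exp(-0.5 * (diff / bandwidth) ** 2)

def draw_response_probability(x, bandwidth=0.2, seed=0):
    """Draw one response-probability function from a Gaussian process
    prior, mapped into (0, 1) through a logistic link."""
    rng = np.random.default_rng(seed)
    K = squared_exp_kernel(x, bandwidth) + 1e-9 * np.eye(x.size)  # jitter
    f = rng.multivariate_normal(np.zeros(x.size), K)  # latent GP path
    return 1.0 / (1.0 + np.exp(-f))                   # logistic link

grid = np.linspace(0.0, 1.0, 50)
p = draw_response_probability(grid)   # one prior draw of p(x)
# simulate binary responses from the drawn probability curve
y = (np.random.default_rng(1).uniform(size=grid.size) < p).astype(int)
```

Allowing the bandwidth to shrink, as the theorem requires, corresponds to letting the prior draws wiggle at arbitrarily fine scales.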


Journal of the American Statistical Association | 2004

Bayesian Estimation of the Spectral Density of a Time Series

Nidhan Choudhuri; Subhashis Ghosal; Anindya Roy

This article describes a Bayesian approach to estimating the spectral density of a stationary time series. A nonparametric prior on the spectral density is described through Bernstein polynomials. Because the actual likelihood is very complicated, a pseudoposterior distribution is obtained by updating the prior using the Whittle likelihood. A Markov chain Monte Carlo algorithm for sampling from this posterior distribution is described that is used for computing the posterior mean, variance, and other statistics. A consistency result is established for this pseudoposterior distribution that holds for a short-memory Gaussian time series and under some conditions on the prior. To prove this asymptotic result, a general consistency theorem of Schwartz is extended for a triangular array of independent, nonidentically distributed observations. This extension is also of independent interest. A simulation study is conducted to compare the proposed method with some existing methods. The method is illustrated with the well-studied sunspot dataset.
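The Whittle likelihood used for the pseudoposterior update has a simple closed form in terms of the periodogram: the log-likelihood is approximately -Σ_j [log f(ω_j) + I(ω_j)/f(ω_j)] over the Fourier frequencies. The sketch below (function names are mine) evaluates it for a candidate spectral density; a white-noise spectrum serves only as a sanity check, not as the Bernstein-polynomial prior of the paper.

```python
import numpy as np

def whittle_loglik(x, spec_density):
    """Whittle approximation to the log-likelihood of a stationary
    series, given a candidate spectral density f(omega) on (0, pi)."""
    n = len(x)
    x = x - x.mean()
    # periodogram I(omega_j) = |DFT|^2 / (2*pi*n) at omega_j = 2*pi*j/n
    dft = np.fft.fft(x)
    m = (n - 1) // 2
    j = np.arange(1, m + 1)
    omega = 2 * np.pi * j / n
    I = np.abs(dft[1:m + 1]) ** 2 / (2 * np.pi * n)
    f = spec_density(omega)
    return -np.sum(np.log(f) + I / f)

# sanity check: white noise with variance s^2 has flat spectrum s^2/(2*pi)
rng = np.random.default_rng(0)
x = rng.normal(size=512)
ll_true = whittle_loglik(x, lambda w: np.full_like(w, 1.0 / (2 * np.pi)))
ll_wrong = whittle_loglik(x, lambda w: np.full_like(w, 9.0 / (2 * np.pi)))
```

Because each f(ω_j) enters the likelihood only through this sum, updating a nonparametric prior on f with the Whittle likelihood is computationally much lighter than using the exact Gaussian likelihood.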


Biometrika | 2013

Adaptive Bayesian multivariate density estimation with Dirichlet mixtures

Weining Shen; Surya T. Tokdar; Subhashis Ghosal

We show that rate-adaptive multivariate density estimation can be performed using Bayesian methods based on Dirichlet mixtures of normal kernels with a prior distribution on the kernel's covariance matrix parameter. We derive sufficient conditions on the prior specification that guarantee convergence to a true density at a rate that is minimax optimal for the smoothness class to which the true density belongs. No prior knowledge of smoothness is assumed. The sufficient conditions are shown to hold for the Dirichlet location mixture-of-normals prior with a Gaussian base measure and an inverse Wishart prior on the covariance matrix parameter. Locally Hölder smoothness classes and their anisotropic extensions are considered. Our study involves several technical novelties, including sharp approximation of finitely differentiable multivariate densities by normal mixtures and a new sieve on the space of such densities.


Electronic Journal of Statistics | 2008

Nonparametric Bayesian model selection and averaging

Subhashis Ghosal; Jüri Lember; Aad van der Vaart

We consider nonparametric Bayesian estimation of a probability density p based on a random sample of size n from this density using a hierarchical prior. The prior consists, for instance, of prior weights on the regularity of the unknown density combined with priors that are appropriate given that the density has this regularity. More generally, the hierarchy consists of prior weights on an abstract model index and a prior on a density model for each model index. We present a general theorem on the rate of contraction of the resulting posterior distribution as n → ∞, which gives conditions under which the rate of contraction is the one attached to the model that best approximates the true density of the observations. This shows that, for instance, the posterior distribution can adapt to the smoothness of the underlying density. We also study the posterior distribution of the model index, and find that under the same conditions the posterior distribution gives negligible weight to models that are bigger than the optimal one, and thus selects the optimal model or smaller models that also approximate the true density well. We apply these results to log-spline density models, where we show that the prior weights on the regularity index interact with the priors on the models, making the exact rates depend in a complicated way on the priors, but also that the rate is fairly robust to specification of the prior weights.


Electronic Journal of Statistics | 2008

Kullback Leibler property of kernel mixture priors in Bayesian density estimation

Yuefeng Wu; Subhashis Ghosal

Positivity of the prior probability of Kullback-Leibler neighborhood around the true density, commonly known as the Kullback-Leibler property, plays a fundamental role in posterior consistency. A popular prior for Bayesian estimation is given by a Dirichlet mixture, where the kernels are chosen depending on the sample space and the class of densities to be estimated. The Kullback-Leibler property of the Dirichlet mixture prior has been shown for some special kernels like the normal density or Bernstein polynomial, under appropriate conditions. In this paper, we obtain easily verifiable sufficient conditions, under which a prior obtained by mixing a general kernel possesses the Kullback-Leibler property. We study a wide variety of kernels used in practice, including the normal, histogram, gamma and Weibull densities, and show that the Kullback-Leibler property holds if some easily verifiable conditions are satisfied at the true density. This gives a catalog of conditions required for the Kullback-Leibler property, which can be readily used in applications.


Bernoulli | 1999

Asymptotic normality of posterior distributions in high-dimensional linear models

Subhashis Ghosal

We study consistency and asymptotic normality of posterior distributions of the regression coefficient in a linear model when the dimension of the parameter grows with increasing sample size. Under certain growth restrictions on the dimension (depending on the design matrix), we show that the posterior distributions concentrate in neighbourhoods of the true parameter and can be approximated by an appropriate normal distribution.


Statistics in Medicine | 2008

Bayesian bootstrap estimation of ROC curve

Jiezhun Gu; Subhashis Ghosal; Anindya Roy


Journal of Theoretical Probability | 1996

The strong law of large numbers for weighted averages under dependence assumptions

Tapas K. Chandra; Subhashis Ghosal

Collaboration


Subhashis Ghosal's frequent collaborators and their affiliations.

Top Co-Authors

Anindya Roy
University of Maryland

Sayantan Banerjee
University of Texas MD Anderson Cancer Center

Weining Shen
University of Texas MD Anderson Cancer Center

Prithwish Bhaumik
North Carolina State University

Yongqiang Tang
SUNY Downstate Medical Center