Jayanta K. Ghosh
Purdue University
Publications
Featured research published by Jayanta K. Ghosh.
Bayesian Analysis | 2010
Surya T. Tokdar; Yu Zhu; Jayanta K. Ghosh
We develop a novel Bayesian density regression model based on logistic Gaussian processes and subspace projection. Logistic Gaussian processes provide an attractive alternative to the popular stick-breaking processes for modeling a family of conditional densities that vary smoothly in the conditioning variable. Subspace projection offers dimension reduction of predictors through multiple linear combinations, as an alternative to the zeroing-out theme of variable selection. We illustrate that logistic Gaussian processes and subspace projection combine well to produce a computationally tractable and theoretically sound density regression procedure that offers good out-of-sample prediction, accurate estimation of the subspace projection, and satisfactory estimation of the subspace dimensionality. We also demonstrate that subspace projection may lead to better prediction than variable selection when predictors are well chosen and possibly dependent on each other, each having a moderate influence on the response.
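A minimal numerical sketch of the logistic Gaussian process idea only (not the authors' full density regression model): one Gaussian process draw on a grid of response values is exponentiated and normalized into a density. The grid, kernel, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

# Sketch: turn one Gaussian-process draw into a density on a grid via the
# logistic (exponentiate-and-normalize) transform. Grid, kernel, and
# hyperparameters are illustrative assumptions, not the paper's choices.
rng = np.random.default_rng(0)
y_grid = np.linspace(-3, 3, 200)          # response grid
dy = y_grid[1] - y_grid[0]

# Squared-exponential covariance kernel on the grid
K = np.exp(-0.5 * ((y_grid[:, None] - y_grid[None, :]) / 0.7) ** 2)
W = rng.multivariate_normal(np.zeros_like(y_grid), K + 1e-8 * np.eye(len(y_grid)))

# Logistic Gaussian process density: f(y) = exp(W(y)) / integral of exp(W(t)) dt
f = np.exp(W)
f /= f.sum() * dy                          # normalize so the density integrates to ~1
print("mass on grid:", f.sum() * dy)
```

In the regression version, the surface W also depends on a low-dimensional linear projection of the predictors, which is where the subspace projection enters.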
arXiv: Statistics Theory | 2008
Małgorzata Bogdan; Jayanta K. Ghosh; Surya T. Tokdar
In the spirit of modeling inference for microarrays as multiple testing for sparse mixtures, we present a similar approach to a simplified version of quantitative trait loci (QTL) mapping. Unlike in the case of microarrays, where the number of tests usually reaches tens of thousands, the number of tests performed in scans for QTL usually does not exceed several hundred. However, in typical cases, the sparsity p of significant alternatives for QTL mapping is in the same range as for microarrays. For methodological interest, as well as some related applications, we also consider non-sparse mixtures. Using simulations as well as theoretical observations, we study false discovery rate (FDR), power, and misclassification probability for the Benjamini-Hochberg (BH) procedure and its modifications, as well as for various parametric and nonparametric Bayes and parametric empirical Bayes procedures. Our results confirm the observation of Genovese and Wasserman (2002) that for small p the misclassification error of BH is close to optimal in the sense of attaining the Bayes oracle. This property is shared by some of the considered Bayes testing rules, which in general perform better than BH for large or moderate p's.
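The Benjamini-Hochberg step-up rule studied in this paper is short enough to state in a few lines. The sketch below is a generic implementation; the significance level and the toy p-values are placeholders, not the paper's simulation settings.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up BH rule: reject the k smallest p-values, where k is the
    largest index with p_(k) <= k * alpha / m. Returns a boolean mask."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])      # largest index meeting the bound
        reject[order[: k + 1]] = True
    return reject

# Toy example: a few small p-values among mostly-null ones (illustrative only)
pvals = [0.001, 0.009, 0.02, 0.04, 0.3, 0.5, 0.7, 0.9]
print(benjamini_hochberg(pvals, alpha=0.05))
```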
Communications in Statistics - Theory and Methods | 1980
Asit P. Basu; Jayanta K. Ghosh
Let X1, X2, …, Xp be p random variables with cdfs F1(x), F2(x), …, Fp(x), respectively, and let U = min(X1, X2, …, Xp) and V = max(X1, X2, …, Xp). In this paper we study the problem of uniquely determining and estimating the marginal distributions F1, F2, …, Fp given the distribution of U or of V. First, the problems of competing and complementary risks are introduced with examples, and the corresponding identification problems are considered when the Xi are independently distributed and U (respectively V) is identified, as well as the case when U (respectively V) is not identified. The case when the Xi are dependent is considered next. Finally, the problem of estimation is considered.
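For independent exponential lifetimes the competing-risks identification is explicit: U = min(X1, …, Xp) is exponential with rate equal to the sum of the rates, and the probability that component i causes the failure is its share of that total rate, so the marginals can be recovered from (U, cause). A small Monte Carlo sketch of that special case; the rates below are arbitrary illustrative values.

```python
import numpy as np

# Competing-risks sketch for independent exponentials: observe only
# U = min(X_1, X_2, X_3) and which component achieved it, then recover
# each rate via  lambda_i = P(cause = i) * (total rate).
rng = np.random.default_rng(1)
rates = np.array([0.5, 1.0, 2.0])          # arbitrary illustrative rates
n = 200_000

X = rng.exponential(1.0 / rates, size=(n, 3))   # columns: X_1, X_2, X_3
U = X.min(axis=1)                               # observed failure time
cause = X.argmin(axis=1)                        # observed failure cause

total_rate_hat = 1.0 / U.mean()                 # U ~ Exp(sum of rates)
cause_prob_hat = np.bincount(cause, minlength=3) / n
print("recovered rates:", total_rate_hat * cause_prob_hat)   # approx [0.5, 1.0, 2.0]
```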
Test | 1995
Gauri Sankar Datta; Jayanta K. Ghosh
For a Euclidean group G acting freely on the parameter space, we derive, among several noninformative priors, the reference priors of Berger-Bernardo and Chang-Eaves for our parameter of interest θ1, a scalar maximal invariant parametric function. Identifying the nuisance parameter vector with the group element, we derive a simple structure of the information matrix, which is used to obtain different noninformative priors. We compare these priors using the marginalization paradox and the probability-matching criteria. The Chang-Eaves and Berger-Bernardo reference priors appear to be the most attractive choices. Several illustrative examples are considered.
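As a standard point of comparison for the noninformative priors discussed above (Jeffreys' rule, not the Berger-Bernardo or Chang-Eaves constructions of the paper), the normal location-scale model gives the familiar result below; these are textbook facts, shown only as a baseline.

```latex
% Jeffreys' general rule and its normal location-scale instance:
\pi_J(\theta) \propto \sqrt{\det I(\theta)}, \qquad
I(\mu,\sigma) = \begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 2/\sigma^2 \end{pmatrix}
\;\Longrightarrow\; \pi_J(\mu,\sigma) \propto \sigma^{-2},
% while the reference prior treating \mu as the interest parameter and
% \sigma as nuisance is \pi(\mu,\sigma) \propto \sigma^{-1}.
```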
Annals of Statistics | 2009
Surya T. Tokdar; Ryan Martin; Jayanta K. Ghosh
Mixture models have received considerable attention recently and Newton [Sankhyā Ser. A 64 (2002) 306-322] proposed a fast recursive algorithm for estimating a mixing distribution. We prove almost sure consistency of this recursive estimate in the weak topology under mild conditions on the family of densities being mixed. This recursive estimate depends on the data ordering and a permutation-invariant modification is proposed, which is an average of the original over permutations of the data sequence. A Rao-Blackwell argument is used to prove consistency in probability of this alternative estimate. Several simulations are presented, comparing the finite-sample performance of the recursive estimate and a Monte Carlo approximation to the permutation-invariant alternative along with that of the nonparametric maximum likelihood estimate and a nonparametric Bayes estimate.
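Newton's recursive rule, whose consistency is established here, has a one-line statement: after observation X_i, the current mixing density f_{i-1} is updated to f_i = (1 - w_i) f_{i-1} + w_i · k(X_i | θ) f_{i-1}(θ) / ∫ k(X_i | t) f_{i-1}(t) dt. A grid-based sketch for a normal location mixture follows; the grid, the weight sequence w_i = 1/(i+1), the flat starting guess, and the data-generating mixture are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Grid-based sketch of Newton's recursive estimate of a mixing density for a
# normal location mixture  X ~ integral of N(x | theta, 1) dF(theta).
rng = np.random.default_rng(2)
theta_grid = np.linspace(-6, 6, 400)
dtheta = theta_grid[1] - theta_grid[0]
f = np.full_like(theta_grid, 1.0 / (theta_grid[-1] - theta_grid[0]))  # flat start

# Data from a two-component mixture (the truth is used only to generate X)
X = np.where(rng.random(2000) < 0.3,
             rng.normal(-2, 1, 2000), rng.normal(2, 1, 2000))

for i, x in enumerate(X, start=1):
    w = 1.0 / (i + 1)                               # decreasing weight sequence
    like = norm.pdf(x, loc=theta_grid, scale=1.0)   # kernel k(x | theta)
    posterior = like * f
    posterior /= posterior.sum() * dtheta           # normalize on the grid
    f = (1 - w) * f + w * posterior                 # Newton's recursive update

print("estimated mixing mass near -2 and +2:",
      f[np.abs(theta_grid + 2) < 0.5].sum() * dtheta,
      f[np.abs(theta_grid - 2) < 0.5].sum() * dtheta)
```

The permutation-averaged variant discussed in the abstract simply averages the output of this loop over random reorderings of the data sequence.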
Bayesian Analysis | 2013
Jyotishka Datta; Jayanta K. Ghosh
In this paper, we establish some optimality properties of the multiple testing rule induced by the horseshoe estimator due to Carvalho, Polson, and Scott (2010, 2009) from a Bayesian decision-theoretic viewpoint. We consider the two-groups model for the data and an additive loss structure such that the total loss is equal to the number of misclassified hypotheses. We use the same asymptotic framework as Bogdan, Chakrabarti, Frommlet, and Ghosh (2011), who introduced the Bayes oracle in the context of multiple testing and provided conditions under which the Benjamini-Hochberg and Bonferroni procedures attain the risk of the Bayes oracle. We prove a similar result for the horseshoe decision rule up to O(1), with the constant in the horseshoe risk close to the constant in the oracle. We use the full Bayes estimate of the tuning parameter τ. It is worth noting that the full Bayes estimate cannot be replaced by the empirical Bayes estimate, which tends to be too small.
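A sketch of the induced testing rule in the simple sparse normal-means setup, with the global parameter τ held fixed by quadrature over the local half-Cauchy scale (the paper's full-Bayes estimate of τ and its asymptotic framework are not reproduced here): an observation is flagged as a signal when its posterior shrinkage weight exceeds 1/2, i.e. when the horseshoe posterior mean exceeds half the observation.

```python
import numpy as np
from scipy.stats import norm

def horseshoe_shrinkage_weight(y, tau=0.1, lam_grid=None):
    """E[1 - kappa | y] for one observation in the model y ~ N(theta, 1),
    theta ~ N(0, lam^2 tau^2), lam ~ half-Cauchy(0, 1), by quadrature over
    lam. tau is treated as fixed; the grid is an illustrative choice."""
    if lam_grid is None:
        lam_grid = np.linspace(1e-4, 50, 20_000)
    kappa = 1.0 / (1.0 + lam_grid**2 * tau**2)            # shrinkage factor
    half_cauchy = 2.0 / (np.pi * (1.0 + lam_grid**2))     # prior on lam
    marginal = norm.pdf(y, scale=np.sqrt(1.0 + lam_grid**2 * tau**2))
    weights = marginal * half_cauchy
    return 1.0 - np.sum(kappa * weights) / np.sum(weights)

# Decision rule induced by the horseshoe estimator: flag y_i as a signal
# when the shrinkage weight exceeds 1/2, i.e. E[theta_i | y_i] > y_i / 2.
for y in [0.5, 2.0, 4.0, 8.0]:
    w = horseshoe_shrinkage_weight(y)
    print(f"y = {y:4.1f}  weight = {w:.3f}  flagged = {w > 0.5}")
```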
Statistical Science | 2008
Ryan Martin; Jayanta K. Ghosh
Journal of Statistical Planning and Inference | 1999
Subhashis Ghosal; Jayanta K. Ghosh; R. V. Ramamoorthi
Advances in statistical decision theory and applications | 1997
Subhashis Ghosal; Jayanta K. Ghosh; R. V. Ramamoorthi
Statistics & Probability Letters | 2003
Ruma Basu; Jayanta K. Ghosh; Rahul Mukerjee