Publication


Featured research published by Anirban DasGupta.


TEST | 1994

An overview of robust Bayesian analysis

James O. Berger; Elías Moreno; Luis R. Pericchi; M. Jesús Bayarri; José M. Bernardo; Juan Antonio Cano; Julián de la Horra; Jacinto Martín; David Ríos-Insúa; Bruno Betrò; Anirban DasGupta; Paul Gustafson; Larry Wasserman; Joseph B. Kadane; Cid Srinivasan; Michael Lavine; Anthony O’Hagan; Wolfgang Polasek; Christian P. Robert; Constantinos Goutis; Fabrizio Ruggeri; Gabriella Salinetti; Siva Sivaganesan

Robust Bayesian analysis is the study of the sensitivity of Bayesian answers to uncertain inputs. This paper seeks to provide an overview of the subject, one that is accessible to statisticians outside the field. Recent developments in the area are also reviewed, though with very uneven emphasis.


Statistics and Risk Modeling | 1989

Frequentist behavior of robust Bayes estimates of normal means

Anirban DasGupta; William J. Studden

AMS 1980 subject classifications: 62F15, 62F35.


Journal of Statistical Planning and Inference | 1999

Comparison of the P-value and posterior probability

Hyun Sook Oh; Anirban DasGupta

For i.i.d. observations from a multivariate normal distribution in p dimensions with an unknown mean and a covariance matrix proportional to the identity, we revisit the issue of the apparent irreconcilability of the classical test for a point null and the standard Bayesian formulation for testing such a point null. With appropriate families of priors on the alternative, we consider the threshold value of the a priori probability of the point null required for the smallest (over priors on the alternative) posterior probability and the classical P-value to coincide. We also consider, for an arbitrary but fixed a priori probability of the point null, the ratio of the minimum posterior probability and the classical P-value. The main results emphasize properties of the null distributions of these two quantities, their quartiles, etc. Among the many theorems proved in the article are the results that, regardless of the dimension p, the threshold prior probability defined above has a median exactly equal to 0.5 in many cases, and that the ratio described above has a median exactly equal to twice the a priori probability assigned to the null. These and other results are an attempt to clarify the issue of typicality: how often the Bayes-classical conflict will arise and how large it will be.
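
For orientation, the standard point-null testing identity underlying such comparisons (stated here in generic notation; the paper works with specific families of priors on the alternative) expresses the posterior probability of the null in terms of the prior probability π0 of the point null, the density at the null, and the marginal density under the prior on the alternative:

\[
P(H_0 \mid x) = \left[ 1 + \frac{1-\pi_0}{\pi_0} \cdot \frac{m_1(x)}{f(x \mid \theta_0)} \right]^{-1},
\qquad
m_1(x) = \int f(x \mid \theta)\, \pi(\theta)\, d\theta .
\]

The threshold studied in the paper is the value of π0 at which the infimum of this posterior probability over the class of priors on the alternative coincides with the classical P-value.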


Journal of Statistical Planning and Inference | 1997

Sample size problems in ANOVA: Bayesian point of view

Anirban DasGupta; Brani Vidakovic

In this paper we discuss the sample size problem for balanced one-way ANOVA under a posterior Bayesian formulation of the problem. Using the distribution theory of appropriate quadratic forms, we derive explicit sample sizes for prespecified posterior precisions. Comparisons with classical sample sizes are made. Instead of extensive tables, a Mathematica program for sample size calculation is given. The formulations given in this article form a foundational step towards Bayesian calculation of sample size in general.
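
As a minimal sketch of the "prespecified posterior precision" idea, here in the simpler one-sample normal-mean setting rather than the paper's balanced one-way ANOVA formulation: with X1, …, Xn i.i.d. N(θ, σ²), σ² known, and a conjugate N(μ0, τ²) prior, the posterior variance is free of the data, so a precision target ε translates directly into a sample size:

\[
\operatorname{Var}(\theta \mid X_1,\dots,X_n) = \left( \frac{n}{\sigma^2} + \frac{1}{\tau^2} \right)^{-1} \le \varepsilon^2
\quad\Longleftrightarrow\quad
n \ge \sigma^2 \left( \frac{1}{\varepsilon^2} - \frac{1}{\tau^2} \right).
\]

The ANOVA case replaces the posterior variance by the distribution of an appropriate quadratic form, which is what the paper's explicit formulas handle.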


Journal of Statistical Planning and Inference | 1994

Uniform and subuniform posterior robustness: The sample size problem

Anirban DasGupta; Saurabh Mukhopadhyay

The following general question is addressed: given i.i.d. realizations X1, X2, …, Xn from a distribution Pθ with parameter θ, where θ has a prior distribution π belonging to some family Γ, is it possible to prescribe a sample size n0 such that, for n ⩾ n0, posterior robustness is guaranteed for any actual data we are likely to see, or even for all possible data? Formally, we identify a 'natural' set C such that P(the observation vector X ∉ C) ⩽ ε for all possible marginal distributions implied by Γ, and protect ourselves for all X in the set C. Typically, such a set C exists if Γ is tight. There are two aspects to these results: one of them is establishing the plausibility itself; this is done by showing uniform convergence to zero of ranges of posterior quantities. This part forms the mathematical foundation of the program. The second aspect is providing actual sample size prescriptions for a specific goal to be attained. This forms the application part of the program. In this article, we only consider testing and set estimation problems relating to the normal distribution with conjugate priors.
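
A conjugate-normal illustration of the "ranges of posterior quantities converge uniformly to zero" step (notation assumed here; this is not the paper's general argument): if X1, …, Xn are i.i.d. N(θ, σ²) with σ² known and Γ consists of N(μ, τ²) priors with τ² fixed and μ ranging over an interval [a, b], then the range of the posterior mean over Γ is

\[
\sup_{\mu \in [a,b]} E(\theta \mid X) - \inf_{\mu \in [a,b]} E(\theta \mid X)
= (b - a)\, \frac{1/\tau^2}{\, n/\sigma^2 + 1/\tau^2 \,} \;\longrightarrow\; 0
\quad \text{as } n \to \infty,
\]

and the bound does not involve the data at all, which is the uniform flavor of robustness that the sample size prescriptions aim for.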


Journal of Statistical Planning and Inference | 1992

Compromise designs in heteroscedastic linear models

Anirban DasGupta; Saurabh Mukhopadhyay; William J. Studden

We consider heteroscedastic linear models in which the variance of a response is an exponential or a power function of its mean. Such models have been considered earlier in Bickel (1978), Carroll and Ruppert (1982), and elsewhere. Classical as well as Bayes optimal experimental design is considered. We specifically address the problem of 'compromise design', where the experimenter is simultaneously interested in many estimation problems and wants to find a design that has an efficiency of at least 1/(1+ε) in each problem. For specific models we work out the smallest ε for which such a design exists. This is done for classical as well as Bayes problems. The effect of the variance function on the value of the smallest ε is examined. The maximin efficient design is then compared to the usual A-optimal design. Some general comments are made.
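
The compromise criterion can be written in the usual maximin-efficiency form (a generic restatement of the abstract, with notation assumed here): if Φ_i is the design criterion, to be minimized, for the i-th estimation problem and ξ_i* is its optimal design, then a design ξ is an acceptable compromise when

\[
\min_i \operatorname{eff}_i(\xi)
= \min_i \frac{\Phi_i(\xi_i^{*})}{\Phi_i(\xi)}
\ \ge\ \frac{1}{1+\varepsilon},
\]

and the paper determines, for specific heteroscedastic models, the smallest ε for which such a ξ exists.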


Archive | 1994

Distributions Which Are Gaussian Convolutions

Anirban DasGupta

Let Z ~ N(0,1). We consider distributions on ℝ which arise as convolutions with Z. The intersection of this class of convolutions with the family of normal scale mixtures is completely characterized and the implications are discussed. We also study the tail properties of the convolutions. A domain of attraction theorem is proved. Finally, we give a characterization of all random variables Y such that the convolution Z + Y is unimodal, and relate the number of modes of Y to that of the convolution. One particularly surprising example is given of a harshly oscillating density which becomes unimodal when convolved with Z.
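
For reference, the two families being compared can be written down explicitly (standard definitions, not notation taken from the paper): writing φ for the standard normal density, a Gaussian convolution has density obtained by convolving the law of Y with φ, while a (mean-zero) normal scale mixture averages φ over scales:

\[
f_{Z+Y}(x) = \int_{\mathbb{R}} \varphi(x - y)\, dP_Y(y),
\qquad
f_{\mathrm{mix}}(x) = \int_0^{\infty} \frac{1}{\sigma}\, \varphi\!\left( \frac{x}{\sigma} \right) dG(\sigma).
\]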


Archive | 2011

Finite Sample Theory of Order Statistics and Extremes

Anirban DasGupta

The ordered values of a sample of observations are called the order statistics of the sample, and the smallest and the largest are called the extremes. Order statistics and extremes are among the most important functions of a set of random variables that we study in probability and statistics. There is natural interest in studying the highs and lows of a sequence, and the other order statistics help in understanding the concentration of probability in a distribution, or equivalently, the diversity in the population represented by the distribution. Order statistics are also useful in statistical inference, where estimates of parameters are often based on some suitable functions of the order statistics. In particular, the median is of very special importance. There is a well-developed theory of the order statistics of a fixed number n of observations from a fixed distribution, as well as an asymptotic theory where n goes to infinity. We discuss the case of fixed n in this chapter. A distribution theory for order statistics when the observations are from a discrete distribution is complex, both notationally and algebraically, because several observations can be equal. These ties among the sample values make the distribution theory cumbersome. We therefore concentrate on the continuous case.
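
The central object of that fixed-n theory in the continuous case is the exact density of the k-th order statistic X_(k) from a sample of size n with cdf F and density f, recorded here for reference:

\[
f_{X_{(k)}}(x) = \frac{n!}{(k-1)!\,(n-k)!}\; F(x)^{k-1}\, \bigl[ 1 - F(x) \bigr]^{n-k}\, f(x),
\]

with k = 1 and k = n giving the densities of the sample minimum and maximum, the two extremes.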


Archive | 2011

Poisson Processes and Applications

Anirban DasGupta

A single theme that binds together a number of important probabilistic concepts and distributions, and is at the same time a major tool for the applied probabilist and the applied statistician, is the Poisson process. The Poisson process is a probabilistic model of situations where events occur completely at random at intermittent times, and we wish to study the number of times the particular event has occurred up to a specific time instant, or perhaps the waiting time till the next event, and so on. Some simple examples are receiving phone calls at a telephone call center, receiving an e-mail from someone, arrival of a customer at a pharmacy or some other store, catching a cold, occurrence of earthquakes, mechanical breakdown in a computer or some other machine, and so on. There is no end to how many examples we can think of, where an event happens, then nothing happens for a while, and then it happens again, and it keeps going like this, apparently at random. It is therefore not surprising that the Poisson process is such a valuable tool in the probabilist’s toolbox. It is also a fascinating feature of the Poisson process that it is connected in various interesting ways to a number of special distributions, including the Poisson, exponential, Gamma, Beta, uniform, binomial, and multinomial. These embracing connections and wide applications make the Poisson process a very special topic in probability.
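
A few of these connections can be made concrete for a homogeneous Poisson process with rate λ, counting process N(t), and arrival times T1 < T2 < ⋯ (standard facts, recorded here for reference):

\[
N(t) \sim \mathrm{Poisson}(\lambda t),
\qquad
T_k - T_{k-1} \;\overset{\mathrm{iid}}{\sim}\; \mathrm{Exp}(\lambda),
\qquad
T_k \sim \mathrm{Gamma}(k, \lambda),
\]

and, conditionally on N(t) = n, the n arrival times are distributed as the order statistics of n i.i.d. Uniform(0, t) variables, which is where the uniform, Beta, binomial, and multinomial connections enter.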


Archive | 2010

Normal Approximations and the Central Limit Theorem

Anirban DasGupta

Many of the special discrete and special continuous distributions that we have discussed can be well approximated by a normal distribution for suitable configurations of their underlying parameters. Typically, the normal approximation works well when the parameter values are such that the skewness of the distribution is small. For example, binomial distributions are well approximated by a normal distribution when n is large and p is not too small or too large. Gamma distributions are well approximated by a normal distribution when the shape parameter α is large. Whenever we see a certain phenomenon empirically all too often, we might expect that there is a unifying mathematical result there, and in this case indeed there is. The unifying mathematical result is one of the most important results in all of mathematics and is called the central limit theorem. The subject of central limit theorems is incredibly diverse. In this chapter, we present the basic or the canonical central limit theorem and its applications to certain problems with which we are already familiar. Among numerous excellent references on central limit theorems, we recommend Feller (1968, 1971) and Pitman (1992) for lucid expositions and examples. The subject of central limit theorems also has a really interesting history; we recommend Le Cam (1986) and Stigler (1986) in this area. Careful and comprehensive mathematical treatments are available in Hall (1992) and Bhattacharya and Rao (1986). For a diverse selection of examples, see DasGupta (2008).
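
The canonical result referred to here is, in its simplest i.i.d. form (stated for reference): if X1, X2, … are i.i.d. with mean μ and finite variance σ² > 0, and X̄n denotes the sample mean, then

\[
\frac{\sqrt{n}\,(\bar{X}_n - \mu)}{\sigma} \;\xrightarrow{\;d\;}\; N(0,1)
\quad \text{as } n \to \infty,
\]

which is what licenses the normal approximations to the binomial (n large, p not extreme) and to the Gamma (shape parameter α large) described above.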

Collaboration


Dive into Anirban DasGupta's collaborations.

Top Co-Authors

Lawrence D. Brown
University of Pennsylvania

T. Tony Cai
University of Pennsylvania

Brani Vidakovic
Georgia Institute of Technology

Mohan Delampady
Indian Statistical Institute