Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where James O. Berger is active.

Publication


Featured research published by James O. Berger.


Journal of the American Statistical Association | 1996

The Intrinsic Bayes Factor for Model Selection and Prediction

James O. Berger; Luis R. Pericchi

Abstract In the Bayesian approach to model selection or hypothesis testing with models or hypotheses of differing dimensions, it is typically not possible to utilize standard noninformative (or default) prior distributions. This has led Bayesians to use conventional proper prior distributions or crude approximations to Bayes factors. In this article we introduce a new criterion called the intrinsic Bayes factor, which is fully automatic in the sense of requiring only standard noninformative priors for its computation and yet seems to correspond to very reasonable actual Bayes factors. The criterion can be used for nested or nonnested models and for multiple model comparison and prediction. From another perspective, the development suggests a general definition of a “reference prior” for model comparison.
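Below is a minimal numerical sketch of the arithmetic intrinsic Bayes factor, not the paper's general construction: it compares M1: x_i ~ N(0, 1) against M2: x_i ~ N(mu, 1) with the default flat prior pi(mu) proportional to 1, for which a single observation serves as a minimal training sample. All model choices here are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def bf21_noninformative(x):
    """Bayes factor B^N_21 = m2(x)/m1(x) for M2: N(mu, 1) with flat prior on mu,
    versus M1: N(0, 1); m2 is obtained by integrating mu out analytically."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.sum((x - x.mean()) ** 2)            # sum of squares about the sample mean
    log_m2 = -0.5 * (n - 1) * np.log(2 * np.pi) - 0.5 * np.log(n) - 0.5 * s
    log_m1 = np.sum(norm.logpdf(x, loc=0.0, scale=1.0))
    return np.exp(log_m2 - log_m1)

def arithmetic_ibf21(x):
    """Arithmetic intrinsic Bayes factor B^AI_21: the noninformative Bayes factor
    times the average of B^N_12 over minimal training samples. Here each single
    observation x_l is a minimal training sample, and B^N_12(x_l) = phi(x_l) / 1."""
    x = np.asarray(x, dtype=float)
    correction = np.mean(norm.pdf(x))          # average of B^N_12 over training samples
    return bf21_noninformative(x) * correction

rng = np.random.default_rng(0)
x = rng.normal(loc=0.5, scale=1.0, size=20)    # hypothetical data generated away from the null
print("B^N_21 :", bf21_noninformative(x))
print("B^AI_21:", arithmetic_ibf21(x))
```

The correction factor built from the training samples is what converts the arbitrary constant in the improper-prior Bayes factor into something that behaves like an actual Bayes factor.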


Journal of the American Statistical Association | 1987

Testing a Point Null Hypothesis: The Irreconcilability of P Values and Evidence

James O. Berger; Thomas Sellke

Abstract The problem of testing a point null hypothesis (or a “small interval” null hypothesis) is considered. Of interest is the relationship between the P value (or observed significance level) and conditional and Bayesian measures of evidence against the null hypothesis. Although one might presume that a small P value indicates the presence of strong evidence against the null, such is not necessarily the case. Expanding on earlier work [especially Edwards, Lindman, and Savage (1963) and Dickey (1977)], it is shown that actual evidence against a null (as measured, say, by posterior probability or comparative likelihood) can differ by an order of magnitude from the P value. For instance, data that yield a P value of .05, when testing a normal mean, result in a posterior probability of the null of at least .30 for any objective prior distribution. (“Objective” here means that equal prior weight is given the two hypotheses and that the prior is symmetric and nonincreasing away from the null; other definiti...
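A rough numerical companion to that claim, stated as a sketch under assumptions rather than the paper's derivation: for a two-sided test of a normal mean, priors that are symmetric and nonincreasing away from the null are mixtures of symmetric uniform densities, so the lower bound on P(H0 | data) over that class (with equal prior weight on the two hypotheses) can be found by searching over uniform priors on [-k, k].

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def post_prob_null(z, k):
    """P(H0 | z) with equal prior weights, when the prior on the standardized mean
    under H1 is Uniform(-k, k) and z is the observed test statistic."""
    m1 = (norm.cdf(z + k) - norm.cdf(z - k)) / (2 * k)   # marginal density of z under H1
    bf01 = norm.pdf(z) / m1                              # Bayes factor in favor of H0
    return bf01 / (1.0 + bf01)

z = 1.96   # two-sided p-value of about 0.05
# Uniform priors are the extreme points of the symmetric, nonincreasing class,
# so minimizing over k gives the lower bound over the whole class.
res = minimize_scalar(lambda k: post_prob_null(z, k), bounds=(1e-3, 50), method="bounded")
print("lower bound on P(H0 | data):", round(res.fun, 3))
```

For z = 1.96 this search returns roughly 0.29, the same order as the .30 figure quoted above and far larger than the p value of .05 itself.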


Journal of the American Statistical Association | 2008

Mixtures of g Priors for Bayesian Variable Selection

Feng Liang; Rui Paulo; German Molina; Merlise A. Clyde; James O. Berger

Zellner's g prior remains a popular conventional prior for use in Bayesian variable selection, despite several undesirable consistency issues. In this article we study mixtures of g priors as an alternative to default g priors that resolve many of the problems with the original formulation while maintaining the computational tractability that has made the g prior so popular. We present theoretical properties of the mixture g priors and provide real and simulated examples to compare the mixture formulation with fixed g priors, empirical Bayes approaches, and other default procedures. Please see Arnold Zellner's letter and the authors' response.
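A brief sketch of the objects being compared, under standard assumptions not spelled out in the abstract: a Gaussian linear model with an intercept, the null-based Bayes factor for a fixed g, and, as one example of a mixture, a hyper-g style density pi(g) = ((a - 2)/2) * (1 + g)^(-a/2) on g > 0.

```python
import numpy as np
from scipy.integrate import quad

def log_bf_g(g, n, p, r2):
    """Log of the null-based Bayes factor under Zellner's g prior for a model
    with p predictors and coefficient of determination r2, sample size n."""
    return 0.5 * (n - 1 - p) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2))

def bf_mixture_g(n, p, r2, a=3.0):
    """Bayes factor under the mixture pi(g) = (a-2)/2 * (1+g)^(-a/2),
    computed by one-dimensional numerical integration over g."""
    integrand = lambda g: np.exp(log_bf_g(g, n, p, r2)) * (a - 2.0) / 2.0 * (1.0 + g) ** (-a / 2.0)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

# Hypothetical summary statistics for illustration only.
n, p, r2 = 50, 3, 0.40
print("fixed g = n (unit information):", np.exp(log_bf_g(g=n, n=n, p=p, r2=r2)))
print("mixture of g priors (a = 3):   ", bf_mixture_g(n, p, r2))
```

The point of the mixture is that g is integrated out rather than fixed, which is what avoids the consistency problems of a single default g while keeping the computation to a one-dimensional integral.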


International Statistical Review | 1988

Statistical decision theory and related topics IV

Shanti S. Gupta; James O. Berger

1 - Selection, Ranking, and Multiple Comparisons: Sequential Selection Procedures for Multi-Factor Experiments Involving Koopman-Darmois Populations with Additivity; Selection Problem for a Modified Multinomial (Voting) Model; A Decision Theory Formulation for Population Selection Followed by Estimating the Mean of the Selected Population; On the Problem of Finding the Largest Normal Mean under Heteroscedasticity; On Least Favorable Configurations for Some Poisson Selection Rules and Some Conditional Tests; Selection of the Best Normal Populations Better Than a Control: Dependence Case; Inference about the Change-Point in a Sequence of Random Variables: A Selection Approach; On Confidence Sets in Multiple Comparisons.
2 - Asymptotic and Sequential Analysis: The VPRT: Optimal Sequential and Nonsequential Testing; An Edgeworth Expansion for the Distribution of the F-Ratio under a Randomization Model for the Randomized Block Design; On Bayes Sequential Tests; Stochastic Search in a Square and on a Torus; Distinguished Statistics, Loss of Information and a Theorem of Robert B. Davies; Prophet Inequalities for Threshold Rules for Independent Bounded Random Variables; Weak Convergence of the Aalen Estimator for a Censored Renewal Process; Sequential Stein-Rule Maximum Likelihood Estimation: General Asymptotics; Fixed Proportional Accuracy in Three Stages.
3 - Estimation and Testing: Dominating Inadmissible Tests in Exponential Family Models; On Estimating Change Point in a Failure Rate; A Nonparametric, Intersection-Union Test for Stochastic Order; On Estimating the Number of Unseen Species and System Reliability; The Effects of Variance Function Estimation on Prediction and Calibration: An Example; On Estimating a Parameter and Its Score Function, II; A Simple Test for the Equality of Correlation Matrices; Conditions of Rao's Covariance Method Type for Set-Valued Estimators; Conservation of Properties of Optimality of Some Statistical Tests and Point Estimators under Extensions of Distributions; Some Recent Results in Signal Detection.
4 - Design and Comparison of Experiments and Distributions: Comparison of Experiments and Information in Censored Data; A Note on Approximate D-Optimal Designs for G x 2m; Some Statistical Design Aspects of Estimating Automotive Emission Deterioration Factors; Peakedness in Multivariate Distributions; Spatial Designs.


Archive | 1980

Statistical Decision Theory

James O. Berger

Decision theory is the science of making optimal decisions in the face of uncertainty. Statistical decision theory is concerned with making decisions in the presence of statistical knowledge (data) that sheds light on some of the uncertainties involved in the decision problem. The generality of these definitions is such that decision theory (dropping the qualifier ‘statistical’ for convenience) formally encompasses an enormous range of problems and disciplines. Any attempt at a general review of decision theory is thus doomed; all that can be done is to present a description of some of the underlying ideas.


Annals of Statistics | 2004

Optimal predictive model selection

Maria Maddalena Barbieri; James O. Berger

Often the goal of model selection is to choose a model for future prediction, and it is natural to measure the accuracy of a future prediction by squared error loss. Under the Bayesian approach, it is commonly perceived that the optimal predictive model is the model with highest posterior probability, but this is not necessarily the case. In this paper we show that, for selection among normal linear models, the optimal predictive model is often the median probability model, which is defined as the model consisting of those variables which have overall posterior probability greater than or equal to 1/2 of being in a model. The median probability model often differs from the highest probability model.
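A toy illustration with hypothetical posterior model probabilities, showing how the median probability model is assembled from posterior inclusion probabilities and how it can differ from the highest probability model.

```python
# Hypothetical posterior model probabilities over two candidate predictors.
posterior = {
    ("x1",):       0.35,   # highest-probability model
    ("x2",):       0.33,
    ("x1", "x2"):  0.32,
}
variables = ["x1", "x2"]

# Posterior inclusion probability of a variable: total posterior probability
# of the models that contain it.
inclusion = {v: sum(p for model, p in posterior.items() if v in model)
             for v in variables}

# Median probability model: the variables with inclusion probability >= 1/2.
median_model = tuple(v for v in variables if inclusion[v] >= 0.5)

# Highest posterior probability model, for comparison.
highest_model = max(posterior, key=posterior.get)

print("inclusion probabilities  :", inclusion)       # x1: 0.67, x2: 0.65
print("median probability model :", median_model)    # ('x1', 'x2')
print("highest probability model:", highest_model)   # ('x1',)
```

Here each variable appears in models carrying more than half of the posterior mass, so the median probability model includes both, even though the single most probable model contains only x1.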


The American Statistician | 2001

Calibration of p Values for Testing Precise Null Hypotheses

Thomas Sellke; M. J. Bayarri; James O. Berger

P values are the most commonly used tool to measure evidence against a hypothesis or hypothesized model. Unfortunately, they are often incorrectly viewed as an error probability for rejection of the hypothesis or, even worse, as the posterior probability that the hypothesis is true. The fact that these interpretations can be completely misleading when testing precise hypotheses is first reviewed, through consideration of two revealing simulations. Then two calibrations of a p value are developed, the first being interpretable as odds and the second as either a (conditional) frequentist error probability or as the posterior probability of the hypothesis.
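The calibrations have simple closed forms as commonly stated: an odds bound of -e * p * log(p) for p < 1/e, and the corresponding conditional error probability. A short sketch:

```python
import math

def bayes_factor_bound(p):
    """Odds calibration: lower bound -e * p * log(p) on the Bayes factor in
    favor of the null, valid for 0 < p < 1/e."""
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("calibration applies for 0 < p < 1/e")
    return -math.e * p * math.log(p)

def conditional_error_prob(p):
    """Error-probability calibration: alpha(p) = (1 + [-e p log p]^(-1))^(-1),
    interpretable as a conditional frequentist error probability, or as the
    posterior probability of the null under equal prior odds."""
    b = bayes_factor_bound(p)
    return 1.0 / (1.0 + 1.0 / b)

for p in (0.05, 0.01, 0.001):
    print(f"p = {p:<6} odds bound = {bayes_factor_bound(p):.3f}  "
          f"error probability >= {conditional_error_prob(p):.3f}")
```

For p = .05 the odds bound is about 0.41 and the error probability about 0.29; for p = .01 they are about 0.13 and 0.11, far larger than the p values themselves.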


Technometrics | 2007

A Framework for Validation of Computer Models.

M. J. Bayarri; James O. Berger; Rui Paulo; Jerry Sacks; John A. Cafeo; James C. Cavendish; Chin-Hsu Lin; Jian Tu

We present a framework that enables computer model evaluation oriented toward answering the question: Does the computer model adequately represent reality? The proposed validation framework is a six-step procedure based on Bayesian and likelihood methodology. The Bayesian methodology is particularly well suited to treating the major issues associated with the validation process: quantifying multiple sources of error and uncertainty in computer models, combining multiple sources of information, and updating validation assessments as new information is acquired. Moreover, it allows inferential statements to be made about predictive error associated with model predictions in untested situations. The framework is implemented in a test bed example of resistance spot welding, to provide context for each of the six steps in the proposed validation process.


Bayesian Analysis | 2006

The case for objective Bayesian analysis

James O. Berger

Bayesian statistical practice makes extensive use of versions of objective Bayesian analysis. We discuss why this is so, and address some of the criticisms that have been raised concerning objective Bayesian analysis. The dangers of treating the issue too casually are also considered. In particular, we suggest that the statistical community should accept formal objective Bayesian techniques with confidence, but should be more cautious about casual objective Bayesian techniques.


Test | 1994

An overview of robust Bayesian analysis

James O. Berger; Elías Moreno; Luis R. Pericchi; M. Jesús Bayarri; José M. Bernardo; Juan Antonio Cano; Julián de la Horra; Jacinto Martín; David Ríos-Insúa; Bruno Betrò; Anirban DasGupta; Paul Gustafson; Larry Wasserman; Joseph B. Kadane; Cid Srinivasan; Michael Lavine; Anthony O’Hagan; Wolfgang Polasek; Christian P. Robert; Constantinos Goutis; Fabrizio Ruggeri; Gabriella Salinetti; Siva Sivaganesan

Summary Robust Bayesian analysis is the study of the sensitivity of Bayesian answers to uncertain inputs. This paper seeks to provide an overview of the subject, one that is accessible to statisticians outside the field. Recent developments in the area are also reviewed, though with very uneven emphasis.
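A toy sketch of the basic idea, not taken from the paper: vary the prior inputs over a simple class and report the range of a posterior quantity. The data, prior class, and ranges below are illustrative assumptions.

```python
import numpy as np

# Data: one observation x with known sampling standard deviation sigma.
x, sigma = 2.0, 1.0

# A simple class of priors: Normal(mu0, tau^2), with mu0 and tau varying over
# ranges that express uncertainty about the prior inputs.
prior_means = np.linspace(-1.0, 1.0, 41)
prior_sds   = np.linspace(1.0, 3.0, 41)

post_means = []
for mu0 in prior_means:
    for tau in prior_sds:
        w = tau**2 / (tau**2 + sigma**2)     # weight on the data under conjugacy
        post_means.append(w * x + (1 - w) * mu0)

print("range of posterior means over the prior class:",
      (round(min(post_means), 3), round(max(post_means), 3)))
```

A wide range signals that the Bayesian answer is sensitive to the uncertain prior inputs; a narrow range signals robustness.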

Collaboration


Dive into James O. Berger's collaboration.

Top Co-Authors

Dongchu Sun
University of Missouri

A. P. Dawid
University College London

Rui Paulo
University of Bristol