Publication


Featured research published by M. J. Bayarri.


Technometrics | 2007

A Framework for Validation of Computer Models.

M. J. Bayarri; James O. Berger; Rui Paulo; Jerry Sacks; John A. Cafeo; James C. Cavendish; Chin-Hsu Lin; Jian Tu

We present a framework that enables computer model evaluation oriented toward answering the question: Does the computer model adequately represent reality? The proposed validation framework is a six-step procedure based on Bayesian and likelihood methodology. The Bayesian methodology is particularly well suited to treating the major issues associated with the validation process: quantifying multiple sources of error and uncertainty in computer models, combining multiple sources of information, and updating validation assessments as new information is acquired. Moreover, it allows inferential statements to be made about predictive error associated with model predictions in untested situations. The framework is implemented in a test bed example of resistance spot welding, to provide context for each of the six steps in the proposed validation process.
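
A convenient way to see the statistical core of such a framework is the Kennedy and O'Hagan style formulation that this line of work builds on; the notation below is a sketch chosen for illustration, not taken from the abstract:

$$y^F(x) = y^R(x) + \varepsilon(x), \qquad y^R(x) = y^M(x, \theta^*) + b(x),$$

where $y^F(x)$ is a field measurement at controllable inputs $x$, $y^M(x,\theta)$ is the computer model output at calibration parameters $\theta$, $b(\cdot)$ is a model discrepancy (bias) function, and $\varepsilon$ is measurement error. Gaussian process priors are typically placed on the (emulated) model output and on $b$, so that Bayes' theorem delivers joint posterior uncertainty about $\theta^*$, the bias, and predictions of reality $y^R$ at untested inputs.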


Journal of the American Statistical Association | 2000

P Values for Composite Null Models

M. J. Bayarri; James O. Berger

The problem of investigating the compatibility of an assumed model with the data is considered in the situation where the assumed model has unknown parameters. The most frequently used measures of compatibility are p values, based on statistics T for which large values are deemed to indicate incompatibility of the data and the model. When the null model has unknown parameters, p values are not uniquely defined. The proposals for computing a p value in such a situation include the plug-in and similar p values on the frequentist side, and the predictive and posterior predictive p values on the Bayesian side. We propose two alternatives, the conditional predictive p value and the partial posterior predictive p value, and indicate their advantages from both Bayesian and frequentist perspectives.
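
To make the competing proposals concrete (the notation here is ours, for illustration), let $T$ be the chosen statistic with observed value $t_{\mathrm{obs}}$ and let $f(x \mid \theta)$ denote the null model. Then, roughly,

$$p_{\mathrm{plug}} = \Pr\!\big(T \ge t_{\mathrm{obs}} \mid \hat{\theta}\big), \qquad p_{\mathrm{post}} = \int \Pr(T \ge t_{\mathrm{obs}} \mid \theta)\, \pi(\theta \mid x_{\mathrm{obs}})\, d\theta,$$

while the partial posterior predictive p value averages instead over $\pi(\theta \mid x_{\mathrm{obs}} \setminus t_{\mathrm{obs}}) \propto \dfrac{f(x_{\mathrm{obs}} \mid \theta)}{f(t_{\mathrm{obs}} \mid \theta)}\, \pi(\theta)$, which removes the information carried by $t_{\mathrm{obs}}$ from the posterior and so avoids using the data twice; the conditional predictive p value achieves the same end by conditioning on a suitable statistic $U$.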


Annals of Statistics | 2007

Computer model validation with functional output

M. J. Bayarri; James O. Berger; John A. Cafeo; Gonzalo Garcia-Donato; F. Liu; J. Palomo; R. J. Parthasarathy; Rui Paulo; Jerry Sacks; Daniel Walsh

A key question in the evaluation of computer models is: Does the computer model adequately represent reality? A six-step process for computer model validation is set out in Bayarri et al. [Technometrics 49 (2007) 138-154] (and briefly summarized below), based on comparison of computer model runs with field data of the process being modeled. The methodology is particularly suited to treating the major issues associated with the validation process: quantifying multiple sources of error and uncertainty in computer models; combining multiple sources of information; and being able to adapt to different, but related scenarios. Two complications that frequently arise in practice are the need to deal with highly irregular functional data and the need to acknowledge and incorporate uncertainty in the inputs. We develop methodology to deal with both complications. A key part of the approach utilizes a wavelet representation of the functional data, applies a hierarchical version of the scalar validation methodology to the wavelet coefficients, and transforms back, to ultimately compare computer model output with field output. The generality of the methodology is limited only by the capability of a combination of computational tools and the appropriateness of decompositions of the sort (wavelets) employed here. The methods and analyses we present are illustrated with a test bed dynamic stress analysis for a particular engineering system.
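
As a rough illustration of the wavelet step described above: the helper below and all settings are hypothetical, and the real methodology places a hierarchical statistical model on the coefficients rather than comparing them directly. A minimal Python sketch using PyWavelets might look like:

```python
# Minimal sketch of the wavelet step (illustrative only): decompose field and
# model output curves into wavelet coefficients, then work coefficient by
# coefficient instead of point by point. Assumes PyWavelets (pywt) and NumPy;
# the function name, wavelet choice, and toy curves are hypothetical.
import numpy as np
import pywt

def wavelet_coeffs(curve, wavelet="db4", level=4):
    """Flatten a multilevel wavelet decomposition of one output curve."""
    coeffs = pywt.wavedec(curve, wavelet, level=level)
    return np.concatenate(coeffs)

# toy functional outputs on a common time grid
t = np.linspace(0.0, 1.0, 256)
field_run = np.sin(8 * np.pi * t) + 0.1 * np.random.randn(t.size)
model_run = np.sin(8 * np.pi * t) * 0.9

w_field = wavelet_coeffs(field_run)
w_model = wavelet_coeffs(model_run)

# stand-in for the scalar methodology: per-coefficient discrepancies, which the
# full analysis would feed into a hierarchical model of bias and error
discrepancy = w_field - w_model
print("largest coefficient discrepancies:", np.sort(np.abs(discrepancy))[-5:])
```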


Annals of Statistics | 2012

Criteria for Bayesian model choice with application to variable selection

M. J. Bayarri; James O. Berger; A. Forte; Gonzalo Garcia-Donato

In objective Bayesian model selection, no single criterion has emerged as dominant in defining objective prior distributions. Indeed, many criteria have been separately proposed and used to justify differing prior choices. We first formalize the most general and compelling of the various criteria that have been suggested, together with a new criterion. We then illustrate the potential of these criteria in determining objective model selection priors by considering their application to the problem of variable selection in normal linear models. This results in a new model selection objective prior with a number of compelling properties.
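
As a sketch of the variable-selection setting (our notation, not the paper's): each candidate model $M_\gamma: y = X_\gamma \beta_\gamma + \varepsilon$, $\varepsilon \sim N(0, \sigma^2 I)$, is scored by its marginal likelihood $m_\gamma(y) = \int f(y \mid \beta_\gamma, \sigma)\, \pi_\gamma(\beta_\gamma, \sigma)\, d\beta_\gamma\, d\sigma$, and models are compared through $\Pr(M_\gamma \mid y) \propto \Pr(M_\gamma)\, m_\gamma(y)$. The criteria discussed in the paper constrain how the objective priors $\pi_\gamma$ may be chosen, leading to a heavy-tailed prior on the model-specific coefficients.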


Bayesian Analysis | 2009

Modularization in Bayesian Analysis, with Emphasis on Analysis of Computer Models

F. Liu; M. J. Bayarri; James O. Berger

Bayesian analysis incorporates different sources of information into a single analysis through Bayes theorem. When one or more of the sources of information are suspect (e.g., if the model assumed for the information is viewed as quite possibly being significantly flawed), there can be a concern that Bayes theorem allows this suspect information to overly influence the other sources of information. We consider a variety of situations in which this arises, and give methodological suggestions for dealing with the problem. After consideration of some pedagogical examples of the phenomenon, we focus on the interface of statistics and the development of complex computer models of processes. Three testbed computer models are considered, in which this type of issue arises.
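
A minimal numerical sketch of the modularization ("cutting feedback") idea follows, using toy conjugate-normal modules that are not from the paper: the suspect second module is prevented from updating a parameter that is already well estimated by trusted data.

```python
# Minimal sketch (toy example, not from the paper) of cutting feedback:
# phi is well estimated by reliable data y1; a suspect second module relates
# y2 to phi through theta but is NOT allowed to feed back into phi.
import numpy as np

rng = np.random.default_rng(0)

# module 1 (trusted): y1_i ~ N(phi, 1), prior phi ~ N(0, 100)
y1 = rng.normal(loc=2.0, scale=1.0, size=50)
# module 2 (suspect, possibly misspecified): y2_j ~ N(phi + theta, 1)
y2 = rng.normal(loc=5.0, scale=1.0, size=20)

# posterior for phi using ONLY module 1
v1 = 1.0 / (1.0 / 100.0 + len(y1))      # posterior variance of phi
m1 = v1 * y1.sum()                       # posterior mean of phi

# cut analysis: draw phi from the module-1 posterior, then draw theta from its
# conditional posterior given y2 and that phi (prior theta ~ N(0, 100))
phi_draws = rng.normal(m1, np.sqrt(v1), size=5000)
v2 = 1.0 / (1.0 / 100.0 + len(y2))
theta_draws = rng.normal(v2 * (y2.sum() - len(y2) * phi_draws), np.sqrt(v2))

print("cut posterior mean of theta:", theta_draws.mean())
# in a full Bayes analysis, y2 would also update phi; that feedback is exactly
# what a modularized ("cut") analysis is designed to prevent
```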


Technometrics | 2009

Using Statistical and Computer Models to Quantify Volcanic Hazards

M. J. Bayarri; James O. Berger; Eliza S. Calder; Keith Dalbey; Simon Lunagomez; Abani K. Patra; E. Bruce Pitman; Elaine T. Spiller; Robert L. Wolpert

Risk assessment of rare natural hazards, such as large volcanic block and ash or pyroclastic flows, is addressed. Assessment is approached through a combination of computer modeling, statistical modeling, and extreme-event probability computation. A computer model of the natural hazard is used to provide the needed extrapolation to unseen parts of the hazard space. Statistical modeling of the available data is needed to determine the initializing distribution for exercising the computer model. In dealing with rare events, direct simulations involving the computer model are prohibitively expensive. The solution instead requires a combination of adaptive design of computer model approximations (emulators) and rare event simulation. The techniques that are developed for risk assessment are illustrated on a test-bed example involving volcanic flow.
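
A minimal sketch of the emulator-plus-Monte-Carlo idea is given below, assuming scikit-learn for the Gaussian process; the toy simulator, input distributions, and threshold are hypothetical, and the paper's adaptive design and rare-event simulation machinery is not reproduced here.

```python
# Illustrative only: fit a Gaussian process emulator to a handful of computer
# model runs, then Monte Carlo over a statistical model of the inputs to
# estimate an exceedance probability. All functions and numbers are toy.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def simulator(volume, angle):
    """Cheap stand-in for an expensive flow simulator: returns a 'flow depth'."""
    return 0.002 * volume * np.exp(-0.1 * angle)

# a small design of simulator runs (in practice chosen adaptively)
X = np.column_stack([rng.uniform(1e2, 1e4, 30), rng.uniform(5, 30, 30)])
y = simulator(X[:, 0], X[:, 1])

emulator = GaussianProcessRegressor(kernel=RBF(length_scale=[1e3, 10.0]),
                                    normalize_y=True).fit(X, y)

# statistical model of the inputs (e.g. fitted to historical data), then plain
# Monte Carlo over the cheap emulator to estimate the exceedance probability
inputs = np.column_stack([rng.lognormal(7.0, 1.0, 100_000),
                          rng.uniform(5, 30, 100_000)])
depth = emulator.predict(inputs)
print("estimated P(depth > 10):", np.mean(depth > 10.0))
```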


Queueing Systems | 1994

Bayesian prediction in M/M/1 queues

Carmen Armero; M. J. Bayarri

Simple queues with Poisson input and exponential service times are considered to illustrate how well suited Bayesian methods are to handling the common inferential aims that arise in queueing problems. The emphasis is mainly placed on prediction; in particular, we study the predictive distribution of the usual measures of effectiveness in an M/M/1 queueing system, such as the number of customers in the queue and in the system, the waiting time in the queue and in the system, the length of an idle period, and the length of a busy period.
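
One way to write the central predictive quantity (a sketch in our notation, not the paper's closed-form results): with a posterior $\pi(\rho \mid \text{data})$ for the traffic intensity $\rho = \lambda/\mu$, restricted to the ergodic region $\rho < 1$, the steady-state number in the system is geometric given $\rho$, so its Bayesian predictive distribution is

$$\Pr(N = n \mid \text{data}) = \int_0^1 (1 - \rho)\,\rho^{\,n}\, \pi(\rho \mid \text{data})\, d\rho, \qquad n = 0, 1, 2, \ldots$$

Predictive distributions for waiting times and for idle and busy periods follow in the same way, by averaging the corresponding conditional distributions over the posterior.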


Archive | 2011

Bayesian Statistics 9

José M. Bernardo; M. J. Bayarri; James O. Berger; A. P. Dawid; David Heckerman; A. F. M. Smith; Mike West

We thank all of the discussants for their valuable insights and elaborations. In particular, we thank Prof. Clarke and Dr. Severinski for their conjectured extension to Theorem 3, the product of many personal discussions both in Austin and in Spain (and probably many more hours of work in Miami). The conjecture seems quite likely to be true, and strikes us as a nice way of understanding adaptive penalty functions and infinite-dimensional versions of the corresponding shrinkage priors. Rather than respond to each of the six discussions in turn, we have grouped the comments into three rough categories.

I describe ongoing work on development of Bayesian methods for exploring periodically varying phenomena in astronomy, addressing two classes of sources: pulsars, and extrasolar planets (exoplanets). For pulsars, the methods aim to detect and measure periodically varying signals in data consisting of photon arrival times, modeled as non-homogeneous Poisson point processes. For exoplanets, the methods address detection and estimation of planetary orbits using observations of the reflex motion “wobble” of a host star, including adaptive scheduling of observations to optimize inferences.


Journal of Mathematical Psychology | 2016

Rejection Odds and Rejection Ratios: A Proposal for Statistical Practice in Testing Hypotheses

M. J. Bayarri; Daniel J. Benjamin; James O. Berger; Thomas Sellke

Much of science is (rightly or wrongly) driven by hypothesis testing. Even in situations where the hypothesis testing paradigm is correct, the common practice of basing inferences solely on p-values has been under intense criticism for over 50 years. We propose, as an alternative, the use of the odds of a correct rejection of the null hypothesis to incorrect rejection. Both pre-experimental versions (involving the power and Type I error) and post-experimental versions (depending on the actual data) are considered. Implementations are provided that range from depending only on the p-value to consideration of full Bayesian analysis. A surprise is that all implementations – even the full Bayesian analysis – have complete frequentist justification. Versions of our proposal can be implemented that require only minor modifications to existing practices yet overcome some of their most severe shortcomings.
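
In a nutshell (notation ours): pre-experimentally, with prior odds $\pi_1/\pi_0$ of the alternative to the null, Type I error $\alpha$ and power $1 - \beta$, the odds of a correct to an incorrect rejection are

$$R_{\mathrm{pre}} = \frac{\pi_1}{\pi_0} \cdot \frac{1 - \beta}{\alpha}.$$

Post-experimentally, the frequency ratio $(1 - \beta)/\alpha$ is replaced by the Bayes factor of the alternative to the null, and the simplest p-value-only implementation bounds that Bayes factor by $1 / (-e\, p \log p)$ for $p < 1/e$.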


Journal of Statistical Planning and Inference | 2003

Bayesian measures of surprise for outlier detection

M. J. Bayarri; J Morales

From a Bayesian point of view, testing whether an observation is an outlier is usually reduced to a testing problem concerning a parameter of a contaminating distribution. This requires elicitation of both (i) the contaminating distribution that generates the outlier and (ii) prior distributions on its parameters. However, very little information is typically available about how the possible outlier could have been generated. Thus easy, preliminary checks in which these assessments can often be avoided may prove useful. Several such measures of surprise are derived for outlier detection in normal models. Results are applied to several examples. Default Bayes factors, where the contaminating model is assessed but not the prior distribution, are also computed.

Collaboration


Dive into M. J. Bayarri's collaborations.

Top Co-Authors

Jerry Sacks

Research Triangle Park

Morris H. DeGroot

Carnegie Mellon University

Rui Paulo

University of Bristol
