Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michael D. McKay is active.

Publication


Featured research published by Michael D. McKay.


Technometrics | 2000

A comparison of three methods for selecting values of input variables in the analysis of output from a computer code

Michael D. McKay; Richard J. Beckman; W. J. Conover

Two types of sampling plans are examined as alternatives to simple random sampling in Monte Carlo studies. These plans are shown to be improvements over simple random sampling with respect to variance for a class of estimators which includes the sample mean and the empirical distribution function.
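
Latin hypercube sampling, introduced in this paper, is the best known of the compared plans: each input's range is cut into equal-probability strata, one value is drawn per stratum, and the strata are paired across inputs by random permutation. Below is a minimal Python sketch of the idea; the function name and the toy test function are illustrative, not from the paper.

```python
import numpy as np

def latin_hypercube(n_samples, n_inputs, rng=None):
    """Latin hypercube sample on the unit hypercube [0, 1)^n_inputs.

    Each input's range is split into n_samples equal-probability strata;
    exactly one point falls in each stratum per input, and the strata are
    paired across inputs by independent random permutations.
    """
    rng = np.random.default_rng(rng)
    u = rng.random((n_samples, n_inputs))                  # one draw inside each stratum
    points = (np.arange(n_samples)[:, None] + u) / n_samples
    for j in range(n_inputs):                              # decouple the columns
        points[:, j] = rng.permutation(points[:, j])
    return points

# Toy comparison of sample-mean variance under LHS vs. simple random sampling.
f = lambda x: np.sum(x ** 2, axis=1)                       # stand-in for a computer code
rng = np.random.default_rng(0)
lhs_means = [f(latin_hypercube(50, 3, rng)).mean() for _ in range(500)]
srs_means = [f(rng.random((50, 3))).mean() for _ in range(500)]
print(np.var(lhs_means), np.var(srs_means))                # LHS variance is typically smaller
```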


Transportation Research Part A: Policy and Practice | 1996

Creating synthetic baseline populations

Richard J. Beckman; Keith A. Baggerly; Michael D. McKay

To develop activity-based travel models using microsimulation, individual travelers and households must be considered. Methods for creating baseline synthetic populations of households and persons using 1990 census data are given. Summary tables from the Census Bureau STF-3A are used in conjunction with the Public Use Microdata Sample (PUMS), and Iterative Proportional Fitting (IPF) is applied to estimate the proportion of households in a block group or census tract with a desired combination of demographics. Households are generated by selecting households from the associated PUMS according to these proportions. The tables of demographic proportions which are exploited here to make household selections from the PUMS may also be used in traditional modeling. The procedures are validated by creating pseudo census tracts from PUMS samples and considering the joint distribution of household size and the number of vehicles in the household. It is shown that the joint distributions created by these methods do not differ substantially from the true values. Additionally, the effects of small changes in the procedure, such as imputation of additional demographics and the addition of partial counts to the constructed demographic tables, are discussed.
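
The core fitting step is iterative proportional fitting. The sketch below shows a simple two-way version in Python; the function name, the seed table, and the margins are hypothetical (the real procedure fits multi-way demographic tables).

```python
import numpy as np

def ipf(seed, row_margins, col_margins, tol=1e-6, max_iter=100):
    """Two-way iterative proportional fitting.

    Rescales a seed table (e.g. joint counts observed in the PUMS) so its
    row and column sums match target margins (e.g. STF-3A summary tables),
    while preserving the interaction structure of the seed.
    """
    table = seed.astype(float).copy()
    for _ in range(max_iter):
        table *= (row_margins / table.sum(axis=1))[:, None]   # match row totals
        table *= (col_margins / table.sum(axis=0))[None, :]   # match column totals
        if np.allclose(table.sum(axis=1), row_margins, rtol=tol):
            break
    return table

# Hypothetical example: household size (rows) by vehicle count (columns).
seed = np.array([[20.0, 10.0,  5.0],
                 [15.0, 25.0, 10.0],
                 [ 5.0, 15.0, 20.0]])
fitted = ipf(seed,
             row_margins=np.array([40.0, 60.0, 50.0]),
             col_margins=np.array([30.0, 70.0, 50.0]))
print(fitted.round(1))            # fitted joint counts consistent with both margins
```

Households for a tract would then be drawn from the PUMS in proportion to the fitted cell values.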


Computer Physics Communications | 1999

Evaluating prediction uncertainty in simulation models

Michael D. McKay; John Morrison; Stephen C. Upton

Input values are a source of uncertainty for model predictions. When input uncertainty is characterized by a probability distribution, prediction uncertainty is characterized by the induced prediction distribution. Comparison of a model predictor based on a subset of model inputs to the full model predictor leads to a natural decomposition of the prediction variance and the correlation ratio as a measure of importance. Because the variance decomposition does not depend on assumptions about the form of the relation between inputs and output, the analysis can be called nonparametric. Variance components can be estimated through designed computer experiments.
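
The importance measure here is the correlation ratio, Var(E[Y | Xj]) / Var(Y), i.e. the fraction of prediction variance attributable to input Xj. A crude binning-based estimate is sketched below on a made-up model; it illustrates the quantity, not the paper's designed-experiment estimator.

```python
import numpy as np

def correlation_ratio(x, y, n_bins=20):
    """Crude estimate of eta^2 = Var(E[Y | X]) / Var(Y) by binning X.

    The fraction of the prediction variance of Y explained by the single
    input X: 0 means no first-order effect, 1 means Y is a function of X.
    """
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    weights = np.array([(idx == b).mean() for b in range(n_bins)])
    return np.sum(weights * (cond_means - y.mean()) ** 2) / y.var()

# Made-up model: y depends strongly on x1 and weakly on x2.
rng = np.random.default_rng(0)
x1, x2 = rng.random(10_000), rng.random(10_000)
y = 4.0 * x1 + 0.5 * x2 + rng.normal(scale=0.1, size=10_000)
print(correlation_ratio(x1, y), correlation_ratio(x2, y))   # large for x1, small for x2
```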


Reliability Engineering & System Safety | 2006

Sensitivity analysis when model outputs are functions

Katherine Campbell; Michael D. McKay; Brian J. Williams

When outputs of computational models are time series or functions of other continuous variables like distance, angle, etc., it can be that primary interest is in the general pattern or structure of the curve. In these cases, model sensitivity and uncertainty analysis focuses on the effect of model input choices and uncertainties in the overall shapes of such curves. We explore methods for characterizing a set of functions generated by a series of model runs for the purpose of exploring relationships between these functions and the model inputs.
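
One common way to characterize a set of output curves is to project them onto a few principal components and then relate the resulting scalar scores to the model inputs. The sketch below illustrates that general idea on a synthetic ensemble; it is not necessarily the method developed in the paper, and all names and the toy model are made up.

```python
import numpy as np

# Synthetic ensemble: each of 50 model runs yields a curve y(t) on a common grid,
# driven by two uncertain inputs (a, b).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
inputs = rng.random((50, 2))
curves = np.array([a * np.sin(2 * np.pi * t) + b * t for a, b in inputs])

# Characterize the curves by their leading principal components: each run is
# summarized by a few scores instead of 200 ordinates.
centered = curves - curves.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U[:, :2] * s[:2]                    # per-run scores on the first two PCs

# Relate the scalar scores to the model inputs, e.g. by simple correlation.
print(np.corrcoef(scores[:, 0], inputs[:, 0])[0, 1])
print(np.corrcoef(scores[:, 1], inputs[:, 1])[0, 1])
```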


Reliability Engineering & System Safety | 1997

Nonparametric variance-based methods of assessing uncertainty importance

Michael D. McKay

This paper examines the feasibility and value of using nonparametric variance-based methods to supplement parametric regression methods for uncertainty analysis of computer models. It shows from theoretical considerations how the usual linear regression methods are a particular case within the general framework of variance-based methods. Strengths and weaknesses of the methods are demonstrated analytically and numerically in an example. The paper shows that relaxation of linearity assumptions in nonparametric variance-based methods comes at the cost of additional computer runs.


Reliability Engineering & System Safety | 1995

Robustness of an uncertainty and sensitivity analysis of early exposure results with the MACCS reactor accident consequence model

Jon C. Helton; Jay D. Johnson; Michael D. McKay; A.W. Shiver; J.L. Sprung

Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis were used in an investigation with the MACCS model of the early health effects associated with a severe accident at a nuclear power station. The following results were obtained in tests to check the robustness of the analysis techniques: two independent Latin hypercube samples produced similar uncertainty and sensitivity analysis results; setting important variables to best-estimate values produced substantial reductions in uncertainty, while setting the less important variables to best-estimate values had little effect on uncertainty; similar sensitivity analysis results were obtained when the original uniform and loguniform distributions assigned to the 34 imprecisely known input variables were changed to left-triangular distributions and then to right-triangular distributions; and analyses with rank-transformed and logarithmically-transformed data produced similar results and substantially outperformed analyses with raw (i.e., untransformed) data.
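
For reference, a partial rank correlation coefficient of the kind used in such analyses can be computed as sketched below: rank-transform the inputs and the output, then correlate the residuals of each input and of the output after regressing out the other inputs. The inputs and model here are made up; this is not the MACCS study itself.

```python
import numpy as np
from scipy import stats

def prcc(X, y):
    """Partial rank correlation coefficient of each input column with the output.

    Rank-transform inputs and output, then correlate the residuals of input j
    and of the output after removing the linear effect of the other inputs.
    """
    Xr = np.apply_along_axis(stats.rankdata, 0, X)
    yr = stats.rankdata(y)
    n, p = Xr.shape
    coeffs = np.empty(p)
    for j in range(p):
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        res_x = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        res_y = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        coeffs[j] = np.corrcoef(res_x, res_y)[0, 1]
    return coeffs

# Made-up model with one dominant input, one minor input, and one inert input.
rng = np.random.default_rng(2)
X = rng.random((500, 3))
y = 5.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.2, size=500)
print(prcc(X, y).round(2))
```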


Technometrics | 1987

Monte Carlo estimation under different distributions using the same simulation

Richard J. Beckman; Michael D. McKay

Two methods for reducing the computer time necessary to investigate changes in distribution of random inputs to large simulation computer codes are presented. The first method produces unbiased estimators of functions of the output variable under the new distribution of the inputs. The second method generates a subset of the original outputs that has a distribution corresponding to the new distribution of inputs. Efficiencies of the two methods are examined.
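
A minimal sketch of the two ideas, assuming a one-dimensional input and a cheap stand-in for the simulation code (the distributions and functions below are illustrative, not the paper's exact estimators): the first reweights existing outputs by the density ratio of the new to the old input distribution; the second accepts a subset of runs whose inputs then approximately follow the new distribution.

```python
import numpy as np
from scipy import stats

# Existing runs: inputs drawn from the original distribution, outputs already computed.
rng = np.random.default_rng(3)
x = rng.normal(loc=0.0, scale=1.0, size=5_000)        # original input distribution N(0, 1)
y = x ** 2 + 1.0                                       # stand-in for expensive code output

old_pdf = stats.norm(0.0, 1.0).pdf
new_pdf = stats.norm(0.5, 1.0).pdf                     # new input distribution N(0.5, 1)
w = new_pdf(x) / old_pdf(x)                            # likelihood ratios

# Method 1: weighting the existing outputs gives an unbiased estimate of the
# mean output under the new input distribution, with no new simulation runs.
print("reweighted mean:", np.mean(w * y))

# Method 2: keep a subset of the original runs by rejection on the density
# ratio; the retained inputs approximately follow the new distribution
# (approximate here because the bound is taken from the sample).
keep = rng.random(x.size) < w / w.max()
print("subset mean:", y[keep].mean(), "| runs retained:", keep.sum())
```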


Technometrics | 2008

Using Orthogonal Arrays in the Sensitivity Analysis of Computer Models

Max D. Morris; Leslie M. Moore; Michael D. McKay

We consider a class of input sampling plans, called permuted column sampling plans, that are popular in sensitivity analysis of computer models. Permuted column plans, including replicated Latin hypercube sampling, support estimation of first-order sensitivity coefficients, but these estimates are biased when the usual practice of random column permutation is used to construct the sampling arrays. Deterministic column permutations may be used to eliminate this estimation bias. We prove that any permuted column sampling plan that eliminates estimation bias, using the smallest possible number of runs in each array and containing the largest possible number of arrays, can be characterized by an orthogonal array of strength 2. We derive approximate standard errors of the first-order sensitivity indices for this sampling plan. We give two examples demonstrating the sampling plan, the behavior of the estimates, and the standard errors, along with comparative results based on other approaches.
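
For reference, strength-2 orthogonal arrays of the kind used to define deterministic column permutations can be built with the standard Rao–Hamming construction over a prime number of levels. The sketch below is a generic illustration (the function name is ours), not the paper's specific plans.

```python
import numpy as np
from itertools import product

def oa_strength2(q):
    """Orthogonal array OA(q^2, q+1, q, 2) for a prime number of levels q.

    Rows are indexed by pairs (a, b) in {0,...,q-1}^2; the columns are b and
    a + k*b (mod q) for k = 0,...,q-1.  In any two columns, every ordered pair
    of levels occurs exactly once (strength 2).
    """
    return np.array([[b] + [(a + k * b) % q for k in range(q)]
                     for a, b in product(range(q), repeat=2)])

oa = oa_strength2(3)                     # OA(9, 4, 3, 2): 9 runs, 4 columns, 3 levels
print(oa)

# Check strength 2 for one pair of columns: the 9 rows give 9 distinct level pairs.
pairs = set(zip(oa[:, 0].tolist(), oa[:, 2].tolist()))
print(len(pairs))                        # 9
```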


Reliability Engineering & System Safety | 2006

Combined array experiment design

Leslie M. Moore; Michael D. McKay; Katherine Campbell

Experiment plans formed by combining two or more designs, such as orthogonal arrays primarily with 2- and 3-level factors, creating multi-level arrays with subsets of different strength, are proposed for computer experiments to conduct sensitivity analysis. Specific illustrations are designs for 5-level factors with fewer runs than generally required for 5-level orthogonal arrays of strength 2 or more. At least 5 levels for each input are desired to allow for runs at a nominal value, two values on either side of the nominal but within a normal, anticipated range, and two more extreme values on either side of the nominal. This number of levels allows a broader range of input combinations over which to exercise a simulation code. Five-level factors also allow the possibility of up to fourth-order polynomial models for fitting simulation results, at least in one dimension. By having subsets of runs with strength greater than 2, interaction effects may also be considered. The resulting designs have a “checker-board” pattern in lower-dimensional projections, in contrast to the grid projection that occurs with orthogonal arrays. Space-filling properties are also considered as a basis for experiment design assessment.


Bayesian Analysis | 2006

Combining experimental data and computer simulations, with an application to flyer plate experiments

Brian J. Williams; Dave Higdon; Jim Gattiker; Leslie M. Moore; Michael D. McKay; Sallie Keller-McNulty

Collaboration


Dive into Michael D. McKay's collaborations.

Top Co-Authors

Leslie M. Moore
Los Alamos National Laboratory

Richard J. Beckman
Los Alamos National Laboratory

Brian J. Williams
Los Alamos National Laboratory

Jon C. Helton
Arizona State University

Katherine Campbell
Los Alamos National Laboratory

A.W. Shiver
Sandia National Laboratories

Dave Higdon
Los Alamos National Laboratory

J.L. Sprung
Sandia National Laboratories

Jay D. Johnson
Science Applications International Corporation