Network

Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Max D. Morris is active.

Publication


Featured research published by Max D. Morris.


Journal of Statistical Planning and Inference | 1995

Exploratory designs for computational experiments

Max D. Morris; Toby J. Mitchell

Recent work by Johnson et al. (J. Statist. Plann. Inference 26 (1990) 131–148) establishes the equivalence of the maximin distance design criterion and an entropy criterion motivated by function prediction in a Bayesian setting. The latter criterion has been used by Currin et al. (J. Amer. Statist. Assoc. 86 (1991) 953–963) to design experiments for which the motivating application is approximation of a complex deterministic computer model. Because computer experiments often have a large number of controlled variables (inputs), maximin designs of moderate size are often concentrated in the corners of the cuboidal design region, i.e., each input is represented at only two levels. Here we examine some maximin distance designs constructed within the class of Latin hypercube arrangements. The goal is to find designs that offer a compromise between the entropy/maximin criterion and the good projective properties in each dimension that Latin hypercubes guarantee. A simulated annealing search algorithm for constructing these designs is presented, and patterns apparent in the optimal designs are discussed.
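
As a concrete illustration of the kind of search the abstract describes, here is a minimal Python sketch that anneals a random Latin hypercube toward a maximin design by swapping pairs of entries within a column. The move, cooling schedule, and parameter values are illustrative choices for this sketch, not the algorithm as specified in the paper.

```python
import numpy as np

def min_distance(X):
    """Smallest pairwise Euclidean distance among the design points."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return d[np.triu_indices_from(d, k=1)].min()

def maximin_lhs(n, k, iters=5000, t0=0.1, cool=0.999, seed=0):
    """Search for a maximin Latin hypercube with n runs in k inputs."""
    rng = np.random.default_rng(seed)
    # Start from a random Latin hypercube on [0, 1]^k (cell midpoints).
    X = (np.argsort(rng.random((n, k)), axis=0) + 0.5) / n
    best, score, t = X.copy(), min_distance(X), t0
    for _ in range(iters):
        Y = X.copy()
        j = rng.integers(k)                   # pick a column
        a, b = rng.choice(n, 2, replace=False)
        Y[[a, b], j] = Y[[b, a], j]           # swap two levels within the column
        new = min_distance(Y)
        # Always accept improvements; accept some worsenings early on.
        if new > score or rng.random() < np.exp((new - score) / t):
            X, score = Y, new
            if score > min_distance(best):
                best = X.copy()
        t *= cool
    return best

design = maximin_lhs(n=10, k=3)
print(round(min_distance(design), 3))
```

Swapping within a column preserves the Latin hypercube property, so the search never leaves the design class being optimized.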


Journal of the American Statistical Association | 1991

Bayesian Prediction of Deterministic Functions, with Applications to the Design and Analysis of Computer Experiments

Carla Currin; Toby J. Mitchell; Max D. Morris; Don Ylvisaker

This article is concerned with prediction of a function y(t) over a (multidimensional) domain T, given the function values at a set of “sites” {t(1), t(2), …, t(n)} in T, and with the design, that is, with the selection of those sites. The motivating application is the design and analysis of computer experiments, where t determines the input to a computer model of a physical or behavioral system, and y(t) is a response that is part of the output or is calculated from it. Following a Bayesian formulation, prior uncertainty about the function y is expressed by means of a random function Y, which is taken here to be a Gaussian stochastic process. The mean of the posterior process can be used as the prediction function ŷ(t), and the variance can be used as a measure of uncertainty. This kind of approach has been used previously in Bayesian interpolation and is strongly related to the kriging methods used in geostatistics. Here emphasis is placed on product linear and product cubic correlation functions…
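
A minimal sketch of this kind of predictor, assuming for brevity a product Gaussian correlation function rather than the product linear and product cubic correlations the article emphasizes; the prior mean is a constant estimated by generalized least squares, and the variance formula below omits the small correction for estimating that mean.

```python
import numpy as np

def corr(A, B, theta):
    """Product Gaussian correlation: prod_j exp(-theta_j (a_j - b_j)^2)."""
    d2 = (A[:, None, :] - B[None, :, :]) ** 2
    return np.exp(-(d2 * theta).sum(-1))

def gp_predict(X, y, Xnew, theta, sigma2=1.0):
    """Posterior mean and variance of Y(t) given exact observations y = y(X)."""
    R = corr(X, X, theta) + 1e-10 * np.eye(len(X))   # jitter for stability
    r = corr(Xnew, X, theta)
    Rinv = np.linalg.inv(R)
    one = np.ones(len(X))
    mu = (one @ Rinv @ y) / (one @ Rinv @ one)       # GLS constant mean
    mean = mu + r @ Rinv @ (y - mu)
    var = sigma2 * (1.0 - np.einsum('ij,jk,ik->i', r, Rinv, r))
    return mean, np.maximum(var, 0.0)

X = np.random.default_rng(1).random((8, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2               # stand-in "computer model"
m, v = gp_predict(X, y, X[:2], theta=np.array([5.0, 5.0]))
print(m, v)  # interpolates: mean equals y at design sites, variance near 0
```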


Technometrics | 1992

Screening, predicting, and computer experiments

William J. Welch; Robert J. Buck; Jerome Sacks; Henry P. Wynn; Toby J. Mitchell; Max D. Morris

Many scientific phenomena are now investigated by complex computer models or codes. Given the input values, the code produces one or more outputs via a complex mathematical model. Often the code is expensive to run, and it may be necessary to build a computationally cheaper predictor to enable, for example, optimization of the inputs. If there are many input factors, an initial step in building a predictor is identifying (screening) the active factors. We model the output of the computer code as the realization of a stochastic process. This model has a number of advantages. First, it provides a statistical basis, via the likelihood, for a stepwise algorithm to determine the important factors. Second, it is very flexible, allowing nonlinear and interaction effects to emerge without explicitly modeling such effects. Third, the same data are used for screening and building the predictor, so expensive runs are efficiently used. We illustrate the methodology with two examples, both having 20 input variables…
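
A hedged sketch of likelihood-based screening in this spirit: give each input its own correlation parameter θ_j, maximize the Gaussian-process likelihood, and treat inputs whose fitted θ_j is near zero as inert. The Gaussian correlation form, the simple centering of y, and the single joint fit (rather than the paper's stepwise algorithm) are all simplifications for the example.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(log_theta, X, y):
    """Negative log-likelihood of a GP with Gaussian correlation, variance profiled out."""
    theta = np.exp(log_theta)
    d2 = (X[:, None, :] - X[None, :, :]) ** 2
    R = np.exp(-(d2 * theta).sum(-1)) + 1e-8 * np.eye(len(X))
    L = np.linalg.cholesky(R)
    alpha = np.linalg.solve(L, y - y.mean())   # simple centering instead of GLS mean
    sigma2 = (alpha @ alpha) / len(X)          # profiled-out process variance
    return len(X) * np.log(sigma2) + 2 * np.log(np.diag(L)).sum()

rng = np.random.default_rng(2)
X = rng.random((30, 5))
y = 4 * X[:, 0] + np.sin(6 * X[:, 1])          # only inputs 0 and 1 are active
fit = minimize(neg_log_lik, np.zeros(5), args=(X, y), method='L-BFGS-B')
print(np.exp(fit.x).round(2))  # large theta_j flags an active input j
```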


Technometrics | 1993

Bayesian Design and Analysis of Computer Experiments: Use of Derivatives in Surface Prediction

Max D. Morris; Toby J. Mitchell; Donald Ylvisaker

This article is concerned with the problem of predicting a deterministic response function y₀ over a multidimensional domain T, given values of y₀ and all of its first derivatives at a set of design sites (points) in T. The intended application is to computer experiments in which y₀ is an output from a computer model of a physical system and each point in T represents a particular configuration of the input parameters. It is assumed that the first derivatives are already available (e.g., from a sensitivity analysis) or can be produced by the code that implements the model. A Bayesian approach is used, in which the random function representing prior uncertainty about y₀ is taken to be a stationary Gaussian stochastic process. The calculations needed to update the prior, given observations of y₀ and its first derivatives at the design sites, are derived and illustrated in a small example. The issue of experimental design is also discussed, in particular the criterion of maximizing the reduction in entropy…
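
In one dimension with a squared-exponential kernel, the covariances among function values and derivative values follow by differentiating the kernel, which is all the sketch below needs; the kernel choice and toy function are assumptions of the example, not the article's setup.

```python
import numpy as np

def joint_cov(s, t, theta=4.0):
    """Covariances among y-values and y'-values for k(s,t) = exp(-theta (s-t)^2)."""
    D = s[:, None] - t[None, :]
    K = np.exp(-theta * D**2)
    dK_dt = 2 * theta * D * K                    # cov(y(s), y'(t))
    dK_ds = -2 * theta * D * K                   # cov(y'(s), y(t))
    d2K = (2 * theta - 4 * theta**2 * D**2) * K  # cov(y'(s), y'(t))
    return np.block([[K, dK_dt], [dK_ds, d2K]])

# Toy model y(x) = sin(2*pi*x) with known derivatives at 4 design sites.
x = np.array([0.0, 0.3, 0.6, 1.0])
obs = np.concatenate([np.sin(2*np.pi*x), 2*np.pi*np.cos(2*np.pi*x)])

K = joint_cov(x, x) + 1e-8 * np.eye(2 * len(x))
xs = np.linspace(0, 1, 5)
# Cross-covariances of y(xs) with the observed values and derivatives.
D = xs[:, None] - x[None, :]
k_y = np.exp(-4.0 * D**2)
k_dy = 2 * 4.0 * D * k_y                         # cov(y(xs), y'(x))
pred = np.hstack([k_y, k_dy]) @ np.linalg.solve(K, obs)
print(np.round(pred, 3), np.round(np.sin(2*np.pi*xs), 3))
```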


Technometrics | 2000

A Class of Three-Level Experimental Designs for Response Surface Modeling

Max D. Morris

Most empirically constructed response surface models are based on polynomials containing terms of order 2 or less. Experimental designs involving three equally spaced levels of each factor are popular choices for collecting data to fit such models. Because complete three-level factorial plans require more experimental runs than can usually be accommodated in practice, smaller designs are typically used. The families of three-level designs most often used in this context are the Box–Behnken plans and various forms of the central composite designs. This article introduces a different method for constructing composite designs, motivated by notions of sequential experimentation and the minimax and maximin distance criteria used in spatial modeling. Operational and performance characteristics of some designs constructed by the method are compared to those of competing Box–Behnken and central composite plans.
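
For orientation, the sketch below assembles a face-centered central composite design, one of the standard competitors discussed in the article; the article's own distance-based composite construction is not reproduced here.

```python
import numpy as np
from itertools import product

def face_centered_ccd(k, n_center=3):
    """Face-centered central composite design in k factors, coded levels -1/0/+1."""
    corners = np.array(list(product([-1, 1], repeat=k)))   # 2^k factorial part
    axial = np.vstack([v * row for v in (-1, 1)
                       for row in np.eye(k)])              # 2k axial points
    center = np.zeros((n_center, k))                       # replicated center runs
    return np.vstack([corners, axial, center])

D = face_centered_ccd(3)
print(D.shape)   # (8 + 6 + 3, 3) = (17, 3) runs, three levels per factor
```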


Journal of Forensic Sciences | 2010

Validation of Tool Mark Comparisons Obtained Using a Quantitative, Comparative, Statistical Algorithm

L. Scott Chumbley; Max D. Morris; M. James Kreiser; Charles Fisher; Jeremy Craft; Lawrence Genalo; Stephen Davis; David Faden; Julie Kidd

A statistical analysis and computational algorithm for comparing pairs of tool marks via profilometry data is described. Empirical validation of the method is established through experiments based on tool marks made at selected fixed angles from 50 sequentially manufactured screwdriver tips. Results obtained from three different comparison scenarios are presented and are in agreement with experiential knowledge possessed by practicing examiners. Further comparisons between scores produced by the algorithm and visual assessments of the same tool mark pairs by professional tool mark examiners in a blind study generally show good agreement between the algorithm and human experts. In specific instances where the algorithm had difficulty in assessing a particular comparison pair, results obtained during the collaborative study with professional examiners suggest ways in which algorithm performance may be improved. It is concluded that the addition of contextual information when inputting data into the algorithm should result in better performance.
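
The validated algorithm itself is not reproduced here. As a rough stand-in for the profilometry comparison idea, the sketch below scores a pair of simulated traces by the best Pearson correlation over pairs of windows; the window size, step, and synthetic profiles are invented for the example.

```python
import numpy as np

def best_window_corr(p1, p2, width=200, step=50):
    """Best Pearson correlation over pairs of windows from two traces."""
    best = -1.0
    for i in range(0, len(p1) - width + 1, step):
        for j in range(0, len(p2) - width + 1, step):
            r = np.corrcoef(p1[i:i+width], p2[j:j+width])[0, 1]
            best = max(best, r)
    return best

def simulate_mark(tool_seed, noise_seed, n=1200):
    """Stand-in striation profile: a tool-specific pattern plus measurement noise."""
    pattern = np.convolve(np.random.default_rng(tool_seed).normal(size=n),
                          np.ones(5) / 5, mode='same')
    return pattern + 0.05 * np.random.default_rng(noise_seed).normal(size=n)

same_a, same_b = simulate_mark(1, 10), simulate_mark(1, 11)   # same tool
other = simulate_mark(2, 12)                                  # different tool
print(best_window_corr(same_a, same_b))   # near 1
print(best_window_corr(same_a, other))    # much smaller
```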


Mathematical Geosciences | 1997

Six factors which affect the condition number of matrices associated with kriging

George J. Davis; Max D. Morris

Determining kriging weights to estimate some variable of interest at a given point in the field involves solving a system of linear equations. The matrix of this linear system is subject to numerical instability, which is measured by the matrix condition number. Six parameters in the kriging process are identified that directly affect this condition number. Analysis of a series of 648 experiments gives some insight into these parameters and into how the condition number relates to kriging variance.
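
A small sketch of this kind of numerical experiment: build the bordered ordinary-kriging system for a one-dimensional grid under an exponential covariogram and watch the condition number as one parameter (here, the nugget) varies. The covariogram model and settings are illustrative, not the paper's 648-run design.

```python
import numpy as np

def kriging_condition(n=25, range_=0.5, nugget=0.0):
    """Condition number of an ordinary-kriging matrix for an n-point 1-D grid."""
    x = np.linspace(0, 1, n)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / range_)  # exponential covariogram
    C += nugget * np.eye(n)
    # Ordinary kriging system: covariance block bordered by the unbiasedness row.
    A = np.block([[C, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
    return np.linalg.cond(A)

# Vary one of the parameters studied in the paper (the nugget effect).
for nugget in (0.0, 0.01, 0.1):
    print(nugget, f"{kriging_condition(nugget=nugget):.1e}")
```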


Bulletin of Mathematical Biology | 1980

A note on the dominance hierarchy index

Max D. Morris; C. Alex McMahan

The realized (observed) value of Landau’s dominance hierarchy index is examined. Under a model of constant pairwise dominance probabilities, the observed index is shown to be a strongly consistent estimator of the underlying (true) index. However, a large number of encounters between animals is shown to be required in order to reduce bias and variance to practical levels except when the pairwise dominance probabilities are near one.
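
Landau's index is straightforward to compute from a matrix of pairwise win counts, as the sketch below shows; a perfect linear hierarchy scores h = 1.

```python
import numpy as np

def landau_h(wins):
    """Landau's dominance hierarchy index from a matrix of pairwise win counts."""
    n = wins.shape[0]
    # V[i] = number of animals that i dominates (wins a majority of encounters).
    V = (wins > wins.T).sum(axis=1).astype(float)
    return (12.0 / (n**3 - n)) * ((V - (n - 1) / 2.0) ** 2).sum()

# Perfect linear hierarchy among 4 animals: h = 1.
wins = np.triu(np.full((4, 4), 5), k=1)   # animal i beats j (i < j) 5 times
print(landau_h(wins))                      # 1.0
```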


Technometrics | 2008

Using Orthogonal Arrays in the Sensitivity Analysis of Computer Models

Max D. Morris; Leslie M. Moore; Michael D. McKay

We consider a class of input sampling plans, called permuted column sampling plans, that are popular in sensitivity analysis of computer models. Permuted column plans, including replicated Latin hypercube sampling, support estimation of first-order sensitivity coefficients, but these estimates are biased when the usual practice of random column permutation is used to construct the sampling arrays. Deterministic column permutations may be used to eliminate this estimation bias. We prove that any permuted column sampling plan that eliminates estimation bias, using the smallest possible number of runs in each array and containing the largest possible number of arrays, can be characterized by an orthogonal array of strength 2. We derive approximate standard errors of the first-order sensitivity indices for this sampling plan. We give two examples demonstrating the sampling plan, behavior of the estimates, and standard errors, along with comparative results based on other approaches.
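
The sketch below implements the biased baseline the abstract starts from: replicated Latin hypercube sampling with random column permutations, with first-order indices estimated from conditional means at shared column levels. The deterministic, orthogonal-array-based permutations that remove the bias are the paper's contribution and are not reproduced here.

```python
import numpy as np

def replicated_lhs_indices(f, k, n=64, r=10, seed=0):
    """Estimate first-order sensitivity indices from r permuted-column arrays."""
    rng = np.random.default_rng(seed)
    levels = (np.arange(n) + 0.5) / n          # common column values, all factors
    Y, P = [], []
    for _ in range(r):
        perm = np.column_stack([rng.permutation(n) for _ in range(k)])
        X = levels[perm]                        # one Latin hypercube replicate
        Y.append(f(X)); P.append(perm)
    Y, P = np.asarray(Y), np.asarray(P)         # shapes (r, n) and (r, n, k)
    total_var = Y.var()
    S = np.zeros(k)
    for j in range(k):
        # Average the r outputs that share each level of factor j.
        cond_mean = np.zeros(n)
        for rep in range(r):
            order = np.argsort(P[rep, :, j])    # run index at each level of x_j
            cond_mean += Y[rep, order]
        cond_mean /= r
        S[j] = cond_mean.var() / total_var      # Var(E[Y | X_j]) / Var(Y)
    return S

f = lambda X: X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.0 * X[:, 2]
print(replicated_lhs_indices(f, k=3).round(2))  # inert factor 2 shows a small bias
```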


Technometrics | 1983

Two-Level Multifactor Designs for Detecting the Presence of Interactions

Max D. Morris; Toby J. Mitchell

A design optimality criterion, tr(L)-optimality, is applied to the problem of designing two-level multifactor experiments to detect the presence of interactions among the controlled variables. We give rules for constructing tr(L)-optimal foldover designs and tr(L)-optimal fractional factorial designs. Some results are given on the power of these designs for testing the hypothesis that there are no two-factor interactions. Augmentation of the tr(L)-optimal designs produces designs that achieve a compromise between the criteria of D-optimality (for parameter estimation in a first-order model) and tr(L)-optimality (for detecting lack of fit). We give an example to demonstrate an application to the sensitivity analysis of a computer model.
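
The foldover construction is easy to demonstrate. The sketch below folds a small resolution-III fraction and checks that a main effect aliased with a two-factor interaction in the fraction becomes orthogonal to it after folding; the tr(L)-optimality rules themselves are in the paper.

```python
import numpy as np
from itertools import product

def foldover(design):
    """Augment a two-level design with its mirror image (all signs reversed)."""
    return np.vstack([design, -design])

# A resolution-III 2^(3-1) fraction with C = AB (defining relation I = ABC).
base = np.array([(a, b, a * b) for a, b in product([-1, 1], repeat=2)])
D = foldover(base)

ab = lambda X: X[:, 0] * X[:, 1]          # the AB interaction column
print(int(base[:, 2] @ ab(base)))         # 4: in the fraction, C is aliased with AB
print(int(D[:, 2] @ ab(D)))               # 0: the foldover breaks the alias
```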

Collaboration


Dive into Max D. Morris's collaborations.

Top Co-Authors

Toby J. Mitchell, Oak Ridge National Laboratory
Robert W. Young, Defense Threat Reduction Agency
Leslie M. Moore, Los Alamos National Laboratory
C. Alex McMahan, University of Texas Health Science Center at San Antonio