Network

Latest external collaborations at the country level.

Hotspot

Research topics where Jason L. Loeppky is active.

Publication


Featured research published by Jason L. Loeppky.


Technometrics | 2009

Choosing the Sample Size of a Computer Experiment: A Practical Guide

Jason L. Loeppky; Jerome Sacks; William J. Welch

We provide reasons and evidence supporting the informal rule that the number of runs for an effective initial computer experiment should be about 10 times the input dimension. Our arguments quantify two key characteristics of computer codes that affect the sample size required for a desired level of accuracy when approximating the code via a Gaussian process (GP). The first characteristic is the total sensitivity of a code output variable to all input variables; the second corresponds to the way this total sensitivity is distributed across the input variables, specifically the possible presence of a few prominent input factors and many impotent ones (i.e., effect sparsity). Both measures relate directly to the correlation structure in the GP approximation of the code. In this way, the article moves toward a more formal treatment of sample size for a computer experiment. The evidence supporting these arguments stems primarily from a simulation study and from specific codes modeling climate and ligand activation of G-protein.
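As a minimal, hypothetical illustration of the n ≈ 10d rule (the toy test function, kernel, and library choices below are assumptions, not code from the paper), one can build a Latin hypercube design of 10 times the input dimension and check the accuracy of a GP emulator on hold-out points:

```python
# Minimal sketch of the "n = 10d" rule: build an initial design of
# 10 * d runs and fit a Gaussian process emulator to a toy code.
# The test function and kernel choice are illustrative assumptions,
# not taken from the paper.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

d = 4                      # input dimension of the "computer code"
n = 10 * d                 # informal rule: ~10 runs per input
X = qmc.LatinHypercube(d=d, seed=0).random(n=n)

def code(x):               # stand-in for an expensive deterministic code
    return np.sin(2 * np.pi * x[:, 0]) + x[:, 1] ** 2 + 0.5 * x[:, 2]

y = code(X)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.5] * d)).fit(X, y)

X_test = qmc.LatinHypercube(d=d, seed=1).random(n=500)
rmse = np.sqrt(np.mean((gp.predict(X_test) - code(X_test)) ** 2))
print(f"hold-out RMSE with n = 10d = {n} runs: {rmse:.4f}")
```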


Technometrics | 2007

Nonregular Designs With Desirable Projection Properties

Jason L. Loeppky; Randy R. Sitter; Boxin Tang

In many industrial applications, the experimenter is interested in estimating some of the main effects and two-factor interactions. In this article we rank two-level orthogonal designs based on the number of estimable models containing a subset of main effects and their associated two-factor interactions. By ranking designs in this way, the experimenter can directly assess the usefulness of the experimental plan for the purpose in mind. We apply the new ranking criterion to the class of all nonisomorphic two-level orthogonal designs with 16 and 20 runs and introduce a computationally efficient algorithm, based on two theoretical results, that will aid in finding designs with larger run sizes. Catalogs of useful designs with 16, 20, 24, and 28 runs are presented.
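A rough sketch of the ranking idea (the 8-run regular fraction below is a hypothetical stand-in for the paper's 16- and 20-run catalogs): a model containing a subset of main effects and all their two-factor interactions is estimable exactly when its model matrix has full column rank, and designs can be scored by counting such estimable models:

```python
# Sketch of the estimability check behind the ranking criterion:
# a model with a subset of main effects and all their two-factor
# interactions is estimable iff its model matrix has full column rank.
# The 8-run design here is an illustrative regular fraction, not one
# of the paper's 16- or 20-run designs.
import numpy as np
from itertools import combinations

# 2^(4-1) design with D = ABC (columns coded -1/+1)
base = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
design = np.column_stack([base, base[:, 0] * base[:, 1] * base[:, 2]])

def estimable(design, factors):
    """Model matrix: intercept, chosen main effects, their 2fis."""
    cols = [np.ones(design.shape[0])]
    cols += [design[:, f] for f in factors]
    cols += [design[:, i] * design[:, j] for i, j in combinations(factors, 2)]
    M = np.column_stack(cols)
    return np.linalg.matrix_rank(M) == M.shape[1]

# Count estimable models over all 3-factor subsets (the ranking statistic)
subsets = list(combinations(range(design.shape[1]), 3))
count = sum(estimable(design, s) for s in subsets)
print(f"{count} of {len(subsets)} three-factor models are estimable")
```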


Reliability Engineering & System Safety | 2011

Batch sequential design to achieve predictive maturity with calibrated computer models

Brian J. Williams; Jason L. Loeppky; Leslie M. Moore; Mason S. Macklem

Sequential experiment design strategies have been proposed for efficiently augmenting initial designs to solve many problems of interest to computer experimenters, including optimization, contour and threshold estimation, and global prediction. We focus on batch sequential design strategies for achieving maturity in global prediction of discrepancy inferred from computer model calibration. Predictive maturity focuses on adding field experiments to efficiently improve discrepancy inference. Several design criteria are extended to allow batch augmentation, including integrated and maximum mean square error, maximum entropy, and two expected improvement criteria. In addition, batch versions of maximin distance and weighted distance criteria are developed. Two batch optimization algorithms are considered: modified Fedorov exchange and a binning methodology motivated by optimizing augmented fractional factorial skeleton designs.
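As one concrete, simplified instance of a batch distance criterion (a greedy stand-in, not the paper's modified Fedorov exchange or binning algorithms), the sketch below augments an existing design with a batch of candidate points, each chosen to maximize its minimum distance to all points selected so far:

```python
# Greedy sketch of batch maximin-distance augmentation: each new point
# in the batch maximizes its minimum distance to the design so far.
# This is an illustrative stand-in, not the paper's modified Fedorov
# exchange or binning algorithms.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
initial = rng.random((10, 2))          # existing design in [0,1]^2
candidates = rng.random((2000, 2))     # candidate pool for augmentation

def augment_maximin(design, candidates, batch_size):
    design = design.copy()
    batch = []
    for _ in range(batch_size):
        dmin = cdist(candidates, design).min(axis=1)  # nearest-design dist
        pick = int(np.argmax(dmin))                   # most "spread out"
        batch.append(candidates[pick])
        design = np.vstack([design, candidates[pick]])
    return np.array(batch)

batch = augment_maximin(initial, candidates, batch_size=5)
print(batch)
```

Swapping in a weighted distance criterion of the kind the paper develops would only change how dmin is computed.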


ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering | 2015

Integrating Bayesian Linear Regression with Ordered Weighted Averaging: Uncertainty Analysis for Predicting Water Main Failures

Golam Kabir; Solomon Tesfamariam; Jason L. Loeppky; Rehan Sadiq

Water distribution networks (WDNs) are among the most important and expensive municipal infrastructure assets and are vital to public health. Municipal authorities strive to implement preventive (proactive) programs rather than corrective (reactive) programs. The ability to predict pipe failures in WDNs is essential for proactive investment planning of replacement and rehabilitation strategies. However, due to inherent uncertainties in data and modeling, WDN failure prediction is challenging, and improving the understanding of water main failure processes requires accurate quantification of uncertainty. This paper presents a comparative evaluation of the prediction accuracy of standard multiple linear regression and Bayesian regression models using water main failure data from the City of Calgary. Results indicate that Bayesian regression models provide better predictions and handle uncertainty more accurately than the standard regression model.
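A minimal sketch of conjugate Bayesian linear regression of the general kind compared in the paper (the prior settings and the synthetic pipe-age data are illustrative assumptions, not the Calgary data or the paper's exact model):

```python
# Minimal conjugate Bayesian linear regression sketch (known noise
# variance, Gaussian prior on coefficients). Prior settings and the
# synthetic pipe data are illustrative assumptions, not the paper's
# Calgary data or exact model.
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0, 80, n)])  # [1, pipe age]
beta_true = np.array([0.2, 0.05])
y = X @ beta_true + rng.normal(0, 0.5, n)                 # failure-rate proxy

sigma2 = 0.25            # assumed known noise variance
tau2 = 10.0              # prior variance: beta ~ N(0, tau2 * I)

# Posterior: N(mu, Sigma) with Sigma = (X'X/sigma2 + I/tau2)^-1
Sigma = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)
mu = Sigma @ X.T @ y / sigma2

# Predictive mean and variance for a 60-year-old pipe
x_new = np.array([1.0, 60.0])
pred_mean = x_new @ mu
pred_var = sigma2 + x_new @ Sigma @ x_new      # noise + parameter uncertainty
print(f"predicted rate: {pred_mean:.3f} +/- {np.sqrt(pred_var):.3f}")
```

The predictive variance carries both noise and parameter uncertainty, which is the practical advantage the abstract attributes to the Bayesian models.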


Statistical Science | 2016

Analysis Methods for Computer Experiments: How to Assess and What Counts?

Hao Chen; Jason L. Loeppky; Jerome Sacks; William J. Welch

Statistical methods based on a regression model plus a zero-mean Gaussian process (GP) have been widely used for predicting the output of a deterministic computer code. There are many suggestions in the literature for how to choose the regression component and how to model the correlation structure of the GP. This article argues that comprehensive, evidence-based assessment strategies are needed when comparing such modeling options. Otherwise, one is easily misled. Applying the strategies to several computer codes shows that a regression model more complex than a constant mean either has little impact on prediction accuracy or is an impediment. The choice of correlation function has a modest effect, but there is little to separate two common choices, the power exponential and the Matérn, if the latter is optimized with respect to its smoothness. The applications presented here also provide no evidence that a composite of GPs provides practical improvement in prediction accuracy. A limited comparison of Bayesian and empirical Bayes methods is similarly inconclusive. In contrast, we find that the effect of experimental design is surprisingly large, even for designs of the same type with the same theoretical properties.
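In the spirit of the paper's evidence-based comparisons (an illustrative sketch, not the authors' code; the toy test function and kernels are assumptions), one can score a constant-mean GP against a GP with a linear regression mean by hold-out prediction error on the same design:

```python
# Sketch of an evidence-based comparison: constant-mean GP versus a
# GP on residuals from a linear trend, scored by hold-out RMSE on the
# same design. The toy code and kernels are assumptions for illustration.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.linear_model import LinearRegression

def code(X):                               # stand-in deterministic code
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

X = qmc.LatinHypercube(d=2, seed=0).random(50)
Xt = qmc.LatinHypercube(d=2, seed=1).random(1000)
y, yt = code(X), code(Xt)

kernel = Matern(length_scale=[0.3, 0.3], nu=2.5)

# Option 1: constant (normalized) mean GP
gp0 = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
rmse0 = np.sqrt(np.mean((gp0.predict(Xt) - yt) ** 2))

# Option 2: linear regression mean plus a GP on the residuals
lin = LinearRegression().fit(X, y)
gp1 = GaussianProcessRegressor(kernel=kernel).fit(X, y - lin.predict(X))
rmse1 = np.sqrt(np.mean((lin.predict(Xt) + gp1.predict(Xt) - yt) ** 2))

print(f"constant mean RMSE: {rmse0:.4f}, linear mean RMSE: {rmse1:.4f}")
```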


International Conference on Enterprise Information Systems | 2015

Improving Online Marketing Experiments with Drifting Multi-armed Bandits

Giuseppe Burtini; Jason L. Loeppky; Ramon Lawrence

Restless bandits model the exploration vs. exploitation trade-off in a changing (non-stationary) world. Restless bandits have been studied both in the context of continuously changing (drifting) restlessness and in the context of change-point (sudden) restlessness. In this work, we study specific classes of drifting restless bandits selected for their relevance to modelling an online website optimization process. The contribution of this work is a simple, feasible weighted least squares technique capable of utilizing contextual arm parameters while allowing the parameters to drift, non-stationarily, within reasonable bounds. We produce a reference implementation, then evaluate and compare its performance across several different true world states, finding experimentally that performance is robust to time-drifting factors similar to those seen in many real-world cases.
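A hedged sketch of the core idea (the discount factor, reward model, and class design below are illustrative assumptions, not the paper's reference implementation): fit each arm's contextual parameters by exponentially weighted, i.e. discounted, least squares so that recent observations dominate as the world drifts:

```python
# Sketch of discounted (exponentially weighted) least squares for a
# drifting contextual bandit arm: recent rewards get more weight, so
# the estimate tracks slow parameter drift. The discount factor and
# reward model are illustrative assumptions, not the paper's exact setup.
import numpy as np

class DiscountedLSArm:
    def __init__(self, dim, gamma=0.98, reg=1.0):
        self.gamma = gamma                     # per-step discount on the past
        self.A = reg * np.eye(dim)             # weighted X'X (regularized)
        self.b = np.zeros(dim)                 # weighted X'y

    def update(self, x, reward):
        self.A = self.gamma * self.A + np.outer(x, x)
        self.b = self.gamma * self.b + reward * x

    def estimate(self):
        return np.linalg.solve(self.A, self.b)

# Drifting world: the arm's true parameters move slowly over time.
rng = np.random.default_rng(0)
theta = np.array([1.0, -0.5])
arm = DiscountedLSArm(dim=2)
for t in range(1000):
    theta += 0.002 * rng.normal(size=2)        # slow drift
    x = rng.normal(size=2)                     # context
    arm.update(x, x @ theta + 0.1 * rng.normal())
print("tracked:", arm.estimate(), "true:", theta)
```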


The Astrophysical Journal | 2014

Comparing Simulated Emission from Molecular Clouds Using Experimental Design

Miayan Yeremi; Mallory Flynn; Stella S. R. Offner; Jason L. Loeppky; Erik Rosolowsky

We propose a new approach to comparing simulated observations that enables us to determine the significance of the underlying physical effects. We utilize the methodology of experimental design, a subfield of statistical analysis, to establish a framework for comparing simulated position-position-velocity data cubes to each other. We propose three similarity metrics based on methods described in the literature: principal component analysis, the spectral correlation function, and the Cramér multivariate two-sample similarity statistic. Using these metrics, we intercompare a suite of mock observational data of molecular clouds generated from magnetohydrodynamic simulations with varying physical conditions. Using this framework, we show that all three metrics are sensitive to changing Mach number and temperature in the simulation sets but cannot detect changes in magnetic field strength and initial velocity spectrum. We highlight the shortcomings of one-factor-at-a-time designs commonly used in astrophysics and propose fractional factorial designs as a means to rigorously examine the effects of changing physical properties while minimizing the investment of computational resources.
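As a rough illustration of the first metric (a simplified PCA-based distance on random mock cubes, not the authors' implementation), two position-position-velocity cubes can be compared through their covariance eigenvalue spectra:

```python
# Simplified sketch of a PCA-based similarity metric between two
# position-position-velocity cubes: reshape each cube to (pixels,
# velocity channels), take the covariance eigenvalue spectrum, and
# compare spectra. This is an illustrative distance, not the exact
# metric used in the paper.
import numpy as np

def pca_spectrum(cube, k=10):
    # cube shape: (nx, ny, nv) -> samples are spatial pixels
    flat = cube.reshape(-1, cube.shape[-1])
    flat = flat - flat.mean(axis=0)
    cov = flat.T @ flat / flat.shape[0]
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1][:k]
    return eig / eig.sum()                    # normalized spectrum

def pca_distance(cube_a, cube_b, k=10):
    return np.linalg.norm(pca_spectrum(cube_a, k) - pca_spectrum(cube_b, k))

# Two mock cubes standing in for simulated observations
rng = np.random.default_rng(0)
cube_a = rng.normal(size=(32, 32, 16))
cube_b = cube_a + 0.5 * rng.normal(size=(32, 32, 16))
print(f"PCA spectrum distance: {pca_distance(cube_a, cube_b):.4f}")
```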


Technometrics | 2002

Methods for Assessing Curvature and Interaction in Mixture Experiments

Gregory F. Piepel; Ruel D. Hicks; Jeffrey M. Szychowski; Jason L. Loeppky

The terms curvature and interaction traditionally are not defined or used in the context of mixture experiments because curvature and interaction effects are partially confounded due to the mixture constraint that the component proportions sum to 1. Instead, the term nonlinear blending traditionally is defined and used. However, just as the concept of a component effect is defined specifically for mixture experiments, the concepts of curvature and interaction can be as well. These special definitions lead to special sets of points for assessing the curvature and interaction of mixture components. An interaction plot for mixture components is obtained by plotting measured or predicted response values at the special points for assessing interaction. Together with response trace (component effects) plots, the new mixture component interaction plots provide for graphically assessing the linear, curvature, and interaction effects of mixture components. These methods provide a more complete picture of how mixture components affect a response, which is useful in formulating mixtures and confirming or extending subject matter knowledge. The special sets of points and the graphical techniques used to assess linear, curvature, and interaction effects are illustrated using two nuclear waste glass examples.
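For background (standard mixture-experiment notation, not a result of the paper), the partial confounding arises because the mixture constraint makes a full quadratic model unidentifiable, which is why the quadratic model is written in Scheffé form:

```latex
% Standard background: why curvature and interaction are confounded.
% With q components satisfying the mixture constraint
\sum_{i=1}^{q} x_i = 1,
% an intercept and pure quadratic terms x_i^2 are not separately
% identifiable, so the quadratic model is written in Scheffé form:
E(y) = \sum_{i=1}^{q} \beta_i x_i + \sum_{i<j} \beta_{ij}\, x_i x_j,
% where each \beta_{ij} mixes nonlinear blending (curvature) with
% interaction, the partial confounding that the paper's special
% point sets are designed to separate.
```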


PLOS ONE | 2015

Constructing rigorous and broad biosurveillance networks for detecting emerging zoonotic outbreaks.

Mac Brown; Leslie M. Moore; Benjamin H. McMahon; Dennis R. Powell; Montiago X. LaBute; James M. Hyman; Ariel L. Rivas; Mark D. Jankowski; Joel Berendzen; Jason L. Loeppky; Carrie A. Manore; Jeanne M. Fair

Determining optimal surveillance networks for an emerging pathogen is difficult since it is not known beforehand what the characteristics of the pathogen will be or where it will emerge. The resources for surveillance of infectious diseases in animals and wildlife are often limited, and mathematical modeling can play a supporting role in examining a wide range of scenarios of pathogen spread. We demonstrate how a hierarchy of mathematical and statistical tools can be used in surveillance planning to help guide successful surveillance and mitigation policies for a wide range of zoonotic pathogens. The model forecasts can help clarify the complexities of potential scenarios and optimize biosurveillance programs for rapidly detecting infectious diseases. Using the highly pathogenic zoonotic H5N1 avian influenza 2006-2007 epidemic in Nigeria as an example, we determined the risk of infection for localized areas in an outbreak and designed biosurveillance stations that are effective for different pathogen strains and a range of possible outbreak locations. We created a general multi-scale, multi-host stochastic SEIR epidemiological network model, with both short- and long-range movement, to simulate the spread of an infectious disease through Nigerian human, poultry, backyard duck, and wild bird populations. We chose parameter ranges specific to avian influenza (but not to a particular strain) and used a Latin hypercube sample experimental design to investigate epidemic predictions in a thousand simulations. We ranked the risk of local regions by the number of times they became infected in the ensemble of simulations. These spatial statistics were then compiled into a potential risk map of infection. Finally, we validated the results against a known outbreak, using spatial analysis of all the simulation runs to show that the simulated progression closely matched the observed locations of the farms infected in the 2006-2007 epidemic.
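A minimal sketch of the Latin hypercube step (the parameter names, ranges, and one-line stand-in "simulator" are hypothetical placeholders, not the paper's calibrated SEIR network model): draw a space-filling sample over epidemiological parameters, run the simulator at each point, and rank regions by how often they become infected:

```python
# Sketch of the Latin hypercube experiment-design step: draw a
# space-filling sample over epidemiological parameter ranges, run
# the simulator at each point, then rank regions by how often they
# are infected. Parameter names/ranges and the one-line "simulator"
# are hypothetical placeholders, not the paper's calibrated model.
import numpy as np
from scipy.stats import qmc

ranges = {                       # hypothetical SEIR-type parameter ranges
    "transmission_rate": (0.1, 1.0),
    "incubation_days":   (1.0, 7.0),
    "infectious_days":   (2.0, 14.0),
}
lows, highs = zip(*ranges.values())

sample = qmc.LatinHypercube(d=len(ranges), seed=0).random(n=1000)
params = qmc.scale(sample, lows, highs)      # 1000 parameter settings

rng = np.random.default_rng(0)
n_regions = 50
hits = np.zeros(n_regions)
for beta, incub, infect in params:
    # Placeholder for one stochastic SEIR network run: regions under
    # higher transmission pressure are infected more often.
    p_infect = np.clip(beta * infect / (incub + infect) * rng.random(n_regions), 0, 1)
    hits += rng.random(n_regions) < p_infect

risk_rank = np.argsort(hits)[::-1]           # regions by simulated risk
print("highest-risk regions:", risk_rank[:5])
```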


SIAM Journal on Optimization | 2013

Convex Relaxations of the Weighted Maxmin Dispersion Problem

Sheena Haines; Jason L. Loeppky; Paul Tseng; Xianfu Wang

Consider the weighted maxmin dispersion problem of locating point(s) in a given region that are as far away as possible from a given set of fixed points.
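In standard notation (a sketch of the usual formulation, with the region, fixed points, and weights as assumed symbols, not quoted from the paper):

```latex
% Weighted maxmin dispersion, sketched in standard form (assumed
% symbols, not quoted from the paper): given fixed points
% x^1, \dots, x^m \in \mathbb{R}^n, weights w_i > 0, and a region
% \mathcal{X} \subseteq \mathbb{R}^n, find
\max_{x \in \mathcal{X}} \; \min_{1 \le i \le m} \; w_i \,\lVert x - x^i \rVert^2.
% The problem is nonconvex; convex (e.g., semidefinite) relaxations
% give computable upper bounds that can be rounded to feasible points.
```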

Collaboration


Dive into Jason L. Loeppky's collaborations.

Top Co-Authors

Brian J. Williams, Los Alamos National Laboratory
Leslie M. Moore, Los Alamos National Laboratory
Warren Hare, University of British Columbia
Abigail Arnold, University of British Columbia
Erik Rosolowsky, University of British Columbia
Hao Chen, University of British Columbia
Solomon Tesfamariam, University of British Columbia
Giuseppe Burtini, University of British Columbia
Golam Kabir, University of British Columbia