Brian Weaver
Los Alamos National Laboratory
Publications
Featured research published by Brian Weaver.
Technometrics | 2013
Brian Weaver; William Q. Meeker; Luis A. Escobar; Joanne Wendelberger
Repeated measures degradation studies are used to assess product or component reliability when there are few or even no failures expected during a study. Such studies are often used to assess the shelf life of materials, components, and products. We show how to evaluate the properties of proposed test plans. Such evaluations are needed to identify statistically efficient tests. We consider test plans for applications where parameters related to the degradation distribution or the related lifetime distribution are to be estimated. We use the approximate large-sample variance–covariance matrix of the parameters of a mixed effects linear regression model for repeated measures degradation data to assess the effect of sample size (number of units and number of measurements within the units) on estimation precision of both degradation and failure-time distribution quantiles. We also illustrate the complementary use of simulation-based methods for evaluating and comparing test plans. These test-planning methods are illustrated with two examples. We provide the R code and examples as supplementary materials (available online on the journal web site) for this article.
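The simulation-based evaluation of test plans described above can be sketched briefly. The sketch below (in Python rather than the paper's R supplement; the model parameters, plan sizes, and two-stage estimator are illustrative assumptions, not taken from the paper) simulates a repeated-measures degradation study under a random-intercept, random-slope linear model and compares the Monte Carlo precision of the estimated mean degradation rate for two candidate plans with the same total number of measurements.

```python
import numpy as np

def simulate_plan(n_units, n_times, mc_reps=500, seed=0):
    """Monte Carlo precision of the estimated mean degradation rate for a
    candidate plan of n_units units measured at n_times common time points.
    Illustrative model: y_ij = (b0 + u_i) + (b1 + v_i) * t_j + eps_ij."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n_times)
    b0, b1 = 10.0, -2.0             # population intercept and degradation rate
    s_u, s_v, s_e = 0.5, 0.4, 0.2   # unit-to-unit and measurement-error SDs
    est = np.empty(mc_reps)
    for r in range(mc_reps):
        u = rng.normal(0.0, s_u, n_units)
        v = rng.normal(0.0, s_v, n_units)
        y = ((b0 + u)[:, None] + (b1 + v)[:, None] * t
             + rng.normal(0.0, s_e, (n_units, n_times)))
        # per-unit least-squares slopes, then their average (two-stage estimator)
        slopes = np.polyfit(t, y.T, 1)[0]
        est[r] = slopes.mean()
    return est.std()  # Monte Carlo standard error of the mean-rate estimate

# Two plans with the same total of 40 measurements: here the plan with more
# units wins, because unit-to-unit slope variation dominates measurement error.
se_a = simulate_plan(n_units=8, n_times=5)
se_b = simulate_plan(n_units=4, n_times=10)
```

For a real study one would replace the two-stage estimator with a proper mixed-effects fit and compare against the large-sample variance formulas, as the paper does.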
Quality Engineering | 2012
Brian Weaver; Michael S. Hamada; Stephen B. Vardeman; Alyson G. Wilson
Gauge repeatability and reproducibility (R&R) studies are used to assess the precision of measurement systems. In particular, they are used to quantify the importance of various sources of variability in a measurement system. We take a Bayesian approach to data analysis and show how to estimate the variance components associated with the sources of variability, and relevant functions of them, using the gauge R&R data together with prior information. We then provide worked examples of gauge R&R data analysis for types of studies common in industrial applications. With each example we provide WinBUGS code to illustrate how easy it is to implement a Bayesian analysis of gauge R&R data.
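To make the variance-component idea concrete, the sketch below simulates a balanced gauge R&R study and computes classical method-of-moments (ANOVA) point estimates. This is a simplified, non-Bayesian analogue of the analysis the abstract describes (the paper's approach combines the data with priors via WinBUGS); the design sizes and true standard deviations are made up, and no part-by-operator interaction is modeled.

```python
import numpy as np

# Balanced gauge R&R study: p parts, o operators, r replicate measurements.
rng = np.random.default_rng(1)
p, o, r = 10, 3, 5
s_part, s_oper, s_rep = 2.0, 0.5, 0.3   # true SDs (illustrative)

part = rng.normal(0.0, s_part, p)[:, None, None]
oper = rng.normal(0.0, s_oper, o)[None, :, None]
y = 5.0 + part + oper + rng.normal(0.0, s_rep, (p, o, r))

# ANOVA mean squares for a two-way model without interaction
cell_means = y.mean(axis=2)
ms_error = ((y - cell_means[:, :, None]) ** 2).sum() / (p * o * (r - 1))
ms_part = r * o * cell_means.mean(axis=1).var(ddof=1)
ms_oper = r * p * cell_means.mean(axis=0).var(ddof=1)

# Method-of-moments variance-component estimates (truncated at zero)
var_repeat = ms_error                                  # repeatability
var_oper = max((ms_oper - ms_error) / (r * p), 0.0)    # reproducibility
var_part = max((ms_part - ms_error) / (r * o), 0.0)    # part-to-part
```

A Bayesian analysis replaces these point estimates with full posterior distributions, which also handles the awkward possibility of negative moment estimates automatically.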
Journal of Quality Technology | 2014
Hamada; Alyson G. Wilson; Brian Weaver; R.W. Griffiths; Harry F. Martz
This paper illustrates the development of Bayesian assurance test plans for system reliability, assuming that binomial data will be collected on the system and that previous information is available from component testing. The posterior consumer's and producer's risks are used as the criteria for developing the test plan. Using the previous component information reduces the number of tests needed to achieve the same levels of risk. The proposed methodology is illustrated with two examples.
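The core effect, that prior information reduces the number of tests, can be shown with a minimal conjugate sketch. Here reliability gets a Beta(a, 1) prior (a special case chosen so the posterior CDF has a closed form; the paper's framework is more general and also tracks the producer's risk), and we find the smallest failure-free test plan meeting a posterior consumer's risk target. All numerical settings are illustrative.

```python
def posterior_consumer_risk(a, n, r_req):
    """Posterior P(reliability < r_req) after n failure-free binomial tests,
    with a conjugate Beta(a, 1) prior on reliability: the posterior is
    Beta(a + n, 1), whose CDF at r is r**(a + n)."""
    return r_req ** (a + n)

def smallest_zero_failure_plan(a, r_req, risk_target, n_max=1000):
    """Smallest number of failure-free tests that drives the posterior
    consumer's risk down to the target."""
    for n in range(n_max + 1):
        if posterior_consumer_risk(a, n, r_req) <= risk_target:
            return n
    return None

# Vague prior vs. a prior sharpened by (hypothetical) component test data:
n_vague = smallest_zero_failure_plan(a=1.0, r_req=0.9, risk_target=0.1)
n_informed = smallest_zero_failure_plan(a=20.0, r_req=0.9, risk_target=0.1)
```

With the vague Beta(1, 1) prior, 21 failure-free tests are needed; the informative Beta(20, 1) prior, acting like 19 prior successes, cuts this to 2.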
Science and Technology of Nuclear Installations | 2013
Tom Burr; Michael S. Hamada; John Howell; Misha Skurikhin; Larry Ticknor; Brian Weaver
Process monitoring (PM) for nuclear safeguards sometimes requires estimation of thresholds corresponding to small false alarm rates. Threshold estimation dates to the 1920s with the Shewhart control chart; however, because possible new roles for PM are being evaluated in nuclear safeguards, it is timely to consider modern model selection options in the context of threshold estimation. One of the possible new PM roles involves PM residuals, where a residual is defined as residual = data − prediction. This paper reviews alarm threshold estimation, introduces model selection options, and considers a range of assumptions regarding the data-generating mechanism for PM residuals. Two PM examples from nuclear safeguards are included to motivate the need for alarm threshold estimation. The first example involves mixtures of probability distributions that arise in solution monitoring, which is a common type of PM. The second example involves periodic partial cleanout of in-process inventory, leading to challenging structure in the time series of PM residuals.
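Why the choice of data-generating model matters for small false alarm rates can be sketched directly. Below, in-control residuals follow a two-component normal mixture (the components and weights are made up, loosely in the spirit of the solution-monitoring example), and a distribution-free empirical quantile threshold is compared with one derived from a single-Gaussian assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.01  # target false alarm rate

def mixture_residuals(n):
    """In-control PM residuals from an illustrative two-component mixture."""
    heavy = rng.random(n) < 0.1
    return np.where(heavy, rng.normal(0.0, 3.0, n), rng.normal(0.0, 1.0, n))

z = mixture_residuals(20000)
t_empirical = np.quantile(z, 1 - alpha)    # distribution-free threshold
t_gaussian = z.mean() + 2.326 * z.std()    # normal-model z_{0.99} threshold

# Actual false alarm rates each threshold delivers on fresh in-control data
z_new = mixture_residuals(20000)
far_emp = (z_new > t_empirical).mean()
far_gau = (z_new > t_gaussian).mean()
```

Under the mixture, the Gaussian-model threshold sits too low and alarms noticeably more often than the nominal 1%, which is the kind of model-selection issue the paper examines.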
Accreditation and Quality Assurance | 2012
Tom Burr; Stephen Croft; Michael S. Hamada; Stephen B. Vardeman; Brian Weaver
A previous related paper considered rounding error effects in the presence of underlying measurement error and presented a Bayesian approach to estimate instrument input random error standard deviation. This addendum to the previous paper emphasizes that the effects of random error depend on the true (and usually unknown) value of the measurand, in terms of both the variance and the item-specific bias. However, it is shown that if we assume that the true values are uniformly distributed, then instrument variance and item-specific bias can be combined into an “effective random error variance” and a strategy to estimate the effective random error variance is provided.
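The "effective random error variance" idea can be checked by simulation. In the sketch below (parameter values are illustrative, not from the paper), true values are uniformly distributed over many rounding cells, so the rounding residual is approximately uniform on (−Δ/2, Δ/2) and independent of the instrument error, and the effective variance is about σ² + Δ²/12.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, delta = 0.05, 0.1    # instrument error SD and rounding increment

# Uniform true values across 100 rounding cells, measured with error,
# then rounded to the nearest multiple of delta before being reported.
truth = rng.uniform(0.0, 10.0, 100_000)
reported = np.round((truth + rng.normal(0.0, sigma, truth.size)) / delta) * delta
err = reported - truth

effective_var = err.var()
predicted_var = sigma**2 + delta**2 / 12   # instrument + rounding contribution
```

The simulated variance of the combined error closely matches the σ² + Δ²/12 prediction, illustrating how instrument variance and the value-dependent rounding bias fold into a single effective variance when true values are uniform.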
Quality Engineering | 2016
Brian Weaver; Michael S. Hamada
Key Point: We provide a gentle introduction to this powerful analysis method, which can handle complex data and modeling situations.
Journal of Quality Technology | 2013
David H. Collins; Jason K. Freels; Aparna V. Huzurbazar; Richard L. Warr; Brian Weaver
Perusal of quality- and reliability-engineering literature indicates some confusion over the meaning of accelerated life testing (ALT), highly accelerated life testing (HALT), highly accelerated stress screening (HASS), and highly accelerated stress auditing (HASA). In addition, there is a significant conflict between testing as part of an iterative process of finding and removing defects and testing as a means of estimating or predicting product reliability. We review the basics of these testing methods and describe how they relate to statistical methods for estimation and prediction of reliability and reliability growth. We also outline potential synergies to help reconcile statistical and engineering approaches to accelerated testing, resulting in better product quality at lower cost.
Quality Engineering | 2017
Michael L. Fugate; Michael S. Hamada; Brian Weaver
This article proposes using a random effects spline regression model to analyze degradation data. Spline regression avoids having to specify a parametric function for the true degradation of an item. A distribution for the spline regression coefficients captures the variation of the true degradation curves from item to item. We illustrate the proposed methodology with a real example using a Bayesian approach. The Bayesian approach allows prediction of a population's degradation over time and makes estimation of reliability easy to perform.
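The spline part of the model can be sketched with a fixed-knot linear spline (truncated power basis) fit by least squares to pooled, simulated degradation data. This drops the paper's random-effects layer and Bayesian machinery, and all data and knot locations are invented, but it shows how a spline basis tracks a degradation curve with no parametric form assumed.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated degradation: 5 items, 9 measurement times each, with a kink
# at t = 2 that a single parametric form would struggle to capture.
t = np.tile(np.linspace(0.0, 4.0, 9), 5)
true_deg = np.where(t < 2.0, -0.2 * t, -0.4 - 0.8 * (t - 2.0))
y = true_deg + rng.normal(0.0, 0.05, t.size)

# Linear spline via a truncated power basis: 1, t, (t - k)_+ for each knot.
knots = [1.0, 2.0, 3.0]
X = np.column_stack([np.ones_like(t), t]
                    + [np.maximum(t - k, 0.0) for k in knots])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def spline(t_new):
    """Evaluate the fitted spline at new times."""
    Xn = np.column_stack([np.ones_like(t_new), t_new]
                         + [np.maximum(t_new - k, 0.0) for k in knots])
    return Xn @ coef
```

The random-effects version gives each item its own coefficient vector drawn from a population distribution, which is what lets the Bayesian fit predict degradation for the population as a whole.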
Archive | 2015
Stefano Andreon; Brian Weaver
In this chapter we introduce regression models, i.e., how to fit (regress) one or more quantities against each other through a functional relationship and estimate any unknown parameters that dictate this relationship. Questions of interest include: How should one deal with samples affected by selection effects? How does a rich data structure influence the fitted parameters? And what about non-linear multiple-predictor fits, upper/lower limits, measurement errors of different amplitudes, an intrinsic variety in the studied populations, or an extra source of variability? A number of examples illustrate how to answer these questions and how to predict the value of an unavailable quantity by exploiting the existence of a trend with another, available, quantity.
Quality Engineering | 2018
Michael S. Hamada; Brian Weaver
Key Point: This article provides a number of examples that show how useful simulation is in addressing many problems encountered in applied statistics.
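A classic example of the kind this article collects is using Monte Carlo simulation to check the actual coverage of a nominal 95% interval. The sketch below (standard-library Python; the exponential data model and sample sizes are illustrative, not from the article) shows the normal-theory interval undercovering for skewed data at small n and recovering as n grows.

```python
import math
import random

random.seed(5)

def coverage(n, reps=4000):
    """Monte Carlo coverage of the nominal-95% normal-theory interval
    x̄ ± 1.96 s/√n for the mean of exponential data (true mean = 1)."""
    hits = 0
    for _ in range(reps):
        x = [random.expovariate(1.0) for _ in range(n)]
        m = sum(x) / n
        s = math.sqrt(sum((v - m) ** 2 for v in x) / (n - 1))
        half = 1.96 * s / math.sqrt(n)
        hits += (m - half) <= 1.0 <= (m + half)
    return hits / reps

coverage_small = coverage(n=10)    # skewness hurts at small samples
coverage_large = coverage(n=200)   # CLT restores near-nominal coverage
```

Ten lines of simulation answer a question that would take real effort analytically, which is the article's point.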