Publication


Featured research published by William Q. Meeker.


Statistical Science | 2006

A Review of Accelerated Test Models

Luis A. Escobar; William Q. Meeker

Engineers in the manufacturing industries have used accelerated test (AT) experiments for many decades. The purpose of AT experiments is to acquire reliability information quickly. Test units of a material, component, subsystem, or entire system are subjected to higher-than-usual levels of one or more accelerating variables such as temperature or stress. Then the AT results are used to predict the life of the units at use conditions. The extrapolation is typically justified (correctly or incorrectly) on the basis of physically motivated models or a combination of empirical model fitting with a sufficient amount of previous experience in testing similar units. The need to extrapolate in both time and the accelerating variables generally necessitates the use of fully parametric models. Statisticians have made important contributions in the development of appropriate stochastic models for AT data (typically a distribution for the response and regression relationships between the parameters of this distribution and the accelerating variable(s)), statistical methods for AT planning (choice of accelerating variable levels and allocation of available test units to those levels), and methods of estimation of suitable reliability metrics. This paper provides a review of many of the AT models that have been used successfully in this area.
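
Among the models reviewed, the Arrhenius relationship for temperature acceleration is a classic example. Below is a minimal sketch of computing an Arrhenius acceleration factor; the temperatures and the 0.8 eV activation energy are made-up illustration values, not from the paper:

```python
import numpy as np

# Boltzmann constant in eV/K, used in the Arrhenius reaction-rate model.
K_BOLTZMANN_EV = 8.617e-5

def arrhenius_af(temp_use_c, temp_test_c, ea_ev):
    """Acceleration factor AF = exp[(Ea/k) * (1/T_use - 1/T_test)] (kelvin)."""
    t_use = temp_use_c + 273.15
    t_test = temp_test_c + 273.15
    return np.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_test))

# Hypothetical example: test at 120 C to predict life at 40 C, assuming
# an activation energy of 0.8 eV; test time scales down by this factor.
print(f"acceleration factor: {arrhenius_af(40.0, 120.0, ea_ev=0.8):.0f}")
```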


Technometrics | 1998

Accelerated degradation tests: modeling and analysis

William Q. Meeker; Luis A. Escobar; C. Joseph Lu

High-reliability systems generally require that individual system components have extremely high reliability over long periods of time. Short product development times require reliability tests to be conducted with severe time constraints. Frequently few or no failures occur during such tests, even with acceleration. Thus, it is difficult to assess reliability with traditional life tests that record only failure times. For some components, degradation measures can be taken over time. A relationship between component failure and amount of degradation makes it possible to use degradation models and data to make inferences and predictions about a failure-time distribution. This article describes degradation reliability models that correspond to physical-failure mechanisms. We explain the connection between degradation reliability models and failure-time reliability models. Acceleration is modeled by having an acceleration model that describes the effect that temperature (or another accelerating variable) has on...
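
The core idea, that a degradation-path model plus a failure threshold induces a failure-time distribution, can be seen in a deliberately simple linear-path simulation; this is my own illustration with made-up parameter values, not the article's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear degradation path D(t) = rate * t with lognormal unit-to-unit
# variation in rate. A unit "fails" when D(t) first crosses d_fail,
# so its failure time is exactly T = d_fail / rate.
mu, sigma, d_fail = np.log(0.5), 0.3, 10.0
rates = rng.lognormal(mu, sigma, size=5000)
cross_times = d_fail / rates  # induced failure times

# T = d_fail / rate is lognormal with parameters log(d_fail) - mu and
# sigma, so the simulated median should match the theoretical one.
print("simulated median  :", np.median(cross_times))
print("theoretical median:", d_fail / np.exp(mu))
```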


The American Statistician | 1995

Teaching about Approximate Confidence Regions Based on Maximum Likelihood Estimation

William Q. Meeker; Luis A. Escobar

Maximum likelihood (ML) provides a powerful and extremely general method for making inferences over a wide range of data/model combinations. The likelihood function and likelihood ratios have clear intuitive meanings that make it easy for students to grasp the important concepts. Modern computing technology has made it possible to use these methods over a wide range of practical applications. However, many mathematical statistics textbooks, particularly those at the Senior/Masters level, do not give this important topic coverage commensurate with its place in the world of modern applications. Similarly, in nonlinear estimation problems, standard practice (as reflected by procedures available in the popular commercial statistical packages) has been slow to recognize the advantages of likelihood-based confidence regions/intervals over the commonly used "normal-theory" regions/intervals based on the asymptotic distribution of the "Wald statistic." In this note we outline our approach for presenting, ...
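
A toy sketch of the contrast the authors draw: a Wald interval versus a likelihood-ratio interval for an exponential rate. This example is mine, not the paper's; at small n the LR interval is asymmetric and stays inside the parameter space:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

rng = np.random.default_rng(7)

# Small exponential sample; the two interval types differ noticeably here.
x = rng.exponential(scale=10.0, size=15)
n, s = len(x), x.sum()
lam_hat = n / s  # ML estimate of the rate

def loglik(lam):
    return n * np.log(lam) - lam * s

# Wald interval: lam_hat +/- z * se, se from the observed information.
se = lam_hat / np.sqrt(n)
wald = (lam_hat - 1.96 * se, lam_hat + 1.96 * se)

# LR interval: all lam with 2*[l(lam_hat) - l(lam)] <= chi2(1, 0.95).
cut = loglik(lam_hat) - chi2.ppf(0.95, df=1) / 2.0

def excess(lam):
    return loglik(lam) - cut

lr = (brentq(excess, 1e-8, lam_hat), brentq(excess, lam_hat, 50 * lam_hat))
print("Wald:", wald)   # symmetric, can cross zero at small n
print("LR  :", lr)     # asymmetric, always respects lam > 0
```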


Psychometrika | 1993

The nontruncated marginal of a truncated bivariate normal distribution

Barry C. Arnold; Robert J. Beaver; Richard A. Groeneveld; William Q. Meeker

Inference is considered for the marginal distribution of X, when (X, Y) has a truncated bivariate normal distribution. The Y variable is truncated, but only the X values are observed. The relationship of this distribution to Azzalini's "skew-normal" distribution is obtained. Method-of-moments and maximum likelihood estimation are compared for the three-parameter Azzalini distribution. Samples that are uninformative about the skewness of this distribution may occur, even for large n. Profile likelihood methods are employed to describe the uncertainty involved in parameter estimation. A sample of 87 Otis test scores is shown to be well described by this model.
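
The distributional relationship is easy to check by simulation: when (X, Y) is standard bivariate normal with correlation rho and only the X values with Y > 0 are kept, X follows Azzalini's skew-normal law with shape alpha = rho / sqrt(1 - rho^2). A sketch with arbitrary rho and sample size:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# (X, Y) standard bivariate normal with correlation rho; Y is truncated
# (keep Y > 0) but only X is observed, as in the paper's setting.
rho = 0.7
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=200_000)
x_obs = xy[xy[:, 1] > 0, 0]

# The observed X should be skew-normal with shape rho / sqrt(1 - rho^2).
sn = stats.skewnorm(rho / np.sqrt(1.0 - rho**2))
print("sample mean:", x_obs.mean(), " theory:", sn.mean())
print("sample skew:", stats.skew(x_obs), " theory:", float(sn.stats(moments="s")))
```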


Technometrics | 1978

Theory for Optimum Accelerated Censored Life Tests for Weibull and Extreme Value Distributions

Wayne Nelson; William Q. Meeker

This paper presents maximum likelihood theory for large-sample optimum accelerated life test plans. The plans are used to estimate a simple linear relationship between (transformed) stress and product life, which has a Weibull or smallest extreme value distribution. Censored data are to be analyzed before all test units fail. The plans show that all test units need not run to failure and that more units should be tested at low test stresses than at high ones. The plans are illustrated with a voltage-accelerated life test of an electrical insulating fluid.
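
A hedged sketch of the kind of model such plans target: Weibull life whose log is linear in a standardized stress, fit by maximum likelihood from Type I censored data. The stress levels, allocation, and parameter values below are invented for illustration and are not the paper's optimum plan:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Log life = b0 + b1*x + sigma*Z with Z smallest-extreme-value, i.e.
# Weibull life; x is a standardized stress (0 = use condition).
b0, b1, sigma_true = 8.0, -4.0, 0.5       # made-up "true" values
x = np.repeat([0.6, 1.0], 30)             # two test stress levels
u = rng.uniform(size=x.size)
y = b0 + b1 * x + sigma_true * np.log(-np.log(1.0 - u))

# Type I censoring at a fixed log time; censored units still carry
# information through their survival probability.
y_cens = 5.8
fail = y < y_cens
y_obs = np.minimum(y, y_cens)

def neg_loglik(theta):
    beta0, beta1, log_sigma = theta
    sigma = np.exp(log_sigma)
    z = (y_obs - (beta0 + beta1 * x)) / sigma
    ll_fail = -np.log(sigma) + z - np.exp(z)  # log density (failures)
    ll_cens = -np.exp(z)                      # log survival (censored)
    return -(ll_fail[fail].sum() + ll_cens[~fail].sum())

fit = minimize(neg_loglik, x0=np.array([7.0, -3.0, 0.0]), method="Nelder-Mead")
b0_hat, b1_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print("ML estimates:", b0_hat, b1_hat, sigma_hat)
# Extrapolated median log life at the use condition x = 0.
print("median log life at use:", b0_hat + sigma_hat * np.log(np.log(2.0)))
```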


Technometrics | 1999

Estimating fatigue curves with the random fatigue-limit model

Francis Pascual; William Q. Meeker

In a fatigue-limit model, units tested below the fatigue limit (also known as the threshold stress) theoretically will never fail. This article uses a random fatigue-limit model to describe (a) the dependence of fatigue life on the stress level, (b) the variation in fatigue life, and (c) the unit-to-unit variation in the fatigue limit. We fit the model to actual fatigue datasets by maximum likelihood methods and study the fits under different distributional assumptions. Small quantiles of the life distribution are often of interest to designers. Lower confidence bounds based on likelihood ratio methods are obtained for such quantiles. To assess the fits of the model, we construct diagnostic plots and perform goodness-of-fit tests and residual analyses.
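
A quick simulation conveys the model's qualitative behavior: each unit draws its own fatigue limit, units stressed below their limit never fail (runouts), and life grows rapidly as the stress approaches the limit. The distributions and parameter values here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

# Random fatigue-limit sketch: unit i has limit gamma_i; at stress
# s <= gamma_i it never fails, and above it log life decreases in
# log(s - gamma_i), plus unit-to-unit scatter.
beta0, beta1, sig_w = 15.0, -2.0, 0.3
n_per = 1000

for s in [80.0, 100.0, 120.0, 160.0]:
    gamma = rng.lognormal(mean=np.log(90.0), sigma=0.1, size=n_per)
    failed = s > gamma
    log_life = (beta0 + beta1 * np.log(s - gamma[failed])
                + sig_w * rng.normal(size=failed.sum()))
    med = np.median(log_life) if failed.any() else float("nan")
    print(f"s={s:5.0f}: runout fraction={1 - failed.mean():.2f}, "
          f"median log life among failures={med:.2f}")
```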


Technometrics | 1984

A Comparison of Accelerated Life Test Plans for Weibull and Lognormal Distributions and Type I Censoring

William Q. Meeker

Previous work on planning accelerated life test plans for the Weibull and lognormal distributions has concentrated on optimum test plans that minimize the variance of some specified estimator. However, these test plans use tests at only two levels of stress and, thus, have serious practical limitations. This article compares optimum test plans and some compromise test plans with respect to additional criteria including (a) the ability to detect departures from the assumed stress-life relationship and (b) robustness to departures from the assumptions used in determining the plans. The comparisons are based on the large sample properties of maximum likelihood estimators, and the test plans are compared over a range of practical testing situations. The comparisons suggest some general rules for planning accelerated life tests.
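
A much-simplified illustration of such a comparison, ignoring censoring and the ML-based criteria the article actually uses: with log life linear in stress and no censoring, the variance of the extrapolated mean log life at the use stress follows the standard linear-regression formula, so two hypothetical allocations can be compared directly:

```python
import numpy as np

def extrap_var(levels, alloc, x_use=0.0, sigma=1.0):
    """Var of the fitted mean log life at x_use for a given plan:
    sigma^2 * (1/n + (x_use - xbar)^2 / Sxx)."""
    x = np.repeat(levels, alloc)
    xbar = x.mean()
    sxx = ((x - xbar) ** 2).sum()
    return sigma**2 * (1.0 / x.size + (x_use - xbar) ** 2 / sxx)

# Hypothetical two-level plan vs a three-level compromise, 60 units each.
two_level = extrap_var([0.6, 1.0], [42, 18])
compromise = extrap_var([0.6, 0.8, 1.0], [30, 15, 15])
print("two-level plan variance :", round(two_level, 3))
print("compromise plan variance:", round(compromise, 3))
# The compromise plan gives up a little precision, but its middle stress
# level makes departures from a linear stress-life relationship detectable.
```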


International Statistical Review | 1993

A Review of Recent Research and Current Issues in Accelerated Testing

William Q. Meeker; Luis A. Escobar

Accelerated tests are used to obtain timely information on the life distribution or performance over time of products. Test units are used more frequently than usual or are subjected to higher-than-usual levels of stresses such as temperature and voltage. Then the results are used to make predictions about product life or performance over time at the more moderate use or design conditions. Changes in technology, the calls for rapid product development, and the need to continuously improve product reliability have put new demands on the applications for these tests. In this paper we briefly review the basic statistical and other ideas behind accelerated testing and give an overview of some current and planned statistical research to improve accelerated test planning and methods.

Today's manufacturers are facing strong pressure to develop newer, higher-technology products in record time, while improving productivity, product field reliability, and overall quality. This has motivated the development of methods like concurrent engineering and encouraged wider use of designed experiments for product and process improvement efforts. The requirements for higher reliability have increased the need for more up-front testing of materials, components, and systems. This is in line with the generally accepted modern quality philosophy for producing high-reliability products: achieve high reliability by improving the design and manufacturing processes; move away from reliance on inspection to achieve high reliability.

Estimating the time-to-failure distribution or long-term performance of components of high-reliability products is particularly difficult. Most modern products are designed to operate without failure for years, tens of years, or more. Thus few units will fail or degrade appreciably in a test of practical length at normal use conditions. For this reason, accelerated tests (ATs) are used widely in manufacturing industries, particularly to obtain timely information on the reliability of product components and materials. Generally, information from tests at high levels of stress (e.g., use rate, temperature, voltage, or pressure) is extrapolated, through a physically reasonable statistical model, to obtain estimates of life or long-term performance at lower, normal levels of stress. In some cases stress is increased or otherwise changed during the course of a test (step-stress and progressive-stress ATs). AT results are used in the reliability-design process to assess or demonstrate component and subsystem reliability, certify components, detect failure ...


Biometrics | 1992

Assessing influence in regression analysis with censored data.

Luis A. Escobar; William Q. Meeker

In this paper we show how to evaluate the effect that perturbations to the model, data, or case weights have on maximum likelihood estimates from censored survival data. The ideas and methods also apply to other nonlinear estimation problems. We review the ideas behind using log-likelihood displacement and local influence methods. We describe new interpretations for some local influence statistics and show how these statistics extend and complement traditional case-deletion influence statistics for linear least squares. These statistics identify individual cases, and combinations of cases, that have an important influence on estimates of parameters and functions of these parameters. We illustrate the methods by reanalyzing the Stanford Heart Transplant data with a parametric regression model.
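
A minimal sketch of the case-deletion end of this toolkit, using likelihood displacement for a censored exponential model; this is a toy stand-in, since the paper treats general parametric regression and local influence as well:

```python
import numpy as np

rng = np.random.default_rng(2)

# Censored exponential data: the ML estimate of the rate is
# (#failures) / (total exposure time).
t = rng.exponential(scale=5.0, size=20)
cens_time = 8.0
obs = np.minimum(t, cens_time)
fail = t < cens_time

def loglik(lam):
    # failures contribute log density, censored cases log survival
    return fail.sum() * np.log(lam) - lam * obs.sum()

lam_hat = fail.sum() / obs.sum()

# Likelihood displacement LD_i = 2*[l(lam_hat) - l(lam_hat_without_i)],
# with l always evaluated on the full data; large LD_i flags cases whose
# removal moves the estimate most.
ld = np.empty(obs.size)
for i in range(obs.size):
    keep = np.ones(obs.size, dtype=bool)
    keep[i] = False
    lam_i = fail[keep].sum() / obs[keep].sum()
    ld[i] = 2.0 * (loglik(lam_hat) - loglik(lam_i))

print(f"most influential case: {ld.argmax()}, LD = {ld.max():.4f}")
```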


IEEE Transactions on Reliability | 1995

Statistical tools for the rapid development and evaluation of high-reliability products

William Q. Meeker; Michael Hamada

Today's manufacturers face increasingly intense global competition. To remain profitable, they are challenged to design, develop, test, and manufacture high-reliability products in ever-shorter product-cycle times and, at the same time, remain within stringent cost constraints. Design, manufacturing, and reliability engineers have developed an impressive array of tools for producing reliable products. These tools will continue to be important. However, due to changes in the way that new product concepts are being developed and brought to market, there is a need for change in the usual methods used for design-for-reliability and reliability testing, assessment, and improvement programs. This tutorial uses a conceptual degradation-based reliability model to describe the role of, and need for, integration of reliability data sources. These sources include accelerated degradation testing, accelerated life testing (for materials and components), accelerated multifactor robust-design experiments and over-stress prototype testing (for subsystems and systems), and the use of field data (especially early-production) to produce a robust, high-reliability product and to provide a process for continuing improvement of reliability of existing and future products. Manufacturers need to develop economical and timely methods of obtaining, at each step of the product design and development process, the information needed to meet overall reliability goals. We emphasize the need for intensive, effective upstream testing of product materials, components, and design concepts.

Collaboration


Dive into William Q. Meeker's collaboration.

Top Co-Authors:

Luis A. Escobar (Louisiana State University)
Ming Li (Iowa State University)
Brian Weaver (Los Alamos National Laboratory)