William R. Myers
Procter & Gamble
Publications
Featured research published by William R. Myers.
Drug Information Journal | 2000
William R. Myers
A major problem in the analysis of clinical trials is missing data caused by patients dropping out of the study before completion. This problem can bias treatment comparisons and also reduce the overall statistical power of the study. This paper discusses some basic issues surrounding missing data as well as potential “watch outs.” The topic of missing data often receives little attention until it is time for data collection and data analysis. This paper provides design considerations that can reduce the likelihood of patients dropping out of a clinical study. In addition, the concept of the missing-data mechanism is discussed. Five general strategies for handling missing data are presented: (1) complete-case analysis, (2) weighting methods, (3) imputation methods, (4) analyzing the data as incomplete, and (5) “other” methods. Within each strategy, several methods are presented along with their advantages and disadvantages. Also briefly discussed is how the International Conference on Harmonization (ICH) addresses the issue of missing data. Finally, several of the methods illustrated in the paper are compared using a simulated data set.
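A minimal sketch (not from the paper) of two of these strategies applied to a hypothetical two-arm data set with dropouts; the variable names, data, and dropout rate are illustrative only.

```python
# Minimal sketch (not from the paper): complete-case analysis vs. simple
# mean imputation on a toy two-arm data set with simulated dropouts.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "arm": np.repeat(["control", "treatment"], n // 2),
    "outcome": np.concatenate([rng.normal(10, 2, n // 2),
                               rng.normal(12, 2, n // 2)]),
})
# Simulate dropout: roughly 20% of outcomes are missing.
df.loc[rng.random(n) < 0.2, "outcome"] = np.nan

# Strategy 1: complete-case analysis (drop rows with a missing outcome).
complete = df.dropna()
print(complete.groupby("arm")["outcome"].mean())

# Strategy 3: single mean imputation within each arm (simple, but it
# understates variability; multiple imputation is usually preferred).
imputed = df.copy()
imputed["outcome"] = imputed.groupby("arm")["outcome"].transform(
    lambda s: s.fillna(s.mean()))
print(imputed.groupby("arm")["outcome"].mean())
```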
Journal of Quality Technology | 2005
William R. Myers; William A. Brenneman; Raymond H. Myers
Robust Parameter Design (RPD) has been used extensively in industrial experiments since its introduction by Genichi Taguchi. RPD has been studied and applied, in most cases, assuming a linear model under standard assumptions. More recently, RPD has been considered in a generalized linear model (GLM) setting. In this paper, we apply a general dual-response approach when using RPD in the case of a GLM. We motivate the need for exploring both the process mean and process variance by discussing situations in which a compromise between the two is necessary. Several examples are provided to further motivate the dual-response approach when applying RPD in a GLM setting.
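A minimal sketch of the dual-response compromise under hypothetical fitted mean and variance models (not the paper's examples): the control setting is chosen to trade off squared bias from the target against process variance.

```python
# Minimal sketch of a dual-response trade-off (hypothetical fitted models,
# not the paper's examples): choose the control setting x that balances
# keeping the process mean on target against keeping the variance small.
import numpy as np
from scipy.optimize import minimize_scalar

target = 50.0
mean_hat = lambda x: 45.0 + 6.0 * x - 1.0 * x**2       # fitted process mean
var_hat  = lambda x: np.exp(0.5 + 0.8 * (x - 1.0)**2)  # fitted process variance

# Squared-error-loss compromise: minimize (bias)^2 + variance over x in [0, 4].
mse = lambda x: (mean_hat(x) - target)**2 + var_hat(x)
res = minimize_scalar(mse, bounds=(0.0, 4.0), method="bounded")
print(f"compromise setting x* = {res.x:.3f}, "
      f"mean = {mean_hat(res.x):.2f}, variance = {var_hat(res.x):.2f}")
```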
Technometrics | 2015
Shan Ba; William R. Myers; William A. Brenneman
Sliced Latin hypercube designs (SLHDs) have important applications in designing computer experiments with continuous and categorical factors. However, a randomly generated SLHD can be poor in terms of space-filling, and with the existing construction method, which generates the SLHD column by column using sliced permutation matrices, it is also difficult to search for the optimal SLHD. In this article, we develop a new construction approach that first generates a small Latin hypercube design in each slice and then arranges these slices together to form the SLHD. The new approach is intuitive and can be easily adapted to generate orthogonal SLHDs and orthogonal array-based SLHDs. More importantly, it enables us to develop general algorithms that can search for the optimal SLHD efficiently.
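A minimal sketch of the slice-first construction idea, assuming t slices of m runs each and no space-filling optimization; the implementation details here are illustrative, not the article's algorithms.

```python
# Minimal sketch of a slice-first SLHD construction (no space-filling
# optimization): t slices, m runs per slice, d factors; n = t*m runs total.
import numpy as np

def sliced_lhd(t, m, d, rng=None):
    rng = np.random.default_rng(rng)
    n = t * m
    design = np.empty((n, d))
    for j in range(d):
        # Step 1: an m-level Latin hypercube column in each slice.
        coarse = np.array([rng.permutation(m) for _ in range(t)])  # t x m
        # Step 2: expand each coarse level into t distinct fine levels so the
        # stacked column uses every level 0..n-1 exactly once.
        fine = np.empty((t, m), dtype=int)
        for level in range(m):
            slots = rng.permutation(t)        # which fine level each slice gets
            for s in range(t):
                pos = np.where(coarse[s] == level)[0][0]
                fine[s, pos] = level * t + slots[s]
        # Step 3: jitter within cells and scale to (0, 1).
        design[:, j] = (fine.reshape(-1) + rng.random(n)) / n
    return design.reshape(t, m, d)            # indexed as slice, run, factor

slices = sliced_lhd(t=3, m=4, d=2, rng=1)
print(slices.round(3))
```

Each slice collapses to an m-run Latin hypercube, while the stacked n runs form a Latin hypercube on the fine grid, which is the defining SLHD structure.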
Journal of Statistical Planning and Inference | 1994
William R. Myers; Raymond H. Myers; Walter H. Carter
Alphabetic optimality has become an important component of experimental design in the case of the standard linear model. Many design criteria have been developed in order to produce optimal designs based on either parameter estimation or the prediction of a response. However, relatively little attention has been devoted to developing designs in the nonlinear model case (e.g., the logistic regression model). D-optimality is the only criterion that has received much attention in the literature for the logistic regression model. This paper develops optimal designs for the logistic model based on the prediction of a response. Comparisons are made between different optimal designs by computing efficiencies based on certain optimality criteria. The robustness properties of some of the newly created designs are also investigated. Finally, the development of optimal designs for asymmetric regions of the logistic regression model is presented.
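A minimal sketch of the design ingredients for simple logistic regression: the Fisher information of a candidate two-point design evaluated at assumed parameter values, with the D-criterion as the objective. This illustrates the standard locally D-optimal setup rather than the paper's prediction-based designs.

```python
# Minimal sketch: evaluating the D-criterion for a two-point design in
# simple logistic regression under assumed parameter values (beta0, beta1).
# Not the paper's prediction-based constructions, just the ingredients.
import numpy as np
from scipy.optimize import minimize

beta = np.array([0.0, 1.0])   # assumed (intercept, slope); locally optimal
                              # designs for nonlinear models depend on guesses

def fisher_info(xs, weights):
    """Information matrix: sum_i w_i * p_i * (1 - p_i) * x_i x_i^T."""
    M = np.zeros((2, 2))
    for x, w in zip(xs, weights):
        xv = np.array([1.0, x])
        p = 1.0 / (1.0 + np.exp(-(beta @ xv)))
        M += w * p * (1.0 - p) * np.outer(xv, xv)
    return M

def neg_log_det(xs):
    return -np.log(np.linalg.det(fisher_info(xs, [0.5, 0.5])))

res = minimize(neg_log_det, x0=[-2.0, 2.0], method="Nelder-Mead")
print("locally D-optimal two-point design:", np.sort(res.x).round(3))
# For beta = (0, 1) the points land near +/-1.543, i.e. at the doses where
# the response probability is 0.176 and 0.824, the classical result.
```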
Quality and Reliability Engineering International | 2006
Timothy J. Robinson; William A. Brenneman; William R. Myers
When categorical noise variables are present in the Robust Parameter Design (RPD) context, it is possible to reduce process variance not only by manipulating the levels of the control factors but also by adjusting the proportions associated with the levels of the categorical noise factor(s). When no adjustment factors exist, or when the adjustment factors are unable to bring the process mean close to target, a popular approach for determining optimal operating conditions is to find the levels of the control factors that minimize the estimated mean squared error of the response. Although this approach is effective, engineers may have a difficult time translating mean squared error into quality. We propose the use of a parts per million defective objective function. Furthermore, we point out that in many situations the levels of the control factors are not equally desirable due to cost and/or time issues. We have termed these types of factors non-uniform control factors. We propose the use of desirability functions to determine optimal operating conditions when non-uniform control factors are present and illustrate this methodology with an example from industry.
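A minimal sketch of the parts-per-million-defective idea, assuming an approximately normal response; the specification limits and predicted values are hypothetical, not the paper's industrial example.

```python
# Minimal sketch: converting an estimated process mean/variance into parts
# per million defective, assuming an approximately normal response and
# hypothetical specification limits (not the paper's industrial example).
from scipy.stats import norm

def ppm_defective(mean, std, lsl, usl):
    """PPM outside the spec limits [lsl, usl] under a normal approximation."""
    p_out = norm.cdf(lsl, mean, std) + norm.sf(usl, mean, std)
    return 1e6 * p_out

# Candidate control-factor settings, each with a predicted mean and std dev.
candidates = {"setting A": (50.2, 1.4), "setting B": (49.6, 1.1)}
for name, (mu, sd) in candidates.items():
    print(name, round(ppm_defective(mu, sd, lsl=46.0, usl=54.0), 1), "ppm")
```

Unlike mean squared error, the PPM value has a direct quality interpretation, which is the motivation given in the abstract.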
Journal of Quality Technology | 2003
William A. Brenneman; William R. Myers
Robust parameter design has been extensively studied and applied in industrial experiments over the past twenty years. The purpose of robust parameter design is to design a process or product that is robust to uncontrollable changes in the noise variables. In many applications the noise variables are continuous, for which several assumptions, often difficult to verify in practice, are necessary to estimate the response variance. In this paper, we consider the case where the noise variable is categorical in nature. We discuss the impact that the assumptions for continuous and categorical noise variables have on the robust settings and on the overall process variance estimate. A designed experiment from industry is presented to illustrate the results.
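A minimal sketch of the process variance calculation when the noise variable is categorical with known level proportions; the predicted responses and proportions below are hypothetical, not values from the paper's designed experiment.

```python
# Minimal sketch: process variance over a categorical noise variable with
# known level proportions (hypothetical predicted responses at one fixed
# control setting; not the paper's designed experiment).
import numpy as np

# Predicted response at the current control setting for each noise level,
# and the proportion of each noise level seen in production.
pred = np.array([24.0, 27.5, 31.0])     # noise levels 1..3
prop = np.array([0.5, 0.3, 0.2])

process_mean = np.sum(prop * pred)
process_var = np.sum(prop * (pred - process_mean) ** 2)
print(f"process mean = {process_mean:.2f}, process variance = {process_var:.2f}")
# No distributional assumption on the noise variable is needed beyond the
# level proportions, in contrast to the continuous-noise case.
```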
Journal of Biopharmaceutical Statistics | 1996
William R. Myers; Raymond H. Myers; Walter H. Carter; Kimber L. White
In this paper we focus on the use of a two-stage procedure for logistic regression that emphasizes predicting the response through the use of the Q-optimality criterion. The use of D-optimality in the first stage is primarily to allow the best possible parameter estimates as one enters the second stage. However, it is important to understand that there are many ways to formulate the two-stage procedure. It may involve any optimality criterion in either stage; in fact, theoretically, one need not stop at two stages. It was our intention in this paper to demonstrate the potential of the two-stage procedure in cases in which good initial parameter estimates are not available. Investigators interested in the software for the two-stage procedure described here should contact Dr. William R. Myers.
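A minimal sketch of the two-stage idea on simulated data: a locally D-optimal first stage under poor initial guesses, maximum likelihood fitting, and a second stage chosen under an average-prediction-variance (Q-type) criterion evaluated at the stage-one estimates. The criterion form, sample sizes, and parameter values are illustrative assumptions, not the paper's procedure.

```python
# Minimal sketch of a two-stage design for simple logistic regression
# (simulated data, hypothetical true parameters; the Q criterion here is a
# simple average prediction variance over a grid, not the paper's exact form).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
true_beta = np.array([-1.0, 2.0])
guess_beta = np.array([0.0, 1.0])          # poor initial guesses

def info(beta, xs, n_per_point):
    M = np.zeros((2, 2))
    for x in xs:
        xv = np.array([1.0, x])
        p = 1.0 / (1.0 + np.exp(-(beta @ xv)))
        M += n_per_point * p * (1 - p) * np.outer(xv, xv)
    return M

# Stage 1: locally D-optimal two-point design under the initial guesses.
stage1 = minimize(lambda xs: -np.log(np.linalg.det(info(guess_beta, xs, 25))),
                  x0=[-2.0, 2.0], method="Nelder-Mead").x

# Run stage 1 (simulated responses) and fit by maximum likelihood.
x_obs = np.repeat(stage1, 25)
p_true = 1 / (1 + np.exp(-(true_beta[0] + true_beta[1] * x_obs)))
y_obs = rng.binomial(1, p_true)
def nll(beta):
    eta = beta[0] + beta[1] * x_obs
    return np.sum(np.log1p(np.exp(eta)) - y_obs * eta)
beta_hat = minimize(nll, x0=guess_beta, method="Nelder-Mead").x

# Stage 2: choose augmenting points to minimize average prediction variance
# (a Q-type criterion) over a region, now evaluated at the stage-1 estimates.
grid = np.linspace(-1.0, 2.0, 31)
def q_crit(xs):
    Minv = np.linalg.inv(info(beta_hat, np.concatenate([stage1, xs]), 25))
    return np.mean([np.array([1, g]) @ Minv @ np.array([1, g]) for g in grid])
stage2 = minimize(q_crit, x0=[0.0, 1.0], method="Nelder-Mead").x

print("stage 1 points:", np.sort(stage1).round(3),
      "| stage 2 points:", np.sort(stage2).round(3))
```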
Journal of Statistics Education | 2009
Timothy J. Robinson; William A. Brenneman; William R. Myers
While split-plot designs have received considerable attention in the literature over the past decade, there seems to be a general lack of intuitive understanding of the error structure of these designs and the resulting statistical analysis. Typically, students learn the proper error terms for testing factors of a split-plot design via expected mean squares. This does not provide any real insight into why a particular error term is appropriate for a given factor effect. We provide a way to intuitively understand the error structure and resulting statistical analysis in split-plot designs by building on concepts found in simple designs, such as completely randomized and randomized complete block designs, and then provide a way for students to “see” the error structure graphically. The discussion is couched around an example from paper manufacturing.
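A minimal sketch (simulated data, hypothetical effect sizes) of the two error strata: fitting a mixed model with a random whole-plot intercept is one way to make the whole-plot factor face its larger, whole-plot error term rather than the subplot residual.

```python
# Minimal sketch (simulated data, hypothetical effect sizes): a split-plot
# layout has two error strata, so the whole-plot factor must be judged
# against whole-plot-to-whole-plot variation, not the subplot residual.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
wp_id = 0
for rep in range(4):                      # 4 whole plots per level of A
    for a in ["A1", "A2"]:                # hard-to-change whole-plot factor
        wp_error = rng.normal(0, 2.0)     # whole-plot error
        for b in ["B1", "B2", "B3"]:      # easy-to-change subplot factor
            y = (1.5 if a == "A2" else 0.0) + {"B1": 0, "B2": 1, "B3": 2}[b] \
                + wp_error + rng.normal(0, 0.5)   # subplot error
            rows.append({"wholeplot": wp_id, "A": a, "B": b, "y": y})
        wp_id += 1
df = pd.DataFrame(rows)

# A random intercept per whole plot puts A in its correct (larger) error stratum.
fit = smf.mixedlm("y ~ A * B", df, groups=df["wholeplot"]).fit()
print(fit.summary())
```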
Technometrics | 2018
V. Roshan Joseph; Li Gu; Shan Ba; William R. Myers
To identify the robust settings of the control factors, it is very important to understand how they interact with the noise factors. In this article, we propose space-filling designs for computer experiments that are more capable of accurately estimating the control-by-noise interactions. Moreover, existing space-filling designs focus on distributing points uniformly in the design space, which is not suitable for noise factors because they usually follow nonuniform distributions such as the normal distribution. This would suggest placing more points in the regions with high probability mass. However, noise factors also tend to have a smooth relationship with the response, and therefore placing more points toward the tails of the distribution is also useful for accurately estimating that relationship. These two opposing effects make the experimental design methodology a challenging problem. We propose optimal and computationally efficient solutions to this problem and demonstrate their advantages using simulated examples and a real industry example involving a manufacturing packing line. Supplementary materials for the article are available online.
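A minimal sketch of the tension described above (not the article's proposed designs): spacing points evenly in probability via the normal quantile function concentrates runs where the noise factor has most of its mass, while a uniform spread on a wide interval places more runs in the tails.

```python
# Minimal sketch (not the article's construction): two ways of placing design
# points for a standard-normal noise factor, illustrating the tension between
# covering high-probability mass and covering the tails.
import numpy as np
from scipy.stats import norm

n = 9
u = (np.arange(n) + 0.5) / n              # evenly spaced in (0, 1)

prob_based = norm.ppf(u)                  # inverse-CDF: dense near the mode
uniform_spread = np.linspace(-3, 3, n)    # uniform: more points in the tails

print("probability-based placement:", prob_based.round(2))
print("uniform placement:          ", uniform_spread.round(2))
```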
Maturitas | 2000
Edda Gomez-Panzani; Melanie Williams; James T. Kuznicki; William R. Myers; Steven A. Zoller; Carol A Bixler; Laura C Winkler