Publication


Featured research published by James M. Lucas.


Journal of Quality Technology | 1994

How to Achieve a Robust Process Using Response Surface Methodology

James M. Lucas

Taguchi's experimental system is used to develop robust processes, that is, processes that are insensitive to variations in uncontrollable variables, such as environmental variables. Taguchi's recommended designs can be considered to be response surfac..
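
One way to make the idea concrete: fit a response-surface model that includes control-by-noise interaction terms, then choose control settings that flatten the fitted response in the noise direction. The sketch below uses simulated data and assumed factor names (x for a controllable factor, z for a noise factor); it is an illustration, not a reproduction of the paper's designs.

    # Hypothetical illustration: robustness via a control-by-noise interaction.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 64
    x = rng.uniform(-1, 1, n)      # controllable factor (coded units)
    z = rng.uniform(-1, 1, n)      # noise (environmental) factor
    y = 10 + 2*x + 3*z - 3*x*z + rng.normal(0, 0.5, n)   # simulated process

    fit = smf.ols("y ~ x + z + x:z", pd.DataFrame({"x": x, "z": z, "y": y})).fit()
    b_z, b_xz = fit.params["z"], fit.params["x:z"]

    # The slope of y in the noise direction is b_z + b_xz * x; setting it to
    # zero gives the robust setting of the control factor.
    print("robust setting of x (coded units):", -b_z / b_xz)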


Journal of Quality Technology | 2004

Factorial experiments when factor levels are not necessarily reset

Derek Webb; James M. Lucas; John J. Borkowski

In industry, the run order of experiments is often randomized, but this does not guarantee that all factor levels are reset from one run to the next. When this happens and the levels of factors are the same in successive runs, the assumption of independent observations from run to run may be violated or incorrect. In this case, an ordinary least squares analysis can produce biased estimates of the coefficients in the model, which leads to erroneous test results and inferences. In this paper we describe how not resetting the levels of one or more factors in successive runs can result in less precision in parameter estimates and a larger than expected prediction variance. We present formulas for the prediction variance and the expected prediction variance for this situation. These quantities are important because they allow us to compare the prediction properties of experiments that are completely randomized to experiments where the levels of one or more factors are not reset. We give an analysis of an industrial experiment of this type and recommendations for carrying out factorial experiments where the levels of one or more factors are not reset.
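
A small simulation makes the not-resetting effect concrete: when a factor's level is left alone between consecutive runs that call for the same level, the setup error carries over, observations become correlated, and the realized variance of the OLS slope estimate no longer matches the completely randomized case. The sketch below uses assumed error components and a +/-1 coded two-level factor; it is an illustration, not the paper's formulas.

    # Monte Carlo comparison: resetting vs. not resetting a factor level.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sim, n_runs = 5000, 16
    sd_setup, sd_run = 1.0, 0.5            # assumed error components

    def slope_variance(reset_every_run):
        slopes = np.empty(n_sim)
        for s in range(n_sim):
            levels = rng.permutation(np.repeat([-1.0, 1.0], n_runs // 2))
            setup = np.empty(n_runs)
            for i in range(n_runs):
                if i == 0 or reset_every_run or levels[i] != levels[i - 1]:
                    setup[i] = rng.normal(0, sd_setup)    # factor freshly set
                else:
                    setup[i] = setup[i - 1]               # level carried over
            y = 2.0 * levels + setup + rng.normal(0, sd_run, n_runs)
            slopes[s] = (y @ levels) / n_runs             # OLS slope for +/-1 coding
        return slopes.var()

    print("var(slope), levels reset    :", slope_variance(True))
    print("var(slope), levels not reset:", slope_variance(False))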


Journal of Quality Technology | 2002

L^k Factorial Experiments With Hard-To-Change and Easy-To-Change Factors

Huey L. Ju; James M. Lucas

Experimental factors, especially when they are hard-to-change, are often not independently reset on each run. For example, it would be very costly to let a mold cool down between runs and then reheat it when the same mold temperature is required on successive runs. Therefore, even when the experiment is run using a random run order, a “completely randomized design” requiring a single error term is not obtained. There is a restriction on randomization that causes the experiment to require more than one error term in the model. The analysis of the experiment with restricted randomization often proceeds, however, as if a completely randomized design were run. This ignores the fact that the experiment inherently has a “split plot” structure. We examine properties of experiments run in this very common manner and investigate L^k experiments using two error terms. One error term is associated with setting the “hard-to-change factors” and the other represents the remaining error. We develop the expected covariance matrices for observations and for the estimated regression coefficients in experiments using a random run order and using run orders that restrict randomization partially or completely. We compare the precision of the estimators of the regression coefficients for the various run order scenarios and show that classical split-plot blocking is superior to a random run order.
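
When the whole-plot grouping is recorded (the consecutive runs over which the hard-to-change factor was not reset), the two-error-term analysis can be carried out with a standard mixed model: the whole plot enters as a random effect and the residual captures the run-to-run error. A minimal sketch, assuming a data file and column names (H, E, y, whole_plot) that are not from the paper:

    # Two error terms: whole-plot (group) variance plus residual run error.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("runs.csv")                      # hypothetical run-level data
    model = smf.mixedlm("y ~ H * E", data=df, groups=df["whole_plot"])
    result = model.fit()
    print(result.summary())                           # group variance = whole-plot error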


Communications in Statistics-theory and Methods | 1997

Bias in test statistics when restrictions in randomization are caused by factors

Jitendra Ganju; James M. Lucas

It is fairly common in the conduct of an experiment to randomize, either completely or partially (within a block), the allocation of treatments to experimental units. The act of randomization is easy to envision in experiments that require an equal chance for every experimental unit, independent of other units, to receive any treatment. How does one randomize, however, factorial experiments that involve factors having more than one level? The practice in industry has been to obtain at random the sequence of allocating treatment combinations to experimental units. The analyses then proceed assuming the design to be properly randomized. Using only a random run order, however, does not render a randomized experiment as conceived by Fisher. A factor that requires the same level for successive runs may not be independently reset from run-to-run. An experiment therefore becomes an (unbalanced) split-plot experiment due to the restriction imposed on randomization of not resetting factor levels for every run. We m...


Journal of Statistical Planning and Inference | 1999

Detecting randomization restrictions caused by factors

Jitendra Ganju; James M. Lucas

In practice, randomization in factorial experiments has generally meant the sequential application of treatment combinations to experimental units determined by a random run order and the resetting of levels of factors only when levels change from one run to the next. Whenever factor levels are not reset for consecutive runs requiring the same level of a factor, the experiment is inadvertently split-plotted. The number and the size of the whole plots are formed randomly. The unbalanced split-plot experiment is assumed by experimenters to be completely randomized and analysis usually proceeds by the method of ordinary least squares. We show that once the experiment has been run it is often difficult, sometimes impossible, to determine the effect of such inadvertent split-plotting. Retrospectively, therefore, the correct analysis often cannot be performed. The aims of the paper are: (1) to urge experimenters and statisticians to recognize prospectively the difficulty in resetting factor levels so that split-plotting is deliberate but designed in a manner beneficial to both the conduct of the experiment and analysis of the data; (2) to make recommendations regarding more comprehensive reporting of data from randomized experiments.
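
The inadvertent whole plots are easy to reconstruct from the run order once the hard-to-change factor's levels are known: consecutive runs requiring the same level, and therefore possibly sharing one setting of that factor, form a whole plot of random size. A minimal sketch with a hypothetical run order:

    # Group consecutive runs at the same level into (randomly sized) whole plots.
    import pandas as pd

    def label_whole_plots(levels):
        """Whole-plot id increments whenever the factor level changes."""
        return (levels != levels.shift()).cumsum()

    run_order = pd.Series([1, 1, -1, -1, -1, 1, -1, 1])   # hypothetical random run order
    print(label_whole_plots(run_order).tolist())          # [1, 1, 2, 2, 2, 3, 4, 5]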


The American Statistician | 1993

Skills for Industrial Statisticians to Survive and Prosper in the Emerging Quality Environment

Roger Hoerl; Jeffrey H. Hooper; Peter J. Jacobs; James M. Lucas

Abstract There is a growing perception that statisticians are not typically equipped with the skills required to function effectively in industry. This disparity appears to be greatest for those statisticians working in the area of quality and productivity improvement. Boroto and Zahn summarized the problem when they stated: “A sense of dissatisfaction exists in the statistics profession stemming from a consensus that statisticians in all working environments are undervalued and under-utilized.” “To correct the situation,” noted Box, “requires a restructuring comparable in its depth with the political restructuring now going on in Eastern Europe.” In October 1990, the Education Committee of the Statistics Division of the American Society for Quality Control (ASQC) chartered a team to develop a “white paper” on the skills required to function effectively as an industrial statistician working in the area of quality and productivity improvement. This article is the result of the team's effort. Potential root ...


International Journal of Productivity and Quality Management | 2006

Economic control chart policies for monitoring variables

Erwin M. Saniga; Thomas P. McWilliams; Darwin J. Davis; James M. Lucas

In this paper, we compare the costs of an economically designed CUSUM control chart and a common Shewhart control chart, the X-bar chart, for many configurations of parameters. Our results indicate that there are identifiable regions where there is an overwhelming cost advantage to using CUSUM charts. Additionally, we find that there are identifiable regions where an X-bar chart can be employed without any substantial economic disadvantage. Finally, we identify regions where a regular search policy is less costly than a policy of using either a CUSUM or X-bar chart.
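
The detection-speed difference that drives this cost comparison can be illustrated with a run-length simulation; the snippet below is only a simplified proxy (standardized individual observations, assumed chart parameters), not the economic model used in the paper.

    # Average run length after a one-sigma mean shift: CUSUM vs. X-bar (individuals).
    import numpy as np

    rng = np.random.default_rng(2)

    def rl_xbar(shift, L=3.0):
        t = 0
        while True:
            t += 1
            if abs(rng.normal(shift, 1.0)) > L:            # 3-sigma Shewhart limit
                return t

    def rl_cusum(shift, k=0.5, h=5.0):
        s, t = 0.0, 0
        while True:
            t += 1
            s = max(0.0, s + rng.normal(shift, 1.0) - k)   # one-sided CUSUM
            if s > h:
                return t

    shift = 1.0
    print("ARL X-bar:", np.mean([rl_xbar(shift) for _ in range(2000)]))
    print("ARL CUSUM:", np.mean([rl_cusum(shift) for _ in range(2000)]))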


Iie Transactions | 2006

Detecting improvement using Shewhart attribute control charts when the lower control limit is zero

James M. Lucas; Darwin J. Davis; Erwin M. Saniga

In this paper, we present a method to monitor count data so as to be able to detect improvement when the counts are low enough to cause the lower limit to be zero. The method, which is proposed as an add-on to the conventional Shewhart control chart, consists in counting the number of samples in which zero defectives or zero defects per unit occur and signaling an increase in quality if k-in-a-row or 2-in-t samples have zero counts of defectives or zero defects per unit. This method enjoys some similarities to the very popular Shewhart control chart in that it is easy to design, understand and use. It is flexible, robust, and, like the Shewhart chart, yields detection frequencies that are optimal for very large shifts and good for other shifts. Some comparisons with traditional CUSUM charts are provided. Figures enabling Shewhart control chart users to easily design low-side add-on control charts are given for c and np charts.
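
The k-in-a-row part of the add-on rule amounts to watching the stream of sample counts and signaling when enough consecutive samples show zero defectives; the 2-in-t variant is handled analogously. The value of k and the surrounding scaffolding below are illustrative assumptions:

    # Signal an improvement after k consecutive zero-count samples.
    from typing import Iterable, Optional

    def zeros_in_a_row_signal(counts: Iterable[int], k: int = 5) -> Optional[int]:
        """Return the sample index at which k consecutive zero counts occur."""
        streak = 0
        for i, c in enumerate(counts):
            streak = streak + 1 if c == 0 else 0
            if streak >= k:
                return i                     # improvement signaled here
        return None                          # no signal

    counts = [2, 0, 1, 0, 0, 0, 0, 0, 1]        # hypothetical np-chart counts
    print(zeros_in_a_row_signal(counts, k=5))   # -> 7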


The American Statistician | 2000

Analysis of Unbalanced Data from an Experiment with Random Block Effects and Unequally Spaced Factor Levels

Jitendra Ganju; James M. Lucas

Abstract We examine a 3^2 factorial experiment that was run on each of 12 days with extra replications of the middle point. Usually, the literature on the analysis of factorial experiments performed in blocks treats the blocks as a fixed effect. Data collected from this experiment, however, were analyzed by treating the block effect as random. The combination of quantitative factor levels and random block effects provides an opportunity to discuss issues that are not commonly encountered in the literature on mixed model analyses of data. In particular, we: (a) discuss modeling the interaction between blocks and regression coefficients; (b) discuss the effect of coding on the estimation of degrees of freedom (df) of the fixed terms; (c) present an oddity with estimating the intercept df; and (d) discuss inference on the mean of the responses. How the experiment may have been run and its consequences are also discussed.
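
Treating the day-to-day block effect as random while the quantitative factors enter as fixed regression terms is a standard mixed-model fit; adding a random slope is one way to model an interaction between blocks and a regression coefficient, as in point (a). The column names and data file below are assumptions for illustration:

    # Random day (block) intercept plus a day-specific slope in x1.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("blocked_experiment.csv")        # hypothetical: y, x1, x2, day
    model = smf.mixedlm("y ~ x1 + x2 + x1:x2", data=df,
                        groups=df["day"], re_formula="~x1")
    print(model.fit().summary())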


Quality Engineering | 2012

Statistical Engineering to Stabilize Vaccine Supply

Julia O'Neill; George Atkins; Dan Curbison; Betsy Flak; James M. Lucas; Dana Metzger; Lindsay Morse; Tejal Shah; Thomas Steinmetz; Kathleen Van Citters; Matthew C. Wiener; Byron Wingerd; Anthony Yourey

ABSTRACT Reliable vaccine supply is a critical public health concern. In this case study, statistical engineering was applied to a complex problem in vaccine production. Statistical techniques ranging from simple graphics to sophisticated time series and variance components models were needed to identify root causes. Custom solutions and monitoring methods were developed to ensure long-term success. Stability of the vaccine product has been improved; long-term variability has been reduced by one third so far, with additional improvements underway. The stability of other vaccines could be improved by applying the same analysis, standardization, and monitoring approaches.

Collaboration


Dive into James M. Lucas's collaborations.

Top Co-Authors

Anthony Yourey (United States Military Academy)
Dan Curbison (United States Military Academy)
Dana Metzger (United States Military Academy)
Derek Webb (Bemidji State University)