
Publication


Featured research published by Gerald J. Hahn.


Technometrics | 1990

Exponentially weighted moving average control schemes: properties and enhancements

James M. Lucas; Michael S. Saccucci; Robert V. Baxley Jr.; William H. Woodall; Hazem D. Maragh; Frederick W. Faltin; Gerald J. Hahn; William T. Tucker; J. Stuart Hunter; John F. MacGregor; Thomas J. Harris

Roberts (1959) first introduced the exponentially weighted moving average (EWMA) control scheme. Using simulation to evaluate its properties, he showed that the EWMA is useful for detecting small shifts in the mean of a process. The recognition that an EWMA control scheme can be represented as a Markov chain allows its properties to be evaluated more easily and completely than has previously been done. In this article, we evaluate the properties of an EWMA control scheme used to monitor the mean of a normally distributed process that may experience shifts away from the target value. A design procedure for EWMA control schemes is given. Parameter values not commonly used in the literature are shown to be useful for detecting small shifts in a process. In addition, several enhancements to EWMA control schemes are considered. These include a fast initial response feature that makes the EWMA control scheme more sensitive to start-up problems, a combined Shewhart EWMA that provides protection against both larg...
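
As a concrete illustration of the scheme described here, the following is a minimal EWMA monitoring sketch in Python. The smoothing constant lam = 0.2, the limit multiplier L = 3, and the simulated shift are illustrative assumptions, not the design values the article derives.

```python
import numpy as np

def ewma_chart(x, target, sigma, lam=0.2, L=3.0):
    """Minimal EWMA control scheme: the EWMA statistic plus exact
    time-varying control limits for the mean of a normal process."""
    z = np.empty(len(x))
    prev = target                     # start the EWMA at the target value
    for i, xi in enumerate(x):
        z[i] = lam * xi + (1 - lam) * prev
        prev = z[i]
    t = np.arange(1, len(x) + 1)
    # Var(z_t) = sigma^2 * lam/(2 - lam) * (1 - (1 - lam)^(2t))
    half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, target - half, target + half

# Simulated process with a small mean shift of 0.75 sigma after t = 20
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 20), rng.normal(0.75, 1.0, 20)])
z, lcl, ucl = ewma_chart(x, target=0.0, sigma=1.0)
out = np.flatnonzero((z < lcl) | (z > ucl))
print("first out-of-control signal at t =", out[0] + 1 if out.size else "none")
```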


The American Statistician | 1999

The Impact of Six Sigma Improvement—A Glimpse into the Future of Statistics

Gerald J. Hahn; William J. Hill; Roger Hoerl; Stephen A. Zinkgraf

Abstract Six Sigma improvement—a highly disciplined and statistically based approach for removing defects from products, processes, and transactions, involving everybody in the corporation—has been adopted as a major initiative by some of our leading companies. This is fundamentally changing the paradigm of how statistics is applied in business and industry, and has had a career-changing impact on those statisticians who have been involved. We describe the Six Sigma initiative and its evolution, the enthusiastic and visionary support by the CEOs at some major corporations that have embraced it, its successes to date, and the impact on statistics and statisticians. We then turn to a major theme—what statisticians must do to be maximally effective in this exciting new environment. These changes will not be limited to the companies that have adopted Six Sigma, or, for that matter, to industry, but are all-pervasive. We discuss the dramatic longer term implications for our profession.


Technometrics | 1993

A systematic approach to planning for a designed industrial experiment

David Coleman; Douglas C. Montgomery; Berton H. Gunter; Gerald J. Hahn; Perry Haaland; Michael O'Connell; Ramón V. León; Anne C. Shoemaker; Kwok-Leung Tsui

Design of experiments and analysis of data from designed experiments are well-established methodologies in which statisticians are formally trained. Another critical but rarely taught skill is the planning that precedes designing an experiment. This article suggests a set of tools for presenting generic technical issues and experimental features found in industrial experiments. These tools are predesign experiment guide sheets to systematize the planning process and to produce organized written documentation. They also help experimenters discuss complex trade-offs between practical limitations and statistical preferences in the experiment. A case study involving the CNC (computer numerical control) machining of jet engine impellers is included.


Technometrics | 1979

A Simple Method for Regression Analysis with Censored Data

Josef Schmee; Gerald J. Hahn

Problems requiring regression analysis of censored data arise frequently in practice. For example, in accelerated testing one wishes to relate stress and average time to failure from data including unfailed units, i.e., censored observations. Maximum likelihood is one method for obtaining the desired estimates; in this paper, we propose an alternative approach. An initial least squares fit is obtained treating the censored values as failures. Then, based upon this initial fit, the expected failure time for each censored observation is estimated. These estimates are then used, instead of the censoring times, to obtain a revised least squares fit and new expected failure times are estimated for the censored values. These are then used in a further least squares fit. The procedure is iterated until convergence is achieved. This method is simpler to implement and explain to non-statisticians than maximum likelihood and appears to have good statistical and convergence properties. The method is illustrated by a...
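
The following is a minimal sketch of this iteration under a normal-errors assumption; the residual-scale estimate and the convergence test are simplifications for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import norm

def schmee_hahn(x, y, censored, tol=1e-6, max_iter=200):
    """Iterative least squares for right-censored y, in the spirit of the
    method described above.  censored[i] is True when y[i] is a censoring
    time rather than an observed failure time."""
    X = np.column_stack([np.ones_like(x), x])
    y_work = y.astype(float).copy()         # start: treat censored values as failures
    beta = np.linalg.lstsq(X, y_work, rcond=None)[0]
    for _ in range(max_iter):
        mu = X @ beta
        sigma = (y_work - mu).std(ddof=2)   # crude residual scale from current fit
        z = (y[censored] - mu[censored]) / sigma
        # Conditional mean of a normal beyond the censoring point:
        # E[Y | Y > c] = mu + sigma * phi(z) / (1 - Phi(z))
        y_work[censored] = mu[censored] + sigma * norm.pdf(z) / norm.sf(z)
        beta_new = np.linalg.lstsq(X, y_work, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            break
        beta = beta_new
    return beta

# Toy example: true line y = 1 + 2x with right censoring at y = 6
rng = np.random.default_rng(0)
x = np.linspace(0.0, 3.0, 30)
y_true = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, 30)
censored = y_true > 6.0
y = np.where(censored, 6.0, y_true)
print(schmee_hahn(x, y, censored))
```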


Technometrics | 1972

Linear Estimation of a Regression Relationship from Censored Data Part I—Simple Methods and Their Application

Wayne Nelson; Gerald J. Hahn

In many regression problems, data on the dependent variable are censored; that is, the values of some observations are known only to be above or else below some value. Such data often arise in accelerated life testing where life is the dependent variable and temperature or stress is the independent variable and some test units have not failed at the time of the analysis. In such situations, the standard techniques of least squares estimation for the parameters of a linear regression model cannot be used, since the values of the censored observations are not known. This is Part I of a two-part series on the theory and application of linear estimation methods for regression analysis using the ordered observations of censored data. The use of these methods is illustrated with analyses of censored data from an accelerated life test of motor insulation and of censored data from tandem specimens in a creep-rupture test on an alloy.


Journal of Quality Technology | 1983

Evaluation of a Start-Up Demonstration Test

Gerald J. Hahn; Julie B. Gage

An equipment acceptance procedure requires c consecutive successful start-ups of the equipment. Procedures are given for finding the statistical distribution of the total number of attempted start-ups to achieve acceptance, assuming that the start-ups a..
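
Such a distribution can be evaluated with a Markov chain over the current run of consecutive successes. The sketch below assumes independent start-ups with a common success probability p; both p and the example values are illustrative assumptions, not from the paper.

```python
import numpy as np

def startup_demo_distribution(c, p, n_max):
    """P(total attempted start-ups = n) for acceptance at c consecutive
    successes.  States 0..c-1 track the current run of successes; a
    failure resets the run, and reaching c means acceptance."""
    probs = np.zeros(n_max + 1)
    state = np.zeros(c + 1)
    state[0] = 1.0
    for n in range(1, n_max + 1):
        new = np.zeros(c + 1)
        new[0] = (1 - p) * state[:c].sum()   # any failure resets the run to 0
        for s in range(c):
            new[s + 1] += p * state[s]       # a success extends the run
        probs[n] = new[c]                    # acceptance on exactly the n-th start-up
        new[c] = 0.0                         # drop absorbed mass; track in-progress runs only
        state = new
    return probs

# Example: accept after c = 4 consecutive successes, success probability p = 0.9
probs = startup_demo_distribution(c=4, p=0.9, n_max=100)
n = np.arange(101)
print("P(N <= 10) =", probs[:11].sum())
print("E[N] (truncated at n_max) =", (n * probs).sum())
```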


Journal of Quality Technology | 1973

A Survey of Prediction Intervals and Their Applications

Gerald J. Hahn; Wayne Nelson

In many practical problems in industry, it is desired to use the results of a previous sample to predict the results of a future sample. For example, data on warranty costs on large motors over the past three years are to be used for planning purposes t..


Journal of the American Statistical Association | 1969

Factors for Calculating Two-Sided Prediction Intervals for Samples from a Normal Distribution

Gerald J. Hahn

Abstract Factors r(k, n; γ) for a normal distribution are given such that one may be 100γ% confident that the two-sided prediction interval x̄ ± r(k, n; γ)s will contain all of k future values, where x̄ and s are the sample mean and standard deviation from n previous observations. A special case is a prediction interval for a single future observation, which can also be obtained using percentage points of the Student t-distribution. Curves which indicate the extent to which two previously proposed approximations overestimate r(k, n; γ) are also given. One of these, based on a Bonferroni Inequality, gives generally satisfactory results for all cases except for small n and large k, but frequently requires unusual percentage points of the Student t-distribution.
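
As a sketch of how such an interval is computed in practice: for k = 1 the factor reduces to a Student t percentage point, and the Bonferroni approximation mentioned above extends it to k > 1. The function below implements that approximation, not the article's exact tabulated factors r(k, n; γ).

```python
import numpy as np
from scipy.stats import t

def two_sided_prediction_interval(sample, k=1, gamma=0.95):
    """Two-sided 100*gamma% prediction interval x_bar +/- r*s for all of
    k future observations from the same normal population.  For k = 1 the
    Student t factor is exact; for k > 1 this uses the Bonferroni
    approximation, which the article shows tends to overestimate r."""
    x = np.asarray(sample, dtype=float)
    n = len(x)
    xbar, s = x.mean(), x.std(ddof=1)
    # Bonferroni: split the miss probability (1 - gamma) across k future values
    q = 1.0 - (1.0 - gamma) / (2.0 * k)
    r = t.ppf(q, df=n - 1) * np.sqrt(1.0 + 1.0 / n)
    return xbar - r * s, xbar + r * s

# Example: interval intended to contain k = 5 future values with 95% confidence
rng = np.random.default_rng(7)
print(two_sided_prediction_interval(rng.normal(10, 2, size=12), k=5, gamma=0.95))
```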


Journal of the American Statistical Association | 1970

Additional Factors for Calculating Prediction Intervals for Samples from a Normal Distribution

Gerald J. Hahn

Abstract A prediction interval for k future observations is an interval which is said to contain the values of all k such observations with a specified probability, based on the results of a past sample of n observations from the same population. Such an interval is frequently required in practical applications. This article provides factors for a normal distribution for: (1) calculating a two-sided 100γ percent prediction interval for (a) γ = 0.90 and many values of k ≤ 20 and n ≥ 4, and (b) γ = 0.95 and 0.99 for many values of k ≤ 20 and n = 4 and 5 (factors for n > 5 were previously given in [9]); and (2) calculating a one-sided 100γ percent prediction limit for γ = 0.90, 0.95, and 0.99 for many val... These factors were obtained using a new computer program whose accuracy was checked by comparison with existing tabulations. Some results are also given to indicate the extent to which a previously proposed approximation based on a Bonferroni Inequality overestimates the factors for calculating a one-sided prediction limit.
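
A one-sided counterpart, again using the Bonferroni approximation discussed in the abstract (exact only for k = 1, and an overestimate of the exact factor otherwise), might look like this; the sample values are illustrative.

```python
import numpy as np
from scipy.stats import t

def upper_prediction_limit(sample, k=1, gamma=0.95):
    """One-sided upper 100*gamma% prediction limit x_bar + r*s intended to
    exceed all of k future observations.  Exact for k = 1; for k > 1 this
    is the Bonferroni approximation, which overestimates the exact factor."""
    x = np.asarray(sample, dtype=float)
    n = len(x)
    r = t.ppf(1.0 - (1.0 - gamma) / k, df=n - 1) * np.sqrt(1.0 + 1.0 / n)
    return x.mean() + r * x.std(ddof=1)

print(upper_prediction_limit([9.8, 10.4, 10.1, 9.6, 10.9], k=3, gamma=0.90))
```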


Journal of Quality Technology | 1970

Statistical Intervals for a Normal Population: Part 1. Tables, Examples and Applications

Gerald J. Hahn

Part 1 of this article provides factors for constructing frequently required intervals for a normal population and describes the situations in which each interval is applicable. The specific intervals considered are ones to contain with a high probabili..

Collaboration


Dive into Gerald J. Hahn's collaboration.

Top Co-Authors

Luis A. Escobar

Louisiana State University
