William T. Tucker
General Electric
Publications
Featured research published by William T. Tucker.
Technometrics | 1990
James M. Lucas; Michael S. Saccucci; Robert V. Baxley Jr.; William H. Woodall; Hazem D. Maragh; Frederick W. Faltin; Gerald J. Hahn; William T. Tucker; J. Stuart Hunter; John F. MacGregor; Thomas J. Harris
Roberts (1959) first introduced the exponentially weighted moving average (EWMA) control scheme. Using simulation to evaluate its properties, he showed that the EWMA is useful for detecting small shifts in the mean of a process. The recognition that an EWMA control scheme can be represented as a Markov chain allows its properties to be evaluated more easily and completely than has previously been done. In this article, we evaluate the properties of an EWMA control scheme used to monitor the mean of a normally distributed process that may experience shifts away from the target value. A design procedure for EWMA control schemes is given. Parameter values not commonly used in the literature are shown to be useful for detecting small shifts in a process. In addition, several enhancements to EWMA control schemes are considered. These include a fast initial response feature that makes the EWMA control scheme more sensitive to start-up problems, and a combined Shewhart EWMA that provides protection against both large and small shifts.
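The underlying recursion is easy to state. Below is a minimal sketch of an EWMA chart with time-varying limits; the lambda and L values are illustrative assumptions, not the tabulated designs from the paper's Markov-chain evaluation.

```python
# Minimal EWMA control-chart sketch. lam and L here are illustrative choices;
# the paper selects them via average-run-length calculations.
import numpy as np

def ewma_chart(x, target, sigma, lam=0.25, L=3.0):
    """Return EWMA statistics and time-varying control limits."""
    z = np.empty(len(x))
    prev = target                              # start the EWMA at the target
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev     # z_t = lam*x_t + (1-lam)*z_{t-1}
        z[t] = prev
    t = np.arange(1, len(x) + 1)
    half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, target - half, target + half

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 50)
x[25:] += 0.5                                  # small sustained shift in the mean
z, lcl, ucl = ewma_chart(x, target=0.0, sigma=1.0)
signals = np.flatnonzero((z < lcl) | (z > ucl))
print("first signal at t =", signals[0] if signals.size else None)
```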
systems man and cybernetics | 1987
James C. Bezdek; Richard J. Hathaway; Michael J. Sabin; William T. Tucker
A counterexample to the original incorrect convergence theorem for the fuzzy c-means (FCM) clustering algorithms (see J.C. Bezdek, IEEE Trans. Pattern Anal. Mach. Intell., vol.PAMI-2, no.1, pp.1-8, 1980) is provided. This counterexample establishes the existence of saddle points of the FCM objective function at locations other than the geometric centroid of fuzzy c-partition space. Counterexamples previously discussed by W.T. Tucker (1987) are summarized. The correct theorem is stated without proof: every FCM iterate sequence converges, at least along a subsequence, to either a local minimum or saddle point of the FCM objective function. Although Tucker's counterexamples and the corrected theory appear elsewhere, they are restated as a caution not to further propagate the original incorrect convergence statement.
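For context, a minimal fuzzy c-means sketch follows; the corrected theorem concerns iterates of exactly this kind of alternating update, which converge (at least along a subsequence) to a local minimum or a saddle point of the objective, not necessarily a global minimum. The fuzzifier m=2 and the fixed iteration count are illustrative assumptions.

```python
# Minimal fuzzy c-means sketch. Alternates the standard center and membership
# updates; per the corrected theory, iterates may approach a saddle point.
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))     # memberships, rows sum to 1
    for _ in range(n_iter):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]     # weighted cluster centers
        D = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + eps
        p = 2.0 / (m - 1.0)
        U = 1.0 / (D ** p * (D ** -p).sum(axis=1, keepdims=True))
    return U, V

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
U, V = fcm(X, c=2)
print("cluster centers:\n", np.round(V, 2))
```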
Technometrics | 1992
Scott A. Vander Wiel; William T. Tucker; Frederick W. Faltin; Necip Doganaksoy
The goal of algorithmic statistical process control is to reduce predictable quality variations using feedback and feedforward techniques and then monitor the complete system to detect and remove unexpected root causes of variation. This methodology seeks to exploit the strengths of both automatic control and statistical process control (SPC), two fields that have developed in relative isolation from one another. Recent experience with the control and monitoring of intrinsic viscosity from a particular General Electric polymerization process has led to a better understanding of how SPC and feedback control can be united into a single system. Building on past work by MacGregor, Box, Astrom, and others, the article covers the application from statistical identification and modeling to implementing feedback control and final SPC monitoring. Operational and technical issues that arose are examined, and a general approach is outlined.
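A toy sketch of the ASPC pattern follows, with an invented first-order process and gains (not the GE viscosity application): feedback adjustment compensates the predictable drift, while a Shewhart-style chart on the adjusted deviations watches for unexpected special causes.

```python
# Hedged ASPC sketch: integral-type feedback adjustment plus Shewhart-style
# monitoring of the adjusted deviations. Process model and gains are invented.
import numpy as np

rng = np.random.default_rng(1)
target, gain, lam = 0.0, 1.0, 0.3
drift = np.cumsum(rng.normal(0.0, 0.05, 200))    # slowly wandering disturbance
u, deviations = 0.0, []
for t in range(200):
    y = target + drift[t] + gain * u + rng.normal(0.0, 0.2)
    e = y - target
    deviations.append(e)
    u -= lam * e / gain                          # feedback adjustment step

deviations = np.array(deviations)
sigma = deviations.std(ddof=1)                   # in practice, from a reference period
print("monitor flags observations:", np.flatnonzero(np.abs(deviations) > 3 * sigma))
```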
Communications in Statistics-theory and Methods | 1991
Necip Doganaksoy; Frederick W. Faltin; William T. Tucker
There are many instances in which the quality of a product or constancy of a process is determined by the joint levels of several attributes or properties. During the conduct of such a process or the production of such a product, one wishes to detect as quickly as possible any departure from a satisfactory state, while at the same time identifying which attributes are responsible for the deviation. In most cases of practical interest, however, there exist correlations among the several properties of interest; this makes it advisable to monitor certain aggregate characteristics of the process, rather than observing its various components separately. When the mean vector of the quality attributes is the major concern, this aggregate monitoring function is most commonly implemented via a T² chart. The dependencies among attributes, however, complicate the determination of which are responsible when a deviation occurs. This paper presents an approach to help identify aberrant variables when Shewhart type multivariate charts are in use.
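As an illustration of why correlation complicates diagnosis, the sketch below computes Hotelling's T² for a single observation and ranks variables by their standardized univariate deviations; this simple ranking is a common heuristic, not necessarily the paper's own procedure.

```python
# T^2 for one observation plus a simple per-variable diagnostic ranking.
# The example covariance and observation are invented for illustration.
import numpy as np

def t2_and_ranking(x, mean, cov):
    """Hotelling T^2, plus variables ranked by |standardized deviation|."""
    d = x - mean
    t2 = float(d @ np.linalg.solve(cov, d))
    z = d / np.sqrt(np.diag(cov))          # univariate standardized deviations
    return t2, np.argsort(-np.abs(z)), z

mean = np.zeros(3)
cov = np.array([[1.0, 0.8, 0.2],
                [0.8, 1.0, 0.1],
                [0.2, 0.1, 1.0]])
x = np.array([1.5, -0.5, 0.1])             # opposite shifts in two correlated vars
t2, order, z = t2_and_ranking(x, mean, cov)
print(f"T^2 = {t2:.2f}; variables by |z|: {order}, z = {np.round(z, 2)}")
```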
Technometrics | 1993
William T. Tucker; Frederick W. Faltin; Scott A. Vander Wiel
Statistical process control (SPC) has traditionally been applied to processes in which successive observations would ideally be independent and identically distributed as a basis for achieving fundamental process improvement. Stochastic control, on the other hand, addresses situations in which observations are dynamically related over time; its intent is to run the existing process well rather than to improve it fundamentally. A schema is presented for uniting traditional SPC and feedforward/feedback control into a system that exploits the strengths of both. Building on past work by MacGregor, Box, Astrom, and others, we discuss the theory and practice of such an approach, along with a consideration of research and technical issues that arise.
International Statistical Review | 1993
Frederick W. Faltin; Gerald J. Hahn; William T. Tucker; Scott A. Vander Wiel
Algorithmic Statistical Process Control (ASPC) is an approach to quality improvement that reduces predictable quality variations using feedback and feedforward techniques, and then monitors the entire system to detect changes. As such, it is a marriage of control theory and statistical process control (SPC). Control theoretical concepts are used to minimize deviations from target by process adjustments; SPC is used to gain fundamental improvements. Where applicable, ASPC is a logical next step in the drive for continuous quality improvement. This paper presents the ASPC concept and its applications to practitioners. Technical and non-technical requirements and factors conducive to the use of ASPC are emphasized, and pre-planning for the use of ASPC in new processes is discussed.
Archive | 1996
William Q. Meeker; R. Bruce Thompson; Chien-Ping Chiou; Shuen Lin Jeng; William T. Tucker
This paper outlines a proposed methodology for using combinations of physical modeling of an inspection process along with laboratory and production data to estimate Nondestructive Evaluation (NDE) capability. The physical/statistical prediction model will be used to predict Probability of Detection (POD), Probability of False Alarm (PFA) and Receiver Operating Characteristic (ROC) function curves. These output functions are used to quantify the NDE capability. The particular focus of this work is on the use of ultrasonic methods for detecting hard-alpha and other subsurface flaws in titanium using gated peak detection. This is a uniquely challenging problem since the inspection must detect very complex subsurface flaws with significant “material” noise. However, the underlying framework of the methodology should be general enough to apply to other NDE methods.
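A sketch of POD and PFA under the standard signal-response ("a-hat versus a") framework with assumed parameters follows; the methodology described above would derive such quantities from physical ultrasonic models rather than invented coefficients.

```python
# POD/PFA sketch under a log-linear signal-response model with Gaussian noise.
# All parameter values are assumptions for illustration only.
import numpy as np
from scipy.stats import norm

beta0, beta1, sigma = 0.5, 1.2, 0.4    # assumed signal-response parameters
threshold = 1.0                         # assumed log detection threshold

def pod(a):
    """Probability of detection as a function of flaw size a."""
    return norm.sf((threshold - (beta0 + beta1 * np.log(a))) / sigma)

def pfa(noise_mean=0.0, noise_sigma=0.3):
    """Probability of false alarm: noise-only response crossing the threshold."""
    return norm.sf((threshold - noise_mean) / noise_sigma)

sizes = np.array([0.5, 1.0, 2.0, 4.0])
print("POD:", np.round(pod(sizes), 3), " PFA:", round(pfa(), 4))
```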
Communications in Statistics - Simulation and Computation | 1982
Thomas P. Turiel; Gerald J. Hahn; William T. Tucker
This paper gives the results of a new simulation study for the familiar calibration problem and the less familiar inverse median estimation problem. The latter arises when one wishes to estimate from a linear regression analysis the value of the independent variable corresponding to a specified value of the median of the dependent variable. For example, from the results of a regression analysis between stress and time to failure, one might wish to estimate the stress at which the median time to failure is 10,000 hours. In the study, the mean square error, Pitman closeness, and probability of overestimation are compared for both the calibration problem and the inverse median estimation problem for (1) the classical estimator, (2) the inverse estimator, and (3) a modified version of an estimator proposed by Naszodi (1978) for both a small sample and a moderately large sample situation.
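The two main competing estimators are easy to contrast in code. Below is a sketch with simulated data (the modified Naszodi-type estimator studied in the paper is omitted):

```python
# Classical vs. inverse calibration estimators of the x value at which the
# median response equals y0. Data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, x.size)
y0 = 5.0                                        # specified response value

b1, b0 = np.polyfit(x, y, 1)                    # classical: regress y on x
x_classical = (y0 - b0) / b1                    # then invert the fitted line

c1, c0 = np.polyfit(y, x, 1)                    # inverse: regress x on y
x_inverse = c0 + c1 * y0                        # then predict x directly

print(f"classical: {x_classical:.3f}, inverse: {x_inverse:.3f}")
```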
Communications in Statistics-theory and Methods | 1982
William T. Tucker
The Box-Jenkins method is a popular and important technique for modeling and forecasting time series. Unfortunately, the problem of determining the appropriate ARMA forecasting model (or indeed whether an ARMA model holds at all) is a major drawback to the use of the Box-Jenkins methodology. Gray et al. (1978) and Woodward and Gray (1979) have proposed methods of estimating p and q in ARMA modeling based on the R and S arrays that circumvent some of these modeling difficulties. In this paper we generalize the R and S arrays by showing a relationship to Padé approximants and then show that these arrays have a much wider application than just determining model order. Particular non-ARMA models can be identified as well, including certain processes that consist of deterministic functions plus ARMA noise. Indeed, we believe that the combined R and S arrays are the best overall tool so far developed for the identification of general second-order (not just stationary) time series models.
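Since the generalization rests on Padé approximants, a small sketch of computing an [L/M] Padé approximant from series coefficients may help; this is the textbook construction, not the R- and S-array machinery itself.

```python
# [L/M] Pade approximant from Taylor coefficients, illustrated on exp(z).
import numpy as np
from math import factorial

def pade(c, L, M):
    """Numerator and denominator coefficients (ascending) of the [L/M] approximant."""
    # Denominator: solve sum_{j=1..M} b_j * c_{L+k-j} = -c_{L+k} for k = 1..M
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -np.array(c[L + 1:L + M + 1]))))
    # Numerator: a_i = sum_{j=0..min(i,M)} b_j * c_{i-j}
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
                  for i in range(L + 1)])
    return a, b

c = [1.0 / factorial(n) for n in range(6)]       # Taylor series of exp(z)
a, b = pade(c, 2, 2)
z = 0.5
approx = np.polyval(a[::-1], z) / np.polyval(b[::-1], z)
print(f"[2/2] Pade at z=0.5: {approx:.6f}  vs exp(0.5) = {np.exp(z):.6f}")
```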
The American Statistician | 1990
William M. Makuch; Gerald J. Hahn; William T. Tucker
In this exciting time for statistics and computing, individuals who are well trained in both areas are badly needed. We propose a graduate-level dual major in statistics and computing to address industry's needs. We also suggest required computing courses for a more traditional curriculum in statistics. A statement of the curriculum objectives and a brief description of an industrial setting are followed by specific recommendations. One possible sequence of course work and comments concerning the implementation of the proposed program are also provided.