
Publications


Featured research published by Emmanuel Yashchin.


Technometrics | 1993

Performance of CUSUM control schemes for serially correlated observations

Emmanuel Yashchin

This article discusses situations in which one is interested in evaluating the run-length characteristics of a cumulative sum control scheme when the underlying data show the presence of serial correlation. In practical applications, situations of this type are common in problems associated with monitoring such characteristics of data as forecasting errors, measures of model adequacy, and variance components. The discussed problem is also relevant in situations in which data transformations are used to reduce the magnitude of serial correlation. The basic idea of the analysis involves replacing the sequence of serially correlated observations by a sequence of independent and identically distributed observations for which the run-length characteristics of interest are roughly the same. Applications of the proposed method for several classes of processes arising in the area of statistical process control are discussed in detail, and it is shown that it leads to approximations that can be considered acceptable in...
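A minimal sketch of the kind of scheme being analyzed: a one-sided CUSUM recursion applied to serially correlated (here, AR(1)) data. The reference value k and threshold h below are illustrative placeholders rather than values from the article, and the snippet does not implement the article's iid-equivalence approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) sequence as an example of serially correlated observations.
phi, n = 0.5, 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# One-sided upper CUSUM: S_t = max(0, S_{t-1} + x_t - k); signal when S_t > h.
k, h = 0.5, 5.0
S = 0.0
for t, xt in enumerate(x):
    S = max(0.0, S + xt - k)
    if S > h:
        print(f"out-of-control signal at observation {t}")
        break
```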


Technometrics | 1989

Weighted Cumulative Sum Technique

Emmanuel Yashchin

A class of weighted control schemes that generalizes the basic cumulative sum (CUSUM) technique is introduced. The schemes of the first type, in which the weights represent information concomitant with the data, prove to be especially useful when handling charts corresponding to samples of varying sizes. The schemes of the second type are based on giving greater weight to more recent information. Representatives of this class are shown to have better run length characteristics with respect to drift in the level of a controlled process than does the classical CUSUM, while maintaining good sensitivity with respect to shifts. Analogous to the classical CUSUM scheme, they admit a dual graphical representation; that is, the scheme can be applied by means of a one- or two-sided decision interval or via a V mask. A special case of this type of scheme, designated the geometric CUSUM, is considered in detail. It can be viewed as a CUSUM-type counterpart of the exponentially weighted moving average.
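For concreteness, a hedged sketch of a CUSUM-style recursion with geometric down-weighting of older observations, in the spirit of the geometric CUSUM described above. The exact form and the parameter values (gamma, k, h) are illustrative assumptions rather than the scheme as specified in the paper; gamma = 1 recovers the classical one-sided CUSUM.

```python
import numpy as np

def geometric_cusum(x, k=0.5, h=4.0, gamma=0.9):
    # One-sided decision-interval form: accumulate gamma-discounted exceedances
    # of the reference value k and signal when the statistic crosses h.
    S, signals = 0.0, []
    for t, xt in enumerate(x):
        S = max(0.0, gamma * S + (xt - k))
        if S > h:
            signals.append(t)
            S = 0.0  # restart after a signal (one common convention)
    return signals

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 100)])  # level shift at t = 200
print(geometric_cusum(data))
```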


IBM Journal of Research and Development | 1985

On the analysis and design of CUSUM-Shewhart control schemes

Emmanuel Yashchin

In recent years cumulative sum (CUSUM) control charts have become increasingly popular as an alternative to Shewhart's control charts. These charts use sequentially accumulated information in order to detect out-of-control conditions. They are philosophically related to procedures of sequential hypothesis testing (the relation being similar to that existing between Shewhart's charts and classical procedures for hypothesis testing). In the present paper we present a new approach to the design of CUSUM-Shewhart control schemes and the analysis of the associated run-length distributions (under the assumption that the observations correspond to a sequence of independent and identically distributed random variables). This approach is based on the theory of Markov chains, and it enables one to analyze the ARL (Average Run Length), the distribution function of the run length, and other quantities associated with a CUSUM-Shewhart scheme. In addition, it enables one to analyze situations in which out-of-target conditions are not present initially, but rather appear after a substantial period of time during which the process has operated in on-target mode (steady-state analysis). The paper also introduces an APL package, DARCS, for the design, analysis, and running of both one- and two-sided CUSUM-Shewhart control schemes and gives several examples of its application.
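The Markov-chain idea can be illustrated with a generic Brook-and-Evans-style discretization for a one-sided CUSUM with standard normal observations. This is a textbook approximation of the ARL, not the DARCS package or the paper's CUSUM-Shewhart treatment, and the grid size m is an arbitrary choice.

```python
import numpy as np
from scipy.stats import norm

def cusum_arl_markov(k, h, mu=0.0, m=200):
    """Approximate the ARL of the one-sided CUSUM S_t = max(0, S_{t-1} + X_t - k),
    X_t ~ N(mu, 1), by discretizing the in-control region [0, h) into m cells."""
    w = h / m
    centers = (np.arange(m) + 0.5) * w          # cell midpoints
    Q = np.zeros((m, m))                        # transition matrix among in-control cells
    for i, si in enumerate(centers):
        # cell 0 covers [0, w) and absorbs the reflection at zero
        Q[i, 0] = norm.cdf(w - si + k, loc=mu)
        for j in range(1, m):
            lo, hi = j * w, (j + 1) * w
            Q[i, j] = norm.cdf(hi - si + k, loc=mu) - norm.cdf(lo - si + k, loc=mu)
    arl = np.linalg.solve(np.eye(m) - Q, np.ones(m))
    return arl[0]                               # ARL when the scheme starts at S_0 = 0

print(cusum_arl_markov(k=0.5, h=4.0, mu=0.0))   # in-control ARL
print(cusum_arl_markov(k=0.5, h=4.0, mu=1.0))   # ARL after a one-sigma upward shift
```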


Technometrics | 1994

Monitoring Variance Components

Emmanuel Yashchin

This article discusses methods for monitoring a process in which the variance of the measurements is attributed to several known sources of variability. For example, in the case of integrated circuit fabrication one is typically interested in monitoring the process mean as well as the lot-to-lot, wafer-to-wafer-within-lot, and within-wafer components of variability. The article discusses the problem of monitoring the process level and variability by using the cumulative sum technique. Some aspects of the implementation of this methodology are also considered, and examples are given.
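As background for what "variance components" means here, a generic one-way nested (wafer-to-wafer and within-wafer) ANOVA estimate is sketched below. The function and data are illustrative, and the CUSUM monitoring of these components described in the article is not reproduced.

```python
import numpy as np

def variance_components(y):
    """Method-of-moments estimates of between-wafer and within-wafer variance
    from a (wafers x sites) array, via the usual one-way ANOVA mean squares."""
    y = np.asarray(y, dtype=float)
    a, n = y.shape
    wafer_means = y.mean(axis=1)
    msb = n * np.sum((wafer_means - y.mean()) ** 2) / (a - 1)      # between-wafer mean square
    mse = np.sum((y - wafer_means[:, None]) ** 2) / (a * (n - 1))  # within-wafer mean square
    return max((msb - mse) / n, 0.0), mse                          # (between, within), truncated at 0

rng = np.random.default_rng(2)
wafers = rng.normal(0, 0.5, size=(8, 1)) + rng.normal(0, 1.0, size=(8, 9))
print(variance_components(wafers))
```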


Technometrics | 1995

Estimating the current mean of a process subject to abrupt changes

Emmanuel Yashchin

This article discusses estimation of the current process mean in situations in which this parameter is subject to abrupt changes of unpredictable magnitude at some unknown points in time. It introduces performance criteria for this estimation problem and discusses in detail the relative merits of several estimation procedures. I show that an estimate based on an exponentially weighted moving average of past observations has optimality properties within the class of linear estimators, and I propose alternative estimating procedures to overcome its limitations. I consider two primary types of estimation procedures: Markovian estimators, in which the current estimate is obtained as a function of the previous estimate and the most recent data point, and adaptive estimators, based on identification of the most recent changepoint. I give several examples that illustrate the use of the proposed techniques.
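The exponentially weighted moving average estimator referred to above has the simple Markovian form m_t = lam * x_t + (1 - lam) * m_{t-1}. The sketch below uses an illustrative smoothing constant, not a value recommended in the article.

```python
import numpy as np

def ewma_current_mean(x, lam=0.2):
    # Markovian estimator: the new estimate depends only on the previous
    # estimate and the most recent observation.
    m, out = x[0], []
    for xt in x:
        m = lam * xt + (1 - lam) * m
        out.append(m)
    return np.array(out)

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0, 1, 150), rng.normal(3, 1, 150)])  # abrupt change at t = 150
print(ewma_current_mean(data)[[140, 160, 200]].round(2))               # before / shortly after / long after the change
```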


Lifetime Data Analysis | 2002

Parametric Modeling for Survival with Competing Risks and Masked Failure Causes

Betty J. Flehinger; Benjamin Reiser; Emmanuel Yashchin

We consider a life testing situation in which systems are subject to failure from independent competing risks. Following a failure, immediate (stage-1) procedures are used in an attempt to reach a definitive diagnosis. If these procedures fail to result in a diagnosis, this phenomenon is called masking. Stage-2 procedures, such as failure analysis or autopsy, provide definitive diagnosis for a sample of the masked cases. We show how stage-1 and stage-2 information can be combined to provide statistical inference about (a) survival functions of the individual risks, (b) the proportions of failures associated with individual risks and (c) probability, for a specified masked case, that each of the masked competing risks is responsible for the failure. Our development is based on parametric distributional assumptions and the special case for which the failure times for the competing risks have a Weibull distribution is discussed in detail.
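For independent competing risks, the system survival function is the product of the individual risk survival functions, and the probability that a failure observed at time t is due to a particular risk is proportional to that risk's hazard at t. The Weibull sketch below illustrates these two identities with made-up parameters; it is not the paper's estimation procedure for masked data.

```python
import numpy as np

# Hypothetical (shape, scale) Weibull parameters for two independent competing risks.
risks = {"risk_1": (1.5, 1000.0), "risk_2": (2.5, 1500.0)}

def hazard(t, shape, scale):
    return (shape / scale) * (t / scale) ** (shape - 1)   # Weibull hazard

def survival(t, shape, scale):
    return np.exp(-(t / scale) ** shape)                  # Weibull survival function

t = 800.0
system_survival = np.prod([survival(t, *p) for p in risks.values()])
hazards = {name: hazard(t, *p) for name, p in risks.items()}
diagnostic = {name: h / sum(hazards.values()) for name, h in hazards.items()}
print(system_survival, diagnostic)
```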


Technometrics | 1996

Inference about defects in the presence of masking

Betty J. Flehinger; Benjamin Reiser; Emmanuel Yashchin

This article considers the situation in which a system consists of k components and a defect in any component causes a system malfunction. When a system malfunction occurs, test procedures restrict the cause to some subset of the k components. When that subset consists of more than one component, this phenomenon is termed masking. Typically, masking introduces two types of problems. First, it is desirable to estimate the "diagnostic probability," that is, the probability, given a specified malfunctioning subset, that each of the masked components is the defective one. Second, when a set of historical data contains masked information, one would like to use this information to estimate the defect probability of each individual component type. The article discusses these problems in detail and derives two-stage procedures for estimation and inference.
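A minimal Bayes sketch of the "diagnostic probability" is given below under the simplifying assumption that the chance of ending up with a given masked subset does not depend on which of its members actually failed. The component names and defect probabilities are hypothetical, and the paper's two-stage procedures are not reproduced.

```python
def diagnostic_probabilities(defect_probs, masked_subset):
    # Under symmetric masking, the posterior probability that component j is
    # the defective one is proportional to its marginal defect probability.
    total = sum(defect_probs[j] for j in masked_subset)
    return {j: defect_probs[j] / total for j in masked_subset}

p = {"C1": 0.01, "C2": 0.03, "C3": 0.005}                 # hypothetical defect probabilities
print(diagnostic_probabilities(p, masked_subset=["C1", "C2"]))
```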


Journal of Applied Physics | 2006

Threshold electromigration failure time and its statistics for Cu interconnects

Baozhen Li; Cathryn Christiansen; J. Gill; Timothy Sullivan; Emmanuel Yashchin; Ronald G. Filippi

Integrated circuit chip metallization reliability under use conditions is extrapolated from failure distributions of test structures tested under accelerated conditions. Lognormally plotted electromigration failure time distributions for via/line contact configurations with no redundant conductive path usually display two features that are different from failure time distributions for configurations that have well-defined redundant conductive paths. First, the failure times are more widely distributed (larger standard deviation or σ), and second, the left portion of the distribution (early failures) bends downward (if the sample size is large enough) as the failure times become shorter, in contrast to the straight line behavior that is usually observed for structures with good redundancy. The downward deviation from a straight line distribution erodes the goodness of fit relative to the commonly used two-parameter (t50,σ) lognormal distribution model, and the large σ produces a lifetime projection under u...
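The two-parameter (t50, sigma) lognormal fit mentioned above amounts to taking logs of the failure times: t50 is the exponential of the mean log failure time and sigma is the standard deviation of the log failure times. The data below are made up for illustration, and the snippet does not address the early-failure bending discussed in the paper.

```python
import numpy as np

def lognormal_t50_sigma(failure_times):
    logs = np.log(np.asarray(failure_times, dtype=float))
    return np.exp(logs.mean()), logs.std(ddof=1)             # (t50, sigma)

times = [120.0, 210.0, 340.0, 420.0, 515.0, 880.0, 1300.0]  # hypothetical failure times in hours
t50, sigma = lognormal_t50_sigma(times)
print(f"t50 = {t50:.0f} h, sigma = {sigma:.2f}")
```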


Nonlinear Analysis: Theory, Methods & Applications | 1997

Change-point models in industrial applications

Emmanuel Yashchin

In many industrial applications of statistics it is not reasonable to assume that the same model remains adequate as time progresses. Models in which the environment and related parameters undergo abrupt changes at unknown moments of time are found to be relevant in a much wider class of practical situations. These models spawned a number of fundamental problems in the field of change-point theory, such as the problems of detection of changes (monitoring), estimation of the current process parameters (filtering), identification of points of change and regimes (segmentation), and tests for data homogeneity. These problems have been addressed, to varying extents, in a large number of works, including several recent books and review papers (cf. [1-4]). Problems related to change-point models are typically relevant in either fixed-sample or sequential settings. For example, in the problem of on-line detection of a change, decisions to trigger an out-of-control signal are made sequentially, based on some stopping variable. Some problems, however, can be formulated in both sequential and fixed-sample settings. For example, in process capability analysis the problem of segmentation involves identifying all the regimes and change-points present in a given data set. In speech analysis, however, the problem of segmentation is typically relevant in a sequential setting, with emphasis placed on identification of the most recent regime. Similarly, the problem of estimating parameters at a given point in time can be formulated in either a sequential (filtering) or a fixed-sample (smoothing) setting. In this article we focus on sequential methods, with emphasis on the problems of detection and filtering.
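To make the fixed-sample segmentation problem concrete, here is a generic least-squares estimate of a single change-point in the mean (the split minimizing the within-segment sums of squares). It is only an illustration of the segmentation task, not a method proposed in the article.

```python
import numpy as np

def single_changepoint(x):
    # Return the split index that minimizes the total within-segment sum of squares.
    x = np.asarray(x, dtype=float)
    best_tau, best_cost = None, np.inf
    for tau in range(1, len(x)):
        left, right = x[:tau], x[tau:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_tau, best_cost = tau, cost
    return best_tau

rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(0, 1, 80), rng.normal(2, 1, 70)])
print(single_changepoint(y))   # expected to be close to 80
```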


The Journal of Portfolio Management | 2003

Using Statistical Process Control to Monitor Active Managers

Thomas K. Philips; Emmanuel Yashchin; David M. Stein

Investors who are invested in (or bear responsibility for) many active portfolios face a resource allocation problem: To which products should they direct their attention and scrutiny? Ideally they will focus their attention on portfolios that appear to be in trouble, but these are not easily identified using classical methods of performance evaluation. In fact, it is often claimed that it takes forty years to determine whether an active portfolio outperforms its benchmark. The claim is fallacious. In this article, we show how a statistical process control scheme known as the CUSUM, which is closely related to Wald's [1947] Sequential Probability Ratio Test, can be used to reliably detect flat-to-the-benchmark performance in forty months, and underperformance faster still. By rapidly detecting underperformance, the CUSUM allows investors to focus their attention on potential problems before they have a serious impact on the performance of the overall portfolio. The CUSUM procedure is provably optimal: For any given rate of false alarms, no other procedure can detect underperformance faster. It is robust to the distribution of excess returns, allowing its use in almost any asset class, including equities, fixed income, currencies, and hedge funds without modification, and is currently being used to monitor over
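A hedged sketch of the idea: run a lower-sided CUSUM on standardized monthly excess returns and flag the portfolio when the statistic crosses a threshold. The standardization, the reference value k, and the threshold h are illustrative assumptions, not the calibrated settings from the article.

```python
import numpy as np

def underperformance_cusum(excess_returns, k=0.25, h=5.0):
    r = np.asarray(excess_returns, dtype=float)
    z = r / r.std(ddof=1)            # crude standardization by realized tracking error
    S = 0.0
    for t, zt in enumerate(z):
        S = max(0.0, S - zt - k)     # lower-sided CUSUM: accumulates evidence of underperformance
        if S > h:
            return t                 # month at which underperformance is flagged
    return None

rng = np.random.default_rng(5)
# Hypothetical monthly excess returns: flat to benchmark for 36 months, then a persistent shortfall.
r = np.concatenate([rng.normal(0.000, 0.01, 36), rng.normal(-0.005, 0.01, 36)])
print(underperformance_cusum(r))
```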
