Sheng-Tsaing Tseng
National Tsing Hua University
Publications
Featured research published by Sheng-Tsaing Tseng.
IEEE Transactions on Reliability | 2009
Chien-Yu Peng; Sheng-Tsaing Tseng
Degradation models are widely used to assess the lifetime information of highly reliable products when there exist quality characteristics whose degradation over time can be related to reliability. The performance of a degradation model depends strongly on how appropriately the model describes a product's degradation path. In this paper, motivated by laser data, we propose a general linear degradation path in which the unit-to-unit variation of all test units can be considered simultaneously with the time-dependent structure in degradation paths. Based on the proposed degradation model, we first derive an implicit expression for a product's lifetime distribution and its corresponding mean-time-to-failure (MTTF). By using the profile likelihood approach, maximum likelihood estimates of the parameters and of a product's MTTF, together with their confidence intervals, can be obtained easily. In addition, laser degradation data are used to illustrate the proposed procedure. Furthermore, we also address the effects of model mis-specification on the prediction of a product's MTTF. The analysis shows that the effect of model mis-specification on the prediction of a product's MTTF is not critical for large samples. However, when the sample size and the termination time are not large enough, a simulation study shows that these effects are not negligible.
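The idea of a linear degradation path with unit-to-unit variation can be sketched in a few lines. This is a minimal illustrative simulation, not the paper's model or fitted values: each unit gets a random slope perturbation, and its lifetime is the first time the path crosses a failure threshold. All parameter values are hypothetical.

```python
import random
import statistics

# Sketch: path_i(t) = (SLOPE + b_i) * t, with b_i ~ N(0, SIGMA_B^2)
# capturing unit-to-unit variation. A unit "fails" when its path
# crosses THRESHOLD. Values are illustrative, not from the paper.
random.seed(42)
SLOPE, SIGMA_B, THRESHOLD = 0.5, 0.05, 10.0

def simulate_lifetime():
    b = random.gauss(0.0, SIGMA_B)   # random unit effect
    rate = max(SLOPE + b, 1e-6)      # guard against a non-positive slope
    return THRESHOLD / rate          # first time the path hits the threshold

lifetimes = [simulate_lifetime() for _ in range(10_000)]
mttf = statistics.mean(lifetimes)
print(f"simulated MTTF ≈ {mttf:.2f}")  # nominal THRESHOLD / SLOPE = 20
```

Averaging many simulated first-passage times gives a Monte Carlo estimate of the MTTF that the paper instead derives analytically from the lifetime distribution.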
IEEE Transactions on Reliability | 2006
Chen-Mao Liao; Sheng-Tsaing Tseng
Today, many products are designed to function for a long period of time before they fail. For such highly reliable products, collecting accelerated degradation test (ADT) data can provide useful reliability information. However, implementing an ADT usually requires a moderate sample size. Hence, ADT is not applicable for assessing the lifetime distribution of a newly developed or very expensive product for which only a few test units are available. Recently, a step-stress ADT (SSADT) has been suggested in the literature to overcome this difficulty. However, in designing an efficient SSADT experiment, the issue of how to choose the optimal settings of variables such as sample size, measurement frequency, and termination time was not discussed. In this study, we first use a stochastic diffusion process to model a typical SSADT problem. Next, under the constraint that the total experimental cost does not exceed a predetermined budget, the optimal settings of these variables are obtained by minimizing the asymptotic variance of the estimated 100p-th percentile of the product's lifetime distribution. Finally, an example is used to illustrate the proposed method.
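A step-stress degradation path under a diffusion model can be sketched as a Wiener process whose drift jumps upward at each stress change. The stress plan, drift values, and diffusion coefficient below are illustrative assumptions, not the paper's design:

```python
import random

# Sketch of an SSADT path as a Wiener process: within each stress level
# the path has drift * dt increments plus Brownian noise; the drift
# increases when the stress is stepped up. Values are illustrative.
random.seed(0)
DT = 0.1
STRESS_PLAN = [(100, 0.2), (100, 0.5), (100, 1.0)]  # (n_steps, drift) per level
SIGMA = 0.3

def simulate_ssadt_path():
    y, path = 0.0, [0.0]
    for n_steps, drift in STRESS_PLAN:
        for _ in range(n_steps):
            y += drift * DT + SIGMA * random.gauss(0.0, DT ** 0.5)
            path.append(y)
    return path

path = simulate_ssadt_path()
print(f"final degradation level: {path[-1]:.2f}")
```

Stepping up the stress (and hence the drift) accelerates degradation on the few available units, which is the practical motivation for SSADT over a constant-stress ADT.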
IEEE Transactions on Reliability | 2009
Sheng-Tsaing Tseng; N. Balakrishnan; Chih-Chun Tsai
Step-stress accelerated degradation testing (SSADT) is a useful tool for assessing the lifetime distribution of highly reliable products (under a typical-use condition) when the available test items are very few. Recently, an optimal SSADT plan was proposed based on the assumption that the underlying degradation path follows a Wiener process. However, the degradation of many materials (especially in the case of fatigue data) may be more appropriately modeled by a gamma process, which exhibits a monotone increasing pattern. Hence, in practice, designing an efficient SSADT plan for a gamma degradation process is of great interest. In this paper, we first introduce the SSADT model when the degradation path follows a gamma process. Next, under the constraint that the total experimental cost does not exceed a pre-specified budget, the optimal settings, such as the sample size, measurement frequency, and termination time, are obtained by minimizing the approximate variance of the estimated mean-time-to-failure (MTTF) of the product's lifetime distribution. Finally, an example is presented to illustrate the proposed method.
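The key modeling point, a gamma process yielding monotone increasing paths, can be sketched with independent gamma-distributed increments; the lifetime is the first passage over a threshold. Parameter values here are illustrative assumptions only:

```python
import random
import statistics

# Sketch of a gamma degradation process: independent increments
# distributed Gamma(shape = ALPHA * dt, scale = BETA). Because every
# increment is non-negative, each path is monotone increasing, unlike
# a Wiener path. Values are illustrative, not from the paper.
random.seed(1)
ALPHA, BETA, DT, THRESHOLD = 2.0, 0.5, 0.1, 10.0

def first_passage_time():
    t, y = 0.0, 0.0
    while y < THRESHOLD:
        y += random.gammavariate(ALPHA * DT, BETA)  # non-negative increment
        t += DT
    return t

mttf = statistics.mean(first_passage_time() for _ in range(2_000))
print(f"simulated MTTF ≈ {mttf:.2f}")  # mean drift is ALPHA * BETA = 1.0 per unit time
```

For fatigue-like data that only accumulates damage, this monotonicity is exactly why the gamma process is preferred over the Wiener process, whose paths can decrease.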
IIE Transactions | 1996
Sheng-Tsaing Tseng
The traditional Economic Manufacturing Quantity (EMQ) model assumes that the production process (system) is perfect. However, owing to aging, the production process will shift from the 'in-control' state to the 'out-of-control' state and produce defective items. When the process shift distribution follows an Increasing Failure Rate (IFR) distribution, a preventive (scheduled) maintenance policy is usually used to enhance its reliability. In this paper we incorporate a preventive maintenance policy into this deteriorating production system and derive an optimal preventive maintenance policy. Two well-known IFR distributions, the Weibull and extreme-value distributions, are considered, and some examples are used to illustrate the proposed model. Finally, the advantages of the proposed model are addressed in the conclusion.
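The trade-off behind preventive maintenance under an IFR shift distribution can be illustrated numerically. The sketch below (a hypothetical setup, not the paper's cost model) estimates the expected time spent producing defectives per cycle for two maintenance intervals under a Weibull time-to-shift:

```python
import random

# Sketch: the process shifts out of control at a Weibull-distributed time
# (shape > 1, hence IFR). Under a preventive-maintenance interval tau, the
# expected out-of-control time per cycle is E[max(0, tau - shift_time)].
# All parameter values are illustrative assumptions.
random.seed(5)
SCALE, SHAPE = 10.0, 2.0   # Weibull shift distribution (IFR since SHAPE > 1)

def expected_defective_time(pm_interval, n=20_000):
    total = 0.0
    for _ in range(n):
        shift = random.weibullvariate(SCALE, SHAPE)
        total += max(0.0, pm_interval - shift)  # out-of-control exposure this cycle
    return total / n

short, long_ = expected_defective_time(5.0), expected_defective_time(15.0)
print(f"E[defective time], PM every 5:  {short:.3f}")
print(f"E[defective time], PM every 15: {long_:.3f}")
```

A shorter interval cuts defective output but incurs maintenance cost more often; balancing these two terms is what the optimal policy in the paper resolves.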
European Journal of Operational Research | 2000
Ruey Huei Yeh; Wen-Tsung Ho; Sheng-Tsaing Tseng
This article studies the optimal production run length for a deteriorating production system in which the products are sold with free minimal repair warranty. The deterioration process of the system is characterized by a two-state continuous-time Markov chain. For products sold with free minimal repair warranty, we show that there exists a unique optimal production run length such that the expected total cost per item is minimized. Since there is no closed form expression for the optimal production run length, an approximate solution is derived. In addition, three special cases which provide bounds for searching the optimal production run length are investigated and some sensitivity analysis is carried out to study the effects of the model parameters on the optimal production run length. Finally, a numerical example is given to evaluate the performance of the optimal production run length.
IEEE Transactions on Reliability | 2012
Chih-Chun Tsai; Sheng-Tsaing Tseng; N. Balakrishnan
Degradation models are usually used to provide information about the reliability of highly reliable products that are not likely to fail within a reasonable period of time under traditional life tests, or even accelerated life tests. The gamma process is a natural model for describing degradation paths which exhibit a monotone increasing pattern, while the commonly used Wiener process is not appropriate in such a case. We discuss the problem of optimal design for degradation tests based on a gamma degradation process with random effects. To conduct a degradation experiment efficiently, several decision variables (such as the sample size, inspection frequency, and measurement numbers) need to be determined carefully. These decision variables affect not only the experimental cost, but also the precision of the estimates of lifetime parameters of interest. Under the constraint that the total experimental cost does not exceed a pre-specified budget, the optimal decision variables are found by minimizing the asymptotic variance of the estimate of the 100p-th percentile of the lifetime distribution of the product. Laser data are used to illustrate the proposed method. Moreover, we assess analytically the effects of model mis-specification that occur when the random effects are not taken into consideration in the gamma degradation model. The numerical results of these effects reveal that the impact of model mis-specification on the accuracy and precision of the prediction of percentiles of the lifetimes of products is somewhat serious for the tail probabilities. A simulation study also shows that the simulated values are quite close to the asymptotic values.
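The role of random effects in a gamma degradation model can be illustrated by letting each unit draw its own scale parameter, which widens the spread of first-passage lifetimes, especially in the tails. The distributions and values below are illustrative assumptions, not the paper's fitted model:

```python
import random
import statistics

# Sketch of random effects in a gamma degradation process: a fixed-scale
# population vs. one where each unit draws its own scale parameter.
# The random-scale mixture has more lifetime dispersion. Values are
# illustrative assumptions only.
random.seed(7)
ALPHA, DT, THRESHOLD = 2.0, 0.1, 10.0

def lifetime(scale):
    t, y = 0.0, 0.0
    while y < THRESHOLD:
        y += random.gammavariate(ALPHA * DT, scale)
        t += DT
    return t

fixed = [lifetime(0.5) for _ in range(1_000)]
mixed = [lifetime(random.gammavariate(25.0, 0.02)) for _ in range(1_000)]  # mean scale 0.5
print(f"lifetime spread, fixed scale:  {statistics.pstdev(fixed):.2f}")
print(f"lifetime spread, random scale: {statistics.pstdev(mixed):.2f}")
```

Ignoring the unit-level randomness understates this extra dispersion, which is consistent with the paper's finding that mis-specification matters most for tail percentiles.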
IEEE Transactions on Reliability | 1997
Sheng-Tsaing Tseng; Hong-Fwu Yu
Analysis of degradation data can provide information about the lifetime of highly reliable products, if there exists a product characteristic whose degradation over time can be related to reliability. To obtain a precise estimator of a product's mean-time-to-failure, one practical problem arising in designing a degradation experiment is: how long should the experiment last? This paper proposes a termination rule to determine an appropriate stopping time for a degradation experiment. A case study of an LED product illustrates the method. The proposed procedure is computationally simple and provides on-line, real-time reliability information about the product lifetime once the termination time is determined.
Journal of Data Science | 2007
Sheng-Tsaing Tseng; Chien-Yu Peng
Accelerated degradation tests (ADTs) can provide timely reliability information about a product. Hence, ADTs have been widely used to assess the lifetime distribution of highly reliable products. In order to properly predict the lifetime distribution, modeling the product's degradation path plays a key role in a degradation analysis. In this paper, we use a stochastic diffusion process to describe the product's degradation path, and a recursive formula for the product's lifetime distribution can be obtained by using the first passage time (FPT) of its degradation path. In addition, two approximate formulas for the product's mean-time-to-failure (MTTF) and median life (B50) are given. Finally, we extend the proposed method to the case of ADT, and real LED data are used to illustrate the proposed procedure. The results demonstrate that the proposed method performs well for LED lifetime prediction.
Naval Research Logistics | 1999
Hong-Fwu Yu; Sheng-Tsaing Tseng
Degradation experiments are widely used to assess the reliability of highly reliable products which are not likely to fail under traditional life tests. In order to conduct a degradation experiment efficiently, several factors, such as the inspection frequency, the sample size, and the termination time, need to be considered carefully. These factors affect not only the experimental cost, but also the precision of the estimate of a product's lifetime. In this paper, we deal with the optimal design of a degradation experiment. Under the constraint that the total experimental cost does not exceed a predetermined budget, the optimal decision variables are solved by minimizing the variance of the estimated 100p-th percentile of the lifetime distribution of the product. An example is provided to illustrate the proposed method. Finally, a simulation study is conducted to investigate the robustness of the proposed method.
IEEE Transactions on Semiconductor Manufacturing | 2003
Sheng-Tsaing Tseng; Arthur B. Yeh; Fugee Tsung; Yun-Yu Chan
The exponentially weighted moving average (EWMA) feedback controller (with a fixed discount factor) is a popular run-by-run control scheme which primarily uses data from past process runs to adjust settings for the next run. Although the EWMA controller with a small discount factor can guarantee long-term stability (under fairly regular conditions), it usually requires a moderately large number of runs to bring the output of a process to its target. This is impractical for processes with small batches. The reason is that the output deviations are usually very large during the first few runs and, as a result, the output may be out of process specifications. In order to reduce a possibly high rework rate, the authors propose a variable discount factor to tackle the problem. They state the main results in which the stability conditions and the optimal variable discount factor of the proposed EWMA controller are derived. An example is given to demonstrate the performance. Moreover, a heuristic is proposed to simplify the computation of the variable discount factor. It is seen that the proposed method is easy to implement and provides a good approximation to the optimal variable discount factor.
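The basic fixed-discount-factor EWMA controller that this paper improves upon can be sketched in a few lines. The linear process model, gains, and noise level below are illustrative assumptions; note how the first-run deviation is large while later runs settle near the target, which is exactly the small-batch problem the variable discount factor addresses:

```python
import random

# Sketch of an EWMA run-by-run feedback controller with a fixed discount
# factor. The true process is y = ALPHA_P + BETA_P * x + noise; the
# controller only has an estimate B_EST of the slope and tracks the
# intercept via an EWMA. All numeric values are illustrative assumptions.
random.seed(3)
TARGET = 0.0
ALPHA_P, BETA_P, SIGMA = 5.0, 1.0, 0.1   # true (unknown) process parameters
B_EST, LAMBDA = 1.2, 0.3                 # model slope estimate, discount factor

a_est, x = 0.0, 0.0
outputs = []
for run in range(50):
    y = ALPHA_P + BETA_P * x + random.gauss(0.0, SIGMA)  # observe this run
    outputs.append(y)
    # EWMA update of the intercept estimate, then the recipe for the next run
    a_est = LAMBDA * (y - B_EST * x) + (1 - LAMBDA) * a_est
    x = (TARGET - a_est) / B_EST

print(f"first-run deviation: {abs(outputs[0] - TARGET):.2f}")
print(f"late-run deviation:  {abs(outputs[-1] - TARGET):.2f}")
```

With a small fixed LAMBDA the controller is stable (here the per-run convergence factor is below 1) but slow, so the early runs overshoot badly; a larger discount factor early on, as the variable-discount scheme provides, would shrink those initial deviations.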