Cheng-Der Fuh
National Central University
Publications
Featured research published by Cheng-Der Fuh.
Annals of Statistics | 2004
Cheng-Der Fuh
Let \xi_0, \xi_1, ..., \xi_{\omega-1} be observations from a hidden Markov model with probability distribution P_{\theta_0}, and let \xi_\omega, \xi_{\omega+1}, ... be observations from the hidden Markov model with probability distribution P_{\theta_1}. The parameters \theta_0 and \theta_1 are given, while the change point \omega is unknown. The problem is to raise an alarm as soon as possible after the distribution changes from P_{\theta_0} to P_{\theta_1}, while avoiding false alarms. Specifically, we seek a stopping rule N which allows us to observe the \xi's sequentially, such that E_\infty N is large and, subject to this constraint, sup_k E_k(N - k | N \geq k) is as small as possible. Here E_k denotes expectation under the change point k, and E_\infty denotes expectation under the hypothesis of no change whatever. In this paper we investigate the performance of the Shiryayev-Roberts-Pollak (SRP) rule for change point detection in the dynamic system of hidden Markov models. By making use of a Markov chain representation for the likelihood function, the structure of the asymptotically minimax policy and of the Bayes rule, and sequential hypothesis testing theory for Markov random walks, we show that the SRP procedure is asymptotically minimax in the sense of Pollak [Ann. Statist. 13 (1985) 206-227]. Next, we present a second-order asymptotic approximation for the expected stopping time of such a stopping scheme when \omega = 1. Motivated by the sequential analysis in hidden Markov models, a nonlinear renewal theory for Markov random walks is also given.
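As a concrete (non-HMM) illustration of the detection statistic involved, the sketch below computes the classical Shiryaev-Roberts statistic via the recursion R_n = (1 + R_{n-1}) * L_n, where L_n is the likelihood ratio of the new observation, and stops once R_n exceeds a threshold. The Gaussian pre- and post-change densities and the threshold are hypothetical choices for illustration; the HMM setting of the paper requires likelihood ratios computed through the hidden state.

```python
# A minimal sketch of a Shiryaev-Roberts-type change detection rule for
# i.i.d. Gaussian observations (illustration only; the HMM case in the
# paper replaces the simple likelihood ratio with one driven by the chain).
import numpy as np
from scipy.stats import norm

def shiryaev_roberts_stop(xs, f0=norm(0, 1), f1=norm(1, 1), threshold=1e4):
    """Return the first index n at which R_n >= threshold, or None."""
    R = 0.0
    for n, x in enumerate(xs, start=1):
        lr = f1.pdf(x) / f0.pdf(x)      # likelihood ratio of the new observation
        R = (1.0 + R) * lr              # Shiryaev-Roberts recursion
        if R >= threshold:
            return n
    return None

rng = np.random.default_rng(0)
omega = 200                             # simulated (unknown) change point
xs = np.concatenate([rng.normal(0, 1, omega), rng.normal(1, 1, 300)])
print("alarm raised at n =", shiryaev_roberts_stop(xs))
```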
IEEE Transactions on Reliability | 2010
I-Tang Yu; Cheng-Der Fuh
Degradation analysis is a tool for assessing the lifetime distribution in reliability analysis. In degradation analysis, the lifetime of a product is defined as the time when the value of a chosen degradation characteristic reaches a predetermined threshold. This kind of failure is called a soft failure, in contrast with a hard failure, which means that the product is not functioning at any positive performance level. In this article, we introduce the idea of treating the threshold for the degradation characteristic as random. The difference between the time to soft failure and the time to hard failure is then modeled. We modify Lu & Meeker's two-stage method, and a new approach named the three-stage method is developed. We apply both the two- and three-stage methods to a simulation study. From the simulation study, we conclude that when the time to soft failure is very different from the time to hard failure, the three-stage method performs better than the two-stage method. Fatigue-crack growth data are analyzed at the end.
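To make the soft-failure definition concrete, the sketch below simulates linear degradation paths and records the first time each path crosses a threshold, first fixed and then drawn at random for each unit. The linear path model, parameter values, and the lognormal threshold are hypothetical choices for illustration, not the models fitted in the article.

```python
# Minimal sketch: soft-failure times as the first crossing of a degradation
# threshold, with the threshold either fixed or random (illustrative model).
import numpy as np

rng = np.random.default_rng(1)
n_units, n_times = 500, 200
t = np.linspace(0.0, 10.0, n_times)

# Hypothetical degradation paths: random slope plus measurement noise.
slopes = rng.lognormal(mean=0.0, sigma=0.3, size=n_units)
paths = slopes[:, None] * t[None, :] + rng.normal(0, 0.05, (n_units, n_times))

def first_crossing_times(paths, thresholds):
    """Time at which each path first reaches its threshold (np.inf if never)."""
    crossed = paths >= np.asarray(thresholds).reshape(-1, 1)
    idx = crossed.argmax(axis=1)                 # index of the first True per row
    return np.where(crossed.any(axis=1), t[idx], np.inf)

fixed_thr = first_crossing_times(paths, np.full(n_units, 5.0))
random_thr = first_crossing_times(paths, rng.lognormal(np.log(5.0), 0.2, n_units))
print("mean soft-failure time, fixed threshold :", fixed_thr[np.isfinite(fixed_thr)].mean())
print("mean soft-failure time, random threshold:", random_thr[np.isfinite(random_thr)].mean())
```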
Annals of Statistics | 2006
Cheng-Der Fuh
Motivated by studying asymptotic properties of the maximum likelihood estimator (MLE) in stochastic volatility (SV) models, in this paper we investigate likelihood estimation in state space models. We first prove, under some regularity conditions, that there is a consistent sequence of roots of the likelihood equation that is asymptotically normal with the inverse of the Fisher information as its variance. Under the additional assumption that the likelihood equation has a unique root for each n, this yields a consistent sequence of estimators of the unknown parameters. If, in addition, the supremum of the log likelihood function is integrable, the MLE exists and is strongly consistent. An Edgeworth expansion of the approximate solution of the likelihood equation is also established. Several examples, including Markov switching models, ARMA models, (G)ARCH models and stochastic volatility (SV) models, are given for illustration.
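A minimal numerical companion to likelihood estimation in a state space model is sketched below: the log likelihood of a linear Gaussian model x_t = a x_{t-1} + w_t, y_t = x_t + v_t is evaluated by a Kalman filter and maximized over a. The model, parameter values, and optimizer settings are hypothetical and far simpler than the SV and Markov switching examples treated in the paper.

```python
# Minimal sketch: maximum likelihood in a linear Gaussian state space model
# via the Kalman filter (illustration only).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
a_true, q, r, T = 0.8, 1.0, 1.0, 2000
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + rng.normal(0, np.sqrt(q))
    y[t] = x[t] + rng.normal(0, np.sqrt(r))

def neg_loglik(a, y=y, q=q, r=r):
    """Negative log likelihood from the Kalman filter prediction errors."""
    m, P, ll = 0.0, 1.0, 0.0
    for obs in y:
        m_pred, P_pred = a * m, a * a * P + q                 # predict
        S = P_pred + r                                        # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (obs - m_pred) ** 2 / S)
        K = P_pred / S                                        # Kalman gain
        m, P = m_pred + K * (obs - m_pred), (1 - K) * P_pred  # update
    return -ll

res = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded")
print("MLE of a:", res.x, "(true value:", a_true, ")")
```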
International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems | 2004
Wen-Liang Hung; Jinn-Shing Lee; Cheng-Der Fuh
It is well known that an intuitionistic fuzzy relation is a generalization of a fuzzy relation, and there are situations where intuitionistic fuzzy relations are more appropriate. This paper discusses fuzzy clustering based on intuitionistic fuzzy relations. On the basis of max-t & min-s compositions, we discuss an n-step procedure which extends Yang and Shih's [17] n-step procedure. A similarity-relation matrix is obtained by starting from a proximity-relation matrix and applying the proposed n-step procedure. We then propose a clustering algorithm for the similarity-relation matrix. Numerical comparisons of three critical max-t & min-s compositions, max-t1 & min-s1, max-t2 & min-s2 and max-t3 & min-s3, are made. The results show that the max-t1 & min-s1 composition has better performance. Sometimes data are missing, so that the proximity-relation matrix is incomplete. Imputation is a general and flexible method for handling the missing-data problem. In this paper we also discuss a simple form of imputation that estimates missing values by max-t & min-s compositions.
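As a concrete illustration of composing fuzzy relations, the sketch below iterates the classical max-min composition of a proximity-relation matrix with itself until it stabilizes, then clusters elements by thresholding the resulting similarity relation. The max-min choice of composition, the example matrix, and the threshold are hypothetical; the paper works with intuitionistic fuzzy relations and several max-t & min-s pairs.

```python
# Minimal sketch: iterate max-min composition of a fuzzy proximity relation
# until it becomes a similarity relation, then cluster by an alpha-cut.
# (Illustration only; the paper treats intuitionistic relations and other
# max-t & min-s compositions.)
import numpy as np

def max_min_compose(R, Q):
    """(R o Q)[i, j] = max_k min(R[i, k], Q[k, j])."""
    return np.max(np.minimum(R[:, :, None], Q[None, :, :]), axis=1)

def transitive_closure(R, max_iter=100):
    """Repeat R <- R o R until it no longer changes."""
    for _ in range(max_iter):
        R2 = max_min_compose(R, R)
        if np.allclose(R2, R):
            return R2
        R = R2
    return R

# Hypothetical proximity-relation matrix (reflexive, symmetric).
R = np.array([[1.0, 0.8, 0.0, 0.1],
              [0.8, 1.0, 0.4, 0.0],
              [0.0, 0.4, 1.0, 0.9],
              [0.1, 0.0, 0.9, 1.0]])

S = transitive_closure(R)
alpha = 0.5
clusters = (S >= alpha).astype(int)   # alpha-cut: 1 means "same cluster"
print(S)
print(clusters)
```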
Educational and Psychological Measurement | 2008
Yi-Hsuan Lee; Edward H. Ip; Cheng-Der Fuh
Although computerized adaptive tests have enjoyed tremendous growth, solutions to important problems remain unavailable. One such problem is the control of the item exposure rate. Because adaptive algorithms are designed to select optimal items, they choose items with high discriminating power. These items are therefore selected more often than others, leading to both overexposure of some items and underutilization of other parts of the item pool. Overused items are often compromised, creating a security problem that could threaten the validity of a test. Building on a previously proposed stratification scheme for controlling the exposure rate in one-dimensional tests, the authors extend the method to multidimensional tests. A strategy is proposed based on stratification in accordance with a functional of the vector of discrimination parameters, which can be implemented with minimal computational overhead. Both theoretical and empirical validation studies are provided. Empirical results indicate a significant improvement over the commonly used method of controlling the exposure rate, requiring only a reasonable sacrifice in efficiency.
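The sketch below illustrates the basic idea behind stratified item selection for exposure control: the pool is partitioned into strata by a scalar functional of the discrimination vector, and items are administered stratum by stratum, starting from the least discriminating items. The pool size, number of strata, and the choice of functional (here the norm of the discrimination vector) are hypothetical choices for illustration, not the scheme specified in the article.

```python
# Minimal sketch of stratified item selection for exposure control:
# stratify the pool by a scalar functional of the discrimination vector,
# then administer early items from low-discrimination strata. (Illustrative only.)
import numpy as np

rng = np.random.default_rng(3)
n_items, n_dims, n_strata, test_length = 300, 3, 4, 20

a = rng.lognormal(0.0, 0.4, size=(n_items, n_dims))   # discrimination vectors
functional = np.linalg.norm(a, axis=1)                # hypothetical scalar functional

# Partition items into equal-size strata ordered by the functional.
strata = np.array_split(np.argsort(functional), n_strata)

def administer_test(strata, test_length):
    """Draw items stratum by stratum (low discrimination first)."""
    per_stratum = test_length // len(strata)
    selected = []
    for stratum in strata:
        selected.extend(rng.choice(stratum, size=per_stratum, replace=False))
    return selected

exposure = np.zeros(n_items)
for _ in range(1000):                                 # simulate 1000 examinees
    for item in administer_test(strata, test_length):
        exposure[item] += 1
exposure /= 1000
print("max exposure rate:", exposure.max(),
      "  items never used:", int((exposure == 0).sum()))
```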
Annals of Applied Probability | 2004
Cheng-Der Fuh
Let {X_n, n \geq 0} be a Markov chain on a general state space X with transition probability P and stationary probability \pi. Suppose an additive component S_n takes values in the real line R and is adjoined to the chain such that {(X_n, S_n), n \geq 0} is a Markov random walk. In this paper, we prove a uniform Markov renewal theorem with an estimate on the rate of convergence. This result is applied to boundary crossing problems for {(X_n, S_n), n \geq 0}. To be more precise, for given b \geq 0, define the stopping time \tau = \tau(b) = \inf{n: S_n > b}. When the drift \mu of the random walk S_n is 0, we derive a one-term Edgeworth-type asymptotic expansion for the first passage probabilities P_\pi{\tau < m} and P_\pi{\tau < m, S_m < c}, where m \leq \infty, c \leq b, and P_\pi denotes the probability under the initial distribution \pi. When \mu \neq 0, Brownian approximations for the first passage probabilities with correction terms are derived.
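As a simple numerical companion, the sketch below estimates a first passage probability P_\pi{\tau < m} by Monte Carlo for a Markov random walk driven by a two-state chain. The chain, the increment distributions, the boundary, and the horizon are hypothetical; the paper's contribution is the analytic expansion and Brownian approximation, not this simulation.

```python
# Minimal sketch: Monte Carlo estimate of a first passage probability
# P_pi{tau < m} for a Markov random walk driven by a two-state chain.
# (Illustration only; the paper derives analytic expansions for such quantities.)
import numpy as np

rng = np.random.default_rng(4)
P = np.array([[0.9, 0.1],          # transition matrix of the driving chain
              [0.2, 0.8]])
means = np.array([0.05, -0.05])    # state-dependent increment means (drift near 0)
b, m, n_rep = 3.0, 200, 2000

def first_passage_before(b, m):
    """Return True if S_n exceeds b before time m on one simulated path."""
    x, s = rng.integers(2), 0.0    # rough stand-in for a stationary start
    for _ in range(m):
        s += rng.normal(means[x], 1.0)     # additive component S_n
        if s > b:
            return True
        x = rng.choice(2, p=P[x])          # advance the driving chain
    return False

hits = sum(first_passage_before(b, m) for _ in range(n_rep))
print("estimated P_pi(tau < m) ~", hits / n_rep)
```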
Stochastic Processes and their Applications | 2001
Gerold Alsmeyer; Cheng-Der Fuh
Let (X, d) be a complete separable metric space and (F_n)_{n \geq 0} a sequence of i.i.d. random functions from X to X which are uniform Lipschitz, that is, L_n = sup_{x \neq y} d(F_n(x), F_n(y)) / d(x, y) < \infty a.s.
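The sketch below gives a concrete instance of such iterated random Lipschitz functions: random affine maps F_n(x) = A_n x + B_n on the real line, which contract on average when the coefficients satisfy E log|A_1| < 0. The distributions of A_n and B_n are hypothetical choices used only to illustrate the setting.

```python
# Minimal sketch: iterated i.i.d. random Lipschitz maps F_n(x) = A_n * x + B_n.
# Under mean contraction (E log|A_1| < 0) the iterations admit a stationary
# limit; here we simulate forward iterations from two starting points and
# watch them coalesce. (Illustration only.)
import numpy as np

rng = np.random.default_rng(5)
n_steps = 60
A = rng.uniform(0.2, 0.9, n_steps)     # Lipschitz constants L_n = |A_n| < 1
B = rng.normal(0.0, 1.0, n_steps)

x, y = -50.0, 50.0                     # two very different starting points
for n in range(n_steps):
    x = A[n] * x + B[n]                # forward iteration from x
    y = A[n] * y + B[n]                # the same random maps applied to y
    if n % 10 == 0:
        print(f"n={n:2d}  |x - y| = {abs(x - y):.3e}")
```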
Stochastic Processes and their Applications | 2000
Cheng-Der Fuh; Cun-Hui Zhang
We provide moment inequalities and sufficient conditions for the quick convergence of Markov random walks, without assuming uniform ergodicity of the underlying Markov chain. Our approach is based on martingales associated with the Poisson equation and on Wald equations for the second moment with a variance formula. These results are applied to nonlinear renewal theory for Markov random walks. A random coefficient autoregression model is investigated as an example.
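The random coefficient autoregression mentioned as an example can be simulated as below: X_n = (a + b_n) X_{n-1} + e_n with i.i.d. perturbations b_n and errors e_n, and the additive component S_n is a partial sum of a function of the chain. All parameter values and the chosen function are hypothetical and serve only to illustrate the Markov random walk structure.

```python
# Minimal sketch: a random coefficient autoregression X_n = (a + b_n) X_{n-1} + e_n
# and the Markov random walk S_n = sum_{i<=n} g(X_i) built on it. (Illustration only.)
import numpy as np

rng = np.random.default_rng(6)
a, n = 0.5, 100_000
b = rng.normal(0.0, 0.1, n)       # random coefficient perturbations
e = rng.normal(0.0, 1.0, n)       # innovations

X = np.zeros(n)
for i in range(1, n):
    X[i] = (a + b[i]) * X[i - 1] + e[i]

g = X ** 2                        # hypothetical additive functional g(X_i)
S = np.cumsum(g)                  # Markov random walk S_n
print("S_n / n for n = 10^3, 10^4, 10^5:",
      [round(S[k - 1] / k, 3) for k in (1_000, 10_000, 100_000)])
```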
Operations Research | 2011
Cheng-Der Fuh; Inchi Hu; Ya-Hui Hsu; Ren-Her Wang
Simulation of small probabilities has important applications in many disciplines. The probabilities considered in value-at-risk (VaR) are moderately small. However, the variance reduction techniques developed in the literature for VaR computation are based on large-deviations methods, which are good for very small probabilities. Modeling heavy-tailed risk factors using multivariate t distributions, we develop a new method for VaR computation. We show that the proposed method minimizes the variance of the importance-sampling estimator exactly, whereas previous methods produce approximations to the exact solution. Thus, the proposed method consistently outperforms existing methods derived from large deviations theory under various settings. The results are confirmed by a simulation study.
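To illustrate the kind of tail probability involved, the sketch below estimates P(L > x) for a portfolio loss driven by multivariate t risk factors, first by crude Monte Carlo and then by a simple mean-shifted importance sampling estimator. The portfolio weights, degrees of freedom, and the mean-shift tilting are hypothetical illustrative choices, not the variance-minimizing scheme developed in the paper.

```python
# Minimal sketch: tail probability P(L > x) under multivariate-t risk factors,
# by crude Monte Carlo and by a simple mean-shifted importance sampler.
# (Illustration only; the paper derives the variance-minimizing sampler.)
import numpy as np
from scipy.stats import multivariate_t

rng = np.random.default_rng(7)
d, nu = 5, 6
w = np.full(d, 1.0 / d)            # hypothetical portfolio weights
x_level = 2.5                      # loss level of interest
n = 100_000

target = multivariate_t(loc=np.zeros(d), shape=np.eye(d), df=nu)

# Crude Monte Carlo.
Z = target.rvs(size=n, random_state=rng)
p_mc = np.mean(Z @ w > x_level)

# Importance sampling with a mean-shifted multivariate t proposal.
shift = x_level * w / (w @ w)      # hypothetical shift toward the loss region
proposal = multivariate_t(loc=shift, shape=np.eye(d), df=nu)
Y = proposal.rvs(size=n, random_state=rng)
weights = np.exp(target.logpdf(Y) - proposal.logpdf(Y))
p_is = np.mean((Y @ w > x_level) * weights)

print(f"crude MC: {p_mc:.2e}   importance sampling: {p_is:.2e}")
```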
Communications in Statistics-theory and Methods | 2003
Cheng-Der Fuh; Inchi Hu; Shih-Kuei Lin
To improve the empirical performance of the Black-Scholes model, many alternative models have been proposed to address the leptokurtic feature, volatility smile, and volatility clustering effects of asset return distributions. However, analytical tractability remains a problem for most alternative models. In this article, we study a class of hidden Markov models, including Markov switching models and stochastic volatility models, that can incorporate the leptokurtic feature and volatility clustering effects, as well as provide analytical solutions to option pricing. We show that these models can generate long memory phenomena when the transition probabilities depend on the time scale. We also provide an explicit analytic formula for the arbitrage-free price of European options under these models. The issues of statistical estimation and errors in option pricing are also discussed for the Markov switching models.
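A simple way to see why a Markov switching model stays tractable: conditional on the regime path, returns are Gaussian, so a European call price can be written as a mixture of Black-Scholes prices over the integrated variance accumulated along the regime path. The sketch below prices a call under a hypothetical two-state switching volatility model by averaging over simulated regime paths; the parameters and the risk-neutral dynamics assumed here (regimes independent of the Brownian driver, no regime risk premium) are illustrative, not the paper's explicit formula.

```python
# Minimal sketch: European call under a two-state Markov switching volatility
# model, priced by conditioning on regime paths (Black-Scholes mixture).
# (Illustration only; the paper gives an explicit analytic formula.)
import numpy as np
from scipy.stats import norm

def black_scholes_call(S0, K, r, sigma, T):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(8)
S0, K, r, T = 100.0, 100.0, 0.02, 1.0
sigmas = np.array([0.15, 0.35])          # regime volatilities (hypothetical)
P = np.array([[0.95, 0.05],              # regime transition matrix per step
              [0.10, 0.90]])
n_steps, n_paths = 50, 5000
dt = T / n_steps

prices = np.empty(n_paths)
for i in range(n_paths):
    state, var_sum = 0, 0.0
    for _ in range(n_steps):
        var_sum += sigmas[state] ** 2 * dt    # accumulate integrated variance
        state = rng.choice(2, p=P[state])
    sigma_eff = np.sqrt(var_sum / T)          # effective volatility for this path
    prices[i] = black_scholes_call(S0, K, r, sigma_eff, T)

print("Markov switching call price ~", prices.mean())
```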