Parametric inference in a perturbed gamma degradation process
L. Bordes, C. Paroissin, A. Salami
Université de Pau et des Pays de l'Adour, Laboratoire de Mathématiques et de leurs Applications, UMR CNRS 5142, Avenue de l'Université, 64013 Pau cedex, France.
Abstract
We consider the gamma process perturbed by an independent Brownian motion as a degradation model, and study the estimation of its parameters. We assume that n independent items are observed at irregular instants. From these observations, we estimate the parameters by the method of moments and study the asymptotic properties (consistency and asymptotic normality) of the estimators. We then derive some particular cases of items observed at regular or non-regular instants. Finally, numerical simulations and two real data applications are provided to illustrate the method.

Keywords: gamma process, Wiener process, method of moments, consistency, asymptotic normality
AMS Classification:
1. Introduction and model
Many authors model degradation by a Wiener diffusion process. Doksum and Høyland [1] applied the Brownian motion with drift to a variable-stress accelerated life testing experiment. Next, Whitmore [2] extended the Wiener degradation process to allow for imperfect inspections. Another interesting extension is the bivariate Wiener process considered by Whitmore et al. [3], in which the degradation process is combined with a marker process (which can be seen as a covariate in medical applications). Finally, Wang [4] studied maximum likelihood inference for a class of Wiener processes including random effects. As noted by Barker [5], such a process is not monotone, so it can take into account minor system repairs over time; in addition, it can take negative values. Although these behaviours are difficult to interpret physically, they can be explained by the above-mentioned phenomena, namely minor repairs or degradation measurement errors. It means that, for some types of degradation models, allowing negative increments is appropriate.

However, in many situations the physical degradation process can be considered monotone, while the observed process, being a perturbation of the degradation process, is no longer monotone. Physical degradation processes are usually described by monotone Lévy processes such as the gamma process or the compound Poisson process. These processes imply that the system state cannot improve over time, so the system cannot return to its original state without external maintenance actions. The gamma process was originally proposed by Abdel-Hameed [6] in order to describe degradation phenomena. This process is frequently used in the literature since it is preferable from the physical point of view (monotonic deterioration).
Moreover, calculations with this process are often explicit; it properly accounts for the temporal variability of damage and allows optimal maintenance policies to be determined.

In this paper, we propose a degradation model D = (D_t)_{t≥0} which combines these two approaches as follows:

∀t ≥ 0, D_t = Y_t + τ B_t,

where (Y_t)_{t≥0} is a gamma process such that Y_1 is gamma distributed with scale parameter ξ > 0 and shape parameter α > 0, and (B_t)_{t≥0} is a standard Brownian motion. This model is defined for τ ∈ ℝ and the two processes are assumed to be independent. Without loss of generality, we can assume that τ ≥ 0, since τB_t and −τB_t have the same distribution for all t ≥ 0. The motivations behind considering such a model are the following. First, this model embeds the two approaches mentioned above: when τ = 0, the model reduces to a gamma process; moreover, if α/ξ tends to b > 0 while α/ξ² tends to 0, the model converges weakly to a Brownian motion with positive drift b. Second, measurements in degradation tests are subject to measurement errors, and the Brownian motion in this model can be interpreted as such errors. Finally, our model can take into account minor repairs of the system over time.

In this paper, estimation of the model parameters is derived using the method of moments. The two most common estimation methods for gamma process parameters, namely maximum likelihood and the method of moments, are discussed in [7]; both were initially presented by Çinlar et al. [8]. Besides, Dufresne et al. [9] proposed a conjugate Bayesian analysis in which the scale parameter of the gamma process is assumed to have an inverted gamma prior distribution. A method for estimating a gamma process by means of expert judgement, together with a Bayesian estimation method, is also discussed in [7]. Finally, maximum likelihood and Bayesian estimation of the parameters of the Brownian stress-strength model were studied by Ebrahimi and Ramallingam [10] and by Basu and Lingham [11].

The organization of the paper is as follows. First, we present the general case where n independent processes are observed at irregular instants; both the number of observations and the observation instants may differ between degradation processes. Parameter estimation and the asymptotic properties (consistency and asymptotic normality) of the estimators are studied. Next, we derive some particular cases of items observed at regular or non-regular instants. Finally, numerical simulations and two real data applications are provided to illustrate the method.

∗ Corresponding author: http://lma-umr5142.univ-pau.fr, Tél. 05 59 40 75 38, Fax 05 59 40 75 55.
Email addresses: [email protected] (L. Bordes), [email protected] (C. Paroissin), [email protected] (A. Salami)

Preprint submitted to Statistics & Probability Letters, November 2, 2018.
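For intuition, the model is simple to simulate. The sketch below (illustrative only; the parameter values are hypothetical) draws a discretized path of D_t = Y_t + τB_t, using the fact that the gamma-process increment over a step Δt is Gamma(αΔt, 1/ξ)-distributed (shape, scale) and the Brownian increment is N(0, Δt):

```python
import numpy as np

def simulate_perturbed_gamma(t, alpha, xi, tau, rng):
    """Simulate D_t = Y_t + tau * B_t on the time grid t (with t[0] = 0).

    Y is a gamma process whose increment over dt is Gamma(shape=alpha*dt,
    scale=1/xi), so that E[Y_t] = (alpha/xi) * t; B is a standard Brownian
    motion, simulated through independent N(0, dt) increments.
    """
    dt = np.diff(t)
    dY = rng.gamma(shape=alpha * dt, scale=1.0 / xi)
    dB = rng.normal(0.0, np.sqrt(dt))
    return np.concatenate(([0.0], np.cumsum(dY + tau * dB)))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 101)
path = simulate_perturbed_gamma(t, alpha=1.0, xi=2.0, tau=0.1, rng=rng)
```

For τ = 0 the path is non-decreasing (a pure gamma process); for τ > 0 small local decreases appear, which is exactly the perturbed behaviour the model is meant to capture.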
2. General case
Let (D^{(i)})_{i∈ℕ*} be a sequence of independent and identically distributed (i.i.d.) copies of the degradation model described in the previous section. The i-th degradation process is observed N_i times, with N_i ∈ ℕ*. For all i ∈ ℕ* and all j ∈ {1, ..., N_i}, we denote by t_{ij} the observation instants (with the convention that t_{i0} = 0 for all i ∈ ℕ*). Let θ = (ξ, α, τ²) ∈ Θ = ℝ*₊ × ℝ*₊ × ℝ₊ be the parameter of the model. Estimation of the model parameters is derived using the method of moments; asymptotic properties are then studied.

For any i ∈ ℕ*, any 1 ≤ j ≤ N_i and any k ∈ ℕ, we denote by m^{(k)}_{ij} the k-th moment and by m̄^{(k)}_{ij} the k-th central moment of the increment ΔD_{ij} = D^{(i)}_{t_{ij}} − D^{(i)}_{t_{ij−1}}:

m^{(k)}_{ij} = E[ΔD_{ij}^k]   and   m̄^{(k)}_{ij} = E[(ΔD_{ij} − E[ΔD_{ij}])^k].

Since the gamma process and the Brownian motion are independent, the first moment and the second and third central moments are equal to:

m^{(1)}_{ij} = (α/ξ) Δt_{ij},
m̄^{(2)}_{ij} = (α/ξ² + τ²) Δt_{ij},
m̄^{(3)}_{ij} = (2α/ξ³) Δt_{ij},

where Δt_{ij} = t_{ij} − t_{ij−1}. These expressions can be easily computed from the moments of the gamma distribution (see [12] for non-central moments and [13] for a recursive formula for the central moments) and from those of the normal distribution [12].

Let f be the following differentiable map from Θ to f(Θ), defined for all θ ∈ Θ by:

f(θ) = (m^{(1)}, m^{(2)}, m^{(3)})ᵀ = (m^{(1)}_{ij}/Δt_{ij}, m̄^{(2)}_{ij}/Δt_{ij}, m̄^{(3)}_{ij}/Δt_{ij})ᵀ = (α/ξ, α/ξ² + τ², 2α/ξ³)ᵀ.

The map f is bijective, and the parameters can be expressed in terms of m^{(1)}, m^{(2)} and m^{(3)} as follows:

f⁻¹(m) = (ξ, α, τ²)ᵀ = ( √(2m^{(1)}/m^{(3)}),  m^{(1)} √(2m^{(1)}/m^{(3)}),  m^{(2)} − √(m^{(1)} m^{(3)}/2) )ᵀ,

where m = (m^{(1)}, m^{(2)}, m^{(3)})ᵀ.
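These closed-form moments are easy to check by Monte Carlo. The sketch below (hypothetical parameter values) compares the empirical first moment and second and third central moments of simulated increments with (α/ξ)Δt, (α/ξ² + τ²)Δt and (2α/ξ³)Δt:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, xi, tau, dt = 2.0, 1.5, 0.3, 1.0
n = 200_000

# Increments Delta D = Delta Y + tau * Delta B over a step of length dt.
dY = rng.gamma(shape=alpha * dt, scale=1.0 / xi, size=n)
dB = rng.normal(0.0, np.sqrt(dt), size=n)
dD = dY + tau * dB

m1_emp = dD.mean()
m2_emp = ((dD - m1_emp) ** 2).mean()     # empirical 2nd central moment
m3_emp = ((dD - m1_emp) ** 3).mean()     # empirical 3rd central moment

m1_th = alpha / xi * dt                  # (alpha/xi) dt
m2_th = (alpha / xi**2 + tau**2) * dt    # (alpha/xi^2 + tau^2) dt
m3_th = 2 * alpha / xi**3 * dt           # (2 alpha/xi^3) dt
```

Note that the Brownian part contributes to the variance but not to the third central moment, which is why m^{(3)} identifies the gamma component alone.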
Let m̂_n = (m̂^{(1)}_n, m̂^{(2)}_n, m̂^{(3)}_n)ᵀ be the empirical estimator of m, defined by:

m̂^{(1)}_n = (Σ_{i=1}^n N_i)⁻¹ Σ_{i=1}^n Σ_{j=1}^{N_i} ΔD_{ij}/Δt_{ij},
m̂^{(2)}_n = (Σ_{i=1}^n N_i)⁻¹ Σ_{i=1}^n Σ_{j=1}^{N_i} (ΔD_{ij} − Δt_{ij} m̂^{(1)}_n)²/Δt_{ij},
m̂^{(3)}_n = (Σ_{i=1}^n N_i)⁻¹ Σ_{i=1}^n Σ_{j=1}^{N_i} (ΔD_{ij} − Δt_{ij} m̂^{(1)}_n)³/Δt_{ij}.

The estimator θ̂_n = f⁻¹(m̂_n) of θ = (ξ, α, τ²) is therefore defined by:

ξ̂_n = √(2m̂^{(1)}_n/m̂^{(3)}_n),   α̂_n = m̂^{(1)}_n √(2m̂^{(1)}_n/m̂^{(3)}_n)   and   τ̂²_n = m̂^{(2)}_n − √(m̂^{(1)}_n m̂^{(3)}_n/2).

We first recall the following theorem (for more details, see [14]).

Theorem 1.
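A direct implementation of these estimators can be sketched as follows (the flat-array layout, pooling all increments ΔD_ij and steps Δt_ij across items, is our own choice), together with a sanity check on simulated data with known, hypothetical parameter values:

```python
import numpy as np

def estimate_theta(increments, dts):
    """Method-of-moments estimator (xi_hat, alpha_hat, tau2_hat).

    `increments` holds the pooled increments Delta D_ij and `dts` the
    corresponding time steps Delta t_ij, both as flat arrays.
    """
    N = len(increments)
    m1 = np.sum(increments / dts) / N
    m2 = np.sum((increments - dts * m1) ** 2 / dts) / N
    m3 = np.sum((increments - dts * m1) ** 3 / dts) / N
    xi = np.sqrt(2 * m1 / m3)          # xi_hat = sqrt(2 m1 / m3)
    alpha = m1 * xi                    # alpha_hat = m1 * sqrt(2 m1 / m3)
    tau2 = m2 - np.sqrt(m1 * m3 / 2)   # tau2_hat = m2 - sqrt(m1 m3 / 2)
    return xi, alpha, tau2

# Sanity check: irregular time steps, known parameters.
rng = np.random.default_rng(3)
alpha0, xi0, tau0 = 2.0, 1.5, 0.3
dts = rng.uniform(0.5, 1.5, size=200_000)
dD = rng.gamma(alpha0 * dts, 1.0 / xi0) + tau0 * rng.normal(0.0, np.sqrt(dts))
xi_hat, alpha_hat, tau2_hat = estimate_theta(dD, dts)
```

On large samples the three estimates recover (ξ, α, τ²); in small samples m̂^{(3)}_n may be close to zero, so the square roots should be guarded in production code.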
Let (a_n)_{n≥1} be a sequence of positive numbers and let (X_n)_{n≥1} be a sequence of independent random variables. Set S_n = Σ_{k=1}^n X_k. If a_n → ∞ as n → ∞ and Σ_{n≥1} Var[X_n]/a_n² < ∞, then (S_n − E[S_n])/a_n → 0 almost surely as n → ∞.

Then we establish the following lemma.
Lemma 2.
We have Σ_{n≥1} N_n (Σ_{i=1}^n N_i)⁻² < ∞.

Proof. Set A_n = N_1 + ... + N_n, so that N_n = A_n − A_{n−1}. Then

Σ_{k=1}^n N_k/A_k² = N_1/A_1² + Σ_{k=2}^n (A_k − A_{k−1})/A_k²
≤ 1/N_1 + Σ_{k=2}^n (A_k − A_{k−1})/(A_k A_{k−1})
= 1/N_1 + Σ_{k=2}^n (1/A_{k−1} − 1/A_k)
= 1/N_1 + 1/A_1 − 1/A_n ≤ 2/N_1 ≤ 2. □

In the sequel we prove the consistency of θ̂_n.

Theorem 3.
Under the following assumptions:

(H1) Σ_{n≥1} [Σ_{j=1}^{N_n} (Δt_{nj})⁻¹] (Σ_{i=1}^n N_i)⁻² < ∞,
(H2) there exists d_u < ∞ such that Δt_{ij} ≤ d_u for all i ∈ ℕ* and all j ∈ {1, ..., N_i},

θ̂_n converges almost surely to θ as n tends to infinity.

Proof. It is enough to prove that m̂_n tends to m almost surely as n tends to infinity: since f⁻¹ is continuous on f(Θ), the continuous mapping theorem [15] then yields θ̂_n → θ a.s. Hence let us prove the almost sure convergence of m̂_n.

Almost sure convergence of m̂^{(1)}_n to m^{(1)}. Write

m̂^{(1)}_n − m^{(1)} = (Σ_{i=1}^n N_i)⁻¹ Σ_{i=1}^n X_i,   with X_i = Σ_{j=1}^{N_i} (ΔD_{ij}/Δt_{ij} − m^{(1)}).

Since N_i ≥ 1 for all i ∈ ℕ*, we have Σ_{i=1}^n N_i → ∞. Moreover, by Assumption (H1) and since the increments are independent, the following series is finite:

Σ_{n≥1} Var(X_n) (Σ_{i=1}^n N_i)⁻² = (α/ξ² + τ²) Σ_{n≥1} [Σ_{j=1}^{N_n} (Δt_{nj})⁻¹] (Σ_{i=1}^n N_i)⁻² < ∞.

By Theorem 1, m̂^{(1)}_n → m^{(1)} a.s.

Almost sure convergence of m̂^{(2)}_n to m^{(2)}. Set

m̃^{(2)}_n = (Σ_{i=1}^n N_i)⁻¹ Σ_{i=1}^n Σ_{j=1}^{N_i} (Δt_{ij})⁻¹ (ΔD_{ij} − E[ΔD_{ij}])².

Then m̂^{(2)}_n − m^{(2)} = (m̂^{(2)}_n − m̃^{(2)}_n) + (m̃^{(2)}_n − m^{(2)}), and it suffices to prove that both terms tend almost surely to 0 as n tends to infinity.

1. Almost sure convergence of m̂^{(2)}_n − m̃^{(2)}_n to 0. Expanding the square gives

m̂^{(2)}_n − m̃^{(2)}_n = (m^{(1)} − m̂^{(1)}_n)² (Σ_{i=1}^n N_i)⁻¹ Σ_{i=1}^n Σ_{j=1}^{N_i} Δt_{ij} + 2(m^{(1)} − m̂^{(1)}_n) (Σ_{i=1}^n N_i)⁻¹ Σ_{i=1}^n Σ_{j=1}^{N_i} (ΔD_{ij} − E[ΔD_{ij}]).

Using Assumption (H2) and the convergence shown above, the first term tends a.s. to 0. The second term also tends a.s. to 0, since m̂^{(1)}_n − m^{(1)} → 0 a.s. and (Σ_{i=1}^n N_i)⁻¹ Σ_{i=1}^n Σ_{j=1}^{N_i} (ΔD_{ij} − E[ΔD_{ij}]) → 0 a.s.; indeed, by Lemma 2, Assumption (H2) and the independence of the increments,

Σ_{n≥1} [Σ_{j=1}^{N_n} Var(ΔD_{nj})] (Σ_{i=1}^n N_i)⁻² = (α/ξ² + τ²) Σ_{n≥1} [Σ_{j=1}^{N_n} Δt_{nj}] (Σ_{i=1}^n N_i)⁻² ≤ (α/ξ² + τ²) d_u Σ_{n≥1} N_n (Σ_{i=1}^n N_i)⁻² < ∞,

so Theorem 1 applies. Thus m̂^{(2)}_n − m̃^{(2)}_n → 0 a.s.

2. Almost sure convergence of m̃^{(2)}_n to m^{(2)}. Applying Theorem 1 again, m̃^{(2)}_n − m^{(2)} → 0 a.s., because there exist constants κ₁(θ) and κ₂(θ) depending only on θ (one can compute them explicitly) such that

Σ_{n≥1} [Σ_{j=1}^{N_n} (Δt_{nj})⁻² Var((ΔD_{nj} − E[ΔD_{nj}])²)] (Σ_{i=1}^n N_i)⁻² = κ₁(θ) Σ_{n≥1} [Σ_{j=1}^{N_n} (Δt_{nj})⁻¹] (Σ_{i=1}^n N_i)⁻² + κ₂(θ) Σ_{n≥1} N_n (Σ_{i=1}^n N_i)⁻²,

which is finite by Assumption (H1) and Lemma 2.

Almost sure convergence of m̂^{(3)}_n to m^{(3)}. Similarly, set

m̃^{(3)}_n = (Σ_{i=1}^n N_i)⁻¹ Σ_{i=1}^n Σ_{j=1}^{N_i} (Δt_{ij})⁻¹ (ΔD_{ij} − E[ΔD_{ij}])³,

and write m̂^{(3)}_n − m^{(3)} = (m̂^{(3)}_n − m̃^{(3)}_n) + (m̃^{(3)}_n − m^{(3)}).

1. Almost sure convergence of m̂^{(3)}_n − m̃^{(3)}_n to 0. Expanding the cube, with X_{ij} = ΔD_{ij} − E[ΔD_{ij}]:

m̂^{(3)}_n − m̃^{(3)}_n = (m^{(1)} − m̂^{(1)}_n) (Σ_{i=1}^n N_i)⁻¹ Σ_{i=1}^n Σ_{j=1}^{N_i} [ 3X_{ij}² + 3X_{ij} Δt_{ij} (m^{(1)} − m̂^{(1)}_n) + (Δt_{ij})² (m^{(1)} − m̂^{(1)}_n)² ].

By Assumption (H2), (Σ_{i=1}^n N_i)⁻¹ ΣΣ X_{ij}² ≤ d_u m̃^{(2)}_n → d_u m^{(2)} a.s., while (Σ_{i=1}^n N_i)⁻¹ ΣΣ X_{ij} → 0 a.s. (shown above) and m̂^{(1)}_n → m^{(1)} a.s. Hence each of the three terms is the product of a factor tending a.s. to 0 and a factor that is a.s. bounded, so m̂^{(3)}_n − m̃^{(3)}_n → 0 a.s.

2. Almost sure convergence of m̃^{(3)}_n to m^{(3)}. After tedious but straightforward calculations, one obtains constants κ₃(θ), κ₄(θ) and κ₅(θ), depending only on θ, such that

Σ_{n≥1} [Σ_{j=1}^{N_n} (Δt_{nj})⁻² Var((ΔD_{nj} − E[ΔD_{nj}])³)] (Σ_{i=1}^n N_i)⁻² = κ₃(θ) Σ_{n≥1} [Σ_{j=1}^{N_n} (Δt_{nj})⁻¹] (Σ_{i=1}^n N_i)⁻² + κ₄(θ) Σ_{n≥1} N_n (Σ_{i=1}^n N_i)⁻² + κ₅(θ) Σ_{n≥1} [Σ_{j=1}^{N_n} Δt_{nj}] (Σ_{i=1}^n N_i)⁻².
All these series converge by Lemma 2 and Assumptions (H1) and (H2). Thus m̃^{(3)}_n → m^{(3)} a.s. □

Before showing the asymptotic normality of m̂_n, we establish the following lemma.

Lemma 4. Assume (H2) and the following assumption:

(H3) for all u ∈ {1, 2, 3}, lim_{n→∞} (Σ_{i=1}^n N_i)⁻¹ Σ_{i=1}^n Σ_{j=1}^{N_i} (Δt_{ij})^{u−2} = c_u < ∞ (note that necessarily c₂ = 1).

Then

(Σ_{i=1}^n N_i)^{1/2} ( m̂^{(1)}_n − m^{(1)}, m̃^{(2)}_n − m^{(2)}, m̃^{(3)}_n − m^{(3)} )ᵀ →d N(0, Σ^{(∞)}) as n → ∞,

where Σ^{(∞)} is the symmetric matrix with entries

Σ^{(∞)}_{11} = (α/ξ² + τ²) c₁,
Σ^{(∞)}_{12} = (2α/ξ³) c₁,
Σ^{(∞)}_{13} = Σ^{(∞)}_{22} = (6α/ξ⁴) c₁ + (2α²/ξ⁴ + 4ατ²/ξ² + 2τ⁴) c₂,
Σ^{(∞)}_{23} = (24α/ξ⁵) c₁ + (18α²/ξ⁵ + 18ατ²/ξ³) c₂,
Σ^{(∞)}_{33} = (120α/ξ⁶) c₁ + (126α²/ξ⁶ + 90ατ²/ξ⁴) c₂ + (15α³/ξ⁶ + 45α²τ²/ξ⁴ + 45ατ⁴/ξ² + 15τ⁶) c₃.

Proof. Since the increments are independent, we apply the Lindeberg-Feller central limit theorem for triangular arrays [15]. Set ΔD̄_{ij} = ΔD_{ij} − E[ΔD_{ij}] and

X_{ij} = (X_{ij1}, X_{ij2}, X_{ij3})ᵀ = ( ΔD̄_{ij}, ΔD̄_{ij}² − E[ΔD̄_{ij}²], ΔD̄_{ij}³ − E[ΔD̄_{ij}³] )ᵀ,

so that

(Σ_{i=1}^n N_i)^{1/2} ( m̂^{(1)}_n − m^{(1)}, m̃^{(2)}_n − m^{(2)}, m̃^{(3)}_n − m^{(3)} )ᵀ = (Σ_{i=1}^n N_i)^{−1/2} Σ_{i=1}^n Σ_{j=1}^{N_i} (Δt_{ij})⁻¹ X_{ij}.

Let us check the Lindeberg condition: for any ε > 0, the quantity

L_n(ε) = (Σ_{i=1}^n N_i)⁻¹ Σ_{i=1}^n Σ_{j=1}^{N_i} (Δt_{ij})⁻² E[ ‖X_{ij}‖² 1{ ‖X_{ij}‖ > ε Δt_{ij} (Σ_{i=1}^n N_i)^{1/2} } ]

must tend to 0 as n → ∞. Using the inclusion

{ ‖X_{ij}‖² > ε² (Δt_{ij})² Σ_{i=1}^n N_i } ⊂ ∪_{k′=1}^3 { |X_{ijk′}| > ε Δt_{ij} (Σ_{i=1}^n N_i)^{1/2}/√3 },

the Markov inequality applied to each indicator, and the Young inequality to separate the resulting products of moments, one obtains a bound of the form

L_n(ε) ≤ (√3/ε) (Σ_{i=1}^n N_i)^{−1/2} (Σ_{i=1}^n N_i)⁻¹ Σ_{i=1}^n Σ_{j=1}^{N_i} (Δt_{ij})⁻² Q(Δt_{ij}),

where Q is a polynomial with zero constant term whose coefficients depend only on θ. Indeed, since the gamma and Brownian increments are independent, for any q ∈ ℕ* we have

E[ΔD_{ij}^q] = Σ_{s=0}^q C(q,s) E[ΔY_{ij}^s] E[(τΔB_{ij})^{q−s}] = Σ_{s=0}^q C(q,s) ξ⁻ˢ [Π_{l=1}^s (αΔt_{ij} + s − l)] (τ²Δt_{ij})^{(q−s)/2} E[B̃^{q−s}] = Δt_{ij} Pol_{q−1}(Δt_{ij}),

where B̃ ∼ N(0, 1) and Pol_{q−1}(Δt_{ij}) denotes a polynomial of degree q − 1 in Δt_{ij} whose coefficients depend only on θ; the same form holds for the central moments, and hence for the moments of the X_{ijk}. By Assumptions (H2) and (H3), (Σ_{i=1}^n N_i)⁻¹ ΣΣ (Δt_{ij})⁻² Q(Δt_{ij}) is bounded, and since (Σ_{i=1}^n N_i)^{−1/2} → 0, we get L_n(ε) → 0.

Next, the variance-covariance matrix Σ_{ij} of X_{ij}, with entries σ_{uv} for 1 ≤ u ≤ v ≤ 3, is given by:

σ₁₁ = (α/ξ² + τ²) Δt_{ij},
σ₁₂ = (2α/ξ³) Δt_{ij},
σ₁₃ = σ₂₂ = (6α/ξ⁴) Δt_{ij} + (2α²/ξ⁴ + 4ατ²/ξ² + 2τ⁴) (Δt_{ij})²,
σ₂₃ = (24α/ξ⁵) Δt_{ij} + (18α²/ξ⁵ + 18ατ²/ξ³) (Δt_{ij})²,
σ₃₃ = (120α/ξ⁶) Δt_{ij} + (126α²/ξ⁶ + 90ατ²/ξ⁴) (Δt_{ij})² + (15α³/ξ⁶ + 45α²τ²/ξ⁴ + 45ατ⁴/ξ² + 15τ⁶) (Δt_{ij})³.

Under Assumption (H3),

Σ^{(∞)} = lim_{n→∞} (Σ_{i=1}^n N_i)⁻¹ Σ_{i=1}^n Σ_{j=1}^{N_i} (Δt_{ij})⁻² Σ_{ij},

whose entries are those given in the statement. Finally, the Lindeberg-Feller theorem yields

(Σ_{i=1}^n N_i)^{−1/2} Σ_{i=1}^n Σ_{j=1}^{N_i} (Δt_{ij})⁻¹ X_{ij} →d N(0, Σ^{(∞)}). □

In the sequel we prove the asymptotic normality of θ̂_n. First, let us establish the asymptotic normality of m̂_n.

Theorem 5.
Under Assumptions (H1)-(H3), we have:

(Σ_{i=1}^n N_i)^{1/2} (m̂_n − m) →d N(0, H) as n → ∞,

where H = A Σ^{(∞)} Aᵀ and

A = ( 1, 0, 0 ; 0, 1, 0 ; −3(α/ξ² + τ²)c₃, 0, 1 ).

Proof. First note that

(Σ_{i=1}^n N_i)^{1/2}(m̂_n − m) = (Σ_{i=1}^n N_i)^{1/2} (0, m̂^{(2)}_n − m̃^{(2)}_n, m̂^{(3)}_n − m̃^{(3)}_n)ᵀ + (Σ_{i=1}^n N_i)^{1/2} (m̂^{(1)}_n − m^{(1)}, m̃^{(2)}_n − m^{(2)}, m̃^{(3)}_n − m^{(3)})ᵀ.

Second,

(Σ_{i=1}^n N_i)^{1/2} (m̂^{(2)}_n − m̃^{(2)}_n) = 2(Σ_{i=1}^n N_i)^{1/2} (m^{(1)} − m̂^{(1)}_n) (Σ_{i=1}^n N_i)⁻¹ ΣΣ (ΔD_{ij} − E[ΔD_{ij}]) + (Σ_{i=1}^n N_i)^{1/2} (m^{(1)} − m̂^{(1)}_n)² (Σ_{i=1}^n N_i)⁻¹ ΣΣ Δt_{ij},

which tends in probability to 0 as n → ∞. Indeed, in the first term, (Σ_{i=1}^n N_i)^{1/2}(m^{(1)} − m̂^{(1)}_n) is asymptotically normally distributed while (Σ_{i=1}^n N_i)⁻¹ ΣΣ (ΔD_{ij} − E[ΔD_{ij}]) tends a.s. to 0; in the second term, (Σ_{i=1}^n N_i)⁻¹ ΣΣ Δt_{ij} converges by (H3), (Σ_{i=1}^n N_i)^{1/2}(m̂^{(1)}_n − m^{(1)}) is asymptotically normal and m̂^{(1)}_n − m^{(1)} → 0 a.s. Hence (Σ_{i=1}^n N_i)^{1/2}(m̂^{(2)}_n − m̃^{(2)}_n) = o_p(1).

Furthermore,

(Σ_{i=1}^n N_i)^{1/2} (m̂^{(3)}_n − m̃^{(3)}_n) = 3 (Σ_{i=1}^n N_i)^{1/2} (m^{(1)} − m̂^{(1)}_n) (Σ_{i=1}^n N_i)⁻¹ ΣΣ (ΔD_{ij} − E[ΔD_{ij}])² + o_p(1).

Let us show that

(Σ_{i=1}^n N_i)⁻¹ ΣΣ [ (ΔD_{ij} − E[ΔD_{ij}])² − Var(ΔD_{ij}) ] → 0 a.s.

Indeed, since the increments are independent, there exist constants κ₆(θ) and κ₇(θ), depending only on θ, such that

Σ_{n≥1} [Σ_{j=1}^{N_n} Var((ΔD_{nj} − E[ΔD_{nj}])²)] (Σ_{i=1}^n N_i)⁻² = κ₆(θ) Σ_{n≥1} [Σ_{j=1}^{N_n} Δt_{nj}] (Σ_{i=1}^n N_i)⁻² + κ₇(θ) Σ_{n≥1} [Σ_{j=1}^{N_n} (Δt_{nj})²] (Σ_{i=1}^n N_i)⁻²,

which converges by Assumption (H2) and Lemma 2, so Theorem 1 applies. Moreover,

(Σ_{i=1}^n N_i)⁻¹ ΣΣ Var(ΔD_{ij}) = (α/ξ² + τ²) (Σ_{i=1}^n N_i)⁻¹ ΣΣ Δt_{ij} → (α/ξ² + τ²) c₃.

Then one can write

(Σ_{i=1}^n N_i)^{1/2} (m̂^{(1)}_n − m^{(1)}, m̂^{(2)}_n − m^{(2)}, m̂^{(3)}_n − m^{(3)})ᵀ = A (Σ_{i=1}^n N_i)^{−1/2} ΣΣ (Δt_{ij})⁻¹ X_{ij} + o_p(1),

and by Lemma 4 it follows that (Σ_{i=1}^n N_i)^{1/2}(m̂_n − m) →d N(0, A Σ^{(∞)} Aᵀ). □

Since f is differentiable and bijective and f⁻¹ is continuously differentiable on f(Θ), the asymptotic normality of θ̂_n follows by applying the δ-method (see Theorem 3.1 in [15]).

Theorem 6. Under Assumptions (H1)-(H3), we have:

(Σ_{i=1}^n N_i)^{1/2} (θ̂_n − θ) →d N(0, M) as n → ∞,

where M = G H Gᵀ and G is the Jacobian matrix of f⁻¹ evaluated at m:

G = (  1/√(2m^{(1)}m^{(3)}),           0,   −√(m^{(1)}/(2(m^{(3)})³))  ;
      (3/2)√(2m^{(1)}/m^{(3)}),        0,   −√((m^{(1)})³/(2(m^{(3)})³)) ;
      −(1/2)√(m^{(3)}/(2m^{(1)})),     1,   −(1/2)√(m^{(1)}/(2m^{(3)}))  ). □

As an application of Theorem 6, one can construct a confidence interval with asymptotic level 1 − ϑ for each parameter:

lim_{n→∞} P( ξ ∈ [ ξ̂_n ± z_{1−ϑ/2} σ(ξ̂_n)/(Σ_{i=1}^n N_i)^{1/2} ] ) = 1 − ϑ,

and similarly for α and τ², where z_{1−ϑ/2} is the critical value of the standard normal distribution and σ(ξ̂_n), σ(α̂_n) and σ(τ̂²_n) are the asymptotic standard deviations of ξ̂_n, α̂_n and τ̂²_n respectively (square roots of the diagonal of the variance-covariance matrix M of Theorem 6). Thus one can test whether τ² = 0. Moreover, by the δ-method and the previous theorem, it can be proved that

(Σ_{i=1}^n N_i)^{1/2} ( α̂_n/ξ̂_n − α/ξ ) →d N(0, G₀ M₀ G₀ᵀ),

where M₀ is the top-left 2 × 2 block of M and G₀ = (−α/ξ², 1/ξ). Hence one can obtain a confidence interval with asymptotic level 1 − ϑ for α/ξ. As mentioned in the introduction, this is useful for testing the Brownian motion with positive drift model against the gamma process model.
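As an illustration, here is a minimal sketch of such an interval (our own simplification: it targets only the mean degradation rate α/ξ = m^{(1)}, whose plug-in standard error is √(m̂^{(2)}_n Σ(Δt_ij)⁻¹)/ΣN_i, rather than inverting the full matrix M):

```python
import numpy as np
from statistics import NormalDist

def ci_mean_rate(increments, dts, level=0.95):
    """Asymptotic confidence interval for the mean degradation rate alpha/xi.

    Uses Var(Delta D_ij / Delta t_ij) = (alpha/xi^2 + tau^2) / Delta t_ij and
    the fact that m2_hat consistently estimates alpha/xi^2 + tau^2.
    """
    N = len(increments)
    m1 = np.sum(increments / dts) / N
    m2 = np.sum((increments - dts * m1) ** 2 / dts) / N
    se = np.sqrt(m2 * np.sum(1.0 / dts)) / N      # plug-in standard error
    z = NormalDist().inv_cdf(0.5 + level / 2.0)   # normal critical value
    return m1 - z * se, m1 + z * se

# Hypothetical example with simulated data and known parameters.
rng = np.random.default_rng(4)
alpha0, xi0, tau0 = 2.0, 1.5, 0.3
dts = rng.uniform(0.5, 1.5, size=100_000)
dD = rng.gamma(alpha0 * dts, 1.0 / xi0) + tau0 * rng.normal(0.0, np.sqrt(dts))
lo, hi = ci_mean_rate(dD, dts)
```

Intervals for the individual parameters ξ, α and τ² would additionally require the sandwich matrix M = G H Gᵀ of Theorem 6.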
3. Particular cases
Before considering several particular cases corresponding to various sampling schemes, we introduce some stronger but more easily interpreted assumptions:

(A1) Same number of observations for all the processes: ∀i ∈ ℕ*, N_i = N;
(A2) Same instants of observation for all the processes: ∀i ∈ ℕ*, ∀j ∈ {1, ..., N_i}, t_{ij} = t_j;
(A3) Regular instants (not necessarily the same instants for all the processes): ∀i ∈ ℕ*, ∃T_i such that ∀j ∈ {1, ..., N_i}, Δt_{ij} = T_i/N_i;
(A4) Same time interval for the observations: ∃T such that ∀i ∈ ℕ*, t_{iN_i} = T;
(A5) Delay between consecutive observations uniformly bounded from below: ∃d_l > 0, ∀i ∈ ℕ*, ∀j ∈ {1, ..., N_i}, Δt_{ij} ≥ d_l.

Note that (A2) ⇒ (A1). More interesting are the relationships between Assumptions (H1)-(H3) and Assumptions (A1)-(A5). In particular, one can easily check that (A5) ⇒ (H1) (using Lemma 2) and that (A4) ⇒ (H2). Moreover, simplifications may occur under some assumptions. For instance, if (A3) and (A4) are satisfied, then (H1) and (H3) are equivalent respectively to:

(H1′) Σ_{n≥1} N_n² (Σ_{i=1}^n N_i)⁻² < ∞;
(H3′) ∀u ∈ {1, 3}, lim_{n→∞} (Σ_{i=1}^n N_i)⁻¹ Σ_{i=1}^n N_i^{3−u} < ∞.

In addition, if (A1) holds then (H1′) and (H3′) are satisfied. We now consider five different special cases that can be described in terms of Assumptions (A1)-(A5):

• Case 1 - Same number of observations at the same regular instants over [0, T]: (A1)-(A4);
• Case 2 - Same number of observations at the same non-regular instants over [0, T]: (A1), (A2) and (A4);
• Case 3 - N_i = i and regular instants over [0, T]: (A3) and (A4);
• Case 4 - N_i = i and regular instants over [0, iT]: (A3) and (A5);
• Case 5 - N_i = 2^{i−1} and regular instants over [0, T]: (A3) and (A4).

One can easily check that the estimators in Cases 1, 2 and 4 are consistent and asymptotically normal. In Case 3, the estimator is consistent, but asymptotic normality cannot be established using our results. In Case 5, neither consistency nor asymptotic normality can be established using our results.
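These summability conditions are easy to probe numerically. The sketch below (our own illustration) compares the partial sums of the (H1′)-type series Σ_n N_n² (Σ_{i≤n} N_i)⁻² for N_i = i, where they stay bounded, and for geometrically growing N_i, where they do not:

```python
def h1_partial_sum(N_seq):
    """Partial sum of  sum_n N_n^2 / (N_1 + ... + N_n)^2  over N_seq."""
    total, acc = 0, 0.0
    for N in N_seq:
        total += N
        acc += N * N / total**2
    return acc

poly = h1_partial_sum(range(1, 2001))             # N_i = i: series converges
geom = h1_partial_sum([2**i for i in range(60)])  # geometric N_i: diverges
```

For N_i = i the n-th term equals 4/(n + 1)², a convergent series, whereas for geometric growth each term approaches a positive constant, so the partial sums grow without bound, which is why consistency cannot be established in Case 5.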
4. Numerical illustration
Here we illustrate our theoretical results through simulations. The parameters were fixed as follows: ξ = α = 0.02 and τ = 0.02. The number of observations for each item was set to N = …, with Δt_i = …, Δt_i = 300 and Δt_i = …. Based on the results given in Tables 1, 2 and 3, we note that the degradation parameters are well estimated whatever the sample size: the larger n is, the better the estimation in terms of bias, MSE and StD.

Table 1: Empirical bias

Bias   n = 50   n = 100   n = 200
ξ        …        …         …
α        …        …         …
τ²       …        …         …

Table 2: Empirical MSE

MSE    n = 50   n = 100   n = 200
ξ        …        …         …
α        …        …         …
τ²       …        …         …
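The simulation study above can be sketched as follows (a toy version with hypothetical parameter values and a regular inspection scheme, not the exact design behind Tables 1-3); it reproduces the qualitative finding that bias and MSE shrink as n grows:

```python
import numpy as np

def estimate_theta(dD, dts):
    # Method-of-moments estimator (xi, alpha, tau^2) from pooled increments.
    N = len(dD)
    m1 = np.sum(dD / dts) / N
    m2 = np.sum((dD - dts * m1) ** 2 / dts) / N
    m3 = np.sum((dD - dts * m1) ** 3 / dts) / N
    xi = np.sqrt(2 * m1 / m3)
    return xi, m1 * xi, m2 - np.sqrt(m1 * m3 / 2)

rng = np.random.default_rng(5)
alpha0, xi0, tau0 = 1.0, 2.0, 0.1   # hypothetical true parameters
N_obs, dt, n_rep = 50, 1.0, 200     # N_obs regular steps per item
truth = np.array([xi0, alpha0, tau0**2])

bias, mse = {}, {}
for n_items in (50, 100, 200):
    dts = np.full(n_items * N_obs, dt)
    est = np.array([
        estimate_theta(rng.gamma(alpha0 * dts, 1.0 / xi0)
                       + tau0 * rng.normal(0.0, np.sqrt(dts), dts.size), dts)
        for _ in range(n_rep)
    ])
    bias[n_items] = est.mean(axis=0) - truth
    mse[n_items] = ((est - truth) ** 2).mean(axis=0)
```

Empirical bias, MSE and standard deviation tables in the style of Tables 1-3 can then be printed directly from the `bias` and `mse` dictionaries.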
Table 3: Empirical standard deviation

StD    n = 50   n = 100   n = 200
ξ        …        …         …
α        …        …         …
τ²       …        …         …

5. Real data application

In what follows, we present the results obtained on two real datasets, described below.

A first example of dataset can be found in [16]. Fifteen components were tested under three different temperatures: 65°C, 85°C and 105°C. Degradation percent values were read out at 200, 500 and 1000 hours. We have estimated the three parameters of the degradation model and constructed, see Table 4, the 95% confidence interval of each parameter; values within brackets are the estimated standard deviations. Let us discuss the results. One can note that ξ decreases as the temperature increases. Moreover, τ² and α/ξ increase with the temperature, while α is almost stable. Finally, from the confidence intervals at 65°C, our model turns out to be a gamma process, since one can accept that τ² = 0.

Table 4: Estimation of parameters and 95% confidence intervals

Parameters          ξ             α             τ²            α/ξ
Estimation (65°C)   5.51 (1.31)   0.01 (…)      … (…96)       …
95% CI              [4.84; 6.18]  [0.01; 0.02]  [0; 10.11]    […; …]
Estimation (85°C)   0.71 (0.37)   0.012 (0.49)  … (…)         …
95% CI              [0.51; 0.89]  [0; 0.26]     [0; 0.06]     [0; …]
Estimation (105°C)  0.29 (1.87)   0.02 (0.14)   0.27 (1.51)   …
95% CI              [0; …24]      [0; 0.09]     [0; 1.04]     [0; 1…]

Whitmore and Schenkelberg [17] presented some heating cable test data. The degradation of the cable is measured as the natural logarithm of its resistance. Degradation is accelerated by thermal stress, so temperature is used as the stress measure. Five test items were baked in an oven at each test temperature; three test temperatures were used, 200°C, 240°C and 260°C, giving a total of 15 items. The clock times are in thousands of hours. The cable is deemed to have failed when the log-resistance reaches ln(2); items were kept on test until the test equipment was required for other projects. We have estimated the three parameters of the degradation model and constructed, see Table 5, the 95% confidence interval of each parameter. As above, values within brackets are the estimated standard deviations. One notes that ξ, α and α/ξ increase as the temperature increases; however, this is not the case for τ². Although we have the same number of items as for the previous dataset, the standard deviations observed here are very large. It is therefore difficult to choose between the two sub-models and, more generally, this may be interpreted as a bad fit of the model.
6. Concluding remarks
In this paper we have proposed a gamma process perturbed by a Brownian motion as a degradation model, for which we derived parameter estimators by the method of moments. Asymptotic properties of these estimators have been established. Since the degradation of a system is also influenced by its environment, it would be interesting to consider a model integrating covariates. Such a model will be studied in a forthcoming paper.

Table 5: Estimation of parameters and 95% confidence intervals
Parameters          ξ             α             τ²            α/ξ
Estimation (200°C)  2.18 (5.07)   0.47 (1.41)   0.03 (9.97)   …
95% CI              [0.77; 3.58]  [0.08; 0.86]  [0; 2.81]     [0; …]
Estimation (240°C)  2.38 (6.74)   2.17 (3.10)   0.14 (11.57)  …
95% CI              [0.51; 4.25]  [1.31; 3.03]  [0; 3.35]     [0; …]
Estimation (260°C)  2.75 (9.01)   5.13 (3.78)   0.04 (17.31)  …
95% CI              […; …74]      [3.87; 6.38]  [0; 5.78]     [0; 2…]

References

[1] K. Doksum, A. Høyland, Models for variable-stress accelerated life testing experiments based on Wiener processes and the inverse Gaussian distribution, Technometrics 34 (1) (1992) 74–82.
[2] G. Whitmore, Estimating degradation by a Wiener diffusion process subject to measurement error, Lifetime Data Analysis 1 (1995) 307–319.
[3] G. Whitmore, M. Crowder, J. Lawless, Failure inference from a marker process based on a bivariate Wiener model, Lifetime Data Analysis 4 (3) (1998) 229–251.
[4] X. Wang, Wiener processes with random effects for degradation data, Journal of Multivariate Analysis 101 (2) (2010) 340–351.
[5] C. Barker, Maintenance policies to guarantee optimal performance of stochastically deteriorating multi-component systems, Ph.D. thesis, School of Engineering and Mathematical Sciences (2006).
[6] M. Abdel-Hameed, A gamma wear process, IEEE Transactions on Reliability 24 (2) (1975) 152–153.
[7] J. M. van Noortwijk, A survey of the application of gamma processes in maintenance, Reliability Engineering and System Safety 94 (1) (2009) 2–21.
[8] E. Çinlar, Z. P. Bažant, E. Osman, Stochastic process for extrapolating concrete creep, J. Eng. Mech. Div. 103 (EM6) (1977) 1069–1088.
[9] F. Dufresne, H. U. Gerber, E. S. W. Shiu, Risk theory with the gamma process, ASTIN Bull. 21 (2) (1991) 177–192.
[10] N. Ebrahimi, T. Ramallingam, Estimation of system reliability in Brownian stress-strength models based on sample paths, Ann. Inst. Stat. Math. 45 (1) (1993) 9–19.
[11] S. Basu, R. T. Lingham, Bayesian estimation of system reliability in Brownian stress-strength models, Ann. Inst. Stat. Math. 55 (1) (2003) 7–19.
[12] N. L. Johnson, S. Kotz, N. Balakrishnan, Continuous Univariate Distributions, Vol. 1, Wiley-Interscience Publication, 1995.
[13] R. Willink, Relationships between central moments and cumulants, with formulae for the central moments of gamma distributions, Communications in Statistics - Theory and Methods 32 (4) (2003) 701–704.
[14] V. V. Petrov, Limit Theorems of Probability Theory - Sums of Independent Random Variables, Oxford University Press, New York, 1995.
[15] A. W. van der Vaart, Asymptotic Statistics, Cambridge University Press, 1998.
[16] NIST/SEMATECH e-Handbook of Statistical Methods, http://…/div898/handbook/apr/section4/apr423.htm, 2010.
[17] G. Whitmore, F. Schenkelberg, Modelling accelerated degradation data using Wiener diffusion with a time scale transformation, Lifetime Data Analysis 3 (1) (1997) 27–45.