Estimation of P(X > Y ) for Weibull distribution based on hybrid censored samples
A. Asgharzadeh a,*, M. Kazemi a, D. Kundu b

a Department of Statistics, Faculty of Mathematical Sciences, University of Mazandaran, P. O. Box 47416-1407, Babolsar, Iran
b Department of Mathematics, Indian Institute of Technology, Kanpur, India
Abstract
A hybrid censoring scheme is a mixture of Type-I and Type-II censoring schemes. Based on hybrid censored samples, this paper deals with the inference on R = P(X > Y), when X and Y are two independent Weibull random variables with different scale parameters but the same shape parameter. The maximum likelihood estimator (MLE) and the approximate MLE (AMLE) of R are obtained. The asymptotic distribution of the MLE of R is derived, and based on it a confidence interval for R is constructed. Two bootstrap confidence intervals are also proposed. We consider the Bayesian estimate of R and propose the corresponding credible interval. Monte Carlo simulations are performed to compare the different proposed methods, and a real data set is analyzed for illustrative purposes.

Keywords: Approximate maximum likelihood estimator; Hybrid censoring; Maximum likelihood estimator; Stress-strength model.
1. Introduction

In the context of reliability, the stress-strength model describes the life of a component which has a random strength X and is subjected to a random stress Y. The component fails at the instant that the stress applied to it exceeds the strength, and the component functions satisfactorily whenever the event {(x, y) | x > y} occurs. Thus, R = P(X > Y) is a measure of component reliability. For applications of R, see Kotz et al. (2003). Many authors have studied the stress-strength parameter R; among them, Ahmad et al. (1997), Awad et al. (1981), Kundu and Gupta (2005, 2006), Raqab et al. (2008) and Rezaei et al. (2010).

(* E-mail addresses: [email protected] (A. Asgharzadeh), [email protected] (M. Kazemi).)

A mixture of Type-I and Type-II censoring schemes is known as the hybrid censoring scheme, and it can be described as follows. Suppose n identical units are put on a life test. The lifetimes of the sample units are independent and identically distributed (i.i.d.) random variables. The test is terminated when a pre-specified number r out of n units have failed or a pre-determined time T has been reached. It is also assumed that failed items are not replaced. Therefore, under a hybrid censoring scheme, the experimental time and the number of failures do not exceed T and r, respectively. Clearly, the Type-I and Type-II censoring schemes are obtained as special cases of the hybrid censoring scheme by taking r = n and T → ∞, respectively.

We now describe the data available under the hybrid censoring scheme. Note that r and T are assumed known in advance. Under this censoring scheme, we observe one of the following two types of samples:

Case I: {y_{1:n} < y_{2:n} < ... < y_{r:n}}, if y_{r:n} < T;
Case II: {y_{1:n} < y_{2:n} < ... < y_{d:n}}, if d < r and y_{d:n} < T < y_{d+1:n},

where y_{1:n} < y_{2:n} < ... denote the observed ordered failure times of the experimental units.
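As a concrete illustration of this data-generating mechanism, here is a minimal sketch (our own code, not from the paper); the function name is hypothetical, and the Weibull parameterization with cdf F(x) = 1 − exp(−x^α/θ) anticipates (2) below.

```python
import numpy as np

def hybrid_censored_sample(n, r, T, alpha, theta, rng):
    """Put n i.i.d. Weibull(alpha, theta) units on test (cdf 1 - exp(-x**alpha / theta))
    and stop at the r-th failure or at time T, whichever comes first."""
    # rng.weibull draws from the standard Weibull with cdf 1 - exp(-x**alpha)
    lifetimes = np.sort(theta ** (1.0 / alpha) * rng.weibull(alpha, size=n))
    u = min(lifetimes[r - 1], T)            # stopping time: min(x_{r:n}, T)
    return lifetimes[lifetimes <= u], u     # Case I yields r failures, Case II fewer

rng = np.random.default_rng(1)
obs, u = hybrid_censored_sample(n=30, r=20, T=1.0, alpha=1.5, theta=1.0, rng=rng)
print(len(obs), round(u, 4))                # at most r failures, stopping time at most T
```

The returned stopping time u plays the role of the censoring point used in the likelihood developed in Section 2.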
For further details on hybrid censoring and relevant references, see Epstein (1954), Fairbanks et al. (1982), Childs et al. (2003), Gupta and Kundu (1998), Kundu (2007) and Ebrahimi (1990, 1992).

The two-parameter Weibull distribution, denoted by W(α, θ), has the probability density function (pdf)

f(x; α, θ) = (α/θ) x^{α−1} e^{−x^α/θ}, x > 0, α, θ > 0, (1)

and the cumulative distribution function (cdf)

F(x; α, θ) = 1 − e^{−x^α/θ}, x > 0, α, θ > 0. (2)

Here α is the shape parameter and θ is the scale parameter.

Based on complete X- and Y-samples, Kundu and Gupta (2006) considered the estimation of R = P(Y < X) when X ∼ W(α, θ_1) and Y ∼ W(α, θ_2) are two independent Weibull random variables with different scale parameters but the same shape parameter. In this paper, we extend their results to the case when both samples are hybrid censored.

The paper is organized as follows. In Section 2, we derive the maximum likelihood estimator (MLE) of R; it is observed that the MLE cannot be obtained in closed form. We propose an approximate MLE (AMLE) of R in Section 3, which can be obtained explicitly. Different confidence intervals are presented in Section 4. Bayesian inference is presented in Section 5. Analysis of a real data set, as well as a Monte Carlo simulation based comparison of the proposed methods, is presented in Section 6. Finally, Section 7 concludes the paper.

2. Maximum likelihood estimator of R

Let X ∼ W(α, θ_1) and Y ∼ W(α, θ_2) be independent random variables. Then it is easily seen that

R = P(X > Y) = θ_1 / (θ_1 + θ_2). (3)

Our interest is in estimating R based on hybrid censored data on both variables. To derive the MLE of R, we first obtain the MLEs of α, θ_1 and θ_2. Suppose X = (X_{1:n}, X_{2:n}, ..., X_{R_1:n}) is a hybrid censored sample from W(α, θ_1) with censoring scheme (r_1, T_1), and Y = (Y_{1:m}, Y_{2:m}, ..., Y_{R_2:m}) is a hybrid censored sample from W(α, θ_2) with censoring scheme (r_2, T_2), where R_1 and R_2 denote the numbers of observed failures (defined precisely below).
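The identity (3) is easy to check by simulation. The following quick sanity check (our own, not part of the paper) uses the parameterization of (1)-(2):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, theta1, theta2 = 1.5, 2.0, 1.0
n = 200_000
# W(alpha, theta) variates, cdf 1 - exp(-x**alpha / theta)
x = theta1 ** (1 / alpha) * rng.weibull(alpha, n)
y = theta2 ** (1 / alpha) * rng.weibull(alpha, n)
r_mc = np.mean(x > y)                   # Monte Carlo estimate of P(X > Y)
r_exact = theta1 / (theta1 + theta2)    # (3): 2/3 for these parameter values
print(round(r_mc, 4), round(r_exact, 4))
```

With 200,000 replications the Monte Carlo standard error is about 0.001, so the two numbers agree to roughly two decimal places.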
For notational simplicity, we write (X_1, X_2, ..., X_{R_1}) for (X_{1:n}, X_{2:n}, ..., X_{R_1:n}) and (Y_1, Y_2, ..., Y_{R_2}) for (Y_{1:m}, Y_{2:m}, ..., Y_{R_2:m}). The likelihood function of α, θ_1 and θ_2 is then given (see Balakrishnan and Aggarwala (2000)) by

L(α, θ_1, θ_2) = [ c_1 Π_{i=1}^{R_1} f_1(x_i) [1 − F_1(u_1)]^{n−R_1} ] × [ c_2 Π_{j=1}^{R_2} f_2(y_j) [1 − F_2(u_2)]^{m−R_2} ], (4)

where

c_1 = n(n−1)···(n−R_1+1), u_1 = min(x_{r_1:n}, T_1),
c_2 = m(m−1)···(m−R_2+1), u_2 = min(y_{r_2:m}, T_2),

and

R_1 = Σ_{i=1}^{r_1} I{x_{i:n} ≤ u_1}, R_2 = Σ_{j=1}^{r_2} I{y_{j:m} ≤ u_2}.

Using (1) and (2), we immediately have the likelihood function of the observed data:

L(data | α, θ_1, θ_2) = c_1 c_2 α^{R_1+R_2} θ_1^{−R_1} θ_2^{−R_2} Π_{i=1}^{R_1} x_i^{α−1} Π_{j=1}^{R_2} y_j^{α−1}
  × exp{ −(1/θ_1)[ Σ_{i=1}^{R_1} x_i^α + (n−R_1) u_1^α ] − (1/θ_2)[ Σ_{j=1}^{R_2} y_j^α + (m−R_2) u_2^α ] }. (5)

From (5), the log-likelihood function for the hybrid censored data, without the multiplicative constant, is

l(α, θ_1, θ_2) = (R_1+R_2) ln α − R_1 ln θ_1 − R_2 ln θ_2 + (α−1)[ Σ_{i=1}^{R_1} ln x_i + Σ_{j=1}^{R_2} ln y_j ]
  − (1/θ_1)[ Σ_{i=1}^{R_1} x_i^α + (n−R_1) u_1^α ] − (1/θ_2)[ Σ_{j=1}^{R_2} y_j^α + (m−R_2) u_2^α ]. (6)

The MLEs of α, θ_1 and θ_2, say α̂, θ̂_1 and θ̂_2 respectively, can be obtained as the solution of

∂l/∂α = (R_1+R_2)/α + [ Σ_{i=1}^{R_1} ln x_i + Σ_{j=1}^{R_2} ln y_j ]
  − (1/θ_1)[ Σ_{i=1}^{R_1} x_i^α ln x_i + (n−R_1) u_1^α ln u_1 ]
  − (1/θ_2)[ Σ_{j=1}^{R_2} y_j^α ln y_j + (m−R_2) u_2^α ln u_2 ] = 0, (7)

∂l/∂θ_1 = −R_1/θ_1 + (1/θ_1²)[ Σ_{i=1}^{R_1} x_i^α + (n−R_1) u_1^α ] = 0, (8)

∂l/∂θ_2 = −R_2/θ_2 + (1/θ_2²)[ Σ_{j=1}^{R_2} y_j^α + (m−R_2) u_2^α ] = 0. (9)

From (8) and (9), we obtain

θ̂_1(α) = (1/R_1)[ Σ_{i=1}^{R_1} x_i^α + (n−R_1) u_1^α ]  and  θ̂_2(α) = (1/R_2)[ Σ_{j=1}^{R_2} y_j^α + (m−R_2) u_2^α ]. (10)

Substituting the expressions θ̂_1(α) and θ̂_2(α) into (7), α̂ can be obtained as a fixed-point solution of the equation

k(α) = α, (11)

where

k(α) = (R_1 + R_2) / [ U(α) + V(α) − W ],

with

U(α) = R_1 [ Σ_{i=1}^{R_1} x_i^α ln x_i + (n−R_1) u_1^α ln u_1 ] / [ Σ_{i=1}^{R_1} x_i^α + (n−R_1) u_1^α ],
V(α) = R_2 [ Σ_{j=1}^{R_2} y_j^α ln y_j + (m−R_2) u_2^α ln u_2 ] / [ Σ_{j=1}^{R_2} y_j^α + (m−R_2) u_2^α ],
W = Σ_{i=1}^{R_1} ln x_i + Σ_{j=1}^{R_2} ln y_j.

A simple iterative procedure α^{(j+1)} = k(α^{(j)}), where α^{(j)} is the j-th iterate, can be used to find the solution of (11). The iteration is stopped when the absolute difference |α^{(j+1)} − α^{(j)}| is sufficiently small. Once α̂_ML is obtained, the MLEs of θ_1 and θ_2 can be deduced from (10) as θ̂_{1,ML} = θ̂_1(α̂_ML) and θ̂_{2,ML} = θ̂_2(α̂_ML). Therefore, we compute the MLE of R as

R̂_ML = θ̂_{1,ML} / (θ̂_{1,ML} + θ̂_{2,ML}). (12)

Thus the maximum likelihood approach does not give an explicit estimator for α, and hence for R, based on hybrid censored samples. In the next section, we propose approximate maximum likelihood estimators, which have explicit forms.

3. Approximate maximum likelihood estimator of R

In this section, the approximate maximum likelihood method is used to estimate the stress-strength parameter R. It is based on the fact that if the random variable X has a W(α, θ) distribution, then V = ln X has the extreme value distribution with pdf

f(v; µ, σ) = (1/σ) exp{ (v−µ)/σ − e^{(v−µ)/σ} }, −∞ < v < ∞, (13)

where µ = (1/α) ln θ and σ = 1/α.
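Returning briefly to Section 2: the fixed-point scheme (11) and the estimator (12) are straightforward to implement. Below is a hedged sketch (our own code, run on simulated data under the parameterization (1)-(2); function names and the starting value are arbitrary choices):

```python
import numpy as np

def k_of_alpha(alpha, x, y, u1, u2, n, m):
    """Evaluate k(alpha) from (11); x, y hold the observed hybrid censored failures."""
    R1, R2 = len(x), len(y)
    s1 = np.sum(x**alpha) + (n - R1) * u1**alpha
    s2 = np.sum(y**alpha) + (m - R2) * u2**alpha
    U = R1 * (np.sum(x**alpha * np.log(x)) + (n - R1) * u1**alpha * np.log(u1)) / s1
    V = R2 * (np.sum(y**alpha * np.log(y)) + (m - R2) * u2**alpha * np.log(u2)) / s2
    W = np.sum(np.log(x)) + np.sum(np.log(y))
    return (R1 + R2) / (U + V - W)

def mle_of_R(x, y, u1, u2, n, m, alpha0=1.0, tol=1e-8, max_iter=500):
    """Fixed-point iteration alpha <- k(alpha), then theta-hats from (10) and R-hat (12)."""
    alpha = alpha0
    for _ in range(max_iter):
        alpha_new = k_of_alpha(alpha, x, y, u1, u2, n, m)
        if abs(alpha_new - alpha) < tol:
            alpha = alpha_new
            break
        alpha = alpha_new
    R1, R2 = len(x), len(y)
    theta1 = (np.sum(x**alpha) + (n - R1) * u1**alpha) / R1
    theta2 = (np.sum(y**alpha) + (m - R2) * u2**alpha) / R2
    return theta1 / (theta1 + theta2), alpha

# simulated hybrid censored data, schemes (r1, T1) = (r2, T2) = (20, 1.0)
rng = np.random.default_rng(2)
n = m = 30
xs, ys = np.sort(rng.weibull(1.5, n)), np.sort(rng.weibull(1.5, m))  # theta1 = theta2 = 1
u1, u2 = min(xs[19], 1.0), min(ys[19], 1.0)
R_hat, alpha_hat = mle_of_R(xs[xs <= u1], ys[ys <= u2], u1, u2, n, m)
print(round(R_hat, 4), round(alpha_hat, 4))  # the true R here is 0.5
```

In our experience the plain iteration settles quickly from a moderate starting value; a damped update can be substituted if it oscillates on a particular data set.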
The density (13) is that of an extreme value distribution with location parameter µ and scale parameter σ. The standard extreme value distribution has pdf and cdf

g(v) = e^{v − e^v}, G(v) = 1 − e^{−e^v}.

Suppose X_1 < X_2 < ... < X_{R_1} is a hybrid censored sample from W(α, θ_1) with censoring scheme (r_1, T_1), and Y_1 < Y_2 < ... < Y_{R_2} is a hybrid censored sample from W(α, θ_2) with censoring scheme (r_2, T_2). Let us use the following notations:

T_i = ln X_i, Z_i = (T_i − µ_1)/σ, i = 1, ..., R_1, and S_j = ln Y_j, W_j = (S_j − µ_2)/σ, j = 1, ..., R_2,

where µ_1 = (1/α) ln θ_1, µ_2 = (1/α) ln θ_2 and σ = 1/α. The log-likelihood function of the observed data T_1, ..., T_{R_1} and S_1, ..., S_{R_2} is

l*(µ_1, µ_2, σ) ∝ −(R_1+R_2) ln σ + Σ_{i=1}^{R_1} ln g(z_i) + (n−R_1) ln(1 − G(u_1*)) + Σ_{j=1}^{R_2} ln g(w_j) + (m−R_2) ln(1 − G(u_2*)), (14)

where u_1* = (ln u_1 − µ_1)/σ and u_2* = (ln u_2 − µ_2)/σ. Differentiating (14) with respect to µ_1, µ_2 and σ, we obtain the likelihood equations

∂l*/∂µ_1 = −(1/σ) Σ_{i=1}^{R_1} g'(z_i)/g(z_i) + (1/σ)(n−R_1) g(u_1*)/(1−G(u_1*)) = 0, (15)

∂l*/∂µ_2 = −(1/σ) Σ_{j=1}^{R_2} g'(w_j)/g(w_j) + (1/σ)(m−R_2) g(u_2*)/(1−G(u_2*)) = 0, (16)

∂l*/∂σ = −(R_1+R_2)/σ − (1/σ) Σ_{i=1}^{R_1} z_i g'(z_i)/g(z_i) + (1/σ)(n−R_1) u_1* g(u_1*)/(1−G(u_1*))
  − (1/σ) Σ_{j=1}^{R_2} w_j g'(w_j)/g(w_j) + (1/σ)(m−R_2) u_2* g(u_2*)/(1−G(u_2*)) = 0. (17)

It is observed that the likelihood equations are not linear and do not admit explicit solutions. We approximate the terms p(z_i) = g'(z_i)/g(z_i) and q(u_1*) = g(u_1*)/(1−G(u_1*)) by expanding them in Taylor series, as follows. Let p_i = i/(n+1), q_i = 1 − p_i, and p*_{r_1} = (p_{r_1} + p_{r_1+1})/2, q*_{r_1} = 1 − p*_{r_1} (with the obvious analogues p_j = j/(m+1) and p*_{r_2} for the Y-sample). We expand the function p(z_i) around G^{−1}(p_i) = ln(−ln q_i) = µ_i. Further, we expand the term q(u_1*) around G^{−1}(p_{R_1}) = µ_{R_1} if u_1 = x_{R_1}.
If u_1 = T_1, we expand q(u_1*) around the point G^{−1}(p*_{r_1}) = µ*_{r_1}. Similarly, we approximate the function p̄(w_j) = g'(w_j)/g(w_j) by expanding around G^{−1}(p_j) = µ_j, and expand the term q̄(u_2*) = g(u_2*)/(1−G(u_2*)) around G^{−1}(p_{R_2}) = µ_{R_2} if u_2 = y_{R_2}, or around G^{−1}(p*_{r_2}) = µ*_{r_2} if u_2 = T_2.

Now let u_1 = x_{R_1} and u_2 = y_{R_2}; then u_1* = z_{R_1} and u_2* = w_{R_2}. Keeping only the first order terms and neglecting the higher order terms, we get

p(z_i) ≈ α_i + β_i z_i, p̄(w_j) ≈ α_j + β_j w_j,
q(z_{R_1}) ≈ 1 − α_{R_1} − β_{R_1} z_{R_1}, q̄(w_{R_2}) ≈ 1 − α_{R_2} − β_{R_2} w_{R_2},

where α_i = 1 + ln(q_i)(1 − ln(−ln q_i)) and β_i = ln(q_i). Substituting these approximations into (15)-(17) gives

∂l*/∂µ_1 ≈ −(1/σ)[ Σ_{i=1}^{R_1} (α_i + β_i z_i) − (n−R_1)(1 − α_{R_1} − β_{R_1} z_{R_1}) ] = 0, (18)

∂l*/∂µ_2 ≈ −(1/σ)[ Σ_{j=1}^{R_2} (α_j + β_j w_j) − (m−R_2)(1 − α_{R_2} − β_{R_2} w_{R_2}) ] = 0, (19)

∂l*/∂σ ≈ −(R_1+R_2)/σ − (1/σ)[ Σ_{i=1}^{R_1} z_i(α_i + β_i z_i) − (n−R_1)(1 − α_{R_1} − β_{R_1} z_{R_1}) z_{R_1} ]
  − (1/σ)[ Σ_{j=1}^{R_2} w_j(α_j + β_j w_j) − (m−R_2)(1 − α_{R_2} − β_{R_2} w_{R_2}) w_{R_2} ] = 0. (20)

If we denote by µ̃_1, µ̃_2 and σ̃ the solutions of (18), (19) and (20) respectively, then

µ̃_1 = A_1 + B_1 σ̃, µ̃_2 = A_2 + B_2 σ̃, and σ̃ = [ −D + √(D² − 4(R_1+R_2)E) ] / (2(R_1+R_2)),

where

A_1 = [ Σ_{i=1}^{R_1} β_i t_i + (n−R_1) β_{R_1} t_{R_1} ] / [ Σ_{i=1}^{R_1} β_i + (n−R_1) β_{R_1} ],
B_1 = [ Σ_{i=1}^{R_1} α_i − (n−R_1)(1−α_{R_1}) ] / [ Σ_{i=1}^{R_1} β_i + (n−R_1) β_{R_1} ],
A_2 = [ Σ_{j=1}^{R_2} β_j s_j + (m−R_2) β_{R_2} s_{R_2} ] / [ Σ_{j=1}^{R_2} β_j + (m−R_2) β_{R_2} ],
B_2 = [ Σ_{j=1}^{R_2} α_j − (m−R_2)(1−α_{R_2}) ] / [ Σ_{j=1}^{R_2} β_j + (m−R_2) β_{R_2} ],
D = Σ_{i=1}^{R_1} α_i(t_i − A_1) − (n−R_1)(1−α_{R_1})(t_{R_1} − A_1) + 2A_1B_1 Σ_{i=1}^{R_1} β_i + 2A_1B_1(n−R_1)β_{R_1}
  + Σ_{j=1}^{R_2} α_j(s_j − A_2) − (m−R_2)(1−α_{R_2})(s_{R_2} − A_2) + 2A_2B_2 Σ_{j=1}^{R_2} β_j + 2A_2B_2(m−R_2)β_{R_2},
E = Σ_{i=1}^{R_1} β_i t_i(t_i − A_1) + (n−R_1)β_{R_1} t_{R_1}² − (n−R_1)A_1β_{R_1} t_{R_1}
  + Σ_{j=1}^{R_2} β_j s_j(s_j − A_2) + (m−R_2)β_{R_2} s_{R_2}² − (m−R_2)A_2β_{R_2} s_{R_2}.

Once σ̃ is obtained, µ̃_1 and µ̃_2 can be obtained immediately. Therefore, the approximate MLE of R is given by

R̃ = θ̃_1 / (θ̃_1 + θ̃_2),

where

θ̃_1 = exp(µ̃_1/σ̃) = exp((A_1 + B_1 σ̃)/σ̃), and θ̃_2 = exp(µ̃_2/σ̃) = exp((A_2 + B_2 σ̃)/σ̃).

A similar procedure can be used to obtain the AMLE of R for the other cases (see the Appendix).

4. Confidence intervals

Based on the asymptotic distribution of R̂, we present an asymptotic confidence interval for R. We further propose two confidence intervals based on non-parametric bootstrap methods.

4.1 Asymptotic confidence interval

In this subsection, we obtain the asymptotic variances and covariances of the MLEs θ̂_1, θ̂_2 and α̂ from the entries of the inverse of the observed Fisher information matrix I(θ), where θ = (α, θ_1, θ_2).
From the log-likelihood function in (6), we obtain the observed Fisher information matrix of θ as

I(θ) = − [ ∂²l/∂α²     ∂²l/∂α∂θ_1   ∂²l/∂α∂θ_2
           ∂²l/∂θ_1∂α  ∂²l/∂θ_1²    ∂²l/∂θ_1∂θ_2
           ∂²l/∂θ_2∂α  ∂²l/∂θ_2∂θ_1 ∂²l/∂θ_2²  ]
     = [ I_11 I_12 I_13
         I_21 I_22 I_23
         I_31 I_32 I_33 ],

where

−I_11 = −(R_1+R_2)/α² − (1/θ_1)[ Σ_{i=1}^{R_1} x_i^α (ln x_i)² + (n−R_1) u_1^α (ln u_1)² ] − (1/θ_2)[ Σ_{j=1}^{R_2} y_j^α (ln y_j)² + (m−R_2) u_2^α (ln u_2)² ],
−I_22 = R_1/θ_1² − (2/θ_1³)[ Σ_{i=1}^{R_1} x_i^α + (n−R_1) u_1^α ],
−I_33 = R_2/θ_2² − (2/θ_2³)[ Σ_{j=1}^{R_2} y_j^α + (m−R_2) u_2^α ],
−I_12 = (1/θ_1²)[ Σ_{i=1}^{R_1} x_i^α ln x_i + (n−R_1) u_1^α ln u_1 ] = −I_21,
−I_13 = (1/θ_2²)[ Σ_{j=1}^{R_2} y_j^α ln y_j + (m−R_2) u_2^α ln u_2 ] = −I_31,
I_23 = I_32 = 0.

Now the asymptotic variance-covariance matrix A = [a_uv] of the MLEs is obtained by inverting the Fisher information matrix, i.e. A = I(θ)^{−1}. Therefore

A = (1/u) [ I_22 I_33   −I_12 I_33          −I_13 I_22
            −I_12 I_33  I_11 I_33 − I_13²   I_12 I_13
            −I_13 I_22  I_12 I_13           I_11 I_22 − I_12² ],

where u = I_11 I_22 I_33 − I_12² I_33 − I_13² I_22. Now the variance of R̂ (say B) can also be obtained using the delta method, as follows. We have R̂ = g(α̂, θ̂_1, θ̂_2), where g(α, θ_1, θ_2) = θ_1/(θ_1+θ_2). Therefore B = b^T A b, where

b = ( ∂g/∂α, ∂g/∂θ_1, ∂g/∂θ_2 )^T = (1/(θ_1+θ_2)²) ( 0, θ_2, −θ_1 )^T.
It is easy to show that

B = b^T A b = [ 1 / (u (θ_1+θ_2)^4) ] [ (I_11 I_33 − I_13²) θ_2² − 2 I_12 I_13 θ_1 θ_2 + (I_11 I_22 − I_12²) θ_1² ].

To compute a confidence interval for R, the variance B needs to be estimated. We recommend using the empirical Fisher information matrix together with the MLEs of α, θ_1 and θ_2 to estimate B, which is very convenient. As a consequence, a 100(1−γ)% asymptotic confidence interval for R is given by

( R̂ − z_{1−γ/2} √B̂ , R̂ + z_{1−γ/2} √B̂ ),

where z_γ is the 100γ-th percentile of N(0, 1). A confidence interval for R based on the asymptotic distribution of the AMLE of R can be obtained similarly, by replacing α, θ_1 and θ_2 in B by their respective AMLEs.

It is of interest to observe that when the shape parameter α is known, the MLE of R can be obtained explicitly as

R̂_ML = S_1(x) / [ S_1(x) + (R_1/R_2) S_2(y) ], (21)

where S_1(x) = Σ_{i=1}^{R_1} x_i^α + (n−R_1) u_1^α and S_2(y) = Σ_{j=1}^{R_2} y_j^α + (m−R_2) u_2^α.

4.2 Bootstrap confidence intervals

In this subsection, we propose two confidence intervals based on non-parametric bootstrap methods: (i) the percentile bootstrap method (we call it Boot-p), based on the idea of Efron (1982), and (ii) the bootstrap-t method (we refer to it as Boot-t), based on the idea of Hall (1988). We illustrate briefly how to estimate confidence intervals for R using both methods.

(i) Boot-p method
1. Generate a bootstrap sample of size R_1, {x*_1, ..., x*_{R_1}}, from {x_1, ..., x_{R_1}}, and a bootstrap sample of size R_2, {y*_1, ..., y*_{R_2}}, from {y_1, ..., y_{R_2}}. Based on {x*_1, ..., x*_{R_1}} and {y*_1, ..., y*_{R_2}}, compute the bootstrap estimate of R, say R̂*, using (12).
2. Repeat step 1 NBOOT times.
3. Let H(x) = P(R̂* ≤ x) be the cumulative distribution function of R̂*. Define R̂_Bp(x) = H^{−1}(x) for a given x. The approximate 100(1−γ)% confidence interval of R is given by

( R̂_Bp(γ/2), R̂_Bp(1 − γ/2) ).

(ii) Boot-t method
1. From the samples {x_1, ..., x_{R_1}} and {y_1, ..., y_{R_2}}, compute R̂.
2. As in the Boot-p method, first generate the bootstrap samples {x*_1, ..., x*_{R_1}} and {y*_1, ..., y*_{R_2}}, and then compute R̂*, the bootstrap estimate of R. Also compute the statistic

T* = (R̂* − R̂) / √Var(R̂*).
3. Repeat steps 1 and 2 NBOOT times.
4. Let H(x) = P(T* ≤ x) be the cumulative distribution function of T*. For a given x, define R̂_Bt(x) = R̂ + H^{−1}(x) √Var(R̂). The approximate 100(1−γ)% confidence interval of R is given by

( R̂_Bt(γ/2), R̂_Bt(1 − γ/2) ).

5. Bayes estimation of R

In this section, we obtain the Bayes estimate of R under the assumption that the shape parameter α and the scale parameters θ_1 and θ_2 are independent random variables. It is assumed that the prior of θ_j is inverse gamma, IG(a_j, b_j), j = 1, 2, with density

π_j(θ_j) = π(θ_j | a_j, b_j) = [ b_j^{a_j} / Γ(a_j) ] θ_j^{−(a_j+1)} e^{−b_j/θ_j},

and that α has a gamma G(a_3, b_3) prior with density

π_3(α) = π(α | a_3, b_3) = [ b_3^{a_3} / Γ(a_3) ] α^{a_3−1} e^{−b_3 α}.

The likelihood function of the observed data is

L(data | α, θ_1, θ_2) ∝ α^{R_1+R_2} θ_1^{−R_1} θ_2^{−R_2} Π_{i=1}^{R_1} x_i^{α−1} Π_{j=1}^{R_2} y_j^{α−1}
  × exp{ −(1/θ_1)[ Σ_{i=1}^{R_1} x_i^α + (n−R_1) u_1^α ] − (1/θ_2)[ Σ_{j=1}^{R_2} y_j^α + (m−R_2) u_2^α ] }. (22)

The joint density of α, θ_1, θ_2 and the data can be obtained as

L(α, θ_1, θ_2, data) = L(data | α, θ_1, θ_2) π_1(θ_1) π_2(θ_2) π_3(α),

and therefore the joint posterior density of α, θ_1 and θ_2 given the data is

L(α, θ_1, θ_2 | data) = L(data | α, θ_1, θ_2) π_1(θ_1) π_2(θ_2) π_3(α) / ∫_0^∞ ∫_0^∞ ∫_0^∞ L(data | α, θ_1, θ_2) π_1(θ_1) π_2(θ_2) π_3(α) dθ_1 dθ_2 dα. (23)

From (23), it is observed that the Bayes estimate cannot be obtained in closed form. We therefore opt for stochastic simulation procedures, namely the Gibbs and Metropolis samplers (Gilks et al., 1995), to generate samples from the posterior distribution and then compute the Bayes estimate of R and the corresponding credible interval.
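Looking back at Section 4.2, the Boot-p procedure is easy to sketch in code. The following is our own hedged illustration: for self-containedness, the resampled estimate uses the known-α closed form (21) as a stand-in for the full MLE (12), and the data are hypothetical.

```python
import numpy as np

def estimate_R(x, y, u1, u2, n, m, alpha=1.5):
    # stand-in for the MLE (12): closed form (21), with alpha treated as known
    R1, R2 = len(x), len(y)
    S1 = np.sum(x**alpha) + (n - R1) * u1**alpha
    S2 = np.sum(y**alpha) + (m - R2) * u2**alpha
    return S1 / (S1 + (R1 / R2) * S2)

def boot_p_ci(x, y, u1, u2, n, m, gamma=0.05, nboot=1000, seed=0):
    """Percentile bootstrap CI for R: resample x and y with replacement,
    re-estimate R each time, and take the gamma/2 and 1 - gamma/2 quantiles."""
    rng = np.random.default_rng(seed)
    stats = [estimate_R(rng.choice(x, size=len(x), replace=True),
                        rng.choice(y, size=len(y), replace=True),
                        u1, u2, n, m)
             for _ in range(nboot)]
    return np.quantile(stats, [gamma / 2, 1 - gamma / 2])

# hypothetical observed failures and censoring points
x = np.array([0.3, 0.5, 0.9, 1.1, 1.4])
y = np.array([0.2, 0.4, 0.6, 0.7])
lo, hi = boot_p_ci(x, y, u1=1.5, u2=0.8, n=8, m=6)
print(round(lo, 3), round(hi, 3))
```

The Boot-t interval differs only in that each resample additionally produces the studentized statistic T*, whose quantiles are then rescaled by an estimate of √Var(R̂).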
The posterior pdfs of θ_1 and θ_2 are as follows:

θ_1 | α, θ_2, data ∼ IG( R_1 + a_1, b_1 + Σ_{i=1}^{R_1} x_i^α + (n−R_1) u_1^α ),
θ_2 | α, θ_1, data ∼ IG( R_2 + a_2, b_2 + Σ_{j=1}^{R_2} y_j^α + (m−R_2) u_2^α ),

and

f(α | θ_1, θ_2, data) ∝ α^{R_1+R_2+a_3−1} Π_{i=1}^{R_1} x_i^{α−1} Π_{j=1}^{R_2} y_j^{α−1}
  × exp{ −b_3 α − (1/θ_1)( Σ_{i=1}^{R_1} x_i^α + (n−R_1) u_1^α ) − (1/θ_2)( Σ_{j=1}^{R_2} y_j^α + (m−R_2) u_2^α ) }.

The posterior pdf of α does not have a standard form, but its plot shows that it is symmetric and close to a normal distribution. So, to generate random numbers from this distribution, we use the Metropolis-Hastings method with a normal proposal distribution. Note that the Metropolis algorithm considers only symmetric proposal distributions. The Gibbs sampling algorithm is therefore as follows:

1. Start with an initial guess (α^{(0)}, θ_1^{(0)}, θ_2^{(0)}).
2. Set t = 1.
3. Using Metropolis-Hastings, generate α^{(t)} from f_α = f(α | θ_1^{(t−1)}, θ_2^{(t−1)}, data) with the N(α^{(t−1)}, 1) proposal distribution.
4. Generate θ_1^{(t)} from IG( R_1 + a_1, b_1 + Σ_{i=1}^{R_1} x_i^{α^{(t−1)}} + (n−R_1) u_1^{α^{(t−1)}} ).
5. Generate θ_2^{(t)} from IG( R_2 + a_2, b_2 + Σ_{j=1}^{R_2} y_j^{α^{(t−1)}} + (m−R_2) u_2^{α^{(t−1)}} ).
6. Compute R^{(t)} = θ_1^{(t)} / (θ_1^{(t)} + θ_2^{(t)}) from (3).
7. Set t = t + 1.
8. Repeat steps 3-7 M times.

Note that in step 3, we use the Metropolis-Hastings algorithm with the q ∼ N(α^{(t−1)}, σ²) proposal distribution as follows:

a. Let x = α^{(t−1)}.
b. Generate y from the proposal distribution q.
c. Let p(x, y) = min{ 1, [f_α(y) q(x)] / [f_α(x) q(y)] }.
d. Accept y with probability p(x, y), or keep x with probability 1 − p(x, y).

Now the approximate posterior mean and posterior variance of R become

Ê(R | data) = (1/M) Σ_{t=1}^M R^{(t)},  V̂ar(R | data) = (1/M) Σ_{t=1}^M ( R^{(t)} − Ê(R | data) )².

Based on the M values R^{(1)}, ..., R^{(M)} and using the method proposed by Chen and Shao (1999), a 100(1−γ)% HPD credible interval can be constructed as ( R^{([γM/2])}, R^{([(1−γ/2)M])} ), where R^{([γM/2])} and R^{([(1−γ/2)M])} are the [γM/2]-th smallest and the [(1−γ/2)M]-th smallest values of {R^{(t)}, t = 1, ..., M}, respectively.

6. Simulation study and data analysis

In this section, analysis of a real data set and a Monte Carlo simulation study are presented to illustrate all the estimation methods described in the preceding sections.
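The Gibbs/Metropolis scheme of Section 5, which produces the Bayes estimates and credible intervals compared below, can be sketched as follows. This is our own minimal implementation on simulated data, with the improper prior a_j = b_j = 0, an arbitrary proposal standard deviation of 0.3, and a percentile-based credible interval from the sorted draws:

```python
import numpy as np

rng = np.random.default_rng(3)

# simulated hybrid censored data, schemes (r1, T1) = (r2, T2) = (20, 1.0)
n = m = 30
xs, ys = np.sort(rng.weibull(1.5, n)), np.sort(rng.weibull(1.5, m))  # theta1 = theta2 = 1
u1, u2 = min(xs[19], 1.0), min(ys[19], 1.0)
x, y = xs[xs <= u1], ys[ys <= u2]
R1, R2 = len(x), len(y)
a1 = a2 = a3 = b1 = b2 = b3 = 0.0            # improper priors a_j = b_j = 0

def log_f_alpha(al, th1, th2):
    """Log of the full conditional of alpha (up to an additive constant)."""
    if al <= 0:
        return -np.inf
    s1 = np.sum(x**al) + (n - R1) * u1**al
    s2 = np.sum(y**al) + (m - R2) * u2**al
    return ((R1 + R2 + a3 - 1) * np.log(al) - b3 * al
            + al * (np.sum(np.log(x)) + np.sum(np.log(y)))
            - s1 / th1 - s2 / th2)

M, burn = 3000, 500
al, th1, th2 = 1.0, 1.0, 1.0
draws = []
for t in range(M):
    prop = rng.normal(al, 0.3)               # Metropolis step for alpha
    if np.log(rng.uniform()) < log_f_alpha(prop, th1, th2) - log_f_alpha(al, th1, th2):
        al = prop
    # Gibbs steps: inverse gamma full conditionals, sampled as 1 / Gamma(shape, 1/rate)
    th1 = 1.0 / rng.gamma(R1 + a1, 1.0 / (b1 + np.sum(x**al) + (n - R1) * u1**al))
    th2 = 1.0 / rng.gamma(R2 + a2, 1.0 / (b2 + np.sum(y**al) + (m - R2) * u2**al))
    if t >= burn:
        draws.append(th1 / (th1 + th2))      # R^(t) from (3)

draws = np.sort(np.array(draws))
post_mean = draws.mean()
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws)) - 1]
print(round(post_mean, 3), (round(lo, 3), round(hi, 3)))
```

The true R in this simulated setup is 0.5, and the posterior mean should land in that neighborhood; Chen and Shao's HPD construction would instead pick the shortest interval among candidate quantile pairs.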
6.1 Simulation study

In this subsection, we compare the performances of the MLE, AMLE and the Bayes estimates (with respect to the squared error loss function) in terms of biases and mean squared errors (MSE). We also compare the different confidence intervals, namely the confidence interval obtained from the asymptotic distribution of the MLE, the two bootstrap confidence intervals and the HPD credible interval, in terms of average confidence lengths. For computing the Bayes estimates and HPD credible intervals, we assume two priors as follows:

Prior 1: a_j = 0, b_j = 0, j = 1, 2, 3;
Prior 2: a_j = 1, b_j = 2, j = 1, 2, 3.

Table 1. Average estimates (A.E.) and MSEs of the MLE, AMLE and Bayes estimates of R when m = n = 30 and (α, θ_1, θ_2) = (1., 1, 1).

(r_1, T_1)  (r_2, T_2)         MLE     AMLE    Bayes (Prior 1)  Bayes (Prior 2)
(20, 1)  (20, 1)  A.E.  0.4929  0.4921  0.5074  0.5067
                  MSE   0.0081  0.0093  0.0047  0.0031
         (25, 1)  A.E.  0.4941  0.5040  0.5068  0.5037
                  MSE   0.0072  0.0084  0.0035  0.0032
         (20, 2)  A.E.  0.5049  0.4951  0.5043  0.5039
                  MSE   0.0074  0.0090  0.0045  0.0039
         (25, 2)  A.E.  0.4957  0.4953  0.5049  0.5036
                  MSE   0.0068  0.0071  0.0049  0.0041
         (30, 2)  A.E.  0.4986  0.4979  0.5033  0.5027
                  MSE   0.0065  0.0074  0.0053  0.0037
(25, 1)  (20, 1)  A.E.  0.5058  0.4947  0.5051  0.5047
                  MSE   0.0071  0.0078  0.0066  0.0061
         (25, 1)  A.E.  0.5045  0.5052  0.5038  0.5031
                  MSE   0.0066  0.0067  0.0054  0.0056
         (20, 2)  A.E.  0.5041  0.5046  0.5039  0.5022
                  MSE   0.0067  0.0069  0.0061  0.0057
         (25, 2)  A.E.  0.5033  0.5039  0.5032  0.5023
                  MSE   0.0058  0.0061  0.0055  0.0046
         (30, 2)  A.E.  0.4979  0.4983  0.5024  0.5022
                  MSE   0.0058  0.0049  0.0049  0.0038
(20, 2)  (20, 1)  A.E.  0.5035  0.5036  0.5033  0.5027
                  MSE   0.0067  0.0074  0.0057  0.0043
         (25, 1)  A.E.  0.4971  0.4956  0.5029  0.5024
                  MSE   0.0054  0.0069  0.0046  0.0033
         (20, 2)  A.E.  0.5029  0.5031  0.5026  0.5025
                  MSE   0.0066  0.0069  0.0048  0.0037
         (25, 2)  A.E.  0.4979  0.4968  0.5022  0.5020
                  MSE   0.0065  0.0059  0.0043  0.0031
         (30, 2)  A.E.  0.4986  0.4977  0.5018  0.5016
                  MSE   0.0056  0.0061  0.0039  0.0023
(25, 2)  (20, 1)  A.E.  0.4974  0.4959  0.5027  0.5019
                  MSE   0.0062  0.0071  0.0045  0.0037
         (25, 1)  A.E.  0.4979  0.4964  0.5020  0.5018
                  MSE   0.0051  0.0061  0.0043  0.0023
         (20, 2)  A.E.  0.5023  0.4968  0.5021  0.5020
                  MSE   0.0057  0.0070  0.0049  0.0027
         (25, 2)  A.E.  0.5019  0.4982  0.5019  0.5017
                  MSE   0.0050  0.0054  0.0036  0.0021
         (30, 2)  A.E.  0.4988  0.4983  0.5014  0.5010
                  MSE   0.0044  0.0041  0.0041  0.0013
(30, 2)  (20, 1)  A.E.  0.5047  0.4952  0.5042  0.5029
                  MSE   0.0066  0.0076  0.0059  0.0043
         (25, 1)  A.E.  0.4951  0.4959  0.5031  0.5016
                  MSE   0.0059  0.0064  0.0048  0.0032
         (20, 2)  A.E.  0.5068  0.4953  0.5043  0.5021
                  MSE   0.0064  0.0068  0.0065  0.0029
         (25, 2)  A.E.  0.5034  0.5029  0.5016  0.5009
                  MSE   0.0050  0.0057  0.0047  0.0014
         (30, 2)  A.E.  0.5005  0.5013  0.5006  0.5003
                  MSE   0.0043  0.0049  0.0051  0.0008

Table 2. Average confidence/credible lengths for estimators of R.

(r_1, T_1)  (r_2, T_2)   MLE     Boot-t  Boot-p  Bayes (Prior 1)  Bayes (Prior 2)
(20, 1)  (20, 1)  0.3854  0.4071  0.3966  0.3717  0.3416
         (25, 1)  0.3779  0.3988  0.3824  0.3714  0.3197
         (20, 2)  0.3815  0.3874  0.3902  0.3759  0.2989
         (25, 2)  0.3426  0.3747  0.3592  0.3217  0.2996
         (30, 2)  0.3140  0.3706  0.3269  0.2975  0.2661
(25, 1)  (20, 1)  0.3785  0.3884  0.3877  0.3419  0.3055
         (25, 1)  0.3766  0.3899  0.3813  0.3124  0.2892
         (20, 2)  0.3711  0.3914  0.3732  0.3053  0.2971
         (25, 2)  0.3463  0.3618  0.3557  0.2902  0.2566
         (30, 2)  0.2939  0.3314  0.3273  0.2613  0.2387
(20, 2)  (20, 1)  0.3461  0.3597  0.3444  0.3143  0.2819
         (25, 1)  0.3211  0.3424  0.3229  0.3078  0.2413
         (20, 2)  0.3357  0.3527  0.3289  0.3053  0.2673
         (25, 2)  0.3103  0.3315  0.3071  0.2642  0.2374
         (30, 2)  0.2846  0.3363  0.2978  0.2560  0.1956
(25, 2)  (20, 1)  0.3484  0.3677  0.3565  0.2835  0.2634
         (25, 1)  0.3192  0.3275  0.2905  0.2764  0.2312
         (20, 2)  0.3248  0.3350  0.3132  0.2714  0.2276
         (25, 2)  0.2925  0.2850  0.3013  0.2558  0.2170
         (30, 2)  0.2614  0.2831  0.2738  0.2203  0.2111
(30, 2)  (20, 1)  0.3417  0.3605  0.3452  0.2304  0.2079
         (25, 1)  0.3311  0.3225  0.3308  0.2244  0.1716
         (20, 2)  0.3255  0.3600  0.3396  0.2372  0.1948
         (25, 2)  0.2779  0.2813  0.2800  0.2052  0.1620
         (30, 2)  0.2462  0.2804  0.2748  0.2034  0.0913
6.2 Real data analysis

Here we present an analysis of the strength data reported by Badar and Priest (1982). The data represent the strength, measured in GPa, of single carbon fibers and of impregnated 1000-carbon fiber tows. Single fibers were tested under tension at gauge lengths of 20 mm (Data Set 1) and 10 mm (Data Set 2). These data have been used previously by Raqab and Kundu (2005), Kundu and Gupta (2006), Kundu and Raqab (2009) and Asgharzadeh et al. (2011). The data are presented in Tables 3 and 4.
Table 3. Data Set 1 (gauge length 20 mm).
1.312 1.314 1.479 1.552 1.700 1.803 1.861 1.865 1.944 1.958
1.966 1.997 2.006 2.021 2.027 2.055 2.063 2.098 2.140 2.179
2.224 2.240 2.253 2.270 2.272 2.274 2.301 2.301 2.359 2.382
2.382 2.426 2.434 2.435 2.478 2.490 2.511 2.514 2.535 2.554
2.566 2.570 2.586 2.629 2.633 2.642 2.648 2.684 2.697 2.726
2.770 2.773 2.800 2.809 2.818 2.821 2.848 2.880 2.954 3.012
3.067 3.084 3.090 3.096 3.128 3.233 3.433 3.585 3.585

Table 4. Data Set 2 (gauge length 10 mm).
1.901 2.132 2.203 2.228 2.257 2.350 2.361 2.396 2.397 2.445
2.454 2.474 2.518 2.522 2.525 2.532 2.575 2.614 2.616 2.618
2.624 2.659 2.675 2.738 2.740 2.856 2.917 2.928 2.937 2.937
2.977 2.996 3.030 3.125 3.139 3.145 3.220 3.223 3.235 3.243
3.264 3.272 3.294 3.332 3.346 3.377 3.408 3.435 3.493 3.501
3.537 3.554 3.562 3.628 3.852 3.871 3.886 3.971 4.024 4.027
4.225 4.395 5.020

Kundu and Gupta (2006) analyzed these data sets using the two-parameter Weibull distribution after subtracting 0.75 from all the points of both data sets, and observed that Weibull distributions with equal shape parameters fit both data sets well. Based on the complete data, we plot the histogram of the samples of α generated by the MCMC method, along with the exact posterior density function, in Figure 1. From Figure 1 it is clear that the exact posterior density function matches quite well with the simulated samples obtained using the MCMC method.

For illustrative purposes, we have generated two different hybrid censored samples from the above data sets after subtracting 0.75.

Scheme 1: r_1 = 45, T_1 = 2, r_2 = 40, T_2 = 2.
Under Scheme 1, the MLE and AMLE of R become 0.3958 and 0.3872, and the corresponding 95% confidence intervals become (0.2723, 0.5193) and (0.2617, 0.4891), respectively. We also obtain the 95% Boot-p and Boot-t confidence intervals as (0.3045, 0.5523) and (0.3106, 0.5526), respectively. For the Bayesian estimate of R, we use non-proper priors on θ_1, θ_2 and α, i.e. a_1 = a_2 = a_3 = b_1 = b_2 = b_3 = 0. Based on these priors, we obtain 0.3933 as the Bayes estimate of R under the squared error loss function. We also compute the 95% highest posterior density (HPD) credible interval of R as (0.2937, 0.4829).

Scheme 2: r_1 = 35, T_1 = 1, r_2 = 25, T_2 = 2. Under this scheme, the MLE, AMLE and Bayes estimates of R become 0.4024, 0.4092 and 0.4238, and the corresponding 95% confidence/credible intervals become (0.2781, 0.5267), (0.2960, 0.5531) and (0.2870, 0.5371), respectively. We also obtain the 95% Boot-p and Boot-t confidence intervals as (0.3359, 0.5711) and (0.3484, 0.5964), respectively.

Figure 1: Posterior density function of α.

7. Conclusions

In this paper we have addressed inference on R = P(X > Y) when X and Y are independent Weibull random variables with the same shape parameter but different scale parameters, and the data on both X and Y are hybrid censored. We have provided the MLE of R; it cannot be obtained in explicit form, but it can be computed by solving a one-dimensional non-linear equation. We have also provided the AMLE of R, which can be obtained explicitly. Extensive Monte Carlo simulations indicate that the performances of the MLE and the AMLE are very similar, so for all practical purposes the AMLE can be used. We have further considered Bayesian inference on R, based on fairly general inverted gamma priors on the scale parameters and an independent gamma prior on the common shape parameter. The Bayes estimator cannot be obtained in explicit form, so we have used a Gibbs sampling technique to compute the Bayes estimate and the corresponding credible interval.
It is observed that the performances of the Bayes estimators are very satisfactory, and if some prior information is available on the unknown parameters, the Bayes estimator should be preferred over the MLE or AMLE, as expected.

Appendix
AMLE of R for the other cases: When u_1 = T_1 and u_2 = T_2, we expand p(z_i) and q(u_1*) in Taylor series around the points µ_i and µ*_{r_1}, respectively. Further, we also expand the terms p̄(w_j) and q̄(u_2*) around the points µ_j and µ*_{r_2}, respectively. Similarly, considering only the first order terms and neglecting the higher order terms, the AMLEs of α, θ_1 and θ_2 can be obtained from

µ̃_1 = A_1 + B_1 σ̃, µ̃_2 = A_2 + B_2 σ̃, and σ̃ = [ −D + √(D² − 4(R_1+R_2)E) ] / (2(R_1+R_2)),

where now

A_1 = [ Σ_{i=1}^{R_1} β_i t_i + (n−R_1) β*_{r_1} ln T_1 ] / [ Σ_{i=1}^{R_1} β_i + (n−R_1) β*_{r_1} ],
B_1 = [ Σ_{i=1}^{R_1} α_i − (n−R_1)(1−α*_{r_1}) ] / [ Σ_{i=1}^{R_1} β_i + (n−R_1) β*_{r_1} ],
A_2 = [ Σ_{j=1}^{R_2} β_j s_j + (m−R_2) β*_{r_2} ln T_2 ] / [ Σ_{j=1}^{R_2} β_j + (m−R_2) β*_{r_2} ],
B_2 = [ Σ_{j=1}^{R_2} α_j − (m−R_2)(1−α*_{r_2}) ] / [ Σ_{j=1}^{R_2} β_j + (m−R_2) β*_{r_2} ],
D = Σ_{i=1}^{R_1} α_i(t_i − A_1) − (n−R_1)(1−α*_{r_1})(ln T_1 − A_1) + 2A_1B_1 Σ_{i=1}^{R_1} β_i + 2A_1B_1(n−R_1)β*_{r_1}
  + Σ_{j=1}^{R_2} α_j(s_j − A_2) − (m−R_2)(1−α*_{r_2})(ln T_2 − A_2) + 2A_2B_2 Σ_{j=1}^{R_2} β_j + 2A_2B_2(m−R_2)β*_{r_2},
E = Σ_{i=1}^{R_1} β_i t_i(t_i − A_1) + (n−R_1)β*_{r_1}(ln T_1)² − (n−R_1)A_1β*_{r_1} ln T_1
  + Σ_{j=1}^{R_2} β_j s_j(s_j − A_2) + (m−R_2)β*_{r_2}(ln T_2)² − (m−R_2)A_2β*_{r_2} ln T_2,

and where α*_{r_k} = 1 + ln(q*_{r_k})(1 − ln(−ln q*_{r_k})) and β*_{r_k} = ln(q*_{r_k}), k = 1, 2. Therefore, the approximate MLE of R is given by

R̃ = θ̃_1 / (θ̃_1 + θ̃_2), where θ̃_1 = exp((A_1 + B_1 σ̃)/σ̃) and θ̃_2 = exp((A_2 + B_2 σ̃)/σ̃).

Similarly, the AMLE of R for the remaining cases (u_1 = T_1, u_2 = y_{R_2}, or u_1 = x_{R_1}, u_2 = T_2) can be obtained.

References

[1] Asgharzadeh, A., Valiollahi, R. and Raqab, M. Z. (2011). Stress-strength reliability of Weibull distribution based on progressively censored samples, SORT, 35, 103-124.
[2] Adimari, G. and Chiogna, M. (2006). Partially parametric interval estimation of Pr(Y > X), Computational Statistics and Data Analysis, 51, 1875-1891.
[3] Ahmad, K. E., Fakhry, M. E. and Jaheen, Z. F. (1997). Empirical Bayes estimation of P(Y < X) and characterizations of the Burr-type X model, Journal of Statistical Planning and Inference, 64, 297-308.
[4] Awad, A. M., Azzam, M. M. and Hamdan, M. A. (1981). Some inference results on P(Y < X) in the bivariate exponential model, Communications in Statistics - Theory and Methods, 10, 2515-2524.
[5] Badar, M. G. and Priest, A. M. (1982). Statistical aspects of fiber and bundle strength in hybrid composites. In: Hayashi, T., Kawata, K., Umekawa, S. (Eds.), Progress in Science and Engineering of Composites, ICCM-IV, Tokyo, pp. 1129-1136.
[6] Bain, L. J. (1978). Statistical Analysis of Reliability and Life Testing Models, New York: Marcel Dekker Inc.
[7] Baklizi, A. (2008). Likelihood and Bayesian estimation of Pr(X < Y) using lower record values from the generalized exponential distribution, Computational Statistics and Data Analysis, 52, 3468-3473.
[8] Balakrishnan, N. and Aggarwala, R. (2000). Progressive Censoring: Theory, Methods and Applications, Boston: Birkhauser.
[9] Berger, J. O. and Sun, D. (1993). Bayesian analysis for the poly-Weibull distribution, Journal of the American Statistical Association, 88, 1412-1418.
[10] Chen, S. and Bhattacharyya, G. K. (1988). Exact confidence bounds for an exponential parameter under hybrid censoring, Communications in Statistics - Theory and Methods, 17, 1857-1870.
[11] Chen, M. H. and Shao, Q. M. (1999). Monte Carlo estimation of Bayesian credible and HPD intervals, Journal of Computational and Graphical Statistics, 8, 69-92.
[12] Childs, A., Chandrasekhar, B., Balakrishnan, N. and Kundu, D. (2003). Exact inference based on Type-I and Type-II hybrid censored samples from the exponential distribution, Annals of the Institute of Statistical Mathematics, 55, 319-330.
[13] Ebrahimi, N. (1990). Estimating the parameter of an exponential distribution from hybrid life test, Journal of Statistical Planning and Inference, 23, 255-261.
[14] Ebrahimi, N. (1992). Prediction intervals for future failures in exponential distribution under hybrid censoring, IEEE Transactions on Reliability, 41, 127-132.
[15] Efron, B. (1982). The Jackknife, the Bootstrap and Other Resampling Plans, Philadelphia, PA: SIAM, CBMS-NSF Regional Conference Series in Applied Mathematics.
[16] Epstein, B. (1954). Truncated life tests in the exponential case, Annals of Mathematical Statistics, 25, 555-564.
[17] Fairbanks, K., Madsen, R. and Dykstra, R. (1982). A confidence interval for an exponential parameter from a hybrid life test, Journal of the American Statistical Association, 77, 137-140.
[18] Gilks, W. R., Richardson, S. and Spiegelhalter, D. J. (1995). Markov Chain Monte Carlo in Practice, London: Chapman & Hall.
[19] Gupta, R. D. and Kundu, D. (1998). Hybrid censoring schemes with exponential failure distribution, Communications in Statistics - Theory and Methods, 27, 3065-3083.
[20] Hall, P. (1988). Theoretical comparison of bootstrap confidence intervals, Annals of Statistics, 16, 927-953.
[21] Kotz, S., Lumelskii, Y. and Pensky, M. (2003). The Stress-Strength Model and its Generalizations, New York: World Scientific.
[22] Kundu, D. (2007). On hybrid censored Weibull distribution, Journal of Statistical Planning and Inference, 137, 2127-2142.
[23] Kundu, D. and Gupta, R. D. (2005). Estimation of P(Y < X) for the generalized exponential distribution, Metrika, 61, 291-308.
[24] Kundu, D. and Gupta, R. D. (2006). Estimation of P(Y < X) for Weibull distributions, IEEE Transactions on Reliability, 55, 270-280.
[25] Kundu, D. and Raqab, M. Z. (2009). Estimation of R = P(Y < X) for three-parameter Weibull distribution, Statistics and Probability Letters, 79, 1839-1846.
[26] Raqab, M. Z. and Kundu, D. (2005). Comparison of different estimators of P(Y < X) for a scaled Burr type X distribution, Communications in Statistics - Simulation and Computation, 34, 465-483.
[27] Raqab, M. Z., Madi, M. T. and Kundu, D. (2008). Estimation of P(Y < X) for the three-parameter generalized exponential distribution, Communications in Statistics - Theory and Methods, 37, 2854-2865.
[28] Rezaei, S., Tahmasbi, R. and Mahmoodi, M. (2010). Estimation of P(Y < X) for generalized Pareto distribution, Journal of Statistical Planning and Inference.