Bayesian estimation of a competing risk model based on Weibull and exponential distributions under right censored data
Hamida Talhi, Hiba Aiachi, Nadji Rahmania
Probability Statistics Laboratory, Badji Mokhtar University, BP 12, 23000 Annaba, Algeria; Paul Painlevé Laboratory, UMR-CNRS 8524, Lille University, 59655 Villeneuve d'Ascq Cédex, France.
Abstract
In this paper we investigate the estimation of the unknown parameters of a competing risk model based on a Weibull distributed decreasing failure rate and an exponentially distributed constant failure rate, under right censored data. The Bayes estimators and the corresponding risks are derived using various loss functions. Since the posterior analysis involves analytically intractable integrals, we propose a Monte-Carlo method to compute these estimators. Given initial values of the model parameters, the maximum likelihood estimators are computed using the Expectation-Maximization algorithm. Finally, we use Pitman's closeness criterion and the integrated mean-square error to compare the performance of the Bayesian and the maximum likelihood estimators.
Keywords:
Weibull model, Exponential model, right censored sample, Bayesian estimation, Expectation-Maximization algorithm, Markov chain Monte Carlo.
1. Introduction
The exponential and the Weibull distributions are the most used distributions in lifetime data analysis, mostly due to experience and goodness-of-fit tests; see for instance Lawless (2002) and Hamada et al. (2008). In this paper we propose a Bayesian analysis of a competing risk model based on Weibull and exponential distributions under right censored data. Boudjerda et al. (2016) considered the Bayesian analysis of the right truncated Weibull distribution under type II censored data and derived Bayes estimators and the corresponding risks using symmetric and asymmetric loss functions. Aouf and Chadli (2017) considered the Bayesian analysis of the generalized Lindley distribution under type I censored data and derived Bayes estimators and the corresponding risks using symmetric and asymmetric loss functions. Balakrishnan and Mitra (2012) applied the EM algorithm to estimate the parameters of the Weibull distribution when the model is left-truncated and the data are right censored.

Corresponding author. E-mail address: [email protected] (N. Rahmania).
Preprint submitted to Elsevier, January 12, 2021.

The exponential distribution E(η₁), with mean η₁, is often used for modelling failure times, caused by accidents and cleared of birth defects, of a non-ageing material. The survival function of the exponential distribution is

S_E(t) = exp(−t/η₁),

where the scale parameter η₁ is the inverse of the constant hazard rate λ. The versatile Weibull distribution W(η₂, β) has survival function

S_W(t) = exp[−(t/η₂)^β]

and hazard rate

h_W(t) = (β/η₂) (t/η₂)^(β−1).

When the shape parameter β < 1, the decreasing hazard rate of the model can be used for modelling failures due to early birth defects, and when β > 1 the increasing hazard rate can be used for modelling ageing. When β = 1, the Weibull distribution reduces to the exponential distribution with scale parameter η₂; this last case may arise due to failures by accidents.

When modelling reliability feedback data with the Weibull distribution, the problem one is faced with is to decide whether β = 1 or not, i.e. to test β = 1 versus β > 1 (or β < 1). To account simultaneously for both failure sources, we consider the r.v. B = min(E, W), where E follows the exponential distribution E(η₁) and W follows the Weibull distribution W(η₂, β) with β > 1, the r.v. E and W being assumed independent. Consequently, the distribution of B is characterised by the parameters η₁, η₂ and β and will be denoted B(η₁, η₂, β).

We propose two approaches to estimate the parameters of B(η₁, η₂, β). The first approach is classical maximum likelihood estimation (MLE) and the second one is Bayesian estimation using three loss functions (generalized quadratic, entropy and Linex). We use the Metropolis-Hastings sampling procedure to generate Monte-Carlo samples to obtain the Bayes estimators of the unknown parameters. Finally, we perform some simulation experiments to compare the performance of the proposed Bayes estimators and the maximum likelihood estimators in terms of Pitman's closeness criterion and the integrated mean square error (IMSE).

The rest of the paper is organized as follows: In Section 2, we present the main characteristics of the model. Section 3 deals with the maximum likelihood estimation of the B(η₁, η₂, β) distribution through the EM algorithm. In Section 4, the Bayesian estimators under different loss functions are displayed. Monte-Carlo simulation results are presented in Section 5. Finally, Section 6 concludes the paper.
2. The B distribution

Consider the r.v. B = min(E, W), where E is exponentially distributed with mean η₁ and W follows the Weibull distribution with scale parameter η₂ and shape parameter β, E and W being independent. The main characteristics of the probability distribution B of the r.v. B are as follows. Its hazard function is

h_B(x) = 1/η₁ + (β/η₂) (x/η₂)^(β−1),    (1)

its survival (or reliability) function is

S_B(x) = exp[−x/η₁ − (x/η₂)^β],    (2)

and its probability density function (pdf) is

f_B(x) = (1/η₁ + (β/η₂) (x/η₂)^(β−1)) exp[−x/η₁ − (x/η₂)^β].    (3)
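For concreteness, the three characteristics (1)-(3) can be coded directly; a minimal Python sketch (the parameter values below are illustrative only, not the paper's):

```python
import numpy as np

def hazard(x, eta1, eta2, beta):
    """Hazard (1): constant exponential part plus the Weibull part."""
    return 1.0 / eta1 + (beta / eta2) * (x / eta2) ** (beta - 1)

def survival(x, eta1, eta2, beta):
    """Survival (2): product of the exponential and Weibull survivals."""
    return np.exp(-x / eta1 - (x / eta2) ** beta)

def pdf(x, eta1, eta2, beta):
    """Density (3): f_B = h_B * S_B."""
    return hazard(x, eta1, eta2, beta) * survival(x, eta1, eta2, beta)

# Sanity check: B = min(E, W), so S_B(x) = S_E(x) * S_W(x).
x, eta1, eta2, beta = 1.5, 2.0, 1.0, 2.0
s_e = np.exp(-x / eta1)
s_w = np.exp(-(x / eta2) ** beta)
assert np.isclose(survival(x, eta1, eta2, beta), s_e * s_w)
```

The identity f_B = h_B · S_B used in the last function follows directly from (1)-(3).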
3. Maximum likelihood estimation
Consider an n-sample (X₁, X₂, ..., X_n) generated from the B distribution with pdf (3). Assuming the data are right censored, with δ_i = 1 if x_i is an observed failure and δ_i = 0 if it is censored, the likelihood function reads

L(η₁, η₂, β | X) = ∏_{i=1}^n f_B(x_i)^{δ_i} S_B(x_i)^{1−δ_i} = ∏_{i=1}^n h_B(x_i)^{δ_i} S_B(x_i),    x₁ ≤ x₂ ≤ ... ≤ x_n.

In view of (1) and (2), the likelihood function is

L(η₁, η₂, β | X) = ∏_{i=1}^n (1/η₁ + (β/η₂)(x_i/η₂)^(β−1))^{δ_i} exp[−(∑_{i=1}^n x_i)/η₁ − ∑_{i=1}^n (x_i/η₂)^β].    (4)

Since the r.v. B is the result of a competition between E and W, the data model is incomplete in the sense that, although the observations are realizations of one of these r.v., it is often hard to know beforehand whether a particular observation is a realization of E or of W. This in turn makes a direct maximization of the likelihood function numerically highly unstable. Instead, the EM algorithm, with its two steps, expectation (E) and maximization (M), is a plausible alternative to the direct maximization of the likelihood function for incomplete data models, especially since we can implement the maximization step separately for the exponential and the Weibull models (cf. Dempster et al. (1977), Bousquet et al. (2006) and Little and Rubin (2002)).

We proceed as follows. Define z_i = (z_i^E, z_i^W), where z_i^E = 1 and z_i^W = 0 indicate that the associated observation comes from the exponential model, while z_i^W = 1 and z_i^E = 0 indicate that it comes from the Weibull distribution. The complete data can then be written as o = (o_i = (x_i, z_i), i = 1, ..., n) = (x, z). The resulting competing risk density can be written as

f(o_i) = h_E(x_i)^{z_i^E} h_W(x_i)^{z_i^W} S_E(x_i) S_W(x_i),    (5)

and the log-likelihood based on the complete data o = (o₁, ..., o_n) reads

l(η₁, η₂, β | o) = ∑_{i=1}^n [z_i^E log(h_E(x_i)) + z_i^W log(h_W(x_i)) + log(S_E(x_i)) + log(S_W(x_i))].    (6)
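The censored log-likelihood corresponding to (4) is straightforward to evaluate numerically; a minimal sketch (the data and parameter values are illustrative):

```python
import numpy as np

def log_likelihood(x, delta, eta1, eta2, beta):
    """Right-censored log-likelihood from (4):
    sum of delta_i * log h_B(x_i) + log S_B(x_i)."""
    x = np.asarray(x, dtype=float)
    delta = np.asarray(delta, dtype=float)
    log_h = np.log(1.0 / eta1 + (beta / eta2) * (x / eta2) ** (beta - 1))
    log_s = -x / eta1 - (x / eta2) ** beta
    return np.sum(delta * log_h + log_s)

x = np.array([0.3, 0.8, 1.1, 1.9, 2.5])
delta = np.array([1, 1, 1, 0, 0])   # the last two observations are censored
print(log_likelihood(x, delta, eta1=2.0, eta2=1.0, beta=2.0))
```

A censored observation (δ_i = 0) contributes only its survival term, which is exactly the f_B^{δ} S_B^{1−δ} factorization above.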
Set Θ = (η₁, η₂, β) and let Θ̃ denote its current value. The expected value of the log-likelihood, Q(Θ | Θ̃), is

Q(Θ | Θ̃) = E(l(η₁, η₂, β | o) | x, Θ̃) = ∑_{i=1}^n [p̃_E(x_i) log(h_E(x_i)) + p̃_W(x_i) log(h_W(x_i)) + log(S_E(x_i)) + log(S_W(x_i))],    (7)

where

p̃_E(x_i) = P(z_i^E = 1 | x, Θ̃) = h_E(x_i) / (h_E(x_i) + h_W(x_i)),    p̃_W(x_i) = 1 − p̃_E(x_i),

the hazard rates being evaluated at the current value Θ̃. Here p̃_E(x_i) (respectively p̃_W(x_i)) denotes the probability that the observation comes from the exponential (respectively Weibull) distribution. Moreover, equation (7) has an additive structure that results from the contributions of the exponential and the Weibull distributions. This additive decomposition of (7) makes the implementation of the M-step easier, in the sense that the terms corresponding to the exponential and the Weibull distributions are maximized separately. The exponential term can be maximized by direct differentiation with respect to the parameter η₁, whereas the Weibull term can be maximized using an iterative procedure such as the Newton-Raphson method (see Mann et al. (1974), Press et al. (2007)), since there is no closed form for the derivatives with respect to the Weibull parameters η₂ and β. These two steps are repeated until the algorithm converges to the desired MLE.

4. Bayesian estimators under different loss functions

In the Bayesian approach, the unknown parameters are considered as random variables (r.v.) instead of fixed constants; the variations in the parameters can then be incorporated by assuming prior distributions for them. We assume the parameters η₁ and η₂ follow Gamma prior distributions,

π₁(η₁) = (a₁^{b₁}/Γ(b₁)) η₁^{b₁−1} exp[−a₁η₁],    π₂(η₂) = (a₂^{b₂}/Γ(b₂)) η₂^{b₂−1} exp[−a₂η₂],

while the parameter β follows a uniform distribution, β ∼ U(β_l, β_r),

π₃(β) = 1/(β_r − β_l),    β_l ≤ β ≤ β_r.

Moreover, η₁, η₂ and β are assumed independent.
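Sampling from these independent priors is straightforward; a sketch (the hyperparameter values a₁, b₁, a₂, b₂, β_l, β_r below are illustrative, not the paper's choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hyperparameters (not the paper's values).
a1, b1 = 1.0, 2.0           # Gamma prior for eta1: rate a1, shape b1
a2, b2 = 1.0, 2.0           # Gamma prior for eta2: rate a2, shape b2
beta_l, beta_r = 1.0, 3.0   # Uniform prior for beta

# numpy parametrizes the Gamma by shape and scale = 1/rate.
eta1 = rng.gamma(shape=b1, scale=1.0 / a1, size=1000)
eta2 = rng.gamma(shape=b2, scale=1.0 / a2, size=1000)
beta = rng.uniform(beta_l, beta_r, size=1000)

# Sample means should be roughly b1/a1, b2/a2 and (beta_l + beta_r)/2.
print(eta1.mean(), eta2.mean(), beta.mean())
```

Note the parametrization: with density proportional to η^{b−1} e^{−aη}, the prior mean of η is b/a, so numpy's `scale` argument is 1/a.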
Thus, the joint prior distribution of (η₁, η₂, β) is given by

π(η₁, η₂, β) = (a₁^{b₁} a₂^{b₂} / (Γ(b₁)Γ(b₂))) η₁^{b₁−1} η₂^{b₂−1} exp[−a₁η₁ − a₂η₂] · 1/(β_r − β_l).    (8)

There is no specific criterion for the selection of the Gamma family, except that it is flexible and that the exponential model admits the Gamma distribution as a conjugate prior. The posterior density is then

π(η₁, η₂, β | X) = L(η₁, η₂, β | X) π(η₁, η₂, β) / ∫∫∫₀^{+∞} L(η₁, η₂, β | X) π(η₁, η₂, β) dη₁ dη₂ dβ,

so the joint posterior of (η₁, η₂, β) is

π(η₁, η₂, β | X) = K η₁^{b₁−1} η₂^{b₂−1} exp[−∑_{i=1}^n x_i/η₁ − ∑_{i=1}^n (x_i/η₂)^β − a₁η₁ − a₂η₂] ∏_{i=1}^n (1/η₁ + (β/η₂)(x_i/η₂)^(β−1))^{δ_i},    (9)

where K is the normalizing constant.

Next, we introduce the three loss functions, namely the generalized quadratic (GQ), the Linex and the entropy functions, that we consider below. In the following table we display these loss functions with their Bayes estimators and the corresponding posterior risks (PR).
Table 1: The loss functions with the corresponding Bayes estimators and posterior risks.

Loss function | Expression | Bayes estimator | Posterior risk
Generalized quadratic | L(λ, δ) = τ(λ)(λ − δ)² | δ̂_GQ = E_π(τ(λ)λ)/E_π(τ(λ)) | E_π(τ(λ)(λ − δ̂_GQ)²)
Entropy | L(λ, δ) = (δ/λ)^p − p log(δ/λ) − 1 | δ̂_E = [E_π(λ^{−p})]^{−1/p} | p[E_π(log λ) − log δ̂_E]
Linex | L(λ, δ) = exp(r(δ − λ)) − r(δ − λ) − 1 | δ̂_L = −(1/r) log(E_π(exp(−rλ))) | r(δ̂_GQ − δ̂_L)

Under the GQ loss function, L(λ, δ) = τ(λ)(λ − δ)², assuming that τ(λ) = λ^{α−1}, and writing E_π(·) for expectations with respect to the posterior (9), the Bayes estimators of η₁, η₂ and β, denoted respectively η̂₁^{(GQ)}, η̂₂^{(GQ)} and β̂^{(GQ)}, are

η̂₁^{(GQ)} = E_π(η₁^α)/E_π(η₁^{α−1}),    η̂₂^{(GQ)} = E_π(η₂^α)/E_π(η₂^{α−1}),    β̂^{(GQ)} = E_π(β^α)/E_π(β^{α−1}).

Explicitly, for η₁,

η̂₁^{(GQ)} = [∫∫∫₀^{+∞} η₁^{α+b₁−1} η₂^{b₂−1} exp(−∑_{i=1}^n x_i/η₁ − ∑_{i=1}^n (x_i/η₂)^β − a₁η₁ − a₂η₂) ∏_{i=1}^n (1/η₁ + (β/η₂)(x_i/η₂)^(β−1))^{δ_i} dη₁ dη₂ dβ] / [∫∫∫₀^{+∞} η₁^{α+b₁−2} η₂^{b₂−1} exp(−∑_{i=1}^n x_i/η₁ − ∑_{i=1}^n (x_i/η₂)^β − a₁η₁ − a₂η₂) ∏_{i=1}^n (1/η₁ + (β/η₂)(x_i/η₂)^(β−1))^{δ_i} dη₁ dη₂ dβ],

and the estimators of η₂ and β are given by the analogous ratios of integrals, the powers α and α − 1 being carried by η₂ and β respectively. The corresponding posterior risks are then
PR(η̂₁^{(GQ)}) = E_π(η₁^{α+1}) − 2 η̂₁^{(GQ)} E_π(η₁^α) + (η̂₁^{(GQ)})² E_π(η₁^{α−1}),
PR(η̂₂^{(GQ)}) = E_π(η₂^{α+1}) − 2 η̂₂^{(GQ)} E_π(η₂^α) + (η̂₂^{(GQ)})² E_π(η₂^{α−1}),
PR(β̂^{(GQ)}) = E_π(β^{α+1}) − 2 β̂^{(GQ)} E_π(β^α) + (β̂^{(GQ)})² E_π(β^{α−1}).

We note that when α = 1, we retrieve the basic quadratic loss function.

Under the entropy loss function, the Bayes estimators η̂₁^{(E)}, η̂₂^{(E)} and β̂^{(E)} are

η̂₁^{(E)} = [E_π(η₁^{−p})]^{−1/p} = [K ∫∫∫₀^{+∞} η₁^{b₁−1−p} η₂^{b₂−1} exp(−∑_{i=1}^n x_i/η₁ − ∑_{i=1}^n (x_i/η₂)^β − a₁η₁ − a₂η₂) ∏_{i=1}^n (1/η₁ + (β/η₂)(x_i/η₂)^(β−1))^{δ_i} dη₁ dη₂ dβ]^{−1/p},

with η̂₂^{(E)} = [E_π(η₂^{−p})]^{−1/p} and β̂^{(E)} = [E_π(β^{−p})]^{−1/p} given by the analogous integrals, the power −p being carried by η₂ and β respectively. The corresponding posterior risks are then
PR(η̂₁^{(E)}) = p[E_π(log η₁) − log η̂₁^{(E)}],
PR(η̂₂^{(E)}) = p[E_π(log η₂) − log η̂₂^{(E)}],
PR(β̂^{(E)}) = p[E_π(log β) − log β̂^{(E)}].

Finally, under the Linex loss function we obtain the following estimators:

η̂₁^{(L)} = −(1/r) log(E_π(e^{−rη₁})) = −(1/r) log[K ∫∫∫₀^{+∞} η₁^{b₁−1} η₂^{b₂−1} exp(−∑_{i=1}^n x_i/η₁ − ∑_{i=1}^n (x_i/η₂)^β − a₁η₁ − a₂η₂ − rη₁) ∏_{i=1}^n (1/η₁ + (β/η₂)(x_i/η₂)^(β−1))^{δ_i} dη₁ dη₂ dβ],

with η̂₂^{(L)} = −(1/r) log(E_π(e^{−rη₂})) and β̂^{(L)} = −(1/r) log(E_π(e^{−rβ})) given by the analogous integrals, the extra term −rη₁ in the exponent being replaced by −rη₂ and −rβ respectively. The corresponding posterior risks are then
PR(η̂₁^{(L)}) = r(η̂₁^{(GQ)} − η̂₁^{(L)}),
PR(η̂₂^{(L)}) = r(η̂₂^{(GQ)} − η̂₂^{(L)}),
PR(β̂^{(L)}) = r(β̂^{(GQ)} − β̂^{(L)}).

Since it is difficult to obtain closed form expressions for all these estimators, in the next section we use MCMC procedures to evaluate them.
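The MCMC step can be sketched as a random-walk Metropolis-Hastings sampler targeting the unnormalized posterior (9), after which each Bayes estimator is a simple functional of the draws. This is only a minimal illustration: the proposal scale, hyperparameters and starting point are our assumptions, and no burn-in or tuning is shown.

```python
import numpy as np

def log_posterior(theta, x, delta, a1=1.0, b1=2.0, a2=1.0, b2=2.0,
                  beta_l=1.0, beta_r=3.0):
    """Unnormalized log of the joint posterior (9); hyperparameters illustrative."""
    eta1, eta2, beta = theta
    if eta1 <= 0 or eta2 <= 0 or not (beta_l <= beta <= beta_r):
        return -np.inf
    log_h = np.log(1.0 / eta1 + (beta / eta2) * (x / eta2) ** (beta - 1))
    log_lik = np.sum(delta * log_h) - np.sum(x) / eta1 - np.sum((x / eta2) ** beta)
    log_prior = (b1 - 1) * np.log(eta1) - a1 * eta1 \
              + (b2 - 1) * np.log(eta2) - a2 * eta2
    return log_lik + log_prior

def metropolis(x, delta, n_iter=5000, step=0.2, seed=1):
    """Random-walk Metropolis-Hastings over (eta1, eta2, beta)."""
    rng = np.random.default_rng(seed)
    theta = np.array([2.0, 1.0, 2.0])            # illustrative starting point
    current = log_posterior(theta, x, delta)
    draws = np.empty((n_iter, 3))
    for t in range(n_iter):
        prop = theta + step * rng.standard_normal(3)
        cand = log_posterior(prop, x, delta)
        if np.log(rng.uniform()) < cand - current:   # accept/reject
            theta, current = prop, cand
        draws[t] = theta
    return draws

# Bayes estimators computed from posterior draws s of one coordinate:
def gq_estimator(s, alpha):      # E(lam^alpha) / E(lam^(alpha-1))
    return np.mean(s ** alpha) / np.mean(s ** (alpha - 1))

def entropy_estimator(s, p):     # (E(lam^-p))^(-1/p)
    return np.mean(s ** (-p)) ** (-1.0 / p)

def linex_estimator(s, r):       # -(1/r) log E(exp(-r lam))
    return -np.log(np.mean(np.exp(-r * s))) / r
```

Usage: `draws = metropolis(x, delta)` then, e.g., `entropy_estimator(draws[:, 0], p=-1)` for η₁; the posterior risks in Table 1 are obtained from the same draws by replacing each E_π(·) with a sample mean.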
5. Simulation study
In order to compare the performance of the proposed Bayes estimators with the MLE, we perform a Monte Carlo study assuming that η₁ = 2, η₂ = 1 and β = 2, i.e. we consider the model B(2, 1, 2). We generate N = 10000 samples of the right censored model with different sizes n = 10, n = 20 and n = 30. Choosing to censor 10%, respectively 20%, of the data, we obtain the following results.

In the next tables we display the values of the estimators using the EM algorithm for the B model when 10% and 20% of the data are censored, where a Newton-Raphson algorithm is applied to the Weibull component and direct likelihood maximization is applied to the exponential component.

Table 2: The MLE of the parameters with quadratic error (in brackets), 10% censoring.

Table 3: The MLE of the parameters with quadratic error (in brackets), 20% censoring.

Discussion: For both censoring times, the estimated values of the parameters are close to the true values. Moreover, when 10% of the data are censored, the smallest quadratic error corresponds to the largest n.

The Bayesian estimators are obtained using MCMC methods. For the choice of the hyperparameters, designed from the equations given in Section 4, we consider the following prior information: for the shape parameter β we assume [β_l, β_r] = [1, ...]; for the scale parameter η₁ of the exponential component we assume an interval [1, ...]; and likewise for the scale parameter η₂ of the Weibull component an interval [1, ...].

Table 4: Bayes estimators and PR (in brackets) under the generalized quadratic loss function, for α ∈ {−2, −1, −0.5, 0.5, 1, 2}.

Table 5: Bayes estimators and PR (in brackets) under the entropy loss function, for p ∈ {−2, −1, −0.5, 0.5, 1, 2}.

Table 6: Bayes estimators and PR (in brackets) under the Linex loss function, for r ∈ {−2, −1, −0.5, 0.5, 1, 2}.

Discussion: We note that the value α = −2 gives the best posterior risk when n is large. Under the entropy loss function, we obtain the best posterior risk when p = −1 and n = 30. Finally, under the Linex loss function, the case r = −0.5 gives the best posterior risk.

Table 7: Bayes estimators and PR (in brackets) under the three loss functions, GQ (α = −2), entropy (p = −1), Linex (r = −0.5).

Discussion: We notice that the entropy loss function provides the best Bayesian estimators of the parameters among the three loss functions with regard to the posterior risk values, while the other two (GQ and Linex) give the same level of performance.
Since the survival and the hazard functions both depend on time, we consider here the time interval t ∈ [1, ...]. For n = 30, among the MLE and the Bayesian estimators we choose the estimator under the entropy loss function (the best estimator, as seen above), in both censoring cases 10% and 20%.

Figures: estimated survival and hazard functions over t for the MLE and the entropy-loss Bayes estimators, under 10% and 20% censoring.

Discussion: We notice that, in the case of the survival function, the two estimators behave differently for small and for large values of t.
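Comparing the fitted survival curves over a time grid amounts to plugging the two sets of parameter estimates into (1) and (2); a sketch (the estimates below are placeholders, not the paper's fitted values):

```python
import numpy as np

def survival(t, eta1, eta2, beta):
    """Survival (2) of the B model."""
    return np.exp(-t / eta1 - (t / eta2) ** beta)

def hazard(t, eta1, eta2, beta):
    """Hazard (1) of the B model."""
    return 1.0 / eta1 + (beta / eta2) * (t / eta2) ** (beta - 1)

t = np.linspace(1.0, 5.0, 50)                # illustrative time grid
mle = dict(eta1=2.1, eta2=1.05, beta=1.9)    # placeholder MLE estimates
bayes = dict(eta1=2.0, eta2=0.98, beta=2.05) # placeholder Bayes estimates

gap = np.max(np.abs(survival(t, **mle) - survival(t, **bayes)))
print(f"max gap between fitted survival curves: {gap:.4f}")
```

Plotting `survival(t, **mle)` against `survival(t, **bayes)` (and likewise for the hazard) reproduces the kind of comparison shown in the figures.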
In this section, we compare the best Bayesian estimators obtained above withthe maximum likelihood estimators. For this, we propose to use the followingtwo criteria: the Pitman closeness criterion (see Pitman (1937), Fuller (1982)and Jozani (2012)) and the integrated mean square error (IMSE) defined asfollows.
An estimator θ̂₁ of a parameter θ dominates another estimator θ̂₂ in the sense of the Pitman closeness criterion if, for all θ ∈ Θ,

P_θ[|θ̂₁ − θ| < |θ̂₂ − θ|] > 1/2.

Consider the estimators θ̂_i (i = 1, ..., N) obtained from N samples of the model. The integrated mean square error is defined as

IMSE = (1/N) ∑_{i=1}^N (θ̂_i − θ)².

In the following tables, we present the values of the Pitman probabilities, which allow us to compare the Bayesian estimators with the MLE under the three loss functions when α = −2, p = −1 and r = −0.5.

Table 8: Pitman comparison of the estimators, GQ (α = −2), entropy (p = −1), Linex (r = −0.5), for 10% and 20% censoring.

Discussion:
When n is small, the Bayesian estimators of β and of one of the scale parameters are better than the MLE estimators; furthermore, we note that the generalised quadratic loss function provides the best values. However, for the remaining scale parameter the MLE is better than the Bayesian estimator. When n is large, the MLE estimators of the three parameters are better than the Bayesian estimators.

In the next table we present the values of the integrated mean square error of the estimators under the three loss functions, together with the maximum likelihood estimator.

Table 9: The IMSE of the estimators, MLE, GQ (α = −2), entropy (p = −1), Linex (r = −0.5).

Discussion: When n is small, the Bayesian estimators of β and of one of the scale parameters provide the smallest IMSE compared to the MLE estimators, while for the remaining scale parameter the MLE performs better than the Bayesian one. When n is large, all the Bayesian estimators are better than the MLE estimators, and we notice that the generalised quadratic loss function provides the best IMSE values.
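Both comparison criteria are easy to compute from repeated simulations; a sketch (the two estimator columns below are synthetic stand-ins for the Bayes and ML estimates, not the paper's results):

```python
import numpy as np

def pitman_probability(est1, est2, theta):
    """Fraction of samples where est1 is strictly closer to theta than est2."""
    est1, est2 = np.asarray(est1), np.asarray(est2)
    return np.mean(np.abs(est1 - theta) < np.abs(est2 - theta))

def imse(est, theta):
    """Integrated mean square error over the N simulated estimates."""
    est = np.asarray(est)
    return np.mean((est - theta) ** 2)

rng = np.random.default_rng(2)
theta = 2.0                                            # true parameter value
bayes_hat = theta + 0.2 * rng.standard_normal(10000)   # synthetic estimates
mle_hat = theta + 0.3 * rng.standard_normal(10000)

print(pitman_probability(bayes_hat, mle_hat, theta))   # > 0.5: Bayes dominates
print(imse(bayes_hat, theta), imse(mle_hat, theta))
```

An estimator dominates in Pitman's sense when the reported probability exceeds 1/2; the IMSE comparison uses the same N simulated estimates.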
6. Conclusion
In this study we considered a simple competing risk model based on Weibull and exponential failures. We used classical and Bayesian estimation methods to estimate the unknown parameters, employing the EM algorithm since no closed form of the MLE can be obtained. The results were obtained using simulated data sets of sizes 10, 20 and 30. We obtained the Bayesian estimators under the generalized quadratic, entropy and Linex loss functions, then used Monte-Carlo simulation to determine which loss function has the smallest posterior risks. The selected Bayesian estimators were compared with the maximum likelihood estimators of the unknown parameters using Pitman's closeness criterion and the integrated mean square error. As a future prospect, a mixture of the loss functions used in this paper might yield an optimal estimation.

References

[1] Achcar, J.A. and Leonardo, R.A. (1998): Use of Markov chain Monte Carlo methods in a Bayesian analysis of the Block and Basu bivariate exponential distribution, Annals of the Institute of Statistical Mathematics, 50, 403-416.
[2] Agostino, R.B. and Stephens, M.A. (1986): Goodness-of-fit Techniques. New York, Marcel Dekker.
[3] Aouf, F. and Chadli, A. (2017): Bayesian estimations in the generalized Lindley model, International Journal of Mathematical Models and Methods in Applied Sciences, 11, 26-32.
[4] Balakrishnan, N. and Mitra, D. (2012): Left truncated and right censored Weibull data and likelihood inference with an illustration, Computational Statistics and Data Analysis, 56(12), 4011-4025.
[5] Basu, S., Sen, A. and Banerjee, M. (2003): Bayesian analysis of competing risks with partially masked cause of failure, Journal of the Royal Statistical Society: Series C (Applied Statistics), 52(1), 77-93.
[6] Berger, J.O. and Sun, D. (1993): Bayesian analysis for the poly-Weibull distribution, Journal of the American Statistical Association, 88(424), 1412-1418.
[7] Bertholon, H. (2001): Une modélisation du vieillissement. Ph.D. Thesis, Joseph Fourier University, Grenoble.
[8] Bousquet, N., Bertholon, H. and Celeux, G. (2006): An alternative competing risk model to the Weibull distribution for modelling aging in lifetime data analysis, Lifetime Data Analysis, 12, 481-504.
[9] Boudjerda, K., Chadli, A. and Fellag, H. (2016): Posterior analysis of the compound truncated Weibull under different loss functions for censored data, International Journal of Mathematics and Computers in Simulation, 10, 265-272.
[10] Dempster, A.P., Laird, N.M. and Rubin, D.B. (1977): Maximum likelihood from incomplete data via the EM algorithm, Journal of the Royal Statistical Society, Ser. B, 39(1), 1-38.
[11] Hamada, M.S., Wilson, A., Reese, C.S. and Martz, H. (2008): Bayesian Reliability, Springer-Verlag.
[12] Lawless, J.F. (2002): Statistical Models and Methods for Lifetime Data, John Wiley and Sons.
[13] Little, R.J.A. and Rubin, D.B. (2002): Statistical Analysis with Missing Data, John Wiley and Sons.
[14] Mann, N.R., Schafer, R.E. and Singpurwalla, N.D. (1974): Methods for Statistical Analysis of Reliability and Life Data, John Wiley and Sons.
[15] McLachlan, G.J. and Krishnan, T. (1997): The EM Algorithm and Extensions. New York, Wiley.
[16] Park, C. and Padgett, W.J. (2004): Analysis of strength distributions of multimodal failures using the EM algorithm, Technical Report No. 220, Department of Statistics, University of South Carolina.
[17] Pedada, S.D. and Khattree, R. (1986): On Pitman nearness and variance of estimators, Comm. Statist. Soc., 14, 145-155.
[18] Pitman, E. (1937): The closest estimates of statistical parameters, Mathematical Proceedings of the Cambridge Philosophical Society, 33(2).
[19] Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery, B.P. (2007): Numerical Recipes, 3rd Edition: The Art of Scientific Computing, Cambridge University Press.
[20] Ranjan, R. and Upadhyay, S.K. (2015): Posterior analysis of a competing risk model based on decreasing failure rate Weibull and exponential failures, Journal of Reliability and Statistical Studies, 8(1), 51-62.
[21] Varadhan, R. and Gilbert, P. (2010): An R package for solving a large system of nonlinear equations and for optimizing a high-dimensional nonlinear objective function.