Estimation of Inverse Weibull Distribution Under Type-I Hybrid Censoring
Mohammad Kazemi (a), Mina Azizpour (b)

(a) Department of Statistics, School of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran
(b) Department of Statistics, Faculty of Mathematical Sciences, University of Mazandaran, Babolsar, Iran
Abstract
The mixture of the Type-I and Type-II censoring schemes is called hybrid censoring. This paper presents statistical inference for the inverse Weibull distribution when the data are Type-I hybrid censored. First, we consider the maximum likelihood estimators of the unknown parameters; it is observed that the maximum likelihood estimators cannot be obtained in closed form. We further obtain the Bayes estimators and the corresponding highest posterior density credible intervals of the unknown parameters under the assumption of independent gamma priors, using an importance sampling procedure. We also compute approximate Bayes estimators using Lindley's approximation technique. We have performed a simulation study in order to compare the proposed Bayes estimators with the maximum likelihood estimators. A real-life data set is used to illustrate the derived results.
Keywords: Bayes estimators; Hybrid censoring; Importance sampling; Maximum likelihood estimators.

MSC 2010: 62N01; 62N02.
1 Introduction

In life testing experiments the data are often censored. Type-I and Type-II censoring are the two most popular censoring schemes in use for life testing experiments. Two mixtures of the Type-I and Type-II censoring schemes are known as hybrid censoring schemes. If the experiment terminates as soon as either the $R$-th failure or the pre-specified censoring time $T$ occurs, the Type-I hybrid censoring scheme has been performed. In the Type-II hybrid censoring scheme, the experiment terminates when the later of the $R$-th failure and the censoring time $T$ occurs. Denote the $i$-th order statistic from a random sample of size $n$ by $X_{i:n}$. Thus, in the Type-I hybrid censoring scheme, one observes $X_{1:n}, \ldots, X_{r:n}$, where $X_{r:n} \le \min\{X_{R:n}, T\}$ and $X_{r+1:n} > \min\{X_{R:n}, T\}$. Under this scheme, the experiment may be terminated too early, resulting in very few failures. Under the Type-II hybrid censoring scheme, the experiment terminates when $X_{1:n}, \ldots, X_{r:n}$ are observed, for which $X_{r:n} \le \max\{X_{R:n}, T\}$ and $X_{r+1:n} > \max\{X_{R:n}, T\}$. In both hybrid censoring schemes, the failure number $R$ and the censoring time $T$ are pre-fixed.

Epstein (1954) first introduced the hybrid censoring scheme and analyzed the data under the assumption of an exponential lifetime distribution of the experimental units. An extensive literature exists on hybrid censoring in both the classical and Bayesian frameworks, and the overview presented below describes some of the work done on this topic. Gupta and Kundu (1998) obtained confidence and credible intervals for a one-parameter exponential distribution. Kundu (2007) obtained the MLEs, the approximate MLEs and the Bayes estimates of the shape and scale parameters of a Weibull distribution. Kundu and Pradhan (2009) analyzed a generalized exponential distribution in the presence of hybrid censoring. Balakrishnan and Shafay (2012) developed a general method for obtaining Bayes prediction intervals of future observables based on observed Type-I hybrid censored data. Rastogi and Tripathi (2013) derived maximum likelihood and Bayes estimates of the unknown model parameters of a Burr XII distribution. Singh and Tripathi (2015) studied a two-parameter lognormal distribution using hybrid censored samples and derived various point and interval estimates of the unknown lognormal parameters from the classical and Bayesian viewpoints. Tripathi and Rastogi (2015) considered point and interval estimation of the unknown parameters of a generalized inverted exponential distribution and obtained various classical and Bayes estimates based on hybrid censored samples. Hyun, Lee and Robert (2016) analyzed a two-parameter log-logistic distribution based on Type-I and Type-II hybrid censored data.

In this paper, we provide point and interval estimators for the unknown parameters of an inverse Weibull (IW) distribution based on Type-I hybrid censored samples.
The probability density function (PDF) of an IW distribution is
$$ f_X(x;\alpha,\theta) = \alpha\,\theta^{-\alpha}\, x^{-(\alpha+1)}\, e^{-(\theta x)^{-\alpha}}, \qquad x > 0, \qquad (1.1) $$
and the corresponding cumulative distribution function (CDF) is given by
$$ F_X(x;\alpha,\theta) = e^{-(\theta x)^{-\alpha}}, \qquad x > 0, \qquad (1.2) $$
where $\alpha > 0$ and $\theta > 0$; the shape parameter $\alpha$ governs the shape of the PDF, the hazard function and the general properties of the IW distribution. When $\alpha = 1$ and $\alpha = 2$, the IW distribution reduces to the inverse exponential and inverse Rayleigh distributions, respectively.

The IW distribution can be a more appropriate model than the Weibull distribution, because the Weibull distribution does not provide a satisfactory parametric fit when the data indicate a non-monotone, unimodal hazard rate function. The hazard rate function of the IW distribution is non-monotone (unimodal), with a shape governed by the shape parameter. The IW distribution is useful for modelling data such as the time to breakdown of an insulating fluid subjected to a constant tension, and the degradation of mechanical components such as pistons and crankshafts of diesel engines. Extensive work has been done on the IW distribution. Kundu and Howlader (2010) considered Bayesian inference and prediction problems for the IW distribution based on Type-II censored data. Singh et al. (2013) proposed a Bayesian procedure for the estimation of the parameters of the IW distribution under the Type-II hybrid censoring scheme. Ateya (2015) considered point and interval estimation of the unknown parameters of an IW distribution based on Balakrishnan's unified hybrid censoring scheme. We consider the inference for the IW distribution under the Type-I hybrid censoring scheme.

The rest of the paper is organized as follows. In Section 2, we discuss the maximum likelihood estimation of the scale and shape parameters of the IW distribution. The asymptotic confidence bounds are provided in Section 3. Bayesian analyses are presented in Section 4.
In Section 5, we conduct a simulation study to compare the performance of the proposed methods, and a real data set is analyzed for illustrative purposes in Section 6. Finally, we conclude the paper in Section 7.

2 Maximum Likelihood Estimation

In this section we provide the maximum likelihood estimators (MLEs) of the unknown parameters. We re-parametrize the model by setting $\lambda = \theta^{-\alpha}$. Suppose $n$ identical units are put on a life test. Under the Type-I hybrid censoring scheme, we observe only the first $r$ failure times, say $t_1, t_2, \ldots, t_r$. Under the assumption that the lifetimes of the items are independent and identically distributed (i.i.d.) IW random variables, the likelihood function for the Type-I hybrid censored data, without the multiplicative constant, can be written as
$$ L(\alpha,\lambda \mid \text{data}) = \alpha^{r}\lambda^{r}\, e^{-\lambda\sum_{i=1}^{r} x_i^{\alpha}} \prod_{i=1}^{r} x_i^{\alpha+1}\, \bigl(1 - e^{-\lambda u^{-\alpha}}\bigr)^{n-r}, \qquad (2.3) $$
where $x_i = t_{(i)}^{-1}$, $u = \min(t_{(R)}, T)$ and $r$ denotes the number of units that fail before time $u$. Taking the logarithm of (2.3), we obtain
$$ l(\alpha,\lambda \mid \text{data}) = r\ln(\alpha\lambda) - \lambda\sum_{i=1}^{r} x_i^{\alpha} + (\alpha+1)\sum_{i=1}^{r}\ln x_i + (n-r)\ln\bigl(1 - e^{-\lambda u^{-\alpha}}\bigr). \qquad (2.4) $$
Taking the derivatives of (2.4) with respect to $\alpha$ and $\lambda$, and equating them to zero, we obtain the normal equations
$$ \frac{\partial l}{\partial\alpha} = \frac{r}{\alpha} - \lambda\sum_{i=1}^{r} x_i^{\alpha}\ln x_i + \sum_{i=1}^{r}\ln x_i - \frac{(n-r)\,\lambda u^{-\alpha}\ln(u)\, e^{-\lambda u^{-\alpha}}}{1 - e^{-\lambda u^{-\alpha}}} = 0, $$
$$ \frac{\partial l}{\partial\lambda} = \frac{r}{\lambda} - \sum_{i=1}^{r} x_i^{\alpha} + \frac{(n-r)\, u^{-\alpha}\, e^{-\lambda u^{-\alpha}}}{1 - e^{-\lambda u^{-\alpha}}} = 0. \qquad (2.5) $$
It is clear that the normal equations do not have explicit solutions, so numerical techniques are needed to solve them simultaneously.

3 Asymptotic Confidence Bounds
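Both the maximum likelihood fit of Section 2 and the interval machinery developed in this section can be illustrated numerically. The sketch below simulates a Type-I hybrid censored IW sample, maximizes (2.4) with a general-purpose optimizer, and approximates the observed information by central finite differences; all names, the simulated settings, and the use of a generic optimizer and a finite-difference Hessian are our own choices, not prescriptions from the text.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Simulate a Type-I hybrid censored sample from F(t) = exp(-lam * t**(-a)).
n, R, T = 100, 80, 3.0
alpha_true, lam_true = 2.0, 1.0
t = np.sort((-np.log(rng.uniform(size=n)) / lam_true) ** (-1.0 / alpha_true))
u = min(t[R - 1], T)          # stopping time min(t_(R), T)
x = 1.0 / t[t <= u]           # x_i = 1 / t_(i), as in (2.3)
r = x.size

def negloglik(par):
    """Negative of the log-likelihood (2.4)."""
    a, lam = par
    if a <= 0 or lam <= 0:
        return np.inf
    ll = r * np.log(a * lam) - lam * np.sum(x ** a) + (a + 1) * np.sum(np.log(x))
    if n > r:                 # contribution of the n - r censored units
        ll += (n - r) * np.log1p(-np.exp(-lam * u ** (-a)))
    return -ll

res = minimize(negloglik, x0=[1.0, 1.0], method="Nelder-Mead")
alpha_hat, lam_hat = res.x

def num_hessian(f, p, h=1e-4):
    """Central-difference Hessian; applied to -loglik it is the observed information."""
    p, k = np.asarray(p, float), len(p)
    H = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            ei, ej = np.zeros(k), np.zeros(k)
            ei[i], ej[j] = h, h
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4.0 * h * h)
    return H

V = np.linalg.inv(num_hessian(negloglik, res.x))   # asymptotic covariance (V_ij)
se = np.sqrt(np.diag(V))
ci_alpha = (alpha_hat - 1.96 * se[0], alpha_hat + 1.96 * se[0])   # interval (3.6)
ci_lambda = (lam_hat - 1.96 * se[1], lam_hat + 1.96 * se[1])      # interval (3.7)
print(alpha_hat, lam_hat, ci_alpha, ci_lambda)
```

With this setup the point estimates should land near the true values $(2, 1)$ up to sampling error; the same routine applies to real data once the observed failure times and $u$ are supplied.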
Since the MLEs of the unknown parameters $\alpha$ and $\lambda$ cannot be obtained in closed form, it is not easy to derive their exact distributions, and hence exact confidence intervals for the unknown parameters are difficult to obtain. In this section, we compute the observed Fisher information based on the likelihood equations; this enables us to develop pivotal quantities based on the limiting normal distribution and then construct asymptotic confidence intervals.

From the log-likelihood function in (2.4) we obtain the observed Fisher information through the second derivatives
$$ \frac{\partial^2 l}{\partial\alpha^2} = -\frac{r}{\alpha^2} - \lambda\sum_{i=1}^{r} x_i^{\alpha}(\ln x_i)^2 + \frac{(n-r)\,\lambda u^{-\alpha}(\ln u)^2\, e^{-\lambda u^{-\alpha}}\,(1-\lambda u^{-\alpha})}{1 - e^{-\lambda u^{-\alpha}}} - \frac{(n-r)\,\lambda^2 u^{-2\alpha}(\ln u)^2\, e^{-2\lambda u^{-\alpha}}}{\bigl(1 - e^{-\lambda u^{-\alpha}}\bigr)^2}, $$
$$ \frac{\partial^2 l}{\partial\alpha\,\partial\lambda} = -\sum_{i=1}^{r} x_i^{\alpha}\ln x_i - \frac{(n-r)\, u^{-\alpha}\ln(u)\, e^{-\lambda u^{-\alpha}}\,(1-\lambda u^{-\alpha})}{1 - e^{-\lambda u^{-\alpha}}} + \frac{(n-r)\,\lambda u^{-2\alpha}\ln(u)\, e^{-2\lambda u^{-\alpha}}}{\bigl(1 - e^{-\lambda u^{-\alpha}}\bigr)^2}, $$
$$ \frac{\partial^2 l}{\partial\lambda^2} = -\frac{r}{\lambda^2} - \frac{(n-r)\, u^{-2\alpha}\, e^{-\lambda u^{-\alpha}}}{1 - e^{-\lambda u^{-\alpha}}} - \frac{(n-r)\, u^{-2\alpha}\, e^{-2\lambda u^{-\alpha}}}{\bigl(1 - e^{-\lambda u^{-\alpha}}\bigr)^2}. $$
The observed Fisher information matrix can be inverted to obtain the asymptotic variance-covariance matrix of the MLEs as
$$ I^{-1} = \begin{pmatrix} -\dfrac{\partial^2 l}{\partial\alpha^2} & -\dfrac{\partial^2 l}{\partial\alpha\,\partial\lambda} \\[4pt] -\dfrac{\partial^2 l}{\partial\alpha\,\partial\lambda} & -\dfrac{\partial^2 l}{\partial\lambda^2} \end{pmatrix}^{-1}_{(\hat\alpha,\hat\lambda)} = \begin{pmatrix} V_{11} & V_{12} \\ V_{21} & V_{22} \end{pmatrix}. $$
It is well known that MLEs are asymptotically normally distributed; using this property, we can construct approximate confidence intervals for $\alpha$ and $\lambda$. Since $\hat\alpha$ and $\hat\lambda$ are asymptotically normally distributed, the pivotal quantities
$$ P_1 = \frac{\hat\alpha - \alpha}{\sqrt{V_{11}}}, \qquad P_2 = \frac{\hat\lambda - \lambda}{\sqrt{V_{22}}} $$
are asymptotically standard normal. Using the pivotal quantities $P_1$ and $P_2$, $100(1-\gamma)\%$ asymptotic confidence intervals for $\alpha$ and $\lambda$ based on the MLEs are
$$ \bigl(\hat\alpha - z_{\gamma/2}\sqrt{V_{11}},\ \hat\alpha + z_{\gamma/2}\sqrt{V_{11}}\bigr), \qquad (3.6) $$
and
$$ \bigl(\hat\lambda - z_{\gamma/2}\sqrt{V_{22}},\ \hat\lambda + z_{\gamma/2}\sqrt{V_{22}}\bigr), \qquad (3.7) $$
respectively, where $z_{\gamma/2}$ is the upper $(\gamma/2)$-th percentile of the standard normal distribution.

4 Bayesian Analysis
In this section we compute the Bayes estimates and the associated HPD credible intervals of the shape and scale parameters. For the Bayesian inference we need to assume prior distributions for the unknown parameters. Unfortunately, when both parameters are unknown, no natural conjugate priors exist. In this paper, similarly as in Kundu and Gupta (2008), it is assumed that $\alpha$ and $\lambda$ have the following independent gamma priors:
$$ \pi(\alpha \mid a, b) \propto \alpha^{a-1} e^{-b\alpha}, \quad \alpha > 0, \qquad \pi(\lambda \mid c, d) \propto \lambda^{c-1} e^{-d\lambda}, \quad \lambda > 0. $$
Here all the hyper-parameters $a, b, c, d$ are assumed to be known and non-negative. Based on the above priors, the joint density function of the data, $\alpha$ and $\lambda$ is
$$ L(\text{data}, \alpha, \lambda) = \alpha^{r+a-1}\lambda^{r+c-1}\, e^{-b\alpha - \lambda\left(d + \sum_{i=1}^{r} x_i^{\alpha}\right)} \prod_{i=1}^{r} x_i^{\alpha+1}\,\bigl(1 - e^{-\lambda u^{-\alpha}}\bigr)^{n-r}. \qquad (4.8) $$
Based on $L(\text{data}, \alpha, \lambda)$, we obtain the joint posterior density function of $\alpha$ and $\lambda$ given the data as
$$ \pi(\alpha, \lambda \mid \text{data}) = \frac{L(\text{data}, \alpha, \lambda)}{\int_0^{\infty}\!\int_0^{\infty} L(\text{data}, \alpha, \lambda)\, d\alpha\, d\lambda}. \qquad (4.9) $$
Therefore, the posterior density function of $\alpha$ and $\lambda$ given the data can be written as
$$ \pi(\alpha, \lambda \mid \text{data}) \propto g(\lambda \mid \alpha, \text{data})\, g(\alpha \mid \text{data})\, h(\alpha, \lambda \mid \text{data}), \qquad (4.10) $$
where $g(\lambda \mid \alpha, \text{data})$ is a gamma density function with shape parameter $(r+c)$ and rate parameter $\bigl(d + \sum_{i=1}^{r} x_i^{\alpha}\bigr)$, and $g(\alpha \mid \text{data})$ is a proper density function given by
$$ g(\alpha \mid \text{data}) \propto \frac{\alpha^{a+r-1}\, e^{-b\alpha} \prod_{i=1}^{r} x_i^{\alpha+1}}{\left(d + \sum_{i=1}^{r} x_i^{\alpha}\right)^{r+c}}. \qquad (4.11) $$
Moreover, $h(\alpha, \lambda \mid \text{data}) = \bigl(1 - e^{-\lambda u^{-\alpha}}\bigr)^{n-r}$.

Therefore, the Bayes estimate of any function of $\alpha$ and $\lambda$, say $g(\alpha, \lambda)$, under the squared error loss function is
$$ \hat g_B(\alpha, \lambda) = \frac{\int_0^{\infty}\!\int_0^{\infty} g(\alpha,\lambda)\, g(\lambda \mid \alpha, \text{data})\, g(\alpha \mid \text{data})\, h(\alpha, \lambda \mid \text{data})\, d\alpha\, d\lambda}{\int_0^{\infty}\!\int_0^{\infty} g(\lambda \mid \alpha, \text{data})\, g(\alpha \mid \text{data})\, h(\alpha, \lambda \mid \text{data})\, d\alpha\, d\lambda}. \qquad (4.12) $$
Unfortunately, (4.12) cannot be computed analytically for a general $g(\alpha, \lambda)$. We apply two different approximation methods to evaluate the Bayes estimators of $\alpha$ and $\lambda$.
The first approximation technique is due to Lindley (1980), and the second is an importance sampling procedure as suggested by Chen and Shao (1999). The details are explained below.

4.1 Lindley's Approximation

As noted above, (4.12) cannot be computed explicitly. For this reason, Lindley (1980) proposed an approximation for computing the ratio of two integrals such as (4.12), which has been used by several authors to obtain approximate Bayes estimators. The technique is based on a Taylor series expansion of the integrand around the maximum likelihood estimator; for details, see Lindley (1980) or Press (2001). Other approximations, e.g., the Laplace approximation of Tierney and Kadane (1986), can also be used to evaluate (4.12). Based on Lindley's approximation, the approximate Bayes estimates of $\alpha$ and $\lambda$ under the squared error loss function are, respectively,
$$ \hat\alpha_L = \hat\alpha + \frac{1}{2}\Bigl[ l_{30}\tau_{11}^2 + l_{03}\tau_{21}\tau_{22} + 3\, l_{21}\tau_{11}\tau_{12} + l_{12}\bigl(\tau_{11}\tau_{22} + 2\tau_{21}^2\bigr) \Bigr] + \Bigl(\frac{a-1}{\hat\alpha} - b\Bigr)\tau_{11} + \Bigl(\frac{c-1}{\hat\lambda} - d\Bigr)\tau_{12}, \qquad (4.13) $$
$$ \hat\lambda_L = \hat\lambda + \frac{1}{2}\Bigl[ l_{30}\tau_{12}\tau_{11} + l_{03}\tau_{22}^2 + l_{21}\bigl(\tau_{11}\tau_{22} + 2\tau_{12}^2\bigr) + 3\, l_{12}\tau_{22}\tau_{21} \Bigr] + \Bigl(\frac{a-1}{\hat\alpha} - b\Bigr)\tau_{21} + \Bigl(\frac{c-1}{\hat\lambda} - d\Bigr)\tau_{22}, \qquad (4.14) $$
where $\hat\alpha$ and $\hat\lambda$ are the MLEs of $\alpha$ and $\lambda$, respectively, and $a, b, c, d$ are the known hyper-parameters. The explicit expressions of $\tau_{11}, \tau_{12}, \tau_{21}, \tau_{22}$ and $l_{30}, l_{03}, l_{21}, l_{12}$ are provided in Appendix A.

Although Lindley's approximation yields the Bayes estimates, it does not allow the construction of HPD credible intervals. Therefore, in the next subsection we propose an importance sampling procedure to draw samples from the posterior density function and, in turn, compute the Bayes estimates and construct HPD credible intervals.

4.2 Importance Sampling

Importance sampling is widely used in Bayesian computation. We use it to generate a sample from the posterior density function $\pi(\alpha, \lambda \mid \text{data})$ and then to compute the Bayes estimates and HPD credible intervals. We need the following theorem for further development.

Theorem 1.
The conditional density of $\alpha$ given the data, $g(\alpha \mid \text{data})$, is log-concave.

Proof.

See Appendix B.

Since $g(\alpha \mid \text{data})$ is log-concave, using the idea of Devroye (1984) it is possible to generate a sample from $g(\alpha \mid \text{data})$. Moreover, since $g(\lambda \mid \alpha, \text{data})$ is a gamma density, it is quite simple to generate from $g(\lambda \mid \alpha, \text{data})$. We now provide the importance sampling procedure to compute the Bayes estimates and to construct the credible interval of $g(\alpha, \lambda) = \theta$ (say). Similarly as in Raqab and Madi (2005), a simulation-based consistent estimate of $E(g(\alpha, \lambda)) = E(\theta)$ can be obtained using the algorithm given below.

Algorithm.
Step 1: Generate $\alpha$ from $g(\cdot \mid \text{data})$ using the method developed by Devroye (1984).
Step 2: Generate $\lambda$ from $g(\cdot \mid \alpha, \text{data})$.
Step 3: Repeat Steps 1 and 2 $M$ times to obtain $(\alpha_1, \lambda_1), \ldots, (\alpha_M, \lambda_M)$.
Step 4: An approximate Bayes estimate of $\theta$ under a squared error loss function can be obtained as
$$ \hat g_B(\alpha, \lambda) = \hat\theta = \frac{\frac{1}{M}\sum_{i=1}^{M} \theta_i\, h(\alpha_i, \lambda_i \mid \text{data})}{\frac{1}{M}\sum_{i=1}^{M} h(\alpha_i, \lambda_i \mid \text{data})}. \qquad (4.15) $$
Step 5: Obtain the posterior variance of $g(\alpha, \lambda) = \theta$ as
$$ \hat V(\theta \mid \text{data}) = \frac{\frac{1}{M}\sum_{i=1}^{M} (\theta_i - \hat\theta)^2\, h(\alpha_i, \lambda_i \mid \text{data})}{\frac{1}{M}\sum_{i=1}^{M} h(\alpha_i, \lambda_i \mid \text{data})}. \qquad (4.16) $$

We now obtain the credible interval of $\theta$ using the idea of Chen and Shao (1999). Denote by $\pi(\theta \mid \text{data})$ and $\Pi(\theta \mid \text{data})$ the posterior density and posterior distribution functions of $\theta$, respectively. Also let $\theta^{(\beta)}$ be the $\beta$-th quantile of $\theta$, i.e.,
$$ \theta^{(\beta)} = \inf\{\theta : \Pi(\theta \mid \text{data}) \ge \beta\}, \qquad 0 < \beta < 1. $$
Observe that for a given $\theta^*$, $\Pi(\theta^* \mid \text{data}) = E\bigl[I_{\theta \le \theta^*} \mid \text{data}\bigr]$, where $I_{\theta \le \theta^*}$ is the indicator function. Therefore, a simulation-consistent estimator of $\Pi(\theta^* \mid \text{data})$ can be obtained as
$$ \hat\Pi(\theta^* \mid \text{data}) = \frac{\frac{1}{M}\sum_{i=1}^{M} I_{\theta_i \le \theta^*}\, h(\alpha_i, \lambda_i \mid \text{data})}{\frac{1}{M}\sum_{i=1}^{M} h(\alpha_i, \lambda_i \mid \text{data})}. \qquad (4.17) $$
For $i = 1, \ldots, M$, let $\{\theta_{(i)}\}$ be the ordered values of $\theta_i$, and let
$$ w_i = \frac{h(\alpha_{(i)}, \lambda_{(i)} \mid \text{data})}{\sum_{j=1}^{M} h(\alpha_j, \lambda_j \mid \text{data})} $$
be the associated weights. Then we have
$$ \hat\Pi(\theta^* \mid \text{data}) = \begin{cases} 0, & \theta^* < \theta_{(1)}, \\ \sum_{j=1}^{i} w_j, & \theta_{(i)} \le \theta^* < \theta_{(i+1)}, \\ 1, & \theta^* \ge \theta_{(M)}. \end{cases} \qquad (4.18) $$
Therefore, $\theta^{(\beta)}$ can be approximated by
$$ \hat\theta^{(\beta)} = \begin{cases} \theta_{(1)}, & \beta = 0, \\ \theta_{(i)}, & \sum_{j=1}^{i-1} w_j < \beta \le \sum_{j=1}^{i} w_j. \end{cases} \qquad (4.19) $$
To obtain a $100(1-\beta)\%$ HPD credible interval for $\theta$, consider intervals of the form
$$ R_j = \Bigl( \hat\theta^{(j/M)},\ \hat\theta^{\left((j + [(1-\beta)M])/M\right)} \Bigr) \qquad (4.20) $$
for $j = 1, 2, \ldots, M - [(1-\beta)M]$, where $[a]$ denotes the largest integer less than or equal to $a$. Finally, among all the $R_j$, choose the interval with the smallest length.
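Steps 1-5 and the Chen-Shao construction above can be sketched in code. Purely for illustration, Step 1's Devroye log-concave rejection sampler is replaced here by a simple discretized draw from $g(\alpha \mid \text{data})$ on a grid; the simulated data, the grid bounds, and all names are our own choices.

```python
import numpy as np

rng = np.random.default_rng(11)

# A Type-I hybrid censored sample from IW with alpha = 2, lambda = 1 (Section 2).
n, R, T = 60, 50, 3.0
t = np.sort((-np.log(rng.uniform(size=n))) ** (-0.5))
u = min(t[R - 1], T)
x = 1.0 / t[t <= u]
r = x.size
a = b = c = d = 0.0                                    # Prior 1 (improper)

# Step 1 (grid stand-in): draw alpha from g(alpha | data) of (4.11).
grid = np.linspace(0.2, 6.0, 4000)
logg = ((a + r - 1) * np.log(grid) - b * grid
        + (grid + 1) * np.log(x).sum()
        - (r + c) * np.log(d + (x[None, :] ** grid[:, None]).sum(axis=1)))
p = np.exp(logg - logg.max())
p /= p.sum()
M = 5000
alphas = rng.choice(grid, size=M, p=p)

# Step 2: draw lambda | alpha from its gamma conditional (shape r+c, rate d+sum x_i^a).
rate = d + (x[None, :] ** alphas[:, None]).sum(axis=1)
lams = rng.gamma(shape=r + c, scale=1.0 / rate)

# Steps 3-4: importance weights h and the weighted Bayes estimates (4.15).
w = (1.0 - np.exp(-lams * u ** (-alphas))) ** (n - r)
w /= w.sum()
alpha_B = np.sum(w * alphas)
lam_B = np.sum(w * lams)

# Chen-Shao 95% HPD interval for alpha, per (4.18)-(4.20): scan weighted windows.
order = np.argsort(alphas)
th, ws = alphas[order], w[order]
cum = np.cumsum(ws)
lo, hi = th[0], th[-1]
for j in range(len(th)):
    base = cum[j - 1] if j > 0 else 0.0
    k = np.searchsorted(cum, base + 0.95)
    if k < len(th) and th[k] - th[j] < hi - lo:
        lo, hi = th[j], th[k]
print(alpha_B, lam_B, (lo, hi))
```

For real data, the simulated `t`, `u` and `r` would be replaced by the observed quantities; the grid draw is only a stand-in for Devroye's sampler and inherits its accuracy from the grid resolution.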
5 Simulation Study

In this section we compare the performance of the different methods through a simulation study. We estimate the unknown parameters using the MLEs, the Bayes estimators obtained by Lindley's approximation, and the Bayes estimators obtained by the MCMC technique. The simulation study is carried out for different sample sizes and different choices of $R$ and $T$. For a particular set of hybrid censored data, the MLEs and Bayes estimators are obtained as described before. Both non-informative and informative priors are used for the shape and scale parameters. In the case of the non-informative prior we take $a = b = c = d = 0$; we call it Prior 1. Note that as the hyper-parameters go to zero, the prior density becomes inversely proportional to its argument and also becomes improper. This density is commonly used as an improper prior for parameters ranging over $(0, \infty)$. It should also be mentioned that when $a = b = 0$, $\pi(\alpha \mid a, b)$ is not log-concave, but the posterior density function $g(\alpha \mid \text{data})$ is still log-concave. For the informative prior, we choose $a = 2, b = 1, c = d = 1$; we call it Prior 2. For computing the different point estimators we generated 1000 samples from the IW distribution with $\alpha = 2$ and $\lambda = 1$. The averages and mean squared errors (MSE) of the estimators of $\alpha$ and $\lambda$ are presented in Tables 1 and 2, respectively.

We also compute the 95% asymptotic confidence intervals based on the MLEs. For comparison purposes, we compute the 95% HPD credible intervals from the generated posterior samples. We report the average confidence/credible lengths in Table 3, where the first and second rows of each entry represent the results for $\alpha$ and $\lambda$, respectively.

Some points are quite clear from Tables 1 and 2. It is observed that the approximate Bayes estimators of the unknown parameters based on Lindley's approximation match quite well with the Bayes estimators obtained by the MCMC method.
In most of the cases, the Bayes estimates of $\lambda$ obtained via Lindley's approximation under Prior 1 perform better than the MLEs of $\lambda$, while for $\alpha$ it is the other way around. Under Prior 2, however, the Bayes estimates of $(\alpha, \lambda)$ obtained via Lindley's approximation perform marginally better than the MLEs in all cases considered. It is also observed that in most cases the performance, in terms of average bias and MSE, of the Bayes estimates obtained by the MCMC procedure under Prior 1 is close to the corresponding behaviour of the MLEs or of the Bayes estimates obtained by Lindley's approximation. When the informative prior (Prior 2) is used, the performance of the Bayes estimates obtained by MCMC is much better than that of the other estimates.

Table 1: The average estimates (A.E.) and the associated mean squared errors (MSE) for α.

(n, T)     R          MLE      Bayes (Lindley)      Bayes (MCMC)
                               Prior 1   Prior 2    Prior 1   Prior 2
(30, 1.5)  20   A.E   2.1113   2.1789    2.1068     2.1249    2.0919
                MSE   0.1848   0.2078    0.1646     0.1325    0.0953
           25   A.E   2.1091   2.0665    2.0948     2.1119    2.0992
                MSE   0.1503   0.1382    0.1035     0.1463    0.1210
           30   A.E   2.1057   2.1356    2.0855     2.1062    2.0826
                MSE   0.1519   0.1638    0.1231     0.1211    0.1163
(30, 2.5)  20   A.E   2.1394   2.1244    2.0993     2.1253    2.1256
                MSE   0.1696   0.2187    0.1453     0.1218    0.0671
           25   A.E   2.0999   2.1224    2.0851     2.0687    2.0639
                MSE   0.1317   0.1708    0.1287     0.1054    0.0261
           30   A.E   2.0996   2.0466    2.1142     1.9790    1.9985
                MSE   0.1306   0.3042    0.1625     0.1142    0.0184
(50, 1.5)  35   A.E   2.0586   2.0540    2.0209     2.0495    2.0262
                MSE   0.0820   0.0740    0.0727     0.0667    0.0318
           40   A.E   2.0633   2.0738    2.0604     2.0424    1.9912
                MSE   0.0791   0.1017    0.0752     0.0520    0.0275
           50   A.E   2.0623   2.0822    2.0609     2.0136    2.0124
                MSE   0.0764   0.0847    0.0809     0.0709    0.0152
(50, 2.5)  35   A.E   2.0793   2.0795    2.0430     2.0196    2.0149
                MSE   0.0747   0.0741    0.0649     0.0481    0.0032
           40   A.E   2.0682   2.0799    2.0847     2.0085    2.0068
                MSE   0.0737   0.0722    0.0839     0.0296    0.0017
           50   A.E   2.0287   2.0525    2.0376     2.0013    2.0008
                MSE   0.0691   0.0891    0.0547     0.0154    0.0006
Therefore, if prior information is available, we should use the Bayes estimates; otherwise, the MLEs may be used to avoid the computational cost.

For all the methods and for both parameters, it is observed that, for fixed $n$, as $R$ or $T$ increases the average biases and the MSEs decrease in most cases, which supports the consistency properties of the estimates.

Now, considering the confidence and credible intervals, it is observed that the asymptotic results for the MLEs work quite well. The average confidence lengths are quite close to the average credible lengths, mainly for large $n$ and $R$; in most of the cases, however, the average lengths of the credible intervals are slightly shorter than those of the confidence intervals. From Table 3 it is observed that the results obtained using the informative prior are not significantly different from the corresponding results obtained using the non-informative (improper) prior. Finally, note that the Bayes estimates are the most computationally expensive, followed by the MLEs.

Table 2: The average estimates (A.E.) and the associated mean squared errors (MSE) for λ.

(n, T)     R          MLE      Bayes (Lindley)      Bayes (MCMC)
                               Prior 1   Prior 2    Prior 1   Prior 2
(30, 1.5)  20   A.E   1.0464   0.9793    1.0119     1.0570    1.0537
                MSE   0.0514   0.0456    0.0472     0.0528    0.0425
           25   A.E   1.0298   1.0164    1.0170     0.9971    0.9993
                MSE   0.0487   0.0406    0.0335     0.0452    0.0418
           30   A.E   1.0264   1.0122    1.0216     1.0371    1.0283
                MSE   0.0461   0.0526    0.0418     0.0487    0.0395
(30, 2.5)  20   A.E   1.0107   1.0326    1.0071     1.0203    1.0237
                MSE   0.0467   0.0401    0.0445     0.0392    0.0359
           25   A.E   1.0275   1.0055    1.0449     1.0159    0.9787
                MSE   0.0451   0.0591    0.0774     0.0309    0.0287
           30   A.E   1.0282   1.0673    1.0342     0.9958    0.9864
                MSE   0.0437   0.0504    0.0418     0.0377    0.0261
(50, 1.5)  35   A.E   1.0120   1.0002    1.0167     1.0423    1.0337
                MSE   0.0284   0.0258    0.0229     0.0276    0.0246
           40   A.E   1.0153   0.9944    1.0144     1.0334    1.0221
                MSE   0.0271   0.0243    0.0226     0.0262    0.0241
           50   A.E   1.0267   1.0131    1.0175     1.0170    1.0263
                MSE   0.0266   0.0230    0.0278     0.0243    0.0240
(50, 2.5)  35   A.E   1.0159   1.0032    1.0013     1.0446    1.0373
                MSE   0.0260   0.0287    0.0259     0.0216    0.0207
           40   A.E   1.0221   1.0037    0.9903     1.0245    1.0327
                MSE   0.0280   0.0255    0.0482     0.0219    0.0135
           50   A.E   1.0228   1.0322    1.0253     1.0087    1.0074
                MSE   0.0262   0.0416    0.0261     0.0204    0.0018
6 Data Analysis

In this section, we consider the following two examples to illustrate the use of the estimation methods proposed in the previous sections.
Example 1.
Consider the data giving the maximum flood levels (in millions of cubic feet per second) of the Susquehanna River at Harrisburg, Pennsylvania, over 20 four-year periods (1890-1969).

Before proceeding, we first want to check whether the IW distribution fits these data or not. For this purpose, we have used the complete data. The MLEs and Bayes estimates of $(\alpha, \theta)$ based on the complete sample are (4.3143, 2.7905) and (4.1861, 2.7657), respectively. The Kolmogorov-Smirnov distance between the empirical distribution function and the fitted distribution function, with the parameters estimated by maximum likelihood, and the associated $p$-value are 0.1060 and 0.8557, respectively. Since the $p$-value is quite high, we cannot reject the null hypothesis that the data come from the IW distribution. The empirical and fitted cumulative distribution functions are plotted in Figure 1; the plot shows that the IW distribution fits the data very well.

Table 3: The average confidence/credible lengths for the MLE and Bayes estimates of α (first row of each entry) and λ (second row).

(n, T)     R      MLE      Bayes (MCMC)
                           Prior 1   Prior 2
(30, 1.5)  20     1.4718   1.3986    1.3864
                  0.8106   0.8230    0.7918
           25     1.4338   1.3891    1.3828
                  0.7978   0.7663    0.7565
           30     1.4422   1.3911    1.3872
                  0.7996   0.7508    0.7500
(30, 2.5)  20     1.4384   1.4024    1.3835
                  0.7891   0.8013    0.7812
           25     1.2730   1.1916    1.2014
                  0.7873   0.7818    0.7416
           30     1.2533   1.1885    1.1762
                  0.7788   0.7417    0.7469
(50, 1.5)  35     1.1985   1.1842    1.1873
                  0.6144   0.6327    0.5990
           40     1.0917   1.0916    1.0913
                  0.6147   0.5986    0.5295
           50     1.0889   1.0862    1.0885
                  0.6164   0.6069    0.6014
(50, 2.5)  35     1.0450   1.0381    1.0399
                  0.6022   0.5499    0.5427
           40     0.9934   1.0057    0.9831
                  0.5968   0.5259    0.5082
           50     0.9483   0.9129    0.9210
                  0.5958   0.5024    0.5169

We have artificially created two Type-I hybrid censored data sets from the above complete data set, using the following censoring schemes:
Scheme 1: R = 18, T = 0.… .

In this scheme, it is observed that the $R$-th failure does not take place before the time point $T$, so the hybrid censored sample consists of the failures observed before $T$. From these sample data, the MLEs of $\alpha$ and $\theta$ are 4.2726 and 2.6565, respectively. Since we do not have any prior information available, we use the non-informative priors, i.e., $a = b = c = d = 0$, on both $\alpha$ and $\lambda$ to compute the Bayes estimates. Using the algorithm of Section 4.2, we generate 1000 MCMC samples, and based on them we compute the Bayes estimates of $\alpha$ and $\theta$ as 4.5665 and 2.8148, respectively. The 95% asymptotic confidence intervals of $\alpha$ and $\theta$ based on the empirical Fisher information matrix are (2.7207, 5.8244) and (2.3623, 2.9507), respectively. Moreover, the 95% HPD credible intervals of $\alpha$ and $\theta$ are (2.4603, 5.2454) and (2.2977, 2.8928), respectively.

Figure 1: The empirical and fitted distribution functions (maximum flood level vs. cumulative probability).

Scheme 2: R = 14, T = 0.… .

In this scheme, it is observed that the $R$-th failure took place before $T$, so the hybrid censored sample consists of the first $R$ failures. Based on this sample, the MLEs and Bayes estimates of $\alpha$ and $\theta$ are (3.6933, 2.5446) and (3.8607, 2.7158), respectively. The 95% asymptotic confidence intervals of $\alpha$ and $\theta$ are (2.2233, 5.1635) and (2.2088, 2.8804), respectively. We also compute the 95% HPD credible intervals of $\alpha$ and $\theta$, which are (2.3481, 5.2153) and (2.2718, 3.0145), respectively.

Example 2.

In this example we consider the data given by Bjerkedal (1960), representing the survival times (in days) of guinea pigs injected with different doses of tubercle bacilli. The regimen number is the common logarithm of the number of bacillary units in 0.5 ml of challenge solution; i.e., regimen 6.6 corresponds to $4.0 \times 10^6$ bacillary units per 0.5 ml ($\log(4.0 \times 10^6) = 6.6$). Corresponding to regimen 6.6, there were 72 observations, listed below:
12 15 22 24 24 32 32 33 34 38 38 43 44 48 52 53 54 54 55 56 57 58 58 59 60 60 60 60 61 62 63 65 65 67 68 70 70 72 73 75 76 76 81 83 84 85 87 91 95 96 98 99 109 110 121 127 129 131 143 146 146 175 175 211 233 258 258 263 297 341 341 376.

For these data, $\hat\alpha = 1.…$ and $\hat\theta = 0.…$ . We created two artificial Type-I hybrid censored data sets using Scheme 1: R = 50, T = 90, and Scheme 2: R = 60, T = 150. For Scheme 1, the hybrid censored sample is:
12 15 22 24 24 32 32 33 34 38 38 43 44 48 52 53 54 54 55 56 57 58 58 59 60 60 60 60 61 62 63 65 65 67 68 70 70 72 73 75 76 76 81 83 84 85 87.
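As a cross-check on the numbers reported next, the hybrid censored MLEs for this Scheme 1 sample (here $n = 72$, $R = 50$, $T = 90$, so the test stops at $u = T = 90$ with $r = 47$ observed failures) can be recomputed by maximizing (2.4) numerically. The sketch below is our own: it optimizes $\lambda$ on the log scale for numerical stability and recovers $\theta$ via $\lambda = \theta^{-\alpha}$.

```python
import numpy as np
from scipy.optimize import minimize

# Scheme 1 hybrid censored sample: the r = 47 failures observed before u = T = 90.
obs = np.array([12, 15, 22, 24, 24, 32, 32, 33, 34, 38, 38, 43, 44, 48, 52, 53,
                54, 54, 55, 56, 57, 58, 58, 59, 60, 60, 60, 60, 61, 62, 63, 65,
                65, 67, 68, 70, 70, 72, 73, 75, 76, 76, 81, 83, 84, 85, 87], float)
n, u = 72, 90.0
r = obs.size
x = 1.0 / obs                          # x_i = 1 / t_(i), as in (2.3)

def negloglik(par):
    """Negative log-likelihood (2.4), with lambda parametrized on the log scale."""
    a, loglam = par
    if a <= 0:
        return np.inf
    lam = np.exp(loglam)
    ll = (r * np.log(a * lam) - lam * np.sum(x ** a)
          + (a + 1) * np.sum(np.log(x))
          + (n - r) * np.log1p(-np.exp(-lam * u ** (-a))))
    return -ll

res = minimize(negloglik, x0=[1.0, np.log(r / x.sum())], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-10})
alpha_hat = res.x[0]
lam_hat = np.exp(res.x[1])
theta_hat = lam_hat ** (-1.0 / alpha_hat)   # back to theta via lambda = theta**(-alpha)
print(alpha_hat, theta_hat)
```

The text below reports $\hat\alpha = 1.3272$ and $\hat\theta = 0.0178$ for this sample, so the printed values should be of that order.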
For this data set, the MLEs of $\alpha$ and $\theta$ are 1.3272 and 0.0178, respectively. We compute the Bayes estimates of $\alpha$ and $\theta$ with respect to the assumed non-informative priors as 1.4736 and 0.0142, respectively. The 95% asymptotic confidence intervals of $\alpha$ and $\theta$ are (1.0569, 1.5779) and (0.0140, 0.0212), respectively. Similarly, the 95% HPD credible intervals of $\alpha$ and $\theta$ are (1.2604, 1.6868) and (0.0106, 0.0177), respectively.

Now for Scheme 2, it is observed that the $R$-th failure took place before $T$. In this case, the hybrid sample is:
12 15 22 24 24 32 32 33 34 38 38 43 44 48 52 53 54 54 55 56 57 58 58 59 60 60 60 60 61 62 63 65 65 67 68 70 70 72 73 75 76 76 81 83 84 85 87 91 95 96 98 99 109 110 121 127 129 131 143 146.
Based on this sample, the MLEs and Bayes estimates of $\alpha$ and $\theta$ are (1.3688, 0.0182) and (1.4599, 0.0158), respectively. The 95% asymptotic confidence intervals of $\alpha$ and $\theta$ are (1.1846, 1.6447) and (0.0152, 0.0216), respectively. We also compute the 95% HPD credible intervals of $\alpha$ and $\theta$ as (1.3426, 1.5763) and (0.0135, 0.0181), respectively.

7 Conclusion

In this paper we considered the classical and Bayesian inference of the inverse Weibull distribution based on Type-I hybrid censored data. The maximum likelihood estimators of the parameters can only be obtained by an iterative procedure, and hence Bayesian inference seems to be a natural choice for the analysis of such survival data. The prior belief about the model was represented by independent gamma priors on the shape and scale parameters. The squared error loss function was used, as it is appropriate when large estimation errors are considered to be more serious. It was observed that the Bayes estimators and the HPD credible intervals cannot be obtained in explicit form. We proposed two approximations which can be implemented very easily. We compared the performance of the Bayes estimators with the MLEs by Monte Carlo simulations, and it was observed that the performances are quite satisfactory.

Appendix A
For the two-parameter case, using the notation $(\lambda_1, \lambda_2) = (\alpha, \lambda)$, Lindley's approximation can be written as
$$ \hat g = g(\hat\lambda_1, \hat\lambda_2) + \frac{1}{2}\Bigl[ A + l_{30} B_{12} + l_{03} B_{21} + l_{21} C_{12} + l_{12} C_{21} \Bigr] + p_1 A_{12} + p_2 A_{21}, $$
where
$$ A = \sum_{i=1}^{2}\sum_{j=1}^{2} w_{ij}\,\tau_{ij}, \qquad l_{ij} = \frac{\partial^{i+j} L(\lambda_1, \lambda_2)}{\partial\lambda_1^{i}\,\partial\lambda_2^{j}}, \quad i, j = 0, 1, 2, 3, \ i + j = 3, $$
$$ p_i = \frac{\partial p}{\partial\lambda_i}, \qquad w_i = \frac{\partial g}{\partial\lambda_i}, \qquad w_{ij} = \frac{\partial^2 g}{\partial\lambda_i\,\partial\lambda_j}, \qquad p = \ln \pi(\lambda_1, \lambda_2), $$
$$ A_{ij} = w_i\tau_{ii} + w_j\tau_{ji}, \qquad B_{ij} = (w_i\tau_{ii} + w_j\tau_{ij})\tau_{ii}, \qquad C_{ij} = 3 w_i\tau_{ii}\tau_{ij} + w_j\bigl(\tau_{ii}\tau_{jj} + 2\tau_{ij}^2\bigr). $$
Here $L(\cdot,\cdot)$ is the log-likelihood function of the observed data, $\pi(\lambda_1, \lambda_2)$ is the joint prior density function of $(\lambda_1, \lambda_2)$, and $\tau_{ij}$ is the $(i,j)$-th element of the inverse of the observed Fisher information matrix. Moreover, $\hat\lambda_1$ and $\hat\lambda_2$ are the MLEs of $\lambda_1$ and $\lambda_2$, respectively, and all the quantities are evaluated at $(\hat\lambda_1, \hat\lambda_2)$.

From the log-likelihood (2.4) we obtain the third derivatives, all evaluated at $(\hat\alpha, \hat\lambda)$:
$$ l_{03} = \frac{2r}{\lambda^3} + \frac{(n-r)\, u^{-3\alpha}\, e^{-\lambda u^{-\alpha}}}{1 - e^{-\lambda u^{-\alpha}}} + \frac{3(n-r)\, u^{-3\alpha}\, e^{-2\lambda u^{-\alpha}}}{\bigl(1 - e^{-\lambda u^{-\alpha}}\bigr)^2} + \frac{2(n-r)\, u^{-3\alpha}\, e^{-3\lambda u^{-\alpha}}}{\bigl(1 - e^{-\lambda u^{-\alpha}}\bigr)^3}, $$
$$ l_{30} = \frac{2r}{\alpha^3} - \lambda\sum_{i=1}^{r} x_i^{\alpha}(\ln x_i)^3 - \frac{(n-r)\,\lambda u^{-\alpha}(\ln u)^3\, e^{-\lambda u^{-\alpha}}\bigl(1 - 3\lambda u^{-\alpha} + \lambda^2 u^{-2\alpha}\bigr)}{1 - e^{-\lambda u^{-\alpha}}} + \frac{3(n-r)\,\lambda^2 u^{-2\alpha}(\ln u)^3\bigl(1 - \lambda u^{-\alpha}\bigr) e^{-2\lambda u^{-\alpha}}}{\bigl(1 - e^{-\lambda u^{-\alpha}}\bigr)^2} - \frac{2(n-r)\,\lambda^3 u^{-3\alpha}(\ln u)^3\, e^{-3\lambda u^{-\alpha}}}{\bigl(1 - e^{-\lambda u^{-\alpha}}\bigr)^3}, $$
$$ l_{12} = \frac{(n-r)\, u^{-2\alpha}\ln(u)\, e^{-\lambda u^{-\alpha}}\bigl(2 - \lambda u^{-\alpha}\bigr)}{1 - e^{-\lambda u^{-\alpha}}} - \frac{(n-r)\,\lambda u^{-3\alpha}\ln(u)\, e^{-2\lambda u^{-\alpha}}}{\bigl(1 - e^{-\lambda u^{-\alpha}}\bigr)^2} + \frac{(n-r)\, u^{-2\alpha}\ln(u)\, e^{-2\lambda u^{-\alpha}}\bigl(2 - 2\lambda u^{-\alpha}\bigr)}{\bigl(1 - e^{-\lambda u^{-\alpha}}\bigr)^2} - \frac{2(n-r)\,\lambda u^{-3\alpha}\ln(u)\, e^{-3\lambda u^{-\alpha}}}{\bigl(1 - e^{-\lambda u^{-\alpha}}\bigr)^3}, $$
$$ l_{21} = -\sum_{i=1}^{r} x_i^{\alpha}(\ln x_i)^2 + \frac{(n-r)\, u^{-\alpha}(\ln u)^2\, e^{-\lambda u^{-\alpha}}\bigl(1 - 3\lambda u^{-\alpha} + \lambda^2 u^{-2\alpha}\bigr)}{1 - e^{-\lambda u^{-\alpha}}} - \frac{3(n-r)\,\lambda u^{-2\alpha}(\ln u)^2\bigl(1 - \lambda u^{-\alpha}\bigr) e^{-2\lambda u^{-\alpha}}}{\bigl(1 - e^{-\lambda u^{-\alpha}}\bigr)^2} + \frac{2(n-r)\,\lambda^2 u^{-3\alpha}(\ln u)^2\, e^{-3\lambda u^{-\alpha}}}{\bigl(1 - e^{-\lambda u^{-\alpha}}\bigr)^3}. $$
The elements of the observed Fisher information matrix are obtained in Section 3, and $\tau_{ij} = V_{ij}$, $i, j = 1, 2$.
Now, when $g(\alpha, \lambda) = \alpha$, we have $w_1 = 1$, $w_2 = 0$ and $w_{ij} = 0$ for $i, j = 1, 2$. Therefore,
$$ A = 0, \quad B_{12} = \tau_{11}^2, \quad B_{21} = \tau_{21}\tau_{22}, \quad C_{12} = 3\tau_{11}\tau_{12}, \quad C_{21} = \tau_{11}\tau_{22} + 2\tau_{21}^2, \quad A_{12} = \tau_{11}, \quad A_{21} = \tau_{12}, $$
and the first part of Lindley's approximation, (4.13), follows by using
$$ p_1 = \frac{a-1}{\hat\alpha} - b, \qquad p_2 = \frac{c-1}{\hat\lambda} - d. $$
For the second part, note that $g(\alpha, \lambda) = \lambda$ gives $w_1 = 0$, $w_2 = 1$ and $w_{ij} = 0$ for $i, j = 1, 2$. Therefore,
$$ A = 0, \quad B_{12} = \tau_{12}\tau_{11}, \quad B_{21} = \tau_{22}^2, \quad C_{12} = \tau_{11}\tau_{22} + 2\tau_{12}^2, \quad C_{21} = 3\tau_{22}\tau_{21}, \quad A_{12} = \tau_{21}, \quad A_{21} = \tau_{22}, $$
and the second part, (4.14), follows immediately.

Appendix B
The conditional density of $\alpha$ given the data is
$$ g(\alpha \mid \text{data}) \propto \frac{\alpha^{a+r-1}\, e^{-b\alpha} \prod_{i=1}^{r} x_i^{\alpha+1}}{\left(d + \sum_{i=1}^{r} x_i^{\alpha}\right)^{r+c}}. $$
The logarithm of $g(\alpha \mid \text{data})$, without the additive constant, is
$$ \ln g(\alpha \mid \text{data}) = -(r + c)\ln\Bigl(d + \sum_{i=1}^{r} x_i^{\alpha}\Bigr) + (a + r - 1)\ln\alpha - b\alpha + (\alpha + 1)\sum_{i=1}^{r}\ln x_i, $$
so that
$$ \frac{d^2}{d\alpha^2}\ln g(\alpha \mid \text{data}) = -(r + c)\,\frac{d^2}{d\alpha^2}\ln\Bigl(\sum_{i=1}^{r} x_i^{\alpha} + d\Bigr) - \frac{a + r - 1}{\alpha^2}. $$
The map $\alpha \mapsto \ln\bigl(\sum_{i=1}^{r} x_i^{\alpha} + d\bigr)$ is the logarithm of a sum of exponentials of linear functions of $\alpha$ (plus the constant $d$) and is therefore convex, i.e.,
$$ \frac{d^2}{d\alpha^2}\ln\Bigl(\sum_{i=1}^{r} x_i^{\alpha} + d\Bigr) \ge 0. $$
Since also $a + r - 1 \ge 0$, the second derivative of $\ln g(\alpha \mid \text{data})$ is non-positive. Therefore the result follows.
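The log-concavity can also be checked numerically on simulated data; in the sketch below (the simulated setup, grid and names are our own choices), all second differences of $\ln g(\alpha \mid \text{data})$ on a grid should be non-positive.

```python
import numpy as np

rng = np.random.default_rng(5)

# A hybrid censored IW sample (as in Section 2) and Prior 1 hyper-parameters.
n, R, T = 60, 50, 3.0
t = np.sort((-np.log(rng.uniform(size=n))) ** (-0.5))   # alpha = 2, lambda = 1
u = min(t[R - 1], T)
x = 1.0 / t[t <= u]
r = x.size
a = b = c = d = 0.0

def log_g(al):
    """log g(alpha | data) up to an additive constant (Appendix B)."""
    return ((a + r - 1) * np.log(al) - b * al
            + (al + 1) * np.log(x).sum()
            - (r + c) * np.log(d + np.sum(x ** al)))

grid = np.linspace(0.2, 8.0, 2000)
vals = np.array([log_g(al) for al in grid])
second_diff = vals[2:] - 2 * vals[1:-1] + vals[:-2]      # discrete curvature
print(second_diff.max())                                 # should be <= 0 (concavity)
```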
References

[1] Ateya, S. F. (2015). Estimation under inverse Weibull distribution based on Balakrishnan's unified hybrid censoring scheme, Communications in Statistics - Simulation and Computation, DOI: 10.1080/03610918.2015.1099666.
[2] Balakrishnan, N. and Shafay, A. R. (2012). One- and two-sample Bayesian prediction intervals based on Type-II hybrid censored data, Communications in Statistics - Theory and Methods, 41, 1511-1531.
[3] Banerjee, A. and Kundu, D. (2008). Inference based on Type-II hybrid censored data from a Weibull distribution, IEEE Transactions on Reliability, 57, 369-378.
[4] Bjerkedal, T. (1960). Acquisition of resistance in guinea pigs infected with different doses of virulent tubercle bacilli, American Journal of Hygiene, 72, 130-148.
[5] Chen, M. H. and Shao, Q. M. (1999). Monte Carlo estimation of Bayesian credible and HPD intervals, Journal of Computational and Graphical Statistics, 8, 69-92.
[6] Devroye, L. (1984). A simple algorithm for generating random variates with a log-concave density function, Computing, 33, 247-257.
[7] Dumonceaux, R. and Antle, C. E. (1973). Discrimination between the lognormal and Weibull distributions, Technometrics, 15, 923-926.
[8] Epstein, B. (1954). Truncated life tests in the exponential case, Annals of Mathematical Statistics, 25, 555-564.
[9] Gupta, R. D. and Kundu, D. (1998). Hybrid censoring schemes with exponential failure distribution, Communications in Statistics - Theory and Methods, 27, 3065-3083.
[10] Hyun, S., Lee, J. and Robert, Y. (2016). Parameter estimation of Type-I and Type-II hybrid censored data from the log-logistic distribution, Industrial and Systems Engineering Review, 4, 37-44.
[11] Kundu, D. and Pradhan, B. (2009). Estimating the parameters of the generalized exponential distribution in presence of hybrid censoring, Communications in Statistics - Theory and Methods, 38, 2030-2041.
[12] Kundu, D. (2007). On hybrid censored Weibull distribution, Journal of Statistical Planning and Inference, 137, 2127-2142.
[13] Kundu, D. and Howlader, H. (2010). Bayesian inference and prediction of the inverse Weibull distribution for Type-II censored data, Computational Statistics and Data Analysis, 54, 1547-1558.
[14] Lindley, D. V. (1980). Approximate Bayesian methods, Trabajos de Estadistica, 31, 223-237.
[15] Press, S. J. (2001). The Subjectivity of Scientists and the Bayesian Approach, Wiley, New York.
[16] Raqab, M. Z. and Madi, M. T. (2005). Bayesian inference for the generalized exponential distribution, Journal of Statistical Computation and Simulation, 75, 841-852.
[17] Rastogi, M. K. and Tripathi, Y. M. (2013). Inference on unknown parameters of a Burr distribution under hybrid censoring, Statistical Papers, 54, 619-643.
[18] Singh, S. and Tripathi, Y. M. (2015). Bayesian estimation and prediction for a hybrid censored lognormal distribution, IEEE Transactions on Reliability, DOI: 10.1109/TR.2015.2494370.
[19] Singh, S. K., Singh, U. and Sharma, V. K. (2013). Bayesian analysis for Type-II hybrid censored sample from inverse Weibull distribution, International Journal of System Assurance Engineering and Management, 4(3), 241-248.
[20] Tripathi, Y. M. and Rastogi, M. K. (2015). Estimation using hybrid censored data from a generalized inverted exponential distribution, Communications in Statistics - Theory and Methods, DOI: 10.1080/03610926.2014.932805.
[21] Tierney, L. and Kadane, J. B. (1986). Accurate approximations for posterior moments and marginal densities, Journal of the American Statistical Association, 81, 82-86.