Estimation of parameters of the logistic exponential distribution under progressive type-I hybrid censored sample
Subhankar Dutta* and Suchandan Kayal†
Department of Mathematics, National Institute of Technology Rourkela, Rourkela-769008, India
*Email address: [email protected]
†Email address (corresponding author): [email protected], [email protected]
Abstract
The paper addresses the problem of estimation of the model parameters of the logistic exponential distribution based on a progressive type-I hybrid censored sample. The maximum likelihood estimates are obtained and computed numerically using the Newton-Raphson method. Further, the Bayes estimates are derived under the squared error, LINEX and generalized entropy loss functions. Two types of prior distributions (independent and bivariate) are considered for the purpose of Bayesian estimation. It is seen that the Bayes estimates are not of explicit form. Thus, Lindley's approximation technique is employed to obtain approximate Bayes estimates. Interval estimates of the parameters are constructed based on the normal approximation of the maximum likelihood estimates and the normal approximation of the log-transformed maximum likelihood estimates. The highest posterior density credible intervals are obtained by using the importance sampling method. Furthermore, numerical computations are reported to review some of the results obtained in the paper. A real life dataset is considered for the purpose of illustration.
Keywords:
Newton-Raphson method; Bayes estimates; Independent priors; Bivariate prior; Lindley's approximation technique; Highest posterior density credible interval; Importance sampling method; Mean squared error; Coverage probability.
Introduction

Censoring was introduced in practice to save time and to reduce the number of failed units associated with a life-testing experiment. The concept of censoring is very common in reliability and survival studies. The type-I and type-II censoring schemes are two fundamental schemes. Suppose n units are placed to conduct a life-testing experiment. In the type-I censoring scheme, it is assumed that the experiment runs up to a prefixed time, say T. Here, the random number of failures, say m < n, in the time interval [0, T] is observed. For the case of the type-II censoring scheme, the experiment continues till a predefined number of failures, say m. Thus, clearly the time, say T, at which the experiment stops is random. Epstein (1954) introduced a censoring scheme which is a mixture of the type-I and type-II schemes. This is dubbed the hybrid censoring scheme in the literature. The main drawback of the hybrid censoring scheme is that it does not have the flexibility of removing experimental units before the experiment terminates. However, the progressive type-II censoring scheme has such flexibility. Due to this virtue, the progressive type-II censoring design has been widely utilized to analyse lifetime data. But, these days, the components of a system are highly reliable because of advanced technology. As a result, in the progressive type-II censoring scheme, the duration of the total experiment may be very long. To overcome this difficulty, Kundu and Joarder (2006) introduced a general censoring scheme, known as the progressive type-I hybrid censoring scheme. Note that the progressive type-I hybrid censoring scheme is a mixture of the hybrid and progressive type-II censoring schemes. The progressive type-I hybrid censoring scheme is described briefly in the following.

Suppose an experiment with n identical components has started to work.
The random lifetimes of the n components are denoted by X_1, ..., X_n. An integer m (< n) and the time point T are fixed before the experiment starts. At the time of the first failure, say X_{1:m:n}, R_1 of the surviving units are randomly taken out. At the time of the second failure, say X_{2:m:n}, R_2 of the surviving units are randomly removed, and similarly for the third failure, fourth failure and so on. Now, assume that the m-th failure occurs at time X_{m:m:n}, which is less than T. Then, the experiment stops at X_{m:m:n}. Otherwise, let the m-th failure occur after the time point T, so that only j (0 ≤ j < m) failures occur before T. Then, at T, the remaining R*_j = n − R_1 − R_2 − ... − R_j − j units are removed and the experiment terminates. These two cases are named Case-A and Case-B. This censoring scheme is known as the progressive type-I hybrid censoring scheme. For details on this censoring design, one may refer to Kundu and Joarder (2006). Thus, for the progressive type-I hybrid censored sample, one has either Case-A or Case-B, where

Case-A: {X_{1:m:n}, ..., X_{m:m:n}}, if X_{m:m:n} < T;
Case-B: {X_{1:m:n}, ..., X_{j:m:n}}, if X_{j:m:n} < T < X_{j+1:m:n}.

For Case-B, it is noted that X_{j+1:m:n} is not observed; the condition X_{j:m:n} < T < X_{j+1:m:n} only means that the (j+1)-th failure would have occurred beyond T.
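The two cases above can be simulated directly: generate a progressive type-II censored sample by the standard Balakrishnan-Sandhu uniform-transformation algorithm and then truncate it at T. A minimal Python sketch follows; the quantile function assumes the logistic exponential CDF F(x) = 1 − [1 + (e^{λx} − 1)^α]^{−1} used later in the paper, and the function names are illustrative:

```python
import numpy as np

def logistic_exp_quantile(u, alpha, lam):
    # Inverse of F(x) = 1 - [1 + (e^{lam x} - 1)^alpha]^{-1} (assumed CDF (1.2)).
    return np.log1p((u / (1.0 - u)) ** (1.0 / alpha)) / lam

def progressive_type1_hybrid(n, m, R, T, alpha, lam, rng):
    """One progressive type-I hybrid censored sample; returns (failures, case)."""
    assert len(R) == m and n == m + sum(R)
    # Balakrishnan-Sandhu algorithm for progressive type-II uniform order statistics.
    U = rng.uniform(size=m)
    W = np.array([U[i] ** (1.0 / (i + 1 + sum(R[m - i - 1:]))) for i in range(m)])
    V = 1.0 - np.cumprod(W[::-1])          # V_1 < V_2 < ... < V_m
    x = logistic_exp_quantile(V, alpha, lam)
    if x[-1] < T:                          # Case-A: m-th failure occurs before T
        return x, "A"
    return x[x < T], "B"                   # Case-B: only the j failures before T
```

The returned label distinguishes Case-A from Case-B; in Case-B the removal R*_j at T is implicit since all unfailed units are withdrawn.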
Maximum likelihood estimation

This section focuses on the maximum likelihood estimation of the unknown model parameters α and λ of the logistic exponential distribution with CDF (1.2) based on the progressive type-I hybrid censored sample. Given the observed data, the likelihood function of α and λ can be written as follows:

Case-A: L(α, λ) ∝ ∏_{i=1}^{m} f(x_{i:m:n}) [1 − F(x_{i:m:n})]^{R_i},   (2.1)
Case-B: L(α, λ) ∝ ∏_{i=1}^{j} f(x_{i:m:n}) [1 − F(x_{i:m:n})]^{R_i} [1 − F(T)]^{R*_j},   (2.2)

where m ≤ n is a prefixed integer and R*_j = n − R_1 − R_2 − ... − R_j − j. Henceforth, for notational convenience, x_i is used for x_{i:m:n}. The Eqs. (2.1) and (2.2) can be combined as

L(α, λ) ∝ ∏_{i=1}^{D} f(x_i) [1 − F(x_i)]^{R_i} [1 − F(T)]^{R*_D},   (2.3)

where for Case-A, D = m, R*_D = 0, and for Case-B, D = j, R*_D = n − R_1 − R_2 − ... − R_D − D. Now, using (1.1) and (1.2), the likelihood function in (2.3) becomes

L(α, λ) ∝ α^D λ^D e^{λ Σ_{i=1}^{D} x_i} ∏_{i=1}^{D} (e^{λx_i} − 1)^{α−1} [1 + (e^{λx_i} − 1)^α]^{−(R_i+2)} [1 + (e^{λT} − 1)^α]^{−R*_D}.   (2.4)

Thus, the log-likelihood function is

log L(α, λ) = D log α + D log λ + λ Σ_{i=1}^{D} x_i + (α − 1) Σ_{i=1}^{D} log(e^{λx_i} − 1) − Σ_{i=1}^{D} (R_i + 2) log[1 + (e^{λx_i} − 1)^α] − R*_D log[1 + (e^{λT} − 1)^α].   (2.5)

Now, differentiating (2.5) with respect to α and λ, and equating to zero, the likelihood equations are obtained as

∂ log L/∂α = D/α + Σ_{i=1}^{D} log(e^{λx_i} − 1) − Σ_{i=1}^{D} (R_i + 2) (e^{λx_i} − 1)^α log(e^{λx_i} − 1) / [1 + (e^{λx_i} − 1)^α] − R*_D (e^{λT} − 1)^α log(e^{λT} − 1) / [1 + (e^{λT} − 1)^α] = 0   (2.6)

and

∂ log L/∂λ = D/λ + Σ_{i=1}^{D} x_i + (α − 1) Σ_{i=1}^{D} x_i e^{λx_i} / (e^{λx_i} − 1) − Σ_{i=1}^{D} (R_i + 2) α x_i (e^{λx_i} − 1)^{α−1} e^{λx_i} / [1 + (e^{λx_i} − 1)^α] − R*_D α T (e^{λT} − 1)^{α−1} e^{λT} / [1 + (e^{λT} − 1)^α] = 0.   (2.7)

The MLEs of α and λ are the simultaneous solutions of the equations given by (2.6) and (2.7). It is not easy to obtain the solutions of these equations in explicit form. So, the Newton-Raphson iterative method is utilized to compute the MLEs of α and λ, which are respectively denoted by α̂ and λ̂. It is worth pointing out that in statistical inference, it is always of interest to study the existence and uniqueness of the MLEs of the parameters.

[Figure 1: Profile log-likelihood function of (a) α and (b) λ, for the real dataset in Section 7.]

In order to achieve this, one requires showing the two conditions proposed by Mäkeläinen et al. (1981). These are difficult to establish due to the complicated nature of the expressions of the second order partial derivatives of the log-likelihood function.
However, to get a rough idea of the existence and uniqueness of the MLEs of α and λ for the logistic exponential distribution under the progressive type-I hybrid censored sample, the profile log-likelihood functions of the parameters are presented in Figure 1. These plots suggest that the MLEs may exist uniquely.
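As a rough numerical check, the log-likelihood (2.5) can also be maximized directly. The sketch below uses a derivative-free optimizer in place of the paper's Newton-Raphson iteration; the data, removal scheme R and starting point in the usage are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def log_lik(theta, x, R, T, R_star):
    # Log-likelihood of Eq. (2.5); theta = (alpha, lam).
    alpha, lam = theta
    if alpha <= 0 or lam <= 0:
        return -np.inf
    z = np.expm1(lam * x)                  # e^{lam x_i} - 1
    zT = np.expm1(lam * T)
    D = x.size
    return (D * np.log(alpha) + D * np.log(lam) + lam * x.sum()
            + (alpha - 1) * np.log(z).sum()
            - ((R + 2) * np.log1p(z ** alpha)).sum()
            - R_star * np.log1p(zT ** alpha))

def mle(x, R, T, R_star, start=(1.0, 1.0)):
    # Maximize (2.5) by minimizing its negative; Nelder-Mead avoids
    # hand-coding the derivatives (2.6)-(2.7).
    res = minimize(lambda t: -log_lik(t, x, R, T, R_star), start,
                   method="Nelder-Mead")
    return res.x, res.fun
```

Plotting `log_lik` over a grid of α (or λ) with the other parameter fixed at its estimate reproduces the profile log-likelihood curves of Figure 1.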
Bayesian estimation

This section deals with the derivation of the Bayes estimates of the parameters α and λ of the logistic exponential distribution with respect to three loss functions when a progressive type-I hybrid censored sample is available. The loss functions are the squared error, LINEX and generalized entropy loss functions. Among these, the squared error loss function is symmetric and the other two are asymmetric. Let δ be an estimator of the parameter η. Then, the squared error loss function is given by

L_SQ(η, δ) = (δ − η)².   (3.1)

The squared error loss function assigns equal weight to underestimation as well as overestimation. Sometimes, it is also called a balanced loss function. There are various situations where a symmetric loss function is not an appropriate tool to use. For example, when estimating the lifetime of a satellite, overestimation is more serious than underestimation. Further, when estimating the water level of a river in the rainy season, underestimation is more serious than overestimation. In these situations, the following loss functions, known as the LINEX loss (see Varian (1975)) and generalized entropy loss (see Calabria and Pulcini (1994)), are useful, which are respectively given by

L_LI(η, δ) = e^{p(δ−η)} − p(δ − η) − 1,  p ≠ 0   (3.2)

and

L_GE(η, δ) = (δ/η)^q − q log(δ/η) − 1,  q ≠ 0.   (3.3)

Under the loss functions given by (3.1), (3.2) and (3.3), the Bayes estimators can be written in terms of conditional expectations as

η̂_SQ = E_η(η | x),   (3.4)
η̂_LI = −p^{−1} log[E_η(e^{−pη} | x)],  p ≠ 0,   (3.5)
η̂_GE = [E_η(η^{−q} | x)]^{−1/q},  q ≠ 0.   (3.6)

In Bayesian estimation, the prior distribution has an important role. According to Arnold and Press (1983), there is no way in which someone can say that one prior is better than another.
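Given a sample from the posterior distribution of a parameter, the three estimators (3.4)-(3.6) are one-liners. A small illustrative sketch (the function name and the default p, q values are arbitrary choices):

```python
import numpy as np

def bayes_estimates(draws, p=0.5, q=0.5):
    """Bayes estimates of a positive scalar parameter from posterior draws
    under squared error (3.4), LINEX (3.5) and generalized entropy (3.6)."""
    draws = np.asarray(draws, dtype=float)
    est_sq = draws.mean()                                # posterior mean
    est_li = -np.log(np.mean(np.exp(-p * draws))) / p    # LINEX, p != 0
    est_ge = np.mean(draws ** (-q)) ** (-1.0 / q)        # entropy loss, q != 0
    return est_sq, est_li, est_ge
```

Taking q = −1 in the generalized entropy estimator recovers the posterior mean, which gives a quick sanity check of an implementation.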
They additionally stated that it is more often the case that one chooses to restrict attention to a given flexible family of priors, and then picks the member of that family which appears to best match one's own beliefs. Here, the Bayesian estimation has been studied under two kinds of priors.

Independent priors

The prior distributions for the model parameters α and λ are taken to be gamma distributions. Let α ∼ Gamma(a, b) and λ ∼ Gamma(c, d), where the PDFs of α and λ are respectively given by

π_1(α) ∝ α^{a−1} e^{−bα} and π_2(λ) ∝ λ^{c−1} e^{−dλ};  α, λ > 0, a, b, c, d > 0.   (3.7)

The joint prior distribution of α and λ is

π(α, λ) ∝ α^{a−1} e^{−bα} λ^{c−1} e^{−dλ};  α, λ > 0, a, b, c, d > 0.   (3.8)

After some calculations, the posterior PDF of α and λ given X = x is obtained as

π(α, λ | x) = k^{−1} α^{D+a−1} λ^{D+c−1} e^{−bα} e^{−λ(d − Σ_{i=1}^{D} x_i)} ∏_{i=1}^{D} (e^{λx_i} − 1)^{α−1} [1 + (e^{λx_i} − 1)^α]^{−(R_i+2)} × [1 + (e^{λT} − 1)^α]^{−R*_D},   (3.9)

where

k = ∫_0^∞ ∫_0^∞ α^{D+a−1} λ^{D+c−1} e^{−bα} e^{−λ(d − Σ_{i=1}^{D} x_i)} ∏_{i=1}^{D} (e^{λx_i} − 1)^{α−1} [1 + (e^{λx_i} − 1)^α]^{−(R_i+2)} [1 + (e^{λT} − 1)^α]^{−R*_D} dα dλ.

Now, consider a function of the parameters α and λ, say φ(α, λ). Then, from (3.4), (3.5) and (3.6), the Bayes estimates of φ(α, λ) with respect to the squared error, LINEX and generalized entropy loss functions are given by

φ̂_SQ = ∫_0^∞ ∫_0^∞ φ(α, λ) π(α, λ | x) dα dλ,   (3.10)
φ̂_LI = −(1/p) log[ ∫_0^∞ ∫_0^∞ e^{−p φ(α, λ)} π(α, λ | x) dα dλ ],  p ≠ 0,   (3.11)
φ̂_GE = [ ∫_0^∞ ∫_0^∞ (φ(α, λ))^{−q} π(α, λ | x) dα dλ ]^{−1/q},  q ≠ 0,   (3.12)

respectively. Note that (3.12) reduces to (3.10) when q = −1. Next, the Bayes estimates of the parameters α and λ are obtained. To derive the Bayes estimates of α and λ with respect to the loss functions given by (3.1), (3.2) and (3.3), one needs to replace φ(α, λ) by α and λ, respectively, in (3.10), (3.11) and (3.12). In the following, the Bayes estimates of α with respect to the loss functions (3.1), (3.2) and (3.3) are respectively obtained as

α̂_SQ = k^{−1} ∫_0^∞ ∫_0^∞ α^{D+a} λ^{D+c−1} e^{−bα} e^{−λ(d − Σ_{i=1}^{D} x_i)} ∏_{i=1}^{D} (e^{λx_i} − 1)^{α−1} [1 + (e^{λx_i} − 1)^α]^{−(R_i+2)} [1 + (e^{λT} − 1)^α]^{−R*_D} dα dλ,   (3.13)

α̂_LI = −(1/p) log[ k^{−1} ∫_0^∞ ∫_0^∞ α^{D+a−1} λ^{D+c−1} e^{−α(b+p)} e^{−λ(d − Σ_{i=1}^{D} x_i)} ∏_{i=1}^{D} (e^{λx_i} − 1)^{α−1} [1 + (e^{λx_i} − 1)^α]^{−(R_i+2)} [1 + (e^{λT} − 1)^α]^{−R*_D} dα dλ ]   (3.14)

and

α̂_GE = [ k^{−1} ∫_0^∞ ∫_0^∞ α^{D+a−q−1} λ^{D+c−1} e^{−bα} e^{−λ(d − Σ_{i=1}^{D} x_i)} ∏_{i=1}^{D} (e^{λx_i} − 1)^{α−1} [1 + (e^{λx_i} − 1)^α]^{−(R_i+2)} [1 + (e^{λT} − 1)^α]^{−R*_D} dα dλ ]^{−1/q}.   (3.15)

In analogy to α̂_SQ, α̂_LI and α̂_GE, the Bayes estimates of λ under the loss functions (3.1), (3.2) and (3.3), respectively denoted by λ̂_SQ, λ̂_LI and λ̂_GE, can be obtained. These are omitted for the sake of conciseness.

Bivariate prior

In this subsection, let us consider a bivariate prior for α and λ, which is given by

π*(α, λ) = π*_1(α | λ) π_2(λ);  α > 0, λ > 0,   (3.16)

where π_2(λ) is given by (3.7) and

π*_1(α | λ) ∝ 1;  α > 0, λ > 0.   (3.17)

The prior π*_1(α | λ) is the noninformative prior of α when λ is fixed. Thus, from (3.16), the bivariate prior distribution of α and λ is given by

π*(α, λ) ∝ λ^{c−1} e^{−dλ};  α, λ > 0, c, d > 0.   (3.18)

Similar to the preceding subsection, the posterior PDF of α and λ given the data x is obtained as

π*(α, λ | x) = k*^{−1} α^D λ^{D+c−1} e^{−λ(d − Σ_{i=1}^{D} x_i)} ∏_{i=1}^{D} (e^{λx_i} − 1)^{α−1} [1 + (e^{λx_i} − 1)^α]^{−(R_i+2)} × [1 + (e^{λT} − 1)^α]^{−R*_D},   (3.19)

where

k* = ∫_0^∞ ∫_0^∞ α^D λ^{D+c−1} e^{−λ(d − Σ_{i=1}^{D} x_i)} ∏_{i=1}^{D} (e^{λx_i} − 1)^{α−1} [1 + (e^{λx_i} − 1)^α]^{−(R_i+2)} [1 + (e^{λT} − 1)^α]^{−R*_D} dα dλ.

Based on the posterior density function given by (3.19), the Bayes estimates of the parameters α and λ are obtained with respect to the loss functions (3.1), (3.2) and (3.3). The expressions are omitted for the sake of brevity. Considering the bivariate prior distribution, the Bayes estimates of α under the loss functions (3.1), (3.2) and (3.3) are denoted by α̂_dSQ, α̂_dLI and α̂_dGE, respectively. Further, the Bayes estimates of λ under the loss functions (3.1), (3.2) and (3.3) are denoted by λ̂_dSQ, λ̂_dLI and λ̂_dGE, respectively.

Approximation techniques
There are various techniques in the literature which have been used to compute approximate Bayes estimates. In this section, two approaches are employed for the evaluation of the Bayes estimates obtained in the preceding section. First, consider Lindley's method, which shows how to approximate a ratio of two integrals of a particular form. For details, please refer to Lindley (1980).
Lindley's approximation

This subsection deals with the computation of the Bayes estimates of α and λ of the logistic exponential distribution under the loss functions (3.1), (3.2) and (3.3) when a progressive type-I hybrid censored sample is available. First, let us write the Bayes estimate of α with respect to the LINEX loss function. Please refer to Appendix A for an outline of Lindley's approximation method. In this case, for p ≠ 0, one has

φ(α, λ) = e^{−pα},  v_1 = −p e^{−pα},  v_11 = p² e^{−pα},  v_2 = v_22 = v_12 = v_21 = 0.

Thus, using the expansion in Appendix A, the approximate Bayes estimate of α under the LINEX loss function is obtained as

α̂_LI ≈ −(1/p) log[ e^{−pα} + (1/2){ p² e^{−pα} τ_11 − p e^{−pα} ( l_30 τ_11² + l_03 τ_21 τ_22 + 3 l_21 τ_11 τ_12 + l_12(τ_22 τ_11 + 2 τ_21²) + 2P_1 τ_11 + 2P_2 τ_12 ) } ],   (4.20)

where all quantities are evaluated at the MLEs (α̂, λ̂) and the other unknown terms in (4.20) are given in Appendix A. Next, consider the generalized entropy loss function given by (3.3). Under this loss function,

φ(α, λ) = α^{−q},  v_1 = −q α^{−(q+1)},  v_11 = q(q + 1) α^{−(q+2)},  v_2 = v_22 = v_12 = v_21 = 0.

Therefore, under the generalized entropy loss function, the approximate Bayes estimate of α can be obtained, which is given by

α̂_GE ≈ [ α^{−q} + (1/2){ q(q + 1) α^{−(q+2)} τ_11 − q α^{−(q+1)} ( l_30 τ_11² + l_03 τ_21 τ_22 + 3 l_21 τ_11 τ_12 + l_12(τ_22 τ_11 + 2 τ_21²) + 2P_1 τ_11 + 2P_2 τ_12 ) } ]^{−1/q}.   (4.21)

Note that the Bayes estimate of α with respect to the squared error loss function (denoted by α̂_SQ) can be computed from (4.21) after taking q = −1. The Bayes estimates of λ with respect to the LINEX, generalized entropy and squared error loss functions can be derived similarly, and therefore the expressions are omitted. The Bayes estimates of λ with respect to the loss functions (3.1), (3.2) and (3.3) using Lindley's approximation technique are denoted by λ̂_SQ, λ̂_LI and λ̂_GE, respectively. Further, when the bivariate prior distribution for α and λ (see Subsection 3.2) is considered, the approximate Bayes estimates for α and λ can be obtained similarly to the case of independent priors; only the terms P_1 and P_2 need to be recomputed for this purpose.

Importance sampling method

In this subsection, the importance sampling method is used to evaluate the Bayes estimates of the parameters α and λ with respect to the squared error, LINEX and generalized entropy loss functions. The main advantage of this method over Lindley's approximation method is that it can also be used to construct HPD credible intervals. In this method, one needs to rewrite the posterior PDF given by Equation (3.9) as

π(α, λ | x) ∝ G_λ( D + c, d − Σ_{i=1}^{D} x_i ) G_{α|λ}( D + a, b − Σ_{i=1}^{D} log(e^{λx_i} − 1) ) h(α, λ),   (4.22)

where G(a_1, a_2) represents the gamma distribution with shape parameter a_1 and rate parameter a_2, and

h(α, λ) = [ b − Σ_{i=1}^{D} log(e^{λx_i} − 1) ]^{−(D+a)} ( d − Σ_{i=1}^{D} x_i )^{−(D+c)} [1 + (e^{λT} − 1)^α]^{−R*_D} ∏_{i=1}^{D} [1 + (e^{λx_i} − 1)^α]^{−(R_i+2)} (e^{λx_i} − 1)^{−1}.   (4.23)

Further, in Eq. (4.22), G_λ( D + c, d − Σ_{i=1}^{D} x_i ) and G_{α|λ}( D + a, b − Σ_{i=1}^{D} log(e^{λx_i} − 1) ) mean

λ ∼ G( D + c, d − Σ_{i=1}^{D} x_i ) and α | λ ∼ G( D + a, b − Σ_{i=1}^{D} log(e^{λx_i} − 1) ).

Now, the algorithm comprising the following steps can be used to obtain the Bayes estimates of φ(α, λ) with respect to the loss functions given by (3.1), (3.2) and (3.3).

Algorithm-1
Step-1: Generate λ from the G( D + c, d − Σ_{i=1}^{D} x_i ) distribution.
Step-2: For the given value of λ in Step-1, generate α from the G( D + a, b − Σ_{i=1}^{D} log(e^{λx_i} − 1) ) distribution.
Step-3: Repeat Step-1 and Step-2 N times to obtain (α_1, λ_1), ..., (α_N, λ_N).
Based on the values generated in Step-3, the Bayes estimates of a parametric function φ(α, λ) under the LINEX and generalized entropy loss functions are obtained as

φ̃_LI = −(1/p) log[ Σ_{i=1}^{N} e^{−p φ(α_i, λ_i)} h(α_i, λ_i) / Σ_{i=1}^{N} h(α_i, λ_i) ]   (4.24)

and

φ̃_GE = [ Σ_{i=1}^{N} φ^{−q}(α_i, λ_i) h(α_i, λ_i) / Σ_{i=1}^{N} h(α_i, λ_i) ]^{−1/q},   (4.25)

respectively. The Bayes estimate with respect to the squared error loss function, denoted by φ̃_SQ, can be obtained from (4.25) when q = −1. Further, the Bayes estimates of α with respect to the loss functions (3.2) and (3.3) can be respectively obtained from (4.24) and (4.25) after substituting α_i in place of φ(α_i, λ_i). Similarly, the Bayes estimates of λ can be evaluated. The Bayes estimates of α (respectively λ) with respect to the loss functions given by (3.1), (3.2) and (3.3) obtained using the importance sampling method are denoted by α̃_SQ (respectively λ̃_SQ), α̃_LI (respectively λ̃_LI) and α̃_GE (respectively λ̃_GE). It is known that credible intervals are constructed based on the posterior distribution. Further, if a credible interval satisfies the condition of an HPD interval, then it is called an HPD credible interval. In this paper, the 100(1 − β)% HPD credible intervals of the parameters α and λ are constructed numerically based on the generated posterior samples. In this context, the idea proposed by Chen and Shao (1999) has been used.

Interval estimation

In this section, two methods are employed to construct confidence intervals of α and λ. Using the asymptotic normality of the MLEs α̂ and λ̂, the 100(1 − β)% confidence intervals of the unknown model parameters α and λ can be constructed. In doing so, the variances of α̂ and λ̂ are required, which can be obtained from the inverse of the observed Fisher information matrix.
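The two constructions just mentioned, the plain normal approximation and its log-transformed variant, amount to a few lines once the variance estimates are in hand from the inverse observed information matrix. A sketch (1.959964 is z_{0.025}; the function name is illustrative):

```python
import math

def na_nl_intervals(theta_hat, var_hat, z=1.959964):
    """Normal-approximation (NA) and log-transformed (NL) confidence
    intervals for a positive parameter, given its estimated variance."""
    s = math.sqrt(var_hat)
    na = (theta_hat - z * s, theta_hat + z * s)      # can dip below zero
    w = math.exp(z * s / theta_hat)
    nl = (theta_hat / w, theta_hat * w)              # always positive
    return na, nl
```

The NL lower bound stays positive even when the NA interval crosses zero, which is exactly the drawback of the plain normal approximation that motivates the log transformation.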
Note that under some regularity conditions, the MLEs (α̂, λ̂) approximately follow a bivariate normal distribution with mean vector (α, λ) and variance-covariance matrix Î^{−1}(α̂, λ̂). That is,

(α̂, λ̂) ∼ N( (α, λ), Î^{−1}(α̂, λ̂) ),

where Î(α̂, λ̂) is the observed Fisher information matrix given by

Î(α̂, λ̂) = [ −l_20  −l_11 ; −l_11  −l_02 ] evaluated at (α, λ) = (α̂, λ̂),   (5.1)

with l_ij = ∂^{i+j} log L/∂α^i ∂λ^j. The elements of the observed Fisher information matrix given by (5.1) are provided in Appendix A. Note that the variances of α̂ and λ̂, respectively denoted by Var(α̂) and Var(λ̂), can be obtained from the main diagonal entries of the matrix Î^{−1}(α̂, λ̂). Thus, the 100(1 − β)% approximate confidence intervals for α and λ are constructed as

( α̂ ± z_{β/2} √Var(α̂) ) and ( λ̂ ± z_{β/2} √Var(λ̂) ),

where z_{β/2} is the percentile of the standard normal distribution with right-tail probability β/2. In some cases, the 100(1 − β)% confidence interval has a negative lower bound even though the parameter takes positive values. This is a drawback of the normal approximation of the MLEs. To overcome this drawback, Meeker and Escobar (2014) introduced log-transformed MLE-based confidence intervals. According to these authors, the confidence interval obtained based on log-transformed MLEs has better coverage probability. Based on the log-transformed MLEs, the 100(1 − β)% confidence intervals of α and λ are respectively given by

( α̂ exp(± z_{β/2} √Var(α̂)/α̂) ) and ( λ̂ exp(± z_{β/2} √Var(λ̂)/λ̂) ).

Simulation study

In this section, a Monte Carlo simulation is performed to investigate the performance of the proposed estimates of the parameters of the logistic exponential distribution under a progressive type-I hybrid censored sample. The comparative performance of the estimates is studied in terms of their average and mean squared error (MSE) values. In doing so, 2000 progressive type-I hybrid censored samples are generated using various censoring schemes. Several combinations of (n, m) values are considered. The values of T are taken as 0. and 0.65. Various removal schemes R and the sample sizes n = 35 and 40 are considered here. The numerical study includes the following steps:

(i) Consider the true values of the model parameters as α = 1. and λ = 0. .

(ii) Generate progressive type-I hybrid censored samples for given n and m numbers of failures. When n = 35, three values of m are considered such that the percentage of failure information (m/n) varies; the same is done for n = 40.

(iii) In the tables, (0*3) and (1*5) represent (0, 0, 0) and (1, 1, 1, 1, 1), respectively. The MSE values are computed as

MSE = (1/M) Σ_{i=1}^{M} [ θ̂_k^{(i)} − θ_k ]²,  k = 1, 2,

where θ_1 = α, θ_2 = λ and M = 2000.

(iv) Based on the 2000 progressive type-I hybrid censored samples, compute the average values of the MLEs of α and λ. The Bayes estimates of α and λ with respect to the symmetric loss function (squared error loss) and the asymmetric loss functions (LINEX loss and generalized entropy loss) are computed. For this purpose, a = 3, b = 2, c = 3, d = 4 are considered.

(v) In Tables 1, 2, 5 and 6, several values of p and q are considered. These are p = (−0. , 0. , 1) and q = (−0. , −0. , 0. ). The interval estimates of α and λ are calculated using the normal approximation to the MLEs (NA) and the normal approximation to the log-transformed MLEs (NL). The average lengths of the HPD credible intervals of the parameters are also provided.

From the tabulated values, the following points are noticed.

(i) In Tables 1 and 5, the average and MSE values are reported for the parameter α for T = 0. and 0.65, respectively. It is observed that, under the same censoring schemes, the MSE values become smaller when T increases. For fixed n, when m increases, the MSE values decrease. For the parameter α, most of the time the Bayes estimates under the generalized entropy loss function perform better than the other estimates.

(ii) In Tables 2 and 6, the average and MSE values are presented for the parameter λ for T = 0. and 0.65, respectively. Observations similar to those for the estimates of the parameter α are noticed.

(iii) In Tables 1, 2, 5 and 6, it is seen that the average values of the Bayes estimates under the LINEX loss and generalized entropy loss functions decrease when the values of p and q increase. Further, the Bayes estimates under the squared error loss function are almost equal to those under the LINEX loss function when p is near 0.

(iv) In Tables 3 and 7, the interval estimates of α and λ are provided for T = 0. and 0.65, respectively. It is observed that the interval lengths decrease when T increases. For each censoring scheme, the lengths of the NA-based confidence intervals are smaller than those of the NL-based confidence intervals. The HPD credible interval lengths are smaller than the other simulated interval lengths.

In this part of the section, the coverage probabilities for the unknown parameters are discussed. To this aim, the asymptotic pivotal quantities are defined as

Q_1 = (α̂ − α)/(λ̂ √τ_11),  Q_2 = (α̂ − α)/(λ √τ_11),  Q_3 = (λ̂ − λ)/(λ̂ √τ_22).

By using Monte Carlo simulation, the coverage probabilities P(−1.645 ≤ Q_i ≤ 1.645) and P(−1.96 ≤ Q_i ≤ 1.96), i = 1, 2, 3, are computed. The coverage probabilities increase when m and T both increase. It is further seen that the coverage probabilities under Q_2 are much better than those of the other two pivotal quantities.

[Table 1: Average values and MSEs of the estimates of α for T = 0. ]
[Table 2: Average values and MSEs of the estimates of λ for T = 0. ]

Table 3: Average lengths of the interval estimates of α and λ for T = 0. .
(n,m)  scheme  |  90%: NA  NL  HPD  |  95%: NA  NL  HPD   (first row: α; second row: λ)
(35,10)  (0*9,25)    1.662900  1.724621  1.347014  1.975324  2.079246  1.532994
                     0.624599  0.640706  0.531508  0.741948  0.769030  0.632295
(35,10)  (0*5,5*5)   1.528562  1.577310  1.125026  1.815746  1.897750  1.131955
                     0.607505  0.621800  0.577517  0.721642  0.745673  0.674827
(35,10)  (5*5,0*5)   1.553188  1.601013  1.163564  1.844999  1.925646  1.290128
                     0.726851  0.749132  0.697847  0.863410  0.900899  0.821357
(35,10)  (25,0*9)    1.519845  1.570609  1.163126  1.805392  1.890829  1.298437
                     0.752925  0.781501  0.715040  0.894385  0.942504  0.851844
(35,15)  (0*14,20)   1.668365  1.736048  1.31918   1.981816  2.095825  1.497293
                     0.636133  0.6545915 0.608156  0.7556489 0.7866983 0.712385
(35,15)  (0*11,5*4)  1.705169  1.775827  0.959026  2.025534  2.144567  1.062896
                     0.626288  0.643520  0.500201  0.743954  0.772936  0.597847
(35,15)  (5*4,0*11)  1.640762  1.700266  1.075443  1.949026  2.049206  1.201778
                     0.682176  0.700271  0.653973  0.810343  0.840771  0.765516
(35,15)  (20,0*14)   1.766647  1.846175  1.254274  2.098562  2.232597  1.385436
                     0.884193  0.932265  0.744514  1.050314  1.131424  0.873779
(35,25)  (0*24,10)   1.708772  1.780958  1.296620  2.029814  2.151433  1.399677
                     0.625575  0.641205  0.517554  0.743107  0.769387  0.609452
(35,25)  (0*20,2*5)  1.584549  1.643226  1.242404  1.882253  1.981050  1.446924
                     0.617701  0.632767  0.518356  0.733754  0.759083  0.625169
(35,25)  (2*5,0*20)  1.571370  1.627798  1.354149  1.866598  1.961595  1.532948
                     0.661035  0.679511  0.573648  0.785222  0.816305  0.672999
(35,25)  (10,0*24)   1.536179  1.590377  1.379903  1.824794  1.916031  1.569639
                     0.680936  0.701037  0.601988  0.808870  0.842684  0.717719
(40,10)  (0*9,30)    1.503518  1.552392  1.323993  1.732248  1.785997  1.498628
                     0.574461  0.585573  0.543190  0.682389  0.701061  0.6284428
(40,10)  (0*5,6*5)   1.518605  1.566231  1.478622  1.803919  1.884054  1.752855
                     0.621611  0.635037  0.593233  0.738399  0.760963  0.661112
(40,10)  (6*5,0*5)   1.639116  1.695835  1.461939  1.947071  2.042544  1.653235
                     0.685168  0.701458  0.657897  0.813896  0.841281  0.801696
(40,10)  (30,0*9)    1.720554  1.797007  1.680008  2.043809  2.172652  1.980084
                     0.855615  0.897024  0.800181  1.016368  1.086185  0.990415
(40,20)  (0*19,20)   1.491926  1.541941  1.450229  1.772228  1.856404  1.712500
                     0.593455  0.606905  0.573032  0.704952  0.727561  0.673965
(40,20)  (0*10,2*10) 1.615110  1.678358  1.527493  1.918555  2.025075  1.853912
                     0.599880  0.613638  0.566066  0.712585  0.735711  0.630888
(40,20)  (2*10,0*10) 1.400833  1.441757  1.375785  1.664020  1.732860  1.625065
                     0.639620  0.655641  0.617997  0.759791  0.786727  0.710105
(40,20)  (20,0*19)   1.477324  1.525186  1.457825  1.754882  1.835424  1.717736
                     0.737058  0.762897  0.700185  0.875536  0.919031  0.819991
(40,30)  (0*29,10)   1.531566  1.586747  1.323700  1.819314  1.912214  1.774169
                     0.599093  0.612949  0.485393  0.711650  0.734941  0.665450
(40,30)  (0*20,1*10) 1.476999  1.524632  1.426656  1.754496  1.834652  1.700332
                     0.593539  0.606846  0.550496  0.705053  0.727418  0.653898
(40,30)  (1*10,0*20) 1.279943  1.311853  1.233060  1.520417  1.574067  1.474245
                     0.580307  0.592797  0.542255  0.689335  0.710325  0.628923
(40,30)  (10,0*29)   1.457166  1.504002  1.341740  1.730937  1.809749  1.684616
                     0.648755  0.666087  0.596387  0.770642  0.799787  0.695473

Table 4: Coverage probabilities of the pivotal quantities for T = 0. .
(n,m)  scheme  |  90%: Q1  Q2  Q3  |  95%: Q1  Q2  Q3
(35,10)  (0*9,25)    0.738  0.759  0.695  0.805  0.813  0.772
(35,10)  (0*5,5*5)   0.712  0.714  0.665  0.781  0.788  0.751
(35,10)  (5*5,0*5)   0.663  0.670  0.639  0.734  0.737  0.742
(35,10)  (25,0*9)    0.622  0.656  0.601  0.677  0.733  0.696
(35,15)  (0*14,20)   0.753  0.780  0.776  0.808  0.844  0.836
(35,15)  (0*11,5*4)  0.746  0.783  0.745  0.798  0.829  0.805
(35,15)  (5*4,0*11)  0.699  0.726  0.675  0.767  0.785  0.780
(35,15)  (20,0*14)   0.706  0.753  0.768  0.754  0.813  0.804
(35,25)  (0*24,10)   0.772  0.799  0.756  0.818  0.853  0.830
(35,25)  (0*20,2*5)  0.775  0.815  0.772  0.834  0.863  0.833
(35,25)  (2*5,0*20)  0.695  0.742  0.725  0.755  0.812  0.801
(35,25)  (10,0*24)   0.704  0.738  0.729  0.757  0.821  0.804
(40,10)  (0*9,30)    0.738  0.743  0.646  0.797  0.796  0.738
(40,10)  (0*5,6*5)   0.735  0.736  0.685  0.799  0.799  0.776
(40,10)  (6*5,0*5)   0.724  0.730  0.621  0.780  0.773  0.736
(40,10)  (30,0*9)    0.676  0.740  0.776  0.735  0.802  0.804
(40,20)  (0*19,20)   0.755  0.783  0.754  0.826  0.845  0.819
(40,20)  (0*10,2*10) 0.786  0.818  0.757  0.840  0.874  0.826
(40,20)  (2*10,0*10) 0.711  0.730  0.745  0.795  0.800  0.802
(40,20)  (20,0*19)   0.668  0.708  0.725  0.728  0.777  0.785
(40,30)  (0*29,10)   0.765  0.807  0.754  0.825  0.862  0.825
(40,30)  (0*20,1*10) 0.743  0.765  0.753  0.799  0.822  0.832
(40,30)  (1*10,0*20) 0.679  0.701  0.709  0.748  0.777  0.788
(40,30)  (10,0*29)   0.717  0.754  0.727  0.780  0.815  0.810

[Table 5: Average values and MSEs of the estimates of α for T = 0.65.]
[Table 6: Average values and MSEs of the estimates of λ for T = 0.65.]

Table 7: Average lengths of the interval estimates of α and λ for T = 0.65.
90% confidence interval 95% confidence interval(n,m) scheme NA NL HPD NA NL HPD(35,10) (0*9,25) 1.511626 1.558747 1.454540 1.795629 1.874913 1.7695580.634094 0.648532 0.619148 0.753227 0.777496 0.729197(0*5,5*5) 1.657463 1.719895 1.610198 1.968865 2.073993 1.9365520.675029 0.692410 0.660681 0.801852 0.831078 0.744792(5*5,0*5) 1.549786 1.601992 1.588753 1.840958 1.928823 1.7573240.726049 0.746621 0.680476 0.862458 0.897060 0.848878(25,0*9) 1.339220 1.377287 1.309316 1.590831 1.654859 1.5479710.664750 0.682869 0.647127 0.789643 0.820113 0.735169(35,15) (0*14,20) 1.504370 1.557721 1.414510 1.787009 1.876822 1.7462960.497177 0.505134 0.414510 0.590586 0.603950 0.546005(0*11,5*4) 1.464152 1.513019 1.373593 1.739235 1.821478 1.6558500.502811 0.510941 0.486327 0.597279 0.610932 0.566976(5*4,0*11) 1.436714 1.483197 1.355396 1.706643 1.784864 1.6662800.696719 0.716478 0.633861 0.827618 0.860851 0.798131(20,0*14) 1.380959 1.423022 1.343415 1.640412 1.711179 1.6084770.638713 0.655186 0.613885 0.758714 0.786412 0.719818(35,25) (0*24,10) 1.392959 1.436461 1.291460 1.654666 1.727862 1.6122850.496879 0.504951 0.477010 0.590233 0.603787 0.561773(0*20,2*5) 1.434666 1.481037 1.412044 1.704209 1.782242 1.6506020.497247 0.505409 0.468921 0.590669 0.604378 0.553102(2*5,0*20) 1.419961 1.465373 1.397091 1.686741 1.763157 1.6281700.553324 0.564292 0.529671 0.657282 0.675710 0.633503(10,0*24) 1.401969 1.445631 1.330690 1.665369 1.738833 1.6152820.587330 0.599888 0.542329 0.697677 0.718781 0.680043(40,10) (0*9,30) 1.725610 1.793698 1.632209 2.049816 2.164492 1.9857870.672530 0.688718 0.632945 0.798884 0.826097 0.740191(0*5,6*5) 1.401979 1.440160 1.347604 1.665381 1.729592 1.6023390.623021 0.636555 0.587576 0.740074 0.762819 0.726511(6*5,0*5) 1.518159 1.568752 1.459005 1.803388 1.888537 1.7477800.712804 0.732657 0.682750 0.846725 0.880115 0.768387(30,0*9) 1.302959 1.339157 1.255089 1.547758 1.608637 1.4705760.842233 0.877745 0.804793 1.000471 1.060301 0.939509(40,20) (0*19,20) 1.198213 
1.226362 1.138907 1.423332 1.470649 1.4059850.475450 0.482485 0.459066 0.564776 0.576590 0.594159(0*10,2*10) 1.353430 1.392856 1.301444 1.607711 1.674031 1.5869780.492425 0.500071 0.482310 0.584942 0.597781 0.554856(2*10,0*10) 1.315115 1.350133 1.218898 1.562197 1.621085 1.5069210.550090 0.560495 0.524489 0.653440 0.670921 0.622460(20,0*19) 1.314422 1.351628 1.241843 1.561374 1.623953 1.5135570.614041 0.629183 0.588718 0.729406 0.754864 0.685917(40,30) (0*29,10) 1.233753 1.264277 1.212832 1.465549 1.516868 1.4127030.471489 0.478487 0.458205 0.560072 0.571822 0.544493(0*20,1*10) 1.221089 1.250823 1.208826 1.450506 1.500493 1.4260490.471077 0.477943 0.451937 0.559583 0.571111 0.522580(1*10,0*20) 1.294778 1.330041 1.231115 1.538040 1.597343 1.5040680.518700 0.527707 0.507350 0.616153 0.631281 0.572084(10,0*29) 1.315101 1.351095 1.271489 1.562181 1.622714 1.5202610.523748 0.533162 0.577396 0.622150 0.637963 0.595866 T = 0 .
65 .
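The censoring schemes tabulated above can be mimicked in a short simulation. The sketch below is a minimal illustration, not the code used in the paper: the function names and the parameter values α = 1.5, λ = 1 are ours. It draws a progressive type-II sample via the standard Balakrishnan-Sandhu uniform transformation and then truncates at T, which is how a progressive type-I hybrid censored sample such as the scheme (0*9,25) with T = 0.65 arises.

```python
import numpy as np

def q_logexp(u, alpha, lam):
    # Inverse CDF of the logistic-exponential distribution,
    # F(x) = (e^{lam*x} - 1)^alpha / (1 + (e^{lam*x} - 1)^alpha).
    return np.log1p((u / (1.0 - u)) ** (1.0 / alpha)) / lam

def progressive_hybrid_sample(n, R, T, alpha, lam, rng):
    """Draw a progressive type-II sample via the Balakrishnan-Sandhu
    uniform transformation, then truncate at time T (type-I hybrid)."""
    m = len(R)
    assert n == m + sum(R)
    W = rng.uniform(size=m)
    # V_i = W_i^{1/(i + R_m + R_{m-1} + ... + R_{m-i+1})}
    E = np.array([i + sum(R[m - i:]) for i in range(1, m + 1)], dtype=float)
    V = W ** (1.0 / E)
    U = 1.0 - np.cumprod(V[::-1])   # ordered progressive uniform order statistics
    x = q_logexp(U, alpha, lam)     # transform to logistic-exponential lifetimes
    return x[x <= T]                # only failures observed before T are kept

rng = np.random.default_rng(2023)
sample = progressive_hybrid_sample(35, [0] * 9 + [25], T=0.65,
                                   alpha=1.5, lam=1.0, rng=rng)
```

Note that with α = 1 the logistic-exponential CDF reduces to the exponential, so `q_logexp(u, 1, lam)` equals the exponential quantile −log(1 − u)/λ, which gives a quick sanity check on the inversion.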
90% confidence interval 95% confidence interval(n,m) scheme Q Q Q Q Q Q (35,10) (0*9,25) 0.758 0.758 0.757 0.821 0.804 0.836(0*5,5*5) 0.785 0.780 0.789 0.845 0.835 0.859(5*5,0*5) 0.735 0.743 0.788 0.802 0.805 0.836(25,0*9) 0.643 0.675 0.636 0.714 0.757 0.712(35,15) (0*14,20) 0.796 0.809 0.745 0.848 0.863 0.813(0*11,5*4) 0.799 0.809 0.755 0.851 0.870 0.825(5*4,0*11) 0.730 0.751 0.772 0.796 0.816 0.836(20,0*14) 0.684 0.723 0.708 0.752 0.796(35,25) (0*24,10) 0.797 0.822 0.787 0.853 0.874 0.841(0*20,2*5) 0.775 0.801 0.770 0.834 0.861 0.835(2*5,0*20) 0.759 0.789 0.764 0.817 0.855 0.825(10,0*24) 0.736 0.765 0.779 0.792 0.832 0.834(40,10) (0*9,30) 0.817 0.785 0.779 0.873 0.833 0.856(0*5,6*5) 0.743 0.728 0.750 0.801 0.787 0.826(6*5,0*5) 0.762 0.756 0.769 0.818 0.825 0.841(30,0*9) 0.633 0.660 0.748 0.692 0.743 0.799(40,20) (0*19,20) 0.756 0.773 0.796 0.824 0.841 0.862(0*10,2*10) 0.789 0.805 0.772 0.843 0.859 0.832(2*10,0*10) 0.758 0.768 0.747 0.816 0.829 0.832(20,0*19) 0.729 0.753 0.748 0.783 0.831 0.810(40,30) (0*29,10) 0.752 0.783 0.767 0.818 0.842 0.838(0*20,1*10) 0.753 0.781 0.772 0.823 0.852 0.839(1*10,0*20) 0.771 0.794 0.772 0.826 0.859 0.838(10,0*29) 0.734 0.767 0.772 0.811 0.832 0.838 Real data analysis
In this section, a real dataset given by Bjerkedal (1960) is considered and analysed. The dataset describes the survival times (in days) of 72 guinea pigs infected with virulent tubercle bacilli. The dataset is given below.
———————————————————————————————————————
12, 15, 22, 24, 24, 32, 32, 33, 34, 38, 38, 43, 44, 48, 52, 53, 54, 54, 55, 56, 57, 58, 58, 59, 60, 60, 60, 60, 61, 62, 63, 65, 65, 67, 68, 70, 70, 72, 73, 75, 76, 76, 81, 83, 84, 85, 87, 91, 95, 96, 98, 99, 109, 110, 121, 127, 129, 131, 143, 146, 146, 175, 175, 211, 233, 258, 258, 263, 297, 341, 341, 376
———————————————————————————————————————
For the purpose of model comparison, the negative log-likelihood criterion, Akaike information criterion (AIC), corrected AIC (AICc) and Bayesian information criterion (BIC) are considered. The values of these statistics are presented in Table 9, which suggests that the logistic exponential distribution provides the best fit to the data in comparison with the exponential distribution (ED), Weibull distribution (WD), inverse exponential distribution (IED), inverse Weibull distribution (IWD), gamma distribution and Burr distribution. For the purpose of goodness-of-fit testing, different plots are presented in Figures 2, 3, 4 and 5 to show that the logistic exponential distribution fits the real dataset well. In Figure 2, Q-Q plots are used to compare the given dataset to the theoretical model. Here, the Q-Q plot represents the points (F^{-1}(i/(n+1); α̂, λ̂), x_(i)), where i = 1, …, n, and x_(i) denotes the ith ordered value of the real data. In Figure 3, the empirical cumulative distribution function (ECDF) plot represents the cumulative distribution function of the observed data, compared with the theoretical CDFs.
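The Q-Q construction can be sketched directly. Assuming the MLEs reported in Table 9 (α̂ = 1.680051, λ̂ = 0.008596) and inverting the logistic-exponential CDF in closed form (the helper name `q_logexp` is ours):

```python
import numpy as np

# Survival times (days) of 72 guinea pigs (Bjerkedal, 1960)
data = np.array([12, 15, 22, 24, 24, 32, 32, 33, 34, 38, 38, 43, 44, 48, 52,
                 53, 54, 54, 55, 56, 57, 58, 58, 59, 60, 60, 60, 60, 61, 62,
                 63, 65, 65, 67, 68, 70, 70, 72, 73, 75, 76, 76, 81, 83, 84,
                 85, 87, 91, 95, 96, 98, 99, 109, 110, 121, 127, 129, 131,
                 143, 146, 146, 175, 175, 211, 233, 258, 258, 263, 297, 341,
                 341, 376], dtype=float)

alpha_hat, lam_hat = 1.680051, 0.008596   # MLEs from Table 9

def q_logexp(p, alpha, lam):
    # Inverse of F(x) = (e^{lam*x} - 1)^alpha / (1 + (e^{lam*x} - 1)^alpha)
    return np.log1p((p / (1.0 - p)) ** (1.0 / alpha)) / lam

n = len(data)
x_sorted = np.sort(data)                              # ordered observations x_(i)
theo_q = q_logexp(np.arange(1, n + 1) / (n + 1.0),    # F^{-1}(i/(n+1); MLEs)
                  alpha_hat, lam_hat)
# The Q-Q plot of Figure 2 pairs (theo_q[i], x_sorted[i]); points near the
# 45-degree line indicate a good fit.
```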
In Figure 5, the P-P plot represents the points (F(x_(i); α̂, λ̂), F_n(x_(i))), where F_n(x_(i)) = (1/n) Σ_{i=1}^{n} I(x_i ≤ x_(i)) is the empirical distribution function and I is the indicator function.

Table 9: The MLEs, standard errors (SEs) and statistics for goodness-of-fit

Model   α̂ (SE)                 λ̂ (SE)                    -logL      AIC        AICc       BIC
LED     1.680051 (0.171128)    0.008596 (0.000695)       393.1994   790.3988   790.5727   790.1135
ED      ——                     0.010018 (0.001168)       403.4421   808.8842   808.9699   808.7415
IED     ——                     60.09751 (7.082558)       402.6718   807.3436   807.4007   807.2009
WD      1.392761 (0.118424)    110.529489 (9.933974)     397.1477   798.2954   798.4693   798.0101
IWD     1.415144 (0.117427)    284.285192 (125.980402)   395.6491   795.2982   795.4721   795.0128
Gamma   2.081241 (0.321162)    0.020854 (0.003628)       394.2476   792.4952   792.6691   792.2098
Burr    4.027607 (17.711090)   0.057153 (0.251267)       490.5494   985.0988   985.2727   984.8135

Table 10: Estimates of α and λ for the real dataset, by (n,m), T, prior and scheme (MLE and Bayes estimates)
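The information criteria in Table 9 follow directly from the reported −logL with k = 2 parameters and n = 72 observations; a quick check, assuming the standard definitions:

```python
import math

# Fitted logistic-exponential model, values from Table 9
neg_logL, k, n = 393.1994, 2, 72

aic = 2 * k + 2 * neg_logL                    # Akaike information criterion
aicc = aic + 2 * k * (k + 1) / (n - k - 1)    # small-sample corrected AIC
bic = k * math.log(n) + 2 * neg_logL          # Bayesian information criterion
```

The computed AIC and AICc reproduce the tabulated values for the logistic exponential row; BIC is shown with its standard k·log(n) penalty.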
The Bayes estimates in Table 10 are reported under the LINEX loss (LLF), the generalized entropy loss (GELF) and the squared error loss (SELF).

Table 11: Average lengths of the interval estimates of α and λ for the real dataset
90% confidence interval 95% confidence interval(n,m) T scheme NA NL HPD NA NL HPD(72,30) 45 (0*16,3*14) 1.449476 1.479495 1.425826 1.721801 1.772246 1.6865910.004608 0.004662 0.004472 0.005474 0.005563 0.005348(0*9,2*21) 1.402861 1.429036 1.293271 1.666428 1.710404 1.5907520.004378 0.004422 0.004169 0.005201 0.005274 0.005027(72,30) 60 (0*29,42) 1.249023 1.273158 1.206673 1.483688 1.524239 1.4304380.004277 0.004333 0.004144 0.005080 0.005175 0.004844(0*9,2*21) 2.74877 2.891338 2.369878 3.265205 3.505682 3.0023340.005259 0.005320 0.004869 0.006247 0.006349 0.005064
Figure 3: The ECDF and CDF comparison for various distributions fitted to the assumed real dataset: (a) logistic exponential, (b) exponential, (c) inverse exponential, (d) gamma, (e) Weibull, (f) inverse Weibull.
Figure 4: Histogram with density plots for various distributions fitted to the given real dataset.

Figure 5: P-P plots for various distributions fitted to the given real dataset.

Conclusion

In this article, statistical inference for the two-parameter logistic exponential distribution under the progressive type-I hybrid censoring scheme has been discussed. The MLEs are obtained; since their forms cannot be expressed explicitly, the Newton-Raphson iterative method is employed to compute the MLEs of the parameters. In the Bayesian study, independent and bivariate priors are considered, and three loss functions are utilized. The Bayes estimates are difficult to obtain in closed form; rather, they appear as ratios of two integrals. To evaluate approximate Bayes estimates, Lindley's approximation technique is applied. The HPD credible intervals are obtained by using the importance sampling method. In addition to the HPD credible intervals, two methods are employed to compute the asymptotic confidence intervals of the model parameters. A Monte Carlo simulation study is performed to compare the performance of the proposed estimates. From the numerical study, it is observed that the Bayes estimates with respect to the independent prior distributions perform better than the other estimates. Further, a real dataset is considered for illustrative purposes.
Acknowledgement:
The author S. Dutta thanks the Council of Scientific and Industrial Research (C.S.I.R. Grant No. 09/983(0038)/2019-EMR-I), India, for the financial assistance received to carry out this research work. Both authors thank the Department of Mathematics, National Institute of Technology Rourkela, India, for the research facilities provided.
References
Arnold, B. C. and Press, S. J. (1983). Bayesian inference for Pareto populations, Journal of Econometrics, (3), 287–306.

Bjerkedal, T. (1960). Acquisition of resistance in guinea pigs infected with different doses of virulent tubercle bacilli, American Journal of Hygiene, (1), 130–148.

Calabria, R. and Pulcini, G. (1994). Bayes 2-sample prediction for the inverse Weibull distribution, Communications in Statistics – Theory and Methods, (6), 1811–1824.

Chen, M. H. and Shao, Q. M. (1999). Monte Carlo estimation of Bayesian credible and HPD intervals, Journal of Computational and Graphical Statistics, (1), 69–92.

Epstein, B. (1954). Truncated life tests in the exponential case, The Annals of Mathematical Statistics, (3), 555–564.

Gamchi, F. V., Alma, Ö. G. and Belaghi, R. A. (2019). Classical and Bayesian inference for Burr type-III distribution based on progressive type-II hybrid censored data, Mathematical Sciences, (2), 79–95.

Goyal, T., Rai, P. K. and Maurya, S. K. (2020). Bayesian estimation for GDUS exponential distribution under type-I progressive hybrid censoring, Annals of Data Science, (2), 307–345.

Hemmati, F. and Khorram, E. (2013). Statistical analysis of the log-normal distribution under type-II progressive hybrid censoring schemes, Communications in Statistics – Simulation and Computation, (1), 52–75.

Kayal, T., Tripathi, Y. M., Kundu, D. and Rastogi, M. (2019). Statistical inference of Chen distribution based on type-I progressive hybrid censored samples, Statistics, Optimization & Information Computing.

Kundu, D. and Joarder, A. (2006). Analysis of type-II progressively hybrid censored data, Computational Statistics & Data Analysis, (10), 2509–2528.

Lan, Y. and Leemis, L. M. (2008). The logistic–exponential survival distribution, Naval Research Logistics, (3), 252–264.

Lin, C.-T., Ng, H. K. T. and Chan, P. S. (2009). Statistical inference of type-II progressively hybrid censored data with Weibull lifetimes, Communications in Statistics – Theory and Methods, (10), 1710–1729.

Lindley, D. V. (1980). Approximate Bayesian methods, Trabajos de Estadística y de Investigación Operativa, (1), 223–245.

Mäkeläinen, T., Schmidt, K. and Styan, G. P. (1981). On the existence and uniqueness of the maximum likelihood estimate of a vector-valued parameter in fixed-size samples, The Annals of Statistics, (4), 758–767.

Meeker, W. Q. and Escobar, L. A. (2014). Statistical Methods for Reliability Data, John Wiley & Sons.

Sultana, F., Tripathi, Y. and Kumar Rastogi, M. (2019). Parameter estimation and prediction for the generalized half normal distribution under progressive hybrid censoring, Journal of The Iranian Statistical Society, (1), 191–236.

Sultana, F., Tripathi, Y. M., Wu, S. J. and Sen, T. (2020). Inference for Kumaraswamy distribution based on type-I progressive hybrid censoring, Annals of Data Science, pp. 1–25.

Varian, H. R. (1975). A Bayesian approach to real estate assessment, In Studies in Bayesian Econometrics and Statistics in Honor of Leonard J. Savage, eds S. E. Fienberg and A. Zellner, pp. 195–208.
A Appendix
In this part of the paper, Lindley's approximation method is described; one may refer to Lindley (1980) for further elaboration. Consider an arbitrary function of α and λ, say φ(α, λ). The forms of the Bayes estimators (see Eqs. (3.4), (3.5) and (3.6)) involve expectations with respect to the posterior probability density function. The posterior mean of φ(α, λ) is given by

E(φ(α, λ) | x) = [∫₀^∞ ∫₀^∞ φ(α, λ) e^{l(α,λ|x) + P(α,λ)} dα dλ] / [∫₀^∞ ∫₀^∞ e^{l(α,λ|x) + P(α,λ)} dα dλ],   (A.1)

where l(α, λ | x) is the log-likelihood function and P(α, λ) is the logarithm of the joint prior distribution of α and λ. Using the method due to Lindley (1980), (A.1) can be approximated as

E(φ(α, λ) | x) ≈ φ(α̂, λ̂) + (1/2) [A + l₃₀ B₁₂ + l₀₃ B₂₁ + l₂₁ C₁₂ + l₁₂ C₂₁ + 2 P₁ A₁₂ + 2 P₂ A₂₁],   (A.2)

where

A = Σ_{i=1}^{2} Σ_{j=1}^{2} v_{ij} τ_{ij},   θ₁ = α, θ₂ = λ,
v_i = ∂φ/∂θ_i,   v_{ij} = ∂²φ/(∂θ_i ∂θ_j),   P_i = ∂P/∂θ_i,
A_{ij} = v_i τ_{ii} + v_j τ_{ji},
B_{ij} = (v_i τ_{ii} + v_j τ_{ij}) τ_{ii},
C_{ij} = 3 v_i τ_{ii} τ_{ij} + v_j (τ_{ii} τ_{jj} + 2 τ_{ij}²),
l_{ij} = ∂^{i+j} l / (∂θ₁^i ∂θ₂^j),   P = log π(θ₁, θ₂),

and τ_{ij} is the (i, j)th element of [−∂²l/(∂θ_i ∂θ_j)]^{-1}, where i, j = 1, 2, with all quantities evaluated at the MLEs (α̂, λ̂).
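Once the derivatives of this appendix are evaluated at the MLEs, the correction term in (A.2) is a single arithmetic expression. A minimal sketch (the function name and argument layout are ours; the numeric inputs would come from the formulas below, and the toy usage here simply exercises the formula with an identity τ matrix):

```python
def lindley_correction(v, vmat, P, l3, tau):
    """Correction term of Lindley's two-parameter approximation (A.2).
    v: (v1, v2) first derivatives of phi; vmat: 2x2 second derivatives v_ij;
    P: (P1, P2) derivatives of the log-prior; l3: dict of third derivatives
    of the log-likelihood (keys 'l30', 'l03', 'l21', 'l12'); tau: 2x2 matrix
    [-d^2 l / d theta_i d theta_j]^{-1}. All evaluated at the MLEs."""
    A = sum(vmat[i][j] * tau[i][j] for i in range(2) for j in range(2))

    def A_(i, j):  # A_ij = v_i tau_ii + v_j tau_ji
        return v[i] * tau[i][i] + v[j] * tau[j][i]

    def B(i, j):   # B_ij = (v_i tau_ii + v_j tau_ij) tau_ii
        return (v[i] * tau[i][i] + v[j] * tau[i][j]) * tau[i][i]

    def C(i, j):   # C_ij = 3 v_i tau_ii tau_ij + v_j (tau_ii tau_jj + 2 tau_ij^2)
        return (3 * v[i] * tau[i][i] * tau[i][j]
                + v[j] * (tau[i][i] * tau[j][j] + 2 * tau[i][j] ** 2))

    return 0.5 * (A
                  + l3['l30'] * B(0, 1) + l3['l03'] * B(1, 0)
                  + l3['l21'] * C(0, 1) + l3['l12'] * C(1, 0)
                  + 2 * P[0] * A_(0, 1) + 2 * P[1] * A_(1, 0))

# Toy usage: phi = alpha, flat prior, identity tau, only l30 nonzero
tau_id = [[1.0, 0.0], [0.0, 1.0]]
zero2 = [[0.0, 0.0], [0.0, 0.0]]
corr = lindley_correction((1.0, 0.0), zero2, (0.0, 0.0),
                          {'l30': 2.0, 'l03': 0.0, 'l21': 0.0, 'l12': 0.0},
                          tau_id)
```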
For the present problem, the log-likelihood function can be written as

l = D log α + D log λ + λ Σ_{i=1}^{D} x_i + (α − 1) Σ_{i=1}^{D} log(e^{λx_i} − 1) − Σ_{i=1}^{D} (R_i + 2) log U − R*_D log V,   (A.3)

where U ≡ U(α, λ) = 1 + (e^{λx_i} − 1)^α and V ≡ V(α, λ) = 1 + (e^{λT} − 1)^α. Next, the required partial derivatives of l with respect to α and λ are presented.

l₁₀ = ∂l/∂α = D/α + Σ_{i=1}^{D} log(e^{λx_i} − 1) − Σ_{i=1}^{D} (R_i + 2) U_α/U − R*_D V_α/V

l₀₁ = ∂l/∂λ = D/λ + Σ_{i=1}^{D} x_i + (α − 1) Σ_{i=1}^{D} x_i e^{λx_i}/(e^{λx_i} − 1) − Σ_{i=1}^{D} (R_i + 2) U_λ/U − R*_D V_λ/V

l₂₀ = ∂²l/∂α² = −D/α² − Σ_{i=1}^{D} (R_i + 2) [(U U_αα − U_α²)/U²] − R*_D [(V V_αα − V_α²)/V²]

l₀₂ = ∂²l/∂λ² = −D/λ² − (α − 1) Σ_{i=1}^{D} x_i² e^{λx_i}/(e^{λx_i} − 1)² − Σ_{i=1}^{D} (R_i + 2) [(U U_λλ − U_λ²)/U²] − R*_D [(V V_λλ − V_λ²)/V²]

l₁₁ = ∂²l/(∂α∂λ) = Σ_{i=1}^{D} x_i e^{λx_i}/(e^{λx_i} − 1) − Σ_{i=1}^{D} (R_i + 2) [(U U_αλ − U_α U_λ)/U²] − R*_D [(V V_αλ − V_α V_λ)/V²]

l₃₀ = ∂³l/∂α³ = 2D/α³ − Σ_{i=1}^{D} (R_i + 2) [(U² U_ααα − 3U U_α U_αα + 2U_α³)/U³] − R*_D [(V² V_ααα − 3V V_α V_αα + 2V_α³)/V³]

l₀₃ = ∂³l/∂λ³ = 2D/λ³ + (α − 1) Σ_{i=1}^{D} x_i³ e^{λx_i}(1 + e^{λx_i})/(e^{λx_i} − 1)³ − Σ_{i=1}^{D} (R_i + 2) [(U² U_λλλ − 3U U_λ U_λλ + 2U_λ³)/U³] − R*_D [(V² V_λλλ − 3V V_λ V_λλ + 2V_λ³)/V³]

l₂₁ = ∂³l/(∂α²∂λ) = −Σ_{i=1}^{D} (R_i + 2) [(U(U U_ααλ − U_λ U_αα − 2U_α U_αλ) + 2U_α² U_λ)/U³] − R*_D [(V(V V_ααλ − V_λ V_αα − 2V_α V_αλ) + 2V_α² V_λ)/V³]

l₁₂ = ∂³l/(∂α∂λ²) = −Σ_{i=1}^{D} x_i² e^{λx_i}/(e^{λx_i} − 1)² − Σ_{i=1}^{D} (R_i + 2) [(U(U U_αλλ − U_α U_λλ − 2U_λ U_αλ) + 2U_α U_λ²)/U³] − R*_D [(V(V V_αλλ − V_α V_λλ − 2V_λ V_αλ) + 2V_α V_λ²)/V³]

The derivatives of U and V are:

U_α = (e^{λx_i} − 1)^α log(e^{λx_i} − 1),   V_α = (e^{λT} − 1)^α log(e^{λT} − 1)

U_λ = α x_i e^{λx_i} (e^{λx_i} − 1)^{α−1},   V_λ = α T e^{λT} (e^{λT} − 1)^{α−1}

U_αλ = x_i e^{λx_i} (e^{λx_i} − 1)^{α−1} [α log(e^{λx_i} − 1) + 1],   V_αλ = T e^{λT} (e^{λT} − 1)^{α−1} [α log(e^{λT} − 1) + 1]

U_αα = (e^{λx_i} − 1)^α [log(e^{λx_i} − 1)]²,   V_αα = (e^{λT} − 1)^α [log(e^{λT} − 1)]²

U_λλ = α x_i² e^{λx_i} (e^{λx_i} − 1)^{α−2} (α e^{λx_i} − 1),   V_λλ = α T² e^{λT} (e^{λT} − 1)^{α−2} (α e^{λT} − 1)

U_ααα = (e^{λx_i} − 1)^α [log(e^{λx_i} − 1)]³,   V_ααα = (e^{λT} − 1)^α [log(e^{λT} − 1)]³

U_λλλ = α x_i³ e^{λx_i} (e^{λx_i} − 1)^{α−3} [α² e^{2λx_i} + (1 − 3α) e^{λx_i} + 1],   V_λλλ = α T³ e^{λT} (e^{λT} − 1)^{α−3} [α² e^{2λT} + (1 − 3α) e^{λT} + 1]

U_ααλ = x_i e^{λx_i} (e^{λx_i} − 1)^{α−1} log(e^{λx_i} − 1) [α log(e^{λx_i} − 1) + 2],   V_ααλ = T e^{λT} (e^{λT} − 1)^{α−1} log(e^{λT} − 1) [α log(e^{λT} − 1) + 2]

U_αλλ = x_i² e^{λx_i} (e^{λx_i} − 1)^{α−2} [α (α e^{λx_i} − 1) log(e^{λx_i} − 1) + 2α e^{λx_i} − 1],   V_αλλ = T² e^{λT} (e^{λT} − 1)^{α−2} [α (α e^{λT} − 1) log(e^{λT} − 1) + 2α e^{λT} − 1]
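Closed-form derivatives such as U_α and U_λ above are easy to mistype, so a finite-difference spot check is worthwhile. A sketch (the test-point values α = 1.7, λ = 0.5, x = 2 are arbitrary):

```python
import math

def U(a, lam, x):
    # U(alpha, lambda) = 1 + (e^{lambda x} - 1)^alpha
    return 1.0 + (math.exp(lam * x) - 1.0) ** a

def U_alpha(a, lam, x):
    # U_alpha = (e^{lambda x} - 1)^alpha * log(e^{lambda x} - 1)
    t = math.exp(lam * x) - 1.0
    return t ** a * math.log(t)

def U_lambda(a, lam, x):
    # U_lambda = alpha * x * e^{lambda x} * (e^{lambda x} - 1)^{alpha - 1}
    t = math.exp(lam * x) - 1.0
    return a * x * math.exp(lam * x) * t ** (a - 1.0)

# Central finite differences at an arbitrary test point
a, lam, x, h = 1.7, 0.5, 2.0, 1e-6
fd_a = (U(a + h, lam, x) - U(a - h, lam, x)) / (2 * h)
fd_l = (U(a, lam + h, x) - U(a, lam - h, x)) / (2 * h)
```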