Methods of Estimation for the Three-Parameter Reflected Weibull Distribution
Fateme Maleki Jebeli (a), Einolah Deiri (b,*)
(a) Department of Statistics, Marvdasht Branch, Islamic Azad University, Marvdasht, Iran. Email: [email protected]
(b) Department of Statistics, Qaemshahr Branch, Islamic Azad University, Qaemshahr, Iran. Email: [email protected]
Abstract
In this paper, we study three methods of estimation for the parameters of the three-parameter Reflected Weibull (RW) distribution: the moment estimator (MME), the maximum likelihood estimator (MLE), and the location and scale parameters free maximum likelihood estimator (LSPFE). The LSPFE is based on a data transformation that avoids the problem of an unbounded likelihood. Through Monte Carlo simulations, we show that the LSPFE performs better than the MME and the MLE in terms of bias and root mean squared error (RMSE). Finally, two examples based on real data sets are presented to illustrate the methods.
Keywords: Reflected Weibull, Monte Carlo simulations, Moment estimator, Maximum likelihood estimator.

Introduction
The Weibull distribution, first presented by Weibull [17], is the most widely used distribution in reliability and lifetime studies. The cumulative distribution function (CDF) and probability density function (PDF) of the three-parameter Weibull distribution are given by

$$F(x; \delta, \beta, \gamma) = 1 - \exp\left[-\left(\frac{x-\gamma}{\beta}\right)^{\delta}\right] \quad (1)$$

and

$$f(x; \delta, \beta, \gamma) = \frac{\delta}{\beta}\left(\frac{x-\gamma}{\beta}\right)^{\delta-1}\exp\left[-\left(\frac{x-\gamma}{\beta}\right)^{\delta}\right] \quad (2)$$

for $\delta > 0$, $\beta > 0$ and $x > \gamma$; see, for example, Johnson et al. [8]. If $X$ has the Weibull distribution with CDF and PDF given by (1) and (2), then $-X$ is said to have the RW distribution. The CDF and PDF of the three-parameter RW distribution are given by

$$F(x; \delta, \beta, \gamma) = \exp\left[-\left(\frac{\gamma-x}{\beta}\right)^{\delta}\right] \quad (3)$$

and

$$f(x; \delta, \beta, \gamma) = \frac{\delta}{\beta}\left(\frac{\gamma-x}{\beta}\right)^{\delta-1}\exp\left[-\left(\frac{\gamma-x}{\beta}\right)^{\delta}\right] \quad (4)$$

for $\delta > 0$, $\beta > 0$ and $x < \gamma$. The associated mean is $E(X) = \gamma - \beta\,\Gamma\!\left(\tfrac{1}{\delta} + 1\right)$, where $\Gamma(x) = \int_0^{\infty} t^{x-1} e^{-t}\,dt$ is the gamma function. This distribution was first presented by Cohen [4]. For $\delta \le 1$ the distribution is J-shaped; for $\delta > 1$ the RW distribution becomes bell-shaped.

Figure 1: The density function of the three-parameter RW distribution for different choices of $\delta$, with $\beta = 1$ and $\gamma = 0$.
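Since an RW variate is simply the negative of a Weibull variate, its PDF, CDF and random generation are easy to sketch in R, the language used later in our numerical study. The following is a minimal sketch; the function names drw, prw and rrw are ours and not part of any package.

```r
# A minimal sketch of the three-parameter RW distribution in R.
# If X ~ RW(delta, beta, gamma), then gamma - X ~ Weibull(shape = delta, scale = beta).

drw <- function(x, delta, beta, gamma) {
  # PDF (4): positive only for x < gamma
  ifelse(x < gamma, dweibull(gamma - x, shape = delta, scale = beta), 0)
}

prw <- function(x, delta, beta, gamma) {
  # CDF (3): F(x) = exp[-((gamma - x)/beta)^delta] for x < gamma
  ifelse(x < gamma, exp(-((gamma - x) / beta)^delta), 1)
}

rrw <- function(n, delta, beta, gamma) {
  # random generation by reflecting Weibull variates
  gamma - rweibull(n, shape = delta, scale = beta)
}
```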
As Cohen [4] notes, some readers may recognize the RW distribution as the distribution of largest values, or the Fisher-Tippett type III distribution of largest values, as discussed by Gumbel [7]. As Lai [9] remarks, strictly speaking the RW distribution is not suitable for reliability modeling unless $\gamma > 0$ and $(\gamma/\beta)^{\delta} \ge 9$. The RW distribution is suitable for modeling ductile strength; see Nadarajah and Kotz [12]. In this paper, we propose three methods of estimation for the parameters of the three-parameter RW distribution. The MME and the MLE have been discussed by many authors for a wide range of distributions. As Cheng and Amin [3] pointed out, the MLE does not always give satisfactory estimates for certain three-parameter distributions whose density is positive only to one side of a shifted origin $\gamma$, this being one of the unknown parameters. For example, in the three-parameter lognormal, gamma and Weibull models the critical difficulty is that there are paths in the parameter space, with $\gamma$ tending to the smallest observation, along which the likelihood becomes infinite. Griffiths [6] suggested a method for estimating the parameters of the three-parameter lognormal distribution, and Lawless [10] gave detailed descriptions of various methods of parameter estimation for the three-parameter Weibull distribution. As Nagatsuka and Balakrishnan [13] point out, there exist estimators that are unique but useless from an inferential point of view, and consistency is one of the most fundamental properties for statistics to be suitable as estimators of unknown parameters. Following the suggestion of Nagatsuka et al. [14], we refer to the resulting estimators as LSPFEs. The rest of this article is organized as follows. In Section 1 we present the MLE, in Section 2 the MME, and in Section 3 the LSPFE, a new method of estimation for the parameters of the three-parameter RW distribution. In Section 4 we show through simulation, using bias and RMSE, that the LSPFE performs well compared to the other prominent methods. In Section 5 two real-life data sets are used as examples to illustrate the methods of estimation, and Section 6 contains some concluding remarks.

Maximum likelihood estimation
Let $X_i$, $i = 1, 2, \ldots, n$, be a random sample from the distribution (3) with parameter vector $(\delta, \beta, \gamma)$. We now determine the MLEs of the parameters of the three-parameter RW distribution. Let $x_1, x_2, \ldots, x_n$ be the observed values of a random sample of size $n$ from the three-parameter RW distribution. The log-likelihood function for the vector of parameters can be written as

$$\ell(\delta, \beta, \gamma) = n\log\delta - n\log\beta + (\delta - 1)\sum_{i=1}^{n}\log\left(\frac{\gamma - x_i}{\beta}\right) - \sum_{i=1}^{n}\left(\frac{\gamma - x_i}{\beta}\right)^{\delta},$$

for $x_i < \gamma$, $i = 1, 2, \ldots, n$, and

$$U(\delta) = \frac{\partial \ell(\delta, \beta, \gamma)}{\partial \delta} = \frac{n}{\delta} + \sum_{i=1}^{n}\log\left(\frac{\gamma - x_i}{\beta}\right) - \sum_{i=1}^{n}\left(\frac{\gamma - x_i}{\beta}\right)^{\delta}\log\left(\frac{\gamma - x_i}{\beta}\right),$$

$$U(\beta) = \frac{\partial \ell(\delta, \beta, \gamma)}{\partial \beta} = -\frac{n\delta}{\beta} + \frac{\delta}{\beta}\sum_{i=1}^{n}\left(\frac{\gamma - x_i}{\beta}\right)^{\delta},$$

$$U(\gamma) = \frac{\partial \ell(\delta, \beta, \gamma)}{\partial \gamma} = (\delta - 1)\sum_{i=1}^{n}\frac{1}{\gamma - x_i} - \frac{\delta}{\beta}\sum_{i=1}^{n}\left(\frac{\gamma - x_i}{\beta}\right)^{\delta - 1}.$$
Since all observations must lie below $\gamma$, the MLE of $\gamma$ is greater than $x_{(n)}$, where $x_{(i)}$ denotes the $i$-th order statistic. The MLEs of $\delta$, $\beta$ and $\gamma$ are obtained by solving the non-linear equations $U(\delta) = 0$, $U(\beta) = 0$ and $U(\gamma) = 0$.
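As a practical illustration, the following R sketch maximizes the log-likelihood directly with optim, enforcing the constraint $\gamma > x_{(n)}$ through the reparametrization $\gamma = x_{(n)} + e^{g}$. The function names and starting values are our own choices, and the routine inherits the unboundedness problem of the MLE noted in the Introduction (for $\delta < 1$ the likelihood can diverge as $\gamma$ approaches $x_{(n)}$).

```r
# Sketch: direct numerical MLE for the three-parameter RW distribution.
# gamma > x_(n) is enforced by the reparametrization gamma = max(x) + exp(g).

rw_loglik <- function(par, x) {
  delta <- exp(par[1]); beta <- exp(par[2])
  gam <- max(x) + exp(par[3])                    # guarantees gam > x_(n)
  z <- (gam - x) / beta
  sum(log(delta) - log(beta) + (delta - 1) * log(z) - z^delta)
}

rw_mle <- function(x) {
  fit <- optim(c(0, log(sd(x)), log(sd(x))), rw_loglik, x = x,
               control = list(fnscale = -1))     # fnscale = -1: maximize
  c(delta = exp(fit$par[1]), beta = exp(fit$par[2]),
    gamma = max(x) + exp(fit$par[3]))
}
```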
Moments Estimation

We know that

$$E(\gamma - X)^r = \int_{-\infty}^{\gamma}\frac{\delta}{\beta}\,(\gamma - x)^r\left(\frac{\gamma - x}{\beta}\right)^{\delta - 1}\exp\left[-\left(\frac{\gamma - x}{\beta}\right)^{\delta}\right]dx.$$

Substituting $u = \left(\frac{\gamma - x}{\beta}\right)^{\delta}$ gives $E(\gamma - X)^r = \beta^r\,\Gamma\!\left(\frac{r}{\delta} + 1\right)$. We can therefore write

$$E(X) = \gamma - \beta\,\Gamma\!\left(\tfrac{1}{\delta} + 1\right),$$
$$E(X^2) = 2\gamma E(X) - \gamma^2 + \beta^2\,\Gamma\!\left(\tfrac{2}{\delta} + 1\right),$$
$$E(X^3) = \gamma^3 - 3\gamma^2 E(X) + 3\gamma E(X^2) - \beta^3\,\Gamma\!\left(\tfrac{3}{\delta} + 1\right).$$

Replacing $E(X)$, $E(X^2)$ and $E(X^3)$ by the sample moments $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$, $\overline{x^2} = \frac{1}{n}\sum_{i=1}^{n} x_i^2$ and $\overline{x^3} = \frac{1}{n}\sum_{i=1}^{n} x_i^3$ and solving the resulting system of three equations, we obtain the moment estimates of $\delta$, $\beta$ and $\gamma$.
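A sketch of the moment estimation in R: the three moment equations are solved numerically by minimizing the sum of their squared residuals, a least-squares reformulation we adopt for simplicity (the function names are ours).

```r
# Sketch: moment estimation for the RW distribution, solving the three
# moment equations in least-squares form.

rw_mme <- function(x) {
  m1 <- mean(x); m2 <- mean(x^2); m3 <- mean(x^3)
  obj <- function(par) {
    delta <- exp(par[1]); beta <- exp(par[2]); gam <- par[3]
    e1 <- gam - beta * gamma(1 / delta + 1)
    e2 <- 2 * gam * e1 - gam^2 + beta^2 * gamma(2 / delta + 1)
    e3 <- gam^3 - 3 * gam^2 * e1 + 3 * gam * e2 - beta^3 * gamma(3 / delta + 1)
    (e1 - m1)^2 + (e2 - m2)^2 + (e3 - m3)^2
  }
  fit <- optim(c(0, log(sd(x)), max(x)), obj)   # heuristic starting values
  c(delta = exp(fit$par[1]), beta = exp(fit$par[2]), gamma = fit$par[3])
}
```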
Location and scale parameters free maximum likelihood Estimation

It is well known that the regularity conditions for the MLE are not satisfied for every three-parameter distribution, and several authors have therefore suggested a new method of estimation to address this problem. For example, Nagatsuka and Balakrishnan [13] studied methods of estimation for the three-parameter inverse Gaussian distribution, Nagatsuka et al. [15] for the three-parameter gamma distribution, and Nagatsuka et al. [14] for the three-parameter Weibull distribution, suggesting that the approach could be studied for other distributions such as the three-parameter RW distribution. In this paper we therefore study this new method of estimation for the RW distribution and compare it with the MLE and the MME. In this section we describe the new method.
We now describe the new method of estimation for the parameters of the three-parameter RW distribution and discuss some of its properties. Let $X_1, X_2, \ldots, X_n$ be $n$ independent random variables from the three-parameter RW distribution with common CDF (3). Throughout the paper, we assume that the following two conditions hold:

Assumption 1. The sample size $n$ is greater than 2.

Assumption 2. $X_i \neq X_j$ with probability 1, for some $i \neq j$.

These conditions are very natural and are required by all existing methods of estimation for three-parameter distributions. Before presenting the method of estimation, we first consider some statistics that depend on only one parameter. For $i = 1, 2, \ldots, n$, let $X_{(i)}$ denote the $i$-th order statistic among $X_1, X_2, \ldots, X_n$. Then, we consider the following statistics:

$$W_{(i)} = \frac{X_{(i)} - X_{(1)}}{X_{(n)} - X_{(1)}}, \quad i = 1, 2, \ldots, n. \quad (5)$$

It is easy to see that the $W_{(i)}$'s do not depend on $\gamma$ and $\beta$, but only on $\delta$. Statistics similar to those in (5) were considered by Nagatsuka and Kamakura [16] for the model presented by Castillo and Hadi [2]. Observe that $W_{(1)}$ and $W_{(n)}$ take on the values 0 and 1, respectively, with probability one. We consider the maximum likelihood estimator of $\delta$ based on $W_{(1)}, W_{(2)}, \ldots, W_{(n)}$. Since these statistics do not depend on $\gamma$, the likelihood function of $\delta$ based on them is bounded (as will be proved later). To obtain the maximum likelihood estimator of $\delta$ based on the $W_{(i)}$'s, we first derive the joint PDF of $W_{(2)}, W_{(3)}, \ldots, W_{(n-1)}$.
Proposition 1. For $\delta > 0$, the joint PDF of $W_{(2)}, W_{(3)}, \ldots, W_{(n-1)}$ is given by

$$f(w_2, w_3, \ldots, w_{n-1}) = n!\,\delta^{n}\int_{-\infty}^{0}\int_{-\infty}^{0}(-u)^{n-2}\left\{\prod_{i=1}^{n}(-v - u + u w_i)\right\}^{\delta-1}\exp\left[-\sum_{i=1}^{n}(-v - u + u w_i)^{\delta}\right]du\,dv$$

when $0 \le w_2 \le \cdots \le w_{n-1} \le 1$, with $w_1 = 0$ and $w_n = 1$.

Proof: Denote $F(\cdot\,; \delta, 1, 0)$ by $G(\cdot\,; \delta)$ and $f(\cdot\,; \delta, 1, 0)$ by $g(\cdot\,; \delta)$, for simplicity. Suppose $Z_1, Z_2, \ldots, Z_n$ are $n$ independent random variables from the standard RW distribution with PDF $g(z_i; \delta)$ and CDF $G(z_i; \delta)$. For $i = 1, 2, \ldots, n$, let $Z_{(i)}$ be the $i$-th order statistic among $Z_1, Z_2, \ldots, Z_n$. For $n-2$ real values $0 \le w_2 \le \cdots \le w_{n-1} \le 1$, let us consider

$$\Pr\left(W_{(i)} \le w_i,\ i = 2, \ldots, n-1\right) = \Pr\left(\frac{Z_{(i)} - Z_{(1)}}{Z_{(n)} - Z_{(1)}} \le w_i,\ i = 2, \ldots, n-1\right)$$
$$= \int_{-\infty}^{0}\int_{-\infty}^{v}\Pr\left(Z_{(i)} \le u + (v-u)w_i,\ i = 2, \ldots, n-1 \,\middle|\, Z_{(1)} = u, Z_{(n)} = v\right) n(n-1)\,g(v; \delta)\,g(u; \delta)\left\{G(v; \delta) - G(u; \delta)\right\}^{n-2}du\,dv$$
$$= \int_{-\infty}^{0}\int_{-\infty}^{v} n!\,g(v; \delta)\,g(u; \delta)\prod_{i=2}^{n-1} G(u + (v-u)w_i; \delta)\,du\,dv. \quad (6)$$

For every $u, v$ such that $u < v < 0$, $n > 2$ and $\delta > 0$, the integrand in (6), i.e.,
$$n!\,g(v; \delta)\,g(u; \delta)\prod_{i=2}^{n-1} G(u + (v-u)w_i; \delta),$$
has the partial derivative
$$n!\,g(v; \delta)\,g(u; \delta)\prod_{i=2}^{n-1}(v-u)\,g(u + (v-u)w_i; \delta)$$
with respect to $w_i$, $i = 2, \ldots, n-1$. Moreover, we note that
$$(n-2)!\prod_{i=2}^{n-1}(v-u)\,g(u + (v-u)w_i; \delta)\Big/\left\{G(v; \delta) - G(u; \delta)\right\}^{n-2}$$
is bounded above, say by $M$, so that
$$n!\,g(v; \delta)\,g(u; \delta)\prod_{i=2}^{n-1}(v-u)\,g(u + (v-u)w_i; \delta) \le M\,n(n-1)\,g(v; \delta)\,g(u; \delta)\left\{G(v; \delta) - G(u; \delta)\right\}^{n-2},$$
and by applying part (ii) of Theorem 16.8 of Billingsley [1], the partial derivative of $\Pr(W_{(i)} \le w_i,\ i = 2, \ldots, n-1)$ with respect to $w_i$, $i = 2, \ldots, n-1$, is given by

$$\int_{-\infty}^{0}\int_{-\infty}^{v}\frac{\partial^{\,n-2}}{\partial w_2\cdots\partial w_{n-1}}\,n!\,g(v; \delta)\,g(u; \delta)\prod_{i=2}^{n-1}G(u + (v-u)w_i; \delta)\,du\,dv = \int_{-\infty}^{0}\int_{-\infty}^{v} n!\,g(v; \delta)\,g(u; \delta)\,(v-u)^{n-2}\prod_{i=2}^{n-1}g(u + (v-u)w_i; \delta)\,du\,dv. \quad (7)$$

For $w_2, \ldots, w_{n-1}$ for which $0 \le w_2 \le \cdots \le w_{n-1} \le 1$ is not satisfied, the partial derivative of $\Pr(W_{(i)} \le w_i,\ i = 2, \ldots, n-1)$ with respect to $w_i$, $i = 2, \ldots, n-1$, is always equal to 0, since
$$\lim_{\Delta w_i\to 0,\ i=2,\ldots,n-1}\Pr\left(w_i \le W_{(i)} \le w_i + \Delta w_i,\ i = 2, \ldots, n-1\right)\Big/\prod_{i=2}^{n-1}\Delta w_i = 0.$$
After suitable transformations of the variables $u$ and $v$, the proof of Proposition 1 is completed. □

From Proposition 1, we can obtain the likelihood function of $\delta$ based on $W_{(1)}, \ldots, W_{(n)}$ as
$$L(\delta; w_2, w_3, \ldots, w_{n-1}) = f(w_2, w_3, \ldots, w_{n-1}; \delta), \quad (8)$$
where $w_2, w_3, \ldots, w_{n-1}$ are the realized values of $W_{(2)}, W_{(3)}, \ldots, W_{(n-1)}$. Then, the MLE of $\delta$ based on the $W_{(i)}$'s, denoted by $\hat{\delta}_W$, is obtained by maximizing $L(\delta; w_2, w_3, \ldots, w_{n-1})$ with respect to $\delta$, substituting the $W_{(i)}$'s for the $w_i$'s.

Proposition 2. For $\delta > 0$ and any given $w_2, w_3, \ldots, w_{n-1}$ such that $0 \le w_2 \le \cdots \le w_{n-1} \le 1$, the likelihood function $L(\delta; w_2, \ldots, w_{n-1})$ is differentiable with respect to $\delta$, and the derivative $L'(\delta; w_2, \ldots, w_{n-1})$ is given by

$$L'(\delta; w_2, \ldots, w_{n-1}) = n!\,\delta^{n}\int_{-\infty}^{0}\int_{-\infty}^{0}\left\{\frac{n}{\delta} + \sum_{i=1}^{n}\log(-v - u + u w_i)\left[1 - (-v - u + u w_i)^{\delta}\right]\right\}\exp\left\{(n-2)\log(-u) + (\delta-1)\sum_{i=1}^{n}\log(-v - u + u w_i) - \sum_{i=1}^{n}(-v - u + u w_i)^{\delta}\right\}du\,dv.$$

Proof:
For $\delta > 0$ and given $w_2, w_3, \ldots, w_{n-1}$ such that $0 \le w_2 \le \cdots \le w_{n-1} \le 1$, $L(\delta; w_2, \ldots, w_{n-1})$ can be rewritten as
$$L(\delta; w_2, \ldots, w_{n-1}) = n!\int_{-\infty}^{0}\int_{-\infty}^{0}\exp\{h(\delta, u, v; w_2, \ldots, w_{n-1})\}\,du\,dv,$$
where
$$h(\delta, u, v; w_2, \ldots, w_{n-1}) = n\log\delta + (n-2)\log(-u) + (\delta-1)\sum_{i=1}^{n}\log(-v - u + u w_i) - \sum_{i=1}^{n}(-v - u + u w_i)^{\delta}.$$
Without loss of generality, we denote $h(\delta, u, v; w_2, \ldots, w_{n-1})$ by $h(\delta, u, v)$ in the remainder of this proof. For every $\delta > 0$, $u < 0$, $v < 0$, $n > 2$ and $w_2, \ldots, w_{n-1}$ such that $0 \le w_2 \le \cdots \le w_{n-1} \le 1$, the partial derivative of $\exp\{h(\delta, u, v)\}$ with respect to $\delta$ is given by $h'(\delta, u, v)\exp\{h(\delta, u, v)\}$, where $h'(\delta, u, v)$, the partial derivative of $h(\delta, u, v)$ with respect to $\delta$, is
$$h'(\delta, u, v) = \frac{n}{\delta} + \sum_{i=1}^{n}\log(-v - u + u w_i)\left[1 - (-v - u + u w_i)^{\delta}\right].$$
Since $|h'(\delta, u, v)\exp\{h(\delta, u, v)/2\}|$ is bounded above, there exists $M$ such that $|h'(\delta, u, v)\exp\{h(\delta, u, v)/2\}| \le M$. Thus
$$\int\!\!\!\int\left|h'(\delta, u, v)\exp\{h(\delta, u, v)\}\right|du\,dv \le M\int\!\!\!\int\exp\{h(\delta, u, v)/2\}\,du\,dv \le M'\left(\int\!\!\!\int\exp\{h(\delta, u, v)\}\,du\,dv\right)^{1/2},$$
where the second inequality is due to Lyapunov's inequality and the final bound is finite by Proposition 1. Then, by applying part (ii) of Theorem 16.8 of Billingsley [1], the derivative of $L(\delta; w_2, \ldots, w_{n-1})$ is given by
$$L'(\delta; w_2, \ldots, w_{n-1}) = n!\int\!\!\!\int\frac{\partial\exp\{h(\delta, u, v)\}}{\partial\delta}\,du\,dv = n!\int\!\!\!\int h'(\delta, u, v)\exp\{h(\delta, u, v)\}\,du\,dv. \quad (9)$$
The proof of Proposition 2 is thus complete. □

The following theorem and corollary imply that the estimate of $\delta$, obtained by maximizing $L(\delta; w_2, \ldots, w_{n-1})$ or by solving the equation $L'(\delta; w_2, \ldots, w_{n-1}) = 0$, always exists uniquely over the entire parameter space.

Theorem 1. For $\delta > 0$ and any given $w_2, \ldots, w_{n-1}$ such that $0 \le w_2 \le \cdots \le w_{n-1} \le 1$, the likelihood equation $L'(\delta; w_2, \ldots, w_{n-1}) = 0$ always has a unique solution.

Proof:
First, we show that the likelihood equation has at least one solution. For simplicity, we write $L'(\delta)$ for $L'(\delta; w_2, \ldots, w_{n-1})$, $h(\delta, u, v)$ for $h(\delta, u, v; w_2, \ldots, w_{n-1})$ and $h'(\delta, u, v)$ for $h'(\delta, u, v; w_2, \ldots, w_{n-1})$ in the remainder of this proof. Since $\exp\{h(\delta, u, v)\} > 0$, $\lim_{\delta\to 0} h'(\delta, u, v) = \infty$ and $\lim_{\delta\to\infty} h'(\delta, u, v) < 0$ for every $u < 0$ and $v < 0$, there exist a positive real value $\delta_1$ such that $L'(\delta) > 0$ for all $\delta \in (0, \delta_1)$ and a positive real value $\delta_2$ such that $L'(\delta) < 0$ for all $\delta > \delta_2$; moreover, $L'(\delta)$ is continuous with respect to $\delta > 0$. Thus $L'(\delta) = 0$ always has at least one solution.

Next, we show that the number of solutions is exactly one. Let $s_{u,v,i:n} = \log(-u - v + u w_i)$, $i = 1, \ldots, n$, for simplicity. Then we rewrite $h(\delta, u, v)$ and $h'(\delta, u, v)$ as
$$h(\delta, u, v) = h(\delta, s_{u,v}) = n\log\delta + (n-2)\log(-u) + (\delta-1)\sum_{i=1}^{n} s_{u,v,i:n} - \sum_{i=1}^{n}\exp(\delta s_{u,v,i:n})$$
and
$$h'(\delta, u, v) = h'(\delta, s_{u,v}) = \frac{n}{\delta} + \sum_{i=1}^{n} s_{u,v,i:n} - \sum_{i=1}^{n} s_{u,v,i:n}\exp(\delta s_{u,v,i:n}) = \frac{n}{\delta} + \sum_{i=1}^{n}\left\{1 - \exp(\delta s_{u,v,i:n})\right\}s_{u,v,i:n} = \frac{n}{\delta} + T(\delta, s_{u,v}),$$
where $s_{u,v} = (s_{u,v,1:n}, \ldots, s_{u,v,n:n})$ and $T(\delta, s_{u,v}) = \sum_{i=1}^{n}\{1 - \exp(\delta s_{u,v,i:n})\}\,s_{u,v,i:n}$. Each $s_{u,v,i:n}$, $i = 1, \ldots, n$, takes on values over $(-\infty, +\infty)$ as $u$ and $v$ range over $u, v < 0$. Note that if $s_{u,v,i:n} < 0$, $i = 1, \ldots, n$, then $T(\delta, s_{u,v})$ is strictly increasing in each $s_{u,v,i:n}$ and takes on values over $(-\infty, 0)$ for any fixed $\delta > 0$; thus there exists a unique value of $T(\delta, s_{u,v})$ on the set $\{s_{u,v} : s_{u,v,i:n} < 0,\ i = 1, \ldots, n\}$ such that $h'(\delta, s_{u,v}) = 0$ for any fixed $\delta > 0$, which we denote by $T_1(\delta)$. We see that $h'(\delta, s_{u,v}) < 0$ on $\{s_{u,v} : T(\delta, s_{u,v}) < T_1(\delta),\ s_{u,v,i:n} < 0,\ i = 1, \ldots, n\}$ and $h'(\delta, s_{u,v}) > 0$ on $\{s_{u,v} : T(\delta, s_{u,v}) > T_1(\delta),\ s_{u,v,i:n} < 0,\ i = 1, \ldots, n\}$, for any $\delta > 0$. Analogously, if $s_{u,v,i:n} > 0$, $i = 1, \ldots, n$, then $T(\delta, s_{u,v})$ is strictly decreasing in each $s_{u,v,i:n}$ and takes on values over $(-\infty, 0)$; thus there exists a unique value of $T(\delta, s_{u,v})$ on the set $\{s_{u,v} : s_{u,v,i:n} > 0,\ i = 1, \ldots, n\}$ such that $h'(\delta, s_{u,v}) = 0$ for any fixed $\delta > 0$, which we denote by $T_2(\delta)$. We see that $h'(\delta, s_{u,v}) > 0$ on $\{s_{u,v} : T(\delta, s_{u,v}) > T_2(\delta),\ s_{u,v,i:n} > 0,\ i = 1, \ldots, n\}$ and $h'(\delta, s_{u,v}) < 0$ on $\{s_{u,v} : T(\delta, s_{u,v}) < T_2(\delta),\ s_{u,v,i:n} > 0,\ i = 1, \ldots, n\}$, for any $\delta > 0$.

For $\Delta\delta$, let
$$\Phi(\delta, \Delta\delta, s_{u,v}) = \frac{h'(\delta + \Delta\delta, s_{u,v})\exp\{h(\delta + \Delta\delta, s_{u,v})\}}{h'(\delta, s_{u,v})\exp\{h(\delta, s_{u,v})\}} = \frac{\dfrac{n}{\delta + \Delta\delta} + T(\delta + \Delta\delta, s_{u,v})}{\dfrac{n}{\delta} + T(\delta, s_{u,v})}\left(1 + \frac{\Delta\delta}{\delta}\right)^{n}\exp\left\{\Delta\delta\sum_{i=1}^{n} s_{u,v,i:n} + \sum_{i=1}^{n}\left[1 - \exp(\Delta\delta\, s_{u,v,i:n})\right]\exp(\delta s_{u,v,i:n})\right\}. \quad (10)$$
Then $L'(\delta + \Delta\delta)$ can be rewritten as
$$L'(\delta + \Delta\delta) = n!\int_{-\infty}^{0}\int_{-\infty}^{0}\Phi(\delta, \Delta\delta, s_{u,v})\,h'(\delta, s_{u,v})\exp\{h(\delta, s_{u,v})\}\,du\,dv. \quad (11)$$

From now on, let us focus on the case $\Delta\delta \ge 0$. We note that
$$\lim_{\Delta\delta\to 0}\Phi(\delta, \Delta\delta, s_{u,v}) = \Phi(\delta, 0, s_{u,v}) = 1 \quad\text{for } (u,v)\in\{T(\delta, s_{u,v}) \neq T_1(\delta),\,T_2(\delta)\}, \quad (12)$$
while
$$\Phi(\delta, \Delta\delta, s_{u,v}) = \begin{cases}1 & \text{if } \Delta\delta = 0,\\ \pm\infty & \text{if } \Delta\delta > 0,\end{cases} \quad (u,v)\in\{T(\delta, s_{u,v}) = T_1(\delta) \text{ or } T_2(\delta)\}, \quad (13)$$
for any $\delta$. Let $\delta^*$ be one of the solutions of $L'(\delta) = 0$. Then
$$\lim_{\Delta\delta\to 0^+}\frac{L'(\delta^* + \Delta\delta) - L'(\delta^*)}{\Delta\delta} = \lim_{\Delta\delta\to 0^+}\frac{L'(\delta^* + \Delta\delta)}{\Delta\delta} = \lim_{\Delta\delta\to 0^+}\frac{n!}{\Delta\delta}\Bigg[\iint_{\{T(\delta^*, s_{u,v}) = T_1(\delta^*) \text{ or } T_2(\delta^*)\}}\Phi(\delta^*, \Delta\delta, s_{u,v})\,h'(\delta^*, s_{u,v})\exp\{h(\delta^*, s_{u,v})\}\,du\,dv + \iint_{\{T(\delta^*, s_{u,v}) \neq T_1(\delta^*),\,T_2(\delta^*)\}}\Phi(\delta^*, \Delta\delta, s_{u,v})\,h'(\delta^*, s_{u,v})\exp\{h(\delta^*, s_{u,v})\}\,du\,dv\Bigg]. \quad (14)$$

From (12) and (13), it is easy to see that, as $\Delta\delta$ decreases, $\Phi(\delta^*, \Delta\delta, s_{u,v})$ approaches 1 faster for $(u,v)\in\{T(\delta^*, s_{u,v}) \neq T_1(\delta^*),\,T_2(\delta^*)\}$ than for $(u,v)\in\{T(\delta^*, s_{u,v}) = T_1(\delta^*) \text{ or } T_2(\delta^*)\}$. Hence, as $\Delta\delta$ decreases, the integral over $\{T \neq T_1, T_2\}$ approaches $\iint_{\{T \neq T_1,\,T_2\}} h'(\delta^*, s_{u,v})\exp\{h(\delta^*, s_{u,v})\}\,du\,dv$ faster than the integral over $\{T = T_1 \text{ or } T_2\}$ approaches $\iint_{\{T = T_1 \text{ or } T_2\}} h'(\delta^*, s_{u,v})\exp\{h(\delta^*, s_{u,v})\}\,du\,dv$. Note further that
$$\iint_{\{T \neq T_1,\,T_2\}} h'(\delta^*, s_{u,v})\exp\{h(\delta^*, s_{u,v})\}\,du\,dv = \iint_{\{T = T_1 \text{ or } T_2\}} h'(\delta^*, s_{u,v})\exp\{h(\delta^*, s_{u,v})\}\,du\,dv = 0.$$
Therefore, by the fundamental theory of differential calculus, the sign of the right-hand side of (14) agrees, for sufficiently small $\Delta\delta > 0$, with the sign of $\iint_{\{T = T_1 \text{ or } T_2\}}\Phi(\delta^*, \Delta\delta, s_{u,v})\,h'(\delta^*, s_{u,v})\exp\{h(\delta^*, s_{u,v})\}\,du\,dv$, which implies
$$\lim_{\Delta\delta\to 0^+}\frac{L'(\delta^* + \Delta\delta) - L'(\delta^*)}{\Delta\delta} < 0, \quad (15)$$
since, for any $\Delta\delta > 0$, $h'(\delta^* + \Delta\delta, s_{u,v}) < 0$ and $\exp\{h(\delta^* + \Delta\delta, s_{u,v})\} > 0$ for any $(u,v)\in\{T(\delta^*, s_{u,v}) = T_1(\delta^*) \text{ or } T_2(\delta^*)\}$, and thus
$$\iint_{\{T = T_1 \text{ or } T_2\}}\Phi(\delta^*, \Delta\delta, s_{u,v})\,h'(\delta^*, s_{u,v})\exp\{h(\delta^*, s_{u,v})\}\,du\,dv = \iint_{\{T = T_1 \text{ or } T_2\}} h'(\delta^* + \Delta\delta, s_{u,v})\exp\{h(\delta^* + \Delta\delta, s_{u,v})\}\,du\,dv < 0.$$
Analogously, we obtain
$$\lim_{\Delta\delta\to 0^-}\frac{L'(\delta^* + \Delta\delta) - L'(\delta^*)}{\Delta\delta} < 0; \quad (16)$$
the proof is very similar to that of (15) and is omitted for the sake of brevity. It follows from (15), (16) and the differentiability of $L'(\delta)$ with respect to $\delta$ (the proof of which is very similar to the proof of the differentiability of $L(\delta; w_2, \ldots, w_{n-1})$ in Proposition 2, and is omitted for brevity) that $L''(\delta^*) < 0$ holds at every solution $\delta^*$. This clearly implies that $L'(\delta)$ changes sign only once with respect to $\delta$. From the facts established above, $L'(\delta) = 0$ always has a unique solution with respect to $\delta$. The proof of Theorem 1 is thus complete. □
Corollary 1. For $\delta > 0$ and any given $w_2, w_3, \ldots, w_{n-1}$ such that $0 \le w_2 \le \cdots \le w_{n-1} \le 1$, the likelihood function $L(\delta; w_2, w_3, \ldots, w_{n-1})$ is unimodal with respect to $\delta$.

Proof: Corollary 1 is obvious from Theorem 1, and the proof is therefore omitted. □

One of the main purposes of this section is to prove that the estimator of $\delta$ is consistent. The following lemma is needed before presenting this result.

Lemma 1. For any fixed $\delta \neq \delta_0$, where $\delta_0$ is the true value of the parameter $\delta$,
$$\lim_{n\to\infty}\Pr\left(L(\delta; W_{(2)}, \ldots, W_{(n-1)}) < L(\delta_0; W_{(2)}, \ldots, W_{(n-1)})\right) = 1.$$

Proof: Let $W_i$, $i = 2, \ldots, n-1$, be the random variables whose order statistics are $W_{(i)}$, $i = 2, \ldots, n-1$. For every $u$ and $v$ such that $u < v < 0$, consider the conditions $Z_{(1)} = u$, $Z_{(n)} = v$, where $Z_{(1)} = (X_{(1)} - \gamma_0)/\beta_0$, $Z_{(n)} = (X_{(n)} - \gamma_0)/\beta_0$, and $\beta_0$ and $\gamma_0$ are the true values of $\beta$ and $\gamma$, respectively. Since the conditional joint PDF of the order statistics of the $W_i$'s, given $Z_{(1)} = u$, $Z_{(n)} = v$, is
$$(n-2)!\prod_{i=2}^{n-1}(v-u)\,g(u + (v-u)w_i; \delta_0)\Big/\left\{G(v; \delta_0) - G(u; \delta_0)\right\}^{n-2}, \quad 0 \le w_2 \le \cdots \le w_{n-1} \le 1,$$
the $W_i$'s, $i = 2, \ldots, n-1$, are conditionally independent given $Z_{(1)} = u$, $Z_{(n)} = v$, and are distributed with the common conditional PDF
$$(v-u)\,g(u + (v-u)w; \delta_0)\Big/\left\{G(v; \delta_0) - G(u; \delta_0)\right\}, \quad 0 \le w \le 1. \quad (17)$$
Denote $(n-2)!\prod_{i=2}^{n-1}(v-u)g(u + (v-u)w_i; \delta)/\{G(v; \delta) - G(u; \delta)\}^{n-2}$ by $L_{u,v}(\delta; W_{(2)}, \ldots, W_{(n-1)})$. For any fixed $u$ and $v$ such that $u < v < 0$, under the conditions $Z_{(1)} = u$, $Z_{(n)} = v$, consider, for every $\delta \neq \delta_0$ and $n > 2$,
$$\frac{1}{n-2}\log\frac{L_{u,v}(\delta; W_{(2)}, \ldots, W_{(n-1)})}{L_{u,v}(\delta_0; W_{(2)}, \ldots, W_{(n-1)})} = \frac{1}{n-2}\sum_{i=2}^{n-1}\log\left[\frac{(v-u)\,g(u + (v-u)W_i; \delta)/\{G(v; \delta) - G(u; \delta)\}}{(v-u)\,g(u + (v-u)W_i; \delta_0)/\{G(v; \delta_0) - G(u; \delta_0)\}}\right]. \quad (18)$$
By the law of large numbers, (18) tends in probability to
$$E\left[\log\frac{(v-u)\,g(u + (v-u)W; \delta)/\{G(v; \delta) - G(u; \delta)\}}{(v-u)\,g(u + (v-u)W; \delta_0)/\{G(v; \delta_0) - G(u; \delta_0)\}}\right], \quad (19)$$
where $W$ is a random variable distributed with the conditional PDF in (17), given $Z_{(1)} = u$, $Z_{(n)} = v$. By Jensen's inequality,
$$E\left[\log\frac{(v-u)\,g(u + (v-u)W; \delta)/\{G(v; \delta) - G(u; \delta)\}}{(v-u)\,g(u + (v-u)W; \delta_0)/\{G(v; \delta_0) - G(u; \delta_0)\}}\right] < \log E\left[\frac{(v-u)\,g(u + (v-u)W; \delta)/\{G(v; \delta) - G(u; \delta)\}}{(v-u)\,g(u + (v-u)W; \delta_0)/\{G(v; \delta_0) - G(u; \delta_0)\}}\right] = \log\int_0^1\frac{(v-u)\,g(u + (v-u)w; \delta)}{G(v; \delta) - G(u; \delta)}\,dw = 0. \quad (20)$$
It follows that
$$\lim_{n\to\infty}\Pr\left\{\frac{1}{n-2}\log\frac{L_{u,v}(\delta; W_{(2)}, \ldots, W_{(n-1)})}{L_{u,v}(\delta_0; W_{(2)}, \ldots, W_{(n-1)})} < 0 \,\middle|\, Z_{(1)} = u, Z_{(n)} = v\right\} = 1,$$
or
$$\lim_{n\to\infty}\Pr\left\{L_{u,v}(\delta; W_{(2)}, \ldots, W_{(n-1)}) < L_{u,v}(\delta_0; W_{(2)}, \ldots, W_{(n-1)}) \,\middle|\, Z_{(1)} = u, Z_{(n)} = v\right\} = 1. \quad (21)$$
By the positivity and integrability of $L_{u,v}(\delta; W_{(2)}, \ldots, W_{(n-1)})$ and $L_{u,v}(\delta_0; W_{(2)}, \ldots, W_{(n-1)})$, (21) implies
$$\lim_{n\to\infty}\Pr\left\{L(\delta; W_{(2)}, \ldots, W_{(n-1)}) < L(\delta_0; W_{(2)}, \ldots, W_{(n-1)}) \,\middle|\, Z_{(1)} = u, Z_{(n)} = v\right\} = 1. \quad (22)$$
Moreover,
$$\int_{-\infty}^{0}\int_{-\infty}^{v} n(n-1)\,g(u; \delta_0)\,g(v; \delta_0)\left\{G(v; \delta_0) - G(u; \delta_0)\right\}^{n-2}du\,dv = 1 \quad (23)$$
and
$$\int_{-\infty}^{0}\int_{-\infty}^{v} n(n-1)\,g(u; \delta_0)\,g(v; \delta_0)\left\{G(v; \delta_0) - G(u; \delta_0)\right\}^{n-2}\Pr\left\{L(\delta; \cdot) < L(\delta_0; \cdot) \,\middle|\, Z_{(1)} = u, Z_{(n)} = v\right\}du\,dv \le 1, \quad (24)$$
since the conditional probability in (24) is bounded by 1. Then, applying the dominated convergence theorem to (23) and (24), it follows from (22) that
$$\lim_{n\to\infty}\Pr\left(L(\delta; W_{(2)}, \ldots, W_{(n-1)}) < L(\delta_0; W_{(2)}, \ldots, W_{(n-1)})\right) = 1.$$
The proof of Lemma 1 is thus complete. □
Theorem 2. The estimator $\hat{\delta}_W$ is consistent for $\delta > 0$.

Proof: The proof is very similar to the proof of Theorem 3.7 of Lehmann and Casella [11], and is therefore omitted. □
Once we obtain the estimate of $\delta$ by the method outlined above, we may proceed to the estimation of $\gamma$ and $\beta$, where the estimators have the following properties.

Property 1. The estimates exist uniquely for all $n$ and for all $\delta$, $\gamma$ and $\beta$, where $n > 2$, $\delta > 0$, $-\infty < \gamma < \infty$ and $\beta > 0$.

Property 2. The estimators are consistent for $\gamma$ and $\beta$, respectively.

Before providing the estimators having the above properties, we first consider the following initial estimators of $\gamma$ and $\beta$:
$$\hat{\gamma}_{init} = X_{(n)} \quad (25)$$
and
$$\hat{\beta}_{init} = \left[\frac{\sum_{i=1}^{n}(\hat{\gamma}_{init} - x_i)^{\hat{\delta}_W}}{n}\right]^{1/\hat{\delta}_W}. \quad (26)$$
It is evident that the estimates $\hat{\gamma}_{init}$ and $\hat{\beta}_{init}$ exist uniquely, given the observations $x_1, \ldots, x_n$, where $\hat{\delta}_W$ is the realized value of the estimator of $\delta$. It is well known that $X_{(n)}$ tends in probability to $\gamma$ as $n\to\infty$ for every $\gamma$, since
$$E(X_{(n)} - \gamma) = -\frac{n\delta}{\beta}\int_{-\infty}^{\gamma}(\gamma - x)\left(\frac{\gamma - x}{\beta}\right)^{\delta-1}\exp\left[-n\left(\frac{\gamma - x}{\beta}\right)^{\delta}\right]dx,$$
and substituting $z = n\left(\frac{\gamma - x}{\beta}\right)^{\delta}$ gives $E(X_{(n)} - \gamma) = -\beta\,n^{-1/\delta}\int_{0}^{\infty} z^{1/\delta} e^{-z}\,dz \to 0$ as $n\to\infty$. Assuming that $\delta$ and $\gamma$ are known and substituting $\hat{\delta}_W$ for $\delta$ and $\hat{\gamma}_{init}$ for $\gamma$ in (26), $\hat{\beta}_{init}$ is the maximum likelihood estimator of $\beta$ in the regular case and is therefore consistent for $\beta$. It follows from these facts and Slutsky's theorem that $\hat{\beta}_{init}$ is consistent for $\beta$. The estimators $\hat{\gamma}_{init}$ and $\hat{\beta}_{init}$ thus have Properties 1 and 2 mentioned above. However, these estimators could have considerable bias, since $\hat{\gamma}_{init}$ has significant bias, so we need to consider a bias correction. Since
$$E\left[X_{(n)}\right] = \gamma - \beta\,\Gamma\!\left(1 + \frac{1}{\delta}\right)n^{-1/\delta},$$
it is easy to see that, upon substituting $\hat{\delta}_W$ for $\delta$ and $\hat{\beta}_{init}$ for $\beta$, the bias-corrected estimator of $\gamma$ becomes
$$\hat{\gamma}_c = X_{(n)} + \hat{\beta}_{init}\,\Gamma\!\left(1 + \frac{1}{\hat{\delta}_W}\right)n^{-1/\hat{\delta}_W}. \quad (27)$$
We then obtain the bias-corrected estimator of $\beta$ as
$$\hat{\beta}_c = \left[\frac{\sum_{i=1}^{n}(\hat{\gamma}_c - x_i)^{\hat{\delta}_W}}{n}\right]^{1/\hat{\delta}_W}.$$
It follows from the above forms, and from the fact that the term $\hat{\beta}_{init}\,\Gamma(1 + 1/\hat{\delta}_W)\,n^{-1/\hat{\delta}_W}$ in (27) tends in probability to 0 as $n\to\infty$ (which can be shown easily using Slutsky's theorem), that $\hat{\gamma}_c$ and $\hat{\beta}_c$ also have Properties 1 and 2.
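To make the procedure concrete, the following R sketch evaluates the likelihood of Proposition 1 by nested numerical integration over $u, v < 0$, maximizes it over $\delta$ with optimize, and then applies (25)-(27). The search interval (0.05, 20) for $\delta$ and all function names are our own choices, the constant $n!$ is dropped since it does not affect the maximizer, and the nested integrate calls are numerically delicate, especially for larger $n$; this is an illustrative sketch, not the authors' implementation.

```r
# Sketch: LSPFE for the three-parameter RW distribution.

rw_lspfe_delta <- function(x) {
  xs <- sort(x); n <- length(x)
  w  <- (xs - xs[1]) / (xs[n] - xs[1])   # the W_(i) of (5): w[1] = 0, w[n] = 1
  loglik <- function(delta) {
    inner <- function(v) sapply(v, function(vv)
      integrate(function(u) {
        tm <- outer(u, w, function(uu, ww) -vv - uu + uu * ww)   # all entries > 0
        exp((n - 2) * log(-u) + rowSums((delta - 1) * log(tm) - tm^delta))
      }, -Inf, 0)$value)
    # log-likelihood up to the additive constant log(n!)
    n * log(delta) + log(integrate(inner, -Inf, 0)$value)
  }
  optimize(loglik, interval = c(0.05, 20), maximum = TRUE)$maximum
}

rw_lspfe <- function(x) {
  n  <- length(x)
  d  <- rw_lspfe_delta(x)
  g0 <- max(x)                                    # gamma_init, eq. (25)
  b0 <- (sum((g0 - x)^d) / n)^(1 / d)             # beta_init, eq. (26)
  g  <- g0 + b0 * gamma(1 + 1 / d) * n^(-1 / d)   # bias-corrected gamma, eq. (27)
  b  <- (sum((g - x)^d) / n)^(1 / d)              # bias-corrected beta
  c(delta = d, beta = b, gamma = g)
}
```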
Simulation

We carry out a Monte Carlo simulation study to compare the proposed estimators, termed LSPFE, MLE and MME. In the simulation study, the values of the shape parameter $\delta$ are selected as 0.5, 1, 2, 3, 4, 5, we take $\gamma = 0$ and $\beta = 10$, and the sample size is taken to be 20, 50, 100. All programs in this numerical study were written in R. Tables 1-6 display the bias and root mean squared error based on 100 Monte Carlo runs for each configuration. The Bias entry in the joint column is the average of the biases of the three parameter estimates, and the RMSE entry in the joint column is the square root of the trace of the MSE matrix of the three parameter estimates; these are used to evaluate the overall performance of each method. Figures 2-10 plot the bias and RMSE values of Tables 1-6.
Table 1: Bias and RMSE of the LSPF, ML and MM estimators based on 100 simulations with $\delta = 0.5$ and $n = 20, 50, 100$.

| $n$ | Method | Location Bias | Location RMSE | Shape Bias | Shape RMSE | Scale Bias | Scale RMSE | Joint Bias | Joint RMSE |
|---|---|---|---|---|---|---|---|---|---|
| 20 | LSPF | -0.1610 | 0.5483 | -0.0431 | 0.7763 | 0.2325 | 0.5881 | 0.0095 | 1.1177 |
| 20 | MLE | -0.5512 | 0.5463 | 0.0091 | 0.5123 | -0.204 | 0.7746 | -0.2481 | 1.0775 |
| 20 | MME | -0.5971 | 0.7107 | -0.1231 | 0.7617 | 1.4276 | 2.3191 | 0.2358 | 2.5423 |
| 50 | LSPF | 0.1375 | 0.5235 | -0.0471 | 0.6604 | 0.1664 | 0.4469 | 0.0856 | 0.9539 |
| 50 | MLE | 0.2975 | 0.5380 | 0.0231 | 0.4406 | 0.1959 | 0.6311 | 0.1721 | 0.9391 |
| 50 | MME | -0.5525 | 0.6965 | -0.1161 | 0.9375 | 1.1753 | 1.8561 | 0.1689 | 2.1931 |
| 100 | LSPF | -0.1166 | 0.5051 | -0.0451 | 0.7485 | -0.0011 | 0.0008 | -0.0541 | 0.9030 |
| 100 | MLE | -0.1833 | 0.5076 | -0.0661 | 0.6128 | -0.0011 | 0.0011 | -0.0831 | 0.7957 |
| 100 | MME | -0.5466 | 0.5507 | -0.0841 | 0.4406 | 1.2963 | 1.8461 | 0.2219 | 1.9762 |
Table 2: Bias and RMSE of the LSPF, ML and MM estimators based on 100 simulations with $\delta = 1$ and $n = 20, 50, 100$.

| $n$ | Method | Location Bias | Location RMSE | Shape Bias | Shape RMSE | Scale Bias | Scale RMSE | Joint Bias | Joint RMSE |
|---|---|---|---|---|---|---|---|---|---|
| 20 | LSPF | 0.3071 | 0.4804 | -0.0751 | 0.3331 | 0.4372 | 0.7175 | 0.2230 | 0.9255 |
| 20 | MLE | -0.2617 | 0.8345 | -0.2693 | 0.5987 | -0.4641 | 0.7351 | -0.3303 | 1.2631 |
| 20 | MME | -0.8715 | 1.0553 | -0.6748 | 1.0009 | 1.6422 | 2.1225 | 0.0318 | 2.5731 |
| 50 | LSPF | 0.3005 | 0.4769 | -0.2021 | 0.5098 | -0.0131 | 0.0551 | 0.0295 | 0.7003 |
| 50 | MLE | 0.2387 | 0.7235 | -0.4966 | 0.7139 | -0.0621 | 0.1121 | -0.1059 | 1.0227 |
| 50 | MME | -0.8476 | 0.8930 | -0.6274 | 1.0793 | 1.1431 | 1.9301 | -0.1107 | 2.3849 |
| 100 | LSPF | -0.1676 | 0.1974 | 0.0996 | 0.7273 | 0.0082 | 0.0223 | -0.0199 | 0.7541 |
| 100 | MLE | -0.2291 | 0.5246 | -0.2066 | 0.8295 | -0.0112 | 0.0265 | -0.1485 | 0.9818 |
| 100 | MME | -0.8158 | 0.8786 | -0.7266 | 0.9551 | 1.1161 | 1.3683 | -0.1422 | 1.8859 |
Table 3: Bias and RMSE of the LSPF, ML and MM estimators based on 100 simulations with $\delta = 2$ and $n = 20, 50, 100$.

| $n$ | Method | Location Bias | Location RMSE | Shape Bias | Shape RMSE | Scale Bias | Scale RMSE | Joint Bias | Joint RMSE |
|---|---|---|---|---|---|---|---|---|---|
| 20 | LSPF | -0.2212 | 0.9367 | -0.4791 | 1.0354 | -0.3491 | 0.3505 | -0.3496 | 1.4396 |
| 20 | MLE | -0.3448 | 0.8677 | -0.9122 | 0.9444 | -0.4692 | 0.4712 | -0.5752 | 1.3663 |
| 20 | MME | -0.9233 | 1.0066 | -1.7770 | 1.9459 | 1.1964 | 2.8355 | -0.5013 | 3.5833 |
| 50 | LSPF | 0.1775 | 0.7071 | -0.4071 | 0.8143 | -0.2104 | 0.2523 | -0.1465 | 1.1076 |
| 50 | MLE | -0.2956 | 0.6685 | -0.7533 | 1.1845 | -0.2875 | 0.3550 | -0.4452 | 1.4057 |
| 50 | MME | -0.8775 | 0.9784 | -1.2994 | 1.9909 | 1.1918 | 1.7498 | -0.3282 | 2.8254 |
| 100 | LSPF | 0.1334 | 0.1201 | -0.1352 | 0.4837 | -0.0624 | 0.1035 | -0.0211 | 0.5091 |
| 100 | MLE | 0.1066 | 0.5962 | -0.9841 | 1.3548 | -0.1422 | 0.1593 | -0.3397 | 1.4887 |
| 100 | MME | -0.7023 | 0.9346 | -1.4392 | 1.9803 | 1.1033 | 1.5260 | -0.3460 | 2.6691 |

Table 4: Bias and RMSE of the LSPF, ML and MM estimators based on 100 simulations with $\delta = 3$ and $n = 20, 50, 100$.

| $n$ | Method | Location Bias | Location RMSE | Shape Bias | Shape RMSE | Scale Bias | Scale RMSE | Joint Bias | Joint RMSE |
|---|---|---|---|---|---|---|---|---|---|
| 20 | LSPF | -0.4035 | 0.4207 | -1.3051 | 2.3454 | -0.4162 | 0.4272 | -0.7081 | 2.4208 |
| 20 | MLE | -0.4816 | 0.8843 | -1.2705 | 2.4899 | -0.5883 | 0.5913 | -0.7798 | 2.7076 |
| 20 | MME | -0.9766 | 1.0329 | -2.1584 | 3.0973 | 1.58333 | 2.5923 | -0.5171 | 4.1689 |
| 50 | LSPF | -0.1975 | 0.4092 | -1.1972 | 1.3112 | -0.2362 | 0.3702 | -0.5435 | 1.4226 |
| 50 | MLE | -0.3425 | 0.5783 | -1.4134 | 1.5540 | -0.2221 | 0.4404 | -0.6591 | 1.7156 |
| 50 | MME | -0.8965 | 0.9265 | -2.2703 | 2.8085 | 1.42134 | 1.8190 | -0.5817 | 3.4720 |
| 100 | LSPF | 0.1885 | 0.2859 | -0.5832 | 1.1419 | -0.1593 | 0.3682 | -0.1844 | 1.2334 |
| 100 | MLE | -0.2633 | 0.3314 | -1.8962 | 2.0324 | -0.1844 | 0.4222 | -0.7811 | 2.1021 |
| 100 | MME | -0.7198 | 0.9956 | -2.2501 | 2.7733 | 1.41525 | 1.6870 | -0.5182 | 3.3954 |
Table 5: Bias and RMSE of the LSPF, ML and MM estimators based on 100 simulations with $\delta = 4$ and $n = 20, 50, 100$.

| $n$ | Method | Location Bias | Location RMSE | Shape Bias | Shape RMSE | Scale Bias | Scale RMSE | Joint Bias | Joint RMSE |
|---|---|---|---|---|---|---|---|---|---|
| 20 | LSPF | -0.5152 | 0.5994 | -1.7012 | 2.9138 | -0.4666 | 0.5766 | -0.8941 | 3.0301 |
| 20 | MLE | -0.5823 | 0.6881 | -2.1624 | 3.0818 | -0.3565 | 0.6814 | -1.0334 | 3.2303 |
| 20 | MME | -0.9361 | 0.9962 | -2.8055 | 4.2989 | 1.8166 | 2.8170 | -0.6412 | 5.2353 |
| 50 | LSPF | -0.4353 | 0.4948 | -1.5972 | 2.3024 | -0.3864 | 0.5237 | -0.8062 | 2.4125 |
| 50 | MLE | -0.5021 | 0.5903 | -2.3731 | 2.4607 | -0.3365 | 0.6073 | -1.0703 | 2.6023 |
| 50 | MME | -0.7433 | 0.8967 | -3.1522 | 3.7828 | 1.4953 | 2.0957 | -0.7992 | 4.4165 |
| 100 | LSPF | -0.3956 | 0.4070 | -0.7283 | 1.4701 | -0.2830 | 0.4881 | -0.4692 | 1.6015 |
| 100 | MLE | -0.4732 | 0.4163 | -2.0125 | 2.6382 | 0.2865 | 0.5355 | -0.7321 | 2.7240 |
| 100 | MME | -0.6324 | 0.7384 | -2.9104 | 3.7994 | 1.0646 | 1.8203 | -0.8254 | 4.2771 |
Table 6: Bias and RMSE of the LSPF, ML and MM estimators based on 100 simulations with $\delta = 5$ and $n = 20, 50, 100$.

| $n$ | Method | Location Bias | Location RMSE | Shape Bias | Shape RMSE | Scale Bias | Scale RMSE | Joint Bias | Joint RMSE |
|---|---|---|---|---|---|---|---|---|---|
| 20 | LSPF | -0.6162 | 0.8932 | -2.4394 | 3.5809 | -0.4065 | 0.6076 | -1.1532 | 3.7403 |
| 20 | MLE | -0.6511 | 0.9086 | -2.6842 | 4.2845 | -0.5026 | 0.6279 | -1.2794 | 4.4246 |
| 20 | MME | -0.9133 | 1.1093 | -3.3283 | 4.9113 | 1.5332 | 3.4707 | -0.9023 | 6.1153 |
| 50 | LSPF | -0.4525 | 0.4521 | -2.2532 | 3.1435 | -0.3915 | 0.5493 | -1.0322 | 3.2230 |
| 50 | MLE | -0.5271 | 0.5284 | -2.5521 | 3.8372 | -0.4294 | 0.5873 | -1.1693 | 3.9176 |
| 50 | MME | -0.7895 | 0.9893 | -3.4084 | 4.8109 | 1.4839 | 2.4489 | -0.9045 | 5.4882 |
| 100 | LSPF | -0.4502 | 0.4465 | -1.5851 | 1.9033 | -0.3141 | 0.4614 | -0.7837 | 2.0087 |
| 100 | MLE | -0.4934 | 0.4586 | -2.4012 | 3.4528 | -0.3723 | 0.5704 | -1.0886 | 3.5295 |
| 100 | MME | -0.7540 | 0.8926 | -3.3794 | 4.7901 | 1.2931 | 1.8546 | -0.9462 | 5.2136 |

Figure 2: Bias and RMSE of the LSPF, ML and MM estimators for $\delta$ based on 100 simulations with $\delta = 0.5, 1, 2, 3, 4, 5$ and $n = 20$.

Figure 3: Bias and RMSE of the LSPF, ML and MM estimators for $\delta$ based on 100 simulations with $\delta = 0.5, 1, 2, 3, 4, 5$ and $n = 50$.

Figure 4: Bias and RMSE of the LSPF, ML and MM estimators for $\delta$ based on 100 simulations with $\delta = 0.5, 1, 2, 3, 4, 5$ and $n = 100$.

Figure 5: Bias and RMSE of the LSPF, ML and MM estimators for $\beta$ based on 100 simulations with $\delta = 0.5, 1, 2, 3, 4, 5$ and $n = 20$.

Figure 6: Bias and RMSE of the LSPF, ML and MM estimators for $\beta$ based on 100 simulations with $\delta = 0.5, 1, 2, 3, 4, 5$ and $n = 50$.

Figure 7: Bias and RMSE of the LSPF, ML and MM estimators for $\beta$ based on 100 simulations with $\delta = 0.5, 1, 2, 3, 4, 5$ and $n = 100$.

Figure 8: Bias and RMSE of the LSPF, ML and MM estimators for $\gamma$ based on 100 simulations with $\delta = 0.5, 1, 2, 3, 4, 5$ and $n = 20$.

Figure 9: Bias and RMSE of the LSPF, ML and MM estimators for $\gamma$ based on 100 simulations with $\delta = 0.5, 1, 2, 3, 4, 5$ and $n = 50$.

Figure 10: Bias and RMSE of the LSPF, ML and MM estimators for $\gamma$ based on 100 simulations with $\delta = 0.5, 1, 2, 3, 4, 5$ and $n = 100$.

From these results, we observe the following.

1. The LSPFE has the smallest RMSE and absolute bias for every fixed $n$ and $\delta$.
2. As the sample size increases, the RMSE and absolute bias of the LSPFE and the MLE decrease.
3. The performance of the MME varies little with the sample size; its bias and RMSE are almost independent of $n$.
4. When $\delta$ increases with the sample size held fixed, the RMSE and absolute bias increase.
5. The biases of the LSPFEs are negative, that is, the LSPFEs tend to underestimate the true values, whereas the biases of the MLE and the MME can be negative or positive.
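As an illustration of the simulation design, the following R sketch reproduces one cell of the study ($\delta = 2$, $\beta = 10$, $\gamma = 0$, $n = 50$, 100 Monte Carlo runs), reusing the functions sketched in the earlier sections; with the nested integration inside rw_lspfe this is slow and is meant only to show the structure of the computation, not to reproduce the tabled values exactly.

```r
# Sketch: one cell of the Monte Carlo study, reusing rrw(), rw_lspfe(),
# rw_mle() and rw_mme() from the earlier sketches.
set.seed(1)
true <- c(delta = 2, beta = 10, gamma = 0)
runs <- replicate(100, {
  x <- rrw(50, delta = 2, beta = 10, gamma = 0)
  rbind(LSPFE = rw_lspfe(x), MLE = rw_mle(x), MME = rw_mme(x))
}, simplify = "array")                 # 3 methods x 3 parameters x 100 runs
err  <- sweep(runs, 2, true)           # estimation errors
bias <- apply(err, 1:2, mean)          # bias per method and parameter
rmse <- sqrt(apply(err^2, 1:2, mean))  # RMSE per method and parameter
```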
Illustrative examples
We demonstrate the proposed methods for the three-parameter RW distribution in this section by using two data sets, one with a large sample size and the other with a small sample size. The first sample, taken from Cohen [4] and Elderton and Johnson [5], is an observed age distribution of holders of a certain type of life insurance policy; the data are given in Table 7. Cohen [4] obtained the MME for these data: $\beta = 310.54659$, $\gamma = 339.7792125$ and $\delta = 40.043878$. We computed the ML and LSPF estimates of $\beta$, $\gamma$ and $\delta$; the results are given in Table 8. Figures 11 and 12 show the density plots (fitted PDF versus the histogram) and the distribution plots (fitted CDF versus the empirical CDF) for the three estimation methods. The figures show that the LSPF estimators provide the best fit.
Table 7: The age data for the life insurance policy holders.

| Age in years | 5-14 | 15-24 | 25-34 | 35-44 | 45-54 | 55-64 | 65-74 | 75-84 | Total |
|---|---|---|---|---|---|---|---|---|---|
| Frequency | 1 | 56 | 167 | 98 | 34 | 9 | 2 | 1 | 368 |

Table 8: Estimates of the parameters for the data in Example 1.

|  | LSPFE | MLE | MME |
|---|---|---|---|
| $\delta$ |  |  |  |
| $\beta$ |  |  |  |
| $\gamma$ |  |  |  |

Figure 11: Fitted PDFs and the histogram for the three estimation methods for the data in Example 1.

Figure 12: Fitted versus empirical CDF for the three estimation methods for the data in Example 1.
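The data of Table 7 are grouped into age classes, so any fit based on the raw likelihood needs individual values. As a rough illustration, one can expand each class to its midpoint and apply the estimators sketched earlier; this midpoint expansion is a simplification of ours (Cohen [4] worked with the grouped data).

```r
# Illustrative only: midpoint expansion of the grouped age data in Table 7.
mid  <- c(9.5, 19.5, 29.5, 39.5, 49.5, 59.5, 69.5, 79.5)
freq <- c(1, 56, 167, 98, 34, 9, 2, 1)
x <- rep(mid, freq)          # 368 pseudo-observations
rw_mle(x)                    # likewise rw_mme(x) and rw_lspfe(x)
```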
We noted earlier that if $Y$ has the Weibull distribution, then $-Y$ has the RW distribution. Next, we consider the bearings' fatigue life data initially reported by Nagatsuka et al. [14]. These data follow a three-parameter Weibull distribution and are given in Table 9. We therefore consider $-Y$, which has the three-parameter RW distribution. The MLE, LSPFE and MME of the parameters are given in Table 10. Figures 13 and 14 show the density plots (fitted PDF versus the histogram) and the distribution plots (fitted CDF versus the empirical CDF) for the three estimation methods. The figures show that the LSPF estimators provide the best fit.
Table 9: The fatigue life data for the bearings.
Table 10: Estimates of the parameters for the data in Example 2.

|  | LSPFE | MLE | MME |
|---|---|---|---|
| $\delta$ |  |  |  |
| $\beta$ |  |  |  |
| $\gamma$ | -149.02 | -152.7 | 56.23 |

Figure 13: Fitted PDFs and the histogram for the three estimation methods for the data in Example 2.
Figure 14: Fitted versus empirical CDF for the three estimation methods for the data in Example 2.
The examples show that the LSPFEs have smaller RMSE and bias than the MLE and the MME, and are therefore preferable for every sample size.

Concluding remarks
We have considered three methods for estimating the parameters of the three-parameter RW distribution. The MLE and MME methods have been studied extensively in books and articles, whereas the LSPFE has so far been studied for only a few three-parameter distributions. In this article we have shown that the LSPFEs provide the best fit.

References
[1] Billingsley P. (1995) Probability and Measure, 3rd ed. John Wiley and Sons, New York.
[2] Castillo E., Hadi A.S. (1995) A method for estimating parameters and quantiles of distributions of continuous random variables. Computational Statistics and Data Analysis; 27:125-139.
[3] Cheng R.C.H., Amin N.A.K. (1983) Estimating parameters in continuous univariate distributions with a shifted origin. Journal of the Royal Statistical Society, Series B; 45(3):394-403.
[4] Cohen A.C. (1973) The reflected Weibull distribution. Technometrics; 15(4):867-873.
[5] Elderton W.P., Johnson N.L. (1969) Systems of Frequency Curves. Cambridge University Press.
[6] Griffiths D.A. (1980) Interval estimation for the three-parameter lognormal distribution via the likelihood function. Applied Statistics; 29:58-68.
[7] Gumbel E.J. (1958) Statistics of Extremes. Columbia University Press, New York.
[8] Johnson N.L., Kotz S., Balakrishnan N. (1995) Continuous Univariate Distributions, 2nd ed. Wiley, New York.
[9] Lai C.D. (2014) Generalized Weibull Distributions. Springer. ISBN: 978-3-642-39105-7.
[10] Lawless J.F. (2003) Statistical Models and Methods for Lifetime Data, 2nd ed. John Wiley & Sons, Hoboken, New Jersey.
[11] Lehmann E.L., Casella G. (1998) Theory of Point Estimation, 2nd ed. Springer, New York.
[12] Nadarajah S., Kotz S. (2008) Strength modeling using Weibull distributions. Journal of Mechanical Science and Technology; 22:1247-1254.
[13] Nagatsuka H., Balakrishnan N. (2012) A consistent method of estimation for the parameters of the three-parameter inverse Gaussian distribution. Journal of Statistical Computation and Simulation; 83(10):1915-1931.
[14] Nagatsuka H., Balakrishnan N., Kamakura T. (2013) A consistent method of estimation for the parameters of the three-parameter Weibull distribution. Computational Statistics and Data Analysis; 58:210-226.
[15] Nagatsuka H., Balakrishnan N., Kamakura T. (2014) A consistent method of estimation for the parameters of the three-parameter gamma distribution. Communications in Statistics - Theory and Methods; 3905-3926.