Second Order Asymptotic Properties for the Tail Probability of the Number of Customers in the M/G/1 Retrial Queue
Bin Liu^a and Yiqiang Q. Zhao^b

a. School of Mathematics and Physics, Anhui Jianzhu University, Hefei 230601, P.R. China
b. School of Mathematics and Statistics, Carleton University, Ottawa, ON, Canada K1S 5B6

December 25, 2018
Abstract
When an explicit expression for a probability distribution function F(x) cannot be found, asymptotic properties of the tail probability function F̄(x) = 1 − F(x) are very valuable, since they provide approximations for system performance and approaches for computing the probability distribution. In this paper, we study tail asymptotic properties for the number of customers in the M/G/1 retrial queue.

Keywords: M/G/1 retrial queue
Mathematics Subject Classification (2000):
1 Introduction

Stationary probability distributions, such as the stationary queue length distribution, for queueing systems are among the most important types of performance measures, which often provide fundamental information about the system required in performance evaluation, system design, control and optimization. A large number of queueing systems do not possess explicit expressions for such stationary distributions. When an explicit expression is not available, asymptotic properties of the tail probabilities often lead to approximations for system performance, and also to computational approaches.

In the queueing literature, for a distribution function F(x), tail asymptotic analysis has mainly focused on the so-called first order approximation for the tail probability (or distribution) function F̄(x) = 1 − F(x). Specifically, we want to identify a function F̄_1(x), referred to as the first (dominant) term in the asymptotic expansion, such that lim_{x→∞} F̄(x)/F̄_1(x) = 1, denoted by F̄(x) ∼ F̄_1(x), which is equivalent to the first order asymptotic expansion F̄(x) = F̄_1(x) + o(F̄_1(x)) as x → ∞, where o(F̄_1(x)) is a higher order infinitesimal function compared to F̄_1(x). The second order approximation is a refinement of the first order approximation. Specifically, besides the first order term F̄_1(x), we want to further identify a second function F̄_2(x) satisfying F̄(x) − F̄_1(x) ∼ F̄_2(x), or F̄(x) = F̄_1(x) + F̄_2(x) + o(F̄_2(x)) as x → ∞.

In this paper, we are interested in the second order asymptotic expansion for the stationary queue length tail probability function for the M/G/1 retrial queue.
We consider the M/G/1 retrial queue with Poisson arrivals at rate λ. An arriving (primary) customer is served immediately if it finds the server idle; otherwise it joins the orbit of infinite waiting capacity, becoming a repeated customer. Each of the repeated customers in the orbit independently and repeatedly tries (visits the server) to receive service until it finds the server idle, at which point its service starts immediately. All inter-visit times are i.i.d. exponential r.v.'s with (retrial) rate µ. All service times (for primary and repeated customers) are assumed to be i.i.d. r.v.'s (also independent of arrivals) having the distribution F_β(t) with F_β(0) = 0 and a finite mean β. Upon completion of its service, the (primary or repeated) customer leaves the system. Let T_β be the generic service time having distribution function F_β(t), and β(s) the Laplace–Stieltjes transform (LST) of F_β(t). Let ρ = λβ be the traffic intensity. It is well known that the system is stable if and only if (iff) ρ < 1, which is assumed throughout.

The second asymptotic term itself improves the quality of approximations. Our result is a complement to the queueing literature, particularly on retrial queues. Our focus in this paper is to derive the second order approximation, or two-term asymptotic expansion, for the total number L_µ of customers in the M/G/1 retrial queue. Our analysis is based on the following assumption on the service time tail.
Assumption A. F̄_β(t) = t^{−a} L(t) and L(t) = r_0 + r_1 t^{−h} L_1(t) as t → ∞, where a > 2, h > 0, r_0 > 0, −∞ < r_1 < ∞, and L_1(t) is a slowly varying function at infinity.

Examples of probability distributions satisfying Assumption A include: (i)
Hall/Weiss class ([38], [18]):

F̄_β(t) = (1/2) t^{−v} (1 + t^{−w}) with t ≥ 1, where v > 2 and w > 0;

(ii) Burr distributions (including Lomax distributions):

F̄_β(t) = ( b / (b + t^w) )^v, where b, v, w > 0 and vw > 2,

thus F̄_β(t) = b^v t^{−vw} [1 − vb t^{−w} + O(t^{−2w})] as t → ∞.

(iii) Folded Student's t-distributions:

F̄_β(t) = (2 Γ((v+1)/2) / (√(vπ) Γ(v/2))) ∫_t^∞ (1 + x²/v)^{−(v+1)/2} dx, where v > 2,

thus F̄_β(t) = (2 Γ((v+1)/2) / (√(vπ) Γ(v/2))) v^{(v+1)/2} t^{−v} [ 1/v − (v(v+1) / (2(v+2))) t^{−2} + O(t^{−4}) ] as t → ∞.

Remark 1.1
A tail distribution F̄ is said to be second order regularly varying (see, e.g., de Haan and Stadtmüller (1996) [10], Geluk et al. (1997) [17] or Resnick (2007) [36]) with first order parameter −σ < 0 and second order parameter ϱ < 0, written as F̄ ∈ 2RV(−σ, ϱ), if there exists an ultimately positive or negative auxiliary function A(t) with lim_{t→∞} A(t) = 0, and a constant c ≠ 0, such that

lim_{t→∞} [ F̄(xt)/F̄(t) − x^{−σ} ] / A(t) = c x^{−σ} (x^{ϱ} − 1)/ϱ, for all x > 0.

One can easily check that the distribution tail F̄_β in Assumption A belongs to 2RV(−a, −h), by setting A(t) = t^{−h} L_1(t) and c = −h r_1 / r_0.

Remark 1.2
For characterizing the first order asymptotic properties, say for the queue length distribution, it is enough to make the first order asymptotic assumption on the service time distribution F_β, e.g., F̄_β(t) ∼ t^{−a} L(t), where L(t) can be any slowly varying function at ∞. However, for the second order approximation, it is necessary to further specify the second term in the asymptotic expansion of the service time distribution. Since second order regular variation is a standard assumption in the literature for second order approximations, readers might expect that the same assumption could be made in our paper. Our Assumption A is indeed an effort in this direction. To see the relationship between Assumption A and the standard second order regular variation (2RV), in the revised version of our paper we followed Hua and Joe (2011) [19] to obtain the following equivalent condition to 2RV:

F̄(t) = r_0 t^{−a} f(t) (1.1)

with r_0 > 0, lim_{t→∞} f(t) = 1, and |1 − f(t)| being a regularly varying function of index −h (h > 0). It is then not difficult to see that our Assumption A can also be written in the form of (1.1), where 1 − f(t) = −(r_1/r_0) t^{−h} L_1(t). Therefore, the difference between the standard 2RV (where |1 − f(t)| is regularly varying) and Assumption A (where 1 − f(t) is regularly varying) is considered very minor.

The assumption a > 2 implies that the second moment β_2 of the service time is finite, which can potentially be weakened (see Remark 4.1 in Section 4 for more information). The main result of this paper is given in the following theorem:
Theorem 1
For the M/G/1 retrial queue, under Assumption A, we have the following second order asymptotic expansion:

P{L_µ > j} = λ^a r_0 / ((a−1)(1−ρ)) · j^{−a+1} + Δ_L(j), (1.2)

where, as j → ∞,

Δ_L(j) =
  [ c_L + λ r_1 / (a(1−ρ)) · L_1(j) ] λ^a j^{−a} + o(j^{−a}),                for h = 1,
  λ^{a+h} r_1 / ((a+h−1)(1−ρ)) · j^{−a−h+1} L_1(j) + o(j^{−a−h+1}),          for 0 < h < 1,
  c_L λ^a j^{−a} + o(j^{−a}),                                                 for h > 1,

with

c_L = (a/2 − 1) r_0 / (1−ρ) + λ r_0 (ρ + 1/a) / (µ(1−ρ)²) + λ² β_2 r_0 / (1−ρ)².

The second order asymptotic expansion given in Theorem 1 can be viewed as a refined result of the equivalence theorem for retrial queues under the assumption of a heavy-tailed service time (e.g., [37], [40] and [33]). Recently, another refinement of the equivalence theorem was provided in [27], which gives the first order approximation to the difference P{L_µ > j} − P{L_∞ > j}. However, the second order approximation to L_µ is not a consequence of this refinement (see the discussion in Section 6 for details).

The rest of the paper is organized as follows: Section 2 provides preliminaries to facilitate our main analysis; Section 3 and Section 4 contain key results for proving the main theorem; Section 5 completes the proof of the main result (Theorem 1); and the final section, Section 6, contains concluding remarks and numerical results.

2 Preliminaries
In this section, several stochastic decompositions, as sums of independent or i.i.d. r.v.'s, will be introduced. Notation and properties for the r.v.'s involved will be discussed. These concepts will facilitate the development of our main result in later sections.

For the stable M/G/1 retrial queue, let N_orb be the number of repeated customers in the orbit, and let I_ser = 1 or 0 according to whether the server is busy or idle, respectively. Let R_µ be a r.v. taking nonnegative integer values with the probability generating function (GF) defined by Ez^{R_µ} def= E(z^{N_orb} | I_ser = 0). It follows from [13] (pp. 14–16) that P{I_ser = 0} = 1 − ρ and

Ez^{R_µ} = exp{ −(λ/µ) ∫_z^1 [1 − β(λ−λu)] / [β(λ−λu) − u] du }. (2.1)

It is well known that for the M/G/1 retrial queue, the total number L_µ of customers in the system can be written as the sum of two independent random variables (see p. 15 in [13]): the total number L_∞ of customers in the corresponding standard M/G/1 queue, and R_µ, i.e.,

L_µ d= L_∞ + R_µ, (2.2)

where the symbol d= means equality in probability distribution. This symbol will be used throughout the paper. The equality (2.2) can be verified easily because

Ez^{L_µ} = Σ_{n=0}^∞ z^n P{I_ser = 0, N_orb = n} + Σ_{n=0}^∞ z^{n+1} P{I_ser = 1, N_orb = n} = p_0(z) + z p_1(z), (2.3)

where p_i(z) def= Σ_{n=0}^∞ z^n P{I_ser = i, N_orb = n}, i = 0, 1, are explicitly expressed (e.g., pp. 9–10 in [13]). The expressions for p_i(z), together with (2.3), lead to Ez^{L_µ} = Ez^{L_∞} · Ez^{R_µ}, since

Ez^{L_µ} = (1−ρ)(1−z)β(λ−λz) / [β(λ−λz) − z] · exp{ −(λ/µ) ∫_z^1 [1 − β(λ−λu)] / [β(λ−λu) − u] du }, (2.4)

Ez^{L_∞} = (1−ρ)(1−z)β(λ−λz) / [β(λ−λz) − z], (2.5)

which results in (2.2).

The stochastic decomposition (2.2) is often used to establish the asymptotic equivalence

P{L_µ > j} ∼ P{L_∞ > j} as j → ∞ (2.6)

under the assumption of a heavy-tailed service time (e.g., [37], [40] and [33]), which reveals the first order asymptotic behaviour of P{L_µ > j}. One should notice that, in the first order approximation to P{L_µ > j}, P{R_µ > j} is dominated by P{L_∞ > j}, so a detailed (first order) asymptotic behaviour of P{R_µ > j} is not required, and the first order asymptotic behaviour of P{L_∞ > j} is sufficient. However, as will be shown in later sections, both the first order asymptotic behaviour of P{R_µ > j} and the second order asymptotic expansion of P{L_∞ > j} are required for determining the second term in the second order asymptotic expansion of P{L_µ > j}.

To this end, we first rewrite (2.1). Let

ψ = ρ / (µ(1−ρ)), (2.7)

κ(s) = (1−ρ)/β · [1 − β(s)] / [s − λ + λβ(s)], (2.8)

τ(s) = exp{ −ψ ∫_0^s κ(u) du }. (2.9)

It is easy to see, from (2.1) and (2.7)–(2.9), that

Ez^{R_µ} = exp{ −(λ/µ) ∫_0^{λ−λz} [1 − β(s)] / [s − λ + λβ(s)] ds } = τ(λ−λz). (2.10)

We now show that κ(s) and τ(s) are both LSTs of probability distributions. For the first assertion, let F^{(e)}_β(x) be the so-called equilibrium distribution of F_β(x), which is defined as F^{(e)}_β(x) = β^{−1} ∫_0^x (1 − F_β(t)) dt. The LST β^{(e)}(s) of F^{(e)}_β(x) can be written as β^{(e)}(s) = [1 − β(s)] / (βs).
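As a quick numerical sanity check of the identity β^{(e)}(s) = [1 − β(s)]/(βs) and of the geometric-series form of κ(s) in (2.11), one can take exponential service times, for which everything is available in closed form (an exponential distribution is its own equilibrium distribution). The parameter values below (λ = 1, service rate ν = 2) are illustrative choices, not taken from the paper.

```python
# Sanity check of the equilibrium-LST identity and the series form of kappa(s)
# (illustrative parameters, not from the paper).

lam, nu = 1.0, 2.0          # arrival rate lambda, service rate nu (assumed values)
beta_mean = 1.0 / nu        # beta = E[T_beta]
rho = lam * beta_mean       # traffic intensity, rho = 0.5 < 1

def beta_lst(s):            # LST of Exp(nu) service time
    return nu / (nu + s)

def beta_e_lst(s):          # equilibrium LST via (1 - beta(s)) / (beta * s)
    return (1.0 - beta_lst(s)) / (beta_mean * s)

def kappa(s):               # closed form (2.8)/(2.11)
    be = beta_e_lst(s)
    return (1.0 - rho) * be / (1.0 - rho * be)

for s in (0.1, 1.0, 5.0):
    # Exp(nu) is its own equilibrium distribution, so beta_e(s) = nu/(nu+s)
    assert abs(beta_e_lst(s) - nu / (nu + s)) < 1e-12
    # the truncated geometric series converges to kappa(s)
    series = sum((1 - rho) * rho ** (k - 1) * beta_e_lst(s) ** k for k in range(1, 200))
    assert abs(series - kappa(s)) < 1e-12
print("equilibrium-LST and kappa identities verified")
```

The series truncation at 200 terms is safe here since the ratio ρβ^{(e)}(s) < 1.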
From (2.8), we have

κ(s) = (1−ρ) β^{(e)}(s) / [1 − ρ β^{(e)}(s)] = Σ_{k=1}^∞ (1−ρ) ρ^{k−1} (β^{(e)}(s))^k. (2.11)

Remark 2.1
Define T_κ to be a geometric sum of i.i.d. r.v.'s T^{(e)}_{β,j}, j ≥ 1, each with the distribution F^{(e)}_β(x); more specifically,

T_κ d= T^{(e)}_{β,1} + T^{(e)}_{β,2} + ··· + T^{(e)}_{β,J}, (2.12)

where P(J = j) = (1−ρ)ρ^{j−1}, j ≥ 1, and J is independent of the T^{(e)}_{β,j}, j ≥ 1. Then, immediately from (2.11), κ(s) can be viewed as the LST of the distribution function F_κ(·) of the r.v. T_κ.

For the second assertion, that τ(s) is the LST of a probability distribution on [0, ∞), denoted by F_τ(x), we use mathematical induction. By Theorem 1 in Feller (1991) [14] (see p. 439), it is true as long as τ(s) is completely monotone, i.e., τ(s) possesses derivatives τ^{(n)}(s) of all orders such that (−1)^n τ^{(n)}(s) ≥ 0 for s > 0, and τ(0) = 1. First, it is clear from (2.9) that τ(0) = 1 and

τ^{(1)}(s) = −ψ · τ(s) κ(s). (2.13)

We then proceed with mathematical induction on n. Obviously, −τ^{(1)}(s) ≥ 0 for s > 0. Next, let us make the induction hypothesis that (−1)^k τ^{(k)}(s) ≥ 0 for s > 0, k = 1, 2, ..., n. Taking derivatives n times on both sides of (2.13) (by the Leibniz rule), we get

τ^{(n+1)}(s) = −ψ · Σ_{i=0}^n C(n,i) τ^{(i)}(s) κ^{(n−i)}(s). (2.14)

Therefore,

(−1)^{n+1} τ^{(n+1)}(s) = ψ · Σ_{i=0}^n C(n,i) [ (−1)^i τ^{(i)}(s) ] · [ (−1)^{n−i} κ^{(n−i)}(s) ] ≥ 0, s > 0. (2.15)

By Remark 2.1, κ(s) is the LST of the probability distribution F_κ(·); hence (−1)^k κ^{(k)}(s) ≥ 0 for s > 0, k = 1, 2, ..., which, together with the induction hypothesis, completes the proof for k = n + 1.

Remark 2.2
Let T_τ be a r.v. having the distribution F_τ(x). Since τ(s) is the LST of the probability distribution F_τ(x), the expression Ez^{R_µ} = τ(λ−λz) in (2.10) implies that R_µ can be regarded as the number of Poisson arrivals at rate λ within a random time T_τ.

In the following, we consider a stochastic decomposition for L_∞. Note that (2.5) can be rewritten as

Ez^{L_∞} = θ(λ−λz) · β(λ−λz), (2.16)

where

θ(s) = 1 − ρ + ρ κ(s). (2.17)

This implies that θ(s) can be viewed as the LST of the probability distribution function F_θ(t) of a r.v. T_θ, defined by

T_θ def= 0 with probability 1 − ρ, and T_θ def= T_κ with probability ρ. (2.18)

Remark 2.3: By (2.16), with the same argument as that in Remark 2.2, one can interpret L_∞ as the number of Poisson arrivals at rate λ within a random time T_θ + T_β, where T_θ and T_β are independent; L_µ can also be interpreted in a similar fashion. More precisely, the interpretations in Remarks 2.2 and 2.3 are restated in the following lemma.
Lemma 2.1
Let N_t be a Poisson process with rate λ, which is independent of the r.v.'s T_κ, T_β and T_τ (defined in the above discussions). Then R_µ, L_∞ and L_µ can be expressed as follows:

R_µ d= N_{T_τ}, (2.19)

L_∞ d= N_{T_θ + T_β} d= N_{T_θ} + N_{T_β}, (2.20)

L_µ d= N_{T_θ + T_β + T_τ} d= N_{T_θ} + N_{T_β} + N_{T_τ}. (2.21)

Our asymptotic analysis (in the following sections) is based on the assumption that the service time T_β has a so-called regularly varying tail. We adopt the following definitions.

Definition 2.1 (e.g., Bingham et al. (1989) [5]): A measurable function U: (0, ∞) → (0, ∞) is regularly varying at ∞ with index σ ∈ (−∞, ∞) (written U ∈ R_σ) iff lim_{t→∞} U(xt)/U(t) = x^σ for all x > 0. If σ = 0 we call U slowly varying, i.e., lim_{t→∞} U(xt)/U(t) = 1 for all x > 0.

Definition 2.2 (e.g., Foss, Korshunov and Zachary (2011) [16]): A distribution F on (0, ∞) belongs to the class of subexponential distributions (written F ∈ S) if lim_{t→∞} F̄^{*2}(t)/F̄(t) = 2, where F̄ = 1 − F and F̄^{*2} denotes the tail of the twofold convolution of F.

It is well known that for a distribution F on (0, ∞), if F̄(t) ∼ t^{−α} L(t), where α ≥ 0 and L(t) is a slowly varying function at ∞, then F ∈ S (see, e.g., Embrechts, Klüppelberg and Mikosch (1997) [12]).

For the convenience of the readers, the main notation and key quantities are summarized in the following list:

λ: Poisson arrival rate;
µ: Poisson retrial rate;
T_β and F_β(·): service time r.v. and its distribution, respectively;
β_n: nth moment of T_β;
β(s): LST of F_β(·);
ρ = λβ: traffic intensity of the M/G/1 retrial queue;
N_orb: number of repeated customers in the orbit;
I_ser: state of the server, taking values 1 or 0 according to busy or idle, respectively;
R_µ: r.v. having distribution P{R_µ = j} = P{N_orb = j | I_ser = 0};
L_µ: total number of customers in the M/G/1 retrial queue;
L_∞: total number of customers in the corresponding standard (non-retrial) M/G/1 queue;
L_µ d= L_∞ + R_µ: stochastic decomposition;
ψ: a constant given in (2.7);
F^{(e)}_β(x): equilibrium distribution of F_β(·);
β^{(e)}(s): LST of F^{(e)}_β(·);
T^{(e)}_{β,j}: i.i.d. r.v.'s having distribution F^{(e)}_β(·);
J: geometric r.v. with parameter ρ, independent of the T^{(e)}_{β,j};
T_κ d= Σ_{j=1}^J T^{(e)}_{β,j} and F_κ(·): geometric sum of the T^{(e)}_{β,j} and its distribution;
κ(s): defined in (2.8) and proved to be the LST of F_κ(·);
τ(s) and F_τ(·): τ(s) is defined in (2.9) and proved to be the LST of a probability distribution F_τ(·);
T_τ: r.v. having distribution F_τ(·);
T_θ and F_θ(·): r.v. taking values 0 or T_κ with probability 1 − ρ or ρ, respectively, and its distribution;
θ(s): LST of F_θ(·);
L(t) and L_1(t): slowly varying functions at ∞;
N_t: Poisson process with rate λ;
R_µ d= N_{T_τ}, L_∞ d= N_{T_θ} + N_{T_β} and L_µ d= N_{T_θ} + N_{T_β} + N_{T_τ}: see Lemma 2.1.

3 Asymptotic tail probability of T_τ

In this section, we provide an asymptotic property for the tail probability of the r.v. T_τ, which will be stated in Theorem 2. This is the first order asymptotic expansion of P{T_τ > t}, which holds under a weaker condition (a relaxation of Assumption A): P{T_β > t} ∼ t^{−a} L(t) as t → ∞, where a > 1.

Lemma 3.1 (pp. 580–581 in [12])
Let N be a r.v. with P{N = k} = (1−ρ)ρ^{k−1}, k = 1, 2, ..., and let {Y_k}_{k=1}^∞ be a sequence of non-negative i.i.d. r.v.'s having a common subexponential distribution F. Define S_n = Σ_{k=1}^n Y_k. Then

P{S_N > t} ∼ (1/(1−ρ)) (1 − F(t)), t → ∞. (3.1)

Suppose that P{T_β > t} ∼ t^{−a} L(t) as t → ∞, where a > 1. By Karamata's theorem (e.g., p. 28 in [5]), we have ∫_t^∞ (1 − F_β(x)) dx ∼ (a−1)^{−1} t^{−a+1} L(t), which implies 1 − F^{(e)}_β(t) ∼ ((a−1)β)^{−1} t^{−a+1} L(t), t → ∞. By Remark 2.1 and applying Lemma 3.1, we have

P{T_κ > t} ∼ c_κ · t^{−a+1} L(t), t → ∞, (3.2)

where

c_κ = 1 / ((1−ρ)(a−1)β). (3.3)

Note that T_τ has the distribution function F_τ, defined in terms of its LST τ(s) in (2.9), which is therefore determined by the distribution function F_κ of T_κ. In the following theorem, we present the asymptotic tail probability of T_τ.

Theorem 2
Suppose that P{T_β > t} ∼ t^{−a} L(t) as t → ∞, where a > 1. Then

P{T_τ > t} ∼ (1 − 1/a) c_κ ψ · t^{−a} L(t), t → ∞, (3.4)

where ψ and c_κ are expressed in (2.7) and (3.3), respectively.
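Theorem 2 itself is proved below via Tauberian arguments, since F_τ is available only through its LST; the intermediate geometric-sum asymptotic (3.1)–(3.2), however, can be sanity-checked by simulation. The following sketch uses illustrative parameters (ρ = 0.5 and Pareto summands with F̄(t) = t^{−2}, t ≥ 1), which are not taken from the paper, and only asserts a loose agreement band, since (3.1) is an asymptotic statement.

```python
# Monte Carlo sanity check of the geometric-sum tail (3.1) (illustrative only):
# with P{N = k} = (1-rho) rho^(k-1) and i.i.d. Pareto summands Y_k having
# bar F(t) = t^(-2) for t >= 1, Lemma 3.1 predicts
#   P{S_N > t} ~ (1/(1-rho)) t^(-2)  for large t.
import random

random.seed(2024)
rho, t0 = 0.5, 60.0
n_samples = 2_000_000
hits = 0
for _ in range(n_samples):
    s = (1.0 - random.random()) ** -0.5       # one Pareto draw: bar F(t) = t^(-2)
    while s <= t0 and random.random() < rho:  # geometric number of further summands
        s += (1.0 - random.random()) ** -0.5  # (safe to stop early once s > t0)
    if s > t0:
        hits += 1
mc = hits / n_samples
first_order = t0 ** -2.0 / (1.0 - rho)
ratio = mc / first_order
print(f"MC tail {mc:.3e}, first-order approx {first_order:.3e}, ratio {ratio:.3f}")
assert 0.8 < ratio < 1.5    # loose band: (3.1) is only a first order asymptotic
```

At t = 60 the second order correction of Section 4 is still visible (the ratio sits somewhat above 1), which is precisely the point of the refinement developed in this paper.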
To prove Theorem 2, let us list some notation and preliminary results which will be used. Let F(x) be any distribution on [0, ∞) with LST φ(s). We denote the nth moment of F(x) by φ_n, n ≥ 0. It is well known (for example, Proposition 8.44 in Breiman [7]) that if φ_n < ∞, then

φ(s) = Σ_{k=0}^n (φ_k / k!) (−s)^k + o(s^n), s ↓ 0. (3.5)

Next, if φ_n < ∞, we introduce the notation φ_n(s) and φ̂_n(s), defined by

φ_n(s) def= (−1)^{n+1} ( φ(s) − Σ_{k=0}^n (φ_k / k!) (−s)^k ), n ≥ 0, (3.6)

φ̂_n(s) def= φ_n(s) / s^{n+1}, n ≥ 0. (3.7)

It follows that if φ_n < ∞, then for n ≥ 1,

lim_{s↓0} φ̂_{n−1}(s) = φ_n / n!, (3.8)

s φ̂_n(s) = (1/n!) φ_n − φ̂_{n−1}(s). (3.9)

In addition, if φ_n < ∞, let us define a sequence of functions F_k recursively by F_1(t) = F(t) and

1 − F_{k+1}(t) def= ∫_t^∞ (1 − F_k(x)) dx, k = 1, 2, ..., n. (3.10)

It is not difficult to check that 1 − F_{k+1}(0) = φ_k / k! and that ∫_0^t (1 − F_k(x)) dx has the LST φ̂_{k−1}(s); namely,

φ̂_{k−1}(s) = ∫_0^∞ e^{−st} (1 − F_k(t)) dt, k = 1, 2, ..., n. (3.11)

Lemma 3.2 (pp. 333–334 in [5])
Assume that n < d < n+1, n ∈ {0, 1, 2, ...}. Then the following are equivalent:

1 − F(t) ∼ t^{−d} L(t), t → ∞, (3.12)

φ_n(s) ∼ [Γ(d−n)Γ(n+1−d)/Γ(d)] s^d L(1/s), s ↓ 0. (3.13)

In the following, we divide the proof of Theorem 2 into two parts, depending on whether a is an integer or not. First, let us rewrite (2.9) as follows:

τ(s) = 1 − ψ ∫_0^s κ(u) du + Σ_{k=2}^∞ ((−ψ)^k / k!) ( ∫_0^s κ(u) du )^k. (3.14)

3.1 A proof of Theorem 2 for non-integer a > 1

Suppose that m < a < m+1, m ∈ {1, 2, ...}. By (3.2), P{T_κ > t} ∼ c_κ · t^{−a+1} L(t). So κ_{m−1} < ∞ and κ_m = ∞. Define κ_{m−1}(s) in a manner similar to that in (3.6). By Lemma 3.2,

κ_{m−1}(s) ∼ [Γ(a−m)Γ(m+1−a)/Γ(a−1)] c_κ s^{a−1} L(1/s), s ↓ 0. (3.15)

By Karamata's theorem (p. 28 in [5]),

∫_0^s κ_{m−1}(u) du ∼ [Γ(a−m)Γ(m+1−a)/(a Γ(a−1))] c_κ s^a L(1/s), s ↓ 0. (3.16)

Next, we present a relationship between τ_m(s) and κ_{m−1}(s). By the definition of κ_{m−1}(s),

κ(s) = Σ_{k=0}^{m−1} (κ_k / k!) (−s)^k + (−1)^m κ_{m−1}(s), (3.17)

where κ_{m−1}(s) = o(s^{m−1}) and κ_{m−1}(s)/s^m → ∞ as s ↓ 0. Hence,

∫_0^s κ(u) du = −Σ_{k=1}^m (κ_{k−1} / k!) (−s)^k + (−1)^m ∫_0^s κ_{m−1}(u) du, (3.18)

where ∫_0^s κ_{m−1}(u) du = o(s^m) and ∫_0^s κ_{m−1}(u) du / s^{m+1} → ∞ as s ↓ 0. By (3.14) and (3.18), there exist constants {v_k; k = 0, 1, 2, ..., m} satisfying

τ(s) = Σ_{k=0}^m v_k (−s)^k + (−1)^{m+1} ψ ∫_0^s κ_{m−1}(u) du + O(s^{m+1}), s ↓ 0. (3.19)

Define τ_m(s) in a manner similar to that in (3.6). By (3.19),

τ_m(s) = ψ ∫_0^s κ_{m−1}(u) du + O(s^{m+1}) ∼ ψ ∫_0^s κ_{m−1}(u) du, s ↓ 0. (3.20)

By (3.16) and (3.20), using Γ(a) = (a−1)Γ(a−1),

τ_m(s) ∼ [Γ(a−m)Γ(m+1−a)/Γ(a)] · ((a−1)/a) c_κ ψ s^a L(1/s), s ↓ 0. (3.21)

Applying Lemma 3.2,

P{T_τ > t} ∼ ((a−1)/a) c_κ ψ t^{−a} L(t), t → ∞, (3.22)

which completes the proof of Theorem 2 for non-integer a > 1.

3.2 A proof of Theorem 2 for integer a > 1

Suppose that a = m ∈ {2, 3, ...}. By (3.2), P{T_κ > t} ∼ c_κ · t^{−m+1} L(t). So κ_{m−2} < ∞. Unfortunately, whether κ_{m−1} is finite or not remains uncertain, since this is determined essentially by whether ∫_x^∞ t^{−1} L(t) dt is convergent or not. For this reason we need to sharpen our tools by introducing the de Haan class Π of slowly varying functions.

Definition 3.1 (e.g., Bingham et al. (1989) [5])
A function F: (0, ∞) → (0, ∞) belongs to the de Haan class Π at ∞ if there exists a function H: (0, ∞) → (0, ∞) such that

lim_{t→∞} [F(xt) − F(t)] / H(t) = log x for all x > 0, (3.23)

where the function H is called the auxiliary function of F.

Recall the definition of F_k(t) given in (3.10). Repeatedly using Karamata's theorem (p. 27 in [5]) and the monotone density theorem (p. 39 in [5]), we know that 1 − F(t) ∼ t^{−n} L(t) is equivalent to 1 − F_n(t) ∼ t^{−1} L(t)/(n−1)!, which in turn implies that ∫_0^t (1 − F_n(x)) dx ∈ Π with an auxiliary function that can be taken as L(t)/(n−1)!. Note that ∫_0^t (1 − F_n(x)) dx has the LST φ̂_{n−1}(s) by (3.11). Applying Theorem 3.9.1 in [5] (pp. 172–173), we have the following lemma.

Lemma 3.3
Let F(x) be a probability distribution function and n ∈ {1, 2, ...}. Then the following two statements are equivalent:

(i) 1 − F(t) ∼ t^{−n} L(t), t → ∞; (3.24)

(ii) lim_{s↓0} [φ̂_{n−1}(xs) − φ̂_{n−1}(s)] / (L(1/s)/(n−1)!) = −log x, for all x > 0. (3.25)

Since κ_{m−2} < ∞, we can define κ_{m−2}(s) in a manner similar to that in (3.6), such that

κ(s) = Σ_{k=0}^{m−2} (κ_k / k!) (−s)^k + (−1)^{m−1} κ_{m−2}(s), (3.26)

where κ_{m−2}(s) = o(s^{m−2}) as s ↓ 0. Therefore,

∫_0^s κ(u) du = −Σ_{k=1}^{m−1} (κ_{k−1} / k!) (−s)^k + (−1)^{m−1} ∫_0^s κ_{m−2}(u) du, (3.27)

where ∫_0^s κ_{m−2}(u) du = o(s^{m−1}) as s ↓ 0. By (3.14) and (3.27), there exist constants {w_k; k = 0, 1, 2, ..., m} such that

τ(s) = Σ_{k=0}^m w_k (−s)^k + (−1)^m ψ ∫_0^s κ_{m−2}(u) du + o(s^m). (3.28)

Defining τ̂_{m−1}(s) in a manner similar to that in (3.7), we have

τ̂_{m−1}(s) = w_m + (ψ/s^m) ∫_0^s u^{m−1} κ̂_{m−2}(u) du + o(1), (3.29)

which immediately gives

τ̂_{m−1}(xs) = w_m + (ψ/(xs)^m) ∫_0^{xs} u^{m−1} κ̂_{m−2}(u) du + o(1) = w_m + (ψ/s^m) ∫_0^s u^{m−1} κ̂_{m−2}(xu) du + o(1). (3.30)

By (3.29) and (3.30),

τ̂_{m−1}(xs) − τ̂_{m−1}(s) = (ψ/s^m) ∫_0^s u^{m−1} ( κ̂_{m−2}(xu) − κ̂_{m−2}(u) ) du + o(1). (3.31)

Note that P{T_κ > t} ∼ c_κ · t^{−m+1} L(t) and κ_{m−2} < ∞. Define κ̂_{m−2}(s) in a manner similar to that in (3.7). By Lemma 3.3, we obtain

κ̂_{m−2}(xu) − κ̂_{m−2}(u) ∼ −(log x) c_κ L(1/u)/(m−2)!, u ↓ 0. (3.32)

By Karamata's theorem (p. 28 in [5]), we know

∫_0^s u^{m−1} ( κ̂_{m−2}(xu) − κ̂_{m−2}(u) ) du ∼ −(log x) (c_κ/m) s^m L(1/s)/(m−2)!, s ↓ 0. (3.33)

Therefore, by (3.31) and (3.33),

lim_{s↓0} [τ̂_{m−1}(xs) − τ̂_{m−1}(s)] / (L(1/s)/(m−1)!) = −((m−1)/m) c_κ ψ log x. (3.34)

By applying Lemma 3.3, we obtain from (3.34) that

P{T_τ > t} ∼ ((m−1)/m) c_κ ψ t^{−m} L(t), t → ∞, (3.35)

which completes the proof of Theorem 2 for integer a = m ∈ {2, 3, ...}.

4 Asymptotic tail probability of T_θ + T_β + T_τ

In this section, we provide the second order asymptotic result (Theorem 3) for the tail probability P{T_θ + T_β + T_τ > t}, which will be used in the next section to obtain the second order asymptotic expansion for the tail probability of L_µ. This requires further specification of the second term of the asymptotic tail probability of the service time T_β. For this reason, Assumption A (stated in the introduction) is made.

We first study the second order asymptotic behaviour of the tail probability P{T_θ > t}. Recall the definition of T_θ in (2.18), which immediately gives

P{T_θ > t} = ρ P{T_κ > t}. (4.1)

Referring to Remark 2.1, T_κ can be viewed as a geometric sum of i.i.d. r.v.'s, which suggests that we need second order asymptotic properties of such a random sum. Suppose that {p_n}_{n=0}^∞ is a discrete probability measure, and G(t) = Σ_{n=0}^∞ p_n F^{*n}(t), t ≥ 0.

Lemma 4.1 (Theorem 2 in [39])
Suppose that Σ_{n=0}^∞ p_n z^n is analytic at z = 1. Let F ∈ S be a probability distribution function with finite mean φ_1 and probability density function f(t) = dF(t)/dt. Then the following assertions are equivalent:

(i) lim_{t→∞} [F̄^{*2}(t) − 2F̄(t)] / f(t) = 2φ_1; (4.2)

(ii) lim_{t→∞} [Ḡ(t) − (Σ_{n=1}^∞ n p_n) F̄(t)] / f(t) = φ_1 Σ_{n=2}^∞ n(n−1) p_n. (4.3)

As a special case of Lemma 4.1, we set p_0 = 0 and p_n = (1−ρ)ρ^{n−1}, n ≥ 1. Then Σ_{n=1}^∞ n p_n = 1/(1−ρ) and Σ_{n=2}^∞ n(n−1) p_n = 2ρ/(1−ρ)². So (4.3) can be written as

Ḡ(t) = (1/(1−ρ)) F̄(t) + (2ρφ_1/(1−ρ)²) f(t) + o(f(t)). (4.4)

With the aid of (4.4), we are ready to present the second order asymptotic expansion of the tail probability of T_κ. Set F̄(t) = F̄^{(e)}_β(t) in Lemma 4.1, i.e., F̄(t) = (1/β) ∫_t^∞ F̄_β(x) dx and f(t) = (1/β) F̄_β(t). It follows from Assumption A that

F̄^{(e)}_β(t) = (r_0/(β(a−1))) t^{−a+1} + (r_1/(β(a+h−1))) t^{−a−h+1} L_1(t)(1 + o(1)), (4.5)

which implies that F̄^{(e)}_β ∈ 2RV(−a+1, −h). By Proposition 3.8 in [31], we know that (4.2) holds for F̄(t) = F̄^{(e)}_β(t).

Recall Remark 2.1: T_κ can be viewed as a geometric sum of i.i.d. random variables with distribution F^{(e)}_β(x). Note that φ_1 = (1/β) ∫_0^∞ t F̄_β(t) dt = β_2/(2β), where β_2 = ∫_0^∞ t² dF_β(t) = 2 ∫_0^∞ t F̄_β(t) dt < ∞. It follows from (4.4) that

P{T_κ > t} = (1/((1−ρ)β)) ∫_t^∞ F̄_β(x) dx + (ρβ_2/((1−ρ)²β²)) F̄_β(t) + o(F̄_β(t)) as t → ∞. (4.6)

Remark 4.1: In Assumption A, we have assumed a > 2 to ensure that the service time T_β has a finite second moment β_2, which allows us to use the results in [39]. Here, we would like to mention that for the situation where β_2 = ∞, the second order asymptotic properties of a random sum with RV summands have been studied partially in Leng and Hu (2014) [26], and their result is presented for certain cases with different combinations of the parameter values a and h. Applying the result in [26], one may proceed with a discussion of those cases with 1 < a ≤ 2 in a way similar to what follows, except that more complicated expressions would be involved in the derivations.

Combining (4.1) and (4.6) with Assumption A, we obtain the second order asymptotic expansion of P{T_θ > t} as follows:

P{T_θ > t} = λ r_0 / ((a−1)(1−ρ)) · t^{−a+1} + Δ_{T_θ}(t), as t → ∞, (4.7)

where

Δ_{T_θ}(t) =
  [ λ² β_2 r_0 / (1−ρ)² + λ r_1 / (a(1−ρ)) · L_1(t) ] t^{−a} + o(t^{−a} L_1(t)),  for h = 1,
  λ r_1 / ((a+h−1)(1−ρ)) · t^{−a−h+1} L_1(t) + o(t^{−a−h+1} L_1(t)),              for 0 < h < 1,
  λ² β_2 r_0 / (1−ρ)² · t^{−a} + o(t^{−a}),                                        for h > 1. (4.8)

The following two lemmas will be used to complete the two-term asymptotic expansion of P{T_θ + T_β + T_τ > t} as t → ∞.

Lemma 4.2 (p. 48 in [16])
Let F, G_1 and G_2 be distribution functions. Suppose that F ∈ S. If Ḡ_i(t)/F̄(t) → c_i as t → ∞ for some c_i ≥ 0, i = 1, 2, then the tail of the convolution satisfies (G_1 * G_2)‾(t)/F̄(t) → c_1 + c_2 as t → ∞, where G_1 * G_2 stands for the convolution of G_1 and G_2.

Lemma 4.3 (Lemma 6.2 in [27]): Let X_1 and X_2 be independent r.v.'s with distribution functions F_1 and F_2, respectively. Assume that F̄_1(t) ∼ c_1 t^{−d+1} L(t) and F̄_2(t) ∼ c_2 t^{−d} L(t), where c_1 > 0, c_2 > 0, d > 1 and L(t) is a slowly varying function at infinity. Then

P{X_1 + X_2 > t} = F̄_1(t) + (d−1) µ_{F_2} · t^{−1} F̄_1(t) + F̄_2(t) + o(F̄_2(t)) as t → ∞, (4.9)

where µ_{F_2} < ∞ is the mean value of X_2.

It follows from (3.4) that P{T_τ > t} = (1 − 1/a) c_κ ψ r_0 t^{−a} + o(t^{−a}) as t → ∞, where ψ and c_κ are given in (2.7) and (3.3), respectively. By Lemma 4.2 and Assumption A, we have

P{T_β + T_τ > t} = ( r_0 + λ r_0 / (aµ(1−ρ)²) ) t^{−a} + o(t^{−a}). (4.10)

In terms of Lemma 4.3, (4.7), (4.8) and (4.10), together with

E(T_β + T_τ) = β + ψ = β + ρ/(µ(1−ρ)),

we have the following result.

Theorem 3
The second order asymptotic expansion for the tail probability P{T_θ + T_β + T_τ > t} is given by:

P{T_θ + T_β + T_τ > t} = λ r_0 / ((a−1)(1−ρ)) · t^{−a+1} + Δ_T(t), as t → ∞, (4.11)

where

Δ_T(t) =
  [ c_T + λ r_1 / (a(1−ρ)) · L_1(t) ] t^{−a} + o(t^{−a}),             for h = 1,
  λ r_1 / ((a+h−1)(1−ρ)) · t^{−a−h+1} L_1(t) + o(t^{−a−h+1}),         for 0 < h < 1,
  c_T t^{−a} + o(t^{−a}),                                              for h > 1, (4.12)

c_T = r_0 / (1−ρ) + λ r_0 (ρ + 1/a) / (µ(1−ρ)²) + λ² β_2 r_0 / (1−ρ)². (4.13)

5 Asymptotic tail probability of L_µ

In this section, we will complete the proof of our main result (Theorem 1), which is a refinement of the asymptotic equivalence (2.6), by introducing a second term in the asymptotic approximation to P{L_µ > j}. Towards this end, we now provide the following results, which will be used in the proof.

Lemma 5.1
Let d > 0 be a constant. Then, as x → ∞,

Γ(x−d)/Γ(x) = x^{−d} + (d(d+1)/2) x^{−d−1} + O(x^{−d−2}). (5.1)

Proof.
The second order asymptotic expansion of the gamma function is given by

Γ(x) = √(2π) x^{x−1/2} e^{−x} [ 1 + 1/(12x) + O(x^{−2}) ] as x → ∞. (5.2)

Taking the logarithm of both sides, we get

ln Γ(x) = (x − 1/2) ln x − x + ln √(2π) + 1/(12x) + O(x^{−2}). (5.3)

Noting that

(x − d − 1/2) ln(1 − d/x) = (x − d − 1/2) [ −d/x − d²/(2x²) + O(x^{−3}) ] = −d + d(d+1)/(2x) + O(x^{−2}),

we can write

ln Γ(x−d) = (x − d − 1/2) ln x + d(d+1)/(2x) − x + ln √(2π) + 1/(12x) + O(x^{−2}). (5.4)

By (5.3) and (5.4), we have

ln Γ(x−d) − ln Γ(x) = −d ln x + d(d+1)/(2x) + O(x^{−2}).

Therefore,

Γ(x−d)/Γ(x) = x^{−d} exp{ d(d+1)/(2x) + O(x^{−2}) } = x^{−d} [ 1 + d(d+1)/(2x) + O(x^{−2}) ]. (5.5)
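The expansion in Lemma 5.1, together with the shifted ratio Γ(j+2−d)/Γ(j+2) = j^{−d} + (1/2)d(d−3) j^{−d−1} + O(j^{−d−2}) used in the proof of Corollary 5.1 below, can be checked numerically with math.lgamma. This is a verification sketch with arbitrary test values, not part of the proof.

```python
# Numerical check of Lemma 5.1 and of the shifted Gamma ratio used in
# Corollary 5.1 (a verification sketch; test values are arbitrary).
import math

def gamma_ratio(x, d):
    """Exact Gamma(x - d) / Gamma(x), via log-gamma for numerical stability."""
    return math.exp(math.lgamma(x - d) - math.lgamma(x))

d = 1.7
# Lemma 5.1: Gamma(x-d)/Gamma(x) = x^-d (1 + d(d+1)/(2x)) + O(x^(-d-2))
for x in (50.0, 200.0):
    approx = x ** -d * (1.0 + d * (d + 1.0) / (2.0 * x))
    rel_err = abs(gamma_ratio(x, d) / approx - 1.0)
    assert rel_err < 10.0 / x ** 2   # remaining error is O(x^-2)

# Shifted form with x = j + 2:
# Gamma(j+2-d)/Gamma(j+2) = j^-d + (1/2) d (d-3) j^(-d-1) + O(j^(-d-2))
j = 500.0
first = j ** -d
second = j ** -d + 0.5 * d * (d - 3.0) * j ** (-d - 1.0)
err_first = abs(gamma_ratio(j + 2.0, d) - first)
err_second = abs(gamma_ratio(j + 2.0, d) - second)
assert err_second < err_first                 # the j^(-d-1) term genuinely helps
assert err_second < 10.0 * j ** (-d - 2.0)    # and the residual is O(j^(-d-2))
print("Lemma 5.1 and the shifted ratio verified numerically")
```

Note how the coefficient changes from d(d+1)/2 to d(d−3)/2 when x is replaced by j + 2, since (j+2)^{−d} = j^{−d} − 2d j^{−d−1} + O(j^{−d−2}).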
Corollary 5.1: Suppose that λ > 0 and d > 0. Then, as j → ∞,

∫_0^∞ λ e^{−λt} ((λt)^{j+1}/(j+1)!) · t^{−d} dt = λ^d j^{−d} + (1/2) d(d−3) λ^d j^{−d−1} + O(j^{−d−2}). (5.6)

Proof. Note that

∫_0^∞ λ e^{−λt} ((λt)^{j+1}/(j+1)!) · t^{−d} dt = λ^d · Γ(j+2−d)/Γ(j+2). (5.7)

By Lemma 5.1,

Γ(j+2−d)/Γ(j+2) = (j+2)^{−d} + (1/2) d(d+1) (j+2)^{−d−1} + O(j^{−d−2}) = j^{−d} + (1/2) d(d−3) j^{−d−1} + O(j^{−d−2}),

which is substituted into (5.7) to complete the proof.

Suppose that N_t is a Poisson process with rate λ > 0, and T > 0 is a r.v. independent of N_t with tail F̄(t) = P{T > t}. Then,

P{N_T > j} = ∫_0^∞ P{N_t ≥ j+1} dF(t) = ∫_0^∞ λ e^{−λt} ((λt)^{j+1}/(j+1)!) F̄(t) dt. (5.8)

Lemma 5.2 (Proposition 3.1 in [4], or Theorem 3.1 in [15]): If F̄(t) = P{T > t} is heavier than e^{−√t} as t → ∞, then P{N_T > j} ∼ P{T > j/λ} as j → ∞.

Since t^{−d} L(t) is heavier than e^{−√t} as t → ∞, we immediately have the following corollary.

Corollary 5.2: Suppose that λ > 0 and d > 0. Let L(t) be a slowly varying function at ∞. Then,

∫_0^∞ λ e^{−λt} ((λt)^{j+1}/(j+1)!) · t^{−d} L(t) dt ∼ λ^d j^{−d} L(j) as j → ∞. (5.9)

By Lemma 2.1, L_µ d= N_{T_θ + T_β + T_τ}. Setting F̄(t) = P{T_θ + T_β + T_τ > t} in (5.8), we know

P{L_µ > j} = P{N_{T_θ + T_β + T_τ} > j} = ∫_0^∞ λ e^{−λt} ((λt)^{j+1}/(j+1)!) · P{T_θ + T_β + T_τ > t} dt. (5.10)

Recall the second order asymptotic expansion of P{T_θ + T_β + T_τ > t} as t → ∞. Substituting (4.11) into (5.10),

P{L_µ > j} = λ r_0 / ((a−1)(1−ρ)) ∫_0^∞ λ e^{−λt} ((λt)^{j+1}/(j+1)!) · t^{−a+1} dt + ∫_0^∞ λ e^{−λt} ((λt)^{j+1}/(j+1)!) · Δ_T(t) dt. (5.11)

Substituting Δ_T(t), given in (4.12) and (4.13), into (5.11) and applying Corollary 5.1 and Corollary 5.2, we complete the proof of Theorem 1.

6 Concluding remarks and numerical results
In this paper, we considered the M/G/1 retrial queue and obtained the second order asymptotic expansion for the tail probability of the number $L_\mu$ of customers in the system. This result is a refinement of the first order approximation (the equivalence theorem (2.6) for retrial queue systems), and is also a refinement of a recent asymptotic result in [27], which is the first order approximation for the difference between the tail probabilities of $L_\mu$ and $L_\infty$ (the total number of customers in the corresponding standard M/G/1 queue).

We emphasize that the first order approximation of the difference $P\{L_\mu>j\}-P\{L_\infty>j\}$ cannot directly act as the second term in the second order asymptotic expansion for $P\{L_\mu>j\}$. This can be clarified as follows. It was proved in [27] (Theorem 6.1) that, under the assumption $P\{T_\beta>t\}\sim t^{-a}L(t)$ as $t\to\infty$, where $a>1$:

(i) $P\{L_\mu>j\} \sim P\{L_\infty>j\} \sim c_1 j^{-a+1}L(j)$;
(ii) $P\{L_\mu>j\} - P\{L_\infty>j\} \sim c_2 j^{-a}L(j)$,

where $c_1$ and $c_2$ are constants. Statements (i) and (ii) imply that
\[
P\{L_\infty>j\} = c_1 j^{-a+1}L(j) + o\big(j^{-a+1}L(j)\big) \tag{6.1}
\]
and
\[
P\{L_\mu>j\} = P\{L_\infty>j\} + c_2 j^{-a}L(j) + o\big(j^{-a}L(j)\big). \tag{6.2}
\]
Substituting (6.1) into (6.2), we have
\[
P\{L_\mu>j\} = c_1 j^{-a+1}L(j) + o\big(j^{-a+1}L(j)\big) + c_2 j^{-a}L(j) + o\big(j^{-a}L(j)\big). \tag{6.3}
\]
It is clear that, compared to all other terms in (6.3), the last term $o(j^{-a}L(j))$ is a higher order infinitesimal as $j\to\infty$. However, to specify the second term in the second order asymptotic expansion, a comparison of the orders of the second and third terms in (6.3) is needed, and this comparison has not been addressed in Theorem 6.1 of [27]; it requires further specification of the second term in the asymptotic tail probability of the service time. Such a comparison has been made in this paper under Assumption A, in which the slowly varying function $L(\cdot)$ is further specified, allowing us to identify the $o(j^{-a+1}L(j))$ term in (6.1) and thereby the second term in the second order asymptotic expansion of $P\{L_\mu>j\}$.

Finally, it is interesting to see the improvement in the approximation when the second term in the asymptotic expansion is added to the first order approximation.
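The figures reported in Table 1 below can be cross-checked with a short script. This is a sketch, not the authors' code: the closed form $\text{Apprx1} = \lambda^{a} r\, j^{-a+1}/((a-1)(1-\rho))$, the choice $r=1/2$, and the mean service time $\beta_1$ are reconstructions inferred from the reported values, and the table entries are copied verbatim from Table 1.

```python
# Cross-check of Table 1 (a sketch; the closed form of Apprx1, r = 1/2, and
# beta_1 below are reconstructed assumptions, not taken verbatim from the paper).
a, h, r = 2.5, 2.0, 0.5
beta1 = 1 + 1 / (2 * (a - 1)) + 1 / (2 * (a + h - 1))  # mean service time beta_1

def apprx1(rho: float, j: int) -> float:
    """First order asymptotic approximation to P{L_mu > j}."""
    lam = rho / beta1  # arrival rate chosen so that the traffic load equals rho
    return lam**a * r / ((a - 1) * (1 - rho)) * j ** (-a + 1)

# (rho, j, Apprx1, Apprx2) rows copied from Table 1.
rows = [
    (0.4, 10, 0.00067146, 0.00092049),
    (0.6, 20, 0.00098129, 0.00147690),
    (0.8, 20, 0.00402880, 0.01052500),
]
for rho, j, a1, a2 in rows:
    imprv = (a2 - a1) / a1 * 100  # relative improvement, in percent
    print(f"rho={rho}, j={j}: Apprx1={apprx1(rho, j):.8f}, Imprv={imprv:.2f}%")
```

Under these assumptions the recomputed Apprx1 values agree with the tabulated ones to the displayed precision, and the recomputed Imprv values agree with the printed ones up to rounding of the displayed approximations.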
Let us set $r=1/2$, $L(\cdot)\equiv 1$, and $\bar F_\beta(t) = \frac{1}{2}t^{-a}\big(1+t^{-h}\big)$ for $t\ge 1$ (with $\bar F_\beta(t)=1$ for $0\le t<1$), where $a>2$ and $h>0$. Then $\beta_1 = \int_0^\infty \bar F_\beta(t)\,dt = 1 + \frac{1}{2(a-1)} + \frac{1}{2(a+h-1)}$ and $\beta_2 = 2\int_0^\infty t\,\bar F_\beta(t)\,dt = 1 + \frac{1}{a-2} + \frac{1}{a+h-2}$. We use the notations Apprx1 and Apprx2 to represent the first and second order asymptotic approximations to $P\{L_\mu>j\}$, respectively. By Theorem 1,
\[
\text{Apprx1} = \frac{\lambda^{a} r}{(a-1)(1-\rho)}\, j^{-a+1}
\qquad\text{and}\qquad
\text{Apprx2} = \text{Apprx1} + \Delta^{\mathrm{app}}_{L}(j),
\]
where $\Delta^{\mathrm{app}}_{L}(j)$ is the approximate value of $\Delta_{L}(j)$ obtained by neglecting the higher order infinitesimal in (1.2). We use the notation Imprv to represent the relative improvement made by adopting the second order asymptotic approximation, that is,
\[
\text{Imprv} = \frac{\text{Apprx2}-\text{Apprx1}}{\text{Apprx1}} \times 100\%.
\]
We take $\mu=0.5$, $a=2.5$, $h=2$, and $\lambda=\rho/\beta_1$ with $\rho\in\{0.4,\,0.6,\,0.8\}$. In the following table, we report the first and second order asymptotic approximations to $P\{L_\mu>j\}$, from which we can see a significant improvement. Similar improvements can be seen for other parameter values.

ρ     j     Apprx1       Apprx2       Imprv(%)
0.4   10    0.00067146   0.00092049   37.088
0.4   20    0.00023740   0.00028142   18.544
0.4   30    0.00012922   0.00014520   12.363
0.4   50    6.0057e-05   6.4512e-05   7.4176
0.4   90    2.4869e-05   2.5894e-05   4.1209
0.6   20    0.00098129   0.00147690   50.512
0.6   40    0.00034694   0.00043456   25.256
0.6   60    0.00018885   0.00022065   16.837
0.6   80    0.00012266   0.00013815   12.628
0.6   100   8.7769e-05   9.6636e-05   10.102
0.8   20    0.00402880   0.01052500   161.26
0.8   60    0.00077534   0.00119210   53.752
0.8   100   0.00036034   0.00047656   32.251
0.8   160   0.00017805   0.00021394   20.157
0.8   200   0.00012740   0.00014795   16.126

Table 1: The first and second order asymptotic approximations to $P\{L_\mu>j\}$ for $a=2.5$ and $h=2$.

Acknowledgments
The authors would like to thank three anonymous reviewers and the handling editor for theirvaluable comments and suggestions, which improved the quality of this paper significantly. Thiswork was supported in part by the National Natural Science Foundation of China (Grant No.71571002), the Natural Science Foundation of the Anhui Higher Education Institutions of China(No. KJ2017A340), the Research Project of Anhui Jianzhu University, and a Discovery Grant fromthe Natural Sciences and Engineering Research Council of Canada (NSERC).
References

[1] Abate, J., Choudhury, G.L., & Whitt, W. (1994). Waiting-time tail probabilities in queues with long-tail service-time distributions. Queueing Systems, 16(3-4), 311–338.
[2] Abate, J., & Whitt, W. (1999). Explicit M/G/1 waiting-time distributions for a class of long-tail service-time distributions. Operations Research Letters, 25(1), 25–31.
[3] Artalejo, J.R., & Gómez-Corral, A. (2008). Retrial Queueing Systems. Springer, Berlin.
[4] Asmussen, S., Klüppelberg, C., & Sigman, K. (1999). Sampling at subexponential times, with queueing applications. Stochastic Processes and their Applications, 79(2), 265–286.
[5] Bingham, N.H., Goldie, C.M., & Teugels, J.L. (1989). Regular Variation. Cambridge University Press.
[6] Boxma, O.J., & Cohen, J.W. (1998). The M/G/1 queue with heavy-tailed service time distribution. IEEE Journal on Selected Areas in Communications, 16(5), 749–763.
[7] Breiman, L. (1992). Probability. Society for Industrial and Applied Mathematics, Philadelphia.
[8] Choi, B.D., & Chang, Y. (1999). Single server retrial queues with priority calls. Mathematical and Computer Modelling, 30, 7–32.
[9] de Haan, L., & Resnick, S. (1996). Second-order regular variation and rates of convergence in extreme-value theory. The Annals of Probability, 24, 97–124.
[10] de Haan, L., & Stadtmüller, U. (1996). Generalized regular variation of second order. Journal of the Australian Mathematical Society, 61(3), 381–395.
[11] Degen, M., Lambrigger, D.D., & Segers, J. (2010). Risk concentration and diversification: Second-order properties. Insurance: Mathematics and Economics, 46, 541–546.
[12] Embrechts, P., Klüppelberg, C., & Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance. Springer, Heidelberg.
[13] Falin, G.I., & Templeton, J.G.C. (1997). Retrial Queues. Chapman & Hall, London.
[14] Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Vol. II. John Wiley & Sons, London.
[15] Foss, S., & Korshunov, D. (2000). Sampling at a random time with a heavy-tailed distribution. Markov Processes and Related Fields, 6(4), 543–568.
[16] Foss, S., Korshunov, D., & Zachary, S. (2011). An Introduction to Heavy-Tailed and Subexponential Distributions. Springer, New York.
[17] Geluk, J., de Haan, L., Resnick, S., & Stărică, C. (1997). Second-order regular variation, convolution and the central limit theorem. Stochastic Processes and their Applications, 69(2), 139–159.
[18] Hall, P. (1982). On some simple estimates of an exponent of regular variation. Journal of the Royal Statistical Society, Series B, 44(1), 37–42.
[19] Hua, L., & Joe, H. (2011). Second order regular variation and conditional tail expectation of multiple risks. Insurance: Mathematics and Economics, 49(3), 537–546.
[20] Kim, B., & Kim, J. (2012). Exact tail asymptotics for the M/M/m retrial queue with nonpersistent customers. Operations Research Letters, 40(6), 537–540.
[21] Kim, B., Kim, J., & Kim, J. (2010a). Tail asymptotics for the queue size distribution in the MAP/G/1 retrial queue. Queueing Systems, 66(1), 79–94.
[22] Kim, J., & Kim, B. (2016). A survey of retrial queueing systems. Annals of Operations Research, 247(1), 3–36.
[23] Kim, J., Kim, B., & Ko, S.-S. (2007). Tail asymptotics for the queue size distribution in an M/G/1 retrial queue. Journal of Applied Probability, 44(4), 1111–1118.
[24] Kim, J., Kim, J., & Kim, B. (2010c). Regularly varying tail of the waiting time distribution in M/G/1 retrial queue. Queueing Systems, 65(4), 365–383.
[25] Kim, J., Kim, J., & Kim, B. (2012). Tail asymptotics of the queue size distribution in the M/M/m retrial queue. Journal of Computational and Applied Mathematics, 236(14), 3445–3460.
[26] Leng, X., & Hu, T. (2014). The closure property of 2RV under random sum. Statistics & Probability Letters, 92(5), 158–167.
[27] Liu, B., Min, J., & Zhao, Y.Q. (2017). Refined tail asymptotic properties for the M^X/G/1 retrial queue.
[28] … M/G/1/K queues with subexponential service times. Queueing Systems, 76(1), 1–19.
[29] Liu, B., Wang, X., & Zhao, Y.Q. (2012). Tail asymptotics for M/M/c retrial queues with nonpersistent customers. Operational Research: An International Journal, 12(2), 173–188.
[30] Liu, B., & Zhao, Y.Q. (2010). Analyzing retrial queues by censoring. Queueing Systems, 64(3), 203–225.
[31] Liu, Q., Mao, T., & Hu, T. (2017). Closure properties of the second-order regular variation under convolutions. Communications in Statistics – Theory and Methods, 46(1), 104–119.
[32] Mao, T., & Ng, K.W. (2015). Second-order properties of tail probabilities of sums and randomly weighted sums. Extremes, 18(3), 403–435.
[33] Masuyama, H. (2014). Subexponential tail equivalence of the stationary queue length distributions of BMAP/GI/1 queues with and without retrials.
[34] Stochastic Operations Research in Business and Industry (eds. by Tadashi Dohi, Katsunori Ano and Shoji Kasahara), World Scientific Publisher. http://infoshako.sk.tsukuba.ac.jp/~tuan/papers/Tuan_chapter_ver3.pdf
[35] Ramsay, C.M. (2007). Exact waiting time and queue size distributions for equilibrium M/G/1 queues with Pareto service. Queueing Systems, 57(4), 147–155.
[36] Resnick, S.I. (2007). Heavy-Tail Phenomena: Probabilistic and Statistical Modeling. Springer Series in Operations Research and Financial Engineering, Springer, New York.
[37] Shang, W., Liu, L., & Li, Q.-L. (2006). Tail asymptotics for the queue length in an M/G/1 retrial queue. Queueing Systems, 52(3), 193–198.
[38] Weiss, L. (1971). Asymptotic inference about a density function at an end of its range. Naval Research Logistics Quarterly, 18, 111–114.
[39] Willekens, E., & Teugels, J.L. (1992). Asymptotic expansions for waiting time probabilities in an M/G/1 queue with long-tailed service time. Queueing Systems, 10(4), 295–312.
[40] Yamamuro, K. (2012). The queue length in an
M/G/