Stationary coupling method for renewal process in continuous time (application to strong bounds for the convergence rate of the distribution of the regenerative process)
G. Zverkina ∗ June 21, 2018
Abstract
We propose a new modification of the coupling method for renewal processes in continuous time. We call this modification "the stationary coupling method", and construct it primarily to obtain bounds for the convergence rate of the distribution of regenerative processes in the total variation metric. At the same time, this modification of the coupling method improves the classical result on the polynomial convergence rate of the distribution of a regenerative process in the case of a heavy tail.

Keywords: Renewal process, Regenerative process, Rate of convergence, Coupling method

MSC: 60B10, 60J25, 60K15
1 Introduction

This paper proposes a new modification of the coupling method, which we call the stationary coupling method.

∗ The author is supported by the RFBR, project No 17-01-00633 A.
1.1 The coupling method

The coupling method, invented by W. Doeblin in [3], is ordinarily used to obtain bounds on the convergence rate of a Markov process to the stationary regime. Below we give a detailed description of this method.

Suppose that $(X'_t,\,t\ge 0)$ and $(X''_t,\,t\ge 0)$ are two versions of a Markov process $(X_t,\,t\ge 0)$ with different initial states $X'_0=x'$ and $X''_0=x''$, and with the same transition function; the state space of the process $(X_t,\,t\ge 0)$ is $\mathcal X$ with the $\sigma$-algebra $\sigma(\mathcal X)$.

In what follows we introduce $\mathcal P^{x'}_t(A)\stackrel{\mathrm{def}}{=}\mathbf P\{X'_t\in A\}$ and $\mathcal P^{x''}_t(A)\stackrel{\mathrm{def}}{=}\mathbf P\{X''_t\in A\}$ for $A\in\sigma(\mathcal X)$, and let $\tau(x',x'')\stackrel{\mathrm{def}}{=}\inf\{t>0:\ X'_t=X''_t\}$. Then
$$\big|\mathcal P^{x'}_t(A)-\mathcal P^{x''}_t(A)\big|\le\mathbf P\{\tau(x',x'')>t\}$$
by the coupling inequality. The random variable $\tau(x',x'')$ is called the coupling epoch.

Now suppose that for some positive increasing unbounded function $\varphi(t)$ we have $\mathbf E\,\varphi(\tau(x',x''))=C(x',x'')<\infty$. Then from the Markov inequality we deduce:
$$\big|\mathcal P^{x'}_t(A)-\mathcal P^{x''}_t(A)\big|\le\mathbf P\{\tau(x',x'')>t\}=\mathbf P\{\varphi(\tau(x',x''))>\varphi(t)\}\le\frac{\mathbf E\,\varphi(\tau(x',x''))}{\varphi(t)}. \eqno(1)$$

Suppose that the process $(X_t,\,t\ge 0)$ is ergodic, that is, for all initial states $x\in\mathcal X$ the distribution $\mathcal P^x_t$ converges weakly to the invariant probability measure $\mathcal P$ as $t\to\infty$, i.e. $\mathcal P^x_t\Longrightarrow\mathcal P$. Integrating inequality (1) with respect to the stationary measure $\mathcal P$, we obtain
$$\big|\mathcal P^{x'}_t(A)-\mathcal P(A)\big|\le\frac{\int_{\mathcal X}\mathbf E\,\varphi(\tau(x',x''))\,\mathrm d\mathcal P(x'')}{\varphi(t)}=\frac{C(x')}{\varphi(t)}, \eqno(2)$$
and
$$\big\|\mathcal P^{x'}_t-\mathcal P\big\|_{TV}\le\frac{C(x')}{\varphi(t)}.$$

The original coupling method was most commonly used for Markov chains, i.e. for random processes in discrete time. Its application has to be modified for random processes in continuous time, since in this case it is possible that $\mathbf P\{\tau(x',x'')<+\infty\}<1$.

To resolve this problem, it was proposed to construct (on a special probability space) a paired stochastic process $(\mathcal Z_t,\,t\ge 0)=\big((Z'_t,Z''_t),\,t\ge 0\big)$ such that:

1. $X'_t\stackrel{\mathcal D}{=}Z'_t$ and $X''_t\stackrel{\mathcal D}{=}Z''_t$ for all $t\ge 0$;

2. $\mathbf P\{\tau(Z',Z'')<\infty\}=1$, where $\tau(Z',Z'')=\tau(\mathcal Z)\stackrel{\mathrm{def}}{=}\inf\{t\ge 0:\ Z'_t=Z''_t\}$;

3. $Z'_t=Z''_t$ for all $t\ge\tau(Z',Z'')$.

A paired stochastic process $(\mathcal Z_t,\,t\ge 0)=\big((Z'_t,Z''_t),\,t\ge 0\big)$ which satisfies conditions 1–3 is called a successful coupling — see [4].

Let us replace condition 2 by the condition

2′. $\mathbf E\,\tau(Z',Z'')<\infty$, where $\tau(Z',Z'')=\tau(\mathcal Z)\stackrel{\mathrm{def}}{=}\inf\{t\ge 0:\ Z'_t=Z''_t\}$.

We call a paired stochastic process $\mathcal Z_t=\big((Z'_t,Z''_t),\,t\ge 0\big)$ which satisfies conditions 1, 2′ and 3 a strong successful coupling.

Note that the processes $(Z'_t,\,t\ge 0)$ and $(Z''_t,\,t\ge 0)$ can be non-Markov, and their finite-dimensional distributions may differ from the finite-dimensional distributions of $(X'_t,\,t\ge 0)$ and $(X''_t,\,t\ge 0)$ respectively; furthermore, generally speaking, the processes $(Z'_t,\,t\ge 0)$ and $(Z''_t,\,t\ge 0)$ turn out to be dependent.

Then for all $A\in\sigma(\mathcal X)$ we use the coupling inequality in the following form:
$$\big|\mathcal P^{x'}_t(A)-\mathcal P^{x''}_t(A)\big|=\big|\mathbf P\{X'_t\in A\}-\mathbf P\{X''_t\in A\}\big|=\big|\mathbf P\{Z'_t\in A\}-\mathbf P\{Z''_t\in A\}\big|\le\mathbf P\{\tau(Z',Z'')\ge t\}\le\frac{\mathbf E\,\varphi(\tau(Z',Z''))}{\varphi(t)}\le\frac{C(Z',Z'')}{\varphi(t)}, \eqno(3)$$
where $C(Z',Z'')\ge\mathbf E\,\varphi(\tau(Z',Z''))$.

As $Z'_0=X'_0=x'$ and $Z''_0=X''_0=x''$, the right-hand side of this inequality depends only on $x'$ and $x''$: $C(Z',Z'')=C(x',x'')$. Hence we can integrate inequality (3) with respect to the stationary measure $\mathcal P$ as in (2):
$$\big|\mathcal P^{x'}_t(A)-\mathcal P(A)\big|\le\frac{\int_{\mathcal X}C(x',x'')\,\mathcal P(\mathrm d x'')}{\varphi(t)}=\frac{C(x')}{\varphi(t)},$$
and therefore $\big\|\mathcal P^{x'}_t-\mathcal P\big\|_{TV}\le\dfrac{C(x')}{\varphi(t)}$. However, this integration leads to certain difficulties — see, e.g., [8, 9, 10, 11].
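As a toy numerical illustration of the coupling inequality, the following sketch (a hypothetical two-state chain in discrete time, not taken from the paper) runs two copies from different initial states, independently until they first meet and together afterwards; pathwise, the copies can differ only on the event $\{\tau>t\}$, so the empirical total-variation gap is dominated by the empirical tail of the coupling epoch.

```python
import random

# Hypothetical two-state discrete-time chain (an illustration, not from the paper).
# Two copies start in different states and run independently until they first
# meet (the coupling epoch tau); after that they move together.  Pathwise the
# copies can differ only on {tau > t}, so the empirical total-variation gap
# is dominated by the empirical tail of tau -- the coupling inequality.
P = {0: 0.7, 1: 0.4}  # P[s] = probability of moving to state 0 from state s

def step(state, rng):
    return 0 if rng.random() < P[state] else 1

def couple(t_max, trials, seed=0):
    rng = random.Random(seed)
    gaps = [0] * (t_max + 1)   # sum over paths of 1{X'_t = 0} - 1{X''_t = 0}
    tails = [0] * (t_max + 1)  # counts of the event {tau > t}
    for _ in range(trials):
        x1, x2, met = 0, 1, False
        for t in range(1, t_max + 1):
            x1 = step(x1, rng)
            x2 = x1 if met else step(x2, rng)
            met = met or (x1 == x2)
            gaps[t] += (x1 == 0) - (x2 == 0)
            tails[t] += (not met)
    return gaps, tails

gaps, tails = couple(t_max=20, trials=5000)
# coupling inequality, here even pathwise: |P'_t(A) - P''_t(A)| <= P{tau > t}
assert all(abs(g) <= s for g, s in zip(gaps[1:], tails[1:]))
```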
1.2 Stationary coupling method

In what follows we construct a strong successful coupling $(\mathcal Z_t,\,t\ge 0)=\big((Z_t,\tilde Z_t),\,t\ge 0\big)$ for the process $(X_t,\,t\ge 0)$ with an initial state $x\in\mathcal X$ and its stationary version $(\tilde X_t,\,t\ge 0)$. After that we estimate the random variable
$$\tilde\tau(x)=\tilde\tau(\mathcal Z)\stackrel{\mathrm{def}}{=}\inf\big\{t>0:\ Z_t=\tilde Z_t\big\}.$$
If we prove the finiteness of $\mathbf E\,\varphi(\tilde\tau(x))$, then we obtain
$$\big\|\mathcal P^x_t-\mathcal P\big\|_{TV}\le\mathbf P\{\tilde\tau(x)>t\}\le\frac{\mathbf E\,\varphi(\tilde\tau(x))}{\varphi(t)}$$
analogously to the inequality (3).

Definition 1 (Stationary coupling). A successful coupling of a Markov process and its stationary version is called a stationary successful coupling.

We call the method of construction of the stationary successful coupling, together with the use of this construction for bounds on the convergence rate of the distribution of a Markov process to the stationary distribution, the stationary coupling method.

Our goal is to describe the construction of the stationary successful coupling for the renewal process, and the application of this construction in order to obtain bounds for the convergence of the distribution of regenerative processes to the stationary distribution.
This article is divided into 5 Sections, including the Introduction. In Section 2 we set up the main definitions and some necessary denotations. Section 3 describes the construction of the stationary successful coupling for the backward renewal process when the Key Condition is satisfied. Section 4 demonstrates the application of the stationary coupling method to the bounds for the convergence rate of the backward renewal process. Section 5 extends the results of Section 4 to regenerative Markov and regenerative non-Markov processes and discusses the way to use the stationary coupling method in queueing theory.
2 Main definitions and denotations

Definition 2 (Renewal process). Let $\{\zeta_i\}_{i=0}^\infty$ be a sequence of positive independent random variables, where the random variables $\{\zeta_i\}_{i=1}^\infty$ are identically distributed; denote $F(s)\stackrel{\mathrm{def}}{=}\mathbf P\{\zeta_i\le s\}$ for $i\ge 1$, and $G(s)\stackrel{\mathrm{def}}{=}\mathbf P\{\zeta_0\le s\}$.

Suppose that $\mathbf E\,\zeta_i<\infty$ for all $i\ge 0$, and put $\theta_n\stackrel{\mathrm{def}}{=}\sum_{i=0}^{n}\zeta_i$ for each $n\ge 0$; $\theta_n$ is referred to as the $n$-th renewal time (or renewal point), the intervals $[\theta_n,\theta_{n+1}]$ being called renewal intervals, and the differences $\theta_{i}-\theta_{i-1}=\zeta_i$ being called renewal periods; $\theta_0=\zeta_0$ is called the first renewal point.

Then the random variable
$$R_t\stackrel{\mathrm{def}}{=}\sum_{n=0}^{\infty}\mathbf 1(\theta_n\le t)$$
(where $\mathbf 1(\cdot)$ is the indicator function) represents the number of renewals that have occurred by time $t$, and we call the process $(R_t,\,t\ge 0)$ a renewal process. If $\theta_0=\zeta_0\not\equiv 0$, then the process $(R_t,\,t\ge 0)$ is called delayed.
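The counting formula of Definition 2 can be sketched as follows (the renewal periods below are hypothetical numbers chosen only for illustration):

```python
# Minimal sketch of Definition 2 with fixed, purely illustrative renewal
# periods: theta_n are the partial sums of zeta_i, and R_t counts how many
# renewal times theta_n have occurred by time t.
def renewal_count(t, periods):
    total, count = 0.0, 0
    for z in periods:          # theta_n = zeta_0 + ... + zeta_n
        total += z
        if total <= t:
            count += 1
        else:
            break
    return count

periods = [0.5, 1.0, 0.7]      # renewal times theta = 0.5, 1.5, 2.2
assert renewal_count(0.4, periods) == 0   # before the first renewal point
assert renewal_count(1.6, periods) == 2   # theta_0 and theta_1 have occurred
assert renewal_count(3.0, periods) == 3
```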
Remark 1. The renewal process $(R_t,\,t\ge 0)$ is a counting process, and it is not regenerative.
Definition 3 (Backward and forward renewal processes). Let
$$N_t\stackrel{\mathrm{def}}{=}t-\max\{\theta_n:\ \theta_n\le t\}\qquad\text{and}\qquad N^*_t\stackrel{\mathrm{def}}{=}\min\{\theta_n:\ \theta_n\ge t\}-t,$$
where $N_t$ is the backward renewal time of the renewal process $R_t$, and $N^*_t$ is the forward renewal time of the renewal process $R_t$. We call the processes $(N_t,\,t\ge 0)$ and $(N^*_t,\,t\ge 0)$ the backward renewal process and the forward renewal process respectively. At the same time we call the process $(N_t,\,t\ge 0)$ an embedded backward renewal process of the renewal process $(R_t,\,t\ge 0)$.

Remark 2. The processes $(N_t,\,t\ge 0)$ and $(N^*_t,\,t\ge 0)$ are Markov piecewise-linear regenerative processes with the state space $\mathcal R\stackrel{\mathrm{def}}{=}\mathbb R_{\ge 0}$ with the Borel $\sigma$-algebra $\sigma(\mathcal R)$. Therefore we construct the stationary successful coupling for the backward renewal process $(N_t,\,t\ge 0)$.

Remark 3. It is a well-known fact that if the distribution $F(s)$ is not lattice, then
$$\lim_{t\to\infty}\mathbf P\{N_t\le s\}=\lim_{t\to\infty}\mathbf P\{N^*_t\le s\}=\tilde F(s),\qquad\text{where }\ \tilde F(s)=(\mathbf E\,\zeta_1)^{-1}\int_0^s(1-F(u))\,\mathrm d u. \eqno(4)$$

Below we shall see that application of the stationary coupling method is possible only if the following Key Condition is satisfied.
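Formula (4) can be checked by simulation. The sketch below is the author's illustration, assuming a hypothetical renewal period $\zeta\sim U(0,2)$, so that $\mu=1$, $F(u)=u/2$ and $\tilde F(s)=s-s^2/4$ on $[0,2]$; the empirical distribution of the backward renewal time $N_t$ at a large $t$ is compared with $\tilde F$.

```python
import random

# Monte Carlo check of formula (4) (author's sketch, with a hypothetical
# renewal period zeta ~ Uniform(0, 2)): here E zeta = 1, F(u) = u/2 on [0, 2],
# so Ftilde(s) = integral_0^s (1 - u/2) du = s - s^2/4 for s in [0, 2].
def backward_time(t, rng):
    # renewal times are partial sums; N_t = t - (last renewal time <= t)
    theta, last = 0.0, 0.0
    while theta <= t:
        last = theta
        theta += rng.uniform(0.0, 2.0)
    return t - last

rng = random.Random(1)
t_big, n = 50.0, 4000
samples = [backward_time(t_big, rng) for _ in range(n)]

def f_tilde(s):
    return s - s * s / 4.0

for s in (0.5, 1.0, 1.5):
    empirical = sum(x <= s for x in samples) / n
    assert abs(empirical - f_tilde(s)) < 0.05
```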
Key Condition. In what follows we suppose that the cumulative distribution function of the renewal period of the renewal process (or of the length of the regeneration period of the regenerative process) satisfies
$$\int\limits_{\{s:\ \exists F'(s)\}}F'(s)\,\mathrm d s>0,\qquad\text{and}\qquad \mathbf E\,\zeta_i<\infty.$$

Remark 4. The Key Condition for the renewal process implies:

• $\mathbf E\,\zeta_i>0$ for all $i\ge 0$;

• there exists an invariant probability distribution $\mathcal P$ on $(\mathcal R,\sigma(\mathcal R))$ which satisfies (4) such that $\mathcal P^r_t\Longrightarrow\mathcal P$, where $\mathcal P^r_t(M)\stackrel{\mathrm{def}}{=}\mathbf P\{N_t\in M\,|\,N_0=r\}$, $M\in\sigma(\mathcal R)$.

Remark 5. For convenience of the reader we assume that the first renewal time of the process $(R_t,\,t\ge 0)$ has the cumulative distribution function
$$F_r(s)\stackrel{\mathrm{def}}{=}\frac{F(r+s)-F(r)}{1-F(r)},$$
where $r\ge 0$ is the initial state of the backward renewal process $(N_t,\,t\ge 0)$; $F_r(s)$ is the cumulative distribution function of the residual time of the renewal period if $r$ is a given elapsed time of this period.

Denotation 1. For a nondecreasing function $F(s)$ we introduce the generalized inverse $F^{-1}(y)\stackrel{\mathrm{def}}{=}\inf\{x:\ F(x)\ge y\}$.

Denotation 2. In what follows, $\mu\stackrel{\mathrm{def}}{=}\mathbf E\,\zeta_1$ and $\mu_0\stackrel{\mathrm{def}}{=}\mathbf E\,\zeta_0$.

Denotation 3. Here and hereafter we put $\tilde F(s)\stackrel{\mathrm{def}}{=}\mu^{-1}\int_0^s(1-F(u))\,\mathrm d u$ and $\tilde f(s)\stackrel{\mathrm{def}}{=}\tilde F'(s)=\mu^{-1}(1-F(s))$.

Denotation 4. For $r\ge 0$, $F_r(s)\stackrel{\mathrm{def}}{=}\dfrac{F(s+r)-F(r)}{1-F(r)}$.

Figure 1: Illustration to Proposition 1 and Remark 6. The mixed-type cumulative distribution function $F(s)$ has a positive density $f(s)$ within two intervals, and three jumps.

Denotation 5. $U$, $U'$, $U''$, $U_i$, $U'_i$, $U''_i$, $U'''_i$ are independent random variables uniformly distributed on $[0,1)$, defined on some probability space $(\Omega,\mathcal F,\mathbf P)$.

Denotation 6. Denote
$$\varphi(s)\stackrel{\mathrm{def}}{=}\mathbf 1\big(\exists\,F'(s)\big)\times\big(F'(s)\wedge\tilde F'(s)\big)=\begin{cases}F'(s)\wedge\tilde F'(s),&\text{if }F'(s)\text{ exists},\\ 0,&\text{otherwise},\end{cases}$$
and $\Phi(s)\stackrel{\mathrm{def}}{=}\int_0^s\varphi(u)\,\mathrm d u$; $\kappa\stackrel{\mathrm{def}}{=}\Phi(+\infty)$; $\bar\kappa\stackrel{\mathrm{def}}{=}1-\kappa$.

Proposition 1.
The Key Condition implies $\kappa=\int_0^\infty\varphi(s)\,\mathrm d s>0$.

Proof. Indeed, the Key Condition implies that there exists a positive density $f(s)=F'(s)$ on some interval $(s_1,s_2)$, $s_1<s_2$. It is easy to check that $F(s)<1$ for all $s\in(0,s_2)$, hence
$$\tilde f(s)=\tilde F'(s)=\frac{1-F(s)}{\mu}\ge\frac{1-F(s_2)}{\mu}>0\qquad\text{for }s\in(s_1,s_2),$$
and
$$\kappa\ge\int_{s_1}^{s_2}\big(f(s)\wedge\tilde f(s)\big)\,\mathrm d s>0.\qquad\bullet$$

Remark 6. If the distribution $F(s)$, having an absolutely continuous component, is close to a discrete distribution, then $\kappa$ is close to zero — see Fig. 1.

Denotation 7. We introduce $\Psi(s)\stackrel{\mathrm{def}}{=}F(s)-\Phi(s)$ and $\tilde\Psi(s)\stackrel{\mathrm{def}}{=}\tilde F(s)-\Phi(s)$.

Remark 7. Note that $\Psi(+\infty)=\tilde\Psi(+\infty)=1-\kappa\ [=\bar\kappa]$, and the functions $\Phi(s)$, $\Psi(s)$ and $\tilde\Psi(s)$ are nondecreasing.

Remark 8. It is easily seen that $\kappa^{-1}\Phi(s)$ is a cumulative distribution function. Also, if $\kappa<1$, then $\bar\kappa^{-1}\Psi(s)$ and $\bar\kappa^{-1}\tilde\Psi(s)$ are cumulative distribution functions. If $\kappa=1$, then $\Phi(s)\equiv F(s)\equiv\tilde F(s)=1-e^{-\lambda s}$ for $\lambda=\mu^{-1}$, and $\Psi(s)\equiv\tilde\Psi(s)\equiv 0$; in this case we put $\bar\kappa^{-1}\Psi(s)\stackrel{\mathrm{def}}{\equiv}\bar\kappa^{-1}\tilde\Psi(s)\stackrel{\mathrm{def}}{\equiv}1$ and $\Psi^{-1}(u)\stackrel{\mathrm{def}}{\equiv}\tilde\Psi^{-1}(u)\stackrel{\mathrm{def}}{\equiv}0$.

Denotation 8. Here and hereafter we introduce
$$\Xi(U,U',U'')\stackrel{\mathrm{def}}{=}\mathbf 1(U<\kappa)\,\Phi^{-1}(\kappa U')+\mathbf 1(U\ge\kappa)\,\Psi^{-1}(\bar\kappa U'');$$
$$\tilde\Xi(U,U',U'')\stackrel{\mathrm{def}}{=}\mathbf 1(U<\kappa)\,\Phi^{-1}(\kappa U')+\mathbf 1(U\ge\kappa)\,\tilde\Psi^{-1}(\bar\kappa U'').$$

Remark 9. Clearly,
$$F(s)=\kappa\big(\kappa^{-1}\Phi(s)\big)+\bar\kappa\big(\bar\kappa^{-1}\Psi(s)\big)=\Phi(s)+\Psi(s),\qquad \tilde F(s)=\kappa\big(\kappa^{-1}\Phi(s)\big)+\bar\kappa\big(\bar\kappa^{-1}\tilde\Psi(s)\big)=\Phi(s)+\tilde\Psi(s).$$
Hence
$$\mathbf P\{\Xi(U,U',U'')\le s\}=\mathbf P\{\Xi\le s\,|\,U<\kappa\}\,\mathbf P\{U<\kappa\}+\mathbf P\{\Xi\le s\,|\,U\ge\kappa\}\,\mathbf P\{U\ge\kappa\}$$
$$=\kappa\,\mathbf P\{\Phi^{-1}(\kappa U')\le s\}+(1-\kappa)\,\mathbf P\{\Psi^{-1}((1-\kappa)U'')\le s\}=\kappa\,\mathbf P\{U'\le\Phi(s)\kappa^{-1}\}+(1-\kappa)\,\mathbf P\{U''\le\Psi(s)(1-\kappa)^{-1}\}=F(s).$$
Analogously, $\mathbf P\{\tilde\Xi(U,U',U'')\le s\}=\tilde F(s)$. Moreover,
$$\mathbf P\{\Xi(U,U',U'')=\tilde\Xi(U,U',U'')\}=\mathbf P\{U<\kappa\}=\kappa,$$
since the distribution $\tilde\Psi(s)$ is absolutely continuous, and the measure of the common part of the distributions $\Psi(s)$ and $\tilde\Psi(s)$ is equal to zero.

Denotation 9.
For the random process $(X_t,\,t\ge 0)$ we denote $\mathcal P^x_t(M)\stackrel{\mathrm{def}}{=}\mathbf P\{X_t\in M\,|\,X_0=x\}$. If this process is ergodic, then $\mathcal P(M)\stackrel{\mathrm{def}}{=}\lim\limits_{t\to\infty}\mathcal P^x_t(M)$.

3 The stationary successful coupling for the backward renewal process $(N_t,\,t\ge 0)$

This section considers the renewal process $(R_t,\,t\ge 0)$ and its embedded backward renewal process $(N_t,\,t\ge 0)$ in the case when the Key Condition is satisfied.
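Before turning to the construction, the splitting of Denotation 8 and Remark 9 can be sketched numerically. For the hypothetical renewal period $\zeta\sim U(0,2)$ one gets $\kappa=3/4$; the closed forms of $\Phi^{-1}$, $\Psi^{-1}$ and $\tilde\Psi^{-1}$ below are derived by the author for this example only. The sampled pair $(\Xi,\tilde\Xi)$ then has marginals $F$ and $\tilde F$ and coincides exactly when the common $\Phi$-component is drawn, i.e. with probability $\kappa$.

```python
import random

# Sketch of Denotation 8 and Remark 9 for a hypothetical renewal period
# zeta ~ Uniform(0, 2).  For this example kappa = 3/4, and the closed forms
# below for the inverses of Phi, Psi and Psitilde are derived by the author
# for this illustration only (they are not taken from the paper).
KAPPA = 0.75

def phi_inv(u):        # Phi(s) = s/2 on [0,1],  s - s^2/4 - 1/4 on [1,2]
    return 2.0 * u if u <= 0.5 else 2.0 - (3.0 - 4.0 * u) ** 0.5

def psi_inv(y):        # Psi(s) = (s-1)^2 / 4 on [1,2]
    return 1.0 + 2.0 * y ** 0.5

def psi_tilde_inv(y):  # Psitilde(s) = s/2 - s^2/4 on [0,1]
    return 1.0 - (1.0 - 4.0 * y) ** 0.5

def draw_pair(rng):
    # (Xi, Xitilde) of Denotation 8: common Phi-component with probability kappa
    u, u1, u2 = rng.random(), rng.random(), rng.random()
    if u < KAPPA:
        x = phi_inv(KAPPA * u1)
        return x, x            # the two draws coincide
    return psi_inv((1 - KAPPA) * u2), psi_tilde_inv((1 - KAPPA) * u2)

rng = random.Random(2)
pairs = [draw_pair(rng) for _ in range(20000)]
coincide = sum(x == y for x, y in pairs) / len(pairs)
assert abs(coincide - KAPPA) < 0.02                      # P{Xi = Xitilde} = kappa
emp_F = sum(x <= 1.0 for x, _ in pairs) / len(pairs)
assert abs(emp_F - 0.5) < 0.02                           # Xi ~ F, and F(1) = 1/2
emp_Ft = sum(y <= 1.0 for _, y in pairs) / len(pairs)
assert abs(emp_Ft - 0.75) < 0.02                         # Xitilde ~ Ftilde, Ftilde(1) = 3/4
```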
3.1 Independent versions of the processes

At the beginning, let us recall that independent versions of the processes $(N_t,\,t\ge 0)$ and $(\tilde N_t,\,t\ge 0)$ can be constructed as follows (see, e.g., [1, Chap. V, Proposition 3.5 and Corollary 3.6]).

3.1.1 Construction of a version of the non-stationary backward renewal process $(N_t,\,t\ge 0)$ — see Fig. 2.

If $N_0=r$, then $\mathbf P\{\zeta_0\le s\}=F_r(s)$, where $\zeta_0$ is the first renewal time of the corresponding renewal process $(R_t,\,t\ge 0)$. Put $\zeta_0\stackrel{\mathrm{def}}{=}F_r^{-1}(U_0)$ and $\zeta_i\stackrel{\mathrm{def}}{=}F^{-1}(U_i)$ for $i>0$; $\theta_i\stackrel{\mathrm{def}}{=}\sum_{j=0}^{i}\zeta_j$; then
$$Z_t\stackrel{\mathrm{def}}{=}\mathbf 1(t\ge\theta_0)\big(t-\max\{\theta_i:\ \theta_i\le t\}\big)+\mathbf 1(t<\theta_0)(r+t)=\mathbf 1(t\ge\theta_0)\big(t-\theta_{R_t}\big)+\mathbf 1(t<\theta_0)(r+t)\stackrel{\mathcal D}{=}N_t.$$

Figure 2: Construction of the version $(Z_t,\,t\ge 0)$ of the process $(N_t,\,t\ge 0)$.

3.1.2 Construction of a version of the stationary backward renewal process $(\tilde N_t,\,t\ge 0)$ — see Fig. 3.

Remark 3 provides us with the distribution of the stationary processes $(\tilde N_t,\,t\ge 0)$ and $(\tilde N^*_t,\,t\ge 0)$: $\mathbf P\{\tilde N_t\le s\}=\mathbf P\{\tilde N^*_t\le s\}=\tilde F(s)$, and therefore $\mathbf P\{\tilde N_0\le s\}=\mathbf P\{\tilde N^*_0\le s\}=\tilde F(s)$.

So, we put $\tilde\theta_0=\tilde N^*_0=\tilde\zeta_0\stackrel{\mathrm{def}}{=}\tilde F^{-1}(U'_0)$, and $\tilde\zeta_i\stackrel{\mathrm{def}}{=}F^{-1}(U'_i)$ for $i>0$; $\tilde\theta_i\stackrel{\mathrm{def}}{=}\sum_{j=0}^{i}\tilde\zeta_j$; $\tilde Z_0\stackrel{\mathrm{def}}{=}F_{\tilde\theta_0}^{-1}(U'')$ (see Denotation 4); then we put
$$\tilde Z_t\stackrel{\mathrm{def}}{=}\mathbf 1\big(t<\tilde\theta_0\big)\big(t+\tilde Z_0\big)+\mathbf 1\big(t\ge\tilde\theta_0\big)\big(t-\max\{\tilde\theta_n:\ \tilde\theta_n\le t\}\big)\stackrel{\mathcal D}{=}\tilde N_t.$$

Remark 10.
$$\mathbf P\{\tilde Z_0\le s\}=\int_0^\infty F_u(s)\,\mathrm d\tilde F(u)=\int_0^\infty\frac{F(s+u)-F(u)}{1-F(u)}\times\frac{1-F(u)}{\mu}\,\mathrm d u$$
$$=\mu^{-1}\left(\int_0^\infty(1-F(u))\,\mathrm d u-\int_0^\infty(1-F(s+u))\,\mathrm d u\right)=\mu^{-1}\int_0^s(1-F(u))\,\mathrm d u=\tilde F(s).$$

Figure 3: Construction of the version $(\tilde Z_t,\,t\ge 0)$ of the process $(\tilde N_t,\,t\ge 0)$.

Remark 11. The processes $(Z_t,\,t\ge 0)$ and $(\tilde Z_t,\,t\ge 0)$ described in Sections 3.1.1 and 3.1.2 are independent, since they are constructed by using independent random variables (see Denotation 5).

3.2 Construction of the successful coupling

Now we construct the successful coupling for the process $(N_t,\,t\ge 0)$ with the initial state $N_0=r$ and its stationary version $(\tilde N_t,\,t\ge 0)$ with the initial distribution $\mathbf P\{\tilde N_0\le s\}=\tilde F(s)$ (on some probability space $(\Omega,\mathcal F,\mathbf P)$ — see Denotation 5). Here we use the principles of construction exposed in Sections 3.1.1 and 3.1.2. However, the construction considered in these Sections gives independent versions of the processes $(N_t,\,t\ge 0)$ and $(\tilde N_t,\,t\ge 0)$, while, as we have mentioned before, a successful coupling for processes in continuous time is a pair of dependent processes.
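The construction of Section 3.1.1 can be sketched deterministically as follows (the initial state and the renewal periods below are hypothetical numbers chosen only for illustration):

```python
# Deterministic sketch of the construction of Section 3.1.1: given the initial
# backward time r and a fixed list of renewal periods (hypothetical numbers
# chosen only for illustration), build the version Z_t of (N_t, t >= 0).
def make_Z(r, periods):
    thetas, acc = [], 0.0
    for z in periods:              # theta_i are the partial sums of the periods
        acc += z
        thetas.append(acc)
    def Z(t):
        past = [th for th in thetas if th <= t]
        if not past:               # before the first renewal: Z_t = r + t
            return r + t
        return t - max(past)       # elapsed time since the last renewal
    return Z

Z = make_Z(r=0.5, periods=[1.0, 2.0, 1.5])   # renewals at 1.0, 3.0, 4.5
assert abs(Z(0.3) - 0.8) < 1e-12   # 0.5 + 0.3: first period not finished yet
assert abs(Z(2.0) - 1.0) < 1e-12   # 2.0 - 1.0
assert abs(Z(3.2) - 0.2) < 1e-12   # 3.2 - 3.0
assert abs(Z(5.0) - 0.5) < 1e-12   # 5.0 - 4.5
```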
So, we have to modify the construction of Sections 3.1.1 and 3.1.2. To construct the successful coupling for the processes $(N_t,\,t\ge 0)$ and $(\tilde N_t,\,t\ge 0)$, i.e. a pair of dependent backward renewal processes, it suffices to construct all renewal times of the corresponding renewal processes $(R_t,\,t\ge 0)$ and $(\tilde R_t,\,t\ge 0)$ — the times $\vartheta_i$ on Fig. 4.

Remark 12. Since the processes $(N_t,\,t\ge 0)$ and $(\tilde N_t,\,t\ge 0)$ are piecewise-linear processes, the coincidence of these processes can occur only at a common renewal time.

We construct the pair $(\mathcal Z_t,\,t\ge 0)=\big((Z_t,\tilde Z_t),\,t\ge 0\big)$ by induction — see Fig. 4. Since the studied backward renewal process is assumed to be a homogeneous Markov process, the distribution of the first renewal time of the non-stationary version of this process depends only on the initial state; namely, $G(s)=F_r(s)$ if $N_0=r$.

3.2.1 Construction of the pair $(\mathcal Z_t,\,t\ge 0)$

Basis of induction. We put $\theta_0\stackrel{\mathrm{def}}{=}G^{-1}(U_0)\ [=F_r^{-1}(U_0)]$, $\tilde\theta_0\stackrel{\mathrm{def}}{=}\tilde F^{-1}(U'_0)$, $\tilde Z_0\stackrel{\mathrm{def}}{=}F_{\tilde\theta_0}^{-1}(U''_0)$; here and hereafter $\theta_0$ is the first renewal time of the process $(Z_t,\,t\ge 0)$, and $\tilde\theta_0$ is the first renewal time of the process $(\tilde Z_t,\,t\ge 0)$; $\tilde Z_0$ has the initial distribution $\mathcal P$ of the stationary backward renewal process, i.e. $\mathbf P\{\tilde Z_0\le s\}=\tilde F(s)$.

Now we introduce $Z_t\stackrel{\mathrm{def}}{=}t+r\ [=t+N_0]$ and $\tilde Z_t\stackrel{\mathrm{def}}{=}t+\tilde Z_0$ for $t\in[0,\vartheta_1)$, where $\vartheta_1\stackrel{\mathrm{def}}{=}\theta_0\wedge\tilde\theta_0$ (on Fig. 4: $\vartheta_1=\tilde\theta_0$). The time $\vartheta_1$ is the first time when a renewal of at least one of the processes $(Z_t,\,t\ge 0)$ and $(\tilde Z_t,\,t\ge 0)$ occurred.

Step of induction. Suppose that we have already constructed the process $(\mathcal Z_t,\,t\ge 0)$ for $t\in[0,\vartheta_n)$, where $\vartheta_n=\theta_i\wedge\tilde\theta_j$. Then there are only three alternatives.

Figure 4: Construction of the successful coupling $(\mathcal Z_t,\,t\ge 0)$.

Case 1.
In this case we have $\vartheta_n=\theta_i=\tilde\theta_j$ — on Fig. 4 this situation occurs for the first time at the point marked $\tilde\tau$, and then at all subsequent points. In this situation, at the time $\vartheta_n$ the processes coincide, and at the same time they begin a new renewal period with the same distribution. Then we put
$$Z_{\vartheta_n}=\tilde Z_{\vartheta_n}=0,\qquad \theta_{i+1}=\tilde\theta_{j+1}=\vartheta_{n+1}=F^{-1}(U_{n+1})+\vartheta_n,$$
and $Z_t=\tilde Z_t\stackrel{\mathrm{def}}{=}t-\vartheta_n$ for $t\in[\vartheta_n,\vartheta_{n+1})$. Thus, after the first coincidence (the time $\tilde\tau$ on Fig. 4) the processes $(Z_t,\,t\ge 0)$ and $(\tilde Z_t,\,t\ge 0)$ have identical renewal periods, and therefore these processes are identical.

Case 2. In this case we have $\vartheta_n=\tilde\theta_j<\theta_i$ (such are the first renewal times of the stationary version on Fig. 4), i.e. the renewal period of the stationary version of our renewal process ended before the renewal period of the non-stationary version ended. In this case we construct the processes similarly to the previous Sections, i.e. we put
$$\tilde Z_{\vartheta_n}=0,\qquad Z_{\vartheta_n}=Z_{\vartheta_n-},\qquad \tilde\theta_{j+1}\stackrel{\mathrm{def}}{=}\tilde\theta_j+F^{-1}(U_{n+1}),$$
and $\tilde Z_t\stackrel{\mathrm{def}}{=}t-\vartheta_n$, $Z_t\stackrel{\mathrm{def}}{=}t-\vartheta_n+Z_{\vartheta_n}$ for $t\in[\vartheta_n,\vartheta_{n+1})$, where $\vartheta_{n+1}\stackrel{\mathrm{def}}{=}\theta_i\wedge\tilde\theta_{j+1}$. In fact, in this situation we construct the process $(\tilde Z_t,\,t\ge 0)$ according to the scheme of Section 3.1.2, and we do not change anything in the behaviour of the process $(Z_t,\,t\ge 0)$.

Case 3. In this case we have $\vartheta_n=\theta_i<\tilde\theta_j$ (such are the first renewal times of the non-stationary version on Fig. 4): the renewal period of the non-stationary process ended, while the renewal period of the stationary process has not passed yet. In this situation we arrange the behaviour of the processes in such a way that they may coincide with positive probability (equal to $\kappa$) at the next renewal time: we put
$$\theta_{i+1}\stackrel{\mathrm{def}}{=}\theta_i+\Xi(U_{n+1},U'_{n+1},U''_{n+1}),\qquad \tilde\theta_j\stackrel{\mathrm{def}}{=}\theta_i+\tilde\Xi(U_{n+1},U'_{n+1},U''_{n+1}),$$
and $Z_t\stackrel{\mathrm{def}}{=}t-\vartheta_n$, $\tilde Z_t\stackrel{\mathrm{def}}{=}t-\vartheta_n+F_{\varrho}^{-1}(U'''_{n+1})$ for $t\in[\vartheta_n,\vartheta_{n+1})$, where $\varrho=\tilde\Xi(U_{n+1},U'_{n+1},U''_{n+1})$ and $\vartheta_{n+1}\stackrel{\mathrm{def}}{=}\theta_{i+1}\wedge\tilde\theta_j$. As
$$\mathbf P\big\{\Xi(U_{n+1},U'_{n+1},U''_{n+1})=\tilde\Xi(U_{n+1},U'_{n+1},U''_{n+1})\big\}=\kappa>0$$
(see Remark 9), in this situation $\mathbf P\{\theta_{i+1}=\tilde\theta_j\}=\kappa>0$.

So, the pair $(\mathcal Z_t,\,t\ge 0)=\big((Z_t,\tilde Z_t),\,t\ge 0\big)$ is a successful coupling for the processes $(N_t,\,t\ge 0)$ and $(\tilde N_t,\,t\ge 0)$.

Lemma 1. $Z_t\stackrel{\mathcal D}{=}N_t$ and $\tilde Z_t\stackrel{\mathcal D}{=}\tilde N_t$ for all $t\ge 0$.

Proof. First, we see that the construction of the non-stationary process $(Z_t,\,t\ge 0)$ is identical to the construction of Section 3.1.1.
So, the process $(Z_t,\,t\ge 0)$ is Markov, and $Z_t\stackrel{\mathcal D}{=}N_t$ for all $t\ge 0$.

Now consider the process $(\tilde Z_t,\,t\ge 0)$ for $t\in[0,\theta_0)$: its construction is identical to the construction of Section 3.1.2. At the time $\theta_0$, this process restarts from the stationary distribution; at this time we have forgotten the previous history of the process $(\tilde Z_t,\,t\ge 0)$, and it begins its motion anew. The construction of the process $(\tilde Z_t,\,t\ge 0)$ after the time $\theta_0$ is again identical to the construction of Section 3.1.2, therefore the distribution of this process is stationary as long as we do not interfere with its construction; it means that $\tilde Z_t\stackrel{\mathcal D}{=}\tilde N_t$ between the times $\theta_0$ and $\theta_1$.

Now we consider the next intervals $[\theta_i,\theta_{i+1})$: the process $(\tilde Z_t,\,t\ge 0)$ restarts from the stationary distribution at the times $\theta_i$, and up to the next restart (at the time $\theta_{i+1}$) this process has a stationary distribution. So, $\tilde Z_t\stackrel{\mathcal D}{=}\tilde N_t$ for all $t\ge 0$. $\bullet$

Lemma 2. $\mathbf P\{\tilde\tau(r)<\infty\}=1$, where $\tilde\tau(r)\stackrel{\mathrm{def}}{=}\inf\big\{t\ge 0:\ Z_t=\tilde Z_t\,\big|\,N_0=r\big\}$, $r\in\mathcal R$.

Proof. Denote
$$S_n\stackrel{\mathrm{def}}{=}\big\{Z_{\theta_n}=\tilde Z_{\theta_n}\big\},\qquad \widehat S_n\stackrel{\mathrm{def}}{=}S_n\cap\Big(\bigcap_{1\le i\le n-1}\overline{S_i}\Big)=\big\{Z_{\theta_n}=\tilde Z_{\theta_n}\ \&\ Z_{\theta_i}\ne\tilde Z_{\theta_i},\ i<n\big\}.$$
It can be easily seen that $\widehat S_n\cap\widehat S_m=\varnothing$ if $n\ne m$, $\mathbf P(\widehat S_n)=\kappa\bar\kappa^{\,n-1}$, and $\mathbf P\Big(\bigcup_{n=1}^\infty\widehat S_n\Big)=1$.

In accordance with our construction of the pair $(\mathcal Z_t,\,t\ge 0)$, we obtain $\mathbf P\{Z_{\theta_0}=\tilde Z_{\theta_0}\}=0$, since the distribution $\tilde F(s)$ is absolutely continuous, and $\mathbf P\{\tilde\tau=\theta_n\}=\mathbf P(\widehat S_n)=\kappa\bar\kappa^{\,n-1}$, $n\ge 1$. The events $\widehat S_n$ do not depend on $r$, $n\ge 1$, and the random variables $\zeta_i\stackrel{\mathrm{def}}{=}\theta_i-\theta_{i-1}$ ($i\ge 1$) are independent, since they have been constructed by using different independent random variables. Then from the law of total probability we deduce:
$$\mathbf P\{\tilde\tau(r)<\infty\}=\sum_{n=1}^\infty\mathbf P\{\tilde\tau(r)<\infty\,|\,\widehat S_n\}\,\mathbf P(\widehat S_n)=\sum_{n=1}^\infty\mathbf P(\widehat S_n)=\kappa\sum_{n=1}^\infty\bar\kappa^{\,n-1}=1.$$
If $\kappa=1$ then $\mathbf P(\widehat S_1)=1$ and $\tilde\tau(r)=\theta_1$; $\mathbf P\{\theta_1<\infty\}=1$ if the Key Condition is satisfied. So, Lemma 2 is proved. $\bullet$

Lemma 3. $Z_t=\tilde Z_t$ for all $t\ge\tilde\tau(r)$.

Proof. The statement of Lemma 3 follows from the construction in Case 1 (Section 3.2.1), and Lemma 3 is proved.
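The renewal-period structure of $\tilde\tau(r)$ used in the proofs above and below — a geometric number of coincidence attempts with success probability $\kappa$, the failed periods distributed as $\bar\kappa^{-1}\Psi$ and the final one as $\kappa^{-1}\Phi$ — can be checked by simulation. The sketch below is the author's illustration for the hypothetical $U(0,2)$ example with $\kappa=3/4$ and $r=0$ (the inverse formulas are derived for this example only); the sample mean should respect the bound (6), $\mathbf E\,\tilde\tau\le\mathbf E\,\zeta_0+2\kappa^{-1}\mathbf E\,\zeta_1$.

```python
import random

# Author's sketch (for the hypothetical Uniform(0, 2) example, kappa = 3/4,
# initial state r = 0) of the renewal-period structure of the coupling epoch:
# a geometric number of attempts with success probability kappa; failed
# periods ~ Psi/kappabar, and the final (coinciding) period ~ Phi/kappa.
KAPPA = 0.75

def phi_inv(u):   # inverse of Phi for Uniform(0, 2); derived for this example
    return 2.0 * u if u <= 0.5 else 2.0 - (3.0 - 4.0 * u) ** 0.5

def psi_inv(y):   # inverse of Psi(s) = (s-1)^2/4 on [1, 2]
    return 1.0 + 2.0 * y ** 0.5

def coupling_epoch(rng):
    tau = rng.uniform(0.0, 2.0)              # zeta_0 ~ F (here r = 0)
    while rng.random() >= KAPPA:             # failed coincidence attempt
        tau += psi_inv((1 - KAPPA) * rng.random())   # period ~ Psi / kappabar
    return tau + phi_inv(KAPPA * rng.random())       # final period ~ Phi / kappa

rng = random.Random(3)
n = 20000
mean_tau = sum(coupling_epoch(rng) for _ in range(n)) / n
bound = 1.0 + 2.0 / KAPPA     # the bound (6): E zeta_0 + 2 kappa^{-1} E zeta_1
assert mean_tau < bound       # analytically E tau = 7/3 here, while bound = 11/3
assert abs(mean_tau - 7.0 / 3.0) < 0.1
```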
Lemma 4. $\mathbf E\,\tilde\tau(r)<\infty$.

To prove Lemma 4 we need the following elementary

Proposition 2. Let $\xi$ be a non-negative random variable, and let $E_1$ and $E_2$ be events such that the random variables $\xi$ and $\mathbf 1(E_2)$ are independent. Then
$$\mathbf E\big(\xi\times\mathbf 1(E_1)\big)\le\mathbf E\,\xi;\qquad \mathbf E\big(\xi\times\mathbf 1(E_2)\big)=\mathbf E\,\xi\,\mathbf P(E_2). \eqno(5)$$

Proof of Lemma 4. Let us recall that $\mathbf P(\widehat S_n)=\kappa\bar\kappa^{\,n-1}$, $\zeta_i\stackrel{\mathrm{def}}{=}\theta_i-\theta_{i-1}$, and for all $i\ne j$ the random variable $\zeta_i$ does not depend on the random event $S_j$. Then from (5) we have, for $\kappa\in(0,1)$:
$$\mathbf E\,\tilde\tau(r)=\mathbf E\,\zeta_0+\mathbf E\big(\mathbf 1(\widehat S_1)\zeta_1\big)+\mathbf E\big(\mathbf 1(\widehat S_2)(\zeta_1+\zeta_2)\big)+\ldots+\mathbf E\Big(\mathbf 1(\widehat S_n)\sum_{i=1}^n\zeta_i\Big)+\ldots$$
$$=\mathbf E\,\zeta_0+\sum_{i=1}^\infty\mathbf E\left(\zeta_i\times\Big(\mathbf 1(\widehat S_i)+\sum_{j=i+1}^\infty\mathbf 1(\widehat S_j)\Big)\right)\le\mathbf E\,\zeta_0+\sum_{i=1}^\infty\mathbf E\,\zeta_i\times\Big(\bar\kappa^{\,i-1}+\bar\kappa^{\,i-1}\kappa\sum_{j=0}^\infty\bar\kappa^{\,j}\Big)$$
$$=\mathbf E\,\zeta_0+2\sum_{i=1}^\infty\mathbf E\,\zeta_1\,\bar\kappa^{\,i-1}=\mathbf E\,\zeta_0+2\kappa^{-1}\mathbf E\,\zeta_1. \eqno(6)$$
If $\kappa=1$ then $\mathbf P(\widehat S_1)=1$; hence $\mathbf E\,\tilde\tau(r)=\mathbf E\,\theta_1<\infty$, and Lemma 4 is proved. $\bullet$

Thus, Lemmata 1–4 imply the following
Theorem 1.
The paired process $(\mathcal Z_t,\,t\ge 0)=\big((Z_t,\tilde Z_t),\,t\ge 0\big)$ constructed in Section 3.2.1 is a strong successful coupling for the backward renewal process $(N_t,\,t\ge 0)$.

4 Bounds for the convergence rate of the backward renewal process

Lemma 5.
Let $(N_t,\,t\ge 0)$ be a backward renewal process which satisfies the Key Condition with the initial state $N_0=r$, and
$$\mu_{0,K}\stackrel{\mathrm{def}}{=}\mathbf E\,(\zeta_0)^K<\infty,\qquad \mu_K\stackrel{\mathrm{def}}{=}\mathbf E\,(\zeta_1)^K<\infty\qquad\text{for some }K\ge 1.$$
Then for the stationary coupling $(\mathcal Z_t,\,t\ge 0)$ constructed by the scheme exposed in Section 3.2.1, for all $k\in[1,K]$ we have $\mathbf E\,(\tilde\tau(r))^k=C(k,r)<\infty$, where $\tilde\tau(r)\stackrel{\mathrm{def}}{=}\inf\big\{t\ge 0:\ Z_t=\tilde Z_t\,\big|\,N_0=r\big\}$, and
$$C(k,r)\le\mu_{0,k}\,\kappa\sum_{n=1}^\infty(n+1)^{k-1}\bar\kappa^{\,n-1}+\mu_k\sum_{n=1}^\infty\big(\kappa\,n(n+2)^{k-1}+\bar\kappa\,(n+1)^{k-1}\big)\bar\kappa^{\,n-2}\stackrel{\mathrm{def}}{=}\widehat C(k,\zeta_0).$$

Proof. Suppose that $\tilde\tau(r)=\theta_\nu$. Then by (5), for $1\le i<\nu$ we obtain
$$\mathbf E\big((\zeta_i)^k\mathbf 1(\widehat S_\nu)\big)=\mathbf P(\widehat S_\nu)\int_0^\infty s^k\,\mathrm d\big(\bar\kappa^{-1}\Psi(s)\big)\le\mu_k\,\kappa\bar\kappa^{\,\nu-2};\qquad \mathbf E\big((\zeta_\nu)^k\mathbf 1(\widehat S_\nu)\big)=\mathbf P(\widehat S_\nu)\int_0^\infty s^k\,\mathrm d\big(\kappa^{-1}\Phi(s)\big)\le\mu_k\,\bar\kappa^{\,\nu-1};$$
$$\text{and}\qquad \mathbf E\big((\zeta_0)^k\mathbf 1(\widehat S_\nu)\big)\le\kappa\bar\kappa^{\,\nu-1}\mu_{0,k}. \eqno(7)$$
Using the inequalities (7) as well as Jensen's inequality (for $k\ge 1$ and $a_i\ge 0$, $\big(\sum_{i=1}^n a_i\big)^k\le n^{k-1}\sum_{i=1}^n a_i^k$), we arrive at an expression similar to formula (6):
$$\mathbf E\,(\tilde\tau(r))^k=\mathbf E\sum_{n=1}^\infty\mathbf 1(\widehat S_n)\Big(\zeta_0+\sum_{i=1}^n\zeta_i\Big)^k\le\mathbf E\sum_{n=1}^\infty(n+1)^{k-1}\Big(\zeta_0^k+\sum_{i=1}^n\zeta_i^k\Big)\mathbf 1(\widehat S_n)$$
$$=\sum_{n=1}^\infty(n+1)^{k-1}\Big(\mathbf E\big(\zeta_0^k\mathbf 1(\widehat S_n)\big)+\sum_{1\le i\le n-1}\mathbf E\big(\zeta_i^k\mathbf 1(\widehat S_n)\big)+\mathbf E\big(\zeta_n^k\mathbf 1(\widehat S_n)\big)\Big)$$
$$\le\sum_{n=1}^\infty(n+1)^{k-1}\big(\kappa\bar\kappa^{\,n-1}\mu_{0,k}+(n-1)\mu_k\kappa\bar\kappa^{\,n-2}+\mu_k\bar\kappa^{\,n-1}\big)\le\widehat C(k,\zeta_0). \eqno(8)$$
So, Lemma 5 is proved. $\bullet$

From now on we introduce the denotations
$$\mathbf E\,e^{\alpha\zeta_0}\stackrel{\mathrm{def}}{=}\varepsilon_{0,\alpha};\qquad \mathbf E\,e^{\alpha\zeta_1}\stackrel{\mathrm{def}}{=}\varepsilon_\alpha;\qquad \tilde\varepsilon_\alpha\stackrel{\mathrm{def}}{=}\int_0^\infty e^{\alpha s}\,\mathrm d\Psi(s).$$

Remark 13. If the Key Condition is satisfied, then $\tilde\varepsilon_0=(1-\kappa)<1$, and there exists $\beta>0$ such that $\tilde\varepsilon_\beta<1$.

Lemma 6.
Let $(N_t,\,t\ge 0)$ be a backward renewal process which satisfies the Key Condition with the initial state $N_0=r$, and
$$\mathbf E\,e^{a\zeta_0}=\varepsilon_{0,a}<\infty,\qquad \mathbf E\,e^{a\zeta_1}=\varepsilon_a<\infty\qquad\text{for some }a>0.$$
Suppose that $\tilde\varepsilon_\beta<1$ for some $\beta>0$. Then for the stationary coupling $(\mathcal Z_t,\,t\ge 0)$ constructed by the scheme exposed in Section 3.2.1, for all $\gamma\in(0,\beta]$ we have $\mathbf E\,e^{\gamma\tilde\tau(r)}=C(\gamma,r)<\infty$, where $\tilde\tau(r)\stackrel{\mathrm{def}}{=}\inf\big\{t\ge 0:\ Z_t=\tilde Z_t\,\big|\,N_0=r\big\}$, and
$$C(\gamma,r)\le\frac{\varepsilon_{0,\beta}\,\varepsilon_\beta}{1-\tilde\varepsilon_\beta}\stackrel{\mathrm{def}}{=}\widehat C(\gamma,\zeta_0).$$

Proof. Once more, assume that $\tilde\tau(r)=\theta_\nu$. Then for $1\le i<\nu$ we obtain
$$\mathbf E\big(e^{\beta\zeta_i}\mathbf 1(\overline{S_i})\big)\le\int_0^\infty e^{\beta s}\,\mathrm d\Psi(s)=\tilde\varepsilon_\beta,\qquad \mathbf E\big(e^{\beta\zeta_\nu}\mathbf 1(S_\nu)\big)\le\int_0^\infty e^{\beta s}\,\mathrm d\Phi(s)\le\varepsilon_\beta.$$
Hence
$$\mathbf E\,e^{\beta\tilde\tau(r)}=\mathbf E\sum_{n=1}^\infty\mathbf 1(\widehat S_n)\exp\Big(\beta\Big(\zeta_0+\sum_{i=1}^n\zeta_i\Big)\Big)=\mathbf E\sum_{n=1}^\infty e^{\beta\zeta_0}\,e^{\beta\zeta_n}\mathbf 1(S_n)\prod_{1\le i\le n-1}\big(e^{\beta\zeta_i}\mathbf 1(\overline{S_i})\big)\le\sum_{n=1}^\infty\varepsilon_{0,\beta}\,\varepsilon_\beta\,(\tilde\varepsilon_\beta)^{n-1}=\frac{\varepsilon_{0,\beta}\,\varepsilon_\beta}{1-\tilde\varepsilon_\beta}. \eqno(9)$$
Since $e^{\gamma\tilde\tau(r)}\le e^{\beta\tilde\tau(r)}$ for $\gamma\le\beta$, Lemma 6 is proved. $\bullet$

Corollary 1.
Let $(N_t,\,t\ge 0)$ be a backward renewal process which satisfies the Key Condition with the initial state $N_0=r$, and $\mu_{0,K}\stackrel{\mathrm{def}}{=}\mathbf E\,(\zeta_0)^K<\infty$, $\mu_K\stackrel{\mathrm{def}}{=}\mathbf E\,(\zeta_1)^K<\infty$ for some $K\ge 1$. Then for all $t>0$ and every $k\in[1,K]$ we have
$$\|\mathcal P^r_t-\mathcal P\|_{TV}\le\widehat C(k,\zeta_0)\,t^{-k},$$
where $\mathcal P^r_t(M)\stackrel{\mathrm{def}}{=}\mathbf P\{N_t\in M\,|\,N_0=r\}$, $\mathcal P(M)\stackrel{\mathrm{def}}{=}\lim\limits_{t\to\infty}\mathcal P^r_t(M)$, $M\in\sigma(\mathcal R)$, and $\zeta_0$ has the cumulative distribution function $F_r(s)$.

Corollary 2.
Let $(N_t,\,t\ge 0)$ be a backward renewal process which satisfies the Key Condition with the initial state $N_0=r$, and for some $a>0$, $\mathbf E\,e^{a\zeta_0}=\varepsilon_{0,a}<\infty$, $\mathbf E\,e^{a\zeta_1}=\varepsilon_a<\infty$. Suppose that $\tilde\varepsilon_\beta<1$ for some $\beta>0$. Then for all $t\ge 0$ and $\gamma\in(0,\beta]$ we derive the estimate
$$\|\mathcal P^r_t-\mathcal P\|_{TV}\le\widehat C(\gamma,\zeta_0)\,e^{-\gamma t},$$
where $\mathcal P^r_t(M)\stackrel{\mathrm{def}}{=}\mathbf P\{N_t\in M\,|\,N_0=r\}$, $\mathcal P(M)\stackrel{\mathrm{def}}{=}\lim\limits_{t\to\infty}\mathcal P^r_t(M)$, $M\in\sigma(\mathcal R)$, and $\zeta_0$ has the cumulative distribution function $F_r(s)$.

Proof. Corollary 1 and Corollary 2 follow from Lemma 5, Lemma 6 and Section 1.2.
Remark 14. Corollary 1 improves the classical result: there exists a constant $C$ such that $\|\mathcal P^r_t-\mathcal P\|_{TV}\le C\,t^{-K+1}$ (see [1, 2, 5, 7]).

5 Application of the stationary coupling to strong bounds for the convergence rate of the distribution of the regenerative process
Let us recall the definition of a regenerative process in continuous time.
Definition 4.
Assume that the random process (cid:16) X t , t ≥ (cid:17) with the statespace ( X , σ ( X )) has continuous time parameter t ∈ [0 , + ∞ ).Besides, we suppose that there exists a renewal process (cid:16) R t , t ≥ (cid:17) withthe renewal times { θ j } ∞ j =0 ; as before, θ i = i X j =0 ζ j , the random variables { ζ j } ∞ j =0 are independent, and { ζ j } ∞ j =1 are identically distributed.Furthermore, the pair of processes (( X t , R t ) , t ≥
0) has the followingproperty: for each n ≥
0, the post- θ n process (cid:16) X θ n + t , t ≥ (cid:17) is independentof ( θ , . . . , θ n ) (or, equivalently, of ζ , . . . , ζ n ) and its distribution does notdepend upon n .Then we call the process (cid:16) X t , t ≥ (cid:17) regenerative process.We call ( R n , t ≥
0) the embedded renewal process and refer to the θ n as regeneration points or regeneration times.The behaviour of the process X t on the k th cycleΘ k def == ( X t + θ k , t ∈ [0 , ζ k +1 ]) is a random element with the state space D ( X )of X -valued functions which are right-continuous and have left-hand limitsand with finite lifelengths – see [1, Chapter 6]; { Θ k } ∞ k =0 are i.i.d. randomelements.We call the intervals ( θ t − , θ t ) the regeneration periods , and we call therandom variable ζ k the length of the k th regeneration period .When t equals 0, (cid:16) X t , t ≥ (cid:17) is called a nondelayed (or pure) regenera-tive process. Otherwise, the process is called a delayed regenerative process.22 .2 Bounds for the convergence rate of the regenera-tive process. This Section is devoted to
Markov regenerative processes. We attempt toderive the strong bounds (in the total variation metrics) for the convergencerate of this process in the case when the distribution of the regenerationperiod length satisfies
Key Conditions .These bounds can be extended to the case of non-Markov regenerativeprocesses by the following reasons.
Remark
15 (On non-Markov regenerative processes) . Every arbitrary non-Markov regenerative process (cid:16) X t , t ≥ (cid:17) with the state space ( X , σ ( X ))can be extended to the Markov regenerative process (cid:16) X t , t ≥ (cid:17) with theextended state space (cid:0) X , σ (cid:0) X (cid:1)(cid:1) .For instance, we can include in the state X t for t ∈ [ θ n − , θ n ) full historyof the process (cid:16) X t , t ≥ (cid:17) on the time interval [ θ n − , t ] for markovization ofnon-Markov regenerative process.The process X t def == { X s , s ∈ [ θ n − , t ] | t ∈ [ θ n − , θ n ) } is Markov and rege-nerative with the extended state space (cid:0) X , σ (cid:0) X (cid:1)(cid:1) .Denote by P xt (cid:0) M (cid:1) def == P (cid:8) X t ∈ M (cid:9) for the process X t with the initialstate X = x and M ∈ σ (cid:0) X (cid:1) .If E ζ i < ∞ then P xt = ⇒ P , where P is some stationary probabilitymeasure on the state space (cid:0) X , σ (cid:0) X (cid:1)(cid:1) .If we prove that (cid:13)(cid:13)(cid:13) P xt − P (cid:13)(cid:13)(cid:13) T V ≤ ψ ( t, x ) (cid:2) = φ ( t, ζ ) (cid:3) for all t ≥ X t : k P xt − P k T V def == 2 sup M ∈ σ ( X ) | P xt ( M ) − P ( M ) | ≤≤ (cid:13)(cid:13) P t − P (cid:13)(cid:13) T V def == 2 sup M ∈ σ ( X ) (cid:12)(cid:12) P t (cid:0) M (cid:1) − P (cid:0) M (cid:1)(cid:12)(cid:12) ≤ φ ( t, ζ ) . Besides, the extension for markovization can be more simple for thequeueing non-Markov regenerative process (when considering the structureof this process). 23o, we deal with the regenerative Markov process (cid:16) X t , t ≥ (cid:17) with thestate space ( X , σ ( X )); and let { θ i } ∞ i =0 be its regeneration points. 
As was mentioned before, $\theta_i=\sum\limits_{k=0}^{i}\zeta_k$ and $\zeta_k\ge 0$; the $\{\zeta_i\}_{i=0}^\infty$ are independent non-negative random variables, and $\{\zeta_i\}_{i=1}^\infty$ are identically distributed.

Let the renewal process $(R_t,\ t\ge 0)$ with the renewal points $\{\theta_i\}_{i=0}^\infty$ be the embedded renewal process of the regenerative process $(X_t,\ t\ge 0)$, and let $(N_t,\ t\ge 0)$ be the embedded backward renewal process of the renewal process $(R_t,\ t\ge 0)$.

Once more, denote $F(s)\stackrel{\mathrm{def}}{=}\mathbf P\{\zeta_i\le s\}$ for $i\ge 1$, and $G(s)\stackrel{\mathrm{def}}{=}\mathbf P\{\zeta_0\le s\}$; we assume that $G(s)=F_r(s)$ (see Denotation 4), and we suppose that the Key Condition is satisfied.

The paired process $(V_t,\ t\ge 0)\stackrel{\mathrm{def}}{=}((X_t,N_t),\ t\ge 0)$ is a Markov regenerative process with the state space $(\overline{\mathcal X},\sigma(\overline{\mathcal X}))\stackrel{\mathrm{def}}{=}(\mathcal X,\sigma(\mathcal X))\times(\mathbb R_+,\sigma(\mathbb R_+))$, and the components $X_t$ and $N_t$ of the process $(V_t,\ t\ge 0)$ are dependent. Namely, there exists a conditional distribution
$$G_a(M)\stackrel{\mathrm{def}}{=}\mathbf P\{X_t\in M\,|\,N_t=a\}=\mathbf P\{X_{\theta_k+a}\in M\,|\,\theta_k+a\le\theta_{k+1}\},$$
where $M\in\sigma(\mathcal X)$.

So, if we know the renewal times of the process $(R_t,\ t\ge 0)$, then it is possible to define the (conditional) distribution of the process $(X_t,\ t\ge 0)$ at any time (given the values of $\{\theta_i\}_{i=0}^\infty$), and we can propose the following method of construction of the successful coupling $((W_t,\widetilde W_t),\ t\ge 0)$ for the processes $(V_t,\ t\ge 0)=((X_t,N_t),\ t\ge 0)$ and $(\widetilde V_t,\ t\ge 0)\stackrel{\mathrm{def}}{=}((\widetilde X_t,\widetilde N_t),\ t\ge 0)$.

First, it is possible to construct (on the probability space $(\Omega,\mathcal F,\mathbf P)$ – see Denotation 5) the stationary successful coupling for the second components of the processes $(V_t,\ t\ge 0)$ and $(\widetilde V_t,\ t\ge 0)$, i.e. for the processes $(N_t,\ t\ge 0)$ and $(\widetilde N_t,\ t\ge 0)$, by the schema described in Section 3.
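A lattice toy model can illustrate this first step. In continuous time the renewal epochs of two independent copies never coincide exactly, which is why the paper needs the stationary coupling construction; in the aperiodic lattice sketch below (inter-renewal times uniform on {1, 2, 3}, an invented example rather than the paper's scheme) the epochs do meet, and the first common epoch plays the role of the coupling time for the backward renewal processes.

```python
import random

def coupling_epoch(delay1, delay2, rng):
    """First common renewal epoch of two independent lattice renewal
    processes with inter-renewal times uniform on {1, 2, 3} and initial
    delays delay1, delay2.  Since gcd(1, 2, 3) = 1, the epochs meet
    almost surely; this is a toy analogue of the epoch at which the two
    backward renewal processes can be glued together."""
    t1, t2 = delay1, delay2
    while t1 != t2:
        if t1 < t2:
            t1 += rng.randint(1, 3)   # advance the copy that lags behind
        else:
            t2 += rng.randint(1, 3)
    return t1

rng = random.Random(1)
taus = [coupling_epoch(0, 5, rng) for _ in range(50_000)]
# Empirical tail of the coupling epoch: by the coupling inequality it
# dominates the total variation distance between the two copies at time t.
tail_at_30 = sum(s > 30 for s in taus) / len(taus)
tail_at_60 = sum(s > 60 for s in taus) / len(taus)
```

The tail decreases in $t$, as it must for a successful coupling; combining such tail estimates with the Markov inequality (1) is exactly how the theorems below extract explicit rates.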
As a result, we obtain the process $(\mathcal Z_t,\ t\ge 0)=((Z_t,\widetilde Z_t),\ t\ge 0)$.

Then we fix the time $\widetilde\tau(r)\stackrel{\mathrm{def}}{=}\inf\{t\ge 0:\ Z_t=\widetilde Z_t\,|\,N_0=r\}$; we know that $\mathbf E\,\widetilde\tau(r)<\infty$ (Lemma 4), and $\widetilde\tau(r)=\theta_j=\vartheta_i$ for some $i$ and $j$.

After that we can complete the process $(Z_t,\ t\ge 0)$ to the process $((Y_t,Z_t),\ t\ge 0)$ by constructing (on some probability space $(\Omega',\mathcal F',\mathbf P')$) the random elements
$$\Theta_0\stackrel{\mathrm{def}}{=}\{Y_t,\ t\in[0,\theta_0]\,|\,\theta_0=\zeta_0\}\stackrel{\mathcal D}{=}\{X_t,\ t\in[0,\theta_0]\,|\,\theta_0=\zeta_0\},$$
and for $k=1,2,\dots,j$
$$\Theta_k\stackrel{\mathrm{def}}{=}\{Y_t,\ t\in[\theta_{k-1},\theta_k]\,|\,\theta_k-\theta_{k-1}=\zeta_k\}\stackrel{\mathcal D}{=}\{X_t,\ t\in[\theta_{k-1},\theta_k]\,|\,\theta_k-\theta_{k-1}=\zeta_k\}.$$

Similarly, it is possible to complete the process $(\widetilde Z_t,\ t\ge 0)$ to the process $((\widetilde Y_t,\widetilde Z_t),\ t\ge 0)$ by constructing (on some probability space $(\Omega'',\mathcal F'',\mathbf P'')$) the random elements
$$\widetilde\Theta_0\stackrel{\mathrm{def}}{=}\big\{\widetilde Y_t,\ t\in[0,\widetilde\theta_0]\,\big|\,\widetilde\theta_0=\widetilde\zeta_0\big\}\stackrel{\mathcal D}{=}\big\{\widetilde X_t,\ t\in[0,\widetilde\theta_0]\,\big|\,\widetilde\theta_0=\widetilde\zeta_0\big\},$$
and for $k=1,2,\dots,i$
$$\widetilde\Theta_k\stackrel{\mathrm{def}}{=}\big\{\widetilde Y_t,\ t\in[\widetilde\theta_{k-1},\widetilde\theta_k]\,\big|\,\widetilde\theta_k-\widetilde\theta_{k-1}=\widetilde\zeta_k\big\}\stackrel{\mathcal D}{=}\big\{\widetilde X_t,\ t\in[\widetilde\theta_{k-1},\widetilde\theta_k]\,\big|\,\widetilde\theta_k-\widetilde\theta_{k-1}=\widetilde\zeta_k\big\}.$$

At the time $\widetilde\tau(r)=\theta_j=\vartheta_i$ the processes $(Z_t,\ t\ge 0)$ and $(\widetilde Z_t,\ t\ge 0)$ have the same distribution (or even coincide in particular cases).

After the time $\widetilde\tau(r)=\theta_j$ we can again construct (on some probability space $(\Omega''',\mathcal F''',\mathbf P''')$) the random elements
$$\Theta_k\stackrel{\mathrm{def}}{=}\{Y_t,\ t\in[\theta_{k-1},\theta_k]\,|\,\theta_k-\theta_{k-1}=\zeta_k\}\stackrel{\mathcal D}{=}\{X_t,\ t\in[\theta_{k-1},\theta_k]\,|\,\theta_k-\theta_{k-1}=\zeta_k\}$$
for $k>j$, and we put $\widetilde\Theta_{i+\ell}\stackrel{\mathrm{def}}{=}\Theta_{j+\ell}$ for $\ell>0$. Thus we have constructed the process
$$\big((W_t,\widetilde W_t),\ t\ge 0\big)=\big((Y_t,Z_t,\widetilde Y_t,\widetilde Z_t),\ t\ge 0\big)\stackrel{\mathcal D}{=}\big((X_t,N_t,\widetilde X_t,\widetilde N_t),\ t\ge 0\big)$$
on the probability space
$$\big(\widetilde\Omega,\widetilde{\mathcal F},\widetilde{\mathbf P}\big)\stackrel{\mathrm{def}}{=}(\Omega,\mathcal F,\mathbf P)\times(\Omega',\mathcal F',\mathbf P')\times(\Omega'',\mathcal F'',\mathbf P'')\times(\Omega''',\mathcal F''',\mathbf P'''),$$
and this process is a successful coupling for the processes $(V_t,\ t\ge 0)=((X_t,N_t),\ t\ge 0)$ and $(\widetilde V_t,\ t\ge 0)=((\widetilde X_t,\widetilde N_t),\ t\ge 0)$; here and hereafter we omit a detailed description of the construction of this process.

Now, from Corollary 1 and Corollary 2 we deduce:

Theorem 2.
Let $(X_t,\ t\ge 0)$ be a Markov regenerative process with the state space $(\mathcal X,\sigma(\mathcal X))$ which satisfies the Key Condition, with the initial state $X_0=x$, and suppose that
$$\mu_{0,K}\stackrel{\mathrm{def}}{=}\mathbf E\,(\zeta_0)^K<\infty,\qquad \mu_K\stackrel{\mathrm{def}}{=}\mathbf E\,(\zeta_1)^K<\infty$$
for some $K\ge 1$.

Then for all $t\ge 0$ and every $k\in[1,K]$ we have the estimate
$$\|P^x_t-P\|_{TV}\le \widehat C(k,\zeta_0)\,t^{-k},$$
where $P^x_t(M)\stackrel{\mathrm{def}}{=}\mathbf P\{X_t\in M\,|\,X_0=x\}$, $P(M)\stackrel{\mathrm{def}}{=}\lim\limits_{t\to\infty}P^x_t(M)$, $M\in\sigma(\mathcal X)$, and $\zeta_0$ is the first regeneration point of the process $(X_t,\ t\ge 0)$.

Theorem 3.
Let $(X_t,\ t\ge 0)$ be a Markov regenerative process with the state space $(\mathcal X,\sigma(\mathcal X))$ which satisfies the Key Condition, with the initial state $X_0=x$, and let
$$\mathbf E\,e^{a\zeta_0}=\varepsilon_{0,a}<\infty,\qquad \mathbf E\,e^{a\zeta_1}=\varepsilon_a<\infty$$
for some $a>0$.

Then for all $t\ge 0$ and for all $\beta>0$ such that $\widetilde\varepsilon_\beta<1$ we have
$$\|P^x_t-P\|_{TV}\le \widehat C(\beta,\zeta_0)\,e^{-\beta t}.$$

Remark. Furthermore, it is possible to obtain a certain decrease in the values of the constants $\widehat C(k,\zeta_0)$ and $\widehat C(\beta,\zeta_0)$ by using the properties of the cumulative distribution function $F(s)$ and by determining more accurately the estimates in the calculations (8) and (9).

The distribution of the period of the queueing regenerative process $(Q_t,\ t\ge$
$0)$ is often unknown in queueing theory. However, the regeneration period can often be split into two parts, in most cases called a busy period and an idle period. Furthermore, as a rule the idle period has a known non-discrete distribution. We suppose that the bounds for the moments of the busy period are also known. Such a queueing process has an embedded alternating renewal process, and this process turns out not to be Markov, although the backward alternating renewal process, defined similarly to the backward renewal process, is Markov – see Definition 3. So, in this situation the queueing regenerative process has an embedded backward alternating renewal process $(A_t,\ t\ge 0)$.

The stationary coupling method can be applied to the process $(A_t,\ t\ge 0)$ in the case when one of its alternating renewal periods has a cumulative distribution function which satisfies the Key Condition, and all alternating renewal periods have finite expectation. Moreover, it is possible to use the stationary coupling method for the backward alternating renewal process in some cases when the alternating periods of one regenerative period are dependent. The description of the stationary coupling method for the backward alternating process will be presented in forthcoming publications.

So, we can find the bounds for the convergence of the backward renewal process $(A_t,\ t\ge 0)$ by using the stationary coupling method. Then, using the argumentation of Section 5.2, we can verify that the bounds for the process $(A_t,\ t\ge 0)$ also hold for the complete process $((Q_t,A_t),\ t\ge 0)$ and for the process $(Q_t,\ t\ge 0)$.

Acknowledgements. The author is grateful to L. G. Afanasyeva and A. Yu. Veretennikov for detailed discussions and comments, to H. Thorisson for useful advice and remarks, and to M. K. Turtsynsky for the great help.