Perturbation approach to scaled type Markov renewal processes with infinite mean
Zsolt Pajor-Gyulai∗  Domokos Szász†

December 25, 2017
Abstract
Scaled type Markov renewal processes generalize classical renewal processes: renewal times come from a one parameter family of probability laws, and the sequence of the parameters is the trajectory of an ergodic Markov chain. Our primary interest here is the asymptotic distribution of the Markovian parameter as t → ∞. The limit, of course, depends on the stationary distribution of the Markov chain. The results, however, are essentially different depending on whether the expectations of the renewal times are finite or infinite. If the expectations are uniformly bounded, then we can provide the limit in general (beyond the class of scaled type processes), and the expectations of the probability laws in question appear in it, too. If the means are infinite, then – assuming that the renewal times are rescaled versions of a regularly varying probability law with exponent 0 ≤ α ≤ 1 – it is this exponent which emerges in the limits.

∗ Budapest University of Technology, Institute of Physics, [email protected]
† Budapest University of Technology, Mathematical Institute, Budapest, Egry J. u. 1, H-1111 Hungary, [email protected]

1 Introduction

Heavy tailed probability distributions have recently arisen in new, interesting applications; it is sufficient to mention waiting times in queueing networks like the internet, or stock prices. For us, the laws with exponents α = 1/2 and α = 0 came into play in stochastic models of physical phenomena, as return times to the origin of processes which are proved to behave analogously to random walks on Z^d, where d = 1 or 2, as t → ∞. Our main interest is the case when the variables have infinite expectations and the process is of scaled type. To emphasize coherence, we also prove – by using our method – results already known for the finite mean case.
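The d = 1 tail just mentioned can be made concrete. For the simple random walk on Z, the classical identity P(T_0 > 2n) = P(S_{2n} = 0) = C(2n, n) 4^{−n} ∼ 1/√(πn) exhibits a regularly varying return-time tail with exponent α = 1/2. The following sketch (purely illustrative; the function name is ours) checks the normalization numerically:

```python
import math

# Classical identity for the simple random walk on Z (Feller, Vol. I):
#   P(no return to the origin up to time 2n) = P(S_{2n} = 0) = C(2n, n) / 4^n,
# and C(2n, n) / 4^n ~ 1 / sqrt(pi * n): a regularly varying tail of index 1/2.
def no_return_prob(n):
    return math.comb(2 * n, n) / 4 ** n

# The normalized ratio tends to 1 from below as n grows.
ratios = [no_return_prob(n) * math.sqrt(math.pi * n) for n in (10, 100, 10_000)]
```

With the ancestor law F taken as this return-time distribution, 1 − F(t) ∼ C t^{−1/2}; the d = 2 analogue has a slowly varying tail, the α = 0 case treated below.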
We develop an operator formalism and use some facts from perturbation theory to derive a key lemma from which most of our results follow easily. In the theory of ordinary renewal processes, the first attempts to extend the well known results of Feller and Smith ([8], [17]) to the infinite mean case were made by Erickson ([7]), Teugels ([20]) and Anderson & Athreya ([1]). As for Markov renewal processes, some partial results have already been obtained; e.g. in [13], many properties of the spent time (age) process (see Section 5) were established under various assumptions for the alternating renewal process. In this paper we show that, under certain assumptions, the classical result of Dynkin ([6]) still holds.

The paper is organized as follows. Section 2 contains our definitions and our key technical result. Sections 3, 4 and 5 deal with its consequences, while Section 6 presents the physics application which drew our attention to the topic. Finally, Section 7 is devoted to the proofs of our theorems.

Note 1.
We originally used the name Renewal Process directed by a Markov Chain (RPdMC), but we decided to stick to tradition and to use the name Markov renewal process. Some authors use the name semi-Markov process for the whole phenomenon, but we only refer to a particular process by this name.
2 Definitions and the key technical result

Consider a measurable function F_λ(t) ≡ F(λ, t) : [a, b] × R_+ → [0, 1] with the following basic assumptions:

  For fixed λ, F_λ(·) is a non-arithmetic distribution function;   (1)

  ∃ δ > 0 : sup_{λ∈[a,b]} F_λ(δ) < 1;   (2)

  ∃ K ∈ R_+ : inf_{λ∈[a,b]} F_λ(K) > 0.   (3)

For each λ, let X_λ be a random variable with distribution function F_λ. If X_λ has an expectation, then it is denoted by μ_λ.

Remark 1. Conditions (2) and (3) imply that there is no sequence (λ_i)_{i≥1} such that X_{λ_i} would converge either to the point mass at zero or to infinity in distribution (or – as the limit is non-random – in probability).

Definition 1.
The family of distributions defined above is called scaled-type if there is a distribution function F : R_+ → [0, 1] for which

  F_λ(t) = F(λt).

In this case, the basic assumptions are satisfied whenever 0 < a ≤ b < ∞; moreover, if μ = ∫_0^∞ x dF(x) is finite, then μ_λ = μ/λ.

Also suppose we have a recurrent Harris chain (cf. e.g. Chapter 5.6 in [5]) (Λ_0, Λ_1, ...) on [a, b] with transition kernel g(λ, A). Suppose that this chain has a spectral gap, which means that the spectrum of its transition operator on L^∞([a, b]) is bounded away from the unit circle except for the eigenvalue 1 and finitely many other eigenvalues on the unit circle. Let ρ_s denote the stationary measure.

Definition 2.
Suppose (λ_0, λ_1, λ_2, ...) ∈ [a, b]^N. Then the sequence

  S_n = Σ_{j=0}^{n−1} X_{λ_j},  n = 1, 2, ...,  with S_0 = 0,

is called a Non-Homogeneous Renewal Process (NHRP) if X_{λ_0}, X_{λ_1}, X_{λ_2}, ... is an independent sequence of random variables such that ∀ j ∈ N the distribution of X_{λ_j} is F_{λ_j}. If furthermore F_λ(t) = F(λt) with some distribution function F, then we call the process a Scaled-type Renewal Process (STRP).

Definition 3.
The sequence

  S_n = Σ_{j=0}^{n−1} X_{Λ_j},  n = 1, 2, ...,  with the convention S_0 = 0,

is called a Markov renewal process if

• Λ_0, Λ_1, Λ_2, ... is a homogeneous Markov chain as introduced above, and
• for every realization λ_0, λ_1, ... of this Markov chain, S_n, n = 0, 1, 2, ..., is a non-homogeneous renewal process.

(Notation: if we want to emphasize the dependence of the process on λ = λ_0, we write S_{n,λ}.) Let N_{t,λ} denote the number of renewals that occurred before time t (including the one at t = 0) with initial parameter value λ, i.e.

  N_{t,λ} = inf{ n : S_{n,λ} > t },   (4)

and let U_λ(t) = E N_{t,λ}. Denote the "type" of the renewal ongoing at time t by Λ(t) = Λ_{N_{t,λ}−1}, and the distribution of the parameter Λ(t), conditioned on the initial parameter value λ, by Φ_{t,λ}, i.e.

  Φ_{t,λ}(A) = P(Λ(t) ∈ A | Λ_0 = λ),  A ∈ B([a, b]).   (5)

Note 2. Λ(t) is a so-called semi-Markov process, since it would be a continuous time Markov chain on [a, b] if, for every λ, F_λ were an exponential distribution function.

By conditioning on the first renewal, the renewal equation reads

  Φ_{t,λ}(A) = 1{λ∈A}(1 − F_λ(t)) + ∫_0^t ∫_a^b Φ_{t−s,λ'}(A) g(λ, dλ') dF_λ(s).   (6)

All the basic phenomena are governed by equations like (6). Since this is not the usual renewal equation, we have to generalize standard renewal theory. Our first result is an existence and uniqueness theorem.
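The objects in (4) and (5) are straightforward to simulate. The sketch below is a minimal illustration, not part of the model: it assumes a two-point parameter space {1, 2}, a hand-picked transition kernel, and the scaled-type family generated by an Exp(1) ancestor law, so that X_λ ∼ Exp(λ) and μ_λ = 1/λ.

```python
import random

# Illustrative Markov renewal process: parameter chain on {1.0, 2.0} with a
# hand-picked transition kernel, and scaled-type renewals X_lam = X / lam,
# X ~ Exp(1), so that F_lam(t) = F(lam * t) and mu_lam = 1 / lam.
LAMBDAS = (1.0, 2.0)
TRANS = {1.0: ((1.0, 0.5), (2.0, 0.5)),   # g(1, .)
         2.0: ((1.0, 0.3), (2.0, 0.7))}   # g(2, .)

def next_state(lam, rng):
    u, acc = rng.random(), 0.0
    for nxt, p in TRANS[lam]:
        acc += p
        if u < acc:
            return nxt
    return TRANS[lam][-1][0]

def renewal_at(t, lam0, rng):
    """Run S_n = sum_{j<n} X_{Lambda_j} until S_n > t.  Returns
    (N_t, Lambda(t), S_{N_t - 1}, S_{N_t}) with N_t = inf{n : S_n > t}
    as in (4) and Lambda(t) = Lambda_{N_t - 1} the ongoing type."""
    lam, s, n = lam0, 0.0, 0
    while True:
        s_prev, type_now = s, lam
        s += rng.expovariate(lam)       # X_{Lambda_n} ~ Exp(lam)
        n += 1
        if s > t:
            return n, type_now, s_prev, s
        lam = next_state(lam, rng)

rng = random.Random(0)
n_t, lam_t, s_before, s_after = renewal_at(25.0, 1.0, rng)
```

Averaging N_{t,λ} over many runs estimates U_λ(t), and the empirical law of Λ(t) estimates Φ_{t,λ}.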
Theorem 1.
For any measurable function h_{·,λ}(A) which is bounded on bounded intervals, i.e.

  (∀ t > 0)(∃ M_t < ∞) : |h_{s,λ}(A)| < M_t  ∀ s ∈ [0, t],

the solution of the equation

  Ψ_{t,λ}(A) = h_{t,λ}(A) + ∫_0^t ∫_a^b Ψ_{t−s,λ'}(A) g(λ, dλ') dF_λ(s)   (7)

exists and is unique among the functions that vanish for t < 0 and are bounded on bounded intervals. Moreover, the solution can be given as an infinite series:

  Ψ_{t,λ}(A) = h_{t,λ}(A) + Σ_{n=1}^∞ ∫_{[a,b]^n} ∫_0^t h_{t−s,λ_n}(A) d( (Π*)_{i=0}^{n−1} F_{λ_i}(s) ) ∏_{i=0}^{n−1} g(λ_i, dλ_{i+1}),   (8)

where λ_0 = λ and Π* denotes the convolution product.

This form of the solution is troublesome to work with, but we can also write

  Ψ_{t,λ}(A) = ∫_{[0,t]×[a,b]} h_{t−s,λ'}(A) U_λ(ds, dλ'),   (9)

where we introduced the functions

  U_λ(t, A) = 1{λ∈A} Θ(t) + g(λ, A) F_λ(t) + Σ_{n=2}^∞ ∫_{[a,b]^{n−1}} g(λ_{n−1}, A) (Π*)_{i=0}^{n−1} F_{λ_i}(t) ∏_{i=0}^{n−2} g(λ_i, dλ_{i+1}).   (10)

Here Θ(t) = 1{t>0}. Equivalently,

  U_λ(t, A) = 1_A(λ) + Σ_{n=1}^∞ P_λ(X_{Λ_0} + ... + X_{Λ_{n−1}} < t; Λ_n ∈ A),   (11)

so U_λ(t, A) is the expected number of jumps into the set A before time t (plus 1 if the process is launched from A). The integration in (9) is wrt the measure defined by U_λ(A × [t_1, t_2]) = U_λ(t_2, A) − U_λ(t_1, A) for A ∈ B([a, b]) and 0 < t_1 < t_2 < ∞. Note that from (11) it is clear that U_λ(t) = U_λ(t, [a, b]).

Introduce the Laplace transform of F_λ:

  ϕ_λ(z) = ∫_0^∞ e^{−zx} dF_λ(x) = z ∫_0^∞ e^{−zx} F_λ(x) dx.

Conditions (2) and (3) imply that for every z > 0

  sup_{λ∈[a,b]} ϕ_λ(z) < 1  and  inf_{λ∈[a,b]} ϕ_λ(z) > 0.   (12)

In the scaled type case ϕ_λ(z) = ϕ(z/λ), where ϕ(z) is the Laplace transform of the measure dF(·). Also let

  ω_λ(z, A) = ∫_{[0,∞]×A} e^{−zs} U_λ(ds, dλ') = ∫_0^∞ e^{−zs} U_λ(ds, A).   (13)

For fixed z, ω_λ(z, ·) is, of course, a measure on [a, b]. By virtue of (11) this can also be written as

  ω_λ(z, A) = 1_A(λ) + Σ_{n=1}^∞ E( e^{−z(X_{Λ_0} + ... + X_{Λ_{n−1}})} 1{Λ_n ∈ A} | Λ_0 = λ ).

Also let Ξ_λ(z, A) be the Laplace transform of Ψ_{t,λ}(A) in the variable t, i.e.

  Ξ_λ(z, A) = ∫_0^∞ e^{−zt} Ψ_{t,λ}(A) dt.

Then (9) yields

  Ξ_λ(z, A) = ∫_a^b φ_{λ'}(z, A) ω_λ(z, dλ'),   (14)

where the integration is wrt the measure defined in (13) and φ_λ(z, A) is the Laplace transform of h_{t,λ}(A), i.e.

  φ_λ(z, A) = ∫_0^∞ e^{−zt} h_{t,λ}(A) dt.   (15)

In all the applications, h is such that this Laplace transform exists; clearly, then Ξ_λ(z, A) exists as well. Despite its simplicity, it is not equation (14) which proves useful in the sequel. Instead, take the Laplace transform of (7) to obtain

  Ξ_λ(z, A) = φ_λ(z, A) + ∫_a^b Ξ_{λ'}(z, A) ϕ_λ(z) g(λ, dλ')   (16)

∀ λ ∈ [a, b] and ∀ A ∈ B([a, b]). Recall that
Definition 4.
A positive function L(t) defined on R_{≥0} is slowly varying at infinity if

  L(ct)/L(t) → 1  as t → ∞,  ∀ c ∈ R_{>0}.

The key element in the treatment is the following
Lemma 1.
Whenever μ_λ < ∞ for all λ ∈ [a, b], or the process is scaled type with a regularly varying ancestor distribution, i.e.

  1 − F(t) = t^{−α} L(t),  α ∈ [0, 1],

where L is a slowly varying function at infinity, we have

  Ξ_λ(z, A) ∼ ∫_a^b φ_{λ'}(z, A) dρ_s(λ') / ∫_a^b (1 − ϕ_{λ'}(z)) dρ_s(λ')   (17)

as z → 0, provided that

  lim sup_{z→0} sup_{λ∈[a,b]} φ_λ(z, A) / ∫_a^b φ_{λ'}(z, A) dρ_s(λ') < ∞.

Remark 2.
The main idea behind Lemma 1 is that the asymptotic behaviour is independent of the initial state. Thus the asymptotic formulas must be identical with what would be exact if the distribution of λ were the stationary one.

2.4 The operator formalism

In the proof of Lemma 1, we use a perturbation approach in the framework of an operator formalism developed in this section.

As usual, let L^∞([a, b]) denote the set of bounded, measurable functions on [a, b]. The transition operator of the Markov chain (defined by the kernel g of the previous section), denoted by P, acts on this space by

  P f(λ) = ∫_a^b f(λ') g(λ, dλ'),  f ∈ L^∞([a, b]),  λ ∈ [a, b].

Of course, on the adjoint space M([a, b]) (i.e. the signed measures on [a, b] of finite total variation) its effect is given by

  μP(A) = ∫_a^b g(λ, A) dμ(λ),  μ ∈ M([a, b]),  A ∈ B([a, b]).

Also define the operator valued functions ϕ̂(z), φ̂(z) and Ξ̂(z) acting on L^∞([a, b]) by

  (ϕ̂(z) f)(λ) = ϕ_λ(z) f(λ),
  (φ̂(z) 1_A)(λ) = φ_λ(z, A),
  (Ξ̂(z) 1_A)(λ) = Ξ_λ(z, A),

where f ∈ L^∞([a, b]) and 1_A is the indicator function of A. In the last two definitions, the operators are defined on the linear span of step functions in L^∞([a, b]).

With these, it is easily seen that equation (16) is equivalent to the operator equation

  Ξ̂(z) = φ̂(z) + ϕ̂(z) P Ξ̂(z).

This yields the formal solution

  Ξ̂(z) = (I − ϕ̂(z) P)^{−1} φ̂(z).   (18)

Condition (12) ensures the existence of the inverse for every z > 0, since ||ϕ̂(z)|| = sup_{λ∈[a,b]} ϕ_λ(z) < 1. We regard μ ∈ M([a, b]) as a functional on an element f of L^∞([a, b]) through the pairing

  (μ, f) = ∫_{[a,b]} f dμ,

and note that e.g. Ξ_λ(z, A) = (δ_λ, Ξ̂(z) 1_A), where δ_λ is the point mass concentrated on λ. In this framework, Lemma 1 can be rephrased as

Lemma 2. Suppose μ_λ < ∞ ∀ λ ∈ [a, b], or that

  1 − F_λ(t) = 1 − F(λt) = (λt)^{−α} L(λt),  α ∈ [0, 1],

where L is a slowly varying function. Then if

  lim sup_{z→0} ||φ̂(z) 1_A|| / (ρ_s, φ̂(z) 1_A) < ∞,   (19)

then

  (δ_λ, Ξ̂(z) 1_A) ∼ (ρ_s, φ̂(z) 1_A) / (ρ_s, (I − ϕ̂(z)) 1)  as z → 0,

where 1 = 1_{[a,b]}.

Conjecture 1.
Lemma 2 is likely to be true under the somewhat milder condition that μ_{ρ_s} = ∫_a^b μ_λ dρ_s(λ) < ∞, which allows μ_λ to be infinite on a ρ_s-negligible set, provided φ̂(z) is nice in some sense. The ground for this suggestion is that a ρ_s-negligible set cannot have a large effect on asymptotic relations. This question does not arise in the scaled type case, so we do not pursue it in the sequel. However, we mention

Corollary 3.
If the annihilator of ρ_s, i.e.

  A_{ρ_s} = { f ∈ L^∞([a, b]) : (ρ_s, f) = 0 },

is an invariant subspace of φ̂(z) for every z, then the assertion of Lemma 2 holds if μ_λ = ∞ only on a ρ_s-null set.

3 The asymptotic behaviour of U_λ(t, A)

In this section, we investigate the asymptotic behaviour of U_λ(t, A). To do this, note that (9) implies that if h_{t,λ}(A) = 1_A(λ), then we have Ψ_{t,λ}(A) = U_λ(t, A) and also φ_λ(z, A) = 1_A(λ)/z. The assumption (19) is satisfied if ρ_s(A) > 0. Then Lemma 1 yields

  ω_λ(z, A) − 1_A(λ) ∼ ρ_s(A) / ∫_a^b (1 − ϕ_{λ'}(z)) dρ_s(λ')   (20)

as z → 0 (recall the relation ω_λ(z, A) = z Ξ_λ(z, A) between ω and Ξ). Thus we have

Theorem 2.
For A ∈ B([a, b]) with ρ_s(A) > 0, we have, if μ_{ρ_s} < ∞, that

  U_λ(t, A) ∼ (t / μ_{ρ_s}) ρ_s(A),  t → ∞,   (21)

while in the scaled type case, for α ∈ [0, 1),

  U_λ(t, A) ∼ (t^α / L(t)) · (sin(πα)/(πα)) · ρ_s(A) / ∫_a^b λ^{−α} dρ_s(λ) = (sin(πα)/(πα)) · ρ_s(A) / ( (1 − F(t)) ∫_a^b λ^{−α} dρ_s(λ) ).

Note that if α = 0, the factor sin(πα)/(πα) is understood as its limit, one. When α = 1, one obtains

  U_λ(t, A) ∼ (t / L̃(t)) · ρ_s(A) / ∫_a^b λ^{−1} dρ_s(λ),

where L̃(t) = ∫_0^t (1 − F(s)) ds varies slowly, and U_λ(t, A)(1 − F(t)) → 0 in addition.

Remark 3. If ρ_s(A) = 0, then P(Λ_n ∈ A) < C_A e^{−γn}, where γ is the spectral gap of the Markov chain. Thus by (11), we have the estimate

  U_λ(t, A) < 1 + Σ_{n=1}^∞ P(Λ_n ∈ A) < C_A / (1 − e^{−γ}),

which implies that, with probability one, the chain jumps into A only finitely many times as t → ∞.

4 The asymptotic behaviour of Φ_{t,λ}(A)

In this special case h_{t,λ}(A) = 1{λ∈A}(1 − F_λ(t)), so (9) becomes

  Φ_{t,λ}(A) = ∫_{[0,t]×A} (1 − F_{λ'}(t − s)) dU_λ(s, λ')   (22)

and φ_λ(z, A) = 1{λ∈A}(1 − ϕ_λ(z))/z. Thus

  Ξ_λ(z, A) ∼ (1/z) ∫_A (1 − ϕ_{λ'}(z)) dρ_s(λ') / ∫_a^b (1 − ϕ_{λ'}(z)) dρ_s(λ'),  z → 0,   (23)

for every λ, provided that

  lim inf_{z→0} ∫_A (1 − ϕ_λ(z)) / (1 − inf_{λ'∈[a,b]} ϕ_{λ'}(z)) dρ_s(λ) > 0,

which holds whenever ρ_s(A) > 0. To see this, note that in the finite mean case (12) ensures that inf_{λ∈[a,b]} μ_λ > 0, and one can use the asymptotic expansions of the ϕ's. In the scaled type case, note that the integral in (23) admits the lower bound

  ρ_s(A) (1 − ϕ(z/b)) / (1 − ϕ(z/a)) ≥ ρ_s(A) (a/b) > 0

for small z, by the regular variation of 1 − ϕ. Our result is

Theorem 3.
For A ∈ B([a, b]) with ρ_s(A) > 0, we have, if μ_{ρ_s} < ∞,

  lim_{t→∞} Φ_{t,λ}(A) = (1/μ_{ρ_s}) ∫_A μ_{λ'} dρ_s(λ')  ∀ λ ∈ [a, b].   (24)

In the scaled type, finite mean case, this becomes

  lim_{t→∞} Φ_{t,λ}(A) = ∫_A λ'^{−1} dρ_s(λ') / ∫_a^b λ'^{−1} dρ_s(λ')  ∀ λ ∈ [a, b].   (25)

If in the scaled type case 1 − F(t) = t^{−α} L(t), we have

  lim_{t→∞} Φ_{t,λ}(A) = ∫_A λ'^{−α} dρ_s(λ') / ∫_a^b λ'^{−α} dρ_s(λ'),

which implies that in the special case α = 0 the limit is just ρ_s(A).

Remark 4. (24) and (25) are true for ρ_s(A) = 0 as well, since Φ_{t,λ} is a measure (apply the result to A^c).

5 The age, residual and total lifetime processes

Let Y_{t,λ} denote the time since the last renewal and Z_{t,λ} the remaining time until the next renewal, i.e.

  Y_{t,λ} = t − S_{N_{t,λ}−1},  Z_{t,λ} = S_{N_{t,λ}} − t.

The total lifetime is the sum C_{t,λ} = Y_{t,λ} + Z_{t,λ}.

It is easy to see that P(Y_{t,λ} < x) 1_A(λ) satisfies (7) with the inhomogeneous term h^Y_{t,λ}(A) = 1_{[0,x]}(t) 1_A(λ)(1 − F_λ(t)). Of course, in the end we will set A = [a, b], but for now we need the dependence on A to make φ̂ a linear operator. This yields

  φ^Y_λ(z, A) = 1_A(λ) ∫_0^x e^{−zt}(1 − F_λ(t)) dt.

Since we can use the bounded convergence theorem for fixed x, we have

  φ^Y_λ(z, A) → 1_A(λ) ∫_0^x (1 − F_λ(t)) dt > 0  as z → 0,

and hence

  Ξ^Y_λ(z, [a, b]) ∼ ∫_a^b ∫_0^x (1 − F_{λ'}(t)) dt dρ_s(λ') / ∫_a^b (1 − ϕ_{λ'}(z)) dρ_s(λ').

It is also not hard to see that P(Z_{t,λ} < x) 1_A(λ) satisfies (7) as well, with h^Z_{t,λ}(A) = 1_A(λ)(F_λ(t + x) − F_λ(t)), and after some calculation we get

  lim_{z→0} φ^Z_λ(z, A) = lim_{z→0} φ^Y_λ(z, A),

so Ξ^Z_λ(z, [a, b]) ∼ Ξ^Y_λ(z, [a, b]). As to C_{t,λ}, one obtains h^C_{t,λ}(A) = 1_A(λ) 1_{[0,x]}(t)(F_λ(x) − F_λ(t)), thus for the Laplace transform φ^C_λ(z, A) → 1_A(λ) ∫_0^x (F_λ(x) − F_λ(t)) dt as z → 0, and

  Ξ^C_λ(z, [a, b]) ∼ ∫_a^b ∫_0^x (F_{λ'}(x) − F_{λ'}(t)) dt dρ_s(λ') / ∫_a^b (1 − ϕ_{λ'}(z)) dρ_s(λ').

Theorem 4. If μ_{ρ_s} < ∞, then

  P(Y_{t,λ} < x), P(Z_{t,λ} < x) → (1/μ_{ρ_s}) ∫_0^x ∫_a^b (1 − F_{λ'}(t')) dρ_s(λ') dt'

and

  P(C_{t,λ} < x) → (1/μ_{ρ_s}) ∫_0^x ∫_a^b (F_{λ'}(x) − F_{λ'}(t')) dρ_s(λ') dt'.

When the expectations of the waiting times are infinite, there is no proper asymptotic distribution of Y_{t,λ}: all the mass escapes to infinity. Instead, Y_{t,λ}/t has a limit distribution. The following results are generalizations of the one due to Dynkin for ordinary renewal processes (cf. [8] XIV.3).

Theorem 5. If 1 − F(t) = t^{−α} L(t) with 0 < α < 1 in the scaled type case, then Y_{t,λ}/t converges in distribution to the law with density function

  (sin(πα)/π) x^{−α} (1 − x)^{α−1},  0 < x < 1,

while the limit density function of Z_{t,λ}/t is

  (sin(πα)/π) x^{−α} (1 + x)^{−1},  x > 0.

In the α = 0 case,

  Y_{t,λ}/t → 1,  Z_{t,λ}/t → ∞  in probability as t → ∞.

In the α = 1 case we can only state that

  Y_{t,λ}/t → 0,  Z_{t,λ}/t → 0  in probability as t → ∞.

Remark 5.
These formulas are identical to the original ones, which means that the presence of different kinds of renewal times is asymptotically irrelevant.
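For a concrete feel of Theorem 5, note that the limit law of Y_{t,λ}/t is the Beta(1 − α, α) (generalized arcsine) distribution, since sin(πα)/π = 1/B(1 − α, α). A crude midpoint quadrature (purely illustrative sketch) confirms that the density integrates to one and has mean 1 − α, shown here for α = 1/2:

```python
import math

# Dynkin's limit density for Y_{t,lam}/t on (0,1):
#   f(x) = sin(pi*a)/pi * x^{-a} * (1-x)^{a-1},
# i.e. the Beta(1-a, a) ("generalized arcsine") law.  A midpoint rule copes
# with the integrable endpoint singularities well enough for a = 1/2.
def dynkin_y_density(x, a):
    return math.sin(math.pi * a) / math.pi * x ** (-a) * (1.0 - x) ** (a - 1.0)

def midpoint01(f, n=200_000):
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h) for k in range(n))

a = 0.5
total = midpoint01(lambda x: dynkin_y_density(x, a))       # should be ~ 1
mean = midpoint01(lambda x: x * dynkin_y_density(x, a))    # Beta(1-a, a) mean = 1-a
```

For α = 1/2 this is the classical arcsine law, density 1/(π√(x(1 − x))).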
6 An application: random walks with internal states

Semi-Markov theory is one of the most efficient areas of stochastic processes in generating applications to real-life problems. We cannot give here a complete overview of such applications, mainly due to the lack of expertise, in the fields of (paraphrasing Barbu and Limnios) economics, manpower models, insurance, finance, reliability, simulation, queueing, branching processes, medicine (including survival data), social sciences, language modelling, seismic risk analysis, biology, computer science, chromatography and fluid mechanics (see e.g. [9] or [10]).

Therefore, we present only the application which motivated our model and the problems treated, namely random walks with internal states in one and two dimensions. In short, we investigated continuous time random walks with internal states in which the speed parameter was the internal state, changing according to a Markov chain at every visit of the random walk to the origin. In two dimensions, this was a paradigm model for the two disk Lorentz process, i.e. two disks wandering in a periodic scatterer configuration and changing energy when they collide with each other.

It can be shown (cf. an upcoming article of the authors) that the return times to the origin are regularly varying with exponent α = 1/2 in d = 1, and slowly varying in d = 2, i.e.

  1 − F_{d=1}(t) ∼ C_1 t^{−1/2},  1 − F_{d=2}(t) ∼ C_2 / log t.

The exact values of the constants are not important now. Suppose for ease that the stationary distribution is uniform; in the physical model, [a, b] is the interval of speeds allowed by the total energy E of the two colliding disks. Our results then yield, for the expected number of returns to the origin (the number of collisions),

  U^{d=1}_λ(t) ∼ (2/π) √t / ( C_1 ∫_a^b λ^{−1/2} dρ_s(λ) ),  U^{d=2}_λ(t) ∼ (log t) / C_2.

It is interesting that the energy dependence vanishes in d = 2.

The answer to the question concerning the asymptotic distribution of the speed is simple as well. Due to our assumption on ρ_s, the limit distribution of Theorem 3 has density proportional to λ^{−1/2} on [a, b] in d = 1, while in d = 2 (the α = 0 case) it is simply the uniform one, ρ_s itself. Finally, Y_{λ,t}/t and Z_{λ,t}/t have the limit distributions specified in Theorem 5.

7 Proofs
For the following proofs, we need the so-called Abelian-Tauberian theorems (see [8] XIII.5).
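Before stating these, the correspondence can be probed numerically. In the sketch below (all choices illustrative) we take H(x) = √x log(1 + x), which is regularly varying with index ρ = 1/2 and slowly varying part L(x) = log(1 + x); computing κ(z) = z ∫_0^∞ e^{−zx} H(x) dx by quadrature, the ratio κ(1/x)/H(x) indeed approaches Γ(ρ + 1):

```python
import math

# Numerical probe of the Karamata correspondence (Fact 2 below):
# H(x) = sqrt(x) * log(1+x) is regularly varying with rho = 1/2 and slowly
# varying part L(x) = log(1+x).  Since H(0) = 0,
#   kappa(z) = int e^{-zx} dH(x) = z * int_0^inf e^{-zx} H(x) dx,
# and Karamata's theorem predicts kappa(1/x) ~ Gamma(rho+1) * H(x), x -> inf.
def H(x):
    return math.sqrt(x) * math.log1p(x)

def kappa(z, n=400_000):
    upper = 60.0 / z                 # e^{-60} makes the truncation error negligible
    h = upper / n
    return z * h * sum(math.exp(-z * (k + 0.5) * h) * H((k + 0.5) * h)
                       for k in range(n))

x = 1000.0
ratio = kappa(1.0 / x) / H(x)        # ~ Gamma(3/2) = sqrt(pi)/2 = 0.8862...
```

The residual deviation is of order 1/log x, reflecting the slowly varying factor.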
Fact 1 (Feller). Let H be a measure on R_+, let κ(z) = ∫_0^∞ e^{−zx} dH(x) be the Laplace transform wrt it, and write H(x) ≡ H([0, x]). Then for ρ ≥ 0,

  κ(t/x)/κ(1/x) → t^{−ρ},  x → ∞,  and  H(tx)/H(x) → t^ρ,  x → ∞,

imply each other; moreover, in this case

  κ(1/x) ∼ Γ(ρ + 1) H(x),  x → ∞.   (26)

A popular reformulation of this result is

Fact 2. If L is slowly varying at infinity and 0 ≤ ρ < ∞, then

  κ(1/x) ∼ x^ρ L(x),  x → ∞,  and  H(x) ∼ x^ρ L(x)/Γ(ρ + 1),  x → ∞,

imply each other.

The following result is Example (c) of XIII.5 in [8].

Fact 3. For 0 ≤ ρ < 1,

  1 − F(x) ∼ x^{−ρ} L(x),  x → ∞,  and  1 − ϕ(z) ∼ Γ(1 − ρ) z^ρ L(1/z),  z → 0,

imply each other.

Proof of Theorem 1.
Suppose we have two such solutions and denote their difference by Ψ̃_{t,λ}(A). This function satisfies the homogeneous version of (7):

  Ψ̃_{t,λ}(A) = ∫_0^t ∫_a^b Ψ̃_{t−s,λ'}(A) g(λ, dλ') dF_λ(s),

and it is itself bounded on bounded intervals, say |Ψ̃_{s,λ}(A)| < M_t for s ∈ [0, t]. If we iterate n times, then through a little manipulation (which can be checked by induction) we get

  Ψ̃_{t,λ}(A) = ∫_{[a,b]^n} ∫_0^t Ψ̃_{t−s,λ_n}(A) d( (Π*)_{i=0}^{n−1} F_{λ_i}(s) ) ∏_{i=0}^{n−1} g(λ_i, dλ_{i+1}),

where Π* denotes the convolution product, so

  |Ψ̃_{t,λ}(A)| < M_t ∫_{[a,b]^n} ∫_0^t d( (Π*)_{i=0}^{n−1} F_{λ_i}(s) ) ∏_{i=0}^{n−1} g(λ_i, dλ_{i+1}) = M_t ∫_{[a,b]^n} (Π*)_{i=0}^{n−1} F_{λ_i}(t) ∏_{i=0}^{n−1} g(λ_i, dλ_{i+1}) = M_t P(S_{n,λ} < t),

which goes to zero as n → ∞ for all t by (11), provided U_λ(t) < ∞ for every finite t. But this follows from the fact that U_λ(t) is clearly no larger than the renewal function of a classical renewal process with distribution function F̄(x) = sup_{λ∈[a,b]} F_λ(x), which is not the point mass at zero by condition (2). Now the statement follows by the corresponding result of ordinary renewal theory.

From the proof of uniqueness, one can deduce that if we iterate the inhomogeneous equation (7), then the remainder term converges to zero. Thus after some calculation we get exactly the solution given in the theorem. The convergence of the series (8) can be checked by noticing that Ψ_{t,λ}(A) ≤ M̃_t U_λ(t), where |h_{s,λ}(A)| < M̃_t for s < t.

Proof of Lemma 2.
Note that

  I − ϕ̂(z)P = I − P + (I − ϕ̂(z))P.   (27)

We will treat the second term as an asymptotic perturbation, where the parameter of the perturbation is z.

First consider the μ_λ < ∞ case. Then for f ∈ L^∞([a, b]),

  {(I − ϕ̂(z))f}(λ) = (1 − ϕ_λ(z)) f(λ) = z μ_λ f(λ) + o_λ(z),

where o_·(z) is a vector for which ||o_·(z)||/z → 0 as z → 0. Thus

  (I − ϕ̂(z))P = z μ̂ P + o(z),

where μ̂ is the operator on L^∞([a, b]) defined by (μ̂ f)(λ) = μ_λ f(λ), and the meaning of o(z) is straightforward. Since 1 is a simple isolated eigenvalue of P, which is stable under the perturbation due to the assumed spectral gap (and to the number of eigenvalues on the unit circle being finite), using Theorem 2.6 in Chapter VIII of [11], we have

  I − ϕ̂(z)P = (cz + o(z))(Π + o(1)) + K(z),   (28)

where Π f = (ρ_s, f) 1, ν Π = (ν, 1) ρ_s, and K(z) is the operator arising from the rest of the spectrum; it projects to the annihilator A_{ρ_s} of ρ_s (see Conjecture 1). Its essential property is that the part of the spectrum it represents is bounded away from zero as z → 0, while o(1) denotes an operator converging to zero in norm as z → 0. Since (I − P) 1 = 0, one obtains from (27) and (28) that

  (ρ_s, (I − ϕ̂(z)) 1) = (ρ_s, (I − ϕ̂(z)P) 1) = (cz + o(z))(ρ_s, 1) + o(z) = cz + o(z),   (29)

since (ρ_s, K(z) 1) = o(z). To see this, note that

  0 = (ρ_s + z ρ_1 + o(z), K(z)(1 + z f_1 + o(z))),

where 1(z) = 1 + z f_1 + o(z) is the perturbed right eigenvector that corresponds to the unperturbed eigenvalue 1. (These asymptotics are guaranteed by the theorem cited above.) After rearrangement,

  (ρ_s, K(z) 1) = −z( (ρ_1, K(z) 1) + (ρ_s, K(z) f_1) ) + o(z) = o(z),   (30)

since ||K(z)|| and ||ρ_s K(z)|| remain bounded. Inverting (28),

  (I − ϕ̂(z)P)^{−1} = (cz + o(z))^{−1} (Π + o(1)) + O(1),

where O(1) is a bounded operator which comes from the spectrum of K(z) being bounded away from zero. With a little rearrangement and the application of (29),

  (I − ϕ̂(z)P)^{−1} = ( Π / (ρ_s, (I − ϕ̂(z)) 1) ) (1 + o(1)) + O(1),

where we used P 1 = 1 and o(1) is here just a real valued function converging to zero as z → 0. Hence

  Ξ̂(z) = (I − ϕ̂(z)P)^{−1} φ̂(z) = ( Π φ̂(z) / (ρ_s, (I − ϕ̂(z)) 1) ) (1 + o(1)) + O(1) φ̂(z),

and finally, by ρ_s Π = ρ_s,

  (δ_λ, Ξ̂(z) 1_A) · ( (ρ_s, φ̂(z) 1_A) / (ρ_s, (I − ϕ̂(z)) 1) )^{−1} = 1 + o(1) + o(1) (δ_λ, O(1) φ̂(z) 1_A) / (ρ_s, φ̂(z) 1_A).

By the assumption of the lemma, the explicitly written factor is bounded for small z, so the whole expression goes to 1 as z → 0.

In the scaled type case with α ∈ [0, 1),

  1 − ϕ_λ(z) = (z/λ)^α Γ(1 − α) L(λ/z)(1 + o(1)) = z^α L(1/z) Γ(1 − α) λ^{−α} (1 + o(1)),

where the first equality is due to Fact 3. Thus

  (I − ϕ̂(z))P = z^α L(1/z) M_α P + o(z^α L(1/z)),

where (M_α f)(λ) = Γ(1 − α) λ^{−α} f(λ). Repeating the finite mean proof with z^α L(1/z) in place of z gives the desired result.

In the remaining α = 1 case, H(t) = ∫_0^t (1 − F(s)) ds is a slowly varying function. Thus by Fact 2,

  1 − ϕ(z) = z H(1/z)(1 + o(1)),

so (I − ϕ̂(z))P = z H(1/z) M_1 P + o(z H(1/z)), where (M_1 f)(λ) = λ^{−1} f(λ). Note that z H(1/z) → 0 as z → 0.

Proof of Corollary 3.
Introduce μ_∞ = { λ ∈ [a, b] : μ_λ = ∞ }. What we will show is that if ρ_s(μ_∞) = 0, then μ_∞ can be almost literally dropped from the state space, whence the assertion.

If A_{ρ_s} is an invariant subspace of φ̂(z), then it is an invariant subspace of Ξ̂(z) as well, and thus, by 1_{A∩μ_∞} ∈ A_{ρ_s},

  (ρ_s, Ξ̂(z) 1_A) = (ρ_s, Ξ̂(z) 1_{A∩μ_∞^c}).

Note also that we can assume λ ∉ μ_∞, since with probability one there is an n for which Λ_n ∉ μ_∞, and we can consider the process launched from there. Then

  (δ_λ, Ξ̂(z) 1_A) = (δ_λ, Ξ̂(z) 1_{A∩μ_∞^c}).

This implies that we only have to work in the subspace spanned by the functions in L^∞([a, b]) that vanish on μ_∞.

7.4 Proof of the results for U_λ(t, A)

Proof of Theorem 2.
In the finite mean case we have, ρ_s-almost everywhere,

  1 − ϕ_λ(z) = μ_λ z + o_λ(z),

where ∫_a^b o_λ(z) dρ_s(λ) = o(z). The latter can be seen by noting that ∫_a^b ϕ_λ(z) dρ_s(λ) is the Laplace transform of the mixture of the F_λ's with respect to ρ_s(λ), which is a proper distribution function. Plugging this into (20) and observing that Corollary 3 applies here, we obtain

  ω_λ(z, A) − 1_A(λ) ∼ ρ_s(A) / ( z ∫_a^b μ_λ dρ_s(λ) + o(z) ) ∼ ρ_s(A) / (z μ_{ρ_s}).

Using Fact 1, the proof is complete.

To see the case when α ∈ (0, 1), note that by Fact 3 and the bounded convergence theorem,

  ∫_a^b (1 − ϕ_λ(z)) dρ_s(λ) = Γ(1 − α) z^α L(1/z)(1 + o(1)) ∫_a^b λ^{−α} dρ_s(λ).

By virtue of Fact 2, and by noting that

  1 / (Γ(1 − α) Γ(1 + α)) = sin(πα)/(πα),

the statement of the theorem is obtained. In the α = 0 case,

  1 − ϕ(z/b) ≤ ∫_a^b (1 − ϕ_λ(z)) dρ_s(λ) ≤ 1 − ϕ(z/a),

where both the lower and upper bounds are ∼ 1 − ϕ(z), since 1 − ϕ is slowly varying by Fact 3. For the remaining α = 1 case,

  L̃(t) = ∫_0^t (1 − F(s)) ds

is a slowly varying function. Then by Fact 3,

  1 − ϕ(z) ∼ z L̃(1/z),  z → 0,

and hence

  ω_λ(z, A) − 1_A(λ) ∼ ρ_s(A) / ( z L̃(1/z) ∫_a^b λ^{−1} dρ_s(λ) ).

This finishes the proof, again by Fact 1. To check the last assertion, note that

  U_λ(t, A)(1 − F(t)) = ( U_λ(t, A) L̃(t) / t ) · ( t(1 − F(t)) / ∫_0^t (1 − F(s)) ds ).

Here the first factor is finite by what has just been proved, while

  ∫_0^t (1 − F(s)) ds = t(1 − F(t)) + ∫_0^t s dF(s),

in which the second summand dominates when α = 1, so the second factor tends to zero.

7.5 Proof of the results for Φ_{t,λ}(A)

Proof of Theorem 3.
Since μ_λ < ∞ except on a ρ_s-negligible set, we again have the asymptotic expansion ϕ_λ(z) = 1 − μ_λ z + o_λ(z) as z → 0, so by (23)

  Ξ_λ(z, A) ∼ (1/z) ∫_A μ_{λ'} dρ_s(λ') / ∫_a^b μ_{λ'} dρ_s(λ') ≡ (1/z) K(A),  z → 0.

Hence by the Tauberian Fact 1,

  lim_{t→∞} (1/t) ∫_0^t Φ_{s,λ}(A) ds = K(A) > 0,

and the scaled type, finite mean form follows from μ_λ = μ/λ. In the scaled type, regularly or slowly varying case, the proof is essentially similar to the proof of Theorem 2.

Proof of Theorem 4.
The theorem follows through the same procedure as the proof of Theorem 3 in the finite mean case.
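Before turning to Theorem 5, the mechanism of the preceding proofs can be observed concretely on a two-point parameter space, where (16) becomes a 2 × 2 linear system. The sketch below (all numbers illustrative) uses exponential laws ϕ_λ(z) = λ/(λ + z) and the h of Section 4, and checks that zΞ_λ(z, A) approaches the limit in (24) as z → 0, independently of the initial state λ:

```python
# Two-point illustration of the key lemma: the parameter space is {1.0, 2.0},
# P is a stochastic matrix with stationary vector RHO, the renewal laws are
# exponential (phi_lam(z) = lam/(lam+z), mu_lam = 1/lam), and h is the one
# used for Phi_{t,lam}(A), i.e. phi_lam(z, A) = 1_A(lam)(1 - phi_lam(z))/z.
# Then z * Xi_lam(z, A) -> (1/mu_rho) * sum_{i in A} mu_i * rho_i  as z -> 0.
LAM = (1.0, 2.0)
P = ((0.5, 0.5), (0.3, 0.7))
RHO = (0.375, 0.625)                        # solves RHO P = RHO
MU = sum(r / l for r, l in zip(RHO, LAM))   # mu_{rho_s} = 0.6875

def z_xi(z, A):
    """Return z * Xi_lam(z, A) for both initial states, solving (16) directly."""
    phi_hat = [l / (l + z) for l in LAM]                  # diagonal of phi-hat(z)
    rhs = [(1.0 / (LAM[i] + z) if i in A else 0.0) for i in range(2)]
    M = [[(1.0 if i == j else 0.0) - phi_hat[i] * P[i][j] for j in range(2)]
         for i in range(2)]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    xi0 = (M[1][1] * rhs[0] - M[0][1] * rhs[1]) / det
    xi1 = (M[0][0] * rhs[1] - M[1][0] * rhs[0]) / det
    return z * xi0, z * xi1

limit = RHO[0] * (1.0 / LAM[0]) / MU        # Theorem 3 limit for A = {lam = 1}
vals = z_xi(1e-5, {0})
```

Both components of `vals` agree with `limit`, illustrating that the asymptotics are independent of the initial state.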
Proof of Theorem 5.
Using Theorem 2, we have that if α ∈ [0, 1),

  (1 − F(t)) U_λ(t) → (sin(πα)/(πα)) / ∫_a^b λ^{−α} dρ_s(λ).   (32)

By the same arguments as in [8] p. 472, we have

  P(t x_1 < Y_{λ,t} < t x_2) = Σ_{n=0}^∞ P( ∪_{y ∈ [1−x_2, 1−x_1]} {S_{λ,n} = ty} ∩ {X_{Λ_n} > t(1 − y)} ),

which can be seen to equal

  ∫_{[1−x_2, 1−x_1] × [a,b]} (1 − F(λ' t(1 − y))) U_λ(t dy, dλ').

By (32), this is asymptotically equal to

  (sin(πα)/(πα)) ( ∫_a^b λ^{−α} dρ_s(λ) )^{−1} ∫_{[1−x_2, 1−x_1] × [a,b]} ( (1 − F(λ' t(1 − y))) / (1 − F(t)) ) ( U_λ(t dy, dλ') / U_λ(t) ).   (33)

In the α ∈ (0, 1) case, the first fraction in the integral approaches λ'^{−α}(1 − y)^{−α}, while

  U_λ(ty, A)/U_λ(t) → y^α ρ_s(A)  ⟹  U_λ(t dy, dλ')/U_λ(t) → α y^{α−1} dρ_s(λ') dy   (34)

as t → ∞. The latter can be seen by noting that Fact 1 and Theorem 2 yield

  (1/U_λ(t)) ∫_{[0,∞] × A} e^{−zy} U_λ(t dy, dλ') = ω_λ(z/t, A)/U_λ(t) → Γ(α + 1) z^{−α} ρ_s(A)

as t → ∞, which is the Laplace transform in the time variable of the measure in (34). We obtain, for α ∈ (0, 1), that

  lim_{t→∞} P(t x_1 < Y_{λ,t} < t x_2) = (sin(πα)/π) ∫_{1−x_2}^{1−x_1} (1 − y)^{−α} y^{α−1} dy = (sin(πα)/π) ∫_{x_1}^{x_2} x^{−α} (1 − x)^{α−1} dx,

which is the claimed statement for Y_{λ,t}/t; the computation for Z_{λ,t}/t is analogous.

If α = 0, then the first fraction in (33) goes to one everywhere except at y = 1, while the Laplace transform above is identically 1, which means that the underlying measure converges weakly to the point mass at y = 0. Hence P(t x_1 < Y_{λ,t} < t x_2) → 0 whenever x_2 < 1, i.e. Y_{λ,t}/t → 1 in probability, and correspondingly Z_{λ,t}/t → ∞.

In the α = 1 case,

  t^{−1} U_λ(t) ∫_0^t (1 − F(s)) ds → ( ∫_a^b λ^{−1} dρ_s(λ) )^{−1},

so instead of (33) we have

  ( ∫_a^b λ^{−1} dρ_s(λ) )^{−1} ∫_{[1−x_2, 1−x_1] × [a,b]} ( (1 − F(λ' t(1 − y))) / ( t^{−1} ∫_0^t (1 − F(s)) ds ) ) ( U_λ(t dy, dλ') / U_λ(t) ).

Similarly as before, the measure with respect to which we are integrating can be shown to converge weakly to the point mass at y = 1. If y ≠ 1, the first fraction in the integrand is asymptotically equal to

  (1/(λ'(1 − y))) · (1 − F(t)) / ( t^{−1} ∫_0^t (1 − F(s)) ds ),

which can be shown to approach zero as t → ∞ by partial integration. Hence P(t x_1 < Y_{λ,t} < t x_2) → 0 whenever x_1 > 0, i.e. Y_{λ,t}/t → 0 in probability; the argument for Z_{λ,t}/t is analogous.

Acknowledgement
We would like to thank P. Nándori for pointing out numerous typos and helping us improve the original manuscript. D. Sz. is also grateful to the Hungarian National Foundation for Scientific Research, grants No. T 046187, K 71693, NK 63066 and TS 049835.

References

[1] K.K. Anderson, K.B. Athreya, A renewal theorem in the infinite mean case, Ann. Probab., 388-393 (1987)
[2] V.S. Barbu, N. Limnios, Semi-Markov Chains and Hidden Semi-Markov Models toward Applications: Their Use in Reliability and DNA Analysis, Springer (ISBN 978-0-387-73171-1) (2008)
[3] E. Çinlar, Markov renewal theory, Adv. in Appl. Probab., 1:123-187 (1969)
[4] E. Çinlar, Introduction to Stochastic Processes, Prentice Hall, New York (1975)
[5] R. Durrett, Probability: Theory and Examples, Duxbury Press, 3rd edition (2005)
[6] E.B. Dynkin, Some limit theorems for sums of independent random variables with infinite mathematical expectations, Izv. Akad. Nauk SSSR Ser. Mat., 247-266 (1955) (in Russian); English translation: Selected Translations Math. Statist. Prob., 171-189 (1961)
[7] K.B. Erickson, Strong renewal theorems with infinite mean, Trans. Amer. Math. Soc., 263-291 (1970)
[8] W. Feller, An Introduction to Probability Theory and Its Applications, Volume II, 2nd edition, John Wiley & Sons, Inc. (1971)
[9] J. Janssen (ed.), Semi-Markov Models: Theory and Applications, Plenum Press, New York (1986)
[10] J. Janssen, N. Limnios, Semi-Markov Models and Applications, Kluwer Academic, New York (1999)
[11] T. Kato, Perturbation Theory for Linear Operators, Die Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen, Band 132, Springer (1966)
[12] P. Lévy, Processus semi-markoviens, in Proc. of the International Congress of Mathematicians, Amsterdam (1954)
[13] K.V. Mitov, N.M. Yanev, Limit theorems for alternating renewal processes in the infinite mean case, Adv. Appl. Prob., 896-911 (2001)
[14] R. Pyke, Markov renewal processes: definitions and preliminary properties, Ann. Math. Statist., 32:1231-1241 (1961)
[15] R. Pyke, Markov renewal processes with finitely many states, Ann. Math. Statist., 32:1243-1259 (1961)
[16] R. Pyke, R. Schaufele, Limit theorems for Markov renewal processes, Ann. Math. Statist., 35:1746-1764 (1964)
[17] W.L. Smith, Asymptotic renewal theorems, Proc. Roy. Soc. Edinburgh Sect. A
[18] W.L. Smith, Regenerative stochastic processes, Proc. R. Soc. Lond. Ser. A Math. Phys. Eng., 232:6-31 (1955)
[19] L. Takács, Some investigations concerning recurrent stochastic processes of a certain type, Magyar Tud. Akad. Mat. Kutató Int. Közl., 3:115-128 (1954)
[20] J.L. Teugels, Renewal theorems when the first or the second moment is infinite, Ann. Math. Statist., 39 (1968)