Self-similar scaling limits of Markov chains on the positive integers
Jean Bertoin ♠ & Igor Kortchemski ♥

Abstract
We are interested in the asymptotic behavior of Markov chains on the set of positive integers for which, loosely speaking, large jumps are rare and occur at a rate that behaves like a negative power of the current state, and such that small positive and negative steps of the chain roughly compensate each other. If X_n is such a Markov chain started at n, we establish a limit theorem for n⁻¹X_n appropriately scaled in time, where the scaling limit is given by a nonnegative self-similar Markov process. We also study the asymptotic behavior of the time needed by X_n to reach some fixed finite set. We identify three different regimes (roughly speaking the transient, the recurrent and the positive-recurrent regimes) in which X_n exhibits different behavior. The present results extend those of Haas & Miermont [19] who focused on the case of non-increasing Markov chains. We further present a number of applications to the study of Markov chains with asymptotically zero drifts such as Bessel-type random walks, nonnegative self-similar Markov processes, invariance principles for random walks conditioned to stay positive, and exchangeable coalescence-fragmentation processes.

Figure 1:
Three different asymptotic regimes of the Markov chain X_n/n started at n as n → ∞: with probability tending to one as n → ∞, in the first case, the chain never reaches the boundary (transient case); in the second case X_n reaches the boundary and then stays within its vicinity on long time scales (positive recurrent case); and in the last case X_n visits the boundary infinitely many times and makes some macroscopic excursions in between (null-recurrent case).

♠ Universität Zürich. [email protected]
♥ Universität Zürich. [email protected]
MSC2010 subject classifications. Primary 60F17, 60J10, 60G18; secondary 60J35.
Keywords and phrases.
Markov chains, Self-similar Markov processes, Lévy processes, Invariance principles.

INTRODUCTION

In short, the purpose of this work is to provide explicit criteria for the functional weak convergence of properly rescaled Markov chains on N = {
1, 2, . . .}. Since it is well-known from the work of Lamperti [29] that self-similar processes arise as the scaling limit of general stochastic processes, and since in the case of Markov chains, one naturally expects the Markov property to be preserved after convergence, scaling limits of rescaled Markov chains on N should thus belong to the class of self-similar Markov processes on [0, ∞). The latter have also been introduced by Lamperti [31], who pointed out a remarkable connexion with real-valued Lévy processes which we shall recall later on. Considering the powerful arsenal of techniques which are nowadays available for establishing convergence in distribution for sequences of Markov processes (see in particular Ethier & Kurtz [16] and Jacod & Shiryaev [23]), it seems that the study of scaling limits of general Markov chains on N should be part of the folklore. Roughly speaking, it is well-known that weak convergence of Feller processes amounts to the convergence of infinitesimal generators (in some appropriate sense), and the path should thus be essentially well-paved. However, there is a major obstacle for this natural approach. Namely, there is a delicate issue regarding the boundary of self-similar Markov processes on [0, ∞): in some cases, 0 is an absorbing boundary, in some others, 0 is an entrance boundary, and further 0 can also be a reflecting boundary, where the reflection can be either continuous or by a jump. See [6, 12, 17, 36, 37] and the references therein. Analytically, this raises the questions of identifying a core for a self-similar Markov process on [0, ∞) and of determining its infinitesimal generator on this core, in particular on the neighborhood of the boundary point 0 where a singularity appears.
To the best of our knowledge, these questions remain open in general, and investigating the asymptotic behavior of a sequence of infinitesimal generators at a singular point therefore seems rather subtle. A few years ago, Haas & Miermont [19] obtained a general scaling limit theorem for non-increasing Markov chains on N (observe that plainly, 1 is always an absorbing boundary for non-increasing self-similar Markov processes), and the purpose of the present work is to extend their result by removing the non-increase assumption. Our approach bears similarities with that developed by Haas & Miermont, but also with some differences. In short, Haas and Miermont first established a tightness result, and then analyzed weak limits of convergent subsequences via martingale problems, whereas we rather investigate asymptotics of infinitesimal generators. More precisely, in order to circumvent the crucial difficulty related to the boundary point 0, we shall not directly study the rescaled version of the Markov chain, but rather a time-changed version. The time-substitution is chosen so as to yield weak convergence towards the exponential of a Lévy process, where the convergence is established through the analysis of infinitesimal generators. The upshot is that cores and infinitesimal generators are much better understood for Lévy processes and their exponentials than for self-similar Markov processes, and boundaries yield no difficulty. We are then left with the inversion of the time-substitution, and this turns out to be closely related to the Lamperti transformation. However, although our approach enables us to treat the situation when the scaling limit is absorbed at 0 or escapes to +∞, it does not seem to provide direct access to the case when the limiting process is reflected at the boundary (see Figure 1). The rest of this work is organized as follows. Our general results are presented in Section 2.
We state three main limit theorems, namely Theorems 1, 2 and 4, each being valid under some specific set of assumptions. Roughly speaking, Theorem 1 treats the situation where the Markov chain is transient and thus escapes to +∞, whereas Theorem 2 deals with the recurrent case. In the latter, we only consider the Markov chain until its first entrance time in some finite set, which forces absorption at the boundary point 0 for the scaling limit. Theorem 4 is concerned with the situation where the Markov chain is positive recurrent; then convergence of the properly rescaled chain to a self-similar Markov process absorbed at 0 is established, even though the Markov chain is no longer trapped in some finite set. Finally, we also provide a weak limit theorem (Theorem 3) in the recurrent situation for the first instant when the Markov chain started from a large level enters some fixed finite set. Section 3 prepares the proofs of the preceding results, by focusing on an auxiliary continuous-time Markov chain which is both closely related to the genuine discrete-time Markov chain and easier to study. The connexion between the two relies on a Lamperti-type transformation. The proofs of the statements made in Section 2 are then given in Section 4 by analyzing the time-substitution; classical arguments relying on the celebrated Foster criterion for recurrence of Markov chains also play a crucial role. We illustrate our general results in Section 5. First, we check that they encompass those of Haas & Miermont in the case where the chain is non-increasing. Then we derive functional limit theorems for Markov chains with asymptotically zero drift (this includes the so-called Bessel-type random walks which have been considered by many authors in the literature); scaling limits are then given in terms of Bessel processes. Lastly, we derive a weak limit theorem for the number of particles in a fragmentation-coagulation process, of a type similar to that introduced by J. Berestycki [3].
Finally, in Section 6, we point at a series of open questions related to this work. We conclude this Introduction by mentioning that our initial motivation for establishing such scaling limits for Markov chains on N was a question raised by Nicolas Curien concerning the study of random planar triangulations and their connexions with compensated fragmentations, which has been developed in a subsequent work [5].

Acknowledgments.
We thank an anonymous referee and Vitali Wachtel for several useful comments. I.K. would also like to thank Leif Döring for stimulating discussions.
DESCRIPTION OF THE MAIN RESULTS

For every integer n ≥ 1, let (p_{n,k} ; k ≥ 1) be a sequence of non-negative real numbers such that Σ_{k≥1} p_{n,k} = 1, and let (X_n(k) ; k ≥ 0) be the discrete-time homogeneous Markov chain started at state n such that the probability transition from state i to state j is p_{i,j} for i, j ∈ N. Specifically, X_n(0) = n and P(X_n(k+1) = j | X_n(k) = i) = p_{i,j} for every i, j ≥ 1 and k ≥
0. Under certain assumptions on the probability transitions, we establish (Theorems 1, 2 and 4 below) a functional invariance principle for n⁻¹X_n, appropriately scaled in time, to a nonnegative self-similar Markov process in the Skorokhod topology for càdlàg functions. In order to state our results, we first need to formulate the main assumptions. For n ≥
1, denote by Π*_n the probability measure on R defined by

Π*_n(dx) = Σ_{k≥1} p_{n,k} · δ_{ln(k)−ln(n)}(dx),

which is the law of ln(X_n(1)/n). Let (a_n)_{n≥1} be a sequence of positive real numbers with regular variation of index γ > 0, meaning that a_{⌊xn⌋}/a_n → x^γ as n → ∞ for every fixed x > 0, where ⌊x⌋ stands for the integer part of a real number x. Let Π be a measure on R\{0} such that Π({−1, 1}) = 0 and

∫_{−∞}^{∞} (1 ∧ x²) Π(dx) < ∞. (1)

We require that Π({−
1, 1}) = 0 merely for simplicity; it should be possible to treat also the case γ = 0 with a_n → ∞, but we shall not pursue this goal here. Finally, denote by R̄ = [−∞, ∞] the extended real line.

We now introduce our main assumptions:

(A1). As n → ∞, we have the following vague convergence of measures on R\{0}:

a_n · Π*_n(dx) ⟶ Π(dx) vaguely as n → ∞.

In other words, we assume that

a_n · E[f(X_n(1)/n)] ⟶ ∫_R f(e^x) Π(dx) as n → ∞

for every continuous function f with compact support in [0, ∞]\{1}.

(A2). The following two convergences hold:

a_n · ∫_{−1}^{1} x Π*_n(dx) ⟶ b  and  a_n · ∫_{−1}^{1} x² Π*_n(dx) ⟶ σ² + ∫_{−1}^{1} x² Π(dx),

for some b ∈ R and σ² ≥ 0. Note that under (A1) we may have ∫_{−1}^{1} |x| Π(dx) = ∞, in which case (A2) requires small positive and negative steps of the chain to roughly compensate each other.

We now introduce several additional tools in order to describe the scaling limit of the Markov chain X_n. Let (ξ(t))_{t≥0} be a Lévy process with characteristic exponent given by the Lévy–Khintchine formula

Φ(λ) = −σ²λ²/2 + ibλ + ∫_{−∞}^{∞} (e^{iλx} − 1 − iλx 1_{|x|≤1}) Π(dx),  λ ∈ R.

Specifically, there is the identity E[e^{iλξ(t)}] = e^{tΦ(λ)} for t ≥ 0 and λ ∈ R. Then set

I_∞ = ∫_0^∞ e^{γξ(s)} ds ∈ (0, ∞].

It is known that I_∞ < ∞ a.s. if ξ drifts to −∞ (i.e. lim_{t→∞} ξ(t) = −∞ a.s.), and I_∞ = ∞ a.s. if ξ drifts to +∞ or oscillates (see e.g. [7, Theorem 1], which also gives necessary and sufficient conditions involving Π). Then for every t ≥
0, set

τ(t) = inf{ u ≥ 0 : ∫_0^u e^{γξ(s)} ds > t },

with the usual convention inf ∅ = ∞. Finally, define the Lamperti transform [31] of ξ by

Y(t) = e^{ξ(τ(t))} for 0 ≤ t < I_∞,  and  Y(t) = 0 for t ≥ I_∞.

In view of the preceding observations, Y hits 0 in finite time almost surely if, and only if, ξ drifts to −∞. By construction, the process Y is a self-similar Markov process of index 1/γ started at 1. Recall that if P_x is the law of a nonnegative Markov process (M_t)_{t≥0} started at x ≥ 0, then M is self-similar with index α > 0 if the law of (r^{−α} M_{rt})_{t≥0} under P_x is P_{r^{−α}x} for every r > 0 and x ≥
0. Lamperti [31] introduced and studied nonnegative self-similar Markov processes and established that, conversely, any self-similar Markov process which either never reaches the boundary states 0 and ∞, or reaches them continuously (in other words, there is no killing inside (0, ∞)), can be constructed by using the previous transformation.

We are now ready to state our first main result, which is a limit theorem in distribution in the space of real-valued càdlàg functions D(R₊, R) on R₊ equipped with the J₁-Skorokhod topology (we refer to [23, Chapter VI] for background on the Skorokhod topology).

Theorem 1 (Transient case). Assume that (A1) and (A2) hold, and that the Lévy process ξ does not drift to −∞. Then the convergence

(X_n(⌊a_n t⌋)/n ; t ≥ 0) ⟶ (Y(t) ; t ≥ 0) in distribution as n → ∞ (2)

holds in D(R₊, R).

In this case, Y does not touch 0 almost surely (see the left-most image in Figure 1). When ξ drifts to −∞, we establish an analogous result for the chain X_n stopped when it reaches some fixed finite set, under the following additional assumption:

(A3). There exists β > 0 such that

lim sup_{n→∞} a_n · ∫_1^∞ e^{βx} Π*_n(dx) < ∞.

Observe that (A1) and (A3) imply that ∫_1^∞ e^{βx} Π(dx) < ∞. Roughly speaking, Assumption (A3) tells us that in the case where ξ drifts to −∞, the chain X_n/n does not make too large positive jumps, and it will enable us to use Foster–Lyapounov type estimates (see Sec. 4.2). Observe that (A3) is automatically satisfied if the Markov chain is non-increasing or has uniformly bounded upwards jumps.

In the sequel, we let K ≥ 1 be an integer such that the set {
1, 2, . . . , K} is accessible by X_n for every n ≥ 1, in the sense that inf{i ≥ 0 : X_n(i) ≤ K} < ∞ with positive probability for every n ≥ 1. If (A1), (A2) hold and ξ drifts to −∞, then such integers always exist. Indeed, consider

κ := sup{ n ≥ 1 : P(X_n(1) < n) = 0 }.

If κ = ∞, then the measure Π*_n has support in [0, ∞) for infinitely many n ∈ N, and thus, if further (A1) and (A2) hold, ξ must be a subordinator and therefore drifts to +∞. Therefore, κ < ∞ if ξ drifts to −∞, and by definition of κ, the set {1, 2, . . . , κ} is accessible by X_n for every n ≥ 1. For irreducible Markov chains, one can evidently take K = 1.

If (A1), (A2), (A3) hold and the Lévy process ξ drifts to −∞, then {1, 2, . . . , K} is recurrent for the Markov chain, in the sense that for every n ≥ 1, inf{k ≥ 0 : X_n(k) ≤ K} < ∞ almost surely (see Lemma 4.1). Loosely speaking, we call this the recurrent case. Finally, for every n ≥ 1, let X†_n be the Markov chain X_n stopped at its first visit to {1, 2, . . . , K}, that is, X†_n(·) = X_n(· ∧ A^{(K)}_n), where A^{(K)}_n = inf{k ≥ 0 : X_n(k) ≤ K}, with again the usual convention inf ∅ = ∞.

Theorem 2.
Assume that (A1), (A2), (A3) hold and that the Lévy process ξ drifts to −∞. Then the convergence

(X†_n(⌊a_n t⌋)/n ; t ≥ 0) ⟶ (Y(t) ; t ≥ 0) in distribution as n → ∞ (3)

holds in D(R₊, R).

In this case, the process Y is absorbed once it reaches 0 (see the second and third images from the left in Fig. 1). This result extends [19, Theorem 1]; see Section 5.1 for details. We will discuss in Section 2.5 what happens when the Markov chain X_n is not stopped anymore. Observe that according to the asymptotic behavior of ξ, the behavior of Y is drastically different: when ξ drifts to −∞, Y is absorbed at 0 at a finite time, and Y remains forever positive otherwise. Let us mention that with the same techniques, it is possible to extend Theorems 1 and 2 to the case where the Lévy process ξ is killed at a random exponential time, in which case Y reaches 0 by a jump. However, to simplify the exposition, we shall not pursue this goal here.

Given σ² ≥ 0, b ∈ R, γ > 0 and a measure Π on R\{0} such that (1) holds and Π({−1, 1}) = 0, it is possible to check the existence of a family (p_{n,k} ; n, k ≥ 1) such that (A1) and (A2) hold (see e.g. [19, Proposition 1] in the non-increasing case). We may further request (A3) whenever ∫_1^∞ e^{βx} Π(dx) < ∞ for some β > 0. As a consequence, our Theorems 1 and 2 show that any nonnegative self-similar Markov process whose associated Lévy measure Π has a small finite exponential moment on [1, ∞), considered up to its first hitting time of the origin, is the scaling limit of a Markov chain.

It is natural to ask whether the convergence (3) holds jointly with the convergence of the associated absorption times. Observe that this is not a mere consequence of Theorem 2, since absorption times, if they exist, are in general not continuous functionals for the Skorokhod topology on D(R₊, R). Haas & Miermont [19, Theorem 2] proved that, indeed, the associated absorption times converge for non-increasing Markov chains. We will prove that, under the same assumptions as for Theorem 2, the associated absorption times converge in distribution, and further that the convergence also holds for the expected value under an additional positive-recurrent type assumption. Let Ψ be the Laplace exponent associated with ξ, which is given by

Ψ(λ) = Φ(−iλ) = σ²λ²/2 + bλ + ∫_{−∞}^{∞} (e^{λx} − 1 − λx 1_{|x|≤1}) Π(dx)

for those values of λ ∈ R for which this quantity is well defined, so that E[e^{λξ(t)}] = e^{tΨ(λ)}. Note that (A3) implies that Ψ is well defined on a positive neighborhood of 0.

(A4). There exists β₀ > γ such that

lim sup_{n→∞} a_n · ∫_1^∞ e^{β₀x} Π*_n(dx) < ∞ and Ψ(β₀) <
0. (4)
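To make the sign conditions around (4) concrete, the following sketch evaluates the Laplace exponent Ψ(λ) = σ²λ²/2 + bλ + ∫(e^{λx} − 1 − λx 1_{|x|≤1}) Π(dx) numerically. The triplet below (σ² = 0, b = −1, and a two-atom measure Π placed at ±1/2 so that Π({−1, 1}) = 0, as required by (1)) is a hypothetical choice made up for illustration; it is not an object from the paper.

```python
import math

# Hypothetical Lévy triplet for illustration only (not from the paper).
SIGMA2 = 0.0
B = -1.0
PI_ATOMS = [(0.5, 1.0), (-0.5, 1.0)]  # pairs (position x, mass Pi({x}))

def psi(lam):
    """Laplace exponent Psi(lam) = sigma^2*lam^2/2 + b*lam
       + sum over atoms of w*(exp(lam*x) - 1 - lam*x*1_{|x|<=1})."""
    jumps = sum(w * (math.exp(lam * x) - 1.0 - lam * x * (abs(x) <= 1.0))
                for x, w in PI_ATOMS)
    return 0.5 * SIGMA2 * lam ** 2 + B * lam + jumps

# Here Psi'(0+) is approximately b = -1 < 0, so the corresponding Lévy process
# drifts to -infinity, and Psi(1.0) is approximately -0.745 < 0, so beta_0 = 1
# satisfies the second inequality of (4) for this toy triplet when gamma < 1.
```

Since this Π is finite with bounded support, the first inequality of (4) is immediate here; for the chains of the paper one must instead bound a_n ∫_1^∞ e^{β₀x} Π*_n(dx).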
Assumption (A4) strengthens (A3), which only requires the first inequality of (4) to hold for a certain β > 0. Also, if (A4) holds, then we have Ψ(γ) < 0 by convexity of Ψ. Conversely, observe that (A4) is automatically satisfied if Ψ(γ) < 0 and (A3) holds with some β > γ. If (A1), (A2) and (A4) hold, then the Lévy process ξ drifts to −∞ and the first hitting time A^{(k)}_n of {
1, 2, . . . , k} by X_n has finite expectation for every n > k, where k is sufficiently large (see Lemma 4.2). Loosely speaking, we call this the positive recurrent case.

Theorem 3. Assume that (A1), (A2), (A3) hold and that ξ drifts to −∞. Let K ≥ 1 be such that {
1, 2, . . . , K} is accessible by X_n for every n ≥ 1.

(i) We have

A^{(K)}_n / a_n ⟶ ∫_0^∞ e^{γξ(s)} ds in distribution as n → ∞, (5)

and this convergence holds jointly with (3).

(ii) If further (A4) holds and if, in addition,

Σ_{k≥1} k^{β₀} · p_{n,k} < ∞ for every n ≥ K + 1, (6)

then

E[A^{(K)}_n] / a_n ⟶ 1/|Ψ(γ)| as n → ∞. (7)

We point out that when (4) is satisfied, the inequality Σ_{k≥1} k^{β₀} · p_{n,k} < ∞ is automatically satisfied for every n sufficiently large, so that condition (6) is then fulfilled provided that K has been chosen sufficiently large. See Remark 4.10 for the extension of (7) to higher order moments. Finally, observe that (6) is the only condition which does not depend solely on the asymptotic behavior of p_{n,·} as n → ∞ (the behavior of the law of X_n(1) for small values of n matters here).

This result has been proved by Haas & Miermont [19, Theorem 2] in the case of non-increasing Markov chains. However, some differences appear in our more general setup. For instance, (7) is always true when the chain is non-increasing, but clearly cannot hold if Ψ(γ) > 0 (in this case the limit in (5) has infinite expectation) or if the Markov chain is irreducible and not positive recurrent (in this case E[A^{(K)}_n] = ∞).

It is natural to ask if Theorem 2 also holds for the non-absorbed Markov chain X_n. Roughly speaking, we show that the answer is affirmative if the chain does not make too large jumps when reaching low values belonging to {
1, 2, . . . , K}, as quantified by the following last assumption, which completes (A4).

(A5). Assumption (A4) holds and, in addition, for every n ≥ 1 we have

E[X_n(1)^{β₀}] = Σ_{k≥1} k^{β₀} · p_{n,k} < ∞,

with β₀ > γ such that (4) holds.

Theorem 4.
Assume that (A1), (A2) and (A5) hold. Then the convergence

(X_n(⌊a_n t⌋)/n ; t ≥ 0) ⟶ (Y(t) ; t ≥ 0) in distribution as n → ∞ (8)

holds in D(R₊, R).

Recall that when ξ drifts to −∞, we have I_∞ < ∞ and Y_t = 0 for t ≥ I_∞, so that roughly speaking this result tells us that with probability tending to 1 as n → ∞, once X_n has reached levels of order o(n), it will remain there on time scales of order a_n. If (A4) holds but not (A5), we believe that the result of Theorem 4 does not hold in general, since the Markov chain may become null-recurrent (see Remark 4.11) and the process may “restart” from 0 (see Section 6).

We finally briefly comment on the techniques involved in the proofs of Theorems 1 and 2, which differ from those of [19]. We start by embedding X_n in continuous time by considering an independent Poisson process N_n of parameter a_n, which allows us to construct a continuous-time Markov process L_n such that the following equality in distribution holds:

(X_n(N_n(t))/n ; t ≥ 0) = (exp(L_n(τ_n(t))) ; t ≥ 0) in distribution,

where τ_n is a Lamperti-type time change of L_n (see (12)). Roughly speaking, to establish Theorems 1 and 2, we use the characterization of functional convergence of Feller processes by generators in order to show that L_n converges in distribution to ξ and that τ_n converges in distribution towards τ. However, one needs to proceed with particular caution when ξ drifts to −∞, since the time changes then explode. In this case, assumption (A3) will give us useful bounds on the growth of X_n by Foster–Lyapounov techniques.

AN AUXILIARY CONTINUOUS-TIME MARKOV PROCESS

In this section, we construct an auxiliary continuous-time Markov chain (L_n(t) ; t ≥ 0) in such a way that L_n, appropriately scaled, converges to ξ and such that, roughly speaking, X_n may be recovered from exp(L_n) by a Lamperti-type time change. For every n ≥
1, first let (ξ_n(t) ; t ≥ 0) be a compound Poisson process with Lévy measure a_n · Π*_n. That is,

E[e^{iλξ_n(t)}] = exp( t ∫_{−∞}^{∞} (e^{iλx} − 1) · a_n Π*_n(dx) ),  λ ∈ R, t ≥ 0.

Then ξ_n is a Feller process on R with generator A_n given by

A_n f(x) = a_n ∫_{−∞}^{∞} (f(x + y) − f(x)) Π*_n(dy),  f ∈ C_c^∞(R), x ∈ R,

where C_c^∞(I) denotes the space of real-valued infinitely differentiable functions with compact support in an interval I. It is also well-known that the Lévy process ξ, which has been introduced in Section 2.2, is a Feller process on R with infinitesimal generator A given by

A f(x) = (σ²/2) f″(x) + b f′(x) + ∫_{−∞}^{∞} ( f(x + y) − f(x) − f′(x) y 1_{|y|≤1} ) Π(dy),  f ∈ C_c^∞(R), x ∈ R,

and, in addition, C_c^∞(R) is a core for ξ (see e.g. [38, Theorem 31.5]). Under (A1) and (A2), by [24, Theorems 15.14 & 15.17], ξ_n converges in distribution in D(R₊, R) as n → ∞ to ξ. It is then classical that the convergence of generators

A_n f ⟶ A f as n → ∞ (9)

holds for every f ∈ C_c^∞(R), in the sense of the uniform norm on C(R). It is also possible to check (9) directly by a simple calculation which relies on the fact that lim_{ε→0} lim sup_{n→∞} a_n ∫_{−ε}^{ε} |y|³ Π*_n(dy) = 0 under (A2) (see Sec. 5.2 for similar estimates). We leave the details to the reader.

For x ∈ R, we let {x} = x − ⌊x⌋ denote the fractional part of x, and also set ⌈x⌉ = ⌊x⌋ + 1 (so that ⌈n⌉ = n + 1 when n is an integer). By convention, we set A₀ = Π*₀ = 0. Now introduce an auxiliary continuous-time Markov chain (L_n(t) ; t ≥ 0) on R ∪ {+∞} which has generator B_n defined as follows:

B_n f(x) = (1 − {ne^x}) · A_{⌊ne^x⌋} f(x) + {ne^x} · A_{⌈ne^x⌉} f(x),  f ∈ C_c^∞(R), x ∈ R. (10)

We allow L_n to eventually take the cemetery value +∞, since it is not clear for the moment whether L_n explodes in finite time or not. The process L_n is designed in such a way that if n exp(L_n) is at an integer-valued state, say j ∈ N, then it waits a random time distributed as an exponential random variable of parameter a_j and then jumps to state k ∈ N with probability p_{j,k} for k ≥ 1. In particular, n exp(L_n) then remains integer-valued whenever it starts in N. Roughly speaking, the generator (10) thus extends the possible states of L_n from ln(N/n) to R by smooth interpolation. A crucial feature of L_n lies in the following result.

Proposition 3.1.
Assume that (A1) and (A2) hold. For every x ∈ R, L_n, started from x, converges in distribution in D(R₊, R) as n → ∞ to ξ + x.

Proof.
Consider the modified continuous-time Markov chain (L̂_n(t) ; t ≥ 0) on R which has generator B̂_n defined as follows:

B̂_n f(x) = (1 − {ne^x}) · 1_{⌊ne^x⌋ ≤ n²} · A_{⌊ne^x⌋} f(x) + {ne^x} · 1_{⌈ne^x⌉ ≤ n²} · A_{⌈ne^x⌉} f(x),  f ∈ C_c^∞(R), x ∈ R. (11)

We stress that B̂_n f(x) = B_n f(x) for all x < ln n, so the processes L_n and L̂_n can be coupled so that their trajectories coincide up to the time when they exceed ln n. Therefore, it is enough to check that for every x ∈ R, L̂_n, started from x, converges in distribution in D(R₊, R) to ξ + x. The reason for introducing L̂_n is that clearly L̂_n does not explode, and it is in addition a Feller process (note that it is not clear a priori that L_n is a Feller process that does not explode). Indeed, the generator B̂_n can be written in the form B̂_n f(x) = ∫_{−∞}^{∞} (f(x + y) − f(x)) μ_n(x, dy) for x ∈ R and f ∈ C_c^∞(R), where μ_n(x, dy) is the measure on R defined by

μ_n(x, dy) = (1 − {ne^x}) 1_{⌊ne^x⌋ ≤ n²} a_{⌊ne^x⌋} Π*_{⌊ne^x⌋}(dy) + {ne^x} 1_{⌈ne^x⌉ ≤ n²} a_{⌈ne^x⌉} Π*_{⌈ne^x⌉}(dy).

It is straightforward to check that sup_{x∈R} μ_n(x, R) < ∞ and that the map x ↦ μ_n(x, dy) is weakly continuous. This implies that L̂_n is indeed a Feller process. By [24, Theorem 19.25] (see also Theorem 6.1 in [16, Chapter 1]), in order to establish Proposition 3.1 with L_n replaced by L̂_n, it is enough to check that B̂_n f converges uniformly to A f as n → ∞ for every f ∈ C_c^∞(R). For the sake of simplicity, we shall further suppose that |f| ≤ 1. Note that A f(x) → 0 as x → ±∞ since ξ is a Feller process, and (9) implies that B̂_n f converges uniformly on compact intervals to A f as n → ∞. Therefore, it is enough to check that

lim_{M→∞} lim sup_{n→∞} sup_{|x|>M} |B̂_n f(x)| = 0.

Fix ε > 0. By (1), we may choose u > 0 such that Π(R \ (−u, u)) < ε. The portmanteau theorem [8, Theorem 2.1] and (A1) imply that

lim sup_{n→∞} a_n · Π*_n(R \ (−u, u)) ≤ Π(R \ (−u, u)) < ε.

We can therefore find M₀ > u such that a_n · Π*_n(R \ (−M₀, M₀)) < ε for every n ≥ 1. Now let m < M be such that the support of f is included in [m, M]. Then, for x > M + M₀, we have

B̂_n f(x) = ∫_{−∞}^{∞} f(x + y) 1_{x+y ≤ M} μ_n(x, dy),

so that |B̂_n f(x)| ≤ a_{⌊ne^x⌋} Π*_{⌊ne^x⌋}((−∞, M − x)) + a_{⌈ne^x⌉} Π*_{⌈ne^x⌉}((−∞, M − x)) ≤ 2ε. One similarly shows that |B̂_n f(x)| ≤ 2ε for x < m − M₀. This completes the proof.

Recovering X_n from L_n by a time change

Unless otherwise specifically mentioned, we shall henceforth assume that L_n starts from 0. In order to formulate a connection between X_n and exp(L_n), it is convenient to introduce some additional notation. Let (N_n(t) ; t ≥ 0) be a Poisson process of intensity a_n, independent of X_n, and, for every t ≥
0, set

τ_n(t) = inf{ u ≥ 0 : ∫_0^u (a_{n exp(L_n(s))} / a_n) ds > t }. (12)

We stress that τ_n(t) is finite a.s. for all t ≥ 0. Indeed, if we write ζ for the possible explosion time of L_n (ζ = ∞ when L_n does not explode), then ∫_0^ζ a_{n exp(L_n(s))} ds = ∞ almost surely. Specifically, when n exp(L_n) is at some state, say k, it stays there for an exponential time with parameter a_k, and the contribution of this portion of time to the integral thus has the standard exponential distribution, which entails our claim.

Lemma 3.2.
Assume that L_n(0) = 0. Then we have

(X_n(N_n(t))/n ; t ≥ 0) = (exp(L_n(τ_n(t))) ; t ≥ 0) in distribution. (13)

Proof. Plainly, the two processes appearing in (13) are continuous-time Markov chains, so to prove the statement, we need to check that their respective embedded discrete-time Markov chains (i.e. jump chains) have the same law, and that the two exponential waiting times at a same state have the same parameter. Recall the description made after (10) of the process n exp(L_n) started at an integer value. We see in particular that the two jump chains in (13) have indeed the same law. Then fix some j ∈ N and recall that the waiting time of L_n at state ln(j/n) is distributed according to an exponential random variable of parameter a_j. It follows readily from the definition of the time-change τ_n that the waiting time of exp(L_n(τ_n(·))) at state j/n is distributed according to an exponential random variable of parameter a_j × a_n/a_j = a_n. This proves our claim.

SCALING LIMITS OF THE MARKOV CHAIN X_n

We now prove Theorem 1 by establishing that

(X_n(N_n(t))/n ; t ≥ 0) ⟶ (Y(t) ; t ≥ 0) in distribution as n → ∞ (14)

in D(R₊, R). Since by the functional law of large numbers (N_n(t)/a_n ; t ≥ 0) converges in probability to the identity uniformly on compact sets, Theorem 1 will follow from (14) by standard properties of the Skorokhod topology (see e.g. [23, VI. Theorem 1.14]).

Proof of Theorem 1.
Assume that (A1), (A2) hold and that ξ does not drift to −∞. In particular, recall from the Introduction that we have I_∞ = ∞ and that the process Y(t) = exp(ξ(τ(t))) remains bounded away from 0 for all t ≥ 0. Also, since (a_n) is regularly varying with index γ, the map x ↦ a_{⌊nx⌋}/a_n converges uniformly on compact subsets of R₊ to x ↦ x^γ as n → ∞. Recall that L_n(0) =
0. Then by Proposition 3.1 and standard properties of the Skorokhod topology (see e.g. [23, VI. Theorem 1.14]), it follows that

(a_{n exp(L_n(s))}/a_n ; s ≥ 0) ⟶ (exp(γξ(s)) ; s ≥ 0) in distribution as n → ∞

in D(R₊, R). This implies that

(∫_0^u (a_{n exp(L_n(s))}/a_n) ds ; u ≥ 0) ⟶ (∫_0^u exp(γξ(s)) ds ; u ≥ 0) in distribution as n → ∞, (15)

in C(R₊, R), which is the space of real-valued continuous functions on R₊ equipped with the topology of uniform convergence on compact sets. Since the two processes appearing in (15) are almost surely (strictly) increasing in u and I_∞ = ∞, τ is almost surely (strictly) increasing and continuous on R₊. It is then a simple matter to see that (15) in turn implies that τ_n converges in distribution to τ in C(R₊, R). Therefore, by applying Proposition 3.1 once again, we finally get that

(exp(L_n(τ_n(t))) ; t ≥ 0) ⟶ (exp(ξ(τ(t))) ; t ≥ 0) = Y in distribution as n → ∞

in D(R₊, R). By Lemma 3.2, this establishes (14) and completes the proof.

Before tackling the proof of Theorem 2, we start by exploring several preliminary consequences of (A3), which will also be useful in Section 4.4. In the irreducible case, Foster [18] showed that the Markov chain X is positive recurrent if and only if there exist a finite set S ⊂ N, a function f : N → R₊ and ε > 0 such that

for every i ∈ S, Σ_{j≥1} p_{i,j} f(j) < ∞, and for every i ∉ S, Σ_{j≥1} p_{i,j} f(j) ≤ f(i) − ε. (16)

The map f : N → R₊ is commonly referred to as a Foster–Lyapounov function.
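As a concrete illustration of Foster's criterion (16), the following sketch verifies the drift condition numerically for a toy birth-and-death chain on N. The chain, the set S = {1}, the function f and the constant ε below are hypothetical choices made up for this example; they are not the chain X_n of the paper.

```python
# Hypothetical toy chain on {1, 2, ...}: from i >= 2 it moves to i + 1 with
# probability 0.3 and to i - 1 with probability 0.7; from state 1 it jumps to 2.
def transition(i):
    """Transition probabilities p_{i,j} out of state i, as a dict {j: p}."""
    return {2: 1.0} if i == 1 else {i + 1: 0.3, i - 1: 0.7}

def drift(i, f):
    """One-step drift E[f(X_i(1))] - f(i) of the chain started at i."""
    return sum(p * f(j) for j, p in transition(i).items()) - f(i)

f = lambda i: float(i)   # candidate Foster-Lyapounov function
EPS = 0.4

# For i in S = {1}: E[f(X_1(1))] = f(2) = 2 < infinity, while for every i >= 2
# the drift equals 0.3 - 0.7 = -0.4 <= -EPS, so (16) holds and Foster's
# criterion yields positive recurrence of this toy chain.
```

In the setting of the paper, the role of f is played by x ↦ x^β, as discussed next.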
The conditions (16) may be rewritten in the equivalent forms
$$\text{for every } i\in S,\quad \mathbb{E}[f(X_i(1))]<\infty,\qquad\text{and for every } i\notin S,\quad \mathbb{E}[f(X_i(1))-f(i)]\leq-\epsilon.$$
Therefore, Foster–Lyapounov functions allow one to construct nonnegative supermartingales, and the criterion may be interpreted as a stochastic drift condition, in analogy with Lyapounov's stability criteria for ordinary differential equations. A similar criterion exists for recurrence instead of positive recurrence (see e.g. [10, Chapter 5] and [33]).

In our setting, we shall see that (A3) yields Foster–Lyapounov functions of the form $f(x)=x^{\beta}$ for certain values of $\beta>0$. For $i,K\geq 1$, recall that $A_i^{(K)}=\inf\{j\geq 0: X_i(j)\leq K\}$ denotes the first return time of $X_i$ to $\{1,2,\ldots,K\}$.

Lemma 4.1.
Assume that (A1), (A2), (A3) hold and that the Lévy process $\xi$ drifts to $-\infty$. Then:

(i) There exists $\beta_0\in(0,\beta)$ such that $\Psi(\beta_0)<0$.

(ii) For all such $\beta_0$, we have
$$a_n\int_{-\infty}^{\infty}\big(e^{\beta_0 x}-1\big)\,\Pi_n^*(\mathrm{d}x)\ \xrightarrow[n\to\infty]{}\ \Psi(\beta_0)<0.\qquad(17)$$

(iii) Let $M\geq K$ be such that $a_n\int_{-\infty}^{\infty}(e^{\beta_0 x}-1)\,\Pi_n^*(\mathrm{d}x)\leq 0$ for every $n\geq M$. Then, for every $i\geq M$, the process defined by $M_i(\cdot)=X_i(\cdot\wedge A_i^{(M)})^{\beta_0}$ is a positive supermartingale (for the canonical filtration of $X_i$).

(iv) Almost surely, $A_i^{(K)}<\infty$ for every $i\geq 1$.

Proof. By (A1) and (A3), we have $\int_1^{\infty}x\,\Pi(\mathrm{d}x)<\infty$. Since $\xi$ drifts to $-\infty$, by [7, Theorem 1], we have $b+\int_{|x|>1}x\,\Pi(\mathrm{d}x)\in[-\infty,0)$. In particular, $\Psi'(0+)=b+\int_{|x|>1}x\,\Pi(\mathrm{d}x)\in[-\infty,0)$, so that there exists $\beta_0>0$ with $\Psi(\beta_0)<0$. For (ii), recall that $\xi_n$ is a compound Poisson process with Lévy measure $a_n\cdot\Pi_n^*$ that converges in distribution to $\xi$ as $n\to\infty$. By dominated convergence, this implies that $\mathbb{E}[e^{\beta_0\xi_n(1)}]\to\mathbb{E}[e^{\beta_0\xi(1)}]$ as $n\to\infty$, or, equivalently, that (17) holds.

For (iii), note that for $i\geq M$,
$$\frac{a_i}{i^{\beta_0}}\cdot\mathbb{E}\Big[X_i(1)^{\beta_0}-X_i(0)^{\beta_0}\Big]=\frac{a_i}{i^{\beta_0}}\cdot\sum_{k=1}^{\infty}p_{i,k}\big(k^{\beta_0}-i^{\beta_0}\big)=a_i\cdot\int_{-\infty}^{\infty}\big(e^{\beta_0 x}-1\big)\,\Pi_i^*(\mathrm{d}x)\leq 0.\qquad(18)$$
Hence $\mathbb{E}[X_i(1)^{\beta_0}]\leq\mathbb{E}[X_i(0)^{\beta_0}]$ for every $i\geq M$, which, combined with the Markov property, implies that $M_i$ is a positive supermartingale.

The last assertion is an analog of Foster's criterion of recurrence for irreducible Markov chains. Even though we do not assume irreducibility here, it is a simple matter to adapt the proof of Theorem 3.5 in [10, Chapter 5] to our case. Since $M_i$ is a positive supermartingale, it converges almost surely to a finite limit, which implies that $A_i^{(M)}<\infty$ almost surely for every $i\geq M+1$, and therefore $A_i^{(M)}<\infty$ for every $i\geq 1$. Since $\{1,2,\ldots,K\}$ is accessible by $X_n$ for every $n\geq 1$, it readily follows that $A_i^{(K)}<\infty$ almost surely for every $i\geq K+1$, and hence for every $i\geq 1$. Note also that this recurrence property of $X_n$ entails that the continuous-time process $L_n$ defined in Section 3.1 does not explode (and, as a matter of fact, is also recurrent). If the stronger assumptions (A4) and (6) hold instead of (A3), roughly speaking the Markov chain becomes positive recurrent (note that $\xi$ drifts to $-\infty$ when (A4) holds):

Lemma 4.2.
Assume that (A1), (A2), (A4) and (6) hold. Then:

(i) There exist an integer $M\geq K$ and a constant $c>0$ such that, for every $n\geq M$,
$$a_n\cdot\int_{-\infty}^{\infty}\big(e^{\beta x}-1\big)\,\Pi_n^*(\mathrm{d}x)\leq-c.\qquad(19)$$

(ii) For every $n\geq K+1$, $\mathbb{E}\big[A_n^{(K)}\big]<\infty$.

(iii) Assume that, in addition, (A5) holds. Then for every $n\geq 1$, $\mathbb{E}\big[A_n^{(K)}\big]<\infty$.

Proof. The proof of (i) is similar to that of Lemma 4.1. For the other assertions, it is convenient to consider the following modification of the Markov chain. We introduce transition probabilities $p'_{n,k}$ such that $p'_{n,k}=p_{n,k}$ for all $k\geq 1$ and $n>K$, and for $n=1,\ldots,K$, we choose the $p'_{n,k}$ in such a way that $p'_{n,k}>0$ for all $k\geq 1$ and $\sum_{k\geq 1}k^{\beta}\cdot p'_{n,k}<\infty$. In other words, the modified chain with transition probabilities $p'_{n,k}$, say $X'_n$, then fulfills (A5).

The chain $X'_n$ is then irreducible (recall that, by assumption, $\{1,\ldots,K\}$ is accessible by $X_n$ for every $n\in\mathbb{N}$) and fulfills the assumptions of Foster's theorem; see e.g. Theorem 1.1 in Chapter 5 of [10], applied with $h(i)=i^{\beta}$ and $F=\{1,\ldots,M\}$. Hence $X'_n$ is positive recurrent and, as a consequence, the first entrance time of $X'_n$ in $\{1,\ldots,K\}$ has finite expectation for every $n\in\mathbb{N}$. But by construction, for every $n\geq K+1$, the chains $X_n$ and $X'_n$ coincide until the first entrance in $\{1,\ldots,K\}$; this proves (ii). Finally, when (A5) holds, there is no need to modify $X_n$, and the preceding argument shows that $\mathbb{E}[A_n^{(K)}]<\infty$ for all $n\geq 1$.

Remark 4.3. We will later check that under the assumptions of Lemma 4.2, we may have $\mathbb{E}[A_i^{(K)}]=\infty$ for some $1\leq i\leq K$ if (A4) holds but not (A5) (see Remark 4.11).

Recall that $L_n$ denotes the auxiliary continuous-time Markov chain which has been defined in Section 3.1, with $L_n(0)=0$.

Corollary 4.4.
Keep the same assumptions and notation as in Lemma 4.2, and introduce the first passage time
$$\alpha_n^{(M)}=\inf\{t\geq 0: n\exp(L_n(t))\leq M\}.$$
The process
$$\exp\Big(\beta L_n\big(t\wedge\alpha_n^{(M)}\big)+c\big(t\wedge\alpha_n^{(M)}\big)\Big),\qquad t\geq 0,$$
is then a supermartingale.

Proof. Let $R>M$ be arbitrarily large; we shall prove our assertion with $\alpha_n^{(M)}$ replaced by
$$\alpha_n^{(M,R)}=\inf\big\{t\geq 0: n\exp(L_n(t))\notin\{M+1,\ldots,R\}\big\}.$$
Observe that $L_n$ stopped at time $\alpha_n^{(M,R)}$ is a Feller process with values in $-\ln n+\ln\mathbb{N}$, and it follows from (10) that its infinitesimal generator, say $G$, is given by
$$Gf(x)=a_{ne^x}\int_{-\infty}^{\infty}\big(f(x+y)-f(x)\big)\,\Pi_{ne^x}^*(\mathrm{d}y)$$
for every $x$ such that $ne^x\in\{M+1,\ldots,R\}$. Applying this for $f(y)=\exp(\beta y)$, we get from Lemma 4.2 (i) that $Gf(x)\leq-cf(x)$, which entails that
$$f\Big(L_n\big(t\wedge\alpha_n^{(M,R)}\big)\Big)\exp\Big(c\big(t\wedge\alpha_n^{(M,R)}\big)\Big),\qquad t\geq 0,$$
is a supermartingale. To conclude, let $R\to\infty$, recall that $L_n$ does not explode, and apply the (conditional) Fatou lemma.

We now establish two useful lemmas based on the Foster–Lyapounov estimates of Lemma 4.1. The first one is classical and states that if the Lévy process $\xi$ drifts to $-\infty$ and its Lévy measure $\Pi$ has finite exponential moments, then its overall supremum has an exponentially small tail. The second, which is the discrete counterpart of the first, states that if the Markov chain $X_i$ starts from a low value $i$, then $X_i$ is unlikely to reach a high value without entering $\{1,2,\ldots,K\}$ first.

Lemma 4.5.
Assume that the Lévy process $\xi$ drifts to $-\infty$ and that its Lévy measure fulfills the integrability condition $\int_1^{\infty}e^{\beta x}\,\Pi(\mathrm{d}x)<\infty$ for some $\beta>0$. Then there exists $\beta_0>0$ sufficiently small with $\Psi(\beta_0)<0$, and for every $u\geq 0$ we have
$$\mathbb{P}\Big(\sup_{s\geq 0}\xi(s)>u\Big)\leq e^{-\beta_0 u}.$$

Proof. The assumption on the Lévy measure ensures that the Laplace exponent $\Psi$ of $\xi$ is well-defined and finite on $[0,\beta]$. Because $\xi$ drifts to $-\infty$, the right-derivative $\Psi'(0+)$ of the convex function $\Psi$ must be strictly negative (possibly $\Psi'(0+)=-\infty$), and therefore we can find $\beta_0>0$ with $\Psi(\beta_0)<0$. Then the process $(e^{\beta_0\xi(s)},\ s\geq 0)$ is a nonnegative supermartingale and our claim follows from the optional stopping theorem applied at the first passage time above level $u$.

We now prove an analogous statement for the discrete Markov chain $X_n$, tailored for future use:

Lemma 4.6.
Assume that (A1), (A2), (A3) hold and that the Lévy process $\xi$ drifts to $-\infty$. Fix $\epsilon\in(0,1)$. For every $n$ sufficiently large and every $1\leq i\leq\epsilon^2 n$, we have
$$\mathbb{P}\big(X_i \text{ reaches } [\epsilon n,\infty) \text{ before } [K]\big)\leq 2\epsilon^{\beta_0}.$$

Proof. We first check that there exists an integer $M\geq K$ such that for every $1\leq i\leq N$,
$$\mathbb{P}\big(X_i \text{ reaches } [N,\infty) \text{ before } [M]\big)\leq(i/N)^{\beta_0}.\qquad(20)$$
Indeed, by Lemma 4.1 (iii), there exists $M\geq K$ such that $M_i(\cdot)=X_i^{\beta_0}(\cdot\wedge A_i^{(M)})$ is a positive supermartingale. Hence, setting $B_i^{(N)}=\inf\{j\geq 0: X_i(j)\geq N\}$, by the optional stopping theorem we get that
$$i^{\beta_0}\geq\mathbb{E}\Big[X_i^{\beta_0}\big(A_i^{(M)}\wedge B_i^{(N)}\big)\Big]\geq\mathbb{E}\Big[X_i^{\beta_0}\big(B_i^{(N)}\big)\,\mathbb{1}_{\{A_i^{(M)}>B_i^{(N)}\}}\Big]\geq N^{\beta_0}\,\mathbb{P}\Big(A_i^{(M)}>B_i^{(N)}\Big).$$
This establishes (20).

We now turn to the proof of the main statement. By the Markov property, write
$$\mathbb{P}\big(X_i \text{ reaches } [\epsilon n,\infty) \text{ before } [K]\big)\leq\mathbb{P}\big(X_i \text{ reaches } [\epsilon n,\infty) \text{ before } [M]\big)+\sum_{j=K+1}^{M}\mathbb{P}\big(X_j \text{ reaches } [\epsilon n,\infty) \text{ before } [K]\big).$$
By (20) applied with $N=\epsilon n$, the first term of the latter sum is bounded by $\epsilon^{\beta_0}$ when $i\leq\epsilon^2 n$. In addition, for every fixed $K+1\leq j\leq M$, since $\{1,2,\ldots,K\}$ is accessible by $X_j$ by the definition of $K$, it is clear that $\mathbb{P}(X_j \text{ reaches } [\epsilon n,\infty) \text{ before } [K])\to 0$ as $n\to\infty$. The conclusion follows.

Recall that $X_n^{\dagger}$ denotes the Markov chain $X_n$ stopped when it hits $\{1,2,\ldots,K\}$. As in the non-absorbed case, Theorem 2 will follow if we manage to establish that
$$\left(\frac{X_n^{\dagger}(N_n(t))}{n};\ t\geq 0\right)\ \xrightarrow[n\to\infty]{(d)}\ (Y_t;\ t\geq 0)\qquad(21)$$
in $D(\mathbb{R}_+,\mathbb{R})$.

We now need to introduce some additional notation. Fix $M\geq 1$ and set $a_i^{(M)}=a_i$ for $i>M$ and $a_i^{(M)}=0$ for $1\leq i\leq M$. Denote by $L_n^{(M)}$ the Markov chain with generator (10) when the sequence $(a_n)_{n\geq 1}$ is replaced with the sequence $(a_n^{(M)})_{n\geq 1}$. In other words, $L_n^{(M)}$ may be seen as $L_n$ absorbed as soon as it hits $\{\ln(1/n),\ln(2/n),\ldots,\ln(M/n)\}$. Proposition 3.1 (applied with the sequence $(a_n^{(M)})$ instead of $(a_n)$) shows that, under (A1) and (A2), $L_n^{(M)}$, started from any $x\in\mathbb{R}$, converges in distribution in $D(\mathbb{R}_+,\mathbb{R})$ to $\xi+x$. In addition, if $L_n^{(M)}(0)=0$ and $X_n^{(M)}$ denotes the process $X_n$ absorbed as soon as it hits $\{1,2,\ldots,M\}$, Lemma 3.2 (applied with $(a_n^{(M)})$ instead of $(a_n)$) entails that
$$\left(\frac{1}{n}X_n^{(M)}(N_n(t));\ t\geq 0\right)\ \stackrel{(d)}{=}\ \Big(\exp\big(L_n^{(M)}(\tau_n^{(M)}(t))\big);\ t\geq 0\Big),\qquad(22)$$
where
$$\tau_n^{(M)}(t)=\inf\left\{u\geq 0:\int_0^u\frac{a^{(M)}_{n\exp(L_n^{(M)}(s))}}{a_n^{(M)}}\,\mathrm{d}s>t\right\},\qquad t\geq 0.$$
In particular, taking $M=K$,
$$\left(\frac{1}{n}X_n^{\dagger}(N_n(t));\ t\geq 0\right)\ \stackrel{(d)}{=}\ \Big(\exp\big(L_n^{(K)}(\tau_n^{(K)}(t))\big);\ t\geq 0\Big).\qquad(23)$$
In the sequel we take $L_n^{(M)}(0)=0$, and we denote by $d_{\mathrm{SK}}$ the Skorokhod $J_1$ distance on $D(\mathbb{R}_+,\mathbb{R})$. In the proof of Theorem 2, we will use the following simple property of $d_{\mathrm{SK}}$:

Lemma 4.7.
Fix $\epsilon>0$ and $f\in D(\mathbb{R}_+,\mathbb{R})$ that has a limit at $+\infty$. Let $\sigma:\mathbb{R}_+\to\mathbb{R}_+\cup\{+\infty\}$ be a right-continuous non-decreasing function. For $T\geq 0$, let $f^{[T]}\in D(\mathbb{R}_+,\mathbb{R})$ be the function defined by $f^{[T]}(t)=f(\sigma(t)\wedge T)$ for $t\geq 0$. Finally, assume that there exists $T>0$ such that $|f(t)|<\epsilon$ for every $t\geq T$. Then
$$d_{\mathrm{SK}}\big(f\circ\sigma,\ f^{[T]}\big)\leq 2\epsilon.$$
This is a simple consequence of the definition of the Skorokhod distance. We are now ready to complete the proof of Theorem 2.
Proof of Theorem 2.
By (23), it suffices to check that
$$\Big(\exp\big(L_n^{(K)}(\tau_n^{(K)}(t))\big);\ t\geq 0\Big)\ \xrightarrow[n\to\infty]{(d)}\ (Y(t);\ t\geq 0)\qquad(24)$$
in $D(\mathbb{R}_+,\mathbb{R})$. To simplify notation, for $n\geq 1$ and $t\geq 0$, set
$$Y_n^{\dagger}(t)=\exp\big(L_n^{(K)}(\tau_n^{(K)}(t))\big),$$
and, for every $t_0>0$,
$$Y_n^{t_0}(t)=\exp\big(L_n^{(K)}(\tau_n^{(K)}(t)\wedge t_0)\big),\qquad Y^{t_0}(t)=\exp\big(\xi(\tau(t)\wedge t_0)\big),$$
and recall that $Y(t)=\exp(\xi(\tau(t)))$. First observe that for every fixed $t_0>0$,
$$Y_n^{t_0}\ \xrightarrow[n\to\infty]{(d)}\ Y^{t_0}\qquad(25)$$
in $D(\mathbb{R}_+,\mathbb{R})$. Indeed, since $L_n^{(K)}\to\xi$ in distribution in $D(\mathbb{R}_+,\mathbb{R})$, the same arguments as in Section 4.1 apply and give that $\tau_n^{(K)}(\cdot)\wedge t_0\to\tau(\cdot)\wedge t_0$ in distribution in $C(\mathbb{R}_+,\mathbb{R})$.

We now claim that for every $\eta\in(0,1)$, there exists $t_0>0$ such that for every $n$ sufficiently large,
$$\mathbb{P}\big(d_{\mathrm{SK}}\big(Y,Y^{t_0}\big)>2\eta\big)<3\eta^{\beta_0},\qquad \mathbb{P}\big(d_{\mathrm{SK}}\big(Y_n^{\dagger},Y_n^{t_0}\big)>2\eta\big)<3\eta^{\beta_0}.\qquad(26)$$
Assume for the moment that (26) holds and let us see how to finish the proof of (24). Let $F:D(\mathbb{R}_+,\mathbb{R})\to\mathbb{R}_+$ be a bounded uniformly continuous function. By [8, Theorem 2.1], it is enough to check that $\mathbb{E}[F(Y_n^{\dagger})]\to\mathbb{E}[F(Y)]$ as $n\to\infty$. Fix $\epsilon\in(0,1)$ and let $\eta>0$ be such that $|F(f)-F(g)|\leq\epsilon$ whenever $d_{\mathrm{SK}}(f,g)\leq 2\eta$. We shall further impose that $3\eta^{\beta_0}<\epsilon$. By (26), we may choose $t_0>0$ such that the events
$$\Lambda=\big\{d_{\mathrm{SK}}\big(Y,Y^{t_0}\big)\leq 2\eta\big\},\qquad \Lambda_n=\big\{d_{\mathrm{SK}}\big(Y_n^{\dagger},Y_n^{t_0}\big)\leq 2\eta\big\}$$
are both of probability at least $1-3\eta^{\beta_0}\geq 1-\epsilon$ for every $n$ sufficiently large. Then write, for $n$ sufficiently large,
$$\big|\mathbb{E}[F(Y)]-\mathbb{E}[F(Y_n^{\dagger})]\big|\leq\big|\mathbb{E}[F(Y)\mathbb{1}_{\Lambda}]-\mathbb{E}[F(Y_n^{\dagger})\mathbb{1}_{\Lambda_n}]\big|+2\epsilon\|F\|_{\infty}\leq\big|\mathbb{E}[F(Y^{t_0})\mathbb{1}_{\Lambda}]-\mathbb{E}[F(Y_n^{t_0})\mathbb{1}_{\Lambda_n}]\big|+2\epsilon+2\epsilon\|F\|_{\infty}\leq\big|\mathbb{E}[F(Y^{t_0})]-\mathbb{E}[F(Y_n^{t_0})]\big|+2\epsilon+4\epsilon\|F\|_{\infty}.$$
By (25), the quantity $|\mathbb{E}[F(Y^{t_0})]-\mathbb{E}[F(Y_n^{t_0})]|$ tends to $0$ as $n\to\infty$. As a consequence, $|\mathbb{E}[F(Y)]-\mathbb{E}[F(Y_n^{\dagger})]|\leq 3\epsilon+4\epsilon\|F\|_{\infty}$ for every $n$ sufficiently large.

We finally need to establish (26). For the first inequality, since $\xi$ drifts to $-\infty$, we may choose $t_0>0$ such that $\mathbb{P}(\xi(t_0)<2\ln(\eta))>1-\eta^{\beta_0}$. By Lemma 4.5 and the Markov property,
$$\mathbb{P}\Big(\sup_{s\geq t_0}e^{\xi(s)-\xi(t_0)}>1/\eta\Big)\leq\eta^{\beta_0}.$$
The event $\{\sup_{s\geq t_0}e^{\xi(s)}\leq\eta\}$ thus has probability at least $1-2\eta^{\beta_0}$, and on this event we have $d_{\mathrm{SK}}(Y,Y^{t_0})\leq 2\eta$ by Lemma 4.7. This establishes the first inequality of (26).

For the second one, note that since $L_n^{(K)}$ converges in distribution to $\xi$, there exists $t_0>0$ such that $\mathbb{P}\big(\exp(L_n^{(K)}(t_0))>\eta^2\big)<\eta^{\beta_0}$ for every $n$ sufficiently large. But on the event
$$\big\{\exp(L_n^{(K)}(t_0))\leq\eta^2\big\}\cap\big\{\text{after time } t_0,\ n\exp(L_n) \text{ reaches } [K] \text{ before } [\eta n,\infty)\big\},$$
which has probability at least $1-3\eta^{\beta_0}$ by Lemma 4.6 (recall also the identity (23)), we have the inequality $d_{\mathrm{SK}}(Y_n^{\dagger},Y_n^{t_0})\leq 2\eta$ by Lemma 4.7. This establishes (26) and completes the proof of Theorem 2.

We start with several preliminary remarks in view of proving Theorem 3. First, we point out that our statements in Section 2 are unchanged if we replace the sequence $(a_n)$ by another sequence, say $(a'_n)$, such that $a_n/a'_n\to 1$ as $n\to\infty$. Thanks to Theorem 1.3.3 and Theorem 1.9.5 (ii) in [9], we may therefore assume that there exists an infinitely differentiable function $h:\mathbb{R}_+\to\mathbb{R}$ such that
$$(i)\ \text{for every } n\geq 1,\ \ a_n=n^{\gamma}\cdot e^{h(\ln(n))};\qquad (ii)\ \text{for every } k\geq 1,\ \ h^{(k)}(x)\xrightarrow[x\to\infty]{}0,\qquad(27)$$
where $h^{(k)}$ denotes the $k$-th derivative of $h$. This will be used in the proof of Lemma 4.9 below.

Assume that (A1), (A2), (A3) hold and that $\xi$ drifts to $-\infty$. For every integer $M\geq 1$, recall from Section 4.3 the notation $X_n^{(M)}$, $(a_n^{(M)})$ and $L_n^{(M)}$, and the initial condition $L_n^{(M)}(0)=0$. To simplify the notation, we set $\widetilde a_n=a_n^{(M)}$ for $n\geq 1$ and $\widetilde L_n(s)=L_n^{(M)}(s)$. By (22), we may and will assume that the identity
$$\frac{1}{n}X_n^{(M)}(N_n(t))=\exp\big(L_n^{(M)}(\tau_n^{(M)}(t))\big)$$
holds for all $t\geq 0$, where $N_n$ is a Poisson process with intensity $a_n$ independent of $X_n$ and the time change $\tau_n^{(M)}$ is defined by (12) with $\widetilde a_n=a_n^{(M)}$ replacing $a_n$.

For $n>M$, let $A_n^{(M)}=\inf\{i\geq 0: X_n(i)\leq M\}$ be the absorption time of $X_n^{(M)}$ and $\alpha_n^{(M)}=\inf\{t\geq 0: X_n(N_n(t))\leq M\}$ that of $X_n^{(M)}(N_n(\cdot))$, so that there are the identities
$$\alpha_n^{(M)}=\int_0^{\infty}\frac{\widetilde a_{n\exp(\widetilde L_n(s))}}{\widetilde a_n}\,\mathrm{d}s=\int_0^{\varrho_n^{(M)}}\frac{a_{n\exp(L_n(s))}}{a_n}\,\mathrm{d}s\qquad\text{and}\qquad N_n\big(\alpha_n^{(M)}\big)=A_n^{(M)}\qquad(28)$$
for every $n>M$, where $\varrho_n^{(M)}=\inf\{s\geq 0: n\exp(L_n(s))\leq M\}$ denotes the first passage time of $L_n$ appearing in Corollary 4.4. We shall first establish a weaker version of Theorem 3 (i) in which $K$ has been replaced by $M$:

Lemma 4.8.
Assume that (A1), (A2), (A3) hold and that $\xi$ drifts to $-\infty$. Then the following weak convergences hold jointly in $D(\mathbb{R}_+,\mathbb{R})\otimes\mathbb{R}$:
$$\widetilde L_n\ \xrightarrow[n\to\infty]{(d)}\ \xi\qquad\text{and}\qquad\alpha_n^{(M)}\ \xrightarrow[n\to\infty]{(d)}\ \int_0^{\infty}e^{\gamma\xi(s)}\,\mathrm{d}s.$$
In turn, in order to establish Lemma 4.8, we shall need the following technical result:

Lemma 4.9.
Assume that (A1), (A2), (A3) and (27) hold, and that $\xi$ drifts to $-\infty$. There exist $\beta>0$, $M\geq 1$ and $C>0$ such that for every $n\geq M$,
$$\sum_{k\geq 1}\big(a_k^{\beta}-a_n^{\beta}\big)\cdot p_{n,k}\leq-C\cdot a_n^{\beta-1}.\qquad(29)$$
If $a_n=c\cdot n^{\gamma}$ for every $n$ sufficiently large, for a certain $c>0$, observe that this is a simple consequence of (17) applied with $\beta_0=\beta\gamma$. Note also that (29) then clearly holds when $(a_n)$ is replaced with $(\widetilde a_n)$. We postpone its proof in the general case to the end of this section.

Proof of Lemma 4.8.
The first convergence has been established in the proof of Proposition 3.1. Using Skorokhod's representation theorem, we may assume that it in fact holds almost surely on $D(\mathbb{R}_+,\mathbb{R})$, and we shall now check that this entails the second. To this end, note first that for every $R\geq 0$,
$$\int_0^R\frac{\widetilde a_{n\exp(\widetilde L_n(s))}}{\widetilde a_n}\,\mathrm{d}s\ \xrightarrow[n\to\infty]{\mathrm{a.s.}}\ \int_0^R e^{\gamma\xi(s)}\,\mathrm{d}s,$$
since the sequence $(a_n)$ varies regularly with index $\gamma$. It is therefore enough to check that for every $\epsilon>0$ and $t>0$, we may find $R$ sufficiently large so that
$$\limsup_{n\to\infty}\,\mathbb{P}\left(\int_R^{\infty}\frac{\widetilde a_{n\exp(\widetilde L_n(s))}}{\widetilde a_n}\,\mathrm{d}s>t\right)\leq\epsilon\qquad\text{and}\qquad\mathbb{P}\left(\int_R^{\infty}e^{\gamma\xi(s)}\,\mathrm{d}s>t\right)\leq\epsilon.\qquad(30)$$
The second inequality is obvious for $R$ large, since $\int_0^{\infty}e^{\gamma\xi(s)}\,\mathrm{d}s$ is almost surely finite.

To establish the first inequality in (30), we start with some preliminary observations. By the Potter bounds (see [9, Theorem 1.5.6]), there exists a constant $C_1>0$ such that $\widetilde a_i/\widetilde a_n\leq C_1(i/n)^{\gamma/2}$ for every $1\leq i\leq n$. Fix $\eta>0$ such that $2^{\beta+1}C_2C_1^{\beta}\eta^{\beta\gamma/2}/t^{\beta}<\epsilon$, where $C_2$ is a positive constant (independent of $\eta$ and $\epsilon$) which will be chosen later on. Then pick $R$ sufficiently large so that $\mathbb{P}\big(\exp(\widetilde L_n(R))>\eta\big)<\epsilon/2$ for every $n$ sufficiently large (this is possible since $\widetilde L_n$ converges to $\xi$ and the latter drifts to $-\infty$). By the Markov property and (28), for every $i\geq 1$, the conditional law of
$$\int_R^{\infty}\frac{\widetilde a_{n\exp(\widetilde L_n(s))}}{\widetilde a_n}\,\mathrm{d}s\qquad\text{given}\qquad n\exp(\widetilde L_n(R))=i,$$
is that of $(\widetilde a_i/\widetilde a_n)\,\alpha_i^{(M)}$. It follows from (28) and elementary estimates for Poisson processes that it suffices to check that
$$\limsup_{n\to\infty}\,\max_{M+1\leq i\leq\eta n}\,\mathbb{P}\Big(A_i^{(M)}>t\,\widetilde a_n/2\Big)\leq\epsilon/2.\qquad(31)$$
To this end, for every $i\geq M+1$ and $n\geq 1$, we use Markov's inequality and get
$$\mathbb{P}\Big(A_i^{(M)}>t\,\widetilde a_n/2\Big)\leq\frac{2^{\beta}}{t^{\beta}\,\widetilde a_n^{\beta}}\,\mathbb{E}\Big[\big(A_i^{(M)}\big)^{\beta}\Big].\qquad(32)$$
We then apply Theorem 2' in [2] with
$$f(x)=x^{\beta},\qquad h(x)=\widetilde a_x^{\beta},\qquad g(x)=\widetilde a_x^{\beta-1},$$
which tells us that there exists a constant $C_2>0$ such that $\mathbb{E}[f(A_i^{(M)})]\leq C_2\cdot h(i)$ for every $i\geq M+1$, provided that there exists $C_3>0$ such that $\mathbb{E}[h(X_n(1))-h(n)]\leq-C_3\cdot g(n)$ for every $n\geq M$ (which is granted by Lemma 4.9), and that $\liminf_{n\to\infty}g(n)/(f'\circ f^{-1}\circ h(n))>0$ (indeed, $g(n)/(f'\circ f^{-1}\circ h(n))=1/\beta$). By (32), we therefore get that for every $i\geq M+1$ and $n\geq 1$,
$$\mathbb{P}\Big(A_i^{(M)}>t\,\widetilde a_n/2\Big)\leq\frac{2^{\beta}C_2}{t^{\beta}}\left(\frac{\widetilde a_i}{\widetilde a_n}\right)^{\beta}.$$
As a consequence of the aforementioned Potter bounds, for every $M+1\leq i\leq\eta n$,
$$\mathbb{P}\Big(A_i^{(M)}>t\,\widetilde a_n/2\Big)\leq\frac{2^{\beta}C_2C_1^{\beta}}{t^{\beta}}\cdot\eta^{\beta\gamma/2}<\epsilon/2,$$
which establishes (31) and completes the proof.

Proof of Theorem 3. (i) Fix an integer $M\geq K$ as in the preceding discussion. It suffices to check that (5) holds with $A_n^{(K)}$ replaced by $A_n^{(M)}$. Indeed, since $X_n^{(M)}$ and $X_n^{\dagger}$ may be coupled in such a way that they coincide until the first time $X_n$ hits $\{1,2,\ldots,M\}$, for every $a>0$,
$$\mathbb{P}\Big(\big|A_n^{(K)}-A_n^{(M)}\big|>a\Big)\leq\max_{K+1\leq i\leq M}\mathbb{P}\Big(A_i^{(K)}>a\Big),$$
which tends to $0$ as $a\to\infty$ by Lemma 4.1 (iv). In turn, as before, since $(N_n(t)/a_n;\ t\geq 0)$ converges in probability to the identity uniformly on compact sets as $n\to\infty$, it is enough to check that the convergence
$$\frac{A_n^{(M)}}{a_n}\ \xrightarrow[n\to\infty]{(d)}\ \int_0^{\infty}e^{\gamma\xi(s)}\,\mathrm{d}s$$
holds jointly with (21). By the preceding discussion and (28), we can complete the proof with an appeal to Lemma 4.8.

(ii) Again, it suffices to check that (7) holds with $A_n^{(K)}$ replaced by $A_n^{(M)}$. Indeed, we see from the Markov property that
$$\mathbb{E}\Big[\big|A_n^{(K)}-A_n^{(M)}\big|\Big]\leq\max_{K+1\leq i\leq M}\mathbb{E}\Big[A_i^{(K)}\Big],$$
and the right-hand side is finite by Lemma 4.2 (ii). Recall that $N_n$ is a Poisson process with intensity $a_n$, so by (28) we have, for $n>M$,
$$\frac{1}{a_n}\,\mathbb{E}\Big[A_n^{(M)}\Big]=\mathbb{E}\Big[\alpha_n^{(M)}\Big]=\mathbb{E}\left[\int_0^{\varrho_n^{(M)}}\frac{a_{n\exp(L_n(s))}}{a_n}\,\mathrm{d}s\right],$$
where $\varrho_n^{(M)}=\inf\{s\geq 0:n\exp(L_n(s))\leq M\}$ is the first passage time of $L_n$ appearing in Corollary 4.4, and we thus have to check that
$$\int_0^{\infty}\mathbb{E}\left[\frac{a_{n\exp(L_n(s))}}{a_n}\,\mathbb{1}_{\{s<\varrho_n^{(M)}\}}\right]\mathrm{d}s\ \xrightarrow[n\to\infty]{}\ \int_0^{\infty}\mathbb{E}\big[e^{\gamma\xi(s)}\big]\,\mathrm{d}s=\frac{1}{|\Psi(\gamma)|}.\qquad(33)$$
In this direction, take any $\beta_2\in(\gamma,\beta)$, and recall from the Potter bounds [9, Theorem 1.5.6] that there is some constant $C>0$ such that $a_{nx}/a_n\leq C\cdot x^{\beta_2}$ for every $n\in\mathbb{N}$ and $x\geq 1$ with $nx\in\mathbb{Z}_+$. We deduce that
$$\mathbb{E}\left[\left(\frac{a_{n\exp(L_n(s))}}{a_n}\right)^{\beta/\beta_2}\mathbb{1}_{\{s<\varrho_n^{(M)}\}}\right]\leq C^{\beta/\beta_2}\cdot\mathbb{E}\Big[\exp\big(\beta L_n(s)\big)\,\mathbb{1}_{\{s<\varrho_n^{(M)}\}}\Big]\leq C'\cdot e^{-cs},$$
where $c,C'$ are positive finite constants, and the last inequality stems from Corollary 4.4. Then recall that $a_n^{-1}a_{n\exp(L_n(s))}\mathbb{1}_{\{s<\varrho_n^{(M)}\}}$ converges in distribution to $\exp(\gamma\xi(s))$ for every $s\geq 0$. An argument of uniform integrability now shows that (33) holds, and this completes the proof.
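The constant $1/|\Psi(\gamma)|$ appearing in (33) is exactly the expectation of the exponential functional $\int_0^\infty e^{\gamma\xi(s)}\,\mathrm{d}s$. This identity can be checked by Monte Carlo for a concrete, illustrative choice of Lévy process (our own choice, not one from the text): $\xi(s)=-N(s)$ with $N$ a standard Poisson process, for which $\Psi(\lambda)=e^{-\lambda}-1<0$.

```python
# Monte Carlo sanity check of E[ int_0^infty exp(gamma*xi(s)) ds ] = 1/|Psi(gamma)|
# for the illustrative choice xi(s) = -N(s), N a rate-1 Poisson process, so that
# Psi(lambda) = exp(-lambda) - 1.  (The process and parameters are assumptions
# made for this example only.)
import math
import random

random.seed(1)
GAMMA = 1.0
TARGET = 1.0 / (1.0 - math.exp(-GAMMA))  # 1 / |Psi(GAMMA)|

def exp_functional(gamma, jumps=40):
    """One sample of int_0^infty exp(gamma * xi(s)) ds for xi = -Poisson.

    Between the k-th and (k+1)-st jump the integrand equals exp(-gamma*k) and
    the holding time is Exp(1); terms beyond `jumps` are negligible here."""
    return sum(math.exp(-gamma * k) * random.expovariate(1.0)
               for k in range(jumps))

n_samples = 50_000
estimate = sum(exp_functional(GAMMA) for _ in range(n_samples)) / n_samples
print(estimate, TARGET)  # the two values agree up to Monte Carlo error
```

Indeed, by Fubini, $\mathbb{E}\int_0^\infty e^{-\gamma N(s)}\,\mathrm{d}s=\int_0^\infty e^{s(e^{-\gamma}-1)}\,\mathrm{d}s=1/(1-e^{-\gamma})$, which is what the simulation recovers.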
Remark 4.10.
The argument of the proof above shows that, more precisely, for every $1\leq p<\beta/\gamma$ we have
$$\mathbb{E}\left[\left(\frac{A_n^{(M)}}{a_n}\right)^{p}\right]\ \xrightarrow[n\to\infty]{}\ \mathbb{E}\left[\left(\int_0^{\infty}e^{\gamma\xi(s)}\,\mathrm{d}s\right)^{p}\right].$$

Remark 4.11. Assume that (A1), (A2) and (A4) hold. Let $1\leq m\leq K$ be an integer. Then
$$\mathbb{E}\Big[A_m^{(K)}\Big]=\infty\qquad\Longleftrightarrow\qquad\sum_{k\geq 1}a_k\cdot p_{m,k}=\infty.$$
Indeed, by the Markov property applied at time $1$, write
$$\mathbb{E}\Big[A_m^{(K)}\Big]=1+\sum_{k\geq K+1}\mathbb{E}\Big[A_k^{(K)}\Big]\,p_{m,k}.$$
By Lemma 4.2 (ii), $\mathbb{E}[A_k^{(K)}]<\infty$ for every $k\geq K+1$, and by Theorem 3 (ii), $\mathbb{E}[A_k^{(K)}]/a_k$ converges to a positive real number as $k\to\infty$. Therefore, there exists a constant $C>0$ such that $a_k/C\leq\mathbb{E}[A_k^{(K)}]\leq C\cdot a_k$ for every $k\geq K+1$. As a consequence,
$$\frac{1}{C}\Big(\mathbb{E}\big[A_m^{(K)}\big]-1\Big)=\frac{1}{C}\sum_{k\geq K+1}\mathbb{E}\big[A_k^{(K)}\big]\,p_{m,k}\leq\sum_{k\geq K+1}a_k\cdot p_{m,k}\leq C\sum_{k\geq K+1}\mathbb{E}\big[A_k^{(K)}\big]\,p_{m,k}=C\Big(\mathbb{E}\big[A_m^{(K)}\big]-1\Big).$$
The conclusion follows.

We conclude this section with the proof of Lemma 4.9.

Proof of Lemma 4.9.
By Lemma 4.1, there exists $\beta_0>0$ such that $\Psi(\beta_0)<0$. Fix $\beta\in\big(0,\beta_0\wedge(\beta_0/\gamma)\big)$ and note that $\Psi(\beta\gamma)<0$, by convexity of $\Psi$. We shall show that
$$a_n^{1-\beta}\sum_{k\geq 1}\big(a_k^{\beta}-a_n^{\beta}\big)\cdot p_{n,k}\ \xrightarrow[n\to\infty]{}\ \Psi(\beta\gamma).\qquad(34)$$
To this end, write
$$a_n^{1-\beta}\sum_{k\geq 1}\big(a_k^{\beta}-a_n^{\beta}\big)\cdot p_{n,k}=a_n\int_{-\infty}^{\infty}\big(e^{\beta\gamma x}-1\big)\,\Pi_n^*(\mathrm{d}x)+a_n\int_{-\infty}^{\infty}\left(\left(\frac{a_{ne^x}}{a_n}\right)^{\beta}-e^{\beta\gamma x}\right)\Pi_n^*(\mathrm{d}x).$$
By Lemma 4.1 (ii), the result will follow if we prove that
$$a_n\int_{x\geq 1}\left(\left(\frac{a_{ne^x}}{a_n}\right)^{\beta}-e^{\beta\gamma x}\right)\Pi_n^*(\mathrm{d}x)\ \xrightarrow[n\to\infty]{}\ 0,\qquad a_n\int_{x\leq-1}\left(\left(\frac{a_{ne^x}}{a_n}\right)^{\beta}-e^{\beta\gamma x}\right)\Pi_n^*(\mathrm{d}x)\ \xrightarrow[n\to\infty]{}\ 0,\qquad(35)$$
together with the analogous convergence (37) below on $(-1,1)$. Since $(a_n)$ is regularly varying with index $\gamma$, $(a_{ne^x}/a_n)^{\beta}$ converges to $e^{\beta\gamma x}$ as $n\to\infty$, uniformly in $x\leq-1$. By (A1) and (1), this readily implies the second convergence of (35). For the first one, a similar argument shows that the convergence of $(a_{ne^x}/a_n)^{\beta}$ to $e^{\beta\gamma x}$ as $n\to\infty$ holds uniformly in $x\in[1,A]$, for every fixed $A>1$. Therefore, if $\eta>0$ is fixed, it is enough to find $A>1$ such that
$$\limsup_{n\to\infty}\,a_n\int_A^{\infty}\left|\left(\frac{a_{ne^x}}{a_n}\right)^{\beta}-e^{\beta\gamma x}\right|\Pi_n^*(\mathrm{d}x)\leq\eta.\qquad(36)$$
To this end, fix $\epsilon>0$ such that $\beta(\gamma+\epsilon)<\beta_0$. By the Potter bounds, there exists a constant $C>0$ such that for every $x\geq 0$ and $n\geq 1$,
$$\left|\left(\frac{a_{ne^x}}{a_n}\right)^{\beta}-e^{\beta\gamma x}\right|\leq Ce^{\beta(\gamma+\epsilon)x}+e^{\beta\gamma x}.$$
Since $\int_1^{\infty}e^{\beta(\gamma+\epsilon)x}\,\Pi(\mathrm{d}x)<\infty$ and $\int_1^{\infty}e^{\beta\gamma x}\,\Pi(\mathrm{d}x)<\infty$ by our choice of $\beta$ and $\epsilon$, we may choose $A>1$ such that
$$C\int_A^{\infty}e^{\beta(\gamma+\epsilon)x}\,\Pi(\mathrm{d}x)+\int_A^{\infty}e^{\beta\gamma x}\,\Pi(\mathrm{d}x)<\eta.$$
Hence
$$\limsup_{n\to\infty}\,a_n\int_A^{\infty}\left|\left(\frac{a_{ne^x}}{a_n}\right)^{\beta}-e^{\beta\gamma x}\right|\Pi_n^*(\mathrm{d}x)\leq C\int_A^{\infty}e^{\beta(\gamma+\epsilon)x}\,\Pi(\mathrm{d}x)+\int_A^{\infty}e^{\beta\gamma x}\,\Pi(\mathrm{d}x)<\eta.$$
This establishes (36) and completes the proof of (35).

We now show that
$$a_n\int_{-1}^{1}\left(\left(\frac{a_{ne^x}}{a_n}\right)^{\beta}-e^{\beta\gamma x}\right)\Pi_n^*(\mathrm{d}x)\ \xrightarrow[n\to\infty]{}\ 0.\qquad(37)$$
By (i) in (27), we have
$$\left(\frac{a_{ne^x}}{a_n}\right)^{\beta}-e^{\beta\gamma x}=e^{\beta\gamma x}\Big(e^{\beta h(\ln(n)+x)-\beta h(\ln(n))}-1\Big).$$
For every $n\geq 3$ and $x\in(-1,1)$, an application of the Taylor–Lagrange formula yields the existence of a real number $u_n(x)\in(\ln(n)-1,\ln(n)+1)$ such that
$$h(\ln(n)+x)=h(\ln(n))+x\,h^{(1)}(\ln(n))+\frac{x^2}{2}\,h^{(2)}(u_n(x)),$$
where $h^{(k)}$ denotes the $k$-th derivative of $h$. Recalling (ii) in (27), we can write
$$e^{\beta\gamma x}\Big(e^{\beta h(\ln(n)+x)-\beta h(\ln(n))}-1\Big)=\beta x\,h^{(1)}(\ln(n))+x^2 g_n(x),$$
where $g_n(x)\to 0$ as $n\to\infty$, uniformly in $x\in(-1,1)$. Also note that $h^{(1)}(\ln(n))\to 0$ as $n\to\infty$. Now,
$$a_n\int_{-1}^{1}\left(\left(\frac{a_{ne^x}}{a_n}\right)^{\beta}-e^{\beta\gamma x}\right)\Pi_n^*(\mathrm{d}x)=\beta h^{(1)}(\ln(n))\cdot a_n\int_{-1}^{1}x\,\Pi_n^*(\mathrm{d}x)+a_n\int_{-1}^{1}x^2 g_n(x)\,\Pi_n^*(\mathrm{d}x).\qquad(38)$$
By (A2) and the preceding observations, the sum appearing in (38) tends to $0$ as $n\to\infty$. This completes the proof.

Here we establish Theorem 4.
Proof of Theorem 4.
By Theorem 2, Lemma 4.7 and the strong Markov property, it is enough to show that for every fixed $t_0>0$, $\epsilon>0$ and $1\leq i\leq K$, we have
$$\mathbb{P}\left(\sup_{0\leq t\leq t_0}X_i(\lfloor a_nt\rfloor)\geq\epsilon n\right)=\mathbb{P}\left(\sup_{0\leq k\leq\lfloor a_nt_0\rfloor}X_i(k)\geq\epsilon n\right)\ \xrightarrow[n\to\infty]{}\ 0.\qquad(39)$$
To this end, fix $1\leq i\leq K$, and introduce the successive return times to $\{1,2,\ldots,K\}$ by $X_i$:
$$T^{(1)}=A_i^{(K)}=\inf\{j>0: X_i(j)\leq K\},$$
and, recursively, for $k\geq 2$,
$$T^{(k)}=\inf\{j>T^{(k-1)}: X_i(j)\leq K\}.$$
Plainly, $T^{(k)}\geq k$, and we see from the strong Markov property that (39) will follow if we manage to check that, for every $1\leq i\leq K$,
$$a_n\cdot\mathbb{P}\left(\sup_{0\leq j\leq T^{(1)}}X_i(j)\geq\epsilon n\right)\ \xrightarrow[n\to\infty]{}\ 0.\qquad(40)$$
To this end, introduce $\tau_n=\inf\{j\geq 0: X_i(j)>\epsilon n\}\wedge T^{(1)}$ and note that $\mathbb{E}[\tau_n]\to\mathbb{E}[T^{(1)}]$ as $n\to\infty$ by monotone convergence, since $\{1,2,\ldots,K\}$ is accessible by $X_n$ for every $n\geq 1$. In addition,
$$\mathbb{E}\Big[T^{(1)}-\tau_n\Big]=\sum_{j\geq\epsilon n}\mathbb{P}\big(X_i(\tau_n)=j\big)\,\mathbb{E}\Big[A_j^{(K)}\Big].$$
But the last part of Theorem 3 shows that $\mathbb{E}[A_j^{(K)}]/a_j$ converges to a positive real number as $j\to\infty$, and thus $\mathbb{E}[A_j^{(K)}]\geq C\,a_j$ for every $j\geq 1$ and some constant $C>0$. Since $\mathbb{E}[\tau_n]\to\mathbb{E}[T^{(1)}]$ as $n\to\infty$, this implies that
$$\sum_{j\geq\epsilon n}\mathbb{P}\big(X_i(\tau_n)=j\big)\,a_j\ \xrightarrow[n\to\infty]{}\ 0.$$
Next, if $\eta\in(0,\gamma)$ is fixed, by the Potter bounds there exists $C'>0$ such that $a_j/a_n\geq C'(j/n)^{\gamma-\eta}\geq C'\epsilon^{\gamma-\eta}$ for every $n\geq j\geq\epsilon n$ (and $a_j/a_n\geq C'$ for $j>n$). Therefore
$$a_n\cdot\sum_{j\geq\epsilon n}\mathbb{P}\big(X_i(\tau_n)=j\big)\ \xrightarrow[n\to\infty]{}\ 0,$$
which is precisely (40). This completes the proof.

We shall now illustrate our general results stated in Section 2 by discussing some special cases which may be of independent interest. Specifically, we shall first show how one can recover the results of Haas & Miermont [19] about the scaling limits of non-increasing Markov chains; then we shall discuss limit theorems for Markov chains with asymptotically zero drift. Finally, we shall apply our results to the study of the number of blocks in some exchangeable fragmentation-coagulation processes (see [3]).
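Each of the applications below consists in identifying the Lévy process $\xi$; the passage from $\xi$ to the scaling limit $Y(t)=\exp(\xi(\tau(t)))$ is the Lamperti time change, which the following sketch evaluates numerically on a grid. The choices $\xi(s)=-s$ and $\gamma=1$ are illustrative assumptions: in that deterministic case $\int_0^u e^{-s}\,\mathrm{d}s=1-e^{-u}$, so $\tau(t)=-\ln(1-t)$ and $Y(t)=1-t$ exactly, which the code recovers.

```python
# Numerical sketch of the Lamperti time change Y(t) = exp(xi(tau(t))), where
# tau(t) = inf{u >= 0 : int_0^u exp(gamma*xi(s)) ds > t}.  The inputs below
# (xi(s) = -s, gamma = 1) are illustrative assumptions, not taken from the text.
import math

def lamperti(xi, gamma, t, du=1e-4):
    """Evaluate Y(t) by numerically inverting the time change on a grid."""
    acc, u = 0.0, 0.0
    while acc <= t:
        acc += math.exp(gamma * xi(u)) * du  # Riemann sum for int_0^u e^{gamma xi}
        u += du
        if u > 50.0:                         # t exceeds the total mass I_infty
            return float("nan")
    return math.exp(xi(u))

y = lamperti(lambda s: -s, 1.0, 0.5)
print(y)  # close to the exact value 1 - 0.5 = 0.5
```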
Let us first explain how to recover the result of Haas & Miermont. For $n\geq 1$, denote by $p_n^*$ the probability measure on $\mathbb{R}_+$ defined by
$$p_n^*(\mathrm{d}x)=\sum_{k\geq 1}p_{n,k}\cdot\delta_{k/n}(\mathrm{d}x),$$
which is the law of $\frac{1}{n}X_n(1)$. In [19], Haas & Miermont establish the convergence (2) under the assumption of the existence of a non-zero, finite, non-negative measure $\mu$ on $[0,1]$ such that the convergence
$$a_n(1-x)\cdot p_n^*(\mathrm{d}x)\ \xrightarrow[n\to\infty]{(w)}\ \mu(\mathrm{d}x)\qquad(41)$$
holds for the weak convergence of measures on $[0,1]$. Our framework covers this case, where the limiting process $Y$ is non-increasing. Indeed, assuming (41) and $\mu(\{0\})=0$, let $\widetilde\mu$ be the image of $\mu$ by the mapping $x\mapsto\ln(x)$, and let $\Pi(\mathrm{d}x)$ be the measure $\widetilde\mu(\mathrm{d}x)/(1-e^x)$ restricted to $(-\infty,0)$ (the image of $\Pi(\mathrm{d}x)$ by $x\mapsto-x$ is exactly the measure $\omega(\mathrm{d}x)$ defined in [19, p. 1219]). Then:

Proposition 5.1.
Assume (41) with $\mu(\{0\})=0$. We then have $\int_{-\infty}^{\infty}(1\wedge|x|)\,\Pi(\mathrm{d}x)<\infty$, and (A1), (A2) hold with
$$b=\int_{-1}^{0}x\,\Pi(\mathrm{d}x)-\mu(\{1\})=\int_{1/e}^{1}\frac{\ln(x)}{1-x}\,\mu(\mathrm{d}x)-\mu(\{1\}),\qquad\sigma^2=0.$$
In addition, (A3), (A4) and (A5) hold for every $\beta>0$.

Proof. This simply follows from the facts that for every continuous bounded function $f:\mathbb{R}\to\mathbb{R}_+$,
$$\int_{-\infty}^{\infty}f(x)\,\Pi(\mathrm{d}x)=\int_{(0,1)}\frac{f(\ln(x))}{1-x}\,\mu(\mathrm{d}x),\qquad a_n\int_{-\infty}^{\infty}f(x)\,\Pi_n^*(\mathrm{d}x)=\int_{(0,1)}\frac{f(\ln(x))}{1-x}\cdot a_n(1-x)\,p_n^*(\mathrm{d}x),$$
and that, as noted in [19, p. 1219],
$$\Psi(\lambda)=-\mu(\{1\})\cdot\lambda+\int_{-\infty}^{0}\big(e^{\lambda x}-1\big)\,\Pi(\mathrm{d}x),$$
which is negative for every $\lambda>0$.
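The negativity of $\Psi$ in Proposition 5.1 is easy to check numerically for a concrete measure. The choice $\mu=\frac12\delta_{1/2}+\frac12\delta_{1}$ below is a hypothetical example of ours (it satisfies $\mu(\{0\})=0$): the atom at $1/2$ gives a single atom of $\Pi$ at $\ln(1/2)$ with mass $(1/2)/(1-1/2)=1$, while the atom of $\mu$ at $1$ produces the drift term.

```python
# The Laplace exponent of Proposition 5.1 for a concrete, assumed choice of the
# Haas-Miermont measure: mu = (delta_{1/2} + delta_1)/2, with mu({0}) = 0.
import math

MU = {0.5: 0.5, 1.0: 0.5}  # finite measure on (0, 1], an illustrative choice

def psi(lam):
    """Psi(lambda) = -mu({1})*lambda + int (e^{lambda x} - 1) Pi(dx)."""
    drift = -MU.get(1.0, 0.0) * lam
    jumps = sum(m / (1.0 - x) * (math.exp(lam * math.log(x)) - 1.0)
                for x, m in MU.items() if x < 1.0)
    return drift + jumps

values = [psi(0.1 * k) for k in range(1, 51)]
print(max(values))  # Psi(lambda) < 0 for every lambda > 0 sampled
```

Here $\Psi(\lambda)=-\tfrac12\lambda+(2^{-\lambda}-1)$, which is indeed strictly negative for all $\lambda>0$.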
For every $n\geq 1$, let $\Delta_n=X_n(1)-n$ be the first step of the Markov chain $X_n$. We say that this Markov chain has asymptotically zero drift if $\mathbb{E}[\Delta_n]\to 0$ as $n\to\infty$. The study of processes with asymptotically zero drifts was initiated by Lamperti in [27, 28, 30], and was continued by many authors; see [1] for a thorough bibliographical description.

A particular instance of such Markov chains are the so-called Bessel-type random walks, which are random walks on $\mathbb{N}$, reflected at $1$, with steps $\pm 1$:
$$p_{n,n+1}=p_n=\frac{1}{2}\left(1-\frac{d}{2n}+o\left(\frac{1}{n}\right)\right)\ \text{ as } n\to\infty,\qquad p_{n,n-1}=q_n=1-p_n,\qquad(42)$$
where $d\in\mathbb{R}$. The study of Bessel-type random walks has attracted a lot of attention starting from the 1950s, in connection with birth-and-death processes, and in particular concerning the finiteness and local estimates of first return times [20, 21, 28, 30]; see also the Introduction of [1], which contains a concise and precise bibliographical account. Interest in Bessel-type random walks has also recently been renewed, owing to their connection with statistical physics models such as random polymers [13, 1] (see again the Introduction of [1] for details) and a non-mean-field model of coagulation–fragmentation [4]. Non-neighbor Markov chains with asymptotically zero drifts have also appeared in [32], in connection with random billiards.

Assume that there exist $p>2$, $\delta>0$ and $C>0$ such that for every $n\geq 1$,
$$\mathbb{E}\big[|\Delta_n|^{p}\big]\leq C\cdot n^{p-2-\delta}.\qquad(43)$$
Also assume that, as $n\to\infty$,
$$\mathbb{E}[\Delta_n]=\frac{c}{n}+o\left(\frac{1}{n}\right),\qquad\mathbb{E}\big[\Delta_n^2\big]=s^2+o(1)\qquad(44)$$
for some $c\in\mathbb{R}$ and $s^2\in(0,\infty)$. Finally, set
$$r=-\frac{2c}{s^2},\qquad\nu=-\frac{1+r}{2},\qquad\delta_0=1-r.$$
(In comparison, in [1] it is assumed that $\mathbb{E}[\Delta_n]=\frac{c}{n}+o\big((n\log(n))^{-1}\big)$ and $\mathbb{E}[\Delta_n^2]=s^2+o\big(\log(n)^{-1}\big)$, and also that the Markov chain is irreducible; on the other hand, the authors of [1] do not restrict themselves to the Markovian case.)

Note that Bessel-type random walks satisfying (42) verify (43) & (44) with $c=-d/2$, $s=1$, so that $r=d$.

In the seminal work [28], when $r<1$, under the additional assumptions that the steps $\Delta_n$ have uniformly bounded moments of sufficiently high order and that the Markov chain is uniformly null (see [28] for a definition), Lamperti showed that $\frac{1}{n}X_n$, appropriately scaled in time, converges in $D(\mathbb{R}_+,\mathbb{R})$ to a Bessel process. However, the majority of the subsequent work concerning Markov chains with asymptotically zero drifts and Bessel-type random walks was devoted to the study of the asymptotic behavior of return times and of statistics of excursions from sets. A few authors [26, 25, 14] extended Lamperti's result under weaker moment conditions, but only for the convergence of finite-dimensional marginals and not for functional scaling limits.

Let $R_{1/s}^{(\nu)}$ be a Bessel process with index $\nu$ (or, equivalently, of dimension $\delta_0=2(\nu+1)$) started from $1/s$ (we refer to [35, Chap. XI] for background on Bessel processes). By standard properties of Bessel processes, $R_{1/s}^{(\nu)}$ does not touch $0$ for $r\leq-1$, is reflected at $0$ for $-1<r<1$, and is absorbed at $0$ for $r\geq 1$.

Theorem 5.
Assume that (43) & (44) hold.

(i) If either r ⩽ −1 or r > 1, then we have

(X_n(⌊n²t⌋)/n ; t ⩾ 0) (d)→ s R^{(ν)}_{1/s} in D(ℝ₊, ℝ) as n → ∞.

(ii) If r > −1, there exists an integer K ⩾ 1 such that {1, 2, . . . , K} is accessible by X_n for every n ⩾ 1, and the following distributional convergence holds in D(ℝ₊, ℝ):

(X†_n(⌊n²t⌋)/n ; t ⩾ 0) (d)→ s R^{(ν),†}_{1/s},

where X†_n denotes the Markov chain X_n stopped as soon as it hits {1, 2, . . . , K}, and R^{(ν),†}_{1/s} denotes the Bessel process R^{(ν)}_{1/s} stopped as soon as it hits 0. In addition, if A_n denotes the first time X_n hits {1, 2, . . . , K}, then

A_n/n² (d)→ 1/(2s² γ_{(1+r)/2}),  (45)

where γ_{(1+r)/2} is a Gamma random variable with parameter (1 + r)/2.

(iii) If r > 1, we have further

E[A_n^q]/n^{2q} → (2s²)^{−q} · Γ((1 + r)/2 − q)/Γ((1 + r)/2)  (46)

as n → ∞, for every 0 ⩽ q < (1 + r)/2. In particular, E[A_n]/n² → 1/(s²(r − 1)).

These results concerning the asymptotic scaled functional behavior of Markov chains with asymptotically zero drift, and the fact that the scaling limit of the first time they hit 0 is a multiple of an inverse gamma random variable, may be new. We stress that the appearance of the inverse gamma distribution in this framework is related to a well-known result of Dufresne [15]; see also the discussion in [7] for further references.

The main step in proving Theorem 5 is to check that the conditions (43) & (44) imply that our assumptions introduced in Section 2 are satisfied:

Proposition 5.2.
Assertion (A1) holds with a_n = n² and Π = 0; Assertion (A2) holds with b = c − s²/2 and σ² = s²; Assertion (A3) holds for every β ∈ (0, p]. Finally, if r > 1, then Assertions (A4) and (A5) hold for every β ∈ (2, 1 + r).

Before proving this, let us explain how to deduce Theorem 5 from Proposition 5.2.
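To make the hitting-time statements (45) and (46) concrete, here is a small Monte Carlo sketch. It is purely illustrative and not from the paper: we simulate a hypothetical instance of the Bessel-type walk (42) with the o(1/n) term set to 0 (and the probability clamped to [0, 1] at small states), taking d = 5, so that r = d = 5 > 1 and s² = 1; Theorem 5(iii) then predicts E[A_n]/n² → 1/(s²(r − 1)) = 1/4.

```python
import random

def hitting_time_of_one(n, d, rng):
    """Simulate the Bessel-type walk started at n and return the number of
    steps until it first reaches state 1."""
    x, steps = n, 0
    while x > 1:
        # p_x = (1/2)(1 - d/(2x)), clamped to [0, 1] at small states
        # (a hypothetical concrete instance of (42): the o(1/n) term is 0).
        p_up = max(0.0, 0.5 * (1.0 - d / (2.0 * x)))
        x += 1 if rng.random() < p_up else -1
        steps += 1
    return steps

rng = random.Random(0)
n, d, runs = 40, 5.0, 400  # d = 5 gives r = 5 and s^2 = 1
est = sum(hitting_time_of_one(n, d, rng) for _ in range(runs)) / (runs * n * n)
print(est)  # Theorem 5(iii) predicts a limit of 1/(s^2 (r - 1)) = 0.25
```

With n = 40 and 400 runs the estimate typically lands near 1/4, up to finite-n bias and Monte Carlo error.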
Proof of Theorem 5.
By Proposition 5.2, for every t ⩾ 0 we have ξ(t) = sB_t + (c − s²/2)t, where B is a standard Brownian motion. Note that ξ drifts to −∞ if and only if 2c − s² < 0, that is, r > −1. By [35, p. 452], Y(t/s²) is a Bessel process R^{(ν)} with index ν and dimension δ given by

ν := (2c − s²)/(2s²) = −(1 + r)/2,  δ := 1 − r,

started from 1 and stopped as soon as it hits 0. Hence, by scaling, we can write Y(t) = s R^{(ν)}_{1/s}(t). Theorem 5 then follows from Theorems 1, 2, 3 and 4, as well as Remark 4.10. For (45) and (46), we also use the fact that (see e.g. [35, p. 452])

∫_0^∞ e^{2(sB_u + (c − s²/2)u)} du (d)= 1/(2s² γ_{(1+r)/2}).

This completes the proof.

The proof of Proposition 5.2 is slightly technical, and we start with a couple of preparatory lemmas.

Lemma 5.3.
We have

n² ∫_{|x|>1} |e^x − 1| Π*_n(dx) → 0 and n² ∫_{|x|>1} (e^x − 1)² Π*_n(dx) → 0 as n → ∞.

Proof.
It is enough to establish the second convergence, which implies the first one. We show that n² ∫_1^∞ (e^x − 1)² Π*_n(dx) → 0 as n → ∞ (the case of the integral over x < −1 is treated similarly). Write

n² ∫_1^∞ (e^x − 1)² Π*_n(dx) = n² Σ_{k ⩾ en} (k/n − 1)² p_{n,k} = Σ_{k ⩾ en} (k − n)^p · (k − n)^{2−p} p_{n,k} ⩽ Σ_{k ⩾ en} (k − n)^p · ((e − 1)n)^{2−p} p_{n,k} = E[∆_n^p 1{∆_n ⩾ (e − 1)n}] / ((e − 1)^{p−2} n^{p−2}) ⩽ C (e − 1)^{2−p} n^{−δ}

by (43), which tends to 0 as n → ∞.

Lemma 5.4.
We have

lim_{ε→0} limsup_{n→∞} n² ∫_{|x|<ε} |x|³ · Π*_n(dx) = 0.

Proof.
To simplify notation, we establish the result with ε replaced by ln(1 + ε), with ε ∈ (0, 1). Write

∫_{|x|<ln(1+ε)} |x|³ · Π*_n(dx) = Σ_{(1+ε)^{−1} ⩽ k/n ⩽ 1+ε} |ln(1 + (k − n)/n)|³ p_{n,k}.

But (1 + ε)^{−1} ⩽ k/n ⩽ 1 + ε implies that |k − n|/n ⩽ ε, and there exists a constant C′ > 0 such that |ln(1 + x)| ⩽ C′|x| for every |x| ⩽ 1/2 (since we let ε → 0, we may assume that ε ⩽ 1/2). Hence

n² ∫_{|x|<ln(1+ε)} |x|³ · Π*_n(dx) ⩽ C′³ Σ_{(1+ε)^{−1} ⩽ k/n ⩽ 1+ε} (|k − n|/n) · (k − n)² p_{n,k} ⩽ C′³ ε · E[∆_n²],

and the result follows by (44).

We are now in position to establish Proposition 5.2.

Proof of Proposition 5.2.
In order to check (A1), we show that n² · Π*_n([ln(a), ∞)) → 0 as n → ∞ for every fixed a > 1 (the symmetric convergence n² · Π*_n((−∞, ln(a)]) → 0 for a ∈ (0, 1) is similar), by writing

(a − 1)^p n^p Σ_{k ⩾ an} p_{n,k} ⩽ Σ_{k ⩾ an} (k − n)^p p_{n,k} ⩽ E[|∆_n|^p] ⩽ C · n^{p−2−δ}.

Therefore,

n² · Π*_n([ln(a), ∞)) = n² Σ_{k ⩾ an} p_{n,k} ⩽ C (a − 1)^{−p} n^{−δ} → 0 as n → ∞.

For (A2), we first show that

n² ∫_{−1}^{1} (e^x − 1 − x − x²/2) Π*_n(dx) → 0 as n → ∞.  (47)

Since there exists a constant C′ > 0 such that |e^x − 1 − x − x²/2| ⩽ C′|x|³ for every |x| ⩽ 1, for fixed ε > 0 we may, by Lemma 5.4, find η > 0 such that

n² ∫_{−η}^{η} |e^x − 1 − x − x²/2| Π*_n(dx) ⩽ ε

for every n sufficiently large. But n² ∫_{η<|x|<1} (e^x − 1 − x − x²/2) Π*_n(dx) → 0 by the same estimate as for (A1), and (47) follows. One shows similarly that

n² ∫_{−1}^{1} ((e^x − 1)² − x²) Π*_n(dx) → 0 as n → ∞.  (48)

Next observe that

n² ∫_{−∞}^{∞} (e^x − 1) Π*_n(dx) = n E[∆_n] and n² ∫_{−∞}^{∞} (e^x − 1)² Π*_n(dx) = E[∆_n²].

Thus, by Lemma 5.3 and (44), we have

n² ∫_{−1}^{1} (e^x − 1) Π*_n(dx) → c,  n² ∫_{−1}^{1} (e^x − 1)² Π*_n(dx) → s².

Then write

n² ∫_{−1}^{1} (e^x − 1) Π*_n(dx) = n² ∫_{−1}^{1} x Π*_n(dx) + (n²/2) ∫_{−1}^{1} x² Π*_n(dx) + n² ∫_{−1}^{1} (e^x − 1 − x − x²/2) Π*_n(dx)

and

n² ∫_{−1}^{1} (e^x − 1)² Π*_n(dx) = n² ∫_{−1}^{1} x² Π*_n(dx) + n² ∫_{−1}^{1} ((e^x − 1)² − x²) Π*_n(dx).

By (47) and (48), the last term on the right-hand side of each of the two previous equalities tends to 0 as n → ∞. It follows that

n² · ∫_{−1}^{1} x Π*_n(dx) → b,  n² · ∫_{−1}^{1} x² Π*_n(dx) → σ²,

where b and σ² satisfy c = b + σ²/2 and s² = σ². This shows that (A2) holds.

To check that (A3) holds for every β ∈ (0, p], first note that this constraint on β yields the existence of a constant C′ > 0 such that k^β/(k − n)^p ⩽ C′ n^{β−p} for every k ⩾ en and n ⩾ 1. Then

n² · ∫_1^∞ e^{βx} Π*_n(dx) = n^{2−β} Σ_{k ⩾ en} k^β p_{n,k} = n^{2−β} Σ_{k ⩾ en} (k − n)^p · (k^β/(k − n)^p) p_{n,k} ⩽ C′ n^{2−p} Σ_{k ⩾ en} (k − n)^p p_{n,k} ⩽ C C′ n^{−δ}.

This shows that (A3) holds.

Finally, for the last assertion of Proposition 5.2, observe that Ψ(λ) = (s²/2)λ² + (c − s²/2)λ, so that Ψ(2) = 2c + s² and Ψ(1 + r) = 0. In particular, if 2c + s² < 0, that is if r > 1, one may find β ∈ (2, 1 + r) such that Ψ(β) < 0, which gives (A4). Finally, for (A5), note that E[|X_n(1) − n|^β] = E[|∆_n|^β] ⩽ 1 + E[|∆_n|^p] < ∞, which implies that E[X_n(1)^β] < ∞. This completes the proof.

Remark 5.5.
The results of [22] establish many estimates concerning various statistics of the excursions of X from 1 (such as the duration of an excursion, its maximum, etc.). Unfortunately, those estimates are not enough to establish (40), (45) and (46) directly. However, in the particular case of Bessel-type random walks, it is possible to use the local estimates of [1] in order to establish (40), (45) and (46) directly.

Exchangeable fragmentation-coalescence processes were introduced by J. Berestycki [3] as Markovian models whose evolution combines the dynamics of exchangeable coalescent processes and those of homogeneous fragmentations. The fragmentation-coagulation process that we shall consider in this section can be viewed as a special case in this family.

Imagine a particle system in which particles may split or coagulate as time passes. For the sake of simplicity, we shall focus on the case when coalescent events are simple, that is, the coalescent dynamics is that of a Λ-coalescent in the sense of Pitman [34]. Specifically, Λ is a finite measure on [0, 1]; we shall implicitly assume that Λ has no atom at 0, viz. Λ({0}) = 0. In turn, we suppose that the fragmentation dynamics are homogeneous (i.e. independent of the masses of the particles) and governed by a finite dislocation measure which only charges mass-partitions having a finite number (at least two) of components. That is, almost surely, when a dislocation occurs, the particle which splits is replaced by a finite number of smaller particles.

The process N_n = (N_n(t) ; t ⩾ 0) which counts the number of particles as time passes, when the system starts at time t = 0 from n particles, is a continuous-time Markov chain with values in ℕ. More precisely, the rate at which N_n jumps from n to k < n, as the result of a simple coagulation event involving n − k + 1 particles, is

g_{n,k} = ∫_{(0,1]} binom(n, k − 1) x^{n−k−1} (1 − x)^{k−1} Λ(dx).

Write

g_n = Σ_{k=1}^{n−1} g_{n,k} = ∫_{(0,1]} (1 − (1 − x)^n − nx(1 − x)^{n−1}) x^{−2} Λ(dx)

for the total rate of coalescence. In turn, let µ denote a finite measure on ℕ such that the rate at which each particle splits into j + 1 smaller particles (inducing an increase of j units for the number of particles) when a dislocation event occurs is given by µ(j), for every j ∈ ℕ.

We are interested in the jump chain X_n = (X_n(k) ; k ⩾ 0) of N_n, that is, the discrete-time embedded Markov chain of the successive values taken by N_n.
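As a quick numerical sanity check of the identity between Σ_{k<n} g_{n,k} and the displayed closed form for g_n, one can take Λ = δ_{1/2}, a hypothetical, purely illustrative choice of coagulation measure (not one singled out in the text):

```python
from math import comb

# Illustrative check with Lambda = delta_x, a Dirac mass at x (hypothetical choice).
def g_nk(n, k, x):
    # Rate of jumping from n to k < n blocks for Lambda = delta_x:
    # binom(n, k-1) * x^(n-k+1) * (1-x)^(k-1) * x^(-2).
    return comb(n, k - 1) * x ** (n - k - 1) * (1 - x) ** (k - 1)

def g_n_closed_form(n, x):
    # Total coalescence rate: (1 - (1-x)^n - n*x*(1-x)^(n-1)) / x^2.
    return (1 - (1 - x) ** n - n * x * (1 - x) ** (n - 1)) / x ** 2

n, x = 20, 0.5
total = sum(g_nk(n, k, x) for k in range(1, n))
print(total, g_n_closed_form(n, x))  # the two expressions agree
```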
The transition probabilities p_{n,k} of X_n are thus given by

p_{n,k} = nµ(k − n)/(g_n + nµ(ℕ)) for k > n,  p_{n,k} = g_{n,k}/(g_n + nµ(ℕ)) for k < n.

We assume from now on that the measure µ has a finite mean

m := Σ_{j=1}^∞ j µ(j) < ∞,

and further that

∫_{(0,1]} x^{−1} Λ(dx) < ∞.

Before stating our main result about the scaling limit of the chain X_n, it is convenient to introduce the measure Π(dy) on (−∞, 0) induced as the image of x^{−2}Λ(dx) by the map x ↦ y = ln(1 − x), and to observe that

∫_{(−∞,0)} (1 ∧ |y|) Π(dy) < ∞.

We may thus consider the spectrally negative Lévy process ξ = (ξ(t), t ⩾ 0) whose Laplace transform is given by

E[exp(qξ(t))] = exp( (t/µ(ℕ)) ( mq + ∫_{(−∞,0)} (e^{qy} − 1) Π(dy) ) ) = exp( (t/µ(ℕ)) ( mq + ∫_{(0,1)} ((1 − x)^q − 1) · x^{−2} Λ(dx) ) ).

We point out that ξ has finite variation; more precisely, it is the sum of the negative of a subordinator and a positive drift. Moreover, ξ drifts to +∞, oscillates, or drifts to −∞ according as the mean

E[ξ(1)] = (1/µ(ℕ)) ( m + ∫_{(−∞,0)} y Π(dy) ) = (1/µ(ℕ)) ( m + ∫_{(0,1)} ln(1 − x) x^{−2} Λ(dx) )

is respectively strictly positive, zero, or strictly negative (possibly −∞).

Corollary 5.6.
Let (Y(t), t ⩾ 0) denote the positive self-similar Markov process with index 1 which is associated via Lamperti's transform to the spectrally negative Lévy process ξ.

(i) If ξ drifts to +∞ or oscillates, then there is the weak convergence in D(ℝ₊, ℝ):

(X_n(⌊nt⌋)/n ; t ⩾ 0) (d)→ (Y(t) ; t ⩾ 0).

(ii) If ξ drifts to −∞, then A^(1)_n = inf{k ⩾ 0 : X_n(k) = 1} is a.s. finite for all n ⩾ 1,

A^(1)_n/n (d)→ ∫_0^∞ e^{ξ(s)} ds,

and this weak convergence holds jointly with

(X_n(⌊nt⌋ ∧ A^(1)_n)/n ; t ⩾ 0) (d)→ (Y(t) ; t ⩾ 0) in D(ℝ₊, ℝ).

(iii) If m < ∫_{(−∞,0)} (1 − e^y) Π(dy) = ∫_{(0,1)} x^{−1} Λ(dx) and Σ_{j=1}^∞ j^β µ(j) < ∞ for some β > 1, then ξ drifts to −∞ and

(X_n(⌊nt⌋)/n ; t ⩾ 0) (d)→ (Y(t) ; t ⩾ 0) in D(ℝ₊, ℝ).

In addition, for every 1 ⩽ p < β such that m < ∫_{(0,1)} ((1 − (1 − x)^p)/p) · x^{−2} Λ(dx), we have

E[(A^(1)_n/n)^p] → E[(∫_0^∞ e^{ξ(s)} ds)^p].

Proof.
We first note that, since µ has finite mean m,

lim_{n→∞} n Σ_{k=n+1}^∞ (f(k/n) − f(1)) µ(k − n) = m f′(1)

for every bounded function f : ℝ₊ → ℝ that is differentiable at 1. We also lift from Lemma 9 of Haas & Miermont [19] that, in this situation,

lim_{n→∞} Σ_{k=1}^{n−1} (f(k/n) − f(1)) g_{n,k} = ∫_{(0,1]} (f(1 − x) − f(1)) x^{−2} Λ(dx).

Then we observe that there is the identity

g_n/n = ∫_{(0,1]} ( n^{−1} Σ_{j=0}^{n−2} ((1 − x)^j − (1 − x)^{n−1}) ) x^{−1} Λ(dx).

It follows, by dominated convergence and ∫_{(0,1]} x^{−1} Λ(dx) < ∞, that g_n = o(n), and therefore g_n + nµ(ℕ) ∼ nµ(ℕ) as n → ∞. Hence

lim_{n→∞} n µ(ℕ) Σ_{k=1}^∞ (f(k/n) − f(1)) p_{n,k} = m f′(1) + ∫_{(0,1]} (f(1 − x) − f(1)) x^{−2} Λ(dx),

and for every bounded function h : ℝ → ℝ which is differentiable at 0, we therefore have

lim_{n→∞} n ∫_ℝ (h(x) − h(0)) Π*_n(dx) = lim_{n→∞} n Σ_{k=1}^∞ (h(ln(k/n)) − h(0)) p_{n,k} = (1/µ(ℕ)) ( m h′(0) + ∫_{(0,1]} (h(ln(1 − x)) − h(0)) x^{−2} Λ(dx) ) = (1/µ(ℕ)) ( m h′(0) + ∫_{(−∞,0)} (h(y) − h(0)) Π(dy) ),

where Π(dy) stands for the image of x^{−2}Λ(dx) by the map x ↦ y = ln(1 − x). This proves that the assumptions (A1) and (A2) hold (with Π/µ(ℕ) instead of Π, to be precise), and then (i) follows from Theorem 1. Note also that

n · ∫_1^∞ e^x Π*_n(dx) = n · Σ_{k>en} (k/n) · p_{n,k} ⩽ (1/µ(ℕ)) Σ_{k>(e−1)n} (n + k) µ(k) ⩽ 2m/µ(ℕ),  (49)

since n + k ⩽ 2k when k > (e − 1)n ⩾ n; this shows that (A3) is fulfilled. Hence (ii) follows from Theorems 2 and 3(i).

Finally, it is easy to check that when the assumptions of (iii) are fulfilled, then (A4) and (A5) hold. Indeed, as for (49), for every β > 1,

n · ∫_1^∞ e^{βx} Π*_n(dx) = n · Σ_{k>en} (k/n)^β p_{n,k} ⩽ (n^{1−β}/µ(ℕ)) Σ_{k>(e−1)n} (n + k)^β µ(k) ⩽ (2^β n^{1−β}/µ(ℕ)) Σ_{k>(e−1)n} k^β µ(k),

and we can thus invoke Theorem 4, as well as Theorem 3(ii) and Remark 4.10.

Roughly speaking, Corollary 5.6 tells us that in case (i) the number of blocks drifts to +∞, and that in case (iii), once the number of blocks is o(n), it remains o(n) on time scales of order a_n. In case (ii), we are only able to understand what happens until the moment when there is only one block. It is plausible that, in some cases, the process counting the number of blocks may then "restart" (see Section 6 for a similar discussion).

OPEN QUESTIONS

Here we gather some open questions.

Question 6.1.
Is it true that Theorem 2 remains valid if (A3) is replaced with the condition that inf{i ⩾ 0 : X_n(i) ⩽ K} < ∞ almost surely for every n ⩾ 1?

Question 6.2.
Is it true that Theorem 4 remains valid if (A4) is replaced with the condition that E[inf{i ⩾ 0 : X_n(i) ⩽ K}] < ∞ for every n ⩾ 1?

Question 6.3.
Consider a Markov chain with asymptotically zero drift satisfying (44) only. Under what conditions do we have

(X_n(⌊n²t⌋)/n ; t ⩾ 0) (d)→ s R^{(ν)}_{1/s} ?

When in addition the assumption (43) is satisfied, our results settle the cases r ⩽ −1 and r > 1, while the case r < 1 was settled in [28] under the additional assumptions that sup_{n⩾1} E[|∆_n|³] < ∞ and that the Markov chain is uniformly null (see [28] for a definition). We mention that if X_n is irreducible and not positive recurrent (which is the case when r < 1), the asymptotic behavior of X_n will be very sensitive to the laws of X_k(1) for small values of k. For example, even in the Bessel-like random walk case, one drastically changes the behavior of X_n just by changing the distribution of X_1(1) in such a way that E[X_1(1)] = ∞. More generally:

Question 6.4. Assume that (A1) and (A2) hold, and that there exists an integer 1 ⩽ n ⩽ K such that E[inf{i ⩾ 0 : X_n(i) ⩽ K}] = ∞. Under what conditions on the probability distributions of X_1(1), X_2(1), . . . , X_K(1) does the Markov chain X_n have a continuous scaling limit (in which case 0 is a continuously reflecting boundary)? A discontinuous càdlàg scaling limit (in which case 0 is a discontinuously reflecting boundary)?

As a first step, one could try to answer this question under the assumptions (A3) or (A4), which enable the use of Foster–Lyapounov type techniques. We intend to develop this in a future work.

Question 6.5.
Assume that ξ does not drift to −∞, and that, if P_x denotes the law of Y started from x > 0, then P_x converges weakly as x ↓ 0 to a limit law P_0. Does there exist a family (p_{n,k}) such that the law of Y under P_0 is the scaling limit of X_n as n → ∞? If so, can one find sufficient conditions guaranteeing this distributional convergence?

Question 6.6.
Assume that ξ drifts to −∞, so that Y is absorbed at 0. Assume that Y has a recurrent extension at 0. Does there exist a family (p_{n,k}) such that this recurrent extension is the scaling limit of X_n as n → ∞? If so, can one find sufficient conditions guaranteeing this distributional convergence?

References

[1] K. S. Alexander, Excursions and local limit theorems for Bessel-like random walks, Electron. J. Probab., 16 (2011), no. 1, pp. 1–44.
[2] S. Aspandiiarov and R. Iasnogorodski, General criteria of integrability of functions of passage-times for non-negative stochastic processes and their applications, Teor. Veroyatnost. i Primenen., 43 (1998), pp. 509–539.
[3] J. Berestycki, Exchangeable fragmentation-coalescence processes and their equilibrium measures, Electron. J. Probab., 9 (2004), no. 25, pp. 770–824 (electronic).
[4] C. Bernardin and F. L. Toninelli, A one-dimensional coagulation-fragmentation process with a dynamical phase transition, Stochastic Process. Appl., 122 (2012), pp. 1672–1708.
[5] J. Bertoin, N. Curien, and I. Kortchemski, Random planar maps & growth-fragmentations, preprint available on arXiv, http://arxiv.org/abs/1507.02265.
[6] J. Bertoin and M. Savov, Some applications of duality for Lévy processes in a half-line, Bull. Lond. Math. Soc., 43 (2011), pp. 97–110.
[7] J. Bertoin and M. Yor, Exponential functionals of Lévy processes, Probab. Surv., 2 (2005), pp. 191–212.
[8] P. Billingsley, Convergence of probability measures, Wiley Series in Probability and Statistics, John Wiley & Sons Inc., New York, second ed., 1999.
[9] N. H. Bingham, C. M. Goldie, and J. L. Teugels, Regular variation, vol. 27 of Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge, 1987.
[10] P. Brémaud, Markov chains, vol. 31 of Texts in Applied Mathematics, Springer-Verlag, New York, 1999. Gibbs fields, Monte Carlo simulation, and queues.
[11] F. Caravenna and L. Chaumont, Invariance principles for random walks conditioned to stay positive, Ann. Inst. Henri Poincaré Probab. Stat., 44 (2008), pp. 170–190.
[12] L. Chaumont, A. Kyprianou, J. C. Pardo, and V. Rivero, Fluctuation theory and exit systems for positive self-similar Markov processes, Ann. Probab., 40 (2012), pp. 245–279.
[13] J. De Coninck, F. Dunlop, and T. Huillet, Random walk versus random line, Phys. A, 388 (2009), pp. 4034–4040.
[14] D. Denisov, D. Korshunov, and V. Wachtel, Potential analysis for positive recurrent Markov chains with asymptotically zero drift: power-type asymptotics, Stochastic Process. Appl., 123 (2013), pp. 3027–3051.
[15] D. Dufresne, The distribution of a perpetuity, with applications to risk theory and pension funding, Scand. Actuar. J., (1990), pp. 39–79.
[16] S. N. Ethier and T. G. Kurtz, Markov processes: Characterization and convergence, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, Inc., New York, 1986.
[17] P. J. Fitzsimmons, On the existence of recurrent extensions of self-similar Markov processes, Electron. Comm. Probab., 11 (2006), pp. 230–241.
[18] F. G. Foster, On the stochastic matrices associated with certain queuing processes, Ann. Math. Statistics, 24 (1953), pp. 355–360.
[19] B. Haas and G. Miermont, Self-similar scaling limits of non-increasing Markov chains, Bernoulli, 17 (2011), pp. 1217–1247.
[20] T. E. Harris, First passage and recurrence distributions, Trans. Amer. Math. Soc., 73 (1952), pp. 471–486.
[21] J. L. Hodges, Jr. and M. Rosenblatt, Recurrence-time moments in random walks, Pacific J. Math., 3 (1953), pp. 127–136.
[22] O. Hryniv, M. V. Menshikov, and A. R. Wade, Excursions and path functionals for stochastic processes with asymptotically zero drifts, Stochastic Process. Appl., 123 (2013), pp. 1891–1921.
[23] J. Jacod and A. N. Shiryaev, Limit theorems for stochastic processes, vol. 288 of Grundlehren der Mathematischen Wissenschaften, Springer-Verlag, Berlin, second ed., 2003.
[24] O. Kallenberg, Foundations of modern probability, Probability and its Applications, Springer-Verlag, New York, second ed., 2002.
[25] G. Kersting, Asymptotic Γ-distribution for stochastic difference equations, Stochastic Process. Appl., 40 (1992), pp. 15–28.
[26] F. C. Klebaner, Stochastic difference equations and generalized gamma distributions, Ann. Probab., 17 (1989), pp. 178–188.
[27] J. Lamperti, Criteria for the recurrence or transience of stochastic process. I, J. Math. Anal. Appl., 1 (1960), pp. 314–330.
[28] J. Lamperti, A new class of probability limit theorems, J. Math. Mech., 11 (1962), pp. 749–772.
[29] J. Lamperti, Semi-stable stochastic processes, Trans. Amer. Math. Soc., 104 (1962), pp. 62–78.
[30] J. Lamperti, Criteria for stochastic processes. II. Passage-time moments, J. Math. Anal. Appl., 7 (1963), pp. 127–145.
[31] J. Lamperti, Semi-stable Markov processes. I, Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 22 (1972), pp. 205–225.
[32] M. V. Menshikov, M. Vachkovskaia, and A. R. Wade, Asymptotic behaviour of randomly reflecting billiards in unbounded tubular domains, J. Stat. Phys., 132 (2008), pp. 1097–1133.
[33] S. Meyn and R. L. Tweedie, Markov chains and stochastic stability, Cambridge University Press, Cambridge, second ed., 2009. With a prologue by Peter W. Glynn.
[34] J. Pitman, Coalescents with multiple collisions, Ann. Probab., 27 (1999), pp. 1870–1902.
[35] D. Revuz and M. Yor, Continuous martingales and Brownian motion, vol. 293 of Grundlehren der Mathematischen Wissenschaften, Springer-Verlag, Berlin, third ed., 1999.
[36] V. Rivero, Recurrent extensions of self-similar Markov processes and Cramér's condition, Bernoulli, 11 (2005), pp. 471–509.
[37] ,