The law of the iterated logarithm for a piecewise deterministic Markov process assured by the properties of the Markov chain given by its post-jump locations
Dawid Czapla, Sander C. Hille, Katarzyna Horbacz, Hanna Wojewódka-Ściążko
Institute of Mathematics, University of Silesia in Katowice, Bankowa 14, 40-007 Katowice, Poland
Mathematical Institute, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands
Abstract
In this paper we consider a piecewise deterministic Markov process whose continuous component evolves according to semiflows, which are switched at the jump times of a Poisson process. The associated Markov chain describes the states of this process directly after the jumps. Certain ergodic properties of these two objects have already been investigated in our recent papers. We now aim to establish the law of the iterated logarithm for the aforementioned continuous-time process. Moreover, we intend to do this using the already proven properties of the discrete-time system. The abstract model under consideration has interesting interpretations in the life sciences, such as biology. Among others, it can be used to describe the stochastic dynamics of gene expression.
Keywords: piecewise deterministic Markov process, random dynamical system, invariant measure,law of the iterated logarithm, asymptotic coupling
Introduction
The law of the iterated logarithm (LIL) essentially characterises the maximal fluctuations around the mean of a stochastic process in discrete or continuous time. It is intimately related to the strong law of large numbers (SLLN) and the central limit theorem (CLT). The history of results on the LIL dates back to the work of Khinchin [21], in the specific context of dyadic representations of numbers, and to that of Kolmogorov [22], for general sequences of independent, not necessarily identically distributed random variables that satisfy a particular 'asymptotic boundedness' condition. Kolmogorov's results for identically distributed random variables with finite second moment were further generalised into the version of the LIL known as the Hartman-Wintner theorem [16]. See also [30] and e.g. [5] for a review of results on the LIL for the case of independent variables at the time of writing.

The main goal of this paper is to prove the validity of the LIL for a class of piecewise deterministic Markov processes (PDMPs). In this setting, the associated random variables are neither independent nor identically distributed. Our method of proof is intentionally such that the result for the PDMP is derived from the validity of the LIL for the Markov chain given by its post-jump locations. The latter has been established in [10] (see also the references mentioned there).

PDMPs were introduced by Davis [12] as a general class of stochastic processes. They are encountered as suitable mathematical models for processes in the physical world around us, e.g. in biology, as stochastic models for gene expression [25], gene regulation [18], excitable membranes [29] or population dynamics [1, 2], as well as in resource allocation and service provisioning (queueing, cf. [12]).
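For orientation, the classical Hartman-Wintner theorem mentioned above can be stated as follows (a standard formulation, quoted here only for comparison with the normalisation used later in the paper):

```latex
% Hartman-Wintner LIL: if X_1, X_2, \dots are i.i.d. with
% \mathbb{E}X_1 = 0 and \mathbb{E}X_1^2 = \sigma^2 < \infty, then
\limsup_{n\to\infty} \frac{X_1 + \dots + X_n}{\sqrt{2n\ln\ln n}} = \sigma
\quad\text{and}\quad
\liminf_{n\to\infty} \frac{X_1 + \dots + X_n}{\sqrt{2n\ln\ln n}} = -\sigma
\quad \text{a.s.}
```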
Questions of ergodicity and asymptotic stability of PDMPs defined on locally compact state spaces have been studied in detail in [3, 4, 7, 14]. The case of a non-locally compact state space has been studied much less so far (see e.g. [2, 8, 18, 29, 31]). A similar statement applies to the study of limit theorems (see [32, 29]). For more information on the validity of limit theorems (SLLN, CLT or LIL) for non-stationary processes one may consult [6, 8, 9, 10, 23].

A PDMP consists of deterministic movement in a state space (a Polish metric space in our case) that is alternated, at random times of intervention, with a random jump in state. In general, the distribution of the next intervention time and the jump can both be state dependent (cf. e.g. [18]). Here, and e.g. also in [2], only the jump is distributed conditionally given the current state of the system. The process examined in this paper (described in detail in Section 2) involves jumps that occur at random time points according to a Poisson process. Any post-jump location is attained by transforming a pre-jump state using a randomly selected function, and, further, by adding a random shift to the resulting state. Between any two consecutive jumps, the system is driven deterministically by one of a finite number of flows, which are switched at the jump times. If the state space is augmented with an index set of the applied movements, then the chain obtained by pairing the state just after the jump with the index of the movement that is applied thereafter yields a Markov chain, which intuitively should contain 'all information' about the PDMP. It is therefore enlightening to show how properties of the PDMP can be proven from relevant properties of the Markov chain constituted by the post-jump states.

Essentially, our method of proof splits the problem into subproblems that can be analysed separately.
One subproblem can be addressed using a version of the LIL for certain square-integrable martingales, whose proof draws heavily on [17, Theorem 1] and uses the coupling methods applied in establishing [9, Lemma 2.2] (cf. also [15]). Another builds on the validity of the LIL for Markov chains associated to PDMPs in the abstract model class, which has been obtained recently (cf. [10, Theorem 4.1]).

We believe that the class of dynamical systems under study is broad enough to cover models of suitable real-life systems, e.g. biological systems, such as artificial evolutionary experiments on bacteria [1], as well as chemotactic movement of bacteria or amoebae (related to the study of so-called velocity-jump models, employing particular Fokker-Planck equations, see e.g. [26, 19, 27]). A discussion and detailed study of such applications are beyond the scope of this paper, but they shall be the subject of our further research collaboration.
Preliminaries
Let us first introduce a piece of notation, as well as gather the most important definitions and facts used in this paper.
For any point $x$ and any set $A$, the symbols $\delta_x$ and $\mathbb{1}_A$ will denote the Dirac measure at $x$ and the indicator function of $A$, respectively.

Suppose that $(E, \rho_E)$ is a Polish metric space, and let $\mathcal{B}_E$ denote the $\sigma$-field of its Borel subsets. Let $B_b(E)$ stand for the space of all bounded, Borel measurable functions $f : E \to \mathbb{R}$, equipped with the supremum norm $\|f\|_\infty = \sup_{x \in E} |f(x)|$. We shall also refer to certain subspaces of $B_b(E)$, namely $C_b(E)$, consisting of all continuous functions, $\mathrm{Lip}_b(E)$, consisting of all Lipschitz continuous functions, and
$$\mathrm{Lip}_{FM}(E) = \{ f \in \mathrm{Lip}_b(E) : \|f\|_{BL} \le 1 \},$$
where the norm $\|\cdot\|_{BL}$ is given by $\|f\|_{BL} = \max\{ |f|_{Lip}, \|f\|_\infty \}$, and $|f|_{Lip}$ stands for the minimal Lipschitz constant of $f$ for every $f \in \mathrm{Lip}_b(E)$. Finally, we will also consider the space $\bar{B}_b(E)$ of functions $f : E \to \mathbb{R}$ which are Borel measurable and bounded below.

The spaces of finite and probability Borel measures on $E$ will be denoted by $\mathcal{M}_{fin}(E)$ and $\mathcal{M}_1(E)$, respectively. Further, we also define
$$\mathcal{M}_1^{V,r}(E) = \left\{ \mu \in \mathcal{M}_1(E) : \int_E V^r(x)\, \mu(dx) < \infty \right\}$$
for any $r > 0$ and any given Lyapunov function $V : E \to [0, \infty)$, that is, a function which is continuous, bounded on bounded sets, and, in the case of unbounded $E$, satisfies $\lim_{\rho_E(x, \bar{x}) \to \infty} V(x) = \infty$ for some fixed point $\bar{x} \in E$. For brevity, for any $f \in \bar{B}_b(E)$ and any signed Borel measure $\mu$ on $E$, we will write $\langle f, \mu \rangle$ for $\int_E f(x)\, \mu(dx)$. As usual, $\mathrm{supp}\, \mu$ will stand for the support of $\mu \in \mathcal{M}_{fin}(E)$.

To evaluate the distance between probability measures, we will use the so-called Fortet-Mourier distance (see e.g. [24]), defined as follows:
$$d_{FM}(\mu_1, \mu_2) = \sup\{ |\langle f, \mu_1 - \mu_2 \rangle| : f \in \mathrm{Lip}_{FM}(E) \} \quad \text{for } \mu_1, \mu_2 \in \mathcal{M}_1(E).$$
Let us indicate that, under the assumption that $(E, \rho_E)$ is a Polish space, the convergence in $d_{FM}$ is equivalent to the weak convergence of probability measures, and, moreover, the space $(\mathcal{M}_1(E), d_{FM})$ is complete (for the proofs of both these facts see e.g. [13]).

A function $P : E \times \mathcal{B}_E \to [0,1]$ is called a (sub)stochastic kernel if, for any fixed $A \in \mathcal{B}_E$, $P(\cdot, A) : E \to [0,1]$ is a Borel measurable map, and, for any fixed $x \in E$, $P(x, \cdot) : \mathcal{B}_E \to [0,1]$ is a (sub)probability Borel measure. For any two kernels $P : E \times \mathcal{B}_E \to [0,1]$ and $R : E \times \mathcal{B}_E \to [0,1]$, we can define their composition $PR : E \times \mathcal{B}_E \to [0,1]$, given by
$$PR(x, A) = \int_E P(y, A)\, R(x, dy) \quad \text{for } x \in E \text{ and } A \in \mathcal{B}_E. \tag{1.1}$$
Following this rule, for any (sub)stochastic kernel $P : E \times \mathcal{B}_E \to [0,1]$, we can define its $n$-th step kernels $P^n : E \times \mathcal{B}_E \to [0,1]$, inductively on $n \in \mathbb{N}$, by setting $P^n = P P^{n-1}$, where $P^0$ is given by $P^0(x, A) = \delta_x(A)$ for every $x \in E$ and any $A \in \mathcal{B}_E$.

Moreover, for any stochastic kernel $P$, we can define a regular Markov operator $(\cdot)P : \mathcal{M}_{fin}(E) \to \mathcal{M}_{fin}(E)$ and its dual operator $P(\cdot) : B_b(E) \to B_b(E)$ in the following way:
$$\mu P(A) = \int_E P(x, A)\, \mu(dx) \quad \text{for } \mu \in \mathcal{M}_{fin}(E),\ A \in \mathcal{B}_E, \tag{1.2}$$
$$P f(x) = \int_E f(y)\, P(x, dy) \quad \text{for } f \in B_b(E),\ x \in E. \tag{1.3}$$
Obviously, $\langle f, \mu P \rangle = \langle P f, \mu \rangle$ for any $f \in B_b(E)$ and any $\mu \in \mathcal{M}_{fin}(E)$. Moreover, note that any operator $P(\cdot)$ of the form (1.3) can be extended, in the usual way, to a linear operator on $\bar{B}_b(E)$, preserving the duality property, and hence it is reasonable to apply $P(\cdot)$ to any Lyapunov function. For notational simplicity, we shall use the same symbol for the extension as for the original operator on $B_b(E)$. An operator $(\cdot)P$, given by (1.2), is said to be Markov-Feller if $P f \in C_b(E)$ for every $f \in C_b(E)$.

We call $\mu_* \in \mathcal{M}_{fin}(E)$ an invariant measure of $(\cdot)P$ if $\mu_* P = \mu_*$.
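The duality $\langle f, \mu P \rangle = \langle P f, \mu \rangle$ and the composition rule (1.1) are easy to see in the finite-state case, where a kernel is simply a row-stochastic matrix. The following minimal sketch (with an arbitrarily chosen 3-state kernel, a measure and a function of our own choosing, not part of the model in this paper) illustrates both identities:

```python
# Finite-state illustration: a stochastic kernel P on E = {0, 1, 2} is a
# row-stochastic matrix; mu P acts on measures, P f acts on functions.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]
mu = [0.2, 0.5, 0.3]          # a probability measure on E
f  = [1.0, -2.0, 4.0]         # a bounded function on E

def measure_action(mu, P):    # (mu P)(A) = sum_x P(x, A) mu({x}), cf. (1.2)
    return [sum(mu[x] * P[x][a] for x in range(len(mu))) for a in range(len(P[0]))]

def function_action(P, f):    # (P f)(x) = sum_y f(y) P(x, {y}), cf. (1.3)
    return [sum(P[x][y] * f[y] for y in range(len(f))) for x in range(len(P))]

def pairing(f, mu):           # <f, mu> = sum_x f(x) mu({x})
    return sum(fx * mx for fx, mx in zip(f, mu))

# Duality: <f, mu P> = <P f, mu>
lhs = pairing(f, measure_action(mu, P))
rhs = pairing(function_action(P, f), mu)
assert abs(lhs - rhs) < 1e-12

# Composition rule (1.1): the two-step kernel P^2 = P P is again stochastic
P2 = [[sum(P[x][y] * P[y][a] for y in range(3)) for a in range(3)] for x in range(3)]
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P2)
```

In this finite setting the duality reduces to associativity of matrix multiplication, which is exactly what makes the pushforward of measures and the pullback of functions two views of the same kernel.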
If $(\cdot)P$ has a unique invariant measure $\mu_* \in \mathcal{M}_1(E)$ and there exists $q \in (0,1)$ such that
$$d_{FM}(\mu P^n, \mu_*) \le c(\mu)\, q^n \quad \text{for any } \mu \in \mathcal{M}_1^{V,1}(E),\ n \in \mathbb{N},$$
where $c(\mu)$ is a constant depending only on $\mu$, then $(\cdot)P$ is said to be exponentially ergodic in $d_{FM}$.

Let us consider $E^{\mathbb{N}}$ with the product topology. For every $n \in \mathbb{N}$, define $\phi_n : E^{\mathbb{N}} \to E$ by the formula $\phi_n(\omega) = e_n$, where $\omega = (e_0, e_1, \ldots) \in E^{\mathbb{N}}$. According to [28, Theorem 2.8], for any $\mu \in \mathcal{M}_1(E)$ and any stochastic kernel $P : E \times \mathcal{B}_E \to [0,1]$, there exists $\mathbb{P} \in \mathcal{M}_1(E^{\mathbb{N}})$ such that $(\phi_n)_{n \in \mathbb{N}}$ is a time-homogeneous Markov chain on the probability space $(E^{\mathbb{N}}, \mathcal{B}_{E^{\mathbb{N}}}, \mathbb{P})$ with transition function $P$ and initial measure $\mu$, that is,
$$P^n(x, A) = \mathbb{P}(\phi_{k+n} \in A \,|\, \phi_k = x) \quad \text{for } x \in E,\ A \in \mathcal{B}_E,\ n, k \in \mathbb{N}, \tag{1.4}$$
and $\mu(A) = \mathbb{P}(\phi_0 \in A)$ for $A \in \mathcal{B}_E$. The chain defined as above shall be further called the canonical Markov chain. Clearly, $\mathbb{P}(B)$ may be read as the probability of the event $\{(\phi_n)_{n \in \mathbb{N}} \in B\}$ for any $B \in \mathcal{B}_{E^{\mathbb{N}}}$. Conversely, it is clear that the one-step transition law of any time-homogeneous Markov chain determines a stochastic kernel and the corresponding $n$-step kernels, which satisfy (1.4). As far as the dual operator $P(\cdot)$ is concerned, we have
$$P^n f(x) = \mathbb{E}(f(\phi_n) \,|\, \phi_0 = x) \quad \text{for } x \in E,\ f \in B_b(E),\ n \in \mathbb{N}.$$

A regular Markov semigroup $(P_t)_{t \in \mathbb{R}_+}$ is a family of regular Markov operators $(\cdot)P_t : \mathcal{M}_{fin}(E) \to \mathcal{M}_{fin}(E)$, $t \in \mathbb{R}_+$, which form a semigroup (under composition) with the identity transformation $(\cdot)P_0$ as the unity element. Provided that $(\cdot)P_t$ is a Markov-Feller operator for every $t \in \mathbb{R}_+$, the semigroup $(P_t)_{t \in \mathbb{R}_+}$ is said to be Markov-Feller, too. If, for some $\mu_* \in \mathcal{M}_{fin}(E)$, $\mu_* P_t = \mu_*$ for every $t \in \mathbb{R}_+$, then we call $\mu_*$ an invariant measure of $(P_t)_{t \in \mathbb{R}_+}$.

Let $(\phi(t))_{t \in \mathbb{R}_+}$ be an $E$-valued time-homogeneous Markov process, defined on an arbitrary probability space $(\Omega, \mathcal{F}, \mathbb{P})$, with continuous time parameter $t \in \mathbb{R}_+$.
Suppose that, for any $t \in \mathbb{R}_+$, $P_t : E \times \mathcal{B}_E \to [0,1]$ is defined by
$$P_t(x, A) = \mathbb{P}(\phi(t) \in A \,|\, \phi(0) = x) \quad \text{for } x \in E,\ A \in \mathcal{B}_E,\ t \in \mathbb{R}_+. \tag{1.5}$$
It is well known that these transition probability functions form a semigroup of stochastic kernels under the composition operation defined by (1.1). Thus the family of the corresponding Markov operators $(P_t)_{t \in \mathbb{R}_+}$ is a regular Markov semigroup. The dual operator of $P_t$, $t \in \mathbb{R}_+$, can be expressed in the form $P_t f(x) = \mathbb{E}(f(\phi(t)) \,|\, \phi(0) = x)$.

Now, let $(\phi_n)_{n \in \mathbb{N}}$ be a Markov chain with transition law $P$, and let $(\phi_n^{(1)})_{n \in \mathbb{N}}$, $(\phi_n^{(2)})_{n \in \mathbb{N}}$ be its copies with initial distributions $\mu_1 \in \mathcal{M}_1(E)$ and $\mu_2 \in \mathcal{M}_1(E)$, respectively. A time-homogeneous Markov chain $(\phi_n^{(1)}, \phi_n^{(2)})_{n \in \mathbb{N}}$, evolving on $E^2$ (endowed with the product topology), is said to be a Markovian coupling of $(\phi_n^{(1)})_{n \in \mathbb{N}}$ and $(\phi_n^{(2)})_{n \in \mathbb{N}}$ whenever its transition law $C : E^2 \times \mathcal{B}_{E^2} \to [0,1]$ satisfies
$$C((x, y), A \times E) = P(x, A) \quad \text{and} \quad C((x, y), E \times A) = P(y, A) \quad \text{for any } x, y \in E,\ A \in \mathcal{B}_E,$$
and its initial distribution $\alpha \in \mathcal{M}_1(E^2)$ is such that
$$\alpha(A \times E) = \mu_1(A), \qquad \alpha(E \times A) = \mu_2(A) \quad \text{for any } A \in \mathcal{B}_E.$$
In what follows, we always assume that the coupling is defined canonically on the coordinate space $((E^2)^{\mathbb{N}}, \mathcal{B}_{(E^2)^{\mathbb{N}}})$, endowed with an appropriately constructed measure $\mathbb{C} \in \mathcal{M}_1((E^2)^{\mathbb{N}})$.

Consider an $E$-valued time-homogeneous Markov chain $(\phi_n)_{n \in \mathbb{N}}$ with initial distribution $\mu \in \mathcal{M}_1(E)$ and an $E$-valued time-homogeneous Markov process $(\phi(t))_{t \in \mathbb{R}_+}$ with initial distribution $\nu \in \mathcal{M}_1(E)$. For any function $g \in \mathrm{Lip}_b(E)$, let us introduce $(s_n(g))_{n \in \mathbb{N}}$ and $(s(g)(t))_{t \in \mathbb{R}_+}$, given by
$$s_n(g) = \frac{\sum_{i=0}^{n-1} g(\phi_i)}{\sqrt{n \ln(\ln(n))}} \ \text{ for } n > e, \quad \text{and} \quad s_n(g) = 0 \ \text{ for } n \le e; \tag{1.6}$$
$$s(g)(t) = \frac{\int_0^t g(\phi(s))\, ds}{\sqrt{t \ln(\ln(t))}} \ \text{ for } t > e, \quad \text{and} \quad s(g)(t) = 0 \ \text{ for } t \le e. \tag{1.7}$$
Suppose that $\mu_* \in \mathcal{M}_1(E)$ and $\nu_* \in \mathcal{M}_1(E)$ are the unique invariant measures for $(\phi_n)_{n \in \mathbb{N}}$ and $(\phi(t))_{t \in \mathbb{R}_+}$, respectively. We say that the Markov chain $(g(\phi_n))_{n \in \mathbb{N}}$ satisfies the LIL if, for $\hat{g} = g - \langle g, \mu_* \rangle$ and some $\sigma(\hat{g}) \in (0, \infty)$,
$$\limsup_{n \to \infty} s_n(\hat{g}) = \sigma(\hat{g}) \quad \text{and} \quad \liminf_{n \to \infty} s_n(\hat{g}) = -\sigma(\hat{g}) \quad \mathbb{P}\text{-a.s.}$$
Accordingly, we say that the Markov process $(g(\phi(t)))_{t \in \mathbb{R}_+}$ satisfies the LIL if, for $\bar{g} = g - \langle g, \nu_* \rangle$ and some $\sigma(\bar{g}) \in (0, \infty)$,
$$\limsup_{t \to \infty} s(\bar{g})(t) = \sigma(\bar{g}) \quad \text{and} \quad \liminf_{t \to \infty} s(\bar{g})(t) = -\sigma(\bar{g}) \quad \mathbb{P}\text{-a.s.}$$

To begin with, we shall discuss the structure and assumptions of the model under consideration. Let us indicate that this model was initially introduced in [8], where we have also elaborated on its possible applications. Further, we shall summarise the already known results that are used later in this paper.
Consider a separable Banach space $(H, \|\cdot\|)$ and a closed subset $Y$ of $H$. For any $x \in H$ and any $r > 0$, let $B(x, r)$ denote the open ball in $H$ centered at $x$ and of radius $r$. Let us also fix a topological measure space $(\Theta, \mathcal{B}_\Theta, \vartheta)$ with a finite Borel measure $\vartheta$. With a slight abuse of notation, we will further write only $d\theta$, instead of $\vartheta(d\theta)$. Finally, fix $m \in \mathbb{N}$ and introduce the set of indices $I := \{1, \ldots, m\}$, equipped with the metric $d$ given by
$$d(i, j) = \begin{cases} 0, & i = j, \\ 1, & i \ne j. \end{cases}$$

We shall investigate a random dynamical system $(Y(t))_{t \in \mathbb{R}_+}$ evolving through jumps occurring at random moments $\tau_n$, $n \in \mathbb{N}$, which coincide with the jump times of a Poisson process with a given intensity $\lambda$. In every time interval $[\tau_{n-1}, \tau_n)$, where $\tau_0 = 0$, the system is driven by one of the given continuous semiflows $S_i : \mathbb{R}_+ \times Y \to Y$, $i \in I$. The current semiflow, say $S_i$, is switched to another (or the same) one, $S_j$, with a probability $\pi_{ij}(y)$, depending on the post-jump state $y$. We assume that these place-dependent probabilities constitute a matrix of continuous functions $\pi_{ij} : Y \to [0,1]$, $i, j \in I$, such that
$$\sum_{j \in I} \pi_{ij}(y) = 1 \quad \text{for any } y \in Y,\ i \in I.$$
The above description can be shortly formalised by the following formula:
$$Y(t) = S_{\xi_n}(t - \tau_n, Y_n) \quad \text{for } t \in [\tau_n, \tau_{n+1}), \tag{2.1}$$
where $\xi_n$ is an $I$-valued random variable indicating which semiflow has been chosen after the $n$-th jump, and $Y_n$ is the result of some transformation of the state $Y(\tau_n-)$ just before the jump. The transformation is attained by a function $w_\theta : Y \to Y$, selected randomly among all possible ones, $\{w_\theta : \theta \in \Theta\}$, and further disturbed by adding some random shift $H_n$. Therefore, we can formally write $Y_n = w_{\theta_n}(Y(\tau_n-)) + H_n$.

It is assumed that, given $Y(\tau_n-) = y$, the probability of choosing $w_\theta$ (at the jump time $\tau_n$) is determined by a density function $\theta \mapsto p(y, \theta)$, where $p : Y \times \Theta \to [0, \infty)$ is a continuous map.
Moreover, it is required that the map $(y, \theta) \mapsto w_\theta(y)$ is continuous. Further, we also assume that, for some $\varepsilon > 0$, all the variables $H_n$, $n \in \mathbb{N}$, have a common distribution $\nu_\varepsilon \in \mathcal{M}_1(H)$ supported on $B(0, \varepsilon) \subset H$, and that
$$w_\theta(y) + h \in Y \quad \text{for any } h \in \mathrm{supp}(\nu_\varepsilon),\ \theta \in \Theta,\ y \in Y.$$

We therefore formally consider a stochastic process $(Y(t))_{t \in \mathbb{R}_+}$ of the form (2.1), defined as an interpolation of the discrete-time process $(Y_n)_{n \in \mathbb{N}}$ determined by the recursive formula
$$Y_n = Y(\tau_n) = w_{\theta_n}(S_{\xi_{n-1}}(\Delta\tau_n, Y_{n-1})) + H_n \quad \text{for } n \in \mathbb{N}, \tag{2.2}$$
where $(\tau_n)_{n \in \mathbb{N}}$, $(\theta_n)_{n \in \mathbb{N}}$, $(\xi_n)_{n \in \mathbb{N}}$ and $(H_n)_{n \in \mathbb{N}}$ are certain sequences of random variables (specified below) with values in $\mathbb{R}_+$, $\Theta$, $I$ and $H$, respectively.

The distribution of $(Y_0, \xi_0)$ is fixed arbitrarily. The sequence $(\tau_n)_{n \in \mathbb{N}}$, wherein $\tau_0 = 0$ a.s., is such that $\tau_n \to \infty$ a.s., as $n \to \infty$. The increments $\Delta\tau_{n+1} := \tau_{n+1} - \tau_n$, $n \in \mathbb{N}$, are, in turn, assumed to be mutually independent and identically distributed according to the exponential distribution with intensity $\lambda > 0$. Moreover, the disturbances $(H_n)_{n \in \mathbb{N}}$ are required to be identically distributed with $\nu_\varepsilon$, introduced above. Finally, the chains $(\xi_n)_{n \in \mathbb{N}}$ and $(\theta_n)_{n \in \mathbb{N}}$ are defined, inductively on $n \in \mathbb{N}$, as follows:
$$\mathbb{P}(\xi_{n+1} = j \,|\, Y_{n+1} = y,\ \xi_n = i;\ W_n) = \pi_{ij}(y) \quad \text{for } y \in Y,\ i, j \in I,$$
$$\mathbb{P}(\theta_{n+1} \in D \,|\, S_{\xi_n}(\Delta\tau_{n+1}, Y_n) = y;\ W_n) = \int_D p(y, \theta)\, d\theta \quad \text{for } D \in \mathcal{B}_\Theta,\ y \in Y,$$
where $W_0 = (Y_0, \xi_0)$ and $W_n = (W_0, H_1, \ldots, H_n, \tau_1, \ldots, \tau_n, \theta_1, \ldots, \theta_n, \xi_1, \ldots, \xi_n)$ for $n \in \mathbb{N}$.
We also demand that, for any $n \in \mathbb{N}$, the variables $\Delta\tau_{n+1}$, $H_{n+1}$, $\theta_{n+1}$ and $\xi_{n+1}$ are (mutually) conditionally independent given $W_n$, and that $\Delta\tau_{n+1}$ and $H_{n+1}$ are independent of $W_n$.

Let us now consider the space $X := Y \times I$ with the metric $\rho_c$, given by
$$\rho_c((y_1, i_1), (y_2, i_2)) = \|y_1 - y_2\| + c\, d(i_1, i_2) \quad \text{for } (y_1, i_1), (y_2, i_2) \in X, \tag{2.3}$$
with a sufficiently large constant $c \ge 0$ (defined explicitly in [8]). Now, define $X_n := (Y_n, \xi_n)$ for $n \in \mathbb{N}$.

Given $\mu \in \mathcal{M}_1(X)$, we shall further consider the canonical $(X \times \mathbb{R}_+)$-valued Markov chain $(X_n, \Delta\tau_n)_{n \in \mathbb{N}}$ with initial distribution $\mu \otimes \delta_0$, defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, where $\Omega := (X \times \mathbb{R}_+)^{\mathbb{N}}$ and $\mathcal{F} := \mathcal{B}_\Omega$, whose transition law $\Pi : (X \times \mathbb{R}_+) \times \mathcal{B}_{X \times \mathbb{R}_+} \to [0,1]$ is given by
$$\Pi((y, i, s), A) = \int_0^\infty \lambda e^{-\lambda t} \int_\Theta p(S_i(t, y), \theta) \int_{\mathrm{supp}(\nu_\varepsilon)} \left( \sum_{j \in I} \mathbb{1}_A(w_\theta(S_i(t, y)) + h,\ j,\ t)\, \pi_{ij}(w_\theta(S_i(t, y)) + h) \right) \nu_\varepsilon(dh)\, d\theta\, dt \tag{2.4}$$
for any $(y, i, s) \in X \times \mathbb{R}_+$ and any $A \in \mathcal{B}_{X \times \mathbb{R}_+}$. Note that $(X_n)_{n \in \mathbb{N}}$ itself is also a time-homogeneous Markov chain, with transition law $P : X \times \mathcal{B}_X \to [0,1]$ satisfying
$$P((y, i), A) = \Pi((x, s), A \times \mathbb{R}_+) \quad \text{for any } x = (y, i) \in X,\ s \in \mathbb{R}_+ \text{ and } A \in \mathcal{B}_X. \tag{2.5}$$
Moreover, we have
$$\Pi((x, s), X \times B) = \int_B \lambda e^{-\lambda t}\, dt \quad \text{for any } (x, s) \in X \times \mathbb{R}_+ \text{ and } B \in \mathcal{B}_{\mathbb{R}_+}.$$

Now, define the continuous-time process $(X(t))_{t \in \mathbb{R}_+}$ on the space $(\Omega, \mathcal{F}, \mathbb{P})$ by setting
$$X(t) = (Y(t), \xi(t)) = (S_{\xi_n}(t - \tau_n, Y_n), \xi_n) \quad \text{for } t \in [\tau_n, \tau_{n+1}). \tag{2.6}$$
One may check that $(X(t))_{t \in \mathbb{R}_+}$ is an $X$-valued time-homogeneous Markov process such that $X(\tau_n) = X_n$ for any $n \in \mathbb{N}$.
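To make the dynamics (2.1)-(2.2) concrete, here is a minimal simulation sketch of the post-jump chain $(Y_n, \xi_n)$ together with the jump times $\tau_n$. Every ingredient below is an illustrative choice of ours, not part of the paper's model: $H = Y = \mathbb{R}$, two contracting semiflows, jump maps $w_\theta(y) = \theta y$ with $\Theta = \{0.3, 0.6\}$ and a constant density $p$, constant switching probabilities $\pi_{ij} = 1/2$, and uniform noise on $(-\varepsilon, \varepsilon)$:

```python
import math
import random

random.seed(1)
lam, eps = 2.0, 0.1

# Two illustrative semiflows S_i(t, y) on Y = R (our choice, not the paper's):
def S(i, t, y):
    return y * math.exp(-t) if i == 0 else y * math.exp(-t) + (1.0 - math.exp(-t))

thetas = [0.3, 0.6]   # Theta = {0.3, 0.6} with counting measure, p(y, .) uniform

def step(y, xi):
    dt = random.expovariate(lam)        # Delta tau_{n+1} ~ Exp(lambda)
    pre = S(xi, dt, y)                  # flow until the next jump time
    theta = random.choice(thetas)       # theta_{n+1} drawn from p(pre, .)
    h = random.uniform(-eps, eps)       # shift H_{n+1} ~ nu_eps on B(0, eps)
    y_new = theta * pre + h             # Y_{n+1} = w_theta(S_xi(dt, Y_n)) + H_{n+1}
    xi_new = random.randrange(2)        # xi_{n+1} ~ pi_{xi, .}(y_new) = (1/2, 1/2)
    return y_new, xi_new, dt

# Simulate the post-jump chain (Y_n, xi_n) along with the arrival times tau_n.
y, xi, tau = 1.0, 0, 0.0
traj = [(tau, y, xi)]
for _ in range(1000):
    y, xi, dt = step(y, xi)
    tau += dt
    traj.append((tau, y, xi))

assert len(traj) == 1001
assert all(t0 < t1 for (t0, _, _), (t1, _, _) in zip(traj, traj[1:]))  # tau_n increasing
assert all(abs(yv) < 10 for _, yv, _ in traj)  # |Y_{n+1}| <= 0.6|Y_n| + 0.7, so bounded here
```

Between the recorded jumps, the continuous-time state (2.1) is recovered by flowing the current semiflow from the last post-jump location, which is exactly how (2.6) interpolates the chain.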
The Markov transition semigroup associated with the process $(X(t))_{t \in \mathbb{R}_+}$ shall be denoted by $(P_t)_{t \in \mathbb{R}_+}$.

Summarising this part of the paper, let us indicate that, if $X_0$ is distributed according to some measure $\mu \in \mathcal{M}_1(X)$, then we get
$$\mathbb{P}((X_n, \Delta\tau_n) \in D) = (\mu \otimes \delta_0)\Pi^n(D) \quad \text{for any } D \in \mathcal{B}_{X \times \mathbb{R}_+},$$
$$\mathbb{P}(\Delta\tau_n \in B) = \int_B \lambda e^{-\lambda t}\, dt \quad \text{for any } B \in \mathcal{B}_{\mathbb{R}_+},\ n \in \mathbb{N}, \tag{2.7}$$
$$\mathbb{P}(X(t) \in A) = \mu P_t(A) \quad \text{for any } A \in \mathcal{B}_X,\ t \in \mathbb{R}_+. \tag{2.8}$$

Let us further assume that there exist $\bar{y} \in Y$, $\alpha \in \mathbb{R}$ and positive constants $L$, $\bar{L}$, $L_w$, $L_\pi$, $L_p$, $\delta_\pi$, $\delta_p$, $r \in (0,1]$ such that
$$L^r L_w + (2 + r)\frac{\alpha}{\lambda} < 1, \tag{2.9}$$
and, for all $i, i_1, i_2 \in I$, $y_1, y_2 \in Y$, $t \in \mathbb{R}_+$, the following conditions hold:
$$\sup_{y \in Y} \int_0^\infty \lambda e^{-\lambda t} \int_\Theta \|w_\theta(S_i(t, \bar{y})) - \bar{y}\|^r\, p(S_i(t, y), \theta)\, d\theta\, dt < \infty, \tag{A1}$$
$$\|S_{i_1}(t, y_1) - S_{i_2}(t, y_2)\| \le L e^{\alpha t} \|y_1 - y_2\| + t \bar{L}\, d(i_1, i_2), \tag{A2}$$
$$\int_\Theta p(y_1, \theta)\, \|w_\theta(y_1) - w_\theta(y_2)\|^r\, d\theta \le L_w \|y_1 - y_2\|^r, \tag{A3}$$
$$\sum_{j \in I} |\pi_{ij}(y_1) - \pi_{ij}(y_2)| \le L_\pi \|y_1 - y_2\|, \qquad \int_\Theta |p(y_1, \theta) - p(y_2, \theta)|\, d\theta \le L_p \|y_1 - y_2\|, \tag{A4}$$
$$\sum_{j \in I} \min\{\pi_{i_1 j}(y_1), \pi_{i_2 j}(y_2)\} \ge \delta_\pi, \qquad \int_{\Theta(y_1, y_2)} \min\{p(y_1, \theta), p(y_2, \theta)\}\, d\theta \ge \delta_p, \tag{A5}$$
where $\Theta(y_1, y_2) := \{\theta \in \Theta : \|w_\theta(y_1) - w_\theta(y_2)\| \le L_w \|y_1 - y_2\|\}$. Hypotheses (A1)-(A5) and their reasonableness are discussed in detail e.g. in [8, 10, 11].

Suppose that hypotheses (A1)-(A5) hold with constants satisfying (2.9). Then [8, Theorem 4.1] implies that the Markov operator $P$, determined by (2.5), is exponentially ergodic in the distance $d_{FM}$ induced by the metric $\rho_c$ given by (2.3). In fact, the exponential ergodicity itself can be obtained even under slightly weaker assumptions than (A1)-(A5) (cf. [8]). To be more precise, (A1), (A3) and (2.9) may be considered in their weaker versions, as described in [8]. However, to establish the law of the iterated logarithm, we need them as given in [10] and also in this paper.

Fix an arbitrary non-constant function $g \in \mathrm{Lip}_b(X)$.
Further, consider the chain $(X_n)_{n \in \mathbb{N}}$ governed by $P$, defined in (2.5), with initial distribution $\mu \in \mathcal{M}_1^{V,r}(X)$, where $r \in (0,1]$ is the constant appearing in (2.9), and $V : X \to [0, \infty)$ is the Lyapunov function given by
$$V(y, i) = \|y - \bar{y}\| \quad \text{for every } (y, i) \in X, \tag{2.10}$$
where $\bar{y}$ is determined by (A1). Referring to [10, Theorem 4.1], we know that the chain $(g(X_n))_{n \in \mathbb{N}}$ satisfies the invariance principle for the LIL, and hence it also satisfies the LIL itself (cf. [10, Section 3.2]).

In [8, Corollary 4.5] we have proven that there is a one-to-one correspondence between invariant measures of the operator $P$ and those of the semigroup $(P_t)_{t \in \mathbb{R}_+}$. This obviously implies that $(P_t)_{t \in \mathbb{R}_+}$ has a unique invariant distribution if and only if $P$ admits one, which holds, in particular, whenever conditions (A1)-(A5) and (2.9) are satisfied. The above-mentioned correspondence can be described explicitly, using the Markov operators associated with the stochastic kernels $G, W : X \times \mathcal{B}_X \to [0,1]$ defined as follows:
$$G((y, i), A) = \int_0^\infty \lambda e^{-\lambda t}\, \mathbb{1}_A(S_i(t, y), i)\, dt, \tag{2.11}$$
$$W((y, i), A) = \sum_{j \in I} \int_{\mathrm{supp}(\nu_\varepsilon)} \int_\Theta \mathbb{1}_A(w_\theta(y) + h,\ j)\, \pi_{ij}(w_\theta(y) + h)\, p(y, \theta)\, d\theta\, \nu_\varepsilon(dh) \tag{2.12}$$
for any $(y, i) \in X$, $A \in \mathcal{B}_X$. More precisely, [8, Theorem 4.4] says that if $\mu_* \in \mathcal{M}_1(X)$ is an invariant measure of the Markov operator $P$, then $\nu_* := \mu_* G$ is an invariant measure of the Markov semigroup $(P_t)_{t \in \mathbb{R}_+}$, and $\nu_* W = \mu_*$. Conversely, if $\nu_* \in \mathcal{M}_1(X)$ is an invariant measure of $(P_t)_{t \in \mathbb{R}_+}$, then $\mu_* := \nu_* W$ is an invariant measure of $P$, and $\mu_* G = \nu_*$.

Finally, let us denote by $(N_t)_{t \in \mathbb{R}_+}$ the renewal counting process with arrival times $\tau_n$, $n \in \mathbb{N}$, i.e.
$$N_t := \max\{n \in \mathbb{N} : \tau_n \le t\} \quad \text{for } t \in \mathbb{R}_+. \tag{2.13}$$

Consider the Markov chain $(X_n)_{n \in \mathbb{N}}$ with transition law $P$, given by (2.5), as well as the piecewise deterministic Markov process $(X(t))_{t \in \mathbb{R}_+}$, defined by (2.6).
Further, recall that, under hypotheses (A1)-(A5) and (2.9), both the semigroup $(P_t)_{t \in \mathbb{R}_+}$ and the operator $P$ possess unique invariant distributions, denoted by $\nu_* \in \mathcal{M}_1(X)$ and $\mu_* \in \mathcal{M}_1(X)$, respectively. Moreover, we know that $\nu_* = \mu_* G$, where $G$ is defined in (2.11).

Let $g \in \mathrm{Lip}_b(X)$ be an arbitrary non-constant function, and define $\bar{g} = g - \langle g, \nu_* \rangle$. Following (1.6) and (1.7), we can introduce
$$s_n(G\bar{g}) = \frac{\sum_{i=0}^{n-1} G\bar{g}(X_i)}{\sqrt{n \ln(\ln(n))}} \ \text{ for } n > e, \qquad s_n(G\bar{g}) = 0 \ \text{ for } n \le e, \tag{3.1}$$
$$s(\bar{g})(t) = \frac{\int_0^t \bar{g}(X(s))\, ds}{\sqrt{t \ln(\ln(t))}} \ \text{ for } t > e, \quad \text{and} \quad s(\bar{g})(t) = 0 \ \text{ for } t \le e. \tag{3.2}$$
We are now ready to state our main result, whose proof is presented in the remainder of the paper.

Theorem 3.1.
Suppose that conditions (A1)-(A5) hold with the constants satisfying (2.9). Then, for any non-constant function $g \in \mathrm{Lip}_b(X)$ and any initial measure $\mu \in \mathcal{M}_1^{V,r}(X)$, with $V$ given by (2.10), the process $(g(X(t)))_{t \in \mathbb{R}_+}$ satisfies the LIL.

3.1 The proof of the main result

According to the definition introduced in Section 1.3, we need to prove that
$$\limsup_{t \to \infty} s(\bar{g})(t) = \sigma(\bar{g}) \quad \text{and} \quad \liminf_{t \to \infty} s(\bar{g})(t) = -\sigma(\bar{g}) \quad \mathbb{P}\text{-a.s.}$$
for some $\sigma(\bar{g}) \in (0, \infty)$.

Recall that, for any $t \in \mathbb{R}_+$, $N_t$ is given by (2.13). Further, note that, whenever $t \ge \tau_3$, which in other words means that $N_t > e$, we have
$$s(\bar{g})(t) = \frac{\sqrt{N_t \ln(\ln(N_t))}}{\sqrt{t \ln(\ln(t))}} \left( \frac{1}{\sqrt{N_t \ln(\ln(N_t))}} \sum_{i=0}^{N_t - 1} \int_{\tau_i}^{\tau_{i+1}} \bar{g}(X(s))\, ds + R_t(\bar{g}) \right),$$
where
$$R_t(\bar{g}) := \frac{1}{\sqrt{N_t \ln(\ln(N_t))}} \int_{\tau_{N_t}}^t \bar{g}(X(s))\, ds.$$
We can further write
$$s(\bar{g})(t) = \frac{\sqrt{N_t \ln(\ln(N_t))}}{\sqrt{t \ln(\ln(t))}} \left( \frac{1}{\sqrt{N_t \ln(\ln(N_t))}} \sum_{i=0}^{N_t - 1} \left( \int_{\tau_i}^{\tau_{i+1}} \bar{g}(X(s))\, ds - \frac{1}{\lambda} G\bar{g}(X_i) \right) + R_t(\bar{g}) + \frac{1}{\lambda} s_{N_t}(G\bar{g}) \right), \tag{3.3}$$
where $s_{N_t}(G\bar{g})$ is defined as in (3.1). Referring to the elementary renewal theorem, which says that
$$\lim_{t \to \infty} \frac{N_t}{t} = \lambda \quad \mathbb{P}\text{-a.s.}, \tag{3.4}$$
we obtain that
$$\lim_{t \to \infty} \frac{\sqrt{N_t \ln(\ln(N_t))}}{\sqrt{t \ln(\ln(t))}} = \sqrt{\lambda} \quad \mathbb{P}\text{-a.s.} \tag{3.5}$$
For any $t \in \mathbb{R}_+$, let us now introduce the following notation:
$$I_1(t) := \frac{1}{\sqrt{N_t \ln(\ln(N_t))}} \sum_{i=0}^{N_t - 1} \left( \int_{\tau_i}^{\tau_{i+1}} \bar{g}(X(s))\, ds - \frac{1}{\lambda} G\bar{g}(X_i) \right), \qquad I_2(t) := R_t(\bar{g}), \qquad I_3(t) := \frac{1}{\lambda} s_{N_t}(G\bar{g}).$$
The asymptotic behaviour of each of these components shall be analysed separately.

First of all, we have
$$|R_t(\bar{g})| \le \frac{\|\bar{g}\|_\infty\, \Delta\tau_{N_t + 1}}{\sqrt{N_t \ln(\ln(N_t))}} \quad \mathbb{P}\text{-a.s. for } t \ge \tau_3. \tag{3.6}$$
Observe that the right-hand side of the above inequality tends to zero. Indeed, note that
$$\sum_{n=3}^\infty \mathbb{P}\left( \frac{\Delta\tau_{n+1}}{\sqrt{n \ln(\ln(n))}} \ge \varepsilon \right) = \sum_{n=3}^\infty e^{-\lambda \varepsilon \sqrt{n \ln(\ln(n))}} < \infty.$$
Hence, due to the Borel-Cantelli lemma,
$$\lim_{n \to \infty} \frac{\Delta\tau_{n+1}}{\sqrt{n \ln(\ln(n))}} = 0 \quad \mathbb{P}\text{-a.s.}$$
, and hence also
$$\lim_{t \to \infty} \frac{\Delta\tau_{N_t + 1}}{\sqrt{N_t \ln(\ln(N_t))}} = 0 \quad \mathbb{P}\text{-a.s.},$$
which follows from (3.4). Finally, referring to (3.6), we see that
$$\lim_{t \to \infty} I_2(t) = 0 \quad \mathbb{P}\text{-a.s.} \tag{3.7}$$

While investigating $I_3$, we shall refer to [10, Theorem 4.1]. Note that the Markov chain $(X_n)_{n \in \mathbb{N}}$, for which the sequence $(s_n(G\bar{g}))_{n \in \mathbb{N}}$ is defined, satisfies all the assumptions required in [10, Theorem 4.1]. Therefore the only conditions that need to be proven are $G\bar{g} \in \mathrm{Lip}_b(X)$ and $\langle G\bar{g}, \mu_* \rangle = 0$, where the latter follows immediately from the definition of $\bar{g}$ and the fact that $\langle G\bar{g}, \mu_* \rangle = \langle \bar{g}, \nu_* \rangle$ (cf. [8, Theorem 4.4]). Since the boundedness of $G\bar{g}$ is obvious, it remains to show its Lipschitz continuity. Note that, according to (A2), we have
$$\begin{aligned}
|G\bar{g}(y_1, i_1) - G\bar{g}(y_2, i_2)| &\le \int_0^\infty \lambda e^{-\lambda t}\, |g(S_{i_1}(t, y_1), i_1) - g(S_{i_2}(t, y_2), i_2)|\, dt \\
&\le |g|_{Lip} \int_0^\infty \lambda e^{-\lambda t} \left( \|S_{i_1}(t, y_1) - S_{i_2}(t, y_2)\| + c\, d(i_1, i_2) \right) dt \\
&\le |g|_{Lip} \left( \lambda L \|y_1 - y_2\| \int_0^\infty e^{-(\lambda - \alpha)t}\, dt + d(i_1, i_2)\, \bar{L} \int_0^\infty \lambda e^{-\lambda t}\, t\, dt + c\, d(i_1, i_2) \right) \\
&= |g|_{Lip} \left( \frac{\lambda L}{\lambda - \alpha} \|y_1 - y_2\| + d(i_1, i_2) \left( \frac{\bar{L}}{\lambda} + c \right) \right) \\
&\le |g|_{Lip} \left( \frac{\lambda L}{\lambda - \alpha} + \frac{\bar{L}}{\lambda} + c \right) \rho_c((y_1, i_1), (y_2, i_2)) \quad \text{for any } (y_1, i_1), (y_2, i_2) \in X,
\end{aligned}$$
which guarantees that $G\bar{g} \in \mathrm{Lip}_b(X)$. Therefore, it follows from [10, Theorem 4.1] that
$$\limsup_{n \to \infty} s_n(G\bar{g}) = \sigma(G\bar{g}) \quad \text{and} \quad \liminf_{n \to \infty} s_n(G\bar{g}) = -\sigma(G\bar{g}) \quad \mathbb{P}\text{-a.s.}, \tag{3.8}$$
where, for any function $h \in \mathrm{Lip}_b(X)$,
$$\sigma^2(h) = \mathbb{E}_{\mu_*}\left( \left( \sum_{i=0}^\infty P^i h(X_1) - \sum_{i=0}^\infty P^i h(X_0) + h(X_0) \right)^2 \right),$$
and $\mathbb{E}_{\mu_*}$ is the expected value corresponding to the probability measure $\mathbb{P}_{\mu_*}$, defined on $(\Omega, \mathcal{F})$ in such a way that $\mathbb{P}_{\mu_*}(X_0 \in A) = \mu_*(A)$ for $A \in \mathcal{B}_X$. Hence, due to (3.8) and (3.4), we obtain
$$\limsup_{t \to \infty} I_3(t) = \frac{1}{\lambda} \limsup_{t \to \infty} s_{N_t}(G\bar{g}) = \frac{1}{\lambda} \sigma(G\bar{g}) \quad \mathbb{P}\text{-a.s.}, \quad \text{and} \quad \liminf_{t \to \infty} I_3(t) = -\frac{1}{\lambda} \sigma(G\bar{g}) \quad \mathbb{P}\text{-a.s.} \tag{3.9}$$
Note that $\sigma(G\bar{g}) < \infty$, which is explained in detail in [10].

Finally, to analyse the asymptotic behaviour of $I_1(t)$, we need to appeal to [17, Theorem 1], whose assertion guarantees the LIL for certain square-integrable martingales. Let us first introduce the sequence $(M_n(\bar{g}))_{n \in \mathbb{N}}$, given by $M_0(\bar{g}) = 0$ and
$$M_n(\bar{g}) = \sum_{k=0}^{n-1} \left( \int_{\tau_k}^{\tau_{k+1}} \bar{g}(X(s))\, ds - \frac{1}{\lambda} G\bar{g}(X_k) \right) \quad \text{for } n \in \mathbb{N}. \tag{3.10}$$
Note that $(M_n(\bar{g}))_{n \in \mathbb{N}}$ is a martingale with respect to the natural filtration $(\mathcal{F}_n)_{n \in \mathbb{N}}$ of $(X_n, \Delta\tau_n)_{n \in \mathbb{N}}$. Indeed, we have
$$\begin{aligned}
Z_{n+1}(\bar{g}) := M_{n+1}(\bar{g}) - M_n(\bar{g}) &= \int_{\tau_n}^{\tau_{n+1}} \bar{g}(X(s))\, ds - \frac{1}{\lambda} G\bar{g}(X_n) \\
&= \int_{\tau_n}^{\tau_{n+1}} \bar{g}(S_{\xi_n}(s - \tau_n, Y_n), \xi_n)\, ds - \frac{1}{\lambda} G\bar{g}(Y_n, \xi_n) \\
&= \int_0^{\Delta\tau_{n+1}} \bar{g}(S_{\xi_n}(s, Y_n), \xi_n)\, ds - \frac{1}{\lambda} G\bar{g}(Y_n, \xi_n),
\end{aligned} \tag{3.11}$$
whence, appealing to (2.7), for any $(y, i, u) \in X \times \mathbb{R}_+$, we get
$$\begin{aligned}
\mathbb{E}(Z_{n+1}(\bar{g}) \,|\, Y_n = y,\ \xi_n = i,\ \Delta\tau_n = u) &= \int_{\mathbb{R}_+} \int_0^t \bar{g}(S_i(s, y), i)\, ds\ \mathbb{P}(\Delta\tau_{n+1} \in dt) - \frac{1}{\lambda} G\bar{g}(y, i) \\
&= \int_0^\infty \lambda e^{-\lambda t} \int_0^t \bar{g}(S_i(s, y), i)\, ds\, dt - \frac{1}{\lambda} G\bar{g}(y, i) \\
&= \int_0^\infty \left( \int_s^\infty \lambda e^{-\lambda t}\, dt \right) \bar{g}(S_i(s, y), i)\, ds - \frac{1}{\lambda} G\bar{g}(y, i) \\
&= \int_0^\infty e^{-\lambda s}\, \bar{g}(S_i(s, y), i)\, ds - \frac{1}{\lambda} G\bar{g}(y, i) = 0,
\end{aligned}$$
which, by the Markov property of the chain $(X_n, \Delta\tau_n)_{n \in \mathbb{N}}$, implies that $(M_n(\bar{g}))_{n \in \mathbb{N}}$ is a martingale. Further, we also obtain
$$\mathbb{E}\left( Z_{n+1}^2(\bar{g}) \right) \le 2\, \mathbb{E}\left( \left( \int_0^{\Delta\tau_{n+1}} \bar{g}(S_{\xi_n}(s, Y_n), \xi_n)\, ds \right)^2 \right) + 2\, \mathbb{E}\left( \left( \frac{1}{\lambda} G\bar{g}(Y_n, \xi_n) \right)^2 \right) \le 2 \|\bar{g}\|_\infty^2\, \mathbb{E}\left( (\Delta\tau_{n+1})^2 \right) + \frac{2}{\lambda^2} \|\bar{g}\|_\infty^2 = \frac{6}{\lambda^2} \|\bar{g}\|_\infty^2,$$
which means that the martingale increments $Z_n(\bar{g}) = M_n(\bar{g}) - M_{n-1}(\bar{g})$, $n \in \mathbb{N}$, are uniformly bounded in the $L^2(\mathbb{P})$-norm, and thus the martingale itself is square-integrable, as required in [17, Theorem 1].

Now, define
$$h_n(\bar{g}) := \mathbb{E}\left( M_n^2(\bar{g}) \right) \quad \text{for } n \in \mathbb{N}.$$
It will be clarified later on (in Section 3.2) that there exists $\bar{n} \in \mathbb{N}$ such that $h_n(\bar{g}) > 0$ for every $n \ge \bar{n}$.
We need to establish the following conditions:
$$\lim_{n \to \infty} \frac{1}{h_n(\bar{g})} \sum_{l=1}^n Z_l^2(\bar{g}) = 1 \quad \mathbb{P}\text{-a.s.}, \tag{3.12}$$
$$\sum_{n=\bar{n}}^\infty h_n^{-2}(\bar{g})\, \mathbb{E}\left( Z_n^4(\bar{g})\, \mathbb{1}_{\{|Z_n(\bar{g})| < \upsilon \sqrt{h_n(\bar{g})}\}} \right) < \infty \quad \text{for every } \upsilon > 0, \tag{3.13}$$
$$\sum_{n=\bar{n}}^\infty h_n^{-1/2}(\bar{g})\, \mathbb{E}\left( |Z_n(\bar{g})|\, \mathbb{1}_{\{|Z_n(\bar{g})| \ge \vartheta \sqrt{h_n(\bar{g})}\}} \right) < \infty \quad \text{for every } \vartheta > 0, \tag{3.14}$$
which, in view of [17, Theorem 1], imply the LIL for the martingale $(M_n(\bar{g}))_{n \in \mathbb{N}}$. To be more precise, according to [17, Theorem 1], the sequence $(M_n(\bar{g}))_{n \in \mathbb{N}}$ satisfies the Strassen invariance principle for the LIL with the normalising factors
$$\sqrt{h_n(\bar{g}) \ln(\ln(h_n(\bar{g})))}, \quad n \ge \bar{n}.$$
In particular, it also satisfies the LIL itself, which, in this case, means that
$$\limsup_{n \to \infty} \frac{M_n(\bar{g})}{\sqrt{h_n(\bar{g}) \ln(\ln(h_n(\bar{g})))}} = 1 \quad \mathbb{P}\text{-a.s.}, \qquad \liminf_{n \to \infty} \frac{M_n(\bar{g})}{\sqrt{h_n(\bar{g}) \ln(\ln(h_n(\bar{g})))}} = -1 \quad \mathbb{P}\text{-a.s.},$$
and so, according to (3.4), we further obtain
$$\limsup_{t \to \infty} \frac{M_{N_t}(\bar{g})}{\sqrt{h_{N_t}(\bar{g}) \ln(\ln(h_{N_t}(\bar{g})))}} = 1 \quad \mathbb{P}\text{-a.s.}, \qquad \liminf_{t \to \infty} \frac{M_{N_t}(\bar{g})}{\sqrt{h_{N_t}(\bar{g}) \ln(\ln(h_{N_t}(\bar{g})))}} = -1 \quad \mathbb{P}\text{-a.s.}$$
Let the part of the proof in which we verify (3.12)-(3.14) be postponed to the subsequent section, namely Section 3.2, in which we shall also prove that
$$\lim_{t \to \infty} \frac{\sqrt{h_{N_t}(\bar{g}) \ln(\ln(h_{N_t}(\bar{g})))}}{\sqrt{N_t \ln(\ln(N_t))}} = \tilde{\sigma}(\bar{g}) \quad \mathbb{P}\text{-a.s.}, \tag{3.15}$$
where
$$\tilde{\sigma}^2(\bar{g}) := \mathbb{E}_{\mu_*}\left( Z_1^2(\bar{g}) \right) = \mathbb{E}_{\mu_*}\left( M_1^2(\bar{g}) \right) \in (0, \infty). \tag{3.16}$$
Then, provided that (3.12)-(3.14) and (3.15) are established, we obtain
$$\limsup_{t \to \infty} I_1(t) = \tilde{\sigma}(\bar{g}) \quad \text{and} \quad \liminf_{t \to \infty} I_1(t) = -\tilde{\sigma}(\bar{g}) \quad \mathbb{P}\text{-a.s.} \tag{3.17}$$
Finally, combining (3.3) with (3.5), (3.7), (3.9) and (3.17), we obtain
$$\limsup_{t \to \infty} s(\bar{g})(t) = \sigma(\bar{g}) \quad \text{and} \quad \liminf_{t \to \infty} s(\bar{g})(t) = -\sigma(\bar{g}) \quad \mathbb{P}\text{-a.s.}, \tag{3.18}$$
where
$$\sigma(\bar{g}) := \sqrt{\lambda}\left( \frac{1}{\lambda} \sigma(G\bar{g}) + \tilde{\sigma}(\bar{g}) \right) \in (0, \infty).$$
The proof of Theorem 3.1 is therefore completed (provided that (3.12)-(3.15) are established, which shall be done in the upcoming section).
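As a quick numerical sanity check of the zero-mean increment computation behind (3.11), one can verify the Fubini identity $\int_0^\infty \lambda e^{-\lambda t} \int_0^t \bar{g}(S_i(s,y),i)\, ds\, dt = \frac{1}{\lambda} G\bar{g}(y,i)$ in a toy instance, with flow $S(s,y) = y e^{-s}$ and observable $g(y,i) = y$ (both our own illustrative choices, not the paper's model), where both sides equal $y/(\lambda+1)$ in closed form:

```python
import math

lam, y0 = 2.0, 1.0

def flow(s, y):          # toy semiflow S(s, y) = y * exp(-s) (illustrative choice)
    return y * math.exp(-s)

def g(y):                # toy observable g(y, i) = y
    return y

# Midpoint-rule quadrature on [0, T]; the exponential tail beyond T is negligible.
dt, T = 1e-3, 30.0
n = int(T / dt)

# Right-hand side: (1/lam) * G g(y0) = (1/lam) int_0^inf lam e^{-lam t} g(S(t, y0)) dt
Gg = sum(lam * math.exp(-lam * (k + 0.5) * dt) * g(flow((k + 0.5) * dt, y0)) * dt
         for k in range(n))
rhs = Gg / lam

# Left-hand side: int_0^inf lam e^{-lam t} (int_0^t g(S(s, y0)) ds) dt;
# here the inner integral is y0 * (1 - e^{-t}) in closed form.
lhs = sum(lam * math.exp(-lam * (k + 0.5) * dt) * y0 * (1.0 - math.exp(-(k + 0.5) * dt)) * dt
          for k in range(n))

exact = y0 / (lam + 1.0)   # closed-form value of both sides for this toy case
assert abs(lhs - exact) < 1e-3
assert abs(rhs - exact) < 1e-3
assert abs(lhs - rhs) < 1e-3
```

The agreement of the two quadratures mirrors the interchange of the order of integration used in the displayed computation of $\mathbb{E}(Z_{n+1}(\bar{g}) \mid Y_n, \xi_n, \Delta\tau_n)$.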
Let us consider
\[
Z:=\left\{((x_1,t),(x_2,s))\in(X\times\mathbb{R}_+)^2:\ t=s\right\},
\]
and, for any $A\in\mathcal{B}(X^2)$, define
\[
(A)_Z:=\left\{((x_1,t),(x_2,t))\in Z:\ (x_1,x_2)\in A\right\}.
\]
Further, introduce $\widetilde Q:Z\times\mathcal{B}(Z)\to[0,1]$ given by
\[
\widetilde Q\left(((x_1,s),(x_2,s)),B\right)=\int_{\operatorname{supp}(\nu_{\varepsilon})}\int_0^{\infty}\lambda e^{-\lambda t}\int_{\Theta}\sum_{j\in I}\mathbb{1}_B\left(w_j(x_1,x_2,t,\theta,h)\right)\pi_j(x_1,x_2,t,\theta,h)\,p(x_1,x_2,t,\theta)\,d\theta\,dt\,\nu_{\varepsilon}(dh)\tag{3.19}
\]
for $((x_1,s),(x_2,s))\in Z$ and $B\in\mathcal{B}(Z)$ such that $x_1=(y_1,i_1)$, $x_2=(y_2,i_2)$, where
\[
\begin{aligned}
w_j(x_1,x_2,t,\theta,h)&=\left(\left(w_{\theta}(S_{i_1}(t,y_1))+h,\,j,\,t\right),\left(w_{\theta}(S_{i_2}(t,y_2))+h,\,j,\,t\right)\right),\\
\pi_j(x_1,x_2,t,\theta,h)&=\pi_{i_1,j}\left(w_{\theta}(S_{i_1}(t,y_1))+h\right)\wedge\pi_{i_2,j}\left(w_{\theta}(S_{i_2}(t,y_2))+h\right),\\
p(x_1,x_2,t,\theta)&=p\left(S_{i_1}(t,y_1),\theta\right)\wedge p\left(S_{i_2}(t,y_2),\theta\right).
\end{aligned}
\]
Then $\widetilde Q$ is a substochastic kernel, and, for any $x_1,x_2\in X$, $t\in\mathbb{R}_+$, $B\in\mathcal{B}(X)$, it satisfies the following properties:
\[
\widetilde Q\left(((x_1,t),(x_2,t)),(B\times X)_Z\right)\le\Pi\left((x_1,t),B\times\mathbb{R}_+\right),\qquad
\widetilde Q\left(((x_1,t),(x_2,t)),(X\times B)_Z\right)\le\Pi\left((x_2,t),B\times\mathbb{R}_+\right).
\]
For any given distribution $m\in\mathcal{M}_1(X^2)$, on the coordinate space $(\widetilde\Omega,\widetilde{\mathcal{F}})$ associated with $Z$, we can now construct a probability measure $\widetilde{\mathcal{C}}$ so that
\[
\widetilde{\mathcal{C}}\left(\left(\widetilde X_0^{(1)},\widetilde X_0^{(2)}\right)\in A,\ \widetilde{\Delta\tau}_0=0\right)=m(A)\quad\text{for any}\quad A\in\mathcal{B}(X^2),
\]
and the canonical Markovian coupling $((\widetilde X_n^{(1)},\widetilde{\Delta\tau}_n),(\widetilde X_n^{(2)},\widetilde{\Delta\tau}_n))_{n\in\mathbb{N}_0}$ of $\Pi$, defined on this space, is governed by the transition probability kernel of the form
\[
\widetilde C=\widetilde Q+\widetilde R,
\]
where $\widetilde Q$ is defined by (3.19), and $\widetilde R$ stands for a complementary substochastic kernel on $Z\times\mathcal{B}(Z)$.
The latter can be specified by defining the corresponding family of measures on rectangles $\{A\times B:\ A,B\in\mathcal{B}(X)\}$ as follows:
\[
\begin{aligned}
\widetilde R\left(((x_1,t),(x_2,t)),(A\times B)_Z\right)=\ &\frac{1}{1-\widetilde Q\left(((x_1,t),(x_2,t)),Z\right)}\\
&\times\left(\Pi\left((x_1,t),A\times\mathbb{R}_+\right)-\widetilde Q\left(((x_1,t),(x_2,t)),(A\times X)_Z\right)\right)\\
&\times\left(\Pi\left((x_2,t),B\times\mathbb{R}_+\right)-\widetilde Q\left(((x_1,t),(x_2,t)),(X\times B)_Z\right)\right),
\end{aligned}
\]
when $\widetilde Q(((x_1,t),(x_2,t)),Z)<1$, and $\widetilde R(((x_1,t),(x_2,t)),(A\times B)_Z)=0$ otherwise.

Now, define $Q:X^2\times\mathcal{B}(X^2)\to[0,1]$ and $C:X^2\times\mathcal{B}(X^2)\to[0,1]$ as the kernels which, for any $(x_1,x_2)\in X^2$, $t\in\mathbb{R}_+$ and $A\in\mathcal{B}(X^2)$, satisfy
\[
\begin{aligned}
Q((x_1,x_2),A)&=\widetilde Q\left(((x_1,0),(x_2,0)),(A)_Z\right)=\widetilde Q\left(((x_1,t),(x_2,t)),(A)_Z\right),\\
C((x_1,x_2),A)&=\widetilde C\left(((x_1,0),(x_2,0)),(A)_Z\right)=\widetilde C\left(((x_1,t),(x_2,t)),(A)_Z\right).
\end{aligned}\tag{3.20}
\]
Later on in this paper, we will write $\widetilde{\mathbb{E}}_{x_1,x_2}$ for the expected value corresponding to the measure
\[
\widetilde{\mathcal{C}}_{x_1,x_2}:=\widetilde{\mathcal{C}}\left(\,\cdot\;\middle|\;\widetilde X_0^{(1)}=x_1,\ \widetilde X_0^{(2)}=x_2\right),\quad x_1,x_2\in X.
\]
Let us indicate that the model under consideration enjoys all the hypotheses assumed in [20, Theorem 2.1] (see the proof of [8, Theorem 4.1], where these conditions are verified), which, in particular, means that:

(B0) The Markov operator $P$ is Feller.

(B1) There exist constants $a\in(0,1)$ and $b\in(0,\infty)$ such that
\[
PV(x)\le aV(x)+b\quad\text{for every}\quad x\in X,
\]
where $V$ is given by (2.10).

Moreover, letting
\[
F=\left\{((y_1,i_1),(y_2,i_2))\in X^2:\ i_1=i_2\right\}\cup\left\{(x_1,x_2)\in X^2:\ V(x_1)+V(x_2)<\frac{4b}{1-a}\right\},
\]
the following statements hold:

(B2) We have $\operatorname{supp}Q(x,y,\cdot)\subset F$ and
\[
\int_{X^2}\varrho(u,v)\,Q(x,y,du\times dv)\le\beta\varrho(x,y)\quad\text{for any}\ (x,y)\in F\ \text{and some}\ \beta\in(0,1).
\]

(B3) Letting $U(r):=\{(u,v)\in F:\ \varrho(u,v)\le r\}$ for any $r>0$, we have
\[
\inf_{(x,y)\in F}Q\left(x,y,U(\beta\varrho(x,y))\right)>0.
\]

(B4) There exists $l>0$ such that
\[
Q\left(x,y,X^2\right)\ge 1-l\varrho(x,y)\quad\text{for every}\quad(x,y)\in F.
\]
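The structure of $\widetilde C=\widetilde Q+\widetilde R$ (mass placed on the diagonal by a pointwise minimum, remainder spread as a product of normalised residuals) can be illustrated on a finite state space. The sketch below is our own toy analogue of this decomposition, not the kernel (3.19) itself; it checks that the resulting coupling has exactly the two prescribed marginals.

```python
# A finite-state sketch (our construction, mirroring the form of Qtilde and
# Rtilde above): couple two probability rows p and q by putting mass
# min(p[k], q[k]) on the diagonal and distributing the remainder as a product
# of the normalised residuals. The marginals of Qtilde + Rtilde recover p, q.
def residual_coupling(p, q):
    n = len(p)
    diag = [min(p[k], q[k]) for k in range(n)]        # role of Qtilde
    m = sum(diag)                                     # total "coupled" mass
    C = [[0.0] * n for _ in range(n)]
    for k in range(n):
        C[k][k] = diag[k]
    if m < 1.0:                                       # role of Rtilde
        for a in range(n):
            for b in range(n):
                C[a][b] += (p[a] - diag[a]) * (q[b] - diag[b]) / (1.0 - m)
    return C

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
C = residual_coupling(p, q)
row = [sum(C[a][b] for b in range(3)) for a in range(3)]  # first marginal
col = [sum(C[a][b] for a in range(3)) for b in range(3)]  # second marginal
print(all(abs(row[a] - p[a]) < 1e-12 for a in range(3)))
print(all(abs(col[b] - q[b]) < 1e-12 for b in range(3)))
```

The marginal identity holds because the residuals $(p_a-\min(p_a,q_a))$ sum to $1-m$, so the normalised product term contributes exactly the missing mass in each row and column; this is the same cancellation that makes $\widetilde Q+\widetilde R$ a coupling of $\Pi$ with itself.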
(B5) There exist $\gamma\in(0,1)$ and $\hat c>0$ such that
\[
\widetilde{\mathbb{E}}_{x_1,x_2}\left(\gamma^{-\rho}\right)\le\hat c\quad\text{whenever}\quad V(x_1)+V(x_2)<4b(1-a)^{-1},
\]
where $V$ is given by (2.10) and
\[
\rho=\inf\left\{n\in\mathbb{N}_0:\ \left(\widetilde X_n^{(1)},\widetilde X_n^{(2)}\right)\in F\ \text{and}\ V\left(\widetilde X_n^{(1)}\right)+V\left(\widetilde X_n^{(2)}\right)<\frac{4b}{1-a}\right\}.\tag{3.21}
\]
For $(Z_n(\bar g))_{n\in\mathbb{N}}$, given by (3.11), let us consider the sequences of their copies $(\widetilde Z_n^{(i)}(\bar g))_{n\in\mathbb{N}}$, $i\in\{1,2\}$, defined on $(\widetilde\Omega,\widetilde{\mathcal{F}},\widetilde{\mathcal{C}})$ as follows:
\[
\widetilde Z_n^{(i)}(\bar g)=Z_n(\bar g)\left(\left(\widetilde X_0^{(i)},\widetilde{\Delta\tau}_0\right),\left(\widetilde X_1^{(i)},\widetilde{\Delta\tau}_1\right),\ldots\right)\quad\text{for}\quad n\in\mathbb{N}\ \text{and}\ i\in\{1,2\}.\tag{3.22}
\]
According to [10, Lemmas 3.4 and 3.5], we can now state the following result.

Lemma 3.2.
Suppose that
\[
\sum_{n=1}^{\infty}\widetilde{\mathbb{E}}_{x_1,x_2}\left|\widetilde Z_n^{(1)}(\bar g)-\widetilde Z_n^{(2)}(\bar g)\right|<\infty\quad\text{for all}\quad x_1,x_2\in X,\tag{3.23}
\]
and that there exists $r\in(0,1)$ such that
\[
\sup_{n\in\mathbb{N}}\widetilde{\mathbb{E}}_{x_1,x_2}\left|\widetilde Z_n^{(i)}(\bar g)\right|^{2+r}<\infty\quad\text{for any}\ i\in\{1,2\}\ \text{and all}\ x_1,x_2\in X.\tag{3.24}
\]
Then
\[
\lim_{n\to\infty}\frac{h_n(\bar g)}{n}=\tilde\sigma^2(\bar g)\quad\text{with}\ \tilde\sigma^2(\bar g)\ \text{given by (3.16)},
\]
which further yields
\[
\lim_{n\to\infty}\frac{\sqrt{h_n(\bar g)\ln(\ln(h_n(\bar g)))}}{\sqrt{n\ln(\ln(n))}}=\tilde\sigma(\bar g),
\]
and consequently (3.15) holds. Moreover, conditions (3.23), (3.24) imply that there exists $\bar n\in\mathbb{N}$ such that $h_n(\bar g)>0$ for all $n\ge\bar n$, and that hypotheses (3.12)-(3.14) hold. Hence, due to [17, Theorem 1], the martingale $(M_n(\bar g))_{n\in\mathbb{N}_0}$, given by (3.10), satisfies the LIL.

In view of the above lemma, to finalise the proof of Theorem 3.1, it remains to establish hypotheses (3.23)-(3.24).

Let us introduce the function $F(\bar g):X\times\mathbb{R}_+\to\mathbb{R}$ given by
\[
F(\bar g)(x,t)=\int_0^t\bar g(S_i(s,y),i)\,ds\quad\text{for any}\quad x=(y,i)\in X,\ t\in\mathbb{R}_+.\tag{3.25}
\]
We then have
\[
\widetilde{\mathbb{E}}_{x_1,x_2}\left|\widetilde Z_{n+1}^{(1)}(\bar g)-\widetilde Z_{n+1}^{(2)}(\bar g)\right|\le\widetilde{\mathbb{E}}_{x_1,x_2}\left|F(\bar g)\left(\widetilde X_n^{(1)},\widetilde{\Delta\tau}_{n+1}\right)-F(\bar g)\left(\widetilde X_n^{(2)},\widetilde{\Delta\tau}_{n+1}\right)\right|+\frac{1}{\lambda}\widetilde{\mathbb{E}}_{x_1,x_2}\left|G\bar g\left(\widetilde X_n^{(1)}\right)-G\bar g\left(\widetilde X_n^{(2)}\right)\right|.\tag{3.26}
\]
Let us estimate each component on the right-hand side of (3.26) separately. First of all, according to (3.20) and (2.7), we have
\[
\widetilde{\mathbb{E}}_{x_1,x_2}\left|F(\bar g)\left(\widetilde X_n^{(1)},\widetilde{\Delta\tau}_{n+1}\right)-F(\bar g)\left(\widetilde X_n^{(2)},\widetilde{\Delta\tau}_{n+1}\right)\right|=\int_{X^2}\left(\int_0^{\infty}\lambda e^{-\lambda t}\left|F(\bar g)(u,i,t)-F(\bar g)(v,j,t)\right|dt\right)C^n\left((x_1,x_2),(du\times di)\times(dv\times dj)\right).
\]
Further, according to (3.25), we get
\[
\begin{aligned}
\int_0^{\infty}\lambda e^{-\lambda t}\left|F(\bar g)(u,i,t)-F(\bar g)(v,j,t)\right|dt
&\le\int_0^{\infty}\lambda e^{-\lambda t}\int_0^t\left|\bar g(S_i(s,u),i)-\bar g(S_j(s,v),j)\right|ds\,dt\\
&=\int_0^{\infty}\int_s^{\infty}\lambda e^{-\lambda t}\left|\bar g(S_i(s,u),i)-\bar g(S_j(s,v),j)\right|dt\,ds\\
&=\int_0^{\infty}\left(\int_s^{\infty}\lambda e^{-\lambda t}\,dt\right)\left|\bar g(S_i(s,u),i)-\bar g(S_j(s,v),j)\right|ds\\
&=\int_0^{\infty}e^{-\lambda s}\left|\bar g(S_i(s,u),i)-\bar g(S_j(s,v),j)\right|ds,
\end{aligned}
\]
whence
\[
\widetilde{\mathbb{E}}_{x_1,x_2}\left|F(\bar g)\left(\widetilde X_n^{(1)},\widetilde{\Delta\tau}_{n+1}\right)-F(\bar g)\left(\widetilde X_n^{(2)},\widetilde{\Delta\tau}_{n+1}\right)\right|\le\int_{X^2}\int_0^{\infty}e^{-\lambda s}\left|\bar g(S_i(s,u),i)-\bar g(S_j(s,v),j)\right|ds\ C^n\left((x_1,x_2),(du\times di)\times(dv\times dj)\right).\tag{3.27}
\]
The second component on the right-hand side of (3.26) can be estimated similarly, i.e.
\[
\frac{1}{\lambda}\widetilde{\mathbb{E}}_{x_1,x_2}\left|G\bar g\left(\widetilde X_n^{(1)}\right)-G\bar g\left(\widetilde X_n^{(2)}\right)\right|\le\int_{X^2}\int_0^{\infty}e^{-\lambda s}\left|\bar g(S_i(s,u),i)-\bar g(S_j(s,v),j)\right|ds\ C^n\left((x_1,x_2),(du\times di)\times(dv\times dj)\right).\tag{3.28}
\]
Combining (3.26), (3.27) and (3.28), we obtain
\[
\widetilde{\mathbb{E}}_{x_1,x_2}\left|\widetilde Z_{n+1}^{(1)}(\bar g)-\widetilde Z_{n+1}^{(2)}(\bar g)\right|\le 2\int_{X^2}\int_0^{\infty}e^{-\lambda s}\left|\bar g(S_i(s,u),i)-\bar g(S_j(s,v),j)\right|ds\ C^n\left((x_1,x_2),(du\times di)\times(dv\times dj)\right).\tag{3.29}
\]
Consider $\widehat Z=\widehat Z_Q\cup\widehat Z_R$, where $\widehat Z_Q:=Z\times\{1\}$ and $\widehat Z_R:=Z\times\{0\}$. There exists then some probability space $(\widehat\Omega,\widehat{\mathcal{F}},\widehat{\mathcal{C}})$, on which we can construct a time-homogeneous canonical Markov chain $((\widehat X_n^{(1)},\widehat{\Delta\tau}_n),(\widehat X_n^{(2)},\widehat{\Delta\tau}_n),\zeta_n)_{n\in\mathbb{N}_0}$ with $\widehat{\Delta\tau}_0=0$ and $\zeta_0=0$, evolving on $\widehat Z$, and such that its transition probability function $\widehat C$ is given by
\[
\widehat C\left(((x_1,t),(x_2,t),\zeta),A\right)=\left(\widetilde Q\left(((x_1,t),(x_2,t)),\cdot\,\right)\otimes\delta_1\right)(A)+\left(\widetilde R\left(((x_1,t),(x_2,t)),\cdot\,\right)\otimes\delta_0\right)(A)
\]
for $((x_1,t),(x_2,t),\zeta)\in\widehat Z$ and $A\in\mathcal{B}(\widehat Z)$ (cf. e.g. [8, 9, 20]).
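The interchange of the order of integration used above, which reduces $\int_0^\infty\lambda e^{-\lambda t}\int_0^t\varphi(s)\,ds\,dt$ to $\int_0^\infty e^{-\lambda s}\varphi(s)\,ds$, can be verified numerically for a sample integrand (our illustrative choice $\varphi(s)=e^{-s}$ with $\lambda=2$, for which both sides equal $1/(\lambda+1)$):

```python
import math

lam = 2.0
phi = lambda s: math.exp(-s)   # a sample nonnegative integrand (our choice)

# Discretise both sides of the order-of-integration swap:
#   \int_0^inf lam e^{-lam t} ( \int_0^t phi(s) ds ) dt
#     = \int_0^inf e^{-lam s} phi(s) ds
h, T = 1e-3, 20.0              # step size and truncation horizon
steps = int(T / h)
inner = 0.0                    # running value of \int_0^t phi(s) ds
lhs = 0.0
rhs = 0.0
for k in range(steps):
    s = (k + 0.5) * h          # midpoint rule
    inner += phi(s) * h
    lhs += lam * math.exp(-lam * s) * inner * h
    rhs += math.exp(-lam * s) * phi(s) * h
print(abs(lhs - rhs) < 5e-3)   # both sides approximate 1/(lam + 1) = 1/3
```

The agreement of the two discretised integrals (up to the $O(h)$ quadrature bias) reflects Tonelli's theorem: the integrand is nonnegative, so the order of integration over $\{0\le s\le t<\infty\}$ may be swapped freely.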
By convention, we will further write $\widehat{\mathcal{C}}_{x_1,x_2}(\cdot)$ for $\widehat{\mathcal{C}}(\,\cdot\,|\,\widehat X_0^{(1)}=x_1,\ \widehat X_0^{(2)}=x_2)$, and we will denote the corresponding expected value by $\widehat{\mathbb{E}}_{x_1,x_2}$, $x_1,x_2\in X$.

Let $\rho$ be given by (3.21), and, for $N\in\mathbb{N}$, define
\[
\rho_N:=\inf\left\{n\ge N:\ \left(\widehat X_n^{(1)},\widehat X_n^{(2)}\right)\in F\ \text{and}\ V\left(\widehat X_n^{(1)}\right)+V\left(\widehat X_n^{(2)}\right)<\frac{4b}{1-a}\right\}.
\]
Moreover, introduce
\[
\tau:=\inf\left\{n\in\mathbb{N}_0:\ \left(\left(\widehat X_k^{(1)},\widehat{\Delta\tau}_k\right),\left(\widehat X_k^{(2)},\widehat{\Delta\tau}_k\right),\zeta_k\right)\in\widehat Z_Q\ \text{for all}\ k\ge n\right\},
\]
and
\[
H_{N,n}=\bigcap_{j=N}^{n}\{\zeta_j=1\}\quad\text{for}\quad n,N\in\mathbb{N}\ \text{such that}\ n>N.
\]
We then have
\[
\widehat{\mathcal{C}}_{x_1,x_2}\left(\widehat\Omega\setminus H_{N,n}\right)=\widehat{\mathcal{C}}_{x_1,x_2}\left(\bigcup_{j=N}^{n}\{\zeta_j=0\}\right)\le\widehat{\mathcal{C}}_{x_1,x_2}(\tau>N)\quad\text{for}\quad n>N,\ n,N\in\mathbb{N}.\tag{3.30}
\]
Now, fix $n,N,M$ such that $n>M>N$ and introduce
\[
\widehat{\mathcal{C}}_{x_1,x_2}^{\,n,M,N}(\cdot):=\widehat{\mathcal{C}}_{x_1,x_2}\left(\,\cdot\,\cap\{\rho_N\le M\}\cap H_{N,n}\right).
\]
Following the reasoning presented e.g. in [9], and applying the estimate (3.30), we obtain
\[
\widehat{\mathcal{C}}_{x_1,x_2}(\cdot)\le\widehat{\mathcal{C}}_{x_1,x_2}^{\,n,M,N}(\cdot)+\widehat{\mathcal{C}}_{x_1,x_2}\left(\,\cdot\,\cap\{\rho_N>M\}\right)+\widehat{\mathcal{C}}_{x_1,x_2}\left(\,\cdot\,\cap\{\tau>N\}\right),
\]
and therefore, using (3.29) and referring to the fact that $\bar g\in\operatorname{Lip}_b(X)$, we get
\[
\begin{aligned}
\widetilde{\mathbb{E}}_{x_1,x_2}\left|\widetilde Z_{n+1}^{(1)}(\bar g)-\widetilde Z_{n+1}^{(2)}(\bar g)\right|
\le\ &2|\bar g|_{\operatorname{Lip}}\int_{X^2}\left(\int_0^{\infty}e^{-\lambda s}\varrho_c\left((S_i(s,u),i),(S_j(s,v),j)\right)ds\right)\widehat{\mathcal{C}}_{x_1,x_2}^{\,n,M,N}\left(\widehat X_n^{(1)}\in du\times di,\ \widehat X_n^{(2)}\in dv\times dj\right)\\
&+\frac{4\|\bar g\|_{\infty}}{\lambda}\left(\widehat{\mathcal{C}}_{x_1,x_2}(\rho_N>M)+\widehat{\mathcal{C}}_{x_1,x_2}(\tau>N)\right),
\end{aligned}\tag{3.31}
\]
where $\varrho_c$ is given by (2.3). Further, condition (A2) implies the following:
\[
\begin{aligned}
\int_0^{\infty}e^{-\lambda s}\varrho_c\left((S_i(s,u),i),(S_j(s,v),j)\right)ds
&\le\int_0^{\infty}e^{-\lambda s}\left(\|S_i(s,u)-S_j(s,v)\|+c\,d(i,j)\right)ds\\
&\le\int_0^{\infty}e^{-\lambda s}\left(Le^{\alpha s}\|u-v\|+s\bar L\,d(i,j)+c\,d(i,j)\right)ds\\
&=L\|u-v\|\int_0^{\infty}e^{-(\lambda-\alpha)s}\,ds+d(i,j)\int_0^{\infty}\left(\bar Lse^{-\lambda s}+ce^{-\lambda s}\right)ds\\
&=\frac{L}{\lambda-\alpha}\|u-v\|+\left(\frac{\bar L}{\lambda^2}+\frac{c}{\lambda}\right)d(i,j)\\
&\le\left(\frac{L}{\lambda-\alpha}+\frac{\bar L}{\lambda^2}+\frac{1}{\lambda}\right)\varrho_c\left((u,i),(v,j)\right).
\end{aligned}\tag{3.32}
\]
Note that the last inequality in (3.32) holds since $c$ is required to be sufficiently large (in particular, $c\ge 1$). According to (3.31) and (3.32), we obtain
\[
\begin{aligned}
\widetilde{\mathbb{E}}_{x_1,x_2}\left|\widetilde Z_{n+1}^{(1)}(\bar g)-\widetilde Z_{n+1}^{(2)}(\bar g)\right|
\le\ &2|\bar g|_{\operatorname{Lip}}\left(\frac{L}{\lambda-\alpha}+\frac{\bar L}{\lambda^2}+\frac{1}{\lambda}\right)\int_{X^2}\varrho_c\left((u,i),(v,j)\right)\widehat{\mathcal{C}}_{x_1,x_2}^{\,n,M,N}\left(\widehat X_n^{(1)}\in du\times di,\ \widehat X_n^{(2)}\in dv\times dj\right)\\
&+\frac{4\|\bar g\|_{\infty}}{\lambda}\left(\widehat{\mathcal{C}}_{x_1,x_2}(\rho_N>M)+\widehat{\mathcal{C}}_{x_1,x_2}(\tau>N)\right).
\end{aligned}\tag{3.33}
\]
Due to [9, Lemma 2.2], there exist constants $c_1,c_2,c_3>0$, $q_1,q_2,q_3\in(0,1)$ and $p\ge 1$ such that, for any $x_1,x_2\in X$ and $n,N,M\in\mathbb{N}$ satisfying $n>M>N$, the following inequalities hold:
\[
\begin{aligned}
\int_{X^2}\varrho_c\left((u,i),(v,j)\right)\widehat{\mathcal{C}}_{x_1,x_2}^{\,n,M,N}\left(\widehat X_n^{(1)}\in du\times di,\ \widehat X_n^{(2)}\in dv\times dj\right)&\le c_1q_1^{\,n-M},\\
\widehat{\mathcal{C}}_{x_1,x_2}(\rho_N>M)&\le c_2q_2^{\,M-pN}\left(1+V(x_1)+V(x_2)\right),\\
\widehat{\mathcal{C}}_{x_1,x_2}(\tau>N)&\le c_3q_3^{\,N}\left(1+V(x_1)+V(x_2)\right),
\end{aligned}
\]
which, together with (3.33), imply
\[
\begin{aligned}
\widetilde{\mathbb{E}}_{x_1,x_2}\left|\widetilde Z_{n+1}^{(1)}(\bar g)-\widetilde Z_{n+1}^{(2)}(\bar g)\right|
&\le 2|\bar g|_{\operatorname{Lip}}\left(\frac{L}{\lambda-\alpha}+\frac{\bar L}{\lambda^2}+\frac{1}{\lambda}\right)c_1q_1^{\,n-M}+\frac{4\|\bar g\|_{\infty}}{\lambda}\left(c_2q_2^{\,M-pN}+c_3q_3^{\,N}\right)\left(1+V(x_1)+V(x_2)\right)\\
&\le C\|\bar g\|_{BL}\left(q_1^{\,n-M}+q_2^{\,M-pN}+q_3^{\,N}\right)\left(1+V(x_1)+V(x_2)\right)
\end{aligned}
\]
with
\[
C:=2c_1\left(\frac{L}{\lambda-\alpha}+\frac{\bar L}{\lambda^2}+\frac{1}{\lambda}\right)+\frac{4}{\lambda}(c_2+c_3).
\]
Now, define $n_0=\lceil 4p\rceil$ and fix an arbitrary $n>n_0$. Letting $N=\lfloor n/(4p)\rfloor$ and $M=\lceil n/2\rceil$, we obtain
\[
\widetilde{\mathbb{E}}_{x_1,x_2}\left|\widetilde Z_{n+1}^{(1)}(\bar g)-\widetilde Z_{n+1}^{(2)}(\bar g)\right|\le\bar C\|\bar g\|_{BL}\,q^n\left(1+V(x_1)+V(x_2)\right)\quad\text{for every}\quad x_1,x_2\in X,
\]
where $\bar C:=C\max\{q_1^{-1},q_3^{-p}\}$ and $q:=\max\{q_1^{1/2},q_2^{1/4},q_3^{1/(4p)}\}\in(0,1)$. Since $\bar g$ is bounded, the above estimate also holds (with some $\hat C$ in place of $\bar C$) for $n\le n_0$. We finally get
\[
\sum_{n=1}^{\infty}\widetilde{\mathbb{E}}_{x_1,x_2}\left|\widetilde Z_n^{(1)}(\bar g)-\widetilde Z_n^{(2)}(\bar g)\right|<\infty\quad\text{for every}\quad x_1,x_2\in X,
\]
which proves (3.23).

It now remains to establish (3.24).
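Before turning to (3.24), the exponent bookkeeping behind the choice $N=\lfloor n/(4p)\rfloor$, $M=\lceil n/2\rceil$ can be checked by brute force. The constants $q_1,q_2,q_3,p$ below are illustrative assumptions, and the factor $3$ in our bound simply absorbs the sum of three terms (slightly coarser than the constant $\bar C$ in the text):

```python
import math

# Sample values (assumptions, for illustration only)
q1, q2, q3, p = 0.7, 0.8, 0.6, 2
q = max(q1 ** 0.5, q2 ** 0.25, q3 ** (1.0 / (4 * p)))
cbar = 3 * max(1 / q1, q3 ** (-p))     # coarse analogue of C*max{q1^-1, q3^-p}

n0 = math.ceil(4 * p)
ok = True
for n in range(n0 + 1, 400):
    N = n // (4 * p)                   # floor(n / (4p))
    M = -(-n // 2)                     # ceil(n / 2)
    lhs = q1 ** (n - M) + q2 ** (M - p * N) + q3 ** N
    ok = ok and (lhs <= cbar * q ** n)
print(ok)
```

The check succeeds because $n-M\ge(n-1)/2$, $M-pN\ge n/4$ and $N\ge n/(4p)-1$, so each summand is dominated by a constant multiple of the corresponding power of $q$.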
Referring to (3.22), (3.11) and (2.11), for every $n\in\mathbb{N}_0$ and any $i\in\{1,2\}$, writing $\widetilde X_n^{(i)}=(\widetilde Y_n^{(i)},\widetilde\xi_n^{(i)})$, we obtain
\[
\begin{aligned}
\widetilde{\mathbb{E}}_{x_1,x_2}\left|\widetilde Z_{n+1}^{(i)}(\bar g)\right|^{2+r}
&=\widetilde{\mathbb{E}}_{x_1,x_2}\left|\int_0^{\widetilde{\Delta\tau}_{n+1}}\bar g\left(S_{\widetilde\xi_n^{(i)}}\left(s,\widetilde Y_n^{(i)}\right),\widetilde\xi_n^{(i)}\right)ds-\frac{1}{\lambda}G\bar g\left(\widetilde X_n^{(i)}\right)\right|^{2+r}\\
&=\widetilde{\mathbb{E}}_{x_1,x_2}\left|\int_0^{\widetilde{\Delta\tau}_{n+1}}\bar g\left(S_{\widetilde\xi_n^{(i)}}\left(s,\widetilde Y_n^{(i)}\right),\widetilde\xi_n^{(i)}\right)ds-\int_0^{\infty}e^{-\lambda s}\bar g\left(S_{\widetilde\xi_n^{(i)}}\left(s,\widetilde Y_n^{(i)}\right),\widetilde\xi_n^{(i)}\right)ds\right|^{2+r}.
\end{aligned}
\]
Since $\bar g$ is bounded, we further get
\[
\widetilde{\mathbb{E}}_{x_1,x_2}\left|\widetilde Z_{n+1}^{(i)}(\bar g)\right|^{2+r}\le\|\bar g\|_{\infty}^{2+r}\,\widetilde{\mathbb{E}}_{x_1,x_2}\left(\widetilde{\Delta\tau}_{n+1}+\frac{1}{\lambda}\right)^{2+r}\quad\text{for every}\quad n\in\mathbb{N}_0,\ i\in\{1,2\}.
\]
Moreover, since $2+r>1$, there exists some $\kappa\in(2,\infty)$ such that $(\psi_1+\psi_2)^{2+r}\le\kappa\left(\psi_1^{2+r}+\psi_2^{2+r}\right)$ for any $\psi_1,\psi_2\ge 0$, whence
\[
\widetilde{\mathbb{E}}_{x_1,x_2}\left|\widetilde Z_{n+1}^{(i)}(\bar g)\right|^{2+r}\le\kappa\|\bar g\|_{\infty}^{2+r}\left(\widetilde{\mathbb{E}}_{x_1,x_2}\left(\left(\widetilde{\Delta\tau}_1\right)^{2+r}\right)+\frac{1}{\lambda^{2+r}}\right)\quad\text{for every}\quad n\in\mathbb{N}_0,\ i\in\{1,2\},
\]
which is finite, due to the fact that $\widetilde{\Delta\tau}_1$ has the exponential distribution. Finally, we get
\[
\sup_{n\in\mathbb{N}}\widetilde{\mathbb{E}}_{x_1,x_2}\left|\widetilde Z_n^{(i)}(\bar g)\right|^{2+r}<\infty\quad\text{for any}\quad i\in\{1,2\},
\]
and the proof is complete.

Acknowledgements. The work of Hanna Wojewódka-Ściążko has been partly supported by the National Science Centre of Poland, grant no. 2018/02/X/ST1/01518.