Law of Large Numbers for Semi-Markov inhomogeneous Random Evolutions on Banach spaces
arXiv:1304.4169 [math.PR]
N. Vadori* and A. Swishchuk†

Abstract:
Using backward propagators, we construct inhomogeneous Random Evolutions on Banach spaces driven by (uniformly ergodic) semi-Markov processes. After studying some of their properties (measurability, continuity, integral representation), we establish a Law of Large Numbers for such inhomogeneous Random Evolutions, and more precisely their weak convergence, in the Skorohod space D, to an inhomogeneous semigroup. A martingale characterization of these inhomogeneous Random Evolutions is also obtained. Finally, we present applications to inhomogeneous Lévy Random Evolutions.

Keywords: inhomogeneous random evolutions; weak convergence; Skorohod space; martingale problem; backward propagators; law of large numbers; semi-Markov processes; Banach spaces.

Subject Classification AMS 2010: Primary: 60F17, 60F05; Secondary: 60B10, 60G44.
Notations to be used throughout the paper:
• N, N*: non-negative integers, positive integers
• R, R*, R+, R+*: real numbers, non-zero real numbers, non-negative real numbers, positive real numbers
• L(E,F) (resp. B(E,F)): the space of linear (resp. bounded linear) operators E → F
• Bor(E): Borel sigma-algebra on E
• C^n(E) (resp. C_b^n(E), C_0^n(E)): continuous (resp. continuous bounded, continuous vanishing at infinity) functions E → R such that the nth derivative is in C(E) (resp. C_b(E), C_0(E))
• L^p_E(Ω, F, P): quotient space of F-Bor(E) measurable functions Ω → E such that ∫_Ω ||f||^p_E dP < ∞
• B_E(Ω, F) (resp. B^b_E(Ω, F)): the space of F-Bor(E) measurable (resp. measurable bounded) functions Ω → E
• D(J, E): the Skorohod space of RCLL (right-continuous with left limits) functions J → E (J ∈ Bor(R))
• P(E): family of Borel probability measures on E
• L(X|P) (or L(X) if there is no ambiguity): law of X, i.e. P ∘ X^{-1}
• convergence a.e., in probability, in distribution: denoted resp. →^{a.e.}, →^P, ⇒
• t^{ε,s} := s + ε(t − s) (section 4 and Appendix only)
• |E|: cardinality of E
• d: the Skorohod metric (see [4], chapter 3, equation 5.2)

* Corresponding author: 2500 University Drive NW, Calgary, AB, Canada T2N 1N4, email: [email protected]
1. Introduction
Random Evolutions began to be studied in the 1970's, because of their potential applications to biology, movement of particles, signal processing, quantum physics, finance & insurance, etc. (see [5], [8], [9]). As R. Hersh says in [8]: "Random evolutions model a situation in which an evolving system changes its law of motion because of random changes in the environment". These random changes are usually modeled by jump processes, because they aim at modeling the fact that the system moves from one state to another, for example a stock that switches between different volatilities. In 1972, T. Kurtz established a Law of Large Numbers for Random Evolutions constructed with homogeneous semigroups ([14]). Following this work, J. Watkins published a very interesting paper ([27]), followed by two more ([28], [29]), in which, using a martingale characterization of the Random Evolution, he established a Central Limit Theorem for Random Evolutions constructed with i.i.d. generators of (homogeneous) semigroups, under some technical assumptions, especially on the dual of the Banach space. Later, A. Swishchuk and V. Korolyuk established a Law of Large Numbers and a Central Limit Theorem for Random Evolutions driven by (uniformly ergodic) semi-Markov processes ([24], [25] chapter 4, [17]), in which the switching between the different (homogeneous) semigroups occurs at the jump times of a semi-Markov process. To the best of our knowledge, only homogeneous Random Evolutions have been studied as of today, i.e. Random Evolutions constructed with homogeneous semigroups. In this paper, we consider Random Evolutions constructed with backward inhomogeneous semigroups (3.1), also called backward propagators or backward evolution systems in the literature (e.g. [19] chapter 5, [6] chapter 2), and driven by (uniformly ergodic) semi-Markov processes. We choose the backward case for practical reasons: for example, the Chapman-Kolmogorov equation is backward in time.
In this paper we present several new results:
1. In sections 2 and 3, we study some general properties of inhomogeneous semigroups and construct inhomogeneous Random Evolutions. These sections are mainly related to functional analysis, but we need them to prove the main law of large numbers result of section 4, especially the characterization of inhomogeneous semigroups as the unique solution to a well-posed Cauchy problem (2.10), a second-order Taylor formula for inhomogeneous semigroups (2.13) and the forward integral representation for inhomogeneous Random Evolutions (3.4).
2. In section 4 we establish our main result (4.19), which can be thought of as a law of large numbers for inhomogeneous Random Evolutions driven by uniformly ergodic semi-Markov processes: we index the inhomogeneous Random Evolution by a small parameter ε that we use to rescale time, so that the semi-Markov process goes to its unique stationary distribution as ε → 0. We also obtain a martingale characterization of the Random Evolution (4.17). We establish the weak convergence of the Random Evolution in the Skorohod space D(J, Y) to an inhomogeneous semigroup, where J ⊆ R+ and Y is a separable Banach space. The proof of the unicity of the limiting distribution is linked to the unicity of the Cauchy problem of section 2 and cannot be done using what has been done before in [25] or [27], for the following reason: in the latter papers, because of the time-homogeneity of the random evolutions, weak convergence of sequences of martingales of the following type was studied:

X_n(t) − X_n(s) − ∫_s^t A X_n(u) du,

where A is the generator of a semigroup and {X_n(•)}_{n∈N} is a sequence of D(J, Y)-valued random variables. Provided the relative compactness of {X_n(•)}_{n∈N}, one could choose a limiting process X(•) and prove its unicity in distribution. The problem that we will be facing in our paper is the following: in the (backward) inhomogeneous case, we will have to prove weak convergence of sequences of martingales of the type:

V_n(t)f − V_n(s)f − ∫_s^t V_n(u) A(u)f du,

where f ∈ Y, A(t) is the generator of an inhomogeneous semigroup and the {V_n(•)}_{n∈N} are stochastic processes with sample paths in D(J, B(Y)), where B(Y) is the space of bounded linear operators on Y.
Nevertheless, we will typically only have the relative compactness of {V_n(•)f}_{n∈N} in D(J, Y), for every f ∈ Y, and not the relative compactness of {V_n(•)}_{n∈N} in D(J, B(Y)), one of the reasons being that B(Y) might not be separable (and all the usual techniques for the proof of relative compactness in D(J, E) require E to be separable, see [4]). Therefore we cannot just pick a limiting process V(•) ∈ D(J, B(Y)) and prove its unicity in distribution; we will have to construct it ourselves, using mainly density arguments and the Skorohod representation theorem, handling negligible sets of our probability space with care.
3. Finally, an important remark should be made about applications where Y = C_0(R^d), the space of continuous functions R^d → R vanishing at infinity. To prove the crucial compact containment criterion, it is said in both [27] and [25] that there exists a compact embedding of a Sobolev space into C_0(R^d), which is not true. Therefore we shall see, on the specific example of inhomogeneous Lévy Random Evolutions (section 5), how we can still prove the compact containment criterion using a characterization of compact sets in C_0(R^d) (see 4.9), and how this proof can be recycled in the case of other examples.
The paper is organized as follows: in section 2 we present some results on inhomogeneous semigroups; in section 3 we introduce inhomogeneous random evolutions driven by (uniformly ergodic) semi-Markov processes and some of their properties; in section 4 we prove our main law of large numbers result of weak convergence of the inhomogeneous random evolution to an inhomogeneous semigroup, as well as a martingale characterization of the inhomogeneous Random Evolution; and finally in section 5 we give an application to inhomogeneous Lévy Random Evolutions.

Convention throughout the paper:
Let (Y, ||·||) be a real separable Banach space. Let 𝒴 be the Borel sigma-algebra generated by the norm topology. Let Y* be the dual space of Y. (Y_0, ||·||_{Y_0}) is assumed to be a real separable Banach space which is continuously embedded in Y (this idea was used in [19], chapter 5), i.e. Y_0 ⊆ Y and ∃ c ∈ R+: ||f|| ≤ c ||f||_{Y_0} ∀ f ∈ Y_0. Unless mentioned otherwise, limits are taken in the Y-norm. Limits in the Y_0-norm will be denoted Y_0-lim. In the following, J will refer either to R+ or to [0, T_∞] for some T_∞ > 0. Let ∆_J := {(s,t) ∈ J² : s ≤ t}. Let also, for s ∈ J: J(s) := {t ∈ J : s ≤ t} and ∆_J(s) := {(r,t) ∈ J² : s ≤ r ≤ t}.
2. Inhomogeneous operator semigroups
This section presents some results on inhomogeneous semigroups. Some of them are similar to what can be found in [19] chapter 5 and [6] chapter 2; the others are, to the best of our knowledge, new.
Definition 2.1. A function Γ: ∆_J → B(Y) is called a (backward) inhomogeneous Y-semigroup if:
i) ∀ t ∈ J: Γ(t,t) = I
ii) ∀ (s,r), (r,t) ∈ ∆_J: Γ(s,r)Γ(r,t) = Γ(s,t)
If in addition ∀ (s,t) ∈ ∆_J: Γ(s,t) = Γ(0, t−s), then Γ is called a homogeneous Y-semigroup.

We now introduce the generator of the inhomogeneous semigroup:
Definition 2.2. Let t ∈ J and:

D(A_Γ(t)) := {f ∈ Y : lim_{h↓0, t+h∈J} (Γ(t, t+h) − I)f / h = lim_{h↓0, t−h∈J} (Γ(t−h, t) − I)f / h ∈ Y}

Define, ∀ t ∈ J, ∀ f ∈ D(A_Γ(t)):

A_Γ(t)f := lim_{h↓0, t+h∈J} (Γ(t, t+h) − I)f / h = lim_{h↓0, t−h∈J} (Γ(t−h, t) − I)f / h

Let D(A_Γ) := ∩_{t∈J} D(A_Γ(t)). Then A_Γ: J → L(D(A_Γ), Y) is called the generator of the inhomogeneous Y-semigroup Γ.

The following definitions deal with continuity and boundedness of semigroups:
Definition 2.3. An inhomogeneous Y-semigroup Γ is B(Y)-bounded (resp. a B(Y)-contraction) if sup_{(s,t)∈∆_J} ||Γ(s,t)||_{B(Y)} < ∞ (resp. sup_{(s,t)∈∆_J} ||Γ(s,t)||_{B(Y)} ≤ 1).

Definition 2.4. Let E_Y ⊆ Y. An inhomogeneous Y-semigroup Γ is E_Y-strongly continuous if ∀ (s,t) ∈ ∆_J, ∀ f ∈ E_Y:

lim_{(h1,h2)→(0,0), (s+h1, t+h2)∈∆_J} ||Γ(s+h1, t+h2)f − Γ(s,t)f|| = 0

Definition 2.5. Let E_Y ⊆ Y. An inhomogeneous Y-semigroup Γ is E_Y-super strongly continuous if ∀ (s,t) ∈ ∆_J, ∀ f ∈ E_Y: Γ(s,t)Y_0 ⊆ Y_0 and

lim_{(h1,h2)→(0,0), (s+h1, t+h2)∈∆_J} ||Γ(s+h1, t+h2)f − Γ(s,t)f||_{Y_0} = 0

Remark: throughout the paper, we will use the terminology that "super strong continuity" refers to continuity in the Y_0-norm and that "strong continuity" refers to continuity in the Y-norm.

Terminology: for the above types of continuity, we use the terminology t-continuity (resp. s-continuity) for the continuity of the partial application u → Γ(s,u)f (resp. u → Γ(u,t)f).

The following theorems give conditions under which the semigroup is differentiable in s and t.

Theorem 2.6.
Let Γ be an inhomogeneous Y-semigroup. Assume that Y_0 ⊆ D(A_Γ) and that Γ is Y_0-super strongly s-continuous and Y_0-strongly t-continuous. Then:

∂/∂s Γ(s,t)f = −A_Γ(s)Γ(s,t)f  ∀ (s,t) ∈ ∆_J, ∀ f ∈ Y_0

Proof. see Appendix A
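As a concrete illustration of Definition 2.1 and Theorems 2.6 and 2.7 (a toy numerical sketch, not part of the paper's framework), take Y = R and the generator A_Γ(t)f = a(t)f with a(t) = cos t chosen for the example; then Γ(s,t)f = exp(sin t − sin s)·f in closed form, and the propagator identities and both derivative formulas can be checked numerically:

```python
import math

def a(t):
    # time-dependent generator on Y = R: A(t)f = a(t) * f
    return math.cos(t)

def gamma(s, t, f):
    # backward propagator Gamma(s,t)f = exp(int_s^t a(u) du) f = exp(sin t - sin s) f
    return math.exp(math.sin(t) - math.sin(s)) * f

f = 2.0
s, r, t = 0.3, 0.9, 1.7

# i) Gamma(t,t) = I
assert abs(gamma(t, t, f) - f) < 1e-12
# ii) propagator identity Gamma(s,r)Gamma(r,t) = Gamma(s,t)
assert abs(gamma(s, r, gamma(r, t, f)) - gamma(s, t, f)) < 1e-12

h = 1e-6
# Theorem 2.6: d/ds Gamma(s,t)f = -A(s) Gamma(s,t)f (central finite difference)
lhs = (gamma(s + h, t, f) - gamma(s - h, t, f)) / (2 * h)
assert abs(lhs + a(s) * gamma(s, t, f)) < 1e-6

# Theorem 2.7: d/dt Gamma(s,t)f = Gamma(s,t) A(t)f
lhs = (gamma(s, t + h, f) - gamma(s, t - h, f)) / (2 * h)
assert abs(lhs - gamma(s, t, a(t) * f)) < 1e-6
print("propagator identities verified")
```

In this scalar case the propagator property ii) reduces to additivity of the exponent ∫_s^r + ∫_r^t = ∫_s^t; in the operator-valued case the factors need not commute and the order of composition matters.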
Theorem 2.7. Let Γ be an inhomogeneous Y-semigroup. Assume Y_0 ⊆ D(A_Γ) and that Γ is Y_0-strongly t-continuous. Then we have:

∂/∂t Γ(s,t)f = Γ(s,t)A_Γ(t)f  ∀ (s,t) ∈ ∆_J, ∀ f ∈ Y_0

Proof. see Appendix A

In general, for f ∈ Y_0, we will want to use the semigroup integral representation Γ(s,t)f − f = ∫_s^t Γ(s,u)A_Γ(u)f du, and therefore we will need that u → Γ(s,u)A_Γ(u)f is in L^1_Y([s,t]). The following theorem gives sufficient conditions under which this is the case, as we will typically have A_Γ(t) ∈ B(Y_0, Y) ∀ t ∈ J.

Theorem 2.8.
Assume that theorem 2.7 holds. Assume also that ∀ t ∈ J, A_Γ(t) ∈ B(Y_0, Y) and that ∀ (s,t) ∈ ∆_J, u → ||A_Γ(u)||_{B(Y_0,Y)} ∈ L^1_R([s,t]). Then ∀ f ∈ Y_0, (s,t) ∈ ∆_J:

Γ(s,t)f − f = ∫_s^t Γ(s,u)A_Γ(u)f du

Proof. see Appendix A

The following definition introduces the concept of regular inhomogeneous semigroup, which basically means that it is differentiable and that its derivative is integrable.
Definition 2.9. An inhomogeneous Y-semigroup Γ is said to be regular if it satisfies theorems 2.6 and 2.7 and if ∀ (s,t) ∈ ∆_J, ∀ f ∈ Y_0, u → Γ(s,u)A_Γ(u)f is in L^1_Y([s,t]).

Now we are ready to characterize the inhomogeneous semigroup as the unique solution of a well-posed Cauchy problem, which we will need for our main result 4.19:
Theorem 2.10.
Let A_Γ be the generator of a regular inhomogeneous Y-semigroup Γ, and let s ∈ J, G_s ∈ B(Y). A solution operator G: J(s) → B(Y) to the Cauchy problem:

d/dt G(t)f = G(t)A_Γ(t)f  ∀ t ∈ J(s), ∀ f ∈ Y_0
G(s) = G_s

is said to be regular if it is Y_0-strongly continuous and if it satisfies the Cauchy problem above. If G is such a regular solution, then we have G(t)f = G_s Γ(s,t)f, ∀ t ∈ J(s), ∀ f ∈ Y_0.

Proof. see Appendix A

The following corollary comes straightforwardly from 2.10 and expresses the fact that equality of generators of regular semigroups implies equality of the semigroups.

Corollary 2.11.
Assume that Γ_1 and Γ_2 are regular inhomogeneous Y-semigroups and that ∀ f ∈ Y_0, ∀ t ∈ J: A_{Γ_1}(t)f = A_{Γ_2}(t)f. Then ∀ f ∈ Y_0, ∀ (s,t) ∈ ∆_J: Γ_1(s,t)f = Γ_2(s,t)f. In particular, if Y_0 is dense in Y, then Γ_1 and Γ_2 agree on Y.

We conclude this section with a 2nd order Taylor formula for inhomogeneous semigroups:

Definition 2.12.
Let:

D(A_Γ ∈ Y_0) := {f ∈ D(A_Γ) ∩ Y_0 : A_Γ(t)f ∈ Y_0 ∀ t ∈ J}

D(A'_Γ) := {f ∈ D(A_Γ ∈ Y_0) : Y_0-lim_{h→0, t+h∈J} (A_Γ(t+h)f − A_Γ(t)f)/h ∈ Y_0 ∀ t ∈ J}

and for t ∈ J, f ∈ D(A'_Γ):

A'_Γ(t)f := Y_0-lim_{h→0, t+h∈J} (A_Γ(t+h)f − A_Γ(t)f)/h

Theorem 2.13.
Let Γ be a regular inhomogeneous Y-semigroup and (s,t) ∈ ∆_J. Assume that ∀ u ∈ J, A_Γ(u) ∈ B(Y_0, Y) and that u → ||A_Γ(u)||_{B(Y_0,Y)} ∈ L^1_R([s,t]). Then we have, for f ∈ D(A_Γ ∈ Y_0):

Γ(s,t)f = f + ∫_s^t A_Γ(u)f du + ∫_s^t ∫_s^u Γ(s,r)A_Γ(r)A_Γ(u)f dr du

Assume in addition that:
i) A_Γ is Y-strongly continuous
ii) u → Γ(s,u)A_Γ²(u)f and u → ∫_s^u Γ(s,r)A_Γ(r)A'_Γ(u)f dr are in L^1_Y([s,t])

Then we have, for f ∈ D(A'_Γ):

Γ(s,t)f = f + ∫_s^t A_Γ(u)f du + ∫_s^t (t−u)Γ(s,u)A_Γ²(u)f du + ∫_s^t (t−u) ∫_s^u Γ(s,r)A_Γ(r)A'_Γ(u)f dr du

Proof. see Appendix A

Note how the latter formula coincides with the well-known 2nd order Taylor formula for homogeneous semigroups (see e.g. [25], proposition 1.2):

Γ(t−s)f = f + (t−s)A_Γ f + ∫_s^t (t−u)Γ(u−s)A_Γ² f du
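The first representation in Theorem 2.13 can be checked numerically in a scalar toy setting (an illustrative sketch; Y = R and A_Γ(u)f = cos(u)·f are assumed for the example, so that Γ(s,t)f = exp(sin t − sin s)·f):

```python
import math

def a(u):                 # scalar generator: A(u)f = a(u) f, with a(u) = cos u
    return math.cos(u)

def gamma(s, t, f):       # Gamma(s,t)f = exp(sin t - sin s) f
    return math.exp(math.sin(t) - math.sin(s)) * f

def taylor_rhs(s, t, f, n=300):
    # midpoint-rule evaluation of
    # f + int_s^t a(u) f du + int_s^t int_s^u Gamma(s,r) a(r) a(u) f dr du
    h = (t - s) / n
    total = f
    for i in range(n):
        u = s + (i + 0.5) * h
        total += a(u) * f * h                       # first-order term
        hu = (u - s) / n                            # inner integral over [s, u]
        inner = sum(gamma(s, s + (j + 0.5) * hu, 1.0) * a(s + (j + 0.5) * hu)
                    for j in range(n)) * hu
        total += inner * a(u) * f * h               # second-order term
    return total

s, t, f = 0.2, 1.2, 1.5
assert abs(taylor_rhs(s, t, f) - gamma(s, t, f)) < 1e-4
print("second-order representation verified")
```

The identity follows by inserting the integral representation of Theorem 2.8 for Γ(s,u)A_Γ(u)f into itself, which the quadrature above reproduces to discretization accuracy.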
3. Inhomogeneous Random Evolutions
As in [11] (section 3.5), let (Ω, F, P) be a complete probability space, (X, 𝒳) a finite measurable space and (x_n)_{n∈N}, (τ_n)_{n∈N*} random variables, resp. Ω → X and Ω → R+*. Let τ_0 := 0, T_n := Σ_{k=0}^n τ_k. The jump times (T_n) form a strictly increasing sequence, for every ω. We say that (x_n, T_n)_{n∈N} is a Markov renewal process if there exists a semi-Markov kernel Q: X × 𝒳 × R+ → [0,1] (see e.g. [16], definition 2.2) such that ∀ y ∈ X, t ∈ R+, n ∈ N:

P[x_{n+1} = y, T_{n+1} − T_n ≤ t | x_k, T_k : k ∈ [|0,n|]] = P[x_{n+1} = y, T_{n+1} − T_n ≤ t | x_n] = Q(x_n, y, t) a.e.

In the following we let (x_n, T_n)_{n∈N} be a Markov renewal process and let:
• P(x,y) := Q(x,y,∞) := lim_{t→∞} Q(x,y,t) = P[x_{n+1} = y | x_n = x]
• F(x,t) := Σ_{y∈X} Q(x,y,t) = P[T_{n+1} − T_n ≤ t | x_n = x]

Define the counting process:

N(t)(ω) := sup{n ∈ N : T_n(ω) ≤ t} = sup_{n∈N*} Σ_{k=1}^n 1_{{T_k ≤ t}}(ω), for ω ∈ Ω, t ∈ R+

By the latter representation, N(t) is F-Bor(R+) measurable; it represents the number of jumps on (0,t] and can possibly be infinite in the case of a general state space. In the case of a finite state space, however, the renewal process is regular, i.e. ∀ t ∈ R+, N(t) < ∞ a.e. Further, ∀ ω ∈ Ω, t → N(t)(ω) is right-continuous, as it is constant on the intervals [T_n(ω), T_{n+1}(ω)), n ∈ N. We define the semi-Markov process (x(t))_{t∈R+} by x(t)(ω) := x_{N(t)(ω)}(ω) on

Ω* := ∩_{t∈Q} {ω ∈ Ω : N(t)(ω) < ∞} (so that P(Ω*) = 1).

Remark:
From now on, we will work on the probability space (Ω*, F*, P*) := (Ω*, F|_{Ω*}, P|_{Ω*}), where F|_{Ω*} := {A ∈ F : A ⊆ Ω*} and P|_{Ω*}(A) := P(A) ∀ A ∈ F|_{Ω*} (F|_{Ω*} is a sigma-algebra on Ω* since ∅ ≠ Ω* ∈ F). We point out that the restrictions to Ω* of F-measurable functions are F*-measurable. For the sake of clarity, we will not write explicitly that we are working with the restrictions to Ω* of the random variables (e.g. x_n|_{Ω*}, T_n|_{Ω*}), but we always will be. In order to avoid heavy notations, (Ω*, F*, P*) will be denoted (Ω, F, P).

We define the following random variables on Ω, for s ≤ t ∈ R+:
• the number of jumps on (s,t]: N_s(t) := N(t) − N(s)
• the jump times on (s,∞): T_n(s) := T_{N(s)+n} for n ∈ N*, and T_0(s) := s
• the states visited by the process on [s,∞): x_n(s) := x(T_n(s)), for n ∈ N

We will assume that (x_n, T_n)_{n∈N} is an ergodic Markov renewal process, which means that it is irreducible and that the embedded Markov chain (x_n)_{n∈N} is aperiodic (see [11], section 3.7). We will also assume that (x_n)_{n∈N} is a uniformly ergodic Markov chain, namely that there exists a probability measure ρ := {ρ_x}_{x∈X} on X such that (see [18]):

lim_{n→∞} ||P^n − Π||_{B(B^b_R(X))} = 0

where for f ∈ B^b_R(X) (since X is finite, B^b_R(X) = B_R(X)), x ∈ X:

Π f(x) := Σ_{y∈X} ρ_y f(y) (constant in x)
P f(x) := Σ_{y∈X} P(x,y) f(y) = E[f(x_{n+1}) | x_n = x]

and we have that P^n f(x) = E[f(x_n) | x_0 = x]. From the standard theory of semi-Markov processes (see e.g. [11], propositions 3.16.2 and 3.9.1) we get that:

lim_{t→∞} (1/t) E[N(t)] = 1/M_ρ   and   lim_{t→∞} (1/t) N(t) = 1/M_ρ a.e.

with:

M_ρ := Π m_1,   m_n(x) := ∫_{R+} t^n F(x, dt)

Now we are ready to introduce inhomogeneous random evolutions. Consider a family of inhomogeneous Y-semigroups (Γ_x)_{x∈X}, with respective generators (A_x)_{x∈X}, satisfying ∀ s ∈ J:

(r, t, x, f) → Γ_x(r∧t, r∨t)f is Bor(J(s)) ⊗ Bor(J(s)) ⊗ 𝒳 ⊗ 𝒴 − 𝒴 measurable

as well as a family (D(x,y))_{(x,y)∈X×X} ⊆ B(Y) of B(Y)-contractions, satisfying:

(x, y, f) → D(x,y)f is 𝒳 ⊗ 𝒳 ⊗ 𝒴 − 𝒴 measurable
Definition 3.1.
The function V: ∆_J × Ω → B(Y) defined pathwise by:

V(s,t)(ω) = [ Π_{k=1}^{N_s(t)} Γ_{x_{k−1}(s)}(T_{k−1}(s), T_k(s)) D(x_{k−1}(s), x_k(s)) ] Γ_{x(t)}(T_{N_s(t)}(s), t)

is called a (Γ, D, x)-inhomogeneous Y-random evolution, or simply an inhomogeneous Y-random evolution. V is said to be continuous if D(x,y) = I, ∀ (x,y) ∈ X × X. V is said to be regular (resp. a B(Y)-contraction) if the (Γ_x)_{x∈X} are regular (resp. B(Y)-contractions).

Remark:
We use as conventions that Π_{k=1}^{0} := I and Π_{k=1}^{n} A_k := A_1 ... A_{n−1} A_n, that is, the product operator applies the product on the right.

Remark: if N_s(t) > 0, then x_{N_s(t)}(s) = x(T_{N_s(t)}(s)) = x(T_{N(t)}) = x(t). If N_s(t) = 0, then x(s) = x(t) and x_{N_s(t)}(s) = x_0(s) = x(T_0(s)) = x(s) = x(t). Therefore in all cases x_{N_s(t)}(s) = x(t).

The following proposition deals with the measurability of the inhomogeneous random evolution:

Property 3.2.
For s ∈ J, f ∈ Y, the stochastic process (V(s,t)(ω)f)_{(ω,t)∈Ω×J(s)} is adapted to the (augmented) filtration:

F_t(s) := σ[x_{n∧N_s(t)}(s), T_{n∧N_s(t)}(s) : n ∈ N] ∨ σ(P-null sets)

Proof. Let E ∈ 𝒴, (s,t) ∈ ∆_J, f ∈ Y. We have:

(V(s,t)f)^{-1}(E) = ∪_{n∈N} {V(s,t)f ∈ E} ∩ {N_s(t) = n}

Denoting by h_k := T_{(k+1)∧N_s(t)}(s) − T_{k∧N_s(t)}(s) the F_t(s)-Bor(R+) measurable (by construction) functions, remark that N_s(t)(ω) = sup_{m∈N} Σ_{k=0}^m 1_{h_k^{-1}(R+*)}(ω), so that N_s(t) is F_t(s)-Bor(R+) measurable. Therefore {N_s(t) = n} ∈ F_t(s). Let:

Ω_n := {N_s(t) = n},   M := {n ∈ N : Ω_n ≠ ∅}

M ≠ ∅ since Ω = ∪_{n∈N} Ω_n, and for n ∈ M, let the sigma-algebra F_n := F_t(s)|_{Ω_n} := {A ∈ F_t(s) : A ⊆ Ω_n} (F_n is a sigma-algebra on Ω_n since ∅ ≠ Ω_n ∈ F_t(s)). Now consider the map V_n(s,t)f : (Ω_n, F_n) → (Y, 𝒴):

V_n(s,t)f := [ Π_{k=1}^{n} Γ_{x_{k−1}(s)}(T_{k−1}(s), T_k(s)) D(x_{k−1}(s), x_k(s)) ] Γ_{x_n(s)}(T_n(s), t)f

We have:

(V(s,t)f)^{-1}(E) = ∪_{n∈N} {V_n(s,t)f ∈ E} ∩ Ω_n = ∪_{n∈N} {ω ∈ Ω_n : V_n(s,t)f ∈ E} = ∪_{n∈N} (V_n(s,t)f)^{-1}(E)

Therefore it remains to show that (V_n(s,t)f)^{-1}(E) ∈ F_n, since F_n ⊆ F_t(s).

First let n > 0. Notice that V_n(s,t)f = ψ ∘ β_n ∘ α_n ∘ ... ∘ β_1 ∘ α_1 ∘ φ, where:

φ: Ω_n → J(s) × X × Ω_n → Y × Ω_n
ω → (T_n(s)(ω), x_n(s)(ω), ω) → (Γ_{x_n(s)(ω)}(T_n(s)(ω), t)f, ω)

The previous mapping holds since T_k(s)(ω) ∈ [s,t] ∀ ω ∈ Ω_n, k ∈ [|0,n|]. φ is measurable iff each one of the coordinate mappings is. The canonical projections are trivially measurable. Let A ∈ Bor(J(s)), B ∈ 𝒳. We have:

{ω ∈ Ω_n : T_n(s) ∈ A} = Ω_n ∩ (T_{n∧N_s(t)}(s))^{-1}(A) ∈ F_n
{ω ∈ Ω_n : x_n(s) ∈ B} = Ω_n ∩ (x_{n∧N_s(t)}(s))^{-1}(B) ∈ F_n

Now, by the measurability assumption, we have for B ∈ 𝒴:

{(t_n, y_n) ∈ J(s) × X : Γ_{y_n}(t∧t_n, t∨t_n)f ∈ B} = C ∈ Bor(J(s)) ⊗ 𝒳

and therefore:

{(t_n, y_n, ω) ∈ J(s) × X × Ω_n : Γ_{y_n}(t∧t_n, t∨t_n)f ∈ B} = C × Ω_n ∈ Bor(J(s)) ⊗ 𝒳 ⊗ F_n

Therefore φ is F_n − 𝒴 ⊗ F_n measurable. Define for i ∈ [|1,n|]:

α_i: Y × Ω_n → X × X × Y × Ω_n → Y × Ω_n
(g, ω) → (x_{n−i}(s)(ω), x_{n−i+1}(s)(ω), g, ω) → (D(x_{n−i}(s)(ω), x_{n−i+1}(s)(ω))g, ω)

Again, the canonical projections are trivially measurable. We have for p ∈ [|0,n|]:

{ω ∈ Ω_n : x_p(s) ∈ B} = Ω_n ∩ (x_{p∧N_s(t)}(s))^{-1}(B) := C ∈ F_n

Therefore {(g,ω) ∈ Y × Ω_n : x_p(s) ∈ B} = Y × C ∈ 𝒴 ⊗ F_n. Now, by the measurability assumption, ∀ B ∈ 𝒴, ∃ C ∈ 𝒳 ⊗ 𝒳 ⊗ 𝒴:

{(y_{n−i}, y_{n−i+1}, g, ω) ∈ X × X × Y × Ω_n : D(y_{n−i}, y_{n−i+1})g ∈ B} = C × Ω_n ∈ 𝒳 ⊗ 𝒳 ⊗ 𝒴 ⊗ F_n,

which proves the measurability of α_i. Then we define for i ∈ [|1,n|]:

β_i: Y × Ω_n → J(s) × J(s) × X × Y × Ω_n → Y × Ω_n
(g, ω) → (T_{n−i}(s)(ω), T_{n−i+1}(s)(ω), x_{n−i}(s)(ω), g, ω) → (Γ_{x_{n−i}(s)(ω)}(T_{n−i}(s)(ω), T_{n−i+1}(s)(ω))g, ω)

By the measurability assumption, ∀ B ∈ 𝒴, ∃ C ∈ Bor(J(s)) ⊗ Bor(J(s)) ⊗ 𝒳 ⊗ 𝒴:

{(t_{n−i}, t_{n−i+1}, y_{n−i}, g, ω) ∈ J(s) × J(s) × X × Y × Ω_n : Γ_{y_{n−i}}(t_{n−i}∧t_{n−i+1}, t_{n−i}∨t_{n−i+1})g ∈ B} = C × Ω_n ∈ Bor(J(s)) ⊗ Bor(J(s)) ⊗ 𝒳 ⊗ 𝒴 ⊗ F_n,

which proves the measurability of β_i. Finally, define the canonical projection:

ψ: Y × Ω_n → Y, (g, ω) → g

which proves the measurability of V_n(s,t)f. For n = 0, we have V_0(s,t)f = Γ_{x(s)}(s,t)f and the proof is similar.

The following propositions show that the random evolution has right-continuous paths and that it satisfies an integral representation, which will be used in section 4 to prove relative compactness in the Skorohod space D(J(s), Y).

Property 3.3.
Let V be an inhomogeneous Y-random evolution and (s,t) ∈ ∆_J, ω ∈ Ω. Then V(•,•)(ω) is an inhomogeneous Y-semigroup. Further, if V is regular, then u → V(s,u)(ω) is Y-strongly RCLL on J(s), i.e. ∀ f ∈ Y, u → V(s,u)(ω)f ∈ D(J(s), (Y, ||·||)). More precisely, we have for f ∈ Y:

V(s,u−)f = V(s,u)f if u ∉ {T_n(s) : n ∈ N}
V(s,T_{n+1}(s))f = V(s,T_{n+1}(s)−) D(x_n(s), x_{n+1}(s))f  ∀ n ∈ N

where we denote V(s,t−)f := lim_{u↑t} V(s,u)f.

Proof. see Appendix A

In particular, we observe that if D = I, then u → V(s,u)(ω) is in fact Y-strongly continuous on J(s).

Property 3.4.
Let V be a regular inhomogeneous Y-random evolution and (s,t) ∈ ∆_J, f ∈ Y_0. Then V satisfies on Ω:

V(s,t)f = f + ∫_s^t V(s,u) A_{x(u)}(u)f du + Σ_{k=1}^{N_s(t)} V(s, T_k(s)−)[D(x_{k−1}(s), x_k(s)) − I]f

Proof. see Appendix A
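A small simulation can illustrate the objects of this section (a sketch with invented scalar data, not from the paper): a two-state semi-Markov process whose embedded chain alternates between the states (so ρ = (1/2, 1/2)), and the continuous random evolution (D = I) of Definition 3.1 built from scalar propagators Γ_x(r,t)f = exp(∫_r^t a_x(u)du)·f, with a_0(u) = cos u and a_1(u) = 1/2 assumed for the example. The code checks the renewal limit N(t)/t → 1/M_ρ, the propagator property of V from Property 3.3, and cross-checks the product against a quadrature of the modulated generator u → a_{x(u)}(u):

```python
import math, random
random.seed(7)

# two-state semi-Markov process: embedded chain alternates 0 <-> 1,
# sojourns Uniform(0.5,1.5) in state 0 and Uniform(1.5,2.5) in state 1,
# so m1(0) = 1, m1(1) = 2 and M_rho = (1 + 2)/2 = 1.5
def simulate(horizon):
    T, x = [0.0], [0]
    while T[-1] <= horizon:
        tau = random.uniform(0.5, 1.5) if x[-1] == 0 else random.uniform(1.5, 2.5)
        T.append(T[-1] + tau)
        x.append(1 - x[-1])
    return T, x

# renewal law of large numbers: N(t)/t -> 1/M_rho = 2/3
horizon = 20000.0
T, x = simulate(horizon)
n_jumps = sum(1 for u in T[1:] if u <= horizon)
assert abs(n_jumps / horizon - 1 / 1.5) < 0.02

# scalar generators and their antiderivatives
a = {0: math.cos, 1: lambda u: 0.5}
F = {0: math.sin, 1: lambda u: 0.5 * u}

Ts, xs = simulate(12.0)            # a short path for the evolution itself

def state_at(u):
    k = 0
    while k + 1 < len(Ts) and Ts[k + 1] <= u:
        k += 1
    return xs[k]

def V(s, t, f):
    # product of Definition 3.1 with D = I: one propagator factor per sojourn
    val, left, xl = f, s, state_at(s)
    for Tk, xk in zip(Ts, xs):
        if left < Tk <= t:
            val *= math.exp(F[xl](Tk) - F[xl](left))
            left, xl = Tk, xk
    return val * math.exp(F[xl](t) - F[xl](left))

s, t, f = 1.0, 9.0, 2.0
# semigroup property of V(.,.)(w) (Property 3.3)
assert abs(V(s, 4.0, V(4.0, t, f)) - V(s, t, f)) < 1e-8

# cross-check against a quadrature of the modulated generator u -> a_{x(u)}(u)
n = 50000
h = (t - s) / n
integral = sum(a[state_at(s + (i + 0.5) * h)](s + (i + 0.5) * h) for i in range(n)) * h
assert abs(V(s, t, f) - f * math.exp(integral)) / abs(V(s, t, f)) < 5e-3
print("semi-Markov random evolution checks passed")
```

In this commuting scalar case V(s,t)f = exp(∫_s^t a_{x(u)}(u)du)·f exactly; for operator-valued, non-commuting generators, only the ordered product of Definition 3.1 is available.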
4. Law of Large Numbers for the inhomogeneous Random Evolution

Notation: in the following we denote, for ε ∈ R+, (s,t) ∈ R+²: t^{ε,s} := s + ε(t − s).

In the same way we introduced inhomogeneous Y-random evolutions, we consider a family (D_ε(x,y))_{(x,y)∈X×X, ε∈(0,1]} of B(Y)-contractions, satisfying ∀ ε ∈ (0,1]:

(x, y, f) → D_ε(x,y)f is 𝒳 ⊗ 𝒳 ⊗ 𝒴 − 𝒴 measurable

and let D_0(x,y) := I. We define:

D(D^{(1)}) := ∩_{ε∈[0,1), (x,y)∈X×X} {f ∈ Y : lim_{h→0+, ε+h∈[0,1]} (D_{ε+h}(x,y)f − D_ε(x,y)f)/h ∈ Y}

and ∀ f ∈ D(D^{(1)}):

D_ε^{(1)}(x,y)f := lim_{h→0+, ε+h∈[0,1]} (D_{ε+h}(x,y)f − D_ε(x,y)f)/h

In the same way we introduce D(D^{(2)}), corresponding to the 2nd derivative. We also let:

D(D^{(1)} ∈ Y_0) := {f ∈ D(D^{(1)}) ∩ Y_0 : D_0^{(1)}(x,y)f ∈ Y_0 ∀ (x,y) ∈ X × X}

We define the space:

D̂ := ∩_{x∈X} D(A'_x) ∩ D(D^{(2)}) ∩ D(D^{(1)} ∈ Y_0),

where D(A'_x) was defined in 2.12. These notations will hold until the end of the paper. In this section we assume the following set of assumptions, which we call (A0):

Assumptions on the regularity of operators:
i) Y_0 ⊆ D(D^{(1)})
ii) (Γ_x)_{x∈X} are regular
iii) A_x is Y-strongly continuous, ∀ x ∈ X

Assumptions on the semi-Markov process:
i) ∃ τ̄ > 0: Q(x, y, (τ̄, ∞)) = 0 ∀ (x,y) ∈ X × X (uniformly bounded sojourn increments)

Remark: the latter implies in particular that all the moments m_n are well-defined.
Assumptions on the boundedness of operators:
i) (Γ_x)_{x∈X} and (D_ε(x,y))_{(x,y)∈X×X, ε∈(0,1]} are B(Y)-contractions, with D_0 := I
ii) A_x(t) ∈ B(Y_0, Y) ∀ t ∈ J, ∀ x ∈ X, and sup_{u∈J, x∈X} ||A_x(u)||_{B(Y_0,Y)} < ∞
iii) sup_{u∈J, x∈X} ||A'_x(u)f|| < ∞ and sup_{u∈J, x∈X} ||A_x(u)f||_{Y_0} < ∞, ∀ f ∈ ∩_{x∈X} D(A'_x)
iv) sup_{ε∈[0,1], (x,y)∈X×X} ||D_ε^{(1)}(x,y)f|| < ∞, ∀ f ∈ D(D^{(1)})
v) sup_{ε∈[0,1], (x,y)∈X×X} ||D_ε^{(2)}(x,y)f|| < ∞, ∀ f ∈ D(D^{(2)})

As said in the introduction, similarly to what has been done in [25] or [27], we index the inhomogeneous Random Evolution by a small parameter ε that we use to rescale time, so that the semi-Markov process goes to its unique stationary distribution:

Definition 4.1.
Let V be an inhomogeneous Y-random evolution. We define (pathwise on Ω) the inhomogeneous Y-random evolution in the averaging scheme V_ε, for ε ∈ (0,1], (s,t) ∈ ∆_J, by:

V_ε(s,t) := [ Π_{k=1}^{N_s(t^{ε^{-1},s})} Γ_{x_{k−1}(s)}(T^{ε,s}_{k−1}(s), T^{ε,s}_k(s)) D_ε(x_{k−1}(s), x_k(s)) ] Γ_{x(t^{ε^{-1},s})}(T^{ε,s}_{N_s(t^{ε^{-1},s})}(s), t)

Remark: we notice that V_ε is well-defined since on Ω:

T^{ε,s}_{N_s(t^{ε^{-1},s})}(s) = s + ε(T_{N_s(t^{ε^{-1},s})}(s) − s) ≤ s + ε(t^{ε^{-1},s} − s) = t

and that it coincides with V for ε = 1, i.e. V_1(s,t) = V(s,t).

Our goal is to prove, as in [25], that for each f in some suitable subset of Y, {V_ε(s,•)f}, seen as a family of elements of D(J(s), Y), converges weakly to some continuous limiting process V̂(s,•)f to be determined. To this end, we will first prove that {V_ε(s,•)f} is relatively compact with a.e. continuous weak limit points. This is equivalent to the notion of C-tightness in [10] (VI.3), because P(D(J(s), Y)) topologized with the Prohorov metric is a separable and complete metric space (Y being a separable Banach space), which implies that relative compactness and tightness are equivalent in P(D(J(s), Y)) (by Prohorov's theorem). Then we will identify the limiting process V̂. We first need some classical elements that can be found in [25] (1.4) and [4] (sections 3.8 to 3.11). In particular, the Skorohod space D(J(s), Y) will always be topologized with the Skorohod metric d (see [4], chapter 3, equation 5.2).

Definition 4.2.
Let (ν_n)_{n∈N} be a sequence of probability measures on a metric space (S,d). We say that ν_n converges weakly to ν, and write ν_n ⇒ ν, iff ∀ f ∈ C_b(S):

lim_{n→∞} ∫_S f dν_n = ∫_S f dν

Definition 4.3. Let {ν_ε} be a family of probability measures on a metric space (S,d). {ν_ε} is said to be relatively compact iff for any sequence (ν_n)_{n∈N} ⊆ {ν_ε}, there exists a weakly converging subsequence.

Definition 4.4. Let s ∈ J and {X_ε} a family of stochastic processes with sample paths in D(J(s), Y). We say that {X_ε} is relatively compact iff {L(X_ε)} is (in the metric space P(D(J(s), Y)) endowed with the Prohorov metric). We write X_ε ⇒ X iff L(X_ε) ⇒ L(X). We say that {X_ε} is C-relatively compact iff it is relatively compact and, whenever X_ε ⇒ X, X has a.e. continuous sample paths. If E_Y ⊆ Y, we say that {V_ε} is E_Y-relatively compact (resp. E_Y-C-relatively compact) iff {V_ε(s,•)f} is, ∀ f ∈ E_Y, ∀ s ∈ J.

Definition 4.5.
Let s ∈ J and {X_ε} a family of stochastic processes with sample paths in D(J(s), Y). We say that {X_ε} satisfies the compact containment criterion if ∀ ∆ ∈ (0,1), ∀ T ∈ J(s), there exists a compact set K ⊆ Y such that:

liminf_{ε→0} P[X_ε(t) ∈ K ∀ t ∈ [s,T]] ≥ 1 − ∆

We say that {V_ε} satisfies the compact containment criterion in E_Y ⊆ Y if ∀ f ∈ E_Y, ∀ s ∈ J, {V_ε(s,•)f} satisfies it.

Theorem 4.6. Let s ∈ J and {X_ε} a family of stochastic processes with sample paths in D(J(s), Y). {X_ε} is C-relatively compact iff it is relatively compact and J_s(X_ε) ⇒ 0, where:

J_s(X_ε) := ∫_s^∞ e^{−u} (J_s(X_ε, u) ∧ 1) du
J_s(X_ε, u) := sup_{t∈[s,u]} ||X_ε(t) − X_ε(t−)||

Theorem 4.7. Let s ∈ J and {X_ε} a family of stochastic processes with sample paths in D(J(s), Y). {X_ε} is relatively compact iff:
i) {X_ε} satisfies the compact containment criterion
ii) ∀ T ∈ J(s), ∃ r > 0 and a family {C_s(ε,η) : (ε,η) ∈ (0,1) × (0,1)} of non-negative random variables such that ∀ (ε,η) ∈ (0,1) × (0,1), ∀ h ∈ [0,η], ∀ t ∈ [s,T]:

E[||X_ε(t+h) − X_ε(t)||^r | G_t^{ε,s}] ≤ E[C_s(ε,η) | G_t^{ε,s}]
lim_{η→0} limsup_{ε→0} E[C_s(ε,η)] = 0

where G_t^{ε,s} := σ[X_ε(u) : u ∈ [s,t]].

The compact containment criterion can in practice be quite hard to prove, because we need either a compact embedding of a Banach space into Y, or a characterization of compact sets in Y. An important remark should be made about applications where Y = C_0(R^d), the space of continuous functions R^d → R vanishing at infinity. To prove this compact containment criterion, it is said in both [27] and [25] that there exists a compact embedding of a Sobolev space into C_0(R^d), which is not true. Here we suggest a method to prove it (4.9), which will be applied in section 5 to the case of inhomogeneous Lévy Random Evolutions. If we have a compact embedding, then the proof of the compact containment is easy, as mentioned here:

Property 4.8.
Assume that there exists a Banach space (Z, |||·|||) compactly embedded in Y, and that (Γ_x)_{x∈X} and (D_ε(x,y))_{ε∈(0,1], (x,y)∈X×X} are B(Z)-contractions. Then {V_ε} satisfies the compact containment criterion in Z.

Proof. Let f ∈ Z, (s,t) ∈ ∆_J, c := |||f||| and K := cl_Y(S_c(Z)), the Y-closure of the Z-closed ball S_c(Z) of radius c. K is compact because of the compact embedding of Z into Y. Let ε ∈ (0,1]. Since (Γ_x)_{x∈X} and (D_ε(x,y))_{(x,y)∈X×X} are B(Z)-contractions, V_ε is a B(Z)-contraction and we have ∀ ω ∈ Ω: |||V_ε(s,t)(ω)f||| ≤ |||f||| = c. Therefore V_ε(s,t)(ω)f ∈ S_c(Z) ⊆ K and so P[V_ε(s,t)f ∈ K ∀ t ∈ [s,T]] = P(Ω) = 1 ≥ 1 − ∆.

For example, we can consider the Rellich-Kondrachov compactness theorem: if U ⊆ R^d is an open, bounded Lipschitz domain, then the Sobolev space W^{1,p}(U) is compactly embedded in L^q(U), where p ∈ [1,d) and q ∈ [1, dp/(d−p)). For the space C_0(R^d), there is no well-known such compact embedding; therefore we have to proceed differently.

Property 4.9.
Let $Y := C_0(\mathbb{R}^d)$ and $E_Y \subseteq Y$. Assume that $\forall \Delta \in (0,1)$, $\forall (s,T) \in \Delta_J$, $\forall \epsilon \in (0,1]$, $\forall f \in E_Y$, $\exists A_\epsilon \subseteq \Omega$ such that $P(A_\epsilon) \ge 1 - \Delta$ and the family $\{V_\epsilon(s,t)(\omega)f : t \in [s,T],\ \epsilon \in (0,1],\ \omega \in A_\epsilon\}$ converges uniformly to $0$ at infinity, is equicontinuous and is uniformly bounded. Then $\{V_\epsilon\}$ satisfies the compact containment criterion in $E_Y$.

Remark: we say that a family of functions $\{f_\alpha\}$ converges uniformly to $0$ at infinity if $\forall \epsilon > 0$, $\exists \delta > 0$ such that $|z| > \delta \Rightarrow \sup_\alpha |f_\alpha(z)| < \epsilon$.

Proof.
Let $f \in E_Y$ and let $\overline{K}$ be the $Y$-closure of the set $K := \{V_\epsilon(s,t)(\omega)f : \epsilon \in (0,1],\ \omega \in A_\epsilon,\ t \in [s,T]\}$. $K$ is a family of elements of $Y$ that are equicontinuous, uniformly bounded and converge uniformly to $0$ at infinity by assumption. Therefore it is well-known, using the Arzelà–Ascoli theorem on the Alexandroff compactification of $\mathbb{R}^d$, that $K$ is relatively compact in $Y$, and therefore that $\overline{K}$ is compact in $Y$. And we have $\forall \epsilon \in (0,1]$:
$$P[V_\epsilon(s,t)f \in \overline{K}\ \forall t \in [s,T]] \ge P[\{\omega \in A_\epsilon : V_\epsilon(s,t)f \in \overline{K}\ \forall t \in [s,T]\}] = P(A_\epsilon) \ge 1 - \Delta.$$

Remark:
Because $\lim_{t \to \infty} \frac{N(t)}{t} = \frac{1}{M_\rho}$ a.e., this set $A_\epsilon$ will typically be $A_\epsilon := \left\{\epsilon N_s\!\left(T^{\epsilon,s}\right) \le n_0\right\}$ for some well-chosen constant $n_0$ (see Appendix B).

Relative compactness in $D(J(s), Y)$. In the following we will make the following assumption:

(A1) $\{V_\epsilon\}$ satisfies the compact containment criterion in $Y$.

Lemma 4.10.
Let $(s,t) \in \Delta_J$ and $f \in Y$. Under (A0), $V_\epsilon$ satisfies on $\Omega$:
$$V_\epsilon(s,t)f = f + \int_s^t V_\epsilon(s,u)\, A_{x(u^{\epsilon,s})}(u)f\, du + \sum_{k=1}^{N_s(t^{\epsilon,s})} V_\epsilon\!\left(s, T^{\epsilon,s}_k(s)^-\right)\left[D_\epsilon(x_{k-1}(s), x_k(s)) - I\right]f$$

Proof.
Same proof as for 3.4, except that the induction is made on the intervals $[T^{\epsilon,s}_k(s), T^{\epsilon,s}_{k+1}(s)) \cap J(s)$ instead of $[T_k(s), T_{k+1}(s)) \cap J(s)$.

Lemma 4.11.
Assume (A0) and (A1). Then $\{V_\epsilon\}$ is $Y$-relatively compact.

Proof. We want to prove 4.7. Using 4.10 we have for $h \in [0, \eta]$:
$$\|V_\epsilon(s,t+h)f - V_\epsilon(s,t)f\| \le \left\|\int_t^{t+h} V_\epsilon(s,u) A_{x(u^{\epsilon,s})}(u) f\, du + \sum_{k=N_s(t^{\epsilon,s})+1}^{N_s((t+h)^{\epsilon,s})} V_\epsilon\!\left(s, T^{\epsilon,s}_k(s)^-\right)\left[D_\epsilon(x_{k-1}(s), x_k(s)) - I\right]f\right\|$$
$$\le \eta M_1 + \epsilon \sum_{k=N_s(t^{\epsilon,s})+1}^{N_s((t+\eta)^{\epsilon,s})} \frac{1}{\epsilon}\left\|D_\epsilon(x_{k-1}(s), x_k(s))f - f\right\| \le \eta M_1 + \epsilon M_2\left[N\!\left((t+\eta)^{\epsilon,s}\right) - N\!\left(t^{\epsilon,s}\right) + 1\right]$$
where $M_1 := \sup_{x,u}\|A_x(u)\|_{B(Y_1,Y)}\|f\|_{Y_1}$ and $M_2 := \sup_{\epsilon,x,y}\frac{1}{\epsilon}\|D_\epsilon(x,y)f - f\|$ (finite by (A0)). Notice that $\|V_\epsilon(s, T^{\epsilon,s}_k(s)^-)\|_{B(Y)} \le 1$ since the $\Gamma_x$ and $D_\epsilon$ are $B(Y)$-contractions.

For $\epsilon \in (0,1]$ we have:
$$\epsilon\left[N\!\left((t+\eta)^{\epsilon,s}\right) - N\!\left(t^{\epsilon,s}\right) + 1\right] \le \epsilon \sup_{t \in [s,s+\eta]}\left[N\!\left((t+\eta)^{\epsilon,s}\right) - N\!\left(t^{\epsilon,s}\right)\right] + \epsilon \sup_{t \in [s+\eta,T]}\left[N\!\left((t+\eta)^{\epsilon,s}\right) - N\!\left(t^{\epsilon,s}\right)\right] + \epsilon$$
$$\le \epsilon N\!\left((s+2\eta)^{\epsilon,s}\right) + \epsilon \sup_{t \in [s+\eta,T]}\left[N\!\left((t+\eta)^{\epsilon,s}\right) - N\!\left(t^{\epsilon,s}\right)\right] + \epsilon$$
Note that the suprema in the previous expression are a.e. finite, as they are a.e. bounded by $N((T+1)^{\epsilon,s})$. Now let:
$$C_s(\epsilon,\eta) := \eta M_1 + M_2\,\epsilon N\!\left((s+2\eta)^{\epsilon,s}\right) + M_2\,\epsilon \sup_{t \in [s+\eta,T]}\left[N\!\left((t+\eta)^{\epsilon,s}\right) - N\!\left(t^{\epsilon,s}\right)\right] + M_2\,\epsilon$$
We have to show that $\lim_{\eta \to 0}\limsup_{\epsilon \to 0} E[C_s(\epsilon,\eta)] = 0$. We have:
$$\lim_{\eta \to 0}\limsup_{\epsilon \to 0}\left(\eta M_1 + M_2\,\epsilon\, E\!\left[N\!\left((s+2\eta)^{\epsilon,s}\right)\right] + M_2\,\epsilon\right) = \lim_{\eta \to 0}\left(\eta M_1 + \frac{2\eta M_2}{M_\rho}\right) = 0$$
Let $\{\epsilon_n\}$ be any sequence that goes to $0$, and denote:
$$Z_n := \epsilon_n \sup_{t \in [s+\eta,T]}\left[N\!\left((t+\eta)^{\epsilon_n,s}\right) - N\!\left(t^{\epsilon_n,s}\right)\right]$$
We first want to show that $\{Z_n\}$ is uniformly integrable. By [4], it is sufficient to show that $\sup_n E(Z_n^2) < \infty$. We have $E(Z_n^2) \le \epsilon_n^2\, E\!\left[N^2\!\left((T+1)^{\epsilon_n,s}\right)\right]$. But by [7], we get that for an ergodic Markov renewal process (which is our framework):
$$\lim_{t \to \infty} \frac{E(N^2(t))}{t^2} < \infty$$
and therefore $\{Z_n\}$ is uniformly integrable. Then we show that $Z_n \xrightarrow{a.e.} Z := \frac{\eta}{M_\rho}$. Let:
$$\Omega^* := \left\{\lim_{\epsilon \to 0} \epsilon N\!\left((s+1)^{\epsilon,s}\right) = \frac{1}{M_\rho}\right\},$$
so that $P(\Omega^*) = 1$. Let $\omega \in \Omega^*$ and $\delta > 0$. There exists some constant $r(\omega,\delta) > 0$ such that for $\epsilon < r$:
$$\left|\epsilon N\!\left((s+1)^{\epsilon,s}\right) - \frac{1}{M_\rho}\right| < \frac{\delta}{T+\eta}$$
and so, if $t \in [s+\eta, T+\eta]$:
$$\left|(t-s)\,\epsilon N\!\left((s+1)^{\epsilon,s}\right) - \frac{t-s}{M_\rho}\right| < \frac{\delta(t-s)}{T+\eta} \le \delta$$
Let $\epsilon_1 < \eta r$ (recall $\eta > 0$) and $\epsilon_2 := \frac{\epsilon_1}{t-s}$. Then $\epsilon_2 < \frac{\eta r}{\eta} = r$, and therefore:
$$\left|(t-s)\,\epsilon_2 N\!\left((s+1)^{\epsilon_2,s}\right) - \frac{t-s}{M_\rho}\right| < \delta \ \Rightarrow\ \left|\epsilon_1 N\!\left(t^{\epsilon_1,s}\right) - \frac{t-s}{M_\rho}\right| < \delta$$
And therefore for $\epsilon < \eta r$ and $t \in [s+\eta, T]$:
$$\left|\epsilon N\!\left((t+\eta)^{\epsilon,s}\right) - \epsilon N\!\left(t^{\epsilon,s}\right) - \frac{\eta}{M_\rho}\right| < 2\delta \ \Rightarrow\ \left|\sup_{t \in [s+\eta,T]}\left[\epsilon N\!\left((t+\eta)^{\epsilon,s}\right) - \epsilon N\!\left(t^{\epsilon,s}\right)\right] - \frac{\eta}{M_\rho}\right| \le 2\delta$$
We have proved that $Z_n \xrightarrow{a.e.} Z$. By uniform integrability of $\{Z_n\}$, we get that $\lim_{n \to \infty} E(Z_n) = E(Z)$, and therefore, since the sequence $\{\epsilon_n\}$ is arbitrary:
$$\lim_{\epsilon \to 0} \epsilon\, E\!\left[\sup_{t \in [s+\eta,T]}\left[N\!\left((t+\eta)^{\epsilon,s}\right) - N\!\left(t^{\epsilon,s}\right)\right]\right] = \frac{\eta}{M_\rho}$$

Lemma 4.12.
Assume (A0) and (A1). Then $\{V_\epsilon\}$ is $Y$-C-relatively compact.

Proof.
We want to prove 4.6. By 4.11, $\{V_\epsilon\}$ is relatively compact, so by 4.6 it is sufficient to show that $J_s(V_\epsilon(s,\bullet)f) \xrightarrow{a.e.} 0$. Let $\delta > 0$ and $T_0 > s$ such that $e^{-T_0} \le \delta$. For $u \in [s, T_0]$ we have:
$$J_s(V_\epsilon(s,\bullet)f, u) \le \sup_{t \in [s,T_0]}\|V_\epsilon(s,t)f - V_\epsilon(s,t^-)f\|$$
$$\text{(using 4.10)} \quad = \max_{k \in \left[1,\, N_s(T_0^{\epsilon,s})\right]}\left\|V_\epsilon(s, T^{\epsilon,s}_k(s))f - V_\epsilon\!\left(s, T^{\epsilon,s}_k(s)^-\right)f\right\|$$
$$\text{(using 4.10)} \quad = \max_{k \in \left[1,\, N_s(T_0^{\epsilon,s})\right]}\left\|V_\epsilon\!\left(s, T^{\epsilon,s}_k(s)^-\right)\left(D_\epsilon(x_{k-1}(s), x_k(s))f - f\right)\right\|$$
$$\le \max_{k \in \left[1,\, N_s(T_0^{\epsilon,s})\right]}\|D_\epsilon(x_{k-1}(s), x_k(s))f - f\| \le \max_{(x,y) \in X \times X}\|D_\epsilon(x,y)f - f\|$$
Since $Y \subseteq D(D)$, $\exists r > 0$ such that $\epsilon < r \Rightarrow \max_{(x,y) \in X \times X}\|D_\epsilon(x,y)f - f\| < \delta$, and therefore for $\epsilon < r$, $\omega \in \Omega$:
$$J_s(V_\epsilon(s,\bullet)f) = \int_s^{T_0} e^{-u}\left(J_s(V_\epsilon(s,\bullet)f, u) \wedge 1\right)du + \int_{T_0}^\infty e^{-u}\left(J_s(V_\epsilon(s,\bullet)f, u) \wedge 1\right)du < \delta\left(e^{-s} - e^{-T_0}\right) + e^{-T_0} \le 2\delta$$

Weak convergence in $D(J(s), Y)$ of the inhomogeneous Random Evolution. In this section, we prove our main result 4.19. As in [27] and [25], we start by finding a martingale characterization of $V_\epsilon(s,\bullet)f$ (4.15, 4.17). We then prove that this martingale converges weakly to zero (4.18). Nevertheless, the proof of the main result 4.19 will differ completely from what has been done in the latter references, for the reasons mentioned in the introduction. We first start with the following definition, which involves the operator $(P-I)^{-1}$ of the uniformly ergodic Markov chain $(x_n)$.

Definition 4.13.
Assume (A0). For $f \in Y$, $x \in X$, $t \in J$, let $f^\epsilon(x,t) := f + \epsilon f_1(x,t)$, where $f_1$ is the unique solution of the equation (see [18], proposition 4):
$$(P - I)f_1(\bullet, t)(x) = M_\rho\left[\widehat{A}(t) - a(x,t)\right]f$$
$$a(x,t) := \frac{1}{M_\rho}\left(m(x)A_x(t) + PD(x,\bullet)(x)\right), \qquad \widehat{A}(t) := \Pi\, a(\bullet,t)$$
namely, $f_1(x,t) = M_\rho (P-I)^{-1}\left(\widehat{A}(t)f - a(\bullet,t)f\right)(x)$.

Remark 4.14.
Recall that $P$, $\Pi$, $M_\rho$ have been defined at the beginning of section 3. The existence of $f_1$ is guaranteed because $\Pi[\widehat{A}(t) - a(\bullet,t)]f = 0$ by definition of $\widehat{A}$ (see [18], proposition 4). In fact, in [18], the operators $\Pi$ and $P$ are defined on $B^b_{\mathbb{R}}(X)$, but the results hold true if we work on $B^b_E(X)$, where $E$ is any Banach space such that $[\widehat{A}(t) - a(x,t)]f \in E$ (e.g. $E = Y$ if $f \in \widehat{D}$). To see that, first observe that $P$ and $\Pi$ can be defined the same way on $B^b_E(X)$ as they were on $B^b_{\mathbb{R}}(X)$. Then take $\ell \in E^*$ such that $\|\ell\| = 1$ and $g \in B^b_E(X)$ such that $\|g\|_{B^b_E(X)} = \max_x\|g(x)\|_E = 1$. We therefore have $\|\ell \circ g\|_{B^b_{\mathbb{R}}(X)} \le 1$, and since we have the uniform ergodicity on $B^b_{\mathbb{R}}(X)$:
$$\sup_{\substack{\|\ell\|=1,\ \|g\|_{B^b_E(X)}=1 \\ x \in X}} |P^n(\ell \circ g)(x) - \Pi(\ell \circ g)(x)| \le \|P^n - \Pi\|_{B(B^b_{\mathbb{R}}(X))} \to 0$$
By linearity of $\ell$, $P$, $\Pi$ (and because $P$ and $\Pi$ can be defined the same way on $B^b_E(X)$ as they were on $B^b_{\mathbb{R}}(X)$), we get that $|P^n(\ell \circ g)(x) - \Pi(\ell \circ g)(x)| = |\ell(P^n g(x) - \Pi g(x))|$. But because $\|P^n g(x) - \Pi g(x)\|_E = \sup_{\|\ell\|=1}|\ell(P^n g(x) - \Pi g(x))|$ and this supremum is attained (see e.g. [2], section III.6), then:
$$\sup_{\substack{\|\ell\|=1,\ \|g\|_{B^b_E(X)}=1 \\ x \in X}} |\ell(P^n g(x) - \Pi g(x))| = \sup_{\substack{\|g\|_{B^b_E(X)}=1 \\ x \in X}} \|P^n g(x) - \Pi g(x)\|_E = \sup_{\|g\|_{B^b_E(X)}=1} \|P^n g - \Pi g\|_{B^b_E(X)} = \|P^n - \Pi\|_{B(B^b_E(X))}$$
and so we also have $\|P^n - \Pi\|_{B(B^b_E(X))} \to 0$, i.e. the uniform ergodicity in $B^b_E(X)$. Now, according to the proofs of theorems 3.4 and 3.5, chapter VI of [21], and because the Markov chain $(x_n)$ is aperiodic, $\|P^n - \Pi\|_{B(B^b_E(X))} \to 0$ is the only thing we need to prove that $P - I$ is invertible on
$$B^\Pi_E(X) := \{f \in B^b_E(X) : \Pi f = 0\};$$
the space $E$ plays no role. Further, $(P-I)^{-1} \in B(B^\Pi_E(X))$ by the bounded inverse theorem.

Lemma 4.15.
Assume (A0). Define recursively for $\epsilon \in (0,1]$, $s \in J$:
$$V^\epsilon_0(s) := I, \qquad V^\epsilon_{n+1}(s) := V^\epsilon_n(s)\,\Gamma_{x_n(s)}\!\left(T^{\epsilon,s}_n(s), T^{\epsilon,s}_{n+1}(s)\right)D_\epsilon(x_n(s), x_{n+1}(s)),$$
i.e. $V^\epsilon_n(s) = V_\epsilon(s, T^{\epsilon,s}_n(s))$; and for $f \in Y$:
$$M^\epsilon_n(s)f := V^\epsilon_n(s)f^\epsilon(x_n(s), T^{\epsilon,s}_n(s)) - f^\epsilon(x_0(s), s) - \sum_{k=0}^{n-1} E\!\left[V^\epsilon_{k+1}(s)f^\epsilon(x_{k+1}(s), T^{\epsilon,s}_{k+1}(s)) - V^\epsilon_k(s)f^\epsilon(x_k(s), T^{\epsilon,s}_k(s)) \,\big|\, \mathcal{F}_k(s)\right]$$
so that $(M^\epsilon_n(s)f)_{n \in \mathbb{N}}$ is an $\mathcal{F}_n(s)$-martingale by construction. Let, for $t \in J(s)$:
$$\widetilde{M}^\epsilon_t(s)f := M^\epsilon_{N_s(t^{\epsilon,s})+1}(s)f, \qquad \widetilde{\mathcal{F}}^\epsilon_t(s) := \mathcal{F}_{N_s(t^{\epsilon,s})+1}(s)$$
where $\mathcal{F}_n(s) := \sigma[x_k(s), T_k(s) : k \le n] \vee \sigma(P\text{-null sets})$ and $\mathcal{F}_{N_s(t^{\epsilon,s})+1}(s)$ is defined the usual way (provided we have shown that $N_s(t^{\epsilon,s}) + 1$ is an $\mathcal{F}_n(s)$-stopping time $\forall t \in J(s)$). Then $\forall \ell \in Y^*$, $\forall s \in J$, $\forall \epsilon \in (0,1]$, $\forall f \in Y$, $(\ell(\widetilde{M}^\epsilon_t(s)f), \widetilde{\mathcal{F}}^\epsilon_t(s))_{t \in J(s)}$ is a real-valued martingale.

Remark:
The expectations in 4.15 are taken in the usual Bochner sense, and this will be the case throughout the paper unless mentioned otherwise.
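The compensated sum defining $M^\epsilon_n(s)f$ above is a discrete Doob decomposition: subtracting the accumulated conditional increments from an adapted integrable sequence leaves a martingale. The sketch below illustrates this mechanism numerically with a hypothetical biased $\pm 1$ random walk standing in for $V^\epsilon_n(s)f^\epsilon(x_n(s), T^{\epsilon,s}_n(s))$; it is an illustration of the compensation only, not of the operators themselves.

```python
import random

rng = random.Random(0)

# Adapted sequence X_n: biased +/-1 walk with P(+1) = 0.7, so the
# conditional mean increment is E[X_{n+1} - X_n | F_n] = 0.4.
# The Doob compensation
#     M_n = X_n - X_0 - sum_{k<n} E[X_{k+1} - X_k | F_k]
# removes the predictable drift and leaves a mean-zero martingale,
# the same mechanism behind M_n^eps(s) f in Lemma 4.15.
n_steps = 20000
drift = 0.7 * 1.0 + 0.3 * (-1.0)   # = 0.4, the predictable increment
x = 0.0                            # X_0 = 0
M = [0.0]                          # M_0 = 0
for n in range(1, n_steps + 1):
    x += 1.0 if rng.random() < 0.7 else -1.0
    M.append(x - n * drift)        # X_n - X_0 - n * drift

avg_increment = M[-1] / n_steps    # ~0 for a martingale started at 0
print(avg_increment)
```

While the raw walk drifts at rate $0.4$ per step, the compensated sequence fluctuates around $0$, which is what the optional sampling argument in the proof below exploits.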
Proof.
By construction, $(\ell(M^\epsilon_n(s)f), \mathcal{F}_n(s))_{n \in \mathbb{N}}$ is a martingale. Let $\theta(t) := N_s(t^{\epsilon,s}) + 1$. $\forall t \in J(s)$, $\theta(t)$ is an $\mathcal{F}_n(s)$-stopping time, because:
$$\{\theta(t) = n\} = \left\{N_s(t^{\epsilon,s}) = n-1\right\} = \left\{T_{n-1}(s) \le t^{\epsilon,s}\right\} \cap \left\{T_n(s) > t^{\epsilon,s}\right\} \in \mathcal{F}_n(s)$$
Let $t_1 \le t_2 \in J(s)$. We have that $(\ell(M^\epsilon_{\theta(t_2) \wedge n}(s)f), \mathcal{F}_n(s))_{n \in \mathbb{N}}$ is a martingale. Assume we have shown that it is uniformly integrable; then we can apply the optional sampling theorem for uniformly integrable martingales to the stopping times $\theta(t_1) \le \theta(t_2)$ a.e. and get:
$$E[\ell(M^\epsilon_{\theta(t_2)}(s)f) \mid \mathcal{F}_{\theta(t_1)}(s)] = \ell(M^\epsilon_{\theta(t_2) \wedge \theta(t_1)}(s)f) = \ell(M^\epsilon_{\theta(t_1)}(s)f) \text{ a.e.}$$
$$\Rightarrow\ E[\ell(\widetilde{M}^\epsilon_{t_2}(s)f) \mid \widetilde{\mathcal{F}}^\epsilon_{t_1}(s)] = \ell(\widetilde{M}^\epsilon_{t_1}(s)f) \text{ a.e.},$$
which shows that $(\ell(\widetilde{M}^\epsilon_t(s)f), \widetilde{\mathcal{F}}^\epsilon_t(s))_{t \in J(s)}$ is a martingale. Now, to show the uniform integrability, by [4] it is sufficient to show that $\sup_n E(\|M^\epsilon_{\theta(t_2) \wedge n}(s)f\|^2) < \infty$. But:
$$\|M^\epsilon_{\theta(t_2) \wedge n}(s)f\| \le \|f\| + \|f_1\| + 2(\|f\| + \|f_1\|)(\theta(t_2) \wedge n) \le 3(\|f\| + \|f_1\|)(1 + \theta(t_2))$$
where $\|f_1\| := \sup_{x,u}\|f_1(x,u)\|$ ($\|f_1\| < \infty$ since by (A0), $\sup_u\|A_x(u)\|_{B(Y_1,Y)} < \infty$ and $(P-I)^{-1} \in B(B^\Pi_Y(X))$ by Remark 4.14). Since $E(\theta(t_2)^2) < \infty$ by [7], we are done.

Remark 4.16.
In the following we will make use of the fact, which can be found in [1] (theorem 3.1), that for sequences $(X_n)$, $(Y_n)$ of random variables with values in a separable metric space with metric $d'$, if $X_n \Rightarrow X$ and $d'(X_n, Y_n) \Rightarrow 0$, then $Y_n \Rightarrow X$. In our case we will typically have $d'(X_n, Y_n) \xrightarrow{a.e.} 0$, and to show it we will use the remark in [4] after lemma 5.1, chapter 3, which implies that if $(X_n)$, $(Y_n)$ take values in $D(J(s), Y)$ and if $\forall T \in \mathbb{Q}^+ \cap J(s)$: $\lim_{n \to \infty}\sup_{t \in [s,T]}\|X_n(\omega)(t) - Y_n(\omega)(t)\| = 0$ a.e., then $d(X_n, Y_n) \xrightarrow{a.e.} 0$ (where, as mentioned before, $d$ is the Skorohod metric).

Lemma 4.17.
Assume (A0) and let $n_\epsilon(s,t) := 1 + \left\lfloor\frac{t-s}{\epsilon M_\rho}\right\rfloor$ and $t^\epsilon_k(s,t) := s + k\frac{t-s}{n_\epsilon(s,t)}$. For $f \in \widehat{D}$ and $s \in J$, $\widetilde{M}^\epsilon_\bullet(s)f$ has the asymptotic representation:
$$\widetilde{M}^\epsilon_\bullet(s)f = V_\epsilon(s,\bullet)f - f - \epsilon M_\rho \sum_{k=1}^{n_\epsilon(s,\bullet)} V_\epsilon(s, t^\epsilon_k(s,\bullet))\,\widehat{A}(t^\epsilon_k(s,\bullet))f + \circ(1) \text{ a.e.},$$
where $\circ(\epsilon^p)$ is defined by the following property: $\forall T \in \mathbb{Q}^+ \cap J(s)$, $\lim_{\epsilon \to 0}\epsilon^{-p}\sup_{t \in [s,T]}\|\circ(\epsilon^p)\| = 0$ a.e., so that remark 4.16 on the a.e. convergence in the Skorohod space will be satisfied.

Proof. For the sake of clarity let:
$$f^\epsilon_k := f^\epsilon(x_k(s), T^{\epsilon,s}_k(s)), \qquad f_{1,k} := f_1(x_k(s), T^{\epsilon,s}_k(s))$$
First we have that:
$$V^\epsilon_{N_s(t^{\epsilon,s})+1}(s)f^\epsilon_{N_s(t^{\epsilon,s})+1} = V^\epsilon_{N_s(t^{\epsilon,s})+1}(s)f + \circ(1)$$
because $\epsilon\|V^\epsilon_{N_s(t^{\epsilon,s})+1}(s)f_{1,N_s(t^{\epsilon,s})+1}\| \le \epsilon\|f_1\|$, where $\|f_1\|$ is defined as in the proof of 4.15. Now we have:
$$V^\epsilon_{k+1}(s)f^\epsilon_{k+1} - V^\epsilon_k(s)f^\epsilon_k = V^\epsilon_k(s)(f^\epsilon_{k+1} - f^\epsilon_k) + V^\epsilon_{k+1}(s)f^\epsilon_{k+1} - V^\epsilon_k(s)f^\epsilon_{k+1}$$
and:
$$E[V^\epsilon_k(s)(f^\epsilon_{k+1} - f^\epsilon_k) \mid \mathcal{F}_k(s)] = \epsilon V^\epsilon_k(s)\,E[(f_{1,k+1} - f_{1,k}) \mid \mathcal{F}_k(s)],$$
as $V^\epsilon_k(s)$ is $\mathcal{F}_k(s)$-$Bor(B(Y))$ measurable. Now, we know that every discrete-time Markov process with stationary transition kernel (i.e. one that does not depend on $n$) has the strong Markov property, so the Markov process $(x_n, T_n)$ has it. For $k \ge 1$, the times $N(s) + k$ are $\mathcal{F}_n(0)$-stopping times. Therefore for $k \ge 1$:
$$E[(f_{1,k+1} - f_{1,k}) \mid \mathcal{F}_k(s)] = E[(f_{1,k+1} - f_{1,k}) \mid T_k(s), x_k(s)]$$
and again using the strong Markov property as well as the stationarity of the Markov renewal process:
$$E[(f_{1,k+1} - f_{1,k}) \mid T_k(s) = t_k,\ x_k(s) = x] = \sum_{y \in X}\int_0^\infty f_1(y, t^{\epsilon,s}_k + \epsilon u)\,Q(x,y,du) - f_1(x, t^{\epsilon,s}_k)$$
Let $f_1'(x,t) = M_\rho(P-I)^{-1}[\widehat{A}'(t)f - a'(\bullet,t)f](x)$, with $a'(x,t) = \frac{1}{M_\rho}m(x)A'_x(t)$ and $\widehat{A}'(t) = \Pi\,a'(\bullet,t)$, which exists because $(P-I)^{-1} \in B(B^\Pi_Y(X))$ and $f \in \bigcap_{x \in X}D(A'_x)$. Using the fundamental theorem of calculus for the Bochner integral ($v \mapsto f_1'(y,v) \in L^1_Y([a,b])$ $\forall [a,b]$ since $\|f_1'\| < \infty$ by (A0): indeed, $\sup_{x,t}\|A'_x(t)f\| < \infty$):
$$E[(f_{1,k+1} - f_{1,k}) \mid T_k(s) = t_k,\ x_k(s) = x] = \sum_{y \in X}\int_0^\infty\left[f_1(y, t^{\epsilon,s}_k) + \int_{t^{\epsilon,s}_k}^{t^{\epsilon,s}_k + \epsilon u} f_1'(y,v)\,dv\right]Q(x,y,du) - f_1(x, t^{\epsilon,s}_k)$$
$$= (P-I)f_1(\bullet, t^{\epsilon,s}_k)(x) + \sum_{y \in X}\int_0^\infty\left[\int_0^{\epsilon u} f_1'(y, t^{\epsilon,s}_k + v)\,dv\right]Q(x,y,du) = (P-I)f_1(\bullet, t^{\epsilon,s}_k)(x) + \circ(1)$$
because, as mentioned before, $\|f_1'\| < \infty$ by (A0) and:
$$\left\|\sum_{y \in X}\int_0^\infty\left[\int_0^{\epsilon u} f_1'(y, t^{\epsilon,s}_k + v)\,dv\right]Q(x,y,du)\right\| \le \epsilon\|f_1'\|\,m(x)$$
All put together we have for $k \ge 1$, using the definition of $f_1$:
$$E[V^\epsilon_k(s)(f^\epsilon_{k+1} - f^\epsilon_k) \mid \mathcal{F}_k(s)] = \epsilon V^\epsilon_k(s)(P-I)f_1(\bullet, T^{\epsilon,s}_k(s))(x_k(s)) + \circ(\epsilon) = \epsilon M_\rho V^\epsilon_k(s)\left[\widehat{A}(T^{\epsilon,s}_k(s)) - a(x_k(s), T^{\epsilon,s}_k(s))\right]f + \circ(\epsilon)$$
and for the first term:
$$E[V^\epsilon_0(s)(f^\epsilon_1 - f^\epsilon_0) \mid \mathcal{F}_0(s)] = \circ(1) \quad (\le 2\epsilon\|f_1\|)$$
Now we have to compute the terms corresponding to $V^\epsilon_{k+1}(s)f^\epsilon_{k+1} - V^\epsilon_k(s)f^\epsilon_{k+1}$. We will show that the term corresponding to $k = 0$ is $\circ(1)$ and that for $k \ge 1$:
$$E[V^\epsilon_{k+1}(s)f^\epsilon_{k+1} - V^\epsilon_k(s)f^\epsilon_{k+1} \mid \mathcal{F}_k(s)] = \epsilon M_\rho V^\epsilon_k(s)\,a(x_k(s), T^{\epsilon,s}_k(s))f + \circ(\epsilon)$$
To conclude, we have to show that $\sum_{k=1}^{N_s(t^{\epsilon,s})}\circ(\epsilon) = \circ(1)$. But $\left\|\sum_{k=1}^{N_s(t^{\epsilon,s})}\circ(\epsilon)\right\| \le \frac{\|\circ(\epsilon)\|}{\epsilon}\,\epsilon N_s(t^{\epsilon,s})$, and the fact that $\sup_{t \in [0,T]}\epsilon N_s(t^{\epsilon,s}) = \epsilon N_s(T^{\epsilon,s}) \xrightarrow{a.e.} \frac{T-s}{M_\rho}$ concludes the proof.

For the term $k = 0$, we have, using (A0), the definition of $V^\epsilon_k$ and 2.8:
$$V^\epsilon_1(s)f^\epsilon_1 - V^\epsilon_0(s)f^\epsilon_0 = V^\epsilon_1(s)f - f + \circ(1) = \Gamma_{x_0(s)}(s, T^{\epsilon,s}_1(s))D_\epsilon(x_0(s), x_1(s))f - f + \circ(1)$$
$$= \Gamma_{x_0(s)}(s, T^{\epsilon,s}_1(s))\left(D_\epsilon(x_0(s), x_1(s))f - f\right) + \Gamma_{x_0(s)}(s, T^{\epsilon,s}_1(s))f - f + \circ(1)$$
$$\Rightarrow \left\|E[V^\epsilon_1(s)f^\epsilon_1 - V^\epsilon_0(s)f^\epsilon_0 \mid \mathcal{F}_0(s)]\right\| \le \max_{x,y}\|D_\epsilon(x,y)f - f\| + \int_0^\infty \epsilon u \sup_{x,t}\|A_x(t)\|_{B(Y_1,Y)}\|f\|_{Y_1}\,F(x_0(s), du)$$
$$\le \epsilon\left(\sup_{\epsilon,x,y}\|D'_\epsilon(x,y)f\| + \|f\|_{Y_1}\sup_{x,t} m(x)\|A_x(t)\|_{B(Y_1,Y)}\right) = \circ(1)$$
Now we have for $k \ge 1$:
$$V^\epsilon_{k+1}(s) - V^\epsilon_k(s) = V^\epsilon_k(s)\left[\Gamma_{x_k(s)}(T^{\epsilon,s}_k(s), T^{\epsilon,s}_{k+1}(s))D_\epsilon(x_k(s), x_{k+1}(s)) - I\right]$$
And because by (A0) we have $\sup_{\epsilon,x,y}\|D'_\epsilon(x,y)g\| < \infty$ for $g \in Y$, we get using 2.8:
$$D_\epsilon(x_k(s), x_{k+1}(s))g = g + \int_0^\epsilon D'_u(x_k(s), x_{k+1}(s))g\,du$$
and
$$\Gamma_{x_k(s)}(T^{\epsilon,s}_k(s), T^{\epsilon,s}_{k+1}(s))g = g + \int_{T^{\epsilon,s}_k(s)}^{T^{\epsilon,s}_{k+1}(s)}\Gamma_{x_k(s)}(T^{\epsilon,s}_k(s), u)A_{x_k(s)}(u)g\,du$$
Because $f \in \widehat{D}$, we get that $\widehat{A}(t)f \in Y_1$ and $a(x,t)f \in Y_1$ $\forall t, x$. Since $(P-I)^{-1} \in B(B^\Pi_{Y_1}(X))$ (by remark 4.14), we get that $f_{1,k+1} \in Y_1$, and therefore, using the previous representations and (A0):
$$\Gamma_{x_k(s)}(T^{\epsilon,s}_k(s), T^{\epsilon,s}_{k+1}(s))D_\epsilon(x_k(s), x_{k+1}(s))f_{1,k+1} = \Gamma_{x_k(s)}(T^{\epsilon,s}_k(s), T^{\epsilon,s}_{k+1}(s))f_{1,k+1} + \circ(1)$$
$$= f_{1,k+1} + \int_{T^{\epsilon,s}_k(s)}^{T^{\epsilon,s}_{k+1}(s)}\Gamma_{x_k(s)}(T^{\epsilon,s}_k(s), u)A_{x_k(s)}(u)f_{1,k+1}\,du + \circ(1)$$
Therefore, taking the conditional expectation, we get:
$$E[V^\epsilon_{k+1}(s)f_{1,k+1} - V^\epsilon_k(s)f_{1,k+1} \mid \mathcal{F}_k(s)] = V^\epsilon_k(s)\sum_{y \in X}\int_0^\infty\left[\int_0^{\epsilon u}\Gamma_{x_k(s)}(T^{\epsilon,s}_k(s), T^{\epsilon,s}_k(s)+v)A_{x_k(s)}(T^{\epsilon,s}_k(s)+v)f_{1,k+1}\,dv\right]Q(x_k(s), y, du) + \circ(1)$$
$$= \circ(1) \quad \left(\le \epsilon C m(x_k(s)) \text{ for some constant } C \text{ by (A0)}\right)$$
and so:
$$E[V^\epsilon_{k+1}(s)f^\epsilon_{k+1} - V^\epsilon_k(s)f^\epsilon_{k+1} \mid \mathcal{F}_k(s)] = E[V^\epsilon_{k+1}(s)f - V^\epsilon_k(s)f \mid \mathcal{F}_k(s)] + \circ(\epsilon)$$
Now, because $f \in \widehat{D}$ and by (A0) (which ensures that the integral below exists):
$$D_\epsilon(x_k(s), x_{k+1}(s))f = f + \epsilon D(x_k(s), x_{k+1}(s))f + \int_0^\epsilon(\epsilon - u)D''_u(x_k(s), x_{k+1}(s))f\,du$$
And so, using boundedness of $D_\epsilon$ (again (A0)):
$$\Gamma_{x_k(s)}(T^{\epsilon,s}_k(s), T^{\epsilon,s}_{k+1}(s))D_\epsilon(x_k(s), x_{k+1}(s))f = \Gamma_{x_k(s)}(T^{\epsilon,s}_k(s), T^{\epsilon,s}_{k+1}(s))f + \epsilon\,\Gamma_{x_k(s)}(T^{\epsilon,s}_k(s), T^{\epsilon,s}_{k+1}(s))D(x_k(s), x_{k+1}(s))f + \circ(\epsilon)$$
The first term has the representation, by (2.13):
$$\Gamma_{x_k(s)}(T^{\epsilon,s}_k(s), T^{\epsilon,s}_{k+1}(s))f = f + \int_{T^{\epsilon,s}_k(s)}^{T^{\epsilon,s}_{k+1}(s)}A_{x_k(s)}(u)f\,du + \int_{T^{\epsilon,s}_k(s)}^{T^{\epsilon,s}_{k+1}(s)}\int_{T^{\epsilon,s}_k(s)}^{u}\Gamma_{x_k(s)}(T^{\epsilon,s}_k(s), r)A_{x_k(s)}(r)A_{x_k(s)}(u)f\,dr\,du$$
Taking the conditional expectation and using the fact that by (A0): $\sup_{u,x}\|A_x(u)f\|_{Y_1} < \infty$, $\sup_{u,x}\|A_x(u)\|_{B(Y_1,Y)} < \infty$ and $m_2(x_k(s)) < \infty$, we have:
$$E\left[\int_{T^{\epsilon,s}_k(s)}^{T^{\epsilon,s}_{k+1}(s)}\int_{T^{\epsilon,s}_k(s)}^{u}\Gamma_{x_k(s)}(T^{\epsilon,s}_k(s), r)A_{x_k(s)}(r)A_{x_k(s)}(u)f\,dr\,du \,\Bigg|\, \mathcal{F}_k(s)\right] = \circ(\epsilon)$$
The second term has the representation, because $f \in \widehat{D}$ (which ensures that $D(x,y)f \in Y_1$) and using 2.8:
$$\epsilon\,\Gamma_{x_k(s)}(T^{\epsilon,s}_k(s), T^{\epsilon,s}_{k+1}(s))D(x_k(s), x_{k+1}(s))f = \epsilon D(x_k(s), x_{k+1}(s))f + \epsilon\int_{T^{\epsilon,s}_k(s)}^{T^{\epsilon,s}_{k+1}(s)}\Gamma_{x_k(s)}(T^{\epsilon,s}_k(s), u)A_{x_k(s)}(u)D(x_k(s), x_{k+1}(s))f\,du = \epsilon D(x_k(s), x_{k+1}(s))f + \circ(\epsilon)$$
And so, all together, we have:
$$E[V^\epsilon_{k+1}(s)f - V^\epsilon_k(s)f \mid \mathcal{F}_k(s)] = V^\epsilon_k(s)\,E\left[\int_{T^{\epsilon,s}_k(s)}^{T^{\epsilon,s}_{k+1}(s)}A_{x_k(s)}(u)f\,du + \epsilon D(x_k(s), x_{k+1}(s))f \,\Bigg|\, \mathcal{F}_k(s)\right] + \circ(\epsilon)$$
We have, by the strong Markov property and the stationarity of the Markov renewal process:
$$E[D(x_k(s), x_{k+1}(s))f \mid \mathcal{F}_k(s)] = PD(x_k(s), \bullet)(x_k(s))f$$
and:
$$E\left[\int_{T^{\epsilon,s}_k(s)}^{T^{\epsilon,s}_{k+1}(s)}A_{x_k(s)}(u)f\,du \,\Bigg|\, x_k(s) = x,\ T_k(s) = t_k\right] = \sum_{y \in X}\int_0^\infty\left[\int_{t^{\epsilon,s}_k}^{t^{\epsilon,s}_k + \epsilon u}A_x(v)f\,dv\right]Q(x,y,du)$$
$$= \sum_{y \in X}\int_0^\infty\left[\epsilon u\,A_x(t^{\epsilon,s}_k)f + \int_0^{\epsilon u}(\epsilon u - v)A'_x(t^{\epsilon,s}_k + v)f\,dv\right]Q(x,y,du) = \epsilon\,m(x)A_x(t^{\epsilon,s}_k)f + \circ(\epsilon)$$
as $\sup_{u,x}\|A'_x(u)f\| < \infty$ and $m_2(x) < \infty$. So finally we get:
$$E[V^\epsilon_{k+1}(s)f^\epsilon_{k+1} - V^\epsilon_k(s)f^\epsilon_{k+1} \mid \mathcal{F}_k(s)] = \epsilon M_\rho V^\epsilon_k(s)\,a(x_k(s), T^{\epsilon,s}_k(s))f + \circ(\epsilon)$$
And therefore:
$$\widetilde{M}^\epsilon_t(s)f = V^\epsilon_{N_s(t^{\epsilon,s})+1}(s)f - f - \epsilon M_\rho\sum_{k=1}^{N_s(t^{\epsilon,s})}V_\epsilon(s, T^{\epsilon,s}_k(s))\,\widehat{A}(T^{\epsilon,s}_k(s))f + \circ(1)$$
Now, let $\theta(t) := N_s(t^{\epsilon,s}) + 1$. Using (A0) (in particular the uniform boundedness of the sojourn times):
$$V^\epsilon_{\theta(t)}(s)f = V_\epsilon(s,t)\,\Gamma_{x_{\theta(t)-1}(s)}(t, T^{\epsilon,s}_{\theta(t)}(s))\,D_\epsilon(x_{\theta(t)-1}(s), x_{\theta(t)}(s))f$$
$$\Rightarrow \|V^\epsilon_{\theta(t)}(s)f - V_\epsilon(s,t)f\| \le \|\Gamma_{x_{\theta(t)-1}(s)}(t, T^{\epsilon,s}_{\theta(t)}(s))D_\epsilon(x_{\theta(t)-1}(s), x_{\theta(t)}(s))f - f\|$$
$$\le \|D_\epsilon(x_{\theta(t)-1}(s), x_{\theta(t)}(s))f - f\| + \|\Gamma_{x_{\theta(t)-1}(s)}(t, T^{\epsilon,s}_{\theta(t)}(s))f - f\|$$
$$\le \epsilon\sup_{\epsilon,x,y}\|D'_\epsilon(x,y)f\| + (T^{\epsilon,s}_{\theta(t)}(s) - t)\sup_{x,t}\|A_x(t)\|_{B(Y_1,Y)}\|f\|_{Y_1}$$
$$\le \epsilon\sup_{\epsilon,x,y}\|D'_\epsilon(x,y)f\| + (T^{\epsilon,s}_{\theta(t)}(s) - T^{\epsilon,s}_{\theta(t)-1}(s))\sup_{x,t}\|A_x(t)\|_{B(Y_1,Y)}\|f\|_{Y_1} \le \epsilon\sup_{\epsilon,x,y}\|D'_\epsilon(x,y)f\| + \epsilon\bar{\tau}\sup_{x,t}\|A_x(t)\|_{B(Y_1,Y)}\|f\|_{Y_1}$$
And therefore:
$$V^\epsilon_{N_s(t^{\epsilon,s})+1}(s)f = V_\epsilon(s,t)f + \circ(1)$$
Now let us prove the final step, i.e.:
$$\epsilon\sum_{k=1}^{N_s(t^{\epsilon,s})}V_\epsilon(s, T^{\epsilon,s}_k(s))\widehat{A}(T^{\epsilon,s}_k(s))f = \epsilon\sum_{k=1}^{n_\epsilon(s,t)}V_\epsilon(s, t^\epsilon_k(s,t))\widehat{A}(t^\epsilon_k(s,t))f + \circ(1)$$
We have:
$$\left\|\epsilon\sum_{k=1}^{N_s(t^{\epsilon,s})}V_\epsilon(s, T^{\epsilon,s}_k(s))\widehat{A}(T^{\epsilon,s}_k(s))f - \epsilon\sum_{k=1}^{n_\epsilon(s,t)}V_\epsilon(s, t^\epsilon_k(s,t))\widehat{A}(t^\epsilon_k(s,t))f\right\|$$
$$\le \underbrace{\epsilon\sum_{k=1}^{N_s(t^{\epsilon,s})}\left\|V_\epsilon(s, T^{\epsilon,s}_k(s))\widehat{A}(T^{\epsilon,s}_k(s))f - V_\epsilon(s, t^\epsilon_k(s,t))\widehat{A}(t^\epsilon_k(s,t))f\right\|}_{(i)} + \underbrace{\epsilon\left\|\sum_{k=1}^{N_s(t^{\epsilon,s})}V_\epsilon(s, t^\epsilon_k(s,t))\widehat{A}(t^\epsilon_k(s,t))f - \sum_{k=1}^{n_\epsilon(s,t)}V_\epsilon(s, t^\epsilon_k(s,t))\widehat{A}(t^\epsilon_k(s,t))f\right\|}_{(ii)}$$
For the second term $(ii)$ we have:
$$(ii) \le \epsilon C\left|N_s(t^{\epsilon,s}) - n_\epsilon(s,t)\right| = C\left|\epsilon N_s(t^{\epsilon,s}) - \frac{t-s}{M_\rho}\right| + \circ(1)$$
where $C := |X|\sup_{t,x}\left[m(x)\|A_x(t)\|_{B(Y_1,Y)}\|f\|_{Y_1} + \|PD(x,\bullet)(x)f\|\right]$. Now, let $\{\eta_n\} \subseteq \mathbb{R}^{+*}$ be any sequence such that $\eta_n \to 0$:
$$\sup_{t \in [s,T]}\left|\epsilon N_s(t^{\epsilon,s}) - \frac{t-s}{M_\rho}\right| \le \sup_{t \in [s,s+\eta_n]}\left|\epsilon N_s(t^{\epsilon,s}) - \frac{t-s}{M_\rho}\right| + \sup_{t \in [s+\eta_n,T]}\left|\epsilon N_s(t^{\epsilon,s}) - \frac{t-s}{M_\rho}\right|$$
We have:
$$\sup_{t \in [s,s+\eta_n]}\left|\epsilon N_s(t^{\epsilon,s}) - \frac{t-s}{M_\rho}\right| \le \epsilon N_s\!\left((s+\eta_n)^{\epsilon,s}\right) + \frac{\eta_n}{M_\rho}$$
and since $\epsilon N_s((s+\eta_n)^{\epsilon,s}) \xrightarrow{a.e.} \frac{\eta_n}{M_\rho}$, we have:
$$\limsup_{\epsilon \to 0}\sup_{t \in [s,s+\eta_n]}\left|\epsilon N_s(t^{\epsilon,s}) - \frac{t-s}{M_\rho}\right| \le \frac{2\eta_n}{M_\rho} \text{ on some subset } \Omega_n:\ P(\Omega_n) = 1$$
Now, in the proof of 4.11 we showed that:
$$\lim_{\epsilon \to 0}\sup_{t \in [s+\eta_n,T]}\left|\epsilon N_s(t^{\epsilon,s}) - \frac{t-s}{M_\rho}\right| = 0 \text{ on some subset } \Omega'_n:\ P(\Omega'_n) = 1$$
So finally we have, on $\Omega^* := \bigcap_n \Omega_n \cap \Omega'_n$:
$$\limsup_{\epsilon \to 0}\sup_{t \in [s,T]}\left|\epsilon N_s(t^{\epsilon,s}) - \frac{t-s}{M_\rho}\right| \le \frac{2\eta_n}{M_\rho} \quad \forall n$$
Taking the limit as $\eta_n \to 0$:
$$\lim_{\epsilon \to 0}\sup_{t \in [s,T]}\left|\epsilon N_s(t^{\epsilon,s}) - \frac{t-s}{M_\rho}\right| = 0 \text{ on } \Omega^*,$$
which shows that $(ii) = \circ(1)$. For the first term $(i)$, we begin with the same trick:
$$\sup_{t \in [s,T]}(i) \le \sup_{t \in [s,s+\eta_n]}(i) + \sup_{t \in [s+\eta_n,T]}(i)$$
and on $\Omega^*$, $\forall n$:
$$\sup_{t \in [s,s+\eta_n]}(i) \le C\epsilon N_s\!\left((s+\eta_n)^{\epsilon,s}\right) \ \Rightarrow\ \limsup_{\epsilon \to 0}\sup_{t \in [s,s+\eta_n]}(i) \le \frac{C\eta_n}{M_\rho}$$
Let $\omega \in \Omega^*$ and $\delta > 0$. Because $f \in \widehat{D}$, $t \mapsto \widehat{A}(t)f$ is continuous on $[s,T]$, and therefore uniformly continuous on it; therefore $\exists r = r(\delta)$, $r < \delta$, such that $|u_1 - u_2| < r \Rightarrow \|\widehat{A}(u_1)f - \widehat{A}(u_2)f\| < \delta$. Take $a := \{a_j\}_{j=0..n_a}$ any partition of $[s,t]$ ($a_0 = s$, $a_{n_a} = t$) such that $\|a\| < c_1 r$, for some constant $c_1$ to be chosen. We have:
$$\sum_{k=1}^{N_s(t^{\epsilon,s})}\left\|V_\epsilon(s, T^{\epsilon,s}_k(s))\widehat{A}(T^{\epsilon,s}_k(s))f - V_\epsilon(s, t^\epsilon_k(s,t))\widehat{A}(t^\epsilon_k(s,t))f\right\| = \sum_{j=0}^{n_a-1}\ \sum_{k=N_s(a_j^{\epsilon,s})+1}^{N_s(a_{j+1}^{\epsilon,s})}\left\|V_\epsilon(s, T^{\epsilon,s}_k(s))\widehat{A}(T^{\epsilon,s}_k(s))f - V_\epsilon(s, t^\epsilon_k(s,t))\widehat{A}(t^\epsilon_k(s,t))f\right\|$$
If $k \in \left[N_s(a_j^{\epsilon,s})+1,\ N_s(a_{j+1}^{\epsilon,s})\right]$, we have $T^{\epsilon,s}_k(s) \in [a_j, a_{j+1}]$. Also, by definition of $\Omega^*$, there exists $r_1(\delta) > 0$ such that for $\epsilon < r_1$ we have $\sup_{t \in [s,T]}\left|\epsilon N_s(t^{\epsilon,s}) - \frac{t-s}{M_\rho}\right| < \delta$.

Lemma 4.18. Assume (A0) and let $f \in \widehat{D}$, $s \in J$, and assume that for some sequence $\epsilon_n \to 0$, $\widetilde{M}^{\epsilon_n}_\bullet(s)f \Rightarrow M_\bullet(s,f)$ for some $D(J(s), Y)$-valued random variable $M_\bullet(s,f)$. Then $M_\bullet(s,f) = 0$ a.e.

Proof. Let $\ell \in Y^*$. Then $\ell(\widetilde{M}^{\epsilon_n}_\bullet(s)f)$ is a real-valued martingale by 4.15, and using problem 13, section 3.11 of [4] together with the continuous mapping theorem, we get that $\ell(\widetilde{M}^{\epsilon_n}_\bullet(s)f) \Rightarrow \ell(M_\bullet(s,f))$. The fact that the process $\ell(M_\bullet(s,f))$ is a local martingale with respect to its natural filtration is a direct application of a result that can be found in [10] (corollary 1.19, chapter IX). We have to show that the jumps of $\ell(\widetilde{M}^{\epsilon_n}_\bullet(s)f)$ are uniformly bounded. We have:
$$M^{\epsilon_n}_{k+1}(s)f - M^{\epsilon_n}_k(s)f = \Delta V^{\epsilon_n}_{k+1}(s)f^{\epsilon_n}_{k+1} - E[\Delta V^{\epsilon_n}_{k+1}(s)f^{\epsilon_n}_{k+1} \mid \mathcal{F}_k(s)]$$
where $\Delta V^{\epsilon_n}_{k+1}(s)f^{\epsilon_n}_{k+1} := V^{\epsilon_n}_{k+1}(s)f^{\epsilon_n}_{k+1} - V^{\epsilon_n}_k(s)f^{\epsilon_n}_k$ and where $f^{\epsilon_n}_{k+1}$ has been defined in the proof of 4.17. Then we have:
$$\left|\Delta\ell(\widetilde{M}^{\epsilon_n}_t(s)f)\right| = \left|\ell\!\left(\Delta\widetilde{M}^{\epsilon_n}_t(s)f\right)\right| \le 2\|\ell\|\sup_{k \in \mathbb{N}}\|\Delta V^{\epsilon_n}_{k+1}(s)f^{\epsilon_n}_{k+1}\| \le 4\|\ell\|(\|f\| + \|f_1\|)$$
Now, we observe that $\ell(\widetilde{M}^{\epsilon_n}_\bullet(s)f)$ is square-integrable, since using its definition we get immediately that for some $C_1, C_2 \in \mathbb{R}^+$:
$$|\ell(\widetilde{M}^{\epsilon_n}_t(s)f)| \le C_1 N_s(t^{\epsilon_n,s}) + C_2$$
and $E\!\left[N_s^2(t^{\epsilon_n,s})\right] < \infty$ by [7]. Now, according to theorem 5.1 in [26], if we prove that $\forall t$: $\left\langle\ell(\widetilde{M}^{\epsilon_n}_\bullet(s)f)\right\rangle_t \Rightarrow 0$, then we get that $\ell(M_\bullet(s,f))$ is equal to the zero process, up to indistinguishability. In particular, it yields that $\forall t \in J(s)$, $\forall \ell \in Y^*$: $\ell(M_t(s,f)) = 0$ a.e. Now, by [15] (chapter 2), we know that the dual space of every separable Banach space has a countable total subset; more precisely, there exists a countable subset $S \subseteq Y^*$ such that $\forall g \in Y$:
$$(\ell(g) = 0\ \forall \ell \in S) \Rightarrow g = 0$$
Since $P[\ell(M_t(s,f)) = 0\ \forall \ell \in S] = 1$, we get $M_t(s,f) = 0$ a.e., i.e. $M_\bullet(s,f)$ is a modification of the zero process. Since both processes have a.e. right-continuous paths, they are in fact indistinguishable (see [12]), and so $M_\bullet(s,f) = 0$ a.e. Now it remains to show that $\forall t$, the quadratic variation $\left\langle\ell(\widetilde{M}^{\epsilon_n}_\bullet(s)f)\right\rangle_t \Rightarrow 0$. Using the definition of $\widetilde{M}^{\epsilon_n}_\bullet(s)f$ in 4.15, we get that:
$$\left\langle\ell(\widetilde{M}^{\epsilon_n}_\bullet(s)f)\right\rangle_t = \sum_{k=0}^{N_s(t^{\epsilon_n,s})}E\!\left[\ell\!\left(M^{\epsilon_n}_{k+1}(s)f - M^{\epsilon_n}_k(s)f\right)^2 \,\big|\, \mathcal{F}_k(s)\right]$$
and:
$$M^{\epsilon_n}_{k+1}(s)f - M^{\epsilon_n}_k(s)f = \Delta V^{\epsilon_n}_{k+1}(s)f^{\epsilon_n}_{k+1} - E[\Delta V^{\epsilon_n}_{k+1}(s)f^{\epsilon_n}_{k+1} \mid \mathcal{F}_k(s)]$$
In the proof of 4.17 we proved that if $f \in \widehat{D}$, then $\|\Delta V^{\epsilon_n}_{k+1}(s)f^{\epsilon_n}_{k+1}\| \le C\epsilon_n$, and therefore:
$$\left\langle\ell(\widetilde{M}^{\epsilon_n}_\bullet(s)f)\right\rangle_t \le 4C^2\|\ell\|^2\,\epsilon_n^2\,N_s(t^{\epsilon_n,s}) \xrightarrow{a.e.} 0, \text{ since } \epsilon_n N_s(t^{\epsilon_n,s}) \xrightarrow{a.e.} \frac{t-s}{M_\rho}.$$

To prove our next theorem, we need the following two assumptions, usually fulfilled in practice (see section 5):

(A2) $\forall s \in J$, $\{V_\epsilon(s,\bullet)\widehat{A}(\bullet)\}$ satisfies the compact containment criterion in $\widehat{D}$

(A3) $\widehat{A}$ is the generator of a regular inhomogeneous $Y$-semigroup $\widehat{\Gamma}$

Let us make some comments about these assumptions. About (A3), we first notice, by 2.11, that if we assume $Y_1$ to be dense in $Y$, as will be the case, then $\widehat{\Gamma}$ is unique. (A3) will in general be satisfied in practice because, from the expression of $\widehat{A}$ (see 4.13), we can write $\widehat{\Gamma}$ explicitly as some average of the semigroups $\{\Gamma_x\}_{x \in X}$ (see section 5). (A2) yields that $\{V_\epsilon(s,\bullet)\widehat{A}(\bullet)\}$ is $\widehat{D}$-C-relatively compact. Indeed, in theorems 4.6 and 4.7, the two other conditions are proved exactly the same way as in the proofs of theorems 4.11 and 4.12, using the continuity of $\widehat{A}$ on $\widehat{D}$ (recall $\bigcap_{x \in X}D(A'_x) \subseteq \widehat{D}$). The proof of the compact containment criterion is linked to the nature of the Banach space $Y$, as we saw in 4.8 and 4.9.
The idea behind how it can be proved here is the following: for fixed $t_0 \in J(s)$ and $f \in \widehat{D}$, we know that $\widehat{A}(t_0)f \in Y$, and since $\{V_\epsilon\}$ satisfies the compact containment criterion in $Y$ ((A1)), $\exists K_0 = K_0(t_0, T, \Delta, f) \subseteq Y$ compact such that:
$$\liminf_{\epsilon \to 0} P[V_\epsilon(s,t)\widehat{A}(t_0)f \in K_0\ \forall t \in [s,T]] \ge 1 - \Delta$$
What we want to prove is that, in some way, we can "put the $\forall t_0$ inside the probability", namely $\exists K = K(T, \Delta, f) \subseteq Y$ compact such that:
$$\liminf_{\epsilon \to 0} P[V_\epsilon(s,t)\widehat{A}(t)f \in K\ \forall t \in [s,T]] \ge 1 - \Delta$$
In practice this is easy to prove, because the dependence in time of the generators $A_x(t)$ does not cause any problems when dealing with compactness in $Y$. For example, it is straightforward that it is satisfied in the case of 4.8, using some simple boundedness conditions. As for the case of 4.9, the reason why it is satisfied is that usually (see section 5) the set $A_\epsilon$ does not depend on $f$ (it is only linked with the Markov renewal process, i.e. the only source of randomness), and the family of functions
$$\{V_\epsilon(s,t)(\omega)\widehat{A}(t)f : t \in [s,T],\ \epsilon \in (0,1],\ \omega \in A_\epsilon\}$$
converges uniformly to $0$ at infinity, is equicontinuous and is uniformly bounded. The latter will be true because typically the following family of functions converges uniformly to $0$ at infinity, is equicontinuous and is uniformly bounded:
$$\{\widehat{A}(t)f : t \in [s,T]\}$$
That is, the time-dependence of $\widehat{A}$ does not affect the three previous features.

Theorem 4.19. Assume that $\widehat{D}$ contains a countable family that is dense in both $Y$ and $Y_1$. Under assumptions (A0), (A1), (A2), (A3), we have that $\widehat{\Gamma}$ is a $B(Y)$-contraction regular inhomogeneous $Y$-semigroup. Further, for every countable family $\{f_k\} \subseteq Y$ and $s \in J$, we have the weak convergence in the Skorohod topology of $D(J(s), Y^\infty)$:
$$(V_\epsilon(s,\bullet)f_k : k \in \mathbb{N}) \Rightarrow (\widehat{\Gamma}(s,\bullet)f_k : k \in \mathbb{N})$$
Remark: typically, $Y = C_0^{n_0}(\mathbb{R}^d)$, $Y_1 = C_0^{n_1}(\mathbb{R}^d)$, and the countable family is chosen in $C_0^{n_2}(\mathbb{R}^d)$, for $n_0 \le n_1 \le n_2$.

Proof.
The proof can be split into the following steps:

• Take $\{g_k\}_{k\in\mathbb N^*}\subseteq\widehat D$ a countable family that is dense in both $Y_0$ and $Y$ and, because marginal tightness implies countable tightness, get the tightness of $(V^{\epsilon_n}(s,\bullet)\widehat A(\bullet)g_k, V^{\epsilon_n}(s,\bullet)g_k : k\ge1)$ using (A1), (A2). By tightness, take one weakly converging sequence $\epsilon_n$; the goal is to show that the limit is unique in distribution.

• Carry the problem to a new probability space $(\Omega',\mathcal F',P')$ where convergence holds a.e., by the Skorohod representation theorem.

• On a subset $\Omega'^*\subseteq\Omega'$ such that $P'(\Omega'^*)=1$, and using density (and completeness), construct a stochastic process $V'(s,\bullet)$ with sample paths in $D(J(s),B(Y))$ such that on $\Omega'^*$ we have the convergence in the Skorohod topology (where the subscript $'$ denotes random variables on $\Omega'$):
\[
V'_{\epsilon_n}(s,\bullet,g_k,\widehat A)\to V'(s,\bullet)\widehat A(\bullet)g_k,\qquad V'_{\epsilon_n}(s,\bullet)g_k\to V'(s,\bullet)g_k\qquad\forall k,
\]
where for all $n$:
\[
(V^{\epsilon_n}(s,\bullet)\widehat A(\bullet)g_k,\ V^{\epsilon_n}(s,\bullet)g_k : k\ge1)\stackrel{d}{=}(V'_{\epsilon_n}(s,\bullet,g_k,\widehat A),\ V'_{\epsilon_n}(s,\bullet,g_k) : k\ge1).
\]

• Show the convergence of the Riemann sum of 4.17 to an integral and use 4.18 to show the convergence of the martingale of 4.17 to the zero process on $\Omega'^*$, so that $V'$ will satisfy the following integral equation on $\Omega'^*$:
\[
V'(s,\bullet)f=f+\int_s^\bullet V'(s,u)\widehat A(u)f\,du,\qquad\forall f\in Y_0.
\]

• By (A3) and uniqueness of the Cauchy problem 2.10, show that $\forall f\in Y_0$, $\forall s\in J$, $V'(s,\bullet)f$ must be the constant random variable $\widehat\Gamma(s,\bullet)f$ in $C(J(s),Y)$.

Let $\{g_k\}_{k\in\mathbb N^*}\subseteq\widehat D$ be a countable family that is dense in both $Y_0$ and $Y$. As we mentioned, (A2) is enough to prove that $\{V^\epsilon(s,\bullet)\widehat A(\bullet)\}$ is $\widehat D$-C-relatively compact. Indeed, in Theorems 4.6 and 4.7 the two other conditions are proved exactly the same way as in the proofs of Theorems 4.11 and 4.12, using the continuity of $\widehat A$ on $\widehat D$ (recall $\bigcap_{x\in X}D(A'_x)\subseteq\widehat D$).
By the latter and 4.12, the family $\{V^\epsilon(s,\bullet)\widehat A(\bullet)g_k, V^\epsilon(s,\bullet)g_k : k\ge1\}$ is C-relatively compact in $D(J(s),Y)^\infty$, and in fact in $D(J(s),Y^\infty)$ since the limit points are continuous. Take a converging sequence $\epsilon_n$:
\[
(V^{\epsilon_n}(s,\bullet)\widehat A(\bullet)g_k,\ V^{\epsilon_n}(s,\bullet)g_k : k\ge1)\Rightarrow(\alpha(s,\bullet,g_k),\ v(s,\bullet,g_k) : k\ge1).
\]
By the Skorohod representation theorem, there exist a probability space $(\Omega',\mathcal F',P')$ and random variables with the same distributions as the previous ones (denoted by the subscript $'$), such that:
\[
(V'_{\epsilon_n}(s,\bullet,g_k,\widehat A),\ V'_{\epsilon_n}(s,\bullet,g_k) : k\ge1)\xrightarrow{a.e.}(\alpha'(s,\bullet,g_k),\ v'(s,\bullet,g_k) : k\ge1).
\]
Let $g\in Y$. By density, there exists a sequence, still denoted $(g_k)_{k\in\mathbb N^*}$, of elements of the countable family such that $g_k\to g$. Let:
\[
B(r):=\{(x,y)\in D(J(s),Y)\times D(J(s),Y):\ d(x,y)\le r\}=d^{-1}([0,r]).
\]
As the preimage of a closed set under a continuous function, $B(r)$ is closed in $D(J(s),Y)\times D(J(s),Y)$. Note that because $D(J(s),Y)$ is separable, the Borel sigma-algebras $Bor(D(J(s),Y)\times D(J(s),Y))$ and $Bor(D(J(s),Y))\otimes Bor(D(J(s),Y))$ agree, so we shall not worry about that. Since for every $k_1$, $k_2$, $n$ the pairs $\{V'_{\epsilon_n}(s,\bullet,g_{k_1}), V'_{\epsilon_n}(s,\bullet,g_{k_2})\}$ and $\{V^{\epsilon_n}(s,\bullet)g_{k_1}, V^{\epsilon_n}(s,\bullet)g_{k_2}\}$ have the same distributions, we get that:
\[
P'\big[d(V'_{\epsilon_n}(s,\bullet,g_{k_1}),V'_{\epsilon_n}(s,\bullet,g_{k_2}))\le\|g_{k_1}-g_{k_2}\|\big]
= P'\big[(V'_{\epsilon_n}(s,\bullet,g_{k_1}),V'_{\epsilon_n}(s,\bullet,g_{k_2}))\in B(\|g_{k_1}-g_{k_2}\|)\big]
\]
\[
= P\big[(V^{\epsilon_n}(s,\bullet)g_{k_1},V^{\epsilon_n}(s,\bullet)g_{k_2})\in B(\|g_{k_1}-g_{k_2}\|)\big]
= P\big[d(V^{\epsilon_n}(s,\bullet)g_{k_1},V^{\epsilon_n}(s,\bullet)g_{k_2})\le\|g_{k_1}-g_{k_2}\|\big]=1.
\]
Let the subset $\Omega'^*\subseteq\Omega'$:
\[
\Omega'^*:=\bigcap_{k_1,k_2,n}\big\{d(V'_{\epsilon_n}(s,\bullet,g_{k_1}),V'_{\epsilon_n}(s,\bullet,g_{k_2}))\le\|g_{k_1}-g_{k_2}\|\big\}
\cap\Big\{\lim_{n\to\infty}(V'_{\epsilon_n}(s,\bullet,g_k,\widehat A),V'_{\epsilon_n}(s,\bullet,g_k):k\ge1)=(\alpha'(s,\bullet,g_k),v'(s,\bullet,g_k):k\ge1)\Big\}
\]
so that $P'(\Omega'^*)=1$.
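The Skorohod representation step can be pictured in the simplest one-dimensional setting: if $X_n\Rightarrow X$, then the quantile coupling $X'_n:=F_n^{-1}(U)$, driven by a single uniform $U$, has the right marginal laws and converges pointwise in $U$. A sketch with exponential laws (the rates are illustrative assumptions, not from the paper):

```python
import math

def quantile_exp(u, rate):
    """Inverse CDF (quantile function) of the exponential(rate) law."""
    return -math.log(1.0 - u) / rate

# X_n ~ Exp(1 + 1/n) converges in distribution to X ~ Exp(1); coupling
# them all through one uniform value u makes the convergence pointwise.
n = 1000
gaps = [abs(quantile_exp(u, 1.0 + 1.0 / n) - quantile_exp(u, 1.0))
        for u in (0.1, 0.5, 0.9, 0.99)]
print(gaps)
```

The infinite-dimensional statement used above is deeper, but the role it plays is the same: trade convergence in distribution for almost-everywhere convergence on a new space.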
On $\Omega'^*$, the sequence $(V'_{\epsilon_n}(s,\bullet,g_k))_{k\in\mathbb N}$ is Cauchy in $D(J(s),Y)$, which is complete; therefore it converges to some $V'_{\epsilon_n}(s,\bullet,g)$ as $k\to\infty$. To see that $V'_{\epsilon_n}(s,\bullet,g)$ has the same distribution as $V^{\epsilon_n}(s,\bullet)g$, we observe that $V^{\epsilon_n}(s,\bullet)g_k\xrightarrow{a.e.}V^{\epsilon_n}(s,\bullet)g$ (by the contraction property of $V^{\epsilon_n}$) and we just invoke the uniqueness of the limit in distribution, together with the fact that $\forall k$, $V'_{\epsilon_n}(s,\bullet,g_k)$ and $V^{\epsilon_n}(s,\bullet)g_k$ have the same distributions. Note that all the $\{V'_{\epsilon_n}(s,\bullet,g):g\in Y,\ n\in\mathbb N\}$ are defined on the common subset $\Omega'^*$.

We have on $\Omega'^*$ that $\forall n,k_1,k_2$: $d(V'_{\epsilon_n}(s,\bullet,g_{k_1}),V'_{\epsilon_n}(s,\bullet,g_{k_2}))\le\|g_{k_1}-g_{k_2}\|$. Since $d$ is continuous, we may take the limit as $n\to\infty$ and obtain that $d(v'(s,\bullet,g_{k_1}),v'(s,\bullet,g_{k_2}))\le\|g_{k_1}-g_{k_2}\|$, which by completeness shows the convergence of the sequence $(v'(s,\bullet,g_k))_{k\in\mathbb N}$ to some $v'(s,\bullet,g)$ (which belongs to $C(J(s),Y)$ as a limit in the Skorohod metric of elements of $C(J(s),Y)$).
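The extension from the countable family $\{g_k\}$ to all of $Y$ rests only on the fact that a linear contraction maps Cauchy sequences to Cauchy sequences with the same modulus. A finite-dimensional stand-in for $B(Y)$ (the matrix and the sequence are illustrative assumptions):

```python
# A linear contraction (sup operator norm <= 1) preserves the Cauchy
# modulus of a converging sequence g_k -> g.
T = [[0.6, 0.3], [0.1, 0.5]]   # absolute row sums <= 1: sup-norm contraction

def apply_op(mat, v):
    return [sum(mat[i][j] * v[j] for j in range(len(v))) for i in range(len(mat))]

def sup_norm(v):
    return max(abs(x) for x in v)

g = [1.0, -2.0]

def g_k(k):                    # g_k -> g at rate 1/k
    return [g[0] + 1.0 / k, g[1] - 1.0 / k]

# largest violation of ||T g_k - T g|| <= ||g_k - g|| over many k
worst = max(
    sup_norm([a - b for a, b in zip(apply_op(T, g_k(k)), apply_op(T, g))])
    - sup_norm([a - b for a, b in zip(g_k(k), g)])
    for k in range(1, 100)
)
print(worst)
```

The proof uses exactly this inequality, with $V'_{\epsilon_n}$ in place of `T`, to define $V'_{\epsilon_n}(s,\bullet,g)$ for every $g\in Y$ at once.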
Now, to see that the latter is the pointwise limit of $V'_{\epsilon_n}(s,\bullet,g)$ on $\Omega'^*$, we observe that:
\[
d(v'(s,\bullet,g),V'_{\epsilon_n}(s,\bullet,g))\le d(v'(s,\bullet,g),v'(s,\bullet,g_k))+d(v'(s,\bullet,g_k),V'_{\epsilon_n}(s,\bullet,g_k))+d(V'_{\epsilon_n}(s,\bullet,g_k),V'_{\epsilon_n}(s,\bullet,g)).
\]
Now, by continuity of $d$ and $\|\cdot\|$, and taking the limit as $p\to\infty$, we have on $\Omega'^*$:
\[
d(V'_{\epsilon_n}(s,\bullet,g_k),V'_{\epsilon_n}(s,\bullet,g_p))\le\|g_k-g_p\|\ \Rightarrow\ d(V'_{\epsilon_n}(s,\bullet,g_k),V'_{\epsilon_n}(s,\bullet,g))\le\|g_k-g\|.
\]
Therefore, first choose $k$ such that the first and third terms are small, then choose $n$ such that the second term is small.

Now that $v'(s,\bullet,g)$ is well defined on $\Omega'^*$ for every $g\in Y$, we want to show that on some subset $\Omega'_1\subseteq\Omega'^*$ such that $P'(\Omega'_1)=1$, we have $v'(s,\bullet,h+\lambda g)=v'(s,\bullet,h)+\lambda v'(s,\bullet,g)$ and $\|v'(s,t,g)\|\le\|g\|$, $\forall h,g\in Y$, $\forall\lambda\in\mathbb R$, $\forall t\in J(s)$; namely, that for every $\omega\in\Omega'_1$ and $t\in J(s)$, $v'(s,t,\bullet)(\omega)$ is a $B(Y)$-contraction. After having proved the latter, we will adopt the notation $V'(s,t)(\omega):=v'(s,t,\bullet)(\omega)$ to emphasize this fact.

Let $h,g\in Y$, $\lambda\in\mathbb R$. There exist sequences $(h_k)_{k\in\mathbb N^*},(g_k)_{k\in\mathbb N^*}\subseteq\{g_k\}_{k\in\mathbb N^*}$ such that $g_k\to g$, $h_k\to h$, and $(\lambda_k)_{k\in\mathbb N^*}\subseteq\mathbb Q$ such that $\lambda_k\to\lambda$. First observe the following equality in distribution:
\[
(V^{\epsilon_n}(s,\bullet)(h_k+\lambda_k g_k),\,V^{\epsilon_n}(s,\bullet)h_k,\,\lambda_k V^{\epsilon_n}(s,\bullet)g_k)\stackrel{d}{=}(V'_{\epsilon_n}(s,\bullet,h_k+\lambda_k g_k),\,V'_{\epsilon_n}(s,\bullet,h_k),\,\lambda_k V'_{\epsilon_n}(s,\bullet,g_k)).
\]
This is because $\forall k$ there exists a sequence $(\beta^k_p)_{p\in\mathbb N^*}\subseteq\{g_p\}_{p\in\mathbb N^*}$ such that $\beta^k_p\to h_k+\lambda_k g_k$, and by the contraction property of $V^{\epsilon_n}$ and the construction of $V'_{\epsilon_n}$:
\[
(V^{\epsilon_n}(s,\bullet)\beta^k_p,\,V^{\epsilon_n}(s,\bullet)h_k,\,\lambda_k V^{\epsilon_n}(s,\bullet)g_k)\xrightarrow[p\to\infty]{a.e.}(V^{\epsilon_n}(s,\bullet)(h_k+\lambda_k g_k),\,V^{\epsilon_n}(s,\bullet)h_k,\,\lambda_k V^{\epsilon_n}(s,\bullet)g_k),
\]
\[
(V'_{\epsilon_n}(s,\bullet,\beta^k_p),\,V'_{\epsilon_n}(s,\bullet,h_k),\,\lambda_k V'_{\epsilon_n}(s,\bullet,g_k))\xrightarrow[p\to\infty]{a.e.}(V'_{\epsilon_n}(s,\bullet,h_k+\lambda_k g_k),\,V'_{\epsilon_n}(s,\bullet,h_k),\,\lambda_k V'_{\epsilon_n}(s,\bullet,g_k)),
\]
and that:
\[
(V^{\epsilon_n}(s,\bullet)\beta^k_p,\,V^{\epsilon_n}(s,\bullet)h_k,\,\lambda_k V^{\epsilon_n}(s,\bullet)g_k)\stackrel{d}{=}(V'_{\epsilon_n}(s,\bullet,\beta^k_p),\,V'_{\epsilon_n}(s,\bullet,h_k),\,\lambda_k V'_{\epsilon_n}(s,\bullet,g_k)),
\]
and we conclude by uniqueness of the limit in distribution. In particular we get that $V'_{\epsilon_n}(s,\bullet,h_k+\lambda_k g_k)-V'_{\epsilon_n}(s,\bullet,h_k)-\lambda_k V'_{\epsilon_n}(s,\bullet,g_k)=0$ a.e. Let the subset $\Omega'^{**}\subseteq\Omega'^*$:
\[
\Omega'^{**}:=\Omega'^*\cap\bigcap_{n,k_1,k_2\in\mathbb N,\,r\in\mathbb Q}\{V'_{\epsilon_n}(s,\bullet,g_{k_1}+rg_{k_2})-V'_{\epsilon_n}(s,\bullet,g_{k_1})-rV'_{\epsilon_n}(s,\bullet,g_{k_2})=0\}
\]
so that $P'(\Omega'^{**})=1$. Because the limit points $v'$ are continuous, we have on $\Omega'^{**}$ that:
\[
V'_{\epsilon_n}(s,\bullet,h_k+\lambda_k g_k)-V'_{\epsilon_n}(s,\bullet,h_k)-\lambda_k V'_{\epsilon_n}(s,\bullet,g_k)\xrightarrow{n\to\infty}v'(s,\bullet,h_k+\lambda_k g_k)-v'(s,\bullet,h_k)-\lambda_k v'(s,\bullet,g_k),
\]
and therefore that $v'(s,\bullet,h_k+\lambda_k g_k)-v'(s,\bullet,h_k)-\lambda_k v'(s,\bullet,g_k)=0$. Taking the limit as $k\to\infty$ (and using again the fact that the $v'$ are continuous) yields $v'(s,\bullet,h+\lambda g)=v'(s,\bullet,h)+\lambda v'(s,\bullet,g)$.

Now, to show that $\|v'(s,t,g)\|\le\|g\|$ $\forall g\in Y$, $\forall t\in J(s)$ on some subset $\Omega'_1\subseteq\Omega'^{**}$ such that $P'(\Omega'_1)=1$, we observe that because $V'_{\epsilon_n}(s,\bullet,g_k)\to v'(s,\bullet,g_k)$ and $v'(s,\bullet,g_k)\to v'(s,\bullet,g)$: by [4] (chapter 3, proposition 5.2) and using continuity of $v'$ we get that $\forall t$: $v'(s,t,g_k)\to v'(s,t,g)$ and $V'_{\epsilon_n}(s,t,g_k)\to v'(s,t,g_k)$. Since $V'_{\epsilon_n}(s,\bullet,g_p)$ and $V^{\epsilon_n}(s,\bullet)g_p$ have the same distribution, denoting
\[
S_p:=\{x\in D(J(s),Y):\ \|x(t)\|\le\|g_p\|\ \forall t\in J(s)\},
\]
we get that $P'[V'_{\epsilon_n}(s,\bullet,g_p)\in S_p]=1$, say on $\Omega'_{n,p}$. Let $\Omega'_1:=\bigcap_{n,p}\Omega'_{n,p}\cap\Omega'^{**}$. Let $t\in J(s)$. We have on $\Omega'_1$ that $\|V'_{\epsilon_n}(s,t,g_k)\|\le\|g_k\|$.
Taking the limit as $n\to\infty$, we get $\|v'(s,t,g_k)\|\le\|g_k\|$, and then as $k\to\infty$ we get $\|v'(s,t,g)\|\le\|g\|$.

Now let us take some $g_p$ and show that on some $\Omega'_2\subseteq\Omega'_1$ such that $P'(\Omega'_2)=1$:
\[
\epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,\bullet)}V'_{\epsilon_n}(s,t^{\epsilon_n}_k(s,\bullet),g_p,\widehat A)\to\frac{1}{M_\rho}\int_s^\bullet V'(s,u)\widehat A(u)g_p\,du.
\]
First, let us prove that a.e. (on the set $\Omega'_2$ to be constructed) we have $\forall p$: $V'(s,\bullet)\widehat A(\bullet)g_p=\alpha'(s,\bullet,g_p)$. By what we did before we know that $\forall h\in Y$:
\[
(V'_{\epsilon_n}(s,\bullet,g_p,\widehat A),\,V'_{\epsilon_n}(s,\bullet,h))\stackrel{d}{=}(V^{\epsilon_n}(s,\bullet)\widehat A(\bullet)g_p,\,V^{\epsilon_n}(s,\bullet)h).
\]
In particular, taking $t\in J(s)$ and $h=\widehat A(t)g_p$:
\[
(V'_{\epsilon_n}(s,t,g_p,\widehat A),\,V'_{\epsilon_n}(s,t,\widehat A(t)g_p))\stackrel{d}{=}(V^{\epsilon_n}(s,t)\widehat A(t)g_p,\,V^{\epsilon_n}(s,t)\widehat A(t)g_p),
\]
and therefore $P'[V'_{\epsilon_n}(s,t,g_p,\widehat A)-V'_{\epsilon_n}(s,t,\widehat A(t)g_p)=0]=1$, say on $\Omega'_{n,t,p}$. On the other hand, by continuity of the limits, we have on $\Omega'_t:=\bigcap_{n,p}\Omega'_{n,t,p}\cap\Omega'_1$:
\[
0=V'_{\epsilon_n}(s,t,g_p,\widehat A)-V'_{\epsilon_n}(s,t,\widehat A(t)g_p)\to\alpha'(s,t,g_p)-v'(s,t,\widehat A(t)g_p).
\]
Let $\Omega'_2:=\bigcap_{t\in\mathbb Q}\Omega'_t$. Now let $t\in J(s)$ and a sequence of rationals $t_k\to t$. On $\Omega'_2$ we have that:
\[
0=\alpha'(s,t_k,g_p)-v'(s,t_k,\widehat A(t_k)g_p)=\alpha'(s,t_k,g_p)-V'(s,t_k)\widehat A(t_k)g_p.
\]
We have $\alpha'(s,t_k,g_p)\to\alpha'(s,t,g_p)$ by continuity of $\alpha'$. And:
\[
\|V'(s,t_k)\widehat A(t_k)g_p-V'(s,t)\widehat A(t)g_p\|\le\|V'(s,t_k)(\widehat A(t_k)g_p-\widehat A(t)g_p)\|+\|V'(s,t_k)\widehat A(t)g_p-V'(s,t)\widehat A(t)g_p\|
\]
\[
\le\|\widehat A(t_k)g_p-\widehat A(t)g_p\|+\|V'(s,t_k)\widehat A(t)g_p-V'(s,t)\widehat A(t)g_p\|,
\]
where the last inequality used the linearity and contraction property of $V'$.
The first term goes to $0$ by continuity of $\widehat A$ on $\widehat D$, and the second by continuity of $V'(s,\bullet)\widehat A(t)g_p$. Now, back to the convergence of the Riemann sum, we have on $\Omega'_2$:
\[
d\Big(\epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,\bullet)}V'_{\epsilon_n}(s,t^{\epsilon_n}_k(s,\bullet),g_p,\widehat A),\ \frac{1}{M_\rho}\int_s^\bullet\alpha'(s,u,g_p)\,du\Big)
\]
\[
\le\underbrace{d\Big(\epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,\bullet)}V'_{\epsilon_n}(s,t^{\epsilon_n}_k(s,\bullet),g_p,\widehat A),\ \epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,\bullet)}\alpha'(s,t^{\epsilon_n}_k(s,\bullet),g_p)\Big)}_{(i)}
+\underbrace{d\Big(\epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,\bullet)}\alpha'(s,t^{\epsilon_n}_k(s,\bullet),g_p),\ \frac{1}{M_\rho}\int_s^\bullet\alpha'(s,u,g_p)\,du\Big)}_{(ii)}.
\]
Let $T>0$. Because $V'_{\epsilon_n}(s,\bullet,g_p,\widehat A)\to\alpha'(s,\bullet,g_p)$ and $\alpha'$ is continuous, the convergence in the Skorohod topology is equivalent to convergence in the uniform topology. In particular:
\[
\sup_{t\in[s,T]}\|V'_{\epsilon_n}(s,t,g_p,\widehat A)-\alpha'(s,t,g_p)\|\to0.
\]
Therefore we get on $\Omega'_2$:
\[
\sup_{t\in[s,T]}\Big\|\epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,t)}V'_{\epsilon_n}(s,t^{\epsilon_n}_k(s,t),g_p,\widehat A)-\epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,t)}\alpha'(s,t^{\epsilon_n}_k(s,t),g_p)\Big\|
\le\epsilon_n n_{\epsilon_n}(s,T)\sup_{t\in[s,T]}\|V'_{\epsilon_n}(s,t,g_p,\widehat A)-\alpha'(s,t,g_p)\|\to0,
\]
since $\epsilon_n n_{\epsilon_n}(s,T)\to\frac{T-s}{M_\rho}$.
By remark 4.16 we get $(i)\to0$. As for $(ii)$, let $\eta_m\downarrow0$:
\[
\sup_{t\in[s,T]}\Big\|\epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,t)}\alpha'(s,t^{\epsilon_n}_k(s,t),g_p)-\frac{1}{M_\rho}\int_s^t\alpha'(s,u,g_p)\,du\Big\|
\]
\[
\le\sup_{t\in[s,s+\eta_m]}\Big\|\epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,t)}\alpha'(s,t^{\epsilon_n}_k(s,t),g_p)-\frac{1}{M_\rho}\int_s^t\alpha'(s,u,g_p)\,du\Big\|
+\sup_{t\in[s+\eta_m,T]}\Big\|\epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,t)}\alpha'(s,t^{\epsilon_n}_k(s,t),g_p)-\frac{1}{M_\rho}\int_s^t\alpha'(s,u,g_p)\,du\Big\|.
\]
Because on $\Omega'_2$, $\alpha'(s,\bullet,g_p)=V'(s,\bullet)\widehat A(\bullet)g_p$, and by the contraction property of $V'$, we get that:
\[
\sup_{t\in[s,s+\eta_m]}\Big\|\epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,t)}\alpha'(s,t^{\epsilon_n}_k(s,t),g_p)-\frac{1}{M_\rho}\int_s^t\alpha'(s,u,g_p)\,du\Big\|\le C_1\,\epsilon_n n_{\epsilon_n}(s,s+\eta_m)+C_2\,\eta_m
\]
\[
\Rightarrow\ \limsup_{n\to\infty}\sup_{t\in[s,s+\eta_m]}\Big\|\epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,t)}\alpha'(s,t^{\epsilon_n}_k(s,t),g_p)-\frac{1}{M_\rho}\int_s^t\alpha'(s,u,g_p)\,du\Big\|\le C_3\,\eta_m.
\]
For the other term, we proceed exactly as in the proof of 4.17: because $t-s\ge\eta_m$, we can refine as we wish the partition $\{t^{\epsilon_n}_k(s,t)\}$ uniformly in $t\in[s+\eta_m,T]$, namely: $\forall\delta>0$, $\exists r(\delta,m)>0$ such that $\epsilon_n<r$ implies $t^{\epsilon_n}_{k+1}(s,t)-t^{\epsilon_n}_k(s,t)<\delta$, $\forall t\in[s+\eta_m,T]$, $\forall k\in\{1,\dots,n_{\epsilon_n}(s,t)\}$, and get the convergence of the Riemann sum to the Riemann integral uniformly on $[s+\eta_m,T]$, i.e.
\[
\sup_{t\in[s+\eta_m,T]}\Big\|\epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,t)}\alpha'(s,t^{\epsilon_n}_k(s,t),g_p)-\frac{1}{M_\rho}\int_s^t\alpha'(s,u,g_p)\,du\Big\|\xrightarrow{n\to\infty}0.
\]
Hence:
\[
\limsup_{n\to\infty}\sup_{t\in[s,T]}\Big\|\epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,t)}\alpha'(s,t^{\epsilon_n}_k(s,t),g_p)-\frac{1}{M_\rho}\int_s^t\alpha'(s,u,g_p)\,du\Big\|\le C_3\,\eta_m\qquad\forall m,
\]
and therefore, taking the limit as $m\to\infty$:
\[
\lim_{n\to\infty}\sup_{t\in[s,T]}\Big\|\epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,t)}\alpha'(s,t^{\epsilon_n}_k(s,t),g_p)-\frac{1}{M_\rho}\int_s^t\alpha'(s,u,g_p)\,du\Big\|=0,
\]
which is what we want. Finally we get on $\Omega'_2$, by continuity of the limit points:
\[
V'_{\epsilon_n}(s,\bullet)g_p-g_p-M_\rho\,\epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,\bullet)}V'_{\epsilon_n}(s,t^{\epsilon_n}_k(s,\bullet),g_p,\widehat A)\to V'(s,\bullet)g_p-g_p-\int_s^\bullet V'(s,u)\widehat A(u)g_p\,du.
\]
Since we have:
\[
(V'_{\epsilon_n}(s,\bullet)g_p,\,V'_{\epsilon_n}(s,\bullet,g_p,\widehat A))\stackrel{d}{=}(V^{\epsilon_n}(s,\bullet)g_p,\,V^{\epsilon_n}(s,\bullet)\widehat A(\bullet)g_p),
\]
we get, for some $M_\bullet(s,g_p)$:
\[
V^{\epsilon_n}(s,\bullet)g_p-g_p-M_\rho\,\epsilon_n\sum_{k=1}^{n_{\epsilon_n}(s,\bullet)}V^{\epsilon_n}(s,t^{\epsilon_n}_k(s,\bullet))\widehat A(t^{\epsilon_n}_k(s,\bullet))g_p\Rightarrow M_\bullet(s,g_p),
\]
and therefore by 4.17 that $\widetilde M^{\epsilon_n}_\bullet(s)g_p\Rightarrow M_\bullet(s,g_p)$, which yields by 4.18 that $M_\bullet(s,g_p)=0$ a.e., and therefore:
\[
V'(s,\bullet)g_p-g_p-\int_s^\bullet V'(s,u)\widehat A(u)g_p\,du=0\quad\text{a.e., say on some }\Omega'_3:=\bigcap_p\Omega'_3(p)\subseteq\Omega'_2.
\]
Let $f\in Y_0$ and $t\in J(s)$.
Since $\{g_p\}$ is dense in $Y_0$, there exists a sequence $(f_p)\subseteq\{g_p\}$ such that $f_p\xrightarrow{Y_0}f$. We have on $\Omega'_3$:
\[
\Big\|V'(s,t)f-f-\int_s^t V'(s,u)\widehat A(u)f\,du\Big\|
\le\|V'(s,t)f-V'(s,t)f_p\|+\Big\|V'(s,t)f_p-f-\int_s^t V'(s,u)\widehat A(u)f\,du\Big\|
\]
and, using the integral equation satisfied by $f_p$ on $\Omega'_3$,
\[
\le 2\|f_p-f\|+\int_s^t\|V'(s,u)\widehat A(u)f_p-V'(s,u)\widehat A(u)f\|\,du
\le 2\|f_p-f\|+\|f_p-f\|_{Y_0}\int_s^t\|\widehat A(u)\|_{B(Y_0,Y)}\,du\to0.
\]
Therefore on $\Omega'_3$ we have, $\forall f\in Y_0$:
\[
V'(s,\bullet)f=f+\int_s^\bullet V'(s,u)\widehat A(u)f\,du.
\]
By continuity of $\widehat A$ on $Y_0$ ((A0)), and boundedness and continuity of $V'$: $t\to V'(s,t)\widehat A(t)f\in C(J(s),Y)$ on $\Omega'_3$, and therefore we have on $\Omega'_3$:
\[
\frac{\partial}{\partial t}V'(s,t)f=V'(s,t)\widehat A(t)f\quad\forall t\in J(s),\ \forall f\in Y_0,\qquad V'(s,s)=I.
\]
By (A3) and 2.10, $V'(s,\bullet)(\omega')f=\widehat\Gamma(s,\bullet)f$, $\forall\omega'\in\Omega'_3$, $\forall f\in Y_0$. By the contraction property and density of $Y_0$ in $Y$, the previous equality is true in $Y$.

5. Applications

In this section we give an application to inhomogeneous Lévy random evolutions. We first introduce inhomogeneous Lévy semigroups. Let $J=[0,T_\infty]$, $Y:=C_0(\mathbb R^d)$, $Y_0:=C_0^2(\mathbb R^d)$. Let $(\Omega,\mathcal F,P)$ be a probability space (possibly different from the one on which the semi-Markov process is defined) and $(L_t)_{t\in J}$ a process with independent increments and absolutely continuous characteristics (PIIAC), called an inhomogeneous Lévy process in [13]. It is a specific case of additive processes: those that are semimartingales, which is not the case of all additive processes (see [13]).
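For a purely Gaussian additive process with deterministic time-dependent drift $b_t$ and variance rate $c_t$, the terminal mean and variance are $\int_0^1 b_s\,ds$ and $\int_0^1 c_s\,ds$, and the process is easy to simulate by an Euler scheme. A hedged numerical sketch (the particular $b$, $c$ below are illustrative assumptions; no jump part is included):

```python
import random

def simulate_additive(n_paths, n_steps, rng):
    """Euler scheme for dL_t = b(t) dt + sqrt(c(t)) dW_t with the
    illustrative local characteristics b(t) = t and c(t) = 0.1 + 0.2 t."""
    dt = 1.0 / n_steps
    paths = []
    for _ in range(n_paths):
        l = 0.0
        for i in range(n_steps):
            t = (i + 0.5) * dt   # midpoint rule: exact for linear b and c
            l += t * dt + (0.1 + 0.2 * t) ** 0.5 * rng.gauss(0.0, dt ** 0.5)
        paths.append(l)
    return paths

rng = random.Random(7)
paths = simulate_additive(10000, 40, rng)
mean = sum(paths) / len(paths)
var = sum((x - mean) ** 2 for x in paths) / len(paths)
# the characteristics at t = 1 are B_1 = 0.5 and C_1 = 0.2
```

The sample mean and variance should match the integrated characteristics up to Monte Carlo error, mirroring the relations between local and spot characteristics below.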
For $A\in Bor(\mathbb R^d)$ and $z\in\mathbb R^d$ we let $p_{s,t}(z,A)=P(L_t-L_s\in A-z)$, $\mu_{s,t}$ the law of $L_t-L_s$, and define the inhomogeneous Lévy semigroup:
\[
\Gamma(s,t)f(z):=E[f(L_t-L_s+z)]=\int_{\mathbb R^d}p_{s,t}(z,dy)f(y)=\int_{\mathbb R^d}\mu_{s,t}(dy)f(z+y).
\]
$\Gamma$ is a regular $B(Y)$-contraction semigroup, and $\forall n\in\mathbb N$ we have $\Gamma(s,t)\in B(C_0^n(\mathbb R^d))$ and $\|\Gamma(s,t)\|_{B(C_0^n(\mathbb R^d))}\le1$. The additivity of $(L_t)_{t\in J}$ (see [3], 14.1) ensures that there exist unique $(B_t)_{t\in J}\subseteq\mathbb R^d$, $(C_t)_{t\in J}$ a family of $d\times d$ symmetric nonnegative-definite matrices, and $(\bar\nu_t)_{t\in J}$ a family of measures on $\mathbb R^d$ such that $E[e^{i\langle u,L_t\rangle}]=e^{\psi(u,t)}$, with:
\[
\psi(u,t):=i\langle u,B_t\rangle-\tfrac12\langle u,C_tu\rangle+\int_{\mathbb R^d}\big(e^{i\langle u,y\rangle}-1-i\langle u,y\rangle\mathbb 1_{|y|\le1}\big)\bar\nu_t(dy).
\]
$(B_t,C_t,\bar\nu_t)_{t\in J}$ is called the spot characteristics of $L$. They satisfy the following regularity conditions:

• $\forall t\in J$, $\bar\nu_t\{0\}=0$ and $\int_{\mathbb R^d}(|y|^2\wedge1)\,\bar\nu_t(dy)<\infty$;

• $(B_0,C_0,\bar\nu_0)=(0,0,0)$ and $\forall(s,t)\in\Delta_J$: $C_t-C_s$ is symmetric nonnegative-definite and $\bar\nu_s(A)\le\bar\nu_t(A)$ $\forall A\in Bor(\mathbb R^d)$;

• as $s\to t$ ($t\in J$): $B_s\to B_t$, $\langle u,C_su\rangle\to\langle u,C_tu\rangle$ $\forall u\in\mathbb R^d$, and $\bar\nu_s(A)\to\bar\nu_t(A)$ $\forall A\in Bor(\mathbb R^d)$ such that $A\subseteq\{z\in\mathbb R^d:|z|>\epsilon\}$ for some $\epsilon>0$.

If $\forall t\in J$, $\int_{\mathbb R^d}|y|\mathbb 1_{|y|\le1}\,\bar\nu_t(dy)<\infty$, we can replace $B_t$ by $\bar B_t$ in the Lévy--Khintchine representation of $L$, where $\bar B_t:=B_t-\int_{\mathbb R^d}y\mathbb 1_{|y|\le1}\,\bar\nu_t(dy)$. We denote by $(\bar B_t,C_t,\bar\nu_t)_{t\in J}$ this other version of the spot characteristics of $L$. In the case of PIIAC, there exist $(b_t)_{t\in J}\subseteq\mathbb R^d$, $(c_t)_{t\in J}$ a family of $d\times d$ symmetric nonnegative-definite matrices, and $(\nu_t)_{t\in J}$ a family of measures on $\mathbb R^d$ satisfying:
\[
\int_0^{T_\infty}\Big(|b_s|+\|c_s\|+\int_{\mathbb R^d}(|y|^2\wedge1)\,\nu_s(dy)\Big)ds<\infty,
\]
\[
B_t=\int_0^t b_s\,ds,\qquad C_t=\int_0^t c_s\,ds,\qquad\bar\nu_t(A)=\int_0^t\nu_s(A)\,ds\quad\forall A\in Bor(\mathbb R^d),
\]
where $\|c_t\|$ denotes any norm on the space of $d\times d$ matrices. $(b_t,c_t,\nu_t)_{t\in J}$ is called the local characteristics of $L$. By [13], we have the following representation for $L$:
\[
L_t=\int_0^t b_s\,ds+\int_0^t\sqrt{c_s}\,dW_s+\int_0^t\int_{\mathbb R^d}y\mathbb 1_{|y|\le1}\,(N-\bar\nu)(ds\,dy)+\sum_{s\le t}\Delta L_s\mathbb 1_{|\Delta L_s|>1},
\]
where for $A\in Bor(\mathbb R^d)$: $\bar\nu([0,t]\times A):=\bar\nu_t(A)$ and $N$ is the Poisson measure of $L$ ($N-\bar\nu$ is then called the compensated Poisson measure of $L$). $(W_t)_{t\in J}$ is a $d$-dimensional Brownian motion on $\mathbb R^d$, independent from the jump process $\int_0^t\int_{\mathbb R^d}y\mathbb 1_{|y|\le1}\,(N-\bar\nu)(ds\,dy)+\sum_{s\le t}\Delta L_s\mathbb 1_{|\Delta L_s|>1}$. Here $\sqrt{c_t}$ stands for the unique symmetric nonnegative-definite square root of $c_t$. Sometimes it is convenient to write a Cholesky decomposition $c_t=h_th_t^T$ and replace $\sqrt{c_t}$ by $h_t$ in the previous representation. It can be shown (see [22]) that the infinitesimal generator of the semigroup $\Gamma$ is given by:
\[
A_\Gamma(t)f(z)=\sum_{j=1}^d b_t(j)\frac{\partial f}{\partial x_j}(z)+\frac12\sum_{j,k=1}^d c_t(j,k)\frac{\partial^2f}{\partial x_j\partial x_k}(z)+\int_{\mathbb R^d}\Big(f(z+y)-f(z)-\sum_{j=1}^d\frac{\partial f}{\partial x_j}(z)\,y(j)\mathbb 1_{|y|\le1}\Big)\nu_t(dy),
\]
and that $Y_0=C_0^2(\mathbb R^d)\subseteq D(A_\Gamma(t))=D(A_\Gamma)$. And if $\bar b_t:=b_t-\int_{\mathbb R^d}y\mathbb 1_{|y|\le1}\,\nu_t(dy)$ is well-defined:
\[
A_\Gamma(t)f(z)=\sum_{j=1}^d\bar b_t(j)\frac{\partial f}{\partial x_j}(z)+\frac12\sum_{j,k=1}^d c_t(j,k)\frac{\partial^2f}{\partial x_j\partial x_k}(z)+\int_{\mathbb R^d}\big(f(z+y)-f(z)\big)\nu_t(dy).
\]
Now, to introduce inhomogeneous Lévy random evolutions, we consider $(L^x)_{x\in X}$ a collection of inhomogeneous Lévy processes on $(\Omega,\mathcal F,P)$ with local characteristics $(b_t(x),c_t(x),\nu_t(x))$. Define for $z\in\mathbb R^d$, $(s,t)\in\Delta_J$, $x\in X$:
\[
\Gamma_x(s,t)f(z):=E[f(L^x_t-L^x_s+z)]=\int_{\mathbb R^d}p^x_{s,t}(z,dy)f(y).
\]
This inhomogeneous random evolution is regular and $B(Y)$-contraction because the corresponding semigroup is. In this case, and under some technical conditions, we can actually prove the compact containment criterion in the case where $d=1$ (this result can probably be extended to any $d$).
Indeed, define the jump operators:
\[
D^\epsilon(x,y)f(z):=f(z+\epsilon\alpha(x,y)),
\]
where $\alpha\in B_{b\mathbb R}(X\times X,\mathcal X\otimes\mathcal X)$, so that $D_1(x,y)f=\alpha(x,y)f'$ and $Y_0\subseteq D(D_1)$. Let $\widetilde L$ be an inhomogeneous Lévy process with local characteristics $(0,0,\nu_t)$ and $\widetilde\mu_{s,t}$ the law of $\widetilde L_t-\widetilde L_s$. Assume that:

i) $\nu_t(x)=\nu_t$ $\forall x\in X$ (the Lévy measure is the same over all states);

ii) $\sup_{x,t}|b_t(x)|\le r\in\mathbb R_+$ and $\sup_{x,t}\sqrt{c_t(x)}\le\sigma\in\mathbb R_+$ (uniformly bounded drift and volatility);

iii) $\forall(s,T)\in\Delta_J$, the collection of measures $\{\widetilde\mu_{s,t}\}_{t\in[s,T]}$ is tight.

Then, using 4.9, $\{V^\epsilon\}$ satisfies (A1), i.e. the compact containment criterion in $Y$ (see the proof in Appendix B). Let us make some comments about the other assumptions of 4.19. (A0) will be satisfied with $\widehat D:=C_0^{n_3}(\mathbb R^d)$, provided the local characteristics $(b_t(x),c_t(x),\nu_t(x))$ are bounded and differentiable. By the discussion we had just before 4.19, (A2) will be satisfied because, from the expression of $A_x(t)$ below and boundedness of the local characteristics, the following family of functions converges uniformly to $0$ at infinity, is equicontinuous and uniformly bounded:
\[
\{\widehat A(t)f:\ t\in[s,T]\}.
\]
About (A3), the generator $A_x(t)$ has the following expression:
\[
A_x(t)f(z)=\sum_{j=1}^d b^x_t(j)\frac{\partial f}{\partial x_j}(z)+\frac12\sum_{j,k=1}^d c^x_t(j,k)\frac{\partial^2f}{\partial x_j\partial x_k}(z)+\int_{\mathbb R^d}\Big(f(z+y)-f(z)-\sum_{j=1}^d\frac{\partial f}{\partial x_j}(z)\,y(j)\mathbb 1_{|y|\le1}\Big)\nu^x_t(dy).
\]
Keep in mind that:
\[
\widehat A(t)=\frac{1}{M_\rho}\sum_{x\in X}\big(m(x)A_x(t)+PD_1(x,\bullet)(x)\big)\rho_x.
\]
It is clear that $\widehat A$ is the generator of an inhomogeneous Lévy semigroup with local characteristics $(\widehat b_t,\widehat c_t,\widehat\nu_t)$ given by:
\[
\widehat b_t=\frac{1}{M_\rho}\sum_{x\in X}\big(m(x)b^x_t+P\alpha(x,\bullet)(x)\big)\rho_x,\qquad
\widehat c_t=\frac{1}{M_\rho}\sum_{x\in X}m(x)\rho_x c^x_t,\qquad
\widehat\nu_t(dy)=\frac{1}{M_\rho}\sum_{x\in X}m(x)\rho_x\nu^x_t(dy).
\]
In particular we check that $\widehat\nu_t$ is still a Lévy measure on $\mathbb R^d$.

Appendix A: Proofs of the results in Sections 2 and 3

A.1.
Proof of 2.6

Let $(s,t)\in\Delta_J$, $f\in Y_0$.
\[
\frac{\partial^-}{\partial s}\Gamma(s,t)f=\lim_{\substack{h\downarrow0\\(s-h,t)\in\Delta_J}}\frac{\Gamma(s,t)f-\Gamma(s-h,t)f}{h}=-\lim_{\substack{h\downarrow0\\(s-h,t)\in\Delta_J}}\frac{\Gamma(s-h,s)-I}{h}\Gamma(s,t)f=-A_\Gamma(s)\Gamma(s,t)f,
\]
since $\Gamma(s,t)f\in D(A_\Gamma)$. For $s<t$:
\[
\frac{\partial^+}{\partial s}\Gamma(s,t)f=\lim_{\substack{h\downarrow0\\(s+h,t)\in\Delta_J}}\frac{\Gamma(s+h,t)f-\Gamma(s,t)f}{h}=-\lim_{\substack{h\downarrow0\\(s+h,t)\in\Delta_J}}\frac{\Gamma(s,s+h)-I}{h}\Gamma(s+h,t)f.
\]
Let $h\in(0,t-s]$:
\[
\Big\|\frac{\Gamma(s,s+h)-I}{h}\Gamma(s+h,t)f-A_\Gamma(s)\Gamma(s,t)f\Big\|
\le\Big\|\frac{\Gamma(s,s+h)-I}{h}\Gamma(s,t)f-A_\Gamma(s)\Gamma(s,t)f\Big\|
+\Big\|\frac{\Gamma(s,s+h)-I}{h}\Big\|_{B(Y_0,Y)}\|\Gamma(s+h,t)f-\Gamma(s,t)f\|_{Y_0},
\]
the last inequality holding because $\forall(s,t)\in\Delta_J$: $\Gamma(s,t)Y_0\subseteq Y_0$. We are going to apply the uniform boundedness principle to show that:
\[
\sup_{h\in(0,t-s]}\Big\|\frac{\Gamma(s,s+h)-I}{h}\Big\|_{B(Y_0,Y)}<\infty.
\]
$Y_0$ is Banach. We have to show that $\forall g\in Y_0$: $\sup_{h\in(0,t-s]}\big\|\frac{(\Gamma(s,s+h)-I)}{h}g\big\|<\infty$. Let $g\in Y_0$. We have $\big\|\frac{(\Gamma(s,s+h)-I)}{h}g\big\|\xrightarrow{h\downarrow0}\|A_\Gamma(s)g\|$ since $Y_0\subseteq D(A_\Gamma)$, so $\exists\delta(g)\in(0,t-s)$ such that $h\in(0,\delta)\Rightarrow\big\|\frac{(\Gamma(s,s+h)-I)}{h}g\big\|<\|A_\Gamma(s)g\|+1$.
Then, by $Y_0$-strong $t$-continuity of $\Gamma$, $h\to\big\|\frac{(\Gamma(s,s+h)-I)}{h}g\big\|\in C([\delta,t-s])$. Let $M:=\max_{h\in[\delta,t-s]}\big\|\frac{(\Gamma(s,s+h)-I)}{h}g\big\|$. Then we get $\big\|\frac{(\Gamma(s,s+h)-I)}{h}g\big\|\le\max(M,\|A_\Gamma(s)g\|+1)$ $\forall h\in(0,t-s]$, and so $\sup_{h\in(0,t-s]}\big\|\frac{(\Gamma(s,s+h)-I)}{h}g\big\|<\infty$.

Further, by $Y_0$-super strong $s$-continuity of $\Gamma$, $\|\Gamma(s+h,t)f-\Gamma(s,t)f\|_{Y_0}\xrightarrow{h\downarrow0}0$. Finally, since $\Gamma(s,t)f\in D(A_\Gamma)$, $\big\|\frac{(\Gamma(s,s+h)-I)}{h}\Gamma(s,t)f-A_\Gamma(s)\Gamma(s,t)f\big\|\xrightarrow{h\downarrow0}0$. Therefore $\frac{\partial^+}{\partial s}\Gamma(s,t)f=-A_\Gamma(s)\Gamma(s,t)f$ for $s<t$, which shows that $\frac{\partial}{\partial s}\Gamma(s,t)f=-A_\Gamma(s)\Gamma(s,t)f$ for $(s,t)\in\Delta_J$.

A.2. Proof of 2.7

Let $(s,t)\in\Delta_J$, $f\in Y_0$. We have:
\[
\frac{\partial^+}{\partial t}\Gamma(s,t)f=\lim_{\substack{h\downarrow0\\(s,t+h)\in\Delta_J}}\frac{\Gamma(s,t+h)f-\Gamma(s,t)f}{h}=\lim_{\substack{h\downarrow0\\(s,t+h)\in\Delta_J}}\Gamma(s,t)\frac{(\Gamma(t,t+h)-I)f}{h}.
\]
And for $h$ such that $t+h\in J$:
\[
\Big\|\Gamma(s,t)\frac{(\Gamma(t,t+h)-I)f}{h}-\Gamma(s,t)A_\Gamma(t)f\Big\|\le\|\Gamma(s,t)\|_{B(Y)}\Big\|\frac{(\Gamma(t,t+h)-I)f}{h}-A_\Gamma(t)f\Big\|\xrightarrow{h\downarrow0}0,
\]
since $f\in D(A_\Gamma)$.
Therefore $\frac{\partial^+}{\partial t}\Gamma(s,t)f=\Gamma(s,t)A_\Gamma(t)f$. Now if $s<t$:
\[
\frac{\partial^-}{\partial t}\Gamma(s,t)f=\lim_{\substack{h\downarrow0\\(s,t-h)\in\Delta_J}}\frac{\Gamma(s,t)f-\Gamma(s,t-h)f}{h}=\lim_{\substack{h\downarrow0\\(s,t-h)\in\Delta_J}}\Gamma(s,t-h)\frac{(\Gamma(t-h,t)-I)f}{h}.
\]
For $h\in(0,t-s]$:
\[
\Big\|\Gamma(s,t-h)\frac{(\Gamma(t-h,t)-I)f}{h}-\Gamma(s,t)A_\Gamma(t)f\Big\|
\le\|\Gamma(s,t-h)\|_{B(Y)}\Big\|\frac{(\Gamma(t-h,t)-I)f}{h}-A_\Gamma(t)f\Big\|+\|(\Gamma(s,t-h)-\Gamma(s,t))A_\Gamma(t)f\|.
\]
Since $f\in D(A_\Gamma)$, $\big\|\frac{(\Gamma(t-h,t)-I)f}{h}-A_\Gamma(t)f\big\|\xrightarrow{h\downarrow0}0$. By $Y$-strong $t$-continuity of $\Gamma$: $\|(\Gamma(s,t-h)-\Gamma(s,t))A_\Gamma(t)f\|\xrightarrow{h\downarrow0}0$. By the principle of uniform boundedness together with the $Y$-strong $t$-continuity of $\Gamma$, we have $\sup_{h\in(0,t-s]}\|\Gamma(s,t-h)\|_{B(Y)}\le\sup_{h\in[0,t-s]}\|\Gamma(s,t-h)\|_{B(Y)}<\infty$. Therefore we get $\frac{\partial^-}{\partial t}\Gamma(s,t)f=\Gamma(s,t)A_\Gamma(t)f$ for $s<t$, which shows $\frac{\partial}{\partial t}\Gamma(s,t)f=\Gamma(s,t)A_\Gamma(t)f$ for $(s,t)\in\Delta_J$.

A.3. Proof of 2.8

Let $f\in Y_0$, $(s,t)\in\Delta_J$. First, $u\to\Gamma(s,u)A_\Gamma(u)f\in B_Y([s,t])$ as the derivative of $u\to\Gamma(s,u)f$. By the principle of uniform boundedness together with the $Y$-strong $t$-continuity of $\Gamma$, we have $M:=\sup_{u\in[s,t]}\|\Gamma(s,u)\|_{B(Y)}<\infty$. We then observe that for $u\in[s,t]$:
\[
\|\Gamma(s,u)A_\Gamma(u)f\|\le M\|A_\Gamma(u)f\|\le M\|A_\Gamma(u)\|_{B(Y_0,Y)}\|f\|_{Y_0}.
\]

A.4. Proof of 2.10

Let $(s,u),(u,t)\in\Delta_J$, $f\in Y_0$. Consider the function $\phi:u\to G(u)\Gamma(u,t)f$. We are going to show that $\phi'(u)=0$ $\forall u\in[s,t]$, and therefore that $\phi(s)=\phi(t)$.
We have for $u<t$:
\[
\frac{d^+\phi}{du}(u)=\lim_{\substack{h\downarrow0\\h\in(0,t-u]}}\frac1h\big[G(u+h)\Gamma(u+h,t)f-G(u)\Gamma(u,t)f\big].
\]
Let $h\in(0,t-u]$. We have:
\[
\Big\|\frac1h\big[G(u+h)\Gamma(u+h,t)f-G(u)\Gamma(u,t)f\big]\Big\|
\le\underbrace{\Big\|\frac1hG(u+h)\Gamma(u,t)f-\frac1hG(u)\Gamma(u,t)f-G(u)A_\Gamma(u)\Gamma(u,t)f\Big\|}_{(1)}
\]
\[
+\underbrace{\|G(u+h)\|_{B(Y)}\Big\|\frac1h\Gamma(u+h,t)f-\frac1h\Gamma(u,t)f+A_\Gamma(u)\Gamma(u,t)f\Big\|}_{(2)}
+\underbrace{\|G(u+h)A_\Gamma(u)\Gamma(u,t)f-G(u)A_\Gamma(u)\Gamma(u,t)f\|}_{(3)}
\]
And we have:

(1) $\to0$ because $G$ satisfies the initial value problem and $\Gamma(u,t)Y_0\subseteq Y_0$;

(2) $\to0$ because $\frac{\partial}{\partial u}\Gamma(u,t)f=-A_\Gamma(u)\Gamma(u,t)f$;

(3) $\to0$ by strong continuity of $G$.

Further, by the principle of uniform boundedness together with the $Y$-strong continuity of $G$, we have $\sup_{h\in(0,t-u]}\|G(u+h)\|_{B(Y)}\le\sup_{h\in[0,t-u]}\|G(u+h)\|_{B(Y)}<\infty$. We therefore get $\frac{d^+\phi}{du}(u)=0$. Now for $u>s$:
\[
\frac{d^-\phi}{du}(u)=\lim_{\substack{h\downarrow0\\h\in(0,u-s]}}\frac1h\big[G(u)\Gamma(u,t)f-G(u-h)\Gamma(u-h,t)f\big].
\]
Let $h\in(0,u-s]$:
\[
\Big\|\frac1h\big[G(u)\Gamma(u,t)f-G(u-h)\Gamma(u-h,t)f\big]\Big\|
\le\underbrace{\Big\|\frac1hG(u)\Gamma(u,t)f-\frac1hG(u-h)\Gamma(u,t)f-G(u)A_\Gamma(u)\Gamma(u,t)f\Big\|}_{(4)}
\]
\[
+\underbrace{\|G(u-h)\|_{B(Y)}\Big\|-\frac1h\Gamma(u-h,u)\Gamma(u,t)f+\frac1h\Gamma(u,t)f+A_\Gamma(u)\Gamma(u,t)f\Big\|}_{(5)}
+\underbrace{\|G(u)A_\Gamma(u)\Gamma(u,t)f-G(u-h)A_\Gamma(u)\Gamma(u,t)f\|}_{(6)}
\]
By the principle of uniform boundedness together with the $Y$-strong $t$-continuity of $G$, we have $\sup_{h\in(0,u-s]}\|G(u-h)\|_{B(Y)}\le\sup_{h\in[0,u-s]}\|G(u-h)\|_{B(Y)}<\infty$. And:

(4) $\to0$ because $G$ satisfies the initial value problem and $\Gamma(u,t)Y_0\subseteq Y_0$;

(5) $\to0$ because $\Gamma(u,t)Y_0\subseteq Y_0$;

(6) $\to0$ by strong continuity of $G$.

We therefore get $\frac{d^-\phi}{du}(u)=0$.

A.5. Proof of 2.13

Since $\Gamma$ is regular and $f$, $A_\Gamma(u)f\in Y_0$, and $u\to\|A_\Gamma(u)\|_{B(Y_0,Y)}$ is integrable on $[s,t]$, we have by 2.8:
\[
\Gamma(s,t)f=f+\int_s^t\Gamma(s,u)A_\Gamma(u)f\,du=f+\int_s^t\Big[A_\Gamma(u)f+\int_s^u\Gamma(s,r)A_\Gamma(r)A_\Gamma(u)f\,dr\Big]du
=f+\int_s^tA_\Gamma(u)f\,du+\int_s^t\int_s^u\Gamma(s,r)A_\Gamma(r)A_\Gamma(u)f\,dr\,du.
\]
If $g_1:[s,t]\to\mathbb R$ and $g_2:[s,t]\to Y$ are differentiable functions, and $g_1g_2'$, $g_1'g_2$ are Bochner integrable on $[s,t]$, then we have by the fundamental theorem of calculus for the Bochner integral:
\[
g_1(t)g_2(t)-g_1(s)g_2(s)=\int_s^t(g_1g_2)'(u)\,du=\int_s^tg_1'(u)g_2(u)\,du+\int_s^tg_1(u)g_2'(u)\,du.
\]
Take $g_1(u):=(t-u)$ and $g_2(u):=\int_s^u\Gamma(s,r)A_\Gamma(r)A_\Gamma(u)f\,dr$. We have shown that $g_1'g_2$ is integrable on $[s,t]$.
Provided we show that
\[
g_2'(u) = \frac{\partial}{\partial u}\int_s^u \Gamma(s,r)A_\Gamma(r)A_\Gamma(u)f\,dr = \Gamma(s,u)A_\Gamma(u)A_\Gamma(u)f + \int_s^u \Gamma(s,r)A_\Gamma(r)A_\Gamma'(u)f\,dr,
\]
the fact that $g_1 g_2'$ is integrable on $[s,t]$ by assumption ends the proof. We have:
\begin{align*}
\frac{\partial}{\partial u}\int_s^u \Gamma(s,r)A_\Gamma(r)A_\Gamma(u)f\,dr
&= \lim_{h \to 0,\;(s,u+h)\in\Delta_J} \frac{1}{h}\Big[\int_s^{u+h} \Gamma(s,r)A_\Gamma(r)A_\Gamma(u+h)f\,dr - \int_s^u \Gamma(s,r)A_\Gamma(r)A_\Gamma(u)f\,dr\Big] \\
&= \lim_{h \to 0,\;(s,u+h)\in\Delta_J} \frac{1}{h}\Big[\int_s^{u+h} \Gamma(s,r)A_\Gamma(r)A_\Gamma(u+h)f\,dr - \int_s^u \Gamma(s,r)A_\Gamma(r)A_\Gamma(u+h)f\,dr\Big] \\
&\qquad\qquad + \frac{1}{h}\Big[\int_s^u \Gamma(s,r)A_\Gamma(r)A_\Gamma(u+h)f - \Gamma(s,r)A_\Gamma(r)A_\Gamma(u)f\,dr\Big].
\end{align*}
By the principle of uniform boundedness together with the $Y$-strong $t$-continuity of $\Gamma$ and the $Y$-strong continuity of $A_\Gamma$, we have $M_1 := \sup_{r\in[s,t]}\|\Gamma(s,r)\|_{B(Y)} < \infty$ and $M_2 := \sup_{r\in[s,t]}\|A_\Gamma(r)\|_{B(Y_1,Y)} < \infty$. Let $h \in (0, t-u]$ (the proof is the same for $h \in [s-u, 0)$):
\begin{align*}
&\Big\|\int_s^u \frac{1}{h}\Gamma(s,r)A_\Gamma(r)A_\Gamma(u+h)f - \frac{1}{h}\Gamma(s,r)A_\Gamma(r)A_\Gamma(u)f - \Gamma(s,r)A_\Gamma(r)A_\Gamma'(u)f\,dr\Big\| \\
&\qquad \le \int_s^u \|\Gamma(s,r)\|_{B(Y)}\,\|A_\Gamma(r)\|_{B(Y_1,Y)}\,\Big\|\frac{1}{h}A_\Gamma(u+h)f - \frac{1}{h}A_\Gamma(u)f - A_\Gamma'(u)f\Big\|_{Y_1}dr \\
&\qquad < \epsilon\,(u-s)\,M_1 M_2 \quad \text{for } |h| < \delta_u, \text{ since } f \in D(A_\Gamma').
\end{align*}
We also have:
\begin{align*}
&\Big\|\frac{1}{h}\int_s^{u+h}\Gamma(s,r)A_\Gamma(r)A_\Gamma(u+h)f\,dr - \frac{1}{h}\int_s^u \Gamma(s,r)A_\Gamma(r)A_\Gamma(u+h)f\,dr - \Gamma(s,u)A_\Gamma(u)A_\Gamma(u)f\Big\| \\
&= \Big\|\frac{1}{h}\int_u^{u+h}\Gamma(s,r)A_\Gamma(r)A_\Gamma(u+h)f\,dr - \Gamma(s,u)A_\Gamma(u)A_\Gamma(u)f\Big\| \\
&\le \underbrace{\frac{1}{h}\int_u^{u+h}\|\Gamma(s,r)\|_{B(Y)}\,\|A_\Gamma(r)\|_{B(Y_1,Y)}\,\|A_\Gamma(u+h)f - A_\Gamma(u)f\|_{Y_1}\,dr}_{(1)} \\
&\quad + \underbrace{\frac{1}{h}\int_u^{u+h}\|\Gamma(s,r)\|_{B(Y)}\,\|A_\Gamma(r)A_\Gamma(u)f - A_\Gamma(u)A_\Gamma(u)f\|\,dr}_{(2)} \\
&\quad + \underbrace{\frac{1}{h}\int_u^{u+h}\|\Gamma(s,r)A_\Gamma(u)A_\Gamma(u)f - \Gamma(s,u)A_\Gamma(u)A_\Gamma(u)f\|\,dr}_{(3)}
\end{align*}
Therefore:
(1) $\to 0$ since $f \in D(A_\Gamma')$ and $M_1, M_2 < \infty$;
(2) $\to 0$ by $Y$-strong continuity of $A_\Gamma$ and $M_1 < \infty$;
(3) $\to 0$ by $Y$-strong $t$-continuity of $\Gamma$.

A.6. Proof of 3.3

The fact that $V(s,t) \in B(Y)$ is straightforward from the definition of $V$. The semigroup property comes from straightforward computations. Then, we will show that $u \mapsto V(s,u)(\omega)$ is $Y$-strongly continuous on each $[T_n(s), T_{n+1}(s)) \cap J(s)$, $n \in \mathbb{N}$, and $Y$-strongly RCLL at each $T_{n+1}(s) \in J(s)$, $n \in \mathbb{N}$. Let $n \in \mathbb{N}$ such that $T_n(s) \in J(s)$. $\forall t \in [T_n(s), T_{n+1}(s)) \cap J(s)$, we have:
\[
V(s,t) = \Bigg[\prod_{k=1}^n \Gamma_{x_{k-1}(s)}(T_{k-1}(s), T_k(s))\,D(x_{k-1}(s), x_k(s))\Bigg]\,\Gamma_{x_n(s)}(T_n(s), t).
\]
Therefore, by $Y$-strong $t$-continuity of $\Gamma$, we get that $u \mapsto V(s,u)(\omega)$ is $Y$-strongly continuous on $[T_n(s), T_{n+1}(s)) \cap J(s)$. If $T_{n+1}(s) \in J(s)$, the fact that $V(s,\cdot)$ has a left limit at $T_{n+1}(s)$ also comes from the $Y$-strong $t$-continuity of $\Gamma$:
\[
V(s, T_{n+1}(s)^-)f = \lim_{h \downarrow 0} G_n^s\,\Gamma_{x_n(s)}(T_n(s), T_{n+1}(s)-h)f = G_n^s\,\Gamma_{x_n(s)}(T_n(s), T_{n+1}(s))f,
\]
where
\[
G_n^s := \prod_{k=1}^n \Gamma_{x_{k-1}(s)}(T_{k-1}(s), T_k(s))\,D(x_{k-1}(s), x_k(s)).
\]
Therefore we get the relationship:
\[
V(s, T_{n+1}(s))f = V(s, T_{n+1}(s)^-)\,D(x_n(s), x_{n+1}(s))f.
\]
We therefore see why we used the terminology "continuous inhomogeneous $Y$-random evolution" when $D = I$.

A.7. Proof of 3.4

Let $s \in J$, $\omega \in \Omega$, $f \in Y_1$.
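Before carrying out the induction, the pathwise product formula established in A.6 can be illustrated numerically. The following numpy sketch uses entirely made-up ingredients (two states with constant $2\times 2$ matrix generators, so that $\Gamma_x(u,v) = e^{(v-u)A_x}$, hypothetical jump operators $D$, and fixed jump times) and checks the backward-propagator property $V(s,t) = V(s,r)V(r,t)$:

```python
import numpy as np

def expm(M, terms=40):
    # Truncated exponential series; adequate for the small matrices used here.
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Hypothetical data: constant generators A[x] per state and jump operators D[(x, x')].
A = {0: np.array([[0.0, 1.0], [-1.0, 0.0]]),
     1: np.array([[0.1, 0.0], [0.2, -0.3]])}
D = {(0, 1): np.array([[1.0, 0.1], [0.0, 1.0]]),
     (1, 0): np.array([[1.0, 0.0], [-0.1, 1.0]])}
T = [0.0, 0.7, 1.3, 2.1]     # jump times T_0 = s < T_1 < ... (made up)
states = [0, 1, 0]           # state x_k held on [T_k, T_{k+1})

def V(s, t):
    # Product formula of A.6: Gamma on each sojourn interval, D at each jump.
    out = np.eye(2)
    for k in range(len(states)):
        lo, hi = max(T[k], s), min(T[k + 1], t)
        if hi > lo:
            out = out @ expm((hi - lo) * A[states[k]])
        if k + 1 < len(states) and s < T[k + 1] <= t:
            out = out @ D[(states[k], states[k + 1])]
    return out

# Backward-propagator property: V(s,t) = V(s,r) V(r,t) for s <= r <= t
assert np.allclose(V(0.0, 2.0), V(0.0, 1.0) @ V(1.0, 2.0))
```

The factors are composed left to right in time, so splitting the product at any intermediate $r$ recovers the propagator identity.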
We are going to proceed by induction and show that $\forall n \in \mathbb{N}$, we have $\forall t \in [T_n(s), T_{n+1}(s)) \cap J(s)$:
\[
V(s,t)f = f + \int_s^t V(s,u)A_{x(u)}(u)f\,du + \sum_{k=1}^n V(s,T_k(s)^-)\big[D(x_{k-1}(s), x_k(s)) - I\big]f.
\]
For $n = 0$, we have $\forall t \in [s, T_1(s)) \cap J(s)$: $V(s,t)f = \Gamma_{x_0(s)}(s,t)f$, and therefore $V(s,t)f = f + \int_s^t V(s,u)A_{x(u)}(u)f\,du$ by regularity of $\Gamma$. Now assume that the property is true for $n-1$, namely: $\forall t \in [T_{n-1}(s), T_n(s)) \cap J(s)$, we have:
\[
V(s,t)f = f + \int_s^t V(s,u)A_{x(u)}(u)f\,du + \sum_{k=1}^{n-1} V(s,T_k(s)^-)\big[D(x_{k-1}(s), x_k(s)) - I\big]f.
\]
Letting $t \uparrow T_n(s)$, this implies (by continuity of the Bochner integral):
\[
V(s,T_n(s)^-)f = f + \int_s^{T_n(s)} V(s,u)A_{x(u)}(u)f\,du + \sum_{k=1}^{n-1} V(s,T_k(s)^-)\big[D(x_{k-1}(s), x_k(s)) - I\big]f.
\]
Now, $\forall t \in [T_n(s), T_{n+1}(s)) \cap J(s)$ we have that $V(s,t) = G_n^s\,\Gamma_{x_n(s)}(T_n(s), t)$, where $G_n^s := \prod_{k=1}^n \Gamma_{x_{k-1}(s)}(T_{k-1}(s), T_k(s))\,D(x_{k-1}(s), x_k(s))$, and therefore $\forall t \in [T_n(s), T_{n+1}(s)) \cap J(s)$, by 2.7 and regularity of $\Gamma$:
\[
\frac{\partial}{\partial t}V(s,t)f = V(s,t)A_{x(t)}(t)f \;\Rightarrow\; V(s,t)f = V(s,T_n(s))f + \int_{T_n(s)}^t V(s,u)A_{x(u)}(u)f\,du.
\]
Further, by the proof of 3.3 we get $V(s,T_n(s))f = V(s,T_n(s)^-)\,D(x_{n-1}(s), x_n(s))f$. Therefore, combining these results, we have:
\begin{align*}
V(s,t)f &= V(s,T_n(s)^-)\,D(x_{n-1}(s), x_n(s))f + \int_{T_n(s)}^t V(s,u)A_{x(u)}(u)f\,du \\
&= V(s,T_n(s)^-)f + \int_{T_n(s)}^t V(s,u)A_{x(u)}(u)f\,du + V(s,T_n(s)^-)\,D(x_{n-1}(s), x_n(s))f - V(s,T_n(s)^-)f \\
&= f + \int_s^{T_n(s)} V(s,u)A_{x(u)}(u)f\,du + \sum_{k=1}^{n-1} V(s,T_k(s)^-)\big[D(x_{k-1}(s), x_k(s)) - I\big]f \\
&\qquad + \int_{T_n(s)}^t V(s,u)A_{x(u)}(u)f\,du + V(s,T_n(s)^-)\,D(x_{n-1}(s), x_n(s))f - V(s,T_n(s)^-)f \\
&= f + \int_s^t V(s,u)A_{x(u)}(u)f\,du + \sum_{k=1}^n V(s,T_k(s)^-)\big[D(x_{k-1}(s), x_k(s)) - I\big]f.
\end{align*}

Appendix B: Proofs of the results in section 5

B.1. Proof that $\Gamma$ is a semigroup and that $\forall n \in \mathbb{N}$ we have $\Gamma(s,t) \in B(C_0^n(\mathbb{R}^d))$ and $\|\Gamma(s,t)\|_{B(C_0^n(\mathbb{R}^d))} \le 1$

For $n, d \in \mathbb{N}^*$, let $N_{d,n} := \{\alpha \in \mathbb{N}^d : \sum_{i=1}^d \alpha(i) \le n\}$. The space $C_0^n(\mathbb{R}^d)$ (resp. $C_b^n(\mathbb{R}^d)$) denotes the space of functions $f: \mathbb{R}^d \to \mathbb{R}$ for which $\partial^\alpha f \in C_0(\mathbb{R}^d)$ (resp. $C_b(\mathbb{R}^d)$) $\forall \alpha \in N_{d,n}$. This space is Banach for the norm:
\[
\|f\| := \max_{\alpha \in N_{d,n}} \|\partial^\alpha f\|_\infty, \qquad \partial^\alpha f := \frac{\partial^{\sum_{i=1}^d \alpha(i)} f}{\partial x_1^{\alpha(1)} \cdots \partial x_d^{\alpha(d)}}.
\]
Let, for $(s,t) \in \Delta_J$: $\Gamma(s,t)f(x) := E[f(L_t - L_s + x)] = \int_{\mathbb{R}^d} p_{s,t}(x,dy)f(y)$ for $f \in B_{b\mathbb{R}}(\mathbb{R}^d)$. By linearity of the expectation, $\Gamma(s,t)$ is linear. Further, $\Gamma$ satisfies the semigroup equation because of the Chapman-Kolmogorov equation. Now let's show that $\forall n \in \mathbb{N}$ we have $\Gamma(s,t) \in B(C_0^n(\mathbb{R}^d))$ and $\|\Gamma(s,t)\|_{B(C_0^n(\mathbb{R}^d))} \le 1$. Let $Y := C_0^m(\mathbb{R}^d)$. Let's first start with $C_0(\mathbb{R}^d)$. Let $f \in C_0(\mathbb{R}^d)$. By [23], we get the representation $\Gamma(s,t)f(x) = \int_{\mathbb{R}^d} \mu_{s,t}(dy)f(x+y)$, where $\mu_{s,t}$ is the distribution of $L_t - L_s$, i.e. $\mu_{s,t}(A) = P(L_t - L_s \in A)$ for $A \in Bor(\mathbb{R}^d)$. Let $x \in \mathbb{R}^d$, take any sequence $(x_n)_{n\in\mathbb{N}} \subseteq \mathbb{R}^d$ such that $x_n \to x$, and denote $g_n := y \mapsto f(x_n + y) \in C_0(\mathbb{R}^d)$ and $g := y \mapsto f(x + y) \in C_0(\mathbb{R}^d)$. By continuity of $f$, $g_n \to g$ pointwise. Further, $\|g_n\| = \|f\| \in L^1_\mathbb{R}(\mathbb{R}^d, Bor(\mathbb{R}^d), \mu_{s,t})$. Therefore, by the Lebesgue dominated convergence theorem, we get $\lim_{n\to\infty}\Gamma(s,t)f(x_n) = \int_{\mathbb{R}^d} \lim_{n\to\infty}\mu_{s,t}(dy)f(x_n+y) = \int_{\mathbb{R}^d} \mu_{s,t}(dy)f(x+y) = \Gamma(s,t)f(x)$. Therefore $\Gamma(s,t)f \in C(\mathbb{R}^d)$.
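The convolution representation $\Gamma(s,t)f(x) = \int_{\mathbb{R}^d}\mu_{s,t}(dy)f(x+y)$ just used can be sketched by Monte Carlo. In the following toy example a Brownian motion with drift stands in for the Lévy process $L$ (all parameters are made up), and the contraction and decay-at-infinity properties proved in this appendix are checked on a grid:

```python
import numpy as np

# Monte Carlo sketch of Gamma(s,t)f(x) = E[f(x + L_t - L_s)] in dimension 1,
# with L a Brownian motion with drift as a stand-in Levy process (made-up parameters).
rng = np.random.default_rng(2)
s, t, mu, sigma = 0.0, 1.0, 0.3, 0.8
f = lambda x: np.exp(-x**2)            # a function in C_0(R) with ||f|| = 1

# samples of the increment L_t - L_s ~ N(mu (t-s), sigma^2 (t-s))
increments = mu * (t - s) + sigma * np.sqrt(t - s) * rng.standard_normal(100_000)

def gamma_f(x):
    # Gamma(s,t)f(x) approximated by the empirical mean over the increment samples
    return np.mean(f(x + increments))

grid = np.linspace(-5.0, 5.0, 101)
values = np.array([gamma_f(x) for x in grid])
assert values.max() <= 1.0             # contraction: sup |Gamma(s,t)f| <= ||f|| = 1
assert values[0] < 0.05 and values[-1] < 0.05   # decay of Gamma(s,t)f at infinity
```

Since each sample satisfies $f \le \|f\|$, the empirical mean can never exceed $\|f\|$, mirroring the operator-norm bound $\|\Gamma(s,t)\|_{B(C_0(\mathbb{R}^d))} \le 1$.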
By the same argument, but now taking any sequence $(x_n)_{n\in\mathbb{N}} \subseteq \mathbb{R}^d$ such that $|x_n| \to \infty$, we get $\lim_{|x|\to\infty}\Gamma(s,t)f(x) = 0$ and therefore $\Gamma(s,t)f \in C_0(\mathbb{R}^d)$. Further, we get:
\[
|\Gamma(s,t)f(x)| = \Big|\int_{\mathbb{R}^d}\mu_{s,t}(dy)f(x+y)\Big| \le \int_{\mathbb{R}^d}\mu_{s,t}(dy)|f(x+y)| \le \int_{\mathbb{R}^d}\mu_{s,t}(dy)\|f\| = \|f\|,
\]
and therefore $\|\Gamma(s,t)\|_{B(C_0(\mathbb{R}^d))} \le 1$. Now let $x \in \mathbb{R}^d$, $f \in C_0^q(\mathbb{R}^d)$ ($q \ge 1$), $(h_n)_{n\in\mathbb{N}} \subseteq \mathbb{R}^*$ such that $h_n \to 0$, $j \in [\![1,d]\!]$, and $f_{n,j}(x) := f(x_1, \ldots, x_j + h_n, \ldots, x_d)$. Then:
\[
\frac{1}{h_n}\big((\Gamma(s,t)f)_{n,j}(x) - \Gamma(s,t)f(x)\big) = \int_{\mathbb{R}^d}\mu_{s,t}(dy)\,\frac{1}{h_n}\big(f_{n,j}(x+y) - f(x+y)\big).
\]
Let $g_n := y \mapsto \frac{1}{h_n}(f_{n,j}(x+y) - f(x+y)) \in C_0(\mathbb{R}^d)$ and $g := y \mapsto \frac{\partial f}{\partial x_j}(x+y) \in C_0(\mathbb{R}^d)$. We have $g_n \to g$ pointwise since $f \in C_0^1(\mathbb{R}^d)$. By the mean value theorem, $\exists z_j(n,x,y) \in [-|h_n|, |h_n|]$ such that $g_n(y) = \frac{\partial f}{\partial x_j}(x_1+y_1, \ldots, x_j+y_j+z_j(n,x,y), \ldots, x_d+y_d)$, and therefore $|g_n(y)| \le \big\|\frac{\partial f}{\partial x_j}\big\| \in L^1_\mathbb{R}(\mathbb{R}^d, Bor(\mathbb{R}^d), \mu_{s,t})$. Therefore, by the Lebesgue dominated convergence theorem, we get:
\[
\frac{\partial\,\Gamma(s,t)f}{\partial x_j}(x) = \int_{\mathbb{R}^d}\mu_{s,t}(dy)\,\frac{\partial f}{\partial x_j}(x+y).
\]
Using the same argument as for $C_0(\mathbb{R}^d)$, we get that $\Gamma(s,t)f \in C_0^1(\mathbb{R}^d)$ since $\frac{\partial f}{\partial x_j} \in C_0(\mathbb{R}^d)$ $\forall j \in [\![1,d]\!]$. Repeating this argument, computing successively every partial derivative up to order $q$ by the relationship $\partial^\alpha\Gamma(s,t)f(x) = \int_{\mathbb{R}^d}\mu_{s,t}(dy)\,\partial^\alpha f(x+y)$ $\forall \alpha \in N_{d,q}$, we get $\Gamma(s,t)f \in C_0^q(\mathbb{R}^d)$. Further, the same way we got $\|\Gamma(s,t)f\| \le \|f\|$ for $f \in C_0(\mathbb{R}^d)$, we get for $f \in C_0^q(\mathbb{R}^d)$: $\|\partial^\alpha\Gamma(s,t)f\| \le \|\partial^\alpha f\|$ $\forall \alpha \in N_{d,q}$. Therefore:
\[
\max_{\alpha \in N_{d,q}}\|\partial^\alpha\Gamma(s,t)f\| \le \max_{\alpha \in N_{d,q}}\|\partial^\alpha f\| \;\Rightarrow\; \|\Gamma(s,t)f\|_{C_0^q(\mathbb{R}^d)} \le \|f\|_{C_0^q(\mathbb{R}^d)} \;\Rightarrow\; \|\Gamma(s,t)\|_{B(C_0^q(\mathbb{R}^d))} \le 1.
\]

B.2. Proof that $\Gamma$ is $Y$-super strongly $s$-continuous and $Y$-strongly $t$-continuous

Let $(s,t) \in \Delta_J$, $f \in Y$ and $h \in [-s, t-s]$:
\[
\|\Gamma(s+h,t)f - \Gamma(s,t)f\|_Y = \max_{\alpha \in N_{d,m}}\|\partial^\alpha\Gamma(s+h,t)f - \partial^\alpha\Gamma(s,t)f\| = \max_{\alpha \in N_{d,m}}\|\Gamma(s+h,t)\partial^\alpha f - \Gamma(s,t)\partial^\alpha f\|.
\]
Let $\alpha \in N_{d,m}$ and $\{h_n\}_{n\in\mathbb{N}} \subseteq [-s, t-s]$ any sequence such that $h_n \to 0$. Let $S_n := L_t - L_{s+h_n}$ and $S := L_t - L_s$. We have $S_n \overset{P}{\to} S$ by stochastic continuity of additive processes. By Skorokhod's representation theorem, there exist a probability space $(\Omega', \mathcal{F}', P')$ and random variables $\{S'_n\}_{n\in\mathbb{N}}$, $S'$ on it such that $S_n \overset{D}{=} S'_n$, $S \overset{D}{=} S'$ and $S'_n \overset{a.e.}{\to} S'$. Let $x \in \mathbb{R}^d$; we therefore get:
\begin{align*}
|\Gamma(s+h_n,t)\partial^\alpha f(x) - \Gamma(s,t)\partial^\alpha f(x)| &= \big|E'[\partial^\alpha f(x+S'_n) - \partial^\alpha f(x+S')]\big| \le E'\big|\partial^\alpha f(x+S'_n) - \partial^\alpha f(x+S')\big| \\
\Rightarrow\; \|\Gamma(s+h_n,t)\partial^\alpha f - \Gamma(s,t)\partial^\alpha f\| &\le \sup_{x\in\mathbb{R}^d} E'\big|\partial^\alpha f(x+S'_n) - \partial^\alpha f(x+S')\big| \\
&\le E'\Big[\sup_{x\in\mathbb{R}^d}\big|\partial^\alpha f(x+S'_n) - \partial^\alpha f(x+S')\big|\Big].
\end{align*}
Further, since $\partial^\alpha f \in C_0(\mathbb{R}^d)$, $\partial^\alpha f$ is uniformly continuous on $\mathbb{R}^d$: $\forall \epsilon > 0$ $\exists \delta > 0$: $|x-y| < \delta \Rightarrow |\partial^\alpha f(x) - \partial^\alpha f(y)| < \epsilon$. And because $S'_n \overset{a.e.}{\to} S'$, for a.e. $\omega' \in \Omega'$, $\exists N(\omega') \in \mathbb{N}$: $n \ge N(\omega') \Rightarrow |S'_n(\omega') - S'(\omega')| = |x + S'_n(\omega') - (x + S'(\omega'))| < \delta$ $\forall x \in \mathbb{R}^d$ $\Rightarrow |\partial^\alpha f(x+S'_n(\omega')) - \partial^\alpha f(x+S'(\omega'))| < \epsilon$ $\forall x \in \mathbb{R}^d$. Therefore we have that $g_n := \sup_{x\in\mathbb{R}^d}|\partial^\alpha f(x+S'_n) - \partial^\alpha f(x+S')| \overset{a.e.}{\to} 0$.
Further, $|g_n| \le 2\|\partial^\alpha f\| \in L^1_\mathbb{R}(\Omega', \mathcal{F}', P')$. By the Lebesgue dominated convergence theorem we get:
\[
\lim_{n\to\infty} E'\Big[\sup_{x\in\mathbb{R}^d}\big|\partial^\alpha f(x+S'_n) - \partial^\alpha f(x+S')\big|\Big] = 0.
\]
We can notice that the proof relies strongly on the uniform continuity of $f$, and therefore on the topological properties of the space $C_0(\mathbb{R}^d)$ (which $C_b(\mathbb{R}^d)$ doesn't have). We prove that $\Gamma$ is $Y$-strongly $t$-continuous in exactly the same way, but now considering $S_n := L_{t+h_n} - L_s$ and any sequence $\{h_n\}_{n\in\mathbb{N}} \subseteq K$ such that $h_n \to 0$, where $K := [s-t, T_\infty - t]$ if $J = [0, T_\infty]$ and $K := [s-t, 1]$ if $J = \mathbb{R}_+$.

B.3. Proof that $\Gamma$ is regular

By Taylor's theorem we get $\forall f \in Y_1$, $t \in J$, $x, y \in \mathbb{R}^d$:
\[
\Big|f(x+y) - f(x) - \sum_{j=1}^d \frac{\partial f}{\partial x_j}(x)\,y(j)\Big| \le \frac{|y|^2}{2}\sum_{j,k=1}^d \Big\|\frac{\partial^2 f}{\partial x_j\,\partial x_k}\Big\| \le \frac{d^2|y|^2}{2}\,\|f\|_{Y_1}
\]
\[
\Rightarrow\; \|A_\Gamma(t)f\| \le \|f\|_{Y_1}\Bigg(\sum_{j=1}^d |b_t(j)| + \frac{1}{2}\sum_{j,k=1}^d |c_t(j,k)| + 2\,\nu_t\{|y| > 1\} + \frac{d^2}{2}\int_{|y|\le 1}|y|^2\,\nu_t(dy)\Bigg)
\]
\[
\Rightarrow\; \|A_\Gamma(t)\|_{B(Y_1,Y)} \le \sum_{j=1}^d |b_t(j)| + \frac{1}{2}\sum_{j,k=1}^d |c_t(j,k)| + 2\,\nu_t\{|y| > 1\} + \frac{d^2}{2}\int_{|y|\le 1}|y|^2\,\nu_t(dy),
\]
observing that $\nu_t\{|y| > 1\} + \int_{|y|\le 1}|y|^2\,\nu_t(dy) < \infty$ by assumption. Therefore, by the integrability assumption on the local characteristics and Theorem 2.8, we get the regularity of $\Gamma$.

B.4. Proof that $V^\epsilon$ satisfies the compact containment criterion in $Y$

We want to prove 4.9. Here we will assume, for the sake of clarity, that the diffusion parts of the processes $L^x$ are driven by the same Brownian motion $W$. The proof can be extended straightforwardly to the general case where the $|X|$ Brownian motions $W^x$ are imperfectly correlated, by expressing each one of them as a linear combination of $|X|$ independent Brownian motions (Cholesky decomposition). We showed that $V^\epsilon$ is a $B(Y)$-contraction, so it remains to show the uniform convergence to 0 at infinity. We have the following representation, for $\omega' \in \Omega$ (to make clear that the expectation is with respect to $\omega$ and not $\omega'$):
\[
V^\epsilon(s,t)(\omega')f(z) = E\Bigg[f\Bigg(z + \int_s^t b_u(x(u_{\epsilon,s})(\omega'))\,du + \int_s^t \sqrt{c_u(x(u_{\epsilon,s})(\omega'))}\,dW_u + \widetilde{L}_t - \widetilde{L}_s + \sum_{k=1}^{N_s(t_{\epsilon,s})(\omega')} \epsilon\,\alpha(x_{k-1}(s)(\omega'), x_k(s)(\omega'))\Bigg)\Bigg].
\]
Let:
\[
g_{\omega',t,\epsilon}(z) := E\Bigg[f\Bigg(z + \int_s^t b_u(x(u_{\epsilon,s})(\omega'))\,du + \int_s^t \sqrt{c_u(x(u_{\epsilon,s})(\omega'))}\,dW_u + \sum_{k=1}^{N_s(t_{\epsilon,s})(\omega')} \epsilon\,\alpha(x_{k-1}(s)(\omega'), x_k(s)(\omega'))\Bigg)\Bigg],
\]
and, if $\mathcal{N}$ stands for the normal distribution, define:
\[
Z_1 := \int_s^t \sqrt{c_u(x(u_{\epsilon,s})(\omega'))}\,dW_u, \quad \text{so that } Z_1 \sim \mathcal{N}\Big(0,\; \sigma_1^2 := \int_s^t c_u(x(u_{\epsilon,s})(\omega'))\,du\Big),
\]
\[
Z_2 := \sigma(W_T - W_s), \quad \text{so that } Z_2 \sim \mathcal{N}\big(0,\; \sigma_2^2 := \sigma^2(T-s)\big).
\]
Let $\delta > 0$. There exists $k_\delta > 0$ such that $P[|Z_2| > k_\delta] < \frac{\delta}{\|f\|}$. Since $\sigma_1^2 \le \sigma_2^2$, then $P[|Z_1| > k_\delta] \le P[|Z_2| > k_\delta] < \frac{\delta}{\|f\|}$.
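The choice of the truncation level $k_\delta$ can be sketched numerically. Assuming, as above, that $Z_2$ is centered Gaussian, the following minimal Python example (all numbers are made up) selects $k_\delta$ from the exact Gaussian tail $P(|Z_2| > k) = \operatorname{erfc}\!\big(k/(\sigma_2\sqrt{2})\big)$ and checks the bound by Monte Carlo:

```python
import math
import numpy as np

# Sketch of the tail-truncation step: pick k_delta with P(|Z2| > k_delta) < delta/||f||
# for Z2 ~ N(0, sigma2^2).  sigma2, delta and ||f|| are made-up numbers.
sigma2, delta, f_norm = 1.3, 0.01, 2.0
target = delta / f_norm

k = 0.0
# aim for half the target so the Monte Carlo check below has a safety margin
while math.erfc(k / (sigma2 * math.sqrt(2))) >= target / 2:
    k += 0.01
k_delta = k

# Monte Carlo check of the exceedance probability
rng = np.random.default_rng(1)
Z2 = sigma2 * rng.standard_normal(200_000)
assert np.mean(np.abs(Z2) > k_delta) < target
```

Any $k_\delta$ beyond the exact tail quantile works here; the factor-of-two margin is only to keep the empirical check robust to sampling noise.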
And letting $B := \{|Z_1| > k_\delta\}$:
\[
\Bigg|E\Bigg[1_B\, f\Bigg(z + \int_s^t b_u(x(u_{\epsilon,s})(\omega'))\,du + \int_s^t \sqrt{c_u(x(u_{\epsilon,s})(\omega'))}\,dW_u + \sum_{k=1}^{N_s(t_{\epsilon,s})(\omega')} \epsilon\,\alpha(x_{k-1}(s)(\omega'), x_k(s)(\omega'))\Bigg)\Bigg]\Bigg| \le \|f\|\,P(B) < \delta.
\]
To construct the event $A_\epsilon$, we do the following: since $\lim_{t\to\infty}\frac{N(t)}{t} = \frac{1}{M_\rho}$ a.e., then in particular $\frac{N(t)}{t} \overset{P}{\to} \frac{1}{M_\rho}$ and therefore $\exists \epsilon_0 > 0$: $|\epsilon| < \epsilon_0 \Rightarrow$
\[
P\Big[\epsilon\,N_s(T_{\epsilon,s}) \le \frac{2(T-s)}{M_\rho}\Big] \ge 1 - \frac{\Delta}{2}.
\]
In addition, since $P[N_s(T_{\epsilon_0,s}) < \infty] = 1$, and because every probability measure on a Polish space is tight (here $\mathbb{R}$ is Polish), $\exists n_0 \in \mathbb{N}$: $P[N_s(T_{\epsilon_0,s}) \le n_0] \ge 1 - \frac{\Delta}{2}$, so that $\forall \epsilon \in (0, \epsilon_0)$, with $n_1 := \frac{2(T-s)}{M_\rho} \vee n_0$:
\[
P\big[\epsilon\,N_s(T_{\epsilon,s}) \le n_1\big] = P(A_\epsilon) \ge 1 - \Delta.
\]
Note that $n_1$ only depends on $T$, $s$ and $\Delta$. Now, $\exists c_\delta > 0$: $|z| > c_\delta \Rightarrow |f(z)| < \delta$. We have for $\omega' \in A_\epsilon$ and $|z| > C_\delta := c_\delta + \|\alpha\|\,n_1 + k_\delta + r(T-s)$:
\[
\Bigg|E\Bigg[1_{B^c}\, f\Bigg(z + \int_s^t b_u(x(u_{\epsilon,s})(\omega'))\,du + \int_s^t \sqrt{c_u(x(u_{\epsilon,s})(\omega'))}\,dW_u + \sum_{k=1}^{N_s(t_{\epsilon,s})(\omega')} \epsilon\,\alpha(x_{k-1}(s)(\omega'), x_k(s)(\omega'))\Bigg)\Bigg]\Bigg| < \delta\,P(B^c) \le \delta,
\]
so that $|g_{\omega',t,\epsilon}(z)| < \delta$ for $|z| > C_\delta$ (uniformly in $\omega'$, $t$, $\epsilon$). Now we have that $V^\epsilon(s,t)(\omega')f = \Gamma_{\widetilde{L}}(s,t)\,g_{\omega',t,\epsilon}$, where $\Gamma_{\widetilde{L}}$ is the inhomogeneous semigroup corresponding to $\widetilde{L}$, so that:
\[
V^\epsilon(s,t)(\omega')f(z) = \int_{\mathbb{R}^d} \widetilde{\mu}_{s,t}(dy)\,g_{\omega',t,\epsilon}(z+y).
\]
By tightness of the family $\{\widetilde{\mu}_{s,t}\}_{t\in[s,T]}$, $\exists C'_\delta > 0$: $\widetilde{\mu}_{s,t}\{|y| > C'_\delta\} < \delta$ $\forall t \in [s,T]$.
Let $c^*_\delta := 2(C_\delta \vee C'_\delta)$. We have for $|z| > c^*_\delta$, observing that $\|g_{\omega',t,\epsilon}\| \le \|f\|$:
\[
V^\epsilon(s,t)(\omega')f(z) = \int_{|y|\le C'_\delta} \widetilde{\mu}_{s,t}(dy)\,g_{\omega',t,\epsilon}(z+y) + \int_{|y|>C'_\delta} \widetilde{\mu}_{s,t}(dy)\,g_{\omega',t,\epsilon}(z+y)
\]
\[
\Rightarrow\; |V^\epsilon(s,t)(\omega')f(z)| \le \int_{|y|\le C'_\delta} \widetilde{\mu}_{s,t}(dy)\,|g_{\omega',t,\epsilon}(z+y)| + \|f\|\,\widetilde{\mu}_{s,t}\{|y| > C'_\delta\}.
\]
But $|y| \le C'_\delta \Rightarrow |z+y| \ge |z| - |y| \ge c^*_\delta - C'_\delta \ge C_\delta \Rightarrow |g_{\omega',t,\epsilon}(z+y)| < \delta$, and so $|V^\epsilon(s,t)(\omega')f(z)| < \delta(\|f\| + 1)$. Because $c^*_\delta$ is uniform in $\omega'$, $t$ and $\epsilon$, we have finished the proof.

References

[1] Billingsley, P. Convergence of Probability Measures, John Wiley & Sons, Inc., 1999.
[2] Conway, J. A Course in Functional Analysis, Springer, 2007.
[3] Cont, R., Tankov, P. Financial Modelling with Jump Processes, CRC Press LLC, 2004.
[4] Ethier, S., Kurtz, T. Markov Processes: Characterization and Convergence, John Wiley, 1986.
[5] Griego, R., Hersh, R. Random evolutions, Markov chains, and systems of partial differential equations. Proc. National Acad. Sci. 62, 305-308, 1969.
[6] Gulisashvili, A., van Casteren, J. Non Autonomous Kato Classes and Feynman-Kac Propagators, World Scientific Publishing Co. Pte. Ltd, 1986.
[7] Hunter, J. On the moments of Markov renewal processes. Advances in Applied Probability, 1 (2), 188-210, 1969.
[8] Hersh, R. Random evolutions: a survey of results and problems. Rocky Mountain J. Math. 4, 443-475, 1972.
[9] Hersh, R., Pinsky, M. Random evolutions are asymptotically Gaussian. Comm. Pure Appl. Math. XXV, 33-44, 1972.
[10] Jacod, J., Shiryaev, A. Limit Theorems for Stochastic Processes, Springer, 2003.
[11] Janssen, J., Manca, R. Semi-Markov Risk Models for Finance, Insurance and Reliability, Springer, 2007.
[12] Karatzas, I., Shreve, S. Brownian Motion and Stochastic Calculus, Springer, 1998.
[13] Kluge, W. Time-inhomogeneous Lévy processes in interest rate and credit risk models, PhD thesis, 2005.
[14] Kurtz, T. A random Trotter product formula. Proceedings of the American Mathematical Society, Vol. 35, No. 1 (Sep., 1972), pp. 147-154.
[15] Ledoux, M., Talagrand, M. Probability in Banach Spaces: Isoperimetry and Processes, Springer-Verlag, 1991.
[16] Limnios, N., Oprisan, G. Semi-Markov Processes and Reliability, Birkhäuser, Boston, 2001.
[17] Limnios, N., Swishchuk, A. Discrete-time semi-Markov random evolutions and their applications. Adv. Appl. Prob. 45, 127, 2013.
[18] Mathé, P. Numerical integration using V-uniformly ergodic Markov chains. J. Appl. Probab. 41, no. 4, 1104-1112, 2004.
[19] Pazy, A. Semigroups of Linear Operators and Applications to Partial Differential Equations, Springer-Verlag, 1983.
[20] Pinsky, M. Lecture Notes on Random Evolutions, World Scientific Publishing, 1991.
[21] Revuz, D. Markov Chains, Elsevier Science Publishers B.V., 1984.
[22] Rüschendorf, L., Wolf, V. Comparison of time-inhomogeneous Markov processes, preprint, 2011.
[23] Sato, K. Lévy Processes and Infinitely Divisible Distributions, Cambridge University Press, 1999.
[24] Swishchuk, A. Random Evolutions and their Applications, Kluwer Academic Publishers, 1997.
[25] Swishchuk, A., Korolyuk, V. Evolutions of Systems in Random Media, CRC Press, 1995.
[26] Whitt, W. Proofs of the martingale FCLT. Probability Surveys, vol. 4, pp. 268-302, 2007.
[27] Watkins, J. A CLT in random evolution. Ann. Prob. 12, 2, 480-513, 1984.
[28] Watkins, J. A stochastic integral representation for random evolution. Ann. Prob. 13, 2, 531-557, 1985.
[29] Watkins, J.