Infinite rate mutually catalytic branching in infinitely many colonies. Construction, characterization and convergence
Achim Klenke, Institut für Mathematik, Johannes Gutenberg-Universität Mainz, Staudingerweg 9, D-55099 Mainz, [email protected]

Leonid Mytnik, Faculty of Industrial Engineering and Management, Technion – Israel Institute of Technology, Haifa, [email protected]

June 30, 2011

First submitted on January 05, 2009
Abstract
We construct a mutually catalytic branching process on a countable site space with infinite "branching rate". The finite rate mutually catalytic model, in which the rate of branching of one population at a site is proportional to the mass of the other population at that site, was introduced by Dawson and Perkins in [DP98]. We show that our model is the limit for a class of models, and in particular for the Dawson-Perkins model, as the rate of branching goes to infinity. Our process is characterized as the unique solution to a martingale problem. We also give a characterization of the process as a weak solution of an infinite system of stochastic integral equations driven by a Poisson noise.
AMS Subject Classification: 60K35; 60K37; 60J80; 60J65; 60J35.
Keywords: mutually catalytic branching; martingale problem; duality; stochastic differential equations.
This work is partly funded by the German Israeli Foundation with grant number G-807-227.6/2003.
1 Introduction and main results
In [DP98], Dawson and Perkins considered the following mutually catalytic model:

    Y_{i,t}(k) = Y_{i,0}(k) + \int_0^t \sum_{l \in S} A(k,l) Y_{i,s}(l) \, ds + \int_0^t (\gamma Y_{1,s}(k) Y_{2,s}(k))^{1/2} \, dW_{i,s}(k),   t \ge 0, k \in S, i = 1, 2.   (1.1)

Here S is a countable set that is thought of as the site space. (In fact, Dawson and Perkins made the explicit choice S = Z^d.) The matrix A is defined by A(k,l) = a(k,l) - 1_{\{k=l\}}, where a is a symmetric transition matrix of a Markov chain on S. Finally, (W_i(k), k \in S, i = 1, 2) is an independent family of one-dimensional Brownian motions. Dawson and Perkins studied the long-time behavior of this model and also constructed the analogous model in the continuous setting on R instead of S. One can think of \gamma as the branching rate for this model.

In this paper we study (under weaker assumptions on the matrix A) a model that formally corresponds to the case \gamma = \infty. This infinite rate mutually catalytic branching process can be characterized by a certain martingale problem. We show that this martingale problem is well-posed and that its solution X is the unique solution of a system of stochastic differential equations driven by a certain Poisson noise. In fact, we construct the solution via approximate solutions of this system of SDEs. Furthermore, we show that X is the limit of the Dawson-Perkins processes as \gamma \to \infty. Hence, we call X the infinite rate mutually catalytic branching process (IMUB).

This is the second part in a series of three papers. In the first part [KM10], we studied the infinite rate mutually catalytic branching process in the case where S is a singleton. In the third part [KM11], we investigate the longtime behaviour for the case where S is countable. There we establish a dichotomy between segregation and coexistence of types depending on the potential properties of the migration mechanism A.

An alternative construction of the infinite rate mutually catalytic branching process via a Trotter type approximation scheme can be found in [Oel08] and [KO10]. We remark that although the approach in [KO10] is more easily accessible, it yields less information about the IMUB process than the approach taken here. In particular, the investigation of the longtime behaviour in [KM11] needs the description of the jumps of the process that we develop in this paper.

We first have to introduce some notation.
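For orientation, the finite-rate system (1.1) can be simulated directly. The sketch below runs a naive Euler-Maruyama scheme on a small cyclic site space; the kernel a, the rate gamma, the step size and the clipping of negative excursions at 0 are our own illustrative choices and are not part of the paper.

```python
import math
import random

# Naive Euler-Maruyama sketch of the Dawson-Perkins system (1.1) on the cycle
# S = {0, ..., L-1}. All concrete parameters here are illustrative assumptions.
random.seed(1)
L, gamma, dt, steps = 5, 4.0, 1e-3, 2000

def a(k, l):
    # symmetric nearest-neighbour transition kernel on the cycle
    return 0.5 if (k - l) % L in (1, L - 1) else 0.0

def A(k, l):
    # A(k,l) = a(k,l) - 1_{k=l}
    return a(k, l) - (1.0 if k == l else 0.0)

Y = [[1.0] * L, [1.0] * L]  # the two populations Y_1, Y_2
for _ in range(steps):
    noise = [[random.gauss(0.0, math.sqrt(dt)) for _ in range(L)] for _ in range(2)]
    new = [[0.0] * L for _ in range(2)]
    for i in range(2):
        for k in range(L):
            drift = sum(A(k, l) * Y[i][l] for l in range(L))
            # mutually catalytic noise coefficient (gamma * Y_1 * Y_2)^{1/2}
            diff = math.sqrt(gamma * Y[0][k] * Y[1][k])
            # clip at 0: the naive Euler step may leave [0, infinity)
            new[i][k] = max(0.0, Y[i][k] + drift * dt + diff * noise[i][k])
    Y = new
```

The clipping is a crude device of this sketch only; the actual construction of nonnegative solutions is the one of Dawson and Perkins referred to above.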
Let A = (A(k,l))_{k,l \in S} be a matrix on S satisfying the following assumptions:

    A(k,l) \ge 0 for k \ne l   (1.2)

and

    \|A\| := \sup_{k \in S} \sum_{l \in S} (|A(k,l)| + |A(l,k)|) < \infty.   (1.3)

Let

    E = [0,\infty)^2 \setminus (0,\infty)^2.   (1.4)

For u, v \in [0,\infty)^S define \langle u, v \rangle = \sum_{k \in S} u(k) v(k) \in [0,\infty]. For x \in ([0,\infty)^2)^S and \zeta \in [0,\infty)^S define \langle x, \zeta \rangle = \sum_{k \in S} x(k) \zeta(k) \in [0,\infty]^2.

By Lemma IX.1.6 of [Lig85], there exist a \beta \in (0,\infty)^S and a \Gamma \in (0,\infty) such that

    \sum_{k \in S} \beta(k) < \infty   (1.5)

and

    \sum_{l \in S} \beta(l) (|A(k,l)| + |A(l,k)|) \le \Gamma \beta(k) for all k \in S.   (1.6)

We fix this \beta for the rest of this paper. Note that for the transpose matrix A^* of A, we have \|A^*\| = \|A\| < \infty and (1.6) holds with the same \beta. Hence, in what follows, A could be replaced by A^*. We will make use of this fact in Section 4 when we construct a dual process.

Let us define the Liggett-Spitzer spaces as follows:

    L^\beta = \{u \in [0,\infty)^S : \langle u, \beta \rangle < \infty\},
    L^{\beta,2} = \{x \in ([0,\infty)^2)^S : \langle x, \beta \rangle \in [0,\infty)^2\},
    L^{\beta,E} = L^{\beta,2} \cap E^S.

For u \in R^S, let

    \|u\|_\beta = \sum_{k \in S} |u(k)| \beta(k).   (1.7)

Furthermore, for x = (x_1, x_2) \in L^{\beta,2}, let \|x\|_{\beta,2} = \|x_1\|_\beta + \|x_2\|_\beta. Note that \|\cdot\|_\beta defines a topology on L^\beta. Furthermore, \|\cdot\|_{\beta,2} defines a topology on L^{\beta,2} and on L^{\beta,E}. We will henceforth assume that these spaces are equipped with these topologies.

Let Af(k) = \sum_{l \in S} A(k,l) f(l) if the sum is well defined. Let A^n denote the n-th matrix power of A (note that this is well defined and finite by (1.3)) and define

    p_t(k,l) := e^{tA}(k,l) := \sum_{n=0}^\infty t^n A^n(k,l) / n!.

Let S denote the (not necessarily Markov) semigroup generated by A, that is,

    S_t f(k) = \sum_{l \in S} p_t(k,l) f(l) for t \ge 0.
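On a finite site space these objects can be checked numerically. The sketch below takes a nearest-neighbour kernel on S = {0, ..., N-1} and the weight beta(k) = 2^{-k}, computes the smallest constant Gamma satisfying (1.6), builds S_t = e^{tA} by its power series, and verifies the exponential bound \|S_t f\|_\beta \le e^{\Gamma t} \|f\|_\beta stated in (1.8); all concrete choices (kernel, weight, test function) are ours.

```python
import math

# Finite toy model for the Liggett-Spitzer setup; all choices are illustrative.
N = 8
beta = [2.0 ** (-k) for k in range(N)]

def a(k, l):
    # symmetric nearest-neighbour kernel on the path {0, ..., N-1}
    return 0.5 if abs(k - l) == 1 else 0.0

A = [[a(k, l) - (1.0 if k == l else 0.0) for l in range(N)] for k in range(N)]

# smallest Gamma with sum_l beta(l)(|A(k,l)| + |A(l,k)|) <= Gamma * beta(k), cf. (1.6)
Gamma = max(
    sum(beta[l] * (abs(A[k][l]) + abs(A[l][k])) for l in range(N)) / beta[k]
    for k in range(N)
)

def mat_vec(M, v):
    return [sum(M[k][l] * v[l] for l in range(N)) for k in range(N)]

def semigroup(M, f, t, terms=60):
    # S_t f = sum_n t^n M^n f / n!, truncated power series
    out, term = list(f), list(f)
    for n in range(1, terms):
        term = [t / n * w for w in mat_vec(M, term)]
        out = [o + w for o, w in zip(out, term)]
    return out

def beta_norm(f):
    return sum(abs(f[k]) * beta[k] for k in range(N))

f = [float(1 + (3 * k) % 5) for k in range(N)]  # some nonnegative test function
t = 0.5
Stf = semigroup(A, f, t)
assert beta_norm(Stf) <= math.exp(Gamma * t) * beta_norm(f) + 1e-9
```

For this kernel and weight the computed constant is Gamma = 4.5, attained at the interior sites.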
We will use the notation Af, S_t f and so on also for [0,\infty)-valued functions f with the obvious meaning. Note that for f \in L^\beta, the expressions Af and S_t f are well defined and that (recall \Gamma from (1.6))

    \|Af\|_\beta \le \Gamma \|f\|_\beta and \|S_t f\|_\beta \le e^{\Gamma t} \|f\|_\beta.   (1.8)

Let \bar A(k,l) = A(k,l)^+ be the entrywise positive part of A. Denote by (\bar S_t)_{t \ge 0} the semigroup generated by \bar A, that is, \bar S_t = e^{t \bar A} = \sum_{n=0}^\infty t^n \bar A^n / n!. Clearly, for any f \in L^\beta and k \in S, we have

    Af(k) \le \bar A f(k), S_t f(k) \le \bar S_t f(k) and f(k) \le \bar S_t f(k).

As above, it is easy to check that

    \|\bar A f\|_\beta \le \Gamma \|f\|_\beta and \|\bar S_t f\|_\beta \le e^{\Gamma t} \|f\|_\beta for all t \ge 0.   (1.9)

Therefore, we trivially have

    \bar A f(k) \le \Gamma \|f\|_\beta \, \beta(k)^{-1} for all f \in L^\beta,   (1.10)
    \bar S_t f(k) \le e^{\Gamma t} \|f\|_\beta \, \beta(k)^{-1} for all f \in L^\beta, t \ge 0.   (1.11)

All the estimates (1.8)-(1.11) also hold for the transposed matrix A^* and the derived objects \bar A^*, \bar S^* and so on.

Let D_{L^{\beta,E}} = D_{L^{\beta,E}}[0,\infty) be the Skorohod space of càdlàg L^{\beta,E}-valued functions. We will employ a martingale problem in order to characterize the (bivariate) process X \in D_{L^{\beta,E}} that will be the limit of the Dawson-Perkins models as \gamma \to \infty. In order to formulate this martingale problem for X, we need some more notation. For x = (x_1, x_2) and y = (y_1, y_2) \in R^2 we introduce the lozenge product

    x \diamond y := -(x_1 + x_2)(y_1 + y_2) + i (x_1 - x_2)(y_1 - y_2)

(with i = \sqrt{-1}) and define F(x,y) = \exp(x \diamond y). Note that the lozenge product defines a symmetric bilinear form; in particular, x \diamond y = y \diamond x. Some more properties of F and the lozenge product can be found in [KM10, Lemma 2.2, Corollaries 2.3, 2.4]. For x, y \in (R^2)^S, we write

    \langle\langle x, y \rangle\rangle = \sum_{k \in S} x(k) \diamond y(k)

whenever the infinite sum is well defined, and let

    H(x,y) = \exp(\langle\langle x, y \rangle\rangle).   (1.12)

Define

    L_{f,2} = \{y \in ([0,\infty)^2)^S : y(k) \ne 0 for only finitely many k \in S\}   (1.13)

and

    L_{f,E} = L_{f,2} \cap E^S.   (1.14)

Finally, define the spaces

    L^\beta_\infty = \{f \in [0,\infty)^S : \langle f, g \rangle < \infty for all g \in L^\beta\} = \{f \in L^\beta : \sup_{k \in S} f(k)/\beta(k) < \infty\}   (1.15)

and

    L^{\beta,E}_\infty = \{\eta = (\eta_1, \eta_2) \in E^S : \eta_1, \eta_2 \in L^\beta_\infty\}.

As a subspace, L^\beta_\infty inherits the norm of L^\beta. Note that the function H(x,y) is well defined if either x \in (R^2)^S and y \in L_{f,E} or x \in L^{\beta,E} and y \in L^{\beta,E}_\infty.

1.1 Martingale Problem
Our main theorem is the following.
Theorem 1.1 (a)
For all x \in L^{\beta,E}, there exists a unique solution X \in D_{L^{\beta,E}} of the following martingale problem: For each y \in L_{f,E}, the process M^{x,y} defined by

    M^{x,y}_t := H(X_t, y) - H(x, y) - \int_0^t \langle\langle A X_s, y \rangle\rangle H(X_s, y) \, ds   (MP)

is a martingale with M^{x,y}_0 = 0.

(b) For any x \in L^{\beta,E} and y \in L^{\beta,E}_\infty, the process M^{x,y} is well defined and is a martingale.

(c) Denote by P_x the distribution of X with X_0 = x. Then (P_x)_{x \in L^{\beta,E}} is a strong Markov family.

1.2 Stochastic integral equation

Unfortunately, the characterization of X as the solution of the martingale problem (MP) does not shed much light on properties of the process X such as: Is X continuous or discontinuous? If it is discontinuous, what is the structure of jump formation? These questions will be answered by a different representation of X as a solution to a system of stochastic differential equations of jump type. We will see that the coordinate processes of X are so-called purely discontinuous martingales, and we will give a precise quantitative statement about the distribution of jumps.

Before we give the exact description, let us briefly and roughly recall the concept of stochastic integrals with respect to Poisson point measures. Let \nu be a finite measure on some Borel space F (which will be taken to be [0,\infty) \times E later) and assume that N is a Poisson point process on [0,\infty) \times F with intensity measure N' := \lambda \otimes \nu (here \lambda denotes the Lebesgue measure). Furthermore, let (Z_t)_{t \ge 0} be an R^F-valued predictable process (with respect to the filtration (\sigma(N([0,t] \times \cdot)))_{t \ge 0}). Then define the integral

    (Z * N)_t(\omega) := \int_0^t \int_F Z_s(\omega; x) N(\omega; ds \otimes dx) = \sum_{s \le t, x \in F} Z_s(\omega; x) N(\omega; \{s\} \times \{x\}).

Note that the sum is finite since the intensity measure \nu is finite. Now, define the so-called martingale measure M := N - N'.
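The mean-zero property of the compensated integral can be illustrated with a toy simulation. In the sketch below the finite measure \nu (total mass 2, uniform jump marks on (0,1)), the integrand Z_s(x) = x and all numbers are our own choices, not from the paper; the compensator of (Z * N)_t is then t \int x \nu(dx) = 1 for t = 1.

```python
import math
import random

random.seed(0)

def poisson(lam):
    # Knuth's multiplicative method for a Poisson(lam) sample
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# nu = 2 * Uniform(0,1) on F = (0,1); Z_s(x) = x; time horizon t = 1
t, mass, nsim = 1.0, 2.0, 20000
total = 0.0
for _ in range(nsim):
    n = poisson(mass * t)                          # atoms of N in [0,t] x (0,1)
    zn = sum(random.random() for _ in range(n))    # (Z * N)_t: sum of the marks
    total += zn - t * mass * 0.5                   # (Z * M)_t = (Z * N)_t - compensator
mean_zm = total / nsim                             # should be close to 0
```

Averaging over many runs, mean_zm is close to 0, in line with Z * M being a martingale started at 0.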
Then

    (Z * M)_t := (Z * N)_t - \int_0^t \int_F Z_s(\omega; x) N'(ds \otimes dx)

is well defined for almost all \omega if, for example,

    E[\int_0^t \int_F |Z_s(x)| N'(ds \otimes dx)] < \infty.   (1.16)

In this case, Z * M is an integrable process and is, in fact, a martingale (here we use that Z is predictable). Now, by some L^1-approximation procedure, the assumption that \nu be finite can be weakened to \sigma-finiteness if condition (1.16) is fulfilled. In this case, both Z * N and Z * N' are well defined. However, using an L^2-approximation scheme (similarly as for the construction of infinitely divisible random variables with general Lévy measure), we can define Z * M even if only

    E[(\int_0^t \int_F Z_s(x)^2 N(ds \otimes dx))^{1/2}] < \infty for all t \ge 0.   (1.17)

In this case, Z * M is still a local martingale. It is purely discontinuous in the sense that it is orthogonal to all continuous local martingales. This general construction of stochastic integrals with respect to integer valued martingale measures is performed in full generality, for example, in [JS87, Section II.1d].

The process that we will construct does not have second moments, but we will show that it has p-th moments of all orders p \in [1,2). For p < 2, we have the simple estimate (\sum a_i^2)^{1/2} \le (\sum |a_i|^p)^{1/p} for a_i \in R. Hence, using Jensen's inequality, as pointed out in the proof of [LM05, Lemma 3.1], it is enough to show that for some p \in (1,2),

    E[\int_0^t \int_F |Z_s(x)|^p N'(ds \otimes dx)] < \infty for all t \ge 0.   (1.18)

In fact, following the proof of [LM05, Lemma 3.1] (see also [JS87, Proposition I.1.47(c)]), one readily gets that if condition (1.18) holds for all p \in (1,2), then Z * M is an L^p-martingale for any p \in [1,2).

Our aim is to construct X such that the coordinate processes solve a system of stochastic integral equations with F = [0,\infty) \times E. The first step is, of course, to describe the intensity measure on F. Then we formulate the stochastic integral equation and state in a theorem that it has a unique (weak) solution. The construction of the solution will be performed by an approximation scheme with finite intensity measures and finite site spaces. Uniqueness will be shown using a self-duality of the solution.

The stochastic parts of the single coordinates in the Dawson-Perkins process defined in (1.1) are two-dimensional isotropic diffusions and are hence time-transformed planar Brownian motions. When we speed up these motions, at any positive time, they will be close to their absorbing set E. Hence, a crucial role in the subsequent considerations will be played by the harmonic measure Q_x of planar Brownian motion B on (0,\infty)^2. That is, if B = (B_1, B_2) is a Brownian motion in R^2 started at x \in [0,\infty)^2 and \tau = \inf\{t > 0 : B_t \notin (0,\infty)^2\}, then we define Q_x = P_x[B_\tau \in \cdot].

Lemma 1.2 If x = (u,v) \in (0,\infty)^2, then the harmonic measure Q_x has a one-dimensional Lebesgue density on E that is given by

    Q_{(u,v)}(d(\bar u, \bar v)) = (4/\pi) u v \bar u [4 u^2 v^2 + (\bar u^2 + v^2 - u^2)^2]^{-1} d\bar u, if \bar v = 0,
                                 = (4/\pi) u v \bar v [4 u^2 v^2 + (\bar v^2 + u^2 - v^2)^2]^{-1} d\bar v, if \bar u = 0.   (1.19)

Furthermore, trivially we have Q_x = \delta_x if x \in E.
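The display (1.19) above is our reconstruction of a garbled formula (obtained by pulling the Cauchy harmonic measure of the half plane back through the map z -> z^2). As a sanity check, the reconstructed density should integrate to 1 over the two boundary axes; a numeric sketch:

```python
import math

def q_density(u, v, ubar):
    # reconstructed density of Q_{(u,v)} on the axis {vbar = 0}; by symmetry
    # the {ubar = 0} part is q_density(v, u, vbar)
    return (4 / math.pi) * u * v * ubar / (
        4 * u * u * v * v + (ubar * ubar + v * v - u * u) ** 2
    )

def axis_mass(u, v, n=200000):
    # integrate over ubar in (0, infinity) via the substitution ubar = t/(1-t)
    total = 0.0
    for i in range(n):
        t = (i + 0.5) / n
        ubar = t / (1 - t)
        total += q_density(u, v, ubar) / (1 - t) ** 2 / n
    return total

u, v = 1.0, 0.5
total = axis_mass(u, v) + axis_mass(v, u)
# the Brownian motion exits the quadrant with probability 1
assert abs(total - 1.0) < 1e-3
```

The split of mass between the two axes also matches the closed form (1/\pi)(\pi/2 + \arctan((u^2 - v^2)/(2uv))) obtained from the Cauchy distribution, which is a further check on the reconstruction.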
Formula (1.19) appears in the remark on page 1094 of [DP98] and can be derived by recalling that the Cauchy distribution is the harmonic measure for planar Brownian motion on the upper half plane and then applying the conformal map z \mapsto \sqrt{z} (identifying R^2 with C) that maps the half plane to the quadrant. A more formal proof of this lemma is deferred to the appendix.

As the next goal is to define a measure for the jumps that drive the process X, we need to describe the infinitesimal dynamics of X. These will be defined in terms of the \sigma-finite measure \nu on E that arises as the vague limit (on E \setminus \{(1,0)\}) of \epsilon^{-1} Q_{(1,\epsilon)} as \epsilon \to 0. Using (1.19), it is easy to see that \nu has a one-dimensional Lebesgue density given by

    \nu(d(u,v)) = (4/\pi) u (1 - u^2)^{-2} du, if v = 0,
                = (4/\pi) v (1 + v^2)^{-2} dv, if u = 0.   (1.20)

We use \nu to define the Poisson point process (PPP) that will be the driving force of the equations. Let N be the PPP on S \times R_+ \times R_+ \times E with intensity

    N' = \ell_S \otimes \lambda \otimes \lambda \otimes \nu,   (1.21)

where \lambda is the Lebesgue measure on R_+ and \ell_S is the counting measure on S. The first R_+ is used as time set, while the second R_+ is used to model the (predictable) intensity I(X_{t-}; k) at which jumps at site k \in S come, depending on the current state X_{t-}. Now assume that F = (F_t)_{t \ge 0} is a filtration that fulfills the usual hypotheses and is such that ((N - N')(\{k\} \times [0,t] \times A \times B))_{t \ge 0} is an F-martingale for all k \in S and measurable A \subset R_+ and B \subset E with \lambda(A) \nu(B) < \infty. Finally, define the F-martingale measure

    M := N - N'.   (1.22)

The measure \nu is the limit of the Q_x only at the point (1,0) \in E. The limits \nu_{(u,0)} of \epsilon^{-1} Q_{(u,\epsilon)} and \nu_{(0,v)} of \epsilon^{-1} Q_{(\epsilon,v)} can be obtained by simple transformations of \nu (see [KM10, discussion before (5.5)]): For suitable f : E \to R, we have

    \int_E f(y) \nu_{(u,0)}(dy) = (1/u) \int_E f(u (y_1, y_2)) \nu(dy)

and

    \int_E f(y) \nu_{(0,v)}(dy) = (1/v) \int_E f(v (y_2, y_1)) \nu(dy).

Hence, if we define the functions

    J_i(y,z) = y_2 z_{3-i} + (y_1 - 1) z_i for y, z \in E, i = 1, 2,   (1.23)

and J = (J_1, J_2), then for z \in E, we get

    (z_1 + z_2) \int_E f(y') \nu_z(dy') = \int_E f(z + J(y,z)) \nu(dy).   (1.24)

This motivates the following definitions. Define the functions I_1, I_2 and I := I_1 + I_2 that will serve as intensities for the driving noise by

    I_i(x;k) := 1_{\{x_{3-i}(k) > 0\}} \bar A x_i(k) / x_{3-i}(k) for x \in L^{\beta,E}, k \in S, i = 1, 2.   (1.25)

Let x \in L^{\beta,E}.
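Two quick numeric checks on the objects just introduced, under our reconstruction of the garbled displays (1.20) and (1.23): the total mass of \nu on the type-changing axis {u = 0} should be 2/\pi (the finite rate of type changes used below), the mean \int y_2 \nu(dy) equals 1, and a jump z -> z + J(y,z) from a state on one axis lands at (y_1, y_2) z_1 resp. (y_2, y_1) z_2.

```python
import math

def nu_v(v):
    # reconstructed density of nu on the axis {u = 0}: (4/pi) v (1 + v^2)^{-2}
    return (4 / math.pi) * v / (1 + v * v) ** 2

n = 200000
mass = mean = 0.0
for i in range(n):
    t = (i + 0.5) / n
    v = t / (1 - t)                 # map (0,1) -> (0, infinity)
    w = nu_v(v) / (1 - t) ** 2 / n  # density times Jacobian times step
    mass += w                       # nu({y : y_2 > 0})
    mean += v * w                   # integral of y_2 against nu
assert abs(mass - 2 / math.pi) < 1e-3
assert abs(mean - 1.0) < 1e-3

def J(y, z):
    # reconstructed jump map: J_i(y,z) = y_2 * z_{3-i} + (y_1 - 1) * z_i
    return (y[1] * z[1] + (y[0] - 1) * z[0],
            y[1] * z[0] + (y[0] - 1) * z[1])

def jump(z, y):
    j = J(y, z)
    return (z[0] + j[0], z[1] + j[1])

assert jump((3.0, 0.0), (0.0, 2.0)) == (0.0, 6.0)  # mark on the v-axis: type change
assert jump((3.0, 0.0), (1.5, 0.0)) == (4.5, 0.0)  # mark on the u-axis: size change only
```

Note that the u-axis density (4/\pi) u (1 - u^2)^{-2} is not integrable near u = 1, matching the statement below that small relative size changes come at an infinite rate.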
A pair (N, X) is called a weak solution of the following system of stochastic integral equations (for t \ge 0, k \in S, i = 1, 2)

    X_{i,t}(k) = x_i(k) + \int_0^t A X_{i,s}(k) \, ds + \int_0^t \int_{[0,\infty) \times E} J_i(y, X_{s-}(k)) 1_{[0, I(X_{s-}; k)]}(a) M(\{k\}, ds, d(a,y))   (1.26)

if N is a PPP as described in (1.21) and X is an F-adapted D_{L^{\beta,E}}-valued process such that (1.26) holds for all t \ge 0 and k \in S. (Note that R_+ \times E plays the rôle of F in the considerations around (1.16) and that (a,y) plays the rôle of x there.) We say that the solution is unique if the distribution of X is the same for all weak solutions.

In order to grasp the intuitive meaning of (1.26), first consider the case X_{i,s-}(k) > 0. Then Poisson points y \in E come at the rate (\bar A X_{3-i,s-}(k) / X_{i,s-}(k)) \nu(dy). The point y is turned into a jump X_s(k) - X_{s-}(k) of size J(y, X_{s-}(k)). According to (1.24), this means that jumps from X_{s-}(k) to some y' \in E come at a rate \bar A X_{3-i,s-}(k) \nu_{X_{s-}(k)}(dy') as desired. A similar reasoning holds for the case X_{3-i,s-}(k) > 0.

The stochastic integral

    M_{i,t}(k) := \int_0^t \int_{[0,\infty) \times E} J_i(y, X_{s-}(k)) 1_{[0, I(X_{s-}; k)]}(a) M(\{k\}, ds, d(a,y))   (1.27)

does not make sense as a Lebesgue-Stieltjes integral but is understood in the sense explained around (1.17), with x replaced by (a,y) and Z_s(\omega; x) replaced by J(y, X_{s-}(k)(\omega)) 1_{[0, I(X_{s-}(\omega); k)]}(a). Furthermore, note that the left limit X_{s-} is used so as to make the integrand predictable. In order that the integral be well defined, with a view to (1.18), it is enough to check that for some p \in (1,2),

    E[\int_0^t \int_E |J_i(y, X_{s-}(k))|^p I(X_{s-}; k) \, ds \, \nu(dy)] < \infty.   (1.28)

In fact, if condition (1.28) holds for all p \in (1,2), then (M_{i,t}(k))_{t \ge 0}, i = 1, 2, k \in S, is an L^p-martingale for any p \in [1,2).

The function J is used in order to define the dynamics of X in terms of only one source of noise that produces "standard jumps". If M(\{k\}, \cdot, [0, I(X_{s-}; k)] \times \cdot) has an atom at \{s\} \times \{y\}, then the actual jump of X(k) at time s is from X_{s-}(k) to

    X_s(k) = X_{s-}(k) + J(y, X_{s-}(k)) = (y_1, y_2) X_{1,s-}(k), if X_{1,s-}(k) > 0,
                                         = (y_2, y_1) X_{2,s-}(k), if X_{2,s-}(k) > 0.

If y_2 > 0, then the coordinate X_{s-}(k) changes its type. Since \nu(\{y : y_2 > 0\}) = 2/\pi < \infty (see Lemma A.1), the changes of type come at a finite rate as long as X_s(k) is bounded away from 0. On the other hand, if y_1 > 0, then the jumps change the size of the population at site k by the factor y_1, but not its type. By the definition of the measure \nu, the jumps for which |y_1 - 1| is small come at an infinite rate.

Theorem 1.3
For any x \in L^{\beta,E}, there exists a unique weak solution (N, X) of (1.26), and X solves (MP).

The construction of the solution of (1.26) requires a lot of effort, including an involved approximation scheme. If we were interested only in the existence of solutions of the martingale problem (MP), we could follow an easier route by using the Trotter product approach as performed in [KO10].

It is natural to ask whether the coordinate processes of the solution of (1.26) ever hit the point (0,0). For the case of S being a singleton, this question is answered in [KM10, Thm. 1.7].

1.3 Convergence as the rates go to infinity
Now let us go back to the Dawson-Perkins model. We would like to justify our initial motivation: the process described in Theorems 1.1 and 1.3 is indeed the limit of the Dawson-Perkins process as \gamma \to \infty. Let Y^\gamma = (Y^\gamma_1, Y^\gamma_2) be a solution of (1.1) with Y^\gamma_0 \in L^{\beta,E}. With our slightly relaxed assumptions on A, this process can be constructed in a way similar to the construction of Dawson and Perkins (see also [CDG04]). Furthermore, let X be a solution of (MP) with X_0 = Y^\gamma_0.

Clearly, the continuous processes Y^\gamma cannot converge to the discontinuous process X in the Skorohod topology on D_{L^{\beta,E}}. Hence, in order to get a limit theorem, we use the weaker Meyer-Zheng "pseudo-path topology" (see [MZ84]). Roughly speaking, convergence in the pseudo-path topology means convergence at Lebesgue-almost all time points. More precisely, for any f \in D_{L^{\beta,2}}, let \psi(f) denote the image measure on [0,\infty) \times L^{\beta,2} of e^{-t} dt under the map t \mapsto (t, f(t)). Note that \psi is injective, and hence weak convergence in the space of probability measures on [0,\infty) \times L^{\beta,2} defines a notion of convergence on D_{L^{\beta,2}} that is called the "pseudo-path topology" by Meyer and Zheng [MZ84].

For the convergence of Y^\gamma to X, it is not crucial that the noise term in (1.1) has the special product form. In fact, it is only necessary that the noise is isotropic, strictly positive in (0,\infty)^2 and vanishing at the boundary in such a way that the equation admits a solution with each coordinate nonnegative. Hence, consider the equation

    Y_{i,t}(k) = Y_{i,0}(k) + \int_0^t \sum_{l \in S} A(k,l) Y_{i,s}(l) \, ds + \int_0^t \gamma^{1/2} \sigma(Y_s(k)) \, dW_{i,s}(k),   t \ge 0, k \in S, i = 1, 2.   (1.29)

Here (W_i(k), k \in S, i = 1, 2) is an independent family of one-dimensional Brownian motions and \sigma : [0,\infty)^2 \to [0,\infty) is measurable and fulfils the following assumptions:

Assumption 1.4
(i) \sigma(x) = 0 for all x \in E.
(ii) \inf \sigma(C) > 0 for any compact C \subset (0,\infty)^2.
(iii) For each y \in L^{\beta,2} and \gamma > 0, (1.29) admits a (weak) L^{\beta,2}-valued solution.

Of course, \sigma(x) = \sqrt{x_1 x_2} is the case considered in (1.1), and it satisfies the above assumptions.

Theorem 1.5
Assume that (i) and (ii) hold and that for each \gamma > 0, we have chosen an L^{\beta,2}-valued solution Y^\gamma of (1.29). Assume that X_0 := Y^\gamma_0 \in L^{\beta,E} does not depend on \gamma. Then, for each sequence \gamma_n \to \infty, in D_{L^{\beta,2}} equipped with the Meyer-Zheng pseudo-path topology, we have the convergence in law

    Y^{\gamma_n} ==> X as n \to \infty.

1.4 Organization of the paper

We prove the existence parts of Theorems 1.1 and 1.3 via an approximation procedure. In Section 2, we will construct a family (X^{(m,\epsilon)})_{m \in N, \epsilon > 0} of processes which
• live on finite site spaces S_m \subset S,
• have a finite jump measure \nu_\epsilon instead of \nu, obtained by suppressing certain small jumps, and
• have the intensities of the driving noise (see (1.25)) truncated for small values x_i(k) \in (0,\epsilon).
In that section, we further derive moment estimates for the truncated measures and processes.

In Section 3, we show that the sequence (X^{(m,\epsilon)})_{m \in N, \epsilon > 0} is tight and that any (weak) limit point solves the martingale problem (MP). Sections 2 and 3 are the cornerstone for the existence part in Theorem 1.3. In Section 4, we will show uniqueness for (MP). Section 5 is devoted to the proof of Theorem 1.3, based on Sections 3 and 4. Theorem 1.5 is proved in Section 6.

2 Construction of the approximating processes

The aim of this section is to construct the family of approximating processes (X^{(m,\epsilon)}) that was announced in Section 1.4. To this end, define a sequence (S_m)_{m \in N} of finite subsets of S such that S_m \uparrow S as m \to \infty. The process X^{(m,\epsilon)} formally lives on S, but we keep all coordinates in S \setminus S_m fixed. We will define a family of approximating processes

    X^{(m,\epsilon)} = ((X^{(m,\epsilon)}_{1,t}(k), X^{(m,\epsilon)}_{2,t}(k)) \in E, k \in S, t \ge 0)

in such a way that they may change values only for k \in S_m and stay constant for k \in S \setminus S_m. To this end, let us define the matrix A^{(m)} by

    A^{(m)}(k,l) = A(k,l), if k, l \in S_m,
                 = 0, otherwise.
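The effect of this truncation can be seen on a finite window: zeroing A outside S_m and passing to the entrywise positive part only increases the generated semigroup on nonnegative functions, which is the domination asserted in (2.1) below. The sketch uses our own toy kernel and sizes, and approximates e^{tA} by a truncated power series.

```python
# Toy check that truncating A to S_m and passing to the positive-part matrix
# only increases the semigroup on nonnegative functions; all choices are ours.
N = 7
S_m = {2, 3, 4}

def a(k, l):
    return 0.5 if abs(k - l) == 1 else 0.0

A = [[a(k, l) - (1.0 if k == l else 0.0) for l in range(N)] for k in range(N)]
A_m = [[A[k][l] if k in S_m and l in S_m else 0.0 for l in range(N)] for k in range(N)]
Abar = [[max(A[k][l], 0.0) for l in range(N)] for k in range(N)]  # entrywise positive part

def mat_vec(M, v):
    return [sum(M[k][l] * v[l] for l in range(N)) for k in range(N)]

def semigroup(M, f, t, terms=60):
    # e^{tM} f via its power series
    out, term = list(f), list(f)
    for n in range(1, terms):
        term = [t / n * w for w in mat_vec(M, term)]
        out = [o + w for o, w in zip(out, term)]
    return out

f = [1.0] * N
t = 0.4
Smf = semigroup(A_m, f, t)     # S^{(m)}_t f
Sbarf = semigroup(Abar, f, t)  # bar S_t f
assert all(Smf[k] <= Sbarf[k] + 1e-9 for k in range(N))
assert all(f[k] <= Sbarf[k] + 1e-9 for k in range(N))
```

The underlying reason is entrywise monotonicity: A^{(m)} + I and \bar A + I are nonnegative matrices with A^{(m)} + I \le \bar A + I entrywise, and matrix powers preserve this order.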
Let (S^{(m)}_t)_{t \ge 0} be the semigroup generated by A^{(m)} and let p^{(m)}_t = e^{t A^{(m)}} denote its kernel, that is, for f \in L^\beta,

    S^{(m)}_t f(k) = \sum_{l \in S} p^{(m)}_t(k,l) f(l).

Define \bar A^{(m)}(k,l) = A^{(m)}(k,l)^+ and let (\bar S^{(m)}_t)_{t \ge 0} be the semigroup generated by \bar A^{(m)}. Clearly, for any f \in L^\beta,

    A^{(m)} f(k) \le \bar A^{(m)} f(k) \le \bar A f(k) for all k \in S,
    S^{(m)}_t f(k) \le \bar S^{(m)}_t f(k) \le \bar S_t f(k) for all k \in S.   (2.1)

We denote by a \vee b := \max(a,b) the maximum and by a \wedge b := \min(a,b) the minimum of two numbers.

Fix \epsilon \in (0,1) and m \in N and define the modified jump rate (compare (1.25))

    I^{(m,\epsilon)}_i(x;k) := 1_{\{x_{3-i}(k) > 0\}} \bar A^{(m)} x_i(k) / (x_{3-i}(k) \vee \epsilon) for i = 1, 2,   (2.2)

and I^{(m,\epsilon)}(x;k) = I^{(m,\epsilon)}_1(x;k) + I^{(m,\epsilon)}_2(x;k). For y, z \in E and i = 1, 2, define (compare (1.23))

    J^{(m,\epsilon)}_i(y,z) = y_2 (z_{3-i} \vee \epsilon) 1_{\{z_{3-i} > 0\}} + (y_1 - 1) z_i   (2.3)

and J^{(m,\epsilon)} = (J^{(m,\epsilon)}_1, J^{(m,\epsilon)}_2). Note that these definitions are pretty much in line with the definitions of I and J in (1.25) and (1.23), but here small positive values of z_i are replaced by \epsilon. This handles the problem of increasing jump rates when a coordinate approaches 0.

Next, we take care of the problem that the jump measure \nu is infinite. We introduce the (finite) truncated jump measure \nu_\epsilon := \nu|_{E_\epsilon}, where E_\epsilon := \{y \in E : y_1 \notin (1 - \epsilon, 1 + \epsilon')\}, that is,

    \nu_\epsilon(dy) = \nu(dy) 1_{\{y_1 \notin (1 - \epsilon, 1 + \epsilon')\}}.   (2.4)

Here, \epsilon' := \epsilon'(\epsilon) \in [\epsilon/2, \epsilon] is chosen according to Lemma A.3 such that

    \int_{E_\epsilon} (y_1 - 1) \nu(dy) = 0.

This particular form of the truncated jump measure is helpful since it preserves the expectation of jumps. Note that

    \int_E J^{(m,\epsilon)}_i(y,z) \nu(dy) = \int_E J^{(m,\epsilon)}_i(y,z) \nu_\epsilon(dy) = (z_{3-i} \vee \epsilon) 1_{\{z_{3-i} > 0\}}

and

    I^{(m,\epsilon)}(x;k) \int_E J^{(m,\epsilon)}_i(y, x(k)) \nu_\epsilon(dy) = \bar A^{(m)} x_i(k) 1_{\{x_{3-i}(k) > 0\}}.   (2.5)

For the approximating process (but not for the limiting process X), we have to take special care of the coordinates that assume the value 0. If for a given coordinate k we have x(k) = (0,0), then the drift A^{(m)} x(k) would drive the process out of the state space L^{\beta,E} immediately. (The same problem would occur for a coordinate with x_i(k) > 0 and A^{(m)} x_{3-i}(k) > 0.) Hence we model the drift away from 0 by jumps. There are several ways to do so; for example, one could use for each i = 1, 2 a Poisson process with jump size \epsilon and rate \epsilon^{-1} \bar A^{(m)} x_i(k). Here, in order to stick formally with the noise process N defined in (1.21) (and for no other reason), we define two independent (and independent of N) noises N^1 and N^2 with the same distribution as N and let M^i := N^i - N' denote the compensated jump measure, i = 1, 2. Finally, assume that (F_t)_{t \ge 0} is the filtration generated by N, N^1 and N^2 and that it fulfils the usual conditions. The intensities of the jumps away from 0 will be given by

    I^{0,(m,\epsilon)}_i(x;k) := 1_{\{x(k) = (0,0)\}} \epsilon^{-1} \bar A^{(m)} x_i(k).   (2.6)

Note that

    \int_E \epsilon y_2 \, I^{0,(m,\epsilon)}_i(x;k) \nu(dy) = 1_{\{x(k) = (0,0)\}} \bar A^{(m)} x_i(k).   (2.7)

Now, given a process X^{(m,\epsilon)} which is adapted to the filtration (F_t)_{t \ge 0}, denote by \Delta X^{(m,\epsilon)}_s = X^{(m,\epsilon)}_s - X^{(m,\epsilon)}_{s-} the jump of X^{(m,\epsilon)} at time s. Now we can define the process X^{(m,\epsilon)} as the unique strong solution of the system of equations

    X^{(m,\epsilon)}_{i,t}(k) = x_i(k) + \int_0^t \int_{[0,\infty) \times E_\epsilon} J^{(m,\epsilon)}_i(y, X^{(m,\epsilon)}_{s-}(k)) 1_{[0, I^{(m,\epsilon)}(X^{(m,\epsilon)}_{s-}; k)]}(a) N(\{k\}, ds, d(a,y))
        + \int_0^t \int_{[0,\infty) \times E} \epsilon y_2 \, 1_{[0, I^{0,(m,\epsilon)}_i(X^{(m,\epsilon)}_{s-}; k)]}(a) N^i(\{k\}, ds, d(a,y))
        + \int_0^t A^{(m)} X^{(m,\epsilon)}_{i,s}(k) 1_{\{X^{(m,\epsilon)}_{i,s}(k) > 0\}} \, ds, for k \in S_m, i = 1, 2,   (2.8)

    X^{(m,\epsilon)}_t(k) = x(k) for t \ge 0, k \in S \setminus S_m.   (2.9)

Note that the middle term in (2.8) represents the jumps in the case where X^{(m,\epsilon)}_{s-}(k) = (0,0). As \epsilon \downarrow 0, the compensated version of this term vanishes. Note that J^{(m,\epsilon)} is defined in such a way that the jumps indeed do not drive the coordinate processes out of the space E; that is, z + J^{(m,\epsilon)}(y,z) \in E for all y, z \in E. Also note that the middle term in (2.8) does not drive the coordinates out of E since it is nonzero only if the coordinate takes the value (0,0), and in this case the value of only one type changes by a jump.

Since the jumps according to \nu_\epsilon have a finite mean, the total mass process increases at most exponentially with the number of jumps that occur. Since the jump rate of X^{(m,\epsilon)} is bounded by \nu(E_\epsilon) \epsilon^{-1} |S_m| times the total mass, in each time interval there are in fact at most finitely many jumps. Hence the solution X^{(m,\epsilon)} of (2.8) and (2.9) is indeed well defined and unique.

We want to write the dynamics of X^{(m,\epsilon)} as a sum of the "heat flow" and the martingale term of compensated jumps. To this end, we define the martingale

    M^{(m,\epsilon)}_{i,t}(k) := \int_0^t \int_{[0,\infty) \times E_\epsilon} J^{(m,\epsilon)}_i(y, X^{(m,\epsilon)}_{s-}(k)) 1_{[0, I^{(m,\epsilon)}(X^{(m,\epsilon)}_{s-}; k)]}(a) M(\{k\}, ds, d(a,y))
        + \int_0^t \int_{[0,\infty) \times E} \epsilon y_2 \, 1_{[0, I^{0,(m,\epsilon)}_i(X^{(m,\epsilon)}_{s-}; k)]}(a) M^i(\{k\}, ds, d(a,y)).   (2.10)

Hence, by subtracting and adding the compensator terms in (2.8), we can rewrite (2.8) as

    X^{(m,\epsilon)}_{i,t}(k) = x_i(k) + M^{(m,\epsilon)}_{i,t}(k)
        + \int_0^t \int_{E_\epsilon} J^{(m,\epsilon)}_i(y, X^{(m,\epsilon)}_s(k)) I^{(m,\epsilon)}(X^{(m,\epsilon)}_s; k) \nu(dy) \, ds
        + \int_0^t \int_E \epsilon y_2 \, I^{0,(m,\epsilon)}_i(X^{(m,\epsilon)}_s; k) \nu(dy) \, ds
        + \int_0^t A^{(m)} X^{(m,\epsilon)}_{i,s}(k) 1_{\{X^{(m,\epsilon)}_{i,s}(k) > 0\}} \, ds, for k \in S_m, i = 1, 2.   (2.11)

Adding the last three terms in (2.11), we get (using (2.5) and (2.7) in the first equality, and noting that \bar A^{(m)} X^{(m,\epsilon)}_{i,s}(k) = A^{(m)} X^{(m,\epsilon)}_{i,s}(k) on the first two events since there X^{(m,\epsilon)}_{i,s}(k) = 0 and the off-diagonal entries of A^{(m)} are nonnegative)

    \int_0^t \int_{E_\epsilon} J^{(m,\epsilon)}_i(y, X^{(m,\epsilon)}_s(k)) I^{(m,\epsilon)}(X^{(m,\epsilon)}_s; k) \nu(dy) \, ds + \int_0^t \int_E \epsilon y_2 \, I^{0,(m,\epsilon)}_i(X^{(m,\epsilon)}_s; k) \nu(dy) \, ds + \int_0^t A^{(m)} X^{(m,\epsilon)}_{i,s}(k) 1_{\{X^{(m,\epsilon)}_{i,s}(k) > 0\}} \, ds
    = \int_0^t A^{(m)} X^{(m,\epsilon)}_{i,s}(k) (1_{\{X^{(m,\epsilon)}_{3-i,s}(k) > 0\}} + 1_{\{X^{(m,\epsilon)}_s(k) = (0,0)\}} + 1_{\{X^{(m,\epsilon)}_{i,s}(k) > 0\}}) \, ds
    = \int_0^t A^{(m)} X^{(m,\epsilon)}_{i,s}(k) \, ds.

Now, (2.11) can be rewritten as

    X^{(m,\epsilon)}_{i,t}(k) = x_i(k) + \int_0^t A^{(m)} X^{(m,\epsilon)}_{i,s}(k) \, ds + M^{(m,\epsilon)}_{i,t}(k), k \in S_m, i = 1, 2.   (2.12)

Remark 2.1
Our way of defining the approximating processes might look a bit special at first glance. However, the more naive idea of truncating the rates I, say at min(I, M) for some M > 0, and then letting M \to \infty along with \epsilon \downarrow 0 did not work, since it results in an additional drift term in (2.12) that we could not control. The other idea that naturally pops up is to suppress the jumps of small size on the absolute scale, that is, to suppress all jumps with |\Delta X_t(k)| \le \epsilon for some \epsilon. However, also in this case we could not control the additional error term.

The aim is to let \epsilon \downarrow 0 and m \to \infty and to show that X^{(m,\epsilon)} converges to a solution of (1.26). To this end, we need moment bounds on X^{(m,\epsilon)} that are uniform in m and \epsilon (Corollary 2.3); they will be derived from the following martingale decomposition for the product X^{(m,\epsilon)}_{1,t}(k_1) X^{(m,\epsilon)}_{2,t}(k_2).

Lemma 2.2
Let k_1, k_2 \in S with k_1 \ne k_2, and let t > 0. Then

    X^{(m,\epsilon)}_{i,t}(k_i) = S^{(m)}_t X^{(m,\epsilon)}_{i,0}(k_i) + \sum_{l \in S} \int_0^t p^{(m)}_{t-s}(k_i, l) \, dM^{(m,\epsilon)}_{i,s}(l)

and

    X^{(m,\epsilon)}_{1,t}(k_1) X^{(m,\epsilon)}_{2,t}(k_2) = S^{(m)}_t X^{(m,\epsilon)}_{1,0}(k_1) \, S^{(m)}_t X^{(m,\epsilon)}_{2,0}(k_2)
        - \sum_{l \in S} \int_0^t p^{(m)}_{t-s}(k_1, l) \, p^{(m)}_{t-s}(k_2, l) (X^{(m,\epsilon)}_{1,s}(l) \bar A^{(m)} X^{(m,\epsilon)}_{2,s}(l) + X^{(m,\epsilon)}_{2,s}(l) \bar A^{(m)} X^{(m,\epsilon)}_{1,s}(l)) \, ds + \tilde M^{(m,\epsilon)}_t,

where \tilde M^{(m,\epsilon)}_t is given by

    \tilde M^{(m,\epsilon)}_t = \sum_{l_1, l_2 \in S, l_1 \ne l_2} \int_0^t p^{(m)}_{t-s}(k_1, l_1) \, p^{(m)}_{t-s}(k_2, l_2) (X^{(m,\epsilon)}_{1,s-}(l_1) \, dM^{(m,\epsilon)}_{2,s}(l_2) + X^{(m,\epsilon)}_{2,s-}(l_2) \, dM^{(m,\epsilon)}_{1,s}(l_1)).

Proof. The first equation is just the mild form of (2.12). The second equality is an easy application of the integration by parts formula and the fact that

    X^{(m,\epsilon)}_{1,t}(k) X^{(m,\epsilon)}_{2,t}(k) = 0 for all t \ge 0, k \in S. ✷

Corollary 2.3
Let k_1, k_2 \in S and t > 0. Then we have

    E[X^{(m,\epsilon)}_{i,t}(k_i)] \le \bar S_t x_i(k_i)

and

    E[X^{(m,\epsilon)}_{1,t}(k_1) X^{(m,\epsilon)}_{2,t}(k_2)] \le \bar S_t x_1(k_1) \, \bar S_t x_2(k_2).

Proof.
Consider first the case where k_1, k_2 \in S_m. By Lemma 2.2 (dropping the nonpositive ds-term), we have

    X^{(m,\epsilon)}_{1,t}(k_1) X^{(m,\epsilon)}_{2,t}(k_2) \le S^{(m)}_t X^{(m,\epsilon)}_{1,0}(k_1) \, S^{(m)}_t X^{(m,\epsilon)}_{2,0}(k_2) + \tilde M^{(m,\epsilon)}_t.

For K > 0, define the stopping time \tau_K := \inf\{s \ge 0 : \sum_{l \in S_m} \sum_{i=1}^2 X^{(m,\epsilon)}_{i,s}(l) \ge K\} \wedge t. A simple stopping argument shows that

    S^{(m)}_{t-\tau_K} X^{(m,\epsilon)}_{1,\tau_K}(k_1) \, S^{(m)}_{t-\tau_K} X^{(m,\epsilon)}_{2,\tau_K}(k_2) \le S^{(m)}_t X^{(m,\epsilon)}_{1,0}(k_1) \, S^{(m)}_t X^{(m,\epsilon)}_{2,0}(k_2)
        + \sum_{l_1 \ne l_2} \int_0^{\tau_K} p^{(m)}_{t-s}(k_1, l_1) \, p^{(m)}_{t-s}(k_2, l_2) (X^{(m,\epsilon)}_{1,s-}(l_1) \, dM^{(m,\epsilon)}_{2,s}(l_2) + X^{(m,\epsilon)}_{2,s-}(l_2) \, dM^{(m,\epsilon)}_{1,s}(l_1)).

Since now the integrand is bounded and the integrators are martingales, the integral has expectation zero, and we get

    E[S^{(m)}_{t-\tau_K} X^{(m,\epsilon)}_{1,\tau_K}(k_1) \, S^{(m)}_{t-\tau_K} X^{(m,\epsilon)}_{2,\tau_K}(k_2)] \le S^{(m)}_t X^{(m,\epsilon)}_{1,0}(k_1) \, S^{(m)}_t X^{(m,\epsilon)}_{2,0}(k_2).

Since P[\tau_K = t] \to 1 as K \to \infty, the expression in the expectation tends to X^{(m,\epsilon)}_{1,t}(k_1) X^{(m,\epsilon)}_{2,t}(k_2). Using Fatou's lemma, we conclude

    E[X^{(m,\epsilon)}_{1,t}(k_1) X^{(m,\epsilon)}_{2,t}(k_2)] \le S^{(m)}_t X^{(m,\epsilon)}_{1,0}(k_1) \, S^{(m)}_t X^{(m,\epsilon)}_{2,0}(k_2);

by (2.1), the right hand side is bounded by \bar S_t x_1(k_1) \, \bar S_t x_2(k_2). ✷

Now we will give the L^p-bounds for the martingales M^{(m,\epsilon)}_i, i = 1, 2.

Lemma 2.4
For any $p \in (1, 2]$, there exists a constant $c_p < \infty$ such that for all $m \in \mathbb{N}$, $\epsilon > 0$, $k \in S$, $T > 0$ and $i = 1, 2$, we have
$$E\Big[\sup_{t \le T} \big|M^{(m,\epsilon)}_{i,t}(k)\big|^p\Big] \le c_p \int_0^T \big[(A+1)S_s x_1(k) + 1\big]\,\big[(A+1)S_s x_2(k) + 1\big]\, ds.$$
Proof.
First note that for $k \notin S_m$ we have $M^{(m,\epsilon)}_i(k) \equiv 0$. Hence assume $k \in S_m$. Recall the definitions of $M^{(m,\epsilon)}$ and $J$ in (2.10) and (1.23), respectively, and note that
$$E\Big[\sup_{t \le T} |M^{(m,\epsilon)}_{i,t}(k)|^p\Big] \le 3^{p-1} E\Big[\sup_{t \le T} |C_t|^p\Big] + 3^{p-1} E\Big[\sup_{t \le T} |D_t|^p\Big] + 3^{p-1} E\Big[\sup_{t \le T} |E_t|^p\Big] =: L_1 + L_2 + L_3,$$
where
$$C_t := \int_0^t \int_{[0,\infty) \times E_\epsilon} y\,\big(X^{(m,\epsilon)}_{3-i,s-}(k) \vee \epsilon\big)\, 1_{[0,\, I^{(m,\epsilon)}_i(X^{(m,\epsilon)}_{s-};\,k)]}(a)\, \tilde M\big(\{k\}, ds, d(a,y)\big),$$
$$D_t := \int_0^t \int_{[0,\infty) \times E_\epsilon} (y-1)\, X^{(m,\epsilon)}_{i,s-}(k)\, 1_{[0,\, I^{(m,\epsilon)}_{3-i}(X^{(m,\epsilon)}_{s-};\,k)]}(a)\, \tilde M\big(\{k\}, ds, d(a,y)\big),$$
$$E_t := \int_0^t \int_{[0,\infty) \times E} \epsilon y\, 1_{[0,\, I^{1,(m,\epsilon)}_i(X^{(m,\epsilon)}_{s-};\,k)]}(a)\, \tilde M_i\big(\{k\}, ds, d(a,y)\big)$$
are martingales of finite variation. As the point process $N$ has no double points, the square variation process of $C$ is
$$[C,C]_t = \int_0^t \int_{[0,\infty) \times E_\epsilon} y^2\,\big(X^{(m,\epsilon)}_{3-i,s-}(k) \vee \epsilon\big)^2\, 1_{[0,\, I^{(m,\epsilon)}_i(X^{(m,\epsilon)}_{s-};\,k)]}(a)\, N\big(\{k\}, ds, d(a,y)\big).$$
Let $m_{1,p} := \int_E |y-1|^p\, \nu(dy)$ and $m_{2,p} := \int_E y^p\, \nu(dy)$ denote the $p$-th moments of $\nu$. By Lemma A.5, both quantities are finite.
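As an aside, the identity driving this square-variation computation — that the expected square variation of a compensated Poisson integral equals the time integral of the second moment of the jump distribution against the jump intensity — is easy to check numerically in a toy case. The sketch below is purely illustrative; the rate, horizon and Exp(1) jump law are arbitrary choices, not objects from the paper.

```python
import numpy as np

# Toy check of the Poisson-integral isometry: for the compensated
# compound Poisson process M_T = (sum of jumps up to T) - lam*T*E[y],
# we have Var(M_T) = E[M_T^2] = lam*T*E[y^2].
rng = np.random.default_rng(0)
lam, T, n_paths = 5.0, 2.0, 20000

m2 = 2.0                      # E[y^2] for y ~ Exp(1)
predicted = lam * T * m2      # intensity integral of the squared jump

samples = np.empty(n_paths)
for i in range(n_paths):
    n_jumps = rng.poisson(lam * T)
    jumps = rng.exponential(1.0, size=n_jumps)
    samples[i] = jumps.sum() - lam * T * 1.0   # compensate with lam*T*E[y]

empirical = samples.var()
print(round(predicted, 2), round(empirical, 2))
```

With a fixed seed the Monte Carlo estimate lands well within sampling error of the predicted value; the same computation with $p$-th powers is what the moments $m_{1,p}$, $m_{2,p}$ control above.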
Hence, by the Burkholder-Davis-Gundy inequality (see, e.g., [DM83, Theorem VII.92]), we get with $c'_p = 3^{p-1}(4p)^p$
$$L_1 \le c'_p\, E\big[[C,C]^{p/2}_t\big] \le c'_p\, E\Big[\int_0^t \int_{[0,\infty)\times E_\epsilon} y^p \big(X^{(m,\epsilon)}_{3-i,s-}(k) \vee \epsilon\big)^p\, 1_{[0,\, I^{(m,\epsilon)}_i(X^{(m,\epsilon)}_{s-};\,k)]}(a)\, N\big(\{k\}, ds, d(a,y)\big)\Big]$$
$$= c'_p\, E\Big[\int_0^t \int_E y^p \big(X^{(m,\epsilon)}_{3-i,s}(k) \vee \epsilon\big)^p\, I^{(m,\epsilon)}_i(X^{(m,\epsilon)}_s; k)\, \nu(dy)\, ds\Big]$$
$$\le c'_p\, m_{2,p}\, E\Big[\int_0^T \big(X^{(m,\epsilon)}_{3-i,s}(k) \vee \epsilon\big)^{p-1}\, 1_{\{X^{(m,\epsilon)}_{3-i,s}(k) > 0\}}\, A^{(m)} X^{(m,\epsilon)}_{i,s}(k)\, ds\Big]$$
$$\le c'_p\, m_{2,p}\, E\Big[\int_0^T \big(X^{(m,\epsilon)}_{3-i,s}(k) + 1\big)\, 1_{\{X^{(m,\epsilon)}_{3-i,s}(k) > 0\}}\, A^{(m)} X^{(m,\epsilon)}_{i,s}(k)\, ds\Big]$$
$$\le c'_p\, m_{2,p} \int_0^T \big(S_s x_{3-i}(k) + 1\big)\, A S_s x_i(k)\, ds, \tag{2.13}$$
where the last inequality follows by Corollary 2.3 and (2.1). The last line of (2.13) is trivially bounded by
$$c'_p\, m_{2,p} \int_0^T \big[(A+1)S_s x_1(k) + 1\big]\big[(A+1)S_s x_2(k) + 1\big]\, ds. \tag{2.14}$$
Hence we are done with $L_1$. Similarly, we get for $L_3$ (by taking $X^{(m,\epsilon)}_{3-i,s}(k) = 0$) that
$$L_3 \le \epsilon^{p-1} c'_p\, m_{2,p} \int_0^T A S_s x_i(k)\, ds.$$
It remains to treat $L_2$. Again, by the Burkholder-Davis-Gundy inequality, we get
$$L_2 \le c'_p\, E\Big[\Big(\int_0^T \int_{[0,\infty)\times E_\epsilon} (y-1)^2\, X^{(m,\epsilon)}_{i,s-}(k)^2\, 1_{[0,\, I^{(m,\epsilon)}_{3-i}(X^{(m,\epsilon)}_{s-};\,k)]}(a)\, N\big(\{k\}, ds, d(a,y)\big)\Big)^{p/2}\Big]$$
$$\le c'_p\, E\Big[\int_0^T \int_{[0,\infty)\times E_\epsilon} |y-1|^p\, X^{(m,\epsilon)}_{i,s-}(k)^p\, 1_{[0,\, I^{(m,\epsilon)}_{3-i}(X^{(m,\epsilon)}_{s-};\,k)]}(a)\, N\big(\{k\}, ds, d(a,y)\big)\Big]$$
$$= c'_p\, E\Big[\int_0^T \int_E |y-1|^p\, X^{(m,\epsilon)}_{i,s}(k)^p\, I^{(m,\epsilon)}_{3-i}(X^{(m,\epsilon)}_s; k)\, \nu(dy)\, ds\Big]$$
$$\le c'_p\, m_{1,p}\, E\Big[\int_0^T \big(X^{(m,\epsilon)}_{i,s}(k)\big)^{p-1}\, A^{(m)} X^{(m,\epsilon)}_{3-i,s}(k)\, ds\Big]$$
$$\le c'_p\, m_{1,p}\, E\Big[\int_0^T \big(X^{(m,\epsilon)}_{i,s}(k) + 1\big)\, 1_{\{X^{(m,\epsilon)}_{i,s}(k) > 0\}}\, A^{(m)} X^{(m,\epsilon)}_{3-i,s}(k)\, ds\Big]$$
$$\le c'_p\, m_{1,p} \int_0^T \big(S_s x_i(k) + 1\big)\, A S_s x_{3-i}(k)\, ds, \tag{2.15}$$
where the last inequality follows by Corollary 2.3 and (2.1). Again, the right hand side of (2.15) is trivially bounded by (2.14). Now the claim holds with $c_p = c'_p(m_{1,p} + 2 m_{2,p})$ (which is finite by Lemma A.5). ✷

Remark 2.5
Note that the bound in the above lemma is uniform in m and ǫ . From Lemma 2.4, it is easy to derive the following bound (uniform in m ) on the moments of the increments of X ( m,ǫ ) i ( k ). Lemma 2.6
For any $r_1, r_2 \in (0, 1]$ such that $1 < r_1/r_2 < 2$, there exists a constant $c = c(r_1, r_2)$ such that for all $T > 0$, $k \in S$ and $i = 1, 2$, we have
$$E\Big[\sup_{t \le T} \big|X^{(m,\epsilon)}_{i,t}(k) - x_i(k)\big|^{r_1}\Big] \le c \max_{j=1,2} \Big( \int_0^T \prod_{i'=1}^2 \big[(A+1)S_s x_{i'}(k) + 1\big]\, ds \Big)^{r_j}.$$
Proof.
For $k \in S \setminus S_m$, the result is trivial. Hence now let $k \in S_m$. By equation (2.8) and the triangle inequality, we get
$$E\Big[\sup_{t \le T} \big|X^{(m,\epsilon)}_{i,t}(k) - x_i(k)\big|^{r_1}\Big] \le R_1 + R_2,$$
where
$$R_1 = E\Big[\sup_{t \le T} \Big(\int_0^t A^{(m)} X^{(m,\epsilon)}_{i,s}(k)\, ds\Big)^{r_1}\Big] \qquad\text{and}\qquad R_2 = E\Big[\sup_{t \le T} \big|M^{(m,\epsilon)}_{i,t}(k)\big|^{r_1}\Big].$$
As in $R_1$ the integrand is nonnegative, using Jensen's inequality and (2.18), we get
$$R_1 \le \Big(E\Big[\int_0^T A^{(m)} X^{(m,\epsilon)}_{i,s}(k)\, ds\Big]\Big)^{r_1} \le \Big(\int_0^T A S_s x_i(k)\, ds\Big)^{r_1}. \tag{2.16}$$
For $R_2$, by Jensen's inequality and Lemma 2.4 (with $p = r_1/r_2$), we get that for some constant $c_{r_1/r_2} < \infty$,
$$R_2 \le \Big(E\Big[\sup_{t \le T} \big|M^{(m,\epsilon)}_{i,t}(k)\big|^{r_1/r_2}\Big]\Big)^{r_2} \le c_{r_1/r_2} \Big(\int_0^T \prod_{j=1}^2 \big[(A+1)S_s x_j(k) + 1\big]\, ds\Big)^{r_2}. \tag{2.17}$$
Combining (2.16) and (2.17) gives the claim of this lemma. ✷
First, by Corollary 2.3, we have
$$E\big[X^{(m,\epsilon)}_{i,t}(k)\big] \le S_t x_i(k) \quad\text{for } k \in S,\; i = 1, 2, \tag{2.18}$$
and hence by (1.11), $E\big[\langle X^{(m,\epsilon)}_{i,t}, \beta\rangle\big] \le e^{\Gamma t} \langle x_i, \beta\rangle$ for $i = 1, 2$. Now we derive bounds on $\sup_{t \le T} \langle X^{(m,\epsilon)}_{i,t}, \beta\rangle$, $i = 1, 2$.

Lemma 2.7
Let $\phi$ be a non-negative function on $S$. For any $T, K > 0$,
$$P\Big[\sup_{t \le T} \big\langle X^{(m,\epsilon)}_{i,t}, \phi \big\rangle > K\Big] \le K^{-1} \big\langle S_T x_i, \phi \big\rangle.$$
In particular, for $x \in L^{\beta,E}$, we have (recall $\beta$ and $\Gamma$ from (1.5))
$$P\Big[\sup_{t \le T} \big\langle X^{(m,\epsilon)}_{i,t}, \beta \big\rangle > K\Big] \le K^{-1} e^{\Gamma T} \|x_i\|_\beta.$$
Proof.
First assume that $\phi$ has finite support. Recall $M^{(m,\epsilon)}$ from (2.12). Define the submartingale
$$\bar M^{(m,\epsilon)}_{i,t}(k) := x_i(k) + M^{(m,\epsilon)}_{i,t}(k) + \int_0^t A^{(m)} X^{(m,\epsilon)}_{i,s}(k)\, ds$$
and note that $0 \le X^{(m,\epsilon)}_{i,t}(k) \le \bar M^{(m,\epsilon)}_{i,t}(k)$. By Doob's inequality, we get
$$P\Big[\sup_{t \le T} \big\langle X^{(m,\epsilon)}_{i,t}, \phi\big\rangle > K\Big] \le P\Big[\sup_{t \le T} \big\langle \bar M^{(m,\epsilon)}_{i,t}, \phi\big\rangle > K\Big] \le \frac{E\big[\big\langle \bar M^{(m,\epsilon)}_{i,T}, \phi\big\rangle\big]}{K}.$$
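Doob's maximal inequality, as applied here, is straightforward to test by simulation. The sketch below uses an ad-hoc nonnegative martingale (a mean-one multiplicative random walk), not the submartingale $\bar M^{(m,\epsilon)}$ of the proof; all parameters are arbitrary choices.

```python
import numpy as np

# Doob's maximal inequality for a nonnegative martingale M with M_0 = 1:
#   P[ sup_{n <= N} M_n > K ] <= E[M_N] / K.
rng = np.random.default_rng(1)
n_paths, n_steps, K = 20000, 10, 4.0

factors = rng.choice([0.5, 1.5], size=(n_paths, n_steps))  # mean-one steps
paths = np.cumprod(factors, axis=1)                        # martingale paths
running_max = np.maximum(1.0, paths.max(axis=1))           # include M_0 = 1

prob_exceed = (running_max > K).mean()     # empirical exceedance probability
bound = paths[:, -1].mean() / K            # empirical Doob bound ~ 1/K
print(prob_exceed, bound)
```

The empirical exceedance probability sits comfortably below the Doob bound; the inequality is typically far from tight, which is harmless here since only the upper bound is needed.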
By (2.18), we get
$$E\big[\bar M^{(m,\epsilon)}_{i,T}(k)\big] \le x_i(k) + \int_0^T (A S_t) x_i(k)\, dt = S_T x_i(k).$$
Hence $E\big[\langle \bar M^{(m,\epsilon)}_{i,T}, \phi\rangle\big] \le \langle S_T x_i, \phi\rangle$, which finishes the proof for $\phi$ with finite support. For general $\phi \in [0,\infty)^S$, the claim follows by monotone convergence. ✷

Lemma 2.8
Fix an arbitrary $T > 0$. Let $(\tau_m)_{m \in \mathbb{N}}$ be a sequence of stopping times bounded by $T$. Then for any $r_1 \in (0,1)$ and $k \in S$, we have
$$\lim_{\delta \downarrow 0}\; \sup_{m \in \mathbb{N},\, \epsilon > 0} E\Big[\big|X^{(m,\epsilon)}_{i,\tau_m+\delta}(k) - X^{(m,\epsilon)}_{i,\tau_m}(k)\big|^{r_1}\Big] = 0.$$
Proof. Without loss of generality, we may assume $\delta \le 1$. We define the stopping time
$$\sigma_{m,K} = \inf\Big\{ t \ge 0 : \big\langle X^{(m,\epsilon)}_{1,t} + X^{(m,\epsilon)}_{2,t}, \beta \big\rangle \ge K \Big\}$$
and let
$$R^{(m,\epsilon)}_1 := E\Big[\big|X^{(m,\epsilon)}_{i,\tau_m+\delta}(k) - X^{(m,\epsilon)}_{i,\tau_m}(k)\big|^{r_1}\, 1_{\{\sigma_{m,K} \le T+1\}}\Big], \qquad R^{(m,\epsilon)}_2 := E\Big[\big|X^{(m,\epsilon)}_{i,\tau_m+\delta}(k) - X^{(m,\epsilon)}_{i,\tau_m}(k)\big|^{r_1}\, 1_{\{\sigma_{m,K} > T+1\}}\Big].$$
Let $p > 1$ be such that $p r_1 \le 1$ and let $q$ be defined by $1/p + 1/q = 1$. Then by Hölder's inequality, we have
$$R^{(m,\epsilon)}_1 \le \Big(E\Big[\big|X^{(m,\epsilon)}_{i,\tau_m+\delta}(k) - X^{(m,\epsilon)}_{i,\tau_m}(k)\big|^{p r_1}\Big]\Big)^{1/p}\, P[\sigma_{m,K} \le T+1]^{1/q} \le \Big(E\Big[\sup_{t \le T+1} X^{(m,\epsilon)}_{i,t}(k)^{p r_1}\Big]\Big)^{1/p}\, P\Big[\sup_{t \le T+1} \big\langle X^{(m,\epsilon)}_{1,t} + X^{(m,\epsilon)}_{2,t}, \beta\big\rangle \ge K\Big]^{1/q}.$$
By Lemma 2.6, we have
$$h(T) := \sup_{m \in \mathbb{N},\, \epsilon > 0} \Big(E\Big[\sup_{t \le T+1} X^{(m,\epsilon)}_{i,t}(k)^{p r_1}\Big]\Big)^{1/p} < \infty.$$
Let $\delta' > 0$. By Lemma 3.4, we can choose $K$ sufficiently large such that
$$\sup_{m \in \mathbb{N},\, \epsilon > 0} P\Big[\sup_{t \le T+1} \big\langle X^{(m,\epsilon)}_{1,t} + X^{(m,\epsilon)}_{2,t}, \beta\big\rangle \ge K\Big]^{1/q} \le \frac{\delta'}{h(T)}.$$
This implies that
$$\sup_{m \in \mathbb{N},\, \epsilon > 0} R^{(m,\epsilon)}_1 \le \delta'. \tag{2.19}$$
Now we turn to $R^{(m,\epsilon)}_2$. Let $r_2 \in (r_1/2, r_1)$. By the strong Markov property of $X^{(m,\epsilon)}$ and Lemma 2.6, we obtain
$$E\Big[\big|X^{(m,\epsilon)}_{i,\tau_m+\delta}(k) - X^{(m,\epsilon)}_{i,\tau_m}(k)\big|^{r_1}\, 1_{\{\sigma_{m,K} > T+1\}}\Big] \le E\Big[\big|X^{(m,\epsilon)}_{i,\tau_m+\delta}(k) - X^{(m,\epsilon)}_{i,\tau_m}(k)\big|^{r_1}\, 1_{\{\langle X^{(m,\epsilon)}_{1,\tau_m} + X^{(m,\epsilon)}_{2,\tau_m},\, \beta\rangle \le K\}}\Big]$$
$$= E\Big[ E_{X^{(m,\epsilon)}_{\tau_m}}\Big[\big|X^{(m,\epsilon)}_{i,\delta}(k) - X^{(m,\epsilon)}_{i,0}(k)\big|^{r_1}\Big]\, 1_{\{\langle X^{(m,\epsilon)}_{1,\tau_m} + X^{(m,\epsilon)}_{2,\tau_m},\, \beta\rangle \le K\}}\Big]$$
$$\le c(r_1, r_2) \max_{j=1,2} E\Big[\Big(\int_0^\delta \prod_{i'=1}^2 \big[(A+1)S_s X^{(m,\epsilon)}_{i',\tau_m}(k) + 1\big]\, ds\Big)^{r_j}\, 1_{\{\langle X^{(m,\epsilon)}_{1,\tau_m} + X^{(m,\epsilon)}_{2,\tau_m},\, \beta\rangle \le K\}}\Big]. \tag{2.20}$$
Note that on the event $\{\langle X^{(m,\epsilon)}_{1,\tau_m} + X^{(m,\epsilon)}_{2,\tau_m}, \beta\rangle \le K\}$, by (1.9) and (1.10), we have
$$\big((A+1)S_s X^{(m,\epsilon)}_{i,\tau_m}\big)(k) \le (\Gamma+1)\, e^{\Gamma s} K / \beta(k).$$
Hence for some constant $c(r_1, r_2, \Gamma, T)$, the right hand side of (2.20) is bounded by
$$c(r_1, r_2, \Gamma, T) \max_{j=1,2} \Big(\int_0^\delta \Big(\frac{K}{\beta(k)} + 1\Big)\, ds\Big)^{r_j} \longrightarrow 0 \quad\text{as } \delta \downarrow 0,$$
uniformly in $m$. Together with (2.19), this implies
$$\limsup_{\delta \downarrow 0}\; \sup_{m \in \mathbb{N},\, \epsilon > 0} \big(R^{(m,\epsilon)}_1 + R^{(m,\epsilon)}_2\big) \le \delta'.$$
Since $\delta' > 0$ was arbitrary, the claim follows. ✷

3 Existence of a solution to (MP)

Recall the definition of the approximating process $X^{(m,\epsilon)}$ from (2.8) and the martingale problem (MP) from Theorem 1.1. In this section, we show that this process, in fact, converges to a solution of the martingale problem (MP) as $m \to \infty$, $\epsilon \downarrow 0$. Since the order of limits does not play a role here, we assume that we are given a sequence $\epsilon_m \downarrow 0$ and define $X^{(m)} := X^{(m,\epsilon_m)}$. Similarly, we define $I^{(m)} := I^{(m,\epsilon_m)}$, $J^{(m)}$, $M^{(m)}$ and so on. Recall that $D_{L^{\beta,E}}$ is the Skorohod space of càdlàg functions $[0,\infty) \to L^{\beta,E}$ equipped with the Skorohod topology. This and the next section will be devoted to the proof of the following theorem.

Theorem 3.1
Let $X^{(m)}_0 = x \in L^{\beta,E}$ for all $m \in \mathbb{N}$. As $m \to \infty$, the processes $X^{(m)}$ converge in distribution in $D_{L^{\beta,E}}$ to $X$, which is the unique solution to the martingale problem (MP) with $X_0 = x$.
The strategy of the proof is standard. First we prove tightness of the sequence of approximating processes and show that every convergent subsequence satisfies the above martingale problem. Then (in Section 4) we will show the uniqueness of the solution to the martingale problem (MP). This section is devoted to the proof of the following proposition, which is the first step in the proof of the above theorem.

Proposition 3.2
Let $X^{(m)}_0 = x \in L^{\beta,E}$ for all $m \in \mathbb{N}$.
(i) The sequence $(X^{(m)})_{m \in \mathbb{N}}$ is tight in $D_{L^{\beta,E}}$.
(ii) Any limit point of $(X^{(m)})_{m \in \mathbb{N}}$ in $D_{L^{\beta,E}}$ solves the martingale problem (MP).
The strategy for showing tightness in Proposition 3.2 is to do two things:
(1) We show that the so-called compact containment condition holds for $(X^{(m)})_{m \in \mathbb{N}}$ (see Lemma 3.4).
(2) Let $\mathrm{Lip}_f(L^{\beta,E}; \mathbb{C})$ denote the space of bounded Lipschitz functions on $L^{\beta,E}$ that depend on only finitely many coordinates. We use moment estimates for the coordinate processes $X^{(m)}(k)$ and Aldous's criterion to show that for $f \in \mathrm{Lip}_f(L^{\beta,E}; \mathbb{C})$, the sequence of processes $(f(X^{(m)}_t))_{t \ge 0}$, $m \in \mathbb{N}$, is tight in $D_{\mathbb{C}}$ (Lemma 3.5).
By the Stone-Weierstraß theorem, $\mathrm{Lip}_f(L^{\beta,E}; \mathbb{C}) \subset C_b(L^{\beta,E}; \mathbb{C})$ is dense in the topology of uniform convergence on compacts. Hence (1) and (2) imply tightness of $(X^{(m)})_{m \in \mathbb{N}}$ by Theorem 3.9.1 of [EK86].

Lemma 3.3 A subset $L \subset L^{\beta,E}$ is relatively compact if and only if
(i) $\sup_{y \in L} \|y_i\|_\beta < \infty$ for $i = 1, 2$, and
(ii) for every $\delta > 0$ there exists a finite $F \subset S$ such that $\sup_{y \in L} \|y_i 1_{S \setminus F}\|_\beta < \delta$ for $i = 1, 2$.
Proof.
Simple. (Note that in the setting of Prohorov’s theorem, these two conditions correspond to bound-edness of the total masses and to tightness.) ✷ Lemma 3.4 (Compact containment condition)
For every $T > 0$ and every $\delta > 0$, there exists a compact set $L_\delta \subset L^{\beta,E}$ such that for every $m \in \mathbb{N}$,
$$P\big[X^{(m)}_t \in L_\delta \text{ for all } t \in [0,T]\big] \ge 1 - \delta.$$
Proof. Fix $T > 0$. By Lemma 3.3, it is enough to show the following:
(i) For any $\delta > 0$, there exists $K > 0$ such that for all $m \in \mathbb{N}$, we have
$$P\Big[\sup_{t \le T} \big\langle X^{(m)}_{i,t}, \beta\big\rangle > K\Big] \le \delta \quad\text{for } i = 1, 2.$$
(ii) For any $\delta > 0$, there exists a finite set $F \subset S$ such that for all $m \in \mathbb{N}$,
$$P\Big[\sup_{t \le T} \big\langle X^{(m)}_{i,t} 1_{F^c}, \beta\big\rangle > \delta\Big] \le \delta \quad\text{for } i = 1, 2.$$
While (i) is immediate from Lemma 2.7, for (ii), note that
$$P\Big[\sup_{t \le T} \big\langle X^{(m)}_{i,t} 1_{F^c}, \beta\big\rangle > \delta\Big] \le \delta^{-1} \big\langle S_T x_i, \beta 1_{F^c}\big\rangle \downarrow 0 \quad\text{as } F^c \downarrow \emptyset. \qquad ✷$$
The next step, which completes the proof of Proposition 3.2(i), is to show the following lemma.
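The relative-compactness criterion of Lemma 3.3 — uniformly bounded weighted norms plus uniformly small weighted tails — can be illustrated with a small numerical check. In the sketch below, finite vectors stand in for elements of an infinite weighted sequence space (so the search for a finite tail-set is deliberately stopped before it covers everything), and the weight and the two test families are ad-hoc choices, not objects from the paper.

```python
# Checks the two conditions of a Lemma-3.3-style compactness criterion
# for a family of (truncated) sequences with weight beta.

def beta_norm(y, beta):
    """Weighted l^1-type norm: sum_k |y(k)| * beta(k)."""
    return sum(abs(v) * b for v, b in zip(y, beta))

def satisfies_criterion(family, beta, delta=1e-3):
    # (i) uniformly bounded weighted norms
    if any(beta_norm(y, beta) == float("inf") for y in family):
        return False
    # (ii) uniformly small weighted tails: some finite F = {0,...,n-1} works
    # (n < len(beta): the vectors stand in for infinite sequences, so the
    #  trivial "tail set is empty" case is excluded)
    for n in range(1, len(beta)):
        if all(beta_norm(y[n:], beta[n:]) < delta for y in family):
            return True
    return False

beta = [2.0 ** (-k) for k in range(50)]
good = [[c] * 50 for c in (0.5, 1.0, 2.0)]   # tails vanish uniformly
bad = [[0.0] * k + [2.0 ** k] + [0.0] * (49 - k) for k in range(50)]
# each member of `bad` carries weighted mass 1 arbitrarily far out

print(satisfies_criterion(good, beta), satisfies_criterion(bad, beta))
```

The `bad` family is the weighted-space analogue of mass escaping to infinity: the norms stay bounded, but no single finite set of sites captures all but $\delta$ of the mass, which is exactly what condition (ii) rules out.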
Lemma 3.5
Let $f \in \mathrm{Lip}_f(L^{\beta,E}; \mathbb{C})$. Then the sequence of processes $(f(X^{(m)}_t))_{t \ge 0}$, $m \in \mathbb{N}$, is tight in the space $D_{\mathbb{C}}$ of càdlàg functions $[0,\infty) \to \mathbb{C}$ equipped with the Skorohod topology.
Proof. Let $T > 0$, $(\tau_m)_{m \in \mathbb{N}}$ and $r_1 \in (0,1)$ be as in Lemma 2.8. Since $f$ is Lipschitz and depends on only finitely many coordinates, by Lemma 2.8, we have
$$\lim_{\delta \downarrow 0} \limsup_{m \to \infty} E\big[\big|f(X^{(m)}_{\tau_m+\delta}) - f(X^{(m)}_{\tau_m})\big|^{r_1}\big] = 0.$$
Hence by Aldous's tightness criterion (see [Ald78]), the claim follows. ✷
In the previous subsection, we proved that the sequence of laws of $(X^{(m)})_{m \in \mathbb{N}}$ is tight in $D_{L^{\beta,E}}$ and hence relatively compact by Prohorov's theorem. Let $X$ be a process whose law is an arbitrary limit point of that sequence. Then there exists a subsequence $(X^{(m_k)})_{k \in \mathbb{N}}$ such that $X^{(m_k)} \Rightarrow X$ weakly in $D_{L^{\beta,E}}$ as $k \to \infty$. In order to ease the notation, in this section we will assume that the sequences $(\epsilon_m)_{m \in \mathbb{N}}$ and $(S_m)_{m \in \mathbb{N}}$ were chosen such that $X^{(m)} \Rightarrow X$ as $m \to \infty$.

Remark 3.6
By (2.13) and (2.15) in Lemma 2.4 and Remark 2.5, using Fatou’s lemma, we get that (1.28)holds for the limiting process X and hence the stochastic integral in (1.26) is well defined. First, we derive estimates on the first and second moment of a limit point X . Lemma 3.7
For all $t > 0$, $k, l \in S$ with $k \ne l$ and $i = 1, 2$ we have
$$E[X_{i,t}(k)] \le S_t X_{i,0}(k), \qquad E[X_{1,t}(k)\, X_{2,t}(l)] \le S_t X_{1,0}(k)\; S_t X_{2,0}(l). \tag{3.1}$$
For every $p \in (0, 1]$, there exists a constant $c_p$ such that
$$E\Big[\sup_{t \le T} X_{i,t}(k)^p\Big] \le c_p \Big( x_i(k)^p + \int_0^T \big[(A+1)S_s x_1(k)\big]\big[(A+1)S_s x_2(k) + 1\big]\, ds \Big). \tag{3.2}$$
Moreover, for any non-negative function $\phi$ on $S$, $T, K > 0$, and $i = 1, 2$,
$$P\Big[\sup_{t \le T} \langle X_{i,t}, \phi\rangle > K\Big] \le K^{-1} \langle S_T x_i, \phi\rangle. \tag{3.3}$$
Proof.
The inequalities in (3.1) follow from Corollary 2.3 with the help of Fatou's lemma by switching to the Skorohod space with a.s. convergence instead of weak convergence of the processes. For the same reasons, (3.2) follows from Lemma 2.6. Here we also used the trivial inequality $a^p \le a + 1$ for $p \le 1$ and $a \ge 0$. ✷
Now we have to identify the equation for the limiting point $X$. For this goal, it will be enough to identify the compensator measures of the limits of the martingales $M^{(m)}_i(k)$. At this stage it will be more convenient for us to use a different representation of those processes. Let $N^{(m)}_\Delta(\{k\}, \cdot)$, $k \in S$, be the family of point processes on $\mathbb{R}_+ \times \mathbb{R}^2$ induced by the jumps of the processes $X^{(m)}$, that is,
$$N^{(m)}_\Delta\big(\{k\}, dt, dz\big) = \sum_s 1_{\{\Delta X^{(m)}_s(k) \ne 0\}}\; \delta_{(s,\, \Delta X^{(m)}_s(k))}(dt, dz).$$
Let $N^{(m)\prime}_\Delta$ denote the corresponding compensator measure and let $M^{(m)}_\Delta := N^{(m)}_\Delta - N^{(m)\prime}_\Delta$. Furthermore, define $N_\Delta$, $N'_\Delta$ and $M_\Delta$ similarly, but with $X^{(m)}$ replaced by $X$. Recall from (2.12) that
$$X^{(m)}_{t,i}(k) = X^{(m)}_{0,i}(k) + \int_0^t A^{(m)} X^{(m)}_{s,i}(k)\, ds + \int_0^t \int_{[0,\infty) \times E_{\epsilon_m}} J^{(m)}_i\big(y, X^{(m)}_{s-}(k)\big)\, 1_{[0,\, I^{(m)}(X^{(m)}_{s-};\, k)]}(a)\, \tilde M\big(\{k\}, ds, d(a,y)\big) + \int_0^t \int_{[0,\infty) \times E} \epsilon_m y\, 1_{[0,\, I^{1,(m)}_i(X^{(m)}_{s-};\, k)]}(a)\, \tilde M_i\big(\{k\}, ds, d(a,y)\big).$$
Let $e_1 := (1, 0)$ and $e_2 := (0, 1)$. Since $\tilde M$, $\tilde M_1$ and $\tilde M_2$ are independent compensated jump measures with intensity $N'$, and since $N'$ is absolutely continuous (which implies that there are no double points), we get for $B \subset \mathbb{R}^2$ measurable that
$$M^{(m)}_\Delta(\{k\}, dt, B) = \int_{[0,\infty) \times E_{\epsilon_m}} 1_{B \setminus \{0\}}\big(J^{(m)}(y, X^{(m)}_{t-}(k))\big)\, 1_{[0,\, I^{(m)}(X^{(m)}_{t-};\, k)]}(a)\, \tilde M\big(\{k\}, dt, d(a,y)\big) + \sum_{i=1}^2 \int_{[0,\infty) \times E} 1_B(\epsilon_m y\, e_i)\, 1_{[0,\, I^{1,(m)}_i(X^{(m)}_{t-};\, k)]}(a)\, \tilde M_i\big(\{k\}, dt, d(a,y)\big).$$
Hence for $i = 1, 2$, we have
$$X^{(m)}_{t,i}(k) = x_i(k) + \int_0^t A^{(m)} X^{(m)}_{s,i}(k)\, ds + \int_0^t \int_{\mathbb{R}^2} z_i\, M^{(m)}_\Delta(\{k\}, ds, dz).$$

Lemma 3.8
The weak limit point $X$ is a solution of
$$X_t(k) = x(k) + \int_0^t A X_s(k)\, ds + \int_0^t \int_{\mathbb{R}^2} z\, M_\Delta\big(\{k\}, ds, dz\big), \tag{3.4}$$
where $M_\Delta = N_\Delta - N'_\Delta$. The compensator measure of the point process $N_\Delta$ is given by
$$N'_\Delta\big(\{k\}, dt, B\big) = \int_E 1_{B \setminus \{0\}}\big(J(y, X_{t-}(k))\big)\, I(X_t; k)\, dt\, \nu(dy) \quad\text{for } B \subset \mathbb{R}^2 \text{ measurable},\; k \in S.$$
Proof.
By Theorem IX.2.4 of [JS87], it is enough to check for all $k \in S$ that
$$\Big( X^{(m)}(k),\; \int_{\mathbb{R}^2} N^{(m)\prime}_\Delta\big(\{k\}, dt, dz\big)\, G(z) \Big) \underset{m \to \infty}{\Longrightarrow} \Big( X(k),\; \int_{\mathbb{R}^2} N'_\Delta\big(\{k\}, dt, dz\big)\, G(z) \Big), \tag{3.5}$$
for
(i) each continuous $G \in C^+_b(\mathbb{R}^2)$ which is 0 in some neighbourhood of 0, and
(ii) $G = h_i h_j$, $i, j = 1, 2$, for some bounded continuous function $h = (h_1, h_2) : \mathbb{R}^2 \to \mathbb{R}^2$ that fulfils $h(x) = x$ in some neighbourhood of 0.
Note that
$$\int_{\mathbb{R}^2} G(z)\, N^{(m)\prime}_\Delta\big(\{k\}, dt, dz\big) = \int_{E_{\epsilon_m}} G\big(J^{(m)}(y, X^{(m)}_{t-}(k))\big)\, N^{(m)\prime}\big(\{k\}, dt, [0, I^{(m)}(X_{t-}; k)], dy\big) + \sum_{i=1}^2 \int_E G(\epsilon_m y\, e_i)\, N^{(m)\prime}\big(\{k\}, dt, [0, I^{1,(m)}_i(X_{t-}; k)], dy\big)$$
$$= I^{(m)}(X_{t-}; k) \Big( \int_{E_{\epsilon_m}} G\big(J^{(m)}(y, X^{(m)}_{t-}(k))\big)\, \nu(dy) \Big)\, dt + \sum_{i=1}^2 I^{1,(m)}_i(X_{t-}; k) \Big( \int_E G(\epsilon_m y\, e_i)\, \nu(dy) \Big)\, dt \tag{3.6}$$
and similarly
$$\int_{\mathbb{R}^2} G(z)\, N'_\Delta\big(\{k\}, dt, dz\big) = I(X_{t-}; k) \Big( \int_E G\big(J(y, X_{t-}(k))\big)\, \nu(dy) \Big)\, dt. \tag{3.7}$$
By Skorohod's representation theorem, we may assume that $X^{(m)}$ and $X$ are defined on one probability space such that $X^{(m)}$ converges almost surely to $X$ (and not only weakly). In order to show (3.5), it is enough to show that the right hand side of (3.6) converges to that of (3.7). For this, it is enough to show that for any $k \in S$, uniformly in $z$ on compacts of $L^{\beta,E}$, we have
$$I^{(m)}(z; k) \int_{E_{\epsilon_m}} G\big(J^{(m)}(y, z(k))\big)\, \nu(dy) \longrightarrow I(z; k) \int_E G\big(J(y, z(k))\big)\, \nu(dy) \quad\text{as } m \to \infty, \tag{3.8}$$
and for $i = 1, 2$,
$$I^{1,(m)}_i(z; k) \int_E G(\epsilon_m y\, e_i)\, \nu(dy) \longrightarrow 0 \quad\text{as } m \to \infty. \tag{3.9}$$
The proof of Lemma 3.8 is thus complete when we have shown the following two lemmas. ✷

Lemma 3.9
For any bounded measurable function $G : \mathbb{R}^2 \to \mathbb{R}$ such that $G(x) = 0$ for all $x \in \mathbb{R}^2$ with $\|x\|_\infty \le \delta$ for some $\delta > 0$, we have (3.8) and (3.9).

Lemma 3.10
For any $j, j' = 1, 2$, and for $G(x) := G_{jj'}(x) := x_j x_{j'}\, 1_{\{\|x\|_\infty \le 1\}}$, we have (3.8) and (3.9).
Fix $k \in S$ and a compact set $L \subset L^{\beta,E}$. Then there exists a $K < \infty$ such that $|z_i(k)| \le K$ and $|A^{(m)} z_i(k)| \le K$ for all $z \in L$, $i = 1, 2$. For $m$ large enough such that $S_m \ni k$, we have $A^{(m)} z(k) \uparrow A z(k)$. Since $A z(k)$ and $A^{(m)} z(k)$ are continuous functions of $z$ (see (1.8)), we get uniform convergence on compacts, that is,
$$\eta_m := \sup_{z \in L} \big| A^{(m)} z(k) - A z(k) \big| \longrightarrow 0 \quad\text{as } m \to \infty. \tag{3.10}$$
Proof (of Lemma 3.9). Case 1: $z(k) = 0$. If $z(k) = 0$, then both sides in (3.8) equal zero and it remains to show (3.9). Note that $I^{1,(m)}_i(z; k) \le K/\epsilon_m$ and recall that $G(x) = 0$ if $\|x\| \le \delta$. Hence, by Lemma A.1,
$$\Big| I^{1,(m)}_i(z; k) \int_E G(\epsilon_m y\, e_i)\, \nu(dy) \Big| \le \frac{K \|G\|_\infty}{\epsilon_m}\, \nu\big(\{0\} \times (\delta/\epsilon_m, \infty)\big) \le \frac{2}{\pi}\, \frac{K \|G\|_\infty}{\delta^2}\, \epsilon_m.$$
Case 2: $z(k) \ne 0$. In this case, the left hand side of (3.9) equals zero and it remains to show (3.8). We consider, without loss of generality, the case $z_1(k) > 0$. By Corollary A.6,
$$\Big| \int_E G\big(J(y, z(k))\big)\, \nu(dy) \Big| \le \|G\|_\infty\, \nu\big(\{y : \|J(y, z(k))\|_\infty > \delta\}\big) = \|G\|_\infty\, \nu\big(\{y : \|J(y, (1,0))\|_\infty > \delta/z_1(k)\}\big) \le \frac{2}{\pi}\, \frac{\|G\|_\infty\, z_1(k)^2}{\delta^2} \le \frac{2}{\pi}\, \frac{\|G\|_\infty\, K^2}{\delta^2}. \tag{3.11}$$
Assume that $\epsilon_m < \delta/K$.
Case 2(i): $\epsilon_m \le z_1(k)$. In this case, by (3.10),
$$\big| I^{(m)}(z; k) - I(z; k) \big| = z_1(k)^{-1} \big| A^{(m)} z(k) - A z(k) \big| \le z_1(k)^{-1} \eta_m. \tag{3.12}$$
For any $y \in E_{\epsilon_m}$, we have $J_i(y, z(k)) = J^{(m)}_i(y, z(k))$, $i = 1, 2$. For $y \in E \setminus E_{\epsilon_m}$, we have $J^{(m)}(y, z(k)) = 0$ and
$$\|J(y, z(k))\|_\infty \le \epsilon_m z_1(k) \le \epsilon_m K \le \delta.$$
Hence $G(J(y, z(k))) = 0$ for $y \in E \setminus E_{\epsilon_m}$. This shows
$$\int_{E_{\epsilon_m}} G\big(J^{(m)}(y, z(k))\big)\, \nu(dy) = \int_{E_{\epsilon_m}} G\big(J(y, z(k))\big)\, \nu(dy) = \int_E G\big(J(y, z(k))\big)\, \nu(dy). \tag{3.13}$$
Using (3.13) and (3.11), the difference of the left hand side and the right hand side in (3.8) is bounded by
$$\frac{2}{\pi}\, \frac{\|G\|_\infty\, z_1(k)^2}{\delta^2}\, \big| I^{(m)}(z; k) - I(z; k) \big| \le \frac{2}{\pi}\, \frac{K \|G\|_\infty}{\delta^2}\, \eta_m \longrightarrow 0 \quad\text{as } m \to \infty,$$
where the convergence follows by (3.10).
Case 2(ii): $\epsilon_m > z_1(k)$. In this case, by (3.11),
$$\Big| I(z; k) \int_E G\big(J(y, z(k))\big)\, \nu(dy) \Big| \le \frac{K}{z_1(k)} \cdot \frac{2}{\pi}\, \frac{\|G\|_\infty\, z_1(k)^2}{\delta^2} = \frac{2}{\pi}\, \frac{K \|G\|_\infty}{\delta^2}\, z_1(k) \le \frac{2}{\pi}\, \frac{K \|G\|_\infty}{\delta^2}\, \epsilon_m$$
and similarly
$$\Big| I^{(m)}(z; k) \int_{E_{\epsilon_m}} G\big(J^{(m)}(y, z(k))\big)\, \nu(dy) \Big| \le \frac{2}{\pi}\, \frac{K \|G\|_\infty}{\delta^2}\, \epsilon_m.$$
This shows (3.8) and (3.9) and finishes the proof of Lemma 3.9. ✷
Proof (of Lemma 3.10). The proof of this lemma uses some of the estimates from the proof of Lemma 3.9. Assume that $\epsilon_m \le 1/2$.
Case 1: $z(k) = 0$. If $z(k) = 0$, then both sides in (3.8) equal zero and it remains to show (3.9). Note that
$$I^{1,(m)}_i(z; k) \int_E G_{jj'}(\epsilon_m y\, e_i)\, \nu(dy) = 0 \quad\text{if } j \ne i \text{ or } j' \ne i.$$
Hence, now assume that $j = j' = i$. Recall that $I^{1,(m)}_i(z; k) \le K/\epsilon_m$. Then, by Lemma A.2,
$$I^{1,(m)}_i(z; k) \int_E G_{ii}(\epsilon_m y\, e_i)\, \nu(dy) = I^{1,(m)}_i(z; k)\, \epsilon_m^2 \int_{\{0\} \times (0, 1/\epsilon_m)} y^2\, \nu(dy) \le \frac{2K}{\pi}\, \epsilon_m \log(1/\epsilon_m) \longrightarrow 0 \quad\text{as } m \to \infty.$$
That is, the right hand side in (3.9) equals 0 while the left hand side converges to 0.
Case 2: z ( k ) = 0 . In this case (3.9) holds trivially. Without loss of generality, we assume that z ( k ) > Case 2a: j = j ′ = 1 . We have Z E ǫm G (cid:0) J ( m ) ( y, z ( k )) (cid:1) ν ( dy ) ≤ Z E ǫm G (cid:0) J ( y, z ( k )) (cid:1) ν ( dy ) ≤ Z E G (cid:0) J ( y, z ( k )) (cid:1) ν ( dy )= z ( k ) Z E ( y − {| y − |≤ /z ( k ) } { y ≤ /z ( k ) } ν ( dy ) . (3.14)For z ( k ) > z ( k ) Z (1 − /z ( k ) , /z ( k )) ×{ } ( y − ν ( dy ) ≤ π z ( k ) . For z ( k ) ∈ (0 , z ( k ) (cid:20) Z (0 , /z ( k )) ×{ } ( y − ν ( dy ) + ν (cid:0) { } × (0 , /z ( k )) (cid:1)(cid:21) ≤ z ( k ) π (cid:2) (cid:0) /z ( k ) (cid:1) + 1 (cid:3) ≤ z ( k ) . Summing up, for all z ( k ) >
0, we have Z E ǫm G (cid:0) J ( m ) ( y, z ( k )) (cid:1) ν ( dy ) ≤ Z E G (cid:0) J ( y, z ( k )) (cid:1) ν ( dy ) ≤ z ( k ) . (3.15)Now for the difference of the two integrals. We have (using Lemma A.4) Z E \ E ǫm G (cid:0) J ( y, z ( k )) (cid:1) ν ( dy ) ≤ z ( k ) Z (1 − ǫ m , ǫ m ) ×{ } ( y − ν ( dy ) ≤ π z ( k ) ǫ m . (3.16)Let ∆ := Z E ǫm (cid:2) G (cid:0) ( J ( y, z ( k )) (cid:1) − G (cid:0) ( J ( m ) ( y, z ( k )) (cid:1)(cid:3) ν ( dy ) . For z ( k ) ≥ ǫ m , we have ∆ = 0. On the other hand, for z ( k ) ∈ (0 , ǫ m ), we have (using Lemma A.1)∆ = z ( k ) Z E ǫm (cid:2) {| y − |≤ /z ( k ) } { /ǫ m
Lemma 3.11 For any $x, y \in E$, we have $\int_E h_{x,y}\, d\nu = 0$.
Proof.
By symmetry, it is enough to consider the case $x = (1, 0)$. Abbreviate $h_y := h_{(1,0),y}$. Let $B$ be a planar Brownian motion started at $z \in [0,\infty)^2$ and recall that $Q_z$ is its harmonic measure on $\partial[0,\infty)^2$; that is, $Q_z$ is the distribution of $B_\tau$, where $\tau$ is the exit time from $(0,\infty)^2$. By formally extending the domain of $h_y$ to $[0,\infty)^2$ and applying Itô's formula, we see that $(h_y(B_t))_{t \ge 0}$ is a ($\mathbb{C}$-valued) martingale. Since $h_y$ grows at most linearly, we have $E[|h_y(B_t)|] \le C\, E[\|B_t\|] \le C(\|z\| + 2t)$ for some $C < \infty$. It is well known that $E[\tau^p] < \infty$ for $p \in [1/2, 1)$ (see [KM10, Lemma 3.5] or [Bur77, Equation (3.8)] with $\alpha = \pi/2$). Hence $|h_y(B_{t \wedge \tau})|$, $t \ge 0$, is bounded in $L^p$ for all $p \in [1, 2)$. By optional stopping, we get $\int h_y\, dQ_z = h_y(z)$.
Recall that $\nu$ is the vague limit of $\epsilon^{-1} Q_{(1,\epsilon)}$ as $\epsilon \to 0$. Hence we can hope that $\int h_y\, d\nu$ can be written as the limit of $\epsilon^{-1} \int h_y\, dQ_{(1,\epsilon)}$. In fact, since $h_y$ grows at most linearly and since $h_y(1, 0) = 0$, by [KM10, Lemma 5.5], we get
$$\int h_y\, d\nu = \lim_{\epsilon \to 0} \frac{1}{\epsilon} \int h_y\, dQ_{(1,\epsilon)} = \lim_{\epsilon \to 0} \frac{1}{\epsilon}\, h_y(1, \epsilon) = 0. \qquad ✷$$
Now we are ready to write the martingale problem for any limiting point $X$.

Lemma 3.12
Let $y \in L^{f,E}$ and let $X$ be any limit point of $(X^{(m)})_{m \in \mathbb{N}}$. Then
$$M_t := e^{\langle\langle X_t, y\rangle\rangle} - e^{\langle\langle X_0, y\rangle\rangle} - \int_0^t \langle\langle A X_s, y\rangle\rangle\, e^{\langle\langle X_s, y\rangle\rangle}\, ds \tag{3.23}$$
is a martingale.
Proof.
By Itô's formula for discontinuous semimartingales (see, e.g., [Pr04, Theorem 32]) applied to $X$ solving (3.4), we get that
$$e^{\langle\langle X_t, y\rangle\rangle} - e^{\langle\langle X_0, y\rangle\rangle} - \int_0^t \langle\langle A X_s, y\rangle\rangle\, e^{\langle\langle X_s, y\rangle\rangle}\, ds - \sum_{k \in S} \int_0^t \int_E N'\big(\{k\}, ds, dz\big)\, e^{\langle\langle X_s, y\rangle\rangle} \Big[ e^{J(z, X_s(k)) \diamond y(k)} - 1 - J(z, X_s(k)) \diamond y(k) \Big] \tag{3.24}$$
is a local martingale. By the definition of $N'$ and $h_{x,y}$ in (3.22), using Lemma 3.11, we get
$$\int_0^t \int_E N'(\{k\}, ds, dz)\, e^{\langle\langle X_s, y\rangle\rangle} \Big[ e^{J(z, X_s(k)) \diamond y(k)} - 1 - J(z, X_s(k)) \diamond y(k) \Big] = \int_0^t e^{\langle\langle X_s, y\rangle\rangle} \int_E h_{X_s(k),\, y(k)}\, d\nu\, ds = 0.$$
Hence also $M$ is a local martingale and it remains to show that $M$ is in fact a martingale. Applying (3.2), (1.10) and (1.9), for all $T > 0$, we get
$$E\Big[\sup_{t \le T} \Big| \int_0^t \langle\langle A X_s, y\rangle\rangle\, e^{\langle\langle X_s, y\rangle\rangle}\, ds \Big|\Big] \le E\Big[\int_0^T \big(\big\langle A X_{1,s}, |y_1|\big\rangle + \big\langle A X_{2,s}, |y_2|\big\rangle\big)\, ds\Big] \le \int_0^T \big(\big\langle A S_s x_1, |y_1|\big\rangle + \big\langle A S_s x_2, |y_2|\big\rangle\big)\, ds \le \sum_{k \in S} \sum_{i=1}^2 e^{\Gamma T}\, \|x_i\|_\beta\, \frac{|y_i(k)|}{\beta(k)} < \infty.$$
Note that the last inequality follows since $y$ has finite support. Since the exponents in (3.23) have nonpositive real part, they are bounded. Hence we conclude that
$$E\Big[\sup_{t \le T} |M_t|\Big] < \infty.$$
But this implies that $M$ is indeed a martingale. ✷
By Lemma 3.12, any limit point of $(X^{(m)})_{m \in \mathbb{N}}$ solves the martingale problem (MP). Hence the proof of Proposition 3.2(ii) is now complete. ✷

4 Uniqueness of the solution to (MP)

This section is devoted to the proof of the following proposition.
Proposition 4.1
There is a unique solution to the martingale problem (MP) and the map $x \mapsto P_x$ is measurable. The proposition will be proved via a series of lemmas.
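The way a dual process yields uniqueness can be illustrated in a much simpler, standard setting: the moment duality between the Wright-Fisher diffusion and a pure-death chain determines all moments, and hence the one-dimensional marginals. This is only an analogy for the exponential duality used below; none of the objects in the sketch come from the paper, and all parameters are toy choices.

```python
import numpy as np

# Moment duality for the Wright-Fisher diffusion dX = sqrt(X(1-X)) dW:
#   E_x[X_t^n] = E_n[x^{N_t}],
# where N is a pure-death chain jumping n -> n-1 at rate n(n-1)/2.
# For n = 2 the dual chain jumps 2 -> 1 at rate 1, which gives the
# closed formula  E_x[X_t^2] = x + (x^2 - x) * exp(-t).
# We verify it against an Euler-Maruyama simulation of the diffusion.
rng = np.random.default_rng(2)
x0, t_end, dt, n_paths = 0.3, 0.5, 0.01, 20000

x = np.full(n_paths, x0)
for _ in range(int(t_end / dt)):
    noise = rng.standard_normal(n_paths)
    x += np.sqrt(np.clip(x * (1.0 - x), 0.0, None) * dt) * noise
    x = np.clip(x, 0.0, 1.0)   # keep the paths inside [0, 1]

mc_second_moment = (x ** 2).mean()
dual_prediction = x0 + (x0 ** 2 - x0) * np.exp(-t_end)
print(round(mc_second_moment, 3), round(dual_prediction, 3))
```

Since the dual chain computes every moment $E_x[X_t^n]$, any two solutions of the corresponding martingale problem share all one-dimensional marginals; the argument below plays the same game with the dual process $Y$ and the duality function $H$.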
Recall from (1.14) that $L^{f,E}$ is the space of $y \in E^S$ with only finitely many nonzero coordinates. Recall that $A^*$ and $\mathcal{A}^*$ are the transpose matrices of $A$ and $\mathcal{A}$, respectively, and that $S^*$ and $\mathcal{S}^*$ are the corresponding semigroups. Recall the definition of $L^{\beta,E}_\infty$ from (1.15) and let $D_{L^{\beta,E}_\infty} = D_{L^{\beta,E}_\infty}[0,\infty)$ be the Skorohod space of $L^{\beta,E}_\infty$-valued càdlàg paths. We will define a $D_{L^{\beta,E}_\infty}$-valued process $Y = (Y_1, Y_2)$ that solves the martingale problem which is dual to (MP). Recall the function $H$ from (1.12).

Proposition 4.2
Let $Y_0 = y \in L^{f,E}$. Then there exists a process $Y \in D_{L^{\beta,E}_\infty}$ which satisfies the following martingale problem: For all $x \in L^{\beta,E}$,
$$M^{*,x,y}_t := H(x, Y_t) - H(x, Y_0) - \int_0^t \big\langle\big\langle x, \mathcal{A}^* Y_s \big\rangle\big\rangle\, H(x, Y_s)\, ds \tag{MP*}$$
is a martingale.
Proof. The existence of a process $Y \in D_{L^{\beta,E}}$ that solves the martingale problem (MP*) for all $x \in L^{f,E}$ follows immediately from Proposition 3.2, since the assumptions on $A$ are satisfied by $A^*$ as well. In fact, by assuming that $Y$ is constructed similarly as $X$ in Section 3, we may assume that Lemma 3.7 holds for $Y$. To finish the proof we have to show that this $Y$ in fact takes values in the subspace $L^{\beta,E}_\infty$ and that $Y$ satisfies (MP*) for all $x \in L^{\beta,E}$ (not only for $x \in L^{f,E}$). Step 1.
First we show that $Y$ takes values in $L^{\beta,E}_\infty$. It is enough to show that for all $\phi \in L^\beta$ and $i = 1, 2$, we have
$$P\Big[\sup_{t \le T} \langle Y_{i,t}, \phi\rangle > K\Big] \longrightarrow 0 \quad\text{as } K \to \infty. \tag{4.1}$$
By (3.3) in Lemma 3.7, for any $\phi \in L^\beta$ and $K > 0$, we get (recall that $(S^*_t)$ is the semigroup generated by the transposed matrix $A^*$)
$$P\Big[\sup_{t \le T} \langle Y_{i,t}, \phi\rangle > K\Big] \le K^{-1} \langle S^*_T Y_{i,0}, \phi\rangle = K^{-1} \langle Y_{i,0}, S_T \phi\rangle. \tag{4.2}$$
By (1.11), we have that $S_T \phi(k) < \infty$ for all $k$, and since $Y_{i,0}$ has finite support, the right hand side of (4.2) is finite. Step 2.
Now we show that $Y$ satisfies (MP*) for all $x \in L^{\beta,E}$. Let $(x_n)_{n \in \mathbb{N}}$ be a sequence in $L^{f,E}$ such that $x_n \uparrow x$ as $n \to \infty$. Then $M^{*,x_n,y}$ is a martingale for any $n \in \mathbb{N}$. By (4.2), for any $T > 0$,
$$\sup_{s \le T} \big| H(x_n, Y_s) - H(x, Y_s) \big| \longrightarrow 0 \quad\text{in } L^1 \text{ as } n \to \infty. \tag{4.3}$$
Note that
$$\big|\langle\langle x_n, \mathcal{A}^* Y_s\rangle\rangle\big| \le 2 \big\langle x_1 + x_2, A^*(Y_{1,s} + Y_{2,s})\big\rangle = 2 \big\langle A(x_1 + x_2), Y_{1,s} + Y_{2,s}\big\rangle.$$
Consequently, for all $T > 0$ and $t \in [0, T]$, we get
$$\Big| \int_0^t \langle\langle x_n, \mathcal{A}^* Y_s\rangle\rangle\, H(x_n, Y_s)\, ds \Big| \le \int_0^t \big|\langle\langle x_n, \mathcal{A}^* Y_s\rangle\rangle\big|\, ds \le \int_0^T \big|\langle\langle x_n, \mathcal{A}^* Y_s\rangle\rangle\big|\, ds \le 2 \int_0^T \big\langle A(x_1 + x_2), Y_{1,s} + Y_{2,s}\big\rangle\, ds. \tag{4.4}$$
By Lemma 3.7, the expectation of the right hand side of (4.4) is bounded by
$$2 \int_0^T \big\langle A(x_1 + x_2), S^*_s(Y_{1,0} + Y_{2,0})\big\rangle\, ds = 2 \int_0^T \big\langle S_s A(x_1 + x_2), Y_{1,0} + Y_{2,0}\big\rangle\, ds \le 2 e^{\Gamma T} \big(\|x_1\|_\beta + \|x_2\|_\beta\big) \sum_{k \in S} \frac{Y_{1,0}(k) + Y_{2,0}(k)}{\beta(k)} < \infty. \tag{4.5}$$
By dominated convergence, the integral term in the definition of $M^{*,x_n,y}$ converges in $L^1$ to the corresponding integral term for $M^{*,x,y}$. Hence $M^{*,x_n,y}_t$ converges in $L^1$ to $M^{*,x,y}_t$ for each $t$. Consequently, $M^{*,x,y}$ is a martingale. ✷

4.2 Moment bounds for solutions of the martingale problem

In Lemma 3.7, we established a bound on the first moments of those solutions $X$ of the martingale problem (MP) that arise as limit points of the approximating processes $(X^{(m)})_{m \in \mathbb{N}}$. In order to show uniqueness of the solution to (MP), we need to establish a similar bound for any solution to (MP). In fact, we will establish a slightly stronger result, but first let us define the notion of the local martingale problem. We say that $X$ solves the local martingale problem (MP) with $X_0 = x \in L^{\beta,E}$ if for any $y \in L^{f,E}$, the process $M^{x,y}$ is a local martingale. Now we are ready to prove the following lemma.

Lemma 4.3
Let $x \in L^{\beta,E}$ and let $X$ be a solution to the local martingale problem (MP) with $X_0 = x$. Then
(i) for all $k \in S$, $t \ge 0$ and for $i = 1, 2$, we have $E[X_{i,t}(k)] \le S_t(x_1 + x_2)(k)$,
(ii) and $X$ is a solution to the martingale problem (MP) with $X_0 = x$.
Proof. (i)
Let $y := (1, 1)\, 1_{\{k\}} \in L^{f,E}$ be the test function that takes the value $(1, 1) \in E$ at $k$ and is zero otherwise. For $K > 0$, define
$$\tau_K = \inf\big\{ t \ge 0 : \|X_{1,t} + X_{2,t}\|_\beta \ge K \big\}.$$
Since a bounded local martingale is a martingale, and since for every $\epsilon > 0$ the expression
$$e^{\langle\langle X_{t \wedge \tau_K},\, \epsilon y\rangle\rangle} - e^{\langle\langle x,\, \epsilon y\rangle\rangle} - \epsilon \int_0^{t \wedge \tau_K} e^{\langle\langle X_s,\, \epsilon y\rangle\rangle}\, \langle\langle A X_s, y\rangle\rangle\, ds \tag{4.6}$$
is bounded by $2 + 2 \epsilon T \Gamma K / \beta(k)$ for $t \le T$, the expression in (4.6) is in fact a martingale. Hence, we have
$$E\Big[ \epsilon^{-1}\big(1 - e^{\langle\langle X_{t \wedge \tau_K},\, \epsilon y\rangle\rangle}\big) \Big] = \epsilon^{-1}\big(1 - e^{\langle\langle x,\, \epsilon y\rangle\rangle}\big) - E\Big[ \int_0^{t \wedge \tau_K} e^{\langle\langle X_s,\, \epsilon y\rangle\rangle}\, \langle\langle A X_s, y\rangle\rangle\, ds \Big].$$
Note that $\mathrm{Re}\, \langle\langle x, \epsilon y\rangle\rangle \le 0$ for $x \in E^S$. Hence
$$\mathrm{Re}\big(1 - e^{\langle\langle X_{t \wedge \tau_K},\, \epsilon y\rangle\rangle}\big) \ge 0.$$
Using Fatou's lemma, we get
$$E\big[ X_{1, t \wedge \tau_K}(k) + X_{2, t \wedge \tau_K}(k) \big] = \frac{1}{2}\, E\Big[ \lim_{\epsilon \downarrow 0} \mathrm{Re}\; \epsilon^{-1}\big(1 - e^{\langle\langle X_{t \wedge \tau_K},\, \epsilon y\rangle\rangle}\big) \Big] \le \frac{1}{2} \liminf_{\epsilon \downarrow 0}\; \epsilon^{-1}\, \mathrm{Re}\; E\Big[ 1 - e^{\langle\langle X_{t \wedge \tau_K},\, \epsilon y\rangle\rangle} \Big] = x_1(k) + x_2(k) - \frac{1}{2} \limsup_{\epsilon \downarrow 0} \mathrm{Re}\Big( E\Big[ \int_0^{t \wedge \tau_K} e^{\langle\langle X_s,\, \epsilon y\rangle\rangle}\, \langle\langle A X_s, y\rangle\rangle\, ds \Big] \Big).$$
Using dominated convergence (recall $\tau_K$), we obtain (recall (1.10) and $\Gamma$ from (1.6))
$$E\big[ X_{1, t \wedge \tau_K}(k) + X_{2, t \wedge \tau_K}(k) \big] \le x_1(k) + x_2(k) - \frac{1}{2}\, \mathrm{Re}\Big( E\Big[ \int_0^{t \wedge \tau_K} \langle\langle A X_s, y\rangle\rangle\, ds \Big] \Big) \le x_1(k) + x_2(k) + E\Big[ \int_0^{t \wedge \tau_K} \big( A X_{1,s}(k) + A X_{2,s}(k) \big)\, ds \Big] \le x_1(k) + x_2(k) + 2 K \Gamma t / \beta(k) < \infty. \tag{4.7}$$
From (4.7), we get
$$E\big[ X_{1, t \wedge \tau_K}(k) + X_{2, t \wedge \tau_K}(k) \big] \le x_1(k) + x_2(k) + E\Big[ \int_0^t A\big( X_{1, s \wedge \tau_K} + X_{2, s \wedge \tau_K} \big)(k)\, ds \Big].$$
Since both sides are finite by (4.7), standard arguments yield
$$E\big[ X_{1, t \wedge \tau_K}(k) + X_{2, t \wedge \tau_K}(k) \big] \le S_t(x_1 + x_2)(k) \quad\text{for all } t \ge 0,\; K > 0.$$
Letting $K \to \infty$ and using Fatou's lemma, we obtain
$$E\big[ X_{1,t}(k) + X_{2,t}(k) \big] \le S_t(x_1 + x_2)(k).$$
This finishes the proof of (i). (ii)
We have to show that the local martingale
$$M^{x,y}_t = H(X_t, y) - H(x, y) - \int_0^t \langle\langle \mathcal A X_s, y\rangle\rangle\, H(X_s, y)\, ds$$
is in fact a martingale. The argument is similar to that in the proof of Lemma 3.12; we omit the details. ✷

Corollary 4.4
Let $x \in L^{\beta,E}$, let $X$ be a solution to the martingale problem (MP) with $X_0 = x$, and let $\phi \in L^{\beta}_\infty$. Then for all $t \ge 0$ and $i = 1, 2$, we have
$$\mathbf E\big[\langle X_{i,t}, \phi\rangle\big] \le \big\langle S_t(x_1 + x_2),\, \phi\big\rangle \le e^{\Gamma t}\, \big\langle x_1 + x_2,\, \phi\big\rangle < \infty. \tag{4.8}$$

Proof.
The first inequality is a consequence of the previous lemma, the second is due to (1.9), and the third is due to the very definition of $L^{\beta}_\infty$. ✷

Corollary 4.5
Let $X_0 = x \in L^{\beta,E}$ and let $X$ be a solution to (1.26). Then $X$ is a solution to the martingale problem (MP) with $X_0 = x$.

Proof.
By Itô's formula (see (3.24) and (3.23) in the proof of Lemma 3.12), we get that $X$ is a solution to the local martingale problem (MP). Then by Lemma 4.3(ii), it is also a solution to the martingale problem (MP). ✷

By definition, for any $x \in L^{\beta,E}$ and any solution $X$ of the martingale problem (MP) with $X_0 = x$, the process $M^{x,y}$ is a martingale for any $y \in L^{f,E}$. The $L^1$-estimates we have just established enable us to show that this is true even for $y \in L^{\beta,E}_\infty$.

Lemma 4.6
For any $x \in L^{\beta,E}$, any solution $X$ of (MP) with $X_0 = x$ and any $y \in L^{\beta,E}_\infty$, the process $M^{x,y}$ is a martingale.

Proof.
The proof is similar to Step 2 of the proof of Proposition 4.2. For the key estimate (4.5), here we employ Corollary 4.4 instead of Lemma 3.7. We omit the details. ✷

Proposition 4.7 (Duality)
Let $Y_0 = y \in L^{f,E}$ and let $Y \in D_{L^{\beta,E}_\infty}$ be a solution to the martingale problem (MP$^*$). Let $X_0 = x \in L^{\beta,E}$ and let $X \in D_{L^{\beta,E}}$ be a solution to the martingale problem (MP) which is independent of $Y$. Then $X$ and $Y$ are dual with respect to the function $H$:
$$\mathbf E\big[H(X_t, Y_0)\big] = \mathbf E\big[H(X_0, Y_t)\big] \qquad\text{for all } t \ge 0. \tag{4.9}$$

Proof. Fix $t > 0$. For $r, s \in [0, t]$, define
$$f(s, r) = \mathbf E\big[H(X_s, Y_r)\big] \qquad\text{and}\qquad g(s, r) = \mathbf E\big[\langle\langle \mathcal A X_s, Y_r\rangle\rangle\, H(X_s, Y_r)\big] = \mathbf E\big[\langle\langle X_s, \mathcal A^* Y_r\rangle\rangle\, H(X_s, Y_r)\big].$$
By (4.5) and Corollary 4.4, we get
$$\mathbf E\big[|M^{*,X_s,y}_r|\big] \le e^{\Gamma r}\, \mathbf E\big[\|X_{1,s} + X_{2,s}\|_\beta\big] \sum_{k\in S}\big(y_1(k) + y_2(k)\big)\beta(k) \le e^{\Gamma(r+s)}\, \|x_1 + x_2\|_\beta \sum_{k\in S}\big(y_1(k) + y_2(k)\big)\beta(k) < \infty.$$
Hence we can compute
$$f(s, r) - f(s, 0) - \int_0^r g(s, u)\, du = \mathbf E\big[M^{*,X_s,y}_r\big] = \mathbf E\big[\mathbf E[M^{*,X_s,y}_r \mid X_s]\big] = 0, \tag{4.10}$$
since $M^{*,X_s,y}$ is a martingale with $M^{*,X_s,y}_0 = 0$. Similarly, we get
$$f(s, r) - f(0, r) - \int_0^s g(u, r)\, du = \mathbf E\big[M^{x,Y_r}_s\big] = 0. \tag{4.11}$$
Using the same estimates for $\mathbf E\big[|\langle\langle \mathcal A X_s, Y_r\rangle\rangle|\big]$, we obtain
$$\int_0^t\!\!\int_0^t |g(r, s)|\, dr\, ds < \infty. \tag{4.12}$$
By (4.10), (4.11), (4.12) and Lemma 4.4.10 of [EK86] (with their $f_1$ and $f_2$ both equal to our $g$), we get $f(0, t) = f(t, 0)$. ✷

Proof of Proposition 4.1.

Step 1 (One-dimensional distributions).
Let $x \in L^{\beta,E}$ and let $X, X' \in D_{L^{\beta,E}}$ be two solutions to the martingale problem (MP) with $X_0 = X'_0 = x$. Let $y \in L^{f,E}$ and let $Y$ be a solution to (MP$^*$) with $Y_0 = y$. By Proposition 4.7, we have
$$\mathbf E\big[H(X_t, y)\big] = \mathbf E\big[H(x, Y_t)\big] = \mathbf E\big[H(X'_t, y)\big] \qquad\text{for all } t \ge 0. \tag{4.13}$$
By Corollary 2.4 of [KM10], the family $\{H(\,\cdot\,, y),\ y \in L^{f,E}\}$ is measure determining; hence the one-dimensional marginals of $X$ and $X'$ coincide.

Step 2 (Finite-dimensional distributions).
Now we use a version of the well-known theorem stating that uniqueness of one-dimensional distributions for solutions to a martingale problem implies uniqueness of finite-dimensional distributions. More precisely, denote by $\mathcal F_t = \sigma(X_s,\ s \le t)$ the $\sigma$-algebra generated by $X_s$, $s \le t$. Note that $(L^{f,E}, \|\cdot\|_\beta)$ is a separable Banach space. Hence there exists a regular conditional probability $Q_s = \mathbf P\big[(X_{s+t})_{t\ge 0} \in \cdot \,\big|\, \mathcal F_s\big]$. Arguing as in [B97, Corollary VI.2.2], we see that for almost all $\omega$, under $Q_s$ the canonical process is a solution to (MP) started at $X_s$. Now we may argue as in the proof of Theorem VI.3.2 in [B97] to get uniqueness of the distribution of $X$.

Step 3 (Measurability).
For the proof of the existence of a solution to (MP), we employed an approximation procedure: we constructed processes $X^{(m)}$ with finitely many jumps (in finite time intervals) from a given noise, and showed convergence along a subsequence $m_n \uparrow \infty$. Due to uniqueness of the limit point (Step 2), we get convergence as $m \to \infty$. Let us denote the corresponding laws (with initial point $x$) by $\mathbf P^m_x$ and $\mathbf P_x$. By the very construction of $X^{(m)}$, it is clear that $x \mapsto \mathbf P^m_x$ is measurable. Hence the limit $x \mapsto \mathbf P_x$ is also measurable. ✷

Proof of Theorems 1.1 and 3.1. Theorems 1.1(a) and 3.1 follow immediately from Propositions 3.2 and 4.1. Theorem 1.1(b) follows from Lemma 4.6.

In order to show the strong Markov property of Theorem 1.1(c), by [EK86, Theorem 4.4.2] it is enough to show that the martingale problem (MP) is well-posed not only for deterministic initial points $x \in L^{\beta,E}$, but also for initial distributions $\mu \in \mathcal M_1(L^{\beta,E})$. The problem is, of course, that for $X_0 \sim \mu$ and $y \in L^{f,E}$, in general the process $M^{X_0,y}$ is not well defined, as the integrand $\langle\langle \mathcal A X_s, y\rangle\rangle H(X_s, y)$ is unbounded. Hence, we propose a slight modification of (MP) and assume that $y \in L^{\beta,E,++}_\infty$, where
$$L^{\beta,E,++}_\infty := \big\{y \in L^{\beta,E}:\ \exists\, c < \infty \text{ with } c^{-1}\beta(k) < y_i(k) < c\,\beta(k)\ \text{for all } i = 1, 2,\ k \in S\big\} \subset L^{\beta,E}_\infty.$$
Recall that $\|\mathcal A u\|_\beta \le \Gamma \|u\|_\beta$ for all $u \in ([0,\infty)^2)^S$. Hence for all $y \in L^{\beta,E,++}_\infty$, the map $L^{\beta,E} \to \mathbb C$, $x \mapsto \langle\langle \mathcal A x, y\rangle\rangle H(x, y)$, is bounded. Hence for $y \in L^{\beta,E,++}_\infty$, the process $M^{X_0,y}$ is well defined, and we say that $X$ is a solution to the martingale problem (MP$'$) if $M^{X_0,y}$ is a martingale for all $y \in L^{\beta,E,++}_\infty$. Arguing as in the proof of Proposition 4.7, we get the duality
$$\mathbf E[H(X_t, y)] = \mathbf E[H(X_0, Y_t)] \qquad\text{for all } y \in L^{\beta,E,++}_\infty. \tag{4.14}$$
Note that $L^{\beta,E,++}_\infty \subset L^{\beta,E}_\infty$ is dense. Hence (4.14) determines the distribution of $X_t$. By [EK86, Theorem 4.4.2(a)], we infer uniqueness of the finite-dimensional distributions and hence of the solution to (MP$'$).
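The interchange argument invoked here and in Proposition 4.7, namely [EK86, Lemma 4.4.10], can be motivated by a purely formal computation; this is only a heuristic, and the cited lemma supplies the integrability conditions that justify each step:

```latex
f(t,0) - f(0,t)
  = \int_0^t \frac{d}{ds}\, f(s,\, t-s)\, ds
  = \int_0^t \Big(\partial_1 f(s,\, t-s) - \partial_2 f(s,\, t-s)\Big)\, ds
  = \int_0^t \big(g(s,\, t-s) - g(s,\, t-s)\big)\, ds
  = 0,
```

since (4.11) and (4.10) identify $\partial_1 f$ and $\partial_2 f$, respectively, with $g$.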
Hence $\mathbf P_\mu := \int \mu(dx)\, \mathbf P_x$ is the unique distribution of any solution to (MP$'$) with $X_0 \sim \mu$. That is, the martingale problem (MP$'$) is well-posed and hence, by [EK86, Theorem 4.4.2], $(\mathbf P_x)_{x \in L^{\beta,E}}$ possesses the strong Markov property. ✷

First we show weak uniqueness of the solutions of (1.26). Let $X$ be any solution to (1.26) with $X_0 = x \in L^{\beta,E}$. Then by Corollary 4.5, $X$ is also a solution to the martingale problem (MP). However, by Theorem 1.1, the solution to (MP) is unique in law. Hence the solution to (1.26) is also unique in law.

Now we show the existence of $(X, N)$ solving (1.26). The procedure is fairly standard, and we only sketch the main arguments. Let $X$ be the unique (in law) solution to the martingale problem (MP). By Lemma 3.8 and Theorem 3.1, we get that $X$ can be constructed in such a way that it also satisfies (3.4). Moreover, we define the point process $N^\Delta$ by
$$N^\Delta(\{k\}, dt, A) = \int_{[0,\infty)\times E} \mathbb 1_{A\setminus\{0\}}\big(J(y, X_{t-}(k))\big)\, \tilde N(\{k\}, dt, dy) \qquad\text{for } A \subset \mathbb R.$$
Let $(k_n, t_n, x_n)_{n \ge 1}$ be an arbitrary labeling of the points of the point process $\tilde N$. Let $N'$ be a Poisson point process on $S \times \mathbb R_+ \times \mathbb R_+ \times E$ independent of $\tilde N$ and $X$. Also let $\{U_n\}_{n\ge 1}$ be a sequence of independent random variables, uniform on $(0,1)$, which are also independent of $\tilde N$ and $X$. Define the new point process $N$ on $S \times \mathbb R_+ \times \mathbb R_+ \times E$ by
$$N(dk, dt, dr, dx) = \sum_{n \ge 1} \delta_{(k_n,\, t_n,\, U_n I(X_{t_n-};\, k_n),\, x_n)}(dk, dt, dr, dx) + \mathbb 1_{\{r > I(X_{t-};\, k)\}}\, N'\big(dk, dt, dr, dx\big). \tag{5.1}$$
Both summands in (5.1) are predictable transformations of point processes of class (QL) (in the sense of [IW89, Definition 3.2]); that is, they possess continuous compensators. Standard arguments yield that they are hence also point processes of class (QL). A standard computation shows that the compensator measures are given by
$$\ell_S(dk)\, \mathbb 1_{\{r \le I(X_{t-};\, k)\}}\, \lambda(dt)\, \lambda(dr)\, \nu(dx) \qquad\text{and}\qquad \ell_S(dk)\, \mathbb 1_{\{r > I(X_{t-};\, k)\}}\, \lambda(dt)\, \lambda(dr)\, \nu(dx),$$
respectively. Hence $N$ is a point process of class (QL) and has the deterministic and absolutely continuous compensator measure $\ell_S \otimes \lambda \otimes \lambda \otimes \nu$. By [IW89, Theorem 6.2], we get that $N$ is thus a Poisson point process with intensity $\ell_S \otimes \lambda \otimes \lambda \otimes \nu$. ✷

Recall that $Y^\gamma$ solves the following system of equations:
$$Y^\gamma_{i,t}(k) = y_{i,0}(k) + \int_0^t \mathcal A Y^\gamma_{i,s}(k)\, ds + \int_0^t \gamma^{1/2}\, \sigma(Y^\gamma_s(k))\, dW_{i,s}(k), \qquad t \ge 0,\ k \in S,\ i = 1, 2. \tag{6.1}$$
First of all, we establish uniform integrability of $Y^\gamma_i$, $i = 1, 2$.

Lemma 6.1
For any $T > 0$, $p \in (0, 2)$ and $i = 1, 2$, we have
$$\sup_{\gamma \ge 1} \mathbf E\Big[\sup_{t \le T}\, \big\langle Y^\gamma_{i,t}, \beta\big\rangle^p\Big] < \infty.$$

Proof.
By simple stochastic calculus, we get
$$e^{-\Gamma t}\big\langle Y^\gamma_{i,t}, \beta\big\rangle = \big\langle Y^\gamma_{i,0}, \beta\big\rangle + \int_0^t e^{-\Gamma s}\Big(\big\langle \mathcal A Y^\gamma_{i,s}, \beta\big\rangle - \Gamma \big\langle Y^\gamma_{i,s}, \beta\big\rangle\Big)\, ds + \sum_{k\in S} \beta(k) \int_0^t e^{-\Gamma s}\, \gamma^{1/2} \sigma(Y^\gamma_s(k))\, dW_{i,s}(k)$$
$$\le \big\langle Y^\gamma_{i,0}, \beta\big\rangle + \sum_{k\in S} \beta(k) \int_0^t e^{-\Gamma s}\, \gamma^{1/2} \sigma(Y^\gamma_s(k))\, dW_{i,s}(k) = \big\langle Y^\gamma_{i,0}, \beta\big\rangle + B_{i,\,T(t)}, \tag{6.2}$$
where $T(t) := \sum_{k\in S} \beta(k)^2 \int_0^t e^{-2\Gamma s}\, \gamma\, \sigma^2(Y^\gamma_s(k))\, ds$, the inequality follows by (1.9), and $B_i$, $i = 1, 2$, are independent Brownian motions. Hence we get that the pair $\big(e^{-\Gamma t}\langle Y^\gamma_{1,t}, \beta\rangle,\ e^{-\Gamma t}\langle Y^\gamma_{2,t}, \beta\rangle\big)$ is stochastically bounded by the time-changed planar Brownian motion $B$ starting at $B_0 = (u, v) := \big(\langle Y^\gamma_{1,0}, \beta\rangle,\ \langle Y^\gamma_{2,0}, \beta\rangle\big)$ and evolving until the stopping time $\tau = \inf\{t \ge 0:\ B_{1,t} B_{2,t} = 0\}$. For $p \in (1, 2)$, Doob's inequality yields
$$K_i \equiv \mathbf E\Big[\sup_{t \le \tau} (B_{i,t})^p\Big] < \Big(\frac{p}{p-1}\Big)^p\, \mathbf E\big[(B_{i,\tau})^p\big]. \tag{6.3}$$
Since the quadrant is a cone of angle $\pi/2$ and $p < 2 = \pi/(\pi/2)$, the right hand side is finite (see [Bur77]); hence
$$K_i < \infty. \tag{6.4}$$
We can get (6.4) also by an explicit estimate using the density of the distribution of $\tilde B_\tau$ from (1.19). This immediately implies that
$$\mathbf E\Big[\sup_{t \le T}\big\langle Y^\gamma_{i,t}, \beta\big\rangle^p\Big] \le e^{p\Gamma T} K_i < \infty \qquad\text{for } i = 1, 2, \tag{6.5}$$
uniformly in $\gamma \ge 1$. ✷

Lemma 6.2
The family $(Y^\gamma)_{\gamma \ge 1}$ is tight in $D_{L^{\beta,E}}$ equipped with the Meyer–Zheng pseudo-path topology.

Proof.
The process $M^\gamma_i(k)$ defined by
$$M^\gamma_{i,t}(k) := Y^\gamma_{i,t}(k) - Y_{i,0}(k) - \int_0^t \mathcal A Y^\gamma_{i,s}(k)\, ds$$
is a martingale. In order to show tightness of $(Y^\gamma)_{\gamma \ge 1}$, it is enough to show tightness of $(Y^\gamma_i(k))_{\gamma \ge 1}$ for all $k \in S$ and $i = 1, 2$. By Lemma 6.1, the random variable $\langle Y^\gamma_{i,t}, \beta\rangle$ has a $p$-th moment, bounded uniformly in $\gamma$, for any $p \in (0, 2)$; the same is then true for $Y^\gamma_{i,t}(k)$ and for $\int_0^t \mathcal A Y^\gamma_{i,s}(k)\, ds$. Note that the conditional variation (see, e.g., [MZ84, page 358]) $V^\gamma_{i,T}(k)$ of $M^\gamma_i(k)$ up to time $T$ equals
$$V^\gamma_{i,T}(k) = \sup_{t \le T} \mathbf E\big[|M^\gamma_{i,t}(k)|\big].$$
Hence, by Theorem 4 of [MZ84], in order to get tightness of the martingales $M^\gamma_{i,\cdot}(k)$ it is enough to show that
$$\sup_{\gamma > 0}\ \sup_{t \le T} \mathbf E\big[\big|M^\gamma_{i,t}(k)\big|\big] < \infty \qquad\text{for all } T > 0. \tag{6.6}$$
However,
$$|M^\gamma_{i,t}(k)| \le |Y^\gamma_{i,t}(k)| + |x(k)| + \Big|\int_0^t \mathcal A Y^\gamma_{i,s}(k)\, ds\Big|,$$
and (6.6) again follows immediately from the boundedness of the $p$-th moments (for $p < 2$) of $\langle Y^\gamma_{i,t}, \beta\rangle$. ✷

Lemma 6.3
Let $X$ be an arbitrary limit point of $(Y^\gamma)_{\gamma \ge 1}$. Then $X$ solves the martingale problem (MP).

Proof.
Let $\gamma_n \to \infty$ be such that $Y^{\gamma_n}$ converges to $X$ as $n \to \infty$. By Itô's formula, for $z \in L^{f,E}$ (recall (1.13)), the process $M^{\gamma,y,z}$ defined by
$$M^{\gamma,y,z}_t = H(Y^\gamma_t, z) - H(Y^\gamma_0, z) - \int_0^t \langle\langle \mathcal A Y^\gamma_s, z\rangle\rangle\, H(Y^\gamma_s, z)\, ds \tag{6.7}$$
is a martingale. Since $(Y^{\gamma_n})_{n\in\mathbb N}$ converges to $X$, the right hand side of (6.7) converges to the corresponding expression $M^{x,z}$ for $X$. As the $p$-th moments of $\langle Y^\gamma_t, \beta\rangle$ (for $p \in (0, 2)$) are bounded uniformly in $\gamma$, so are the $p$-th moments of $M^{\gamma,y,z}_t$. By [MZ84, Theorem 11], we infer that $M^{x,z} = \lim_{n\to\infty} M^{\gamma_n,y,z}$ is a martingale. In other words, $X$ is a $([0,\infty)^2)^S$-valued solution to the martingale problem (MP). It remains to show that $X_t \in E^S$ for all $t > 0$.

Fix $k \in S$. Recall that we derived the tightness of the martingales $M^\gamma_{i,t}(k)$. But this implies that the quadratic variation of $M^\gamma_i(k)$ is stochastically bounded uniformly in $\gamma$; that is,
$$\int_0^t \gamma\, \sigma^2(Y^\gamma_s(k))\, ds$$
is uniformly bounded in $\gamma$. Since $\gamma_n \to \infty$, this implies
$$\int_0^t \sigma^2(Y^{\gamma_n}_s(k))\, ds \overset{n\to\infty}{\longrightarrow} 0.$$
By Assumption 1.4(ii) and (6.5), this implies
$$\int_0^t \Big[\big(Y^{\gamma_n}_{1,s}(k)\, Y^{\gamma_n}_{2,s}(k)\big) \wedge 1\Big]\, ds \overset{n\to\infty}{\longrightarrow} 0.$$
On the other hand, we have
$$\int_0^t \Big[\big(Y^{\gamma_n}_{1,s}(k)\, Y^{\gamma_n}_{2,s}(k)\big) \wedge 1\Big]\, ds \overset{n\to\infty}{\longrightarrow} \int_0^t \Big[\big(X_{1,s}(k)\, X_{2,s}(k)\big) \wedge 1\Big]\, ds,$$
hence
$$\int_0^t X_{1,s}(k)\, X_{2,s}(k)\, ds = 0.$$
Thus $X_{1,s}(k)\, X_{2,s}(k) = 0$ for almost every $s$. Since the limiting process $X$ is càdlàg, we have $X_t \in E^S$ for all $t \ge 0$. ✷

The above lemma finishes the proof of Theorem 1.5.
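As a toy numerical illustration of the convergence just proved, the finite-rate system (6.1) can be simulated on a finite site space with an Euler–Maruyama discretization and the Dawson–Perkins choice $\sigma^2(y) = y_1 y_2$. The function name, the clipping at zero and all numerical parameters below are ad-hoc choices for this sketch, not part of the paper:

```python
import numpy as np

def simulate_finite_rate(y0, A, gamma, T, dt, rng):
    """Euler-Maruyama sketch of the finite-rate system (6.1) on a
    finite site space S = {0, ..., n-1}:

        dY_{i,t}(k) = (A Y_{i,t})(k) dt
                      + gamma^{1/2} sigma(Y_t(k)) dW_{i,t}(k),

    with the Dawson-Perkins choice sigma(y)^2 = y_1 * y_2.
    y0 has shape (2, n); A is the (n, n) migration matrix.
    Clipping at zero is an ad-hoc device to keep the discretized
    paths nonnegative; it is not part of the model."""
    y = np.asarray(y0, dtype=float).copy()
    prod_integral = np.zeros(y.shape[1])   # approximates int_0^T Y_1(k) Y_2(k) dt
    for _ in range(int(round(T / dt))):
        prod_integral += y[0] * y[1] * dt
        drift = y @ A.T                          # (A Y_i)(k) for i = 1, 2
        noise = np.sqrt(gamma * y[0] * y[1])     # common local branching intensity
        dW = rng.normal(scale=np.sqrt(dt), size=y.shape)
        y = np.clip(y + drift * dt + noise * dW, 0.0, None)
    return y, prod_integral
```

For $A = \mathfrak a - I$ with $\mathfrak a$ a symmetric stochastic matrix, the drift conserves the total mass of each type; raising `gamma` tends to shrink the returned integral of $Y_1(k)Y_2(k)$, mirroring the conclusion $\int_0^t X_{1,s}(k) X_{2,s}(k)\, ds = 0$ in the proof of Lemma 6.3.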
Appendix A: Properties of the jump measure
Recall the measure ν from (1.20). For ease of reference, we collect some basic facts on the moments of ν . Lemma A.1
Let $\epsilon > 0$. We have
$$\nu\big(\{0\}\times(\epsilon,\infty)\big) = \frac{2}{\pi(1+\epsilon^2)} \le \pi\big(1 \wedge \epsilon^{-2}\big),$$
and
$$\nu\big(\big([0,\infty)\setminus(1-\epsilon,\,1+\epsilon)\big)\times\{0\}\big) = \begin{cases}\dfrac{8}{\pi\epsilon(4-\epsilon^2)} - \dfrac{2}{\pi}, & \text{if } \epsilon \le 1,\\[6pt] \dfrac{2}{\pi\epsilon(2+\epsilon)}, & \text{if } \epsilon \ge 1,\end{cases} \qquad \le\ \pi\big(\epsilon^{-1} \wedge \epsilon^{-2}\big).$$

Proof.
This is simple calculus. ✷ Lemma A.2
For $x \ge 0$, we have
$$\int_{\{0\}\times(0,\infty)} y_2\, \nu(dy) = 1, \qquad \int_{\{0\}\times(0,x)} y_2\, \nu(dy) = \frac{2}{\pi}\Big(\arctan(x) - \frac{x}{1+x^2}\Big),$$
and
$$\int_{\{0\}\times(0,x)} y_2^2\, \nu(dy) = \frac{2}{\pi}\Big(\log(1+x^2) - \frac{x^2}{1+x^2}\Big) \le \pi\log(x),$$
where the inequality holds if $x \ge 2$.

Proof. This is simple calculus. ✷

Lemma A.3
For $\epsilon > 0$, we have
$$\int_{\{y_1 \ge 1+\epsilon\}} (y_1 - 1)\, \nu(dy) = \frac{1}{\pi}\left(\log\Big(\frac{2+\epsilon}{\epsilon}\Big) + \frac{2}{2+\epsilon}\right). \tag{A.1}$$
For $\epsilon \in (0, 1)$, we have
$$\int_{\{y_1 \le 1-\epsilon\}} (y_1 - 1)\, \nu(dy) = -\frac{1}{\pi}\left(\log\Big(\frac{2-\epsilon}{\epsilon}\Big) + \frac{2}{2-\epsilon}\right). \tag{A.2}$$
Hence for $\epsilon \in (0, 1)$, there exists an $\epsilon' := \epsilon'(\epsilon) \in [\epsilon/2, \epsilon]$ such that
$$\int_{\{y_1 \notin (1-\epsilon,\, 1+\epsilon')\}} (y_1 - 1)\, \nu(dy) = 0. \tag{A.3}$$

Proof.
By elementary calculus, we get (A.1) and (A.2). Using the explicit expressions, it is easy to check that
$$\int_{\{y_1 \notin (1-\epsilon,\,1+\epsilon)\}} (y_1 - 1)\, \nu(dy) \le 0 \le \int_{\{y_1 \notin (1-\epsilon,\,1+\epsilon/2)\}} (y_1 - 1)\, \nu(dy).$$
By continuity, (A.3) holds for some $\epsilon' \in [\epsilon/2, \epsilon]$. ✷

Note that letting $\epsilon \to 0$ shows $\int (y_1 - 1)\, \nu(dy) = 0$ in the sense of a Cauchy principal value.

Lemma A.4
For $x > 0$, we have
$$\int_{(0,x)\times\{0\}} (y_1 - 1)^2\, \nu(dy) = \frac{4}{\pi}\Big(\log(1+x) - \frac{x}{1+x}\Big).$$
Hence, for $\epsilon \in (0, 1)$, we get
$$\int_{(1-\epsilon,\,1+\epsilon)\times\{0\}} (y_1 - 1)^2\, \nu(dy) = \frac{4}{\pi}\left(\log\Big(\frac{2+\epsilon}{2-\epsilon}\Big) - \frac{2\epsilon}{4-\epsilon^2}\right) \le \pi\epsilon.$$

Proof.
This is simple calculus. ✷ Lemma A.5
For $p \in (1, 2)$, we have
$$m_{1,p} := \int_E |y_1 - 1|^p\, \nu(dy) < \infty \tag{A.4}$$
and
$$m_{2,p} := \int_E y_2^p\, \nu(dy) = \frac{p}{\sin(p\pi/2)} < \infty. \tag{A.5}$$

Proof.
Note that
$$m_{1,p} \le \frac{4}{\pi}\int_0^\infty \frac{v}{(1+v^2)^2}\, dv + \frac{4}{\pi}\int_0^1 u\,(1-u)^{p-2}\, du + \frac{4}{\pi}\int_1^\infty u\,(u-1)^{p-2}\,(1+u)^{-2}\, du,$$
and the right hand side is finite for $p \in (1, 2)$; this yields (A.4). The formula for $m_{2,p}$ can be derived by an explicit calculation using Lemma A.2. ✷

Corollary A.6 Recall $J$ from (1.23). For any $L > 0$, we have
$$\nu\big(\big\{y:\ \|J(y, (1,0))\|_\infty \ge L\big\}\big) \le \pi L^{-1}.$$

Proof.
This is a direct consequence of the definition of J (see (1.23)) and Lemma A.1. ✷ Appendix B: Proof of Lemma 1.2.
Let $x = (u, v) \in (0, \infty)^2$. Explicit integration of (1.19) yields
$$Q_{(u,v)}\big(\{0\}\times[\bar v,\infty)\big) = \frac12 + \frac1\pi\arctan\Big(\frac{v^2 - u^2 - \bar v^2}{2uv}\Big)$$
and
$$Q_{(u,v)}\big([\bar u,\infty)\times\{0\}\big) = \frac12 + \frac1\pi\arctan\Big(\frac{u^2 - v^2 - \bar u^2}{2uv}\Big).$$
This yields
$$Q_{(u,v)}(E) = \frac12 + \frac12 + \frac1\pi\arctan\Big(\frac{v^2 - u^2}{2uv}\Big) + \frac1\pi\arctan\Big(\frac{u^2 - v^2}{2uv}\Big) = 1.$$
Hence $Q$ as defined in (1.19) is in fact a probability measure. Furthermore, for $u_0 > 0$ and $\epsilon \in (0, u_0)$, we have
$$Q_{(u,v)}\big((u_0-\epsilon,\, u_0+\epsilon)\times\{0\}\big) = \frac1\pi\arctan\Big(\frac{(u_0+\epsilon)^2 - u^2 + v^2}{2uv}\Big) - \frac1\pi\arctan\Big(\frac{(u_0-\epsilon)^2 - u^2 + v^2}{2uv}\Big)$$
$$\longrightarrow\ \frac1\pi\arctan(\infty) - \frac1\pi\arctan(-\infty) = 1 \qquad\text{as } (u, v) \to (u_0, 0).$$
Hence $Q_{(u,v)} \to \delta_{(u_0,0)}$ as $(u,v) \to (u_0, 0)$, and similarly $Q_{(u,v)} \to \delta_{(0,v_0)}$ as $(u,v) \to (0, v_0)$. Finally, explicitly computing second derivatives gives
$$\Big(\frac{\partial^2}{\partial u^2} + \frac{\partial^2}{\partial v^2}\Big)\, \frac{4\,u v \bar u}{4u^2v^2 + \big(\bar u^2 + v^2 - u^2\big)^2} = 0.$$
Hence, the function in (1.19) is indeed harmonic. ✷

Acknowledgement
We would like to thank an anonymous referee who helped considerably to debug the paper and to improve the exposition.

References

[Ald78] D. Aldous. Stopping Times and Tightness.
Ann. Probab. , Vol. 6(2), 335–340, 1978.[B97] R. F. Bass.
Diffusions and Elliptic Operators.
Probability and its Applications (New York). Springer-Verlag, New York, 1998.[Bur77] D. L. Burkholder. Exit times of Brownian motion, harmonic majorization, and Hardy spaces.
Advances in Math., 26(2):182–205, 1977.
[CDG04] J. T. Cox, D. A. Dawson and A. Greven. Mutually catalytic super branching random walks: large finite systems and renormalization analysis.
Mem. Amer. Math. Soc. , 171, no. 809, 2004.[DP98] D. A. Dawson and E. A. Perkins. Long time behaviour and co-existence in a mutually catalyticbranching model.
Ann. Probab., 26(3):1088–1138, 1998.
[DM83] C. Dellacherie and P. A. Meyer.
Probabilit´es et potentiel: Chapitres V `a VIII Th´eorie des martingales .Hermann, Paris, 1983.[EK86] S. N. Ethier and T. G. Kurtz.
Markov Processes: Characterization and Convergence. John Wiley and Sons, New York, 1986.
[IW89] N. Ikeda and S. Watanabe.
Stochastic differential equations and diffusion processes , volume 24 of
North-Holland Mathematical Library . North-Holland Publishing Co., Amsterdam, 2. edition, 1989.[JS87] J. Jacod and A. N. Shiryaev.
Limit Theorems for Stochastic Processes . Springer-Verlag, New York,1987.[KM10] A. Klenke and L. Mytnik. Infinite rate mutually catalytic branching.
Ann. Probab. , 38(4): 1690–1716,2010.[KM11] A. Klenke and L. Mytnik. Infinite rate mutually catalytic branching in infinitely many colonies. Thelongtime behaviour.
Ann. Probab. (to appear) , 2011.[KO10] A. Klenke and M. Oeler. A Trotter type approach to infinite rate mutually catalytic branching.
Ann.Probab. , 38(2): 479–497, 2010.[LM05] J.-F. Le Gall and L. Mytnik. Stochastic integral representation and regularity of the density for theexit measure of super-Brownian motion,
Ann. Probab.
[Lig85] T. M. Liggett. Interacting Particle Systems. Springer-Verlag, New York, 1985.
[MZ84] P. A. Meyer and W. A. Zheng. Tightness criteria for laws of semimartingales.
Ann. Inst. H. Poincar´eProbab. Statist. , 20(4):353–372, 1984.[Myt96] L. Mytnik. Superprocesses in random environments.
Ann. Probab., 24:1953–1978, 1996.
[Oel08] M. Oeler. Mutually Catalytic Branching at Infinite Rate.
PhD thesis, Universit¨at Mainz , 2008.[Pr04] P. E. Protter.
Stochastic Integration and Differential Equations.
Springer-Verlag, Berlin, 2004.[SV97] D. W. Stroock and S. R. Varadhan.