Mimicking the marginal distributions of a semimartingale
Amel Bentata and Rama Cont∗

September 2009. Revised: April 2012.

Abstract
We show that the flow of marginal distributions of a discontinuous semimartingale X can be matched by a Markov process whose infinitesimal generator is expressed in terms of the local characteristics of X. The conditions under which such Markovian projections exist are shown to hold for a large class of stochastic processes used in applications. Our results extend a "mimicking theorem" of Gyöngy (1986) to discontinuous semimartingales. We use this result to derive a partial integro-differential equation for the one-dimensional distributions of a semimartingale, extending the Kolmogorov forward equation to a non-Markovian setting.
MSC Classification Numbers: 60J75, 60H10.
Keywords: mimicking theorem, semimartingale, Markovian projection, martingale problem, Kolmogorov forward equation.
∗Laboratoire de Probabilités et Modèles Aléatoires, UMR 7599 CNRS - Université Pierre & Marie Curie (Paris VI)

1 Introduction
Stochastic processes with path-dependent, non-Markovian dynamics, used in various fields such as physics and mathematical finance, present challenges for computation, simulation and estimation. In some applications where one is interested in the marginal distributions of such processes, such as option pricing or Monte Carlo simulation of densities, the complexity of the model can be greatly reduced by considering a low-dimensional Markovian model with the same marginal distributions. Given a process ξ, a Markov process X is said to mimic ξ on the time interval [0, T], T > 0, if ξ and X have the same marginal distributions:

∀t ∈ [0, T],   ξ_t (d)= X_t.   (1)

X is called a Markovian projection of ξ. The construction of Markovian projections was first suggested by Brémaud [4] in the context of queues. Construction of mimicking processes of 'Markovian' type has been explored for Ito processes [13] and marked point processes [7]. A notable application is the derivation of forward equations for option pricing [3, 9].
We propose in this paper a systematic construction of such Markovian projections for (possibly discontinuous) semimartingales. Given a semimartingale ξ, we give conditions under which there exists a Markov process X whose marginal distributions are identical to those of ξ, and give an explicit construction of the Markov process X as the solution of a martingale problem for an integro-differential operator [2, 20, 23, 24].
In the martingale case, the Markovian projection problem is related to the problem of constructing martingales with a given flow of marginals, which dates back to Kellerer [19] and has recently been explored by Yor and coauthors [1, 15, 21] using a variety of techniques. The construction proposed in this paper is different from the others since it does not rely on the martingale property of ξ. We shall see nevertheless that our construction preserves the (local) martingale property. Also, whereas the approaches described in [1, 15, 21] use as a starting point the marginal distributions of ξ, our construction describes the mimicking Markov process X in terms of the local characteristics [17] of the semimartingale ξ. Our construction thus applies more readily to solutions of stochastic differential equations, where the local characteristics are known but not the marginal distributions.
Section 2 presents a Markovian projection result for an R^d-valued semimartingale given by its local characteristics.
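As a simple numerical illustration of definition (1) (a toy example added here, not taken from the paper), consider ξ_t = σW_t where the volatility σ is drawn once at time 0 from two values {s₁, s₂} and then frozen: ξ is a martingale whose dynamics depend on the unobserved draw, while a natural Markovian candidate is the local-volatility diffusion dX_t = √(a(t, X_t)) dB_t with a(t, x) = E[σ² | ξ_t = x], computable here by Bayes' rule from the Gaussian-mixture marginal. The sketch below (all parameter values are arbitrary choices) compares the two marginal distributions at time T by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy example: xi_t = sigma * W_t with sigma drawn at t = 0 from {s1, s2}
# with probabilities (p, 1 - p). Its mimicking local-volatility diffusion is
#   dX_t = sqrt(a(t, X_t)) dB_t,   a(t, x) = E[sigma^2 | xi_t = x].
s1, s2, p, T = 0.2, 0.5, 0.4, 1.0

def a(t, x):
    # Bayes' rule on the two-component Gaussian mixture (common factors cancel)
    d1 = np.exp(-x**2 / (2 * s1**2 * t)) / s1
    d2 = np.exp(-x**2 / (2 * s2**2 * t)) / s2
    w1, w2 = p * d1, (1 - p) * d2
    return (w1 * s1**2 + w2 * s2**2) / (w1 + w2)

n, n_steps, t0 = 100_000, 500, 0.01

# Exact sample of xi_T
sigma = np.where(rng.random(n) < p, s1, s2)
xi_T = sigma * rng.normal(0.0, np.sqrt(T), size=n)

# Euler scheme for the projected SDE on [t0, T], started on the exact
# marginal law at t0 (a(t, .) degenerates as t -> 0, so we avoid t = 0)
sigma0 = np.where(rng.random(n) < p, s1, s2)
X = sigma0 * rng.normal(0.0, np.sqrt(t0), size=n)
dt = (T - t0) / n_steps
for k in range(n_steps):
    t = t0 + k * dt
    X += np.sqrt(a(t, X) * dt) * rng.normal(size=n)

# Two-sample Kolmogorov-Smirnov distance between the marginals at T
data = np.concatenate([xi_T, X])
cdf_xi = np.searchsorted(np.sort(xi_T), data, side="right") / n
cdf_X = np.searchsorted(np.sort(X), data, side="right") / n
ks = np.abs(cdf_xi - cdf_X).max()
print(ks)
```

With 10⁵ paths the Kolmogorov-Smirnov distance should be of the order of the sampling noise, consistent with ξ_T and X_T having the same law; this is the phenomenon that the Markovian projection formalizes.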
We use these results in Section 2.4 to derive a partial integro-differential equation for the one-dimensional distributions of a discontinuous semimartingale, thus extending the Kolmogorov forward equation to a non-Markovian setting. Section 3 shows how this result may be applied to processes whose jumps are represented as the integral of a predictable jump amplitude with respect to a Poisson random measure, a representation often used in stochastic differential equations with jumps. In Section 4 we show that our construction applies to a large class of semimartingales, including smooth functions of a Markov process (Section 4.1) and time-changed Lévy processes (Section 4.2).

Consider, on a filtered probability space (Ω, F, (F_t)_{t≥0}, P), an Ito semimartingale on the time interval [0, T], T >
0, given by the decomposition

ξ_t = ξ₀ + ∫₀^t β_s ds + ∫₀^t δ_s dW_s + ∫₀^t ∫_{‖y‖≤1} y M̃(ds dy) + ∫₀^t ∫_{‖y‖>1} y M(ds dy),   (2)

where ξ₀ is in R^d, W is a standard R^n-valued Wiener process, M is an integer-valued random measure on [0, T] × R^d with compensator measure μ and M̃ = M − μ is the compensated measure [17, Ch. II, Sec. 1], and β (resp. δ) is an adapted process with values in R^d (resp. M_{d×n}(R)).
Our goal is to construct a Markov process, on some filtered probability space (Ω, B, (B_t)_{t≥0}, Q), such that X and ξ have the same marginal distributions on [0, T], i.e. the law of X_t under Q coincides with the law of ξ_t under P. We will construct X as the solution to a martingale problem [11, 23, 25, 22] on the canonical space Ω = D([0, T], R^d).
Let Ω = D([0, T], R^d) be the Skorokhod space of right-continuous functions with left limits. Denote by X_t(ω) = ω(t) the canonical process on Ω, (B⁰_t) its natural filtration and B_t ≡ B⁰_{t+}.
Our goal is to construct a probability measure Q on Ω such that X is a Markov process under Q and ξ and X have the same one-dimensional distributions:

∀t ∈ [0, T],   ξ_t (d)= X_t.

In order to do this, we shall characterize Q as the solution of a martingale problem for an appropriately chosen integro-differential operator L.
Let C_b(R^d) denote the set of bounded continuous functions on R^d, C₀^∞(R^d) the set of infinitely differentiable functions with compact support on R^d and C₀(R^d) the set of continuous functions on R^d vanishing at infinity, equipped with the supremum norm. Let R(R^d − {0}) denote the space of Lévy measures on R^d, i.e. the set of non-negative σ-finite measures ν on R^d − {0} such that

∫_{R^d−{0}} ν(dy) (1 ∧ ‖y‖²) < ∞.
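The integrability condition defining a Lévy measure can be made concrete. The following sketch (an added illustration; the constants C and β are arbitrary choices) verifies it numerically for the one-dimensional stable-type measure ν(dy) = C|y|^{−1−β} dy, the prototypical element of R(R − {0}), by comparing a quadrature on a truncated grid with the closed form obtained by splitting the integral at |y| = 1:

```python
import numpy as np

# One-dimensional stable-type Levy measure nu(dy) = C |y|^{-1-beta} dy,
# beta in (0, 2): infinite total mass, but integrable against (1 ^ y^2).
C, beta = 1.0, 0.5

# Splitting at |y| = 1:
#   int (1 ^ y^2) nu(dy) = 2C [ int_0^1 y^{1-beta} dy + int_1^inf y^{-1-beta} dy ]
#                        = 2C [ 1/(2-beta) + 1/beta ]
exact = 2 * C * (1.0 / (2.0 - beta) + 1.0 / beta)

# Numerical check on a truncated, log-spaced grid |y| in [1e-6, 1e6]
y = np.logspace(-6, 6, 1_000_000)
f = np.minimum(1.0, y**2) * C * y ** (-1.0 - beta)
# trapezoid rule; factor 2 accounts for the symmetric negative half-line
approx = 2 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y))
print(exact, approx)
```

For β = 1/2 both values are close to 16/3; the small discrepancy comes from truncating the heavy tail at |y| = 10⁶. Without the cap at 1, the same quadrature on y² alone would diverge as the grid is extended, which is exactly why the space R(R^d − {0}) is defined through the truncated moment 1 ∧ ‖y‖².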
R(R^d − {0}) is endowed with the structure of a measurable space, namely the smallest σ-algebra such that, for each ϕ ∈ C_b(R^d), the map

ν ↦ ∫_{R^d−{0}} ν(dy) (‖y‖² / (1 + ‖y‖²)) ϕ(y)

is measurable.
Consider a time-dependent integro-differential operator L = (L_t)_{t∈[0,T]} defined, for f ∈ C₀^∞(R^d), by

L_t f(x) = b(t,x)·∇f(x) + ∑_{i,j=1}^d (a_{ij}(t,x)/2) ∂²f/∂x_i∂x_j(x) + ∫_{R^d} [f(x+y) − f(x) − 1_{‖y‖≤1} y·∇f(x)] n(t, dy, x),   (3)

where a : [0,T] × R^d → M_{d×d}(R), b : [0,T] × R^d → R^d and n : [0,T] × R^d
→ R(R^d − {0}) are measurable functions.
For (t₀, x₀) ∈ [0,T] × R^d, we recall that a probability measure Q_{t₀,x₀} on (Ω, B_T) is a solution to the martingale problem for (L, C₀^∞(R^d)) on [0,T] if Q(X_u = x₀, 0 ≤ u ≤ t₀) = 1 and, for any f ∈ C₀^∞(R^d), the process

f(X_t) − f(x₀) − ∫_{t₀}^t L_s f(X_s) ds

is a (Q_{t₀,x₀}, (B_t))-martingale on [0,T]. Existence, uniqueness and regularity of solutions to martingale problems for integro-differential operators have been studied under various conditions on the coefficients [25, 16, 11, 20, 22, 12]. We make the following assumptions on the coefficients:

Assumption 1 (Boundedness of coefficients).
(i) ∃K₁ > 0, ∀(t,z) ∈ [0,T] × R^d, ‖b(t,z)‖ + ‖a(t,z)‖ + ∫ (1 ∧ ‖y‖²) n(t, dy, z) ≤ K₁;
(ii) lim_{R→∞} ∫₀^T sup_{z∈R^d} n(t, {‖y‖ ≥ R}, z) dt = 0,
where ‖·‖ denotes the Euclidean norm.

Assumption 2 (Continuity).
(i) For t ∈ [0,T] and B ∈ B(R^d − {0}), b(t,·), a(t,·) and n(t, B, ·) are continuous on R^d, uniformly in t ∈ [0,T].
(ii) For all z ∈ R^d, b(·,z), a(·,z) and n(·, B, z) are right-continuous on [0,T[, uniformly in z ∈ R^d.

Assumption 3 (Non-degeneracy). Either
(i) ∀R > 0, ∀t ∈ [0,T], inf_{‖z‖≤R} inf_{x∈R^d, ‖x‖=1} ᵗx·a(t,z)·x > 0, or
(ii) a ≡ 0 and there exist β ∈ ]0,2[, C > 0, K₂ > 0 and a family n^β(t, dy, z) of positive measures such that

∀(t,z) ∈ [0,T] × R^d,  n(t, dy, z) = n^β(t, dy, z) + C ‖y‖^{−(d+β)} dy,
∫ (1 ∧ ‖y‖^β) n^β(t, dy, z) ≤ K₂,  and  lim_{ε→0} sup_{z∈R^d} ∫_{‖y‖≤ε} ‖y‖^β n^β(t, dy, z) = 0.

Mikulevicius and Pragarauskas [22] show that if L satisfies Assumptions 1, 2 and 3 (which corresponds to a "non-degenerate Lévy operator" in the terminology of [22]), the martingale problem for (L, C₀^∞(R^d)) has a unique solution Q_{t₀,x₀} for every initial condition (t₀, x₀) ∈ [0,T] × R^d:

Proposition 1.
Under Assumptions 1, 2 and 3, the martingale problem for ((L_t)_{t∈[0,T]}, C₀^∞(R^d)) on [0,T] is well-posed: for any x₀ ∈ R^d, t₀ ∈ [0,T], there exists a unique probability measure Q_{t₀,x₀} on (Ω, B_T) such that Q(X_u = x₀, 0 ≤ u ≤ t₀) = 1 and, for any f ∈ C₀^∞(R^d),

f(X_t) − f(x₀) − ∫_{t₀}^t L_s f(X_s) ds

is a (Q_{t₀,x₀}, (B_t)_{t≥0})-martingale on [0,T]. Under Q_{t₀,x₀}, (X_t) is a Markov process and the evolution operator (Q_{t₀,t})_{t∈[t₀,T]} defined by

∀f ∈ C_b(R^d),  Q_{t₀,t} f(x₀) = E^{Q_{t₀,x₀}}[f(X_t)]   (4)

verifies the following continuity property:

∀f ∈ C₀^∞(R^d),  lim_{t↓t₀} Q_{t₀,t} f(x₀) = f(x₀).   (5)

In particular, denoting q_{t₀,t}(x₀, dy) the marginal distribution of X_t, the map

t ∈ [t₀,T[ ↦ ∫_{R^d} q_{t₀,t}(x₀, dy) f(y)   (6)

is right-continuous, for any f ∈ C₀^∞(R^d).

Proof. By a result of Mikulevicius and Pragarauskas [22, Theorem 5], the martingale problem is well-posed. We only need to prove that the continuity property (5) holds on [t₀,T[ for any x₀ ∈ R^d. For f ∈ C₀^∞(R^d),

Q_{t₀,t} f(x₀) = E^{Q_{t₀,x₀}}[f(X_t)] = f(x₀) + E^{Q_{t₀,x₀}}[ ∫_{t₀}^t L_s f(X_s) ds ].

Since f ∈ C₀^∞(R^d) and the coefficients of L are bounded (Assumption 1), ∫_{t₀}^t L_s f(X_s) ds is uniformly bounded for t ∈ [t₀,T]. By Assumption 2, since X is right-continuous, s ∈ [t₀,T[
↦ L_s f(X_s) is right-continuous up to a Q_{t₀,x₀}-null set, and

lim_{t↓t₀} ∫_{t₀}^t L_s f(X_s) ds = 0  a.s.

Applying the dominated convergence theorem yields

lim_{t↓t₀} E^{Q_{t₀,x₀}}[ ∫_{t₀}^t L_s f(X_s) ds ] = 0,  that is,  lim_{t↓t₀} Q_{t₀,t} f(x₀) = f(x₀),

implying that t ∈ [t₀,T[ ↦ Q_{t₀,t} f(x₀) is right-continuous at t₀. □

An important property of continuous-time Markov processes is their link with partial integro-differential equations (PIDEs), which allows one to use analytical tools for studying their probabilistic properties. In particular, the transition density of a Markov process solves the Kolmogorov forward equation (or Fokker-Planck equation) [24]. The following result shows that under Assumptions 1, 2 and 3 the forward equation corresponding to L has a unique solution:

Theorem 1 (Kolmogorov forward equation). Under Assumptions 1, 2 and 3, for each (t₀, x₀) ∈ [0,T] × R^d, there exists a unique family (p_{t₀,t}(x₀, dy), t ∈ [t₀,T]) of bounded measures on R^d such that p_{t₀,t₀}(x₀, ·) = ε_{x₀}, the point mass at x₀, and

∀t ∈ [t₀,T], ∀g ∈ C₀^∞(R^d),  (d/dt) ∫_{R^d} p_{t₀,t}(x₀, dy) g(y) = ∫_{R^d} p_{t₀,t}(x₀, dy) L_t g(y).   (7)

p_{t₀,t}(x₀, ·) is the conditional distribution of X_t given X_{t₀} = x₀, where (X, Q_{t₀,x₀}) is the unique solution of the martingale problem for (L, C₀^∞(R^d)) starting from (t₀, x₀).

Proof.
1. Under Assumptions 1, 2 and 3, Proposition 1 implies that the martingale problem for L on the domain C₀^∞(R^d) is well-posed. Denote (X, Q_{t₀,x₀}) the unique solution of the martingale problem for L with initial condition x₀ ∈ R^d at t₀, and define

∀t ≥ t₀, ∀g ∈ C_b(R^d),  Q_{t₀,t} g(x₀) = E^{Q_{t₀,x₀}}[g(X_t)].   (8)

By [22, Theorem 5], (Q_{s,t}, 0 ≤ s ≤ t ≤ T) is then a (time-inhomogeneous) semigroup, satisfying the continuity property (5) on [t₀,T[. If q_{t₀,t}(x₀, dy) denotes the law of X_t under Q_{t₀,x₀}, the martingale property implies that q_{t₀,t}(x₀, dy) satisfies

∀g ∈ C₀^∞(R^d),  ∫_{R^d} q_{t₀,t}(x₀, dy) g(y) = g(x₀) + ∫_{t₀}^t ∫_{R^d} q_{t₀,s}(x₀, dy) L_s g(y) ds.   (9)

Proposition 1 provides the right-continuity of t ∈ [t₀,T[ ↦ ∫_{R^d} q_{t₀,t}(x₀, dy) g(y) for any g in C₀^∞(R^d). Given Assumption 2, q_{t₀,t} is a solution of (7) with initial condition q_{t₀,t₀}(dy) = ε_{x₀}. This solution of (7) is in particular positive with mass 1. To show uniqueness of solutions of (7), we will rewrite (7) as the forward Kolmogorov equation associated with a homogeneous operator on a space-time domain and use uniqueness results for the corresponding homogeneous equation.

2. Let D ≡ C¹([0,T]) ⊗ C₀^∞(R^d) be the tensor product of C¹([0,T]) and C₀^∞(R^d). Define the operator A on D by

∀f ∈ C₀^∞(R^d), ∀γ ∈ C¹([0,T]),  A(fγ)(t,x) = γ(t) L_t f(x) + f(x) γ′(t).   (10)

[11, Theorem 7.1, Chapter 4] implies that for any x₀ ∈ R^d, if (X, Q_{t₀,x₀}) is a solution of the martingale problem for L, then the law of η_t = (t, X_t) under Q_{t₀,x₀} is a solution of the martingale problem for A: in particular, for any f ∈ C₀^∞(R^d) and γ ∈ C¹([0,T]),

∫ q_t(x₀, dy) f(y) γ(t) = f(x₀) γ(0) + ∫₀^t ∫ q_s(x₀, dy) A(fγ)(s,y) ds.
(11)

[11, Theorem 7.1, Chapter 4] also implies that if the law of η_t = (t, X_t) is a solution of the martingale problem for A, then the law of X is a solution of the martingale problem for L; namely, uniqueness holds for the martingale problem associated to the operator L on C₀^∞(R^d) if and only if uniqueness holds for the martingale problem associated to the operator A on D.
Define, for t ∈ [0,T] and h ∈ C_b([0,T] × R^d),

∀(s,x) ∈ [0,T] × R^d,  U_t h(s,x) = Q_{s,s+t}(h(t+s, ·))(x).   (12)

The properties of Q_{s,t} then imply that (U_t, t ≥
0) is a family of linear operators on C_b([0,T] × R^d) satisfying U_t U_r = U_{t+r} on C_b([0,T] × R^d) and U_t h → h as t ↓ 0, for h ∈ D. (U_t, t ≥
0) is thus a semigroup on C_b([0,T] × R^d) satisfying a continuity property of the form (5) on D. One observes that, for h ∈ D,

U_t h(s,x) = Q_{s,s+t}(h(t+s, ·))(x) = ∫_{R^d} q_{s,s+t}(x, dy) h(t+s, y).

Let t₀ = 0 in the sequel. Since the martingale problem holds for η_t = (t, X_t), for all h ∈ D the martingale property yields

∀t ∈ [0,T], ∀(s,x) ∈ [0,T] × R^d,  U_t h(s,x) = U₀ h(s,x) + ∫₀^t U_u Ah(s,x) du.   (13)

Considering again this equality for 0 ≤ ε < t,

U_t h − U_ε h = ∫_ε^t U_u Ah du.   (14)

Denoting C₀([0,T] × R^d) the set of continuous functions on [0,T] × R^d vanishing at infinity for the supremum norm, we intend to apply [11, Theorem 2.2, Chapter 4] to prove that (U_t, t ≥
0) generates a strongly continuous contraction semigroup on C₀([0,T] × R^d) with infinitesimal generator given by the closure Ā of A. First, one observes that D is dense in C₀([0,T] × R^d), implying that the domain of A is dense in C₀([0,T] × R^d) too. The well-posedness of the martingale problem for A implies that A satisfies the maximum principle. To conclude, it is sufficient to prove that Im(λ − A) is dense in C₀([0,T] × R^d) for some λ > 0, or even better that D is included in Im(λ − A). We recall that Im(λ − A) denotes the image of the domain of A under the map (λ − A).

3. Consider 0 ≤ ε < T; (14) yields, for h ∈ D,

∫_ε^T e^{−t} U_t h dt = ∫_ε^T e^{−t} U_ε h dt + ∫_ε^T e^{−t} ∫_ε^t U_s Ah ds dt
  = U_ε h [e^{−ε} − e^{−T}] + ∫_ε^T (∫_s^T e^{−t} dt) U_s Ah ds
  = U_ε h [e^{−ε} − e^{−T}] + ∫_ε^T [e^{−s} − e^{−T}] U_s Ah ds
  = e^{−ε} U_ε h − e^{−T} [ U_ε h + ∫_ε^T U_s Ah ds ] + ∫_ε^T e^{−s} U_s Ah ds.
Using (13) and gathering all the terms together yields

∫_ε^T e^{−t} U_t h dt = e^{−ε} U_ε h − e^{−T} U_T h + ∫_ε^T e^{−s} U_s Ah ds.   (15)

Let us focus on the quantity

∫_ε^T e^{−s} U_s Ah ds.
Observing that

(1/ε)[U_{t+ε} h − U_t h] = (1/ε)[U_ε − I] U_t h = U_t (1/ε)[U_ε − I] h,

and letting ε → 0, we obtain U_t (1/ε)[U_ε − I] h → U_t Ah. Hence the limit of (1/ε)[U_ε − I] U_t h as ε → 0 exists, implying that U_t h belongs to the domain of Ā for any h ∈ D. Thus ∫_ε^T e^{−s} U_s h ds belongs to the domain of Ā and

∫_ε^T e^{−s} U_s Ah ds = Ā ∫_ε^T e^{−s} U_s h ds.

Since U is a contraction semigroup, and given the right-continuity property of U_t on the space D, one may take ε → 0 and T → ∞ in (15), leading to

∫₀^∞ e^{−t} U_t h dt = U₀ h + Ā ∫₀^∞ e^{−s} U_s h ds.

Thus

(I − Ā) ∫₀^∞ e^{−s} U_s h ds = U₀ h = h,

yielding h ∈ Im(I − Ā). We have shown that (U_t, t ≥
0) generates a strongly continuous contraction semigroup on C₀([0,T] × R^d) with infinitesimal generator Ā (see [11, Theorem 2.2, Chapter 4]).

4. The Hille-Yosida theorem (see [11, Proposition 2.6, Chapter 1]) then implies that for all λ > 0,

Im(λ − Ā) = C₀([0,T] × R^d).
5. Now consider, for t ≥ 0 and h ∈ C_b([0,T] × R^d),

Q_t h(x₀) = ∫_{R^d} q_t(x₀, dy) h(t,y) = (U_t h)(0, x₀).   (16)

Using (9), we have, for ε > 0, f ∈ C₀^∞(R^d) and γ ∈ C¹([0,T]),

Q_t(fγ)(x₀) − Q_ε(fγ)(x₀) = ∫_ε^t ∫_{R^d} q_u(x₀, dy) A(fγ)(u,y) du = ∫_ε^t Q_u(A(fγ))(x₀) du.   (17)

By linearity, for any h ∈ D we have

Q_t h(x₀) − Q_ε h(x₀) = ∫_ε^t ∫_{R^d} q_u(x₀, dy) Ah(u,y) du = ∫_ε^t Q_u Ah(x₀) du.   (18)

Now let p_t(x₀, dy) be another solution of (7) such that p₀(x₀, dy) = ε_{x₀}(dy). Then p_t is also a solution of (9). An integration by parts implies that, for f ∈ C₀^∞(R^d) and γ ∈ C¹([0,T]),

∫_{R^d} p_t(x₀, dy) f(y) γ(t) = f(x₀) γ(0) + ∫₀^t ∫_{R^d} p_s(x₀, dy) A(fγ)(s,y) ds.   (19)

Define, for h ∈ C_b([0,T] × R^d),

∀t ∈ [0,T],  P_t h(x₀) = ∫_{R^d} p_t(x₀, dy) h(t,y).

Using (19) we have, for f ∈ C₀^∞(R^d), γ ∈ C¹([0,T]) and ε > 0,

P_t(fγ) − P_ε(fγ) = ∫_ε^t ∫_{R^d} p_u(x₀, dy) A(fγ)(u,y) du = ∫_ε^t P_u(A(fγ)) du,   (20)

which is identical to (17). Multiplying by e^{−λt} and integrating with respect to t, we obtain, for λ > 0,

λ ∫₀^∞ e^{−λt} P_t(fγ)(x₀) dt = f(x₀)γ(0) + λ ∫₀^∞ e^{−λt} ∫₀^t P_u(A(fγ))(x₀) du dt
  = f(x₀)γ(0) + λ ∫₀^∞ (∫_u^∞ e^{−λt} dt) P_u(A(fγ))(x₀) du
  = f(x₀)γ(0) + ∫₀^∞ e^{−λu} P_u(A(fγ))(x₀) du.

Similarly, from (17) we obtain, for any λ > 0,

λ ∫₀^∞ e^{−λt} Q_t(fγ)(x₀) dt = f(x₀)γ(0) + ∫₀^∞ e^{−λu} Q_u(A(fγ))(x₀) du.

Hence for f ∈ C₀^∞(R^d) and γ ∈ C¹([0,T]) we have

∫₀^∞ e^{−λt} Q_t(λ − A)(fγ)(x₀) dt = f(x₀)γ(0) = ∫₀^∞ e^{−λt} P_t(λ − A)(fγ)(x₀) dt.
(21)

By linearity, for any h ∈ D we have

∫₀^∞ e^{−λt} Q_t(λ − A) h(x₀) dt = h(0, x₀) = ∫₀^∞ e^{−λt} P_t(λ − A) h(x₀) dt.   (22)

Using the density of D in C₀([0,T] × R^d), we conclude that, for h in the domain of Ā,

∫₀^∞ e^{−λt} Q_t(λ − Ā) h(x₀) dt = ∫₀^∞ e^{−λt} P_t(λ − Ā) h(x₀) dt.   (23)

Finally, using the fact that Im(λ − Ā) = C₀([0,T] × R^d), we conclude that

∀h ∈ C₀([0,T] × R^d),  ∫₀^∞ e^{−λt} Q_t h(x₀) dt = ∫₀^∞ e^{−λt} P_t h(x₀) dt,   (24)

so the Laplace transform of t
↦ P_t h(x₀) is uniquely determined. Using (20),

∀ε > 0, ∀h ∈ D,  P_t h − P_ε h = ∫_ε^t ∫_{R^d} p_u(x₀, dy) Ah(u,y) du = ∫_ε^t P_u(Ah) du   (25)

by linearity, which allows one to show that, for any h ∈ D, t
↦ P_t h(x₀) is right-continuous:

∀h ∈ D,  lim_{t′↓t} P_{t′} h(x₀) = P_t h(x₀).

An identical argument using (18) shows that t
↦ Q_t h(x₀) is right-continuous. These two right-continuous functions have the same Laplace transform by (24), so they are equal. Thus we have shown that

∀h ∈ D,  ∫ h(t,y) q_t(x₀, dy) = ∫ h(t,y) p_t(x₀, dy).   (26)

Since D is dense in C₀([0,T] × R^d) for the supremum norm, (26) also holds for h ∈ C₀([0,T] × R^d). By [11, Proposition 4.4, Chapter 3], C₀([0,T] × R^d) is convergence determining, hence separating, allowing us to conclude that p_t(x₀, dy) = q_t(x₀, dy). □

Remark 2.1. Assumptions 1, 2 and 3 are sufficient but not necessary for the well-posedness of the martingale problem. For example, the boundedness Assumption 1 may be relaxed to local boundedness, using localization techniques developed in [23, 25]. Such extensions are not trivial and, in the unbounded case, additional conditions are needed to ensure that X does not explode (see [25, Chapter 10]).

The following assumptions on the local characteristics of the semimartingale ξ are almost-sure analogs of Assumptions 1, 2 and 3:

Assumption 4. β, δ are bounded on [0,T]:

∃K > 0, ∀t ∈ [0,T],  ‖β_t‖ ≤ K, ‖δ_t‖ ≤ K  a.s.

Assumption 5.
The jump compensator μ has a density m(ω, t, dy) with respect to the Lebesgue measure dt on [0,T] which satisfies

(i) ∃K₂ > 0, ∀t ∈ [0,T],  ∫_{R^d} (1 ∧ ‖y‖²) m(·, t, dy) ≤ K₂ < ∞  a.s., and
(ii) lim_{R→∞} ∫₀^T m(·, t, {‖y‖ ≥ R}) dt = 0  a.s.

Assumption 6.
Either

(i) ∃ε > 0, ∀t ∈ [0,T[,  ᵗδ_t δ_t ≥ ε I_d  a.s., or
(ii) δ ≡ 0 and there exist β ∈ ]0,2[, c, K₃ > 0, and a family m^β(t, dy) of positive measures, such that

∀t ∈ [0,T[,  m(t, dy) = m^β(t, dy) + c ‖y‖^{−(d+β)} dy  a.s.,
∫ (1 ∧ ‖y‖^β) m^β(t, dy) ≤ K₃,  and  lim_{ε→0} ∫_{‖y‖≤ε} ‖y‖^β m^β(t, dy) = 0  a.s.

Note that Assumption 5 is only slightly stronger than stating that m is a Lévy kernel, since in that case we already have ∫ (1 ∧ ‖y‖²) m(·, t, dy) < ∞. Assumption 6 extends the "ellipticity" assumption to the case of pure-jump semimartingales and holds for a large class of semimartingales driven by stable or tempered stable processes.

Theorem 2 (Markovian projection). Assume there exist measurable functions a : [0,T] × R^d → M_{d×d}(R), b : [0,T] × R^d → R^d and n : [0,T] × R^d
→ R(R^d − {0}) satisfying Assumption 2 such that, for all t ∈ [0,T] and B ∈ B(R^d − {0}),

E[β_t | ξ_{t−}] = b(t, ξ_{t−})  a.s.,
E[ᵗδ_t δ_t | ξ_{t−}] = a(t, ξ_{t−})  a.s.,
E[m(·, t, B) | ξ_{t−}] = n(t, B, ξ_{t−})  a.s.   (27)

If (β, δ, m) satisfies Assumptions 4, 5, 6, then there exists a Markov process ((X_t)_{t∈[0,T]}, Q_{ξ₀}), with infinitesimal generator L defined by (3), whose marginal distributions mimic those of ξ:

∀t ∈ [0,T],  X_t (d)= ξ_t.

X is the weak solution of the stochastic differential equation

X_t = ξ₀ + ∫₀^t b(u, X_u) du + ∫₀^t Σ(u, X_u) dB_u + ∫₀^t ∫_{‖y‖≤1} y Ñ(du dy) + ∫₀^t ∫_{‖y‖>1} y N(du dy),   (28)

where (B_t) is an n-dimensional Brownian motion, N is an integer-valued random measure on [0,T] × R^d with compensator n(t, dy, X_{t−}) dt, Ñ = N − n is the associated compensated random measure, and Σ ∈ C⁰([0,T] × R^d, M_{d×n}(R)) is such that ᵗΣ(t,z)Σ(t,z) = a(t,z). We will call (X, Q_{ξ₀}) the Markovian projection of ξ.

Proof.
First, we observe that n is a Lévy kernel: for any (t,z) ∈ [0,T] × R^d,

∫_{R^d} (1 ∧ ‖y‖²) n(t, dy, z) = E[ ∫_{R^d} (1 ∧ ‖y‖²) m(t, dy) | ξ_{t−} = z ] < ∞  a.s.,

using Fubini's theorem and Assumption 5. Consider now the case of a pure-jump semimartingale verifying (ii) and define, for B ∈ B(R^d − {0}) and z ∈ R^d,

n^β(t, B, z) = E[ ∫_B ( m(t, dy, ω) − c ‖y‖^{−(d+β)} dy ) | ξ_{t−} = z ].

As argued above, n^β is a Lévy kernel on R^d. Assumptions 4 and 5 imply that (b, a, n) satisfies Assumption 1. Furthermore, under either assumption (i) or (ii) for (δ, m), Assumption 3 holds for (b, a, n). Together with Assumption 2, this yields that L is a non-degenerate operator, and Proposition 1 implies that the martingale problem for (L_t)_{t∈[0,T]} on the domain C₀^∞(R^d) is well-posed. Denote ((X_t)_{t∈[0,T]}, Q_{ξ₀}) its unique solution starting from ξ₀ and q_t(ξ₀, dy) the marginal distribution of X_t.
Let f ∈ C₀^∞(R^d). Itô's formula yields

f(ξ_t) = f(ξ₀) + ∑_{i=1}^d ∫₀^t (∂f/∂x_i)(ξ_{s−}) dξ^i_s + ½ ∫₀^t tr[∇²f(ξ_{s−}) ᵗδ_s δ_s] ds
  + ∑_{s≤t} [ f(ξ_{s−} + Δξ_s) − f(ξ_{s−}) − ∑_{i=1}^d (∂f/∂x_i)(ξ_{s−}) Δξ^i_s ]
= f(ξ₀) + ∫₀^t ∇f(ξ_{s−})·β_s ds + ∫₀^t ∇f(ξ_{s−})·δ_s dW_s + ½ ∫₀^t tr[∇²f(ξ_{s−}) ᵗδ_s δ_s] ds
  + ∫₀^t ∫_{‖y‖≤1} ∇f(ξ_{s−})·y M̃(ds dy)
  + ∫₀^t ∫_{R^d} ( f(ξ_{s−}+y) − f(ξ_{s−}) − 1_{‖y‖≤1} y·∇f(ξ_{s−}) ) M(ds dy).

We note that:
• since ‖∇f‖_∞ < ∞, ∫₀^t ∫_{‖y‖≤1} ∇f(ξ_{s−})·y M̃(ds dy) is a square-integrable martingale;
• ∫₀^t ∫_{‖y‖>1} ∇f(ξ_{s−})·y M(ds dy) < ∞ a.s., since ‖∇f‖_∞ < ∞;
• since ∇f(ξ_{s−}) and δ_s are uniformly bounded on [0,T], ∫₀^t ∇f(ξ_{s−})·δ_s dW_s is a martingale.
Hence, taking expectations, we obtain

E^P[f(ξ_t)] = E^P[f(ξ₀)] + E^P[ ∫₀^t ∇f(ξ_{s−})·β_s ds ] + E^P[ ½ ∫₀^t tr[∇²f(ξ_{s−}) ᵗδ_s δ_s] ds ]
  + E^P[ ∫₀^t ∫_{R^d} ( f(ξ_{s−}+y) − f(ξ_{s−}) − 1_{‖y‖≤1} y·∇f(ξ_{s−}) ) M(ds dy) ]
= E^P[f(ξ₀)] + E^P[ ∫₀^t ∇f(ξ_{s−})·β_s ds ] + E^P[ ½ ∫₀^t tr[∇²f(ξ_{s−}) ᵗδ_s δ_s] ds ]
  + E^P[ ∫₀^t ∫_{R^d} ( f(ξ_{s−}+y) − f(ξ_{s−}) − 1_{‖y‖≤1} y·∇f(ξ_{s−}) ) m(s, dy) ds ].

Since

E^P[ ∫₀^t |∇f(ξ_{s−})·β_s| ds ] ≤ ‖∇f‖_∞ E^P[ ∫₀^t ‖β_s‖ ds ] < ∞,
E^P[ ½ ∫₀^t |tr[∇²f(ξ_{s−}) ᵗδ_s δ_s]| ds ] ≤ ‖∇²f‖_∞ E^P[ ∫₀^t ‖δ_s‖² ds ] < ∞,
E^P[ ∫₀^t ∫_{R^d} | f(ξ_{s−}+y) − f(ξ_{s−}) − 1_{‖y‖≤1} y·∇f(ξ_{s−}) | m(s, dy) ds ]
  ≤ ‖∇²f‖_∞ E^P[ ∫₀^t ∫_{‖y‖≤1} ‖y‖² m(s, dy) ds ] + 2‖f‖_∞ E^P[ ∫₀^t ∫_{‖y‖>1} m(s, dy) ds ] < ∞,

we may apply Fubini's theorem to obtain

E^P[f(ξ_t)] = E^P[f(ξ₀)] + ∫₀^t E^P[∇f(ξ_{s−})·β_s] ds + ½ ∫₀^t E^P[ tr[∇²f(ξ_{s−}) ᵗδ_s δ_s] ] ds
  + ∫₀^t E^P[ ∫_{R^d} ( f(ξ_{s−}+y) − f(ξ_{s−}) − 1_{‖y‖≤1} y·∇f(ξ_{s−}) ) m(s, dy) ] ds.

Conditioning on ξ_{s−} and using the iterated expectation property,

E^P[f(ξ_t)] = E^P[f(ξ₀)] + ∫₀^t E^P[ ∇f(ξ_{s−})·E^P[β_s | ξ_{s−}] ] ds + ½ ∫₀^t E^P[ tr[∇²f(ξ_{s−}) E^P[ᵗδ_s δ_s | ξ_{s−}]] ] ds
  + ∫₀^t E^P[ E^P[ ∫_{R^d} ( f(ξ_{s−}+y) − f(ξ_{s−}) − 1_{‖y‖≤1} y·
∇f(ξ_{s−}) ) m(s, dy) | ξ_{s−} ] ] ds
= E^P[f(ξ₀)] + ∫₀^t E^P[∇f(ξ_{s−})·b(s, ξ_{s−})] ds + ½ ∫₀^t E^P[ tr[∇²f(ξ_{s−}) a(s, ξ_{s−})] ] ds
  + ∫₀^t E^P[ ∫_{R^d} ( f(ξ_{s−}+y) − f(ξ_{s−}) − 1_{‖y‖≤1} y·∇f(ξ_{s−}) ) n(s, dy, ξ_{s−}) ] ds.

Hence

E^P[f(ξ_t)] = E^P[f(ξ₀)] + E^P[ ∫₀^t L_s f(ξ_{s−}) ds ].   (29)

Let p_t(dy) denote the law of ξ_t under P; (29) writes:

∫_{R^d} p_t(dy) f(y) = ∫_{R^d} p₀(dy) f(y) + ∫₀^t ∫_{R^d} p_s(dy) L_s f(y) ds.   (30)

Hence p_t(dy) satisfies the Kolmogorov forward equation (7) for the operator L with the initial condition p₀(dy) = μ₀(dy), where μ₀ denotes the law of ξ₀. Applying Theorem 1, the flows q_t(ξ₀, dy) of X_t and p_t(dy) of ξ_t are the same on [0,T]. This ends the proof. □

Remark 2.2 (Mimicking conditional distributions). The construction in Theorem 2 may also be carried out using

E[β_t | ξ_{t−}, F₀] = b₀(t, ξ_{t−})  a.s.,
E[ᵗδ_t δ_t | ξ_{t−}, F₀] = a₀(t, ξ_{t−})  a.s.,
E[m(·, t, B) | ξ_{t−}, F₀] = n₀(t, B, ξ_{t−})  a.s.,

instead of (b, a, n) in (27). If (b₀, a₀, n₀) satisfies Assumption 3, then following the same procedure we can construct a Markov process (X, Q_{ξ₀}) whose infinitesimal generator has coefficients (b₀, a₀, n₀) such that

∀f ∈ C_b(R^d), ∀t ∈ [0,T],  E^P[f(ξ_t) | F₀] = E^{Q_{ξ₀}}[f(X_t)],

i.e. the marginal distribution of X_t matches the conditional distribution of ξ_t given F₀.

Remark 2.3.
For Ito processes (i.e. continuous semimartingales of the form (2) with μ = 0), Gyöngy [13, Theorem 4.6] gives a "mimicking theorem" under the non-degeneracy condition ᵗδ_t·δ_t ≥ εI_d, which corresponds to our Assumption 6, but without requiring the continuity condition (Assumption 2) on (b, a, n). Brunick & Shreve [5] extend this result by relaxing the ellipticity condition of [13]. In both cases, the mimicking process X is constructed as a weak solution to the SDE (28) (without the jump term), but this weak solution does not in general have the Markov property: indeed, it need not even be unique under the assumptions used in [13, 5]. In particular, in the setting used in [13, 5], the law of X is not uniquely determined by its 'infinitesimal generator' L. This makes it difficult to 'compute' quantities involving X, either through simulation or by solving a partial differential equation.
By contrast, under the additional continuity condition (Assumption 2) on the projected coefficients, X is a Markov process whose law is uniquely determined by its infinitesimal generator L and whose marginals are the unique solution of the Kolmogorov forward equation (7). This makes it possible to compute the marginals of X by simulating the SDE (28) or by solving a forward PIDE.
It remains to be seen whether the additional Assumption 2 is verified in most examples of interest. We will show in Section 4 that this is indeed the case.

Remark 2.4 (Markovian projection of a Markov process). The term
Markovian projection is justified by the following remark: if the semimartingale ξ is already a Markov process and satisfies the assumptions of Theorem 2, then the uniqueness in law of the solution to the martingale problem for L implies that the Markovian projection (X, Q_{ξ₀}) of ξ has the same law as (ξ, P_{ξ₀}). So the map which associates (the law Q_{ξ₀} of) X to ξ may indeed be viewed as a projection; in particular it is involutive.
This property contrasts with other constructions of mimicking processes [1, 7, 13, 14, 21] which fail to be involutive. A striking example is the construction, by Hamza & Klebaner [14], of discontinuous martingales whose marginals match those of a Gaussian Markov process.

2.4 Forward equations for semimartingales

Theorem 1 and Theorem 2 allow us to obtain a forward PIDE which extends the Kolmogorov forward equation to semimartingales which verify the assumptions of Theorem 2:
Theorem 3.
Let ξ be a semimartingale given by (2) satisfying the assumptions of Theorem 2. Denote p_t(dx) the law of ξ_t on R^d. Then (p_t)_{t∈[0,T]} is the unique solution, in the sense of distributions, of the forward equation

∀t ∈ [0,T],  ∂p_t/∂t = L⋆_t p_t,   (31)

with initial condition p₀ = μ₀, where μ₀ denotes the law of ξ₀, and where L⋆_t is the adjoint of L_t, defined by

∀g ∈ C₀^∞(R^d),  L⋆_t g(x) = −∇·[b(t,x) g(x)] + ∑_{i,j=1}^d ∂²/∂x_i∂x_j [ (a_{ij}(t,x)/2) g(x) ]
  + ∫_{R^d} [ g(x−z) n(t, dz, x−z) − g(x) n(t, dz, x) − 1_{‖z‖≤1} z·∇[g(x) n(t, dz, x)] ],   (32)

where the coefficients b, a, n are defined as in (27).

Proof. Existence and uniqueness are a direct consequence of Theorem 1 and Theorem 2. To finish the proof, let us compute L⋆_t. Viewing p_t as an element of the dual of C₀^∞(R^d), (7) rewrites: for f ∈ C₀^∞(R^d),

∫ f(y) (dp_t/dt)(dy) = ∫ p_t(dy) L_t f(y).

We have

∀f ∈ C₀^∞(R^d), ∀ 0 ≤ t ≤ t′ < T,  ⟨(p_{t′} − p_t)/(t′ − t), f⟩ →_{t′→t} ⟨p_t, L_t f⟩ = ⟨L⋆_t p_t, f⟩,

where ⟨·,·⟩ is the duality product. For z ∈ R^d, define the translation operator τ_z by τ_z f(x) = f(x+z). Then

∫ p_t(dx) L_t f(x)
= ∫ p_t(dx) [ b(t,x)·∇f(x) + ½ tr(∇²f(x) a(t,x)) + ∫_{‖z‖>1} (τ_z f(x) − f(x)) n(t, dz, x)
  + ∫_{‖z‖≤1} (τ_z f(x) − f(x) − z·∇f(x)) n(t, dz, x) ]
= ∫ [ −f(x) ∇·[b(t,x) p_t(dx)] + f(x) ∑_{i,j} ∂²/∂x_i∂x_j [(a_{ij}(t,x)/2) p_t(dx)]
  + ∫_{‖z‖>1} f(x) ( τ_{−z}(p_t(dx) n(t, dz, x)) − p_t(dx) n(t, dz, x) )
  + ∫_{‖z‖≤1} f(x) ( τ_{−z}(p_t(dx) n(t, dz, x)) − p_t(dx) n(t, dz, x) − z·∇(p_t(dx) n(t, dz, x)) ) ],

where the derivatives are taken in the sense of distributions, allowing us to identify L⋆_t. □

An important property of the construction of X in Theorem 2 is that it preserves the (local) martingale property: if ξ is a local martingale, so is X:

Proposition 2 (Martingale-preserving property).
1. If ξ is a local martingale which satisfies the assumptions of Theorem 2, then its Markovian projection (X_t)_{t∈[0,T]} is a local martingale on (Ω, B_t, Q_{ξ₀}).
2. If, furthermore,

E^P[ ∫₀^T ∫_{R^d} ‖y‖² m(t, dy) dt ] < ∞,

then (X_t)_{t∈[0,T]} is a square-integrable martingale.

Proof.
1) If $\xi$ is a local martingale, then the uniqueness of its semimartingale decomposition entails that
$$\beta_t + \int_{\|y\|\ge 1} y\, m(t,dy) = 0, \qquad dt\times P\text{-a.e.},$$
hence
$$Q_\xi\Big( \forall t\in[0,T],\ \int_0^t ds\,\Big[ b(s,X_{s-}) + \int_{\|y\|\ge 1} y\, n(s,dy,X_{s-})\Big] = 0 \Big) = 1.$$
The assumptions on $m,\delta$ then entail that $X$, as a sum of an Itô integral and a compensated Poisson integral, is a local martingale.
2) If $E^P\big[\int_0^T\!\int \|y\|^2\,\mu(dt\,dy)\big]<\infty$, then
$$E^{Q_\xi}\Big[\int_0^T\!\!\int \|y\|^2\, n(t,dy,X_{t-})\,dt\Big] < \infty,$$
and the compensated Poisson integral in $X$ is a square-integrable martingale.

The representation (2) is not the most commonly used in applications, where a process is constructed as the solution to a stochastic differential equation driven by a Brownian motion and a Poisson random measure:
$$\zeta_t = \zeta_0 + \int_0^t \beta_s\,ds + \int_0^t \delta_s\,dW_s + \int_0^t\!\!\int \psi_s(y)\,\tilde N(ds\,dy), \qquad (33)$$
where $\zeta_0\in\mathbb{R}^d$, $W$ is a standard $\mathbb{R}^n$-valued Wiener process, $\beta$ and $\delta$ are non-anticipative càdlàg processes, and $N$ is a Poisson random measure on $[0,T]\times\mathbb{R}^d$ with intensity $\nu(dy)\,dt$, where
$$\int_{\mathbb{R}^d} \big(1\wedge\|y\|^2\big)\,\nu(dy) < \infty, \qquad \tilde N = N - \nu(dy)\,dt, \qquad (34)$$
and the random jump amplitude $\psi: [0,T]\times\Omega\times\mathbb{R}^d \to \mathbb{R}^d$ is $\mathcal{P}\otimes\mathcal{B}(\mathbb{R}^d)$-measurable, where $\mathcal{P}$ is the predictable $\sigma$-algebra on $[0,T]\times\Omega$. In this section, we shall assume that
$$\forall t\in[0,T],\quad \psi_t(\omega,0)=0 \quad\text{and}\quad E\Big[\int_0^t\!\!\int_{\mathbb{R}^d}\big(1\wedge\|\psi_s(\cdot,y)\|^2\big)\,\nu(dy)\,ds\Big] < \infty.$$
The difference between this representation and (2) is the presence of a random jump amplitude $\psi_t(\omega,\cdot)$ in (33). The relation between these two representations for semimartingales has been discussed in great generality in [10, 18]. Here we give a less general result which suffices for our purpose. The following result expresses $\zeta$ in the form (2), suitable for applying Theorem 2.

Lemma 1 (Absorbing the jump amplitude in the compensator). The process
$$\zeta_t = \zeta_0 + \int_0^t \beta_s\,ds + \int_0^t \delta_s\,dW_s + \int_0^t\!\!\int \psi_s(z)\,\tilde N(ds\,dz)$$
can also be represented as
$$\zeta_t = \zeta_0 + \int_0^t \beta_s\,ds + \int_0^t \delta_s\,dW_s + \int_0^t\!\!\int y\,\tilde M(ds\,dy), \qquad (35)$$
where $M$ is an integer-valued random measure on $[0,T]\times\mathbb{R}^d$ with compensator $\mu(\omega,dt,dy)$ given by
$$\forall A\in\mathcal{B}(\mathbb{R}^d\setminus\{0\}),\qquad \mu(\omega,dt,A) = \nu\big(\psi_t^{-1}(\omega,A)\big)\,dt,$$
where $\psi_t^{-1}(\omega,A) = \{z\in\mathbb{R}^d,\ \psi_t(\omega,z)\in A\}$ denotes the inverse image of $A$ under the partial map $\psi_t$.

Proof. The result can be deduced from [10, Théorème 12], but we sketch the proof here for completeness. A Poisson random measure $N$ on $[0,T]\times\mathbb{R}^d$ can be represented as a counting measure associated with a random sequence $(T_n,U_n)$ with values in $[0,T]\times\mathbb{R}^d$:
$$N = \sum_{n\ge 1} 1_{\{T_n,\, U_n\}}. \qquad (36)$$
Let $M$ be the integer-valued random measure defined by
$$M = \sum_{n\ge 1} 1_{\{T_n,\, \psi_{T_n}(\cdot,\, U_n)\}}. \qquad (37)$$
The predictable compensator $\mu$ of $M$ is characterized by the following property [17, Thm 1.8]: for any positive $\mathcal{P}\otimes\mathcal{B}(\mathbb{R}^d)$-measurable map $\chi: [0,T]\times\Omega\times\mathbb{R}^d\to\mathbb{R}^+$ and any $A\in\mathcal{B}(\mathbb{R}^d\setminus\{0\})$,
$$E\Big[\int_0^t\!\!\int_A \chi(s,y)\,M(ds\,dy)\Big] = E\Big[\int_0^t\!\!\int_A \chi(s,y)\,\mu(ds\,dy)\Big]. \qquad (38)$$
Similarly, for $B\in\mathcal{B}(\mathbb{R}^d\setminus\{0\})$,
$$E\Big[\int_0^t\!\!\int_B \chi(s,y)\,N(ds\,dy)\Big] = E\Big[\int_0^t\!\!\int_B \chi(s,y)\,\nu(dy)\,ds\Big].$$
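In a simple special case, the construction (36)-(37) can be illustrated numerically: for a compound Poisson process with intensity $\lambda$ and $N(0,1)$ jump sizes, mapped through a deterministic amplitude, the expected number of jumps of $M$ falling in a Borel set $A$ over $[0,T]$ should equal $T\,\nu(\psi^{-1}(A))$. A minimal Monte Carlo sketch (all parameter choices, $\lambda=2$, $\psi(z)=z^3$, $A=[1,\infty)$, are illustrative and not taken from the paper):

```python
import math
import random

random.seed(42)

lam = 2.0        # jump intensity: nu(dz) = lam * phi(z) dz, phi = standard normal density
T = 1.0          # time horizon
n_paths = 20000  # number of simulated paths

def psi(z):
    # deterministic jump amplitude (psi(0) = 0, invertible)
    return z ** 3

# Count, per path, the jumps of M = sum_n delta_{(T_n, psi(U_n))} falling in A = [1, oo)
total = 0
for _ in range(n_paths):
    t = random.expovariate(lam)          # first jump time
    while t <= T:
        u = random.gauss(0.0, 1.0)       # jump size U_n ~ N(0, 1)
        if psi(u) >= 1.0:                # psi(U_n) lands in A
            total += 1
        t += random.expovariate(lam)     # next inter-arrival time

est = total / n_paths
# compensator prediction: E[ M([0,T] x A) ] = T * nu(psi^{-1}(A)) = T * lam * P(U >= 1)
expected = T * lam * 0.5 * math.erfc(1.0 / math.sqrt(2.0))
print(est, expected)
```

With this sample size the two numbers agree up to a Monte Carlo error of order $10^{-2}$, consistent with the compensator identity of Lemma 1.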
Using formulae (36) and (37),
$$E\Big[\int_0^t\!\!\int_A \chi(s,y)\,M(ds\,dy)\Big] = E\Big[\sum_{n\ge1,\,T_n\le t} \chi\big(T_n, \psi_{T_n}(\cdot,U_n)\big)\,1_A\big(\psi_{T_n}(\cdot,U_n)\big)\Big] = E\Big[\int_0^t\!\!\int_{\psi_s^{-1}(\cdot,A)} \chi\big(s,\psi_s(\cdot,z)\big)\,N(ds\,dz)\Big] = E\Big[\int_0^t\!\!\int_{\psi_s^{-1}(\cdot,A)} \chi\big(s,\psi_s(\cdot,z)\big)\,\nu(dz)\,ds\Big],$$
so that
$$E\Big[\int_0^t\!\!\int_A \chi(s,y)\,\mu(ds\,dy)\Big] = E\Big[\int_0^t\!\!\int_{\psi_s^{-1}(\cdot,A)} \chi\big(s,\psi_s(\cdot,z)\big)\,\nu(dz)\,ds\Big].$$
Since $\psi$ is a predictable random function, the uniqueness of the predictable compensator $\mu$ (take $\phi\equiv\mathrm{Id}$ in [17, Thm 1.8]) entails
$$\mu(\omega,dt,A) = \nu\big(\psi_t^{-1}(\omega,A)\big)\,dt. \qquad (39)$$
Formula (39) defines a random measure $\mu$ which is a Lévy kernel:
$$\int_0^t\!\!\int \big(1\wedge\|y\|^2\big)\,\mu(dy\,ds) = \int_0^t\!\!\int \big(1\wedge\|\psi_s(\cdot,y)\|^2\big)\,\nu(dy)\,ds < \infty.$$

In the case where $\psi_t(\omega,\cdot):\mathbb{R}^d\to\mathbb{R}^d$ is invertible and differentiable, we can characterize the density of the compensator $\mu$ as follows:

Lemma 2 (Differentiable case). If the Lévy measure $\nu(dz)$ has a density $\nu(z)$ and if $\psi_t(\omega,\cdot):\mathbb{R}^d\to\mathbb{R}^d$ is a $C^1(\mathbb{R}^d,\mathbb{R}^d)$-diffeomorphism, then $\zeta$, given in (33), has the representation
$$\zeta_t = \zeta_0 + \int_0^t\beta_s\,ds + \int_0^t\delta_s\,dW_s + \int_0^t\!\!\int y\,\tilde M(ds\,dy),$$
where $M$ is an integer-valued random measure with compensator
$$m(\omega;t,y)\,dt\,dy = 1_{\psi_t(\omega,\mathbb{R}^d)}(y)\,\big|\det\nabla_y\psi_t\big|^{-1}\big(\omega,\psi_t^{-1}(\omega,y)\big)\,\nu\big(\psi_t^{-1}(\omega,y)\big)\,dt\,dy,$$
where $\nabla_y\psi_t$ denotes the Jacobian matrix of $\psi_t(\omega,\cdot)$.

Proof. We recall from the proof of Lemma 1:
$$E\Big[\int_0^t\!\!\int_A \chi(s,y)\,\mu(ds\,dy)\Big] = E\Big[\int_0^t\!\!\int_{\psi_s^{-1}(\cdot,A)} \chi\big(s,\psi_s(\cdot,z)\big)\,\nu(z)\,ds\,dz\Big].$$
Proceeding to the change of variable $\psi_s(\cdot,z)=y$,
$$E\Big[\int_0^t\!\!\int_{\psi_s^{-1}(\cdot,A)} \chi\big(s,\psi_s(\cdot,z)\big)\,\nu(z)\,ds\,dz\Big] = E\Big[\int_0^t\!\!\int_A 1_{\psi_s(\mathbb{R}^d)}(y)\,\chi(s,y)\,\big|\det\nabla\psi_s\big|^{-1}\big(\cdot,\psi_s^{-1}(\cdot,y)\big)\,\nu\big(\psi_s^{-1}(\cdot,y)\big)\,ds\,dy\Big].$$
The density appearing on the right-hand side is predictable, since $\psi$ is a predictable random function. The uniqueness of the predictable compensator $\mu$ yields the result.

Let us combine Lemma 2 and Theorem 2. To proceed, we make a further assumption.

Assumption 7. The Lévy measure $\nu$ admits a density $\nu(y)$ with respect to the Lebesgue measure on $\mathbb{R}^d$, and there exists $K>0$ such that, for all $t\in[0,T]$,
$$\int_0^t\!\!\int_{\|y\|>1} \big(1\wedge\|\psi_s(\cdot,y)\|^2\big)\,\nu(y)\,dy\,ds < K \quad a.s. \qquad\text{and}\qquad \lim_{R\to\infty}\int_0^T \nu\big(\psi_t^{-1}(\{\|y\|\ge R\})\big)\,dt = 0 \quad a.s.$$

Theorem 4. Let $(\zeta_t)$ be an Itô semimartingale defined on $[0,T]$ by the decomposition
$$\zeta_t = \zeta_0 + \int_0^t\beta_s\,ds + \int_0^t\delta_s\,dW_s + \int_0^t\!\!\int \psi_s(y)\,\tilde N(ds\,dy),$$
where $\psi_t(\omega,\cdot):\mathbb{R}^d\to\mathbb{R}^d$ is invertible and differentiable with inverse $\phi_t(\omega,\cdot)$. Define
$$m(t,y) = 1_{\{y\in\psi_t(\mathbb{R}^d)\}}\,\big|\det\nabla\psi_t\big|^{-1}\big(\psi_t^{-1}(y)\big)\,\nu\big(\psi_t^{-1}(y)\big). \qquad (40)$$
Assume there exist measurable functions $a:[0,T]\times\mathbb{R}^d\to M_{d\times d}(\mathbb{R})$, $b:[0,T]\times\mathbb{R}^d\to\mathbb{R}^d$ and a Lévy kernel $j$ on $[0,T]\times(\mathbb{R}^d\setminus\{0\})$ satisfying Assumption 2, such that for $(t,z)\in[0,T]\times\mathbb{R}^d$ and $B\in\mathcal{B}(\mathbb{R}^d\setminus\{0\})$,
$$E[\beta_t\,|\,\zeta_{t-}] = b(t,\zeta_{t-})\ a.s.,\qquad E\big[{}^t\delta_t\,\delta_t\,|\,\zeta_{t-}\big] = a(t,\zeta_{t-})\ a.s.,\qquad E[m(\cdot,t,B)\,|\,\zeta_{t-}] = j(t,B,\zeta_{t-})\ a.s. \qquad (41)$$
If $\beta$ and $\delta$ satisfy Assumption 4, $\nu$ satisfies Assumption 7, and $(\delta,m)$ satisfy Assumptions 5-6, then the stochastic differential equation
$$X_t = \zeta_0 + \int_0^t b(u,X_u)\,du + \int_0^t \Sigma(u,X_u)\,dB_u + \int_0^t\!\!\int y\,\tilde J(du\,dy), \qquad (42)$$
where $(B_t)$ is an $n$-dimensional Brownian motion, $J$ is an integer-valued random measure on $[0,T]\times\mathbb{R}^d$ with compensator $j(t,dy,X_{t-})\,dt$, $\tilde J = J - j$, and $\Sigma:[0,T]\times\mathbb{R}^d\to M_{d\times n}(\mathbb{R})$ is a continuous function such that ${}^t\Sigma(t,z)\,\Sigma(t,z) = a(t,z)$, admits a unique weak solution $((X_t)_{t\in[0,T]}, Q_\zeta)$ whose marginal distributions mimic those of $\zeta$:
$$\forall t\in[0,T],\qquad X_t \overset{d}{=} \zeta_t.$$
Under $Q_\zeta$, $X$ is a Markov process with infinitesimal generator $L$ given by (3).

Proof. We first use Lemma 2 to obtain the representation (35) of $\zeta$:
$$\zeta_t = \zeta_0 + \int_0^t\beta_s\,ds + \int_0^t\delta_s\,dW_s + \int_0^t\!\!\int y\,\tilde M(ds\,dy).$$
Then we observe that
$$\int_0^t\!\!\int y\,\tilde M(ds\,dy) = \int_0^t\!\!\int_{\|y\|\le1} y\,\tilde M(ds\,dy) + \int_0^t\!\!\int_{\|y\|>1} y\,\big[M(ds\,dy)-\mu(ds\,dy)\big] = \int_0^t\!\!\int_{\|y\|\le1} y\,\tilde M(ds\,dy) + \int_0^t\!\!\int_{\|y\|>1} y\,M(ds\,dy) - \int_0^t\!\!\int_{\|y\|>1} y\,\mu(ds\,dy),$$
where the terms above are well defined thanks to Assumption 7. Lemma 2 leads to
$$\int_0^t\!\!\int_{\|y\|>1} y\,\mu(ds\,dy) = \int_0^t\!\!\int_{\{\|\psi_s(\cdot,y)\|>1\}} \psi_s(\cdot,y)\,\nu(y)\,dy\,ds.$$
Hence
$$\zeta_t = \zeta_0 + \Big[\int_0^t\beta_s\,ds - \int_0^t\!\!\int_{\{\|\psi_s(\cdot,y)\|>1\}} \psi_s(\cdot,y)\,\nu(y)\,dy\,ds\Big] + \int_0^t\delta_s\,dW_s + \int_0^t\!\!\int_{\|y\|\le1} y\,\tilde M(ds\,dy) + \int_0^t\!\!\int_{\|y\|>1} y\,M(ds\,dy).$$
This representation has the form (2), and Assumptions 4 and 7 guarantee that the local characteristics of $\zeta$ satisfy the assumptions of Theorem 2. Applying Theorem 2 yields the result.

We now give some examples of stochastic models used in applications, where Markovian projections can be characterized in a more explicit manner than in the general results above. These examples also serve to illustrate that the continuity assumption (Assumption 2) on the projected coefficients $(b,a,n)$ in (41) can be verified in many useful settings.
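Before turning to these examples, note that the density formula of Lemma 2 lends itself to a direct numerical check in dimension $d=1$: the mass that $m(y)=|\psi'(\psi^{-1}(y))|^{-1}\,\nu(\psi^{-1}(y))$ assigns to an interval $[a,b]$ must coincide with the $\nu$-mass of $\psi^{-1}([a,b])$. A sketch under purely illustrative assumptions ($\psi(z)=z+z^3$ and a Gaussian-shaped finite jump density; neither is taken from the paper):

```python
import math

def nu(z):
    # illustrative finite jump density (Gaussian-shaped)
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def psi(z):
    # C^1-diffeomorphism of R with psi' > 0 everywhere
    return z + z ** 3

def dpsi(z):
    # Jacobian of psi (scalar derivative in dimension one)
    return 1.0 + 3.0 * z * z

def psi_inv(y, lo=-10.0, hi=10.0):
    # invert the strictly increasing map psi by bisection
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if psi(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def m(y):
    # Lemma 2: m(y) = |det grad psi|^{-1}(psi^{-1}(y)) * nu(psi^{-1}(y))
    z = psi_inv(y)
    return nu(z) / dpsi(z)

def riemann(f, a, b, n=4000):
    # midpoint Riemann sum of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a, b = 1.0, 5.0
lhs = riemann(m, a, b)                       # mass of m on [a, b]
rhs = riemann(nu, psi_inv(a), psi_inv(b))    # nu-mass of psi^{-1}([a, b])
print(lhs, rhs)
```

The two integrals agree to numerical precision, which is exactly the change-of-variables identity used in the proof of Lemma 2.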
In many examples in stochastic modeling, a quantity $\xi$ is expressed as a smooth function $f:\mathbb{R}^d\to\mathbb{R}$ of a $d$-dimensional Markov process $Z$:
$$\xi_t = f(Z_t).$$
We will show that in this situation our assumptions hold for $\xi$ as soon as $Z$ has an infinitesimal generator whose coefficients satisfy Assumptions 1, 2 and 3, allowing us to construct the Markovian projection of $\xi$.

Consider a time-dependent integro-differential operator $L=(L_t)_{t\in[0,T]}$ defined, for $g\in C_0^\infty(\mathbb{R}^d)$, by
$$L_t g(z) = b_Z(t,z)\cdot\nabla g(z) + \sum_{i,j=1}^d \frac{(a_Z)_{ij}(t,z)}{2}\,\frac{\partial^2 g}{\partial z_i\partial z_j}(z) + \int_{\mathbb{R}^d}\big[g\big(z+\psi_Z(t,z,y)\big) - g(z) - \psi_Z(t,z,y)\cdot\nabla g(z)\big]\,\nu_Z(y)\,dy, \qquad (43)$$
where $b_Z:[0,T]\times\mathbb{R}^d\to\mathbb{R}^d$, $a_Z:[0,T]\times\mathbb{R}^d\to M_{d\times d}(\mathbb{R})$ and $\psi_Z:[0,T]\times\mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}^d$ are measurable functions and $\nu_Z$ is a Lévy density. If one assumes that
$$\psi_Z(\cdot,\cdot,0)=0,\qquad \psi_Z(t,z,\cdot) \text{ is a } C^1(\mathbb{R}^d,\mathbb{R}^d)\text{-diffeomorphism},\qquad \forall t\in[0,T]\ \forall z\in\mathbb{R}^d,\quad E\Big[\int_0^t\!\!\int_{\{\|y\|\ge1\}}\big(1\wedge\|\psi_Z(s,z,y)\|^2\big)\,\nu_Z(y)\,dy\,ds\Big] < \infty, \qquad (44)$$
then, applying Lemma 2, (43) rewrites, for $g\in C_0^\infty(\mathbb{R}^d)$:
$$L_t g(x) = b_Z(t,x)\cdot\nabla g(x) + \sum_{i,j=1}^d \frac{(a_Z)_{ij}(t,x)}{2}\,\frac{\partial^2 g}{\partial x_i\partial x_j}(x) + \int_{\mathbb{R}^d}\big[g(x+y)-g(x)-y\cdot\nabla g(x)\big]\,m_Z(t,y,x)\,dy, \qquad (45)$$
where
$$m_Z(t,y,x) = 1_{\{y\in\psi_Z(t,\mathbb{R}^d,x)\}}\,\big|\det\nabla\psi_Z\big|^{-1}\big(t,x,\psi_Z^{-1}(t,x,y)\big)\,\nu_Z\big(\psi_Z^{-1}(t,x,y)\big). \qquad (46)$$
Throughout this section we shall assume that $(b_Z,a_Z,m_Z)$ satisfy Assumptions 1, 2 and 3. Proposition 1 then implies that for any $Z_0\in\mathbb{R}^d$, the SDE
$$\forall t\in[0,T],\quad Z_t = Z_0 + \int_0^t b_Z(u,Z_{u-})\,du + \int_0^t a_Z(u,Z_{u-})\,dW_u + \int_0^t\!\!\int \psi_Z(u,Z_{u-},y)\,\tilde N(du\,dy), \qquad (47)$$
admits a weak solution $((Z_t)_{t\in[0,T]}, Q_{Z_0})$, unique in law, where $(W_t)$ is an $n$-dimensional Brownian motion, $N$ is a Poisson random measure on $[0,T]\times\mathbb{R}^d$ with compensator $\nu_Z(y)\,dy\,dt$, and $\tilde N$ is the associated compensated random measure. Under $Q_{Z_0}$, $Z$ is a Markov process with infinitesimal generator $L$.

Consider now the process
$$\xi_t = f(Z_t). \qquad (48)$$
The aim of this section is to build in an explicit manner the Markovian projection of $\xi$ for a sufficiently large class of functions $f$. Let us first rewrite $\xi_t$ in the form (2).

Proposition 3. Let $f\in C^2(\mathbb{R}^d,\mathbb{R})$ with bounded derivatives be such that
$$\forall (z_1,\dots,z_{d-1})\in\mathbb{R}^{d-1},\qquad u\mapsto f(z_1,\dots,z_{d-1},u) \text{ is a } C^1(\mathbb{R},\mathbb{R})\text{-diffeomorphism}.$$
Assume that $(a_Z,m_Z)$ satisfy Assumption 3. Then $\xi_t=f(Z_t)$ admits the following semimartingale decomposition:
$$\xi_t = \xi_0 + \int_0^t\beta_s\,ds + \int_0^t\delta_s\,dB_s + \int_0^t\!\!\int u\,\tilde K(ds\,du),$$
where
$$\beta_t = \nabla f(Z_{t-})\cdot b_Z(t,Z_{t-}) + \tfrac12\mathrm{tr}\big[\nabla^2 f(Z_{t-})\,{}^t a_Z(t,Z_{t-})\,a_Z(t,Z_{t-})\big] + \int_{\mathbb{R}^d}\big(f(Z_{t-}+\psi_Z(t,Z_{t-},y)) - f(Z_{t-}) - \psi_Z(t,Z_{t-},y)\cdot\nabla f(Z_{t-})\big)\,\nu_Z(y)\,dy,\qquad \delta_t = \big\|\nabla f(Z_{t-})\,a_Z(t,Z_{t-})\big\|, \qquad (49)$$
$B$ is a real-valued Brownian motion, and $K$ is an integer-valued random measure on $[0,T]\times\mathbb{R}$ with compensator $k(t,Z_{t-},u)\,du\,dt$, defined for all $z\in\mathbb{R}^d$ and any $u>0$ (and analogously for $u<0$) via
$$k(t,z,[u,\infty[) = \int_{\mathbb{R}^d} 1_{\{f(z+\psi_Z(t,z,y)) - f(z)\ge u\}}\,\nu_Z(y)\,dy, \qquad (50)$$
with $\tilde K$ its compensated random measure.

Proof. Applying Itô's formula to $\xi_t=f(Z_t)$ yields
$$\xi_t = \xi_0 + \int_0^t \nabla f(Z_{s-})\cdot b_Z(s,Z_{s-})\,ds + \int_0^t \nabla f(Z_{s-})\cdot a_Z(s,Z_{s-})\,dW_s + \tfrac12\int_0^t \mathrm{tr}\big[\nabla^2 f(Z_{s-})\,{}^t a_Z(s,Z_{s-})\,a_Z(s,Z_{s-})\big]\,ds + \int_0^t\!\!\int \nabla f(Z_{s-})\cdot\psi_Z(s,Z_{s-},y)\,\tilde N(ds\,dy) + \int_0^t\!\!\int_{\mathbb{R}^d}\big(f(Z_{s-}+\psi_Z(s,Z_{s-},y)) - f(Z_{s-}) - \psi_Z(s,Z_{s-},y)\cdot\nabla f(Z_{s-})\big)\,N(ds\,dy)$$
$$= \xi_0 + \int_0^t\Big[\nabla f(Z_{s-})\cdot b_Z(s,Z_{s-}) + \tfrac12\mathrm{tr}\big[\nabla^2 f(Z_{s-})\,{}^t a_Z(s,Z_{s-})\,a_Z(s,Z_{s-})\big] + \int_{\mathbb{R}^d}\big(f(Z_{s-}+\psi_Z(s,Z_{s-},y)) - f(Z_{s-}) - \psi_Z(s,Z_{s-},y)\cdot\nabla f(Z_{s-})\big)\,\nu_Z(y)\,dy\Big]\,ds + \int_0^t \nabla f(Z_{s-})\cdot a_Z(s,Z_{s-})\,dW_s + \int_0^t\!\!\int_{\mathbb{R}^d}\big(f(Z_{s-}+\psi_Z(s,Z_{s-},y)) - f(Z_{s-})\big)\,\tilde N(ds\,dy).$$
Given Assumption 3, either
$$\forall R>0,\ \forall t\in[0,T],\qquad \inf_{\|z\|\le R}\ \inf_{x\in\mathbb{R}^d,\,\|x\|=1} {}^t x\cdot a_Z(t,z)\cdot x > 0, \qquad (51)$$
in which case $(B_t)_{t\in[0,T]}$ defined by
$$dB_t = \frac{\nabla f(Z_{t-})\cdot a_Z(t,Z_{t-})\,dW_t}{\|\nabla f(Z_{t-})\,a_Z(t,Z_{t-})\|}$$
is a continuous local martingale with $[B]_t=t$, thus a Brownian motion; or $a_Z\equiv 0$, in which case $\xi$ is a pure-jump semimartingale.

Define
$$K_t = \int_0^t\!\!\int \Psi_Z(s,Z_{s-},y)\,\tilde N(ds\,dy),\qquad \Psi_Z(t,z,y) = \kappa_z\big(\psi_Z(t,z,y)\big),$$
where
$$\kappa_z:\mathbb{R}^d\to\mathbb{R}^d,\qquad y\mapsto \big(y_1,\dots,y_{d-1},\, f(z+y)-f(z)\big).$$
Since, for any $z\in\mathbb{R}^d$, $|\det\nabla_y\kappa_z|(y) = \big|\frac{\partial f}{\partial y_d}(z+y)\big| > 0$, one can define
$$\kappa_z^{-1}(y) = \big(y_1,\dots,y_{d-1}, F_z(y)\big),$$
where $F_z:\mathbb{R}^d\to\mathbb{R}$ is determined by $f\big(z+(y_1,\dots,y_{d-1},F_z(y))\big) - f(z) = y_d$. Denoting by $\phi(t,z,\cdot)$ the inverse of $\psi_Z(t,z,\cdot)$, i.e. $\phi\big(t,z,\psi_Z(t,z,y)\big)=y$, define
$$\Phi(t,z,y) = \phi\big(t,z,\kappa_z^{-1}(y)\big).$$
$\Phi$ is the inverse of $\Psi_Z$ and is differentiable on $\mathbb{R}^d$ with image $\mathbb{R}^d$. Now define
$$m(t,z,y) = \big|\det\nabla_y\Phi(t,z,y)\big|\,\nu_Z\big(\Phi(t,z,y)\big) = \big|\det\nabla_y\phi\big(t,z,\kappa_z^{-1}(y)\big)\big|\,\Big|\frac{\partial f}{\partial y_d}\big(z+\kappa_z^{-1}(y)\big)\Big|^{-1}\,\nu_Z\big(\phi(t,z,\kappa_z^{-1}(y))\big).$$
One observes that
$$\int_0^t\!\!\int_{\|y\|>1}\big(1\wedge\|\Psi_Z(s,z,y)\|^2\big)\,\nu_Z(y)\,dy\,ds = \int_0^t\!\!\int_{\|y\|>1}\Big(1\wedge\big(\psi_1^2(s,z,y)+\dots+\psi_{d-1}^2(s,z,y)+\big(f(z+\psi_Z(s,z,y))-f(z)\big)^2\big)\Big)\,\nu_Z(y)\,dy\,ds$$
$$\le \int_0^t\!\!\int_{\|y\|>1}\Big(1\wedge\big(\psi_1^2(s,z,y)+\dots+\psi_{d-1}^2(s,z,y)+\|\nabla f\|_\infty^2\,\|\psi_Z(s,z,y)\|^2\big)\Big)\,\nu_Z(y)\,dy\,ds \le \int_0^t\!\!\int_{\|y\|>1}\Big(1\wedge\big(2\vee\|\nabla f\|_\infty^2\big)\,\|\psi_Z(s,z,y)\|^2\Big)\,\nu_Z(y)\,dy\,ds.$$
Given condition (44), one may apply Lemma 2 and express $K_t$ as $K_t = \int_0^t\!\int y\,\tilde M(ds\,dy)$, where $\tilde M$ is a compensated integer-valued random measure on $[0,T]\times\mathbb{R}^d$ with compensator $m(t,Z_{t-},y)\,dy\,dt$. Extracting the $d$-th component of $K_t$, one obtains the semimartingale decomposition of $\xi$ on $[0,T]$:
$$\xi_t = \xi_0 + \int_0^t\beta_s\,ds + \int_0^t\delta_s\,dB_s + \int_0^t\!\!\int u\,\tilde K(ds\,du),$$
where $\beta_t$ and $\delta_t$ are given by (49) and $K$ is an integer-valued random measure on $[0,T]\times\mathbb{R}$ with compensator $k(t,Z_{t-},u)\,du\,dt$, defined for all $z\in\mathbb{R}^d$ via
$$k(t,z,u) = \int_{\mathbb{R}^{d-1}} m\big(t,z,(y_1,\dots,y_{d-1},u)\big)\,dy_1\cdots dy_{d-1} = \int_{\mathbb{R}^{d-1}} \big|\det\nabla_y\Phi\big(t,z,(y_1,\dots,y_{d-1},u)\big)\big|\,\nu_Z\big(\Phi(t,z,(y_1,\dots,y_{d-1},u))\big)\,dy_1\cdots dy_{d-1},$$
with $\tilde K$ its compensated random measure. In particular, for any $u>0$ (and analogously for $u<0$),
$$k(t,z,[u,\infty[) = \int_{\mathbb{R}^d} 1_{\{f(z+\psi_Z(t,z,y)) - f(z)\ge u\}}\,\nu_Z(y)\,dy.$$

Given the semimartingale decomposition of $\xi$ in the form (2), we may now construct the Markovian projection of $\xi$ as follows.

Theorem 5.
Assume that:
• the coefficients $(b_Z,a_Z,m_Z)$ satisfy Assumptions 1, 2 and 3;
• the Markov process $Z$ has a transition density $q_t(\cdot)$ which is continuous on $\mathbb{R}^d$ uniformly in $t\in[0,T]$, and $t\mapsto q_t(z)$ is right-continuous on $[0,T[$, uniformly in $z\in\mathbb{R}^d$;
• $f\in C_b^2(\mathbb{R}^d,\mathbb{R})$ is such that $\forall(z_1,\dots,z_{d-1})\in\mathbb{R}^{d-1}$, $u\mapsto f(z_1,\dots,z_{d-1},u)$ is a $C^1(\mathbb{R},\mathbb{R})$-diffeomorphism.

Define, for $w\in\mathbb{R}$ and $t\in[0,T]$,
$$b(t,w) = \frac{1}{c(w)}\int_{\mathbb{R}^{d-1}}\Big[\nabla f(\cdot)\cdot b_Z(t,\cdot) + \tfrac12\mathrm{tr}\big[\nabla^2 f(\cdot)\,{}^t a_Z(t,\cdot)\,a_Z(t,\cdot)\big] + \int_{\mathbb{R}^d}\big(f(\cdot+\psi_Z(t,\cdot,y)) - f(\cdot) - \psi_Z(t,\cdot,y)\cdot\nabla f(\cdot)\big)\,\nu_Z(y)\,dy\Big]\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)\; \frac{q_t\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)}{\big|\frac{\partial f}{\partial z_d}\big|\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)}\,dz_1\cdots dz_{d-1},$$
$$\sigma(t,w) = \frac{1}{\sqrt{c(w)}}\,\Big[\int_{\mathbb{R}^{d-1}} \big\|\nabla f(\cdot)\,a_Z(t,\cdot)\big\|^2\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)\; \frac{q_t\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)}{\big|\frac{\partial f}{\partial z_d}\big|\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)}\,dz_1\cdots dz_{d-1}\Big]^{1/2},$$
$$j(t,[u,\infty[,w) = \frac{1}{c(w)}\int_{\mathbb{R}^{d-1}}\Big(\int_{\mathbb{R}^d} 1_{\{f(\cdot+\psi_Z(t,\cdot,y)) - f(\cdot)\ge u\}}\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)\,\nu_Z(y)\,dy\Big)\; \frac{q_t\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)}{\big|\frac{\partial f}{\partial z_d}\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)\big|}\,dz_1\cdots dz_{d-1}, \qquad (52)$$
for $u>0$ (and analogously for $u<0$), with
$$c(w) = \int_{\mathbb{R}^{d-1}} \frac{q_t\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)}{\big|\frac{\partial f}{\partial z_d}\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)\big|}\,dz_1\cdots dz_{d-1}.$$
Then the stochastic differential equation
$$X_t = \xi_0 + \int_0^t b(s,X_s)\,ds + \int_0^t \sigma(s,X_s)\,dB_s + \int_0^t\!\!\int_{\|y\|\le1} y\,\tilde J(ds\,dy) + \int_0^t\!\!\int_{\|y\|>1} y\,J(ds\,dy), \qquad (53)$$
where $(B_t)$ is a Brownian motion and $J$ is an integer-valued random measure on $[0,T]\times\mathbb{R}$ with compensator $j(t,du,X_{t-})\,dt$, $\tilde J = J-j$, admits a weak solution $((X_t)_{t\in[0,T]}, Q_{\xi_0})$, unique in law, whose marginal distributions mimic those of $\xi$:
$$\forall t\in[0,T],\qquad X_t\overset{d}{=}\xi_t.$$
Under $Q_{\xi_0}$, $X$ is a Markov process with infinitesimal generator $L$ given by
$$\forall g\in C_0^\infty(\mathbb{R}),\qquad L_t g(w) = b(t,w)\,g'(w) + \frac{\sigma^2(t,w)}{2}\,g''(w) + \int_{\mathbb{R}}\big[g(w+u) - g(w) - u\,g'(w)\big]\,j(t,du,w).$$

The proof relies on the following lemma.

Lemma 3. Let $Z$ be an $\mathbb{R}^d$-valued random variable with density $q(z)$ and let $f\in C^1(\mathbb{R}^d,\mathbb{R})$ be such that
$$\forall(z_1,\dots,z_{d-1})\in\mathbb{R}^{d-1},\qquad u\mapsto f(z_1,\dots,z_{d-1},u) \text{ is a } C^1(\mathbb{R},\mathbb{R})\text{-diffeomorphism}.$$
Define the function $F:\mathbb{R}^d\to\mathbb{R}$ by $f(z_1,\dots,z_{d-1},F(z)) = z_d$. Then for any measurable function $g:\mathbb{R}^d\to\mathbb{R}$ such that $E[|g(Z)|]<\infty$ and any $w\in\mathbb{R}$,
$$E[g(Z)\,|\,f(Z)=w] = \frac{1}{c(w)}\int_{\mathbb{R}^{d-1}} g\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)\; \frac{q\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)}{\big|\frac{\partial f}{\partial z_d}\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)\big|}\,dz_1\cdots dz_{d-1},$$
with
$$c(w) = \int_{\mathbb{R}^{d-1}} \frac{q\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)}{\big|\frac{\partial f}{\partial z_d}\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)\big|}\,dz_1\cdots dz_{d-1}.$$

Proof. Consider the $d$-dimensional random variable $\kappa(Z)$, where $\kappa:\mathbb{R}^d\to\mathbb{R}^d$ is given by $\kappa(z)=(z_1,\dots,z_{d-1},f(z))$. Its Jacobian matrix is
$$\nabla_z\kappa = \begin{pmatrix} 1 & & & 0 \\ & \ddots & & \vdots \\ & & 1 & 0 \\ \frac{\partial f}{\partial z_1} & \cdots & \frac{\partial f}{\partial z_{d-1}} & \frac{\partial f}{\partial z_d} \end{pmatrix},$$
so that $|\det(\nabla_z\kappa)|(z) = \big|\frac{\partial f}{\partial z_d}(z)\big| > 0$. Hence $\kappa$ is a $C^1(\mathbb{R}^d,\mathbb{R}^d)$-diffeomorphism with inverse $\kappa^{-1}$:
$$\kappa\big(\kappa^{-1}(z)\big) = \big(\kappa_1^{-1}(z),\dots,\kappa_{d-1}^{-1}(z),\, f(\kappa_1^{-1}(z),\dots,\kappa_d^{-1}(z))\big) = z.$$
For $1\le i\le d-1$, $\kappa_i^{-1}(z)=z_i$, and $f(z_1,\dots,z_{d-1},\kappa_d^{-1}(z))=z_d$, that is $\kappa_d^{-1}(z)=F(z)$. Hence
$$\kappa^{-1}(z_1,\dots,z_d) = \big(z_1,\dots,z_{d-1},F(z)\big).$$
Define $q_\kappa(z)\,dz$, the image of the measure $q(z)\,dz$ under the map $\kappa$, by
$$q_\kappa(z) = 1_{\{\kappa(\mathbb{R}^d)\}}(z)\,\big|\det(\nabla_z\kappa^{-1})\big|(z)\,q\big(\kappa^{-1}(z)\big) = 1_{\{\kappa(\mathbb{R}^d)\}}(z)\,\Big|\frac{\partial f}{\partial z_d}\Big|^{-1}\big(z_1,\dots,z_{d-1},F(z)\big)\,q\big(z_1,\dots,z_{d-1},F(z)\big).$$
$q_\kappa(z)$ is the density of $\kappa(Z)$. So, for any $w\in f(\mathbb{R}^d)=\mathbb{R}$, conditioning on the last component of $\kappa(Z)$,
$$E[g(Z)\,|\,f(Z)=w] = \int_{\mathbb{R}^{d-1}} E\big[g(Z)\,\big|\,\kappa(Z)=(z_1,\dots,z_{d-1},w)\big]\,\frac{q_\kappa(z_1,\dots,z_{d-1},w)}{c(w)}\,dz_1\cdots dz_{d-1},$$
and, since $Z = \kappa^{-1}(\kappa(Z))$, the conditional expectation inside the integral equals $g(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w))$, which yields the stated formula with
$$c(w) = \int_{\mathbb{R}^{d-1}} \frac{q\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)}{\big|\frac{\partial f}{\partial z_d}\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)\big|}\,dz_1\cdots dz_{d-1}.$$

Proof of Theorem 5.
Let us show that if $(b_Z,a_Z,m_Z)$ satisfy Assumptions 1, 2 and 3, then the triplet $(\delta_t,\beta_t,k(t,Z_{t-},u))$ satisfies the assumptions of Theorem 2; given Proposition 3, one may then build the Markovian projection of $\xi$ in an explicit manner.

First, note that $\beta_t$ and $\delta_t$ satisfy Assumption 4, since $b_Z(t,z)$ and $a_Z(t,z)$ satisfy Assumption 1 and $\nabla f$, $\nabla^2 f$ are bounded. One observes that if $m_Z$ satisfies Assumption 1, then equality (46) implies that $\psi_Z$ and $\nu_Z$ satisfy
$$\exists K>0,\ \forall t\in[0,T],\ \forall z\in\mathbb{R}^d,\qquad \int_0^t\!\!\int_{\{\|y\|\ge1\}}\big(1\wedge\|\psi_Z(s,z,y)\|^2\big)\,\nu_Z(y)\,dy\,ds < K. \qquad (54)$$
Hence
$$\int_0^t\!\!\int\big(1\wedge|u|^2\big)\,k(s,Z_{s-},u)\,du\,ds = \int_0^t\!\!\int\big(1\wedge\big|f(Z_{s-}+\psi_Z(s,Z_{s-},y)) - f(Z_{s-})\big|^2\big)\,\nu_Z(y)\,dy\,ds \le \int_0^t\!\!\int\big(1\wedge\|\nabla f\|_\infty^2\,\|\psi_Z(s,Z_{s-},y)\|^2\big)\,\nu_Z(y)\,dy\,ds$$
is bounded, and $k$ satisfies Assumption 5.

As argued before, if $a_Z$ is non-degenerate then so is $\delta_t$. In the case $\delta_t\equiv 0$: for $t\in[0,T[$, $R>0$, $z\in B(0,R)$ and $u>0$, denoting by $C$ and $\beta$ the constants appearing in Assumption 6 (and absorbing into $C$ the bounded factors coming from the derivatives of $f$),
$$k(t,z,u) = \int_{\mathbb{R}^{d-1}} \big|\det\nabla_y\Phi\big(t,z,(y_1,\dots,y_{d-1},u)\big)\big|\,\nu_Z\big(\Phi(t,z,(y_1,\dots,y_{d-1},u))\big)\,dy_1\cdots dy_{d-1}$$
$$= \int_{\mathbb{R}^{d-1}} \big|\det\nabla_y\phi\big(t,z,\kappa_z^{-1}(y_1,\dots,y_{d-1},u)\big)\big|\,\Big|\frac{\partial f}{\partial y_d}\big(z+\kappa_z^{-1}(y_1,\dots,y_{d-1},u)\big)\Big|^{-1}\,\nu_Z\big(\phi(t,z,\kappa_z^{-1}(y_1,\dots,y_{d-1},u))\big)\,dy_1\cdots dy_{d-1}$$
$$\ge \int_{\mathbb{R}^{d-1}} \Big|\frac{\partial f}{\partial y_d}\big(z+\kappa_z^{-1}(y_1,\dots,y_{d-1},u)\big)\Big|^{-1}\,\frac{C}{\|\kappa_z^{-1}(y_1,\dots,y_{d-1},u)\|^{d+\beta}}\,dy_1\cdots dy_{d-1} = \int_{\mathbb{R}^{d-1}} \frac{C}{\|(y_1,\dots,y_{d-1},u)\|^{d+\beta}}\,dy_1\cdots dy_{d-1}$$
$$= \frac{1}{|u|^{d+\beta}}\int_{\mathbb{R}^{d-1}} \frac{C}{\|(y_1/u,\dots,y_{d-1}/u,1)\|^{d+\beta}}\,dy_1\cdots dy_{d-1} = \frac{C'}{|u|^{1+\beta}},$$
with $C' = \int_{\mathbb{R}^{d-1}} C\,\|(w_1,\dots,w_{d-1},1)\|^{-(d+\beta)}\,dw_1\cdots dw_{d-1}$. Similarly,
$$\int\big(1\wedge|u|^\beta\big)\Big(k(t,z,u) - \frac{C'}{|u|^{1+\beta}}\Big)\,du = \int\big(1\wedge|u|^\beta\big)\int_{\mathbb{R}^{d-1}} \Big|\frac{\partial f}{\partial y_d}\big(z+\kappa_z^{-1}(y_1,\dots,y_{d-1},u)\big)\Big|^{-1}\Big[\big|\det\nabla_y\phi\big(t,z,\kappa_z^{-1}(y_1,\dots,y_{d-1},u)\big)\big|\,\nu_Z\big(\phi(t,z,\kappa_z^{-1}(y_1,\dots,y_{d-1},u))\big) - \frac{C}{\|\kappa_z^{-1}(y_1,\dots,y_{d-1},u)\|^{d+\beta}}\Big]\,dy_1\cdots dy_{d-1}\,du$$
$$= \int_{\mathbb{R}^d} \big(1\wedge\big|f(z+(y_1,\dots,y_{d-1},u)) - f(z)\big|^\beta\big)\Big(\big|\det\nabla_y\phi\big(t,z,(y_1,\dots,y_{d-1},u)\big)\big|\,\nu_Z\big(\phi(t,z,(y_1,\dots,y_{d-1},u))\big) - \frac{C}{\|(y_1,\dots,y_{d-1},u)\|^{d+\beta}}\Big)\,dy_1\cdots dy_{d-1}\,du$$
$$\le \int_{\mathbb{R}^d} \big(1\wedge\|\nabla f\|_\infty^\beta\,\|(y_1,\dots,y_{d-1},u)\|^\beta\big)\Big(\big|\det\nabla_y\phi\big(t,z,(y_1,\dots,y_{d-1},u)\big)\big|\,\nu_Z\big(\phi(t,z,(y_1,\dots,y_{d-1},u))\big) - \frac{C}{\|(y_1,\dots,y_{d-1},u)\|^{d+\beta}}\Big)\,dy_1\cdots dy_{d-1}\,du,$$
which is also bounded. Similar arguments would show that
$$\lim_{\epsilon\to0}\int_{|u|\le\epsilon} |u|^\beta\Big(k(t,Z_{t-},u) - \frac{C'}{|u|^{1+\beta}}\Big)\,du = 0 \quad a.s. \qquad\text{and}\qquad \lim_{R\to\infty}\int_0^T k\big(t,Z_{t-},\{|u|\ge R\}\big)\,dt = 0 \quad a.s.,$$
since this essentially hinges on the fact that $f$ has bounded derivatives.

Applying Lemma 3, one can compute explicitly the conditional expectations in (41). For example,
$$b(t,w) = E[\beta_t\,|\,\xi_{t-}=w] = \frac{1}{c(w)}\int_{\mathbb{R}^{d-1}}\Big[\nabla f(\cdot)\cdot b_Z(t,\cdot) + \tfrac12\mathrm{tr}\big[\nabla^2 f(\cdot)\,{}^t a_Z(t,\cdot)\,a_Z(t,\cdot)\big] + \int_{\mathbb{R}^d}\big(f(\cdot+\psi_Z(t,\cdot,y)) - f(\cdot) - \psi_Z(t,\cdot,y)\cdot\nabla f(\cdot)\big)\,\nu_Z(y)\,dy\Big]\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)\;\frac{q_t\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)}{\big|\frac{\partial f}{\partial z_d}\big(z_1,\dots,z_{d-1},F(z_1,\dots,z_{d-1},w)\big)\big|}\,dz_1\cdots dz_{d-1},$$
with $F:\mathbb{R}^d\to\mathbb{R}$ defined by $f(z_1,\dots,z_{d-1},F(z))=z_d$. Furthermore, $f$ is $C^2$ with bounded derivatives and $(b_Z,a_Z,\nu_Z)$ satisfy Assumption 1. Since $z\mapsto q_t(z)$ is continuous in $z$ uniformly in $t\in[0,T]$ and $t\mapsto q_t(z)$ is right-continuous in $t$ on $[0,T[$ uniformly in $z\in\mathbb{R}^d$, the same properties hold for $b$. Proceeding similarly, one can show that Assumption 2 holds for $\sigma$ and $j$, so Theorem 2 may be applied to yield the result.

Models based on time-changed Lévy processes have been the focus of much recent work, especially in mathematical finance [6]. Let $L_t$ be a Lévy process with characteristic triplet $(b,\sigma^2,\nu)$ on $(\Omega,(\mathcal F_t)_{t\ge0},P)$, $N$ the Poisson random measure representing the jumps of $L$, and $(\theta_t)_{t\ge0}$ a locally bounded, strictly positive $\mathcal F_t$-adapted càdlàg process. The process
$$\xi_t = \xi_0 + L_{\Theta_t},\qquad \Theta_t = \int_0^t\theta_s\,ds,$$
is called a time-changed Lévy process, where $\theta_t$ is interpreted as the rate of time change.

Theorem 6 (Markovian projection of time-changed Lévy processes). Assume that $(\theta_t)_{t\ge0}$ is bounded from above and away from zero:
$$\exists K,\epsilon>0,\quad \forall t\in[0,T],\qquad K\ge\theta_t\ge\epsilon\quad a.s.$$
(55) and that there exists $\alpha:[0,T]\times\mathbb{R}\to\mathbb{R}$ such that
$$\forall t\in[0,T],\ \forall z\in\mathbb{R},\qquad \alpha(t,z) = E[\theta_t\,|\,\xi_{t-}=z],$$
where $\alpha(t,\cdot)$ is continuous on $\mathbb{R}$, uniformly in $t\in[0,T]$, and, for all $z\in\mathbb{R}$, $\alpha(\cdot,z)$ is right-continuous in $t$ on $[0,T[$. If either
(i) $\sigma>0$, or
(ii) $\sigma\equiv0$ and there exist $\beta\in\,]0,2[$, $c, K'>0$ and a measure $\nu^\beta(dy)$ such that
$$\nu(dy) = \nu^\beta(dy) + \frac{c}{|y|^{1+\beta}}\,dy,\qquad \int\big(1\wedge|y|^\beta\big)\,\nu^\beta(dy) \le K',\qquad \lim_{\epsilon\to0}\int_{|y|\le\epsilon}|y|^\beta\,\nu^\beta(dy) = 0,$$
then:
• $(\xi_t)$ has the same marginals as $(X_t)$ on $[0,T]$, where $X$ is defined as the weak solution of
$$X_t = \xi_0 + \int_0^t \sigma\sqrt{\alpha(s,X_{s-})}\,dB_s + \int_0^t b\,\alpha(s,X_{s-})\,ds + \int_0^t\!\!\int_{|z|\le1} z\,\tilde J(ds\,dz) + \int_0^t\!\!\int_{|z|>1} z\,J(ds\,dz),$$
where $B_t$ is a real-valued Brownian motion and $J$ is an integer-valued random measure on $[0,T]\times\mathbb{R}$ with compensator $\alpha(t,X_{t-})\,\nu(dy)\,dt$.
• The marginal distribution $p_t$ of $\xi_t$ is the unique solution of the forward equation
$$\frac{\partial p_t}{\partial t} = L_t^\star\, p_t,$$
where $L_t^\star$ is given by
$$L_t^\star g(x) = -b\,\frac{\partial}{\partial x}\big[\alpha(t,x)g(x)\big] + \frac{\sigma^2}{2}\,\frac{\partial^2}{\partial x^2}\big[\alpha(t,x)g(x)\big] + \int_{\mathbb{R}}\nu(dz)\Big[g(x-z)\,\alpha(t,x-z) - g(x)\,\alpha(t,x) - 1_{\{|z|\le1\}}\,z\,\frac{\partial}{\partial x}\big[g(x)\,\alpha(t,x)\big]\Big],$$
with initial condition $p_0(dy) = \mu_0(dy)$, where $\mu_0$ denotes the law of $\xi_0$.

Proof. Consider the Lévy-Itô decomposition of $L$:
$$L_t = bt + \sigma W_t + \int_0^t\!\!\int_{|z|\le1} z\,\tilde N(ds\,dz) + \int_0^t\!\!\int_{|z|>1} z\,N(ds\,dz).$$
Then $\xi$ rewrites
$$\xi_t = \xi_0 + \sigma W(\Theta_t) + b\,\Theta_t + \int_0^{\Theta_t}\!\!\int_{|z|\le1} z\,\tilde N(ds\,dz) + \int_0^{\Theta_t}\!\!\int_{|z|>1} z\,N(ds\,dz).$$
$(W(\Theta_t))$ is a continuous martingale starting from 0, with quadratic variation $\Theta_t = \int_0^t\theta_s\,ds$. Hence there exists a Brownian motion $Z$ such that $W(\Theta_t) \overset{d}{=} \int_0^t\sqrt{\theta_s}\,dZ_s$. Hence $\xi$ is the weak solution of
$$\xi_t = \xi_0 + \int_0^t \sigma\sqrt{\theta_s}\,dZ_s + \int_0^t b\,\theta_s\,ds + \int_0^t\!\!\int_{|z|\le1} z\,\tilde N^\theta(ds\,dz) + \int_0^t\!\!\int_{|z|>1} z\,N^\theta(ds\,dz),$$
where $N^\theta$ is an integer-valued random measure with compensator $\theta_t\,\nu(dz)\,dt$ and $\tilde N^\theta$ its compensated measure. Using the notations of Theorem 2,
$$\beta_t = b\,\theta_t,\qquad \delta_t = \sigma\sqrt{\theta_t},\qquad m(t,dy) = \theta_t\,\nu(dy).$$
Given condition (55), one simply observes that
$$\forall (t,z)\in[0,T]\times\mathbb{R},\qquad \epsilon \le \alpha(t,z)\le K.$$
Hence Assumptions 4, 5 and 6 hold for $(\beta,\delta,m)$. Furthermore,
$$b(t,\cdot) = E[\beta_t\,|\,\xi_{t-}=\cdot\,] = b\,\alpha(t,\cdot),\qquad \sigma(t,\cdot) = E\big[\delta_t^2\,|\,\xi_{t-}=\cdot\,\big]^{1/2} = \sigma\sqrt{\alpha(t,\cdot)},\qquad n(t,B,\cdot) = E[m(t,B)\,|\,\xi_{t-}=\cdot\,] = \alpha(t,\cdot)\,\nu(B)$$
are all continuous on $\mathbb{R}$, uniformly in $t\in[0,T]$, and, for all $z\in\mathbb{R}$, $\alpha(\cdot,z)$ is right-continuous on $[0,T[$. One may apply Theorem 2, which yields the result.

The impact of the random time change on the marginals can be captured by making the characteristics state-dependent,
$$\big(b\,\alpha(t,X_{t-}),\ \sigma^2\,\alpha(t,X_{t-}),\ \alpha(t,X_{t-})\,\nu\big),$$
i.e., by introducing the same adjustment factor $\alpha(t,X_{t-})$ in the drift, the diffusion coefficient and the Lévy measure. In particular, if $\alpha(t,x)$ is affine in $x$, we obtain an affine process [8] in which the affine dependence of the characteristics on the state is restricted to be colinear, which is rather restrictive. This remark shows that time-changed Lévy processes, which in principle allow for a wide variety of choices for $\theta$ and $L$, may not be as flexible as apparently simpler affine models when it comes to reproducing marginal distributions.

References

[1]
D. Baker and M. Yor, A Brownian sheet martingale with the same marginals as the arithmetic average of geometric Brownian motion, Electronic Journal of Probability, 14 (2009), pp. 1532-1540.
[2] R. F. Bass, Stochastic differential equations with jumps, Probab. Surv., 1 (2004), pp. 1-19.
[3] A. Bentata and R. Cont, Forward equations for option prices in semimartingales, Finance and Stochastics, forthcoming (2012).
[4] P. Brémaud, Point Processes and Queues, Springer, 1981.
[5] G. Brunick and S. Shreve, Matching statistics of an Ito process by a process of diffusion type, working paper, 2010.
[6] P. Carr, H. Geman, D. B. Madan, and M. Yor, Stochastic volatility for Lévy processes, Math. Finance, 13 (2003), pp. 345-382.
[7] R. Cont and A. Minca, Recovering portfolio default intensities implied by CDO tranches, Mathematical Finance, forthcoming (2008).
[8] D. Duffie, J. Pan, and K. Singleton, Transform analysis and asset pricing for affine jump-diffusions, Econometrica, 68 (2000), pp. 1343-1376.
[9] B. Dupire, Pricing and hedging with smiles, in Mathematics of Derivative Securities, M. Dempster and S. Pliska, eds., Cambridge University Press, 1997, pp. 103-111.
[10] N. El Karoui and J. Lepeltier, Représentation de processus ponctuels multivariés à l'aide d'un processus de Poisson, Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 39 (1977), pp. 111-133.
[11] S. N. Ethier and T. G. Kurtz, Markov Processes: Characterization and Convergence, Wiley, 1986.
[12] A. Figalli, Existence and uniqueness of martingale solutions for SDEs with rough or degenerate coefficients, J. Funct. Anal., 254 (2008), pp. 109-153.
[13] I. Gyöngy, Mimicking the one-dimensional marginal distributions of processes having an Itô differential, Probab. Theory Relat. Fields, 71 (1986), pp. 501-516.
[14] K. Hamza and F. Klebaner, A family of non-Gaussian martingales with Gaussian marginals, working paper, Monash University, 2006.
[15] F. Hirsch and M. Yor, Unifying constructions of martingales associated with processes increasing in the convex order, via Lévy and Sato sheets, Prépublication 285, Université d'Évry, 2009.
[16] J. Jacod, Calcul stochastique et problèmes de martingales, Springer, Berlin, 1979.
[17] J. Jacod and A. N. Shiryaev, Limit Theorems for Stochastic Processes, Springer, Berlin, 2003.
[18] Y. M. Kabanov, R. Liptser, and A. Shiryaev, On the representation of integral-valued random measures and local martingales by means of random measures with deterministic compensators, Math. USSR Sbornik, (1981), pp. 267-280.
[19] H. G. Kellerer, Markov-Komposition und eine Anwendung auf Martingale, Math. Ann., 198 (1972), pp. 99-122.
[20] T. Komatsu, Markov processes associated with certain integro-differential operators, Osaka J. Math., 10 (1973), pp. 271-303.
[21] D. Madan and M. Yor, Making Markov martingales meet marginals, Bernoulli, 8 (2002), pp. 509-536.
[22] R. Mikulevičius and H. Pragarauskas, On the martingale problem associated with non-degenerate Lévy operators, Lithuanian Mathematical Journal, 32 (1992), pp. 297-311.
[23] D. W. Stroock, Diffusion processes associated with Lévy generators, Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 32 (1975), pp. 209-244.
[24] D. W. Stroock, Markov Processes from Itô's Perspective, Princeton Univ. Press, 2003.
[25]