Stability results for martingale representations: the general case
Antonis Papapantoleon, Dylan Possamai, Alexandros Saplaouras
Abstract. In this paper, we obtain stability results for martingale representations in a very general framework. More specifically, we consider a sequence of martingales, each adapted to its own filtration, and a sequence of random variables measurable with respect to those filtrations. We assume that the terminal values of the martingales and the associated filtrations converge in the extended sense, and that the limiting martingale is quasi-left-continuous and admits the predictable representation property. Then, we prove that each component in the martingale representation of the sequence converges to the corresponding component of the martingale representation of the limiting random variable relative to the limiting filtration, under the Skorokhod topology. This extends in several directions earlier contributions in the literature, and has applications to stability results for backward SDEs with jumps and to discretisation schemes for stochastic systems.

Contents
1. Introduction
2. Preliminaries
3. Stability of martingale representations
Appendix A. Auxiliary results
A.1. The $J_1$-topology
A.2. Measure theory
A.3. Young functions
A.4. Proof of Corollary 3.4
References

Acknowledgements. We thank Samuel Cohen for useful discussions during the work on these topics. Alexandros Saplaouras gratefully acknowledges the financial support from the DFG Research Training Group 1845 "Stochastic Analysis with Applications in Biology, Finance and Physics". Dylan Possamaï gratefully acknowledges the financial support from the ANR project PACMAN (ANR-16-CE05-0027). Moreover, all authors gratefully acknowledge the financial support from the PROCOPE project "Financial markets in transition: mathematical models and challenges".
1. Introduction
Consider a sequence $(X^n)_{n\in\mathbb{N}}$ of square-integrable martingales, which is assumed to converge to another square-integrable martingale $X^\infty$, the convergence holding either in the strong sense, meaning in particular that all the martingales $X^n$, as well as $X^\infty$, are defined on a common probability space, or in the weak sense, that is, the convergence is in distribution, and each $X^n$ is then defined on its own probability space. For every $n\in\mathbb{N}$, let us denote by $\mathbb{G}^n := (\mathcal{G}^n_t)_{t\ge 0}$ the filtration with respect to which $X^n$ is a martingale, and by $\mathbb{G}^\infty := (\mathcal{G}^\infty_t)_{t\ge 0}$ the one associated to $X^\infty$. Given now a sequence of random variables $(\xi^n)_{n\in\mathbb{N}}$, where $\xi^n$ is $\mathcal{G}^n_\infty$-measurable, a well-known result (see, for instance, Jacod and Shiryaev [34, Lemma III.4.24]) ensures that the martingales $Y^n_\cdot := \mathbb{E}[\xi^n|\mathcal{G}^n_\cdot]$ admit a so-called orthogonal decomposition with respect to $X^n$. In other words, for every $n\in\mathbb{N}$, if $X^{n,c}$ denotes the continuous part of $X^n$ and $\widetilde{\mu}^{X^{n,d}}$ the (compensated) random measure of jumps associated to $X^{n,d}$, i.e. the purely discontinuous part of $X^n$, then
$$Y^n_\cdot = Y^n_0 + \int_0^\cdot Z^n_s\,\mathrm{d}X^{n,c}_s + \int_0^\cdot\int_{\mathbb{R}^\ell} U^n_s(x)\,\widetilde{\mu}^{X^{n,d}}(\mathrm{d}s,\mathrm{d}x) + N^n_\cdot, \qquad (1.1)$$
where $Z^n$ and $U^n$ are respectively a predictable process and a predictable function, while $N^n$ is another martingale, appropriately orthogonal to both the continuous and the purely discontinuous martingale parts of $X^n$.

Assume now that the sequence of pairs $(X^n,\mathbb{G}^n)_{n\in\mathbb{N}}$ converges (in the extended sense) to $(X^\infty,\mathbb{G}^\infty)$, and that the sequence $(\xi^n)_{n\in\mathbb{N}}$ converges, in an appropriate sense, to some $\mathcal{G}^\infty_\infty$-measurable random variable $\xi^\infty$, such that the following orthogonal decomposition of $Y^\infty_\cdot := \mathbb{E}[\xi^\infty|\mathcal{G}^\infty_\cdot]$ with respect to $X^\infty$ holds
$$Y^\infty_\cdot = Y^\infty_0 + \int_0^\cdot Z^\infty_s\,\mathrm{d}X^{\infty,c}_s + \int_0^\cdot\int_{\mathbb{R}^\ell} U^\infty_s(x)\,\widetilde{\mu}^{X^{\infty,d}}(\mathrm{d}s,\mathrm{d}x) + N^\infty_\cdot.$$
(1.2)

A natural question is then whether the convergence of $Y^n$ to $Y^\infty$ also implies the convergence of the martingale parts on the right-hand side of (1.1) to their respective counterparts on the right-hand side of (1.2). A weaker version of this question is whether the sequence consisting of the sum of the stochastic integrals in (1.1) converges to the sum of the stochastic integrals in (1.2), and therefore whether the sequence of orthogonal martingales $(N^n)_{n\in\mathbb{N}}$ also converges to $N^\infty$.

This problem of approximating martingale representations has a long history, mainly motivated by applications in mathematical finance. There, the random variables $\xi^n$ can be understood as contingent claims to be hedged using financial assets whose prices are given by $X^n$, and $Z^n$ are then appropriate hedging strategies (usually risk minimising). In this context, $U^n$ would typically appear when the price processes $X^n$ can have jumps, and $N^n$ would sum up all the information in the filtration $\mathbb{G}^n$ which cannot be generated by $X^{n,c}$ or $\widetilde{\mu}^{X^{n,d}}$. This is the typical situation encountered in so-called incomplete financial markets. Furthermore, the approximation of $X^\infty$ by $X^n$ usually stems from computational considerations, typically using discretisation schemes for practical and efficient implementations. The question of whether the associated hedging strategies converge, and in which sense, is then of paramount importance.
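In discrete time, the mechanism behind a decomposition such as (1.1) and the associated hedging strategy can be made completely explicit. The following sketch is our own illustration, not taken from the paper: for a symmetric $\pm 1$ random walk, which possesses the predictable representation property, the orthogonal part $N$ vanishes and the integrand $Z$ is a discrete "delta", computed by backward recursion; the payoff `f` and horizon `N` below are arbitrary choices.

```python
# Discrete-time sketch of the martingale representation (1.1) for a
# symmetric +-1 random walk X: Y_k := E[f(X_N) | F_k] satisfies
# Y_k - Y_{k-1} = Z_k (X_k - X_{k-1}) with Z predictable and N = 0.
# Illustrative assumptions: payoff f and horizon N are arbitrary.

from itertools import product

N = 4
f = lambda x: max(x, 0)  # a "contingent claim" on the terminal value

def cond_exp(k, x):
    """E[f(X_N) | X_k = x], by backward recursion on the binomial tree."""
    if k == N:
        return f(x)
    return 0.5 * (cond_exp(k + 1, x + 1) + cond_exp(k + 1, x - 1))

def hedge(k, x):
    """Predictable integrand Z_k on {X_{k-1} = x}: a discrete 'delta'."""
    return 0.5 * (cond_exp(k, x + 1) - cond_exp(k, x - 1))

# Path-wise check: f(X_N) = Y_0 + sum_k Z_k dX_k on every path.
for steps in product([1, -1], repeat=N):
    x, value = 0, cond_exp(0, 0)
    for k, dx in enumerate(steps, start=1):
        value += hedge(k, x) * dx   # Z_k uses only information up to k-1
        x += dx
    assert abs(value - f(x)) < 1e-12
print("representation holds on all", 2 ** N, "paths; Y_0 =", cond_exp(0, 0))
```

The backward recursion is the discrete analogue of computing $Y^n$ as a conditional expectation, and the quotient of differences is the analogue of the integrand $Z^n$.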
This was notably the subject of Jacod, Méléard, and Protter [35], which considers a setting where the $U^n$ do not appear, since the stochastic integral is an integral with respect to $X^n$ (and not only its continuous martingale part; this is the celebrated Galtchouk-Kunita-Watanabe decomposition from martingale theory), and where the $\xi^n$ are Markovian functionals of $X^n$. Earlier contributions by Jakubowski, Mémin, and Pagès [36] and then Kurtz and Protter [40, 41] had already studied, from a theoretical point of view, the simpler question of the weak convergence of stochastic integrals of the form $\int_0^\cdot Z^n_s\,\mathrm{d}X^n_s$, while Duffie and Protter had investigated the aforementioned financial applications in [24]. The problem posed above is also intimately linked to the study of weak convergence of discretisation schemes for stochastic systems, which has been a topic of continued interest in stochastic numerical analysis and its applications. As illustrated in several articles, there are discretisation schemes for such systems which do not enjoy satisfactory stability properties, even for the simplest and most elementary processes, such as stochastic integrals and stochastic differential equations. These questions, in a context similar to ours, have been investigated for instance by Barlow and Protter [4], for the stability of special semimartingale decompositions, by Coquet and Słomiński [13] for Dirichlet processes, and by Émery [27, 28], Mackevičius [48, 49], Protter [62, 63] and Słomiński [67] for strong solutions of stochastic differential equations. More recently, this was also the direction followed by Leão and Ohashi [42], where the authors aimed at describing readable structural conditions on a given optional process adapted to a Brownian filtration, in order to construct explicit, robust and feasible approximating skeletons for smooth semimartingales.
Closedness results for stochastic integrals with respect to (local) martingales are also part of the folklore of the general theory of processes. This, roughly speaking, corresponds to the case where one simply studies integrals of the form $\int_0^\cdot Z_s\,\mathrm{d}X^n_s$. Hence, the case of integrals in $\mathbb{L}^2$ (or more generally in $\mathbb{L}^p$, $p>1$) is straightforward, coming almost directly from the Hilbert space isometry between stochastic integrands and integrals, see for instance Protter [64, p. 153] or Jacod [33]. The much more subtle case of martingales in $\mathbb{L}^1$ was settled by Yor [69], see also Delbaen and Schachermayer [18] for a survey of these results, as well as additional compactness criteria. The case where $X$ is allowed to be a semimartingale is naturally quite a bit more involved, and comprehensive results in this direction were obtained by Mémin [51], Schweizer [66], Monat and Stricker [55, 56], and Delbaen, Monat, Schachermayer, Schweizer, and Stricker [19, 20].

Once the semimartingales considered have more structural properties, other interesting results can be obtained. Barrieu, Cazanave, and El Karoui [6], for instance, and later Barrieu and El Karoui [5], were interested in what they coined "continuous quadratic semimartingales" (see also related articles by Mocha and Westray [54], still in the continuous case, and recent extensions to jump processes by Ngoupeyou [58] and El Karoui, Matoussi, and Ngoupeyou [25]), for which, roughly speaking, the bounded variation part of the semimartingale $X$ is absolutely continuous with respect to the quadratic variation of the martingale part of $X$; [5, 6] obtained associated stability results for these processes.

An important common feature of the articles mentioned so far is that they only consider the strong framework we described at the beginning of this introduction, in the sense that there is always a fixed probability space and all processes (meaning here mainly $X^n$ and $X^\infty$) are adapted to the same fixed filtration $\mathbb{G}$.
An important exception is Słomiński [67], where the probability space is fixed, but not the filtration. However, for practical purposes, and especially for the analysis of numerical schemes, it is well known that the weak framework is also of paramount importance, as illustrated for instance by the famous Donsker theorem. There has thus been a certain number of studies of stability properties for semimartingale or martingale decompositions when the underlying filtration itself is also allowed to change. In that direction, Antonelli and Kohatsu-Higa [3], followed by Coquet, Mackevičius, and Mémin [14, 15], Ma, Protter, San Martín, and Torres [47], Briand, Delyon, and Mémin [9, 10], and then Cheridito and Stadje [11], studied such stability properties for continuous backward stochastic differential equations (BSDEs for short), a type of non-linear martingale representation. Mémin [52] looked into the stability of the canonical decomposition for semimartingales, Kchia [38] (see also Kchia and Protter [39]) extended Barlow and Protter's result [4] on the stability of special semimartingale decompositions to a framework allowing changing filtrations, while Possamaï and Tan [61] extended results of [9, 10] to the case of so-called second order BSDEs. Let us also mention the recent paper by Madan, Pistorius, and Stadje [50], which considers stability results for BSDEs with jumps (that is to say that both the processes $Z$ and $U$ are present in the solution), when the driving càdlàg martingale is approximated by random walks. Several of these works make strong use of the notion of extended convergence, introduced by Aldous [1], as well as that of convergence of filtrations, introduced by Hoover [32] and further developed by Coquet, Mémin, and Mackevičius [16] and Coquet, Mémin, and Słomiński [17], which also plays a major role in the present paper.
Let us also mention the recent contributions by Leão, Ohashi, and Simas [43, 44], who study the stability of Wiener functionals under weak convergence of filtrations beyond semimartingales, in the context of functional Itô calculus.

Our work follows this latter strand of literature and studies the problem of stability for the martingale representation (or the orthogonal decomposition) of the martingales $X^n$, when their filtration is also allowed to change. A very important difference compared to the existing literature is that we basically make no assumption on the filtrations $\mathbb{G}^n$, besides the minimal ones, i.e. that they satisfy the usual assumptions of right-continuity and completeness under a fixed reference probability measure $\mathbb{P}$, and that they converge in an appropriate sense to the filtration $\mathbb{G}^\infty$ associated to $X^\infty$. This means, in particular, that the filtrations $\mathbb{G}^n$ are not constrained to be quasi-left-continuous, a property often assumed in the existing literature, and whose relaxation significantly complicates the problem. We believe that this is essentially the highest degree of generality one can consider while still remaining in the martingale framework. However, this level of generality comes at the price that we have to assume more properties for the limiting filtration $\mathbb{G}^\infty$. More precisely, we have to assume that $\mathbb{G}^\infty$ is quasi-left-continuous (hence so is the martingale $X^\infty$) and that the predictable representation property holds for $\mathbb{G}^\infty$ and $X^\infty$ (meaning that $N^\infty$ in (1.2) above must vanish). Although the first assumption is somehow unavoidable in such a setting, as illustrated by Mémin [52], the second one is slightly more restrictive. A proper discussion of the reasons why our approach cannot work without it requires lengthy preliminaries, hence we postpone it to Subsection 3.3 below.
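As a simple illustration of our own (not from the paper, and stated in the notation introduced in Section 2) of how quasi-left-continuity can fail along the approximating sequence, consider a martingale with a jump at a fixed, hence predictable, time:

```latex
% Let \xi be a centred, square-integrable random variable and set
%   X_t := \xi \,\mathbf{1}_{[1,\infty)}(t),
% a martingale in its (augmented) natural filtration.  The jump occurs at
% the predictable time 1, so the compensator of \mu^X charges \{1\}:
\[
  \nu^{(X,\mathbb{F})}\big(\{1\}\times \mathrm{d}x\big)
  = \mathbb{P}(\xi \in \mathrm{d}x) \quad \text{on } \{x \neq 0\},
\]
% and X is not quasi-left-continuous.  Discrete-time approximations, such
% as scaled random walks embedded in continuous time, jump only at fixed
% deterministic times and are therefore typically not quasi-left-continuous
% either; this is one reason the assumption is imposed only on the limiting
% pair (X^\infty, G^\infty).
```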
Our results stipulate that, under these assumptions, the extended convergence of $(\xi^n,\mathbb{G}^n)$ implies the joint convergence in the Skorokhod topology of $(Y^n,\ Z^n\cdot X^{n,c} + U^n\star\widetilde{\mu}^{X^{n,d}},\ N^n)$, but also of the angle brackets $(\langle Y^n\rangle, \langle Y^n, X^n\rangle, \langle N^n\rangle)$; the convergence of the respective square bracket processes is also obtained, but this is well known in the existing literature. In case the processes $X^n$ have in addition independent increments, we can obtain that the above convergences also hold in law when we work under the natural filtration associated to $X^n$ (see Corollary 3.4). Besides, if we assume that there exist two sequences that converge to the continuous and the purely discontinuous part of the limiting martingale respectively, then the angle brackets of $Y^n$ with respect to these sequences converge to the angle brackets of $Y^\infty$ with respect to the continuous and the purely discontinuous part of the limiting martingale, see Corollary 3.10.

On the way to proving our results, we needed to apply the Burkholder-Davis-Gundy inequality, as well as the Doob inequality, in their general form, namely for a (suitable) moderate Young function $\Phi$; for the definition, see Appendix A.3. This finally allowed us to obtain a "sharp" $\mathbb{L}^2$-convergence in our results, and not simply an $\mathbb{L}^{2+\varepsilon}$-one, for some $\varepsilon > 0$, as is usually imposed in the literature in order to have sufficient integrability. As the reader may suspect, this result was possible due to the special role that $p=2$ plays in the general $\mathbb{L}^p$-theory.

The rest of the paper is organised as follows. Section 2 introduces all the relevant notions from stochastic analysis and stochastic integration, as well as from the study of the Skorokhod space and the extended convergence of Aldous. Section 3 is then devoted to the statement of our main results, a comprehensive comparison with the existing literature, as well as a very detailed explanation of our strategy of proof. The proof itself follows, while the appendices collect important technical results.

Notation.
Let $\mathbb{R}_+$ denote the set of non-negative real numbers, $\overline{\mathbb{R}}_+ := [0,\infty]$ and $\overline{\mathbb{N}} := \mathbb{N}\cup\{\infty\}$. For any positive integer $\ell$, any $x\in\mathbb{R}^\ell$ will be identified with a column vector of length $\ell$, $x^i$ will denote the $i$-th element of $x$, and $\pi^i$ will denote the canonical $i$-th projection $\mathbb{R}^\ell\ni x\longmapsto x^i\in\mathbb{R}$, for $1\le i\le\ell$. The identity function $\mathbb{R}^\ell\ni x\longmapsto x\in\mathbb{R}^\ell$ will be denoted by $\mathrm{Id}_\ell$, where we will suppress the index when the dimension is clear. By $|x|$ we will denote the usual Euclidean norm of $x$, while the metric compatible with the topology induced by the Euclidean norm will be denoted by $d_{|\cdot|}$. For any additional positive integer $q$, a $q\times\ell$-matrix with real entries will be considered as an element of $\mathbb{R}^{q\times\ell}$. For any $z\in\mathbb{R}^{q\times\ell}$, its transpose will be denoted by $z^\top\in\mathbb{R}^{\ell\times q}$. The element at the $i$-th row and $j$-th column of $z\in\mathbb{R}^{q\times\ell}$ will be denoted by $z^{ij}$, for $1\le i\le q$ and $1\le j\le\ell$. The trace of a square matrix $z\in\mathbb{R}^{\ell\times\ell}$ is $\mathrm{Tr}[z] := \sum_{i=1}^\ell z^{ii}$. We endow $\mathbb{R}^{q\times\ell}$ with the $\|\cdot\|_2$-norm, defined for any $z\in\mathbb{R}^{q\times\ell}$ by $\|z\|_2 := \sqrt{\mathrm{Tr}[z^\top z]}$, and remind the reader that this norm is derived from the inner product defined for any $(z,u)\in\mathbb{R}^{q\times\ell}\times\mathbb{R}^{q\times\ell}$ by $\mathrm{Tr}[z^\top u]$. Moreover, we will also make use of the $\|\cdot\|_1$-norm, which is defined as $\|z\|_1 := \sum_{i=1}^q\sum_{j=1}^\ell |z^{ij}|$, for $z\in\mathbb{R}^{q\times\ell}$. We abuse notation and denote by $0$ the neutral element in both the groups $(\mathbb{R}^\ell,+)$ and $(\mathbb{R}^{q\times\ell},+)$.
Throughout the rest of the paper, $p, q, \ell$ will always denote natural integers and, in particular, $\ell$ will be fixed. Let $E$ denote a finite dimensional topological space; then $\mathcal{B}(E)$ will denote the associated Borel $\sigma$-algebra. Furthermore, for any other finite dimensional topological space $G$ and for any non-negative measure $\rho$ defined on $(\mathbb{R}_+,\mathcal{B}(\mathbb{R}_+))$, we will denote the Lebesgue-Stieltjes integral with respect to $\rho$ of any measurable map $f : (\mathbb{R}_+,\mathcal{B}(\mathbb{R}_+))\longrightarrow(G,\mathcal{B}(G))$ by
$$\int_{(u,t]} f(s)\,\rho(\mathrm{d}s) \quad\text{and}\quad \int_{(u,\infty)} f(s)\,\rho(\mathrm{d}s), \quad\text{for any } u,t\in\mathbb{R}_+.$$
In case $\rho$ is a finite measure with associated distribution function $F^\rho(\cdot) := \rho([0,\cdot])$, we will indifferently denote the above integrals by
$$\int_{(u,t]} f(s)\,\mathrm{d}F^\rho_s \quad\text{and}\quad \int_{(u,\infty)} f(s)\,\mathrm{d}F^\rho_s, \quad\text{for any } u,t\in\mathbb{R}_+.$$
When there is no confusion as to which measure the distribution function $F^\rho$ is associated to, we will omit the upper index and simply write $F$. More generally, for any measure $\varrho$ on $(\mathbb{R}_+\times E,\mathcal{B}(\mathbb{R}_+)\otimes\mathcal{B}(E))$ and for any measurable map $g : (\mathbb{R}_+\times E,\mathcal{B}(\mathbb{R}_+)\otimes\mathcal{B}(E))\longrightarrow(G,\mathcal{B}(G))$, we will denote the Lebesgue-Stieltjes integral by
$$\int_{(u,t]\times A} g(s,x)\,\varrho(\mathrm{d}s,\mathrm{d}x) \quad\text{and}\quad \int_{(u,\infty)\times A} g(s,x)\,\varrho(\mathrm{d}s,\mathrm{d}x), \quad\text{for any } t,u\in\mathbb{R}_+,\ A\in\mathcal{B}(E).$$
The integrals above are to be understood in a component-wise sense.

Finally, we recall, for the convenience of the reader, some classical terminology. Let $(E,d_E)$ be a Polish space. We denote by $\mathcal{P}(E)$ the set of all probability measures on $(E,\mathcal{B}(E))$. We endow $\mathcal{P}(E)$ with the weak topology, i.e. the coarsest topology for which the mappings $\mathcal{P}(E)\ni\varrho\longmapsto\int_E f\,\mathrm{d}\varrho\in\mathbb{R}$ are continuous for all bounded continuous functions $f$ on $E$; in functional analysis, this is usually called the weak$^\star$-topology. It is well known that $E$ is Polish if and only if $\mathcal{P}(E)$ is Polish; see Aliprantis and Border [2, Theorem 15.15] or Parthasarathy [60, Theorem 6.5]. Moreover, a subset $\Gamma$ of $\mathcal{P}(E)$ is relatively compact for the weak topology if and only if it is tight, see [2, Theorem 15.22] or [60, Theorem 6.7]. For a random variable $\Xi : (\Omega,\mathcal{G})\longrightarrow(E,\mathcal{B}(E))$, its law $\mathcal{L}(\Xi)$ is defined as $\mathcal{L}(\Xi)(A) := \mathbb{P}(\{\omega\in\Omega,\ \Xi(\omega)\in A\})$, for every $A\in\mathcal{B}(E)$. We will say that the sequence of random variables $(\Xi^k)_{k\in\mathbb{N}}$ converges in law to the random variable $\Xi^\infty$, and we will write $\Xi^k \xrightarrow{\ \mathcal{L}\ } \Xi^\infty$, if $\mathcal{L}(\Xi^k)$ converges weakly to $\mathcal{L}(\Xi^\infty)$, which will be denoted by $\mathcal{L}(\Xi^k) \xrightarrow{\ \mathrm{w}\ } \mathcal{L}(\Xi^\infty)$. Moreover, we will say that the sequence of random variables $(\Xi^k)_{k\in\mathbb{N}}$ is tight if the associated sequence of laws $(\mathcal{L}(\Xi^k))_{k\in\mathbb{N}}$ is tight.
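The defining property of the weak topology can be checked by hand in a classical example, stated here purely as our own illustration: the laws $\mathrm{Binomial}(k,\lambda/k)$ converge weakly to the $\mathrm{Poisson}(\lambda)$ law (the "law of rare events"), so integrals against any bounded continuous test function converge.

```python
# Convergence in law, tested against a bounded continuous function:
# int f dL(Xi^k) -> int f dL(Xi^infty) for Binomial(k, lam/k) -> Poisson(lam).
# The test function f and the value of lam are illustrative choices.

from math import comb, exp

lam = 2.0
f = lambda x: 1.0 / (1.0 + x)  # bounded continuous test function on R_+

def integral_binomial(k):
    """int f dL(Xi^k) for Xi^k ~ Binomial(k, lam/k), computed exactly."""
    p = lam / k
    return sum(comb(k, j) * p**j * (1 - p)**(k - j) * f(j) for j in range(k + 1))

def integral_poisson(terms=100):
    """int f dL(Xi^infty) for Xi^infty ~ Poisson(lam), truncated series."""
    total, weight = 0.0, exp(-lam)  # weight = P(Xi^infty = j), from j = 0
    for j in range(terms):
        total += weight * f(j)
        weight *= lam / (j + 1)
    return total

gaps = [abs(integral_binomial(k) - integral_poisson()) for k in (10, 100, 1000)]
assert gaps[0] > gaps[1] > gaps[2]  # the integrals converge as k grows
print("gaps:", gaps)
```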
Finally, for a tight sequence $(\Xi^k)_{k\in\mathbb{N}}$ of random variables, we will say that $\Xi$ is a weak-limit point if there exists a subsequence $(\Xi^{k_l})_{l\in\mathbb{N}}$ such that $\mathcal{L}(\Xi^{k_l}) \xrightarrow{\ \mathrm{w}\ } \mathcal{L}(\Xi)$.

2. Preliminaries
2.1. The stochastic basis.
Let $(\Omega,\mathcal{G},\mathbb{P})$ be a probability space, which is fixed for the remainder of this paper. Expectations under $\mathbb{P}$ will be denoted by $\mathbb{E}[\cdot]$. We assume that all filtrations considered satisfy the usual conditions of right-continuity and $\mathbb{P}$-completeness. For a process $M$, the corresponding process stopped at a stopping time $\tau$, denoted by $M^\tau$, is defined by $M^\tau_t := M_{t\wedge\tau}$, $t\ge 0$.

For any filtration $\mathbb{F}$ on $(\Omega,\mathcal{G},\mathbb{P})$ and for any $\mathbb{F}$-stopping time $\tau$, we will denote the set of $\mathbb{R}^q$-valued and square-integrable $\mathbb{F}$-martingales stopped at $\tau$ by $\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R}^q)$. A process $(M_t)_{t\in\mathbb{R}_+}$ will also be denoted by $M$, and the usual augmentation of its natural filtration will be denoted by $\mathbb{F}^M$. Let $M\in\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R}^q)$; then its norm is defined by $\|M\|^2_{\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R}^q)} := \mathbb{E}\big[\mathrm{Tr}[\langle M\rangle_\tau]\big]$. In the sequel, we will say that the real-valued $\mathbb{F}$-martingales $L, M$ are (mutually) orthogonal, denoted by $L\perp M$, if their product $LM$ is an $\mathbb{F}$-martingale; see Jacod and Shiryaev [34, Definition I.4.11.a, Lemma I.4.13.c]. An $\mathbb{R}^q$-valued $\mathbb{F}$-martingale $L$ will be called a continuous martingale if $L_0 = 0$ and $L^i$ is a continuous $\mathbb{F}$-martingale, for each $i = 1,\ldots,q$. Moreover, an $\mathbb{R}^q$-valued $\mathbb{F}$-martingale $M$ will be called a purely discontinuous martingale if $M_0 = 0$ and $M^i$ is orthogonal to all continuous real-valued $\mathbb{F}$-martingales, for each $i = 1,\ldots,q$. Using [34, Corollary I.4.16], we can decompose the space of square-integrable $\mathbb{F}$-martingales as follows
$$\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R}^q) = \big(\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R})\big)^q = \big(\mathcal{H}^{2,c}(\mathbb{F},\tau;\mathbb{R})\oplus\mathcal{H}^{2,d}(\mathbb{F},\tau;\mathbb{R})\big)^q = \mathcal{H}^{2,c}(\mathbb{F},\tau;\mathbb{R}^q)\oplus\mathcal{H}^{2,d}(\mathbb{F},\tau;\mathbb{R}^q),$$
where we have defined, for any $m\in\mathbb{N}$,
$$\mathcal{H}^{2,c}(\mathbb{F},\tau;\mathbb{R}^m) := \big\{M\in\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R}^m),\ M\text{ is continuous}\big\},$$
$$\mathcal{H}^{2,d}(\mathbb{F},\tau;\mathbb{R}^m) := \big\{M\in\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R}^m),\ M\text{ is purely discontinuous}\big\}.$$
Then, it follows from [34, Theorem I.4.18] that any $M\in\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R}^q)$ admits a unique decomposition, up to $\mathbb{P}$-indistinguishability,
$$M_\cdot = M_0 + M^c_\cdot + M^d_\cdot, \quad\text{where } M^c_0 = M^d_0 = 0.$$
The process $M^c = (M^{c,1},\ldots,M^{c,q})\in\mathcal{H}^{2,c}(\mathbb{F},\tau;\mathbb{R}^q)$ will be called the continuous (martingale) part of $M$, and the process $M^d = (M^{d,1},\ldots,M^{d,q})\in\mathcal{H}^{2,d}(\mathbb{F},\tau;\mathbb{R}^q)$ will be called the purely discontinuous (martingale) part of $M$.
2.2. Stochastic integrals.
Let us fix an arbitrary filtration $\mathbb{F}$ on $(\Omega,\mathcal{G},\mathbb{P})$ and an arbitrary $\mathbb{F}$-stopping time $\tau$. The predictable $\sigma$-field on $\Omega\times\mathbb{R}_+$, generated by the $\mathbb{F}$-adapted and left-continuous processes, is denoted by $\mathcal{P}^{\mathbb{F}}$.

2.2.1. Itô stochastic integral.
We will follow [34, Section III.6a] throughout this sub-sub-section. Let $X\in\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R}^\ell)$. There exists an $\mathbb{F}$-predictable, càdlàg and increasing process $C^X$ such that
$$\langle X\rangle^\tau_\cdot = \int_{(0,\cdot\wedge\tau]} \frac{\mathrm{d}\langle X\rangle_s}{\mathrm{d}C^X_s}\,\mathrm{d}C^X_s,$$
where $\mathrm{d}\langle X\rangle/\mathrm{d}C^X$ is a positive semi-definite, symmetric and $\mathbb{F}$-predictable $\ell\times\ell$-matrix whose elements are defined by
$$\bigg(\frac{\mathrm{d}\langle X\rangle_\cdot}{\mathrm{d}C^X_\cdot}\bigg)^{ij} := \frac{\mathrm{d}\langle X\rangle^{ij}_\cdot}{\mathrm{d}C^X_\cdot}, \quad\text{for } i,j = 1,\ldots,\ell.$$
Let us now proceed by defining
$$\mathbb{H}^2(X,\mathbb{F},\tau;\mathbb{R}^\ell) := \bigg\{ Z : (\Omega\times\mathbb{R}_+,\mathcal{P}^{\mathbb{F}})\longrightarrow(\mathbb{R}^\ell,\mathcal{B}(\mathbb{R}^\ell)),\ \mathbb{E}\bigg[\int_{(0,\tau]} Z^\top_t\,\frac{\mathrm{d}\langle X\rangle_t}{\mathrm{d}C^X_t}\,Z_t\,\mathrm{d}C^X_t\bigg] < \infty\bigg\}.$$
Notice that this space does not depend on the choice of $C^X$. For any $Z\in\mathbb{H}^2(X,\mathbb{F},\tau;\mathbb{R}^\ell)$, the Itô stochastic integral of $Z$ with respect to $X$ is well defined and is an element of $\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R})$. It will be denoted interchangeably by $\int_0^\cdot Z_s\,\mathrm{d}X_s$ or $Z\cdot X$. Moreover, we have the following equality
$$\|Z\|^2_{\mathbb{H}^2(X,\mathbb{F},\tau;\mathbb{R}^\ell)} := \mathbb{E}\bigg[\int_{(0,\tau]} Z^\top_t\,\frac{\mathrm{d}\langle X\rangle_t}{\mathrm{d}C^X_t}\,Z_t\,\mathrm{d}C^X_t\bigg] = \mathbb{E}\big[\mathrm{Tr}[\langle Z\cdot X\rangle_\tau]\big].$$
We denote the space of Itô stochastic integrals of processes in $\mathbb{H}^2(X,\mathbb{F},\tau;\mathbb{R}^\ell)$ with respect to $X$ by $\mathcal{L}^2(X,\mathbb{F},\tau;\mathbb{R})$, and remind the reader that $\mathcal{L}^2(X,\mathbb{F},\tau;\mathbb{R})\subset\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R})$.

Remark 2.1.
In case $X$ is a continuous martingale, i.e. $X\in\mathcal{H}^{2,c}(\mathbb{F},\tau;\mathbb{R}^\ell)$, then for any $Z\in\mathbb{H}^2(X,\mathbb{F},\tau;\mathbb{R}^\ell)$ it holds that $Z\cdot X$ is an element of $\mathcal{H}^{2,c}(\mathbb{F},\tau;\mathbb{R})$, from which it follows, by [34, Section III.4a], that
$$\mathcal{L}^2(X,\mathbb{F},\tau;\mathbb{R})\subset\mathcal{H}^{2,c}(\mathbb{F},\tau;\mathbb{R})\subset\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R}).$$

2.2.2. Stochastic integral with respect to an integer-valued random measure.
Let us now define the space $\widetilde{\Omega} := \Omega\times\mathbb{R}_+\times\mathbb{R}^\ell$, as well as the $\sigma$-algebra $\widetilde{\mathcal{P}}^{\mathbb{F}} := \mathcal{P}^{\mathbb{F}}\otimes\mathcal{B}(\mathbb{R}^\ell)$. A measurable function $U : (\widetilde{\Omega},\widetilde{\mathcal{P}}^{\mathbb{F}})\longrightarrow(\mathbb{R},\mathcal{B}(\mathbb{R}))$ is called a $\widetilde{\mathcal{P}}^{\mathbb{F}}$-measurable function, or simply an $\mathbb{F}$-predictable function.

Let $\mu := \{\mu(\omega;\mathrm{d}t,\mathrm{d}x)\}_{\omega\in\Omega}$ be a random measure on $\mathbb{R}_+\times\mathbb{R}^\ell$, i.e. a family of non-negative measures defined on $(\mathbb{R}_+\times\mathbb{R}^\ell,\mathcal{B}(\mathbb{R}_+)\otimes\mathcal{B}(\mathbb{R}^\ell))$ satisfying $\mu(\omega;\{0\}\times\mathbb{R}^\ell) = 0$, identically. For an $\mathbb{F}$-predictable function $U$, we define the process
$$U*\mu_\cdot(\omega) := \begin{cases} \displaystyle\int_{(0,\cdot]\times\mathbb{R}^\ell} U(\omega,s,x)\,\mu(\omega;\mathrm{d}s,\mathrm{d}x), & \text{if } \displaystyle\int_{(0,\cdot]\times\mathbb{R}^\ell} |U(\omega,s,x)|\,\mu(\omega;\mathrm{d}s,\mathrm{d}x) < \infty,\\[1mm] \infty, & \text{otherwise}. \end{cases}$$
Let us fix an arbitrary càdlàg $\mathbb{F}$-adapted process $X$ until the end of the present sub-sub-section, from which we can define the processes $X_- = (X_{t-})_{t\in\mathbb{R}_+}$ and $\Delta X = (\Delta X_t)_{t\in\mathbb{R}_+}$, where $X_{0-} := X_0$, $X_{t-} := \lim_{s\uparrow t} X_s$ for $t>0$, and $\Delta X := X - X_-$. We associate to $X$ the integer-valued random measure $\mu^X$ of its jumps, we denote by $\nu^{(X,\mathbb{F})}$ its $\mathbb{F}$-predictable compensator, and we set $\widetilde{\mu}^{(X,\mathbb{F})} := \mu^X - \nu^{(X,\mathbb{F})}$. For $U$ in the class $G(\mu^X,\mathbb{F},\tau;\mathbb{R})$ of $\mathbb{F}$-predictable functions for which the stochastic integral with respect to $\widetilde{\mu}^{(X,\mathbb{F})}$ is well defined as a square-integrable martingale, we denote by $U\star\widetilde{\mu}^{(X,\mathbb{F})}$ the stochastic integral of $U$ with respect to $\widetilde{\mu}^{(X,\mathbb{F})}$, and point out that, for $U\in G(\mu^X,\mathbb{F},\tau;\mathbb{R})$, it holds that $\Delta(U\star\widetilde{\mu}^{(X,\mathbb{F})})_t = \int_{\mathbb{R}^\ell} U(t,x)\,\widetilde{\mu}^{(X,\mathbb{F})}(\{t\}\times\mathrm{d}x)$, by definition. Let us also introduce the following convenient notation
$$\int_{\tau_1}^{\tau_2}\int_{\mathbb{R}^\ell} U_s(x)\,\widetilde{\mu}^{(X,\mathbb{F})}(\mathrm{d}s,\mathrm{d}x) := U\star\widetilde{\mu}^{(X,\mathbb{F})}_{\tau_2} - U\star\widetilde{\mu}^{(X,\mathbb{F})}_{\tau_1},$$
where $\tau_1,\tau_2$ are $\mathbb{F}$-stopping times such that $0\le\tau_1\le\tau_2\le\infty$, $\mathbb{P}$-a.s.

Remark 2.2. Observe that the canonical projections satisfy $\pi^i\in G(\mu^X,\mathbb{F},\tau;\mathbb{R})$, for every $i=1,\ldots,\ell$. Therefore, we can associate to the process $X$ the $\mathbb{F}$-martingale $(\pi^1\star\widetilde{\mu}^{(X,\mathbb{F})},\ldots,\pi^\ell\star\widetilde{\mu}^{(X,\mathbb{F})})\in\mathcal{H}^{2,d}(\mathbb{F},\tau;\mathbb{R}^\ell)$. In case $X\in\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R}^\ell)$, it is clear that $X^d = (\pi^1\star\widetilde{\mu}^{(X,\mathbb{F})},\ldots,\pi^\ell\star\widetilde{\mu}^{(X,\mathbb{F})})$, i.e. the purely discontinuous part of the martingale $X$ is indistinguishable from $(\pi^1\star\widetilde{\mu}^{(X,\mathbb{F})},\ldots,\pi^\ell\star\widetilde{\mu}^{(X,\mathbb{F})})$. Henceforth, when $X\in\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R}^\ell)$, we will make no distinction between these two purely discontinuous martingales.
Moreover, assuming that $X$ is an $\mathbb{F}$-martingale, when we refer to the jump process $\Delta X^d$ we will mean the $\mathbb{R}^\ell$-valued process
$$\Delta X^d_\cdot := \bigg(\int_{\mathbb{R}^\ell}\pi^1(x)\,\widetilde{\mu}^{(X,\mathbb{F})}(\{\cdot\}\times\mathrm{d}x),\ldots,\int_{\mathbb{R}^\ell}\pi^\ell(x)\,\widetilde{\mu}^{(X,\mathbb{F})}(\{\cdot\}\times\mathrm{d}x)\bigg), \qquad (2.1)$$
while, assuming that $X$ is simply an $\mathbb{F}$-adapted process, when we refer to the jump process $\Delta X$ we will mean the $\mathbb{R}^\ell$-valued process
$$\Delta X_\cdot = \bigg(\int_{\mathbb{R}^\ell}\pi^1(x)\,\mu^X(\{\cdot\}\times\mathrm{d}x),\ldots,\int_{\mathbb{R}^\ell}\pi^\ell(x)\,\mu^X(\{\cdot\}\times\mathrm{d}x)\bigg). \qquad (2.2)$$
This subtle difference arises because the process $X$ may fail to be quasi-left-continuous, in which case the compensator $\nu^{(X,\mathbb{F})}$ can have jumps. In other words, the jumps of $\mu^X$ and $\widetilde{\mu}^{(X,\mathbb{F})}$ are not identical, in general.

Remark 2.3. Let $X\in\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R}^\ell)$. We can also associate an integer-valued random measure to the jumps of the martingale $X^d$, denoted by $\mu^{X^d}$, and then (2.1) can be written in a form similar to (2.2), i.e.
$$\Delta X^d_\cdot = \bigg(\int_{\mathbb{R}^\ell}\pi^1(x)\,\mu^{X^d}(\{\cdot\}\times\mathrm{d}x),\ldots,\int_{\mathbb{R}^\ell}\pi^\ell(x)\,\mu^{X^d}(\{\cdot\}\times\mathrm{d}x)\bigg). \qquad (2.3)$$

Remark 2.4. We would like to clarify a subtle detail in our notation at this point, namely the difference between the mappings $*$ and $\star$. Apart from the fact that they have different domains, let us comment on the way they are defined. Let $U\in G(\mu^X,\mathbb{F},\tau;\mathbb{R})$; then
$$U*\widetilde{\mu}^{(X,\mathbb{F})}_\cdot = \int_{(0,\cdot]\times\mathbb{R}^\ell} U(\omega,s,x)\,\widetilde{\mu}^{(X,\mathbb{F})}(\omega;\mathrm{d}s,\mathrm{d}x),$$
i.e. this is the Lebesgue-Stieltjes integral of $U$ with respect to the signed measure $\widetilde{\mu}^{(X,\mathbb{F})}$, for which the only information we have regarding its integrability is the square summability of its jumps. Clearly, this does not imply the finiteness of the process on any time interval. On the contrary, by $U\star\widetilde{\mu}^{(X,\mathbb{F})}$ we denote the square-integrable purely discontinuous $\mathbb{F}$-martingale whose jump at each time $t$ is given by $\int_{\mathbb{R}^\ell} U(t,x)\,\widetilde{\mu}^{(X,\mathbb{F})}(\{t\}\times\mathrm{d}x)$.
A specific case where the two processes $U*\widetilde{\mu}^{(X,\mathbb{F})}$ and $U\star\widetilde{\mu}^{(X,\mathbb{F})}$ coincide is given by [34, Proposition II.1.28], and corresponds to the finite variation case.

The space of real-valued square-integrable stochastic integrals with respect to $\widetilde{\mu}^{(X,\mathbb{F})}$ will be denoted by
$$\mathcal{K}^2(\mu^X,\mathbb{F},\tau;\mathbb{R}) := \big\{ U\star\widetilde{\mu}^{(X,\mathbb{F})},\ U\in G(\mu^X,\mathbb{F},\tau;\mathbb{R})\big\}.$$
By [34, Theorem II.1.33], or He, Wang, and Yan [31, Theorem 11.21], and the Kunita-Watanabe inequality, see e.g. [31, Corollary 6.34], we have $\mathbb{E}\big[\langle U\star\widetilde{\mu}^{(X,\mathbb{F})}\rangle_\tau\big] < \infty$ if and only if $U\in G(\mu^X,\mathbb{F},\tau;\mathbb{R})$, which enables us to define the following more convenient space
$$\mathbb{H}^2(\mu^X,\mathbb{F},\tau;\mathbb{R}) := \big\{ U : (\widetilde{\Omega},\widetilde{\mathcal{P}}^{\mathbb{F}})\longrightarrow(\mathbb{R},\mathcal{B}(\mathbb{R})),\ \mathbb{E}\big[\langle U\star\widetilde{\mu}^{(X,\mathbb{F})}\rangle_\tau\big] < \infty\big\},$$
and we emphasise that we have the direct identification $\mathbb{H}^2(\mu^X,\mathbb{F},\tau;\mathbb{R}) = G(\mu^X,\mathbb{F},\tau;\mathbb{R})$.

2.2.3. Orthogonal decompositions. We close this subsection with a reminder on orthogonal decompositions of square-integrable martingales.

Definition 2.5. Let $X\in\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R}^\ell)$. $X$ is said to possess the $\mathbb{F}$-predictable representation property if
$$\mathcal{H}^2_0(\mathbb{F},\tau;\mathbb{R}) = \mathcal{L}^2(X^c,\mathbb{F},\tau;\mathbb{R})\oplus\mathcal{K}^2(\mu^X,\mathbb{F},\tau;\mathbb{R}),$$
where $\mathcal{H}^2_0(\mathbb{F},\tau;\mathbb{R}) := \{M\in\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R}),\ M_0 = 0\}$. In other words, for any $M\in\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R})$, there exists a pair $(Z,U)\in\mathbb{H}^2(X^c,\mathbb{F},\tau;\mathbb{R}^\ell)\times\mathbb{H}^2(\mu^X,\mathbb{F},\tau;\mathbb{R})$ such that
$$M_\cdot = M_0 + Z\cdot X^c_\cdot + U\star\widetilde{\mu}^{(X,\mathbb{F})}_\cdot.$$

In the sequel, we adopt the notation of Cohen and Elliott [12, Sections 13.2-3]. We associate to a random measure $\mu$ the measure $M_\mu : (\widetilde{\Omega},\mathcal{G}\otimes\mathcal{B}(\mathbb{R}_+)\otimes\mathcal{B}(\mathbb{R}^\ell))\longrightarrow\overline{\mathbb{R}}_+$, which is defined as $M_\mu(B) := \mathbb{E}[\mathbf{1}_B*\mu_\infty]$. We will refer to $M_\mu$ as the Doléans measure associated to $\mu$. If there exists an $\mathbb{F}$-predictable partition $(A_n)_{n\in\mathbb{N}}$ of $\widetilde{\Omega}$ such that $M_\mu(A_n) < \infty$, for every $n\in\mathbb{N}$, then we will say that $\mu$ is $\mathbb{F}$-predictably $\sigma$-integrable, and we will denote this by $\mu\in\widetilde{\mathcal{A}}_\sigma(\mathbb{F})$.
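Two canonical instances of the predictable representation property of Definition 2.5, included here purely for illustration (they do not appear verbatim in the paper):

```latex
% (i) If B is a one-dimensional Brownian motion and F = F^B, then Ito's
% representation theorem yields, for every M in H^2_0(F, tau; R),
\[
  M_\cdot = \int_0^{\cdot} Z_s\,\mathrm{d}B_s,
  \qquad Z\in\mathbb{H}^2(B,\mathbb{F},\tau;\mathbb{R}),
\]
% so that H^2_0 = L^2(B^c, F, tau; R), with K^2(mu^B, F, tau; R) = {0}
% since B^c = B and mu^B is trivial.
% (ii) Symmetrically, if X_t := P_t - \lambda t is a compensated Poisson
% process and F = F^X, every M in H^2_0(F, tau; R) is of the form
\[
  M_\cdot = U \star \widetilde{\mu}^{(X,\mathbb{F})}_\cdot,
\]
% so the continuous component L^2(X^c, F, tau; R) is trivial.  In both
% cases the orthogonal remainder in the decomposition of Definition 2.7
% below vanishes, which is exactly the situation assumed for the limiting
% pair (X^\infty, G^\infty).
```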
For a sub-$\sigma$-algebra $\mathcal{A}$ of $\mathcal{G}\otimes\mathcal{B}(\mathbb{R}_+)\otimes\mathcal{B}(\mathbb{R}^\ell)$, the restriction of the measure $M_\mu$ to $(\widetilde{\Omega},\mathcal{A})$ will be denoted by $M_\mu|_{\mathcal{A}}$. Moreover, for $W : (\widetilde{\Omega},\mathcal{G}\otimes\mathcal{B}(\mathbb{R}_+)\otimes\mathcal{B}(\mathbb{R}^\ell))\longrightarrow(\mathbb{R},\mathcal{B}(\mathbb{R}))$, we define the random measure $W\mu$ as follows
$$(W\mu)(\omega;\mathrm{d}s,\mathrm{d}x) := W(\omega,s,x)\,\mu(\omega;\mathrm{d}s,\mathrm{d}x).$$

Definition 2.6. Let $\mu\in\widetilde{\mathcal{A}}_\sigma(\mathbb{F})$ and $W : (\widetilde{\Omega},\mathcal{G}\otimes\mathcal{B}(\mathbb{R}_+)\otimes\mathcal{B}(\mathbb{R}^\ell))\longrightarrow(\mathbb{R},\mathcal{B}(\mathbb{R}))$ be such that $|W|\mu\in\widetilde{\mathcal{A}}_\sigma(\mathbb{F})$. Then we define the conditional $\mathbb{F}$-predictable projection of $W$ on $\mu$, denoted by $M_\mu[W|\widetilde{\mathcal{P}}^{\mathbb{F}}]$, as follows
$$M_\mu\big[W\big|\widetilde{\mathcal{P}}^{\mathbb{F}}\big] := \frac{\mathrm{d}M_{W\mu}|_{\widetilde{\mathcal{P}}^{\mathbb{F}}}}{\mathrm{d}M_\mu|_{\widetilde{\mathcal{P}}^{\mathbb{F}}}}.$$

The following definition is justified by [34, Lemma III.4.24].

Definition 2.7. Let $Y\in\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R})$ and consider a triple $(Z,U,N)\in\mathbb{H}^2(X^c,\mathbb{F},\tau;\mathbb{R}^\ell)\times\mathbb{H}^2(\mu^X,\mathbb{F},\tau;\mathbb{R})\times\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R})$ such that
$$Y = Y_0 + Z\cdot X^c + U\star\widetilde{\mu}^{(X,\mathbb{F})} + N, \qquad (2.4)$$
with $\langle N, X^{c,i}\rangle = 0$, for $i=1,\ldots,\ell$, and $M_{\mu^X}[\Delta N|\widetilde{\mathcal{P}}^{\mathbb{F}}] = 0$. Then, (2.4) is called the orthogonal decomposition of $Y$ with respect to $(X^c,\mu^X,\mathbb{F})$.

We conclude this subsection with a useful corollary, which must be preceded by the definition of the following space
$$\mathcal{H}^2(X^\perp,\mathbb{F},\tau;\mathbb{R}) := \big\{ N\in\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R}),\ N\perp L,\ \forall L\in\mathcal{L}^2(X^c,\mathbb{F},\tau;\mathbb{R})\oplus\mathcal{K}^2(\mu^X,\mathbb{F},\tau;\mathbb{R})\big\}.$$

Corollary 2.8. Let $Y\in\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R})$, $X\in\mathcal{H}^2(\mathbb{F},\tau;\mathbb{R}^\ell)$, and let $Y = Y_0 + Z\cdot X^c + U\star\widetilde{\mu}^{(X,\mathbb{F})} + N$ be the orthogonal decomposition of $Y$ with respect to $(X^c,\mu^X,\mathbb{F})$. Then $N\in\mathcal{H}^2(X^\perp,\mathbb{F},\tau;\mathbb{R})$. In particular, $N\perp X^i$, for every $i=1,\ldots,\ell$.

Proof. This is immediate by [34, Proposition I.4.15, Theorem III.4.5] and [12, Theorem 13.3.16]. □

2.3. The Skorokhod space $(\mathbb{D},J_1)$ and convergence in the extended sense.
The natural path space for an $\mathbb{R}^{p\times q}$-valued process adapted to some filtration is the Skorokhod space
$$\mathbb{D}([0,\infty);\mathbb{R}^{p\times q}) := \big\{\alpha:\mathbb{R}_+\longrightarrow\mathbb{R}^{p\times q},\ \alpha\text{ is càdlàg}\big\},$$
which we equip with the Skorokhod $J_1(\mathbb{R}^{p\times q})$-topology, see [34, Section VI.1b]. We denote by $d_{J_1(\mathbb{R}^{p\times q})}$ the metric which is compatible with the $J_1(\mathbb{R}^{p\times q})$-topology. Due to [34, Comments VI.1.21-22], and since in the remainder of the paper we will have to distinguish between joint and separate convergence on products of Skorokhod spaces, we will always indicate the state space in our notation, for the sake of clarity. We remind the reader that $(\mathbb{D}([0,\infty);\mathbb{R}^{p\times q}), d_{J_1(\mathbb{R}^{p\times q})})$ is a Polish space, see [34, Theorem VI.1.14]. We will denote by $d_{\mathrm{lu}}$ the metric on $\mathbb{D}([0,\infty);\mathbb{R}^{p\times q})$ which is compatible with the topology of locally uniform convergence, see [34, Section VI.1a], and by $d_{\|\cdot\|_\infty}$ the metric on $\mathbb{D}([0,\infty);\mathbb{R}^{p\times q})$ which is compatible with the topology of uniform convergence. Clearly, $d_{\|\cdot\|_\infty}$ is stronger than $d_{\mathrm{lu}}$. Moreover, it is well known that $d_{\mathrm{lu}}$ is stronger than $d_{J_1(\mathbb{R}^{p\times q})}$, see [34, Proposition VI.1.17]. In order to simplify notation, we set $\mathbb{D}^{p\times q} := \mathbb{D}([0,\infty);\mathbb{R}^{p\times q})$ and, in case $p=q=1$, $\mathbb{D} := \mathbb{D}^{1\times 1}$. Moreover, to avoid any misunderstanding, we will not introduce any shorthand notation for the space $(\mathbb{D}([0,\infty);\mathbb{R}))^{p\times q}$. We will postpone all proofs of the present section, except for the very short ones, to Appendix A.1, for the sake of readability.

Definition 2.9. Let $(M^k)_{k\in\overline{\mathbb{N}}}$ be an arbitrary sequence such that $M^k$ is an $\mathbb{R}^{p\times q}$-valued càdlàg process, for every $k\in\overline{\mathbb{N}}$.
(i) The sequence $(M^k)_{k\in\mathbb{N}}$ converges in probability under the $J_1(\mathbb{R}^{p\times q})$-topology to $M^\infty$ if
$$\mathbb{P}\big(d_{J_1(\mathbb{R}^{p\times q})}(M^k,M^\infty) > \varepsilon\big) \xrightarrow[k\to\infty]{} 0, \quad\text{for every } \varepsilon > 0,$$
and we denote this by $M^k \xrightarrow{(J_1(\mathbb{R}^{p\times q}),\,\mathbb{P})} M^\infty$.
(ii) Let $\vartheta\in[1,\infty)$.
The sequence $(M^k)_{k\in\mathbb{N}}$ converges in $\mathbb{L}^\vartheta$-mean under the $J_1(\mathbb{R}^{p\times q})$-topology to $M^\infty$ if
$$\mathbb{E}\big[\big(d_{J_1(\mathbb{R}^{p\times q})}(M^k,M^\infty)\big)^{\vartheta}\big]\xrightarrow[k\to\infty]{}0,$$
and we denote it by $M^k\xrightarrow{(J_1(\mathbb{R}^{p\times q}),\,\mathbb{L}^\vartheta)}M^\infty$.
(iii) Analogously, we denote by $M^k\xrightarrow{(\mathrm{lu},\,\mathbb{P})}M^\infty$, resp. $M^k\xrightarrow{(\mathrm{lu},\,\mathbb{L}^\vartheta)}M^\infty$, the convergence in probability, resp. in $\mathbb{L}^\vartheta$-mean, under the locally uniform topology. Notice that we omit the index associated to the convergence (i.e. $k\longrightarrow\infty$). We will do the same in the remainder of the paper, when it is clear to which index the convergence refers.
(iv) Let $(p_1,p_2,q_1,q_2)\in\mathbb{N}^4$. Moreover, let $(M^k)_{k\in\mathbb{N}}$ be a sequence of $\mathbb{R}^{p_1\times q_1}$-valued càdlàg processes and $(N^k)_{k\in\mathbb{N}}$ be a sequence of $\mathbb{R}^{p_2\times q_2}$-valued càdlàg processes. For $\vartheta_1,\vartheta_2\in[1,\infty)$ we will write
$$(M^k,N^k)\xrightarrow{\big(J_1(\mathbb{R}^{p_1\times q_1}\times\mathbb{R}^{p_2\times q_2}),\ \mathbb{L}^{\vartheta_1}(\mathbb{D}^{p_1\times q_1})\times\mathbb{L}^{\vartheta_2}(\mathbb{D}^{p_2\times q_2})\big)}(M^\infty,N^\infty),$$
if the following convergences hold
$$(M^k,N^k)\xrightarrow{\big(J_1(\mathbb{R}^{p_1\times q_1}\times\mathbb{R}^{p_2\times q_2}),\,\mathbb{P}\big)}(M^\infty,N^\infty),\qquad M^k\xrightarrow{(J_1(\mathbb{R}^{p_1\times q_1}),\,\mathbb{L}^{\vartheta_1})}M^\infty,\qquad\text{and}\qquad N^k\xrightarrow{(J_1(\mathbb{R}^{p_2\times q_2}),\,\mathbb{L}^{\vartheta_2})}N^\infty.$$
Let us now introduce notions related to the convergence of $\sigma$-fields and filtrations. We will need the filtrations to be indexed by $[0,\infty]$; hence, given a filtration $\mathbb{F}:=(\mathcal{F}_t)_{t\geq 0}$, we define the $\sigma$-algebra $\mathcal{F}_\infty$ by using the convention
$$\mathcal{F}_\infty:=\mathcal{F}_{\infty-}=\bigvee_{t\geq 0}\mathcal{F}_t.$$
We also recall the following notation, for every sub-$\sigma$-field $\mathcal{F}$ of $\mathcal{G}$ and $\vartheta\in[1,\infty)$:
$$\mathbb{L}^\vartheta(\Omega,\mathcal{F},\mathbb{P};\mathbb{R}^q):=\big\{\xi,\ \mathbb{R}^q\text{-valued and }\mathcal{F}\text{-measurable, such that }\mathbb{E}[|\xi|^\vartheta]<\infty\big\}.$$
Definition 2.10. (i) A sequence of $\sigma$-algebras $(\mathcal{F}^k)_{k\in\mathbb{N}}$ converges weakly to the $\sigma$-algebra $\mathcal{F}^\infty$ if, for every $\xi\in\mathbb{L}^1(\Omega,\mathcal{F}^\infty,\mathbb{P};\mathbb{R})$, we have $\mathbb{E}[\xi|\mathcal{F}^k]\xrightarrow{\mathbb{P}}\mathbb{E}[\xi|\mathcal{F}^\infty]$. We denote the weak convergence of $\sigma$-algebras by $\mathcal{F}^k\xrightarrow{w}\mathcal{F}^\infty$.
(ii) A sequence of filtrations $(\mathbb{F}^k:=(\mathcal{F}^k_t)_{t\geq 0})_{k\in\mathbb{N}}$ converges weakly to $\mathbb{F}^\infty:=(\mathcal{F}^\infty_t)_{t\geq 0}$ if, for every $\xi\in\mathbb{L}^1(\Omega,\mathcal{F}^\infty_\infty,\mathbb{P};\mathbb{R})$, we have $\mathbb{E}[\xi|\mathcal{F}^k_\cdot]\xrightarrow{(J_1(\mathbb{R}),\,\mathbb{P})}\mathbb{E}[\xi|\mathcal{F}^\infty_\cdot]$. We denote the weak convergence of filtrations by $\mathbb{F}^k\xrightarrow{w}\mathbb{F}^\infty$.
(iii) Consider the sequence $((M^k,\mathbb{F}^k))_{k\in\mathbb{N}}$, where $M^k$ is an $\mathbb{R}^q$-valued càdlàg process and $\mathbb{F}^k$ is a filtration, for any $k\in\mathbb{N}$. The sequence $((M^k,\mathbb{F}^k))_{k\in\mathbb{N}}$ converges in the extended sense to $(M^\infty,\mathbb{F}^\infty)$ if, for every $\xi\in\mathbb{L}^1(\Omega,\mathcal{F}^\infty_\infty,\mathbb{P};\mathbb{R})$,
$$\begin{pmatrix}M^k\\ \mathbb{E}[\xi|\mathcal{F}^k_\cdot]\end{pmatrix}\xrightarrow{(J_1(\mathbb{R}^{q+1}),\,\mathbb{P})}\begin{pmatrix}M^\infty\\ \mathbb{E}[\xi|\mathcal{F}^\infty_\cdot]\end{pmatrix}.\qquad(2.5)$$
We denote the convergence in the extended sense by $(M^k,\mathbb{F}^k)\xrightarrow{\mathrm{ext}}(M^\infty,\mathbb{F}^\infty)$.
Remark 2.11. For the definition of weak convergence of filtrations, we could have used only random variables $\xi$ of the form $\mathbb{1}_A$, for $A\in\mathcal{F}^\infty_\infty$. Indeed, the two definitions are equivalent; see Coquet et al. [17, Remark 1.1)].
The following result, which is due to Hoover [32, Theorem 7.4], provides a sufficient condition for the weak convergence of $\sigma$-algebras which are generated by random variables.
Example 2.12. Let $(\xi^k)_{k\in\mathbb{N}}$ be a sequence of random variables such that $\xi^k\xrightarrow{\mathbb{P}}\xi^\infty$. Then the convergence $\sigma(\xi^k)\xrightarrow{w}\sigma(\xi^\infty)$ holds, where $\sigma(\psi)$ denotes the $\sigma$-algebra generated by the random variable $\psi$.
In the next example, which is [17, Proposition 2], a sufficient condition for the weak convergence of the natural filtrations of stochastic processes is provided.
Example 2.13. Let $M^k$ be a process with independent increments, for every $k\in\mathbb{N}$. If $M^k\xrightarrow{(J_1(\mathbb{R}^q),\,\mathbb{P})}M^\infty$, then $\mathbb{F}^{M^k}\xrightarrow{w}\mathbb{F}^{M^\infty}$.
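The mechanism of Example 2.12 can be sanity-checked numerically in a toy Gaussian model. The following sketch is ours, not part of the paper: for $\xi\sim\mathcal{N}(0,1)$ and independent noise $\varepsilon^k\sim\mathcal{N}(0,1/k^2)$, set $\xi^k:=\xi+\varepsilon^k\xrightarrow{\mathbb{P}}\xi$, and the conditional expectation $\mathbb{E}[\xi|\sigma(\xi^k)]=\xi^k/(1+1/k^2)$ is available in closed form; its $\mathbb{L}^2$-distance to $\mathbb{E}[\xi|\sigma(\xi)]=\xi$ should shrink like $1/k^2$.

```python
import numpy as np

# Toy illustration (ours) of Example 2.12: xi^k -> xi in probability implies
# sigma(xi^k) -> sigma(xi) weakly, witnessed through the convergence in
# probability of the conditional expectations E[xi | sigma(xi^k)] -> xi.
# In the jointly Gaussian model xi ~ N(0,1), xi^k = xi + eps_k with
# eps_k ~ N(0, 1/k^2) independent, the projection is explicit:
#   E[xi | xi^k] = xi^k / (1 + 1/k^2).

rng = np.random.default_rng(0)
n = 200_000
xi = rng.standard_normal(n)

def mean_sq_gap(k: int) -> float:
    """Monte Carlo estimate of E[(E[xi | xi^k] - xi)^2]; exact value is
    (1/k^2)/(1 + 1/k^2)."""
    xik = xi + rng.standard_normal(n) / k
    cond_exp = xik / (1.0 + 1.0 / k**2)
    return float(np.mean((cond_exp - xi) ** 2))

gaps = [mean_sq_gap(k) for k in (1, 4, 16, 64)]
print(gaps)  # decreasing towards 0
```

Of course, Example 2.12 requires no Gaussian structure; the closed-form projection above is only what makes the check computable without numerical conditioning.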
In the remainder of this section, we fix an arbitrary sequence of filtrations $(\mathbb{F}^k)_{k\in\mathbb{N}}$ on $(\Omega,\mathcal{G},\mathbb{P})$ (recall Footnote 4), with $\mathbb{F}^k:=(\mathcal{F}^k_t)_{t\geq 0}$, and an arbitrary sequence $(M^k)_{k\in\mathbb{N}}$, where $M^k$ is an $\mathbb{R}^q$-valued, uniformly integrable $\mathbb{F}^k$-martingale, for every $k\in\mathbb{N}$. Then, it is well known that the random variables $M^k_\infty:=\lim_{t\to\infty}M^k_t$ are well defined $\mathbb{P}$-a.s., and $M^k_\infty\in\mathbb{L}^1(\Omega,\mathcal{F}^k_\infty,\mathbb{P};\mathbb{R}^q)$, for $k\in\mathbb{N}$; see [34, Theorem I.1.42].
Next, we would like to discuss how to deduce the extended convergence of martingales and filtrations from individual convergence results. Such properties have already been obtained by Mémin [52, Proposition 1.(iii)], where he refers to Coquet et al. [17, Proposition 7] for the proof. However, the authors in [17] proved the result under the additional assumption that the processes are adapted to their natural filtrations. Moreover, they consider a finite time horizon $T$, which gives the time point $T$ a special role for the $J_1(\mathbb{R})$-topology on $\mathbb{D}([0,T];\mathbb{R})$; see also [34, Remark VI.1.10]. In addition, in [17, Remark 1.2)], the convergence $M^k_\infty\xrightarrow{\mathbb{L}^1(\Omega,\mathcal{F}^\infty_\infty,\mathbb{P};\mathbb{R}^q)}M^\infty_\infty$ is assumed, although it is not necessary (note that we have translated their results into our notation). This is restrictive, in the sense that they have to assume in addition the $\mathcal{F}^\infty_\infty$-measurability of $M^k_\infty$, for each $k\in\mathbb{N}$. We present below, for the sake of completeness, the statement and proof of the aforementioned results for the infinite time horizon case, under the condition $M^k_\infty\xrightarrow{\mathbb{L}^1(\Omega,\mathcal{G},\mathbb{P};\mathbb{R}^q)}M^\infty_\infty$.
Proposition 2.14. Assume that the convergence $M^k_\infty\xrightarrow{\mathbb{L}^1(\Omega,\mathcal{G},\mathbb{P};\mathbb{R}^q)}M^\infty_\infty$ holds. Then, the convergence $\mathbb{F}^k\xrightarrow{w}\mathbb{F}^\infty$ is equivalent to the convergence $(M^k,\mathbb{F}^k)\xrightarrow{\mathrm{ext}}(M^\infty,\mathbb{F}^\infty)$.
The following two results, which are essentially [52, Theorem 11, Corollary 12], constitute the cornerstone for the convergence in the extended sense.
Here we state and prove them in the multi-dimensional case. Before we proceed, let us recall some further definitions. An $\mathbb{F}$-adapted process $M$ is called $\mathbb{F}$-quasi-left-continuous if $\Delta M_\sigma=0$, $\mathbb{P}$-a.s., for every $\mathbb{F}$-predictable time $\sigma$. An $\mathbb{F}$-adapted process $S$ is called an $\mathbb{F}$-special semimartingale if $S=S_0+M+A$, where $S_0$ is finite-valued and $\mathcal{F}_0$-measurable, $M$ is a local $\mathbb{F}$-martingale with $M_0=0$, and $A$ is an $\mathbb{F}$-predictable, finite variation process with $A_0=0$; see [34, Definition I.4.21]. This decomposition of an $\mathbb{F}$-special semimartingale is unique, and for this reason we will call it the $\mathbb{F}$-canonical decomposition of $S$. For a process $A$ of finite variation, we denote by $\mathrm{Var}(A)$ the (total) variation process of $A$, i.e. $\mathrm{Var}(A)_t(\omega)$ is the total variation of the function $\mathbb{R}_+\ni s\longmapsto A_s(\omega)\in\mathbb{R}$ on the interval $[0,t]$. For $A\in\mathbb{D}^{p\times q}$, we denote by $\mathrm{Var}(A)\in\mathbb{D}^{p\times q}$ the process for which $\mathrm{Var}(A)^{ij}:=\mathrm{Var}(A^{ij})$, for $i=1,\dots,p$ and $j=1,\dots,q$.
Theorem 2.15. Let $(S^k)_{k\in\mathbb{N}}$ be a sequence of $\mathbb{R}^q$-valued $\mathbb{F}^k$-special semimartingales with $\mathbb{F}^k$-canonical decomposition $S^k=S^k_0+M^k+A^k$, for every $k\in\mathbb{N}$. Assume that $S^\infty$ is $\mathbb{F}^\infty$-quasi-left-continuous and that the following properties hold:
(i) the sequence $([S^{k,i}]^{1/2}_\infty)_{k\in\mathbb{N}}$ is uniformly integrable, for every $i=1,\dots,q$;
(ii) the sequence $(\|\mathrm{Var}(A^k)_\infty\|)_{k\in\mathbb{N}}$ is tight;
(iii) the extended convergence $(S^k,\mathbb{F}^k)\xrightarrow{\mathrm{ext}}(S^\infty,\mathbb{F}^\infty)$ holds.
Then $(S^k,M^k,A^k)\xrightarrow{(J_1(\mathbb{R}^q\times\mathbb{R}^q\times\mathbb{R}^q),\,\mathbb{P})}(S^\infty,M^\infty,A^\infty)$.
Proof. By [52, Theorem 11], we obtain for every $i=1,\dots,q$ the convergence $(S^{k,i},M^{k,i},A^{k,i})\xrightarrow{(J_1(\mathbb{R}\times\mathbb{R}\times\mathbb{R}),\,\mathbb{P})}(S^{\infty,i},M^{\infty,i},A^{\infty,i})$. Then, by assumption, $S^k\xrightarrow{(J_1(\mathbb{R}^q),\,\mathbb{P})}S^\infty$, and using Corollary A.2 and Remark A.3 we obtain the required result. $\square$
Theorem 2.16. Let $M^k\in\mathcal{H}^2(\mathbb{F}^k,\infty;\mathbb{R}^q)$, for any $k\in\mathbb{N}$, and let $M^\infty$ be $\mathbb{F}^\infty$-quasi-left-continuous.
If the following convergences hold
$$(M^k,\mathbb{F}^k)\xrightarrow{\mathrm{ext}}(M^\infty,\mathbb{F}^\infty)\qquad\text{and}\qquad M^k_\infty\xrightarrow{\mathbb{L}^2(\Omega,\mathcal{G},\mathbb{P};\mathbb{R}^q)}M^\infty_\infty,$$
then
(i) $(M^k,[M^k],\langle M^k\rangle)\xrightarrow{(J_1(\mathbb{R}^q\times\mathbb{R}^{q\times q}\times\mathbb{R}^{q\times q}),\,\mathbb{P})}(M^\infty,[M^\infty],\langle M^\infty\rangle)$;
(ii) for every $i=1,\dots,q$, we have $[M^{k,i}]_\infty\xrightarrow[k\to\infty]{\mathbb{L}^1(\Omega,\mathcal{G},\mathbb{P};\mathbb{R})}[M^{\infty,i}]_\infty$ and $\langle M^{k,i}\rangle_\infty\xrightarrow[k\to\infty]{\mathbb{L}^1(\Omega,\mathcal{G},\mathbb{P};\mathbb{R})}\langle M^{\infty,i}\rangle_\infty$.
We conclude this subsection with the following technical lemma, which will be of the utmost importance for us in the proof of our robustness result for martingale representations.
Lemma 2.17. Let $(L^k)_{k\in\mathbb{N}}$ be a sequence of $\mathbb{R}^p$-valued processes such that $(\mathrm{Tr}[[L^k]_\infty])_{k\in\mathbb{N}}$ is uniformly integrable, and let $(N^k)_{k\in\mathbb{N}}$ be a sequence of $\mathbb{R}^q$-valued processes such that $(\mathrm{Tr}[[N^k]_\infty])_{k\in\mathbb{N}}$ is bounded in $\mathbb{L}^1(\Omega,\mathcal{G},\mathbb{P})$, i.e. $\sup_{k\in\mathbb{N}}\mathbb{E}\big[\mathrm{Tr}[[N^k]_\infty]\big]<\infty$. Then $(\|\mathrm{Var}([L^k,N^k])_\infty\|)_{k\in\mathbb{N}}$ is uniformly integrable.
3. STABILITY OF MARTINGALE REPRESENTATIONS
3.1. Framework and statement of the main theorem. We start by presenting and discussing the main assumptions that will be used throughout this section. Let us fix an arbitrary sequence of càdlàg $\mathbb{R}^\ell$-valued processes $(X^k)_{k\in\mathbb{N}}$ for which we assume that
$$\sup_{k\in\mathbb{N}}\ \mathbb{E}\bigg[\int_{(0,\infty)\times\mathbb{R}^\ell}|x|^2\,\mu^{X^k}(\mathrm{d}s,\mathrm{d}x)\bigg]<\infty.\qquad(3.1)$$
Then we fix an arbitrary sequence of filtrations $(\mathbb{G}^k)_{k\in\mathbb{N}}$ (recall Footnote 4) with $\mathbb{G}^k:=(\mathcal{G}^k_t)_{t\geq 0}$, for every $k\in\mathbb{N}$, on the probability space $(\Omega,\mathcal{G},\mathbb{P})$, and an arbitrary sequence of real-valued random variables $(\xi^k)_{k\in\mathbb{N}}$. The following assumptions will be in force throughout this section.
(M1) The filtration $\mathbb{G}^\infty$ is quasi-left-continuous and the process $X^\infty$ is $\mathbb{G}^\infty$-quasi-left-continuous.
(M2) The process $X^k\in\mathcal{H}^2(\mathbb{G}^k,\infty;\mathbb{R}^\ell)$, for every $k\in\mathbb{N}$. Moreover, $X^k_\infty\xrightarrow{\mathbb{L}^2(\Omega,\mathcal{G},\mathbb{P};\mathbb{R}^\ell)}X^\infty_\infty$.
(M3) The martingale $X^\infty$ possesses the $\mathbb{G}^\infty$-predictable representation property.
(M4) The filtrations converge weakly, i.e. $\mathbb{G}^k\xrightarrow{w}\mathbb{G}^\infty$.
(M5) The random variable $\xi^k\in\mathbb{L}^2(\Omega,\mathcal{G}^k_\infty,\mathbb{P};\mathbb{R})$, for every $k\in\mathbb{N}$, and $\xi^k\xrightarrow{\mathbb{L}^2(\Omega,\mathcal{G},\mathbb{P};\mathbb{R})}\xi^\infty$.
Remark 3.1. In view of Proposition 2.14, conditions (M2) and (M4) imply that $(X^k,\mathbb{G}^k)\xrightarrow{\mathrm{ext}}(X^\infty,\mathbb{G}^\infty)$.
Remark 3.2. In (M5), we have imposed an additional measurability assumption on the sequence of random variables $(\xi^k)_{k\in\mathbb{N}}$, since we require that $\xi^k$ is $\mathcal{G}^k_\infty$-measurable for any $k\in\mathbb{N}$, instead of just being $\mathcal{G}$-measurable. We could dispense with that additional assumption at the cost of a stronger hypothesis in (M4), namely that the weak convergence of the $\sigma$-algebras $\mathcal{G}^k_\infty\xrightarrow{w}\mathcal{G}^\infty_\infty$ holds in addition. To sum up, the pair (M4) and (M5) can be substituted by the following:
(M4′) The filtrations converge weakly, as well as the final $\sigma$-algebras, that is, $\mathbb{G}^k\xrightarrow{w}\mathbb{G}^\infty$ and $\mathcal{G}^k_\infty\xrightarrow{w}\mathcal{G}^\infty_\infty$.
(M5′) The sequence $(\xi^k)_{k\in\mathbb{N}}\subset\mathbb{L}^2(\Omega,\mathcal{G},\mathbb{P};\mathbb{R})$ and satisfies $\xi^k\xrightarrow{\mathbb{L}^2(\Omega,\mathcal{G},\mathbb{P};\mathbb{R})}\xi^\infty$.
In the sequel, we update our notation as follows: for $k\in\mathbb{N}$, $X^{k,c}$, respectively $X^{k,d}$, will denote the continuous part, respectively the purely discontinuous part, of the martingale $X^k$. Moreover, for $i=1,\dots,\ell$, $X^{k,c,i}$, respectively $X^{k,d,i}$, will denote the $i$-th element of the continuous part, respectively of the purely discontinuous part, of the martingale $X^k$. An $\mathbb{F}$-predictable process $A$ will be denoted by $A^{\mathbb{F}}$ whenever the coexistence of several filtrations may create confusion.
Theorem 3.3. Let conditions (M1)-(M5) hold and define the $\mathbb{G}^k$-martingales $Y^k:=\mathbb{E}[\xi^k|\mathcal{G}^k_\cdot]$, for $k\in\mathbb{N}$.
The orthogonal decomposition of $Y^\infty$ with respect to $(X^{\infty,c},\mu^{X^{\infty,d}},\mathbb{G}^\infty)$,
$$Y^\infty=Y^\infty_0+Z^\infty\cdot X^{\infty,c}+U^\infty\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)},$$
and the orthogonal decomposition of $Y^k$ with respect to $(X^{k,c},\mu^{X^{k,d}},\mathbb{G}^k)$,
$$Y^k=Y^k_0+Z^k\cdot X^{k,c}+U^k\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}+N^k,\qquad\text{for }k\in\mathbb{N},$$
satisfy the following convergences
$$\big(Y^k,\ Z^k\cdot X^{k,c}+U^k\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)},\ N^k\big)\xrightarrow{(J_1(\mathbb{R}^3),\,\mathbb{L}^2)}\big(Y^\infty,\ Z^\infty\cdot X^{\infty,c}+U^\infty\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)},\ 0\big),\qquad(3.2)$$
$$\big(\langle Y^k\rangle,\ \langle Y^k,X^k\rangle,\ \langle N^k\rangle\big)\xrightarrow{(J_1(\mathbb{R}\times\mathbb{R}^\ell\times\mathbb{R}),\,\mathbb{L}^1)}\big(\langle Y^\infty\rangle,\ \langle Y^\infty,X^\infty\rangle,\ 0\big),\qquad(3.3)$$
where $\langle Y^k,X^k\rangle:=(\langle Y^k,X^{k,1}\rangle,\dots,\langle Y^k,X^{k,\ell}\rangle)^\top$ for all $k\in\mathbb{N}$.
3.2. Examples and applications. In order to apply the above result in a concrete scenario, we need to check that Assumptions (M1)-(M5) are satisfied. The input data would then be a random variable $\xi^\infty$, a martingale $X^\infty$ and a filtration $\mathbb{G}^\infty$, which we assume satisfy (M1) and (M3). Moreover, we can construct sequences $(\xi^k)_{k\in\mathbb{N}}$ and $(X^k)_{k\in\mathbb{N}}$ such that (M2) and (M5) are also satisfied. Therefore, what remains to be shown, and is not trivial, is the weak convergence of the filtrations, i.e. (M4). The following two cases describe situations where we can easily check that this condition is satisfied.
• According to Coquet et al. [17, Proposition 2], which we have stated as Example 2.13, if $X^k$ is a martingale with independent increments that converges to $X^\infty$, and the filtrations $\mathbb{G}^k$ and $\mathbb{G}^\infty$ are the natural filtrations generated by the respective martingales, then (M4) is automatically satisfied.
• According to Coquet et al.
[14, Theorem 1], if $X^\infty$ is a càdlàg Markov process, $X^k$ is a discretization of $X^\infty$, and $\mathbb{G}^k$ and $\mathbb{G}^\infty$ are the natural filtrations generated by the respective martingales, then (M4) is again automatically satisfied.
The following two sub-sub-sections provide two corollaries of Theorem 3.3 that are relevant for applications, in particular for numerical schemes.
3.2.1. The case of processes with independent increments. In this sub-sub-section, we focus on processes with independent increments, and we are interested in convergence in law, which is the relevant convergence for numerical schemes, such as the Euler-Monte Carlo method. Convergence results for numerical schemes typically involve stochastic processes that are defined on distinct probability spaces, while the very definition of weak convergence of filtrations in (M4) requires that the processes are defined on the same space. In order to reconcile these opposing facts, we will work with the natural filtration of stochastic processes with independent increments; in this case, the weak convergence of the filtrations follows from the results of Coquet et al. [17, Proposition 2]. Then, as a corollary of the main theorem, we can show that the convergence results in (3.2) and (3.3) also hold in law.
Let us set the framework for the results that follow. Let $(X^k)_{k\in\mathbb{N}}$ be a sequence of $\mathbb{R}^\ell$-valued càdlàg processes, where $X^k$ is defined on the space $(\Omega^k,\mathcal{G}^k,\mathbb{P}^k)$ for each $k\in\mathbb{N}$, and let $(\xi^k)_{k\in\mathbb{N}}$ be a sequence of real-valued random variables, where each $\xi^k$ is defined on $(\Omega^k,\mathcal{G}^k,\mathbb{P}^k)$. Moreover, we assume that
$$\sup_{k\in\mathbb{N}}\ \mathbb{E}^k\bigg[\int_{(0,\infty)\times\mathbb{R}^\ell}|x|^2\,\mu^{X^k}(\mathrm{d}s,\mathrm{d}x)\bigg]<\infty,\qquad(3.4)$$
where by $\mathbb{E}^k[\cdot]$ we have denoted the expectation under $\mathbb{P}^k$, for every $k\in\mathbb{N}$. We will denote analogously by $\mathbb{E}^k[\cdot|\mathcal{F}_\cdot]$ the conditional expectation with respect to (an element of) a filtration $\mathbb{F}$ under the measure $\mathbb{P}^k$, for every $k\in\mathbb{N}$.
Moreover, we will denote the set of $\mathbb{R}^q$-valued, square-integrable $\mathbb{F}^{X^k}$-martingales by $\mathcal{H}^2(\mathbb{F}^{X^k},\infty;\mathbb{R}^q)$, for every $k\in\mathbb{N}$, where we have notationally suppressed the dependence on $\Omega^k$. The following assumptions will be in force throughout this sub-sub-section.
(W1) The filtration $\mathbb{F}^{X^\infty}$ is quasi-left-continuous.
(W2) $X^k\in\mathcal{H}^2(\mathbb{F}^{X^k},\infty;\mathbb{R}^\ell)$, for every $k\in\mathbb{N}$. Moreover, $X^k\xrightarrow{\mathcal{L}}X^\infty$ in $\mathbb{D}([0,\infty);\mathbb{R}^\ell)$, as well as $X^k_\infty\xrightarrow{\mathcal{L}}X^\infty_\infty$ in $\mathbb{R}^\ell$, where $(|X^k_\infty|^2)_{k\in\mathbb{N}}$ is in addition uniformly integrable.
(W3) The martingale $X^\infty$ possesses the $\mathbb{F}^{X^\infty}$-predictable representation property.
(W4) The process $X^k$ has independent increments relative to the filtration $\mathbb{F}^{X^k}$, for every $k\in\mathbb{N}$.
(W5) The random variable $\xi^k\in\mathbb{L}^2(\Omega^k,\mathcal{F}^{X^k}_\infty,\mathbb{P}^k;\mathbb{R})$, for every $k\in\mathbb{N}$, and is such that $(|\xi^k|^2)_{k\in\mathbb{N}}$ is uniformly integrable and $\xi^k\xrightarrow{\mathcal{L}}\xi^\infty$.
Corollary 3.4. Let conditions (W1)-(W5) hold and define the martingales $Y^k:=\mathbb{E}^k[\xi^k|\mathcal{F}^{X^k}_\cdot]$, for $k\in\mathbb{N}$. The orthogonal decomposition of $Y^\infty$ with respect to $(X^{\infty,c},\mu^{X^{\infty,d}},\mathbb{F}^{X^\infty})$,
$$Y^\infty=Y^\infty_0+Z^\infty\cdot X^{\infty,c}+U^\infty\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{F}^{X^\infty})},$$
and the orthogonal decomposition of $Y^k$ with respect to $(X^{k,c},\mu^{X^{k,d}},\mathbb{F}^{X^k})$,
$$Y^k=Y^k_0+Z^k\cdot X^{k,c}+U^k\star\widetilde{\mu}^{(X^{k,d},\mathbb{F}^{X^k})}+N^k,\qquad\text{for }k\in\mathbb{N},$$
satisfy the following convergences
$$\big(Y^k,\ Z^k\cdot X^{k,c}+U^k\star\widetilde{\mu}^{(X^{k,d},\mathbb{F}^{X^k})},\ N^k\big)\xrightarrow{\mathcal{L}}\big(Y^\infty,\ Z^\infty\cdot X^{\infty,c}+U^\infty\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{F}^{X^\infty})},\ 0\big),\qquad(3.5)$$
$$\big(\langle Y^k\rangle,\ \langle Y^k,X^k\rangle,\ \langle N^k\rangle\big)\xrightarrow{\mathcal{L}}\big(\langle Y^\infty\rangle,\ \langle Y^\infty,X^\infty\rangle,\ 0\big).\qquad(3.6)$$
The proof is deferred to Appendix A.4.
3.2.2. A stronger version of the main theorem.
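A minimal discretization in the spirit of this sub-sub-section can be worked out by hand; the sketch below is ours, not the paper's. Take $X^k$ a scaled symmetric $\pm 1$ random walk (a martingale with independent increments, so (W4) holds for its natural filtration) and $\xi^k=f(X^k_T)$. For a binary walk the predictable representation property holds exactly, so the orthogonal component $N^k$ vanishes and $Y^k=Y^k_0+Z^k\cdot X^k$ pathwise, which the code verifies by enumerating all paths.

```python
import numpy as np
from itertools import product

# Illustrative discretisation (our toy example): X^k is a scaled symmetric
# random walk with n steps, xi^k = f(X^k_T). On the binomial tree the
# martingale Y_j = E[f(X_T) | F_j] admits an exact predictable representation
# dY_j = Z_j dX_j, i.e. the orthogonal part N^k is identically zero.

n = 8                        # number of time steps
dx = 1.0 / np.sqrt(n)        # Donsker scaling of the increments

def f(x: float) -> float:
    return max(x, 0.0)       # terminal payoff xi^k = f(X^k_T)

def Y(j: int, x: float) -> float:
    """Backward induction: Y_j(x) = E[f(X_T) | X_j = x]."""
    if j == n:
        return f(x)
    return 0.5 * (Y(j + 1, x + dx) + Y(j + 1, x - dx))

def Z(j: int, x: float) -> float:
    """Predictable integrand: the discrete 'delta' of the tree."""
    return (Y(j + 1, x + dx) - Y(j + 1, x - dx)) / (2.0 * dx)

# Check Y_0 + sum_j Z_j dX_j = f(X_T) on every one of the 2**n paths.
errors = []
for signs in product([1.0, -1.0], repeat=n):
    x, stoch_int = 0.0, 0.0
    for j, s in enumerate(signs):
        stoch_int += Z(j, x) * (s * dx)
        x += s * dx
    errors.append(abs(Y(0, 0.0) + stoch_int - f(x)))
max_err = max(errors)
print(Y(0, 0.0), max_err)    # max_err is of the order of machine precision
```

In the notation of Corollary 3.4, this is the degenerate case $X^{k,c}=0$ and $U^k\star\widetilde{\mu}^{(X^{k,d},\mathbb{F}^{X^k})}=Z^k\cdot X^k$; the point of the corollary is precisely that such tree quantities converge in law to their Brownian counterparts as $n\to\infty$.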
In this sub-sub-section, we strengthen the main theorem in the following sense: if we assume that there exist two sequences that converge to the continuous and to the purely discontinuous part of the limiting martingale, then the angle brackets of $Y^k$ with respect to these sequences converge to the angle brackets of $Y^\infty$ with respect to the continuous and the purely discontinuous part of the limiting martingale. This framework is very useful when considering discrete-time approximations of continuous-time processes. To this end, we need to allow the Itô integrator to exhibit jumps, i.e. it should not necessarily be a continuous martingale. Therefore, we also need to generalize the notion of orthogonal decompositions which was described in Sub-sub-section 2.2.3.
Definition 3.5. Let $\mathbb{F}$ be a filtration, $(X^\circ,X^\natural)\in\mathcal{H}^2(\mathbb{F},\infty;\mathbb{R}^m)\times\mathcal{H}^{2,d}(\mathbb{F},\infty;\mathbb{R}^n)$ and $Y\in\mathcal{H}^2(\mathbb{F},\infty;\mathbb{R})$. The decomposition
$$Y=Y_0+Z\cdot X^\circ+U\star\widetilde{\mu}^{X^\natural}+N,$$
where the equality is understood componentwise, will be called the orthogonal decomposition of $Y$ with respect to $(X^\circ,X^\natural)$ if
(i) $Z\in\mathbb{H}^2(X^\circ,\mathbb{F},\infty;\mathbb{R}^m)$ and $U\in\mathbb{H}^2(\mu^{X^\natural},\mathbb{F},\infty;\mathbb{R})$;
(ii) $Z\cdot X^\circ\perp\!\!\!\perp U\star\widetilde{\mu}^{X^\natural}$;
(iii) $N\in\mathcal{H}^2(\mathbb{F},\infty;\mathbb{R})$, with $\langle N,X^\circ\rangle^{\mathbb{F}}=0$ and $M_{\mu^{X^\natural}}[\Delta N|\widetilde{\mathcal{P}}^{\mathbb{F}}]=0$.
The following results will allow us to obtain the orthogonal decomposition as understood in Definition 3.5. Their proofs can be found in [59, Appendix A]. For the statement of the following results, we fix an arbitrary filtration $\mathbb{F}$.
Lemma 3.6. Let $(X^\circ,X^\natural)\in\mathcal{H}^2(\mathbb{F},\infty;\mathbb{R}^m)\times\mathcal{H}^{2,d}(\mathbb{F},\infty;\mathbb{R}^n)$ with $M_{\mu^{X^\natural}}[\Delta X^\circ|\widetilde{\mathcal{P}}^{\mathbb{F}}]=0$, where the equality is understood componentwise. Then, for every $Y^\circ\in\mathcal{L}^2(X^\circ,\mathbb{F},\infty;\mathbb{R})$ and $Y^\natural\in\mathcal{K}^2(\mu^{X^\natural},\mathbb{F},\infty;\mathbb{R})$, we have $\langle Y^\circ,Y^\natural\rangle=0$. In particular, $\langle X^\circ,X^\natural\rangle=0$.
In view of Lemma 3.6, we can provide in the next proposition the desired orthogonal decomposition of a martingale $Y$ with respect to a pair $(X^\circ,X^\natural)\in\mathcal{H}^2(\mathbb{F},\infty;\mathbb{R}^m)\times\mathcal{H}^{2,d}(\mathbb{F},\infty;\mathbb{R}^n)$, i.e.
we do not necessarily use the pair $(X^c,X^d)$ which is naturally associated with the martingale $X$. Observe that, in this case, we do allow the first component to have jumps. This is particularly useful when one needs to decompose a discrete-time martingale as the sum of an Itô integral, a stochastic integral with respect to an integer-valued random measure, and a martingale orthogonal to the space of stochastic integrals.
Proposition 3.7. Let $(Y,X^\circ,X^\natural)\in\mathcal{H}^2(\mathbb{F},\infty;\mathbb{R})\times\mathcal{H}^2(\mathbb{F},\infty;\mathbb{R}^m)\times\mathcal{H}^{2,d}(\mathbb{F},\infty;\mathbb{R}^n)$ with $M_{\mu^{X^\natural}}[\Delta X^\circ|\widetilde{\mathcal{P}}^{\mathbb{F}}]=0$, where the equality is understood componentwise. Then, there exist a pair $(Z,U)\in\mathbb{H}^2(X^\circ,\mathbb{F},\infty;\mathbb{R}^m)\times\mathbb{H}^2(\mu^{X^\natural},\mathbb{F},\infty;\mathbb{R})$ and $N\in\mathcal{H}^2(\mathbb{F},\infty;\mathbb{R})$ such that
$$Y=Y_0+Z\cdot X^\circ+U\star\widetilde{\mu}^{X^\natural}+N,\qquad(3.7)$$
with $\langle X^\circ,N\rangle=0$ and $M_{\mu^{X^\natural}}[\Delta N|\widetilde{\mathcal{P}}^{\mathbb{F}}]=0$. Moreover, this decomposition is unique, up to indistinguishability.
In other words, the orthogonal decomposition of $Y$ with respect to the pair $(X^\circ,X^\natural)$ is well defined under the above additional assumption on the jump parts of the martingales $X^\circ$ and $X^\natural$. We conclude this subsection with some useful results. Let $X:=(X^\circ,X^\natural)\in\mathcal{H}^2(\mathbb{F},\infty;\mathbb{R}^m)\times\mathcal{H}^{2,d}(\mathbb{F},\infty;\mathbb{R}^n)$ with $M_{\mu^{X^\natural}}[\Delta X^\circ|\widetilde{\mathcal{P}}^{\mathbb{F}}]=0$. Then we define
$$\mathcal{H}^2(X^\perp,\mathbb{F},\infty;\mathbb{R}):=\big(\mathcal{L}^2(X^\circ,\mathbb{F},\infty;\mathbb{R})\oplus\mathcal{K}^2(\mu^{X^\natural},\mathbb{F},\infty;\mathbb{R})\big)^\perp.$$
Proposition 3.8. Let $X:=(X^\circ,X^\natural)\in\mathcal{H}^2(\mathbb{F},\infty;\mathbb{R}^m)\times\mathcal{H}^{2,d}(\mathbb{F},\infty;\mathbb{R}^n)$ with $M_{\mu^{X^\natural}}[\Delta X^\circ|\widetilde{\mathcal{P}}^{\mathbb{F}}]=0$. Then,
$$\mathcal{H}^2(X^\perp,\mathbb{F},\infty;\mathbb{R})=\big\{L\in\mathcal{H}^2(\mathbb{F},\infty;\mathbb{R}),\ \langle X^\circ,L\rangle^{\mathbb{F}}=0\ \text{and}\ M_{\mu^{X^\natural}}[\Delta L|\widetilde{\mathcal{P}}^{\mathbb{F}}]=0\big\}.$$
Moreover, the space $\big(\mathcal{H}^2(X^\perp,\mathbb{F},\infty;\mathbb{R}),\|\cdot\|_{\mathcal{H}^2(\mathbb{F},\infty;\mathbb{R})}\big)$ is closed.
Corollary 3.9. Let $X:=(X^\circ,X^\natural)\in\mathcal{H}^2(\mathbb{F},\infty;\mathbb{R}^m)\times\mathcal{H}^{2,d}(\mathbb{F},\infty;\mathbb{R}^n)$ with $M_{\mu^{X^\natural}}[\Delta X^\circ|\widetilde{\mathcal{P}}^{\mathbb{F}}]=0$.
Then,
$$\mathcal{H}^2(\mathbb{F},\infty;\mathbb{R})=\mathcal{L}^2(X^\circ,\mathbb{F},\infty;\mathbb{R})\oplus\mathcal{K}^2(\mu^{X^\natural},\mathbb{F},\infty;\mathbb{R})\oplus\mathcal{H}^2(X^\perp,\mathbb{F},\infty;\mathbb{R}),$$
where each of the spaces appearing in the above identity is closed.
In view of the above results, we are going to strengthen Condition (M2) to the following one:
(M2′) There is a pair $(X^{k,\circ},X^{k,\natural})\in\mathcal{H}^2(\mathbb{G}^k,\infty;\mathbb{R}^\ell)\times\mathcal{H}^{2,d}(\mathbb{G}^k,\infty;\mathbb{R}^\ell)$ with $M_{\mu^{X^{k,\natural}}}[\Delta X^{k,\circ}|\widetilde{\mathcal{P}}^{\mathbb{G}^k}]=0$, for every $k\in\mathbb{N}$, such that, in addition, $X^k=X^{k,\circ}+X^{k,\natural}$ and
$$\big(X^{k,\circ}_\infty,X^{k,\natural}_\infty\big)\xrightarrow{\mathbb{L}^2(\Omega,\mathcal{G},\mathbb{P};\mathbb{R}^{\ell\times 2})}\big(X^{\infty,c}_\infty,X^{\infty,d}_\infty\big).\qquad(3.8)$$
Corollary 3.10. Let conditions (M1), (M2′) and (M3)-(M5) hold. Then Theorem 3.3 is valid, and the convergence (3.3) can be improved into the following one
$$\big(\langle Y^k,X^{k,\circ}\rangle,\langle Y^k,X^{k,\natural}\rangle\big)\xrightarrow{(J_1(\mathbb{R}^\ell\times\mathbb{R}^\ell),\,\mathbb{L}^1)}\big(\langle Y^{\infty,c},X^{\infty,c}\rangle,\langle Y^{\infty,d},X^{\infty,d}\rangle\big),\qquad(3.9)$$
where we have defined $\langle Y^k,X^{k,\circ}\rangle^i:=\langle Y^k,X^{k,\circ,i}\rangle$, for $i=1,\dots,\ell$ and $k\in\mathbb{N}$, and analogously for the processes $\langle Y^k,X^{k,\natural}\rangle$, $k\in\mathbb{N}$, $\langle Y^\infty,X^{\infty,c}\rangle$ and $\langle Y^\infty,X^{\infty,d}\rangle$.
Proof. Theorem 3.3 obviously applies in the current framework, since the sequence $(X^{k,\circ})_{k\in\mathbb{N}}$ approximates $X^{\infty,c}$, i.e. the associated sequence of jump processes will finally vanish. By the bilinearity of the dual predictable projection, we obtain the convergence
$$\langle Y^k,X^{k,\circ}\rangle+\langle Y^k,X^{k,\natural}\rangle=\langle Y^k,X^k\rangle\xrightarrow{(J_1(\mathbb{R}^\ell),\,\mathbb{L}^1)}\langle Y^\infty,X^\infty\rangle=\langle Y^\infty,X^{\infty,c}\rangle+\langle Y^\infty,X^{\infty,d}\rangle.\qquad(3.10)$$
By convergence (3.8) and (M5), we obtain the convergence
$$\big(Y^k_\infty+X^{k,\circ,i}_\infty,\ Y^k_\infty-X^{k,\circ,i}_\infty\big)\xrightarrow{\mathbb{L}^2(\Omega,\mathcal{G},\mathbb{P};\mathbb{R}^2)}\big(Y^\infty_\infty+X^{\infty,c,i}_\infty,\ Y^\infty_\infty-X^{\infty,c,i}_\infty\big),\qquad(3.11)$$
for every $i=1,\dots,\ell$.
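For the reader's convenience, the polarisation step used at this point of the proof can be spelled out; this is the standard identity, with the convention $\langle M\rangle:=\langle M,M\rangle$, applied for each $i=1,\dots,\ell$:

```latex
\langle Y^k, X^{k,\circ,i}\rangle
  \;=\; \tfrac{1}{4}\Big(
      \big\langle Y^k + X^{k,\circ,i}\big\rangle
      \;-\;
      \big\langle Y^k - X^{k,\circ,i}\big\rangle
    \Big),
\qquad
\langle Y^\infty, X^{\infty,c,i}\rangle
  \;=\; \tfrac{1}{4}\Big(
      \big\langle Y^\infty + X^{\infty,c,i}\big\rangle
      \;-\;
      \big\langle Y^\infty - X^{\infty,c,i}\big\rangle
    \Big).
```

Indeed, expanding $\langle M\pm L\rangle=\langle M\rangle\pm 2\langle M,L\rangle+\langle L\rangle$ and subtracting leaves exactly $4\langle M,L\rangle$; this is why the convergence (3.11) of the terminal values of the sums and differences is what Theorem 2.16.(i) is applied to.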
Theorem 2.16.(i) yields that the predictable quadratic variations of the processes above also converge, and then the polarisation identity allows us to deduce that
$$\langle Y^k,X^{k,\circ}\rangle\xrightarrow{(J_1(\mathbb{R}^\ell),\,\mathbb{L}^1)}\langle Y^\infty,X^{\infty,c}\rangle=\langle Y^{\infty,c},X^{\infty,c}\rangle.$$
The statement now follows from the continuity of the angle brackets of the limiting processes, due to the quasi-left-continuity of the limiting filtration, the convergence in (3.10), and Lemma A.1. $\square$
Remark 3.11. The last corollary generalizes Madan et al. [50, Corollary 2.6]. Indeed, they consider a discrete-time process approximating a Lévy process and work with the natural filtrations, while we can deal both with discrete- and continuous-time approximations of general martingales, with arbitrary filtrations.
3.3. Comparison with the literature. In this subsection, we compare the results presented in Subsections 3.1 and 3.2 with analogous results in the existing literature, namely with Briand et al. [10, Theorem 5] and Jacod et al. [35, Theorem 3.3]. This discussion also serves as an introduction to the next section, where we will try to clarify some technical points of the proof. For the convenience of the reader, we have adapted the notation of the aforementioned articles to our notation.
We will follow the chronological order for our discussion, i.e. we will start with the comparison of Theorem 3.3 with [35, Theorem 3.3]. There, the authors consider a single filtration, i.e. $\mathbb{G}^k=\mathbb{G}^\infty$ for every $k\in\mathbb{N}$, where $\mathbb{G}^\infty$ is an arbitrary filtration. The reader should observe that, under this framework, (M4) reduces to a triviality. Additionally, since the filtration is chosen to be arbitrary, conditions (M1) and (M3) are not necessarily satisfied. Regarding the stochastic integrators, $X^{k,\circ}$ is a locally square-integrable, real-valued $\mathbb{G}^\infty$-martingale and $X^{k,\natural}=0$, for every $k\in\mathbb{N}$. Therefore, the authors deal with Kunita-Watanabe decompositions, i.e.
each $Y^k$ consists of an Itô process and its orthogonal martingale, for every $k\in\mathbb{N}$. Schematically, the convergence indicated by the solid arrow in the following scheme holds, and it is proved in [35, Theorem 3.3] that the convergence of the respective parts of the Kunita-Watanabe decompositions (indicated with dashed arrows, understood componentwise) also holds:
$$X^k\longrightarrow X^\infty,\qquad Y^k=Z^k\cdot X^k+N^k\ \dashrightarrow\ Y^\infty=Z^\infty\cdot X^\infty+N^\infty.$$
In other words, the result of [35] is more general than Theorem 3.3, in the sense that the orthogonal martingale part is non-zero, and more restrictive, in the sense that it considers a single filtration and Kunita-Watanabe decompositions.
Let us rewrite the above scheme by means of an Itô stochastic integral with respect to the continuous martingale part $X^{k,c}$, and of a stochastic integral with respect to the integer-valued measure $\mu^{X^{k,d}}$, for every $k\in\mathbb{N}$, i.e.
$$X^k\longrightarrow X^\infty,\qquad Y^k=Z^k\cdot X^{k,c}+(Z^k\,\mathrm{Id})\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^\infty)}+N^k\ \dashrightarrow\ Y^\infty=Z^\infty\cdot X^{\infty,c}+(Z^\infty\,\mathrm{Id})\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}+N^\infty.$$
Observe that we have written the purely discontinuous part of the martingale $Z^k\cdot X^k$ as $(Z^k\,\mathrm{Id})\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^\infty)}$. This follows from [34, Proposition II.1.30], in conjunction with the fact that $X^{k,d}=\mathrm{Id}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}$, for every $k\in\mathbb{N}$. Assume, moreover, that we are interested in proving a result analogous to [35, Theorem 3.3] for orthogonal decompositions, and not only for Kunita-Watanabe decompositions. Then, in view of the above scheme, we should not restrict ourselves to integrands of the form $W\,\mathrm{Id}$ for the stochastic integral with respect to an integer-valued measure, where $W$ is a predictable process which is determined by the Itô integrand. In other words, given that the convergence indicated by the solid arrow in the following scheme holds, we would like to prove that the convergence indicated by the dashed arrow also holds:
$$X^k\longrightarrow X^\infty,\qquad Y^k=Z^k\cdot X^{k,c}+U^k\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^\infty)}+N^k\ \dashrightarrow\ Y^\infty=Z^\infty\cdot X^{\infty,c}+U^\infty\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}+N^\infty.$$
However, if we were to try to follow arguments analogous to [35, Theorem 3.3] in order to prove the result described by the last scheme, we would not be able to conclude. In order to explain intuitively why, let us introduce some further notation. To this end, recall that we are under the framework of [35, Theorem 3.3], i.e. $\mathbb{G}^k=\mathbb{G}^\infty$ for every $k\in\mathbb{N}$, and fix a $k\in\mathbb{N}$. Moreover, recall from the last scheme that $N^k$ is the martingale obtained by the orthogonal decomposition of $Y^k$ with respect to $X^k$, and assume that the following orthogonal decompositions hold (for simplicity, we also assume in the following discussion that every local martingale is a square-integrable martingale; this will of course not be the case in the subsequent sections):
$$X^k=X^k_0+\gamma^k\cdot X^{\infty,c}+\beta^k\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}+L^k,\quad\text{where }\langle X^{\infty,c},L^{k,c}\rangle=0\ \text{and}\ M_{\mu^{X^{\infty,d}}}[\Delta L^k|\widetilde{\mathcal{P}}^{\mathbb{G}^\infty}]=0,$$
$$N^k=\lambda^k\cdot X^{\infty,c}+\zeta^k\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}+R^k,\quad\text{where }\langle X^{\infty,c},R^{k,c}\rangle=0\ \text{and}\ M_{\mu^{X^{\infty,d}}}[\Delta R^k|\widetilde{\mathcal{P}}^{\mathbb{G}^\infty}]=0.$$
Using arguments analogous to [35, Theorem 3.3], we would be required at some point to write the stochastic integrals $U^k\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}$ and $\zeta^k\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}$ as sums of stochastic integrals with respect to $\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}$ and $\widetilde{\mu}^{(L^{k,d},\mathbb{G}^\infty)}$. This is indeed possible when the predictable function $\beta^k$, respectively $U^k$, $\zeta^k$, is of the form $W^1\,\mathrm{Id}$, respectively $W^2\,\mathrm{Id}$, $W^3\,\mathrm{Id}$, for some predictable process $W^1$, respectively $W^2$, $W^3$. However, nothing guarantees that this is possible in the general case, which means that we cannot follow through with the proof.
Our work intends to provide a general limit theorem. Therefore, a condition which imposes a specific relationship between the approximating filtrations $\mathbb{G}^k$ and the limiting filtration $\mathbb{G}^\infty$, e.g. $\mathbb{G}^k\subset\mathbb{G}^\infty$ for every $k\in\mathbb{N}$, would seem restrictive.
On the other hand, it is well known that any right-continuous martingale can be embedded into a Brownian motion, where the filtration associated with the Brownian motion is in general larger than its natural filtration and also depends on the family of stopping times used to embed the right-continuous martingale; see [57, Theorem 11]. This may lead one to think that we can, without loss of generality, embed every element of the convergent sequence of martingales $(X^k)_{k\in\mathbb{N}\cup\{\infty\}}$ into a Brownian motion; in this case, though, the associated filtration $\mathbb{G}^\infty$ would automatically depend on the sequence of families of stopping times used. In special cases, e.g. in the embedding of a sequence of random walks into a Brownian motion, the stopping times used for the embedding are stopping times with respect to the natural filtration of said Brownian motion. Therefore, for Donsker-type approximations, it seems that one may use this assumption without loss of generality. However, in the general case, it is not clear how one can verify the weak predictable representation property of $X^\infty$ with respect to $\mathbb{G}^\infty$. Indeed, observe that if we consider only the natural filtrations, then there is no reason why the assumption $\mathbb{G}^k\subset\mathbb{G}^\infty$ should hold, and in the best case we fall back into the framework presented in our work. As one can see, the two questions posed in this comment are closely related. Returning to our initial formulation, and in view of the Jacod-Yor theorem, see [12, Theorem 18.3.6] in conjunction with [12, Theorem 14.5.7], the case of a sequence $(X^k)_{k\in\mathbb{N}\cup\{\infty\}}$ of additive martingales with respect to their natural filtrations provides a concrete example.
We proceed now to the comparison of Corollary 3.10 with [10, Theorem 5]. The authors of [10] consider the Brownian motion case on a finite and deterministic time interval $[0,T]$.
Therefore, in order to translate the finite time horizon into the positive real half-line, we will assume in the following that the processes are indexed by $\mathbb{R}_+$ but are constant on the time interval $[T,\infty)$. More precisely, and by means of the notation we have introduced, $X^\infty$ is a real-valued $\mathbb{F}^{X^\infty}$-Brownian motion which is approximated by the sequence $(X^{k,\circ})_{k\in\mathbb{N}}$ under the locally uniform topology in $\mathbb{L}^2$-mean, where $X^{k,\circ}$ is a square-integrable $\mathbb{F}^{X^k}$-martingale for every $k\in\mathbb{N}$. Obviously, $X^{k,\natural}=0$ for every $k\in\mathbb{N}$. The reader may recall that the convergence $X^{k,\circ}\xrightarrow{(\mathrm{lu},\,\mathbb{L}^2)}X^\infty$ is equivalent to $X^{k,\circ}\xrightarrow{(J_1(\mathbb{R}),\,\mathbb{L}^2)}X^\infty$, due to the continuity of the limit $X^\infty$. In view of the aforementioned convergence, and due to the special role of the time $T$, we have that
$$X^{k,\circ}_\infty=X^{k,\circ}_T\xrightarrow{\mathbb{L}^2(\Omega,\mathcal{G},\mathbb{P};\mathbb{R})}X^\infty_T=X^\infty_\infty.$$
Up to this point, we can verify that conditions (M1), (M2′) and (M3) are satisfied. Condition (M4) is guaranteed by [10, Proposition 3], while condition (M5) is [10, Assumption (H2)]. It is now immediate that the framework of [10, Theorem 5] is stronger than the one we have described for Corollary 3.10. Moreover, Briand, Delyon and Mémin consider the Kunita-Watanabe decomposition of the martingale $Y^k$ with respect to $X^{k,\circ}$, for every $k\in\mathbb{N}$, i.e. the stochastic integral is an Itô integral. Schematically, the convergence indicated by the solid arrow in the following scheme holds, and it is proved in [10, Theorem 5] that the convergence of the respective parts of the Kunita-Watanabe decompositions (indicated with a dashed arrow, understood componentwise) also holds:
$$X^{k,\circ}\longrightarrow X^\infty,\qquad Y^k=Z^k\cdot X^{k,\circ}+N^k\ \dashrightarrow\ Y^\infty=Z^\infty\cdot X^\infty+0.$$
Therefore, we can identify [10, Theorem 5] as a special case of Corollary 3.10. Let us also briefly describe the proof technique that the authors followed in [10]. We will do so because we are going to follow the same technique, mutatis mutandis, in order to prove Theorem 3.3.
A sufficient condition to conclude the required result is to prove that an arbitrary weak-limit point of the sequence $(N^k)_{k\in\mathbb{N}}$, say $N$, is orthogonal to $X^\infty$, i.e. $[X^\infty,N]$ is an $\mathbb{F}^{(X^\infty,N)^\top}$-martingale, or equivalently
$$\langle X^\infty,N\rangle^{\mathbb{F}^{(X^\infty,N)^\top}}=0.\qquad(3.12)$$
The reader should keep in mind that the predictable quadratic variation of $[X^\infty,N]$ is determined with respect to the filtration $\mathbb{F}^{(X^\infty,N)^\top}$, and not merely the filtration $\mathbb{F}^{X^\infty}$. The reason is that the $\mathbb{F}^{X^\infty}$-measurability of $N$ cannot be a priori guaranteed. Let us now briefly argue why (3.12) is a sufficient condition. We start by observing that, by definition, we have $\mathbb{F}^{X^\infty}\subset\mathbb{F}^{(X^\infty,N)^\top}$, and therefore $\mathcal{P}^{\mathbb{F}^{X^\infty}}\subset\mathcal{P}^{\mathbb{F}^{(X^\infty,N)^\top}}$. Hence, using the well-known property (see [34, Theorem I.4.40.d)])
$$H\cdot\langle X^\infty,N\rangle^{\mathbb{F}^{(X^\infty,N)^\top}}=\langle H\cdot X^\infty,N\rangle^{\mathbb{F}^{(X^\infty,N)^\top}},\quad\text{for }H\in\mathbb{H}^2\big(X^\infty,\mathbb{F}^{(X^\infty,N)^\top},T;\mathbb{R}\big),\qquad(3.13)$$
the authors can conclude, in particular, that $N\perp\!\!\!\perp L$, for every $L\in\mathcal{L}^2(X^\infty,\mathbb{F}^{X^\infty},T;\mathbb{R})$. For the equivalent forms of orthogonality, see [34, Proposition I.4.15]. Assume now that $(N^{k_l})_{l\in\mathbb{N}}$ is the subsequence that approximates $N$, i.e. $N^{k_l}\xrightarrow[l\to\infty]{\mathcal{L}}N$. Using condition (3.13), the fact that $Y^\infty$ can be written in the form $Y^\infty=Z^\infty\cdot X^\infty$ for some $Z^\infty\in\mathbb{H}^2(X^\infty,\mathbb{F}^{X^\infty},T;\mathbb{R})$, and recalling that $X^\infty$ possesses the $\mathbb{F}^{X^\infty}$-predictable representation property, Briand, Delyon and Mémin can conclude that $\langle N^{k_l}\rangle\xrightarrow{(\mathrm{lu},\,\mathbb{L}^1)}0$, and consequently $N=0$. Since the last convergence holds for every weak-limit point of the sequence $(N^k)_{k\in\mathbb{N}}$, and due to the sequential compactness of $(N^k)_{k\in\mathbb{N}}$ (recall that $(\mathcal{C}([0,T];\mathbb{R}),d_{\mathrm{lu}})$ is Polish), the authors can conclude that $N^k\xrightarrow{(\mathrm{lu},\,\mathbb{L}^2)}0$. The convergence $Z^k\cdot X^{k,\circ}\xrightarrow{(\mathrm{lu},\,\mathbb{L}^2)}Z^\infty\cdot X^\infty$ then follows automatically.
After this discussion, the reader may have already wondered what difficulties arise if we do not impose (M3).
In order to briefly explain the issue, let us omit (M3) from the set of assumptions, i.e. $N^\infty=0$ in Convergence (3.2). Then, recalling the arguments above, the sequence $(N^k)_{k\in\mathbb{N}}$ is tight; therefore, in general, the set $\{N: N\text{ is a weak-limit point of }(N^k)_{k\in\mathbb{N}}\}$ need not be a singleton, and we can approximate every element along a subsequence. Recall also that an arbitrary weak limit of $(N^k)_{k\in\mathbb{N}}$ is not necessarily $\mathbb{G}^\infty$-adapted. Then the best we can say about the relationship between $N^\infty$ and an arbitrary weak-limit point $N$ is that the $\mathbb{G}^\infty$-optional projection of $N$ is indistinguishable from $N^\infty$. Despite our best efforts, we did not manage to improve this result, which led us to assume that (M3) holds.
We close this subsection with a short discussion of the set of conditions (M1)-(M5). As we have already stated in Section 2.3, Theorems 2.15 and 2.16 (which are, respectively, [52, Theorem 11, Corollary 12] restated in the multidimensional case) constitute the cornerstones of our work. The former will be used in order to construct convergent sequences of martingales, while the latter will guarantee that the respective sequences of dual predictable quadratic variations are also convergent. Therefore, it is natural for our results to be built on the framework of the aforementioned results. Indeed, conditions (M1), (M2), (M4) and (M5) are those which guarantee that $(X^k,Y^k,\mathbb{G}^k)\xrightarrow{\mathrm{ext}}(X^\infty,Y^\infty,\mathbb{G}^\infty)$; recall Remark 3.1. Condition (M3) will be needed in order to characterise the arbitrary weak limit of the sequence $(N^k)_{k\in\mathbb{N}}$; for more details see Section 3.8.
3.4. Outline of the proof. In this subsection, we present the strategy and an overview of the main arguments used to prove Theorem 3.3, in order to ease the understanding of the technical parts that follow.
If such a property could be proved, then, using the fact that $N$ is orthogonal to every element of $\mathcal{L}^2(X^{\infty,c},\mathbb{G}^\infty,\infty;\mathbb{R})\oplus\mathcal{K}^2(\mu^{X^{\infty,d}},\mathbb{G}^\infty,\infty;\mathbb{R})$, we could easily conclude that $N=N^\infty$, by the uniqueness of the orthogonal representation.
The first part of the statement amounts to showing the following convergences, understood term by term:
$$Y^k=Y^k_0+Z^k\cdot X^{k,c}+U^k\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}+N^k\ \dashrightarrow\ Y^\infty=Y^\infty_0+Z^\infty\cdot X^{\infty,c}+U^\infty\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}+0. \qquad(3.14)$$
The definition of $Y^k$ as the optional projection of $\xi^k$ with respect to $\mathbb{G}^k$, for $k\in\mathbb{N}$, together with Assumptions (M4) and (M5) and Proposition 2.14, yields directly that $Y^k\xrightarrow{(\mathrm{J}_1(\mathbb{R}),\mathbb{P})}Y^\infty$. In particular, $Y^k_0\xrightarrow{\mathbb{P}}Y^\infty_0$. Thus, the sum on the right-hand side of (3.14) converges, and we can conclude if we show that $N^k\xrightarrow{(\mathrm{J}_1(\mathbb{R}),\mathbb{P})}0$.
Step 1: $N^k\xrightarrow{(\mathrm{J}_1(\mathbb{R}),\mathbb{P})}0$. We will show that $\langle N^k\rangle_\infty\xrightarrow{\mathbb{L}^1}0$, which is equivalent to $\langle N^k\rangle\xrightarrow{(\mathrm{J}_1(\mathbb{R}),\mathbb{P})}0$. The integrability of the sequence $(\langle N^k\rangle_\infty)_{k\in\mathbb{N}}$, in conjunction with Doob's maximal inequality, then allows us to deduce that $N^k\xrightarrow{(\mathrm{J}_1(\mathbb{R}),\mathbb{P})}0$.
Step 2: $\langle N^k\rangle_\infty\xrightarrow{\mathbb{L}^1}0$. We cannot show the statement directly, since we do not even know whether $(N^k)_{k\in\mathbb{N}}$ has a well-defined limit. Notice, however, that $\langle N^k\rangle=\langle Y^k,N^k\rangle$ for every $k\in\mathbb{N}$, by the orthogonality in the martingale representation. We will thus show instead that $\langle Y^\infty,N\rangle^{\mathbb{F}}=0$, where $N$ is a weak-limit point of $(N^k)_{k\in\mathbb{N}}$, $\mathbb{F}$ is a filtration such that $\mathcal{G}^\infty_t\subset\mathcal{F}_t$ for every $t\in[0,\infty)$, and $\langle Y^\infty,N\rangle^{\mathbb{F}}$ is the dual $\mathbb{F}$-predictable projection of $[Y^\infty,N]$. This is equivalent to proving that $[Y^\infty,N]$ is an $\mathbb{F}$-martingale, and a sufficient condition for the latter is the following:
$$\langle X^{\infty,c},N^c\rangle^{\mathbb{F}}=0,\quad M_{\mu^{X^\infty}}[\Delta N\,|\,\widetilde{\mathcal{P}}^{\mathbb{F}}]=0,\quad\text{and } N\text{ is an }\mathbb{F}\text{-martingale}. \qquad(3.15)$$
Hence, having shown that $N=0$ for every weak-limit point $N$ of $(N^k)_{k\in\mathbb{N}}$, we can establish a posteriori the required convergence.
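The reduction in Step 1 rests on Doob's maximal inequality; spelled out, it is the following routine computation (the same estimate reappears as (3.50) below):

```latex
\mathbb{E}\Big[\sup_{t\in[0,\infty]} |N^k_t|^2\Big]
  \;\overset{\text{Doob}}{\le}\; 4\,\mathbb{E}\big[|N^k_\infty|^2\big]
  \;=\; 4\,\mathbb{E}\big[\langle N^k\rangle_\infty\big]
  \;\xrightarrow[k\to\infty]{}\; 0,
```

so the $\mathbb{L}^1$-convergence of $\langle N^k\rangle_\infty$ upgrades to uniform convergence of $N^k$ in $\mathbb{L}^2$, which is stronger than $N^k\to0$ in $(\mathrm{J}_1(\mathbb{R}),\mathbb{P})$.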
Step 3: A sufficient condition for (3.15). This amounts to showing that
$$[X^\infty,N]\ \text{and}\ \Big[\int_0^\cdot\int_{\mathbb{R}^\ell}h(x)\mathbf{1}_I(x)\,\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}(\mathrm{d}s,\mathrm{d}x),\,N\Big]\ \text{are uniformly integrable }\mathbb{F}\text{-martingales}, \qquad(3.16)$$
for a suitable positive and deterministic function $h$ and a suitable family of sets $\mathcal{I}$.
Step 4: A sufficient condition for (3.16). This amounts to proving convergence (3.17) below. We have $N^{k_l}\xrightarrow{\mathcal{L}}N$ and $X^k\xrightarrow{\mathcal{L}}X^\infty$, while both sequences possess the P-UT property; see [34, Definition VI.6.1]. Hence we can conclude that $[X^{k_l},N^{k_l}]\xrightarrow{\mathcal{L}}[X^\infty,N]$, and $[X^\infty,N]$ is a uniformly integrable martingale as the limit of uniformly integrable martingales. Then we also need to show that
$$\int_0^\cdot\int_{\mathbb{R}^\ell}h(x)\mathbf{1}_I(x)\,\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}(\mathrm{d}s,\mathrm{d}x)\xrightarrow{(\mathrm{J}_1(\mathbb{R}),\mathcal{L})}\int_0^\cdot\int_{\mathbb{R}^\ell}h(x)\mathbf{1}_I(x)\,\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}(\mathrm{d}s,\mathrm{d}x), \qquad(3.17)$$
again for a suitable deterministic and positive function $h$ and a suitable family of sets $\mathcal{I}$, in order to obtain the convergence
$$\Big[\int_0^\cdot\int_{\mathbb{R}^\ell}h(x)\mathbf{1}_I(x)\,\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}(\mathrm{d}s,\mathrm{d}x),\,N^k\Big]\xrightarrow{\mathcal{L}}\Big[\int_0^\cdot\int_{\mathbb{R}^\ell}h(x)\mathbf{1}_I(x)\,\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}(\mathrm{d}s,\mathrm{d}x),\,N\Big].$$
3.5. Step 4 is valid for $\mathcal{J}(X^\infty)$. This subsection is devoted to proving that (3.17) is true for a family $\mathcal{J}$ of open subsets of $\mathbb{R}^\ell$. Throughout this section, we will consider $X^k$ to be a $\mathbb{G}^k$-martingale, for every $k\in\mathbb{N}$. In particular, its jump process is given by (2.1). Before we proceed, let us introduce some notation that will be used throughout the rest of Section 3. For a fixed $i\in\{1,\ldots,\ell\}$, following the notation used in Appendix A.1, we introduce the sets
$$W(X^{\infty,i}(\omega)):=\{u\in\mathbb{R}\setminus\{0\},\ \exists\,t>0\text{ with }\Delta X^{\infty,d,i}_t(\omega)=u\},$$
$$V(X^{\infty,i}):=\{u\in\mathbb{R}\setminus\{0\},\ \mathbb{P}\big(\Delta X^{\infty,d,i}_t=u,\text{ for some }t>0\big)>0\},$$
$$\mathcal{I}(X^{\infty,i}):=\{(v,w)\subset\mathbb{R}\setminus\{0\},\ vw>0\text{ and }v,w\notin V(X^{\infty,i})\}.$$
By [34, Lemma VI.3.12], the set $V(X^{\infty,i})$ is at most countable, for every $i=1,\ldots,\ell$.
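The role of the endpoint condition $v,w\notin V(X^{\infty,i})$ can be illustrated with a toy numerical example (hypothetical, ours, not from the paper): when an interval endpoint coincides with a limiting jump size, the jump-counting functional fails to converge, while intervals whose endpoints avoid the charged jump sizes behave continuously.

```python
# Toy example: the functional below is X^{k,g,I}_T with g = 1, i.e. it counts
# the jumps whose size falls in the open interval I = (v, w).

def jump_functional(jump_sizes, interval):
    v, w = interval
    return sum(1 for u in jump_sizes if v < u < w)

limit_jumps = [1.0]                                          # X^infty: one jump of size 1
approximations = [[1.0 + 1.0 / k] for k in (10, 100, 1000)]  # X^k: jump of size 1 + 1/k

good = (0.9, 1.25)  # endpoints avoid the limiting jump size 1.0
bad = (1.0, 1.5)    # left endpoint EQUALS the limiting jump size

for jumps in approximations:
    # Good interval: approximations and limit agree.
    assert jump_functional(jumps, good) == jump_functional(limit_jumps, good) == 1
    # Bad interval: the functional along the approximations (value 1) does not
    # converge to the functional of the limit (value 0); continuity fails.
    assert jump_functional(jumps, bad) == 1
    assert jump_functional(limit_jumps, bad) == 0
```

Since $V(X^{\infty,i})$ is at most countable, such bad endpoints can always be avoided, which is exactly why $\mathcal{I}(X^{\infty,i})$ is rich enough.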
Moreover, we define
$$\mathcal{J}(X^\infty):=\Big\{\prod_{i=1}^\ell I_i,\ \text{where }I_i\in\mathcal{I}(X^{\infty,i})\cup\{\mathbb{R}\}\text{ for every }i=1,\ldots,\ell\Big\}\setminus\{\mathbb{R}^\ell\},$$
and, for every $I:=I_1\times\cdots\times I_\ell\in\mathcal{J}(X^\infty)$, we set $J_I:=\{i\in\{1,\ldots,\ell\},\ I_i\neq\mathbb{R}\}\neq\emptyset$. Let $k\in\mathbb{N}$, $I:=I_1\times\cdots\times I_\ell\in\mathcal{J}(X^\infty)$ and $g:\Omega\times\mathbb{R}^\ell\to\mathbb{R}$. Then we define the $\mathbb{R}^{\ell+1}$-valued process $\widehat{X}^k[g,I]:=\big((X^k)^\top,X^{k,g,I}\big)^\top$, where
$$X^{k,g,I}_\cdot(\omega):=\sum_{0<s\le\cdot}g\big(\omega,\Delta X^{k,d}_s(\omega)\big)\,\mathbf{1}_I\big(\Delta X^{k,d}_s(\omega)\big).$$
Proposition 3.12. Let condition (M2) hold. Fix an $I\in\mathcal{J}(X^\infty)$ and a function $g:(\Omega\times\mathbb{R}^\ell,\mathcal{G}\otimes\mathcal{B}(\mathbb{R}^\ell))\to(\mathbb{R},\mathcal{B}(\mathbb{R}))$ such that there exists $\Omega_C\subset\Omega$ with $\mathbb{P}(\Omega_C)=1$, and $\mathbb{R}^\ell\ni x\mapsto g(\omega,x)\in\mathbb{R}$ is continuous on $C(\omega)$ for every $\omega\in\Omega_C$, where
$$C(\omega):=\prod_{i=1}^\ell A_i(\omega),\quad\text{with }A_i(\omega):=\begin{cases}W(X^{\infty,i}(\omega)),&\text{if }i\in J_I,\\ W(X^{\infty,i}(\omega))\cup\{0\},&\text{if }i\in\{1,\ldots,\ell\}\setminus J_I.\end{cases}$$
Then it holds that $\widehat{X}^k[g,I]\xrightarrow[k\to\infty]{(\mathrm{J}_1(\mathbb{R}^{\ell+1}),\mathbb{P})}\widehat{X}^\infty[g,I]$.
Proof. Let us fix an $I:=I_1\times\cdots\times I_\ell\in\mathcal{J}(X^\infty)$. Since the space $(\mathbb{D}(\mathbb{R}^{\ell+1}),d_{\mathrm{J}_1(\mathbb{R}^{\ell+1})})$ is Polish, by Dudley [23, Theorem 9.2.1] it is sufficient to prove that for every subsequence $(\widehat{X}^{k_l}[g,I])_{l\in\mathbb{N}}$ there exists a further subsequence $(\widehat{X}^{k_{l_m}}[g,I])_{m\in\mathbb{N}}$ for which
$$\widehat{X}^{k_{l_m}}[g,I]\xrightarrow[m\to\infty]{\mathrm{J}_1(\mathbb{R}^{\ell+1})}\widehat{X}^\infty[g,I],\quad\mathbb{P}\text{-a.s.} \qquad(3.18)$$
Let $(\widehat{X}^{k_l}[g,I])_{l\in\mathbb{N}}$ be fixed hereinafter. For every $i\in J_I$, since $I_i\in\mathcal{I}(X^{\infty,i})$, there exists $\Omega_{I_i}\subset\Omega$ such that
(i) $\mathbb{P}(\Omega_{I_i})=1$,
(ii) $\Delta X^{\infty,d,i}_t(\omega)\notin\partial I_i$, for every $t\in\mathbb{R}_+$ and $\omega\in\Omega_{I_i}$,
where $\partial A$ denotes the $|\cdot|$-boundary of the set $A$. Condition (M2) implies the convergence $X^k\xrightarrow{(\mathrm{J}_1(\mathbb{R}^\ell),\mathbb{P})}X^\infty$. Hence, for the subsequence $(X^{k_l})_{l\in\mathbb{N}}$, there is a further subsequence $(X^{k_{l_m}})_{m\in\mathbb{N}}$ for which
$$X^{k_{l_m}}\xrightarrow{\mathrm{J}_1(\mathbb{R}^\ell)}X^\infty,\quad\mathbb{P}\text{-a.s.} \qquad(3.19)$$
Let $\Omega_{\mathrm{sub}}\subset\Omega$ be such that $\mathbb{P}(\Omega_{\mathrm{sub}})=1$ and such that the convergence (3.19) holds for every $\omega\in\Omega_{\mathrm{sub}}$.
Define $\Omega^{J_I}_{\mathrm{sub}}:=\Omega_{\mathrm{sub}}\cap\big(\cap_{i\in J_I}\Omega_{I_i}\big)$. Then
(i′) $\mathbb{P}(\Omega^{J_I}_{\mathrm{sub}})=1$,
(ii′) $\Delta X^{\infty,d,i}_t(\omega)\notin\partial I_i$, for every $t\in\mathbb{R}_+$, every $i\in J_I$ and every $\omega\in\Omega^{J_I}_{\mathrm{sub}}$.
By the last property, we can conclude that $\partial I_i\cap W(X^{\infty,i}(\omega))=\emptyset$, for every $\omega\in\Omega^{J_I}_{\mathrm{sub}}$ and $i\in J_I$. Therefore, the function $\mathbb{R}^\ell\ni x\mapsto g(\omega,x)\mathbf{1}_I(x)$ is continuous on the set $C(\omega)$ for $\mathbb{P}$-almost every $\omega\in\Omega^{J_I}_{\mathrm{sub}}$. We can now conclude once we apply, for each $\omega\in\Omega^{J_I}_{\mathrm{sub}}$, Proposition A.8 to the sequence $(X^{k_{l_m}}(\omega))_{m\in\mathbb{N}}$ and the function $\mathbb{R}^\ell\ni x\mapsto g(\omega,x)\in\mathbb{R}$. This gives us (3.18). $\Box$
In order to abridge the notation in the following results, we introduce, for any $p\in\mathbb{R}_+$, the continuous function
$$\mathbb{R}^\ell\ni x\mapsto\mathrm{R}_p(x):=\sum_{i=1}^\ell(|x_i|\wedge1)^p\in\mathbb{R},\quad\text{where }a\wedge b:=\min\{a,b\}\text{ for }a,b\in\mathbb{R}.$$
Corollary 3.13. Let condition (M2) hold. Then, for every $I\in\mathcal{J}(X^\infty)$ it holds that $\widehat{X}^k[\mathrm{R}_p,I]\xrightarrow{(\mathrm{J}_1(\mathbb{R}^{\ell+1}),\mathbb{P})}\widehat{X}^\infty[\mathrm{R}_p,I]$, for every $p\in\mathbb{R}_+$.
Proof. Let $p\in\mathbb{R}_+$. It suffices to apply the above proposition to the function $\mathrm{R}_p$, which is continuous. $\Box$
Let us now provide more details on the strategy of the proof for this step. Proposition 3.18 below is the most important result of this subsection, since it provides us with a rich enough family of convergent martingale sequences. It is this family that we are going to use in order to show convergence (3.17). To this end, we are going to apply Theorem 2.15 to the sequence $(X^{k,\mathrm{R}_2,I})_{k\in\mathbb{N}}$, where $I\in\mathcal{J}(X^\infty)$. However, all of this requires making sure that this sequence indeed satisfies the requirements of the aforementioned theorem, for every $I\in\mathcal{J}(X^\infty)$. This is the subject of the remainder of this subsection.
Let us fix $I=\prod_{i=1}^\ell I_i\in\mathcal{J}(X^\infty)$, so that the set $J_I$ is a fixed non-empty subset of $\{1,\ldots,\ell\}$. Moreover, we define $\mathbb{R}^\ell\ni x\mapsto\mathrm{R}_{2,A}(x):=\mathrm{R}_2(x)\mathbf{1}_A(x)$, for every $A\subset\mathbb{R}^\ell$.
Lemma 3.14.
The process $X^{k,\mathrm{R}_2,I}$ is a $\mathbb{G}^k$-special semimartingale, for every $k\in\mathbb{N}$. In particular, its $\mathbb{G}^k$-canonical decomposition is given by
$$X^{k,\mathrm{R}_2,I}_\cdot=\int_0^\cdot\int_{\mathbb{R}^\ell}\mathrm{R}_{2,I}(x)\,\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}(\mathrm{d}s,\mathrm{d}x)+\int_{(0,\cdot]\times I}\mathrm{R}_2(x)\,\nu^{(X^{k,d},\mathbb{G}^k)}(\mathrm{d}s,\mathrm{d}x),$$
or equivalently $\mathrm{R}_{2,I}*\mu^{X^{k,d}}=\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}+\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)}$.
Proof. Let $k\in\mathbb{N}$. Observe that by construction the process $X^{k,\mathrm{R}_2,I}$ is $\mathbb{G}^k$-adapted and càdlàg. The function $\mathrm{R}_{2,I}$ is positive, hence the process $X^{k,\mathrm{R}_2,I}$ is a $\mathbb{G}^k$-submartingale of finite variation, as its paths are $\mathbb{P}$-a.s. non-decreasing. Before we proceed, we need to show the integrability of $\mathrm{R}_{2,I}*\mu^{X^{k,d}}_\infty$ in order to make use of [34, Proposition II.1.28]. But we have
$$\mathbb{E}\Big[\int_{(0,\infty)\times\mathbb{R}^\ell}\mathrm{R}_{2,I}(x)\,\mu^{X^{k,d}}(\mathrm{d}s,\mathrm{d}x)\Big]\le\mathbb{E}\Big[\int_{(0,\infty)\times\mathbb{R}^\ell}\sum_{i=1}^\ell|x_i|^2\,\mu^{X^{k,d}}(\mathrm{d}s,\mathrm{d}x)\Big]\le\mathbb{E}\Big[\int_{(0,\infty)\times\mathbb{R}^\ell}|x|^2\,\mu^{X^k}(\mathrm{d}s,\mathrm{d}x)\Big]\overset{(3.1)}{<}\infty, \qquad(3.20)$$
which also yields that $X^{k,\mathrm{R}_2,I}$ is special, by [34, Proposition I.4.23.(iv)]. Moreover, by [34, Theorem II.1.8] and condition (3.1) we obtain
$$\mathbb{E}\Big[\int_{(0,\infty)\times\mathbb{R}^\ell}\mathrm{R}_{2,I}(x)\,\nu^{(X^{k,d},\mathbb{G}^k)}(\mathrm{d}s,\mathrm{d}x)\Big]<\infty,\quad\text{for every }k\in\mathbb{N}. \qquad(3.21)$$
Therefore, we have $X^{k,\mathrm{R}_2,I}_\cdot=\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}_\cdot+\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)}_\cdot$, which is the announced decomposition. $\Box$
Lemma 3.15. (i) The sequence $\big(\mathrm{Tr}[[X^k]_\infty]\big)_{k\in\mathbb{N}}$ is uniformly integrable.
(ii) The sequence $\big(X^{k,\mathrm{R}_2,I}_\infty\big)_{k\in\mathbb{N}}$ is uniformly integrable.
(iii) The sequence $\big([X^{k,\mathrm{R}_2,I}]_\infty\big)_{k\in\mathbb{N}}$ is uniformly integrable.
Proof. (i) In view of conditions (M1), (M2) and (M4), we have from Theorem 2.16.(ii) that the sequence $([X^{k,i}]_\infty)_{k\in\mathbb{N}}$ is uniformly integrable. By [31, Corollary 1.10] we can conclude.
(ii) Using the definitions of $\mathrm{R}_2$ and $X^{k,\mathrm{R}_2,I}$, we get
$$X^{k,\mathrm{R}_2,I}_\infty=\int_{(0,\infty)\times\mathbb{R}^\ell}\mathrm{R}_{2,I}(x)\,\mu^{X^{k,d}}(\mathrm{d}s,\mathrm{d}x)\le\int_{(0,\infty)\times\mathbb{R}^\ell}|x|^2\,\mu^{X^{k,d}}(\mathrm{d}s,\mathrm{d}x)=\mathrm{Tr}\big[[X^{k,d}]_\infty\big]\le\mathrm{Tr}\big[[X^k]_\infty\big].$$
Hence, from (i) and [31, Theorem 1.7] we can conclude.
(iii) By Lemma 3.14, the process $X^{k,\mathrm{R}_2,I}$ is a $\mathbb{G}^k$-special semimartingale for every $k\in\mathbb{N}$, whose martingale part is purely discontinuous. Therefore, we have by [34, Theorem I.4.52] that
$$[X^{k,\mathrm{R}_2,I}]_\infty=\sum_{s>0}\big|\mathrm{R}_2\big(\Delta X^{k,d}_s\big)\big|^2\mathbf{1}_I\big(\Delta X^{k,d}_s\big)\le\sum_{s>0}\Big(\sum_{i=1}^\ell\big(|\Delta X^{k,d,i}_s|\wedge1\big)^2\Big)^2\le\ell\sum_{s>0}\sum_{i=1}^\ell\big(|\Delta X^{k,d,i}_s|\wedge1\big)^4\le\ell\sum_{s>0}\sum_{i=1}^\ell\big(\Delta X^{k,d,i}_s\big)^2=\ell\,\mathrm{Tr}\big[[X^{k,d}]_\infty\big]\le\ell\,\mathrm{Tr}\big[[X^k]_\infty\big],$$
where the second inequality follows from the Cauchy–Schwarz inequality. Thus, using (i) and [31, Theorem 1.7] again, we have the required result. $\Box$
Lemma 3.16. (i) The sequence $\big(\mathrm{Var}(\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)})_\infty\big)_{k\in\mathbb{N}}$ is tight in $(\mathbb{R},|\cdot|)$.
(ii) The sequence $\big(\sum_{s>0}\big(\Delta(\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)})_s\big)^2\big)_{k\in\mathbb{N}}$ is uniformly integrable.
Proof. (i) We have already observed that $X^{k,\mathrm{R}_2,I}$ is a $\mathbb{G}^k$-submartingale for every $k\in\mathbb{N}$, and consequently $\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)}$ is non-decreasing for every $k\in\mathbb{N}$; this property is also immediate since $\mathrm{R}_{2,I}$ is a positive function and $\nu^{(X^{k,d},\mathbb{G}^k)}$ is a (positive) measure. Therefore, it holds that $\mathrm{Var}\big(\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)}\big)=\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)}$, for every $k\in\mathbb{N}$.
In view of the above and due to Markov's inequality, it suffices to prove that $\sup_{k\in\mathbb{N}}\mathbb{E}[\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)}_\infty]<\infty$. Indeed, for every $\varepsilon>0$, setting $K:=\varepsilon^{-1}\sup_{k\in\mathbb{N}}\mathbb{E}[\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)}_\infty]>0$, it holds that
$$\sup_{k\in\mathbb{N}}\mathbb{P}\big[\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)}_\infty>K\big]\le\frac{1}{K}\sup_{k\in\mathbb{N}}\mathbb{E}\big[\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)}_\infty\big]\le\varepsilon,$$
which yields the required tightness. Now, observe that we have
$$\mathbb{E}\big[\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)}_\infty\big]\overset{\text{[34, Prop. II.1.28]}}{=}\mathbb{E}\big[\mathrm{R}_{2,I}*\mu^{X^{k,d}}_\infty\big]\overset{(3.20)}{<}\infty. \qquad(3.22)$$
We have concluded using inequality (3.20), which in turn makes use of Assumption (3.1). Therefore, (3.22) yields that $\sup_{k\in\mathbb{N}}\mathbb{E}[\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)}_\infty]<\infty$.
(ii) We have, for every $k\in\mathbb{N}$, that the following holds:
$$\sum_{s>0}\big(\Delta(\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)})_s\big)^2=\sum_{s>0}\Big(\int_{\mathbb{R}^\ell}\mathrm{R}_{2,I}(x)\,\nu^{(X^{k,d},\mathbb{G}^k)}(\{s\}\times\mathrm{d}x)\Big)^2\overset{\text{Jensen}}{\le}\sum_{s>0}\int_{\mathbb{R}^\ell}\mathrm{R}_{2,I}^2(x)\,\nu^{(X^{k,d},\mathbb{G}^k)}(\{s\}\times\mathrm{d}x)\le\int_{(0,\infty)\times\mathbb{R}^\ell}\mathrm{R}_{2,I}^2(x)\,\nu^{(X^{k,d},\mathbb{G}^k)}(\mathrm{d}s,\mathrm{d}x)$$
$$=\int_{(0,\infty)\times I}\Big(\sum_{i=1}^\ell(|x_i|\wedge1)^2\Big)^2\,\nu^{(X^{k,d},\mathbb{G}^k)}(\mathrm{d}s,\mathrm{d}x)\le\ell\int_{(0,\infty)\times I}\sum_{i=1}^\ell(|x_i|\wedge1)^2\,\nu^{(X^{k,d},\mathbb{G}^k)}(\mathrm{d}s,\mathrm{d}x)=\ell\,\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)}_\infty. \qquad(3.23)$$
Using Lemma 3.15.(ii) and the first lemma in Meyer [53, p. 770], there exists a moderate Young function $\Phi$ such that $\sup_{k\in\mathbb{N}}\mathbb{E}[\Phi(X^{k,\mathrm{R}_2,I}_\infty)]<\infty$. Then, using that $X^{k,\mathrm{R}_2,I}$ is an increasing process, and is thus equal to its supremum process, the decomposition of Lemma 3.14, and applying Lenglart, Lépingle and Pratelli [45, Théorème 3.2.1], we can conclude that $\sup_{k\in\mathbb{N}}\mathbb{E}\big[\Phi\big(\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)}_\infty\big)\big]<\infty$. By de La Vallée Poussin's criterion, the latter condition is equivalent to the uniform integrability of the sequence $\big(\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)}_\infty\big)_{k\in\mathbb{N}}$. Then, by (3.23) and [31, Theorem 1.7], we can conclude the uniform integrability of the required sequence. $\Box$
Corollary 3.17.
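de La Vallée Poussin's criterion, used repeatedly above, states that a family is uniformly integrable if and only if $\sup\mathbb{E}[\Phi(|X|)]<\infty$ for some Young function $\Phi$ growing superlinearly. A small numerical sketch (hypothetical toy family, ours, not from the paper) of the easy direction, with $\Phi(x)=x^2$:

```python
import random

# A uniform bound on E[X^2] forces the tail expectations E[|X| 1_{|X|>K}]
# to be small uniformly over the family, since |x| <= x^2 / K whenever |x| > K.

random.seed(0)

def tail_expectation(samples, K):
    """Monte Carlo estimate of E[|X| 1_{|X| > K}]."""
    return sum(abs(x) for x in samples if abs(x) > K) / len(samples)

# Toy family: five samples of X_n uniform on [0, 2], so E[X_n^2] = 4/3 for all n.
family = [[random.uniform(0.0, 2.0) for _ in range(10_000)] for n in range(5)]

for samples in family:
    second_moment = sum(x * x for x in samples) / len(samples)
    for K in (1.0, 1.5, 1.9):
        # The Markov-type bound behind the criterion, checked sample by sample.
        assert tail_expectation(samples, K) <= second_moment / K
```

The inequality holds pathwise, not just in expectation, which is why the bound is uniform over the whole family once the second moments are uniformly bounded.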
(i) The sequence $\big(\big[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big]_\infty\big)_{k\in\mathbb{N}}$ is uniformly integrable.
(ii) Consequently, it holds that $\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\in\mathcal{H}^{2,d}(\mathbb{G}^k,\infty;\mathbb{R})$, for every $k\in\mathbb{N}$ (cf. Definition A.11).
Proof. Using that $\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}$ is a martingale of finite variation, we have
$$\big[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big]_\infty=\sum_{s>0}\big(\Delta(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)})_s\big)^2=\sum_{s>0}\big(\Delta(X^{k,\mathrm{R}_2,I})_s-\Delta(\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)})_s\big)^2$$
$$\le2\sum_{s>0}\big(\Delta(X^{k,\mathrm{R}_2,I})_s\big)^2+2\sum_{s>0}\big(\Delta(\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)})_s\big)^2=2\,[X^{k,\mathrm{R}_2,I}]_\infty+2\sum_{s>0}\big(\Delta(\mathrm{R}_{2,I}*\nu^{(X^{k,d},\mathbb{G}^k)})_s\big)^2,$$
where in the last equality we have used that $X^{k,\mathrm{R}_2,I}$ is a semimartingale whose paths have finite variation, together with [34, Theorem I.4.52]. In view of the above inequality, Lemma 3.15.(iii), Lemma 3.16.(ii) and [31, Theorem 1.7, Corollary 1.10], we can conclude the required property. This shows (i). In addition, (i) implies the integrability of $\big[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big]_\infty$, hence from [34, Proposition I.4.50.c)] we get that $\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\in\mathcal{H}^{2,d}(\mathbb{G}^k,\infty;\mathbb{R})$. $\Box$
Proposition 3.18. Let conditions (M1), (M2) and (M4) hold. Then the following convergence holds:
$$\big((X^k)^\top,\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big)^\top\xrightarrow{(\mathrm{J}_1(\mathbb{R}^{\ell+1}),\mathbb{L}^1)}\big((X^\infty)^\top,\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}\big)^\top. \qquad(3.24)$$
Proof. As we have already pointed out on page 21, we are going to apply Theorem 2.15 to the sequence $(X^{k,\mathrm{R}_2,I})_{k\in\mathbb{N}}$. By Lemma 3.14, this is a sequence of $\mathbb{G}^k$-special semimartingales. In view of (M1), which states that $X^\infty$ is $\mathbb{G}^\infty$-quasi-left-continuous, and using [34, Corollary II.1.19], we get that the compensator $\nu^{(X^{\infty,d},\mathbb{G}^\infty)}$ associated to $\mu^{X^{\infty,d}}$ is an atomless random measure. Therefore, the finite variation part of the $\mathbb{G}^\infty$-canonical decomposition of $X^{\infty,\mathrm{R}_2,I}$ is a continuous process.
Moreover, by [31, Theorem 5.36] and (M1), which states that the filtration $\mathbb{G}^\infty$ is quasi-left-continuous, it suffices to show that the martingale part of $X^{\infty,\mathrm{R}_2,I}$ is uniformly integrable. The latter holds by Corollary 3.17.(ii).
Lemma 3.15.(iii) yields that condition (i) of Theorem 2.15 holds, while Lemma 3.16.(ii) yields that condition (ii) of the aforementioned theorem also holds. Moreover, from Corollary 3.13 with $p=2$, we obtain the convergence
$$\big((X^k)^\top,X^{k,\mathrm{R}_2,I}\big)^\top\xrightarrow{(\mathrm{J}_1(\mathbb{R}^{\ell+1}),\mathbb{P})}\big((X^\infty)^\top,X^{\infty,\mathrm{R}_2,I}\big)^\top. \qquad(3.25)$$
The last convergence, in conjunction with conditions (M2) and (M4), Remark 3.1 and Corollary A.2, is equivalent to the convergence $(X^{k,\mathrm{R}_2,I},\mathbb{G}^k)\xrightarrow{\mathrm{ext}}(X^{\infty,\mathrm{R}_2,I},\mathbb{G}^\infty)$. Therefore, condition (iii) of Theorem 2.15 is also satisfied.
Applying now Theorem 2.15 to the sequence $(X^{k,\mathrm{R}_2,I})_{k\in\mathbb{N}}$, and keeping in mind the decomposition from Lemma 3.14, we obtain the convergence
$$\big(X^{k,\mathrm{R}_2,I},\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big)^\top\xrightarrow{(\mathrm{J}_1(\mathbb{R}^2),\mathbb{P})}\big(X^{\infty,\mathrm{R}_2,I},\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}\big)^\top. \qquad(3.26)$$
Using Corollary A.2, we can combine the convergences in (3.25) and (3.26) to obtain
$$\big((X^k)^\top,\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big)^\top\xrightarrow{(\mathrm{J}_1(\mathbb{R}^\ell\times\mathbb{R}),\mathbb{P})}\big((X^\infty)^\top,\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}\big)^\top.$$
The last result can be further strengthened to $\mathbb{L}^1$-convergence in view of the following arguments: let $\alpha^k:=\big((X^k)^\top,\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big)^\top$; then, by Vitali's convergence theorem, the claim is equivalent to showing that $\big(d_{\mathrm{J}_1}(\alpha^k,\alpha^\infty)\big)_{k\in\mathbb{N}}$ is uniformly integrable.
Moreover, by the triangle inequality and the fact that the $\mathrm{J}_1$ distance to the zero path is dominated by the sup norm,
$$d_{\mathrm{J}_1}(\alpha^k,\alpha^\infty)\le d_{\mathrm{J}_1}(\alpha^k,0)+d_{\mathrm{J}_1}(0,\alpha^\infty)\le\|\alpha^k\|_\infty+\|\alpha^\infty\|_\infty,$$
and by [31, Theorem 1.7, Corollary 1.10], it suffices to show that $(\|\alpha^k\|_\infty)_{k\in\mathbb{N}}$ is uniformly integrable.
By Corollary 3.17.(i) we know that the sequence $\big([\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}]_\infty\big)_{k\in\mathbb{N}}$ is uniformly integrable. Therefore, using de La Vallée Poussin's criterion, there exists a moderate Young function $\phi$ such that
$$\sup_{k\in\mathbb{N}}\mathbb{E}\big[\phi\big(\big[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big]_\infty\big)\big]<\infty. \qquad(3.27)$$
Proposition A.12 yields that the map $\mathbb{R}_+\ni x\mapsto\psi(x):=\phi(x^2)$ is again a moderate Young function. We can now apply the Burkholder–Davis–Gundy (BDG) inequality [cf. 31, Theorem 10.36] to the sequence of martingales $\big(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big)_{k\in\mathbb{N}}$ using the function $\psi$, and we obtain that
$$\sup_{k\in\mathbb{N}}\mathbb{E}\Big[\phi\Big(\big(\sup_{s>0}\big|(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)})_s\big|\big)^2\Big)\Big]=\sup_{k\in\mathbb{N}}\mathbb{E}\Big[\psi\Big(\sup_{s>0}\big|(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)})_s\big|\Big)\Big]\overset{\text{BDG}}{\le}C_\psi\sup_{k\in\mathbb{N}}\mathbb{E}\Big[\psi\Big(\big[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big]^{1/2}_\infty\Big)\Big]=C_\psi\sup_{k\in\mathbb{N}}\mathbb{E}\Big[\phi\Big(\big[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big]_\infty\Big)\Big]\overset{(3.27)}{<}\infty. \qquad(3.28)$$
Hence the sequence $\big(\sup_{s>0}|(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)})_s|\big)_{k\in\mathbb{N}}$ is uniformly integrable, again by de La Vallée Poussin's criterion. Moreover, $(\mathrm{Tr}[[X^k]_\infty])_{k\in\mathbb{N}}$ is a uniformly integrable sequence; cf. Lemma 3.15.(i). Using arguments analogous to the above inequality, we can conclude that the sequence $(\sup_{s>0}|X^k_s|)_{k\in\mathbb{N}}$ is also uniformly integrable. Hence the family $\big(\sup_{s>0}|X^k_s|+\sup_{s>0}|(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)})_s|\big)_{k\in\mathbb{N}}$ is uniformly integrable, which allows us to conclude. $\Box$
Lemma 3.19.
The sequence $\big(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big)_{k\in\mathbb{N}}$ possesses the P-UT property; consequently,
$$\big(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)},\big[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big]\big)\xrightarrow{(\mathrm{J}_1(\mathbb{R}^2),\mathbb{P})}\big(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)},\big[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}\big]\big). \qquad(3.29)$$
Proof. By (3.28), we obtain that the martingale sequence $\big(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big)_{k\in\mathbb{N}}$ is $\mathbb{L}^2$-bounded, which further allows us to conclude, since the processes are càdlàg and $|\Delta M_s|\le|M_s|+|M_{s-}|\le2\sup_{s>0}|M_s|$, that
$$\sup_{k\in\mathbb{N}}\mathbb{E}\Big[\sup_{s>0}\big|\Delta\big(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big)_s\big|\Big]\le2\sup_{k\in\mathbb{N}}\mathbb{E}\Big[\sup_{s>0}\big|\big(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big)_s\big|\Big]<\infty.$$
Now, (3.24) yields that the sequence converges in law, hence [34, Corollary VI.6.30] allows us to conclude that the sequence $\big(\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\big)_{k\in\mathbb{N}}$ possesses the P-UT property. Then, we obtain the required convergence from [34, Theorems VI.6.26 and VI.6.22.(c)]. $\Box$
3.6. Step 3 is valid. The following result provides a sufficient criterion for showing that a martingale $L$ is orthogonal to the space generated by another martingale $X$. We adopt the notation of the previous section.
Proposition 3.20. Let $X$ be an $\mathbb{F}$-quasi-left-continuous, $\mathbb{R}^\ell$-valued $\mathbb{F}$-martingale and let $L$ be a uniformly integrable, $\mathbb{R}$-valued $\mathbb{F}$-martingale. Assume that
(i) $[X,L]$ is a uniformly integrable $\mathbb{F}$-martingale, where $[X,L]:=([X^1,L],\ldots,[X^\ell,L])^\top$.
(ii) $\mathcal{I}$ is a family of subsets of $\mathbb{R}^\ell$ such that $\sigma(\mathcal{I})=\mathcal{B}(\mathbb{R}^\ell)$ and the martingale $\mathrm{R}_{2,A}\star\widetilde{\mu}^{(X,\mathbb{F})}$ is well-defined for every $A\in\mathcal{I}$. Moreover, $\big[\mathrm{R}_{2,A}\star\widetilde{\mu}^{(X,\mathbb{F})},L\big]$ is a uniformly integrable $\mathbb{F}$-martingale, for every $A\in\mathcal{I}$.
(iii) $|\Delta L|*\mu^X\in\widetilde{\mathcal{A}}_\sigma(\mathbb{F})$. (For the notation, recall the discussion before Definition 2.6.)
Then we have that
$$\langle X^{c,i},L^c\rangle^{\mathbb{F}}=0,\ \text{for every }i=1,\ldots,\ell,\quad\text{and}\quad M_{\mu^X}[\Delta L\,|\,\widetilde{\mathcal{P}}^{\mathbb{F}}]=0. \qquad(3.30)$$
Proof. By condition (i), the process $[X^i,L]$ is of class (D) (see [34, Definition I.1.46]) for every $i=1,\ldots,\ell$. Hence, by the Doob-Meyer decomposition, we obtain that
$$\langle X^i,L\rangle=0,\quad\text{for every }i=1,\ldots,\ell. \qquad(3.31)$$
We are going to translate condition (ii) into the following one:
$$\mathbb{E}\Big[\int_{(0,\infty)\times\mathbb{R}^\ell}W(s,x)\mathrm{R}_2(x)\Delta L_s\,\mu^X(\mathrm{d}s,\mathrm{d}x)\Big]=0,\quad\text{for every }\widetilde{\mathcal{P}}^{\mathbb{F}}\text{-measurable function }W. \qquad(3.32)$$
Before we proceed, recall that we have assumed $X$ to be an $\mathbb{F}$-quasi-left-continuous martingale. Thus, by [34, Corollary II.1.19] and Remark 2.2, we can conclude that
$$\Delta\big(\mathrm{R}_{2,A}\star\widetilde{\mu}^{(X,\mathbb{F})}\big)_s=\mathrm{R}_2\big(\Delta X^d_s\big)\mathbf{1}_A(\Delta X^d_s)=\mathrm{R}_2\big(\Delta X_s\big)\mathbf{1}_A(\Delta X_s)=\Delta\big(\mathrm{R}_{2,A}*\mu^X\big)_s. \qquad(3.33)$$
By the martingale property of $\big[\mathrm{R}_{2,A}\star\widetilde{\mu}^{(X,\mathbb{F})},L\big]$, we obtain for every $0\le t<u<\infty$ and every $C\in\mathcal{F}_t$ that
$$\mathbb{E}\big[\mathbf{1}_C\,\mathbb{E}\big[[\mathrm{R}_{2,A}\star\widetilde{\mu}^{(X,\mathbb{F})},L]_u\,\big|\,\mathcal{F}_t\big]-\mathbf{1}_C\,[\mathrm{R}_{2,A}\star\widetilde{\mu}^{(X,\mathbb{F})},L]_t\big]=\mathbb{E}\big[\mathbf{1}_C\,[\mathrm{R}_{2,A}\star\widetilde{\mu}^{(X,\mathbb{F})},L]_u-\mathbf{1}_C\,[\mathrm{R}_{2,A}\star\widetilde{\mu}^{(X,\mathbb{F})},L]_t\big]=\mathbb{E}\Big[\mathbf{1}_C\sum_{t<s\le u}\mathrm{R}_2(\Delta X_s)\mathbf{1}_A(\Delta X_s)\Delta L_s\Big].$$
3.7. Step 2 is valid. The first task of this step is to prove that
$$[Y^\infty,N]\ \text{is an }\mathbb{F}\text{-martingale}, \qquad(3.37)$$
for every weak-limit point $N$ of $(N^k)_{k\in\mathbb{N}}$ and for some filtration $\mathbb{F}$ which includes $\mathbb{G}^\infty$ and may depend on $N$.
In the next few lines, we are going to explain why this is sufficient (for showing that the limit of $(\langle N^k\rangle)_{k\in\mathbb{N}}$ equals zero) and how the filtration $\mathbb{F}$ is going to be determined. Observe that, by the orthogonal decomposition of $Y^k$ with respect to $(X^{k,c},\mu^{X^{k,d}},\mathbb{G}^k)$, we have for every $k\in\mathbb{N}$
$$\langle Y^k,N^k\rangle=\langle N^k\rangle. \qquad(3.38)$$
(We abuse notation and write $\widetilde{\Omega}\ni(\omega,s,x)\mapsto\pi^i(\omega,s,x):=x_i\in\mathbb{R}$.)
This identity is the link with what follows. In Lemma 3.21, we show that the sequence $(N^k)_{k\in\mathbb{N}}$ is tight, while the sequence $(\langle N^k\rangle)_{k\in\mathbb{N}}$ is C-tight. Thus, there exists a subsequence $(k_l)_{l\in\mathbb{N}}$ such that $(N^{k_l})_{l\in\mathbb{N}}$ and $(\langle N^{k_l}\rangle)_{l\in\mathbb{N}}$ converge jointly, and we denote the respective weak-limit points by $N$ and $\Xi$, i.e.
$$\big(N^{k_l},\langle N^{k_l}\rangle\big)\xrightarrow[l\to\infty]{\mathcal{L}}(N,\Xi). \qquad(3.39)$$
The last convergence will enable us to prove (see Corollary 3.27) that
$$[Y^{k_l},N^{k_l}]-\langle N^{k_l}\rangle\xrightarrow[l\to\infty]{\mathcal{L}}[Y^\infty,N]-\Xi. \qquad(3.40)$$
Using classical arguments, we can prove that the limit process is a uniformly integrable martingale with respect to $\mathbb{F}^{(Y^\infty,N,\Xi)}$. Then we can conclude that $\Xi=0$ if (3.37) is valid for the filtration $\mathbb{F}^{(Y^\infty,N,\Xi)}$, since in this case $\Xi$ is an $\mathbb{F}^{(Y^\infty,N,\Xi)}$-predictable $\mathbb{F}^{(Y^\infty,N,\Xi)}$-martingale of finite variation; see [34, Corollary I.3.16].
However, the problem is that the filtration $\mathbb{F}^{(Y^\infty,N,\Xi)}$ does not necessarily contain $\mathbb{G}^\infty$. Before we proceed, let us fix an arbitrary $I\in\mathcal{J}(X^\infty)$ and an arbitrary $G\in\mathcal{G}^\infty_\infty$. In order to enlarge the filtration with respect to which the limiting process in (3.40) is a martingale, we will use the sequences $(\Theta^{k,I})_{k\in\mathbb{N}}$ and $(\Theta^{k,I,G})_{k\in\mathbb{N}}$, which are defined as
$$\Theta^{k,I}:=\big(X^k,[X^k]-\langle X^k\rangle,Y^k,N^k,[X^k,N^k],[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)},N^k],[Y^k,N^k]-\langle N^k\rangle\big)^\top, \qquad(3.41)$$
$$\Theta^{k,I,G}:=\big((\Theta^{k,I})^\top,\mathbb{E}[\mathbf{1}_G\,|\,\mathcal{G}^k_\cdot]\big)^\top.$$
(3.42)
We will prove initially that
$$\Theta^{\infty,I,G}:=\big(X^\infty,[X^\infty]-\langle X^\infty\rangle^{\mathbb{G}^\infty},Y^\infty,N,[X^\infty,N],[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)},N],[Y^\infty,N]-\Xi,\mathbb{E}[\mathbf{1}_G\,|\,\mathcal{G}^\infty_\cdot]\big)^\top, \qquad(3.43)$$
i.e. the weak limit of $(\Theta^{k_l,I,G})_{l\in\mathbb{N}}$, is an $\mathbb{F}^{M^{\infty,G}}$-martingale, where
$$M^{\infty,G}:=\big(X^\infty,\langle X^\infty\rangle^{\mathbb{G}^\infty},Y^\infty,N,\Xi,\mathbb{E}[\mathbf{1}_G\,|\,\mathcal{G}^\infty_\cdot]\big). \qquad(3.44)$$
Since the set $G$ was arbitrarily chosen, we will deduce in Proposition 3.29 the martingale property of
$$\Theta^{\infty,I}:=\big(X^\infty,[X^\infty]-\langle X^\infty\rangle^{\mathbb{G}^\infty},Y^\infty,N,[X^\infty,N],[\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)},N],[Y^\infty,N]-\Xi\big)^\top \qquad(3.45)$$
with respect to the filtration
$$\mathbb{F}:=\mathbb{G}^\infty\vee\mathbb{F}^{M^\infty},\quad\text{for }M^\infty:=\big(X^\infty,\langle X^\infty\rangle^{\mathbb{G}^\infty},Y^\infty,N,\Xi\big). \qquad(3.46)$$
Observe that the filtration $\mathbb{F}$ depends on $N$ and $\Xi$, but we suppress this dependence notationally. Moreover, $\mathbb{F}$ is not necessarily right-continuous. However, in view of [31, Theorem 2.46] and the càdlàg property of $\Theta^{\infty,I}$, we can also conclude that $\Theta^{\infty,I}$ is an $\mathbb{F}_+$-martingale, where $\mathbb{F}_+:=(\mathcal{F}_{t+})_{t\in\mathbb{R}_+}$ and $\mathcal{F}_{t+}:=\bigcap_{s>t}\mathcal{F}_s$.
3.7.1. $\Theta^{\infty,I}$ is an $\mathbb{F}$-martingale. In view of the discussion above, we need to show that the family $(\Theta^{k,I,G})_{k\in\mathbb{N}}$ is tight and uniformly integrable for every $I\in\mathcal{J}(X^\infty)$ and every $G\in\mathcal{G}^\infty_\infty$. These results are proved in Lemmata 3.23 and 3.24. Before that, we present some necessary results.
Lemma 3.21. Assume the setting of Theorem 3.3. Then, for the sequence $(N^k)_{k\in\mathbb{N}}$, the following hold:
(i) The sequence $(\langle N^k\rangle)_{k\in\mathbb{N}}$ is C-tight in $\mathbb{D}(\mathbb{R})$.
(ii) The sequence $(N^k)_{k\in\mathbb{N}}$ is tight in $\mathbb{D}(\mathbb{R})$.
(iii) The sequence $\big((N^k,\langle N^k\rangle)\big)_{k\in\mathbb{N}}$ is tight in $\mathbb{D}(\mathbb{R}^2)$.
(iv) The sequence $(N^k)_{k\in\mathbb{N}}$ is $\mathbb{L}^2$-bounded, i.e. $\sup_{k\in\mathbb{N}}\mathbb{E}\big[\sup_{t\in[0,\infty]}|N^k_t|^2\big]<\infty$.
(v) The sequence $(\langle N^k\rangle_\infty)_{k\in\mathbb{N}}$ is uniformly integrable.
(vi) The sequence $([N^k]_\infty)_{k\in\mathbb{N}}$ is $\mathbb{L}^1$-bounded, i.e. $\sup_{k\in\mathbb{N}}\mathbb{E}\big[[N^k]_\infty\big]<\infty$.
Proof.
(i) By the construction of the orthogonal decompositions of $Y^k$ with respect to $(X^{k,c},\mu^{X^{k,d}},\mathbb{G}^k)$ and Corollary 2.8, we obtain that
$$\langle Z^k\cdot X^{k,c}+U^k\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)},N^k\rangle=0,\quad\text{for every }k\in\mathbb{N}. \qquad(3.47)$$
By conditions (M4), (M5) and Proposition 2.14, we have $(Y^k,\mathbb{G}^k)\xrightarrow{(\mathrm{J}_1(\mathbb{R}),\mathbb{P})}(Y^\infty,\mathbb{G}^\infty)$, which allows us to apply Theorem 2.16 and therefore obtain by part (i) the convergence
$$\big(Y^k,[Y^k],\langle Y^k\rangle\big)\xrightarrow{(\mathrm{J}_1(\mathbb{R}^3),\mathbb{P})}\big(Y^\infty,[Y^\infty],\langle Y^\infty\rangle\big). \qquad(3.48)$$
Hence, by Assumption (M1) and convergence (3.48), we get that the sequence $(\langle Y^k\rangle)_{k\in\mathbb{N}}$ is C-tight; see [34, Definition VI.3.25]. Moreover, (3.47) implies that
$$\langle Y^k\rangle=\langle Z^k\cdot X^{k,c}+U^k\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\rangle+\langle N^k\rangle,\quad\text{for every }k\in\mathbb{N}, \qquad(3.49)$$
which in turn yields that the process $\langle Y^k\rangle$ strongly majorises both $\langle Z^k\cdot X^{k,c}+U^k\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}\rangle$ and $\langle N^k\rangle$, for every $k\in\mathbb{N}$; see [34, Definition VI.3.34]. We can thus conclude the C-tightness of $(\langle N^k\rangle)_{k\in\mathbb{N}}$ by [34, Proposition VI.3.35].
(ii) In order to use [34, Theorem VI.4.13] to obtain the tightness of $(N^k)_{k\in\mathbb{N}}$, it suffices to show that $(\langle N^k\rangle)_{k\in\mathbb{N}}$ is C-tight and $(N^k_0)_{k\in\mathbb{N}}$ is tight. The first statement follows from (i), while for the second one we have $N^k_0=0$, by the definition of the orthogonal decomposition of $Y^k$ with respect to $(X^{k,c},\mu^{X^{k,d}},\mathbb{G}^k)$, for every $k\in\mathbb{N}$.
(iii) This is immediate in view of [34, Corollary VI.3.33] and (i)-(ii).
(iv) We have, by Doob's $\mathbb{L}^2$-inequality (see [34, Theorem 5.1.3] and [31, Theorem 6.8]), that
$$\mathbb{E}\Big[\sup_{t\in[0,\infty]}|N^k_t|^2\Big]\le4\,\mathbb{E}\big[|N^k_\infty|^2\big]=4\,\mathbb{E}\big[\langle N^k\rangle_\infty\big]\le4\,\mathbb{E}\big[\langle Y^k\rangle_\infty\big], \qquad(3.50)$$
by identity (3.49).
By convergence (3.48) (compare with Theorem 2.16.(ii)), we obtain that the sequence $(\langle Y^k\rangle_\infty)_{k\in\mathbb{N}}$ is uniformly integrable, and in particular $\mathbb{L}^1$-bounded.
(v) Since $(\langle N^k\rangle_\infty)_{k\in\mathbb{N}}$ is strongly majorised by $(\langle Y^k\rangle_\infty)_{k\in\mathbb{N}}$, which is uniformly integrable (see the comments in the proof of (iv)), we can conclude by [31, Theorem 1.7].
(vi) The identity $\mathbb{E}\big[[N^k]_\infty\big]=\mathbb{E}\big[\langle N^k\rangle_\infty\big]$ for every $k\in\mathbb{N}$, together with the uniform integrability of the sequence $(\langle N^k\rangle_\infty)_{k\in\mathbb{N}}$, which implies its $\mathbb{L}^1$-boundedness, allows us to conclude. $\Box$
Lemma 3.22. Let conditions (M1), (M2), (M4) and (M5) hold. Then the sequences $(X^k)_{k\in\mathbb{N}}$ and $(Y^k)_{k\in\mathbb{N}}$ possess the P-UT property.
Proof. The proof is analogous to that of Lemma 3.19; in particular, we are going to apply [34, Corollary VI.6.30]. In view of conditions (M1), (M2), (M4), (M5), Remark 3.1 and Proposition 2.14, the sequences $(X^k)_{k\in\mathbb{N}}$ and $(Y^k)_{k\in\mathbb{N}}$ satisfy Theorem 2.16, i.e.
$$X^k\xrightarrow{(\mathrm{J}_1(\mathbb{R}^\ell),\mathbb{P})}X^\infty\quad\text{and}\quad Y^k\xrightarrow{(\mathrm{J}_1(\mathbb{R}),\mathbb{P})}Y^\infty.$$
Moreover, $(X^k)_{k\in\mathbb{N}}$ is $\mathbb{L}^2$-bounded and, for every $k\in\mathbb{N}$, the process $X^k$ is a càdlàg $\mathbb{G}^k$-martingale. Therefore, using arguments similar to those in the proof of Lemma 3.19, we get that
$$\sup_{k\in\mathbb{N}}\mathbb{E}\Big[\sup_{s\le t}|\Delta X^k_s|\Big]<\infty,$$
which allows us to conclude. The steps for the sequence $(Y^k)_{k\in\mathbb{N}}$ are completely analogous, so we omit them. $\Box$
Lemma 3.23.
The family of random variables $\big(\|\Theta^{k,I,G}_t\|\big)_{k\in\mathbb{N},t\in[0,\infty]}$ is uniformly integrable for every $I\in\mathcal{J}(X^\infty)$ and $G\in\mathcal{G}^\infty_\infty$, where, abusing notation, we have defined
$$\|\Theta^{k,I,G}_t\|:=\|X^k_t\|+\|[X^k]_t-\langle X^k\rangle_t\|+\|(Y^k_t,N^k_t)\|+\|[X^k,N^k]_t\|+\big\|\big([\mathrm{R}_{2,I}\star\widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)},N^k]_t,[Y^k,N^k]_t-\langle N^k\rangle_t,\mathbb{E}[\mathbf{1}_G\,|\,\mathcal{G}^k_t]\big)\big\|. \qquad(3.51)$$
Proof. Let $I\in\mathcal{J}(X^\infty)$ and $G\in\mathcal{G}^\infty_\infty$ be fixed. We are going to prove that, for each summand of $\|\Theta^{k,I,G}_t\|$, the associated family indexed by $\{k\in\mathbb{N},t\in[0,\infty]\}$ is uniformly integrable. Then, by [31, Corollary 1.10], we can conclude the same for their sum $\big(\|\Theta^{k,I,G}_t\|\big)_{k\in\mathbb{N},t\in[0,\infty]}$.
(i) Let $i\in\{1,\ldots,\ell\}$. Theorem 2.16 and the Burkholder–Davis–Gundy inequality imply the $\mathbb{L}^2$-boundedness of $\big(\sup_{t\in[0,\infty]}|X^{k,i}_t|\big)_{k\in\mathbb{N}}$. By de La Vallée Poussin's criterion, we obtain that $\big(\sup_{t\in[0,\infty]}|X^{k,i}_t|\big)_{k\in\mathbb{N}}$ is uniformly integrable. By the obvious domination $|X^{k,i}_t|\le\sup_{t\in[0,\infty]}|X^{k,i}_t|$, for every $t\in[0,\infty]$, and [31, Theorem 1.7], we conclude in particular the uniform integrability of the family $\big(|X^{k,i}_t|\big)_{k\in\mathbb{N},t\in[0,\infty]}$. By [31, Corollary 1.10], we conclude that $\big(\|X^k_t\|\big)_{k\in\mathbb{N},t\in[0,\infty]}$ is uniformly integrable.
(ii) Let $i,j\in\{1,\ldots,\ell\}$. By Theorem 2.16 and Lemma 2.17, we obtain the uniform integrability of the sequence $\big(\mathrm{Var}([X^{k,i},X^{k,j}])_\infty\big)_{k\in\mathbb{N}}$. Hence the family $\big(|[X^{k,i},X^{k,j}]_t|\big)_{k\in\mathbb{N},t\in[0,\infty]}$ is also uniformly integrable, in view of the domination $|[X^{k,i},X^{k,j}]_t|\le\mathrm{Var}([X^{k,i},X^{k,j}])_\infty$.
Then, $(\|[X^k]_t\|)_{k\in\mathbb{N},\,t\in[0,\infty]}$ is also uniformly integrable, as the sum of uniformly integrable families. For the uniform integrability of $(\langle X^{k,i},X^{k,j}\rangle_t)_{k\in\mathbb{N},\,t\in[0,\infty]}$, we can use arguments completely analogous to the ones above. We only have to mention that Lemma 2.17 remains valid when we substitute the quadratic covariation by the predictable quadratic covariation. Therefore, we can conclude by the inequalities
\[ \big|[X^{k,i},X^{k,j}]_t - \langle X^{k,i},X^{k,j}\rangle_t\big| \le \big|[X^{k,i},X^{k,j}]_t\big| + \big|\langle X^{k,i},X^{k,j}\rangle_t\big| \le \big\|\mathrm{Var}([X^k])_\infty\big\| + \big\|\mathrm{Var}(\langle X^k\rangle)_\infty\big\|. \]
(iii) We can conclude the uniform integrability of $(Y^k_t)_{k\in\mathbb{N},\,t\in[0,\infty]}$ by arguments analogous to (i), since the sequence $(Y^k)_{k\in\mathbb{N}}$ satisfies the assumptions of Theorem 2.16.
(iv) The uniform integrability of $(N^k_t)_{k\in\mathbb{N},\,t\in[0,\infty]}$ is immediate by Lemma 3.21.(iv) and de La Vallée-Poussin's criterion.
(v) In Corollary 3.17.(i), we have obtained that the sequence $([R_I \star \widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}]_\infty)_{k\in\mathbb{N}}$ is uniformly integrable. Moreover, the sequence $([N^k]_\infty)_{k\in\mathbb{N}}$ is $L^1$-bounded, by Lemma 3.21.(vi). Now we can conclude the uniform integrability of the sequence $(\mathrm{Var}([R_I \star \widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}, N^k])_\infty)_{k\in\mathbb{N}}$ by Lemma 2.17. This further implies the uniform integrability of $([R_I \star \widetilde{\mu}^{(X^{k,d},\mathbb{G}^k)}, N^k]_t)_{k\in\mathbb{N},\,t\in[0,\infty]}$.
(vi) We first prove the uniform integrability of the family $([Y^k,N^k]_t)_{k\in\mathbb{N},\,t\in[0,\infty]}$. We can obtain it by applying Lemma 3.21 to the uniformly integrable sequence $(\mathrm{Tr}[[Y^k]_\infty])_{k\in\mathbb{N}}$ and the $L^1$-bounded sequence $([N^k]_\infty)_{k\in\mathbb{N}}$; see Lemma 3.21.(vi). The uniform integrability of $(\langle N^k\rangle_\infty)_{k\in\mathbb{N}}$ is provided by Lemma 3.21.(v).
Then
\[ \big|[Y^k,N^k]_t - \langle N^k\rangle_t\big| \le \big|[Y^k,N^k]_t\big| + \big|\langle N^k\rangle_t\big| \le \mathrm{Var}\big([Y^k,N^k]\big)_t + \mathrm{Var}\big(\langle N^k\rangle\big)_t \le \mathrm{Var}\big([Y^k,N^k]\big)_\infty + \mathrm{Var}\big(\langle N^k\rangle\big)_\infty, \]
and we can now conclude by [31, Theorem 1.7].
(vii) Observe that, for every $p>1$ and for every $k\in\mathbb{N}$, $t\in[0,\infty]$, it holds
\[ \mathbb{E}\big[(\mathbb{E}[\mathbf{1}_G|\mathcal{G}^k_t])^p\big] \overset{\text{Jensen ineq.}}{\le} \mathbb{E}\big[\mathbb{E}[(\mathbf{1}_G)^p|\mathcal{G}^k_t]\big] = \mathbb{E}[\mathbf{1}_G] = \mathbb{P}(G). \]
Then, we can conclude by de La Vallée-Poussin's criterion. We could have, more generally, used that $0 \le \mathbb{E}[\mathbf{1}_G|\mathcal{G}^k_t] \le 1$, for every $k\in\mathbb{N}$, $t\in[0,\infty]$; see [31, Theorem 1.18]. $\square$

Lemma 3.24. The sequence $(\Theta^{k,I,G})_{k\in\mathbb{N}}$ is tight in $D(\mathbb{R}^{(\ell+1)\times\ell} \times \mathbb{R}^2 \times \mathbb{R}^\ell \times \mathbb{R}^3)$, for every $I\in J(X^\infty)$ and for every $G\in\mathcal{G}^\infty_\infty$.

Proof. Let $I\in J(X^\infty)$ and $G\in\mathcal{G}^\infty_\infty$ be fixed. We claim that the sequence $\Phi^k := (X^k, Y^k, N^k)$ is tight in $D(\mathbb{R}^\ell \times \mathbb{R}^2)$, and that the tightness of $(\Phi^k)_{k\in\mathbb{N}}$ is sufficient to show the tightness of $(\Theta^{k,I,G})_{k\in\mathbb{N}}$. The space $D(\mathbb{R}^{(\ell+1)\times\ell} \times \mathbb{R}^2 \times \mathbb{R}^\ell \times \mathbb{R}^3)$ is Polish, since it is isometric to $D(\mathbb{R}^{\ell^2+2\ell+5})$, which is Polish; see [34, Theorem VI.1.14]. Hence, in this case, tightness is equivalent to sequential compactness. Therefore, it suffices to provide a weakly convergent subsequence $(\Theta^{k_{l_m},I,G})_{m\in\mathbb{N}}$ for every subsequence $(\Theta^{k_l,I,G})_{l\in\mathbb{N}}$.

Let us therefore consider a subsequence $(\Theta^{k_l,I,G})_{l\in\mathbb{N}}$. Assuming the tightness of $(\Phi^k)_{k\in\mathbb{N}}$, there exists a weakly convergent subsequence $(\Phi^{k_{l_m}})_{m\in\mathbb{N}}$, converging to, say, $(X^\infty, Y^\infty, \widetilde{N})$. Moreover, by Lemma 3.22, the sequences $(X^k)_{k\in\mathbb{N}}$, $(Y^k)_{k\in\mathbb{N}}$ possess the P-UT property; in particular, the subsequences $(X^{k_{l_m}})_{m\in\mathbb{N}}$, $(Y^{k_{l_m}})_{m\in\mathbb{N}}$ possess the P-UT property. On the other hand, $(N^k)_{k\in\mathbb{N}}$ is $L^2$-bounded and, by using arguments completely analogous to the ones used in Lemma 3.19, we get that the sequence $(N^{k_{l_m}})_{m\in\mathbb{N}}$ also possesses the P-UT property.
Finally, by Lemma 3.19, the sequence $(R_I \star \widetilde{\mu}^{(X^{k_{l_m}.d},\mathbb{G}^{k_{l_m}})})_{m\in\mathbb{N}}$ possesses the P-UT property. By Theorem 2.16, we therefore obtain the convergence
\[ (X^k, [X^k], \langle X^k\rangle) \xrightarrow{(J_1(\mathbb{R}^\ell \times \mathbb{R}^{\ell\times\ell} \times \mathbb{R}^{\ell\times\ell}),\,\mathbb{P})} (X^\infty, [X^\infty], \langle X^\infty\rangle^{\mathbb{G}^\infty}). \]
Hence, by [34, Theorem VI.6.26] and by Proposition 3.18, we also get the convergence
\[ \big(X^{k_{l_m}}, [X^{k_{l_m}}] - \langle X^{k_{l_m}}\rangle, Y^{k_{l_m}}, N^{k_{l_m}}, [X^{k_{l_m}}, N^{k_{l_m}}], [R_I \star \widetilde{\mu}^{(X^{k_{l_m},d},\mathbb{G}^{k_{l_m}})}, N^{k_{l_m}}], [Y^{k_{l_m}}, N^{k_{l_m}}]\big) \xrightarrow{\mathcal{L}} \big(X^\infty, [X^\infty] - \langle X^\infty\rangle^{\mathbb{G}^\infty}, Y^\infty, \widetilde{N}, [X^\infty, \widetilde{N}], [R_I \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}, \widetilde{N}], [Y^\infty, \widetilde{N}]\big). \]
In view of the $C$-tightness of $(\langle N^k\rangle)_{k\in\mathbb{N}}$, see Lemma 3.21, we can pass to a further subsequence $(k_{l_{m_n}})_{n\in\mathbb{N}}$ so that $\langle N^{k_{l_{m_n}}}\rangle \xrightarrow[n\to\infty]{\mathcal{L}} \widetilde{\Xi}$. Hence, we can finally obtain a jointly weakly convergent subsequence
\[ \Theta^{k_{l_{m_n}},I,G} \xrightarrow[n\to\infty]{\mathcal{L}} \big(X^\infty, [X^\infty] - \langle X^\infty\rangle^{\mathbb{G}^\infty}, Y^\infty, \widetilde{N}, [X^\infty, \widetilde{N}], [R_I \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}, \widetilde{N}], [Y^\infty, \widetilde{N}] - \widetilde{\Xi}, \mathbb{E}[\mathbf{1}_G|\mathcal{G}^\infty_\cdot]\big), \]
where we have used that $(X^k, \mathbb{E}[\mathbf{1}_G|\mathcal{G}^k_\cdot]) \xrightarrow[k\to\infty]{(J_1(\mathbb{R}^{\ell+1}),\,\mathbb{P})} (X^\infty, \mathbb{E}[\mathbf{1}_G|\mathcal{G}^\infty_\cdot])$ to conclude.

In order to prove our initial claim that $(\Phi^k)_{k\in\mathbb{N}}$ is tight, we will apply [34, Theorem VI.4.13] to the $L^2$-bounded sequences $(X^k)_{k\in\mathbb{N}}$, $(Y^k)_{k\in\mathbb{N}}$ and $(N^k)_{k\in\mathbb{N}}$. The sequences $(X^k)_{k\in\mathbb{N}}$, $(Y^k)_{k\in\mathbb{N}}$ and $(N^k)_{k\in\mathbb{N}}$ are clearly tight in $\mathbb{R}^\ell$, $\mathbb{R}$ and $\mathbb{R}$, respectively, as $L^2$-bounded. The sequences $(\mathrm{Tr}[\langle X^k\rangle])_{k\in\mathbb{N}}$ and $(\langle Y^k\rangle)_{k\in\mathbb{N}}$ are $C$-tight as a consequence of Theorem 2.16 and the quasi-left-continuity of $X^\infty$ and $Y^\infty$. The sequence $(\langle N^k\rangle)_{k\in\mathbb{N}}$ is $C$-tight by Lemma 3.21. Finally, the sequence $(\Psi^k)_{k\in\mathbb{N}}$, where $\Psi^k := \mathrm{Tr}[\langle X^k\rangle] + \langle Y^k\rangle + \langle N^k\rangle$, is $C$-tight as the sum of $C$-tight sequences; see [34, Corollary VI.3.33]. This concludes the proof. $\square$

Remark 3.25.
In the proof of the previous lemma, in order to prove the tightness of the sequence $(\Phi^k)_{k\in\mathbb{N}}$, we have used [34, Theorem VI.4.13], which in turn makes use of Aldous's criterion for tightness; see [34, Section VI.4a]. This allows us to conclude that every weak-limit point of $(N^k)_{k\in\mathbb{N}}$ is quasi-left-continuous for its natural filtration. Recall that, by condition (M1), we have in particular that $X^\infty$ and $Y^\infty$ are quasi-left-continuous for their natural filtrations. To sum up, for an arbitrary weak-limit point $N$ of $(N^k)_{k\in\mathbb{N}}$, it holds
\[ \mathbb{P}(\Delta X^{\infty,i}_t = 0) = \mathbb{P}(\Delta Y^\infty_t = 0) = \mathbb{P}(\Delta N_t = 0) = 1, \quad\text{for every } t\in\mathbb{R}_+,\ i=1,\dots,\ell. \]
Observe that the above property is independent of the filtration with respect to which the processes are adapted.

In view of Lemma 3.21, we will fix for the rest of this subsection a pair $(N,\Xi)$ such that (3.39) is valid, i.e. the weak-limit point $(N,\Xi)$ is approximated by the subsequence $(N^{k_l}, \langle N^{k_l}\rangle)_{l\in\mathbb{N}}$. Consequently, the subsequence $(k_l)_{l\in\mathbb{N}}$ will be fixed hereinafter.

Lemma 3.26. The subsequence $(N^{k_l})_{l\in\mathbb{N}}$ possesses the P-UT property.

Proof. This is immediate by [34, Corollary VI.6.30]. To verify the assumptions of the aforementioned corollary, in view of convergence (3.39), it is only necessary to prove that
\[ \sup_{l\in\mathbb{N}} \mathbb{E}\Big[\sup_{s\le t} \big|\Delta N^{k_l}_s\big|\Big] < \infty. \]
This follows by arguments analogous to the ones used in the proof of Lemma 3.19. $\square$

Corollary 3.27. Assume that conditions (M1), (M2), (M4), (M5) and convergence (3.39) hold. Then, for every $I\in J(X^\infty)$, $G\in\mathcal{G}^\infty_\infty$, we have that $\Theta^{k_l,I,G} \xrightarrow[l\to\infty]{\mathcal{L}} \Theta^{\infty,I,G}$, and the process $\Theta^{\infty,I,G}$ is a uniformly integrable $\mathbb{F}^{M^{\infty,G}}$-martingale, where $M^{\infty,G}$ is defined in (3.44).

Proof. Let us fix an $I\in J(X^\infty)$ and a $G\in\mathcal{G}^\infty_\infty$. By Lemma 3.24, i.e. by the tightness of the sequence $(\Theta^{k,I,G})_{k\in\mathbb{N}}$, we obtain that the sequence $(\Theta^{k_l,I,G})_{l\in\mathbb{N}}$ is tight.
Therefore, it is sufficient to prove the convergence in law of each element of the subsequence $(\Theta^{k_l,I,G})_{l\in\mathbb{N}}$.
(i) By conditions (M2), (M4) and Proposition 2.14, the following convergence holds: $X^k \xrightarrow{(J_1(\mathbb{R}^\ell),\,\mathbb{P})} X^\infty$.
(ii) By the above convergence, conditions (M1), (M2), (M4) and Theorem 2.16, the following convergence holds: $[X^k] - \langle X^k\rangle \xrightarrow{(J_1(\mathbb{R}^{\ell\times\ell}),\,\mathbb{P})} [X^\infty] - \langle X^\infty\rangle^{\mathbb{G}^\infty}$.
(iii) By conditions (M2), (M4), (M5) and Proposition 2.14, the following convergence holds: $Y^k \xrightarrow{(J_1(\mathbb{R}),\,\mathbb{P})} Y^\infty$.
(iv) By Lemma 3.22, the sequences $(X^{k_l})_{l\in\mathbb{N}}$ and $(Y^{k_l})_{l\in\mathbb{N}}$ possess the P-UT property. Moreover, by Lemma 3.26, the sequence $(N^{k_l})_{l\in\mathbb{N}}$ has the P-UT property and, by Lemma 3.19, the sequence $(R_I \star \widetilde{\mu}^{(X^{k_l,d},\mathbb{G}^{k_l})})_{l\in\mathbb{N}}$ possesses the P-UT property. Therefore, we can obtain by [34, Theorem VI.6.26] that
\[ [X^{k_l}, N^{k_l}] \xrightarrow[l\to\infty]{\mathcal{L}} [X^\infty, N], \quad [R_I \star \widetilde{\mu}^{(X^{k_l,d},\mathbb{G}^{k_l})}, N^{k_l}] \xrightarrow[l\to\infty]{\mathcal{L}} [R_I \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}, N], \quad [Y^{k_l}, N^{k_l}] - \langle N^{k_l}\rangle \xrightarrow[l\to\infty]{\mathcal{L}} [Y^\infty, N] - \Xi. \]
(v) By condition (M4), the following convergence holds: $\mathbb{E}[\mathbf{1}_G|\mathcal{G}^k_\cdot] \xrightarrow{(J_1(\mathbb{R}),\,\mathbb{P})} \mathbb{E}[\mathbf{1}_G|\mathcal{G}^\infty_\cdot]$.
In order to show the second statement, we use Corollary A.2 and the fact that $(X^k)_{k\in\mathbb{N}}$ is a common element, hence we obtain the convergence $(\Theta^{k_l,I,G}, M^{k_l,G}) \xrightarrow[l\to\infty]{\mathcal{L}} (\Theta^{\infty,I,G}, M^{\infty,G})$. Now we can conclude that $\Theta^{\infty,I,G}$ is a uniformly integrable martingale with respect to the filtration generated by $M^{\infty,G}$, which obviously coincides with the filtration generated by $(\Theta^{\infty,I,G}, M^{\infty,G})$, by Lemma 3.23 and [34, Theorem IX.1.12, Proposition IX.1.10]. Finally, in order to obtain the martingale property with respect to $\mathbb{F}^{M^{\infty,G}}$, i.e. the usual augmentation of the natural filtration of $M^{\infty,G}$, we apply [31, Theorem 2.46]. $\square$

Remark 3.28. (i) For every $t\in[0,\infty]$, we have $\mathcal{F}_t = \mathcal{G}^\infty_t \vee \mathcal{F}^{(N,\Xi)^\top}_t$.
(ii) The inclusion $\mathbb{F}^{M^{\infty,G}} \subset \mathbb{F}$ holds for every fixed $G\in\mathcal{G}^\infty_\infty$.
In particular, it also holds that $\mathcal{P}_{\mathbb{F}^{M^{\infty,G}}} \subset \mathcal{P}_{\mathbb{F}}$, for every fixed $G\in\mathcal{G}^\infty_\infty$.

Proposition 3.29. The process $\Theta^{\infty,I}$ is a uniformly integrable $\mathbb{F}$-martingale, for every $I\in J(X^\infty)$.

Proof. Let us fix an $I\in J(X^\infty)$. By applying Corollary 3.27 for every $G\in\mathcal{G}^\infty_\infty$, we have that $\Theta^{\infty,I}$ is a uniformly integrable $\mathbb{F}^{M^{\infty,G}}$-martingale. Therefore, we only have to prove that $\Theta^{\infty,I}$ is an $\mathbb{F}$-martingale. By Lemma A.10, it is sufficient to prove that, for every $0\le t<u\le\infty$, the following condition holds:
\[ \int_\Lambda \mathbb{E}[\Theta^{\infty,I}_u|\mathcal{F}_t]\,\mathrm{d}\mathbb{P} = \int_\Lambda \Theta^{\infty,I}_t\,\mathrm{d}\mathbb{P}, \tag{3.52} \]
for every $\Lambda \in \mathcal{A}_t := \big\{\Gamma\cap\Delta,\ \Gamma\in\mathcal{G}^\infty_t,\ \Delta\in\mathcal{F}^{(N,\Xi)^\top}_t\big\}$. Observe that $\mathcal{A}_t$ is a $\pi$-system with $\sigma(\mathcal{A}_t) = \mathcal{F}_t$. Let us, therefore, fix $0\le t<u\le\infty$ and $\Lambda\in\mathcal{A}_t$, where $\Lambda = \Gamma\cap\Delta$, for some $\Gamma\in\mathcal{G}^\infty_t$, $\Delta\in\mathcal{F}^{(N,\Xi)^\top}_t$. Observe that, in particular, $\Lambda\in\mathcal{F}^{M^{\infty,\Gamma}}_t$. Now we obtain
\[ \int_\Lambda \mathbb{E}\big[\Theta^{\infty,I}_u|\mathcal{F}_t\big]\,\mathrm{d}\mathbb{P} = \int_\Lambda \mathbb{E}\big[\mathbb{E}[\Theta^{\infty,I}_u|\mathcal{F}_t]\,\big|\,\mathcal{F}^{M^{\infty,\Gamma}}_t\big]\,\mathrm{d}\mathbb{P} = \int_\Lambda \mathbb{E}\big[\Theta^{\infty,I}_u\,\big|\,\mathcal{F}^{M^{\infty,\Gamma}}_t\big]\,\mathrm{d}\mathbb{P} = \int_\Lambda \Theta^{\infty,I}_t\,\mathrm{d}\mathbb{P}, \tag{3.53} \]
where the first equality holds because $\Lambda\in\mathcal{F}^{M^{\infty,\Gamma}}_t$, so that we can use the definition of the conditional expectation with respect to the $\sigma$-algebra $\mathcal{F}^{M^{\infty,\Gamma}}_t$. In the second equality, we have used the tower property and Remark 3.28.(ii), while for the third one we have used that $\Theta^{\infty,I}$ is an $\mathbb{F}^{M^{\infty,\Gamma}}$-martingale, i.e. we applied Corollary 3.27 for $\Gamma\in\mathcal{G}^\infty_t\subset\mathcal{G}^\infty_\infty$. Therefore, we can conclude that $\Theta^{\infty,I}$ is a uniformly integrable $\mathbb{F}$-martingale. $\square$

3.7.2. $N$ is sufficiently integrable. The uniform integrability of the $\mathbb{F}$-martingale $N$ implies neither the integrability of $[N]^{1/2}_\infty$ nor that of $\sup_{t\in[0,\infty)} |N_t|$.
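Many of the uniform-integrability claims in this section are obtained from moment bounds via de La Vallée-Poussin's criterion; the elementary mechanism behind its simplest instance ($L^2$-boundedness implies uniform integrability) is the tail estimate $\mathbb{E}[|X|\mathbf{1}_{\{|X|>c\}}] \le \mathbb{E}[X^2]/c$, since $|X|\mathbf{1}_{\{|X|>c\}} \le X^2/c$. The following numerical sketch (illustrative only; the discrete distribution is an arbitrary choice, not taken from the paper) checks this tail bound:

```python
# Toy check of E[|X| 1_{|X|>c}] <= E[X^2]/c for a discrete random
# variable; the distribution below is an arbitrary illustrative choice.
values = [0.5, 1.0, 2.0, 4.0, 8.0]
probs = [0.4, 0.3, 0.2, 0.07, 0.03]

# second moment E[X^2]
second_moment = sum(p * v * v for v, p in zip(values, probs))

for c in [1.0, 2.0, 4.0]:
    # truncated first moment E[|X| 1_{|X|>c}] and the L^2 tail bound
    tail = sum(p * v for v, p in zip(values, probs) if v > c)
    bound = second_moment / c
    assert tail <= bound, (c, tail, bound)
    print(f"c={c}: E[|X|1(|X|>c)]={tail:.3f} <= E[X^2]/c={bound:.3f}")
```

Uniformity in the family is then immediate: a common bound on the second moments gives a common bound on all the tails.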
In Lemma 3.32, we will prove that there exists a suitable function $\Psi$ such that $\Psi\big(\sup_{t\in[0,\infty)} |N_t|\big)$ and $\Psi\big([N]^{1/2}_\infty\big)$ are integrable. This result is crucial in order to show that $M_{\mu^{X^{\infty,d}}}\big[|\Delta N|\,\big|\,\widetilde{\mathcal{P}}_{\mathbb{F}}\big]$ is well-defined; see Corollary 3.33 and Proposition 3.34. The way we choose $\Psi$ is given in Definition 3.30.

We have shown in Lemma 3.15 that the sequence $(\mathrm{Tr}[[X^k]_\infty])_{k\in\mathbb{N}}$ is uniformly integrable. The orthogonal decomposition of $Y^\infty$ with respect to $(X^{\infty,c}, \mu^{X^{\infty,d}}, \mathbb{G}^\infty)$ then implies that the (finite) family $\{\langle Z^\infty\cdot X^{\infty,c}\rangle_\infty, [U^\infty \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}]_\infty\}$ is uniformly integrable. Therefore, the family
\[ \mathcal{M} := \big\{\mathrm{Tr}\big[[X^k]_\infty\big],\ k\in\mathbb{N}\big\} \cup \big\{\langle Z^\infty\cdot X^{\infty,c}\rangle_\infty,\ [U^\infty \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}]_\infty\big\} \tag{3.54} \]
is uniformly integrable, as a finite union of uniformly integrable families of random variables. By de La Vallée-Poussin's criterion, see [12, Corollary 2.5.5], we can construct a Young function $\Phi$, see Appendix A.3 for the respective definition, such that $\sup_{\Gamma\in\mathcal{M}} \mathbb{E}[\Phi(\Gamma)] < \infty$. We can improve the last condition by choosing a moderate Young function $\Phi_A$, see Appendix A.3 for the definition, such that
\[ \sup_{\Gamma\in\mathcal{M}} \mathbb{E}[\Phi_A(\Gamma)] < \infty, \tag{3.55} \]
where $A := (\alpha_m)_{m\in\mathbb{N}}$ is a sequence of non-negative integers such that $\alpha_0 = 0$, $\alpha_m \le \alpha_{m+1}$ and $\lim_{m\to\infty}\alpha_m = \infty$; see the first lemma in [53].

Definition 3.30. Let $\Phi_A$ be a moderate Young function with associated sequence $A$, consisting of non-negative integers such that $\alpha_0 = 0$, $\alpha_m \le \alpha_{m+1}$ and $\lim_{m\to\infty}\alpha_m = \infty$, for which condition (3.55) holds. For $\mathrm{quad}: \mathbb{R}_+ \ni x \longmapsto x^2/2 \in \mathbb{R}_+$, we define the function $\Psi$ to be the Young conjugate of $\Phi_A\circ\mathrm{quad}$.

Remark 3.31. The crucial property of $\Psi$ is that it is moderate; see Proposition A.12.

Lemma 3.32.
The weak-limit $N$ given by convergence (3.39) and the Young function $\Psi$ from Definition 3.30 satisfy
\[ \mathbb{E}\Big[\Psi\Big(\sup_{t\in[0,\infty)} |N_t|\Big)\Big] < \infty \quad\text{and}\quad \mathbb{E}\big[\Psi\big([N]^{1/2}_\infty\big)\big] < \infty. \]
(Recall that a $\pi$-system is a non-empty family of sets which is closed under finite intersections.)

Proof. By convergence (3.39), Corollary 3.27, Remark 3.25 and [34, Proposition VI.3.14], we have that $N^{k_l}_t \xrightarrow[l\to\infty]{\mathcal{L}} N_t$, for every $t\in\mathbb{R}_+$. The function $\Psi$ is continuous and convex, since it admits a representation via a Lebesgue integral with positive and non-decreasing integrand. By the continuity of $\Psi$ and the above convergence, we can also conclude that
\[ \Psi\big(|N^{k_l}_t|\big) \xrightarrow[l\to\infty]{\mathcal{L}} \Psi(|N_t|), \quad\text{for every } t\in\mathbb{R}_+. \tag{3.56} \]
Using now Proposition A.12.(iv), there exist a Young function $\Upsilon$ and constants $K>0$ and $C>0$ such that
\[ \sup_{l\in\mathbb{N}} \mathbb{E}\Big[\Upsilon\Big(\Psi\Big(\sup_{t\in[0,\infty)} \big|N^{k_l}_t\big|\Big)\Big)\Big] = \sup_{l\in\mathbb{N}} \mathbb{E}\Big[(\Upsilon\circ\Psi)\Big(\sup_{t\in[0,\infty)} \big|N^{k_l}_t\big|\Big)\Big] \le \sup_{l\in\mathbb{N}} \mathbb{E}\Big[C\,\mathbf{1}_{[0,C]}\Big(\sup_{t\in[0,\infty)} \big|N^{k_l}_t\big|\Big)\Big] + K \sup_{l\in\mathbb{N}} \mathbb{E}\Big[\sup_{t\in[0,\infty)} \big|N^{k_l}_t\big|^2\,\mathbf{1}_{[C,\infty)}\Big(\sup_{t\in[0,\infty)} \big|N^{k_l}_t\big|\Big)\Big] \le C + K \sup_{l\in\mathbb{N}} \mathbb{E}\Big[\sup_{t\in[0,\infty)} |N^{k_l}_t|^2\Big] \overset{\text{Lem. 3.21.(iv)}}{<} \infty. \tag{3.57} \]
By the above inequality and de La Vallée-Poussin's criterion, we obtain the uniform integrability of the family $\big(\Psi(|N^{k_l}_t|)\big)_{l\in\mathbb{N},\,t\in[0,\infty)}$.
On the other hand, convergence (3.56) and the Dunford-Pettis compactness criterion, see Dellacherie and Meyer [21, Chapter II, Theorem 25], yield that the set
\[ \mathcal{Q} := \big\{\Psi(|N^{k_l}_t|),\ t\in[0,\infty),\ l\in\mathbb{N}\big\} \cup \big\{\Psi(|N_t|),\ t\in[0,\infty)\big\} \]
is uniformly integrable, since we augment the relatively weakly compact set $\big(\Psi(|N^{k_l}_t|)\big)_{t\in[0,\infty),\,l\in\mathbb{N}}$ merely by aggregating it with the weak limits $\Psi(|N_t|)$, for $t\in[0,\infty)$. In particular, the subset $\mathcal{N}_\Psi := \big(\Psi(|N_t|)\big)_{t\in[0,\infty)}$ is uniformly integrable. The $L^1$-boundedness of $\mathcal{N}_\Psi$ and the $\mathbb{F}$-martingale property of $N$, see Proposition 3.29, imply that the random variable $N_\infty := \lim_{t\to\infty} N_t$ exists $\mathbb{P}$-a.s. Using the uniform integrability of $\mathcal{N}_\Psi$ and the continuity of $\Psi$ once again, we have that
\[ \Psi\big(|N_t|\big) \xrightarrow[t\to\infty]{L^1(\Omega,\mathcal{G},\mathbb{P})} \Psi\big(|N_\infty|\big), \tag{3.58} \]
i.e. $\Psi(|N_\infty|) \in L^1(\Omega,\mathcal{G},\mathbb{P})$. Recall now that the function $\Psi$ is moderate and convex; see Proposition A.12. By the integrability of $\Psi(|N_\infty|)$, we have that $\|N_\infty\|_\Psi < \infty$, where $\|\Theta\|_\Psi := \inf\big\{\lambda > 0,\ \mathbb{E}\big[\Psi\big(\tfrac{|\Theta|}{\lambda}\big)\big] \le 1\big\}$ is the norm of the Orlicz space associated to the Young function $\Psi$; see Dellacherie and Meyer [22, Paragraph 97]. Now we are ready to apply Doob's inequality in the form [22, Inequality 103.1], since the Young conjugate of $\Psi$ is the moderate Young function $\Phi_A\circ\mathrm{quad}$, with associated constant $c_{\Phi_A\circ\mathrm{quad}} < \infty$; see Definition A.11 for the associated constants of a Young function. The above yields
\[ C_\Psi := \Big\|\sup_{t\in[0,\infty)} |N_t|\Big\|_\Psi \le c_{\Phi_A\circ\mathrm{quad}}\, \|N_\infty\|_\Psi < \infty. \tag{3.59} \]
Inequality (3.59) yields therefore the finiteness of $\|\sup_{t\in[0,\infty)} |N_t|\|_\Psi$, which, in conjunction with the fact that $\Psi$ is moderate, i.e. $c_\Psi < \infty$, provides also the finiteness of $\mathbb{E}\big[\Psi\big(\sup_{t\in[0,\infty)} |N_t|\big)\big]$. The latter can be easily concluded by Long [46, Theorem 3.1.1.(b),(d)] and the fact that
\[ \mathbb{E}\Big[\Psi\Big(\sup_{t\in[0,\infty)} |N_t|\Big)\Big] \le \mathbb{E}\Big[\Psi\Big(\frac{\sup_{t\in[0,\infty)} |N_t|}{C_\Psi}\Big)\Big] \mathbf{1}_{\{C_\Psi \le 1\}} + c_\Psi^{C_\Psi}\, \mathbb{E}\Big[\Psi\Big(\frac{\sup_{t\in[0,\infty)} |N_t|}{C_\Psi}\Big)\Big] \mathbf{1}_{\{C_\Psi > 1\}}, \]
which is finite in any case. Now, we use the finiteness of $\mathbb{E}\big[\Psi(\sup_{t\in[0,\infty)} |N_t|)\big]$, the Burkholder-Davis-Gundy inequality [31, Theorem 10.36], and the fact that $\Psi$ is moderate, to conclude that $\mathbb{E}\big[\Psi\big([N]^{1/2}_\infty\big)\big] < \infty$. $\square$

Corollary 3.33. The weak-limit $N$ in convergence (3.39) satisfies
\[ \mathbb{E}\Big[\int_{(0,\infty)\times\mathbb{R}^\ell} |\Delta N_s|\,\|x\|\,\mu^{X^{\infty,d}}(\mathrm{d}s,\mathrm{d}x)\Big] < \infty. \]

Proof. We have that
\begin{align*}
\mathbb{E}\Big[\int_{(0,\infty)\times\mathbb{R}^\ell} \big|\Delta N_s\big|\,\|x\|\,\mu^{X^{\infty,d}}(\mathrm{d}s,\mathrm{d}x)\Big]
&= \sum_{i=1}^\ell \mathbb{E}\Big[\int_{(0,\infty)\times\mathbb{R}^\ell} \big|\Delta N_s\big|\,|x^i|\,\mu^{X^{\infty,d}}(\mathrm{d}s,\mathrm{d}x)\Big] = \sum_{i=1}^\ell \mathbb{E}\Big[\sum_{s>0} \big|\Delta N_s\big|\,\big|\Delta X^{\infty,d,i}_s\big|\Big] \\
&\le \sum_{i=1}^\ell \mathbb{E}\Big[\Big(\sum_{s>0} (\Delta N_s)^2\Big)^{1/2} \Big(\sum_{s>0} (\Delta X^{\infty,d,i}_s)^2\Big)^{1/2}\Big] \le \sum_{i=1}^\ell \mathbb{E}\Big[[N]^{1/2}_\infty\,[X^{\infty,i}]^{1/2}_\infty\Big] \\
&\overset{\substack{\text{Young ineq.}\\ \text{Lem. 3.32}}}{\le} \sum_{i=1}^\ell \mathbb{E}\Big[\Psi\big([N]^{1/2}_\infty\big) + \Phi_A\circ\mathrm{quad}\big([X^{\infty,i}]^{1/2}_\infty\big)\Big] \overset{(3.55)}{\le} \sum_{i=1}^\ell \mathbb{E}\Big[\Psi\big([N]^{1/2}_\infty\big) + \Phi_A\big(\mathrm{Tr}\big[[X^\infty]_\infty\big]\big)\Big] < \infty,
\end{align*}
where in the last inequality we also used the convexity of $\Phi_A$, in order to take out the coefficient $\frac{1}{2}$ which appears due to the definition of the function $\mathrm{quad}$. $\square$

We conclude this sub-sub-section with the following result, which yields that $(X^{\infty,1}N, \dots, X^{\infty,\ell}N)^\top$ is a uniformly integrable martingale.

Proposition 3.34. The weak-limit $N$ in convergence (3.39) satisfies
\[ \langle X^{\infty,c,i}, N^c\rangle^{\mathbb{F}} = 0, \ \text{for every } i=1,\dots,\ell, \quad\text{and}\quad M_{\mu^{X^{\infty,d}}}[\Delta N\,|\,\widetilde{\mathcal{P}}_{\mathbb{F}}] = 0. \tag{3.60} \]

Proof.
We will apply Proposition 3.20 to the pair of $\mathbb{F}$-martingales $(X^\infty, N)$. First, recall Remark 3.25, i.e. that $X^\infty$ and $N$ are $\mathbb{F}$-quasi-left-continuous $\mathbb{F}$-martingales. Now we verify that the aforementioned pair indeed satisfies the requirements of Proposition 3.20.(i)-(iii).
• By Proposition 3.29, we have that $[X^\infty, N]$ is a uniformly integrable $\mathbb{F}$-martingale, hence the first condition is satisfied.
• By the same proposition, we also have that $[R_I \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}, N]$ is a uniformly integrable $\mathbb{F}$-martingale, for every $I\in J(X^\infty)$. Moreover, by Lemma A.9, we have that $\sigma\big(J(X^\infty)\big) = \mathcal{B}(\mathbb{R}^\ell\setminus\{0\})$. Hence, the second condition is also satisfied.
• Finally, the property $|\Delta N|\,\mu^{X^{\infty,d}} \in \widetilde{\mathcal{A}}_\sigma(\mathbb{F})$ is equivalent to
\[ \mathbb{E}\Big[\int_{(0,\infty)\times\mathbb{R}^\ell} \big|\Delta N_s\big|\,V(s,x)\,\mu^{X^{\infty,d}}(\mathrm{d}s,\mathrm{d}x)\Big] < \infty, \tag{3.61} \]
for some strictly positive $\mathbb{F}$-predictable function $V$. In view of Corollary 3.33, (3.61) holds for $V(\omega,t,x) = \sum_{i=1}^\ell |\pi^i(x)| = \|x\|$, which is $\mu^{X^{\infty,d}}$-a.e. strictly positive, and $\mathbb{F}$-predictable as deterministic. Therefore, (3.61) is valid, i.e. the third condition is satisfied. $\square$

3.7.3. The filtration $\mathbb{G}^\infty$ is immersed in the filtration $\mathbb{F}$. In this sub-sub-section, we use the notation and framework of subsection 3.7.1. Recall that the filtration $\mathbb{F}$ has been defined in (3.46), and that the subsequence $(k_l)_{l\in\mathbb{N}}$ is fixed and such that convergence (3.39) holds.

Definition 3.35. The filtration $\mathbb{G}$ is immersed in the filtration $\mathbb{F}$ if $\mathcal{H}^2(\mathbb{G},\infty;\mathbb{R}) \subset \mathcal{H}^2(\mathbb{F},\infty;\mathbb{R})$.

Lemma 3.36. Let $X_0 + X^{\infty,c,\mathbb{F}} + \mathrm{Id} \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}$ be the canonical representation of $X^\infty$ as an $\mathbb{F}$-martingale, and $X_0 + X^{\infty,c} + \mathrm{Id} \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}$ be the canonical representation of $X^\infty$ as a $\mathbb{G}^\infty$-martingale. Then the respective parts are indistinguishable, i.e.
\[ X^{\infty,c} = X^{\infty,c,\mathbb{F}}, \quad\text{and}\quad \mathrm{Id} \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)} = \mathrm{Id} \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}, \]
up to indistinguishability.
Therefore, we will simply denote the continuous part of the $\mathbb{F}$-martingale $X^\infty$ by $X^{\infty,c}$, and we will use indifferently $\mathrm{Id} \star \widetilde{\mu}^{(X^\infty,\mathbb{G}^\infty)}$ and $\mathrm{Id} \star \widetilde{\mu}^{(X^\infty,\mathbb{F})}$ to denote the discontinuous part of the $\mathbb{F}$-martingale $X^\infty$. (A summary of other terms describing the same property can be found in Tsirelson [68].)

Proof. By Proposition 3.29, the process $X^\infty$ is an $\mathbb{F}$-martingale, whose canonical representation is given by
\[ X^\infty = X_0 + X^{\infty,c,\mathbb{F}} + \mathrm{Id} \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}, \]
see [34, Corollary II.2.38]. However, $X^\infty$ is $\mathbb{G}^\infty$-adapted, which in conjunction with Föllmer and Protter [30, Theorem 2.2] implies that the process $X^{\infty,c,\mathbb{F}} + \mathrm{Id} \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}$ is a $\mathbb{G}^\infty$-martingale. On the other hand, the canonical representation of the process $X^\infty$ as a $\mathbb{G}^\infty$-martingale is $X^\infty = X_0 + X^{\infty,c} + \mathrm{Id} \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}$. Hence, by [34, Theorem I.4.18], we can conclude the indistinguishability of the respective parts, due to the uniqueness of the decomposition. $\square$

Lemma 3.37. We have $\langle X^\infty\rangle^{\mathbb{G}^\infty} = \langle X^\infty\rangle^{\mathbb{F}}$. Therefore, we will denote the dual predictable projection of $X^\infty$ simply by $\langle X^\infty\rangle$.

Proof. In Proposition 3.29, we showed that the process $[X^\infty] - \langle X^\infty\rangle^{\mathbb{G}^\infty}$ is an $\mathbb{F}$-martingale. On the other hand, $\big(\mathrm{Tr}\big[[X^\infty]_t\big]\big)_{t\in[0,\infty]}$ is uniformly integrable, and in particular of class (D). Consequently, by [34, Theorem I.3.18], there exists a unique $\mathbb{F}$-predictable process, say $\langle X^\infty\rangle^{\mathbb{F}}$, such that $[X^\infty] - \langle X^\infty\rangle^{\mathbb{F}}$ is a uniformly integrable $\mathbb{F}$-martingale. Recall that, by definition, $\mathbb{G}^\infty \subset \mathbb{F}$, therefore $\mathcal{P}_{\mathbb{G}^\infty} \subset \mathcal{P}_{\mathbb{F}}$, which allows us to conclude that $\langle X^\infty\rangle^{\mathbb{G}^\infty} - \langle X^\infty\rangle^{\mathbb{F}}$ is a uniformly integrable $\mathbb{F}$-predictable $\mathbb{F}$-martingale of finite variation. Therefore, by [34, Corollary I.3.16], we obtain that $\langle X^\infty\rangle^{\mathbb{G}^\infty} - \langle X^\infty\rangle^{\mathbb{F}} = 0$, up to an evanescent set. $\square$

Corollary 3.38. The process $X^\infty$ is $\mathbb{F}$-quasi-left-continuous.
Therefore,
\[ \nu^{(X^{\infty,d},\mathbb{F})}\big(\omega; \{t\}\times\mathbb{R}^\ell\big) = 0, \quad\text{for every } (\omega,t)\in\Omega\times\mathbb{R}_+. \tag{3.62} \]

Proof. The $\mathbb{F}$-quasi-left-continuity of $X^\infty$ is immediate by the continuity of $\langle X^\infty\rangle$ and [34, Theorem I.4.2]. By [34, Corollary II.1.19], we conclude that (3.62) holds. $\square$

Now, we can obtain some useful properties of the predictable quadratic covariation of the continuous and the purely discontinuous martingale parts of $X^\infty$.

Lemma 3.39. We have $\langle X^{\infty,c}\rangle^{\mathbb{F}} = \langle X^{\infty,c}\rangle^{\mathbb{G}^\infty}$ and $\nu^{(X^{\infty,d},\mathbb{F})}|_{\mathcal{P}_{\mathbb{G}^\infty}} = \nu^{(X^{\infty,d},\mathbb{G}^\infty)}$. Therefore, we will denote the dual predictable projection of $X^{\infty,c}$ simply by $\langle X^{\infty,c}\rangle$.

Proof. For the reader's convenience, we separate the proof in two parts.
(i) First, we prove that $\nu^{(X^{\infty,d},\mathbb{F})}|_{\mathcal{P}_{\mathbb{G}^\infty}} = \nu^{(X^{\infty,d},\mathbb{G}^\infty)}$. Indeed, recalling Remark 2.2 and that $X^\infty$ is both $\mathbb{G}^\infty$- and $\mathbb{F}$-quasi-left-continuous, it holds, for every $t\in[0,\infty)$,
\[ \int_{\mathbb{R}^\ell} x\,\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}(\{t\}\times\mathrm{d}x) \overset{\text{(M1)}}{=} \int_{\mathbb{R}^\ell} x\,\mu^{X^\infty}(\{t\}\times\mathrm{d}x) \overset{(3.62)}{=} \int_{\mathbb{R}^\ell} x\,\widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}(\{t\}\times\mathrm{d}x), \quad \mathbb{P}\text{-a.s.} \tag{3.63} \]
Consequently, for every non-negative, $\mathbb{G}^\infty$-predictable function $\theta$, using [34, Theorem II.1.8], it holds
\[ \mathbb{E}\big[\theta * \nu^{(X^{\infty,d},\mathbb{F})}_\infty\big] \overset{(3.63)}{=} \mathbb{E}\Big[\sum_{s>0} \theta(s, \Delta X^{\infty,d}_s)\,\mathbf{1}_{[\Delta X^{\infty,d}_s \ne 0]}\Big] = \mathbb{E}\big[\theta * \nu^{(X^{\infty,d},\mathbb{G}^\infty)}_\infty\big]. \]
Therefore, we can conclude that $\nu^{(X^{\infty,d},\mathbb{F})}\big|_{\mathcal{P}_{\mathbb{G}^\infty}} = \nu^{(X^{\infty,d},\mathbb{G}^\infty)}$.
(ii) Now we prove that $\langle X^{\infty,c}\rangle^{\mathbb{G}^\infty} = \langle X^{\infty,c}\rangle^{\mathbb{F}}$. We will combine the previous part with Lemma 3.37. The map $\mathrm{Id}$ is both a $\mathbb{G}^\infty$- and an $\mathbb{F}$-predictable function, as deterministic and continuous. Then
\[ \langle X^{\infty,c}\rangle^{\mathbb{F}} = \langle X^\infty\rangle - \langle \mathrm{Id} \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}\rangle^{\mathbb{F}} \overset{\substack{\text{Lem. 3.36}\\ \text{Cor. 3.38}}}{=} \langle X^\infty\rangle - |\mathrm{Id}|^2 * \nu^{(X^{\infty,d},\mathbb{F})} \overset{\text{(i)}}{=} \langle X^\infty\rangle - |\mathrm{Id}|^2 * \nu^{(X^{\infty,d},\mathbb{G}^\infty)} \overset{\text{(M1)}}{=} \langle X^\infty\rangle - \langle \mathrm{Id} \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}\rangle^{\mathbb{G}^\infty} = \langle X^{\infty,c}\rangle^{\mathbb{G}^\infty}. \qquad\square \]

In view of the previous lemmata, we are able to prove that every $\mathbb{G}^\infty$-stochastic integral with respect to $X^\infty$ is also an $\mathbb{F}$-martingale.
The exact statement is provided below. (See [34, Definition I.1.46].)

Lemma 3.40. Let $Z \in \mathcal{H}^2(X^{\infty,c}, \mathbb{G}^\infty, \infty; \mathbb{R}^\ell)$; then we have $Z \cdot X^{\infty,c} \in \mathcal{H}^{2,c}(\mathbb{F},\infty;\mathbb{R})$.

Proof. By Lemma 3.39, we can easily conclude that $\mathcal{H}^2(X^{\infty,c}, \mathbb{G}^\infty, \infty; \mathbb{R}^\ell) \subset \mathcal{H}^2(X^{\infty,c}, \mathbb{F}, \infty; \mathbb{R}^\ell)$. We are going to prove the required property initially for simple integrands. Assume that $\rho, \sigma$ are $\mathbb{G}^\infty$-stopping times such that $\rho \le \sigma$, $\mathbb{P}$-a.s., and that $\psi$ is an $\mathbb{R}^\ell$-valued, bounded and $\mathcal{G}^\infty_\rho$-measurable random variable. Then, see [34, Theorem III.4.5], the $\mathbb{G}^\infty$-stochastic integral $(\psi \mathbf{1}_{\rrbracket\rho,\sigma\rrbracket}) \cdot X^{\infty,c}$ is defined as
\[ \psi \mathbf{1}_{\rrbracket\rho,\sigma\rrbracket} \cdot X^{\infty,c} = \sum_{i=1}^\ell \int_0^\cdot \psi^i \mathbf{1}_{\rrbracket\rho,\sigma\rrbracket}\,\mathrm{d}X^{\infty,c,i}. \tag{3.64} \]
Treating now $\psi$ as an $\mathcal{F}_\rho$-measurable variable, since $\mathbb{G}^\infty \subset \mathbb{F}$, and using that $X^{\infty,c}$ is an $\mathbb{F}$-martingale, Proposition 3.29 yields that the representation in (3.64) is also an $\mathbb{F}$-martingale. By [34, Theorem III.4.5, Part a], we can conclude for an arbitrary $Z \in \mathcal{H}^2(X^{\infty,c}, \mathbb{G}^\infty, \infty; \mathbb{R}^\ell)$, since the process $Z \cdot X^{\infty,c}$ can be approximated by $\mathbb{F}$-martingales with representation as in (3.64). $\square$

Lemma 3.41. Let $U \in \mathcal{H}^2(\mu^{X^{\infty,d}}, \mathbb{G}^\infty, \infty; \mathbb{R})$; then it holds that $U \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)} \in \mathcal{H}^{2,d}(\mathbb{F},\infty;\mathbb{R})$. Moreover, the processes $U \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}$ and $U \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}$ are indistinguishable.

Proof. Let $U \in \mathcal{H}^2(\mu^{X^{\infty,d}}, \mathbb{G}^\infty, \infty; \mathbb{R})$. The inclusion $\mathcal{P}_{\mathbb{G}^\infty} \subset \mathcal{P}_{\mathbb{F}}$ and the equalities
\[ \mathbb{E}\Big[\sum_{s>0} \Big|\int_{\mathbb{R}^\ell} U(s,x)\,\widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}(\{s\}\times\mathrm{d}x)\Big|\Big] \overset{\text{Cor. 3.38}}{=} \mathbb{E}\Big[\sum_{s>0} \Big|\int_{\mathbb{R}^\ell} U(s,x)\,\mu^{X^\infty}(\{s\}\times\mathrm{d}x)\Big|\Big] \overset{\text{(M1)}}{=} \mathbb{E}\Big[\sum_{s>0} \Big|\int_{\mathbb{R}^\ell} U(s,x)\,\widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}(\{s\}\times\mathrm{d}x)\Big|\Big] < \infty \]
yield that $U \in \mathcal{H}^2(\mu^{X^{\infty,d}}, \mathbb{F}, \infty; \mathbb{R})$, hence the $\mathbb{F}$-martingale $U \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}$ is well-defined.

Let now $W$ be a positive $\mathbb{G}^\infty$-predictable function such that $\mathbb{E}\big[W * \mu^{X^{\infty,d}}_\infty\big] < \infty$. By [34, Theorem II.1.8], we have that $\mathbb{E}\big[W * \nu^{(X^{\infty,d},\mathbb{G}^\infty)}_\infty\big] < \infty$, as well as $\mathbb{E}\big[W * \nu^{(X^{\infty,d},\mathbb{F})}_\infty\big] < \infty$. Then, the property $\nu^{(X^{\infty,d},\mathbb{F})}|_{\mathcal{P}_{\mathbb{G}^\infty}} = \nu^{(X^{\infty,d},\mathbb{G}^\infty)}$ from Lemma 3.39 translates into
\[ W \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)} = W * \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)} = W * \mu^{X^{\infty,d}} - W * \nu^{(X^{\infty,d},\mathbb{G}^\infty)} \overset{\text{Lem. 3.39}}{=} W * \mu^{X^{\infty,d}} - W * \nu^{(X^{\infty,d},\mathbb{F})} = W * \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})} = W \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}, \]
up to indistinguishability, where in the first and the last equalities we used [34, Proposition II.1.28], while in the second, as well as in the second-to-last one, we used the definition of the compensated integer-valued measure. It is immediate that the above equality holds also if $W$ is a real-valued $\mathbb{G}^\infty$-predictable function such that $\mathbb{E}\big[|W| * \mu^{X^{\infty,d}}_\infty\big] < \infty$, i.e.
\[ W \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)} = W \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}, \quad\text{for } W \text{ as described above.} \tag{3.65} \]
In other words, when the $\mathbb{G}^\infty$-predictable integrand $W$ is such that $W \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}$ is of finite variation, then it is indistinguishable from $W \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}$.

In view of the above discussion, we can conclude that, for an arbitrary $U \in \mathcal{H}^2(\mu^{X^{\infty,d}}, \mathbb{G}^\infty, \infty; \mathbb{R})$, we also have $U \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)} = U \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}$ up to indistinguishability.
Indeed, let us denote by $(\tau_m)_{m\in\mathbb{N}}$ the sequence of $\mathbb{G}^\infty$-totally inaccessible $\mathbb{G}^\infty$-stopping times which exhausts the thin $\mathbb{G}^\infty$-optional set $[\Delta X^{\infty,d} \ne 0]$ (see [34, Proposition I.1.32, Proposition I.2.26]), and fix some $U \in \mathcal{H}^2(\mu^{X^{\infty,d}}, \mathbb{G}^\infty, \infty; \mathbb{R})$. Then
\[ \Delta\big(U \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}\big)_{\tau_m} = U\big(\tau_m, \Delta X^{\infty,d}_{\tau_m}\big) \in L^2(\Omega, \mathcal{G}^\infty_{\tau_m}, \mathbb{P}; \mathbb{R}), \quad\text{for every } m\in\mathbb{N}. \]
By [12, Theorem 10.2.10], we know that, for every $m\in\mathbb{N}$, there exists a continuous and $\mathbb{G}^\infty$-adapted process $\Pi^{*,m}$ such that
\[ M^m := U\big(\tau_m, \Delta X^{\infty,d}_{\tau_m}\big)\,\mathbf{1}_{\llbracket \tau_m,\infty\llbracket} - \Pi^{*,m} \in \mathcal{H}^{2,d}\big(\mathbb{G}^\infty,\infty;\mathbb{R}\big), \quad\text{for every } m\in\mathbb{N}. \]
Using (M3), for every $m\in\mathbb{N}$, there exists $U^m \in \mathcal{H}^2(\mu^{X^{\infty,d}}, \mathbb{G}^\infty, \infty; \mathbb{R})$ such that
\[ M^m = U^m \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)} \overset{(3.65)}{=} U^m \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}, \]
where the second equality holds because $M^m$ is a process of finite variation for every $m\in\mathbb{N}$, since it is a single-jump process. Consequently, $M^m \in \mathcal{H}^{2,d}(\mathbb{F},\infty;\mathbb{R})$, for every $m\in\mathbb{N}$. Finally, by [12, Theorem 10.2.14], we can approximate (the precise argument is presented in the proof of the aforementioned theorem) both $U \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}$ and $U \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}$ by the same sequence $\big(\sum_{m=1}^n M^m\big)_{n\in\mathbb{N}}$. By the uniqueness of the limit, $U \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)} = U \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}$ up to indistinguishability. $\square$

Concluding, since every $\mathbb{G}^\infty$-martingale is also an $\mathbb{F}$-martingale, the filtration $\mathbb{G}^\infty$ is immersed in $\mathbb{F}$. The following result shows that the orthogonal decomposition remains the same under both filtrations.

Corollary 3.42. The orthogonal decomposition of $Y^\infty$ with respect to $(X^{\infty,c}, \mu^{X^{\infty,d}}, \mathbb{F})$ is given by
\[ Y^\infty = Y_0 + Z^\infty \cdot X^{\infty,c} + U^\infty \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}, \tag{3.66} \]
where $Z^\infty \in \mathcal{H}^2(X^{\infty,c}, \mathbb{G}^\infty, \infty; \mathbb{R})$ and $U^\infty \in \mathcal{H}^2(\mu^{X^{\infty,d}}, \mathbb{G}^\infty, \infty; \mathbb{R})$ are determined by the orthogonal decomposition of $Y^\infty$ with respect to $(X^{\infty,c}, \mu^{X^{\infty,d}}, \mathbb{G}^\infty)$; see Theorem 3.3.
In other words, the orthogonal decomposition of $Y^\infty$ with respect to $(X^{\infty,c}, \mu^{X^{\infty,d}}, \mathbb{F})$ is indistinguishable from the orthogonal decomposition of $Y^\infty$ with respect to $(X^{\infty,c}, \mu^{X^{\infty,d}}, \mathbb{G}^\infty)$.

Proof. In Proposition 3.29, we have proven that $Y^\infty \in \mathcal{H}^2(\mathbb{F},\infty;\mathbb{R})$, therefore the orthogonal decomposition of $Y^\infty$ with respect to $(X^{\infty,c}, \mu^{X^{\infty,d}}, \mathbb{F})$ is well-defined. Assume that
\[ Y^\infty = Y_0 + Z^{\infty,\mathbb{F}} \cdot X^{\infty,c} + U^{\infty,\mathbb{F}} \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})} + N^{\infty,\mathbb{F}}, \]
where $Z^{\infty,\mathbb{F}} \in \mathcal{H}^2(X^{\infty,c}, \mathbb{F}, \infty; \mathbb{R})$, $U^{\infty,\mathbb{F}} \in \mathcal{H}^2(\mu^{X^{\infty,d}}, \mathbb{F}, \infty; \mathbb{R})$ and $N^{\infty,\mathbb{F}} \in \mathcal{H}^2(X^\perp, \mathbb{F}, \infty; \mathbb{R})$. On the other hand, by Lemmata 3.40 and 3.41, we have that $Z^\infty \cdot X^{\infty,c} \in \mathcal{H}^{2,c}(\mathbb{F},\infty;\mathbb{R})$ and $U^\infty \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)} \in \mathcal{H}^{2,d}(\mathbb{F},\infty;\mathbb{R})$, that is to say
\[ Y^\infty = Y_0 + Z^\infty \cdot X^{\infty,c} + U^\infty \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)} = Y_0 + Z^\infty \cdot X^{\infty,c} + U^\infty \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}. \]
Hence, from [34, Theorem III.4.24], we get that, up to indistinguishability,
\[ Z^\infty \cdot X^{\infty,c} = Z^{\infty,\mathbb{F}} \cdot X^{\infty,c}, \quad U^\infty \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)} = U^{\infty,\mathbb{F}} \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}, \quad N^{\infty,\mathbb{F}} = 0. \qquad\square \]

Proof of the main theorem. This subsection is devoted to the proof of the main theorem. In view of the preparatory results obtained in the previous sections, as well as of the outline of the proof presented in subsection 3.4, the following proof basically amounts to proving that the remaining step is valid.

Proof of Theorem 3.3. By Lemma 3.21.(iii), we have that the sequence $\big((N^k, \langle N^k\rangle)\big)_{k\in\mathbb{N}}$ is tight in $D(\mathbb{R}^2)$. Therefore, an arbitrary subsequence $\big((N^{k_l}, \langle N^{k_l}\rangle)\big)_{l\in\mathbb{N}}$ has a further subsequence $\big((N^{k_{l_m}}, \langle N^{k_{l_m}}\rangle)\big)_{m\in\mathbb{N}}$ which converges in law, say to $(N,\Xi)$, i.e.
\[ (N^{k_{l_m}}, \langle N^{k_{l_m}}\rangle)^\top \xrightarrow[m\to\infty]{\mathcal{L}} (N,\Xi)^\top, \tag{3.67} \]
where $N$ is a càdlàg process and $\Xi$ is a continuous and increasing process. The continuity of $\Xi$ follows from Lemma 3.21.(i).
Therefore, we can use the results of subsection 3.7 for the subsequence $\big((N^{k_{l_m}}, \langle N^{k_{l_m}}\rangle)\big)_{m\in\mathbb{N}}$ and the pair $(N,\Xi)$. By Proposition 3.34 and Corollary 3.42, we conclude
\[ \langle Y^\infty, N\rangle = \langle Z^\infty \cdot X^{\infty,c}, N^c\rangle + \langle U^\infty \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{G}^\infty)}, N^d\rangle = \langle Z^\infty \cdot X^{\infty,c}, N^c\rangle + \langle U^\infty \star \widetilde{\mu}^{(X^{\infty,d},\mathbb{F})}, N^d\rangle = Z^\infty \cdot \langle X^{\infty,c}, N^c\rangle + \big(U^\infty\, M_{\mu^{X^{\infty,d}}}[\Delta N\,|\,\widetilde{\mathcal{P}}_{\mathbb{F}}]\big) * \nu^{(X^{\infty,d},\mathbb{F})} \overset{(3.60)}{=} 0, \tag{3.68} \]
i.e. $[Y^\infty, N]$ is an $\mathbb{F}$-martingale. On the other hand, we have proved in Proposition 3.29 that $[Y^\infty, N] - \Xi$ is an $\mathbb{F}$-martingale as well. By subtracting the two martingales, we obtain that $\Xi$ is also an $\mathbb{F}$-martingale. Hence, $\Xi$ is an $\mathbb{F}$-predictable process of finite variation and a martingale, therefore it has to be constant; see [34, Corollary I.3.16]. Now, the convergence $\langle N^{k_{l_m}}\rangle \xrightarrow[m\to\infty]{\mathcal{L}} \Xi$ implies that $\langle N^{k_{l_m}}\rangle_0 \xrightarrow[m\to\infty]{\mathcal{L}} \Xi_0$. Recall that, by definition, $\langle N^k\rangle_0 = 0$ for every $k\in\mathbb{N}$, hence $\Xi_0 = 0$. Therefore $\Xi = 0$ and, since the limit is a deterministic process, the convergence above is equivalent to the following:
\[ \langle N^{k_{l_m}}\rangle \xrightarrow[m\to\infty]{(J_1,\,\mathbb{P})} 0. \]
Since the limit above is common for every subsequence and $(D, J_1(\mathbb{R}))$ is Polish, we can conclude from [23, Theorem 9.2.1] that $\langle N^k\rangle \xrightarrow{(J_1(\mathbb{R}),\,\mathbb{P})} 0$. Using Lemma 3.21.(v) and [31, Theorem 1.11], we can strengthen the above convergence to
\[ \langle N^k\rangle \xrightarrow{(J_1(\mathbb{R}),\,L^1)} 0. \tag{3.69} \]
Then, we can also conclude that $N^k \xrightarrow{(J_1(\mathbb{R}),\,L^2)} 0$. Indeed, for every $R>0$, by Doob's $L^2$-inequality we obtain
\[ \mathbb{E}\Big[\sup_{t\in[0,R]} |N^k_t|^2\Big] \le 4\,\mathbb{E}\big[|N^k_R|^2\big] = 4\,\mathbb{E}\big[\langle N^k\rangle_R\big] \xrightarrow[k\to\infty]{} 0, \]
which implies the convergence $\mathbb{E}[d(N^k, 0)] \longrightarrow 0$. Using the convergence $Y^k \xrightarrow{(J_1(\mathbb{R}),\,L^2)} Y^\infty$ and the convergence of $(N^k)_{k\in\mathbb{N}}$ to the zero process, which is trivially continuous, we can obtain the joint convergence $(Y^k, N^k) \xrightarrow{(J_1(\mathbb{R}^2),\,L^2)} (Y^\infty, 0)$.
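The Doob $L^2$-bound invoked above, $\mathbb{E}[\sup_{t\le R}|M_t|^2] \le 4\,\mathbb{E}[|M_R|^2]$, can be sanity-checked numerically on a toy martingale. The following sketch (purely illustrative, not part of the argument) verifies it by exhaustive enumeration of all paths of a simple symmetric $\pm 1$ random walk, for which $\mathbb{E}[S_n^2] = n$:

```python
from itertools import product

# Exhaustively verify Doob's L^2 inequality
#   E[ max_{0<=t<=n} S_t^2 ] <= 4 * E[ S_n^2 ]  ( = 4n )
# for the simple symmetric random walk S, a discrete-time martingale.
n = 10  # horizon; 2**10 = 1024 equally likely paths

lhs = 0.0  # accumulates E[max_t S_t^2]
rhs = 0.0  # accumulates E[S_n^2]
num_paths = 2 ** n
for steps in product((-1, 1), repeat=n):
    s, running_max_sq = 0, 0
    for step in steps:
        s += step
        running_max_sq = max(running_max_sq, s * s)
    lhs += running_max_sq / num_paths
    rhs += s * s / num_paths

assert abs(rhs - n) < 1e-9  # E[S_n^2] = n for the simple walk
assert lhs <= 4 * rhs       # Doob's L^2 bound
print(f"E[max S_t^2] = {lhs:.4f} <= 4*E[S_n^2] = {4*rhs:.4f}")
```

The constant $4$ is sharp only asymptotically; for small horizons the left-hand side is well below the bound.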
Moreover, using the orthogonal decompositions of Y^k and Y^∞ and the previous results, we obtain

Z^k · X^{k,c} + U^k ⋆ μ̃^{(X^{k,d},G^k)} = Y^k − N^k − Y^k_0 ⟶ Y^∞ − Y^∞_0 = Z^∞ · X^{∞,c} + U^∞ ⋆ μ̃^{(X^{∞,d},G^∞)} in (J₁(R), L²),

which then yields (3.2). Thus, it only remains to prove convergence (3.3). Since the sequences (X^k)_{k∈N} and (Y^k)_{k∈N} satisfy the conditions of Theorem 2.16, we obtain in particular that the sequences (Y^k + X^k)_{k∈N} and (Y^k − X^k)_{k∈N} also satisfy the conditions of this theorem. Therefore we can conclude that

⟨Y^k + X^k⟩ ⟶ ⟨Y^∞ + X^∞⟩ and ⟨Y^k − X^k⟩ ⟶ ⟨Y^∞ − X^∞⟩ in (J₁(R), P).

By the continuity of the limiting processes, recall (M1) and [34, Theorem 4.2], using the identity

⟨Y^k, X^k⟩ = ¼(⟨Y^k + X^k⟩ − ⟨Y^k − X^k⟩), for every k ∈ N,

and convergence (3.69), we can conclude that

(⟨Y^k⟩, ⟨Y^k, X^k⟩, ⟨N^k⟩) ⟶ (⟨Y^∞⟩, ⟨Y^∞, X^∞⟩, 0) in (J₁(R × R^ℓ × R), P).

In order to strengthen the last convergence to an L¹-convergence, we only need to recall that by Theorem 2.16 the sequences (Tr[⟨X^k⟩_∞])_{k∈N} and (⟨Y^k⟩_∞)_{k∈N} are uniformly integrable. Now Lemma 2.17 provides the uniform integrability of (‖Var(⟨Y^k, X^k⟩)_∞‖)_{k∈N}, which allows us to conclude. □

Appendix A. Auxiliary results

A.1. Joint convergence for the Skorokhod J₁-topology. This appendix contains some useful results about the joint convergence of sequences in the Skorokhod J₁-topology, which are heavily used in Subsection 2.3 and throughout Section 3. Let us recall that the spaces D^q = D([0,∞); R^q) and D([0,∞); R)^q do not coincide topologically; see [34, Statements VI.1.21-22].
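A minimal example, in the spirit of [34, Statement VI.1.21], of why the two topologies differ (the example and notation are ours): componentwise J₁-convergence does not control jump times that merge in the limit.

```latex
% Two step functions whose jump times merge at t = 1:
\alpha^n := \mathbb{1}_{[1-\frac{1}{n},\infty)}, \qquad
\beta^n := \mathbb{1}_{[1+\frac{1}{n},\infty)}, \qquad n \in \mathbb{N}.
% Each coordinate converges:
\alpha^n \xrightarrow{\;J_1(\mathbb{R})\;} \mathbb{1}_{[1,\infty)},
\qquad
\beta^n \xrightarrow{\;J_1(\mathbb{R})\;} \mathbb{1}_{[1,\infty)},
% but the sum does not: \alpha^n + \beta^n has two unit jumps, at
% 1 - 1/n and 1 + 1/n, while the candidate limit
2\,\mathbb{1}_{[1,\infty)}
% has a single jump of size 2, and no single time change can match one
% jump to two separated ones. Hence (\alpha^n, \beta^n) converges in
% D([0,\infty);\mathbb{R})^2 but not in D^2 = D([0,\infty);\mathbb{R}^2).
```

This is exactly the failure that the partial-sum condition in the joint-convergence lemma below is designed to rule out.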
Proposition VI.2.2 in [34] describes the relationship between the convergence on the Skorokhod space of the product state space and the convergence on the product of the Skorokhod spaces of one-dimensional state spaces; see also Aldous [1, Lemma 3.5] and Ethier and Kurtz [29, Proposition 3.6.5]. A suitable variation of [34, Proposition VI.2.2] is provided in [17, Lemma 1], which we state here for convenience.

Lemma A.1. Let α^n := (α^{n,1}, ..., α^{n,q})^⊤ ∈ D^q, for every n ∈ N ∪ {∞}. The convergence α^n ⟶ α^∞ in J₁(R^q) holds if and only if the following hold:

(i) α^{n,i} ⟶ α^{∞,i} in J₁(R), for i = 1, ..., q, and
(ii) Σ_{i=1}^p α^{n,i} ⟶ Σ_{i=1}^p α^{∞,i} in J₁(R), for p = 1, ..., q.

The following corollary is also useful for our purposes.

Corollary A.2. Let α^n, β^n, γ^n ∈ D¹, for every n ∈ N ∪ {∞}. If (α^n, β^n)^⊤ ⟶ (α^∞, β^∞)^⊤ in J₁(R²) and (α^n, γ^n)^⊤ ⟶ (α^∞, γ^∞)^⊤ in J₁(R²), then (α^n, β^n, γ^n)^⊤ ⟶ (α^∞, β^∞, γ^∞)^⊤ in J₁(R³).

Proof. Consider the sequence ((α^n, β^n, −β^n, γ^n)^⊤)_{n∈N}. Using that the pairs converge, conditions (i)-(ii) of Lemma A.1 are satisfied, and the desired result follows since (α^n, β^n, −β^n, γ^n)^⊤ ⟶ (α^∞, β^∞, −β^∞, γ^∞)^⊤ in J₁(R⁴). □

Remark A.3. The result above does not depend on the dimension of the state spaces, and can be generalised inductively to an arbitrary number of sequences, as long as a common converging sequence exists.

The following lemma is another convenient tool when we want to conclude joint convergence.

Lemma A.4. Let α^n := (α^{n,1}, ..., α^{n,q})^⊤ ∈ D^q for every n ∈ N ∪ {∞}, and let f : R^q ⟶ R^p be a continuous function. If α^n ⟶ α^∞ in J₁(R^q), then f(α^n) ⟶ f(α^∞) in J₁(R^p).

Proof. See [1, Lemma 2.8]. □

A.1.1. J₁-continuous functions. Let

I := { I ⊂ R, I is a subinterval of (−∞, 0) or I is a subinterval of (0, ∞) }.
(A.1)

The aim of this sub-sub-section is the following: given a sequence α^n ⟶ α^∞ in J₁(R^q) and a function g : R^q ⟶ R, we want to define a sequence (ζ^n)_{n∈N} ⊂ D¹, where ζ^n will be constructed using α^n for all n ∈ N, such that ζ^n ⟶ ζ^∞ in J₁(R).

Proposition A.5. Fix q ∈ N.

(i) The function D^q ∋ α ⟼ s_n(α, I) ∈ [0, ∞] is continuous at each point (α, I) ∈ D^q × J(α), for n ∈ N.
(ii) If s_n(α, I) < ∞, for some n ∈ N and I ∈ J(α), then the function D^q ∋ α ⟼ Δα_{s_n(α,I)} ∈ R^q is continuous.

Proof. Let (α^k)_{k∈N} be such that α^k ⟶ α^∞ in J₁(R^q) and I := ∏_{i=1}^q I_i ∈ J(α^∞). Observe that J_I ≠ ∅, by definition of J(α^∞). We define s^{k,n} := s_n(α^k, I), for k ∈ N ∪ {∞} and n ∈ N.

(i) The convergence s^{k,0} ⟶ s^{∞,0}, as k → ∞, holds by definition. Assume that the convergence s^{k,n} ⟶ s^{∞,n}, as k → ∞, holds for some n ∈ N. We will prove that the convergence s^{k,n+1} ⟶ s^{∞,n+1}, as k → ∞, holds as well.

Before we proceed, fix a positive number u[α^∞, I] such that

u[α^∞, I] ∉ ∪_{i∈J_I} { |u|, u ∈ W(α^{∞,i}) } and u[α^∞, I] ≤ min ∪_{i∈J_I} { |v|, v ∈ ∂I_i }.

Observe now that, for i ∈ J_I and for U := (−∞, −u[α^∞, I]) ∪ (u[α^∞, I], ∞), the sequence (s_l(α^{∞,i}, U))_{l∈N} exhausts the set of times at which α^{∞,i} exhibits a jump of height greater than u[α^∞, I], i.e.

{ t ∈ R₊, |Δα^{∞,i}_t| > u[α^∞, I] } ⊂ (s_l(α^{∞,i}, U))_{l∈N}, for every i ∈ J_I. (A.6)

We will distinguish between several different cases now.

Case 1: s^{∞,n+1} < ∞. By Property (A.6), there exist unique l_{i,n}, l_{i,n+1} ∈ N with l_{i,n} < l_{i,n+1} such that

s_{l_{i,n}}(α^{∞,i}, U) = s^{∞,n} and s_{l_{i,n+1}}(α^{∞,i}, U) = s^{∞,n+1}, for every i ∈ J_I.
(A.7)

By [34, Proposition VI.2.7] and the above identities, we obtain

s_{l_{i,n}}(α^{k,i}, U) ⟶ s^{∞,n} and Δα^{k,i}_{s_{l_{i,n}}(α^{k,i},U)} ⟶ Δα^{∞,i}_{s^{∞,n}}, as k → ∞, for every i ∈ J_I, (A.8)

as well as

s_{l_{i,n+1}}(α^{k,i}, U) ⟶ s^{∞,n+1} and Δα^{k,i}_{s_{l_{i,n+1}}(α^{k,i},U)} ⟶ Δα^{∞,i}_{s^{∞,n+1}}, as k → ∞, for every i ∈ J_I. (A.9)

(Recall that ∂A denotes the |·|-boundary of the set A ⊂ R. Observe also that for α ∈ D¹ the time point t_p(α, u) defined in [34, Definition VI.2.6] can be rewritten using our notation as t_p(α, u) = s_p(α, (−∞, −u) ∪ (u, ∞)).)

In particular, by [34, Proposition VI.2.1.b)], the convergence s^{k,n} ⟶ s^{∞,n}, which has been assumed true as the induction hypothesis, and Convergence (A.8), we can obtain that

(s_{l_{i,n}}(α^{k,i}, U) − s^{k,n})_{k∈N} ∈ c₀₀(N), for every i ∈ J_I, (A.10)

where c₀₀(N) := { (γ^m)_{m∈N} ⊂ R^N, ∃ m₀ ∈ N such that γ^m = 0 for every m ≥ m₀ }. Define

k₀^{n,i} := max{ k ∈ N, s_{l_{i,n}}(α^{k,i}, U) ≠ s^{k,n} }, for i ∈ J_I.

By Property (A.10), we obtain that k₀^{n,i} < ∞ for every i ∈ J_I. Since J_I is a finite set, the number k̄₀^n := max{ k₀^{n,i}, i ∈ J_I } is well defined and finite, therefore

s_{l_{1,n}}(α^{k,1}, U) = ⋯ = s_{l_{ℓ,n}}(α^{k,ℓ}, U) = s^{k,n}, for every k > k̄₀^n. (A.11)

Now, in view of Convergence (A.9), we can conclude the induction step once we prove the analogue of (A.10) for m = n + 1, i.e.

(s_{l_{i,n+1}}(α^{k,i}, U) − s^{k,n+1})_{k∈N} ∈ c₀₀(N), for every i ∈ J_I. (A.12)

At this point we further distinguish between two cases.

Case 1.1: For every i ∈ J_I, we have l_{i,n+1} = 1 + l_{i,n}. By [34, Proposition VI.2.1.b)], Convergence (A.9) and the convergence α^k ⟶ α^∞ in J₁(R^q), we can conclude that

(s_{l_{i,n+1}}(α^{k,i}, U) − s_{l_{j,n+1}}(α^{k,j}, U))_{k∈N} ∈ c₀₀(N), for every i, j ∈ J_I. (A.13)

Therefore, we can fix hereinafter an index from J_I and we will do so for μ := min J_I, i.e.
μ is the minimum element of J_I. Define

k₀^{n+1,i} := max{ k ∈ N, s_{l_{i,n+1}}(α^{k,i}, U) ≠ s_{l_{μ,n+1}}(α^{k,μ}, U) }, for i ∈ J_I ∖ {μ}.

By Property (A.13), we obtain that k₀^{n+1,i} < ∞ for every i ∈ J_I ∖ {μ}. Since J_I ∖ {μ} is a finite set, the number k̄₀^{n+1} := max{ k₀^{n+1,i}, i ∈ J_I ∖ {μ} } is well defined and finite. Observe that

s_{l_{i,n}+1}(α^{k,i}, U) = s_{l_{i,n+1}}(α^{k,i}, U) = s_{l_{μ,n+1}}(α^{k,μ}, U), for k > k̄₀^{n+1} and for i ∈ J_I, (A.14)

where the first equality holds by assumption. Moreover, by Convergence (A.9) we obtain that

∏_{i∈J_I} 𝟙_{I_i}(Δα^{k,i}_{s_{l_{i,n+1}}(α^{k,i},U)}) = 1, for all but finitely many k, (A.15)

since Δα^{∞,i}_{s^{∞,n+1}} lies in the interior of the open interval I_i, for every i ∈ J_I. For notational convenience, we will assume that the above holds for k > k̄₀^{n+1}. Therefore,

s^{k,n+1} = inf{ s > s^{k,n}, Δα^{k,i}_s ∈ I_i for every i ∈ J_I }
= inf ∩_{i∈J_I} { s > s^{k,n}, Δα^{k,i}_s ∈ I_i }
= inf ∩_{i∈J_I} { s > s_{l_{i,n}}(α^{k,i}, U), Δα^{k,i}_s ∈ I_i }, for k > k̄₀^n, by (A.10),
= inf ∩_{i∈J_I} { s ∈ (s_{l_{i,n}}(α^{k,i}, U), s_{l_{i,n}+1}(α^{k,i}, U)], Δα^{k,i}_s ∈ I_i } ∧ inf ∩_{i∈J_I} { s > s_{l_{i,n}+1}(α^{k,i}, U), Δα^{k,i}_s ∈ I_i }, for k > k̄₀^n,
= min ∩_{i∈J_I} { s_{l_{i,n}+1}(α^{k,i}, U) } ∧ inf ∩_{i∈J_I} { s > s_{l_{i,n}+1}(α^{k,i}, U), Δα^{k,i}_s ∈ I_i }, for k > k̄₀^n ∨ k̄₀^{n+1}, by (A.15),
= s_{l_{μ,n+1}}(α^{k,μ}, U), for k > k̄₀^n ∨ k̄₀^{n+1}, by (A.14),
= s_{l_{i,n+1}}(α^{k,i}, U), for k > k̄₀^n ∨ k̄₀^{n+1} and i ∈ J_I,

i.e. Property (A.12) holds.

Case 1.2: There exists i ∈ J_I for which l_{i,n+1} > 1 + l_{i,n}. Define J̄_I := { i ∈ J_I, l_{i,n+1} > 1 + l_{i,n} } and fix ξ_i ∈ (l_{i,n}, l_{i,n+1}) ∩ N, for every i ∈ J̄_I. Recall that by [34, Proposition VI.2.7], we have

lim_{k→∞} s_{ξ_i}(α^{k,i}, U) = s_{ξ_i}(α^{∞,i}, U) and lim_{k→∞} Δα^{k,i}_{s_{ξ_i}(α^{k,i},U)} = Δα^{∞,i}_{s_{ξ_i}(α^{∞,i},U)}.
(A.16)

We can conclude that Property (A.12) holds if, for every s̄ ∈ (s^{∞,n}, s^{∞,n+1}) such that

lim_{k→∞} s_{ξ_i}(α^{k,i}, U) = s̄, for every i ∈ J̄_I, (A.17)

it holds that

∏_{i∈J_I∖J̄_I} 𝟙_{I_i}(Δα^{k,i}_{s̄}) ∏_{i∈J̄_I} 𝟙_{I_i}(Δα^{k,i}_{s_{ξ_i}(α^{k,i},U)}) = 0, for all but finitely many k. (A.18)

However, if we had

∏_{i∈J_I∖J̄_I} 𝟙_{I_i}(Δα^{k,i}_{s̄}) ∏_{i∈J̄_I} 𝟙_{I_i}(Δα^{k,i}_{s_{ξ_i}(α^{k,i},U)}) = 1, for all but finitely many k,

then, in view of the definition of s^{∞,n+1}, we would have s^{∞,n+1} = s_{ξ_i}(α^{∞,i}, U) for every i ∈ J̄_I, which contradicts Property (A.7). The contradiction arises in view of s_{ξ_i}(α^{∞,i}, U) < s_{l_{i,n+1}}(α^{∞,i}, U), since ξ_i < l_{i,n+1}.

Case 2: s^{∞,n+1} = ∞. We distinguish again between two cases.

Case 2.1: s^{∞,n} < ∞. Using the same arguments as the ones used in Property (A.7) for s^{∞,n}, we can associate to s_n(α^{∞,i}, U) a unique natural number l_{i,n} such that Convergence (A.8) holds. Moreover, by definition of s^{∞,n+1} we obtain that

{ s > s_n(α^∞, I), Δα^{∞,i}_s ∈ I_i for every i ∈ J_I } = ∅,

or, equivalently,

for every s > s^{∞,n} there exists an i ∈ J_I such that Δα^{∞,i}_s ∉ I_i. (A.19)

Assume now that lim inf_{k→∞} s^{k,n+1} = s̄, for some s̄ ∈ (s^{∞,n}, ∞), i.e. there exists (k_l)_{l∈N} such that s^{k_l,n+1} ⟶ s̄, as l → ∞. Equivalently, for every i ∈ J_I it holds that Δα^{k_l,i}_{s^{k_l,n+1}} ∈ I_i for all but finitely many l. By the convergence α^k ⟶ α^∞ in J₁(R^q), [34, Proposition VI.2.1] and the convergence s^{k_l,n+1} ⟶ s̄, as l → ∞, we have that Δα^{∞,i}_{s̄} ∈ I_i, for every i ∈ J_I. But this contradicts the assumption s^{∞,n+1} = ∞ in view of the equivalent form (A.19).

Case 2.2: s^{∞,n} = ∞. By the induction hypothesis it holds that s^{k,n} ⟶ s^{∞,n}, as k → ∞, and by the definition of s^{k,n+1} it holds that s^{k,n} ≤ s^{k,n+1}, for every k ∈ N, n ∈ N. The previous convergence yields s^{k,n+1} ⟶ ∞ = s^{∞,n+1}, as k → ∞.
(ii) Assume that there exist n ∈ N and I ∈ J(α^∞) such that s^{∞,n} < ∞. By (i), we have that s^{k,n} ⟶ s^{∞,n}, as k → ∞, which in conjunction with the convergence α^k ⟶ α^∞ in J₁(R^q) and [34, Proposition VI.2.1] implies that Δα^k_{s^{k,n}} ⟶ Δα^∞_{s^{∞,n}}, as k → ∞. □

Corollary A.6. Let α^k ⟶ α^∞ in J₁(R^q) and I ∈ J(α^∞). Define

n̂ := max{ n ∈ N, s_n(α^∞, I) < ∞ }, if { n ∈ N, s_n(α^∞, I) < ∞ } is non-empty and finite, and n̂ := ∞, otherwise.

Then for every function g : R^q ⟶ R which is continuous on C := ∏_{i=1}^q A_i, where

A_i := W(α^{∞,i}), if i ∈ J_I, and A_i := W(α^{∞,i}) ∪ {0}, if i ∈ {1, ..., q} ∖ J_I,

and for every 1 ≤ n ≤ n̂, it holds that

g(Δα^k_{s_n(α^k,I)}) ⟶ g(Δα^∞_{s_n(α^∞,I)}), as k → ∞.

In particular, for the càdlàg functions β^k_· := g(Δα^k_{s_n(α^k,I)}) 𝟙_{[s_n(α^k,I),∞)}(·), for k ∈ N ∪ {∞}, the convergence β^k ⟶ β^∞ in J₁(R) holds.

Proof. Fix an n ∈ N such that n ≤ n̂. By Proposition A.5.(ii) it holds that Δα^k_{s_n(α^k,I)} ⟶ Δα^∞_{s_n(α^∞,I)}, as k → ∞, where Δα^{∞,i}_{s_n(α^∞,I)} ∈ A_i for every i = 1, ..., q. Therefore, by definition of the time s_n(α^∞, I) and of the set C, it holds that

g(Δα^k_{s_n(α^k,I)}) ⟶ g(Δα^∞_{s_n(α^∞,I)}), as k → ∞. (A.20)

By Proposition A.5.(i), the above convergence and [34, Example VI.1.19], we obtain the convergence β^k ⟶ β^∞ in J₁(R). □

The following simple counterexample shows that in Corollary A.6 the convergence β^k ⟶ β^∞ in J₁(R) does not necessarily hold for a function g which is not continuous on the set C.

Example A.7. Let ((t_k, x_k))_{k∈N∪{∞}} be such that (t_k, x_k) ∈ R₊ × R for every k, t_k ⟶ t_∞ ∈ R₊ and x_k ↘ x_∞ ∈ R ∖ {0}. Define γ^k_· := x_k 𝟙_{[t_k,∞)}(·), for every k ∈ N ∪ {∞}. By [34, Example VI.1.19.ii)], we have γ^k ⟶ γ^∞ in J₁(R).
On the other hand, for I := (x_∞ − ε, x_∞ + ε), with ε > 0 small enough that I ∈ I, it holds that s_1(γ^k, I) = t_k for all but finitely many k and s_1(γ^∞, I) = t_∞, i.e. s_1(γ^k, I) ⟶ s_1(γ^∞, I), as k → ∞. Moreover, for w > x_∞ we also have

Δγ^k_{t_k} 𝟙_{(x_∞,w)}(Δγ^k_{t_k}) = x_k 𝟙_{(x_∞,w)}(x_k) = x_k, for all but finitely many k ∈ N,

and

Δγ^∞_{t_∞} 𝟙_{(x_∞,w)}(Δγ^∞_{t_∞}) = x_∞ 𝟙_{(x_∞,w)}(x_∞) = 0.

Therefore, for the function R ∋ x ⟼ g(x) := x 𝟙_{(x_∞,w)}(x), it holds that

g(Δγ^k_{s_1(γ^k,I)}) = Δγ^k_{t_k} 𝟙_{(x_∞,w)}(Δγ^k_{t_k}) ⟶ x_∞ ≠ 0 = Δγ^∞_{t_∞} 𝟙_{(x_∞,w)}(Δγ^∞_{t_∞}) = g(Δγ^∞_{s_1(γ^∞,I)}), as k → ∞,

and for this reason we cannot obtain the convergence

Δγ^k_{t_k} 𝟙_{(x_∞,w)}(Δγ^k_{t_k}) 𝟙_{[t_k,∞)}(·) ⟶ Δγ^∞_{t_∞} 𝟙_{(x_∞,w)}(Δγ^∞_{t_∞}) 𝟙_{[t_∞,∞)}(·) in J₁(R).

Proposition A.8. Fix some subset I := ∏_{i=1}^q I_i of R^q and a function g : R^q ⟶ R. Define the map

D(R^q) ∋ α ⟼ α̂[g, I] := (α^1, ..., α^q, α^{g,I})^⊤ ∈ D(R^{q+1}), where α^{g,I}_· := Σ_{n∈N} g(Δα_{s_n(α,I)}) 𝟙_{[s_n(α,I),∞)}(·).

Then the map α ⟼ α̂[g, I] is continuous at every α^∞ ∈ D(R^q) for which I ∈ J(α^∞) and g is continuous on the set C associated to α^∞ and I in Corollary A.6.

Proof. The arguments are similar to those in the proof of Corollary VI.2.8 in [34], therefore they are omitted for the sake of brevity. The interested reader can also consult Saplaouras [65, Proposition I.134] for the corresponding proof. □

A.1.2. Technical proofs. We conclude this subsection with the proofs that were omitted from Section 2.3.

Proof of Proposition 2.14. The first step is to show that the L¹-convergence of (M^k_∞)_{k∈N}, together with the weak convergence of the filtrations, implies the convergence of the martingales in the J₁(R^q)-topology, in probability.
Let ε > 0 and F^k ⟶^w F^∞; then

P(d_{J₁(R^q)}(M^k, M^∞) > ε) ≤ P(d_{J₁(R^q)}(M^k, E[M^∞_∞ | F^k_·]) > ε/2) + P(d_{J₁(R^q)}(E[M^∞_∞ | F^k_·], M^∞) > ε/2)
≤ P(sup_{t∈[0,∞)} |M^k_t − E[M^∞_∞ | F^k_t]| > ε/2) + P(d_{J₁(R^q)}(E[M^∞_∞ | F^k_·], M^∞) > ε/2)
≤ (2/ε) E[|M^k_∞ − M^∞_∞|] + P(d_{J₁(R^q)}(E[M^∞_∞ | F^k_·], E[M^∞_∞ | F^∞_·]) > ε/2) ⟶ 0, as k → ∞, (A.22)

where the first summand converges to 0 by assumption and the second one by the weak convergence of the filtrations. Let us point out that for the second inequality we have used that for α, β ∈ D^q it holds that

d_{J₁(R^q)}(α, β) ≤ d_{lu}(α, β) ≤ d_{‖·‖∞}(α, β),

by the definition of the metrics, while for the third inequality we used Doob's martingale inequality.

The next step is to apply Lemma A.1 to ((M^k, E[ξ | F^k_·])^⊤)_k =: (N^k)_k, for ξ ∈ L¹(Ω, F^∞_∞, P; R), in order to obtain the convergence in the extended sense. The J₁(R)-convergence of each (N^{k,i})_k, for i = 1, ..., q, and of the partial sums (Σ_{i=1}^p N^{k,i})_k, for p = 1, ..., q, follows from the previous step and Lemma A.1. Moreover, the J₁(R)-convergence of (N^{k,q+1})_k follows from the definition of the weak convergence of filtrations. Hence, we just have to show the J₁(R)-convergence of (Σ_{i=1}^{q+1} N^{k,i})_k.

By assumption, we have Σ_{m=1}^q M^{k,m}_∞ + ξ ⟶ Σ_{m=1}^q M^{∞,m}_∞ + ξ in L¹(Ω, G, P; R), as k → ∞, and arguing as in (A.22) we obtain

E[ Σ_{m=1}^q M^{k,m}_∞ + ξ | F^k_· ] ⟶ E[ Σ_{m=1}^q M^{∞,m}_∞ + ξ | F^∞_· ] in (J₁(R), P), as k → ∞.
Moreover, by the linearity of conditional expectations, we get that

Σ_{m=1}^q E[M^{k,m}_∞ | F^k_·] + E[ξ | F^k_·] ⟶ Σ_{m=1}^q E[M^{∞,m}_∞ | F^∞_·] + E[ξ | F^∞_·] in (J₁(R), P), as k → ∞.

The converse statement is trivial. □

Proof of Theorem 2.16. (i) By [52, Corollary 12], we obtain for every i = 1, ..., q the convergence

(M^{k,i}, [M^k]^{ii})^⊤ ⟶ (M^{∞,i}, [M^∞]^{ii})^⊤ in (J₁(R²), P),

which in conjunction with Corollary A.2 and the convergence M^k ⟶ M^∞ in (J₁(R^q), P) implies

(M^{k,1}, [M^k]^{11}, M^{k,2}, [M^k]^{22}, ..., M^{k,q}, [M^k]^{qq})^⊤ ⟶ (M^{∞,1}, [M^∞]^{11}, M^{∞,2}, [M^∞]^{22}, ..., M^{∞,q}, [M^∞]^{qq})^⊤ in (J₁(R^{2q}), P). (A.23)

On the other hand, let i, j ∈ {1, ..., q} with i ≠ j. Using that the sequence of square-integrable martingales (M^{k,l})_{k∈N}, for every l = 1, ..., q, is L²-bounded, Doob's maximal inequality and [34, Corollary VI.6.30], we get that the sequences (M^{k,i})_{k∈N} and (M^{k,j})_{k∈N} possess the P-UT property; see [34, Section VI.6]. Therefore, by [34, Theorem VI.6.22], we obtain

(M^{k,i}, M^{k,j}, M^{k,i}_− · M^{k,j}, M^{k,j}_− · M^{k,i})^⊤ ⟶ (M^{∞,i}, M^{∞,j}, M^{∞,i}_− · M^{∞,j}, M^{∞,j}_− · M^{∞,i})^⊤ in (J₁(R⁴), P).
(A.24)

Now, in order to show that the quadratic covariation of M^{k,i} and M^{k,j} converges, we just need to show that the product M^{k,i}M^{k,j} converges, by the definition of the quadratic covariation; see [34, Definition I.4.45]. By the convergence (M^{k,i}, M^{k,j})^⊤ ⟶ (M^{∞,i}, M^{∞,j})^⊤ in (J₁(R²), P), recall (A.23), we obtain the convergence

(M^{k,i}, M^{k,j}, M^{k,i}M^{k,j})^⊤ ⟶ (M^{∞,i}, M^{∞,j}, M^{∞,i}M^{∞,j})^⊤ in (J₁(R³), P), (A.25)

where we have applied Lemma A.4 for the continuous function R² ∋ (x₁, x₂)^⊤ ⟼ (x₁, x₂, x₁x₂)^⊤ ∈ R³. Then (A.24)-(A.25) imply that

(M^{k,i}, M^{k,j}, [M^k]^{ij})^⊤ ⟶ (M^{∞,i}, M^{∞,j}, [M^∞]^{ij})^⊤ in (J₁(R³), P), (A.26)

while (A.23) and (A.26), in conjunction with Remark A.3, yield that

(M^k, [M^k]) ⟶ (M^∞, [M^∞]) in (J₁(R^q × R^{q×q}), P). (A.27)

Let us now show that the predictable quadratic variations converge as well. By [52, Corollary 12], we have for i = 1, ..., q

⟨M^k⟩^{ii} ⟶ ⟨M^∞⟩^{ii} in (J₁(R), P). (A.28)

Moreover, the convergence M^k_∞ ⟶ M^∞_∞ in L² implies in particular, for every i, j = 1, ..., q with i ≠ j, that

M^{k,i}_∞ + M^{k,j}_∞ ⟶ M^{∞,i}_∞ + M^{∞,j}_∞ in L². (A.29)

In view of (A.23) and (A.29), we can apply [52, Corollary 12] to (M^{k,i} + M^{k,j})_{k∈N} and (M^{k,i} − M^{k,j})_{k∈N}, for every i, j = 1, ..., q with i ≠ j. Therefore we get that

⟨M^{k,i} + M^{k,j}⟩ ⟶ ⟨M^{∞,i} + M^{∞,j}⟩ and ⟨M^{k,i} − M^{k,j}⟩ ⟶ ⟨M^{∞,i} − M^{∞,j}⟩ in (J₁(R), P).

Now recall that M^∞ is quasi-left-continuous, which implies that the processes ⟨M^{∞,i} + M^{∞,j}⟩ and ⟨M^{∞,i} − M^{∞,j}⟩ are continuous for every i, j = 1, ..., q with i ≠ j. Therefore, by [34, Theorem I.4.2, Proposition VI.2.2] and the last results we obtain

⟨M^k⟩^{ij} = ¼(⟨M^{k,i} + M^{k,j}⟩ − ⟨M^{k,i} − M^{k,j}⟩) ⟶ ¼(⟨M^{∞,i} + M^{∞,j}⟩ − ⟨M^{∞,i} − M^{∞,j}⟩) = ⟨M^∞⟩^{ij} in (J₁(R), P).
(A.30)

Concluding, by (A.27), (A.28), (A.30) and due to the continuity of ⟨M^∞⟩, we have

(M^k, [M^k], ⟨M^k⟩) ⟶ (M^∞, [M^∞], ⟨M^∞⟩) in (J₁(R^q × R^{q×q} × R^{q×q}), P).

(ii) Let i = 1, ..., q. The sequence (M^{k,i})_{k∈N} satisfies the conditions of [52, Corollary 12]. In the middle of the proof of the aforementioned corollary, we can find the required convergences. □

Proof of Lemma 2.17. By [31, Corollary 1.10], it is enough to prove that the sequence (Var([L^{k,i}, N^{k,j}])_∞)_{k∈N} is uniformly integrable, for every i = 1, ..., p and j = 1, ..., q. We will use [31, Theorem 1.9] in order to prove it. Let i, j be arbitrary but fixed. The L¹-boundedness of the sequence (Var([L^{k,i}, N^{k,j}])_∞)_{k∈N} is obtained using the Kunita-Watanabe inequality in the form [31, Corollary 6.34] and the L¹-boundedness of the sequences ([L^{k,i}]_∞)_{k∈N} and ([N^{k,j}]_∞)_{k∈N}, due to the uniform integrability of the former; see [31, Theorem 1.7.1)]. By the Kunita-Watanabe inequality again, but now in the form [31, Theorem 6.33], and the Cauchy-Schwarz inequality, we obtain for any A ∈ G

∫_A Var([L^{k,i}, N^{k,j}])_∞ dP ≤ ∫_A [L^{k,i}]^{1/2}_∞ [N^{k,j}]^{1/2}_∞ dP (Kunita-Watanabe)
≤ (∫_A [L^{k,i}]_∞ dP)^{1/2} (∫_A [N^{k,j}]_∞ dP)^{1/2} (Cauchy-Schwarz)
≤ (∫_A [L^{k,i}]_∞ dP)^{1/2} (sup_{k∈N} ∫_Ω [N^{k,j}]_∞ dP)^{1/2}
≤ C^{1/2} (∫_A [L^{k,i}]_∞ dP)^{1/2}, (A.31)

where C := sup_{k∈N} E[Tr[[N^k]_∞]]. Note that for the third inequality we used that [N^{k,j}]_∞ ≥ 0 for all k ∈ N. Now we use [31, Theorem 1.9] for the uniformly integrable sequence ([L^{k,i}]_∞)_{k∈N}. For every ε > 0, there exists δ > 0 such that, whenever P(A) < δ, it holds that

sup_{k∈N} ∫_A [L^{k,i}]_∞ dP < ε²/C ⟺ sup_{k∈N} (∫_A [L^{k,i}]_∞ dP)^{1/2} < ε/C^{1/2}.
This further implies

sup_{k∈N} E[ 𝟙_A Var([L^{k,i}, N^{k,j}])_∞ ] < ε, by (A.31),

which is the required condition. □

A.2. Measure theory. This appendix contains some necessary results of a measure-theoretic nature. Let us endow the Euclidean space R^ℓ with the usual metric d_{|·|}, where we have suppressed the dimension in the notation. Let D be a countable and dense subset of R. Then every open U ⊂ R is a countable union of pairwise disjoint intervals with endpoints in D, i.e. there exists a sequence of intervals ((a_k, b_k))_{k∈N} with a_k, b_k ∈ D, for every k ∈ N, such that U = ∪_{k∈N}(a_k, b_k). In particular, these intervals can be selected from the set I(X^{∞,i}), for every i = 1, ..., ℓ.

Lemma A.9. The following equality holds: σ(J(X^∞)) = B(R^ℓ), where J(X^∞) has been introduced in Subsection 3.5 and σ(J(X^∞)) denotes the σ-algebra on R^ℓ generated by the family J(X^∞).

Proof. The space R^ℓ is a finite product of the second-countable metric space R. Therefore, it holds that

B(R^ℓ) = B(R) ⊗ ⋯ ⊗ B(R) (ℓ times),

so that it is sufficient to prove that σ(I(X^{∞,i})) = B(R) for every i = 1, ..., ℓ. Using the comment at the beginning of this appendix, we have the following chain of equalities

σ(I(X^{∞,i})) = σ({ U ⊂ R, U is open and U ⊂ R ∖ {0} }) = σ({ U ⊂ R ∖ {0}, U is open } ∪ {R} ∪ {{0}}) = σ({ U ⊂ R, U is open }) = B(R),

which allows us to conclude. □

Lemma A.10. Let (Σ, S) be a measurable space and ̺₁, ̺₂ be two finite signed measures on this space. Let A be a family of sets with the following properties:

(i) A is a π-system, i.e. if A, B ∈ A, then A ∩ B ∈ A.
(ii) We have σ(A) = S.
(iii) ̺₁(A) = ̺₂(A) for every A ∈ A.

Then it holds that ̺₁ = ̺₂.

Proof. See Bogachev [8, Lemma 7.2.1], or Saplaouras [65, Lemma A.7].
□

A.3. Young functions. This appendix contains a brief overview as well as some new results on moderate Young functions. In particular, in Subsection 3.7, we are interested in using the Burkholder-Davis-Gundy inequality for moderate Young functions and Doob's maximal inequality for functions whose Young conjugate is moderate. We will use and adapt the terminology of [22, Chapter VI, Section 3, Paragraph 97]. A function Φ : R₊ ⟶ R₊ is called a Young function if it is increasing, convex and satisfies Φ(0) = 0 and lim_{x→∞} Φ(x)/x = ∞. We can write every Young function Φ in the form Φ(x) = ∫₀^x φ(t) dt, where φ : R₊ ⟶ R₊ is an increasing and càdlàg function. Due to the growth condition, it is immediate to check that lim_{x→∞} φ(x) = ∞, which implies that the càdlàg inverse of φ, which is defined by

ψ(s) := inf{ t ≥ 0, φ(t) > s }, s ∈ R₊, (A.32)

is real-valued and unbounded as well.

Definition A.11. Every Young function Φ is associated to the constants

c_Φ := inf_{x>0} xφ(x)/Φ(x) and C_Φ := sup_{x>0} xφ(x)/Φ(x).

If C_Φ < ∞, then Φ is called moderate.

Observe that, by the immediate inequality Φ(x) ≤ xφ(x), we have that C_Φ ≥ c_Φ ≥ 1. A characterisation of moderate Young functions is given in [46, Theorem 3.1.1]: a Young function Φ is moderate if and only if Φ(λx) ≤ λ^{C_Φ} Φ(x) for every x ∈ R₊ and for every λ ≥ 1. However, for a Young function to be moderate, it turns out that we actually only need to prove this property for some λ > 1, e.g. for λ = 2; see [31, Definition 10.32, Lemma 10.33.2)]. The Young conjugate of Φ is the Young function Ψ : R₊ ⟶ R₊ defined as

Ψ(x) := ∫₀^x ψ(s) ds,

where ψ is the càdlàg inverse of φ defined in (A.32). By [46, Theorem 3.1.1], we have that c_Ψ is the conjugate index of C_Φ and C_Ψ is the conjugate index of c_Φ, i.e.

c_Ψ = C_Φ/(C_Φ − 1), if C_Φ > 1, and c_Ψ = ∞, if C_Φ = 1, while C_Ψ = c_Φ/(c_Φ − 1), if c_Φ > 1, and C_Ψ = ∞, if c_Φ = 1.

Therefore, the Young conjugate of Φ is moderate if c_Φ > 1.
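A canonical illustration of these notions (our example, with c and C denoting the infimum and supremum constants of Definition A.11): the power functions, for which the two constants coincide and the conjugate relations can be verified by hand.

```latex
% Power Young function: \Phi(x) = x^p / p, with p > 1.
\varphi(t) = t^{p-1}, \qquad
\frac{x\,\varphi(x)}{\Phi(x)} = \frac{x \cdot x^{p-1}}{x^p/p} = p
\quad \text{for every } x > 0,
% so the infimum and supremum constants both equal p, and \Phi is moderate.
% The cadlag inverse of \varphi is \psi(s) = s^{1/(p-1)}, hence the
% Young conjugate is again a power function:
\Psi(x) = \int_0^x s^{\frac{1}{p-1}} \,\mathrm{d}s
        = \frac{p-1}{p}\, x^{\frac{p}{p-1}}
        = \frac{x^{p'}}{p'},
\qquad \frac{1}{p} + \frac{1}{p'} = 1,
% whose constants both equal p' = p/(p-1), in agreement with the
% conjugate-index relations stated above.
```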
In the following, to every sequence A := (α_k)_{k∈N∪{0}} such that α_0 = 1, α_k ≤ α_{k+1} for every k ∈ N, and lim_{k→∞} α_k = ∞, we will associate the Young function Φ_A with

Φ_A(x) := ∫₀^x Σ_{k=0}^∞ α_k 𝟙_{[k,k+1)}(t) dt.

For convenience, we define φ_A : R₊ ⟶ R₊ by φ_A(t) := Σ_{k=0}^∞ α_k 𝟙_{[k,k+1)}(t). If, moreover, α_{2k} ≤ C α_k for some constant C > 0 and every k ∈ N, then the Young function Φ_A is moderate, as an immediate consequence of the comments after Definition A.11.

Proposition A.12. Let A = (α_k)_{k∈N∪{0}} be an increasing and unbounded sequence of positive integers for which it holds that α_0 = 1, α_k ≤ α_{k+1} and α_{2k} ≤ C α_k, for every k ∈ N. Let now Φ_A be the moderate Young function associated to the sequence A. Define quad : R₊ ⟶ R₊ by quad(x) := x²/2, and let Ψ be the Young conjugate of Φ_{A,quad} := Φ_A ∘ quad, with associated right derivative ψ. Then:

(i) Φ_{A,quad} and Ψ are moderate Young functions.
(ii) ψ is continuous and can be written as a concatenation of linear and constant functions defined on intervals. Besides, the slopes of the linear parts constitute a non-increasing sequence converging to 0.
(iii) We have Ψ(x) ≤ quad(x), where the equality holds on a compact neighbourhood of 0, and lim_{x↑∞} { quad(x) − Ψ(x) } = ∞.
(iv) There exists a Young function Υ such that Υ ∘ Ψ = quad.

Proof. (i) We will prove initially that Φ_{A,quad} is a Young function. In view of the comments before and after Definition A.11, it is sufficient to prove that it can be written as a Lebesgue integral whose integrand is a càdlàg, increasing and unbounded function. For every x ∈ R₊, we have by definition

Φ_{A,quad}(x) = Φ_A(x²/2) = ∫_{[0,x²/2]} φ_A(z) dz = ∫_{[0,x]} t φ_A(t²/2) dt. (A.33)

We define φ_{A,quad} : R₊ ⟶ R₊ by

φ_{A,quad}(t) := t φ_A(t²/2) = t 𝟙_{[0,√2)}(t) + t Σ_{k=1}^∞ α_k 𝟙_{[√(2k),√(2k+2))}(t), (A.34)

i.e. φ_{A,quad} is càdlàg and piecewise linear.
Observe, moreover, that:

▪ Δφ_{A,quad}(√(2k+2)) = (α_{k+1} − α_k)√(2k+2) ≥ 0, for every k ∈ N.
▪ φ_{A,quad} has increasing slopes; the value of the slope of the linear part defined on the interval [√(2k), √(2k+2)) is determined by the value of the respective element α_k ≥ 1, for every k ∈ N.
▪ lim_{s→∞} φ_{A,quad}(s) = ∞.

Therefore, Φ_{A,quad} is a Young function and its conjugate Ψ is also a Young function. We will prove now that both Φ_{A,quad} and Ψ are moderate. We have directly that c_quad = C_quad = 2. Moreover, by the property α_{2k} ≤ C α_k, we have that Φ_A is moderate, hence C_{Φ_A} < ∞. Now we obtain

c_{Φ_{A,quad}} = inf_{x>0} x φ_{A,quad}(x)/Φ_{A,quad}(x) = inf_{x>0} x² φ_A(x²/2)/Φ_A(x²/2) = 2 inf_{x>0} (x²/2) φ_A(x²/2)/Φ_A(x²/2) = 2 inf_{u>0} u φ_A(u)/Φ_A(u) = 2 c_{Φ_A} ≥ 2,

because for every Young function Υ it holds that c_Υ ≥ 1. In addition, for C_{Φ_{A,quad}} we have

C_{Φ_{A,quad}} = sup_{x>0} x φ_{A,quad}(x)/Φ_{A,quad}(x) = 2 sup_{u>0} u φ_A(u)/Φ_A(u) = 2 C_{Φ_A} < ∞.

Hence, Φ_{A,quad} is a moderate Young function. Besides, since c_{Φ_{A,quad}} ≥ 2 > 1, we have from [46, Theorem 3.1.1(f)] that C_Ψ < ∞. Therefore, Ψ is also moderate.

(ii) For the rest of the proof, i.e. for parts (ii)-(iv), we will simplify the notation and simply write φ for the function φ_{A,quad}. Firstly, let us observe that ψ is real-valued, resp. unbounded, since φ is unbounded, resp. real-valued. (The reader who is not familiar with generalised inverses may find Embrechts and Hofert [26] helpful, especially the comments after [26, Remark 2.2].) In order to determine the value ψ(s) for s ∈ (0, ∞), let us define two sequences of subsets of R₊, (C_k)_{k∈N∪{0}} and (J_k)_{k∈N∪{0}}, by

C_k := [ φ(√(2k)), lim_{t↑√(2k+2)} φ(t) ) and J_{k+1} := [ lim_{t↑√(2k+2)} φ(t), φ(√(2k+2)) ), for k ∈ N ∪ {0}.
(A.35)

Observe that:

▪ C_k = φ([√(2k), √(2k+2))) ≠ ∅, for every k ∈ N ∪ {0}, since φ is continuous and increasing on [√(2k), √(2k+2)).
▪ J_{k+1} = ∅ if and only if φ is continuous at √(2k+2), which is further equivalent to α_k = α_{k+1}.

For convenience, let us define two sequences (s_k)_{k∈N∪{0}}, (s_{k+1−})_{k∈N∪{0}} of positive numbers as follows

s_0 := 0, s_k := φ(√(2k)) = α_k √(2k), for k ∈ N, and s_{k+1−} := lim_{x↑√(2k+2)} φ(x) = α_k √(2k+2), for k ∈ N ∪ {0}. (A.36)

The introduction of the last notation allows us to rewrite C_k = [s_k, s_{k+1−}) and J_{k+1} = [s_{k+1−}, s_{k+1}), for k ∈ N ∪ {0}. Now we are ready to determine the values of ψ on (0, ∞). The reader should keep in mind that the function φ is increasing and right-continuous.

• Let s ∈ C_0 = [φ(0), φ(√2−)) = φ([0, √2)); then

ψ(s) = inf{ t ∈ R₊, φ(t) > s } = inf{ t ∈ [0, √2), φ(t) > s } = inf{ t ∈ R₊, Id(t) 𝟙_{[0,√2)}(t) > s } = s,

where the second equality is valid because φ is continuous on [0, √2) with φ([0, √2)) = [0, s_{1−}). To sum up, we have proven that ψ 𝟙_{[s_0,s_{1−})} = Id 𝟙_{[s_0,s_{1−})}.

• Let s ∈ J_1 = [φ(√2−), φ(√2)) = [s_{1−}, s_1). If J_1 = ∅, which amounts to α_1 = 1, there is nothing to prove. On the other hand, if J_1 ≠ ∅, then φ(√2−) ≤ s < φ(√2) and consequently ψ(s) = inf{ t ∈ R₊, φ(t) > s } = √2, for every s ∈ J_1. To sum up, we have proven that ψ 𝟙_{[s_{1−},s_1)} = √2 𝟙_{[s_{1−},s_1)}.

For the general case let us fix a k ∈ N. We will distinguish between the cases s ∈ C_k and s ∈ J_{k+1}. For the latter we can argue exactly as in the case s ∈ J_1, but for the sake of completeness we will provide the proof.

• Let s ∈ C_k = [φ(√(2k)), φ(√(2k+2)−)) = φ([√(2k), √(2k+2))).
Since C_k is the image of [√(2k), √(2k+2)) under the map R₊ ∋ t ⟼ χ_{α_k}(t) := α_k t ∈ R₊, ψ has to coincide with the right-continuous inverse R₊ ∋ s ⟼ χ_{α_k}^{−1,r}(s) = s/α_k ∈ R₊ on C_k = [s_k, s_{k+1−}). To sum up, we have proved that ψ 𝟙_{[s_k,s_{k+1−})} = (1/α_k) Id 𝟙_{[s_k,s_{k+1−})}.

• Let s ∈ J_{k+1} = [φ(√(2k+2)−), φ(√(2k+2))) = [s_{k+1−}, s_{k+1}). If J_{k+1} = ∅, which amounts to α_{k+1} = α_k, there is nothing to prove. On the other hand, if J_{k+1} ≠ ∅, then φ(√(2k+2)−) ≤ s < φ(√(2k+2)) and consequently ψ(s) = inf{ t ∈ R₊, φ(t) > s } = √(2k+2), for every s ∈ J_{k+1}. To sum up, we have proven that ψ 𝟙_{[s_{k+1−},s_{k+1})} = √(2k+2) 𝟙_{[s_{k+1−},s_{k+1})}.

Overall, we have that the right derivative of Ψ can be written as a concatenation of linear and constant functions defined on intervals, i.e.

ψ(s) = Id(s) 𝟙_{[s_0,s_{1−})}(s) + Σ_{k=1}^∞ (1/α_k) Id(s) 𝟙_{[s_k,s_{k+1−})}(s) + Σ_{k=0}^∞ √(2k+2) 𝟙_{[s_{k+1−},s_{k+1})}(s), (A.37)

where s_0 := 0, s_k := α_k √(2k), for k ∈ N, and s_{k+1−} := α_k √(2k+2), for k ∈ N ∪ {0}. Recall now that α_k ≥ 1, for every k ∈ N, therefore we have that the slopes of ψ are smaller than or equal to 1. Moreover, since α_k ≤ α_{k+1} and lim_{k→∞} α_k = ∞, we have that 1/α_{k+1} ≤ 1/α_k and lim_{k→∞} 1/α_k = 0. Finally, as can be easily checked, ψ is continuous. This causes no surprise, since φ is strictly increasing; see Embrechts and Hofert [26, Proposition 2.3.(7)].

(iii) Let us consider the function ζ : R₊ ⟶ R defined by ζ := Id − ψ, i.e. ζ is also continuous. Moreover, ζ is differentiable on a superset of R₊ ∖ { (s_k)_{k∈N∪{0}} ∪ (s_{k+1−})_{k∈N∪{0}} }, which is clearly an open and dense subset of R₊, since there is no accumulation point in the set (s_k)_{k∈N∪{0}} ∪ (s_{k+1−})_{k∈N∪{0}}. Obviously,

ζ′(x) = 1 − 1/α_k, for x ∈ (s_k, s_{k+1−}), k ∈ N ∪ {0}, and ζ′(x) = 1, for x ∈ (s_{k+1−}, s_{k+1}), k ∈ N ∪ {0},

whenever (s_{k+1−}, s_{k+1}) ≠ ∅. Define M := min{ k ∈ N, α_k > 1 }, which is a well-defined positive integer, since α_k ⟶ ∞.
Recall now that α_k ≥ 1 for every k ∈ N, and we can conclude that ζ′(x) > 0 almost everywhere on [s_M, ∞). We now prove that ψ and Id coincide only on a compact neighbourhood of 0. By the definition of M we have that α_k = 1 for k ∈ {1, …, M−1} and α_M > 1; therefore Id 1_{[0, s_{M−}]} = ψ 1_{[0, s_{M−}]} and x > ψ(x), for every x ∈ (s_{M−}, ∞).

Finally, it remains to prove that lim_{x→∞} (Id(x) − ψ(x)) = ∞. Recall that (s_{k+1−}, s_{k+1}) ≠ ∅ whenever k is such that α_k < α_{k+1}; since α_k ↑ ∞, there are infinitely many non-trivial intervals (s_{k+1−}, s_{k+1}). But these are exactly the intervals on which ψ is constant, so that Id gains the full length of each of them; since moreover ζ′ ≥ 0 almost everywhere, we can conclude that ζ is unbounded, and the desired result holds.

(iv) For the following, recall (A.36), (A.37) and the definition of M. Let us start with the introduction of the auxiliary function η : ∪_{k=M−1}^∞ [Ψ(s_{k+1−}), Ψ(s_{k+1})) → (0, 1], defined by

η(z) := (Ψ(s_{k+1}) − z) / (Ψ(s_{k+1}) − Ψ(s_{k+1−})), for z ∈ [Ψ(s_{k+1−}), Ψ(s_{k+1})).

Recall now that Ψ is continuous and increasing, which allows us to define the function υ : R_+ → R_+ by

υ(z) := 1_{[0, Ψ(s_{M−}))}(z) + Σ_{k=M}^∞ α_k 1_{[Ψ(s_k), Ψ(s_{k+1−}))}(z) + Σ_{k=M−1}^∞ (η(z) α_k + (1 − η(z)) α_{k+1}) 1_{[Ψ(s_{k+1−}), Ψ(s_{k+1}))}(z).  (A.38)

We can directly check that υ is indeed well-defined, non-negative, non-decreasing and unbounded. Therefore, υ is the right derivative of a Young function, say Υ.

We intend to prove that Υ ∘ Ψ = quad, which is equivalent to proving that the right derivative of Υ ∘ Ψ equals Id. The following simple computation allows us to evaluate the right derivative of Υ ∘ Ψ in terms of υ, ψ and Ψ: for any x ∈ R_+, we have

Υ ∘ Ψ(x) = ∫_{[0, Ψ(x)]} υ(t) dt = ∫_{[0, x]} ψ(z) υ(Ψ(z)) dz.

Now we can compare the right derivative of Υ ∘ Ψ, which is the function ψ (υ ∘ Ψ) : R_+ → R_+, with the identity function Id.
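The pointwise identity ψ(x) · (υ ∘ Ψ)(x) = x behind the case analysis can also be verified numerically for a hypothetical choice of (α_k) (again our own choice, not taken from the paper); here Ψ is approximated by a midpoint rule, and υ interpolates between consecutive slopes on the images of the jump intervals of ψ, in the spirit of (A.38). This is a sanity check under stated assumptions, not part of the argument:

```python
import math

# Hypothetical slopes (our choice): alpha_k = 1 for k < 3, else k - 1, so M = 3.
def alpha(k):
    return 1.0 if k < 3 else float(k - 1)

def breakpoints(k):
    # s_k, s_{k+1-}, s_{k+1} as in (A.36)
    return (alpha(k) * math.sqrt(2 * k),
            alpha(k) * math.sqrt(2 * k + 2),
            alpha(k + 1) * math.sqrt(2 * k + 2))

def psi(s):
    # right derivative of Psi, formula (A.37)
    k = 0
    while True:
        s_k, s_km, s_k1 = breakpoints(k)
        if s_k <= s < s_km:
            return s / alpha(k)
        if s_km <= s < s_k1:
            return math.sqrt(2 * k + 2)
        k += 1

def Psi(x, n=4000):
    # midpoint-rule approximation of the integral of psi over [0, x]
    h = x / n
    return h * sum(psi((i + 0.5) * h) for i in range(n))

def upsilon(z):
    # right derivative of Upsilon: alpha_k on the images of the linear pieces of psi,
    # and the eta-interpolation of (A.38) on the images of its jump intervals
    k = 0
    while True:
        s_k, s_km, s_k1 = breakpoints(k)
        a, b, c = Psi(s_k), Psi(s_km), Psi(s_k1)
        if a <= z < b:
            return alpha(k)
        if b <= z < c:
            eta = (c - z) / (c - b)
            return eta * alpha(k) + (1.0 - eta) * alpha(k + 1)
        k += 1

# pointwise identity behind Upsilon o Psi = quad: psi(x) * upsilon(Psi(x)) = x
for x in [0.5, 1.5, 3.0, 5.0, 9.0]:
    assert abs(psi(x) * upsilon(Psi(x)) - x) < 1e-2
```

On the linear pieces the two slopes cancel (α_k^{−1} · α_k), while on the jump intervals the convex interpolation reproduces exactly the computation in (A.40)–(A.42).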
To this end, we will consider the behaviour of ψ (υ ∘ Ψ) on the intervals [0, s_{M−}), [s_k, s_{k+1−}) for k ≥ M, and [s_{k+1−}, s_{k+1}) for k ≥ M − 1, which form a partition of R_+. (For the following we keep our notation. Two remarks are in order. First, the result presented in [26] concerns the left-continuous generalised inverse of a function; however, φ^{−1,l} is the càdlàg version of ψ, therefore we can directly conclude that ψ has to be continuous as well. Second, η(z) is the unique number in (0, 1] for which z can be written as a convex combination of Ψ(s_{k+1−}) and Ψ(s_{k+1}).)

Before we proceed, let us evaluate the function υ at Ψ(s), for s ∈ R_+:

υ(Ψ(s)) = 1_{[0, Ψ(s_{M−}))}(Ψ(s)) + Σ_{k=M}^∞ α_k 1_{[Ψ(s_k), Ψ(s_{k+1−}))}(Ψ(s)) + Σ_{k=M−1}^∞ {η(Ψ(s)) α_k + [1 − η(Ψ(s))] α_{k+1}} 1_{[Ψ(s_{k+1−}), Ψ(s_{k+1}))}(Ψ(s))
        = 1_{[0, s_{M−})}(s) + Σ_{k=M}^∞ α_k 1_{[s_k, s_{k+1−})}(s) + Σ_{k=M−1}^∞ {η(Ψ(s)) α_k + [1 − η(Ψ(s))] α_{k+1}} 1_{[s_{k+1−}, s_{k+1})}(s),  (A.39)

because Ψ is continuous and increasing.

• Let s ∈ [0, s_{M−}). At the end of the proof of (iii) we obtained that Id 1_{[0, s_{M−})} = ψ 1_{[0, s_{M−})}. Therefore, we can conclude that ψ(s)(υ ∘ Ψ)(s) = s υ(Ψ(s)) = s.

• Let s ∈ [s_k, s_{k+1−}), for some k ≥ M. Then, by (A.37) and (A.39), ψ(s) υ(Ψ(s)) = α_k^{−1} s · α_k = s.

• Let s ∈ [s_{k+1−}, s_{k+1}), for some k ≥ M − 1. Then, for the chosen s there exists a unique µ_s ∈ (0, 1] such that

s = µ_s s_{k+1−} + (1 − µ_s) s_{k+1}.  (A.40)

However, Ψ is linear on [s_{k+1−}, s_{k+1}), hence

Ψ(s) = µ_s Ψ(s_{k+1−}) + (1 − µ_s) Ψ(s_{k+1}),  (A.41)

and Ψ(s) ∈ [Ψ(s_{k+1−}), Ψ(s_{k+1})).
Therefore, by the definition of η,

η(Ψ(s)) = [Ψ(s_{k+1}) − Ψ(s)] / [Ψ(s_{k+1}) − Ψ(s_{k+1−})]
        =(A.41) [Ψ(s_{k+1}) − µ_s Ψ(s_{k+1−}) − (1 − µ_s) Ψ(s_{k+1})] / [Ψ(s_{k+1}) − Ψ(s_{k+1−})]
        = µ_s [Ψ(s_{k+1}) − Ψ(s_{k+1−})] / [Ψ(s_{k+1}) − Ψ(s_{k+1−})] = µ_s,  (A.42)

and finally we can conclude, in view of

ψ(s)(υ ∘ Ψ)(s) = √(2k+2) υ(Ψ(s)) =(A.39) √(2k+2) {η(Ψ(s)) α_k + [1 − η(Ψ(s))] α_{k+1}} =(A.42) √(2k+2) (µ_s α_k + (1 − µ_s) α_{k+1}) =(A.36) µ_s s_{k+1−} + (1 − µ_s) s_{k+1} =(A.40) s.  □

A.4. Proof of Corollary 3.4.

Proof. Let us initially define the random variables D^k := (X^k_∞, ξ^k, X^k), for k ∈ N ∪ {∞}. Observe that the state space of D^k is R^{ℓ+1} × D(R_+; R^ℓ), which is clearly Polish, as a finite Cartesian product of Polish spaces. Moreover, observe that D^k converges in law to D^∞, due to Assumptions (W2) and (W5). We are going to use the Skorokhod representation theorem, see Billingsley [7, Theorem 6.7], in order to obtain a probability space (Ω̃, F̃, P̃) and a sequence (D̃^k)_{k∈N∪{∞}} of random variables defined on (Ω̃, F̃, P̃) such that

(i) L(D̃^k) = L(D^k), for every k ∈ N ∪ {∞};
(ii) D̃^k → D̃^∞ with respect to δ_Π, P̃-almost surely, where δ_Π : (R^{ℓ+1} × D(R_+; R^ℓ)) × (R^{ℓ+1} × D(R_+; R^ℓ)) → R_+ is defined by δ_Π((x, α), (y, β)) := |x − y| + δ_{J_1(R^ℓ)}(α, β), for x, y ∈ R^{ℓ+1} and α, β ∈ D(R_+; R^ℓ).

The expectation under the measure P̃ will be denoted by Ẽ[·], and the conditional expectation of a random variable Z with respect to a σ-algebra H under the measure P̃ will be denoted by Ẽ[Z | H]. In view of the above, D̃^k can be written as (X̃^k_∞, ξ̃^k, X̃^k), for some R^ℓ-valued, resp. R-valued, random variables X̃^k_∞, ξ̃^k, and some R^ℓ-valued process X̃^k, for every k ∈ N ∪ {∞}.
The next step is to construct the stochastic basis (Ω̃, F̃, F^{X̃^k}, P̃), for every k ∈ N ∪ {∞}. In order to make clear the correspondence with the conditions of Theorem 3.3, we define G̃^k := F^{X̃^k}, for every k ∈ N ∪ {∞}. Then, we can translate Conditions (W1), (W2), (W3) and (W5) into Conditions (M1), (M2), (M3) and (M5) under the probability space (Ω̃, F̃, P̃). Moreover, in view of Condition (W4), we can conclude that X̃^k is a process with independent increments. This property, in conjunction with the convergence obtained in (ii), implies in particular that F^{X̃^k} converges weakly to F^{X̃^∞}. This last claim is verified by Coquet et al. [17, Proposition 2]. Therefore, Condition (M4) is also satisfied when we work on the probability space (Ω̃, F̃, P̃).

Finally, we also need to prove that L(E^k[ξ^k | F^{X^k}_·]) = L(Ẽ[ξ̃^k | F^{X̃^k}_·]), for every k ∈ N, in order to be able to transfer the results from (Ω̃, F̃, P̃) to the original spaces. We underline that this last statement needs some special care, since the Skorokhod representation theorem does not deal with the associated filtrations. However, we will see in the next lines that, if we work with the natural filtrations, then we can assume that the laws of the corresponding optional projections (or simply of the conditional expectations) are unaffected. For the following we will use Jacod and Shiryaev [34, Theorem VI.1.14 c)]. In other words, we will use the fact that, for every t ∈ R_+, the σ-algebra D_t(R^ℓ) is the σ-algebra generated by all maps D(R_+; R^ℓ) ∋ α ↦ α(u) ∈ R^ℓ, for u ≤ t, and coincides with the Borel σ-algebra associated to the Polish space (D([0, t]; R^ℓ), δ_{J_1(R^ℓ)}). Define now D(R^ℓ) := ⋁_{t∈R_+} D_t(R^ℓ).
In order to prove our claim that L(E^k[ξ^k | F^{X^k}_·]) = L(Ẽ[ξ̃^k | F^{X̃^k}_·]), it is sufficient, by Kolmogorov's extension theorem, to prove that for every finite subset T of R_+ the finite-dimensional distributions associated to the processes E^k[ξ^k | F^{X^k}_·] and Ẽ[ξ̃^k | F^{X̃^k}_·] coincide.

We will initially prove that, for every t ∈ R_+ and for every k ∈ N, it holds that

E^k[ξ^k | F^{X^k}_t] = g^k_t(X^k)  as well as  Ẽ[ξ̃^k | F^{X̃^k}_t] = ḡ^k_t(X̃^k),  (A.43)

where g^k_t, ḡ^k_t : (D(R_+; R^ℓ), D_t(R^ℓ)) → (R, B(R)).

Fix (t, k) ∈ R_+ × N. Doob's lemma (Kallenberg [37, Lemma 1.13]), the F^{X^k}_t-measurability of E^k[ξ^k | F^{X^k}_t], and the F^{X̃^k}_t-measurability of Ẽ[ξ̃^k | F^{X̃^k}_t], imply that there exist g^k_t : (D(R_+; R^ℓ), D_t(R^ℓ)) → (R, B(R)) and ḡ^k_t : (D(R_+; R^ℓ), D_t(R^ℓ)) → (R, B(R)) such that E^k[ξ^k | F^{X^k}_t] = g^k_t(X^k) and Ẽ[ξ̃^k | F^{X̃^k}_t] = ḡ^k_t(X̃^k). Our aim, now, is to prove that

g^k_t = ḡ^k_t, P_{X^k}-a.s. (hence also P̃_{X̃^k}-a.s.),  (A.44)

where P_{X^k}, resp. P̃_{X̃^k}, denotes the push-forward measure on (D(R_+; R^ℓ), D(R^ℓ)) defined by P_{X^k}(A) := P^k((X^k)^{−1}(A)); the interpretation of P̃_{X̃^k} is analogous. At this point, recall (i) and the definition of the conditional expectation, in order to derive the following equality:

E^k[g^k_t(X^k) 1_{(X^k)^{−1}(A)}] = E^k[ξ^k 1_{(X^k)^{−1}(A)}] = Ẽ[ξ̃^k 1_{(X̃^k)^{−1}(A)}] = Ẽ[ḡ^k_t(X̃^k) 1_{(X̃^k)^{−1}(A)}], for every A ∈ D_t(R^ℓ).
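The measure-theoretic mechanism at work in (A.43)–(A.44), namely that the Doob–Dynkin factor of a conditional expectation with respect to a natural σ-algebra depends only on the joint law, can be illustrated in a finite toy setting. The sample spaces and weights below are hypothetical stand-ins for the Skorokhod-space argument:

```python
from fractions import Fraction

# Two different sample spaces carrying pairs (X, xi) with the same joint law;
# E[xi | X] factors as g(X) (Doob's lemma), and g is determined by the law alone.
space1 = {("a", 0): Fraction(1, 4), ("a", 1): Fraction(1, 4),
          ("b", 0): Fraction(1, 6), ("b", 1): Fraction(1, 3)}
# a second space with "different omegas" (some weights split) but the same joint law
space2 = {("a", 0, "u"): Fraction(1, 8), ("a", 0, "v"): Fraction(1, 8),
          ("a", 1, "u"): Fraction(1, 4),
          ("b", 0, "u"): Fraction(1, 6), ("b", 1, "u"): Fraction(1, 3)}

def factor_g(space):
    # g(x) = E[xi 1_{X=x}] / P(X=x): the Doob-Dynkin factor of E[xi | X]
    num, den = {}, {}
    for omega, p in space.items():
        x, xi = omega[0], omega[1]
        num[x] = num.get(x, 0) + p * xi
        den[x] = den.get(x, 0) + p
    return {x: num[x] / den[x] for x in den}

g1, g2 = factor_g(space1), factor_g(space2)
assert g1 == g2  # the factor depends only on the joint law of (X, xi)
```

Here g1 and g2 both equal {"a": 1/2, "b": 2/3}, mirroring how (A.44) transfers the conditional expectation between the original space and the Skorokhod representation space.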
Using classical approximation arguments, we can deduce from the above equality that

E^k[g^k_t(X^k) f(X^k)] = Ẽ[ḡ^k_t(X̃^k) f(X̃^k)],

for every bounded f : (D(R_+; R^ℓ), D(R^ℓ)) → (R, B(R)). In particular, using the above equality for every bounded h_t : (D(R_+; R^ℓ), D_t(R^ℓ)) → (R, B(R)), the fact that both g^k_t and ḡ^k_t are D_t(R^ℓ)-measurable, and the fact that the measures P_{X^k} and P̃_{X̃^k} are equal, we can conclude that Equality (A.44) holds. Now, we can easily conclude that the associated finite-dimensional distributions are equal, in view of

P^k[{ω^k ∈ Ω^k : E^k[ξ^k | F^{X^k}_{t_1}](ω^k) ∈ A_1, …, E^k[ξ^k | F^{X^k}_{t_n}](ω^k) ∈ A_n}]
 = P_{X^k}(⋂_{m=1}^n (g^k_{t_m})^{−1}(A_m)) = P̃_{X̃^k}(⋂_{m=1}^n (g^k_{t_m})^{−1}(A_m))
 = P̃[{ω̃ ∈ Ω̃ : Ẽ[ξ̃^k | F^{X̃^k}_{t_1}](ω̃) ∈ A_1, …, Ẽ[ξ̃^k | F^{X̃^k}_{t_n}](ω̃) ∈ A_n}],

for every t_1, …, t_n ∈ R_+, every A_1, …, A_n ∈ B(R) and every n ∈ N.  □

REFERENCES

[1] D. Aldous. Weak convergence and the general theory of processes. Incomplete draft of a monograph, Department of Statistics, University of California, 1981.
[2] C. Aliprantis and K. Border. Infinite dimensional analysis: a hitchhiker's guide. Springer-Verlag Berlin Heidelberg, 3rd edition, 2006.
[3] F. Antonelli and A. Kohatsu-Higa. Filtration stability of backward SDE's. Stochastic Analysis and its Applications, 18(1):11–37, 2000.
[4] M. Barlow and P. Protter. On convergence of semimartingales. Séminaire de probabilités de Strasbourg, XXIV:188–193, 1990.
[5] P. Barrieu and N. El Karoui. Monotone stability of quadratic semimartingales with applications to unbounded general quadratic BSDEs. The Annals of Probability, 41(3B):1831–1863, 2013.
[6] P. Barrieu, N. Cazanave, and N. El Karoui.
Closedness results for BMO semi-martingales and application to quadratic BSDEs. Comptes Rendus de l'Académie des Sciences – Series I – Probability Theory, 346(15–16):881–886, 2008.
[7] P. Billingsley. Convergence of probability measures. Wiley Series in Probability and Statistics. John Wiley & Sons, New York, 2nd edition, 1999.
[8] V. Bogachev. Measure theory. Springer-Verlag Berlin Heidelberg, 2007.
[9] P. Briand, B. Delyon, and J. Mémin. Donsker-type theorem for BSDEs. Electronic Communications in Probability, 6:1–14, 2001.
[10] P. Briand, B. Delyon, and J. Mémin. On the robustness of backward stochastic differential equations. Stochastic Processes and their Applications, 97(2):229–253, 2002.
[11] P. Cheridito and M. Stadje. BS∆Es and BSDEs with non-Lipschitz drivers: comparison, convergence and robustness. Bernoulli, 19(3):1047–1085, 2013.
[12] S. Cohen and R. Elliott. Stochastic calculus and applications. Probability and its Applications. Springer, New York, 2015.
[13] F. Coquet and L. Słomiński. On the convergence of Dirichlet processes. Bernoulli, 5(4):615–639, 1999.
[14] F. Coquet, V. Mackevičius, and J. Mémin. Stability in D of martingales and backward equations under discretization of filtration. Stochastic Processes and their Applications, 75(2):235–248, 1998.
[15] F. Coquet, V. Mackevičius, and J. Mémin. Corrigendum to "Stability in D of martingales and backward equations under discretization of filtration" [Stochastic Processes and their Applications 75 (1998) 235–248]. Stochastic Processes and their Applications, 82(2):335–338, 1999.
[16] F. Coquet, J. Mémin, and V. Mackevičius. Some examples and counterexamples of convergence of σ-algebras and filtrations. Lithuanian Mathematical Journal, 40(3):228–235, 2000.
[17] F. Coquet, J. Mémin, and L. Słomiński. On weak convergence of filtrations. Séminaire de probabilités de Strasbourg, XXXV:306–328, 2001.
[18] F. Delbaen and W.
Schachermayer. A compactness principle for bounded sequences of martingales with applications. In R. Dalang, M. Dozzi, and F. Russo, editors, Seminar on stochastic analysis, random fields and applications (Centro Stefano Franscini, Ascona, 1996), volume 45 of Progress in Probability, pages 137–174. Birkhäuser Verlag, 1999.
[19] F. Delbaen, P. Monat, W. Schachermayer, M. Schweizer, and C. Stricker. Inégalités de normes avec poids et fermeture d'un espace d'intégrales stochastiques. Comptes Rendus de l'Académie des Sciences – Series I – Probability Theory, 319:1079–1081, 1994.
[20] F. Delbaen, P. Monat, W. Schachermayer, M. Schweizer, and C. Stricker. Weighted norm inequalities and hedging in incomplete markets. Finance and Stochastics, 1(3):181–227, 1997.
[21] C. Dellacherie and P.-A. Meyer. Probabilities and potential, volume 29 of North-Holland Mathematics Studies. North-Holland, 1978.
[22] C. Dellacherie and P.-A. Meyer. Probabilities and potential B: theory of martingales. North-Holland Mathematics Studies. Elsevier Science, 1982.
[23] R. Dudley. Real analysis and probability. Cambridge University Press, 2nd edition, 2002.
[24] D. Duffie and P. Protter. From discrete- to continuous-time finance: weak convergence of the financial gain process. Mathematical Finance, 2(1):1–15, 1992.
[25] N. El Karoui, A. Matoussi, and A. Ngoupeyou. Quadratic exponential semimartingales and application to BSDEs with jumps. arXiv preprint arXiv:1603.06191, 2016.
[26] P. Embrechts and M. Hofert. A note on generalized inverses. Mathematical Methods of Operations Research, 77(3):423–432, 2013.
[27] M. Émery. Stabilité des solutions des équations différentielles stochastiques; application aux intégrales multiplicatives stochastiques. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 41(3):241–262, 1978.
[28] M. Émery. Équations différentielles lipschitziennes: étude de la stabilité. Séminaire de probabilités de Strasbourg, XIII:281–293, 1979.
[29] S. Ethier and T. Kurtz.
Markov processes: characterization and convergence. Wiley Series in Probability and Mathematical Statistics. J. Wiley & Sons, New York, 1986.
[30] H. Föllmer and P. Protter. Local martingales and filtration shrinkage. ESAIM: Probability and Statistics, 15:S25–S38, 2011.
[31] S. He, J. Wang, and J. Yan. Semimartingale theory and stochastic calculus. Science Press, 1992.
[32] D. Hoover. Convergence in distribution and Skorokhod convergence for the general theory of processes. Probability Theory and Related Fields, 89(3):239–259, 1991.
[33] J. Jacod. Calcul stochastique et problèmes de martingales, volume 714 of Lecture Notes in Mathematics. Springer, 1979.
[34] J. Jacod and A. Shiryaev. Limit theorems for stochastic processes, volume 288 of Grundlehren der mathematischen Wissenschaften. Springer-Verlag Berlin Heidelberg, 2003.
[35] J. Jacod, S. Méléard, and P. Protter. Explicit form and robustness of martingale representations. The Annals of Probability, 28(4):1747–1780, 2000.
[36] A. Jakubowski, J. Mémin, and G. Pagès. Convergence en loi des suites d'intégrales stochastiques sur l'espace D¹ de Skorokhod. Probability Theory and Related Fields, 81(1):111–137, 1989.
[37] O. Kallenberg. Foundations of modern probability. Probability and its Applications. Springer-Verlag New York, 2nd edition, 2002.
[38] Y. Kchia. Semimartingales and contemporary issues in quantitative finance. PhD thesis, École Polytechnique, 2011.
[39] Y. Kchia and P. Protter. Progressive filtration expansions via a process, with applications to insider trading. International Journal of Theoretical and Applied Finance, 18(4):1550027, 2015.
[40] T. Kurtz and P. Protter. Weak limit theorems for stochastic integrals and stochastic differential equations. The Annals of Probability, 19(3):1035–1070, 1991.
[41] T. Kurtz and P. Protter. Weak convergence of stochastic integrals and differential equations.
In D. Talay and L. Tubaro, editors, Probabilistic models for nonlinear partial differential equations (Lectures given at the 1st session of the Centro Internazionale Matematico Estivo (C.I.M.E.), Montecatini Terme, Italy, May 22–30, 1995), volume 1627 of Lecture Notes in Mathematics, pages 1–41. Springer Berlin Heidelberg, 1996.
[42] D. Leão and A. Ohashi. Weak approximations for Wiener functionals. The Annals of Applied Probability, 23(4):1660–1691, 2013.
[43] D. Leão, A. Ohashi, and A. Simas. Weak differentiability of Wiener functionals and occupation times. Bulletin des Sciences Mathématiques, 149:23–65, 2018.
[44] D. Leão, A. Ohashi, and A. Simas. A weak version of path-dependent functional Itô calculus. The Annals of Probability, 46(6):3399–3441, 2018.
[45] E. Lenglart, D. Lépingle, and M. Pratelli. Présentation unifiée de certaines inégalités de la théorie des martingales. Séminaire de probabilités de Strasbourg, XIV:26–48, 1980.
[46] R. Long. Martingale spaces and inequalities. Vieweg+Teubner Verlag, 1993.
[47] J. Ma, P. Protter, J. San Martín, and S. Torres. Numerical method for backward stochastic differential equations. The Annals of Applied Probability, 12(1):302–316, 2002.
[48] V. Mackevičius. S^p stabilité des solutions d'équations différentielles stochastiques symétriques avec semi-martingales directrices discontinues. Comptes Rendus de l'Académie des Sciences – Series I – Mathematics, 302(19):689–692, 1986.
[49] V. Mackevičius. S^p stability of solutions of symmetric stochastic differential equations with discontinuous driving semimartingales. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques (B), 23(4):575–592, 1987.
[50] D. Madan, M. Pistorius, and M. Stadje. Convergence of BS∆Es driven by random walks to BSDEs: the case of (in)finite activity jumps with general driver. Stochastic Processes and their Applications, 126(5):1553–1584, 2016.
[51] J. Mémin.
Espaces de semi martingales et changement de probabilité. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 52(1):9–39, 1980.
[52] J. Mémin. Stability of Doob–Meyer decomposition under extended convergence. Acta Mathematicae Applicatae Sinica, 19(2):177–190, 2003.
[53] P.-A. Meyer. Sur le lemme de la Vallée Poussin et un théorème de Bismut. Séminaire de probabilités de Strasbourg, XII:770–774, 1978.
[54] M. Mocha and N. Westray. Quadratic semimartingale BSDEs under an exponential moments condition. Séminaire de probabilités de Strasbourg, XLIV:105–139, 2012.
[55] P. Monat and C. Stricker. Fermeture de G_T(Θ) et de L²(F_0) + G_T(Θ). Séminaire de probabilités de Strasbourg, XXVIII:189–194, 1994.
[56] P. Monat and C. Stricker. Föllmer–Schweizer decomposition and mean–variance hedging for general claims. The Annals of Probability, 23(2):605–628, 1995.
[57] I. Monroe. On embedding right continuous martingales in Brownian motion. The Annals of Mathematical Statistics, 43(4):1293–1311, 1972. doi:10.1214/aoms/1177692480.
[58] A. Ngoupeyou. Optimisation des portefeuilles d'actifs soumis au risque de défaut. PhD thesis, Université Évry-Val-d'Essonne, 2010.
[59] A. Papapantoleon, D. Possamaï, and A. Saplaouras. Existence and uniqueness for BSDEs with jumps: the whole nine yards. Electronic Journal of Probability, 23(121):1–68, 2018.
[60] K. Parthasarathy. Probability measures on metric spaces. AMS Chelsea Publishing Series. Academic Press, 1972.
[61] D. Possamaï and X. Tan. Weak approximation of second-order BSDEs. The Annals of Applied Probability, 25(5):2535–2562, 2015.
[62] P. Protter. H^p stability of solutions of stochastic differential equations. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 44(4):337–352, 1978.
[63] P. Protter. Approximations of solutions of stochastic differential equations driven by semimartingales. The Annals of Probability, 13(3):716–743, 1985.
[64] P. Protter.
Stochastic integration and differential equations, volume 21 of Stochastic Modelling and Applied Probability. Springer-Verlag Berlin Heidelberg, 2nd edition, 2005.
[65] A. Saplaouras. Backward stochastic differential equations with jumps are stable. PhD thesis, Technische Universität Berlin, 2017.
[66] M. Schweizer. Variance-optimal hedging in discrete time. Mathematics of Operations Research, 20(1):1–32, 1995.
[67] L. Słomiński. Stability of strong solutions of stochastic differential equations. Stochastic Processes and their Applications, 31(2):173–202, 1989.
[68] B. Tsirelson. Within and beyond the reach of Brownian innovation. In Proceedings of the International Congress of Mathematicians, Vol. III (Berlin, 1998), Extra Vol. III, pages 311–320, 1998.
[69] M. Yor. Sous-espaces denses dans L¹ ou H¹ et représentation des martingales. Séminaire de probabilités de Strasbourg, XII:265–309, 1978.

Department of Mathematics, National Technical University of Athens, Zografou Campus, 15780 Athens, Greece
E-mail address: [email protected]

Department of Industrial Engineering and Operations Research, Columbia University, 500 W 120th Street, 10027, New York, NY, USA
E-mail address: [email protected]

Department of Mathematics, University of Michigan, East Hall, 530 Church Street, Ann Arbor, MI 48109-1043, USA
E-mail address: