Strong supermartingales and limits of nonnegative martingales
The Annals of Probability © Institute of Mathematical Statistics, 2016
STRONG SUPERMARTINGALES AND LIMITS OF NONNEGATIVE MARTINGALES

By Christoph Czichowsky and Walter Schachermayer

London School of Economics and Political Science and Universität Wien
Given a sequence $(M^n)_{n=1}^\infty$ of nonnegative martingales starting at $M^n_0 = 1$, we find a sequence of convex combinations $(\widetilde M^n)_{n=1}^\infty$ and a limiting process $X$ such that $(\widetilde M^n_\tau)_{n=1}^\infty$ converges in probability to $X_\tau$, for all finite stopping times $\tau$. The limiting process $X$ is then an optional strong supermartingale. A counterexample reveals that the convergence in probability cannot be replaced by almost sure convergence in this statement. We also give similar convergence results for sequences of optional strong supermartingales $(X^n)_{n=1}^\infty$, their left limits $(X^n_-)_{n=1}^\infty$ and their stochastic integrals $(\int\varphi\,dX^n)_{n=1}^\infty$, and explain the relation to the notion of the Fatou limit.
Received December 2013; revised July 2014.
Supported by the European Research Council (ERC) under Grant FA506041.
Supported in part by the Austrian Science Fund (FWF) under Grant P25815, the European Research Council (ERC) under Grant FA506041 and by the Vienna Science and Technology Fund (WWTF) under Grant MA09-003.

AMS 2000 subject classifications.

Key words and phrases. Komlós's lemma, limits of nonnegative martingales, Fatou limit, optional strong supermartingales, predictable strong supermartingales, limits of stochastic integrals, convergence in probability at all finite stopping times, substitute for compactness.

This is an electronic reprint of the original article published by the Institute of Mathematical Statistics in The Annals of Probability, 2016, Vol. 44, No. 1, 171–205. This reprint differs from the original in pagination and typographic detail.

1. Introduction.

Komlós's lemma (see [12, 18] and [4]) is a classical result on the convergence of random variables that can be used as a substitute for compactness. It has turned out to be very useful, similarly to the Bolzano–Weierstrass theorem, and has become a workhorse of stochastic analysis in the past decades. In this paper, we generalize this result to work directly with nonnegative martingales and convergence in probability simultaneously at all finite stopping times.

Let us briefly explain this in more detail. Komlós's subsequence theorem states that given a bounded sequence $(f_n)_{n=1}^\infty$ of random variables in $L^1(P)$, there exists a random variable $f \in L^1(P)$ and a subsequence $(f_{n_k})_{k=1}^\infty$ such that the Cesàro means of any subsubsequence $(f_{n_{k_j}})_{j=1}^\infty$ converge almost surely to $f$. It quickly follows that there exists a sequence $(\tilde f_n)_{n=1}^\infty$ of convex combinations $\tilde f_n \in \mathrm{conv}(f_n, f_{n+1}, \ldots)$ that converges to $f$ almost surely, a statement we refer to as Komlós's lemma.

Replacing the almost sure convergence by the concept of Fatou convergence, Föllmer and Kramkov [9] obtained the following variant of Komlós's lemma for stochastic processes. Given a sequence $(M^n)_{n=1}^\infty$ of nonnegative martingales $M^n = (M^n_t)_{0\le t\le 1}$ starting at $M^n_0 = 1$, there exists a sequence $(\overline M^n)_{n=1}^\infty$ of convex combinations $\overline M^n \in \mathrm{conv}(M^n, M^{n+1}, \ldots)$ and a nonnegative càdlàg supermartingale $X = (X_t)_{0\le t\le 1}$ such that $\overline M^n$ is Fatou convergent along the rationals $\mathbb Q\cap[0,1]$ to $X$ in the sense that
$$X_t = \lim_{q\in\mathbb Q\cap[0,1],\, q\downarrow t}\limsup_{n\to\infty}\overline M^n_q = \lim_{q\in\mathbb Q\cap[0,1],\, q\downarrow t}\liminf_{n\to\infty}\overline M^n_q, \qquad P\text{-a.s.},$$
for all $t\in[0,1)$, and $X_1 = \lim_{n\to\infty}\overline M^n_1$.

In this paper, we are interested in a different version of Komlós's lemma for nonnegative martingales in the following sense. Given the sequence $(M^n)_{n=1}^\infty$ of nonnegative martingales as above and a finite stopping time $\tau$, defining $f_n := M^n_\tau$ gives a sequence of nonnegative random variables that is bounded in $L^1(P)$. By Komlós's lemma there exist convex combinations $\widetilde M^n \in \mathrm{conv}(M^n, M^{n+1}, \ldots)$ such that $\widetilde M^n_\tau$ converges in probability to some random variable $f_\tau$. The question is then whether we can find one sequence $(\widetilde M^n)_{n=1}^\infty$ of convex combinations $\widetilde M^n \in \mathrm{conv}(M^n, M^{n+1}, \ldots)$ and a stochastic process $X = (X_t)_{0\le t\le 1}$ such that $\widetilde M^n_\tau$ converges to $X_\tau$ in probability for all finite stopping times $\tau$.

Our first main result (Theorem 2.6) shows that this is possible and that the limiting process $X = (X_t)_{0\le t\le 1}$ is an optional strong supermartingale. These supermartingales have been introduced by Mertens [14] and are optional processes that satisfy the supermartingale inequality at all finite stopping times. This indicates that optional strong supermartingales are the natural processes for our purpose to work with, and in Theorem 2.7 we extend our convergence result from martingales $(M^n)_{n=1}^\infty$ to optional strong supermartingales $(X^n)_{n=1}^\infty$.

In dynamic optimization problems our results can be used as a substitute for compactness; compare, for example, [5, 9, 11, 13, 17].
Here the martingales $M^n$ are usually a minimizing sequence of density processes of equivalent martingale measures for the dual problem or, as in [5] and [9], the wealth processes of self-financing trading strategies.

At a fixed stopping time the convergence in probability can always be strengthened to almost sure convergence by simply passing to a subsequence. By means of a counterexample (Proposition 4.1) we show that this is not possible for all stopping times simultaneously.

Conversely, one can ask what the smallest class of stochastic processes is that is closed under convergence in probability at all finite stopping times and contains all bounded martingales. Our second contribution (Theorem 2.8) is to show that this is precisely the class of all optional strong supermartingales, provided the underlying probability space is sufficiently rich to support a Brownian motion.

As the limiting strong supermartingale of a sequence of martingales in the sense of convergence in probability at all finite stopping times is no longer a semimartingale, we need to restrict the integrands to be predictable finite variation processes $\varphi = (\varphi_t)_{0\le t\le 1}$ to come up with a similar convergence result for stochastic integrals in Proposition 2.12. For this, we need to extend our convergence result to ensure the convergence of the left limit processes $(X^n_-)_{n=1}^\infty$ in probability at all finite stopping times to a limiting process $X^{(0)} = (X^{(0)}_t)_{0\le t\le 1}$ as well, after possibly passing once more to convex combinations. It turns out that $X^{(0)}$ is a predictable strong supermartingale that does, in general, not coincide with the left limit process $X_-$ of the limiting optional strong supermartingale $X$. The notion of a predictable strong supermartingale has been introduced by Chung and Glover [2] and refers to predictable processes that satisfy the supermartingale inequality at all predictable stopping times. Using, instead of the time interval $I = [0,1]$, its Alexandroff double arrow space $\widetilde I = [0,1]\times\{0,1\}$ as index set, we can merge both limiting strong supermartingales into one supermartingale $X = (X_{\tilde t})_{\tilde t\in\widetilde I}$ indexed by $\widetilde I$.

Our motivation for studying these questions comes from portfolio optimization under transaction costs in mathematical finance [3]. While for the problem without transaction costs the solution to the dual problem is always attained as a Fatou limit, the dual optimizer under transaction costs is in general a truly làdlàg optional strong supermartingale. So we expect our results to appear naturally whenever one optimizes over nonnegative martingales that are not uniformly integrable or stable under concatenation, and they might find other applications as well.

The paper is organized as follows. We formulate the problem and state our main results in Section 2. The proofs are given in Sections 3, 5, 6 and 7. Section 4 provides the counterexample showing that our convergence results cannot be strengthened to almost sure convergence.
2. Formulation of the problem and main results.
Let $(\Omega,\mathcal F,P)$ be a probability space and $L^0(P) = L^0(\Omega,\mathcal F,P)$ the space of all real-valued random variables. As usual, we equip $L^0(P)$ with the topology of convergence in probability and denote by $L^0_+(P) = L^0(\Omega,\mathcal F,P;\mathbb R_+)$ its positive cone. We call a subset $A$ of $L^0_+(P)$ bounded in probability, or simply bounded in $L^0(P)$, if $\lim_{m\to\infty}\sup_{f\in A}P(|f|>m) = 0$.

Komlós's subsequence theorem (see [12] and [18]) states the following.
Theorem 2.1.
Let $(f_n)_{n=1}^\infty$ be a bounded sequence of random variables in $L^1(\Omega,\mathcal F,P)$. Then there exists a subsequence $(f_{n_k})_{k=1}^\infty$ and a random variable $f$ such that the Cesàro means $\frac1J\sum_{j=1}^J f_{n_{k_j}}$ of any subsubsequence $(f_{n_{k_j}})_{j=1}^\infty$ converge $P$-almost surely to $f$, as $J\to\infty$.

In applications this result is often used in the following variant, which we also refer to as Komlós's lemma; compare Lemma A.1 in [4].
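In the special case of an i.i.d. integrable sequence, no subsequence is needed and the Cesàro convergence in Theorem 2.1 reduces to the strong law of large numbers. The following numerical sketch is ours (it assumes NumPy is available and is not part of the original text); it illustrates only this special case:

```python
import numpy as np

# I.i.d. integrable sequence: f_n ~ Exp(1), so E[f_n] = 1 and the sequence
# is bounded in L^1(P).  In this i.i.d. special case the Cesaro means
# (1/J) * (f_1 + ... + f_J) converge almost surely to E[f_1] = 1 by the
# strong law of large numbers -- the conclusion of Komlos's theorem with
# the trivial subsequence n_k = k.
rng = np.random.default_rng(0)
f = rng.exponential(scale=1.0, size=100_000)
cesaro = np.cumsum(f) / np.arange(1, f.size + 1)

print(cesaro[9], cesaro[-1])  # early Cesaro mean is noisy, late mean is near 1
```

For a general $L^1(P)$-bounded sequence that is not i.i.d., the passage to a subsequence, or to convex combinations as in Corollary 2.2 below, is essential.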
Corollary 2.2.
Let $(f_n)_{n=1}^\infty$ be a sequence of nonnegative random variables that is bounded in $L^1(P)$. Then there exists a sequence $(\tilde f_n)_{n=1}^\infty$ of convex combinations $\tilde f_n \in \mathrm{conv}(f_n, f_{n+1}, \ldots)$ and a nonnegative random variable $f\in L^0(P)$ such that $\tilde f_n \xrightarrow{P\text{-a.s.}} f$.

As has been illustrated by the work of Kramkov and Schachermayer [13] and Žitković [19] (see also [17]), Komlós's lemma can be used as a substitute for compactness, for example, in the derivation of minimax theorems for Lagrange functions, where the optimization is typically over convex sets. Replacing the $P$-almost sure convergence by the concept of Fatou convergence, Föllmer and Kramkov [9] used Komlós's lemma to come up with a similar convergence result for stochastic processes. For this, we equip the probability space $(\Omega,\mathcal F,P)$ with a filtration $\mathbb F = (\mathcal F_t)_{0\le t\le 1}$ satisfying the usual conditions of right continuity and completeness and let $(M^n)_{n=1}^\infty$ be a sequence of nonnegative martingales $M^n = (M^n_t)_{0\le t\le 1}$ starting at $M^n_0 = 1$. For all unexplained notation from the general theory of stochastic processes and stochastic integration, we refer to the book of Dellacherie and Meyer [8]. The construction of the Fatou limit by Föllmer and Kramkov can be summarized in the following proposition.

Proposition 2.3 (Lemma 5.2 of [9]).
Let $(M^n)_{n=1}^\infty$ be a sequence of nonnegative martingales $M^n = (M^n_t)_{0\le t\le 1}$ starting at $M^n_0 = 1$. Then there exists a sequence $(\overline M^n)_{n=1}^\infty$ of convex combinations $\overline M^n \in \mathrm{conv}(M^n, M^{n+1}, \ldots)$ and nonnegative random variables $Z_q$, for $q\in\mathbb Q\cap[0,1]$, such that:

(1) $\overline M^n_q \xrightarrow{P\text{-a.s.}} Z_q$ for all $q\in\mathbb Q\cap[0,1]$;

(2) the process $X = (X_t)_{0\le t\le 1}$ given by
$$X_t := \lim_{q\in\mathbb Q\cap[0,1],\, q\downarrow t} Z_q \quad\text{for } t\in[0,1) \qquad\text{and}\qquad X_1 = Z_1 \tag{2.1}$$
is a càdlàg supermartingale;

(3) the process $X = (X_t)_{0\le t\le 1}$ is the Fatou limit of the sequence $(\overline M^n)_{n=1}^\infty$ along $\mathbb Q\cap[0,1]$, that is,
$$X_t = \lim_{q\in\mathbb Q\cap[0,1],\, q\downarrow t}\limsup_{n\to\infty}\overline M^n_q = \lim_{q\in\mathbb Q\cap[0,1],\, q\downarrow t}\liminf_{n\to\infty}\overline M^n_q, \qquad P\text{-a.s.},$$
for all $t\in[0,1)$, and $X_1 = \lim_{n\to\infty}\overline M^n_1$.

Here it is important to note that $\lim_{q\in\mathbb Q\cap[0,1],\, q\downarrow t}$ denotes the limit to $t$ through all $q\in\mathbb Q\cap[0,1]$ that are strictly bigger than $t$. Therefore we do not have in general that $X_t = \lim_{n\to\infty}\overline M^n_t$ for $t\in[0,1]$, not even for $t\in\mathbb Q\cap[0,1]$.

Example 2.4.
Let $(Y_n)_{n=1}^\infty$ be a sequence of random variables taking values in $\{0,n\}$ such that $P[Y_n = n] = \frac1n$, and define a sequence $(M^n)_{n=1}^\infty$ of martingales $M^n = (M^n_t)_{0\le t\le 1}$ by
$$M^n_t = 1 + (Y_n - 1)\mathbf 1_{[\frac12+\frac1n,\,1]}(t).$$
Then $M^n_t$ converges in probability to $\mathbf 1_{[0,\frac12]}(t)$ for each $t\in[0,1]$, whereas the Fatou limit is $X = \mathbf 1_{[0,\frac12)}$.

The convergence, of course, also fails at stopping times in general. This motivates us to ask for a different extension of Komlós's lemma to nonnegative martingales in the following sense. Let $(M^n)_{n=1}^\infty$ again be a sequence of nonnegative martingales $M^n = (M^n_t)_{0\le t\le 1}$ starting at $M^n_0 = 1$ and $\tau$ a finite stopping time. Then defining $f_n := M^n_\tau$ gives a sequence $(f_n)_{n=1}^\infty$ of nonnegative random variables that is bounded in $L^1(P)$. By Komlós's lemma there exist convex combinations $\widetilde M^n \in \mathrm{conv}(M^n, M^{n+1}, \ldots)$ and a nonnegative random variable $f_\tau$ such that
$$\widetilde M^n_\tau =: \tilde f_n \xrightarrow{P\text{-a.s.}} f_\tau.$$
The questions are then:

(1) Can we find one sequence $(\widetilde M^n)_{n=1}^\infty$ of convex combinations $\widetilde M^n \in \mathrm{conv}(M^n, M^{n+1}, \ldots)$ such that, for all finite stopping times $\tau$, we have
$$\widetilde M^n_\tau \xrightarrow{P\text{-a.s.}} f_\tau \tag{2.2}$$
for some random variables $f_\tau$ that may depend on the stopping times $\tau$?

(2) If (1) is possible, can we find a stochastic process $X = (X_t)_{0\le t\le 1}$ such that $X_\tau = f_\tau$ for all finite stopping times $\tau$?

(3) If such a process $X = (X_t)_{0\le t\le 1}$ as in (2) exists, what kind of process is it?

Let us start with the last question. If such a process $X = (X_t)_{0\le t\le 1}$ exists, it follows from Fatou's lemma that it is (up to optional measurability) an optional strong supermartingale.

Definition 2.5.
A real-valued stochastic process $X = (X_t)_{0\le t\le 1}$ is called an optional strong supermartingale, if:

(1) $X$ is optional;
(2) $X_\tau$ is integrable for every $[0,1]$-valued stopping time $\tau$;
(3) for all stopping times $\sigma$ and $\tau$ with $0\le\sigma\le\tau\le1$, we have
$$X_\sigma \ge E[X_\tau \mid \mathcal F_\sigma].$$

These processes have been introduced by Mertens [14] as a generalization of the notion of a càdlàg (right continuous with left limits) supermartingale that one usually works with. Indeed, by the optional sampling theorem each càdlàg supermartingale is an optional strong supermartingale, but not every optional strong supermartingale has a càdlàg modification. For example, every deterministic decreasing function $(X_t)_{0\le t\le 1}$ is an optional strong supermartingale, but there is little reason why it should be càdlàg. However, by Theorem 4 in Appendix I of [8], every optional strong supermartingale is indistinguishable from a làdlàg (left and right limits) process, and so we can assume without loss of generality that all optional strong supermartingales we consider in this paper are làdlàg. Similarly to the Doob–Meyer decomposition in the càdlàg case, every optional strong supermartingale $X$ has a unique decomposition
$$X = M - A \tag{2.3}$$
into a local martingale $M$ and a nondecreasing predictable process $A$ starting at $0$. This decomposition is due to Mertens [14] (compare also Theorem 20 in Appendix I of [8]) and is therefore called the Mertens decomposition. Note that, under the usual conditions of completeness and right continuity of the filtration, we can and do choose a càdlàg modification of the local martingale $M$ in (2.3). The nondecreasing process $A$, on the other hand, is in general only làdlàg.

For làdlàg processes $X = (X_t)_{0\le t\le 1}$ we denote by $X_{t+} := \lim_{h\searrow 0}X_{t+h}$ and $X_{t-} := \lim_{h\searrow 0}X_{t-h}$ the right and left limits and by $\Delta_+X_t := X_{t+} - X_t$ and $\Delta X_t := X_t - X_{t-}$ the right and left jumps. We also use the convention that $X_{0-} = 0$ and $X_{1+} = X_1$.

After these preparations we have everything in place to formulate our main results. The proofs will be given in Sections 3, 5, 6 and 7.

Theorem 2.6.
Let $(M^n)_{n=1}^\infty$ be a sequence of nonnegative càdlàg martingales $M^n = (M^n_t)_{0\le t\le 1}$ starting at $M^n_0 = 1$. Then there is a sequence $(\widetilde M^n)_{n=1}^\infty$ of convex combinations $\widetilde M^n \in \mathrm{conv}(M^n, M^{n+1}, \ldots)$ and a nonnegative optional strong supermartingale $X = (X_t)_{0\le t\le 1}$ such that, for every $[0,1]$-valued stopping time $\tau$, we have that
$$\widetilde M^n_\tau \xrightarrow{P} X_\tau. \tag{2.4}$$

Combining the above with a similar convergence result for predictable finite variation processes by Campi and Schachermayer [1] allows us to extend our convergence result to optional strong supermartingales by using the Mertens decomposition. Theorem 2.6 is thus only a special case of the following result.

Theorem 2.7.
Let $(X^n)_{n=1}^\infty$ be a sequence of nonnegative optional strong supermartingales $X^n = (X^n_t)_{0\le t\le 1}$ starting at $X^n_0 = 1$. Then there is a sequence $(\widetilde X^n)_{n=1}^\infty$ of convex combinations $\widetilde X^n \in \mathrm{conv}(X^n, X^{n+1}, \ldots)$ and a nonnegative optional strong supermartingale $X = (X_t)_{0\le t\le 1}$ such that, for every $[0,1]$-valued stopping time $\tau$, we have convergence in probability, that is,
$$\widetilde X^n_\tau \xrightarrow{P} X_\tau. \tag{2.5}$$

We thank Kostas Kardaras for indicating that convergence (2.5) is topological. It corresponds to the weak topology that is generated on the space of optional processes by the topology of $L^0(P)$ and all evaluation mappings $e_\tau(X)(\omega) := X_{\tau(\omega)}(\omega)$ that evaluate an optional process $X = (X_t)_{0\le t\le 1}$ at a finite stopping time $\tau$. By the optional cross-section theorem this topology is Hausdorff.

Given Theorems 2.6 and 2.7 above, one can conversely ask what the smallest class of stochastic processes is that is closed under convergence in probability at all finite stopping times and contains the set of bounded martingales. Here the next result shows that this is the set of optional strong supermartingales.

Theorem 2.8.
Let $X = (X_t)_{0\le t\le 1}$ be an optional strong supermartingale and suppose that its stochastic base $(\Omega,\mathcal F,\mathbb F,P)$ is sufficiently rich to support a Brownian motion $W = (W_t)_{0\le t\le 1}$. Then there is a sequence of bounded càdlàg martingales $(M^n)_{n=1}^\infty$ such that, for every $[0,1]$-valued stopping time $\tau$, we have convergence in probability, that is,
$$M^n_\tau \xrightarrow{P} X_\tau. \tag{2.6}$$

We thank Perkowski and Ruf for pointing out to us that they have independently obtained a result similar to Theorem 2.8 for càdlàg supermartingales in Proposition 5.9 of [15] by taking several limits successively. Moreover, we would like to thank Ruf for insisting on a clarification of an earlier version of Theorem 2.8, which led us to a correction of the statement [convergence in probability in (2.6) as opposed to almost sure convergence] as well as to a more detailed proof.

Let us now turn to the theme of stochastic integration. By Theorem 2.6 the limit of a sequence $(M^n)_{n=1}^\infty$ of martingales in the sense of (2.4) will, in general, no longer be a semimartingale. In order to come up with a similar convergence result for stochastic integrals $\varphi\cdot M^n = \int\varphi\,dM^n$, we therefore need to restrict the choice of integrands $\varphi = (\varphi_t)_{0\le t\le 1}$ to predictable finite variation processes. As we shall explain in more detail in Section 7 below, this allows us to define stochastic integrals $\varphi\cdot X = \int\varphi\,dX$ with respect to optional strong supermartingales $X = (X_t)_{0\le t\le 1}$ pathwise, since $X$ is làdlàg. These integrals coincide with the usual stochastic integrals, if $X = (X_t)_{0\le t\le 1}$ is a semimartingale. For a general predictable, finite variation process $\varphi$, the stochastic integral $\varphi\cdot X$ depends not only on the values of the integrator $X$ but also explicitly on those of its left limits $X_-$; see (7.3) below. As a consequence, in order to obtain a satisfactory convergence result for the integrals $\varphi\cdot X^n$ to a limit $\varphi\cdot X$, we have to take special care of the left limits of the integrators.
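The dependence of such integrals on the left limits of the integrator can already be seen pathwise for elementary integrands: by the usual Lebesgue–Stieltjes conventions, $\int_0^1\mathbf 1_{]\!]s,1]\!]}\,dX = X_1 - X_s$ while $\int_0^1\mathbf 1_{[\![s,1]\!]}\,dX = X_1 - X_{s-}$, so the two integrals differ exactly by the jump $\Delta X_s = X_s - X_{s-}$. The following toy computation is ours (it is not the construction of Section 7 and not the convention (7.3) of the paper):

```python
# Pathwise Stieltjes integrals of a left-open vs a left-closed unit step
# against a pure-jump path X with a single jump of size -0.5 at time s.
# The two integrands differ only at the single time point s, yet the
# integrals differ by the jump Delta X_s = X_s - X_{s-}.

def step_path(t, s=0.5):
    """Pure-jump path: X_t = 1 for t < s and X_t = 0.5 for t >= s."""
    return 1.0 if t < s else 0.5

def integral_left_open(t, s=0.5):
    # integrand 1_{(s,1]}: the integral over [0,t] is X_t - X_s for t > s
    return step_path(t) - step_path(s)

def integral_left_closed(t, s=0.5):
    # integrand 1_{[s,1]}: the integral over [0,t] is X_t - X_{s-} for t >= s
    x_s_minus = 1.0  # left limit of the path at s
    return step_path(t) - x_s_minus

print(integral_left_open(1.0))    # 0.0: the jump at s is not picked up
print(integral_left_closed(1.0))  # -0.5: the jump at s is picked up
```

This is why controlling the limits of $\widetilde X^n_{\tau}$ alone is not enough and the left limits $\widetilde X^n_{\tau-}$ have to converge as well.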
(The convergence of stochastic integrals is crucially needed in applications in mathematical finance, where the integrals correspond to the gains from trading by using self-financing trading strategies.) More precisely: given the convergence $\widetilde X^n_\tau \xrightarrow{P} X_\tau$ as in (2.5), at all $[0,1]$-valued stopping times $\tau$, of a sequence $(\widetilde X^n)_{n=1}^\infty$ of optional strong supermartingales, do we have the convergence of the left limits
$$\widetilde X^n_{\sigma-} \xrightarrow{P} X_{\sigma-} \tag{2.7}$$
for all $[0,1]$-valued stopping times $\sigma$ as well?

For totally inaccessible stopping times $\sigma$, we are able to prove that (2.7) is indeed the case.

Proposition 2.9.
Let $(X^n)_{n=1}^\infty$ and $X$ be nonnegative optional strong supermartingales $(X^n_t)_{0\le t\le 1}$ and $(X_t)_{0\le t\le 1}$ such that
$$X^n_q \xrightarrow{P} X_q$$
for every rational number $q\in[0,1]$. Then
$$X^n_{\tau-} \xrightarrow{P} X_{\tau-}$$
for all $[0,1]$-valued totally inaccessible stopping times $\tau$.

At accessible stopping times $\sigma$, the convergence $\widetilde X^n_\tau \xrightarrow{P} X_\tau$ for all finite stopping times $\tau$ does not necessarily imply convergence (2.7) of the left limits $\widetilde X^n_{\sigma-}$. Moreover, even if the left limits $\widetilde X^n_{\sigma-}$ converge to some random variable $Y$ in probability, it may happen that $Y \ne X_{\sigma-}$. In order to take this phenomenon into account, we need to consider two processes $X^{(0)} = (X^{(0)}_t)_{0\le t\le 1}$ and $X^{(1)} = (X^{(1)}_t)_{0\le t\le 1}$ that correspond to the limiting processes of the left limits $\widetilde X^n_-$ and of the processes $\widetilde X^n$ themselves or, alternatively, replace the time interval $I = [0,1]$ by the set $\widetilde I = [0,1]\times\{0,1\}$ with the lexicographic order. The set $\widetilde I$ is motivated by the Alexandroff double arrow space. Equipping the set $\widetilde I$ with the lexicographic order simply means that we split every point $t\in[0,1]$ into a left and a right point, $(t,0)$ and $(t,1)$, such that $(t,0) < (t,1)$, that $(t,0) \le (s,0)$ if and only if $t\le s$, and that $(t,1) < (s,0)$ if and only if $t < s$. Then we can merge both processes, $X^{(0)} = (X^{(0)}_t)_{0\le t\le 1}$ and $X^{(1)} = (X^{(1)}_t)_{0\le t\le 1}$, into one process,
$$X_{\tilde t} = \begin{cases} X^{(0)}_t, & \tilde t = (t,0),\\[2pt] X^{(1)}_t, & \tilde t = (t,1), \end{cases} \qquad \tilde t\in\widetilde I,$$
which is by (2.11) below a supermartingale indexed by $\tilde t\in\widetilde I$. As the limit of the left limits, the process $X^{(0)} = (X^{(0)}_t)_{0\le t\le 1}$ will be predictable, and it will turn out that it is even a predictable strong supermartingale. We refer to the article of Chung and Glover [2] (see the second remark following the proof of Theorem 3 on page 243) as well as Definition 3 in Appendix I of the book of Dellacherie and Meyer [8] for the subsequent concept.

Definition 2.10.
A real-valued stochastic process $X = (X_t)_{0\le t\le 1}$ is called a predictable strong supermartingale if:

(1) $X$ is predictable;
(2) $X_\tau$ is integrable for every $[0,1]$-valued predictable stopping time $\tau$;
(3) for all predictable stopping times $\sigma$ and $\tau$ with $0\le\sigma\le\tau\le1$, we have
$$X_\sigma \ge E[X_\tau \mid \mathcal F_{\sigma-}].$$

After these preparations we are able to extend Theorem 2.7 to hold also for left limits.
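For orientation, here is the simplest deterministic illustration of Definitions 2.5 and 2.10 and of the Mertens decomposition (2.3); the computation is ours and not part of the original text. Fix $t_0\in(0,1)$ and consider the decreasing path $X_t = \mathbf 1_{[0,t_0]}(t)$:

```latex
% X_t = 1_{[0,t_0]}(t) is decreasing, hence an optional strong
% supermartingale and, being deterministic, also a predictable strong
% supermartingale in the sense of Definition 2.10.
X_t \;=\; \mathbf 1_{[0,t_0]}(t)
\qquad\Longrightarrow\qquad
X \;=\; M - A
\quad\text{with}\quad
M \equiv 1,\qquad A_t = \mathbf 1_{(t_0,1]}(t).
% X is ladlag but not cadlag: the right jump
%   \Delta_+ X_{t_0} = X_{t_0+} - X_{t_0} = -1
% is absorbed by the right jump \Delta_+ A_{t_0} = 1 of the nondecreasing
% predictable part A of the Mertens decomposition.
```

Right jumps of the nondecreasing part of exactly this kind reappear below as the "loss of mass" of the limiting processes.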
Theorem 2.11.
Let $(X^n)_{n=1}^\infty$ be a sequence of nonnegative optional strong supermartingales starting at $X^n_0 = 1$. Then there is a sequence $(\widetilde X^n)_{n=1}^\infty$ of convex combinations $\widetilde X^n \in \mathrm{conv}(X^n, X^{n+1}, \ldots)$, a nonnegative optional strong supermartingale $X^{(1)} = (X^{(1)}_t)_{0\le t\le 1}$ and a nonnegative predictable strong supermartingale $X^{(0)} = (X^{(0)}_t)_{0\le t\le 1}$ such that
$$\widetilde X^n_\tau \xrightarrow{P} X^{(1)}_\tau, \tag{2.9}$$
$$\widetilde X^n_{\tau-} \xrightarrow{P} X^{(0)}_\tau, \tag{2.10}$$
for all $[0,1]$-valued stopping times $\tau$, and we have that
$$X^{(1)}_{\tau-} \ge X^{(0)}_\tau \ge E[X^{(1)}_\tau \mid \mathcal F_{\tau-}] \tag{2.11}$$
for all $[0,1]$-valued predictable stopping times $\tau$.

With the above we can now formulate the following proposition. Note that, since $\varphi\cdot\widetilde X^n \in \mathrm{conv}(\varphi\cdot X^n, \varphi\cdot X^{n+1}, \ldots)$, part (2) is indeed an analogous result to Theorem 2.7 for stochastic integrals.

Proposition 2.12.
Let $(X^n)_{n=1}^\infty$ be a sequence of nonnegative optional strong supermartingales $X^n = (X^n_t)_{0\le t\le 1}$ starting at $X^n_0 = 1$. Then there exist convex combinations $\widetilde X^n \in \mathrm{conv}(X^n, X^{n+1}, \ldots)$ as well as an optional and a predictable strong supermartingale, $X^{(1)}$ and $X^{(0)}$, such that:

(1) $\widetilde X^n_\tau \xrightarrow{P} X^{(1)}_\tau$ and $\widetilde X^n_{\tau-} \xrightarrow{P} X^{(0)}_\tau$ for all $[0,1]$-valued stopping times $\tau$;

(2) for all predictable processes $\varphi = (\varphi_t)_{0\le t\le 1}$ of finite variation, we have that
$$\varphi\cdot\widetilde X^n_\tau \xrightarrow{P} \int_0^\tau \varphi^c_u\,dX^{(1)}_u + \sum
$$
3. Proof of Theorems 2.6 and 2.7.
The basic idea for the proof of Theorem 2.6 is to consider the Fatou limit $X = (X_t)_{0\le t\le 1}$ as defined in (2.1). Morally speaking, $X = (X_t)_{0\le t\le 1}$ should also be the limit of the sequence $(\overline M^n)_{n=1}^\infty$ in the sense of (2.4). However, as we illustrated in Example 2.4, things may be more delicate. While we do not need to have convergence in probability at all finite stopping times in general, the next lemma shows that we always have one-sided $P$-almost sure convergence.

Lemma 3.1.
Let $X$ and $(\overline M^n)_{n=1}^\infty$ be as in Proposition 2.3. Then we have that
$$(\overline M^n_\tau - X_\tau)^- \xrightarrow{P\text{-a.s.}} 0, \quad\text{as } n\to\infty, \tag{3.1}$$
for all $[0,1]$-valued stopping times $\tau$, where $x^- = \max\{-x,0\}$.

Proof.
Let $\sigma_k$ be the $k$th dyadic approximation of the stopping time $\tau$, that is,
$$\sigma_k := \inf\{t\in\mathbb D_k \mid t > \tau\}\wedge 1, \tag{3.2}$$
where $\mathbb D_k = \{j2^{-k} \mid j = 0,\ldots,2^k\}$. As $\overline M^n$ is a martingale, we have $\overline M^n_\tau = E[\overline M^n_{\sigma_k} \mid \mathcal F_\tau]$, for every $n\in\mathbb N$, and therefore
$$\liminf_{n\to\infty}\overline M^n_\tau = \liminf_{n\to\infty}E[\overline M^n_{\sigma_k} \mid \mathcal F_\tau] \ge E\Big[\liminf_{n\to\infty}\overline M^n_{\sigma_k} \,\Big|\, \mathcal F_\tau\Big] = E[Z_{\sigma_k} \mid \mathcal F_\tau]$$
for all $k$ by Fatou's lemma, where $Z_q$ is defined in Proposition 2.3 for every $q\in\mathbb Q\cap[0,1]$. Since $Z_{\sigma_k} \to X_\tau$ $P$-a.s. and in $L^1(P)$ by backward supermartingale convergence (see Theorem V.30 and the proof of Theorem IV.10 in [8], e.g.), we obtain that $\liminf_{n\to\infty}\overline M^n_\tau \ge X_\tau$, which proves (3.1). □

For any sequence $(\widehat M^n)_{n=1}^\infty$ of convex combinations $\widehat M^n \in \mathrm{conv}(\overline M^n, \overline M^{n+1}, \ldots)$, we can use the one-sided convergence (3.1) to show in the next lemma that at any given stopping time $\tau$, we either have the convergence of $\widehat M^n_\tau$ to $X_\tau$ in probability, or there exists a sequence $(\widetilde M^n)_{n=1}^\infty$ of convex combinations $\widetilde M^n \in \mathrm{conv}(\widehat M^n, \widehat M^{n+1}, \ldots)$ and a nonnegative random variable $Y$ such that $\widetilde M^n_\tau \xrightarrow{P} Y$. In the latter case, $Y \ge X_\tau$ and $E[Y] > E[X_\tau]$, as we shall now show.

Lemma 3.2.
Let $X$ and $(\overline M^n)_{n=1}^\infty$ be as in Proposition 2.3, let $\tau$ be a $[0,1]$-valued stopping time and $(\widehat M^n)_{n=1}^\infty$ a sequence of convex combinations $\widehat M^n \in \mathrm{conv}(\overline M^n, \overline M^{n+1}, \ldots)$. Then we have either
$$(\widehat M^n_\tau - X_\tau)^+ \xrightarrow{P} 0, \quad\text{as } n\to\infty, \tag{3.3}$$
with $x^+ = \max\{x,0\}$, or there exists a sequence $(\widetilde M^n)_{n=1}^\infty$ of convex combinations $\widetilde M^n \in \mathrm{conv}(\widehat M^n, \widehat M^{n+1}, \ldots) \subseteq \mathrm{conv}(\overline M^n, \overline M^{n+1}, \ldots)$ and a nonnegative random variable $Y$ such that
$$\widetilde M^n_\tau \xrightarrow{P} Y, \quad\text{as } n\to\infty, \tag{3.4}$$
and
$$E[Y] > E[X_\tau]. \tag{3.5}$$

Proof.
If (3.3) does not hold, there exist $\alpha > 0$ and a subsequence of $(\widehat M^n)$, still denoted by $(\widehat M^n)_{n=1}^\infty$ and again indexed by $n$, such that
$$P(\widehat M^n_\tau - X_\tau > \alpha) \ge \alpha \tag{3.6}$$
for all $n$. Since $E[\widehat M^n_\tau] = 1$, there exists by Komlós's lemma a sequence $(\widetilde M^n)_{n=1}^\infty$ of convex combinations $\widetilde M^n \in \mathrm{conv}(\widehat M^n, \widehat M^{n+1}, \ldots)$ and a nonnegative random variable $Y$ such that (3.4) holds. To see (3.5), we observe that, for each $\varepsilon > 0$,
$$\mathbf 1_{\{\widehat M^n_\tau \ge X_\tau - \varepsilon\}} \xrightarrow{P} 1, \quad\text{as } n\to\infty,$$
by (3.1). From the inequality $\widehat M^n_\tau\mathbf 1_{A_n} \ge X_\tau\mathbf 1_{A_n} + \alpha\mathbf 1_{A_n}$, where $A_n := \{\widehat M^n_\tau \ge X_\tau + \alpha\}$, we obtain
$$\widehat M^n_\tau\mathbf 1_{\{\widehat M^n_\tau \ge X_\tau - \varepsilon\}} \ge (X_\tau - \varepsilon)\mathbf 1_{\{\widehat M^n_\tau \ge X_\tau - \varepsilon\}} + \alpha\mathbf 1_{A_n}.$$
Now taking the convex combinations leading to $\widetilde M^n$ and then $\widetilde Y^n \in \mathrm{conv}(\alpha\mathbf 1_{A_n}, \alpha\mathbf 1_{A_{n+1}}, \ldots)$ such that $\widetilde Y^n \xrightarrow{P} \widetilde Y$, as $n\to\infty$, we derive
$$Y \ge X_\tau + \widetilde Y - \varepsilon \tag{3.7}$$
by passing to limits. Since $|\widetilde Y^n| \le \alpha$ and $E[\widetilde Y^n] \ge \alpha^2$, we deduce from Lebesgue's theorem that $\widetilde Y^n \xrightarrow{L^1(P)} \widetilde Y$, as $n\to\infty$, and $E[\widetilde Y] \ge \alpha^2$. Therefore (3.7) implies that
$$E[Y] \ge E[X_\tau] + E[\widetilde Y] - \varepsilon \ge E[X_\tau] + \alpha^2 - \varepsilon$$
for each $\varepsilon > 0$, and hence (3.5) follows by letting $\varepsilon \to 0$. □

By the previous lemma we either already have the convergence of $\widehat M^n_\tau$ to $X_\tau$ in probability at a given stopping time $\tau$, or we can use Komlós's lemma once again to find convex combinations $\widetilde M^n \in \mathrm{conv}(\widehat M^n, \widehat M^{n+1}, \ldots)$ and a random variable $Y$ such that $\widetilde M^n_\tau \xrightarrow{P} Y$. The next lemma shows that we can exhaust this latter phenomenon by a countable number of stopping times $(\tau_m)_{m=1}^\infty$ and that we can use the random variables $Y_m := P\text{-}\lim_{n\to\infty}\widetilde M^n_{\tau_m}$ to redefine the càdlàg supermartingale $X$ at the stopping times $\tau_m$ to obtain a limiting process $\widetilde X = (\widetilde X_t)_{0\le t\le 1}$. The limiting process $\widetilde X$ will be an optional strong supermartingale, and we can relate the loss of mass $Y_m - X_{\tau_m}$ to the right jumps $\Delta_+\widetilde A_{\tau_m}$ of the predictable part of the Mertens decomposition $\widetilde X = \widetilde M - \widetilde A$.

Lemma 3.3.
In the setting of Proposition 2.3, let $(\tau_m)_{m=1}^\infty$ be a sequence of $[0,1]\cup\{\infty\}$-valued stopping times with disjoint graphs, that is, $[\![\tau_m]\!]\cap[\![\tau_k]\!] = \emptyset$ for $m \ne k$. Then there exists a sequence $(\widetilde M^n)_{n=1}^\infty$ of convex combinations $\widetilde M^n \in \mathrm{conv}(\overline M^n, \overline M^{n+1}, \ldots)$ such that, for each $m\in\mathbb N$, the sequence $(\widetilde M^n_{\tau_m})_{n=1}^\infty$ converges $P$-a.s. to a random variable $Y_m$ on $\{\tau_m < \infty\}$. The process $\widetilde X = (\widetilde X_t)_{0\le t\le 1}$ given by
$$\widetilde X_t(\omega) = \begin{cases} Y_m(\omega), & t = \tau_m(\omega) < \infty \text{ and } m\in\mathbb N,\\ X_t(\omega), & \text{elsewhere}, \end{cases} \tag{3.8}$$
is an optional strong supermartingale with the following properties:

(1) $\widetilde X_+ = X$, where $\widetilde X_+$ denotes the process of the right limits of $\widetilde X$;

(2) denoting by $\widetilde X = \widetilde M - \widetilde A$ the Mertens decomposition of $\widetilde X$, we have
$$\widetilde X_{\tau_m} - X_{\tau_m} = -\Delta_+\widetilde X_{\tau_m} = \Delta_+\widetilde A_{\tau_m} := \widetilde A_{\tau_m+} - \widetilde A_{\tau_m} \tag{3.9}$$
for each $m\in\mathbb N$.

Proof.
Combining Koml´os’s lemma with a diagonalization procedurewe obtain nonnegative random variables Y m and convex combinations f M n ∈ conv( M n , M n +1 , . . . ) such that f M nτ m P - a . s . −→ Y m , for all m ∈ N , and we can define the process e X via (3.8). This process e X isclearly optional.To show that e X is indeed an optional strong supermartingale, we need toverify that e X ̺ ≥ E [ e X ̺ |F ̺ ](3.10) C. CZICHOWSKY AND W. SCHACHERMAYER for every pair of [0 , ̺ and ̺ such that ̺ ≤ ̺ . Forthis, we observe that it is sufficient to consider (3.10) on the set { ̺ < ̺ } .For i = 1 , ̺ i,k ) ∞ k =1 the k th dyadic approximation of ̺ i as in(3.2) above. Then we have E [ e X ̺ |F ̺ ]= E " lim n →∞ ∞ X m =1 f M nτ m { τ m = ̺ } + lim k →∞ (cid:16) lim n →∞ M n̺ ,k (cid:17) { τ m = ̺ , ∀ m } (cid:12)(cid:12)(cid:12) F ̺ = E " lim n →∞ ∞ X m =1 f M nτ m { τ m = ̺ } + lim k →∞ (cid:16) lim n →∞ f M n̺ ,k (cid:17) { τ m = ̺ , ∀ m } (cid:12)(cid:12)(cid:12) F ̺ ≤ E " lim n →∞ ∞ X m =1 f M nτ m { τ m = ̺ } (3.11) + lim k →∞ (cid:16) lim n →∞ E [ f M n̺ ,k |F ̺ ] (cid:17) { τ m = ̺ , ∀ m } (cid:12)(cid:12)(cid:12) F ̺ = E h lim n →∞ f M n̺ |F ̺ i (3.12) ≤ E h lim k →∞ lim n →∞ E [ f M n̺ |F ̺ ,k ] |F ̺ i (3.13) = E h lim k →∞ lim n →∞ f M n̺ ,k |F ̺ i (3.14) = E " lim k →∞ lim n →∞ ∞ X m =1 f M n̺ ,k { τ m = ̺ } + lim k →∞ lim n →∞ f M n̺ ,k { τ m = ̺ , ∀ m } (cid:12)(cid:12)(cid:12) F ̺ ≤ lim k →∞ lim n →∞ ∞ X m =1 E [ f M n̺ ,k |F ̺ ] { τ m = ̺ } (3.15) + E h lim k →∞ lim n →∞ M n̺ ,k |F ̺ i { τ m = ̺ , ∀ m } = lim n →∞ ∞ X m =1 f M nτ m { τ m = ̺ } + E h lim k →∞ Z ̺ ,k |F ̺ i { τ m = ̺ , ∀ m } (3.16) = ∞ X m =1 e X τ m { τ m = ̺ } + X ̺ { τ m = ̺ , ∀ m } = e X ̺ (3.17)by using Fatou’s lemma in (3.11), (3.13) and (3.15), the martingale prop-erty of the f M n and the convergence in probability of the M n in (3.12), UPERMARTINGALES AND LIMITS OF NONNEGATIVE MARTINGALES (3.14) and (3.16) and exploiting the backward supermartingale convergenceof ( Z ̺ ,k ) ∞ k =1 in 
(3.17).(1) We argue by contradiction and assume that G := { e X + = X } has P ( π ( G )) >
0, where π : Ω × [0 , → Ω is given by π (( ω, t )) = ω . As the set G is optional, there exists by the optional cross-section theorem (TheoremIV.84 in [8]) a [0 , ∪ {∞} -valued stopping time σ such that J σ { σ< ∞} K ⊆ G and P ( σ < ∞ ) >
0, which is equivalent to the assumption that the set F := { e X σ + = X σ } has strictly positive measure P ( F ) >
0. Without loss ofgenerality we can assume that there exists δ > F ⊆ { σ + δ < } .Let ( h i ) ∞ i =1 be a sequence of real numbers decreasing to 0 that are no atomsof the laws τ m − σ for all m ∈ N . Then defining σ i := ( σ + h i ) F ∧ i ∈ N gives a sequence of stopping times such that e X σ i = X σ i for each i and σ i ց σ on F . But this implies that e X σ + = lim i →∞ e X σ i = lim i →∞ X σ i = X σ on F, (3.18)which contradicts P ( F ) > P ( π ( G )) > X at countably many stopping times( τ m ) ∞ m =1 to obtain e X leaves right limits of the l`adl`ag optional strong su-permartingale e X invariant so that these remain e X τ m + = X τ + m = X τ m on { τ m < } for each m. (3.19)Since f M is c`adl`ag, this implies that e X τ m − X τ m = − ∆ + e X τ m = ∆ + e A τ m (3.20)for each m , thus proving property (2). (cid:3) Continuing with the proof of Theorem 2.6, the idea is to define the limitingsupermartingale X by (3.8) and to use Lemma 3.3 to enforce the convergenceat a well-chosen countable number of stopping times ( τ m ) ∞ m =1 to obtain theconvergence in (2.5) for all stopping times. It is rather intuitive that one hasto take special care of the jumps of the limiting process X . As these can beexhausted by a sequence ( τ k ) ∞ k =1 of stopping times, the previous lemma cantake care of this issue. However, the subsequent example shows that therealso may be a problem with the convergence in (2.4) at a stopping time τ at which X is continuous . Example 3.4.
Let $\sigma : \Omega \to [0,1]$ be a totally inaccessible stopping time with compensator $(A_t)_{0\le t\le1}$, and let $M^{1,n}$ and $M^{2,n}$, for $n \in \mathbb N$, be nonnegative martingales with limiting optional strong supermartingales $X^1$ and $X^2$. Then we have that
$$M^{1,n}_\tau \xrightarrow{P} X^1_\tau, \quad (3.21)$$
$$M^{2,n}_\tau \xrightarrow{P} X^2_\tau$$
for all $[0,1]$-valued stopping times $\tau$. The left and right limits of $X^1$ and $X^2$ coincide, that is, $X^1_- = X^2_-$ and $X^1_+ = X^2_+$, but $X^1 \neq X^2$. As $X^1$ (and $X^2$, resp.) coincides with the Fatou limit of $(M^{1,n})_{n=1}^\infty$ [and $(M^{2,n})_{n=1}^\infty$, resp.], this example illustrates that we cannot deduce from the Fatou limits $X^1$ and $X^2$ alone where it is necessary to correct the convergence by using Lemma 3.3. Computing the Mertens decompositions $X^1 = M^1 - A^1$ and $X^2 = M^2 - A^2$, we obtain
$$M^1_t = 1, \qquad A^1_t = \sigma \wedge t, \qquad M^2_t = 1 - \sigma \wedge t + \mathbf 1_{[\![\sigma,1]\!]}(t), \qquad A^2_t = \mathbf 1_{]\!]\sigma,1]\!]}(t).$$
This shows that using $X^2$ instead of $X^1$ changes the compensator of $M$ not only after the correction in the sense of Lemma 3.3 on $]\!]\sigma,1]\!]$ but on all of $[0,1]$. The difficulty is therefore to find the countably many stopping times $(\tau_m)_{m=1}^\infty$ where one needs to enforce the convergence in probability by using Lemma 3.3. Therefore we combine the previous lemmas with an exhaustion argument to prove Theorem 2.6.

Proof of Theorem 2.6. Let $\mathcal T$ be the collection of all families $T = (\tau_m)_{m=1}^{N(T)}$ of finitely many $[0,1] \cup \{\infty\}$-valued stopping times $\tau_m$ with disjoint graphs. For each $T \in \mathcal T$, we consider an optional strong supermartingale $X^T$ that is obtained by taking convex combinations $\widetilde X^{n,T} \in \operatorname{conv}(M^n, M^{n+1}, \ldots)$ such that $\widetilde X^{n,T}_{\tau_m} \xrightarrow{P} Y^T_m$ on $\{\tau_m < \infty\}$ for each $m = 1, \ldots, N(T)$ and then setting
$$X^T_t(\omega) = \begin{cases} Y^T_m(\omega), & t = \tau_m(\omega) < \infty \text{ and } m = 1,\ldots,N(T), \\ X_t(\omega), & \text{else,} \end{cases} \quad (3.22)$$
as explained in Lemma 3.3. Then each $X^T$ has a Mertens decomposition
$$X^T = M^T - A^T, \quad (3.23)$$
and we have by part (2) of Lemma 3.3 that
$$E\Big[\sum_{m=1}^{N(T)} (X^T_{\tau_m \wedge 1} - X_{\tau_m \wedge 1})\Big] = E\Big[\sum_{m=1}^{N(T)} \Delta_+ A^T_{\tau_m \wedge 1}\Big] \leq 1.$$
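A small numerical sanity check of the decompositions in Example 3.4 above; this is only a sketch, and the discrete grid together with the choice $\sigma = 0.5$ are our own illustrative assumptions:

```python
# Sketch: on a grid, X^1 = M^1 - A^1 and X^2 = M^2 - A^2 with
# M^1 = 1, A^1_t = sigma ∧ t, M^2_t = 1 - sigma ∧ t + 1_[[sigma,1]](t),
# A^2_t = 1_]]sigma,1]](t) differ exactly on the graph of sigma.
def example_processes(t, sigma=0.5):
    X1 = 1 - min(t, sigma)                               # X^1 = M^1 - A^1
    M2 = 1 - min(t, sigma) + (1 if t >= sigma else 0)    # right-closed jump at sigma
    A2 = 1 if t > sigma else 0                           # left-open jump at sigma
    return X1, M2 - A2

grid = [i / 8 for i in range(9)]
diffs = {t: example_processes(t)[1] - example_processes(t)[0] for t in grid}
# X^2 and X^1 agree off the graph of sigma and differ by the jump size 1 at sigma
assert all(d == (1 if t == 0.5 else 0) for t, d in diffs.items())
```

In particular the left and right limits of the two processes coincide on the grid, while the values at $\sigma$ itself do not.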
Therefore
$$\widehat\vartheta := \sup_{T \in \mathcal T} E\Big[\sum_{m=1}^{N(T)} (X^T_{\tau_m\wedge1} - X_{\tau_m\wedge1})\Big] \leq 1, \quad (3.24)$$
and there exists a maximizing sequence $(T_k)_{k=1}^\infty$ such that
$$E\Big[\sum_{m=1}^{N(T_k)} (X^{T_k}_{\tau_m\wedge1} - X_{\tau_m\wedge1})\Big] \nearrow \sup_{T\in\mathcal T} E\Big[\sum_{m=1}^{N(T)} (X^T_{\tau_m\wedge1} - X_{\tau_m\wedge1})\Big] = \widehat\vartheta. \quad (3.25)$$
It is easy to see that $(T_k)_{k=1}^\infty$ can be chosen to be increasing, that is, $T_k \subseteq T_{k+1}$ for each $k$. This means that $T_{k+1}$ just adds some stopping times to those which appear in $T_k$. Then $\widetilde T := \bigcup_{k=1}^\infty T_k$ is a countable collection of stopping times $(\tau_m)_{m=1}^\infty$ with disjoint graphs, and by Lemma 3.3 there exist an optional strong supermartingale $X^{\widetilde T}$ and convex combinations $X^{n,\widetilde T} \in \operatorname{conv}(M^n, M^{n+1}, \ldots)$ such that $X^{n,\widetilde T}_{\tau_m} \xrightarrow{P} Y^{\widetilde T}_m$ for all $m$ and
$$X^{\widetilde T}_t(\omega) := \begin{cases} Y^{\widetilde T}_m(\omega), & t = \tau_m(\omega) < \infty, \\ X_t(\omega), & \text{else.} \end{cases} \quad (3.26)$$
As we can suppose without loss of generality that $X^{n,T_{k+1}} \in \operatorname{conv}(X^{n,T_k}, X^{n+1,T_k}, \ldots)$ and $X^{n,\widetilde T} \in \operatorname{conv}(X^{n,T_n}, X^{n+1,T_{n+1}}, \ldots)$, we have that $Y^{T_k}_m = Y^{T_{k+1}}_m = Y^{\widetilde T}_m$ on $\{\tau_m < \infty\}$ for all $k \geq m$. Let $X^{\widetilde T} = M^{\widetilde T} - A^{\widetilde T}$ be the Mertens decomposition of $X^{\widetilde T}$. Then
$$\Delta_+ A^{\widetilde T}_{\tau_m} = X^{\widetilde T}_{\tau_m} - X_{\tau_m} = X^{T_k}_{\tau_m} - X_{\tau_m} = \Delta_+ A^{T_k}_{\tau_m} \quad (3.27)$$
on $\{\tau_m < \infty\}$ for $m \leq N(T_k)$, since, as we explained in the proof of Lemma 3.3, modifying $X$ at countably many stopping times does not change the right limits, and these remain
$$X^{\widetilde T}_{\tau_m+} = X_{\tau_m+} = X^{T_k}_{\tau_m+} \quad \text{on } \{\tau_m < \infty\} \text{ for } m \leq N(T_k). \quad (3.28)$$
This implies that
$$\sum_{m=1}^{N(T_k)} (X^{T_k}_{\tau_m\wedge1} - X_{\tau_m\wedge1}) = \sum_{m=1}^{N(T_k)} (X^{\widetilde T}_{\tau_m\wedge1} - X_{\tau_m\wedge1}) = \sum_{m=1}^{N(T_k)} \Delta_+ A^{\widetilde T}_{\tau_m\wedge1} \quad (3.29)$$
and therefore
$$E\Big[\sum_{m=1}^{\infty} \Delta_+ A^{\widetilde T}_{\tau_m\wedge1}\Big] = E\Big[\sum_{m=1}^{\infty} (X^{\widetilde T}_{\tau_m\wedge1} - X_{\tau_m\wedge1})\Big] = \widehat\vartheta \quad (3.30)$$
by the monotone convergence theorem.

Now suppose that there exists a $[0,1]$-valued stopping time $\tau$ such that $X^{n,\widetilde T}_\tau$ does not converge in probability to $X^{\widetilde T}_\tau$. By Lemma 3.2 we can then pass once more to convex combinations $\widetilde M^n \in \operatorname{conv}(X^{n,\widetilde T}, X^{n+1,\widetilde T}, \ldots$
$)$ such that there exists a random variable $Y$ such that $\widetilde M^n_\tau \xrightarrow{P} Y$, $\widetilde M^n_{\tau_m} \xrightarrow{P} Y^{\widetilde T}_m$ and an optional strong supermartingale $\widetilde X$ such that
$$\widetilde X_t(\omega) = \begin{cases} Y(\omega), & t = \tau(\omega), \\ X^{\widetilde T}_t(\omega), & \text{else.} \end{cases} \quad (3.31)$$
However, since $E[\widetilde X_\tau - X_\tau] > 0$, setting $\widetilde T_k := T_k \cup \{\tau\}$ gives a sequence in $\mathcal T$ such that
$$\lim_{k\to\infty} E\Big[\sum_{m=1}^{N(\widetilde T_k)} (X^{\widetilde T_k}_{\tau_m\wedge1} - X_{\tau_m\wedge1})\Big] = \lim_{k\to\infty} E\Big[\sum_{m=1}^{N(T_k)} (X^{T_k}_{\tau_m\wedge1} - X_{\tau_m\wedge1})\Big] + E[\widetilde X_\tau - X_\tau] = \widehat\vartheta + E[\widetilde X_\tau - X_\tau] > \widehat\vartheta,$$
and therefore a contradiction to the definition of $\widehat\vartheta$ as supremum. Here we can take the convex combinations $\widetilde M^n \in \operatorname{conv}(X^{n,\widetilde T}, X^{n+1,\widetilde T}, \ldots)$ for all $\widetilde T_k$. □

Combining Theorem 2.6 with a similar convergence result for predictable finite variation processes by Campi and Schachermayer [1], we now deduce Theorem 2.7 from Theorem 2.6.

Proof of Theorem 2.7. We consider the extension of Theorem 2.6 to local martingales first. For this, let $(X^n)_{n=1}^\infty$ be a sequence of nonnegative local martingales $X^n = (X^n_t)_{0\le t\le1}$ and $(\sigma^n_m)_{m=1}^\infty$ a localizing sequence of $[0,1]$-valued stopping times for $X^n$. Then, for each $n \in \mathbb N$, there exists $m(n) \in \mathbb N$ such that $P(\sigma^n_m < 1) < 2^{-(n+1)}$ for all $m \geq m(n)$. Define the martingales
$$M^n := (X^n)^{\sigma^n_{m(n)}} \quad (3.32)$$
that satisfy $M^k = X^k$ for all $k \geq n$ on $F_n := \bigcap_{k\geq n} \{\sigma^k_{m(k)} = 1\}$ with $P(F_n) > 1 - 2^{-n}$. By Theorem 2.6 there exist a sequence of convex combinations $\widetilde M^n \in \operatorname{conv}(M^n, M^{n+1}, \ldots)$ and an optional strong supermartingale $X$ such that $\widetilde M^k_\tau \xrightarrow{P} X_\tau$ on $F_n$ for all $[0,1]$-valued stopping times $\tau$. Therefore taking $\widetilde X^n \in \operatorname{conv}(X^n, X^{n+1}, \ldots)$ with the same weights as $\widetilde M^n \in \operatorname{conv}(M^n, M^{n+1}, \ldots)$ gives $\widetilde X^k_\tau \xrightarrow{P} X_\tau$ on $F_n$ for all $[0,1]$-valued stopping times $\tau$ and for each $n$, since $\widetilde X^k = \widetilde M^k$ on $F_n$ for all $k \geq n$. But, since $P(F_n^c) < 2^{-n} \to 0$, as $n \to \infty$, this implies that $\widetilde X^k_\tau \xrightarrow{P} X_\tau$ for all $[0,1]$-valued stopping times $\tau$.
This finishes the proof in the case when the $X^n$ are local martingales.

For the case of optional strong supermartingales, let $(X^n)_{n=1}^\infty$ be a sequence of nonnegative optional strong supermartingales $X^n = (X^n_t)_{0\le t\le1}$ and $X^n = M^n - A^n$ their Mertens decompositions into a càdlàg local martingale $M^n$ and a predictable, nondecreasing, làdlàg process $A^n$. As the local martingales $M^n = X^n + A^n \geq X^n$ are nonnegative, there exist by the first part of the proof a sequence of convex combinations $\widehat M^n \in \operatorname{conv}(M^n, M^{n+1}, \ldots)$ and an optional strong supermartingale $\widehat X$ with Mertens decomposition $\widehat X = \widehat M - \widehat A$ such that
$$\widehat M^n_\tau \xrightarrow{P} \widehat X_\tau \quad (3.33)$$
for all $[0,1]$-valued stopping times $\tau$. Now let $\widehat A^n \in \operatorname{conv}(A^n, A^{n+1}, \ldots)$ be the convex combinations that are obtained with the same weights as the $\widehat M^n$. Then there exist a sequence $(\widetilde A^n)_{n=1}^\infty$ of convex combinations $\widetilde A^n \in \operatorname{conv}(\widehat A^n, \widehat A^{n+1}, \ldots)$ and a predictable, nondecreasing, làdlàg process $\widetilde A$ such that
$$P\Big[\lim_{n\to\infty} \widetilde A^n_t = \widetilde A_t, \forall t \in [0,1]\Big] = 1. \quad (3.34)$$
Indeed, we only need to show that $(\widetilde A^n)_{n\in\mathbb N}$ is bounded in $L^1(P)$; then (3.34) follows from Proposition 3.4 of Campi and Schachermayer in [1]. By monotone convergence we obtain
$$E[\widetilde A^n_1] = \lim_{m\to\infty} E[\widetilde A^n_{1\wedge\sigma^n_m}] = \lim_{m\to\infty} E[\widetilde M^n_{1\wedge\sigma^n_m} - \widetilde X^n_{1\wedge\sigma^n_m}] \leq 1$$
for all $n \in \mathbb N$ and therefore the boundedness in $L^1(P)$. Here $\widetilde M^n \in \operatorname{conv}(\widehat M^n, \widehat M^{n+1}, \ldots)$ and $\widetilde X^n \in \operatorname{conv}(\widehat X^n, \widehat X^{n+1}, \ldots)$ denote convex combinations having the same weights as the $\widetilde A^n$, and $(\sigma^n_m)_{m=1}^\infty$ is a localizing sequence of stopping times for the local martingale $\widetilde M^n$.

Taking convex combinations does not change the convergence (3.33), and so $\widetilde X^n \in \operatorname{conv}(X^n, X^{n+1}, \ldots)$ is a sequence of convex combinations and $\widetilde X := \widehat X - \widetilde A$ an optional strong supermartingale such that
$$\widetilde X^n_\tau \xrightarrow{P} \widetilde X_\tau \quad (3.35)$$
for all $[0,1]$-valued stopping times $\tau$. □

Remark 3.5.
(1) Observe that the proof of Theorem 2.7 actually shows that the limiting optional strong supermartingale $\widetilde X$ is equal to $X$ up to a set that is included in the graphs of countably many stopping times $(\tau_m)_{m=1}^\infty$.

(2) Replacing Komlós's lemma (Corollary 2.2) by Komlós's subsequence theorem (Theorem 2.1) in the proof of Theorems 2.6 and 2.7, we obtain, by taking subsequences of subsequences rather than convex combinations of convex combinations, the following stronger assertion: Given a sequence $(X^n)_{n=1}^\infty$ of nonnegative optional strong supermartingales $X^n = (X^n_t)_{0\le t\le1}$ starting at $X^n_0 = 1$, there exist a subsequence $(X^{n_k})_{k=1}^\infty$ and an optional strong supermartingale $X = (X_t)_{0\le t\le1}$ such that the Cesàro means $\frac{1}{J}\sum_{j=1}^J X^{n_{k_j}}$ of any subsubsequence $(X^{n_{k_j}})_{j=1}^\infty$ converge to $X$ in probability at all finite stopping times, as $J \to \infty$.

4. A counterexample. At a single finite stopping time $\tau$ we may, of course, pass to a subsequence to obtain that $\widetilde M^n_\tau$ converges not only in probability but also $P$-almost surely to $\widetilde X_\tau$. The next proposition shows that we cannot strengthen Theorem 2.6 to obtain $P$-almost sure convergence for all finite stopping times simultaneously. The obstacle is, of course, that the set of all stopping times is far from being countable.

Proposition 4.1. Let $(M^n)_{n=1}^\infty$ be a sequence of independent nonnegative continuous martingales $M^n = (M^n_t)_{0\le t\le1}$ starting at $M^n_0 = 1$ such that
$$M^n_\tau \xrightarrow{P} 1 - \tau \quad (4.1)$$
for all $[0,1]$-valued stopping times $\tau$. Then we have for all $\varepsilon > 0$ and all sequences $(\widetilde M^n)_{n=1}^\infty$ of convex combinations $\widetilde M^n \in \operatorname{conv}(M^n, M^{n+1}, \ldots)$ that there exists a stopping time $\tau$ such that
$$P\Big[\limsup_{n\to\infty} \widetilde M^n_\tau = +\infty\Big] > 1 - \varepsilon.$$

Remark 4.2. If $(\Omega, \mathcal F, (\mathcal F_t)_{0\le t\le1}, P)$ supports a sequence $(W^n)_{n=1}^\infty$ of independent Brownian motions $W^n = (W^n_t)_{0\le t\le1}$, the existence of a sequence $(M^n)_{n=1}^\infty$ verifying (4.1) follows similarly as in the proof of Theorem 2.8 in Section 5 below.

For the proof of Proposition 4.1, we shall need the following lemma.

Lemma 4.3. In the setting of Proposition 4.1, let $\tau$ and $\sigma$ be two $[0,1]$-valued stopping times such that $\tau \leq \sigma$ and $\tau < \sigma$ on some $A \in \mathcal F_\tau$ with $P(A) > 0$. Then there exist, for all $c > 0$, a constant $\gamma = \gamma(c,\tau,\sigma) > 0$ and a number $N = N(\tau,\sigma) \in \mathbb N$ such that
$$P\Big(\sup_{t\in[\tau,\sigma]} \widetilde M^n_t > c+1\Big) \geq \gamma \quad \text{for all } n \geq N.$$

Proof. Let $\alpha = \frac{E[(\sigma-\tau)\mathbf 1_A]}{P(A)}$, and choose $\varepsilon \in (0,1)$ such that $\alpha > (c+4)\varepsilon$ and $N \in \mathbb N$ such that $P(B_n) \geq (1-\varepsilon)P(A)$ for all $n \geq N$, where
$$A_n := \{|\widetilde M^n_\tau - (1-\tau)| < \varepsilon\} \cap A, \qquad B_n := \{|\widetilde M^n_\sigma - (1-\sigma)| < \varepsilon\} \cap A_n.$$
Then setting $\varrho_n := \inf\{t \in [\tau,\sigma] \mid \widetilde M^n_t > c+1\}$ we can estimate
$$E[\widetilde M^n_\tau \mathbf 1_{A_n}] = E[\widetilde M^n_{\varrho_n\wedge\sigma} \mathbf 1_{A_n}] = E[\widetilde M^n_{\varrho_n\wedge\sigma}(\mathbf 1_{A_n\cap\{\varrho_n\le1\}} + \mathbf 1_{\{\varrho_n>1\}\cap B_n} + \mathbf 1_{\{\varrho_n>1\}\cap B_n^c\cap A_n})]$$
$$\leq (c+1)P(\varrho_n \le 1, A_n) + E[(1-\sigma+\varepsilon)\mathbf 1_{B_n}] + (c+1)P(B_n^c \cap A_n)$$
by the optional sampling theorem and the continuity of $\widetilde M^n$. Since
$$E[\widetilde M^n_\tau \mathbf 1_{A_n}] \geq E[(1-\tau-\varepsilon)\mathbf 1_{A_n}] \geq E[(1-\tau-\varepsilon)\mathbf 1_{B_n}],$$
we obtain that
$$E[((1-\tau-\varepsilon) - (1-\sigma+\varepsilon))\mathbf 1_{B_n}] - (c+1)(P(A) - P(B_n)) \leq (c+1)P(\varrho_n \le 1, A_n) \leq (c+1)P(\varrho_n \le 1)$$
and hence
$$\gamma := \frac{\alpha - 3\varepsilon - (c+1)\varepsilon}{c+1}P(A) \leq P(\varrho_n \le 1) = P\Big(\sup_{t\in[\tau,\sigma]} \widetilde M^n_t > c+1\Big)$$
for all $n \geq N$, where $\gamma > 0$, as $E[(\sigma-\tau)\mathbf 1_{B_n}] \geq (\alpha-\varepsilon)P(A)$. □

Proof of Proposition 4.1. We shall define $\tau$ as an increasing limit of a sequence of stopping times $\tau_m$.
For this, we set $n_0 = 0$, $\tau_0 = 0$ and $\sigma_0 = 1$ and then define for $m \in \mathbb N$ successively
$$n_m(\omega) := \inf\Big\{n \in \mathbb N \,\Big|\, n > n_{m-1}(\omega) \text{ and } \exists t \in [\tau_{m-1}(\omega), \sigma_{m-1}(\omega)] \text{ with } \widetilde M^n_t(\omega) \geq m+1\Big\},$$
$$\tau_m(\omega) := \inf\{t \in (\tau_{m-1}(\omega), \sigma_{m-1}(\omega)) \mid \widetilde M^{n_m(\omega)}_t(\omega) \geq m+1\} \wedge 1,$$
$$\sigma_m(\omega) := \inf\{t > \tau_m(\omega) \mid \widetilde M^{n_m(\omega)}_t(\omega) < m\} \wedge \sigma_{m-1}(\omega).$$
By construction and the continuity of the $\widetilde M^n$ we then have, for all $k \geq m$, that
$$\widetilde M^{n_m(\omega)}_t(\omega) \geq m \quad \text{for all } t \in [\tau_k(\omega), \sigma_k(\omega)]$$
on $\{\tau_k < 1\}$. Therefore setting $\tau := \lim_{m\to\infty} \tau_m$ gives that $\widetilde M^{n_m(\omega)}_\tau(\omega) \geq m$ for all $m$ on $\{\tau < 1\}$. So it only remains to show that
$$P(\tau < 1) \geq 1 - \varepsilon. \quad (4.2)$$
We prove (4.2) by induction. For this, assume that there exist for each $m \in \mathbb N$ some $\alpha_m > 0$ and $N_m \in \mathbb N$ such that
$$P(D_m^c) < \varepsilon 2^{-m} \quad \text{for } D_m := \{\sigma_m > \tau_m + \alpha_m,\ n_m \in (N_{m-1}, N_m]\}. \quad (4.3)$$
Indeed, for $m = 0$, we can choose $\alpha_0 = \frac12$, $N_{-1} = 0$ and $N_0 = 1$. Regarding the induction step, we first show that $n_m < \infty$ $P$-a.s. on $D_{m-1}$. To that end, we can assume w.l.o.g. that the $(\widetilde M^n)_{n=1}^\infty$ are also independent, by choosing the blocks of which we take the convex combinations disjoint and passing to a subsequence. As we are only making an assertion about the limes superior, this will be sufficient. Moreover, we observe that
$$F := \{n_m < \infty\} \cap D_{m-1} = \bigcup_{n=N_{m-1}}^\infty F_n \cap D_{m-1}$$
with $F_n := \{\exists t \in (\tau_{m-1}(\omega), \sigma_{m-1}(\omega)] \mid \widetilde M^n_t(\omega) \geq m+1\}$. Then using the estimate $1-x \leq \exp(-x)$ and the independence of the $F_n$ of each other and of $D_{m-1}$ gives
$$P(D_{m-1} \cap F^c) = \lim_{k\to\infty} P\Big(\bigcap_{n=N_{m-1}}^k F_n^c\Big) P(D_{m-1}) = \lim_{k\to\infty} \prod_{n=N_{m-1}}^k (1 - P(F_n)) P(D_{m-1}) \leq \lim_{k\to\infty} \exp\Big(-\sum_{n=N_{m-1}}^k P(F_n)\Big) P(D_{m-1}).$$
Since $\sum_{n=N_{m-1}}^\infty P(F_n) = \infty$ by Lemma 4.3, this implies that $P(D_{m-1} \cap F^c) = 0$ and hence that $n_m < \infty$ $P$-a.s. on $D_{m-1}$.
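The product estimate $\prod_n (1-P(F_n)) \le \exp(-\sum_n P(F_n))$ just used can be illustrated numerically; the particular probabilities $p_n$ below are our own choice (a divergent harmonic tail, so that the product collapses):

```python
import math

# If independent events F_n have sum P(F_n) = infinity, the probability that
# none of them occurs is prod(1 - p_n) <= exp(-sum p_n) -> 0.
def prob_none_occur(ps):
    out = 1.0
    for p in ps:
        out *= 1.0 - p
    return out

ps = [1.0 / (n + 1) for n in range(1, 2000)]   # p_n = 1/2, 1/3, ..., 1/2000
prod = prob_none_occur(ps)
assert prod <= math.exp(-sum(ps))   # 1 - x <= e^(-x), applied termwise
assert prod < 1e-3                  # the telescoping product equals 1/2000
```

This is exactly the Borel–Cantelli-type mechanism behind the conclusion that $n_m < \infty$ almost surely on $D_{m-1}$.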
More precisely, by applying Lemma 4.3 for $c = 2m$ with $\tau = \tau_{m-1}$, $\sigma = \sigma_{m-1}$ and $A = D_{m-1}$ to $\widetilde M^n$ for $n \geq N_{m-1}$, we get that $P(F_n) \geq \gamma > 0$ for all $n \geq N_{m-1}$. Therefore $\tau_m < 1$ $P$-a.s. on $D_{m-1}$ as well. By the continuity of the $\widetilde M^n$ and, as $\tau_m < 1$ on $D_{m-1}$, we obtain that $1 \geq \sigma_m > \tau_m$ $P$-a.s. on $D_{m-1}$, which finishes the induction step.

Now, since $\{\tau < 1\} \supseteq \bigcap_{m=1}^\infty D_m =: D$ and
$$P(D) \geq 1 - \sum_{m=1}^\infty P(D_m^c) \geq 1 - \sum_{m=1}^\infty \varepsilon 2^{-m} = 1 - \varepsilon,$$
we have established (4.2), which completes the proof of the proposition. □

5. Proof of Theorem 2.8. We now pass to the proof of Theorem 2.8. The following lemma yields a building block.

Lemma 5.1. Let $W = (W_t)_{0\le t\le1}$ be a standard Brownian motion on $(\Omega, \mathcal F, (\mathcal F_t)_{0\le t\le1}, P)$ and $\varrho$ a $[0,1] \cup \{\infty\}$-valued stopping time. Then there exists a sequence $(\varphi^n)_{n=1}^\infty$ of predictable integrands of finite variation such that $M^n := \varphi^n \cdot W \geq -1$ is a bounded martingale for each $n \in \mathbb N$ and
$$M^n_\tau \xrightarrow{P\text{-a.s.}} -\mathbf 1_{]\!]\varrho,1]\!]}(\tau) = -\mathbf 1_{\{\tau>\varrho\}}, \quad \text{as } n \to \infty, \quad (5.1)$$
for all $[0,1]$-valued stopping times $\tau$.

Proof. We consider the case $\varrho \equiv 0$ first; the assertion then amounts to finding a suitable sequence $(\varphi^n)_{n=1}^\infty$. To come up with one, we use the deterministic functions
$$\psi^n_t := \frac{1}{2^{-n} - t}\mathbf 1_{(0,2^{-n})}(t).$$
Then the continuous martingales $N^n := (\psi^n \cdot W_t)_{0\le t<2^{-n}}$ are well defined, for each $n \in \mathbb N$. It follows from the Dambis–Dubins–Schwarz theorem that the stopping times
$$\tau_n := \inf\{t \in (0,2^{-n}) \mid N^n_t = -1\}, \qquad \sigma_{n,k} := \inf\{t \in (0,2^{-n}) \mid N^n_t > k\}$$
are $P$-a.s. strictly smaller than $2^{-n}$ for all $n, k \in \mathbb N$, since
$$\langle N^n\rangle_t = \frac{1}{2^{-n}-t} - \frac{1}{2^{-n}} \quad \text{for } t \in [0,2^{-n})$$
and $\lim_{t\nearrow 2^{-n}} \langle N^n\rangle_t = \infty$. Therefore setting $\widetilde\psi^{n,k} = \psi^n \mathbf 1_{[\![0,\tau_n\wedge\sigma_{n,k}]\!]}$ gives a sequence
$$\widetilde N^{n,k} = \widetilde\psi^{n,k} \cdot W = (\psi^n \cdot W)^{\tau_n\wedge\sigma_{n,k}}$$
of bounded martingales such that, for all $[0,1]$-valued stopping times $\tau$,
$$\widetilde N^{n,k}_\tau \xrightarrow{P\text{-a.s.}} -\mathbf 1_{\{\tau \geq 2^{-n}\}}, \quad \text{as } k \to \infty,$$
since $\sigma_{n,k} \nearrow 2^{-n}$ $P$-a.s., as $k \to \infty$.
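The time-change identity $\langle N^n\rangle_t = \frac{1}{2^{-n}-t} - \frac{1}{2^{-n}}$ used here can be checked numerically by integrating $(\psi^n_s)^2$; this is only a sketch, and the quadrature step count is our own choice:

```python
# Midpoint Riemann sum of (psi^n_s)^2 = (2^{-n} - s)^{-2} on [0, t], t < 2^{-n},
# compared against the closed form 1/(2^{-n} - t) - 1/2^{-n}.
def bracket_numeric(n, t, steps=100_000):
    h = 2.0 ** (-n)
    dt = t / steps
    return sum(dt / (h - (i + 0.5) * dt) ** 2 for i in range(steps))

def bracket_closed(n, t):
    h = 2.0 ** (-n)
    return 1.0 / (h - t) - 1.0 / h

n, t = 3, 0.06          # 2^{-3} = 0.125 and t < 0.125
assert abs(bracket_numeric(n, t) - bracket_closed(n, t)) < 1e-3
```

The blow-up of the closed form as $t \nearrow 2^{-n}$ is what forces the time-changed Brownian motion to hit every level before time $2^{-n}$.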
Defining $\varphi^n := \widetilde\psi^{n,k(n)}$ and $M^n = \widetilde N^{n,k(n)}$ as a suitable diagonal sequence such that $M^n_{2^{-n}} = \widetilde N^{n,k(n)}_{2^{-n}} \to -1$, as $n \to \infty$, then yields the assertion for $\varrho \equiv 0$, as $M^n_0 = 0$ for all $n \in \mathbb N$ and $\mathbf 1_{\{\tau\geq2^{-n}\}} \xrightarrow{P\text{-a.s.}} \mathbf 1_{\{\tau>0\}}$, as $n \to \infty$.

Next we observe that if we consider for some $[0,1] \cup \{\infty\}$-valued stopping time $\sigma$ the stopped Brownian motion $W^\sigma = (W_{\sigma\wedge t})_{0\le t\le1}$, then we obtain by the above argument that
$$(M^n)^\sigma_\tau = M^n_{\sigma\wedge\tau} = (\varphi^n \bullet (W^\sigma))_\tau \xrightarrow{P\text{-a.s.}} -\mathbf 1_{(0,1]}(\sigma\wedge\tau)$$
for every $[0,1]$-valued stopping time $\tau$.

For the general case $\varrho \not\equiv 0$, consider the process $\overline W_t := (W_{(t+\varrho)\wedge1} - W_{\varrho\wedge1})_{0\le t\le1}$, which is a Brownian motion with respect to the filtration $\overline{\mathbf F} := (\overline{\mathcal F}_t)_{0\le t\le1} := (\mathcal F_{(t+\varrho)\wedge1})_{0\le t\le1}$ that is independent of $\mathcal F_\varrho$ and stopped at the $\overline{\mathbf F}$-stopping time $\bar\sigma := (1-\varrho)$. Then the general case $\varrho \not\equiv 0$ follows from the case $\varrho \equiv 0$ applied to $\overline W$ and the stopping time $\bar\tau = (\tau-\varrho)\mathbf 1_{\{\tau>\varrho\}}$, which is always smaller than $\bar\sigma$. Indeed, as the corresponding martingales $\overline M^n$ obtained for $\overline W$ with respect to $(\overline{\mathcal F}_t)_{0\le t\le1}$ start at 0, the processes
$$M^n_t(\omega) = \begin{cases} 0, & t \leq \varrho(\omega)\wedge1, \\ \overline M^n_{t-\varrho(\omega)}(\omega), & \varrho(\omega) < t \leq 1 \end{cases}$$
are martingales with respect to the filtration $\mathbf F = (\mathcal F_t)_{0\le t\le1}$ that converge to $-\mathbf 1_{]\!]\varrho,1]\!]}(\tau)$ $P$-a.s. for every $[0,1]$-valued $\mathbf F$-stopping time $\tau$. □

Proof of Theorem 2.8. Let $X = M - A$ be the Mertens decomposition of the optional strong supermartingale $X$. It is then sufficient to show the assertion for $M$ and $A$ separately.

(1) We begin with the local martingale $M$. As any localizing sequence $(\tau_m)_{m=1}^\infty$ of stopping times for $M$ gives a sequence $\widetilde M^m := M^{\tau_m}$ of martingales that converges uniformly in probability, we obtain a sequence $M^n$ of martingales that converges $P$-a.s. uniformly to $M$ by passing to a subsequence $(\widetilde M^n)_{n=1}^\infty$ such that $P(\tau_n < 1) < 2^{-n}$.
To see that we can choose the $M^n$ to be bounded, we observe that setting $M^{n,k}_t := E[M^n_1 \wedge k \vee (-k) \mid \mathcal F_t]$ for $t \in [0,1]$ gives for every martingale $M^n$ a sequence of bounded martingales $M^{n,k} = (M^{n,k}_t)_{0\le t\le1}$ such that $M^{n,k} \xrightarrow{L^1(P)} M^n$, as $k \to \infty$, and therefore locally in $\mathcal H^1(P)$ by Theorem 4.2.1 in [10]. By the Burkholder–Davis–Gundy inequality (see, e.g., Theorem IV.48 in [16]), this also implies uniform convergence in probability and hence $P$-a.s. uniform convergence by passing to a subsequence, again indexed by $k$. Then taking a diagonal sequence $(M^{n,k(n)})_{n=1}^\infty$ gives a sequence of martingales $(M^n)_{n=1}^\infty = (M^{n,k(n)})_{n=1}^\infty$ that converges $P$-a.s. uniformly to $M$ and therefore also satisfies (2.6) for every $[0,1]$-valued stopping time $\tau$.

(2) To prove the assertion for the predictable part $A$, we decompose
$$A = A^c + \sum_{i=1}^\infty \Delta_+A_{\sigma_i}\mathbf 1_{]\!]\sigma_i,1]\!]} + \sum_{j=1}^\infty \Delta A_{\varrho_j}\mathbf 1_{[\![\varrho_j,1]\!]}$$
into its continuous part $A^c$, its totally right-discontinuous part $A^{rd} := \sum_{i=1}^\infty \Delta_+A_{\sigma_i}\mathbf 1_{]\!]\sigma_i,1]\!]}$ and its totally left-discontinuous part $A^{ld} := \sum_{j=1}^\infty \Delta A_{\varrho_j}\mathbf 1_{[\![\varrho_j,1]\!]}$. By superposition it is sufficient to approximate $-A^c$, each single right jump process $-\Delta_+A_{\sigma_i}\mathbf 1_{]\!]\sigma_i,1]\!]}$ for $i \in \mathbb N$ and each single left jump process $-\Delta A_{\varrho_j}\mathbf 1_{[\![\varrho_j,1]\!]}$ for $j \in \mathbb N$ separately. Indeed, let $(M^{c,n})_{n=1}^\infty$, $(M^{rd,i,n})_{n=1}^\infty$ for each $i \in \mathbb N$ and $(M^{ld,j,n})_{n=1}^\infty$ for each $j \in \mathbb N$ be sequences of bounded martingales such that
$$M^{c,n}_\tau \xrightarrow{P} -A^c_\tau, \quad (5.2)$$
$$M^{rd,i,n}_\tau \xrightarrow{P} -\Delta_+A_{\sigma_i}\mathbf 1_{]\!]\sigma_i,1]\!]}(\tau), \quad (5.3)$$
$$M^{ld,j,n}_\tau \xrightarrow{P} -\Delta A_{\varrho_j}\mathbf 1_{[\![\varrho_j,1]\!]}(\tau), \quad (5.4)$$
as $n \to \infty$, for all $[0,1]$-valued stopping times $\tau$. Then setting
$$M^n := M^{c,n} + \sum_{i=1}^n M^{rd,i,n} + \sum_{j=1}^n M^{ld,j,n}$$
gives a sequence of bounded martingales such that $M^n_\tau \xrightarrow{P} -A_\tau$, as $n \to \infty$, for all $[0,1]$-valued stopping times $\tau$.

(2a) We begin with showing the existence of $(M^{rd,i,n})_{n=1}^\infty$ for some fixed $i \in \mathbb N$.
For this, we set
$$\vartheta^{i,n}_t := (\Delta_+A_{\sigma_i} \wedge n)\varphi^n_t \in L(W),$$
where $(\varphi^n)_{n=1}^\infty$ is a sequence of integrands as obtained in Lemma 5.1 for the stopping time $\varrho = \sigma_i$. Then it follows immediately from Lemma 5.1 that $(\vartheta^{i,n} \cdot W)_\tau \xrightarrow{P\text{-a.s.}} -\Delta_+A_{\sigma_i}\mathbf 1_{]\!]\sigma_i,1]\!]}(\tau)$, as $n \to \infty$, for every $[0,1]$-valued stopping time $\tau$ and therefore that $M^{rd,i,n} := \vartheta^{i,n} \cdot W$ gives a sequence of bounded martingales such that (5.3) holds. Note that by the construction of the integrands $\varphi^n$ in Lemma 5.1 the approximating martingales $M^{rd,i,n}$ are 0 on $[\![0,\sigma_i]\!]$ and constant equal to either $-(\Delta_+A_{\sigma_i}\wedge n)$ or $(\Delta_+A_{\sigma_i}\wedge n)k(n)$ on $]\!]\sigma_i+2^{-n},1]\!]$. Therefore they converge $P$-a.s. uniformly to $-\Delta_+A_{\sigma_i}$ on $]\!]\sigma_i+2^{-m},1]\!]$ for each $m \in \mathbb N$.

(2b) To obtain the approximating sequence $(M^{ld,j,n})_{n=1}^\infty$ for some fixed $j \in \mathbb N$, we observe that the stopping time $\varrho_j$ is predictable and let $(\varrho_{j,k})_{k=1}^\infty$ be an announcing sequence of stopping times, that is, a nondecreasing sequence of stopping times such that $\varrho_{j,k} < \varrho_j$ on $\{\varrho_j > 0\}$ and $\varrho_{j,k} \xrightarrow{P\text{-a.s.}} \varrho_j$, as $k \to \infty$. Since $\Delta A_{\varrho_j} \in L^1(P)$ is $\mathcal F_{\varrho_j-}$-measurable by Theorem IV.67.b) in [7] and $\mathcal F_{\varrho_j-} = \bigvee_{k=1}^\infty \mathcal F_{\varrho_{j,k}}$ by Theorem IV.56.d) in [7], we have that
$$E[\Delta A_{\varrho_j} \mid \mathcal F_{\varrho_{j,k}}] \xrightarrow{P\text{-a.s.}} \Delta A_{\varrho_j}, \quad \text{as } k \to \infty, \quad (5.5)$$
by martingale convergence. Therefore setting
$$\widetilde A^{ld,j,k} := E[\Delta A_{\varrho_j} \mid \mathcal F_{\varrho_{j,k}}]\mathbf 1_{]\!]\varrho_{j,k},1]\!]} \quad (5.6)$$
gives a sequence of single right jump processes that converges to $\Delta A_{\varrho_j}\mathbf 1_{[\![\varrho_j,1]\!]}$ $P$-a.s. at each $[0,1]$-valued stopping time $\tau$, since $\mathbf 1_{]\!]\varrho_{j,k},1]\!]}(\tau) \xrightarrow{P\text{-a.s.}} \mathbf 1_{[\![\varrho_j,1]\!]}(\tau)$, as $k \to \infty$, for all $[0,1]$-valued stopping times $\tau$.

By part (2a) there exists for each $k \in \mathbb N$ a sequence $(\widetilde M^{j,k,n})_{n=1}^\infty$ of bounded martingales such that $\widetilde M^{j,k,n}_\tau \xrightarrow{P\text{-a.s.}} -\widetilde A^{ld,j,k}_\tau$, as $n \to \infty$, for all $[0,1]$-valued stopping times $\tau$. For the stopping time $\varrho_j$ we can therefore find a diagonal sequence $(\widetilde M^{j,k,n(k)})_{k=1}^\infty$ such that $\widetilde M^{j,k,n(k)}_{\varrho_j} \xrightarrow{P\text{-a.s.}} -\widetilde A^{ld,j,k}_{\varrho_j}$, as $k \to$
$\infty$. By the proof of Lemma 5.1 and part (2a) above we can choose the martingales $\widetilde M^{j,k,n(k)}$ such that $\widetilde M^{j,k,n(k)} \equiv 0$ on $[\![0,\varrho_{j,k}]\!]$ and $\widetilde M^{j,k,n(k)} \equiv -(E[\Delta A_{\varrho_j}\mid\mathcal F_{\varrho_{j,k}}]\wedge n(k))$ on $]\!](\varrho_{j,k}+2^{-n(k)})_{F_k},1]\!]$, where the set
$$F_k := \Big\{\widetilde M^{j,k,n(k)}_{\varrho_{j,k}+2^{-n(k)}} = -(E[\Delta A_{\varrho_j}\mid\mathcal F_{\varrho_{j,k}}]\wedge n(k))\Big\}$$
has probability $P(F_k) > 1 - 2^{-k}$. This sequence $(\widetilde M^{j,k,n(k)})_{k=1}^\infty$ therefore already satisfies $\widetilde M^{j,k,n(k)}_\tau \xrightarrow{P\text{-a.s.}} -\Delta A_{\varrho_j}\mathbf 1_{[\![\varrho_j,1]\!]}(\tau)$ for all $[0,1]$-valued stopping times $\tau$, and we have (5.4).

(2c) For the approximation of the continuous part $A^c$, we observe that by the left-continuity and adaptedness of $A^c$ there exists a sequence $(\widetilde A^n)_{n=1}^\infty$ of nondecreasing integrable simple predictable processes that converges uniformly in probability to $A^c$ and hence $P$-a.s. uniformly by passing to a fast convergent subsequence again indexed by $n$; see, for example, Theorem II.10 in [16]. Recall that a simple predictable process is a predictable process $\widetilde A$ of the form
$$\widetilde A = \sum_{i=1}^m \Delta_+\widetilde A_{\sigma_i}\mathbf 1_{]\!]\sigma_i,1]\!]}, \quad (5.7)$$
where $(\sigma_i)_{i=1}^m$ are $[0,1] \cup \{\infty\}$-valued stopping times such that $\sigma_i < \sigma_{i+1}$ for $i = 1,\ldots,m-1$ and each $\Delta_+\widetilde A_{\sigma_i}$ is $\mathcal F_{\sigma_i}$-measurable.

By part (2a) there exists, for each $n \in \mathbb N$, a sequence $(\widetilde M^{n,k})_{k=1}^\infty$ of martingales such that $\widetilde M^{n,k}_\tau \xrightarrow{P\text{-a.s.}} -\widetilde A^n_\tau$, as $k \to \infty$, for all $[0,1]$-valued stopping times $\tau$. Therefore we can pass to a diagonal sequence $\widetilde M^{n,k(n)}$ such that
$$P\Big[\lim_{n\to\infty} \widetilde M^{n,k(n)}_q = -A^c_q, \forall q \in \mathbb Q \cap [0,1]\Big] = 1. \quad (5.8)$$
By Theorem 2.7 there exist a sequence $(M^n)_{n=1}^\infty$ of convex combinations $M^n \in \operatorname{conv}(\widetilde M^{n,k(n)}, \widetilde M^{n+1,k(n+1)}, \ldots)$ and an optional strong supermartingale $X$ such that $M^n_\tau \xrightarrow{P} X_\tau$ for all $[0,1]$-valued stopping times $\tau$.

To complete the proof it therefore only remains to show that $X = -A^c$. For this, we argue by contradiction and assume that the optional set $G := \{X \neq -A^c\}$ is not evanescent, that is, that $P(\pi(G)) > 0$, where $\pi((\omega,t)) = \omega$ denotes the projection on the first component.
By the optional cross-section theorem (Theorem IV.84 in [8]) there then exists a $[0,1] \cup \{\infty\}$-valued stopping time $\tau$ such that $X_\tau \neq -A^c_\tau$ on $F := \{\tau < \infty\}$ with $P(F) > 0$. We can decompose $\tau$ into an accessible stopping time $\tau_A$ and a totally inaccessible stopping time $\tau_I$ such that $\tau = \tau_A \wedge \tau_I$ by Theorem IV.81.c) in [7]. On $\{\tau_I < \infty\}$ we obtain that $M^n_{\tau_I-} = M^n_{\tau_I} \xrightarrow{P} X_{\tau_I}$ and $A^c_{\tau_I-} = A^c_{\tau_I}$ from the continuity of $M^n$ and $A^c$. Therefore $X_{\tau_I} = -A^c_{\tau_I}$, as $M^n_{\tau_I-} \xrightarrow{P} X_{\tau_I-}$ by Proposition 2.9 and $X_{\tau_I-} = -A^c_{\tau_I-}$ by (5.8). This implies that $P(\tau_I < \infty) = 0$ and hence $P(\tau_A < \infty) = P(F) > 0$. Since $\tau_A$ is accessible, there exists a predictable stopping time $\sigma$ such that $P(\tau_A = \sigma < \infty) > 0$. By the strong supermartingale property of $X$ we have that
$$X_{\sigma-} \geq E[X_\sigma \mid \mathcal F_{\sigma-}] \geq E[X_{\sigma+} \mid \mathcal F_{\sigma-}] \quad \text{on } \{\sigma < \infty\},$$
as $\sigma$ is predictable. Since $X_- = -A^c_-$ and $X_+ = -A^c_+$ by (5.8), this implies that $X_\sigma = -A^c_\sigma$ by the continuity of $A^c$. However, this contradicts $P(F) > 0$. □

6. Proof of Theorem 2.11. We begin with the proof of Proposition 2.9, and for this, we will use the following variant of Doob's up-crossing inequality that holds uniformly over the set $\mathcal X$ of nonnegative optional strong supermartingales $X = (X_t)_{0\le t\le1}$ starting at $X_0 = 1$.

Lemma 6.1. For each $\varepsilon > 0$ and $\delta > 0$, there exists a constant $C = C(\varepsilon,\delta) \in \mathbb N$ such that
$$\sup_{X\in\mathcal X} P[M^\varepsilon(X) > C] < \delta,$$
where the random variable $M^\varepsilon(X)$ is pathwise defined as the maximal number of moves of the process $X$ of size bigger than $\varepsilon$, that is,
$$M^\varepsilon(X)(\omega) := \sup\Big\{m \in \mathbb N \,\Big|\, \exists\, 0 \leq t_0 < t_1 < \cdots < t_m \leq 1 \text{ such that } |X_{t_i}(\omega) - X_{t_{i-1}}(\omega)| > \varepsilon \text{ for } i = 1,\ldots,m\Big\}.$$

Proof. Choose $n_0 \in \mathbb N$ such that $\frac{1}{n_0} \leq \frac{\varepsilon}{2}$, fix some $X \in \mathcal X$ and denote by $X = M - A$ its Mertens decomposition. Then $M = X + A$ is a nonnegative càdlàg local martingale and hence a càdlàg supermartingale such that $E[M_t] \leq 1$ for all $t \in [0,1]$. For $C_1 \in \mathbb N$ with $C_1 \geq \frac{2}{\delta}$ we obtain from Doob's maximal inequality that
$$P\Big(M^* := \sup_{0\le s\le1} M_s > C_1\Big) \leq \frac{1}{C_1} \leq \frac{\delta}{2}.$$
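The two pathwise quantities driving Lemma 6.1 — up-crossings of an interval and moves of size bigger than $\varepsilon$ — can be sketched on a discrete path; the sample path is our own, and the greedy move count below is a lower bound for the supremum defining $M^\varepsilon$:

```python
# U counts up-crossings of [a, b]; moves() greedily counts successive
# displacements of size > eps (a lower bound for M^eps on this path).
def upcrossings(path, a, b):
    count, below = 0, False
    for x in path:
        if x <= a:
            below = True
        elif x >= b and below:
            count, below = count + 1, False
    return count

def moves(path, eps):
    count, anchor = 0, path[0]
    for x in path[1:]:
        if abs(x - anchor) > eps:
            count, anchor = count + 1, x
    return count

path = [1.0, 0.2, 1.1, 0.1, 1.3, 1.25, 0.0]
assert upcrossings(path, 0.3, 1.0) == 2
assert moves(path, 0.5) == 5
```

Every move of size $\varepsilon$ that happens while the path stays below a level $C_1$ forces an up- or down-crossing of one of the short subintervals, which is the counting mechanism of the proof.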
Then we divide the interval $[0,C_1]$ into $n_0C_1 =: N_1$ subintervals $I_k := [\frac{k}{n_0}, \frac{k+1}{n_0}]$ of equal length at most $\frac{\varepsilon}{2}$, for $k = 0,\ldots,N_1-1$. The basic intuition behind this is that whenever the nonnegative (càdlàg) local martingale $M = (M_t)_{0\le t\le1}$ moves more than $\varepsilon$, while its supremum stays below $C_1$, it has at least to cross one of the subintervals $I_k$. For each interval $I_k$ we can estimate the number $U(M;I_k)$ of up-crossings of the interval $I_k$ by the process $M = (M_t)_{0\le t\le1}$ up to time 1 by Doob's up-crossing inequality by
$$P[U(M;I_k) > \widetilde C] \leq \frac{1}{\widetilde C}E[U(M;I_k)] \leq \frac{n_0}{\widetilde C}\sup_{0\le t\le1}E[M_t] \leq \frac{n_0}{\widetilde C}.$$
Choosing $\widetilde C = \frac{2n_0N_1}{\delta}$ we obtain that
$$P[U(M;I_k) > \widetilde C] \leq \frac{\delta}{2N_1}.$$
Then summing over all intervals gives for the number $U^\varepsilon(M)$ of up-moves of the process $M$ of size $\varepsilon$ that
$$P[U^\varepsilon(M) > \widetilde C N_1] \leq P[M^* \leq C_1, \exists k \in \{0,\ldots,N_1-1\} \text{ with } U(M;I_k) > \widetilde C] + P[M^* > C_1] \leq \delta.$$
Since $X = M - A$ is nonnegative starting at $X_0 = 1$ and $A$ is nondecreasing, the number $M^\varepsilon(X)$ of moves of $X$ of size $\varepsilon$ is smaller than $2(U^\varepsilon(X) + N_1)$. Therefore we can conclude that
$$P[M^\varepsilon(X) > C] \leq \delta \quad (6.1)$$
for $C = 2(\widetilde C + 1)N_1$. To complete the proof, we observe that the constants $C_1$ and $C = 2(\widetilde C + 1)N_1$ are independent of the choice of the optional strong supermartingale $X \in \mathcal X$, and we can therefore take the supremum over all $X \in \mathcal X$ in the inequality (6.1). □

Let $X = (X_t)_{0\le t\le1}$ be a làg (existence of left limits) process and $\tau$ be a $(0,1]$-valued stopping time. For $m \in \mathbb N$, let $\tau_m$ be the $m$th dyadic approximation of the stopping time $\tau$ as defined in (3.2). Note that $\tau_m$ is $\{\frac{1}{2^m}, \frac{2}{2^m}, \ldots, 1\}$-valued, as $\tau > 0$. As $(X_t)_{0\le t\le1}$ is assumed to have làg trajectories, we obtain
$$X_{\tau_m - 2^{-m}} \xrightarrow{P\text{-a.s.}} X_{\tau-}, \quad \text{as } m \to \infty, \quad (6.2)$$
and therefore in probability. The next lemma gives a quantitative version of this rather obvious fact.

Lemma 6.2.
Let $\tau$ be a totally inaccessible $(0,1]$-valued stopping time. Then the convergence in (6.2) above holds true in probability uniformly over all nonnegative optional strong supermartingales $X = (X_t)_{0\le t\le1} \in \mathcal X$ starting at $X_0 = 1$. More precisely, we have for each $\varepsilon > 0$ that
$$\lim_{m\to\infty} \sup_{X\in\mathcal X} P[|X_{\tau_m-2^{-m}} - X_{\tau-}| > \varepsilon] = 0. \quad (6.3)$$

Proof. Denote by $A = (A_t)_{0\le t\le1}$ the compensator of $\tau$, which is the unique continuous increasing process such that $(\mathbf 1_{[\![\tau,1]\!]}(t) - A_t)_{0\le t\le1}$ is a martingale. For every predictable set $G \subseteq \Omega \times [0,1]$, we have
$$P[\tau \in G] = E[\mathbf 1_G\mathbf 1_{[\![\tau]\!]}] = E\Big[\int_0^1 \mathbf 1_G(t)\,d\mathbf 1_{[\![\tau,1]\!]}(t)\Big] = E\Big[\int_0^1 \mathbf 1_G(t)\,dA_t\Big]. \quad (6.4)$$
Here we used that the predictable $\sigma$-algebra on $\Omega \times [0,1]$ is generated by the left-open stochastic intervals, that is, intervals of the form $]\!]\sigma_1,\sigma_2]\!]$ for stopping times $\sigma_1$ and $\sigma_2$, and a monotone class argument to deduce the second equality in (6.4). The third equality is the definition of the compensator.

Fix $X \in \mathcal X$, $\varepsilon > 0$ and $\delta > 0$. Use Lemma 6.1 to find $c = c(\varepsilon,\delta,\tau)$ such that the exceptional set
$$F_1 = \{M^\varepsilon(X) \geq c\} \quad (6.5)$$
satisfies
$$E[\mathbf 1_{F_1} A_1] < \delta. \quad (6.6)$$
Find $m$ large enough such that
$$E[\mathbf 1_{F_2} A_1] < \delta, \quad (6.7)$$
where $F_2$ is the exceptional set
$$F_2 = \Big\{\exists k \in \{1,\ldots,2^m\} \text{ such that } A_{k/2^m} - A_{(k-1)/2^m} > \frac{\delta}{c}\Big\}. \quad (6.8)$$
Define $G$ to be the predictable set
$$G = \bigcup_{k=1}^{2^m}\Big\{(\omega,t) \,\Big|\, \frac{k-1}{2^m} < t \leq \frac{k}{2^m} \text{ and } \sup_{(k-1)/2^m \le u \le t}|X_{u-}(\omega) - X_{(k-1)/2^m}(\omega)| \leq \varepsilon\Big\}. \quad (6.9)$$
We then have $P[\tau \notin G] \leq 3\delta$. Indeed, applying (6.4) to the complement $G^c$ of $G$ we get
$$P[\tau \notin G] = E\Big[(\mathbf 1_{F_1\cup F_2} + \mathbf 1_{\Omega\setminus(F_1\cup F_2)})\int_{G^c} dA_t\Big],$$
where $F_1$ and $F_2$ denote the exceptional sets in (6.5) and (6.8). By (6.6) and (6.7),
$$E\Big[\mathbf 1_{F_1\cup F_2}\int_{G^c} dA_t\Big] \leq 2\delta. \quad (6.10)$$
On the set $\Omega\setminus(F_1\cup F_2)$ we deduce from (6.5), (6.8) and (6.9) that
$$\int_{G^c} dA_t \leq c\,\frac{\delta}{c} = \delta,$$
so that $P[\tau \notin G] \leq 3\delta.$
(6.11)

For $(\omega,t) \in G$ such that $\frac{k-1}{2^m} < t \leq \frac{k}{2^m}$, we have $|X_{t-}(\omega) - X_{(k-1)/2^m}(\omega)| \leq \varepsilon$, so that by (6.11) we get
$$P[|X_{\tau-} - X_{\tau_m-2^{-m}}| > \varepsilon] \leq 3\delta,$$
which shows (6.3). □

Proof of Proposition 2.9. Fix $\varepsilon > 0$, and apply Lemma 6.2 to find $m \in \mathbb N$ such that
$$P[|\widetilde X_{\tau_m-2^{-m}} - \widetilde X_{\tau-}| > \varepsilon] < \varepsilon, \quad (6.12)$$
for each $\widetilde X \in \mathcal X$. As $(X^n_q)_{n=1}^\infty$ converges to $X_q$ in probability, for every rational number $q \in \mathbb Q\cap[0,1]$ we have
$$P\Big[\max_{1\le k\le 2^m}|X^n_{k/2^m} - X_{k/2^m}| > \varepsilon\Big] < \varepsilon,$$
for all $n \geq N(\varepsilon)$. We then may apply (6.12) to $X^n$ and $X$ to conclude that
$$P[|X^n_{\tau-} - X_{\tau-}| > 3\varepsilon] < 3\varepsilon. \quad \square$$

With Proposition 2.9 we have now everything in place to prove Theorem 2.11.

Proof of Theorem 2.11. The existence of the optional strong supermartingale $X^{(1)}$ is the assertion of Theorem 2.7. To obtain the predictable strong supermartingale $X^{(0)}$, we observe that, since the $\widetilde X^n$ and $X^{(1)}$ are làdlàg, the optional set
$$F := \bigcup_{n=1}^\infty \{\widetilde X^n \neq \widetilde X^n_-\} \cup \{X^{(1)} \neq X^{(1)}_-\}$$
has at most countably many sections, and therefore there exists by Theorem 117 in Appendix IV of [7] a countable number of $[0,1]\cup\{\infty\}$-valued stopping times $(\sigma_m)_{m=1}^\infty$ with disjoint graphs such that $F = \bigcup_{m=1}^\infty [\![\sigma_m]\!]$. By Theorem IV.81.c) in [7] we can decompose each stopping time $\sigma_m$ into an accessible stopping time $\sigma^A_m$ and a totally inaccessible stopping time $\sigma^I_m$ such that $\sigma_m = \sigma^A_m \wedge \sigma^I_m$. Again combining Komlós's lemma with a diagonalization procedure, we obtain a sequence of convex combinations $\widetilde X^n \in \operatorname{conv}(X^n, X^{n+1}, \ldots)$ such that $\widetilde X^n_\tau \xrightarrow{P} X^{(1)}_\tau$ for all $[0,1]$-valued stopping times $\tau$ as well as $\widetilde X^n_{\tau_m-} \xrightarrow{P\text{-a.s.}} Y^{(0)}_m$, as $n \to \infty$, for the stopping times $\tau_m := \sigma^A_m \wedge 1$ and random variables $Y^{(0)}_m$, for $m \in \mathbb N$.
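The dyadic approximation from (3.2), used in (6.2) and in the proof of Proposition 2.9 above, can be sketched directly; the value $\tau = 0.3$ is our own illustrative choice:

```python
import math

# m-th dyadic approximation: tau_m = ceil(2^m * tau) / 2^m, so that
# tau_m decreases to tau from above and tau_m - 2^{-m} < tau <= tau_m for tau > 0.
def dyadic(tau, m):
    return math.ceil(tau * 2 ** m) / 2 ** m

tau = 0.3
approx = [dyadic(tau, m) for m in range(1, 12)]
assert all(a >= tau for a in approx)                              # from above
assert all(x >= y for x, y in zip(approx, approx[1:]))            # nonincreasing
assert all(dyadic(tau, m) - 2 ** (-m) < tau for m in range(1, 12))
assert abs(approx[-1] - tau) <= 2 ** (-11)
```

The grid point $\tau_m - 2^{-m}$ approaches $\tau$ strictly from the left, which is why it picks up the left limit $X_{\tau-}$.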
Now we can define $X^{(0)}$ by
$$X^{(0)}_t(\omega) = \begin{cases} Y^{(0)}_m(\omega), & t = \sigma^A_m(\omega) \text{ and } m \in \mathbb N, \\ X^{(1)}_{t-}(\omega) = X^{(1)}_t(\omega), & \text{else.} \end{cases}$$
For all $[0,1]$-valued stopping times $\tau$, we then have the convergence (2.10), that is,
$$\widetilde X^n_{\tau-}(\omega) = \widetilde X^n_\tau(\omega)\mathbf 1_{F^c}(\omega,\tau(\omega)) + \sum_{m=1}^\infty \widetilde X^n_{\tau_m-}\mathbf 1_{\{\sigma^A_m=\tau\}} + \sum_{m=1}^\infty \widetilde X^n_{\sigma^I_m-}\mathbf 1_{\{\sigma^I_m=\tau\}}$$
$$\xrightarrow{P} X^{(0)}_\tau(\omega)\mathbf 1_{F^c}(\omega,\tau(\omega)) + \sum_{m=1}^\infty Y^{(0)}_m\mathbf 1_{\{\sigma^A_m=\tau\}} + \sum_{m=1}^\infty X^{(1)}_{\sigma^I_m-}\mathbf 1_{\{\sigma^I_m=\tau\}} = X^{(0)}_\tau,$$
since $\widetilde X^n = \widetilde X^n_-$ for all $n \in \mathbb N$ on $F^c$ and $\widetilde X^n_{\sigma-}\mathbf 1_{\{\sigma=\tau\}} \xrightarrow{P} X^{(1)}_{\sigma-}\mathbf 1_{\{\sigma=\tau\}}$ for the totally inaccessible stopping times $\sigma = \sigma^I_m$ by Proposition 2.9. As all stopping times $\sigma^A_m$ are accessible and each $Y^{(0)}_m$ is $\mathcal F_{\tau_m-}$-measurable, we have that $X^{(0)}$ is an accessible process such that $X^{(0)}_\tau\mathbf 1_{\{\tau<\infty\}}$ is $\mathcal F_{\tau-}$-measurable for every stopping time $\tau$. Therefore $X^{(0)}$ is by Theorem 3.20 in [6] even predictable. By Remark 5.(c) in Appendix I of [8] the left limit process $\widetilde X^n_-$ of each optional strong supermartingale $\widetilde X^n$ is a predictable strong supermartingale satisfying
$$\widetilde X^n_{\tau-} \geq E[\widetilde X^n_\tau \mid \mathcal F_{\tau-}]$$
for all $[0,1]$-valued stopping times $\tau$, and so the inequality $X^{(0)}_\tau \geq E[X^{(1)}_\tau \mid \mathcal F_{\tau-}]$ follows immediately from (2.9) and (2.10) by Fatou's lemma. To see $X^{(1)}_{\tau-} \geq X^{(0)}_\tau$, let $(\tau_m)_{m=1}^\infty$ be a foretelling sequence of stopping times for the predictable stopping time $\tau$. Then we have
$$\widetilde X^n_{\tau_m} \geq E[\widetilde X^n_{\tau_{m+k}} \mid \mathcal F_{\tau_m}]$$
for all $n, m, k \in \mathbb N$. Applying Fatou's lemma we then obtain
$$\widetilde X^n_{\tau_m} \geq E[\widetilde X^n_{\tau-} \mid \mathcal F_{\tau_m}]$$
by sending $k \to \infty$,
$$X^{(1)}_{\tau_m} \geq E[X^{(0)}_\tau \mid \mathcal F_{\tau_m}]$$
by sending also $n \to \infty$ and finally $X^{(1)}_{\tau-} \geq X^{(0)}_\tau$ by sending $m \to \infty$. □

7. Proof of Proposition 2.12. One application of Theorem 2.11 is a convergence result for stochastic integrals of predictable integrands of finite variation with respect to nonnegative optional strong supermartingales. Fix a nonnegative optional strong supermartingale $X \in \mathcal X$, and let $\varphi = (\varphi_t)_{0\le t\le1}$ be a predictable process of finite variation, so that it has làdlàg paths. We then define
$$\int_0^t X_u(\omega)\,d\varphi_u(\omega) := \int_0^t X_u(\omega)\,d\varphi^c_u(\omega) + \sum_{0<u\le t} X_u(\omega)\,\Delta\varphi_u(\omega) + \sum_{0\le u<t} X_u(\omega)\,\Delta_+\varphi_u(\omega)$$
for $t \in [0,$
$1]$, which is again pathwise well defined. The integral $\int_0^t \varphi^c_u(\omega)\,dX_u(\omega)$ can again be defined as a pathwise Riemann–Stieltjes integral or a pathwise Lebesgue–Stieltjes integral. If $X = (X_t)_{0\le t\le1}$ is a semimartingale, the definition of $(\int_0^t \varphi_u\,dX_u)_{0\le t\le1}$ via (7.3) coincides with the classical stochastic integral.

We first derive an auxiliary result.

Lemma 7.1. Let $(X^n)_{n=1}^\infty$, $X^{(0)}$ and $X^{(1)}$ be làdlàg stochastic processes such that:

(i) $X^n_\tau \xrightarrow{P} X^{(1)}_\tau$ and $X^n_{\tau-} \xrightarrow{P} X^{(0)}_\tau$ for all $[0,1]$-valued stopping times $\tau$;

(ii) for all $\varepsilon > 0$ and $\delta > 0$, there are constants $C_1(\delta) > 0$ and $C_2(\varepsilon,\delta) > 0$ such that
$$\sup_{X\in\mathfrak X_1} P\Big[\sup_{0\le s\le1}|X_s| > C_1(\delta)\Big] \leq \delta, \quad (7.4)$$
$$\sup_{X\in\mathfrak X_2} P[M^\varepsilon(X) > C_2(\varepsilon,\delta)] \leq \delta, \quad (7.5)$$
where $\mathfrak X_1 = \{X^{(0)}, X^{(1)}, X^n, X^n_- \text{ for } n \in \mathbb N\}$, $\mathfrak X_2 = \{X^{(1)}, X^n \text{ for } n \in \mathbb N\}$ and
$$M^\varepsilon(X) := \sup\big\{m \in \mathbb N \,\big|\, |X_{t_i}(\omega) - X_{t_{i-1}}(\omega)| > \varepsilon \text{ for some } 0 \le t_0 < t_1 < \cdots < t_m \le 1\big\}$$
for $X \in \mathfrak X_2$.

Then we have, for all predictable processes $\varphi = (\varphi_t)_{0\le t\le1}$ of finite variation, that
$$\int_0^\tau X^n_u\,d\varphi_u \xrightarrow{P} \int_0^\tau X^{(1)}_u\,d\varphi^c_u + \sum_{0<u\le\tau} X^{(0)}_u\,\Delta\varphi_u + \sum_{0\le u<\tau} X^{(1)}_u\,\Delta_+\varphi_u$$
for all $[0,1]$-valued stopping times $\tau$.
Proof. (1) By (7.1), it suffices to prove the convergence of the continuous part and of the jump parts separately. For the jump parts, the sums over the first $N$ jump times of $\varphi$ converge in probability to the corresponding sums for the limiting processes, as $n \to \infty$, for each $N$ by assumption (i).

The key observation for the proof of the convergence
$$
\sup_{0\leq t\leq 1}\biggl|\int_0^t X^n_u\,d\varphi^c_u - \int_0^t X^{(1)}_u\,d\varphi^c_u\biggr| \xrightarrow{P} 0, \qquad \text{as } n \to \infty, \tag{7.8}
$$
is that we can use assumption (ii) to approximate the stochastic Riemann–Stieltjes integrals by Riemann sums in probability uniformly for all $X \in \widetilde{\mathcal{X}}$, as either the integrator or the integrand moves very little. Indeed, for $\varepsilon > 0$ and $c_1, c_2 > 0$, we have that
\begin{align*}
\sup_{0\leq t\leq 1}\biggl|\int_0^t X_u\,d\varphi^c_u - \sum_{m=1}^{N} X_{\sigma_{m-1}}\bigl(\varphi^c_{\sigma_m\wedge t} - \varphi^c_{\sigma_{m-1}\wedge t}\bigr)\biggr|
&\leq \sum_{m=1}^{N} \sup_{u\in[\sigma_{m-1},\sigma_m]} |X_u - X_{\sigma_{m-1}}|\,\bigl(|\varphi^c|_{\sigma_m} - |\varphi^c|_{\sigma_{m-1}}\bigr) \\
&\leq 2c_2^2\,\frac{\varepsilon}{4c_2^2} + \frac{\varepsilon}{2c_1}\,c_1 = \varepsilon
\end{align*}
on $\{|\varphi|_1 \leq c_1\} \cap \{X^* \leq c_2\} \cap \{M^{\varepsilon/(2c_1)}(X) \leq c_2\}$, where the stopping times $(\sigma_m)_{m=0}^{N}$ are given by $\sigma_0 = 0$ and
$$
\sigma_m := \inf\biggl\{t > \sigma_{m-1}\;\bigg|\;|\varphi^c|_t - |\varphi^c|_{\sigma_{m-1}} > \frac{\varepsilon}{4c_2^2}\biggr\} \wedge 1, \qquad m = 1,\dots,N := \biggl\lceil\frac{4c_1c_2^2}{\varepsilon}\biggr\rceil.
$$
Choosing $c_1, c_2 > 0$ sufficiently large, we therefore obtain
$$
\sup_{X\in\widetilde{\mathcal{X}}} P\Biggl(\sup_{0\leq t\leq 1}\biggl|\int_0^t X_u\,d\varphi^c_u - \sum_{m=1}^{N} X_{\sigma_{m-1}}\bigl(\varphi^c_{\sigma_m\wedge t} - \varphi^c_{\sigma_{m-1}\wedge t}\bigr)\biggr| > \varepsilon\Biggr) < \delta
$$
for any $\delta > 0$. The estimate
\begin{align*}
\sup_{0\leq t\leq 1}\biggl|\int_0^t X^n_u\,d\varphi^c_u - \int_0^t X^{(1)}_u\,d\varphi^c_u\biggr|
&\leq \sup_{0\leq t\leq 1}\biggl|\int_0^t X^n_u\,d\varphi^c_u - \sum_{m=1}^{N} X^n_{\sigma_{m-1}}\bigl(\varphi^c_{\sigma_m\wedge t} - \varphi^c_{\sigma_{m-1}\wedge t}\bigr)\biggr| \\
&\quad + \sum_{m=1}^{N} |X^n_{\sigma_{m-1}} - X^{(1)}_{\sigma_{m-1}}|\,\bigl(|\varphi^c|_{\sigma_m} - |\varphi^c|_{\sigma_{m-1}}\bigr) \\
&\quad + \sup_{0\leq t\leq 1}\biggl|\int_0^t X^{(1)}_u\,d\varphi^c_u - \sum_{m=1}^{N} X^{(1)}_{\sigma_{m-1}}\bigl(\varphi^c_{\sigma_m\wedge t} - \varphi^c_{\sigma_{m-1}\wedge t}\bigr)\biggr|
\end{align*}
then implies (7.8), as
$$
\max_{m=0,\dots,N-1} |X^n_{\sigma_m} - X^{(1)}_{\sigma_m}| \xrightarrow{P} 0, \qquad \text{as } n \to \infty,
$$
for each fixed $N$ by assumption (i).

(2) As $X^n_\tau\varphi_\tau \xrightarrow{P} X^{(1)}_\tau\varphi_\tau$ for all $[0,1]$-valued stopping times $\tau$, part (2) follows from part (1) by integration by parts. $\Box$

Combining the previous lemma with Lemma 6.1 allows us now to complete the proof of Proposition 2.12.

Proof of Proposition 2.12. Part (1) is Theorem 2.11, and part (2) follows from Lemma 7.1, as soon as we have shown that its assumptions are satisfied. Assumption (i) is (1), and for the set $\widetilde{\mathcal{X}}$ assumption (ii) can be derived from Lemma 6.1. Therefore, it only remains to show (7.4) for $X^{(0)}$ and $X^n_-$ for $n \in \mathbb{N}$. For the left limits $X^n_-$, (7.4) follows from its validity for the processes $X^n$, $n \in \mathbb{N}$, and for the predictable strong supermartingale $X^{(0)}$ from (3.1) in Appendix I of [8]. $\Box$

Acknowledgments. We would like to thank an anonymous referee for the careful reading of the paper and pertinent remarks.

REFERENCES

[1] Campi, L. and Schachermayer, W. (2006). A super-replication theorem in Kabanov's model of transaction costs. Finance Stoch.
[2] Chung, K. L. and Glover, J. (1979). Left continuous moderate Markov processes. Z. Wahrsch. Verw. Gebiete.
[3] Czichowsky, C. and Schachermayer, W. (2014). Duality theory for portfolio optimisation under transaction costs. Ann. Appl. Probab. To appear.
[4] Delbaen, F. and Schachermayer, W. (1994). A general version of the fundamental theorem of asset pricing. Math. Ann.
[5] Delbaen, F. and Schachermayer, W.
(1999). A compactness principle for bounded sequences of martingales with applications. In Seminar on Stochastic Analysis, Random Fields and Applications (Ascona, 1996). Progress in Probability. Birkhäuser, Basel.
[6] Dellacherie, C. (1972). Capacités et processus stochastiques. Ergebnisse der Mathematik und ihrer Grenzgebiete. Springer, Berlin. MR0448504
[7] Dellacherie, C. and Meyer, P.-A. (1978). Probabilities and Potential. North-Holland Mathematics Studies. North-Holland, Amsterdam. MR0521810
[8] Dellacherie, C. and Meyer, P.-A. (1982). Probabilities and Potential B. Theory of Martingales. North-Holland Mathematics Studies. North-Holland, Amsterdam. MR0745449
[9] Föllmer, H. and Kramkov, D. (1997). Optional decompositions under constraints. Probab. Theory Related Fields.
[10] Jacod, J. (1979). Calcul stochastique et problèmes de martingales. Lecture Notes in Math. Springer, Berlin. MR0542115
[11] Karatzas, I. and Žitković, G. (2003). Optimal consumption from investment and random endowment in incomplete semimartingale markets. Ann. Probab.
[12] Komlós, J. (1967). A generalization of a problem of Steinhaus. Acta Math. Acad. Sci. Hungar.
[13] Kramkov, D. and Schachermayer, W. (1999). The asymptotic elasticity of utility functions and optimal investment in incomplete markets. Ann. Appl. Probab.
[14] Mertens, J.-F. (1972). Théorie des processus stochastiques généraux, applications aux surmartingales. Z. Wahrsch. Verw. Gebiete.
[15] Perkowski, N. and Ruf, J. (2015). Supermartingales as Radon–Nikodým densities and related measure extensions. Ann. Probab.
[16] Protter, P. E. (2005). Stochastic Integration and Differential Equations, 2nd ed. Stochastic Modelling and Applied Probability. Springer, Berlin. Version 2.1, corrected third printing. MR2273672
[17] Schachermayer, W. (2004). Portfolio Optimization in Incomplete Financial Markets. Scuola Normale Superiore, Classe di Scienze, Pisa. MR2144570
[18] Schwartz, M. (1986). New proofs of a theorem of Komlós. Acta Math. Hungar.
[19] Žitković, G. (2010). Convex compactness and its applications. Math. Financ. Econ.

Department of Mathematics
London School of Economics and Political Science
Columbia House, Houghton Street
London WC2A 2AE
United Kingdom
E-mail: [email protected]