Clark--Ocone formula and variational representation for Poisson functionals
The Annals of Probability
Institute of Mathematical Statistics, 2009
CLARK–OCONE FORMULA AND VARIATIONAL REPRESENTATION FOR POISSON FUNCTIONALS
By Xicheng Zhang, University of New South Wales and Huazhong University of Science and Technology
In this paper we first prove a Clark–Ocone formula for any bounded measurable functional on Poisson space. Then, using this formula, under some conditions on the intensity measure of the Poisson random measure, we prove a variational representation formula for the Laplace transform of bounded Poisson functionals, which was conjectured by Dupuis and Ellis [A Weak Convergence Approach to the Theory of Large Deviations (1997) Wiley], page 122.
1. Introduction.
Let $W$ be a standard $d$-dimensional Brownian motion. The following elegant formula for the Laplace transform of a bounded and measurable functional $F$ of Brownian motion was first established by Boué and Dupuis [1]:
$$-\log\mathbb E[e^{-F}] = \inf_v\,\mathbb E\Big[F\Big(\cdot+\int_0^\cdot v_s\,ds\Big)+\frac12\int_0^1|v_s|^2\,ds\Big],\tag{1}$$
where the infimum is taken over all processes $v$ that are progressively measurable with respect to the augmented filtration generated by the Brownian motion. This result was later extended to Hilbert space-valued Brownian motion by Budhiraja and Dupuis [3]. Furthermore, the author in [21] extended this representation to the abstract Wiener space, and gave a simplified proof using the Clark–Ocone formula. The formula has proven useful in deriving various asymptotic results in large deviations (cf. [1, 2, 3, 16, 17, 18]). For Poisson functionals, a similar representation formula was conjectured by Dupuis and Ellis in [4], page 122, from the background of control theory.

Received February 2008. Supported by ARC Discovery Grant DP0663153 of Australia.
AMS 2000 subject classifications.
Key words and phrases.
Clark–Ocone formula, variational representation, Poisson functional, Girsanov theorem.
This is an electronic reprint of the original article published by the Institute of Mathematical Statistics in The Annals of Probability, 2009, Vol. 37, No. 2, 506–529. This reprint differs from the original in pagination and typographic detail.
The conjectured formula from control theory reads:
$$-\log\mathbb E[e^{-F}] = \inf_\phi\,\mathbb E\Big[\int_0^1\int_{\mathbb R^d}[\phi(y,t)\log\phi(y,t)-\phi(y,t)+1]\,\nu_{\bar X(t)}(dy)\,dt + F(\bar X)\Big],$$
where the infimum is taken over all suitable controls $\phi$, and $\bar X$ is a controlled Markov process with jumps defined by the generator
$$\int_{\mathbb R^d}[f(x+y)-f(x)]\,\phi(y,t)\,\nu_x(dy).$$
Here, $\nu_x(dy)$ is the jump intensity of a Markov process. However, no rigorous proof of this variational formula has been given up to now. In the present paper we attempt to give a rigorous proof in a more general setting. Roughly speaking, let $(\Omega,P)$ be the canonical Poisson space (simple configuration space over $[0,1]\times\mathbb R^d$) and $\nu$ an intensity measure on $\mathbb R^d$. For any bounded random variable $F$ on $\Omega$, we want to prove that
$$-\log\mathbb E(e^{-F}) = \inf_\phi\,\mathbb E\Big[\int_0^1\int_{\mathbb R^d}[\phi(y,t)\log\phi(y,t)-\phi(y,t)+1]\,\nu(dy)\,dt + F\circ\Gamma^{-1}_\phi\Big],$$
where the infimum runs over certain classes of predictable processes, and $\Gamma^{-1}_\phi$ is a predictable transformation on $\Omega$ associated with $\phi$.

In contrast to the Wiener space case, the main difficulty in proving this formula comes from the nonlinearity of Poisson space. In particular, the Girsanov theorem for the Poisson measure is related to certain nonlinear invertible predictable transformations on $\mathbb R^d$ (cf. [5], Theorem 3.10.21). Indeed, the definition of the above $\Gamma^{-1}_\phi$ depends on solving a mass transportation problem, or the classical Monge–Ampère equation. More precisely, for a given positive function $\phi$, we need to find an invertible transformation $x\mapsto y(x)$ of $\mathbb R^d$ such that, for all test functions $f\in C_c(\mathbb R^d)$,
$$\int_{\mathbb R^d} f(y(x))\,\nu(dx) = \int_{\mathbb R^d} f(x)\,\phi(x)\,\nu(dx),$$
which is formally equivalent to solving the following nonlinear PDE in the case $\nu(dx)=\theta(x)\,dx$:
$$\theta(y^{-1}(x))\cdot\det(\nabla y^{-1}(x)) = \theta(x)\,\phi(x).$$
For the optimal mass transportation problem, we refer to the book of Villani [20].
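The formal equivalence of the integral identity with the stated PDE is a one-line change of variables; the following sketch (our computation, not spelled out in the text) assumes $y$ is a $C^1$ diffeomorphism, $\nu(dx)=\theta(x)\,dx$, and $\det$ is read as the absolute value of the Jacobian determinant:

```latex
\int_{\mathbb{R}^d} f(y(x))\,\theta(x)\,dx
  \;\overset{z=y(x)}{=}\;
  \int_{\mathbb{R}^d} f(z)\,\theta(y^{-1}(z))\,\det\big(\nabla y^{-1}(z)\big)\,dz
  \;=\;
  \int_{\mathbb{R}^d} f(z)\,\phi(z)\,\theta(z)\,dz,
```

where the last equality is the required identity for all test functions $f$; matching the integrands yields $\theta(y^{-1}(x))\cdot\det(\nabla y^{-1}(x))=\theta(x)\phi(x)$.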
Since our problem imposes no constraint conditions on $y$, an easy solution can be constructed when $\nu$ has full support and charges no $(d-1)$-dimensional hyperplanes.

In order to prove the above variational representation formula, the first step is to establish the following Clark–Ocone formula: for any bounded functional $F$,
$$F = \mathbb EF + \int_0^1\int_U {}^pD_{(u,t)}F\,\tilde\mu(du,dt),$$
where $\tilde\mu$ is the compensated Poisson random measure, ${}^pD_{(u,t)}F$ is the predictable projection of $D_{(u,t)}F$, and $D$ is the difference operator [see (6) below]. The proof of this formula depends on an integration by parts formula given in Picard [11, 12]. Although there are many martingale representation formulas for Poisson functionals (e.g., see [8, 9, 13]), the well-known results mainly concern the representation of functionals in the first-order Sobolev space via the chaos decomposition. The main point for us is that ${}^pD_{(u,t)}F$ is a bounded predictable process.

This paper is organized as follows. In Section 2 some notation and necessary lemmas are given as preliminaries. In Section 3 we prove the Clark–Ocone formula for bounded Poisson functionals. In Section 4 we prove two variational representation formulas for Poisson functionals: one (Theorem 4.4 below) is weaker and needs no extra assumption; the other (Theorem 4.11 below) is stronger, works in a locally compact metric space, and requires some extra assumptions [see (H1) and (H2) below]. In Section 5 we discuss these two extra assumptions, and give a solution when $U=\mathbb R^d$ and the intensity $\nu$ satisfies certain assumptions.
2. Preliminaries.
Let $U$ be a Lusin space, that is, a Hausdorff space that is the image of a Polish space under a continuous bijection. We fix a $\sigma$-finite and infinite measure $\nu$ on $(U,\mathcal B(U))$. Since $U\times[0,1]$ is still a Lusin space and has the same cardinality as $\mathbb R$, it is well known that $(U\times[0,1],\mathcal B(U\times[0,1]))$ is Borel-isomorphic to $([0,1],\mathcal B([0,1]))$. Let $\Omega$ be the set of all integer-valued measures $\omega$ on $U\times[0,1]$ such that $\omega(\{(u,t)\})\le 1$ for all $(u,t)\in U\times[0,1]$, and $\omega(A\times[0,1])<+\infty$ for any $A\in\mathcal B(U)$ with $\nu(A)<\infty$. The canonical random measure on $\Omega$ is then defined by
$$\mu_\omega(A\times(0,t]) := \omega(A\times(0,t]),\qquad t\in[0,1],\ A\in\mathcal B(U).$$
The filtration $(\mathcal F_t)_{t\in[0,1]}$ is defined by
$$\mathcal F_t := \sigma\{\mu_\omega(A\times(0,s]):\ s\le t,\ A\in\mathcal B(U)\}.$$
We shall simply write $\mathcal F_1$ as $\mathcal F$. Let $P$ be the probability measure on $(\Omega,\mathcal F)$ such that $\mu_\omega$ is a Poisson random measure with the intensity measure $\nu(du)$.
That is, for any $A\in\mathcal B(U)$ with $\nu(A)<\infty$ and $t\in[0,1]$, $\omega\mapsto\mu_\omega(A\times(0,t])$ is a Poisson random variable with mean $\nu(A)\cdot t$, and the random variables $\omega\mapsto\mu_\omega(I_i\times A_j)$ are independent if the sets $I_i\times A_j$ are disjoint. We shall also denote by $\tilde\mu_\omega$ the compensated Poisson random measure $\mu_\omega-\pi$, where $\pi(du,dt):=\nu(du)\times dt$, and $dt$ is the Lebesgue measure on $[0,1]$. Let $\mathcal F^P_t$ be the completion of $\mathcal F_t$ with respect to $P$; then $(\Omega,\mathcal F^P,P;(\mathcal F^P_t)_{t\in[0,1]})$ forms a complete filtered probability space. We shall denote by $\mathcal P$ the predictable $\sigma$-field associated with $(\mathcal F^P_t)_{t\in[0,1]}$, which is generated by all left continuous $\mathcal F^P_t$-adapted processes. For simplicity of notation, we shall write, for $p\in[1,\infty]$,
$$L^p := L^p(U\times[0,1]\times\Omega,\ \mathcal B(U\times[0,1])\times\mathcal F^P,\ \pi\times P)$$
and
$$L^p_{\mathcal P} := L^p(U\times[0,1]\times\Omega,\ \mathcal B(U)\times\mathcal P,\ \nu\times dt\times P).$$
Let $\mathbf C$ be the linear span of the following simple processes:
$$\phi(u,t,\omega) := 1_{(t_0,t_1]}(t)\cdot g(u,\omega),\tag{2}$$
where $0\le t_0<t_1\le 1$, $g$ is bounded and $\mathcal B(U)\times\mathcal F_{t_0}$-measurable, and satisfies
$$g(u,\omega)\cdot 1_{U_0^c}(u) = 0\quad\text{for some } U_0\in\mathcal B(U)\text{ with }\nu(U_0)<+\infty.\tag{3}$$

Remark 2.1.
For a bounded $g$ that is $\mathcal B(U)\times\mathcal F^P_{t_0}$-measurable, by the monotone class theorem, we can find a $\mathcal B(U)\times\mathcal F_{t_0}$-measurable $\tilde g$ such that $\tilde g=g$, $\nu\times P$-a.s.

The following lemma is standard. The construction will also be used in the proof of Lemma 4.8 below.

Lemma 2.2. $\mathbf C$ is dense in $L^p_{\mathcal P}$ for any $p\in[1,\infty)$.

Proof.
We sketch the proof. Let $\phi\in L^p_{\mathcal P}$. For $\varepsilon\in(0,1/2)$, extend $\phi$ to $[-\varepsilon,0]$ by setting $\phi(u,t,\omega)=0$ for $t\in[-\varepsilon,0]$, and define
$$\phi_\varepsilon(u,t,\omega) := \frac{1}{\varepsilon^2}\int_{t-\varepsilon}^{t}\int_{s-\varepsilon}^{s}\phi(u,r,\omega)\,dr\,ds,\qquad t\in[0,1].$$
Obviously, $t\mapsto\phi_\varepsilon(u,t,\omega)$ is a continuously differentiable and $\mathcal F^P_t$-adapted process, and satisfies
$$\int_0^1|\phi_\varepsilon(u,t,\omega)|^p\,dt \le \int_0^1|\phi(u,t,\omega)|^p\,dt,\qquad
\int_0^1|\phi'_\varepsilon(u,t,\omega)|^p\,dt \le \frac{2^{p+1}}{\varepsilon^p}\int_0^1|\phi(u,t,\omega)|^p\,dt.$$
Second, for $\varepsilon\in(0,1/2)$ and $n\in\mathbb N$, we define
$$\phi_{\varepsilon,n}(u,t,\omega) := \sum_{k=0}^{n-1} 1_{(kn^{-1},(k+1)n^{-1}]}(t)\cdot\phi_\varepsilon(u,kn^{-1},\omega).$$
Then
$$\int_0^1|\phi_{\varepsilon,n}(u,t,\omega)-\phi_\varepsilon(u,t,\omega)|^p\,dt \le \sup_{|t-s|\le n^{-1}}|\phi_\varepsilon(u,t,\omega)-\phi_\varepsilon(u,s,\omega)|^p \le n^{-p}\int_0^1|\phi'_\varepsilon(u,t,\omega)|^p\,dt.$$
Last, let $(U_m)_{m\in\mathbb N}$ be an increasing sequence of Borel subsets of $U$ such that $\bigcup_m U_m=U$ and $\nu(U_m)<+\infty$, and define
$$\phi^m_\varepsilon(u,kn^{-1},\omega) := \big[(-m)\vee(\phi_\varepsilon(u,kn^{-1},\omega)\wedge m)\big]\cdot 1_{U_m}(u).$$
By the diagonalization method and the dominated convergence theorem, we may find the desired approximation in $L^p_{\mathcal P}$ by Remark 2.1. □

We recall the notion of relative entropy as follows (cf. [4]).
Definition 2.3.
Let $\mathcal P(\Omega)$ denote the set of all probability measures defined on $(\Omega,\mathcal F)$. For $\gamma\in\mathcal P(\Omega)$, the relative entropy function $R(\cdot\|\gamma)$ is a mapping from $\mathcal P(\Omega)$ into $\mathbb R\cup\{+\infty\}$ given by
$$R(\gamma'\|\gamma) := \mathbb E^{\gamma'}\Big(\log\frac{d\gamma'}{d\gamma}\Big),$$
whenever $\gamma'\in\mathcal P(\Omega)$ is absolutely continuous with respect to $\gamma$ and the above integral is finite, where $\mathbb E^{\gamma'}$ denotes the expectation with respect to $\gamma'$. In all other cases, $R(\gamma'\|\gamma):=\infty$.

The following proposition can be found in [4], Proposition 1.4.2.
Let $\gamma\in\mathcal P(\Omega)$, and $F$ a bounded random variable on $(\Omega,\mathcal F)$.

(i) We have the following variational formula:
$$-\log\mathbb E^\gamma(e^{-F}) = \inf_{\gamma'\in\mathcal P(\Omega)}\,[R(\gamma'\|\gamma)+\mathbb E^{\gamma'}(F)].$$

(ii) The infimum in (i) is uniquely attained at the probability measure $\gamma_0$ defined by
$$\gamma_0(d\omega) = e^{-F(\omega)}/\mathbb E^\gamma(e^{-F})\cdot\gamma(d\omega).\tag{4}$$
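Proposition 2.4 can be verified directly on a finite sample space. The sketch below (the three-point space, $\gamma$ and $F$ are arbitrary toy choices of ours, not from the paper) brute-forces the infimum in (i) over a grid on the probability simplex and checks that the tilted measure $\gamma_0$ of (ii) attains it exactly:

```python
import math

# Toy reference measure gamma and bounded functional F on a 3-point space.
gamma = [0.2, 0.5, 0.3]
F = [1.0, -0.5, 2.0]

# Left-hand side of Proposition 2.4(i): -log E^gamma(e^{-F}).
Z = sum(g * math.exp(-f) for g, f in zip(gamma, F))
lhs = -math.log(Z)

def objective(gp):
    # R(gamma'||gamma) + E^{gamma'}(F), with the convention 0*log 0 = 0.
    ent = sum(p * math.log(p / g) for p, g in zip(gp, gamma) if p > 0)
    return ent + sum(p * f for p, f in zip(gp, F))

# Brute-force the infimum over a grid on the probability simplex.
best = min(objective([i / 200, j / 200, (200 - i - j) / 200])
           for i in range(201) for j in range(201 - i))

# Part (ii): the tilted measure gamma_0 = e^{-F} gamma / Z attains the infimum.
gamma0 = [g * math.exp(-f) / Z for g, f in zip(gamma, F)]
attained = objective(gamma0)
```

Any positive $\gamma$ summing to one and any real vector $F$ give the same conclusion: the grid minimum approaches `lhs` from above, while `attained` equals it up to rounding.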
The following will be used in the second part of Section 4. In order to prove the variational representation formula for Poisson functionals, we need to endow $\Omega$ with a suitable topology under which $\Omega$ becomes a Polish space. For this, we assume that $U$ is a noncompact, locally compact, connected complete metric space, and that $\nu$ is a Radon measure on $U$. Let $C_c(U\times[0,1])$ denote the space of continuous functions on $U\times[0,1]$ with compact supports. The topology on $\Omega$ is taken as the weakest topology such that, for any $f\in C_c(U\times[0,1])$, the map
$$\omega\mapsto\langle f,\mu_\omega\rangle := \int_0^1\int_U f(u,t)\,\mu_\omega(du,dt) = \sum_{(u,t)\in\mathrm{supp}(\omega)} f(u,t)\tag{5}$$
is continuous, where $\mathrm{supp}(\omega)$ is the support of the integer-valued Radon measure $\omega$, and the sum has only finitely many terms. By [15], Theorem 1.8, $\Omega$ is a Polish space under this topology.

The following result can be found in [1], Lemma 2.8.

Lemma 2.5.
Let $\gamma\in\mathcal P(\Omega)$ and $\{\gamma_n,\ n\in\mathbb N\}\subset\mathcal P(\Omega)$ satisfy
$$\sup_{n\in\mathbb N} R(\gamma_n\|\gamma) < +\infty.$$

(i) If $\{F_k,\ k\in\mathbb N\}$ is a sequence of uniformly bounded random variables converging to $F$, $\gamma$-a.s., then
$$\lim_{k\to\infty}\sup_{n\in\mathbb N}\mathbb E^{\gamma_n}|F_k-F| = 0.$$

(ii) If $\gamma_n$ converges weakly to the probability measure $\gamma$, then for any bounded random variable $F$ on $(\Omega,\mathcal F)$,
$$\lim_{n\to\infty}\mathbb E^{\gamma_n}(F) = \mathbb E^\gamma(F).$$

Let $\mathcal C$ be the set of all cylindrical functions on $\Omega$ of the form
$$F(\omega) := h(\langle f_1,\mu_\omega\rangle,\ldots,\langle f_n,\mu_\omega\rangle),\qquad h\in C^\infty_c(\mathbb R^n),\ f_i\in C_c(U\times[0,1]).$$
We also need the following standard result.
Lemma 2.6.
Let $F$ be a bounded random variable on $(\Omega,\mathcal F^P,P)$. Then there exists a family of functions $F_n\in\mathcal C$ with $\sup_n\|F_n\|_\infty\le\|F\|_\infty$ such that for $P$-almost all $\omega$,
$$F_n(\omega)\to F(\omega),\qquad\text{as } n\to\infty.$$

Proof.
We sketch it. Let $\{f_i,\ i\in\mathbb N\}\subset C_c(U\times[0,1])$ be a countable family that is dense in $C_c(U\times[0,1])$, and set
$$\mathcal Q_n := \sigma\{\langle f_i,\mu_\omega\rangle,\ i=1,\ldots,n\}.$$
Then $\mathcal Q^P_n\uparrow\mathcal Q^P_\infty=\mathcal F^P$. First, let $G_n:=\mathbb E(F|\mathcal Q_n)$; then there exists some bounded measurable function $h_n$ on $\mathbb R^n$ such that
$$G_n(\omega) = h_n(\langle f_1,\mu_\omega\rangle,\ldots,\langle f_n,\mu_\omega\rangle).$$
Next, using the usual localizing and mollifying techniques, we may approximate $h_n$ by $h_{n,k}\in C^\infty_c(\mathbb R^n)$. By the diagonalization method, and extracting a subsequence if necessary, we then get the desired approximation sequence. □
3. Clark–Ocone formula.
Let us first recall some definitions concerning the difference operator given in [11, 12]. For a fixed $(u,t)\in U\times[0,1]$, define two transformations $\varepsilon^-_{(u,t)}$ and $\varepsilon^+_{(u,t)}$ on $\Omega$ by removing and adding a mass as follows: for $A\in\mathcal B(U\times[0,1])$,
$$(\varepsilon^-_{(u,t)}\omega)(A) := \omega(A\cap\{(u,t)\}^c)$$
and
$$(\varepsilon^+_{(u,t)}\omega)(A) := (\varepsilon^-_{(u,t)}\omega)(A) + 1_A(u,t).$$
It is clear that $(u,t,\omega)\mapsto\varepsilon^\pm_{(u,t)}\omega$ are $\mathcal B(U\times[0,1])\times\mathcal F^P/\mathcal F^P$-measurable. For a functional $F$ on $\Omega$, the difference operator $D$ is defined by
$$D_{(u,t)}F(\omega) := F\circ\varepsilon^+_{(u,t)}(\omega) - F(\omega).\tag{6}$$
Clearly, it is well defined except on a $\pi\times P$-null set $N$. In the following, we always put $D_{(u,t)}F(\omega)=0$ for $(u,t,\omega)\in N$. For a $\phi\in L^2$, the divergence operator $\delta$ is defined by
$$\delta(\phi)(\omega) := \int_0^1\int_U \phi(u,t,\varepsilon^-_{(u,t)}\omega)\,\tilde\mu_\omega(du,dt).$$
We need the following simple lemma.
Lemma 3.1.
For any $\phi\in\mathbf C$, we have
$$\delta(\phi)(\omega) = \int_0^1\int_U \phi(u,t,\omega)\,\tilde\mu_\omega(du,dt).$$

Proof.
Let $\phi\in\mathbf C$ have the form (2). Notice that for any $A\in\mathcal F_{t_0}$ and $t>t_0$, $u\in U$,
$$1_A\circ\varepsilon^-_{(u,t)} = 1_A.$$
Since $g$ is bounded and $\mathcal B(U)\times\mathcal F_{t_0}$-measurable, by the monotone class theorem, we have for any $t>t_0$,
$$g(u,\varepsilon^-_{(u,t)}\omega) = g(u,\omega).$$
Hence,
$$\delta(\phi)(\omega) = \int_0^1\int_U 1_{(t_0,t_1]}(t)\cdot g(u,\varepsilon^-_{(u,t)}\omega)\,\tilde\mu_\omega(du,dt)
= \int_0^1\int_U 1_{(t_0,t_1]}(t)\cdot g(u,\omega)\,\tilde\mu_\omega(du,dt)
= \int_0^1\int_U \phi(u,t,\omega)\,\tilde\mu_\omega(du,dt).$$
The result follows. □
The following integration by parts formula can be found in [11], Lemma 1.4.
Theorem 3.2.
Let $\phi\in L^2$ and $F$ be a bounded random variable. Then
$$\mathbb E(F\,\delta(\phi)) = \mathbb E\Big(\int_0^1\int_U D_{(u,t)}F\cdot\phi(u,t)\,\pi(du,dt)\Big).$$

Before proving the Clark–Ocone formula, we recall the following classical predictable projection theorem (cf. [19], page 173, Theorem 5.6).
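The integration by parts identity can be sanity-checked numerically in the simplest possible setting (our toy reduction, not from the paper): $U$ a single atom of total mass $\lambda$, so $\mu$ is a rate-$\lambda$ Poisson process on $[0,1]$; $F=e^{-N}$ with $N$ the total number of points; and $\phi=1_{(t_0,t_1]}(t)$ deterministic. Then $\delta(\phi)=N_{(t_0,t_1]}-\lambda(t_1-t_0)$, $D_{(u,t)}F=(e^{-1}-1)F$, and the right-hand side has the closed form $\lambda(t_1-t_0)(e^{-1}-1)e^{\lambda(e^{-1}-1)}$, which the sketch compares with a Monte Carlo estimate of $\mathbb E(F\,\delta(\phi))$:

```python
import math, random

random.seed(0)
lam, t0, t1 = 2.0, 0.25, 0.75          # intensity mass and window (toy choices)
delta = t1 - t0

def poisson(mean):
    # Knuth's inversion sampler for a Poisson random variable.
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

M = 200_000
acc = 0.0
for _ in range(M):
    n_in = poisson(lam * delta)         # points falling in (t0, t1]
    n_out = poisson(lam * (1 - delta))  # points falling outside
    N = n_in + n_out
    acc += math.exp(-N) * (n_in - lam * delta)   # F * delta(phi)
mc_lhs = acc / M

# Closed form of E( int int D_{(u,t)}F * phi dpi ) in this special case.
exact_rhs = lam * delta * (math.exp(-1) - 1) * math.exp(lam * (math.exp(-1) - 1))
```

The two quantities agree up to Monte Carlo error, illustrating Theorem 3.2 for this choice of $F$ and $\phi$.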
Lemma 3.3.
Let $\psi$ be a bounded measurable process on $U\times[0,1]\times\Omega$. There exists a unique (up to indistinguishability in $t$, for each $u$) predictable process $\phi\in L^\infty_{\mathcal P}$ such that for every predictable stopping time $\tau$ and $u\in U$,
$$\mathbb E(\psi(u,\tau)\cdot 1_{\{\tau<\infty\}}\,|\,\mathcal F^P_{\tau-}) = \phi(u,\tau)\cdot 1_{\{\tau<\infty\}},\qquad P\text{-a.s.}\tag{7}$$
We shall write $\phi$ as ${}^p\psi$, which is called the predictable projection of $\psi$.

Proof. (Uniqueness) Let $\phi_1$ and $\phi_2$ be two predictable projections of $\psi$. Set
$$A := \{(u,t,\omega):\ \phi_1(u,t,\omega)\ne\phi_2(u,t,\omega)\}.$$
Then for each $u\in U$, the section $\Pi_u(A):=\{(t,\omega):(u,t,\omega)\in A\}\in\mathcal P$. By the section theorem (cf. [19], page 172, Theorem 5.5) and (7), we have $P(\Pi(\Pi_u(A)))=0$, where $\Pi(\Pi_u(A))=\{\omega:\ (t,\omega)\in\Pi_u(A)\ \text{for some}\ t\in[0,1]\}$. Hence, for every $u\in U$, $\phi_1(u,\cdot,\cdot)$ and $\phi_2(u,\cdot,\cdot)$ are indistinguishable.

(Existence) Let $\mathcal M$ be the class of all bounded measurable processes $\psi$ possessing a predictable projection. It is clear that $\mathcal M$ is a vector space containing the constants. Moreover, $\mathcal M$ is also a monotone class. In fact, let $\psi_n\in\mathcal M$ be a uniformly bounded increasing sequence with limit $\psi$, and let $\phi_n$ be the corresponding predictable projection of $\psi_n$. It is then easily checked by the monotone convergence theorem that $\lim_n\phi_n$ is the predictable projection of $\psi$.

Hence, it is enough to prove that $\mathcal M$ contains all processes of the form $\psi(u,t,\omega)=1_{[0,t_1]}(t)\cdot g(u,\omega)$, which generate the $\sigma$-field $\mathcal B(U)\times\mathcal B([0,1])\times\mathcal F^P$, where $g$ is bounded and $\mathcal B(U)\times\mathcal F^P$-measurable. Define
$$\phi(u,t) = 1_{[0,t_1]}(t)\cdot\mathbb E(g(u)\,|\,\mathcal F^P_{t-}).$$
By Doob's optional stopping theorem, one then finds that such a $\phi$ is a predictable projection of $\psi$. The proof is complete. □

We now prove the following Clark–Ocone formula.
Theorem 3.4.
Let $F$ be any bounded random variable on $\Omega$. Then
$$F = \mathbb EF + \int_0^1\int_U {}^pD_{(u,t)}F\,\tilde\mu(du,dt),\tag{8}$$
where ${}^pD_{(u,t)}F\in L^2_{\mathcal P}\cap L^\infty_{\mathcal P}$ is the predictable projection of $D_{(u,t)}F$. Moreover,
$$\mathbb E\Big(\int_0^1\int_U |{}^pD_{(u,t)}F|^2\,\pi(du,dt)\Big) < +\infty.$$

Proof.
It is well known that there exists a predictable process $\varphi\in L^2_{\mathcal P}$ such that (cf. [7])
$$F = \mathbb E(F) + \int_0^1\int_U \varphi(u,t)\,\tilde\mu(du,dt).$$
By Lemma 3.1 and the isometry formula for the stochastic integral, we have for any $\phi\in\mathbf C$,
$$\mathbb E(F\,\delta(\phi)) = \mathbb E\Big(\int_0^1\int_U \varphi(u,t)\cdot\phi(u,t)\,\pi(du,dt)\Big).\tag{9}$$
On the other hand, by Theorem 3.2 and Fubini's theorem, we have for any $\phi\in\mathbf C$,
$$\mathbb E(F\,\delta(\phi)) = \mathbb E\Big(\int_0^1\int_U D_{(u,t)}F\cdot\phi(u,t)\,\pi(du,dt)\Big)
= \int_0^1\int_U \mathbb E\big(\mathbb E(D_{(u,t)}F\,|\,\mathcal F^P_{t-})\cdot\phi(u,t)\big)\,\pi(du,dt)$$
$$\text{[by (7)]}\quad = \int_0^1\int_U \mathbb E\big({}^pD_{(u,t)}F\cdot\phi(u,t)\big)\,\pi(du,dt)
= \mathbb E\Big(\int_0^1\int_U {}^pD_{(u,t)}F\cdot\phi(u,t)\,\pi(du,dt)\Big).\tag{10}$$
Formula (8) now follows by combining (9), (10) and Lemma 2.2.

By the BDG inequality (cf. [5], Theorem 4.1.12) and (8), we have
$$\mathbb E\Big(\int_0^1\int_U |{}^pD_{(u,t)}F|^2\,\mu(du,dt)\Big) \le C\,\mathbb E\Big(\sup_{t\in[0,1]}\Big|\int_0^t\int_U {}^pD_{(u,s)}F\,\tilde\mu(du,ds)\Big|^2\Big) \le C\,\mathbb E(F-\mathbb EF)^2,$$
where $C$ is a universal constant. Hence, since $|{}^pD_{(u,t)}F|^2$ is predictable and $\pi$ is the compensator of $\mu$,
$$\mathbb E\Big(\int_0^1\int_U |{}^pD_{(u,t)}F|^2\,\pi(du,dt)\Big) = \mathbb E\Big(\int_0^1\int_U |{}^pD_{(u,t)}F|^2\,\mu(du,dt)\Big) \le C\,\mathbb E(F-\mathbb EF)^2 < +\infty.$$
The proof is complete. □
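As a concrete illustration of (8) (our worked example, not from the original), take $A\in\mathcal B(U)$ with $\nu(A)<\infty$ and $F=\exp(-\mu(A\times(0,1]))$. Writing $N^A_t:=\mu(A\times(0,t])$, the martingale $M_t:=\mathbb E(F|\mathcal F_t)$ equals $e^{-N^A_t}\exp\{\nu(A)(1-t)(e^{-1}-1)\}$, and since $(u,t)$ is $P$-a.s. not an atom of $\omega$:

```latex
D_{(u,t)}F(\omega) = \big(e^{-1}-1\big)\,1_A(u)\,F(\omega),
\qquad
{}^pD_{(u,t)}F = \big(e^{-1}-1\big)\,1_A(u)\,M_{t-}.
```

Thus (8) reads $F = e^{\nu(A)(e^{-1}-1)} + (e^{-1}-1)\int_0^1\int_U 1_A(u)\,M_{s-}\,\tilde\mu(du,ds)$, with $\mathbb EF=e^{\nu(A)(e^{-1}-1)}$, which is exactly the exponential-martingale identity for $M_t$.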
Remark 3.5.
In general, it is not known whether $\mathbb E(D_{(u,t)}F\,|\,\mathcal F^P_{t-})$ is predictable, although for fixed $(u,t)\in U\times[0,1]$, $\mathbb E(D_{(u,t)}F\,|\,\mathcal F^P_{t-})={}^pD_{(u,t)}F$ a.s. by (7). However, Løkka in [8], Theorem 7 and Proposition 10, proved that for a Lévy functional $F$ belonging to the first-order Sobolev space, $(u,t)\mapsto\mathbb E(D_{(u,t)}F\,|\,\mathcal F^P_{t-})$ is predictable. Compared with Løkka's result, Theorem 3.4 only requires that $F$ be bounded; more importantly, the bound of ${}^pD_{(u,t)}F$ can be explicitly calculated from $F$, which is crucial for the next section. Moreover, this could also have applications in mathematical finance as in [8], Section 5.
4. Variational representation formula.
We begin with the following elementary lemma.
Lemma 4.1.
Let $c>-1$. Then for some $C>0$ and any $x\ge c$,
$$|\log(1+x)| \le C|x|,\qquad |\log(1+x)-x| \le C|x|^2\tag{11}$$
and
$$|(1+x)\log(1+x)-x| \le C|x|^2.\tag{12}$$

Let $\phi\in L^2_{\mathcal P}$ be a bounded predictable process satisfying, for some constant $c_\phi$,
$$\phi(u,t,\omega) \ge c_\phi > -1,\tag{13}$$
$$\mathbb E\Big(\int_0^1\int_U \phi(u,t)^2\,\pi(du,dt)\Big)^2 < +\infty,\tag{14}$$
and such that $t\mapsto\mathcal E_t(\phi)$ is a square integrable $\mathcal F^P_t$-martingale, where
$$\mathcal E_t(\phi) := \exp\Big\{\int_0^t\int_U\log(1+\phi(u,s))\,\tilde\mu(du,ds) + \int_0^t\int_U[\log(1+\phi(u,s))-\phi(u,s)]\,\pi(du,ds)\Big\}.\tag{15}$$
By (11), $\mathcal E_t(\phi)$ is well defined. The class of all such predictable processes will be denoted by $\mathcal G$.

Proposition 4.2.
Let $0<c_1\le F\le c_2$ be a random variable on $\Omega$. Then for some $\phi\in\mathcal G$,
$$\mathbb E(F|\mathcal F_t) = \mathbb EF\cdot\mathcal E_t(\phi),\qquad \forall t\in[0,1].$$
More precisely,
$$\phi(u,t) = \frac{{}^pD_{(u,t)}F}{\mathbb E(F|\mathcal F_{t-})},\qquad \pi\times P\text{-a.s.}\tag{16}$$

Proof.
By Theorem 3.4, we have
$$M_t := \mathbb E(F|\mathcal F_t) = \mathbb EF + \int_0^t\int_U {}^pD_{(u,s)}F\,\tilde\mu(du,ds).$$
Let $\phi$ be given by (16). Then it is clear by Theorem 3.4 that $\phi\in L^2_{\mathcal P}\cap L^\infty_{\mathcal P}$ and (14) holds. For (13), it only needs to be noticed that, by (7) and (6),
$$\phi(u,t) = \frac{\mathbb E(F\circ\varepsilon^+_{(u,t)}\,|\,\mathcal F_{t-})}{\mathbb E(F|\mathcal F_{t-})} - 1 \ge \frac{c_1}{c_2} - 1 > -1,\qquad \pi\times P\text{-a.s.}$$
On the other hand, if we define
$$X_t := \int_0^t\int_U \phi(u,s)\,\tilde\mu(du,ds),$$
then, since $\phi\in L^2_{\mathcal P}$, $X$ is a square-integrable $\mathcal F^P_t$-martingale with $\Delta X_s\ge \frac{c_1}{c_2}-1>-1$, and
$$M_t = \mathbb EF + \int_0^t M_{s-}\,dX_s.$$
By [14], page 84, Theorem 37 (the Doléans-Dade exponential formula), we have
$$M_t = \mathbb EF\cdot\exp\{X_t\}\cdot\prod_{s\le t}(1+\Delta X_s)e^{-\Delta X_s} = \mathbb EF\cdot\mathcal E_t(\phi),$$
so that $\mathcal E_t(\phi)=M_t/\mathbb EF$ is a square integrable martingale and hence $\phi\in\mathcal G$. The proof is complete. □
Proposition 4.3. For $\phi\in\mathcal G$, define a new probability measure on $(\Omega,\mathcal F^P)$ by
$$dP_\phi := \mathcal E_1(\phi)\,dP;\tag{18}$$
then for any $\psi\in L^2_{\mathcal P}$,
$$t\mapsto \int_0^t\int_U\psi(u,s)\,\tilde\mu(du,ds) - \int_0^t\int_U\psi(u,s)\phi(u,s)\,\pi(du,ds)$$
is a square integrable $\mathcal F^P_t$-martingale under $P_\phi$.

Proof.
Note that, by Itô's formula, $\mathcal E_t(\phi)$ solves the following linear equation:
$$\mathcal E_t(\phi) = 1 + \int_0^t\int_U \mathcal E_{s-}(\phi)\cdot\phi(u,s)\,\tilde\mu(du,ds).$$
If we put $Z_t := \int_0^t\int_U\psi(u,s)\,\tilde\mu(du,ds)$, then
$$\langle Z,\mathcal E(\phi)\rangle_t = \int_0^t\int_U \mathcal E_{s-}(\phi)\cdot\psi(u,s)\phi(u,s)\,\pi(du,ds).$$
By the Meyer–Girsanov theorem (cf. [14], page 133, Theorem 36), we know that
$$t\mapsto Z_t - \int_0^t\frac{1}{\mathcal E_{s-}(\phi)}\,d\langle Z,\mathcal E(\phi)\rangle_s$$
is a square integrable $\mathcal F^P_t$-martingale under $P_\phi$. The result follows. □

We may now prove the following representation formula.
Theorem 4.4.
Let $F$ be a bounded random variable on $\Omega$. Then
$$-\log\mathbb E(e^{-F}) = \inf_{\phi\in\mathcal G}\mathbb E^{P_\phi}(F + L(\phi)),\tag{19}$$
where $P_\phi$ is defined by (18), and
$$L(\phi) := \int_0^1\int_U[(1+\phi(u,s))\log(1+\phi(u,s)) - \phi(u,s)]\,\pi(du,ds)\tag{20}$$
is well defined by (12). Moreover, the infimum is uniquely attained at some $\phi\in\mathcal G$.

Proof.
For any $\phi\in\mathcal G$, by Jensen's inequality, we have
$$-\log\mathbb E(e^{-F}) = -\log\mathbb E^{P_\phi}\big(e^{-F-\log(dP_\phi/dP)}\big) \le \mathbb E^{P_\phi}(F) + R(P_\phi\|P).$$
By (18) and (15), we have
$$R(P_\phi\|P) = \mathbb E^{P_\phi}\Big(\int_0^1\int_U\log(1+\phi(u,s))\,\tilde\mu(du,ds) + \int_0^1\int_U[\log(1+\phi(u,s))-\phi(u,s)]\,\pi(du,ds)\Big).$$
By Proposition 4.3, we know that
$$t\mapsto\int_0^t\int_U\log(1+\phi(u,s))\,\tilde\mu(du,ds) - \int_0^t\int_U\phi(u,s)\log(1+\phi(u,s))\,\pi(du,ds)$$
is a square integrable $\mathcal F^P_t$-martingale under $P_\phi$. Hence, by (12), (14) and Hölder's inequality,
$$R(P_\phi\|P) = \mathbb E^{P_\phi}\Big(\int_0^1\int_U[(1+\phi(u,s))\log(1+\phi(u,s))-\phi(u,s)]\,\pi(du,ds)\Big) \le C\,\mathbb E\Big(\mathcal E_1(\phi)\cdot\int_0^1\int_U|\phi(u,s)|^2\,\pi(du,ds)\Big) < +\infty.$$
Thus, the upper bound is obtained.
For the lower bound, by Proposition 4.2 (applied to $e^{-F}$), there exists a $\phi\in\mathcal G$ such that $e^{-F} = \mathbb E(e^{-F})\cdot\mathcal E_1(\phi)$. Thus, we have
$$-\log\mathbb E(e^{-F}) = \mathbb E^{P_\phi}(F) + R(P_\phi\|P) = \mathbb E^{P_\phi}(F + L(\phi)).$$
The uniqueness follows from the fact that, when the infimum is attained, Jensen's inequality becomes an equality. The proof is thus complete. □
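In the simplest concrete case the infimum in (19) can be computed by hand and checked numerically. Take $U$ a single atom of total mass $\lambda$ and $F=cN$ with $N=\mu(U\times(0,1])$, and restrict attention to constant controls $\phi\equiv\theta>-1$ (they suffice here: for this linear $F$, formula (16) yields a deterministic control). Under $P_\phi$, $N$ is Poisson with mean $(1+\theta)\lambda$ and $L(\phi)=[(1+\theta)\log(1+\theta)-\theta]\lambda$, so the minimum over $\theta$, attained at $1+\theta=e^{-c}$, equals $-\log\mathbb E(e^{-F})=\lambda(1-e^{-c})$. A sketch with toy constants of ours:

```python
import math

lam, c = 1.7, 0.8          # toy intensity mass and coefficient; F = c * N
target = lam * (1 - math.exp(-c))      # -log E[exp(-F)] for N ~ Poisson(lam)

def cost(theta):
    # E^{P_phi}(F) + L(phi) for the constant control phi = theta:
    # under P_phi, N is Poisson((1 + theta) * lam).
    return c * (1 + theta) * lam + ((1 + theta) * math.log(1 + theta) - theta) * lam

# Grid search over theta in (-1, 2] and the predicted minimizer 1+theta = e^{-c}.
best = min(cost(-0.99 + 0.0001 * k) for k in range(30000))
opt = cost(math.exp(-c) - 1)
```

`cost(theta) >= target` for every admissible `theta`, with equality exactly at the exponential tilt, mirroring the attainment statement of Theorem 4.4.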
We now turn to proving another representation, in the spirit of (1), for the right-hand side of (19), which was conjectured by Dupuis and Ellis [4], page 122. For the further discussion, we need to consider a noncompact, locally compact, connected complete metric space $(U,\rho)$, and assume that:

(H1) For each $\phi\in\mathcal G$, there exists a transformation, invertible with respect to $u$,
$$\gamma_\phi: U\times[0,1]\times\Omega\to U,\qquad U\ni u\mapsto\gamma_\phi(u,t,\omega)\in U,$$
such that:

(i) $\gamma_\phi,\gamma^{-1}_\phi\in\mathcal B(U)\times\mathcal P/\mathcal B(U)$;

(ii) $\nu\circ\gamma^{-1}_\phi = (1+\phi)\cdot\nu$; that is, for $(ds\times dP)$-almost all $(s,\omega)\in[0,1]\times\Omega$ and any bounded measurable function $f$ on $U$,
$$\int_U f(\gamma_\phi(u,s,\omega))\,\nu(du) = \int_U f(u)\cdot(1+\phi(u,s,\omega))\,\nu(du);$$

(iii) for each $t\in[0,1]$, $\gamma_{\phi_t}|_{[0,t]} = \gamma_\phi|_{[0,t]}$ and $\gamma^{-1}_{\phi_t}|_{[0,t]} = \gamma^{-1}_\phi|_{[0,t]}$, where $\phi_t := \phi\cdot 1_{[0,t]}$.

(H2) Let $\phi,\phi_n\in\mathcal G$ satisfy $-1<c_1\le\phi,\phi_n\le c_2$. If $\phi_n$ converges to $\phi$ in $L^2_{\mathcal P}$, then there is a subsequence $n_k$ (still denoted by $n$) such that for $(\pi\times P)$-almost all $(u,s,\omega)$,
$$\lim_{n\to\infty}\rho(\gamma_{\phi_n}(u,s,\omega),\gamma_\phi(u,s,\omega)) = \lim_{n\to\infty}\rho(\gamma^{-1}_{\phi_n}(u,s,\omega),\gamma^{-1}_\phi(u,s,\omega)) = 0,$$
where $\rho$ is the metric on $U$.
The invertibility is understood in the measure sense, that is,
$$\gamma_\phi(\gamma^{-1}_\phi(u,t,\omega),t,\omega) = \gamma^{-1}_\phi(\gamma_\phi(u,t,\omega),t,\omega) = u,\qquad \pi\times P\text{-a.s.}$$
However, by a suitable redefinition procedure, one may assume that the above identities hold for all $(u,t,\omega)\in U\times[0,1]\times\Omega$. In fact, for some $(dt\times P)$-null set $A\in\mathcal P$ and each $(t,\omega)\notin A$, there exists a $\nu$-null set $N_{(t,\omega)}\in\mathcal B(U)$ such that $\gamma_\phi(\cdot,t,\omega)$ is a one-to-one and onto mapping on $N^c_{(t,\omega)}$. Thus, $\gamma^+_\phi := \gamma_\phi$ and $\gamma^-_\phi := \gamma^{-1}_\phi$ may be redefined as follows:
$$\tilde\gamma^\pm_\phi(u,t,\omega) := \begin{cases}\gamma^\pm_\phi(u,t,\omega), & \text{if } (t,\omega)\in A^c \text{ and } u\in N^c_{(t,\omega)},\\ u, & \text{otherwise.}\end{cases}$$
In the sequel, we still use $\gamma^\pm_\phi$ to denote these redefinitions.
Fig. 1. Transformation: $\Gamma^\pm_\phi$.

The above constructed $\gamma^\pm_\phi$ induce predictable transformations $\Gamma^\pm_\phi$ on $\Omega$ as follows (see Figure 1):
$$\Gamma^\pm_{\phi(\omega)}(\omega) := (\gamma^\pm_\phi(\omega))_*(\omega),\tag{21}$$
where $(\gamma^\pm_\phi)_*(\omega)$ denotes the image measure of $\omega$ under the transformation $(u,t)\mapsto(\gamma^\pm_\phi(u,t,\omega),t)$. In particular, for each $\omega\in\Omega$,
$$\Gamma^+_{\phi(\omega)}(\Gamma^-_{\phi(\omega)}(\omega)) = \Gamma^-_{\phi(\omega)}(\Gamma^+_{\phi(\omega)}(\omega)) = \omega.\tag{22}$$
In what follows, we sometimes simply write $\Gamma^+_{\phi(\omega)}$ (resp. $\Gamma^-_{\phi(\omega)}$) as $\Gamma_\phi$ (resp. $\Gamma^{-1}_\phi$). The following Girsanov theorem can be found in [5], page 165.

Theorem 4.6.
Assume (H1). For any $\phi\in\mathcal G$, the mapping $\omega\mapsto\mu_{\Gamma_\phi(\omega)}$ is still a Poisson random measure under $P_\phi$ with the same intensity measure $\nu$, where $P_\phi$ is defined by (18). In particular, $P_\phi\circ(\Gamma_\phi)^{-1} = P$, where $P_\phi\circ(\Gamma_\phi)^{-1}$ denotes the image measure (distribution) of $\omega\mapsto\Gamma_\phi(\omega)$ under $P_\phi$.

We now prepare several lemmas for later use. The following lemma is direct.
Lemma 4.7.
Assume (H2). Then $P\circ(\Gamma^\pm_{\phi_n})^{-1}$ converges weakly to $P\circ(\Gamma^\pm_\phi)^{-1}$ as $n\to\infty$.

Proof.
It only needs to be proved that for $P$-almost all $\omega\in\Omega$, $\Gamma^\pm_{\phi_n}(\omega)$ converges to $\Gamma^\pm_\phi(\omega)$ with respect to the weak topology defined by (5); that is, for any $f\in C_c(U\times[0,1])$,
$$\langle f,\mu_{\Gamma^\pm_{\phi_n}(\omega)}\rangle \to \langle f,\mu_{\Gamma^\pm_\phi(\omega)}\rangle.$$
As in Remark 4.5, by (H2), we may assume that for $P$-almost all $\omega$ and all $(u,t)\in U\times[0,1]$,
$$\lim_{n\to\infty}\rho(\gamma^\pm_{\phi_n}(u,t,\omega),\gamma^\pm_\phi(u,t,\omega)) = 0.$$
Since $f$ has compact support in $U\times[0,1]$,
$$\langle f,\mu_{\Gamma^\pm_{\phi_n}(\omega)}\rangle = \sum_{(u,t)\in\mathrm{supp}(\omega)} f(\gamma^\pm_{\phi_n}(u,t,\omega),t) \to \sum_{(u,t)\in\mathrm{supp}(\omega)} f(\gamma^\pm_\phi(u,t,\omega),t) = \langle f,\mu_{\Gamma^\pm_\phi(\omega)}\rangle.$$
The result follows. □
We introduce the following subclasses of $\mathcal G$ and $\mathbf C$: for $-1<c_1\le 0$ and $c_2>0$, we write $\phi\in\mathcal G^{c_2}_{c_1}\subset\mathcal G$ or $\phi\in\mathbf C^{c_2}_{c_1}\subset\mathbf C$ if $c_1\le\phi\le c_2$. Notice that $\mathbf C^{c_2}_{c_1}\subset\mathcal G^{c_2}_{c_1}$ by (3).

Lemma 4.8.
Let $-1<c_1\le 0$ and $c_2>0$. For any $\phi\in\mathcal G^{c_2}_{c_1}$, there exists a sequence $\phi_n\in\mathbf C^{c_2}_{c_1}$ such that
$$\lim_{n\to\infty}\int_0^1\int_U\mathbb E|\phi_n(u,t)-\phi(u,t)|^2\,\pi(du,dt) = 0\tag{23}$$
and
$$\lim_{n\to\infty}\mathbb E|L(\phi_n)-L(\phi)| = 0.\tag{24}$$

Proof.
As in the construction in Lemma 2.2, it is easy to find the desired $\phi_n$. As for the limit (24), it follows from the construction of $\phi_n$ in Lemma 2.2, (12) and the dominated convergence theorem. □

Lemma 4.9.
Assume (H1). Let $g$ be a bounded $\mathcal F^P_t$-measurable function. Then for any $\phi\in\mathcal G$,
$$g(\Gamma^\pm_\phi(\omega)) = g(\Gamma^\pm_{\phi_t}(\omega)),\qquad P\text{-a.a. }\omega,$$
where $\phi_t = \phi\cdot 1_{[0,t]}$.

Proof.
By the monotone class theorem, it is enough to consider cylindrical functions $g$ of the following form:
$$g(\omega) = h(\langle f_1,\mu_\omega\rangle,\ldots,\langle f_n,\mu_\omega\rangle),\qquad h\in C^\infty_c(\mathbb R^n),\ f_i\in C_c(U\times[0,t]).$$
For $g$ of this type, the desired equality follows by direct calculation and (iii) of (H1). □

The following lemma is crucial for the proof of Theorem 4.11 below. The main idea comes from [1, 3] (see also [21]).
Lemma 4.10.
Assume (H1). Let $-1<c_1\le 0$ and $c_2>0$. For any $\phi\in\mathbf C^{c_2}_{c_1}$, there are $\tilde\phi,\hat\phi\in\mathbf C^{c_2}_{c_1}$ such that for any bounded random variable $F$ on $\Omega$,
$$\mathbb E^{P_{\tilde\phi}}(F+L(\tilde\phi)) = \mathbb E(F\circ\Gamma^{-1}_\phi + L(\phi)),\tag{25}$$
$$\mathbb E^{P_\phi}(F+L(\phi)) = \mathbb E(F\circ\Gamma^{-1}_{\hat\phi} + L(\hat\phi)),\tag{26}$$
where the functional $L$ is defined by (20). Moreover,
$$R(P\circ(\Gamma^{-1}_\phi)^{-1}\|P) = \mathbb E^{P_{\tilde\phi}}(L(\tilde\phi)) = \mathbb E L(\phi).\tag{27}$$

Proof.
Let $\phi\in\mathbf C^{c_2}_{c_1}$ have the form
$$\phi(u,t,\omega) := \sum_{i=0}^{n-1} 1_{(t_i,t_{i+1}]}(t)\cdot g_i(u,\omega),\qquad g_i\in\mathcal B(U)\times\mathcal F_{t_i}.$$
Let us construct $\tilde g_i$ as follows:
$$\tilde g_0(u,\omega) = g_0(u,\omega)$$
and, for $i=1,2,\ldots,n-1$,
$$\tilde g_i(u,\omega) = g_i(u,\Gamma_{\tilde\phi_i(\omega)}(\omega)),\qquad\text{where }\ \tilde\phi_i(u,t,\omega) := \sum_{j=0}^{i-1} 1_{(t_j,t_{j+1}]}(t)\cdot\tilde g_j(u,\omega).$$
Finally, we let
$$\tilde\phi(u,t,\omega) := \tilde\phi_n(u,t,\omega).$$
From the construction, it is clear that $\tilde\phi\in\mathbf C^{c_2}_{c_1}$. Moreover, it is not hard to verify by Lemma 4.9 and induction that $\tilde\phi$ satisfies
$$\tilde\phi(u,t,\omega) = \phi(u,t,\Gamma_{\tilde\phi(\omega)}(\omega)).\tag{28}$$
Similarly, one may construct $\hat\phi\in\mathbf C^{c_2}_{c_1}$ such that
$$\hat\phi(u,t,\omega) := \phi(u,t,\Gamma^{-1}_{\hat\phi(\omega)}(\omega)).$$
As above, by induction, Lemma 4.9 and (22), one can verify
$$\phi(u,t,\omega) = \hat\phi(u,t,\Gamma_{\phi(\omega)}(\omega)).\tag{29}$$
Now by Theorem 4.6, we have
$$P_{\tilde\phi}\circ(\Gamma_{\tilde\phi})^{-1} = P = P_\phi\circ(\Gamma_\phi)^{-1}.\tag{30}$$
Hence, we obtain by (22) and (28)
$$\mathbb E^{P_{\tilde\phi}}(F+L(\tilde\phi)) = \mathbb E^{P_{\tilde\phi}}\big(F(\Gamma^{-1}_{\phi(\Gamma_{\tilde\phi})}(\Gamma_{\tilde\phi}(\cdot))) + L(\phi(\Gamma_{\tilde\phi}))\big) = \mathbb E(F\circ\Gamma^{-1}_\phi + L(\phi)),$$
as well as by (22) and (29)
$$\mathbb E^{P_\phi}(F+L(\phi)) = \mathbb E^{P_\phi}\big(F(\Gamma^{-1}_{\hat\phi(\Gamma_\phi)}(\Gamma_\phi(\cdot))) + L(\hat\phi(\Gamma_\phi))\big) = \mathbb E(F\circ\Gamma^{-1}_{\hat\phi} + L(\hat\phi)).$$
Moreover, by (30), (28) and (22), we also have $P\circ(\Gamma^{-1}_\phi)^{-1} = P_{\tilde\phi}$, and so
$$R(P\circ(\Gamma^{-1}_\phi)^{-1}\|P) = R(P_{\tilde\phi}\|P) = \mathbb E^{P_{\tilde\phi}}(L(\tilde\phi)) = \mathbb E^{P_{\tilde\phi}}(L(\phi(\Gamma_{\tilde\phi}))) = \mathbb E L(\phi).$$
The proof is complete. □
We are now in a position to prove our main result in the present paper.
Theorem 4.11.
Assume that (H1) and (H2) hold. Let $F$ be any bounded random variable on $\Omega$. Then
$$-\log\mathbb E(e^{-F}) = \inf_{\phi\in\mathcal G}\mathbb E(F\circ\Gamma^{-1}_\phi + L(\phi)) = \inf_{\phi\in\mathbf C^\beta_\alpha}\mathbb E(F\circ\Gamma^{-1}_\phi + L(\phi)),$$
where $L(\phi)$ and $\Gamma^{-1}_\phi$ are defined by (20) and (21), respectively, and
$$\alpha := e^{-2\|F\|_\infty} - 1,\qquad \beta := -1 + e^{2\|F\|_\infty}.\tag{31}$$

Proof. (Upper bound) Let $\phi\in\mathcal G$. Then $\phi\in\mathcal G^{c_2}_{c_1}$ for some $c_1\in(-1,0]$ and $c_2>0$. Let $\phi_n\in\mathbf C^{c_2}_{c_1}$ be as in Lemma 4.8, and let $\tilde\phi_n\in\mathbf C^{c_2}_{c_1}$ be the corresponding processes constructed in Lemma 4.10. Then, by (19) and (25),
$$-\log\mathbb E(e^{-F}) \le \mathbb E^{P_{\tilde\phi_n}}(F+L(\tilde\phi_n)) = \mathbb E(F\circ\Gamma^{-1}_{\phi_n} + L(\phi_n)).\tag{32}$$
Noting that by (27) and (24),
$$\sup_n R(P\circ(\Gamma^{-1}_{\phi_n})^{-1}\|P) = \sup_n\mathbb E L(\phi_n) < +\infty,$$
we have, by Lemma 4.7 and (ii) of Lemma 2.5,
$$\lim_{n\to\infty}\mathbb E(F\circ\Gamma^{-1}_{\phi_n}) = \mathbb E(F\circ\Gamma^{-1}_\phi).$$
Hence, by (32) and (24),
$$-\log\mathbb E(e^{-F}) \le \mathbb E(F\circ\Gamma^{-1}_\phi + L(\phi)),$$
which gives the upper bound. Moreover, by the lower semi-continuity of $R(\cdot\|P)$ (cf. [4], Lemma 1.4.3), we also have
$$R(P\circ(\Gamma^{-1}_\phi)^{-1}\|P) \le \varliminf_{n\to\infty} R(P\circ(\Gamma^{-1}_{\phi_n})^{-1}\|P) = \lim_{n\to\infty}\mathbb E L(\phi_n) = \mathbb E L(\phi)\tag{33}$$
for all $\phi\in\mathcal G$.

(Lower bound) We divide the proof into two steps.

(Step 1) First of all, let $F\in\mathcal C$ have the following form:
$$F(\omega) = g(\langle f_1,\mu_\omega\rangle,\ldots,\langle f_n,\mu_\omega\rangle),\qquad g\in C^\infty_c(\mathbb R^n),\ f_i\in C_c(U\times[0,1]).$$
Then, by (6) and a simple calculation, we have
$$|D_{(u,t)}e^{-F}(\omega)| = |e^{-F(\varepsilon^+_{(u,t)}\omega)} - e^{-F(\omega)}| \le C\sum_{i=1}^n |f_i(u,t)|,\tag{34}$$
where $C$ is independent of $(u,t,\omega)$. Set
$$\phi(u,t) := \frac{{}^pD_{(u,t)}e^{-F}}{\mathbb E(e^{-F}|\mathcal F_{t-})}.$$
It is clear by Proposition 4.2 that $\phi\in\mathcal G^\beta_\alpha$, where $\alpha,\beta$ are given by (31). Let $\phi_n\in\mathbf C^\beta_\alpha$ be as in Lemma 4.8. By (34) and the construction of $\phi_n$, there exists a $U_0\subset U$ with $\nu(U_0)<+\infty$ such that for all $n\in\mathbb N$,
$$|\phi_n(u,t,\omega)| \le C\cdot 1_{U_0}(u),\qquad \pi\times P\text{-a.e.}\tag{35}$$
By the limits (23), (24) and extracting a subsequence if necessary, we may further assume that
$$\phi_n\to\phi,\qquad \pi\times P\text{-a.e.}$$
and
$$L(\phi_n)\to L(\phi),\qquad \tilde\mu(\phi_n)\to\tilde\mu(\phi),\qquad P\text{-a.e.},$$
where $\tilde\mu(\phi):=\int_0^1\int_U\phi\,d\tilde\mu$. By (35) and the dominated convergence theorem, we have
$$\mathbb E^{P_{\phi_n}}(F+L(\phi_n)) \to \mathbb E^{P_\phi}(F+L(\phi))\qquad\text{as } n\to\infty.\tag{36}$$
Moreover, by Proposition 4.2, we have $e^{-F} = \mathbb E(e^{-F})\,\mathcal E_1(\phi)$. So, by (36) and (26), we have for any $\varepsilon>0$ and $n$ large enough,
$$-\log\mathbb E(e^{-F}) = \mathbb E^{P_\phi}(F) + R(P_\phi\|P) = \mathbb E^{P_\phi}(F+L(\phi)) \ge \mathbb E^{P_{\phi_n}}(F+L(\phi_n)) - \varepsilon = \mathbb E(F\circ\Gamma^{-1}_{\hat\phi_n} + L(\hat\phi_n)) - \varepsilon.$$
The lower bound now follows since $\hat\phi_n\in\mathbf C^\beta_\alpha$ (see Lemma 4.10).

(Step 2) For any bounded random variable $F$ on $(\Omega,\mathcal F)$, by Lemma 2.6, there exists a sequence $F_n\in\mathcal C$ such that
$$\sup_n\|F_n\|_\infty \le \|F\|_\infty\tag{37}$$
and
$$\lim_{n\to\infty} F_n = F,\qquad P\text{-a.s.}$$
For any $\varepsilon>0$ and each $F_n$, by Step 1 and (37), there exists a $\phi_n\in\mathbf C^\beta_\alpha$, where $\alpha,\beta$ are given by (31), such that
$$-\log\mathbb E(e^{-F_n}) \ge \mathbb E(F_n\circ\Gamma^{-1}_{\phi_n} + L(\phi_n)) - \varepsilon.\tag{38}$$
In view of (33), (38) and (37), we have
$$\sup_n R(P\circ(\Gamma^{-1}_{\phi_n})^{-1}\|P) \le \sup_n\mathbb E L(\phi_n) < +\infty.$$
Therefore, by (i) of Lemma 2.5,
$$\lim_{n\to\infty}\mathbb E|F_n\circ\Gamma^{-1}_{\phi_n} - F\circ\Gamma^{-1}_{\phi_n}| = 0.$$
Applying the dominated convergence theorem to the left-hand side of (38) gives, for sufficiently large $n$,
$$-\log\mathbb E(e^{-F}) \ge \mathbb E(F\circ\Gamma^{-1}_{\phi_n} + L(\phi_n)) - \varepsilon.$$
Since $\phi_n\in\mathbf C^\beta_\alpha$ and $\varepsilon$ is arbitrary, we thus complete the proof of the lower bound. □
By the same argument as in the proof of [1], Theorem 5.1, the $F$ in Theorem 4.11 can be any random variable bounded from above.
5. (H1)–(H2) and mass transportation problem.
In this section we give a more concrete description of (H1)–(H2). Let $(U,\rho)$ be a locally compact complete metric space, and $\nu$ a $\sigma$-finite and infinite measure on $(U,\mathcal B(U))$. Let $\mathcal U$ be the set of all positive measurable functions on $U$ bounded from above and from below.

Question.
Under what constraints does there exist, for each $\phi\in\mathcal{U}$, a unique invertible measurable transformation $\gamma_\phi$ on $U$ such that $\nu\circ\gamma^{-1}_\phi = \phi\cdot\nu$, that is,

$$\int_U f(\gamma_\phi(u))\,\nu(du) = \int_U f(u)\phi(u)\,\nu(du), \qquad \forall f\in C_c(U)? \qquad (39)$$

Moreover, for $0 < C_1 \le \phi,\phi_n \le C_2$, if $\phi_n$ converges $\nu$-a.e. to $\phi$, does it hold that

$$\lim_{n\to\infty}\rho\bigl(\gamma_{\phi_n}(u),\gamma_\phi(u)\bigr) = \lim_{n\to\infty}\rho\bigl(\gamma^{-1}_{\phi_n}(u),\gamma^{-1}_\phi(u)\bigr) = 0, \qquad \nu\text{-a.a. } u? \qquad (40)$$

Obviously, if this question has a solution, then (H1) and (H2) are satisfied. We remark that the required predictability follows from the continuous dependence (40) on $\phi$. In the classical problem of optimal mass transportation, the constraint is given by minimizing the following cost functional (cf. [20]):

$$\inf_{\gamma_\phi}\int_U c\bigl(\rho(u,\gamma_\phi(u))\bigr)\,\nu(du),$$

where $c$ is a convex function on $\mathbb{R}_+$.

Let us look at the case $U=\mathbb{R}^d$ and $\nu(dx)=\theta(x)\,dx$. It is clear that (39) can be reduced to

$$\theta\bigl(\gamma^{-1}_\phi(x)\bigr)\det\bigl(\nabla\gamma^{-1}_\phi(x)\bigr) = \phi(x)\theta(x),$$

where $\nabla$ denotes the gradient. If we further require that $\gamma^{-1}_\phi(x)$ be the gradient of some strictly convex function $h(x)$, then we need to solve the following classical Monge–Ampère equation:

$$\theta(\nabla h(x))\det\bigl(\nabla^2 h(x)\bigr) = \phi(x)\theta(x).$$
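Spelling out the reduction (a routine change of variables, assuming for this sketch that $\gamma^{-1}_\phi$ is a diffeomorphism with positive Jacobian): substituting $x = \gamma_\phi(u)$ in the left-hand side of (39) with $\nu(du) = \theta(u)\,du$ gives

```latex
\int_{\mathbb{R}^d} f(\gamma_\phi(u))\,\theta(u)\,du
   = \int_{\mathbb{R}^d} f(x)\,\theta\bigl(\gamma_\phi^{-1}(x)\bigr)
        \det\bigl(\nabla\gamma_\phi^{-1}(x)\bigr)\,dx
   \overset{!}{=} \int_{\mathbb{R}^d} f(x)\,\phi(x)\,\theta(x)\,dx ,
```

and, since $f\in C_c(\mathbb{R}^d)$ is arbitrary, the pointwise identity $\theta(\gamma^{-1}_\phi)\det(\nabla\gamma^{-1}_\phi) = \phi\,\theta$ a.e.; imposing $\gamma^{-1}_\phi = \nabla h$ then turns this into the Monge–Ampère equation displayed above.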
For the Monge–Ampère equation there is an extensive literature; see, for example, [20] and the references therein. Since our problem is less constrained, we can find an easy solution when $U=\mathbb{R}^d$.

Let us first consider the one-dimensional case. Let $\nu$ have full support and no atoms. There are three possibilities:

(1) $\nu([0,+\infty)) = \nu((-\infty,0)) = +\infty$;
(2) $\nu([0,+\infty)) = +\infty$, $\nu((-\infty,0)) < +\infty$;
(3) $\nu([0,+\infty)) < +\infty$, $\nu((-\infty,0)) = +\infty$.

It suffices to consider the first case; the others are analogous. Let $\phi\in\mathcal{U}$. In the first case, note that for $x\ge 0$,

$$\Phi_+(x) := \int_0^x \phi(u)\,\nu(du) \quad\text{and}\quad \Phi_-(x) := \int_{-x}^0 \phi(u)\,\nu(du)$$

are strictly increasing continuous functions on $[0,+\infty)$, and $\Phi_\pm(+\infty)=+\infty$. Define for $x\ge 0$

$$\gamma_\phi(x) := \Phi_+^{-1}\bigl(\nu([0,x])\bigr), \qquad \gamma_\phi(-x) := -\Phi_-^{-1}\bigl(\nu([-x,0])\bigr).$$

It is clear that $\gamma_\phi$ is an invertible continuous transformation of $\mathbb{R}$, and for any $a<b$,

$$\nu([a,b]) = \int_{\gamma_\phi(a)}^{\gamma_\phi(b)} \phi(u)\,\nu(du), \qquad (41)$$

which means $\nu\circ\gamma^{-1}_\phi = \phi\cdot\nu$.

We now verify (40). For $0 < C_1 \le \phi,\phi_n \le C_2$, assume that $\phi_n$ converges $\nu$-a.e. to $\phi$. Noticing that by (41)

$$\int_0^{\gamma_{\phi_n}(x)} \phi_n(u)\,\nu(du) = \int_0^{\gamma_\phi(x)} \phi(u)\,\nu(du),$$

we have by the dominated convergence theorem

$$\left|\int_{\gamma_\phi(x)}^{\gamma_{\phi_n}(x)} \nu(du)\right| \le \frac{1}{C_1}\left|\int_{\gamma_\phi(x)}^{\gamma_{\phi_n}(x)} \phi_n(u)\,\nu(du)\right| = \frac{1}{C_1}\left|\int_0^{\gamma_\phi(x)} \bigl(\phi_n(u)-\phi(u)\bigr)\,\nu(du)\right| \to 0.$$

Since $\nu$ has full support in $\mathbb{R}$, it follows that

$$\lim_{n\to\infty}\bigl|\gamma_{\phi_n}(x) - \gamma_\phi(x)\bigr| = 0.$$

Similarly, since $\lim_{n\to\infty}\bigl|\int_0^{\gamma^{-1}_\phi(x)} (\phi_n(u)-\phi(u))\,\nu(du)\bigr| = 0$, we obtain

$$\lim_{n\to\infty}\bigl|\gamma^{-1}_{\phi_n}(x) - \gamma^{-1}_\phi(x)\bigr| = 0.$$

For the multi-dimensional case, we assume that $\nu$ has full support and no charge on any $(d-1)$-dimensional subspace. For $x_i\in\mathbb{R}\setminus\{0\}$, $i=2,\dots,d$, let $x_i^+ = x_i\vee 0$ and $x_i^- = x_i\wedge 0$. There are again three possibilities:

(1′) $\int_0^\infty\int_{x_2^-}^{x_2^+}\cdots\int_{x_d^-}^{x_d^+} d\nu = \infty$, $\int_{-\infty}^0\int_{x_2^-}^{x_2^+}\cdots\int_{x_d^-}^{x_d^+} d\nu < \infty$;
(2′) $\int_0^\infty\int_{x_2^-}^{x_2^+}\cdots\int_{x_d^-}^{x_d^+} d\nu < \infty$, $\int_{-\infty}^0\int_{x_2^-}^{x_2^+}\cdots\int_{x_d^-}^{x_d^+} d\nu = \infty$;
(3′) $\int_0^\infty\int_{x_2^-}^{x_2^+}\cdots\int_{x_d^-}^{x_d^+} d\nu = \infty$, $\int_{-\infty}^0\int_{x_2^-}^{x_2^+}\cdots\int_{x_d^-}^{x_d^+} d\nu = \infty$.

Remark 5.1.
Let $\theta \ge c > 0$ on $\mathbb{R}^d$. If $\nu(dx) = \theta(x)\,dx$, then (3′) holds.

We consider the first case; the others are analogous. Without loss of generality, we assume $d=2$ and fix a $\phi\in\mathcal{U}$. For $x_1,x_2\in\mathbb{R}$ with $x_2\neq 0$, let $\alpha_\phi(x_1,x_2)$ and $\beta_\phi(x_1,x_2)$ be the unique elements of $\mathbb{R}$ such that

$$\int_{-\infty}^{x_1}\int_{x_2^-}^{x_2^+} d\nu = \int_{-\infty}^{\alpha_\phi(x_1,x_2)}\int_{x_2^-}^{x_2^+} \phi\,d\nu$$

and

$$\int_{-\infty}^{\beta_\phi(x_1,x_2)}\int_{x_2^-}^{x_2^+} d\nu = \int_{-\infty}^{x_1}\int_{x_2^-}^{x_2^+} \phi\,d\nu.$$

For $x_2=0$, set $\alpha_\phi(x_1,0) = x_1 = \beta_\phi(x_1,0)$. Under (1′), $\alpha_\phi$ and $\beta_\phi$ are well-defined functions on $\mathbb{R}\times\mathbb{R}$, and $\alpha_\phi(\infty,x_2) = \beta_\phi(\infty,x_2) = \infty$. Thus, we may define for $(x_1,x_2)\in\mathbb{R}^2$

$$\gamma_\phi(x_1,x_2) = \bigl(\alpha_\phi(x_1,x_2),x_2\bigr) \quad\text{and}\quad \gamma^{-1}_\phi(x_1,x_2) = \bigl(\beta_\phi(x_1,x_2),x_2\bigr).$$

It is easy to see that

$$\alpha_\phi\bigl(\beta_\phi(x_1,x_2),x_2\bigr) = \beta_\phi\bigl(\alpha_\phi(x_1,x_2),x_2\bigr) = x_1$$

and

$$\gamma_\phi\circ\gamma^{-1}_\phi(x_1,x_2) = \gamma^{-1}_\phi\circ\gamma_\phi(x_1,x_2) = (x_1,x_2).$$

Let $0 < C_1 \le \phi,\phi_n \le C_2$ and let $\phi_n$ converge $\nu$-a.e. to $\phi$. As in the one-dimensional case, one can prove

$$\lim_{n\to\infty}\bigl|\alpha_{\phi_n}(x_1,x_2) - \alpha_\phi(x_1,x_2)\bigr| = 0 \quad\text{and}\quad \lim_{n\to\infty}\bigl|\beta_{\phi_n}(x_1,x_2) - \beta_\phi(x_1,x_2)\bigr| = 0.$$

Hence,

$$\lim_{n\to\infty}\bigl|\gamma_{\phi_n}(x_1,x_2) - \gamma_\phi(x_1,x_2)\bigr| = 0 \quad\text{and}\quad \lim_{n\to\infty}\bigl|\gamma^{-1}_{\phi_n}(x_1,x_2) - \gamma^{-1}_\phi(x_1,x_2)\bigr| = 0.$$

Summarizing the above discussion, we obtain the following result from Theorem 4.11 and Remark 4.12 when $U = \mathbb{R}^d$.

Theorem 5.2.
Let $\nu$ be a $\sigma$-finite and infinite measure on $\mathbb{R}^d$ with full support and without charge on any $(d-1)$-dimensional subspace. Assume that one of (1′), (2′) and (3′) holds. Then, for any random variable $F$ on $\Omega$ bounded from above,

$$-\log E(e^{-F}) = \inf_{\phi\in\mathcal{G}} E\bigl(F\circ\Gamma^{-1}_\phi + L(\phi)\bigr),$$

where $L(\phi)$ and $\Gamma^{-1}_\phi$ are defined by (20) and (21), respectively.

Acknowledgments.
The author would like to thank Professor Benjamin Goldys for providing an excellent working environment at the University of New South Wales. He is also very grateful to Professor Jiagang Ren for his valuable suggestions.

REFERENCES

[1] Boué, M. and Dupuis, P. (1998). A variational representation for certain functionals of Brownian motion. Ann. Probab.
[2] Boué, M., Dupuis, P. and Ellis, R. S. (2000). Large deviations for small noise diffusions with discontinuous statistics. Probab. Theory Related Fields.
[3] Budhiraja, A. and Dupuis, P. (2000). A variational representation for positive functionals of infinite dimensional Brownian motion. Probab. Math. Statist.
[4] Dupuis, P. and Ellis, R. S. (1997). A Weak Convergence Approach to the Theory of Large Deviations. Wiley, New York. MR1431744
[5] Bichteler, K. (2002). Stochastic Integration with Jumps. Encyclopedia of Mathematics and Its Applications. Cambridge Univ. Press, Cambridge. MR1906715
[6] Cohn, D. L. (1980). Measure Theory. Birkhäuser, Boston, MA. MR578344
[7] Jacod, J. (1974/75). Multivariate point processes: Predictable projection, Radon–Nikodým derivatives, representation of martingales. Z. Wahrsch. Verw. Gebiete.
[8] Løkka, A. (2004). Martingale representation of functionals of Lévy processes. Stoch. Anal. Appl.
[9] Nualart, D. and Schoutens, W. (2000). Chaotic and predictable representations for Lévy processes. Stochastic Process. Appl.
[10] Parthasarathy, K. R. (1967). Probability Measures on Metric Spaces. Academic Press, New York. MR0226684
[11] Picard, J. (1996). On the existence of smooth densities for jump processes. Probab. Theory Related Fields.
[12] Picard, J. (1996). Formules de dualité sur l'espace de Poisson. Ann. Inst. H. Poincaré Probab. Statist.
[13] Prat, J.-J. and Privault, N. (1999). Explicit stochastic analysis of Brownian motion and point measures on Riemannian manifolds. J. Funct. Anal.
[14] Protter, P. E. (2004). Stochastic Integration and Differential Equations, 2nd ed. Springer, Berlin. MR2020294
[15] Pugachev, O. V. (2002). The space of simple configurations is a Polish space. Mat. Zametki.
[16] Ren, J. and Zhang, X. (2005). Schilder theorem for the Brownian motion on the diffeomorphism group of the circle. J. Funct. Anal.
[17] Ren, J. and Zhang, X. (2005). Freidlin–Wentzell's large deviations for homeomorphism flows of non-Lipschitz SDEs. Bull. Sci. Math.
[18] Ren, J. and Zhang, X. (2008). Freidlin–Wentzell's large deviations for stochastic evolution equations. J. Funct. Anal.
[19] Revuz, D. and Yor, M. (1994). Continuous Martingales and Brownian Motion, 2nd ed. Grundlehren der Mathematischen Wissenschaften. Springer, Berlin. MR1303781
[20] Villani, C. (2003). Topics in Optimal Transportation. Graduate Studies in Mathematics. Amer. Math. Soc., Providence, RI. MR1964483
[21] Zhang, X. (2009). A variational representation for random functionals on abstract Wiener spaces. Preprint.