Quadratic Transportation Cost Inequalities Under Uniform Distance For Stochastic Reaction Diffusion Equations Driven by Multiplicative Space-Time White Noise
Shijie Shang and Tusheng Zhang

May 1, 2019
Abstract
In this paper, we establish a quadratic transportation cost inequality for solutions of stochastic reaction diffusion equations driven by multiplicative space-time white noise. It is based on a new inequality, of independent interest, that we prove for the moments (under the uniform norm) of the stochastic convolution with respect to space-time white noise. The solutions of such stochastic partial differential equations are typically not semimartingales on the state space.
Keywords and Phrases:
Stochastic partial differential equations, reaction diffusion equations, transportation cost inequalities, concentration of measure, moment estimates for stochastic convolutions
AMS Subject Classification:
Primary 60H15; Secondary 93E20, 35R60.
1 Introduction

Let $(X,d)$ be a metric space with a Borel probability measure $\mu$. For a measurable subset $A \subset X$ and $r > 0$, we denote by $A_r$ the $r$-neighborhood of $A$, namely $A_r = \{x : d(x,A) < r\}$. We say that $\mu$ has normal concentration on $(X,d)$ if there are constants $C, c > 0$ such that, for every $r > 0$ and every $A$ with $\mu(A) \ge \frac{1}{2}$,
\[
1 - \mu(A_r) \le C e^{-cr^2}. \tag{1.1}
\]
It is well known that Gaussian measures on $\mathbb{R}^d$ and uniform measures on the spheres $S^d$ have normal concentration. In the past decades, many people have established normal concentration properties for various kinds of interesting measures. We mention the celebrated works of M. Talagrand [T1], [T2] and [T3], and we refer the reader to the monograph [L] for a nice exposition of the concentration of measure phenomenon. It turns out that the concentration of measure phenomenon has close connections with entropy and functional inequalities, e.g. Poincaré inequalities, logarithmic Sobolev inequalities and transportation cost inequalities. In particular, transportation cost inequalities imply normal concentration; an elegant, simple proof of this fact can be found in [L].

(Shijie Shang: School of Mathematics, University of Science and Technology of China, Hefei, China. Tusheng Zhang: School of Mathematics, University of Manchester, Oxford Road, Manchester M13 9PL, England, U.K. Email: [email protected])

In earlier work, the quadratic transportation cost inequality was obtained under the $L^2$-distance for stochastic reaction diffusion equations driven by multiplicative space-time white noise; under the uniform distance, however, it was only obtained for stochastic reaction diffusion equations driven by additive space-time white noise. As is well known, one of the essential differences between SPDEs driven by colored noise and SPDEs driven by space-time white noise is that the solution of the latter is not a semimartingale, and therefore in particular the Itô formula cannot be used.

The aim of this paper is to prove that, under the uniform distance, the quadratic transportation cost inequality holds for stochastic reaction diffusion equations driven by multiplicative space-time white noise. Our new contribution is the $p$-th moment inequality under the uniform norm that we obtain for the stochastic convolution with respect to space-time white noise, which is of independent interest.
The significance of the inequality is that it allows the order $p$ of the moment to be any positive number, not only sufficiently large ones. These new estimates allow us to establish the quadratic transportation cost inequality under the uniform norm.

The rest of the paper is organized as follows. In Section 2, we recall the notions of measure concentration and transportation cost inequalities and present the framework for stochastic reaction diffusion equations. Section 3 is devoted to the proof of the new moment estimates for stochastic convolutions with respect to space-time white noise under the uniform norm. In Section 4, we prove the quadratic transportation cost inequality.

2 Preliminaries

In this section, we will recall several results on measure concentration from the monograph [L] and set up the framework of the stochastic reaction diffusion equations driven by space-time white noise.

Let $(X,d,\mu)$ be a metric space with a Borel probability measure $\mu$. The concentration function $\alpha_\mu(r)$ is defined as
\[
\alpha_\mu(r) := \sup\Big\{ 1 - \mu(A_r) : A \subset X,\ \mu(A) \ge \tfrac{1}{2} \Big\}, \quad r > 0.
\]
Normal concentration of $\mu$ means that $\alpha_\mu(r) \le Ce^{-cr^2}$ for all $r > 0$ and some constants $C, c > 0$.

Let $\mu$, $\nu$ be two Borel probability measures on the metric space $(X,d)$. Consider the Wasserstein distance
\[
W_2(\nu,\mu) := \Big[ \inf_{\pi} \int_X \int_X d(x,y)^2 \, \pi(dx,dy) \Big]^{1/2}
\]
between $\mu$ and $\nu$, where the infimum is taken over all probability measures $\pi$ on the product space $X \times X$ with marginals $\mu$ and $\nu$. Recall that the relative entropy of $\nu$ with respect to $\mu$ is defined by
\[
H(\nu \mid \mu) := \int_X \log\Big(\frac{d\nu}{d\mu}\Big)\, d\nu
\]
if $\nu$ is absolutely continuous with respect to $\mu$, and $+\infty$ otherwise. We say that the measure $\mu$ satisfies a quadratic transportation cost inequality if there exists a constant $C > 0$ such that for all probability measures $\nu$,
\[
W_2(\nu,\mu) \le C\sqrt{H(\nu \mid \mu)}. \tag{2.1}
\]
The following result is taken from [L].

Proposition 2.1 If $\mu$ satisfies a quadratic transportation cost inequality, then $\mu$ has normal concentration.

Remark 2.2
The notion of the concentration of measure phenomenon depends on the underlying topology of the associated metric space. The stronger the topology, the stronger the concentration.
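For intuition about inequality (2.1), here is a numerical aside that is not part of the paper's argument. For one-dimensional Gaussian measures both $W_2$ and the relative entropy have closed forms, and Talagrand's inequality states that the standard Gaussian satisfies (2.1) with $C = \sqrt{2}$. The following Python sketch (the helper names are ours) checks this on a small grid using the standard closed-form expressions.

```python
import math

def w2_gaussian(m1, s1, m2, s2):
    # Closed-form 2-Wasserstein distance between N(m1, s1^2) and N(m2, s2^2).
    return math.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)

def kl_gaussian(m, s):
    # Relative entropy H(N(m, s^2) | N(0, 1)).
    return 0.5 * (s * s + m * m - 1.0 - math.log(s * s))

# Check Talagrand's inequality W_2 <= sqrt(2 H) against the standard Gaussian.
for m in [0.0, 0.5, -1.0, 2.0]:
    for s in [0.5, 1.0, 1.7]:
        assert w2_gaussian(m, s, 0.0, 1.0) <= math.sqrt(2.0 * kl_gaussian(m, s)) + 1e-12
print("Talagrand inequality verified on the grid")
```

The grid check succeeds because $\ln s \le s - 1$ for all $s > 0$; equality in (2.1) with $C=\sqrt 2$ is attained by pure translations ($s = 1$).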
Before ending this section, let us recall the setup for the stochastic reaction diffusion equations driven by space-time white noise. Consider the following equation:
\[
\begin{cases}
du(t,x) = \frac{1}{2}u''(t,x)\,dt + b(u(t,x))\,dt + \sigma(u(t,x))\,W(dt,dx), & x \in (0,1),\\
u(t,0) = u(t,1) = 0, & t > 0,\\
u(0,x) = u_0(x), & x \in (0,1),
\end{cases} \tag{2.2}
\]
where $u_0 \in C_0(0,1)$ and $W(dt,dx)$ is a space-time white noise on some filtered probability space $(\Omega, \mathcal F, \mathcal F_t, P)$; here $\{\mathcal F_t\}_{t\ge 0}$ is the filtration generated by the Brownian sheet $\{W(t,x);\ (t,x) \in [0,\infty)\times[0,1]\}$. The coefficients $b(\cdot), \sigma(\cdot): \mathbb{R} \to \mathbb{R}$ are deterministic measurable functions. We say that an adapted, continuous random field $\{u(t,x) : (t,x) \in \mathbb{R}_+\times[0,1]\}$ is a solution to the stochastic partial differential equation (SPDE) (2.2) if for every $t \ge 0$,
\[
\begin{aligned}
\int_0^1 u(t,x)\phi(x)\,dx = {}& \int_0^1 u_0(x)\phi(x)\,dx + \frac{1}{2}\int_0^t ds \int_0^1 u(s,x)\phi''(x)\,dx\\
&+ \int_0^t ds \int_0^1 b(u(s,x))\phi(x)\,dx + \int_0^t\!\!\int_0^1 \sigma(u(s,x))\phi(x)\,W(ds,dx), \quad P\text{-a.s.,}
\end{aligned} \tag{2.3}
\]
for any $\phi \in C_0^\infty(0,1)$. It is well known that $u$ is a solution to SPDE (2.2) if and only if $u$ satisfies the following integral equation:
\[
\begin{aligned}
u(t,x) = {}& P_t u_0(x) + \int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,b(u(s,y))\,ds\,dy\\
&+ \int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(u(s,y))\,W(ds,dy),
\end{aligned} \tag{2.4}
\]
where $P_t$, $t\ge 0$, and $p_t(x,y)$ are the semigroup and the heat kernel associated with the operator $\frac{1}{2}\Delta$ equipped with the Dirichlet boundary condition on the interval $[0,1]$.

We introduce the following hypotheses.

(H.1) There exists a constant $L_b$ such that for all $x, y \in \mathbb{R}$,
\[
|b(x)| \le L_b(1+|x|), \qquad |b(x)-b(y)| \le L_b|x-y|. \tag{2.5}
\]
(H.2) There exist constants $K_\sigma$ and $L_\sigma$ such that for all $x, y \in \mathbb{R}$,
\[
|\sigma(x)| \le K_\sigma, \qquad |\sigma(x)-\sigma(y)| \le L_\sigma|x-y|. \tag{2.6}
\]
It is well known (see [W]) that under the hypotheses (H.1) and (H.2), SPDE (2.2) admits a unique random field solution $u(t,x)$. In fact, for existence and uniqueness the diffusion coefficient $\sigma(\cdot)$ need not be bounded; the stronger assumption (H.2) is needed for proving the transportation cost inequality.

3 Moment estimates for stochastic convolutions

In this section, we will establish some moment estimates for the stochastic convolution against space-time white noise. Of particular interest are the estimates of the moments of lower order. These bounds will be used later in the paper.

Proposition 3.1
Let $\{\sigma(s,y) : (s,y) \in \mathbb{R}_+\times[0,1]\}$ be a random field such that the stochastic integral against space-time white noise is well defined. Then for any $T > 0$ and $p > 10$, there exists a constant $C_{T,p} > 0$ such that
\[
E\Big[\sup_{(t,x)\in[0,T]\times[0,1]}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(s,y)\,W(ds,dy)\Big|^p\Big] \le C_{T,p}\int_0^T \sup_{y\in[0,1]} E|\sigma(s,y)|^p\,ds. \tag{3.1}
\]

Remark 3.2
The constant $C_{T,p}$ in (3.1) can be bounded as
\[
C_{T,p} \le p^{\frac p2}\, T^{\frac{p-6}{4}}\Big(\frac{2}{\pi}\Big)^{p}\Big(\frac{1}{\sqrt{2\pi}}\Big)^{\frac p2+1}\Big(\frac{8(p-1)}{p-10}\Big)^{p-1}\Big(\frac{4(p-2)}{p-10}\Big)^{\frac{p-2}{2}}. \tag{3.2}
\]

Proof. Obviously, we can assume that the right-hand side of (3.1) is finite. We employ the factorization method. Choose $\alpha$ such that $\frac{3}{2p} < \alpha < \frac14 - \frac1p$. This is possible because $p > 10$. Let
\[
(J_\alpha\sigma)(s,y) := \int_0^s\!\!\int_0^1 (s-r)^{-\alpha}\,p_{s-r}(y,z)\,\sigma(r,z)\,W(dr,dz), \tag{3.3}
\]
\[
(J^{\alpha-1}f)(t,x) := \frac{\sin\pi\alpha}{\pi}\int_0^t\!\!\int_0^1 (t-s)^{\alpha-1}\,p_{t-s}(x,y)\,f(s,y)\,ds\,dy. \tag{3.4}
\]
By the stochastic Fubini theorem (see Theorem 2.6 in [W]), for any $(t,x)\in\mathbb{R}_+\times[0,1]$,
\[
\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(s,y)\,W(ds,dy) = J^{\alpha-1}(J_\alpha\sigma)(t,x). \tag{3.5}
\]
Therefore
\[
\sup_{(t,x)\in[0,T]\times[0,1]}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(s,y)\,W(ds,dy)\Big| = \sup_{(t,x)\in[0,T]\times[0,1]}\big|J^{\alpha-1}(J_\alpha\sigma)(t,x)\big|, \quad P\text{-a.s.} \tag{3.6}
\]
Recall the well-known inequality
\[
0 \le p_t(x,y) \le \frac{1}{\sqrt{2\pi t}}\exp\Big(-\frac{(x-y)^2}{2t}\Big), \quad \forall\, x,y\in[0,1]. \tag{3.7}
\]
A straightforward calculation gives
\[
\int_0^1 p_t(x,y)\,dy \le 1, \tag{3.8}
\]
\[
\int_0^1 p_t(x,y)^2\,dy \le \sup_{y\in[0,1]} p_t(x,y)\times\int_0^1 p_t(x,y)\,dy \le C_1\,t^{-\frac12}, \qquad C_1 := \frac{1}{\sqrt{2\pi}}. \tag{3.9}
\]
By Hölder's inequality, (3.8) and (3.9), we have
\[
\begin{aligned}
&E\Big[\sup_{(t,x)\in[0,T]\times[0,1]}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(s,y)\,W(ds,dy)\Big|^p\Big]\\
&\quad= E\Big[\sup_{(t,x)}\Big|\frac{\sin\pi\alpha}{\pi}\int_0^t\!\!\int_0^1 (t-s)^{\alpha-1}\,p_{t-s}(x,y)\,(J_\alpha\sigma)(s,y)\,ds\,dy\Big|^p\Big]\\
&\quad\le \Big|\frac{\sin\pi\alpha}{\pi}\Big|^p\, E\Big[\sup_{(t,x)}\Big\{\int_0^t (t-s)^{\alpha-1}\Big(\int_0^1 p_{t-s}(x,y)\,|(J_\alpha\sigma)(s,y)|\,dy\Big)ds\Big\}^p\Big]\\
&\quad\le \Big|\frac{\sin\pi\alpha}{\pi}\Big|^p\, E\Big[\sup_{(t,x)}\Big\{\int_0^t (t-s)^{\alpha-1}\Big(\int_0^1 p_{t-s}(x,y)\,|(J_\alpha\sigma)(s,y)|^p\,dy\Big)^{\frac1p}ds\Big\}^p\Big]\\
&\quad\le \Big|\frac{\sin\pi\alpha}{\pi}\Big|^p\, E\Big[\sup_{(t,x)}\Big\{\int_0^t (t-s)^{\alpha-1}\Big(\sup_{z\in[0,1]}p_{t-s}(x,z)\Big)^{\frac1p}\Big(\int_0^1 |(J_\alpha\sigma)(s,y)|^p\,dy\Big)^{\frac1p}ds\Big\}^p\Big]\\
&\quad\le \Big|\frac{\sin\pi\alpha}{\pi}\Big|^p\, C_1\, E\Big[\sup_{t\in[0,T]}\Big\{\int_0^t (t-s)^{\alpha-1-\frac{1}{2p}}\Big(\int_0^1 |(J_\alpha\sigma)(s,y)|^p\,dy\Big)^{\frac1p}ds\Big\}^p\Big]\\
&\quad\le \Big|\frac{\sin\pi\alpha}{\pi}\Big|^p\, C_1\, E\Big[\sup_{t\in[0,T]}\Big(\int_0^t (t-s)^{(\alpha-1-\frac{1}{2p})\frac{p}{p-1}}ds\Big)^{p-1}\int_0^t\!\!\int_0^1 |(J_\alpha\sigma)(s,y)|^p\,dy\,ds\Big]\\
&\quad\le C'_{T,p,\alpha}\,\sup_{(s,y)\in[0,T]\times[0,1]} E\Big|\int_0^s\!\!\int_0^1 (s-r)^{-\alpha}\,p_{s-r}(y,z)\,\sigma(r,z)\,W(dr,dz)\Big|^p,
\end{aligned} \tag{3.10}
\]
where we have used the condition $\alpha > \frac{3}{2p}$, so that
\[
C'_{T,p,\alpha} = \Big|\frac{\sin\pi\alpha}{\pi}\Big|^p\, C_1\,\Big(\int_0^T s^{(\alpha-1-\frac{1}{2p})\frac{p}{p-1}}\,ds\Big)^{p-1}\times T = \Big|\frac{\sin\pi\alpha}{\pi}\Big|^p\, C_1\Big(\frac{p-1}{\alpha p-\frac32}\Big)^{p-1} T^{\alpha p-\frac12} < \infty. \tag{3.11}
\]
Applying the BDG inequality (see Proposition 4.4 in [K]) and (3.9), we have
\[
\begin{aligned}
\Big\|\int_0^s\!\!\int_0^1 (s-r)^{-\alpha}\,p_{s-r}(y,z)\,\sigma(r,z)\,W(dr,dz)\Big\|_{L^p(\Omega)}
&\le 2\sqrt{p}\,\Big(\int_0^s\!\!\int_0^1 (s-r)^{-2\alpha}\,p_{s-r}(y,z)^2\,\|\sigma(r,z)\|^2_{L^p(\Omega)}\,dr\,dz\Big)^{\frac12}\\
&\le 2\sqrt{C_1 p}\,\Big(\int_0^s (s-r)^{-2\alpha-\frac12}\sup_{z\in[0,1]}\|\sigma(r,z)\|^2_{L^p(\Omega)}\,dr\Big)^{\frac12}\\
&\le 2\sqrt{C_1 p}\,\Big(\int_0^s (s-r)^{(-2\alpha-\frac12)\frac{p}{p-2}}\,dr\Big)^{\frac{p-2}{2p}}\Big(\int_0^s \sup_{z\in[0,1]}\|\sigma(r,z)\|^p_{L^p(\Omega)}\,dr\Big)^{\frac1p}.
\end{aligned} \tag{3.12}
\]
Therefore
\[
\sup_{(s,y)\in[0,T]\times[0,1]} E\Big|\int_0^s\!\!\int_0^1 (s-r)^{-\alpha}\,p_{s-r}(y,z)\,\sigma(r,z)\,W(dr,dz)\Big|^p \le C''_{T,p,\alpha}\int_0^T \sup_{z\in[0,1]} E|\sigma(r,z)|^p\,dr, \tag{3.13}
\]
where the condition $\alpha < \frac14-\frac1p$ was used to see that
\[
C''_{T,p,\alpha} = (4C_1 p)^{\frac p2}\Big(\int_0^T r^{(-2\alpha-\frac12)\frac{p}{p-2}}\,dr\Big)^{\frac{p-2}{2}} = (4C_1 p)^{\frac p2}\Big(\frac{p-2}{\frac p2-2-2\alpha p}\Big)^{\frac{p-2}{2}} T^{\frac p4-1-\alpha p} < \infty. \tag{3.14}
\]
Combining (3.10) with (3.13), we obtain
\[
E\Big[\sup_{(t,x)\in[0,T]\times[0,1]}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(s,y)\,W(ds,dy)\Big|^p\Big] \le C_{T,p}\int_0^T \sup_{z\in[0,1]} E|\sigma(r,z)|^p\,dr, \tag{3.15}
\]
where
\[
C_{T,p} = \min_{\frac{3}{2p}<\alpha<\frac14-\frac1p} C'_{T,p,\alpha}\, C''_{T,p,\alpha}. \tag{3.16}
\]
In view of (3.11), (3.14) and (3.9), a straightforward calculation (taking $\alpha = \frac18+\frac{1}{4p}$ and using $|\sin\pi\alpha|\le 1$) leads to
\[
C_{T,p} \le p^{\frac p2}\, T^{\frac{p-6}{4}}\Big(\frac{2}{\pi}\Big)^{p}\Big(\frac{1}{\sqrt{2\pi}}\Big)^{\frac p2+1}\Big(\frac{8(p-1)}{p-10}\Big)^{p-1}\Big(\frac{4(p-2)}{p-10}\Big)^{\frac{p-2}{2}}. \tag{3.17}
\]
This completes the proof of the estimate (3.1). □

Lemma 3.3 Let $\sigma(s,y)$ be as in Proposition 3.1. Then for any $T > 0$, $p > 10$ and $\lambda > 0$, there exists a constant $C_{T,p} > 0$ such that
\[
\begin{aligned}
&P\Big(\sup_{(t,x)\in[0,T]\times[0,1]}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(s,y)\,W(ds,dy)\Big| > \lambda\Big)\\
&\quad\le P\Big(\int_0^T \sup_{y\in[0,1]}|\sigma(s,y)|^p\,ds > \lambda^p\Big) + \frac{C_{T,p}}{\lambda^p}\, E\min\Big\{\lambda^p,\ \int_0^T \sup_{y\in[0,1]}|\sigma(s,y)|^p\,ds\Big\}.
\end{aligned} \tag{3.18}
\]
Here the constant $C_{T,p}$ is the same as the constant $C_{T,p}$ in (3.1).
Proof. For any $\lambda > 0$, define
\[
\Omega_\lambda := \Big\{\omega\in\Omega : \int_0^T \sup_{y\in[0,1]}|\sigma(s,y)|^p\,ds \le \lambda^p\Big\}. \tag{3.19}
\]
By Chebyshev's inequality, we have
\[
\begin{aligned}
&P\Big(\sup_{(t,x)\in[0,T]\times[0,1]}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(s,y)\,W(ds,dy)\Big| > \lambda\Big)\\
&\quad\le P(\Omega\setminus\Omega_\lambda) + P\Big(\sup_{(t,x)}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(s,y)\,W(ds,dy)\Big|\,\mathbf{1}_{\Omega_\lambda} > \lambda\Big)\\
&\quad\le P(\Omega\setminus\Omega_\lambda) + \frac{1}{\lambda^p}\,E\Big[\sup_{(t,x)}\Big|\mathbf{1}_{\Omega_\lambda}\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(s,y)\,W(ds,dy)\Big|^p\Big].
\end{aligned} \tag{3.20}
\]
Now, we introduce the random field
\[
\widetilde\sigma(s,y) := \sigma(s,y)\,\mathbf{1}_{\{\omega\in\Omega:\ \int_0^s \sup_{y\in[0,1]}|\sigma(r,y)|^p\,dr\,\le\,\lambda^p\}}. \tag{3.21}
\]
Note that the stochastic integral of $\widetilde\sigma(\cdot,\cdot)$ with respect to the space-time white noise is well defined. Since for any $\omega\in\Omega_\lambda$,
\[
\int_0^t\!\!\int_0^1 |\sigma(s,y)-\widetilde\sigma(s,y)|\,ds\,dy = 0, \quad \forall\, t\in[0,T], \tag{3.22}
\]
by the local property of the stochastic integral (see Lemma 5.1 in the Appendix),
\[
\mathbf{1}_{\Omega_\lambda}\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(s,y)\,W(ds,dy) = \mathbf{1}_{\Omega_\lambda}\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\widetilde\sigma(s,y)\,W(ds,dy), \quad P\text{-a.s.} \tag{3.23}
\]
Hence, using the bound (3.1), we get
\[
\begin{aligned}
E\Big[\sup_{(t,x)}\Big|\mathbf{1}_{\Omega_\lambda}\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(s,y)\,W(ds,dy)\Big|^p\Big]
&= E\Big[\sup_{(t,x)}\Big|\mathbf{1}_{\Omega_\lambda}\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\widetilde\sigma(s,y)\,W(ds,dy)\Big|^p\Big]\\
&\le E\Big[\sup_{(t,x)}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\widetilde\sigma(s,y)\,W(ds,dy)\Big|^p\Big]\\
&\le C_{T,p}\,E\int_0^T \sup_{y\in[0,1]}|\widetilde\sigma(s,y)|^p\,ds\\
&\le C_{T,p}\,E\min\Big\{\lambda^p,\ \int_0^T \sup_{y\in[0,1]}|\sigma(s,y)|^p\,ds\Big\}.
\end{aligned} \tag{3.24}
\]
Combining (3.20) with (3.24), we obtain (3.18). □

Proposition 3.4
Let $\{\sigma(s,y) : (s,y)\in\mathbb{R}_+\times[0,1]\}$ be a random field such that the stochastic integral against space-time white noise is well defined. Then the following two estimates hold:

(i) For any $T > 0$, $0 < p \le 10$ and $q > 10$, there exists a constant $C_{T,p,q}$ such that
\[
E\Big[\sup_{(t,x)\in[0,T]\times[0,1]}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(s,y)\,W(ds,dy)\Big|^p\Big] \le C_{T,p,q}\,E\Big[\Big(\int_0^T \sup_{y\in[0,1]}|\sigma(s,y)|^q\,ds\Big)^{\frac pq}\Big]. \tag{3.25}
\]
(ii) For any $T > 0$, $0 < p \le 10$ and $\epsilon > 0$, there exists a constant $C_{T,p,\epsilon}$ such that
\[
E\Big[\sup_{(t,x)\in[0,T]\times[0,1]}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(s,y)\,W(ds,dy)\Big|^p\Big] \le \epsilon\,E\Big[\sup_{(s,y)\in[0,T]\times[0,1]}|\sigma(s,y)|^p\Big] + C_{T,p,\epsilon}\,E\int_0^T \sup_{y\in[0,1]}|\sigma(s,y)|^p\,ds. \tag{3.26}
\]

Remark 3.5 The significance of the estimates (3.25) and (3.26) is that they allow $p$ to be small, which is crucial for the proof of the transportation cost inequality in the next section.

Proof. The estimate (3.25) can be easily derived from (3.18) and Lemma 5.2 in the Appendix as follows:
\[
\begin{aligned}
&E\Big[\sup_{(t,x)}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(s,y)\,W(ds,dy)\Big|^p\Big]\\
&\quad= \int_0^\infty p\lambda^{p-1}\,P\Big(\sup_{(t,x)}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(s,y)\,W(ds,dy)\Big| > \lambda\Big)\,d\lambda\\
&\quad\le \int_0^\infty p\lambda^{p-1}\,P\Big(\int_0^T \sup_{y\in[0,1]}|\sigma(s,y)|^q\,ds > \lambda^q\Big)\,d\lambda + C_{T,q}\int_0^\infty p\lambda^{p-1-q}\,E\min\Big\{\lambda^q,\ \int_0^T \sup_{y\in[0,1]}|\sigma(s,y)|^q\,ds\Big\}\,d\lambda\\
&\quad= C_{T,p,q}\,E\Big[\Big(\int_0^T \sup_{y\in[0,1]}|\sigma(s,y)|^q\,ds\Big)^{\frac pq}\Big],
\end{aligned} \tag{3.27}
\]
where
\[
C_{T,p,q} := 1 + C_{T,q}\,\frac{q}{q-p}, \tag{3.28}
\]
and the constant $C_{T,q}$ is defined in (3.16) (we applied Lemma 3.3 with the exponent $q$).

Let us now prove assertion (ii) in Proposition 3.4. From (3.25) it follows that for any $q > 10$,
\[
\begin{aligned}
E\Big[\sup_{(t,x)}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(s,y)\,W(ds,dy)\Big|^p\Big]
&\le C_{T,p,q}\,E\Big[\Big(\int_0^T \sup_{y\in[0,1]}|\sigma(s,y)|^q\,ds\Big)^{\frac pq}\Big]\\
&\le C_{T,p,q}\,E\Big[\Big(\sup_{(s,y)\in[0,T]\times[0,1]}|\sigma(s,y)|^{q-p}\int_0^T \sup_{y\in[0,1]}|\sigma(s,y)|^p\,ds\Big)^{\frac pq}\Big]\\
&= C_{T,p,q}\,E\Big[\sup_{(s,y)}|\sigma(s,y)|^{\frac{(q-p)p}{q}}\Big(\int_0^T \sup_{y\in[0,1]}|\sigma(s,y)|^p\,ds\Big)^{\frac pq}\Big]\\
&\le \epsilon\,E\Big[\sup_{(s,y)\in[0,T]\times[0,1]}|\sigma(s,y)|^p\Big] + C_{T,p,q}\,C_{T,p,q,\epsilon}\,E\int_0^T \sup_{y\in[0,1]}|\sigma(s,y)|^p\,ds,
\end{aligned} \tag{3.29}
\]
where we have used the following Young inequality:
\[
ab \le \frac{\epsilon}{C_{T,p,q}}\,a^{\frac{q}{q-p}} + C_{T,p,q,\epsilon}\,b^{\frac qp}, \qquad C_{T,p,q,\epsilon} := \frac pq\Big(\frac{(q-p)\,C_{T,p,q}}{q\,\epsilon}\Big)^{\frac{q-p}{p}}. \tag{3.30}
\]
Set
\[
C_{T,p,\epsilon} := \inf_{q>10} C_{T,p,q}\,C_{T,p,q,\epsilon}. \tag{3.31}
\]
Now, (3.26) follows from (3.29) with the constant $C_{T,p,\epsilon}$ defined above. □

4 Quadratic transportation cost inequality
In this section, we will show that the law $\mu$ of the random field solution $u(\cdot,\cdot)$ of SPDE (2.2), viewed as a probability measure on $C([0,T]\times[0,1])$ equipped with the uniform distance, satisfies the quadratic transportation cost inequality. It suffices to consider measures $\nu$ that are absolutely continuous with respect to $\mu$.

Let $\nu \ll \mu$ on $C([0,T]\times[0,1])$. Define a new probability measure $Q$ on the filtered probability space $(\Omega,\mathcal F,\{\mathcal F_t\}_{0\le t\le T},P)$ by
\[
dQ := \frac{d\nu}{d\mu}(u)\,dP. \tag{4.1}
\]
Denote the Radon–Nikodym derivative restricted to $\mathcal F_t$ by
\[
M_t := \frac{dQ}{dP}\Big|_{\mathcal F_t}, \quad t\in[0,T].
\]
Then $M_t$, $t\in[0,T]$, forms a $P$-martingale. The following result was proved in [KS].

Lemma 4.1 There exists an adapted random field $h = \{h(s,x),\ (s,x)\in[0,T]\times[0,1]\}$ such that $Q$-a.s. for all $t\in[0,T]$,
\[
\int_0^t\!\!\int_0^1 h(s,x)^2\,ds\,dx < \infty,
\]
and $\widetilde W : [0,T]\times[0,1]\to\mathbb{R}$ defined by
\[
\widetilde W(t,x) := W(t,x) - \int_0^t\!\!\int_0^x h(s,y)\,ds\,dy \tag{4.2}
\]
is a Brownian sheet under the measure $Q$. Moreover,
\[
M_t = \exp\Big(\int_0^t\!\!\int_0^1 h(s,x)\,W(ds,dx) - \frac12\int_0^t\!\!\int_0^1 h(s,x)^2\,ds\,dx\Big), \quad Q\text{-a.s.}, \tag{4.3}
\]
and
\[
H(\nu\mid\mu) = \frac12\,E_Q\Big[\int_0^T\!\!\int_0^1 h(s,x)^2\,ds\,dx\Big], \tag{4.4}
\]
where $E_Q$ stands for the expectation under the measure $Q$.

Here is the main result of this section.
Theorem 4.2
Suppose the hypotheses (H.1) and (H.2) hold. Then the law $\mu$ of the solution $u(\cdot,\cdot)$ of SPDE (2.2) satisfies the quadratic transportation cost inequality on the space $C([0,T]\times[0,1])$ equipped with the uniform distance. Consequently, $\mu$ has normal concentration.

Proof. Take $\nu \ll \mu$ on $C([0,T]\times[0,1])$ and define $Q$ by (4.1). Let $h(t,x)$ be the corresponding random field appearing in Lemma 4.1. Then the solution $u(t,x)$ of equation (2.2) satisfies the following SPDE under the measure $Q$:
\[
\begin{aligned}
u(t,x) = {}& P_t u_0(x) + \int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,b(u(s,y))\,ds\,dy + \int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(u(s,y))\,\widetilde W(ds,dy)\\
&+ \int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(u(s,y))\,h(s,y)\,ds\,dy. 
\end{aligned} \tag{4.5}
\]
Consider the solution of the following SPDE:
\[
v(t,x) = P_t u_0(x) + \int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,b(v(s,y))\,ds\,dy + \int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(v(s,y))\,\widetilde W(ds,dy). \tag{4.6}
\]
By Lemma 4.1 it follows that under the measure $Q$, the law of $(v,u)$ forms a coupling of $(\mu,\nu)$. Therefore, by the definition of the Wasserstein distance,
\[
W_2(\nu,\mu)^2 \le E_Q\Big[\sup_{(t,x)\in[0,T]\times[0,1]}|u(t,x)-v(t,x)|^2\Big].
\]
In view of (4.4), to prove the quadratic transportation cost inequality
\[
W_2(\nu,\mu) \le \sqrt{C\,H(\nu\mid\mu)}, \tag{4.7}
\]
it is sufficient to show that
\[
E_Q\Big[\sup_{(t,x)\in[0,T]\times[0,1]}|v(t,x)-u(t,x)|^2\Big] \le C\,E_Q\Big[\int_0^T\!\!\int_0^1 h(s,y)^2\,ds\,dy\Big] \tag{4.8}
\]
for some constant $C$ independent of $\nu$; we may assume that the right-hand side of (4.8) is finite. For simplicity, in the sequel we still denote $E_Q$ by the symbol $E$.

From (4.6) and (4.5) it follows that
\[
E\Big[\sup_{(t,x)\in[0,T]\times[0,1]}|v(t,x)-u(t,x)|^2\Big] \le 3(I + II + III), \tag{4.9}
\]
where
\[
\begin{aligned}
I &:= E\Big[\sup_{(t,x)}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\big[b(v(s,y))-b(u(s,y))\big]\,ds\,dy\Big|^2\Big],\\
II &:= E\Big[\sup_{(t,x)}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\big[\sigma(v(s,y))-\sigma(u(s,y))\big]\,\widetilde W(ds,dy)\Big|^2\Big],\\
III &:= E\Big[\sup_{(t,x)}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,\sigma(u(s,y))\,h(s,y)\,ds\,dy\Big|^2\Big].
\end{aligned}
\]
By Hölder's inequality and (3.9) (which gives $\int_0^t\!\int_0^1 p_{t-s}(x,y)^2\,dy\,ds \le \sqrt{2T/\pi}$), the term $I$ can be estimated as follows:
\[
\begin{aligned}
I &\le L_b^2\,E\Big[\sup_{(t,x)}\Big|\int_0^t\!\!\int_0^1 p_{t-s}(x,y)\,|v(s,y)-u(s,y)|\,ds\,dy\Big|^2\Big]\\
&\le L_b^2\,E\Big[\sup_{(t,x)}\Big(\int_0^t\!\!\int_0^1 p_{t-s}(x,y)^2\,ds\,dy\Big)\Big(\int_0^t\!\!\int_0^1 |v(s,y)-u(s,y)|^2\,ds\,dy\Big)\Big]\\
&\le \sqrt{\frac{2T}{\pi}}\,L_b^2\,E\int_0^T\!\!\int_0^1 |v(s,y)-u(s,y)|^2\,ds\,dy\\
&\le \sqrt{\frac{2T}{\pi}}\,L_b^2\int_0^T E\Big[\sup_{(r,y)\in[0,s]\times[0,1]}|v(r,y)-u(r,y)|^2\Big]\,ds. 
\end{aligned} \tag{4.10}
\]
For the term $II$, applying the estimate (3.26) with $p = 2$, we obtain that for any $\epsilon > 0$,
\[
\begin{aligned}
II &\le \epsilon\,E\Big[\sup_{(t,x)}|\sigma(v(t,x))-\sigma(u(t,x))|^2\Big] + C_{T,2,\epsilon}\,E\int_0^T \sup_{y\in[0,1]}|\sigma(v(s,y))-\sigma(u(s,y))|^2\,ds\\
&\le \epsilon L_\sigma^2\,E\Big[\sup_{(t,x)}|v(t,x)-u(t,x)|^2\Big] + C_{T,2,\epsilon} L_\sigma^2\int_0^T E\Big[\sup_{(r,y)\in[0,s]\times[0,1]}|v(r,y)-u(r,y)|^2\Big]\,ds. 
\end{aligned} \tag{4.11}
\]
The term $III$ can be bounded as follows:
\[
\begin{aligned}
III &\le K_\sigma^2\,E\Big[\sup_{(t,x)}\Big(\int_0^t\!\!\int_0^1 p_{t-s}(x,y)^2\,ds\,dy\Big)\Big(\int_0^t\!\!\int_0^1 h(s,y)^2\,ds\,dy\Big)\Big]\\
&\le \sqrt{\frac{2T}{\pi}}\,K_\sigma^2\,E\Big[\int_0^T\!\!\int_0^1 h(s,y)^2\,ds\,dy\Big]. 
\end{aligned} \tag{4.12}
\]
Set
\[
Y(t) := E\Big[\sup_{(s,x)\in[0,t]\times[0,1]}|v(s,x)-u(s,x)|^2\Big]. \tag{4.13}
\]
Putting (4.9)–(4.12) together, we obtain
\[
Y(T) \le 3\sqrt{\frac{2T}{\pi}}\,L_b^2\int_0^T Y(s)\,ds + 3\epsilon L_\sigma^2\,Y(T) + 3C_{T,2,\epsilon} L_\sigma^2\int_0^T Y(s)\,ds + 3\sqrt{\frac{2T}{\pi}}\,K_\sigma^2\,E\Big[\int_0^T\!\!\int_0^1 h(s,y)^2\,ds\,dy\Big]. \tag{4.14}
\]
Recall that (see e.g. Theorem 3.13 in [DKZ])
\[
E\Big[\sup_{(t,x)\in[0,T]\times[0,1]}|u(t,x)|^2\Big] < \infty, \tag{4.15}
\]
\[
E\Big[\sup_{(t,x)\in[0,T]\times[0,1]}|v(t,x)|^2\Big] < \infty. \tag{4.16}
\]
Hence $Y(T) < \infty$ for any $T > 0$. Taking any $\epsilon < \frac{1}{3L_\sigma^2}$, we deduce from (4.14) that
\[
Y(T) \le \frac{3L_b^2}{1-3\epsilon L_\sigma^2}\sqrt{\frac{2T}{\pi}}\int_0^T Y(s)\,ds + \frac{3C_{T,2,\epsilon} L_\sigma^2}{1-3\epsilon L_\sigma^2}\int_0^T Y(s)\,ds + \frac{3K_\sigma^2}{1-3\epsilon L_\sigma^2}\sqrt{\frac{2T}{\pi}}\,E\Big[\int_0^T\!\!\int_0^1 h(s,y)^2\,ds\,dy\Big]. \tag{4.17}
\]
Clearly, (4.17) still holds if we replace $T$ by any $t\in[0,T]$. Applying Gronwall's inequality, we obtain
\[
Y(T) \le 3K_\sigma^2 \inf_{0<\epsilon<\frac{1}{3L_\sigma^2}}\Bigg\{\frac{1}{1-3\epsilon L_\sigma^2}\sqrt{\frac{2T}{\pi}}\,\exp\Bigg(\frac{3L_b^2 T}{1-3\epsilon L_\sigma^2}\sqrt{\frac{2T}{\pi}} + \frac{3C_{T,2,\epsilon} L_\sigma^2 T}{1-3\epsilon L_\sigma^2}\Bigg)\Bigg\}\times E\Big[\int_0^T\!\!\int_0^1 h(s,y)^2\,ds\,dy\Big], \tag{4.18}
\]
where the constant $C_{T,2,\epsilon}$ is defined in (3.31) with $p = 2$. This proves (4.8) and hence completes the proof of Theorem 4.2. □

5 Appendix

The following local property of the Walsh stochastic integral against space-time white noise is similar to that of the Itô integral.
Lemma 5.1
Let $\{\sigma(t,x) : (t,x)\in[0,T]\times[0,1]\}$ be a random field such that the stochastic integral against space-time white noise is well defined. Let $\Omega_0\subset\Omega$ be a measurable subset such that for a.s. $\omega\in\Omega_0$,
\[
\int_0^T\!\!\int_0^1 |\sigma(t,x)|^2\,dt\,dx = 0. \tag{5.1}
\]
Then for a.s. $\omega\in\Omega_0$,
\[
\int_0^T\!\!\int_0^1 \sigma(t,x)\,W(dt,dx) = 0. \tag{5.2}
\]
Proof. The local property can be proved similarly to that of the Itô integral; we only outline the proof here. Firstly, we note that the local property obviously holds when $\sigma(\cdot,\cdot)$ is a simple process. When $\sigma(t,x)$ is a bounded, continuous random field, we can prove the local property through an approximation of $\sigma$ by a sequence of simple processes. For a general random field $\sigma(\cdot,\cdot)$, the local property can be proved by two further approximations, first by bounded random fields and then by continuous random fields. □

Lemma 5.2 Let $X \ge 0$ be a random variable. Then for any $0 < p < q$,
\[
EX^p = \int_0^\infty p x^{p-1}\,P(X > x)\,dx, \tag{5.3}
\]
\[
\int_0^\infty \frac{E\min\{x^q, X\}}{x^q}\,p x^{p-1}\,dx = \frac{q}{q-p}\,E\big[X^{\frac pq}\big]. \tag{5.4}
\]
Proof. (5.3) and (5.4) can be easily proved by the Fubini theorem. (5.4) is similar to Lemma 2 in [I]; for completeness, we provide the proof here:
\[
\begin{aligned}
\int_0^\infty \frac{E\min\{x^q, X\}}{x^q}\,p x^{p-1}\,dx 
&= E\int_0^{X^{1/q}} p x^{p-1}\,dx + E\Big[X\int_{X^{1/q}}^\infty p x^{p-1-q}\,dx\Big]\\
&= E\big[X^{\frac pq}\big] - \frac{p}{p-q}\,E\Big[X\big(X^{\frac 1q}\big)^{p-q}\Big] = \frac{q}{q-p}\,E\big[X^{\frac pq}\big]. 
\end{aligned} \tag{5.5}
\]
□

Acknowledgement. This work is partially supported by NNSF of China (11671372, 11431014, 11721101).
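As a small numerical aside (not part of the paper): for a concrete distribution, both identities in Lemma 5.2 can be checked by direct integration. The Python sketch below does this for $X \sim \mathrm{Exp}(1)$, where $P(X > x) = e^{-x}$, $EX^p = \Gamma(p+1)$ and $E\min\{a, X\} = 1 - e^{-a}$; the function names are ours.

```python
import math

def survival_exp(x):
    # P(X > x) for X ~ Exp(1)
    return math.exp(-x)

def layer_cake_moment(p, dx=1e-3, xmax=50.0):
    # Right-hand side of (5.3): integral of p x^{p-1} P(X > x) over (0, infinity),
    # by a midpoint Riemann sum; the tail beyond xmax is O(e^{-xmax}).
    total, x = 0.0, dx / 2.0
    while x < xmax:
        total += p * x ** (p - 1.0) * survival_exp(x) * dx
        x += dx
    return total

def truncated_identity_lhs(p, q, dx=1e-3, xmax=50.0):
    # Left-hand side of (5.4), using E[min(x^q, X)] = 1 - e^{-x^q} for X ~ Exp(1).
    total, x = 0.0, dx / 2.0
    while x < xmax:
        total += (1.0 - math.exp(-(x ** q))) * p * x ** (p - 1.0 - q) * dx
        x += dx
    # Beyond xmax the integrand is essentially p x^{p-1-q}; add that tail analytically.
    return total + (p / (q - p)) * xmax ** (p - q)

# (5.3): E X^p = Gamma(p+1) for X ~ Exp(1)
assert abs(layer_cake_moment(1.5) - math.gamma(2.5)) < 1e-2
# (5.4) with p = 1, q = 2: both sides equal 2 * Gamma(3/2) = sqrt(pi)
assert abs(truncated_identity_lhs(1.0, 2.0) - 2.0 * math.gamma(1.5)) < 1e-2
print("Lemma 5.2 identities verified numerically")
```

The tail correction in `truncated_identity_lhs` matters here: for $p=1$, $q=2$ the integrand decays only like $x^{-2}$, so truncating at $x_{\max}=50$ without it would lose about $0.02$ of the integral.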