Weak convergence for the stochastic heat equation driven by Gaussian white noise
Xavier Bardina, Maria Jolis and Lluís Quer-Sardanyons ∗
Departament de Matemàtiques, Universitat Autònoma de Barcelona, 08193 Bellaterra (Barcelona)
[email protected]; [email protected]; [email protected]
June 4, 2018
Abstract
In this paper, we consider a quasi-linear stochastic heat equation on [0,1], with Dirichlet boundary conditions and driven by the space-time white noise. We replace the random perturbation by a family of noisy inputs depending on a parameter n ∈ N that approximate the white noise in some sense. Then, we provide sufficient conditions ensuring that the real-valued mild solution of the SPDE perturbed by this family of noises converges in law, in the space C([0,T] × [0,1]) of continuous functions, to the solution of the white-noise-driven SPDE. Two important examples of noises for which the result applies are given. The first one is based on a Kac–Stroock type process in the plane, inspired by the one-dimensional result stating that the processes (1/ε) ∫_0^t (−1)^{N(s/ε²)} ds, where N is a standard Poisson process, converge in law to a Brownian motion as ε tends to 0. The second one is constructed in terms of the kernels associated to the extension of Donsker's theorem to the plane.

Keywords: stochastic heat equation; white noise; weak convergence; two-parameter Poisson process; Donsker kernels.
AMS subject classification:

∗ The three authors are supported by the grant MEC-FEDER Ref. MTM2006-06427 from the Dirección General de Investigación, Ministerio de Educación y Ciencia, Spain.

1 Introduction
In the last three decades, there have been enormous advances in the study of random field solutions to stochastic partial differential equations (SPDEs) driven by general Brownian noises. The starting point of this theory was the seminal work by Walsh [36], and most of the research developed thereafter has been mainly focused on the analysis of heat and wave equations perturbed by Gaussian noises which are white in time and have a fairly general spatial correlation (see, for instance, [2, 9, 11, 13, 27]). Notice also that some effort has been made to deal with SPDEs driven by fractional type noises (see, for instance, [19, 26, 29, 33]).

Indeed, the motivation to consider this type of model in the above mentioned references has sometimes put together theoretical mathematical aspects and applications to real situations. Let us mention that, for instance, different types of SPDEs provide suitable models in the study of population growth, some climate and oceanographical phenomena, or some applications to mathematical finance (see [14], [21], [1], [7], respectively).

However, real noisy inputs are only approximately white and Gaussian, and what one usually does is to justify somehow that one can approximate the randomness acting on the system by a Gaussian white noise. This fact has been illustrated by Walsh in [35], where a parabolic SPDE has been considered in order to model a discontinuous neurophysiological phenomenon. The noise considered in this article is determined by a Poisson point process and the author shows that, whenever the number of jumps increases and their size decreases, it approximates the so-called space-time white noise in the sense of convergence of the finite dimensional distributions.
Then, the author proves that the solutions of the PDEs perturbed by these discrete noises converge in law (in the sense of finite dimensional distribution convergence) to the solution of the PDE perturbed by the space-time white noise.

Let us now consider the following one-dimensional quasi-linear stochastic heat equation:

∂U/∂t (t,x) − ∂²U/∂x² (t,x) = b(U(t,x)) + Ẇ(t,x), (t,x) ∈ [0,T] × [0,1], (1)

where T > 0, b : R → R is a globally Lipschitz function and Ẇ is the formal notation for the space-time white noise. We impose an initial condition and boundary conditions of Dirichlet type, that is:

U(0,x) = u(x), x ∈ [0,1],
U(t,0) = U(t,1) = 0, t ∈ [0,T],

where u : [0,1] → R is a continuous function. The random field solution to Equation (1) will be denoted by U = {U(t,x), (t,x) ∈ [0,T] × [0,1]} and it is interpreted in the mild sense. More precisely, let {W(t,x), (t,x) ∈ [0,T] × [0,1]} denote a Brownian sheet on [0,T] × [0,1], defined on some probability space (Ω, F, P). For 0 ≤ t ≤ T, let F_t be the σ-field generated by the random variables {W(s,x), (s,x) ∈ [0,t] × [0,1]}, which can be conveniently completed, so that the resulting filtration {F_t, t ≥ 0} satisfies the usual conditions. Then, a process U is a solution of (1) if it is F_t-adapted and the following stochastic integral equation is satisfied:

U(t,x) = ∫_0^1 G_t(x,y) u(y) dy + ∫_0^t ∫_0^1 G_{t−s}(x,y) b(U(s,y)) dy ds + ∫_0^t ∫_0^1 G_{t−s}(x,y) W(ds,dy), a.s., (2)

for all (t,x) ∈ (0,T] × (0,1), where G denotes the Green function associated to the heat equation in [0,1] with Dirichlet boundary conditions. We should mention that the stochastic integral on the right-hand side of Equation (2) is a Wiener integral, which can be understood either in the sense of Walsh [36] or in the framework of Da Prato and Zabczyk [12]. Besides, existence, uniqueness and pathwise continuity of the solution of (2) are a consequence of [36, Theorem 3.5].

The aim of our work is to prove that the mild solution of (1) –which is given by the solution of (2)– can be approximated in law, in the space C([0,T] × [0,1]), by the mild solutions of the equations

∂U_n/∂t (t,x) − ∂²U_n/∂x² (t,x) = b(U_n(t,x)) + θ_n(t,x), (t,x) ∈ [0,T] × [0,1], (3)

with initial condition u and Dirichlet boundary conditions, where n ∈ N. In this equation, θ_n will be a noisy input that approximates the white noise Ẇ in the following sense:

Hypothesis 1.1
The finite dimensional distributions of the processes

ζ_n(t,x) = ∫_0^t ∫_0^x θ_n(s,y) dy ds, (t,x) ∈ [0,T] × [0,1],

converge in law, as n tends to infinity, to those of the Brownian sheet.

Observe that, if the processes θ_n have square integrable paths, then the mild form of Equation (3) is given by:

U_n(t,x) = ∫_0^1 G_t(x,y) u(y) dy + ∫_0^t ∫_0^1 G_{t−s}(x,y) b(U_n(s,y)) dy ds + ∫_0^t ∫_0^1 G_{t−s}(x,y) θ_n(s,y) dy ds. (4)

Standard arguments yield existence and uniqueness of a solution for Equation (4) and, furthermore, as will be detailed later on (see Section 3), the solution U_n has continuous trajectories a.s.

In order to state the main result of the paper, let us consider the following hypotheses which, as will be made explicit in the sequel, play an essential role:

Hypothesis 1.2
For some q ∈ [2,3), there exists a positive constant C such that, for all n ≥ 1 and any f ∈ L^q([0,T] × [0,1]), it holds:

E (∫_0^T ∫_0^1 f(t,x) θ_n(t,x) dx dt)² ≤ C (∫_0^T ∫_0^1 |f(t,x)|^q dx dt)^{2/q}.

Hypothesis 1.3 There exist m > 8 and a positive constant C such that the following is satisfied: for all s_0, s_0' ∈ [0,T] and x_0, x_0' ∈ [0,1] satisfying 0 < s_0 < s_0' < 2 s_0 and 0 < x_0 < x_0' < 2 x_0, and for any f ∈ L²([0,T] × [0,1]), it holds:

sup_{n ≥ 1} E |∫_{s_0}^{s_0'} ∫_{x_0}^{x_0'} f(s,y) θ_n(s,y) dy ds|^m ≤ C (∫_{s_0}^{s_0'} ∫_{x_0}^{x_0'} f(s,y)² dy ds)^{m/2}.

We remark that, in Hypothesis 1.2, the restriction on the parameter q is due to the integrability properties of the Green function G. On the other hand, in the conditions s_0' < 2 s_0 (resp. x_0' < 2 x_0) of Hypothesis 1.3, the number 2 could be replaced by any k > 1. We are now in position to state our main result:
Theorem 1.4
Let {θ_n(t,x), (t,x) ∈ [0,T] × [0,1]}, n ∈ N, be a family of stochastic processes such that θ_n ∈ L²([0,T] × [0,1]) a.s., and such that Hypotheses 1.1, 1.2 and 1.3 are satisfied. Moreover, assume that u : [0,1] → R is continuous and b : R → R is Lipschitz.

Then, the family of stochastic processes {U_n, n ≥ 1} defined as the mild solutions of Equation (3) converges in law, in the space C([0,T] × [0,1]), to the mild solution U of Equation (1).

Let us point out that, as we will see in Section 3, Theorem 1.4 will be an almost immediate consequence of the analogous result when taking null initial condition and nonlinear term (see Theorem 3.5). Thus, the essential part of the paper will be concerned with proving the convergence in law, in the space C([0,T] × [0,1]), of the mild solutions of

∂X_n/∂t (t,x) − ∂²X_n/∂x² (t,x) = θ_n(t,x), (t,x) ∈ [0,T] × [0,1], (5)

with vanishing initial data and Dirichlet boundary conditions, towards the solution of

∂X/∂t (t,x) − ∂²X/∂x² (t,x) = Ẇ(t,x), (t,x) ∈ [0,T] × [0,1]. (6)

Observe that the mild solutions of Equations (5) and (6) can be explicitly written as, respectively,

X_n(t,x) = ∫_0^t ∫_0^1 G_{t−s}(x,y) θ_n(s,y) dy ds, (t,x) ∈ [0,T] × [0,1], (7)

and

X(t,x) = ∫_0^t ∫_0^1 G_{t−s}(x,y) W(ds,dy), (t,x) ∈ [0,T] × [0,1], (8)

where the latter defines a centered Gaussian process.

An important part of the work is also devoted to checking that two interesting particular families of noises verify the hypotheses of Theorem 1.4. More precisely, consider the following processes:

1. The Kac–Stroock processes on the plane:

θ_n(t,x) = n √(tx) (−1)^{N_n(t,x)}, (9)

where N_n(t,x) := N(√n t, √n x), and N is a standard Poisson process in the plane.

2.
The Donsker kernels: Let {Z_k, k ∈ N²} be a family of independent, identically distributed and centered random variables, with E(Z_k²) = 1 for all k ∈ N², and such that E(|Z_k|^m) < +∞ for all k ∈ N² and some sufficiently large m ∈ N. For any n ∈ N, we define the kernels

θ_n(t,x) = n Σ_{k=(k_1,k_2) ∈ N²} Z_k 1_{[k_1−1,k_1)×[k_2−1,k_2)}(tn, xn), (t,x) ∈ [0,T] × [0,1]. (10)

In the case where θ_n are the Kac–Stroock processes, it has been proved in [5] that the family of processes

ζ_n(t,x) = ∫_0^t ∫_0^x θ_n(s,y) ds dy, n ∈ N,

converges in law, in the space of continuous functions, to the Brownian sheet. This result has been inspired by its one-dimensional counterpart, which is due to Stroock [31] and states that the family of processes

Y_ε(t) = (1/ε) ∫_0^t (−1)^{N(s/ε²)} ds, t ∈ [0,T], ε > 0,

where N stands for a standard Poisson process, converges in law in C([0,T]), as ε tends to 0, to the standard Brownian motion. Moreover, it is worth mentioning that Kac (see [22]) already considered this kind of processes in order to write the solution of the telegrapher's equation in terms of a Poisson process.

On the other hand, when θ_n are the Donsker kernels, the convergence in law, in the space of continuous functions, of the processes

ζ_n(t,x) = ∫_0^t ∫_0^x θ_n(s,y) ds dy, n ∈ N,

to the Brownian sheet is a consequence of the extension of Donsker's theorem to the plane (see, for instance, [37]).

We should mention at this point that the motivation behind our results has also been considered by Manthey in [24] and [25]. Indeed, in the former paper, the author considers Equation (5) with a family of correlated noises {θ_n, n ∈ N} whose integral processes ∫_0^t ∫_0^x θ_n(s,y) dy ds converge in law (in the sense of finite dimensional distribution convergence) to the Brownian sheet. Then, sufficient conditions on the noise processes are specified under which the solution X_n of (5) converges in law, in the sense of finite dimensional distribution convergence, to the solution of (6). Moreover, it has also been proved that, whenever the noisy processes are Gaussian, the convergence in law holds in the space of continuous functions too; these results have been extended to the quasi-linear equation (3) in [25]. In this sense, let us mention that, in an Appendix and for the sake of completeness, we have added a brief explanation of Manthey's method and showed that his results do not apply to the examples of noisy inputs that we are considering in this paper.

Let us also remark that recently there has been an increasing interest in the study of weak approximation for several classes of SPDEs (see [15, 16]). In these references, the methods for obtaining the corresponding approximation sequences are based on discretisation schemes for the differential operator driving the equation, and the rate of convergence of the weak approximations is analysed. Hence, this latter framework differs significantly from the setting that we have described above. On the other hand, we notice that weak convergence for some classes of SPDEs driven by the Donsker kernels has been considered in the literature; namely, a reduced hyperbolic equation –which is essentially equivalent to a one-dimensional stochastic wave equation– has been considered in [8, 17], while in [32], the author deals with a stochastic elliptic equation with non-linear drift. Furthermore, in [34], weak convergence of Wong–Zakai approximations for stochastic evolution equations driven by a finite-dimensional Wiener process has been studied. Eventually, it is worth commenting that other types of problems concerning SPDEs driven by Poisson-type noises have been considered e.g. in [18, 20, 23, 28, 30].

The paper is organised as follows.
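Stroock's one-dimensional result quoted above can be illustrated numerically. The sketch below is our own illustration (function names, the seed and the tolerances are ours, not part of the paper): it simulates Y_ε(t) = (1/ε) ∫_0^t (−1)^{N(s/ε²)} ds by tracking the jump times of the time-changed Poisson process, and compares the sample second moment with the exact value E[Y_ε(t)²] = t − (ε²/2)(1 − e^{−2t/ε²}), obtained from the covariance E[(−1)^{N(u)+N(v)}] = e^{−2|u−v|}, which converges to t = Var(B_t) as ε → 0.

```python
import math
import random

def stroock_sample(t_max: float, eps: float, rng: random.Random) -> float:
    """One realisation of Y_eps(t_max) = (1/eps) * int_0^{t_max} (-1)^{N(s/eps^2)} ds."""
    # N(s/eps^2) jumps at s = eps^2 * T_k, where T_k are the arrival times
    # of a rate-1 Poisson process (cumulative Exp(1) interarrival times).
    integral, last_jump, sign, s = 0.0, 0.0, 1.0, 0.0
    while True:
        s += eps * eps * rng.expovariate(1.0)
        if s >= t_max:
            integral += sign * (t_max - last_jump)
            return integral / eps
        integral += sign * (s - last_jump)
        last_jump, sign = s, -sign

def second_moment_exact(t: float, eps: float) -> float:
    # E[Y_eps(t)^2] = (1/eps^2) int_0^t int_0^t exp(-2|s-r|/eps^2) dr ds
    #              = t - (eps^2/2) * (1 - exp(-2 t / eps^2)).
    return t - 0.5 * eps * eps * (1.0 - math.exp(-2.0 * t / (eps * eps)))

rng = random.Random(2024)
t, eps, reps = 1.0, 0.05, 2000
samples = [stroock_sample(t, eps, rng) for _ in range(reps)]
mean = sum(samples) / reps
second_moment = sum(y * y for y in samples) / reps
print(mean, second_moment, second_moment_exact(t, eps))
```

For ε = 0.05 the exact second moment already differs from t = 1 by about 10⁻³, in line with the convergence of Y_ε to Brownian motion.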
In Section 2, we will present some preliminaries on Equation (1), its linear form (6) and some general results on weak convergence. In Section 3, we prove the convergence results for equations (6) and (1), so that we end up with the proof of Theorem 1.4. The proof of the fact that the Kac–Stroock processes satisfy the hypotheses of Theorem 1.4 will be carried out in Section 4, while the analysis in the case of the Donsker kernels will be performed in Section 5. Finally, we add an Appendix where we give the proof of Lemma 2.3 and relate our results to those of Manthey ([24], [25]).

2 Preliminaries

As explained in the Introduction, we are concerned with the mild solution of the formally-written quasi-linear stochastic heat equation (1). That is, we consider a real-valued stochastic process {U(t,x), (t,x) ∈ [0,T] × [0,1]}, which we assume to be adapted with respect to the (completed) natural filtration generated by the Brownian sheet on [0,T] × [0,1], such that, for all (t,x) ∈ [0,T] × [0,1],

U(t,x) = ∫_0^1 G_t(x,y) u(y) dy + ∫_0^t ∫_0^1 G_{t−s}(x,y) b(U(s,y)) dy ds + ∫_0^t ∫_0^1 G_{t−s}(x,y) W(ds,dy), a.s., (11)

where we recall that G_t(x,y), (t,x,y) ∈ R_+ × (0,1)², denotes the Green function associated to the heat equation on [0,1] with Dirichlet boundary conditions. Explicit formulas for G are well-known, namely:

G_t(x,y) = (1/√(4πt)) Σ_{n=−∞}^{+∞} ( exp(−(x−y−2n)²/(4t)) − exp(−(x+y−2n)²/(4t)) )

or

G_t(x,y) = 2 Σ_{n=1}^{∞} sin(nπx) sin(nπy) exp(−n²π²t).

Moreover, it holds that

0 ≤ G_t(x,y) ≤ (1/√(4πt)) exp(−(x−y)²/(4t)), t > 0, x, y ∈ [0,1].

We have already commented in the Introduction that, in order to prove Theorem 1.4, we will restrict our analysis to the linear version of Equation (1), which is given by (6). Hence, let us consider for the moment X = {X(t,x), (t,x) ∈ [0,T] × [0,1]} to be the mild solution of Equation (6) with vanishing initial conditions and Dirichlet boundary conditions. This can be explicitly written as (8). Notice that, for any (t,x) ∈ (0,T] × (0,1), X(t,x) defines a centered Gaussian random variable with variance

E(X(t,x)²) = ∫_0^t ∫_0^1 G_{t−s}(x,y)² dy ds.

Indeed, by (iii) in Lemma 2.1 below, it holds that E(X(t,x)²) ≤ C t^{1/2}, where the constant C > 0 does not depend on x.

In the sequel, we will make use of the following result, which is a quotation of [3, Lemma B.1]:

Lemma 2.1 (i) Let α ∈ (1,3). Then, for all t ∈ [0,T] and x, y ∈ [0,1],

∫_0^t ∫_0^1 |G_{t−s}(x,z) − G_{t−s}(y,z)|^α dz ds ≤ C |x−y|^{3−α}.

(ii) Let α ∈ (1,3). Then, for all s, t ∈ [0,T] such that s ≤ t and x ∈ [0,1],

∫_0^s ∫_0^1 |G_{t−r}(x,y) − G_{s−r}(x,y)|^α dy dr ≤ C (t−s)^{(3−α)/2}.

(iii) Under the same hypotheses as in (ii),

∫_s^t ∫_0^1 |G_{t−r}(x,y)|^α dy dr ≤ C (t−s)^{(3−α)/2}.

Let us recall that we aim to prove that the process X can be approximated in law, in the space C([0,T] × [0,1]), by the processes

X_n(t,x) = ∫_0^t ∫_0^1 G_{t−s}(x,y) θ_n(s,y) dy ds, (t,x) ∈ [0,T] × [0,1], n ≥ 1, (12)

where the processes θ_n satisfy certain conditions. In order to prove this convergence in law, we will make use of the following two general results.
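The two series representations of G given above can be cross-checked numerically. The snippet below is an illustration of ours (the truncation levels and evaluation points are chosen heuristically, not taken from the paper): it evaluates both formulas and verifies the Gaussian upper bound on G.

```python
import math

def green_images(t: float, x: float, y: float, terms: int = 20) -> float:
    """Method-of-images series for the Dirichlet heat kernel on [0, 1]."""
    total = 0.0
    for n in range(-terms, terms + 1):
        total += (math.exp(-(x - y - 2 * n) ** 2 / (4 * t))
                  - math.exp(-(x + y - 2 * n) ** 2 / (4 * t)))
    return total / math.sqrt(4 * math.pi * t)

def green_sine(t: float, x: float, y: float, terms: int = 300) -> float:
    """Spectral series 2 * sum_n sin(n pi x) sin(n pi y) exp(-n^2 pi^2 t)."""
    return 2.0 * sum(math.sin(n * math.pi * x) * math.sin(n * math.pi * y)
                     * math.exp(-n ** 2 * math.pi ** 2 * t)
                     for n in range(1, terms + 1))

for t, x, y in [(0.1, 0.3, 0.6), (0.01, 0.5, 0.5), (0.05, 0.9, 0.2)]:
    g1, g2 = green_images(t, x, y), green_sine(t, x, y)
    gauss_bound = math.exp(-(x - y) ** 2 / (4 * t)) / math.sqrt(4 * math.pi * t)
    # Both representations agree, and 0 <= G_t(x,y) <= Gaussian bound.
    assert abs(g1 - g2) < 1e-9 and 0.0 <= g1 <= gauss_bound + 1e-12
```

The image series converges fast for small t and the sine series for large t, so comparing the two at intermediate times exercises both regimes.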
The first one (Theorem 2.2) is a tightness criterion on the plane that generalizes a well-known theorem of Billingsley; it can be found in [38, Proposition 2.3], where it is proved that the hypotheses considered in the result are stronger than those of the commonly-used criterion of Centsov [10]. The second one (Lemma 2.3) will be used to prove the convergence of the finite dimensional distributions of X_n; though it can be found in the literature, we have not been able to find an explicit proof, so that, for the sake of completeness, we will sketch it in the Appendix.

Theorem 2.2
Let {X_n, n ∈ N} be a family of random variables taking values in C([0,T] × [0,1]). The family of the laws of {X_n, n ∈ N} is tight if there exist p', p > 0, δ > 2 and a constant C such that

sup_{n ≥ 1} E |X_n(0,0)|^{p'} < ∞

and, for every t, t' ∈ [0,T] and x, x' ∈ [0,1],

sup_{n ≥ 1} E |X_n(t',x') − X_n(t,x)|^p ≤ C (|x'−x| + |t'−t|)^δ.

Lemma 2.3
Let (F, ∥·∥) be a normed space and let {J_n, n ∈ N} and J be linear maps defined on F and taking values in the space L⁰(Ω) of almost surely finite random variables. Assume that there exists a positive constant C such that, for any f ∈ F,

sup_{n ≥ 1} E |J_n(f)|² ≤ C ∥f∥², (13)

E |J(f)|² ≤ C ∥f∥², (14)

and that, for some dense subspace D of F, it holds that J_n(f) converges in law to J(f), as n tends to infinity, for all f ∈ D.

Then, the sequence of random variables {J_n(f), n ∈ N} converges in law to J(f), for any f ∈ F.

Eventually, for any real function X defined on R²_+, and (t,x), (t',x') ∈ R²_+ such that t ≤ t' and x ≤ x', we will use the notation Δ_{t,x} X(t',x') for the increment of X over the rectangle (t,t'] × (x,x']:

Δ_{t,x} X(t',x') = X(t',x') − X(t,x') − X(t',x) + X(t,x).

3 Convergence results

This section is devoted to proving Theorem 1.4. For this, as we have already mentioned, it is convenient to consider first the linear equation (6) together with its mild solution (8). The first step consists in establishing sufficient conditions on a family of processes {θ_n, n ∈ N} in order that the approximation processes X_n (see (12)) converge, in the sense of finite dimensional distributions, to X, the solution of (8):

X(t,x) = ∫_0^t ∫_0^1 G_{t−s}(x,y) W(ds,dy). (15)

Proposition 3.1 Let {θ_n(t,x), (t,x) ∈ [0,T] × [0,1]}, n ∈ N, be a family of stochastic processes such that θ_n ∈ L²([0,T] × [0,1]) a.s. and such that Hypotheses 1.1 and 1.2 are satisfied.

Then, the finite dimensional distributions of the processes X_n given by (12) converge, as n tends to infinity, to those of the process defined by (15).

Proof: We will apply Lemma 2.3 in the following setting: let q ∈ [2,3) be as in Hypothesis 1.2 and consider the normed space (F := L^q([0,T] × [0,1]), ∥·∥_q), where ∥·∥_q denotes the standard norm in L^q([0,T] × [0,1]). Define

J_n(f) := ∫_0^T ∫_0^1 f(s,y) θ_n(s,y) dy ds and J(f) := ∫_0^T ∫_0^1 f(s,y) W(ds,dy), f ∈ F.

Then, J_n and J define linear maps on F and, by Hypothesis 1.2, it holds that

sup_{n ≥ 1} E |J_n(f)|² ≤ C ∥f∥_q², for all f ∈ L^q([0,T] × [0,1]).

Moreover, by the isometry of the Wiener integral and Hölder's inequality,

E |J(f)|² ≤ C ∥f∥_q², for all f ∈ L^q([0,T] × [0,1]).

Furthermore, the set D of elementary functions of the form

f(t,x) = Σ_{i=0}^{k−1} f_i 1_{(t_i,t_{i+1}]}(t) 1_{(x_i,x_{i+1}]}(x), (16)

with k ≥ 1, f_i ∈ R, 0 = t_0 < t_1 < ⋯ < t_k = T and 0 = x_0 < x_1 < ⋯ < x_k = 1, is dense in (F, ∥·∥_q).

On the other hand, the finite dimensional distributions of X_n converge to those of X if, and only if, for all m ≥ 1, a_1, …, a_m ∈ R and (s_1,y_1), …, (s_m,y_m) ∈ [0,T] × [0,1],

Σ_{j=1}^m a_j X_n(s_j,y_j) converges in law, as n → ∞, to Σ_{j=1}^m a_j X(s_j,y_j). (17)

This is equivalent to having that J_n(K) = ∫_0^T ∫_0^1 K(s,y) θ_n(s,y) dy ds converges in law, as n tends to infinity, to ∫_0^T ∫_0^1 K(s,y) W(ds,dy), where

K(s,y) := Σ_{j=1}^m a_j 1_{[0,s_j]}(s) G_{s_j−s}(y_j,y).

By Lemma 2.1 (iii), the function K belongs to L^q([0,T] × [0,1]). Hence, by Lemma 2.3, it suffices to check that J_n(f) converges in law to J(f) = ∫_0^T ∫_0^1 f(s,y) W(ds,dy), for every elementary function f of the form (16). In fact, if f is such a function, observe that we have

J_n(f) = Σ_{i=0}^{k−1} f_i ∫_{t_i}^{t_{i+1}} ∫_{x_i}^{x_{i+1}} θ_n(s,y) dy ds,

and this random variable converges in law, as n tends to infinity, to

Σ_{i=0}^{k−1} f_i ∫_{t_i}^{t_{i+1}} ∫_{x_i}^{x_{i+1}} W(ds,dy) = ∫_0^T ∫_0^1 f(s,y) W(ds,dy),

because the finite dimensional distributions of ζ_n converge to those of the Brownian sheet. ✷

Let us now provide sufficient conditions on θ_n in order that the family of laws of the processes X_n be tight in C([0,T] × [0,1]).

Proposition 3.2
Let {θ_n(t,x), (t,x) ∈ [0,T] × [0,1]}, n ∈ N, be a family of stochastic processes such that θ_n ∈ L²([0,T] × [0,1]) a.s. Suppose that Hypothesis 1.3 is satisfied.

Then, the process X_n defined in (12) possesses a version with continuous paths, and the family of the laws of {X_n, n ∈ N} is tight in C([0,T] × [0,1]).

Proof: It suffices to prove that

sup_{n ≥ 1} E [X_n(t',x') − X_n(t,x)]^m ≤ C [|x'−x|^{mα} + |t'−t|^{mα}], (18)

for all α ∈ (0,1/4), t, t' ∈ [0,T] and x, x' ∈ [0,1], where m > 8 is given by Hypothesis 1.3. Indeed, since m > 8, one can find α ∈ (0,1/4) such that mα > 2, so that we obtain the continuity of X_n from Kolmogorov's continuity criterion in the plane. Furthermore, by Theorem 2.2, we also obtain the tightness of the laws of X_n in C([0,T] × [0,1]).

Set H(t,x;s,y) := 1_{[0,t]}(s) G_{t−s}(x,y). We will need to estimate the moment of order m of the quantity

X_n(t',x') − X_n(t,x) = ∫_0^T ∫_0^1 [H(t',x';s,y) − H(t,x;s,y)] θ_n(s,y) dy ds,

for t, t' ∈ [0,T] and x, x' ∈ [0,1]. This quantity coincides with the increment Δ_{0,0} Y_n(T,1), where Y_n, which indeed depends on t, t', x, x', is defined by

Y_n(a,b) := ∫_0^a ∫_0^b [H(t',x';s,y) − H(t,x;s,y)] θ_n(s,y) dy ds, (a,b) ∈ [0,T] × [0,1].

Hence, inequality (18) is equivalent to proving that

E (Δ_{0,0} Y_n(T,1))^m ≤ C [|x'−x|^{mα} + |t'−t|^{mα}],

for all α ∈ (0,1/4) and n ≥ 1. By [6, Lemma 3.2] (in the statement of this lemma, it is supposed that m is an even integer, but this assumption is not used in its proof), it suffices to prove that there exist γ > 0 and C > 0 such that, if s_0, s_0' ∈ [0,T] and x_0, x_0' ∈ [0,1] satisfy 0 < s_0 < s_0' < 2 s_0 and 0 < x_0 < x_0' < 2 x_0, then

sup_{n ≥ 1} E (Δ_{s_0,x_0} Y_n(s_0',x_0'))^m ≤ C [|t'−t|^{mα} + |x'−x|^{mα}] (s_0'−s_0)^{mγ} (x_0'−x_0)^{mγ}. (19)

By Hypothesis 1.3, applied to the particular case f(s,y) = H(t',x';s,y) − H(t,x;s,y), we obtain

sup_{n ≥ 1} E (Δ_{s_0,x_0} Y_n(s_0',x_0'))^m ≤ C (∫_0^T ∫_0^1 1_{[s_0,s_0']}(s) 1_{[x_0,x_0']}(y) |H(t',x';s,y) − H(t,x;s,y)|² dy ds)^{m/2}.

Let p ∈ (1,3/2) and q > 3 be such that 1/p + 1/q = 1. Then, by Hölder's inequality and the definition of H,

sup_{n ≥ 1} E (Δ_{s_0,x_0} Y_n(s_0',x_0'))^m
≤ C (∫_0^T ∫_0^1 1_{[s_0,s_0']}(s) 1_{[x_0,x_0']}(y) dy ds)^{m/(2q)} (∫_0^T ∫_0^1 |H(t',x';s,y) − H(t,x;s,y)|^{2p} dy ds)^{m/(2p)}
≤ C (x_0'−x_0)^{m/(2q)} (s_0'−s_0)^{m/(2q)} (∫_0^t ∫_0^1 |G_{t'−s}(x',y) − G_{t−s}(x,y)|^{2p} dy ds + ∫_t^{t'} ∫_0^1 |G_{t'−s}(x',y)|^{2p} dy ds)^{m/(2p)}. (20)

By Lemma 2.1, applied with α = 2p ∈ (2,3), the last factor on the right-hand side of (20) can be bounded, up to some constant, by

(|x−x'|^{3−2p} + |t−t'|^{(3−2p)/2})^{m/(2p)} ≤ C (|x−x'|^{m(3−2p)/(2p)} + |t−t'|^{m(3−2p)/(4p)}).

Therefore, if we plug this bound into (20) and take α = (3−2p)/(4p) and γ = 1/(2q), then we have proved (19), since |x−x'|^{2mα} ≤ |x−x'|^{mα} and p ∈ (1,3/2) is arbitrary, so that α covers all of (0,1/4). ✷

Remark 3.3
As can be deduced from the first part of the proof of Proposition 3.2, the restriction m > 8 has to be considered in order to be able to apply Theorem 2.2 and Kolmogorov's continuity criterion.

As a consequence of Propositions 3.1 and 3.2, we can state the following result on convergence in law for the processes X_n:

Theorem 3.4
Let {θ_n(t,x), (t,x) ∈ [0,T] × [0,1]}, n ∈ N, be a family of stochastic processes such that θ_n ∈ L²([0,T] × [0,1]) a.s. Assume that Hypotheses 1.1, 1.2 and 1.3 are satisfied.

Then, the family of stochastic processes {X_n, n ≥ 1} defined in (12) converges in law, as n tends to infinity, in the space C([0,T] × [0,1]), to the Gaussian process X given by (15).
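Since the limit process X in (15) is centered Gaussian, its variance is explicit: plugging the sine-series representation of G into E(X(t,x)²) = ∫_0^t ∫_0^1 G_{t−s}(x,y)² dy ds and using the L²([0,1])-orthogonality of the functions sin(nπ·) gives E(X(t,x)²) = Σ_{n≥1} sin²(nπx)(1 − e^{−2n²π²t})/(n²π²). The short check below is our own sketch (the derivation of the closed form and the explicit constant √(t/(2π)), which follows from the Gaussian domination of G recalled in Section 2, are ours; truncation level and test points are arbitrary).

```python
import math

def var_X(t: float, x: float, modes: int = 20000) -> float:
    """E[X(t,x)^2] via the sine expansion of the Dirichlet heat kernel."""
    return sum(math.sin(n * math.pi * x) ** 2
               * (1.0 - math.exp(-2.0 * n ** 2 * math.pi ** 2 * t))
               / (n ** 2 * math.pi ** 2)
               for n in range(1, modes + 1))

for t in (0.001, 0.01, 0.1, 0.5):
    v, bound = var_X(t, 0.5), math.sqrt(t / (2.0 * math.pi))
    # 0 < E[X(t,x)^2] <= sqrt(t/(2*pi)), an explicit instance of E[X^2] <= C t^{1/2}.
    assert 0.0 < v <= bound
    print(t, v, bound)
```

For small t the bound is nearly attained at interior points x, since the Dirichlet kernel is then close to the free heat kernel there.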
We can eventually extend the above result to the quasi-linear Equation (1), so that we end up with the proof of Theorem 1.4. This will be an immediate consequence of the above theorem and the next general result:
Theorem 3.5
Let {θ_n(t,x), (t,x) ∈ [0,T] × [0,1]}, n ∈ N, be a family of stochastic processes such that θ_n ∈ L²([0,T] × [0,1]) a.s. Assume that u : [0,1] → R is a continuous function and b : R → R is Lipschitz. Moreover, suppose that the family of stochastic processes {X_n, n ≥ 1} defined in (12) converges in law, as n tends to infinity, in the space C([0,T] × [0,1]), to the Gaussian process X given by (15).

Then, the family of stochastic processes {U_n, n ≥ 1} defined as the mild solutions of Equation (3) converges in law, in the space C([0,T] × [0,1]), to the mild solution U of Equation (1).

Proof: Let us first recall that we denote by U = {U(t,x), (t,x) ∈ [0,T] × [0,1]} the unique mild solution of Equation (1), which means that U fulfils

U(t,x) = ∫_0^1 G_t(x,y) u(y) dy + ∫_0^t ∫_0^1 G_{t−s}(x,y) b(U(s,y)) dy ds + ∫_0^t ∫_0^1 G_{t−s}(x,y) W(ds,dy), a.s.

The approximation sequence is denoted by {U_n, n ∈ N}, where U_n = {U_n(t,x), (t,x) ∈ [0,T] × [0,1]} is a stochastic process satisfying

U_n(t,x) = ∫_0^1 G_t(x,y) u(y) dy + ∫_0^t ∫_0^1 G_{t−s}(x,y) b(U_n(s,y)) dy ds + ∫_0^t ∫_0^1 G_{t−s}(x,y) θ_n(s,y) dy ds, a.s.,

where the noisy input θ_n has square integrable paths, a.s. Using the properties of the Green function (see Lemma 2.1), the fact that θ_n ∈ L²([0,T] × [0,1]) a.s. and standard arguments, one checks that U_n has continuous paths a.s., for all n ∈ N.

Next, for each continuous function η : [0,T] × [0,1] → R, consider the following (deterministic) integral equation:

z_η(t,x) = ∫_0^1 G_t(x,y) u(y) dy + ∫_0^t ∫_0^1 G_{t−s}(x,y) b(z_η(s,y)) dy ds + η(t,x).

As before, by the properties of G and the assumptions on u and b, it can be checked that this equation possesses a unique continuous solution.

Now, we will prove that the map

ψ : C([0,T] × [0,1]) → C([0,T] × [0,1]), η ↦ z_η,

is continuous with respect to the usual topology on this space. Indeed, given η_1, η_2 ∈ C([0,T] × [0,1]),

|z_{η_1}(t,x) − z_{η_2}(t,x)| ≤ ∫_0^t ∫_0^1 G_{t−s}(x,y) |b(z_{η_1}(s,y)) − b(z_{η_2}(s,y))| dy ds + |η_1(t,x) − η_2(t,x)|
≤ L ∫_0^t ∫_0^1 G_{t−s}(x,y) |z_{η_1}(s,y) − z_{η_2}(s,y)| dy ds + |η_1(t,x) − η_2(t,x)|, (21)

where L is the Lipschitz constant of the function b. For a given f ∈ C([0,T] × [0,1]), set ∥f∥_t = max_{s ∈ [0,t], x ∈ [0,1]} |f(s,x)|. By using this notation, we deduce that inequality (21) implies that, for any t ∈ [0,T],

∥z_{η_1} − z_{η_2}∥_t ≤ L ∫_0^t Ḡ(t−s) ∥z_{η_1} − z_{η_2}∥_s ds + ∥η_1 − η_2∥_T,

where

Ḡ(s) := sup_{x ∈ [0,1]} ∫_0^1 G_s(x,y) dy ≤ sup_{x ∈ [0,1]} ∫_0^1 (1/√(4πs)) exp(−(x−y)²/(4s)) dy ≤ C.

Applying now Gronwall's lemma, we obtain that there exists a finite constant A > 0 such that

∥z_{η_1} − z_{η_2}∥_T ≤ A ∥η_1 − η_2∥_T,

and, therefore, the map ψ is continuous.

Consider now

X_n(t,x) = ∫_0^t ∫_0^1 G_{t−s}(x,y) θ_n(s,y) dy ds and X(t,x) = ∫_0^t ∫_0^1 G_{t−s}(x,y) W(ds,dy).

By hypothesis, we have that X_n converges in law in C([0,T] × [0,1]) to X, as n goes to infinity. On the other hand, we have

U_n = ψ(X_n) and U = ψ(X),

and hence the continuity of ψ implies the convergence in law of U_n to U in C([0,T] × [0,1]). ✷

4 Convergence in law for the Kac–Stroock processes
This section is devoted to proving that the hypotheses of Theorem 1.4 are satisfied in the case where the approximation family is defined in terms of the Kac–Stroock processes θ_n set up in (9). That is,

X_n(t,x) = n ∫_0^t ∫_0^1 G_{t−s}(x,y) √(sy) (−1)^{N_n(s,y)} dy ds. (22)

First, we notice that Hypothesis 1.1 has been proved in [5]. The following proposition states that Hypothesis 1.2 is satisfied in this particular situation.

Proposition 4.1 Let θ_n be the Kac–Stroock processes. Then, for all p > 2, there exists a positive constant C_p such that

E (∫_0^T ∫_0^1 f(t,x) θ_n(t,x) dx dt)² ≤ C_p (∫_0^T ∫_0^1 |f(t,x)|^p dx dt)^{2/p}, (23)

for any f ∈ L^p([0,T] × [0,1]) and all n ≥ 1.

The proof of this proposition is based on the following technical lemma:
Lemma 4.2
Let f ∈ L²([0,T] × [0,1]) and α ≥ 1. Then, for any u, u' ∈ (0,1] satisfying 0 < u < u' ≤ 2^α u,

E (∫_0^T ∫_u^{u'} f(t,x) θ_n(t,x) dx dt)² ≤ (3/4)(2^{α+1} − 1) ∫_0^T ∫_u^{u'} f(t,x)² dx dt,

for all n ≥ 1.

Proof: First, we observe that

E (∫_0^T ∫_u^{u'} f(t,x) θ_n(t,x) dx dt)² = 2n² ∫_0^T ∫_u^{u'} ∫_0^T ∫_u^{u'} f(t_1,x_1) f(t_2,x_2) √(t_1 t_2 x_1 x_2) E[(−1)^{N_n(t_1,x_1)+N_n(t_2,x_2)}] 1_{{t_1 ≤ t_2}} dx_2 dt_2 dx_1 dt_1. (24)

The expectation appearing in (24) can be computed as in the proof of [6, Lemma 3.1] (see also [5, Lemma 3.2]). More precisely, one writes the sum N_n(t_1,x_1) + N_n(t_2,x_2) as a suitable sum of rectangular increments of N_n and applies that, if Z has a Poisson distribution with parameter λ, then E[(−1)^Z] = exp(−2λ). Hence, the term on the right-hand side of (24) admits a decomposition of the form I_1 + I_2, where

I_1 = 2n² ∫∫∫∫ f(t_1,x_1) f(t_2,x_2) √(t_1 t_2 x_1 x_2) exp{−2n[(t_2−t_1)x_2 + (x_2−x_1)t_1]} 1_{{t_1 ≤ t_2}} 1_{{x_1 ≤ x_2}} dx_2 dt_2 dx_1 dt_1,

I_2 = 2n² ∫∫∫∫ f(t_1,x_1) f(t_2,x_2) √(t_1 t_2 x_1 x_2) exp{−2n[(t_2−t_1)x_2 + (x_1−x_2)t_1]} 1_{{t_1 ≤ t_2}} 1_{{x_2 ≤ x_1}} dx_2 dt_2 dx_1 dt_1,

all the integrals being taken over [0,T] × [u,u'] in each pair of variables. Let us apply the inequality 2ab ≤ a² + b², a, b ∈ R, so that I_1 ≤ I_{11} + I_{12}, where the latter terms are defined by

I_{11} = n² ∫∫∫∫ f(t_1,x_1)² √(t_1 t_2 x_1 x_2) exp{−2n[(t_2−t_1)x_2 + (x_2−x_1)t_1]} 1_{{t_1 ≤ t_2}} 1_{{x_1 ≤ x_2}} dx_2 dt_2 dx_1 dt_1,

I_{12} = n² ∫∫∫∫ f(t_2,x_2)² √(t_1 t_2 x_1 x_2) exp{−2n[(t_2−t_1)x_2 + (x_2−x_1)t_1]} 1_{{t_1 ≤ t_2}} 1_{{x_1 ≤ x_2}} dx_2 dt_2 dx_1 dt_1.

In order to deal with the term I_{11}, we use the fact that exp{−2n(t_2−t_1)x_2} ≤ exp{−2n(t_2−t_1)x_1}, for x_1 ≤ x_2, and then integrate with respect to t_2 and x_2. Thus,

I_{11} ≤ n² ∫∫∫∫ f(t_1,x_1)² √(t_1 t_2 x_1 x_2) exp{−2n[(t_2−t_1)x_1 + (x_2−x_1)t_1]} 1_{{t_1 ≤ t_2}} 1_{{x_1 ≤ x_2}} dx_2 dt_2 dx_1 dt_1 ≤ (3/4) ∫_0^T ∫_u^{u'} f(t_1,x_1)² dx_1 dt_1. (25)

Concerning the term I_{12}, we use similar arguments as before and, moreover, we apply the fact that, for x_1, x_2 ∈ [u,u'), it holds that x_2 < 2^α x_1. Hence,

I_{12} ≤ 2^α n² ∫∫∫∫ f(t_2,x_2)² √(t_1 t_2 x_1 x_2) exp{−2n[(t_2−t_1)x_1 + (x_2−x_1)t_1]} 1_{{t_1 ≤ t_2}} 1_{{x_1 ≤ x_2}} dx_2 dt_2 dx_1 dt_1 ≤ ((2^α − 1)/2) ∫_0^T ∫_u^{u'} f(t_2,x_2)² dx_2 dt_2. (26)

The analysis of the term I_2 is slightly more involved. Namely, notice first that I_2 ≤ I_{21} + I_{22}, where

I_{21} = n² ∫∫∫∫ f(t_1,x_1)² √(t_1 t_2 x_1 x_2) exp{−2n[(t_2−t_1)x_2 + (x_1−x_2)t_1]} 1_{{t_1 ≤ t_2}} 1_{{x_2 ≤ x_1}} dx_2 dt_2 dx_1 dt_1,

I_{22} = n² ∫∫∫∫ f(t_2,x_2)² √(t_1 t_2 x_1 x_2) exp{−2n[(t_2−t_1)x_2 + (x_1−x_2)t_1]} 1_{{t_1 ≤ t_2}} 1_{{x_2 ≤ x_1}} dx_2 dt_2 dx_1 dt_1.

For the term I_{21}, we simply use that, by hypothesis, x_1 ≤ 2^α x_2, and we integrate with respect to t_2 and x_2, so that we end up with

I_{21} ≤ ((2^α − 1)/2) ∫_0^T ∫_u^{u'} f(t_1,x_1)² dx_1 dt_1. (27)

The term I_{22} is more delicate. Namely, taking into account the integration region in I_{22} as well as the fact that x_1 − x_2 ≤ (2^α − 1) x_2 (because x_1 ≤ 2^α x_2), it holds

2(t_2−t_1)x_2 + 2(x_1−x_2)t_1 ≥ (t_2−t_1)x_2 + (1/(2^α − 1))(x_1−x_2)t_1.

Therefore,

I_{22} ≤ n² ∫∫∫∫ f(t_2,x_2)² √(t_1 t_2 x_1 x_2) exp{−n[(t_2−t_1)x_2 + (1/(2^α − 1))(x_1−x_2)t_1]} 1_{{t_1 ≤ t_2}} 1_{{x_2 ≤ x_1}} dx_2 dt_2 dx_1 dt_1 ≤ ((2^α − 1)/2) ∫_0^T ∫_u^{u'} f(t_2,x_2)² dx_2 dt_2, (28)

where the latter expression has been obtained after integrating with respect to t_1 and x_1. We conclude the proof by putting together (25)–(28), since (3/4) + 3(2^α − 1)/2 = (3/4)(2^{α+1} − 1). ✷

Proof of Proposition 4.1:
Let us consider the following dyadic-type partition of (0,1]:

(0,1] = ∪_{k=0}^∞ (a_{k+1}, a_k], with a_k = 2^{−kα}, for some α ≥ 1.

In particular, observe that a_k = 2^α a_{k+1} and a_k − a_{k+1} = (2^α − 1) 2^{−(k+1)α}, so that we are in position to apply Lemma 4.2: for all k ≥ 0,

E (∫_0^T ∫_{a_{k+1}}^{a_k} f(t,x) θ_n(t,x) dx dt)² ≤ (3/4)(2^{α+1} − 1) ∫_0^T ∫_{a_{k+1}}^{a_k} f(t,x)² dx dt.

Hence, by the Cauchy–Schwarz inequality,

E (∫_0^T ∫_0^1 f(t,x) θ_n(t,x) dx dt)² = E (Σ_{k=0}^∞ ∫_0^T ∫_{a_{k+1}}^{a_k} f(t,x) θ_n(t,x) dx dt)²
≤ Σ_{k=0}^∞ 2^{k+1} E (∫_0^T ∫_{a_{k+1}}^{a_k} f(t,x) θ_n(t,x) dx dt)²
≤ (3/4)(2^{α+1} − 1) Σ_{k=0}^∞ 2^{k+1} ∫_0^T ∫_{a_{k+1}}^{a_k} f(t,x)² dx dt. (29)

Let p > 2 and q > 1 be such that 2/p + 1/q = 1. Then, applying Hölder's inequality, the last term of (29) can be bounded by

(3/4)(2^{α+1} − 1) Σ_{k=0}^∞ 2^{k+1} (∫_0^T ∫_{a_{k+1}}^{a_k} |f(t,x)|^p dx dt)^{2/p} (a_k − a_{k+1})^{1/q}
≤ (3/4)(2^{α+1} − 1)(2^α − 1)^{1/q} (∫_0^T ∫_0^1 |f(t,x)|^p dx dt)^{2/p} Σ_{k=0}^∞ 2^{(k+1)(1 − α/q)}, (30)

and this series is convergent whenever we take α such that α > q. Hence, expression (30) may be bounded by

(3/2)(2^{α+1} − 1)(2^α − 1)^{1/q} (2^{α/q} − 2)^{−1} (∫_0^T ∫_0^1 |f(t,x)|^p dx dt)^{2/p},

which implies that the proof is complete. ✷

Remark 4.3
It is worth noticing that, in the statement of Proposition 4.1, we have not been able to obtain the validity of the result for $p = 1$. Indeed, as can be deduced from its proof, the constant $C_p$ in (23) blows up as $p \to 1$ (because $q \to \infty$, and hence $\alpha \to \infty$).

By Proposition 3.1, a consequence of Proposition 4.1 is that the finite dimensional distributions of $X^n$ (see (22)) converge, as $n$ tends to infinity, to those of
$$X(t,x) = \int_0^t\!\!\int_0^1 G_{t-s}(x,y)\,W(ds,dy).$$
In order to prove that Theorem 1.4 applies to the Kac–Stroock processes, it only remains to verify that Hypothesis 1.3 is satisfied. In fact, this is given by the following result:

Proposition 4.4
Let $\theta_n$ be the Kac–Stroock kernels. Then, for any even $m \in \mathbb{N}$, there exists a positive constant $C_m$ such that, for all $s_0, s' \in [0,T]$ and $x_0, x' \in [0,1]$ satisfying $0 < s_0 < s' < 2s_0$ and $0 < x_0 < x' < 2x_0$, we have that
$$\sup_{n \ge 1}\; E\Big(\int_{s_0}^{s'}\!\!\int_{x_0}^{x'} f(s,y)\,\theta_n(s,y)\,dy\,ds\Big)^{m} \le C_m\Big(\int_{s_0}^{s'}\!\!\int_{x_0}^{x'} f(s,y)^2\,dy\,ds\Big)^{m/2},$$
for any $f \in L^2([0,T]\times[0,1])$.

Proof: To begin with, define
$$Z_n(s_0,x_0) := \int_0^{s_0}\!\!\int_0^{x_0} f(s,y)\,\theta_n(s,y)\,dy\,ds,$$
and observe that we can apply the same arguments as in the proof of [6, Lemma 3.3] (see p. 324 therein) in order to obtain the following estimate for the rectangular increment $\Delta_{s_0,x_0}Z_n(s',x')$:
$$E\big(\Delta_{s_0,x_0}Z_n(s',x')\big)^{m} \le m!\,n^{m}\int_{[0,T]^m\times[0,1]^m} \prod_{i=1}^{m}\big(\mathbf{1}_{[s_0,s']}(s_i)\,\mathbf{1}_{[x_0,x']}(y_i)\,f(s_i,y_i)\,\sqrt{s_i y_i}\big) \times \exp\big\{-2n[(s_m-s_{m-1})y_{(m-1)} + \cdots + (s_2-s_1)y_{(1)}]\big\} \times \exp\big\{-2n[(y_{(m)}-y_{(m-1)})s_{m-1} + \cdots + (y_{(2)}-y_{(1)})s_1]\big\} \times \mathbf{1}_{\{s_1\le\cdots\le s_m\}}\,ds_1\cdots ds_m\,dy_1\cdots dy_m,$$
where $y_{(1)},\dots,y_{(m)}$ denote the variables $y_1,\dots,y_m$ ordered increasingly. Hence, since $\sqrt{s_iy_i} \le 2\sqrt{s_0x_0}$ (because $s_i \le s' < 2s_0$ and $y_i \le x' < 2x_0$), while $s_i \ge s_0$ and $y_{(i)} \ge x_0$ on the domain of integration,
$$E\big(\Delta_{s_0,x_0}Z_n(s',x')\big)^{m} \le 2^{m}\,(s_0x_0)^{m/2}\,m!\,n^{m}\int_{[0,T]^m\times[0,1]^m}\prod_{i=1}^{m}\big(\mathbf{1}_{[s_0,s']}(s_i)\,\mathbf{1}_{[x_0,x']}(y_i)\,f(s_i,y_i)\big) \times \exp\big\{-2nx_0[(s_m-s_{m-1}) + \cdots + (s_2-s_1)]\big\} \times \exp\big\{-2ns_0[(y_{(m)}-y_{(m-1)}) + \cdots + (y_{(2)}-y_{(1)})]\big\} \times \mathbf{1}_{\{s_1\le\cdots\le s_m\}}\,ds_1\cdots ds_m\,dy_1\cdots dy_m. \qquad (31)$$
Notice that in (31) we have not been able to order the variables $y_1,\dots,y_m$, because neither the function $(s,y)\mapsto f(s,y)$ factorizes nor $(y_1,\dots,y_m)\mapsto f(s_1,y_1)\cdots f(s_m,y_m)$ is symmetric. However, the fact that the variables $s_i$ are ordered determines $m/2$ couples $(s_1,s_2), (s_3,s_4), \dots, (s_{m-1},s_m)$, such that the second element in each couple is greater than or equal to the first one. Concerning the variables $y_i$, we also have $m/2$ couples $(y_{(1)},y_{(2)}), \dots$
, $(y_{(m-1)},y_{(m)})$, satisfying the same property.

The key point of the proof relies on factorizing the product appearing in (31) into two convenient products:
$$\prod_{j=1}^{m/2}\big(\mathbf{1}_{[s_0,s']}(s_{i_j})\,\mathbf{1}_{[x_0,x']}(y_{i_j})\,f(s_{i_j},y_{i_j})\big)\;\prod_{k=1}^{m/2}\big(\mathbf{1}_{[s_0,s']}(s_{r_k})\,\mathbf{1}_{[x_0,x']}(y_{r_k})\,f(s_{r_k},y_{r_k})\big),$$
where $I = \{i_j,\ j=1,\dots,m/2\}$ and $R = \{r_k,\ k=1,\dots,m/2\}$ are two disjoint subsequences of $\{1,\dots,m\}$. In particular, it holds that $I \uplus R = \{1,\dots,m\}$. These subsequences will be chosen using the following rule: any couple $(s_i,s_{i+1})$ will contain an element of the form $s_{i_j}$ and one of the form $s_{r_k}$, and any couple $(y_{(i)},y_{(i+1)})$ will contain an element of the form $y_{i_j}$ and one of the form $y_{r_k}$. For this, we will split the $m$ elements $f(s_1,y_1),\dots,f(s_m,y_m)$ into two groups of $m/2$ elements:
$$A = \{f(s_{i_1},y_{i_1}),\dots,f(s_{i_{m/2}},y_{i_{m/2}})\}, \qquad B = \{f(s_{r_1},y_{r_1}),\dots,f(s_{r_{m/2}},y_{r_{m/2}})\}.$$
In order to determine the elements of each group so that the above condition is satisfied, we proceed by an iterative method: we start with an element of $A$ and associate to it an element of $B$ satisfying what we want; then, to the latter element of $B$ we associate a suitable element of $A$, and so on. More precisely, we start, say, with $f(s_{i_1},y_{i_1}) = f(s_1,y_1)$. Then, if at any step of the iteration procedure we have an element $f(s_{i_j},y_{i_j}) \in A$, we associate to it an element $f(s_{r_k},y_{r_k}) \in B$ in such a way that $\{s_{i_j},s_{r_k}\}$ forms one of the couples $(s_i,s_{i+1})$. On the other hand, if at any step of the iteration procedure we have an element $f(s_{r_k},y_{r_k}) \in B$, then we associate to it an element $f(s_{i_j},y_{i_j}) \in A$ such that $\{y_{i_j},y_{r_k}\}$ determines one of the couples $(y_{(i)},y_{(i+1)})$. The only thing that remains to be clarified is what to do in case, at some step, we end up with an element of $A$ or $B$ which has already appeared before. In this case, we do not take the latter element, but another one which has not been chosen so far.

Let us illustrate the above-described procedure for $m = 8$: fixing an ordering of $y_1,\dots,y_8$ (which determines the $y_{(i)}$) and recalling that $s_1 \le \cdots \le s_8$, one starts with $f(s_1,y_1) \in A$ and alternately adjoins elements of $B$ and of $A$ following the rule above, ending up with groups $A$ and $B$ of four elements each, such that any couple $(s_{2i-1},s_{2i})$ (resp. $(y_{(2i-1)},y_{(2i)})$) contains one variable of the group $A$ and one of the group $B$.

We can now come back to the analysis of the right-hand side of (31): applying the inequality $ab \le \frac12(a^2+b^2)$ to the two products above, it can be estimated by $2^{m-1}(s_0x_0)^{m/2}\,m!\,(J_1+J_2)$, with
$$J_1 = n^{m}\int_{[0,T]^m\times[0,1]^m}\prod_{i_j\in I}\big(\mathbf{1}_{[s_0,s']}(s_{i_j})\,\mathbf{1}_{[x_0,x']}(y_{i_j})\,f(s_{i_j},y_{i_j})^2\big) \times \exp\big\{-2nx_0[(s_m-s_{m-1})+\cdots+(s_2-s_1)]\big\} \times \exp\big\{-2ns_0[(y_{(m)}-y_{(m-1)})+\cdots+(y_{(2)}-y_{(1)})]\big\} \times \mathbf{1}_{\{s_1\le\cdots\le s_m\}}\,ds_1\cdots ds_m\,dy_1\cdots dy_m,$$
$$J_2 = n^{m}\int_{[0,T]^m\times[0,1]^m}\prod_{r_k\in R}\big(\mathbf{1}_{[s_0,s']}(s_{r_k})\,\mathbf{1}_{[x_0,x']}(y_{r_k})\,f(s_{r_k},y_{r_k})^2\big) \times \exp\big\{-2nx_0[(s_m-s_{m-1})+\cdots+(s_2-s_1)]\big\} \times \exp\big\{-2ns_0[(y_{(m)}-y_{(m-1)})+\cdots+(y_{(2)}-y_{(1)})]\big\} \times \mathbf{1}_{\{s_1\le\cdots\le s_m\}}\,ds_1\cdots ds_m\,dy_1\cdots dy_m.$$
We will only deal with the term $J_1$, since $J_2$ can be treated using exactly the same arguments. The idea is to integrate in $J_1$ with respect to the variables $s_{r_k}, y_{r_k}$, $r_k \in R$, for $k = 1,\dots,m/2$. Recall that these variables have been chosen in such a way that each of them appears in exactly one couple $(s_i,s_{i+1})$ (resp. $(y_{(i)},y_{(i+1)})$). Observe that we have, for any $k = 1,\dots,m/2$, either
$$\int_{s_0}^{s'} \exp\big\{-2nx_0(s_{r_k}-s_i)\big\}\,\mathbf{1}_{\{s_i\le s_{r_k}\}}\,ds_{r_k} \le \frac{C}{n x_0} \qquad\text{or}\qquad \int_{s_0}^{s'} \exp\big\{-2nx_0(s_{i+1}-s_{r_k})\big\}\,\mathbf{1}_{\{s_{r_k}\le s_{i+1}\}}\,ds_{r_k} \le \frac{C}{n x_0},$$
for some $s_i$ and $s_{i+1}$, depending on the position occupied by $s_{r_k}$ in the corresponding couple. For the integrals with respect to $y_{r_k}$, one obtains the same type of bound, with $\frac{C}{n s_0}$ on the right-hand side. Therefore,
$$J_1 \le C_m\,(s_0x_0)^{-m/2}\int_{[0,T]^{m/2}\times[0,1]^{m/2}}\prod_{j=1}^{m/2}\big(\mathbf{1}_{[s_0,s']}(s_{i_j})\,\mathbf{1}_{[x_0,x']}(y_{i_j})\,f(s_{i_j},y_{i_j})^2\big)\,ds_{i_1}\cdots ds_{i_{m/2}}\,dy_{i_1}\cdots dy_{i_{m/2}} = C_m\,(s_0x_0)^{-m/2}\Big(\int_0^T\!\!\int_0^1 \mathbf{1}_{[s_0,s']}(s)\,\mathbf{1}_{[x_0,x']}(y)\,f(s,y)^2\,dy\,ds\Big)^{m/2}. \qquad (32)$$
As it has been mentioned, one can use the same arguments to get the same upper bound for $J_2$. Hence, the factor $(s_0x_0)^{m/2}$ in front of $J_1+J_2$ cancels, the right-hand side of (31) can be estimated by $C_m\big(\int_{s_0}^{s'}\int_{x_0}^{x'} f(s,y)^2\,dy\,ds\big)^{m/2}$, and this concludes the proof. ✷

In this section, we aim to prove that the hypotheses of Theorem 1.4 are satisfied in the case where the approximation sequence is constructed in terms of the Donsker kernels. Namely, we consider a family $\{Z_k,\ k \in \mathbb{N}^2\}$ of independent, identically distributed and centered random variables with $E(Z_k^2) = 1$ for all $k \in \mathbb{N}^2$, and such that $E(|Z_k|^{m}) < +\infty$ for all $k \in \mathbb{N}^2$ and some even number $m \ge$
10. Then, for all $n \ge 1$ and $(t,x) \in [0,T]\times[0,1]$, the Donsker kernels are defined by
$$\theta_n(t,x) = n\sum_{k=(k_1,k_2)\in\mathbb{N}^2} Z_k\,\mathbf{1}_{[k_1-1,k_1)\times[k_2-1,k_2)}(tn,\,xn),$$
and we consider the corresponding sequence of processes
$$X^n(t,x) = \int_0^t\!\!\int_0^1 G_{t-s}(x,y)\,\theta_n(s,y)\,dy\,ds, \qquad (t,x)\in[0,T]\times[0,1]. \qquad (33)$$
Recall that Hypothesis 1.1 is a consequence of the extension of Donsker's theorem to the plane (see, for instance, [37]). On the other hand, we have the following result:

Lemma 5.1
Let $\theta_n$ be the above defined Donsker kernels. Then, there exists a positive constant $C_m$ such that, for any $f \in L^2([0,T]\times[0,1])$, we have
$$E\Big(\int_0^T\!\!\int_0^1 f(t,x)\,\theta_n(t,x)\,dx\,dt\Big)^{m} \le C_m\Big(\int_0^T\!\!\int_0^1 f(t,x)^2\,dx\,dt\Big)^{m/2}, \qquad (34)$$
for all $n \ge 1$.

Remark 5.2
Notice that, taking into account that $m \ge 10$, inequality (34) implies both Hypotheses 1.2 and 1.3, so that the hypotheses of Theorem 1.4 are satisfied for the Donsker kernels.

Proof of Lemma 5.1: First, we observe that we can write
$$E\Big(\int_0^T\!\!\int_0^1 f(t,x)\,\theta_n(t,x)\,dx\,dt\Big)^{m} = \int_{[0,T]^m\times[0,1]^m} f(t_1,x_1)\cdots f(t_m,x_m)\,E\Big[\prod_{j=1}^{m}\theta_n(t_j,x_j)\Big]\,dt_1\cdots dt_m\,dx_1\cdots dx_m. \qquad (35)$$
By definition of $\theta_n$,
$$E\Big[\prod_{j=1}^{m}\theta_n(t_j,x_j)\Big] = n^{m}\,E\Big[\prod_{j=1}^{m}\sum_{k=(k_1,k_2)\in\mathbb{N}^2} Z_k\,\mathbf{1}_{[k_1-1,k_1)}(t_j n)\,\mathbf{1}_{[k_2-1,k_2)}(x_j n)\Big] = n^{m}\sum_{k^1,\dots,k^m\in\mathbb{N}^2} E\big(Z_{k^1}\cdots Z_{k^m}\big)\prod_{j=1}^{m}\Big(\mathbf{1}_{[k_1^j-1,\,k_1^j)}(t_j n)\,\mathbf{1}_{[k_2^j-1,\,k_2^j)}(x_j n)\Big).$$
Notice that, by hypothesis, $E(Z_{k^1}\cdots Z_{k^m}) = 0$ if, for some $j \in \{1,\dots,m\}$, we have $k^j \ne k^l$ for all $l \in \{1,\dots,m\}\setminus\{j\}$; that is, if some variable $Z_{k^j}$ appears only once in the product $Z_{k^1}\cdots Z_{k^m}$. On the other hand, since $E(|Z_k|^{m}) < \infty$ for all $k \in \mathbb{N}^2$, the quantity $E(Z_{k^1}\cdots Z_{k^m})$ is bounded, uniformly over $k^1,\dots,k^m \in \mathbb{N}^2$. Hence,
$$E\Big[\prod_{j=1}^{m}\theta_n(t_j,x_j)\Big] \le n^{m}\,C_m\sum_{(k^1,\dots,k^m)\in A_m}\prod_{j=1}^{m}\Big(\mathbf{1}_{[k_1^j-1,\,k_1^j)}(t_j n)\,\mathbf{1}_{[k_2^j-1,\,k_2^j)}(x_j n)\Big),$$
where
$$A_m = \big\{(k^1,\dots,k^m)\in(\mathbb{N}^2)^m:\ \text{for all } l\in\{1,\dots,m\},\ k^l = k^j\ \text{for some } j\in\{1,\dots,m\}\setminus\{l\}\big\}.$$
Notice that we have the following estimate:
$$\sum_{(k^1,\dots,k^m)\in A_m}\prod_{j=1}^{m}\Big(\mathbf{1}_{[k_1^j-1,\,k_1^j)}(t_j n)\,\mathbf{1}_{[k_2^j-1,\,k_2^j)}(x_j n)\Big) \le \mathbf{1}_{D_m}(t_1,\dots,t_m;\,x_1,\dots,x_m),$$
where $D_m$ denotes the set of $(t_1,\dots,t_m;\,x_1,\dots,x_m) \in [0,T]^m\times[0,1]^m$ satisfying the following property: for all $l \in \{1,\dots,m\}$ there exists $j \in \{1,\dots,m\}\setminus\{l\}$ such that $|t_j-t_l| < \frac1n$ and $|x_j-x_l| < \frac1n$ and, moreover, if there is some $r \ne j,l$ verifying $|t_l-t_r| < \frac1n$ and $|x_l-x_r| < \frac1n$, then $|t_j-t_r| < \frac1n$ and $|x_j-x_r| < \frac1n$.

Next, observe that we can bound $\mathbf{1}_{D_m}(t_1,\dots,t_m;\,x_1,\dots,x_m)$ by a finite sum of products of indicators, where in each product there appear all the $m$ variables $t_1,\dots,t_m$ and all the $m$ variables $x_1,\dots,x_m$, but each indicator concerns only two or three of them. Moreover, each variable appears in only one of the indicators of each product and, whenever we have some indicator concerning two variables $t_j$ and $t_l$ (respectively, three variables $t_j, t_l, t_r$), we have the same indicator for the variables $x_j$ and $x_l$ (respectively, for $x_j, x_l, x_r$). Therefore, expression (35) can be bounded by a finite sum of products of the following two kinds of terms:

(i) For some $l, j \in \{1,\dots,m\}$ such that $l \ne j$,
$$C_m\,n^{2}\int_{[0,T]^2\times[0,1]^2} |f(t_l,x_l)|\,|f(t_j,x_j)|\,\mathbf{1}_{[0,\frac1n)}(|t_j-t_l|)\,\mathbf{1}_{[0,\frac1n)}(|x_j-x_l|)\,dt_j\,dt_l\,dx_j\,dx_l. \qquad (36)$$

(ii) For some $l, j, r \in \{1,\dots,m\}$ such that $l \ne j$, $l \ne r$ and $r \ne j$,
$$C_m\,n^{3}\int_{[0,T]^3\times[0,1]^3} |f(t_l,x_l)|\,|f(t_j,x_j)|\,|f(t_r,x_r)|\,\mathbf{1}_{[0,\frac1n)}(|t_j-t_l|)\,\mathbf{1}_{[0,\frac1n)}(|t_l-t_r|)\,\mathbf{1}_{[0,\frac1n)}(|t_j-t_r|)\,\mathbf{1}_{[0,\frac1n)}(|x_j-x_l|)\,\mathbf{1}_{[0,\frac1n)}(|x_l-x_r|)\,\mathbf{1}_{[0,\frac1n)}(|x_j-x_r|)\,dt_j\,dt_l\,dt_r\,dx_j\,dx_l\,dx_r.$$

Then, it turns out that, in order to conclude the proof, it suffices to bound the terms of type (i) by $C_m\int_0^T\!\int_0^1 f(t,x)^2\,dx\,dt$ and those of type (ii) by $C_m\big(\int_0^T\!\int_0^1 f(t,x)^2\,dx\,dt\big)^{3/2}$. Let us first use the fact that, for all $a,b \in \mathbb{R}$, $2ab \le a^2+b^2$, so that a term of the form (36) can be bounded, up to some constant, by
$$C_m\,n^{2}\int_{[0,T]^2\times[0,1]^2} f(t_l,x_l)^2\,\mathbf{1}_{[0,\frac1n)}(|t_j-t_l|)\,\mathbf{1}_{[0,\frac1n)}(|x_j-x_l|)\,dt_j\,dt_l\,dx_j\,dx_l \le C_m\int_0^T\!\!\int_0^1 f(t,x)^2\,dx\,dt.$$
On the other hand, using that for all $a,b,c \in \mathbb{R}_+$, $2abc \le ab^2 + ac^2$, we can study the terms of type (ii) in the following way:
$$C_m\,n^{3}\int_{[0,T]^3\times[0,1]^3} |f(t_l,x_l)|\,|f(t_j,x_j)|\,|f(t_r,x_r)|\,\mathbf{1}_{[0,\frac1n)}(|t_j-t_l|)\,\mathbf{1}_{[0,\frac1n)}(|t_l-t_r|)\,\mathbf{1}_{[0,\frac1n)}(|t_j-t_r|)\,\mathbf{1}_{[0,\frac1n)}(|x_j-x_l|)\,\mathbf{1}_{[0,\frac1n)}(|x_l-x_r|)\,\mathbf{1}_{[0,\frac1n)}(|x_j-x_r|)\,dt_j\,dt_l\,dt_r\,dx_j\,dx_l\,dx_r$$
$$\le C_m\,n^{3}\int_{[0,T]^3\times[0,1]^3} |f(t_l,x_l)|\,f(t_j,x_j)^2\,\mathbf{1}_{[0,\frac1n)}(|t_j-t_l|)\,\mathbf{1}_{[0,\frac1n)}(|t_l-t_r|)\,\mathbf{1}_{[0,\frac1n)}(|x_j-x_l|)\,\mathbf{1}_{[0,\frac1n)}(|x_l-x_r|)\,dt_j\,dt_l\,dt_r\,dx_j\,dx_l\,dx_r$$
$$\le C_m\,n\int_{[0,T]^2\times[0,1]^2} |f(t_l,x_l)|\,f(t_j,x_j)^2\,\mathbf{1}_{[0,\frac1n)}(|t_j-t_l|)\,\mathbf{1}_{[0,\frac1n)}(|x_j-x_l|)\,dt_j\,dt_l\,dx_j\,dx_l = C_m\,n\int_0^T\!\!\int_0^1 |f(t_l,x_l)|\Big(\int_0^T\!\!\int_0^1 f(t_j,x_j)^2\,\mathbf{1}_{[0,\frac1n)}(|t_j-t_l|)\,\mathbf{1}_{[0,\frac1n)}(|x_j-x_l|)\,dt_j\,dx_j\Big)\,dt_l\,dx_l,$$
where in the last inequality we have integrated with respect to $t_r, x_r$. At this point, we apply the Cauchy–Schwarz inequality, so that the latter expression can be estimated by
$$C_m\,n\Big(\int_0^T\!\!\int_0^1 f(t_l,x_l)^2\,dt_l\,dx_l\Big)^{1/2}\Big(\int_0^T\!\!\int_0^1\Big(\int_0^T\!\!\int_0^1 f(t_j,x_j)^2\,\mathbf{1}_{[0,\frac1n)}(|t_j-t_l|)\,\mathbf{1}_{[0,\frac1n)}(|x_j-x_l|)\,dt_j\,dx_j\Big)^{2}\,dt_l\,dx_l\Big)^{1/2}$$
$$= C_m\,n\Big(\int_0^T\!\!\int_0^1 f(t_l,x_l)^2\,dt_l\,dx_l\Big)^{1/2}\Big(\int_{[0,T]^3\times[0,1]^3} f(t_j,x_j)^2\,f(t_p,x_p)^2\,\mathbf{1}_{[0,\frac1n)}(|t_j-t_l|)\,\mathbf{1}_{[0,\frac1n)}(|x_j-x_l|)\,\mathbf{1}_{[0,\frac1n)}(|t_p-t_l|)\,\mathbf{1}_{[0,\frac1n)}(|x_p-x_l|)\,dt_j\,dt_p\,dt_l\,dx_j\,dx_p\,dx_l\Big)^{1/2} \le C_m\Big(\int_0^T\!\!\int_0^1 f(t,x)^2\,dx\,dt\Big)^{3/2}.$$
This finishes the proof of the lemma. ✷

A Appendix
In this appendix, we give a sketch of the proof of Lemma 2.3 and discuss the relationbetween our results and those of Manthey in [24] (see also [25]).
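For concreteness, the two families of approximating noises compared with Manthey's conditions below can both be simulated on a grid. The following numpy sketch is purely illustrative and not part of the paper: it assumes that the Kac–Stroock kernel has the form θ_n(t,x) = n√(tx)(−1)^{N_n(t,x)}, with N_n a two-parameter Poisson process whose count over a rectangle of area A is taken to have mean nA (consistent with the exponential rates e^{−2n[·]} appearing in the covariance computations), and it uses Rademacher variables for the Z_k of the Donsker kernels; the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def kac_stroock(n, T=1.0, grid=100):
    """Kac-Stroock-type kernel theta_n(t,x) = n*sqrt(t*x)*(-1)**N_n(t,x).

    Assumption (ours): N_n is a two-parameter Poisson process whose count
    over a rectangle of area A has mean n*A; cell counts are accumulated
    over the grid to obtain N_n at the grid points.
    """
    dt, dx = T / grid, 1.0 / grid
    cells = rng.poisson(n * dt * dx, size=(grid, grid))
    N = cells.cumsum(axis=0).cumsum(axis=1)          # N_n(t_i, x_j)
    t = (np.arange(1, grid + 1) * dt)[:, None]
    x = (np.arange(1, grid + 1) * dx)[None, :]
    return n * np.sqrt(t * x) * (-1.0) ** N

def donsker(n, T=1.0, grid=100):
    """Donsker kernel theta_n(t,x) = n * Z_{(ceil(t*n), ceil(x*n))}.

    The Z_k, k in N^2, are i.i.d., centered and of unit variance;
    Rademacher (+-1) variables are used here, so all moments are finite.
    """
    Z = rng.choice([-1.0, 1.0], size=(int(np.ceil(T * n)), n))
    t = np.arange(1, grid + 1) * (T / grid)
    x = np.arange(1, grid + 1) * (1.0 / grid)
    i = np.minimum(np.ceil(t * n).astype(int), Z.shape[0]) - 1
    j = np.minimum(np.ceil(x * n).astype(int), Z.shape[1]) - 1
    return n * Z[np.ix_(i, j)]

theta_ks = kac_stroock(n=50)
theta_d = donsker(n=50)
```

Each function returns a grid×grid array of kernel values on [0,T]×[0,1]; integrating f·θ_n against such arrays gives discrete versions of the smoothed integrals whose limiting behaviour is analysed in the body of the paper.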
Proof of Lemma 2.3:
As we have already pointed out, we will only give the main lines of the proof.

Let $f \in \mathcal{E}$ and let $h \in C^1(\mathbb{R})$ have a bounded derivative. We aim to prove that, for any $\eta > 0$, it holds
$$\big|E[h(J_n(f))] - E[h(J(f))]\big| < \eta, \qquad (37)$$
for sufficiently big $n$. For this, the idea is to consider an element $g$ in $\mathcal{D}$ which is close to $f$ with respect to the norm $\|\cdot\|$. Then, one splits the left-hand side of (37) into several terms, which can be easily treated using the following facts:

1. When $f$ is replaced by $g$, the left-hand side of (37) converges to zero, by hypothesis.

2. One keeps control of the remaining terms using that $h$ defines a Lipschitz function and that (13) and (14) hold. ✷

Relation with Manthey's results
In [24], the author considers the family of processes $\{X^n,\ n \in \mathbb{N}\}$ such that each $X^n$ is the mild solution of the equation
$$\frac{\partial X^n}{\partial t}(t,x) - \frac{\partial^2 X^n}{\partial x^2}(t,x) = \theta_n(t,x), \qquad (t,x) \in [0,T]\times[0,1],$$
with null initial condition and Dirichlet boundary conditions. The processes $\theta_n$ are correlated noises satisfying the following conditions:

(i) For all $(t,x) \in [0,T]\times[0,1]$, $\displaystyle\int_0^t\!\!\int_0^x \theta_n(s,y)\,dy\,ds < \infty$, a.s.

(ii) For each $m \in \mathbb{N}$ and $(t_1,x_1),\dots,(t_m,x_m) \in [0,T]\times[0,1]$,
$$\Big(\int_0^{t_1}\!\!\int_0^{x_1}\theta_n(s,y)\,dy\,ds,\ \dots,\ \int_0^{t_m}\!\!\int_0^{x_m}\theta_n(s,y)\,dy\,ds\Big)$$
converges weakly to $(W(t_1,x_1),\dots,W(t_m,x_m))$, where we recall that $\{W(t,x),\ (t,x)\in[0,T]\times[0,1]\}$ denotes a Brownian sheet.

(iii) For all $(t,x) \in [0,T]\times[0,1]$, $E[\theta_n(t,x)] = 0$.

(iv) There exists $n_0 \in \mathbb{N}$ such that
$$\sup_{n \ge n_0}\ \sup_{(t,x)\in[0,T]\times[0,1]}\ \int_0^T\!\!\int_0^1 \big|E\big[\theta_n(s,y)\,\theta_n(t,x)\big]\big|\,dy\,ds < \infty.$$

Under these conditions, it is proved in [24] that $X^n$ converges weakly, in the sense of the convergence of finite dimensional distributions, to the process $X$ which is the mild solution of
$$\frac{\partial X}{\partial t}(t,x) - \frac{\partial^2 X}{\partial x^2}(t,x) = \dot{W}(t,x), \qquad (t,x) \in [0,T]\times[0,1].$$
Furthermore, it is shown that, if the processes $\theta_n$ are Gaussian, the convergence also holds in $C([0,T]\times[0,1])$. Notice that condition (iv) is related to the hypothesis of our Proposition 3.1 with $q = 2$. Therefore, the hypotheses assumed in Proposition 3.1 (which assure the convergence of the finite dimensional distributions) are weaker than (i)–(iv).

Finally, the processes $\theta_n$ given by the Kac–Stroock processes and the Donsker kernels are not Gaussian, so that, even if conditions (i)–(iv) were satisfied, using Manthey's result only convergence of the finite dimensional distributions could be obtained. In fact, it is straightforward to check that the Donsker kernels satisfy these conditions, but condition (iv) fails for the Kac–Stroock processes. This is proved in the following lemma:

Lemma A.1
Assume that $\{\theta_n(t,x),\ (t,x) \in [0,T]\times[0,1]\}$, $n \ge 1$, is the Kac–Stroock process (9). Then, the family $\{\theta_n,\ n \in \mathbb{N}\}$ does not satisfy condition (iv) above.

Proof: We will show that, when $\theta_n(s,y) = n\sqrt{sy}\,(-1)^{N_n(s,y)}$, the quantity
$$\int_0^T\!\!\int_0^1 \big|E\big[\theta_n(s,y)\,\theta_n(t,x)\big]\big|\,dy\,ds$$
is not uniformly bounded in $n, t$ and $x$. Indeed, it holds
$$\int_0^T\!\!\int_0^1 \big|E\big(\theta_n(s,y)\,\theta_n(t,x)\big)\big|\,dy\,ds = \int_0^T\!\!\int_0^1 n^2\sqrt{sytx}\;E\big[(-1)^{N_n(s,y)+N_n(t,x)}\big]\,dy\,ds. \qquad (38)$$
Owing to the proof of [6, Lemma 3.1] (see also [5, Lemma 3.2]), we have that
$$E\big[(-1)^{N_n(s,y)+N_n(t,x)}\big] = e^{-2n[(t-s)x+(x-y)s]}\,\mathbf{1}_{\{s\le t,\,y\le x\}} + e^{-2n[(t-s)x+(y-x)s]}\,\mathbf{1}_{\{s\le t,\,y\ge x\}} + e^{-2n[(s-t)y+(x-y)t]}\,\mathbf{1}_{\{s\ge t,\,y\le x\}} + e^{-2n[(s-t)y+(y-x)t]}\,\mathbf{1}_{\{s\ge t,\,y\ge x\}}.$$
Then, expression (38) is the sum of four positive integrals. It is clear that one of them is given by
$$I(n,t,x) = \int_0^t\!\!\int_x^1 n^2\sqrt{sytx}\;e^{-2n[(t-s)x+(y-x)s]}\,dy\,ds.$$
We will check that this integral is not uniformly bounded. In fact, taking $t = T$ and $x = \frac1n$,
$$\sup_{n,t,x}\,I(n,t,x) \ge \sup_n\,I\Big(n,T,\frac1n\Big) = \sup_n\,\sqrt{T}\int_0^T\!\!\int_{1/n}^1 \frac{n^2\sqrt{sy}}{\sqrt{n}}\;e^{-2n[(T-s)\frac1n+(y-\frac1n)s]}\,dy\,ds = \sqrt{T}\,e^{-2T}\,\sup_n \int_0^T\!\!\int_{1/n}^1 n^{3/2}\sqrt{sy}\;e^{4s}\,e^{-2nys}\,dy\,ds$$
$$= \sqrt{T}\,e^{-2T}\,\sup_n \int_0^T\!\!\int_1^n \sqrt{sz}\;e^{4s}\,e^{-2zs}\,dz\,ds = \sqrt{T}\,e^{-2T}\int_0^T\!\!\int_1^{\infty} \sqrt{sz}\;e^{4s}\,e^{-2zs}\,dz\,ds, \qquad (39)$$
where we have applied the change of variable $z = ny$. Let us now apply the change of variable $v = sz$, for any fixed $s$, and then Fubini's theorem in the last integral of (39), so that we end up with
$$\sup_{n,t,x}\,I(n,t,x) \ge \sqrt{T}\,e^{-2T}\int_0^{+\infty} \sqrt{v}\,e^{-2v}\Big(\int_0^{v\wedge T} \frac{e^{4s}}{s}\,ds\Big)\,dv,$$
and the latter is clearly divergent, since the inner integral is infinite for every $v > 0$. This fact concludes the proof. ✷

References

[1] R.J. Adler, P. Müller and B. Rozovskii (editors). Stochastic modelling in physical oceanography. Progress in Probability, vol. 39. Birkhäuser Boston Inc., Boston, MA, 1996.

[2] V. Bally, I. Gyöngy and E. Pardoux. White noise driven parabolic SPDEs with measurable drift. J. Funct. Anal. 120 (1994), no. 2, 484–510.

[3] V. Bally, A. Millet and M. Sanz-Solé. Approximation and support theorem in Hölder norm for parabolic stochastic partial differential equations. Ann. Probab. 23 (1995), no. 1, 178–222.

[4] X. Bardina and C. Florit. Approximation in law to the d-