Gaussian density estimates for solutions to quasi-linear stochastic partial differential equations
David Nualart∗
Department of Mathematics, University of Kansas, Lawrence, Kansas 66045, USA
[email protected]
Lluís Quer-Sardanyons†
Departament de Matemàtiques, Universitat Autònoma de Barcelona, 08193 Bellaterra (Barcelona), Spain
[email protected]
October 29, 2018
Abstract
In this paper we establish lower and upper Gaussian bounds for the solutions to the heat and wave equations driven by an additive Gaussian noise, using the techniques of Malliavin calculus and recent density estimates obtained by Nourdin and Viens in [19]. In particular, we deal with the one-dimensional stochastic heat equation in [0, 1] driven by the space-time white noise, and the stochastic heat and wave equations in R^d (d ≥ 1 and d ≤ 3, respectively) driven by a Gaussian noise which is white in time and has a general spatially homogeneous correlation in space.

∗ Supported by the NSF grant DMS-0604207.
† Supported by grants MEC-FEDER Ref. MTM2006-06427 from the Dirección General de Investigación, Ministerio de Educación y Ciencia, Spain, and BE 2007 from the Agència de Gestió d'Ajuts Universitaris i de Recerca, Generalitat de Catalunya.

1 Introduction

The Malliavin calculus, or stochastic calculus of variations, is a suitable technique for proving that a given random vector F = (F_1, . . . , F_m) on the Wiener space possesses a smooth probability density. There has been a recent interest in the applications of the stochastic calculus of variations to obtain lower and upper bounds of Gaussian type for the density of a given Wiener functional. The starting point of this research is the work by Kusuoka and Stroock [12], where they proved that the density of a uniformly hypoelliptic diffusion whose drift is a smooth combination of its diffusion coefficient has a lower bound of Gaussian type. Recently, three different approaches have been developed to derive Gaussian-type bounds for densities of general Wiener functionals using the techniques of Malliavin calculus:

(i) Kohatsu-Higa [11] obtained Gaussian lower bounds for the density of a uniformly elliptic random vector F = (F_1, . . . , F_m) of a Wiener sheet in [0, T] × R^d. The method uses an approximation of F by means of a sequence of conditionally non-degenerate random variables adapted to the filtration generated by the white noise. This paper was inspired by the work by Kusuoka and Stroock [12], and it can be used in a non-Markovian framework.
As an application, the author obtains lower bounds for the solution to the one-dimensional heat equation driven by a space-time white noise, assuming that the diffusion coefficient σ is bounded away from zero. The ideas introduced in this paper were later developed in the work by Bally [1] to get Gaussian lower bounds for locally elliptic Itô processes.

(ii) Nourdin and Viens in [19] have proved a new formula for the density of a one-dimensional Wiener functional in terms of the Malliavin calculus (see (2.2) below). As an application, they obtain upper and lower Gaussian bounds for the density of the maximum of a general Gaussian process.

(iii) In [14] Malliavin and Nualart derived Gaussian lower bounds for multidimensional Wiener functionals under an exponential moment condition on the divergence of a covering vector field. A one-dimensional version of this result was obtained by Nualart in [10].

The purpose of this paper is to apply the results obtained by Nourdin and Viens in [19] to the solutions of different classes of stochastic partial differential equations driven by an additive Gaussian noise. Upper and lower Gaussian estimates for the density of solutions to stochastic partial differential equations are essential tools in the potential analysis for these types of equations (see the recent works [6, 7, 8]).

The paper is organized as follows. In Section 2 we briefly recall the results of [19]. Section 3 deals with the one-dimensional heat equation on the interval [0, 1] with Dirichlet boundary conditions and driven by a space-time white noise. Let us denote by u(t, x) its solution evaluated at some (t, x) ∈ R_+ × (0, 1), and set m = E(u(t, x)). Then, we derive a Gaussian lower and upper bound for the density of u(t, x) of the form

    C t^{−1/4} exp( −(z − m)² / (C′ t^{1/2}) ).
(1.1)

The lower bound differs from that obtained by Kohatsu-Higa in the paper [11], where the first factor was of order t^{−1/2}. Section 4 is devoted to the stochastic heat equation in R^d with an additive Gaussian noise which is white in time and has a spatially homogeneous correlation in space. We assume that the spectral measure of the noise integrates (1 + |ξ|²)^{−η} for some η ∈ (0, 1), and we obtain lower and upper Gaussian bounds for the density of the solution of the form (1.1), which involve the powers t and t^{1−η} (see Theorem 4.4 for the precise statement). Let us mention at this point that the fact that the Malliavin derivative of the solution defines a non-negative process, since it solves a linear parabolic equation, has been of much importance. Finally, in Section 5 we consider a stochastic wave equation in dimension d = 1, 2, 3, again driven by an additive Gaussian noise which is white in time and has a spatially homogeneous correlation in space. In this case we obtain lower and upper bounds only for t small enough, and they involve the powers t³ and t^{3−2η}, where again η is given by the integrability of the spectral measure. We have not been able to overcome this technical restriction on the time parameter because, in comparison with the heat equation, the Malliavin derivative of the solution in this case need not be non-negative, nor even a function at all.

2 Preliminaries

In this section, we recall a general method set up in [19] in order to show that a smooth random variable in the sense of Malliavin calculus has a probability density admitting Gaussian lower and upper bounds.

First of all, let us briefly describe the Gaussian context in which we will be working and the main elements of the Malliavin calculus associated to it. Namely, suppose that on a complete probability space (Ω, F, P) we are given a centered Gaussian family W = {W(h), h ∈ H} of random variables indexed by a separable Hilbert space H with covariance

    E(W(h) W(g)) = ⟨h, g⟩_H,   h, g ∈ H.
The family W is usually called an isonormal Gaussian process on H. Assume that F is the σ-field generated by W.

Let us use standard notation for the main operators of the Malliavin calculus determined by the family W (see, for instance, [20]). More precisely, we denote by D the Malliavin derivative, defined as a closed and unbounded operator from L²(Ω) into L²(Ω; H), whose domain is denoted by D^{1,2}. The adjoint of the operator D is denoted by δ, and is usually called the divergence operator. A random element u ∈ L²(Ω; H) belongs to the domain of δ if and only if it satisfies

    |E(⟨DF, u⟩_H)| ≤ C ‖F‖_{L²(Ω)},

for any F ∈ D^{1,2}, where the constant C only depends on u. For any element u in the domain of δ, the random variable δ(u) can be characterized by the duality relationship

    E(F δ(u)) = E(⟨DF, u⟩_H),

for every F ∈ D^{1,2}.

Any random variable F in L²(Ω, F, P) can be decomposed by means of its Wiener chaos expansion (see [20, Section 1.1.1]), which is usually written as

    F = Σ_{n=0}^{∞} J_n F,

where J_n denotes the projection onto the nth Wiener chaos. Using the chaos expansion one may define the operator L by the formula L = Σ_{n=0}^{∞} −n J_n, which is called the generator of the Ornstein-Uhlenbeck semigroup (see [20, Section 1.4]). It is related to the Malliavin derivative D and its adjoint δ through the formula

    δDF = −LF,
in the sense that F belongs to the domain of L if and only if it belongs to the domain of δD, and in this case the above equality holds.

One can also define the inverse of L, denoted by L^{−1}, as follows: for any F ∈ L²(Ω, F, P), set L^{−1}F := Σ_{n=1}^{∞} −(1/n) J_n F. Then it holds that L L^{−1} F = F − E(F), for any F ∈ L²(Ω, F, P), so that L^{−1} acts as the inverse of L for centered random variables.

Let us consider F ∈ D^{1,2} with mean zero and define the following function on R:

    g(z) := E[ ⟨DF, −DL^{−1}F⟩_H | F = z ].

By [18, Proposition 3.9], it holds that g(z) ≥ 0 on the support of F. Then, [19, Theorem 3.1] states that if the random variable g(F) is bounded away from zero almost surely, that is,

    g(F) ≥ c₁ > 0,  a.s.,

for some constant c₁, then F has a density ρ whose support is R satisfying, almost everywhere:

    ρ(z) = ( E|F| / (2 g(z)) ) exp( −∫₀^z y / g(y) dy ).    (2.2)

As a consequence, see [19, Corollary 3.4], if one also has that g(F) ≤ c₂, a.s., then the density ρ satisfies, for almost all z ∈ R:

    ( E|F| / (2 c₂) ) exp( −z² / (2 c₁) ) ≤ ρ(z) ≤ ( E|F| / (2 c₁) ) exp( −z² / (2 c₂) ).    (2.3)

Let us also mention that [19, Proposition 3.5] provides a formula for g(F) which is more suitable for computational purposes. Indeed, given a random variable F ∈ D^{1,2}, one can write DF = Φ_F(W), where Φ_F is a measurable mapping from R^H to H, determined P ∘ W^{−1}-almost surely (see [20], p. 54-55). Then

    g(F) = ∫₀^∞ e^{−θ} E[ ⟨Φ_F(W), Φ_F(e^{−θ}W + √(1 − e^{−2θ}) W′)⟩_H | F ] dθ,    (2.4)

where W′ stands for an independent copy of W such that W and W′ are defined on the product probability space (Ω × Ω′, F ⊗ F′, P × P′).
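To illustrate how formula (2.2), combined with bounds c₁ ≤ g(F) ≤ c₂, produces the Gaussian sandwich (2.3), the following sketch (not from the paper; the function g, the constants, and the value used for E|F| are illustrative stand-ins) evaluates (2.2) numerically for a toy g pinched between two constants and checks (2.3):

```python
import math

# Illustrative stand-ins: a conditional-variance function g pinched between c1 and c2
c1, c2 = 1.0, 1.5
def g(y):
    return 1.0 + 0.5 * math.sin(y) ** 2   # c1 <= g(y) <= c2 everywhere

e_abs_f = 1.0   # plays the role of E|F|

def rho(z, n=20000):
    # formula (2.2): rho(z) = E|F| / (2 g(z)) * exp(- int_0^z y / g(y) dy)
    h = z / n
    integral = sum(((i + 0.5) * h) / g((i + 0.5) * h) for i in range(n)) * h
    return e_abs_f / (2.0 * g(z)) * math.exp(-integral)

def lower(z):
    # left-hand side of (2.3)
    return e_abs_f / (2.0 * c2) * math.exp(-z * z / (2.0 * c1))

def upper(z):
    # right-hand side of (2.3)
    return e_abs_f / (2.0 * c1) * math.exp(-z * z / (2.0 * c2))

print(all(lower(z) <= rho(z) <= upper(z) for z in [0.3, 1.0, 2.5]))
```

Since g ≥ c₁ forces the exponential in (2.2) above exp(−z²/(2c₁)) and g ≤ c₂ controls the prefactor, the sandwich holds pointwise, which the check confirms.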
Finally, E denotes the mathematical expectation with respect to P × P′. Formula (2.4) can still be rewritten in the following form:

    g(F) = ∫₀^∞ e^{−θ} E[ E′( ⟨DF, ~DF⟩_H ) | F ] dθ,    (2.5)

where, for any random variable X defined on (Ω, F, P), ~X denotes the shifted random variable on Ω × Ω′:

    ~X(ω, ω′) = X( e^{−θ}ω + √(1 − e^{−2θ}) ω′ ),   (ω, ω′) ∈ Ω × Ω′.

Notice that, indeed, ~X depends on the parameter θ, but we have decided to drop this explicit dependence for the sake of simplicity. Throughout the paper we denote by C a generic constant which may vary from line to line.

3 Density estimates for the stochastic heat equation in [0, 1]

In this section, we will consider a stochastic heat equation in [0, 1], with Dirichlet boundary conditions, some non-linear drift b and with an additive space-time white noise perturbation. We aim to give sufficient conditions on the coefficient b ensuring Gaussian lower and upper bounds for the probability density of the solution at any point.

We are concerned with the following one-dimensional heat equation driven by a space-time white noise:

    ∂u/∂t (t, x) − ∂²u/∂x² (t, x) = b(u(t, x)) + σ Ẇ(t, x),   (t, x) ∈ [0, T] × [0, 1],    (3.6)

where T > 0, the initial condition is given by a continuous function u₀ : [0, 1] → R and we consider Dirichlet boundary conditions. That is,

    u(0, x) = u₀(x),   x ∈ [0, 1],    (3.7)
    u(t,
0) = u ( t,
1) = 0,   t ∈ [0, T].

The real-valued random field solution to Equation (3.6) will be denoted by {u(t, x), (t, x) ∈ [0, T] × [0, 1]}. The function b : R → R is of class C¹ with a bounded derivative, and σ > 0 is a constant. We assume that {W(t, x), (t, x) ∈ [0, T] × [0, 1]} is a Brownian sheet on [0, T] × [0, 1], defined on a complete probability space (Ω, F, P). That is, {W(t, x)} is a centered Gaussian family with the covariance function

    E(W(t, x) W(s, y)) = (t ∧ s)(x ∧ y).

For 0 ≤ t ≤ T, let F_t be the σ-field generated by the random variables {W(s, x), (s, x) ∈ [0, t] × [0, 1]} and the P-null sets.

The solution to the formal Equation (3.6) is understood in the mild sense: an {F_t}-adapted stochastic process {u(t, x), (t, x) ∈ [0, T] × [0, 1]} solves (3.6) with initial and boundary conditions (3.7) if, for any (t, x) ∈ (0, T] × (0, 1),

    u(t, x) = ∫₀¹ G_t(x, y) u₀(y) dy + ∫₀^t ∫₀¹ G_{t−s}(x, y) b(u(s, y)) dy ds
              + σ ∫₀^t ∫₀¹ G_{t−s}(x, y) W(ds, dy),    (3.8)

where G_t(x, y), (t, x, y) ∈ R_+ × (0, 1)², denotes the Green function associated to the heat equation on [0, 1] with Dirichlet boundary conditions. We will use the following facts, valid for any 0 < t′ < t and x ∈ (0, 1):

    0 ≤ G_t(x, y) ≤ (4πt)^{−1/2} e^{−(x−y)²/(4t)},    (3.9)

    c_x √(t − t′) ≤ ∫_{t′}^t ∫₀¹ |G_{t−s}(x, y)|² dy ds ≤ (2π)^{−1/2} √(t − t′),    (3.10)

where c_x is a positive constant. We also have inf_{α ≤ x ≤ 1−α} c_x > 0 for any α ∈ (0, 1/2). In the case of Neumann boundary conditions, c_x does not depend on x (see, for instance, [17, Lemma A1.2]).

Let us also mention that the stochastic integral in (3.8) is understood as an integral with respect to the Brownian sheet in the sense of Walsh [26]. Existence and uniqueness of a mild solution for Equation (3.8) can be deduced from the results in [26]; for an even more general setting, see also [2]. The Malliavin calculus applied to Equation (3.8) has been dealt with in [3].
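The bounds (3.9) and (3.10) can be checked numerically. The sketch below (an illustration, not part of the paper; it uses the standard method-of-images series for the Dirichlet Green function on [0, 1]) verifies the pointwise Gaussian domination and the upper half of (3.10) at one sample point:

```python
import math

def p_free(t, u):
    # whole-line heat kernel (4 pi t)^{-1/2} exp(-u^2 / (4t))
    return math.exp(-u * u / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def G(t, x, y, N=4):
    # Dirichlet Green function on [0, 1] by the method of images
    return sum(p_free(t, x - y - 2 * n) - p_free(t, x + y - 2 * n)
               for n in range(-N, N + 1))

t, tp, x = 0.05, 0.02, 0.3

# (3.9): 0 <= G_t(x, y) <= (4 pi t)^{-1/2} exp(-(x - y)^2 / (4t))
ok_39 = all(-1e-12 <= G(t, x, j / 50) <= p_free(t, x - j / 50) + 1e-12
            for j in range(51))

# (3.10), upper half: int_{t'}^t int_0^1 |G_{t-s}(x, y)|^2 dy ds <= (2 pi)^{-1/2} sqrt(t - t')
ns, ny = 120, 120
ds, dy = (t - tp) / ns, 1.0 / ny
I = sum(G(t - (tp + (i + 0.5) * ds), x, (j + 0.5) * dy) ** 2
        for i in range(ns) for j in range(ny)) * ds * dy
ok_310 = I <= math.sqrt((t - tp) / (2.0 * math.pi))

print(ok_39 and ok_310)
```

The upper constant (2π)^{−1/2} is exactly the value obtained by integrating the square of the whole-line kernel, which dominates the killed (Dirichlet) kernel.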
In this case, we consider the Gaussian context associated to the space-time white noise. That is, we have H = L²(R_+ × [0, 1]) and the Gaussian family {W(h), h ∈ L²(R_+ × [0, 1])} is given by the Wiener integral

    W(h) = ∫_{R_+} ∫₀¹ h(s, y) W(ds, dy).

A consequence of Proposition 4.3 and Theorem 2.2 in [3] is that, for all (t, x) ∈ (0, T] × [0, 1], the random variable u(t, x) belongs to D^{1,2}, the Malliavin derivative satisfies the linear parabolic equation

    D_{r,z} u(t, x) = σ G_{t−r}(x, z) + ∫_r^t ∫₀¹ G_{t−s}(x, y) b′(u(s, y)) D_{r,z} u(s, y) dy ds,    (3.11)

for any (r, z) ∈ [0, t] × [0, 1] and (t, x) ∈ [0, T] × [0, 1], and the probability law of u(t, x) has a density. The positivity of σ and G guarantees that the solution of Equation (3.11) remains non-negative, that is, D_{r,z} u(t, x) ≥ 0, a.s. This will be a key point in proving that the density of u(t, x) admits Gaussian upper and lower estimates.

We fix
T > 0 and consider {u(t, x), (t, x) ∈ [0, T] × [0, 1]} the unique mild solution of Equation (3.8). This section is devoted to proving the following result.

Theorem 3.1
Assume that the drift coefficient b is a C¹ function with bounded derivative. Then, for all t ∈ (0, T] and x ∈ (0, 1), the random variable u(t, x) possesses a density p satisfying the following: for almost every z ∈ R,

    ( E|u(t, x) − m| / (C₂ t^{1/2}) ) exp( −(z − m)² / (C₁ t^{1/2}) ) ≤ p(z) ≤ ( E|u(t, x) − m| / (C₁ t^{1/2}) ) exp( −(z − m)² / (C₂ t^{1/2}) ),    (3.12)

where m := E(u(t, x)) and C₁, C₂ are positive quantities depending on σ, ‖b′‖_∞, T and x.

The statement of the above Theorem 3.1 will be a consequence of formula (2.2) (see [19, Theorem 3.1]) and the following proposition. In order not to overload the notation, set F := u(t, x) − E(u(t, x)), and recall that, as specified in Section 2 (see (2.5) therein),

    g(F) = ∫₀^∞ e^{−θ} E[ E′( ∫₀^t ∫₀¹ D_{r,z}F (~D_{r,z}F) dz dr ) | F ] dθ
         = ∫₀^∞ e^{−θ} E[ E′( ∫₀^t ∫₀¹ D_{r,z}u(t, x) (~D_{r,z}u(t, x)) dz dr ) | F ] dθ,    (3.13)

where ~DF = (DF)(e^{−θ}ω + √(1 − e^{−2θ}) ω′).

Proposition 3.2
Fix
T > 0 and assume that the drift coefficient b is of class C¹ and has a bounded derivative. Then there exist positive constants C₁, C₂ such that

    C₁ t^{1/2} ≤ g(F) ≤ C₂ t^{1/2},    (3.14)

for all t ∈ (0, T]. In order to prove Proposition 3.2, we will need the following technical lemma:
Lemma 3.3
Let t > 0. Assume that b ∈ C¹ and has a bounded derivative. There exists a positive constant K, depending on σ and ‖b′‖_∞, such that, for any δ ∈ (0, 1):

    sup_{(1−δ)t ≤ ν ≤ t, 0 ≤ y ≤ 1} ∫_{(1−δ)t}^t ∫₀¹ E[ |D_{r,z}u(ν, y)|² | F ] dz dr ≤ K (δt)^{1/2}    (3.15)

and

    sup_{θ ≥ 0} sup_{(1−δ)t ≤ ν ≤ t, 0 ≤ y ≤ 1} ∫_{(1−δ)t}^t ∫₀¹ E[ E′( |~D_{r,z}u(ν, y)|² ) | F ] dz dr ≤ K (δt)^{1/2},    (3.16)

P-almost surely.

Proof: We will only deal with the proof of (3.15), since (3.16) may be checked using exactly the same arguments. Let us first invoke the linear equation (3.11) satisfied by the Malliavin derivative Du(ν, v), for (ν, v) ∈ [(1−δ)t, t] × (0, 1), and then take the squared L²-norm on [(1−δ)t, t] × [0, 1]:

    ∫_{(1−δ)t}^t ∫₀¹ |D_{r,z}u(ν, v)|² dz dr ≤ 2σ² ∫_{(1−δ)t}^t ∫₀¹ |G_{ν−r}(v, z)|² dz dr
        + 2 ∫_{(1−δ)t}^t ∫₀¹ ( ∫_r^ν ∫₀¹ G_{ν−s}(v, y) b′(u(s, y)) D_{r,z}u(s, y) dy ds )² dz dr.    (3.17)

By (3.10), the first term on the right-hand side of (3.17) can be bounded by √(2/π) σ² (δt)^{1/2}. For the second one, we apply Hölder's inequality, the fact that b′ is bounded and Fubini's theorem, so that we end up with

    ∫_{(1−δ)t}^t ∫₀¹ |D_{r,z}u(ν, v)|² dz dr ≤ √(2/π) σ² (δt)^{1/2}
        + 2 ‖b′‖²_∞ δt ∫_{(1−δ)t}^ν ∫₀¹ |G_{ν−s}(v, y)|² ( ∫_{(1−δ)t}^t ∫₀¹ |D_{r,z}u(s, y)|² dz dr ) dy ds.

Taking the conditional expectation E[· | F] and using again (3.10) we obtain

    sup_{(1−δ)t ≤ ρ ≤ ν, 0 ≤ v ≤ 1} ∫_{(1−δ)t}^t ∫₀¹ E[ |D_{r,z}u(ρ, v)|² | F ] dz dr ≤ √(2/π) σ² (δt)^{1/2}
        + 2 ‖b′‖²_∞ δt ∫_{(1−δ)t}^ν ( sup_{(1−δ)t ≤ τ ≤ s, 0 ≤ y ≤ 1} ∫_{(1−δ)t}^t ∫₀¹ E[ |D_{r,z}u(τ, y)|² | F ] dz dr ) (ν − s)^{−1/2} ds.
If we set

    Ψ_{δ,t}(ν) := sup_{(1−δ)t ≤ ρ ≤ ν, 0 ≤ v ≤ 1} ∫_{(1−δ)t}^t ∫₀¹ E[ |D_{r,z}u(ρ, v)|² | F ] dz dr,   ν ∈ [(1−δ)t, t],

then we have seen that

    Ψ_{δ,t}(ν) ≤ K₁ (δt)^{1/2} + K₂ δt ∫_{(1−δ)t}^ν Ψ_{δ,t}(s) (ν − s)^{−1/2} ds,   ν ∈ [(1−δ)t, t],  a.s.

Now we can conclude by applying Gronwall's lemma [5, Lemma 15]. □
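The version of Gronwall's lemma used here handles the weakly singular kernel (ν − s)^{−1/2}. A minimal numerical illustration (not from the paper; a, b and the interval length are arbitrary stand-ins) discretizes the corresponding Volterra equation ψ(ν) = a + b ∫₀^ν ψ(s) (ν − s)^{−1/2} ds and observes the two facts the lemma delivers: the solution depends linearly on the constant a, and it is bounded by a multiple of a depending only on b and the interval length:

```python
import math

def volterra(a, b, T=1.0, n=400):
    # explicit scheme for psi(v) = a + b * int_0^v psi(s) / sqrt(v - s) ds
    h = T / n
    psi = [a]
    for k in range(1, n + 1):
        val = a
        for j in range(k):
            # exact weight int_{s_j}^{s_{j+1}} (v_k - s)^{-1/2} ds, psi frozen at left endpoint
            w = 2.0 * (math.sqrt(h * (k - j)) - math.sqrt(h * (k - j - 1)))
            val += b * w * psi[j]
        psi.append(val)
    return psi[-1]

r1, r2 = volterra(1.0, 0.5), volterra(2.0, 0.5)
print(abs(r2 / r1 - 2.0) < 1e-9)   # solution scales linearly in a
print(r1 < 10.0)                    # bounded by C(b, T) * a
```

Applied with a = K₁(δt)^{1/2}, this linearity is what turns the Volterra inequality for Ψ_{δ,t} into the bound (3.15).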
Proof of Proposition 3.2:
We first recall that the Malliavin derivative of u(ν, v), (ν, v) ∈ [0, T] × [0, 1], satisfies D_{r,z}u(ν, v) ≥ 0 for all (r, z) ∈ [0, T] × [0, 1], a.s. This is because the Malliavin derivative solves the linear parabolic equation (3.11). Let us deal with the proof of (3.14) in two steps.

Step 1: The lower bound.
Fix δ ∈ (0, 1) and let us first derive the lower bound in (3.14). Since the Malliavin derivative of u(t, x) is non-negative, formula (3.13) yields

    g(F) ≥ ∫₀^∞ e^{−θ} E[ E′( ∫_{(1−δ)t}^t ∫₀¹ D_{r,z}u(t, x) (~D_{r,z}u(t, x)) dz dr ) | F ] dθ.

By Equation (3.11), we can decompose the right-hand side of the above inequality into a sum of four terms:

    A₀(t, x; δ) = σ² ∫_{(1−δ)t}^t ∫₀¹ |G_{t−r}(x, z)|² dz dr,

    A₁(t, x; δ) = σ ∫_{(1−δ)t}^t ∫₀¹ G_{t−r}(x, z) E[ ∫_r^t ∫₀¹ G_{t−s}(x, y) b′(u(s, y)) D_{r,z}u(s, y) dy ds | F ] dz dr,

    A₂(t, x; δ) = σ ∫₀^∞ e^{−θ} ∫_{(1−δ)t}^t ∫₀¹ G_{t−r}(x, z) E[ E′( ∫_r^t ∫₀¹ G_{t−s}(x, y) b′(~u(s, y)) (~D_{r,z}u(s, y)) dy ds ) | F ] dz dr dθ,

    A₃(t, x; δ) = ∫₀^∞ e^{−θ} ∫_{(1−δ)t}^t ∫₀¹ E[ ( ∫_r^t ∫₀¹ G_{t−s}(x, y) b′(u(s, y)) D_{r,z}u(s, y) dy ds )
        × E′( ∫_r^t ∫₀¹ G_{t−s}(x, y) b′(~u(s, y)) (~D_{r,z}u(s, y)) dy ds ) | F ] dz dr dθ.

First we notice that, by (3.10):

    A₀(t, x; δ) ≥ σ² c_x (δt)^{1/2}.    (3.18)

Thus we can write

    g(F) ≥ σ² c_x (δt)^{1/2} − |A₁(t, x; δ) + A₂(t, x; δ) + A₃(t, x; δ)|,    (3.19)

so that we will need to obtain upper bounds for the terms |A_i(t, x; δ)|, i = 1, 2, 3. We apply Fubini's theorem for the conditional expectation, the boundedness of b′, the Cauchy-Schwarz inequality and the bound (3.10), so that we have the following estimates:

    |A₁(t, x; δ)| ≤ C ( ∫_{(1−δ)t}^t ∫₀¹ |G_{t−r}(x, z)|² dz dr )^{1/2}
        × ( ∫_{(1−δ)t}^t ∫₀¹ | ∫_r^t ∫₀¹ G_{t−s}(x, y) E[ |D_{r,z}u(s, y)| | F ] dy ds |² dz dr )^{1/2}
    ≤ C (δt)^{1/4} ( δt ∫_{(1−δ)t}^t ∫₀¹ ( ∫_r^t ∫₀¹ |G_{t−s}(x, y)|² E[ |D_{r,z}u(s, y)|² | F ] dy ds ) dz dr )^{1/2}
    ≤ C (δt)^{3/4} ( ∫_{(1−δ)t}^t ∫₀¹ |G_{t−s}(x, y)|² ( ∫_{(1−δ)t}^t ∫₀¹ E[ |D_{r,z}u(s, y)|² | F ] dz dr ) dy ds )^{1/2}
    ≤ C (δt)^{3/4} (δt)^{1/4} ( sup_{(1−δ)t ≤ s ≤ t, 0 ≤ y ≤ 1} ∫_{(1−δ)t}^t ∫₀¹ E[ |D_{r,z}u(s, y)|² | F ] dz dr )^{1/2}.

At this point we are in a position to apply (3.15) in Lemma 3.3. Therefore,

    |A₁(t, x; δ)| ≤ C₁ (δt)^{5/4},  a.s.,    (3.20)

where the constant C₁ depends only on σ and ‖b′‖_∞. In order to get a bound for |A₂(t, x; δ)|, one can use arguments analogous to those for |A₁(t, x; δ)|, applying (3.16) instead of (3.15) in Lemma 3.3. Hence, one obtains

    |A₂(t, x; δ)| ≤ C₂ (δt)^{5/4},  a.s.    (3.21)

Let us finally estimate |A₃(t, x; δ)|. For this, we apply Fubini's theorem, the fact that b′ is bounded and the Cauchy-Schwarz inequality with respect to dz dr dP|_F dP′, and we finally invoke Lemma 3.3:

    |A₃(t, x; δ)| ≤ C ∫₀^∞ e^{−θ} ∫_{(1−δ)t}^t ∫₀¹ ∫_{(1−δ)t}^t ∫₀¹ G_{t−s}(x, y) G_{t−s̄}(x, ȳ)
        × ( ∫_{(1−δ)t}^t ∫₀¹ E[ E′( |D_{r,z}u(s, y) ~D_{r,z}u(s̄, ȳ)| ) | F ] dz dr ) dy ds dȳ ds̄ dθ
    ≤ C ∫₀^∞ e^{−θ} ∫_{(1−δ)t}^t ∫₀¹ ∫_{(1−δ)t}^t ∫₀¹ G_{t−s}(x, y) G_{t−s̄}(x, ȳ)
        × ( ∫_{(1−δ)t}^t ∫₀¹ E[ |D_{r,z}u(s, y)|² | F ] dz dr )^{1/2}
        × ( ∫_{(1−δ)t}^t ∫₀¹ E[ E′( |~D_{r,z}u(s̄, ȳ)|² ) | F ] dz dr )^{1/2} dy ds dȳ ds̄ dθ
    ≤ C (δt)^{1/2} ( ∫_{(1−δ)t}^t ∫₀¹ G_{t−s}(x, y) dy ds )²
    ≤ C₃ (δt)²,    (3.22)

where the constant C₃ depends only on σ and ‖b′‖_∞. The very last estimate in (3.22) has been obtained after applying the Cauchy-Schwarz inequality and the bound (3.10).
Finally, plugging the bounds (3.20)-(3.22) into (3.19), we have

    g(F) ≥ σ² c_x (δt)^{1/2} − c ( (δt)^{5/4} + (δt)² ),

where c is a positive constant depending on σ and ‖b′‖_∞. Hence, if we assume that δ ≤ 1 ∧ T^{−1}, so that δt ≤ 1, then we can write

    g(F) ≥ σ² c_x (δt)^{1/2} − c (δt)^{5/4} ≥ ( σ² c_x δ^{1/2} − c δ^{5/4} T^{3/4} ) t^{1/2}.

It only remains to observe that the quantity σ² c_x δ^{1/2} − c δ^{5/4} T^{3/4} is strictly positive whenever δ is sufficiently small, namely δ ∈ (0, δ₀), with

    δ₀ = 1 ∧ T^{−1} ∧ T^{−1} ( σ² c_x / c )^{4/3}.

Thus, the lower bound in (3.14) has been proved.
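The positivity claim at the end of Step 1 is elementary arithmetic: a function of the form φ(δ) = A δ^{1/2} − B δ^p with p > 1/2 is strictly positive exactly on (0, (A/B)^{1/(p − 1/2)}). A quick numerical confirmation (A, B, p are arbitrary stand-ins, not the paper's constants):

```python
import math

# phi(delta) = A * delta^{1/2} - B * delta^p, with p > 1/2;
# positive exactly on (0, (A/B)^{1/(p - 1/2)})
A, B, p = 0.8, 3.0, 1.25   # illustrative stand-ins

def phi(d):
    return A * math.sqrt(d) - B * d ** p

delta_max = (A / B) ** (1.0 / (p - 0.5))
print(all(phi(f * delta_max) > 0 for f in [0.1, 0.5, 0.9]))
print(phi(1.1 * delta_max) < 0)
```

Factoring φ(δ) = δ^{1/2}(A − B δ^{p−1/2}) makes the sign change at δ = (A/B)^{1/(p−1/2)} apparent, which is the reason a sufficiently small δ₀ always exists.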
Step 2: The upper bound.
The upper estimate in (3.14) is an almost immediate consequence of the computations which we have just performed for the lower bound. More precisely, according to (3.13) and the considerations in the first part of the proof, we have the following:

    g(F) ≤ Σ_{i=0}^{3} |A_i(t, x; 1)|,    (3.23)

where we notice that we have substituted δ by 1 in A_i(t, x; δ), i = 0, 1, 2, 3. We have already seen that |A_i(t, x; 1)| ≤ C t^{5/4} for i = 1, 2 and |A₃(t, x; 1)| ≤ C t², so we just need to bound |A₀(t, x; 1)|, which follows directly from (3.10). Thus

    g(F) ≤ C ( t^{1/2} + t^{5/4} + t² ),  a.s.,

for a constant C > 0. Therefore g(F) ≤ C₂ t^{1/2}, where the constant C₂ > 0 depends on σ, ‖b′‖_∞ and T. □

We are now in a position to prove the main result of this section:
Proof of Theorem 3.1:
The random variable F = u(t, x) − E(u(t, x)) is centered, belongs to D^{1,2} and, by Proposition 3.2, it holds that 0 < C₁ t^{1/2} ≤ g(F) for all t ∈ (0, T]. We then apply [19, Theorem 3.1] and obtain that the probability density ρ : R → R of F is given by

    ρ(z) = ( E|u(t, x) − E(u(t, x))| / (2 g(z)) ) exp( −∫₀^z y / g(y) dy ),

for almost every z ∈ R. Thus, the density p of the random variable u(t, x) satisfies

    p(z) = ( E|u(t, x) − E(u(t, x))| / (2 g(z − E(u(t, x)))) ) exp( −∫₀^{z − E(u(t,x))} y / g(y) dy ).    (3.24)

In order to conclude the proof, we only need to use the bounds (3.14) in the above expression (3.24). □

4 Density estimates for the stochastic heat equation in R^d

In this section we are interested in the stochastic heat equation in R^d with an additive Gaussian noise which is white in time and has a spatially homogeneous correlation in space. We aim to find sufficient conditions on the drift term and the noise's spatial correlation ensuring that the density of the solution admits Gaussian-type estimates. The fact that we deal with an SPDE in R^d with a non-trivial spatial correlation makes the analysis in this case much more involved in comparison to the one-dimensional setting of Section 3.

Let us consider the following stochastic parabolic Cauchy problem in R^d:

    ∂u/∂t (t, x) − ∆u(t, x) = b(u(t, x)) + σ Ẇ(t, x),   (t, x) ∈ [0, T] × R^d,    (4.25)

where T > 0, ∆ denotes the Laplacian operator on R^d, b : R → R is a C¹ function with bounded derivative, and we are given an initial condition of the form u(0, x) = u₀(x), x ∈ R^d, with u₀ : R^d → R measurable and bounded. The random perturbation Ẇ (formally) stands for a Gaussian noise which is white in time and has some spatially homogeneous correlation in space.
More precisely, on a complete probability space (Ω, F, P), we consider a family of mean-zero Gaussian random variables W = {W(ϕ), ϕ ∈ C₀^∞(R^{d+1})}, where C₀^∞(R^{d+1}) denotes the space of infinitely differentiable functions with compact support, with covariance functional

    E(W(ϕ) W(ψ)) = ∫₀^∞ ∫_{R^d} (ϕ(t) ∗ ψ_s(t))(x) Λ(dx) dt,   ϕ, ψ ∈ C₀^∞(R^{d+1}),    (4.26)

where ψ_s(t, x) := ψ(t, −x) and Λ is a non-negative and non-negative definite tempered measure. By [25, Chapter VII, Théorème XVII], Λ is symmetric and there exists a non-negative tempered measure µ whose Fourier transform is Λ. That is, by definition of the Fourier transform on the space S′(R^d) of tempered distributions, for all φ belonging to the space S(R^d) of rapidly decreasing C^∞ functions,

    ∫_{R^d} φ(x) Λ(dx) = ∫_{R^d} Fφ(ξ) µ(dξ),

and there is an integer m ≥ 1 such that

    ∫_{R^d} (1 + |ξ|²)^{−m} µ(dξ) < ∞.

The measure µ is usually called the spectral measure of the noise W. In particular, we have

    E(W(ϕ) W(ψ)) = ∫₀^∞ ∫_{R^d} Fϕ(t)(ξ) \overline{Fψ(t)(ξ)} µ(dξ) dt,   ϕ, ψ ∈ C₀^∞(R^{d+1}).

Notice that we have used the symbol "∗" for the standard convolution in R^d.

Example 4.1
Usual examples of spatial correlations are given by Λ(dx) = f(x) dx, where f is a non-negative continuous function on R^d \ {0} which is integrable in a neighborhood of 0; for instance, one can take f to be a Riesz kernel f_ε(x) = |x|^{−ε}, with 0 < ε < d. The space-time white noise would correspond to taking f equal to the Dirac delta at the origin. In this latter case, the spectral measure µ is the Lebesgue measure on R^d.

We denote by H the completion of the Schwartz space S(R^d) endowed with the semi-inner product

    ⟨φ₁, φ₂⟩_H = ∫_{R^d} (φ₁ ∗ (φ₂)_s)(x) Λ(dx) = ∫_{R^d} Fφ₁(ξ) \overline{Fφ₂(ξ)} µ(dξ),   φ₁, φ₂ ∈ S(R^d),

and associated semi-norm ‖·‖_H. Notice that H is a Hilbert space that may contain distributions (see [5, Example 6]). Set H_T := L²([0, T]; H). Then, it turns out that the Gaussian noise W can be naturally extended to H_T, so that we obtain a family {W(h), h ∈ H_T} of centered Gaussian random variables such that

    E(W(h₁) W(h₂)) = ⟨h₁, h₂⟩_{H_T} = ∫₀^T ⟨h₁(t), h₂(t)⟩_H dt,   h₁, h₂ ∈ H_T.

Setting W_t(g) := W(1_{[0,t]} g), t ∈ [0, T], g ∈ H, we obtain a cylindrical Wiener process {W_t(g), t ∈ [0, T], g ∈ H} on the Hilbert space H. That is, for any t ∈ [0, T] and g ∈ H, W_t(g) is a mean-zero Gaussian random variable and

    E(W_t(g₁) W_s(g₂)) = (t ∧ s) ⟨g₁, g₂⟩_H,   s, t ∈ [0, T],  g₁, g₂ ∈ H.

We denote by F_t the σ-algebra generated by {W_s(g), s ∈ [0, t], g ∈ H} and the P-null sets. As explained in [21, Section 3], one can construct (real-valued) stochastic integrals of predictable processes in L²(Ω × [0, T]; H) with respect to the cylindrical Wiener process W; notice that we are making an abuse of notation, since W denoted the Gaussian family given at the beginning. The resulting stochastic integral turns out to extend Walsh's integration theory [26] and is equivalent, in some particular situations, to Dalang's stochastic integral set up in [5].
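The two expressions for the semi-inner product ⟨·,·⟩_H — the convolution against Λ and the spectral integral against µ — can be compared numerically in a simple case. The sketch below (illustrative only; the Gaussian choices of f, φ₁, φ₂, the dimension d = 1, and the convention Fφ(ξ) = ∫ e^{−iξx} φ(x) dx are assumptions) checks that both sides agree:

```python
import math

# d = 1, Lambda(dx) = f(x) dx with f(x) = exp(-x^2/2); with the convention
# F phi(xi) = int e^{-i xi x} phi(x) dx, the spectral measure is
# mu(dxi) = (2 pi)^{-1/2} exp(-xi^2/2) dxi
def f(x):    return math.exp(-x * x / 2.0)
def phi1(x): return math.exp(-x * x / 2.0)
def phi2(x): return math.exp(-(x - 1.0) ** 2 / 2.0)

h, R = 0.05, 8.0
grid = [-R + i * h for i in range(int(2 * R / h) + 1)]

# convolution side: int int phi1(y) phi2(y - x) f(x) dy dx
direct = sum(phi1(y) * phi2(y - x) * f(x) for x in grid for y in grid) * h * h

# spectral side: for these Gaussians, F phi1(xi) * conj(F phi2(xi)) = 2 pi e^{-xi^2} e^{i xi}
spectral = sum(2.0 * math.pi * math.exp(-xi * xi) * math.cos(xi)
               * math.exp(-xi * xi / 2.0) / math.sqrt(2.0 * math.pi)
               for xi in grid) * h

print(abs(direct - spectral) < 1e-6)
```

The agreement is an instance of Parseval's relation; it is the spectral side that makes conditions such as (4.28) below tractable, since only |Fφ|² and µ enter.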
The stochastic integrals appearing throughout this section will be understood as integrals with respect to the cylindrical Wiener process W. We are now in a position to rigorously define the mild solution of Equation (4.25): a square-integrable {F_t}-adapted random field {u(t, x), (t, x) ∈ [0, T] × R^d} solves Equation (4.25) if, for any (t, x) ∈ [0, T] × R^d,

    u(t, x) = ∫_{R^d} G_t(x − y) u₀(y) dy + ∫₀^t ∫_{R^d} G_{t−s}(x − y) b(u(s, y)) dy ds
              + σ ∫₀^t ∫_{R^d} G_{t−s}(x − y) W(ds, dy).    (4.27)

Here, the function G_t(x), t > 0, x ∈ R^d, denotes the fundamental solution associated to the heat equation in R^d, that is, the centered Gaussian kernel of variance 2t:

    G_t(x) = (4πt)^{−d/2} e^{−|x|²/(4t)}.

General existence and uniqueness results for Equation (4.27) may be found in [5]. More precisely, it turns out that sufficient conditions for existence and uniqueness of a solution to Equation (4.27) are the following: b is Lipschitz, u₀ is measurable and bounded, and the noise's spatial correlation is related to the fundamental solution G through the condition

    ∫₀^T ∫_{R^d} |FG_t(ξ)|² µ(dξ) dt < +∞.    (4.28)

As has been shown in [5, Example 8], condition (4.28) holds if and only if

    ∫_{R^d}
(1 + |ξ|²)^{−1} µ(dξ) < +∞,    (4.29)

and this integrability condition will be assumed to be satisfied in the remainder of this section.

Let us now turn to the question of whether the probability law of the solution at any point has a density. The Gaussian setting in which we apply the Malliavin calculus machinery is determined by the Gaussian family {W(h), h ∈ H_T} given before. Using this framework, it is a consequence of [21, Theorem 5.2] that, if the drift coefficient b ∈ C¹ has a bounded and Lipschitz continuous derivative and condition (4.29) is fulfilled, then the solution u(t, x) to Equation (4.27), at any point (t, x) ∈ (0, T] × R^d, is differentiable in the Malliavin sense, that is, u(t, x) ∈ D^{1,2}, and its law has a density with respect to the Lebesgue measure. At this point, we should mention that in [21] all the results are proved in the case where the noise's correlation is given by a function f, that is, Λ(dx) = f(x) dx. However, the extension of those results to a general tempered measure Λ is straightforward. See also the works [15], [23], [24] for related results with slightly stronger conditions on the spectral measure.

The equation satisfied by the Malliavin derivative of u(t, x) will be of much importance for us. Indeed, see either [15], [23] or [21], the Malliavin derivative Du(t, x) takes values in the Hilbert space H_T and satisfies the following linear parabolic equation:

    Du(t, x) = σ G_{t−·}(x − ⋆) + ∫₀^t ∫_{R^d} G_{t−s}(x − y) b′(u(s, y)) Du(s, y) dy ds,    (4.30)

where "⋆" stands for the H-variable. Moreover, one proves that

    sup_{t ∈ [0,T]} sup_{x ∈ R^d} E( ‖Du(t, x)‖²_{H_T} ) < +∞.    (4.31)

Equation (4.30) may be interpreted in the following sense: for any r ∈ [0, t), D_r u(t, x) satisfies the equation in H

    D_r u(t, x) = σ G_{t−r}(x − ⋆) + ∫_r^t ∫_{R^d} G_{t−s}(x − y) b′(u(s, y)) D_r u(s, y) dy ds.
(4.32)

The integral on the right-hand side of (4.32) is understood as an H-valued pathwise integral. Before going on with our analysis, let us briefly describe how this integral is rigorously defined. Let {e_j, j ≥ 1} be a complete orthonormal system of H. Then, using the properties of G, the boundedness of b′ and (4.31), the H-valued integral

    I_t = ∫_r^t ∫_{R^d} G_{t−s}(x − y) b′(u(s, y)) D_r u(s, y) dy ds

can be defined through its components

    { ∫_r^t ∫_{R^d} G_{t−s}(x − y) b′(u(s, y)) ⟨D_r u(s, y), e_j⟩_H dy ds,  j ≥ 1 }

with respect to the basis {e_j, j ≥ 1}, and these latter integrals take values in R. Moreover, one can obtain an upper bound for the second moment of I_t (for the general setting see, for instance, [22], p. 24):

    E( ‖I_t‖²_H ) ≤ C ∫_r^t ∫_{R^d} |G_{t−s}(x − y)|² |b′(u(s, y))|² E( ‖D_r u(s, y)‖²_H ) dy ds.    (4.33)

Let us now go back to Equation (4.32). Since G_{t−r}(x − ⋆) : R^d → R defines a function (indeed in S(R^d)), this implies that both D_r u(t, x) and the integral on the right-hand side of (4.32) define elements of H which are functions in z. Therefore, for any fixed (t, x) ∈ (0, T] × R^d, we can state that the Malliavin derivative satisfies, for all (r, z) ∈ [0, t) × R^d:

    D_{r,z} u(t, x) = σ G_{t−r}(x − z) + ∫_r^t ∫_{R^d} G_{t−s}(x − y) b′(u(s, y)) D_{r,z} u(s, y) dy ds.    (4.34)

A crucial consequence of this fact is that, as in the case of the one-dimensional stochastic heat equation with boundary conditions, the Malliavin derivative D_{r,z} u(t, x) is non-negative, for all (r, z) ∈ [0, t) × R^d, a.s.

We will need a slightly stronger condition on the spectral measure µ than (4.29). Namely, consider the following hypothesis:

Hypothesis H_η: There exists η ∈ (0, 1) such that

    ∫_{R^d} (1 + |ξ|²)^{−η} µ(dξ) < +∞.    (4.35)

Then, one can prove the following estimates (see [15, Lemma 3.1]):

Lemma 4.2
Assume that the spectral measure $\mu$ satisfies $\int_{\mathbb{R}^d} \frac{1}{1+|\xi|^2}\, \mu(d\xi) < +\infty$.

1. Let $T > 0$. Then, there exists a constant $k_1 > 0$ such that, for all $t \in [0,T]$,
\[
k_1 t \leq \int_0^t \int_{\mathbb{R}^d} |\mathcal{F} G_s(\xi)|^2\, \mu(d\xi)\, ds.
\]
The constant $k_1$ depends on $T$ and, indeed, it converges to zero as $T$ tends to infinity.

2. Suppose that Hypothesis H$_\eta$ holds. Then, there exists a constant $k_2 > 0$ such that, for all $t \geq 0$,
\[
\int_0^t \int_{\mathbb{R}^d} |\mathcal{F} G_s(\xi)|^2\, \mu(d\xi)\, ds \leq k_2\, t^{\beta}, \tag{4.36}
\]
for all $\beta \in (0, 1-\eta]$.

Remark 4.3
It is worth mentioning that the integrability condition (4.29) was sufficient for us to prove the existence of a density for the solution $u(t,x)$, at any point $(t,x) \in (0,T] \times \mathbb{R}^d$ (see [21, Theorem 5.2]). However, as will be made clear in Section 4.2, we will need upper bounds of the form (4.36) in order to obtain Gaussian estimates for the density of $u(t,x)$.

4.2 Gaussian estimates for the density of the solution

Let us consider
$T > 0$ and let $\{u(t,x),\, (t,x) \in [0,T] \times \mathbb{R}^d\}$ be the unique mild solution to Equation (4.27). This section is devoted to proving the following result:

Theorem 4.4
Fix $t \in (0,T]$ and $x \in \mathbb{R}^d$. Suppose that Hypothesis H$_\eta$ is satisfied for some $\eta \in (0,1)$ and that the coefficient $b$ is of class $\mathcal{C}^1$ and has a bounded Lipschitz continuous derivative. Then, the random variable $u(t,x)$ has a density $p$ with respect to the Lebesgue measure which satisfies the following: for almost every $z \in \mathbb{R}$,
\[
\frac{E|u(t,x)-m|}{C_2\, t^{1-\eta}} \exp\left( -\frac{(z-m)^2}{C_1\, t} \right) \leq p(z) \leq \frac{E|u(t,x)-m|}{C_1\, t} \exp\left( -\frac{(z-m)^2}{C_2\, t^{1-\eta}} \right),
\]
where $m = E(u(t,x))$ and $C_1, C_2$ are positive constants depending on $\sigma$, $\|b'\|_\infty$, $\eta$ and $T$.

Theorem 4.4 will be a consequence of [19, Theorem 3.1] and the following proposition. As we have done in Section 3.2, we use the notation $F = u(t,x) - E(u(t,x))$ and we recall that we need to find almost sure lower and upper bounds for the random variable $g(F)$, where
\[
g(F) = \int_0^{\infty} e^{-\theta}\, E\Big[ E'\Big( \big\langle Du(t,x), \widetilde{Du(t,x)} \big\rangle_{\mathcal{H}_T} \Big) \,\Big|\, F \Big]\, d\theta.
\]

Proposition 4.5
Fix
$T > 0$. Assume that Hypothesis H$_\eta$ holds for some $\eta \in (0,1)$ and that the coefficient $b$ is of class $\mathcal{C}^1$ and has a bounded Lipschitz continuous derivative. Then there exist positive constants $C_1, C_2$ such that
\[
C_1\, t \leq g(F) \leq C_2\, t^{1-\eta}, \quad \text{a.s.,} \tag{4.37}
\]
for all $t \in (0,T]$.

In order to prove Proposition 4.5, we will need the following technical lemma, which plays the role of Lemma 3.3 in our present setting.
Lemma 4.6
Let $t > 0$ and assume that Hypothesis H$_\eta$ holds. Then, there exists a positive constant $C$ depending on $\sigma$, $\|b'\|_\infty$ and the constant $k_2$ in (4.36), such that, for all $\delta \in (0,1]$,
\[
\sup_{\substack{(1-\delta)t \leq \nu \leq t \\ y \in \mathbb{R}^d}} E\left[ \int_{(1-\delta)t}^{t} \|D_r u(\nu,y)\|_{\mathcal{H}}^2\, dr \,\Big|\, F \right] \leq C\, (\delta t)^{\beta}, \quad \text{a.s.,} \tag{4.38}
\]
and
\[
\sup_{\theta \geq 0}\ \sup_{\substack{(1-\delta)t \leq \nu \leq t \\ y \in \mathbb{R}^d}} E\left[ E'\left( \int_{(1-\delta)t}^{t} \|\widetilde{D_r u}(\nu,y)\|_{\mathcal{H}}^2\, dr \right) \Big|\, F \right] \leq C\, (\delta t)^{\beta}, \quad \text{a.s.,} \tag{4.39}
\]
for any $\beta \in (0, 1-\eta]$.

Proof: It is very similar to that of Lemma 3.3. In fact, owing to Equation (4.32), we have, for any $(\nu,v) \in [(1-\delta)t, t] \times \mathbb{R}^d$,
\[
\int_{(1-\delta)t}^{t} \|D_r u(\nu,v)\|_{\mathcal{H}}^2\, dr \leq 2\sigma^2 \int_{(1-\delta)t}^{t} \|G_{\nu-r}(v-\star)\|_{\mathcal{H}}^2\, dr
+ 2 \|b'\|_\infty^2\, (\delta t) \int_{(1-\delta)t}^{\nu} \int_{\mathbb{R}^d} |G_{\nu-s}(v-y)| \left( \int_{(1-\delta)t}^{t} \|D_r u(s,y)\|_{\mathcal{H}}^2\, dr \right) dy\, ds, \tag{4.40}
\]
where we have applied Minkowski's and Cauchy–Schwarz's inequalities. By (4.36) in Lemma 4.2,
\[
\int_{(1-\delta)t}^{t} \|G_{\nu-r}(v-\star)\|_{\mathcal{H}}^2\, dr \leq \int_0^{\delta t} \int_{\mathbb{R}^d} |\mathcal{F} G_r(\xi)|^2\, \mu(d\xi)\, dr \leq k_2\, (\delta t)^{\beta},
\]
for all $\beta \in (0, 1-\eta]$. Therefore, plugging this bound into (4.40) and taking conditional expectation, we obtain:
\[
E\left[ \int_{(1-\delta)t}^{t} \|D_r u(\nu,v)\|_{\mathcal{H}}^2\, dr \,\Big|\, F \right] \leq 2\sigma^2 k_2\, (\delta t)^{\beta} + C \|b'\|_\infty^2\, (\delta t) \int_{(1-\delta)t}^{\nu} \sup_{\substack{(1-\delta)t \leq \tau \leq s \\ y \in \mathbb{R}^d}} E\left[ \int_{(1-\delta)t}^{t} \|D_r u(\tau,y)\|_{\mathcal{H}}^2\, dr \,\Big|\, F \right] ds, \quad \text{a.s.}
\]
As in the proof of Lemma 3.3, we are now in a position to apply Gronwall's lemma [5, Lemma 15]. Hence (4.38) is proved. The estimate (4.39) can be checked using exactly the same arguments. $\square$
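The closing Gronwall step can be made explicit as follows; this reformulation is ours, with $\Phi$ an auxiliary function not introduced in the original text.

```latex
% Set, for \nu \in [(1-\delta)t, t],
\Phi(\nu) := \sup_{\substack{(1-\delta)t \le \tau \le \nu \\ y \in \mathbb{R}^d}}
  E\Big[\int_{(1-\delta)t}^{t} \|D_r u(\tau,y)\|_{\mathcal{H}}^2\, dr \,\Big|\, F\Big],
% a nondecreasing function of \nu. The last display of the proof says that
\Phi(\nu) \;\le\; a + b \int_{(1-\delta)t}^{\nu} \Phi(s)\, h(\nu-s)\, ds,
\qquad a = 2\sigma^2 k_2 (\delta t)^{\beta},
% with h a locally integrable kernel (h \equiv 1 here; the extended
% Gronwall lemma [5, Lemma 15] also tolerates integrable singularities
% such as h(u) = u^{-1/2}) and b proportional to \|b'\|_\infty^2 (\delta t).
% The lemma then yields
\Phi(t) \;\le\; C' a \;=\; C' \, 2\sigma^2 k_2\, (\delta t)^{\beta},
% where C' depends only on b, h and T — which is precisely (4.38).
```

The key point is that the constant $C'$ does not depend on $\delta$, so the power $(\delta t)^\beta$ coming from the deterministic term survives the iteration.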
Proof of Proposition 4.5:
The framework of the proof is similar to that of Proposition 3.2 in Section 3.2. However, the computations here will be slightly more involved, since we are working in a Hilbert-space-valued setting determined by $\mathcal{H}_T = L^2([0,T]; \mathcal{H})$. Let us first deal with the lower bound in (4.37).

Step 1: The lower bound.
Recall that $F = u(t,x) - E(u(t,x))$ and that the random variable $g(F)$ can be written as
\[
g(F) = \int_0^\infty e^{-\theta} E\Big[ E'\Big( \big\langle Du(t,x), \widetilde{Du(t,x)} \big\rangle_{\mathcal{H}_T} \Big) \Big| F \Big] d\theta
= \int_0^\infty e^{-\theta} E\left[ E'\left( \int_0^t \big\langle D_r u(t,x), \widetilde{D_r u}(t,x) \big\rangle_{\mathcal{H}}\, dr \right) \Big| F \right] d\theta,
\]
where $\widetilde{Du(t,x)}$ denotes the shifted random variable $(Du(t,x))(e^{-\theta}\omega + \sqrt{1-e^{-2\theta}}\,\omega')$. According to Equation (4.30), for any $\delta \in (0,1]$ we have the decomposition
\[
g(F) \geq \sigma^2 B_1(t,x;\delta) - |B_2(t,x;\delta) + B_3(t,x;\delta) + B_4(t,x;\delta)|, \tag{4.41}
\]
where
\[
B_1(t,x;\delta) = \int_{(1-\delta)t}^{t} \|G_{t-r}(x-\star)\|_{\mathcal{H}}^2\, dr,
\]
\[
B_2(t,x;\delta) = \sigma\, E\left[ \int_{(1-\delta)t}^{t} \Big\langle G_{t-r}(x-\star),\, \int_r^t \int_{\mathbb{R}^d} G_{t-s}(x-y)\, b'(u(s,y))\, D_r u(s,y)\, dy\, ds \Big\rangle_{\mathcal{H}}\, dr \,\Big|\, F \right],
\]
\[
B_3(t,x;\delta) = \sigma \int_0^\infty e^{-\theta} E\left[ E'\left( \int_{(1-\delta)t}^{t} \Big\langle G_{t-r}(x-\star),\, \int_r^t \int_{\mathbb{R}^d} G_{t-s}(x-y)\, b'(\widetilde u(s,y))\, \widetilde{D_r u}(s,y)\, dy\, ds \Big\rangle_{\mathcal{H}}\, dr \right) \Big|\, F \right] d\theta,
\]
\[
B_4(t,x;\delta) = \int_0^\infty e^{-\theta} E\bigg[ E'\bigg( \int_{(1-\delta)t}^{t} \Big\langle \int_r^t \int_{\mathbb{R}^d} G_{t-s}(x-y)\, b'(u(s,y))\, D_r u(s,y)\, dy\, ds,\ \int_r^t \int_{\mathbb{R}^d} G_{t-s}(x-y)\, b'(\widetilde u(s,y))\, \widetilde{D_r u}(s,y)\, dy\, ds \Big\rangle_{\mathcal{H}}\, dr \bigg) \bigg|\, F \bigg] d\theta.
\]
(The restriction of the $dr$-integral to $[(1-\delta)t, t]$ in (4.41) is possible because $\langle D_r u(t,x), \widetilde{D_r u}(t,x) \rangle_{\mathcal{H}} \geq 0$, by the non-negativity of $D_{r,z} u(t,x)$.) By part 1 in Lemma 4.2, notice first that
\[
B_1(t,x;\delta) = \int_0^{\delta t} \int_{\mathbb{R}^d} |\mathcal{F} G_s(\xi)|^2\, \mu(d\xi)\, ds \geq k_1\, \delta t. \tag{4.42}
\]
Concerning the second term $B_2(t,x;\delta)$, we can apply the Cauchy–Schwarz and Minkowski inequalities, so that we obtain
\[
|B_2(t,x;\delta)| \leq C (\delta t)^{1/2} \left( \int_{(1-\delta)t}^{t} \|G_{t-r}(x-\star)\|_{\mathcal{H}}^2\, dr \right)^{1/2} E\left[ \left( \int_{(1-\delta)t}^{t} \Big\| \int_r^t \int_{\mathbb{R}^d} G_{t-s}(x-y)\, b'(u(s,y))\, D_r u(s,y)\, dy\, ds \Big\|_{\mathcal{H}}^2 dr \right)^{1/2} \Big|\, F \right]
\]
\[
\leq C (\delta t)^{1/2} \left( \int_0^{\delta t} \int_{\mathbb{R}^d} |\mathcal{F} G_r(\xi)|^2\, \mu(d\xi)\, dr \right)^{1/2} \left( \int_{(1-\delta)t}^{t} \int_{\mathbb{R}^d} |G_{t-s}(x-y)|\, E\left[ \int_{(1-\delta)t}^{t} \|D_r u(s,y)\|_{\mathcal{H}}^2\, dr \,\Big|\, F \right] dy\, ds \right)^{1/2}.
\]
Thus, by (4.36), Lemma 4.6 and the fact that
\[
\int_{(1-\delta)t}^{t} \int_{\mathbb{R}^d} |G_{t-s}(x-y)|\, dy\, ds \leq C\, \delta t, \tag{4.43}
\]
we have:
\[
|B_2(t,x;\delta)| \leq C\, (\delta t)^{\beta+1}, \tag{4.44}
\]
for all $\beta \in (0,1-\eta]$. The term $|B_3(t,x;\delta)|$ can be treated in the same way as we have just done for $|B_2(t,x;\delta)|$. Namely, one proves that
\[
|B_3(t,x;\delta)| \leq C (\delta t)^{1/2} \left( \int_{(1-\delta)t}^{t} \|G_{t-r}(x-\star)\|_{\mathcal{H}}^2\, dr \right)^{1/2} \int_0^\infty e^{-\theta} E\left[ E'\left( \int_{(1-\delta)t}^{t} \Big\| \int_r^t \int_{\mathbb{R}^d} G_{t-s}(x-y)\, b'(\widetilde u(s,y))\, \widetilde{D_r u}(s,y)\, dy\, ds \Big\|_{\mathcal{H}}^2 dr \right)^{1/2} \Big|\, F \right] d\theta
\]
\[
\leq C (\delta t)^{1/2} \left( \int_0^{\delta t} \int_{\mathbb{R}^d} |\mathcal{F} G_r(\xi)|^2\, \mu(d\xi)\, dr \right)^{1/2} \int_0^\infty e^{-\theta} \left( \int_{(1-\delta)t}^{t} \int_{\mathbb{R}^d} |G_{t-s}(x-y)|\, E\left[ E'\left( \int_{(1-\delta)t}^{t} \|\widetilde{D_r u}(s,y)\|_{\mathcal{H}}^2\, dr \right) \Big|\, F \right] dy\, ds \right)^{1/2} d\theta.
\]
Taking into account (4.36), Lemma 4.6 and (4.43), we also get
\[
|B_3(t,x;\delta)| \leq C\, (\delta t)^{\beta+1}, \tag{4.45}
\]
for all $\beta \in (0,1-\eta]$. Eventually, in order to deal with the term $|B_4(t,x;\delta)|$, we apply the Cauchy–Schwarz inequality with respect to the conditional expectation $E\big[ E'\big( \int_{(1-\delta)t}^{t} \|\bullet\|_{\mathcal{H}}^2\, dr \big) \big| F \big]$, so that we can take advantage of the computations performed so far.
More precisely, we have
\[
|B_4(t,x;\delta)| \leq \int_0^\infty e^{-\theta} \left( E\left[ \int_{(1-\delta)t}^{t} \Big\| \int_r^t \int_{\mathbb{R}^d} G_{t-s}(x-y)\, b'(u(s,y))\, D_r u(s,y)\, dy\, ds \Big\|_{\mathcal{H}}^2 dr \,\Big|\, F \right] \right)^{1/2} \left( E\left[ E'\left( \int_{(1-\delta)t}^{t} \Big\| \int_r^t \int_{\mathbb{R}^d} G_{t-s}(x-y)\, b'(\widetilde u(s,y))\, \widetilde{D_r u}(s,y)\, dy\, ds \Big\|_{\mathcal{H}}^2 dr \right) \Big|\, F \right] \right)^{1/2} d\theta.
\]
The two factors on the right-hand side already appeared in the analysis of $B_2(t,x;\delta)$ and $B_3(t,x;\delta)$, respectively, and each of them may be bounded, up to some constant, by $(\delta t)^{(\beta+2)/2}$. Therefore
\[
|B_4(t,x;\delta)| \leq C\, (\delta t)^{\beta+2}, \tag{4.46}
\]
for any $\beta \in (0,1-\eta]$. Estimations (4.41), (4.42) and (4.44)–(4.46) yield
\[
g(F) \geq \sigma^2 k_1\, \delta t - c\left( (\delta t)^{\beta+1} + (\delta t)^{\beta+2} \right),
\]
where $c$ depends on $\sigma$, $\|b'\|_\infty$ and $k_2$. Hence, if $\delta \leq 1 \wedge T$ (so that, in particular, $(\delta t)^{\beta+2} \leq T (\delta t)^{\beta+1}$) and $\beta \in (0, 1-\eta]$,
\[
g(F) \geq t\left( \sigma^2 k_1\, \delta - c\, \delta^{\beta+1} t^{\beta} \right) \geq t\left( \sigma^2 k_1\, \delta - c\, \delta^{\beta+1} T^{\beta} \right),
\]
where now $c$ also depends on $T$. Thus $C_1 := \sigma^2 k_1\, \delta - c\, \delta^{\beta+1} T^{\beta}$ defines a positive constant whenever $\delta \in (0, \delta_0)$, with
\[
\delta_0 = 1 \wedge T \wedge \frac{1}{T} \left( \frac{\sigma^2 k_1}{c} \right)^{1/\beta}.
\]
Therefore, we obtain the desired lower bound in (4.37).
Step 2: The upper bound.
The upper bound in (4.37) is an almost immediate consequence of the computations in Step 1 and the decomposition
\[
g(F) \leq \sigma^2 B_1(t,x;1) + \sum_{i=2}^{4} |B_i(t,x;1)|. \tag{4.47}
\]
Indeed, we have already found upper bounds for the last three terms on the right-hand side of (4.47). On the other hand, observe that (4.36) yields
\[
B_1(t,x;1) \leq C\, t^{\beta},
\]
for all $\beta \in (0,1-\eta]$. This bound, together with (4.44)–(4.46) in the case $\delta = 1$, implies
\[
g(F) \leq C\left( t^{\beta} + t^{\beta+1} + t^{\beta+2} \right) \leq C_2\, t^{\beta},
\]
for all $\beta \in (0,1-\eta]$, where the constant $C_2$ depends on $T$. Taking $\beta = 1-\eta$ concludes the proof. $\square$

The proof of Theorem 4.4 can now be finished as in the case of Theorem 3.1. $\square$
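The final step alluded to here — passing from Proposition 4.5 to Theorem 4.4 — rests on the density estimate of Nourdin and Viens. The following is our paraphrase of [19, Theorem 3.1]; the precise constants should be checked against that reference.

```latex
% [19, Theorem 3.1] (as used here): if F is a centered Malliavin
% differentiable random variable and there exist deterministic constants
% 0 < c_1 \le c_2 with c_1 \le g(F) \le c_2 a.s., then F has a density
% \rho satisfying, for almost every z \in \mathbb{R},
\frac{E|F|}{2c_2}\, \exp\!\Big(-\frac{z^2}{2c_1}\Big)
\ \le\ \rho(z)\ \le\
\frac{E|F|}{2c_1}\, \exp\!\Big(-\frac{z^2}{2c_2}\Big).
% Applying this with F = u(t,x) - m, c_1 = C_1 t and c_2 = C_2 t^{1-\eta}
% from Proposition 4.5, and translating by m, gives the two-sided bound
% of Theorem 4.4, the factors of 2 being absorbed into C_1 and C_2.
```

The same scheme, with $c_1 = C_1 t^3$ and $c_2 = C_2 t^{3-2\eta}$, yields the wave-equation result of the next section.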
5 The stochastic wave equation

The main objective here is to extend the results of Section 4 to a stochastic wave equation in space dimension $d \leq 3$, driven by the spatially homogeneous Gaussian noise considered there. The intrinsic properties of the differential operator governing the equation will not allow us to obtain optimal results for all time horizons $T > 0$, even if we assume that the noise's space correlation satisfies Hypothesis H$_\eta$, an assumption which is slightly stronger than the one needed for the existence of a density for the corresponding mild solution (see [21]).

5.1 The wave equation for d = 1, 2, 3

We consider here the same setting as in Section 4, but for the stochastic wave equation in spatial dimension $d \leq 3$:
\[
\frac{\partial^2 u}{\partial t^2}(t,x) - \Delta u(t,x) = b(u(t,x)) + \sigma \dot W(t,x), \qquad (t,x) \in [0,T] \times \mathbb{R}^d, \tag{5.48}
\]
where $T > 0$, $b : \mathbb{R} \to \mathbb{R}$ is a $\mathcal{C}^1$ function with bounded derivative, and we are given initial conditions of the form
\[
u(0,x) = u_0(x), \qquad \frac{\partial u}{\partial t}(0,x) = v_0(x), \qquad x \in \mathbb{R}^d,
\]
with $u_0, v_0 : \mathbb{R}^d \to \mathbb{R}$ measurable and bounded functions such that $u_0$ is of class $\mathcal{C}^1(\mathbb{R}^d)$ and has a bounded derivative $\nabla u_0$. The random perturbation $\dot W$ corresponds to the spatially homogeneous Gaussian noise described in Section 4.1. We recall that $\mu$ denotes the corresponding spectral measure and $\{\mathcal{F}_t,\, t \geq 0\}$ the filtration defined by the cylindrical Wiener process associated to the noise $W$.

The mild solution of Equation (5.48) is given by an $\{\mathcal{F}_t\}$-adapted process $\{u(t,x),\, (t,x) \in [0,T] \times \mathbb{R}^d\}$ such that, for all $(t,x) \in (0,T] \times \mathbb{R}^d$,
\[
u(t,x) = \int_{\mathbb{R}^d} v_0(x-y)\, \Gamma_t^d(dy) + \frac{\partial}{\partial t}\left( \int_{\mathbb{R}^d} u_0(x-y)\, \Gamma_t^d(dy) \right)
+ \int_0^t \int_{\mathbb{R}^d} b(u(s,x-y))\, \Gamma_{t-s}^d(dy)\, ds + \sigma \int_0^t \int_{\mathbb{R}^d} \Gamma_{t-s}^d(x-y)\, W(ds,dy), \tag{5.49}
\]
where $\Gamma_t^d$, $t > 0$, denotes the fundamental solution of the wave equation in dimension $d = 1, 2, 3$:
\[
\Gamma_t^1(x) = \frac{1}{2}\, 1_{\{|x| < t\}}, \qquad
\Gamma_t^2(x) = \frac{1}{2\pi} \frac{1}{\sqrt{t^2 - |x|^2}}\, 1_{\{|x| < t\}}, \qquad
\Gamma_t^3 = \frac{1}{4\pi t}\, \sigma_t,
\]
where $\sigma_t$ denotes the uniform surface measure on the sphere of radius $t$. In particular, for $d = 1, 2$, $\Gamma_t^d$ is a function, while for $d = 3$ it is a non-negative measure with compact support. In all three cases we assume that the spectral measure satisfies
\[
\int_{\mathbb{R}^d} \frac{1}{1+|\xi|^2}\, \mu(d\xi) < +\infty. \tag{5.50}
\]
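The explicit computations behind the estimate (5.54) below amount to checking that all three fundamental solutions have total mass $\int_{\mathbb{R}^d} \Gamma_s^d(dy) = s$; the following sketch is ours.

```latex
% d = 1:  \int \tfrac12\, 1_{\{|y|<s\}}\, dy = s.
% d = 2:  \int_{\{|y|<s\}} \frac{dy}{2\pi\sqrt{s^2-|y|^2}}
%          = \int_0^s \frac{r\, dr}{\sqrt{s^2-r^2}} = s.
% d = 3:  \frac{1}{4\pi s}\,\sigma_s(\mathbb{R}^3)
%          = \frac{4\pi s^2}{4\pi s} = s.
% Consequently, for every t \ge 0,
\int_0^t \int_{\mathbb{R}^d} \Gamma_s^d(dy)\, ds = \int_0^t s\, ds = \frac{t^2}{2},
% which gives (5.54) with C = 1/2.
```

This mass computation is also what produces the factor $t$ (from the Cauchy–Schwarz inequality with respect to the measure $\Gamma_{t-s}^d(dy)\,ds$) in the proof of Proposition 5.5.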
We also point out that the stochastic integral on the right-hand side of (5.49) is a well-defined integral of a deterministic element of $\mathcal{H}_T$ with respect to the cylindrical Wiener process associated to the noise (see Lemma 3.2 and Example 4.2 in [21]).

For all dimensions $d \geq 1$, we have a unified expression for the Fourier transform of $\Gamma_t^d$:
\[
\mathcal{F}\Gamma_t^d(\xi) = \frac{\sin(2\pi t |\xi|)}{2\pi |\xi|}.
\]
Using this fact and assuming that (5.50) holds, one proves the following lemma (see [13], Lemmas 5.4.1 and 5.4.3):

Lemma 5.1 For any $t \geq 0$ it holds that
\[
c_1\, (t \wedge t^3)\, \frac{1}{1+|\xi|^2} \leq \int_0^t |\mathcal{F}\Gamma_s^d(\xi)|^2\, ds \leq c_2\, (t + t^3)\, \frac{1}{1+|\xi|^2}, \tag{5.51}
\]
with some positive constants $c_1, c_2 > 0$.

Thus, if we assume that $\int_{\mathbb{R}^d} \frac{\mu(d\xi)}{1+|\xi|^2} < \infty$, (5.51) yields
\[
d_1\, (t \wedge t^3) \leq \int_0^t \int_{\mathbb{R}^d} |\mathcal{F}\Gamma_s^d(\xi)|^2\, \mu(d\xi)\, ds \leq d_2\, (t + t^3),
\]
for all $t \geq 0$, with some positive constants $d_1, d_2$. In particular, for $t \in [0,1]$ we have
\[
d_1\, t^3 \leq \int_0^t \int_{\mathbb{R}^d} |\mathcal{F}\Gamma_s^d(\xi)|^2\, \mu(d\xi)\, ds \leq d_2\, t. \tag{5.52}
\]
However, under Hypothesis H$_\eta$ (see (4.35)) one can get a slightly sharper upper estimate (see [23, Lemma 3]):

Lemma 5.2 Let $T > 0$ and assume that Hypothesis H$_\eta$ holds. Then
\[
\int_0^t \int_{\mathbb{R}^d} |\mathcal{F}\Gamma_s^d(\xi)|^2\, \mu(d\xi)\, ds \leq d_3\, t^{3-2\eta}, \tag{5.53}
\]
for all $t \in [0,T]$.

Eventually, if $d = 1, 2, 3$, explicit computations yield that, for any $t \geq 0$,
\[
\int_0^t \int_{\mathbb{R}^d} \Gamma_s^d(dy)\, ds \leq C\, t^2, \tag{5.54}
\]
where $C$ is a positive constant that only depends on $d$.

If $b$ belongs to $\mathcal{C}^1$ and has a Lipschitz continuous bounded derivative, then the solution $u(t,x)$, at any $(t,x) \in (0,T] \times \mathbb{R}^d$, belongs to $\mathbb{D}^{1,2}$ and its Malliavin derivative, as a random variable taking values in $\mathcal{H}_T = L^2([0,T]; \mathcal{H})$, satisfies
\[
Du(t,x) = \sigma \Gamma_{t-\cdot}^d(x-\star) + \int_0^t \int_{\mathbb{R}^d} b'(u(s,x-y))\, Du(s,x-y)\, \Gamma_{t-s}^d(dy)\, ds, \tag{5.55}
\]
where "$\star$" stands for the $\mathcal{H}$-variable (see [21, Proposition 5.1]). This linear equation is understood in $L^2(\Omega \times [0,T]; \mathcal{H})$, and let us remark that $\sigma \Gamma_{t-\cdot}^d(x-\star)$ is a well-defined element of $\mathcal{H}_T$ (see [21, Lemma 3.2]).
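The two-sided bound of Lemma 5.1 can be read off an exact computation; the following sketch, with its rough treatment of the intermediate regime, is ours and not part of the cited proofs.

```latex
% Since \mathcal{F}\Gamma^d_s(\xi) = \sin(2\pi s|\xi|)/(2\pi|\xi|),
\int_0^t |\mathcal{F}\Gamma^d_s(\xi)|^2\, ds
  = \frac{1}{4\pi^2|\xi|^2} \int_0^t \sin^2(2\pi s|\xi|)\, ds
  = \frac{1}{4\pi^2|\xi|^2}
    \left( \frac{t}{2} - \frac{\sin(4\pi t|\xi|)}{8\pi|\xi|} \right).
% For t|\xi| \ge 1 the parenthesis is comparable to t, and
% |\xi|^{-2} \asymp (1+|\xi|^2)^{-1} in this regime, giving the
% contribution t/(1+|\xi|^2) up to constants. For t|\xi| small,
% \sin^2(2\pi s|\xi|) \asymp s^2 |\xi|^2, and the integral is of order
% t^3|\xi|^2/|\xi|^2 \asymp t^3 \asymp t^3(1+|\xi|^2)^{-1}.
% Adjusting constants to cover the intermediate range yields (5.51):
c_1 (t \wedge t^3)\, \frac{1}{1+|\xi|^2}
  \ \le\ \int_0^t |\mathcal{F}\Gamma^d_s(\xi)|^2\, ds
  \ \le\ c_2 (t + t^3)\, \frac{1}{1+|\xi|^2}.
```

Integrating the upper regime against $\mu(d\xi)$ and trading a power $t^{2\eta}$ for a factor $(1+|\xi|^2)^{-\eta}$ is what produces the exponent $3-2\eta$ in Lemma 5.2.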
Moreover, under the standing hypotheses, the random variable $u(t,x)$, for $(t,x) \in (0,T] \times \mathbb{R}^d$, has a density with respect to the Lebesgue measure (see [21, Theorem 5.2]). Of course, the Gaussian setting here is the same as the one considered in Section 4.

5.2 Gaussian estimates of the density at small time

For $T > 0$, consider the unique mild solution $\{u(t,x),\, (t,x) \in [0,T] \times \mathbb{R}^d\}$ to Equation (5.48). In this section, we will prove that the density $p$ of $u(t,x)$ has lower and upper Gaussian bounds whenever $T$ is small, which essentially means that $T < 1$. The main result is the following:

Theorem 5.3 Suppose that Hypothesis H$_\eta$ is satisfied and that the coefficient $b$ is of class $\mathcal{C}^1$ and has a bounded Lipschitz continuous derivative. Then, there exists $T_0 \in (0,1]$ such that the following statement holds: for any $T \in (0,T_0)$ and $(t,x) \in (0,T] \times \mathbb{R}^d$, the random variable $u(t,x)$ has a density $p$ with respect to the Lebesgue measure such that, for almost every $z \in \mathbb{R}$,
\[
\frac{E|u(t,x)-m|}{C_2\, t^{3-2\eta}} \exp\left( -\frac{(z-m)^2}{C_1\, t^3} \right) \leq p(z) \leq \frac{E|u(t,x)-m|}{C_1\, t^3} \exp\left( -\frac{(z-m)^2}{C_2\, t^{3-2\eta}} \right),
\]
where $m = E(u(t,x))$ and $C_1, C_2$ are positive constants depending on $\sigma$, $\|b'\|_\infty$, $T$ and $\eta$.

Remark 5.4 In the case of the stochastic heat equation presented in Section 4, we were able to obtain Gaussian upper and lower bounds for any $T > 0$, while here we restrict our analysis to small $T$. As will be made precise in Proposition 5.5, this difference is due to the fact that the Malliavin derivative of the solution to the stochastic wave equation need not be a non-negative function.

The statement of Theorem 5.3 is an immediate consequence of [19, Theorem 3.1] and the following proposition.
For $t > 0$ and $x \in \mathbb{R}^d$, set $F = u(t,x) - E(u(t,x))$ and recall that we need to find almost sure lower and upper bounds for the random variable $g(F)$, where
\[
g(F) = \int_0^\infty e^{-\theta} E\Big[ E'\Big( \big\langle Du(t,x), \widetilde{Du(t,x)} \big\rangle_{\mathcal{H}_T} \Big) \Big|\, F \Big]\, d\theta. \tag{5.56}
\]

Proposition 5.5 Assume that Hypothesis H$_\eta$ holds. There exist $T_0 \in (0,1]$ and positive constants $C_1, C_2$ such that, for any $T \in (0,T_0)$,
\[
C_1\, t^3 \leq g(F) \leq C_2\, t^{3-2\eta}, \quad \text{a.s.,} \tag{5.57}
\]
for all $t \in (0,T]$.

Proof: It follows the same lines as the proof of Proposition 4.5, so we only point out the main steps. More precisely, we observe first that the Malliavin derivative $Du(t,x)$ solves the linear equation (5.55), with a non-negative initial condition but driven by a hyperbolic operator. Thus, in comparison with the stochastic heat equation, $D_{r,z} u(t,x)$ need not be non-negative as a function of $(r,z)$; indeed, in the case $d = 3$ the Malliavin derivative need not even be a function. Hence, in order to deal with the lower bound of $g(F)$ (see (5.56)), we will not be able to restrict the integral with respect to $dr$ to a small time interval as we did in the proof of Proposition 4.5. This is the reason why we are forced to take $T < 1$.
In fact, by (5.55), we are only able to consider the decomposition
\[
g(F) \geq D_1(t) - \big( |D_2(t)| + |D_3(t)| + |D_4(t)| \big), \tag{5.58}
\]
where
\[
D_1(t) = \sigma^2 \int_0^t \|\Gamma_{t-r}^d(x-\star)\|_{\mathcal{H}}^2\, dr,
\]
\[
D_2(t) = \sigma\, E\left[ \int_0^t \Big\langle \Gamma_{t-r}^d(x-\star),\, \int_r^t \int_{\mathbb{R}^d} b'(u(s,x-y))\, D_r u(s,x-y)\, \Gamma_{t-s}^d(dy)\, ds \Big\rangle_{\mathcal{H}}\, dr \,\Big|\, F \right],
\]
\[
D_3(t) = \sigma \int_0^\infty e^{-\theta} E\left[ E'\left( \int_0^t \Big\langle \Gamma_{t-r}^d(x-\star),\, \int_r^t \int_{\mathbb{R}^d} b'(\widetilde u(s,x-y))\, \widetilde{D_r u}(s,x-y)\, \Gamma_{t-s}^d(dy)\, ds \Big\rangle_{\mathcal{H}}\, dr \right) \Big|\, F \right] d\theta,
\]
\[
D_4(t) = \int_0^\infty e^{-\theta} E\bigg[ E'\bigg( \int_0^t \Big\langle \int_r^t \int_{\mathbb{R}^d} b'(u(s,x-y))\, D_r u(s,x-y)\, \Gamma_{t-s}^d(dy)\, ds,\ \int_r^t \int_{\mathbb{R}^d} b'(\widetilde u(s,x-y))\, \widetilde{D_r u}(s,x-y)\, \Gamma_{t-s}^d(dy)\, ds \Big\rangle_{\mathcal{H}}\, dr \bigg) \bigg|\, F \bigg] d\theta.
\]
By the lower bound in (5.52), we have
\[
D_1(t) \geq \sigma^2 d_1\, t^3. \tag{5.59}
\]
Concerning the term $D_2$, we can argue as follows:
\[
|D_2(t)| \leq C \left( \int_0^t \|\Gamma_{t-r}^d(x-\star)\|_{\mathcal{H}}^2\, dr \right)^{1/2} E\left[ \left( \int_0^t \Big\| \int_r^t \int_{\mathbb{R}^d} b'(u(s,x-y))\, D_r u(s,x-y)\, \Gamma_{t-s}^d(dy)\, ds \Big\|_{\mathcal{H}}^2 dr \right)^{1/2} \Big|\, F \right]
\]
\[
\leq C\, t^{(3-2\eta)/2}\, E\left[ \left( \int_0^t \left( \int_r^t \int_{\mathbb{R}^d} \|D_r u(s,x-y)\|_{\mathcal{H}}\, \Gamma_{t-s}^d(dy)\, ds \right)^2 dr \right)^{1/2} \Big|\, F \right]
\]
\[
\leq C\, t^{(3-2\eta)/2}\, t \left( \int_0^t \int_{\mathbb{R}^d} E\left[ \int_0^t \|D_r u(s,x-y)\|_{\mathcal{H}}^2\, dr \,\Big|\, F \right] \Gamma_{t-s}^d(dy)\, ds \right)^{1/2} \leq C\, t^{5-2\eta}, \tag{5.60}
\]
where we have used (5.53), (5.54) and (5.63) in Lemma 5.6 below. Using similar arguments one proves that
\[
|D_3(t)| \leq C\, t^{5-2\eta}. \tag{5.61}
\]
The analysis of $|D_4(t)|$ can also be performed by following the calculations leading to (5.60), so that we end up with
\[
|D_4(t)| \leq C\, t^{7-2\eta}. \tag{5.62}
\]
Plugging the estimates (5.59)–(5.62) into (5.58) yields
\[
g(F) \geq \sigma^2 d_1\, t^3 - c\left( t^{5-2\eta} + t^{7-2\eta} \right),
\]
for all $t \in (0,T]$, where $c$ is a positive constant depending on $\sigma$, $\|b'\|_\infty$ and $\eta$.
Hence, if $T < 1$ we have
\[
g(F) \geq t^3 \left( \sigma^2 d_1 - 2c\, T^{2-2\eta} \right),
\]
and the quantity $C_1 := \sigma^2 d_1 - 2c\, T^{2-2\eta}$ is strictly positive whenever $T < T_0$, where
\[
T_0 = 1 \wedge \left( \frac{\sigma^2 d_1}{2c} \right)^{\frac{1}{2-2\eta}}.
\]
Therefore, we have proved the lower bound in (5.57). The upper bound in (5.57) is an immediate consequence of what we have done so far and (5.53), because
\[
g(F) \leq \sum_{i=1}^{4} |D_i(t)| \leq C_2\, t^{3-2\eta}. \qquad \square
\]

In the proof of Proposition 5.5, we have applied the following technical lemma, whose proof is very similar to that of Lemma 4.6:

Lemma 5.6 Let $t > 0$ and assume that Hypothesis H$_\eta$ holds. Then, there exists a positive constant $K$ depending on $\sigma$, $\|b'\|_\infty$ and the constant $d_3$ in Lemma 5.2, such that
\[
\sup_{\substack{0 \leq s \leq t \\ y \in \mathbb{R}^d}} E\left[ \int_0^t \|D_r u(s,y)\|_{\mathcal{H}}^2\, dr \,\Big|\, F \right] \leq K\, t^{3-2\eta} \tag{5.63}
\]
and
\[
\sup_{\theta \geq 0}\ \sup_{\substack{0 \leq s \leq t \\ y \in \mathbb{R}^d}} E\left[ E'\left( \int_0^t \|\widetilde{D_r u}(s,y)\|_{\mathcal{H}}^2\, dr \right) \Big|\, F \right] \leq K\, t^{3-2\eta}. \tag{5.64}
\]

Acknowledgement This work has been mainly carried out while the second named author was visiting the Department of Mathematics at the University of Kansas. He would like to thank Professors David Nualart and Yaozhong Hu for their kind hospitality.

References

[1] Bally, V. Lower bounds for the density of locally elliptic Itô processes. Ann. Probab. 34 (2006), no. 6, 2406–2440.

[2] Bally, V.; Gyöngy, I.; Pardoux, É. White noise driven parabolic SPDEs with measurable drift. J. Funct. Anal. 120 (1994), no. 2, 484–510.

[3] Bally, V.; Pardoux, E. Malliavin calculus for white noise driven parabolic SPDEs. Potential Anal. 9 (1998), no. 1, 27–64.

[4] Carmona, R.; Nualart, D. Random nonlinear wave equations: smoothness of the solutions. Probab. Theory Relat. Fields (1988), no. 4, 469–508.

[5] Dalang, R. C. Extending martingale measure stochastic integral with applications to spatially homogeneous s.p.d.e.'s. Electron. J. Probab. (1999), no. 6, 29 pp.

[6] Dalang, R. C.; Khoshnevisan, D.; Nualart, E.
Hitting probabilities for systems of non-linear stochastic heat equations with additive noise. ALEA Lat. Am. J. Probab. Math. Stat. (2007), 231–271.

[7] Dalang, R. C.; Khoshnevisan, D.; Nualart, E. Hitting probabilities for systems of non-linear stochastic heat equations with multiplicative noise. Annals of Probability, to appear.

[8] Dalang, R. C.; Nualart, E. Potential theory for hyperbolic SPDEs. Ann. Probab. (2004), no. 3A, 2099–2148.

[9] Dalang, R. C.; Quer-Sardanyons, L. Work in preparation.

[10] Nualart, E. Exponential divergence estimates and heat kernel tail. C. R. Math. Acad. Sci. Paris (2004), no. 1, 77–80.

[11] Kohatsu-Higa, A. Lower bounds for densities of uniformly elliptic random variables on Wiener space. Probab. Theory Related Fields (2003), no. 3, 421–457.

[12] Kusuoka, S.; Stroock, D. Applications of the Malliavin calculus, part III. J. Fac. Sci. Univ. Tokyo Sect. IA Math. (1987), 391–442.

[13] Lévêque, O. Hyperbolic stochastic partial differential equations driven by boundary noises. Ph.D. thesis, EPFL (2001).

[14] Malliavin, P.; Nualart, E. Density minoration of a strongly non degenerated random variable. Preprint.

[15] Márquez-Carreras, D.; Mellouk, M.; Sarrà, M. On stochastic partial differential equations with spatially correlated noise: smoothness of the law. Stoch. Proc. Appl. (2001), 269–284.

[16] Millet, A.; Sanz-Solé, M. A stochastic wave equation in two space dimensions: smoothness of the law. Ann. Probab. (1999), no. 2, 803–844.

[17] Morien, P. L. The Hölder and the Besov regularity of the density for the solution of a parabolic stochastic partial differential equation. Bernoulli (1999), no. 2, 275–298.

[18] Nourdin, I.; Peccati, G. Stein's method on Wiener chaos. Probab. Theory and Rel. Fields, to appear.

[19] Nourdin, I.; Viens, F. Density estimates and concentration inequalities with Malliavin calculus. Preprint.

[20] Nualart, D. The Malliavin Calculus and Related Topics, second edition.
Springer-Verlag, Berlin (2006).

[21] Nualart, D.; Quer-Sardanyons, L. Existence and smoothness of the density for spatially homogeneous SPDEs. Potential Anal. (2007), no. 3, 281–299.

[22] Quer-Sardanyons, L. The stochastic wave equation: study of the law and approximations. Ph.D. thesis, Universitat de Barcelona (2005).

[23] Quer-Sardanyons, L.; Sanz-Solé, M. Absolute continuity of the law of the solution to the 3-dimensional stochastic wave equation. J. Funct. Anal. (2004), no. 1, 1–32.

[24] Quer-Sardanyons, L.; Sanz-Solé, M. A stochastic wave equation in dimension 3: smoothness of the law. Bernoulli 10