Random homogenisation of a highly oscillatory singular potential
Martin Hairer, Etienne Pardoux, Andrey Piatnitski
Abstract
In this article, we consider the problem of homogenising the linear heat equation perturbed by a rapidly oscillating random potential. We consider the situation where the space-time scaling of the potential's oscillations is not given by the diffusion scaling that leaves the heat equation invariant. Instead, we treat the case where spatial oscillations are much faster than temporal oscillations. Under a suitable scaling of the amplitude of the potential, we prove convergence to a deterministic heat equation with constant potential, thus completing the results previously obtained in [PP12].
1 Introduction

We consider the parabolic PDE with space-time random potential given by
\[
\partial_t u_\varepsilon(x,t) = \partial_x^2 u_\varepsilon(x,t) + \varepsilon^{-\beta}\, V\Bigl(\frac{x}{\varepsilon}, \frac{t}{\varepsilon^\alpha}\Bigr)\, u_\varepsilon(x,t)\;, \qquad u_\varepsilon(x,0) = u_0(x)\;, \tag{1.1}
\]
where $x \in \mathbf{R}$, $t \ge 0$, and $V$ is a stationary centred random field. The homogenisation theory of equations of this type has been studied by a number of authors. The case when $V$ is time-independent was considered in [IPP08, Bal10]. The articles [CKP01, DIPP06] considered a situation where $V$ is a stationary process as a function of time, but periodic in space. Purely periodic / quasiperiodic operators with large potential were also studied in [BLP78, Koz83].

For $\alpha \ge 2$ and $\beta = \alpha/2$, (1.1) was studied in [PP12], where it was shown that its solutions converge as $\varepsilon \to 0$ to the solutions of the deterministic heat equation
\[
\partial_t u(x,t) = \partial_x^2 u(x,t) + \bar V u(x,t)\;, \qquad u(x,0) = u_0(x)\;, \tag{1.2}
\]
where the constant $\bar V$ is given by
\[
\bar V = \int_0^\infty \Phi(0,t)\,dt\;, \tag{1.3}
\]
in the case $\alpha > 2$, and by
\[
\bar V = \int_0^\infty\!\!\int_{-\infty}^\infty \frac{e^{-x^2/4t}}{\sqrt{4\pi t}}\,\Phi(x,t)\,dx\,dt\;, \tag{1.4}
\]
in the case $\alpha = 2$. Here, $\Phi(x,t) = \mathbf{E}\,V(0,0)\,V(x,t)$ is the correlation function of $V$, which is assumed to decay sufficiently fast.

In the case $0 < \alpha < 2$, it was conjectured in [PP12] that the correct scaling to use in order to obtain a non-trivial limit is $\beta = 1/2 + \alpha/4$, but the corresponding value of $\bar V$ was not obtained. Furthermore, the techniques used there seem to break down in this case. The main result of the present article is that the conjecture does indeed hold true and that the solutions to (1.1) do again converge to those of (1.2) as $\varepsilon \to 0$. This time, the limiting constant $\bar V$ is given by
\[
\bar V = \frac{1}{2\sqrt\pi}\int_0^\infty \frac{\bar\Phi(t)}{\sqrt t}\,dt\;, \tag{1.5}
\]
where we have set $\bar\Phi(s) := \int_{\mathbf{R}} \Phi(x,s)\,dx$.

Remark 1.1
One can “guess” both (1.3) and (1.5) if we admit that (1.4) holds. Indeed, (1.3) is obtained from (1.4) by replacing $\Phi(x,t)$ by $\Phi(\delta x, t)$ and taking the limit $\delta \to 0$. This reflects the fact that this corresponds to a situation in which, at the diffusive scale, the temporal oscillations of the potential are faster than the spatial oscillations. Similarly, (1.5) is obtained by replacing $\Phi(x,t)$ with $\delta^{-1}\Phi(\delta^{-1}x, t)$ and then taking the limit $\delta \to 0$, reflecting the fact that we are in the reverse situation where spatial oscillations are faster. These arguments also allow one to guess the correct exponent $\beta$ in both regimes.

The techniques employed in the present article are very different from [PP12]: instead of relying on probabilistic techniques, we adapt the analytical techniques from [Hai13a]. From now on, we will rewrite (1.1) as
\[
\partial_t u_\varepsilon(x,t) = \partial_x^2 u_\varepsilon(x,t) + V_\varepsilon(x,t)\,u_\varepsilon(x,t)\;, \qquad u_\varepsilon(x,0) = u_0(x)\;,
\]
where $V_\varepsilon$ is the rescaled potential given by
\[
V_\varepsilon(x,t) = \varepsilon^{-(1/2+\alpha/4)}\,V\Bigl(\frac{x}{\varepsilon}, \frac{t}{\varepsilon^\alpha}\Bigr)\;.
\]
Before we proceed, we give a more precise description of our assumptions on the random potential $V$. Besides some regularity and integrability assumptions, our main assumption will be a sufficiently fast decay of maximal correlations for $V$. Recall that the “maximal correlation coefficient” of $V$, subsequently denoted by $\varrho$, is given by the following definition where, for any given compact set $K \subset \mathbf{R}^2$, we denote by $\mathscr{F}_K$ the $\sigma$-algebra generated by $\{V(x,t) : (x,t) \in K\}$.

Definition 1.2
For any $r > 0$, $\varrho(r)$ is the smallest value such that the bound
\[
\mathbf{E}\bigl(\varphi_1(V)\,\varphi_2(V)\bigr) \le \varrho(r)\,\sqrt{\mathbf{E}\varphi_1^2(V)\;\mathbf{E}\varphi_2^2(V)}\;,
\]
holds for any two compact sets $K_1$, $K_2$ such that
\[
d(K_1,K_2) := \inf_{(x_1,t_1)\in K_1}\ \inf_{(x_2,t_2)\in K_2}\bigl(|x_1-x_2| + |t_1-t_2|\bigr) \ge r\;,
\]
and any two random variables $\varphi_i(V)$ such that $\varphi_i(V)$ is $\mathscr{F}_{K_i}$-measurable and $\mathbf{E}\varphi_i(V) = 0$.

Note that $\varrho$ is a decreasing function. With this notation at hand, we then make the following assumption:

Assumption 1.3

The field $V$ is stationary, centred, continuous, and $\mathcal{C}^1$ in the $x$-variable. Furthermore, $\mathbf{E}\bigl(|V(x,t)|^p + |\partial_x V(x,t)|^p\bigr) < \infty$ for every $p > 0$.

For most of our results, we will furthermore require that the correlations of $V$ decay sufficiently fast in the following sense:

Assumption 1.4

The maximal correlation function $\varrho$ from Definition 1.2 satisfies $\varrho(R) \lesssim (1+R)^{-q}$ for every $q > 0$.

Remark 1.5
Retracing the steps of our proof, one can see that in order to obtain our main result, Theorem 1.8, we actually only need this bound for some sufficiently large $q$. Similarly, the assumption on the $x$-differentiability of $V$ is not absolutely necessary, but simplifies some of our arguments.

Let us first give a few examples of random fields satisfying our assumptions.

Example 1.6
Take a measure space $(\mathcal{M},\nu)$ with some finite measure $\nu$ and a function $\psi\colon \mathcal{M}\times\mathbf{R}^2 \to \mathbf{R}$ such that
\[
\sup_{m\in\mathcal{M}}\ \sup_{x,t}\ \bigl(|\psi(m,x,t)| + |\partial_x\psi(m,x,t)|\bigr)\bigl(|x|^q + |t|^q\bigr) < \infty\;,
\]
for all $q > 0$. Assume furthermore that $\psi$ satisfies the centering condition
\[
\int_{\mathbf{R}}\int_{\mathbf{R}}\int_{\mathcal{M}} \psi(m,y,s)\,\nu(dm)\,dy\,ds = 0\;.
\]
Consider now a realisation $\mu$ of the Poisson point process on $\mathcal{M}\times\mathbf{R}^2$ with intensity measure $\nu(dm)\,dy\,ds$ and set
\[
V(x,t) = \int_{\mathcal{M}}\int_{\mathbf{R}}\int_{\mathbf{R}} \psi(m,\, y-x,\, s-t)\,\mu(dm,dy,ds)\;.
\]
Then $V$ satisfies Assumptions 1.3 and 1.4.

Example 1.7
Take for $V$ a centred Gaussian field with covariance $\Phi$ such that
\[
\sup_{x,t}\ \bigl(|\Phi(x,t)| + |\partial_x\Phi(x,t)|\bigr)\bigl(|x|^q + |t|^q\bigr) < \infty\;,
\]
for all $q > 0$. Then $V$ does not quite satisfy Assumptions 1.3 and 1.4 because $V$ and $\partial_x V$ are not necessarily continuous. However, it is easy to check that our proofs still work in this case.

The advantage of Definition 1.2 is that it is invariant under composition by measurable functions. In particular, consider a finite number of independent random fields $\{V_1,\dots,V_k\}$ of the type of Examples 1.6 and 1.7 (or, more generally, any mutually independent fields satisfying Assumptions 1.3 and 1.4) and a function $F\colon \mathbf{R}^k \to \mathbf{R}$ such that

1. $\mathbf{E}\,F(V_1(x,t),\dots,V_k(x,t)) = 0$,
2. $F$, together with its first partial derivatives, grows no faster than polynomially at infinity.

Then, our results hold with $V(x,t) = F(V_1(x,t),\dots,V_k(x,t))$.

Consider the solution to the heat equation with constant potential
\[
\partial_t u(x,t) = \partial_x^2 u(x,t) + \bar V u(x,t)\;, \quad t\ge 0\;,\ x\in\mathbf{R}\;; \qquad u(x,0) = u_0(x)\;, \tag{1.6}
\]
where $\bar V$ is defined by (1.5). Then, the main result of this article is the following convergence result:

Theorem 1.8
Let $V$ be a random potential satisfying Assumptions 1.3 and 1.4, and let $u_0 \in \mathcal{C}^{1/2}(\mathbf{R})$ be of no more than exponential growth. Then, as $\varepsilon \to 0$, one has $u_\varepsilon(x,t) \to u(x,t)$ in probability, locally uniformly in $x\in\mathbf{R}$ and $t \ge 0$.

Remark 1.9
The precise assumption on $u_0$ is that it belongs to the space $\mathcal{C}^{1/2}_{e_\ell}$ for some $\ell \in \mathbf{R}$; see Section 2.1 below for the definition of this space.

Remark 1.10
The fact that $\mathbf{E} V = 0$ is of course not essential, since one can easily subtract the mean by performing a suitable rescaling of the solution.

To prove Theorem 1.8, we use the standard “trick” of introducing a corrector that “kills” the large potential $V_\varepsilon$ to highest order. The less usual feature of this problem is that, in order to obtain the required convergence, it turns out to be advantageous to use two correctors, which ensures that the remaining terms can be brought under control. These correctors, which we denote by $Y_\varepsilon$ and $Z_\varepsilon$, are given by the solutions to the following inhomogeneous heat equations:
\[
\partial_t Y_\varepsilon(x,t) = \partial_x^2 Y_\varepsilon(x,t) + V_\varepsilon(x,t)\;, \qquad
\partial_t Z_\varepsilon(x,t) = \partial_x^2 Z_\varepsilon(x,t) + |\partial_x Y_\varepsilon(x,t)|^2 - \bar V_\varepsilon(t)\;, \tag{1.7}
\]
where we have set $\bar V_\varepsilon(t) = \mathbf{E}|\partial_x Y_\varepsilon(x,t)|^2$. In both cases, we start with the flat (zero) initial condition at $t = 0$. Writing
\[
v_\varepsilon(x,t) = u_\varepsilon(x,t)\,\exp\bigl[-(Y_\varepsilon(x,t) + Z_\varepsilon(x,t))\bigr]\;,
\]
Theorem 1.8 is then a consequence of the following two claims:

1. Both $Y_\varepsilon$ and $Z_\varepsilon$ converge locally uniformly to 0.
2. The process $v_\varepsilon$ converges locally uniformly to the solution $u$ of (1.6).

A direct calculation shows that $v_\varepsilon$ solves the equation
\[
\partial_t v_\varepsilon = \partial_x^2 v_\varepsilon + \bar V_\varepsilon v_\varepsilon + 2(\partial_x Y_\varepsilon + \partial_x Z_\varepsilon)\,\partial_x v_\varepsilon + \bigl(|\partial_x Z_\varepsilon|^2 + 2\,\partial_x Z_\varepsilon\,\partial_x Y_\varepsilon\bigr)\,v_\varepsilon\;, \tag{1.8}
\]
with initial condition $u_0$. The second claim will then essentially follow from the first (except that, due to the appearance of nonlinear terms involving the derivatives of the correctors, we need somewhat tighter control than just locally uniform convergence), combined with the fact that the function $\bar V_\varepsilon(t)$ converges locally uniformly to the constant $\bar V$.

Remark 1.11

One way of “guessing” the correct form of the correctors $Y_\varepsilon$ and $Z_\varepsilon$ is to note the analogy of the problem with that of building solutions to the KPZ equation. Indeed, performing the Cole-Hopf transform $h_\varepsilon = \log u_\varepsilon$, one obtains for $h_\varepsilon$ the equation
\[
\partial_t h_\varepsilon = \partial_x^2 h_\varepsilon + \bigl(\partial_x h_\varepsilon\bigr)^2 + V_\varepsilon\;,
\]
which, in the case where $V_\varepsilon$ is replaced by space-time white noise, was recently analysed in detail in [Hai13a]. The correctors $Y_\varepsilon$ and $Z_\varepsilon$ then arise naturally in this analysis as the first terms in the Wild expansion of the KPZ equation.

This also suggests that it would be possible to find a diverging sequence of constants $C_\varepsilon$ such that the solutions to
\[
\partial_t u_\varepsilon(x,t) = \partial_x^2 u_\varepsilon(x,t) + \varepsilon^{-(1+\alpha)/2}\,V\Bigl(\frac{x}{\varepsilon}, \frac{t}{\varepsilon^\alpha}\Bigr)\,u_\varepsilon(x,t) - C_\varepsilon u_\varepsilon(x,t)\;,
\]
converge in law to the solutions to the multiplicative stochastic heat equation driven by space-time white noise. In the non-Gaussian case, this does still seem out of reach at the moment, although some recent progress can be found in [Hai13b].

The proof of Theorem 1.8 now goes as follows. In a first step, which is rather long and technical and constitutes Section 2 below, we obtain sharp a priori bounds for $Y_\varepsilon$ and $Z_\varepsilon$ in various norms. In a second step, which is performed in Section 3, we then combine these estimates in order to show that the only terms in (1.8) that matter are indeed the first two terms on the right hand side.

Remark 1.12
Throughout this article, the notation $X \lesssim Y$ will be equivalent to the notation $X \le CY$ for some constant $C$ independent of $\varepsilon$.

2 Estimates of $Y_\varepsilon$ and $Z_\varepsilon$

In this section, we shall prove that both $Y_\varepsilon$ and $Z_\varepsilon$ tend to zero as $\varepsilon \to 0$, and establish further estimates on those sequences of functions which will be needed for taking the limit of the sequence $v_\varepsilon$. But before doing so, let us first introduce some technical tools which will be needed both in this section and in the last one.

First of all, we define the notion of an admissible weight $w$ as a function $w\colon \mathbf{R} \to \mathbf{R}_+$ such that there exists a constant $C \ge 1$ with
\[
C^{-1} \le \frac{w(x)}{w(y)} \le C\;, \tag{2.1}
\]
for all pairs $(x,y)$ with $|x-y| \le 1$. Given such an admissible weight $w$, we then define the space $\mathcal{C}_w$ as the closure of $\mathcal{C}^\infty_0$ under the norm
\[
\|f\|_w = \|f\|_{0,w} = \sup_{x\in\mathbf{R}} \frac{|f(x)|}{w(x)}\;.
\]
We also define $\mathcal{C}^\alpha_w$ for $\alpha \in (0,1)$ as the closure of $\mathcal{C}^\infty_0$ under the norm
\[
\|f\|_{\alpha,w} = \|f\|_w + \sup_{|x-y|\le 1} \frac{|f(x)-f(y)|}{w(x)\,|x-y|^\alpha}\;.
\]
Similarly, for $\alpha \ge 1$, we define $\mathcal{C}^\alpha_w$ recursively as the closure of $\mathcal{C}^\infty_0$ under the norm
\[
\|f\|_{\alpha,w} = \|f\|_w + \|f'\|_{\alpha-1,w}\;.
\]
It is clear that, if $w_1$ and $w_2$ are two admissible weights, then so is $w = w_1 w_2$. Furthermore, it is a straightforward exercise to use the Leibniz rule to verify that there exists a constant $C$ such that the bound
\[
\|f_1 f_2\|_{\alpha,w} \le C\,\|f_1\|_{\alpha_1,w_1}\,\|f_2\|_{\alpha_2,w_2}\;, \tag{2.2}
\]
holds for every $f_i \in \mathcal{C}^{\alpha_i}_{w_i}$, provided that $\alpha \le \alpha_1\wedge\alpha_2$.

We now show that a similar inequality still holds if one of the two Hölder exponents is negative. For $\alpha \in (-1,0)$, we define $\mathcal{C}^\alpha_w$ as the closure of $\mathcal{C}^\infty_0$ under the norm
\[
\|f\|_{\alpha,w} = \sup_{|x-y|\le 1} \frac{\bigl|\int_x^y f(z)\,dz\bigr|}{w(x)\,|x-y|^{\alpha+1}}\;.
\]
This amounts to requiring the antiderivative of $f$ to belong to $\mathcal{C}^{\alpha+1}_w$, except that we do not worry about its growth. With these notations at hand, we then have the bound:

Proposition 2.1
Let $w_1$ and $w_2$ be two admissible weights and let $\alpha_1 < 0 < \alpha_2$ be such that $\alpha_2 > |\alpha_1|$. Then, the bound (2.2) holds with $\alpha = \alpha_1$.

Proof. We only need to show the bound for smooth and compactly supported elements $f_1$ and $f_2$; the general case then follows by density. Denote now by $F_2$ an antiderivative of $f_2$, so that
\[
\int_x^y f_1(z)\,f_2(z)\,dz = \int_x^y f_1(z)\,dF_2(z)\;,
\]
where the right hand side is a Riemann-Stieltjes integral. For any interval $I \subset \mathbf{R}$, we now write
\[
\|f\|_{\alpha,I} = \sup_{\{x,y\}\subset I} \frac{|f(x)-f(y)|}{|x-y|^\alpha}\;.
\]
It then follows from Young's inequality [You36] that there exists a constant $C$, depending only on the precise values of the $\alpha_i$ and on the constants appearing in the definition (2.1) of admissibility for the weights $w_i$, such that
\[
\Bigl|\int_x^y f_1(z)\,dF_2(z)\Bigr| \le |f_1(x)|\,\bigl|F_2(y)-F_2(x)\bigr| + C\,\|f_1\|_{\alpha_2,[x,y]}\,\|F_2\|_{\alpha_1+1,[x,y]}\,|x-y|^{\alpha_1+\alpha_2+1}
\]
\[
\le w(x)\,|x-y|^{\alpha_1+1}\bigl(\|f_1\|_{0,w_1}\|f_2\|_{\alpha_1,w_2} + C\,\|f_1\|_{\alpha_2,w_1}\|f_2\|_{\alpha_1,w_2}\bigr)\;,
\]
with $w = w_1 w_2$, which is precisely the requested bound.

There are two types of admissible weights that will play a crucial role in the sequel:
\[
e_\ell(x) := \exp(-\ell|x|)\;, \qquad p_\kappa(x) := 1 + |x|^\kappa\;,
\]
where the exponent $\kappa$ will always be positive, but $\ell$ could have any sign. One has of course the identity
\[
e_\ell \cdot e_m = e_{\ell+m}\;. \tag{2.3}
\]
Furthermore, it is straightforward to verify that there exists a constant $C$ such that the bound
\[
p_\kappa(x)\,e_\ell(x) \le C\,\ell^{-\kappa}\;, \tag{2.4}
\]
holds uniformly in $x\in\mathbf{R}$, $\kappa\in(0,1]$ and $\ell\in(0,1]$.

Proposition 2.2
Let $\alpha \in (-1,\infty)$, let $\beta > \alpha$, and let $\ell,\kappa\in\mathbf{R}$. Then, for every $t > 0$, the heat semigroup $P_t$ extends to a bounded operator from $\mathcal{C}^\alpha_{e_\ell}$ to $\mathcal{C}^\beta_{e_\ell}$ and from $\mathcal{C}^\alpha_{p_\kappa}$ to $\mathcal{C}^\beta_{p_\kappa}$. Furthermore, for every $\ell_0 > 0$ and $\kappa_0 > 0$, there exists a constant $C$ such that the bounds
\[
\|P_t f\|_{\beta,e_\ell} \le C\,t^{-\frac{\beta-\alpha}{2}}\,\|f\|_{\alpha,e_\ell}\;, \qquad
\|P_t g\|_{\beta,p_\kappa} \le C\,t^{-\frac{\beta-\alpha}{2}}\,\|g\|_{\alpha,p_\kappa}\;,
\]
hold for every $f\in\mathcal{C}^\alpha_{e_\ell}$, every $g\in\mathcal{C}^\alpha_{p_\kappa}$, every $t\in(0,1]$, every $|\ell|\le\ell_0$, and every $|\kappa|\le\kappa_0$.

Proof. The proof is standard: one first verifies that the semigroup preserves these norms, so that the case $\beta = \alpha$ is covered. The case of integer values of $\beta$ can easily be verified by an explicit calculation. The remaining values then follow by interpolation.

We now turn to the estimates of $Y_\varepsilon$ and $Z_\varepsilon$. For any integer $k \ge 2$, define the $k$-point correlation function $\Phi^{(k)}$ for $x,t\in\mathbf{R}^k$ by
\[
\Phi^{(k)}(x,t) = \mathbf{E}\bigl(V(x_1,t_1)\cdots V(x_k,t_k)\bigr)\;.
\]
(In particular, $\Phi^{(2)}(x_1,t_1,x_2,t_2) = \Phi(x_1-x_2,\,t_1-t_2)$, where $\Phi$ is the correlation function of $V$ defined above.) With these notations at hand, we have the following bound, which will prove to be useful:

Lemma 2.3
The function $\Psi^{(4)}$ given by
\[
\Psi^{(4)}(x,t) = \Phi^{(4)}(x,t) - \Phi(x_1-x_2,\,t_1-t_2)\,\Phi(x_3-x_4,\,t_3-t_4)\;,
\]
satisfies the bound
\[
|\Psi^{(4)}(x,t)| \le \eta\bigl(|x_1-x_3|+|t_1-t_3|\bigr)\,\eta\bigl(|x_2-x_4|+|t_2-t_4|\bigr) + \eta\bigl(|x_1-x_4|+|t_1-t_4|\bigr)\,\eta\bigl(|x_2-x_3|+|t_2-t_3|\bigr)\;, \tag{2.5}
\]
where the function $\eta\colon\mathbf{R}_+\to\mathbf{R}_+$ is defined by $\eta(r) = \sqrt{K\varrho(r/3)}$, with
\[
K = 4\bigl(\|V(x,t)\|_2\,\|V^3(x,t)\|_2 + \|V^2(x,t)\|_2^2\bigr)\;,
\]
where we write $\|\cdot\|_2$ for the $L^2(\Omega)$ norm of a real-valued random variable.

Remark 2.4

In the Gaussian case, one has the identity
\[
\Psi^{(4)}(x,t) = \Phi(x_1-x_3,\,t_1-t_3)\,\Phi(x_2-x_4,\,t_2-t_4) + \Phi(x_1-x_4,\,t_1-t_4)\,\Phi(x_2-x_3,\,t_2-t_3)\;,
\]
so that the bound (2.5) follows from the fact that $\varrho$ dominates the decay of the correlation function $\Phi$.

Proof.
For the sake of brevity, denote $\xi_j = (x_j,t_j)$. We set
\[
R_1 = \max_{1\le i\le 4}\ \mathrm{dist}\Bigl(\xi_i,\ \bigcup_{j\ne i}\{\xi_j\}\Bigr)\;, \qquad
R_2 = \max\ \mathrm{dist}\bigl(\{\xi_{i_1},\xi_{i_2}\},\,\{\xi_{i_3},\xi_{i_4}\}\bigr)\;,
\]
where the second maximum is taken over all permutations $\{i_1,i_2,i_3,i_4\}$ of $\{1,2,3,4\}$.

Consider first the case $R_1 \ge R_2$. Without loss of generality we can assume that $R_1 = \mathrm{dist}(\xi_1, \bigcup_{j=2}^4\{\xi_j\})$. It is easily seen that, in the case under consideration,
\[
\mathrm{dist}(\xi_i,\xi_j) \le 3R_1\;, \qquad i,j = 1,2,3,4\;. \tag{2.6}
\]
Then the functions $\Phi^{(4)}$ and $\Phi(\xi_1-\xi_2)\Phi(\xi_3-\xi_4)$ admit the following upper bounds:
\[
|\Phi^{(4)}(\xi_1,\xi_2,\xi_3,\xi_4)| = |\mathbf{E}(V(\xi_1)V(\xi_2)V(\xi_3)V(\xi_4))| \le \varrho(R_1)\,\|V(\xi_1)\|_2\,\|V(\xi_2)V(\xi_3)V(\xi_4)\|_2 \le \varrho(R_1)\,\|V(\xi_1)\|_2\,\|V^3(\xi_1)\|_2\;,
\]
and
\[
\Phi(\xi_1-\xi_2)\,\Phi(\xi_3-\xi_4) \le \varrho(R_1)\,\|V\|_2^2\,\|V\|_2^2\;.
\]
Therefore,
\[
|\Psi^{(4)}(x,t)| \le \varrho(R_1)\,\bigl(\|V(\xi_1)\|_2\,\|V^3(\xi_1)\|_2 + \|V\|_2^4\bigr) \le \tfrac14 K\varrho(R_1)\;.
\]
From (2.6) and the fact that $\varrho$ is a decreasing function, we derive
\[
K\varrho(R_1) = \eta(3R_1)^2 \le \eta(|\xi_1-\xi_3|)\,\eta(|\xi_2-\xi_4|)\;.
\]
This yields the desired inequality.

Assume now that $R_1 < R_2$ and $\mathrm{dist}(\{\xi_1,\xi_2\},\{\xi_3,\xi_4\}) = R_2$. In this case,
\[
\mathrm{dist}(\xi_1,\xi_2) < R_2 \quad\mbox{and}\quad \mathrm{dist}(\xi_3,\xi_4) < R_2\;. \tag{2.7}
\]
Indeed, if we assume that $\mathrm{dist}(\xi_1,\xi_2) \ge R_2$, then $\mathrm{dist}(\xi_1,\{\xi_2,\xi_3,\xi_4\}) \ge R_2$ and, thus, $R_1 \ge R_2$, which contradicts our assumption. We have
\[
\bigl|\Psi^{(4)}(\xi_1,\xi_2,\xi_3,\xi_4)\bigr| = \bigl|\Phi^{(4)}(\xi_1,\xi_2,\xi_3,\xi_4) - \Phi(\xi_1-\xi_2)\Phi(\xi_3-\xi_4)\bigr| \tag{2.8}
\]
\[
= \bigl|\mathbf{E}\bigl([V(\xi_1)V(\xi_2) - \mathbf{E}(V(\xi_1)V(\xi_2))]\,[V(\xi_3)V(\xi_4) - \mathbf{E}(V(\xi_3)V(\xi_4))]\bigr)\bigr| \le \varrho(R_2)\,\|V^2(\xi_1)\|_2^2\;.
\]
In view of (2.7), $\mathrm{dist}(\xi_1,\xi_3) \le 3R_2$ and $\mathrm{dist}(\xi_2,\xi_4) \le 3R_2$. Therefore,
\[
|\Psi^{(4)}| \le \tfrac14 K\varrho(R_2) = \tfrac14\eta(3R_2)^2 \le \eta(|\xi_1-\xi_3|)\,\eta(|\xi_2-\xi_4|)\;,
\]
and the desired inequality follows.

It remains to consider the case $R_1 < R_2$ and $\mathrm{dist}(\{\xi_1,\xi_3\},\{\xi_2,\xi_4\}) = R_2$; the case $\mathrm{dist}(\{\xi_1,\xi_4\},\{\xi_2,\xi_3\}) = R_2$ can be addressed in the same way. In this case,
\[
\mathrm{dist}(\xi_1,\xi_2) \ge R_2\;, \quad \mathrm{dist}(\xi_3,\xi_4) \ge R_2\;, \quad \mathrm{dist}(\xi_1,\xi_3) < R_2\;.
\]
Therefore, $\mathrm{dist}(\xi_1,\{\xi_2,\xi_3,\xi_4\}) = \mathrm{dist}(\xi_1,\xi_3)$, and we have
\[
|\Phi^{(4)}(\xi_1,\xi_2,\xi_3,\xi_4)| \le \varrho(|\xi_1-\xi_3|)\,\|V(\xi_1)\|_2\,\|V^3(\xi_1)\|_2\;,
\]
\[
|\Phi(\xi_1-\xi_2)\,\Phi(\xi_3-\xi_4)| \le \varrho(R_2)\,\|V\|_2^4 \le \varrho(|\xi_1-\xi_3|)\,\|V\|_2^4\;.
\]
This yields
\[
|\Psi^{(4)}(\xi_1,\xi_2,\xi_3,\xi_4)| \le \varrho(|\xi_1-\xi_3|)\,\bigl(\|V(\xi_1)\|_2\|V^3(\xi_1)\|_2 + \|V\|_2^4\bigr)\;.
\]
In the same way one gets
\[
|\Psi^{(4)}(\xi_1,\xi_2,\xi_3,\xi_4)| \le \varrho(|\xi_2-\xi_4|)\,\bigl(\|V(\xi_1)\|_2\|V^3(\xi_1)\|_2 + \|V\|_2^4\bigr)\;.
\]
From the last two estimates we obtain
\[
|\Psi^{(4)}(\xi_1,\xi_2,\xi_3,\xi_4)| \le \sqrt{\varrho(|\xi_1-\xi_3|)}\,\sqrt{\varrho(|\xi_2-\xi_4|)}\,\bigl(\|V(\xi_1)\|_2\|V^3(\xi_1)\|_2 + \|V\|_2^4\bigr) \le \eta(|\xi_1-\xi_3|)\,\eta(|\xi_2-\xi_4|)\;.
\]
This implies the desired inequality and completes the proof of Lemma 2.3.

In order to prove our next result, we will need the following small lemma:
Lemma 2.5
Let $F\colon\mathbf{R}_+\to\mathbf{R}_+$ be an increasing function with $F(r) \lesssim r^q$ for some $q > 0$. Then $\int_0^\infty (1+r)^{-p}\,dF(r) < \infty$ as soon as $p > q$.

Proof. The integral over $r\in[0,1]$ is trivially bounded by $F(1)$, and $\int_1^\infty (1+r)^{-p}\,dF(r) \le \int_1^\infty r^{-p}\,dF(r)$, so we only need to bound the latter. We write
\[
\int_1^\infty r^{-p}\,dF(r) \le \sum_{k\ge 0}\int_{2^k}^{2^{k+1}} r^{-p}\,dF(r) \le \sum_{k\ge 0} 2^{-pk}\int_{2^k}^{2^{k+1}} dF(r) \lesssim \sum_{k\ge 0} 2^{-pk}\,2^{q(k+1)}\;.
\]
This expression is summable as soon as $p > q$, thus yielding the claim.
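The dyadic-block argument in the proof above can be sanity-checked numerically. The following Python sketch (an added illustration, not part of the original paper; the function names are ours) takes $F(r) = r^q$, for which $\int_1^\infty r^{-p}\,dF(r) = q/(p-q)$ in closed form, and verifies that the dyadic sum $\sum_k 2^{-pk}\,2^{q(k+1)}$ dominates it:

```python
# For F(r) = r^q, compare the Stieltjes integral with the dyadic upper bound.
def integral(p, q):
    # \int_1^\infty r^{-p} dF(r) = \int_1^\infty q r^{q-1-p} dr = q/(p-q), for p > q
    return q / (p - q)

def dyadic_bound(p, q, k_max=200):
    # \sum_{k>=0} 2^{-pk} (F(2^{k+1}) - F(2^k)) <= \sum_{k>=0} 2^{-pk} 2^{q(k+1)}
    return sum(2.0 ** (q * (k + 1) - p * k) for k in range(k_max))

for p, q in [(3.0, 2.0), (2.5, 1.0), (1.2, 1.0)]:
    assert integral(p, q) <= dyadic_bound(p, q) < float("inf")
print("dyadic bound dominates the integral in all tested cases")
```

As in the proof, the geometric series converges precisely because $p > q$.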
Lemma 2.6
Fix $t > 0$ and let $\varphi\colon\mathbf{R}\times\mathbf{R}_+\to\mathbf{R}_+$ be a smooth function with compact support. Define $\varphi_\delta(x,t) = \delta^{-3}\varphi\bigl(\frac{x}{\delta},\frac{t}{\delta^2}\bigr)$. Then, for all $p \ge 1$ and $\varepsilon,\delta > 0$, one has the bound
\[
\Bigl[\mathbf{E}\Bigl(\int_0^t\!\!\int_{\mathbf{R}}\varphi_\delta(x-y,t-s)\,V_\varepsilon(y,s)\,dy\,ds\Bigr)^{p}\Bigr]^{1/p} \le C_\varphi\bigl(\varepsilon^{-1/2-\alpha/4}\wedge\delta^{-1/2}\varepsilon^{-\alpha/4}\wedge\delta^{-3/2}\varepsilon^{\alpha/4}\bigr)\;,
\]
where $C_\varphi$ depends on $p$, on the supremum and the support of $\varphi$, and on the bounds of Assumption 1.3.

Proof. We consider separately the cases $\delta > \varepsilon^{\alpha/2}$, $\delta < \varepsilon$, and $\varepsilon \le \delta \le \varepsilon^{\alpha/2}$ (recall that $\varepsilon \le \varepsilon^{\alpha/2}$ since $\alpha < 2$).

Assume first that $\delta > \varepsilon^{\alpha/2}$. Without loss of generality we also assume that $p$ is even, that is $p = 2k$ with $k\in\mathbf{N}$. Then
\[
J^{\varepsilon,\delta}_p := \mathbf{E}\Bigl(\int_0^t\!\!\int_{\mathbf{R}}\varphi_\delta(x-y,t-s)\,V_\varepsilon(y,s)\,dy\,ds\Bigr)^p
= \int_0^t\!\!\cdots\!\!\int_0^t\int_{\mathbf{R}}\!\!\cdots\!\!\int_{\mathbf{R}}\prod_{i=1}^{2k}\varphi_\delta(x-y_i,t-s_i)\,\mathbf{E}\Bigl(\prod_{i=1}^{2k}V_\varepsilon(y_i,s_i)\Bigr)\,d\vec y\,d\vec s\;,
\]
where $d\vec y = dy_1\cdots dy_{2k}$ and $d\vec s = ds_1\cdots ds_{2k}$. Changing the variables $\tilde y_i = \varepsilon^{-1}y_i$ and $\tilde s_i = \varepsilon^{-\alpha}s_i$, and considering the definition of $\varphi_\delta$ and $V_\varepsilon$, we obtain
\[
J^{\varepsilon,\delta}_p = \delta^{-6k}\,\varepsilon^{-k-\alpha k/2}\,\varepsilon^{2k+2\alpha k}\int_{[0,t/\varepsilon^\alpha]^{2k}}\int_{\mathbf{R}^{2k}}\prod_{i=1}^{2k}\varphi\Bigl(\frac{x-\varepsilon\tilde y_i}{\delta},\frac{t-\varepsilon^\alpha\tilde s_i}{\delta^2}\Bigr)\,\mathbf{E}\Bigl(\prod_{i=1}^{2k}V(\tilde y_i,\tilde s_i)\Bigr)\,d\vec{\tilde y}\,d\vec{\tilde s}\;.
\]
The support of the function $\prod_{i=1}^{2k}\varphi\bigl(\frac{x-\varepsilon\tilde y_i}{\delta},\frac{t-\varepsilon^\alpha\tilde s_i}{\delta^2}\bigr)$ is contained in a rectangle whose spatial sides have length $2\delta\varepsilon^{-1}s_\varphi$ and whose temporal sides have length $2\delta^2\varepsilon^{-\alpha}s_\varphi$, where $s_\varphi$ is the diameter of the support of $\varphi = \varphi(y,s)$. Denote $\Pi^1_{\delta,\varepsilon} = (0, 2\delta\varepsilon^{-1}s_\varphi)^{2k}$ and $\Pi^2_{\delta,\varepsilon} = (0, 2\delta^2\varepsilon^{-\alpha}s_\varphi)^{2k}$. Since $V(y,s)$ is stationary, we have
\[
J^{\varepsilon,\delta}_p \le \delta^{-6k}\,\varepsilon^{k+3\alpha k/2}\,\|\varphi\|^{2k}_{\mathcal{C}^0}\int_{\Pi^1_{\delta,\varepsilon}}\int_{\Pi^2_{\delta,\varepsilon}}\Bigl|\mathbf{E}\Bigl(\prod_{i=1}^{2k}V(\tilde y_i,\tilde s_i)\Bigr)\Bigr|\,d\vec{\tilde y}\,d\vec{\tilde s}\;. \tag{2.9}
\]
For any $R \ge 0$, denote
\[
\mathcal{V}_{\delta,\varepsilon}(R) = \Bigl\{(\tilde y,\tilde s)\in\Pi^1_{\delta,\varepsilon}\times\Pi^2_{\delta,\varepsilon}\;:\ \max_{1\le j\le 2k}\mathrm{dist}\Bigl(\tilde y_j,\bigcup_{i\ne j}\tilde y_i\Bigr) \le R\;,\ \max_{1\le j\le 2k}\mathrm{dist}\Bigl(\tilde s_j,\bigcup_{i\ne j}\tilde s_i\Bigr) \le R\Bigr\}\;,
\]
and by $|\mathcal{V}_{\delta,\varepsilon}|(R)$ the Lebesgue measure of this set. It is easy to check that the set $\mathcal{V}_{\delta,\varepsilon}(0)$ is the union of sets of the form
\[
\{(\tilde y,\tilde s)\in\Pi^1_{\delta,\varepsilon}\times\Pi^2_{\delta,\varepsilon}\;:\ \tilde y_{i_1} = \tilde y_{i_2},\,\dots,\,\tilde y_{i_{2k-1}} = \tilde y_{i_{2k}},\ \tilde s_{j_1} = \tilde s_{j_2},\,\dots,\,\tilde s_{j_{2k-1}} = \tilde s_{j_{2k}}\}
\]
with $i_l \ne i_m$ and $j_l \ne j_m$ if $l \ne m$, that is, $\mathcal{V}_{\delta,\varepsilon}(0)$ is the union of a finite number of subsets of $2k$-dimensional planes in $\mathbf{R}^{4k}$. The $2k$-dimensional measure of this set satisfies the upper bound
\[
|\mathcal{V}_{\delta,\varepsilon}(0)|_{2k} \le C(k)\Bigl(\frac{\delta}{\varepsilon}\Bigr)^{k}\Bigl(\frac{\delta^2}{\varepsilon^\alpha}\Bigr)^{k}\;.
\]
Therefore,
\[
|\mathcal{V}_{\delta,\varepsilon}|(R) \lesssim \Bigl(\frac{\delta}{\varepsilon}\Bigr)^{k}\Bigl(\frac{\delta^2}{\varepsilon^\alpha}\Bigr)^{k} R^{2k}\;. \tag{2.10}
\]
For each $(\tilde y,\tilde s)\notin\mathcal{V}_{\delta,\varepsilon}(R)$, at least one of the points is at distance at least $R$ from all the others, so that
\[
\Bigl|\mathbf{E}\Bigl(\prod_{i=1}^{2k}V(\tilde y_i,\tilde s_i)\Bigr)\Bigr| \le \varrho(R)\,C(k)\,\|V\|^2_{L^2(\Omega)}\,\|V\|^{2k-2}_{L^{4k-4}(\Omega)}\;. \tag{2.11}
\]
Combining (2.9), (2.10) and (2.11) yields
\[
J^{\varepsilon,\delta}_p \lesssim \delta^{-6k}\,\varepsilon^{k+3\alpha k/2}\int_0^\infty \varrho(R)\,d|\mathcal{V}_{\delta,\varepsilon}|(R) \lesssim \delta^{-3k}\,\varepsilon^{\alpha k/2}\;.
\]
Here, the last inequality holds due to Assumption 1.4, combined with (2.10) and Lemma 2.5. Therefore, recalling that $p = 2k$, we have the bound
\[
(J^{\varepsilon,\delta}_p)^{1/p} \lesssim \delta^{-3/2}\,\varepsilon^{\alpha/4}\;. \tag{2.12}
\]
In the case $\delta < \varepsilon$ we have
\[
J^{\varepsilon,\delta}_p \le \int_0^t\!\!\cdots\!\!\int_{\mathbf{R}}\prod_{i=1}^{2k}|\varphi_\delta(x-y_i,t-s_i)|\,\Bigl|\mathbf{E}\Bigl(\prod_{i=1}^{2k}V_\varepsilon(y_i,s_i)\Bigr)\Bigr|\,d\vec y\,d\vec s
\le \mathbf{E}\bigl(V_\varepsilon^{2k}(y_1,s_1)\bigr)\int_0^t\!\!\cdots\!\!\int_{\mathbf{R}}\prod_{i=1}^{2k}|\varphi_\delta(x-y_i,t-s_i)|\,d\vec y\,d\vec s
\lesssim \varepsilon^{-k-\alpha k/2}\,\|\varphi\|^{2k}_{L^1}\;,
\]
so that
\[
(J^{\varepsilon,\delta}_p)^{1/p} \lesssim \varepsilon^{-1/2-\alpha/4}\;. \tag{2.13}
\]
Finally, if we are in the regime $\varepsilon \le \delta \le \varepsilon^{\alpha/2}$, then
\[
J^{\varepsilon,\delta}_p = \delta^{-6k}\,\varepsilon^{k+3\alpha k/2}\int_{[0,t/\varepsilon^\alpha]^{2k}}\int_{\mathbf{R}^{2k}}\prod_{i=1}^{2k}\varphi\Bigl(\frac{x-\varepsilon\tilde y_i}{\delta},\frac{t-\varepsilon^\alpha\tilde s_i}{\delta^2}\Bigr)\,\mathbf{E}\Bigl(\prod_{i=1}^{2k}V(\tilde y_i,\tilde s_i)\Bigr)\,d\vec{\tilde y}\,d\vec{\tilde s}
\]
\[
\le \delta^{-6k}\,\varepsilon^{k+3\alpha k/2}\,\|\varphi\|^{2k}_{L^\infty}\int_{\Pi^2_{\delta,\varepsilon}}\int_{\Pi^1_{\delta,\varepsilon}}\Bigl|\mathbf{E}\Bigl(\prod_{i=1}^{2k}V(\tilde y_i,\tilde s_i)\Bigr)\Bigr|\,d\vec{\tilde y}\,d\vec{\tilde s}
\lesssim \delta^{-6k}\,\varepsilon^{k+3\alpha k/2}\,\|V\|^2_{L^2(\Omega)}\,\|V\|^{2k-2}_{L^{4k-4}(\Omega)}\Bigl(\frac{2\delta^2 s_\varphi}{\varepsilon^\alpha}\Bigr)^{2k}\Bigl(\frac{2\delta s_\varphi}{\varepsilon}\Bigr)^{k}\int_0^\infty \varrho(R)\,R^{k-1}\,dR
\lesssim \delta^{-k}\,\varepsilon^{-\alpha k/2}\;.
\]
Hence,
\[
(J^{\varepsilon,\delta}_p)^{1/p} \lesssim \delta^{-1/2}\,\varepsilon^{-\alpha/4}\;, \tag{2.14}
\]
so that, combining (2.12), (2.13) and (2.14), the desired estimate holds.

Lemma 2.7
Fix $t > 0$ and let $\varphi\colon\mathbf{R}\times\mathbf{R}_+\to\mathbf{R}_+$ be a function which is uniformly bounded and decays exponentially in $x$, uniformly over $s\in[0,t]$. Then, for all $p \ge 1$ and $\varepsilon > 0$, one has the bound
\[
\Bigl[\mathbf{E}\Bigl(\int_0^t\!\!\int_{\mathbf{R}}\varphi(x-y,t-s)\,V_\varepsilon(y,s)\,dy\,ds\Bigr)^{p}\Bigr]^{1/p} \le C_\varphi\bigl(\varepsilon^{-1/2-\alpha/4}\wedge\varepsilon^{-\alpha/4}\wedge\varepsilon^{\alpha/4}\bigr)\;.
\]
Here, the proportionality constant depends on $p$, on $t$, on the bounds on $\varphi$, and on the bounds of Assumption 1.3.

Proof. The proof of this lemma is similar (with some simplifications) to that of the previous statement. We leave it to the reader.
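The three regimes in the bound of Lemma 2.6 fit together continuously: the bounds $\delta^{-3/2}\varepsilon^{\alpha/4}$, $\delta^{-1/2}\varepsilon^{-\alpha/4}$ and $\varepsilon^{-1/2-\alpha/4}$ coincide at the crossover scales $\delta = \varepsilon^{\alpha/2}$ and $\delta = \varepsilon$, so the minimum of the three switches exactly at the regime boundaries used in the proof. The following Python snippet (an added illustration, not part of the paper; the function names are ours) checks this numerically:

```python
# The three bounds of Lemma 2.6, as functions of the mollification scale delta.
def b_large(delta, eps, alpha):   # regime delta > eps^(alpha/2)
    return delta ** -1.5 * eps ** (alpha / 4)

def b_mid(delta, eps, alpha):     # regime eps <= delta <= eps^(alpha/2)
    return delta ** -0.5 * eps ** (-alpha / 4)

def b_small(delta, eps, alpha):   # regime delta < eps
    return eps ** (-0.5 - alpha / 4)

eps, alpha = 1e-4, 1.5            # 0 < alpha < 2, so eps < eps^(alpha/2) < 1
d1 = eps ** (alpha / 2)           # crossover between large- and mid-delta regimes
d2 = eps                          # crossover between mid- and small-delta regimes
assert abs(b_large(d1, eps, alpha) / b_mid(d1, eps, alpha) - 1) < 1e-9
assert abs(b_mid(d2, eps, alpha) / b_small(d2, eps, alpha) - 1) < 1e-9
print("regime boundaries match:", b_mid(d1, eps, alpha), b_mid(d2, eps, alpha))
```

This matching is what makes the dyadic summation over scales $\delta = 2^{-n}$ in the proof of Lemma 2.8 below work without any loss at the regime boundaries.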
Lemma 2.8
For each $p \ge 1$, there exists a constant $C_p$ such that, for all $\varepsilon > 0$, $t \ge 0$ and $x\in\mathbf{R}$,
\[
\bigl[\mathbf{E}\bigl(|Y_\varepsilon(x,t)|^p\bigr)\bigr]^{1/p} \le C_p\,(1+\sqrt t)\,\varepsilon^{\alpha/4}\;, \tag{2.15}
\]
\[
\bigl[\mathbf{E}\bigl(|\partial_x Y_\varepsilon(x,t)|^p\bigr)\bigr]^{1/p} \le C_p\;, \tag{2.16}
\]
\[
\bigl[\mathbf{E}\bigl(|\partial_x^2 Y_\varepsilon(x,t)|^p\bigr)\bigr]^{1/p} \le C_p\,\varepsilon^{-1}\;. \tag{2.17}
\]
Proof.
Our main ingredient is the existence of a function $\psi\colon\mathbf{R}_+\to[0,1]$ which is smooth, compactly supported in the interval $[1/2,2]$, and such that
\[
\sum_{n\in\mathbf{Z}}\psi(2^{-n}r) = 1\;,
\]
for all $r > 0$. Such a function allows us to decompose the heat kernel as
\[
p_t(x) = \sum_{n\in\mathbf{Z}} 2^{-2n}\varphi_n(x,t)\;, \tag{2.18}
\]
where
\[
\varphi_n(x,t) = 2^{3n}\varphi(2^n x,\,2^{2n}t)\;, \qquad \varphi(x,t) = p_t(x)\,\psi\bigl(\sqrt{x^2+t}\bigr)\;. \tag{2.19}
\]
The advantage of this formulation is that the function $\varphi$ is smooth and compactly supported. The reason why we scale $\varphi_n$ in this way, at the expense of still having a prefactor $2^{-2n}$ in (2.18), is that this is the scaling used in Lemma 2.6 (setting $\delta = 2^{-n}$).

We use this decomposition to define $Y^n_\varepsilon$ by
\[
Y^n_\varepsilon(x,t) = 2^{-2n}\int_0^t\!\!\int_{\mathbf{R}}\varphi_n(x-y,t-s)\,V_\varepsilon(y,s)\,dy\,ds\;, \tag{2.20}
\]
so that, by (2.18), one has $Y_\varepsilon = \sum_n Y^n_\varepsilon$. Setting $\tilde\varphi(x,t) = \partial_x\varphi(x,t)$ and defining $\tilde\varphi_n(x,t) = 2^{3n}\tilde\varphi(2^nx,2^{2n}t)$ as in (2.19), the derivative of $Y_\varepsilon$ can be decomposed in the same way:
\[
\partial_x Y^n_\varepsilon(x,t) = 2^{-n}\int_0^t\!\!\int_{\mathbf{R}}\tilde\varphi_n(x-y,t-s)\,V_\varepsilon(y,s)\,dy\,ds\;. \tag{2.21}
\]
We first bound the derivative of $Y_\varepsilon$. Since $\tilde\varphi$ is smooth and compactly supported, the constants appearing in Lemma 2.6 do not depend on $t$ and we have
\[
\bigl(\mathbf{E}|\partial_x Y^n_\varepsilon(x,t)|^p\bigr)^{1/p} \lesssim 2^{n/2}\varepsilon^{\alpha/4}\wedge 2^{-n/2}\varepsilon^{-\alpha/4} = 2^{-|n/2+(\alpha/4)\log_2\varepsilon|}\;.
\]
Since the sum (over $n$) of this quantity is bounded independently of $\varepsilon$, (2.16) now follows by the triangle inequality. Note that (2.17) follows from the same argument if we integrate by parts (hence differentiate $V_\varepsilon$, at the cost of a factor $\varepsilon^{-1}$).

We bound $Y_\varepsilon$ in a similar way. This time however, we combine all the terms with $n \le 0$, writing
\[
p^-_t(x) = \sum_{n\le 0} 2^{-2n}\varphi_n(x,t)\;, \qquad Y^-_\varepsilon(x,t) = \int_0^t\!\!\int_{\mathbf{R}} p^-_{t-s}(x-y)\,V_\varepsilon(y,s)\,dy\,ds\;,
\]
so that $Y_\varepsilon = \sum_{n>0}Y^n_\varepsilon + Y^-_\varepsilon$. Similarly to before, we obtain
\[
\bigl(\mathbf{E}|Y^n_\varepsilon(x,t)|^p\bigr)^{1/p} \lesssim 2^{-n/2}\,\varepsilon^{\alpha/4}\;. \tag{2.22}
\]
In order to bound $Y^-_\varepsilon$, we apply Lemma 2.7 with $\varphi = p^-$; keeping track of the $t$-dependence of the constant there, we obtain
\[
\bigl(\mathbf{E}|Y^-_\varepsilon(x,t)|^p\bigr)^{1/p} \lesssim \sqrt t\,\varepsilon^{\alpha/4}\;.
\]
Combining this with (2.22), summed over $n > 0$, yields the desired bound.

We deduce from Lemma 2.8 and equation (1.7):
Corollary 2.9 As $\varepsilon \to 0$, $Y_\varepsilon(x,t) \to 0$ in probability, locally uniformly with respect to $x$ and $t$.

Proof. It follows from Lemma 2.8 and equation (1.7) that, for some $a, b > 0$, all $p \ge 1$ and all bounded subsets $D \subset \mathbf{R}\times\mathbf{R}_+$,
\[
\sup_{(x,t)\in D}\mathbf{E}\bigl[|Y_\varepsilon(x,t)|^p\bigr] \lesssim \varepsilon^{pa}\;, \tag{2.23}
\]
\[
\sup_{(x,t)\in D}\mathbf{E}\bigl[|\partial_x Y_\varepsilon(x,t)|^p\bigr] \lesssim \varepsilon^{-pb}\;, \qquad
\sup_{(x,t)\in D}\mathbf{E}\bigl[|\partial_t Y_\varepsilon(x,t)|^p\bigr] \lesssim \varepsilon^{-pb}\;. \tag{2.24}
\]
We deduce from (2.23) that, for all $(x,t),(y,s)\in D$ and $p \ge 1$,
\[
\mathbf{E}\bigl[|Y_\varepsilon(x,t) - Y_\varepsilon(y,s)|^p\bigr] \lesssim \varepsilon^{pa}\;,
\]
and from (2.24), writing $Y_\varepsilon(x,t) - Y_\varepsilon(y,s)$ as the sum of an integral of $\partial_x Y_\varepsilon$ and an integral of $\partial_t Y_\varepsilon$, we get
\[
\mathbf{E}\bigl[|Y_\varepsilon(x,t) - Y_\varepsilon(y,s)|^p\bigr] \lesssim \bigl(|x-y| + |t-s|\bigr)^p\,\varepsilon^{-pb}\;.
\]
Hence, from Hölder's inequality,
\[
\mathbf{E}\bigl[|Y_\varepsilon(x,t) - Y_\varepsilon(y,s)|^{\alpha+\beta}\bigr] \le \bigl(|x-y| + |t-s|\bigr)^{\beta}\,\varepsilon^{\alpha a - \beta b}\;,
\]
where $\alpha$ and $\beta$ here denote interpolation exponents, not the exponents of (1.1). Provided that $\beta > 2$ and $\alpha > \beta b/a$, we obtain an estimate which allows us to deduce the result from a combination of (2.23) and Kolmogorov's Lemma.
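The interpolation step used in the proof above reduces, in its simplest form, to the Cauchy-Schwarz inequality $\mathbf{E}|X|^{a+b} \le (\mathbf{E}|X|^{2a})^{1/2}(\mathbf{E}|X|^{2b})^{1/2}$, applied with one factor controlled by (2.23) and the other by (2.24). The following Python snippet (an added illustration, not part of the paper; the variable names are ours) checks this inequality on simulated Gaussian data:

```python
import random

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100000)]

def moment(power):
    # empirical moment E|X|^power over the sample
    return sum(abs(x) ** power for x in xs) / len(xs)

a, b = 1.3, 0.6
lhs = moment(a + b)                                # E|X|^{a+b}
rhs = moment(2 * a) ** 0.5 * moment(2 * b) ** 0.5  # Cauchy-Schwarz bound
assert lhs <= rhs
print(lhs, "<=", rhs)
```

The inequality holds exactly for the empirical measure (it is Cauchy-Schwarz on the sample), which is why the assertion never fails regardless of the seed.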
Lemma 2.10
The function $t \mapsto \bar V_\varepsilon(t)$ is continuous and, for each $\varepsilon > 0$, there exists a positive constant $\bar V^\varepsilon$ such that $\bar V_\varepsilon(t) \to \bar V^\varepsilon$ as $t\to\infty$. Furthermore,
\[
\lim_{\varepsilon\to 0}\bar V^\varepsilon = \bar V := \int_0^\infty\!\!\int_{\mathbf{R}}\frac{\Phi(y,t)}{2\sqrt{\pi t}}\,dy\,dt\;,
\]
and $\bar V_\varepsilon(t) \to \bar V$ as $\varepsilon \to 0$, uniformly in $t\in[1,+\infty)$.

Proof. Writing $\Phi_\varepsilon$ for the correlation function of $V_\varepsilon$ and using the definition of $\bar V_\varepsilon(t)$, we have
\[
\bar V_\varepsilon(t) = \mathbf{E}\Bigl[\Bigl(\frac{\partial}{\partial x}\int_0^t\!\!\int_{\mathbf{R}} p_{t-s}(x-y)\,V_\varepsilon(y,s)\,dy\,ds\Bigr)^2\Bigr]
= \mathbf{E}\Bigl[\Bigl(\int_0^t\!\!\int_{\mathbf{R}} p'_{t-s}(x-y)\,V_\varepsilon(y,s)\,dy\,ds\Bigr)^2\Bigr]
\]
\[
= \int_0^t\!\!\int_0^t\!\!\int_{\mathbf{R}}\int_{\mathbf{R}} p'_{t-s}(x-y)\,p'_{t-r}(x-z)\,\Phi_\varepsilon(y-z,\,s-r)\,dy\,dz\,ds\,dr
= \int_0^t\!\!\int_0^t\!\!\int_{\mathbf{R}}\int_{\mathbf{R}} p'_s(y)\,p'_r(z)\,\Phi_\varepsilon(y-z,\,s-r)\,dy\,dz\,ds\,dr\;.
\]
It is easy to check that, for each $\varepsilon > 0$, this integral is a continuous function of $t$ and that it converges as $t\to+\infty$. Performing the change of variables $y' = y\varepsilon^{-1/2-\alpha/4}$, $z' = z\varepsilon^{-1/2-\alpha/4}$, $s' = s\varepsilon^{-1-\alpha/2}$, $r' = r\varepsilon^{-1-\alpha/2}$, renaming the new variables and setting $T_\varepsilon = \varepsilon^{-1-\alpha/2}t$, we obtain
\[
\bar V_\varepsilon(t) = \frac{1}{16\pi}\int_0^{T_\varepsilon}\!\!\int_0^{T_\varepsilon}\!\!\int_{\mathbf{R}}\int_{\mathbf{R}}\frac{y}{s^{3/2}}\,\frac{z}{r^{3/2}}\,e^{-\frac{y^2}{4s}-\frac{z^2}{4r}}\,\Phi\Bigl(\frac{y-z}{\varepsilon^{(2-\alpha)/4}},\,\frac{s-r}{\varepsilon^{(\alpha-2)/2}}\Bigr)\,dy\,dz\,ds\,dr\;.
\]
Replacing $y$ by $z$ in the slowly varying factors, at the expense of an error term $r_\varepsilon(t)$, we rewrite this as
\[
\bar V_\varepsilon(t) = \frac{1}{16\pi}\int_0^{T_\varepsilon}\!\!\int_0^{T_\varepsilon}\!\!\int_{\mathbf{R}}\int_{\mathbf{R}}\frac{z^2}{s^{3/2}r^{3/2}}\,e^{-\frac{z^2}{4s}-\frac{z^2}{4r}}\,\Phi\Bigl(\frac{y-z}{\varepsilon^{(2-\alpha)/4}},\,\frac{s-r}{\varepsilon^{(\alpha-2)/2}}\Bigr)\,dy\,dz\,ds\,dr + r_\varepsilon(t)\;. \tag{2.25}
\]
The further analysis relies on the limit relation
\[
\lim_{\varepsilon\to 0}\ \sup_{t\ge 1}\ |r_\varepsilon(t)| = 0\;.
\]

Lemma 2.11 For any $T > 0$, any even integer $k \ge 2$, any $0 < \beta < 1/k$, any $p > k$ and any $\kappa > 1/k$, there exists a constant $C$ such that, for all $0 \le t \le T$ and $\varepsilon > 0$,
\[
\bigl(\mathbf{E}\|Y_\varepsilon(t)\|^p_{0,p_\kappa}\bigr)^{1/p} \le C\,\varepsilon^{\frac{\alpha}{4}(1-\kappa)}\;, \qquad
\bigl(\mathbf{E}\|\partial_x Y_\varepsilon(t)\|^p_{0,p_\kappa}\bigr)^{1/p} \le C\,\varepsilon^{-\kappa}\;, \qquad
\bigl(\mathbf{E}\|\partial_x Y_\varepsilon(t)\|^p_{\beta,p_\kappa}\bigr)^{1/p} \le C\,\varepsilon^{-\kappa}\;.
\]
Proof. We establish the estimates on the norms of $\partial_x Y_\varepsilon(t)$ only; the norm of $Y_\varepsilon(t)$ is estimated similarly. Let $q > 1$ and $p = qk$. For any $x < y$, we have the identity
\[
|\partial_x Y_\varepsilon(t,y) - \partial_x Y_\varepsilon(t,x)|^k = k\int_x^y\bigl(\partial_x Y_\varepsilon(t,z) - \partial_x Y_\varepsilon(t,x)\bigr)^{k-1}\,\partial_x^2 Y_\varepsilon(t,z)\,dz\;.
\]
Raising this to the power $q$ and taking expectations, we obtain
\[
\mathbf{E}\bigl(|\partial_x Y_\varepsilon(t,y) - \partial_x Y_\varepsilon(t,x)|^p\bigr) \le k^q\,\mathbf{E}\Bigl|\int_x^y\bigl(\partial_x Y_\varepsilon(t,z) - \partial_x Y_\varepsilon(t,x)\bigr)^{k-1}\partial_x^2 Y_\varepsilon(t,z)\,dz\Bigr|^q
\]
\[
\lesssim (y-x)^{q-1}\int_x^y\mathbf{E}\bigl(\bigl|(\partial_x Y_\varepsilon(t,z) - \partial_x Y_\varepsilon(t,x))^{k-1}\,\partial_x^2 Y_\varepsilon(t,z)\bigr|^q\bigr)\,dz
\lesssim (y-x)^q\,\sqrt{\mathbf{E}\bigl(|\partial_x Y_\varepsilon(t,x)|^{2q(k-1)}\bigr)\,\mathbf{E}\bigl(|\partial_x^2 Y_\varepsilon(t,x)|^{2q}\bigr)}
\lesssim (y-x)^q\,\varepsilon^{-q}\;, \tag{2.35}
\]
where we have used the stationarity (in $z$) of the processes $\partial_x Y_\varepsilon(t,z)$ and $\partial_x^2 Y_\varepsilon(t,z)$, as well as the estimates (2.16) and (2.17) from Lemma 2.8.

As a consequence of Kolmogorov's Lemma, there exists a stationary sequence of positive random variables $\{\xi_n\}_{n\in\mathbf{Z}}$ such that, for every $n\in\mathbf{Z}$, the bound
\[
\sup_{x\in[n,n+1]}|\partial_x Y_\varepsilon(t,x)| \le \xi_n\;,
\]
holds almost surely, and such that $(\mathbf{E}\xi_n^p)^{1/p} \lesssim \varepsilon^{-1/k}$ for every $p \ge 1$. The bound on $\|\partial_x Y_\varepsilon(t)\|_{0,p_\kappa}$ then follows at once. The bound on $\|\partial_x Y_\varepsilon(t)\|_{\beta,p_\kappa}$ follows in virtually the same way, using the fact that (2.35) also yields the bound
\[
\sup_{x,y\in[n-1,n+1]}\frac{|\partial_x Y_\varepsilon(t,x) - \partial_x Y_\varepsilon(t,y)|}{|x-y|^\beta} \le \tilde\xi_n\;,
\]
for some stationary sequence of random variables $\tilde\xi_n$ which has all of its moments bounded in the same way as the sequence $\{\xi_n\}$.

We further obtain the following bound on the “negative Hölder norm” of $\partial_x Y_\varepsilon$:

Corollary 2.12 For any $T > 0$, any even integer $k$, any $p > k$ and $\kappa = 1/k$, there exists a constant $C_{T,p,\kappa}$ such that
\[
\bigl(\mathbf{E}\|\partial_x Y_\varepsilon(t)\|^p_{-1/2,p_\kappa}\bigr)^{1/p} \le C_{T,p,\kappa}\,\varepsilon^{\alpha/8-\kappa}\;,
\]
for all $0 \le t \le T$ and $\varepsilon > 0$.

Proof. We note that
\[
\|\partial_x Y_\varepsilon(t)\|_{-1/2,p_\kappa} = \sup_{|x-y|\le 1}\frac{|Y_\varepsilon(t,x) - Y_\varepsilon(t,y)|}{p_\kappa(x)\,|x-y|^{1/2}}\;,
\]
Lemma 2.13 For each p ≥ , there exists a constant C such that for all ε > , t ≥ , x ∈ R , (cid:2) E (cid:0) | Z ε ( x, t ) | (cid:1)(cid:3) / ≤ C (cid:0) t (cid:1) ε γα/ . Proof. The main ingredient in the proof is a bound on the correlation function ofthe right hand side of the equation for Z ε , which we denote byΛ ε ( z, z ′ ) = Cov (cid:0) | ∂ x Y ε ( z ) | , | ∂ x Y ε ( z ′ ) | (cid:1) . Inserting the definition of Y ε , we obtain the identityΛ ε ( z, z ′ ) = Z · · · Z ˜ P ( z − z ) ˜ P ( z − z ) ˜ P ( z ′ − z ) ˜ P ( z ′ − z )Ψ (4) ε ( z , · · · , z ) dz · · · dz ,where ˜ P ( z ) = ˜ P ( x, t ) = ∂ x p t ( x ) ,with p t the standard heat kernel andΨ (4) ε ( z , · · · , z ) = ε − − α Ψ (4) (cid:16) x ε , · · · , x ε , t ε α , · · · , t ε α (cid:17) . Here, we used the shorthand notation z i = ( x i , t i ), and integrals over z i are under-stood to be shorthand for R t R R dx i dt i . We now make use of Lemma 2.3, whichallows to factor this integral as | Λ ε ( z, z ′ ) | . (cid:16) ε − − α Z Z ˜ P ( z − z ) ˜ P ( z ′ − z ) ̺ ε ( z − z ) dz dz (cid:17) def = ˜ ̺ ε ( z, z ′ ) ,where we used the shorthand notation ̺ ε ( x, t ) = ̺ (cid:16) xε , tε α (cid:17) . We will show below that the following bound holds: ESTIMATES OF Y ε AND Z ε Lemma 2.14 ˜ ̺ ε ( z, z ′ ) . (cid:16) ∧ ε αγ/ d γp ( z, z ′ ) (cid:17) + (1 + t + t ′ ) ε α/ def = ζ ε ( z − z ′ ) + (1 + t + t ′ ) ε α/ ,where d p denotes the parabolic distance given by d p ( z, z ′ ) = | x − x ′ | + | t − t ′ | . Taking this bound for granted, we write as in the proof of Lemma 2.8 Z ε = Z ε − + P n> Z εn with Z εn ( z ) = 2 − n Z ϕ n ( z − z ′ ) (cid:0) | ∂ x Y ε ( z ′ ) | − ¯ V ε ( t ′ ) (cid:1) dz ′ ,and similarly for Z ε − . Squaring this expression and inserting the bound from Lemma 2.14,we obtain E | Z εn ( z ) | . − n Z Z ϕ n ( z − z ′ ) ϕ n ( z − z ′′ ) (cid:0) ζ ε ( z ′ − z ′′ ) + (1 + t ′ + t ′′ ) ε α (cid:1) dz ′ dz ′′ . 
Hence
\[
\mathbf{E}|Z_\varepsilon^n(z)|^2 \lesssim 2^{-2n}\int \zeta_\varepsilon(z')\,dz' + 2^{-4n}(1+t)^2\,\varepsilon^{\alpha/2}\;,
\]
where we made use of the scaling of $\varphi_n$ given by (2.19). Performing the corresponding bound for $Z_\varepsilon^-$, we similarly obtain
\[
\mathbf{E}|Z_\varepsilon^-(z)|^2 \lesssim t\int \zeta_\varepsilon(z')\,dz' + (1+t)^2\,\varepsilon^{\alpha/2}\;.
\]
The claim now follows from the bound
\[
\int \zeta_\varepsilon(z')\,dz' \le \int_0^t\!\int_{\mathbf{R}} \varepsilon^{\alpha\gamma/2}\bigl(|x| + |s|^{1/2}\bigr)^{-\gamma}\,dx\,ds \lesssim \varepsilon^{\alpha\gamma/2}\,t^{(3-\gamma)/2}\;.
\]

Proof of Lemma 2.14. Similarly to the proof of Lemma 2.8, we write
\[
\tilde\varrho_\varepsilon(z,z') = \sum_{n_1\ge 0}\sum_{n_2\ge 0} \tilde\varrho^{n_1,n_2}_\varepsilon(z,z')\;,
\]
with
\[
\tilde\varrho^{n_1,n_2}_\varepsilon(z,z') = \varepsilon^{-2-\alpha}\,2^{-2n_1}\,2^{-2n_2}\int\!\!\int \tilde\varphi_{n_1}(z-z_1)\,\tilde\varphi_{n_2}(z'-z_2)\,\varrho_\varepsilon(z_1-z_2)\,dz_1\,dz_2\;.
\]
For $n \ge 1$, $\tilde\varphi_n$ is defined as in the proof of Lemma 2.8, whereas $\tilde\varphi_0$ is different from what it was there and is defined as
\[
\tilde\varphi_0(x,t) = \bigl(\partial_x p_{1-t}(x)\bigr)^2\;.
\]
By symmetry, we can restrict ourselves to the case $n_2 \ge n_1$, which we will do in the sequel. In the case where $n_1 > 0$, the above integral can be restricted to the set of pairs $(z_1,z_2)$ such that their parabolic distance satisfies
\[
d_p(z_1,z_2) \ge \bigl(d_p(z,z') - 2^{1-n_1}\bigr)_+\;,
\]
where $(\cdot)_+$ denotes the positive part of a number. Replacing $\tilde\varphi_{n_1}$ by its supremum and integrating out $\tilde\varphi_{n_2}$ and $\varrho_\varepsilon$ yields the bound
\[
\tilde\varrho^{n_1,n_2}_\varepsilon(z,z') \lesssim \bigl(1 + \delta_{n_1,0}(t+t')\bigr)\,2^{2n_1-n_2}\,\varepsilon^{\alpha/2}\int_{A_\varepsilon(n_1)} \varrho(z_1)\,dz_1\;,
\]
where $A_\varepsilon(0) = \mathbf{R}^2$ and
\[
A_\varepsilon(n_1) = \bigl\{z_1 \,:\, d_p(0,z_1) \ge \varepsilon^{-\alpha/2}\bigl(d_p(z,z') - 2^{1-n_1}\bigr)_+\bigr\}\;,
\]
for $n_1 > 0$. (Remark that the prefactor $1+t+t'$ is relevant only in the case $n_1 = n_2 = 0$.) It follows from the integrability of $\varrho$ that one always has the bound
\[
\tilde\varrho^{n_1,n_2}_\varepsilon(z,z') \lesssim \bigl(1 + \delta_{n_1,0}(t+t')\bigr)\,2^{2n_1-n_2}\,\varepsilon^{\alpha/2}\;. \qquad (2.36)
\]
Moreover, we deduce from Assumption 1.4 that, whenever $n_1 > 0$ and $d_p(z,z') \ge 2^{1-n_1}$, one has the improved bound: for any $\gamma > 0$,
\[
\tilde\varrho^{n_1,n_2}_\varepsilon(z,z') \lesssim 2^{2n_1-n_2}\,\varepsilon^{\alpha/2}\Bigl(1 \wedge \frac{\varepsilon^{\alpha\gamma/2}}{d_p^\gamma(z,z')}\Bigr)\;. \qquad (2.37)
\]
The bound (2.36) is sufficient for our needs in the case $n_1 = 0$, so we assume $n_1 > 0$ from now on, and we derive a further bound on $\tilde\varrho^{n_1,n_2}_\varepsilon(z,z')$ which will be useful in the regime where $n_2$ is very large.
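The regrouping of the bounds (2.36), (2.37), (2.40) and (2.41) carried out below rests on two elementary summations of geometric type. The following snippet (a numerical illustration only, not part of the proof) verifies both: the inner sum over $n_2 \ge n_1$ collapses to a single dyadic factor, and the resulting sum over $n_1$ of the minimum of an increasing and a decreasing geometric sequence is bounded uniformly in the small parameter.

```python
# Sanity check of the geometric sums used when summing the dyadic bounds:
#   sum_{n2 >= n1} 2^{2*n1 - n2} = 2^{n1 + 1},
#   sum_{n1 >= 1} min(2^{n1} * a, 2^{-n1} / a)  is bounded uniformly in a in (0, 1].
for n1 in range(0, 10):
    s = sum(2.0 ** (2 * n1 - n2) for n2 in range(n1, 200))
    assert abs(s - 2.0 ** (n1 + 1)) < 1e-9 * 2.0 ** (n1 + 1)

for a in [1.0, 0.1, 1e-3, 1e-6, 1e-9]:
    s = sum(min(2.0 ** n1 * a, 2.0 ** (-n1) / a) for n1 in range(1, 200))
    assert s <= 3.0   # uniform bound, independent of a
```

The second sum is the mechanism behind the bound $\tilde\varrho_\varepsilon(z,z') \lesssim 1$ in the regime $d_p(z,z') < \varepsilon^{\alpha/2}$: the two competing geometric bounds cross at a single scale, and each side of the crossover sums to an $O(1)$ quantity.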
Since the integral of $\tilde\varphi_{n_1}$ is bounded independently of $n_1$, we obtain
\[
\tilde\varrho^{n_1,n_2}_\varepsilon(z,z') \lesssim \varepsilon^{-2-\alpha}\,2^{-2n_1}\,2^{-2n_2}\sup_{d_p(z_1,z)\le 2^{-n_1}}\int \tilde\varphi_{n_2}(z'-z_2)\,\varrho_\varepsilon(z_1-z_2)\,dz_2\;. \qquad (2.38)
\]
We now distinguish between three cases, which depend on the size of $z - z'$.

Case 1: $d_p(z,z') \le \varepsilon^{\alpha/2}$. In this case, we proceed as in the proof of Lemma 2.6, which yields
\begin{align*}
\tilde\varrho^{n_1,n_2}_\varepsilon(z,z')
&\lesssim \varepsilon^{-2-\alpha}\,2^{-2n_1}\,2^{-2n_2}\sup_{z_1}\int \tilde\varphi_{n_2}(z_2)\,\varrho_\varepsilon(z_1-z_2)\,dz_2 \\
&\lesssim \varepsilon^{-2-\alpha}\,2^{-2n_1}\,2^{-2n_2}\sup_{x_1}\int_{\mathbf{R}} \sup_s \varrho_\varepsilon(x_1-x_2,s)\int_0^t \tilde\varphi_{n_2}(x_2,t_2)\,dt_2\,dx_2 \\
&\lesssim \varepsilon^{-1-\alpha/2}\,2^{-n_2}\int_{\mathbf{R}} \sup_s \varrho_\varepsilon(x_2,s)\,dx_2
\;\lesssim\; \varepsilon^{-\alpha/2}\,2^{-n_2}\;. \qquad (2.39)
\end{align*}

Case 2: $|x-x'| \ge d_p(z,z')/2 \ge \varepsilon^{\alpha/2}/2$. Note that in (2.38), the argument of $\varrho_\varepsilon$ can only ever take values with $|x_1 - x_2| \in B_\varepsilon(n_1)$, where
\[
B_\varepsilon(n_1) = \bigl\{\bar x \,:\, |\bar x| \ge \bigl(|x-x'| - 2^{1-n_1}\bigr)_+\bigr\}\;.
\]
As a consequence, we obtain the bound
\[
\tilde\varrho^{n_1,n_2}_\varepsilon(z,z') \lesssim \varepsilon^{-2-\alpha}\,2^{-2n_1}\,2^{-2n_2}\sup_{\bar x \in B_\varepsilon(n_1)}\,\sup_{s\in\mathbf{R}} \varrho_\varepsilon(\bar x,s)\;.
\]
The case of interest to us for this bound will be $2^{-n_1} \le \varepsilon^{\alpha/2}$, in which case we deduce from this calculation and Assumption 1.4 that
\[
\tilde\varrho^{n_1,n_2}_\varepsilon(z,z') \lesssim \varepsilon^{-2-\alpha}\,2^{-2n_1}\,2^{-2n_2}\Bigl(\frac{\varepsilon}{d_p(z,z')}\Bigr)^{\bar\gamma}\;,
\]
where $\bar\gamma$ is an arbitrarily large exponent. Choosing $\bar\gamma$ large enough, we conclude that one also has the bound
\[
\tilde\varrho^{n_1,n_2}_\varepsilon(z,z') \lesssim \varepsilon^{-\alpha/2}\,2^{-n_2}\Bigl(1 \wedge \frac{\varepsilon^{\alpha/2}}{d_p(z,z')}\Bigr)^{\gamma}\;, \qquad (2.40)
\]
which will be sufficient for our needs.

Case 3: $|t-t'|^{1/2} \ge d_p(z,z')/2 \ge \varepsilon^{\alpha/2}/2$. Similarly, we obtain
\[
\tilde\varrho^{n_1,n_2}_\varepsilon(z,z') \lesssim \varepsilon^{-2-\alpha}\,2^{-2n_1}\,2^{-2n_2}\int_{\mathbf{R}} \sup_{s \in B'_\varepsilon(n_1)} \varrho_\varepsilon(x_1,s)\,dx_1\;,
\]
where
\[
B'_\varepsilon(n_1) = \bigl\{s \,:\, |s| \ge \varepsilon^{-\alpha}\bigl(|t-t'| - 2^{2-2n_1}\bigr)_+\bigr\}\;.
\]
For $2^{-n_1} \le \varepsilon^{\alpha/2}$, this yields as before
\[
\tilde\varrho^{n_1,n_2}_\varepsilon(z,z') \lesssim \varepsilon^{-\alpha/2}\,2^{-n_2}\Bigl(1 \wedge \frac{\varepsilon^{\alpha/2}}{d_p(z,z')}\Bigr)^{\gamma}\;. \qquad (2.41)
\]
It now remains to sum over all values $n_2 \ge n_1 \ge 0$. For $n_1 = 0$, we sum the bound (2.36), which yields
\[
\sum_{n_2 \ge 0} \tilde\varrho^{0,n_2}_\varepsilon(z,z') \lesssim (1+t+t')\,\varepsilon^{\alpha/2}\;.
\]
In order to sum the remaining terms, we first consider the case $d_p(z,z') < \varepsilon^{\alpha/2}$.
In this case, we use (2.36) and (2.39) to deduce that
\[
\sum_{n_2 \ge n_1} \tilde\varrho^{n_1,n_2}_\varepsilon(z,z') \lesssim 2^{n_1}\,\varepsilon^{\alpha/2} \wedge 2^{-n_1}\,\varepsilon^{-\alpha/2}\;,
\]
so that in this case $\tilde\varrho_\varepsilon(z,z') \lesssim 1 + (t+t')\,\varepsilon^{\alpha/2}$.

It remains to consider the case $d_p(z,z') \ge \varepsilon^{\alpha/2}$. For this, we break the sum over $n_1$ into three pieces:
\begin{align*}
N_1 &= \bigl\{n_1 \ge 1 \,:\, 2^{-n_1} \ge d_p(z,z')/2\bigr\}\;, \\
N_2 &= \bigl\{n_1 \ge 1 \,:\, \varepsilon^{\alpha/2} \le 2^{-n_1} < d_p(z,z')/2\bigr\}\;, \\
N_3 &= \bigl\{n_1 \ge 1 \,:\, 2^{-n_1} < \varepsilon^{\alpha/2}\bigr\}\;.
\end{align*}
For $n_1 \in N_1$, we only make use of the bound (2.36). Summing first over $n_2 \ge n_1$ and then over $n_1 \in N_1$, we obtain
\[
\sum_{n_1 \in N_1}\sum_{n_2 \ge n_1} \tilde\varrho^{n_1,n_2}_\varepsilon(z,z') \lesssim \frac{\varepsilon^{\alpha/2}}{d_p(z,z')}\;.
\]
For $n_1 \in N_2$, we only make use of the bound (2.37). Summing again first over $n_2 \ge n_1$ and then over $n_1 \in N_2$, we obtain
\[
\sum_{n_1 \in N_2}\sum_{n_2 \ge n_1} \tilde\varrho^{n_1,n_2}_\varepsilon(z,z') \lesssim \frac{\varepsilon^{\alpha\gamma/2}}{d_p^\gamma(z,z')}\;.
\]
In the last case, we similarly use either (2.40) or (2.41), depending on whether $|x-x'| \ge d_p(z,z')/2$ or $|t-t'|^{1/2} \ge d_p(z,z')/2$, which yields again
\[
\sum_{n_1 \in N_3}\sum_{n_2 \ge n_1} \tilde\varrho^{n_1,n_2}_\varepsilon(z,z') \lesssim \frac{\varepsilon^{\alpha\gamma/2}}{d_p^\gamma(z,z')}\;.
\]
Combining the above bounds, the claim follows.

Lemma 2.15 For any $T > 0$, $p \ge 1$, $\kappa > 0$ and $0 < \beta < 1$, there exists a constant $C_{T,p,\kappa,\beta}$ such that, for all $0 \le t \le T$ and $\varepsilon > 0$,
\[
\bigl(\mathbf{E}\,\|\partial_x Z_\varepsilon(t)\|^p_{\beta,p_\kappa}\bigr)^{1/p} \le C_{T,p,\kappa,\beta}\,\varepsilon^{-\kappa}\;.
\]
Proof. This is a corollary of Lemma 2.11 and Proposition 2.2.

As a corollary, we deduce:

Corollary 2.16 For any $T > 0$, $p \ge 1$ and $\kappa > 0$, there exists a constant $C_{T,\kappa}$ such that, for all $0 \le t \le T$ and $\varepsilon > 0$,
\[
\bigl(\mathbf{E}\,\|Z_\varepsilon(t)\|^p_{0,p_\kappa}\bigr)^{1/p} \le C_{T,\kappa}\,\varepsilon^{\alpha/2-\kappa}\;,\qquad
\bigl(\mathbf{E}\,\|\partial_x Z_\varepsilon(t)\|^p_{0,p_\kappa}\bigr)^{1/p} \le C_{T,\kappa}\,\varepsilon^{\alpha/4-\kappa}\;.
\]

We will need moreover:

Corollary 2.17 As $\varepsilon \to 0$, $Z_\varepsilon(x,t) \to 0$ in probability, locally uniformly in $(x,t)$.

Proof. It follows from estimate (2.16) that for any $p > 1$ and any compact $K \subset \mathbf{R} \times \mathbf{R}_+$, there exists a constant $C_{p,K}$ such that
\[
\mathbf{E}\Bigl(\int_K \bigl||\partial_x Y_\varepsilon(x,t)|^2 - \bar V_\varepsilon(t)\bigr|^p\,dx\,dt\Bigr) \le C_{p,K}\;.
\]
Then, by the Nash estimate, we obtain
\[
\mathbf{E}\,\|Z_\varepsilon\|_{\mathcal{C}^\gamma(K)} \le C_K\;, \qquad (2.42)
\]
where the Hölder exponent $\gamma > 0$ and the constant $C_K$ do not depend on $\varepsilon$.
As a consequence of the first estimate of Corollary 2.16, we have
\[
\mathbf{E}\,\|Z_\varepsilon\|^p_{L^p(K)} \le C_{p,K}\,\varepsilon^{p(\alpha/2-\kappa)}\;. \qquad (2.43)
\]
Combining (2.42) and (2.43), one can easily derive the required convergence: a uniform bound on the $\mathcal{C}^\gamma(K)$-norm combined with smallness in $L^p(K)$ forces uniform smallness on $K$.

3 Proof of the main result

Before concluding with the proof of our main theorem, we prove a result for a parabolic heat equation with coefficients which live in spaces of weighted Hölder continuous functions. We consider an abstract evolution equation of the type
\[
\partial_t u = \partial_x^2 u + F\,\partial_x u + G\,u\;, \qquad (3.1)
\]
where $F$ and $G$ are measurable functions of time, taking values in $\mathcal{C}^{-\beta}_{p_\kappa}$ for some suitable $\kappa > 0$ and $\beta < 1/2$. The main result of this section is the following:

Theorem 3.1 Let $\beta$ and $\kappa$ be positive numbers such that $\beta + \kappa < 1/2$, and let $F$ and $G$ be functions in $L^p_{\mathrm{loc}}(\mathbf{R}_+, \mathcal{C}^{-\beta}_{p_\kappa})$ for every $p \ge 1$. Let furthermore $\ell \in \mathbf{R}$ and $u_0 \in \mathcal{C}^{1/2}_{e_\ell}$. Then, there exists a unique global mild solution to (3.1). Furthermore, this solution is continuous with values in $\mathcal{C}^{1/2}_{e_m}$ for every $m < \ell$, and the map $(u_0, F, G) \mapsto u$ is jointly continuous in these topologies.

Proof. We will show a slightly stronger statement, namely that for every $\delta > 0$ the solution satisfies $u_t \in \mathcal{C}^{1/2}_{e_{\ell-\delta t}}$ for $t \in [0,T]$, for arbitrary values of $T > 0$. We fix $T$, $\delta$ and $\ell$ from now on. We then write
\[
|||u|||_{\delta,\ell,T} \stackrel{\mathrm{def}}{=} \sup_{t\in[0,T]} \|u_t\|_{1/2,e_{\ell-\delta t}}\;,
\]
and we denote by $\mathcal{B}_{\delta,\ell,T}$ the corresponding Banach space. With this notation at hand, we define a map $\mathcal{M}_T \colon \mathcal{B}_{\delta,\ell,T} \to \mathcal{B}_{\delta,\ell,T}$ by
\[
\bigl(\mathcal{M}_T u\bigr)_t = \int_0^t P_{t-s}\bigl(F_s\,\partial_x u_s + G_s\,u_s\bigr)\,ds\;, \qquad t \in [0,T]\;.
\]
It follows from Proposition 2.2 that we have the bound
\[
\bigl\|\bigl(\mathcal{M}_T u\bigr)_t\bigr\|_{1/2,e_{\ell-\delta t}} \le C\int_0^t (t-s)^{-(1+2\beta)/4}\,\bigl\|F_s\,\partial_x u_s + G_s\,u_s\bigr\|_{-\beta,e_{\ell-\delta t}}\,ds\;.
\]
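The fixed-point mechanism used in the proof of Theorem 3.1 can be illustrated numerically. The sketch below is a minimal toy version under simplifying assumptions: the coefficients are smooth, bounded and frozen in time (the actual $F$ and $G$ of the paper are far rougher), the spatial domain is a torus, and all parameters are arbitrary illustrative choices. It iterates the Duhamel map $u \mapsto P_t u_0 + (\mathcal{M}_T u)_t$ and checks that the Picard iterates converge once the horizon $T$ is short.

```python
import numpy as np

# Picard iteration for the mild form  u_t = P_t u0 + \int_0^t P_{t-s}(F du_s + G u_s) ds
# on the torus [0, 2*pi); P_t is the periodic heat semigroup, applied via FFT.
N, M, T = 128, 64, 0.2                      # space points, time steps, horizon
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi   # integer wavenumbers
dt = T / M

def heat(v, t):                             # P_t v via Fourier multipliers
    return np.real(np.fft.ifft(np.exp(-k**2 * t) * np.fft.fft(v)))

def ddx(v):                                 # spectral derivative
    return np.real(np.fft.ifft(1j * k * np.fft.fft(v)))

u0 = np.sin(x)
F = 0.5 * np.cos(x)                         # smooth stand-ins for the rough coefficients
G = 0.3 * np.sin(2 * x)

u = np.tile(u0, (M + 1, 1))                 # initial guess: constant in time
for it in range(80):
    new = np.empty_like(u)
    for m in range(M + 1):
        acc = heat(u0, m * dt)
        for j in range(m):                  # left-point rule for the Duhamel integral
            acc += dt * heat(F * ddx(u[j]) + G * u[j], (m - j) * dt)
        new[m] = acc
    diff = np.abs(new - u).max()
    u = new
    if diff < 1e-10:
        break

assert diff < 1e-10                         # the Picard iterates have converged
```

The geometric decay of `diff` over the iterations mirrors the operator-norm bound on $\mathcal{M}_T$: for small $T$ the map is a contraction, and existence on longer horizons follows by restarting, exactly as in the proof.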
Combining Proposition 2.1 with (2.3) and (2.4), we furthermore obtain the bound
\begin{align*}
\bigl\|F_s\,\partial_x u_s\bigr\|_{-\beta,e_{\ell-\delta t}}
&\le C\bigl(\delta|t-s|\bigr)^{-\kappa}\,\|F_s\|_{-\beta,p_\kappa}\,\bigl\|\partial_x u_s\bigr\|_{-1/2,e_{\ell-\delta s}} \\
&\le C\bigl(\delta|t-s|\bigr)^{-\kappa}\,\|F_s\|_{-\beta,p_\kappa}\,|||u|||_{\delta,\ell,T}\;,
\end{align*}
where $C$ is uniformly bounded for $\delta \in (0,1]$ and bounded $\ell$ and $s$. A similar bound holds for $G_s u_s$, so that, combining these bounds and using Hölder's inequality for the integral over $s$, we obtain the existence of constants $\zeta > 0$ and $p > 1$ such that the bound
\[
|||\mathcal{M}_T u|||_{\delta,\ell,T} \le C\,\delta^{-\kappa}\,T^\zeta\bigl(\|F\|_{L^p(\mathcal{C}^{-\beta}_{p_\kappa})} + \|G\|_{L^p(\mathcal{C}^{-\beta}_{p_\kappa})}\bigr)\,|||u|||_{\delta,\ell,T}
\]
holds. Since the norm of this operator is strictly less than $1$ provided that $T$ is small enough, the short-time existence and uniqueness of solutions follow from Banach's fixed point theorem. The existence of solutions up to the final time $T$ follows by iterating this argument, noting that the interval of short-time existence restarting from $u(t)$ at time $t$ can be bounded from below by a constant that is uniform over all $t \in [0,T]$, as a consequence of the linearity of the equation. Actually, we obtain the bound
\[
\|u_t\|_{1/2,e_{\ell-\delta t}} \lesssim \exp\Bigl(C\,t\,\bigl(\|F\|_{L^p(\mathcal{C}^{-\beta}_{p_\kappa})} + \|G\|_{L^p(\mathcal{C}^{-\beta}_{p_\kappa})}\bigr)^{1/\zeta}\Bigr)\,\|u_0\|_{1/2,e_\ell}\;,
\]
where the constants $C$ and $\zeta$ depend on the choice of $\ell$ and $\delta$.

The solutions are obviously linear in $u_0$ since the equation is linear in $u$. It remains to show that the solutions also depend continuously on $F$ and $G$. Let $\bar u$ be the solution to the equation
\[
\partial_t \bar u = \partial_x^2 \bar u + \bar F\,\partial_x \bar u + \bar G\,\bar u\;, \qquad (3.2)
\]
and write $\varrho = u - \bar u$. The difference $\varrho$ then satisfies the equation
\[
\partial_t \varrho = \partial_x^2 \varrho + F\,\partial_x \varrho + G\,\varrho + (F - \bar F)\,\partial_x \bar u + (G - \bar G)\,\bar u\;,
\]
with zero initial condition. Similarly to before, we thus have
\[
\varrho_t = \bigl(\mathcal{M}_T \varrho\bigr)_t + \int_0^t P_{t-s}\bigl((F_s - \bar F_s)\,\partial_x \bar u_s + (G_s - \bar G_s)\,\bar u_s\bigr)\,ds\;.
\]
It follows from the above bounds that
\[
|||\varrho|||_{\delta,\ell,T} \lesssim |||\mathcal{M}_T \varrho|||_{\delta,\ell,T} + C\,\delta^{-\kappa}\,T^\zeta\bigl(\|F - \bar F\|_{L^p(\mathcal{C}^{-\beta}_{p_\kappa})} + \|G - \bar G\|_{L^p(\mathcal{C}^{-\beta}_{p_\kappa})}\bigr)\,|||\bar u|||_{\delta,\ell,T}\;.
\]
Over short times, the required continuity statement thus follows at once. Over fixed times, it follows as before by iterating the argument.

Remark 3.2 In principle, one could obtain a similar result for less regular initial conditions, but this does not seem worth the additional effort in this context.

Proof of Theorem 1.8. We apply Theorem 3.1 with $\beta = 1/4$ and $\kappa = 1/8$. Note that the equation (1.8) for $v_\varepsilon$ is precisely of the form (3.1) with
\[
F = 2\,\partial_x Y_\varepsilon + 2\,\partial_x Z_\varepsilon\;, \qquad G = |\partial_x Z_\varepsilon|^2 + 2\,\partial_x Z_\varepsilon\,\partial_x Y_\varepsilon\;.
\]
It follows from Corollaries 2.12 and 2.16 that, for every $p > 0$ and $\delta > 0$, one has the bound
\[
\Bigl(\mathbf{E}\int_0^T \|F\|^p_{-\beta,p_\kappa}\,dt\Bigr)^{1/p} \lesssim \varepsilon^{\alpha/8-\delta}\;,
\]
say. Similarly, it follows from Lemma 2.11 and Corollary 2.16 that one actually has the bound
\[
\Bigl(\mathbf{E}\int_0^T \|G\|^p_{0,p_\kappa}\,dt\Bigr)^{1/p} \lesssim \varepsilon^{\alpha/4-\delta}\;,
\]
which is stronger than what we required. As a consequence of Theorem 3.1, this shows immediately that $v_\varepsilon \to u$ in probability, locally uniformly both in space and in time. We conclude by recalling that, by Corollaries 2.9 and 2.17, the correctors $Y_\varepsilon$ and $Z_\varepsilon$ themselves converge locally uniformly to $0$ in probability.

References

[Bal10] G. Bal. Homogenization with large spatial random potential. Multiscale Model. Simul. 8, no. 4, (2010), 1484–1510.

[BLP78] A. Bensoussan, J.-L. Lions, and G. Papanicolaou. Asymptotic Analysis for Periodic Structures. North-Holland, Amsterdam, 1978.

[CKP01] F. Campillo, M. Kleptsyna, and A. Piatnitski. Homogenization of random parabolic operator with large potential. Stochastic Process. Appl. 93, no. 1, (2001), 57–85.

[DIPP06] M. A. Diop, B. Iftimie, É. Pardoux, and A. L. Piatnitski. Singular homogenization with stationary in time and periodic in space coefficients. J. Funct. Anal., no. 1, (2006), 1–46.

[Hai13a] M. Hairer. Solving the KPZ equation. Ann. Math.
(2013). To appear.

[Hai13b] M. Hairer. A theory of regularity structures, 2013. Preprint.

[IPP08] B. Iftimie, É. Pardoux, and A. Piatnitski. Homogenization of a singular random one-dimensional PDE. Ann. Inst. Henri Poincaré Probab. Stat. 44, no. 3, (2008), 519–543.

[Koz83] S. M. Kozlov. Reducibility of quasiperiodic differential operators and averaging. Trudy Moskov. Mat. Obshch. 46, (1983), 99–123.

[PP12] É. Pardoux and A. Piatnitski. Homogenization of a singular random one-dimensional PDE with time-varying coefficients. Ann. Probab. 40, no. 3, (2012), 1316–1356.

[You36] L. C. Young. An inequality of the Hölder type, connected with Stieltjes integration. Acta Math. 67, (1936), 251–282.