ON THE CHAOTIC CHARACTER OF THE STOCHASTIC HEAT EQUATION, BEFORE THE ONSET OF INTERMITTENCY

The Annals of Probability, Institute of Mathematical Statistics, 2013
By Daniel Conus, Mathew Joseph and Davar Khoshnevisan
Lehigh University, University of Utah and University of Utah
We consider a nonlinear stochastic heat equation ∂_t u = ½∂²_{xx}u + σ(u)∂²_{xt}W, where ∂²_{xt}W denotes space–time white noise and σ: R → R is Lipschitz continuous. We establish that, at every fixed time t > 0, the global behavior of the solution depends in a critical manner on the structure of the initial function u₀: under suitable conditions on u₀ and σ, sup_{x∈R} u_t(x) is a.s. finite when u₀ has compact support, whereas with probability one, lim sup_{|x|→∞} u_t(x)/(log|x|)^{1/6} > 0 when u₀ is bounded uniformly away from zero. This sensitivity to the initial data of the stochastic heat equation is a way to state that the solution to the stochastic heat equation is chaotic at fixed times, well before the onset of intermittency.
1. Introduction and main results.
Let W := {W(t,x)}_{t≥0, x∈R} denote a real-valued Brownian sheet indexed by two parameters (t,x) ∈ R₊ × R. That is, W is a centered Gaussian process with covariance

    Cov(W(t,x), W(s,y)) = min(t,s) × min(|x|,|y|) × 1_{(0,∞)}(xy).   (1.1)

And let us consider the nonlinear stochastic heat equation

    (∂/∂t)u_t(x) = (κ/2)(∂²/∂x²)u_t(x) + σ(u_t(x))·(∂²/∂t∂x)W(t,x),   (1.2)

where x ∈ R and t > 0, σ: R → R is a nonrandom and Lipschitz continuous function, κ > 0 is a fixed viscosity parameter, and the initial function u₀: R → R is bounded, nonrandom and measurable. The mixed partial derivative ∂²W(t,x)/(∂t∂x) is the so-called “space–time white noise,” and is defined

Received March 2011; revised November 2011.
Supported in part by the Swiss National Science Foundation Fellowship PBELP2-122879.
Supported in part by the NSF Grant DMS-07-47758.
Supported in part by the NSF Grants DMS-07-06728 and DMS-10-06903.
AMS 2000 subject classifications.
Primary 60H15; secondary 35R60.
Key words and phrases.
Stochastic heat equation, chaos, intermittency.
This is an electronic reprint of the original article published by the Institute of Mathematical Statistics in The Annals of Probability, 2013, Vol. 41, No. 3B, 2225–2260. This reprint differs from the original in pagination and typographic detail.
as a generalized Gaussian random field; see Chapter 2 of Gelfand and Vilenkin [17], Section 2.7, for example.

It is well known that the stochastic heat equation (1.2) has a (weak) solution {u_t(x)}_{t>0, x∈R} that is jointly continuous; it is also unique up to evanescence; see, for example, Chapter 3 of Walsh [23], (3.5), page 312. And the solution can be written in mild form as the (a.s.) solution to the following stochastic integral equation:

    u_t(x) = (p_t ∗ u₀)(x) + ∫_{(0,t)×R} p_{t−s}(y−x) σ(u_s(y)) W(ds dy),   (1.3)

where

    p_t(z) := (2πκt)^{−1/2} exp(−z²/(2κt))   (t > 0, z ∈ R)   (1.4)

denotes the free-space heat kernel, and the final integral in (1.3) is a stochastic integral in the sense of Walsh [23], Chapter 2. Chapter 1 of the minicourse by Dalang et al. [10] contains a quick introduction to the topic of stochastic PDEs of the type considered here.

We are interested solely in the physically interesting case that u₀(x) ≥ 0 for all x ∈ R. In that case, a minor variation of Mueller’s comparison principle [21] implies that if in addition σ(0) = 0, then with probability one u_t(x) ≥ 0 for all t > 0 and x ∈ R; see also Theorem 5.1 of Dalang et al. [10], page 130, as well as Theorem 2.1 below.

We follow Foondun and Khoshnevisan [15], and say that the solution u := {u_t(x)}_{t>0, x∈R} to (1.2) is (weakly) intermittent if

    0 < lim sup_{t→∞} t^{−1} log E(|u_t(x)|^ν) < ∞   for all ν ≥ 2,   (1.5)

where “log” denotes the natural logarithm, to be concrete. Here, we refer to property (1.5), if and when it holds, as mathematical intermittency [to be distinguished from physical intermittency, which is a phenomenological property of an object that (1.2) is modeling].

If σ(u) = const · u and u₀ is bounded from above and below uniformly, then the work of Bertini and Cancrini [2] and Mueller’s comparison principle [21] together imply (1.5).
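For readers who wish to experiment with (1.2), a crude explicit finite-difference Euler scheme is easy to write down. The following sketch is ours and is not part of the paper; the function name and parameter values are illustrative, and the scheme is only a rough numerical approximation of the Walsh solution:

```python
import numpy as np

def simulate_she(sigma, kappa=1.0, T=0.1, L=10.0, nx=200, nt=2000, seed=0):
    """Explicit Euler scheme for du = (kappa/2) u_xx dt + sigma(u) dW,
    started from u_0 = 1, on [0, L] with periodic boundary conditions.
    Space-time white noise on a grid cell is approximated by N(0, dt/dx)."""
    rng = np.random.default_rng(seed)
    dx, dt = L / nx, T / nt
    assert kappa * dt <= dx**2, "stability condition for the explicit scheme"
    u = np.ones(nx)
    for _ in range(nt):
        # discrete Laplacian with periodic boundary
        lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
        # one increment of discretized space-time white noise per cell
        dW = rng.standard_normal(nx) * np.sqrt(dt / dx)
        u = u + 0.5 * kappa * lap * dt + sigma(u) * dW
    return u

u = simulate_she(lambda v: 0.5 * np.ones_like(v))  # sigma bounded away from 0
```

Since the Walsh integral has zero mean, E u_t(x) = 1 here, so the empirical average over the grid stays near one while individual values fluctuate and occasionally peak.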
In the fully nonlinear case, Foondun and Khoshnevisan [15] discuss a connection to nonlinear renewal theory, and use that connection to establish (1.5) under various conditions; for instance, they have shown that (1.5) holds provided that lim inf_{|x|→∞} |σ(x)/x| > 0 and inf_{x∈R} u₀(x) is sufficiently large. If the lim sup in (1.5) is a bona fide limit, then we arrive at the usual description of intermittency in the literature of mathematics and theoretical physics; see, for instance, Molchanov [20] and Zeldovich et al. [24–26].

Mathematical intermittency is motivated strongly by a vast physics literature on (physical) intermittency and localization, and many of the references can be found in the combined bibliographies of [15, 20, 24, 25]. Let us say a few more words about “localization” in the present context. It is generally accepted that if (1.5) holds, then u := {u_t(x)}_{t>0, x∈R} ought to undergo a separation of scales (or “pattern/period breaking”). In fact, one can argue that property (1.5) implies that, as t → ∞, the random function x ↦ u_t(x) starts to develop very tall peaks, distributed over small x-intervals (see Section 2.4 of Bertini and Cancrini [2] and the Introduction of the monograph by Carmona and Molchanov [7]). This “peaking property” is called localization, and is experienced with very high probability, provided that: (i) the intermittency property (1.5) holds; and (ii) t ≫ 1. And it is also expected that there are physically-intermittent processes, not unlike those studied in the present paper, which, however, do not satisfy the (mathematical) intermittency condition (1.5) on Liapounov exponents; see, for example, the paper by Chertkov et al. [8].

Our wish is to better understand “physical intermittency” in the setting of the stochastic heat equation (1.2). We are motivated strongly by the literature on smooth finite-dimensional dynamical systems ([22], Section 1.3), which ascribes intermittency in part to “chaos,” or slightly more precisely, sensitive dependence on the initial state of the system.

In order to describe the contributions of this paper, we first recall a consequence of a more general theorem of Foondun and Khoshnevisan [16]: if σ(0) = 0, and if u₀ is Hölder continuous of index > 1/2 and has compact support, then for every t > 0,

    lim_{z→∞} z^{−3} log P{ sup_{x∈R} u_t(x) > z } = −∞.   (1.6)

It follows in particular that the global maximum of the solution (at a fixed time) is a finite (nonnegative) random variable. By contrast, one expects that if

    inf_{x∈R} u₀(x) > 0,   (1.7)

then the solution u_t is unbounded for all t > 0. Here we prove that fact and a good deal more; namely, we demonstrate here that there in fact exists a minimum rate of “blowup” that applies regardless of the parameters of the problem.

A careful statement requires a technical condition that turns out to be necessary as well as sufficient. In order to discover that condition, let us consider the case that u₀(x) ≡ ρ for some constant ρ > 0 and all x ∈ R. Then, (1.7) clearly holds; but there can be no blowup if σ(ρ) = 0. Indeed, in that case the unique solution to the stochastic heat equation is u_t(x) ≡ ρ, which is bounded. Thus, in order to have an unbounded solution, we need at the very least to consider the case that σ(x) ≠ 0 for all x > 0. [Note that σ(0) = 0 is permitted.] Instead, we will assume the following seemingly stronger, but in fact more or less equivalent, condition from now on:

    σ(x) > 0   for all x ∈ R \ {0}.   (1.8)

We are ready to present the first theorem of this paper. Here and throughout we write “f(R) ≳ g(R) as R → ∞” in place of the more cumbersome “there exists a nonrandom C > 0 such that lim inf_{R→∞} f(R)/g(R) ≥ C.” The largest such C is called “the constant in ≳.” We might sometimes also write “g(R) ≲ f(R)” in place of “f(R) ≳ g(R).” And there is a corresponding “constant in ≲.”

Theorem 1.1.
Let {u_t(x)}_{t>0, x∈R} be a solution to

    (∂/∂t)u_t(x) = (κ/2)(∂²/∂x²)u_t(x) + σ(u_t(x))·(∂²/∂t∂x)W(t,x)   (t > 0, x ∈ R)   (1.9)

written in mild form (1.3), where the initial function u₀: R → R is bounded, nonrandom and satisfies

    inf_{x∈R} u₀(x) > 0.   (1.10)

Then, the following hold:

(1) If inf_{x∈R} σ(x) ≥ ε₀ > 0 and t > 0, then a.s.,

    sup_{x∈[−R,R]} u_t(x) ≳ (log R)^{1/6}/κ^{1/12}   as R → ∞;   (1.11)

and the constant in ≳ does not depend on κ.

(2) If σ(x) > 0 for all x ∈ R and there exists γ ∈ (0, 1/6) such that

    lim_{|x|→∞} σ(x)·(log|x|)^{(1/6)−γ} = ∞,   (1.12)

then for all t > 0 the following holds almost surely:

    sup_{x∈[−R,R]} |u_t(x)| ≳ (log R)^γ/κ^{1/12}   as R → ∞;   (1.13)

and the constant in ≳ does not depend on κ.

Note in particular that if σ is uniformly bounded below then a.s.,

    lim sup_{|x|→∞} u_t(x)/(log|x|)^{1/6} ≥ const · κ^{−1/12}.   (1.14)

We believe that it is a somewhat significant fact that a rate (log R)^{1/6} of blowup exists that is valid for all u₀ and σ in the first part of Theorem 1.1. However, the actual numerical estimate, that is, the powers 1/6 and 1/12, might not be sharp, as Theorem 1.2 might suggest (see Remark 1.5). In fact, we believe that the actual blowup rate might depend critically on the fine properties of the function σ. Next, we highlight this assertion in one particularly interesting case. Here and throughout, we write “f(R) ≍ g(R) as R → ∞” as shorthand for “f(R) ≳ g(R) and g(R) ≳ f(R) as R → ∞.” The two constants in the preceding two ≳’s are called the “constants in ≍.”

Theorem 1.2. If σ is uniformly bounded away from 0 and ∞ and t > 0, then

    sup_{x∈[−R,R]} u_t(x) ≍ (log R)^{1/2}/κ^{1/4}   a.s. as R → ∞.   (1.15)

Moreover, for every fixed κ₀ > 0, the preceding constants in ≍ do not depend on κ ≥ κ₀.

In particular, we find that if σ is bounded uniformly away from 0 and ∞, then there exist constants c_*, c^* ∈ (0,∞) such that

    c_*/κ^{1/4} ≤ lim sup_{|x|→∞} u_t(x)/(log|x|)^{1/2} ≤ c^*/κ^{1/4}   a.s.,   (1.16)

uniformly for all κ ≥ κ₀.

The preceding discusses the behavior in case σ is bounded uniformly away from 0; that is, a uniformly-noisy stochastic heat equation (1.2). In general, we can say little about the remaining case that σ(0) = 0. Nevertheless, in the well-known parabolic Anderson model, namely (1.2) with σ(x) = cx for some constant c > 0, we are able to obtain some results (Theorem 1.3) that parallel Theorems 1.1 and 1.2.
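The (log R)^{1/2} rate in Theorem 1.2 is what a Borel–Cantelli heuristic predicts: Gaussian-type tails [see (1.21)–(1.22) below] plus approximate independence at unit distances make sup_{x∈[−R,R]} u_t(x) behave like the maximum of order-R i.i.d. Gaussian random variables, which grows like √(2 log R). A quick simulation of that toy picture (ours, purely illustrative, and not a substitute for the coupling arguments of the paper):

```python
import numpy as np

# Maximum of n i.i.d. standard normals, compared to the sqrt(2 log n) law.
rng = np.random.default_rng(1)
for n in (10_000, 1_000_000):
    running_max = float(rng.standard_normal(n).max())
    ratio = running_max / np.sqrt(2.0 * np.log(n))
```

As n grows, the ratio tends to 1; the solution of (1.2) is of course neither Gaussian nor exactly independent at distinct points, which is precisely what Sections 3 and 4 below quantify.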
Theorem 1.3. If σ(x) = cx for some c > 0, then a.s.,

    log sup_{x∈[−R,R]} u_t(x) ≍ (log R)^{2/3}/κ^{1/3}   as R → ∞,   (1.17)

and the constants in ≍ do not depend on κ > 0.

Hence, when σ(x) = cx we can find constants C_*, C^* ∈ (0,∞) such that, almost surely,

    0 < lim sup_{|x|→∞} u_t(x)/exp{C_*(log|x|)^{2/3}/κ^{1/3}}   and   lim sup_{|x|→∞} u_t(x)/exp{C^*(log|x|)^{2/3}/κ^{1/3}} < ∞.

Remark 1.4.
Thanks to (1.3), and since Walsh stochastic integrals have zero mean, it follows that E u_t(x) = (p_t ∗ u₀)(x). In particular, E u_t(x) ≤ sup_{x∈R} u₀(x) is uniformly bounded. Since u_t(x) is nonnegative, it follows from Fatou’s lemma that lim inf_{|x|→∞} u_t(x) < ∞ a.s. Thus, the behavior described by Theorem 1.1 is one about the highly-oscillatory nature of x ↦ u_t(x), valid for every fixed time t > 0. We will say a little more about this topic in Appendix B below.
Remark 1.5.
We pay some attention to the powers of the viscosity parameter κ in Theorems 1.2 and 1.3. Those powers suggest that at least two distinct universality classes can be associated to (1.2): (i) when σ is bounded uniformly away from zero and infinity, the solution behaves as a random walk in a weakly-interacting random environment; and (ii) when σ(x) = cx for some c > 0, the solution behaves as objects that arise in some random matrix models.
Remark 1.6.
In [18], (2), Kardar, Parisi and Zhang consider the solution u to (1.2) and apply formally the Hopf–Cole transformation u_t(x) := exp(λh_t(x)) to deduce that h := {h_t(x)}_{t≥0, x∈R} satisfies the following “SPDE”: for t > 0 and x ∈ R,

    (∂/∂t)h_t(x) = (κ/2)(∂²/∂x²)h_t(x) + (κλ/2)·((∂/∂x)h_t(x))² + (∂²/∂t∂x)W(t,x).   (1.18)

This is the celebrated “KPZ equation,” named after the authors of [18], and the random field h is believed to be a universal object (e.g., it is expected to arise as a continuum limit of a large number of interacting particle systems). Theorem 1.3 implies that there exist positive and finite constants a_t and A_t (depending only on t) such that

    a_t/κ^{1/3} < lim sup_{|x|→∞} h_t(x)/(log|x|)^{2/3} < A_t/κ^{1/3}   a.s. for all t > 0.   (1.19)

This is purely formal, but only because the construction of h via u is not rigorous. More significantly, our proofs suggest strongly a kind of asymptotic space–time scaling “log|x| ≈ t^{±3/2}.” If so, then the preceding verifies that the fluctuation exponent 1/z of h is 2/3, where z denotes the dynamic scaling exponent of h. The latter has been predicted by Kardar et al. [18], page 890, and proved by Balázs, Quastel and Seppäläinen [1] for a special choice of u₀ (hence h₀) and t → ∞.

Let us conclude the Introduction with an outline of the proofs. First, we reduce our problem to the case that u₀ is a constant; at a technical level this uses Mueller’s comparison principle [21]. And we use coupling on a few occasions: first, we describe a two-step coupling of {u_t(x)}_{t>0,x∈R} to the solution {v_t(x)}_{t>0,x∈R} of (1.2) (using the same space–time white noise ∂²W/(∂t∂x)) in the case that σ is bounded below uniformly on R. The latter quantity [i.e., {v_t(x)}_{t>0,x∈R}] turns out to be more amenable to moment analysis than {u_t(x)}_{t>0,x∈R}, and in this way we obtain the following a priori estimate, valid for every t > 0 and x ∈ R:

    log P{u_t(x) ≥ λ} ≳ −√κ·λ⁶   as λ → ∞.   (1.20)

Theorem 1.1 follows immediately from this and the Borel–Cantelli lemma, provided that we prove that if x and x′ are “O(1) distance apart,” then u_t(x) and u_t(x′) are “approximately independent.” A quantitative version of this statement follows from coupling {u_t(x)}_{t>0,x∈R} to the solution {w_t(x)}_{t>0,x∈R} of a random evolution equation that can be thought of as the “localization” of the original stochastic heat equation (1.2). The localized approximation {w_t(x)}_{t>0,x∈R} has the property that w_t(x) and w_t(x′) are (exactly) independent for “most” values of x and x′ that are O(1) distance apart. And this turns out to be adequate for our needs.

Theorem 1.2 requires establishing separately a lower and an upper bound on sup_{x∈[−R,R]} u_t(x). Both bounds rely heavily on the following quantitative improvement of (1.20): if σ is bounded, then

    log inf_{x∈R} P{u_t(x) ≥ λ} ≳ −√κ·λ²   as λ → ∞.   (1.21)

And, as it turns out, the preceding lower bound will perforce imply a corresponding upper estimate,

    log inf_{x∈R} P{u_t(x) ≥ λ} ≲ −√κ·λ²   as λ → ∞.   (1.22)

The derivation of the lower bound on sup_{x∈[−R,R]} u_t(x) follows closely the proof of Theorem 1.1, after (1.21) and (1.22) are established. Therefore, the remaining details will be omitted. The upper bound on sup_{x∈[−R,R]} u_t(x) requires only (1.22) and a well-known quantitative version of the Kolmogorov continuity theorem.

Our proof of Theorem 1.3 has a similar flavor to that of Theorem 1.1, for the lower bound, and Theorem 1.2, for the upper bound. We make strong use of the moment formulas of Bertini and Cancrini [2], Theorem 2.6. [This is why we are only able to study the linear equation in the case that σ(0) = 0.] Throughout this paper, we use the following abbreviation:

    u*_t(R) := sup_{x∈[−R,R]} u_t(x)   (R > 0).
(1.23)

We will also need the following elementary facts about the heat kernel:

    ‖p_s‖²_{L²(R)} = (4πκs)^{−1/2}   for every s > 0,   (1.24)

and

    ∫₀^t ‖p_s‖²_{L²(R)} ds = √(t/(πκ))   for all t ≥ 0.   (1.25)
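Both identities are easy to confirm numerically. The following check is ours (with a hand-rolled trapezoid rule so as not to depend on any particular quadrature routine):

```python
import numpy as np

def trap(y, x):
    # elementary trapezoid rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

kappa, s, t = 0.7, 0.3, 2.0

# (1.24): ||p_s||_{L^2(R)}^2 computed from the kernel itself
z = np.linspace(-40.0, 40.0, 400_001)
p_s = np.exp(-z**2 / (2.0 * kappa * s)) / np.sqrt(2.0 * np.pi * kappa * s)
lhs_124 = trap(p_s**2, z)
rhs_124 = (4.0 * np.pi * kappa * s) ** (-0.5)

# (1.25): integrate (4*pi*kappa*s)**(-1/2) over (0, t]; the piece below 1e-6
# is added in closed form to avoid the integrable singularity at s = 0
s_grid = np.linspace(1e-6, t, 2_000_001)
lhs_125 = trap((4.0 * np.pi * kappa * s_grid) ** (-0.5), s_grid) \
          + np.sqrt(1e-6 / (np.pi * kappa))
rhs_125 = np.sqrt(t / (np.pi * kappa))
```

The two numerical integrals agree with the closed forms to several digits, for any choice of κ, s, t > 0.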
We will tacitly write Lip_σ for the optimal Lipschitz constant of σ; that is,

    Lip_σ := sup_{−∞<x<x′<∞} |σ(x) − σ(x′)|/|x − x′|.

2. Mueller’s comparison principle and a reduction. Mueller’s comparison principle [21] is one of the cornerstones of the theory of stochastic PDEs. In its original form, Mueller’s comparison principle is stated for an equation that is similar to (1.2), but for two differences: (i) σ(z) := cz for some c > 0; and (ii) the variable x takes values in a compact interval such as [0,1]. The argument has since been adapted to Lipschitz-continuous functions σ of the type studied here. And in both cases, the proofs assume that the initial function u₀ has compact support. Below we state and prove a small variation of the preceding comparison principles that shows that Mueller’s theory continues to work when: (i) the variable x takes values in R; and (ii) the initial function u₀ is not necessarily compactly supported.

Theorem 2.1 (Mueller’s comparison principle). Let u₀^{(1)} and u₀^{(2)} denote two nonnegative bounded continuous functions on R such that u₀^{(1)}(x) ≥ u₀^{(2)}(x) for all x ∈ R. Let u_t^{(1)}(x), u_t^{(2)}(x) be solutions to (1.2) with respective initial functions u₀^{(1)} and u₀^{(2)}. Then,

    P{u_t^{(1)}(x) ≥ u_t^{(2)}(x) for all t > 0 and x ∈ R} = 1.   (2.1)

Proof. Because the solution to (1.2) is continuous in (t,x), it suffices to prove that

    P{u_t^{(1)}(x) ≥ u_t^{(2)}(x)} = 1   for all t > 0 and x ∈ R.   (2.2)

In the case that u₀^{(1)} and u₀^{(2)} both have bounded support, the preceding is proved almost exactly as in Theorem 3.1 of Mueller [21]. For general u₀^{(1)} and u₀^{(2)}, we proceed as follows. Let v₀: R → R₊ be a bounded and measurable initial function, and define a new initial function v₀^{[N]}: R → R₊ as

    v₀^{[N]}(x) := v₀(x)                 if |x| ≤ N,
                   v₀(N)·(−x + N + 1)    if N < x < N + 1,
                   v₀(−N)·(x + N + 1)    if −(N+1) < x < −N,
                   0                     if |x| ≥ N + 1.   (2.3)

Then, let v_t^{[N]}(x) be the solution to (1.2) with initial condition v₀^{[N]}.
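The truncation (2.3) is nothing but a linear taper of the initial profile down to zero over one unit on each side. In code (an illustrative helper of ours, not part of the paper):

```python
import numpy as np

def taper(v0, N):
    """The map v0 -> v0^{[N]} of (2.3): v0 on [-N, N], linearly interpolated
    down to 0 on N < |x| < N+1, and identically 0 for |x| >= N+1."""
    def v0N(x):
        x = np.asarray(x, dtype=float)
        out = np.where(np.abs(x) <= N, v0(x), 0.0)
        out = np.where((x > N) & (x < N + 1), v0(N) * (N + 1 - x), out)
        out = np.where((x > -(N + 1)) & (x < -N), v0(-N) * (x + N + 1), out)
        return out
    return v0N

# example: a bounded, strictly positive initial profile, cut off at N = 3
f = taper(lambda x: 2.0 + np.sin(x), 3)
```

Note that v₀^{[N]} inherits boundedness and nonnegativity from v₀, and agrees with v₀ on [−N, N], which is all the proof below uses.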
We claim that

    δ_t^{[N]}(x) := v_t(x) − v_t^{[N]}(x) → 0   in probability as N → ∞.   (2.4)

Let u_t^{(1),[N]} and u_t^{(2),[N]} denote the solutions to (1.2) with initial conditions u₀^{(1),[N]} and u₀^{(2),[N]}, respectively, where the latter are defined similarly as v₀^{[N]} above. Now, (2.4) has the desired result because it shows that u_t^{(1),[N]}(x) → u_t^{(1)}(x) and u_t^{(2),[N]}(x) → u_t^{(2)}(x) in probability as N → ∞. Since u_t^{(1),[N]}(x) ≥ u_t^{(2),[N]}(x) a.s. for all t > 0 and x ∈ R, (2.2) follows from taking limits.

In order to conclude we establish (2.4); in fact, we will prove that

    sup_{x∈R} sup_{t∈(0,T)} E(|δ_t^{[N]}(x)|²) = O(1/N)   as N → ∞   (2.5)

for all T > 0. By the mild formulation (1.3),

    δ_t^{[N]}(x) = (p_t ∗ δ₀^{[N]})(x) + ∫_{(0,t)×R} p_{t−s}(y−x){σ(v_s(y)) − σ(v_s^{[N]}(y))} W(ds dy).   (2.6)

Because (p_t ∗ δ₀^{[N]})(x) ≤ (sup_{z∈R} v₀(z))·∫_{|y|>N} p_t(y) dy, a direct estimate of the latter stochastic integral yields

    E(|δ_t^{[N]}(x)|²) ≤ const·t^{−1/2}e^{−N²/(2κt)} + const·Lip²_σ ∫₀^t ds ∫_{−∞}^{∞} dy p²_{t−s}(y−x) E(|δ_s^{[N]}(y)|²)   (2.7)
                     ≤ const·t^{−1/2}e^{−N²/(2κt)} + const·Lip²_σ e^{2βt} M_t^{[N]}(β) ∫₀^∞ e^{−2βr}‖p_r‖²_{L²(R)} dr,

where β > 0 is arbitrary and

    M_t^{[N]}(β) := sup_{s∈(0,t), y∈R} [e^{−2βs} E(|v_s(y) − v_s^{[N]}(y)|²)].   (2.8)

We multiply both sides of (2.7) by exp(−2βt) and take the supremum over all t ∈ (0,T), where T > 0 is fixed, to deduce that

    M_T^{[N]}(β) ≤ const·[ sup_{t∈(0,T)} {t^{−1/2}e^{−N²/(2κt)}} + β^{−1/2} M_T^{[N]}(β) ].   (2.9)

The quantity in sup_{t∈(0,T)}{···} is proportional to 1/N (with the constant of proportionality depending on T), and the implied constant does not depend on β. Therefore, it follows that if β were selected sufficiently large, then M_T^{[N]}(β) = O(1/N) as N → ∞ for that choice of β. This implies (2.4).
□

Next we apply Mueller’s comparison principle to make a helpful simplification to our problem. Because B₀ := inf_{x∈R} u₀(x) > 0 and B₁ := sup_{x∈R} u₀(x) < ∞, it follows from Theorem 2.1 that almost surely,

    u_t^{(B₀)}(x) ≤ u_t(x) ≤ u_t^{(B₁)}(x)   for all t > 0 and x ∈ R,   (2.10)

where u^{(B₀)} solves the stochastic heat equation (1.2) starting from the initial function u₀(x) :≡ B₀, and u^{(B₁)} solves (1.2) starting from u₀(x) :≡ B₁. This shows that it suffices to prove Theorems 1.1, 1.2 and 1.3 with u_t(x) everywhere replaced by u_t^{(B₀)}(x) and u_t^{(B₁)}(x). In other words, we can assume without loss of generality that u₀ is identically a constant. In order to simplify the notation, we will assume from now on that the mentioned constant is one. A quick inspection of the ensuing proofs reveals that this assumption is harmless. Thus, from now on, we consider in place of (1.2) the following parabolic stochastic PDE:

    (∂/∂t)u_t(x) = (κ/2)(∂²/∂x²)u_t(x) + σ(u_t(x))·(∂²/∂t∂x)W(t,x)   (t > 0, x ∈ R),   u₀(x) ≡ 1.   (2.11)

We can write its solution in mild form as follows:

    u_t(x) = 1 + ∫_{(0,t)×R} p_{t−s}(y−x) σ(u_s(y)) W(ds dy).   (2.12)

3. Tail probability estimates. In this section we derive the following corollary which estimates the tails of the distribution of u_t(x), where u_t(x) solves (2.11) and (2.12). In fact, Corollary 3.5, Propositions 3.7 and 3.8 below readily imply the following:

Corollary 3.1. If inf_{z∈R}[σ(z)] > 0, then for all t > 0,

    −√κ·λ⁶ ≲ log P{|u_t(x)| ≥ λ} ≲ −√κ·(log λ)^{3/2},   (3.1)

uniformly for x ∈ R and λ > e. And the constants in ≲ do not depend on κ. If (1.12) holds for some γ ∈ (0, 1/6), then for all t > 0,

    −κ^{1/(12γ)}·λ^{1/γ} ≲ log P{|u_t(x)| ≥ λ} ≲ −√κ·(log λ)^{3/2},   (3.2)

uniformly for all x ∈ R and λ > e. And the constants in ≲ do not depend on κ.

3.1. An upper-tail estimate. We begin by working toward the upper bound in Corollary 3.1.

Lemma 3.2.
Fix T > 0, and define a := T(Lip_σ ∨ 1)⁴/(2κ). Then, for all real numbers k ≥ 1,

    sup_{x∈R} sup_{t∈[0,T]} E(|u_t(x)|^k) ≤ C^k e^{ak³},   where C := 8(1 + |σ(0)|/(Lip_σ ∨ 1)).

Proof. We follow closely the proof of Theorem 2.1 of [15], but matters simplify considerably in the present, more specialized, setting. First of all, we note that because u₀ ≡ 1, the distribution of u_t(x) does not depend on x; this property was observed earlier by Dalang [11], for example.

Therefore, an application of Burkholder’s inequality, using the Carlen–Kree bound [6] on Davis’s optimal constant [12] in the Burkholder–Davis–Gundy inequality [3–5], and Minkowski’s inequality imply the following: for all t ≥ 0, β > 0 and x ∈ R,

    ‖u_t(x)‖_k ≤ 1 + ‖ ∫_{[0,t]×R} p_{t−s}(y−x) σ(u_s(y)) W(ds dy) ‖_k
               ≤ 1 + 2√k·e^{βt}·(|σ(0)| + Lip_σ sup_{r≥0}[e^{−βr}‖u_r(x)‖_k]) × (∫₀^∞ e^{−2βs}‖p_s‖²_{L²(R)} ds)^{1/2}.   (3.3)

See Foondun and Khoshnevisan [15], Lemma 3.3, for the details of the derivation of such an estimate. (Although Lemma 3.3 of [15] is stated for even integers k ≥ 2, a simple variation on the proof of that lemma implies the result for general k ≥ 1; see Conus and Khoshnevisan [9].) It follows from this and (1.24) that

    ψ(β,k) := sup_{t≥0}[e^{−βt}‖u_t(x)‖_k]   (3.4)

satisfies

    ψ(β,k) ≤ 1 + √k·(4κβ)^{−1/4}·(|σ(0)| + Lip_σ·ψ(β,k)).   (3.5)

If Lip_σ = 0, then clearly ψ(β,k) < ∞. If Lip_σ > 0, then ψ(β,k) < ∞ for all β > k²Lip⁴_σ/(4κ); therefore, the preceding proves that if β > k²Lip⁴_σ/(4κ),
In the latter case, σ is a constant function, and the machinery ofLemma 3.2 is not needed since u t ( x ) is a centered Gaussian process with avariance that can be estimated readily. (We remind the reader that the casewhere σ is a constant is covered by Theorem 1.2; see Section 6.)Next we describe a real-variable lemma that shows how to transform themoment estimate of Lemma 3.2 into subexponential moment estimates. Lemma 3.4. Suppose X is a nonnegative random variable that satisfiesthe following: there exist finite numbers a, C > and b > such that E( X k ) ≤ C k e ak b for all real numbers k ≥ . (3.7) Then, E exp { α (log + X ) b/ ( b − } < ∞ —for log + u := log( u ∨ e) —provided that < α < − b − ( ab ) / ( b − . (3.8)Lemmas 3.2, 3.4 and Chebyshev’s inequality together imply the followingresult. Corollary 3.5. Choose and fix T > , and define c := p / ≈ . .Then for all α < c √ κ / ( √ T (Lip σ ∨ , sup x ∈ R sup t ∈ [0 ,T ] E(e α (log + u t ( x )) / ) < ∞ . (3.9) Consequently, lim sup λ ↑∞ λ ) / sup x ∈ R sup t ∈ [0 ,T ] log P { u t ( x ) > λ } ≤ − c √ κ √ T (Lip σ ∨ . (3.10)We skip the derivation of Corollary 3.5 from Lemma 3.4, as it is immedi-ate. The result holds uniformly in t ∈ [0 , T ] and x ∈ R as the constants a and C in Lemma 3.2 are independent of t and x . Instead we verify Lemma 3.4. Proof of Lemma 3.4. Because (cid:20) log + (cid:18) XC (cid:19)(cid:21) b/ ( b − ≤ b/ ( b − · { (log + X ) b/ ( b − + (log + C ) b/ ( b − } , (3.11) N THE CHAOTIC CHARACTER OF THE STOCHASTIC HEAT EQUATION we can assume without loss of generality that C = 1; for otherwise we mayconsider X/C in place of X from here on.For all z > e, Chebyshev’s inequality implies thatP { e α (log + X ) b/ ( b − > z } ≤ e − max k g ( k ) , (3.12)where g ( k ) := k (cid:18) log zα (cid:19) ( b − /b − ak b . (3.13)One can check directly that max k g ( k ) = c log z , where c := 1 − b − α · ( ab ) / ( b − . 
Thus, it follows that P{exp[α(log₊X)^{b/(b−1)}] > z} = O(z^{−c}) as z → ∞. Consequently, E exp{α(log₊X)^{b/(b−1)}} < ∞ as long as c > 1; this is equivalent to the statement of the lemma. □

3.2. Lower-tail estimates. In this section we proceed to estimate the tail of the distribution of u_t(x) from below. We first consider the simplest case in which σ is bounded uniformly from below, away from zero.

Proposition 3.6. If ε₀ := inf_{z∈R} σ(z) > 0, then for all t > 0,

    inf_{x∈R} E(|u_t(x)|^{2k}) ≥ (√2 + o(1))·(µ_t k)^k   (as k → ∞),   (3.15)

where the “o(1)” term depends only on k, and

    µ_t := (2/e)·ε₀²·√(t/(πκ)).   (3.16)

Proof. Because the initial function in (2.11) is u₀(x) ≡ 1, it follows that the distribution of u_t(x) does not depend on x; see Dalang [11]. Therefore, the “inf” in the statement of the proposition is superfluous. Throughout, let us fix x ∈ R and t > 0. Now we may consider a mean-one martingale {M_τ}_{0≤τ≤t} defined as follows:

    M_τ := 1 + ∫_{(0,τ)×R} p_{t−s}(y−x) σ(u_s(y)) W(ds dy)   (0 ≤ τ ≤ t).   (3.17)

The quadratic variation of this martingale is

    ⟨M⟩_τ = ∫₀^τ ds ∫_{−∞}^{∞} dy p²_{t−s}(y−x) σ²(u_s(y))   (0 ≤ τ ≤ t).   (3.18)

Therefore, by Itô’s formula, for all positive integers k, and for every τ ∈ [0,t],

    M_τ^{2k} = 1 + 2k∫₀^τ M_s^{2k−1} dM_s + (2k choose 2)∫₀^τ M_s^{2k−2} d⟨M⟩_s
             = 1 + 2k∫₀^τ M_s^{2k−1} dM_s + (2k choose 2)∫₀^τ M_s^{2k−2} ds ∫_{−∞}^{∞} dy p²_{t−s}(y−x) σ²(u_s(y)).   (3.19)

By the assumption of the lemma, σ²(u_s(y)) ≥ ε₀² a.s. Therefore,

    M_τ^{2k} ≥ 1 + 2k∫₀^τ M_s^{2k−1} dM_s + (2k choose 2)·ε₀²·∫₀^τ M_s^{2k−2}·‖p_{t−s}‖²_{L²(R)} ds   (3.20)
             = 1 + 2k∫₀^τ M_s^{2k−1} dM_s + (2k choose 2)·ε₀²·∫₀^τ M_s^{2k−2}·(4πκ(t−s))^{−1/2} ds.
We set τ := t and then take expectations to find that

    E(M_t^{2k}) ≥ 1 + (2k choose 2)·ε₀²·∫₀^t E(M_s^{2k−2})·(4πκ(t−s))^{−1/2} ds   (3.21)
                = 1 + (2k choose 2)·ε₀²·∫₀^t E(M_s^{2k−2}) ν(t, ds),

where the measures {ν(t,·)}_{t>0} are defined as

    ν(t, ds) := 1_{(0,t)}(s)·(4πκ(t−s))^{−1/2} ds.   (3.22)

We may iterate the preceding in order to obtain

    E(M_t^{2k}) ≥ Σ_{l=0}^{k−1} a_{l,k}·ε₀^{2(l+1)} · ∫₀^t ν(t, ds₁) ∫₀^{s₁} ν(s₁, ds₂) ··· ∫₀^{s_l} ν(s_l, ds_{l+1}),   (3.23)

where

    a_{l,k} := Π_{j=0}^{l} (2k−2j choose 2)   for 0 ≤ l < k   (3.24)

and s₀ := t. The right-hand side of (3.23) is exactly equal to E(M_t^{2k}) in the case where σ(z) ≡ ε₀ for all z ∈ R. Indeed, the same computation as above works with identities all the way through. In other words,

    E(|u_t(x)|^{2k}) = E(M_t^{2k}) ≥ E(η_t(x)^{2k}),   (3.25)

where

    η_t(x) := 1 + ε₀·∫_{(0,t)×R} p_{t−s}(y−x) W(ds dy).   (3.26)

We define

    ζ_t(x) := ε₀·∫_{(0,t)×R} p_{t−s}(y−x) W(ds dy),   (3.27)

so that η_t(x) = 1 + ζ_t(x). Clearly,

    E(η_t(x)^{2k}) ≥ E(ζ_t(x)^{2k}).   (3.28)

Since ζ is a centered Gaussian process,

    E(ζ_t(x)^{2k}) = [E(ζ_t(x)²)]^k · (2k)!/(k!·2^k)   (3.29)

and

    E(ζ_t(x)²) = ε₀²·∫₀^t ds ∫_{−∞}^{∞} dy p²_{t−s}(y−x) = ε₀²·√(t/(πκ));   (3.30)

see (1.25). The proposition follows from these observations and Stirling’s formula. □

We can now use Proposition 3.6 to obtain a lower estimate on the tail of the distribution of u_t(x).

Proposition 3.7. If there exists ε₀ > 0 such that σ(x) ≥ ε₀ for all x ∈ R, then there exists a universal constant C ∈ (0,∞) such that for all t > 0,

    lim inf_{λ→∞} λ^{−6} inf_{x∈R} log P{|u_t(x)| ≥ λ} ≥ −C·(Lip_σ ∨ 1)⁴·√κ/(ε₀⁶·√t).   (3.31)

Proof. Choose and fix t > 0 and x ∈ R.
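As an aside, the Gaussian moment identity (3.29) used above, E(ζ^{2k}) = [E(ζ²)]^k·(2k)!/(k!·2^k), is easy to confirm numerically. The following check is ours; v plays the role of E(ζ_t(x)²):

```python
import numpy as np
from math import factorial

def trap(y, x):
    # elementary trapezoid rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

v = 1.3                                   # an arbitrary variance
x = np.linspace(-40.0, 40.0, 2_000_001)
density = np.exp(-x**2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)

errors = []
for k in range(1, 5):
    numeric = trap(x ** (2 * k) * density, x)          # E(zeta^{2k}), by quadrature
    claimed = v**k * factorial(2 * k) / (factorial(k) * 2**k)
    errors.append(abs(numeric / claimed - 1.0))
```

The ratio (2k)!/(k!·2^k) is the double factorial (2k−1)!!, which is where the (µ_t k)^k growth in (3.15) comes from after Stirling's formula.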
We apply the celebrated Paley–Zygmund inequality in the following form: for every integer k ≥ 1,

    E(|u_t(x)|^{2k}) ≤ E(|u_t(x)|^{2k}; |u_t(x)| ≥ ½‖u_t(x)‖_{2k}) + 2^{−2k}·E(|u_t(x)|^{2k})   (3.32)
                    ≤ √(E(|u_t(x)|^{4k})·P{|u_t(x)| ≥ ½‖u_t(x)‖_{2k}}) + 2^{−2k}·E(|u_t(x)|^{2k}).

This yields the following bound:

    P{|u_t(x)| ≥ ½‖u_t(x)‖_{2k}} ≥ (1 − 2^{−2k})²·[E(|u_t(x)|^{2k})]²/E(|u_t(x)|^{4k})   (3.33)
                                  ≥ exp(−c₁·t(Lip_σ ∨ 1)⁴k³(1 + o(1))/κ)

as k → ∞, where c₁ is a universal constant; see Lemma 3.2 and Proposition 3.6. Another application of Proposition 3.6 shows that ‖u_t(x)‖_{2k} ≥ (1 + o(1))·(µ_t k)^{1/2} as k → ∞, where µ_t is defined in (3.16). This implies, as k → ∞,

    P{|u_t(x)| ≥ ½(µ_t k)^{1/2}} ≥ exp[−c₁·t(Lip_σ ∨ 1)⁴k³(1 + o(1))/κ].   (3.34)

The proposition follows from this by setting k to be the smallest possible integer that satisfies ½(µ_t k)^{1/2} ≥ λ. □

Now, we study the tails of the distribution of u_t(x) under the conditions of part (2) of Theorem 1.1.

Proposition 3.8. Suppose σ(x) > 0 for all x ∈ R and (1.12) holds for some γ ∈ (0, 1/6). Then

    lim inf_{λ→∞} inf_{x∈R} log P{|u_t(x)| > λ}/λ^{1/γ} ≥ −C·((Lip_σ ∨ 1)⁸·κ/t)^{1/(12γ)},   (3.35)

where C ∈ (0,∞) is a constant that depends only on γ.

Proof. For every integer N ≥ 1, define

    σ^{(N)}(x) := σ(x)    if |x| ≤ N,
                  σ(−N)   if x < −N,
                  σ(N)    if x > N.   (3.36)

It can be checked directly that σ^{(N)} is a Lipschitz function, and that in fact: (i) Lip_{σ^{(N)}} ≤ Lip_σ; and (ii) inf_{z∈R} σ^{(N)}(z) > 0. Let u_t^{(N)}(x) denote the solution to (2.11), when σ is replaced by σ^{(N)}. We first establish the bound

    E(|u_t^{(N)}(x) − u_t(x)|²) = O(N^{−2})   as N → ∞.   (3.37)
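The O(N^{−2}) rate just claimed is driven by the elementary bound E(X²; |X| > N) ≤ E(X⁴)/N², obtained from the Cauchy–Schwarz and Chebyshev inequalities exactly as in the estimates below. A numerical illustration of that mechanism (ours; we take a lognormal X purely for concreteness, since any X with four finite moments would do):

```python
import numpy as np

def trap(y, x):
    # elementary trapezoid rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# X = exp(G) with G standard normal, so that E(X^4) = e^8 exactly.
g = np.linspace(-12.0, 12.0, 1_000_001)
phi = np.exp(-g**2 / 2.0) / np.sqrt(2.0 * np.pi)
EX4 = trap(np.exp(4.0 * g) * phi, g)

checks = []
for N in (10.0, 100.0, 1000.0):
    # E[X^2; X > N], by quadrature
    truncated = trap(np.where(np.exp(g) > N, np.exp(2.0 * g), 0.0) * phi, g)
    checks.append(truncated <= EX4 / N**2)
```

In the proof, X is u_s^{(N)}(y), whose fourth moment is bounded uniformly in N by Lemma 3.2; dividing by N² then yields (3.37) after Gronwall's inequality.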
Let us observe, using the mild representation of the solution to (2.11), that

    E(|u_t^{(N)}(x) − u_t(x)|²) ≤ 2(T₁ + T₂),   (3.38)

where

    T₁ := E(|∫_{(0,t)×R} p_{t−s}(y−x)·[σ^{(N)}(u_s^{(N)}(y)) − σ(u_s^{(N)}(y))] W(ds dy)|²)   (3.39)
        = ∫₀^t ds ∫_{−∞}^{∞} dy p²_{t−s}(y−x)·E(|σ^{(N)}(u_s^{(N)}(y)) − σ(u_s^{(N)}(y))|²)

and

    T₂ := E(|∫_{(0,t)×R} p_{t−s}(y−x)·[σ(u_s^{(N)}(y)) − σ(u_s(y))] W(ds dy)|²)
        = ∫₀^t ds ∫_{−∞}^{∞} dy p²_{t−s}(y−x)·E(|σ(u_s^{(N)}(y)) − σ(u_s(y))|²).

We can estimate the integrand of T₁ by the following:

    E(|σ^{(N)}(u_s^{(N)}(y)) − σ(u_s^{(N)}(y))|²)
    ≤ Lip²_σ·E(|N − u_s^{(N)}(y)|²; u_s^{(N)}(y) > N) + Lip²_σ·E(|−N − u_s^{(N)}(y)|²; u_s^{(N)}(y) < −N)   (3.40)
    ≤ Lip²_σ·E(|u_s^{(N)}(y)|²; |u_s^{(N)}(y)| > N).

We first apply the Cauchy–Schwarz inequality and then Chebyshev’s inequality (in this order) to conclude that

    E(|σ^{(N)}(u_s^{(N)}(y)) − σ(u_s^{(N)}(y))|²) ≤ (Lip²_σ/N²)·E(|u_s^{(N)}(y)|⁴) = O(N^{−2})   as N → ∞,   (3.41)

uniformly for all y ∈ R and s ∈ (0,t). Indeed, Lemma 3.2 ensures that E(|u_s^{(N)}(y)|⁴) is bounded in N, because lim_{N→∞} Lip_{σ^{(N)}} = Lip_σ. This implies readily that T₁ = O(N^{−2}) as N → ∞.

Next we turn to T₂; thanks to (1.24), the quantity T₂ can be estimated as follows:

    T₂ ≤ Lip²_σ·∫₀^t ds ∫_{−∞}^{∞} dy p²_{t−s}(y−x)·E(|u_s^{(N)}(y) − u_s(y)|²)   (3.42)
       ≤ const·∫₀^t M(s)/√(t−s) ds,

where

    M(s) := sup_{y∈R} E(|u_s^{(N)}(y) − u_s(y)|²)   (0 ≤ s ≤ t).
(3.43)Notice that the implied constant in (3.42) does not depend on t .We now combine our estimates for T and T to conclude that M ( s ) ≤ const N + const · Z s M ( r ) √ s − r d r (0 ≤ s ≤ t )(3.44) ≤ const · (cid:26)(cid:18)Z s [ M ( r )] / d r (cid:19) / + 1 N (cid:27) , D. CONUS, M. JOSEPH AND D. KHOSHNEVISAN thanks to H¨older’s inequality. We emphasize that the implied constant de-pends only on the Lipschitz constant of σ , the variable t and the diffusionconstant κ . Therefore,[ M ( s )] / ≤ const · (cid:26)Z s [ M ( r )] / d r + 1 N (cid:27) , (3.45)uniformly for s ∈ (0 , t ). Gronwall’s inequality then implies the bound M ( t ) = O ( N − ), valid as N → ∞ .Now we proceed with the proof of Proposition 3.8. For all N ≥ 1, thefunction σ ( N ) is bounded below. Let ε ( N ) be such that σ ( N ) ( x ) ≥ ε ( N )for all x ∈ R . Let D := D t := (4 t/ (e π κ )) / . According to the proof ofProposition 3.7, specifically (3.34) applied to u ( N ) , we haveP (cid:26) | u t ( x ) | ≥ D ε ( N ) k / (cid:27) ≥ exp (cid:20) − t (Lip σ ∨ κ k (1 + o (1)) (cid:21) (3.46) − P (cid:26) | u t ( x ) − u ( N ) t ( x ) | ≥ D ε ( N ) k / (cid:27) . Thanks to (1.12), we can write ε ( N ) ≫ (log N ) − (1 / − γ ) as N → ∞ , (3.47)using standard notation. Therefore, if we choose N := (cid:22) exp (cid:26) t (Lip σ ∨ k κ (cid:27)(cid:23) , (3.48)then we are led to the bound ε ( N ) ≫ (cid:18) t (Lip σ ∨ κ (cid:19) − (1 / − γ ) k γ − (1 / . (3.49)We can use Chebyshev’s inequality in order to estimate the second term onthe right-hand side of (3.46). In this way we obtain the following:P (cid:26) | u t ( x ) | ≥ ˜ D k γ (cid:27) ≥ exp (cid:26) − t (Lip σ ∨ κ k (1 + o (1)) (cid:27) (3.50) − C N k γ , where ˜ D := D (cid:26) κ t (Lip σ ∨ (cid:27) (1 / − γ , (3.51) N THE CHAOTIC CHARACTER OF THE STOCHASTIC HEAT EQUATION and C is a constant that depends only on t , Lip σ and κ . 
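As an aside, the Gronwall step used above in (3.44)–(3.45) can be sanity-checked numerically. The check below is illustrative only and not part of the proof: if f(s) ≤ A + C∫₀ˢ f(r) dr on [0, t], Gronwall's inequality gives f(t) ≤ A·exp(Ct), and the extremal choice f(s) = A·exp(Cs) turns the integral inequality into an equality (here A plays the role of the const/N² term; the numerical values are arbitrary).

```python
import math

# Gronwall's inequality, as used in (3.44)-(3.45): if f(s) <= A + C*int_0^s f(r) dr
# for 0 <= s <= t, then f(t) <= A*exp(C*t).  The extremal case f(s) = A*exp(C*s)
# saturates the inequality, which we verify by quadrature.
def gronwall_gap(C, A, t, steps=10000):
    dt = t / steps
    integral = 0.0
    for i in range(steps):
        integral += A * math.exp(C * i * dt) * dt   # left Riemann sum
    lhs = A * math.exp(C * t)          # Gronwall bound at time t
    rhs = A + C * integral             # right-hand side of the integral inequality
    return (lhs - rhs) / lhs           # relative gap; O(dt) from the quadrature

gap = gronwall_gap(C=2.0, A=0.01, t=1.0)
assert 0.0 <= gap < 1e-3
```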
For all sufficientlylarge integers N , 1 C N k γ ≤ exp (cid:20) − t (Lip σ ∨ κ k (1 + o (1)) (cid:21) , (3.52)and the proposition follows upon setting λ := ˜ Dk γ / (cid:3) 4. Localization. The next step in the proof of Theorem 1.1 requires us toshow that if x and x ′ are O (1) apart, then u t ( x ) and u t ( x ′ ) are approximatelyindependent. We show this by coupling u t ( x ) first to the solution of a local-ized version—see (4.1) below—of the stochastic heat equation (2.11). Andthen a second coupling to a suitably-chosen Picard-iteration approximationof the mentioned localized version.Consider the following parametric family of random evolution equations(indexed by the parameter β > U ( β ) t ( x ) = 1 + Z (0 ,t ) × [ x −√ βt,x + √ βt ] p t − s ( y − x ) σ ( U ( β ) s ( y )) W (d s d y )(4.1)for all x ∈ R and t ≥ Lemma 4.1. Choose and fix β > . Then, (4.1) has an almost surelyunique solution U ( β ) such that for all T > and k ≥ , sup β> sup t ∈ [0 ,T ] sup x ∈ R E( | U ( β ) t ( x ) | k ) ≤ C k e ak , (4.2) where a and C are defined in Lemma 3.2. Proof. A fixed-point argument shows that there exists a unique, up tomodification, solution to (4.1) subject to the condition that for all T > t ∈ [0 ,T ] sup x ∈ R E( | U ( β ) t ( x ) | k ) < ∞ for all k ≥ . (4.3)See Foondun and Khoshnevisan [15] for more details on the ideas of theproof; and the moment estimate follows as in the proof of Lemma 3.2. Weomit the numerous remaining details. (cid:3) Lemma 4.2. For every T > there exists a finite and positive constant C := C ( κ ) such that for sufficiently large β > and k ≥ , sup t ∈ [0 ,T ] sup x ∈ R E( | u t ( x ) − U ( β ) t ( x ) | k ) ≤ C k k k/ e F k ( k − β ) , (4.4) where F ∈ (0 , ∞ ) depends on ( T, κ ) but not on ( k, β ) . Proof. For all x ∈ R and t > 0, define V t ( x ) := 1 + Z (0 ,t ) × R p t − s ( y − x ) σ ( U ( β ) s ( y )) W (d s d y ) . (4.5) D. CONUS, M. JOSEPH AND D. 
KHOSHNEVISAN Then, k V t ( x ) − U ( β ) t ( x ) k k = (cid:13)(cid:13)(cid:13)(cid:13)Z (0 ,t ) ×{ y ∈ R : | y − x | > √ βt } p t − s ( y − x ) σ ( U ( β ) s ( y )) W (d s d y ) (cid:13)(cid:13)(cid:13)(cid:13) k (4.6) ≤ √ k (cid:13)(cid:13)(cid:13)(cid:13)Z t d s Z | y − x |≥√ βt d y p t − s ( y − x ) σ ( U ( β ) s ( y )) (cid:13)(cid:13)(cid:13)(cid:13) / k/ . The preceding hinges on an application of Burkholder’s inequality, using theCarlen–Kree bound [6] on Davis’s optimal constant [12] in the Burkholder–Davis–Gundy inequality [3–5]; see Foondun and Khoshnevisan [15] for thedetails of the derivation of such an estimate. Minkowski’s inequality tells usthen that the preceding quantity is at most2 s k Z t d s Z | y − x |≥√ βt d y p t − s ( y − x ) k σ ( U ( β ) s ( y )) k k/ (4.7) ≤ const · s k Z t d s Z | y − x |≥√ βt d y p t − s ( y − x )(1 + k U ( β ) s ( y ) k k ) . Equation (4.7) holds because the Lipschitz continuity of the function σ en-sures that it has at-most-linear growth: | σ ( x ) | ≤ const · (1 + | x | ) for all x ∈ R .The inequality in Lemma 4.1 implies that, uniformly over all t ∈ [0 , T ] and x ∈ R , k V t ( x ) − U ( β ) t ( x ) k k ≤ const · s kC e ak Z t d r Z | z |≥√ βt d z p r ( z )(4.8) ≤ const · k / e ak √ κ sZ d s √ s Z | w |≥√ β d w p s ( w ) , where we have used (1.4). Now a standard Gaussian tail estimate yields Z | w |≥√ β p s ( w ) d w ≤ − β/s κ , (4.9)and the latter quantity is at most 2 exp( − β/ κ ) whenever s ∈ (0 , x ∈ R k V t ( x ) − U ( β ) t ( x ) k k ≤ const · k / e ak √ κ e − β/ κ . 
(4.10)On the other hand, u t ( x ) − V t ( x ) = Z (0 ,t ) × R p t − s ( y − x )[ σ ( u s ( y )) − σ ( U ( β ) s ( y ))] W (d s d y ) , N THE CHAOTIC CHARACTER OF THE STOCHASTIC HEAT EQUATION whence k u t ( x ) − V t ( x ) k k ≤ √ k (cid:13)(cid:13)(cid:13)(cid:13)Z t d s Z ∞−∞ d y p t − s ( y − x )[ σ ( u s ( y )) − σ ( U ( β ) s ( y ))] (cid:13)(cid:13)(cid:13)(cid:13) / k/ (4.11) ≤ √ k Lip σ (cid:13)(cid:13)(cid:13)(cid:13)Z t d s Z ∞−∞ d y p t − s ( y − x )[ u s ( y ) − U ( β ) s ( y )] (cid:13)(cid:13)(cid:13)(cid:13) / k/ ≤ √ k Lip σ · sZ t d s Z ∞−∞ d y p t − s ( y − x ) k u s ( y ) − U ( β ) s ( y ) k k . Consequently, (4.10) implies that k u t ( x ) − U ( β ) t ( x ) k k ≤ √ k Lip σ · sZ t d s Z ∞−∞ d y p t − s ( y − x ) k u s ( y ) − U ( β ) s ( y ) k k (4.12) + const · k / e ak √ κ e − β/ (2 κ ) . Let us introduce a parameter δ > N k,δ ( Z ) := sup s ≥ sup y ∈ R [e − δs k Z s ( y ) k k ](4.13)for every space–time random field Z := { Z s ( y ) } s> ,y ∈ R . Then, we have N k,δ ( u − U ( β ) ) ≤ √ k Lip σ N k,δ ( u − U ( β ) ) · sZ ∞ e − δr k p r k L ( R ) d r (4.14) + const · k / e ak − β/ (2 κ ) √ κ . Thanks to (1.24), if δ := Dk for some sufficiently large constant D , thenthe square root is at most [4 √ k (Lip σ ∨ − , whence it follows that (for thatfixed choice of δ ) N k,δ ( u − U ( β ) ) ≤ const · k / e ak − const · ( β/ κ ) √ κ . (4.15)The lemma follows from this. (cid:3) D. CONUS, M. JOSEPH AND D. KHOSHNEVISAN Now let us define U ( β,n ) t ( x ) to be the n th Picard-iteration approximationto U ( β ) t ( x ). That is, U ( β, t ( x ) := 1, and for all ℓ ≥ U ( β,ℓ +1) t ( x )(4.16) := 1 + Z (0 ,t ) × [ x −√ βt,x + √ βt ] p t − s ( y − x ) σ ( U ( β,ℓ ) s ( y )) W (d s d y ) . Lemma 4.3. There exist positive and finite constants C ∗ and G —depend-ing on ( t, κ ) —such that uniformly for all k ∈ [2 , ∞ ) and β > e , sup x ∈ R E( | u t ( x ) − U ( β, [log β ]+1) t ( x ) | k ) ≤ C k ∗ k k/ e Gk β k . (4.17) Proof. 
The method of Foondun and Khoshnevisan [15] shows that if δ := D ′ k for a sufficiently-large D ′ , then N k,δ ( U ( β ) − U ( β,n ) ) ≤ const · e − n for all n ≥ k ∈ [2 , ∞ ) . (4.18)To elaborate, we follow the arguments in [15] of the proof of Theorem 2.1leading up to equation (4.6) but with v n there replaced by U ( β,n ) here. Wethen obtain k U ( β,n +1) − U ( β,n ) k k,θ ≤ const · s k Υ (cid:18) θk (cid:19) k U ( β,n ) − U ( β,n − k k,θ , (4.19)where k f k k,θ := n sup t ≥ sup x ∈ R e − θt E( | f ( t, x ) | k ) o /k (4.20)and Υ( θ ) := 12 π Z ∞−∞ d ξθ + ξ . (4.21)A quick computation reveals that by choosing θ := D ′′ k , for a large enoughconstant D ′′ > 0, we obtain k U ( β,n +1) − U ( β,n ) k k,θ ≤ e − k U ( β,n ) − U ( β,n − k k,θ . (4.22)We get (4.18) from this.Next we set n := [log β ] + 1 and apply the preceding together with Lem-ma 4.2 to finish the proof. (cid:3) Lemma 4.4. Choose and fix β, t > and n ≥ . Also fix x , x , . . . ∈ R such that | x i − x j | ≥ n √ βt whenever i = j . Then { U ( β,n ) t ( x j ) } j ∈ Z is acollection of i.i.d. random variables. Proof. The proof uses induction on the variable n , and proceeds byestablishing a little more. We will use the σ -algebras P ( A ) as defined in N THE CHAOTIC CHARACTER OF THE STOCHASTIC HEAT EQUATION Appendix A, where A ⊂ R ranges over all Lebesgue-measurable sets of finiteLebesgue measure Proposition A.1.Since U ( β, t ( x ) ≡ 1, the statement of the lemma holds tautologically for n = 0. In order to understand the following argument better let us concen-trate on the case n = 1. In that case, U ( β, t ( x ) = 1 + σ (1) · Z (0 ,t ) × [ x −√ βt,x + √ βt ] p t − s ( y − x ) W (d s d y ) . 
(4.23)In particular, if we define the process { ˜ U ( β, s ( t, x ) } ≤ s ≤ t by˜ U ( β, s ( t, x ) = 1 + σ (1) · Z (0 ,s ) × [ x −√ βt,x + √ βt ] p t − r ( y − x ) W (d r d y ) , (4.24)it follows from Proposition A.1 that U ( β, • ( t, x ) ∈ P ([ x − √ βt, x + √ βt ]).Hence, Corollary A.2 shows that { ˜ U ( β, • ( t, x j ) } j ∈ Z are independent pro-cesses. Taking s = t shows the independence part of the lemma for n = 1.It is not hard to see that the law of U ( β, t ( x ) is independent of x , namelya Gaussian distribution with mean and variance parameters that are inde-pendent of x . This concludes the proof for n = 1.Now, let us define processes { ˜ U ( β,n ) s ( t, x ) } ≤ s ≤ t by˜ U ( β,n ) s ( t, x )= 1 + σ (1) · Z (0 ,s ) × [ x −√ βt,x + √ βt ] p t − r ( y − x ) σ ( ˜ U ( β,n − r ( r, y ))(4.25) × W (d r d y ) . If we proved that ˜ U ( β,n ) • ( t, x ) ∈ P ([ x − n √ βt, x + n √ βt ]) for all x ∈ R and t > 0, it then would follow from (4.16) and Proposition A.1 that ˜ U ( β,n +1) • ( t, x ) ∈P ([ x − ( n + 1) √ βt, x + ( n + 1) √ βt ]) for all x ∈ R and t > 0. Since this factis true for n = 1, then we have proved that ˜ U ( β,n ) • ( t, x ) ∈ P ([ x − n √ βt, x + n √ βt ]) for all x ∈ R , t > n ∈ N . We then use this result, togetherwith Corollary A.2 to deduce that { ˜ U ( β,n ) • ( t, x j ) } j ∈ Z are independent pro-cesses. The independence part of the lemma follows upon taking s := t inthis discussion.Since the law of the noise W is invariant by translation, it is not hardto prove by induction that the law of U ( β,n ) t ( x ) is indeed independent of x .(Notice that this is always true when the initial condition is constant [11],Lemma 18.) This concludes the (inductive) proof of the lemma. (cid:3) 5. Proof of Theorem 1.1. We are ready to combine our efforts thus farin order to verify Theorem 1.1. D. CONUS, M. JOSEPH AND D. KHOSHNEVISAN Proof of Theorem 1.1. Parts (1) and (2) of Theorem 1.1 are provedsimilarly. 
Therefore, we present the details of the second part. For the proofof the first part, we can take γ := 1 / U β and U ( β,n ) are defined, respectively, in (4.1) and (4.16).For all x , . . . , x N ∈ R ,P n max ≤ j ≤ N | u t ( x j ) | < λ o ≤ P n max ≤ j ≤ N | U ( β, log β ) t ( x j ) | < λ o (5.1) + P n max ≤ j ≤ N | U ( β, log β ) t ( x j ) − u t ( x j ) | > λ o . (To be very precise, we need to write ( β, [log β ] + 1) in place of ( β, log β ).)The whole program of Section 3, that led to Proposition 3.8 can be carriedout for U ( β ) instead of u . Only minor changes (typically in Proposition 3.6)are needed. Then, (4.18) shows that the same moment estimates are validfor U ( β,n ) as for U ( β ) .We can follow along the proof of Proposition 3.8 and prove similarly theexistence of constants c , c > β for all sufficiently largevalues of β —so that for all x ∈ R and λ ≥ {| U ( β, log β ) t ( x ) | ≥ λ } ≥ c e − c λ /γ . (5.2)Suppose, in addition, that | x i − x j | ≥ √ βt log β whenever i = j . Then,Lemmas 4.3 and 4.4 together imply the following:P n max ≤ j ≤ N | u t ( x j ) | < λ o (5.3) ≤ (1 − c e − c · (2 λ ) /γ ) N + N C k ∗ k k/ e Gk β − k λ − k . The constants C ∗ and G may differ from the ones in Lemma 4.3. We nowselect the various parameters judiciously: choose λ := k , N := ⌈ k exp( c · (2 k ) /γ ) ⌉ and β := exp( ρk (1 − γ ) /γ ) for a large-enough positive constant ρ > · /γ c . In this way, (5.3) simplifies: for all sufficiently large integers k ,P n max ≤ j ≤ N | u t ( x j ) | < k o ≤ e − c k + exp (cid:20) c · (2 k ) /γ + log k + k log C ∗ − k log k Gk − ρk /γ (cid:21) (5.4) ≤ − c k . Now we choose the x i ’s as follows: set x := 0, and define iteratively x i +1 := x i + 2 p βt ([log β ] + 1)(5.5) = 2( i + 1) p βt ([log β ] + 1) for all i ≥ . 
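The mechanism driving (5.3)–(5.4) — a maximum over many essentially independent values, each with a Gaussian-type tail — is the familiar growth of the maximum of i.i.d. variables: for tails of the form exp(−λ²/2), the maximum of N copies concentrates near √(2 log N). A toy Monte Carlo check with standard normals (sample sizes and repetition counts are arbitrary; this illustrates the heuristic, not the theorem):

```python
import math
import random

# Maximum of N i.i.d. |N(0,1)| variables versus the sqrt(2*log N) prediction.
random.seed(1)

def mean_max_abs(n, reps=20):
    total = 0.0
    for _ in range(reps):
        total += max(abs(random.gauss(0.0, 1.0)) for _ in range(n))
    return total / reps

ratios = []
for n in (10**3, 10**4):
    ratios.append(mean_max_abs(n) / math.sqrt(2.0 * math.log(n)))
# the empirical maximum tracks the sqrt(2*log N) prediction within ~30%
assert all(0.7 <= r <= 1.3 for r in ratios)
```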
N THE CHAOTIC CHARACTER OF THE STOCHASTIC HEAT EQUATION The preceding implies that for all sufficiently large k ,P n sup x ∈ [0 , N +1) √ βt ([log β ]+1)] | u t ( x ) | < k o ≤ − c k , (5.6)whence P n sup | x |≤ N +1) √ βt ([log β ]+1) | u t ( x ) | < k o ≤ − c k (5.7)by symmetry.It is not hard to verify that, as k → ∞ ,2( N + 1) p βt ([log β ] + 1) = O (e ρk /γ ) . (5.8)Consequently, the Borel–Cantelli lemma, used in conjunction with a stan-dard monotonicity argument, implies that u ∗ t ( R ) ≥ const · (log( R ) /c ) γ a.s.for all sufficiently large values of R , where u ∗ t ( R ) is defined in (1.23). ByProposition 3.8, c = const · κ / γ . Therefore, the theorem follows. (cid:3) 6. Proof of Theorem 1.2. Next we prove Theorem 1.2. In order to ob-tain the upper bound, the proof requires an estimate of spatial continuity of x u t ( x ). However, matters are somewhat complicated by the fact that weneed a modulus of continuity estimate that holds simultaneously for every x ∈ [ − R, R ], uniformly for all large values of R . This will be overcome in a fewsteps. The first is a standard moment bound for the increments of the solu-tion; however, we need to pay close attention to the constants in the estimate. Lemma 6.1. Choose and fix some t > , and suppose σ is uniformlybounded. Then there exists a finite and positive constant A such that for allreal numbers k ≥ , sup −∞ Throughout, let S := sup x ∈ R | σ ( x ) | .If x, x ′ ∈ [ − R, R ] and t > u t ( x ) − u t ( x ′ ) = N t , where { N τ } τ ∈ (0 ,t ) is the continuous mean-one martingale de-scribed by N τ := Z (0 ,τ ) × R [ p t − s ( y − x ) − p t − s ( y − x ′ )] σ ( u s ( y )) W (d s d y )(6.2)for τ ∈ (0 , t ). The quadratic variation of { N τ } τ ∈ (0 ,t ) is estimated as follows: h N i τ ≤ S · Z τ d s Z ∞−∞ d y [ p s ( y − x ) − p s ( y − x ′ )] (6.3) ≤ e τ S · Z ∞ e − s d s Z ∞−∞ d y [ p s ( y − x ) − p s ( y − x ′ )] . D. CONUS, M. JOSEPH AND D. 
KHOSHNEVISAN For every s > y -integral using Plancherel’stheorem, and obtain π − R ∞−∞ (1 − cos( ξ | x − x ′ | )) exp( − κ sξ ) d ξ . Therefore,there exists a finite and positive constant a such that h N i τ ≤ e τ S π · Z ∞−∞ − cos( ξ | x − x ′ | )1 + κ ξ d ξ ≤ a κ | x − x ′ | , (6.4)uniformly for all τ ∈ (0 , t ); we emphasize that a depends only on S and t .The Carlen–Kree estimate [6] for the Davis [12] optimal constant in theBurkholder–Davis–Gundy inequality [3–5] implies the lemma. (cid:3) The second estimate turns the preceding moment bounds into an maxi-mal exponential estimate. We use a standard chaining argument to do this.However, once again we have to pay close attention to the parameter depen-dencies in the implied constants. Lemma 6.2. Choose and fix t > , and suppose σ is uniformly bounded.Then there exist a constant C ∈ (0 , ∞ ) such that E (cid:20) sup x,x ′ ∈ I : | x − x ′ |≤ δ exp (cid:18) κ | u t ( x ) − u t ( x ′ ) | Cδ (cid:19)(cid:21) ≤ δ , (6.5) uniformly for every δ ∈ (0 , and every interval I ⊂ [0 , ∞ ) of length at mostone. Proof. Recall [10], (39), page 11, the Kolmogorov continuity in the fol-lowing quantitative form: suppose there exist ν > γ > { ξ ( x ) } x ∈ R satisfies the following:E( | ξ ( x ) − ξ ( x ′ ) | ν ) ≤ C | x − x ′ | γ ;(6.6)we assume that the preceding holds for all x, x ′ ∈ R , and C ∈ (0 , ∞ ) isindependent of x and x ′ . Then, for every integer m ≥ (cid:16) sup x,x ′ ∈ I : | x − x ′ |≤ − m | ξ ( x ) − ξ ( x ′ ) | ν (cid:17) ≤ (cid:18) (2 − γ + ν ) /ν C /ν − − ( γ − /ν (cid:19) ν · − m ( γ − . (6.7)(Reference [10], (39), page 11, claims this with 2 − m ( γ − replaced with 2 − mγ on the right-hand side. 
But this is a typographical error; compare with [10],(38), page 11.)If δ ∈ (0 , m ≥ − m − ≤ δ ≤ − m ,whence it follows thatE (cid:16) sup x,x ′ ∈ I : | x − x ′ |≤ δ | ξ ( x ) − ξ ( x ′ ) | ν (cid:17) ≤ (cid:18) (2 − γ + ν ) /ν C /ν − − ( γ − /ν (cid:19) ν · − m ( γ − N THE CHAOTIC CHARACTER OF THE STOCHASTIC HEAT EQUATION (6.8) ≤ (cid:18) (2 − γ + ν ) /ν C /ν − − ( γ − /ν (cid:19) ν · (2 δ ) γ − . We apply the preceding with ξ ( x ) := u t ( x ), γ := ν/ k and C := ( Ak/ κ ) k ,where A is the constant of Lemma 6.1. It follows that there exists a positiveand finite constant A ∗ such that for all intervals I of length at most one, allintegers k ≥ 2, and every δ ∈ (0 , (cid:16) sup x,x ′ ∈ I : | x − x ′ |≤ δ | u t ( x ) − u t ( x ′ ) | k (cid:17) ≤ (cid:18) A ∗ k κ (cid:19) k δ k − . (6.9)Stirling’s formula tells us that there exists a finite constant B ∗ > A ∗ k ) k ≤ B k ∗ k ! for all integers k ≥ 1. Therefore, for all α, δ > (cid:20) sup x,x ′ ∈ I : | x − x ′ |≤ δ exp (cid:18) α | u t ( x ) − u t ( x ′ ) | δ (cid:19)(cid:21) ≤ δ ∞ X k =0 (cid:18) ζB ∗ κ (cid:19) k . (6.10)And this is at most two if α := κ / (2 B ∗ ). The result follows. (cid:3) Next we obtain another moments bound, this time for the solution ratherthan its increments. Lemma 6.3. Choose and fix t > , and suppose σ is uniformly bounded.Then for all integers k ≥ , sup x ∈ R E( | u t ( x ) | k ) ≤ (2 √ o (1))(˜ µ t k ) k ( as k → ∞ )(6.11) where the “ o (1) ” term depends only on k , and ˜ µ t := 8e · S r tπ κ . (6.12) Proof. Let us choose and fix a t > 0. Define S := sup x ∈ R | σ ( x ) | , andrecall the martingale { M τ } τ ∈ (0 ,t ) from (3.17). Itˆo’s formula (3.19) tells usthat a.s., for all τ ∈ (0 , t ), M kτ ≤ k Z τ M k − s d M s + (cid:18) k (cid:19) S Z τ M k − s (4 π κ ( t − s )) / d s. (6.13)[Compare with (3.20).] We can take expectations, iterate the preceding andargue as we did in the proof of Proposition 3.6. 
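The geometric-series device in (6.10) — factorial-type moment growth summing to a finite exponential moment, which Chebyshev's inequality then converts into an exponentially decaying tail — can be checked exactly on a toy distribution whose moments are E|X|^m = A^m·m!, namely an Exponential law with mean A. The numerical values below are arbitrary; this is a sketch of the mechanism, not of the SPDE estimate.

```python
import math

# If E|X|^m <= A^m * m! for all m, then for a < 1/A,
#   E exp(a|X|) = sum_m a^m E|X|^m / m! <= sum_m (a*A)^m = 1/(1 - a*A),
# and Chebyshev gives P{X > lam} <= exp(-a*lam) / (1 - a*A).
# For X ~ Exponential(mean A) the moment bound is an equality, so the
# geometric series reproduces the exact moment generating function.
A, a = 2.0, 0.25                       # requires a < 1/A = 0.5
mgf = 1.0 / (1.0 - a * A)              # exact E exp(a*X) for Exp(mean A)
series = sum((a * A) ** m for m in range(200))
assert abs(mgf - series) < 1e-9

lam = 10.0
tail_exact = math.exp(-lam / A)        # exact P{X > lam} for Exp(mean A)
tail_bound = mgf * math.exp(-a * lam)  # Chebyshev bound from the exp. moment
assert tail_exact <= tail_bound
```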
To summarize the end result,let us define η t ( x ) := 1 + S · Z (0 ,τ ) × R p t − s ( y − x ) W (d s d y ) (0 ≤ τ ≤ t )(6.14) D. CONUS, M. JOSEPH AND D. KHOSHNEVISAN and ζ t ( x ) := S · Z (0 ,τ ) × R p t − s ( y − x ) W (d s d y ) (0 ≤ τ ≤ t ) . (6.15)Then we have E[ M kt ] ≤ E[ η t ( x ) k ] ≤ k (1 + E[ ζ t ( x ) k ]), and similar com-putations as those in the proof of Proposition 3.6 yield the lemma. (cid:3) Next we turn the preceding moment bound into a sharp Gaussian tail-probability estimate. Lemma 6.4. Choose and fix a t > , and suppose that σ is uniformlybounded. Then there exist finite constants C > c > such that simultaneouslyfor all λ > and x ∈ R , c exp( − C √ κ λ ) ≤ P {| u t ( x ) | > λ } ≤ C exp( − c √ κ λ ) . (6.16) Proof. The lower bound is proved by an appeal to the Paley–Zygmundinequality, in the very same manner that Proposition 3.7 was established.However, we apply the improved inequality in Lemma 6.3 (in place of theresult of Lemma 3.2). As regards the upper bound, note that Lemma 6.3implies that there exists a positive and finite constant ˜ A such that for allintegers m ≥ 0, sup x ∈ R E( | u t ( x ) | m ) ≤ ( ˜ A/ √ κ ) m m !, thanks to the Stirlingformula. Thus,sup x ∈ R E exp( α | u t ( x ) | ) ≤ ∞ X m =0 (cid:18) α ˜ A √ κ (cid:19) m = 11 − α ˜ A κ − / < ∞ , (6.17)provided that α ∈ (0 , √ κ / ˜ A ). Notice that this has a different behavior than(6.10) in terms of κ . If we fix such an α , then we obtain from Chebyshev’sinequality the bound P { u t ( x ) > λ } ≤ (1 − α ˜ A κ − / ) − · exp( − αλ ), validsimultaneously for all x ∈ R and λ > 0. We write α := c √ κ to finish. (cid:3) We are finally ready to assemble the preceding estimates in order to es-tablish Theorem 1.2. Proof of Theorem 1.2. 
Consider the proof of Theorem 1.1: if wereplace the role of (3.2) by the bounds in Lemma 6.4 and choose λ := k , N := ⌈ k × exp( c √ κ k ) ⌉ and β := exp((Lip σ ∨ k / κ ) in the equivalent of(5.3) with the appropriate estimates, then we obtain the almost-sure boundlim inf R →∞ u ∗ t ( R ) / (log R ) / > const · κ − / , where “const” is independentof κ . It remains to derive a corresponding upper bound for the lim sup.Suppose R ≥ − R, R ] using alength-1 mesh with endpoints { x j } Rj =0 via x j := − R + j for 0 ≤ j ≤ R. (6.18)Then we write P { u ∗ t ( R ) > α (log R ) / } ≤ T + T , (6.19) N THE CHAOTIC CHARACTER OF THE STOCHASTIC HEAT EQUATION where T := P n max ≤ j ≤ R u t ( x j ) > α (log R ) / o , (6.20) T := P n max ≤ j ≤ R sup x ∈ ( x j ,x j +1 ) | u t ( x ) − u t ( x j ) | > α (log R ) / o . By Lemma 6.4, T ≤ R sup x ∈ R P { u t ( x ) > α (log R ) / } ≤ const R − c √ κ α . (6.21)Similarly, T ≤ R sup I P n sup x,x ′ ∈ I | u t ( x ) − u t ( x ′ ) | > α (log R ) / o , (6.22)where “sup I ” designates a supremum over all intervals I of length one.Chebyshev’s inequality and Lemma 6.2 together imply that T ≤ R − ( κ /C ) α +1 sup I E (cid:20) sup x,x ′ ∈ I exp (cid:18) κ C | u t ( x ) − u t ( x ′ ) | (cid:19)(cid:21) (6.23) ≤ const R − κ α ) /C . Let q := min( κ /C, c √ κ ) to find that ∞ X R =1 P { u ∗ t ( R ) > α (log R ) / } ≤ const · ∞ X R =1 R − qα +1 , (6.24)and this is finite provided that α > (2 /q ) / . By the Borel–Cantelli lemma,lim sup R →∞ : R ∈ Z u ∗ t ( R )(log R ) / ≤ (cid:18) q (cid:19) / < ∞ a.s.(6.25)Clearly, (8 /q ) / ≤ const · κ − / for all κ ≥ κ , for a constant depends onlyon κ . And we can remove the restriction “ R ∈ Z ” in the lim sup by astandard monotonicity argument; namely, we find—by considering in thefollowing R − ≤ X ≤ R —thatlim sup X →∞ u ∗ t ( X )(log X ) / ≤ lim sup R →∞ : R ∈ Z u ∗ t ( R )(log( R − / ≤ (cid:18) q (cid:19) / a.s.(6.26)This proves the theorem. (cid:3) 7. Proof of Theorem 1.3. 
This section is mainly concerned with the proofof Theorem 1.3. For that purpose, we start with tail-estimates. Lemma 7.1. Consider (2.11) with σ ( x ) := cx , where c > is fixed. Then, log P {| u t ( x ) | ≥ λ } ≍ −√ κ (log λ ) / as λ → ∞ . (7.1) D. CONUS, M. JOSEPH AND D. KHOSHNEVISAN Proof. Corollary 3.5 implies the upper bound (the boundedness of σ is not required in the results of Section 3.1).As for the lower bound, we know from [2], Theorem 2.6,e k ( k − t/ κ ≤ E( | u t ( x ) | k ) ≤ k ( k − t/ κ , (7.2)uniformly for all integers k ≥ x ∈ R . Now we follow the same methodas in the proof of Proposition 3.7, and use the Paley–Zygmund inequalityto obtain P (cid:26) | u t ( x ) | ≥ k u t ( x ) k k (cid:27) ≥ [E( | u t ( x ) | k )] | u t ( x ) | k )(7.3) ≥ C e − D k / κ for some nontrivial constants C and D that do not depend on x or k .We then obtain the following: uniformly for all x ∈ R and sufficiently-largeintegers k , P (cid:26) | u t ( x ) | ≥ C Dk / κ (cid:27) ≥ C e − D k / κ . (7.4)Let λ := ( C/ 2) exp { Dk / κ } , and apply a direct computation to deduce thelower bound. (cid:3) We are now ready to prove Theorem 1.3. Our proof is based on roughly-similar ideas to those used in the course of the proof of Theorem 1.1. How-ever, at a technical level, they are slightly different. Let us point out some ofthe essential differences: unlike what we did in the proof of Theorem 1.1, wenow do not choose the values of N , β and λ as functions of k , but rather asfunctions of R ; the order of the moments k will be fixed; and we will not sumon k , but rather sum on a discrete sequence of values of the parameter R .The details follow. Proof of Theorem 1.3. 
First we derive the lower bound by followingthe same method that was used in the proof of Theorem 1.1; see Section 5.But we now use Lemma 7.1 rather than Corollary 3.1.The results of Section 4 can be modified to apply to the parabolic Ander-son model, provided that we again apply Lemma 7.1 in place of Corollary 3.1.In this way we obtain the following, where the x i ’s are defined by (5.5) :consider the eventΛ := n max ≤ j ≤ N | u t ( x j ) | < Ξ o where Ξ := exp (cid:18) C (log R ) / κ / (cid:19) . (7.5) To be very precise, we once again need to write ( β, [log β ] + 1) in place of ( β, log β ).N THE CHAOTIC CHARACTER OF THE STOCHASTIC HEAT EQUATION Then, P(Λ) ≤ P n max ≤ j ≤ N | U ( β, log β ) t ( x j ) | < o + P {| u t ( x j ) − U ( β, log β ) t | > Ξ for some 1 ≤ j ≤ N } (7.6) ≤ (1 − P {| U ( β, log β ) t ( x j ) | ≥ } ) N + N β − k C k ∗ k k/ e Gk Ξ . Note that we do not yet have a lower bound on P {| U ( β, log β ) t ( x ) | ≥ λ } . How-ever, we haveP {| U ( β, log β ) t ( x j ) | ≥ }≥ P {| u t ( x j ) | ≥ } − P {| u t ( x j ) − U ( β, log β ) t ( x j ) | ≥ Ξ } (7.7) ≥ α R − α C / − N β − k C k ∗ k k/ e Gk Ξ , valid for some positive constants α and α . Now let us choose N := ⌈ R a ⌉ and β := R − a for a fixed a ∈ (0 , N and β and thelower bound in (7.7), the upper bound in (7.6) becomesP(Λ) ≤ (cid:18) − α R − α C / + C k ∗ k k/ e Gk R k (1 − a ) − a Ξ (cid:19) N + C k ∗ k k/ e Gk R k (1 − a ) − a Ξ . (7.8)Let us consider k large enough so that k (1 − a ) − a > 2. Notice that k willnot depend on R ; this is in contrast with what happened in the proof ofTheorem 1.1.We can choose the constant C to be small enough to satisfy α C / < a/ n sup x ∈ [0 ,R ] | u t ( x ) | < e C (log R ) / / κ / o ≤ exp( − α R a/ ) + const R . (7.9)The Borel–Cantelli lemma yields the lower bound of the theorem.We can now prove the upper bound. 
Our derivation is modeled after theproof of Theorem 1.2.First, we need a continuity estimate for the solution of (2.11) in the casethat σ ( x ) := cx . In accord with (7.2),E( | u t ( x ) − u t ( y ) | k ) ≤ (2 √ k ) k (cid:20)Z t d r k u ( r, k k Z R d z | p t − r ( x − z ) − p t − r ( y − z ) | (cid:21) k (7.10) ≤ (2 √ k ) k (cid:20)Z t d r /k e Dk / κ Z R d z | p t − r ( x − z ) − p t − r ( y − z ) | (cid:21) k D. CONUS, M. JOSEPH AND D. KHOSHNEVISAN for some constant D which depends on t . Consequently [see the derivationof (6.4)], E( | u t ( x ) − u t ( y ) | k ) ≤ C k (cid:18) | y − x | κ (cid:19) k exp (cid:18) Bk κ (cid:19) (7.11)for constants B, C ∈ (0 , ∞ ) that do not depend on k . We apply an argument,similar to one we used in the proof of Lemma 6.2, in order to deduce thatfor simultaneously all intervals I of length 1,E (cid:16) sup x,x ′ ∈ I : | x − x ′ |≤ | u t ( x ) − u t ( x ′ ) | k (cid:17) ≤ C k e C k / κ κ k (7.12)for constants C , C ∈ (0 , ∞ ) that do not depend on k or κ . Now, we followthe proof of Theorem 1.2 and partition [ − R, R ] into intervals of length 1.Let b > { u ∗ t ( R ) > b (log R ) / / κ / } ≤ T + T , (7.13)where T := P n max ≤ j ≤ R u t ( x j ) > e b (log R ) / / κ / o (7.14)and T := P n max ≤ j ≤ R sup x ∈ ( x j ,x j +1 ) | u t ( x ) − u t ( x j ) | > e b (log R ) / / κ / o . (7.15)[Compare with (6.19).]On one hand, Lemma 7.1 implies that T ≤ R · P { u t ( x j ) > e b (log R ) / / κ / } ≤ c RR c b / (7.16)for some constants c , c > 0. On the other hand (7.12) and Chebyshev’sinequality imply that T ≤ R P n sup x,x ′ ∈ I : | x − x ′ |≤ | u t ( x ) − u t ( x ′ ) | ≥ e b (log R ) / / κ / o (7.17) ≤ RC k e C k / κ κ k e kb (log R ) / / κ / . Now we choose k := ⌈ κ / (log R ) / ⌉ in order to obtain T ≤ const · R C − b where the constant depends on κ . With these choices of parameters wededuce from (7.16) and (7.17) that if b were sufficiently large, then ∞ X R =1 P { u ∗ t ( R ) > b (log R ) / / κ / } < ∞ . 
(7.18)

The Borel–Cantelli lemma and a monotonicity argument together complete the proof. □

APPENDIX A: WALSH STOCHASTIC INTEGRALS

Throughout this Appendix, (Ω, F, P) denotes (as is usual) the underlying probability space. We state and prove some elementary properties of Walsh stochastic integrals [23].

Let L_d denote the collection of all Borel-measurable sets in R^d that have finite d-dimensional Lebesgue measure. (We could work with Lebesgue-measurable sets, also.)

Let us follow Walsh [23] and define, for every t > 0 and A ∈ L_d, the random field

W_t(A) := ∫_{[0,t]×A} W(ds dy). (A.1)

The preceding stochastic integral is defined in the sense of Wiener. Let F_t(A) denote the sigma-algebra generated by all random variables of the form

{W_s(B) : s ∈ (0, t], B ∈ L_d, B ⊆ A}. (A.2)

We may assume without loss of generality that, for all A ∈ L_d, {F_t(A)}_{t>0} is a right-continuous, P-complete filtration (i.e., it satisfies the "usual hypotheses" of Dellacherie and Meyer [13]); otherwise, we augment {F_t(A)}_{t>0} in the usual way. Let

F_t := ⋁_{A ∈ L_d} F_t(A) (t > 0). (A.3)

Let P denote the collection of all processes that are predictable with respect to {F_t}_{t>0}. The elements of P are precisely the "predictable random fields" of Walsh [23].

For us, the elements of P are of interest because if Z ∈ P and

‖Z‖²_{L²(R_+ × R^d × Ω)} := E ∫₀^∞ dt ∫_{R^d} dx [Z_t(x)]² < ∞, (A.4)

then the Walsh stochastic integral I_t := ∫_{[0,t]×R} Z_s(y) W(ds dy) is defined properly and has good mathematical properties. Chief among those good properties are the following: {I_t}_{t>0} is a continuous mean-zero L²-martingale with quadratic variation ⟨I⟩_t := ∫₀^t ds ∫_{−∞}^∞ dy [Z_s(y)]².

Let us define P(A) to be the collection of all processes that are predictable with respect to {F_t(A)}_{t>0}. Clearly, P(A) ⊆ P for all A ∈ L_d.

Proposition A.1.
If Z ∈ P(A) for some A ∈ L_d and ‖Z‖_{L²(R_+ × R^d × Ω)} < ∞, then the martingale defined by J_t := ∫_{[0,t]×A} Z_s(y) W(ds dy) is in P(A).

Proof. It suffices to prove this for a random field Z that has the form

Z_s(y)(ω) = 1_{[a,b]}(s) X(ω) 1_A(y) (s > 0, y ∈ R, ω ∈ Ω), (A.5)

where 0 ≤ a < b and X is a bounded F_a(A)-measurable random variable. But in that case, J_t(ω) = X(ω) · ∫_{([0,t]∩[a,b])×A} W(ds dy), whence the result follows easily from the easy-to-check fact that the stochastic process defined by I_t := ∫_{([0,t]∩[a,b])×A} W(ds dy) is continuous (up to a modification). The latter assertion follows from the Kolmogorov continuity theorem; namely, we check first that E(|I_t − I_r|²) = |A| · |t − r|, where |A| denotes the Lebesgue measure of A. Then we use the fact, valid for all Gaussian random variables including I_t − I_r, that E(|I_t − I_r|^k) = const · {E(|I_t − I_r|²)}^{k/2} for all k ≥ 2. □

Proposition A.1 is a small variation on Walsh's original construction of his stochastic integrals. We need this minor variation for the following reason:

Corollary A.2. Let A^(1), ..., A^(N) be fixed and nonrandom disjoint elements of L_d. If Z^(1), ..., Z^(N) are, respectively, in P(A^(1)), ..., P(A^(N)) and ‖Z^(j)‖_{L²(R_+ × R^d × Ω)} < ∞ for all j = 1, ..., N, then J^(1), ..., J^(N) are independent processes, where

J^(j)_t := ∫_{[0,t]×A^(j)} Z_s(y) W(ds dy) (j = 1, ..., N, t > 0). (A.6)

Proof. Owing to Proposition A.1, it suffices to prove that if some sequence of random fields X^(1), ..., X^(N) satisfies X^(j) ∈ P(A^(j)) (j = 1, ..., N), then X^(1), ..., X^(N) are independent.
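A discrete simulation illustrates the property that Corollary A.2 formalizes (an illustration only; grid sizes and sample counts are arbitrary): on a grid, the white-noise sheet assigns each cell an independent N(0, cell-area) weight, W_t(A) sums the cells over [0, t] × A, and integrals over disjoint spatial sets A and B share no cells, hence are independent, with Var W_t(A) = t·|A|.

```python
import math
import random

# Discrete white-noise sheet on [0,1] x [0,1]: independent N(0, ds*dy)
# weights per cell.  W_t(A) sums the cells over [0,t] x A; disjoint A and B
# use disjoint cells, so the two integrals are uncorrelated Gaussians.
random.seed(7)
nt, ny, t = 20, 40, 1.0
ds, dy = t / nt, 1.0 / ny

def one_sheet():
    return [[random.gauss(0.0, math.sqrt(ds * dy)) for _ in range(ny)]
            for _ in range(nt)]

def integral(sheet, cols):
    # W_t(A) for A given by a set of spatial grid columns
    return sum(row[j] for row in sheet for j in cols)

A, B = range(0, 20), range(20, 40)     # |A| = |B| = 0.5, disjoint
vals = [(integral(s, A), integral(s, B))
        for s in (one_sheet() for _ in range(2000))]
n = len(vals)
var_a = sum(a * a for a, _ in vals) / n   # should be near t*|A| = 0.5
cov = sum(a * b for a, b in vals) / n     # should be near 0
assert abs(var_a - t * 0.5) < 0.1
assert abs(cov) < 0.1
```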
It suffices to prove this in the case that the X^(j)'s are simple predictable processes; that is, in the case that

X^(j)_s(y)(ω) = 1_{[a_j,b_j]}(s) Y_j(ω) 1_{A^(j)}(y), (A.7)

where 0 < a_j < b_j and Y_j is a bounded F_{a_j}(A^(j))-measurable random variable. In turn, we may restrict attention to Y_j's that have the form

Y_j(ω) := φ_j(∫_{[α_j,β_j]×A^(j)} W(ds dy)), (A.8)

where 0 < α_j < β_j ≤ a_j and φ_j : R → R is bounded and Borel measurable. But the assertion is now clear, since Y_1, ..., Y_N are manifestly independent. In order to see this we need only verify that the covariance between ∫_{[α_j,β_j]×A^(j)} W(ds dy) and ∫_{[α_k,β_k]×A^(k)} W(ds dy) is zero when j ≠ k; and this is a ready consequence of the fact that A^(j) ∩ A^(k) = ∅ when j ≠ k. □

APPENDIX B: SOME FINAL REMARKS

Recall that ‖U‖_k denotes the usual L^k(P)-norm of a random variable U for all k ∈ (0, ∞). According to Lemma 3.2,

lim sup_{t→∞} (1/t) log sup_{x∈R} E(|u_t(x)|^k) ≤ ak³ if k ≥ 2. (B.1)

This and Jensen's inequality together imply that

γ(ν) := lim sup_{t→∞} (1/t) log sup_{x∈R} E(|u_t(x)|^ν) < ∞ for all ν > 0. (B.2)

And Chebyshev's inequality implies that for all ν > 0 and x ∈ R,

P{u_t(x) ≥ e^{−qt}} ≤ exp(νt [q + (1/(νt)) log sup_{x∈R} E(|u_t(x)|^ν)]) (B.3)
= exp(νt [q + γ(ν)/ν + o(1)]) (t → ∞).

Because u_t(x) ≥ 0, it follows that ν ↦ γ(ν)/ν is nondecreasing on (0, ∞), whence

ℓ := lim_{ν↓0} γ(ν)/ν = inf_{ν>0} γ(ν)/ν exists and is finite. (B.4)

Therefore, in particular,

lim sup_{t→∞} (1/t) log sup_{x∈R} P{u_t(x) ≥ e^{−qt}} < 0 for every q ∈ (−∞, −ℓ). (B.5)

Now consider the case that σ(0) = 0, and recall that in that case u_0(x) ≥ 0 for all x ∈ R.
Mueller's comparison principle tells us that $u_t(x)\ge 0$ for all $t\ge 0$ and $x\in\mathbf{R}$, whence it follows that $\|u_t(x)\|_1 = \mathrm{E}[u_t(x)] = (p_t * u_0)(x)$ is bounded uniformly in $t$. This shows that $\gamma(1)=0$, and hence $\ell\le 0$. We have proved the following:

Proposition B.1. If $\sigma(0)=0$, then there exists $q\ge 0$ such that
$$\frac{1}{t}\log u_t(x) \le -q + o_{\mathrm{P}}(1) \qquad \text{as } t\to\infty, \text{ for every } x\in\mathbf{R}, \tag{B.6}$$
where $o_{\mathrm{P}}(1)$ is a term that converges to zero in probability as $t\to\infty$.

Bertini and Giacomin [5] have studied the case that $\sigma(x)=cx$ and have shown that, in that case, there exists a special choice of $u_0$ such that for all compactly supported probability densities $\psi\in C^\infty(\mathbf{R})$,
$$\frac{1}{t}\log\int_{-\infty}^{\infty} u_t(x)\,\psi(x)\,\mathrm{d}x = -\frac{c^4}{48\kappa} + o_{\mathrm{P}}(1) \qquad \text{as } t\to\infty. \tag{B.7}$$
Equation (B.7), and more generally Proposition B.1, shows that the typical behavior of the sample function of the solution to (1.2) is subexponential in time, as one might expect from the unforced linear heat equation. And yet it frequently is the case that $u_t(x)$ grows in time exponentially rapidly in $L^k(\mathrm{P})$ for every $k\ge 2$; this is intermittency. A common description of the typical behavior of $u_t(x)$ (in this and related models) is therefore that it decays exponentially rapidly with time. [Equation (B.7) is proof of this fact in one special case.] In other words, one might expect that typically $q>0$. We are not able to resolve this matter here, and therefore ask the following questions:

Open problem 1. Is $q>0$? Equivalently, is $\ell<0$?

Open problem 2. Can the $o_{\mathrm{P}}(1)$ in (B.6) be replaced by a term that converges almost surely to zero as $t\to\infty$?

REFERENCES

[1] Balázs, M., Quastel, J. and Seppäläinen, T. (2011). Fluctuation exponent of the KPZ/stochastic Burgers equation. J. Amer. Math. Soc.
[2] Bertini, L. and Cancrini, N. (1995). The stochastic heat equation: Feynman–Kac formula and intermittence. J. Stat. Phys.
[3] Burkholder, D. L. (1966). Martingale transforms. Ann. Math. Statist.
[4] Burkholder, D. L., Davis, B. J. and Gundy, R. F. (1972). Integral inequalities for convex functions of operators on martingales.
In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Univ. California, Berkeley, Calif., 1970/1971), Vol. II: Probability Theory.
[5] Burkholder, D. L. and Gundy, R. F. (1970). Extrapolation and interpolation of quasi-linear operators on martingales. Acta Math.
[6] Carlen, E. and Krée, P. (1991). $L^p$ estimates on iterated stochastic integrals. Ann. Probab.
[7] Carmona, R. A. and Molchanov, S. A. (1994). Parabolic Anderson problem and intermittency. Mem. Amer. Math. Soc. viii+125. MR1185878
[8] Chertkov, M., Falkovich, G., Kolokolov, I. and Lebedev, V. (1995). Statistics of a passive scalar advected by a large-scale two-dimensional velocity field: Analytic solution. Phys. Rev. E (3)
[9] Conus, D. and Khoshnevisan, D. (2010). Weak nonmild solutions to some SPDEs. Preprint. Available at http://arxiv.org/abs/1004.2744.
[10] Dalang, R., Khoshnevisan, D., Mueller, C., Nualart, D. and Xiao, Y. (2009). A Minicourse on Stochastic Partial Differential Equations. Lecture Notes in Math. Springer, Berlin. MR1500166
[11] Dalang, R. C. (1999). Extending the martingale measure stochastic integral with applications to spatially homogeneous s.p.d.e.'s. Electron. J. Probab. 29 pp. (electronic). MR1684157
[12] Davis, B. (1976). On the $L^p$ norms of stochastic integrals and other martingales. Duke Math. J.
[13] Dellacherie, C. and Meyer, P.-A. (1982). Probabilities and Potential. B. Theory of Martingales. North-Holland Mathematics Studies. North-Holland, Amsterdam. Translated from the French by J. P. Wilson. MR0745449
[14] Durrett, R. (1988). Lecture Notes on Particle Systems and Percolation. Wadsworth & Brooks/Cole, Pacific Grove, CA.
[15] Foondun, M. and Khoshnevisan, D. (2009). Intermittence and nonlinear parabolic stochastic partial differential equations. Electron. J. Probab.
[16] Foondun, M. and Khoshnevisan, D. (2010). On the global maximum of the solution to a stochastic heat equation with compact-support initial data. Ann. Inst. Henri Poincaré Probab. Stat.
[17] Gel'fand, I. M. and Vilenkin, N. Y. (1964). Generalized Functions. Vol. 4: Applications of Harmonic Analysis. Academic Press, New York. MR0173945
[18] Kardar, M., Parisi, G. and Zhang, Y.-C. (1986). Dynamic scaling of growing interfaces. Phys. Rev. Lett.
[19] Liggett, T. M. (1985). Interacting Particle Systems. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer, New York. MR0776231
[20] Molchanov, S. A. (1991). Ideas in the theory of random media. Acta Appl. Math.
[21] Mueller, C. (1991). On the support of solutions to the heat equation with noise. Stochastics Stochastics Rep.
[22] Ruelle, D. (1985). The onset of turbulence: A mathematical introduction. In Turbulence, Geophysical Flows, Predictability and Climate Dynamics ("Turbolenza e Predicibilità nella Fluidodinamica Geofisica e la Dinamica del Clima," Rendiconti della Scuola Internazionale di Fisica "Enrico Fermi"). North-Holland, Amsterdam.
[23] Walsh, J. B. (1986). An introduction to stochastic partial differential equations. In École d'Été de Probabilités de Saint-Flour, XIV—1984. Lecture Notes in Math. Springer, Berlin.
[24] Zel'dovich, Y. B., Molchanov, S. A., Ruzmaĭkin, A. A. and Sokoloff, D. D. (1988). Intermittency, diffusion, and generation in a nonstationary random medium. Sov. Sci. Rev. C Math. Phys.
[25] Zeldovich, Y. B., Molchanov, S. A., Ruzmaikin, A. A. and Sokolov, D. D. (1985). Intermittency of passive fields in random media. J. Exp. Theor. Phys. [Zhurnal Eksperimental'noi i Teoreticheskoi Fiziki.] (In Russian.)
[26] Zel'dovich, Y. B., Ruzmaĭkin, A. A. and Sokoloff, D. D. (1990). The Almighty Chance. World Scientific Lecture Notes in Physics. World Scientific, River Edge, NJ. MR1141627

D. Conus
Department of Mathematics
Lehigh University
Christmas–Saucon Hall
14 East Packer Avenue
Bethlehem, Pennsylvania 18015
USA
E-mail: [email protected]