Solving the 4NLS with white noise initial data
aa r X i v : . [ m a t h . A P ] F e b SOLVING THE 4NLS WITH WHITE NOISE INITIAL DATA
TADAHIRO OH, NIKOLAY TZVETKOV, AND YUZHAO WANG
Abstract.
We construct global-in-time singular dynamics for the (renormalized) cubicfourth order nonlinear Schr¨odinger equation on the circle, having the white noise measureas an invariant measure. For this purpose, we introduce the “random-resonant / nonlineardecomposition”, which allows us to single out the singular component of the solution.Unlike the classical McKean, Bourgain, Da Prato-Debussche type argument, this singularcomponent is nonlinear, consisting of arbitrarily high powers of the random initial data.We also employ a random gauge transform, leading to random Fourier restriction normspaces. For this problem, a contraction argument does not work and we instead establishconvergence of smooth approximating solutions by studying the partially iterated Duhamelformulation under the random gauge transform. We reduce the crucial nonlinear estimatesto boundedness properties of certain random multilinear functionals of the white noise.
Contents
1. Introduction 21.1. White noise on the circle and Hamiltonian partial differential equations 21.2. The cubic fourth order nonlinear Schr¨odinger equation and a soft formulationof the main result 31.3. Renormalized equation 41.4. Statements of the well-posedness results 51.5. Outline of the well-posedness argument 81.6. The α > α = 0 case 141.8. Organization of the paper 162. Notations and preliminaries 162.1. Deterministic tools 172.2. Probabilistic estimates 193. Local theory, Part 1: 0 < α ≤ N N α = 0 304.1. Partially iterated Duhamel formulation 304.2. Proof of Theorem 2: the α = 0 case 345. Global well-posedness and invariance of the white noise measure 385.1. Invariance of the white noise measure under the truncated 4NLS 38 Mathematics Subject Classification.
Key words and phrases. fourth order nonlinear Schr¨odinger equation; biharmonic nonlinear Schr¨odingerequation; white noise; invariant measure; random-resonant / nonlinear decomposition; random Fourier re-striction norm space. X s,b -space 53A.2. Key tail estimates 55A.3. Proof of Lemma 2.11 60References 621. Introduction
White noise on the circle and Hamiltonian partial differential equations.
A white noise on the circle T = R / (2 π Z ) is defined as the following infinite-dimensionalrandom variable: u ω ( x ) = X n ∈ Z g n ( ω ) e inx , (1.1)where { g n } n ∈ Z is a family of independent standard complex-valued Gaussian random vari-ables. On the other hand, using the representation of the L ( T )-norm in terms of theFourier coefficients, one may formally define the white noise measure induced by (1.1) as“ Z − e − k u k L T ) du ” . There are many important Hamiltonian PDEs such as the Korteweg-de Vries equation(KdV) and the nonlinear Schr¨odinger equations (NLS), under which the L -norm of a solu-tion is conserved. Therefore, for this type of equations, thanks to the general globalizationargument introduced by Bourgain in [5, 6], if one can solve the equation locally in time with data distributed according to (1.1), then one can almost surely extend the solutionsfor all times and the white noise would be an invariant measure of the resulting flow.It is easy to check that the white noise measure induced by (1.1) is supported in the spaceof distributions H s ( T ) \ H − ( T ), s < − . It is this low regularity which makes it verydifficult to solve locally in time a Hamiltonian PDE with the white noise initial data definedin (1.1). It is remarkable that this severe difficulty was overcome in the context of the KdVequation; see [59, 46, 47, 48, 49]. An important property of the KdV equation heavilyexploited in these works is the absence of resonant interactions when restricted to solutionswith a fixed zero Fourier mode (which is a conserved quantity for the KdV equation).As we shall see below, in the case of NLS-type equations, one may remove a part of theresonant interactions by a gauge transform. Even after such a transformation, however,there are remaining resonant interactions. The main goal of this work is to show how, byexploiting an intricate mixture of probabilistic and deterministic analysis, one may deal with By convention, we endow T with the normalized Lebesgue measure (2 π ) − dx . NLS WITH WHITE NOISE INITIAL DATA 3 such resonant interactions in the context of the cubic fourth order nonlinear Schr¨odingerequation on the circle with the white noise initial data (1.1). In our construction, the mainrandom part of the solutions will be a nonlinear object (in fact, of infinite degree), which isin sharp contrast with the simple random linear evolution appearing in the previous randomdata well-posedness results such as [6, 14]. This difference between our main result and[6, 14] is similar in spirit with the difference between “scattering” and “modified scattering”appearing in the analysis of dispersive PDEs posed on the Euclidean space. See Remarks 1.5and 4.3 below.We succeeded to make our method work only for an NLS equation with a sufficientlystrong dispersion. The generalization of our result to the more standard (in particularbecause of its integrability) NLS with the second order dispersion remains as a challengingopen problem.1.2.
The cubic fourth order nonlinear Schr¨odinger equation and a soft formu-lation of the main result.
In this work, we consider the cubic fourth order nonlinearSchr¨odinger equation (4NLS) on the circle T : ( i∂ t u = ∂ x u + | u | uu | t =0 = u , ( x, t ) ∈ T × R , (1.2)where u is complex-valued. The equation (1.2) is also called the biharmonic NLS and it wasstudied for instance in [35, 66] in the context of stability of solitons in magnetic materials.The L -norm is formally conserved by the dynamics of (1.2) and therefore, as discussedin the previous subsection, one may hope to construct global dynamics of (1.2) with datagiven by (1.1). This is a delicate problem for many reasons, the most basic one being thatit is not clear how to interpret the nonlinearity for such low-regularity solutions.Let us now briefly go over the deterministic well-posedness theory of (1.2). A simplefixed point argument via the Fourier restriction norm method introduced by Bourgain [4]yields local well-posedness of (1.2) in H s ( T ), s ≥
0. The main ingredient is the following L -Strichartz estimate: k u k L ( T × R ) . k u k X , , (1.3)where X s,b denotes the Fourier restriction norm space adapted to (1.2). See [54] for theproof of (1.3). Thanks to the L -conservation law, this local result immediately impliesglobal well-posedness of (1.2) in H s ( T ), s ≥
0. The equation (1.2) is known to be ill-posedin negative Sobolev spaces in the sense of non-existence of solutions [30, 56]. See also [55, 17]for ill-posedness by norm inflation. We point out that the ill-posedness results in [55, 17]also apply to the renormalized equation (1.6) below.Taking into account that we have a well-defined flow of (1.2) for smooth initial data,one may formulate the problem of solving (1.2) with the white noise initial data (1.1) asthat of studying the limiting behavior of smooth solutions to (1.2) with initial data givenby suitable regularizations of (1.1). We do not know the answer to this question in fullgenerality but we can answer it in a satisfactory manner for the natural regularizations bymollification.
T. OH, N. TZVETKOV, AND Y. WANG
Let { u ω ,m } ∞ m =1 be a sequence of random smooth functions defined as the regularizationof u ω in (1.1) by mollification, i.e. u ω ,m = u ω ∗ ρ m = X n ∈ Z c ρ m ( n ) g n ( ω ) e inx , (1.4)where c ρ m ( n ) = θ ( n/m ) with a bump function θ on R which equals one near the origin. De-note by u m the smooth solution to (1.2) with smooth initial data u m | t =0 = u ω ,m constructedin [54]. If we could solve the equation (1.2) with data given by (1.1), then the sequence { u m } ∞ m =1 would converge to the solution in an appropriate sense. The ill-posedness resultin [30, 56], however, implies that there is no hope to make { u m } ∞ m =1 converge in any Sobolevspace of negative regularity. It turns out that a “renormalization” of u m is convergent. Hereis a precise statement. Theorem 1.
The sequence n exp (cid:0) it k u m ( t ) k L (cid:1) u m ( t ) o ∞ m =1 converges almost surely in C ( R ; H s ( T )) , s < − . If we denote the limit by u , then we have u = X n ∈ Z g n ( t, ω ) e inx , where for every t ∈ R , { g n ( t, ω ) } n ∈ Z is a family of independent standard complex-valuedGaussian random variables. Furthermore, the limit u does not depend on the choice of thebump function θ . Theorem 1 is a satisfactory qualitative statement. It, however, does not explain in whichsense the obtained limit u satisfies a limit equation and it does not give any description ofthe obtained limit. This will be the purpose of the next two subsections. Remark 1.1.
It is worthwhile to note that in a similar discussion for the KdV equation,one can show convergence of the sequence of regularized solutions for any regularizationof the white noise initial data. This is because local well-posedness analysis in [36, 47] ispurely deterministic. Furthermore, renormalization is not necessary for the KdV equation.It would be of interest to investigate whether the result of Theorem 1 holds for a moregeneral class of regularizations of the white noise than those given by mollification (1.4).1.3.
Renormalized equation.
We now derive the equation satisfied by the limiting dis-tribution derived in Theorem 1. Given a global solution u ∈ C ( R ; L ( T )) to (1.2), we definethe following invertible gauge transform: u ( t ) ( u )( t ) := e it ffl | u ( x,t ) | dx u ( t ) , (1.5)where ffl f ( x ) dx := π ´ T f ( x ) dx denotes integration with respect to the normalizedLebesgue measure (2 π ) − dx on T . A direct computation with the mass conservation showsthat the gauged function, which we still denote by u , solves the following renormalized4NLS: i∂ t u = ∂ x u + (cid:16) | u | − ffl | u | dx (cid:17) u. (1.6) We also allow θ to be a sharp cutoff function [ − , ( n ), in which case the resulting u ω ,m corresponds tothe frequency truncated version of the white noise onto the frequencies {| n | ≤ m } . Here, we endow C ( R ; H s ( T )) with the compact-open topology in time. NLS WITH WHITE NOISE INITIAL DATA 5
Note that the gauge transform G is invertible. In particular, we can freely convert solutionsto (1.2) into solutions to (1.6) and vice versa as long as they are in C ( R ; L ( T )). Clearly,the definition (1.5) does not make sense outside L ( T ) (in space) and hence the original4NLS (1.2) and the renormalized 4NLS (1.6) are no longer equivalent outside L ( T ). As itturns out, the renormalized equation (1.6) is the one satisfied by the limiting distribution u appearing in the statement of Theorem 1.Just like the original 4NLS (1.2), the L -Strichartz estimate (1.3) along with the massconservation yields global well-posedness of the renormalized 4NLS (1.6) in L ( T ). Theimportant point is that the renormalization removes a certain singular component fromthe cubic nonlinearity; see (1.17) and (1.18) below. This allows us to study well-posednessof the renormalized 4NLS (1.6) in negative Sobolev spaces. In recent papers [38, 56], therenormalized 4NLS (1.6) was shown to be locally well-posed in H s ( T ) for s ≥ − andglobally well-posed for s > − . Note that the white noise in (1.1) lies almost surely in H s ( T ) \ H − ( T ), s < − , which is beyond the scope of the known deterministic well-posedness results in [38, 56]. For this reason, the main part of our analysis is devoted tothe probabilistic construction of local-in-time and global-in-time solutions to (1.6) with thewhite noise as initial data.Note that the renormalization of the nonlinearity in (1.6) is canonical in the Euclideanquantum field theory (see, for example, [62]). This formulation first appeared in the workof Bourgain [6] for studying the invariant Gibbs measure for the defocusing cubic NLSon T . See [19, 51, 30, 52] for more discussion in the context of the (usual) nonlinearSchr¨odinger equations. See also Remark 1.6 below.1.4. Statements of the well-posedness results.
In the following, we consider theCauchy problem for the renormalized 4NLS (1.6) with Gaussian random data in a moregeneral form than (1.1). For this purpose, we introduce a family of mean-zero Gaussianmeasures on periodic distributions on T . Given α ∈ R , consider the Gaussian measure µ α with formal density: dµ α = Z − α e − k u k Hα du = Z − α Y n ∈ Z e − h n i α | b u n | d b u n . (1.7)We can indeed view µ α as the induced probability measure under the map Ξ α given byΞ α : ω ∈ Ω Ξ α ( ω )( x ) := X n ∈ Z g n ( ω ) h n i α e inx ∈ D ′ ( T ) , (1.8)where h · i = (1+ |·| ) and { g n } n ∈ Z is a sequence of independent standard complex-valuedGaussian random variables on a probability space (Ω , F , P ). An easy computation showsthat Ξ α in (1.8) lies in H s ( T ) for s < α −
12 (1.9)but not in H α − ( T ) almost surely. In particular, µ α is a Gaussian measure on H s ( T ) andthe triplet ( H α , H s , µ α ) forms an abstract Wiener space, provided that ( α, s ) satisfies (1.9). To be precise, it is an equivalent formulation to the Wick renormalization in handling rough Gaussianinitial data. By convention, we set Var( g n ) = 1. T. OH, N. TZVETKOV, AND Y. WANG
For more details, see [26, 37]. When α = 0, the random Fourier series (1.8) reduces to thatin (1.1) and hence the Gaussian measure µ in (1.7) corresponds to the white noise measure.Our first step is to construct local-in-time dynamics for the renormalized 4NLS (1.6)almost surely with respect to the random initial data of the form: u ω ( x ) = X n ∈ Z g n ( ω ) h n i α e inx (1.10)with α ≥
0. For this purpose, we first introduce the following nonlinear operator Z (ofinfinite degree) by setting Z ( f )( t ) := X n ∈ Z e i ( nx − n t ) ∞ X k =0 ( it ) k k ! | b f ( n ) | k b f ( n ) , (1.11)a priori defined for smooth functions f = P n ∈ Z b f ( n ) e inx on T . The following theoremaddresses almost sure local well-posedness of the renormalized 4NLS (1.6) for α ≥ Theorem 2 (Almost sure local well-posedness) . Let α ≥ . Then, the renormalized cubic4NLS (1.6) on T is locally well-posed almost surely with respect to the Gaussian measure µ α . More precisely, there exist C, c > such that for each sufficiently small δ > , thereexists a set Ω δ ⊂ Ω with the following properties: (i) P (Ω cδ ) = µ α ◦ Ξ α (Ω cδ ) < Ce − δc , where µ α and Ξ α are as in (1.7) and (1.8) . (ii) For each ω ∈ Ω δ , there exists a ( unique ) solution u to (1.6) with u | t =0 = u ω givenby the random Fourier series (1.10) in the class: z ω + C ([ − δ, δ ]; L ( T )) ⊂ C ([ − δ, δ ]; H s ( T )) , (1.12) where z ω = Z ( u ω ) is as in (1.11) and (i) s = 0 if α > and (ii) s = α − − ε forany ε > , if α ≤ . In the next subsections, we discuss an outline of the proof of Theorem 2.
Remark 1.2.
When α > , the random initial data u ω in (1.10) belongs almost surely to L ( T ) and hence the deterministic uniqueness statements apply. In particular, when α > ,one can easily modify the argument in [29] to conclude that the solution to (1.6) is almostsurely unconditionally unique, namely, uniqueness holds in the entire C ([ − δ, δ ]; H ( T )).For < α ≤ , the solution is almost surely conditionally unique. Namely, uniquenessholds in an auxiliary function space (the X ,b -space for some b > in this case) containedin C ([ − δ, δ ]; L ( T )). As for the uniqueness statements for 0 ≤ α ≤ , see Remark 1.10 for0 < α ≤ and Remark 4.4 for α = 0.Theorem 2 with α = 0 shows that the renormalized 4NLS (1.6) is almost surely locallywell-posed with the white noise in (1.1) as initial data. In constructing almost sure global-in-time dynamics, we adapt Bourgain’s invariant measure argument [5, 6] to our setting.More precisely, we use invariance of the white noise measure under the finite-dimensionalapproximation of the 4NLS flow to obtain a uniform control on the solutions, and thenapply a PDE approximation argument to extend the local solutions to (1.6) obtained fromTheorem 2 to global ones. As a byproduct, we also obtain invariance of the white noiseunder the resulting global flow of the renormalized 4NLS (1.6). NLS WITH WHITE NOISE INITIAL DATA 7
Theorem 3 (Almost sure global well-posedness and invariance of the white noise) . Let α = 0 . Then, the renormalized 4NLS (1.6) on T is globally well-posed almost surely withthe random initial data u ω given by (1.10) . More precisely, for almost every ω ∈ Ω , thereexists a unique solution u to (1.6) with u | t =0 = u ω , satisfying u ∈ z ω + C ( R ; L ( T )) ⊂ C ( R ; H − − ε ( T )) for any ε > , where z ω = Z ( u ω ) . Furthermore, the white noise measure µ is invariantunder the flow. Remark 1.3.
When α > , the deterministic global well-posedness [56] of the renormalized4NLS (1.6) in H s ( T ), s > − , implies almost sure global well-posedness of (1.6) with therandom initial data u ω in (1.10) since the random initial data u ω almost surely belongs to H s ( T ) for some s > − .The proof of Theorem 3 heavily depends on (formal) invariance of the white noise measureand hence is not applicable for the case α ∈ (0 , ]. In [19], Colliander and the first authoradapted Bourgain’s high-low decomposition method [8] to prove almost sure global well-posedness of the renormalized NLS (with the second order dispersion) with the randominitial data of the form (1.10) below L ( T ) (without relying on any invariant measure). Thesame approach is expected to yield almost sure global well-posedness of the renormalized4NLS (1.6) for some range of α ∈ (0 , ]. We do not pursue this analysis here. Remark 1.4.
The solution u constructed in Theorems 2 and 3 has a structure: u = random nonlinear term + smoother term . See (1.37). This is quite different from the standard probabilistic well-posedness results asin [6, 14], where a solution u has the structure: u = random linear term + smoother term . (1.13)In the field of stochastic PDEs, a well-posedness argument based on the decomposi-tion (1.13) is usually referred to as the Da Prato-Debussche trick. When the decompo-sition (1.13) is not sufficient, one may try to write a solution as the sum of finitely many stochastic terms plus a smoother remainder. See for example [27, 31].In the context of nonlinear dispersive PDEs, there are recent works [2, 50], where asolution theory was built, based on the decomposition of a solution as the sum of finitelymany stochastic terms plus a smoother remainder. A remarkable new feature of the de-composition used in Theorems 2 and 3 is that the series expansion (1.11) for Z ( u ω ) consistsnot only of the free solution (i.e. k = 0 in (1.11)) but also of infinitely many higher ordercorrections terms k ≥
1. As a consequence, z ω = Z ( u ω ) depends on arbitrarily high powersof Gaussian random variables and hence it does not belong to Wiener chaoses H ≤ k , definedin (2.10), of any finite order. See also Remark 1.11. Remark 1.5.
A decomposition such as (1.13) is not only useful in establishing well-posedness of a given equation, but also provides a finer regularity description of a solutionthus obtained. For example, The decomposition (1.13) states that in the high frequencyregime (i.e. at small spatial scales on the physical side), the dynamics is essentially governedby that of the random linear solution. See also Remark 1.11 (ii). In [9, Page 62], Bourgain
T. OH, N. TZVETKOV, AND Y. WANG made an “analogy” of the decomposition (1.13) to scattering (i.e. a nonlinear solution be-having like a linear solution asymptotically as t → ±∞ ) by saying “This property [namelythe decomposition (1.13)] reminds of “scattering” occurring in certain dispersive models”in the sense that in both the decomposition (1.13) and scattering, the dominant part ofdynamics is given by the linear dynamics.In our solution theory, we have the decomposition u = z ω + smoother term,where z ω = Z ( u ω ). Namely, the dominant part is nonlinear (with an explicit structure).In this context, one may wish to say that the results of Theorems 2 and 3 remind of modified scattering occurring in certain dispersive models [57, 34, 33], where the asymptoticdominant dynamics is given not by a linear dynamics but by a certain nonlinear dynamics.See Remark 4.3 below for more details on this analogy. Remark 1.6.
Instead of the renormalized 4NLS (1.6), one may work with the Wick renor-malization to study the same problem. Disadvantage for this approach is that there is noequation for the limiting dynamics. The limit u of smooth approximating solutions wouldformally “satisfy” i∂ t u = ∂ x u + | u | u − ∞ · u. (1.14)This is in sharp contrast with the case of the renormalized 4NLS (1.6), where the renor-malized nonlinearity has a well defined meaning as a cubic operator, defined a priori onsmooth functions. See (1.17) and (1.18). Lastly, we point out that if the Gaussian measure µ α in (1.7) were invariant, then one could show that the renormalized 4NLS (1.6) is equiv-alent to the Wick ordered 4NLS (1.14) in a suitable limiting sense, provided that α > .See Section 3 in [51]. Unfortunately, such invariance is true only for α = 0.1.5. Outline of the well-posedness argument.
When α > , it follows from (1.9) thatour random initial data u ω defined in (1.10) belongs to L ( T ) almost surely. Hence, theaforementioned deterministic global well-posedness of (1.6) in L ( T ) implies Theorem 2 inthis case. Therefore, we focus on the case 0 ≤ α ≤ in the following.When 0 ≤ α ≤ , the random initial data u ω in (1.10) lies strictly in negative Sobolevspaces almost surely. In view of the failure of the local uniform continuity of the solutionmap in these spaces (see [19, 54]), it is non-trivial to construct solutions to (1.6) in negativeSobolev spaces since a straightforward contraction argument fails in this regime. For α > ,the random initial data u ω in (1.10) almost surely belongs to H s ( T ) for some s > − andhence the global well-posedness in [56] based on a more robust energy method is applicableto conclude Theorem 2. In the following, however, we present a uniform approach toconstruct local-in-time solutions in a probabilistic manner for 0 ≤ α ≤ by making use ofrandomness of the initial data u ω in (1.10).By writing (1.6) in the Duhamel formulation, we have u ( t ) = S ( t ) u ω − i ˆ t S ( t − t ′ ) N ( u )( t ′ ) dt ′ , (1.15)where S ( t ) = e − it∂ x denotes the linear propagator and N ( u ) = (cid:18) | u | − | u | dx (cid:19) u. (1.16) NLS WITH WHITE NOISE INITIAL DATA 9
Next, we make an important decomposition of the nonlinearity N ( u ) into resonant andnon-resonant parts. Namely, define trilinear operators N and N by setting N ( u , u , u )( x, t ) := X n ∈ Z X Γ( n ) b u ( n , t ) b u ( n , t ) b u ( n , t ) e i ( n − n + n ) x , (1.17) N ( u , u , u )( x, t ) := − X n b u ( n, t ) b u ( n, t ) b u ( n, t ) e inx , (1.18)where Γ( n ) denotes the hyperplane:Γ( n ) := (cid:8) ( n , n , n ) ∈ Z : n = n − n + n and n , n = n (cid:9) . (1.19)When all the arguments coincide, we simply write N k ( u ) = N k ( u, u, u ), k = 1 ,
2. The term N ( u ) denotes the non-resonant part of the renormalized nonlinearity N ( u ), while N ( u )denotes the resonant part. Then, the renormalized nonlinearity N ( u ) can be written as N ( u ) = N ( u ) + N ( u ) . Let us first go over the basic idea of the probabilistic local well-posedness, as developedfor instance in [6, 14, 64, 19, 42]. See also [39]. This argument is based on the followingfirst order expansion: u = z ω + v, (1.20)where z ω denotes the random linear solution defined by z ω ( t ) := S ( t ) u ω . (1.21)By rewriting (1.15) as a fixed point problem for the residual term v := u − z ω , we obtainthe following perturbed renormalized 4NLS: v ( t ) = − i ˆ t S ( t − t ′ ) N ( v + z ω )( t ′ ) dt ′ . (1.22)Then, the main aim is to solve this fixed point problem for v in L ( T ), where the unper-turbed equation (1.6) is deterministically well-posed by a simple contraction argument. Inparticular, it is crucial to make use of probabilistic tools (for example, see Subsection 2.2)and show that the perturbation N ( v + z ω ) − N ( v ) is smoother than the random linearsolution z ω and lies in L ( T ) for each t . When α > , this can be indeed achieved and wecan show that for each small δ >
0, there exists Ω δ ⊂ Ω with P (Ω cδ ) < Ce − δc such that foreach ω ∈ Ω δ , there exists a solution u = z ω + v to the renormalized 4NLS (1.6) in the class: z ω + C ([ − δ, δ ]; L ( T )) ⊂ C ([ − δ, δ ]; H s ( T )) , for s < α − . The most singular contribution on the right-hand side of (1.22) is given by z ω ( t ) := − i ˆ t S ( t − t ′ ) N ( z ω )( t ′ ) dt ′ = it X n ∈ Z | g n | g n h n i α e i ( nx − n t ) (1.23) Strictly speaking, we need to consider the fixed point problem (1.22) in some appropriate function space X δ ⊂ C ([ − δ, δ ]; L ( T )). For simplicity, however, we only discuss the spatial regularity and suppress its timedependence. A similar comment applies in the following. In particular, in discussing spatial regularity of aspace-time distribution, we may suppress its time dependence. where N is as in (1.18), denoting the resonant interaction. This resonant cubic term isresponsible for the restriction α > . It is easy to see that z ω ( t ) lies in H s ( T ) \ H α − ( T )almost surely for s < α − . In particular, when α > , the L -deterministic well-posedness theory (via a contractionargument) becomes available for solving the perturbed equation (1.22). As mentionedabove, the case α > is also covered by the deterministic well-posedness in [38, 56] (basedon a more robust energy method) and thus our main goal in the following is to treat lowervalues of α . Remark 1.7.
This argument is basically the Da Prato-Debussche trick in the context ofstochastic PDEs [20, 21], where the random linear solution is replaced by the solution to alinear stochastic PDE. See [32] for a concise discussion on the Da Prato-Debussche trick. Itis worthwhile to point out that the paper [39, 6] by McKean and Bourgain precede [20, 21].According to the discussion above, the basic probabilistic argument based on the firstorder expansion (1.20) does not work for our problem when α ≤ because the second orderterm z ω does not belong to L ( T ) almost surely if α ≤ . See also Case (b) in Subsection 4.2of [19]. This shows that we can not solve the fixed point problem (1.22) in L ( T ) when α ≤ .A natural next step would be to consider the following second order expansion: u = z ω + z ω + v for a solution u to (1.6) and study the equation satisfied by the residual term v := u − z ω − z ω : ( i∂ t v = ∂ x v + (cid:2) N ( v + z ω + z ω ) − N ( z ω ) (cid:3) v | t =0 = 0 . Namely, we consider the following fixed point problem: v ( t ) = − i ˆ t S ( t − t ′ ) (cid:2) N ( v + z ω + z ω ) − N ( z ω ) (cid:3) ( t ′ ) dt ′ . (1.24)Note that the worst contribution z ω in the first step coming from the resonant interac-tion N ( z ω ) is now eliminated. We can then perform case-by-case nonlinear analysis on N k ( u , u , u ), k = 1 ,
2, in the spirit of [6, 19], where each u j can be z ω , z ω , or the smootherunknown function v except for the case u = u = u = z ω with k = 2. This allows us toshow that the fixed point problem (1.24) for the residual term v is almost surely locallywell-posed in L ( T ), provided that α > . Recalling that z ω , z ω ∈ C ( R ; H s ( T )) for s satisfying (1.9), we obtain a solution u = z + z + v to the renormalized 4NLS (1.6) in theclass: z ω + z ω + C ([ − δ, δ ]; L ( T )) ⊂ C ([ − δ, δ ]; H s ( T ))almost surely, for s < α − . Namely, z ω in (1.23) is trilinear in the random initial data. NLS WITH WHITE NOISE INITIAL DATA 11
In this second step, the restriction α > comes from the following resonant quinticterm in (1.24): z ω ( t ) : = − i X j ,j ,j ∈ N − j + j + j =5 ˆ t S ( t − t ′ ) N ( z ωj , z ωj , z ωj )( t ′ ) dt ′ = − t X n ∈ Z | g n | g n h n i α e i ( nx − n t ) . (1.25)Given t ∈ R , it is easy to see that z ω ( t ) lies in H s ( T ) \ H α − ( T ) almost surely for s < α − . In particular, z ω ( t ) does not lie in L ( T ) almost surely if α ≤ .One can repeat this process in an obvious manner. Namely, consider the following thirdorder expansion: u = z ω + z ω + z ω + v for a solution u to (1.15) and study the fixed point problem for v = u − z ω − z ω − z ω .From the discussion above, we see that the limitation comes from the resonant septic term,yielding the restriction of α > .In general, in the k th step, we could write a solution u to (1.15) as u = v + k X j =1 z ω j − (1.26)and consider the fixed point problem for v = u − P kj =1 z ω j − . Here, z j − denotes thefollowing resonant (2 j − z ω j − ( t ) := − i X j ,j ,j ∈ N − j + j + j =2 j − ˆ t S ( t − t ′ ) N ( z ωj , z ωj , z ωj )( t ′ ) dt ′ . (1.27)Proceeding as before, it is easy to see that the limitation in this k th step comes from z ω k +1 yielding the restriction of α > k + 1) (1.28)which is needed to guarantee that z ω k +1 ( t ) belongs almost surely to L ( T ).The restriction (1.28) shows that, in order to treat the α = 0 case, we at least needan infinite iteration of this procedure. Furthermore, the argument based on the k th orderexpansion (1.26) leads to the following equation for the the residual term v = u − P kj =1 z ω j − : i∂ t v = ∂ x v + N (cid:18) v + k X j =1 z ω j − (cid:19) − X j + j + j ∈{ , ,..., k − } j ,j ,j ∈{ , ,..., k − } N ( z ωj , z ωj , z ωj ) v | t =0 = 0 . In particular, we need to carry out the following case-by-case nonlinear analysis on N ℓ ( u , u , u ) , ℓ = 1 , , where each u i , i = 1 , ,
3, can be either the smoother unknown function v or z ωj for some j ∈ { , , . . . , k − } such that it is not of the form N ( z j , z j , z j ) with j + j + j ∈{ , , . . . , k − } . In general, it could be a cumbersome task to carry out this case-by-caseanalysis due to the increasing number of combinations. In the next subsection, we willdescribe an approach to overcome this issue. Remark 1.8.
In [2], the first author with B´enyi and Pocovnicu studied the cubic NLSon R with random initial data based on a higher order expansion (of order k ), analogousto (1.26). In order to avoid a combinatorial nightmare in relevant case-by-case analysis forhigh values of k , the authors introduced a modified expansion of order k , which simplifiedthe relevant analysis in a significant manner. We point out that the analysis in [2] issignificantly simpler than that in the current paper, since (i) the random data consideredin [2] are of positive regularities and (ii) the refinement of the bilinear Strichartz estimates[8, 58] are available on the Euclidean space. We also mention a recent work [50] on theprobabilistic local well-posedness of the three-dimensional cubic nonlinear wave equation innegative Sobolev spaces, where the main analysis is based on the second order expansion.1.6. The α > case. In this subsection, we describe an outline of the proof of Theorem 2for the α > α = 0 case.In view of the restriction (1.28), we need to iterate indefinitely the procedure describedabove in order to treat arbitrary α >
0. For this purpose, we define z ω by z ω = ∞ X j =1 z ω j − . (1.29)Then, from (1.21), (1.23), (1.25), and (1.27), we see that z ω defined in (1.29) is nothingbut a power series expansion of a solution to the following resonant ( i∂ t z ω = ∂ x z ω + N ( z ω ) z ω | t =0 = u ω , (1.30)where u ω is the random initial data defined in (1.10). By letting z ( t ) = S ( − t ) z ω ( t ), we seethat b z n ( t ) = b z ( n, t ) satisfies the following ODE: ( i∂ t b z n = −| b z n | b z n b z n | t =0 = g n h n i α , (1.31)for each n ∈ Z . By the explicit formula of solutions to (1.31), we have b z n ( t ) = e it | b z n (0) | b z n (0) . (1.32)Hence, we can express z ω as z ω ( t ) = X n ∈ Z e i ( nx − n t ) e it | gn | h n i α g n h n i α . (1.33)By expanding in a power series, we obtain z ω ( t ) = X n ∈ Z e i ( nx − n t ) ∞ X k =0 ( it ) k k ! | g n | k g n h n i (2 k +1) α . (1.34) NLS WITH WHITE NOISE INITIAL DATA 13
By comparing (1.11) and (1.34) with (1.10), we obtain z ω = Z ( u ω ) . Note that, unlike the random linear solution z ω in (1.21) and other lower order terms z ω j − in (1.27), the random resonant solution z ω depends on arbitrarily high powers ofGaussian random variables and hence it does not belong to Wiener chaoses of any finiteorder. Nonetheless, the formula (1.33) shows that z ω has a particular simple structure,allowing us to study its regularity properties; see Lemmas 1.9 and 2.10 below. In carryingout analysis on the random resonant solution z ω involving the X s,b -spaces, we instead needto make use of the series expansion (1.34) and apply Lemma 2.11 below for each k . Lemma 1.9.
Given α ∈ R , let z ω be as in (1.33) . Then, z ω belongs to C ( R ; H s ( T )) almostsurely, provided that s < α − .Proof. Fix ε > s + ε < α − . (1.35)Lemma 2.7 below states that we havesup n ∈ Z | g n ( ω ) | ≤ C ( ω ) h n i ε (1.36)for some almost surely finite constant C ( ω ) > t ∈ R , let { t j } ∞ j =1 be a sequence converging to t . Then, for each n ∈ Z , it followsfrom (1.32) that c z ω ( n, t j ) converges to c z ω ( n, t ) almost surely as j → ∞ . Furthermore,from (1.32) and (1.36), we havesup j ∈ N h n i s | c z ω ( n, t j ) | + h n i s | c z ω ( n, t ) | ≤ C ( ω ) h n i s − α + ε , where the right-hand side belongs to ℓ ( Z ) in view of (1.35). Hence, the claim follows fromthe dominated convergence theorem. (cid:3) Now, express a solution u to (1.6) in the following random-resonant / nonlinear decom-position : u = z ω + v. (1.37)Then, the residual term v = u − z ω satisfies ( i∂ t v = ∂ x v + (cid:2) N ( v + z ω ) − N ( z ω ) (cid:3) v | t =0 = 0 . (1.38)By writing (1.38) in the Duhamel formulation, we consider the following fixed point prob-lem: v ( t ) = Γ ω v ( t ) := − i ˆ t S ( t − t ′ ) (cid:2) N ( v + z ω ) − N ( z ω ) (cid:3) ( t ′ ) dt ′ . (1.39)In this formulation, we successfully reduced the number of combinations; we only need tostudy N k ( u , u , u ), k = 1 ,
2, where each u j can be either the random resonant solution z ω or the smoother unknown function v , except for the case u = u = u = z ω with k = 2. In Section 3, we perform the case-by-case nonlinear analysis and show that the fixedpoint problem (1.39) is almost surely locally well-posed in L ( T ) via the standard Fourierrestriction norm method, provided that α > Lastly, Lemma 1.9 allows us to conclude that the solution u = z ω + v to the renormalized4NLS (1.6) lies in the class: z ω + C ([ − δ, δ ]; L ( T )) ⊂ C ([ − δ, δ ]; H s ( T ))almost surely. Remark 1.10.
The probabilistic local well-posedness argument in [6, 14, 64, 19] yieldsuniqueness of solutions in a ball of radius O (1) in a suitable (local-in-time) function space(such as the Strichartz spaces or the X s,b -spaces) centered at the random linear solution.When α >
0, the proof of Theorem 2 yields uniqueness of solutions in the ball of radius 1in X , + ,δ centered at the random resonant solution z ω . Remark 1.11. (i) When α >
0, the terms z ω j − appearing in (1.26) get smoother as j increases and hence only a finite number of expansion is needed. Nonetheless, the random-resonant / nonlinear decomposition (1.37) allows us to avoid a number of combinations inthe relevant case-by-case analysis when k ≫
1. When α = 0, the terms z ω j − in (1.29) do not get smoother and hence the infinite order expansion in (1.29) is necessary in this case.(ii) Let α >
0. In this case, the random-resonant / nonlinear decomposition (1.37)with (1.29) allows us to write the solution u as u = z ω + z ω + · · · + z ω k +1 + v (1.40)for some v ∈ C ([ − δ, δ ]; L ( T )), where k is the smallest non-negative integer such that (1.28)holds. The expansion (1.40) provides a finer regularity description of the solution u than the random-linear / nonlinear decomposition (1.20). As mentioned above, the termsin (1.29) do not get smoother when α = 0. In this case, the solution u can be written as u = z ω + v for some v ∈ C ([ − δ, δ ]; L ( T )). Namely, the dominant part of the dynamics in small scalesis indeed given by the random resonant solution z ω defined in (1.33).1.7. The α = 0 case. Next, let us discuss the α = 0 case. Namely, we consider the whitenoise initial data (1.1). Unfortunately, the argument described above breaks down in thiscase. As we see in Section 3, the worst interaction comes from the following resonantnonlinear terms on the right-hand side of (1.38): N ( v, z ω , z ω ) + N ( z ω , z ω , v ) = − F − (cid:2) | g n | b v ( n ) (cid:3) and N ( z ω , v, z ω ) = −F − h e − in t e it | g n | g n b v ( n ) i . In order to weaken the effect of these terms, we introduce the following random gaugetransform: J ω ( u )( x, t ) = X n ∈ Z e inx − it | g n ( ω ) | b u ( n, t ) . (1.41) This regularity description can also be understood as the “local” (in space) description of the solutionsince the singular components of the solution become dominant in small scales.
NLS WITH WHITE NOISE INITIAL DATA 15
When α = 0, the solution z ω to the resonant 4NLS (1.30) reads as z ω ( x, t ) = X n ∈ Z e i ( nx − n t ) e it | g n | c u ω ( n ) . (1.42)The random gauge transform J ω in (1.41) allows us to filter out the random phase os-cillations appearing in (1.42). This gauge transform is clearly invertible and leaves the H s -norm invariant. If u is a solution to the renormalized 4NLS (1.6), then the gaugedfunction w := J ω ( u ) satisfies the following random equation: ( i∂ t w = ∂ x w + N ω ( w ) + N ω ( w ) w | t =0 = u ω . (1.43)Here, the first nonlinearity N ω ( w ) is defined by N ω ( w )( x, t ) := X n ∈ Z e inx X Γ( n ) e it Ψ ω (¯ n ) b w ( n , t ) b w ( n , t ) b w ( n , t ) , (1.44)where Γ( n ) is as in (1.19) and Ψ ω (¯ n ) denotes the random phase function:Ψ ω (¯ n ) := Ψ ω ( n , n , n , n ) = | g n ( ω ) | − | g n ( ω ) | + | g n ( ω ) | − | g n ( ω ) | . (1.45)The second nonlinearity N ω ( w ) is defined by N ω ( w )( x, t ) := − X n ∈ Z e inx (cid:2) | b w ( n, t ) | − | g n ( ω ) | (cid:3) b w ( n, t ) . (1.46)As we can see, (1.44) and (1.46) are random versions of (1.17) and (1.18). The mainadvantage of working with this gauged version of the renormalized 4NLS (1.6) lies in theweaker resonant nonlinearity (cid:2) | b w ( n ) | − | g n ( ω ) | (cid:3) b w ( n ), which would be eliminated if b w ( n ) = g n . This observation turns out to be crucial in our later analysis.The Duhamel formulation for the gauged solution w is given by w ( t ) = S ( t ) u ω − i ˆ t S ( t − t ′ ) (cid:2) N ω ( w ) + N ω ( w ) (cid:3) ( t ′ ) dt ′ . (1.47)Now by setting z ω = S ( t ) u ω , we see that the residual term v = w − z ω , satisfies the following Duhamel formulation: v ( t ) = − i ˆ t S ( t − t ′ ) (cid:2) N ω ( v + z ω ) + N ω ( v + z ω ) (cid:3) ( t ′ ) dt ′ . (1.48)A naive approach would be to try to solve the fixed point problem (1.48) by a contractionargument (namely, by the Picard iteration scheme) for v in L ( T ), exploiting randomness.It turns out, however, that this naive approach via a contraction argument does not workfor our problem. In the following, by partially iterating the Duhamel formulation, weprove convergence in L ( T ) of approximating smooth solutions and construct a solutionto (1.48) and hence to (1.43). See Section 4 for more details. We establish the crucialnonlinear estimates (Propositions 4.1 and 4.2) by reducing them to boundedness propertiesof certain random multilinear functionals of the white noise, whose tail estimates are provedin Appendix A. Remark 1.12.
As it will become clear from the analysis below, there is room to extendour analysis to the fractional NLS with dispersion weaker than the fourth order dispersion.However, this would not introduce any new qualitative phenomenon as compared to thecase of the fourth order dispersion and hence we only consider the fourth order NLS inthis paper. We also point out that the case of the standard NLS (with the second orderdispersion) is out of reach at this point. See the introduction in [23] for a discussion on thecriticality of this problem (in the context of the stochastic NLS with additive space-timewhite noise forcing).
Remark 1.13. (i) In the deterministic setting, Takaoka-Tsutsumi [63] implicitly useda gauge transform analogous to (1.42) in the low regularity study of the modified KdVequation to weak the resonant interaction. This led them to work in the modified X s,b -spaces. See also [41]. In our case, the gauge transform J ω is random and hence it leads tothe random X s,b -spaces. See Subsection A.1. We also point out the work [53] on the useof a gauge transform in the probabilistic context.(ii) In order to construct the dynamics for the α = 0 case, we partially iterate the Duhamelformulation (of the gauged equation) and establish convergence property of smooth ap-proximating solutions. See Section 4. This strategy is close in spirit to the work [49, 61].In the context of stochastic PDEs, such iteration of a Duhamel formulation appears inthe dispersive setting [47, 28] and in the parabolic setting [31, 16, 40]. We also mention[7, 10, 11, 12] on the probabilistic construction of solutions by establishing convergence ofsmooth solutions. In particular, the recent approach by Bourgain-Bulut [10, 11] relying onthe invariance of the truncated Gibbs measures even in the construction of local solutionsworks well for a power-type nonlinearity with positive regularity but is not suitable to ourproblem at hand. See [3] for a survey on this method.1.8. Organization of the paper.
In Section 2, we introduce the basic notations and listsome basic deterministic and probabilistic lemmas. In Section 3, we present the proof ofTheorem 2 for α >
0. The remaining part of the paper is devoted to handle the α = 0case. In Section 4, we prove Theorem 2, by assuming two key nonlinear estimates (Propo-sitions 4.1 and 4.2). In Section 5, we prove Theorem 3 and then Theorem 1. We presentthe proofs of Propositions 4.1 and 4.2 in Sections 6 and 7. Appendix A contains the proofsof some probabilistic lemmas.2. Notations and preliminaries
As in the usual low regularity analysis of dispersive PDEs, an important ingredient willbe the Fourier restriction norm method introduced in [4]. Given s, b ∈ R , define X s,b ( T × R )as a completion of the test functions under the following norm: k u k X s,b ( T × R ) = kh n i s h τ + n i b b u ( n, τ ) k ℓ n L τ , (2.1)where h · i = (1 + | · | ) . Recall that X s,b embeds into C ( R ; H s ( T )) for b > . Given a timeinterval I = [ a, b ], we define the local-in-time version X s,bI = X s,b ([ a, b ]) by setting k u k X s,bI = inf (cid:8) k v k X s,b ( T × R ) : v | I = u (cid:9) . (2.2) NLS WITH WHITE NOISE INITIAL DATA 17
Note that X s,bI is a Banach space. When I = [ − δ, δ ], we simply set X s,b,δ = X s,bI . Thelocal-in-time versions of other function spaces are defined analogously.For simplicity, we often drop 2 π in dealing with the Fourier transforms. If a function f is random, we may use the superscript f ω to show the dependence on ω ∈ Ω.Let η ∈ C ∞ c ( R ) be a smooth non-negative cutoff function supported on [ − ,
2] with η ≡ − ,
1] and set η δ ( t ) = η ( δ − t ) (2.3)for δ >
0. We also denote by χ = χ [ − , the characteristic function of the interval [ − , χ δ ( t ) = χ ( δ − t ) = χ [ − δ,δ ] ( t ).Let Z ≥ := Z ∩ [0 , ∞ ). Given a dyadic number N ∈ Z ≥ , let P N be the (non-homogeneous) Littlewood-Paley projector onto the (spatial) frequencies { n ∈ Z : | n | ∼ N } such that f = ∞ X N ≥ P N f. Given a non-negative integer N ∈ Z ≥ , we also define the Dirichlet projector π N onto thefrequencies {| n | ≤ N } by setting π N f ( x ) = X | n |≤ N b f ( n ) e inx . (2.4)Moreover, we set π ⊥ N = Id − π N . (2.5)By convention, we also set π ⊥− = Id.We use c, C to denote various constants, usually depending only on α and s . If a constantdepends on other quantities, we will make it explicit. For two quantities A and B , we use A . B to denote an estimate of the form A ≤ CB , where C is a universal constant,independent of particular realization of A or B . Similarly, we use A ∼ B to denote A . B and B . A . The notation A ≪ B means A ≤ cB for some sufficiently small constant c . Wealso use the notation a + (and a − ) to denote a + ε (and a − ε , respectively) for arbitrarilysmall ε > ε → Deterministic tools.
Define the phase function Φ(¯ n ) byΦ(¯ n ) = Φ( n , n , n , n ) = n − n + n − n . (2.6)Then, the phase function Φ(¯ n ) admits the following factorization. See [54] for the proof. Lemma 2.1.
Let n = n − n + n . Then, we have Φ(¯ n ) = ( n − n )( n − n ) (cid:0) n + n + n + n + 2( n + n ) (cid:1) . Recall that by restricting the X s,b -spaces onto a small time interval [ − δ, δ ], we can gaina small power of δ (at a slight loss in the modulation). Lemma 2.2.
Let s ∈ R and b < . Then, there exists C = C ( b ) > such that k η δ ( t ) · u k X s,b + k χ δ ( t ) · u k X s,b ≤ Cδ − b − k u k X s, − . The proof of Lemma 2.2 is based on the following scaling property: b η δ ( τ ) = δ b η ( δτ ),yielding k b η δ k L qτ ∼ δ q − q k b η k L qτ . δ q − q , (2.7)for q ≥
1. See [19] for details.Next, we collect the basic linear estimates (see [24]).
Lemma 2.3.
Let s ∈ R . (i) Given b ∈ R , there exists C = C ( b ) > such that k S ( t ) u k X s,b,δ ≤ C k u k H s for any < δ ≤ . (ii) Given b > , there exists C = C ( b ) > such that (cid:13)(cid:13)(cid:13)(cid:13) ˆ t S ( t − t ′ ) F ( x, t ′ ) dt ′ (cid:13)(cid:13)(cid:13)(cid:13) X s,b,δ . k F k X s,b − ,δ for any δ > . The following periodic L -Strichartz estimate from [54] also plays an important role: k u k L x,t . k u k X , . (2.8)Interpolating (2.8) with k u k L x,t = k u k X , , we have k u k L x,t . k u k X ,
524 + and k u k L x,t . k u k X , . (2.9)We also recall the following lemma on convolutions. See [24] for a proof. Lemma 2.4.
Let α > β ≥ with α + β > . Then, there exists C > such that ˆ R h x − y i α h y i β dy ≤ C h x i γ for any x ∈ R , where γ is given by γ = α + β − , if α < ,β − ε, if α = 1 ,β, if α > for any small ε > . Lastly, we state two lemmas related to boundedness properties of products in Sobolevspaces.
Lemma 2.5.
Let ε > . Then, there exists C = C ( ε ) > such that k f g k H − ε ( R ) ≤ C k f k H
12 + ε ( R ) k g k H − ε ( R ) . Lemma 2.5 easily follows from standard analysis with Littlewood-Paley decompositionsand Bernstein’s inequality. We omit details.
Lemma 2.6.
Let ≤ b < . Then, we have k [0 ,T ] · f k H b ( R ) . k f k H b ( R ) , uniformly in T ≥ . NLS WITH WHITE NOISE INITIAL DATA 19
See [22] for a classical proof via an interpolation argument. By Plancherel’s identity,Lemma 2.6 also follows from the boundedness of the Hilbert transform (on the Fourierside) with an A -weight h τ i b , 0 ≤ b < . See [25].2.2. Probabilistic estimates.
Next, we state several probabilistic lemmas related toGaussian random variables. See also Appendix A for further lemmas. In the following, { g n } n ∈ Z denotes a family of independent standard complex-valued Gaussian random vari-ables on a probability space (Ω , F , P ).We first start by a well known fact (see for example [45, 19]). Lemma 2.7.
Let ε > . Then, there exist c, C > such that P (cid:16) sup n ∈ Z | g n ( ω ) | > K h n i ε (cid:17) < Ce − cK for any K > . In particular, given β > , by choosing K = δ − β , we have P (cid:16) sup n ∈ Z | g n ( ω ) | > δ − β h n i ε (cid:17) < Ce − δc for any δ > . Next, we recall the Wiener chaos estimates. Let { g n } n ∈ N be a sequence of independentstandard Gaussian random variables defined on a probability space (Ω , F , P ), where F is the σ -algebra generated by this sequence. Given k ∈ Z ≥ , we define the homogeneous Wienerchaoses H k to be the closure (under L (Ω)) of the span of Fourier-Hermite polynomials Q ∞ n =1 H k n ( g n ), where H j is the Hermite polynomial of degree j and k = P ∞ n =1 k n . Then,we have the following Ito-Wiener decomposition: L (Ω , F , P ) = ∞ M k =0 H k . See Theorem 1.1.1 in [44]. We also set H ≤ k = k M j =0 H j (2.10)for k ∈ N . For example, the random linear solution z ω defined in (1.21) belongs to H (foreach fixed t ∈ R ), while z ω in (1.23) belongs to H ≤ . As pointed out above, the randomresonant solution z ω defined in (1.33) does not belong to H ≤ k for any finite k ∈ N .In this setting, we have the following Wiener chaos estimate [62, Theorem I.22]. See also[65, Proposition 2.4]. Lemma 2.8.
Let k ∈ N . Then, we have k X k L p (Ω) ≤ ( p − k k X k L (Ω) for any finite p ≥ and any X ∈ H ≤ k . We also recall the following lemma, which is a consequence of Chebyshev’s inequality.See, for example, Lemma 4.5 in [67] and the proof of Lemma 3 in [1].
This implies that k n = 0 except for finitely many n ’s. This corresponds to Lemma 2.3 in the arXiv version.
Lemma 2.9.
Let k ≥ . Suppose that there exists C > such that a random variable X satisfies k X k L p (Ω) ≤ C p k for any finite p ≥ . Then, there exist c, C > such that P (cid:0) | X | > λ (cid:1) ≤ Ce − c C − k λ k for any λ > . In probabilistic well-posedness theory, a probabilistic improvement of Strichartz esti-mates for random linear solutions plays an important role. The following lemma statesthat a similar estimate also holds for the random resonant solution z ω defined in (1.33). Lemma 2.10.
Given α ≥ , let z ω be the solution to the resonant 4NLS (1.30) givenby (1.33) . Then, given p ≥ and ε > , there exist c, C > such that P (cid:16) k P N z ω k L px,t ( T × [ − δ,δ ]) > N − α + ε (cid:17) < Ce − N εδc (2.11) for any δ > and dyadic N ≥ . One way to prove Lemma 2.10 would be to directly apply the Wiener chaos estimate(Lemma 2.8) to the (2 k + 1)-fold products of Gaussian random variables in the seriesexpansion (1.34). See Lemma 2.11 for such a direct approach. In the particular case ofLemma 2.10, we can give a shorter proof by exploiting the invariance of a complex-valuedmean-zero Gaussian random variable under the transformation: g e it | g | g ; see Lemma 4.2in [54]. This allows us to avoid higher order products of Gaussian random variables. Proof of Lemma 2.10.
Given n ∈ Z and ( x, t ) ∈ T × R , define h n ( x, t ) by h n ( x, t ) := e i ( nx − n t ) e it | gn | h n i α g n h n i α . Then, it follows from the rotational invariance of complex-valued Gaussian random variablesand Lemma 4.2 in [54] that h n ( x, t ) ∼ N C (0 , h n i − α ) for each fixed ( x, t ) ∈ T × R .By Minkowski’s integral inequality and Lemma 2.8, we have (cid:18) E h k P N z ω k rL px,t ( T × [ − δ,δ ]) i(cid:19) r ≤ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) X | n |∼ N h n ( x, t ) (cid:13)(cid:13)(cid:13) L r (Ω) (cid:13)(cid:13)(cid:13)(cid:13) L px,δ . √ r (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) X | n |∼ N h n ( x, t ) (cid:13)(cid:13)(cid:13) L (Ω) (cid:13)(cid:13)(cid:13)(cid:13) L px,δ . √ r δ p N − α for any r ≥ p . Then, the desired estimate (2.11) follows from Lemma 2.9. (cid:3) Finally, we conclude this section by stating a crucial lemma in studying powers of therandom resonant solution z ω in the multilinear X s,b -analysis. This lemma also plays animportant role in establishing boundedness properties of certain random multilinear func-tionals of the white noise (see Lemma 6.1 below), which is a key ingredient for the proof ofTheorem 2 when α = 0. We present the proof of this lemma in Appendix A. NLS WITH WHITE NOISE INITIAL DATA 21
Lemma 2.11.
Fix a non-empty set
A ⊂ { , , } and k, k j ∈ Z ≥ , j ∈ A , such that k = X j ∈A k j . (2.12) Given a ( deterministic ) sequence (cid:8) c ¯ kn ,n ,n (cid:9) n ,n ,n ∈ Z with ¯ k = { k j } j ∈A , define a sequence { Σ n } n ∈ Z by setting Σ n = Σ n (¯ k ) = 1 Q j ∈A k j ! X ( n ,n ,n ) ∈ Γ( n ) c ¯ kn ,n ,n Y j ∈A | g n j | k j g ∗ n j (2.13) for n ∈ Z , where Γ( n ) is as in (1.19) and g ∗ n j is defined by g ∗ n j = ( g n j , when j = 1 or ,g n j , when j = 2 . (2.14) Then, there exists
C > , independent of k and k j ∈ Z ≥ , j ∈ A , such that k Σ n k L p (Ω) ≤ C k ( p − k + |A| (cid:18) X ( n ,n ,n ) ∈ Γ( n ) | c ¯ kn ,n ,n | (cid:19) (2.15) for all p ≥ and n ∈ Z . Local theory, Part 1: < α ≤ In this section, we present the proof of Theorem 2 when 0 < α ≤ . In particular,we show that the Cauchy problem (1.38) for v is almost surely locally well-posed. Moreprecisely, we show that for each small δ >
0, there exists Ω δ with P (Ω cδ ) < Ce − δc suchthat, for each ω ∈ Ω δ , the map Γ ω defined in (1.39) is a contraction on B (1), where B (1)denotes the ball of radius 1 in X , + ,δ centered at the origin.Given v on T × [ − δ, δ ], let e v be an extension of v onto T × R . By the non-homogeneouslinear estimate (Lemma 2.3), we have (cid:13)(cid:13)(cid:13)(cid:13) ˆ t S ( t − t ′ ) N ω ( v )( t ′ ) dt ′ (cid:13)(cid:13)(cid:13)(cid:13) X ,
12 + ,δ ≤ (cid:13)(cid:13)(cid:13)(cid:13) η δ ( t ) ˆ t S ( t − t ′ ) N ω ( e v )( t ′ ) dt ′ (cid:13)(cid:13)(cid:13)(cid:13) X ,
12 + . k N ω ( e v ) k X , −
12 + , where η δ is a smooth cutoff on [ − δ, δ ] as in (2.3) and N ω ( v ) := χ δ · (cid:0) N ( v + e z ω ) − N ( e z ω ) (cid:1) (3.1)with an extension e z ω of the truncated random linear solution χ δ · z ω from [ − δ, δ ] to R .Then, our main goal is to prove that there exists Ω δ ⊂ Ω and θ > P (Ω cδ ) < Ce − δc such that k N ω ( e v ) k X , −
12 + . δ θ (cid:16) k e v k X ,
12 + (cid:17) (3.2)for all ω ∈ Ω δ and for any extension e v of v . By the definition (2.2) of the local-in-timenorm, we then conclude from (3.1) and (3.2) that (cid:13)(cid:13)(cid:13)(cid:13) ˆ t S ( t − t ′ ) N ω ( v )( t ′ ) dt ′ (cid:13)(cid:13)(cid:13)(cid:13) X ,
12 + ,δ . δ θ (cid:16) k v k X ,
12 + ,δ (cid:17) . By the trilinear structure of the nonlinearity, a similar estimate holds for the differenceΓ ω v − Γ ω v , allowing us to conclude that Γ ω is a contraction on B (1) ⊂ X , + ,δ for ω ∈ Ω δ . Note that the claim (1.12) follows from the embedding X , + ,δ ⊂ C ([ − δ, δ ]; L ( T ))and Lemma 1.9.In view of (3.1), in order to prove (3.2), we need to carry out case-by-case analysis on k χ δ · N k ( u , u , u ) k X s, −
12 + , k = 1 , , (3.3)where u j is taken to be either of type(I) rough random resonant part: u j = e z ω , where e z ω is some extension of χ δ · z ω , where z ω denotes the random resonant solution defined in (1.33),(II) smoother ‘deterministic’ nonlinear part: u j = e v j , where e v j is any extension of v , except for u = u = u = e z ω when k = 2 (thanks to the subtraction of N ( e z ω ) in (3.1)).In the following, we take e z ω = η δ z ω . It follows from (1.33) that F ( η δ z ω )( n, τ ) = b η δ (cid:16) τ + n − | g n | h n i α (cid:17) · g n h n i α . (3.4)Thanks to the sharp cutoff function in (3.3), we may take u j = χ δ · e v j (3.5)in (3.3) when u j is of type (II). We use the expressions u j ( I ) (and u j (II), respectively)to mean that u j is of type (I) (and of type (II), respectively) in the following. We pointout that the most intricate case appears when all u j ’s are of type (I) in estimating thenon-resonant contribution. In this case, a simple application of the Wiener chaos estimate(Lemma 2.8) is no longer applicable and we need to carefully estimate the contributionfrom the sum of the products of the (2 k j + 1)-linear term, k j ∈ N , j = 1 , ,
3, in (1.34),using Lemma 2.11. See Case (D) in Subsection 3.2.3.1.
Resonant part N . In this subsection, we estimate the resonant part of the nonlinearestimate (3.2). In particular, we prove k χ δ · N ( u , u , u ) k X , −
12 + . δ θ Y j ∈I k e v j k X ,
12 + (3.6)for some θ >
0, outside an exceptional set of probability < Ce − δc , where N is the resonantpart of the nonlinearity defined in (1.18), u j is either of type (I) or (II), except for the casewhen all u j ’s are of type (I), and the index set I is defined by I = (cid:8) j ∈ { , , } : u j is of type (II) (cid:9) . (3.7)We haveLHS of (3.6) = (cid:13)(cid:13)(cid:13)(cid:13) h τ + n i − ˆ τ = τ − τ + τ b u ( n, τ ) b u ( n, τ ) b u ( n, τ ) dτ dτ (cid:13)(cid:13)(cid:13)(cid:13) ℓ n L τ . (3.8) NLS WITH WHITE NOISE INITIAL DATA 23 • Case (a): u j of type (II), j = 1 , , p large ( = + p ), we have(3.8) . sup n kh τ + n i − + k L τ (cid:13)(cid:13)(cid:13)(cid:13) ˆ τ = τ − τ + τ b u ( n, τ ) b u ( n, τ ) b u ( n, τ ) dτ dτ (cid:13)(cid:13)(cid:13)(cid:13) ℓ n L pτ . By Young’s and H¨older’s inequalities, ℓ n ⊂ ℓ n , and Lemma 2.2 with (3.5), . Y j =1 k b u j ( n, τ ) k ℓ n L − τ . Y j =1 kh τ + n i + b u j ( n, τ ) k ℓ n L τ ≤ Y j =1 k u j k X ,
16 + . δ − Y j =1 k e v j k X ,
12 + . • Case (b):
Exactly one u j of type (I). Say u ( I ), u (II), and u (II).By H¨older’s inequality (with p ≫ . sup n kh τ + n i − + k L τ × (cid:13)(cid:13)(cid:13)(cid:13) h n i − α | g n | ˆ τ = τ − τ + τ b η δ (cid:16) τ + n − | g n | h n i α (cid:17)b u ( n, τ ) b u ( n, τ ) dτ dτ (cid:13)(cid:13)(cid:13)(cid:13) ℓ n L pτ . (cid:0) sup n h n i − α | g n | (cid:1)(cid:13)(cid:13)(cid:13)(cid:13) ˆ τ = ζ − τ + τ − C ( n,ω ) b η δ ( ζ ) b u ( n, τ ) b u ( n, τ ) dζ dτ (cid:13)(cid:13)(cid:13)(cid:13) ℓ n L pτ , where C ( n, ω ) is defined by C ( n, ω ) := n − | g n | h n i α . (3.9)Note that for fixed n ∈ Z and ω ∈ Ω, C ( n, ω ) is a fixed number. Hence, we can applyYoung’s inequality (in τ, ζ , τ , and τ ), Lemma 2.7 with β = 0+, (2.7), and Lemma 2.2with (3.5) as above and obtain(3.8) . δ − (cid:0) sup n h n i − α | g n | (cid:1) Y j =2 k b u j ( n, τ ) k ℓ n L τ . δ − Y j =2 kh τ + n i + b u j ( n, τ ) k ℓ n L τ ≤ δ − Y j =2 k u j k X ,
14 + . δ − Y j =2 k e v j k X ,
14 + for any α >
0, outside an exceptional set of probability < Ce − δc . • Case (c):
Exactly two u j ’s of type (I).First, consider the case u ( I ), u ( I ), and u (II). Proceeding as before with p ≫ change of variables, we have(3.8) . (cid:13)(cid:13)(cid:13)(cid:13) h n i − α | g n | ˆ τ = τ − τ + τ b η δ (cid:16) τ + n − | g n | h n i α (cid:17) × b η δ (cid:16) τ + n − | g n | h n i α (cid:17)b u ( n, τ ) dτ dτ (cid:13)(cid:13)(cid:13)(cid:13) ℓ n L pτ ≤ (cid:0) sup n h n i − α | g n | (cid:1)(cid:13)(cid:13)(cid:13)(cid:13) ˆ τ = ζ − ζ + τ b η δ ( ζ ) b η δ ( ζ ) b u ( n, τ ) dζ dζ (cid:13)(cid:13)(cid:13)(cid:13) ℓ n L pτ By Lemma 2.7, (2.7), and Lemma 2.2 with (3.5), . δ − (cid:0) sup n h n i − α | g n | (cid:1) k b u ( n, τ ) k ℓ n L τ . δ − k u k X , . δ − k e v k X ,
12 + for α >
0, outside an exceptional set of probability < Ce − δc .Next, consider the case u ( I ), u (II), and u ( I ). Proceeding in a similar manner (with p ≫ C ( n, ω ) as in (3.9)), we have(3.8) . (cid:13)(cid:13)(cid:13)(cid:13) h n i − α | g n | ˆ τ = τ − τ + τ b η δ (cid:16) τ + n − | g n | h n i α (cid:17) × b u ( n, τ ) b η δ (cid:16) τ + n − | g n | h n i α (cid:17) dτ dτ (cid:13)(cid:13)(cid:13)(cid:13) ℓ n L pτ . (cid:0) sup n h n i − α | g n | (cid:1)(cid:13)(cid:13)(cid:13)(cid:13) ˆ τ = ζ − τ + ζ − C ( n,ω ) b η δ ( ζ ) b u ( n, τ ) b η δ ( ζ ) dζ dζ (cid:13)(cid:13)(cid:13)(cid:13) ℓ n L pτ . δ − (cid:0) sup n h n i − α | g n | (cid:1) k b u ( n, τ ) k ℓ n L τ . δ − k u k X , . δ − k e v k X ,
12 + for α >
0, outside an exceptional set of probability < Ce − δc .3.2. Non-resonant part N . In this subsection, we evaluate the non-resonant part of thenonlinearity N ω ( v ). In particular, we prove k χ δ · N ( u , u , u ) k X , −
12 + . δ θ Y j ∈I k e v j k X ,
12 + (3.10)for some θ >
0, outside an exceptional set of probability < Ce − δc , where N is the non-resonant part of the nonlinearity defined in (1.17), u j is either of type (I) or (II), and theindex set I is as in (3.7). Set σ := h τ + n i and σ j := h τ j + n j i , j = 1 , , , and σ max := max( σ, σ , σ , σ ) and n max := max (cid:0) | n | , | n | , | n | , | n | (cid:1) + 1 . (3.11) NLS WITH WHITE NOISE INITIAL DATA 25
Given dyadic numbers
N, N , N , N ≥
1, we also set N max := max( N, N , N , N ) . By duality, we can estimate the left-hand side of (3.10) bysup k w k X , − ≤ (cid:12)(cid:12)(cid:12)(cid:12) ˆ δ − δ ˆ T N ( u , u , u ) · w dxdt (cid:12)(cid:12)(cid:12)(cid:12) . (3.12)Without loss of generality, we may assume that w = χ δ · w . • Case (A): u j of type (II), j = 1 , , . Y j =1 k u j k L x,t k w k L x,t . δ − Y j =1 k e v j k X ,
12 + k w k X , − . • Case (B):
Exactly one u j of type (I). Say u ( I ), u (II), and u (II).First suppose that max( σ , σ , σ ) ∼ σ max . Then, it follows from Lemma 2.1 thatmax( σ , σ , σ ) − ∼ σ − max & N − max . (3.13)By L px,t L x,t L x,t L x,t -H¨older’s inequality with p large, (2.9), Lemma 2.10, Lemma 2.2, and(3.13), we have(3.12) . X N,N ,N ,N dyadic k P N u k L px,t k P N u k X ,
524 + k P N u k X ,
524 + k P N w k X ,
524 + . X N,N ,N ,N dyadic N − α +1 k P N u k X ,
524 + k P N u k X ,
524 + k P N w k X ,
524 + . δ − X N,N ,N ,N dyadic N − +max k P N e v k X ,
12 + k P N e v k X ,
12 + k P N w k X , − . δ − Y j =2 k e v j k X ,
12 + (3.14)for α ≥
0, outside an exceptional set of probability < X N ≥ Ce − Nε δc . e − δc . Next, suppose that max( σ , σ , σ ) ≪ σ max , namely σ ∼ σ max . We first consider the case δ β ≫ N − ε max for some small β, ε >
0. It follows from Lemmas 2.1 and 2.7 that there existsa set Ω β,ε ⊂ Ω with P (Ω cβ,ε ) < Ce − δc such that | g n | h n i α . δ − β h n i ε ≪ N − ε max ≪ σ max , on Ω β,ε , uniformly in n ∈ Z , as long as α ≥
0. Hence, we have (cid:12)(cid:12)(cid:12)b η δ (cid:16) τ + n − | g n | h n i α (cid:17)(cid:12)(cid:12)(cid:12) . σ . N | ( n − n )( n − n ) | (3.15) on Ω β,ε . Then, by H¨older’s inequality (with p ≫ β ≪ . X N,N ,N ,N , dyadic δ β ≫ N − ε max (cid:13)(cid:13)(cid:13)(cid:13) X ( n ,n ,n ) ∈ Γ( n ) | n |∼ N, | n j |∼ N j | g n |h n i α (cid:8) N | ( n − n )( n − n ) | (cid:9) + ε × ˆ τ = τ − τ + τ (cid:12)(cid:12)(cid:12)b η δ (cid:16) τ + n − | g n | h n i α (cid:17)(cid:12)(cid:12)(cid:12) − ε | \ P N u ( n , τ ) || \ P N u ( n , τ ) | dτ dτ (cid:13)(cid:13)(cid:13)(cid:13) ℓ n L pτ . (cid:0) sup n h n i − α − ε | g n | (cid:1) X N,N ,N ,N , dyadic δ β ≫ N − ε max (cid:13)(cid:13)(cid:13)(cid:13) X ( n ,n ,n ) ∈ Γ( n ) | n |∼ N, | n j |∼ N j (cid:8) N | ( n − n )( n − n ) | (cid:9) + ε × (cid:13)(cid:13) | b η δ | − ε (cid:13)(cid:13) L p τ Y j =2 k \ P N j u j ( n j , τ j ) k L pp − τj (cid:13)(cid:13)(cid:13)(cid:13) ℓ n . δ − ε − p − β X N,N ,N ,N , dyadic δ β ≫ N − ε max N − max 3 Y j =2 k \ P N j u j ( n j , τ j ) k ℓ nj L pp − τj . δ − ε − p − β Y j =2 k e v j k X ,
12 + for α ≥
0, outside an exceptional set of probability < Ce − δc .Lastly, we consider the case δ β . N − ε max . Proceeding as in (3.14), we bound thecontribution of this case to (3.12) by . δ − X N,N ,N ,N dyadic N − α +1 k P N e v k X ,
12 + k P N e v k X ,
12 + k P N w k X , − . δ − β − X N,N ,N ,N dyadic N − +max k P N e v k X ,
12 + k P N e v k X ,
12 + . δ − β − Y j =2 k e v j k X ,
12 + for α ≥
0, outside an exceptional set of probability < Ce − δc . • Case (C):
Exactly two u j ’s of type (I). Say u ( I ), u ( I ), and u (II).First, suppose that max( σ , σ ) ∼ σ max . Then, it follows from Lemma 2.1 thatmax( σ , σ ) − ∼ σ − max & N − max . (3.16) NLS WITH WHITE NOISE INITIAL DATA 27
Suppose that σ ∼ σ max . Then, by L px,t L px,t L x,t L x,t -H¨older’s inequality with p large, (2.9),Lemma 2.10, Lemma 2.2, and (3.16), we have(3.12) . X N,N ,N ,N dyadic k P N u k L px,t k P N u k L px,t k P N u k X , k P N w k X , . X N,N ,N ,N dyadic N − α +1 N − α +2 k P N u k X , k P N w k X , . δ − X N,N ,N ,N dyadic N − α +max k P N e v k X ,
12 + k P N w k X , − . δ − k e v k X ,
12 + for α >
0, outside an exceptional set of probability < X N ≥ Ce − Nε δc + X N ≥ Ce − Nε δc . e − δc . A similar argument holds when σ ∼ σ max .Next, suppose that max( σ , σ ) ≪ σ max , namely max( σ , σ ) ∼ σ max . Without loss ofgenerality, suppose that σ ∼ σ max . We first consider the case δ β ≫ N − ε max for some small β, ε >
0. Proceeding as in Case (B) above, the contribution to (3.12) is bounded by . X N,N ,N ,N , dyadic δ β ≫ N − ε max (cid:13)(cid:13)(cid:13)(cid:13) X ( n ,n ,n ) ∈ Γ( n ) | n |∼ N, | n |∼ N (cid:18) Y j =1 | g n j |h n j i α (cid:19) (cid:8) N | ( n − n )( n − n ) | (cid:9) + ε . × ˆ τ = τ − τ + τ (cid:12)(cid:12)(cid:12)b η δ (cid:16) τ + n − | g n | h n i α (cid:17)(cid:12)(cid:12)(cid:12) − ε (cid:12)(cid:12)(cid:12)b η δ (cid:16) τ + n − | g n | h n i α (cid:17)(cid:12)(cid:12)(cid:12) | \ P N u ( n , τ ) | dτ dτ (cid:13)(cid:13)(cid:13)(cid:13) ℓ n L pτ . (cid:18) Y j =1 sup n j h n j i − α − ε | g n j | (cid:19) X N,N ,N ,N , dyadic δ β ≫ N − ε max (cid:13)(cid:13)(cid:13)(cid:13) X ( n ,n ,n ) ∈ Γ( n ) | n |∼ N, | n |∼ N (cid:8) N | ( n − n )( n − n ) | (cid:9) + ε × (cid:13)(cid:13) | b η δ | − ε (cid:13)(cid:13) L p τ k b η δ k L τ k \ P N u ( n , τ ) k L pp − τ (cid:13)(cid:13)(cid:13)(cid:13) ℓ n . δ − ε − p − β X N,N ,N ,N , dyadic δ β ≫ N − ε max N − max k \ P N u ( n , τ ) k ℓ n L pp − τ . δ − ε − p − β k e v k X , for α ≥
0, outside an exceptional set of probability < Ce − δc . Lastly, we consider the case δ β . N − ε max . Proceeding as in (3.14) but with L px,t L px,t L x,t L x,t -H¨older’s inequality, the contribution of this case to (3.12) . X N,N ,N ,N dyadic N − α +max k P N u k X , k P N w k X , . δ − β − X N,N ,N ,N dyadic N − − α +max k P N e v k X ,
12 + k P N w k X , − . δ − β − k e v k X ,
12 + for α ≥
0, outside an exceptional set of probability < Ce − δc . • Case (D): u j of type (I), j = 1 , , δ > F ( η δ z ω )( n, τ ) = δ ∞ X k =0 ( − δ ) k k ! ( ∂ k b η )( δ ( τ + n )) | g n | k g n h n i (2 k +1) α . Then, we have kN ( η δ z ω ) k X , −
12 + = (cid:13)(cid:13)(cid:13)(cid:13) h τ + n i − ∞ X k =0 ∞ X k ,k ,k =0 k = k + k + k ( − δ ) k k ! k ! k ! × X ( n ,n ,n ) ∈ Γ( n ) c k ,k ,k n ,n ,n ( τ, δ ) Y j =1 | g n j | k j g ∗ n j (cid:13)(cid:13)(cid:13)(cid:13) ℓ n L τ , (3.17)where g ∗ n j is as in (2.14) and c k ,k ,k n ,n ,n ( τ, δ ) is defined by c k ,k ,k n ,n ,n ( τ, δ ) = δ ˆ τ = τ − τ + τ Y j =1 ( ∂ k j b η j )( δ ( τ j + n j )) h n j i (2 k j +1) α dτ dτ with the convention that b η j = b η when j = 1 or 3 and b η j = b η when j = 2. Then, byMinkowski’s integral inequality and Lemma 2.11, there exists C > (cid:13)(cid:13) kN ( η δ z ω ) k X , −
12 + (cid:13)(cid:13) L p (Ω) ≤ p ∞ X k =0 ∞ X k ,k ,k =0 k = k + k + k ( Cpδ ) k × ˆ R X n ∈ Z X ( n ,n ,n ) ∈ Γ( n ) h τ + n i − | c k ,k ,k n ,n ,n ( τ, δ ) | dτ ! (3.18)for any p ≥
2. In the following, we estimate (3.18) with p = δ − θ ≫ θ >
0. Note that, from Lemma 2.1 and n = n , n , we have σ max & n | ( n − n )( n − n ) | ≥ n . (3.20) NLS WITH WHITE NOISE INITIAL DATA 29 ◦ Subcase (D.1): σ ∼ σ max . First, note that, in view of supp η ⊂ [ − , |F − ( ∂ k j b η )( t ) | = | ( − it ) k j η ( t ) | ≤ C k j η ( t ) . (3.21)Then, by a change of variables: ζ = δτ + n − n + n and ζ j = δ ( τ j + n j ), j = 1 , , t ) with (3.21), and k = k + k + k , we have k c k ,k ,k n ,n ,n ( τ, δ ) k L τ = δ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˆ ζ = ζ − ζ + ζ Y j =1 ∂ k j b η j ( ζ j ) h n j i (2 k j +1) α dζ dζ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L ζ = δ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y j =1 F − ( ∂ k j b η j ) h n j i (2 k j +1) α (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L t ≤ C k δ Y j =1 h n j i (2 k j +1) α . (3.22)From (3.18), (3.20) and (3.22), we bound the contribution to (cid:13)(cid:13) kN ( η δ z ω ) k X , −
12 + (cid:13)(cid:13) L p (Ω) in this case by p δ ∞ X k ,k ,k =0 ( Cpδ ) k ( Cpδ ) k ( Cpδ ) k × X n ∈ Z X ( n ,n ,n ) ∈ Γ( n ) (cid:8) n ( n − n )( n − n ) (cid:9) − Y j =1 h n j i (2 k j +1) α ! By choosing small δ = δ ( C ) > Cpδ = Cδ − θ < . p δ (3.23)for α ≥ ◦ Subcase (D.2): σ ≪ σ max . Assume that σ ∼ σ max . A similar argument holds when σ ∼ σ max or σ ∼ σ max .From (3.17), H¨older’s inequality with q large (cid:0) = + q (cid:1) , Minkowski’s integral inequal-ity, and Lemma 2.11, we have (cid:13)(cid:13) kN ( η δ z ω ) k X , −
12 + (cid:13)(cid:13) L p (Ω) . p ∞ X k =0 ∞ X k ,k ,k =0 k = k + k + k ( Cpδ ) k X n ∈ Z X n = n − n + n n = n ,n k c k ,k ,k n ,n ,n ( τ, δ ) k L qτ ! (3.24)for any p ≥ q . By integration by parts, we have | ∂ k b η ( τ ) | = (cid:12)(cid:12)(cid:12)(cid:12) | τ | β ˆ d β dt β (cid:0) t k η ( t ) (cid:1) e itτ dt (cid:12)(cid:12)(cid:12)(cid:12) for τ = 0. In particular, with β = 1, we have k ∂ k b η ( τ ) k L qq +2 τ ( | τ | & K ) . C k K − q +22 q . (3.25) By a change of variables (as in (3.22)) and Young’s inequality, (3.25) with K ∼ δσ , (3.20),and (3.21), we can bound the contribution to k c k ,k ,k n ,n ,n ( τ, δ ) k L qτ in this case by δ − q (cid:18) Y j =1 h n j i (2 k j +1) α (cid:19) k ∂ k b η ( τ ) k L qq +2 τ ( | τ | & K ) (cid:13)(cid:13) F − ( ∂ k b η ) F − ( ∂ k b η ) (cid:13)(cid:13) L t ≤ C k δ Y j =1 h n j i (2 k j +1) α (cid:8) n ( n − n )( n − n ) (cid:9) − q +22 q . (3.26)Hence, by choosing q ≫ (cid:13)(cid:13) kN ( η δ z ω ) k X , −
12 + (cid:13)(cid:13) L p (Ω) in this case is also bounded by . p δ . (3.27)Finally, by Chebyshev’s inequality with (3.23) and (3.27), we have P (cid:16) kN ( η δ z ω ) k X , −
12 + > λ (cid:17) ≤ C p λ − p p p δ p for any λ >
0. Letting λ = Cp δ and p = δ − θ as in (3.19), we have P (cid:16) kN ( η δ z ω ) k X , −
12 + > Cδ − θ (cid:17) ≤ e − p ln √ p ≤ e − δc for all α ≥
0. In other words, we have kN ( η δ z ω ) k X , −
12 + ≤ Cδ − for α ≥
0, outside an exceptional set of probability . e − δc .This completes the proof of the nonlinear estimate (3.2) and hence the proof of Theorem 2for 0 < α ≤ . 4. Local theory, Part 2: α = 0The remaining part of this paper is devoted to the α = 0 case. Namely, we considerthe white noise initial data. In this section, we present the proof of almost sure local well-posedness (Theorem 2) by establishing convergence of smooth approximating solutions. Thekey ingredients are Propositions 4.1 and 4.2, whose proofs will be presented in Sections 6and 7, respectively.4.1. Partially iterated Duhamel formulation.
In Section 1, we introduced the randomgauge transform J ω in (1.41) and converted the renormalized 4NLS (1.6) into the randomequation (1.43) for w = J ω ( u ). In the following, we study the Duhamel formulation (1.47)for this random equation. Define I ( w , w , w )( t ) := − i ˆ t S ( t − t ′ ) N ω ( w , w , w )( t ′ ) dt ′ , I ( w )( t ) := − i ˆ t S ( t − t ′ ) N ω ( w )( t ′ ) dt ′ , (4.1)where N ω ( w , w , w ) is defined by N ω ( w , w , w )( x, t ) := X n ∈ Z e inx X Γ( n ) e it Ψ ω (¯ n ) b w ( n , t ) b w ( n , t ) b w ( n , t ) NLS WITH WHITE NOISE INITIAL DATA 31 with the random phase function Ψ ω defined in (1.45) and N ω ( w ) is as in (1.46). By setting I ( w ) := I ( w, w, w ), we define I ( w ) := I ( w ) + I ( w ). Then, we can write the Duhamelformulation (1.47) for w = J ω ( u ) as w = S ( t ) u ω + I ( w ) , (4.2)If we were to apply the strategy for the α > J ω ( z ω ) = S ( t ) u ω , we would write v = w − S ( t ) u ω and try to solve the fixed pointproblem for v : v = I ( v + S ( t ) u ω ) + I ( v + S ( t ) u ω ) (4.3)by a contraction argument. As mentioned in Section 1, however, we are not able to solvethe fixed point problem (4.3) by a contraction argument. In the following, we reformulatethe equation by assuming that w is a solution to (4.2) and study the reformulated problem.Recalling that b w ( n,
0) = g n and that w satisfies the equation (1.43), we formally have | b w ( n, t ) | − | g n | = ˆ t ddt | w ( n, t ′ ) | dt ′ = − i ˆ t X Γ( n ) e it Ψ ω (¯ n ) b w ( n , t ′ ) b w ( n , t ′ ) b w ( n , t ′ ) b w ( n, t ′ ) dt ′ =: E n ( w, w, w, w )( t ) . (4.4)In view of (1.46), (4.1), and (4.4), we then have I ( w ) = i ˆ t S ( t − t ′ ) X n ∈ Z e inx E n ( w, w, w, w )( t ′ ) b w ( n, t ′ ) dt ′ for a solution w to (1.43). We denote by e I ( w ) the quintilinear operator e I ω ( w, w, w, w, w )given by e I ω ( w , w , w , w , w )( x, t ) := ˆ t S ( t − t ′ ) X n ∈ Z e inx E n ( w , w , w , w )( t ′ ) b w ( n, t ′ ) dt ′ . Then, for a solution w to (1.43), the equality I ( w ) = e I ( w ) (4.5)formally holds. As a result, we can rewrite (4.2) as the following partially iterated Duhamelformulation with cubic and quintic nonlinearities: w = S ( t ) u ω + I ( w ) + e I ( w ) . (4.6)We then obtain the following fixed point problem for v = w − S ( t ) u ω : v = I ( v + S ( t ) u ω ) + e I ( v + S ( t ) u ω ) . (4.7)It turns out that the quintic term e I ( v + S ( t ) u ω ) has a better regularity property than theoriginal cubic resonant nonlinearity I ( v + S ( t ) u ω ), which enables us to solve the fixed pointproblem (4.7) for v by a contraction argument. See Remark 4.4 below. Note, however, thatin deriving the equation (4.7), we used the a priori equality (4.5), which only holds for asolution w = S ( t ) u ω + v to (4.2).In order to overcome this issue, we use an approximation method to construct a solutionto (1.6). To be more precise, we construct a local solution u to (1.6) as a limit of a sequence { u N } N ∈ N of smooth solutions with smooth initial data u ω ,N . For simplicity of thepresentation, we only consider the following frequency-truncated data: u ω ,N := π N u ω = X | n |≤ N g n ( ω ) e inx in the following. Here, π N is the Dirichlet frequency projection onto the frequencies {| n | ≤ N } defined in (2.4). See Remark 4.4 (ii) for the case of smooth initial data given bymollification as in (1.4).Letting g Nn := | n |≤ N · g n = ( g n , if | n | ≤ N, , if | n | > N, (4.8)we have u ω ,N ( x ) = X n ∈ Z g Nn ( ω ) e inx . Define a truncated version of the random phase function Ψ ω in (1.45) by settingΨ ωN := | g Nn ( ω ) | − | g Nn ( ω ) | + | g Nn ( ω ) | − | g Nn ( ω ) | . (4.9)We also set Ψ ω ∞ = Ψ ω .Let N ∈ N . Then, we have u ω ,N ∈ C ∞ ( T ) almost surely. Hence, by Proposition 1.1in [54], there exists a unique global-in-time solution u N to (1.6) with u N | t =0 = u ω ,N .Furthermore, by introducing the truncated random gauge transform: w N ( x, t ) = J ωN ( u N ) := X n ∈ Z e inx − it | g Nn ( ω ) | c u N ( n, t ) (4.10)with g Nn in (4.8), we see that w N satisfies a modified version of the random equation (1.43): ( i∂ t w N = ∂ x w N + N ω ,N ( w N ) + N ω ,N ( w N ) w | t =0 = u ω ,N , (4.11)where N ω ,N ( w ) = N ω ,N ( w, w, w ) and N ω ,N ( w ) are defined by N ω ,N ( w , w , w )( x, t ) := X n ∈ Z e inx X Γ( n ) e it Ψ ωN (¯ n ) b w ( n , t ) b w ( n , t ) b w ( n , t ) , (4.12) N ω ,N ( w )( x, t ) := − X n ∈ Z e inx (cid:2) | b w ( n, t ) | − | g Nn ( ω ) | (cid:3) b w ( n, t ) . By writing (4.11) in the Duhamel formulation, we have w N = S ( t ) u ω ,N + I ω ,N ( w N ) + I ω ,N ( w N ) , (4.13)where I ω ,N ( w ) := I ω ,N ( w, w, w ) and I ω ,N ( w ) are defined by I ω ,N ( w , w , w ) := − i ˆ t S ( t − t ′ ) N ω ,N ( w , w , w )( t ′ ) dt ′ , (4.14) I ω ,N ( w ) := − i ˆ t S ( t − t ′ ) N ω ,N ( w )( t ′ ) dt ′ . (4.15) NLS WITH WHITE NOISE INITIAL DATA 33
Noting that w N is almost surely a smooth solution to (4.11) with the truncated randominitial data u ω ,N , we have | d w N ( n, t ) | − | g Nn | = ˆ t ddt | d w N ( n, t ′ ) | dt ′ = − i ˆ t X Γ( n ) e it ′ Ψ ωN (¯ n ) d w N ( n , t ′ ) d w N ( n , t ′ ) d w N ( n , t ′ ) d w N ( n, t ′ ) dt ′ =: E Nn ( w N , w N , w N , w N )( t ) . (4.16)This motivates us to define a truncated version of e I by e I ω ,N ( w , w ,w , w , w )( x, t ):= ˆ t S ( t − t ′ ) X n ∈ Z e inx E Nn ( w , w , w , w )( t ′ ) b w ( n, t ′ ) dt ′ . (4.17)We also set e I ω ,N ( w N ) = e I ω ,N ( w N , w N , w N , w N , w N ). Then, we can rewrite (4.13) as thefollowing partially iterated Duhamel formulation: w N = S ( t ) u ω ,N + I ω ,N ( w N ) + e I ω ,N ( w N ) . (4.18)Note that while I ω ,N ( w ) in (4.15) corresponds to the resonant part of the nonlinearity,only the non-resonant contribution survives in (4.16) after substituting the equation, thusyielding a non-resonant structure in the quintic term e I ω ,N ( w N ).In order to prove Theorem 2, we need to show that { w N } N ∈ N converges in some functionspace and that the limit w = lim N →∞ w N is a distributional solution to (1.43). We nowstate the crucial nonlinear estimates in our analysis. Recall from (2.5) that given N ∈ Z ≥− = Z ∩ [ − , ∞ ), π ⊥ N denotes the frequency projection operator onto the (spatial)frequencies {| n | > N } with the understanding that π ⊥− = Id . Proposition 4.1.
Let < β, γ ≪ and b > be sufficiently close to . Then, there exist c, θ > and small δ > with the following property. For each < δ < δ , there exists Ω δ ⊂ Ω with P (Ω cδ ) < e − δc such that for each ω ∈ Ω δ , we have kI ω ,N ( w , w , w ) k X ,b,δ ≤ Cδ θ Y j =1 (cid:16) h N j i − β + k w j − S ( t ) π ⊥ N j ( u ω ) k X − γ,b,δ (cid:17) , (4.19) uniformly in N j ∈ Z ≥− , j = 1 , , , and N ≥ N ( ω, δ ) for some N ( ω, δ ) ∈ N . Here, weallow N = ∞ as well. Proposition 4.2.
Let < β, γ ≪ and b > be sufficiently close to . Then, there exist c, θ > and small δ > with the following property. For each < δ < δ , there exists Ω δ ⊂ Ω with P (Ω cδ ) < e − δc such that for each ω ∈ Ω δ , we have k e I ω ,N ( w , w , w , w , w ) k X ,b,δ ≤ Cδ θ Y j =1 (cid:16) h N j i − β + k w j − S ( t ) π ⊥ N j ( u ω ) k X − γ,b,δ (cid:17) , (4.20) uniformly in N j ∈ Z ≥− , j = 1 , . . . , , and N ≥ N ( ω, δ ) for some N ( ω, δ ) ∈ N . Here, weallow N = ∞ as well. We remark that both estimates (4.19) and (4.20) exhibit some smoothing effect. Themain reason is that both nonlinearities I ω ,N ( w , w , w ) and e I ω ,N ( w , w , w , w , w ) possessnon-resonant structures. In the next subsection, we present the proof of Theorem 2 byassuming Propositions 4.1 and 4.2. We present the proofs of these propositions in Sections 6and 7. By careful analysis, we reduce these nonlinear estimates to boundedness propertiesof certain random multilinear functionals of the white noise. Remark 4.3.
In deriving E n ( w, w, w, w ) in (4.4), we made use of a key cancellation:Re (cid:16) i F (cid:0) N ω ( w ) (cid:1) ( n ) b w ( n ) (cid:17) = 0 , (4.21)i.e. the resonant part of the nonlinearity disappears in (4.4). Interestingly, a similar can-cellation is used in the context of the modified scattering analysis of the one-dimensionalcubic nonlinear Schr¨odinger equation on the real line: i∂ t u = ∂ x u + | u | u (4.22)with localized initial data. More precisely, if we set v ( t ) = e it∂ x u ( t ), then by a stationaryphase argument, (4.22) can be rewritten as ∂ t b v ( ξ, t ) = cit − | b v ( ξ, t ) | b v ( ξ, t ) + R ( ξ, t ) , ξ ∈ R , (4.23)where c is a real constant and b v denotes the Fourier transform of v on the real line. Thetrilinear remainder term R ( ξ, t ) decays (in a suitable functional framework) faster than t − and therefore the principal part of the nonlinearity for analyzing long-time behavior isgiven by cit − | b v ( ξ, t ) | b v ( ξ, t ), which is the analogue of the resonant part of the nonlinearity N ω ( w ) in our problem. Note that the key cancellation in the context of (4.23) isRe (cid:16) it − | b v ( ξ, t ) | b v ( ξ, t ) b v ( ξ, t ) (cid:17) = 0 . (4.24)The cancellation (4.24) appears in computing ∂ t | b v ( ξ, t ) | , which is the analogue of thecomputation (4.4) in the context of (4.23). We point out strong similarity between (4.21)and (4.24).4.2. Proof of Theorem 2: the α = 0 case. In this subsection, we present the proof ofTheorem 2 for α = 0. More precisely, by applying Propositions 4.1 and 4.2 to the iteratedDuhamel formulation (4.18) we prove that, for each 0 < δ ≪
1, there exists Ω δ ⊂ Ω with P (Ω cδ ) ≤ e − δc such that for ω ∈ Ω δ , the following statements hold:(i) The sequence { w N − S ( t ) u ω ,N } N ∈ N is Cauchy in X , + ,δ .(ii) The limit w of w N satisfies the equation (1.43) in the distributional sense with thewhite noise initial data u ω .(iii) The solution w is unique in the class: S ( t ) u ω + B , where B denotes the ball ofradius 1 in X , + ,δ centered at the origin.Given 0 < β, γ ≪ b > sufficiently close to , apply Propositions 4.1 and 4.2 andconstruct a set Ω δ ⊂ Ω with P (Ω cδ ) < e − δc for each 0 < δ ≪ ω ∈ Ω δ and hence the parameter N ( ω, δ ) in Propositions 4.1 and 4.2 is a fixed number. In what follows, unless otherwisestated, the number N and M are always assumed to be greater than N ( ω, δ ). NLS WITH WHITE NOISE INITIAL DATA 35 (i) By setting v N = w N − S ( t ) u ω ,N , it follows from (4.13) and (4.18) that v N satisfies v N = I ω ,N ( v N + S ( t ) u ω ,N ) + I ω ,N ( v N + S ( t ) u ω ,N )= I ω ,N ( v N + S ( t ) u ω ,N ) + e I ω ,N ( v N + S ( t ) u ω ,N ) , (4.25)where I ω ,N , I ω ,N and e I ω ,N are as in (4.14), (4.15) and (4.17). Note that the second equalityholds since w N is a classical solution to (4.11).We first claim that k v N k X ,
12 + ,δ ≤ δ > N j = −
1, i.e. π ⊥ N j = Id) to (4.25), we have k v N k X ,
12 + ,δ . δ θ (1 + k v N k X − γ,
12 + ,δ ) + δ θ (1 + k v N k X − γ,
12 + ,δ ) ≤ δ θ (1 + k v N k X ,
12 + ,δ ) + δ θ (1 + k v N k X ,
12 + ,δ ) . (4.27)Then by choosing δ > { v N } N ∈ N is a Cauchy sequence in X , + ,δ . By possiblyrestricting to smaller δ >
0, we prove k v M − v N k X ,
12 + ,δ . N − min( β,γ ) (4.28)for any ω ∈ Ω δ and M ≥ N ≥ N ( ω, δ ). The bound (4.28) shows that v N converge in X , + ,δ for each ω ∈ Ω δ and thus w N = v N + S ( t ) u ω ,N converge to w = v + S ( t ) u ω in C ([ − δ, δ ]; H s ( T )), s < − .We now prove (4.28). From (4.25), we have k v M − v N k X ,
12 + ,δ ≤ kI ω ,M ( v M + S ( t ) u ω ,M ) − I ω ,N ( v N + S ( t ) u ω ,N ) k X ,
12 + ,δ + k e I ω ,M ( v M + S ( t ) u ω ,M ) − e I ω ,N ( v N + S ( t ) u ω ,N ) k X ,
12 + ,δ . (4.29)We first estimate the first term on the right-hand side of (4.29). From (4.12) and (4.14)with w N = v N + S ( t ) u ω ,N , we have kI ω ,M ( w M ) − I ω ,N ( w N ) k X ,
12 + ,δ ≤ kI ω ,M ( w M ) − I ω ,N ( w M ) k X ,
12 + ,δ + kI ω ,N ( w M ) − I ω ,N ( w N ) k X ,
12 + ,δ ≤ kI ω ,M ( w M ) − I ω ,N ( w M ) k X ,
12 + ,δ + kI ω ,N ( w M − w N , w M , w M ) k X ,
12 + ,δ + kI ω ,N ( w N , w M − w N , w M ) k X ,
12 + ,δ + kI ω ,N ( w N , w N , w M − w N ) k X ,
12 + ,δ . (4.30)In the following, we only treat the first two terms since the other two terms can be treatedin a similar manner. Using the trilinear structure of I ω ,L for L ∈ { M, N } , we have I ω ,L ( w M ) = I ω ,L ( π ⊥ N w M , w M , w M ) + I ω ,L ( π N w M , π ⊥ N w M , w M )+ I ω ,L ( π N w M , π N w M , π ⊥ N w M ) + I ω ,L ( π N w M , π N w M , π N w M ) . The key point is to observe that it follows directly from the definitions (4.12) and (4.14)with (4.9) that for M ≥ N I ω ,M ( π N w M , π N w M , π N w M ) − I ω ,N ( π N w M , π N w M , π N w M ) = 0 . Therefore, in order to control kI ω ,M ( w M ) − I ω ,N ( w M ) k X ,
12 + ,δ , we only need to bound kI ω ,L ( π ⊥ N w M , w M , w M ) k X ,
12 + ,δ , kI ω ,L ( π N w M , π ⊥ N w M , w M ) k X ,
12 + ,δ , kI ω ,L ( π N w M , π N w M , π ⊥ N w M ) k X ,
12 + ,δ for L = M and N . We only consider the first one since the others can be treated similarly.From Proposition 4.1 and (4.26), we have kI ω ,L ( π ⊥ N w M , w M , w M ) k X ,
12 + ,δ . δ θ (cid:0) N − β + k π ⊥ N v M k X − γ,
12 + ,δ (cid:1)(cid:0) k v M k X − γ,
12 + ,δ (cid:1) . δ θ (cid:0) N − β + N − γ k v M k X ,
12 + ,δ (cid:1) . δ θ N − min( β,γ ) , where we used the fact that w N = v N + S ( t ) u ω ,N . Therefore, we obtain kI ω ,M ( w M ) − I ω ,N ( w M ) k X ,
12 + ,δ . δ θ N − min( β,γ ) . Next, we proceed with estimating the second term on the right-hand side of (4.30): kI ω ,N ( w M − w N , w M , w M ) k X ,
12 + ,δ ≤ kI ω ,N ( v M − v N + S ( t ) π ⊥ N u ω , v N + S ( t ) u ω ,N , v N + S ( t ) u ω ,N ) k X ,
12 + ,δ + kI ω ,N ( S ( t ) π ⊥ M u ω , v N + S ( t ) u ω ,N , v N + S ( t ) u ω ,N ) k X ,
12 + ,δ . (4.31)By applying Proposition 4.1 to (4.31) with N = N or M and N = N = −
1, we obtain kI ω ,N ( w M − w N , w M , w M ) k X ,
12 + ,δ . δ θ (cid:16) N − β + k v M − v N k X − γ,
12 + ,δ (cid:17)(cid:16) k v M k X − γ,
12 + ,δ (cid:17) . δ θ (cid:16) N − β + k v M − v N k X − γ,
12 + ,δ (cid:17) . (4.32)Similarly, we can estimate the second term on the right-hand side of (4.29) by applyingProposition 4.2 and obtain k e I ω ,M ( w M ) − e I ω ,N ( w N ) k X ,
12 + ,δ . δ θ (cid:16) N − min( β,γ ) + k v M − v N k X ,
12 + ,δ (cid:17) . (4.33)Putting (4.29), (4.32), and (4.33) together, we obtain k v M − v N k X ,
12 + ,δ ≤ Cδ θ N − min( β,γ ) + Cδ θ k v N − v M k X ,
12 + ,δ . Therefore, by choosing δ >
NLS WITH WHITE NOISE INITIAL DATA 37 (ii) Next, we show that the limit w = v + S ( t ) u ω satisfies the Duhamel formulation (4.2): w = S ( t ) u ω + I ( w ) + I ( w ) , (4.34)in the distributional sense, locally in time. We first recall the following definition of theFourier-Lebesgue spaces F L s,p ( T ). Given s ∈ R and 1 ≤ p ≤ ∞ , define the Fourier-Lebesgue space F L s,p ( T ) by the norm: k f k F L s,p ( T ) := kh n i s b f ( n ) k ℓ pn ( Z ) . Then, it is easy to see that the white noise u ω in (1.10) (with α = 0) almost surely belongsto F L s,p ( T ) if and only if sp < − s < p = ∞ .Given 0 < δ ≪
1, let ω ∈ Ω δ . Then, it follows from Lemma 2.7 that the truncated ran-dom linear solution S ( t ) u ω ,N converges to S ( t ) u ω in C ([ − δ, δ ]; F L − ε, ∞ ( T )) for any ε > v N converges to v in X , + ,δ , and hence in C ([ − δ, δ ]; L ( T )). Puttingtogether, we see that w N converges to w in C ([ − δ, δ ]; F L − ε, ∞ ( T )). Hence, from the def-initions (4.1) and (4.15) of I and I ,N , we conclude that I ,N ( w N ) converges to I ( w )in C ([ − δ, δ ]; F L − ε, ∞ ( T )). On the other hand, from (4.30), we see that that I ,N ( w N )converges to I ( w ) in X , + ,δ . Together with the convergence of w N to w , we have shownthat each term in the truncated Duhamel formulation (4.13) converges to the correspondingterm in (4.34). Recalling that w N satisfies (4.13), we conclude that w is a solution to theDuhamel formulation (4.34) in the distributional sense.In Step (i), we already showed that w satisfies the iterated formulation (4.5). Thus, asa byproduct, we have verified that I ( w ) = e I ( w ) , for the solution w constructed in Step (i).(iii) Lastly, we turn to the uniqueness issue. Given 0 < δ ≪
1, fix ω ∈ Ω δ . Let w = S ( t ) u ω + v be the solution to (4.2) with the white noise initial data u ω constructed in Steps (i)and (ii). Suppose that there exists another solution e w to (4.2) of the form e w = S ( t ) u ω + e v for some e v ∈ B ⊂ X , + ,δ . Since such e w is also a solution to (1.43), by repeating theargument in Subsection 4.1, we see that e w satisfies the iterated Duhamel formulation (4.6): e w = S ( t ) u ω + I ( e w ) + e I ( e w ) . Then, by repeating the argument in Step (i) with Propositions 4.1 and 4.2, we obtain k v − e v k X ,
12 + ,δ ≤ Cδ θ k v − e v k X ,
12 + ,δ ≤ k v − e v k X ,
12 + ,δ for δ > v = e v in X , + ,δ . This proves uniqueness in the class S ( t ) u ω + B .This completes the proof of Theorem 2 when α = 0. Remark 4.4. (i) By a continuity argument, we can easily upgrade the uniqueness of w in S ( t ) u ω + B to uniqueness of w in the class S ( t ) u ω + X , + ,δ Note that Lemma 2.7 appears in the proof of Propositions 4.1 and 4.2 (see also Lemma A.3) and thuswe may assume that the conclusion of Lemma 2.7 holds on the set Ω δ constructed in Step (i). See Remark 2.9 in [18]. By inverting the random gauge transform J ω in (1.41), we thenobtain uniqueness of u in the class Z ( u ω ) + X , + ,δ − ,ω where Z is as in (1.11) and X , + ,δ − ,ω is the local-in-time version of the random Fourierrestriction norm space X , + − ,ω defined in (A.2).(ii) Let u ω ,m = u ω ∗ ρ m be the regularization of the white noise u ω by mollification via amollification kernel ρ m in (1.4). Denote by w m the solution to the gauged equation (1.43)with w m | t =0 = u ω ,m . Then, by proceeding as above, one can easily establish convergenceof w m to e w in the class S ( t ) u ω + B , satisfying (4.2). Then, by the uniqueness proved inStep (iii) above, we conclude that w = e w . This proves independence of the mollificationkernel.5. Global well-posedness and invariance of the white noise measure
In this section, we extend the local solutions constructed in Theorem 2 to global solutionsand prove invariance of the white noise measure (1.7) with α = 0 under the flow of therenormalized 4NLS (1.6). The main ingredient is Bourgain’s invariant measure argument [5,6].5.1. Invariance of the white noise measure under the truncated 4NLS.
In thissection, we will denote the white noise measure by µ . For fixed ε > µ is a measure on H − − ε ( T ), defined as the pushforward of P under the map from (Ω , F , P ) to H − − ε ( T )(equipped with the Borel σ -algebra) given by ω u ω = X n ∈ Z g n ( ω ) e inx . Given N ∈ N , we also define the finite-dimensional white noise measure µ N on E N =span (cid:8) e inx , | n | ≤ N (cid:9) as the pushforward of P under the map from (Ω , F , P ) to E N givenby ω π N u ω , where π N is the Dirichlet projector onto the frequencies {| n | ≤ N } definedin (2.4).Consider the frequency-truncated version of the renormalized 4NLS (1.6): ( i∂ t u N = ∂ x u N + π N ( N ( u N )) u N ( x,
0) = π N u ( x ) ∈ E N , (5.1)where N ( u ) denotes the renormalized nonlinearity in (1.16). It is easy to see that thesolution u N to (5.1) exists globally in time. Let e Θ N ( t ) denote the flow map for (5.1). Bythe Liouville theorem, we see that the truncated white noise measure µ N is invariant under e Θ N ( t ). Following [13], we also consider the extension of (5.1) to infinite dimensions, wherethe higher modes evolve according to linear dynamics: ( i∂ t u N = ∂ x u N + π N ( N ( π N u N )) u N ( x,
0) = u ( x ) ∈ H − − ε ( T ) . (5.2) Here, our assumption that the symbol b ρ m ≡ − c m, c m ] for some c >
0, independent of m ∈ N provides a simplification of the argument as compared to a general mollification kernel. NLS WITH WHITE NOISE INITIAL DATA 39
Let Θ N ( t ) denote the flow map for (5.2). Then, we haveΘ N ( t ) = e Θ N ( t ) π N + S ( t ) π ⊥ N , where π ⊥ N = Id − π N . Denoting by E ⊥ N the orthogonal complement of E N in H − − ε ( T ), let µ ⊥ N be the white noise measure on E ⊥ N (i.e. the image measure under the map: ω π ⊥ N u ω ).Note that µ ⊥ N is invariant along the linear flow on E ⊥ N (this is a consequence of the invarianceof complex-valued Gaussians under rotations). Therefore, by writing dµ = dµ N ⊗ dµ ⊥ N , we conclude the following invariance of µ under Θ N ( t ). Lemma 5.1.
For each t ∈ R , the white noise measure µ is invariant under the flow map Θ N ( t ) on H − − ε ( T ) . Almost sure global well-posedness.
By using the invariance of the white noisemeasure for (5.2) (Lemma 5.1) and a PDE approximation argument, we have the follow-ing lemma, guaranteeing long time existence with large probability for the renormalized4NLS (1.6).
Lemma 5.2.
There exist small < ε < ε ≪ and β > such that given any small κ > and T > , there exists a measurable set Σ κ,T ⊂ H − − ε ( T ) such that (i) µ (Σ cκ,T ) < κ and (ii) for any u ∈ Σ κ,T , there exists a ( unique ) solution u ∈ Z ( u ) + C ([ − T, T ]; L ( T )) ⊂ C ([ − T, T ]; H − − ε ( T )) to the renormalized 4NLS (1.6) with u | t =0 = u , where Z is defined in (1.11) . Furthermore,given any large N ≫ , we have (cid:13)(cid:13)(cid:13) u ( t ) − Θ N ( t )( u ) (cid:13)(cid:13)(cid:13) C ([ − T,T ]: H − − ε ( T )) . C ( κ, T ) N − β , where Θ N ( t ) denotes the flow map for (5.2) . For the uniqueness statement, see Remark 4.4 (i).
Proof.
Once we have almost sure local well-posedness (Theorem 2), the proof of Lemma 5.2is by now standard. In the following, we only sketch key parts of the argument and referto [5, 6, 15, 60, 61] for further details.Given a solution u N to (5.2), we define w N = J ωN ( u N ) as in the proof of Theorem 2,where J ωN denotes the truncated random gauge transform in (4.10). Namely, we have w N ( x, t ) = X n ∈ Z e inx − it | g Nn ( ω ) | c u N ( n, t ) , where g Nn is as in (4.8). The key observation is that convergence properties of w N in aFourier lattice can be directly converted to convergence properties of u N . For M > N ≥ w M − w N = (cid:0) π M w M − π N w N (cid:1) + π ⊥ M w M − π ⊥ N w N . Namely, in a space where a norm depends only on the sizes of the Fourier coefficients. For example, H s ( T ) and F L s,p ( T ). The convergence of ( π M w M − S ( t ) u ω ,M ) − ( π N w N − S ( t ) u ω ,N ) can be shown exactly as inthe proof of Theorem 2, locally in time, i.e. in X , + ,δ ⊂ C ([ − δ, δ ]; L ( T )), which yieldsconvergence of π M w M − π N w N in C ([ − δ, δ ]; H − − ε ( T )). On the other hand, the secondand third terms decay like N − β for some β > (cid:3) Once we have Lemma 5.2, the desired almost sure global well-posedness follows fromthe Borel-Cantelli lemma. Given κ >
0, let T j = 2 j and κ j = κ j , j ∈ N . By applyingLemma 5.2, construct a set Σ κ j ,T j and setΣ κ := ∞ \ j =1 Σ κ j ,T j . (5.3)Then, we have µ (Σ cκ ) < κ and for any u ∈ Σ κ , there exists a unique global-in-time solutionto the renormalized 4NLS (1.6) with u | t =0 = u . Finally, setΣ := ∞ [ n =1 Σ n . Then, we have µ (Σ c ) = 0 and for any u ∈ Σ, there exists a unique global-in-time solutionto the renormalized 4NLS (1.6) with u | t =0 = u . This proves almost sure global well-posedness.5.3. Invariance of the white noise measure.
Let Θ( t ) be the flow map for the renor-malized 4NLS (1.6) defined on the set Σ of full probability constructed above. Our goalhere is to show that ˆ Σ F (cid:0) Θ( t )( u ) (cid:1) dµ ( u ) = ˆ Σ F ( u ) dµ ( u ) (5.4)for any F ∈ L ( H − − ε ( T ) , dµ ) and any t ∈ R . By a density argument, it suffices toprove (5.4) for continuous and bounded F .Fix t ∈ R . By Lemma 5.1, we have ˆ Σ F (cid:0) Θ N ( t )( u ) (cid:1) dµ ( u ) = ˆ Σ F ( u ) dµ ( u ) . (5.5)Fix small δ >
0. The boundedness of F implies that for any sufficiently small κ >
0, wehave (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ˆ Σ cκ F (cid:0) Θ( t )( u ) (cid:1) dµ ( u ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ˆ Σ cκ F (cid:0) Θ N ( t )( u ) (cid:1) dµ ( u ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) < δ, (5.6)where Σ κ is as in (5.3). Fix one such κ >
0. Then, by Lemma 5.2, we have k Θ( t )( u ) − Θ N ( t )( u ) k H − − ε ≤ C ( κ, t ) N − β NLS WITH WHITE NOISE INITIAL DATA 41 for any u ∈ Σ κ and sufficiently large N ≫
1. Hence, by continuity of F , we have (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ˆ Σ κ F (cid:0) Θ( t )( u ) (cid:1) dµ ( u ) − ˆ Σ κ F (cid:0) Θ N ( t )( u ) (cid:1) dµ ( u ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) < δ, (5.7)for any sufficiently large N ≫
1. Combining (5.5), (5.6), and (5.7) and taking δ →
0, weobtain (5.4).5.4.
Proof of Theorem 1.
The proof of Theorem 1 follows from the arguments presentedin the proofs of Theorems 2 and 3.6.
Nonlinear estimate I: non-resonant part
In this section, we present the proof of Proposition 4.1.6.1.
Probabilistic estimates.
We begin by presenting several probabilistic estimates thatwill be used to prove Proposition 4.1. The proofs of these lemmas are presented in Appen-dix A.We first recall some notations. Let η ∈ C ∞ c ( R ) be a smooth non-negative cutoff functionsupported on [ − ,
2] with η ≡ − , n ) = { ( n , n , n ) ∈ Z : n = n − n + n and n , n = n } , Φ(¯ n ) = Φ( n , n , n , n ) = n − n + n − n , Ψ ωN (¯ n ) = | g Nn ( ω ) | − | g Nn ( ω ) | + | g Nn ( ω ) | − | g Nn ( ω ) | . (6.1)where g Nn is as in (4.8). Given s, b ∈ R and δ >
0, the following random functionals S s,b,δj,N , j = 1 , ,
3, play an important role in the proof of Proposition 4.1 (and also in the proof ofProposition 4.2 presented in Section 7): S s,b,δ ,N ( f ) = (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) X n ∈ Z ( n ,n ,n ) ∈ Γ( n ) b f ( n ) b η δ ( τ + Φ(¯ n ) − | g Nn | ) h n i s h n i s h n i s h τ i b (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ℓ n,n ,n L τ (6.2)(observe that there is at most one term in the n summation), S s,b,δ ,N ( f , f ) = (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) X n ,n ∈ Z ( n ,n ,n ) ∈ Γ( n ) b f ( n ) b f ( n ) × b η δ ( τ + Φ(¯ n ) − | g Nn | + | g Nn | ) h n i s h n i s h τ i b (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ℓ n,n L τ , (6.3) S s,b,δ ,N ( f , f , f ) = (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) X Γ( n ) b f ( n ) b f ( n ) b f ( n ) × b η δ ( τ + Φ(¯ n ) − | g Nn | + | g Nn | − | g Nn | ) h n i s h τ i b (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ℓ n L τ . (6.4) In the following, we will take f , f , f as the white noise f = f = f = u ω = X n ∈ Z g n ( ω ) e inx , (6.5)or its frequency truncated version (projected onto high frequencies) π ⊥ N j ( u ω ) = X | n | >N j g n ( ω ) e inx . For simplicity of notations, we set S s,b,δ ,N ( ω ) := S s,b,δ ,N ( π ⊥ N ( u ω )) , (6.6) S s,b,δ ,N ( ω ) := S s,b,δ ,N ( π ⊥ N ( u ω ) , π ⊥ N ( u ω )) , (6.7) S s,b,δ ,N ( ω ) := S s,b,δ ,N ( π ⊥ N ( u ω ) , π ⊥ N ( u ω ) , π ⊥ N ( u ω )) , (6.8)for N , N , N ∈ Z ≥− (recall our convention: π ⊥− = Id). With the notations defined above,we have the following tail estimates for these random functionals. Lemma 6.1.
Let s < , b < , and β > such that s and β are sufficiently close to and b is sufficiently close to . Then, there exist c, κ > and small δ > such that thefollowing statements holds. (i) We have P (cid:18)(cid:26) ω ∈ Ω : sup N ∈ N sup N ∈ Z ≥− h N i β | S s,b,δ ,N ( ω ) | > δ κ (cid:27)(cid:19) < e − δc for any < δ < δ . (ii) Let k = 2 , . Given < δ < δ , define the sets A k by A k := (cid:26) ω ∈ Ω : there exists N = N ( ω, δ ) ∈ N such that sup N ≥ N sup N j ∈ Z ≥− j =1 , ··· ,k (cid:18) k Y j =1 h N j i β (cid:19) | S s,b,δk,N ( ω ) | ≤ δ κ (cid:27) . Then, we have P ( A ck ) < e − δc for any < δ < δ . Given N ∈ N ∪ {∞} , we introduce a random version X s,b + ( ω, N ) of the X s,b -space: k u k X s,b + ( ω,N ) = kh n i s h τ + n + | g Nn ( ω ) | i b b u ( n, τ ) k ℓ n L τ with the understanding that g ∞ n = g n . By slightly losing spatial regularity, we can controlthe random X s,b -norm by the standard X σ,b -norm (with σ > s ) uniformly in u ∈ X σ,b . Strictly speaking, we should denote the dependence of S s,b,δj,N ( ω ) on the parameters N , N , and N .For simplicity of the presentation, however, we suppress such dependence unless it plays an important role. NLS WITH WHITE NOISE INITIAL DATA 43
Lemma 6.2.
Let σ > s and b > . Then, for each K > , there exists a set Ω K ⊂ Ω with P (Ω cK ) < Ce − cK b such that sup N ∈ N ∪{∞} k u k X s,b + ( ω,N ) . (1 + K ) k u k X σ,b In particular, by choosing K = δ − ε for some small ε > , there exists a set Ω δ ⊂ Ω with P (Ω cK ) < Ce − δc such that sup N ∈ N ∪{∞} k u k X s,b + ( ω,N ) . δ − ε k u k X σ,b uniformly in u ∈ X σ,b , for any < δ ≪ . For the proofs of Lemmas 6.1 and 6.2, see Appendix A. In the next subsection, we proveProposition 4.1, assuming these lemmas.6.2.
Proof of Proposition 4.1.
For j = 1 , ,
3, let z j = S ( t ) π ⊥ N j ( u ω ) and set v j = w j − z j .Then, by the linear estimate (Lemma 2.3), it suffices to construct Ω δ ⊂ Ω with P (Ω cδ ) 12 + ,δ ≤ Cδ θ Y j =1 (cid:16) h N j i − β + k v j k X s , 12 + ,δ (cid:17) (6.9)uniformly in N j ∈ Z ≥− , j = 1 , , 3, and N ≥ N ( ω, δ ) for some N ( ω, δ ) ∈ N . By thedefinition (2.2) of the local-in-time space, the estimate (6.9) follows once we prove k η δ ( t ) · N ω ,N ( e v + z , e v + z , e v + z ) k X , − 12 + ≤ Cδ θ Y j =1 (cid:16) h N j i − β + k e v j k X s , 12 + (cid:17) (6.10)for any extension e v j of v j (restricted to the time interval [ − δ, δ ]) onto R , j = 1 , , 3. Forsimplicity of notations, we denote the extension e v j by v j in the following.By duality, we haveLHS of (6.10) = sup k a k X , − ≤ (cid:12)(cid:12)(cid:12)(cid:12) ˆ T × R η δ ( t ) · N ω ,N ( v + z , v + z , v + z ) a ( x, t ) dxdt (cid:12)(cid:12)(cid:12)(cid:12) , (6.11)where η δ is as in (2.3). By (4.12) and expanding the product, we write the double integralin (6.11) as ˆ R η δ ( t ) X n X Γ( n ) e it Ψ ωN (¯ n ) hb v ( n ) b v ( n ) b v ( n ) b a ( n )+ | n | >N ( n ) e − itn g n b v ( n ) b v ( n ) b a ( n ) + similar terms+ (cid:18) Y j =1 | n j | >N j (cid:19) e − it ( n − n ) g n g n b v ( n ) b a ( n ) + similar terms+ (cid:18) Y j =1 | n j | >N j (cid:19) e − it ( n − n + n ) g n g n g n b a ( n ) i dt =: I + II + III + IV , Here and in the following, we suppress the time dependence. where the term I consists of the term with all three factors given by v j ’s, II consists of theterms with one factor of z j and two factors of v j ’s, III consists of the terms with two factorsof z j ’s and one factor of v j , and IV consists of the term with all three factors given by z j ’s. • Estimate on I . Define b ( j ) n = e itn + it | g Nn | h n i s b v j ( n ) and a n = e itn + it | g Nn | h n i s b a ( n ) , (6.12)essentially representing the Fourier transforms of the ungauged interaction representationsof v j and a . Then, we haveI = ˆ R η δ ( t ) X n X Γ( n ) e it Ψ ωN (¯ n ) b v ( n ) b v ( n ) b v ( n ) b a ( n ) dt = X n X Γ( n ) h n i s h n i s h n i s h n i s ˆ R (cid:0) η δ ( t ) e − it Φ(¯ n ) (cid:1) b (1) n b (2) n b (3) n a n dt. By Parseval’s identity in the t variable, we haveI = X n X Γ( n ) h n i s h n i s h n i s h n i s ˆ R b η δ ( τ + Φ(¯ n )) F ( b (1) n b (2) n b (3) n a n )( − τ ) dτ. By Cauchy-Schwarz inequality, we haveI . (cid:18) X n X Γ( n ) h n i s h n i s h n i s h n i s (cid:13)(cid:13)(cid:13)(cid:13) b η δ ( τ + Φ(¯ n )) h τ i − (cid:13)(cid:13)(cid:13)(cid:13) L τ (cid:19) × (cid:18) X n X Γ( n ) (cid:13)(cid:13)(cid:13) h τ i − F ( b (1) n b (2) n b (3) n a n )( τ ) (cid:13)(cid:13)(cid:13) L τ (cid:19) . (6.13)By Lemma 2.4 with (2.3), we have (cid:13)(cid:13)(cid:13)(cid:13) b η δ ( τ − Φ(¯ n )) h τ i − ε (cid:13)(cid:13)(cid:13)(cid:13) L τ . (cid:18) ˆ δ h τ i − ε δ h τ − Φ(¯ n ) i dτ (cid:19) . δ h Φ(¯ n ) i − ε (6.14)for any small ε > 0. Then, by (6.14) and Lemma 2.1 , we can bound the first factor of (6.13)by X n X Γ( n ) h n i s h n i s h n i s h n i s δ h Φ(¯ n ) i − ! . δ , (6.15)provided that s < X n X Γ( n ) (cid:13)(cid:13)(cid:13) h τ i − F ( b (1) n b (2) n b (3) n a n )( τ ) (cid:13)(cid:13)(cid:13) L τ = X n X Γ( n ) (cid:13)(cid:13)(cid:13) b (1) n b (2) n b (3) n a n (cid:13)(cid:13)(cid:13) H − . 
X n X n ,n ,n (cid:13)(cid:13) b (1) n (cid:13)(cid:13) H 12 + (cid:13)(cid:13) b (2) n (cid:13)(cid:13) H 12 + (cid:13)(cid:13) b (3) n (cid:13)(cid:13) H 12 + (cid:13)(cid:13) a n (cid:13)(cid:13) H − = (cid:18) X n (cid:13)(cid:13) a n (cid:13)(cid:13) H − (cid:19) Y j =1 (cid:18) X n j (cid:13)(cid:13) b ( j ) n j (cid:13)(cid:13) H 12 + (cid:19) . (6.16) NLS WITH WHITE NOISE INITIAL DATA 45 By (6.12), Plancherel’s identity, and Lemma 6.2, we have that X n (cid:13)(cid:13) b ( j ) n (cid:13)(cid:13) H 12 + = X n (cid:13)(cid:13) h n i s e itn + it | g Nn | b v j ( n ) (cid:13)(cid:13) H 12 + = X n h n i s (cid:13)(cid:13) h τ + n + | g Nn | i + b v j ( n, τ ) (cid:13)(cid:13) L τ = k v j k X s, 12 ++ ( ω,N ) . δ − ε k v j k X s , 12 + (6.17)and X n (cid:13)(cid:13) a n (cid:13)(cid:13) H − = k a k X s, 12 ++ ( ω,N ) . δ − ε k a k X , 12 + for small ε > 0, outside an exceptional set of probability < Ce − δc . Collecting estimates(6.13), (6.15), (6.16), and (6.17), we obtainI ( ω ) . δ − Y j =1 k v j k X s , 12 + outside an exceptional set of probability < Ce − δc . • Estimate on II. Without loss of generality, we may assume II has only one term:II = ˆ R η δ ( t ) X n X Γ( n ) e it Ψ ωN (¯ n ) h | n | >N e − itn g n b v ( n ) b v ( n ) b a ( n ) i dt. With b ( j ) n and a n as in (6.12), Parseval’s identity yieldsII = X n X Γ( n ) | n | >N g n h n i s h n i s h n i s ˆ R b η δ ( τ + Φ(¯ n ) − | g Nn | ) F ( b (2) n b (3) n a n )( − τ ) dτ. By Cauchy-Schwarz inequality in τ and then in n, n , n , we haveII ≤ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) X n ( n ,n ,n ) ∈ Γ( n ) | n | >N g n h n i s h n i s h n i s b η δ ( τ + Φ(¯ n ) − | g Nn | ) h τ i − (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ℓ n,n ,n L τ × (cid:13)(cid:13)(cid:13) h τ i − F ( b (2) n b (3) n a n )( τ ) (cid:13)(cid:13)(cid:13) ℓ n,n ,n L τ . S s, − ,δ ,N ( ω ) (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) b (2) n b (3) n a n (cid:13)(cid:13) H − τ (cid:13)(cid:13)(cid:13) ℓ n,n ,n . where S s,b,δ ,N ( ω ) is defined in (6.6). Proceeding as in (6.16) and (6.17), we arrive atII ≤ S s, − ,δ ,N ( ω ) k v k X s, 12 ++ ( ω,N ) k v k X s, 12 ++ ( ω,N ) k a k X s, − + ( ω,N ) , Then, by applying Lemmas 6.1 and 6.2, we conclude that there exist small θ, β > s < ω ) . δ θ h N i − β Y j =2 k v j k X s , 12 + outside an exceptional set of probability < Ce − δc . • Estimate on III. Without loss of generality, we assume that III has the following form:III = ˆ R η δ ( t ) X n X Γ( n ) e it Ψ ωN (¯ n ) e − it ( n − n ) χ , · g n g n b v ( n ) b a ( n ) dt where χ , := Q j =1 | nj | >Nj . By Parseval’s identity as before, we haveIII = X n X Γ( n ) χ , · g n g n h n i s h n i s ˆ R b η δ ( τ + Φ(¯ n ) − | g Nn | + | g Nn | ) h τ i − (cid:16) h τ i − F ( b (3) n a n )( − τ ) (cid:17) dτ, where b (3) n and a n are as in (6.12). By Cauchy-Schwarz inequality and proceeding as beforewe obtain III ≤ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) X n ,n ( n ,n ,n ) ∈ Γ( n ) χ , · g n g n b η δ ( τ + Φ(¯ n ) − | g Nn | + | g Nn | ) h n i s h n i s h τ i − (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ℓ n,n L τ × (cid:13)(cid:13)(cid:13) h τ i − F ( b (3) n a n )( τ ) (cid:13)(cid:13)(cid:13) ℓ n,n L τ . S s, − ,δ ,N ( ω ) k v k X s, 12 ++ ( ω,N ) k a k X s, − + ( ω,N ) where S s,b,δ ,N ( ω ) is defined in (6.7). Then, by applying Lemmas 6.1 and 6.2 we conclude thatthere exist small θ, β > s < ω ) . δ θ (cid:18) Y j =1 h N i − β (cid:19) k v k X s , 12 + outside an exceptional set of probability < Ce − δc . • Estimate on IV. Lastly, we consider IV. 
We haveIV = ˆ R η δ ( t ) X n X Γ( n ) e it Ψ ωN (¯ n ) e − it ( n − n + n ) χ , , · g n g n g n b a ( n ) dt = X n X Γ( n ) χ , , · g n g n g n h n i s ˆ R η δ ( t ) e it (Ψ ω ,N (¯ n ) − Φ(¯ n )) a n dt, where χ , , := Q j =1 | nj | >Nj , Ψ ωN is as in (4.9), and Ψ ,N := | g Nn | − | g Nn | + | g Nn | . Byapplying Parseval’s identity and Cauchy-Schwarz inequality as before, we haveIV = X n X Γ( n ) χ , , · g n g n g n h n i s ˆ R b η δ ( τ + Φ(¯ n ) − Ψ ω ,N (¯ n )) c a n ( τ ) dτ ≤ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) X Γ( n ) χ , , · g n g n g n b η δ ( τ + Φ(¯ n ) − Ψ ω ,N (¯ n )) h n i s h τ i − (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ℓ n L τ (cid:13)(cid:13)(cid:13) k a n k H − (cid:13)(cid:13)(cid:13) ℓ n ≤ S s, − ,δ ,N ( ω ) k a k X s, − + ( ω,N ) , NLS WITH WHITE NOISE INITIAL DATA 47 where S s,b,δ ,N ( ω ) is defined in (6.8). Then, by applying Lemmas 6.1 and 6.2 we conclude thatthere exist small θ, β > s < ω ) . δ θ Y j =1 h N i − β outside an exceptional set of probability < Ce − δc .This completes the proof of Proposition 4.1.7. Nonlinear estimate II: resonant part This section is devoted to the proof of Proposition 4.2. Recall from (4.16) and (4.17)that e I ω ,N ( w , w , w , w , w )( x, t ) = ˆ t S ( t − t ′ ) X n ∈ Z e inx E Nn ( w , w , w , w )( t ′ ) b w ( n, t ′ ) dt ′ , where E Nn ( w , w , w , w )( t ) = − i ˆ t X Γ( n ) e it ′ Ψ ωN (¯ n ) b w ( n , t ′ ) b w ( n , t ′ ) b w ( n , t ′ ) b w ( n, t ′ ) dt ′ . Given w j , let v j = w j − S ( t ) π ⊥ N j ( u ω ). Then, we denote by e v j an extension of v j (viewedas a function on the time interval [ − δ, δ ]) and set e w j = S ( t ) π ⊥ N j ( u ω ) + e v j . Let s < < β be sufficiently close to 0. By the linear estimate (Lemma 2.3) and thedefinition (2.2) of the local-in-time space, it suffices to construct Ω δ ⊂ Ω with P (Ω cδ ) < e − δc such that for each ω ∈ Ω δ , we have (cid:13)(cid:13)(cid:13)(cid:13) χ δ ( t ) X n ∈ Z e inx E Nn ( e w , e w , e w , e w )( t ) ce w ( n, t ) (cid:13)(cid:13)(cid:13)(cid:13) X , − 12 + ≤ Cδ θ Y j =1 (cid:16) h N j i − β + k e v j k X s , 12 + (cid:17) (7.1)for any extension e v j of v j , j = 1 , . . . , 5, uniformly in N j ∈ Z ≥− , j = 1 , . . . , 5, and N ≥ N ( ω, δ ) for some N ( ω, δ ) ∈ N . For simplicity of notations, we denote e v j (and e w j , respectively) by v j (and w j , respectively) in the following. We also suppress the timedependence when it is clear from the context.By the (continuous) trivial embedding L ( T × R ) = X , ⊂ X , − + and H¨older’s in-equality, we haveLHS of (7.1) . (cid:13)(cid:13)(cid:13)(cid:13) χ δ ( t ) (cid:18) X n ∈ Z |E Nn ( w , w , w , w )( t ) b w ( n, t ) | (cid:19) (cid:13)(cid:13)(cid:13)(cid:13) L t . δ sup t ∈ [ − δ,δ ] (cid:18) X n ∈ Z |E Nn ( w , w , w , w )( t ) b w ( n, t ) | (cid:19) . Therefore, in order to prove Proposition 4.2, it suffices to provesup t ∈ [ − δ,δ ] (cid:18) X n ∈ Z |E Nn ( w , w , w , w )( t ) b w ( n, t ) | (cid:19) ≤ Cδ − + 5 Y j =1 (cid:16) h N j i − β + k v j k X s , 12 + (cid:17) (7.2)with large probability, where v j is given by v j = w j − S ( t ) π ⊥ N j ( u ω ) . Step (i): Elimination of w . With s < (cid:18) X n ∈ Z |E Nn ( w , w , w , w ) b w ( n ) | (cid:19) ≤ (cid:18) X n ∈ Z h n i − s |E Nn ( w , w , w , w ) | (cid:19) · sup n (cid:12)(cid:12) h n i s | n | >N g n ( ω ) (cid:12)(cid:12) + (cid:18) X n ∈ Z h n i − s |E Nn ( w , w , w , w ) | (cid:19) · sup n |h n i s b v ( n ) | . 
(7.3)By applying Lemma 2.7 with ε = − s > 0, we conclude thatsup n (cid:12)(cid:12) h n i s | n | >N g n ( ω ) (cid:12)(cid:12) ≤ h N i s δ − , (7.4)outside an exceptional set of probability < Ce − δc . We also havesup t ∈ [ − δ,δ ] sup n |h n i s b v ( n, t ) | . k v k X s, 12 + . (7.5)Therefore, we conclude from (7.3), (7.4), and (7.5) that, in order to prove (7.2), it sufficesto show the following estimate:sup t ∈ [ − δ,δ ] (cid:18) X n ∈ Z h n i − s |E Nn ( w , w , w , w ) | (cid:19) ≤ Cδ − + 4 Y j =1 (cid:16) h N j i − β + k v j k X s , 12 + (cid:17) (7.6)outside an exceptional set of probability < Ce − δc , uniformly in N j ∈ Z ≥− , j = 1 , . . . , N ≥ N ( ω, δ ) for some N ( ω, δ ) ∈ N . Step (ii) Smoothing effect . In the remaining part of this section, we present the proofof (7.6). By expanding the product of b w j ( n j , t ) = b v j ( n j , t ) + e − itn j | n j | >N j g n j , we can bound the left-hand side of (7.6) (without the supremum in time) by (cid:13)(cid:13)(cid:13)(cid:13) h n i − s ˆ t X Γ( n ) e it ′ Ψ ωN (¯ n ) b w ( n , t ′ ) b w ( n , t ′ ) b w ( n , t ′ ) b w ( n, t ′ ) dt ′ (cid:13)(cid:13)(cid:13)(cid:13) ℓ n . A + B + C + D + E, (7.7) NLS WITH WHITE NOISE INITIAL DATA 49 where A , B , C , D , and E are given by A := (cid:13)(cid:13)(cid:13)(cid:13) h n i − s ˆ t X Γ( n ) e it ′ (Ψ ωN (¯ n ) − Φ(¯ n )) χ , , . · g n g n g n g n dt ′ (cid:13)(cid:13)(cid:13)(cid:13) ℓ n ,B := (cid:13)(cid:13)(cid:13)(cid:13) h n i − s ˆ t X Γ( n ) e it ′ (Ψ ω ,N (¯ n ) − Φ(¯ n )) χ , , · g n g n g n b (4) n dt ′ (cid:13)(cid:13)(cid:13)(cid:13) ℓ n + similar terms ,C := (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˆ t X Γ( n ) e it ′ (Ψ ω ,N (¯ n ) − Φ(¯ n )) h n i s h n i s χ , · g n g n b (3) n b (4) n dt ′ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ℓ n + similar terms ,D := (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˆ t X Γ( n ) e it ′ ( | g Nn | − Φ(¯ n )) h n i s h n i s h n i s χ · g n b (2) n b (3) n b (4) n dt ′ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ℓ n + similar terms ,E := (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˆ t X Γ( n ) e − it ′ Φ(¯ n ) h n i s h n i s h n i s h n i s b (1) n b (2) n b (3) n b (4) n dt ′ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ℓ n . Here, b ( j ) n is as in (6.12), χ ,...,k = k Y j =1 | n j | >N j , k = 1 , . . . , , and Ψ ωk,N (¯ n ) = k X j =1 ( − j +1 | g Nn j | , k = 2 , . In view of the restriction of the time variable onto [ − δ, δ ], we may freely insert the cutofffunctions χ δ ( t ) and η δ ( t ) in evaluating the terms A , B , C , D , and E . In the following, weprove (7.6) by estimating each term on the right-hand side of (7.7). (ii.1) Estimate on A . Fix κ, ε > | g n ( ω ) | . δ − κ h n i ε (7.8)outside an exceptional set of probability < Ce − δc . Then, for such ω , we split A ( ω ) intotwo parts: A ( ω ) = A ( ω ) + A ( ω ) , where A ( ω ) denotes the contribution from the case n max . δ − κ . Namely, we have A ( ω ) := (cid:13)(cid:13)(cid:13)(cid:13) h n i − s χ δ ( t ) ˆ t X Γ( n ) n max . δ − κ e it ′ (Ψ ωN (¯ n ) − Φ(¯ n )) χ , , . · g n g n g n g n dt ′ (cid:13)(cid:13)(cid:13)(cid:13) ℓ n . Note that if max( N , N , N , N ) ≫ δ − κ , then we have A ( ω ) = 0. Otherwise, using (7.8),we have A ( ω ) . δ sκ − Cκ Y j =1 h N j i − for some C > 0. This yields (7.6). Next, we consider A ( ω ). Since n max ≫ δ − κ , we have | g n ( ω ) | . δ − κ h n i ε ≪ n + ε max . Then,it follows from Lemma 2.1 and (6.1), we have | Ψ ωN (¯ n ) − Φ(¯ n ) | ∼ h Φ(¯ n ) i (7.9)for ( n , n , n ) ∈ Γ( n ). 
Thus, from (7.8), (7.9), and Lemma 2.1, we obtain
\[ A_2(\omega) = \bigg\| \langle n \rangle^{-s} \sum_{\substack{\Gamma(n) \\ n_{\max} \gg \delta^{-\kappa}}} \frac{ e^{it(\Psi^\omega_N(\bar n) - \Phi(\bar n))} - 1 }{ i(\Psi^\omega_N(\bar n) - \Phi(\bar n)) }\, \chi_{1,2,3,4} \cdot g_{n_1} g_{n_2} g_{n_3} g_{n} \bigg\|_{\ell^2_n} \]
\[ \lesssim \delta^{-4\kappa} \bigg( \prod_{j=1}^4 \langle N_j \rangle^{-\beta} \bigg) \bigg\| \sum_{\Gamma(n)} \frac{ n_{\max}^{4\beta + 4\varepsilon - s} }{ \langle \Phi(\bar n) \rangle }\, \chi_{1,2,3,4} \bigg\|_{\ell^2_n} \lesssim \delta^{-4\kappa} \bigg( \prod_{j=1}^4 \langle N_j \rangle^{-\beta} \bigg) \bigg\| \sum_{\Gamma(n)} \frac{ n_{\max}^{4\beta + 4\varepsilon - s - 2} }{ \langle n - n_1 \rangle \langle n - n_3 \rangle } \bigg\|_{\ell^2_n} \lesssim \delta^{-4\kappa} \prod_{j=1}^4 \langle N_j \rangle^{-\beta}, \]
provided that $\varepsilon, \beta, -s > 0$ are sufficiently small.

(ii.2) Estimate on $B$. Without loss of generality, we may assume that $B$ consists only of one term:
\[ B = \bigg\| \langle n \rangle^{-s} \int_0^t \sum_{\Gamma(n)} e^{it'(\Psi^\omega_{3,N}(\bar n) - \Phi(\bar n))}\, \chi_{1,2,3} \cdot g_{n_1} g_{n_2} g_{n_3}\, b^{(4)}_{n}\, dt' \bigg\|_{\ell^2_n}. \]
To exploit the oscillatory nature of the time integral, we rewrite the above integral as
\[ \int_{\mathbb R} \eta_\delta(t') \sum_{\Gamma(n)} e^{it'(\Psi^\omega_{3,N}(\bar n) - \Phi(\bar n))}\, \chi_{1,2,3} \cdot g_{n_1} g_{n_2} g_{n_3}\, \Big( \mathbf 1_{[0,t]}(t')\, b^{(4)}_{n}(t') \Big)\, dt', \]
where $\eta_\delta$ is as in (2.3). Then, by Parseval's identity, the above expression is
\[ \sum_{\Gamma(n)} \chi_{1,2,3} \cdot g_{n_1} g_{n_2} g_{n_3} \int_{\mathbb R} \widehat\eta_\delta\big( \tau + \Phi(\bar n) - \Psi^\omega_{3,N}(\bar n) \big)\, \mathcal F_t\big( \mathbf 1_{[0,t]}\, b^{(4)}_{n} \big)(-\tau)\, d\tau = \sum_{\Gamma(n)} \chi_{1,2,3} \cdot g_{n_1} g_{n_2} g_{n_3} \int_{\mathbb R} \frac{ \widehat\eta_\delta\big( \tau + \Phi(\bar n) - \Psi^\omega_{3,N}(\bar n) \big) }{ \langle \tau \rangle^{\frac12-} } \Big( \langle \tau \rangle^{\frac12-}\, \mathcal F_t\big( \mathbf 1_{[0,t]}\, b^{(4)}_{n} \big)(-\tau) \Big)\, d\tau. \]
Therefore, by the Cauchy-Schwarz inequality in the $\tau$-variable and Lemma 2.6, we have
\[ B \le \bigg\| \sum_{\Gamma(n)} \chi_{1,2,3} \cdot g_{n_1} g_{n_2} g_{n_3}\, \frac{ \widehat\eta_\delta\big( \tau + \Phi(\bar n) - \Psi^\omega_{3,N}(\bar n) \big) }{ \langle n \rangle^{s} \langle \tau \rangle^{\frac12-} } \bigg\|_{\ell^2_n L^2_\tau} \Big\| \big\| \mathbf 1_{[0,t]}(t')\, b^{(4)}_{n}(t') \big\|_{H^{\frac12-}_{t'}} \Big\|_{\ell^\infty_n} \lesssim S^{s,\frac12-,\delta}_{3,N}(\omega)\, \Big\| \| b^{(4)}_{n} \|_{H^{\frac12-}_t} \Big\|_{\ell^\infty_n}, \]
where $S^{s,b,\delta}_{3,N}(\omega)$ is defined in (6.8). Then, proceeding as in (6.17), we obtain
\[ B(\omega) \lesssim S^{s,\frac12-,\delta}_{3,N}(\omega)\, \| v_4 \|_{X^{s,\frac12-}_{+}(\omega,N)}. \tag{7.10} \]
Finally, by applying Lemmas 6.1 and 6.2 to (7.10), we obtain the desired estimate (7.6) for the term $B$ outside an exceptional set of probability $< C e^{-\delta^{-c}}$.

(ii.3) Estimate on $C$. Without loss of generality, we assume that $C$ consists only of one term:
\[ C = \bigg\| \int_{\mathbb R} \sum_{\Gamma(n)} \eta_\delta(t')\, e^{it'(\Psi^\omega_{2,N}(\bar n) - \Phi(\bar n))}\, \frac{ \chi_{1,2} }{ \langle n_3 \rangle^{s} \langle n \rangle^{s} } \cdot g_{n_1} g_{n_2}\, \Big( \mathbf 1_{[0,t]}(t')\, b^{(3)}_{n_3}(t')\, b^{(4)}_{n}(t') \Big)\, dt' \bigg\|_{\ell^2_n}. \]
By Parseval's identity, we have
\[ C = \bigg\| \sum_{\Gamma(n)} \chi_{1,2} \cdot g_{n_1} g_{n_2} \int_{\mathbb R} \frac{ \widehat\eta_\delta\big( \tau + \Phi(\bar n) - \Psi^\omega_{2,N}(\bar n) \big) }{ \langle n_3 \rangle^{s} \langle n \rangle^{s} \langle \tau \rangle^{\frac12-} } \Big( \langle \tau \rangle^{\frac12-}\, \mathcal F_t\big( \mathbf 1_{[0,t]}\, b^{(3)}_{n_3} b^{(4)}_{n} \big)(-\tau) \Big)\, d\tau \bigg\|_{\ell^2_n}. \]
By the Cauchy-Schwarz inequality in $\tau$ and $n_3$ followed by Hölder's inequality in $n$, we have
\[ C \le \bigg\| \sum_{\substack{n_1, n_2 \\ (n_1,n_2,n_3) \in \Gamma(n)}} \chi_{1,2} \cdot g_{n_1} g_{n_2}\, \frac{ \widehat\eta_\delta\big( \tau + \Phi(\bar n) - \Psi^\omega_{2,N}(\bar n) \big) }{ \langle n_3 \rangle^{s} \langle n \rangle^{s} \langle \tau \rangle^{\frac12-} } \bigg\|_{\ell^2_{n,n_3} L^2_\tau} \times \sup_{n \in \mathbb Z} \Big\| \big\| \mathbf 1_{[0,t]}(t')\, b^{(3)}_{n_3}(t')\, b^{(4)}_{n}(t') \big\|_{H^{\frac12-}_{t'}} \Big\|_{\ell^2_{n_3}}. \tag{7.11} \]
As for the second factor of (7.11), by applying Lemma 2.6 and then Lemma 2.5 and proceeding as in (6.17), we have
\[ \sup_{n \in \mathbb Z} \Big\| \big\| \mathbf 1_{[0,t]}(t')\, b^{(3)}_{n_3}(t')\, b^{(4)}_{n}(t') \big\|_{H^{\frac12-}_{t'}} \Big\|_{\ell^2_{n_3}} \lesssim \sup_{n \in \mathbb Z} \Big\| \| b^{(3)}_{n_3} \|_{H^{\frac12+}_t}\, \| b^{(4)}_{n} \|_{H^{\frac12+}_t} \Big\|_{\ell^2_{n_3}} \lesssim \| v_3 \|_{X^{s,\frac12+}_{+}(\omega,N)}\, \| v_4 \|_{X^{s,\frac12+}_{+}(\omega,N)}. \tag{7.12} \]
Therefore, from (7.11) and (7.12), we obtain
\[ C(\omega) \lesssim S^{s,\frac12-,\delta}_{2,N}(\omega)\, \| v_3 \|_{X^{s,\frac12+}_{+}(\omega,N)}\, \| v_4 \|_{X^{s,\frac12+}_{+}(\omega,N)}, \tag{7.13} \]
where $S^{s,b,\delta}_{2,N}(\omega)$ is defined in (6.7). Finally, by applying Lemmas 6.1 and 6.2 to (7.13), we obtain the desired estimate (7.6) for the term $C$ outside an exceptional set of probability $< C e^{-\delta^{-c}}$.

(ii.4) Estimate on $D$. Without loss of generality, we assume that $D$ has only one term:
\[ D = \bigg\| \int_{\mathbb R} \sum_{\Gamma(n)} \eta_\delta(t')\, e^{it'(2|g^N_{n_1}|^2 - \Phi(\bar n))}\, \frac{ \chi_{1} }{ \langle n_2 \rangle^{s} \langle n_3 \rangle^{s} \langle n \rangle^{s} } \cdot g_{n_1}\, \Big( \mathbf 1_{[0,t]}(t')\, b^{(2)}_{n_2}(t')\, b^{(3)}_{n_3}(t')\, b^{(4)}_{n}(t') \Big)\, dt' \bigg\|_{\ell^2_n}. \]
Proceeding as before with Parseval's identity and Hölder's inequality, we have
\[ D \le \bigg\| \sum_{\substack{n_1 \in \mathbb Z \\ (n_1,n_2,n_3) \in \Gamma(n)}} \chi_{1} \cdot g_{n_1}\, \frac{ \widehat\eta_\delta\big( \tau + \Phi(\bar n) - 2|g^N_{n_1}|^2 \big) }{ \langle n_2 \rangle^{s} \langle n_3 \rangle^{s} \langle n \rangle^{s} \langle \tau \rangle^{\frac12-} } \bigg\|_{\ell^2_{n,n_2,n_3} L^2_\tau} \times \sup_{n \in \mathbb Z} \Big\| \big\| \mathbf 1_{[0,t]}(t')\, b^{(2)}_{n_2}(t')\, b^{(3)}_{n_3}(t')\, b^{(4)}_{n}(t') \big\|_{H^{\frac12-}_{t'}} \Big\|_{\ell^2_{n_2,n_3}}. \]
Then, by estimating the second factor as in (7.12) with Lemmas 2.5 and 2.6, we obtain
\[ D(\omega) \lesssim S^{s,\frac12-,\delta}_{1,N}(\omega) \prod_{j=2}^{4} \| v_j \|_{X^{s,\frac12+}_{+}(\omega,N)}, \tag{7.14} \]
where $S^{s,b,\delta}_{1,N}(\omega)$ is defined in (6.6). Finally, by applying Lemmas 6.1 and 6.2 to (7.14), we obtain the desired estimate (7.6) for the term $D$ outside an exceptional set of probability $< C e^{-\delta^{-c}}$.

(ii.5) Estimate on $E$. We have
\[ E = \bigg\| \int_{\mathbb R} \sum_{\Gamma(n)} \eta_\delta(t')\, e^{-it'\Phi(\bar n)}\, \frac{ \mathbf 1_{[0,t]}(t')\, b^{(1)}_{n_1}(t')\, b^{(2)}_{n_2}(t')\, b^{(3)}_{n_3}(t')\, b^{(4)}_{n}(t') }{ \langle n_1 \rangle^{s} \langle n_2 \rangle^{s} \langle n_3 \rangle^{s} \langle n \rangle^{s} }\, dt' \bigg\|_{\ell^2_n}. \]
Proceeding as before with Parseval's identity and Hölder's inequality, we have
\[ E \le \sup_{n \in \mathbb Z} \bigg\| \frac{ \widehat\eta_\delta\big( \tau + \Phi(\bar n) \big) }{ \langle n_1 \rangle^{s} \langle n_2 \rangle^{s} \langle n_3 \rangle^{s} \langle n \rangle^{s} \langle \tau \rangle^{\frac12-} } \bigg\|_{\ell^2_{\Gamma(n)} L^2_\tau} \times \Big\| \big\| \mathbf 1_{[0,t]}(t')\, b^{(1)}_{n_1}(t')\, b^{(2)}_{n_2}(t')\, b^{(3)}_{n_3}(t')\, b^{(4)}_{n}(t') \big\|_{H^{\frac12-}_{t'}} \Big\|_{\ell^2_{n,\Gamma(n)}}, \tag{7.15} \]
where the $\ell^2_{\Gamma(n)}$-norm is defined by
\[ \| f_{n_1,n_2,n_3} \|_{\ell^2_{\Gamma(n)}} = \bigg( \sum_{(n_1,n_2,n_3) \in \Gamma(n)} | f_{n_1,n_2,n_3} |^2 \bigg)^{\frac12}. \]
By Lemma 2.4 followed by Lemma 2.1, we can bound the first factor on the right-hand side of (7.15) by
\[ \sup_{n \in \mathbb Z} \bigg\| \frac{ \delta\, \widehat\eta\big( \delta(\tau + \Phi(\bar n)) \big)\, n_{\max}^{-2s} }{ \langle \tau \rangle^{\frac12-} } \bigg\|_{\ell^2_{\Gamma(n)} L^2_\tau} \lesssim \sup_{n \in \mathbb Z} \bigg\| \frac{ \delta\, n_{\max}^{-2s} }{ \langle \tau \rangle^{\frac12-} \langle \delta(\tau + \Phi(\bar n)) \rangle } \bigg\|_{\ell^2_{\Gamma(n)} L^2_\tau} \sim \delta^{\frac12} \sup_{n \in \mathbb Z} \bigg( \sum_{\Gamma(n)} n_{\max}^{-4s}\, \langle \Phi(\bar n) \rangle^{-1+} \bigg)^{\frac12} \lesssim \delta^{\frac12}, \]
provided that $-s > 0$ is sufficiently small. Hence, together with (7.15) and proceeding as in (7.12), we obtain
\[ E(\omega) \lesssim \delta^{\frac12} \prod_{j=1}^{4} \| v_j \|_{X^{s,\frac12+}_{+}(\omega,N)}. \tag{7.16} \]
Finally, by applying Lemma 6.2 to (7.16), we obtain the desired estimate (7.6) for the term $E$ outside an exceptional set of probability $< C e^{-\delta^{-c}}$.

This completes the proof of Proposition 4.2.

Appendix A. Further probabilistic estimates

In this appendix, we state and prove crucial probabilistic estimates. These probabilistic estimates play an important role in establishing Propositions 4.1 and 4.2.
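Several reductions above, namely (7.4), (7.8), and (A.16) below, rest on a Lemma 2.7-type tail bound for the weighted supremum $\sup_n \langle n \rangle^{-\varepsilon} |g_n|$ of the Gaussian coefficients. The following Monte Carlo sketch illustrates the Gaussian-type decay of this tail; the normalization $\mathbb E |g_n|^2 = 1$, the truncation to $|n| \le N$, and all numerical parameters are assumptions made for illustration and are not taken from the paper.

```python
# Monte Carlo illustration (a sketch under the stated assumptions, not the
# paper's code): the weighted sup of complex Gaussians, sup_n <n>^{-eps}|g_n|,
# which underlies (7.8) and (A.16), exhibits Gaussian-type tails.

import numpy as np

rng = np.random.default_rng(0)
N, eps, trials = 512, 0.1, 2000
n = np.arange(-N, N + 1)
weight = (1.0 + n**2) ** (-eps / 2)        # <n>^{-eps} with <n> = (1 + n^2)^{1/2}

# standard complex Gaussians normalized so that E|g_n|^2 = 1 (an assumption)
g = (rng.standard_normal((trials, n.size)) +
     1j * rng.standard_normal((trials, n.size))) / np.sqrt(2)
sup = np.max(weight * np.abs(g), axis=1)

for lam in (2.0, 2.5, 3.0):
    print(f"P(sup > {lam}) ~ {np.mean(sup > lam):.4f}")   # decays like exp(-c lam^2)
```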
In Subsection A.3, we present the proof of Lemma 2.11.

In the following, $\{ g_n \}_{n \in \mathbb Z}$ denotes a sequence of independent standard complex-valued Gaussian random variables. In particular, we have
\[ \mathbb E\big[ g_n^{k}\, \overline{g_m^{\ell}} \big] = \delta_{k\ell}\, \delta_{nm} \cdot k! \tag{A.1} \]
for any $k, \ell \in \mathbb Z_{\ge 0}$ and $n, m \in \mathbb Z$. The identity (A.1) easily follows from a computation with the moment generating function for the chi-square distribution of degree 2 (i.e. $|g_n|^2 = (\operatorname{Re} g_n)^2 + (\operatorname{Im} g_n)^2$).

A.1. Random $X^{s,b}$-spaces. Given $N \in \mathbb N \cup \{\infty\}$, set $g^N_n = \mathbf 1_{|n| \le N} \cdot g_n$ as in (4.8) with the understanding that $\mathbf 1_{|n| \le N} \equiv 1$ when $N = \infty$. Then, we define random versions $X^{s,b}_{+}(\omega, N)$ and $X^{s,b}_{-}(\omega, N)$ of the $X^{s,b}$-space by the norm
\[ \| u \|_{X^{s,b}_{\pm}(\omega,N)} = \big\| \langle n \rangle^{s}\, \langle \tau + n^4 \pm 2|g^N_n(\omega)|^2 \rangle^{b}\, \widehat u(n, \tau) \big\|_{\ell^2_n L^2_\tau}. \tag{A.2} \]
When $N = \infty$, we simply set $X^{s,b}_{\pm,\omega} = X^{s,b}_{\pm}(\omega, \infty)$. The following lemma shows that the random $X^{s,b}$-norm is controlled by the standard $X^{s,b}$-norm in (2.1) with large probability.

Lemma A.1. Let $\eta \in \mathcal S(\mathbb R)$ be a Schwartz function in time and $u \in X^{s,b}$ with $s \in \mathbb R$ and $b \ge 0$. Then, there exists $C > 0$ such that
\[ \Big\| \sup_{N \in \mathbb N \cup \{\infty\}} \| \eta u \|_{X^{s,b}_{\pm}(\omega,N)} \Big\|_{L^p(\Omega)} \le C p^{\,b+2}\, \| u \|_{X^{s,b}} \tag{A.3} \]
for all $p \ge 2$, where the constant is independent of $u$. As a consequence, there exist $c, C > 0$ such that
\[ P\Big( \sup_{N \in \mathbb N \cup \{\infty\}} \| \eta u \|_{X^{s,b}_{\pm}(\omega,N)} > K \Big) \le C \exp\Big( -c\, K^{\frac{1}{b+2}}\, \| u \|_{X^{s,b}}^{-\frac{1}{b+2}} \Big) \tag{A.4} \]
for any $K > 0$.

We present the proof of Lemma A.1 at the end of this subsection. While the tail estimate (A.4) holds for each fixed $u \in X^{s,b}$, Lemma A.1 does not provide a uniform control in $u \in X^{s,b}$ and hence is not useful in the proof of the main nonlinear estimates (Propositions 4.1 and 4.2). By slightly losing spatial regularity, however, we can control the random $X^{s,b}$-norm by the standard $X^{\sigma,b}$-norm (with $\sigma > s$) uniformly in $u \in X^{\sigma,b}$. See Lemma 6.2 above.

Lemma A.2. Let $\sigma > s$ and $b \ge 0$. Then, for each $K > 0$, there exists a set $\Omega_K \subset \Omega$ with $P(\Omega_K^c) < C e^{-cK^{\frac1b}}$ such that
\[ \sup_{N \in \mathbb N \cup \{\infty\}} \| u \|_{X^{s,b}_{\pm}(\omega,N)} \lesssim (1 + K)\, \| u \|_{X^{\sigma,b}} \tag{A.5} \]
uniformly in $u \in X^{\sigma,b}$.

Proof. Fix $\varepsilon > 0$ such that $\sigma \ge s + 2b\varepsilon$. By Lemma 2.7, there exists $\Omega_K$ with $P(\Omega_K^c) < C e^{-cK^{\frac1b}}$ such that
\[ \langle \tau + n^4 \pm 2|g^N_n(\omega)|^2 \rangle^{b} \lesssim \langle \tau + n^4 \rangle^{b} + |g^N_n(\omega)|^{2b} \lesssim \langle \tau + n^4 \rangle^{b} + K \langle n \rangle^{2b\varepsilon}. \]
This implies that
\[ \sup_{N \in \mathbb N \cup \{\infty\}} \| u \|_{X^{s,b}_{\pm}(\omega,N)} \lesssim \| u \|_{X^{s,b}} + K \| u \|_{X^{\sigma,0}} \]
for each $\omega \in \Omega_K$, uniformly in $u \in X^{\sigma,b}$. Then, the desired estimate (A.5) follows from the monotonicity of the $X^{s,b}$-norm in $s$ and $b$. $\square$

We now present the proof of Lemma A.1.

Proof of Lemma A.1. Trivially, we have
\[ \sup_{N \in \mathbb N \cup \{\infty\}} \| \eta u \|_{X^{s,b}_{\pm}(\omega,N)} \le \| \eta u \|_{X^{s,b}} + \| \eta u \|_{X^{s,b}_{\pm}(\omega,\infty)}. \]
Since the multiplication by a smooth cutoff function $\eta$ is bounded in $X^{s,b}$, the estimate (A.3) follows once we prove
\[ \big\| \| \eta u \|_{X^{s,b}_{\pm,\omega}} \big\|_{L^p(\Omega)} \le C p^{\,b+2}\, \| u \|_{X^{s,b}}. \tag{A.6} \]
The tail estimate (A.4) then follows from applying Lemma 2.9 to (A.3).

Let $v(t) = S(-t) u(t)$ denote the interaction representation of $u$ and set $a_n(\tau) = \widehat v(n, \tau)$. Then, we have
\[ \mathcal F(\eta u)(n, \tau) = \int_{\mathbb R} \widehat\eta(\tau_1 + n^4)\, a_n(\tau - \tau_1)\, d\tau_1. \tag{A.7} \]
From the definition (A.2), (A.7), and the triangle inequality $\langle \tau \rangle^{b} \lesssim \langle \tau_1 \rangle^{b} + \langle \tau - \tau_1 \rangle^{b}$ for $b \ge 0$, we have
\[ \| \eta u \|^2_{X^{s,b}_{\pm,\omega}} = \sum_n \int_{\mathbb R} \langle n \rangle^{2s} \langle \tau \rangle^{2b}\, \big| \mathcal F(\eta u)\big(n, \tau - n^4 \mp 2|g_n|^2\big) \big|^2\, d\tau = \sum_n \int_{\mathbb R} \langle n \rangle^{2s} \langle \tau \rangle^{2b} \bigg| \int_{\mathbb R} \widehat\eta\big( \tau_1 \mp 2|g_n|^2 \big)\, a_n(\tau - \tau_1)\, d\tau_1 \bigg|^2 d\tau \]
\[ \lesssim \sum_n \langle n \rangle^{2s} \int_{\mathbb R} \bigg( \int_{\mathbb R} \langle \tau_1 \rangle^{b}\, \big| \widehat\eta\big( \tau_1 \mp 2|g_n|^2 \big) \big|\, |a_n(\tau - \tau_1)|\, d\tau_1 \bigg)^2 d\tau + \sum_n \langle n \rangle^{2s} \int_{\mathbb R} \bigg( \int_{\mathbb R} \big| \widehat\eta\big( \tau_1 \mp 2|g_n|^2 \big) \big|\, \langle \tau - \tau_1 \rangle^{b}\, |a_n(\tau - \tau_1)|\, d\tau_1 \bigg)^2 d\tau =: \mathrm I + \mathrm{II}. \tag{A.8} \]
Before proceeding further, we claim the following inequality:
\[ E_b(\tau_1) := \Big( \mathbb E\Big[ \langle \tau_1 \rangle^{bp}\, \big| \widehat\eta\big( \tau_1 \mp 2|g_n|^2 \big) \big|^{p} \Big] \Big)^{\frac1p} \le C(b)\, p^{\,b+2}\, \langle \tau_1 \rangle^{-2}. \tag{A.9} \]
We first use this estimate to bound I and II in (A.8); we present the proof of (A.9) at the end of this proof.

By Minkowski's integral inequality, (A.9), and Young's inequality, we have
\[ \mathbb E\big[ \mathrm I^{\frac p2} \big] \le \bigg( \sum_n \langle n \rangle^{2s} \int_{\mathbb R} \Big( \int_{\mathbb R} E_b(\tau_1)\, |a_n(\tau - \tau_1)|\, d\tau_1 \Big)^2 d\tau \bigg)^{\frac p2} \lesssim p^{(b+2)p} \bigg( \sum_n \langle n \rangle^{2s} \int_{\mathbb R} |a_n(\tau)|^2\, d\tau \bigg)^{\frac p2} = p^{(b+2)p}\, \| u \|^{p}_{X^{s,0}}. \tag{A.10} \]
Similarly, we have
\[ \mathbb E\big[ \mathrm{II}^{\frac p2} \big] \le \bigg( \sum_n \langle n \rangle^{2s} \int_{\mathbb R} \Big( \int_{\mathbb R} E_0(\tau_1)\, \langle \tau - \tau_1 \rangle^{b}\, |a_n(\tau - \tau_1)|\, d\tau_1 \Big)^2 d\tau \bigg)^{\frac p2} \lesssim p^{2p}\, \| u \|^{p}_{X^{s,b}}. \tag{A.11} \]
Hence, (A.6) follows from (A.8), (A.10), and (A.11).

It remains to prove (A.9). By the triangle inequality $\langle \tau_1 \rangle \lesssim \langle \tau_1 \mp 2|g_n|^2 \rangle + |g_n|^2$ and using the rapid decay of $\widehat\eta \in \mathcal S(\mathbb R)$, we have
\[ \big\| \langle \tau_1 \rangle^{b}\, \widehat\eta( \tau_1 \mp 2|g_n|^2 ) \big\|_{L^p(\Omega)} \le \langle \tau_1 \rangle^{-2}\, \big\| \langle \tau_1 \rangle^{b+2}\, \widehat\eta( \tau_1 \mp 2|g_n|^2 ) \big\|_{L^p(\Omega)} \lesssim \langle \tau_1 \rangle^{-2} \Big( 1 + \| g_n \|^{2(b+2)}_{L^{2(b+2)p}(\Omega)} \Big) \lesssim p^{\,b+2}\, \langle \tau_1 \rangle^{-2}, \]
yielding (A.9). This completes the proof of Lemma A.1. $\square$

A.2. Key tail estimates. In the following, we present the proof of the key tail estimates (Lemma 6.1) used in establishing the crucial nonlinear estimates (Propositions 4.1 and 4.2). Given $s, b \in \mathbb R$, $\delta > 0$, and $N \in \mathbb N$, we recall the definitions of $S^{s,b,\delta}_{j,N}$, $j = 1, 2, 3$, from (6.2), (6.3), and (6.4) (expressed in slightly different forms via Taylor expansions):
\[ S^{s,b,\delta}_{1,N}(f_1) = \bigg\| \sum_{\substack{n_1 \in \mathbb Z \\ (n_1,n_2,n_3) \in \Gamma(n)}} \widehat f_1(n_1)\, \frac{ \widehat\eta_\delta\big( \tau + \Phi(\bar n) - 2|g^N_{n_1}|^2 \big) }{ \langle n_2 \rangle^{s} \langle n_3 \rangle^{s} \langle n \rangle^{s} \langle \tau \rangle^{b} } \bigg\|_{\ell^2_{n,n_2,n_3} L^2_\tau}, \tag{A.12} \]
\[ S^{s,b,\delta}_{2,N}(f_1, f_2) = \bigg\| \sum_{k_1,k_2 = 0}^{\infty} \sum_{\substack{n_1, n_2 \in \mathbb Z \\ (n_1,n_2,n_3) \in \Gamma(n)}} \widehat f_1(n_1)\, \widehat f_2(n_2) \prod_{j=1}^{2} \frac{ \big( 2|g^N_{n_j}|^2 \big)^{k_j} }{ k_j! } \cdot \frac{ \partial^{k_1+k_2} \widehat\eta_\delta\big( \tau + \Phi(\bar n) \big) }{ \langle n_3 \rangle^{s} \langle n \rangle^{s} \langle \tau \rangle^{b} } \bigg\|_{\ell^2_{n,n_3} L^2_\tau}, \]
\[ S^{s,b,\delta}_{3,N}(f_1, f_2, f_3) = \bigg\| \sum_{k_1,k_2,k_3 = 0}^{\infty} \sum_{\Gamma(n)} \widehat f_1(n_1)\, \widehat f_2(n_2)\, \widehat f_3(n_3) \prod_{j=1}^{3} \frac{ \big( 2|g^N_{n_j}|^2 \big)^{k_j} }{ k_j! } \cdot \frac{ \partial^{k_1+k_2+k_3} \widehat\eta_\delta\big( \tau + \Phi(\bar n) \big) }{ \langle n \rangle^{s} \langle \tau \rangle^{b} } \bigg\|_{\ell^2_{n} L^2_\tau}. \]
Here, $\eta \in C^\infty_c(\mathbb R)$ denotes a smooth non-negative cutoff function supported on $[-2, 2]$ with $\eta \equiv 1$ on $[-1, 1]$, and $\Gamma(n)$, $\Phi(\bar n)$, and $\Psi^\omega_N(\bar n)$ are as in (1.19), (2.6), and (4.9), respectively. We also recall that there is only one term in the summation over $n_1$ in (A.12). For simplicity of notations, we set
\[ S^{s,b,\delta}_{1,N}(\omega) := S^{s,b,\delta}_{1,N}\big( \pi^\perp_{N_1}(u^\omega) \big), \qquad S^{s,b,\delta}_{2,N}(\omega) := S^{s,b,\delta}_{2,N}\big( \pi^\perp_{N_1}(u^\omega), \pi^\perp_{N_2}(u^\omega) \big), \qquad S^{s,b,\delta}_{3,N}(\omega) := S^{s,b,\delta}_{3,N}\big( \pi^\perp_{N_1}(u^\omega), \pi^\perp_{N_2}(u^\omega), \pi^\perp_{N_3}(u^\omega) \big), \]
where $u^\omega$ is the white noise in (6.5) and $\pi^\perp_{N_j}$ denotes the frequency projection onto the frequencies $\{ |n| > N_j \}$ as in (2.5) with the convention that $\pi^\perp_{-1} = \operatorname{Id}$.
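Before turning to the tail estimates, we note that the moment identity (A.1), which drives all the $L^2(\Omega)$-computations in this appendix, is easy to test numerically. The following sketch assumes the normalization $\mathbb E|g|^2 = 1$ (so that $\operatorname{Re} g$ and $\operatorname{Im} g$ are independent $\mathcal N(0, \tfrac12)$ variables); the sample size and seed are arbitrary choices for illustration.

```python
# Quick Monte Carlo check (illustration only) of the moment identity (A.1):
# E[ g^k conj(g)^l ] = delta_{kl} * k!  for a standard complex Gaussian g,
# normalized here so that E|g|^2 = 1 (an assumed convention).

import numpy as np
from math import factorial

rng = np.random.default_rng(1)
g = (rng.standard_normal(10**6) + 1j * rng.standard_normal(10**6)) / np.sqrt(2)

for k in range(4):
    for l in range(4):
        emp = np.mean(g**k * np.conj(g)**l)       # empirical moment
        exact = factorial(k) if k == l else 0.0   # delta_{kl} * k!
        print(f"k={k}, l={l}: empirical={emp.real:+.3f}, exact={exact}")
```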
With the notations defined above, we have the following tail estimates for these random functionals (Lemma 6.1).

Lemma A.3. Let $s < 0$, $0 < b < \frac12$, and $\beta > 0$ be such that $-s$ and $\beta$ are sufficiently close to $0$ and $b$ is sufficiently close to $\frac12$. Then, there exist $c, \kappa > 0$ and small $\delta_0 > 0$ such that the following statements hold.

(i) We have
\[ P\bigg( \Big\{ \omega \in \Omega : \sup_{N \in \mathbb N}\, \sup_{N_1 \in \mathbb Z_{\ge -1}} \langle N_1 \rangle^{\beta}\, \big| S^{s,b,\delta}_{1,N}(\omega) \big| > \delta^{\kappa} \Big\} \bigg) < e^{-\delta^{-c}} \tag{A.13} \]
for any $0 < \delta < \delta_0$.

(ii) Let $k = 2, 3$. Given $0 < \delta < \delta_0$, define the sets $A_k$ by
\[ A_k := \bigg\{ \omega \in \Omega : \text{there exists } N_0 = N_0(\omega, \delta) \in \mathbb N \text{ such that } \sup_{N \ge N_0}\, \sup_{\substack{N_j \in \mathbb Z_{\ge -1} \\ j = 1, \dots, k}} \bigg( \prod_{j=1}^{k} \langle N_j \rangle^{\beta} \bigg) \big| S^{s,b,\delta}_{k,N}(\omega) \big| \le \delta^{\kappa} \bigg\}. \]
Then, we have
\[ P(A_k^c) < e^{-\delta^{-c}} \tag{A.14} \]
for any $0 < \delta < \delta_0$.

Proof. In the following, we take $s < 0$, $\beta > 0$, and $b < \frac12$ such that $-s$ and $\beta$ are sufficiently close to $0$ and $b$ is sufficiently close to $\frac12$.

We first prove (A.13). Fix $K \gg 1$. Given small $\varepsilon > 0$, it follows from Lemma 2.7 that there exists $\Omega_K \subset \Omega$ with
\[ P(\Omega_K^c) \le e^{-cK^2} \tag{A.15} \]
such that we have
\[ |g^N_n(\omega)| \le K \langle n \rangle^{\varepsilon} \tag{A.16} \]
for any $\omega \in \Omega_K$, any $n \in \mathbb Z$, and any $N \in \mathbb N$. We separately consider the following two cases: (i) $n_{\max} \lesssim K^2$ and (ii) $n_{\max} \gg K^2$, where $n_{\max}$ is as in (3.11).

Suppose that $n_{\max} \lesssim K^2$. By crudely estimating the contribution in this case with (A.16), $N_1 < |n_1| \lesssim K^2$, and $\widehat\eta_\delta(\tau) = \delta\, \widehat\eta(\delta\tau)$, we have
\[ \sup_{N \in \mathbb N}\, \sup_{N_1 \in \mathbb Z_{\ge -1}} \langle N_1 \rangle^{\beta}\, \big| S^{s,b,\delta}_{1,N}(\omega) \big| \lesssim K^{2(-s + 2\beta + 2\varepsilon)} \bigg\| \mathbf 1_{n_{\max} \lesssim K^2}\, \frac{ \widehat\eta_\delta\big( \tau + \Phi(\bar n) - 2|g^N_{n_1}(\omega)|^2 \big) }{ \langle \tau \rangle^{b} } \bigg\|_{\ell^2_{n,n_2,n_3} L^2_\tau} \lesssim \delta^{\frac12 + b - \varepsilon}\, K^{2(-s + 2\beta + 2\varepsilon)} \ll \delta^{\frac12 + b - \varepsilon}\, K, \tag{A.17} \]
provided that $K \gg 1$ and that $-s$, $\beta$, and $\varepsilon$ are all sufficiently close to $0$.

Next, we consider the case $n_{\max} \gg K^2$. In this case, we have
\[ \big| \Phi(\bar n) - 2|g^N_{n_1}(\omega)|^2 \big| \sim |\Phi(\bar n)| \]
uniformly for any $\omega \in \Omega_K$, $\bar n = (n_1, n_2, n_3, n) \in \mathbb Z^4$, and $N \in \mathbb N$. Then, by Lemma 2.4, we have
\[ \bigg\| \frac{ \widehat\eta_\delta\big( \tau + \Phi(\bar n) - 2|g^N_{n_1}(\omega)|^2 \big) }{ \langle \tau \rangle^{b} } \bigg\|_{L^2_\tau} \lesssim \bigg( \int_{\mathbb R} \frac{ \delta^2\, \langle \tau \rangle^{-2b} }{ \langle \delta( \tau + \Phi(\bar n) - 2|g^N_{n_1}(\omega)|^2 ) \rangle^{2} }\, d\tau \bigg)^{\frac12} \lesssim \delta^{\frac12 - b}\, \big\langle \Phi(\bar n) - 2|g^N_{n_1}(\omega)|^2 \big\rangle^{b-1} \sim \delta^{\frac12 - b}\, \langle \Phi(\bar n) \rangle^{b-1} \tag{A.18} \]
for $b < \frac12$ sufficiently close to $\frac12$. Hence, from (A.18) and Lemma 2.1, we have
\[ \sup_{N \in \mathbb N}\, \sup_{N_1 \in \mathbb Z_{\ge -1}} \langle N_1 \rangle^{\beta}\, \big| S^{s,b,\delta}_{1,N}(\omega) \big| \lesssim \delta^{\frac12 - b}\, K \bigg\| \mathbf 1_{(n_1,n_2,n_3) \in \Gamma(n)} \cdot \mathbf 1_{|n_1| \ge N_1}\, \frac{ \langle n_1 \rangle^{\beta + \varepsilon}\, n_{\max}^{-s} }{ n_{\max}^{2 - 2b}\, \langle n - n_1 \rangle^{1-b}\, \langle n - n_3 \rangle^{1-b} } \bigg\|_{\ell^2_{n,n_2,n_3}} \lesssim \delta^{\frac12 - b}\, K, \tag{A.19} \]
provided that $-s$, $\beta$, and $\varepsilon$ are all sufficiently close to $0$ and that $b < \frac12$ is sufficiently close to $\frac12$. Hence, by choosing $K = \delta^{-c'}$ for some small $c' > 0$, the bound (A.13) follows from (A.15), (A.17), and (A.19).

Let us now turn to the proof of (A.14) for $k = 2$. We have
\[ S^{s,b,\delta}_{2,N}(\omega) = \bigg\| \sum_{k_1,k_2 = 0}^{\infty} \sum_{\substack{n_1, n_2 \in \mathbb Z \\ (n_1,n_2,n_3) \in \Gamma(n)}} \chi_{1,2} \prod_{j=1}^{2} \frac{ \big( 2|g^N_{n_j}|^2 \big)^{k_j}\, g^*_{n_j} }{ k_j! }\, \frac{ \partial^{k_1+k_2} \widehat\eta_\delta\big( \tau + \Phi(\bar n) \big) }{ \langle n_3 \rangle^{s} \langle n \rangle^{s} \langle \tau \rangle^{b} } \bigg\|_{\ell^2_{n,n_3} L^2_\tau}, \]
where $g^*_{n_j}$ is as in (2.14) and $\chi_{1,2} = \prod_{j=1}^{2} \mathbf 1_{|n_j| > N_j}$. By Minkowski's integral inequality and Lemma 2.11 with (2.3), we have
\[ \big\| S^{s,b,\delta}_{2,N} \big\|_{L^p(\Omega)} \le p\, \delta \sum_{k_1,k_2 = 0}^{\infty} (C p \delta)^{k_1+k_2} \bigg\| \chi_{1,2}\, \frac{ \partial^{k_1+k_2} \widehat\eta\big( \delta(\tau + \Phi(\bar n)) \big) }{ \langle n_3 \rangle^{s} \langle n \rangle^{s} \langle \tau \rangle^{b} } \bigg\|_{\ell^2_{n,\Gamma(\bar n)} L^2_\tau}. \tag{A.20} \]
We separately consider the following two cases: (i) $\langle \tau \rangle \gtrsim |\Phi(\bar n)|$ and (ii) $\langle \tau \rangle \ll |\Phi(\bar n)|$.

First, suppose that $\langle \tau \rangle \gtrsim |\Phi(\bar n)|$. By Plancherel's identity with (3.21), we have
\[ \delta^{\frac12}\, \big\| \partial^{k} \widehat\eta\big( \delta(\tau + \Phi(\bar n)) \big) \big\|_{L^2_\tau} \le C^{k} \tag{A.21} \]
for any $k \in \mathbb Z_{\ge 0}$. Then, from (A.20), (A.21), Lemma 2.1, and choosing $p = \delta^{-\theta}$ for some $\theta \in (0, 1)$ such that $C p \delta \le \frac12$, we obtain
\[ \big\| S^{s,b,\delta}_{2,N} \big\|_{L^p(\Omega)} \le p\, \delta^{\frac12}\, \langle N_1 \rangle^{-2\beta} \langle N_2 \rangle^{-2\beta} \sum_{k_1,k_2 = 0}^{\infty} (C p \delta)^{k_1+k_2} \bigg( \sum_{n \in \mathbb Z} \sum_{\Gamma(n)} \frac{ \langle n_1 \rangle^{4\beta} \langle n_2 \rangle^{4\beta}\, \langle n_3 \rangle^{-2s} \langle n \rangle^{-2s} }{ n_{\max}^{4b}\, \langle n - n_1 \rangle^{2b}\, \langle n - n_3 \rangle^{2b} } \bigg)^{\frac12} \le C p\, \delta^{\frac12}\, \langle N_1 \rangle^{-2\beta} \langle N_2 \rangle^{-2\beta}, \tag{A.22} \]
provided that $-s$ and $\beta$ are sufficiently close to $0$ and that $b < \frac12$ is sufficiently close to $\frac12$.

Next, we consider the case $\langle \tau \rangle \ll |\Phi(\bar n)|$. By Hausdorff-Young's inequality, we have
\[ \big\| \partial^{k} \widehat\eta\big( \delta(\tau + \Phi(\bar n)) \big) \big\|_{L^\infty_\tau} \le \big\| (-it)^{k} \eta(t) \big\|_{L^1_t} \le C^{k}, \qquad \big\| \delta(\tau + \Phi(\bar n))\, \partial^{k} \widehat\eta\big( \delta(\tau + \Phi(\bar n)) \big) \big\|_{L^\infty_\tau} \le \big\| \partial_t\big( (-it)^{k} \eta(t) \big) \big\|_{L^1_t} \le C^{k} \]
for any $k \ge 0$. By interpolating the two estimates above, we have
\[ \big\| |\delta(\tau + \Phi(\bar n))|^{2\varepsilon}\, \partial^{k} \widehat\eta\big( \delta(\tau + \Phi(\bar n)) \big) \big\|_{L^\infty_\tau} \le C^{k} \tag{A.23} \]
for any $k \ge 0$. Then, from (A.23), Lemma 2.1, and choosing $p = \delta^{-\theta}$ as above, we obtain
\[ \big\| S^{s,b,\delta}_{2,N} \big\|_{L^p(\Omega)} \le p\, \delta^{1 - 2\varepsilon}\, \langle N_1 \rangle^{-2\beta} \langle N_2 \rangle^{-2\beta} \sum_{k_1,k_2 = 0}^{\infty} (C p \delta)^{k_1+k_2} \bigg\| \chi_{1,2}\, \frac{ \langle n_1 \rangle^{2\beta} \langle n_2 \rangle^{2\beta} }{ \langle n_3 \rangle^{s} \langle n \rangle^{s} }\, \frac{ |\Phi(\bar n)|^{-\varepsilon} }{ \langle \tau \rangle^{b + \varepsilon} } \bigg\|_{\ell^2_{n,\Gamma(\bar n)} L^2_\tau} \le C p\, \delta^{1 - 2\varepsilon}\, \langle N_1 \rangle^{-2\beta} \langle N_2 \rangle^{-2\beta}, \tag{A.24} \]
provided that $-s$, $\beta$, and $\varepsilon$ are sufficiently close to $0$ and that $b < \frac12$ is sufficiently close to $\frac12$ such that $b + \varepsilon > \frac12$.

By applying Chebyshev's inequality with (A.22) and (A.24) and choosing $\lambda = C p^2 \delta^{\frac12}$ with $p = \delta^{-\theta}$, we obtain
\[ P\Big( \langle N_1 \rangle^{\beta} \langle N_2 \rangle^{\beta}\, \big| S^{s,b,\delta}_{2,N,N_1,N_2} \big| > \lambda \Big) \le \frac{ \langle N_1 \rangle^{\beta p} \langle N_2 \rangle^{\beta p}\, C^{p}\, p^{p}\, \delta^{\frac p2} }{ \langle N_1 \rangle^{2\beta p} \langle N_2 \rangle^{2\beta p}\, \lambda^{p} } = \frac{ 1 }{ \langle N_1 \rangle^{\beta p} \langle N_2 \rangle^{\beta p} }\, e^{-p \ln p} \le \frac{ 1 }{ \langle N_1 \rangle^{\beta p} \langle N_2 \rangle^{\beta p} }\, e^{-\delta^{-c}}. \tag{A.25} \]
Here, we added subscripts $N_1$ and $N_2$ in $S^{s,b,\delta}_{2,N,N_1,N_2}$ to show its dependence on $N_1$ and $N_2$ explicitly. Now, by summing (A.25) over $N_1, N_2 \in \mathbb Z_{\ge -1}$, we obtain
\[ P\bigg( \sup_{\substack{N_j \in \mathbb Z_{\ge -1} \\ j = 1, 2}} \langle N_1 \rangle^{\beta} \langle N_2 \rangle^{\beta}\, \big| S^{s,b,\delta}_{2,N,N_1,N_2} \big| > \delta^{\frac12 - 2\theta} \bigg) \le C e^{-\delta^{-c}} \]
for any $0 < \delta < \delta_0$, where $\delta_0 > 0$ is chosen so small that $\beta \delta_0^{-\theta} \ge 1$ (i.e. $\beta p > 1$).

Let $M \ge N \ge 1$. Then, by slightly modifying the computation above with the definition (4.8) of $g^N_n$ and Minkowski's inequality (on the $\ell^2_{n,n_3} L^2_\tau$-norm), we also have
\[ \big\| S^{s,b,\delta}_{2,M,N_1,N_2} - S^{s,b,\delta}_{2,N,N_1,N_2} \big\|_{L^p(\Omega)} \le C p\, \delta^{\frac12}\, N^{-\beta}\, \langle N_1 \rangle^{-2\beta} \langle N_2 \rangle^{-2\beta}, \]
since we must have $n_{\max} \ge N$ to have a non-zero contribution to the left-hand side above. This shows that $\big\{ S^{s,b,\delta}_{2,N,N_1,N_2} \big\}_{N \in \mathbb N}$ forms a Cauchy sequence in $L^p(\Omega)$ for any $p \ge 2$. Denoting the limit by $S^{s,b,\delta}_{2,\infty,N_1,N_2}$, we have
\[ \big\| S^{s,b,\delta}_{2,\infty,N_1,N_2} - S^{s,b,\delta}_{2,N,N_1,N_2} \big\|_{L^p(\Omega)} \le C p\, \delta^{\frac12}\, N^{-\beta}\, \langle N_1 \rangle^{-2\beta} \langle N_2 \rangle^{-2\beta} \]
and
\[ P\bigg( \sup_{\substack{N_j \in \mathbb Z_{\ge -1} \\ j = 1, 2}} \langle N_1 \rangle^{\beta} \langle N_2 \rangle^{\beta}\, \big| S^{s,b,\delta}_{2,\infty,N_1,N_2} \big| > \delta^{\frac12 - 2\theta} \bigg) \le C e^{-\delta^{-c}} \tag{A.26} \]
for any $0 < \delta < \delta_0$. By repeating the computation in (A.25), we then obtain
\[ P\bigg( \sup_{\substack{N_j \in \mathbb Z_{\ge -1} \\ j = 1, 2}} \langle N_1 \rangle^{\beta} \langle N_2 \rangle^{\beta}\, \big| S^{s,b,\delta}_{2,\infty,N_1,N_2} - S^{s,b,\delta}_{2,N,N_1,N_2} \big| > \ell^{-1} \bigg) \le C(\ell)\, N^{-\beta p} \tag{A.27} \]
for any $0 < \delta < \delta_0$ (by possibly making $\delta_0$ smaller but independent of $N \in \mathbb N$). Given $\ell \in \mathbb N$, it then follows from (A.27) that
\[ \sum_{N=1}^{\infty} P\bigg( \sup_{\substack{N_j \in \mathbb Z_{\ge -1} \\ j = 1, 2}} \langle N_1 \rangle^{\beta} \langle N_2 \rangle^{\beta}\, \big| S^{s,b,\delta}_{2,\infty,N_1,N_2} - S^{s,b,\delta}_{2,N,N_1,N_2} \big| > \ell^{-1} \bigg) \le \sum_{N=1}^{\infty} C(\ell)\, N^{-\beta p} < \infty, \]
since $\beta p > 1$.
Therefore, we conclude from the Borel-Cantelli lemma that there exists $\Omega_\ell$ with $P(\Omega_\ell) = 1$ such that, for each $\omega \in \Omega_\ell$, there exists $N_0 = N_0(\omega) \in \mathbb N$ such that
\[ \sup_{\substack{N_j \in \mathbb Z_{\ge -1} \\ j = 1, 2}} \langle N_1 \rangle^{\beta} \langle N_2 \rangle^{\beta}\, \big| S^{s,b,\delta}_{2,\infty,N_1,N_2} - S^{s,b,\delta}_{2,N,N_1,N_2} \big| \le \ell^{-1} \]
for any $N \ge N_0$. By setting $\Sigma = \bigcap_{\ell=1}^{\infty} \Omega_\ell$, we have $P(\Sigma) = 1$. This shows that, as $N \to \infty$, $S^{s,b,\delta}_{2,N,N_1,N_2}$ converges almost surely to $S^{s,b,\delta}_{2,\infty,N_1,N_2}$ with respect to the metric
\[ d(f_{N_1,N_2}, g_{N_1,N_2}) := \sup_{\substack{N_j \in \mathbb Z_{\ge -1} \\ j = 1, 2}} \langle N_1 \rangle^{\beta} \langle N_2 \rangle^{\beta}\, | f_{N_1,N_2} - g_{N_1,N_2} |. \]
Combining this almost sure convergence with (A.26), we obtain (A.14) when $k = 2$. The proof of (A.14) for $k = 3$ follows in an analogous manner and hence we omit details. $\square$

A.3. Proof of Lemma 2.11. We conclude this appendix by presenting the proof of Lemma 2.11.

First, we consider the case $|\mathcal A| = 1$. By Stirling's formula $k! \sim \sqrt{2\pi k}\, \big( \frac ke \big)^{k}$, there exist $C_1, C_2 > 0$ such that
\[ \frac{ (2k+1)! }{ (k!)^2 } \le C_1^{k}\, \sqrt{k+1} \le C_2^{k} \tag{A.28} \]
for any $k \in \mathbb Z_{\ge 0}$. Hence, the desired estimate (2.15) follows from the Wiener chaos estimate (Lemma 2.8), (A.1), and (A.28).

The proof when $|\mathcal A| \ge 2$ is analogous. In the following, we consider the case $|\mathcal A| = 3$, namely $\mathcal A = \{1, 2, 3\}$, since the proof for the case $|\mathcal A| = 2$ follows in an analogous manner. In this case, by the Wiener chaos estimate (Lemma 2.8) with (2.12), we have
\[ \| \Sigma_n \|_{L^p(\Omega)} \le (p - 1)^{k_1 + k_2 + k_3 + \frac32}\, \| \Sigma_n \|_{L^2(\Omega)}. \tag{A.29} \]
In the following, we estimate $\| \Sigma_n \|_{L^2(\Omega)}$. From (2.13), we have
\[ \| \Sigma_n \|^2_{L^2(\Omega)} = \frac{1}{ (k_1!\, k_2!\, k_3!)^2 }\, \mathbb E\bigg[ \sum_{(n_1,n_2,n_3) \in \Gamma(n)}\, \sum_{(\widetilde n_1, \widetilde n_2, \widetilde n_3) \in \Gamma(n)} c^{\bar k}_{n_1,n_2,n_3}\, \overline{ c^{\bar k}_{\widetilde n_1, \widetilde n_2, \widetilde n_3} } \prod_{j=1}^{3} |g_{n_j}|^{2k_j}\, g^*_{n_j} \prod_{\widetilde j = 1}^{3} |g_{\widetilde n_{\widetilde j}}|^{2k_{\widetilde j}}\, \overline{ g^*_{\widetilde n_{\widetilde j}} } \bigg]. \tag{A.30} \]
Recall from (A.1) that, under the conditions $n_2 \ne n_1, n_3$ and $\widetilde n_2 \ne \widetilde n_1, \widetilde n_3$, the right-hand side of (A.30) yields zero contribution unless $n_2 = \widetilde n_2$. Hence, we assume $n_2 = \widetilde n_2$ in the following.

$\bullet$ Case 1: $n_1 \ne n_3$. Note that we must have $n_1 = \widetilde n_1$ and $n_3 = \widetilde n_3$, or $n_1 = \widetilde n_3$ and $n_3 = \widetilde n_1$ in this case. Otherwise, the right-hand side of (A.30) yields zero contribution.

We first consider the case $n_1 = \widetilde n_1$ and $n_3 = \widetilde n_3$. Then, from (A.1), we obtain
\[ \text{RHS of (A.30)} \le \frac{1}{ (k_1!\, k_2!\, k_3!)^2 } \sum_{\Gamma(n)} \big| c^{\bar k}_{n_1,n_2,n_3} \big|^2 \prod_{j=1}^{3} (2k_j + 1)! \le C^{k_1 + k_2 + k_3} \bigg( \sum_{\Gamma(n)} \big| c^{\bar k}_{n_1,n_2,n_3} \big|^2 \bigg). \tag{A.31} \]
Next, we consider the case $n_1 = \widetilde n_3$ and $n_3 = \widetilde n_1$. Then, from (A.1) and (A.28), we obtain
\[ \text{RHS of (A.30)} \le \frac{1}{ (k_1!\, k_2!\, k_3!)^2 } \sum_{\Gamma(n)} \big| c^{\bar k}_{n_1,n_2,n_3} \big|^2\, \big[ (k_1 + k_3 + 1)! \big]^2\, (2k_2 + 1)!. \tag{A.32} \]
We claim that
\[ \frac{ (k_1 + k_3 + 1)! }{ k_1!\, k_3! } \le C^{k_1 + k_3} \tag{A.33} \]
for some $C > 0$. Hence, from (A.32) with (A.28) and (A.33), we obtain
\[ \text{RHS of (A.30)} \le C^{k_1 + k_2 + k_3} \bigg( \sum_{\Gamma(n)} \big| c^{\bar k}_{n_1,n_2,n_3} \big|^2 \bigg). \tag{A.34} \]
Hence, it remains to prove (A.33). Without loss of generality, assume $k_1 \le k_3$. Then, by Stirling's formula, we have
\[ \frac{ (k_1 + k_3 + 1)! }{ k_1!\, k_3! } \le C^{k_3}\, \frac{ (k_1 + k_3)\, \sqrt{k_1 + k_3}\, (k_1 + k_3)^{k_1 + k_3} }{ k_1^{k_1}\, k_3^{k_3} } \le C^{k_1 + k_3} \bigg[ \Big( 1 + \frac{k_1}{k_3} \Big)^{\frac{k_3}{k_1}} \bigg]^{k_1}. \tag{A.35} \]
Then, (A.33) follows from (A.35) once we note that $\lim_{x \to \infty} \big( 1 + \frac1x \big)^{x} = e$.

$\bullet$ Case 2: $n_1 = n_3$. In this case, we must have $n_1 = n_3 = \widetilde n_1 = \widetilde n_3$. Proceeding as before with (A.1), we have
\[ \text{RHS of (A.30)} \le \frac{1}{ (k_1!\, k_2!\, k_3!)^2 } \sum_{\Gamma(n)} \big| c^{\bar k}_{n_1,n_2,n_3} \big|^2\, (2k_1 + 2k_3 + 2)!\, (2k_2 + 1)! \le C^{k_1 + k_2 + k_3} \bigg( \sum_{\Gamma(n)} \big| c^{\bar k}_{n_1,n_2,n_3} \big|^2 \bigg), \tag{A.36} \]
where we used
\[ \frac{ (2k_1 + 2k_3 + 2)! }{ (k_1!)^2\, (k_3!)^2 } \le C^{k_1 + k_3} \tag{A.37} \]
in the second inequality. The proof of (A.37) is analogous to that of (A.33) and thus we omit details.

Putting (A.29), (A.31), (A.34), and (A.36) together, we obtain (2.15) when $\mathcal A = \{1, 2, 3\}$. This completes the proof of Lemma 2.11.
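The factorial bounds (A.28) and (A.33) admit a quick numerical illustration: the $k$-th roots of the corresponding ratios stay bounded, as predicted by Stirling's formula. The ranges in the sketch below are arbitrary choices for illustration only.

```python
# Numerical sanity check (illustration only) of the combinatorial bounds used
# above: (2k+1)!/(k!)^2 <= C^k as in (A.28), and (k1+k2+1)!/(k1! k2!) <= C^(k1+k2)
# as in (A.33); the k-th roots of the ratios remain bounded.

from math import factorial

print("k-th roots of (2k+1)!/(k!)^2:")
for k in range(1, 41, 5):
    r = factorial(2 * k + 1) / factorial(k) ** 2
    print(f"  k={k:2d}: {r ** (1 / k):.3f}")            # tends to 4 as k grows

print("(k1+k2)-th roots of (k1+k2+1)!/(k1! k2!):")
for k1, k2 in [(1, 1), (5, 5), (10, 20), (30, 30)]:
    r = factorial(k1 + k2 + 1) / (factorial(k1) * factorial(k2))
    print(f"  k1={k1}, k2={k2}: {r ** (1 / (k1 + k2)):.3f}")  # stays bounded (~2)
```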
Acknowledgements. T.O. and Y.W. were supported by the European Research Council (grant no. 637995 "ProbDynDispEq"). N.T. was supported by the ANR grant ODA (ANR-18-CE40-0020-01).
Tadahiro Oh, School of Mathematics, The University of Edinburgh, and The Maxwell Institute for the Mathematical Sciences, James Clerk Maxwell Building, The King's Buildings, Peter Guthrie Tait Road, Edinburgh, EH9 3FD, United Kingdom
E-mail address: [email protected]

Nikolay Tzvetkov, Université de Cergy-Pontoise, 2, av. Adolphe Chauvin, 95302 Cergy-Pontoise Cedex, France
E-mail address: [email protected]

Yuzhao Wang, School of Mathematics, The University of Edinburgh, and The Maxwell Institute for the Mathematical Sciences, James Clerk Maxwell Building, The King's Buildings, Peter Guthrie Tait Road, Edinburgh, EH9 3FD, United Kingdom, and School of Mathematics, Watson Building, University of Birmingham, Edgbaston, Birmingham, B15 2TT, United Kingdom
E-mail address: