Remarks on nonlinear smoothing under randomization for the periodic KdV and the cubic Szegö equation
TADAHIRO OH
Abstract.
We consider Cauchy problems of some dispersive PDEs with random initial data. In particular, we construct local-in-time solutions to the mean-zero periodic KdV almost surely for initial data in the support of the mean-zero Gaussian measures on $H^s(\mathbb{T})$, $s > s_0$, where $s_0 = \frac{-11+\sqrt{61}}{6} \approx -0.5316 < -\frac12$, by exhibiting nonlinear smoothing under randomization on the second iteration of the integral formulation. We also show that there is no nonlinear smoothing for the dispersionless cubic Szegő equation under randomization of initial data.

Contents
1. Introduction
1.1. Nonlinear smoothing under randomization on initial data for nonlinear Schrödinger equations
1.2. Korteweg-de Vries equation
1.3. Cubic Szegő equation
2. Notation
3. Basic lemmata and linear estimates
4. On the KdV equation
4.1. Overview; Unboundedness of nonlinear term
4.2. Nonlinear analysis via second iteration
4.3. Local well-posedness
5. On the Szegő equation
References

1. Introduction
1.1. Nonlinear smoothing under randomization on initial data for nonlinear Schrödinger equations.
Key words and phrases: well-posedness; nonlinear smoothing; KdV; Szegő equation.
T.O. would like to thank Département de Mathématiques d'Orsay at Université Paris-Sud, where a part of this manuscript was prepared.

In studying invariance of the Gibbs measure for the defocusing cubic nonlinear Schrödinger equation (NLS) on $\mathbb{T}^2$, Bourgain [4] considered the following
Cauchy problem for the 2-d Wick ordered cubic NLS:
\[ i u_t - \Delta u \pm \Big( u|u|^2 - 2u \fint |u|^2\,dx \Big) = 0, \qquad u|_{t=0} = u_0, \quad x \in \mathbb{T}^2 = \mathbb{R}^2/(2\pi\mathbb{Z})^2, \tag{1.1} \]
where $\fint$ denotes the spatial average, with random initial data $u_0$ of the form
\[ u_0(x) = u_0^\omega(x) = \sum_{n\in\mathbb{Z}^2} \frac{g_n(\omega)}{\sqrt{1+|n|^2}}\, e^{in\cdot x}, \tag{1.2} \]
where $\{g_n\}_{n\in\mathbb{Z}^2}$ is a family of independent standard complex-valued Gaussian random variables on a probability space $(\Omega, \mathcal{F}, P)$. We can regard $u_0$ in (1.2) as a typical element in the support of the Gaussian part of the Gibbs measure on $\mathbb{T}^2$:
\[ d\rho = Z^{-1} \exp\Big( -\frac12 \int |u|^2\,dx - \frac12 \int |\nabla u|^2\,dx \Big) \prod_{x\in\mathbb{T}^2} du(x). \tag{1.3} \]
Note that (1.3) is basically the Wiener measure on $\mathbb{T}^2$. Hence, we see that $u_0$ of the form (1.2) belongs almost surely (a.s.) to $H^s(\mathbb{T}^2)\setminus L^2(\mathbb{T}^2)$ for any $s<0$.

In [2], Bourgain introduced a new weighted space-time Sobolev space $X^{s,b}(\mathbb{T}^d\times\mathbb{R})$, whose norm is given by
\[ \|u\|_{X^{s,b}(\mathbb{T}^d\times\mathbb{R})} = \big\| \langle n\rangle^s \langle \tau - |n|^2\rangle^b\, \widehat{u}(n,\tau) \big\|_{\ell^2_n L^2_\tau(\mathbb{Z}^d\times\mathbb{R})}, \]
where $\langle\,\cdot\,\rangle = 1 + |\cdot|$, and proved that (1.1) is locally well-posed in $H^s(\mathbb{T}^2)$ for $s > 0$. However, deterministic local well-posedness is not known in $L^2(\mathbb{T}^2)$ for (1.1) due to a slight loss of derivative in the periodic $L^4$-Strichartz estimate on $\mathbb{T}^2$.

There are two main components in establishing invariance of Gibbs measures for Hamiltonian PDEs:
(a) construction of the Gibbs measure,
(b) (at least local-in-time) well-posedness on the support of the Gibbs measure.
In [4], Bourgain constructed the Gibbs measure after applying Wick ordering to the nonlinear part of the Hamiltonian (in the defocusing case.) As for (b), one needs to construct a continuous flow of (1.1) on the support of the Gibbs measure, i.e. outside $L^2(\mathbb{T}^2)$. This is exactly the main difficulty in [4], due to the absence of deterministic well-posedness even in $L^2(\mathbb{T}^2)$. In order to resolve this issue, Bourgain considered the Cauchy problem (1.1) with random initial data (1.2) and successfully constructed local-in-time solutions almost surely in $\omega$ by exhibiting nonlinear smoothing under randomization of initial data. Then, by invariance of the finite dimensional Gibbs measures with an approximation argument, such local-in-time solutions were extended globally in time.

We briefly describe Bourgain's idea in the following. First, write (1.1) in the Duhamel formulation:
\[ u(t) = \Gamma u(t) := S(t) u_0 \pm i \int_0^t S(t-t')\, \mathcal{N}(u)(t')\, dt', \tag{1.4} \]
where $S(t) = e^{-it\Delta}$, $u_0$ is as in (1.2), and $\mathcal{N}(u) = u|u|^2 - 2u\fint|u|^2\,dx$. Note that the linear part $S(t)u_0$ has the same regularity as $u_0$ for each fixed $t\in\mathbb{R}$, i.e. $S(t)u_0^\omega \notin L^2(\mathbb{T}^2)$ a.s.

Here, Wick ordering is a renormalization needed for constructing the Gibbs measure on $\mathbb{T}^2$; the equation (1.1) appears as an equivalent formulation of the NLS obtained from the Wick ordered Hamiltonian. We introduced the term $-2u\fint|u|^2\,dx$ to avoid the zero frequency issue; this modification is done implicitly in [4]. The almost sure regularity above can be easily determined from the basic theory of Gaussian measures on Hilbert and Banach spaces. See Kuo [18] and Zhidkov [27].

However, by a combination of deterministic PDE theory and probabilistic techniques, Bourgain showed that $\int_0^t S(t-t')\mathcal{N}(u)(t')\,dt'$ lies almost surely in a smoother space $H^s(\mathbb{T}^2)$ for some small $s>0$. More precisely, for each small $T>0$, there exists a set $\Omega_T$ with complemental measure $< e^{-T^{-\delta}}$ such that for $\omega \in \Omega_T$, $\Gamma$ defined in (1.4) is a contraction on a ball around the linear solution, i.e. on $S(t)u_0^\omega + B_1$, where $B_1$ denotes the ball of radius 1 in the usual Bourgain space $X^{s,\frac12+,T}$ for some small $s>0$, and $X^{s,\frac12+,T}$ is a local-in-time version of the $X^{s,b}$ space on $[-T,T]$.

Recently, several results appeared in this direction, exhibiting nonlinear smoothing under randomization of initial data. See Burq-Tzvetkov [7] for the cubic nonlinear wave equation on a three-dimensional compact Riemannian manifold and Thomann [25] for NLS with a confining potential on $\mathbb{R}^d$. (In [25], there is a statement for NLS without a potential, but the result is stated in terms of the Sobolev space corresponding to the Laplace operator with a confining potential.) Colliander-Oh [10] considered the 1-d Wick ordered cubic NLS (1.1) on $\mathbb{T}$ (both defocusing and focusing) with random initial data $u_0$ of the form:
\[ u_0(x) = u_0^\omega(x) = \sum_{n\in\mathbb{Z}} \frac{g_n(\omega)}{\sqrt{1+|n|^{2\alpha}}}\, e^{inx}. \tag{1.5} \]
Note that $u_0$ in (1.5) is a.s. in $H^{\alpha-\frac12-}(\mathbb{T}) := \bigcap_{s<\alpha-\frac12} H^s(\mathbb{T}) \setminus H^{\alpha-\frac12}(\mathbb{T})$. In particular, $u_0$ is almost surely in the negative Sobolev spaces for $\alpha \le \frac12$. Also, recall that $u_0$ in (1.5) represents a typical element in the support of the Gaussian measure $\rho_\alpha$ on the distributions on $\mathbb{T}$:
\[ d\rho_\alpha = Z^{-1} \exp\Big( -\frac12 \int |u|^2\,dx - \frac12 \int |D^\alpha u|^2\,dx \Big) \prod_{x\in\mathbb{T}} du(x), \tag{1.6} \]
where $D = \sqrt{-\partial_x^2}$. In [10], it is shown that (1.1) is locally well-posed almost surely in $H^{\alpha-\frac12-}(\mathbb{T})$ for each $\alpha > \frac16$, i.e. in $H^s(\mathbb{T})$ for each $s > -\frac13$. Moreover, we constructed almost sure global-in-time solutions of (1.1) for each $\alpha > \frac{5}{12}$, i.e. in $H^s(\mathbb{T})$ for each $s > -\frac{1}{12}$.

The local-in-time argument in [10] closely follows that of Bourgain [4]. The main goal is to show that for each small $T>0$, there exists a set $\Omega_T$ with $P(\Omega_T^c) < e^{-T^{-\delta}}$ such that for $\omega\in\Omega_T$, $\Gamma$ in (1.4) is a contraction on $S(t)u_0^\omega + B_1$, where $B_1$ denotes the ball of radius 1 in $X^{s,\frac12+,T}$ for some $s\ge 0$. By the standard linear estimate on the Duhamel term in (1.4), it suffices to prove
\[ \|\mathcal{N}(u)\|_{X^{s,-\frac12+,T}} \lesssim T^\theta \tag{1.7} \]
for $u \in S(t)u_0^\omega + B_1$ with some $\theta>0$, where $S(t) = e^{-it\partial_x^2}$, $u_0^\omega$ is as in (1.5), and $\mathcal{N}(u) = \mathcal{N}(u,u,u) = u|u|^2 - 2u\fint|u|^2\,dx$. Since $u\in S(t)u_0^\omega + B_1$, we can write $u$ as $u = S(t)u_0^\omega + v$ for some $v$ with $\|v\|_{X^{s,\frac12+,T}} \le 1$. Hence, it suffices to show (1.7) for $\mathcal{N}(u_1,u_2,u_3)$, assuming that $u_j$ is either of the type
(I) linear part: random, less regular
\[ u_j(x,t) = S(t)u_0^\omega = \sum_{n\in\mathbb{Z}} \frac{g_n(\omega)}{\sqrt{1+|n|^{2\alpha}}}\, e^{i(nx+n^2t)}, \quad \text{or} \]
(II) nonlinear part: deterministic, smoother $u_j = v_j$ with $\|v_j\|_{X^{s,\frac12+,T}} \le 1$.
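The almost sure regularity claims for randomized data such as (1.2) and (1.5) follow from a standard second-moment computation; as a sketch, for $u_0^\omega$ as in (1.5),

```latex
\mathbb{E}\,\|u_0^\omega\|_{H^s(\mathbb{T})}^2
  = \sum_{n\in\mathbb{Z}} \frac{\langle n\rangle^{2s}}{1+|n|^{2\alpha}}\,
      \mathbb{E}|g_n(\omega)|^2
  \sim \sum_{n\in\mathbb{Z}} \langle n\rangle^{2(s-\alpha)}
  < \infty
  \iff s < \alpha - \tfrac{1}{2}.
```

Together with a zero-one law for Gaussian Fourier series, this gives $u_0^\omega \in H^{s}(\mathbb{T})$ a.s. for every $s < \alpha - \frac12$, while $u_0^\omega \notin H^{\alpha-\frac12}(\mathbb{T})$ a.s.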
Then, (1.7) was established by a combination of deterministic multilinear analysis and probabilistic arguments; in particular, the hypercontractivity of the Ornstein-Uhlenbeck semigroup related to products of Gaussian random variables played an important role.

In the following subsections, we discuss our main results. The first result is on the KdV equation with a derivative nonlinearity. We show that a simple application of the ideas in probabilistic Cauchy theory [4, 10] with a fixed point argument fails (Subsection 4.1.) Nonetheless, we apply the second iteration argument in the probabilistic setting to improve the known deterministic well-posedness results (without using complete integrability), i.e. $s \ge -\frac12$. See Theorem 1.1. The second result is on the dispersionless cubic Szegő equation (see (1.17) below.) Here, we show that even with randomization on initial data, one cannot improve the deterministic well-posedness result (Proposition 1.6.) This result in particular indicates that it is important to have both randomization and dispersion together to yield an improvement over deterministic results.

1.2. Korteweg-de Vries equation.
In this part, we consider the periodic Korteweg-de Vries (KdV) equation:
\[ u_t + u_{xxx} + u u_x = 0, \qquad u|_{t=0} = u_0, \quad x \in \mathbb{T} = \mathbb{R}/2\pi\mathbb{Z}, \tag{1.8} \]
with random initial data $u_0$ of the form
\[ u_0(x) = u_0^\omega(x) = \sum_{n\ne 0} \frac{g_n(\omega)}{|n|^\alpha}\, e^{inx} \in \dot H^{\alpha-\frac12-}(\mathbb{T}) \quad \text{a.s.}, \tag{1.9} \]
where $\{g_n\}_{n\in\mathbb{N}}$ is a family of independent standard complex-valued Gaussian random variables with $g_{-n} = \overline{g_n}$ for $n\in\mathbb{N}$, and $\dot H^s(\mathbb{T})$ denotes the subspace of the Sobolev space $H^s(\mathbb{T})$ consisting of real-valued mean-zero elements. Note that $u_0$ in (1.9) represents a typical element in the support of the Gaussian measure on the real-valued mean-zero distributions on $\mathbb{T}$:
\[ d\rho_{0,\alpha} = Z^{-1} \exp\Big( -\frac12 \int (D^\alpha u)^2\,dx \Big) \prod_{x\in\mathbb{T}} du(x), \quad u \text{ mean } 0. \tag{1.10} \]
Our main goal is to construct local-in-time solutions in $C([-T,T]; \dot H^{\alpha-\frac12-}(\mathbb{T}))$ for each $\alpha \in (\alpha_0, 0]$ with some $\alpha_0 < 0$.

Let $X^{s,b}$ denote the $X^{s,b}$ space adapted to (the linear part of) KdV with the norm
\[ \|u\|_{X^{s,b}(\mathbb{T}\times\mathbb{R})} = \|\langle n\rangle^s \langle\tau - n^3\rangle^b\, \widehat{u}(n,\tau)\|_{\ell^2_n L^2_\tau(\mathbb{Z}\times\mathbb{R})}. \tag{1.11} \]
Bourgain [3] proved local well-posedness of (1.8) in $L^2(\mathbb{T})$ via the fixed point argument, immediately yielding global well-posedness in $L^2(\mathbb{T})$ thanks to the conservation of the $L^2$-norm. Kenig-Ponce-Vega [14] (also see [9]) improved Bourgain's result and proved local well-posedness in $H^{-\frac12}(\mathbb{T})$ by establishing the bilinear estimate
\[ \|\partial_x(uv)\|_{X^{s,-\frac12}} \lesssim \|u\|_{X^{s,\frac12}} \|v\|_{X^{s,\frac12}} \tag{1.12} \]
for $s \ge -\frac12$ under the mean zero assumption on $u$ and $v$. Colliander-Keel-Staffilani-Takaoka-Tao [9] proved the corresponding global well-posedness result via the $I$-method.

There are also results on (1.8) which exploit its complete integrability. In [5], Bourgain proved global well-posedness of (1.8) in the class $M(\mathbb{T})$ of measures $\mu$, assuming that the total variation $\|\mu\|$ is sufficiently small. His proof is based on the trilinear estimate for the second iteration of the integral formulation of (1.8), assuming an a priori uniform bound on the Fourier coefficients of the solution $u$ of the form
\[ \sup_{n\in\mathbb{Z}} |\widehat{u}(n,t)| < C \tag{1.13} \]
for all $t\in\mathbb{R}$. Then, he established (1.13) using the complete integrability. More recently, Kappeler-Topalov [12] proved global well-posedness of KdV in $H^{-1}(\mathbb{T})$ via the inverse spectral method.

Next, we state results regarding the necessary conditions on the regularity with respect to smoothness or uniform continuity of the solution map: $u_0 \in H^s(\mathbb{T}) \to u(t) \in H^s(\mathbb{T})$. Bourgain [5] showed that if the solution map is $C^3$, then $s \ge -\frac12$. Christ-Colliander-Tao [8] proved that if the solution map is uniformly continuous, then $s \ge -\frac12$. (Also, see Kenig-Ponce-Vega [15].) These results, in particular, state that it is non-trivial to construct solutions in $C([-T,T]; H^s)$ for $s < -\frac12$.

In the following, we combine deterministic and probabilistic arguments to construct local-in-time solutions $u \in C([-T,T]; H^s)$, $s = \alpha - \frac12- < -\frac12$, with initial data (1.9), where $T = T(\omega)$. The main difficulty is to lower the differentiability so that $\alpha \le 0$, i.e. $s < -\frac12$. Our focus is to develop a method without complete integrability of (1.8), exploiting nonlinear smoothing under randomization. It turns out that when $\alpha \le 0$, Bourgain's idea in [4] - a fixed point argument around the linear solution discussed in Subsection 1.1 (as in [4, 7, 10, 25]) - does not work for (1.8). See Subsection 4.1. Instead, we adapt the second iteration argument in the probabilistic setting.

The second iteration argument is the name of the method, where we (partially) iterate the Duhamel formulation as in (4.16) and (4.17). This is not to be confused with the second iterate as in (1.21) and (4.7).
Theorem 1.1.
Let $a$ be the positive root of $3a^2 + 5a - 3 = 0$, given by
\[ a = \frac{-5+\sqrt{61}}{6} \approx 0.4684, \tag{1.14} \]
and let $\alpha_0 := a - \frac12 \approx -0.0316$. Then, for each $\alpha \in (\alpha_0, 0] \approx (-0.0316, 0]$, KdV (1.8) is locally well-posed almost surely in $\dot H^{\alpha-\frac12-}(\mathbb{T})$. More precisely, there exist $c, \beta > 0$ such that for each $T \ll 1$, there exists a set $\Omega_T \in \mathcal{F}$ with the following properties:
(i) $P(\Omega_T^c) = \rho_{0,\alpha} \circ u_0(\Omega_T^c) < e^{-cT^{-\beta}}$, where $u_0: \Omega \to \dot H^{\alpha-\frac12-}(\mathbb{T})$.
(ii) For each $\omega \in \Omega_T$, there exists a (unique) solution $u$ of (1.8) in
\[ e^{-t\partial_x^3} u_0 + C([-T,T]; \dot H^{-\frac12+0}(\mathbb{T})) \subset C([-T,T]; \dot H^{\alpha-\frac12-}(\mathbb{T})) \]
with the initial condition $u_0^\omega$ given by (1.9). Here, the uniqueness holds only in a mild sense. See Subsection 4.3.
In particular, we have almost sure local well-posedness of KdV with respect to the Gaussian measure $\rho_{0,\alpha}$ in (1.10) supported on $H^s(\mathbb{T})$ for each $s > s_0$ with
\[ s_0 = \frac{-11+\sqrt{61}}{6} \approx -0.5316. \tag{1.15} \]

These results are often referred to as ill-posedness results.
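For the reader's convenience, the numerology in Theorem 1.1 can be checked numerically; the short script below (an illustrative check, not part of the original argument) verifies that $a$ solves the quadratic in (1.14) and reproduces the approximate values of $\alpha_0$ and $s_0$.

```python
import math

# a is the positive root of 3a^2 + 5a - 3 = 0, as in (1.14)
a = (-5 + math.sqrt(61)) / 6

alpha0 = a - 0.5   # threshold alpha_0 in Theorem 1.1
s0 = alpha0 - 0.5  # equals a - 1, the threshold regularity s_0 in (1.15)

assert abs(3 * a**2 + 5 * a - 3) < 1e-12  # a solves the quadratic
assert -0.54 < s0 < -0.5                  # s_0 lies slightly below -1/2

print(f"a = {a:.4f}, alpha_0 = {alpha0:.4f}, s_0 = {s0:.4f}")
```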
The novelty of Theorem 1.1 is the construction of local-in-time solutions in $C([-T,T]; H^s(\mathbb{T}))$ with $s<-\frac12$ without using complete integrability. This is non-trivial in view of the ill-posedness results in $H^s(\mathbb{T})$, $s<-\frac12$, described above. The argument combines the second iteration and the probabilistic argument, and does not rely on complete integrability of the equation. Therefore, it can be applied to other non-integrable KdV variants. Lastly, the basic argument in the previous works such as [4, 7, 10, 25] in random Cauchy theory is based on a fixed point argument around the (random) linear solution discussed in Subsection 1.1. (This can also be viewed as a fixed point argument for the nonlinear part, regarding the linear part as a random "forcing term".) As shown in Subsection 4.1, an attempt to follow this argument fails for KdV. The proof of Theorem 1.1 presents a new way (via a non-"fixed point" argument - namely the second iteration argument in the probabilistic setting) to construct solutions in random Cauchy theory. See Richards [24] for a successful application of this idea to the random Cauchy theory of the quartic KdV (with nonlinearity $u^3 u_x$.)

Remark 1.2.
The regularity $s_0 \approx -0.5316$ in Theorem 1.1 is obtained by applying the deterministic estimates in [5] for $s < -\frac12$ without modification. However, the values of $s$ and $b$ of the $X^{s,b}$-norm in these estimates from [5] are strongly interrelated, giving a restriction on the regularity. See Section 4. We believe that one can lower the regularity $s_0$ in Theorem 1.1 by improving the deterministic estimates in [5] with the values of $s$ and $b$ "independent". (Namely, keep $b = \frac12-$ or $\frac12+$ and determine the value of $s$ for which the estimates hold.) We do not pursue this direction. Lastly, note that Theorem 1.1 covers the critical value $\alpha = 0$. See Remark 1.3.

Remark 1.3.
The value $\alpha = 0$, corresponding to the regularity $s = -\frac12-$, is the critical value in terms of the known well-posedness/ill-posedness results discussed above. Moreover, when $\alpha = 0$, $u_0$ in (1.9) corresponds to the (mean-zero) Gaussian white noise on $\mathbb{T}$, which is formally invariant in view of the $L^2$-conservation. Then, one can follow Bourgain's argument in [4] (i) to extend the solutions in Theorem 1.1 globally in time, i.e. in $C(\mathbb{R}_t; H^{-\frac12-}(\mathbb{T}))$, and (ii) to establish invariance of the white noise under the flow of KdV. See Quastel-Valkó [23], Oh [19, 21], and Oh-Quastel-Valkó [22] for other proofs of invariance of white noise for KdV. Recently, Richards [24] applied the second iteration argument to the quartic KdV in the probabilistic setting and constructed solutions below the deterministic threshold $s = \frac12$ (and established invariance of the Gibbs measure for the quartic KdV.)

Remark 1.4.
In [19, 20, 21], we proved local well-posedness of (1.8) in $\mathcal{F}L^{s,p}(\mathbb{T})$ with $s \ge -\frac12$, $p = 2+$, $sp < -1$, where the Fourier-Lebesgue space $\mathcal{F}L^{s,p}(\mathbb{T})$ is defined by the norm
\[ \|f\|_{\mathcal{F}L^{s,p}(\mathbb{T})} = \|\langle n\rangle^s \widehat{f}(n)\|_{\ell^p_n(\mathbb{Z})}. \tag{1.16} \]
The proof is based on the second iteration argument introduced in [5]. It is known [1, 19] that $u_0$ in (1.9) is almost surely in $\mathcal{F}L^{s,p}$ for $(s-\alpha)p < -1$. Hence, this result shows that there exists $\alpha_1 < 0$ such that for each $\alpha \in (\alpha_1, 0]$, we have local-in-time solutions $u \in C([-T,T]; \mathcal{F}L^{s,p})$ with $s = \alpha - \frac1p-$, where $T = T(\|u_0(\omega)\|_{\mathcal{F}L^{s,p}})$. We point out that this deterministic well-posedness argument in $\mathcal{F}L^{s,p}$ completely fails when $p = 2$ and $s < -\frac12$. In particular, Theorem 1.1 does not follow from the results in [19, 20, 21].
Remark 1.5.
A linear part of a solution constructed in Theorem 1.1 indeed lies in $C([-T,T]; B(\mathbb{T}))$ for any Banach space $B(\mathbb{T}) \supset \dot H^\alpha(\mathbb{T})$ such that $(\dot H^\alpha, B, \rho_{0,\alpha})$ is an abstract Wiener space (roughly speaking, any Banach space $B$ containing $\dot H^\alpha$ on which the Gaussian measure $\rho_{0,\alpha}$ makes sense as a countably additive probability measure.) In this case, a solution $u$ to (1.8), decomposed into its linear and nonlinear parts, lies in
\[ u = e^{-t\partial_x^3} u_0 + \big( u - e^{-t\partial_x^3} u_0 \big) \in C([-T,T]; B(\mathbb{T})) + C([-T,T]; \dot H^{-\frac12+0}(\mathbb{T})). \]
As examples of $B$, we can take the Sobolev spaces $W^{s,p}$ with $s < \alpha - \frac12$, and the Fourier-Lebesgue spaces $\mathcal{F}L^{s,p}$ with $s < \alpha - \frac1p$, where $\mathcal{F}L^{s,p}$ is defined via the norm (1.16). See Bényi-Oh [1] for the regularity of $\rho_{0,\alpha}$ in different function spaces. We can also take the Besov spaces $B^{\alpha-\frac12}_{p,\infty}$ with $p < \infty$. In [1], we study the regularity of $\rho_\alpha$ in (1.6) with $\alpha = 1$, but it can be easily adjusted for $\rho_{0,\alpha}$ in (1.10) with any $\alpha$.

1.3. Cubic Szegő equation.
In studying the cubic NLS: $iu_t - \Delta u = |u|^2 u$ on a manifold $M$, Burq-Gérard-Tzvetkov [6] observed that dispersive properties are strongly influenced by the geometry of the underlying manifold $M$. Gérard-Grellier [11] further pointed out that dispersion disappears completely when $M$ is a sub-Riemannian manifold. As a toy model to study non-dispersive Hamiltonian equations, Gérard-Grellier [11] introduced the cubic Szegő equation:
\[ iu_t = \Pi(|u|^2 u), \qquad u|_{t=0} = u_0, \quad x \in \mathbb{T}, \tag{1.17} \]
where $\Pi$ is the Szegő projector onto the non-negative frequencies, i.e.
\[ \Pi(f) := \sum_{n \ge 0} \widehat{f}(n)\, e^{inx}. \]
It turned out that (1.17) is completely integrable with infinitely many conservation laws and that the analysis of (1.17) has a strong connection to the theory of complex variables. It is shown in [11] that (1.17) is globally well-posed in $H^{\frac12}_+(\mathbb{T}) := \Pi(H^{\frac12}(\mathbb{T}))$ via the energy method (for $s > \frac12$ - the argument for $s = \frac12$ is more intricate) and the conservation of the $H^{\frac12}_+$-norm. Our interest is to investigate if there is any nonlinear smoothing by considering random initial data $u_0$ of the form
\[ u_0(x) = u_0^\omega(x) = \sum_{n\ge 0} \frac{g_n(\omega)}{\sqrt{1+|n|^{2\alpha}}}\, e^{inx} \in H^{\alpha-\frac12-}_+(\mathbb{T}) \setminus H^{\alpha-\frac12}_+(\mathbb{T}), \quad \text{a.s.} \tag{1.18} \]
for $\alpha \le 1$. First, write (1.17) in the integral formulation:
\[ u(t) = u_0 - i\,\mathcal{N}(u,u,u)(t), \tag{1.19} \]
where $\mathcal{N}(\cdot,\cdot,\cdot)$ is given by
\[ \mathcal{N}(u_1,u_2,u_3)(t) = \int_0^t \Pi(u_1 u_2 \overline{u_3})(t')\,dt'. \tag{1.20} \]
In the following, we consider the second iterate:
\[ z(t) = u_0 - i\,\mathcal{N}(u_0,u_0,u_0)(t), \tag{1.21} \]
where $u_0$ is as in (1.18).

Strictly speaking, we need that $e^{-t\partial_x^3}$ acts on $B(\mathbb{T})$ continuously; e.g. we need $p < \infty$ for $\mathcal{F}L^{s,p}$.
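Note that, in the absence of dispersion, the second iterate (1.21) is completely explicit: since $u_0$ does not depend on time, (1.20) gives

```latex
\mathcal{N}(u_0,u_0,u_0)(t)
  = \int_0^t \Pi\big( u_0\, u_0\, \overline{u_0} \big)(t')\,dt'
  = t\,\Pi\big( |u_0|^2 u_0 \big),
\qquad \text{so} \qquad
z(t) = u_0 - i\,t\,\Pi\big( |u_0|^2 u_0 \big).
```

There is no time oscillation to exploit, and the regularity of $z(t)$ is decided entirely by that of the product $\Pi(|u_0|^2 u_0)$.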
Proposition 1.6.
Let $u_0$ be as in (1.18).
(a) Let $\alpha \in (\frac12, 1]$. Then, for $s \ge \alpha - \frac12$, we have
\[ \big\| \mathcal{N}(u_0,u_0,u_0) \big\|_{C([-T,T]; H^s_+)} = \infty, \quad \text{a.s.} \]
for any $T > 0$. In particular, even with $\alpha = 1$, the second iterate $z(t)$ for the cubic Szegő equation (1.17) is a.s. unbounded in $H^{\frac12}_+$.
(b) Let $\alpha > \frac12$. Then, the second iterate $z(t)$ for the cubic Szegő equation (1.17) is a.s. bounded in $H^s_+$ with $s = \alpha - \frac12 - \varepsilon$ for any $\varepsilon > 0$.

Proposition 1.6 (a) shows that there is no gain of regularity, even for $\alpha = 1$. In view of the well-posedness of (1.17) in $H^{\frac12}_+$, we conclude that there is no smoothing upon randomization of initial data. See also Remark 5.1. This shows that dispersion is closely related to nonlinear smoothing under randomization of initial data. See Section 5 for details.

Remark 1.7.
It is interesting to compare Proposition 1.6 with the boundedness of the second iterate for KdV; in Remark 4.1, we show that the nonlinear part of the second iterate for KdV is bounded in $L^2(\mathbb{T})$ even for deterministic mean-zero initial data $u_0$ in $H^s(\mathbb{T})$ as long as $s > -\frac34$.

On the one hand, the failure of well-posedness for the cubic Szegő equation (1.17) below $H^{\frac12}_+(\mathbb{T})$ is due to the unboundedness of the second iterate below $H^{\frac12}_+(\mathbb{T})$ (indeed even in $H^{\frac12}_+(\mathbb{T})$ - see [11]), and this problem can not be removed by considering the random initial data of the form (1.18). On the other hand, the failure of well-posedness for KdV below $H^{-\frac12}(\mathbb{T})$ is not caused by the second iterate, and randomization of initial data with the second iteration argument comes to the rescue. We also point out that while the failure of well-posedness of the Wick ordered cubic NLS (1.1) below $L^2$ is due to the unboundedness of the second iterate below $L^2$, this problem can be removed by a simple fixed point argument (around the linear solution) with random initial data.

Lastly, recall an analogous result for the Benjamin-Ono equation by Tzvetkov [26]. He showed that the second iterate with initial data $u_0$ of the form (1.9) with $\alpha = \frac12$ is in $H^s(\mathbb{T}) \setminus L^2(\mathbb{T})$ for any $s < 0$. (Recall that the threshold regularity of the deterministic well-posedness theory is $L^2(\mathbb{T})$. See Molinet [16, 17].) On the one hand, this result shows that there is no nonlinear smoothing under randomization of initial data. On the other hand, the second iterate is at least as regular as the random initial data. Proposition 1.6 (b) also states that the second iterate is at least as regular as the random initial data. Although there is no nonlinear smoothing under randomization of initial data, these results state that there is still a possibility of constructing solutions below the deterministic threshold regularities. However, such a construction is out of reach at this point.

This paper is organized as follows: We introduce notations in Section 2 and state basic lemmata in Section 3. In Section 4, we prove Theorem 1.1 by establishing the nonlinear estimate on the second iteration of the integral formulation. In Section 5, we show that there is no extra smoothing for the cubic Szegő equation upon randomization of initial data.

2. Notation
Let $X^{s,b}$ denote the periodic Bourgain space defined in (1.11). We often use the shorthand notation $\|\cdot\|_{s,b}$ to denote the $X^{s,b}$ norm. Since the $X^{s,\frac12}$ norm fails to control the $L^\infty_t H^s_x$ norm, we introduce a smaller space $Z^{s,b}(\mathbb{T}\times\mathbb{R})$ whose norm is given by
\[ \|u\|_{Z^{s,b}(\mathbb{T}\times\mathbb{R})} := \|u\|_{X^{s,b}(\mathbb{T}\times\mathbb{R})} + \|u\|_{Y^{s,b-\frac12}(\mathbb{T}\times\mathbb{R})}, \tag{2.1} \]
where $\langle\,\cdot\,\rangle = 1+|\cdot|$ and $\|u\|_{Y^{s,b}(\mathbb{T}\times\mathbb{R})} = \|\langle n\rangle^s \langle\tau-n^3\rangle^b\, \widehat{u}(n,\tau)\|_{\ell^2_n L^1_\tau(\mathbb{Z}\times\mathbb{R})}$. We also define the local-in-time version $Z^{s,b,T}$ on $\mathbb{T}\times[-T,T]$ by
\[ \|u\|_{Z^{s,b,T}} = \inf\big\{ \|\widetilde u\|_{Z^{s,b}(\mathbb{T}\times\mathbb{R})} :\ \widetilde u|_{[-T,T]} = u \big\}. \]
The local-in-time versions of other function spaces are defined analogously.

If a function depends on both $x$ and $t$, we use $\wedge_x$ (and $\wedge_t$) to denote the spatial (and temporal) Fourier transform, respectively. However, when there is no confusion, we simply use $\wedge$ to denote the spatial Fourier transform, the temporal Fourier transform, and the space-time Fourier transform, depending on the context. For simplicity, we often drop the factor of $2\pi$ in dealing with the Fourier transforms. If a function $f$ is random, we may use the superscript $f^\omega$ to show the dependence on $\omega$.

Lastly, let $\eta \in C^\infty_c(\mathbb{R})$ be a smooth cutoff function supported on $[-2,2]$ with $\eta \equiv 1$ on $[-1,1]$, and let $\eta_T(t) = \eta(T^{-1}t)$. We use $c, C$ to denote various constants, usually depending only on $\alpha$ and $s$. If a constant depends on other quantities, we will make it explicit. We use $A \lesssim B$ to denote an estimate of the form $A \le CB$. Similarly, we use $A \sim B$ to denote $A \lesssim B$ and $B \lesssim A$, and use $A \ll B$ when there is no general constant $C$ such that $B \le CA$. We also use $a+$ (and $a-$) to denote $a+\varepsilon$ (and $a-\varepsilon$), respectively, for arbitrarily small $\varepsilon \ll 1$.

3. Basic lemmata and linear estimates
We first state several useful lemmata. See [10] for the proofs. Recall that by restricting the Bourgain spaces onto a small time interval $[-T,T]$, we can gain a small power of $T$ (at a loss of regularity in $\langle\tau - n^3\rangle$.)

Lemma 3.1. For $b > b' \ge 0$, we have
\[ \|u\|_{X^{s,b',T}} \le \|\eta_T u\|_{X^{s,b'}} \lesssim T^{b-b'-}\, \|u\|_{X^{s,b,T}}. \tag{3.1} \]
The proof basically follows from
\[ \|\widehat{\eta_T}\|_{L^q_\tau} \sim T^{\frac{q-1}{q}}\, \|\widehat{\eta}\|_{L^q_\tau} \sim T^{\frac{q-1}{q}}, \tag{3.2} \]
where $\widehat{\eta_T}(\tau) = T\widehat{\eta}(T\tau)$, and interpolation.

Next, we present a probabilistic lemma related to the Gaussian random variables.
Lemma 3.2.
Let $\varepsilon, \beta > 0$. Then, for $T \ll 1$, we have
\[ |g_n(\omega)| \le C_\varepsilon T^{-\beta} \langle n\rangle^\varepsilon \tag{3.3} \]
for all $n \in \mathbb{Z}$, for $\omega$ outside an exceptional set of measure $< e^{-cT^{-\beta}}$.

Now, we briefly go over the linear estimates related to KdV. Let $S(t) = e^{-t\partial_x^3}$ and $T \le 1$.

Lemma 3.3. For any $s \in \mathbb{R}$ and $b < \frac12$, we have
\[ \|S(t) u_0\|_{X^{s,b,T}} \lesssim T^{\frac12-b}\, \|u_0\|_{H^s}. \]

Lemma 3.4. For any $s \in \mathbb{R}$ and $b \le \frac12$, we have
\[ \bigg\| \int_0^t S(t-t') F(x,t')\,dt' \bigg\|_{X^{s,b,T}} \lesssim \|F\|_{Z^{s,b-1,T}}. \]
Also, we have $\big\| \int_0^t S(t-t') F(x,t')\,dt' \big\|_{X^{s,b,T}} \lesssim \|F\|_{X^{s,b-1}}$ for $b > \frac12$.

The next lemma is the periodic $L^4$-Strichartz estimate for KdV due to Bourgain [3].

Lemma 3.5. Let $u$ be a function on $\mathbb{T}\times\mathbb{R}$. Then, we have
\[ \|u\|_{L^4_{x,t}} \lesssim \|u\|_{X^{0,\frac13}}. \]

4. On the KdV equation
In this section, we construct local-in-time solutions to (1.8) with random initial data of the form (1.9) for $\alpha \in (\alpha_0, 0]$, where $\alpha_0$ is as in Theorem 1.1.

4.1. Overview; Unboundedness of nonlinear term.
By writing KdV (1.8) in the Duhamel formulation, we have
\[ u(t) = S(t)u_0 + \mathcal{N}(u,u)(t), \tag{4.1} \]
where $S(t) = e^{-t\partial_x^3}$ and $\mathcal{N}(\cdot,\cdot)$ is given by
\[ \mathcal{N}(u_1,u_2)(t) := -\frac12 \int_0^t S(t-t')\, \partial_x(u_1 u_2)(t')\,dt'. \tag{4.2} \]
It follows from the conservation of the mean that $u(t)$ has spatial mean $0$ for each $t\in\mathbb{R}$, since $u_0$ has mean $0$. We use $(n,\tau)$, $(n_1,\tau_1)$, and $(n_2,\tau_2)$ to denote the Fourier variables for $uu$, the first factor $u$, and the second factor $u$ of $uu$ in $\mathcal{N}(u,u)$, respectively; i.e. we have $n = n_1+n_2$ and $\tau = \tau_1+\tau_2$. By the mean zero assumption on $u$ and by the fact that we have $\partial_x(uu)$ in the definition of $\mathcal{N}(u,u)$, we may assume $n, n_1, n_2 \ne 0$. We also use the following notation:
\[ \sigma := \langle \tau - n^3\rangle \quad \text{and} \quad \sigma_j := \langle \tau_j - n_j^3\rangle. \]
One of the main ingredients is the observation due to Bourgain [3]:
\[ n^3 - n_1^3 - n_2^3 = 3 n n_1 n_2, \quad \text{for } n = n_1 + n_2, \tag{4.3} \]
which in turn implies that
\[ \mathrm{MAX} := \max(\sigma, \sigma_1, \sigma_2) \gtrsim \big\langle (\tau-n^3) - (\tau_1-n_1^3) - (\tau_2-n_2^3) \big\rangle \sim \langle n n_1 n_2 \rangle. \tag{4.4} \]
This estimate (4.4) played a crucial role in establishing the bilinear estimate (1.12).

If we were to proceed as in [4, 10], then we would need to estimate $\|\mathcal{N}(u_1,u_2)\|_{Z^{-\frac12,\frac12,T}}$, assuming that $u_j$ is either of type
(I) random, less regular:
\[ u_j(t) = \eta_T(t) \sum_{n\ne 0} \frac{g_n(\omega)}{|n|^\alpha}\, e^{i(nx+n^3 t)}, \quad \text{or} \tag{4.5} \]
(II) deterministic, smoother: $u_j = v$ with $\|v\|_{Z^{-\frac12,\frac12,T}} \le 1$.
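The identity (4.3) is elementary; for $n = n_1 + n_2$,

```latex
n^3 - n_1^3 - n_2^3
  = (n_1+n_2)^3 - n_1^3 - n_2^3
  = 3n_1^2 n_2 + 3 n_1 n_2^2
  = 3 n_1 n_2 (n_1+n_2)
  = 3 n\, n_1 n_2 ,
```

and (4.4) follows since $(\tau-n^3)-(\tau_1-n_1^3)-(\tau_2-n_2^3) = -(n^3-n_1^3-n_2^3) = -3nn_1n_2$ when $\tau = \tau_1+\tau_2$.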
In the following, we show that the estimate on $\|\mathcal{N}(u_1,u_2)\|_{Z^{-\frac12,\frac12,T}}$ fails to hold by considering the case when both $u_1$ and $u_2$ are of type (I).

By computing the Duhamel term explicitly (see (4.19) below), one of the main contributions to $\|\mathcal{N}(u_1,u_2)\|_{Z^{-\frac12,\frac12,T}}$ is given by $\|\partial_x(u_1u_2)\|_{X^{-\frac12,-\frac12,T}}$. Now, assume that $u_1$ and $u_2$ are of type (I), i.e. we have $\widehat{u_j}(n_j,\tau_j) = \widehat{\eta_T}(\tau_j - n_j^3)\, g_{n_j} |n_j|^{-\alpha}$, $j = 1,2$, with $\eta_T$ as in (4.5). For the sake of the argument, replace $\widehat{\eta_T}$ by the Dirac delta function, i.e. take $\widehat{u_j}(n_j,\tau_j) = \delta(\tau_j - n_j^3)\, g_{n_j} |n_j|^{-\alpha}$, $j=1,2$. Hence, from (4.4) with $\sigma_1, \sigma_2 \sim 1$, we have $\sigma = \mathrm{MAX} \sim \langle n n_1 n_2\rangle$. Then, we have
\[ \|\partial_x(u_1u_2)\|_{X^{-\frac12,-\frac12}}^2 = \sum_n \int \frac{|n|^2 \langle n\rangle^{-1}}{\langle\tau-n^3\rangle}\, \bigg| \sum_{n=n_1+n_2} \int_{\tau=\tau_1+\tau_2} \widehat{u_1}(n_1,\tau_1)\, \widehat{u_2}(n_2,\tau_2)\, d\tau_1 \bigg|^2 d\tau. \tag{4.6} \]
Expanding the square, we have a nontrivial contribution in (4.6) only when $n_1+n_2 = m_1+m_2$ and $n_1^3+n_2^3 = m_1^3+m_2^3$, which forces $\{n_1, n_2\} = \{m_1, m_2\}$. Therefore, using $\sigma \sim \langle nn_1n_2\rangle$, we have
\[ (4.6) \sim \sum_n \sum_{n=n_1+n_2} \frac{|g_{n_1}|^2}{|n_1|^{1+2\alpha}}\, \frac{|g_{n_2}|^2}{|n_2|^{1+2\alpha}} \sim \sum_{n_1} \frac{|g_{n_1}|^2}{|n_1|^{1+2\alpha}} \sum_{n_2} \frac{|g_{n_2}|^2}{|n_2|^{1+2\alpha}} = \infty, \quad \text{a.s.} \]
for $\alpha \le 0$. The last equality holds from the following. Let $F_j(\omega) := 2^{-j} \sum_{|n|\sim 2^j} |g_n(\omega)|^2$. Then, $F_j$ converges to $\mathrm{Var}(g_n) = 2$ a.s. by the strong law of large numbers. Hence, the tails of the above sums do not converge to $0$.

The above computation involving the Dirac delta function is somewhat formal. It can be made rigorous by using a smooth cutoff function $\eta_T$; however, we omit the details. As a conclusion, we see that a simple application of the ideas from [4, 10] fails for KdV. In the remaining part of this section, we construct local-in-time solutions by adapting the second iteration argument [5, 20] in the probabilistic setting.

Remark 4.1.
Consider the second iterate of the Duhamel formulation (4.1) of KdV:
\[ z(t) = S(t)u_0 + \mathcal{N}(S(t)u_0, S(t)u_0)(t), \tag{4.7} \]
where $\mathcal{N}(\cdot,\cdot)$ is defined in (4.2) and $u_0 \in H^s(\mathbb{T})$. In the following, we show that the nonlinear part of the second iterate is bounded in $L^2(\mathbb{T})$ as long as $u_0$ is in $H^s(\mathbb{T})$ for $s > -\frac34$. In particular, this shows that the failure of (analytic) well-posedness for KdV below $H^{-\frac12}(\mathbb{T})$ is not due to the second iterate.

Fix $t > 0$. Then, we have
\[ \big\| \mathcal{N}(S(t)u_0, S(t)u_0)(t) \big\|_{L^2} \sim \bigg( \sum_n \bigg| \int_0^t e^{-it'n^3}\, in \sum_{n=n_1+n_2} \prod_{j=1}^2 e^{it'n_j^3}\, \widehat{u_0}(n_j)\, dt' \bigg|^2 \bigg)^{\frac12}. \]
Since $u_0$ has mean zero on $\mathbb{T}$, we assume $n_1, n_2 \ne 0$ in the following. Moreover, we may assume $n \ne 0$ thanks to the derivative in the nonlinearity $\partial_x(u_1u_2)$. First integrate in $t'$ with (4.3). Then, by Young's inequality followed by Hölder's inequality, we have
\begin{align*}
\big\| \mathcal{N}(S(t)u_0, S(t)u_0)(t)\big\|_{L^2}
&\lesssim \bigg( \sum_n \bigg| in \sum_{n=n_1+n_2} \prod_{j=1}^2 \widehat{u_0}(n_j) \int_0^t e^{-it'(n^3-n_1^3-n_2^3)}\,dt' \bigg|^2 \bigg)^{\frac12} \\
&\lesssim \bigg( \sum_n \bigg| \sum_{n=n_1+n_2} \prod_{j=1}^2 |n_j|^{-1}\, |\widehat{u_0}(n_j)| \bigg|^2 \bigg)^{\frac12}
\lesssim \prod_{j=1}^2 \big\| \langle n_j\rangle^{-1} |\widehat{u_0}(n_j)| \big\|_{\ell^{\frac43}_{n_j}} \\
&\le \prod_{j=1}^2 \|\langle n_j\rangle^{-1-s}\|_{\ell^4_{n_j}}\, \|\langle n_j\rangle^s\, \widehat{u_0}(n_j)\|_{\ell^2_{n_j}} \lesssim \|u_0\|_{H^s}^2
\end{align*}
as long as $4(-1-s) < -1$, i.e. $s > -\frac34$. By considering the random initial data $u_0$ of the form (1.9), we can also show that the nonlinear part of the second iterate is a.s. bounded in $L^2$ as long as $\alpha > -\frac12$, namely $s > -1$. We omit the details.

4.2. Nonlinear analysis via second iteration.
First, we briefly go over Bourgain's argument in [5]. Define
\[ A_j = \{ (n, n_1, n_2, \tau, \tau_1, \tau_2) \in \mathbb{Z}^3\times\mathbb{R}^3 :\ \sigma_j = \mathrm{MAX} \}, \quad j = 0, 1, 2, \tag{4.8} \]
(with $\sigma_0 := \sigma$), and let $\mathcal{N}_j(u,u)$ denote the contribution of $\mathcal{N}(u,u)$ on $A_j$. Then, (4.1) can be written as
\[ u(t) = S(t)u_0 + \mathcal{N}_0(u,u)(t) + \mathcal{N}_1(u,u)(t) + \mathcal{N}_2(u,u)(t). \tag{4.9} \]
By the standard bilinear estimate with Lemma 3.5 as in [3], [14], we have
\[ \|\mathcal{N}_0(u,u)\|_{-\frac12+\delta,\,\frac12-\delta} \le o(1)\, \|u\|_{-\frac12-\delta,\,\frac12-\delta}^2, \tag{4.10} \]
where $o(1) = T^\theta$ with $\theta > 0$, upon restricting to the time interval $[-T,T]$. See (2.17), (2.26), and (2.68) in [5]. Here, we abuse the notation and use $\|\cdot\|_{s,b} = \|\cdot\|_{X^{s,b}}$ to denote the local-in-time version as well. Note that the temporal regularity $b = \frac12-\delta < \frac12$. This allowed us to gain the spatial regularity by $2\delta$ in (4.10). Clearly, we can not expect to do the same for $\mathcal{N}_1(u,u)$. (By symmetry, we do not consider $\mathcal{N}_2(u,u)$ in the following.) When $b < \frac12$, the bilinear estimate (1.12) is known to fail for any $s\in\mathbb{R}$ due to the contribution from $\mathcal{N}_1(u,u)$. See [14]. Following the notation in [5], let
\[ I_{s,b} = \|\mathcal{N}_1(u,u)\|_{X^{s,b}} \quad \text{and} \quad a := \tfrac12 - \delta < \tfrac12. \tag{4.11} \]

Main goal:
Estimate the Duhamel term $\mathcal{N}_1(u,u)$ in $X^{s,b}$ with
\[ s := -a = -\tfrac12+\delta > -\tfrac12, \quad \text{and} \quad b := 1-a = \tfrac12+\delta > \tfrac12, \tag{4.12} \]
assuming that $u$ is bounded only in $X^{\widetilde s, \widetilde b}$ with
\[ \widetilde s := -(1-a) = -\tfrac12-\delta < -\tfrac12, \quad \text{and} \quad \widetilde b := a = \tfrac12-\delta < \tfrac12. \tag{4.13} \]

By Lemma 3.4 and duality with $\|d(n,\tau)\|_{\ell^2_n L^2_\tau} \le 1$, we have
\[ I_{-a,1-a} = \|\mathcal{N}_1(u,u)\|_{-a,1-a}
\lesssim \sum_{\substack{n,\,n_1 \\ n=n_1+n_2}} \int_{\tau=\tau_1+\tau_2}
\frac{\langle n\rangle^{-a}\, |n|\, d(n,\tau)}{\sigma^a}\,
\widehat{u}(n_1,\tau_1)\,
\frac{\langle n_2\rangle^{1-a}\, c(n_2,\tau_2)}{\sigma_2^a}\, d\tau_1\, d\tau, \tag{4.14} \]
where $c(n_2,\tau_2) = \langle n_2\rangle^{-(1-a)} \sigma_2^a\, \widehat{u}(n_2,\tau_2)$ so that
\[ \|c\|_{\ell^2_n L^2_\tau} = \|u\|_{-(1-a),\,a} = \|u\|_{-\frac12-\delta,\,\frac12-\delta}. \tag{4.15} \]
The main idea here is to consider the second iteration, i.e. substitute (4.1) for $\widehat{u}(n_1,\tau_1)$ in (4.14), thus leading to a trilinear expression; i.e. write $\mathcal{N}_1(u,u)$ as
\[ \mathcal{N}_1(u,u) = \mathcal{N}_1(S(t)u_0, u) + \mathcal{N}_1(\mathcal{N}(u,u), u). \tag{4.16} \]
Applying the second iteration on the second argument of $\mathcal{N}_2(u,u)$, we can write (4.1) and (4.9) as
\begin{align}
u(t) = S(t)u_0 &+ \mathcal{N}_0(u,u)(t) + \mathcal{N}_1(S(t)u_0, u)(t) + \mathcal{N}_1(\mathcal{N}(u,u), u)(t) \notag \\
&+ \mathcal{N}_2(u, S(t)u_0)(t) + \mathcal{N}_2(u, \mathcal{N}(u,u))(t). \tag{4.17}
\end{align}
Since $\sigma_1 = \mathrm{MAX} \gtrsim \langle n n_1 n_2\rangle \gg 1$ on $A_1$, we can assume that
\[ \widehat{u}(n_1,\tau_1) = \big( \mathcal{N}(u,u) \big)^\wedge(n_1,\tau_1)
\sim \frac{|n_1|}{\sigma_1} \sum_{n_1=n_3+n_4} \int_{\tau_1=\tau_3+\tau_4} \widehat{u}(n_3,\tau_3)\, \widehat{u}(n_4,\tau_4)\, d\tau_3. \tag{4.18} \]
Namely, we can assume that the contribution to $\widehat{u}(n_1,\tau_1)$ from the linear part $S(t)u_0$ of (4.9) is negligible, since the linear part is supported on $\sigma_1 \sim 1$. Indeed, with $\eta$ as in Section 2, we can write
\begin{align}
\mathcal{N}(u,u)(x,t) = \ &{-i} \sum_{k=1}^\infty \frac{i^k t^k}{k!} \sum_{n\ne 0} e^{i(nx+n^3t)} \int \eta(\lambda-n^3)\,(\lambda-n^3)^{k-1}\, \widehat{\partial_x u^2}(n,\lambda)\, d\lambda \notag\\
&+ i \sum_{n\ne 0} e^{inx} \int \frac{(1-\eta)(\tau-n^3)}{\tau-n^3}\, \widehat{\partial_x u^2}(n,\tau)\, e^{i\tau t}\, d\tau \notag\\
&- i \sum_{n\ne 0} e^{i(nx+n^3t)} \int \frac{(1-\eta)(\lambda-n^3)}{\lambda-n^3}\, \widehat{\partial_x u^2}(n,\lambda)\, d\lambda \notag\\
=: \ &\mathcal{M}_1(u,u)(x,t) + \mathcal{M}_2(u,u)(x,t) + \mathcal{M}_3(u,u)(x,t). \tag{4.19}
\end{align}
Note that $(\mathcal{M}_1(u,u))^\wedge(n_1,\tau_1)$ and $(\mathcal{M}_3(u,u))^\wedge(n_1,\tau_1)$ are distributions supported on $\{\tau_1 - n_1^3 = 0\}$, i.e. $\sigma_1 \sim 1$. Hence, the only contribution for the second iteration on $A_1$ comes from $\mathcal{M}_2(u,u)$, whose Fourier transform is given in (4.18). This shows the validity of the assumption (4.18).

The $\sigma_1$ appearing in the denominator allows us to cancel the derivative $|n|$ and the weights $\langle n\rangle^{-a}$ and $\langle n_2\rangle^{1-a}$ in the numerator in (4.14). Then, $I_{-a,1-a}$ can be estimated by
\[ \lesssim \sum_{\substack{n=n_1+n_2 \\ n_1=n_3+n_4}} \int_{\substack{\tau=\tau_1+\tau_2 \\ \tau_1=\tau_3+\tau_4}}
\frac{\langle n\rangle^{-a}\, |n|\, d(n,\tau)}{\sigma^a}\, \frac{|n_1|}{\sigma_1}\,
\widehat{u}(n_3,\tau_3)\, \widehat{u}(n_4,\tau_4)\,
\frac{\langle n_2\rangle^{1-a}\, c(n_2,\tau_2)}{\sigma_2^a}. \tag{4.20} \]
The argument was then divided into several cases, depending on the sizes of $\sigma, \sigma_1, \cdots, \sigma_4$. Here, the key algebraic relation is
\[ n^3 - n_2^3 - n_3^3 - n_4^3 = 3(n_2+n_3)(n_3+n_4)(n_4+n_2), \quad \text{with } n = n_2+n_3+n_4. \tag{4.21} \]
Then, Bourgain proved - see (2.69) in [5] -
\[ I_{-a,1-a} \le o(1)\, \|u\|_{-(1-a),a}\, I_{-a,1-a} + o(1)\, \|u\|^2_{-(1-a),a} + o(1)\, \|u\|^3_{-(1-a),a}, \tag{4.22} \]
assuming the a priori estimate (1.13): $|\widehat{u}(n,t)| < C$ for all $n\in\mathbb{Z}$, $t\in\mathbb{R}$. Indeed, the estimates involving the first two terms on the right hand side of (4.22) were obtained without (1.13), and only the last term in (4.22) required (1.13) - see "Estimation of (2.62)" in [5] - which was then used to deduce
\[ \|\widehat{u}(n,\cdot)\|_{L^1_\tau} < C. \tag{4.23} \]
The a priori estimate (1.13) is derived via the isospectral property of the KdV flow and is false for a general function in $X^{-(1-a),a}$. (It is here that the smallness of the total variation $\|\mu\|$ is used in [5].)

Our goal is to carry out a similar analysis on the second iteration without the a priori estimates (1.13) and (4.23) coming from the complete integrability of KdV. We achieve this goal by exhibiting nonlinear smoothing under randomization of initial data. In the following, we take
\[ \alpha > -\delta = a - \tfrac12, \tag{4.24} \]
so that
\[ u_0(x) = u_0^\omega(x) = \sum_{n\ne 0} \frac{g_n(\omega)}{|n|^\alpha}\, e^{inx} \tag{4.25} \]
belongs to $H^{\widetilde s}(\mathbb{T}) = H^{-\frac12-\delta}(\mathbb{T})$, a.s. Note that we can use the estimates on $\mathcal{N}_1(\mathcal{N}(u,u), u)$ from [5] except when the a priori bound (1.13) was assumed, i.e.
1. Hence, the only contribution for the second iteration on A comesfrom M ( u, u ) whose Fourier transform is given in (4.18). This shows the validity of theassumption (4.18).The σ appearing in the denominator allows us to cancel h n i − a and h n i − a in thenumerator in (4.14). Then, I − a, − a can be estimated by . X n = n + n n = n + n Z τ = τ + τ τ = τ + τ h n i − a d ( n, τ ) σ a | n | σ b u ( n , τ ) b u ( n , τ ) h n i − a c ( n , τ ) σ a . (4.20)The argument was then divided into several cases, depending on the sizes of σ , · · · , σ .Here, the key algebraic relation is n − n − n − n = 3( n + n )( n + n )( n + n ) , with n = n + n + n . (4.21)Then, Bourgain proved -see (2.69) in [5]- I − a, − a ≤ o (1) k u k − (1 − a ) ,a I − a, − a + o (1) k u k − (1 − a ) ,a + o (1) k u k − (1 − a ) ,a , (4.22) assuming the a priori estimate (1.13): | b u ( n, t ) | < C for all n ∈ Z , t ∈ R . Indeed, theestimates involving the first two terms on the right hand side of (4.22) were obtained without (1.13), and only the last term in (4.22) required (1.13), -see “Estimation of (2.62)”in [5]-, which was then used to deduce k b u ( n, · ) k L τ < C. (4.23)The a priori estimate (1.13) is derived via the isospectral property of the KdV flow and isfalse for a general function in X − (1 − a ) ,a . (It is here that the smallness of the total variation k µ k is used in [5].)Our goal is to carry out a similar analysis on the second iteration without the a prioriestimates (1.13) and (4.23) coming from the complete integrability of KdV. We achievethis goal by exhibiting nonlinear smoothing under randomization on initial data. In thefollowing, we take α > − δ = a − , (4.24)where δ > u ( x ) = u ω ( x ) = X n =0 g n ( ω ) | n | α e inx (4.25)belongs to H e s ( T ) = H − − δ ( T ), a.s.Note that we can use the estimates on N ( N ( u, u ) , u ) from [5] except when the a prioribound (1.13) was assumed. i.e. 
we need to estimate the contribution from (2.62) in [5]: R a ( u , u , u ) := X n Z τ = τ + τ + τ χ B d ( n, τ ) h n i a σ a b u ( − n, τ ) b u ( n, τ ) b u ( n, τ ) dτ dτ dτ , (4.26)where k d ( n, τ ) k L n,τ ≤ B = { σ , σ , σ , σ < | n | γ } with some small parameter γ > n = − n and n = n = n in (4.20) after some reduction. Inour analysis, we directly estimate R a ( u , u , u ), assuming that u j is either of type(I) linear part: random, less regular u j ( t ) = η T ( t ) X n =0 g n ( ω ) | n | α e i ( nx + n t ) , or(II) nonlinear part: deterministic, and (expected to be) smoother u j = N ( u ) := N ( u, u ) (to be bounded in X − + δ, − δ,T ) . In [5], this parameter γ = γ ( a ), subject to the conditions (2.43) and (2.60) in [5], playeda certain role in estimating R a along with the a priori bound (1.13). However, it plays norole in our analysis. Before proceeding further, we record the conditions on the values of a and γ from [5]: a > , γ > − a ) a − / , and 2(1 − a ) + γ < . (4.27)From the last two conditions, we obtain the quadratic inequality a + a − >
0. Bychoosing a > a , where a = − + √ ≈ . a ∈ ( a , ) in the following. In particular, this implies that α > α := a − ≈ − . Note that α in [5] is a in this paper. ONLINEAR SMOOTHING UNDER RANDOMIZATION 15 and δ < − α ≈ . • Case 1: all type (II). By Cauchy-Schwarz and Young’s inequalities, we have(4.26) ≤ X n k d ( n, · ) k L τ h n i − − a k [ N ( u )( − n, τ ) k L τ k [ N ( u )( n, τ ) k L τ k [ N ( u )( n, τ ) k L τ By H¨older inequality (with appropriate ± signs) and the fact that − − a ≤ − a , ≤ X n k d ( n, · ) k L τ Y j =2 h n i − a k σ − − j k L τj k σ + j [ N ( u )( ± n, τ j ) k L τj ≤ T − δ − k d ( · , · ) k L n,τ kN ( u ) k X − a,a , ≤ T − δ − kN ( u ) k X −
12 + δ, − δ , (4.29)where the last two inequalities follows from Lemma 3.1 by choosing a > .In the following, fix small ε > β > • Case 2: all type (I). By Lemma 3.2, we have | g n ( ω ) | ≤ CT − β h n i ε outside an excep-tional set of measure < e − cTβ . Then, by Cauchy-Schwarz, Young’s inequalities, and Lemma3.1 with (4.24), we have(4.26) . T − β X n Z d ( n, τ ) h n i − + δ +3 ε − α × (cid:18) Z τ = τ + τ + τ b η T ( τ + n ) b η T ( τ − n ) b η T ( τ − n ) dτ dτ (cid:19) dτ ≤ T − β X n k d ( n, · ) k L τ h n i − +4 δ +3 ε k b η T ∗ b η T ∗ b η T k L τ . T − β − , (4.30)as long as δ < − ε . • Case 3: two type (I), one type (II). Without loss of generality, assume that u , u areof type (I), and that u is of type (II). By Lemma 3.2, Cauchy-Schwarz, Young, H¨olderinequalities, and Lemma 3.1, we have(4.26) ≤ T − β X n k d ( n, · ) k L τ h n i − + δ +2 ε − α × (cid:13)(cid:13)(cid:13)(cid:13) Z τ = τ + τ + τ b η T ( τ + n ) b η T ( τ − n ) [ N ( u )( n, τ ) dτ dτ (cid:13)(cid:13)(cid:13)(cid:13) L τ ≤ T − β k d k L n,τ k b η T k L (cid:13)(cid:13)(cid:13) h n i − +3 δ +2 ε k [ N ( u )( n, τ ) k L τ (cid:13)(cid:13)(cid:13) L n . T − β − (cid:13)(cid:13)(cid:13) h n i − +3 δ +2 ε k σ − − k L τ k σ +4 [ N ( u )( n, τ ) k L τ (cid:13)(cid:13)(cid:13) L n . T − δ − β − kN ( u ) k X −
12 + δ, − δ (4.31)outside an exceptional set of measure < e − cTβ as long as δ ≤ − ε . • Case 4: one type (I), two type (II). Without loss of generality, assume that u is oftype (I), and that u , u are of type (II). By Lemma 3.2, Cauchy-Schwarz, Young, H¨older inequalities, and Lemma 3.1, we have(4.26) ≤ T − β X n k d ( n, · ) k L τ h n i − + δ + ε − α × (cid:13)(cid:13)(cid:13)(cid:13) Z τ = τ + τ + τ b η T ( τ + n ) [ N ( u )( n, τ ) [ N ( u )( n, τ ) dτ dτ (cid:13)(cid:13)(cid:13)(cid:13) L τ ≤ T − β k d k L n,τ k b η T k L (cid:13)(cid:13)(cid:13) h n i − +2 δ + ε Y j =3 k σ − − j k L τ k σ + j [ N ( u )( n, τ j ) k L τj (cid:13)(cid:13)(cid:13) L n . T − β − kN ( u ) k X −
12 + δ,
13 + . T − δ − β − kN ( u ) k X −
12 + δ, − δ (4.32)outside an exceptional set of measure < e − cTβ as long as ε ≤ .Lastly, we point out that the estimates hold even if we restrict the initial data to besupported on {| n | ≤ N } , independent of N . Moreover, we can gain extra power of N − inCases 2–4, if we restrict the initial data to be supported on {| n | > N } .4.3. Local well-posedness.
Consider initial data u N of the form u N ( x ) = X ≤| n |≤ N g n ( ω ) | n | α e inx . (4.33)Then, for each N , there exists a unique global solution u N ∈ C ( R ; H s ( T )) for any s ≥ − a.s. in ω . Define Γ N = Γ Nu N byΓ N v = Γ Nu N v := S ( t ) u N + N ( v, v ) . Then, u N = Γ N u N . We set N N := N ( u N ) = N ( u N , u N ).Now, we put all the a priori estimates together. Note that all the implicit constants areindependent of N . Also, when there is no superscript N , it means that N = ∞ . In thefollowing, C j , θ j , and ε j denote positive constants.Fix a = − δ > a as in (4.11), where a is defined in (1.14). From Lemma 3.3, we have k S ( t ) u N k X s,b,T ≤ C k u N k H s (4.34)for any s, b ∈ R with C = C ( b ). In particular, by taking b > , we see that S ( t ) u iscontinuous on [ − T, T ] with values in H s . From the definition of N j ( · , · ), (4.10), and (4.11),we have kN ( u N , u N ) k X − a,a,T ≤ C T θ k u N k X − (1 − a ) ,a,T + 2 I N − a,a . (4.35)From (4.22) and (4.29)–(4.32), there exists a set Ω (1) T with P (cid:0) (Ω (1) T ) c (cid:1) < e − cTβ such that wehave I N − a, − a ≤ C (cid:0) T θ k u N k X − (1 − a ) ,a,T I N − a, − a + T θ k u N k X − (1 − a ) ,a,T + T θ kN N k X − a,a,T + T θ kN N k X − a,a,T + T θ kN N k X − a,a,T + T θ (cid:1) on Ω (1) T . Note that the choice of Ω (1) T is independent of N . For fixed R >
0, choose
T > C T θ R ≤ . Then, we have I N − a, − a ≤ C (cid:0) T θ k u N k X − (1 − a ) ,a,T + T θ kN N k X − a,a,T + T θ kN N k X − a,a,T + T θ kN N k X − a,a,T + T θ (cid:1) (4.36) ONLINEAR SMOOTHING UNDER RANDOMIZATION 17 for k u N k X − (1 − a ) ,a,T ≤ R . From (4.34)–(4.36), we have k u N k X − (1 − a ) ,a,T ≤ C k u N k H − (1 − a ) + C T θ k u N k X − (1 − a ) ,a,T + 2 C (cid:0) T θ k u N k X − (1 − a ) ,a,T + T θ kN N k X − a,a,T (4.37)+ T θ kN N k X − a,a,T + T θ kN N k X − a,a,T + T θ (cid:1) and kN N k X − a,a,T ≤ C T θ k u N k X − (1 − a ) ,a,T + 2 C (cid:0) T θ k u N k X − (1 − a ) ,a,T + T θ kN N k X − a,a,T (4.38)+ T θ kN N k X − a,a,T + T θ kN N k X − a,a,T + T θ (cid:1) . Moreover, for
N > M , we have k u N − u M k X − (1 − a ) ,a,T = k Γ N u N − Γ M u M k X − (1 − a ) ,a,T ≤ C k u N − u M k H − (1 − a ) + C T θ (cid:0) k u N k X − (1 − a ) ,a,T + k u M k X − (1 − a ) ,a,T (cid:1) k u N − u M k X − (1 − a ) ,a,T + C T θ (cid:0) k u N k X − (1 − a ) ,a,T + k u M k X − (1 − a ) ,a,T (cid:1) k u N − u M k X − (1 − a ) ,a,T (4.39)+ C T θ (cid:0) kN N k X − a,a,T + kN M k X − a,a,T (cid:1) kN N − N M k X − a,a,T + C M − ε T θ kN N − N M k X − a,a,T + C M − ε T θ kN N k X − a,a,T + C M − ε T θ kN N − N M k X − a,a,T + C M − ε T θ kN N k X − a,a,T + C M − ε T θ . Also, we have kN N −N M k X − a,a,T ≤ C T θ (cid:0) k u N k X − (1 − a ) ,a,T + k u M k X − (1 − a ) ,a,T (cid:1) k u N − u M k X − (1 − a ) ,a,T + C T θ (cid:0) k u N k X − (1 − a ) ,a,T + k u M k X − (1 − a ) ,a,T (cid:1) k u N − u M k X − (1 − a ) ,a,T + C T θ (cid:0) kN N k X − a,a,T + kN M k X − a,a,T (cid:1) kN N − N M k X − a,a,T (4.40)+ C M − ε T θ kN N − N M k X − a,a,T + C M − ε T θ kN N k X − a,a,T + C M − ε T θ kN N − N M k X − a,a,T + C M − ε T θ kN N k X − a,a,T + C M − ε T θ . Note that in estimating the difference Γ N u N − Γ M u M on A , one needs to consider e I − a, − a := kN ( u N , u N ) − N ( u M , u M ) k − a, − a (4.41)as in [5]. We can follow the argument on pp.135-136 in [5], yielding the third term in (4.39),except for R a defined in (4.26). As for R a , we can write N ( N ( u, u ) , u ) − N ( N ( v, v ) , v ) = N ( N ( u + v, u − v ) , u ) + N ( N ( v, v ) , u − v ) (4.42)as in (3.4) in [5], and then we can repeat the computation done for R a , yielding the lastsix terms in (4.39).Recall the large deviation estimate: P ( k u ( ω ) k H s > K ) < e − cK for s < − and suffi-ciently large K . Given small T >
0, let K = (2 C C T θ ) − . Then, defining Ω (2) T byΩ (2) T := { ω : k u ( ω ) k H − (1 − a ) ≤ K } we have P (cid:0) (Ω (2) T ) c (cid:1) < e − cTβ for some c, β >
0. Moreover, by letting R = 2 C k u k H − (1 − a ) ,we have C T θ R ≤ on Ω (2) T . Finally, let Ω T = Ω (1) T ∩ Ω (2) T . Then, by choosing T sufficiently small, we see that for ω ∈ Ω T , smooth global solutions u N ( ω ) (and N N ) with initial data u N ( ω ) converge in X − (1 − a ) ,a,T (in X − a,a,T , respectively.) For example, if we choose T by T = inf (cid:8) t > C t θ R + 2 C ( t θ R + t θ R + t θ R + t θ R + t θ ) ≥ R (cid:9) , then (4.37) and (4.38) along with continuity argument show that k u k X − (1 − a ) ,a,T ≤ R and kN ( u, u ) k X − a,a,T ≤ R . From (4.39) and (4.40), one obtains a different condition for T .We point out that the nonlinear part N j ( u N , u N ), j = 1 ,
2, converges in a stronger space X − a, − a,T . See (4.11) and (4.36).Let u denote the limit. We still need to show, for ω ∈ Ω T ,(i) u is indeed a solution to (1.8) with u ∈ H − (1 − a ) ( T ) given by (4.25).(ii) u ∈ C ([ − T, T ]; H − (1 − a ) ).(iii) uniqueness of solutions.The argument for (i) and (ii) exactly follows the corresponding argument in [20], and thuswe omit details. It follows from (4.34) with b = + δ , (4.36), and symmetry between σ and σ that ( S ( t ) u ∈ X − (1 − a ) , + δ,T ⊂ C ([ − T, T ]; H − (1 − a ) ) N ( u, u ) + N ( u, u ) ∈ X − a, + δ,T ⊂ C ([ − T, T ]; H − a ) . As for N ( u, u ), i.e. σ = MAX, we can repeat the argument in [20] by separately estimatingthe contributions on A := { max( σ , σ ) & h nn n i } and A c .Now, let us discuss uniqueness. In the following, fix α ∈ ( α , α is as in (4.28).Consider a “nice” initial condition u ∗ := u ( ω ∗ ) for some ω ∗ ∈ Ω T so that a solution u ∗ exists on [ − T, T ] for the initial condition u ∗ (along with the estimate in Subsection 4.2.)Then, for some ε, β >
0, we have sup n =0 h n i − ε | g n ( ω ∗ ) | ≤ CT − β . Now, let A γ,T = { v : sup n h n i − ε (cid:12)(cid:12)b v ( n ) − | n | − α g n ( ω ∗ ) (cid:12)(cid:12) ≤ γCT − β } . Then, we have sup n h n i − ε | b v ( n ) | ≤ (1 + γ ) CT − β on A γ,T . Moreover, for v ∈ A γ,T , we have k v − u ∗ k H − (1 − a ) = (cid:18) X n =0 h n i − − δ (cid:12)(cid:12)b v ( n ) − | n | − α g n ( ω ∗ ) (cid:12)(cid:12) (cid:19) . sup n h n i − ε (cid:12)(cid:12)b v ( n ) − | n | − α g n ( ω ∗ ) (cid:12)(cid:12) ≤ γCT − β ≤ k u ∗ k H − (1 − a ) by choosing ε < δ and γ sufficiently small. Hence, proceeding as before, we can constructsolutions v with initial data v ∈ A γ,T , satisfying (4.37) and (4.38) (after making a timeinterval slightly shorter.) Consider the difference between v and u ∗ = u ( ω ∗ ). From a slight ONLINEAR SMOOTHING UNDER RANDOMIZATION 19 modification of the argument in Subsection 4.2, we have k v − u ∗ k X − (1 − a ) ,a,T ≤ C k v − u ∗ k H − (1 − a ) + C T θ (cid:0) k v k X − (1 − a ) ,a,T + k u ∗ k X − (1 − a ) ,a,T (cid:1) k v − u ∗ k X − (1 − a ) ,a,T + C T θ (cid:0) k v k X − (1 − a ) ,a,T + k u ∗ k X − (1 − a ) ,a,T (cid:1) k v − u ∗ k X − (1 − a ) ,a,T + C T θ (cid:0) kN k X − a,a,T + kN ∗ k X − a,a,T (cid:1) kN − N ∗ k X − a,a,T + C T θ kN − N ∗ k X − a,a,T + C T θ kN ∗ k X − a,a,T sup n h n i − ε (cid:12)(cid:12)b v ( n ) − | n | − α g n ( ω ∗ ) (cid:12)(cid:12) + C T θ kN − N ∗ k X − a,a,T + C T θ kN ∗ k X − a,a,T sup n h n i − ε (cid:12)(cid:12)b v ( n ) − | n | − α g n ( ω ∗ ) (cid:12)(cid:12) + C T θ sup n h n i − ε (cid:12)(cid:12)b v ( n ) − | n | − α g n ( ω ∗ ) (cid:12)(cid:12) , where N = N ( v, v ) and N ∗ = N ( u ∗ , u ∗ ). A similar estimate holds for kN N − N M k X − a,a,T .As a consequence, we obtain k v − u ∗ k X − (1 − a ) ,a,T . C ( T, R ) sup n h n i − ε (cid:12)(cid:12)b v ( n ) − | n | − α g n ( ω ∗ ) (cid:12)(cid:12) = o (1)as γ →
0. Note that u ∗ ∈ A γ,T for any γ >
0. Hence, u ∗ is a unique solution. One cansimilarly obtain k v ( t ) − u ∗ ( t ) k C ([ − T,T ]; H − (1 − a ) ) . C ( T, R ) sup n h n i − ε (cid:12)(cid:12)b v ( n ) − | n | − α g n ( ω ∗ ) (cid:12)(cid:12) . This provides a weak form of continuous dependence. Lastly, when α = 0, u in (1.9)corresponds to the mean-zero Gaussian white noise. In this case, one can extend local-in-time solutions to global ones by invariance of the (finite dimensional) white noise. See[4, 21] for this part of discussion.5. On the Szeg¨o equation
In this section, we consider the dispersionless cubic Szegö equation (1.17) and present the proof of Proposition 1.6. In particular, we show that, unlike [4] and [10], there is no gain of regularity even if we take the initial data to be random of the form (1.18). If we were to proceed as in [4] and [10], we would need to estimate the $C([-T,T]; H^{s}_+)$-norm of $\mathcal{N}(u_1, u_2, u_3)$ defined in (1.20) for some $s \ge \frac12$, assuming $u_j$ is either of type
(I) random, less regular: $u_j(x,t) = \eta_T(t)\, u_0^{\omega} = \eta_T(t) \sum_{n \ge 0} \frac{g_n(\omega)}{\sqrt{1+|n|^{2\alpha}}}\, e^{inx}$, or
(II) deterministic, smoother: $u_j = v_j$ with $\|v_j\|_{C([-T,T]; H^{s}_+)} \le 1$.
In view of the well-posedness result in $H^{\frac12}_+(\mathbb{T})$, we consider $\frac12 < \alpha \le 1$. For notational simplicity, we use $\langle n \rangle^{\alpha}$ for $\sqrt{1+|n|^{2\alpha}}$. Note that all the summations take place over non-negative indices, i.e. $n \ge 0$ and $n_j \ge 0$, $j = 1, 2, 3$. We could consider a different norm for type (II); however, it is not relevant for the following discussion.
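As an aside, the almost-sure regularity threshold for randomized Fourier series of this type — $u_0^\omega \in H^s$ a.s. exactly when $s < \alpha - \frac12$ — is easy to check numerically. The following sketch (not from the paper; the truncation levels, weight $(1+n^2)^{s-\alpha}$ for $\langle n \rangle^{2(s-\alpha)}$, and seed are arbitrary choices for illustration) computes truncated $H^s$ norms and shows the partial sums stabilizing below the threshold and growing above it.

```python
import numpy as np

def hs_norm_sq(alpha, s, N, seed=0):
    """Truncated squared H^s norm of u0 = sum_{1<=n<=N} g_n <n>^{-alpha} e^{inx},
    i.e. sum_n <n>^{2(s-alpha)} |g_n|^2 with i.i.d. complex Gaussians g_n."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(size=2 * N)        # interleave re/im so prefixes agree across N
    g = draws[0::2] + 1j * draws[1::2]    # E|g_n|^2 = 2
    n = np.arange(1, N + 1)
    return np.sum((1.0 + n**2) ** (s - alpha) * np.abs(g) ** 2)

alpha = 1.0
for s in (0.3, 0.5, 0.7):                 # threshold here is alpha - 1/2 = 0.5
    print(s, [round(hs_norm_sq(alpha, s, N), 2) for N in (10**3, 10**4, 10**5)])
# s = 0.3: partial sums stabilize; s = 0.7: they grow roughly like N^{2(s-alpha)+1}
```

With a fixed seed, the $N = 10^3$ sum is exactly a partial sum of the $N = 10^5$ one, so the stabilization (or growth) in $N$ is monotone and easy to read off.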
Now, assume $u_j$ is of type (I), $j = 1, 2,$
3. Then, by separating the spatial and temporalcomponents, we have (cid:13)(cid:13) N ( u , u , u ) (cid:13)(cid:13) C ([ − T,T ]; H s + ) = C T (cid:13)(cid:13)(cid:13)(cid:13) h n i s (cid:12)(cid:12)(cid:12) X n = n − n + n g n h n i α g n h n i α g n h n i α (cid:12)(cid:12)(cid:12)(cid:13)(cid:13)(cid:13)(cid:13) l n ( Z ≥ ) = C T X n h n i s X n = n − n + n = m − m + m g n h n i α g n h n i α g n h n i α g m h m i α g m h m i α g m h m i α . (5.1)In the following, we first prove Proposition 1.6 (a), showing that (5.1) is infinite a.s. for α ∈ ( ,
1] and s ∈ [ α − , α − ].We say that we have a pair if we have n j = m j for some j = 1 , ,
3, (or if we have n = m or n = m .) Then, we can separate the sum in (5.1) into three cases: (a) 3 pairs,(b) 1 pair, (c) no pair. We estimate the contribution from each case in the following. • Case (a): X n h n i s X n = n − n + n | g n | h n i α | g n | h n i α | g n | h n i α . (5.2)Now, consider the contribution from n = n and n = n . For α > , we have c ω := X n h n i − α | g n ( ω ) | < ∞ , a.s.since E (cid:2) P n h n i − α | g n | (cid:3) ∼ P n h n i − α < ∞ . Note that c ω > F j ( ω ) :=2 − j P | n |∼ j | g n ( ω ) | . Then, F j ( ω ) converges to a positive constant a.s. by strong law oflarge numbers. Hence, for α ≤ s + , we have X | n |∼ j h n i s − α | g n ( ω ) | ∼ j (2 s − α +1) F j ( ω ) , a.s.Therefore, we have (5.2) ≥ c ω X n h n i s − α | g n ( ω ) | = ∞ , a.s.for α ≤ s + . In particular, when s = , the contribution from this case is divergent for α ≤
1. This already shows that there is no nonlinear smoothing even if we consider randominitial data of the form (1.18).Suppose that we have one pair n = m . Moreover, assume n = n . Then, we have n = n = m and thus m = m . Proceeding in a similar manner as above, we see that thecontribution to (5.1) is given by X n h n i s | g n | h n i α X n ,m | g n | h n i α | g m | h m i α = ∞ , a.s. (5.3)for α ≤ s + .For completeness of the argument, we give a brief discussion to show that the contribu-tions from other cases are finite (at least for α > and s ≤ α − .) • Case (b): n = m and { n , n } 6 = { m , m } .Other cases follow in a similar manner (except for the case discussed above.) In this case, ONLINEAR SMOOTHING UNDER RANDOMIZATION 21 the contribution to (5.1) is given by R ( ω ) := X n h n i s X n | g n | h n i α X n + n = n + n = m + m g n h n i α g n h n i α g m h m i α g m h m i α . By computing the second moment, we have E (cid:2) | R | (cid:3) = E (cid:20) X n h n i s X n | g n | h n i α X n + n = n + n = m + m g n h n i α g n h n i α g m h m i α g m h m i α × X e n h e n i s X e n | g e n | h e n i α X e n + e n = e n + e n = e m + e m g e n h e n i α g e n h e n i α g e m h e m i α g e m h e m i α (cid:21) . (5.4)We have nontrivial contribution in (5.4) when ( n , n , e m , e m ) = ( e n , e n , m , m ) (up topermutation.) By assumption, we have { n , n } 6 = { m , m } and { e n , e n } 6 = { e m , e m } .Then, it follows that ( n , n ) = ( e n , e n ) and ( m , m ) = ( e m , e m ) (up to permutation.)Hence, we have(5.4) ∼ X n h n i s X n h n i α X n + n = e n + e n h e n i s h e n i α × X n + n = n + n = m + m h n i α h n i α h m i α h m i α . (5.5)Without loss of generality, assume n & n , since n + n = n + n ≥ n . Then, for fixed n and n , we have h n i s X n + n = n + n h n i α h n i α . X n + n = n + n h n i α − s ) h n + n − n i α . h n + n i + for 2( α − s ) ≥ and α > , where the last inequality follows from a slight modificationof [26, Lemma 2.2]. 
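The reductions used in the second-moment computations here — only index configurations in which every $g$ factor is matched with a conjugate factor contribute — are an instance of the standard pairing principle for independent complex Gaussians. A minimal Monte Carlo sketch of this principle (illustrative only; the sample size and seed are arbitrary):

```python
import numpy as np

# Pairing principle: for i.i.d. standard complex Gaussians,
# E[ g_a g_b conj(g_c) conj(g_d) ] vanishes unless {a, b} = {c, d}.
rng = np.random.default_rng(1)
M, K = 200_000, 4  # Monte Carlo samples, number of distinct indices
g = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)  # E|g|^2 = 1

def moment(a, b, c, d):
    """Empirical E[ g_a g_b conj(g_c) conj(g_d) ]."""
    return np.mean(g[:, a] * g[:, b] * np.conj(g[:, c]) * np.conj(g[:, d]))

print(abs(moment(0, 1, 0, 1)))  # fully paired: close to E|g_0|^2 E|g_1|^2 = 1
print(abs(moment(0, 0, 0, 0)))  # paired with multiplicity: close to E|g_0|^4 = 2
print(abs(moment(0, 1, 2, 3)))  # no pair: close to 0
print(abs(moment(0, 1, 0, 2)))  # one pair only: close to 0
```

This is exactly why, after taking expectations as in (5.4), only the "3 pairs" configurations survive, while partially paired configurations contribute nothing in expectation.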
The same inequality holds when n and n j are replaced by e n and e n j .Hence, we have (5.5) . X n h n i − − X n h n i − α X e n h e n i − α < ∞ . (5.6)Therefore, we have R ( ω ) < ∞ a.s. for α > and s ≤ α − . • Case (c): no pair. In this case, the contribution to (5.1) is given by R ( ω ) := X n h n i s X ∗ g n h n i α g n h n i α g n h n i α g m h m i α g m h m i α g m h m i α , (5.7)where ∗ = { n = n − n + n = m − m + m , no pair } . First, suppose n = n , and thus n = n . Then, the contribution to (5.7) is given by X n h n i s X n | g n | h n i α X n = m − m + m g n h n i α g m h m i α g m h m i α g m h m i α . By computing the second moment as before, we have E (cid:2) | R | (cid:3) ∼ X n h n i s max( h n i s , h m i s ) X n h n i α X e n h e n i α × X n = m − m + m h n i α h m i α h m i α h m i α . Now, we can follow the argument in Case (b) to show that E (cid:2) | R | (cid:3) < ∞ .In the following, assume that n , n , m , m = n . In this case, we have E (cid:2) | R | (cid:3) = E (cid:20) X n h n i s X ∗ g n h n i α g n h n i α g n h n i α g m h m i α g m h m i α g m h m i α × X e n h e n i s X e ∗ g e n h e n i α g e n h e n i α g e n h e n i α g e m h e m i α g e m h e m i α g e m h e m i α (cid:21) (5.8) ∼ X n h n i s X ∗ h e n i s Y j =1 h n j i α h m j i α where e ∗ = { e n = e n − e n + e n = e m − e m + e m , no pair , e n , e n , e m , e m = e n } . Note thatthere is no summation for e n since it is determined by the values n j and m j . Without lossof generality, assume n & n . Then, by (a slight modification of) [26, Lemma 2.2], we have h n i s X ∗ Y j =1 h n j i α . X n h n i α X n h n i α − s ) h n + n − n i α . X n h n i α h n + n i + . h n i + for 2( α − s ) ≥ and α > . Hence, we have E (cid:2) | R | (cid:3) < ∞ and thus R ( ω ) < ∞ a.s. for α > and s ≤ α − .In general, i.e. if s > α − , then we can choose e s < s such that e s ∈ [ α − , α − ]. 
Then,from the previous computation with e s instead of s , we have (cid:13)(cid:13) N ( u , u , u ) (cid:13)(cid:13) C ([ − T,T ]; H s + ) ≥ (cid:13)(cid:13) N ( u , u , u ) (cid:13)(cid:13) C ([ − T,T ]; H e s + ) = ∞ , a.s.This proves Part (a) of Proposition 1.6.Part (b) of of Proposition 1.6 follows easily by taking an expectation of (5.1). Aftertaking an expectation, only the case with 3 pairs remains. Assume n = max( n , n , n ) inthe following. Then, we have n & n . With s = α − − ε , we have E h(cid:13)(cid:13) N ( u , u , u ) (cid:13)(cid:13) C ([ − T,T ]; H s + ) i . X n h n i s X n = n − n + n h n i α h n i α h n i α . X n X n = n − n + n h n i ε h n i α h n i α < ∞ , as long as α > . Remark 5.1.
In studying nonlinear smoothing under randomization for the Wick ordered cubic NLS in [10], we needed to control a term similar to (5.1) when we estimated the contribution from all type (I) terms to the nonlinearity in the $X^{s,-\frac12+}$ norm. On the one hand, if $\sigma := \langle \tau - n^2 \rangle$ is large, the estimate was trivial. On the other hand, if $\sigma$ is small, then the relation
$$\sigma - \sigma_1 + \sigma_2 - \sigma_3 = -n^2 + n_1^2 - n_2^2 + n_3^2 = -2(n - n_1)(n - n_3),$$
where $\sigma_j := \langle \tau_j - n_j^2 \rangle$, $n = n_1 - n_2 + n_3$, and $\tau = \tau_1 - \tau_2 + \tau_3$, imposed a restriction on the summation. See [10] for details. However, due to the lack of dispersion for (1.17), there is no restriction on the summation (5.1).
In the case of KdV, although a direct estimate on the integral formulation failed to show any nonlinear smoothing (see Subsection 4.1), we could show that there is a gain of regularity by considering the second iteration. This is due to the fact that $\sigma_1 = \langle \tau_1 - n_1^3 \rangle \gtrsim \langle n n_1 n_2 \rangle$ appears in the denominator in the second iteration. See (4.20). However, even if we consider the second iteration for the cubic Szegö equation (1.17), it seems that we do not have any gain, due to the lack of dispersion.

References

[1] Á. Bényi, T. Oh, Modulation spaces, Wiener amalgam spaces, and Brownian motions, to appear in Adv. Math.
[2] J. Bourgain, Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations I, Geom. Funct. Anal. 3 (1993), 107–156.
[3] J. Bourgain, Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations II, Geom. Funct. Anal. 3 (1993), 209–262.
[4] J. Bourgain, Invariant measures for the 2D-defocusing nonlinear Schrödinger equation, Comm. Math. Phys. 176 (1996), no. 2, 421–445.
[5] J. Bourgain, Periodic Korteweg-de Vries equation with measures as initial data, Sel. Math., New Ser. 3 (1997), 115–159.
[6] N. Burq, P. Gérard, N. Tzvetkov, Bilinear eigenfunction estimates and the nonlinear Schrödinger equation on surfaces, Invent. Math. 159 (2005), 187–223.
[7] N. Burq, N. Tzvetkov, Random data Cauchy theory for supercritical wave equations. I. Local theory, Invent. Math. 173 (2008), no. 3, 449–475.
[8] M. Christ, J. Colliander, T. Tao, Asymptotics, frequency modulation, and low-regularity ill-posedness of canonical defocusing equations, Amer. J. Math. 125 (2003), no. 6, 1235–1293.
[9] J. Colliander, M. Keel, G. Staffilani, H. Takaoka, T. Tao, Sharp global well-posedness for KdV and modified KdV on $\mathbb{R}$ and $\mathbb{T}$, J. Amer. Math. Soc. 16 (2003), no. 3, 705–749.
[10] J. Colliander, T. Oh, Almost sure well-posedness of the periodic cubic nonlinear Schrödinger equation below $L^2(\mathbb{T})$, to appear in Duke Math. J.
[11] P. Gérard, S. Grellier, The cubic Szegö equation, Ann. Sci. Éc. Norm. Supér. (4) 43 (2010), no. 5, 761–810.
[12] T. Kappeler and P. Topalov, Global wellposedness of KdV in $H^{-1}(\mathbb{T}, \mathbb{R})$, Duke Math. J. 135 (2006), no. 2, 327–360.
[13] C. Kenig, G. Ponce, L. Vega, The Cauchy problem for the Korteweg-de Vries equation in Sobolev spaces of negative indices, Duke Math. J. 71 (1993), no. 1, 1–21.
[14] C. Kenig, G. Ponce, L. Vega, A bilinear estimate with applications to the KdV equation, J. Amer. Math. Soc. 9 (1996), no. 2, 573–603.
[15] C. Kenig, G. Ponce, L. Vega, On the ill-posedness of some canonical dispersive equations, Duke Math. J. 106 (2001), no. 3, 617–633.
[16] L. Molinet, Global well-posedness in $L^2$ for the periodic Benjamin-Ono equation, Amer. J. Math. 130 (2008), no. 3, 635–683.
[17] L. Molinet, Sharp ill-posedness result for the periodic Benjamin-Ono equation, J. Funct. Anal. 257 (2009), no. 11, 3488–3516.
[18] H. Kuo, Gaussian Measures in Banach Spaces, Lec. Notes in Math. 463, Springer-Verlag, New York, 1975.
[19] T. Oh, Invariance of the white noise for KdV, Comm. Math. Phys. 292 (2009), no. 1, 217–236. Also see Erratum: "Invariance of the white noise for KdV", in preparation.
[20] T. Oh, Periodic stochastic Korteweg-de Vries equation with the additive space-time white noise, Anal. PDE 2 (2009), no. 3, 281–304.
[21] T. Oh, White noise for KdV and mKdV on the circle, RIMS Kôkyûroku Bessatsu B18 (2010), 99–124.
[22] T. Oh, J. Quastel, B. Valkó, Interpolation of Gibbs measures with white noise for Hamiltonian PDE, arXiv:1005.3957v1 [math.PR].
[23] J. Quastel, B. Valkó, KdV preserves white noise, Comm. Math. Phys. 277 (2008), no. 3, 707–714.
[24] G. Richards, Invariance of the Gibbs measure for the periodic quartic KdV, in preparation.
[25] L. Thomann, Random data Cauchy problem for supercritical Schrödinger equations, Ann. Inst. H. Poincaré Anal. Non Linéaire 26 (2009), no. 6, 2385–2402.
[26] N. Tzvetkov, Construction of a Gibbs measure associated to the periodic Benjamin-Ono equation, Probab. Theory Relat. Fields 146 (2010), 481–514.
[27] P. Zhidkov, Korteweg-de Vries and Nonlinear Schrödinger Equations: Qualitative Theory, Lec. Notes in Math. 1756, Springer-Verlag, 2001.
Tadahiro Oh, Department of Mathematics, Princeton University, Fine Hall, Washington Rd., Princeton, NJ 08544-1000, USA
E-mail address: