Well-posedness for the fourth-order Schrödinger equation with third order derivative nonlinearities
aa r X i v : . [ m a t h . A P ] A p r Well-posedness for the fourth-order Schr ¨odinger equation with thirdorder derivative nonlinearities
Hiroyuki Hirayama ∗ , Masahiro Ikeda † and Tomoyuki Tanaka ‡ Abstract.
We study the Cauchy problem to the semilinear fourth-order Schr¨odinger equations: i ∂ t u + ∂ x u = G (cid:18)n ∂ kx u o k ≤ γ , n ∂ kx ¯ u o k ≤ γ (cid:19) , t > , x ∈ R , u | t = = u ∈ H s ( R ) , (4NLS)where γ ∈ { , , } and the unknown function u = u ( t , x ) is complex valued. In this paper, we consider thenonlinearity G of the polynomial G ( z ) = G ( z , · · · , z γ + ) : = X m ≤| α |≤ l C α z α , for z ∈ C γ + , where m , l ∈ N with 3 ≤ m ≤ l and C α ∈ C with α ∈ ( N ∪ { } ) γ + is a constant. Thepurpose of the present paper is to prove well-posedness of the problem (4NLS) in the lower order Sobolevspace H s ( R ) or with more general nonlinearities than previous results.Our proof of the main results is based on the contraction mapping principle on a suitable function spaceemployed by D. Pornnopparath (2018). To obtain the key linear and bilinear estimates, we construct asuitable decomposition of the Duhamel term introduced by I. Bejenaru, A. D. Ionescu, C. E. Kenig, and D.Tataru (2011). Moreover we discuss scattering of global solutions and the optimality for the regularity ofour well-posedness results, namely we prove that the flow map is not smooth in several cases. Mathematics Subject Classification (2010): 35Q55; 35A01; 35B45;37K10
Key words and phrases : Schr¨odinger equations, Fourth-order dispersion, Well-posedness, Low regularity, Derivative nonlinear-ity, Sobolev spaces, Scaling critical regularity, Modulation estimate, Solution map, General nonlinearity
Contents ffi culties and idea for the proof of the main results . . . . . . . . . . . . . . . 91.6 Organization of the present paper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 ∗ Institute for Tenure Track Promotion, University of Miyazaki, 1-1, Gakuenkibanadai-nishi, Miyazaki, 889-2192 Japan,E-mail: [email protected] † Department of Mathematics, Faculty of Science and Technology, Keio University, 3-14-1 Hiyoshi,Kohoku-ku, Yokohama, 223-8522, Japan / Center for Advanced Intelligence Project, RIKEN, Japan, E-mail: [email protected]/[email protected] ‡ Graduate school of Mathematics, Nagoya University, Chikusa-Ku, Nagoya, 464-8602, Japan, E-mail: [email protected] Decomposition of the Duhamel term and its application 164 Multilinear estimates for general nonlinearities 22 m ≥ with γ = and m ≥ with γ =
326 Multilinear estimates at the scaling critical regularity in the cases m = with γ = and m = with γ = m ≥ with γ =
578 Proof of well-posedness 62
A Derivation of an important 4NLS model with third order derivative nonlinearities 69
In the present paper we study well-posedness for the Cauchy problem in the Sobolev space H s ( R ) ofthe Schr¨odinger equation with the fourth-order dispersion and γ -times derivative nonlinearities: i ∂ t u + ∂ x u = G (cid:18)n ∂ kx u o k ≤ γ , n ∂ kx ¯ u o k ≤ γ (cid:19) , ( t , x ) ∈ I × R , u | t = t = u ∈ H s ( R ) , (1.1)where γ ∈ { , , } denotes the order of the highest derivatives in the nonlinearity G , i : = √− ∂ t : = ∂/∂ t , ∂ x : = ∂/∂ x , u = u ( t , x ) : I × R → C is an unknown function of ( t , x ), t ∈ R is an initial time, ( t ∈ ) I denotes the maximal existence time interval of the function u , u = u ( x ) : R → C is a prescribedfunction which belongs to a L ( R )-based s -th order Sobolev space H s ( R ) for some s ∈ R . Throughoutthis paper, we consider the nonlinear function G : C γ + → C of the following polynomial: G ( z ) = G m , l γ ( z ) = G m , l (cid:16) z , · · · , z γ + (cid:17) : = X m ≤| α |≤ l C α z α , (1.2)where z ∈ C γ + , m ∈ N and l ∈ N with 3 ≤ m ≤ l denote the lowest degree and the highest degree ofthe polynomial G respectively, and C α ∈ C with α ∈ ( N ∪ { } ) γ + is a complex constant.The purpose of the present paper is to improve and generalize the results obtained in the previouspapers [
31, 32, 17, 18, 7, 20, 37, 38, 6, 34, 15, 16 ], that is, to prove well-posedness in the lower orderSobolev space H s ( R ) to the problem (1.1) and to show well-posedness to (1.1) with more general nonlin-earities than the previous results (see Theorems 1.1, 1.2, 1.3, 1.4, 1.5 and Remark 1.3). Here we say that2ell-posedness to (1.1) holds if existence, uniqueness of the solution and continuous dependence uponthe initial data are valid. We also discuss scattering of the global solutions (Theorems 1.4, 1.5) and theoptimality of our well-posedness results, namely, prove that the flow map is not smooth in the sense ofFr´echet derivative for some specific nonlinearity (see Remarks 1.7 and 1.9). There are many physical results and mathematical results about (1.1) without derivative nonlinearities( γ =
0) or with first order derivatives ( γ =
1) (see [
3, 6, 12, 15, 16, 37 ] and their references). We recallresults closely related to the present study. Y. Wang [ ] studied the Cauchy problem (1.1) with a gaugeinvariant nonlinearity ∂ x ( | u | m − u ) with odd m ≥ H s c ( R ), where ˙ H is the homogeneous Sobolev space and s c (1 , m ) : = − m − (seeTheorem 1.1 in [ ]). The first author and Okamoto [ ] studied the Cauchy problem (1.1) with m = m = L ( R ) for a scaling invariant nonlinearity (1.12)below. In particular, they proved large data local well-posedness and small data scattering in H s c ( R ) fora specific nonlinearity G = G , = ∂ x (cid:16) ¯ u (cid:17) . (see Theorem 1.3 and Remark 3 in [ ]). In the present paper,we improve the results obtained in [
37, 16 ] (see Remark 1.8 for more precise). Hayashi and Naumkin[ ] proved a small data scattering to the problem (1.1) with γ = m ≥ m with m > ] (resp. [ ]) also study small data globalexistence and asymptotic behavior of solutions to the problem (1.1) with γ = i ∂ x (cid:16) | u | u (cid:17) (resp. a cubic nonlinearity, i ∂ x (cid:16) | u | u (cid:17) ) in a weighted Sobolev space.Several models with the fourth-order dispersion, and the second-times derivative ( γ =
2) nonlin-earities have been derived from the variational principle with Lagrange density by Karpman [ ] andKarpman and Shagalov [ ] to take into account the role of small fourth-order dispersion in the propa-gation of intense laser beams in a bulk medium with Kerr nonlinearity, and the stability of the solitionsfor the derived equations was studied in [
21, 22 ]. Fukumoto and Mo ff atto [ ] introduced the follow-ing Schr¨odinger equation (1.3), which contains not only the fourth-order dispersion and but also thesecond-order dispersion, and the second-order derivative ( γ =
2) nonlinearities: i ∂ t u + ν∂ x u + ∂ x u = G (cid:16)n ∂ kx u o k ≤ , n ∂ kx ¯ u o k ≤ (cid:17) , ( t , x ) ∈ R × R , (1.3)where ν ∈ R is a non-zero constant. Here the nonlinearity G = G , ( γ = m = l =
5) is given by G (cid:16)n ∂ kx u o k ≤ , n ∂ kx ¯ u o k ≤ (cid:17) : = − | u | u + λ | u | u + λ ( ∂ x u ) ¯ u + λ | ∂ x u | u + λ u ∂ x ¯ u + λ | u | ∂ x u , (1.4)where λ : = µ/ λ : = µ − ν/ λ : = µ + ν , λ : = µ and λ : = µ − ν , with a real constant µ ∈ R .The equation (1.3) describes the three dimensional motion of an isolated vortex filament embedded in aninviscid incompressible fluid filling an infinite region, and it is proposed as some detailed model takingaccount of the e ff ect from the higher order corrections of the Da Rios model, that is, i ∂ t u + ∂ x u = − | u | u , ( t , x ) ∈ R × R . This is the second order Schr¨odinger equation without derivative nonlinearities and with the cubic focus-ing nonlinearity, which has been extensively studied in the contexts of both physics and mathematics. Itis also known that (1.3) with (1.4) is completely integrable, if and only if the identity 2 µ = − ν holds,3amely, the identities λ : = − ν/ λ : = − ν/ λ : = − ν , λ : = − ν/ λ : = − ν hold (see [ ]).Under the relation 2 µ = − ν , the equation (1.3) has infinitely many conservation laws such as Φ [ u ]( t ) : = Z R | u ( t , x ) | dx , Φ [ u ]( t ) : = Z R | ∂ x u ( t , x ) | dx − Z R | u ( t , x ) | dx , Φ [ u ]( t ) : = Z R | ∂ x u ( t , x ) | dx + Z R | u ( t , x ) | u ( t , x ) ∂ x u ( t , x ) dx + Z R | u ( t , x ) | u ( t , x ) ∂ x u ( t , x ) dx + Z R ( ∂ x u ( t , x )) u ( t , x ) dx + Z R | ∂ x u ( t , x ) | | u ( t , x ) | dx + Z R | u ( t , x ) | dx , Φ [ u ]( t ) · · · see [ ]. For more information about physical backgrounds of (1.3), see [ ].Next we recall several previous results about well-posedness of the Cauchy problem (1.1) with sec-ond times ( γ =
2) derivative nonlinearities. Hao, Hisao and Wang [ ] proved existence of local-in-time solution and uniqueness of solutions in the class C (cid:16) I ; H s − ( R ) (cid:17) of the Cauchy problem (1.1) with3 ≤ m ≤ l for arbitrary data in H s ( R ), where s ≥ . We remark that in Theorem 1.1 in [ ], the regularity s − s of the initial data, namely, even if u belongs to H s ( R ), u ( t ) maynot be in H s ( R ) (cid:16) ( H s − ( R ) (cid:17) for some t ∈ I . However, this situation is not desirable from the viewpointof well-posedness.In the present paper, we improve Theorem 1.1 in [ ] in the following two sense. The first one is thatwe prove existence of a local-in-time solution to (1.1) with γ = ≤ m ≤ l for arbitrary data whichbelong to the wider class (cid:16) H s ( R ) with s ≥ (cid:17) than theirs (cid:16) H s ( R ) with s > (cid:17) . The second one is thatwe prove that for any t ∈ I , the solution u ( t ) belongs to the same space as the initial data (see Theorem1.1)-Remark 1.3). They [ ] also showed existence of solution locally in time and uniqueness of solutionsin H s − ( R ) ∩ H (cid:16) R ; x dx (cid:17) of the problem (1.1) with m = H s ( R ) ∩ H (cid:16) R ; x dx (cid:17) with s ≥ . We note that in the case of m =
2, some spatial decay as | x | → ∞ assumption on data seemsto be needed and we do not pursue the case of m = ] proved local well-posedness of (1.1) with the nonlinearity G = G , : = c u ∂ x u + c | u | u for arbitrary data in H s ( R ) with s ≥ , where c , c ∈ C are constants. Segata [ ]showed local well-posedness of the Cauchy problem (1.3)-(1.4) with a good sign ν < ffi cient λ = H s ( R ) with s ≥ . Huo and Jia [ , Theorem1.1] proved thesimilar conclusion as [ , Theorem2.1] without the sign condition ν < ν =
0. We emphasizethat one of our main results (Theorem 1.3) reconstructs their results [
6, 31, 17 ]. Segata [ ] showed localwell-posedness in H s ( R ) of the problem (1.3)-(1.4) with a good sign ν < H s ( R )with s > . We note that λ in (1.4) is not 0 necessarily in the result [ ], whose situation is di ff erentfrom that in [ ]. Huo and Jia [ ] removed the sign condition ν < , Theorem1.1] and provedlocal well-posedness in H s ( R ) of the problem (1.3)-(1.4) with ν > s > .There are fewer physical results and fewer mathematical results about the problem (1.1) with threetimes ( γ =
3) derivative nonlinearities than the other cases ( γ ∈ { , , } ). It should be known thatequation (1.1) with γ = G = G , is the followingform: G , (cid:16)n ∂ kx u o k ≤ , n ∂ kx ¯ u o k ≤ (cid:17) : = ∂ x ( H + iH + i (cid:16) | u | u (cid:17)) , (1.5)4here H is a fifth-order polynomial and H is a third-order polynomial, which are given by H = H ( u , ∂ x u , u , ∂ x u ) : = ∂ x ( | u | u ) + u ∂ x u − u ∂ x ¯ u ) | u | u , H = H (cid:16)n ∂ kx u o k ≤ , n ∂ kx ¯ u o k ≤ (cid:17) : = − ∂ x u ) ¯ u + ∂ x ( | u | ) u , respectively (see [ ] and its references). We note that H contains the first order derivative of u and ¯ u and H contains the second order derivative u and ¯ u , thus we see that the nonlinearity G , given by (1.5)contains the third order derivative. It should be also notified that the equation (1.1) with the nonlinearity G , given by (1.5) belongs to a hierarchy of the derivative nonlinear Schr¨odinger equation, which canbe written as i ∂ t U + ∂ x n ( − i Λ ) n − U o = , (1.6)where n ∈ N , U = U ( t , x ) = (cid:16) u ( t , x ) , u ( t , x ) (cid:17) T : R × R → C is a solution to (1.6) and Λ is the recursionoperator (see (A.1) for the definition). When n =
1, the equation (1.6) is equivalent to the well-knownderivative nonlinear Schr¨odinger equation: i ∂ t u + ∂ x u = − i ∂ x (cid:16) | u | u (cid:17) , (1.7)which describes nonlinear Alfv´en waves in space plasma physics (see [ ]) and ultra-short pulse propa-gation (see [ ]). The equation (1.7) has also been extensively studied in the field of mathematics (see [ ]and its references for example). Moreover, when n =
2, we can see that the equation (1.6) is equivalentto (1.1) with the nonlinearity G , given by (1.5) (see Appendix A, for the derivation of (1.1)-(1.5) fromthe hierarchy (1.6) with n = γ =
3) as far as the authors know. Ruzhansky, Wang andZhang [ , Theorem1.2] proved the small data global well-posedness and scattering in the modulationspace M + m − , ( R ) with m ≥ ] for the definition of the modulation spaces). As in [ ,Remark1.3], if m ≥
10, then this result covers initial data in the Sobolev space H + m − + ε ( R ) with anarbitrarily small ε >
0. However, it is not clear whether their solution belongs to the same space as initialdata or not for t ∈ I , whose situation is not preferable from the viewpoint of well-posedness.Huo and Jia [ , Theorem1.1] proved local well-posedness of the problem (1.1) with γ = m = l ∈ N with m < l , that is G , l , for small data in H s ( R ) with s >
4. The proof of [ , Theorem1.1]is based on a dyadic Fourier restriction space, which is similar to the function space (6.1) we use in thepresent paper.However, well-posedness in the Sobolev space H s ( R ) for s ≤ G , l ( γ = m =
3) was still a major open problem. In the present paper, we solve this problemand prove local well-posedness of the problem (1.1) with G , l for small data in H ( R ) (see Theorem 1.1).Finally we also emphasize that one of our main results (Theorem 1.2) implies that the Cauchy prob-lem (1.1) with the nonlinearity G , given by (1.5), where the equation (1.1) is completely integrable andbelongs to a hierarchy (1.6) of the derivative nonlinear Schr¨odinger equation, is locally well-posed in H ( R ) for small data in H ( R ), which is also a completely new result.Equation (1.1) is invariant under the translation with respect to time and space variables. Thus wemay assume that the initial time is zero, i.e. t =
0. 5 .3 Scaling critical Sobolev index
Before stating our main results, we introduce a scaling critical Sobolev index s c for the Cauchy problem(1.1). Such index often divides well-posedness and ill-posedness of Cauchy problems for evolutionequations. If the nonlinear term G = G m , l γ with m = l is the following form: G m , m γ (cid:18)n ∂ kx u o k ≤ γ , n ∂ kx ¯ u o k ≤ γ (cid:19) = X k + l = m X | α | + | β | = γ C k , l α,β ( ∂ α x u ) · · · ( ∂ α k x u ) (cid:16) ∂ β x u (cid:17) · · · (cid:16) ∂ β l x u (cid:17) , (1.8)where α : = ( α , · · · , α k ) ∈ ( N ∪ { } ) k , β : = ( β , · · · , β l ) ∈ ( N ∪ { } ) l are multi-indices and C k , l α,β ∈ C isa constant, then equation (1.1) is invariant under the scaling transformation u u ϑ for ϑ >
0, which isdefined by u ϑ ( t , x ) : = ϑ − γ m − u (cid:16) ϑ t , ϑ x (cid:17) , where u : I × R → C is a solution to (1.1). A simple computation gives u ϑ (0 , x ) = ϑ − γ m − u ( ϑ x ) and k u ϑ (0 , · ) k ˙ H s = ϑ − γ m − − + s k u k ˙ H s , where for s ∈ R , ˙ H s = ˙ H s ( R ) denotes the L ( R )-based s -th order homogeneous Sobolev space. Fromthis observation, we define the scaling critical (Sobolev) index s c as s c = s c ( γ, m ) : = − − γ m − . If s = s c , then ˙ H s -norm of initial data is also invariant under the scaling transformation. The case s = s c is called scaling critical, the case s > s c is called scaling subcritical and the case s < s c is called scalingsupercritical.We also introduce a minimal regularity (Sobolev) exponent s = s ( γ, m ) given by s = s ( γ, m ) : = γ − , m = , γ − , m = , s c + ǫ, m ≥ γ ∈ { , } , where ǫ > s = s (3 , m ) : = ( , m = , , m ≥ . (1.10)We note that if s satisfies s ≥ s ( γ, m ) with γ ∈ { , } , then s belongs to the scaling subcritical case s > s c . In this subsection, we state our main results in the present paper.
Theorem 1.1 (Well-posedness for general nonlinearity) . Let γ ∈ { , , } , m , l ∈ N with ≤ m ≤ l ands ≥ γ − . Then the Cauchy problem (1.1) with (1.2) is locally well-posed in H s ( R ) for small initial datau ∈ H s ( R ) .Remark . Theorem 1.1 with γ = , Theorem1.1] with n =
1. More precisely,Theorem 1.1 with γ = H ( R ), than H + ǫ ( R ) with a positive ǫ >
0, whose function space is used in [ , Theorem1.1].6or the scaling invariant nonlinearity G m , m γ , which is defined by (1.8), we can prove local well-poseness in H s ( R ) with s ≥ max { s , } , where s is called a minimal regularity given by (1.9) and (1.10): Theorem 1.2 (Well-posedness for scaling invariant nonlinearity) . We assume that the nonlinearity G = G m , m γ is the form of (1.8). Let γ ∈ { , , } , m ≥ , and s ≥ max { s , } . Then the Cauchy problem (1.1) islocally well-posed in H s ( R ) for small initial data u ∈ H s ( R ) . The precise statement of the theorem is stated in Theorem 8.1.We can also get the following local well-posedness result to the problem (1.1) with the followingnonlinearity (1.11):
Theorem 1.3.
Let γ ∈ { , , } and m , l ∈ N with ≤ m ≤ l. We assume that the nonlinear function G m , l γ is the form ofG m , l γ (cid:18)n ∂ kx u o k ≤ γ , n ∂ kx ¯ u o k ≤ γ (cid:19) : = X m ≤ k + l ≤ l X | α | + | β |≤ γ C k , l α,β ( ∂ α x u ) · · · ( ∂ α k x u ) (cid:16) ∂ β x u (cid:17) · · · (cid:16) ∂ β l x u (cid:17) , (1.11) where C k , l α,β ∈ C is a constant. Then the Cauchy problem (1.1) is locally well-posed in H s ( R ) for smallinitial data u ∈ H s ( R ) with s ≥ γ − .Remark . The nonlinearity defined in (1.11) is general form such as the each terms do not containmore than γ derivatives. Remark . In the case of γ ∈ { , } , namely γ ,
3, the small assumption on the initial data u assumedin Theorems 1.1, 1.2, and 1.3 can be removed (see Theorem 4.4). Remark . Theorem 1.1 with γ = , Theorem1.1] in thefollowing two sense: The first one is that Theorem 1.1 with Remark 1.3 gives existence of a local-in-time solution to (1.1) with γ = m ∈ [3 , l ] and the general nonlinearity (1.2) for arbitrary data whichbelong to the wider class, that is, H s ( R ) with s ≥ , than H s ( R ) with s > used in [ , Theorem1.1].The second one is that Theorem 1.1 with Remark 1.3 verifies that for any t ∈ I , the solution u ( t ) to theproblem (1.1)-(1.2) belongs to the same space as the initial data, whose situation improves the previousresult [ , Theorem1.1]. Remark . Theorem 1.3 with γ = , Theorem1.1], [ , Theo-rem2.1], [ , Theorem1.1], [ , Theorem1.1] and [ , Theorem1.1]. More precisely, Theorem 1.3 with γ = H s ( R ) with s ≥ to the problem (1.1)with more general nonlinearities (1.11) with γ = G , : = c u ∂ x ¯ u + c | u | u with c , c ∈ C in [ , Thoemre1.1] and the physical model (1.4) with λ = , Theorem2.1] ( ν <
0) and [ ,Theorem1.1] ( ν > γ = H s ( R ) with thelower order regularity s ≥ than s ≥ and s > , which are assumed in the previous results [ ,Theorem1.1] and [ , Theorem1.1] respectively. Remark . Theorem 1.3 with γ = H ( R ) to the problem (1.1)with the nonlinearity G , given by (1.5), where the equation (1.1) is completely integrable and belongsto a hierarchy (1.6) of the derivative nonlinear Schr¨odinger equation. Remark . Let γ ∈ { , , } . Then we can prove that the data-to-solution map u u to the problem(1.1) with the gauge invariant cubic nonlinearity G , γ (cid:18)n ∂ kx u o k ≤ γ , n ∂ kx ¯ u o k ≤ γ (cid:19) : = ∂ γ x ( | u | u ) is not C in H s ( R )7or s < γ − in the sense of Fr´echet derivative. Indeed, if we choose f N ∈ L satisfying b f N ( ξ ) : = N − s + [ N − N − , N + N − ] ( ξ )for N ≫ ≤ t ≤ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)Z t e i ( t − t ′ ) ∂ x ∂ γ x (cid:16) | e it ∂ x f N ( t ′ ) | e it ∂ x f N ( t ′ ) (cid:17) dt ′ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) H s & N − s + γ − → ∞ as N → ∞ for s < γ − by the same argument as in the proof of [ , Theorem1.4], where the implicitconstant is independent of N . This means that Theorem 1.2 with m = ≤ t ≤ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)Z t e i ( t − t ′ ) ∂ x (cid:18)(cid:12)(cid:12)(cid:12)(cid:12) ∂ γ x e it ∂ x f N ( t ′ ) (cid:12)(cid:12)(cid:12)(cid:12) ∂ γ x e it ∂ x f N ( t ′ ) (cid:19) dt ′ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) H s & N − s + γ − → ∞ as N → ∞ for s < γ − . This means that Theorem 1.1 is optimal as long as we use the iteration argument.Next we consider the following scaling invariant nonlinearity G = G m , m γ (cid:18)n ∂ kx u o k ≤ γ , n ∂ kx ¯ u o k ≤ γ (cid:19) : = ∂ γ x P m ( u , u ) , (1.12)where γ ∈ { , , } , m ∈ N and P m : C → C is the m -th order polynomial defined by P ( z , w ) = P m ( z , w ) : = m X k = C k z k w m − k . (1.13)Here C k ∈ C with k ∈ { , · · · , m } is a complex constant. For the nonlinearity (1.12), we can prove thefollowing global well-posedness results in H s ( R ) in the scaling critical or subcritical case s ≥ s c under γ = m ≥ γ ∈ { , } and m ≥ Theorem 1.4 (Well-posedness and scattering at the scaling critical regularity for γ = . We assume thatthe nonlinearity G = G m , m is the form of (1.12). Let γ = , m ≥ , and s ≥ s c . Then, the Cauchy problem(1.1) is globally well-posed in H s ( R ) for small initial data u ∈ H s ( R ) . Moreover, the global solution uscatters in H s ( R ) as t → ±∞ . Theorem 1.5 (Well-posedness and scattering at the scaling critical regularity for γ ∈ { , } ) . We assumethat the nonlinearity G = G m , m γ is the form of (1.12). Let γ ∈ { , } , m ≥ , and s ≥ s c . Then, theCauchy problem (1.1) is locally well-posed in H s ( R ) for arbitrary initial data u ∈ H s ( R ) and globallywell-posed in H s ( R ) for small initial data u ∈ H s ( R ) . Moreover, the global solution u scatters in H s ( R ) as t → ±∞ .Remark . Theorem 1.5 with γ = ] and [ ]. In [ ], well-posedness in the scaling critical Sobolev H s c ( R ) was shown only forthe special gauge invariant nonlinearity ∂ x ( | u | m − u ) with m ≥
5. In Theorem 1.2 and Remark 3 in [ ], thescaling invariant nonlinearity G , given by (1.12) with m = H s c ( R ) was proved only for the special nonlinearity ∂ x ( u ) (see [ , Theorem]).For the other quartic nonlinearities, that is, ∂ x ( u ), ∂ x ( u | u | ), ∂ x ( | u | ), ∂ x ( | u | u ), well-posedness wasproved in the space L ( R ) (see [ , Remark 3]), which is a smaller space than the scaling critical Sobolevspace H s c ( R ). 8 emark . Let γ ∈ { , , } , m ∈ N with m ≥ s < s c . Then we can prove that the data-to solutionmap u u to the problem (1.1) with a specific scaling invariant nonlinearity G m , m γ = ∂ γ x ( u m ) is not C m in the same manner as the proof of [ , Theorem 1.4 (ii)]. This implies that Theorems 1.4 and 1.5 areoptimal as long as we use the iteration argument. ffi culties and idea for the proof of the main results The strategy of our proof of the main results is based on the contraction argument on a suitable functionspace employed in [ ] with several multilinear estimates from the auxiliary space to the solution space.The multilinear estimates are basically proved by combining the linear estimates (Strichartz estimates,Kato-type smoothing estimates, Maximal function estimates, Kenig-Ruiz estimates), a suitable decom-position of the Duhamel term introduced in [ ], bilinear Strichartz estimates on the solution spaces withthe Littlewood-Paley decomposition and modulation estimates.Especially, to obtain the multilinear estimates (Theorem 4.4) in the case of ( γ, m ) = (3 , H ( R ). By using changing varables with u = h ∂ x i v and applying the multilinear estimate, we canget the well-posedness for the general nonlinearity G , l in the Sobolev space H ( R ) (Theorem 1.1). Thisimproves the previous result [ , Theorem1.1]. We note that such bilinear Strichartz estimates were notused in the previous papers [
20, 34 ].In the proof of the scaling critical s = s c ( γ, m ) case and ( γ, m ) = (3 , ,
4) or (1 ,
4) case of Theo-rems 1.4 and 1.5, we need a more delicate argument than the other cases such as the scaling subcriticalcase s > s c ( γ, m ). Indeed, in such cases, we employ more sophisticated solution spaces and their aux-iliary spaces (see (6.1)) than the spaces given by Definition 2.3. By using these spaces, we can useso-called modulation estimates and deal with nonlinear interactions more precisely. Moreover we alsoprove more refined bilinear Strichartz estimates (Theorem 6.5) than Theorem 4.2 and apply them to getmultilinear estimates (Theorems 6.7, 6.8 and 7.2). The rest of the present paper is organized as follows. In Section 2, we introduce several notations usedthroughout this paper and collect fundamental estimates in Fourier analysis and several space-time esti-mates for solutions to the free fourth order Schr¨odinger equation. We define the solution spaces and theirauxiliary spaces to prove Theorems 1.1, 1.2, 1.3. In Section 3, we derive several space-time estimates forthe Duhamel term. Especially the proof of the estimate on the L x L t -norm of the Duhamel term is givenby using a decomposition of it introduced in [ ]. In section 4, we prove a bilinear Strichartz estimateon the solution spaces (Theorem 4.2) via the decomposition of the Duhamel term again. Moreover weprove multilinear estimates by combining the Littlewood-Paley theory, the linear estimates and the bilin-ear Strichartz estimate (Theorem 4.4). In section 5, we prove multilinear estimates for several specificscaling invariant nonlinearities at the scaling critical regularity in the cases m ≥ γ = m ≥ γ = m = γ = m = γ =
2. We provemore refined bilinear Strichartz estimates on the solution spaces (Theorem 6.5). By applying Theorem6.5 and treating nonlinear interactions more precisely, we dirive multilinear estimates (Theorems 6.7 and9.8). In section 7, we introduce the similar solution spaces and their auxiliary spaces (7.1) as (6.1), totreat the similar nonlinearities as studied in Section 5 at the scaling critical regularity in the case m = γ =
1. The proof of Theorem 7.2 is done via the almost similar manner as the proof of Theorem6.8. In section 8, we give a proof of Theorem 1.1-Theorem 1.5.
We summarize the notations used throughout this paper. For a time interval I and a Hilbert space H , wewrite the function space composed of continuous functions from I to H as C ( I ; H ). For a Banach space E ⊂ C ( R ; H ), we define the time restriction space E ( I ) as E ( I ) : = { u ∈ C ( I ; H ) | ∃ v ∈ C ( R ; H ) s . t . v | I = u } , k u k E ( I ) : = inf {k v k E | v | I = u } . In particular, we write E ( T ) instead of E ( I ) if I = [0 , T ] for T > ≤ p ≤ ∞ , we denote the Lebesuge space by L p = L p (cid:16) R ℓ (cid:17) , where ℓ = k f k L p : = (cid:16)R R ℓ | f ( x ) | p dx (cid:17) / p if 1 ≤ p < ∞ and k f k L ∞ : = ess.sup x ∈ R ℓ | f ( x ) | . For 1 ≤ p , q ≤ ∞ and a timeinterval I , we use the space-time Lebesgue space L pt (cid:16) I ; L qx (cid:17) with the norm k u k L pt ( I ; L qx ) : = (cid:13)(cid:13)(cid:13) k u ( t ) k L qx (cid:13)(cid:13)(cid:13) L pt ( I ) . We also use the time-space Lebesgue space L qx (cid:16) R ; L pt ( I ) (cid:17) with the norm k u k L qx ( R ; L pt ( I ) ) : = (cid:13)(cid:13)(cid:13)(cid:13) k u ( x ) k L pt ( I ) (cid:13)(cid:13)(cid:13)(cid:13) L qx ( R ) . We often omit the time interval I = [0 , T ] ( T >
0) and the whole space R , and write L pT L qx = L pt (cid:16) [0 , T ]; L qx ( R ) (cid:17) , L qx L pT = L qx (cid:16) R ; L pt ([0 , T ]) (cid:17) , L pt L qx = L pt (cid:16) R ; L qx ( R ) (cid:17) , and L qx L pt = L qx (cid:16) R ; L pt ( R ) (cid:17) , if they do not cause a con-fusion. Let S (cid:16) R ℓ (cid:17) be the rapidly decaying function space. For f ∈ S ( R ), we define the Fourier transformof f as F (cid:2) f (cid:3) ( ξ ) = b f ( ξ ) : = √ π Z R e − ix ξ f ( x ) dx , and the inverse Fourier transform of f as F − (cid:2) f (cid:3) ( x ) : = √ π Z R e ix ξ f ( ξ ) d ξ, and extend them to S ′ ( R ) by duality. We also define the time-space Fourier transform of u ∈ S ( R × R ) as F t , x [ u ]( τ, ξ ) : = √ π Z R × R e ix ξ + it τ u ( t , x ) dtdx . For a measurable function m : R → C , we denote the Fourier multiplier operator by m ( ∂ x ), which isgiven by [ m ( ∂ x ) f ]( x ) : = F − h m ( ξ ) b f ( ξ ) i ( x ) , x ∈ R . (2.1)10or s ∈ R , we denote the inhomogeneous L -based Sobolev space by H s = H s ( R ) with the norm k f k H s : = (cid:13)(cid:13)(cid:13) h ∂ x i s f (cid:13)(cid:13)(cid:13) L = (cid:13)(cid:13)(cid:13)(cid:13) h ξ i s b f (cid:13)(cid:13)(cid:13)(cid:13) L , where h·i : = + | · | . We also use the L -based homogeneous Sobolev space ˙ H s = ˙ H s ( R ) with the norm k f k ˙ H s : = (cid:13)(cid:13)(cid:13) | ∂ x | s f (cid:13)(cid:13)(cid:13) L = (cid:13)(cid:13)(cid:13)(cid:13) | ξ | s b f (cid:13)(cid:13)(cid:13)(cid:13) L . We introduce the free propagator of the fourth-order Schr¨odinger equation n e it ∂ x o t ∈ R defined by (cid:16) e it ∂ x f (cid:17) ( x ) : = F − h e it ξ b f ( ξ ) i ( x ) = √ π Z R e i ( x ξ + t ξ ) b f ( ξ ) d ξ, (2.2)for ( t , x ) ∈ R × R . For a space-time function F ∈ L (cid:16) , ∞ ; L x ( R ) (cid:17) , we define the integral operator I as I [ F ]( t ) : = Z t e i ( t − t ′ ) ∂ x F ( t ′ ) dt ′ , (2.3)for t ≥ I [ F ]( t ) = K to the free fourth-orderSchr¨odinger equation given by K = K ( t , x ) : = F − ξ h e it ξ i ( x ) = √ π Z R e i ( x ξ + t ξ ) d ξ. (2.4)We note that the right hand side of (2.4) is a formal expression, since e it ξ does not belong to L x ( R ) butbelong to S ′ ( R ) for any t ∈ R . Moreover, the identity I [ F ]( t ) = √ π Z R Z t K ( t − t ′ , x − y ) F ( t ′ , y ) dt ′ dy (2.5)holds for any t ≥
0. This expression is utilized to prove Proposition 3.3.We use the convention that capital letters denote dyadic numbers, e.g., N = n for n ∈ Z . We fix anonnegative even function ϕ ∈ C ∞ (( − , ϕ ( r ) = | r | ≤ ϕ ( r ) ≤ ≤ | r | ≤ . Set ψ N ( r ) : = ϕ ( r / N ) − ϕ (2 r / N ) for N ∈ Z . For N ∈ Z , we denote the Littlewood-Palay projection by P N , whose symbol is given by ϕ N ( | ξ | ) , i.e.( P N f )( x ) : = F − h ψ N ( | ξ | ) b f ( ξ ) i ( x ) . We also define the operators P > N : = P M > N P M and P ≤ N : = Id − P > N . We often use abbreviations f N = P N f , f ≤ N = P ≤ N f , etc, if they do not cause a confusion. For an interval I ⊂ R , we denotethe characteristic function on I by I , which is defined by I ( ϑ ) : = ( , if ϑ ∈ I , , if ϑ ∈ R \ I . We define Dirac’s delta function centered at the origin as δ = δ ( x ) ∈ S ′ ( R ). We use the shorthand A . B to denote the estimate A ≤ CB with some constant C >
0, and A ≪ B to denote the estimate A ≤ C − B for some large constant C >
0. The notation A ∼ B stands for A . B and B . A .Next we state the definition of the ( H s − ) solution to the Cauchy problem (1.1).11 efinition 2.1 ( H s -solution) . Let s ∈ R and I ⊂ R be a time interval. A function u : I × R → C is a ( H s -)solution to (1.1) on I , if u ∈ C ( I ; H s ( R )) and satisfies the Duhamel formula u ( t ) = e it ∂ x u − i I [ G ( u )]( t )in H s ( R )-sense for any t ∈ I , where the free fourth order Schr¨odinger group n e it ∂ x o t ∈ R is given by (2.2)and the integral operator I is given by (2.3). If the maximal existence time interval I = R , then u iscalled a global ( H s − ) solution to (1.1).Next we recall the following Bernstein- and Sobolev- inequalities. Lemma 2.1 (Bernstein inequalities, Sobolev inequalities) . Let p , q satisfy ≤ p ≤ q ≤ ∞ , s > andN ∈ Z . Then the estimates k P > N f k L p ( R ) . N − s (cid:13)(cid:13)(cid:13) | ∂ x | s P > N f (cid:13)(cid:13)(cid:13) L p ( R ) (cid:13)(cid:13)(cid:13) | ∂ x | s P ≤ N f (cid:13)(cid:13)(cid:13) L p ( R ) . N s k P ≤ N f k L p ( R ) (cid:13)(cid:13)(cid:13) | ∂ x | ± s P N f (cid:13)(cid:13)(cid:13) L p ( R ) ∼ N ± s k P N f k L p ( R ) k P ≤ N f k L q ( R ) . N p − q k P ≤ N f k L p ( R ) k P N f k L p ( R ) . N p − q k P N f k L p ( R ) hold provided that the right-hand sides are finite, where the implicit constants depend only on p , q , s. For the proof of this lemma, see Appendix in [ ] for example.Next, we recall the Littlewood-Paley theorem. Lemma 2.2 (Lettlewood-Paley theorem) . Let p ∈ (1 , ∞ ) and f ∈ L p ( R ) . Then the equivalency (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) X N ∈ Z | P N f ( · ) | (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L px ( R ) ∼ k f k L p ( R ) holds, where the implicit constant depends only on p. For the proof of this lemma, see [ ] for instance. In this subsection, we collect several estimates of solutions to the free fourth-order Schr¨odinger equation.We introduce the Strichartz estimates. Before stating the estimates, we define admissible pairs as follows.
Definition 2.2 (Admissible pairs) . We say that a pair ( q , r ) is admissible if it satisfies 2 ≤ q , r ≤ ∞ and2 q + r = . Lemma 2.3 (Strichartz estimates) .
1. Let ( q , r ) be admissible and I be a time interval. Then theestimate (cid:13)(cid:13)(cid:13)(cid:13) | ∂ x | q e it ∂ x φ (cid:13)(cid:13)(cid:13)(cid:13) L qt ( I ; L rx ( R )) . k φ k L x ( R ) (2.7) holds for any φ ∈ L ( R ) , where the implicit constant depends only on q and r. . Let ( ˜ q , ˜ r ) be admissible and ( ˜ q ′ , ˜ r ′ ) be the pair of H ¨older conjugate of ( ˜ q , ˜ r ) and I be a time interval.Then the estimate (cid:13)(cid:13)(cid:13)(cid:13) | ∂ x | − q − q I [ F ] (cid:13)(cid:13)(cid:13)(cid:13) L qt ( I ; L rx ( R )) . k F k L ˜ q ′ t ( I ; L ˜ r ′ x ( R )) (2.8) holds for any F ∈ L ˜ q ′ t (cid:16) I ; L ˜ r ′ x (cid:17) , where the implicit constant depends only on q , r , ˜ q , ˜ r. For the proof, see Proposition 3.1 in [ ] or Proposition 2.3 in [ ]. Lemma 2.4 (Kato type smoothing [ ]) . Let I be a time interval. Then the estimate (cid:13)(cid:13)(cid:13)(cid:13) | ∂ x | e it ∂ x φ (cid:13)(cid:13)(cid:13)(cid:13) L ∞ x ( R : L t ( I ) ) ≤ k φ k L x ( R ) holds for any φ ∈ L ( R ) . The proof of this estimate can be found in [ ]. Proposition 2.5 (Maximal function estimate [ ]) . Let ǫ > and < T < . Then there exists a positiveconstant C > such that for any φ ∈ L ( R ) , the inequality (cid:13)(cid:13)(cid:13)(cid:13) h ∂ x i − (1 + ǫ ) e it ∂ x φ (cid:13)(cid:13)(cid:13)(cid:13) L x ( R ; L ∞ t ([0 , T ]) ≤ C k φ k L x ( R ) (2.9) holds. This proposition is nothing but Proposition 2.2 in [ ]. Lemma 2.6 (Kenig-Ruiz estimate [
25, 24 ]) . Let I be a time interval. Then there exists a positive constantC > independent of I such that for any φ ∈ L ( R ) , the inequality (cid:13)(cid:13)(cid:13)(cid:13) | ∂ x | − e it ∂ x φ (cid:13)(cid:13)(cid:13)(cid:13) L x ( R ; L ∞ t ( I ) ) ≤ C k φ k L x ( R ) holds. For the proof of this lemma, see Theorem 2.5 in [ ]. In this subsection, we introduce solution spaces for the Cauchy problem (1.1) and their auxiliary spaces(see also [ ]). Definition 2.3 ( L ( R )-based auxiliary space, solution space) . For a dyadic number N ∈ N ∪{ } , thefunction space Y N is defined by Y N : = n F ∈ L x L t + L t L x : k F k Y N < ∞ o , with the norm k F k Y N : = inf (cid:26) N − k F k L x L t + k F k L t L x : F = F + F , F ∈ L x L t , F ∈ L t L x (cid:27) . The function space X N is defined by X N : = n u ∈ L ∞ t L x : k u k X N < ∞ o , with the norm k u k X N : = k u k L ∞ t L x + N k u k L t L ∞ x + N − (1 + ǫ ) k u k L x L ∞ t + N − k u k L x L ∞ t + N k u k L ∞ x L t + (cid:13)(cid:13)(cid:13)(cid:13)(cid:16) i ∂ t + ∂ x (cid:17) u (cid:13)(cid:13)(cid:13)(cid:13) Y N , where ǫ > emark .
1. The function spaces Y N and X N given in Definition 2.3 are Banach spaces.2. The power of the dyadic numbers and the function spaces in X N , i.e. N k u k L t L ∞ x , N − (1 + ǫ ) k u k L x L ∞ t , N − k u k L x L ∞ t , N k u k L ∞ x L t , come from the Strichartz estimate (Lemma 2.3), the maximal function estimate (Proposition 2.5),the Kenig-Ruiz estimate (Lemma 2.6) and the Kato type smoothing (Lemma 2.4) respectively.3. In the previous work [ ], a similar semi-norm to (cid:13)(cid:13)(cid:13)(cid:13)(cid:16) i ∂ t + ∂ x (cid:17) u (cid:13)(cid:13)(cid:13)(cid:13) Y N which appears in X N -normis used to study low regularity well-posedness for the second order Schr¨odinger equation withderivative nonlinearities: i ∂ t u + ∂ x u = G m , l ( u , ∂ x u , u , ∂ x u ) , where 3 ≤ m ≤ l .4. In [ ], the authors used the function space such as Definition 2.3 for high frequency to prove thewell-posedness of (1.1) with γ =
3. But thay used the Besov type Fourier restriction norm insteadof L T L x . We will also use it for the special cases. (See, Section 6 below.)5. For any function φ = φ ( x ) on R , the solution e it ∂ x φ to the free fourth-order Schr¨odinger equationsatisfies the following identity (cid:13)(cid:13)(cid:13)(cid:13)(cid:16) i ∂ t + ∂ x (cid:17) e it ∂ x φ (cid:13)(cid:13)(cid:13)(cid:13) Y N = , which implies that k ( i ∂ t + ∂ x ) u k Y N is semi-norm.6. For a complex valued function u = u ( t , x ) on R × R , the relation (cid:13)(cid:13)(cid:13)(cid:13)(cid:16) i ∂ t + ∂ x (cid:17) u (cid:13)(cid:13)(cid:13)(cid:13) Y N = (cid:13)(cid:13)(cid:13)(cid:13)(cid:16) i ∂ t − ∂ x (cid:17) u (cid:13)(cid:13)(cid:13)(cid:13) Y N , (cid:13)(cid:13)(cid:13)(cid:13)(cid:16) i ∂ t + ∂ x (cid:17) u (cid:13)(cid:13)(cid:13)(cid:13) Y N hold. This implies k u k X N , k u k X N .The following proposition means boundedness from L ( R ) to X N for localized solutions to the freefourth-order Schr¨odinger equation. Proposition 2.7 (Estimate for localized free solutions from L ( R ) to X N ) . Let < T < and N ∈ N .Then there exists a positive constant C > such that the estimate (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x P N φ (cid:13)(cid:13)(cid:13)(cid:13) X N ( T ) ≤ C k P N φ k L ( R ) holds. Proposition 2.7 follows from the definition of X N ( T )-norm, the Strichartz estimate (2.7), the Katotype smoothing (Lemma 2.4), Kenig-Ruiz estimate (Lemma 2.6), and the maximal function estimate(2.9). Proposition 2.8 (Estimate for localized free solutions from X to L ( R )) . Let < T < . Then thereexists a positive constant C > such that the estimate (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x P ≤ φ (cid:13)(cid:13)(cid:13)(cid:13) X ( T ) ≤ C k P ≤ φ k L ( R ) holds. roof. Because k e it ∂ x P ≤ φ k L ∞ T L x + k e it ∂ x P ≤ φ k L x L ∞ T + k e it ∂ x P ≤ φ k L T L ∞ x . k P ≤ φ k L x by the unitarity of e it ∂ x on L , Proposition 2.5, and Lemma 2.6, it su ffi ces to show that (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x P ≤ φ (cid:13)(cid:13)(cid:13)(cid:13) L T L ∞ x . k P ≤ φ k L ( R ) and (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x P ≤ φ (cid:13)(cid:13)(cid:13)(cid:13) L ∞ x L T . k P ≤ φ k L ( R ) . By the Bernstein inequality and the unitarity of e it ∂ x , we have (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x P ≤ φ (cid:13)(cid:13)(cid:13)(cid:13) L T L ∞ x . (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x P ≤ φ (cid:13)(cid:13)(cid:13)(cid:13) L T L x . T (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x P ≤ φ (cid:13)(cid:13)(cid:13)(cid:13) L ∞ T L x . k P ≤ φ k L ( R ) . On the other hand, we have (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x P ≤ φ (cid:13)(cid:13)(cid:13)(cid:13) L ∞ x L T . (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x P ≤ φ (cid:13)(cid:13)(cid:13)(cid:13) L T L ∞ x . T (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x P ≤ φ (cid:13)(cid:13)(cid:13)(cid:13) L ∞ T L x . k P ≤ φ k L ( R ) by the same argument. (cid:3) Next we introduce the following solution space and its auxiliary space of Besov type for H s -solutionto the Cauchy problem (1.1). Definition 2.4 ( H s ( R )-based auxiliary space, Solution space) . Let s ∈ R , T >
0. We define the functionspaces X s ( T ) and Y s ( T ) by the norms k u k X s = k u k X s ( T ) : = k P ≤ u k X ( T ) + X N ∈ N N s k P N u k X N ( T ) , k F k Y s = k F k Y s ( T ) : = k P ≤ F k Y ( T ) + X N ∈ N N s k P N F k Y N ( T ) respectively. Remark . The function spaces Y s ( T ) and X s ( T ) given in Definition 2.4 are Banach spaces. Proposition 2.9 (Estimate for free solutions from X s to H s ( R )) . Let < T < , s ∈ R and φ ∈ H s ( R ) .Then there exists a positive constant C > such that the estimate (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x φ (cid:13)(cid:13)(cid:13)(cid:13) X s ( T ) ≤ C k φ k H s ( R ) holds. Proposition 2.9 follows from Proposition 2.7, Proposition 2.8, the Plancherel theorem and the prop-erties of the Littlewood-Paley projection P N . 15 Decomposition of the Duhamel term and its application
Our aim of this section is to prove the following estimates (Theorem 3.1 and 3.2) for the Duhamel term I [ F ], where the integral operator I is defined by (2.3), from the auxiliary space Y N to the solution space X N , whose function spaces are defined in Definition 2.3. The proof is based on the method of the proofof Lemma 7.4 in [ ]. (Also see Proposition 3.3 in [ ].) Theorem 3.1 (Estimate for the localized Duhamel term from Y to X ) . Let < T < . Then there existsa positive constant C > independent of T and F such that the estimates k P ≤ I [ F ] k X ( T ) ≤ C k P ≤ F k Y ( T ) (3.1) holds.Proof. By Proposition 2.8, we have k P ≤ I [ F ] k X ( T ) . Z T (cid:13)(cid:13)(cid:13)(cid:13) e i ( t − t ′ ) ∂ x P ≤ F ( t ′ ) (cid:13)(cid:13)(cid:13)(cid:13) X ( T ) dt ′ . Z T (cid:13)(cid:13)(cid:13)(cid:13) e − it ′ ∂ x P ≤ F ( t ′ ) (cid:13)(cid:13)(cid:13)(cid:13) L x dt ′ . k P ≤ F k L T L x . Therefore, it su ffi ces to show that k P ≤ I [ F ] k X ( T ) . k P ≤ F k L x L T . Because we have k P ≤ I [ F ] k L ∞ T L x . k P ≤ I [ F ] k L x L ∞ T and k P ≤ I [ F ] k L ∞ x L T . k P ≤ I [ F ] k L ∞ x L T . k P ≤ I [ F ] k L T L ∞ x . k P ≤ I [ F ] k L T , x . k P ≤ I [ F ] k L x L ∞ T by the H ¨older inequality, and the Sobolev inequality, it su ffi ces to show that k P ≤ I [ F ] k L px L ∞ T . k P ≤ F k L x L T (3.2)for p = ϕ ( ξ ) : = ϕ (cid:16) ξ (cid:17) , χ ( x ) : = F − ξ [ ˘ ϕ ]( x ), where ϕ is given by (2.6) and K ( t , x ) : = [0 , ( t ) e it ∂ x χ ( x ) , G ( t , x ) : = [0 , ( t ) P ≤ F ( t , x ) . Then, we obtain P ≤ I [ F ]( t , x ) = ( K ∗ G )( t , x ) because ϕ = ˘ ϕϕ , where ∗ denotes the time-space convolu-tion. Therefore, by the Young inequality, we have k P ≤ I [ F ] k L px L ∞ T . k K k L px L ∞ T k G k L x , T . We note that k K k L px L ∞ T . k χ k L x < ∞ for p = (cid:3) heorem 3.2 (Estimate for the localized Duhamel term from Y N to X N ) . Let < T < and N ∈ N .Then there exists a positive constant C > independent of T , N and F such that the estimate k P N I [ F ] k X N ( T ) ≤ C k P N F k Y N ( T ) (3.3) holds. Corollary 3.3 (Estimate for the Duhamel term from Y s to X s ) . Let s ∈ R , < T < , and F ∈ Y s ( T ) .Then there exists a positive constant C independent of s, T and F such that the estimate kI [ F ] k X s ( T ) ≤ C k F k Y s ( T ) holds. Collorary 3.3 follows from Theorem 3.1 and 3.2.Because we have k P N I [ F ] k X N ( T ) ≤ C k P N F k L T L x (3.4)by the same argument as in the proof of Theorem 3.1, to obtain Theorem 3.2, it su ffi ces to show that k P N I [ F ] k X N ≤ CN − k P N F k L x L T . (3.5)In the following, we focus on the proof of (3.5). We assume that the function F is defined in R × R . Let N ∈ N . For y ∈ R , we introduce the function w y = w y , N : R × R → C given byw y ( t , x ) = w y , N ( t , x ) : = √ π Z t ( ˘ P N K )( t − t ′ , x − y )( P N F )( t ′ , y ) dt ′ , where ˘ P N : = P N / + P N + P N and K ∈ S ′ ( R × R ) is the fundamental solution of the fourth orderSchr¨odinger equation, which is defined by (2.4). We note that the identity P N I [ F ]( t , x ) = Z R w y , N ( t , x ) dy (3.6)holds. Boundedness of the function w y from L t ( R ) to X N is obtained as follows: Lemma 3.4 (Estimate of the function w y from L ( R ) to X N ) . Let N ∈ N , y ∈ R . It holds that k w y k X N . N − k P N F ( · , y ) k L t ( R ) . (3.7)We only consider the case of y =
0. We put F ( t ) : = ( P N F )( t , ffi ces toshow that k w k X N . N − k F k L t ( R ) . (3.8)We introduce the operators P + and P − given by d P + f ( ξ ) : = [0 , ∞ ) ( ξ ) b f ( ξ ) , d P − f ( ξ ) : = ( −∞ , ( ξ ) b f ( ξ ) . (3.9)The inequality (3.8) is implied by the following proposition with L ∼ N .17 roposition 3.5. Let N ∈ N , < L . N. Let h ∈ L ∞ ( R ) is defined byw ( t , x ) = − e it ∂ x L v ( x ) + ( P < L / ( −∞ , )( x )( P + e it ∂ x v )( x ) − ( P < L / [0 , ∞ ) )( x )( P − e it ∂ x v )( x ) + h ( t , x ) , where v ( x ) : = F − ξ [ ψ N ( ξ ) F t [ F ]( ξ )]( x ) , L v ( x ) : = F − ξ [ ψ N ( ξ ) F t [ ( −∞ , F ]( ξ )]( x ) . Then, h satisfies k h k L qx L pt . L − − p N − − q − p k F k L t (3.10) for any p, q ≥ . In particular, if N ∼ L, then we have k h k L qx L pt . N − − q − p k F k L t . Proof.
We first prove that F tx [ h ]( τ, ξ ) = A ( τ, ξ ) F t [ F ]( τ ), where A ( τ, ξ ) = ψ N ( ξ ) − ( ξ + ξ τ + ξτ + τ ) τ> ( τ ) ψ N ( τ )4 τ ψ < L / ( ξ − τ ) + ( ξ − τ )( ξ + τ ) τ> ( τ ) ψ N ( τ )4 τ ψ < L / ( ξ + τ ) i ( τ − ξ − i . (3.11)Since (0 , t ] ( t ′ ) = [0 , ∞ ) ( t − t ′ ) − ( −∞ , ( t ′ ), we have w ( t , x ) = √ π Z R [0 , ∞ ) ( t − t ′ )( P N K )( t − t ′ , x ) F ( t ′ ) dt ′ − e it ∂ x Z R ( −∞ , ( t ′ )( P N K )( − t ′ , x ) F ( t ′ ) dt ′ ! = : I − e it ∂ x I . By the direct calculation, we have F x [ I ] = ψ N ( ξ ) √ π Z R e − it ′ ξ ( −∞ , ( t ′ ) F ( t ′ ) dt ′ = ψ N ( ξ ) F t [ ( −∞ , F ]( ξ ) = d L v ( ξ ) . Therefore, we obtain h ( t , x ) = I − ( P < L / ( −∞ , )( x )( P + e it ∂ x v )( x ) + ( P < L / [0 , ∞ ) )( x )( P − e it ∂ x v )( x ) = : I − J + + J − because e it ∂ x I − e it ∂ x L v ( x ) =
0. By the direct calculation, we have F tx [ I ]( τ, ξ ) = F tx [( [0 , ∞ ) P N K ) ∗ t F ]( τ, ξ ) = ψ N ( ξ ) i ( τ − ξ − i F t [ F ]( τ ) . Therefore, to obtain (3.11), it su ffi ces to show that F tx [ J ± ]( τ, ξ ) = Q ± ( ξ, τ ) τ> ( τ ) ψ N ( τ )4 τ ψ < L / ( ξ ∓ τ ) F t [ F ]( τ ) i ( τ − ξ − i , where Q ± ( ξ, τ ) : = ξ − τξ ∓ τ .
18e note that ξ − τ = ( ξ − τ )( ξ + ξ τ + ξτ + τ ) = ( ξ + τ )( ξ − τ )( ξ + τ ) . By the direct calculation, we have F x [ P < L / ( −∞ , ]( ξ ) = − ψ < L / ( ξ ) i ( ξ + i , F x [ P < L / [0 , ∞ ) ]( ξ ) = − ψ < L / ( ξ ) i ( ξ − i F tx [ P ± e it ∂ x v ]( τ, ξ ) = δ ( τ − ξ ) ξ ≷ ( ξ ) ψ N ( ξ ) F t [ F ]( ξ ) . Therefore, by using the variable transform η ω as η = ± ω , we have F tx [ J ± ]( τ, ξ ) = − ψ < L / ( ξ ) i ( ξ ± i ! ∗ ξ (cid:16) δ ( τ − ξ ) ξ ≷ ( ξ ) ψ N ( ξ ) F t [ F ]( ξ ) (cid:17) = ∓ Z ±∞ ψ < L / ( ξ − η ) i ( ξ − η ± i δ ( τ − η ) ψ N ( η ) F t [ F ]( η ) d η = ∓ Z ∞ ψ < L / ( ξ ∓ ω ) i ( ξ ∓ ω ± i δ ( τ − ω ) ψ N ( ω ) F t [ F ]( ω ) ± ω d ω = − ψ < L / ( ξ ∓ τ ) i ( ξ ∓ τ ± i ψ N ( τ ) F t [ F ]( τ ) τ> ( τ )4 τ = Q ± ( ξ, τ ) τ> ( τ ) ψ N ( τ )4 τ ψ < L / ( ξ ∓ τ ) F t [ F ]( τ ) i ( τ − ξ − i . (3.12)As a result, we obtain (3.11).Next, we prove (3.10). We divide A ( τ, ξ ) into A ( τ, ξ ) = X j = A j ( τ, ξ ) , A j ( τ, ξ ) = Ω j ( τ, ξ ) A ( τ, ξ ) , where Ω : = (cid:26) ( τ, ξ ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) τ > , | ξ − τ | < L (cid:27) , Ω : = (cid:26) ( τ, ξ ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) τ > , | ξ + τ | < L (cid:27) , Ω : = { ( τ, ξ ) | τ ≤ } , Ω : = R \ ( Ω ∪ Ω ∪ Ω ) . First, we assume ( τ, ξ ) ∈ Ω . Then, we have ψ < L / ( ξ − τ ) = ψ < L / ( ξ + τ ) =
0, and ξ ∼ τ > ξ ∼ N or τ ∼ N . Furthermore, by the Talor expansion, we obtain ψ N ( ξ ) = ψ N ( τ ) + ( ξ − τ ) O ( N − ) . Therefore, we have A ( τ, ξ ) = ψ N ( τ ) − ξ + ξ τ + ξτ + τ τ i ( τ − ξ − i + ξ − τ i ( τ − ξ − i O ( N − ) = ψ N ( τ )4 τ τ + ξτ + ξ i ( τ + ξτ + ξ τ + ξ ) − i ( τ + ξτ + ξ τ + ξ ) O ( N − ) .
19t implies that k A k rL r τ L ξ . Z <τ ∼ N τ r Z <ξ ∼ N (3 τ + ξτ + ξ ) ( τ + ξτ + ξ τ + ξ ) d ξ r d τ + Z <τ ∼ N Z <ξ ∼ N N − ( τ + ξτ + ξ τ + ξ ) d ξ r d τ . N − ( r − for r ≥ k A k L ∞ τ L ξ . sup <τ ∼ N τ Z <ξ ∼ N (3 τ + ξτ + ξ ) ( τ + ξτ + ξ τ + ξ ) d ξ d τ + sup <τ ∼ N Z <ξ ∼ N N − ( τ + ξτ + ξ τ + ξ ) d ξ d τ . N − . By the same argument, we obtain k A k L r τ L ξ . N − ( − r ) for 2 ≤ r ≤ ∞ . Next, we assume ( τ, ξ ) ∈ Ω . Therefore, we have A ( τ, ξ ) = ψ N ( ξ ) i ( τ − ξ − i . We note that | τ − ξ | ≥ − τ + N ∼ | τ | + N when τ ≤ | ξ | ≥ N . It implies that k A k rL r τ L ξ . Z −∞ | τ | + N ) r Z | ξ |∼ N d ξ ! r d τ . N − ( r − for r ≥ k A k L ∞ τ L ξ . sup τ ≤ | τ | + N ) Z | ξ |∼ N d ξ ! . N − . Finally, we assume ( τ, ξ ) ∈ Ω . Then, we obtain | τ − ξ | = | τ − ξ || τ + ξ || τ + ξ | & LN if | ξ | ∼ N or τ ∼ N . It implies that | A ( τ, ξ ) | . (cid:18) ψ N ( ξ ) + ψ N ( τ ) ψ < L / ( ξ − τ ) + ψ N ( − τ ) ψ < L / ( ξ + τ ) (cid:19) | τ − ξ | . k A k L r τ L ξ . k A k L ξ L r τ . Z | ξ |∼ N Z | τ − ξ | & LN τ − ξ ) r d τ ! r d ξ . L − − r ) N − (5 − r ) for r ≥ k A k L ∞ τ L ξ . sup τ Z | ξ |∼ N , | τ − ξ | & LN τ − ξ ) d ξ ! . L − N − . As a result, for 2 ≤ r ≤ ∞ , we obtain k A k L r τ L ξ . L − (1 − r ) N − ( − r ) . (3.13)For p , q ≥
2, we put 1 p ′ : = − p , r : = p ′ − = − p . We note that r ≥
2. Because 1 ≤ p ′ ≤ ≤ q and F tx [ h ]( τ, ξ ) = A ( τ, ξ ) F t [ F ]( τ ), we have k h k L qx L pt . kF t [ h ] k L qx L p ′ τ . kF t [ h ] k L p ′ τ L qx . N − q kF tx [ h ] k L p ′ τ L ξ . N − q k A k L r τ L ξ kF t [ F ] k L τ . Therefore, we obtain k h k L qx L pt . N − q N − ( − r ) kF t [ F ] k L τ ∼ L − − p N − − q − p k F k L t by (3.13). (cid:3) Remark . If ( p , q ) is admissible, namely, 2 / p + / q = /
2, then we have N p k h k L pt L qx . N p + − q k h k L pt L x . N p + − q k h k L x L pt . N p + − q − − − p k F k L t = N − k F k L t . Proof of Theorem 3.2.
We prove (3.5). Let F ∞ is an extension of F on R such that k F ∞ k L x L t ≤ k F k L x L T and define w ∞ y by w ∞ y ( t , x ) : = √ π Z t ( ˘ P N K )( t − t ′ , x − y )( P N F ∞ )( t ′ , y ) dt ′ , Because P N I [ F ∞ ]( t , x ) = Z R w ∞ y ( t , x ) dy , we have k P N I [ F ∞ ] k X N . Z R k w ∞ y k X N dy . Z R N − k ( P N F ) ∞ ( t , y ) k L t dy = N − k P N F ∞ k L x L t by Lemma 3.4. This implies (3.5). (cid:3) Remark . The estimates in this section also can be obtained if we replace N ∈ N by N ∈ Z . Thecondition T < k · k X N does not contain k · k L x L ∞ T andwe consider the homogeneous norm instead of k · k X s and k · k Y s , then we can remove the conditon T < Multilinear estimates for general nonlinearities
In this section, we first give a proof of a bilinear Strichartz estimate (Theorem 4.2) on the solution spacesgiven in Definition 2.3. Then we show a multilinear estimate (Theorem 4.4) from the solution space tothe auxiliary space given in Definition 2.4.
Lemma 4.1 (Bilinear Strichartz estimates L ( R ) × L ( R ) → L t , x ( R × R )) . Let N , N ∈ N with N ≫ N .Then for any functions f , g satisfying P N f , P N g ∈ L ( R ) , the estimate (cid:13)(cid:13)(cid:13)(cid:13)(cid:16) P N e it ∂ x f (cid:17) (cid:16) P N e it ∂ x g (cid:17)(cid:13)(cid:13)(cid:13)(cid:13) L t , x ( R × R ) ≤ CN − (cid:13)(cid:13)(cid:13) P N f (cid:13)(cid:13)(cid:13) L (cid:13)(cid:13)(cid:13) P N g (cid:13)(cid:13)(cid:13) L (4.1) holds, where C is a positive constant independent of N , N , f , g. This lemma can be proved in the almost similar manner as the proof of Lemma 3.4 in [ ] (see alsoRemark 6.2 below).The following bilinear Strichartz type estimate is useful to constract low regular solutions. Theorem 4.2 (Bilinear Strichartz estimate on X N × X N ) . Let N , N ∈ Z and u ∈ X N , u ∈ X N . IfN ≫ N , then the estimate k P N u P N u k L t , x . N − k P N u k X N k P N u k X N (4.2) holds, where the implicit constant is independnet of N , N , u , u .Proof. We put u j , N j : = P N j u j and F j : = ( i ∂ t + ∂ x ) u j for j = ,
2. It su ffi ces to show that k u , N u , N k L t , x . N − (cid:16) k u , N (0) k L x + k F k Y N (cid:17) (cid:16) k u , N (0) k L x + k F k Y N (cid:17) . This follows from the following estimates. k u , N u , N k L t , x . N − (cid:16) k u , N (0) k L x + k F k L t L x (cid:17) (cid:16) k u , N (0) k L x + k F k L t L x (cid:17) , (4.3) k u , N u , N k L t , x . N − (cid:16) k u , N (0) k L x + k F k L t L x (cid:17) (cid:18) k u , N (0) k L x + N − k F k L x L t (cid:19) , (4.4) k u , N u , N k L t , x . N − (cid:18) k u , N (0) k L x + N − k F k L x L t (cid:19) (cid:16) k u , N (0) k L x + k F k L t L x (cid:17) , (4.5) k u , N u , N k L t , x . N − (cid:18) k u , N (0) k L x + N − k F k L x L t (cid:19) (cid:18) k u , N (0) k L x + N − k F k L x L t (cid:19) . (4.6)We prove only (4.6) because the other estimates can be obtained by the similar or simpler way. We notethat u j , N j ( t ) = e it ∂ x u j , N j (0) − i Z t e i ( t − t ′ ) ∂ x P N j F j ( t ′ ) dt ′ = : A j + B j . To obtain (4.6), we prove the followings. k A A k L t , x . N − k u , N (0) k L x k u , N (0) k L x , (4.7) k A B k L t , x . N − N − k u , N (0) k L x k F k L x L t , (4.8) k B A k L t , x . N − k F k L x L t k u , N (0) k L x , (4.9) k B B k L t , x . N − N − k F k L x L t k F k L x L t . (4.10)224.7) is obtained by (4.1).Now we prove (4.8) and (4.9). By Proposition 3.5, we have B j = − Z R e it ∂ x L v j , y ( x ) dy + Z R ( P < N j / ( −∞ , )( x )( P + e it ∂ x v j , y )( x ) dy − Z R ( P < N j / [0 , ∞ ) )( x )( P − e it ∂ x v j , y )( x ) dy + Z R h j , y ( t , x ) dy , where v j , y = F − ξ [ ψ N ( ξ ) F t [ F j ( t , y )]( ξ )], L v j , y = F − ξ [ ψ N ( ξ ) F t [ ( −∞ , ( t ) F j ( t , y )]( ξ )] and h j , y satisfies k h j , y k L qx L pt . N − − q − p j k F j ( t , y ) k L t . (4.11)We note that k v j , y ( x ) k L x . N − j k F j ( t , y ) k L t , kL v j , y ( x ) k L x . N − j k F j ( t , y ) k L t . (4.12)Furthermore, for any g ∈ L ( R ), it holds that k ( P < N j / ( −∞ , )( x ) g ( t , x ) k L t , x . k g k L t , x . (4.13)Indeed, if χ N j is defined by P < N j / f = χ N j ∗ f , then we have k ( P < N j / ( −∞ , )( x ) g ( t , x ) k L t , x = k ( χ N j ∗ ( −∞ , )( x ) g ( t , x ) k L tx . Z R | χ N j ( z ) |k ( −∞ , ( x − z ) g ( t , x ) k L tx dz . k g k L t , x because χ N j ( x ) = F − ξ [ ϕ (2 N − j ξ )]( x ), where ϕ is defined in (2.6). By the same way, we obtain k ( P < N j / [0 , ∞ ) )( x ) g ( t , x ) k L t , x . k g k L t , x . (4.14)Therefore, we have k A B k L t , x . Z R k e it ∂ x u , N (0) e it ∂ x L v , y k L t , x dy + Z R k e it ∂ x u , N (0) e it ∂ x P + v , y k L t , x dy + Z R k e it ∂ x u , N (0) e it ∂ x P − v , y k L t , x dy + Z R k e it ∂ x u , N (0) h , y k L t , x dy = : I + II + III + IV . By (4.1) and (4.12), we obtain I + II + III . Z R N − k u , N (0) k L x (cid:16) kL v , y k L x + k v , y k L x (cid:17) dy . Z R N − N − k u , N (0) k L x k F ( t , y ) k L t dy = N − N − k u , N (0) k L x k F k L x L t . q , p ) = (2 , ∞ ), we obtain IV . Z R k e it ∂ x u , N (0) k L ∞ x L t k h , y k L x L ∞ t dy . Z R N − k u , N (0) k L x N − k F ( t , y ) k L t dy = N − N − k u , N (0) k L x k F k L x L t . Therefore, we get (4.8). By the same way, we also get (4.9).Finally, we prove (4.10). By the same argument as above, k B B k L tx is controlled by the summationof " R k e it ∂ x e v , y e it ∂ x e v , y k L t , x dy dy , " R k e it ∂ x e v , y h , y k L t , x dy dy , " R k h , y e it ∂ x e v , y k L t , x dy dy , " R k h , y h , y k L t , x dy dy , where e v j , y ∈ {L v j , y , P + v j , y , P − v j , y } . By (4.1) and (4.12), we obtain " R k e it ∂ x e v , y e it ∂ x e v , y k L t , x dy dy . " R N − k e v , y k L x k e v , y k L x dy dy . " R N − N − k F ( t , y ) k L t N − k F ( t , y ) k L t dy dy = N − N − k F k L x L t k F k L x L t . By the H ¨older inequality, Lemma 2.4, (4.12), and (4.11) with ( q , p ) = (2 , ∞ ), we obtain " R k e it ∂ x e v , y h , y k L t , x dy dy . " R k e it ∂ x e v , y k L ∞ x L t k h , y k L x L ∞ t dy dy . " R N − k e v , y k L x N − k F ( t , y ) k L t dy dy . " R N − k F ( t , y ) k L t N − k F ( t , y ) k L t dy dy = N − N − k F k L x L t k F k L x L t . By the H ¨older inequality and (4.11) with ( q , p ) = ( ∞ , , ∞ ), we obtain " R k h , y h , y k L t , x dy dy . " R k h , y k L ∞ x L t k h , y k L x L ∞ t dy dy . " R N − k F ( t , y ) k L t N − k F ( t , y ) k L t dy dy = N − N − k F k L x L t k F k L x L t . Therefore, we get (4.10). (cid:3)
Corollary 4.3.
Let T > , N , N ∈ Z and u ∈ X N ( T ) , u ∈ X N ( T ) . If N ≫ N , then the estimate k P N u P N u k L T L x . T θ N − + θ k P N u k X N ( T ) k P N u k X N ( T ) (4.15) holds for any θ ∈ [0 , , where the implicit constant is independent of T , N , N , u , u . roof. By the H ¨older inequality and the definition of X N -norm, we have k P N u P N u k L T L x ≤ k ( − T , T ) ( t ) k L t k P N u k L t L ∞ x k P N u k L ∞ t L x . T N − k P N u k X N k P N u k X N . Therefore, by the interpolation between this estimate and (4.2), we obtain (4.15). (cid:3)
Remark . The estimates (4.1), (4.2), and (4.15) also holds if we replace P N by P ≤ . Theorem 4.4 (Multilinear estimate) . Let m ≥ , γ ∈ { , , } , < T < . Sets = s ( γ, m ) : = γ − , m = , γ − , m = , s c + ǫ, m ≥ for γ ∈ { , } and s = s (3 , m ) : = ( , m = , , m ≥ , where ǫ > is an arbitrary positive number. If s ≥ max { s , } , then for any u , · · · , u m ∈ X s ( T ) and themulti-index α = ( α , · · · , α m ) ∈ ( N ∪ { } ) m with | α | = γ , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = ∂ α i x e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y s ( T ) . T δ m Y i = k u i k X s ( T ) (4.16) for some δ > if γ ∈ { , } and δ = if γ = , where e u i ∈ { u i , u i } . The implicit constant depends only onm , s, s . This multilinear estimate will be used to prove Theorem 1.2 and 1.3 (see the proof of Theorem 8.1and Remark 8.1).
Proof.
Let m ≥ s ≥ s , 0 < T <
1, and u i ∈ X s ( T ) ( i = , · · · , m ). We write P ≤ = P . For i = , · · · , m and N i ∈ N ∪{ } , we set c i , N i : = N si k P N i u i k X Ni ( T ) . We define I : = X N ∈ N ∪{ } N s (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N m Y i = P e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( T ) , I k : = X N ∈ N ∪{ } N s (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N X ( N , ··· , N m ) ∈ Φ k N ≫ N γ m Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( T ) ( k = , · · · , m ) , where Φ k : = { ( N , · · · , N m ) | N ≥ · · · ≥ N m , N ∼ · · · ∼ N k ≫ N k + } and N m + : =
1. We will show I k . T δ m Y i = X N i c i , N i (4.17)25or some δ ≥
0. For k =
0, by the H ¨older inequality and the Bernstein inequality, we have I . (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N . (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L T L x ≤ T k P u k L ∞ t L x m Y i = k P u i k L ∞ t , x . T m Y i = c i , . Therefore, it su ffi ces to show N s N γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N m Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( T ) . T δ N − ǫ m Y i = c i , N i (4.18)when ( N , · · · , N m ) ∈ Φ and X N . N N s N γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N m Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( T ) . T δ N − ǫ k + m Y i = c i , N i (4.19)when ( N , · · · , N m ) ∈ Φ k with k = , · · · , m . for some ǫ >
0. Indeed, if (4.18) and (4.19) holds, then bythe Cauchy-Schwarz inequality for dyadic summation, we have I . X N X ( N , ··· , N m ) ∈ Φ k N ≫ N s N γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N m Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( T ) . X N X N ∼ N X N ≥···≥ N m T δ N − ǫ m Y i = c i , N i . T δ X N X N ∼ N c , N m Y i = X N i ≥ N − ǫ m − i X N i c i , N i . T δ m Y i = X N i c i , N i and I k . X ( N , ··· , N m ) ∈ Φ k N ≫ X N . N N s N γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N m Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( T ) . X ( N , ··· , N m ) ∈ Φ k T δ N − ǫ k + m Y i = c i , N i ≤ T δ X N ∼···∼ N k k Y i = c i , N i m Y i = k + X N i ≥ N − ǫ m − k i X N i c i , N i . T δ m Y i = X N i c i , N i for k = , · · · , m . 26ow, we prove (4.18) and (4.19). Case 1 : m = k = N ∼ N ≫ N ).By the definition of the norm k · k Y N ( T ) , the H ¨older inequality and (4.15), we have N s N γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( T ) ≤ N s − N γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L T . N s − + γ k P N u P N u k L x , T k P N u k L x L ∞ T . T θ N s − + γ + θ N + ǫ Y i = k P N i u i k X Ni ( T ) ∼ T θ N − (3 − γ − θ )1 N − s N − s + + ǫ Y i = c i , N i . T θ N − s − − γ − θ + + ǫ N − s − − γ − θ + + ǫ Y i = c i , N i . T θ N − ǫ Y i = c i , N i for 0 ≤ θ ≤ min { , − γ } , ǫ >
0, and s ≥ γ − + θ + ǫ . We choose θ and ǫ as 3 ǫ + θ ≤
1. In particular, wehave to choose θ = γ = k = N ∼ N ≫ N )By the definition of the norm k · k Y N ( T ) , the H ¨older inequality and (4.15), we have X N . N N s N γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( T ) ≤ N s + γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L T L x ≤ N s + γ T k P N u k L T L ∞ x k P N u P N u k L T , x . T N s + γ − N − Y i = k P N i u i k X Ni ( T ) ∼ T N − ( s − γ + N − s Y i = c i , N i . T N − ( s − γ − )1 N − s − − γ Y i = c i , N i . T N − ǫ Y i = c i , N i for ǫ > s ≥ max { γ − , − − γ + ǫ, } . We note that γ − ≥ − − γ + ǫ if we choose ǫ as ǫ ≤ k = N ∼ N ∼ N ≫ k · k Y N ( T ) , the H ¨older inequality, we have X N . N N s N γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( T ) ≤ N s + γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L T L x ≤ N s + γ T k P N u k L T L ∞ x k P N u k L T L ∞ x k P N u k L ∞ T L x . T N s + γ − N − Y i = k P N i u i k X Ni ( T ) ∼ T N − s + γ −
11 3 Y i = c i , N i . T Y i = c i , N i for s ≥ max { γ − , } . Case 2 : m ≥ s < because the case s ≥ is simpler. By the Bernstein inequality, wehave m Y i = k P N i u i k L ∞ x , T . m Y i = N i k P N i u i k L ∞ T L x . N ( m − − s )5 m Y i = c i , N i (4.20)for m ≥ s < . We assume Q mi = k P N i u i k L ∞ x , T = Q mi = c i , N i = m =
4. Then (4.20) is true for m ≥ k = N ∼ N ≫ N )By the definition of the norm k · k Y N ( T ) , the H ¨older inequality, (4.15), and (4.20) we have N s N γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N m Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( T ) ≤ N s − N γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L T ≤ N s − + γ k P N u P N u k L x , T k P N u k L x L ∞ T k P N u k L x L ∞ T m Y i = k P N i u i k L ∞ x , T . T θ N s − + γ + θ N N Y i = k P N i u i k X Ni ( T ) m Y i = k P N i u i k L ∞ x , T . T θ N − (3 − γ − θ )1 N − s N − s + N − s + N ( m − − s )5 m Y i = c i , N i . T θ Y i = N − s + − − γ − θ + m − ( − s ) i m Y i = c i , N i . T θ N − ǫ m Y i = c i , N i ≤ θ ≤ min { , − γ } , ǫ >
0, and s ≥ s c + θ + ǫ m − . We choose θ and ǫ as θ + ǫ ≤ ( m − s − s c ) for s > s c . In particular, we have to choose θ = γ = k = N ∼ N ≫ N )By the definition of the norm k · k Y N ( T ) , the H ¨older inequality, the Sobolev inequality, (4.15), and(4.20), we have X N . N N s N γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N m Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( T ) ≤ N s N γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L T L x ≤ N s + γ k P N u P N u k L T , x k P N u P N u k L T L ∞ x m Y i = k P N i u i k L ∞ T , x ≤ N s + γ N k P N u P N u k L T , x k P N u P N u k L T , x m Y i = k P N i u i k L ∞ x , T . T θ N s + γ − + θ N −
12 4 Y i = k P N i u i k X Ni ( T ) m Y i = k P N i u i k L ∞ x , T . T θ N − ( s − γ + − θ )1 N − s N − s N ( m − − s )5 m Y i = c i , N i . T θ Y i = N − s − ( s − γ + − θ ) + m − ( − s ) i m Y i = c i , N i . T θ N − ǫ m Y i = c i , N i for 0 ≤ θ ≤ ǫ >
0, and s ≥ max { γ − + θ, s c + θ + ǫ m − , } . We choose θ and ǫ as θ + ǫ ≤ ( m − s − s c )for s > s c .(iii) For k = N ∼ N ∼ N ≫ N ) 29y the definition of the norm k · k Y N ( T ) , the H ¨older inequality, (4.15), and (4.20), we have X N . N N s N γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N m Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( T ) ≤ N s + γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L T L x ≤ N s + γ k P N u k L T L ∞ x k P N u k L T L ∞ x k P N u P N u k L T , x m Y i = k P N i u i k L ∞ x , T . T θ N s + γ − N − N − + θ Y i = k P N i u i k X Ni ( T ) m Y i = k P N i u i k L ∞ x , T . T θ N − (2 s − γ + − θ )1 N − s N ( m − − s )5 m Y i = c i , N i . T θ N − s − (2 s − γ + − θ ) + ( m − − s )4 m Y i = c i , N i . T θ N − ǫ m Y i = c i , N i for 0 ≤ θ ≤ ǫ >
0, and s ≥ max { ( γ − + θ ) , s c + θ + ǫ m − , } . We choose θ and ǫ as θ + ǫ ≤ ( m − s − s c )for s > s c .(iv) For k = N ∼ N ∼ N ∼ N ≫ N )If m =
4, by the definition of the norm k · k Y N ( T ) , H ¨older inequality, we have X N . N N s N γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N m Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( T ) ≤ N s + γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L T L x ≤ N s + γ T k P N u k L T L ∞ x k P N u k L T L ∞ x k P N u k L T L ∞ x k P N u k L ∞ T L x . T N s + γ − N − N − Y i = k P N i u i k X Ni ( T ) ∼ T N − s + γ − m Y i = c i , N i . T Y i = c i , N i for s ≥ max { γ − , } . 30f m ≥
5, by the definition of the norm k · k Y N ( T ) , H ¨older inequality, we have X N . N N s N γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N m Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( T ) ≤ N s + γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L T L x ≤ T θ N s + γ k P N u k L − θ T L ∞ x k P N u k L T L ∞ x k P N u k L T L ∞ x k P N u k L T L ∞ x k P N u k L ∞ T L x m Y i = k P N i u i k L ∞ x , T , where we assumed Q mi = k P N i u i k L ∞ x , T = m =
5. We note that m Y i = k P N i u i k L ∞ x , T . N ( m − − s )5 m Y i = c i , N i for m ≥
6. By the interpolation between k P N u k L T L ∞ x . N − k P N u k X N ( T ) and k P N u k L ∞ T L x . k P N u k X N ( T ) , we have k P N u k L − θ T L θ x . N − − θ k P N u k X N ( T ) for 0 ≤ θ ≤
1. This and the Sobolev inequality imply k P N u k L − θ T L ∞ x . N − + θ k P N u k X N ( T ) . Therefore, we obtain X N . N N s N γ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N m Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( T ) . T θ N s + γ − + θ N − N − N − Y i = k P N i u i k X Ni ( T ) m Y i = k P N i u i k L ∞ x , T . T θ N − s + γ − + θ N − s + ( m − − s )5 m Y i = c i , N i ∼ T θ N − ( m − s + m − − (4 − γ ) + θ m Y i = c i , N i . T θ N − ǫ Y i = c i , N i for 0 ≤ θ ≤ ǫ >
0, and s ≥ max { γ − + θ , s c + θ + ǫ m − , } . We choose θ and ǫ as θ + ǫ ≤ ( m − s − s c ) for s > s c . (cid:3) emark . For m =
3, if we do not use the bilinear Strichartz estimate, then the worst case is not k = k = N ∼ N ), we have N s − N (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L T ≤ N s − N k P N u k L ∞ x L T k P N u k L x L ∞ T k P N u k L x L ∞ T . N s − N − s + N − s + + ǫ N − s + + ǫ Y i = c i , N i . N − s + + ǫ N − s + + ǫ Y i = c i , N i when γ =
3. This guarantees the trilinear estimate only for s >
1. Therefore, by using the bilinearStrichartz estimate, we can improve the result in [ ]. Remark . When γ =
3, we cannot obtain (4.16) with δ >
0. This is the reason why large data cannotbe treated in Theorem 1.1 and 1.2.
Remark . We can also obtain (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y s ( T ) . T δ m Y i = k u i k X s ( T ) (4.21)for the same s and δ as in Theorem 4.4 because (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y s ( T ) . (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P ≤ e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y s ( T ) + m X k = (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:16) ∂ γ x P > e u k (cid:17) Y ≤ i ≤ mi , k e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y s ( T ) . We can treat the first term of R.H.S by the same way as the estimate for I and the second term of R.H.Sby the same way as the estimates for I k ( k = , · · · , m ). m ≥ with γ = and m ≥ with γ = In this section, we prove multilinear estimates at the scaling critical regularity s = s c ( γ, m ) in the cases m ≥ γ = m ≥ γ =
2. To treat these cases, we define the function spaces X N and Z N equipped with the norms k u k X N : = k u k L ∞ t L x + N − k u k L x L ∞ t + N k u k L ∞ x L t , k F k Z N : = N − k F k L x L t , (5.1)instead of Definition 2.3. Furthermore, we define ˙ X s , X s , ˙ Z s , and Z s by k u k ˙ X s : = X N ∈ Z N s k P N u k X N , k u k X s : = k u k ˙ X + k u k ˙ X s , k F k ˙ Z s : = X N ∈ Z N s k P N F k Z N , k F k Z s : = k F k ˙ Z + k F k ˙ Z s .
32e can see that (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x u (cid:13)(cid:13)(cid:13)(cid:13) X s . k u k H s , (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)Z t e i ( t − t ′ ) ∂ x F ( t ′ ) dt ′ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) X s . k F k Z s (5.2)by the same argument as in the previous sections. Theorem 5.1 (Multilinear estimates) . Let m ≥ . Sets c = s c ( m ) : = − m − . (i) For any u , · · · , u m ∈ ˙ X s c , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Z sc . m Y i = k u i k ˙ X sc , (5.3) where e u i ∈ { u i , u i } . The implicit constant depends only on m. (ii) If s ≥ s c , For any u , · · · , u m ∈ X s , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Z s . m Y i = k u i k X s , (5.4) where e u i ∈ { u i , u i } . The implicit constant depends only on m and s.Proof. Let s ≥
0. We assume u i ∈ ˙ X s ∩ ˙ X s c and k u i k ˙ X sc . i = , · · · , m ). We first prove (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Z s . m X i = k u i k ˙ X s . (5.5)This implies (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Z s . m X i = k u i k ˙ X s Y ≤ k ≤ mk , i k u k k ˙ X sc (5.6)for any u i ∈ ˙ X s ∩ ˙ X s c because k u i k ˙ X sc = k u i k ˙ X sc . For i = , · · · , m and N i ∈ Z , we set c , N : = N s k P N u k X N , c i , N i : = N s c i k P N i u i k X Ni ( i = , · · · , m ). We define I = X N ∈ Z N s + X N ≥···≥ N m N ∼ N (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Z N , I = X N ∈ Z N s + X N ≥···≥ N m N ≫ N (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Z N , Then, we have (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Z s . I + I .
33t su ffi ces to show that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L t . N − s − Q mi = N − s c i Q i = N s c − i m Y i = c i , N i . (5.7)Indeed, if (5.7) holds, then we have I . X N N s + X N ∼ N X N ≥···≥ N m (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Z N . X N X N ∼ N X N ≥···≥ N m N s + (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L t . X N X N ∼ N X N ≥···≥ N m N s + N − s − Q mi = N − s c i Q i = N s c − i m Y i = c i , N i . X N X N ∼ N c , N X N ≥···≥ N m Q mi = N − s c i Q i = N s c − i c , N + m Y i = c i , N i . X N X N ∼ N c , N X N c , N + m Y i = X N i c i , N i . X N c , N X N c , N + m Y i = X N i c i , N i by the Young inequality and I ≤ X N X N ∼ N X N ≪ N X N ≥···≥ N m N s + (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Z N ≤ X N X N ∼ N X N ≪ N X N ≥···≥ N m N s + (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L t . X N X N ∼ N X N ≪ N NN ! s + c , N c , N X N ≥···≥ N m Q mi = N − s c i Q i = N s c − i m Y i = c i , N i . m Y i = X N i c i , N i by the Cauchy-Schwarz inequality because ( m − − s c ) = s c − ) >
0. Therefore, we obtain (5.5)since X N i c i , N i = k u i k X sc . i = , · · · m . Now, we prove (5.7). By the H ¨older inequality and the Bernstein inequality, we have (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L t ≤ k P N u k L ∞ x L t Y i = k P N i u i k L x L ∞ t m Y i = k P N i u i k L ∞ x , t . N − s − Q mi = N − s c i Q i = N s c − i m Y i = c i , N i . The estimate (5.3) follows from (5.6) with s = s c . Next, we prove (5.4). By (5.6) with s =
0, we have (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Z . m X i = k u i k ˙ X m Y k , i k u k k ˙ X sc . m Y i = k u i k X s for s ≥ s c . While by (5.6) with s ≥ s c , we have (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Z s . m X i = k u i k ˙ X s m Y k , i k u k k ˙ X sc . m Y i = k u i k X s . Therefore, we obtain (5.4). (cid:3)
Remark . We cannot obtain (5.3) and (5.4) for m = N s c − i ( i = , · · · ,
5) are vanished.
Theorem 5.2 (Multilinear estimates) . Let m ≥ . Sets c = s c ( m ) : = − m − . (i) For any u , · · · , u m ∈ ˙ X s c , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Z sc . m Y i = k u i k ˙ X sc , (5.8) where e u i ∈ { u i , u i } . The implicit constant depends only on m. (ii) If s ≥ s c , then for any u , · · · , u m ∈ X s , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Z s . m Y i = k u i k X s , (5.9) where e u i ∈ { u i , u i } . The implicit constant depends only on m and s.Proof. Let c i , N i ( i = , · · · , m ) are defined in the proof of Theorem 5.1. Then, we have (5.7) by the sameway, where we assumed m Y i = N − s c i = m =
5. Therefore, we obtain N s + − (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L t . NN ! s + Q i = N − s c i Q mi = N − s c i N m Y i = c i , N i , (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Z s . m X i = k u i k ˙ X s Y ≤ k ≤ mk , i k u k k ˙ X sc . for s ≥ s c by the same argument as in the proof of Theorem 5.1 because 1 = − s c ) + ( m − − s c ) > I . m Y i = X N i c i . N i and need not the renormalize argument for k u i k ˙ X sc ( i = , · · · , m ). (cid:3) To treat the large data for the scaling critical case, we give the following.
Theorem 5.3 (Multilinear estimates) . Let m ≥ , < T < , and M ∈ N . Sets c = s c ( m ) : = − m − . (i) For any u , · · · , u m ∈ ˙ X s c , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i − m Y i = P ≥ M e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Z sc ( T ) . T δ M κ m Y i = k u i k ˙ X sc ( T ) (5.10) for some δ > and κ > depending only on m. (ii) For any u , · · · , u m ∈ X s c , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i − m Y i = P ≥ M e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Z sc ( T ) . T δ M κ m Y i = k u i k X sc ( T ) (5.11) for some δ > and κ > depending only on m.Proof. By the symmetry, we can assume N ≥ · · · ≥ N m and N m < M . Let s ≥ c , N : = N s k P N u k X N ( T ) , c i , N i : = N s c i k P N i u i k X Ni ( T ) ( i = , · · · , m − c m , N m : = N s c m k P < M P N m u m k X Nm ( T ) . We first assume thecase m ≥ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L T ≤ T − p (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L pT ≤ T − p k P N u k L ∞ x L pT Y i = k P N i u i k L x L ∞ T m − Y i = k P N i u i k L ∞ x L ∞ T k P < M P N m u m k L ∞ x L ∞ T for p >
2, where we assumed m − Y i = k P N i u i k L ∞ x L ∞ T = m =
6. By the interpolation between the two estimates k P N u k L ∞ x L T . N − k P N u k X N ( T ) , k P N u k L ∞ x L ∞ T . N k P N u k X N ( T ) ,
36e obtain k P N u k L ∞ x L pT . N − p k P N u k X N ( T ) . Therefore, by the Bernstein inequality, it holds that N s + − (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L T . T − p NN ! s + (cid:18)Q i = N − s c i Q m − i = N − s c i (cid:19) N − s c m N p − m Y i = c i , N i . We choose p > p = m − m − . Then, 4( − s c ) + ( m − − s c ) = p − >
0. Therefore, we obtain (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i − m Y i = P ≥ M e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Z s . T − p M − s c m X i = k u i k ˙ X s Y ≤ k ≤ mk , i k u k k ˙ X sc for s ≥ s c by the same argument as in the proof of Theorem 5.2 because N m < M and − s c > m =
5. Then, s c =
0. Therefore, it holds that N s + − (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L T ≤ T − p N s + − (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L pT ≤ T − p N s + − k P N u k L ∞ x L pT Y i = k P N i u i k L x L ∞ T k P < M P N u k L x L ∞ T . T − p NN ! s + (cid:18)Q i = N i (cid:19) N N p − m Y i = c i , N i and obtain (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x Y i = e u i − m Y i = P ≥ M e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Z s . T − p M X i = k u i k ˙ X s Y ≤ k ≤ mk , i k u k k ˙ X sc for s ≥ s c by choosing p > p = because = · − > N < M . (cid:3) m = with γ = and m = with γ = In this section, we study the Cauchy problem (1.1) with a specific scaling invariant nonlinearity (1.12) atthe scaling critical regularity s = s c ( γ, m ) in the cases ( γ, m ) = (3 ,
5) and (2 , N ∈ Z . To do so,we use the following more sophisticated Banach spaces X N and Y N endowed with the norms k u k X N : = k u (0) k L x + (cid:13)(cid:13)(cid:13)(cid:13)(cid:16) i ∂ t + ∂ x (cid:17) u (cid:13)(cid:13)(cid:13)(cid:13) Y N , k F k Y N : = inf (cid:26) k F k Z N + k F k ˙ X , − , (cid:12)(cid:12)(cid:12)(cid:12) F = F + F (cid:27) , k F k Z N : = N − k F k L x L t , (6.1)37espectively, instead of Definition 2.3. Here k · k ˙ X , b , q denotes the Besov type Fourier restriction normgiven by k u k ˙ X , b , q : = X A ∈ Z A bq k Q A u k qL t , x q , if 1 ≤ q < ∞ , sup A ∈ Z A b k Q A u k L t , x , if q = ∞ , where Q A with A ∈ Z denotes the Littlewood-Paley projection defined by F t , x [ Q A u ]( τ, ξ ) : = ψ A (cid:16) τ − ξ (cid:17) F t , x [ u ]( τ, ξ ) . We note that k ( i ∂ t + ∂ x ) u k ˙ X , − , = k u k ˙ X , , . Furthermore, we define the Banach spaces ˙ X s , X s , ˙ Y s , and Y s endowed with the norms k u k ˙ X s : = X N ∈ Z N s k P N u k X N , k u k X s : = k u k ˙ X + k u k ˙ X s , k F k ˙ Y s : = X N ∈ Z N s k P N u k Y N , k F k Y s : = k F k ˙ Y + k F k ˙ Y s , where s ∈ R . We can see that by Lemma 2.2, the estimates (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x f (cid:13)(cid:13)(cid:13)(cid:13) ˙ X s . k f k ˙ H s , (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)Z t e i ( t − t ′ ) ∂ x F ( t ′ ) dt ′ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ X s . k F k ˙ Y s (6.2)hold for any f ∈ ˙ H s ( R ) and F ∈ ˙ Y s . Remark . The similar norms as k · k X N and k · k Y N are used by Tao ([ ]) to prove the well-posednessof the quartic generalized Korteweg-de Vries at the scaling critical regularity ˙ H − (see [ , Section2]). In the proof of [ , Theorem 1.3], Pornnopparath used such norms to prove the small data globalwell-posedness of the quintic derivative nonlinear Schr¨odinger equations at the scaling critical regularity˙ H . Lemma 6.1 (Extension lemma) . Let S be any space-time Banach space that satisfies k g ( t ) F ( t , x ) k S . k g k L ∞ t k F ( t , x ) k S for any F ∈ S and g ∈ L ∞ t . Let T : L ( R ) × · · · × L ( R ) → S be a spatial multilinear operator satisfying (cid:13)(cid:13)(cid:13)(cid:13) T ( e it ∂ x u , , · · · , e it ∂ x u k , ) (cid:13)(cid:13)(cid:13)(cid:13) S . k Y j = k u j , k L x for any u , , · · · , u k , ∈ L ( R ) with k ∈ N . Then it holds that kT ( u , · · · , u k ) k S . k Y j = (cid:18) k u j (0) k L x + (cid:13)(cid:13)(cid:13) ( i ∂ t + ∂ x ) u j (cid:13)(cid:13)(cid:13) ˙ X , − , (cid:19) for any u , · · · u k ∈ ˙ X , , . ].By Lemma 6.1, the linear estimates (Lemma 2.3, 2.4, 2.6), and the same argument as in the proofof Theorem 3.2 (See also Remark 3.2), we obtain the following. Proposition 6.2.
Let N ∈ Z . It holds that k P N u k L ∞ t L x + N k P N u k L t L ∞ x + N k P N u k L ∞ x L t + N − k P N u k L x L ∞ x . k P N u k X N for any u ∈ X N . Furthermore, we get the following.
Proposition 6.3.
Let N ∈ Z . It holds that k P N u k ˙ X , , ∞ . k P N u k X N for any u ∈ X N .Proof. We put F = F + F : = (cid:16) i ∂ t + ∂ x (cid:17) u , where P N F ∈ Z N and P N F ∈ ˙ X , − , . Then, we have u ( t ) = e it ∂ x u − i Z t e i ( t − t ′ ) ∂ x F ( t ′ ) dt ′ − i Z t e i ( t − t ′ ) ∂ x F ( t ′ ) dt ′ = e it ∂ x u − i X k = Z t −∞ e i ( t − t ′ ) ∂ x F k ( t ′ ) dt ′ − e it ∂ x Z −∞ e − it ′ ∂ x F k ( t ′ ) dt ′ ! . Because the space-time Fourier transform of the linear solution is supported in { ( τ, ξ ) | τ − ξ = } , wehave (cid:13)(cid:13)(cid:13)(cid:13) P N e it ∂ x u (cid:13)(cid:13)(cid:13)(cid:13) ˙ X , , ∞ = , (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N e it ∂ x Z −∞ e − it ′ ∂ x F k ( t ′ ) dt ′ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ X , , ∞ = . Therefore, it su ffi ces to show that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N Z t −∞ e i ( t − t ′ ) ∂ x F ( t ′ ) dt ′ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ X , , ∞ . k P N F k Z N (6.3)and (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N Z t −∞ e i ( t − t ′ ) ∂ x F ( t ′ ) dt ′ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ X , , ∞ . k P N F k ˙ X , − , . (6.4)We first prove (6.3). For y ∈ R , we putw y ( t , x ) : = √ π Z t −∞ ( P N K )( t − t ′ , x − y ) F ( t ′ , y ) dt ′ , where K is defined by (2.4). Then, we have P N Z t −∞ e i ( t − t ′ ) ∂ x F ( t ′ ) dt ′ = Z R w y ( t , x ) dy (6.5)and F t , x [ w y ]( τ, ξ ) = √ π i e − i ξ y ψ N ( ξ ) τ − ξ − i F t [ F ]( τ, y ) . A ∈ Z , by using the variable transform ξ ω as ω = τ − ξ , we have k Q A w y k L t , x ∼ k ψ A ( τ − ξ ) F t , x [ w y ]( τ, ξ ) k L τ,ξ ∼ Z R Z R ψ N ( ξ ) ψ A ( τ − ξ )( τ − ξ ) |F t [ F ]( τ, y ) | d ξ d τ ! . Z R N − Z R ψ A ( ω ) ω |F t [ F ]( τ, y ) | d ω d τ ! . N − A − Z R |F t [ F ]( τ, y ) | d τ ! . N − A − k F ( t , y ) k L t . This and (6.5) imply (6.3).Next, we prove (6.4). By the direct calculation, we have F t , x " P N Z t −∞ e i ( t − t ′ ) ∂ x F ( t ′ ) dt ′ ( τ, ξ ) = i ψ N ( ξ ) τ − ξ − i F t , x [ F ]( τ, ξ ) . Therefore, for A ∈ Z , we obtain (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Q A P N Z t −∞ e i ( t − t ′ ) ∂ x F ( t ′ ) dt ′ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L t , x . A − k Q L P N F k L t , x . This and the embedding ˙ X , − , ֒ → ˙ X , − , ∞ imply (6.4). (cid:3) For L ∈ Z , we define the bilinear operators R ± as R ± L ( f , g ) : = Z R Z R e i ξ x ψ L ( ξ ± ( ξ − ξ )) b f ( ξ ) b g ( ξ − ξ ) d ξ d ξ. (6.6) Lemma 6.4 (Refined bilinear Strichartz estimate L ( R ) × L ( R ) → L t , x ( R × R )) . Let L , N , N ∈ Z withN ≥ N . Then for any functions f , g satisfying P N f , P N g ∈ L ( R ) , the estimates (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) R + L (cid:18) P N e it ∂ x f , P N e it ∂ x g (cid:19)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L t , x ( R × R ) ≤ CN − L − (cid:13)(cid:13)(cid:13) P N f (cid:13)(cid:13)(cid:13) L (cid:13)(cid:13)(cid:13) P N g (cid:13)(cid:13)(cid:13) L , (6.7) (cid:13)(cid:13)(cid:13)(cid:13) R − L (cid:16) P N e it ∂ x f , P N e it ∂ x g (cid:17)(cid:13)(cid:13)(cid:13)(cid:13) L t , x ( R × R ) ≤ CN − L − (cid:13)(cid:13)(cid:13) P N f (cid:13)(cid:13)(cid:13) L (cid:13)(cid:13)(cid:13) P N g (cid:13)(cid:13)(cid:13) L (6.8) hold, where C is a positive constant independent of L , N , N , f , g.Proof. We only prove (6.8), since (6.7) can be obtained in the similar manner. By Plancherel’s theorem,the equivalency (cid:13)(cid:13)(cid:13)(cid:13) R − L (cid:16) P N e it ∂ x f , P N e it ∂ x g (cid:17)(cid:13)(cid:13)(cid:13)(cid:13) L t , x ( R × R ) ∼ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)Z R ψ L ( ξ − ( ξ − ξ )) e it ξ ψ N ( ξ ) b f ( ξ ) e it ( ξ − ξ ) ψ N ( ξ − ξ ) b g ( ξ − ξ ) d ξ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L t ,ξ ( R × R ) . ffi ces to show that the estimate I : = (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)Z R " Ω e it ξ b f ( ξ ) e it ( ξ − ξ ) b g ( ξ − ξ ) h ( t , ξ ) d ξ d ξ ! dt (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) . N − L − k ψ N f k L ξ k ψ N g k L ξ k h k L t ,ξ (6.9)holds for any h ∈ L ( R × R ), where Ω = Ω ( L , N , N ) is defined by Ω : = { ( ξ , ξ ) | | ξ | ∼ N , | ξ − ξ | ∼ N , | ξ − ( ξ − ξ ) | ∼ L } . Since the identity Z R e it ( ξ + ( ξ − ξ ) ) h ( t , ξ ) dt = √ π F t [ h ] (cid:16) − ξ − ( ξ − ξ ) , ξ (cid:17) holds for any ξ, ξ ∈ R , by the Cauchy-Schwarz inequality, the estimate I = √ π (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) " Ω b f ( ξ ) b g ( ξ − ξ ) F t [ h ]( − ξ − ( ξ − ξ ) , ξ ) d ξ d ξ (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) . k ψ N f k L ξ k ψ N g k L ξ " Ω |F t [ h ] (cid:16) − ξ − ( ξ − ξ ) , ξ (cid:17) | d ξ d ξ ! holds. We use changing variables with ξ τ as τ = − ξ − ( ξ − ξ ) . Since the relations (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) d τ d ξ (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ∼ (cid:12)(cid:12)(cid:12) ξ − ( ξ − ξ ) (cid:12)(cid:12)(cid:12) ∼ | ξ − ( ξ − ξ ) | max n | ξ | , | ξ − ξ | o ∼ LN hold for any ( ξ , ξ ) ∈ Ω , by Plancherel’s theorem, the estimates " Ω |F t [ h ]( − ξ − ( ξ − ξ ) , ξ ) | d ξ d ξ . N − L − " |F t [ h ]( τ, ξ ) | d τ d ξ ∼ N − L − k h k L t ,ξ hold, which implies (6.9). (cid:3) Remark . When N ≫ N , the relations | ξ | ∼ N and | ξ − ξ | ∼ N imply the equivalency | ξ − ( ξ − ξ ) | ∼ N . Therefore, Lemma 4.1 follows from (6.8) with L = N . Theorem 6.5 (Refined bilinear Strichartz estimate on X N × X N ) . Let L , N , N ∈ Z and u ∈ X N , u ∈ X N . If N ≥ N & L, then the estimates (cid:13)(cid:13)(cid:13)(cid:13) R + L (cid:16) P N u , P N u (cid:17)(cid:13)(cid:13)(cid:13)(cid:13) L t , x . N − L − k P N u k X N k P N u k X N , (6.10) k R − L ( P N u , P N u ) k L t , x . N − L − k P N u k X N k P N u k X N (6.11) hold, where the implicit constants are independnet of L , N , N , u , u .Proof. We only prove (6.11) since (6.10) can be obtained in the similar manner. We set u j , N j : = P N j u j and F j : = (cid:16) i ∂ t + ∂ x (cid:17) u j for j = ,
2. It su ffi ces to show that the estimate k R − L ( u , N , u , N ) k L t , x . N − L − (cid:16) k u , N (0) k L x + k F k Y N (cid:17) (cid:16) k u , N (0) k L x + k F k Y N (cid:17) k R − L ( u , N , u , N ) k L t , x . N − L − (cid:18) k u , N (0) k L x + k F k ˙ X , − , (cid:19) (cid:18) k u , N (0) k L x + k F k ˙ X , − , (cid:19) , (6.12) k R − L ( u , N , u , N ) k L t , x . N − L − (cid:18) k u , N (0) k L x + k F k ˙ X , − , (cid:19) (cid:18) k u , N (0) k L x + N − k F k L x L t (cid:19) , (6.13) k R − L ( u , N , u , N ) k L t , x . N − L − (cid:18) k u , N (0) k L x + N − k F k L x L t (cid:19) (cid:18) k u , N (0) k L x + k F k ˙ X , − , (cid:19) , (6.14) k R − L ( u , N , u , N ) k L t , x . N − L − (cid:18) k u , N (0) k L x + N − k F k L x L t (cid:19) (cid:18) k u , N (0) k L x + N − k F k L x L t (cid:19) . (6.15)We can obtain the estimate (6.15) in the same manner as the proof of (4.6). Indeed, we use (6.23) and(6.24) below instead of (4.13) and (4.14). To obtain (6.12), (6.13), and (6.14), we use Lemma 6.1. Then,we have only to prove (cid:13)(cid:13)(cid:13)(cid:13) R − L (cid:16) e it ∂ x u , N (0) , e it ∂ x u , N (0) (cid:17)(cid:13)(cid:13)(cid:13)(cid:13) L t , x . N − L − k u , N (0) k L x k u , N (0) k L x , (6.16) (cid:13)(cid:13)(cid:13)(cid:13) R − L (cid:16) e it ∂ x u , N (0) , u , N (cid:17)(cid:13)(cid:13)(cid:13)(cid:13) L t , x . N − L − k u , N (0) k L x (cid:18) k u , N (0) k L x + N − k F k L x L t (cid:19) , (6.17) (cid:13)(cid:13)(cid:13)(cid:13) R − L (cid:16) u , N , e it ∂ x u , N (0) (cid:17)(cid:13)(cid:13)(cid:13)(cid:13) L t , x . N − L − (cid:18) k u , N (0) k L x + N − k F k L x L t (cid:19) k u , N (0) k L x . (6.18)The estimate (6.16) is obtained by (6.8). We prove (6.17) only since (6.18) can be shown in the similarmanner. We note that the identity holds u , N ( t ) = e it ∂ x u , N (0) − i Z t e i ( t − t ′ ) ∂ x P N F ( t ′ ) dt ′ = : A + B for any t ∈ R . To obtain (6.17), we prove the following estimates: (cid:13)(cid:13)(cid:13)(cid:13) R − L (cid:16) e it ∂ x u , N (0) , A (cid:17)(cid:13)(cid:13)(cid:13)(cid:13) L t , x . N − L − k u , N (0) k L x k u , N (0) k L x , (6.19) (cid:13)(cid:13)(cid:13)(cid:13) R − L (cid:16) e it ∂ x u , N (0) , B (cid:17)(cid:13)(cid:13)(cid:13)(cid:13) L t , x . N − N − L − k F k L x L t k u , N (0) k L x . (6.20)The estimate (6.19) is obtained by (6.8) because A is a linear solution.Now we prove (6.20). By Proposition 3.5, we have B = − Z R e it ∂ x L v , y ( x ) dy + Z R ( P < L / ( −∞ , )( x )( P + e it ∂ x v , y )( x ) dy − Z R ( P < L / [0 , ∞ ) )( x )( P − e it ∂ x v , y )( x ) dy + Z R h , y ( t , x ) dy , where v , y = F − ξ [ ψ N ( ξ ) F t [ F ( t , y )]( ξ )], L v , y = F − ξ [ ψ N ( ξ ) F t [ ( −∞ , ( t ) F ( t , y )]( ξ )] and h , y satisfies k h , y k L qx L pt . L − − p N − − q − p k F ( t , y ) k L t . (6.21)We note that k v , y ( x ) k L x . N − k F ( t , y ) k L t , kL v , y ( x ) k L x . N − k F ( t , y ) k L t . (6.22)42urthermore, it holds that k R − L ( g ( t , x ) , ( P < L / ( −∞ , )( x ) g ( t , x )) k L tx . k R − L ( g , g ) k L tx . (6.23)Now, we prove (6.23). Because | ξ − ( ξ − ξ ) | ∼ L and | ξ | ≪ L imply | ξ − ( ξ − ξ − ξ ) | ∼ L , we have F x [ R − L ( g ( t , x ) , ( P < L / ( −∞ , )( x ) g ( t , x ))]( ξ ) = Z ψ L ( ξ − ( ξ − ξ )) b g ( t , ξ ) (cid:16) F x [ P < L / ( −∞ , ] ∗ b g ( t ) (cid:17) ( ξ − ξ ) d ξ ∼ Z ψ L ( ξ − ( ξ − ξ − ξ )) b g ( t , ξ ) F x [ P < L / ( −∞ , ]( ξ ) b g ( ξ − ξ − ξ ) d ξ d ξ = Z F x [ P < L / ( −∞ , ]( ξ ) F x [ R − L ( g ( t , x ) , g ( t , x ))]( ξ − ξ ) d ξ = F x [ P < L / ( −∞ , ( x ) R − L ( g ( t , x ) , g ( t , x ))]( ξ ) . Therefore, if χ L is defined by P < L / f = χ L ∗ f , then we have k R − L ( g ( t , x ) , ( P < L / ( −∞ , )( x ) g ( t , x )) k L tx = k P < L / ( −∞ , ( x ) R − L ( g ( t , x ) , g ( t , x )) k L tx = k ( χ N j ∗ ( −∞ , )( x ) R − L ( g ( t , x ) , g ( t , x )) k L tx . Z R | χ N j ( z ) |k ( −∞ , ( x − z ) R − L ( g ( t , x ) , g ( t , x )) k L tx dz . k R − L ( g , g ) k L tx because χ N j ( x ) = F − ξ [ ϕ (2 N − j ξ )]( x ). We also obtain k R − L ( g ( t , x ) , ( P < L / [0 , ∞ ) )( x ) g ( t , x )) k L tx . k R − L ( g , g ) k L tx . (6.24)by the same way. By using (6.23) and (6.24), we have k R − L ( e it ∂ x u , N (0) , B ) k L tx . Z R k R − L ( e it ∂ x u , N (0) , e it ∂ x L v , y ) k L tx dy + Z R k R − L ( e it ∂ x u , N (0) , e it ∂ x P + v , y ) k L tx dy + Z R k R − L ( e it ∂ x u , N (0) , e it ∂ x P − v , y ) k L tx dy + Z R k R − L ( e it ∂ x u , N (0) , h , y ) k L tx dy = : I + II + III + IV . By (6.8) and (6.22), we obtain I + II + III . Z R N − L − k u , N (0) k L x (cid:16) k v , y k L x + kL v , y k L x (cid:17) dy . Z R N − N − k u , N (0) k L x k F ( t , y ) k L t dy = N − N − L − k u , N (0) k L x k F k L x L t . While, by the H ¨older inequality, Lemma 2.4, and (6.21) with ( q , p ) = (2 , ∞ ), we obtain IV . Z R k e it ∂ x u , N (0) k L ∞ x L t k h , y k L x L ∞ t dy . Z R N − k u , N (0) k L x L − N − k F ( t , y ) k L t dy . N − N − L − k u , N (0) k L x k F k L x L t since N ≥ N . Therefore, we get (6.20). (cid:3) orollary 6.6. Let T > , L , N , N ∈ Z , and u ∈ X N , u ∈ X N . If N ≥ N & L, then the estimates (cid:13)(cid:13)(cid:13)(cid:13) R + L (cid:16) P N u , P N u (cid:17)(cid:13)(cid:13)(cid:13)(cid:13) L T L x . T θ N − + θ L − − θ k P N u k X N k P N u k X N , (6.25) k R − L ( P N u , P N u ) k L T L x . T θ N − + θ L − − θ N − L − k P N u k X N k P N u k X N (6.26) hold, where the implicit constants are independnet of T , L , N , N , u , u . Proof is the same as Corollary 4.3.
Theorem 6.7 (Multilinear estimates) . (i) For any u , · · · , u ∈ ˙ X , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Y . Y i = k u i k ˙ X , (6.27) where e u i ∈ { u i , u i } . (ii) If s ≥ , then for any u , · · · , u ∈ X s , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y s . Y i = k u i k X s , (6.28) where e u i ∈ { u i , u i } . The implicit constant depends only on s.Proof. We define I : = X N ∈ Z N s + X ( N , ··· , N ) ∈ Φ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Z N , I k : = X N ∈ Z N s + X ( N , ··· , N ) ∈ Φ k (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Z N ( k = , , , I : = X N ∈ Z N s + X ( N , ··· , N ) ∈ Φ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N , where Φ : = { ( N , · · · , N ) | N ≥ · · · ≥ N , N ∼ N ≫ N } , Φ k : = { ( N , · · · , N ) | N ≥ · · · ≥ N , N & N , N ∼ · · · ∼ N k ≫ N k + } ( k = , , , Φ : = { ( N , · · · , N ) | N ≥ · · · ≥ N , N . N ∼ · · · ∼ N } . We note that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Y s . X k = I k holds. Let s ≥
0. For i = , · · · , m and N i ∈ Z , we set c , N : = N s k P N u k X N , c i , N i : = N i k P N i u i k X Ni ( i = , · · · , I k for k ,
4. We assume u i ∈ ˙ X s ∩ ˙ X and k u i k ˙ X . i = , · · · , X N i c i , N i = k u i k X . i = , · · ·
5. We prove I k . X i = k u i k ˙ X s . (6.30)This implies I k . X i = k u i k ˙ X s Y ≤ k ≤ k , i k u k k ˙ X (6.31)for any u i ∈ ˙ X s ∩ ˙ X because u i ( i = , · · · m ) do not appear in the definition of I k when k ,
4. To obtain(6.30), it su ffi ces to show I k . X N c , N . (6.32)We first prove (6.32) for k =
5. By the H ¨older inequality and Proposition 6.2, we have (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Z N = N − (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L t ≤ N − k P N u k L ∞ x L t k P N u k L x L ∞ t k P N u k L x L ∞ t k P N u k L x L ∞ t k P N u k L x L ∞ t . N − N − N N N N Y i = k P N i u i k X Ni ∼ N − N − s − Y i = c i , N i . Therefore, by the Cauchy-Schwarz inequality for dyadic summation, we obtain I . X N X ( N , ··· , N ) ∈ Φ N s + (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Z N . X N X N . N X N ∼ N X N ∼ N X N ∼ N X N ∼ N N s + N − N − s − Y i = c i , N i . X N X N ∼ N X N ∼ N X N ∼ N X N ∼ N Y i = c i , N i . Y i = X N i c i , N i . This implies (6.32) for k = I k for k = , ,
3. By the same argument as in the proof of Theorem 5.1, to obtain(6.32), it su ffi ces to show that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L t . N − s − N N k + ! Y i = c i , N i (6.33)for ( N , · · · , N ) ∈ Φ k . By the H ¨older inequality, (6.11) with L = N , Bernstein inequality, and Proposi-tion 6.2, we have (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L t ≤ k P N u P N k + u k + k L x , t Y ≤ i ≤ i , k + k P N i u i k L x L ∞ t k P N u k L ∞ t , x . N − Y ≤ i ≤ i , k + N i N Y i = k P N i u i k X Ni ∼ N − s − N − k + N Y i = c i , N i . Therefore, we obtain (6.33).Now, we consider I . We prove I . Y i = X N i c i , N i . (6.34)We put Φ , : = { ( N , · · · , N ) ∈ Φ | N . N } , Φ , : = { ( N , · · · , N ) ∈ Φ | N ≫ N } , and I , l : = X N ∈ Z N s + X ( N , ··· , N ) ∈ Φ , l (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( l = , . (i) For l = N . N )By the definition of the norm k · k Y N , the H ¨older inequality, (6.11) with L = N , Bernstein inequality,and Proposition 6.2, we have (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ≤ N − (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L t ≤ N − k P N u P N u k L x , t k P N u k L x L ∞ t k P N u k L x L ∞ t k P N u k L ∞ x , t . N − N − N N N Y i = k P N i u i k X Ni ∼ N − N − s − N N − Y i = c i , N i . I , . X N X N . N X N ∼ N X N ∼ N X N ∼ N X N ≪ N N s + N − s − N N − Y i = c i , N i . X N X N ∼ N X N ∼ N X N ∼ N c , N c , N c , N c , N X N ≪ N N − s − N s + c , N . Y i = X N i c i , N i by the Cauchy-Schwarz inequality for dyadic summation.(ii) For l = N ≫ N )We put P ± N i : = P ± P N i , where P + and P − are defined in (3.9). Then we have Y i = P N i e u i = Y i = ( P + N i e u i + P − N i e u i ) P N e u . We define K : = { i ∈ { , , , }| e u i = u i } . We only have to consider the cases K = , , K = , K = ,
0, respectively.
Case 1. K = J + − : = P + N u P − N u P N u P N u P N e u , J ++ : = P + N u P + N u P + N u P + N u P N e u , J −− : = P − N u P − N u P − N u P − N u P N e u . We first prove the estimate for J + − . Because ξ and ξ are opposite sign for ξ ∈ supp F x [ P + N u ] and ξ ∈ supp F x [ P − N u ], it holds that | ξ − ξ | = ξ + ( − ξ ) ≥ max {| ξ | , | ξ |} ∼ N . This implies that k P + N u P − N u k L t , x . N − k P + N u k X N k P − N u k X N by (6.11) with L = N . Therefore, by the definition of the norm k·k Y N , the H ¨older inequality, the Bernsteininequality and Proposition 6.2, we obtain k P N J + − k Y N ≤ N − k J + − k L x L t ≤ N − k P + N u P − N u k L t , x k P N u k L x L ∞ t k P N u k L x L ∞ t k P N u k L ∞ x , t . N − N − N N N Y i = k P N i u i k X Ni ∼ N − N − s − N − N Y i = c i , N i . N ∼ N , we have N s + k P N J + − k Y N . NN ! s + N N ! Y i = c i , N i . (6.35)Next, we prove the estimates for J ++ . We put J high ++ : = X A & N Q A J ++ , J low ++ : = X A ≪ N Q A J ++ . We first consider J high ++ . Because k J high ++ k ˙ X , − , ∼ X A & N A − k Q A J high ++ k L tx . N − k J ++ k L tx , by the definition of the norm k · k Y N , the H ¨older inequality, the Bernstein inequality and Proposition 6.2,we obtain k P N J high ++ k Y N . k J high ++ k ˙ X , − , . N − k P + N u k L ∞ x L t k P + N u k L x L ∞ t k P + N u k L x L ∞ t k P + N u k L ∞ x , t k P N u k L ∞ x , t . N − N − N N N N Y i = k P N i u i k X Ni ∼ N − − s N N Y i = c i , N i . (6.36)Finally, we consider J low ++ . Let ( τ i , ξ i ) ∈ supp F t , x [ P + N i u i ] ( i = , , , τ , ξ ) ∈ supp F t , x [ P N u ].Then, | τ + τ + τ + τ ± τ − ( ξ + ξ + ξ + ξ ± ξ ) | ≪ N since supp F t , x [ J low ++ ] ⊂ { ( τ, ξ ) | | τ − ξ | ≪ N } . Because N ∼ N ∼ N ∼ N ≫ N , there exist C i > r i ∈ R satisfying | r i | ≪ N , such that ξ i = C i N + r i ( i = , , , , ξ = r . This implies | ( ξ + ξ + ξ + ξ ± ξ ) − ( ξ + ξ + ξ + ξ ± ξ ) |∼ | ( C + C + C + C ) N − ( C + C + C + C ) N |∼ N (6.37)since ( C + C + C + C ) > C + C + C + C . Therefore, at least one of τ i − ξ i ( i = , , ,
4) and τ ∓ ξ is larger than N . By the symmetry, we can assume supp F t , x [ P + N u ] ⊂ { ( τ, ξ ) | | τ − ξ | & N } orsupp F t , x [ P N u ] ⊂ { ( τ, ξ ) | | τ ∓ ξ | & N } . For the former case, we have k P + N u k L tx . X A & N k Q A P + N u k L tx . N − sup A ∈ Z A k Q A P + N u k L tx = N − k P + N u k ˙ X , , ∞ . N − k P + N u k X N
48y Proposition 6.3. Therefore, by the definition of the norm k · k Y N , the H ¨older inequality, Bernsteininequality, and Proposition 6.2, we obtain (cid:13)(cid:13)(cid:13) P N J low ++ (cid:13)(cid:13)(cid:13) Y N ≤ N − (cid:13)(cid:13)(cid:13) J low ++ (cid:13)(cid:13)(cid:13) L x L t ≤ N − k P + N u k L x , t k P + N u k L x L ∞ t k P + N u k L x L ∞ t k P + N u k L ∞ x , t k P N u k L ∞ x , t . N − N − N N N N Y i = k P N i u i k X Ni ∼ N − N − s − N N Y i = c i , N i . (6.38)For the later case, we have k P N e u k L t , x . X A & N k Q A P N u k L t , x . N − sup A ∈ Z A k Q A P N u k L t , x = N − k P N u k ˙ X , , ∞ . N − k P N u k X N by Proposition 6.3. Now, we used k ψ A ( τ + ξ ) F tx [ P N u ] k L t , x ∼ k ψ A ( τ − ξ ) F tx [ P N u ] k L τ,ξ when e u = u and | τ + ξ | & N . Therefore, by the H ¨older inequality, the Sobolev inequality, and theBernstein inequality, we have k P + N u P + N u P N e u k L t , x . k P + N u k L ∞ t L x k P + N u k L ∞ t L x k P N e u k L t L ∞ x . N N N N −
21 5 Y i = k P N i u i k X Ni . Therefore, by the definition of the norm k · k Y N , the H ¨older inequality, and Proposition 6.2, we obtain (cid:13)(cid:13)(cid:13) P N J low ++ (cid:13)(cid:13)(cid:13) Y N ≤ N − (cid:13)(cid:13)(cid:13) J low ++ (cid:13)(cid:13)(cid:13) L x L t ≤ N − k P + N u k L x L ∞ t k P + N u k L x L ∞ t k P + N u P + N u P N e u k L x , t . N − N N N N N N −
21 5 Y i = k P N i u i k X Ni ∼ N − N − s − N Y i = c i , N i . (6.39)Because N ∼ N and N . N , We have N s + k P N J ++ k Y N . NN ! s + + NN ! s + N N ! Y i = c i , N i . NN ! s + N N ! Y i = c i , N i . (6.40)by (6.36), (6.38), and (6.39). By the same argument, we obtain k P N J −− k Y N . NN ! s + N N ! Y i = c i , N i . (6.41)49s a result, by (6.35), (6.40), and (6.41), we have I , . X N X ( N , ··· , N ) ∈ Φ , N s + (cid:16) k P N J + − k Y N + k P N J ++ k Y N + k P N J −− k Y N (cid:17) . X N X N . N X N ∼ N X N ∼ N X N ∼ N X N ≪ N NN ! s + N N ! Y i = c i , N i . Y i = X N i c i , N i . Case 2. K = e u i = u i ( i = , , e u = u , and we only have to prove theestimates for J ± : = P ± N u P N u P N u P ± N u P N e u , J ± : = P ± N u P ± N u P ± N u P ∓ N u P N e u . We first prove the estimate for J ± . Because ξ and ξ are same sign for ξ ∈ supp F x [ P ± N u ] and ξ ∈ supp F x [ P ± N u ], it holds that | ξ + ξ | = | ξ | + | ξ | ≥ max {| ξ | , | ξ |} ∼ N . It implies that k P ± N u P ± N u k L t , x . N − k P ± N u k X N k P ± N u k X N by (6.10) with L = N . Therefore, we can treat J ± by the same way for J + − in Case 1.Next, we prove the estimates for J + . We put J high2 + : = X A & N Q A J + , J low + : = X A ≪ N Q A J + . We have to only consider J low2 + because we can treat J high2 + by the same way for J high ++ in Case 1. Let( τ i , ξ i ) ∈ supp F t , x [ P + N i u i ] ( i = , , τ , ξ ) ∈ supp F t , x [ P − N u ], ( τ , ξ ) ∈ supp F t , x [ P N u ]. Then, | τ + τ + τ − τ ± τ − ( ξ + ξ + ξ − ξ ± ξ ) | ≪ N since supp F t , x [ J low2 + ] ⊂ { ( τ, ξ ) | | τ − ξ | ≪ N } . Because N ∼ N ∼ N ∼ N ≫ N , there exist C i > r i ∈ R satisfying | r i | ≪ N , such that ξ i = C i N + r i ( i = , , , ξ = − C N + r , ξ = r . It implies | ( ξ + ξ + ξ − ξ ± ξ ) − ( ξ + ξ + ξ − ξ ± ξ ) |∼ | ( C + C + C + C ) N − ( C + C + C − C ) N |∼ N C + C + C + C ) > C + C + C − C . Therefore, we can treat J low2 by the same way for J low ++ in Case 1. J also can be treated by the same way. Case 3. K = e u i = u i ( i = ,
3) and e u i = u i ( i = , N ≫ N , it holdsthat | ξ + ξ + ξ + ξ | ∼ N for ξ i ∈ supp F x [ P N i u i ] ( i = ,
3) and ξ i ∈ supp F x [ P N i u i ] ( i = , | ξ + ξ | and | ξ + ξ | is larger than N . By the symmetry, we can assume | ξ + ξ | & N . Then, we have k P N u P N u k L tx . N − N − k P N u k X N k P N u k X N by (6.10) with L = N . Therefore, we can treat I , by the same way for J + − in Case 1. Indeed, by thedefinition of the norm k · k Y N , the H ¨older inequality, and Proposition 6.2, we obtain (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) P N Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ≤ N − (cid:13)(cid:13)(cid:13) P N ( P N u P N u P N u P N u P N e u ) (cid:13)(cid:13)(cid:13) L x L t ≤ N − k P N u P N u k L t , x k P N u k L x L ∞ t k P N u k L x L ∞ t k P N u k L ∞ x , t . N − N − N − N N N Y i = k P N i u i k X Ni ∼ N − N − s − N − N Y i = c i , N i . It implies I , . Y i = X N i c i , N i . As a result, we obtain (6.31) for k =
4. Therefore, we get (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Y s . X i = k u i k ˙ X s Y ≤ k ≤ k , i k u k k ˙ X (6.42)for s ≥
0. The estimate (6.27) follows from (6.42) with s = . The estimate (6.28) follows from (6.42)with s = s ≥ . (cid:3) Theorem 6.8 (Multilinear estimates) . (i) For any u , · · · , u ∈ ˙ X − , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Y − . Y i = k u i k ˙ X − , (6.43)51 here e u i ∈ { u i , u i } . (ii) If s ≥ − , then for any u , · · · , u ∈ X s , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y s . Y i = k u i k X s , (6.44) where e u i ∈ { u i , u i } . The implicit constant depends only on s.Proof. Let s ≥ − . For i = , · · · , m and N i ∈ Z , we set c , N : = N s k P N u k X N , c i , N i : = N − i k P N i u i k X Ni ( i = , , I : = X N ∈ Z N s + X ( N , ··· , N ) ∈ Φ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N , I k : = X N ∈ Z N s + X ( N , ··· , N ) ∈ Φ k (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ( k = , , , where Φ : = { ( N , · · · , N ) | N ≥ · · · ≥ N , N ∼ N ≫ N } , Φ k : = { ( N , · · · , N ) | N ≥ · · · ≥ N , N & N , N ∼ · · · N k ≫ N k + } ( k = , , Φ : = { ( N , · · · , N ) | N ≥ · · · ≥ N , N . N ∼ · · · ∼ N } . We prove I k . Y i = X N i c i , N i . (6.45)First, we assume ( N , N , N , N ) ∈ Φ ∪ Φ ∪ Φ . Then, N ≫ N . By the definition of the norm k · k Y N , the H ¨older inequality, (6.11) with L = N , Bernstein inequality, and Proposition 6.2, we have N s + (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y N ≤ N s + (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L t ≤ N s + k P N u P N u k L x , t k P N u k L x L ∞ t k P N u k L x L ∞ t . N s + N − N N Y i = k P N i u i k X Ni ∼ N s + N − s − N N N Y i = c i , N i = NN ! s + N N N N Y i = c i , N i . We note that s + > s ≥ − . Therefore, we obtain (6.45) for k = , , I . We put P ± N i : = P ± P N i , where P + and P − are defined in (3.9). Then we have Y i = P N i e u i = Y i = ( P + N i e u i + P − N i e u i ) . We define K : = { i ∈ { , , , }| e u i = u i } . For the cases K = , , ,
4, we can obtain (6.45) for I by almost same way to the proof of Theorem 6.7.Therefore, we only give the proof for the case K = e u i = u i ( i = ,
3) and e u i = u i ( i = , J ± : = P ± N u P ± N u P N u P N u , J ± : = P ± N u P ∓ N u P ± N u P ∓ N u . We first prove the estimate for J ± . Because ξ and ξ are same sign for ξ ∈ supp F x [ P ± N u ] and ξ ∈ supp F x [ P ± N u ], it holds that | ξ + ξ | = | ξ | + | ξ | ≥ max {| ξ | , | ξ |} ∼ N . It implies that k P ± N u P ± N u k L t , x . N − k P ± N u k X N k P ± N u k X N by (6.10) with L = N . Therefore, by the definition of the norm k · k Y N , the H ¨older inequality, Bernsteininequality, and Proposition 6.2, we obtain N s + k P N J ± k Y N ≤ N s + k J ± k L x L t ≤ N s + k P ± N u P ± N u k L x , t k P N u k L x L ∞ t k P N u k L x L ∞ t . N s + N − N N Y i = k P N i u i k X Ni ∼ N s + N − s − N N N Y i = c i , N i . Because N ∼ N ∼ N ∼ N , we have N s + k P N J ± k Y N . NN ! s + Y i = c i , N i . (6.46)Next, we prove the estimates for J + . We put J high2 + : = X A & N Q A J + , J low2 + : = X A ≪ N Q A J + . We first consider J high2 + . Because k J high2 + k ˙ X , − , ∼ X A & N A − k Q A J high2 + k L t , x . N − k J + k L t , x ,
53y the definition of the norm k · k Y N , the H ¨older inequality, Bernstein inequality, and Proposition 6.2, weobtain N s + k P N J high2 + k Y N . k J high2 + k ˙ X , − , . N s + N − k P + N u k L ∞ x L t k P − N u k L x L ∞ t k P + N u k L x L ∞ t k P − N u k L ∞ x , t . N s + N − N − N N N Y i = k P N i u i k X Ni . N s + N − − s N N N Y i = c i , N i . Because N ∼ N ∼ N ∼ N , we have N s + k P N J high2 + k Y N . NN ! s + Y i = c i , N i . (6.47)Finally, we consider J low2 + . Let ( τ i , ξ i ) ∈ supp F t , x [ P + N i u i ] ( i = , τ i , ξ i ) ∈ supp F t , x [ P − N i u i ] ( i = , | τ − τ + τ − τ − ( ξ − ξ + ξ − ξ ) | ≪ N since supp F t , x [ J low2 + ] ⊂ { ( τ, ξ ) | | τ − ξ | ≪ N } . Because N ∼ N ∼ N ∼ N , there exist C i > r i ∈ R satisfying | r i | ≪ N , such that ξ i = C i N + r i ( i = , , ξ i = − C i N + r i ( i = , . This implies | ( ξ − ξ + ξ + ξ ) − ( ξ − ξ + ξ − ξ ) |∼ | ( C + C + C + C ) N − ( C − C + C − C ) N |∼ N since ( C + C + C + C ) > C − C + C − C . Therefore, at least one of τ i − ξ i ( i = ,
3) and τ i + ξ i ( i = ,
4) is larger than N . If supp F t , x [ P + N i u i ] ⊂ { ( τ, ξ ) | | τ − ξ | & N } ( i = , k P + N i u i k L t , x . X A & N k Q A P N i u i k L t , x . N − sup A ∈ Z A k Q A P N i u i k L t , x = N − k P N i u i k ˙ X , , ∞ . N − k P N i u i k X N by Proposition 6.3. If supp F t , x [ P − N i u i ] ⊂ { ( τ, ξ ) | | τ + ξ | & N } ( i = , k P − N i u i k L tx . X A & N k Q A P N i u i k L tx . N − sup A ∈ Z A k Q A P N i u i k L tx = N − k P N i u i k ˙ X , , ∞ . N − k P N i u i k X N by Proposition 6.3. Now, we used k ψ A ( τ + ξ ) F tx [ P − N i u i ] k L t , x ∼ k ψ A ( τ − ξ ) F tx [ P − N i u i ] k L τ,ξ .
54e only assume the case supp F t , x [ P + N u ] ⊂ { ( τ, ξ ) | | τ − ξ | & N } because the other cases are same. Bythe definition of the norm k · k Y N , the H ¨older inequality, the Bernstein inequality and Proposition 6.2, weobtain N s + (cid:13)(cid:13)(cid:13) P N J low2 + (cid:13)(cid:13)(cid:13) Y N ≤ N s + N − (cid:13)(cid:13)(cid:13) J low2 + (cid:13)(cid:13)(cid:13) L x L t ≤ N s + k P + N u k L x , t k P − N u k L x L ∞ t k P + N u k L x L ∞ t k P − N u k L ∞ x , t . N s + N − N N N Y i = k P N i u i k X Ni ∼ N s + N − s − N N N Y i = c i , N i . Because N ∼ N ∼ N ∼ N , we have k P N J + k Y N . NN ! s + Y i = c i , N i . (6.48)By the same argument, we obtain k P N J − k Y N . NN ! s + Y i = c i , N i . (6.49)As a result, by (6.46), (6.48), and (6.49), we have I . X N X ( N , ··· , N ) ∈ Φ N s + (cid:16) k P N J + k Y N + k P N J − k Y N + k P N J + k Y N + k P N J − k Y N (cid:17) . X N X N . N X N ∼ N X N ∼ N X N ∼ N NN ! s + Y i = c i , N i . Y i = X N i c i , N i because s + > s ≥ − . (cid:3) Remark . We cannot obtain the same estimate for I when K = L = N likeas the proof of Theorem 6.7. Indeed, if we use (6.10), then the power of N becomes negative for s < N . Remark . We can also obtain (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Y s ( T ) . T δ X k = k u k k ˙ X s ( T ) Y ≤ i ≤ i , k k u i k ˙ X − + ρ ( T ) for any 0 < T < s > − , 0 < ρ < , and some δ = δ ( ρ ) > L = N , θ = ρ for I , I , I , and J ± , k P N u k L ∞ x L T . T θ k P N u k L ∞ x L − θ T . T θ N − + θ k P N u k X N ( T ) (6.50)55ith θ = ρ for J high2 + , and k P + N u k L T L x . T θ N − + θ k P N u k X N ( T ) (6.51)with θ = ρ for J low2 + in the proof of Theorem 6.8. The second estimate in (6.50) can be obtained by theinterpolation between the following estimates k P N u k L ∞ x L ∞ T . N k P N u k L ∞ T L x . N k P N u k X N ( T ) , k P N u k L ∞ x L T . N − k P N u k X N ( T ) . The estimate (6.51) can be obtained by the interpolation between the following estimates. k P + N u k L T L x . N − k P N u k X N ( T ) , k P + N u k L T L x . T k P N u k X N ( T ) . Theorem 6.9 (Multilinear estimates) . (i) For any u , · · · , u ∈ ˙ X − , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x Y i = e u i − Y i = P ≥ M e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Y − . T δ M κ Y i = k u i k ˙ X − (6.52) for some δ > and κ > . (ii) For any u , · · · , u ∈ X − , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x Y i = e u i − Y i = P ≥ M e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y − . T δ M κ Y i = k u i k X − (6.53) for some δ > and κ > .Proof. By the symmetry, we can assume N ≥ · · · ≥ N and N < M . Let s ≥ − , c , N : = N s k P N u k X N ( T ) , c i , N i : = N s c i k P N i u i k X Ni ( T ) ( i = , c , N : = N s c k P < M P N u k X N ( T ) . By the H ¨olderinequality and (6.26) with L = N , θ = , we have N s + − (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L T ≤ N s + k P N u P < M P N u k L x L T k P N u k L x L ∞ T k P N u k L x L ∞ T . T NN ! s + N N N N Y i = c i , N i if ( N , · · · N m ) ∈ Φ k with k = , ,
3. On the other hand, if ( N , · · · N ) ∈ Φ , then N ∼ · · · ∼ N < M .Therefore, by the H ¨older inequality, we have N s + − (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L x L T ≤ T N s + Y i = k P < M P N i u i k L x L ∞ T . T NN ! s + N N N N Y i = c i , N i . As a result, we obtain (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x Y i = e u i − Y i = P ≥ M e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Y s ( T ) . T M m X i = k u i k ˙ X s ( T ) Y ≤ k ≤ mk , i k u k k ˙ X sc ( T ) by the same argument as in the proof of Theorem 5.3. (cid:3) Multilinear estimates at the scaling critical regularity in the case of m ≥ with γ = In this section, we use the solution space X N and its auxiliary space Y N with the norms k u k X N : = k u (0) k L x + (cid:13)(cid:13)(cid:13)(cid:13)(cid:16) i ∂ t + ∂ x (cid:17) u (cid:13)(cid:13)(cid:13)(cid:13) Y N , k u k Y N : = inf (cid:26) k u k L t L x + k u k ˙ X , − , (cid:12)(cid:12)(cid:12)(cid:12) u = u + u (cid:27) , (7.1)instead of Definition 2.3, where N ∈ Z . Furthermore, we introduce the function spaces ˙ X s , X s , ˙ Y s , and Y s with the norms k u k ˙ X s : = X N ∈ Z N s k P N u k X N , k u k X s : = k u k ˙ X + k u k ˙ X s , k F k ˙ Y s : = X N ∈ Z N s k P N F k Y N , k F k Y s : = k F k ˙ Y + k F k ˙ Y s , where s ∈ R . We can easily see that the estimates (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x f (cid:13)(cid:13)(cid:13)(cid:13) ˙ X s . k f k ˙ H s , (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)Z t e i ( t − t ′ ) ∂ x F ( t ′ ) dt ′ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ X s . k F k ˙ Y s (7.2)hold for any f ∈ ˙ H s ( R ) and F ∈ Y s , where the implicit constants are independent of f . Furthermore, bythe same argument as the proof of Proposition 6.2 and 6.3, the estimates N k P N u k L t L ∞ x . k P N u k X N (7.3)and k P N u k ˙ X , , ∞ . k P N u k X N hold for any function u satisfying P N u ∈ X N . Theorem 7.1 (Refined bilinear Strichartz estimates on X N × X N ) . Let L , N , N ∈ Z and P N u ∈ X N , P N u ∈ X N . If N ≥ N & L, then the estimates (cid:13)(cid:13)(cid:13)(cid:13) R + L (cid:16) P N u , P N u (cid:17)(cid:13)(cid:13)(cid:13)(cid:13) L t , x ( R × R ) . N − L − k P N u k X N k P N u k X N , (7.4) (cid:13)(cid:13)(cid:13) R − L (cid:0) P N u , P N u (cid:1)(cid:13)(cid:13)(cid:13) L t , x ( R × R ) . N − L − k P N u k X N k P N u k X N (7.5) hold, where the implicit constants are independent of L , N , N , u , u . Here the bilinear operators R ± L are defined by (6.6).Proof. We prove only (7.5) since (7.4) can be proved in the similar way. We set u j , N j : = P N j u j and F j : = (cid:16) i ∂ t + ∂ x (cid:17) u j for j = ,
2. It su ffi ces to show that the estimate (cid:13)(cid:13)(cid:13) R − L ( u , N , u , N ) (cid:13)(cid:13)(cid:13) L t , x . N − L − (cid:16) k u , N (0) k L x + k F k Y N (cid:17) (cid:16) k u , N (0) k L x + k F k Y N (cid:17) (cid:13)(cid:13)(cid:13) R − L ( u , N , u , N ) (cid:13)(cid:13)(cid:13) L t , x . N − L − (cid:18) k u , N (0) k L x + k F k ˙ X , − , (cid:19) (cid:18) k u , N (0) k L x + k F k ˙ X , − , (cid:19) , (7.6) (cid:13)(cid:13)(cid:13) R − L ( u , N , u , N ) (cid:13)(cid:13)(cid:13) L t , x . N − L − (cid:18) k u , N (0) k L x + k F k ˙ X , − , (cid:19) (cid:16) k u , N (0) k L x + k F k L t L x (cid:17) , (7.7) (cid:13)(cid:13)(cid:13) R − L ( u , N , u , N ) (cid:13)(cid:13)(cid:13) L t , x . N − L − (cid:16) k u , N (0) k L x + k F k L t L x (cid:17) (cid:18) k u , N (0) k L x + k F k ˙ X , − , (cid:19) , (7.8) (cid:13)(cid:13)(cid:13) R − L ( u , N , u , N ) (cid:13)(cid:13)(cid:13) L t , x . N − L − (cid:16) k u , N (0) k L x + k F k L t L x (cid:17) (cid:16) k u , N (0) k L x + k F k L t L x (cid:17) . (7.9)To obtain (7.6), (7.7), and (7.8), we use Lemma 6.1. Then, we have to prove only (cid:13)(cid:13)(cid:13)(cid:13) R − L ( e it ∂ x u , N (0) , e it ∂ x u , N (0)) (cid:13)(cid:13)(cid:13)(cid:13) L t , x . N − L − k u , N (0) k L x k u , N (0) k L x , (7.10) (cid:13)(cid:13)(cid:13)(cid:13) R − L ( e it ∂ x u , N (0) , u , N ) (cid:13)(cid:13)(cid:13)(cid:13) L t , x . N − L − k u , N (0) k L x (cid:16) k u , N (0) k L x + k F k L t L x (cid:17) , (7.11) (cid:13)(cid:13)(cid:13)(cid:13) R − L ( u , N , e it ∂ x u , N (0)) (cid:13)(cid:13)(cid:13)(cid:13) L t , x . N − L − (cid:16) k u , N (0) k L x + k F k L t L x (cid:17) k u , N (0) k L x . (7.12)Since the identity u j , N j ( t ) = u j , N j (0) − i Z t e it ∂ x ( e − it ′ ∂ x F j ( t ′ )) dt ′ , holds, we can obtain (7.10), (7.11), (7.12), and (7.9) by using (6.8) and bilinearity of the operator R − L . (cid:3) Theorem 7.2 (Multilinear estimates) . Let m ≥ . Sets c = s c ( m ) : = − m − . (i) For any u , · · · , u m ∈ ˙ X s c , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Y sc . m Y i = k u i k ˙ X sc , (7.13) where e u i ∈ { u i , u i } and the implicit constant depends only on m. (ii) If s ≥ s c , then for any u , · · · , u m ∈ X s , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y s . m Y i = k u i k X s , (7.14) where e u i ∈ { u i , u i } and the implicit constant depends only on m and s.Proof. Let s ≥ min { s c , } . We assume u i ∈ ˙ X s ∩ ˙ X s c ( i = , · · · , m ). For i = , · · · , m and N i ∈ Z , we set c , N : = N s k P N u k X N , c i , N i : = N s c i k P N i u i k X Ni ( i = , · · · , m ). We put Φ : = { ( N , · · · , N m ) | N ≥ · · · ≥ N m , N ∼ N ≫ N } , Φ k : = { ( N , · · · , N m ) | N ≥ · · · ≥ N m , N & N , N ∼ · · · ∼ N k ≫ N k + } ( k = , , · · · , m − , Φ m : = { ( N , · · · , N m ) | N ≥ · · · ≥ N m , N . N ∼ · · · ∼ N m } .
58e first consider the case m ≥
5. By the H ¨older inequality, Theorem 7.1 with L = N , the Bernsteininequality, and (7.3), we have N s + (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L t L x ≤ N s + k P N u P N u k L t , x k P N u k L t L ∞ x k P N u k L t L ∞ x m Y i = k P N i u i k L ∞ t , x . NN ! s + N − − s c N − − s c N − s c Q mi = N − s c i N m Y i = c i , N i (7.15)if ( N , · · · N m ) ∈ Φ k with k = , ,
3. We note that = − − s c + ( m − − s c ) >
0. On the other hand,by the H ¨older inequality, the Bernstein inequality, and (7.3), we have N s + (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L t L x ≤ N s + Y i = k P N i u i k L t L ∞ x k P N u k L ∞ t L x m Y i = k P N i u i k L ∞ t , x . N s + N s + N N − s c Q mi = N − s c i N s c N + s c N + s c m Y i = c i , N i ∼ NN ! s + N − s c Q mi = N − s c i N + s c m Y i = c i , N i (7.16)if ( N , · · · N m ) ∈ Φ k with k = , · · · , m . We assume Q mi = N − s c i = m =
5. We note that 1 + s c = − s c + ( m − − s c ) >
0. Therefore, we obtain (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Y s . m X i = k u i k ˙ X s Y ≤ k ≤ mk , i k u k k ˙ X sc (7.17)for any u i ∈ ˙ X s ∩ ˙ X s c by the same argument as in the proof of Theorem 5.2.Next, we consider m =
4. Then, s c = − . By the same argument as above, we obtain N s + (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L t L x . NN ! s + N N ! Y i = c i , N i if ( N , N , N , N ) ∈ Φ k with k = , ,
3. Therefore, we only have to consider the case ( N , N , N , N ) ∈ Φ . Namely, N ∼ N ∼ N ∼ N . We put P ± N i : = P ± P N i , where P + and P − are defined in (3.9). Then wehave Y i = P N i e u i = Y i = ( P + N i e u i + P − N i e u i ) . We define K : = { i ∈ { , , , }| e u i = u i } . By the same argument as in the proof of Theorem 6.7 for K = , K =
2, we canuse the bilinear estimates (6.10), (6.11) with L = N or the modulation bound such as (6.37). Namely, k P N i e u i P N j e u j k L t , x . N − k P N i u i k X Ni k P N j u j k X Nj i , j ) or k P N i e u i k L t , x . N − k P N i u i k X Ni for some i holds. For the former case with ( i , j ) = (1 , L = N , and (7.3), we have N s + (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L t L x ≤ N s + k P N e u P N e u k L t , x k P N u k L t L ∞ x k P N u k L t L ∞ x . N s + N − N − N − Y i = k P N i u i k X Ni . NN ! s + Y i = c i , N i . For the later case with i =
1, by the H ¨older inequality, the Bernstein inequality, we have N s + (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L t L x ≤ N s + k P N e u k L tx k P N u k L ∞ tx k P N u k L t L ∞ x k P N u k L t L ∞ x . N s + N − N N − N − Y i = k P N i u i k X Ni . NN ! s + Y i = c i , N i . Therefore, we obtain (7.17) for m = (cid:3) Remark . If m ∈ { , , } (then, s c < (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Y s ( T ) . T δ X k = k u k k ˙ X s ( T ) Y ≤ i ≤ i , k k u i k ˙ X sc + ρ ( T ) for any 0 < T < s > s c , 0 < ρ < − s c , and some δ = δ ( ρ ) > Theorem 7.3 (Multilinear estimates) . Let m ≥ , < T < , and M ∈ N . Sets c = s c ( m ) : = − m − . (i) For any u , · · · , u m ∈ ˙ X s c , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i − m Y i = P ≥ M e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Y sc ( T ) . T δ M κ m Y i = k u i k ˙ X sc ( T ) (7.18) for some δ > and κ > depending only on m. (ii) For any u , · · · , u m ∈ X s c , it holds that (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i − m Y i = P ≥ M e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y sc ( T ) . T δ M κ m Y i = k u i k X sc ( T ) (7.19) for some δ > and κ > depending only on m. roof. By the symmetry, we can assume N ≥ · · · ≥ N m and N m < M . Let s ≥ min { , s c } , c , N : = N s k P N u k X N ( T ) , c i , N i : = N s c i k P N i u i k X Ni ( T ) ( i = , · · · , m − c m , N m : = N s c m k P < M P N m u m k X Nm ( T ) . Wefirst assume the case m ≥
5. By the interpolation between the two estimates k P N u k L T L ∞ x . N − k P N u k X N ( T ) , k P N u k L T L ∞ x . T N k P N u k X N ( T ) , we obtain k P N u k L T L ∞ x . T θ N − + θ k P N u k X N ( T ) (7.20)for 0 < θ <
1. We use this estimate with θ = m − instead of k P N u k L T L ∞ x . N − k P N u k X N ( T ) in (7.15) and (7.16). Then, we obtain (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x m Y i = e u i − m Y i = P ≥ M e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Y s ( T ) . T θ M − s c m X i = k u i k ˙ X s ( T ) Y ≤ k ≤ mk , i k u k k ˙ X sc ( T ) by the same argument as in the proof of Theorem 5.3.Next, we assume the case m =
4. Then, s c = − . By the H ¨older inequality, Theorem 7.1 with L = N , and (7.20) with θ = , we have N s + (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L T L x ≤ N s + k P N u P < M P N u k L T L x k P N u k L T L ∞ x k P N u k L T L ∞ x . T NN ! s + N N ! N Y i = c i , N i . if ( N , · · · N m ) ∈ Φ k with k = , ,
3. On the other hand, if ( N , · · · N ) ∈ Φ , then N ∼ · · · ∼ N < M .Therefore, by the H ¨older inequality and the Bernstein inequality, we have N s + (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y i = P N i u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L T L x ≤ T N s + k P < M P N u k L ∞ T L x Y i = k P < M P N i u i k L ∞ T L ∞ x . T NN ! s + N N N N Y i = c i , N i . As a result, we obtain (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ∂ x Y i = e u i − Y i = P ≥ M e u i (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) ˙ Y s ( T ) . T M m X i = k u i k ˙ X s ( T ) Y ≤ k ≤ mk , i k u k k ˙ X sc ( T ) by the same argument as in the proof of Theorem 5.3. (cid:3) Proof of well-posedness
In this section, we give proofs of Theorem 1.2 and Theorem 1.5. For s ∈ R and r >
0, we define theclosed ball B r ( H s ( R )) in H s ( R ) centered at the origin with the radius r as B r ( H s ( R )) : = (cid:8) φ ∈ H s ( R ) : k φ k H s ( R ) ≤ r (cid:9) . For T > m ∈ N with m ≥ ρ >
0, we introduce a closed ball X s ( T ; ρ ) in X s ( T ) centered at theorigin with a radious ρ as X s ( T ; ρ ) : = (cid:8) u ∈ X s ( T ) : k u k X s ( T ) ≤ ρ (cid:9) . Here the function space X s ( T ) will be chosen as Definition 2.4 for Theorem 1.2 or (6.1) for Theorem 1.5below. For u ∈ B r ( H s ( R )), we introduce a nonlinear mapping Ψ given by Ψ [ u ]( t ) : = e it ∂ x u − i I (cid:20) G (cid:18)n ∂ kx u o k ≤ γ , n ∂ kx ¯ u o k ≤ γ (cid:19)(cid:21) (8.1)for t ∈ ( − T , T ), where u ∈ X s ( T ; ρ ). In the following subsections, we consider whether the nonlinearmapping Ψ is a contraction mapping. In this subsection, we give a proof of Theorem 1.2. In the proof, we choose the solution space X s ( T )as given in Definition 2.4. The main tool for the proof is the multilinear estimate (Theorem 4.4). If thenonlinearity G = G m , m γ is the form of (1.8), the Duhamel term is written as I (cid:20) G m , m γ (cid:18)n ∂ kx u o k ≤ γ , n ∂ kx ¯ u o k ≤ γ (cid:19)(cid:21) = X k + l = m X | α | + | β | = γ C k , l α,β I h ( ∂ α x u ) · · · ( ∂ α k x u ) (cid:16) ∂ β x u (cid:17) · · · (cid:16) ∂ β l x u (cid:17)i = X | α | = γ C m α I (cid:2) ( ∂ α x e u ) · · · ( ∂ α k x e u ) (cid:0) ∂ α k + x e u (cid:1) · · · (cid:0) ∂ α m x e u (cid:1)(cid:3) = X | α | = γ C m α I m Y i = ∂ α i x e u . (8.2)Here to obtain the second equality of (8.2), we write u and u as e u and β j as α k + j for j ∈ { , . . . , l } . Theprecise statement of Theorem 1.2 with γ = Theorem 8.1.
Let γ = , m ≥ , s ≥ s , where s is given by (1.10), and T ∈ (0 , . We assume that thenonlinearity G = G m , m γ is the form of (1.8). Then the following statements hold: • (Existence): There exists a positive constant ε = ε ( m , s ) > such that for any u ∈ B ε ( H s ( R )) ,there exists a unique solution u ∈ X s ( T ; 2 C ε ) to the problem (1.1)-(1.8) on the time intervalI T = ( − T , T ) , where C is a positive constant given in Proposition 2.9. • (Uniqueness): Let u ∈ X s ( T ; 2 C ε ) be the solution obtained in the Existence part. Let T ∈ (0 , T ] and w ∈ X s ( T ; 2 C ε ) be another solution to (1.1)-(1.8). If the identity u = w (0) holds, then theidentity u = w holds on [ − T , T ] . • (Continuity of the flow map): The flow map Ξ : B ε ( H s ( R )) X s ( T ; 2 C ε ) , u u is Lipschitzcontinuous. roof of Theorem 8.1. (Existence): Let ε >
0, which will be chosen later. Let u ∈ X s ( T ; 2 C ε ). Bythe identity (8.2), Proposition 2.9, Corollary 3.3 and Theorem 4.4 with γ =
3, we see that there exists apositive constant C = C ( m , s ) > k Ψ [ u ] k X s ( T ) ≤ (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x u (cid:13)(cid:13)(cid:13)(cid:13) X s ( T ) + X | α | = (cid:12)(cid:12)(cid:12) C m α (cid:12)(cid:12)(cid:12) (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) I m Y i = ∂ α i x e u (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) X s ( T ) ≤ C k u k H s ( R ) + C s X | α | = (cid:12)(cid:12)(cid:12) C m α (cid:12)(cid:12)(cid:12) (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = ∂ α i x e u (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y s ( T ) ≤ C ε + C k u k mX s ( T ) ≤ C ε + C (2 C ε ) m ≤ C ε (8.3)hold. Here we take ε > ε ≤ (cid:18) C C m − m (cid:19) m − to obtain the last inequality of (8.3). This impliesthat the nonlinear mapping Ψ is well defined from X s ( T , C ε ) to itself. Let u , w ∈ X s ( T ; 2 C ε ). By asimple computation, the identity m Y i = u i − m Y i = w i = m X i = i − Y l = w l ( u i − w i ) m Y k = i + u k (8.4)holds for any u , · · · , u m ∈ C and w , · · · , w m ∈ C , where we assumed Q i − l = w l = i = Q m k = i + u k = i = m . By the identity (8.4), in the similar manner as the proof of the estimate (8.3),the estimates k Ψ [ u ] − Ψ [ w ] k X s ( T ) ≤ X | α | = (cid:12)(cid:12)(cid:12) C m α (cid:12)(cid:12)(cid:12) (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) I m Y i = ∂ α i x e u − m Y i = ∂ α i x e w (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) X s ( T ) ≤ C s X | α | = (cid:12)(cid:12)(cid:12) C m α (cid:12)(cid:12)(cid:12) (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = ∂ α i x e u − m Y i = ∂ α i x e w (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y s ( T ) ≤ C s X | α | = (cid:12)(cid:12)(cid:12) C m α (cid:12)(cid:12)(cid:12) m X i = (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) i − Y l = (cid:0) ∂ α l x w l (cid:1) (cid:8) ∂ α i x ( u i − w i ) (cid:9) m Y k = i + ∂ α k x u k (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y s ( T ) ≤ C s X | α | = (cid:12)(cid:12)(cid:12) C m α (cid:12)(cid:12)(cid:12) m X i = k w k i − X s ( T ) k u k m − iX s ( T ) k u − w k X s ( T ) ≤ C (2 C ε ) m − k u − w k X s ( T ) ≤ k u − w k X s ( T ) (8.5)hold. Here we take ε such as ε ≤ C (cid:16) C (cid:17) m − to obtain the last inequality of (8.5). This implies that thenonlinear mapping Ψ is a contraction mapping. Thus by the contraction mapping principle, we see thatthere exists a unique solution u ∈ X s ( T ; 2 C ε ) to (1.1)-(1.8).(Uniqueness) and (Continuity of the flow map) can be proved in the similar manner as the proof of theestimate (8.5), which completes the proof of the theorem. (cid:3) The precise statement of Theorem 1.2 with γ ∈ { , } is as follows. Theorem 8.2.
Let γ ∈ { , } , m ≥ , and s ≥ max { s , } , where s is given by (1.9). We assume that thenonlinearity G = G m , m γ is the form of (1.8). Then the following statements hold: (Existence): For any r > , there exists a positive T = T ( r , m , s ) > such that for any u ∈ B r ( H s ( R )) , there exists a solution u ∈ X s ( T ; 2 r ) to the problem (1.1)-(1.8) on the time intervalI T = ( − T , T ) . • (Uniqueness): Let u ∈ X s ( T ; 2 r ) be the solution obtained in the Existence part. Let T ∈ (0 , T ] and w ∈ X s ( T ) be another solution to (1.1)-(1.8). If the identity u = w (0) holds, then the identityu = w holds on [ − T , T ] . • (Continuity of the flow map): The flow map Ξ : B r ( H s ( R )) X s ( T ; 2 r ) , u u is Lipschitzcontinuous.Moreover, let ( − T min , T max ) be the maximal existence time interval of the solution u. Then the blow-upalternative holds: T max < ∞ = ⇒ lim t → T max − k u ( t ) k H s ( R ) = ∞ . The similar statement also holds in the negative time direction.Proof of Theorem 8.2. (Existence): Let T ∈ (0 , u ∈ X s ( T ; 2 C r ). Bythe identity (8.2), Proposition 2.9, Corollary 3.3 and Theorem 4.4 with γ ∈ { , } , we see that there existpositive constants δ = δ ( m , s ) > C = C ( m , s ) > k Ψ [ u ] k X s ( T ) ≤ (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x u (cid:13)(cid:13)(cid:13)(cid:13) X s ( T ) + X | α | = γ (cid:12)(cid:12)(cid:12) C m α (cid:12)(cid:12)(cid:12) (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) I m Y i = ∂ α i x e u (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) X s ( T ) ≤ C k u k H s ( R ) + C s X | α | = γ (cid:12)(cid:12)(cid:12) C m α (cid:12)(cid:12)(cid:12) (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = ∂ α i x e u (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y s ( T ) ≤ C r + C T δ k u k mX s ( T ) ≤ C r + C T δ (2 C r ) m ≤ C r (8.6)hold. Here we take T > T ≤ (cid:16) C (2 C r ) m − (cid:17) δ to obtain the last inequality of (8.6). This impliesthat the nonlinear mapping Ψ is well defined from X s ( T , C r ) to itself. Let u , w ∈ X s ( T ; 2 C r ). By theidentity (8.4), in the similar manner as the proof of the estimate (8.6), the estimates k Ψ [ u ] − Ψ [ w ] k X s ( T ) ≤ X | α | = γ (cid:12)(cid:12)(cid:12) C m α (cid:12)(cid:12)(cid:12) (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) I m Y i = ∂ α i x e u − m Y i = ∂ α i x e w (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) X s ( T ) ≤ C s X | α | = γ (cid:12)(cid:12)(cid:12) C m α (cid:12)(cid:12)(cid:12) (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) m Y i = ∂ α i x e u − m Y i = ∂ α i x e w (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y s ( T ) ≤ C s X | α | = γ (cid:12)(cid:12)(cid:12) C m α (cid:12)(cid:12)(cid:12) m X i = (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) i − Y l = (cid:0) ∂ α l x w l (cid:1) (cid:8) ∂ α i x ( u i − w i ) (cid:9) m Y k = i + ∂ α k x u k (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) Y s ( T ) ≤ C s T δ X | α | = γ (cid:12)(cid:12)(cid:12) C m α (cid:12)(cid:12)(cid:12) m X i = k w k i − X s ( T ) k u k m − iX s ( T ) k u − w k X s ( T ) ≤ C T δ (2 C r ) m − k u − w k X s ( T ) ≤ k u − w k X s ( T ) (8.7)hold. This implies that the nonlinear mapping Ψ is a contraction mapping. Thus by the contractionmapping principle, we see that there exists a unique solution u ∈ X s ( T ; 2 C r ) to (1.1)-(1.8).64Uniqueness): We only consider the positive time direction, since the negative time diraction can betreated in the similar manner. On the contrary, we assume that there exists t ∈ (0 , T ] such that therelation u ( t ) , w ( t ) holds. Then we can define t : = inf { t ∈ [0 , T ) : u ( t ) , w ( t ) } . Since u , w ∈ X s ( T ) ֒ → C ([0 , T ); H s ( R )), the identity u ( t ) = w ( t ) holds. By the time translation, we may assume that t = τ ∈ (0 , T ) such that the estimates k u − w k X s ( τ ) ≤ C τ δ X | α | = γ (cid:12)(cid:12)(cid:12) C m α (cid:12)(cid:12)(cid:12) m X i = k w k i − X s ( T ) k u k m − iX s ( T ) k u − w k X s ( τ ) ≤ k u − w k X s ( τ ) hold, which implies that the identity u ( t ) = w ( t ) holds on [0 , τ ). This contradicts the definition of t .(Continuity of the flow map): Let u , w ∈ X s ( T ; ρ ) be the solutions to the problem (1.1)-(1.8) on the timeinterval I T with the initial data u , w ∈ B r ( H s ( R )), repectively. In the similar manner as the proof of theestimate (8.7), the estimates k u − w k X s ( T ) ≤ C k u − w k H s ( R ) + C T δ (2 C r ) m − k u − w k X s ( T ) ≤ C k u − w k H s ( R ) + k u − w k X s ( T ) hold, which implies that the flow map Ξ is Lipschitz continuous.The blow-up alternative can be proved in the standard manner. (cid:3) Remark .
1. Theorem 1.3 and the local well-posedness part of Theorem 1.5 for s > s c can beproved in the similar manner as above.2. To show Theorem 1.1, we introduce a new unknown function v given by v : = h ∂ x i γ u . By applyingthe contraction mapping principle, we can construct the new function v . Especially, the nonlinearterms can be handled in the similar argument to treat the form of (1.11). In this subsection, we give a proof of Theorem 1.5. In the proof, when (i) γ = m ≥ s ≥ s c , (ii) γ = m = s ≥ s c , or (iii) γ = m ≥ s ≥ s c , we choose the solution space X N as (i) (7.1),(ii) (6.1) or (iii) (5.1), and we use the multilinear estimates of (i) Theorem 7.2, (ii) Theorem 6.8, or (iii)Theorem 5.2.The precise statement of the global well-posedness part of Theorem 1.5 is as follows. Theorem 8.3.
Let γ ∈ { , } , m ≥ , and s ≥ s c . We assume that the nonlinearity G = G m , m γ is the formof (1.12). Then there exists a positive constant ε = ε ( m , s ) > such that for any u ∈ B ε ( H s ( R )) , thereexists a unique small global solution u ∈ X s ( R ) to the problem (1.1)-(1.12) on R . Moreover, there existscattering states u ± ∈ H s ( R ) such that the identity lim t →±∞ (cid:13)(cid:13)(cid:13)(cid:13) u ( t ) − e it ∂ x u ± (cid:13)(cid:13)(cid:13)(cid:13) H s = holds, where the double-sign corresponds.Proof of Theorem 8.3. We only consider the case of γ =
1, since the case of γ = ε >
0, which will be chosen later. Let u ∈ X s ( R ; 2 C ε ). By the linear estimates65(7.2) for γ = γ =
2) and the multilinear estimates (Theorem 7.2 for γ = γ = C = C ( m , s ) > k Ψ [ u ] k X s ≤ (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x u (cid:13)(cid:13)(cid:13)(cid:13) X s + m X k = | C k | (cid:13)(cid:13)(cid:13)(cid:13) I h ∂ γ x (cid:16) u k u m − k (cid:17)i(cid:13)(cid:13)(cid:13)(cid:13) X s ≤ C k u k H s ( R ) + C s m X k = | C k | (cid:13)(cid:13)(cid:13)(cid:13) ∂ γ x (cid:16) u k u m − k (cid:17)(cid:13)(cid:13)(cid:13)(cid:13) Y s ≤ C ε + C k u k mX s ≤ C ε + C (2 C ε ) m ≤ C ε (8.9)hold. Here we take ε > ε ≤ (cid:18) C C m − m (cid:19) m − to obtain the last inequality of (8.9). This impliesthat the nonlinear mapping Ψ is well defined from X s ( R , C ε ) to itself. Let u , w ∈ X s ( R ; 2 C ε ). By asimple computation, the identity u k u m − k − w k w m − k = ( u − w ) u m − k k X i = u k − i w i − + ( u − w ) w k m − k X i = u m − k − i w i − (8.10)holds for any u , w ∈ C . By the identity (8.10), in the similar manner as the proof of the estimate (8.9),the estimates k Ψ [ u ] − Ψ [ w ] k X s ≤ m X k = | C k | (cid:13)(cid:13)(cid:13)(cid:13) I h ∂ γ x (cid:16) u k u m − k − w k w m − k (cid:17)i(cid:13)(cid:13)(cid:13)(cid:13) X s ≤ C s m X k = | C k | (cid:13)(cid:13)(cid:13)(cid:13) ∂ γ x (cid:16) u k u m − k − w k w m − k (cid:17)(cid:13)(cid:13)(cid:13)(cid:13) Y s ≤ C s m X k = | C k | k X i = (cid:13)(cid:13)(cid:13)(cid:13) ∂ γ x n ( u − w ) u m − k u k − i w i − o(cid:13)(cid:13)(cid:13)(cid:13) Y s + m − k X i = (cid:13)(cid:13)(cid:13)(cid:13) ∂ γ x n ( u − w ) u m − k − i w k w i − o(cid:13)(cid:13)(cid:13)(cid:13) Y s ≤ C s m X k = | C k | m X i = k u k m − iX s k w k i − X s k u − w k X s ≤ C (2 C ε ) m − k u − w k X s ≤ k u − w k X s (8.11)hold. Here we take ε such as ε ≤ C (cid:16) C (cid:17) m − to obtain the last inequality of (8.11). This implies thatthe nonlinear mapping Ψ is a contraction mapping. Thus by the contraction mapping principle, we seethat there exists a unique solution u ∈ X s ( R ; 2 C ε ) to (1.1)-(1.12).Next we prove that the global solution u ∈ X s ( R ) scatters in H s ( R ) as t → ±∞ . We only considerthe positive time direction, since the negative time direction can be treated in the similar manner. Let t > t >
0. We claim that if F ∈ Y s , then the relation k F k Y s ( t , t ) → . (8.12)holds as t > t → ∞ . Indeed, we note that for any N ∈ Z , the relation k P N F k Y N ( t , t ) → t > t → ∞ due to the definition of the Y N -norm. By the embedding Y s ֒ → Y s ( t , t ) and therelation (8.13), the relation (8.12) holds. We note that the nonlinear term G belongs to Y s due to theestimates (8.9). By the linear estimates ((7.2) for γ = γ =
2) and the relation (8.12), therelations (cid:13)(cid:13)(cid:13)(cid:13) e − it ∂ x u ( t ) − e − it ∂ x u ( t ) (cid:13)(cid:13)(cid:13)(cid:13) H s ( R ) = (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)Z t t e − it ′ ∂ x G ( t ′ ) dt ′ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) H s ( R ) ≤ sup t ∈ [ t , t ] (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)Z tt e − i ( t − t ′ ) ∂ x G ( t ′ ) dt ′ (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) H s ( R ) ≤ C s k G k Y s ( t , t ) → t > t → ∞ , which implies that n e − it ∂ x u ( t ) o t > satisfies the Cauchy condition in H s ( R ). Since H s ( R ) is complete and the operator e it ∂ x is unitary, there exists u + ∈ H s ( R ) such that the identity (8.8)holds, which completes the proof of the theorem. (cid:3) Remark . Theorem 1.4 can be proved in the similar manner as above.Next we give a proof of large data local well-posedness at the scaling critical regularity s = s c inTheorem 1.5. For R ≥ ǫ > φ ∈ B R ( H s c ( R )), we introduce a closed ball φ + B ǫ ( H s c ( R )) in H s c ( R )given by φ + B ǫ ( H s c ( R )) : = (cid:8) φ ∈ H s c ( R ) | φ = φ + φ , k φ k H sc ( R ) ≤ ǫ (cid:9) . For any φ ∈ H s c ( R ), the identity lim M →∞ k P ≥ M φ k H sc ( R ) = M : H s c ( R ) → N given by M ( φ ) = M ( φ, ǫ ) : = min n M ∈ Z : k P ≥ M φ k H sc ( R ) ≤ ǫ o . For T ∈ (0 , ρ > a >
0, we introduce a closed ball X s c ( T ; ρ, a ) in X s c ( T ) defined by X s c ( T ; ρ, a ) : = (cid:8) u ∈ X s c ( T ) : k u k X sc ( T ) ≤ ρ, k P ≥ M u k X sc ( T ) ≤ a (cid:9) . In the following, we prove that the nonlinear mapping Ψ given by (8.1) is a contraction mapping on thecomplete metric space X s c ( T ; ρ, a ). The main tools for the proof are the multilinear estimates (7.14)-(7.19) if γ = γ = s = s c is as follows. Theorem 8.4 (Large data local well-posedness at the critical regularity s = s c ) . Let γ ∈ { , } and m ≥ .We assume that the nonlinearity G = G m , m γ is the form of (1.12). Then the following statements hold: • (Existence): There exist a constant ǫ = ǫ ( m ) > dependent only on m and a map M : H s c ( R ) → N such that the following holds: For any R > and u ∗ ∈ B R ( H s c ( R )) , there exists T = T (cid:16) R , M ( u ∗ ) (cid:17) > such that for any u ∈ u ∗ + B ǫ ( H s c ( R )) , there exists a solution u ∈ X s c ( T ) to the problem (1.1)-(1.8) on the time interval I T = ( − T , T ) . • (Uniqueness): Let u ∈ X s c ( T ) be the solution obtained in the Existence part. Let T ∈ (0 , T ] andw ∈ X s c ( T ) be another solution to (1.1)-(1.8). If the identity u = w (0) holds, then the identityu = w holds on [ − T , T ] . • (Continuity of the flow map): For any R > , u ∗ ∈ B R ( H s c ( R )) and T > given above, the flowmap u u ∈ X s c ( T ) is Lipschitz continuous. roof of Theorem 8.4. We only consider the case of γ =
1, since the case of γ = , Theorem 6.2]. Let a ∈ , (cid:16) C (cid:17) m − and ǫ ∈ (cid:16) , a C i be positive numbers, where C is given by (7.2) if γ = γ =
2, and C isdefined by (8.9). Let R > u ∗ ∈ B R ( H s c ( R )). We set M ∈ Z suchas M : = M (cid:16) u ∗ , ǫ (cid:17) . We define ρ and T as ρ = ρ ( R ) : = max (4 C R , a ) , T = T ( R , M ) : = min , a C M κ ρ m ! δ , where κ is given by (7.19) if γ = γ =
2, and C = C ( m ) > m , which is given by (8.14) below. Then we note that the estimates C R ≤ ρ , C ǫ ≤ a ≤ ρ , C a m ≤ a ≤ ρ , C T δ M κ ρ m ≤ a ≤ ρ u ∈ u ∗ + B ǫ ( H s c ( R )) be an arbitrary initial data. Then the estimates k u k H sc ( R ) ≤ R + ǫ, k P ≥ M u k H sc ( R ) ≤ k P ≥ M u k H sc ( R ) + k P ≥ M ( u − u ∗ ) k H sc ( R ) ≤ ǫ hold. We apply the multilinear estimates ((7.14)-(7.19) if γ = γ = γ = γ = k Ψ [ u ] k X sc ( T ) ≤ (cid:13)(cid:13)(cid:13)(cid:13) e it ∂ x u (cid:13)(cid:13)(cid:13)(cid:13) X sc ( T ) + m X k = | C k | (cid:13)(cid:13)(cid:13)(cid:13) I h ∂ γ x (cid:16) u k u m − k (cid:17)i(cid:13)(cid:13)(cid:13)(cid:13) X sc ( T ) ≤ C k u k H sc ( R ) + C s c m X k = | C k | (cid:13)(cid:13)(cid:13)(cid:13) ∂ γ x (cid:16) u k u m − k (cid:17)(cid:13)(cid:13)(cid:13)(cid:13) Y sc ( T ) ≤ C k u k H sc ( R ) + C s c m X k = | C k | (cid:13)(cid:13)(cid:13)(cid:13) ∂ γ x n ( P ≥ M u ) k ( P ≥ M u ) m − k o(cid:13)(cid:13)(cid:13)(cid:13) Y sc ( T ) + C s c m X k = | C k | (cid:13)(cid:13)(cid:13)(cid:13) ∂ γ x n u k u m − k − ( P ≥ M u ) k ( P ≥ M u ) m − k o(cid:13)(cid:13)(cid:13)(cid:13) Y sc ( T ) ≤ C ( R + ǫ ) + C k P ≥ M u k mX sc ( T ) + C T δ M κ k u k mX sc ( T ) ≤ C ( R + ǫ ) + C a m + C T δ M κ ρ m ≤ ρ (8.14)hold. In the similar manner as the proof of the above estimates (8.14), the inequalities k P ≥ M Ψ [ u ] k X sc ( T ) ≤ C ǫ + C a m + C T δ M κ ρ m ≤ a (8.15)hold. The estimates (8.14)-(8.15) imply that the nonlinear mapping Ψ is well defined from X s c ( T ; ρ, a )68o itself. By the identity (8.10), in the similar manner as the proof of the estimate (8.14), the inequalities k Ψ [ u ] − Ψ [ w ] k X sc ( T ) ≤ m X k = | C k | (cid:13)(cid:13)(cid:13)(cid:13) I h ∂ γ x (cid:16) u k u m − k − w k w m − k (cid:17)i(cid:13)(cid:13)(cid:13)(cid:13) X sc ( T ) ≤ C s m X k = | C k | (cid:13)(cid:13)(cid:13)(cid:13) ∂ γ x (cid:16) u k u m − k − w k w m − k (cid:17)(cid:13)(cid:13)(cid:13)(cid:13) Y sc ( T ) ≤ C s m X k = | C k | k X i = (cid:13)(cid:13)(cid:13)(cid:13) ∂ γ x n ( u − w ) u m − k u k − i w i − o(cid:13)(cid:13)(cid:13)(cid:13) Y sc ( T ) + m − k X i = (cid:13)(cid:13)(cid:13)(cid:13) ∂ γ x n ( u − w ) u m − k − i w k w i − o(cid:13)(cid:13)(cid:13)(cid:13) Y sc ( T ) ≤ C s m X k = | C k | m X i = (cid:16) T δ M κ k u k m − iX sc ( T ) k w k i − X sc ( T ) + k P ≥ M u k m − iX sc ( T ) k P ≥ M w k i − X sc ( T ) (cid:17) k u − w k X sc ( T ) ≤ (cid:16) C a m − + C T δ M κ ρ m − (cid:17) k u − w k X sc ( T ) ≤ k u − w k X sc ( T ) (8.16)hold. This implies that the nonlinear mapping Ψ is a contraction mapping. Thus by the contractionmapping principle, we see that there exists a unique solution u ∈ X s c ( T ; ρ, a ) to (1.1)-(1.12).(Uniqueness): We note that for any w ∈ X s c ( T ), the identity lim M →∞ k P ≥ M w k X sc ( T ) = u , w ∈ u ∗ + B ǫ ( H s c ( R )) and u , w ∈ X s c ( T ) be the correspondingsolutions given by the Existence part. In the similar manner as the proof of the estimate (8.16), theestimate k u − w k X sc ( T ) ≤ C k u − w k H sc ( R ) + k u − w k X sc ( T ) hold, which implies that the flow map u ∗ + B ǫ ( H s c ( R )) X s c ( T ), u u is Lipschitz continuous. (cid:3) A Derivation of an important 4NLS model with third order derivativenonlinearities
In this appendix, we derive the important 4NLS model with third order derivative nonlinearities ( γ = n =
2) of the derivative nonlinear Schr¨odinger (DNLS) hierarchy(1.6). To describe the DNLS hierarchy (1.6) more precisely, we give the definitions of several notations.For a complex-valued function u = u ( x ) on R , we introduce a C -valued function U = U ( x ) on R definedby U : = ( u , u ) T . Let σ be the third Pauli matrix given by σ : = − ! . For a C -valued smooth function ( v , w ) T on R , we introduce a first order di ff erential operator D definedby D vw ! ( x ) : = σ ddx vw ! ( x ) = v x ( x ) − w x ( x ) ! . C -valued smooth function ( v , w ) T decaying 0 as | x | → ∞ , we introduce a linear operator D defined by D vw ! ( x ) : = − U ( x ) Z ∞ x U ( y ) ∗ ddy v ( y ) w ( y ) ! dy = − u ( x ) u ( x ) ! Z ∞ x n u ( y ) v y ( y ) + u ( y ) w y ( y ) o dy . We note that for a smooth function u = u ( x ) decaying 0 as | x | → ∞ , the identity D U ( x ) = | u ( x ) | u ( x ) | u ( x ) | u ( x ) ! holds for any x ∈ R . Indeed, this identity follows from the following identities − Z ∞ x n u ( y ) u y ( y ) + u ( y ) u y ( y ) o = − Z ∞ x ddy | u ( y ) | dy = | u ( x ) | . For a smooth C -valued function ( v , w ) T decaying 0 as | x | → ∞ , we define the recursion operator Λ givenby Λ vw ! : = i D + i D ) vw ! . (A.1)By using this operator, we can write the n -th of the derivative nonlinear Schr¨odinger hierarchy as i ∂ t U ( t , x ) + ∂ x n ( − i Λ ) n − U o ( t , x ) = , ( t , x ) ∈ R × R , (A.2)where U = U ( t , x ) = (cid:16) u ( t , x ) , u ( t , x ) (cid:17) T is a smooth solution decaying 0 as | x | → ∞ and n ∈ N .In the following, we only consider the case of n =
2. By a simple calculation, the identity( − i Λ ) = ( D + i D ) = D − D D − D ( D D + D D ) + i n D ( D D + D D ) + D D − D o holds. By a simple calculation and taking the first component of the equation (A.2), we can derive theequation (1.1) with (1.5). Acknowedgements
The authors express deep gratitude to Professor Hervert Koch for many useful suggestions and com-ments. They also deeply grateful to Professor Kenji Nakanishi and Dr. Yohei Yamazaki for pointingout the completely integrable structure for the fourth order Schr¨odinger equation with third-order deriva-tive nonlinearities. The first author is supported by Grant-in-Aid for Young Scientists Research (B)No.17K14220 and Program to Disseminate Tenure Tracking System from the Ministry of Education,Culture, Sports, Science and Technology. The second author is supported by JST CREST Grant Num-ber JPMJCR1913, Japan and Grant-in-Aid for Young Scientists Research (B) No.15K17571 and YoungScientists Research (No.19K14581), Japan Society for the Promotion of Science. The third author wassupported by RIKEN Junior Research Associate Program.
References [1] G. P. Agrawal,
Nonlinear Fiber Optics , 3re ed. (Academic Press, 2001)702] I. Bejenaru, A. D. Ionescu, C. E. Kenig, D. Tataru
Global Schr¨odinger maps in dimensions d ≥ :Small data in the critical Sobolev spaces , Ann. of Math., (2011), 1443–1506.[3] M. Ben-Artzi, H. Koch, J.C. Saut, Dispersion estimates for fourth order Schr¨odinger equations , C.R. Acad. Sci., Serie 1 (2000), 87–92.[4] T. Cazenave, “Semilinear Schr¨odinger equations,” Courant Lecture Notes in Mathematics , NewYork University, Courant Institute of Mathematical Sciences, New York; American MathematicalSociety, Providence, RI, 2003.[5] J. Colliander, M. Keel, G. Sta ffi lani, H. Takaoka, T. Tao Global well-posedness and scattering forthe energy-critical nonlinear Schr¨odinger equation in R , Ann. of Math., (2008), 767–865.[6] C. Cuo, S. Sun, H. Ren, The local well-posedness for nonlinear fourth-order Schr¨odinger equationwith mass-critical nonlinearity and derivative , Boundary Value Problems, (2014), page 11pp.[7] C. Hao, L. Hsiao, B. Wang,
Well-posedness of Cauchy problem for the fourth order nonlinearSchr¨odinger equations in multi-dimensional spaces , J. Math. Anal. Appl., (2007), 58–83.[8] Y. Fukumoto,
Motion of a curve vortex filament: Higher-order asymptotics , in ”Proc. of IU-TAM Symposium on Geometry and Statistics of Turbulence” (eds. T. Kambe, T. Nakano and T.Miyauchi), (2001), 211-216, Kluwer.[9] Y. Fukumoto,
Three dimensional motion of a vortex filament and its relation to the localized induc-tion hierarchy , Eur. Phys. J.,
B. 29 (2002), 167–171.[10] Y. Fukumoto, H. K. Mo ff atto Motion and expansion of a viscous vortex ring. Part I. A higher-orderasymptotic formula for the velocity , J. Fluid. Mech., (2000), 1–45.[11] V. S. Gerdjikov, M. I. Ivanov, P. P. Kulish
Quadratic bundle and nonlinear equations , Theor. Math.Phys., (1980), 342. (In Russian)[12] N. Hayashi, P.I.Naumkin, Asymptotic properties of solutions to dispersive equation of Schr¨odingertype , J. Math. Soc. Japan, (2008), 631–652.[13] N. Hayashi, P.I.Naumkin, Global existence and asymptotic behavior of solutions to the fourth-ordernonlinear Schr¨odinger equation in the critical case , Nonlinear Anal., (2015), 112–131.[14] N. Hayashi, P.I.Naumkin,
Factorization technique for the fourth-order nonlinear Schr”odingerequation , Z. Angew. Math. Phys., (2015), 2343–2377.[15] N. Hayashi, P.I.Naumkin, Large time asymptotics for the fourth-order nolinear Schr¨odinger equa-tion , J. Di ff . Equs., (2015), 880–905.[16] H. Hirayama, M. Okamoto, Well-posedness and scattering for fourth order nonlinear Schr¨odingertype equations at the scaling critical regularity , Commun. Pure. Appl. Anal., (2016), 831–851.[17] Z. Huo, Y. Jia, The Cauchy problem for the fourth-order nonlinear Schr¨odinger equation related tothe vortex filament , J. Di ff erential Equations, (2005), 1–35.[18] Z. Huo, Y. Jia, A refined well-posedness for the fourth-order nonlinear Schr¨odinger equation relatedto the voltex filament , Communications in Partial Di ff erential Equations, (2007), 1493–1510.7119] M. Ikeda, N. Kishimoto, M. Okamoto, Well-posedness for a quadratic derivative nonlinearSchr¨odinger system at the critical regularity , J. Funct. Anal., (2016), 747–798.[20] Z. Huo, Y. Jia,
Well-posedness for the fourth-order nonlinear derivative Schr¨odinger equation inhigher dimension , J. Math. Pures Appl., (2011), 190–206.[21] V. I. Karpman, Stabilization of soliton instabilities by higher-order dispersion: fourth order non-linear Schr¨odinger-type equations , Phys. Rev. E., (1996), 1336–1339.[22] V. I. Karpman, A. G. Shagalov, Stability of soliton described by nonlinear Schr¨odinger-type equa-tions with higher-order dispersion , Phys. Rev. D., (2000), 194–210.[23] T. Kato,
On the Cauchy problem for the (generalized) Korteweg-de Vries equation , in applied math-ematics, 93–128, Adv. Math. Suppl. Stud., , Academic Press, New York, 1983.[24] C. E. Kenig, G. Ponce and L. Vega, Oscillatory integrals and regularity of dispersive equations ,Indiana. Univ. Math. J., (1991), 33–69.[25] C. E. Kenig, A. Ruiz, A strong type (2 . estimate for the maximal function associated to theSchr¨odinger equation , Trans. Amer. Math. Soc., (1983), 239–246.[26] M. A. Keel, T. Tao, Endpoint Strichartz estimates , Amer. J. Math., (1998), 955–980.[27] J. Langer, R. Perline,
Poisson geometry of the filament equation , J. Nonlinear Sci., (1991), 71–93.[28] E. Mjøhus and T. Hada, Nonlinear Waves and Chaos in Space Plasmas , edited by T. Hada and H.Matsumoto (Terrapub, Tokyo, 1997), p.121.[29] B. Pausader,
Global well-posedness for energy critical fourth-order Schr¨odinger equations in theradial case , Dyn. Partial Di ff er. Equ., (2007), 197–225.[30] D. Pornnopparath, Small data well-posedness for derivative nonlinear Schr¨odinger equations , J.Di ff . Equs., (2018), 3792–3840.[31] J. Segata, Well-posedness for the fourth order nonlinear Schr¨odinger type equation related to thevortex filament , Di ff . Int. Equs., (2003), 841–864.[32] J. Segata, Remark on well-posedness for the fourth order nonlinear Schr¨odinger type equation ,Proc. Amer. Math. Soc., (2004), 3559–3568.[33] E. M. Stein,
Harmonic Analysis , Princeton University Press, 1993.[34] M. Ruzhansky, B. Wang, H. Zhang,
Global well-posedness and scattering for the fourth ordernonlinear Schr¨odinger equations with small data in modulation and Sobolev spaces , J. Math. PuresApp., (2016), 31–65.[35] T. Tao,
Nonlinear dispersive equation. Local and global analysis. , CBMS Regional ConferenceSeries in Mathematics, 106, American Mathematical Society, Providence, RI, 2006.[36] T. Tao,