Maximal estimate for average over space curve
arXiv [math.CA], Feb

MAXIMAL ESTIMATE FOR AVERAGE OVER SPACE CURVE
HYERIM KO, SANGHYUK LEE, AND SEWOOK OH
Abstract.
Let M be the maximal operator associated to a smooth curve in R^3 which has nonvanishing curvature and torsion. We prove that M is bounded on L^p if and only if p > 3.

1. Introduction
Let γ be a smooth curve defined from the interval J := [−1, 1] to R^3. We consider the average Af over the dilations of γ which is given by

  Af(x, t) = ∫ f(x − tγ(s)) ψ(s) ds,  t > 0.

Here ψ is a smooth function with supp ψ ⊂ (−1, 1). We assume γ has nonvanishing curvature and torsion; equivalently,

(1.1)  det(γ′(s), γ′′(s), γ′′′(s)) ≠ 0

for s ∈ J. The condition is the natural nondegeneracy condition which is commonly used in the studies related to space curves, and the most typical examples are the helix and the moment curve (s, s², s³). In this paper we are concerned with L^p boundedness of the maximal operator

  Mf(x) = sup_{t>0} |Af(x, t)|.
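As a quick sanity check of the nondegeneracy condition (1.1), the determinant can be computed explicitly for the two model examples just mentioned; this verification is an added illustration and not part of the original text.

```latex
% Moment curve \gamma(s) = (s, s^2, s^3):
% \gamma'(s) = (1, 2s, 3s^2), \quad \gamma''(s) = (0, 2, 6s), \quad \gamma'''(s) = (0, 0, 6),
\det\begin{pmatrix} 1 & 2s & 3s^2 \\ 0 & 2 & 6s \\ 0 & 0 & 6 \end{pmatrix} = 1 \cdot 2 \cdot 6 = 12 \neq 0 .
% Helix \gamma(s) = (\cos s, \sin s, s): expanding the analogous determinant along the
% third column gives \cos^2 s + \sin^2 s = 1 \neq 0.
% Hence (1.1) holds on all of J for both curves.
```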
The study of such maximal operators goes back to Stein's celebrated spherical maximal theorem, which asserts that the spherical maximal operator is bounded on L^p(R^d) if and only if p > d/(d − 1) for d ≥ 3. The case d = 2 was later proved by Bourgain [5]. As it turned out, the problem became more difficult for the circle or the curves with nonvanishing curvature in R², since the typical interpolation argument relying on the L² estimate no longer works. In such cases the maximal estimates were obtained by implementation of a geometric incidence property ([5, 30, 27, 28]) or by utilizing the local smoothing property of the averaging operator [18, 29, 14].

Concerning the maximal average over the curve in three or higher dimensional spaces, the L^p boundedness is naturally expected to be even harder to prove since the Fourier transform of the measure supported on a space curve has slower decay. L^p boundedness of such maximal operators has been of interest for a long time (see [23] for a historical comment) but no positive result was known until recently. It was Pramanik and Seeger [23] who proved for the first time that M is L^p bounded for p > 38. (Also see [20, 21, 23, 24] for the developments related to the L^p Sobolev estimate for f → Af(·, t).) Their result was obtained by relying on Wolff's sharp ℓ^p decoupling inequality for the cone in R³ [33]. More precisely, it was shown in [23] that the maximal operator M is bounded on L^p for p > (p◦ + 2)/2 if the ℓ^p decoupling inequality holds for p > p◦. Combined with the recent ℓ^p decoupling inequality on the optimal range p ≥ 6, this gives L^p boundedness for p > 4. However, a modification of Stein's example in [32] shows that M cannot be bounded on L^p for p ≤ 3. The following theorem completely characterizes the L^p boundedness of M.

Mathematics Subject Classification.
Key words and phrases. Maximal estimate, space curve.

Theorem 1.1.
Suppose that γ : J → R³ is a smooth curve which has nonvanishing curvature and torsion, and ψ is a nontrivial, nonnegative, smooth function supported in (−1, 1). Then, there is a constant C such that

(1.2)  ‖Mf‖_{L^p(R³)} ≤ C‖f‖_{L^p(R³)}

for all f ∈ L^p(R³) if and only if p > 3.

The assumption that ψ is smooth is not necessary, and it is clear that the theorem holds true for a continuous ψ. Even though γ is assumed to be smooth, there is a positive integer D such that (1.2) holds for γ ∈ C^D(J) (see Remark 1).

The maximal estimate in [23] was shown by exploiting L^p local smoothing phenomena of the averaging operator. However, compared with the average over hypersurfaces or curves in R², the L^p local smoothing property of A is not well understood. We instead try to make use of L^p–L^q type smoothing estimates which have a close connection to the adjoint restriction estimate. Usefulness of such estimates has been manifested in the study of the L^p improving property of the localized circular and spherical maximal functions [29, 14] (also see [1, 25, 3]).

Our argument in this paper is closely related to the induction strategy developed by Ham and one of the authors [11]. They obtained the sharp adjoint restriction estimate for the space curve in L^p(μ) when μ is an α-dimensional measure (see Section 2.1 for the definition). The work was in turn inspired by the multilinear approach due to Bourgain and Guth [7]. The main novelty of the current paper lies in devising an induction argument which directly works for the maximal operator. In contrast to the adjoint restriction operator, a suitable form of multilinear estimate is not so obvious for the averaging operator A.
In order to prove a multilinear estimate for A which enjoys a better boundedness property under a certain additional assumption, we first express the operator A as a sum of adjoint restriction operators and then relate them to the geometry of the curves, so that the transversality condition can be reformulated in terms of the relative positions between the associated curves. Unfortunately, some of the consequent adjoint restriction operators are associated to C^{1,1/2} surfaces but not to C² surfaces, so we cannot directly apply the multilinear restriction estimate which is due to Bennett, Carbery, and Tao [4]. However, it is not difficult to see that the argument in [4] continues to work for the C^{1,1/2} surfaces (see Theorem 3.6 below). We also make use of some of the results from [23] to strengthen the multilinear estimate and also to deal with the nondegenerate part, whereas the difficult degenerate part is to be handled by the multilinear estimate which we prove in Section 3.

The argument here can be further developed to prove not only the L^p improving property of the maximal operator sup_{1≤t≤2} |Af(x, t)| but also maximal estimates with respect to α-dimensional measures (see Remark 2). Nonetheless, we do not attempt to pursue the matter in this paper.

Structure of the paper.
In Section 2 we show that the maximal estimate can be deduced from a form of weighted estimates, and we formalize the induction setup to prove the weighted estimates. In Section 3 we obtain a weighted multilinear estimate for A under a certain separation condition. In Section 4 we establish the maximal bound by putting the previous estimates together and show the optimality of the range of p.

2. Reductions and preliminaries
In this section we reduce the proof of the maximal estimate to showing a form of weighted estimates for the averaging operators which are given by the curves close to a specific curve. We also obtain some preparatory results which are to be used to prove the estimates in Section 3 and Section 4.

By the argument in [5] (also see [26]), which relies on the Littlewood–Paley decomposition and scaling, one can obtain the maximal estimate (1.2) from that for sup_{1≤t≤2} |Af(x, t)|. More precisely, it is sufficient to show that there is an ε_p > 0 such that

(2.1)  ‖Af‖_{L^p_x L^∞_t (R³×[1,2])} ≤ Cλ^{−ε_p} ‖f‖_{L^p(R³)}

for all f ∈ S(R³) whenever

(2.2)  supp f̂ ⊂ A_λ := {ξ ∈ R³ : 3λ/4 ≤ |ξ| ≤ 7λ/4},  λ ≥ 1.

For the rest of the paper, we assume (2.2) unless it is mentioned otherwise.
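The reduction from (1.2) to the single-scale estimate (2.1) can be summarized schematically as follows; this sketch is added for the reader's convenience, the constants are indicative only, and the rigorous argument is the one of [5], [26] cited above.

```latex
% Decompose f = f_0 + \sum_{j \ge 1} f_j with \widehat{f_j} supported in A_{2^j}.
% By the scaling t \to 2^k t, the estimate (2.1) at the relative frequency scale is
% invariant, so the bound over t \in [1,2] self-improves to every dyadic block of t > 0:
\sup_{t>0} |Af_j(x,t)|
  \le \Big( \sum_{k \in \mathbb{Z}} \sup_{2^k \le t \le 2^{k+1}} |Af_j(x,t)|^p \Big)^{1/p}.
% Taking L^p_x norms and applying (2.1) (rescaled) to each block yields, for p > 3,
\big\| \sup_{t>0} |Af_j(\cdot,t)| \big\|_{L^p} \lesssim 2^{-j\varepsilon_p} \|f\|_{L^p},
% and the low-frequency part f_0 is elementary; summing the geometric series in j gives
\| Mf \|_{L^p} \lesssim \sum_{j \ge 0} 2^{-j\varepsilon_p}\, \|f\|_{L^p} \lesssim \|f\|_{L^p}.
```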
Notation.
Throughout the paper C, C₁, C₂, ... and c denote independent positive constants, and C_ε, C_δ are constants depending on ε, δ; all of these constants may vary at each appearance. In addition to the conventional notation ̂·, we use F and F^{−1} to denote the Fourier and inverse Fourier transforms, respectively. By Q₁ = O(Q₂) we denote |Q₁| ≤ CQ₂ for a constant C, and we also use the notation Q₁ = O_s(Q₂) if |Q₁| ≤ Q₂.

2.1. Estimate with α-dimensional measure. Let B^d(z, r) denote the ball of radius r which is centered at z ∈ R^d. Let μ be a positive Borel measure on R³ × R. For 0 < α ≤ 4, we say μ is α-dimensional if there is a constant C such that

  μ(B⁴(z, r)) ≤ Cr^α

for all r > 0 and z ∈ R⁴. For an α-dimensional measure μ we define

  ⟨μ⟩_α = sup_{z ∈ R⁴, r > 0} r^{−α} μ(B⁴(z, r)).

Instead of directly proving the maximal estimate (2.1) we obtain estimates for Af with α-dimensional measures. From those estimates we can deduce the estimate (2.1). As far as the authors are aware, it seems that this type of argument deducing the maximal estimate from the estimates with α-dimensional measures first appeared in [19]. (See also [33, p. 1283] for a related discussion.)

Theorem 2.1.
Let μ be 3-dimensional. Suppose that γ : J → R³ is a smooth curve satisfying (1.1). Then, for p > 3 there is an ε_p > 0 such that

(2.3)  ‖Af‖_{L^p(R³×[1,2], dμ)} ≤ C⟨μ⟩₃^{1/p} λ^{−ε_p} ‖f‖_{L^p(R³)}

whenever f̂ is supported in A_λ.

We shall work only with 3-dimensional measures even though it is possible to prove such estimates with α-dimensional measures, α ≠ 3, on a certain range of p (see Remark 2). The following shows that the estimate (2.3) implies (2.1).

Lemma 2.2.
Suppose (2.3) holds true for any 3-dimensional measure μ. Then the estimate (2.1) holds.

To prove this, we start with an elementary lemma.
Lemma 2.3.
Let η ∈ C^∞([2^{−1}, 2]) and ψ ∈ C^∞(J). Set r = 1 + 4 max{|γ(s)| : s ∈ supp ψ} and

  K_η(x, t) = (2π)^{−3} ∫∫ e^{i(x·ξ − tγ(s)·ξ)} ψ(s) ds η(λ^{−1}|ξ|) dξ.

If |x| ≥ r and |t| ≤ 2, then |K_η(x, t)| ≤ C‖η‖_{C^{N+3}} E_N(x) for any N ≥ 1, where E_N(x) := λ^{3−N}(1 + |x|)^{−N}.

Changing variables ξ → λξ, we note K_η(x, t) = λ³(2π)^{−3} ∫∫ e^{iλ(x·ξ − tγ(s)·ξ)} ψ(s) ds η(|ξ|) dξ. Then repeated integration by parts in ξ gives the desired estimate since |∇_ξ(x·ξ − tγ(s)·ξ)| ≥ 2^{−1}|x| if |x| ≥ r and |t| ≤ 2.

Proof of Lemma 2.2.
To obtain (2.1) it suffices to show the local estimate

(2.4)  ‖Af‖_{L^p_x L^∞_t (B(0,1)×[1,2])} ≤ Cλ^{−ε_p} ‖f‖_{L^p(R³)}.

This would be obvious if f were assumed to be supported in a ball of radius r; even though f̂ is only assumed to be supported in A_λ, we may handle f as if it were supported on a ball of radius r. Indeed, since supp f̂ ⊂ A_λ, Af(·, t) = K_η(·, t) ∗ f for an η such that η ∈ C_c^∞((2^{−1}, 2)), η = 1 on [3/4, 7/4], and |K_η(x, t)| ≤ CE_N(x) if |x| ≥ r and |t| ≤ 2. Thus, by the typical localization argument (e.g., see the proof of Lemma 3.10) one can easily see that (2.4) implies (2.1).

In order to prove (2.4), using the Kolmogorov–Seliverstov–Plessner linearization, it is enough to show

(2.5)  ‖Af(·, t(·))‖_{L^p(B(0,1))} ≤ Cλ^{−ε_p} ‖f‖_{L^p(R³)}

for a measurable function t : B(0,1) → [1, 2] with C independent of t. Since f̂ is supported in A_λ, Af is uniformly continuous on every compact subset. So, for (2.4) we may assume that t is continuous. With a continuous function t, the positive linear functional C_c(R⁴) ∋ F ↦ ∫_{B(0,1)} F(x, t(x)) dx defines a measure μ by the relation

  ∫ F(x, t) dμ(x, t) = ∫_{B(0,1)} F(x, t(x)) dx,  F ∈ C_c(R⁴).

(In fact, μ becomes a regular Borel measure by the Riesz–Markov–Kakutani representation theorem.) We now notice that μ is a 3-dimensional measure. Since B((x◦, t◦), r) ⊂ {(x, t) ∈ R³ × R : |x − x◦| ≤ r},

  μ(B((x◦, t◦), r)) = ∫_{B(0,1)} χ_{B((x◦,t◦),r)}(x, t(x)) dx ≤ ∫ χ_{B(x◦,r)}(x) dx = (4/3)πr³

for any r > 0 and (x◦, t◦) ∈ R³ × R. Thus we have ⟨μ⟩₃ ≤ 4π/3. Noting ‖Af(·, t(·))‖_{L^p(B(0,1))} = ‖Af‖_{L^p(dμ)}, we apply Theorem 2.1 and get (2.5) with C independent of t. □

2.2. Weighted estimate.
For 0 < α ≤ 4, we denote by Ω_α the collection of nonnegative measurable functions ω on R³ × R such that the measure ω dxdt is α-dimensional. For a simpler notation we denote

  [ω]_α = ⟨ω dxdt⟩_α

for ω ∈ Ω_α. Even though Ω_α is properly contained in the set of α-dimensional measures, the fact that supp f̂ ⊂ A_λ allows us to recover the estimate (2.3) from an estimate against ω ∈ Ω_α.

Lemma 2.4.
Let I = [2^{−1}, 4]. Suppose that

(2.6)  ‖Af‖_{L^p(R³×I, ω)} ≤ C[ω]₃^{1/p} λ^{−ε_p} ‖f‖_{L^p(R³)}

whenever ω ∈ Ω₃ and f̂ is supported in A_λ. Then (2.3) holds for any 3-dimensional measure μ.

The proof of the maximal estimate (2.1) is now reduced to showing (2.6). Lemma 2.4 of course remains valid for any α ∈ (0, 4].

Lemma 2.5.
Let 0 < α ≤ 4 and φ ∈ S(R⁴). Set φ_λ = λ⁴φ(λ·). If μ is an α-dimensional measure, then |φ|_λ ∗ μ ∈ Ω_α and [|φ|_λ ∗ μ]_α ≤ C_φ⟨μ⟩_α.

In what follows, χ̃ denotes a function in C^∞(I) which satisfies χ̃ = 1 on [1, 2], and β, β₀ respectively denote the functions such that β ∈ C^∞([2^{−1}, 2]), β = 1 on [3/4, 7/4], and β₀ ∈ C^∞([−1, 1]), β₀ = 1 on [−1/2, 1/2].

Lemma 2.6.
Let r = 1 + 4 max{|γ(s)| : s ∈ supp ψ} and let

  m(ξ, τ) = ∫∫ χ̃(t) e^{−it(τ + γ(s)·ξ)} ψ(s) ds dt β(λ^{−1}|ξ|),  (ξ, τ) ∈ R³ × R.

Then, we have |F^{−1}(m(ξ, τ)(1 − β₀((λr)^{−1}τ)))| ≤ C_N‖ψ‖_∞ Ẽ_N^t for any N > 0, where Ẽ_N^t := (1 + |t|)^{−N}E_N.

Proof. Let ρ_ℓ(t) = (−it)^{k+ℓ}χ̃(t) and note that ∂_ξ^α ∂_τ^k m(ξ, τ) is a sum of the terms ∫ ρ̂_{|α₁|}(τ + γ(s)·ξ)(γ(s))^{α₁}ψ(s) ds × O(λ^{−|α₂|}) with α₁ + α₂ = α. Thus we have |∂_ξ^α ∂_τ^k m(ξ, τ)| ≤ C_N‖ψ‖_∞ r^{|α|}(rλ)^{−N}(1 + |τ|)^{−N} for any N if |τ| ≥ 2^{−1}rλ, and we get the desired estimate by routine integration by parts. □

Proof of Lemma 2.4.
We define an auxiliary operator Ã by

  F(Ãh)(ξ, τ) = β₀((λr)^{−1}τ) F(χ̃(t)Ah)(ξ, τ).

Since f̂ is supported in A_λ, we have |(χ̃(t)A − Ã)f| ≤ C Ẽ_N^t ∗ |f| by Lemma 2.6. We then note that ∫ Ẽ_N^t(x − y) dμ(x, t) ≤ Cλ^{3−N}⟨μ⟩₃ and ∫ Ẽ_N^t(x − y) dy ≤ Cλ^{3−N}. Thus by Schur's test we get

(2.7)  ‖Ẽ_N^t ∗ f‖_{L^p(R³×R, dμ)} ≤ C⟨μ⟩₃^{1/p} λ^{3−N} ‖f‖_{L^p(R³)}

for 1 ≤ p ≤ ∞ and a large N. So, in order to obtain (2.3), it suffices to prove

(2.8)  ‖Ãf‖_{L^p(R³×[1,2], dμ)} ≤ C⟨μ⟩₃^{1/p} λ^{−ε_p} ‖f‖_{L^p(R³)}.

Since the space-time Fourier transform of Ãf is supported in B(0, 2rλ), we have Ãf = Ãf ∗ φ_{2rλ} for some φ ∈ S(R⁴), which gives |Ãf|^p ≤ C|Ãf|^p ∗ |φ|_{2rλ} via Hölder's inequality. Thus we have

  ‖Ãf‖_{L^p(R³×[1,2], dμ)} ≤ C‖Ãf‖_{L^p(R³×R, ω)},

where we set ω = |φ|_{2rλ} ∗ μ. Therefore, using |(χ̃(t)A − Ã)f| ≤ C Ẽ_N^t ∗ |f| again, we have only to obtain the estimate for χ̃(t)Af in L^p(R³×R, ω), since the minor part can be handled as before. Since [ω]₃ ≤ C⟨μ⟩₃ by Lemma 2.5, the estimate (2.8) follows from (2.6) because supp χ̃ ⊂ I. □

2.3. Normalization of curves and weights.
In order to prove the estimate (2.6), as mentioned before, we use an induction type argument over a class of curves. For the purpose we need to normalize the curves properly so that the induction assumption applies. This step is important especially for defining the induction quantity and proving uniform estimates (cf. [11, 15]).

Let D ≥ 3 be a positive integer which is taken to be large. Let γ ∈ C^D(J) which satisfies (1.1). Then, for s◦ and 0 < δ ≪ 1 such that [s◦ − δ, s◦ + δ] ⊂ J, we define

  M_γ^δ(s◦) = (δγ′(s◦), δ²γ′′(s◦), δ³γ′′′(s◦))

and

(2.9)  γ_{s◦}^δ(s) = (M_γ^δ(s◦))^{−1}(γ(δs + s◦) − γ(s◦)).

Let γ◦(s) = (s, s²/2, s³/6). The rescaled curve γ_{s◦}^δ gets close to γ◦ in C^D(J) as δ → 0. For ε◦ > 0, we set

  C^D(ε◦) = {γ ∈ C^D(J) : ‖γ − γ◦‖_{C^D(J)} ≤ ε◦}.

Using an affine map, one can transform a small enough sub-curve of any γ ∈ C^D(J) satisfying (1.1) so as to be contained in C^D(ε◦). The following lemma is a slight modification of [11, Lemma 2.1].

Lemma 2.7.
Let s◦ ∈ (−1, 1) and γ ∈ C^D(J) satisfying (1.1) on J. Then, for any ε◦ > 0, there exists δ* = δ*(ε◦, γ) > 0 such that γ_{s◦}^δ ∈ C^D(ε◦) whenever [s◦ − δ, s◦ + δ] ⊂ J and |δ| ≤ δ*. Additionally, if γ ∈ C^D(ε◦) and ε◦ < 2^{−1}, there is a uniform δ◦ > 0 such that γ_{s◦}^δ ∈ C^D(ε◦) whenever [s◦ − δ, s◦ + δ] ⊂ J with |δ| ≤ δ◦.

For a matrix M we denote ‖M‖ = sup_{|z|=1} |Mz|.

Proof.
By Taylor's expansion, we have

  γ(δs + s◦) − γ(s◦) = δγ′(s◦)s + δ²γ′′(s◦)s²/2 + δ³γ′′′(s◦)s³/3! + R̃(s◦, δ, s) = M_γ^δ(s◦)γ◦(s) + R̃(s◦, δ, s)

and ‖R̃(s◦, δ, ·)‖_{C^D(J)} ≤ Cδ⁴. By (2.9),

  γ_{s◦}^δ(s) = γ◦(s) + (M_γ^δ(s◦))^{−1}R̃(s◦, δ, s).

Since ‖(M_γ^δ(s◦))^{−1}‖ ≤ C₁δ^{−3} for a constant C₁, taking a positive δ* such that CC₁δ* ≤ ε◦ we have ‖(M_γ^δ(s◦))^{−1}R̃(s◦, δ, ·)‖_{C^D(J)} ≤ ε◦ and, hence, γ_{s◦}^δ ∈ C^D(ε◦) for 0 < δ ≤ δ*. The second assertion can also be shown in the same manner, so we omit the detail. □
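To see why γ◦(s) = (s, s²/2, s³/6) is the natural model curve for this renormalization, one may check that γ◦ is exactly invariant under (2.9); this verification is an added illustration and is not part of the original text.

```latex
% For \gamma = \gamma_\circ(s) = (s, s^2/2, s^3/6) the Taylor expansion is exact
% (the curve is a cubic polynomial), so the remainder \widetilde{R} vanishes identically:
\gamma_\circ(\delta s + s_\circ) - \gamma_\circ(s_\circ)
  = \delta\gamma_\circ'(s_\circ)\,s + \delta^2\gamma_\circ''(s_\circ)\,\frac{s^2}{2}
    + \delta^3\gamma_\circ'''(s_\circ)\,\frac{s^3}{6}
  = \mathrm{M}^{\delta}_{\gamma_\circ}(s_\circ)\,\gamma_\circ(s).
% Hence (\mathrm{M}^{\delta}_{\gamma_\circ}(s_\circ))^{-1}
%   \big(\gamma_\circ(\delta s + s_\circ) - \gamma_\circ(s_\circ)\big) = \gamma_\circ(s),
% i.e., (\gamma_\circ)^{\delta}_{s_\circ} = \gamma_\circ for every admissible s_\circ and \delta.
```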
For δ > 0, we denote by D_δ the diagonal matrix (δe₁, δ²e₂, δ³e₃). To normalize the weights we need the next lemma, which can be shown by the argument in [11].
Lemma 2.8.
Let 0 < α ≤ 4, 0 < δ ≪ 1, and ω ∈ Ω_α, and let M be a 4 × 4 nonsingular matrix. Set ω_δ(x, t) = ω(D_δ x, t) and ω_M(x, t) = ω(M(x, t)). Then for a constant C independent of ω and δ we have

(2.10)  [ω_δ]_α ≤ Cδ^{3α−12}[ω]_α,
(2.11)  [ω_M]_α ≤ |det M|^{−1}‖M‖^α [ω]_α.

Proof.
The inequality (2.10) is equivalent to

  ∫_{B(y,r)} ω(D_δ x, t) dxdt ≤ Cδ^{3α−12}[ω]_α r^α

for y ∈ R⁴ and r > 0. To see this, changing variables x → D_δ^{−1}x, the left hand side is equal to δ^{−6}∫χ_{B(y,r)}(D_δ^{−1}x, t)ω(x, t) dxdt. Then we note that the set {(x, t) : (D_δ^{−1}x, t) ∈ B(y, r)} is contained in a rectangle R_δ of dimensions about δr × δ²r × δ³r × r. Since R_δ is covered by at most Cδ^{−6} many balls of radius δ³r, the inequality follows.

For (2.11) we only have to show

  ∫_{B(y,r)} ω(M(x, t)) dxdt ≤ |det M|^{−1}‖M‖^α [ω]_α r^α

for y ∈ R⁴ and r > 0. Changing variables, we see that the left hand side equals |det M|^{−1}∫χ_{B(y,r)}(M^{−1}(x, t))ω(x, t) dxdt. So, we get the inequality since (x, t) ∈ B(My, ‖M‖r) if M^{−1}(x, t) ∈ B(y, r). □

2.4. Reduction and the induction quantity.
Throughout the paper we fix a small positive constant c◦. To show (2.6) for a smooth curve satisfying (1.1), it is sufficient to handle γ ∈ C^D(ε◦) with a small ε◦ > 0 and ψ ∈ C^D with supp ψ ⊂ [−c◦, c◦]. As we shall see later, this can be shown by a finite decomposition and changing variables via affine transformations.

Definition 2.1.
Let c◦, ε◦ and δ be the numbers such that 0 < c◦ ≤ 2^{−1}, 0 < ε◦ ≤ c◦, and

(2.12)  0 < δ ≤ min(c◦, δ◦),

where δ◦ is given in Lemma 2.7. The number δ is to be chosen later (see Section 4.1). We also denote J◦ = [−c◦, c◦] and

  J(δ) = {J : J = [c◦δ(k − 1), c◦δ(k + 1)], k ∈ Z, |k| ≤ (c◦δ)^{−1} + 1},

so that the intervals in J(δ) cover J. For each J ∈ J(δ) we define N_D(J) to be the set of functions ψ such that ψ ∈ C^D(J) and ‖ψ(|J|·)‖_{C^D(R)} ≤ 1. For a given interval J we denote by ψ_J a function in N_D(J).

For a smooth function a on J × I × A_λ, following [23], we define an integral operator by setting

(2.13)  A_γ[a]f(x, t) = (2π)^{−3} ∫∫ e^{i(x−tγ(s))·ξ} a(s, t, ξ) ds f̂(ξ) dξ.

In particular, we note Af = A_γ[ψ]f, as is clear by Fourier inversion.
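The identity Af = A_γ[ψ]f follows by inserting the Fourier inversion formula into the definition of Af; the short computation is spelled out below for convenience (an added remark, not part of the original text).

```latex
% Writing f(y) = (2\pi)^{-3}\int e^{i y\cdot\xi}\,\widehat{f}(\xi)\,d\xi with y = x - t\gamma(s),
Af(x,t) = \int f\big(x - t\gamma(s)\big)\,\psi(s)\,ds
        = (2\pi)^{-3}\int\!\!\int e^{i(x - t\gamma(s))\cdot\xi}\,\psi(s)\,ds\;\widehat{f}(\xi)\,d\xi
        = A_\gamma[\psi]f(x,t),
% i.e., (2.13) with the symbol a(s,t,\xi) = \psi(s).
```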
Let us take ζ ∈ C_c^∞([−1, 1]) such that ζ ≥ 0 and Σ_{k∈Z} ζ(s − k) = 1. For an interval J we denote by c_J the center of J and set ζ_J(s) = ζ(2(s − c_J)/|J|). Consequently, ζ_J ∈ C^∞(J) and Σ_{J∈J(δ)} ζ_J(s) = 1 for s ∈ J. As a result, we have

(2.14)  A_γ[ψ]f(x, t) = Σ_{J∈J(δ)} A_γ[ψζ_J]f(x, t)

if supp ψ ⊂ J◦. The following is one of the key lemmas, which relates the estimate for the average over a short curve to that over a larger one.

Lemma 2.9.
Let I′ ⊂ I be an interval, and let ω ∈ Ω₃, J = [s◦ − c◦δ, s◦ + c◦δ] ∈ J(δ), and ψ_J ∈ N_D(J). Suppose that γ ∈ C^D(J) satisfies (1.1) and supp f̂ ⊂ A_λ. Then, there are ω̃ ∈ Ω₃, f̃ with ‖f̃‖_p = ‖f‖_p, and ψ_{J◦} ∈ N_D(J◦) which satisfy the following:

(2.15)  ‖A_γ[ψ_J]f‖_{L^p(R³×I′, ω)} = δ^{1−3/p}‖A_{γ_{s◦}^δ}[ψ_{J◦}]f̃‖_{L^p(R³×I′, ω̃)},

(2.16)  [ω̃]₃ ≤ C(1 + |γ(s◦)|)³ |det M_γ(s◦)|^{−1} (1 + ‖M_γ(s◦)‖)³ [ω]₃,

and

(2.17)  supp F(f̃) ⊂ {ξ : (3/4)d_* δ³λ ≤ |ξ| ≤ (7/4)d^* δλ},

where 1/d_* = ‖(M_γ(s◦))^{−t}‖ and 1/d^* = inf_{|z|=1}|(M_γ(s◦))^{−t}z|.

Proof. We denote ψ_{J◦}(s) = ψ_J(δs + s◦). Then it is clear that ψ_{J◦} ∈ N_D(J◦). We set

  f̃(x) = |det(M_γ^δ(s◦))|^{1/p} f(M_γ^δ(s◦)x),

so ‖f̃‖_p = ‖f‖_p, and the Fourier transform of f̃ is supported in the set S_λ = {ξ : 3λ/4 ≤ |(M_γ^δ(s◦))^{−t}ξ| ≤ 7λ/4} because supp f̂ ⊂ A_λ. Since M_γ^δ(s◦) = M_γ(s◦)D_δ, it is easy to see that S_λ ⊂ {ξ : 3λd_*/4 ≤ |D_δ^{−1}ξ| ≤ 7λd^*/4}, thus we get (2.17).

We now define ω̄ and ω̃ by setting

  ω̄(x, t) = ω(x + tγ(s◦), t) and ω̃(x, t) = δ³ω̄(M_γ^δ(s◦)x, t),

respectively. Denoting by M the matrix such that M(x, t) = (x + tγ(s◦), t), we note that ω̄ = ω_M, det M = 1, and ‖M‖ ≤ 1 + |γ(s◦)|. Thus using (2.11) we have [ω̄]₃ ≤ (1 + |γ(s◦)|)³[ω]₃. Similarly, let M′ denote the matrix such that M′(x, t) = (M_γ(s◦)x, t). Then ω̃ = δ³(ω̄_{M′})_δ since M_γ^δ(s◦) = M_γ(s◦)D_δ. Using (2.10) and (2.11) we get [ω̃]₃ ≤ C|det M_γ(s◦)|^{−1}(1 + ‖M_γ(s◦)‖)³[ω̄]₃ since det M′ = det M_γ(s◦) and ‖M′‖ ≤ 1 + ‖M_γ(s◦)‖. Combining these two inequalities gives (2.16).

To complete the proof it remains to show (2.15). We note that A_γ[ψ_J]f(x, t) = δ∫f(x − tγ(s◦) − tM_γ^δ(s◦)γ_{s◦}^δ(s))ψ_J(δs + s◦) ds, changing variables s → δs + s◦ and using (2.9). We thus have

  A_γ[ψ_J]f(x, t) = δ|det M_γ^δ(s◦)|^{−1/p} ∫ f̃((M_γ^δ(s◦))^{−1}(x − tγ(s◦)) − tγ_{s◦}^δ(s)) ψ_{J◦}(s) ds.

Therefore the change of variables x → M_γ^δ(s◦)x + tγ(s◦) yields (2.15). □
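For instance, the way (2.11) was applied in the proof above is to the shear M(x, t) = (x + tv, t) with the fixed vector v = γ(s◦); the short computation below makes the bound explicit (an added illustration, not part of the original text).

```latex
% The shear \mathrm{M}(x,t) = (x + tv, t), v \in \mathbb{R}^3, has \det \mathrm{M} = 1 and
% \|\mathrm{M}\| \le 1 + |v|, since |(x + tv, t)| \le |(x,t)| + |t|\,|v| \le (1 + |v|)\,|(x,t)|.
% Hence (2.11) gives, for \omega \in \Omega_\alpha,
[\omega_{\mathrm{M}}]_\alpha \le |\det \mathrm{M}|^{-1}\,\|\mathrm{M}\|^{\alpha}\,[\omega]_\alpha
  \le (1 + |v|)^{\alpha}\,[\omega]_\alpha ,
% which with v = \gamma(s_\circ) and \alpha = 3 is the bound
% [\bar\omega]_3 \le (1 + |\gamma(s_\circ)|)^3 [\omega]_3 used for (2.16).
```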
Reduction.
Let γ ∈ C^D(J) be a curve satisfying (1.1). For a given ε◦ > 0 we take δ = δ*, where δ* is the number given in Lemma 2.7. We now apply (2.14) to ψ ∈ C^D(J) and then Lemma 2.9 to each interval J, so that we have

  ‖A_γ[ψ]f‖_{L^p(R³×I, ω)} ≤ δ^{1−3/p} Σ_{J∈J(δ)} ‖A_{γ_{c_J}^δ}[ψ_J]f̃_J‖_{L^p(R³×I, ω̃_J)},

where γ_{c_J}^δ ∈ C^D(ε◦) (by Lemma 2.7), [ω̃_J]₃ ≤ C_J[ω]₃, C^{−1}ψ_J ∈ N_D(J◦) for some constants C_J, C > 0, and f̃_J satisfies ‖f̃_J‖_p ≤ ‖f‖_p and supp F(f̃_J) ⊂ {ξ : (B_J)^{−1}λ ≤ |ξ| ≤ B_J λ} for a constant B_J. Since there are at most Cδ_*^{−1} many intervals, for the estimate (2.6) it is enough to obtain an estimate for each A_{γ_{c_J}^δ}[ψ_J]f̃_J against the weight ω̃_J. Hence, after decomposing f̃_J via the Littlewood–Paley projection and replacing ω̃_J with (C_J[ω]₃)^{−1}ω̃_J, in order to show (2.6) we need only to consider the curve γ ∈ C^D(ε◦) and the weight ω with [ω]₃ ≤ 1. Since A_γ[ψ](f(r̄·))(x, t) = A_γ[ψ]f(r̄x, r̄t) for r̄ > 0, by scaling after splitting I into three intervals [2^{−1}, 1], [1, 2] and [2, 4], it is sufficient to show

  ‖A_γ[ψ]f‖_{L^p(R³×[1,2], ω)} ≤ Cλ^{−ε_p}‖f‖_{L^p(R³)}

for [ω]₃ ≤ 1, γ ∈ C^D(ε◦), and ψ ∈ N_D(J◦) for some D.

Definition 2.2.
Fixing p, ε◦, D, for λ ≥ 1 we define Q(λ) by

  Q(λ) = sup{‖A_γ[ψ]f‖_{L^p(R³×[1,2], ω)} : γ ∈ C^D(ε◦), ψ ∈ N_D(J◦), [ω]₃ ≤ 1, supp f̂ ⊂ A_λ, ‖f‖_{L^p(R³)} ≤ 1}.

An elementary estimate gives Q(λ) ≤ Cλ^{3/p} for 1 ≤ p ≤ ∞.

Thanks to the discussion in the above and Lemma 2.4, Theorem 2.1 now follows from the next proposition, which we prove in Section 4.1.

Proposition 2.10.
For p ∈ (3, ∞), there are positive constants ε◦, D, ε_p, and C such that

(2.18)  Q(λ) ≤ Cλ^{−ε_p}.

In order to show (2.18) we need only to handle A_γ[ψ] with ψ ∈ N_D(J◦), which we decompose in the fashion of (2.14). Thus it suffices to work with the intervals J such that J ∩ J◦ ≠ ∅. We set

  J◦(δ) = {J ∈ J(δ) : J ⊂ (1 + 2c◦)J◦}.

What follows next is a consequence of Lemma 2.9, which plays an important role in proving (2.18).
Lemma 2.11.
Let J ∈ J◦(δ) and ψ_J ∈ N_D(J). Suppose γ ∈ C^D(ε◦), [ω]₃ ≤ 1, and supp f̂ ⊂ A_λ. If δ³λ ≥ 1 and ε◦ > 0 is sufficiently small, there is a constant C, independent of γ, ω, and ψ_J, such that

(2.19)  ‖A_γ[ψ_J]f‖_{L^p(R³×[1,2], ω)} ≤ Cδ^{1−3/p}K_δ(λ)‖f‖_{L^p(R³)},

where

  K_δ(λ) = Σ_{2^{−2}δ³λ ≤ 2^j ≤ 2²δλ} Q(2^j).

Proof. We denote J = [s◦ − c◦δ, s◦ + c◦δ]. Since γ ∈ C^D(ε◦), we have γ_{s◦}^δ ∈ C^D(ε◦) by Lemma 2.7 and our choice of δ, i.e., (2.12). Noting that s◦ ∈ (1 + 2c◦)J◦, γ ∈ C^D(ε◦) and ε◦ ≤ c◦, we see that |γ(s◦)| ≤ 2c◦ and ‖M_γ(s◦) − I‖ ≤ 5c◦. If we use Σ_{ℓ=0}^∞ (I − M_γ(s◦))^ℓ = (M_γ(s◦))^{−1}, it follows that ‖(M_γ(s◦))^{−1} − I‖ ≤ 5c◦/(1 − 5c◦). Since ‖M‖ = ‖M^t‖ for any matrix M, we get ‖(M_γ(s◦))^{−t} − I‖ < 1/2, so that

  2^{−1} ≤ inf_{|z|=1}|(M_γ(s◦))^{−t}z|, ‖(M_γ(s◦))^{−t}‖, |det M_γ(s◦)| ≤ 2.

Therefore, by (2.16) and (2.17) we see respectively that [ω̃]₃ ≤ C with a constant C independent of γ, and that

  supp F(f̃) ⊂ {ξ : 2^{−2}δ³λ ≤ |ξ| ≤ 2²δλ}.

Let β_* ∈ C_c^∞([3/4, 7/4]) be such that Σ_j β_*(2^{−j}·) = 1 on (0, ∞). We decompose

  f̃ = Σ_{2^{−2}δ³λ ≤ 2^j ≤ 2²δλ} f̃_j, where f̃_j = F^{−1}(β_*(2^{−j}|·|)F(f̃)).

By (2.15) it follows that

  ‖A_γ[ψ_J]f‖_{L^p(R³×[1,2], ω)} ≤ δ^{1−3/p} Σ_{2^{−2}δ³λ ≤ 2^j ≤ 2²δλ} ‖A_{γ_{s◦}^δ}[ψ_{J◦}]f̃_j‖_{L^p(R³×[1,2], ω̃)}.

Since supp F(f̃_j) ⊂ A_{2^j} and ‖f̃_j‖_p ≤ C_{β_*}‖f‖_p, and since γ_{s◦}^δ ∈ C^D(ε◦), ψ_{J◦} ∈ N_D(J◦) and [ω̃]₃ ≤ C, we have ‖A_{γ_{s◦}^δ}[ψ_{J◦}]f̃_j‖_{L^p(R³×[1,2], ω̃)} ≤ CQ(2^j)‖f‖_p with C independent of γ, ω, and ψ_J. Therefore we get (2.19). □

2.5. Decomposition in Fourier side.
To show the inequality (2.18) we need only to deal with γ ∈ C^D(ε◦) and ψ ∈ N_D(J◦); therefore it suffices to consider the curve γ over the interval (1 + 2c◦)J◦. This additional localization is helpful for simplifying the argument which follows henceforth.

Since ε◦ ≤ c◦, it is clear that

(2.20)  |γ′(s) − e₁| ≤ 2c◦, |γ′′(s) − e₂| ≤ 2c◦, |γ′′′(s) − e₃| ≤ 2c◦

for s ∈ (1 + 2c◦)J◦ and γ ∈ C^D(ε◦). Thus we have |γ′(s)·ξ| + |γ′′(s)·ξ| ≥ 2^{−1}c◦|ξ| if |ξ₁| ≥ c◦|ξ| or |ξ₂| ≥ c◦|ξ|. Using Proposition 2.19 below we can handle the contribution from the part of frequency |ξ₁| ≥ c◦|ξ| or |ξ₂| ≥ c◦|ξ|, since the condition (2.26) is satisfied. We shall mainly concentrate on the case where ξ is included in the set

  A_λ^* := {ξ : 2^{−1}λ ≤ |ξ| ≤ 2λ, |ξ₁| ≤ c◦|ξ|, |ξ₂| ≤ c◦|ξ|}.

The following is easy to see.
Lemma 2.12.
There exists a function σ ∈ C^{D−2}(A_λ^*), homogeneous of degree 0, such that, for ξ ∈ A_λ^*, |σ(ξ)| ≤ 2c◦ and γ′′(σ(ξ))·ξ = 0.

Indeed, we need to solve the equation γ′′(s)·ξ = 0 for a given ξ; equivalently,

  ξ₂/ξ₃ + s + e(ξ, s) = 0,

where e(ξ, s) is a function homogeneous of degree zero in ξ with ‖e(ξ, ·)‖_{C^{D−2}} ≤ Cε◦. An elementary argument shows the existence of σ(ξ), and the implicit function theorem guarantees that σ ∈ C^{D−2}(A_λ^*) since γ ∈ C^D(ε◦). It is clear that |σ(ξ)| ≤ 2c◦ because ξ₂/ξ₃ + σ(ξ) + e(ξ, σ(ξ)) = 0.
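For the model curve γ◦ the function σ, and the quantities Λ_γ and R_γ introduced in the next paragraph, can be written down explicitly; the computation below is an added illustration (for a general γ ∈ C^D(ε◦) the same formulas hold up to the small error e(ξ, s)).

```latex
% For \gamma_\circ(s) = (s, s^2/2, s^3/6) we have \gamma_\circ''(s) = (0, 1, s), so
\gamma_\circ''(s)\cdot\xi = \xi_2 + s\,\xi_3 = 0 \quad\Longleftrightarrow\quad
  s = \sigma(\xi) = -\,\xi_2/\xi_3 ,
% which is homogeneous of degree 0 and satisfies |\sigma(\xi)| \le 2c_\circ on A^*_\lambda,
% since |\xi_2| \le c_\circ|\xi| and |\xi_3| \sim |\xi| there.
% Moreover \gamma_\circ'''(s) = (0,0,1) gives \Lambda_{\gamma_\circ}(\xi) = \xi_3, while
% \gamma_\circ'(s) = (1, s, s^2/2) yields
\gamma_\circ'(\sigma(\xi))\cdot\xi
  = \xi_1 + \sigma(\xi)\,\xi_2 + \tfrac{1}{2}\,\sigma(\xi)^2\,\xi_3
  = \xi_1 - \frac{\xi_2^2}{2\,\xi_3}.
% Consequently R_{\gamma_\circ}(\xi) vanishes exactly on the cone \xi_2^2 = 2\,\xi_1\xi_3,
% which is the conic surface \mathcal{C}_\lambda of Section 2.5 for the model curve.
```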
For ξ ∈ A_λ^*, we denote

  Λ_γ(ξ) = γ′′′(σ(ξ))·ξ,  R_γ(ξ) = −2γ′(σ(ξ))·ξ / Λ_γ(ξ).

If ξ ∈ A_λ^* and σ(ξ) ∈ (1 + 2c◦)J◦, by (2.20) we have 2^{−2}λ ≤ |Λ_γ(ξ)| ≤ 2²λ, |γ′(σ(ξ))·ξ − ξ₁| ≤ Cc◦²λ, and |Λ_γ(ξ) − ξ₃| ≤ Cc◦λ, so |R_γ(ξ)| ≤ Cc◦.

Decomposition of the operator A_γ[ψ_J]. By Taylor's expansion we have

(2.21)  γ′(s)·ξ = −2^{−1}Λ_γ(ξ)R_γ(ξ) + 2^{−1}Λ_γ(ξ)(s − σ(ξ))² + O(ε◦λ|s − σ(ξ)|²),
(2.22)  γ′′(s)·ξ = Λ_γ(ξ)(s − σ(ξ)) + O(ε◦λ|s − σ(ξ)|)

for s ∈ J and ξ ∈ A_λ^*. Thus γ′(s)·ξ and γ′′(s)·ξ have lower bounds if σ(ξ) is distanced from J, so it is not difficult to have control over the contribution from the associated frequency. However, if σ(ξ) is close to J for ξ ∈ supp f̂, the behavior of A_γ[ψ_J]f becomes less favorable. This leads us to define, for K ≥ 1 and J ∈ J◦(δ),

  R_J(K) = {ξ : |γ′(c_J)·ξ| ≤ Kc◦²δ²λ, |γ′′(c_J)·ξ| ≤ Kc◦δλ, 2^{−2}λ ≤ |ξ| ≤ 2²λ},

which contains the unfavorable frequency part of A_γ[ψ_J]f. Concerning the sets R_J(K) we have the next lemma which we use later.

Lemma 2.13.
Let γ ∈ C^D(ε◦). If ε◦ > 0 is sufficiently small, we have the following with C independent of γ and δ:

(2.23)  Σ_{J∈J◦(δ)} χ_{R_J(2²)} ≤ C.

Proof. In order to show (2.23) it is sufficient to verify that the sets r_J := {ξ : λξ ∈ R_J(2²)} overlap each other at most C many times. Note that r_J is contained in a 2³c◦δ-neighborhood of the line L_J passing through the origin with its direction parallel to γ′(c_J) × γ′′(c_J). Since r_J ⊂ {ξ : 2^{−2} ≤ |ξ| ≤ 2²}, it is sufficient to show that the directions of the lines L_J are separated from each other by a distance at least c◦δ. This in turn follows from the fact that

  d/ds (γ′(s) × γ′′(s)) = γ′(s) × γ′′′(s) = −e₂ + O_s(5c◦)

for γ ∈ C^D(ε◦), because the distance between the centers c_J of the intervals J ∈ J◦(δ) is at least 2c◦δ. Since s ∈ [−2c◦, 2c◦] and γ ∈ C^D(ε◦), we have |γ′(s) − e₁| ≤ c◦(1 + 2c◦) and |γ′′′(s) − e₃| ≤ 2c◦. Thus the last equality is clear. □

Let β̃ ∈ C^∞([2^{−2}, 2²]) be such that β̃ = 1 on [2^{−1}, 2], and set

  χ̃_{R_J}(ξ) = β₀(|γ′(c_J)·ξ|/(2²c◦²δ²λ)) β₀(|γ′′(c_J)·ξ|/(2²c◦δλ)) β̃(λ^{−1}|ξ|),

so that χ̃_{R_J} is supported in R_J(2²) and χ̃_{R_J}(ξ) = 1 if ξ ∈ R_J(2) ∩ A_λ^*. We set

  P_J f = F^{−1}(χ̃_{R_J} f̂).

The following is a consequence of (2.23).
Lemma 2.14. If ε◦ is small enough, we have (Σ_{J∈J◦(δ)} ‖P_J f‖_p^p)^{1/p} ≤ C‖f‖_p for 2 ≤ p ≤ ∞ whenever γ ∈ C^D(ε◦).

The inequality follows from interpolation between the cases p = 2 and p = ∞. Plancherel's theorem and (2.23) give (Σ_J ‖P_J f‖₂²)^{1/2} ≤ C‖f‖₂, and the estimate max_J ‖P_J f‖_∞ ≤ C‖f‖_∞ is obvious.

Decomposition away from the conic surface C_λ. We further decompose A_γ[ψ_J]P_J f in the Fourier side, taking into account how close ξ is to the conic set

  C_λ := {ξ ∈ A_λ^* : R_γ(ξ) = 0}.

To this end we set

  χ̃_{A_λ^*}(ξ) = β₀(ξ₁/(c◦|ξ|)) β₀(ξ₂/(c◦|ξ|)) β(λ^{−1}|ξ|).

For 0 < ν ≪ 1, we define the cutoff functions π_c, π_e, π_o⁰, and π_o¹ by setting

  π_c(ξ) = χ̃_{A_λ^*}(ξ) β₀(λ^{1/2−ν}|R_γ(ξ)|),
  π_e(ξ) = β(λ^{−1}|ξ|) − χ̃_{A_λ^*}(ξ) β₀(δ^{−2}|R_γ(ξ)|),

and, for j = 0, 1,

  π_o^j(ξ) = χ̃_{A_λ^*}(ξ) χ_{{ξ : (−1)^{j+1}R_γ(ξ) > 0}}(ξ) (β₀(δ^{−2}|R_γ(ξ)|) − β₀(λ^{1/2−ν}|R_γ(ξ)|)).

The support of χ̃_{A_λ^*} is contained in A_λ^*, and π_c + π_o⁰ + π_o¹ + π_e = β(λ^{−1}|·|) almost everywhere. The functions π_c, π_o⁰ + π_o¹, and β(λ^{−1}|·|) − π_e roughly split the set A_λ^* into three regions: {ξ : |R_γ(ξ)| ≤ Cλ^{ν−1/2}}, {ξ : Cλ^{ν−1/2} ≤ |R_γ(ξ)| ≤ C′δ²}, and {ξ : C′δ² ≤ |R_γ(ξ)|}. The division between the first set and the other two reflects different asymptotic behaviors of the multiplier A_γ[ψ_J](e^{i(·)·ξ})(0, t) as |ξ| → ∞. The further division of the second and the third sets is necessitated to guarantee the transversality condition for the multilinear estimate, which is to be discussed in the next section.

We also define the associated multiplier operators P_c, P_o⁰, P_o¹, and P_e by

  F(P_c g)(ξ) = π_c(ξ)ĝ(ξ), F(P_o^j g)(ξ) = π_o^j(ξ)ĝ(ξ), j = 0, 1, F(P_e g)(ξ) = π_e(ξ)ĝ(ξ).

Besides, we set P_n = P_c + P_o⁰ + P_o¹. Then easy estimates for the kernels of the operators give

(2.24)  ‖P_c‖_{p→p} ≤ C₁λ^{C₂}, ‖P_o^j‖_{p→p} ≤ C₁λ^{C₂}, j = 0, 1,

for 1 ≤ p ≤ ∞ and some constants C₁, C₂ > 0. It is possible to get better bounds if we use the decoupling or the square function estimate for the cone (for example, [16, 10]), but we do not attempt to do so since it is irrelevant to our purpose. Similarly, we also have

(2.25)  ‖P_e‖_{p→p} ≤ C₁δ^{−C₂}, ‖P_n‖_{p→p} ≤ C₁δ^{−C₂}

for 1 ≤ p ≤ ∞. For the former we need only to note that ‖F^{−1}(π_e)‖_{L¹(R³)} ≤ C₁δ^{−C₂}. The latter follows from the former because the multiplier associated to the operator P_n is β(λ^{−1}|·|) − π_e.

2.6. Nondegenerate part.
Decomposition of the operator A in the Fourier side gives rise to operators of the form (2.13), such as A_γ[ψ_J]P_J, A_γ[ψ_J](1 − P_J), ..., A_γ[ψ_J]P_e. If |γ′(s)·ξ| + |γ′′(s)·ξ| ≥ C|ξ| on the support of a, we can handle A_γ[a] using the following theorem, which is a straightforward consequence of [23, Theorem 4.1].
Theorem 2.15.
Let K ≥ 2 and [s◦ − r, s◦ + 2r] ⊂ J with K^{−1} ≤ r. Suppose that a(s, t, ξ) is a smooth function supported in [s◦ − r, s◦ + r] × I × A_λ and |∂^{j₁}_s ∂^{j₂}_t ∂^α_ξ a(s, t, ξ)| ≤ B|ξ|^{−|α|} for |α| ≤ 4 and j₁, j₂ = 0, 1. Also, assume that
(2.26) |γ′(s)·ξ| + |γ″(s)·ξ| ≥ K^{−1}|ξ|
whenever (s, t, ξ) ∈ supp a for some t ∈ I. Then, if p ≥ 6 and ε◦ > 0 is small enough, for ε > 0
(2.27) ‖A_γ[a]f‖_{L^p(R³×I)} ≤ C_ε BK^C λ^{−2/p+ε} ‖f‖_{L^p(R³)}
whenever γ ∈ C^D(ε◦) and f̂ is supported in A_λ.

The statement of Theorem 2.15 differs from the one in [23] in a couple of aspects. First, the range of p is enlarged to p ≥ 6 thanks to the ℓ^p-decoupling inequality for the cone [6]. Secondly, there is an extra factor K^C in (2.27). The estimate (2.27) can be seen by following the argument in [23]. It is also possible to deduce (2.27) from that with K ∈ [2^{−1},
2] by finite decomposition and making use of scaling and affine transformations. Uniformity of the bound over γ ∈ C^D(ε◦) is clear. The estimate |∫ e^{−itγ(s)·ξ} a(s, t, ξ) ds| ≤ CBK^C|ξ|^{−1/2} follows by (2.26) and van der Corput's lemma. We thus have ‖A_γ[a]f‖_{L²(R³×I)} ≤ CBK^C λ^{−1/2} ‖f‖_{L²(R³)} by Plancherel's theorem. Interpolation between this estimate and (2.27) with p = 6 gives

Corollary 2.16.
Under the same assumptions as in Theorem 2.15, if 2 ≤ p ≤ 6 and ε◦ is small enough, for ε > 0
‖A_γ[a]f‖_{L^p(R³×I)} ≤ C_ε BK^C λ^{−1/4−1/(2p)+ε} ‖f‖_{L^p(R³)}
whenever γ ∈ C^D(ε◦) and f̂ is supported in A_λ.

We also make use of the following ([23, Theorem 1.4]).
Theorem 2.17.
Let J ⊂ [−1, 1] be a compact interval of length δ and ψ_J ∈ N D(J). Then, if p ≥ 6 and ε◦ is small enough, for ε > 0
(2.28) ‖A_γ[ψ_J]f‖_{L^p(R³×I)} ≤ C_ε δ^{−C} λ^{−2/p+ε} ‖f‖_{L^p(R³)}
whenever γ ∈ C^D(ε◦) and f̂ is supported in A_λ.

Compared with [23, Theorem 1.4], the range of p is extended to p ≥ 6. The factor δ^{−C} arises from scaling, and uniformity over γ ∈ C^D(ε◦) is also obvious.

Estimates for A_γ[ψ_J](1 − P_J) and A_γ[ψ_J]P_e. The condition (2.26) is satisfied on the support of ψ_J(s)(1 − χ̃_{R_J}(ξ)). Thus, using Corollary 2.16, we can get a favorable estimate for A_γ[ψ_J](1 − P_J). We also obtain a similar estimate for A_γ[ψ_J]P_e (see Proposition 2.19 below).

Proposition 2.18.
Let [ω] ≤ 1 and J ∈ J°(δ). If 2 ≤ p ≤ 6 and ε◦ > 0 is small enough, for ε > 0 there are constants C and C_ε such that
(2.29) ‖A_γ[ψ_J](1 − P_J)f‖_{L^p(R³×[1,2],ω)} ≤ C_ε δ^{−C} λ^{(1/2)(1/p−1/2)+ε} ‖f‖_{L^p(R³)}
whenever supp f̂ ⊂ A_λ, γ ∈ C^D(ε◦), and ψ_J ∈ N D(J). The critical case p = 6 can be included by interpolation with a trivial estimate.

Proof.
We set a ( s, t, ξ ) = e χ ( t ) ψ J ( s )(1 − e χ R J ( ξ )) β ( λ − | ξ | ) , so that A γ [ a ] f = e χ ( t ) A γ [ ψ J ](1 − P J ) f . We claim that (2.26) holds on the supportof a with K = C δ − > C = C ( c ◦ ).To see this, it suffices to consider the case ξ ∈ A ∗ λ because of (2.20). We firstnote that ξ ∈ R J (2 ) if σ ( ξ ) ∈ [ c J − | J | , c J + | J | ] and | R γ ( ξ ) | ≤ c ◦ δ . Indeed,since | σ ( ξ ) − c J | ≤ c ◦ δ , by (2.21) we have | γ ′ ( c J ) · ξ | ≤ c ◦ δ λ , and we get | γ ′′ ( c J ) · ξ | ≤ c ◦ δλ from (2.22). So, it follows ξ ∈ R J (2 ) since ξ ∈ A ∗ λ . Hence, if ξ ∈ supp (1 − e χ R J ) β ( λ − | · | ) ∩ A ∗ λ , we have σ ( ξ ) / ∈ [ c J − | J | , c J + | J | ] or | R γ ( ξ ) | ≥ c ◦ δ . In the first case we have | γ ′′ ( s ) · ξ | ≥ − c ◦ δλ by (2.22). Thus we mayassume | R γ ( ξ ) | ≥ c ◦ δ and | s − σ ( ξ ) | ≤ c ◦ δ and then we get | γ ′ ( s ) · ξ | ≥ c ◦ δ λ using (2.21). This shows the claim.Since (2.26) holds on the support of a , by Corollary 2.16 we have the estimate k e χ ( t ) A γ [ ψ J ](1 − P J ) f k L p ( R × R ) ≤ C ε δ − C λ − − p + ε k f k L p ( R ) (2.30)for 2 ≤ p ≤
6. We use the estimate to obtain the weighted estimate (2.29) andargue similarly as in the proof of Lemma 2.4. So, we shall be brief.As before, let us define an operator e A J by F ( e A J h )( ξ, τ ) = β (( λr ) − τ ) β ( λ − | ξ | ) F (cid:0)e χ ( t ) A γ [ ψ J ] h (cid:1) ( ξ, τ ) , where r = 1 + 4 max {| γ ( s ) | : s ∈ supp ψ J } . Then we have | ( e χ ( t ) A γ [ ψ J ] − e A J ) h | ≤ C e E Nt ∗ | h | for any N if we use Lemma 2.6. Putting together this (e.g., (2.7)),[ ω ] ≤ k (1 − P J ) f k p ≤ C k f k p , we see that k e χ ( t ) A γ [ ψ J ](1 − P J ) f k L p ( R × R ,ω ) ≤ k e A J (1 − P J ) f k L p ( R × R ,ω ) + Cλ − N k f k p . The Fourier transform of e A J (1 − P J ) f is supported in B (0 , r λ ). By Lemma 3.11we thus get k e A J (1 − P J ) f k L p ( R × R ,ω ) ≤ Cλ /p k e A J (1 − P J ) f k L p ( R × R ) . Disregardingthe minor contribution from ( e χ ( t ) A γ [ ψ J ] − e A J )(1 − P J ) f , we only need to obtain theestimate for e χ ( t ) A γ [ ψ J ](1 − P J ) f in L p ( R × R ). Therefore we obtain the estimate(2.29) by (2.30). (cid:3) Proposition 2.19.
Under the same assumptions as in Proposition 2.18, if 2 ≤ p ≤ 6 and ε◦ > 0 is small enough, for any ε > 0
‖A_γ[ψ_J]P_e f‖_{L^p(R³×[1,2],ω)} ≤ C_ε δ^{−C} λ^{(1/2)(1/p−1/2)+ε} ‖f‖_{L^p(R³)}
whenever supp f̂ ⊂ A_λ, γ ∈ C^D(ε◦), and ψ_J ∈ N D(J).

Proof. We set π¹_e(ξ) = χ̃_{A*_λ}(ξ)(1 − β(δ^{−2}|R_γ(ξ)|)) and π²_e(ξ) = β(λ^{−1}|ξ|) − χ̃_{A*_λ}(ξ), so that π_e = π¹_e + π²_e. Then we break χ̃(t)A_γ[ψ_J]P_e f = A_γ[a₁]f + A_γ[a₂]f, where a_j(s, t, ξ) = χ̃(t)ψ_J(s)π^j_e(ξ), j = 1, 2. We first consider A_γ[a₁]f. After decomposing ψ_J into bump functions ψ_ℓ supported in finitely overlapping intervals J_ℓ with δ² ≤ |J_ℓ| ≤ 2δ², ψ_J = ∑_ℓ ψ_ℓ, and |ψ^{(k)}_ℓ| ≤ C_k δ^{−2k}, we set a_ℓ(s, t, ξ) = χ̃(t)ψ_ℓ(s)π¹_e(ξ). By (2.22), |γ″(s)·ξ| ≥ 2^{−1}λδ² for s ∈ supp ψ_ℓ if σ(ξ) ∉ [c_{J_ℓ} − |J_ℓ|, c_{J_ℓ} + |J_ℓ|]. Otherwise, from (2.21) we have |γ′(s)·ξ| ≥ 2^{−1}δ²λ for s ∈ supp ψ_ℓ since |R_γ(ξ)| ≥ δ² on supp π¹_e.
Therefore (2.26) holds with K = Cδ^{−2} for (s, t, ξ) ∈ supp a_ℓ. An application of Corollary 2.16 with a = a_ℓ gives
‖A_γ[a_ℓ]f‖_{L^p(R³×R)} ≤ C_ε δ^{−C} λ^{−1/4−1/(2p)+ε} ‖f‖_{L^p(R³)}.
Arguing similarly as in the proof of Proposition 2.18, we get the weighted estimate ‖A_γ[a_ℓ]f‖_{L^p(R³×[1,2],ω)} ≤ C_ε δ^{−C} λ^{−1/4+1/(2p)+ε} ‖f‖_{L^p(R³)}. Summation over ℓ thus gives the desired estimate since there are at most Cδ^{−1} many ℓ. The estimate ‖A_γ[a₂]f‖_{L^p(R³×[1,2],ω)} ≤ C_ε λ^{−1/4+1/(2p)+ε} ‖f‖_{L^p(R³)} can be obtained likewise but more straightforwardly since |γ′(s)·ξ| + |γ″(s)·ξ| ≥ c◦|ξ| on supp a₂. □

3. Multilinear estimates
The main objective of this section is to prove the following weighted multilinear estimate for A_γ[ψ_J]P_n f. Throughout this section we assume γ ∈ C^D(ε◦) with ε◦ small enough.

Proposition 3.1.
Let J k ∈ J ◦ ( δ ) , ≤ k ≤ , and [ ω ] ≤ . Suppose that b f , . . . , b f are supported in A λ and dist ( J ℓ , J k ) ≥ δ , ℓ = k . If / < p ≤ , thereare constants ε p > , D , and C δ > such that (cid:13)(cid:13)(cid:13) Y k =1 | A γ [ ψ J k ]( P n P J k f k ) | (cid:13)(cid:13)(cid:13) L p ( R × [1 , ,ω ) ≤ C δ λ − ε p Y k =1 k f k k L p ( R ) (3.1) whenever γ ∈ C D ( ε ◦ ) and ψ J k ∈ N D ( J k ) , ≤ k ≤ . Expansions of the multiplier.
In order to prove Proposition 3.1 we first tryto express A γ [ ψ J k ] P n as a sum of adjoint restriction operators. To do so, we expandthe Fourier multiplier of the operator A γ [ ψ J k ] P n into a series of suitable form. Wehandle separately A γ [ ψ J k ] P c (Lemma 3.2) and A γ [ ψ J k ] P j o , j = 1 , Multiplier of A γ [ ψ J ] P c . Let J ∈ J ◦ ( δ ). For ψ J ∈ N D ( J ) we set m J ( t, ξ ) = (2 π ) − Z e − itγ ( s ) · ξ ψ J ( s ) ds. The multiplier m J π c of A γ [ ψ J ] P c has the worse decay in ξ as the zeros of γ ′ ( s ) · ξ and γ ′′ ( s ) · ξ are close to each other. We defineΦ c ( ξ ) = γ ( σ ( ξ )) · ξ, ξ ∈ A ∗ λ , and an adjoint restriction operator T c λ by setting T c λ g ( x, t ) = Z C c λ ( δ ) e i ( x · ξ − t Φ c ( ξ )) g ( ξ ) dξ, where C c λ ( δ ) = { ξ ∈ A ∗ λ : | R γ ( ξ ) | ≤ δ } . We note that supp π c ⊂ C c λ ( δ ). Lemma 3.2.
Let < ν ≪ and J ∈ J ◦ ( δ ) . Suppose γ ∈ C D ( ε ◦ ) , ψ J ∈ N D ( J ) ,and b f is supported on A λ . Then we have A γ [ ψ J ] P c f = X ℓ ∈ Z : | ℓ |≤ λ ν e itℓ T c λ (cid:0) c ℓ π c b f (cid:1) + E c f, t ∈ I, and the following hold with C , C N , and C δ independent of γ and ψ J : | c ℓ ( ξ ) | ≤ C N λ ν − (1 + λ − ν | ℓ | ) − N (3.2) for any N and kE c f k L q ( R × I ) ≤ C δ λ C − νD k f k p , ≤ p ≤ q ≤ ∞ . (3.3)Summation over ℓ results from the Fourier series expansion in t of an amplitudefunction which appears after factoring out e − it Φ c ( ξ ) . This simplifies the amplitudefunction depending both on ξ and t which causes considerable loss in bound whenwe attempt to directly apply the multilinear restriction estimate (for example see[4, Theorem 6.2]).For the proof of Lemma 3.2 and Lemma 3.4 below we write m J ( t, ξ ) in a differentform. Changing of variables s → s + σ ( ξ ), we have(3.4) m J ( t, ξ ) = (2 π ) − e − it Φ c ( ξ ) Z e − itφ ( s,ξ ) ψ J ( s + σ ( ξ )) ds, where φ ( s, ξ ) := γ ( s + σ ( ξ )) · ξ − γ ( σ ( ξ )) · ξ . We here note that J ⊂ (1 + 2 c ◦ ) J ◦ and | σ ( ξ ) | ≤ c ◦ for ξ ∈ A ∗ λ by Lemma 2.12.Thus φ ∈ C D − ([ − / , / × A ∗ λ ) and supp ψ J ( · + σ ( ξ )) ⊂ J ◦ . Since γ ∈ C D ( ε ◦ )and γ ′′ ( σ ( ξ )) · ξ = 0, by Taylor’s expansion it follows that φ ( s, ξ ) = Λ γ ( ξ ) (cid:0) − R γ ( ξ ) s + 16 s + Θ( s, ξ ) (cid:1) , (3.5) | ∂ ks Θ( s, ξ ) | ≤ C k ε ◦ | s | max(4 − k, , ≤ k ≤ D. (3.6)In what follows we occasionally resort to (3.5) and (3.6) to exploit the propertiesof the phase function φ ( · , ξ ). Proof of Lemma 3.2.
We need to consider m J ( t, ξ ) while ξ ∈ supp π c . We break ψ J ( s + σ ( ξ )) = a m ( s, ξ ) + a e ( s, ξ ) , where a m ( s, ξ ) = ψ J ( s + σ ( ξ )) β (2 − λ − ν s ) . Then we put I θ ( t, ξ ) = (2 π ) − Z e − itφ ( s,ξ ) a θ ( s, ξ ) ds, θ ∈ { m, e } . By (3.4) it follows m J ( t, ξ ) = e − it Φ c ( ξ ) (cid:0) I m ( t, ξ ) + I e ( t, ξ ) (cid:1) . The major term is I m while I e decays fast as λ → ∞ . Let χ ◦ ∈ C ∞ ([0 , π ]) suchthat χ ◦ = 1 on the interval [2 − , ]. Expanding χ ◦ ( t ) I m ( t, ξ ) into Fourier series in t over the interval [0 , π ] we have χ ◦ ( t ) I m ( t, ξ ) = X ℓ ∈ Z c ℓ ( ξ ) e itℓ . Note that F ( χ ◦ I m ( · , ξ ))( ℓ ) = (2 π ) − R c χ ◦ ( ℓ + φ ( s, ξ )) a m ( s, ξ ) ds . Since | φ ( s, ξ ) | ≤ Cλ ν on the support of a m ( · , ξ ) by (3.5), we have |F ( χ ◦ I m ( · , ξ ))( ℓ ) | ≤ Cλ ν − | ℓ | − N for any N if | ℓ | ≥ C λ ν for a large C . Thus we get (3.2) for any N >
0. Wealso note that | ∂ αξ φ | ≤ C and | ∂ αξ a m | ≤ C δ because | ∂ αξ σ | ≤ Cλ −| α | on A ∗ λ for | α | ≤ D − N > | ∂ αξ c ℓ ( ξ ) | ≤ C δ λ ν − (1 + λ − ν | ℓ | ) − N . AXIMAL ESTIMATE FOR SPACE CURVE 17
We now put
E_c g(x, t) = ∑_{|ℓ|>λ^ν} e^{itℓ} F_x^{−1}(c_ℓ e^{−itΦ_c} π_c ĝ) + F_x^{−1}(I_e(t, ·) e^{−itΦ_c} π_c ĝ).
We shall show (3.3) to complete the proof. The terms F_x^{−1}(c_ℓ e^{−itΦ_c} π_c ĝ) in the summation can be handled easily. Combining the estimate (3.7) and |∂^α_ξ e^{−itΦ_c}| ≤ C for |α| ≤
4, we see that F − x ( c ℓ e − it Φ c π c b g ) = K t ∗ P c g and | K t | ≤ C δ λ C (1 + λ − ν | ℓ | ) − N (1 + | x | ) − . Thus, the convolution inequality gives kF − x ( c ℓ e − it Φ c π c b g ) k L q ( R × I ) ≤ C δ λ C (1 + λ − ν | ℓ | ) − N kP c g k p for 1 ≤ p ≤ q ≤ ∞ . Taking a large N ≥ D and using the estimate in (2.24), weobtain P | ℓ |≥ λ ν kF − x ( c ℓ e − it Φ c π c b g ) k L q ( R × I ) ≤ C δ λ C − νD k g k p .In order to show the estimate for F − x ( I e ( t, · ) e − it Φ c π c b g ) we claim(3.8) | ∂ αξ I e ( t, ξ ) | ≤ C δ λ − ν ( D −| α | ) for ξ ∈ supp π c and | α | ≤
4. Using (3.8) for | α | ≤
4, similarly as before, we see F − x ( I e ( t, · ) e − it Φ c π c b g ) = K t ∗ P c g with | K t | ≤ C δ λ C − νD (1 + | x | ) − . Therefore,the convolution inequality and (2.24) give (cid:13)(cid:13) F − x ( I e ( t, · ) e − it Φ c π c b g ) (cid:13)(cid:13) L q ( R × I ) ≤ C δ λ C − νD k g k p , ≤ p ≤ q ≤ ∞ . Now it remains to show (3.8). We recall a e ( s, ξ ) = ψ J ( s + σ ( ξ ))(1 − β (2 − λ − ν s )).Since | s | ≥ λ ν − on the support of a e ( · , ξ ) and | R γ ( ξ ) | ≤ λ ν − for ξ ∈ supp π c ,by (3.5) and (3.6) it follows that C λ | s | ≤ | ∂ s φ ( s, ξ ) | ≤ C λ | s | and C λ | s | − k ≤ | ∂ ks φ ( s,ξ ) | ≤ C λ | s | − k , k = 2 , , | ∂ ks φ ( s, ξ ) | ≤ C ε ◦ λ, ≤ k ≤ D (3.9)for some positive constants C , . . . , C . Thus, noting | ∂ ks a e ( s, ξ ) | ≤ C δ λ ( − ν ) k for0 ≤ k ≤ D , we have(3.10) b ℓ +1 := | ∂ ℓ +1 s φ ( s, ξ ) || ∂ s φ ( s, ξ ) | ℓ +1 ≤ C δ λ − ν ( ℓ +1) , b ′ ℓ := | ∂ ℓs a e ( s, ξ ) || ∂ s φ ( s, ξ ) | ℓ ≤ C δ λ − νℓ for ℓ ≥ ξ ∈ supp π c and | s | ≥ λ ν − . After integrating by parts D − |I e ( t, ξ ) | is bounded by a finite sum of the terms C R Q mj =1 M ℓ j ds where M ℓ ∈ { b ℓ +1 , b ′ ℓ } , P mj =1 ℓ j = D −
1, and ℓ j ≥
1. Using (3.10) we get |I e ( t, ξ ) | ≤ C δ λ − νD for ξ ∈ supp π c . Furthermore, since ∂ ks ∂ αξ φ , α = 0 are bounded, the sameargument shows (3.8). (cid:3) Multipliers of A γ [ ψ J ] P o and A γ [ ψ J ] P o . We obtain similar expansions for m J π j o , j = 0 ,
1. As we shall see, m J π o is a minor term decaying rapidly as λ → ∞ (see(3.21)). We concentrate on the case ξ ∈ supp π o for the moment.Let ρ ∈ C ∞ ([2 − , ]), ρ ∈ C ∞ c ([0 , − )) and ρ ∈ C ∞ ((2 , ∞ )) such that ρ = 1 on [2 − , ] and ρ + ρ + ρ = 1 on [0 , ∞ ). For j = 0 , , , we set a j ( s, ξ ) = ψ J ( s + σ ( ξ )) ρ j (cid:0) R − / γ ( ξ ) | s | (cid:1) , I j ( t, ξ ) = (2 π ) − Z e − itφ ( s,ξ ) a j ( s, ξ ) ds, and then we have(3.11) m J ( t, ξ ) = e − it Φ c ( ξ ) (cid:0) I ( t, ξ ) + I ( t, ξ ) + I ( t, ξ ) (cid:1) . The main term is I while I and I are rapidly decaying as λ → ∞ (see (3.22)below). The second derivative of the phase function does not vanish on supp a ( · , ξ ),so we may apply the method of stationary phase for I ( t, ξ ). For the purpose weset(3.12) e φ ( s, ξ ) = L − ( ξ ) φ (cid:0) R / γ ( ξ ) s, ξ (cid:1) , where L ( ξ ) = Λ γ ( ξ ) R γ ( ξ ) and set a ± ( s, ξ ) = ψ J (cid:0) R / γ ( ξ ) s + σ ( ξ ) (cid:1) ρ ( ± s ) , I ± ( t, ξ ) = (2 π ) − R / γ ( ξ ) Z e − itL ( ξ ) e φ ( s,ξ ) a ± ( s, ξ ) ds. By scaling s → R / γ ( ξ ) s we have(3.13) I ( t, ξ ) = I +1 ( t, ξ ) + I − ( t, ξ ) . We try to find the stationary points of the function e φ ( · , ξ ) which give rise totwo different phase functions Φ ± (see (3.15) below). As we shall see later, it isimportant for application of the multilinear restriction estimate how smooth thesephase functions are. So, we deal with the matter carefully. Lemma 3.3.
There are τ + , τ − ∈ C D − ( A ∗ λ × [ − δ , δ ]) , homogeneous of degreezero, such that ± τ ± ( ξ, θ ) ∈ [2 − , and, if R γ ( ξ ) ≥ , (3.14) ∂ s e φ (cid:0) τ ± ( ξ, R / γ ( ξ )) , ξ (cid:1) = 0 . Proof.
We begin by setting Θ₀(s, ξ) = s^{−4}Θ(s, ξ), which is homogeneous of degree zero in ξ. One can see Θ₀ ∈ C^{D−4}([−1/2, 1/2] × A*_λ) without difficulty because
Θ₀(s, ξ) = (1/3!) ∫₀¹ (1 − t)³ γ⁽⁴⁾(st + σ(ξ))·ξ Λ_γ^{−1}(ξ) dt
by Taylor's theorem with integral remainder. Then we consider the function
φ̃(s, ξ, θ) = −s + s³/3! + θs⁴Θ₀(θs, ξ)
with (s, ξ, θ) ∈ Ω_± := (±[2^{−1}, 2]) × A*_λ × [−δ, δ]. It is clear that φ̃ ∈ C^{D−4}(Ω_±). Since Θ₀, ∂_s Θ₀, and ∂²_s Θ₀ are O(ε◦), as can be seen using (3.5) and (3.6), we have ∂_s φ̃(s, ξ, θ) = −1 + s²/2 + O(ε◦) and ∂²_s φ̃(s, ξ, θ) = s + O(ε◦). We now note that ∂_s φ̃(·, ξ, θ) has two distinct zeros which are close to √2 and −√2, respectively; thus, by the implicit function theorem, there are τ₊(ξ, θ) and τ₋(ξ, θ) such that ∂_s φ̃(τ_±(ξ, θ), ξ, θ) = 0 and ±τ_±(ξ, θ) ∈ [2^{−1}, 2] if ε◦ is small enough. Additionally, τ₊ and τ₋ have the same order of differentiability as ∂_s φ̃. By (3.5) and (3.12) we note that φ̃(s, ξ, R_γ^{1/2}(ξ)) = φ̃(s, ξ); thus it follows that ∂_s φ̃(s, ξ, R_γ^{1/2}(ξ)) = ∂_s φ̃(s, ξ) when R_γ(ξ) ≥ 0. Therefore we obtain (3.14). □
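For orientation, in the unperturbed model case Θ₀ ≡ 0, so that φ̃(s) = −s + s³/3!, the stationary points can be computed explicitly; this is only a model computation, and the perturbative terms move the zeros by O(ε◦):

```latex
% Model case \Theta_0 \equiv 0: \ \tilde\phi(s) = -s + s^3/3!
\partial_s\tilde\phi(s) = -1 + \frac{s^2}{2} = 0
  \iff s = \pm\sqrt{2},
\qquad
\partial_s^2\tilde\phi\big(\pm\sqrt{2}\big) = \pm\sqrt{2} \neq 0 .
```

Since the zeros are nondegenerate and ∂²_s φ̃ = s + O(ε◦) stays bounded away from zero on ±[2^{−1}, 2], the implicit function theorem applies uniformly in (ξ, θ) once ε◦ is small.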
We set s_±(ξ) = R_γ^{1/2}(ξ) τ_±(ξ, R_γ^{1/2}(ξ)).
Then from (3.12) it follows γ ′ (cid:0) s ± ( ξ ) + σ ( ξ ) (cid:1) · ξ = 0. We defineΦ ± ( ξ ) = γ (cid:0) s ± ( ξ ) + σ ( ξ ) (cid:1) · ξ (3.15)for ξ ∈ A ∗ λ ∩ { ξ : R γ ( ξ ) ≥ } . If R γ ( ξ ) = 0 for some ξ , ∇ Φ ± ( ξ ) may not existbecause R / γ is not differentiable at ξ . However, ∇ Φ ± can be defined to be acontinuous function on A ∗ λ ∩ { ξ : R γ ( ξ ) ≥ } . Indeed, differentiating (3.15) gives(3.16) ∇ Φ ± ( ξ ) = γ (cid:0) s ± ( ξ ) + σ ( ξ ) (cid:1) if R γ ( ξ ) > . Thus ∇ Φ ± becomes continuous on A ∗ λ ∩ { ξ : R γ ( ξ ) ≥ } if we set ∇ Φ ± ( ξ ) = γ (cid:0) σ ( ξ ) (cid:1) when R γ ( ξ ) = 0 since γ , σ are continuous.We define the adjoint restriction operators T ± λ by T ± λ g ( x, t ) = Z C o λ ( δ ) e i ( x · ξ − t Φ ± ( ξ )) g ( ξ ) dξ, where C o λ ( δ ) := { ξ ∈ A ∗ λ : 0 ≤ R γ ( ξ ) ≤ δ } . Putting together the discussion sofar with the method of stationary phase we can obtain Lemma 3.4.
Let < ν ≪ , M = [ D − ] , and J ∈ J ◦ ( δ ) . Suppose γ ∈ C D ( ε ◦ ) , ψ J ∈ N D ( J ) , and b f is supported on A λ . Then, we have (3.17) A γ [ ψ J ]( P o + P o ) f = X ± M − X ℓ =0 t − ℓ +12 T ± λ (cid:0) γ ± ℓ π o b f (cid:1) + E o f, t ∈ I, and the following hold with C and C δ independent of γ and ψ J : | γ ± ℓ ( ξ ) | ≤ C δ λ − − ν λ − ℓν (3.18) for ≤ ℓ ≤ M − and (3.19) kE o f k L q ( R × I ) ≤ C δ λ C − νM k f k p , ≤ p ≤ q ≤ ∞ . It should be noted that the expansion in (3.17) is obtained only on the supportof π o but not on the larger set C o λ ( δ ).We now proceed to apply to I ± the method of stationary phase. We first notethat supp a ± ( · , ξ ) ⊂ ± [2 − , ] and, as seen in the above, the phase e φ ( · , ξ ) has thestationary points τ ± ( ξ, R / γ ( ξ )) while ∂ s e φ ( · , ξ ) = s + O s ( ε ◦ ) for ξ ∈ A ∗ λ ∩ { ξ : 0 ≤ R γ ( ξ ) ≤ δ } . We also note that | L ( ξ ) | ≥ − λ ν for ξ ∈ supp π o and that L ( ξ ) e φ ( τ ± ( ξ, R / γ ( ξ )) , ξ ) = γ (cid:0) s ± ( ξ ) + σ ( ξ ) (cid:1) · ξ − Φ c ( ξ ). Bring all these observationstogether, we now apply [12, Theorem 7.7.5] (also see [12, Theorem 7.7.6]) and obtain I ± ( t, ξ ) = e it (Φ c ( ξ ) − Φ ± ( ξ )) R / γ ( ξ ) M − X ℓ =0 d ± ℓ ( ξ )( tL ( ξ )) − − ℓ + e ± M ( t, ξ )(3.20)for ξ ∈ supp π o where M = [ D − ] and e ± M ( t, ξ ) = O (cid:0) | tL ( ξ ) | − M (cid:1) . The functions d ± ℓ ( ξ ) are bounded on the support of π o since so are ∂ ks e φ and ∂ ks a ± . Proof of Lemma 3.4.
Recalling (3.11) and (3.13) we write m J ( π o + π o ) = e − it Φ c ( I +1 + I − ) π o + e − it Φ c ( I + I ) π o + m J π o . Using (3.20), we now put E ( t, · ) = e − it Φ c (cid:0) e + M ( t, · ) + e − M ( t, · ) (cid:1) π o + e − it Φ c (cid:0) I ( t, · ) + I ( t, · ) (cid:1) π o + m J ( t, · ) π o , and then we set E o f = F − ξ ( E ( t, · ) b f ) and γ ± ℓ ( ξ ) = R / γ ( ξ ) d ± ℓ ( ξ )( L ( ξ )) − − ℓ . Thuswe have (3.17) and the inequality (3.18) follows because | L ( ξ ) | ≥ − λ ν and d ± ℓ are bounded on the support of π o .To show (3.19) we use the following:(3.21) | ∂ αξ m J ( t, ξ ) | ≤ C δ λ − ν ( D −| α | ) , ξ ∈ supp π o , and(3.22) | ∂ αξ I ( t, ξ ) | ≤ C δ λ − ν ( D −| α | ) , | ∂ αξ I ( t, ξ ) | ≤ C δ λ − ν ( D −| α | ) , ξ ∈ supp π o . Assuming this for the moment we obtain (3.19). Note that | ∂ αξ Φ c | ≤ Cλ −| α | and | ∂ αξ e ± M | ≤ C λ C − νM for | α | ≤
4. Combining this, (3.21) and (3.22) for | α | ≤ R γ ( ξ ) ≤ − λ ν − for ξ ∈ supp π o , by (3.5) we see that | ∂ s φ | ≥ C λ (cid:0) − R γ ( ξ ) + s (1 / − ε ◦ | s | ) (cid:1) ≥ C λ max( s , λ ν − ) for some C , C > ℓ ≥ a e is replaced by ψ J ( s + σ ( ξ )). Thus integration by parts gives | m J ( t, ξ ) | ≤ C δ λ − νD if R γ ( ξ ) ≤− λ ν − . The same argument also works for ∂ αξ m J ( t, ξ ), so we obtain (3.21).We now show (3.22) only with α = 0, and the derivatives ∂ αξ I and ∂ αξ I canbe handled likewise. We consider I first. By (3.5) we have | ∂ s φ | ≥ CλR γ ( ξ ) for | s | ≤ − R / γ ( ξ ). Combining this with (3.9), we get the first estimate in (3.10)for ℓ ≥ | s | ≤ − R / γ ( ξ ) because λ ν − ≤ R γ ( ξ ). Note that | ∂ ℓs a ( s, ξ ) | ≤ C δ R − ℓ/ γ ( ξ ), hence for ℓ ≥ a e replacedby a . Therefore repeated integration by parts gives the estimate for I . We canhandle I in the same manner. Since | s | ≥ R / γ ( ξ ), by (3.5) we have C λ | s | ≤| ∂ s φ ( s, ξ ) | ≤ C λ | s | and obviously | ∂ ℓs a ( s, ξ ) | ≤ C δ R − ℓ/ γ ( ξ ). So, we get theestimate (3.10) for | s | ≥ R / γ ( ξ ) and ℓ ≥ a e is replaced by a . Thusintegration by parts gives the estimate for I . (cid:3) In contrast to Φ c the 2nd derivatives of Φ ± are no longer bounded. However,a computation with γ = γ ◦ leads us to expect that Φ ± ∈ C , / . What followsshows this holds true for γ ∈ C D ( ε ◦ ). Lemma 3.5.
For ξ₁, ξ₂ ∈ C_o(δ), there is a constant C independent of γ such that
(3.23) |∇Φ_±(ξ₁) − ∇Φ_±(ξ₂)| ≤ C|ξ₁ − ξ₂|^{1/2}.

Proof.
Let us set τ_±(ξ) = τ_±(ξ, R_γ^{1/2}(ξ)), so that s_±(ξ) = R_γ^{1/2}(ξ)τ_±(ξ). Using (3.16) and applying the mean value inequality to γ, it is easy to see that
|∇Φ_±(ξ₁) − ∇Φ_±(ξ₂)| ≤ C|s_±(ξ₁) − s_±(ξ₂)| + C|σ(ξ₁) − σ(ξ₂)|.
Since σ ∈ C^{D−2}(A*_λ) from Lemma 2.12, we only have to consider the first term on the right-hand side, which is in turn bounded by
|R_γ^{1/2}(ξ₁) − R_γ^{1/2}(ξ₂)||τ_±(ξ₁)| + R_γ^{1/2}(ξ₂)|τ_±(ξ₁) − τ_±(ξ₂)|.
(If γ = γ◦, then Φ_c(ξ) = −ξ₁ξ₂/ξ₃ + ξ₂³/(3ξ₃²) and Φ_±(ξ) = Φ_c(ξ) ∓ 3^{−1}ξ₃(ξ₂²/ξ₃² − 2ξ₁/ξ₃)^{3/2}.)
It is easy to see that |R_γ^{1/2}(ξ₁) − R_γ^{1/2}(ξ₂)| ≤ C|ξ₁ − ξ₂|^{1/2}. Since τ_± is continuously differentiable on C_o(δ) (Lemma 3.3) and τ_±(ξ) = τ_±(ξ, R_γ^{1/2}(ξ)), by the mean value inequality it follows that
|τ_±(ξ₁) − τ_±(ξ₂)| ≤ C|R_γ^{1/2}(ξ₁) − R_γ^{1/2}(ξ₂)| + C|ξ₁ − ξ₂|.
Consequently, we get the inequality (3.23). □
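The square-root Hölder bound used here is elementary; assuming, as Lemma 2.12 provides, that R_γ is Lipschitz on the relevant set, one can argue:

```latex
% For a, b \ge 0:
\big|\sqrt{a}-\sqrt{b}\big|^{2}
  \le \big|\sqrt{a}-\sqrt{b}\big|\big(\sqrt{a}+\sqrt{b}\big)
  = |a-b|,
% hence, taking a = R_\gamma(\xi_1) and b = R_\gamma(\xi_2):
\big|R_\gamma^{1/2}(\xi_1)-R_\gamma^{1/2}(\xi_2)\big|
  \le \big|R_\gamma(\xi_1)-R_\gamma(\xi_2)\big|^{1/2}
  \le C\,|\xi_1-\xi_2|^{1/2}.
```

This is exactly the C^{1,1/2}-type modulus of continuity appearing in (3.23), and it is the only source of the limited regularity of Φ_±.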
3.2. Multilinear restriction estimate.
In this section we obtain a form of multilinear restriction estimate which we need to prove (3.1). The surfaces associated to Φ_c and Φ_± have a certain curvature property, so it is possible to get an L²–L^q smoothing estimate using the typical TT* argument. However, the resulting estimate is not strong enough to be useful for controlling the maximal operator. Instead, we utilize 4-linear estimates which are deduced from the multilinear restriction estimate under a transversality assumption ([4]).

Multilinear restriction estimate for C^{1,α} hypersurfaces. For the adjoint restriction estimate, the surfaces are typically assumed to be compact and twice continuously differentiable. The same assumption was also made for the multilinear restriction estimate in [4, Theorem 1.16], but the phase functions Φ_± no longer have bounded second derivatives. Nevertheless, it is not difficult to see that the argument in [4] continues to work with C^{1,α} surfaces, α > 0. It seems to the authors that there is no proper reference concerning this matter, so we provide a brief discussion of the multilinear restriction estimate for the less regular C^{1,α} surfaces.

For k = 1, . . . , d, let U_k be a compact subset of an open set U′_k ⊂ B^{d−1}(0, 1) and let Φ_k be a real-valued function on U′_k which satisfies ‖Φ_k‖_{C^{1,α}(U′_k)} ≤ B for some 0 < α ≤ 1. Let us set
T_k g_k(x, t) = ∫_{U_k} e^{i(x·ξ − tΦ_k(ξ))} g_k(ξ) dξ.

Theorem 3.6.
Let d ≥ , θ ∈ (0 , , and let N k ( ξ ) = ( ∇ Φ k ( ξ ) , | ( ∇ Φ k ( ξ ) , | . Suppose | det( N ( ξ ) , . . . , N d ( ξ d )) | ≥ θ for ξ k ∈ U k , k = 1 , . . . , d . Then, for any ε > (cid:13)(cid:13) d Y k =1 T k g k (cid:13)(cid:13) L d − ( B d (0 ,R )) ≤ C ε ( θ ) R ε d Y k =1 k g k k L ( U k ) (3.24) whenever R ≥ . The constant C ε ( θ ) takes the form Cθ − C ε for a constant C ε > . The theorem holds with C curve, even with Lipschitz curve when d = 2 butit is unknown whether the same continues to be true in higher dimensions. Onceone makes a couple of crucial observations concerning the C ,α surfaces, it is notdifficult to prove Theorem 3.6 through routine adaptation of the arguments in [4,Proposition 2.1]. Instead of reproducing them here we provide a sketch of the proof.We refer the reader to [4, 2] for the details.For the proof of Theorem 3.6, first of all, we note that(3.25) | Φ k ( ξ + h ) − Φ k ( ξ ) − ∇ Φ k ( ξ ) · h | ≤ CB | h | α +1 whenever ξ + h, ξ ∈ U k . If Φ k is assumed to be in C ,α ( U k ) instead of C ,α ( U ′ k ),this can not be completely clear. In such a case we need to impose an additionalcondition such that U k has a C ,α boundary (e.g. see [8, pp. 136–137]). Onthe other hand, if U k is convex, (3.25) is a simple consequence of the mean valuetheorem. Since U k is compact, there is a positive number ρ k such that x, y arecontained in a ball which is a subset of U ′ k whenever x, y ∈ U k and | x − y | ≤ ρ k . Therefore we get (3.25) for | h | ≤ ρ k and this is enough to show (3.25) for any ξ, ξ + h ∈ U k because U k is compact and ∇ Φ k is continuous. Sketch of proof.
Let us denote Σ k = { ( ξ, − Φ k ( ξ )) : ξ ∈ U k } . We consider theestimate (cid:13)(cid:13) d Y k =1 c G k (cid:13)(cid:13) L d − ( B d (0 ,R )) ≤ C R − d d Y k =1 k G k k L ( R d ) (3.26)for R ≥ G k is supported in Σ k (1 /R ) := { ( ξ, τ ) ∈ R d − × R : dist (( ξ, τ ) , Σ k ) < /R } . The estimate (3.24) is equivalent to (3.26) with C = CR ε (see [4]). Let C ( R ) be the infimum of C with which (3.26) holds. The key part of the proof isto establish the implication(3.27) C ( R ) ≤ R b = ⇒ C ( R ) ≤ C ( θ, ε ) R b α + ε for any ε > b is a positive constant. Via iteration the exponent of R canbe suppressed to be arbitrary small and hence we get the estimate (3.24).Using (3.25) we see that the set Σ k (1 /R ) ∩ B d ( ζ, R − / (1+ α ) ), ζ ∈ Σ k is containedin a C/R neighborhood of the tangent plane to Σ k at ζ . Thus Σ k (1 /R ) can be cov-ered with a collection { R kj } of finitely overlapping rectangles of dimensions about R − × R − / (1+ α ) × · · · × R − / (1+ α ) which are essentially tangential to Σ k (1 /R ).These rectangles provide a decomposition of G k = P j G kj while supp G kj ⊂ R kj .Thus, after applying the assumption C ( R ) ≤ R b to the integrals over the balls ofradius R / (1+ α ) which finitely overlap and cover B d (0 , R ), one can get the implica-tion (3.27) using the multilinear Kakeya estimate [4, 9] for the transversal collection T k of the tubes of width R / (1+ α ) and length R which have their axes parallel tothe normal vector of the surface Σ k . (cid:3) Making use of Theorem 3.6 we obtain the following.
Proposition 3.7.
Let θ₁, . . . , θ₄ ∈ {c, +, −} and let J_k ∈ J°(δ), 1 ≤ k ≤ 4. Suppose that γ ∈ C^D(ε◦) and dist(J_ℓ, J_k) ≥ δ for ℓ ≠ k. Then, for any ε > 0 and R ≥ 1 there is a constant C_ε such that
(3.28) ‖ ∏_{k=1}^{4} |T^{θ_k}(χ̃_{R_{J_k}}(λ·) g_k)| ‖_{L^{2/3}(B⁴(0,R))} ≤ Cδ^{−C_ε} R^ε ∏_{k=1}^{4} ‖g_k‖₂.

Proof.
We begin with recalling that e χ R Jk ( λ · ) is supported in λ − R J k (2 ) and that | R γ ( ξ ) | ≤ δ if ξ ∈ C c ( δ ) or C o ( δ ). Since ∇ ξ Φ c ( ξ ) = γ ( σ ( ξ )) + γ ′ ( σ ( ξ )) · ξ ∇ σ ( ξ ),we have ∇ ξ Φ c ( ξ ) = γ ( σ ( ξ )) + O ( δ ) for ξ ∈ C c ( δ ). If ξ ∈ C o ( δ ), by (3.16) we have ∇ ξ Φ ± ( ξ ) = γ ( σ ( ξ )) + O s (2 δ ) because | R γ ( ξ ) | ≤ δ . ThusN k ( ξ ) := | ( ∇ Φ θ k ( ξ ) , | − ( ∇ Φ θ k ( ξ ) , ξ, − Φ θ k ( ξ )) satisfiesN k ( ξ ) = ( γ ( σ ( ξ )) , p | γ ( σ ( ξ )) | + 1 + O s (2 δ ) , ξ ∈ C θ k ( δ ) , k = 1 , . . . , , where we denote C ± ( δ ) = C o ( δ ).Let ξ k ∈ λ − R J k (2 ) ∩ C θ k ( δ ), k = 1 , . . . ,
4. Then we have σ(ξ_k) ∈ [−c◦, c◦] since J_k ⊂ (1 + 2c◦)J◦. Let Γ denote the matrix whose k-th column is the vector (γ(σ(ξ_k)), 1), k = 1, . . . ,
4. By the generalized mean value theorem (see for example [22, Part V, Ch. 1, 95]) there exist u₁, u₂, u₃, u₄ ∈ [−c◦, c◦] such that
det Γ = det( γ(u₁) γ′(u₂) γ″(u₃) γ‴(u₄) ; 1 0 0 0 ) ∏_{1≤ℓ<k≤
4. That is to say, thetransversality condition holds uniformly regardless of the choice of θ , . . . , θ ∈{ c , + , −} .We now note that Φ c is continuously differentiable at least twice in a regioncontaining C c ( δ ) and that k Φ ± k C , / ( C o ( δ )) ≤ C by Lemma 3.5. To apply Theorem3.6 we need only to make it sure that Φ ± extends as a C , / function to an openset containing C o ( δ ). The only part of the boundary which can be problematic is S := { ξ : R γ ( ξ ) = 0 } ∩ C o ( δ ) since Φ ± is homogenous and D − { ξ : R γ ( ξ ) = 2 δ } ∩ C o ( δ ) (see Lemma 2.12 and 3.3). We notethat R γ ( ξ ) = 0 if and only if g ( ξ ) := γ ′ ( σ ( ξ )) · ξ = 0. Since ∇ g ( ξ ) = γ ′′ ( σ ( ξ )) = e + O s (6 c ◦ ) for ξ ∈ A ∗ by Lemma 2.12 and since g ∈ C D − ( A ∗ ), by the implicitfunction theorem it follows that S is a part of a C D − boundary. Thus we canextend Φ ± to be a C , / function across S (e.g., [8, pp. 136–137]). Therefore wemay apply Theorem 3.6 and get the estimate (3.28). (cid:3) As Φ c , Φ ± are homogeneous of degree 1, the following is an immediate conse-quence of Proposition 3.7 by means of scaling and Plancherel’s theorem. Corollary 3.8.
Under the same assumptions as in Proposition 3.7, for any ε > 0 there is a C_ε = C_ε(δ) > 0 such that
(3.29) ‖ ∏_{k=1}^{4} |T^{θ_k}_λ(χ̃_{R_{J_k}} f̂_k)| ‖_{L^{2/3}(B(0,1))} ≤ C_ε λ^ε ∏_{k=1}^{4} ‖f_k‖₂.

3.3. Multilinear estimate for A_γ[ψ_{J_k}] P_n P_{J_k}. We are ready to prove Proposition 3.1. We first show the multilinear estimate in L^q(R³ × I), from which we deduce the weighted estimate.

Proposition 3.9.
Let J k ∈ J ◦ ( δ ) , ≤ k ≤ . Suppose that dist ( J ℓ , J k ) ≥ δ , ℓ = k .If /q = 5 / (8 p ) + 1 / and ≤ p ≤ , for ε > there are constants C ε = C ε ( δ ) and D = D ( ε ) such that (cid:13)(cid:13)(cid:13) Y k =1 | A γ [ ψ J k ] P n P J k f k | (cid:13)(cid:13)(cid:13) L q ( R × I ) ≤ C ε λ − p − + ε Y k =1 k f k k L p ( R ) (3.30) whenever γ ∈ C D ( ε ◦ ) , ψ J k ∈ N D ( J k ) , and b f k is supported on A λ . By the localization argument it is sufficient for the estimate (3.30) to show itslocal counterpart. In fact, we have
Lemma 3.10.
Let 1 ≤ p ≤ q ≤ ∞ and b ∈ R, and let I′ ⊂ I be an interval. Let γ ∈ C^D(ε◦), ω ∈ Ω_α, 0 < α ≤ 1, and ψ_{J_k} ∈ N D(J_k), J_k ∈ J°(δ), 1 ≤ k ≤ 4. If
(3.31) ‖ ∏_{k=1}^{4} |A_γ[ψ_{J_k}] P_n P_{J_k} f_k| ‖_{L^q(B(0,1)×I′,ω)} ≤ Bλ^b [ω]_α^{1/q} ∏_{k=1}^{4} ‖f_k‖_{L^p(R³)}
holds for a large enough D = D(b), then we have
(3.32) ‖ ∏_{k=1}^{4} |A_γ[ψ_{J_k}] P_n P_{J_k} f_k| ‖_{L^q(R³×I′,ω)} ≤ C_δ Bλ^b [ω]_α^{1/q} ∏_{k=1}^{4} ‖f_k‖_{L^p(R³)}.

Proof.
Let K_k(·, t) denote the kernel of the operator A_γ[ψ_{J_k}] P_n P_{J_k}. We note that the multiplier of P_n P_{J_k} is given by m(ξ) = χ̃_{A*_λ}(ξ) β(δ^{−1}|R_γ(ξ)|) χ̃_{R_{J_k}}(ξ) and ‖m(λ·)‖_{C^M} ≤ C δ^{−CM} for M ≤ D − 2. Since |γ(s)| ≤ c_∘ + ε_∘ for s ∈ J_k, by Lemma 2.3 we have |K_k(x, t)| ≤ C_δ E_M(x) for M ≤ (D − 2)/2 whenever t ∈ I and |x| is sufficiently large.

For k ∈ Z³ set B_k = B(k, 1), and let B′_k denote the ball concentric with B_k of a fixed larger radius. Then

|A_γ[ψ_{J_k}] P_n P_{J_k} f| ≤ Σ_{k ∈ Z³} χ_{B_k} |A_γ[ψ_{J_k}] P_n P_{J_k}(χ_{B′_k} f)| + C_δ E_M ∗ |f|.

Taking M = 4N + 9 and combining this with the trivial estimate |A_γ[ψ_{J_k}] P_n P_{J_k} g| ≤ C_δ λ^C (1 + |·|)^{−N} ∗ |g|, we see that ∏_{k=1}^{4} |A_γ[ψ_{J_k}] P_n P_{J_k} f_k| is bounded by

Σ_{k ∈ Z³} χ_{B_k} ∏_{k=1}^{4} |A_γ[ψ_{J_k}] P_n P_{J_k}(χ_{B′_k} f_k)| + C_δ ∏_{k=1}^{4} (E_N ∗ |f_k|).

Since ‖E_N ∗ |f|‖_{L^q(R³ × I′, ω)} ≤ C [ω]_α^{1/q} λ^{−N} ‖f‖_p for 1 ≤ p ≤ q, taking a large N ≥ −b, we may disregard the second term. We now use (3.31) to get

‖ ∏_{k=1}^{4} |A_γ[ψ_{J_k}] P_n P_{J_k}(χ_{B′_k} f_k)| ‖_{L^q(B_k × I′, ω)} ≤ B λ^b [ω]_α^{1/q} ∏_{k=1}^{4} ‖χ_{B′_k} f_k‖_{L^p(R³)}.

Thus the desired estimate (3.32) follows by taking summation over k and Hölder's inequality, since the balls B′_k have bounded overlap. □

Thanks to Lemma 3.10, the proof of Proposition 3.9 is reduced to showing

‖ ∏_{k=1}^{4} |A_γ[ψ_{J_k}] P_n P_{J_k} f_k| ‖_{L^q(B(0,1) × I)} ≤ C_ε λ^{e(p)+ε} ∏_{k=1}^{4} ‖f_k‖_{L^p(R³)}   (3.33)

for p, q satisfying 1/q = 5/(8p) + 1/16 and 2 ≤ p ≤ 6, with an exponent e(p) < 0. Since ‖P_n P_{J_k} g‖_p ≤ C_δ ‖g‖_p by (2.25), using the estimate (2.28) with p = 6 after Hölder's inequality, we get the estimate (3.33) with p = 6. Thus, in view of interpolation, we only have to obtain

‖ ∏_{k=1}^{4} |A_γ[ψ_{J_k}] P_n P_{J_k} f_k| ‖_{L^{8/3}(B(0,1) × I)} ≤ C_ε λ^{e(2)+ε} ∏_{k=1}^{4} ‖f_k‖_{L^2(R³)}.   (3.34)

Proof of (3.34). For a given ε > 0 we choose ν so that 10ν < ε, and then take an integer D such that D ≥ C/ν with a large constant C. For simplicity let us set

(3.35)   F_k = A_γ[ψ_{J_k}] P_n P_{J_k} f_k,   k = 1, …, 4.
By Lemmas 3.2 and 3.4, we have F_k = F^c_k + F^+_k + F^−_k + 𝓔f_k, k = 1, …, 4, where 𝓔 satisfies ‖𝓔f_k‖_q ≤ C_δ λ^{C−νD} ‖f_k‖_p for 1 ≤ p ≤ q ≤ ∞, and

F^c_k = Σ_{|ℓ| ≤ λ^ν} e^{itℓ} T^c_λ(c_ℓ π_c χ̃_{R_{J_k}} f̂_k),   F^±_k = Σ_{0 ≤ m ≤ M−1} t^{−(m+1)/2} T^±_λ(γ^±_m π_o χ̃_{R_{J_k}} f̂_k).

We thus need to handle the product terms ∏_{k=1}^{4} h_k where h_k ∈ {F^c_k, F^±_k, 𝓔f_k}, 1 ≤ k ≤ 4. Any product which has 𝓔f_k as one of its factors is easily handled by taking C large enough, if one uses Hölder's inequality and the trivial estimates ‖T^c_λ(π_c ĝ)‖_q ≤ C_δ λ^C ‖g‖_p and ‖T^±_λ(π_o ĝ)‖_q ≤ C_δ λ^C ‖g‖_p, which hold for 1 ≤ p ≤ q ≤ ∞. So, it suffices to obtain the estimates for the products which consist only of the terms F^c_k, F^±_k. By (3.2) and (3.18) we have Σ_{|ℓ| ≤ λ^ν} λ^{−ν} ‖c_ℓ‖_∞ ≤ C λ^ν and Σ_{ℓ=0}^{M−1} ‖γ^±_ℓ‖_∞ ≤ C_δ λ^{−c₀−ν} for some fixed c₀ > 0. Thus, using the estimate (3.29) and Plancherel's theorem, we obtain

‖ ∏_{k=1}^{4} |F^{θ_k}_k| ‖_{L^{8/3}(B(0,1))} ≤ C_ε λ^{e(2)+10ν+ε} ∏_{k=1}^{4} ‖f_k‖_2,

where θ_k ∈ {c, +, −} for 1 ≤ k ≤ 4. Therefore we get (3.34). □
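For orientation, the exponent relation in (3.33) can be checked against both endpoints of the interpolation (a quick verification of ours, using only the relation 1/q = 5/(8p) + 1/16 stated above):

```latex
\frac1q = \frac{5}{8p} + \frac{1}{16}:\qquad
p = 6:\ \ \frac1q = \frac{5}{48} + \frac{3}{48} = \frac16\ \ (q = 6);\qquad
p = 2:\ \ \frac1q = \frac{5}{16} + \frac{1}{16} = \frac38\ \ \bigl(q = \tfrac83\bigr).
```

Thus the p = 6 case matches the L⁶ estimate furnished by (2.28), and the p = 2 case corresponds to the L² estimate (3.34) just proved.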
Proof of Proposition 3.1. We are now in a position to prove Proposition 3.1. By Lemma 3.10, it suffices to show that

‖ ∏_{k=1}^{4} |A_γ[ψ_{J_k}] P_n P_{J_k} f_k| ‖_{L^p(B(0,1) × [1,2], ω)} ≤ C_δ λ^{−ε_p} ∏_{k=1}^{4} ‖f_k‖_{L^p(R³)}   (3.36)

for 14/5 < p ≤ 6. The exponent p/4 may be smaller than 1; however, the Fourier transform of P_n P_{J_k} f_k is supported in B(0, λ), and the following lemma enables us to deal with the case p/4 < 1.

Lemma 3.11.
Let 0 < p ≤ ∞, 0 < α ≤ d, and ω ∈ Ω_α. Suppose that F ∈ L^p(R^d, ω) and F̂ is supported in B(0, λ). Then we have

‖F‖_{L^p(R^d, ω)} ≤ C [ω]_α^{1/p} λ^{(d−α)/p} ‖F‖_{L^p(R^d)}.   (3.37)

Proof. Note that ω dxdt ≪ dxdt since ω ∈ Ω_α. Thus it follows that ‖F‖_{L^∞(R^d, ω)} ≤ ‖F‖_{L^∞(R^d)}. When 1 ≤ p < ∞, as already seen, (3.37) is a simple consequence of Hölder's inequality, so we only need to consider p ∈ (0, 1). Take φ ∈ S(R^d) such that φ̂ = 1 on B(0, 1) and φ̂ is supported in B(0, 2). Then F = F ∗ φ_λ since F̂ is supported in B(0, λ). We first claim that

(3.38)   |F|^p ≤ C |F|^p ∗ |φ|^p_λ,

where we denote |φ|^p_λ = (|φ|^p)_λ. Once we have (3.38), the proof of (3.37) is rather straightforward. Indeed, by (3.38) it follows that ‖F‖^p_{L^p(R^d, ω)} ≤ C ∫ |F(x)|^p (|φ|^p_λ ∗ ω)(x) dx ≤ C ‖F‖^p_{L^p(R^d)} ‖|φ|^p_λ ∗ ω‖_∞. Since ‖|φ|^p_λ ∗ ω‖_∞ ≤ C λ^{d−α} [ω]_α, this gives (3.37).

We now turn to the proof of (3.38). By scaling we may assume λ = 1; otherwise one may replace F with F(·/λ). To show (3.38) when λ = 1, we first notice that

|F| ∗ |φ|(x) = ∫ |F(y) φ(x − y)| dy ≤ ‖F φ(x − ·)‖^{1−p}_∞ · (|F|^p ∗ |φ|^p)(x).

Since the Fourier transform of
F φ(x − ·) is supported in B(0, 3), we have F φ(x − ·) = (F φ(x − ·)) ∗ φ̃ for a Schwartz function φ̃ whose Fourier transform equals 1 on B(0, 3), and hence ‖F φ(x − ·)‖_∞ ≤ C ‖F φ(x − ·)‖_1 = C (|F| ∗ |φ|)(x). Combining this and the inequality above gives (|F| ∗ |φ|)^p ≤ C |F|^p ∗ |φ|^p. Since |F| ≤ |F| ∗ |φ|, we get (3.38) with λ = 1. □

Proof of (3.36). Here we keep using the simpler notation (3.35). The estimate (3.36) follows if we show

(3.39)   ‖ ∏_{k=1}^{4} |χ̃ F_k| ‖_{L^p(B(0,1) × I, ω)} ≤ C_δ λ^{−ε_p} ∏_{k=1}^{4} ‖f_k‖_{L^p(R³)}.

We deduce the weighted estimate from Proposition 3.9 in the same way as in the proof of Proposition 2.18. The difference is that we are dealing with a multilinear estimate and the exponent p/4 may be smaller than 1. Write χ̃ F_k = Ã_k f_k + 𝓔_k f_k, where

F(Ã_k f_k)(ξ, τ) = β((λr)^{−1} τ) F(χ̃ F_k)(ξ, τ)

and r = 1 + 4 max{|γ(s)| : s ∈ supp ψ_{J_k}, k = 1, …, 4}. By Lemma 2.6 we have |𝓔_k f_k(x, t)| ≤ C Ẽ^t_M ∗ |f_k|(x) for any M. Thus, taking a large M and using the trivial estimate |χ̃ F_k| ≤ C λ^C (1 + |·|)^{−M} ∗ |f_k|, one can easily see

‖ ∏_{k=1}^{4} |χ̃ F_k| ‖_{L^q(B(0,1) × I, ω)} ≤ C ‖ ∏_{k=1}^{4} |Ã_k f_k| ‖_{L^q(R⁴, ω)} + C λ^{−N} ∏_{k=1}^{4} ‖f_k‖_{L^p(R³)}

for a large N. Since [ω]_α ≤ 1 and the Fourier support of ∏_{k=1}^{4} Ã_k f_k is contained in a ball of radius 2rλ, the inequality (3.37) gives ‖ ∏_{k=1}^{4} |Ã_k f_k| ‖_{L^q(R⁴, ω)} ≤ C λ^{(4−α)/q} ‖ ∏_{k=1}^{4} |Ã_k f_k| ‖_{L^q(R⁴)}. To estimate the last one in L^q(R⁴), using |𝓔_k f_k(x, t)| ≤ C Ẽ^t_M ∗ |f_k| again and disregarding the minor contributions, it suffices to obtain the bound on ‖ ∏_{k=1}^{4} |χ̃ F_k| ‖_{L^q(R⁴)}. Since supp χ̃ ⊂ I, by the estimate (3.30) we get

‖ ∏_{k=1}^{4} |χ̃ F_k| ‖_{L^q(B(0,1) × I, ω)} ≤ C_ε(δ) λ^{e(p)+ε} ∏_{k=1}^{4} ‖f_k‖_{L^p(R³)}

for 1/q = 5/(8p) + 1/16 and 2 ≤ p ≤ 6, where the exponent e(p) is negative for p > 14/5. Finally, since ‖ω‖_{L¹(B(0,1) × I)} ≤ C [ω]_α and q ≥ p, Hölder's inequality yields (3.39) for 14/5 < p ≤ 6. □

Remark. In the above we tried to obtain the estimate (3.1) on a range of p as large as possible by taking ν arbitrarily small (Proof of (3.34)). This forces us to take a large D ≥ C/ν. However, to obtain the maximal estimate it is enough to have the estimate (3.1) on the smaller range 3 < p ≤ 6 instead of 14/5 < p ≤ 6. For 3 < p ≤ 6, we can prove (3.1) with a fixed ν and D. For example, optimizing the estimates at various places, we can take ν = 1/397 and D = 720. In other words, Theorem 1.1 holds true for γ ∈ C^{720}(J).

4. Proof of Theorem 1.1
In this section we complete the proof of Theorem 1.1. We prove the sufficiency and the necessity parts in separate subsections.

4.1. Sufficiency. By the reduction in Section 2.4 (Lemma 2.4) and Lemma 2.2, it suffices to prove Proposition 2.10, which also proves Theorem 2.1.
Decomposition. We first decompose the averaging operator A_γ[ψ] in such a way that we can use the multilinear estimate obtained in Section 3. The following Lemma 4.1 is a slight modification of [11, Lemma 2.8]. Let us set

J_*(δ) = {(J₁, …, J₄) : J₁, …, J₄ ∈ J_∘(δ), min_{ℓ ≠ k} dist(J_ℓ, J_k) ≥ δ}.

Lemma 4.1. Let ψ ∈ N^D(J_∘) and γ ∈ C^D(ε_∘). There is a constant C = C(D), independent of z = (x, t), γ, and δ, such that

(4.1)   |A_γ[ψ]f(z)| ≤ C max_{J ∈ J_∘(δ)} |A_γ[ψ_J]f(z)| + C δ^{−1} Σ_{(J₁,…,J₄) ∈ J_*(δ)} ∏_{k=1}^{4} |A_γ[ψ_{J_k}]f(z)|^{1/4},

where ψ_J ∈ N^D(J) and ψ_{J_k} ∈ N^D(J_k).

Proof. Let us recall (2.14). It is clear that there is a constant C_D > 0 such that C_D^{−1} ψ ζ_J ∈ N^D(J) for J ∈ J_∘(δ). Setting ψ_J = C_D^{−1} ψ ζ_J, we have

A_γ[ψ]f(z) = C_D Σ_{J ∈ J_∘(δ)} A_γ[ψ_J]f(z).

Let us set J_1 = J_∘(δ). For a fixed z, define J*_1 to be an interval in J_1 such that |A_γ[ψ_{J*_1}]f(z)| = max_{J ∈ J_1} |A_γ[ψ_J]f(z)|. For k = 2, 3, 4, we recursively define J_k and J*_k ∈ J_k: let J_k = {J ∈ J_{k−1} : dist(J, J*_{k−1}) ≥ δ} and let J*_k ∈ J_k denote an interval such that |A_γ[ψ_{J*_k}]f(z)| = max_{J ∈ J_k} |A_γ[ψ_J]f(z)|. Thus, if dist(J, J*_k) ≥ δ for all 1 ≤ k ≤ 3, we have |A_γ[ψ_J]f| ≤ |A_γ[ψ_{J*_k}]f| for 1 ≤ k ≤ 4. Let 𝒥 = ∪_{k=1}^{4} {J ∈ J_∘(δ) : dist(J, J*_k) < δ}. Splitting the sum into the cases J ∈ 𝒥 and J ∉ 𝒥, we have

C_D^{−1} |A_γ[ψ]f(z)| ≤ Σ_{J ∈ 𝒥} |A_γ[ψ_J]f(z)| + Σ_{J ∉ 𝒥} |A_γ[ψ_J]f(z)|.

The first sum on the right hand side is clearly bounded by C max_{J ∈ J_∘(δ)} |A_γ[ψ_J]f(z)|, and the second by C δ^{−1} ∏_{k=1}^{4} |A_γ[ψ_{J*_k}]f(z)|^{1/4}. This gives (4.1) since dist(J*_k, J*_ℓ) ≥ δ if k ≠ ℓ. □

The next lemma gives control over the first term on the right hand side of (4.1).
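Before moving on, the selection argument behind (4.1) can be sanity-checked on toy data. The sketch below is ours: the hypothetical values a_i stand in for |A_γ[ψ_J]f(z)| on a grid of length-δ intervals, and an index gap of at least 2 models δ-separated intervals. It verifies the two bounds used in the proof: the "narrow" terms (those close to some selected interval) are dominated by a bounded multiple of the maximum, and every "broad" term is dominated by the geometric mean of the four selected maxima.

```python
import random

def select_stars(a, num=4, gap=2):
    """Greedy selection from the proof of Lemma 4.1: J*_1 maximizes a_J;
    each subsequent J*_k maximizes over the intervals that remain after
    discarding those too close (index gap < `gap`) to the previous pick."""
    pool = set(a)
    stars = []
    for _ in range(num):
        j = max(pool, key=lambda i: a[i])
        stars.append(j)
        pool = {i for i in pool if abs(i - j) >= gap}
    return stars

def broad_narrow_holds(a):
    """Toy analogue of (4.1): check that
    sum_J a_J <= 16 * max_J a_J + N * (prod_k a_{J*_k})**(1/4)."""
    stars = select_stars(a)
    # "narrow" indices: within distance < delta of some selected interval
    near = {i for i in a if any(abs(i - j) < 2 for j in stars)}
    narrow = sum(a[i] for i in near)                 # at most 12 terms
    broad = sum(a[i] for i in a if i not in near)    # each <= prod**(1/4)
    prod = 1.0
    for j in stars:
        prod *= a[j]
    return narrow <= 16 * max(a.values()) and broad <= len(a) * prod ** 0.25

random.seed(1)
a = {i: random.random() for i in range(40)}  # a_i plays |A_gamma[psi_J]f(z)|
print(broad_narrow_holds(a))  # prints True
```

The point, as in the proof, is that any interval far from all four selected ones survived every round of the selection, so its value is dominated by each of the four maxima, hence by their geometric mean.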
Lemma 4.2.
Let 2 ≤ p ≤ 6, and let [ω]_α ≤ 1 and ψ_J ∈ N^D(J) for each J ∈ J_∘(δ). If δλ ≥ 1 and ε_∘ > 0 is sufficiently small, there is an ε_p > 0 such that

‖ max_{J ∈ J_∘(δ)} |A_γ[ψ_J]f| ‖_{L^p(R³ × [1,2], ω)} ≤ C (δ^{1−3/p} K_δ(λ) + C_δ λ^{−ε_p}) ‖f‖_{L^p(R³)}

whenever γ ∈ C^D(ε_∘) and f̂ is supported in A_λ.

Proof of Lemma 4.2. By the embedding ℓ^p ⊂ ℓ^∞ and Minkowski's inequality,

‖ max_{J ∈ J_∘(δ)} |A_γ[ψ_J]f| ‖^p_{L^p(R³ × [1,2], ω)} ≤ 2^p (I + II),

where

I = Σ_{J ∈ J_∘(δ)} ‖A_γ[ψ_J] P_J f‖^p_{L^p(R³ × [1,2], ω)},   II = Σ_{J ∈ J_∘(δ)} ‖A_γ[ψ_J](f − P_J f)‖^p_{L^p(R³ × [1,2], ω)}.

For II we apply Proposition 2.18. Taking ε_p > 0 sufficiently small and using the estimate (2.29) with ε = ε_p/2, we have II^{1/p} ≤ C_δ λ^{−ε_p} ‖f‖_{L^p(R³)} since there are at most C δ^{−1} many J. To handle I, we invoke Lemma 2.11 and then use Lemma 2.14 to obtain

I ≤ C δ^{p−3} K_δ(λ)^p Σ_{J ∈ J_∘(δ)} ‖P_J f‖^p_p ≤ C δ^{p−3} K_δ(λ)^p ‖f‖^p_p.

Therefore the desired bound follows. □
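The arithmetic behind the first term is worth recording (an annotation of ours, assuming the power δ^{p−3} in the bound for I above):

```latex
I^{1/p} \le C\,\delta^{(p-3)/p} K_\delta(\lambda)\,\|f\|_p
        = C\,\delta^{1-3/p} K_\delta(\lambda)\,\|f\|_p,
\qquad 1 - \tfrac{3}{p} > 0 \iff p > 3.
```

The positivity of 1 − 3/p is what allows a small δ to absorb the loss |log δ| δ^{−b} when the induction on Q(λ) is closed in the proof of (2.18) below; this is where the threshold p > 3 enters the sufficiency part.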
Now we consider the product terms appearing in (4.1).
Lemma 4.3.
Let 14/5 < p ≤ 6, [ω]_α ≤ 1, and (J₁, …, J₄) ∈ J_*(δ). If δλ ≥ 1 and ε_∘ > 0 is small enough, there are positive constants ε_p, c, D such that

‖ ∏_{k=1}^{4} |A_γ[ψ_{J_k}]f|^{1/4} ‖_{L^p(R³ × [1,2], ω)} ≤ C_δ (λ^{−ε_p} + λ^{−c} K_δ(λ)) ‖f‖_{L^p(R³)}   (4.2)

whenever γ ∈ C^D(ε_∘), ψ_{J_k} ∈ N^D(J_k), k = 1, …, 4, and f̂ is supported in A_λ.

Proof. For each 1 ≤ k ≤ 4 write f = b_k + g_k, where

b_k = P_n P_{J_k} f,   g_k = P_n(1 − P_{J_k})f + P_e f.

We here use f = P_n f + P_e f, which holds because f̂ is supported in A_λ. Thus, the left hand side of (4.2) is bounded by a constant times

M = Σ_{h_k ∈ {b_k, g_k}} ‖ ∏_{k=1}^{4} |A_γ[ψ_{J_k}] h_k|^{1/4} ‖_{L^p(R³ × [1,2], ω)}.

We first consider the case (h₁, …, h₄) = (b₁, …, b₄). For this case we use Proposition 3.1 and the estimate (2.25). Since 14/5 < p ≤ 6, there is an ε_p > 0 such that

‖ ∏_{k=1}^{4} |A_γ[ψ_{J_k}] b_k|^{1/4} ‖_{L^p(R³ × [1,2], ω)} ≤ C_δ λ^{−ε_p} ‖f‖_{L^p(R³)}.

For the other cases we combine Propositions 2.18, 2.19 and Lemma 2.11. In fact, Propositions 2.18 and 2.19 followed by (2.25) yield

‖A_γ[ψ_{J_k}] g_k‖_{L^p(R³ × [1,2], ω)} ≤ C_ε δ^{−C} λ^{e(p)+ε} ‖f‖_{L^p(R³)}

for 2 ≤ p ≤ 6, where the exponent e(p) is negative for p > 14/5. If we consider the particular case (h₁, h₂, h₃, h₄) = (b₁, b₂, b₃, g₄), by Hölder's inequality and the above estimate we have

‖ ∏_{k=1}^{4} |A_γ[ψ_{J_k}] h_k|^{1/4} ‖_{L^p(R³ × [1,2], ω)} ≤ C_δ λ^{−c} ‖f‖^{1/4}_{L^p(R³)} ∏_{k=1}^{3} ‖A_γ[ψ_{J_k}] b_k‖^{1/4}_{L^p(R³ × [1,2], ω)}

for a constant c > 0, since p > 14/5. We apply Lemma 2.11 to handle the last three factors. Since ‖b_k‖_p ≤ C δ^{−C} ‖f‖_{L^p(R³)} from (2.25), the inequality (2.19) gives

‖ ∏_{k=1}^{4} |A_γ[ψ_{J_k}] h_k|^{1/4} ‖_{L^p(R³ × [1,2], ω)} ≤ C_δ λ^{−c} K_δ(λ)^{3/4} ‖f‖_{L^p(R³)}.

We can deal with the remaining products similarly. As a consequence, we obtain

M ≤ C_δ (λ^{−ε_p} + Σ_{ℓ=1}^{3} λ^{−(4−ℓ)c} K_δ(λ)^{ℓ/4}) ‖f‖_{L^p(R³)}

and therefore the bound (4.2) after a simple manipulation, since we may assume ε_p ≤ c, taking a smaller ε_p if necessary. □

We now conclude the proof of (2.18) by putting together the previous estimates.
Proof of (2.18). Since ‖Af‖_{L^∞(R³ × [1,2], ω)} ≤ C ‖f‖_{L^∞(R³)}, by interpolation it is sufficient to show (2.18) for 3 < p < 6. Let p ∈ (3, 6) and take an ε_∘ > 0 and D such that the estimates in Lemmas 4.2 and 4.3 hold whenever γ ∈ C^D(ε_∘) and ψ_J ∈ N^D(J), J ∈ J_∘(δ).

Let γ ∈ C^D(ε_∘), ω ∈ Ω with [ω] ≤ 1, ψ ∈ N^D(J_∘), and let f be a function such that supp f̂ ⊂ A_λ and ‖f‖_p ≤ 1. By (4.1) and Minkowski's inequality we see that ‖A_γ[ψ]f‖_{L^p(R³ × [1,2], ω)} is bounded by

C ‖ max_{J ∈ J_∘(δ)} |A_γ[ψ_J]f| ‖_{L^p(R³ × [1,2], ω)} + C δ^{−1} Σ_{(J₁,…,J₄) ∈ J_*(δ)} ‖ ∏_{k=1}^{4} |A_γ[ψ_{J_k}]f|^{1/4} ‖_{L^p(R³ × [1,2], ω)}.

Then Lemmas 4.2 and 4.3 give

‖A_γ[ψ]f‖_{L^p(R³ × [1,2], ω)} ≤ C (δ^{1−3/p} + λ^{−c}) K_δ(λ) + C_δ λ^{−ε_p}

if 2δ^{−1} ≤ λ. Taking the supremum over f, ω, ψ, and γ, we obtain

(4.3)   Q(λ) ≤ C (δ^{1−3/p} + λ^{−c}) K_δ(λ) + C_δ λ^{−ε_p}   for 2δ^{−1} ≤ λ.

In order to close the induction we need to modify Q(λ) slightly. Fix 0 < b, which is to be chosen later. We define

Q_b(λ) = sup_{1 ≤ r ≤ λ} r^b Q(r).

We observe that λ^b K_δ(λ) ≤ C δ^{−b} Σ 2^{jb} Q(2^j), where the sum runs over j with 2^j comparable to δλ, and hence we have λ^b K_δ(λ) ≤ C |log δ| δ^{−b} Q_b(2δλ). Multiplying both sides of (4.3) by λ^b, we thus get

λ^b Q(λ) ≤ C (δ^{1−3/p} + λ^{−c}) |log δ| δ^{−b} Q_b(2δλ) + C_δ λ^{b−ε_p}

for 2δ^{−1} ≤ λ. We now choose a small enough b such that 1 − 3/p − b > b and b − ε_p < 0, then fix a small enough δ > 0 such that C δ^{1−3/p} |log δ| δ^{−b} ≤ 2^{−1} and 2δ ≤ 1. Such a choice is clearly possible because p > 3. Let λ_∘ be a large number such that δ^{1−3/p} ≥ λ_∘^{−c} and 2δ^{−1} ≤ λ_∘. Then we have the inequality λ^b Q(λ) ≤ 2^{−1} Q_b(λ) + C_δ for λ ≥ λ_∘, since Q_b is increasing. This obviously implies λ^b Q(λ) ≤ 2^{−1} Q_b(r) + C_δ for λ_∘ ≤ λ ≤ r. Note that Q_b(λ_∘) ≤ λ_∘^b C₀ for a certain constant C₀ (because of the trivial estimate Q(λ) ≤ Cλ). Taking the supremum over λ ∈ [1, r] we get Q_b(r) ≤ 2^{−1} Q_b(r) + λ_∘^b C₀ + C_δ. Therefore we have Q_b(r) ≤ C₁ for a constant C₁ and conclude Q(λ) ≤ C λ^{−b} for λ ≥ 1. □

Remark. A routine adaptation of our argument proves the L^p improving property of the localized maximal operator M₁f(x) := sup_{1 ≤ t ≤ 2} |Af(x, t)|. In fact, if γ is smooth, the estimate ‖M₁f‖_{L^q(R³)} ≤ C ‖f‖_{L^p(R³)} holds provided that (1/p, 1/q) is contained in the interior of the triangle with vertices (0, 0), (1/2, 1/2), and (19/…, …). Moreover, M is bounded from L^p to L^p(dμ) for a range of p depending on α when μ is an α-dimensional measure with α in a suitable subrange of (2, 3).

4.2. Necessity.
To prove that the L^p boundedness of M fails for p ≤ 3, it is sufficient to show the next proposition. Our construction below is a modification of Stein's example in [32].
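For concreteness, the decomposition γ(s) = c₁(s)γ′(s) + c₂(s)γ′′(s) + c₃(s)γ′′′(s), which drives the proof below, can be computed explicitly for the moment curve (s, s², s³) mentioned in the introduction. The following plain-Python sketch (ours; Cramer's rule, no external libraries) confirms that det(γ′, γ′′, γ′′′) = 12 for every s, and that c₃(s) = s³/6, so a point s_∘ with c₃(s_∘) ≠ 0 indeed exists:

```python
def det3(u, v, w):
    """Determinant of the 3x3 matrix with columns u, v, w."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - v[0] * (u[1] * w[2] - u[2] * w[1])
            + w[0] * (u[1] * v[2] - u[2] * v[1]))

def coefficients(s):
    """Solve gamma = c1*gamma' + c2*gamma'' + c3*gamma''' at the point s
    for the moment curve gamma(s) = (s, s^2, s^3), via Cramer's rule."""
    g = (s, s**2, s**3)
    d1 = (1.0, 2 * s, 3 * s**2)   # gamma'
    d2 = (0.0, 2.0, 6 * s)        # gamma''
    d3 = (0.0, 0.0, 6.0)          # gamma'''
    D = det3(d1, d2, d3)          # nondegeneracy (1.1): D = 12, never zero
    c1 = det3(g, d2, d3) / D
    c2 = det3(d1, g, d3) / D
    c3 = det3(d1, d2, g) / D
    return c1, c2, c3

for s in (0.5, 1.0, 2.0):
    c1, c2, c3 = coefficients(s)
    # expected: c1 = s, c2 = -s^2/2, c3 = s^3/6
    print(round(c1, 9), round(c2, 9), round(c3, 9))
```

For a general curve satisfying (1.1) the same pointwise linear solve works, since the nondegeneracy condition makes the coefficient matrix invertible.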
Proposition 4.4.
Let p ≤ 3 and let ψ be a nonnegative continuous function supported in J. Suppose γ : J → R³ is a smooth curve satisfying (1.1). Then there is an h ∈ L^p(R³) such that Mh = ∞ on a nonempty open set.

Proof. Since ψ ≥ 0 and ψ ≠ 0, we may assume that ψ(s) ≥ c on an interval J₁ ⊂ J for some c > 0. By (1.1) we may additionally assume that |γ(s)| ≥ c on J₁, taking a subinterval of J₁ if necessary, because the condition (1.1) cannot be satisfied if there is no such subinterval.

Since γ′(s), γ′′(s), γ′′′(s) are linearly independent, we can write

(4.4)   γ(s) = c₁(s)γ′(s) + c₂(s)γ′′(s) + c₃(s)γ′′′(s),   s ∈ J₁,

for some smooth functions c₁, c₂, and c₃. We claim that there is an s_∘ ∈ J₁ such that c₃(s_∘) ≠ 0. Suppose that there is no such s_∘ ∈ J₁, that is to say, c₃(s) ≡ 0 on J₁. Differentiating both sides of (4.4), we have

(c′₁(s) − 1)γ′(s) + [c₁(s) + c′₂(s)]γ′′(s) + c₂(s)γ′′′(s) = 0,

which implies c₂(s) ≡ 0, c₁(s) + c′₂(s) ≡ 0, and c′₁(s) ≡ 1 for s ∈ J₁. This leads to a contradiction and proves the claim. Therefore there are s_∘ ∈ J₁ and δ > 0 such that

|c₃(s)| ≥ c,   s ∈ [s_∘ − δ, s_∘ + δ] ⊂ J₁,

for some c > 0. We only consider the case c₃(s) ≥ c since the other case can be handled similarly.

For x ∈ R³ let y = (y₁, y₂, y₃) denote the coordinates of x with respect to the basis {γ′(s_∘), γ′′(s_∘), γ′′′(s_∘)}, i.e., x = y₁γ′(s_∘) + y₂γ′′(s_∘) + y₃γ′′′(s_∘), and we set ȳ = (y₁, y₂). For some ε ∈ (0, 1/3) we take g(t) = χ_{[0, 2^{−1}]}(t) |t|^{−1/3} |log|t||^{−2/3−ε}, and then we consider

h(x) = χ(|ȳ|) g(y₃),

where χ ∈ C^∞([−2, 2]) with χ = 1 on [−1, 1]. It is clear that h ∈ L^p(R³) for p ≤ 3 since g ∈ L^p(R) for p ≤ 3. Thus we only have to show that sup_{t > 0} Ah(x, t) = ∞ on a nonempty open set. For s near s_∘, by Taylor's expansion we have

γ(s) = (c₁(s_∘) + (s − s_∘)) γ′(s_∘) + (c₂(s_∘) + (s − s_∘)²/2) γ′′(s_∘) + (c₃(s_∘) + (s − s_∘)³/6) γ′′′(s_∘) + O((s − s_∘)⁴).

Writing a(s) for the third coordinate of γ(s) in this basis,

y₃ − t a(s) = y₃ − t c₃(s_∘) − t ((s − s_∘)³/6 + O((s − s_∘)⁴)).

For y₃ > 0 take t = y₃/c₃(s_∘) > 0. Then it follows that

C₁ y₃ |s − s_∘|³ ≤ |y₃ − t a(s)| ≤ C₂ y₃ |s − s_∘|³

for some C₁, C₂ > 0, so

|g(y₃ − t a(s))| ≥ C y₃^{−1/3} |s − s_∘|^{−1} |log(y₃|s − s_∘|³)|^{−2/3−ε}

provided that |s − s_∘| < c′ for a small c′ > 0 and 0 < y₃ ≤ 1. Thus by our choice of δ and s_∘ we have

Ah(x, y₃/c₃(s_∘)) ≥ C y₃^{−1/3} ∫_{|s−s_∘| ≤ δ′} χ̃(y, s) |s − s_∘|^{−1} |log(y₃|s − s_∘|³)|^{−2/3−ε} ds

for 0 < y₃ ≤ δ′ := min(δ, c′), where χ̃(y, s) = χ(|ȳ − y₃ c₃(s_∘)^{−1} ā(s)|) and ā(s) denotes the first two coordinates of γ(s) in the basis above. Since χ̃(y, s) = 1 if |y| ≤ r_∘ for a small enough r_∘ > 0, we have

Ah(x, y₃/c₃(s_∘)) ≥ C y₃^{−1/3} ∫_{|s−s_∘| ≤ min(δ′, y₃^{1/3})} |s − s_∘|^{−1} |log|s − s_∘||^{−2/3−ε} ds = ∞

for y ∈ B(0, r_∘) ∩ {y : 0 < y₃}, as desired. □

Acknowledgement. This work was supported by the National Research Foundation of Korea (NRF) grants NRF2019R1A6A3A01092525 (Hyerim Ko) and NRF2018R1A2B2006298 (Sanghyuk Lee and Sewook Oh).

References

[1] T. C. Anderson, K. Hughes, J. Roos, A. Seeger, L^p → L^q bounds for spherical maximal operators, Math. Z. (2020), https://doi.org/10.1007/s00209-020-02546-0.
[2] J. Bennett, Aspects of multilinear harmonic analysis related to transversality, Harmonic analysis and partial differential equations, 1–28, Contemp. Math., 612, Amer. Math. Soc., Providence, RI, 2014.
[3] D. Beltran, R. Oberlin, L. Roncal, A. Seeger, B. Stovall, Variation bounds for spherical averages, arXiv:2009.07366.
[4] J. Bennett, A. Carbery, T. Tao, On the multilinear restriction and Kakeya conjectures, Acta Math. (2006), 261–302.
[5] J.
Bourgain, Averages in the plane over convex curves and maximal operators, J. Analyse Math. (1986), 69–85.
[6] J. Bourgain, C. Demeter, The proof of the ℓ² decoupling conjecture, Ann. of Math. (2015), 351–389.
[7] J. Bourgain, L. Guth, Bounds on oscillatory integral operators based on multilinear estimates, Geom. Funct. Anal. (2011), 1239–1295.
[8] D. Gilbarg, N. Trudinger, Elliptic partial differential equations of second order, Classics in Mathematics, Springer-Verlag, Berlin, 2001.
[9] L. Guth, The endpoint case of the Bennett–Carbery–Tao multilinear Kakeya conjecture, Acta Math. (2010), 263–286.
[10] L. Guth, H. Wang, R. Zhang, A sharp square function estimate for the cone in R³, Ann. of Math. (2020), 551–581.
[11] S. Ham, S. Lee, Restriction estimates for space curves with respect to general measures, Adv. Math. (2014), 251–279.
[12] L. Hörmander, The analysis of linear partial differential operators I: Distribution theory and Fourier analysis, 2nd ed., Springer-Verlag, 1990.
[13] I. Ikromov, M. Kempe, D. Müller, Estimates for maximal functions associated with hypersurfaces in R³ and related problems of harmonic analysis, Acta Math. (2010), 151–271.
[14] S. Lee, Endpoint estimates for the circular maximal function, Proc. Amer. Math. Soc. (2003), 1433–1442.
[15] ———, Square function estimates for the Bochner–Riesz means, Anal. PDE (2018), 1535–1586.
[16] S. Lee, A. Vargas, On the cone multiplier in R³, J. Funct. Anal. (2012), 925–940.
[17] P. Mattila, Fourier analysis and Hausdorff dimension, Cambridge Studies in Advanced Mathematics, 150, Cambridge University Press, Cambridge, 2015.
[18] G. Mockenhaupt, A. Seeger, C. Sogge, Wave front sets, local smoothing and Bourgain's circular maximal theorem, Ann. of Math. (1992), 207–218.
[19] D. Oberlin, R. Oberlin, Spherical means and pinned distance sets, Commun. Korean Math. Soc. (2015), 23–34.
[20] D. Oberlin, H. Smith, A Bessel function multiplier, Proc. Amer. Math. Soc. (1999), 2911–2915.
[21] D. Oberlin, H. Smith, C. Sogge, Averages over curves with torsion, Math. Res. Lett. (1998), 535–539.
[22] G. Pólya, G. Szegő, Problems and theorems in analysis, Die Grundlehren der mathematischen Wissenschaften, Band 216, Springer-Verlag, New York–Heidelberg, 1976.
[23] M. Pramanik, A. Seeger, L^p regularity of averages over curves and bounds for associated maximal operators, Amer. J. Math. (2007), 61–103.
[24] ———, L^p-Sobolev estimates for a class of integral operators with folding canonical relations, J. Geom. Anal. (2020), https://doi.org/10.1007/s12220-020-00388-0.
[25] J. Roos, A. Seeger, Spherical maximal functions and fractal dimensions of dilation sets, arXiv:2004.00984.
[26] W. Schlag, L^p → L^q estimates for the circular maximal function, Ph.D. thesis, California Institute of Technology, 1996.
[27] ———, A generalization of Bourgain's circular maximal theorem, J. Amer. Math. Soc. (1997), 103–122.
[28] ———, A geometric proof of the circular maximal theorem, Duke Math. J. (1998), 505–533.
[29] W. Schlag, C. D. Sogge, Local smoothing estimates related to the circular maximal theorem, Math. Res. Lett. (1997), 1–15.
[30] C. D. Sogge, Propagation of singularities and maximal functions in the plane, Invent. Math. (1991), 349–376.
[31] E. M. Stein, Harmonic Analysis: Real-Variable Methods, Orthogonality and Oscillatory Integrals, Princeton Univ. Press, Princeton, NJ, 1993.
[32] ———, Maximal functions: spherical means, Proc. Nat. Acad. Sci. USA (1976), 2174–2175.
[33] T. Wolff, Local smoothing type estimates on L^p for large p, Geom. Funct. Anal. (2000), 1237–1288.

Department of Mathematical Sciences and RIM, Seoul National University, Seoul 08826, Republic of Korea
Email address: [email protected]
Email address: [email protected]
Email address: