Resolution of Singularities in Two Dimensions and the Stability of Integrals
Michael Greenblatt
June 9, 2009
1. Background and Statement of Results
Suppose S(x, y) is a smooth real-valued function on a neighborhood of (0, 0) with S(0, 0) = 0. For a small open set U containing the origin and a small ε > 0, define M_{S,U}(ε) by

M_{S,U}(ε) = |{(x, y) ∈ U : |S(x, y)| < ε}|     (1.1)

If S(x, y) is real-analytic, then if U is sufficiently small, for some C_U > 0 we have

M_{S,U}(ε) = C_U ε^j |ln ε|^p + o(ε^j |ln ε|^p)     (1.2)

Here j is a positive rational number and p = 0 or 1. Also, using resolution of singularities, it can be shown that (j, p) is independent of U for small enough U. We refer to j as the growth index of S and to p as the multiplicity of this growth index. If S(x, y) is merely smooth, then it follows from [G1] that (1.2) still holds unless there is a smooth coordinate change fixing the origin after which the bisectrix intersects the Newton polygon of S in the interior of its horizontal ray (see below for the relevant definitions). It is also true that except in those exceptional situations, C_U is independent of U for small enough U. In the exceptional situations, we can at least say that there is a j such that for small enough U and some C_U > 0 one has M_{S,U}(ε) < C_U ε^j, while for j' > j there is no C'_U for which the estimate M_{S,U}(ε) < C'_U ε^{j'} holds. Thus in the smooth situation we also have a natural definition of the growth index of S(x, y) and its multiplicity.

A natural question to consider is the effect of perturbing S(x, y) on the growth index and its multiplicity. Besides being of intrinsic interest, these questions and their oscillatory integral analogues (described later in this section) are important in the analysis of Fourier transforms of surface-supported measures, such as in [IKeM] and [IoSa]. Complex and higher-dimensional analogues of these questions are also connected to various issues in complex geometry.

Since by the implicit function theorem one has M_{S,U}(ε) ∼ |∇S(0, 0)|^{−1} ε when ∇S(0, 0) ≠ 0 and U is sufficiently small, one typically assumes that S(x, y) has a critical point at the origin. That is, one assumes that S(0, 0) = 0 and ∇S(0,
0) = 0.

This research was supported in part by NSF grant DMS-0654073.

For the analogous problem in one dimension, the growth index is just the reciprocal of the order of vanishing of S(x) at x = 0, so a small perturbation of the phase can only result in the growth index staying the same or increasing. A famous example of Varchenko [V] shows that the analogous phenomenon does not necessarily hold in three or higher dimensions. Thus a general result can hold only in dimension two. The strongest result in the current literature is the following theorem due to Karpushkin [K3]. Let D_r denote the open disk in R^2 of radius r centered at the origin, and E_r the open disk in C^2 of radius r centered at the origin. For a function f(x, y) real-analytic on D_r, let f̃(z_1, z_2) denote the unique holomorphic extension of f(x, y) to E_r. Then Karpushkin's theorem is:

Theorem: ([K3]) Suppose S(x, y) is real-analytic on D_r satisfying S(0, 0) = 0 and ∇S(0, 0) = 0, such that S(x, y) has growth index j with multiplicity p at the origin. Then there is a δ > 0, an s < r, and a positive constant C_S depending on S(x, y) such that if f(x, y) is real-analytic on D_s and f̃(z_1, z_2) extends to a continuous function on the closure of E_s with |f̃(z_1, z_2)| < δ on that closure, then for all sufficiently small ε > 0,

M_{S+f,D_s}(ε) ≤ C_S ε^j |ln ε|^p     (1.3)

Note the uniformity both in s and in the constant C_S. Karpushkin's proofs involve ideas from singularity theory, in particular the theory of versal deformations, which turns arbitrary perturbations of S(x, y) into a number of canonical forms which then may be considered individually.

Another method for dealing with stability of M_{S,U}(ε) was introduced in [PSSt], where the above theorem is proven modulo the logarithmic factors. Their methods are often referred to in this subject as the method of algebraic estimates, and they also give partial analogues in higher dimensions (the example of Varchenko shows the full analogues are not feasible). Also, in the case of linear perturbations of smooth functions, results of the above nature are proven in [IKeM].

The purpose of this paper is to show how resolution of singularities algorithms in two dimensions, in conjunction with some one-dimensional Van der Corput-type lemmas, provide another method, which we will use to prove new estimates and theorems for M_{S,U}(ε) as well as for oscillatory integral analogues. Since these algorithms apply to all smooth functions, our theorems will hold for all smooth functions, as opposed to the earlier real-analytic results of [K1]-[K3] and [PSSt]. We also make use of the superadapted coordinate systems of [G1], which put functions in certain canonical forms suitable for these problems. They are a refinement of the adapted coordinate systems of [V].
Adapted coordinate systems are also used in [K1]-[K3] and [PSSt].

For our sharpest estimates (Theorems 1.1 and 1.5), our condition on the perturbation function f(x, y) will be that the absolute values of finitely many derivatives of f(x, y) at the origin are less than some δ which depends on S(x, y). We will get uniformity in the coefficient C_S of (1.3), but not in the radius s, since such uniformity does not hold in the general smooth case. For a given f(x, y), there will also be finitely many t with |t| < 1 for which the perturbation tf(x, y) is excluded from the theorems. This is due to certain error terms being affected by the zeroes of certain one-dimensional polynomials induced by S(x, y) + tf(x, y). Since these issues only affect error terms, other than for these exceptional values we will get the uniform estimates.

Before we state our theorems, we first define some relevant terminology.

Definition 1.1.
Let S(x, y) = Σ_{a,b} s_{ab} x^a y^b denote the Taylor expansion of S(x, y) at the origin. Assume there is at least one (a, b) for which s_{ab} is nonzero. For any (a, b) for which s_{ab} ≠ 0, let Q_{ab} be the quadrant {(x, y) ∈ R^2 : x ≥ a, y ≥ b}. Then the Newton polygon N(S) of S(x, y) is defined to be the convex hull of the union of all the Q_{ab}.

A Newton polygon consists of finitely many (possibly zero) bounded edges of negative slope, as well as an unbounded vertical ray and an unbounded horizontal ray. More generally, one can define the Newton polygon of a power series in x^{1/N} and y^{1/N} for a positive integer N analogously to Definition 1.1.

Definition 1.2. The Newton distance d(S) of S(x, y) is defined to be inf{t : (t, t) ∈ N(S)}.

Throughout this paper, we will use the (a, b) coordinates to write equations of lines relating to Newton polygons, so as to distinguish them from the x-y variables of the domain of S(x, y). The line in the a-b plane with equation a = b comes up so frequently that it has its own name:

Definition 1.3.
The bisectrix is the line in the a-b plane with equation a = b.

In Theorems 1.1-1.3 below, S(x, y) is a smooth function on a neighborhood of the origin with nonvanishing Taylor expansion at the origin, satisfying S(0, 0) = 0 and ∇S(0, 0) = 0. Denote the growth index of S(x, y) by j and its multiplicity by p. Our first and sharpest theorem is the following.

Theorem 1.1. There is a positive integer l and a δ > 0 such that if f(x, y) is a smooth function on a neighborhood of the origin with sup_{|α| ≤ l} |∂^α f(0, 0)| < δ, then for all but finitely many t with |t| < 1, if D is a sufficiently small disk centered at the origin (depending on S + tf), for all sufficiently small ε > 0 we have

M_{S+tf,D}(ε) ≤ C_S ε^j |ln ε|^p     (1.4)

Here C_S is a constant depending on S(x, y) but not on f(x, y) or t.

One cannot expect uniformity in the disk D in Theorem 1.1, as such a statement is false for general smooth functions. It should also be pointed out that since in most cases the leading coefficient of (1.2) is independent of U for small enough U, simply shrinking down D does not typically help in getting a uniform constant on the right-hand side of (1.4). If S(x, y) and f(x, y) do not both have Morse (nondegenerate) critical points at the origin, then a version of Theorem 1.1 still holds for |t| ≥ 1, with the roles of f and S interchanged. Another way of saying this is that the growth index and multiplicity of S(x, y) + tf(x, y) is at least as good as that of S(x, y) for all but finitely many t.

Theorem 1.2.
Let f(x, y) be a smooth function on a neighborhood of the origin with a critical point there, and assume that S(x, y) and f(x, y) do not both have Morse critical points at the origin. Let j_t denote the growth index of S(x, y) + tf(x, y) at the origin and p_t its multiplicity. Then for all but finitely many real values of t we have (−j_t, p_t) ≤ (−j, p) (under the lexicographic ordering).

Theorem 1.2 does not hold if S(x, y) and f(x, y) are both Morse, as can be seen by taking S(x, y) = x^2 + y^2 and f(x, y) = x^2 − y^2. However, in such situations the growth index of all but finitely many S + tf is still going to be 1. Also, note that the condition excluding finitely many t may be necessary; for example, when f(x, y) = −S(x, y) plus a small error term.

Since the growth index and multiplicity of αS_1(x, y) + βS_2(x, y) is the same as that of S_1(x, y) + (β/α)S_2(x, y) for any α ≠ 0, Theorem 1.2 and symmetry imply the following:

Theorem 1.3.
Suppose S_1(x, y) and S_2(x, y) are two smooth functions on a neighborhood of the origin with critical points at the origin, not both Morse. Let (j_1, p_1) and (j_2, p_2) be their growth indices and multiplicities. For any (real) α and β, let j_{α,β} and p_{α,β} be the growth index and multiplicity of αS_1(x, y) + βS_2(x, y). Then there is a finite set of numbers such that unless β/α is in this set, (−j_{α,β}, p_{α,β}) ≤ min((−j_1, p_1), (−j_2, p_2)) under the lexicographic ordering.

One includes ∞ as a possible value of β/α in Theorem 1.3. Also, note that one can make appropriate generalizations of Theorem 1.3 for several functions.

We now give an idea of how Theorems 1.1 and 1.2 are proved. First consider the simple case where D is a small disk centered at the origin and S(x, y) and f(x, y) are monomials a_1 x^{α_1} y^{β_1} and a_2 x^{α_2} y^{β_2}. Then using an elementary argument, one can evaluate M_{S+tf,D}(ε) directly to show Theorems 1.1 and 1.2. Already one might have to exclude one value of t; this occurs if α_1 = α_2, β_1 = β_2, and t = −a_1/a_2. Next, suppose that instead of being monomials, S(x, y) and f(x, y) are comparable to monomials. That is, suppose there are smooth functions a_1(x, y) and a_2(x, y), both nonvanishing at the origin, such that S(x, y) = a_1(x, y) x^{α_1} y^{β_1} and f(x, y) = a_2(x, y) x^{α_2} y^{β_2}. Then roughly speaking one has the same behavior as in the monomial case. There is an added difficulty if α_1 = α_2 and β_1 = β_2, for in this case one must also consider the zeroes of a_1(x, y) + t a_2(x, y).

More generally, the strong form of resolution of singularities says that in the real-analytic case there is a coordinate change φ such that S ∘ φ(x, y) and f ∘ φ(x, y) are locally comparable to monomials in the above sense. However, the best one can automatically say about the Jacobian of this coordinate change is that it too is comparable to a monomial. Hence when looking at integrals one cannot automatically reduce to the above situations in general.
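The Newton distance of Definitions 1.1-1.2 can be computed mechanically from the exponents of the nonzero Taylor coefficients. The following sketch (a hypothetical helper, not from the paper) uses the standard linear-programming duality d(S) = max over w ∈ [0, 1] of min over exponents (a, b) of wa + (1 − w)b, the maximum running over supporting lines of N(S):

```python
from itertools import combinations

def newton_distance(exponents):
    # d(S) = inf{t : (t,t) in N(S)}.  By LP duality this equals the
    # maximum over weights w in [0,1] of
    #     g(w) = min over exponents (a,b) of  w*a + (1-w)*b.
    # g is concave and piecewise linear, so its maximum is attained at
    # w = 0, w = 1, or a crossing point of two of the linear pieces.
    def g(w):
        return min(w * a + (1 - w) * b for (a, b) in exponents)
    candidates = {0.0, 1.0}
    for (a1, b1), (a2, b2) in combinations(exponents, 2):
        denom = (a1 - a2) - (b1 - b2)
        if denom != 0:
            w = (b2 - b1) / denom
            if 0.0 < w < 1.0:
                candidates.add(w)
    return max(g(w) for w in candidates)

# S(x,y) = x^5 + x^2 y^2 + y^5: the bisectrix meets N(S) at the vertex (2,2).
print(newton_distance([(5, 0), (2, 2), (0, 5)]))  # 2.0
# S(x,y) = y^2 - x^3: the bisectrix meets the edge a + (3/2)b = 3 at t = 6/5.
print(newton_distance([(3, 0), (0, 2)]))          # 1.2
```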
Fortunately, in two dimensions there are substitutes for such resolution of singularities algorithms that reduce to situations similar to those where S ∘ φ and f ∘ φ are locally comparable to monomials, and which have Jacobian determinant one. Even better, these algorithms hold for the general smooth case.

Specifically, in section 3 we will take a small disk D centered at the origin and write D = ∪_{i=1}^n D_i. On each D_i there will be a coordinate change φ_i such that on each φ_i^{−1}(D_i), the function S ∘ φ_i is comparable to a monomial in a certain sense. This can be done in such a way that f ∘ φ_i is also comparable to a monomial, although we won't explicitly use this fact, for certain technical reasons. Each φ_i(x, y) is of the form (±x, ±y − g_i(x)), and the domain φ_i^{−1}(D_i) is a "curved triangle" consisting of the points in φ_i^{−1}(D_i) between two curves y = p_i(x) and y = q_i(x) such that p_i(0) = q_i(0) = 0 and p_i(x^N) and q_i(x^N) are smooth for some N. Since each φ_i has Jacobian determinant ±1, in examining M_{S,D}(ε) one can switch to considering S ∘ φ_i and f ∘ φ_i on the set φ_i^{−1}(D_i). Although these two functions aren't, strictly speaking, comparable to monomials, there are enough similarities with that situation that after some effort one can prove Theorem 1.2. One has to exclude finitely many values of t for each D_i to avoid cancellations such as in the monomial case.

The idea of dividing into curved triangles related to the singularities of S(x, y) to simplify the behavior of integrals related to S(x, y) goes a while back. It was used in the various Phong-Stein papers on oscillatory integral operators such as [PS], and then in the author's earlier work such as [G2]-[G3]. The Phong-Stein papers use curved triangles deriving from Puiseux expansions of real-analytic functions, while [G2]-[G3] use explicit resolution of singularities algorithms such as in this paper. Since the problems being considered here are rather different from the earlier problems, we will derive from first principles a resolution of singularities theorem amenable to the situations at hand.

Proving the stronger result of Theorem 1.1 requires additional ideas. In fact, if our goal were only to prove Theorem 1.2 and its consequences, then section 4 would be noticeably shorter. To get the sharper estimates of Theorem 1.1, we will draw on the results of [G1]. Specifically, we first put S(x, y) into what in [G1] are called superadapted coordinates. These are a generalization of the notion of adapted coordinates of [V]. Then we apply the resolution of singularities algorithm of section 3 to S(x, y), getting the resulting D_i. One next focuses on the D_i which give the dominant terms in M_{S,D}(ε). For these D_i, one subdivides further into sets D_{ij}. This will be a coarse subdivision related to the resolution of singularities of S(x, y) + tf(x, y); however, one does not have to use the full resolution of singularities theorem here.
For the D_{ij} that give the largest contribution to M_{S,D}(ε), one uses estimates from section 2, related to one-dimensional Van der Corput-type lemmas, to prove the sharp estimates of Theorem 1.1.

On the remaining D_{ij}, as well as the D_i that do not give dominant terms of M_{S,D}(ε), one now applies the full resolution of singularities theorem to f ∘ φ_i(x, y). The resulting functions, call them f ∘ Φ_{ij} and S ∘ Φ_{ij}, are now comparable to monomials in the new coordinates, and the considerations used for Theorem 1.2 can now be used. Because the contributions here are error terms for M_{S,D}(ε) (that is, they give higher powers of ε than the estimates sought), we do not have to worry about constants here if ε is small enough, which we will see we can assume. However, we still have to exclude finitely many values of t as in Theorem 1.2; it is conceivable that for such t the power of ε appearing in such error terms becomes as small as or smaller than the desired power for M_{S,D}(ε). Note that this phenomenon does not appear in the real-analytic results [K1]-[K3] or [PSSt]. The author does not know if this is a result of the weaker assumptions of Theorem 1.1, or if one can avoid excluding finitely many values of t in the context of Theorem 1.1 with an additional argument. As indicated above, this cannot be avoided in the context of Theorem 1.2.

Oscillatory Integrals.
Let S(x, y) be a smooth function on a neighborhood D of the origin with S(0, 0) = 0. Suppose φ(x, y) is a real-valued smooth function supported in D. We consider the oscillatory integral defined by

J_{S,φ}(λ) = ∫_{R^2} e^{iλS(x,y)} φ(x, y) dx dy     (1.5)

Here λ is a real parameter, and we are interested in the behavior of J_{S,φ}(λ) as |λ| → ∞. Since J_{S,φ}(−λ) is the complex conjugate of J_{S,φ}(λ), one only needs to consider the situation as λ → ∞. As is well known, the analysis of (1.5) is closely related to the analysis of the sublevel areas above. Specifically, in the real-analytic case, if S(0, 0) = 0 and ∇S(0, 0) = 0, then in analogy to (1.2) we have nontrivial asymptotics of the form

J_{S,φ}(λ) = D_φ λ^{−j} (ln λ)^p + o(λ^{−j} (ln λ)^p)     (1.6)

Here (j, p) in (1.6) is such that (−j, p) is maximal (under the lexicographic ordering) such that D_φ is nonzero for at least one φ supported in any sufficiently small neighborhood of the origin. We refer to j as the oscillatory index of J_{S,φ} and to p as its multiplicity. Using well-known arguments from [AGV], it can be shown that (in the real-analytic case) the oscillatory index is equal to the growth index, with the multiplicity the same in both cases, unless there is a coordinate system near the origin in which S(x, y) = x^2 − y^2. In this case, the growth and oscillatory indices are both 1, with the multiplicity of the growth index being 1 and the multiplicity of the oscillatory index being zero.

Furthermore, Karpushkin's methods work for the oscillatory integral case as well, and in [K1]-[K2] analogues for the oscillatory integrals (1.6) of his above-mentioned theorem on sublevel areas are proven.

Using some results of [G1], Theorem 1.3 directly implies analogues for oscillatory integrals. To see why this is the case, we first give some background from [G1], which will also be used in the proof of Theorem 1.1. Suppose S(x, y) is a smooth function on a neighborhood of the origin such that S(0, 0) = 0 and ∇S(0, 0) = 0. Write the Taylor series of S(x, y) as Σ_{a,b} s_{ab} x^a y^b and denote the Newton distance of S(x, y) by d. Then it is proven in [G1] that there is a smooth coordinate change taking the origin to itself such that after the coordinate change, S(x, y) is in "superadapted coordinates", which means the following.

Definition.
For a compact edge e of N(S) with equation a + mb = α, let S_e(x, y) denote the sum of all terms of Σ_{a,b} s_{ab} x^a y^b with a + mb = α. S(x, y) is said to be in superadapted coordinates if for any compact edge e of N(S) intersecting the bisectrix, any zero of S_e(1, y) or S_e(−1, y) has order less than d(S).

In [G1] it is proven that in superadapted coordinates, the growth index j is given by 1/d. The multiplicity p is 1 if and only if the bisectrix intersects N(S) at a vertex. It is also shown that d = 1 if S(x, y) is Morse and d > 1 otherwise. Moreover, for any sufficiently small neighborhood U of the origin, for these values of j and p one has |J_{S,φ}(λ)| < Cλ^{−j} (ln λ)^p for any φ supported in U, and for any (−j', p') < (−j, p) (with respect to the lexicographic ordering) there is some φ supported on U for which the estimate |J_{S,φ}(λ)| < Cλ^{−j'} (ln λ)^{p'} does not hold for any C.

Thus in the non-Morse case, for a general smooth function it makes sense to define the oscillatory index and multiplicity to be the same as the growth index and multiplicity. In the Morse case one defines them to be inherited from the Morse coordinates. Note that this definition agrees with the old definition in the real-analytic case. Note also that the analogue of Theorem 1.3 for oscillatory integrals follows immediately from Theorem 1.3. Since the two types of Morse critical points have the same oscillatory indices and multiplicities, the case where S_1(x, y) and S_2(x, y) both have Morse critical points at the origin may be included in the oscillatory integral result:

Theorem 1.4.
Suppose S_1(x, y) and S_2(x, y) are smooth functions on a neighborhood of the origin with critical points at the origin. Let (j_1, p_1) and (j_2, p_2) be their oscillatory indices and multiplicities at the origin. Let j_{α,β} and p_{α,β} be the oscillatory index and multiplicity of αS_1(x, y) + βS_2(x, y). Then there is a finite set of real numbers such that unless β/α is in this set, (−j_{α,β}, p_{α,β}) ≤ min((−j_1, p_1), (−j_2, p_2)) under the lexicographic ordering.

Again, here we include ∞ as a possible value of β/α. Note that Theorem 1.4 pertains to integrals of the form ∫_{R^2} e^{iαS_1(x,y) + iβS_2(x,y)} φ(x, y) dx dy, which are tied to Fourier transforms of surface-supported measures.

For oscillatory integrals, getting constants depending on S(x, y) and not f(x, y) in analogy with Theorem 1.1 is difficult for a few reasons. For one, since the integrands of oscillatory integrals have both positive and negative values, even if one had precise one-dimensional Van der Corput lemmas for the oscillatory integral case, averaging the resulting estimates over a second dimension might give extra cancellation that needs to be taken into account. Secondly, and perhaps more importantly, applying one-dimensional Van der Corput lemmas to the integrands of J_{S,φ} will result in bounds depending on the supremum of |∇φ|, and such upper bounds do not necessarily behave well under coordinate changes. However, if the perturbed function S + tf is real-analytic, there are explicit formulas from [AGV] for transforming the estimates of Theorem 1.1 into explicit estimates for large enough λ, and we get the following.

Theorem 1.5. Suppose S(x, y), f(x, y), l, δ, j, p, and D are as in Theorem 1.1. There is a B_S > 0 such that for all but finitely many t with |t| < 1, if φ is supported in D and S + tf is real-analytic, then for sufficiently large λ we have

|J_{S+tf,φ}(λ)| < B_S ||φ||_∞ λ^{−j} (ln λ)^p     (1.7)

Proof.
We may assume S(x, y) is not Morse, since the Morse case follows from the explicit asymptotic expansions known in the Morse situation. By Theorem 1.2 of [G1], the initial coefficient D_φ of the asymptotics (1.6) for S + tf satisfies

|D_φ| ≤ j Γ(j) lim_{ε → 0} |I_{S+tf,φ}(ε) / (ε^j (ln ε)^p)|     (1.8)

Here I_{S+tf,φ}(ε) = ∫_{|S+tf| < ε} φ(x, y) dx dy. Note that we have

|I_{S+tf,φ}(ε) / (ε^j (ln ε)^p)| ≤ ||φ||_∞ |M_{S+tf,D}(ε) / (ε^j (ln ε)^p)|

Since Theorem 1.1 holds for S + tf on D, the above is at most C_S ||φ||_∞. Thus, since j Γ(j) ≤ 2, the limit in (1.8) is at most 2 C_S ||φ||_∞. By taking λ sufficiently large that the other terms of the asymptotics (1.6) are small in comparison, we obtain (1.7) with B_S = 3 C_S, and the theorem follows.
2. Lemmas about curved triangles and one-dimensional Van der Corput lemmas
We make extensive use of the classical Van der Corput lemma throughout this paper. Although we don't need very sharp constants, to simplify our notation we use the following version, which follows from [R]. We refer to [CaCWr] for more information on this general subject.

Lemma 2.1. ([R]) Suppose that for a positive integer k, f(t) is a C^k function on an interval I such that for some positive constant c, |d^k f / dt^k| > c k! on I. Then for any ε > 0,

|{t ∈ I : |f(t)| < ε}| ≤ min(|I|, c^{−1/k} ε^{1/k})     (2.1)

Lemma 2.2.
Let A_{m,N} denote the set {(x, y) : 0 < x < x_0, 0 < y < N x^m}, where m > 0. Suppose g(x, y) is a function on A_{m,N} such that |∂_y^β g(x, y)| > a β! x^α on A_{m,N}, where a, α > 0 and β is a positive integer. Then for a fixed x we have

|{y : (x, y) ∈ A_{m,N}, |g(x, y)| < ε}| ≤ |{y : (x, y) ∈ A_{m,N}, a x^α y^β < ε}|     (2.2)

Lemma 2.3. Let h(x, y) = a x^α y^β for some a, α, β ≥ 0. Let A_{m,N} be as in Lemma 2.2.

a) If β > α, then there are constants c, C > 0 depending on a, m, α, and β such that for sufficiently small ε we have

c x_0^{(β−α)/β} ε^{1/β} < |{(x, y) ∈ A_{m,N} : |h(x, y)| < ε}| < C x_0^{(β−α)/β} ε^{1/β}     (2.3)

b) If β = α, then there are constants c, C > 0 depending on a, α, m, and β such that for sufficiently small ε we have the estimate

c ε^{1/β} |ln ε| < |{(x, y) ∈ A_{m,N} : |h(x, y)| < ε}| < C ε^{1/β} |ln ε|     (2.4)

c) If β < α, then there are constants c, C > 0 depending on a, m, α, and β such that for sufficiently small ε we have

c N^{(α−β)/(α+mβ)} ε^{(m+1)/(α+mβ)} < |{(x, y) ∈ A_{m,N} : |h(x, y)| < ε}| < C N^{(α−β)/(α+mβ)} ε^{(m+1)/(α+mβ)}     (2.5)

Proof:
Viewing |{(x, y) ∈ A_{m,N} : |h(x, y)| < ε}| as the integral of the characteristic function of {|h(x, y)| < ε} over A_{m,N}, we change variables twice, first by replacing y by x^m y' and then by replacing x by (x')^{1/(m+1)}. We obtain that |{(x, y) ∈ A_{m,N} : |h(x, y)| < ε}| is given by

(m + 1)^{−1} |{(x', y') ∈ [0, x_0^{m+1}] × [0, N] : a (x')^{(α+mβ)/(m+1)} (y')^β < ε}|     (2.6)

The x'-measure in (2.6) for fixed y' is given by min(x_0^{m+1}, c ε^{(m+1)/(α+mβ)} (y')^{−(mβ+β)/(α+mβ)}). Note that the two terms in the minimum are equal at y' = y'_0 = c' x_0^{−(α+mβ)/β} ε^{1/β}. Also note that the power (mβ+β)/(α+mβ) of y' is greater than 1 if β > α, and less than 1 if β < α. In the former case, if ε is sufficiently small the measure of the set (2.6) is comparable to the portion where y' < y'_0, given by y'_0 x_0^{m+1} = c' x_0^{(β−α)/β} ε^{1/β}, and thus the formula (2.3) holds. If β = α, then the exponent is exactly 1, and one obtains the additional logarithmic factor of (2.4). If β < α, the measure of the set (2.6) is comparable to the measure of the part where N > y' > N/2, giving (2.5) for small enough ε. This completes the proof of the lemma.

We will make frequent use of the next lemma in conjunction with the above lemmas.

Lemma 2.4.
Suppose R(x, y) is a smooth function on a neighborhood of (0, 0) such that N(R) has a vertex at (c, d). Let r_{cd} x^c y^d be the associated term of the Taylor expansion of R(x, y) at (0, 0).

a) If (c, d) is the intersection of two compact edges of N(R), with equations a + mb = α and a + m'b = α' respectively, with m' > m, then for any δ > 0 there are r > 0 and ξ > 0 such that for 0 < x < r and ξ^{−1} x^{m'} < y < ξ x^m one has

|R(x, y) − r_{cd} x^c y^d| < δ |r_{cd}| x^c y^d     (2.7)

b) If (c, d) is the intersection of the horizontal ray of N(R) with a compact edge with equation a + mb = α, then for sufficiently small η > 0 and any δ > 0, there are r > 0 and ξ > 0 such that for 0 < x < r and x^η < y < ξ x^m, equation (2.7) holds.

c) In the setting of case b), if we also have d = 0, then for any δ > 0 there are r > 0 and ξ > 0 such that for 0 < x < r and 0 < y < ξ x^m, equation (2.7) holds.
Proof.
We start with a). Let the Taylor expansion of R(x, y) at the origin be written as Σ_{a,b} r_{ab} x^a y^b. For a large M we may write

R(x, y) − r_{cd} x^c y^d = Σ_1 + Σ_2 + Σ_3 + E_M(x, y)     (2.8)

where Σ_1 consists of the Taylor terms r_{ab} x^a y^b with a + b ≤ M, b = d, and a > c; Σ_2 of those with a + b ≤ M and b > d; Σ_3 of those with a + b ≤ M and b < d; and E_M(x, y) = O(|x|^M + |y|^M). The first sum in (2.8) can be made less than δ |r_{cd}| x^c y^d in absolute value by shrinking the radius of D appropriately. As for the second sum, if one changes coordinates from (x, y) to (x, y'), where y' = x^{−m} y, then (x, y') ∈ [0, r] × [0, ξ]. Observe that a given term r_{ab} x^a y^b of the second sum becomes r_{ab} x^{a+mb} (y')^b. Since a + mb ≥ α and b > d in each term in the second sum, the entire sum can be written as x^α (y')^d (y' f(x, y')) for some f(x, y') which is a polynomial in y' and in fractional powers of x. Thus by shrinking ξ appropriately, since y' < ξ, the sum can be made of absolute value less than δ |r_{cd}| x^α (y')^d. Note that r_{cd} x^c y^d = r_{cd} x^{c+dm} (y')^d, and this is equal to r_{cd} x^α (y')^d since (c, d) is on the edge with equation a + mb = α. Thus by choosing ξ sufficiently small, we can ensure that the second sum is of absolute value at most δ |r_{cd}| x^c y^d (in the original coordinates). These are the bounds we need.

The third sum is dealt with in exactly the same way, reversing the roles of the x and y axes. Lastly, since x^m > y > x^{m'}, the error term E_M(x, y) can be made less than δ |r_{cd}| x^c y^d in absolute value for small x if M is chosen sufficiently large. Putting these all together, we obtain |R(x, y) − r_{cd} x^c y^d| < δ |r_{cd}| x^c y^d as needed. This completes the proof of part a) of the lemma.

Moving on now to part b), we once again look at the expression (2.8). This time the fact that y > x^η for some small η ensures that for large enough M the error term will still be smaller than δ |r_{cd}| x^c y^d; the sums are handled as in part a). For part c), since 0 < y < ξ x^m we have C(|x|^M + |y|^M) ≤ C'(|x|^M + |x|^{Mm}). Since d = 0, by taking M large enough the error term can be made less than δ |r_{cd}| x^c y^d = δ |r_{cd}| x^c for small enough x.
This gives us part c) and completes the proof of Lemma 2.4.

3. The resolution of singularities algorithm

In this section we prove the version of two-dimensional resolution of singularities we need for the analysis in section 4. In keeping with the philosophy of [G2] as well as its antecedents such as [PS] or [V], it involves dividing a neighborhood of the origin into "curved triangles", each of which has a coordinate system in which S(x, y) behaves like a monomial in an appropriate sense. The theorem we use is the following.

Theorem 3.1. Suppose S(x, y) is a smooth function defined on a neighborhood of the origin with S(0, 0) = 0 such that the Taylor expansion Σ_{a,b} s_{ab} x^a y^b of S(x, y) at the origin has at least one nonvanishing term. Then for any sufficiently small η > 0 and any sufficiently small disk D centered at the origin, we may, up to a set of measure zero, write D ∩ {(x, y) : |y| < |x|^η} as a finite union ∪_i D_i, where the D_i have the following properties. Let M denote the difference between the y-coordinates of the uppermost and lowermost vertices of N(S). If N(S) has one vertex, let M = 1. Then there is a positive integer M' depending on M such that for each i there is a function g_i(x) with g_i(x^{M'}) smooth, and a function φ_i(x, y) of the form (±x, ±y − g_i(x)), such that S ∘ φ_i(x, y) is approximately a nonzero monomial b_i x^{α_i} y^{β_i} (α_i, β_i ≥ 0, β_i an integer) on the curved triangle D'_i = φ_i^{−1}(D_i) in the following sense.

a) The set D'_i is of the form {(x, y) ∈ φ_i^{−1}(D) : x > 0, f_i(x) < y < F_i(x)}, where f_i(x^{M'}) and F_i(x^{M'}) are smooth. The initial term of the Taylor expansion of F_i(x) is of the form A_i x^{N_i}, where A_i, N_i > 0.
The function f_i(x) is either the zero function or has a Taylor series with initial term a_i x^{n_i}, where a_i, n_i > 0 and n_i > N_i.

b) If β_i = 0, then f_i(x) is the zero function and there are positive constants c_i and C_i such that on D'_i one has the estimates

c_i x^{α_i} < |S ∘ φ_i(x, y)| < C_i x^{α_i}

c) If β_i > 0, then we can write S = S_1 + S_2 as follows. S_1 has a zero of infinite order at (0, 0) and is the zero function if S is real-analytic. Also, S_1(x^{M'}, y) is a smooth function. As for S_2, for any preselected δ > 0, for any 0 ≤ k ≤ α_i and 0 ≤ l ≤ β_i, on D'_i we have

|∂_x^k ∂_y^l S_2 ∘ φ_i(x, y) − b_i α_i(α_i − 1)···(α_i − k + 1) β_i(β_i − 1)···(β_i − l + 1) x^{α_i − k} y^{β_i − l}| ≤ δ |b_i| x^{α_i − k} y^{β_i − l}     (3.1)

d) The total number of sets D_i can be bounded in terms of M.

The following corollary will follow immediately from the proof of Theorem 3.1.

Corollary 3.2. If S(x^p, y) is a smooth function for some positive integer p, then Theorem 3.1 still holds, except that the exponent M' is replaced by pM'.

Proof of Theorem 3.1. We first dispense with the case where N(S) has exactly one vertex (a, b). Let s_{ab} x^a y^b be the corresponding term of the Taylor expansion of S(x, y). Thus for any N we have S(x, y) = s_{ab} x^a y^b + O(|x|^N + |y|^N). We divide a small disk D centered at the origin into 8 curved triangles by slicing along the x and y axes as well as along the lines y = ±x. We make these triangles the D_i's, with each φ_i(x, y) = (±x, ±y) or (±y, ±x) and each f_i(x) = 0. Then condition c) above automatically holds and we are done. Thus we now assume that N(S) has multiple vertices. Hence N(S) has at least one compact edge. We write the equations of these edges as a + m_j b = α_j, where m_1 > m_2 > ... > m_n. Clearly it suffices to divide the x > 0, y > 0 portion of a small disk D centered at the origin according to the theorem, so we restrict our attention to D_0 = D ∩ {x > 0, y > 0}.
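The edge data a + m_j b = α_j used in this proof can be read off from the lower boundary of the Newton polygon. A small routine recovering the compact edges from the exponent set (an illustrative sketch; the function name and float conventions are not from the paper):

```python
def newton_edges(exponents):
    # Compact edges of N(S), returned as pairs (m_j, alpha_j) with each
    # edge lying on the line a + m_j * b = alpha_j and m_1 > m_2 > ...
    pts = sorted(set(exponents))
    # Vertices of N(S) come from non-dominated exponents (no other point
    # with both coordinates <=).
    keep = [p for p in pts
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in pts)]
    keep.sort()  # increasing a, hence strictly decreasing b
    hull = []
    for p in keep:
        while len(hull) >= 2:
            (a1, b1), (a2, b2) = hull[-2], hull[-1]
            # drop hull[-1] unless it is a strictly convex corner
            if (a2 - a1) * (p[1] - b1) - (b2 - b1) * (p[0] - a1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    edges = []
    for (a1, b1), (a2, b2) in zip(hull, hull[1:]):
        m = (a2 - a1) / (b1 - b2)
        edges.append((m, a1 + m * b1))
    edges.reverse()  # largest m_j first, matching m_1 > m_2 > ... > m_n
    return edges

# S(x,y) = x^5 + x^2 y^2 + y^5 has two compact edges:
print(newton_edges([(5, 0), (2, 2), (0, 5)]))  # [(1.5, 5.0), (~2/3, ~10/3)]
```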
For a small ξ > D ∩ { y < x η } as the finite union of sets U j and T j , where the U j are all possible setsof the form { ( x, y ) ∈ D : ξx m j < y < ξ − i x m j } and the T j are all possible sets of the form { ( x, y ) ∈ D : ξ − i x m j +1 < y < ξx m j } as well as the sets T n = { ( x, y ) ∈ D : 0 < y < ξx m n } and T = { ( x, y ) ∈ D : ξ − i x m < y < x η } . (We assume η < m ). Observe that each T j corresponds to a unique vertex of N ( S ). We will turn each T j into one of the D i ’s withthe associated φ i just the identity map. As for the U j , we will subdivide each U j furtherinto V jk and U jl , where the U jl will also become D i ’s for which part b) of the theoremholds, and where the V jk will undergo further subdivisions and coordinate changes.We start with the T j ’s for j = n . Let ( c, d ) denote the vertex of N ( S ) cor-responding to T j and let s cd x c y d denote the associated term of the Taylor expansion.12hen by Lemma 2.4 applied to S ( x, y ) and its various y partials, for any δ > ξ such that if D is sufficiently small then on T j , we have the inequality | S ( x, y ) − s cd x c y d | < δ | s cd | x c y d as well as its analogues for any ∂ kx ∂ ly S ( x, y ) for k ≤ c , l ≤ d . Thus a) and c) of the theorem hold with φ i the identity map, which is what weneed for these T j .Next, we look at T n = { ( x, y ) ∈ D : 0 < y < ξx m n } . This time we cannot applyLemma 2.4 automatically. By a famous theorem of Borel (see [H] for a proof), one can let s ( x, y ) be a smooth function on a neighborhood of the origin whose Taylor expansion atthe origin is given by P ab s ab x a y b − d . Then Lemma 2.4c) applies to s ( x, y ) and we canassume that on T n we have | s ( x, y ) − s cd x c | < δ | s cd x c | (3 . s ( x, y ) = y d s ( x, y ). Then s ( x, y ) has the same Taylor expansion at the origin as S ( x, y ) and is equal to S ( x, y ) when S ( x, y ) is real-analytic. 
We also have the desired inequality

|s(x,y) − s_{cd} x^c y^d| < δ|s_{cd}| x^c y^d     (3.3)

The analogues of (3.3) for the x and y partials also hold; for example, the Newton polygon of s_1(x,y) is such that taking a y derivative of s_1(x,y) incurs a factor of at most C x^{−m_n}, which is much smaller than y^{−1} on T_n if ξ is appropriately small. Thus we have a) and c) of Theorem 3.1 and we are done with the analysis of the T_j's.

Next, we proceed to the analysis of the U_j's. Let S_{m_j}(x,y) denote the sum of the terms s_{ab} x^a y^b of the Taylor expansion lying on the edge with equation a + m_j b = α_j. Note that S_{m_j}(x,y) is a polynomial and is the sum of the s_{ab} x^a y^b minimizing a + m_j b. Let r_{j1} < ... < r_{jp} denote the positive real zeroes of S_{m_j}(1, y). We switch to the coordinates (x, y′) given by y = x^{m_j} y′, in which U_j becomes U′_j = {(x, y′) : ξ < y′ < ξ^{−1}}. The V′_{jk} will be the portions of U′_j where |y′ − r_{jk}| < ξ for a zero r_{jk} ∈ (ξ, ξ^{−1}), and the U′_{jl} will be the portions where y′ lies in one of the finitely many closed intervals [a_{jl}, b_{jl}] comprising the complement; we let V_{jk} and U_{jl} denote the corresponding subsets of U_j in the original coordinates. Note that S_{m_j}(1, y′) has no zeroes on [a_{jl}, b_{jl}]. In the new coordinates, the finite Taylor expansion S(x,y) = Σ_{a,b ≤ M} s_{ab} x^a y^b + O(|x|^M + |y|^M) becomes

S(x, x^{m_j} y′) = x^{α_j} S_{m_j}(1, y′) + x^{α_j + ζ_j} f(x, y′) + O(|x|^M + |x|^{m_j M}|y′|^M)     (3.4)

for some ζ_j > 0, where f(x, y′) comes from the terms of the expansion lying strictly above the edge a + m_j b = α_j. Since S_{m_j}(1, y′) has no zeroes on [a_{jl}, b_{jl}], there are C_{jl}, ε_{jl} > 0 such that C_{jl} x^{α_j} > |x^{α_j} S_{m_j}(1, y′)| > ε_{jl} x^{α_j} on U′_{jl}. Furthermore, if η is sufficiently small and M is sufficiently large, we have that the |x^{α_j + ζ_j} f(x, y′)| and the O(|x|^M + |x|^{m_j M}|y′|^M) terms are both less than (half of, say) min(C_{jl} x^{α_j}, ε_{jl} x^{α_j}). As a result, shrinking the disk D if necessary we can assume that on U′_{jl} we have

C_{jl} x^{α_j} > |S(x, x^{m_j} y′)| > ε_{jl} x^{α_j}     (3.5)

after adjusting the constants C_{jl} and ε_{jl}. Back in the original coordinates on U_{jl}, this means that on U_{jl} we have

C_{jl} x^{α_j} > |S(x,y)| > ε_{jl} x^{α_j}     (3.6)

Hence on U_{jl}, S(x,y) satisfies the conditions of Theorem 3.1b) with β_i = 0, if we let the coordinate change φ_i associated with U_{jl} be given by (x,y) → (x, y + c_{jl} x^{m_j}), where y = c_{jl} x^{m_j} denotes the equation of the lower boundary curve of U_{jl}. This completes the analysis of the U_{jl}.

We now move to the analysis of any V_{jk} that may exist. Let o_{jk} denote the order of the zero of S_{m_j}(1, y) at y = r_{jk}. The idea is as follows.
Using ideas from resolution of singularities, we will do a coordinate change of the form (x,y) → (x, y − r_{jk} x^{m_j} + higher order terms) such that in the new coordinates S(x,y) becomes a function whose analogues of the zeroes r_{jk} each have order < o_{jk}. Thus after at most max_{j,k} o_{jk} iterations, there will no longer be any sets V_{jk} and we will have divided D_+ into sets each of which is a U_{jl} or T_j in some iteration. Since each coordinate change will be of the form (x,y) → (x, ±y − g(x)), the composition of finitely many such coordinate changes is of the desired form. (We can get −y as well as y since after one of these coordinate changes the resulting set lies in both the upper right and lower right quadrants.) By the above analysis of S(x,y) on the T_j and U_{jl}, the resulting S ∘ φ_i(x,y) will satisfy Theorem 3.1 as needed.

The coordinate change on V_{jk} is chosen in the following way. We once again switch to the (x, y′) coordinates and make use of (3.4). Let V′_{jk} denote V_{jk} in the new coordinates and define s_j(x, y′) = S(x, x^{m_j} y′)/x^{α_j}, where recall α_j = a + m_j b for (a,b) on e_j. We claim that s_j(x^N, y′) is a smooth function of x and y′ on V′_{jk} for some N. To see this, observe that by (3.4), for any (p,q) we have

∂_x^p ∂_{y′}^q (s_j(x^N, y′)) = ∂_x^p ∂_{y′}^q (S_{m_j}(1, y′)) + ∂_x^p ∂_{y′}^q (x^{Nζ_j} f(x^N, y′)) + O(|x|^{N(M−p)} + |x|^{N(m_j M − p)}|y′|^{M−q})     (3.7)

Here we are using that S(x,y) = Σ_{a,b ≤ M} s_{ab} x^a y^b + O(|x|^M + |y|^M). Thus as long as Nζ_j is an integer, by taking M large enough we have that ∂_x^p ∂_{y′}^q s_j(x^N, y′) exists and is continuous on V′_{jk}. Hence s_j(x^N, y′) is a smooth function of x and y′ as needed.

Furthermore, ∂_{y′}^{o_{jk}} s_j(0, r_{jk}) ≠ 0, while ∂_{y′}^{o_{jk}−1} s_j(0, r_{jk}) = 0.
Hence the implicit function theorem (applied to s_j(x^N, y′)) says that there is a function t_{jk}(x) for small x with t_{jk}(x^N) smooth such that t_{jk}(0) = r_{jk} and

∂_{y′}^{o_{jk}−1} s_j(x, t_{jk}(x)) = 0     (3.8)

Moreover, there are constants c, C > 0 such that on V′_{jk} we have

c|y′ − t_{jk}(x)| ≤ |∂_{y′}^{o_{jk}−1} s_j(x, y′)| ≤ C|y′ − t_{jk}(x)|     (3.9)

In terms of S(x,y), on V_{jk} we have

∂_y^{o_{jk}−1} S(x, x^{m_j} t_{jk}(x)) = 0     (3.10a)

c x^{α_j − m_j o_{jk}} |y − x^{m_j} t_{jk}(x)| ≤ |∂_y^{o_{jk}−1} S(x,y)| ≤ C x^{α_j − m_j o_{jk}} |y − x^{m_j} t_{jk}(x)|     (3.10b)

Since the terms of S_{m_j}(x,y) are on the line a + m_j b = α_j, the maximum possible order of a zero of S_{m_j}(1, y) is α_j/m_j. Hence o_{jk} ≤ α_j/m_j and the exponent in (3.10b) is a nonnegative number which we denote by p. Thus if we make the coordinate change φ_{jk}(x,y) = (x, y + x^{m_j} t_{jk}(x)) and let R_{jk}(x,y) = S ∘ φ_{jk}(x,y), then we have

∂_y^{o_{jk}−1} R_{jk}(x, 0) = 0     (3.11a)

c x^p |y| ≤ |∂_y^{o_{jk}−1} R_{jk}(x,y)| ≤ C x^p |y|     (3.11b)

We now iterate the above algorithm, applied to R_{jk}(x,y) on φ_{jk}^{−1}(V_{jk}). We first slice into two pieces along the x-axis. These two pieces are done the same way, so we focus our attention on the y > 0 piece, which we call W. We divide W into pieces T′_{j′}, U′_{j′l′}, and V′_{j′k′} exactly as done above. To simplify notation, write R(x,y) = R_{jk}(x,y), with the understanding that any subscripts on R really refer to subscripts on a fixed R_{jk}.

We will show that any positive zero r′_{j′k′} of any R_{m′_{j′}}(1, y) has order at most o_{jk} − 1. Thus after at most o_{jk} iterations there will be no more V′_{j′k′} and we will be done. The fact that R(x,y) is a smooth function of x^N and y rather than x and y does not cause any problems in future stages; it just means that after the next stage N may be replaced by a large multiple of N.
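Here is a minimal worked instance of this reduction step (our own illustration, not an example from the paper):

```latex
% Illustration: one iteration of the V_{jk} coordinate change.
\[
S(x,y) = (y - x^2)^2 + x^5 = y^2 - 2x^2y + x^4 + x^5 .
\]
% The compact edge a + 2b = 4 carries S_{m_j}(x,y) = y^2 - 2x^2y + x^4, so
\[
S_{m_j}(1,y') = (y'-1)^2 ,
\]
% with the single positive zero r_{jk} = 1 of order o_{jk} = 2. Here
% t_{jk}(x) = 1, and \varphi_{jk}(x,y) = (x, y + x^2) gives
\[
R_{jk}(x,y) = S(x, y + x^2) = y^2 + x^5 .
\]
% The new edge a + (5/2)b = 5 has edge polynomial y'^2 + 1, which has no
% positive zeroes, so the algorithm terminates after a single iteration.
```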
Also, there are no problems caused by the fact that the upper boundary of φ_{jk}^{−1}(V_{jk}) is some curve y = ξ x^{m_j} + higher order terms instead of y = x^η as before; by shrinking ξ if necessary we can ensure that this curve lies harmlessly inside one of the new U′_{j′l′}, whereupon the only effect is to shrink this U′_{j′l′} somewhat.

So we turn our attention to showing that the order of any new positive zero r′_{j′k′} of R_{m′_{j′}}(1, y) is at most o_{jk} − 1. For this, we will use (3.11b). Note that such a zero occurs for V′_{j′k′} of the form {(x,y) : (r′_{j′k′} − ξ) x^{m′_{j′}} < y < (r′_{j′k′} + ξ) x^{m′_{j′}}}. The analogue of (3.4) for R(x,y) implies that on V′_{j′k′} we have an expression

∂_y^{o_{jk}−1} R(x,y) = x^{α′_{j′}} ∂_y^{o_{jk}−1} R_{m′_{j′}}(1, y) + x^{α′_{j′} + ζ′_{j′}} ∂_y^{o_{jk}−1} f(x,y) + O(|x|^M + |y|^{M − o_{jk} + 1})     (3.12)

By (3.11b), along any curve y = c x^{m′_{j′}}, the function ∂_y^{o_{jk}−1} R(x,y) (= ∂_y^{o_{jk}−1} R_{jk}(x,y)) will always have a zero of the same order x^{p + m′_{j′}} as x → 0. Thus if the zero r′_{j′k′} of R_{m′_{j′}}(1, y) were of order o_{jk} or greater, (3.12) gives a contradiction: on the curve y = r′_{j′k′} x^{m′_{j′}} the function ∂_y^{o_{jk}−1} R(x,y) vanishes to order greater than α′_{j′}, while on nearby curves y = c x^{m′_{j′}}, c ≠ r′_{j′k′}, it vanishes to order α′_{j′}. Thus we conclude that the order of the zero r′_{j′k′} is at most o_{jk} − 1.

That each g_i(x^{M′}) is smooth for some positive integer M′ depending on M follows by induction. Namely, if V_{jk} comes from an edge e with vertices (a,b) and (a′,b′), the coordinate change coming from that stage of the induction is a smooth function of x^{1/(b′−b)}. In addition, each o_{jk} is at most b′ − b, and the corresponding differences in the y-coordinates of the vertices will be at most o_{jk} in future iterations. Also, the number of different U_{jl} and V_{jk} coming from that edge is bounded by twice the number of possible zeroes of S_e(1, y), or 2(b′ − b).
Since there are at most max_{j,k} o_{jk} iterations of the algorithm, the result follows. This completes the proof of Theorem 3.1.

4. The beginning of the proofs of Theorems 1.1 and 1.2; the first decomposition and preliminary lemmas.

We will prove Theorems 1.1 and 1.2 simultaneously; the proofs have much in common. Furthermore, in proving Theorem 1.1 we may assume that S(x,y) has a degenerate critical point at (0,0), for in the nondegenerate case one has well-known asymptotics for M_{S,D}(ε). Given this and that Theorem 1.2 excludes the nondegenerate case, the arguments of this section will always be under the assumption that S(x,y) has a degenerate critical point at (0,0). We may also assume that S(x,y) is in superadapted coordinates; a fixed coordinate change does not affect the statements of Theorem 1.1 or 1.2.

Note next that for a given vertex v = (v_1, v_2) of N(S) there is at most one value of t for which the Taylor expansion of S + tf at the origin does not have a nonzero x^{v_1} y^{v_2} term. Thus there are at most finitely many values of t for which the Taylor expansion of S + tf at the origin does not contain a nonzero x^{v_1} y^{v_2} term for each vertex v of N(S). In other words, other than for these values one has N(S) ⊂ N(S + tf). Thus excluding these values of t, in proving Theorems 1.1 and 1.2 we may assume that N(S) ⊂ N(S + tf).

Next, notice that in proving Theorem 1.1 it actually suffices to show that (1.4) holds for all ε less than some ε_0 > 0. The reason is as follows. Suppose for any sufficiently small disk D centered at the origin one has (1.3) for sufficiently small ε. Fix one such neighborhood D_0. Then for all ε < ε_0 we have

M_{S,D_0}(ε) ≤ C_S ε^j |ln ε|^p

Since M_{S,D}(ε) is monotone in D, for all D ⊂ D_0, for ε < ε_0 one also has

M_{S,D}(ε) ≤ C_S ε^j |ln ε|^p

But by shrinking D enough the inequality will also hold for 1 > ε ≥ ε_0. Thus if we fix one such shrunken D, call it D_1, then (1.4) will now hold for all 1 > ε > 0 whenever D ⊂ D_1.

We now begin the proofs of Theorems 1.1 and 1.2.
As indicated above, we can assume that S(x,y) is in superadapted coordinates and that N(S) ⊂ N(S + tf). Our goal will be to show that under the hypotheses of Theorem 1.1, for all but finitely many t, (1.4) holds for sufficiently small ε, and that under the hypotheses of Theorem 1.2, for all but finitely many t one has the estimate M_{S+tf,D}(ε) ≤ Cε^j |ln ε|^p. Here C may depend on S, t, f, and D.

We now fix a small disk D centered at the origin. We divide D into eight pieces through the x and y axes and two curves l_1 and l_2 chosen as follows. If the bisectrix intersects the interior of a compact edge e of N(S) with equation a + mb = α, then we choose l_1 and l_2 to be any two curves of the form y = c|x|^m and y = −c|x|^m so long as c is not a zero of S_e(1,y) or S_e(−1,y). (Recall that the polynomial S_e(x,y) is the sum of the terms s_{ab} x^a y^b of S(x,y)'s Taylor expansion at the origin with (a,b) ∈ e.) If the bisectrix intersects N(S) at a vertex (d,d), we choose l_1 to be y = |x|^m and l_2 to be y = −|x|^m, where m is such that a line with equation a + mb = α intersects N(S) at the single point (d,d). If the bisectrix intersects N(S) in the interior of the horizontal or vertical rays, switching the roles of the x and y axes if necessary we can assume it is the horizontal ray, and N(S)'s lowest vertex is of the form (c,d) for c < d. In this case we choose l_1 to be y = |x|^m and l_2 to be y = −|x|^m, where m is such that a line with equation a + mb = α intersects N(S) at the single point (c,d).

In the above fashion D is divided into eight pieces E_1, ..., E_8. Clearly it suffices to show the desired bounds for each M_{S+tf,E_i}(ε). The argument for each E_i is the same, so we will focus our attention on the piece from the upper right quadrant between the x axis and the curve y = c x^m or y = x^m. Denote this piece by E. We now apply the resolution of singularities algorithm of section 3 to S(x,y), obtaining the corresponding sets D_i.
Define D′_i = D_i ∩ E. Clearly it suffices to show the desired upper bounds for each M_{S+tf,D′_i}(ε), which is what we will do.

Next, let φ_i be the maps of Theorem 3.1. Parts b) and c) of this theorem say that S ∘ φ_i(x,y) is approximately a monomial in the precise sense given there. Denote φ_i^{−1}(D′_i) by F_i, S ∘ φ_i(x,y) by S_i(x,y), and f ∘ φ_i(x,y) by f_i(x,y). Since φ_i has Jacobian determinant ±1, M_{S+tf,D′_i}(ε) = M_{S_i+tf_i,F_i}(ε). Thus our task is to prove good upper bounds for M_{S_i+tf_i,F_i}(ε). Also, in Theorem 1.1 the smallness assumptions on the suprema of derivatives of f are implied by corresponding smallness assumptions on the derivatives of f_i (possibly with a different δ), so in our future arguments we may always assume the conditions on f_i instead of f without loss of generality.

Analogous to the above, excluding at most finitely many t we can assume that N(S_i) ⊂ N(R_i), where R_i = S_i + tf_i. Also note that by Theorem 3.1, the set F_i is a "curved triangle" in the sense that there are h_i(x) and H_i(x) such that for some η_1 > η_2 > 0 we have

{(x,y) : 0 < x < η_2, h_i(x) < y < H_i(x)} ⊂ F_i ⊂ {(x,y) : 0 < x < η_1, h_i(x) < y < H_i(x)}

Here h_i(x^{M′}) and H_i(x^{M′}) are smooth for the M′ given by the theorem. The function h_i(x) may or may not be the zero function, and the Taylor expansion of H_i(x) in fractional powers of x has some nonvanishing initial term A_i x^{N_i}. Theorem 3.1 also says that we can let (a_i, b_i) be such that S_i(x,y) is comparable to a multiple of x^{a_i} y^{b_i} on F_i in the sense of part b) or c) of the theorem. The following lemma gives various conditions satisfied by (a_i, b_i), N_i, and N(S_i) that we will make use of.
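To make the quantities entering the next lemma concrete, here is a small illustration of the bisectrix, the Newton distance, and the ordinate (a_i + N_i b_i)/(N_i + 1) (our own example, not the paper's):

```latex
% Illustration: for S(x,y) = x^5 + x^2 y + y^3, the bisectrix a = b meets
% the boundary of N(S) in the interior of the edge a + b = 3, at
\[
(d,d) = \left(\tfrac{3}{2}, \tfrac{3}{2}\right), \qquad d = \tfrac{3}{2},
\]
% so the Newton distance is d = 3/2 and one expects the growth index
% 1/d = 2/3, i.e. M_{S,U}(\epsilon) \sim C\,\epsilon^{2/3}. For the vertex
% (a_i,b_i) = (2,1), which lies on the supporting line of slope -1/N_i with
% N_i = 1, the ordinate of the intersection with the bisectrix is
\[
\frac{a_i + N_i b_i}{N_i + 1} = \frac{2 + 1}{2} = \frac{3}{2} \le d ,
\]
% consistent with the conclusion of Lemma 4.1a).
```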
Lemma 4.1.

a) If the bisectrix intersects N(S) in the interior of a compact edge, then if S_i(x,y) is viewed as a function on the x > 0 half-plane, S_i(x,y) is in superadapted coordinates with the same Newton distance d that N(S) has, and the bisectrix intersects N(S_i) in the interior of a compact edge e_i. If the equation of this edge is denoted by a + m_i b = α_i, then N_i ≥ m_i. Furthermore b_i < d and a_i > d. The ordinate of the intersection of the bisectrix with the line of slope −1/N_i containing (a_i, b_i), given by (a_i + N_i b_i)/(N_i + 1), is at most the Newton distance d of N(S).

b) If the bisectrix intersects N(S) at the vertex (d,d), the same is true for N(S_i), and once again S_i is in superadapted coordinates with Newton distance d. Either (a_i, b_i) = (d,d), which happens for at least one i, or b_i < d and a_i > d like above. In the latter case we again have (a_i + N_i b_i)/(N_i + 1) ≤ d.

c) If the bisectrix intersects N(S) in the interior of one of the rays, then the same is true for N(S_i) and again S_i is in superadapted coordinates with Newton distance d. One of two things occurs. The first possibility is that (a_i, b_i) = (c,d) for some c < d, the lower boundary of F_i is the x-axis, and part c) of Theorem 3.1 holds. The other possibility is that b_i < d and a_i ≥ d. In either case, we have the strict inequality (a_i + N_i b_i)/(N_i + 1) < d.

Proof. Recall that we divided a disk D centered at the origin into 8 pieces, each of which after a coordinate change of the form (x,y) → (±x, ±y) or (±y, ±x) becomes of the form E = {(x,y) ∈ D : 0 < y < c x^m}, and we are focusing our attention on E. In the new coordinates, S(x,y) becomes a function which we denote by S_0(x,y). Note that S_0(x,y) is automatically still in superadapted coordinates.
In the setting of part a) of this lemma, the bisectrix intersects N(S_0) at the point (d,d), which is in the interior of a compact edge e with equation a + mb = α for some α and m as above. In the setting of part b), the intersection is still (d,d), which is now a vertex of N(S_0), and in the setting of part c), the intersection is (d,d), which is in the interior of the horizontal ray.

We now give some facts that are immediate consequences of the proof of the resolution of singularities algorithm of section 3, applied to S_0(x,y) (the function S(x,y) in the new coordinates) on E. First, each φ_i(x,y) is of the form (x, ±y − g_i(x)), where g_i(x^K) is a smooth function for some K. Next, if g_i(x) is not the zero function then the Taylor expansion of g_i(x) in fractional powers of x has initial term p_i x^{l_i}, where l_i ≥ m is such that N(S_0) has an edge with equation a + l_i b = α_i for some α_i. The definition of E is such that (d,d) is either the upper vertex of this edge, or the edge lies entirely below (d,d). The number N_i, defined such that the upper boundary of F_i has equation y = q_i x^{N_i} + higher order terms, satisfies N_i ≥ m. Also, (a_i, b_i) is a vertex of N(S_i). The definition of E is such that in the settings of part a) or b) of this lemma, either (a_i, b_i) = (d,d) and N_i = m, or (a_i, b_i) is the lower vertex of a compact edge of N(S_i) of slope −1/N_i. In the latter case either (d,d) is the upper vertex of the edge or the edge lies entirely below (d,d).

We now proceed with the proof, starting with part a). If l_i > m, then the coordinate change φ_i(x,y) will not affect (S_0)_e(x,y), the sum of the terms of S_0(x,y)'s Taylor series at (0,0) corresponding to the edge a + mb = α. Hence the resulting function S_i(x,y) will be in superadapted coordinates like before. On the other hand, if l_i = m, then (S_0)_e(x,y) becomes (S_0)_e(x, ±y − p_i x^m) after the coordinate change.
Hence (S_0)_e(1, y) becomes (S_0)_e(1, ±y − p_i) and (S_0)_e(−1, y) becomes (S_0)_e(−1, ±y − p_i). Shifting the y variable does not change the fact that the definition of superadapted coordinates holds; the condition is that these polynomials have zeroes of order less than d. Hence in the l_i = m case we are in superadapted coordinates as well. In either case, the bisectrix still intersects N(S_i) at (d,d), which is in the interior of a compact edge e_i with equation a + mb = α′ for some α′. As a result (a_i, b_i), being a vertex of N(S_i) lying below (d,d), satisfies a_i > d and b_i < d as needed. Using the last paragraph, (a_i, b_i) is the lower vertex of a compact edge e_i of N(S_i) with slope −1/N_i which either contains (d,d) or is below the edge containing (d,d). Thus the intersection of the bisectrix with the line containing e_i is at (d,d) or below. But (a_i, b_i) is on this edge, so the intersection occurs at ((a_i + N_i b_i)/(N_i + 1), (a_i + N_i b_i)/(N_i + 1)). Hence (a_i + N_i b_i)/(N_i + 1) ≤ d as needed. This completes the proof of part a).

Next we suppose we are in the setting of part b). In this case, the initial term of g_i(x) is p_i x^{l_i} for some l_i > m, since N(S_0) has no edge with equation of the form a + mb = α. As a result, the coordinate change φ_i(x,y) will not alter the fact that the bisectrix intersects the Newton polygon at (d,d). Furthermore, one will still be in superadapted coordinates; if e is an edge of N(S_0) containing (d,d), then either (S_0)_e(x,y) is unchanged by the coordinate change, or (S_0)_e(1, y) becomes (S_i)_e(1, y) = (S_0)_e(1, ±y − p_i) and (S_0)_e(−1, y) becomes (S_i)_e(−1, y) = (S_0)_e(−1, ±y − p_i) like above. In either event S_i(x,y) will still be in superadapted coordinates.
By the last paragraph, (a_i, b_i) is either (d,d) or a lower vertex. In the former case, (a_i + N_i b_i)/(N_i + 1) = d, and in the latter case, exactly as in part a) we have (a_i + N_i b_i)/(N_i + 1) ≤ d as needed. This completes the proof of part b).

Suppose we are in the setting of part c). Then the bisectrix intersects N(S_0) either in the interior of the horizontal or the vertical ray. Suppose it is the horizontal ray. Then N(S_0) has a vertex (c,d) for some c < d. In this case no further subdivisions are necessary; we can take S_i = S_0 and let F_i be all of {(x,y) ∈ D : 0 < y < x^m}. Part c) of Theorem 3.1 automatically holds. So suppose the bisectrix intersects N(S_0) in the interior of the vertical ray. In this case, the highest vertex of N(S_0) is (d,c) for some c < d. Since any coordinate change φ_i fixes the highest vertex of N(S_0), the highest vertex of N(S_i) is (d,c) as well. Thus S_i is in superadapted coordinates with the bisectrix intersecting N(S_i) inside its vertical ray. Since (a_i, b_i) is either the vertex (d,c) or a lower one, we have a_i ≥ d and b_i ≤ c < d. Lastly, since the bisectrix intersects N(S_i) inside the vertical ray, the ordinate of the intersection with the bisectrix of any supporting line of N(S_i) containing (a_i, b_i) is less than d. Hence (a_i + N_i b_i)/(N_i + 1) < d and we are done.

5. The Main Argument.

We work in the setting of section 4. For some fixed value of t, let R_i = S_i + tf_i. We will proceed along the lines outlined in section 1, dividing a given F_i into finitely many pieces and showing that, excluding finitely many values of t, the contribution to M_{R_i,F_i}(ε) of each piece satisfies the upper bounds given by Theorem 1.1 or 1.2. We start this as follows. Since S_i(x^{M′}, y) and f_i(x^{M′}, y) are smooth functions, where M′ is as in Theorem 3.1, by Corollary 3.2 we can apply the resolution of singularities algorithm to R_i(x,y).
We now do so, but focus our attention only on the first stage of the algorithm, dividing the upper right quadrant into the sets called T_j and U_j in the proof. To highlight their dependence on i, we refer to them as T_ij and U_ij here. Clearly, we need only consider those T_ij and U_ij that intersect F_i. Each U_ij corresponds to an edge of N(R_i) whose equation we write as a + m_ij b = α_ij, while each T_ij corresponds to a vertex of N(R_i).

If N(R_i) has a compact edge whose upper vertex is on or below the horizontal line b = b_i and which has slope greater than −1/N_i, let e_{ij′} be the uppermost amongst all such edges. If e_{ij′} exists, let G_i denote the union of U_{ij′} with any T_ij and U_ij corresponding to edges and vertices of N(R_i) below e_{ij′}. If e_{ij′} does not exist, simply define G_i to be the lowest T_ij. We will now find upper bounds for M_{R_i,G_i}(ε) that are as good as needed for Theorem 1.1 or 1.2.

Lemma 5.1. Under the assumptions of Theorem 1.1, we have M_{R_i,G_i}(ε) ≤ C_S ε^j |ln ε|^p, and under the assumptions of Theorem 1.2 we have M_{R_i,G_i}(ε) ≤ A ε^j |ln ε|^p for some constant A.

Proof. Let (a′, b′) be the uppermost vertex of the union of all edges and vertices of N(R_i) that correspond to a T_ij or U_ij included in G_i. If G_i consists solely of the lowest T_ij, let (a′, b′) be the lowest vertex of N(R_i). Note that by definition of G_i, we have b′ ≤ b_i. Write the Taylor expansion of R_i(x,y) at the origin as Σ_{cd} r_{cd} x^c y^d, so that r_{a′b′} x^{a′} y^{b′} denotes the term corresponding to the vertex (a′, b′). Note that this Taylor expansion may contain fractional powers of x, but not of y. By Lemma 2.4c), on G_i we have

|∂_y^{b′} R_i(x,y)| > (b′!/2) |r_{a′b′} x^{a′}|     (5.1)

so that

M_{R_i,G_i}(ε) = |{(x,y) ∈ G_i : |R_i(x,y)| < ε}| ≤ |{(x,y) ∈ G_i : (1/2)|r_{a′b′} x^{a′} y^{b′}| < ε}|     (5.2)

The right-hand side of (5.2) may be estimated using Lemma 2.3.
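Lemma 2.3 itself appears earlier in the paper, but the scalings it provides can be sanity-checked numerically. The sketch below (our own illustration; the helper sublevel_measure and the use of the unit square are assumptions, not the paper's setup) estimates |{(x,y) ∈ (0,1)² : x^a y^b < ε}| by one-dimensional quadrature and checks the ε^{1/b′} power law for b′ > a′ and the extra |ln ε| factor for b′ = a′:

```python
# Numerical sanity check (illustration only) of the sublevel-set scalings
# used in the estimates (5.2)-(5.3): for the monomial x^a y^b on (0,1)^2,
# the measure of {x^a y^b < eps} behaves like eps^(1/b) when b > a and
# like eps^(1/b) * |ln eps| when b = a.
import math

def sublevel_measure(a, b, eps, n=200_000):
    # |{(x,y) in (0,1)^2 : x^a y^b < eps}| via the midpoint rule in x:
    # for fixed x, the y-section has length min(1, (eps / x^a)^(1/b)).
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        total += min(1.0, (eps / x**a) ** (1.0 / b))
    return total / n

# Case b > a (here a = 2, b = 3): slope of log-measure vs log-eps is ~ 1/3.
m1 = sublevel_measure(2, 3, 1e-8)
m2 = sublevel_measure(2, 3, 1e-10)
slope = (math.log(m1) - math.log(m2)) / (math.log(1e-8) - math.log(1e-10))
print(round(slope, 3))  # close to 1/3

# Case b = a (here a = b = 2): the measure is delta*(1 - ln delta) with
# delta = eps^(1/2), exhibiting the extra logarithm of Lemma 2.3b).
eps = 1e-8
delta = eps ** 0.5
m3 = sublevel_measure(2, 2, eps)
print(round(m3 / (delta * (1 - math.log(delta))), 3))  # close to 1
```

The exponent 1/b′ rather than 1/a′ appears because the larger exponent governs the thinner direction of the sublevel set, which is exactly why the case analysis below splits on the comparison of b′ and a′.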
We break into cases, depending on whether b′ > a′, b′ = a′, or b′ < a′. If b′ > a′, the lemma says that the right-hand side of (5.2) is bounded by C r^η ε^{1/b′}, where r is the radius of D. Since b′ ≤ b_i ≤ d, the Newton distance of S, this can be made less than ε^{1/d} by shrinking the radius of the original disk D enough. Since the growth index of S(x,y) is given by 1/d in superadapted coordinates, this is at least as good as the estimate we need.

If b′ = a′, then Lemma 2.3 tells us that

|{(x,y) ∈ G_i : (1/2)|r_{b′b′} x^{b′} y^{b′}| < ε}| < C ε^{1/b′} |ln ε|     (5.3)

Here C depends on lower bounds for |r_{b′b′}| as well as the set A. Since b′ ≤ b_i ≤ d, this is better than the estimate we seek unless b′ = b_i = d. So we suppose b′ = b_i = d. Since N(S_i) ⊂ N(R_i), we must have a_i ≥ a′ = d. By Lemma 4.1, the only way one can have a_i ≥ d and b_i = d is for (a_i, b_i) = (d,d). Hence we have (a_i, b_i) = (a′, b′) = (d,d). In view of Theorem 3.1c), F_i contains a set of the form {(x,y) : 0 < x < η, x^{m′} < y < x^m} for some m < m′, on which S_i(x,y) ∼ x^d y^d, and so by Lemma 2.3b) one has that M_{S_i,G_i}(ε) > c ε^{1/d} |ln ε|. Hence we must be in the situation where the growth index 1/d of S(x,y) has multiplicity 1, and (5.2) and (5.3) give the desired estimate for Theorem 1.2. Next, note that

|{(x,y) ∈ G_i : (1/2)|r_{dd} x^d y^d| < ε}| = |{(x,y) ∈ G_i : x^d y^d < 2ε/|r_{dd}|}|     (5.4)

By taking δ sufficiently small we can ensure that |r_{dd}| > (1/2)|s_{dd}|, where s_{dd} x^d y^d denotes the corresponding term of the Taylor expansion of S_i(x,y). Hence, using Lemma 2.3b) on (5.2) and (5.4), we get M_{R_i,G_i}(ε) < C_S ε^{1/d} |ln ε|, the desired estimate for Theorem 1.1.

We now turn to the case where b′ < a′. The definition of G_i is such that G_i is contained in a set of the form {(x,y) : 0 < x < η, 0 < y < x^m} for some m > N_i. We apply part c) of Lemma 2.3, which says that

|{(x,y) ∈ G_i : (1/2)|r_{a′b′} x^{a′} y^{b′}| < ε}| ≤ C ε^{(m+1)/(a′+mb′)}     (5.5a)
In view of (5.2), this gives

M_{R_i,G_i}(ε) = |{(x,y) ∈ G_i : |R_i(x,y)| < ε}| < C′ ε^{(m+1)/(a′+mb′)}     (5.5b)

Like above, the constant C′ depends on lower bounds for |r_{a′b′}| (as well as on (a′, b′)). Note that (a′ + mb′)/(m + 1) is the ordinate of the intersection of the line of slope −1/m containing (a′, b′) with the bisectrix. Since a′ > b′ and m > N_i, this is strictly less than the corresponding ordinate for the line of slope −1/N_i containing (a′, b′). Since N(R_i) contains N(S_i), this will be at most the ordinate of the intersection of the bisectrix with the line of the same slope −1/N_i containing (a_i, b_i), given by (a_i + N_i b_i)/(N_i + 1). This is at most d by Lemma 4.1. Hence by (5.5b), we have M_{R_i,G_i}(ε) < C ε^{1/d + η} for some positive η, a better estimate than we need, and we are done.

The next step is to prove the upper bounds of Theorems 1.1 and 1.2 for the remaining M_{R_i,T_ij}(ε):

The upper bounds for M_{R_i,T_ij}(ε) when T_ij is not a subset of G_i.

Each such T_ij corresponds to some vertex of N(R_i), which we denote by (p,q). T_ij is typically of the form {(x,y) ∈ F_i : ξ^{−1} x^{m′} < y < ξ x^m} for appropriate m < m′. It is possible that an uppermost T_ij is some proper subset of such a set, but the following argument works for that situation too. Also, although we only need to bound M_{R_i,T_ij}(ε) for T_ij intersecting F_i, the following argument works for all T_ij. We will analyze R_i(x,y) on T_ij similarly to the way such functions were analyzed in section 2. Write the Taylor series of R_i(x,y) at the origin as Σ_{ab} r_{ab} x^a y^b. We similarly write the Taylor series of S_i(x,y) at the origin as Σ_{ab} s_{ab} x^a y^b. Note that

S_i(x,y) − s_{pq} x^p y^q = Σ_{(a,b) ≠ (p,q)} s_{ab} x^a y^b     (5.6)

First suppose that s_{pq} = 0. As in the proof of Theorem 3.1, each term on the right-hand side of (5.6) can be bounded by δ|r_{pq}| x^p y^q on T_ij by choosing ξ small enough. In particular, we can make the absolute value of the entire right-hand side of (5.6) less than (1/2)|r_{pq}| x^p y^q on T_ij. Since s_{pq} = 0, this means |S_i(x,y)| < (1/2)|r_{pq}| x^p y^q on T_ij.
Similarly, by choosing ξ small enough, we can assume that the right-hand side of the analogue of (5.6) with S_i replaced by R_i is also less than (1/2)|r_{pq}| x^p y^q on T_ij. Hence on T_ij we have

|R_i(x,y)| > (1/2)|r_{pq}| x^p y^q > |S_i(x,y)|     (5.7)

so that

|{(x,y) ∈ T_ij : |R_i(x,y)| < ε}| ≤ |{(x,y) ∈ T_ij : |S_i(x,y)| < ε}| ≤ C_S ε^j |ln ε|^p

This is at least as strong as the estimate we need.

Now suppose that s_{pq} ≠ 0. Since (p,q) is a vertex of N(R_i) and N(S_i) ⊂ N(R_i), this means (p,q) is a vertex of N(S_i) as well. Like above, by shrinking ξ enough one can assume that the right-hand sides of (5.6) and its analogue with S_i replaced by R_i are less than (1/2)|s_{pq}| x^p y^q and (1/2)|r_{pq}| x^p y^q respectively. As a result, on T_ij we have

|S_i(x,y)| < 2|s_{pq}| x^p y^q,     |R_i(x,y)| > (1/2)|r_{pq}| x^p y^q

so that

|{(x,y) ∈ T_ij : |R_i(x,y)| < ε}| < |{(x,y) ∈ T_ij : |S_i(x,y)| < 4(|s_{pq}|/|r_{pq}|)ε}|     (5.8)

The right-hand side of (5.8) is at most C ε^j |ln ε|^p, the desired estimate in the setting of Theorem 1.2. As for Theorem 1.1, we may assume the δ of that theorem is small enough so that |s_{pq}|/|r_{pq}| < 2. In this case, we have

|{(x,y) ∈ T_ij : |S_i(x,y)| < 4(|s_{pq}|/|r_{pq}|)ε}| < |{(x,y) ∈ T_ij : |S_i(x,y)| < 8ε}| < C_S ε^j |ln ε|^p     (5.9)

Combining (5.8) and (5.9) gives the desired estimates for Theorem 1.1 and we are done.

It remains to bound M_{R_i,U_ij}(ε) for the U_ij intersecting F_i that are not contained in G_i. Each U_ij corresponds to an edge e_ij of N(R_i). There are only finitely many possible such e_ij for any R_i; that is, there are only finitely many pairs of vertices that can be the endpoints of an edge corresponding to some such U_ij for any possible S_i + tf_i, regardless of what f_i is. To see why this is true, write the equation of e_ij as a + m_ij b = α_ij, and its upper vertex as (a,b). Since U_ij is a subset of F_i, we automatically have m_ij ≥ N_i.
We separately consider the cases b < b_i, b = b_i, and b > b_i, and show in each case that there are finitely many possibilities for e_ij.

We start with the case where b < b_i. In this case G_i takes care of all U_ij with m_ij > N_i, so we are left with the case when m_ij = N_i. There are only finitely many possibilities with a ≤ a_i, so we may assume that a > a_i. Note that since (a_i, b_i) is in N(S_i), it is also in N(R_i). Since a > a_i this means (a,b) cannot be on the vertical ray of N(R_i). Instead it is the lower vertex of a compact edge e′ of N(R_i) of slope at least (b − b_i)/(a − a_i). Since the slope of e_ij is −1/N_i and e_ij lies below e′, we must have that (b − b_i)/(a − a_i) ≤ −1/N_i. In other words we have a ≤ a_i + N_i(b_i − b). Since it is also true that b < b_i, there are finitely many possibilities for this to occur.

Next, consider the case where b = b_i. Here since N(R_i) contains N(S_i) and (a_i, b_i) is a vertex of N(S_i), we have a ≤ a_i and there are finitely many possibilities for a. Since once again G_i takes care of all U_ij with m_ij > N_i, we have m_ij = N_i and once again there are a finite number of possibilities for e_ij.

Lastly, we consider the situation where b > b_i. Then since (a,b) lies above the vertex (a_i, b_i) of N(S_i), we have a < a_i. Since N(R_i) contains N(S_i), (a_i, b_i) ∈ N(R_i). Hence the slope of e_ij is at most (b_i − b)/(a_i − a). It is also greater than or equal to −1/N_i because U_ij ⊂ F_i. Thus we have −1/N_i ≤ (b_i − b)/(a_i − a), or b ≤ b_i + (a_i − a)/N_i. Coupled with the condition that a < a_i, once again we have finitely many possibilities for (a,b) and we are done.

Thus for our future arguments, it suffices to fix a single e_ij and prove the desired upper bounds for the U_ij associated with e_ij. Recall that for a fixed x the vertical cross-section of U_ij has width (ξ^{−1} − ξ) x^{m_ij}.
If one now applies the next step of the resolution of singularities algorithm of Theorem 3.1 (to R_i(x,y)), one divides U_ij into pieces V_ijk and W_ijk, where each V_ijk is of the form {(x,y) : (r_ijk − ξ) x^{m_ij} < y < (r_ijk + ξ) x^{m_ij}} for a root r_ijk of the polynomial (S_i)_{e_ij}(1, y). On each W_ijk one has |R_i(x,y)| ∼ x^{α_ij}, where e_ij has equation a + m_ij b = α_ij. We also need to split each V_ijk into two pieces V¹_ijk and V²_ijk, where V¹_ijk is the portion where |y − r_ijk x^{m_ij}| < x^{m_ij + η}, and V²_ijk is the rest. Here η is a small positive constant which must be sufficiently small for our arguments to work. Note that since there are at most boundedly many r_ijk for a given e_ij and boundedly many e_ij for our fixed S(x,y), the total number of V¹_ijk, V²_ijk, and W_ijk is uniformly bounded given S(x,y).

Our analysis will proceed as follows. We will first show that each M_{R_i,W_ijk}(ε) satisfies the bounds required of Theorems 1.1 or 1.2. Then we will show that if η > 0 is sufficiently small, each M_{R_i,V²_ijk}(ε) also satisfies the bounds. Afterwards, a separate argument will be used to show that for each V¹_ijk intersecting F_i, M_{R_i,V¹_ijk}(ε) satisfies bounds better than the ones we need. We will do this by applying the full resolution of singularities algorithm to f_i(x,y) on each V¹_ijk. We will obtain the corresponding sets called D_i in Theorem 3.1, and show that for each D amongst them M_{R_i,D}(ε) satisfies better estimates than what we need.

The analysis of M_{R_i,W_ijk}(ε).

Since N(R_i) contains N(S_i) and e_ij has equation a + m_ij b = α_ij, the point (a_i, b_i) is on or above the line containing e_ij and hence a_i + m_ij b_i ≥ α_ij. Since on W_ijk we have |R_i(x,y)| ∼ x^{α_ij}, we have |R_i(x,y)| ≥ C_1 x^{a_i + m_ij b_i}. Since for fixed x the set W_ijk has cross-section ∼ x^{m_ij}, we conclude that

|{(x,y) ∈ W_ijk : |R_i(x,y)| < ε}| < C_2 ε^{(m_ij + 1)/(a_i + m_ij b_i)}     (5.10a)
If (a_i, b_i) is strictly above the line, then for some ζ > 0,

|{(x, y) ∈ W_ijk : |R_i(x, y)| < ε}| < C_2 ε^{(m_ij + 1)/(a_i + m_ij b_i) + ζ}

For now at least, we have no information concerning the constants C_1 and C_2 of these two estimates. Note that the exponent (m_ij + 1)/(a_i + m_ij b_i) is the reciprocal of the ordinate of the intersection of the bisectrix with the line of slope −1/m_ij containing (a_i, b_i). Also note that since S_i(x, y) ∼ x^{a_i + m_ij b_i} on W_ijk, we have

|{(x, y) ∈ W_ijk : |S_i(x, y)| < ε}| > C ε^{(m_ij + 1)/(a_i + m_ij b_i)}

Hence

|{(x, y) ∈ W_ijk : |R_i(x, y)| < ε}| < C |{(x, y) ∈ W_ijk : |S_i(x, y)| < ε}| ≤ C M_{S,D}(ε) ≤ C′ ε^{1/d} |ln ε|^p

If (a_i, b_i) lies strictly above the line containing e_ij, the added ζ in the second estimate above gives us any constant C we want for ε small enough, thereby implying the estimate needed for Theorem 1.1. Thus we need only consider the case where (a_i, b_i) is actually on the line containing e_ij. Furthermore, in any situation in which (m_ij + 1)/(a_i + m_ij b_i) is strictly greater than 1/d, we could once again make C arbitrarily small. So it makes sense to analyze when we do not have this strict inequality; we will see momentarily that this only happens in special situations.

Consider the case when the bisectrix intersects the interior of a compact edge of N(S_i). In this case, by Lemma 4.1a) we have b_i < a_i. Hence if m_ij is greater than N_i, the minimal possible value on F_i, then we have (m_ij + 1)/(a_i + m_ij b_i) > (N_i + 1)/(a_i + N_i b_i), which in turn is at least 1/d. Hence when the bisectrix intersects the interior of a compact edge of N(S_i), equality can only occur if m_ij = N_i.

Next, consider the case when the bisectrix intersects the vertex (d, d) of N(S_i). In this case, by Lemma 4.1b), either (a_i, b_i) = (d, d), or a_i > d, b_i < d, and (N_i + 1)/(a_i + N_i b_i) ≥ 1/d.
In the latter case, if m_ij > N_i then as above we have (m_ij + 1)/(a_i + m_ij b_i) > (N_i + 1)/(a_i + N_i b_i) ≥ 1/d, so equality can occur only when m_ij = N_i.

Lastly, consider the case where the bisectrix intersects N(S_i) in the interior of one of the rays. In this case Lemma 4.1c) applies, and either (a_i, b_i) = (c, d) for some c < d, or (a_i, b_i) satisfies a_i ≥ d, b_i < d, and (N_i + 1)/(a_i + N_i b_i) > 1/d. In the latter case, since m_ij ≥ N_i we automatically have (m_ij + 1)/(a_i + m_ij b_i) > 1/d, and equality does not occur. In the former case, since c < d we also have (m_ij + 1)/(c + m_ij d) > 1/d, and equality again does not occur.

In summary, in bounding M_{R_i, W_ijk}(ε) we have already covered all possible cases except the following two situations. First, we can have m_ij = N_i, a_i > d, and b_i < d. Secondly, the bisectrix may intersect N(S_i) at the vertex (d, d) with (a_i, b_i) = (d, d). Furthermore, as mentioned above, we only need to consider the situation when (a_i, b_i) is on the line containing the edge e_ij. The argument we will use for these two situations will actually give the needed bounds for all of M_{R_i, V_ijk}(ε) in those situations as well. In fact, recalling that U_ij = ∪_k V_ijk ∪ ∪_k W_ijk, what we will do is prove upper bounds for all of M_{R_i, U_ij}(ε) in both situations.

Bounding M_{R_i, U_ij}(ε) in the two exceptional situations.

We start with the first situation, where m_ij = N_i, a_i > d, b_i < d, and (a_i, b_i) is on the line containing e_ij. We write the Taylor expansions of R_i(x, y) and S_i(x, y) at the origin as Σ_{a,b} r_ab x^a y^b and Σ_{a,b} s_ab x^a y^b. Here the b's are all integers, but a may be a nonintegral positive rational number. We write

R_i(x, y) = Σ_{a + N_i b = α_ij} r_ab x^a y^b + Σ_{a + N_i b > α_ij} r_ab x^a y^b

The first sum dominates on U_ij, and one bounds the resulting sublevel measure (5.20) using Lemma 2.3. Since α_ij = a_i + N_i b_i and b_i < a_i, we have b_i < (α_ij − b_i)/N_i, and part c) of Lemma 2.3 applies. We obtain that

|{(x, y) ∈ U_ij : |R_i(x, y)| < ε}| < C′_S ε^{(N_i + 1)/α_ij}   (5.21)
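The monotonicity fact used repeatedly in the last few paragraphs — that the exponent (m + 1)/(a_i + m b_i) is strictly increasing in m precisely when b_i < a_i — can be verified directly. The values below are illustrative choices of my own, not taken from the paper:

```python
from fractions import Fraction

def exponent(a, b, m):
    # the exponent (m+1)/(a + m*b) appearing in the W_ijk estimates
    return Fraction(m + 1, a + m * b)

a, b = 5, 2                                    # b < a, as in Lemma 4.1a)
vals = [exponent(a, b, m) for m in range(1, 10)]
assert all(v1 < v2 for v1, v2 in zip(vals, vals[1:]))   # strictly increasing in m

a2, b2 = 2, 5                                  # with b > a the monotonicity reverses
vals2 = [exponent(a2, b2, m) for m in range(1, 10)]
assert all(v1 > v2 for v1, v2 in zip(vals2, vals2[1:]))
print("monotonicity checks passed")
```

This is why the hypothesis b_i < a_i from Lemma 4.1a) matters: it guarantees that m_ij > N_i forces a strictly larger exponent than (N_i + 1)/(a_i + N_i b_i).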
Since α_ij = a_i + N_i b_i, by Lemma 4.1a) the exponent (N_i + 1)/α_ij is at least 1/d, and (5.21) gives us the estimate we need. Thus we are done in the setting of Theorem 1.1.

Suppose now that we are in the setting of Theorem 1.2. Since (a_i, b_i) ∈ e_ij and S_i is in superadapted coordinates, any zero of (S_i)_{e_ij}(1, y) is of order less than d. As a result, no matter what f_i is, there are at most finitely many t for which (R_i)_{e_ij}(1, y) = (S_i)_{e_ij}(1, y) + t(f_i)_{e_ij}(1, y) has a zero of order d or greater on [0, A_i]. (This can be proven by an elementary argument.) Hence, excluding those t, for a given t we can divide [0, A_i] into closed intervals B_1, ..., B_m such that on each B_k, ∂_y^l((R_i)_{e_ij}(1, y)) is nonvanishing for some 0 ≤ l < d. We then apply the above argument for each B_k, replacing b_i by l. Thus for each k the corresponding set of points where |R_i(x, y)| < ε has measure less than C ε^{1/d}. Adding over all k, we get the upper bounds required by Theorem 1.2. Thus we have proven the desired bounds for M_{R_i, U_ij}(ε) in the case where the bisectrix intersects N(S_i) in the interior of a compact edge.

We now turn to the case where the bisectrix intersects N(S_i) at a vertex (d, d) with (a_i, b_i) = (d, d). For fixed x, the y-cross-section of U_ij is contained in [c_ij x^{m_ij}, C_ij x^{m_ij}] for some c_ij and C_ij which depend on the function R_i(x, y). We write [c_ij x^{m_ij}, C_ij x^{m_ij}] as the union of [c_ij x^{m_ij}, x^{m_ij}] and [x^{m_ij}, C_ij x^{m_ij}], and correspondingly write U_ij = U_ij^1 ∪ U_ij^2. We focus our attention on U_ij^1 only, as U_ij^2 is done analogously with the roles of the two axes reversed. One technical point here is worth mentioning. Since (a_i, b_i) = (d, d) and S(x, y) was in superadapted coordinates, the algorithm of Theorem 3.1 is such that φ_i(x, y) is the identity; F_i is carved out of the original disk and there is no coordinate change.
This is relevant here because it implies that fractional powers of x do not appear; one can switch the x and y axes without any issues caused by fractional powers arising.

The argument is basically the same as the one leading to (5.21), so we will be brief. This time we write

R_i(x, y) = Σ_{a + m_ij b = α_ij} r_ab x^a y^b + Σ_{a + m_ij b > α_ij} r_ab x^a y^b

The analogue of (5.20) here is

|{(x, y) ∈ U_ij^1 : |R_i(x, y)| < ε}| < |{(x, y) ∈ U_ij^1 : (1/4) d! C″_S x^d y^d < ε}|

This leads to

|{(x, y) ∈ U_ij^1 : |R_i(x, y)| < ε}| < C‴_S ε^{1/d} |ln ε|

which takes care of Theorem 1.1. For Theorem 1.2, one excludes the finitely many t for which (R_i)_{e_ij}(1, y) has zeroes of order greater than d on [c_ij, 1], and assumes (R_i)_{e_ij}(1, y) has no zeroes of order greater than d. One then proceeds as in the previous case, dividing [c_ij, 1] into intervals on which either (R_i)_{e_ij}(1, y) is nonvanishing or ∂_y^k((R_i)_{e_ij}(1, y)) is nonvanishing for some k ≤ d. This completes the arguments for the M_{R_i, U_ij}(ε) in the exceptional cases.

The analysis of M_{R_i, V_ijk^2}(ε).

Analogous to the expansions above, for every i and j we have

R_i(x, x^{m_ij} y) = x^{α_ij} (R_i)_{e_ij}(1, y) + x^{α_ij + ζ} P(x, y) + O(|x|^{M′})

There is a zero r of (R_i)_{e_ij}(1, y) such that on V_ijk^2 we have |y − r x^{m_ij}| > x^{m_ij + η}. Hence in the above equation, |y − r| > x^η. For any η′ > 0, we can choose η so that |y − r| > x^η implies that |(R_i)_{e_ij}(1, y)| > C x^{η′} for the (x, y) being considered. Hence for any η′ > 0,

x^{α_ij} |(R_i)_{e_ij}(1, y)| > C x^{α_ij + η′}

As long as η′ < ζ and M′ is sufficiently large, we therefore have

|R_i(x, x^{m_ij} y)| > C′ x^{α_ij + η′}

Since V_ijk^2 lies between the curves y = c_1 x^{m_ij} and y = c_2 x^{m_ij} for some c_1 and c_2, we conclude that

|{(x, y) ∈ V_ijk^2 : |R_i(x, y)| < ε}| < C″ ε^{(m_ij + 1)/(α_ij + η′)}   (5.28)

For the (i, j) for which the exponent (m_ij + 1)/α_ij is larger than what we need, by shrinking η′ enough, (5.28) gives that M_{R_i, V_ijk^2}(ε) satisfies a better estimate than what we need.
However, we saw that the only situations in which this exponent is not better than what we need are the exceptional cases discussed above, and these are exactly the cases in which we proved the desired upper bounds for all of M_{R_i, U_ij}(ε). Hence by shrinking η′ enough, (5.28) gives better than the needed estimates for all remaining V_ijk^2, and we are done.

The analysis of M_{R_i, V_ijk^1}(ε).

We now focus our attention on some fixed V_ijk^1, which consists of the points in U_ij for which |y − r x^{m_ij}| < x^{m_ij + η} for some r ∈ R. The exact value of r will not be important in what follows. We apply the resolution of singularities algorithm of Theorem 3.1 to f_i(x, y) on F_i, and consider the sets called D_i in that theorem that intersect V_ijk^1. Since we are already using the index i, we refer to them as D_l here. To each D_l there is a coordinate change φ_l such that f_i ∘ φ_l is comparable to a monomial on D_l in the sense of Theorem 3.1 b) or c). Since we consider only those D_l intersecting V_ijk^1, the function φ_l is such that |y| < x^{m_ij + η} on φ_l^{−1}(D_l). Write S′_i(x, y) = S_i ∘ φ_l(x, y). Our first task will be to understand the behavior of S′_i(x, y) on the set φ_l^{−1}(D_l), which we denote by E_l.

To this end, note that the domain F_i of S_i(x, y) has upper boundary given by y = A_i x^{N_i} + higher order terms, and lower boundary given by either y = 0 or y = a_i x^{n_i} + higher order terms, where n_i > N_i. Since S_i(x, y) ∼ x^{a_i} y^{b_i} on F_i, if n_i > m_ij > N_i then on N(S_i) the linear function a + m_ij b is minimized at exactly one point, (a_i, b_i). If m_ij = N_i or n_i, then a + m_ij b will be minimized at (a_i, b_i) and possibly also at other points of N(S_i).

Next, suppose f(x) is any function such that f(x^K) is smooth for some K and such that the Taylor expansion of f(x) has initial term c x^{m_ij} for some c.
Also suppose that n_i > m_ij > N_i. Then the Newton polygon of S_i(x, y − f(x)) will have an edge of slope −1/m_ij containing the point (a_i + m_ij b_i, 0), and no edges of more horizontal slope. This fact implies that S_i(x, y − f(x)) ∼ x^{a_i + m_ij b_i} on V_ijk^1, since |y| < |x|^{m_ij + η} on V_ijk^1. Note that the same is true if m_ij = N_i, provided c is small enough, or if m_ij = n_i, provided c is large enough; for in these cases the vertices other than (a_i, b_i) minimizing a + m_ij b do not interfere. Consequently, we may assume that S′_i(x, y) = S_i ∘ φ_l(x, y) and its Newton polygon have the above properties. For S′_i = S_i ∘ φ_l(x, y) is either of the form S_i(x, y − f(x)) or S_i(x, −y − f(x)) for an f(x) of this type. (In the case where m_ij = n_i or N_i, if the quantity called ξ in the proof of the resolution of singularities algorithm that was originally applied to S_i(x, y) was small enough, which we may assume, then the coefficient a_i will be large enough and the coefficient A_i will be small enough for this to work.)

Let f′_i = f_i ∘ φ_l and R′_i = R_i ∘ φ_l, so that R′_i = S′_i + t f′_i. We will now estimate the various M_{R′_i, E_l}(ε) and show that they satisfy estimates better than those we need.

We start with E_l satisfying part c) of Theorem 3.1; that is, when f′_i(x, y) ∼ x^{α_l} y^{β_l} with β_l > 0. Define X_l to be the set of points in E_l for which |R′_i(x, y)| > (1/2)|S′_i(x, y)|, and let Y_l = E_l − X_l. Then we have

|{(x, y) ∈ X_l : |R′_i(x, y)| < ε}| ≤ |{(x, y) ∈ X_l : |S′_i(x, y)| < 2ε}| ≤ M_{S′_i, E_l}(2ε)

This is better than the estimate we need, so we focus our attention on Y_l. Note that on Y_l we have

(1/2)|S′_i(x, y)| ≤ |t f′_i(x, y)| ≤ (3/2)|S′_i(x, y)|   (5.30)

Also, f′_i(x, y) ∼ x^{α_l} y^{β_l} on E_l, while by the above discussion S′_i(x, y) ∼ x^{a_i + m_ij b_i} on E_l. Thus when (5.30) holds, we have x^{α_l} y^{β_l} ∼ x^{a_i + m_ij b_i}.
Next, note that by Theorem 3.1c), for any K one has

|∂_y f′_i(x, y)| > C x^{α_l} y^{β_l − 1} − O(x^K) > C′ x^{a_i + m_ij b_i} y^{−1} − O(x^K) > C″ x^{a_i + m_ij b_i − m_ij − η}

The last inequality holds since |y| < |x|^{m_ij + η} on E_l and K may be taken large. On the other hand, since the Newton polygon of S′_i(x, y) has an edge of slope −1/m_ij containing (a_i + m_ij b_i, 0),

|∂_y S′_i(x, y)| < C x^{a_i + m_ij b_i − m_ij}

Thus in the y-derivative of R′_i(x, y) = S′_i(x, y) + t f′_i(x, y), the derivative of the second term dominates, and (for t ≠ 0) we have

|∂_y R′_i(x, y)| > C x^{a_i + m_ij b_i − m_ij − η}

Then by Lemma 2.2, for some x_0 > 0,

|{(x, y) ∈ Y_l : |R′_i(x, y)| < ε}| ≤ |{(x, y) ∈ Y_l : C x^{a_i + m_ij b_i − m_ij − η} y < ε}| ≤ |{(x, y) : 0 < x < x_0, |y| < x^{m_ij + η}, C x^{a_i + m_ij b_i − m_ij − η} y < ε}|   (5.34)

since |y| < |x|^{m_ij + η} on E_l. The right-hand side of (5.34) can be bounded with the help of Lemma 2.3. If part a) or b) applies, it is bounded by C ε |ln ε|, better than the estimate we need since we are assuming d > 1. If part c) applies, we get

|{(x, y) ∈ Y_l : |R′_i(x, y)| < ε}| < C ε^{(m_ij + η + 1)/(a_i + m_ij b_i)}   (5.35)

Since η > 0, this exponent is better than (m_ij + 1)/(a_i + m_ij b_i), the reciprocal of the ordinate of the intersection of the bisectrix with the line of slope −1/m_ij containing (a_i, b_i); this ordinate is at most d. Hence the exponent on the right-hand side of (5.35) is greater than 1/d, better than the estimate that we need.

We now move to the case where part b) of Theorem 3.1 is satisfied; that is, we assume β_l = 0, and thus f′_i(x, y) ∼ x^{α_l} on E_l. If α_l were less than a_i + m_ij b_i, for t ≠ 0 we would have R′_i(x, y) = S′_i(x, y) + t f′_i(x, y) ∼ x^{α_l} on E_l as well. Thus for small enough x (which we may assume by making the radius of the original disk D sufficiently small) we would have |R′_i(x, y)| ≥ |S′_i(x, y)|, and thus M_{R′_i, E_l}(ε) ≤ M_{S′_i, E_l}(ε), better than the estimate that we need.
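The passage from (5.34) to (5.35) is again a computation one can sanity-check numerically. In the sketch below (with model parameters of my own choosing, not from the paper), the measure of {0 < x < 1, 0 < y < x^(m+η), x^(a+mb−m−η) y < ε} is computed by numerical integration, and its growth exponent in ε is compared with (m + η + 1)/(a + m b):

```python
import math

a, b, m, eta = 4, 1, 2, 0.5        # hypothetical parameters with a + m*b - m - eta > 1
gamma = a + m * b - m - eta         # exponent of x in the derivative bound

def measure(eps, n=400000):
    # Riemann sum: the y-cross-section has length min(x^(m+eta), eps / x^gamma)
    total, dx = 0.0, 1.0 / n
    for i in range(1, n + 1):
        x = i * dx
        total += min(x ** (m + eta), eps / x ** gamma) * dx
    return total

eps1, eps2 = 1e-7, 1e-9
slope = math.log(measure(eps1) / measure(eps2)) / math.log(eps1 / eps2)
predicted = (m + eta + 1) / (a + m * b)
print(round(slope, 3), round(predicted, 3))   # fitted exponent vs (m+eta+1)/(a+m*b)
```

The crossover between the two constraints occurs at x ∼ ε^(1/(a+mb)), and integrating each piece gives the exponent (m + η + 1)/(a + m b), matching (5.35).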
Similarly, if α_l were greater than a_i + m_ij b_i, then f′_i(x, y) would be small compared to S′_i(x, y), and thus when x is sufficiently small we would have |R′_i(x, y)| ≥ (1/2)|S′_i(x, y)|, once again giving an estimate better than the one we need. Hence in the following we assume that α_l = a_i + m_ij b_i.

Next, note that since we are in the setting of part b) of Theorem 3.1, the upper boundary of E_l has equation y = b x^p + higher order terms, where p ≥ m_ij + η, and the lower boundary of E_l is the x-axis. Since f′_i(x, y) ∼ x^{a_i + m_ij b_i} on all of E_l, the Newton polygon of f′_i contains the vertex (a_i + m_ij b_i, 0).

We now look at S′_i(x, x^p y) and f′_i(x, x^p y). First, in analogy with the expansions above, we write

f′_i(x, y) = Σ_{a + pb = a_i + m_ij b_i} f′_ab x^a y^b + Σ_{a + pb > a_i + m_ij b_i} f′_ab x^a y^b

The y-variable in the resulting expression (5.38) for R′_i(x, x^p y) ranges over a set contained in [0, b + δ] for any δ > 0. (Recall that b is such that the upper boundary of E_l is given by y = b x^p + ....) We write [0, b + δ] as a union of closed intervals B_1, ..., B_m, where on each B_k either (R′_i)_{e_p}(1, y) is nonvanishing, or ∂_y((R′_i)_{e_p}(1, y)) is nonvanishing.

Consider the case of a B_k on which (R′_i)_{e_p}(1, y) is nonvanishing. Then on the domain of (5.38) we have |R′_i(x, x^p y)| > C x^{a_i + m_ij b_i}. Translating back into the coordinates of E_l, we have |R′_i(x, y)| > C x^{a_i + m_ij b_i} on a subset A_k of {(x, y) : 0 < x < x_0, 0 < y < (b + 1)x^p} for some x_0 > 0. Thus we have

M_{R′_i, A_k}(ε) = |{(x, y) ∈ A_k : |R′_i(x, y)| < ε}| < C ε^{(p + 1)/(a_i + m_ij b_i)}   (5.39)

Since p > m_ij, this exponent is better than (m_ij + 1)/(a_i + m_ij b_i) ≥ 1/d. Thus (5.39) is better than the estimate we need.

Now consider the case of a B_k on which ∂_y((R′_i)_{e_p}(1, y)) is nonvanishing. In this case, the relevant analogue of (5.38) is

(∂_y R′_i)(x, x^p y) = x^{a_i + m_ij b_i − p} ∂_y((R′_i)_{e_p}(1, y)) + x^{a_i + m_ij b_i − p + ζ} R″(x, y) + O(|x|^{M′})   (5.40)
Thus on the domain of (5.40) we have |(∂_y R′_i)(x, x^p y)| > C x^{a_i + m_ij b_i − p} if x is sufficiently small, which we may assume. Translating back into the original coordinates of E_l, this time we get that |∂_y R′_i(x, y)| > C x^{a_i + m_ij b_i − p} on a subset A_k of {0 < x < x_0, 0 < y < (b + 1)x^p}. Thus we may apply Lemma 2.2 and say that

M_{R′_i, A_k}(ε) = |{(x, y) ∈ A_k : |R′_i(x, y)| < ε}| ≤ |{(x, y) ∈ A_k : C x^{a_i + m_ij b_i − p} y < ε}| ≤ |{(x, y) ∈ E_l : C x^{a_i + m_ij b_i − p} y < ε}|   (5.41)

Since E_l is a subset of {(x, y) : 0 < x < x_0, 0 < y < (b + 1)x^p} for some x_0 > 0, we may use Lemma 2.3 to estimate the right-hand side of (5.41). If part a) or b) applies, then M_{R′_i, A_k}(ε) < C ε |ln ε|, better than the estimate we need. If part c) applies, we get

M_{R′_i, A_k}(ε) < C ε^{(p + 1)/(a_i + m_ij b_i)}   (5.42)

As with (5.39), since p > m_ij this exponent is better than the one we need.

6. References.

[AGV] V. Arnold, S. Gusein-Zade, A. Varchenko, Singularities of differentiable maps, Volume II, Birkhäuser, Basel, 1988.
[CaCWr] A. Carbery, M. Christ, J. Wright, Multidimensional van der Corput and sublevel set estimates, J. Amer. Math. Soc. (1999), no. 4, 981-1015.
[G1] M. Greenblatt, The asymptotic behavior of degenerate oscillatory integrals in two dimensions, to appear, J. Funct. Anal.
[G2] M. Greenblatt, A direct resolution of singularities for functions of two variables with applications to analysis, J. Anal. Math. (2004), 233-257.
[G3] M. Greenblatt, Sharp L^2 estimates for one-dimensional oscillatory integral operators with C^∞ phase, Amer. J. Math. (2005), no. 3, 659-695.
[H] L. Hörmander, The analysis of linear partial differential operators I: Distribution theory and Fourier analysis, 2nd ed., Springer-Verlag, Berlin, 1990, xii+440 pp.
[IKeM] I. Ikromov, M. Kempe, and D. Müller, Sharp L^p estimates for maximal operators associated to hypersurfaces in R^3 for p > 2, preprint.
[IoSa] A. Iosevich, E. Sawyer, Oscillatory integrals and maximal averages over homogeneous surfaces, Duke Math. J. no. 1 (1996), 103-141.
[K1] V. N. Karpushkin, A theorem concerning uniform estimates of oscillatory integrals when the phase is a function of two variables, J. Soviet Math. (1986), 2809-2826.
[K2] V. N. Karpushkin, Uniform estimates of oscillatory integrals with parabolic or hyperbolic phases, J. Soviet Math. (1986), 1159-1188.
[K3] V. N. Karpushkin, Uniform estimates for volumes, Tr. Mat. Inst. Steklova (1998), 225-231.
[PS] D. H. Phong, E. M. Stein, The Newton polyhedron and oscillatory integral operators, Acta Mathematica (1997), 107-152.
[PSSt] D. H. Phong, E. M. Stein, J. Sturm, On the growth and stability of real-analytic functions, Amer. J. Math. (1999), no. 3, 519-554.
[R] K. M. Rogers, Sharp van der Corput estimates and minimal divided differences, Proc. Amer. Math. Soc. (2005), no. 12, 3543-3550.
[V] A. N. Varchenko, Newton polyhedra and estimates of oscillatory integrals, Functional Anal. Appl.