Gradient estimates for oblique derivative problems via the maximum principle
arXiv preprint [math.AP]
GARY M. LIEBERMAN
Introduction
In this work, we study the gradient estimate for solutions of the boundary value problem

  a^{ij}(x, u, Du) D_{ij}u + a(x, u, Du) = 0 in Ω,  (0.1a)
  b(x, u, Du) = 0 on ∂Ω,  (0.1b)

where Ω is a domain in R^n for some positive integer n and we follow the summation convention. Our two primary hypotheses are that the matrix [a^{ij}] is positive definite and that the vector derivative b_p(x, z, p) = ∂b(x, z, p)/∂p satisfies the condition b_p · γ > 0 on ∂Ω for γ the exterior unit normal to ∂Ω. Such problems have been studied for a long time and are the topic of the book [13]. It is well known that the key step in proving existence of solutions to this problem is a bound on the gradient of the solution, and, here, we study the gradient estimate under more general hypotheses than in previous works. Specifically, we obtain a gradient bound under conditions on the differential equation that are based on those in [19]. Unfortunately, we are unable to include some important special cases for reasons that will be described in more detail later. Our model equation is the so-called false mean curvature equation, in which a^{ij} = δ^{ij} + p_ip_j, where δ^{ij} is the Kronecker delta, that is, δ^{ij} = 1 if i = j and δ^{ij} = 0 if i ≠ j. The oblique derivative problem for this equation was first studied in [7] and our results improve the ones there. It is our hope that the method described here can be extended to other oblique derivative problems, such as the capillary problem

  (δ^{ij} − D_iuD_ju/(1 + |Du|^2)) D_{ij}u + a(x, u, Du) = 0 in Ω,
  Du · γ/(1 + |Du|^2)^{1/2} + ψ(x, u) = 0 on ∂Ω

under suitable conditions on a and ψ, but we have not seen how to do so yet. Of course, this problem can be completely analyzed by various other methods, and we refer the interested reader to the Notes of Chapter 10 in [13] as well as [8] and [16] for details on these other methods.

Date: September 25, 2018.
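A remark on the model equation may help orient the reader: the matrix I + pp^T has eigenvalue 1 + |p|² in the direction of p and eigenvalue 1 on the orthogonal complement, so it is positive definite but its ellipticity ratio grows like 1 + |p|², which is why uniformly elliptic theory does not apply. A quick numerical sanity check of this linear-algebra fact (illustrative only; not part of the paper's argument):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.standard_normal(5)          # an arbitrary gradient vector p in R^5
A = np.eye(5) + np.outer(p, p)      # a^{ij} = delta_ij + p_i p_j

eigs = np.sort(np.linalg.eigvalsh(A))
# n-1 eigenvalues equal 1 (directions orthogonal to p), one equals 1 + |p|^2
assert np.allclose(eigs[:-1], 1.0)
assert np.isclose(eigs[-1], 1.0 + p @ p)
# hence [a^{ij}] is positive definite, but the ratio of largest to smallest
# eigenvalue is 1 + |p|^2: the equation is not uniformly elliptic
```

The same computation with p replaced by Du shows why gradient bounds are the crucial a priori estimate here: once |Du| is bounded, the equation becomes uniformly elliptic and standard theory applies.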
We also analyze the corresponding parabolic problem

  −u_t + a^{ij}(X, u, Du) D_{ij}u + a(X, u, Du) = 0 in Ω,  (0.2a)
  b(X, u, Du) = 0 on SΩ,  (0.2b)
  u = u_0 on BΩ  (0.2c)

for a fairly general space-time domain Ω, where we have followed the notation in [11]. More specifically, we first write PΩ for the parabolic boundary of Ω, that is, PΩ is the set of all points X_0 = (x_0, t_0) in the topological boundary of Ω such that, for any R > 0, the cylinder

  Q(X_0, R) = {X ∈ R^{n+1} : |x − x_0| < R, t_0 − R² < t < t_0}

contains at least one point not in Ω. Then SΩ denotes the set of all X_0 ∈ PΩ such that, for any R > 0, Q(X_0, R) contains at least one point in Ω. Finally, BΩ = PΩ \ SΩ. The currently available situation for this problem is quite limited compared to that for the elliptic problem. Gradient estimates are only known in three cases. The first case is when the equation is uniformly parabolic in the sense that the eigenvalues of the matrix [a^{ij}(X, z, p)] are bounded from above and below by positive constants for all (
X, z, p) ∈ Ω × R × R^n. In this case, gradient estimates appear in [21] and Section 13.3 of [11]. The second case is that of the conormal problem, which means that there is a vector-valued function A such that

  a^{ij}(X, z, p) = ∂A^i(X, z, p)/∂p_j  and  b(X, z, p) = A(X, z, p) · γ + ψ(X, z)

for some scalar function ψ. In this case, gradient estimates appear in [9] (see Section 13.2 of [11] for further discussion of this work). Finally, Huisken [4] and, more recently, Mizuno and Takasao [17], proved gradient estimates for the problem

  −u_t + (δ^{ij} − D_iuD_ju/(1 + |Du|²)) D_{ij}u = f(X, u, Du) in Ω,
  γ · Du = 0 on SΩ,
  u = u_0 on BΩ

when Ω = ω × (0, T) for some domain ω ⊂ R^n and T > 0; these works take f(X, z, p) ≡ f and only prove their gradient bound for small time. Our present results expand on the first and third cases in this list (although our results do not really include those of Mizuno and Takasao). In fact, our argument, when specialized to uniformly parabolic equations, is essentially identical to the one in Section 13.3 of [11]. More importantly, we obtain gradient bounds for a number of parabolic problems not previously accessible.

Our scheme is straightforward but, sadly, rather heavy on computation. We begin with some basic assumptions and notation in Section 1 and some simple properties of monotone functions in Section 2. In Section 3, we introduce an auxiliary function which is crucial in our gradient estimates. To apply our maximum principle argument, we present some preliminary calculations in Section 4, which are then used in Sections 5 and 6 to derive our gradient estimates for elliptic problems. Examples to illustrate these results are given in Section 7. We then turn to the parabolic problem, giving gradient estimates in Sections 8 and 9. We close with examples of our gradient bounds for parabolic problems in Section 10.

Before beginning, we note an important limitation to our current gradient estimates.
Our maximum principle argument is based rather heavily on the corresponding analysis of Serrin [19] for interior gradient estimates, but he introduced a decomposition

  a^{ij}(x, z, p) = a^{ij}_*(x, z, p) + f^i(x, z, p)p_j + f^j(x, z, p)p_i

for some functions a^{ij}_* and f^i. This decomposition gives a number of striking results involving the structure of the coefficient a. In particular, interior gradient estimates for solutions of the equation

  (δ^{ij} + D_iuD_ju) D_{ij}u + |Du|⁴ = 0

are derived in [19]. Such a decomposition is not available in our case since p_i and p_j would have to be replaced with more complicated functions of p determined in a convoluted way from the boundary condition. Hence we can only obtain gradient estimates for solutions of the equation

  (δ^{ij} + D_iuD_ju) D_{ij}u + |Du|^q = 0

for a restricted range of exponents q > 1.

1. Basic assumptions and notation
First, we say that ∂Ω ∈ C² if ∂Ω can be written as the level set of a C² function f such that Df doesn't vanish on ∂Ω. We use d to denote the distance function to ∂Ω, and we denote by Ω_r, for any positive number r, the subset of Ω on which d < r. We then recall from Lemma 5.18 of [13] (see also Section 10.3 of [11]) that if ∂Ω ∈ C², then there is a positive constant R, determined only by Ω, such that d ∈ C²(∂Ω ∪ Ω_R). We set γ = Dd in Ω_R and note that γ is a C¹ unit vector. Moreover, for any vector ξ, if we define the vector ξ̃ by

  ξ̃_i = D_iγ_k ξ_k,  (1.1)

then |ξ̃| ≤ |ξ|/R in Ω_{R/2} by virtue of Lemma 14.17 of [3].

To describe our operator more easily, we define v = (1 + |p|²)^{1/2} and ν = p/v. We define the underlying matrix [g^{ij}] by g^{ij} = δ^{ij} − ν_iν_j, where δ^{ij} is the Kronecker δ, that is, δ^{ij} = 1 if i = j and 0 otherwise. We also set c^{ij} = δ^{ij} − γ_iγ_j, and, for any n-dimensional vector p, we write p′ for the vector with components p′_i = c_{ij}p_j. We also set v′ = (1 + |p′|²)^{1/2}.

We also adopt the following convention concerning derivatives of functions. First, we use subscripts to denote derivatives with respect to x, z, and p:

  f_x = ∂f/∂x,  f_z = ∂f/∂z,  f_p = ∂f/∂p.

We also use subscript indices to denote derivatives with respect to the corresponding coordinate of x:

  f_i = ∂f/∂x_i,

and we use superscript indices to denote derivatives with respect to the corresponding component of p:

  f^i = ∂f/∂p_i.

2. Some properties of monotone functions
Before we study the special function used in the remainder of this work, we present some properties of monotone functions. Specifically, we show that a decreasing function is essentially equivalent to a smooth decreasing function that satisfies some simple differential inequalities. The property of decreasing functions is essentially contained in Lemma 1.1 of [10].
Lemma 2.1.
Let L ≥ 0. Then, for any bounded, positive, decreasing function f defined on (L, ∞), there is a function F ∈ C²(L, ∞) with

  f(x) ≤ F(x),  (2.1a)
  −F(x)/x ≤ F′(x) ≤ 0,  (2.1b)
  −F(x)/x² ≤ F″(x) ≤ 2F(x)/x²,  (2.1c)

for all x ∈ (L, ∞), and

  lim_{x→∞} F(x) = lim_{x→∞} f(x).  (2.1d)

Proof.
By extending f to be constant on the interval [0, L], we may assume that L = 0. As in the proof of Lemma 1.1 from [10], we set

  g(x) = (1/x) ∫_0^x f(y) dy,

and we note that g is continuous. Since f is decreasing, we have g(x) ≥ f(x). Next, a simple integration by parts shows that

  ∫_ε^x (g(y)/y) dy = g(ε) − g(x) + ∫_ε^x (f(y)/y) dy  (2.2)

for any positive x and ε. It follows that

  g(x) = g(ε) + ∫_ε^x (1/y)(f(y) − g(y)) dy,

and hence g is decreasing. Next, we set

  h(x) = (1/x) ∫_0^x g(y) dy  and  F(x) = (1/x) ∫_0^x h(y) dy.

Then h is C¹ and F is C², and the preceding arguments show that h and F are decreasing with F(x) ≥ h(x) ≥ g(x) ≥ f(x). In addition,

  xF′(x) + F(x) = h(x) ≥ 0,

which implies the first inequality of (2.1b).
We now compute

  F′(x) = (h(x) − F(x))/x,

so

  F″(x) = (F(x) − h(x))/x² + (h′(x) − F′(x))/x = 2(F(x) − h(x))/x² + h′(x)/x.

Since F(x) ≥ h(x) and h′(x) ≥ −h(x)/x ≥ −F(x)/x, we infer the first inequality of (2.1c). The second inequality of (2.1c) follows because h ≥ 0 ≥ h′. Finally, (2.1d) follows from repeated application of l'Hôpital's Rule. □
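The averaging construction in this proof is easy to test numerically. The sketch below (an illustration only, with the sample choice f(x) = e^{−x}; it is not part of the proof) builds g, h, and F by the trapezoid rule and checks (2.1a) together with both inequalities of (2.1b):

```python
import numpy as np

def running_average(y, x):
    """Return (1/x) * integral_0^x y(t) dt on the grid x (trapezoid rule).

    The initial piece [0, x[0]] is approximated by y[0] * x[0]; for a
    decreasing y this slightly underestimates, which is harmless here."""
    pieces = 0.5 * (y[1:] + y[:-1]) * np.diff(x)
    integral = y[0] * x[0] + np.concatenate(([0.0], np.cumsum(pieces)))
    return integral / x

x = np.linspace(1e-6, 10.0, 20001)
f = np.exp(-x)                       # a bounded, positive, decreasing f
g = running_average(f, x)            # g(x) = (1/x) int_0^x f(y) dy, continuous
h = running_average(g, x)            # h is C^1
F = running_average(h, x)            # F is C^2

# F >= h >= g >= f, as in the proof; in particular (2.1a) holds
assert np.all(F >= h - 1e-9) and np.all(h >= g - 1e-9) and np.all(g >= f - 1e-9)
Fp = np.gradient(F, x)
assert np.all(Fp <= 1e-6)            # F is decreasing (second half of (2.1b))
assert np.all(x * Fp + F >= -1e-6)   # x F' + F = h >= 0, i.e. F' >= -F(x)/x
```

The tolerances absorb discretization error from the trapezoid rule and the finite-difference derivative; the inequalities themselves hold with substantial margin on this example.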
For ease of notation, we say that a function F is ∗-decreasing if it satisfies (2.1b) and (2.1c). This concept will be useful in later sections.

We also make a corresponding observation for Dini functions, recalling that a function f is Dini if f is increasing and

  ∫_0^L (f(y)/y) dy < ∞

for some positive L. (In particular, f must be nonnegative with lim_{x→0} f(x) = 0.) This observation will not be used here, but it may be helpful in other investigations.

Lemma 2.2. If f is Dini, then there is a C¹ Dini function F such that

  F(x) ≥ f(x),  (2.3a)
  F(x) ≤ 4f(4x),  (2.3b)
  F′(x) ≤ F(x)/x,  (2.3c)

for all sufficiently small positive x.

Proof. We define g and h as in the proof of Lemma 2.1 and set F(x) = 4h(4x). Then

  g(x) ≥ (1/x) ∫_{x/2}^x f(y) dy ≥ f(x/2)/2.

Similarly, h(x) ≥ g(x/2)/2 ≥ f(x/4)/4, which yields (2.3a). This time, the monotonicity of f implies that g and h are increasing with h(x) ≤ g(x) ≤ f(x), and this inequality yields (2.3b). In addition, (2.2) and the monotone convergence theorem imply that g is Dini. It follows that h is Dini and hence so is F. The proof of (2.3c) follows that of the first inequality of (2.1b). □

3. The N function

In this section, we introduce a function N, determined by the boundary condition, which is key to deriving our estimates. Such a function was first presented in [1], and our function takes advantage of a modification due to the present author [9] that allows us to study boundary conditions under weaker hypotheses than in [1]. In the previous applications of this function, a positive parameter ε is introduced and then fixed at a particular value. For our purposes, it will be very important to keep track of this parameter throughout our work, so we rewrite the results and present the proof with a careful accounting of this parameter.
To state the properties of this function in a useful way, we introduce one additional bit of terminology. For τ ≥ 1, we write Σ(τ) for the subset of ∂Ω × R × R^n on which v ≥ τ.

Theorem 3.1.
Let ∂Ω ∈ C². Let τ ≥ 1 be given and let b ∈ C²(Σ(τ)) with b_p · γ > 0 on Σ_0(τ), the subset of Σ(τ) on which b = 0. Suppose

  lim_{t→∞} b(x, z, p − tγ(x)) < 0 < lim_{t→∞} b(x, z, p + tγ(x))  (3.1)

for all (x, z, p) ∈ Σ(τ), and that there are positive constants β and c such that

  |b_p| ≤ β b_p · γ,  |p · γ| ≤ c v′  (3.2)

on Σ_0(τ). Suppose also that there are a ∗-decreasing function ε_x and a nonnegative constant β such that

  |b_x| ≤ ε_x(v) v b_p · γ,  (3.3a)
  |b_z| ≤ β v b_p · γ  (3.3b)

on Σ_0(τ). Then there is a positive constant ε_1(β, c), along with a C²(Ω_{R/2} × R × R^n × (0, ε_1)) function N such that

  0 ≤ N_p · γ ≤ 2,  (3.4a)
  |N_p| ≤ 4β,  (3.4b)
  |N_z| ≤ 8(1 + c)β v_ε,  (3.4c)
  |N_x| ≤ 12(1 + c)β v/R + 12(1 + c)ε_x(v)v,  (3.4d)
  |N − N_p · p| ≤ 8(1 + c)²β v_ε  (3.4e)

on Ω_{R/2} × R × R^n × (0, ε_1), where the arguments of N are (x, z, p, ε) and v_ε = (|p′|² + ε|p · γ|²)^{1/2}. Moreover, N = 0 on Σ_0((1 + c)^{1/2}τ) × (0, ε_1). If we set

  W = (|p′|² + εN²)^{1/2},  ν = p′ + εN N_p  (3.5)

for some ε ∈ (0, ε_1), then

  (1 − 8(1 + c)ε^{1/2})v_ε ≤ W ≤ (1 + 8(1 + c)ε^{1/2})v_ε,  (3.6a)
  |ν| ≤ 2v_ε,  (3.6b)
  |W² − p · ν| ≤ 8(1 + c)²β ε^{1/2} v_ε².  (3.6c)

Also, there is a nonnegative constant c_1, determined only by β, c and n, such that

  |N N^{km}ξ_kξ_m| ≤ c_1 ε^{−1/2}|ξ′|² + ¼|ξ · γ|²,  (3.7a)
  (1 − c_1ε^{1/2})|ξ′|² + ½ε|ξ · γ|² ≤ [c^{km} + εN N^{km} + εN^kN^m]ξ_kξ_m,  (3.7b)
  |N N_{pz} · ξ| ≤ c_1(ε^{−1/2}|ξ′| + |ξ · γ|)β v_ε,  (3.7c)
  |N N_{kx}ξ_k| ≤ c_1(ε^{−1/2}|ξ′| + |ξ · γ|)[v/R + ε_x(v)v]  (3.7d)

for any ξ ∈ R^n,

  |N N_{zz}| ≤ c_1β²v_ε²,  (3.7e)
  |N N_{xz}| ≤ c_1ε^{−1/2}β v_ε[v/R + ε_x(v)v].  (3.7f)

Finally, if ∂Ω ∈ C³, then there is a nonnegative constant c_2, determined only by Ω, such that

  |N N_{xx}| ≤ c_1ε^{−1/2}[v/R + ε_x(v)v]² + c_2v².  (3.7g)

Proof.
Although the results are mostly contained in Lemma 10.8 of [13] and Lemma 4.3 of [12], we sketch the proof here because there are some points that are not immediate consequences of the arguments there. Specifically, Lemma 10.8 of [13] only studies second derivatives of N with respect to p (and the argument there will not give the existence of the other second derivatives), while Lemma 4.3 of [12] assumes that Ω is the upper half-plane. In addition, the exact form of the estimates in terms of ε was not studied in either of those works.

In our proof, we shall assume that R is finite and ε_x and β are positive. The other cases are proved by similar but (sometimes) simpler arguments. We start by noting (exactly as in Lemma 10.7 of [13]) that there is a function g, defined on Ω*_R × R × R^n (where Ω*_R is the set of all x ∈ R^n with inf{|x − y| : y ∈ ∂Ω} < R, including points outside Ω), such that p · γ = g(x, z, p) if and only if b(x, z, p) = 0. Specifically, we have

  0 = b(x, z, p + γ[g(x, z, p) − p · γ]).  (3.8)

It follows that g(x, z, p) = g(x, z, q) whenever p′ = q′ and that

  |g(x, z, p)| ≤ c v′,  (3.9a)
  |g_x| ≤ θ_x(v)v,  |g_z| ≤ β v′,  |g_p| ≤ β,  (3.9b)

where the function θ_x is defined by

  θ_x(σ) = 4(1 + c)β/R + (1 + c)ε_x(σ).

Note that θ_x is ∗-decreasing.

We now let φ be a nonnegative C^∞(R^{2n+1}) function with support in the unit ball and

  ∫ φ(Y) dY = 1,

where here and in the remainder of this proof, all integrals are taken over R^{2n+1}. We also write Y = (y, w, q) with y ∈ R^n, w ∈ R, and q ∈ R^n, and we set ṽ = (1 + |p|² + s²)^{1/2} and ṽ′ = (1 + |p′|² + s²)^{1/2}. With K = 1/(6β) and ε′ a positive constant to be determined, we introduce the following abbreviations:

  x* = x + ε′sy/(θ_x(ṽ)ṽ),
  z* = z + ε′sw/(βṽ′),
  p* = p + Ksq,
  X* = (x*, z*, p*),

and we define the function g̃ by

  g̃(x, z, p, s) = ∫ g(X*) φ(Y) dY.
An elementary calculation shows that 1/(θ_x(ṽ)ṽ) ≤ R/4, so if ε′ ≤ 1, then g̃ is defined for all (x, z, p, s) ∈ Ω_{R/2} × R × R^n × R.

To proceed, we note that, even though g depends on p′ rather than p, g̃ may also depend on p · γ because γ changes with x. (This situation does not arise in [12] and it is not relevant for the argument in [13].) In particular, we have

  γ_k(x) − γ_k(x*) = −D_iγ_k(x**) ε′sy_i/(θ_x(ṽ)ṽ)

for some point x** on the line segment between x and x*, and hence

  |γ(x) − γ(x*)| ≤ ε′|s|/ṽ.  (3.10)

Using this inequality, we can estimate the first derivatives of g̃. We start by introducing two more functions h₁ and h₂, defined by

  h₁(σ) = ε′/(θ_x(σ)σ),  h₂(σ) = ε′/(βσ).

A simple computation shows that

  g̃_s(x, z, p, s) = I₁ + I₂ + I₃,

with

  I₁ = ∫ g_i(X*) y_i [h₁(ṽ) + s²h₁′(ṽ)/ṽ] φ(Y) dY,
  I₂ = ∫ g_z(X*) w [h₂(ṽ′) + s²h₂′(ṽ′)/ṽ′] φ(Y) dY,
  I₃ = ∫ g^k(X*) Kq_k φ(Y) dY.

To estimate I₁ and I₂ (and to estimate many of our later integrals), we begin by using (3.10) and noting that (p + Ksq)′_m = c_{km}(x*)(p + Ksq)_k to infer that

  |(p + Ksq)′| ≤ |p′| + |s| + (|p| + |s|)·2ε′|s|/ṽ

and hence

  (1 + |(p + Ksq)′|²)^{1/2} ≤ √2(1 + 2ε′)ṽ′.

If ε′ ≤ (√2 − 1)/2 (so that 1 + 2ε′ ≤ √2), then

  |g_x(X*)| ≤ 2θ_x(ṽ)ṽ,  (3.11a)
  |g_z(X*)| ≤ 2βṽ′.  (3.11b)

Next, we invoke (2.1b) to see that

  |h₁(ṽ) + s²h₁′(ṽ)/ṽ| ≤ 2h₁(ṽ),

and hence |I₁| ≤ 5ε′, and a similar argument gives |I₂| ≤ 5ε′. The choice of K implies that |I₃| ≤ 1/6, so, if ε′ ≤ 1/90,

  |g̃_s| ≤ 10ε′ + 1/6 ≤ 5/18.  (3.12)

We now compute and estimate the other derivatives of g̃. First,

  g̃_i(x, z, p, s) = ∫ g_i(X*) φ(Y) dY + ∫ g_z(X*) s h₂′(ṽ′) (D_i(c_{km})p_kp_m/ṽ′) φ(Y) dY,

so (3.11a) and (3.11b) imply |g̃_x| ≤ 4θ_x(ṽ)ṽ. Similarly,

  g̃_z(x, z, p, s) = ∫ g_z(X*) φ(Y) dY,

so |g̃_z| ≤ 2βṽ′. Finally, we compute g̃_p(x, z, p, s) = J₁ + J₂ + J₃ with

  J₁ = −(p/ṽ) ∫ g_i(X*) y_i s h₁′(ṽ) φ(Y) dY,
  J₂ = −(p′/ṽ′) ∫ g_z(X*) w s h₂′(ṽ′) φ(Y) dY,
  J₃ = ∫ g_p(X*) φ(Y) dY.

It's easy to check that |J₁| + |J₂| ≤ Cε′|s|/ṽ. The analysis of J₃ is more subtle. For any vector ξ, we have

  ξ_k ∫ g^k(X*) φ(Y) dY = ξ′_k ∫ g^k(X*) φ(Y) dY + (ξ · γ) ∫ g^k(X*)[γ_k(x) − γ_k(x*)] φ(Y) dY

because g_p(X*) · γ(x*) = 0. It follows that

  |g̃^kξ_k| ≤ β(1 + 4ε′)|ξ′| + 5βε′(|s|/ṽ)|ξ · γ|  (3.13)

because β ≥
1. Due to (3.12), the equation

  p · γ − g̃(x, z, p, ε^{1/2}N) = N

defines N as a function of (x, z, p) and ε. By writing

  N = p · γ − g(x, z, p) + [g(x, z, p) − g̃(x, z, p, ε^{1/2}N)],

we infer that

  |N| ≤ |p · γ − g(x, z, p)| + ½ε^{1/2}|N|,

and hence

  |N(x, z, p; ε)| ≤ (1 − ε^{1/2}/2)^{−1}(|p · γ| + |g(x, z, p)|).  (3.14)

It then follows from (3.9a) that

  |N| ≤ 2(1 + c)v  (3.15)

if ε ≤ 1/4. Since ṽ′ = ((v′)² + εN²)^{1/2} at s = ε^{1/2}N, we have ṽ′ ≤ 2(1 + c)v_ε. We then infer that

  N_p · γ = (1 − g̃_p · γ)/(1 + ε^{1/2}g̃_s),

so our estimates for the derivatives of g̃ imply (3.4a), (3.4b), (3.4c), and (3.4d) provided ε ≤ (10(1 + c))^{−2} and ε′ ≤ 1/(20β).

Differentiating (3.8) with respect to p shows that

  g_p − γ = −b_p/(b_p · γ),

where b_p is evaluated at (x, z, p′ + γg(x, z, p)) since p′ + γg = p + γ(g − p · γ), and hence, after taking the dot product of this equation with p′ + γg, we find from (3.2) that

  |g_p · p − g| ≤ β(1 + |p′|² + g(x, z, p)²)^{1/2}.

Another application of (3.9a) shows that

  |g_p · p − g| ≤ (1 + c)βv′.  (3.16)

Next, we recall that

  g̃_p(x, z, p, s) = J₁ + J₂ + J₃,  g̃_s(x, z, p, s) = I₁ + I₂ + I₃

with |J₁| + |J₂| ≤ Cε′|s|/ṽ and |I₁| + |I₂| ≤ 10ε′. In addition,

  J₃ = ∫ g_p(X*) φ(Y) dY,  I₃ = ∫ g_p(X*) · (Kq) φ(Y) dY,

so

  |g̃_p · p + g̃_s s − g̃| ≤ 10ε′|s| + ∫ |g_p(X*) · p* − g(X*)| φ(Y) dY ≤ 2(1 + c)βṽ′ + 10ε′|s|,

with g̃ and its derivatives evaluated at (x, z, p, s), by (3.16). A direct computation gives

  N − N_p · p = (g̃_p · p + ε^{1/2}N g̃_s − g̃)/(1 + ε^{1/2}g̃_s),

with g̃ and its derivatives now evaluated at (x, z, p, ε^{1/2}N), so

  |N − N_p · p| ≤ 2[2(1 + c)β(v + ε^{1/2}|N|) + 10ε′ε^{1/2}|N|].

An application of (3.15) yields (3.4e).
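Since ε^{1/2}|g̃_s| < 1 by (3.12), the map N ↦ p·γ − g̃(x, z, p, ε^{1/2}N) is a contraction in N, which is why the equation above defines N uniquely; N can even be computed by fixed-point iteration. The following sketch illustrates this with a hypothetical stand-in for g̃ (only the smallness of its Lipschitz constant in the last argument matters); it is not the construction used in the proof:

```python
import math

def g_tilde(s, p_dot_gamma):
    """Hypothetical stand-in for the mollified g~: Lipschitz in the
    smoothing argument s with a small constant (1/18 here), mimicking
    the smallness bound (3.12) on |g~_s|."""
    return 0.3 * p_dot_gamma + math.sin(s) / 18.0

def solve_N(p_dot_gamma, eps, tol=1e-12, max_iter=200):
    """Solve N = p.gamma - g~(eps^{1/2} N) by fixed-point iteration.

    Each step contracts by at most eps^{1/2}/18 < 1, so the iteration
    converges geometrically to the unique solution N."""
    N = 0.0
    for _ in range(max_iter):
        N_next = p_dot_gamma - g_tilde(math.sqrt(eps) * N, p_dot_gamma)
        if abs(N_next - N) < tol:
            return N_next
        N = N_next
    raise RuntimeError("iteration did not converge")

N = solve_N(p_dot_gamma=2.0, eps=0.25)
# the computed N satisfies the defining equation to high accuracy
assert abs(N - (2.0 - g_tilde(math.sqrt(0.25) * N, 2.0))) < 1e-10
```

The same contraction argument also yields the bound (3.14): the fixed point inherits a bound of the form |N| ≤ (1 − L)^{−1}|p·γ − g̃(·, 0)| from the Lipschitz constant L of the map.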
A simple modification of the argument leading to (3.14) shows that

  |N| ≥ (1 + ε^{1/2}/2)^{−1}|p · γ − g|.

The Cauchy–Schwarz inequality then gives

  N² ≥ (1 − ε^{1/2})(p · γ)² − ε^{−1/2}g(x, z, p)²

if ε ≤ 1. A similar argument, using (3.14), shows that

  N² ≤ (1 + 4ε^{1/2})(p · γ)² + 8ε^{−1/2}g(x, z, p)²

if ε ≤ 1/√
2. Using these inequalities along with (3.9a) gives (3.6a).Estimates (3.4b), (3.15), and the Cauchy-Schwarz inequailty imply (3.6b) pro-vided ε ≤ / (36 β (1 + c ) ).Since | W − p · ν | = ε | N || N − N p · p | , (3.6c) follows from (3.15), (3.4e), and theobservation that ε / v ≤ v ε .To estimate the second derivatives of N , we use integration by parts repeatedly.First, for the second derivatives with respect to p to obtain J = p Z g ( X ∗ ) h ′ (˜ v ) h (˜ v )˜ v ∂∂y i ( y i ϕ ( Y )) dY,J = − p ′ Z g ( X ∗ ) 1(˜ v ′ ) ∂∂w ( wϕ ( Y )) dY,J = − Ks Z g ( X ∗ ) ϕ q ( Y ) dY. Straightforward computation then shows that, for any vector ξ , we have˜ g km ξ k ξ m = J + J + J + J + J + J + J + J + J + J + J + J + J , where J , . . . , J come from differentiating J : J = −| ξ | Z g ( X ∗ ) h ′ (˜ v ) h (˜ v )˜ v ∂∂y i ( y i ϕ ( Y )) dY,J = − ( p · ξ ) Z g j ( X ∗ ) y j h ′ (˜ v ) sh ′ (˜ v ) h (˜ v )˜ v ∂∂y i ( y i ϕ ( Y )) dY,J = − ( p · ξ )( p ′ · ξ ′ ) Z g z ( X ∗ ) wh ′ (˜ v ′ ) sh ′ (˜ v ) h (˜ v )˜ v ˜ v ′ ∂∂y i ( y i ϕ ( Y )) dY,J = − ( p · ξ ) Z g p ( X ∗ ) · ξ h ′ (˜ v ) h (˜ v )˜ v ∂∂y i ( y i ϕ ( Y )) dY,J = − ( p · ξ ) Z g ( X ∗ ) (cid:20) h ′′ (˜ v ) h (˜ v )˜ v − h ′ (˜ v )( h ′ (˜ v )˜ v − h (˜ v )) h (˜ v ) ˜ v (cid:21) ∂∂y i ( y i ϕ ( Y )) dY ; J , . . . , J come from differentiating J : J = −| ξ ′ | Z g ( X ∗ ) 1(˜ v ′ ) ∂∂w ( wϕ ( Y )) dY,J = − ( p · ξ )( p ′ · ξ ′ ) Z g i ( X ∗ ) y i h ′ (˜ v ) s (˜ v ′ ) ˜ v ∂∂w ( wϕ ( Y )) dY,J = − ( p ′ · ξ ′ ) Z g z ( X ∗ ) wh ′ (˜ v ′ ) s (˜ v ′ ) ∂∂w ( wϕ ( Y )) dY,J = ( p ′ · ξ ′ ) Z g p ( X ∗ ) · ξ v ′ ) ∂∂w ( wϕ ( Y )) dY,J = 2( p ′ · ξ ′ ) Z g ( X ∗ ) 1(˜ v ′ ) ∂∂w ( wϕ ( Y )) dY ;and J , J , and J come from differentiating J : J = − p · ξKs ˜ v Z g i ( X ∗ ) y i h ′ (˜ v ) s ∂ϕ ( Y ) ∂q k ξ k dY,J = − Ks Z g z ( X ∗ ) wh ′ (˜ v ′ ) s p ′ · ξ ′ ˜ v ′ ∂ϕ ( Y ) ∂q k ξ k dY,J = − Ks Z g p ( X ∗ ) · ξ ∂ϕ ( Y ) ∂q k ξ k dY. 
To estimate J , we first integrate by parts and then rewrite the derivative withrespect to y . In this way, we see that J = −| ξ | Z ∂g ( X ∗ ) ∂y i y i h ′ (˜ v ) h (˜ v )˜ v ϕ ( Y ) dY = −| ξ | Z g i ( X ∗ ) y i sh ′ (˜ v )˜ v ϕ ( Y ) dY. Therefore | J | ≤ ε ′ | ξ | / ˜ v . In a similar fashion, we find that | J | + | J | ≤ C ( n, c , β ) ε ′ | ξ | ˜ v . It is straightforward to estimate J , J , and J . The resultant inequality is | J | + | J | + | J | ≤ C ( n, c , β ) ε ′ | ξ | ˜ v . To estimate J , we use the decomposition J = J a + J b with J a = ( p · ξ ) ξ ′ k Z g k ( X ∗ ) h ′ (˜ v )˜ vh (˜ v ) ∂∂y i ( y i ϕ ( Y )) dYJ b = ( p · ξ )( ξ · γ ) Z g k ( X ∗ )[ γ k ( x ) − γ k ( x ∗ )] h ′ (˜ v )˜ vh (˜ v ) ∂∂y i ( y i ϕ ( Y )) dY, recalling that g p ( X ∗ ) · γ ( x ∗ ) = 0. We then have | J a | ≤ C ( n, β ) | ξ || ξ ′ | ˜ v RADIENT ESTIMATES 13 and, by virtue of (3.10), | J b | ≤ C ( n, β ) ε ′ | ξ || ξ · γ | ˜ v . It follows that | J | ≤ C ( n, β ) | ξ | ˜ v ( | ξ ′ | + ε ′ | ξ · γ | ) . The estimates for J and J follow the same idea as for J , yielding | J | + | J | ≤ C ( n, β ) ε ′ | ξ || ξ ′ | ˜ v ′ . Straightforward calculation gives the following estimates for J and J : | J | ≤ C ( n, β ) ε ′ | ξ || ξ ′ | ˜ v , | J | ≤ C ( n, β ) | ξ ′ | ˜ v ′ . By using the same decomposition as for J , we find that | J | ≤ C ( n, β ) (cid:18) | ξ ′ | ˜ v ′ + ε ′ | ξ ′ || ξ · γ | ˜ v (cid:19) . A simple integration by parts shows that J = J and hence | J | ≤ C ( n, β ) | ξ | ˜ v ( | ξ ′ | + ε ′ | ξ · γ | ) . Integration by parts also shows that J = J , so J ≤ C ( n, β ) | ξ ′ | ˜ v ′ . The estimate for J is somewhat more complex. First, we write J = J a + J b with J a = − Ks ξ ′ m Z g m ( X ∗ ) ∂ϕ ( Y ) ∂q k ξ k dY,J b = − Ks ( ξ · γ ) Z g m ( X ∗ )[ γ m ( x ) − γ m ( x ∗ )] ∂ϕ ( Y ) ∂q k ξ k dY. Then we write J a = J c + J d with J c = − Ks ξ ′ m ξ ′ k Z g m ( X ∗ ) ∂ϕ ( Y ) ∂q k dY,J d = − Ks ξ ′ m ( ξ · γ ) Z g m ( X ∗ ) ∂ϕ ( Y ) ∂q k γ k ( x ) dY. 
It follows that | J c | ≤ C ( n, β ) | ξ ′ | / | s | and an integration by parts yields J d = − Ks ˜ v ξ ′ m ( ξ · γ ) Z g k ( X ∗ )[ γ k ( x ) − γ k ( x ∗ )] ∂ϕ ( Y ) ∂q m dY. It follows that | J d | ≤ C ( n, β ) ε ′ | ξ ′ || ξ · γ | ˜ v . Arguing as before, we also have | J b | ≤ C ( n, β ) ε | ξ || ξ · γ | ˜ v and therefore | ˜ g km ξ k ξ m | ≤ C ( n, c , β ) | (cid:18) ε ′ | ξ | ˜ v + | ξ || ξ ′ | ˜ v + ε ′ | ξ || ξ · γ | ˜ v + | ξ ′ | | s | (cid:19) . By using the Cauchy-Schwarz inequality along with the inequalities ε ′ ≤
1, and | s | ≤ ˜ v ′ ≤ ˜ v , and v ≤ ˜ v , we conclude that(3.17) | ˜ g km ξ k ξ m | ≤ C ( n, β ) (cid:18) | ξ ′ | ε ′ | s | + ε ′ ( ξ · γ ) v (cid:19) . To estimate the derivative g sp , we integrate I , I , and I by parts and thendifferentiate the resultant integrals to obtain˜ g ks ξ k = I + I + I + I + I + I + I + I + I + I + I for any vector ξ with I = − p · ξ ˜ v Z g j ( X ∗ ) y j h ′ (˜ v ) (cid:20) − s h ′ (˜ v ) h (˜ v )˜ v (cid:21) ∂∂y i ( y i ϕ ( Y )) dY,I = − p ′ · ξ ′ ˜ v ′ Z g z ( X ∗ ) wh ′ (˜ v ′ ) (cid:20) − s h ′ (˜ v ) h (˜ v )˜ v (cid:21) ∂∂y i ( y i ϕ ( Y )) dY,I = − K Z g k ( X ∗ ) ξ k (cid:20) − s h ′ (˜ v ) h (˜ v )˜ v (cid:21) ∂∂y i ( y i ϕ ( Y )) dY,I = − p · ξ ˜ v Z g ( X ∗ ) s h ′′ (˜ v ) h (˜ v )˜ v − ( h ′ (˜ v )) ˜ v − h ′ (˜ v ) h (˜ v ) h (˜ v ) ˜ v ∂∂y i ( y i ϕ ( Y )) dY ;and I = − p ′ · ξ ′ ˜ v ′ Z g j ( X ∗ ) y j h ′ (˜ v ) (cid:20) s (˜ v ′ ) (cid:21) ∂∂w ( wϕ ( Y )) dY,I = − p ′ · ξ ′ ˜ v ′ Z g z ( X ∗ ) wh ′ (˜ v ) (cid:20) s (˜ v ′ ) (cid:21) ∂∂w ( wϕ ( Y )) dY,I = − s Z g k ( X ∗ ) ξ k (cid:20) s (˜ v ′ ) (cid:21) ∂∂w ( y i ϕ ( Y )) dY,I = 2 s ( p ′ · ξ ′ )(˜ v ′ ) Z g ( X ∗ ) ∂∂w ( wϕ ( Y )) dY ;and I = − p · ξ ˜ v Z g j ( X ∗ ) y j h ′ (˜ v ) ∂∂q k ( q k ϕ ( Y )) dY,I = − p ′ · ξ ′ R g z ( X ∗ ) wh ′ (˜ v ) ∂∂q k ( q k ϕ ( Y )) dY,I = − s Z g m ( X ∗ ) ξ m ∂∂q k ( q k ϕ ( Y )) dY. The integrals I , I , I , I , I , I , I , and I are estimated directly, and theintegrals I , I , and I are estimated using the same decomposition as for J . RADIENT ESTIMATES 15
We therefore obtain | ˜ g ks ξ k | ≤ C (cid:18) | ξ ′ | s + | ξ · γ | v (cid:19) if ε ≤ p are straightforward. To estimate g px ,we compute, using integration by parts˜ g i = − sh (˜ v ) Z g ( X ∗ ) ∂∂y i ϕ ( Y ) dY + 1(˜ v ′ ) Z g ( X ∗ ) D i ( c km ) p k p m ∂∂w ( wϕ ( Y )) dY, and hence ˜ g ki ξ k η i = J + J + J + J + J + J + J + J + J , with J = h ′ (˜ v ) p · ξsh (˜ v ) ˜ v Z g ( X ∗ ) η · ∂ϕ ( Y ) ∂y dY,J = − h ′ (˜ v ) p · ξh (˜ v )˜ v Z g j ( X ∗ ) y j η · ∂ϕ ( Y ) ∂y dY,J = − h ′ (˜ v ′ ) p ′ · ξ ′ h (˜ v )˜ v ′ Z g z ( X ∗ ) η · ∂ϕ ( Y ∗ ) ∂y dY,J = − sh (˜ v ) Z g k ( Y ∗ ) ξ k η · ∂ϕ ( Y ) ∂y dY ;and J = − p · ξ (˜ v ′ ) ˜ v Z g ( X ∗ ) η i D i ( c km ) p k p m ∂∂w ( wϕ ( Y )) dY,J = sp · ξ (˜ v ′ ) ˜ v Z g j ( X ∗ ) y j h ′ (˜ v ) η i D i ( c km ) p k p m ∂∂w ( wϕ ( Y )) dY,J = sp ′ · ξ ′ (˜ v ′ ) Z g z ( X ∗ ) h ′ (˜ v ′ ) η i D i ( c km ) p k p m ∂∂w ( wϕ ( Y )) dY,J = 1(˜ v ′ ) Z g r ( X ∗ ) ξ r η i D i ( c km ) p k p m ∂∂w ( wϕ ( Y ∗ )) dY,J = − v ′ ) Z g ( X ∗ ) η i D i γ k p · γξ k ∂∂w ( wϕ ( Y ∗ )) dY,J = − v ′ ) Z g ( X ∗ ) η i D i γ m ) p m ξ · γ ∂∂w ( wϕ ( Y ∗ )) dY. After integrating J by parts, direct estimation shows that | ˜ g ki ξ k η i | ≤ Cθ x (˜ v )˜ v s | η || ξ | . Similar arguments give | ˜ g kz ξ k | ≤ Cε ′ β ˜ v ′ (cid:18) | ξ ′ | s + | ξ · γ | v (cid:19) . The remaining second derivatives are estimated using similar arguments. Afterintegrating I , I , and I by parts, we find that | ˜ g ss | ≤ Cs , | ˜ g sx | ≤ Cθ x (˜ v )˜ v s , | ˜ g sz | ≤ Cβ ˜ v ′ s . In the same vein, we have | ˜ g xx | ≤ Cθ x (˜ v ) ˜ v ε ′ | s | , | ˜ g xz | ≤ Cβ θ x (˜ v )˜ v ˜ v ′ ε ′ | s | , | ˜ g zz | ≤ Cβ (˜ v ′ ) ε ′ | s | . 
Here, the constant C in the estimate of g̃_xx also depends on Ω, specifically, on the C³ nature of ∂Ω.

From our estimates for the derivatives of g̃, we obtain estimates for the second derivatives of N. First, by direct computation,

  N_{km} = −(g̃_{km} + ε^{1/2}g̃_{ks}N_m + ε^{1/2}g̃_{ms}N_k + εg̃_{ss}N_kN_m)/(1 + ε^{1/2}g̃_s),

with g̃ and its derivatives now evaluated at (x, z, p, ε^{1/2}N). Recalling also (3.15), we infer that there is a positive constant c₀, determined only by β, c, and n, such that

  |N N_{km}ξ_kξ_m| ≤ c₀(ε′ε^{−1/2}|ξ′|² + ε′(ξ · γ)²).

If we take

  ε′ = min{1/β, 1/c₀},

we have (3.7a) provided c_1 ≥ c₀/ε′. Next, we recall that

  N_kN_mξ_kξ_m = (ξ · γ − g̃_p · ξ)²/(1 + ε^{1/2}g̃_s)².

Since |g̃_s| ≤ 5/18, it follows that

  1/(1 + ε^{1/2}g̃_s)² ≥ 1 − ε^{1/2}.

Also

  (ξ · γ − g̃_p · ξ)² ≥ (ξ · γ)² − 2|ξ · γ||g̃_p · ξ|.

The Cauchy–Schwarz inequality and (3.13) then give

  N_kN_mξ_kξ_m ≥ (1 − ε^{1/2})[(1 − β²ε^{1/2})(ξ · γ)² − β²|ξ′|²] ≥ (1 − 2β²ε^{1/2})(ξ · γ)² − β²|ξ′|².

In combination with (3.7a), this inequality implies (3.7b). Estimates (3.7c), (3.7d), (3.7e), (3.7f), and (3.7g) are proved by similar arguments. □

4. Some preliminary calculations
The basic idea in the proof of the gradient estimate is to examine a quadratic function of the gradient of the solution. As first seen in [9], the function is c^{km}D_kuD_mu + εN(x, u, Du; ε)² for a suitably small ε, and this function has been used in several special circumstances as well. Here, we want to introduce a suitable change of variables to more closely mimic the gradient estimates in [19] (and subsequently in Chapter 15 of [3]). The combination of the more complicated quadratic function and the more general structure conditions leads to messier calculations, so we start here with some basic calculations which will be used in the next few sections to prove our gradient bound.

We begin by introducing an increasing C³ function Ψ, defined on some interval which includes the range of u, and we write ψ for the inverse to Ψ. To simplify the writing, we also use two standard bits of notation. We set ū = Ψ ∘ u and ω = ψ″/(ψ′)².

We also define w by

  w = c^{km}D_m ū D_k ū + εN(x, u, Du)²/(ψ′)²,  (4.1)

we use the vector ν̄ defined by

  ν̄ = ν/ψ′,  (4.2)

and we set

  S = a^{ij}[c^{km} + ε(N^kN^m + N N^{km})] D_{ik}ū D_{jm}ū.  (4.3)

It is helpful to notice from (3.7b) that

(4.4)  S ≥
12 [ a ij c km D ik ¯ uD jm ¯ u + εa ij γ k γ m D ik ¯ uD jm ¯ u ] . Our first step is to compute the gradient of w . A simple calculation gives(4.5) D i w = 2 D ik ¯ u ¯ ν k + D i ( c km ) D k ¯ uD m ¯ u + ωI D i u + 2 ε ( ψ ′ ) N [ N z D i u + N i ] . with(4.6) I = 2 εN ( N k D k u − N )( ψ ′ ) . Because the exact expression for the second derivatives is quite involved, we jumpdirectly to the main expression of interest, which is a ij D ij w . A long, tedious, butstandard calculation shows that a ij D ij w = 2 a ij D ijk ¯ u ¯ ν k + 2 S + ω ′ ψ ′ I E + ( ω A + ωB + C ) E + ω ( S + I a ij D ij u ) + S + 2 ε ( ψ ′ ) N N z a ij D ij ¯ u, (4.7) with I given by (4.6), A = 2 εN N km D k uD m u + 2 ε ( N − N k D k u ) ,B = − ε ( ψ ′ ) E ( N k D k u − N ) a ij D i uN j + 4 ε ( ψ ′ ) ( N k D k u − N ) N z + 4 ε ( ψ ′ ) E N N ki D k ua ij D j u + 4 ε ( ψ ′ ) N N kz D k u,C = 2 ε ( ψ ′ ) E a ij ( N i N j + N N ij ) + 4 ε ( ψ ′ ) E [ N i N z + N N iz ] a ij D j u + 2 ε [( N z ) + N N zz ]( ψ ′ ) + a ij D ij ( c km ) D k ¯ uD m ¯ u E ,S = 4 εψ ′ ( N k D k u − N ) N m a ij D im ¯ uD j u + 4 εN N km a ij D ik ¯ uD j uD m u,S = 2 a ij D i ( c km ) D jm ¯ uD k ¯ u + 4 εψ ′ ( N z N k + N N kz ) a ij D ik ¯ uD j u + 4 εψ ′ ( N k N i + N N ki ) a ij D jk ¯ u. We now estimate these terms. First, we have A ≥ εN N km D k uD m u, so (3.6a) and (3.7a) imply that A ≥ − cw , (4.8a)where, here and in the remainder of this section, we use c to denote any constantdetermined only by β , c , n , and Ω. From (3.4c), (3.4e), (3.6a), (3.7c), and (3.7d),we conclude that B ≥ − c " θ x ( v ) v (cid:18) Λ E (cid:19) / + β ε / w . (4.8b)We use (3.4b), (3.4c), (3.4d), (3.6a), (3.7c), (3.7e), (3.7f), and (3.7g) to concludethat C ≥ − cw " β (1 + ε x ( v ) v ) (cid:18) Λ E (cid:19) / + (1 + ε x ( v ) v ) Λ E + β ε + Λ E ε . (4.8c)The Cauchy-Schwarz inequality, (3.4e), and (3.7a) imply that S ≥ − cε / (Λ w ) / S / . 
(4.8d)Because D i ( c km ) = − γ m D i γ k − γ k D i γ m , we infer from the Cauchy inequality,(3.7a), (3.7b), (3.4c), (3.7c), and (3.6a) that S ≥ − cw / (cid:16) [ ε − / + ε x ( v ) v ]Λ + ε / β E (cid:17) / S / . (4.8e)Finally, (3.4e) and (3.15) imply that | I | ≤ cεvv ε ( ψ ′ ) , (4.8f) RADIENT ESTIMATES 19 and hence | I | ≤ cεvw / ψ ′ , (4.8g) | I | ≤ cε / w . (4.8h)Next, we note (see (15.17) from [3]) that the differential equation (0.1a) is equiv-alent to(4.9) ψ ′ a ij ( x, u, Du ) D ij ¯ u + a ( x, u, Du ) + ω E ( x, u, Du ) = 0 . If we apply the operator ν k D k to this equation and then add D ¯ u · ¯ ν [ ω ( r + 1) + s ]times (4.9) for functions r and s to be further specified, we obtain (compare withequation (15.22) of [3]):0 = [ a ij D ijk ¯ u + κ i D ik ¯ u ]¯ ν k + ω ′ ψ ′ E D ¯ u · ν + ( ω A + ωB + C ) E D ¯ u · ¯ ν + ωS + S , (4.10)with κ i = ψ ′ a jk,i D jk ¯ u + a i + ω E i ,A = ( δ + r ) EE ,B = ( δ + r ) a + ( δ + s ) EE ,C = ( δ + s ) a E S = [ ψ ′ ( δ + r + 1) a ij D ij ¯ u ] D ¯ u · ¯ ν ,S = [ ψ ′ ( δ + s ) a ij D ij ¯ u ] D ¯ u · ¯ ν , and the differential operators δ and δ are defined by δ f ( x, z, p ) = p · f p ( x, z, p ) , δ f ( x, z, p ) = f z ( x, z, p ) + f k ( x, z, p ) ν k p · ν . We defer the estimates for these terms to later sections because these estimatesdepend on the structure conditions for the differential equation.By also calculating κ i D i w , we find that a ij D ij w + κ i D i w = D ¯ u · ¯ ν E (cid:20) ω ′ ψ ′ ( I −
1) + ω A + ωB + C (cid:21) + 2 S + ωS + S + ωI a ij D ij u, (4.11) with I = I D ¯ u · ¯ ν ,A = ( − I ) A + A D ¯ u · ¯ ν ,B = − B + B D ¯ u · ¯ ν + I B ′ − E i D i γ k D k uDu · γDu · ν E + 2 εN N z ADu · ν + 2 εN N i E i Du · ν E ,C = − C + C D ¯ u · ¯ ν + 2 εN a i N i Du · ν E − a i D i γ k D k uDu · γDu · ν E + 2 B ′ εN N z Du · ν ,S = S + ( − I ) S ,S = S − S + 2 εN N z ( δ + r + 1) a ij D ij ¯ uψ ′ − a jk,i D jk ¯ uD i γ m D m ¯ uDu · γ − εN N i a jk,i D jk ¯ uψ ′ , and B ′ = ( δ + r ) a E . The remainder of this paper is concerned with deriving a gradient bound undervarious hypotheses modeled on those of Serrin [19]. With A ∞ = lim sup | p |→∞ sup ( x,z ) ∈ Ω × R A ( x, z, p ) , (4.12a) B ∞ = lim sup | p |→∞ sup ( x,z ) ∈ Ω × R B ( x, z, p ) , (4.12b) C ∞ = lim sup | p |→∞ sup ( x,z ) ∈ Ω × R C ( x, z, p ) , (4.12c)and ν replaced by Du/ | Du | , Serrin derived a gradient bound in four cases: A ∞ ≤ C ∞ ≤ B ∞ ≤ −√ A ∞ C ∞ , and the oscillation of u is sufficiently small. These fourcases are exactly those for which the differential equation dydt = A ∞ y + B ∞ y + C ∞ + η has a solution on the range of u for η a sufficiently small positive constant. Unfortu-nately, there are some important difficulties in trying to translate the full argumentin [19] to our situation. First Serrin uses a decomposition a ij = a ij ∗ + p i c j + p j c i , with [ a ij ∗ ] a uniformly elliptic matrix and c a convenient vector-valued function. Forthe oblique derivative problem, the corresponding decomposition would be a ij = a ij ∗ + ν i c j + ν j c i , and this decomposition depends in a complicated way on the parameter ε and onthe function b . In addition, our control on the term A is not good enough tohandle the case A ∞ = 0, for example. On the other hand, we can consider severalof the critical examples from [19]. Specifically, we shall consider two importantcases: C ∞ ≤ RADIENT ESTIMATES 21 possible that B ∞ and C ∞ depend on ε . 
We will therefore need to take this fact into account.

5. Global gradient estimates
We now turn to our gradient estimates. First, we prove them in a neighborhood of ∂Ω, assuming that a bound is already known away from the boundary. Such bounds are well-known (see, for example, Theorem 3 from [19] or Theorem 15.3 from [3]). In this case, our estimate is quite straightforward under suitable structure conditions. To state our result simply, we define

(5.1) B′∞ = lim sup_{|p|→∞} sup_{(x,z) ∈ Ω × R} |B′|, A′∞ = lim sup_{|p|→∞} sup_{(x,z) ∈ Ω × R} |A|.

For a positive constant M, we also write Γ(M) for the set of all (x, z, p) ∈ Ω × R × R^n with |p| ≤ M. Our first estimate assumes that C∞ ≤ 0.

Theorem 5.1.
Let u ∈ C (Ω) be a solution of (0.1) with ∂ Ω ∈ C and b satisfyingthe hypotheses of Theorem 3.1 for some ∗ -decreasing function ε x . Suppose that thereare functions r and s , nonnegative constants µ and M , two decreasing functions ˜ µ and (for each ε ∈ (0 , ε ) ) ˜ µ ε with lim σ →∞ ˜ µ ( σ ) = 0 , (5.2a) lim σ →∞ ˜ µ ε ( σ ) = 0 , (5.2b) and a ∗ -decreasing function ˜ µ ∗ such that ( δ + r + 1) a ij η ij ≤ ˜ µ ( v ) v E / ( a ij δ km η ij η jm ) / , (5.3a) ( δ + s ) a ij η ij ≤ ˜ µ ε ( v ) v E / ( a ij δ km η ij η jm ) / , (5.3b) | a ijp η ij | ≤ ˜ µ ∗ ( v ) v E / ( a ij δ km η ij η jm ) / (5.3c) on Γ( M ) for all matrices [ η ij ] and that | a | ≤ µ E , (5.4a) Λ ≤ ˜ µ ∗ ( v ) E , (5.4b) | a p | ≤ ˜ µ ∗ ( v ) E (5.4c) on Γ( M ) . Suppose also that B ′∞ is bounded uniformly with respect to ε and that C ∞ ≤ for all ε > . If ˜ µ ∗ ( v ) δ b ≤ ˜ µ ( v ) b p · γ, (5.5a) ˜ µ ∗ ( v ) δ b ≤ ˜ µ ( v ) b p · γ (5.5b) on Σ ( τ ) , and if (5.6) lim σ →∞ (1 + ε x ( σ ) σ )˜ µ ∗ ( s ) = 0 , then there is a constant M , determined only by β , β , c , M , τ , n , ˜ µ ∗ , ˜ µ , Ω , sup { d ≥ R / } | Du | , the oscillation of u , and the limit behavior in (4.12) , (5.1) , (5.2) ,and (5.6) , such that | Du | ≤ M in Ω . Proof.
First, we assume additionally that u ∈ C (Ω), and we note from the differ-ential equation for u along with (4.8h) that ωI a ij D ij u = − ωI a ≥ − cωε / w | a | . (Again, we use c to denote any constant determined only by Ω, β , and c .) From(4.11), (5.3), and (5.4a), we conclude that a ij D ij w + κ i D i w ≥ S + D ¯ u · ¯ ν (cid:18) ω ′ ψ ′ ( I −
1) + ω A + ωB + C (cid:19) , with A = A − c (cid:18) ε Λ E + ˜ µ ( v ) ε (cid:19) ,B = B − cε | a | E ,C = C − cε (cid:20) (1 + ε x ( v ) v ) Λ E + β ε + ˜ µ ε ( v ) + ˜ µ ∗ ( v ) (1 + ε x ( v ) v ) (cid:21) . We now observe that( δ + r ) E = ( δ + r + 1) a ij p i p j + E , so (5.3a) with η ij = p i p j implies that A ∞ = A ′∞ = 1 . By invoking (3.4c), (3.4d), (3.15), (4.8), (5.4) and (5.6), we infer that there areconstants A , ∞ and c , determined only by β , c , n , and Ω, such thatlim inf | p |→∞ inf ( x,z ) ∈ Ω × R A ≥ − A , ∞ , lim inf | p |→∞ inf ( x,z ) ∈ Ω × R B ≥ − B ∞ − cµ ε / , lim inf | p |→∞ inf ( x,z ) ∈ Ω × R C ≥ − cβ ε. We now define the function χ by χ = ω ◦ ψ − and write B ∞ for the uniformupper bound on B ∞ . Then, as shown on p. 595 of [19], there is a positive constant η , determined only by A , ∞ and B ∞ , such that the differential equation χ ′ ( z ) = A , ∞ χ + B ∞ χ + η has a solution on the range of u . With this choice for χ (which also gives thefunction ψ ), we conclude that there are positive constants M and ε such that w ≥ M and ε < ε imply that(5.7) a ij D ij w + κ i D i w ≥ S + 12 ηD ¯ u · ¯ ν E . We now fix ε ∈ (0 , ε ).For k a positive constant to be chosen, we now introduce the function w = w + k d Z w µ ∗ ( √ σ ) dσ. RADIENT ESTIMATES 23
Straightforward calculation shows that a ij D ij w + κ i D i w = (cid:18) k d ˜ µ ∗ ( √ w ) (cid:19) ( a ij D ij w + κ i D i w )+ k Z w µ ∗ ( √ σ ) dσ ( a ij D ij d + κ i D i d ) − k ˜ µ ′∗ ( √ w ) d √ w ˜ µ ∗ ( √ w ) a ij D i w D j w + 2 k ˜ µ ∗ ( √ w ) a ij D i dD j w . Since ˜ µ ′∗ ≤
0, we infer that a ij D ij w + κ i D i w ≥ S + 12 ηD ¯ u · ¯ ν E + C + S with C = k Z w µ ∗ ( √ σ ) dσ ( a ij D ij d + ω E i D i d + a i D i d ) − k ˜ µ ∗ ( √ w ) ( a ij D i dD j ( c km ) D k ¯ uD m ¯ u ) − k ˜ µ ∗ ( √ w ) (cid:18)(cid:20) ωI + 2 ε ( ψ ′ ) N N z (cid:21) a ij D i dD j u + 2 ε ( ψ ′ ) N a ij D i dN j (cid:19) and S = k Z w µ ∗ ( √ σ ) dσψ ′ a jk,i D jk ¯ uD i d + 2 k ˜ µ ∗ ( √ w ) a ij D i dD jk ¯ uν k wherever w ≥ M . Since ˜ µ ∗ is decreasing, it follows that Z w µ ∗ ( √ σ ) dσ ≤ w ˜ µ ∗ ( √ w ) . In addition, because ˜ µ ∗ is ∗ -decreasing, it follows that1˜ µ ∗ ( √ σ ) ≥ √ σ √ w ˜ µ ∗ ( √ w )for 0 ≤ σ ≤ w and hence Z w µ ∗ ( √ σ ) dσ ≥ w µ ∗ ( √ w ) . Moreover, because ˜ µ ∗ is ∗ -decreasing, we conclude that the fraction˜ µ ∗ ( √ w )˜ µ ∗ ( v )is bounded from above and below by positive constants, determined only by A ′∞ , B ′∞ , β , c , n , ε , and ˜ µ ∗ .Our estimate of C uses an estimate of E p , which we now derive. For any vector ξ , we have E i ξ i = a jk,i p j p k ξ i + 2 a ik p k ξ i . Then (5.3c) implies that a jk,i p j p k ξ i ≤ ˜ µ ∗ ( v ) v E / ( a ij δ km p i p k p j p m ) / | ξ | = ˜ µ ∗ ( v ) v E | p || ξ | . The Cauchy-Schwarz inequality implies that2 a ij p k ξ ≤ E / ( a ij ξ i ξ j ) / ≤ E / Λ / | ξ | . From (5.4b) and the inequality | p | ≤ v , it follows that E i ξ i ≤ µ ∗ ( v ) E | ξ | , and hence(5.8) | E p | ≤ µ ∗ ( v ) E . We now apply (5.4b), (5.4c), and (5.8) to conclude that C ≥ − D ¯ u · ¯ ν E ( ck ) , and we apply (5.3c) and (5.4b) to conclude that S ≥ − ck ( D ¯ u · ¯ ν ) / E / S / . (Here, c is determined by all the quantities mentioned in the conclusion of thistheorem.) It follows that(5.9) a ij D ij w + κ i D i w ≥ (cid:18) η (cid:20) k d Z w µ ( √ σ ) dσ (cid:21) − c ( k + k ) (cid:19) D ¯ u · ¯ ν E + (cid:18) k d Z w µ ( √ σ ) dσ (cid:19) S . 
By taking k sufficiently small, we conclude that a ij D ij w + κ i D i w ≥ E , the subset of Ω R on which w ≥ M .We also remove the assumption u ∈ C by observing (see also equation (15.13)in [3]) that w is a weak solution of the differential inequality D i ( a ij D j w ) + [ κ i − D j ( a ij )] D i w ≥ E . It then follows from Theorem 8.1 in [3] implies that w attains its maximumover E on ∂E , which consists of three subsets: E = (cid:26) x ∈ ∂E : d ( x ) = R (cid:27) ,E = { x ∈ ∂E : w ( x ) = M } ,E = ∂E ∩ ∂ Ω . With M = sup Ω R / w , it’s straightforward to check that w ≤ M + k R Z M µ ∗ ( √ σ ) dσ on E , and we have an upper bound for M . Moreover,(5.10) w ≤ M + k R Z M µ ∗ ( √ σ ) dσ on E . RADIENT ESTIMATES 25 On E , we compute b i D i w = − ( ωδ b + δ b ) w + b i D i ( c km ) D k ¯ uD m ¯ u + k Z w µ ∗ ( √ σ ) dσb p · γ. We then invoke (5.5), (5.6), and the first inequality of (3.2) to conclude that thereis a positive constant M such that b i D i w > E where w ≥ M . Since w = w on E , it follows that w ≤ max { M , M } on E . It follows that w ≤ c on E , and we have the upper bound (5.10) onΩ R / \ E as well. Since w ≥ w , we conclude that w ≤ c on Ω R / , so | Du | ≤ c on Ω R / . In combination with our assumed upper bound for | Du | on Ω \ Ω R / ,we obtain the desired estimate. (cid:3) We remark that, although the structure condition (5.4b) does not appear inthe global estimates of Section 3 from [19] or Section 15.2 from [3] (except in theexample of uniformly elliptic equations, where a stronger assumption is made), it issatisfied by many standard equations. It also appears in the local gradient estimatesof Chapter 15 from [3], specifically in condition (15.47) there. 
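As a quick illustration of this remark (a sketch, assuming, as the notation elsewhere in the paper suggests, that E = a^{ij} p_i p_j, that Λ denotes the maximum eigenvalue of [a^{ij}], and that v = (1 + |p|²)^{1/2}), condition (5.4b) with ˜µ∗(v) = K/v can be checked directly for the false mean curvature operator a^{ij} = δ^{ij} + p_i p_j from the introduction:

```latex
% Worked check of (5.4b) for a^{ij} = \delta^{ij} + p_i p_j,
% assuming E = a^{ij} p_i p_j and \Lambda = max eigenvalue of [a^{ij}].
\[
E = a^{ij} p_i p_j = |p|^2\bigl(1 + |p|^2\bigr), \qquad
\Lambda = 1 + |p|^2,
\]
\[
\frac{\Lambda}{E} = \frac{1}{|p|^2} \le \frac{K}{v} = \tilde\mu_*(v)
\quad \text{for } |p| \text{ sufficiently large},
\]
```

so (5.4b) holds for large |p|, and hence, after increasing K, on all of Γ(M).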
We also remark that the argument given here does not rely on the precise form of the interior gradient estimate, unlike the argument in [15] and [7]. As we shall see in our parabolic examples, condition (5.3a) is quite restrictive. In order to relax it, we need to strengthen the condition on b z just a little. Even though this improvement will not be used for our elliptic examples, we include it here for completeness.

Theorem 5.2.
Let u ∈ C (Ω) be a solution of (0.1) with ∂ Ω ∈ C and b satisfyingthe hypotheses of Theorem 3.1 for some ∗ -decreasing function ε x . Suppose that thereare functions r and s , nonnegative constants M and µ , a decreasing function ˜ µ ε (for each ε ∈ (0 , ε ) ) satisfying (5.2b) , and a ∗ -decreasing function ˜ µ ∗ such that (5.3b) , (5.3c) , and (5.11) ( δ + r + 1) a ij η ij ≤ µ v E / ( a ij δ km η ij η jm ) / , hold on Γ( M ) for all matrices [ η ij ] and that (5.4) holds on Γ( M ) . Suppose alsothat B ′∞ is bounded uniformly with respect to ε and that C ∞ ≤ for all ε > . Ifthere is a decreasing function ˜ µ satisfying (5.2a) such that (5.5) holds on Σ ( τ ) and (5.6) is satisfied and if (5.12) lim | p |→∞ b z ( x, z, p ) b p ( x, z, p ) · γ = 0 , then there is a constant M , determined only by β , c , M , τ , ˜ µ ∗ , ˜ µ , Ω , theoscillation of u , sup { d ≥ R / } | Du | , the limit behavior in (4.12) , (5.1) , (5.6) , (5.2b) ,and (5.12) such that | Du | ≤ M in Ω .Proof. By using (5.12) in the proof of Theorem 3.1, we conclude that there isa decreasing function ε z , determined only by the limit behavior in (5.12) with lim σ →∞ ε z ( σ ) = 0 such that | N z | ≤ c ) ε z ( v ε ) , | N N pz · ξ | ≤ c ε z ( v ε ) (cid:18) ε / | ξ ′ | + | ξ · γ | (cid:19) v ε for any vector ξ , | N N zz | ≤ c βε z ( v ε ) v ε , | N N xz | ≤ c ε / ε z ( v ε ) (cid:18) vR + ε x ( v ) v (cid:19) . With these improved estimates, we see from (4.11) and (5.3) that a ij D ij w + κ i D i w ≥ S + D ¯ u · ¯ ν (cid:18) ω ′ ψ ′ ( I −
1) + ω A ′ + ωB ′ + C ′ (cid:19) , with A ′ = A − c (cid:18) ε Λ E + µ ε (cid:19) ,B ′ = B ,C ′ = C − cε (cid:20) (1 + ε x ( v ) v ) Λ E + β ε z ( v ε ) ε + ˜ µ ε ( v ) + ˜ µ ∗ ( v ) (1 + ε x ( v ) v ) (cid:21) . From the proof of Theorem 5.1, we see that there is a constant A ∗ , ∞ , determinedby µ and the same quantities as for A , ∞ such thatlim inf | p |→∞ inf ( x,z ) ∈ Ω × R A ′ ≥ − A ∗ , ∞ ε , lim inf | p |→∞ inf ( x,z ) ∈ Ω × R B ≥ − B ∞ − cε / , lim inf | p |→∞ inf ( x,z ) ∈ Ω × R C ′ ≥ . Just as before, there is a positive constant η , determined only by A ∗ , ∞ and B ∞ ,such that the differential equation Y ′ = A ∗ , ∞ Y + B ∞ Y + η has a solution on the range of u . If we now take χ = Y /ε , we have χ ′ = A ∗ , ∞ ε χ + B ∞ χ + ηε, and therefore there are positive constants M and ε such that the inequality ε < ε implies that (5.7) holds wherever w ≥ M . The remainder of the proof is identicalto that of Theorem 5.1. (cid:3) In fact, we can weaken condition (5.12) slightly. It suffices that β be sufficientlysmall.When C ∞ <
0, we can take ψ(s) = s in the proof of Theorem 5.1. Hence, a number of hypotheses can be removed or weakened in this case. For brevity, we just state the result.
Theorem 5.3.
Let u ∈ C (Ω) be a solution of (0.1) with ∂ Ω ∈ C and b satisfyingthe hypotheses of Theorem 3.1 for some ∗ -decreasing function ε x . Suppose thatthere are functions r and s , a ∗ -decreasing function ˜ µ ∗ , a decreasing function ˜ µ ε satisfying (5.2b) , and nonnegative constants M and µ such that (5.3b) and (5.3c) are satisfied on Γ( M ) for all matrices [ η ij ] and that (5.4b) and (5.4c) hold on Γ( M ) . Suppose also that B ′∞ is uniformly bounded for ε ∈ (0 , ε ] , and that C ∞ < .If (5.13) b z ≤ ε x ( v ) vb p · γ on Σ ( τ ) , and if (5.6) holds, then there is a constant M , determined only by β , c , M , n , ˜ µ ∗ , µ , ˜ µ , τ , Ω , sup { d ≥ R / } | Du | , and the limit behavior in (5.2b) , (4.12c) , and (5.6) , such that | Du | ≤ M in Ω . When the oscillation of u is sufficiently small, then we can derive a gradientbound as long as the quantities A ∞ , A ′∞ , B ∞ , B ′∞ , and C ∞ are bounded fromabove. In fact, the upper bounds may depend on ε . Theorem 5.4.
Let u ∈ C (Ω) be a solution of (0.1) with ∂ Ω ∈ C and b satisfyingthe hypotheses of Theorem 3.1 for some ∗ -decreasing function ε x . Set ε = ε / ,and suppose that there are functions r and s , a ∗ -decreasing function ˜ µ ∗ , and non-negative constants M and µ such that conditions (5.3c) , (5.11) , and (5.14) ( δ + s ) a ij η ij ≤ µ v E / ( a ij δ km η ik η jm ) / hold on Γ( M ) for all matrices [ η ij ] and that conditions (5.4) hold on Γ( M ) . Sup-pose also that B ′∞ and C ∞ are finite for each ε ∈ (0 , ε ) . If (5.15) lim sup σ →∞ ε x ( σ ) σ ˜ µ ∗ ( σ ) < ∞ , and if (5.16) ˜ µ ∗ ( v ) max { δ b, δ b } ≤ vb p · γ on Σ ( τ ) , then there are constants M and ω , determined only by β , β , β , c , M , µ , τ , Ω , sup { d ≥ R / } | Du | , and the limit behavior in (4.12) , (5.1) , and (5.15) ,such that | Du | ≤ M in Ω provided osc Ω u ≤ ω .Proof. We first note that the proof of the estimate for A ′∞ from Theorem 5.1 showsthat A ′∞ ≤ c by virtue of (5.11). As stated in the hypotheses of this theorem, wetake ε = ε /
2. With k to be determined, we take w as in the proof of Theorem5.1. Then we can choose k so that b i D i w > ∂ Ω on which w ≥ M and ω ≤ k gives a constant c such that a ij D ij w + κ i D i w ≥ D ¯ u · ν E (cid:18) ω ′ ψ ′ − c (cid:19) on the subset of Ω R / on which w ≥ M and ω ≤
1. We now choose ω so that χ′ = c + 1 and χ(inf u) = 0. It follows that we may take ω = 1/(c + 1). □

Note that, if ˜µ∗(s) = K/s for some positive constant K (which will be the case in all of our examples), then condition (5.15) is automatically satisfied and (5.13) and (5.16) follow from the condition

(5.17) |b x| + v|b z| ≤ β v b p · γ

for some nonnegative constant β.

6. Local gradient estimates
It is also possible to give local estimates. To present them in a more compact format, we introduce some further notation. Specifically, for some y ∈ ∂Ω and some R >
0, we look at solutions of a ij ( x, u, Du ) D ij u + a ( x, u, Du ) = 0 in Ω ∩ B ( y, R ) , (6.1a) b ( x, u, Du ) = 0 on ∂ Ω ∩ B ( y, R ) . (6.1b)Roughly speaking, we can estimate the gradient of u at y if ˜ µ ∗ is a power functionwith suitable negative exponent. We also set A ′∞ ,R = lim sup | p |→∞ sup ( x,z ) ∈ Ω ∩ B ( y,R ) × R | A ( x, z, p ) | , (6.2a) B ∞ ,R = lim sup | p |→∞ sup ( x,z ) ∈ Ω ∩ B ( y,R ) × R | B ( x, z, p ) | , (6.2b) B ′∞ ,R = lim sup | p |→∞ sup ( x,z ) ∈ Ω ∩ B ( y,R ) × R | B ′ ( x, z, p ) | , (6.2c) C ∞ ,R = lim sup | p |→∞ sup ( x,z ) ∈ Ω ∩ B ( y,R ) × R C ( x, z, p ) , (6.2d)and we write Γ R ( M ) for the set of all ( x, z, p ) ∈ Γ( M ) with | x − y | < R . We alsowrite Σ ( τ, R ) for the set of all ( x, z, p ) ∈ Σ ( τ ) with | x − y | < R .Our local gradient estimate takes the following form. Theorem 6.1.
Let u ∈ C (Ω ∩ B ( y, R )) be a solution of (6.1) with ∂ Ω ∩ B ( y, R ) ∈ C for some R ∈ (0 , R ) and b satisfying the hypotheses of Theorem 3.1 with ∂ Ω ∩ B ( y, R ) in place of ∂ Ω for some ∗ -decreasing function ε x . Suppose that there areconstants θ ∈ (0 , , M > , and µ ≥ such that (6.3) v θ | a ijp η ij | ≤ µ E / ( a ij δ km η ik η jm ) / on Γ R ( M ) for all matrices [ η ij ] , and v θ Λ ≤ µ E , (6.4a) v θ | a p | ≤ µ E (6.4b) on Γ R ( M ) . (a) Suppose also that there are functions r and s , a nonnegative constant µ ,a decreasing function ˜ µ and, for each ε ∈ (0 , ε ) , a decreasing function ˜ µ ε satisfying (5.2) such that (5.3a) and (5.3b) hold on Γ R ( M ) for all matri-ces [ η ij ] and (5.4a) holds on Γ R ( M ) , and suppose that B ′∞ ,R is boundeduniformly with respect to ε and that C ∞ ,R ≤ . If (6.5) lim σ →∞ σ − θ ε x ( σ ) = 0 and if (6.6) max { δ b, δ b } ≤ v θ ˜ µ ( v ) b p · γ on Σ ( τ , R ) , then there is a constant M , determined only by β , β , c , M , n , θ , µ , R , τ , Ω , the oscillation of u over Ω ∩ B ( y, R ) , and the limitbehavior in (5.2) , (6.2) , and (6.5) , such that | Du ( y ) | ≤ M . RADIENT ESTIMATES 29 (b)
Suppose also that there are functions r and s , a nonnegative constant µ ,and, for each ε ∈ (0 , ε ) , a decreasing function ˜ µ ε satisfying (5.2b) such that (5.3b) and (5.11) hold on Γ R ( M ) for all matrices [ η ij ] and (5.4a) holds on Γ R ( M ) , and suppose that B ′∞ ,R is bounded uniformly with respect to ε andthat C ∞ ,R ≤ . If (6.5) holds, if (5.12) holds, and if (6.6) is satisfied on Σ ( τ , R ) for some decreasing function ˜ µ satisfying (5.2a) , then there is aconstant M , determined only by β , β , c , M , n , θ , µ , R , τ , Ω , theoscillation of u over Ω ∩ B ( y, R ) , and the limit behavior in (5.2) , (5.12) , (6.2) , and (6.5) , such that | Du ( y ) | ≤ M . (c) Suppose also there are functions r and s and a nonnegative constant µ such that (5.11) holds on Γ R ( M ) for all matrices [ η ij ] and C ∞ ,R < . If (6.5) holds and if (5.13) is satisfied on Σ ( τ , R ) , then there is a constant M , determined only by β , β , c , M , n , µ , µ , R , Ω , τ , and the limitbehavior in (6.2d) , (6.3) , and (6.5) , such that | Du ( y ) | ≤ M . (d) Suppose also that there are functions r and s and a nonnegative constant µ such that conditions (5.4a) , (5.11) , and (5.14) hold on Γ R ( M ) for allmatrices [ η ij ] , and suppose B ′∞ ,R and C ∞ ,R are finite for each ε . If (6.7) lim sup σ →∞ σ − θ ε x ( σ ) < ∞ , and if (6.8) max { δ b, δ b } ≤ µ v θ b p · γ on Σ ( τ , R ) , then there are constants M and ω , determined only by β , β , c , M , µ , Ω , τ , and the limit behavior in (5.1) , (6.2) and (6.7) , suchthat | Du ( y ) | ≤ M provided osc Ω ∩ B ( y,R ) u ≤ ω .Proof. To prove part (a), we use the notation from the proof of Theorem 5.1 (with µ s − θ in place of ˜ µ ∗ ( s )); in particular, we take ε , ψ , and M from the proof ofthe theorem, and we assume initially that u ∈ C (Ω ∩ B ( y, R )). 
From (5.9), weconclude that there is a positive constant η ′ such that a ij D ij w + κ i D i w ≥ (cid:16) k dw θ/ (cid:17) [ S + η ′ w E ]for k = k θµ v ≥ M .Next, the discussion on pages 346 and 347 of [13] gives a positive constant R ,determined only by R and β , and, for each R ∈ (0 , R ), a function ζ ∈ C ( R n )such that ζ ( y ) = 3 / ζ ≤ B ( y, R/ b p · Dζ ≥ ∂ Ω ∩ B ( y, R/
2) at which ζ ≥
0. Further, there is a constant c , determined onlyby β and n such that R | Dζ | + R | D ζ | ≤ c . We therefore assume without loss ofgenerality that R < R .We now set q = 1 + 2 /θ and w = ζ q w and note that D i w = ζ − q D i w − ζ − q w D i ( ζ − q )= ζ − q D i w − qζ − w D i ζ. It follows that a ij D ij w + κ i D i w = ζ /θ ( a ij D ij w + κ i D i w ) + C w E + S , with κ i = κ i − qζ − a ij D j ζ,C = 1 E (cid:0) − q ζ − a ij D i ζD j ζ + qζ − a ij D ij ζ + ζ q − [ ω E i + a i ] D i ζ (cid:1) , and S = qψ ′ w ζ q − a jk,i D jk ¯ u. The proof of (5.8) shows that | E p | ≤ µ E , and hence, by invoking (3.4c), (3.4d),(4.8h), (6.3), (6.5), (6.4), and the Cauchy-Schwarz inequality, we conclude thatthere is a constant c for which a ij D ij w + κ i D i w ≥ w E (cid:18) ˜ η − cζv θ R − c ( ζv θ R ) (cid:19) . From the definition of w , it follows that there is a constant c , determined only bythe quantities in the conclusion of this theorem (except for R ), such that w ≤ cv (1 + Rv θ ) ≤ cv θ , if we further assume (again, without loss of generality) that R ≤
1. Therefore, w ≤ cζ q v θ = c ( ζv θ ) (2+ θ ) /θ . For any positive constant M , it follows that1 ζv θ R + 1( ζv θ R ) ≤ c ( M − θ/ (2+ θ )2 + M − θ/ (2+ θ )2 )wherever w ≥ M , and hence, just as in the proof of Theorem 3 of [19] or Theorem15.3 in [3], if we choose M sufficiently large, determined also by R , then a ij D ij w + κ i D i w > E , the subset of Ω ∩ B ( y, R/
2) where w ≥ M and ζ >
0. Just as in theproof of Theorem 5.1, it follows that w ≤ M in E or w attains its maximum on E = ∂ Ω ∩ E , even if u ∈ C (Ω ∩ B ( y, R/ w attains its maximum on E , then at that point we have b i D i w = w θ ζ (2 − θ ) /θ b i D i ζ + ζ /θ b i D i w . Since b i D i ζ ≥
0, it follows from the proof of Theorem 5.1 that w cannot attain its maximum at a point of E with w ≥ M for a suitable constant M, and hence we obtain the estimate w ≤ c in Ω ∩ B(y, R/2), which yields a bound for w(y) and hence for |Du(y)|. Parts (b), (c), and (d) are proved in a similar fashion. □

Of course, (5.17) implies (6.7) and (6.8) if θ = 1.

7. Examples

7.1. Capillary-type boundary conditions.
To begin, we look at some boundaryfunctions b which satisfy our structure conditions. We suppose that there are C scalar functions h (defined on [1 , ∞ )) and ψ (defined on ∂ Ω × R ) such that(7.1) b ( x, z, p ) = h ( v ) p · γ + ψ ( x, z ) . We assume that h is positive and that there is a positive constant h such that − h ( σ ) σ ≤ h ′ ( σ ) ≤ h h ( σ ) (7.2a)for all σ ∈ (1 , ∞ ) and lim σ →∞ σh ( σ ) > sup ∂ Ω × R | ψ | . (7.2b)By virtue of the first inequality in (7.2a), the limit in (7.2b) exists. Moreover, thelatter condition is only a restriction on ψ if the limit is finite. The capillary problemis the special case h ( σ ) = 1 /σ , in which case (7.2b) requires sup | ψ | < b p = h ( v ) γ + h ′ ( v ) p · γ pv ,b p · γ = h ( v ) + h ′ ( v ) ( p · γ ) v . The first inequality in (7.2a) implies that b p · γ ≥ h ( v ) (cid:20) − (cid:16) p · γv (cid:17) (cid:21) and hence b p · γ >
0. Moreover, b p · γ = h ( v ) " − (cid:18) ψvh ( v ) (cid:19) wherever b = 0. It follows from (7.2b) (and the monotonicity of the function σ σh ( σ )) that there are positive constant τ and Ψ for which1 − (cid:18) ψvh ( v ) (cid:19) ≥ Ψwhen v ≥ τ and hence b p · γ ≥ Ψ h ( v )when v ≥ τ . Moreover, if lim s →∞ sh ( s ) = ∞ , then we may take Ψ ∈ (0 , τ is sufficiently large.When b ( x, z, p ) = 0, we have | b p ( x, z, p ) | ≤ h ( v ) + | h ′ ( v ) || p · γ | = h ( v ) + h ′ ( v ) ψh ( v ) ≤ h ( v ) (cid:20) | ψ | h ′ ( v ) h ( v ) (cid:21) , so we infer the first inequality of (3.2) with β = (1 + h sup | ψ | ) / Ψ by also usingthe second inequality of (7.2a). When b ( x, z, p ) = 0 and v ≥ τ , we have | p · γ | v = | ψ ( x, z ) | vh ( v ) ≤ (1 − Ψ) / . Simple algebra then yields the second inequality of (3.2) with c = ((1 − Ψ) / Ψ) / .Similar computations yield (3.3) with ε x ( σ ) = (cid:18) sup | ψ x | h (1)Ψ + 2 R Ψ (cid:19) σ , and β = sup | ψ z | /h (1). We also observe that (5.12) is satisfied if lim σ →∞ σh ( σ ) = ∞ .Since θ >
0, (6.5) and (6.7) are immediate. Furthermore, (5.6) follows from (3.3)if(7.3) lim σ →∞ ˜ µ ∗ ( σ ) = 0because ε x ( s ) s is a bounded function of s , while (5.15) and (5.17) with β = 1 + β also follow from (3.3).The other conditions are more delicate. Since δ b = ( p · γ ) (cid:20) h ( v ) + h ′ ( v ) | p | v (cid:21) = − ψ (cid:18) h ′ ( v ) | p | h ( v ) v (cid:19) and 1 + h ′ ( v ) | p | h ( v ) v ≥ − | p | v > , it follows that δ b ≤ ψ ≥
0. We defer further discussion of the other conditions to the examples of differential equations.

7.2.
Other boundary conditions I.
It is possible to generalize the previous ex-ample slightly by allowing some x and z dependence in the gradient term of b .Specifically, we suppose that there are a positive-definite matrix valued C func-tion β ij and a positive C ([1 , ∞ )) function h such that b has the form b ( x, z, p ) = h (˜ v ) β ij ( x, z ) p i γ j + ψ (7.4a)with ˜ v = (cid:0) β ij ( x, z ) p i p j (cid:1) / . (7.4b)We further assume that h satisfies the conditions (7.2a) for some positive constant h and(7.5) lim σ →∞ σh ( σ ) > sup | ψ | λ / , where λ ( x, z ) is the minimum eigenvalue of the matrix [ β ij ( x, z )]. By imitatingthe arguments in the previous example, we see that the hypotheses of Theorem 3.1are satisfied with β and c determined by h , sup ∂ Ω × R | ψ | , the function β ij , andthe quantities in (7.5); β is determined also by sup | ψ z | and sup | β ijz | ; and ε x ( σ ) = K σ for some constant K determined by the quantities in (7.5), h , sup | ψ x | , and thefunction β ij . The second inequality of (3.2) is proved by using the proof of Lemma2.2(b) from [5], specifically, the demonstration there that 1 − ( ν · γ ) is boundedaway from zero. It’s easy to see that condition (2.2) from [5] is valid with constants β , τ ≥ β ∈ (0 , σ →∞ σh ( σ ) = ∞ . RADIENT ESTIMATES 33
7.3. Other boundary conditions II.
We can also consider boundary conditions with b of the form

(7.6) b(x, z, p) = v^{q(x)} p · γ + ψ(x, z)

with q a C¹ function satisfying the inequality q(x) ≥ −1, and sup_{Q×R} |ψ| < 1, where Q is the set on which q = −1. Since the special case q ≡ −1 was handled in Example 7.1, we assume here that q ≢ −1. This time, the hypotheses of Theorem 3.1 are satisfied for suitable constants β, β and c, along with ε x of the form

ε x(σ) = K (ln σ)/σ

for K a positive constant. Specifically, the constants β and c are determined by sup_{Q×R} |ψ|, the C¹ nature of q at points of Q, and the maximum of q; β is determined also by sup |ψ z|; and K is determined also by the C¹ nature of q everywhere and sup |ψ x|. This time, (5.6) holds if lim_{σ→∞} ˜µ∗(σ) ln σ = 0, (5.12) holds if q > −1, and (6.5), (5.17), and (6.7) always hold. Moreover (5.13) holds with β determined by the C¹ nature of q at points of Q, the maximum of q, and sup |ψ z|. We infer (5.5) (for a suitable ˜µ) if

(7.7) lim_{σ→∞} σ^{−q} ˜µ∗(σ) = 0,

while (5.13) follows from the weaker condition

(7.8) lim sup_{σ→∞} σ^{−q} ˜µ∗(σ) < ∞.

We infer (5.12) provided inf q > −1. If q + θ > 0, then (6.6) holds, while, for q + θ ≥ 0, it suffices that lim_{σ→∞} ˜µ∗(σ) ln σ = 0.

7.4. Non-variational boundary conditions.
All of our boundary conditions sofar correspond to the natural boundary condition for a variational problem. Inother words, they have the form b ( x, z, p ) = F p ( x, z, p ) · γ for some C function F which is convex with respect to p . Specifically, F ( x, z, p ) = Z v σh ( σ ) dσ + ψ ( x, z ) p · γ for (7.1), F ( x, z, p ) = Z ˜ v σh ( σ ) dσ + ψ ( x, z ) p · γ for (7.4), and F ( x, z, p ) = v q ( x )+2 q ( x ) + 2 + ψ ( x, z ) p · γ for (7.6). This form allows the possibility of deriving gradient bounds by applyingthe maximum principle to F ( x, u, Du ) (assuming suitable structure conditions on F , a ij , and a ), an idea going back to Ural ′ tseva [20] and Lieberman [5] althoughboth authors only studied the conormal problem in which a ij = ∂ F/ ( ∂p i ∂p j ). Wenow provide a class of boundary conditions not having this form but for which thepaper of the current page provides gradient estimates.We start with a unit vector valued function β , defined on ∂ Ω × R , such that β · γ ≥ β ∗ for some constant β ∗ ∈ (0 , β = γ is such a function forany such β ∗ , and if β is a unit vector with β · γ ≥
1, then β · γ ≡ β = γ . Thefunction b now has the form b ( x, z, p ) = h ( v ) β · p + ψ ( x, z )(7.10a)for some positive function h with − β ∗ ≤ σh ′ ( σ ) h ( σ ) ≤ β ∗ − β ∗ (7.10b)for all σ ≥ b is oblique. To this end, we compute b p · γ = h ( v ) β · γ + h ′ ( v ) v ( β · ν () γ · ν ) . We also note that | β − γ | = | β | + | γ | − β · γ ≥ − β ∗ , and hence( β · pν )( γ · ν ) = ( β · ν ) + ( β · ν ) ν · ( β − γ ) ≥ ( β · ν ) − | β · ν || ν | β − γ |≥ ( β · ν ) − | β · ν || ν | (2 − β ∗ ) / . The Cauchy-Schwarz inequality implies that | β · ν || ν (2 − β ∗ ) / ≤ ( β · ν ) − | ν | (2 − β ∗ )and therefore(7.11) ( β · ν )( γ · ν ) ≥ − − β ∗ | ν | . If h ′ ( v ) ≤
0, then h ′ ( v ) v ( β · ν )( γ · ν ) ≥ − β ∗ h ( v )by virtue of the first inequality in (7.10b), so b p · γ > h ′ ( v ) >
0, then h ′ ( v ) v ( β · ν )( γ · ν ) ≥ − β ∗ − β h ( v ) 1 − β ∗ | ν | > − β ∗ h ( v ) . It follows in either case that b p · γ >
0. For large v , we can obtain a lower boundfor b p · γ in line with the previous lower bounds. First, we write h ′ ( v ) v ( β · ν )( γ · ν ) = − h ′ ( v ) v ( γ · ν ) h ( v ) ψ ( x, z ) ≥ − β ∗ − β ∗ | ψ | v . The lower bound in (7.10) implies that lim σ →∞ σh ( σ ) = ∞ , so there is a τ ≥ v (1), β ∗ , and sup | ψ | , such that v ≥ τ implies that vh ( v ) ≥ | ψ | − β ∗ RADIENT ESTIMATES 35 and simple algebra shows that b p · γ ≥ ( β ∗ / h ( v ) for v ≥ τ . For this reason, wecan actually relax the upper bound in (7.10b) somewhat to h ′ ( σ ) ≤ β ∗ h ( σ )(1 − β ∗ ) σ if σ ≤ τ ,h h ( σ ) if σ > τ for some h ≥
0. We leave the details of the analysis for this more extended classof boundary conditions to the reader. We do point, however, that the hypothesesof Theorem 3.1 are satisfied with constants β and c determined by v (1), β ∗ , andsup | ψ | and ε x ( σ ) = K/σ for some positive constant K determined only by v (1), β ∗ , sup | ψ | , and Ω.Although we could modify Examples 7.2 and 7.3 in a similar fashion, we leavethe details of that modification to the interested reader.We also make a quick comparison between this example and Example 7.2. Namely,the expression β ij p i γ j appears in Example 7.2 while the slightly different expression β i p i appears in this example. Of course, given a matrix β ij ] as in Example 7.2, thevector β given by β i = β ij γ j makes these two expressions equal. (Although, in or-der for β to be a unit vector, we must define an intermediate vector ˆ β by ˆ β i = β ij γ j and then set β = ˆ β/ | ˆ β . Then β ij p i γ j and β i p i differ only by a positive factor.) Onthe other hand, given a vector β as in this example, we can find a positive-definite,symmetric matrix [ β ij ] making the expressions equal. To create such a matrix, weassume that γ = (0 , . . . , β ij = β n + β n P n − j =1 ( β j ) if i = j < n,β i if j = n,β j if i = n, . It’s elementary to check that, for this matrix, β ij γ j = β i . Moreover, direct calcu-lation shows that, for any vector ξ , we have β ij ξ i ξ j = n − X i =1 β ii ( ξ i ) + β n ( ξ n ) + 2 n − X i =1 β i ξ i ξ n , and a simple application of the Cauchy-Schwarz inequality to the last sum in thisequation shows that β ij ξ i ξ j ≥ β n | ξ | , so [ β ij ] is positive definite. It is symmetric by construction, and all of its entriesare bounded from above by a constant determined only by β ∗ . Therefore, we couldhave written this example directly in terms of a matrix [ β ij ] as in Example 7.2. 
We choose not to do so for ease of notation and to point out that this example applies to semilinear boundary conditions, which correspond to h being constant, as well. Now, we consider some classes of differential equations.

7.5. The false mean curvature equation.
The model problem for our class ofequations is known as the false mean curvature equation. In particular, a ij = δ ij + p i p j and a satisfies the conditions | a x | = o ( | p | ) , (7.12a) a z ≤ o ( | p | ) , (7.12b) | a | + | p || a p | = O ( | p | ) . (7.12c)We now take ˜ µ ∗ ( s ) = K/s with K a large constant, determined only by n andthe limit behavior in (7.12c). Since δ a ij = 2 p i p j , conditions (5.3) are satisfiedwith r = − s = 0, ˜ µ ( σ ) ≥ /σ (the exact choice of ˜ µ will be made later), and˜ µ ε ≡
0. With these choices (in particular, with K sufficiently large), (5.4b), (5.6), and (5.4c) follow for any of our examples of boundary conditions. It’s easy to compute B ′∞ = C ∞ = 0, while B ∞ is bounded by a constant determined by the limit behavior in (7.12c). Moreover, local gradient bounds follow from Example 2 on page 585 of [19] (with the constant θ in that example equal to 1 and the multiplier function t in that example equal to 0). When h has the form (7.1), then conditions (5.13) and (6.8) are easy to check. To verify (5.5), we consider two possibilities. First, if lim σ →∞ σh ( σ ) < ∞ , in addition to the bound on sup | ψ | , we assume that (7.13) lim σ →∞ h ′ ( σ ) / ( σh ( σ )) = − ψ z ≤
0. We then take ˜ µ ( σ ) = K ( σ + 1 + h ′ ( σ ) / ( σh ( σ )) ) for a sufficiently large positive constant K . Since we have already shown that δ b ≤ 0 wherever ψ ≥
0, we only need to estimate δ b wherever ψ <
0. Moreover, it follows from (7.13) that h ′ ( σ ) < 0 for σ sufficiently large, in which case, we have δ b ≤ − ψ [ h ′ ( v ) / ( vh ( v )) + 1 /v ] . From this inequality, (5.5a) follows easily, and (5.5b) is easily verified since b z = ψ z ≤ 0. If lim σ →∞ σh ( σ ) = ∞ , we assume that (7.14) lim sup σ →∞ h ′ ( σ ) /h ( σ ) ≤ 0. This time, we take ˜ µ ( σ ) = K ( σ + 1 / ( σh ( σ )) + max { 0 , h ′ ( σ ) /h ( σ ) } ) to infer (5.5). In particular, we require ψ z ≤ 0 if h ( σ ) = 1 /σ (the capillary problem) or h ( σ ) = (arctan σ ) /σ . If h ( σ ) = σ q − for some q > 0 or h ( σ ) = exp( σ α ) for some α ≥
0, our gradient estimate holds without any restriction on ψ z .
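Returning to the operator itself: since the displayed powers are garbled in this copy, it is worth recording the basic ellipticity facts for a ij = δ ij + p i p j directly. Its eigenvalues are 1 (with multiplicity n − 1, on the orthogonal complement of p) and 1 + |p|², so the ellipticity ratio grows like |p|² and the equation is elliptic but not uniformly so. A quick numerical check:

```python
import numpy as np

p = np.array([3.0, 4.0])                # sample gradient with |p| = 5
A = np.eye(len(p)) + np.outer(p, p)     # a^{ij} = delta_{ij} + p_i p_j
eigs = np.sort(np.linalg.eigvalsh(A))
print(eigs)                             # eigenvalues 1 and 1 + |p|^2 = 26
```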
When b has the form (7.4), we obtain a gradient estimate if h satisfies the samerestrictions as before and if β ijz = 0.When b has the form (7.6) and q is nonconstant, we obtain a gradient estimateprovided inf q > − b has the form (7.10a) with h satisfying (7.10b), we obtain agradient estimate provided β and ψ are C with respect to x and z .In fact, the hypotheses of Theorem 6.1(a) are satisfied with θ = 1, so we obtaina local gradient bound for all these boundary conditions.Note that this gradient bound improves the one for operators of this form in [7] byrelaxing the conditions on a . Theorem 4.2 of that work assumes that | a | = O ( | p | )and that δ a ≤ o ( | p | ) to obtain a local gradient estimate, while Theorem 6.1(a)allows | a | = O ( | p | ) and δ a ≤ O ( | p | ). Theorem 5.1 of [7] gives a global gradientestimate under somewhat weaker conditions on a , specifically, | a | = O ( | p | − η ) with η > b : δ b, δ b ≤ O ( b p · γ ) . In particular, if b ( x, z, p ) = v q − p · γ + ψ ( x, z, p ) , the condition on δ b requires q = 0 or q ≥
1. Our results do not fully extend those in [7] (see also Theorems 9.8 and 9.9 from [13], which are essentially the estimates from [7]) because some of the conditions in that source involve the operator δ T , defined by δ T f = f z + 1 / | p ′ | p ′ · f x − | p ′ | f p · D ( c km ) p k p m ; however, the author is unaware of any special equations for which the conditions in [7] are satisfied but not those in this work.

7.6. A generalization of the false mean curvature equation.
Our resultsactually apply to a larger class of differential equations based on the one in [7].These operators have the form(7.15) a ij = a ij ∗ + τ ( x, z, p ) p i p j assuming that τ ≥ a ij ∗ ] is a symmetric, uniformly elliptic matrix, that is,there is a positive function Λ ∗ and there is a positive constant µ ∗ such that(7.16) µ ∗ Λ ∗ ( x, z, p ) | ξ | ≤ a ij ∗ ( x, z, p ) ξ i ξ j ≤ Λ ∗ ( x, z, p ) | ξ | for all ξ ∈ R n and all ( x, z, p ) ∈ Ω × R × R n with | p | ≥ M . We also assume that (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ∂a ij ∗ ∂p (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) = O (( τ Λ ∗ ) / ) , (7.17a) | p || δ a ij ∗ | + | p | (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ∂a ij ∗ ∂z (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ∂a ij ∗ ∂x (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) = o ( | p | ( τ Λ ∗ ) / ) . (7.17b)The function τ is assumed to satisfy | τ p | = O ( τ / | p | ) , (7.18a) | τ x | + | p || τ z | = o ( τ | p | ) . (7.18b)These conditions on τ are essentially identical to those in [7]. (See conditions (3.7)and (4.4) there.) The only difference between the two sets of conditions is that [7] assumes an estimate on δ T τ while we rewrite the condition in terms of τ z and τ x separately. The function τ is further assumed to be appropriately related to Λ ∗ . Itwas assumed in (4.4) of [7] that τ = O (Λ ∗ ). Here, we shall assume instead(7.19) Λ ∗ = o ( τ | p | )since, if τ = O (Λ ∗ / | p | ), the equation is uniformly elliptic and stronger results areavailable. We refer the reader to [15] and to the next example for details of thestronger results for uniformly elliptic equations.We are now ready to check our conditions. First, if we take r = − ( δ + 3) τ /τ ,then ( δ + r + 1) a ij = ( δ + r + 1) a ij ∗ , so( δ + r + 1) a ij η ij ≤ o ( v ) τ / Λ ∗ n X i,j =1 ( η ij ) ≤ o ( τ | p | ) / (cid:0) a ij δ km η ik η jm (cid:1) / . 
Since τ | p | = O ( E /v ), we infer (5.3a) for some ˜ µ ε determined by the limit behaviorin (7.17b) and (7.19). A similar calculation verifies (5.3b) with s = 0 and ˜ µ ε ( σ ) =¯ µ ( σ ) /ε for some decreasing function ¯ µ satisfyinglim σ →∞ ¯ µ ( σ ) = 0and determined by the limit behavior in (7.17b), while (5.3c) follows from (7.17a)and (7.18a) with ˜ µ ∗ ( s ) = K /s for some positive constant K determined by thelimit behavior in those conditions. If we also assume that | a | + | p || a p | = O ( τ | p | ) , (7.20a) | a x | = o ( τ | p | ) , (7.20b) a z ≤ o ( τ | p | ) , (7.20c)then (5.4c) holds with ˜ µ ∗ as before and K determined also by the limit behavior in(7.20a). Taking (7.20b) and (7.20c) into account, we also compute B ′∞ = C ∞ = 0.To obtain an interior gradient estimate, we use Theorem 3 from [19]. First,we define the matrix A ′ = [ a ij ∗ ] and take the multipliers r = 0 and s = − a from (21) of [19] is zero,and that the quantities b and c from that equation are bounded from above bynonnegative constants determined only by the limit behavior in (7.17), (7.18), and(7.20), so the hypotheses of Theorem 2 from [19] are satisfied. In addition, furthercomputation gives (29) from [19] with θ = 1 there. Theorem 3 from [19] then givesa local gradient estimate for such equations. Hence, we obtain a gradient boundhere, too, under the same conditions on b as in the previous example.We point out here that the estimates in [7] require stronger hypotheses on a ij ∗ and on a than the ones here. For example, the second inequality of (3.1) in [7]states that (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ∂a ij ∗ ∂p (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) = O (cid:18) Λ ∗ | p | (cid:19) and (7.19) implies that Λ ∗ | p | = o (( τ Λ ∗ ) / ) , RADIENT ESTIMATES 39 which means that the second inequality of (3.1) in [7] is stronger than (7.17a).Similar comparisons show that | ∂a ij ∗ /∂z | and | ∂a ij ∗ /∂x | can be larger here thanin Theorem 4.2 of [7]. 
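As a numerical aside, condition (7.19) and the failure of the older requirement τ = O(Λ ∗ ) are easy to see for the choice a ij ∗ = exp(−v)δ ij , τ = v^α mentioned at the end of this subsection; the value α = 1 below is illustrative.

```python
import numpy as np

alpha = 1.0
v = np.array([10.0, 50.0, 100.0])   # v = (1 + |p|^2)^{1/2}, large-gradient regime
Lambda_star = np.exp(-v)            # ellipticity scale of a_*^{ij} = exp(-v) delta_{ij}
tau = v ** alpha
p_sq = v ** 2 - 1.0                 # |p|^2

small = Lambda_star / (tau * p_sq)  # (7.19): Lambda_* = o(tau |p|^2) -> decays
big = tau / Lambda_star             # tau = O(Lambda_*) would need this bounded
print(small)                        # tends to 0
print(big)                          # blows up
```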
In addition, Theorem 4.2 of [7] requires | a | = O ( τ | p | ) while Theorem 5.1 of [7] requires | a | = O ( τ | p | ^{− α } ) for some positive α or else | a | = o ( τ | p | ) and τ = O (Λ ∗ | p | ^{− α } ) for some positive α . Finally, it is also assumed in [7] that δ b ≤ O ( b p · γ ). We also note that the choice a ij ∗ = exp( − v ) δ ij and τ = v ^α (for any real α ) satisfies (7.16), (7.17), (7.18), and (7.19) but not the condition τ = O (Λ ∗ ) from [7].

7.7. Uniformly elliptic equations.
We now suppose that there is a positive con-stant µ ∗ such that(7.21) µ ∗ Λ | ξ | ≤ a ij ξ i ξ j for all vectors ξ . We also suppose that | p | | a ijp | + | p || a ijz | + | a ijx | = O (Λ | p | ) , (7.22a) | p || a | + | p | | a p | + | a x | = O (Λ | p | ) , (7.22b) a z ≤ O (Λ | p | ) . (7.22c)Finally, we assume that b satisfies conditions (3.1), (3.2), and (5.17).Straightforward calculation shows that conditions (5.11), (5.14), (6.3), and (6.4)are satisfied with θ = 1, r = − s = 0, and the constants µ and µ determined by n , µ ∗ , and the constants in (7.22). In addition, for any y ∈ ∂ Ω and any
R >
0, thehypotheses of Theorem 3.1 are satisfied in ∂ Ω ∩ B ( y, R ) with ε x ≡ β and β = β .Further, C ∞ ,R and B ′∞ ,R are seen to be bounded by constants determined by thesame quantities as for µ and µ . A H¨older continuity estimate for u follows fromTheorem 2.3 of [15] (or Lemma 8.4 of [13]), so we obtain a gradient bound from part(d) of Theorem 6.1 if R is sufficiently small. In conjunction with the usual interiorgradient estimate for uniformly elliptic equations Theorem 15.5 from [3], we infer aglobal gradient estimate for solutions of (0.1) under these hypotheses. Theorem 3.3of [15] gives the exact same estimate as here, but there are two reasons to note theproof given here: we do not need the special form for the interior gradient estimateproved as Lemma 3.1 in [15], and we can use the N function to derive this gradientestimate.In particular, we obtain a gradient bound for four types of functions b . The firstis that b has the form (7.1) and h and ψ satisfy (7.2), the second is that b has theform (7.4) and h and ψ satisfy (7.2a) and (7.5), the third is that b has the form(7.6) with q a C function satisfying the inequality q ( x ) ≥ −
1, and sup Q × R | ψ | < x ∈ Q , where Q is the set on which q = −
1, and the fourth is that b has the form (7.10a) with h satisfying (7.10b).

8. Global gradient bounds for parabolic problems
A large part of our analysis of parabolic problems is essentially identical to that of elliptic problems. For this reason, we emphasize the few significant differences between the two cases and sketch the minor modifications needed. First, for a function f depending on the variables ( x, t ), we write Df = ∂f /∂x (exactly as before) and f t = ∂f /∂t , and we say that a function f is in C 2,∗ (Σ) for some open subset Σ of R n+1 if the derivatives D 2 f , f t and Df t are continuous in Σ. Then, we write P Ω ∈ C 2,∗ if S Ω can be written as the intersection of the set { ( x, t ) ∈ R n+1 : 0 < t ≤ T } for some positive constant T with a level set of a C 2,∗ function f with | Df | bounded away from zero on S Ω. If the function f is only C 2,1 (that is, if D 2 f and f t are continuous), then we write P Ω ∈ C 2,1 . We use d ∗ to denote the parabolic distance to S Ω, that is, we define d ∗ by d ∗ ( X ) = inf { max {| x − y | , | t − s | 1/2 } : Y ∈ S Ω , s ≤ t } , and we write d for the spatial distance to S Ω, that is, d ( X ) = inf { | x − y | : Y ∈ S Ω , s = t } , where Y = ( y, s ). As pointed out in Section 10.3 of [11], there is a positive constant C , determined only by Ω, such that d ∗ /d is trapped between C and 1 /C in Ω whenever P Ω ∈ C 2,1 . In that reference, it is also shown that, if P Ω ∈ C 2,∗ , then there is a positive constant R , determined only by Ω, such that d ∈ C 2,∗ (Σ R ), where Σ R is the subset of Ω on which d < R . Further, for r >
0, we write Ω r for the subset of Ω on which d < r . If P Ω ∈ C , ∗ ,then γ = Dd is a C , unit vector field in Σ R which extends the unit inner spatialnormal to S Ω. Moreover, for any vector ξ , the vector ˜ ξ , defined by (1.1), satisfies | ˜ ξ | ≤ | ξ | /R in Ω R / .Since our problem depends on t , we need to modify notation slightly beforepresenting our N function. For τ ≥
1, we write Σ( τ ) for the subset of S Ω × R × R n onwhich v ≥ τ . In place of Theorem 3.1, we have the following result. Since the proofis essentially the same as for Theorem 3.1 (except for notational adjustments), weomit it here. We do mention, though, that the notation C , (Ω R / × R × R n × (0 , ε ))means a function which is once continuously differentiable with respect to t andtwice continuously differentiable with respect to all other variables. Theorem 8.1.
Let P Ω ∈ C , , let τ ≥ , and let b ∈ C (Σ( τ ) with b p · γ > on Σ ( τ ) , the subset of Σ( τ ) on which b = 0 . Suppose (8.1) lim t →∞ b ( X, z, p − τ γ ( X )) < < lim t →∞ b ( X, z, p + τ γ ( X )) for all ( X, z, p ) ∈ Σ( τ ) and that there are positive constants β and c such thatconditions (3.2) are satisfied on Σ ( τ ) . Suppose also that there are a ∗ -decreasingfunction ε x and a nonnegative constant β such that (3.3) holds. Then there is apositive constant ε ( β , c ) , along with a C , (Ω R / × R × R n × (0 , ε )) function N ,such that conditions (3.4) hold on Ω R / × R × R n × (0 , ε ) with v ε as in Theorem3.1. Moreover, N = 0 on Σ ((1 + c ) / τ ) × R × R n × (0 , ε ) .If we define w and ν by (3.5) for some ε ∈ (0 , ε ) , then (3.6) is valid. Also, thereis a nonnegative constant c , determined only by β , c , and n such that (3.7a) , (3.7b) , (3.7c) , and (3.7d) all hold for all ξ ∈ R n , and (3.7e) and (3.7f) are satisfied.Finally, if P Ω ∈ C , ∗ and if there is an increasing function Λ such that (8.2) | b t | ≤ Λ ( v ) b p · γ on Σ ( τ ) , then there is a nonnegative constant c , determined also by Ω , such that (3.7g) is valid and (8.3) | N t | ≤ c v + 2Λ (2(1 + c ) v ) . RADIENT ESTIMATES 41
We also observe that, if Ω is a cylinder and if b is time-independent, then the proof of the preceding theorem shows that N is also time-independent. In particular, all of our elliptic gradient estimates have direct parabolic analogs in this case. To facilitate our exposition, we include these analogs in the more general results given below. The first way in which our parabolic gradient estimates differ from the elliptic ones is that the multiplier functions r and s must be chosen as r = − s = 0. For the reader’s convenience, we rewrite the appropriate conditions explicitly for this pair of multipliers. First, we set A = ( δ − EE , B = ( δ E + ( δ − a E , B ′ = ( δ − a E , C = δ a E , and we introduce the following constants related to the limit behavior of these functions. A ′∞ = lim sup | p |→∞ sup ( X,z ) ∈ Ω × R | A ( X, z, p ) | , (8.4a) B ∞ = lim sup | p |→∞ sup ( X,z ) ∈ Ω × R B ( X, z, p ) , (8.4b) B ′∞ = lim sup | p |→∞ sup ( X,z ) ∈ Ω × R | B ′ ( X, z, p ) | , (8.4c) C ∞ = lim sup | p |→∞ sup ( X,z ) ∈ Ω × R C ( X, z, p ) . (8.4d) For ease of notation, we modify the definition of Γ( M ) to denote the set of all ( X, z, p ) ∈ Ω × R × R n with | p | ≥ M . We begin with estimates that are simple analogs of the corresponding elliptic ones.

Theorem 8.2.
Let u ∈ C , (Ω) be a solution of (0.2) with P Ω ∈ C , ∗ and b satisfying the hypotheses of Theorem 8.1 for some ∗ -decreasing function ε x andsome increasing function Λ such that (8.5) Λ ((1 + c ) v ) ≤ µ (1 + E ) v on Γ( M ) for some nonnegative constant µ . Suppose that there is a ∗ -decreasingfunction ˜ µ ∗ such that (5.3c) is satisfied on Γ( M ) for all matrices [ η ij ] and (5.4c) holds on Γ( M ) . (a) Suppose also that there are decreasing functions ˜ µ and ˜ µ ε satisfying (5.2) and a nonnegative constant µ such that δ a ij η ij ≤ ˜ µ ( v ) v E / ( a ij δ km η ij η jm ) / , (8.6a) δ a ij η ij ≤ ˜ µ ε ( v ) v E / ( a ij δ km η ij η jm ) / , (8.6b) hold on Γ( M ) for all matrices [ η ij ] , that (8.7) Λ | p | ≤ µ E on Γ( M ) , that B ′∞ is bounded uniformly with respect to ε and that C ∞ ≤ . If (5.5) is satisfied on Σ ( τ ) and if ε x satisfies (5.6) , then there is aconstant M , determined only by β , β , c , n , µ , sup { d ≥ R / } | Du | , theoscillation of u , and the limit behavior in (5.2) , (5.6) , and (8.4) , such that | Du | ≤ M in Ω . (b) Suppose also that there are a decreasing function ˜ µ ε satisfying (5.2b) and anonnegative constant µ such that (5.3c) , (8.6b) , and (8.8) δ a ij η ij ≤ µ v E / ( a ij δ km η ij η jm ) / hold on Γ( M ) for all matrices [ η ij ] and (8.7) holds on Γ( M ) , that B ′∞ isfinite, and that C ∞ ≤ . If ε x satisfies (5.6) and if b satisfies (5.12) , thenthere is a positive constant M , determined only by β , β , c , n , µ , µ ,the oscillation of u , sup { d ≥ R / } | Du | , and the limit behavior in (5.2) , (5.6) , (5.12) , and (8.4) , such that | Du | ≤ M in Ω . (c) Suppose that there are a constant µ and a decreasing function ˜ µ ε satisfying (5.2b) such that (5.3c) , (8.6b) , and (8.8) hold on Γ( M ) for all matrices [ η ij ] and (5.4b) holds on Γ( M ) , and suppose B ′∞ is finite and C ∞ < . 
If (5.5) and (5.13) are satisfied on Σ ( τ ) and if ε x satisfies (5.6) , then thereis a constant M , determined only by β , β , c , n , µ , sup { d ≥ R / } | Du | ,the oscillation of u , and the limit behavior in (5.2) , (5.6) , and (8.4) , suchthat | Du | ≤ M in Ω . (d) Suppose also that there are nonnegative constants M and µ such thatconditions (8.8) and (8.9) | δ a ij η ij | ≤ µ v E / ( a ij δ km η ik η jm ) / hold on Γ( M ) for all matrices [ η ij ] , that (8.7) hold on Γ( M ) , and supposethat B ′∞ and C ∞ are finite for each ε . If ε x satisfies (5.15) , if (5.16) holds on Σ ( τ ) , and if (8.5) holds, then there are constants M and ω ,determined only by β , β , c , µ , µ , Ω , sup { d ≥ R / } | Du | , and the limitbehavior in (8.4) and (5.15) such that | Du | ≤ M in Ω provided osc Ω u ≤ ω .Proof. To prove part (a), we use the notation from the proof of Theorem 5.1. Now,we use a different estimate for ωI a ij D ij u from the one in the elliptic case. Firstwe write ωI a ij D ij u = ωI ψ ′ a ij D ij ¯ u + ω I E . It follows from (4.4), (4.8g), (8.7), and the Cauchy-Schwarz inequality that ωI ψ ′ a ij D ij ¯ u ≥ − c ( µ ε ) / ( w E ) / S / , while (4.6) yields ω I E ≥ − cε / ω w E . Using these estimates and assuming initially that Du ∈ C , (Ω), we see fromthe proof of Theorem 5.1 that, by choosing ψ suitably, there are positive constants˜ η and M such that w satisfies the differential inequality − ∂w ∂t + a ij D ij w + κ i D i w ≥ (cid:18) ˜ ηw E − εN N t ( ψ ′ ) + 2 c kmt D k ¯ uD m ¯ u (cid:19) (cid:18) k d ˜ µ ∗ ( √ w ) (cid:19) RADIENT ESTIMATES 43 on Ω ′ , the subset of all Ω R / on which | Du | ≥ M . 
As in Theorem 5.1, it followsthat w ≥ w µ ∗ ( √ w ) , and hence (after taking (8.5) into account) there are nonnegative constants k and k , determined only by c , µ , sup | γ t | , and n , such that − ∂w ∂t + a ij D ij w + κ i D i w ≥ [˜ η − k ε ] w E − k w on Ω ′ .We now decrease ε so that ˜ η ≥ k ε to infer that − ∂w ∂t + a ij D ij w + κ i D i w + k w ≥ ′ , and hence w = e − k t w cannot take its maximum in Ω ′ . The restrictionthat Du ∈ C , is removed by using Theorem 6.15 from [11] rather than Theorem8.1 from [3].The analysis of a boundary maximum for w is only slightly different from theanalysis in the elliptic case. This time, we note that there is a positive constant M such that b i D i w > S Ω where w ≥ M , while the size of w at P Ω R / \ S Ω is controlledby the assumed bounds on | Du | there.The other parts are proved via a similar modification of the corresponding elliptictheorems. (cid:3) There is another way to obtain a global gradient bound for parabolic problemsusing as our model Corollary 11.2 from [11]. Specifically, we have the followingresult, which includes many cases of interest.
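Before stating it, we record the elementary identity behind the factor e^{−k₂t} in the preceding proof: if w̄ = e^{−k₂t} w, then L w̄ = e^{−k₂t}(L w + k₂ w) for L = −∂ t + a ij D ij + κ i D i , so the zeroth-order term k₂ w is absorbed and the maximum principle applies to w̄. A one-dimensional symbolic check, with frozen coefficients (purely illustrative):

```python
import sympy as sp

t, x, k2 = sp.symbols('t x k2')
a, kappa = sp.symbols('a kappa')      # frozen coefficients, one space dimension
w = sp.Function('w')(x, t)

# L h = -h_t + a h_xx + kappa h_x
L = lambda h: -sp.diff(h, t) + a * sp.diff(h, x, 2) + kappa * sp.diff(h, x)

wbar = sp.exp(-k2 * t) * w
# L(wbar) - e^{-k2 t} (L(w) + k2 w) should vanish identically
lhs = sp.simplify(L(wbar) - sp.exp(-k2 * t) * (L(w) + k2 * w))
print(lhs)   # 0
```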
Theorem 8.3.
Let u ∈ C , (Ω) be a solution of (0.2) with P Ω ∈ C , ∗ and b satisfying the hypotheses of Theorem 8.1 for some ∗ -decreasing function ε x and Λ ( σ ) = µ σ for some nonnegative constant µ . Suppose that there is a nonnegativefunction µ such that δ a ij η ij ≤ µ v (cid:0) a ij δ km η ik η jm (cid:1) / , (8.10a) δ a ij η ij ≤ µ ( ε ) v (cid:0) a ij δ km η ik η jm (cid:1) / , (8.10b) | a ijp η ij | ≤ µ v (cid:0) a ij δ km η ik η jm (cid:1) / (8.10c) on Γ( M ) for all matrices [ η ij ] and | ( δ − a | ≤ µ , (8.11a) δ a ≤ µ ( ε ) , (8.11b) E ≤ µ (8.11c) on Γ( M ) . If (8.12) (1 + ε x ( v ) v ) Λ ≤ µ
on Γ( M ) and if (5.13) holds on Σ ( τ ) , then there is a constant M , determined only by n , β , β , c , µ , µ , M , τ , Ω , and sup { d ≥ R/2 } | Du | such that | Du | ≤ M in Ω . Proof. We first fix ε ∈ (0 , ε ). With w as in the proof of Theorem 5.1 and k a positive constant to be determined, we set w = w + k d ∫ w (1 + ε x ( √ σ ) √ σ ) dσ. Following the proof of Theorem 5.1 and writing Ω ′ for the subset of all X ∈ Ω R/2 with | Du | ≥ max { M , τ } , we see that b i D i w > 0 on S Ω ∩ S Ω ′ provided k is chosen sufficiently large. In addition, there is a constant c such that − ∂w/∂t + a ij D ij w + κ i D i w ≥ − cw on Ω ′ . Therefore w = exp( − ct ) w satisfies − w t + a ij D ij w + κ i D i w > 0 on Ω ′ , b i D i w > 0 on S Ω ∩ S Ω ′ . These differential inequalities then give the desired bound for w , and hence for | Du | , by the maximum principle. □

9. Local gradient estimates for parabolic problems
Local estimates are generally straightforward if C ∞ ≤ 0 or if the oscillation of u is sufficiently small; however, there is an additional complication if Ω is not cylindrical. To keep our discussion of these estimates consistent in the various cases, we start by rewriting the notation from Section 6. First, for Y ∈ R n+1 and positive constants R and R ′ , we write Q ( Y ; R, R ′ ) = { X ∈ R n+1 : | x − y | < R, s − R ′ < t < s } . Next, for positive constants M , R , R ′ , and τ , we write Γ R,R ′ ( M ) for the set of all ( X, z, p ) ∈ Γ( M ) with X ∈ Q ( Y ; R, R ′ ) and Σ ( τ ; R, R ′ ) for the set of all ( X, z, p ) ∈ Σ ( τ ) with X ∈ Q ( Y ; R, R ′ ). It will also be convenient to define Ω ′ ( Y ; R, R ′ ) = P (Ω ∩ Q ( Y ; R, R ′ )) \ S Ω . Our local problem is then written in the form − u t + a ij ( X, u, Du ) D ij u + a ( X, u, Du ) = 0 in Ω ∩ Q ( Y ; R, R ′ ) , (9.1a) b ( X, u, Du ) = 0 on S Ω ∩ Q ( Y ; R, R ′ ) , (9.1b) and we introduce the numbers B ′∞ ; R,R ′ = lim sup | p |→∞ sup ( X,z ) ∈ (Ω ∩ Q ( Y ; R,R ′ )) × R | B ′ ( X, z, p ) | , (9.2a) C ∞ ; R,R ′ = lim sup | p |→∞ sup ( X,z ) ∈ (Ω ∩ Q ( Y ; R,R ′ )) × R C ( X, z, p ) . (9.2b) Our first local estimate has the following form.
Theorem 9.1.
Let Y ∈ S Ω , let R and R ′ be positive numbers and suppose P Ω ∩ Q ( Y ; R, R ′ ) ∈ C , ∗ with d < R / in Ω ∩ Q ( Y ; R, R ′ ) Suppose that the hypotheses ofTheorem 8.2 are all satisfied with Ω ∩ Q ( Y ; R, R ′ ) in place of Ω , S Ω ∩ Q ( Y ; R, R ′ ) , ˜ µ ∗ ( σ ) = µ σ θ for some positive constants µ and θ with θ < , and ˜ µ = ε x . Then | Du ( Y ) | ≤ M for some number M determined by the same quantities as in Theorem8.2 with Ω ′ ( Y ; R, R ′ ) in place of { d ≥ R / } and also by µ and θ .Proof. We note that the function ζ from the proof of Theorem 6.1 (now consideredas a function of X which is constant with respect to t ) satisfies b p · Dζ ≥ S Ω ∩ Q ( Y ; R, R ) with ζ ≥
0. The proof of Theorem 6.1 (with the usualparabolic modifications, as in the proof of Theorem 8.2) then gives the desiredestimate. (cid:3)
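The parabolic geometry used in this section, namely the distance d ∗ from Section 8 and the cylinders Q(Y; R, R′) above, can be sketched in code; the sample boundary below is hypothetical, and the demo only illustrates that d ∗ ≤ d, since the spatial distance restricts the infimum to s = t.

```python
import numpy as np

def parabolic_distance(X, boundary_pts):
    """d_*(X): infimum over boundary points Y = (y, s) with s <= t of
    max(|x - y|, sqrt(t - s)).  boundary_pts is an array of points (y..., s)."""
    x, t = X[:-1], X[-1]
    vals = [max(np.linalg.norm(x - Y[:-1]), np.sqrt(t - Y[-1]))
            for Y in boundary_pts if Y[-1] <= t]
    return min(vals)

def in_cylinder(X, Y, R, Rprime):
    """X in Q(Y; R, R') iff |x - y| < R and s - R' < t < s."""
    return (np.linalg.norm(X[:-1] - Y[:-1]) < R
            and Y[-1] - Rprime < X[-1] < Y[-1])

# hypothetical lateral boundary piece {x = (1, 0)} at a few times
boundary = np.array([[1.0, 0.0, s] for s in (0.0, 0.25, 0.5, 0.75, 1.0)])
X = np.array([0.5, 0.0, 0.5])
d_star = parabolic_distance(X, boundary)
d_spatial = min(np.linalg.norm(X[:-1] - Y[:-1])
                for Y in boundary if Y[-1] == X[-1])
print(d_star, d_spatial)                                   # d_* <= d
print(in_cylinder(X, np.array([1.0, 0.0, 1.0]), 1.0, 1.0))  # True
```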
Note that the assumption that d < R / ∩ Q ( Y ; R, R ′ ) is a restriction onthe number R ′ in a possibly convoluted way. In particular, if Ω is cylindrical, whichmeans that Ω = S × (0 , T ) for some fixed subset S of R n , then this assumption isautomatically satisfied as long as R < R .The local estimate corresponding to Theorem 8.3 is somewhat more complicatedin that we need to replace the function ζ by a slightly different function. Our modelis the function ϕ from the proof of Theorem 2.1 in [2]. This function is defined as ϕ ( X ) = 1 − | x − x | R − Kt for a fixed point x and a fixed number R while K is a positive constant at ourdisposal.Using this function, we first prove an interior gradient estimate which will beuseful in discussing our local gradient bound, which is an analog of Theorem 8.3.To simplify its statement, we define Ω ∗ to be the set of all ( X, z, p ) ∈ B ( x , R ) × (0 , T ) × R × R n with | p | ≥ M with x ∈ R n , R , T , and M R given. We also definethe operator δ by δ f ( x, z, p ) = f z ( x, z, p ) + f k ( x, z, p ) δ km p m | p | . Theorem 9.2.
Let u ∈ C , ( B ( x , R ) × (0 , T )) be a solution of (9.3) − u t + a ij ( X, u, Du ) D ij u + a ( X, u, Du ) = 0 in B ( x , R ) × (0 , T ) for some x ∈ R n and positive constants R and T . Suppose that there are positiveconstants µ , µ , µ , M , and q such that q > , µ < (cid:18) − q (cid:19) , (9.4a) δ a ij η ij ≤ µ ( a ij δ km η ik η jm ) / | p | , (9.4b) ( | p | a ij,k + 4 a ik δ jm p m ) η ij ξ k ≤ (cid:16) µ | p | − /q | ξ | + µ | p | ( a ij ξ i ξ j ) / (cid:17) × (cid:0) a ij δ km η ik η jm (cid:1) / (9.4c) on Ω ∗ for all matrices [ η ij ] and vectors ξ . Suppose also that Λ ≤ µ , (9.5a) δ a ≤ µ , (9.5b) | a p | ≤ µ (9.5c) on Ω ∗ . Then there is a constant T ∈ (0 , T ] , determined only by µ , µ , µ , q , and R , such that (9.6) sup B ( x ,R/ × (0 ,T ) | Du | ≤ { M , sup B ( x ,R ) ×{ } | Du |} . Proof.
First, we set w = | Du | , C = a ij δ km D ik uD jm u , and κ = a ijp D ij u + a p , andwe define the operator L by L h = − h t + a ij D ij h + κ i D i h. A simple calculation shows that(9.7) L w = 2 C − δ a ij D ij u + δ a ] w. With K a positive constant to be determined, we now set ϕ ( X ) = 1 − | x − x | R − Kt, and we write Γ for the subset of B ( x , R ) × (0 , T ) on which ϕ is positive. It thenfollows that w = ϕ q w satisfies the equation L w = ϕ q L w + qwϕ q − ( a ij D ij ϕ − ϕ t + a ij,k D ij uD k ϕ + a k D k ϕ ))+ q ( q − wϕ q − a ij D i ϕD j ϕ + 4 qϕ q − a ij δ km D ik uD m uD j ϕ. (Note that we have rewritten D i w as 2 δ km D ik uD m u .) We now use (9.7) rewritethis equation as L w = 2 ϕ q C + J wϕ q − + q ( q − ϕ q − wa ij D i ϕD j ϕ + J + J with J = q ( a ij D ij ϕ − ϕ t + a k D k ϕ ) − δ aϕ,J = qϕ q − ( | Du | a ij,k + 4 a ik δ jm D m u ) D ik uD k ϕ,J = − ϕ q wδ a ij D ij u. We first use (9.4c) to see that J ≥ − µ qϕ q − | Du | − /q | Dϕ | C − µ qϕ q − | Du | C ( a ij D i ϕD j ϕ ) / and (9.4b) to see that J ≥ − µ | Du | C . By invoking the Cauchy-Schwarz inequality, we conclude, for any positive constants θ and θ , that L w ≥ (2 − θ − θ ) C + J wϕ q − + J qwϕ q − a ij D i ϕD j ϕ, RADIENT ESTIMATES 47 with J = J − µ q | Dϕ | θ w /q − µ θ ϕ,J = q − − µ q θ . We first choose θ to make J = 0, that is, θ = qµ q − , noting that the second inequality of (9.4a) implies that θ ∈ (0 , θ = 1 − ( θ ) / L w ≥ J wϕ q − . From (9.5) and the explicit form of ϕ , we conclude that there is a constant C ,determined only by n , R , µ such that J ≥ K − C on Ω ∗∗ , which is the subset ofΩ ∗ on which w ≥ M . We finally choose K = C and use the maximum principle(Corollary 2.5 from [11]) to conclude that w must attain its maximum over Ω ∗∗ somewhere on P Ω ∗∗ and hence w ≤ max { M , sup B ( x ,R ) w } . The proof is completed by choosing T small enough that ϕ ≥ on B ( x , R/ × (0 , T ). 
□

Condition (9.4c) seems rather artificial but our examples will show its value. We also shall take advantage of a slight variant of this result, which is closer to Theorem 2.1 of [2]. In particular, we shall show that it is valid (with suitable constants) if a ij = g ij , which is the case studied by Ecker and Huisken in [2]; in particular, our theorem is a simple generalization of Theorem 2.1 from that work.

Theorem 9.3.
Let u ∈ C , (Ω ∩ Q ( Y ; R, R ′ )) be a solution of (9.1) with P (Ω ∩ Q ( Y ; R, R ′ )) ∈ C , ∗ and b satisfying the hypotheses of Theorem 8.1, with Ω ∩ Q ( Y ; R, R ′ ) in place of Ω and S Ω ∩ Q ( Y ; R, R ′ ) in place of S Ω for some ∗ -increasingfunction ε x and Λ ( s ) = µ s for some nonnegative constant µ . Suppose that d ≤ R in Ω ∩ Q ( Y ; R, R ′ ) and that Λ satisfies (8.5) on Γ R,R ′ ( M ) . Suppose alsothat there are a nonnegative function µ and a constant θ ∈ (0 , such that (8.10a) , (8.10b) , and (9.8) | a ijp η ij | ≤ µ v θ/ (cid:0) a ij δ km η ik η jm (cid:1) / hold on Γ R,R ′ ( M ) for all matrices [ η ij ] and (8.11) hold on Γ R,R ′ ( M ) . If also (8.12) and (9.9) (1 + ε x ( v ) v ) v θ Λ ≤ µ hold on Γ R,R ′ ( M ) and if (5.13) holds on Σ ( τ ) , then there are a constant R ,determined only by n , β , β , c , µ , µ , M , τ , Ω , and a constant M determinedalso by sup Ω ′ ( Y ; R,R ′ ) | Du | such that | Du ( Y ) | ≤ M in Ω provided R ′ ≤ R .Proof. With w as in Theorem 8.3 and L as in Theorem 9.2, we infer that thereare positive constants ˜ η and k such that L w ≥ (1 + k d [1 + √ w ε x ( √ w )])[˜ ηw E + S ] − k w . Next, we set ϕ ( X ) = ζ − Kt with K a constant at our disposal and w = ζ q w with q = 2 /θ . A straightforward calculation, taking (4.5) into account, yields L w = η q L w + C + S with C = qw ϕ q − ( a ij D ij ζ + a k D k ζ + K )+ q ( q − ϕ q − w a ij D i ζD j ζ + 4(1 + k d [1 + ε x ( √ w ) √ w ]) qϕ q − N N z a ij D i ζD j u + 2(1 + k d [1 + ε x ( √ w ) √ w ]) qϕ q − N a ij D i ζN j + 2(1 + k d [1 + ε x ( √ w ) √ w ]) qϕ q − a ij D i ζD j ( c km ) D k uD m u and S = 4(1 + k d [1 + ε x ( √ w ) √ w ]) qϕ q − a ij D i ζD jk uν k + qϕ q − w a ij,k D ij uD k ϕ . We now estimate the summands in C . As a preliminary step, we estimate onlyportions of each summand. From our estimates on Dζ and D ζ , we conclude that a ij D ij ζ + a k D k ζ ≥ − c for some c determined only by n , R , R , β , and µ . 
We also have a ij D i ζD j ζ ≥ N N z a ij D i ζD j u ≥ − cw Λ / E / by (3.4c), (3.6a), (3.15), the Cauchy-Schwarz inequality and our estimate on Dζ .Similarly, (3.4d), (3.6a), (3.15), the Cauchy-Schwarz inequality and our estimateon Dζ imply that N a ij D i ζN j ≥ − cw (1 + √ w ε x ( √ w ))Λ . Finally, a ij D i ζD j ( c km ) D k uD m u ≥ − c Λ w . It therefore follows from (8.12) that C ≥ qw ϕ q − [ K − c ]Similarly, the Cauchy-Schwarz inequality implies that a ij D i ζD jk uν k ≥ − ϕ S − cϕ w Λ , and, along with (9.8), that a ij,k D ij uD k ϕ ≥ − ϕ w S − cw θ/ ϕ . Hence, S ≥ − (1 + k d [1 + ε x ( √ w ) √ w ]) ϕ q S − cw θ/ ϕ ϕ q − w Λ(1 + ε x ( v ) v ) . From (9.9), we infer that L w ≥ qw ϕ q − ( K − c )wherever w ≥ M because w θ/ ϕ = w θ/ . Choosing K sufficiently large gives L w ≥ (cid:3) RADIENT ESTIMATES 49
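The cutoff ϕ(X) = 1 − |x − x₀|²/R² − Kt used in the preceding proofs stays bounded below on the smaller cylinder B(x₀, R/2) × (0, T₀) once T₀ ≤ 1/(4K): there |x − x₀|²/R² ≤ 1/4 and Kt ≤ 1/4, so ϕ ≥ 1/2. The exact constant is garbled in this copy, so the bound 1/2 below is the natural guess and the check is illustrative.

```python
import numpy as np

R, K = 2.0, 3.0
T0 = 1.0 / (4.0 * K)          # K * T0 = 1/4 keeps phi >= 1/2 below

def phi(x, t, x0=np.zeros(2)):
    # Ecker-Huisken style cutoff: 1 - |x - x0|^2 / R^2 - K t
    d2 = np.dot(x - x0, x - x0)
    return 1.0 - d2 / R ** 2 - K * t

# sample the smaller cylinder B(x0, R/2) x (0, T0)
rng = np.random.default_rng(0)
pts = rng.uniform(-R / 2, R / 2, size=(2000, 2))
pts = pts[np.linalg.norm(pts, axis=1) <= R / 2]
ts = rng.uniform(0.0, T0, size=len(pts))
vals = np.array([phi(x, t) for x, t in zip(pts, ts)])
print(vals.min())             # never drops below 1/2
```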
We note here that Serrin also looked in [19] at estimates that are local withrespect to x and t . The remark following Theorem 4 of that work points out thatonly one additional hypothesis is needed in this case. The additional hypothesis isjust there are positive constants µ and θ such that(9.10) E ≥ µ v θ for v ≥ M . (We refer also to Sections 11.4 and 11.5 of [11] for a slightly differentlook at this idea, but we mention that Section 11.4 of [11] is based very heavily on[19].) In our situation, this condition also leads to estimates which are local in x and x , and the estimates are obtained via the same modification of our proofs asin [19]. We include the results for completeness only. Theorem 9.4.
Suppose, in addition to the hypotheses of Theorem 9.1, that there are positive constants µ and θ such that (9.10) holds. Then | Du ( Y ) | ≤ M for some constant M determined by the same quantities as Theorem 9.1 except that, instead of depending on sup Ω ′ ( Y ; R,R ′ ) | Du | , it depends on µ and θ . Proof. We replace ζ ( x ) q in the proof of Theorem 9.1 by ζ ( x ) q (( t − s ) /R ′ ) q , with q = 2 /θ . We refer the interested reader to the proof of Theorem 4 in [19] or the proof of Theorem 11.3(a) in [11] for further information. □

In fact, there is also the possibility of studying estimates that are local with respect to t as well, but we shall not discuss them in detail because they are very similar to the global estimates.

10. Examples for parabolic problems
In this section, we jump immediately to a discussion of differential equations because, except for (8.2) and the connection between ε x and the differential equation, all the relevant details of the various boundary conditions have already been established in our elliptic examples.

10.1. Capillary-type differential equations.
We assume that a ij has the form (10.1) a ij = h ( v ) g ij for some positive function h . Different choices for h can be used in different theorems, but all of them require some restrictions on this function. We first compute a ij,k η ij = h ′ ( v ) g ij η ij ν k − h v − g ik δ jm η ij . A little calculation reveals that we cannot apply Theorem 8.2 because (5.3c) and (8.7) fail. To apply Theorem 8.3, we note from our computation of a ij,k η ij that δ a ij η ij ≤ ( h ′ ( v ) | p | / √ h ( v )) ( a ij δ km η ik η jm ) 1/2 + (2 √ h ( v ) /v ) ( a ij δ km η ik η jm ) 1/2 . Hence, we obtain (8.10a) if (10.2) | h ′ ( σ ) | = O ( √ h ( σ ) σ −2 ) since this inequality implies that h is bounded. In addition, it is satisfied for h ( σ ) = σ Q provided Q = 0 or Q ≤ −
2. Of course, (8.10b) holds with µ ≡ | a ijp η ij | ≤ h ′ ( v ) p h ( v ) + 2 p h ( v ) v ! ( a ij δ km η ik η jm ) / , so (8.10c) is also satisfied. We assume that a has the form(10.3) a ( X, z, p ) = vH ( X, z )for some C function H satisfying H z ≤
0. Then conditions (8.11) are satisfiedwith µ and µ determined only by upper bounds for | h | , | H | , and | H x | . Next, weassume that b has the form(10.4) b ( X, z, p ) = h ( v ) p · γ + ψ ( X, z )with h satisfying (7.2). Then the hypotheses of Theorem 8.1 are satisfied with β , β , and c determined only by the limit behavior in (7.2) and bounds for | ψ x | and | ψ z | . In addition, we have ε x ( σ ) = c/σ and Λ ( σ ) = cσ for a constant c determinedonly by Ω, h (1), and bounds on | ψ x | , and | ψ t | . Hence (9.9) holds as well. If weassume finally either that ψ z ≤ h ( σ ) is bounded away from zero, then(5.13) holds.If b has the form b ( X, z, p ) = h (˜ v ) β ij ( X, z ) p i γ j + ψ ( X, z )(10.5a)with ˜ v = (1 + β ij ( X, z ) p i p j ) / (10.5b)for some positive definite matrix [ β ij ] and some positive function h satisfying (7.2a)and (7.5), then we can apply the same analysis as in Example 7.2 to conclude thatthe conditions of (8.1) are satisfied and that (9.9) holds. If, in addition, ψ z ≤ h is bounded away from zero and if β ij is independent of z , then (5.13) is valid.Summarizing, we can apply Theorem 8.3 for a ij having the form (10.1), a havingthe form (10.3), and b having the form (10.4) with h satisfying (7.2) provided • | H x | , | H z | , | ψ x | , | ψ z | , and | ψ t | are uniformly bounded, • H z ≤ • h satisfies (10.2), • ψ z ≤ h is bounded away from zero.If, instead b has the form (10.5) with h satisfying (7.2a) and (7.5) and if β ij isindependent of z , then we can apply Theorem 8.3 under exactly the same hypothe-ses. In particular, this theorem applies when h ≡ h ( σ ) = σ Q with Q ≤ −
2. Moreover, if
Q < −
2, then we can obtain a local gradient bound viaTheorem 9.3.If b has the form(10.6) b ( X, z, p ) = v q ( X ) p · γ + ψ ( X, z )for some C function q with q ≥ −
1, then we again have the conditions of Theorem8.1 satisfied but with ε x ( σ ) = K ln σσ RADIENT ESTIMATES 51 for some positive constant K ; however, the condition Λ ( σ ) = µ σ requires q to beindependent of t . Hence, if a ij has the form (10.1), a has the form (10.3), and if b has the form (10.6), then we can apply Theorem 8.3 provided • | H x | , | H z | , | ψ x | , | ψ z | , and | ψ t | are uniformly bounded, • H z ≤ • h satisfies (10.2), • | h ( σ ) | = O (1 / ln σ ), • ψ z ≤ q ≥ • q is independent of t .In particular, this example provides a gradient estimate if h ( v ) = v Q with Q ≤ − h ≡ b has the form(10.7) b ( X, z, p ) = h ( v ) β ( X, p ) · p + ψ ( X, z )with β a unit vector such that β · γ ≥ β ∗ for some constant β ∗ ∈ (0 ,
1) and h a positive function satisfying (7.10b), then our analysis yields a gradient boundassuming that • | H x | , | H z | , | ψ x | , | ψ z | , and | ψ t | are uniformly bounded, • H z ≤ • h satisfies (10.2).We turn to the conditions on a ij in Theorem 9.2. First, our expression for a ij,k implies that ( | p | a ij,k + 4 a ij δ jm ) η ij ξ k = h ′ ( v ) | p | g ij η ij ν · ξ + h ( v ) H , with H = | p | (cid:18) − g ik δ jm η ij p m ξ k v + 4 g ik δ jm η ij p m ξ k (cid:19) = 4 v − | p | v g ik δ jm η ij p m ξ k = (cid:18) − v (cid:19) g ik δ jm η ij p m ξ k . Since δ a ij ≡
0, we infer (9.4) with µ = 2 and suitable µ provided | h ′ ( σ ) | = O ( p h ( σ ) σ − − /q )for some q >
2. It’s easy to see that h ( σ ) = σ Q satisfies these hypotheses for Q = 0if we take q > Q <
0, these hypotheses are satisfied if we take q > max { , − /Q } . Moreover, (9.5a) is satisfied for h ( σ ) = σ Q for Q ≤
0. Hence,Theorem 9.2 applies under the conditions given about which satisfy the hypothesesof Theorem 8.3.In particular, we can obtain a gradient bound for the problem − u t + g ij D ij u + vH ( X, u ) = 0 in Ω ,γ · Du + ψ ( X, u ) = 0 on S Ω ,u = u on B Ωprovided | H x | , | H z | , | ψ x | , | ψ z | , | ψ t | , and | Du | are uniformly bounded and H z ≤ previously pointed out, we improve their results by allowing nonzero boundarydata but we need to assume that H is differentiable.10.2. False mean curvature equations.
We now consider $a^{ij}$ having the form (7.15) with $\tau \ge 0$ and $a^{ij}_*$ satisfying slightly more general conditions than in Example 7.6. We assume that there are a positive function $\Lambda_*$ and a positive constant $\mu_*$ such that (7.16) holds for all $(x,z,p) \in \Omega\times\mathbb R\times\mathbb R^n$ and all $\xi\in\mathbb R^n$, and we assume that (7.17a) holds. In place of (7.17b), we assume that

(10.8) $|p|\left|\dfrac{\partial a^{ij}_*}{\partial z}\right| + \left|\dfrac{\partial a^{ij}_*}{\partial x}\right| = o\big(|p|\,(\tau\Lambda_*)^{1/2}\big)$.

We also assume that $\tau$ satisfies (7.18) and

(10.9) $\Lambda_* = O(\tau|p|^2)$,

while $a$ satisfies (7.20). Then (5.3c), (5.4), and (8.6b) were verified in Example 7.6 with $\tilde\mu_*(s) = K/s$ and a suitable $\tilde\mu_\varepsilon$ satisfying (5.2b). To check (8.8), we first use (7.17a) to conclude that there is a constant $c$ such that $|\delta a^{ij}_*| \le c\,(\tau\Lambda_*)^{1/2}$. Then, for any matrix $[\eta_{ij}]$ and this constant $c$, we have
$$\delta a^{ij}_*\,\eta_{ij} \le nc\,\tau^{1/2}|p|\Big(\Lambda_*\sum_{i,j=1}^n |\eta_{ij}|^2\Big)^{1/2} \le nc\,\mu_*|p|\,(\tau|p|^2)^{1/2}\big(a^{ij}_*\delta^{km}\eta_{ik}\eta_{jm}\big)^{1/2} \le nc\,\mu_*|p|\,(\tau|p|^2)^{1/2}\big(a^{ij}\delta^{km}\eta_{ik}\eta_{jm}\big)^{1/2}.$$
Moreover, direct computation shows that
$$\delta(\tau p_i p_j) = (\delta\tau + 2\tau)\, p_i p_j,$$
so similar reasoning, using (7.18a), shows that
$$\delta(\tau p_i p_j)\,\eta_{ij} \le c\,|p|\, E^{1/2}\big(a^{ij}\delta^{km}\eta_{ik}\eta_{jm}\big)^{1/2},$$
and therefore (8.8) is also satisfied. Since $\lim_{\sigma\to\infty}\varepsilon_x(\sigma) = 0$ in all our examples, (5.6) is also satisfied.

To verify (5.5) and (5.12), we assume more about $b$. If $b$ has the form (7.1) with $h$ satisfying (7.2) for some nonnegative constant $h_0$, we consider two cases. First, if
$$\lim_{\sigma\to\infty} \sigma h(\sigma) < \infty,$$
we assume (7.13) and $\psi_z = 0$ (although this condition can be relaxed slightly to assuming that $\psi_z$ is nonpositive and sufficiently small). Then, as already shown, (5.5) and (5.12) hold. On the other hand, if
$$\lim_{\sigma\to\infty} \sigma h(\sigma) = 0,$$
then we assume (7.14). Again, we have already shown that (5.5) holds in this case and that (5.12) holds.
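The identity for $\delta(\tau p_i p_j)$ used in this example is a one-line computation. Assuming, as the notation suggests, that $\delta$ denotes the radial derivative $p_k\,\partial/\partial p_k$ in $p$ (an assumption made here for illustration), a sketch:

```latex
% With \delta = p_k \, \partial/\partial p_k acting on \tau(x,z,p)\, p_i p_j:
\begin{aligned}
\delta(\tau p_i p_j)
  &= p_k \frac{\partial}{\partial p_k}\big(\tau\, p_i p_j\big)\\
  &= (\delta\tau)\, p_i p_j
     + \tau\big(p_k \delta_{ik}\, p_j + p_i\, p_k \delta_{jk}\big)\\
  &= (\delta\tau + 2\tau)\, p_i p_j .
\end{aligned}
```

The factor $2\tau$ is just the homogeneity count: $p_i p_j$ is homogeneous of degree $2$ in $p$.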
If $b$ has the form (10.5) with $h$ satisfying (7.2) and (7.5), then we obtain a gradient bound under the same additional restrictions on $h$ and $\psi_z$ as for the previous form for $b$. If $b$ has the form (10.6), then we assume that $\inf q > -1$. From this assumption, we immediately infer (7.7) and hence (5.5) as well as (5.12). If $b$ has the form (10.7) with $h$ satisfying (7.10b), we also infer a global gradient estimate. Furthermore, if we assume also that there is a positive constant $\theta$ such that $\tau \ge v^{\theta-2}$, then we obtain a gradient estimate which is local in $x$ and $t$.

Let us now assume that $a^{ij}$ has the form (7.15) with $a^{ij}_*$ and $\tau$ satisfying a different set of conditions. First, we suppose that there are positive functions $\lambda_*$ and $\Lambda_*$ such that
$$\lambda_*|\xi|^2 \le a^{ij}_*\xi_i\xi_j \le \Lambda_*|\xi|^2$$
for all $\xi\in\mathbb R^n$. We also assume that
$$\Lambda_* = O(v^{-1}),$$
and that
$$|p|\left|\frac{\partial a^{ij}_*}{\partial p}\right| + |p|\left|\frac{\partial a^{ij}_*}{\partial z}\right| + \left|\frac{\partial a^{ij}_*}{\partial x}\right| = O\big(\sqrt{\lambda_*}\big),$$
while $\tau$ satisfies
$$\tau = O(v^{-1}), \qquad |p|\,|\tau_p| + |p|\,|\tau_z| + |\tau_x| = O(\sqrt{\tau}).$$
Finally, we assume that $a$ satisfies (8.11a) and (8.11b). This will be the case if there is a $C^1$ function $a_0$ such that $a(x,z,p) = v\,a_0(x,z)$. By imitating our previous arguments, we see that all the hypotheses of Theorem 8.3 are satisfied as long as $b$ satisfies the hypotheses of Theorem 8.1 with $\varepsilon_x$ constant. It follows that we have a gradient estimate for this class of parabolic differential equations and for all of the boundary conditions in our examples. In particular, we can get a gradient estimate for the problem
$$-u_t + v^{-1}\big(\delta^{ij} + D_iu\,D_ju\big)D_{ij}u = 0 \quad\text{in } \Omega,$$
$$\frac{Du\cdot\gamma}{v} + \psi(x,z) = 0 \quad\text{on } S\Omega,$$
$$u = u_0 \quad\text{on } B\Omega$$
as long as $\psi_z$, $\psi_x$, and $Du_0$ are bounded.

10.3. Uniformly parabolic equations.
Our final example is the parabolic analog of Example 7.7. We assume that there is a positive constant $\mu_*$ such that (7.21) is satisfied. In place of (7.22), we assume that

(10.10a) $|p|\,|a^{ij}_p| = O(\Lambda)$,
(10.10b) $|p|\,|a^{ij}_z| + |a^{ij}_x| = o(\Lambda)$,
(10.10c) $|a| + |p|\,|a_p| = O(\Lambda|p|)$,
(10.10d) $|a_x| = o(\Lambda|p|)$,
(10.10e) $a_z \le o(\Lambda|p|)$.

Then conditions (5.2b), (5.4), (5.6), (5.3c), (8.6b), and (8.8) are all satisfied with $\tilde\mu_\varepsilon$ determined by $\varepsilon$ and the limit behavior in (10.10b), (10.10d), and (10.10e), and $\tilde\mu_*(\sigma) = K/\sigma$ for a sufficiently large constant $K$. We therefore obtain a gradient bound for all of our boundary conditions under the restrictions mentioned in the previous example.

There is one special case when we can replace (10.10) by (7.22). If we assume (7.21) holds with $\Lambda$ bounded from above and below by positive constants and if $|a| = O(|p|)$ (which is the same as $|a| = O(\Lambda|p|)$ in this situation), then a modulus of continuity is known. Theorem 3 of [18] gives a slightly weaker result, but standard arguments can be used to obtain this result from that theorem. (See also Lemma 13.14 of [11], which states the result with a very minimal proof.) It follows that, if $a^{ij}$ and $a$ also satisfy (7.21), then we obtain a gradient estimate for all of our examples of boundary conditions. This result was first proved by Ural′tseva in [21] as Theorem 1 (but assuming that $b$ is twice differentiable with second derivatives satisfying certain structure conditions) by very different means. It was also proved as Theorem 13.13 in [11] but, just as in the elliptic case, the proof there takes advantage of the exact form of the interior gradient estimate.

If we assume further that $\Lambda = O(|p|^{-1})$, then the discussion of the previous example (with $\tau \equiv 0$) gives a gradient estimate under the hypotheses (7.21),
$$|p|\,|a^{ij}_p| + |p|\,|a^{ij}_z| + |a^{ij}_x| = O(\sqrt{\Lambda}),$$
$$|p|\,|a_p| + |p|\,|a| + |a_x| = O(|p|), \qquad a_z \le O(|p|),$$
and any of our examples of boundary conditions.

References

[1] G. C. Dong,
Initial and nonlinear oblique boundary value problems for fully nonlinear parabolic equations, J. Partial Differential Equations Ser. A (1988), no. 2, 12–42.
[2] K. Ecker and G. Huisken, Interior estimates for hypersurfaces moving by mean curvature, Invent. Math. (1991), 547–569.
[3] D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, Springer-Verlag, Berlin, 2001; reprint of the 1998 edition.
[4] G. Huisken, Non-parametric mean curvature evolution with boundary conditions, J. Differential Equations (1989), 369–385.
[5] G. M. Lieberman, The conormal derivative problem for elliptic equations of variational type, J. Differential Equations (1982), 218–257.
[6] ——, Regularized distance and its applications, Pac. J. Math. (1985), 329–352.
[7] ——, Gradient estimates for solutions of nonuniformly elliptic oblique derivative problems, Nonl. Anal. (1987), 49–61.
[8] ——, Gradient estimates for capillary-type problems via the maximum principle, Comm. Partial Differential Equations (1988), 33–59.
[9] ——, The conormal derivative problem for non-uniformly parabolic equations, Indiana U. Math. J. (1988), 23–72.
[10] ——, The natural generalization of the natural conditions of Ladyzhenskaya and Ural′tseva for elliptic equations, Comm. Partial Differential Equations (1991), 311–361.
[11] ——, Second Order Parabolic Differential Equations, World Scientific Publishing Co., Inc., River Edge, NJ, 1996.
[12] ——, Gradient estimates for singular fully nonlinear elliptic equations, Nonlin. Anal. (2015), 382–397.
[13] ——, Oblique Derivative Problems for Elliptic Equations, World Scientific, Hackensack, NJ, 2013.
[14] ——, Gradient estimates for capillary-type problems via the maximum principle, a second look, to appear.
[15] G. M. Lieberman and N. S. Trudinger, Nonlinear oblique boundary value problems for nonlinear elliptic equations, Trans. Amer. Math. Soc. (1986), 509–546.
[16] X. N. Ma and J. J. Xu, Gradient estimates of mean curvature equations with Neumann boundary conditions, Adv. Math. (2016), 1010–1036.
[17] M. Mizuno and K. Takasao, Gradient estimates for mean curvature flow with Neumann boundary conditions, Nonlinear Diff. Eqn. Appl. (2017), no. 4, Art. 32, 24 pp.
[18] A. I. Nazarov, Hölder estimates for bounded solutions of problems with an oblique derivative for parabolic equations of nondivergence structure, Nonlinear equations and variational inequalities. Linear operators and spectral theory (Russian), Probl. Mat. Anal., vol. 11, Leningrad. Univ., Leningrad, 1990, pp. 37–46, 250 (Russian); translated in J. Soviet Math. (1993), no. 6, 1247–1252.
[19] J. Serrin, Gradient estimates for solutions of nonlinear elliptic and parabolic equations, Contributions to Nonlinear Functional Analysis, 1971, pp. 565–601.
[20] N. N. Ural′ceva, Nonlinear boundary value problems for equations of minimal-surface type, Trudy Mat. Inst. Steklov. (1971), 217–226 (Russian); English transl., Proc. Steklov Math. Inst. (1971), 227–237.
[21] N. N. Ural′tseva, Gradient estimates for solutions of nonlinear parabolic oblique boundary problem, Geometry and nonlinear partial differential equations (Fayetteville, AR, 1990), Contemp. Math., vol. 127, Amer. Math. Soc., Providence, RI, 1992, pp. 119–130.