Self-similar solutions for the LSW model with encounters
M. Herrmann∗, B. Niethammer†, and J. J. L. Velázquez‡

November 1, 2018

Abstract
The LSW model with encounters has been suggested by Lifshitz and Slyozov as a regularization of their classical mean-field model for domain coarsening, in order to obtain universal self-similar long-time behavior. We rigorously establish that an exponentially decaying self-similar solution to this model exists, and show that this solution is isolated in a certain function space. Our proof relies on setting up a suitable fixed-point problem in an appropriate function space and on careful asymptotic estimates for the solutions of a corresponding homogeneous problem.

keywords: coarsening with encounters, self-similar solutions, kinetics of phase transitions

MSC (2000): 45K05, 82C22, 35Q72
The classical mean-field theory by Lifshitz and Slyozov [6] and Wagner [16] describes domain coarsening of a dilute system of particles which interact by diffusional mass transfer to reduce their total interfacial area. It is based on the assumption that particles interact only via a common mean field θ = θ(t), which yields a nonlocal transport equation for the number density f = f(v,t) of particles with volume v. It is given by

\[ \frac{\partial f}{\partial t} + \frac{\partial}{\partial v}\Big(\big(\theta(t)\,v^{1/3}-1\big)f\Big) = 0, \qquad v>0,\ t>0, \tag{1} \]

where θ(t) is determined by the constraint that the total volume of the particles is preserved in time, i.e.

\[ \int_0^\infty v\,f(v,t)\,dv = \rho. \tag{2} \]

This implies that

\[ \theta(t) = \frac{1}{\langle v^{1/3}\rangle} = \frac{\int_0^\infty f(v,t)\,dv}{\int_0^\infty v^{1/3}\,f(v,t)\,dv}, \tag{3} \]

where ⟨v^k⟩ := m_k := ∫_0^∞ v^k f(v,t) dv for k > 0.

It is observed in experiments that coarsening systems display statistical self-similarity over long times, that is, the number density converges towards a unique self-similar form. The mean-field model (1)-(2) indeed has a scale invariance, which suggests that typical particle volumes grow proportionally to t. Going over to self-similar variables one easily establishes that there exists a whole one-parameter family of self-similar solutions. All members of this family have compact support and can be characterized by their behavior near the end of their support: one is infinitely smooth, the others behave like a power law. It has been established in [12] (cf. also [3]) that a solution converges to the self-similar solution with power p < ∞ if and only if the data are regularly varying with power p at the end of their support. The domain of attraction of the infinitely smooth solution is characterized by a more involved condition [13], which we do not state here since it is not relevant for the forthcoming analysis.

∗ Institut für Mathematik, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany
† Mathematical Institute, University of Oxford, 24-29 St. Giles, Oxford, OX1 3LB, England
‡ Departamento de Matemática Aplicada, Facultad de Matemáticas, Universidad Complutense, Madrid 28040, Spain

This weak selection of self-similar asymptotic states reflects a degeneracy in the mean-field model which is generally believed to be due to the fact that the model is valid only in the regime of vanishing volume fraction of particles [10]. Some effort has been made to derive corrections to the classical mean-field model in order to reflect the effect of positive volume fraction, such as screening-induced fluctuations [7, 14, 11], or to take nucleation into account [2, 8, 15].
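The conservation mechanism behind (3) can be checked directly: with θ = m₀/m_{1/3} the total volume flux of the transport term vanishes identically. A minimal numerical sketch (the sample density and the grid are illustrative choices, not taken from the paper):

```python
import numpy as np

# Check of the mean-field relation (3): with theta = m0 / m_{1/3},
#   d/dt \int v f dv = \int (theta v^{1/3} - 1) f dv = theta*m_{1/3} - m0 = 0,
# so the total particle volume is conserved. The density f below is an
# arbitrary sample profile, not the LSW self-similar solution.
dv = 1e-3
v = np.arange(dv, 20.0, dv)
f = v * np.exp(-v)                       # sample number density f(v)

m0 = np.sum(f) * dv                      # zeroth moment
m13 = np.sum(v ** (1.0 / 3.0) * f) * dv  # 1/3-moment
theta = m0 / m13                         # mean field, Eq. (3)

flux = theta * m13 - m0                  # total volume flux of the transport term
print(abs(flux) < 1e-12)                 # True: volume is conserved
```

The choice of θ cancels the flux by construction, which is exactly why (2) forces (3).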
A different approach was already suggested by Lifshitz and Slyozov [6], which is to take the occasional merging of particles ("encounters") into account. This leads to the equation

\[ \frac{\partial f}{\partial t} + \frac{\partial}{\partial v}\Big(\big(\theta(t)\,v^{1/3}-1\big)f\Big) = J[f], \tag{4} \]

where J[f] is a typical coagulation term, given by

\[ J[f] = \frac{1}{2}\int_0^v w(v-v',v')\,f(v-v',t)\,f(v',t)\,dv' - \int_0^\infty w(v,v')\,f(v',t)\,f(v,t)\,dv', \]

with a suitable rate kernel w specified below. Volume conservation (2) should still be valid, and since

\[ \int_0^\infty v\,J[f](v,t)\,dv = 0, \]

this requires that θ is again given by (3).

It remains to specify the rate kernel w(v,v'), which Lifshitz and Slyozov assume to be dimensionless with respect to rescalings of v, v' and to be additive for large values of v and v'. For simplicity we assume – just as in [6] – that

\[ w(v,v') = \frac{v}{\langle v\rangle} + \frac{v'}{\langle v\rangle} = \frac{v+v'}{\langle v\rangle}, \tag{5} \]

that is, we obtain a coagulation term with the so-called "additive kernel". Well-posedness of (4) with this kernel has been established in [5].

As explained before, the model (4), (2) is relevant in the regime where the volume fraction covered by the particles is small, and hence we assume that

\[ \int_0^\infty v\,f(v,t)\,dv = \varepsilon \ll 1. \tag{6} \]

The system (4)-(6) can now be written in the self-similar variables

\[ f(v,t) = \frac{\varepsilon}{t^{2}}\,\Phi\Big(\frac{v}{t},\log(t)\Big), \qquad z=\frac{v}{t}, \qquad \tau=\log(t), \qquad \theta(t)=\lambda(\tau)\,t^{-1/3}, \]

as

\[ \Phi_\tau - z\,\Phi_z - 2\Phi + \frac{\partial}{\partial z}\Big(\big(\lambda(\tau)z^{1/3}-1\big)\Phi\Big) = \varepsilon J[\Phi](z,\tau), \tag{7} \]

\[ \int_0^\infty z\,\Phi(z,\tau)\,dz = 1, \tag{8} \]

where

\[ J[\Phi](z,\tau) = \frac{z}{2}\int_0^z \Phi(z-z',\tau)\,\Phi(z',\tau)\,dz' - \Phi(z,\tau)\int_0^\infty (z+z')\,\Phi(z',\tau)\,dz', \qquad \lambda(\tau) = \frac{\int_0^\infty \Phi(z,\tau)\,dz}{\int_0^\infty z^{1/3}\,\Phi(z,\tau)\,dz}. \]

Our goal in this paper is to study stationary solutions of (7)-(8) in the regime of small ε. We notice first that the convolution term on the right-hand side of (7) enforces that any solution must have infinite support. We also expect that for small ε > 0 the solutions are close to a solution of the LSW model with ε = 0. It can be verified by a stability argument that the only solution of the LSW model for which this is possible is the smooth one, which has the largest support. Indeed, we obtain as our main result that for any given sufficiently small ε > 0 there exists a stationary solution to (7)-(8) which decays exponentially, together with a corresponding value of the mean field λ. Let us recall, for comparison, that the pure coagulation equation with additive kernel possesses one self-similar solution with exponential decay, while every other self-similar solution decays only algebraically. The domain of attraction of these self-similar solutions has been completely characterized in [9], and can also be related to the regular variation of certain moments of the initial data. However, the situation here is somewhat different: while the behavior for large volumes v is determined by the coagulation term, the tail introduced by the coagulation term is very small, and the equation behaves – at least in the regime in which we are working – as the LSW model with a small perturbation. Our analysis reflects this fact, since we also treat the coagulation term as a perturbation.

In this section we set up a suitable fixed point problem for the construction of stationary solutions to (7)-(8). These solve

\[ -z\,\frac{\partial\Phi}{\partial z} - 2\Phi + \frac{\partial}{\partial z}\Big(\big(\lambda z^{1/3}-1\big)\Phi\Big) = \varepsilon J[\Phi](z), \qquad \int_0^\infty z\,\Phi(z)\,dz = 1, \qquad \Phi(z)\ge 0, \tag{9} \]

with z > 0 and

\[ \lambda = \frac{\int_0^\infty \Phi(z)\,dz}{\int_0^\infty z^{1/3}\,\Phi(z)\,dz}. \]

In the LSW limit ε = 0 there exists a family of solutions with compact support, which can be parametrized by the mean field λ ∈ [3(1/2)^{2/3}, ∞). The self-similar solution with the largest support, which is [0, 1/2], is given by

\[ \Phi_{LSW}(z) = C\,\exp\Big(-\int_0^z \frac{2-\frac{\lambda_{LSW}}{3}\,\xi^{-2/3}}{\xi+1-\lambda_{LSW}\,\xi^{1/3}}\,d\xi\Big) \ \text{ for } z\in[0,\tfrac12], \qquad \Phi_{LSW}(z)=0 \ \text{ for } z>\tfrac12, \]

with

\[ \lambda_{LSW} := 3\Big(\frac12\Big)^{2/3}, \]

where C is a normalization constant chosen such that ∫_0^∞ z Φ_LSW dz = 1. We denote this solution from now on by Φ_LSW. As discussed above, there are several physical and mathematical arguments supporting the fact that such a solution is the only stable one under perturbations of the model. The main goal of this paper is to show the following result.
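The profile Φ_LSW can be evaluated numerically. In the sketch below (grid sizes and the cutoff near z = 1/2 are illustrative choices, not taken from the paper), the substitution z = u³ removes the z^{−2/3} singularity of the exponent at the origin; the computation confirms that λ_LSW produces a double root of ξ + 1 − λ_LSW ξ^{1/3} at ξ = 1/2, and that the resulting profile rises up to z = 2^{−5/2}, where the numerator 2 − (λ_LSW/3)z^{−2/3} changes sign, and then vanishes to all orders at z = 1/2:

```python
import numpy as np

# Numerical sketch of the smooth LSW profile Phi_LSW on [0, 1/2].
lam = 3.0 * 2.0 ** (-2.0 / 3.0)          # lambda_LSW = 3 * (1/2)^{2/3}

# the denominator z + 1 - lam * z^{1/3} has a double root at z = 1/2
assert abs(1.5 - lam * 0.5 ** (1.0 / 3.0)) < 1e-12

# substitution z = u^3: exponent E(z) = int_0^{z^{1/3}} (6u^2 - lam)/(1 + u^3 - lam*u) du
u = np.linspace(0.0, 0.5 ** (1.0 / 3.0) - 1e-4, 100001)
g = (6.0 * u ** 2 - lam) / (1.0 + u ** 3 - lam * u)
E = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(u))))
z = u ** 3
phi = np.exp(-E)                          # un-normalized Phi_LSW(z)

# normalization constant C with int_0^{1/2} z * Phi(z) dz = 1
w = z * phi * 3.0 * u ** 2                # integrand rewritten in the u variable
C = 1.0 / np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(u))

# maximum at z = 2^{-5/2}; the profile is flat to all orders at z = 1/2
zmax = z[np.argmax(phi)]
print(abs(zmax - 2.0 ** -2.5) < 1e-3, phi[-1] < 1e-100, C > 0)   # True True True
```

The double-root structure at z = 1/2 is what makes the exponent integral diverge there, so the profile is infinitely smooth at the end of its support.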
Theorem 2.1.
For any sufficiently small λ_LSW − λ > 0 there exists a choice of ε such that there exists an exponentially decaying solution to (9).

The key idea for proving this theorem is to reduce the problem to a standard fixed point problem, assuming that the solution of (9) is a small perturbation of Φ_LSW. Formal asymptotics suggest that Φ converges to Φ_LSW as ε → 0. Notice, however, that Φ_LSW vanishes for z ≥ 1/2. Therefore, in order to approximate Φ for z ≥ 1/2, Lifshitz and Slyozov approximate (9) by means of

\[ -z\,\frac{\partial\Phi}{\partial z} - 2\Phi + \frac{\partial}{\partial z}\Big(\big(\lambda z^{1/3}-1\big)\Phi\Big) = \varepsilon J[\Phi_{LSW}](z). \tag{10} \]

There exists a unique solution of (10) which vanishes for z ≥
1. Such a function is of order ε in the interval (1/2, 1). However, there is a boundary layer in the region z ≈ 1/2 for λ close to λ_LSW, where the function Φ experiences an abrupt change. Adjusting the value of λ in a suitable manner, it is possible to obtain a Φ which is of order one for z < 1/2. A careful analysis shows that λ must be chosen as

\[ \lambda_{LSW}-\lambda \sim \frac{3\pi^{2}\,(2)^{-2/3}}{\big(\ln(1/\varepsilon)\big)^{2}} \tag{11} \]

as ε →
0. This scaling law was already derived in [6], and it is in accordance with our results. Notice that the smallness of Φ for z ≥ 1/2 implies that most of the volume of the particles is contained in the region z < 1/2. In order to approximate Φ for larger z one can iterate this argument: since Φ is of order ε on [1/2, 1], the term εJ[Φ] becomes of order ε² for z ∈ [1, 3/2], and the contribution of this region can be expected to be negligible compared to that of the interval [1/2, 1], so that εJ[Φ] for z > 1 can be ignored. This procedure can be iterated to obtain in the limit a solution to (7) which decays exponentially fast at infinity. What remains to be established is that such a procedure indeed leads to a converging sequence of solutions. A rigorous proof could be based on such a procedure; we proceed, however, in a slightly different manner.

Before we continue we briefly comment on (11), which gives the deviation of the mean field from the value of the LSW model. This quantity is of particular interest, since its inverse is a measure for the coarsening rate, which is one of the key quantities in the study of coarsening systems. Equation (11) predicts a much larger deviation than the ones obtained from other corrections to the LSW model. For example, one model which takes the effect of fluctuations into account [14] predicts a deviation of order O(ε^{1/2}). The large deviation predicted by (11) can be attributed to the fact that all particles contribute to the coagulation term, and it suggests that encounters are more relevant in the self-similar regime than fluctuations. We refer to [11] for a more extensive discussion of these issues.

Derivation of a fixed point problem
We now transform (9) with the choice of kernel (5) into a fixed point problem. To this end we write our equation as

\[ -z\,\frac{\partial\Phi}{\partial z} - 2\Phi + \frac{\partial}{\partial z}\Big(\big(\lambda z^{1/3}-1\big)\Phi\Big) = \varepsilon\Big(\frac{z}{2}\int_0^z \Phi(z-z')\,\Phi(z')\,dz' - \Phi(z) - m_0\,z\,\Phi(z)\Big) \tag{12} \]

with

\[ 1 = \int_0^\infty z\,\Phi(z)\,dz, \qquad m_0 = \int_0^\infty \Phi(z)\,dz. \tag{13} \]

It would be natural to proceed as follows: for each given value of ε we select m_0 and λ in order to satisfy (13). However, it turns out to be more convenient to fix λ and then select ε and m_0 such that (13) is satisfied. The reason is that our argument requires us to differentiate the function ψ defined below with respect to either λ or ε, and it is easier to control the derivatives with respect to ε. In the following we always consider λ < λ_LSW and write

\[ \delta := \lambda_{LSW}-\lambda > 0, \qquad \tilde\varepsilon := \varepsilon\,m_0 > 0. \]

An important role in the fixed point argument is played by the functions z ↦ ψ(z; ε, ε̃, δ), which are defined as solutions to the following homogeneous problem

\[ -\big(z+1-(\lambda_{LSW}-\delta)z^{1/3}\big)\,\psi' = \Big(2-\frac{\lambda_{LSW}-\delta}{3}\,z^{-2/3}-\tilde\varepsilon z-\varepsilon\Big)\,\psi. \tag{14} \]

Each of these functions ψ is uniquely determined up to a constant to be fixed later. Notice that for δ > 0 the function ψ(z; ε, ε̃, δ) is defined for all z ≥ 0. If δ = 0, however, the function ψ(z; ε, ε̃, 0) is defined only in the set z > 1/2, and it becomes singular as z → (1/2)+. Therefore the function ψ changes abruptly in a neighborhood of z = 1/2 for λ close to λ_LSW. More precisely, if ψ takes values of order one for z < 1/2, then it is of order exp(−c/√δ) for z > 1/2, and this transition layer causes most of the technical difficulties.

We can now transform (12) into a fixed point problem for an integral operator. Indeed, using variation of constants, and assuming that Φ(z) decreases sufficiently fast to provide the integrability required in the different formulas, we obtain that each solution to (12) satisfies

\[ \Phi(z) = \varepsilon\int_z^\infty \frac{\xi}{2\big(\xi+1-(\lambda_{LSW}-\delta)\xi^{1/3}\big)}\,\frac{\psi(z;\varepsilon,\tilde\varepsilon,\delta)}{\psi(\xi;\varepsilon,\tilde\varepsilon,\delta)}\,(\Phi*\Phi)(\xi)\,d\xi =: I[\Phi;\varepsilon,\tilde\varepsilon,\delta](z), \tag{15} \]

where the symmetric convolution operator ∗ is defined by

\[ (\Phi_1*\Phi_2)(z) = \int_0^z \Phi_1(z-y)\,\Phi_2(y)\,dy = \int_0^z \Phi_2(z-y)\,\Phi_1(y)\,dy. \tag{16} \]

However, the values of the parameters (ε, ε̃) cannot be chosen arbitrarily, but must be determined by the compatibility conditions

\[ \varepsilon\int_0^\infty z\,I[\Phi;\varepsilon,\tilde\varepsilon,\delta]\,dz = \varepsilon, \qquad \varepsilon\int_0^\infty I[\Phi;\varepsilon,\tilde\varepsilon,\delta]\,dz = \tilde\varepsilon. \tag{17} \]

Notice that the operator I[Φ; ε, ε̃, δ] maps the cone of nonnegative functions Φ into itself, and this implies that each solution to (15) is nonnegative.

Main results and outline of the proofs
We introduce the following function space Z of exponentially decaying functions. For arbitrary but fixed constants β₁ > 0, β₂ > 0 we set

\[ Z := \{\Phi\,:\,\|\Phi\|_Z<\infty\} \quad\text{with}\quad \|\Phi\|_Z := \lceil\Phi\rceil_Z + \lfloor\Phi\rfloor_Z, \]
\[ \lceil\Phi\rceil_Z := \sup_{0\le z\le 1}|\Phi(z)|, \qquad \lfloor\Phi\rfloor_Z := \sup_{z\ge 1}\big|\Phi(z)\,\exp(\beta_1 z)\,z^{\beta_2}\big|. \tag{18} \]

Below in Section 3.1 we prove that Φ ∈ Z implies Φ∗Φ ∈ Z. The particular choice of the parameters β₁ and β₂ affects our smallness assumptions for the parameter δ: the larger β₁ and β₂ are, the smaller δ must be chosen, and the faster the solution will decay. We come back to this issue at the end of the paper, cf. Remark 3.25.

Our (local) existence and uniqueness results rely on the following smallness assumptions concerning δ, ε, ε̃ and Φ.

Assumption 2.2.
Suppose that

1. δ is sufficiently small,
2. both ε and ε̃ are of order o(√δ),
3. Φ is sufficiently close to Φ_LSW, in the sense that ‖Φ − Φ_LSW‖_Z is small.

Our first main result guarantees that we can choose the parameters ε and ε̃ appropriately.

Theorem 2.3.
Under Assumption 2.2 we can solve (17), i.e., for each Φ there exists a unique choice of (ε, ε̃) such that the compatibility conditions are satisfied. This solution belongs to

\[ U_\delta = \big\{(\varepsilon,\tilde\varepsilon)\,:\,(1-o(1))\,\epsilon_\delta\le\varepsilon\le(1+o(1))\,\epsilon_\delta,\ (1-o(1))\,\tilde\epsilon_\delta\le\tilde\varepsilon\le(1+o(1))\,\tilde\epsilon_\delta\big\}, \]

where ϵ_δ ∼ exp(−c/√δ) and ϵ̃_δ ∼ exp(−c/√δ) will be identified in Equation (43) below.

The solution from Theorem 2.3 depends naturally on the function h = h[Φ] = Φ∗Φ, and is denoted by (ε[h;δ], ε̃[h;δ]). In a second step we define an operator Ī_δ[Φ] via

\[ \bar I_\delta[\Phi] := I\big[\Phi;\,\varepsilon[h[\Phi];\delta],\,\tilde\varepsilon[h[\Phi];\delta],\,\delta\big] \tag{19} \]

with I as in (15), and show that for sufficiently small δ there exists a corresponding fixed point.

Theorem 2.4.
Under Assumption 2.2 there exists a nonnegative solution to
Φ = Ī_δ[Φ] that is isolated in the space Z.

In order to prove Theorem 2.3 we rewrite the compatibility conditions as a fixed point equation for (ε, ε̃) with parameters δ and h. This reads

\[ g_1[h;\varepsilon,\tilde\varepsilon,\delta] = \varepsilon, \qquad g_2[h;\varepsilon,\tilde\varepsilon,\delta] = \tilde\varepsilon, \tag{20} \]

where

\[ g_i[h;\varepsilon,\tilde\varepsilon,\delta] := \varepsilon^{2}\int_0^\infty \frac{\xi\,h(\xi)}{2\big(\xi+1-(\lambda_{LSW}-\delta)\xi^{1/3}\big)}\int_0^\xi \gamma_i(z)\,\frac{\psi(z;\varepsilon,\tilde\varepsilon,\delta)}{\psi(\xi;\varepsilon,\tilde\varepsilon,\delta)}\,dz\,d\xi, \tag{21} \]

with γ₁(z) = z and γ₂(z) = 1.

For fixed h the integrals g₁ and g₂ depend extremely sensitively on ε, ε̃, and δ. Therefore, the crucial part of our analysis are the following asymptotic expressions for g₁, g₂ and their derivatives, which we derive in Section 3.2.

Proposition 2.5.
Assumption 2.2 implies

\[ \big|g_1[h;\varepsilon,\tilde\varepsilon,\delta]-\varepsilon^{2}/\epsilon_\delta\big| = o\big(\varepsilon^{2}/\epsilon_\delta\big), \qquad \big|g_2[h;\varepsilon,\tilde\varepsilon,\delta]-\varepsilon^{2}\tilde\epsilon_\delta/\epsilon_\delta^{2}\big| = o\big(\varepsilon^{2}\tilde\epsilon_\delta/\epsilon_\delta^{2}\big), \]

as well as

\[ \big|\varepsilon\,\partial_\varepsilon g_i - 2g_i\big| \le o(g_i), \qquad \big|\tilde\varepsilon\,\partial_{\tilde\varepsilon} g_i\big| \le o(g_i), \qquad\text{for } i=1,2. \]

Exploiting these estimates we prove Theorem 2.3 by means of elementary analysis, see Section 3.2, and as a consequence we derive in Section 3.2 the following result, which in turn implies Theorem 2.4.

Proposition 2.6.
Under Assumption 2.2 there exists a small ball around Φ_LSW in the space Z such that the operator Ī_δ[Φ] is a contraction on this ball.

We proceed with some comments concerning the uniqueness of solutions. Proposition 2.6 provides a local uniqueness result in the function space Z. Moreover, since we can choose the decay parameters β₁ > 0, β₂ > 0 arbitrarily, this local uniqueness holds in a rather large class of exponentially decaying functions. Our result, however, excludes neither the existence of further solutions that are not close to Φ_LSW, nor the existence of algebraically decaying solutions.
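The stability of the space Z under convolution, which underlies the fixed point argument and is proved in Lemma 3.1 below, can be illustrated numerically. In the sketch the parameters β₁, β₂ and the test profile are illustrative choices, not values used in the paper; the profile e^{−z} lies in Z, and its self-convolution (which equals z e^{−z} in closed form) again has a finite ‖·‖_Z-norm:

```python
import numpy as np

# Weighted norm ||.||_Z of (18) and its behavior under convolution.
# beta1, beta2 and the profile below are illustrative choices.
beta1, beta2 = 0.5, 1.0
dz = 2e-3
z = np.arange(0.0, 40.0, dz)
phi = np.exp(-z)                         # decays like exp(-z), so phi lies in Z

conv = np.convolve(phi, phi)[: z.size] * dz   # (phi * phi)(z) on the grid

# closed form for this profile: (phi * phi)(z) = z exp(-z)
assert np.max(np.abs(conv - z * np.exp(-z))) < 1e-2

def norm_Z(f):
    head = np.max(np.abs(f[z <= 1.0]))   # sup of |f| on [0, 1]
    tail_z = z[z >= 1.0]                 # weighted sup of |f| on [1, infty)
    tail = np.max(np.abs(f[z >= 1.0]) * np.exp(beta1 * tail_z) * tail_z ** beta2)
    return head + tail

# the convolution stays in Z, with norm controlled by the norms of the factors
print(norm_Z(conv) <= 10 * norm_Z(phi) ** 2)   # True
```

Here β₁ must be strictly smaller than the decay rate of the profile, which mirrors the role of β₁, β₂ in the smallness condition for δ discussed above.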
In what follows, c and C denote small and large positive constants, respectively, which are independent of δ. Moreover, o(1) denotes a quantity that converges to 0 as δ →
0, where this convergence is always uniform with respect to all other quantities under consideration. For the subsequent considerations the following notations are useful. For δ > 0 we define

\[ a_\delta(z) := \frac{1}{1+z-\lambda_{LSW}z^{1/3}+\delta z^{1/3}}, \qquad b_\delta(z) := 2-\frac{\lambda_{LSW}}{3}\,z^{-2/3}+\frac{\delta}{3}\,z^{-2/3}, \tag{22} \]

compare Figure 1, so that by definition the function z ↦ ψ(z; ε, ε̃, δ) solves the homogeneous equation

\[ \psi' = -a_\delta(z)\,\big(b_\delta(z)-\tilde\varepsilon z-\varepsilon\big)\,\psi, \qquad 0\le z<\infty, \tag{23} \]

compare (14).

[Figure 1: Sketch of the functions a_δ and b_δ for small δ and z ∈ [0, 2]; a_δ has a sharp maximum of height O(1/δ) and width O(√δ) near z = 1/2, while b_δ behaves like −z^{−2/3} near the origin and is O(1) elsewhere.]

For convenience we normalize ψ by

\[ \int_0^{1/2} z\,\psi(z;\varepsilon,\tilde\varepsilon,\delta)\,dz = 1 = \int_0^{1/2} z\,\Phi_{LSW}(z)\,dz, \tag{24} \]

and this implies

\[ \big|\hat\psi_\delta(z)-\Phi_{LSW}(z)\big| = o(1), \qquad 0\le z\le\tfrac12, \quad\text{with}\quad \hat\psi_\delta(z) := \psi(z;0,0,\delta), \]

and

\[ \frac{1}{\big|\hat\psi_\delta(z)\big|}\ \begin{cases} \le C_\sigma & \text{for } 0\le z\le\tfrac12-\sigma,\\ \ge c_\sigma\exp\big(c/\sqrt{\delta}\big) & \text{for } \tfrac12+\sigma\le z\le 2, \end{cases} \]

with 0 < σ < 1/2 arbitrary, compare Figure 2.

[Figure 2: Sketch of ψ̂_δ, which drops from O(1) to O(exp(−c/√δ)) across the layer at z = 1/2, and of ln Γ̂₁(·; δ), which jumps by O(1/√δ) there.]

We further define

\[ G_i[h;\varepsilon,\tilde\varepsilon,\delta] := \int_0^\infty \frac{\xi\,a_\delta(\xi)}{2}\,h(\xi)\,\Gamma_i(\xi;\varepsilon,\tilde\varepsilon,\delta)\,d\xi, \]
\[ \Gamma_i(\xi;\varepsilon,\tilde\varepsilon,\delta) := \int_0^\xi \gamma_i(z)\,\frac{\psi(z;\varepsilon,\tilde\varepsilon,\delta)}{\psi(\xi;\varepsilon,\tilde\varepsilon,\delta)}\,dz = \int_0^\xi \gamma_i(z)\,\exp\Big(\int_z^\xi a_\delta(y)\big(b_\delta(y)-\tilde\varepsilon y-\varepsilon\big)\,dy\Big)\,dz, \tag{25} \]

with γ₁(z) = z, γ₂(z) = 1, so that the compatibility conditions (20) read

\[ \varepsilon = \varepsilon^{2}\,G_1[h;\varepsilon,\tilde\varepsilon,\delta], \qquad \tilde\varepsilon = \varepsilon^{2}\,G_2[h;\varepsilon,\tilde\varepsilon,\delta]. \]

In order to prove the existence of solutions to these equations we need careful estimates on the functionals G_i and their derivatives with respect to ε and ε̃. These are derived in Section 3.2. The key observations are the following: for small ε and ε̃ we can ignore that the functions Γ₁ and Γ₂ depend on these parameters, and for h close to Φ_LSW ∗ Φ_LSW we can neglect the contributions of the exponential tails to G₁ and G₂.
More precisely, our main approximation arguments are

\[ \Gamma_i(\xi;\varepsilon,\tilde\varepsilon,\delta) \approx \hat\Gamma_i(\xi;\delta), \qquad G_i[h;\varepsilon,\tilde\varepsilon,\delta] \approx \hat G_i[h;\delta] \approx \hat G_i[\Phi_{LSW}*\Phi_{LSW};\delta], \]

where

\[ \hat\Gamma_i(\xi;\delta) := \Gamma_i(\xi;0,0,\delta), \qquad \hat G_i[h;\delta] := \int_0^1 \frac{\xi\,a_\delta(\xi)}{2}\,h(\xi)\,\hat\Gamma_i(\xi;\delta)\,d\xi. \tag{26} \]

Moreover, for small δ we can further approximate Ĝ_i by Ĝ_i[h; δ] ≈ K_{i,δ} R_δ[h], where K_{i,δ} := Γ̂_i(1; δ) and R_δ has a well-defined limit R₀ as δ → 0, with

\[ R_\delta[h] := \int_0^1 \varrho_\delta(\xi)\,h(\xi)\,d\xi, \qquad \varrho_\delta(\xi) := \frac{\xi\,a_\delta(\xi)}{2}\,\exp\Big(-\int_\xi^1 a_\delta(y)\,b_\delta(y)\,dy\Big), \tag{27} \]

see Corollary 3.5 below.

3.1 Auxiliary results

Here we prove that the convolution operator ∗ from (16) is continuous and maps the space Z into itself, and we derive further useful estimates.

Lemma 3.1.
For arbitrary Φ₁, Φ₂ ∈ Z we have Φ₁∗Φ₂ ∈ Z, and there exists a constant C such that

\[ \|\Phi_1*\Phi_1-\Phi_2*\Phi_2\|_Z \le C\,\big(\|\Phi_1\|_Z+\|\Phi_2\|_Z\big)\,\|\Phi_1-\Phi_2\|_Z. \]

Moreover, for each n ∈ ℕ there exists a constant C_n such that

\[ \frac{1}{z^{n}}\int_z^\infty \xi^{n}\,|\Phi(\xi)|\,d\xi + \frac{1}{z^{n}}\int_z^\infty \xi^{n}(\xi-z)\,|\Phi(\xi)|\,d\xi \le C_n\,\exp(-\beta_1 z)\,z^{-\beta_2}\,\lfloor\Phi\rfloor_Z \tag{28} \]

for z ≥ 1, and hence

\[ \int_1^\infty \xi^{n}\,|\Phi(\xi)|\,d\xi \le C_n\,\lfloor\Phi\rfloor_Z \tag{29} \]

for all Φ ∈ Z.

Proof. Let Φ_i ∈ Z be arbitrary. Definition (18) provides

\[ \big|(\Phi_1*\Phi_2)(z)\big| \le C\,\|\Phi_1\|_Z\,\|\Phi_2\|_Z, \qquad 0\le z\le 1, \]

where we used that sup_{0≤z≤1}|Φ(z)| ≤ C‖Φ‖_Z. For z ≥ 2 we estimate

\[ \int_{z-1}^{z}\exp(-\beta_1 y)\,y^{-\beta_2}\,dy = \int_0^1 \exp\big(-\beta_1(z-y)\big)(z-y)^{-\beta_2}\,dy \le C\,\exp(-\beta_1 z)\,z^{-\beta_2} \]

and

\[ \int_1^{z-1}\exp\big(-\beta_1(z-y)\big)(z-y)^{-\beta_2}\,\exp(-\beta_1 y)\,y^{-\beta_2}\,dy \le \exp(-\beta_1 z)\int_1^{z-1}\frac{dy}{(z-y)^{\beta_2}\,y^{\beta_2}} \le C\,\exp(-\beta_1 z)\,z^{-\beta_2}. \]

These estimates imply Φ₁∗Φ₂ ∈ Z with ‖Φ₁∗Φ₂‖_Z ≤ C‖Φ₁‖_Z‖Φ₂‖_Z, and we conclude

\[ \|\Phi_1*\Phi_1-\Phi_2*\Phi_2\|_Z \le \|\Phi_1*(\Phi_1-\Phi_2)\|_Z + \|\Phi_2*(\Phi_1-\Phi_2)\|_Z \le C\,\big(\|\Phi_1\|_Z+\|\Phi_2\|_Z\big)\,\|\Phi_1-\Phi_2\|_Z. \]

Finally, the estimates (28) and (29) follow from Φ(z) ≤ ⌊Φ⌋_Z z^{−β₂} exp(−β₁z) for z ≥ 1 and elementary estimates for integrals.

Remark 3.2.
All constants C in Lemma 3.1 depend on the parameters β₁ and β₂ that appear in the definition of the function space Z, compare (18). More precisely, we have C → ∞ as β₁ → ∞ or β₂ → ∞.

3.1.2 Properties of a_δ and b_δ

All subsequent estimates rely on the following properties of the functions a_δ and b_δ, which are illustrated in Figure 1.

Lemma 3.3.
Let 0 < δ ≤ 1 and σ be arbitrary with 0 < σ < 1/2. Then,

1. a_δ is uniformly positive on [0, 2], with

\[ \max_{y} a_\delta(y) = 2^{1/3}\,\delta^{-1}\,\big(1+O(\delta)\big), \qquad \operatorname{argmax}_y a_\delta(y) = \tfrac12+O(\delta) \]

for the maximum and the maximizer, respectively;

2. b_δ is uniformly integrable on [0, 1] and nonnegative for z ≥ (1/2)^{5/2};

3. z a_δ(z) → 1 and b_δ(z) → 2 as z → ∞;

4. a_δ can be expanded with respect to δ away from the layer, i.e., |a_δ(z) − a₀(z)| ≤ C_σ δ for z ≤ 2 with |z − 1/2| ≥ σ and some constant C_σ depending on σ;

5. ∫₀² a_δ(y) dy = κ δ^{−1/2}(1 + o(1)) for some constant κ given in the proof.

Proof. The assertions 1–4 follow immediately from the definitions of a_δ and b_δ, compare (22). With y = 1/2 + √δ η we find

\[ \sqrt{\delta}\,a_\delta(y) = \frac{1}{2^{-1/3}+\frac23\,\eta^{2}}\,\big(1+o(1)\big). \]

This expansion implies

\[ \sqrt{\delta}\int_0^2 a_\delta(y)\,dy = \big(1+o(1)\big)\int_{-\infty}^{\infty}\frac{d\eta}{2^{-1/3}+\frac23\,\eta^{2}} = \big(1+o(1)\big)\,\sqrt{3}\,\pi\,2^{-1/3}, \]

so κ = √3 π 2^{−1/3}, and the proof is complete.

3.1.3 Properties of ϱ_δ and R_δ

Lemma 3.4.
For δ ≤ 1 we have ∫₀¹ ϱ_δ(ξ) dξ ≤ C, where ϱ_δ is defined in (27). Moreover, for each 0 < σ < 1/2 there exist constants c_σ and C_σ such that

1. ϱ_δ(ξ) ≥ c_σ for 1/2 + σ ≤ ξ ≤ 1,
2. ϱ_δ(ξ) ≤ o(1) C_σ for 0 ≤ ξ ≤ 1/2 − σ.

Proof. The existence of c_σ and C_σ is provided by Lemma 3.3, so it remains to show

\[ \int_{1/2-\sigma}^{1/2+\sigma}\varrho_\delta(\xi)\,d\xi \le C \]

for fixed but small σ. According to Lemma 3.3 there exist constants c and C such that

\[ \varrho_\delta(\xi) \le C\,a_\delta(\xi)\,\exp\Big(-c\int_\xi^{1/2+\sigma} a_\delta(y)\,dy\Big) = \frac{C}{c}\,\frac{d}{d\xi}\,\exp\Big(-c\int_\xi^{1/2+\sigma} a_\delta(y)\,dy\Big) \]

for all ξ with |ξ − 1/2| ≤ σ, and we conclude

\[ \int_{1/2-\sigma}^{1/2+\sigma}\varrho_\delta(\xi)\,d\xi \le \frac{C}{c}\Big(1-\exp\Big(-c\int_{1/2-\sigma}^{1/2+\sigma} a_\delta(y)\,dy\Big)\Big) \le \frac{C}{c}, \]

which gives the desired result.

Corollary 3.5.
The functionals R_δ are uniformly Lipschitz continuous with respect to h ∈ Z for δ ≤ 1, and

\[ R_\delta[h] \xrightarrow{\ \delta\to 0\ } R_0[h] := \int_{1/2}^{1}\frac{\xi\,a_0(\xi)}{2}\,h(\xi)\,\exp\Big(-\int_\xi^1 a_0(y)\,b_0(y)\,dy\Big)\,d\xi \]

for all h ∈ Z.

3.1.4 Properties of ψ

Lemma 3.6.
The estimates

\[ \frac{\psi(z;\varepsilon_1,\tilde\varepsilon_1,\delta)}{\psi(z;\varepsilon_2,\tilde\varepsilon_2,\delta)} \le \exp\Big(C\,\frac{|\varepsilon_1-\varepsilon_2|+|\tilde\varepsilon_1-\tilde\varepsilon_2|}{\sqrt{\delta}}\Big) \tag{30} \]

are satisfied for 0 ≤ z ≤ 2, δ ≤ 1, and arbitrary (ε₁, ε̃₁), (ε₂, ε̃₂).

Proof. The variation of constants formula provides

\[ \psi(z;\varepsilon_1,\tilde\varepsilon_1,\delta) = d\,\psi(z;\varepsilon_2,\tilde\varepsilon_2,\delta)\,\exp\Big(\int_0^z a_\delta(y)\big(\varepsilon_1-\varepsilon_2+(\tilde\varepsilon_1-\tilde\varepsilon_2)\,y\big)\,dy\Big) \]

for some factor d which depends on ε₁, ε₂, ε̃₁, ε̃₂, and δ. Thanks to

\[ \int_0^z a_\delta(y)\,dy + \int_0^z a_\delta(y)\,y\,dy \le \frac{C}{\sqrt{\delta}} \]

for 0 ≤ z ≤ 2, we obtain

\[ d\,e^{-\tilde C/2}\,\psi(z;\varepsilon_2,\tilde\varepsilon_2,\delta) \le \psi(z;\varepsilon_1,\tilde\varepsilon_1,\delta) \le d\,e^{\tilde C/2}\,\psi(z;\varepsilon_2,\tilde\varepsilon_2,\delta), \]

where C̃ denotes the constant on the right-hand side of (30). Finally, the normalization condition (24) yields e^{−C̃/2} ≤ d ≤ e^{C̃/2}, and the proof is complete.

3.1.5 Properties of Γ̂_i and Γ_i

Recall definition (26), which implies that the functions ξ ↦ Γ̂_i(ξ; δ), i = 1, 2, are strictly increasing and satisfy the ODE

\[ \partial_\xi\hat\Gamma_i(\xi;\delta) = \gamma_i(\xi)+a_\delta(\xi)\,b_\delta(\xi)\,\hat\Gamma_i(\xi;\delta) > 0, \qquad \hat\Gamma_i(0;\delta) = 0. \tag{31} \]

Lemma 3.7.
For all δ ≤ 1 we have

\[ c\,\exp\Big(\frac{c}{\sqrt{\delta}}\Big) \le K_{i,\delta} \le C\,\exp\Big(\frac{C}{\sqrt{\delta}}\Big), \tag{32} \]

and

\[ c\,K_{2,\delta} \le K_{1,\delta} \le K_{2,\delta}. \tag{33} \]

Proof.
Exploiting the properties of a_δ and b_δ from Lemma 3.3 we find

\[ \hat\Gamma_i(1;\delta) \le \int_0^1 \gamma_i(z)\,\exp\Big(\int_z^1 a_\delta(y)\,b_\delta(y)\,dy\Big)\,dz \le C\,\exp\Big(\frac{C}{\sqrt{\delta}}\Big), \]

as well as

\[ \hat\Gamma_i(1;\delta) \ge c\int_0^{1/4}\gamma_i(z)\,dz\;\exp\Big(c\int_{1/2-\sigma}^{1/2+\sigma} a_\delta(y)\,dy\Big) \ge c\,\exp\Big(\frac{c}{\sqrt{\delta}}\Big), \]

and this gives (32) since K_{i,δ} = Γ̂_i(1; δ) by definition. The inequality K_{1,δ} ≤ K_{2,δ} is obvious, as γ₁(z) ≤ γ₂(z) for all 0 ≤ z ≤ 1. Moreover, there exists a constant c̃ such that Γ̂₁(1/2; δ) ≥ c̃ Γ̂₂(1/2; δ), where we used that a_δ, b_δ can be expanded in powers of δ on [0, 1/2], and the ODE (31) implies that Γ̂₁ is a supersolution of the equation for c Γ̂₂(ξ; δ) on [1/2, 1] with c = min{c̃, 1/2}. Hence we have proved (33).

Lemma 3.8. Let δ ≤ 1. Then

\[ \hat\Gamma_i(\xi;\delta) \le C\,K_{i,\delta}\,\xi^{3}, \qquad 1\le\xi<\infty, \tag{34} \]

and

\[ \hat\Gamma_i(\xi;\delta) = K_{i,\delta}\,\exp\Big(-\int_\xi^1 a_\delta(y)\,b_\delta(y)\,dy\Big)\,\big(1-o(1)\big), \qquad \tfrac12\le\xi\le 1. \tag{35} \]

In particular,

1. Γ̂_i(ξ; δ) ≤ C_σ for ξ ≤ 1/2 − σ,
2. Γ̂_i(ξ; δ) ≥ c_σ K_{i,δ} for 1/2 + σ ≤ ξ ≤ 1,

with 0 < σ < 1/2 arbitrary.

Proof. We start with ξ ≥
1. Lemma 3.3 provides

\[ \exp\Big(\int_z^\xi a_\delta(y)\,b_\delta(y)\,dy\Big) \le \exp\big(C+3\ln(\xi/z)\big) \le C\,\Big(\frac{\xi}{z}\Big)^{3}, \qquad 1\le z\le\xi, \]

and we conclude

\[ \hat\Gamma_i(\xi;\delta) = \int_0^1 \gamma_i(z)\,\exp\Big(\int_z^\xi a_\delta b_\delta\,dy\Big)\,dz + \int_1^\xi \gamma_i(z)\,\exp\Big(\int_z^\xi a_\delta b_\delta\,dy\Big)\,dz \]
\[ \le C\,\xi^{3}\int_0^1 \gamma_i(z)\,\exp\Big(\int_z^1 a_\delta b_\delta\,dy\Big)\,dz + C\,\xi^{3}\int_1^\xi \frac{\gamma_i(z)}{z^{3}}\,dz \le C\,\xi^{3}\,\big(K_{i,\delta}+1\big), \]

which implies (34) thanks to (32). Now consider 0 ≤ ξ ≤
1, and let ξ₀ := (1/2)^{5/2} < 1/2, so that b_δ(y) ≥ 0 for y ≥ ξ₀, compare Lemma 3.3. Since ∫₀^{ξ₀} a_δ(y)|b_δ(y)| dy ≤ C, the contribution of the interval [0, ξ₀] to all exponents is bounded by a constant. For ξ₀ ≤ ξ ≤ 1 we find

\[ 0 \le \int_\xi^1 \gamma_i(z)\,\exp\Big(\int_z^\xi a_\delta(y)\,b_\delta(y)\,dy\Big)\,dz \le \int_\xi^1 \gamma_i(z)\,dz \le C, \]

and therefore

\[ \hat\Gamma_i(\xi;\delta) = \int_0^1 \gamma_i(z)\,\exp\Big(\int_z^\xi a_\delta b_\delta\,dy\Big)\,dz - \int_\xi^1 \gamma_i(z)\,\exp\Big(\int_z^\xi a_\delta b_\delta\,dy\Big)\,dz \ge K_{i,\delta}\,\exp\Big(-\int_\xi^1 a_\delta(y)\,b_\delta(y)\,dy\Big) - C, \]

and (35) follows due to (32). The remaining assertions are direct consequences of (35) and Lemma 3.3.

From now on we assume that both ε and ε̃ are small with respect to δ.

Assumption 3.9.
Suppose ε = o(√δ) and ε̃ = o(√δ).

Lemma 3.10.
Under Assumption 3.9 the estimates

\[ \Gamma_i(\xi;\varepsilon,\tilde\varepsilon,\delta) \le \hat\Gamma_i(\xi;\delta), \qquad 0\le\xi<\infty, \]

and

\[ \big|\Gamma_i(\xi;\varepsilon,\tilde\varepsilon,\delta)-\hat\Gamma_i(\xi;\delta)\big| = o(1)\,\hat\Gamma_i(\xi;\delta), \qquad 0\le\xi\le 1, \]

hold for i = 1, 2 and all δ ≤ 1.

Proof. The first assertion follows from definition (25) and the positivity of a_δ. With 0 ≤ z ≤ ξ ≤ 1 we have

\[ 0 \le \int_z^\xi a_\delta(y)\big(\tilde\varepsilon y+\varepsilon\big)\,dy \le \big(\tilde\varepsilon+\varepsilon\big)\int_0^1 a_\delta(y)\,(1+y)\,dy \le C\,\frac{\varepsilon+\tilde\varepsilon}{\sqrt{\delta}}, \]

and

\[ \Big|\exp\Big(-\int_z^\xi a_\delta(y)\big(\tilde\varepsilon y+\varepsilon\big)\,dy\Big)-1\Big| \le \exp\Big(C\,\frac{\varepsilon+\tilde\varepsilon}{\sqrt{\delta}}\Big)-1 = o(1) \]

gives the second assertion.

3.2 The fixed point equation for (ε, ε̃)

Lemma 3.11.
For δ ≤ 1 and all h sufficiently close to Φ_LSW ∗ Φ_LSW the following estimates are satisfied:

\[ \|h\|_Z \le 2\,\|\Phi_{LSW}*\Phi_{LSW}\|_Z, \qquad \hat G_i[h;\delta] = \big(1\pm o(1)\big)\,K_{i,\delta}\,R_\delta[h], \qquad c \le R_\delta[h] \le C. \tag{36} \]

Proof.
Equation (35) provides

\[ \hat G_i[h;\delta] = \int_0^1 \frac{\xi\,a_\delta(\xi)}{2}\,h(\xi)\,\hat\Gamma_i(\xi;\delta)\,d\xi = K_{i,\delta}\Big(\int_0^1 \frac{\xi\,a_\delta(\xi)}{2}\,h(\xi)\,\exp\Big(-\int_\xi^1 a_\delta b_\delta\,dy\Big)\,d\xi \pm o(1)\,\lceil h\rceil_Z\Big) = K_{i,\delta}\,\big(R_\delta[h]\pm o(1)\,\lceil h\rceil_Z\big). \tag{37} \]

Recall that Φ_LSW ∗ Φ_LSW is strictly positive on (0, 1), and suppose that ‖h − Φ_LSW∗Φ_LSW‖_Z is sufficiently small such that h(ξ) ≤ 2‖Φ_LSW∗Φ_LSW‖_Z for all 0 ≤ ξ ≤ 1 and

\[ h(\xi) \ge \tfrac12\,(\Phi_{LSW}*\Phi_{LSW})(\xi) \ge c, \qquad \tfrac12\le\xi\le\tfrac34. \]

Thanks to Lemma 3.4 this estimate implies

\[ 0 < c\int_{1/2}^{3/4}\varrho_\delta(\xi)\,d\xi \le R_\delta[h] \le 2\,\|\Phi_{LSW}*\Phi_{LSW}\|_Z\,R_\delta[1] < \infty. \]

In particular, R_δ[h] ± o(1)⌈h⌉_Z = (1 ± o(1)) R_δ[h], which is the third claimed estimate, and using (37) we find the second inequality from (36).

From now on we make the following assumption on the function h.

Assumption 3.12.
Let δ be sufficiently small, and ε + ε̃ = o(√δ). Moreover, suppose that h is sufficiently close to Φ_LSW ∗ Φ_LSW, in the sense that all estimates from (36) as well as

\[ \lfloor h\rfloor_Z = o(1) \tag{38} \]

are satisfied.

Remark 3.13.
1. The condition ⌊h⌋_Z = o(1) arises naturally. In fact, below we consider functions Φ with ‖Φ − Φ_LSW‖_Z = o(1), and since Φ_LSW ∗ Φ_LSW vanishes for z ≥ 1 this implies

\[ \lfloor\Phi*\Phi\rfloor_Z \le \|\Phi*\Phi-\Phi_{LSW}*\Phi_{LSW}\|_Z = o(1). \]

2. The condition ‖Φ − Φ_LSW‖_Z = o(1) depends crucially on the parameters β₁ and β₂ from (18), because for fixed Φ the quantity ⌊Φ⌋_Z grows as β₁ → ∞ or β₂ → ∞. This will affect the final choice of δ, see Condition (46) below.

3.2.1 Estimates for G_i, Ĝ_i, and their derivatives

In order to estimate G_i we split the ξ-integration in (25) as follows:

\[ G_{i,1}[h;\varepsilon,\tilde\varepsilon,\delta] := \int_0^1 \frac{\xi\,a_\delta(\xi)}{2}\,h(\xi)\,\Gamma_i(\xi;\varepsilon,\tilde\varepsilon,\delta)\,d\xi, \qquad G_{i,2}[h;\varepsilon,\tilde\varepsilon,\delta] := \int_1^\infty \frac{\xi\,a_\delta(\xi)}{2}\,h(\xi)\,\Gamma_i(\xi;\varepsilon,\tilde\varepsilon,\delta)\,d\xi, \]

so that G_i = G_{i,1} + G_{i,2}.

Lemma 3.14.
Assumption 3.12 implies

\[ \big|G_i[h;\varepsilon,\tilde\varepsilon,\delta]-\hat G_i[h;\delta]\big| \le o(1)\,\hat G_i[h;\delta], \]

and

\[ \varepsilon\,\big|\partial_\varepsilon G_i[h;\varepsilon,\tilde\varepsilon,\delta]\big| \le o(1)\,\hat G_i[h;\delta], \qquad \tilde\varepsilon\,\big|\partial_{\tilde\varepsilon} G_i[h;\varepsilon,\tilde\varepsilon,\delta]\big| \le o(1)\,\hat G_i[h;\delta], \]

for both i = 1 and i = 2.

Proof. Lemma 3.10 implies

\[ \big|G_{i,1}[h;\varepsilon,\tilde\varepsilon,\delta]-\hat G_i[h;\delta]\big| \le o(1)\,\hat G_i[h;\delta], \tag{39} \]

and using Lemma 3.3 and Lemma 3.8 we find

\[ G_{i,2}[h;\varepsilon,\tilde\varepsilon,\delta] \le \int_1^\infty \frac{\xi\,a_\delta(\xi)}{2}\,h(\xi)\,\hat\Gamma_i(\xi;\delta)\,d\xi \le C\,K_{i,\delta}\int_1^\infty h(\xi)\,\xi^{3}\,d\xi \le C\,K_{i,\delta}\,\lfloor h\rfloor_Z \le o(1)\,\hat G_i[h;\delta], \tag{40} \]

where we additionally used (29) and (38). Combining (39) and (40) yields the desired estimates for G_i. In order to control the derivatives we compute

\[ \partial_\varepsilon G_{i,1} = \int_0^1 \frac{\xi\,a_\delta(\xi)}{2}\,h(\xi)\,\partial_\varepsilon\Gamma_i(\xi;\varepsilon,\tilde\varepsilon,\delta)\,d\xi, \qquad \partial_{\tilde\varepsilon} G_{i,1} = \int_0^1 \frac{\xi\,a_\delta(\xi)}{2}\,h(\xi)\,\partial_{\tilde\varepsilon}\Gamma_i(\xi;\varepsilon,\tilde\varepsilon,\delta)\,d\xi, \]

as well as similar formulas for the derivatives of G_{i,2}, where

\[ \partial_\varepsilon\Gamma_i(\xi;\varepsilon,\tilde\varepsilon,\delta) = -\int_0^\xi \gamma_i(z)\,\exp\Big(\int_z^\xi a_\delta(y)\big(b_\delta(y)-\tilde\varepsilon y-\varepsilon\big)\,dy\Big)\int_z^\xi a_\delta(y)\,dy\,dz, \]
\[ \partial_{\tilde\varepsilon}\Gamma_i(\xi;\varepsilon,\tilde\varepsilon,\delta) = -\int_0^\xi \gamma_i(z)\,\exp\Big(\int_z^\xi a_\delta(y)\big(b_\delta(y)-\tilde\varepsilon y-\varepsilon\big)\,dy\Big)\int_z^\xi a_\delta(y)\,y\,dy\,dz. \]

For 0 ≤ z ≤ ξ ≤ 1 we have

\[ \Big|\int_z^\xi a_\delta(y)\,(y+1)\,dy\Big| \le \int_0^1 a_\delta(y)\,(y+1)\,dy \le \frac{C}{\sqrt{\delta}}, \]

so that

\[ \big|\partial_\varepsilon\Gamma_i(\xi;\varepsilon,\tilde\varepsilon,\delta)\big| \le \frac{C}{\sqrt{\delta}}\,\hat\Gamma_i(\xi;\delta), \qquad \big|\partial_{\tilde\varepsilon}\Gamma_i(\xi;\varepsilon,\tilde\varepsilon,\delta)\big| \le \frac{C}{\sqrt{\delta}}\,\hat\Gamma_i(\xi;\delta), \]

due to Lemma 3.10. Multiplication with ξ a_δ(ξ) h(ξ)/2 and integration over 0 ≤ ξ ≤ 1 yields

\[ \varepsilon\,\big|\partial_\varepsilon G_{i,1}\big| \le o(1)\,\hat G_i[h;\delta], \qquad \tilde\varepsilon\,\big|\partial_{\tilde\varepsilon} G_{i,1}\big| \le o(1)\,\hat G_i[h;\delta], \]

where we used (ε + ε̃)/√δ = o(1).
For ξ ≥ 1 we have

\[ \Big|\int_z^\xi a_\delta(y)\,(y+1)\,dy\Big| \le \int_0^1 a_\delta(y)\,(y+1)\,dy + \int_1^\xi a_\delta(y)\,(y+1)\,dy \le \frac{C}{\sqrt{\delta}}+C\,\xi \le \frac{C}{\sqrt{\delta}}\,\xi, \]

and exploiting Lemma 3.8 and Lemma 3.10 we find

\[ \big|\partial_\varepsilon\Gamma_i(\xi;\varepsilon,\tilde\varepsilon,\delta)\big| \le \frac{C}{\sqrt{\delta}}\,\xi^{4}\,K_{i,\delta}, \qquad \big|\partial_{\tilde\varepsilon}\Gamma_i(\xi;\varepsilon,\tilde\varepsilon,\delta)\big| \le \frac{C}{\sqrt{\delta}}\,\xi^{4}\,K_{i,\delta}. \]

Finally, (29) combined with (36) gives

\[ \varepsilon\,\big|\partial_\varepsilon G_{i,2}\big| \le o(1)\,\hat G_i[h;\delta], \qquad \tilde\varepsilon\,\big|\partial_{\tilde\varepsilon} G_{i,2}\big| \le o(1)\,\hat G_i[h;\delta], \]

and the proof is complete.

3.2.2 Solving the compatibility conditions for ε and ε̃

Lemma 3.15.
The following assertions hold true under Assumption 3.12.

1. For small $\delta$ and $i = 1, 2$ we have
$$\bigl| g_i[h;\varepsilon,\tilde\varepsilon,\delta] - \varepsilon^2\,\widehat G_i[h;\delta] \bigr| \leq o(1)\,\varepsilon^2\,\widehat G_i[h;\delta] \tag{41}$$
as well as
$$\bigl| \varepsilon\,\partial_\varepsilon g_i[h;\varepsilon,\tilde\varepsilon,\delta] - 2\,g_i[h;\varepsilon,\tilde\varepsilon,\delta] \bigr| + \bigl| \tilde\varepsilon\,\partial_{\tilde\varepsilon} g_i[h;\varepsilon,\tilde\varepsilon,\delta] \bigr| \leq o(1)\,\varepsilon^2\,\widehat G_i[h;\delta]. \tag{42}$$

2. For each $\alpha > 1$ there exists $\delta_\alpha > 0$ such that for all $\delta \leq \delta_\alpha$ each solution $(\varepsilon,\tilde\varepsilon)$ to the compatibility conditions (20) must belong to
$$U[h;\alpha,\delta] := \bigl\{ (\varepsilon,\tilde\varepsilon) : \alpha^{-1}\varepsilon_{\rm app}[h;\delta] \leq \varepsilon \leq \alpha\,\varepsilon_{\rm app}[h;\delta],\ \alpha^{-1}\tilde\varepsilon_{\rm app}[h;\delta] \leq \tilde\varepsilon \leq \alpha\,\tilde\varepsilon_{\rm app}[h;\delta] \bigr\},$$
where
$$\varepsilon_{\rm app}[h;\delta] := 1/\widehat G_1[h;\delta], \qquad \tilde\varepsilon_{\rm app}[h;\delta] := \varepsilon_{\rm app}[h;\delta]\,\widehat G_2[h;\delta]/\widehat G_1[h;\delta]$$
have the same order of magnitude thanks to (33).

3. For small $\delta$ and given $h$ (bounded by Assumption 3.12) there exists a solution $(\varepsilon[h;\delta], \tilde\varepsilon[h;\delta])$ to (20). Moreover, this solution is unique under the constraints $\varepsilon, \tilde\varepsilon = o(\sqrt{\delta})$.

Proof. Let $\delta$ be sufficiently small, and let $h$ and $\alpha > 1$ be fixed. From (41) we obtain
$$\frac{g_1(\varepsilon,\tilde\varepsilon)}{\varepsilon} = (1 \pm o(1))\,\frac{\varepsilon}{\varepsilon_{\rm app}}, \qquad g_2(\varepsilon,\tilde\varepsilon) = (1 \pm o(1))\left(\frac{\varepsilon}{\varepsilon_{\rm app}}\right)^2 \tilde\varepsilon_{\rm app},$$
where $g_i(\varepsilon,\tilde\varepsilon)$, $\varepsilon_{\rm app}$, and $\tilde\varepsilon_{\rm app}$ are shorthand for $g_i[h;\varepsilon,\tilde\varepsilon,\delta]$, $\varepsilon_{\rm app}[h;\delta]$, and $\tilde\varepsilon_{\rm app}[h;\delta]$, respectively. Therefore, $(g_1(\varepsilon,\tilde\varepsilon), g_2(\varepsilon,\tilde\varepsilon)) = (\varepsilon,\tilde\varepsilon)$ implies $\varepsilon = (1\pm o(1))\,\varepsilon_{\rm app}$, and in turn $\tilde\varepsilon = (1\pm o(1))\,\tilde\varepsilon_{\rm app}$, so each solution to (20) must be an element of $U[h;\alpha,\delta]$.

Now suppose $(\varepsilon,\tilde\varepsilon) \in U[h;\alpha,\delta]$. For $\varepsilon = \alpha^{-1}\varepsilon_{\rm app}$ and $\varepsilon = \alpha\,\varepsilon_{\rm app}$ we have
$$g_1(\varepsilon,\tilde\varepsilon) = (1 \pm o(1))\,\alpha^{-1}\varepsilon < \varepsilon \qquad\text{and}\qquad g_1(\varepsilon,\tilde\varepsilon) = (1 \pm o(1))\,\alpha\,\varepsilon > \varepsilon,$$
respectively, and (42) implies $\partial_\varepsilon g_1 > 0$. Therefore, for each $\tilde\varepsilon$ there exists a unique solution $\varepsilon = \varepsilon(\tilde\varepsilon)$ to $g_1 = \varepsilon$, i.e.,
$$g_1\bigl(\varepsilon(\tilde\varepsilon), \tilde\varepsilon\bigr) = \varepsilon(\tilde\varepsilon) = (1\pm o(1))\,\varepsilon_{\rm app},$$
and differentiation with respect to $\tilde\varepsilon$ shows
$$\left|\frac{d\varepsilon}{d\tilde\varepsilon}\right| = \frac{|\partial_{\tilde\varepsilon} g_1|}{|\partial_\varepsilon g_1 - 1|} = o(1)\,\frac{\varepsilon}{\tilde\varepsilon} = o(1)\,\frac{\varepsilon_{\rm app}}{\tilde\varepsilon_{\rm app}} = o(1)$$
since $c\,\varepsilon_{\rm app} \leq \tilde\varepsilon_{\rm app} \leq C\,\varepsilon_{\rm app}$ thanks to Lemma 3.11. Now let $\tilde g(\tilde\varepsilon) := g_2\bigl(\varepsilon(\tilde\varepsilon), \tilde\varepsilon\bigr)$, and note that
$$\tilde g = (1\pm o(1))\,\tilde\varepsilon_{\rm app}, \qquad \left|\frac{d\tilde g}{d\tilde\varepsilon}\right| \leq |\partial_\varepsilon g_2|\left|\frac{d\varepsilon}{d\tilde\varepsilon}\right| + |\partial_{\tilde\varepsilon} g_2| = o(1)\,\frac{\tilde g}{\tilde\varepsilon} = o(1).$$
Thus, for small $\delta$ the function $\tilde g$ is contractive with
$$\tilde g\bigl(\alpha^{-1}\tilde\varepsilon_{\rm app}\bigr) > \alpha^{-1}\tilde\varepsilon_{\rm app}, \qquad \tilde g\bigl(\alpha\,\tilde\varepsilon_{\rm app}\bigr) < \alpha\,\tilde\varepsilon_{\rm app},$$
and hence there exists a unique solution to $\tilde\varepsilon = \tilde g(\tilde\varepsilon)$.

In Section 3.3 below we consider functions $h$ close to $\Phi_{\rm LSW} * \Phi_{\rm LSW}$, and then the following result, which follows from Lemma 3.11, Lemma 3.15, and Corollary 3.5, becomes useful.
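The two-step construction in the proof of Lemma 3.15 — for each fixed $\tilde\varepsilon$ solve $g_1(\varepsilon,\tilde\varepsilon) = \varepsilon$ using monotonicity, then iterate the contractive reduced map $\tilde\varepsilon \mapsto \tilde g(\tilde\varepsilon)$ — can be sketched numerically. The functions `g1` and `g2` below are toy stand-ins chosen only to mimic the monotonicity and contraction properties established above; they are illustrative assumptions, not the paper's actual operators.

```python
# Schematic illustration of the nested fixed-point construction from
# Lemma 3.15.  g1 and g2 are TOY model maps (assumptions for
# illustration): g1 is increasing in eps, and the reduced map
# eps_t -> g2(eps(eps_t), eps_t) is a contraction.

def g1(eps, eps_t):
    # toy analogue of g1 ~ eps^2 / eps_app with weak eps_t coupling
    eps_app = 0.1
    return eps**2 / eps_app * (1.0 + 0.05 * eps_t)

def g2(eps, eps_t):
    eps_app, eps_t_app = 0.1, 0.2
    return (eps / eps_app) ** 2 * eps_t_app * (1.0 - 0.05 * eps_t)

def solve_eps(eps_t, lo=1e-8, hi=1.0, tol=1e-12):
    """Bisection for the root of g1(eps, eps_t) - eps = 0 in (lo, hi)."""
    f = lambda e: g1(e, eps_t) - e
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def solve_pair(eps_t0=0.2, n_iter=60):
    """Iterate the contractive reduced map eps_t -> g2(eps(eps_t), eps_t)."""
    eps_t = eps_t0
    for _ in range(n_iter):
        eps_t = g2(solve_eps(eps_t), eps_t)
    return solve_eps(eps_t), eps_t

eps, eps_t = solve_pair()
# at the fixed point both compatibility conditions hold simultaneously
assert abs(g1(eps, eps_t) - eps) < 1e-8
assert abs(g2(eps, eps_t) - eps_t) < 1e-8
```

The outer iteration converges geometrically because the reduced map inherits a small Lipschitz constant, mirroring the $o(1)$ bound on $d\tilde g/d\tilde\varepsilon$ in the proof.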
Corollary 3.16.
Suppose that $\|h - \Phi_{\rm LSW} * \Phi_{\rm LSW}\|_Z = o(1)$. Then the solution $(\varepsilon[h;\delta], \tilde\varepsilon[h;\delta])$ from Lemma 3.15 satisfies
$$\varepsilon_{\rm app}[h;\delta] = (1\pm o(1))\,\epsilon_\delta, \qquad \tilde\varepsilon_{\rm app}[h;\delta] = (1\pm o(1))\,\tilde\epsilon_\delta,$$
where
$$\epsilon_\delta := 1/\bigl(K_{1,\delta}\, R[\Phi_{\rm LSW} * \Phi_{\rm LSW}]\bigr), \qquad \tilde\epsilon_\delta := \epsilon_\delta\, K_{2,\delta}/K_{1,\delta}. \tag{43}$$
In particular, the assertions from Theorem 2.3 and Proposition 2.5 are satisfied.

3.2.3 Continuity of $\varepsilon$ and $\tilde\varepsilon$

Lemma 3.17. The solution from Lemma 3.15 depends Lipschitz-continuously on $h$. More precisely, for arbitrary $h_1, h_2$ that fulfil Assumption 3.12 we have
$$\bigl|\varepsilon[h_1;\delta] - \varepsilon[h_2;\delta]\bigr| + \bigl|\tilde\varepsilon[h_1;\delta] - \tilde\varepsilon[h_2;\delta]\bigr| \leq C\bigl(\varepsilon[h_1;\delta] + \varepsilon[h_2;\delta]\bigr)\,\|h_1 - h_2\|_Z.$$
Proof.
We fix $\delta$, abbreviate $\varepsilon_i = \varepsilon[h_i;\delta]$ and $\tilde\varepsilon_i = \tilde\varepsilon[h_i;\delta]$, and for arbitrary $\tau \in [0,1]$ and $(\varepsilon,\tilde\varepsilon) \in U_\delta$ we write $h(\tau) = \tau h_1 + (1-\tau) h_2$, as well as
$$\bar g_i(\varepsilon,\tilde\varepsilon,\tau) = g_i[h(\tau);\varepsilon,\tilde\varepsilon,\delta], \qquad \varepsilon(\tau) = \varepsilon[h(\tau);\delta], \qquad \tilde\varepsilon(\tau) = \tilde\varepsilon[h(\tau);\delta],$$
so that
$$\varepsilon(\tau) = \bar g_1\bigl(\varepsilon(\tau), \tilde\varepsilon(\tau), \tau\bigr), \qquad \tilde\varepsilon(\tau) = \bar g_2\bigl(\varepsilon(\tau), \tilde\varepsilon(\tau), \tau\bigr) \tag{44}$$
holds by construction. For fixed $(\varepsilon,\tilde\varepsilon)$ we estimate $\partial_\tau \bar g_i$ as follows:
$$|\partial_\tau \bar g_i(\varepsilon,\tilde\varepsilon,\tau)| \leq g_i\bigl[|h_1 - h_2|;\varepsilon,\tilde\varepsilon,\delta\bigr] = \varepsilon^2 \int_0^\infty \xi\, a_\delta(\xi)\, |h_1(\xi) - h_2(\xi)|\, \Gamma_i(\xi;\varepsilon,\tilde\varepsilon,\delta)\, d\xi$$
$$\leq \varepsilon^2 \lceil h_1 - h_2\rceil_Z\, \widehat G_i[1;\delta] + \varepsilon^2 \lfloor h_1 - h_2\rfloor_Z\, K_{i,\delta} \leq C\,\varepsilon^2 K_{i,\delta}\, \|h_1 - h_2\|_Z,$$
compare (39) and (40), and Lemma 3.11 combined with Lemma 3.15 provides
$$\bigl|\partial_\tau \bar g_i\bigl(\varepsilon(\tau), \tilde\varepsilon(\tau), \tau\bigr)\bigr| \leq C\,\varepsilon(\tau)\,\|h_1 - h_2\|_Z \leq C(\varepsilon_1 + \varepsilon_2)\,\|h_1 - h_2\|_Z.$$
Differentiating (44) with respect to $\tau$ yields
$$\varepsilon' = \partial_\varepsilon \bar g_1\, \varepsilon' + \partial_{\tilde\varepsilon} \bar g_1\, \tilde\varepsilon' + \partial_\tau \bar g_1, \qquad \tilde\varepsilon' = \partial_\varepsilon \bar g_2\, \varepsilon' + \partial_{\tilde\varepsilon} \bar g_2\, \tilde\varepsilon' + \partial_\tau \bar g_2,$$
where $'$ denotes $\tfrac{d}{d\tau}$. Moreover, Lemma 3.15 combined with $\bar g_1 = \varepsilon$, $\bar g_2 = \tilde\varepsilon$, and $\varepsilon/\tilde\varepsilon = O(1)$ provides
$$\partial_\varepsilon \bar g_1 = 2 + o(1), \qquad \partial_\varepsilon \bar g_2 = O(1), \qquad \partial_{\tilde\varepsilon} \bar g_1 = o(1), \qquad \partial_{\tilde\varepsilon} \bar g_2 = o(1),$$
and we conclude that
$$\begin{pmatrix} -1 + o(1) & o(1) \\ O(1) & 1 + o(1) \end{pmatrix} \begin{pmatrix} \varepsilon' \\ \tilde\varepsilon' \end{pmatrix} \sim \begin{pmatrix} \partial_\tau \bar g_1 \\ \partial_\tau \bar g_2 \end{pmatrix}.$$
Finally, we find
$$|\varepsilon_1 - \varepsilon_2| + |\tilde\varepsilon_1 - \tilde\varepsilon_2| \leq \int_0^1 \bigl(|\varepsilon'| + |\tilde\varepsilon'|\bigr)\, d\tau \leq C(\varepsilon_1 + \varepsilon_2)\,\|h_1 - h_2\|_Z,$$
which is the desired result.

3.3 The fixed-point problem for $\Phi$

For each $h \in Z$ and arbitrary parameters $(\varepsilon,\tilde\varepsilon)$ we define the function
$$J[h;\varepsilon,\tilde\varepsilon,\delta](z) := \psi(z;\varepsilon,\tilde\varepsilon,\delta) \int_z^\infty \frac{\xi\, a_\delta(\xi)\, h(\xi)}{\psi(\xi;\varepsilon,\tilde\varepsilon,\delta)}\, d\xi,$$
which is related to the fixed-point problem for $\Phi$ via
$$\bar I_\delta[\Phi] = \varepsilon[\Phi * \Phi;\delta]\; J\bigl[\Phi * \Phi;\, \varepsilon[\Phi * \Phi;\delta],\, \tilde\varepsilon[\Phi * \Phi;\delta],\, \delta\bigr],$$
compare (19). Notice that the exponential decay of $h$ implies that the function $J[h;\varepsilon,\tilde\varepsilon,\delta]$ is well defined and has finite moments, i.e.,
$$\int_0^\infty J[h;\varepsilon,\tilde\varepsilon,\delta](z)\, dz < \infty, \qquad \int_0^\infty z\, J[h;\varepsilon,\tilde\varepsilon,\delta](z)\, dz < \infty.$$
Moreover, below we show, cf.
Corollary 3.20, that the operator $J[\,\cdot\,;\varepsilon,\tilde\varepsilon,\delta]$ maps the space $Z$ into itself.

3.3.1 Approximation of the operator $J$

In this section we show that the operator $J$ can be approximated by
$$J_{\rm app}[h;\varepsilon,\tilde\varepsilon,\delta](z) := \chi_{[0,1]}(z)\,\psi(z;\varepsilon,\tilde\varepsilon,\delta) \int_0^\infty \xi\, J[h;\varepsilon,\tilde\varepsilon,\delta](\xi)\, d\xi,$$
that means all contributions coming from
$$J_{\rm res}[h;\varepsilon,\tilde\varepsilon,\delta] := J[h;\varepsilon,\tilde\varepsilon,\delta] - J_{\rm app}[h;\varepsilon,\tilde\varepsilon,\delta]$$
can be neglected. To prove this we split the operator $J$ into three parts, $J = J_1 + J_2 + J_3$, with
$$J_1[h;\varepsilon,\tilde\varepsilon,\delta](z) := +\,\chi_{[0,1]}(z)\,\psi(z;\varepsilon,\tilde\varepsilon,\delta) \int_0^\infty \frac{\xi\, a_\delta(\xi)\, h(\xi)}{\psi(\xi;\varepsilon,\tilde\varepsilon,\delta)}\, d\xi,$$
$$J_2[h;\varepsilon,\tilde\varepsilon,\delta](z) := -\,\chi_{[0,1]}(z)\,\psi(z;\varepsilon,\tilde\varepsilon,\delta) \int_0^z \frac{\xi\, a_\delta(\xi)\, h(\xi)}{\psi(\xi;\varepsilon,\tilde\varepsilon,\delta)}\, d\xi,$$
$$J_3[h;\varepsilon,\tilde\varepsilon,\delta](z) := +\,\chi_{[1,\infty)}(z)\,\psi(z;\varepsilon,\tilde\varepsilon,\delta) \int_z^\infty \frac{\xi\, a_\delta(\xi)\, h(\xi)}{\psi(\xi;\varepsilon,\tilde\varepsilon,\delta)}\, d\xi. \tag{45}$$
Notice that $J_{\rm app}[h;\varepsilon,\tilde\varepsilon,\delta]$, $J_1[h;\varepsilon,\tilde\varepsilon,\delta]$, and $J_2[h;\varepsilon,\tilde\varepsilon,\delta]$ are supported in $[0,1]$, whereas the support of $J_3[h;\varepsilon,\tilde\varepsilon,\delta]$ equals $[1,\infty)$. Moreover, the next result shows that $J_1$ does not contribute to the residual operator $J_{\rm res}$.

Remark 3.18.
For all $h \in Z$ and all parameters $(\varepsilon,\tilde\varepsilon,\delta)$ we have
$$J_{\rm res}[h;\varepsilon,\tilde\varepsilon,\delta](z) = J_2[h;\varepsilon,\tilde\varepsilon,\delta](z) - \psi(z;\varepsilon,\tilde\varepsilon,\delta)\left( \int_0^\infty \xi\, J_2[h;\varepsilon,\tilde\varepsilon,\delta](\xi)\, d\xi + \int_0^\infty \xi\, J_3[h;\varepsilon,\tilde\varepsilon,\delta](\xi)\, d\xi \right)$$
for $0 \leq z \leq 1$, as well as
$$J_{\rm res}[h;\varepsilon,\tilde\varepsilon,\delta](z) = J_3[h;\varepsilon,\tilde\varepsilon,\delta](z)$$
for $z \geq 1$.

Proof. The second assertion is a direct consequence of (45). Now let $0 \leq z \leq 1$. Due to the normalization condition $\int_0^1 \xi\,\psi(\xi;\varepsilon,\tilde\varepsilon,\delta)\, d\xi = 1$ we have
$$\int_0^\infty \xi\, J_1[h;\varepsilon,\tilde\varepsilon,\delta](\xi)\, d\xi = \int_0^\infty \frac{\xi\, a_\delta(\xi)\, h(\xi)}{\psi(\xi;\varepsilon,\tilde\varepsilon,\delta)}\, d\xi,$$
and this implies
$$\psi(z;\varepsilon,\tilde\varepsilon,\delta) \int_0^\infty \xi\, J_1[h;\varepsilon,\tilde\varepsilon,\delta](\xi)\, d\xi = J_1[h;\varepsilon,\tilde\varepsilon,\delta](z).$$
Moreover, by definition we have
$$J_{\rm res}[h;\varepsilon,\tilde\varepsilon,\delta](z) = J_1[h;\varepsilon,\tilde\varepsilon,\delta](z) + J_2[h;\varepsilon,\tilde\varepsilon,\delta](z) - \psi(z;\varepsilon,\tilde\varepsilon,\delta) \sum_{i=1}^3 \int_0^\infty \xi\, J_i[h;\varepsilon,\tilde\varepsilon,\delta](\xi)\, d\xi,$$
and the combination of both results yields the first assertion.

In the next step we estimate the operators $J_2$ and $J_3$ as well as their derivatives with respect to $(\varepsilon,\tilde\varepsilon)$.

Lemma 3.19.
For all $h \in Z$ and all parameters $(\varepsilon,\tilde\varepsilon,\delta)$ that satisfy Assumption 3.9 we find
$$\bigl|J_2[h;\varepsilon,\tilde\varepsilon,\delta](z)\bigr| + \bigl|\partial_\varepsilon J_2[h;\varepsilon,\tilde\varepsilon,\delta](z)\bigr| + \bigl|\partial_{\tilde\varepsilon} J_2[h;\varepsilon,\tilde\varepsilon,\delta](z)\bigr| \leq C\sqrt{\delta}\,\lceil h\rceil_Z$$
for all $0 \leq z \leq 1$, as well as
$$\bigl|J_3[h;\varepsilon,\tilde\varepsilon,\delta](z)\bigr| + \bigl|\partial_\varepsilon J_3[h;\varepsilon,\tilde\varepsilon,\delta](z)\bigr| + \bigl|\partial_{\tilde\varepsilon} J_3[h;\varepsilon,\tilde\varepsilon,\delta](z)\bigr| \leq C\,\lfloor h\rfloor_Z\, z^{\beta_2}\exp(-\beta_1 z)$$
for all $z \geq 1$.

Proof. Let $0 \leq z \leq 1$. The definition of $J_2$ provides
$$J_2[h;\varepsilon,\tilde\varepsilon,\delta](z) = -\int_0^z \xi\, a_\delta(\xi)\, h(\xi)\, \exp\left( \int_z^\xi a_\delta(y)\bigl(b_\delta(y) - \varepsilon - \tilde\varepsilon y\bigr)\, dy \right) d\xi,$$
and we estimate
$$\bigl|J_2[h;\varepsilon,\tilde\varepsilon,\delta](z)\bigr| \leq \lceil h\rceil_Z\, \exp\left( C\,\frac{\varepsilon + \tilde\varepsilon}{\sqrt{\delta}} \right) \int_0^z \xi\, a_\delta(\xi)\, \exp\left( -\int_\xi^z a_\delta(y)\, \min\{0, b_\delta(y)\}\, dy \right) d\xi$$
$$\leq C\,\lceil h\rceil_Z \int_0^z \xi\, a_\delta(\xi)\, \exp\left( -\int_0^1 a_\delta(y)\, \min\{0, b_\delta(y)\}\, dy \right) d\xi \leq C\,\lceil h\rceil_Z \int_0^1 \xi\, a_\delta(\xi)\, d\xi \leq C\sqrt{\delta}\,\lceil h\rceil_Z.$$
Moreover,
$$\bigl|\partial_\varepsilon J_2[h;\varepsilon,\tilde\varepsilon,\delta](z)\bigr| \leq \int_0^z \xi\, a_\delta(\xi)\, |h(\xi)|\, \exp\left( \int_z^\xi a_\delta(y)\bigl(b_\delta(y) - \varepsilon - \tilde\varepsilon y\bigr)\, dy \right) \left| \int_z^\xi a_\delta(y)\, dy \right| d\xi \leq C\sqrt{\delta}\,\lceil h\rceil_Z,$$
and the estimate for $\partial_{\tilde\varepsilon} J_2[h;\varepsilon,\tilde\varepsilon,\delta](z)$ is entirely similar.

Now let $z \geq 1$. Then
$$\bigl|J_3[h;\varepsilon,\tilde\varepsilon,\delta](z)\bigr| \leq \int_z^\infty \xi\, a_\delta(\xi)\, |h(\xi)|\, \exp\left( \int_z^\xi a_\delta(y)\bigl(b_\delta(y) - \varepsilon - \tilde\varepsilon y\bigr)\, dy \right) d\xi \leq C \int_z^\infty |h(\xi)|\, \exp\left( \int_z^\xi a_\delta(y)\, b_\delta(y)\, dy \right) d\xi$$
$$\leq C \int_z^\infty |h(\xi)|\, \exp\bigl( C + 3\ln\xi - (\xi - z) \bigr)\, d\xi = C\, e^{z} \int_z^\infty \xi^3\, e^{-\xi}\, |h(\xi)|\, d\xi,$$
as well as
$$\bigl|\partial_\varepsilon J_3[h;\varepsilon,\tilde\varepsilon,\delta](z)\bigr| + \bigl|\partial_{\tilde\varepsilon} J_3[h;\varepsilon,\tilde\varepsilon,\delta](z)\bigr| \leq \int_z^\infty \xi\, a_\delta(\xi)\, |h(\xi)|\, \exp\left( \int_z^\xi a_\delta(y)\bigl(b_\delta(y) - \varepsilon - \tilde\varepsilon y\bigr)\, dy \right) \int_z^\xi a_\delta(y)(y+1)\, dy\; d\xi$$
$$\leq C \int_z^\infty |h(\xi)|\, (\xi - z)\, \exp\bigl( C + 3\ln\xi - (\xi - z) \bigr)\, d\xi = C\, e^{z} \int_z^\infty \xi^3 (\xi - z)\, e^{-\xi}\, |h(\xi)|\, d\xi.$$
Finally, using (28) completes the proof.

As a consequence of Remark 3.18 and Lemma 3.19 we obtain estimates for the residual operator. In particular, $h \in Z$ implies $J_{\rm res}[h;\varepsilon,\tilde\varepsilon,\delta] \in Z$, and hence $J_{\rm app}[h;\varepsilon,\tilde\varepsilon,\delta] \in Z$.

Corollary 3.20.
The assumptions from Lemma 3.19 imply
$$J_{\rm res}[h;\varepsilon,\tilde\varepsilon,\delta] \in Z, \qquad \partial_\varepsilon J_{\rm res}[h;\varepsilon,\tilde\varepsilon,\delta] \in Z, \qquad \partial_{\tilde\varepsilon} J_{\rm res}[h;\varepsilon,\tilde\varepsilon,\delta] \in Z$$
with
$$\lceil J_{\rm res}[h;\varepsilon,\tilde\varepsilon,\delta]\rceil_Z \leq C\delta\,\lceil h\rceil_Z + C\,\lfloor h\rfloor_Z, \qquad \lfloor J_{\rm res}[h;\varepsilon,\tilde\varepsilon,\delta]\rfloor_Z \leq C\,\lfloor h\rfloor_Z,$$
as well as
$$\|\partial_\varepsilon J_{\rm res}[h;\varepsilon,\tilde\varepsilon,\delta]\|_Z + \|\partial_{\tilde\varepsilon} J_{\rm res}[h;\varepsilon,\tilde\varepsilon,\delta]\|_Z \leq C\delta\,\|h\|_Z.$$
In particular, for fixed $h$ we have
$$\|J_{\rm res}[h;\varepsilon_1,\tilde\varepsilon_1,\delta] - J_{\rm res}[h;\varepsilon_2,\tilde\varepsilon_2,\delta]\|_Z \leq C\delta\,\|h\|_Z\,\bigl( |\varepsilon_1 - \varepsilon_2| + |\tilde\varepsilon_1 - \tilde\varepsilon_2| \bigr).$$
Proof.
All assertions are direct consequences of Remark 3.18 and Lemma 3.19.
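The splitting $J = J_1 + J_2 + J_3$ from (45) is elementary to verify for concrete ingredients. The following sketch evaluates a model operator of the form $J[h](z) = \psi(z)\int_z^\infty \xi\, a(\xi)\, h(\xi)/\psi(\xi)\, d\xi$ by quadrature and checks the decomposition pointwise; the choices of $a$, $\psi$, and $h$ are toy assumptions for illustration only, not the actual functions $a_\delta$ and $\psi(\cdot;\varepsilon,\tilde\varepsilon,\delta)$ of the paper.

```python
import math

# Toy model data (illustrative assumptions): a smooth weight a, a positive
# kernel psi with exponential decay, and an exponentially decaying profile h.
a   = lambda y: 1.0 / (1.0 + y)
psi = lambda y: math.exp(-y)
h   = lambda y: math.exp(-2.0 * y)

def integral(f, lo, hi, n=4000):
    """Plain midpoint rule on [lo, hi]."""
    dx = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * dx) for k in range(n)) * dx

ZMAX = 40.0  # truncation of the half line (the integrand decays like e^{-x})
kern = lambda x: x * a(x) * h(x) / psi(x)

def J(z):
    return psi(z) * integral(kern, z, ZMAX)

def J1(z):  # supported in [0,1]; integral over the whole half line
    return psi(z) * integral(kern, 0.0, ZMAX) if z <= 1 else 0.0

def J2(z):  # supported in [0,1]; subtracts the contribution below z
    return -psi(z) * integral(kern, 0.0, z) if z <= 1 else 0.0

def J3(z):  # supported in [1, infinity)
    return J(z) if z >= 1 else 0.0

# the decomposition J = J1 + J2 + J3 holds pointwise up to quadrature error
for z in (0.3, 0.7, 1.5, 3.0):
    assert abs(J(z) - (J1(z) + J2(z) + J3(z))) < 1e-3
```

On $[0,1]$ the identity reduces to $\int_z^\infty = \int_0^\infty - \int_0^z$, exactly as in the definition (45); for $z \geq 1$ only $J_3$ survives.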
Here we introduce suitable subsets of the function space $Z$ that allow us to apply the contraction principle to the operator $\bar I_\delta$. For this reason we introduce the functions
$$\widehat\Phi_\delta(z) := \chi_{[0,1]}(z)\,\psi(z;0,0,\delta),$$
which satisfy $\|\widehat\Phi_\delta - \Phi_{\rm LSW}\|_Z \to 0$ as $\delta \to 0$, see (23) and (24).
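The contraction principle invoked for $\bar I_\delta$ below follows the standard Banach fixed-point pattern: exhibit an invariant set, verify a Lipschitz constant $q < 1$ on it, and iterate. The following sketch illustrates this pattern on a toy scalar map (an assumption for illustration only, unrelated to the actual operator $\bar I_\delta$):

```python
import math

def fixed_point(T, x0, tol=1e-12, max_iter=1000):
    """Banach iteration x_{n+1} = T(x_n); converges when T is a
    contraction on a set containing the iterates."""
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

# toy contraction on [0, 1]: |T'(x)| = |cos x| / 2 <= 1/2 < 1
T = lambda x: 0.5 * math.sin(x) + 0.3
x_star = fixed_point(T, 0.0)
assert abs(T(x_star) - x_star) < 1e-10  # fixed point found

# crude a posteriori estimate of the contraction rate
q = abs(T(x_star + 1e-3) - T(x_star)) / 1e-3
assert q < 1.0
```

In the paper the role of the invariant set is played by $Y_\delta$, and the smallness of the Lipschitz constant comes from the estimates on $I_{\rm res}$ and $I_{\rm app}$ in Lemma 3.23.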
Definition 3.21.
Let $\mu_\delta$ be a number with
$$\mu_\delta = o(1), \qquad \delta\,K_{2,\delta} = o\bigl(\mu_\delta^2\bigr), \qquad \|\widehat\Phi_\delta - \Phi_{\rm LSW}\|_Z = o\bigl(\mu_\delta^2\bigr), \tag{46}$$
and define the sets
$$Y_\delta = \bigl\{\Phi \in Z : \|\Phi - \widehat\Phi_\delta\|_Z \leq \mu_\delta^2\bigr\}, \qquad Z_\delta = \bigl\{h \in Z : \|h - \widehat\Phi_\delta * \widehat\Phi_\delta\|_Z \leq \mu_\delta\bigr\}.$$

Lemma 3.22.
For all sufficiently small $\delta$ the following assertions are satisfied.

1. $\Phi_{\rm LSW} \in Y_\delta$.
2. $\Phi \in Y_\delta$ implies $\Phi * \Phi \in Z_\delta$.
3. Each $h \in Z_\delta$ satisfies Assumption 3.12; hence the solutions $\varepsilon[h;\delta]$ and $\tilde\varepsilon[h;\delta]$ from Lemma 3.15 exist.

Proof. The first assertion holds by construction. For $\Phi \in Y_\delta$ we have
$$\|\Phi * \Phi - \widehat\Phi_\delta * \widehat\Phi_\delta\|_Z = \bigl\|2\,\widehat\Phi_\delta * (\Phi - \widehat\Phi_\delta) + (\Phi - \widehat\Phi_\delta) * (\Phi - \widehat\Phi_\delta)\bigr\|_Z$$
$$\leq 2\,\bigl\|\widehat\Phi_\delta * (\Phi - \widehat\Phi_\delta)\bigr\|_Z + \bigl\|(\Phi - \widehat\Phi_\delta) * (\Phi - \widehat\Phi_\delta)\bigr\|_Z \leq C\,\|\Phi - \widehat\Phi_\delta\|_Z = o(\mu_\delta).$$
This implies $\|\widehat\Phi_\delta * \widehat\Phi_\delta - \Phi_{\rm LSW} * \Phi_{\rm LSW}\|_Z \leq C\,\|\widehat\Phi_\delta - \Phi_{\rm LSW}\|_Z = o(\mu_\delta)$, and for all $h \in Z_\delta$ we find
$$\|h - \Phi_{\rm LSW} * \Phi_{\rm LSW}\|_Z \leq \|h - \widehat\Phi_\delta * \widehat\Phi_\delta\|_Z + C\,\|\widehat\Phi_\delta - \Phi_{\rm LSW}\|_Z = O(\mu_\delta) = o(1).$$
Therefore,
$$\lfloor h\rfloor_Z = \lfloor \Phi_{\rm LSW} * \Phi_{\rm LSW}\rfloor_Z + o(1) = o(1),$$
and the proof is complete.

3.3.3 Contraction principle for $\Phi$

Recall that the solution $(\varepsilon, \tilde\varepsilon)[h;\delta]$ from Lemma 3.15 satisfies
$$\frac{1}{\varepsilon[h;\delta]} = \int_0^\infty \xi\, J\bigl[h;\varepsilon[h;\delta],\tilde\varepsilon[h;\delta],\delta\bigr](\xi)\, d\xi.$$
Therefore we define $I[h;\delta] = I_{\rm app}[h;\delta] + I_{\rm res}[h;\delta]$ with
$$I_{\rm app}[h;\delta](z) := \varepsilon[h;\delta]\; J_{\rm app}\bigl[h;\varepsilon[h;\delta],\tilde\varepsilon[h;\delta],\delta\bigr](z), \qquad I_{\rm res}[h;\delta](z) := \varepsilon[h;\delta]\; J_{\rm res}\bigl[h;\varepsilon[h;\delta],\tilde\varepsilon[h;\delta],\delta\bigr](z),$$
and this implies $\bar I_\delta[\Phi] = I[\Phi * \Phi;\delta]$, with $\bar I_\delta$ as in (19), as well as
$$I_{\rm app}[h;\delta](z) = \chi_{[0,1]}(z)\,\psi\bigl(z;\varepsilon[h;\delta],\tilde\varepsilon[h;\delta],\delta\bigr).$$
In particular, $I_{\rm app}[h;\delta]$ is close to $\widehat\Phi_\delta$ with error controlled by $\varepsilon$ and $\tilde\varepsilon$, provided that $\delta$ is small and $h$ is close to $\Phi_{\rm LSW} * \Phi_{\rm LSW}$.

Lemma 3.23.
For sufficiently small $\delta$ the operator $I$ maps $Z_\delta$ into $Y_\delta$ and is Lipschitz continuous with arbitrarily small constant. More precisely,
$$\|I[h_1;\delta] - I[h_2;\delta]\|_Z \leq o(1)\,\|h_1 - h_2\|_Z$$
for all $h_1, h_2 \in Z_\delta$.

Proof. For each $h \in Z_\delta$, Lemma 3.17 provides
$$\|I_{\rm res}[h;\delta]\|_Z \leq C\,\varepsilon[h;\delta]\,\delta\,\|h\|_Z \leq C\delta\,K_{2,\delta}\,\|h\|_Z,$$
where we used Lemma 3.11 and Lemma 3.15. Moreover, (46) implies
$$\|I_{\rm res}[h;\delta]\|_Z \leq o\bigl(\mu_\delta^2\bigr)\,\|h\|_Z = o\bigl(\mu_\delta^2\bigr)\,\|\widehat\Phi_\delta * \widehat\Phi_\delta\|_Z = o\bigl(\mu_\delta^2\bigr).$$
From Lemma 3.6 we derive
$$\widetilde C^{-1}\,\widehat\Phi_\delta(z) \leq \psi\bigl(z;\varepsilon[h;\delta],\tilde\varepsilon[h;\delta],\delta\bigr) \leq \widetilde C\,\widehat\Phi_\delta(z)$$
for all $0 \leq z \leq 1$, where
$$\widetilde C[h;\delta] = \exp\left( C\,\frac{\varepsilon[h;\delta] + \tilde\varepsilon[h;\delta]}{\sqrt{\delta}} \right) = \exp\bigl( o\bigl(\mu_\delta^2\bigr) \bigr).$$
We conclude
$$\bigl| I_{\rm app}[h;\delta](z) - \widehat\Phi_\delta(z) \bigr| \leq \bigl( \widetilde C[h;\delta] - 1 \bigr)\,\widehat\Phi_\delta(z) = o\bigl(\mu_\delta^2\bigr),$$
and find $\|I[h;\delta] - \widehat\Phi_\delta\|_Z = o\bigl(\mu_\delta^2\bigr)$, which implies $I[h;\delta] \in Y_\delta$ for small $\delta$. The Lipschitz continuity of $I_{\rm res}$ follows from Lemma 3.17 and Corollary 3.20, and the Lipschitz continuity of $I_{\rm app}$ is a consequence of Lemma 3.6. Moreover, using (46) and the same estimates as above we find that the Lipschitz constants are of order $o(1)$.

Now we can prove Theorem 2.4 and Proposition 2.6 from §2.

Corollary 3.24.
The operator $\bar I_\delta$ is a contraction on $Y_\delta$, and thus there exists a unique fixed point in $Y_\delta$. Moreover, this fixed point is nonnegative since the cone of nonnegative functions is invariant under the action of $\bar I_\delta$.

Proof. By construction we have $\bar I_\delta[\Phi] = I[\Phi * \Phi;\delta]$, and all assertions follow from Lemma 3.22 and Lemma 3.23.

Finally, we discuss the influence of the parameters $(\beta_1, \beta_2)$ that control the decay behavior of the solution, see (18).

Remark 3.25.
Consider another pair of parameters $(\tilde\beta_1, \tilde\beta_2)$ with $\tilde\beta_1 > \beta_1$ and $\tilde\beta_2 > \beta_2$, and denote by $\widetilde Y_\delta$ the corresponding set from Lemma 3.22. Our previous results imply, compare Remark 3.2 and Remark 3.13, that $\widetilde Y_\delta \subset Y_\delta$ for all small $\delta$, and this yields the following two assertions. (i) For small but fixed $\delta$ the solution that is found with the parameters $(\tilde\beta_1, \tilde\beta_2)$ equals the solution for $(\beta_1, \beta_2)$. (ii) The smaller $\delta$ is, the larger we can choose the parameters $(\beta_1, \beta_2)$, i.e., the faster the solution decays.

Acknowledgements.
MH and BN gratefully acknowledge support through the DFG Research Center Matheon and the DFG Research Group Analysis and Stochastics in Complex Physical Systems. JJLV was supported through the Max Planck Institute for Mathematics in the Sciences, the Alexander von Humboldt Foundation, and DGES Grant MTM2007-61755.
References

[1] J. Carr and O. Penrose. Asymptotic behaviour in a simplified Lifshitz–Slyozov equation. Physica D, 124:166–176, 1998.
[2] Y. Farjoun and J. Neu. An asymptotic solution of aggregation dynamics. In Progress in Industrial Mathematics at ECMI 2006, Mathematics in Industry, Vol. 12, 368–375, 2008.
[3] B. Giron, B. Meerson, and P. V. Sasorov. Weak selection and stability of localized distributions in Ostwald ripening. Phys. Rev. E, 58:4213–6, 1998.
[4] M. Herrmann, P. Laurençot, and B. Niethammer. Self-similar solutions with fat tails to a coagulation equation with nonlocal drift. In preparation.
[5] P. Laurençot. The Lifshitz–Slyozov equation with encounters. Math. Models Methods Appl. Sci., 11(4):731–748, 2001.
[6] I. M. Lifshitz and V. V. Slyozov. The kinetics of precipitation from supersaturated solid solutions. J. Phys. Chem. Solids, 19:35–50, 1961.
[7] M. Marder. Correlations and Ostwald ripening. Phys. Rev. A, 36:858–874, 1987.
[8] B. Meerson. Fluctuations provide strong selection in Ostwald ripening. Phys. Rev. E, 60(3):3072–5, 1999.
[9] G. Menon and R. L. Pego. Approach to self-similarity in Smoluchowski's coagulation equations. Comm. Pure Appl. Math., 57:1197–1232, 2004.
[10] B. Niethammer. Derivation of the LSW theory for Ostwald ripening by homogenization methods. Arch. Rat. Mech. Anal., 147(2):119–178, 1999.
[11] B. Niethammer. Effective theories for Ostwald ripening. In Analysis and Stochastics of Growth Processes and Interface Models. Oxford University Press, 2008. To appear.
[12] B. Niethammer and R. L. Pego. Non-self-similar behavior in the LSW theory of Ostwald ripening. J. Stat. Phys., 95(5/6):867–902, 1999.
[13] B. Niethammer and J. J. L. Velázquez. On the convergence towards the smooth self-similar solution in the LSW model. Indiana Univ. Math. J., 55:761–794, 2006.
[14] B. Niethammer and J. J. L. Velázquez. On screening induced fluctuations in Ostwald ripening. J. Stat. Phys., 130(3):415–453, 2008.
[15] J. J. L. Velázquez. The Becker–Döring equations and the Lifshitz–Slyozov theory of coarsening. J. Stat. Phys., 92:195–236, 1998.
[16] C. Wagner. Theorie der Alterung von Niederschlägen durch Umlösen.