Bounded solutions for an ordinary differential system from the Ginzburg-Landau theory
aa r X i v : . [ m a t h . A P ] S e p Bounded solutions for an ordinary differential system fromthe Ginzburg-Landau theory.
Anne Beaulieu, LAMA, Univ Paris Est Creteil, Univ Gustave Eiffel, UPEM, CNRS, F-94010, Cr´eteil, France.September 30, 2019 [email protected]
Abstract.
In this paper, we look at a linear system of ordinary differential equations as derivedfrom the two-dimensional Ginzburg-Landau equation. In two cases, it is known that thissystem admits bounded solutions coming from the invariance of the Ginzburg-Landauequation by translations and rotations. The specific contribution of our work is to provethat in the other cases, the system does not admit any bounded solutions. We show thatthis bounded solution problem is related to an eigenvalue problem.AMS classification : 34B40: Ordinary Differential Equations, Boundary value prob-lems on infinite intervals. 35J60: Nonlinear PDE of elliptic type. 35P15: Estimation ofeigenvalues, upper and lower bound.
Let n and d be given integers, n ≥ d ≥
1. We define the following system ( a ′′ + a ′ r − ( n − d ) r a − f d b = − (1 − f d ) ab ′′ + b ′ r − ( n + d ) r b − f d a = − (1 − f d ) b (1.1)and the following equations a ′′ + a ′ r − d r a = − (1 − f d ) a (1.2)and a ′′ + a ′ r − d r a − af d = − (1 − f d ) a. (1.3)with the variable r >
0, and for real valued functions r a ( r ) and r b ( r ).Here f d is the only solution of the differential equation f ′′ d + f ′ d r − d r f d = − f d (1 − f d ) (1.4)with the conditions f d (0) = 0 and lim + ∞ f d = 1.1et us consider the Ginzburg-Landau equation on a bounded connected domain Ω, (cid:26) − ∆ u = ε u (1 − | u | ) in Ω u = g in ∂ Ω (1.5)where ε > u and g have complex values and degree ( g, ∂ Ω) ≥ − ∆ u = u (1 − | u | ) in R (1.6)where u is a complex valued map. The study of the energy-minimizing solutions ofequation (1.5) is in the book of Bethuel, Brezis H´elein, [3].Let us explain how the system (1.1) and the equations (1.2) and (1.3) are derived fromthe equations (1.5) and (1.6).We denote N ε ( u ) = ∆ u + ε u (1 − | u | ). Let u ( x ) = f d ( | x | ε ) e idθ . We have N ε ( u ) = 0.We will always denote f ( r ) = f d ( rε ) . We differentiate N ε at u . d N ε ( u )( ω ) = ∆ ω + ωε (1 − f ) − ε f e idθ e idθ .ω, where ω is any complex valued function and 2 u.ω = uω + ωu. We will use the operator e − idθ d N ε ( u ) e idθ instead of d N ε ( u ). We consider the Fourier expansion ω ( x ) = X n ≥ ( a n ( r ) e − inθ + b n ( r ) e inθ ) + a ( r ) , a n ( r ) ∈ C , b n ( r ) ∈ C . Letting ω n ( x ) = a n ( r ) e − inθ + b n ( r ) e inθ , we have2 e idθ .e idθ ω n = ω n + ω n = ( b n + a n ) e inθ + ( b n + a n ) e − inθ . Moreover e − idθ ∆( e idθ ω ) = ∆ ω − d r ω + i dr ∂ω∂θ . Then e − idθ d N ε ( u ) e idθ ω = X n ≥ e − inθ ( a ′′ n + a ′ n r − ( n − d ) r a n + a n ε (1 − f ) − a n ε f − b n ε f )+ X n ≥ e inθ ( b ′′ n + b ′ n r − ( n + d ) r b n + b n ε (1 − f ) − b n ε f − a n ε f )+ a ′′ + a ′ r − d r a + a ε (1 − f ) − a + a ε f . Separating the Fourier components of e − idθ d N ε ( u ) e idθ ω , we can consider the operatorsfor n ≥ , L n ( a n , b n ) = ( a ′′ n + a ′ n r − ( n − d ) r a n + a n ε (1 − f ) − b n ε f b ′′ n + b ′ n r − ( n + d ) r b n + b n ε (1 − f ) − a n ε f n = 0 , L ( a ) = a ′′ + a ′ r − d r a + a ε (1 − f ) − a + a ε f . When we have to solve the system ( L n ( a n , b n ) = ( α n , β n ) , L ( a ) = α ), for some given( α n , β n ) ∈ C × C and α ∈ C , we are led to consider separately the real part and theimaginary part. So we consider the following operators, where a n and b n are real valuedfunctionsfor n ≥ L n, R : ( a n , b n ) ( a ′′ n + a ′ n r − ( n − d ) r a n + a n ε (1 − f ) − b n ε f b ′′ n + b ′ n r − ( n + d ) r b n + b n ε (1 − f ) − a n ε f ; L n, I : ( a n , b n ) ( a ′′ n + a ′ n r − ( n − d ) r a n + a n ε (1 − f ) + b n ε f b ′′ n + b ′ n r − ( n + d ) r b n + b n ε (1 − f ) + a n ε f and, for n = 0 , L , I : a a ′′ + a ′ r − d r a + a ε (1 − f ) ; L , R : a a ′′ + a ′ r − d r a + a ε (1 − f ) − a ε f . Considering ( − a n , b n ), we see that, for n ≥
1, only one of the operators L n, R or L n, I is of interest. The eigenvalue problems L n, R ( a n , b n ) = − λ ( ε )( a n , b n ) , ( a n , b n )(1) = 0,for all integers d ≥ n ≥
1, as well as the problems L , R ( a ) = − λ ( ε ) a and L , I ( a ) = − λ ( ε ) a , a (1) = 0, have been studied in several papers, including [6], [9],[7], [8], [1]. In the third chapter of their book [10], Pacard and Rivi`ere study the system(1.1) and the equations (1.2) and (1.3) for d = 1. These authors’ aim is to constructsome solutions for (1.5).Let us now bring together some of the results contained in the above studies. Theorem 1.1
For d ≥ and n ≥ , the existence of an eigenvalue λ ( ε ) → as ε → is equivalent to the existence of a bounded solution of (1.1). For the equations (1.2) and(1.3) and for all d ≥ , the results of [10] are valid for all d ≥ , that is the real vectorspace of the bounded solutions of (1.2) is one-dimensional, spanned by f d and there isno bounded solution of (1.3). For n = 1 , the vector space of the bounded solutions of(1.1) is also a one dimensional vector space, spanned by ( f ′ d + dr f d , f ′ d − dr f d ) . For d = 1 and n ≥ , there are no bounded solutions. For d ≥ and for n ≥ d − , there are nobounded solutions. A bounded solution is any solution defined at r = 0 and which has a finite limit as r → + ∞ . For all d ≥
1, the known bounded solutions, for n = 0 and n = 1, come fromthe invariance of the Ginzburg-Landau equation with respect to the translations and therotations.The present paper’s aim is to prove the following Theorem 1.2
For all real numbers d and n such that d ≥ and n > , the system (1.1)has no bounded solution. We will consider n and d as real parameters, although the Ginzburg-Landau problem isabout integers n and d . So we have to consider the functions f d , for d ∈ R , d ≥
1. But in[5], where all the solutions of (1.4) are studied, the authors consider only the case d ∈ N ⋆ .However, this hypothesis is not essential in their paper. Here are the properties of f d weneed. 3 heorem 1.3 Let d ∈ R + ⋆ . For all a > there exists a unique solution of (1.4) suchthat lim r → a f ( r ) r − d = 1 . There exists a unique value A d > such that this solution isdefined in R + ⋆ and non decreasing. Denoting it by f d , we have the expansions f d ( r ) = 1 − d r + O ( 1 r ) near + ∞ (1.7) and f d ( r ) = A d ( r d − d + 1) r d +2 ) + O ( r d +4 ) near 0 . (1.8) Moreover, if we denote g ( d ) ( r ) = r − d f d ( r ) , then, for all α > , the map d g ( d ) iscontinuous from ]0 , + ∞ [ into L ∞ ([0 , α ]) . For d ∈ R + ⋆ , γ ∈ R + and γ ∈ R + , we define the following system ( a ′′ + a ′ r − γ r a − f d b − f d a = − (1 − f d ) ab ′′ + b ′ r − γ r b − f d a − f d b = − (1 − f d ) b. (1.9)Letting x = a + b and y = a − b , we are led to the system verified by ( x, y ), that is ( x ′′ + x ′ r − γ r x + ξ r y − f d x = − (1 − f d ) xy ′′ + y ′ r − γ r y + ξ r x = − (1 − f d ) y (1.10)with γ = γ + γ ξ = γ − γ . In all the paper, when there is no other indication, we will suppose that d ≥ γ + γ − d ≥
1. We will denote n = r γ + γ − d . (1.11)But Theorem 1.4, Theorem 1.5 and Theorem 1.6 will be valid for d > γ + γ − d > R > a ( R ) , a ′ ( R ) , b ( R ) , b ′ ( R )), the system (1.9) has a unique solution, de-fined in R + ⋆ . This solution is continuous wrt the real positive parameters d , γ and γ , because the coefficients of the system depend continuously on them. Moreover,when the Cauchy data depends continuously on the parameters, so does the solution( a ( r ) , a ′ ( r ) , b ( r ) , b ′ ( r )), which, consequently, is bounded independently of r and of theparameters, when r and the parameters stay in a given compact set. This principlecomes from the Cauchy-Lipschitz Theorem, whose proof rests on an application of theBanach Fixed Point Theorem to a suitable integral equation. However, we don’t knowwhether a given solution keeps the same behavior at 0 or at + ∞ for all the values of theparameters, even when this solution is continuous wrt the parameters. So we begin withthe definition of some continuous solutions wrt to the parameters in a certain range, andwhose behaviors remain inchanged, either at 0 or at + ∞ .To begin with, let us give the following definition4 efinition 1.1 We say that1. a = O ( f ) at 0 if there exists R > and C > such that ∀ r ∈ ]0 , R ] , | a ( r ) | ≤ C | f ( r ) | . a has the behavior f at 0, and we denote a ∼ f , if there exists a map g , such that lim g = 0 , | a − f | = O ( f g ) . a = o ( f ) at 0 if there exists a map g , such that lim g = 0 , a = f g. We will use the same convention at + ∞ . We will consider that ( d, γ , γ ) belongs to the set D = { ( d, γ , γ ) ∈ ( R + ) ; d ≥ γ >
1; 0 ≤ γ ≤ γ < γ + 2 d + 2 } . Let us remark that ( d, | n − d | , n + d ) ∈ D , whenever d ≥ n ≥ d ∈ R , n ∈ R .We will need the following subsets of DD = { ( d, γ , γ ) ∈ D ; γ > } and D = { ( d, γ , γ ) ∈ D ; 0 ≤ γ <
14 ; − γ − γ + 2 d + 2 > − γ + 2 d + 1 > } . (1.12)Whenever d ≥ n ≥ n ∈ R , d ∈ R , we remark that ( d, | n − d | , n + d ) ∈ D when n = d and that ( d, | n − d | , n + d ) ∈ D when | n − d | < .The following theorem is about a base of solutions defined near 0. Theorem 1.4
For all ( d, γ , γ ) ∈ D , there exist four independent solutions ( a, b ) of(1.9) verifying the following conditions1. ( a ( r ) , b ( r )) ∼ ( O ( r γ +2 d +2 ) , r γ ) and ( a ′ ( r ) , b ′ ( r )) ∼ ( O ( r γ +2 d +1 ) , γ r γ − ) . ( a ( r ) , b ( r )) ∼ (cid:26) ( O ( r θ ( r )) , r − γ ) if ( d, γ , γ ) ∈ D ( O ( r − γ +2 d +2 ) , r − γ ) if ( d, γ , γ ) ∈ D ( a ′ ( r ) , b ′ ( r )) ∼ (cid:26) ( O ( rθ ( r )) , − γ r − γ − ) if ( d, γ , γ ) ∈ D ( O ( r − γ +2 d +1 ) , − γ r − γ − ) if ( d, γ , γ ) ∈ D where θ ( r ) = ( − r γ − + r − γ d γ + γ − d − if γ + γ − d − = 0 − r γ − log r if γ + γ − d − . ( a ( r ) , b ( r )) ∼ ( r γ , O ( r γ +2 d +2 )) and, if γ = 0 ( a ′ ( r ) , b ′ ( r )) ∼ ( γ r γ − , O ( r γ +2 d +1 )) while, if γ = 0 , ( a ′ ( r ) , b ′ ( r )) = ( O ( r ) , O ( r d +1 )) . . ( a ( r ) , b ( r )) ∼ (cid:26) ( r − γ , O ( r ˜ θ ( r )) if ( d, γ , γ ) ∈ D ( τ ( r ) , O ( τ ( r ) r d +2 )) if ( d, γ , γ ) ∈ D and ( a ′ ( r ) , b ′ ( r )) ∼ (cid:26) ( r − γ − , O ( r ˜ θ ( r )) if ( d, γ , γ ) ∈ D ( τ ′ ( r ) , O ( τ ′ ( r ) r d +2 )) if ( d, γ , γ ) ∈ D where ˜ θ ( r ) = ( − r γ − + r − γ d γ + γ − d − if γ + γ − d − = 0 − r γ − log r if γ + γ − d − τ ( r ) = ( r − γ − r γ γ if γ = 0 − log r if γ = 0 .
5. For j = 1 and for j = 3 , for all r > , the maps ( d, γ , γ ) ( a j ( r ) , a ′ j ( r ) , b j ( r ) , b ′ j ( r )) are continuous in D .
6. For j = 1 and for j = 3 , and for all r > , ( a j ( r ) , a ′ j ( r ) , b j ( r ) , b ′ j ( r )) is differentiablewrt to γ and wrt γ , whenever ( d, γ , γ ) ∈ D , and γ > γ .Moreover the map ( d, γ , γ ) ∂∂γ i ( a j ( r ) , a ′ j ( r ) , b j ( r ) , b ′ j ( r )) is continous, for i = 1 and i = 2 . And we have ( ∂a ∂γ i , ∂a ′ ∂γ i , ∂b ∂γ i , ∂b ′ ∂γ i )( r ) ∼ log r ( O ( r γ +2 d +2 ) , O ( r γ +2 d +1 ) , r γ , γ r γ − ) (1.13) and, if γ = 0 ( ∂a ∂γ i , ∂a ′ ∂γ i , ∂b ∂γ i , ∂b ′ ∂γ i )( r ) (1.14) ∼ log r ( r γ , γ r γ − + O ( r γ +1 ) , O ( r γ +2 d +2 ) , O ( r γ +2 d +1 ))
7. For j = 2 or for j = 4 , the same notation ( a j , b j ) is used for two solutions,one of them being defined for ( d, γ , γ ) ∈ D , the other one being defined for ( d, γ , γ ) ∈ D .Moreover, for each domain D i , i = 1 , and for all r > the maps ( d, γ , γ ) ( a j ( r ) , a ′ j ( r ) , b j ( r ) , b ′ j ( r )) are continuous in D i . For each r > , the partial differen-tiability of ( a j ( r ) , a ′ j ( r ) , b j ( r ) , b ′ j ( r )) wrt γ or wrt γ is also true separatly in eachdomain D i , i = 1 , . The second theorem is about a base of solutions defined near + ∞ . Theorem 1.5
1. We have a base of four solutions ( a, b ) of (1.9), with given behav-iors at + ∞ . In order to distinguish these solutions from the solutions defined inTheorem 1.4, we use the notation ( u i , v i ) , i = 1 , . . . , , for these solutions. We have ( u ( r ) , v ( r )) ∼ r → + ∞ ( e √ r / √ r, e √ r / √ r )(1 + O ( r − ));( u ( r ) , v ( r )) ∼ r → + ∞ ( e −√ r / √ r, e −√ r / √ r )(1 + O ( r − )); and ( u ( r ) , v ( r )) ∼ r → + ∞ ( r − n , − r − n )(1 + O ( r − ));( u ( r ) , v ( r )) ∼ r → + ∞ ( r n , − r n )(1 + O ( r − )) . . Except for j = 2 , the construction of ( u j , v j ) is done separatly for each compactsubset K of D . For each of the four solutions and for all r > the map ( d, γ , γ ) ( u j ( r ) , u ′ j ( r ) , v j ( r )) , v ′ j ( r )) is continuous on K . There partial derivatives wrt γ andwrt γ exist whenever γ < γ and are continuous. We have ( ∂u ∂γ i , ∂u ′ ∂γ i , ∂v ∂γ i , ∂v ′ ∂γ i )( r ) ∼ r → + ∞ e √ r √ r log r ( O ( r − ) , O ( r − ) , O ( r − ) , O ( r − ));( ∂u ∂γ i , ∂u ′ ∂γ i , ∂v ∂γ i , ∂v ′ ∂γ i )( r ) ∼ r → + ∞ e −√ r √ r log r ( O ( r − ) , O ( r − ) , O ( r − ) , O ( r − )) ; ( ∂u ∂γ i , ∂u ′ ∂γ i , ∂v ∂γ i , ∂v ′ ∂γ i )( r ) ∼ r → + ∞ log r ( r n , O ( r n − ) , − r n , O ( r n − ))(1 + O ( r − )) ; ( ∂u ∂γ i , ∂u ′ ∂γ i , ∂v ∂γ i , ∂v ′ ∂γ i )( r ) ∼ r → + ∞ log r ( r − n , O ( r − n − ) , − r − n , O ( r − n − ))(1 + O ( r − )) . By our construction, the solution ( u j , v j ) depends on the given compact set K , except for j = 2. For j = 1, this difficulty disappears after the proof of Theorem 1.6. For the othersolutions, named ( u , v ) and ( u , v ), we will have to make sure that the parameter( d, γ , γ ) stays in a compact set, as soon as we want and use the continuity and thedifferentiability of these solutions wrt the parameters.In [1] we already gave the behaviors of a base of solutions at 0 and at + ∞ . But, in thepresent paper, the continuity wrt to ( d, γ , γ ), especially of the five solutions ( a , b )and ( a , b ) (defined at 0) and ( u , v ), ( u , v ), ( u , v ) (defined at + ∞ ) and theredifferentiability wrt γ and γ , are essential.The following theorem connects the least behavior at 0 to the exponentially blowing upbehavior at + ∞ and the least behavior at + ∞ to the greater blowing up behavior at 0. Theorem 1.6
Suppose that d > and that γ ≥ γ ≥ , ( γ + γ ) / > d . Let ( a , b ) be the solution of (1.1) defined by ( a , b ) ∼ ( O ( r γ +2 d +2 ) , r γ ) . Then ( a , b ) blowsup exponentially at + ∞ . Let ( u , v ) be the solution of (1.1) defined by ( u , v ) ∼ + ∞ ( e −√ r √ r , e −√ r √ r ) . Then ( u , v ) ∼ C ( o ( r − γ ) , r − γ ) , for some C = 0 . Now, let us relate the problem (1.9) to an eigenvalue problem, which is a little bit differentfrom the one considered in the previous works on the subject, but, for our proof, we findit more suitable.Let 0 ≤ γ < γ , µ ∈ R and ε > ( a ′′ + a ′ r − γ r a − ε f a − ε f b = − ε µ (1 − f ) ab ′′ + b ′ r − γ r b − ε f b − ε f a = − ε µ (1 − f ) b (1.15)for r ∈ ]0 , a (1) = b (1) = 0 . Let us explain in which sense this canbe considered as an eigenvalue problem.Let γ ≥ H γ = { r ( a ( r ) , b ( r )) ∈ R ; ( ae iγ θ , be iθ ) ∈ H ( B (0 , , C ) × H ( B (0 , , C ) } , where ( r, θ ) are the polar coordinates in R . The dependence on γ is needed to distin-guish γ = 0 and γ = 0. We endow H γ with the scalar product < ( a, b ) | ( u, v ) > = Z ( ra ′ u ′ + rb ′ v ′ + γ r au + 1 r bv ) dr H γ is a Hibert space. Let H ′ γ be the topological dual space of H γ .We consider the T γ ,γ : H γ → H ′ γ defined by < T γ ,γ ( a, b ) , ( u, v ) > H ′ , H = Z ( ra ′ u ′ + rb ′ v ′ + γ r au + γ r bv + rε f ( a + b )( u + v )) dr. We remark that (( a, b ) , ( u, v )) < T γ ,γ ( a, b ) , ( u, v ) > H ′ γ , H γ is a scalar product on H γ . So, T γ ,γ is an isomorphism, by the Riesz Theorem.Last, let us define the embedding I : H γ → H ′ γ ( a, b ) (( u, v ) R r ( au + bv ) dr )Since the embedding H ( B (0 , × H ( B (0 , ⊂ L ( B (0 , × L ( B (0 , I is compact.Let us define C = ε (1 − f ) I. Since C is a compact operator and thanks to the continuityof T − γ ,γ , then T − γ ,γ C is a compact operator from H γ into itself. By the standardtheory of self adjoint compact operators, there exists a Hilbertian base of H formed ofeigenvectors of T − γ ,γ C . Now let us define m γ ,γ ( ε ) as the first eigenvalue for the above eigenvalue problem in H γ , that is m γ ,γ ( ε ) = inf ( a,b ) ∈H γ / { (0 , } R ( ra ′ + rb ′ + γ r a + γ r b + rε f d ( rε )( a + b ) ) dr ε R r (1 − f d ( rε ))( a + b ) dr (1.16)and let us define m ( ε ) = inf a ∈H d / { } R ( ra ′ + d r a ) dr ε R r (1 − f d ( rε )) a dr (1.17)It is a classical result that these infimum are attained. Considering the rescaling (˜ a, ˜ b )( r ) =( a ( εr ) , b ( εr )) and an extension by 0 outside [0 , /ε ], we see that ε m γ ,γ ( ε ) decreaseswhen ε decreases. Then lim ε → m γ ,γ ( ε ) exists.Moreover, m γ ,γ ( ε ) is a simple eigenvalue and there exists an eigenvector ( a, b ) verifying a ( r ) ≥ − b ( r ) ≥ r > . Also, m ( ε ) is realized by some function a ( r ) ≥ L n R ω = − λ ( ε ) ω. Wehave ( ∃ λ < ⇔ ( ∃ µ < . By examining the proof of Theorems 1.4, the possible behaviors at 0 of the solutions ofthe system ( a ′′ + a ′ r − γ r a − f d a − f d b = − µ ( ε )(1 − f d ) ab ′′ + b ′ r − γ r b − f d b − f d a = − µ ( ε )(1 − f d ) b. (1.18)8nd those of the solutions of (1.1) are the same. More precisely, if µ ( ε ) is a boundedeigenvalue, it behaves as an additional bounded parameter and we construct two solutionsdenoted by ( α , β ) and ( α , β ), depending on ε and verifying, for r ∈ [0 , R ], | α ( r ) | + | β ( r ) − r γ | ≤ Cr γ +2 d +1 , | α ′ ( r ) | + | β ′ ( r ) − γ r γ − | ≤ Cr γ +2 d and | α ( r ) − r γ | + | β ( r ) | ≤ Cr γ +2 , | α ′ ( r ) − γ r γ − | + | β ′ ( r ) | ≤ Cr γ +1 , where R and C are independent of ε , as in the proof of Theorem 1.4.We can suppose that µ ( ε ) → µ , as ε → . Let ω ε = ( a ε , b ε ) be an eigenvector associatedto µ ( ε ). We define ˜ ω ε ( r ) = ω ε ( εr ) , for r ∈ [0 , ε ]. For some constants A ε and B ε ,(˜ a ε , ˜ b ε ) = A ε ( α , β ) + B ε ( α , β ) . We may suppose that max {| A ε | , | B ε |} = 1. Thus,(˜ a ε , ˜ a ′ ε , ˜ b ε , ˜ b ′ ε )( R ) is bounded independently of ε . Considering it as a Cauchy data, inthe range r ≥ R , we deduce that (˜ a ε , ˜ a ′ ε , ˜ b ε , ˜ b ′ ε ) is bounded independently of ε , in everyinterval [ R, α ], α >
0. Finally, we deduce the existence of some ω such that˜ ω ε → ω , as ε → , uniformly on each compact subset of [0 , + ∞ ], where ω = ( a , b ) verifies ( a ′′ + a ′ r − γ r a − f d a − f d b = − µ (1 − f d ) a b ′′ + b ′ r − γ r b − f d b − f d a = − µ (1 − f d ) b (1.19)Examining the proof of Theorem 1.5, the possible behaviors at + ∞ of the solutions of(1.19) are those given in Theorem 1.5, when we suppose that γ + γ − µd > n by q γ + γ − µd .Let us remark that the function f d and the eigenvalue problem used here are not exatlythe same as in the previous works [9], [7] and [8] and [1]. However, the proofs of thethree following Theorems can be deduced from these works. Theorem 1.7
For all d ≥ ,(i) there exist C > and ε > such that, for all ε < ε , m ( ε ) − ε ≥ C ; m ( ε ) → and there exists an associated eigenvector a ε such that ˜ a ε → f d , uniformly in each [0 , R ] , R > .(ii) m d − ,d +1 ( ε ) > and m d − ,d +1 ( ε ) − ε → .(iii) for d > and n ≥ d − , there exist C > and ε > such that, for all ε < ε , m | d − n | ,d + n ( ε ) − ε ≥ C .(iv) There exists an eigenvector ω ε associated to the eigenvalue m d − ,d +1 ( ε ) such that k (1 − f d ) (˜ ω ε − F d ) k L ( B (0 , ε )) → , as ε → , where F d = ( f ′ d + dr f d , f ′ d − dr f d ) appearsin Theorem 1.1. The interested reader can find a direct proof of Theorem 1.7 in the appendix of [2].The following theorem is very important for our proof.9 heorem 1.8
Let d ∈ R , d > be given. For all n ∈ ]1 , d + 1[ , there exists C n > independent of ε such that m | d − n | ,d + n ( ε ) ≤ − C n . For the sake of completeness, we give a proof of this theorem in Part VI of the presentpaper, following the proof of [9], given for n = d = 2.The following theorem connects the eigenvalue problem to the existence of the boundedsolutions. Theorem 1.9 (i) Let d > and γ > γ ≥ be given. With the notation above, if µ ( ε ) → µ , if ˜ ω ε → ω , if γ + γ − µd > and if ω blows up at + ∞ , then µ ( ε ) − ε ≥ C ,where C is a given positive number, independent of ε .(ii) If there exists some bounded solution ( a, b ) of (1.9), then there exists an eigenvalue µ ( ε ) verifying µ ( ε ) − → . To make the paper as self contained as possible, we give the proof of Theorem 1.9 (i)in Part VI, following the proof of [8], given for µ = 1 and for the eigenvalue λ ( ε ). Theinterested reader can find a direct proof of Theorem 1.9 (ii) in [2].The following theorems are new. Theorem 1.10
When γ + γ ε − d > , if there exists some bounded solution ω = ( a, b ) of (1.9), then we have lim ε → m γ ,γ ( ε ) ≥ . Combining Theorem 1.10 and Theorem 1.9 (ii), we get the following
Corollary 1.1
If there exists some bounded solution ω = ( a, b ) of (1.9), then we have lim ε → m γ ,γ ( ε ) = 1 and if ω ε is some eigenvector associated to m γ ,γ ( ε ) , then ˜ ω ε tendsto ω , uniformly in all [0 , R ] , R > . The following theorem can be deduced at once from Theorem 1.10 and Theorem 1.8.
Theorem 1.11
Let n and d be real numbers and γ = | n − d | , γ = n + d . There is nobounded solution of (1.9), when d ≥ and < n < d + 1 . Using Theorems 1.1, 1.6 and 1.11, we will prove the following theorem.
Theorem 1.12
There is no bounded solution of (1.9), whenever d ≥ and n ≥ d + 1 . Then Theorem 1.2 is proved. With Theorem 1.9 (i), we get
Theorem 1.13
For d ≥ , n > , γ = | n − d | and γ = n + d , there is no eigenvalue µ ( ε ) , with eigenvector in H | n − d | , such that µ ( ε ) − ε → , as ε → . The paper is organised as follows. In Part II, we give a sketch of the proofs of Theorem1.3, Theorem 1.4 and Theorem 1.5. Complete proofs of Theorem 1.4 and Theorem 1.5are altogether long, technical and classical. The interested reader can consult Part II andPart III of [2], which is a long preliminary version of the present paper. In Part III, weprove Theorem 1.6. In Part IV, we prove Theorem 1.10. In Part V, we prove Theorem1.12. In Part VI, we give the proof of Theorem 1.9 (i) and of Theorem 1.8, which isneeded in the proof of Theorem 1.11. Theorem 1.9 (i) is needed to prove Theorem 1.10and to deduce Theorem 1.13. 10
Proof of Theorem 1.3, proof of Theorem 1.4, proof ofTheorem 1.5.
The existence of f d , its expansion near 0 and + ∞ and its property of uniqueness areproved in [5]. However, these authors suppose that d ∈ N ⋆ and this is used only in thefirst step of their proof. Let us give an alternative proof for this first step, valid for all d >
0. We have to prove that for all a > f ∼ ar d and that f is defined in an interval [0 , R ], with R >
0. We rewrite the equation(1.4) as ( r d − ( r − d f ) ′ ) ′ = − r d − f (1 − f ) . For all
R > a > f solves (1.4) in [0 , R ], and f ∼ ar d if and only if the map g : r r − d f ( r ) is a fixed point in C ([0 , R ]) of the function Φ defined byΦ( g )( r ) = a + Z r t − d +1 Z t − s d − g ( s )(1 − s d g ( s )) ds. (2.20)Let us denote ϕ ( s, g ) = − g (1 − s d g ). As in the proof of the Cauchy-Lipschitz Theorem,we remark first that for all α > β > M and C such that (cid:0) s ∈ ]0 , α ] , k g − a k L ∞ ([0 ,α ]) < β (cid:1) ⇒ (cid:0) k ϕ ( s, g ) k L ∞ ([0 ,α ]) ≤ M (cid:1) and (cid:0) s ∈ ]0 , α ] , k g − a k L ∞ ([0 ,α ]) < β, k g − a k L ∞ ([0 ,α ]) < β (cid:1) ⇒ ( | ϕ ( s, g ) − ϕ ( s, g ) | ( s ) ≤ C | g − g | ( s )) . Moreover, M and C remain inchanged if α is replaced by a smallest positive number.Now, we estimate, for r ∈ [0 , α ] | Φ( g ) − a | ( r ) ≤ M r d +2 d ( d + 2) and | Φ( g ) − Φ( g ) | ( r ) ≤ C r d +2 d ( d + 2) k g − g k L ∞ ([0 ,α ]) . Now, we choose some R such that0 < R < min { , α, d ( d + 2) βM , d ( d + 2) C } and we denote B ( a, β ) = { g ∈ C [0 , R ]); k g − a k L ∞ ([0 ,R ]) ≤ β } , in order Φ to be a con-tractant function from the closed subset B ( a, R ) of the Banach space C ([0 , R ]) into itself.Thus, by the Banach fixed Point Theorem, Φ has a unique fixed point g in B ( a, R ). Then r r d g ( r ), defined in [0 , R ], is the desired solution of (1.4).The proof of [5] can be used to conclude to the existence of A d .Now, let us prove the continuity of d g ( d ) , where g ( d ) ( r ) = r − d f d ( r ). First, let us provethat the map d A d , defined in R + ⋆ , increases.As a first step, for δ = d , we combine the equations of f d and f δ to obtain, for every( r , r ), 0 < r < r , [ r ( f ′ d f δ − f ′ δ f d )] r r = Z r r f d f δ ( f d − f δ ) dt.
11e derive two properties. The first one is that f d − f δ cannot keep the same signin [0 , + ∞ [, otherwise, when r = 0 and r → + ∞ , the lrs would be 0 and the rhswould be non zero. The second one is that f d and f δ can be equal only for one value r >
0. Indeed, if r < r are such that f d ( r i ) = f δ ( r i ), for i = 1 ,
2, we get that r f d ( r )( f d − f δ ) ′ ( r ) − r f d ( r )( f d − f δ ) ′ ( r ) has the same sign as f d − f δ in [ r , r ], andthis is a contradiction.Now, let 0 < δ < d be given. Near + ∞ we have the expansion f d ( r ) − f δ ( r ) = δ − d r +0( r )and consequently, there exists R > f d ( r ) < f δ ( r ), for all r ∈ [ R, + ∞ [. But wehave also r d < r δ for 0 < r <
1. Since the sign of f d − f δ has to change once in [0 , + ∞ [,and in view of the expansions near 0, we deduce that A d > A δ .Now, we denote lim d → δ,d>δ A d = B . But f d is defined in [0 , + ∞ [. We have, for all r > g ( d ) = Φ( g ( d ) ), where Φ is defined in (2.20), but with A d instead of a . Using in addition0 ≤ f d (1 − f d ) ≤
1, we get that for all α >
0, there exists β > d in aninterval containing δ , such that | g ( d ) | ( r ) ≤ β and | ( g ( d ) ) ′ | ( r ) ≤ β , for all r ∈ [0 , α ]. So, forall r > g ( d ) ( r ) has a limit, denoted by g , as d → δ , uniformly in every [0 , α ], α > g )( r ) = g ( r ), for all r >
0, where Φ is defined in (2.20), but with B insteadof a and δ instead of d . Consequently, if we denote f ( r ) = r d g ( r ), then f ∼ Br δ , f isa solution of (1.4) (with δ in place of d ), f is non decreasing in [0 , + ∞ [. In view of theuniqueness of such a solution of (1.4) ([5]), we deduce that B = A δ and that f = f δ . Thesame result remains true when d → δ , d < δ . We have proved that d g ( d ) is continuousfrom [1 , + ∞ [ into L ∞ ([0 , α ]), for all α > The pattern of proof is the same for the four solutions. Let us give an idea of the proof.1. We construct some solution ( a , b ) such that for all compact subset K of D , thereexist some R >
0, depending only on K and some C >
0, also depending only on K , suchthat for all r ∈ ]0 , R ] and all ( d, γ , γ ) ∈ K , we have | a ( r ) | + | b ( r ) − r γ | ≤ Cr γ +2 d +1 and | a ′ ( r ) | + | b ′ ( r ) − γ r γ − | ≤ Cr γ +2 d . and such that, for all r ∈ ]0 , R ], ( d, γ , γ ) ( a ( r ) , a ′ ( r ) , b ( r ) , b ′ ( r )) is continuous on K , and differentiable wrt γ and wrt γ . First, the construction is done for r ∈ ]0 , R ].Then the definition of this solution in [0 , + ∞ [ and the continuity wrt ( d, γ , γ ) ∈ K , forall r >
0, follows from the Cauchy-Lipschitz Theorem. Let us remark the importance forthe constants C and R to be independent of the parameters.We use a constructive method, similar to the proof of the Banach fixed point Theorem.We define a fixed point problem of the form ( a, b ) = Φ( a, b ) , that is ( a = r γ + r γ R r t − γ − R t s γ +1 ( f d b − (1 − f d ) a ) dsdtb = r γ R r t − γ − R t s γ +1 ( f d a − (1 − f d ) b ) dsdt. (2.21)whose solutions verify the differential system that we have to solve.2. We construct some solution ( a , b ), such that, for any compact subset K ∈ D ,exist some real numbers R and C verifying, for all 0 < r < R , | a ( r ) − r γ | ≤ Cr γ +2 , | b ( r ) | ≤ Cr γ +2 d +2 , | a ′ ( r ) − γ r γ − | ≤ Cr γ +1 , | b ′ ( r ) | ≤ Cr γ +2 d +1 . ( a = r γ R r t − γ − R t s γ +1 ( f d b − (1 − f d ) a ) dsdtb = r γ + r γ R r t − γ − R t s γ +1 ( f d a − (1 − f d ) b ) dsdt. (2.22)3. For the construction of ( a , b ), in the case when ( d, γ , γ ) ∈ D , we consider thefixed point problem ( a = r − γ R r t γ − R t s − γ +1 ( f d b − (1 − f d ) a ) dsdtb = r − γ + r − γ R r t γ − R t s − γ +1 ( f d a − (1 − f d ) b ) dsdt. (2.23)while, when ( d, γ , γ ) ∈ D , we consider the fixed point problem ( a = r γ R r t − γ − R t s γ +1 ( f d b − (1 − f d ) a ) dsdtb = r − γ + r − γ R r t γ − R t s − γ +1 ( f d a − (1 − f d ) b ) dsdt. (2.24)4. In order to construct a solution ( a , b ), when ( d, γ , γ ) ∈ D , we solve thefollowing fixed point problem ( a = r − γ + r − γ R r t γ − R t s − γ +1 ( f d b − (1 − f d ) a ) dsdtb = r − γ R r t γ − R t s − γ +1 ( f d a − (1 − f d ) b ) dsdt. (2.25)and, when ( d, γ , γ ) ∈ D , we solve the following fixed point problem ( a = τ ( r ) + τ ( r ) R r t τ − ( t ) R t sτ ( s )( f d b − (1 − f d ) a ) dsdtb = r − γ R r t γ − R t s − γ +1 ( f d a − (1 − f d ) b ) dsdt (2.26) We use the system (1.10) and we construct a base of four solutions, ( x j , y j ), j = 1 , . . . , ∞ . The solutions ( u j , v j ) announced in Theorem1.5 are obtained by u j = x j + y j and v j = x j − y j .We denote J + = e √ r √ r , J − = e −√ r √ r , γ = γ + γ , n = p γ − d , ξ = γ − γ . We can replace the first equation of (1.10) by( e √ r ( xe −√ r ) ′ ) ′ = e √ r q ( r ) x − ξ r y or ( e − √ r ( xe √ r ) ′ ) ′ = e −√ r q ( r ) x − ξ r y, where q ( r ) = − γ − d r + 3(1 − f d + d r ) . The second equation of the system (1.10) can be written as( r n +1 ( r − n y ) ′ ) ′ = r n +1 ( ξ r x − (1 − f d − d r ) y )13r ( r − n +1 ( r n y ) ′ ) ′ = r − n +1 ( ξ r x − (1 − f d − d r ) y ) . Finally, the system (1.10) can be written as ( ( e ± √ r ( r e ∓√ r x ) ′ ) ′ = r e ±√ r q ( r ) x − ξ r y ( r ± n +1 ( r ∓ n y ) ′ ) ′ = r ± n +1 ( ξ r x − (1 − f d − d r ) y ) (2.27)In order to construct four solutions of (2.27), we give R > x, y ) = Φ( x, y ) , for ( x, y ) defined in [ R , + ∞ [, and whose solutionsare solutions of (2.27). The function Φ will depend on R , except for one solution denotedby ( x , y ) (vanishing exponentially at + ∞ ). The present construction does not allow usto construct the solutions ( x j , y j ), j = 2 without taking into account a given compactsubset K ⊂ { ( d, γ , γ ); 0 ≤ γ < γ ; ξ − d > } . (2.28)Indeed, R depends on K . Let us list the different fixed point problems we need.1. The exponential blowing up behavior at + ∞ : the solution ( x , y ). For R > ( x = J + + J + R r + ∞ ( J + ) − t R tR sJ + ( ξ s y − − f d − d s ) x ) dsdty = r n R rR t − n − R tR s n +1 ( ξ s x − (1 − f d − d s ) y ) dsdt.
2. The intermediate blowing up behavior at + ∞ : the solution ( x , y ). For R > ( x = J + R r + ∞ ( J + ) − t R tR sJ + ( ξ s y − − f d − d s ) x ) dsdty = r n + r n R r + ∞ t − n − R tR s n +1 ( ξ s x − (1 − f d − d s ) y ) dsdt
3. The least behavior at + ∞ : the solution ( x , y ). We consider ( x = J − + J − R r + ∞ ( J − ) − t R t + ∞ sJ − ( ξ s y − − f d − d s ) x ) dsdty = r − n R r + ∞ t n − R t + ∞ s − n +1 ( ξ s x − − f d − d s ) y ) dsdt
4. The intermediate vanishing behavior at + ∞ : the solution ( x , y ). For R > ( x = J − R rR ( J − ) − t R t + ∞ sJ − ( ξ s y − − f d − d s ) x ) dsdty = r − n + r − n R r + ∞ t n − R t + ∞ s − n +1 ( ξ s x − − f d − d s ) y ) dsdt We need the following estimate, which is not difficult to prove, by an integration by part.Let α ∈ R and β > Z + ∞ t s α e − βs ds ≤ β t α e − βt for all t ≥ αβ (2.29)and Z tR s α e βs ds ≤ β t α e βt for all t ≥ R ≥ − αβ (2.30)14 The smallest behavior at zero is connected with the great-est behavior at infinity.
Proof of Theorem 1.6.
Let ( d, γ , γ ) ∈ D . Let us prove first that ( a , b ) blows up exponentially at + ∞ .Let us define x = a + b and y = a − b . We have x ( r ) ∼ r γ and y ( r ) ∼ − r γ . Thus,we have x ( r ) > y ( r ) < r = 0. Let us suppose that x ( r ) > y ( r ) < , R [. Combining the first equation of the system (1.10) and the equation (1.4), we get,for all r ≥ rx ′ f d − rf ′ d x ] r + Z r − γ + d s xf d ds + µ Z r ys f d ds − Z r sf d xds = 0 . For 0 < r ≤ R , we deduce that rf d ( xf d ) ′ ( r ) ≥ Z r sf d xds. (3.31)This proves that xf d increases in ]0 , R ] and therefore x ( R ) > ry ′ f d − rf ′ d y ] r + Z r − γ + d s yf d ds + ξ Z r xs f d ds = 0 . For 0 < r ≤ R , we deduce that rf d ( − yf d ) ′ ( r ) ≥ Z r − γ + d s yf d ds. (3.32)This proves that − yf d increases in ]0 , R ] and therefore − y ( R ) >
0. Finally, we have provedthat x ( r ) > y ( r ) < r >
0. Now (3.31) and (3.32) are valid for all r > f d ∼ + ∞
1. Thus, the behavior of x at + ∞ cannot be a polynomialincreasing behavior. We return to Theorem 1.5 that gives all the possible behaviors at+ ∞ and we deduce that x and y have an exponentially increasing behavior at + ∞ . So a and b have an exponentially increasing behavior at + ∞ , too.Let us prove now that ( u , v ) ∼ D ( o ( r γ ) , r − γ ), for some D = 0. Multiplying (1.9)and integrating by parts, we get easily, for all r > r > s ( a ′ u − u ′ a + v b ′ − v ′ b )( s )] r r = 0 . Using ( a , b ) ∼ + ∞ C ( e √ r √ r , e √ r √ r ), for some C = 0, and ( u , v ) ∼ + ∞ ( e −√ r √ r , e −√ r √ r ), weget lim r → + ∞ r ( a ′ u − u ′ a + v b ′ − v ′ b )( r ) = 4 C √ . Consequently lim r → r ( a ′ u − u ′ a + v b ′ − v ′ b )( r ) = 4 C √ . We know that ( a , b ) ∼ ( o ( r γ ) , r γ ). According to Theorem 1.4, that gives all thepossible behaviors at 0, we conclude that the only fitting behavior at 0 for ( u , v ) is( u , v ) ∼ D ( o ( r γ ) , r − γ ), for D = C √ γ .This ended the proof of Theorem 1.6. 15 The proof of Theorem 1.10 and of Corollary 1.1.
Let d >
1. We can rewrite the system (1.9) as X ′ = M X with X = ( a, ra ′ , b, rb ′ )t (4.33)with M = r − r (1 − f d ) + γ r rf d
00 0 0 r rf d − r (1 − f d ) + γ r . Lemma 4.1
Let us suppose that there exists a bounded solution of (1.9) and let us chosea base of solutions, X , X , X , X , for (4.33), whose third vector is a bounded solution.Let us name R ( s ) the resolvant matrix, whose columns are the vectors X i , i = 1 , . . . , .Let us name C and C the second and the fourth column of R − ( s ) . We haveat and when ( d, γ , γ ) ∈ D and γ + γ − d − < C = O ( s γ ) O ( s γ +2 γ ) O ( s − γ ) O ( s γ ) and C = O ( s − γ ) O ( s γ ) O ( s γ ) O ( s γ + γ ) and at and when ( d, γ , γ ) ∈ D and γ + γ − d − > C = O ( s − γ +2 d +2 ) O ( s γ +2 d +2 ) O ( s − γ ) O ( s γ ) and C = O ( s − γ ) O ( s γ ) O ( s − γ +2 d +2 ) O ( s γ +2 d +2 ) and at and when ( d, γ , γ ) ∈ D C = O ( τ ( s ) s − γ + γ +2 d +2 ) O ( τ ( s ) s γ + γ +2 d +2 ) O ( τ ( s )) O ( s γ ) and C = O ( s γ − γ τ ( s )) O ( s γ + γ τ ( s )) O ( s d +2 τ ( s )) O ( s γ +2 d +2 ) and in any case, at + ∞C ∼ + ∞ − n √ nJ − nJ + − √ s n − √ s − n and C ∼ + ∞ − n √ nJ − nJ + √ s n √ s − n where − n √ is the determinant of R ( s ) . roof R ( s ) is chosen as follows R ( s ) ∼ + ∞ J + J − s − n s n s ( J + ) ′ s ( J − ) ′ − ns − n ns n J + J − − s − n − s n s ( J + ) ′ s ( J − ) ′ ns − n − ns n where, as usual, the notation J + stands for e √ s √ s and the notation J − stands for e −√ s √ s .To give the behaviors at 0, we return to Theorem 1.4. We have, for some c i = 0, i = 1 , . . . , d, γ , γ ) ∈ D , R ( s ) ∼ O ( s γ +2 d +2 ) O ( s ˜ γ ) c s γ c s − γ O ( s γ +2 d +2 ) O ( s ˜ γ ) c γ s γ − c γ s − γ c s γ c s − γ O ( s γ +2 d +2 ) O ( s ˜ γ ) c γ s γ − c γ s − γ O ( s γ +2 d +2 ) O ( s ˜ γ ) where we use the notation˜ γ = min { γ , − γ + 2 d + 2 } and ˜ γ = min { γ , − γ + 2 d + 2 } if γ + γ − d − = 0(if γ + γ − d − O ( s ˜ γ ) by O ( s γ log s ) and O ( s ˜ γ ) by O ( s γ log s ))andIf ( d, γ , γ ) ∈ D , R ( s ) ∼ O ( s γ +2 d +2 ) O ( s − γ +2 d +2 ) c s γ c τ ( s ) O ( s γ +2 d +2 ) O ( s − γ +2 d +2 ) c γ s γ − c sτ ′ ( s ) c s γ c s − γ O ( s γ +2 d +2 ) O ( τ ( s ) s d +2 ) c γ s γ − c γ s − γ O ( s γ +2 d +2 ) O ( τ ( s ) s d +2 ) where τ ( s ) = ( s − γ − s γ γ if γ = 0 − log s if γ = 0The determinant W of R ( s ) is independent of s , due to the fact that the matrix M of the differential system has a null trace. Moreover, J + J − = s . Using the behavior at+ ∞ of R ( s ), given above, we deduce that W is the principal term, as s → + ∞ of1 s (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) s √ − s √ − n n − − s √ − s √ n − n (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) that is W = − n √ . A direct calculation of the suitable determinants gives the estimate of C and C . The proof of Theorem 1.10 completed. m = lim ε → m γ ,γ ( ε ). We can define ω ε ∈ H γ an eigenvector associated to m γ ,γ ( ε ) and ω = ( a , b ) such that ˜ ω ε → ω on each compact subset of [0 , + ∞ [. Inwhat follows, let us suppose that m <
1. Then γ + γ − md > a ≥ − b ≥
0, the possible behaviors at + ∞ for ( a , b ) are ( r − n , − r − n ) and ( r n , − r n )where n = r γ + γ − md . (4.34)Since m <
1, we have by Theorem 1.9 (i), that ω has a bounded behavior at + ∞ andconsequently( a , b ) ∼ + ∞ ( r − n , − r − n ) and a + b = O ( r − n − ) at + ∞ . At 0, in view of a ≥ − b ≥
0, the only possible behavior is( a , b ) ∼ ( cr γ , O ( r γ +2 d +2 )) , for some c > . Let us prove that the hypothesis m < n = q γ + γ − d , we have, by (4.34)( m < ⇔ ( n > n ) . Let us denote X = ( a , ra ′ , b , rb ′ ) t , the vector corresponding to ω . We have X ′ = M X − ( m − − f d )(0 , ra , , rb )t . let us define X , X , X and X as in Lemma 4.1. We are going to prove that there existsome constants C i such that X = X i =1 C i X i − ( m − X i =1 ˆ X i , with ˆ X i bounded at 0, i = 1 , , , ∞ (cid:26) ˆ X = X O ( r − n − J − ) ; ˆ X = X O ( r − n − J + )ˆ X = X O (1) ; ˆ X = X O (1) . (4.36)In order to prove (4.35) and (4.36), we write X = X i =1 A i ( r ) X i (4.37)with i = 1 , . . . , , A i ( r ) = A i − ( m − Z r [ R − ( s ) s (1 − f d ) a b ds ] i (4.38)18here the notation [ ] i means the i th line of the vector, and where A i is a constant.Let us examine the behavior of each term A i ( r ) X i at + ∞ and at 0, using Lemma 4.1.For the first term, we use the first terms of C and C , given in Lemma 4.1, to obtain[ R − ( s ) s (1 − f d ) a b ] ∼ + ∞ O ( 1 s J − ( a + b ))and ∼ s ( O ( s γ a + O ( s − γ b )) if ( d, γ , γ ) ∈ D , γ + γ − d − < s ( O ( s − γ +2 d +2 a + O ( s − γ b )) if ( d, γ , γ ) ∈ D , γ + γ − d − > s ( O ( τ ( s ) s γ − γ +2 d +2 a + O ( τ ( s ) s γ − γ b )) if ( d, γ , γ ) ∈ D . Let us define B = − ( m − Z + ∞ [ R − ( s ) s (1 − f d ) a b ] ds and ˆ X = X Z r + ∞ [ R − ( s ) s (1 − f d ) a b ] ds We can write A ( r ) X = ( A + B ) X − ( m −
1) ˆ X We see that ˆ X = X O (1) at 0. Using (2.29), we get ˆ X = X O ( r − n − J − ) at + ∞ .For the second term, we obtain[ R − ( s ) s (1 − f d ) a b ] ∼ + ∞ O ( 1 s J + ( a + b ))and ∼ s ( O ( s γ +2 γ a ) + O ( s γ b )) if ( d, γ , γ ) ∈ D and γ + γ − d − < s ( O ( s γ +2 d +2 a ) + O ( s γ b )) if ( d, γ , γ ) ∈ D and γ + γ − d − > sτ ( s )( O ( s γ + γ +2 d +2 ) a + O ( s γ + γ ) b ) if ( d, γ , γ ) ∈ D Denoting B = − ( m − Z [ R − ( s ) s (1 − f d ) a b ] ds and ˆ X = X Z r [ R − ( s ) s (1 − f d ) a b ] ds we get A ( r ) X = ( A + B ) X − ( m −
1) ˆ X with, by (2.30) ˆ X = X O ( r − n − J + ) at + ∞ . Moreover, ˆ X is bounded at 0. 19or the third term, we obtain[ R − ( s ) s (1 − f d ) a b ] ∼ + ∞ − n √ √ d s s n ( − a + b ) . (4.39)Since − a + b ∼ + ∞ − r − n , then this term is integrable at + ∞ . At 0, it is ∼ s ( O ( s − γ a ) + O ( s γ b )) if ( d, γ , γ ) ∈ D and γ + γ − d − < s ( O ( s − γ a ) + O ( s − γ +2 d +2 b )) if ( d, γ , γ ) ∈ D and γ + γ − d − > s ( O ( τ ( s ) a ) + O ( τ ( s ) s d +2 b )) if ( d, γ , γ ) ∈ D and this is bounded at 0.Letting B = − ( m − Z [ R − ( s ) s (1 − f d ) a b ] ds and ˆ X = X Z r [ R − ( s ) s (1 − f d ) a b ] ds, we find A ( r ) X = ( A + B ) X − ( m −
1) ˆ X with ˆ X = X O (1) at + ∞ and ˆ X is bounded at 0.For the fourth term,[ R − ( s ) s (1 − f d ) a b ] ∼ + ∞ − n √ d √ s s − n ( − a + b )and ∼ s ( O ( s γ a ) + O ( s γ + γ b )) if ( d, γ , γ ) ∈ D and γ + γ − d − < s ( O ( s γ a ) + O ( s γ +2 d +2 b )) if ( d, γ , γ ) ∈ D and γ + γ − d − > sτ ( s )( O ( s γ ) a + O ( s γ +2 d +2 ) b ) if ( d, γ , γ ) ∈ D Letting B = − ( m − Z [ R − ( s ) s (1 − f d ) a b ] ds and ˆ X = X Z r [ R − ( s ) s (1 − f d ) a b ] ds we find A ( r ) X = ( A + B ) X − ( m −
1) ˆ X . X = X O (1) at + ∞ and ˆ X is bounded at 0.Now, summing the four terms, and letting C i = A i + B i , we find (4.35) and (4.36).Since X is bounded at 0, we have C = C = 0.But X is bounded at + ∞ and ˆ X i is bounded at + ∞ , i = 1 , ,
3. Since we have also a >> ˆ a at + ∞ , we infer that C = 0 and that ˆ X must be bounded at + ∞ . Returningto the definition of ˆ X , we must have Z + ∞ [ R − ( s ) s (1 − f d ) a b ] ds = 0 , therefore ˆ X = X Z r + ∞ [ R − ( s ) s (1 − f d ) a b ] ds, that givesˆ X = X Z r + ∞ s (1 − f d )[ a C + b C ] ∼ + ∞ X Z r + ∞ − √ − n √ s − n − n d s ds. Thus, at + ∞ ˆ a = a − n √ d √ n + n r − n a + o ( r − n ) . (4.40)Since we have now X = C X − ( m − X i =1 ˆ X i and since ˆ a = O ( r − n − ) and ˆ a = O ( r − n − ), then ˆ a = o ( a ) and ˆ a = o ( a ) at + ∞ .Consequently a + ( m − a ∼ + ∞ C a − ( m − a (4.41)Recalling (4.40) and recalling n < n , this implies that C − ( m − Z + ∞ [ R − ( s ) s (1 − f d ) a b ] ds = 0and then C a − ( m − a = − ( m − a Z r + ∞ [ R − ( s ) s (1 − f d ) a b ] ds. Using (4.39), we getat + ∞ C a − ( m − a = − ( m − a − √ − d √ n ( n − n ) r n − n + o ( r − n ) . (4.42)21inally, we sum (4.40) and (4.42) to get, by (4.41) a (1 + ( m − − d n n + n ) ∼ + ∞ − ( m −
1) 8 d n ( 1 n − n ) r − n and thus ( m −
1) 8 d n ( − n − n + 1 n + n ) = 1 . But we have by (4.34) n − n = ( − m + 1) d . After simplification by m −
1, we get n = n , that gives m = 1, that is in contradictionwith the hypothesis m <
1. So we deduce that m = 1.The proof of (4.35) and (4.36) for ( d, γ , γ ) ∈ D and γ + γ − d − Proof of Corollary 1.1.
By Theorem 1.9 (ii), if there exists a bounded solution ω , then there exists some eigen-value tending to 1. So m = 1. It remains to prove that ω = cω , for some c = 0. But ω cannot have the least behavior at 0, otherwise it would blow up exponentially at + ∞ .So, there exists c = 0 such that ω ∼ cω . If ω = cω , then ω − cω has the least behaviorat 0, and consequently blows up exponentially at + ∞ . This cannot be true, because ω is bounded at + ∞ and, since a ≥ b ≥
0, the possible blowing up behavior at + ∞ for ω can only be polynomial. We can conclude that ω = cω . n ≥ d + 1 : the proof of Theorem 1.12. Let ω = ( a , b ) be the solution defined in Theorem 1.4 and η = ( u , v ) be the solutiondefined in Theorem 1.5. According to Theorem 1.6, ω ∼ + ∞ ( J + , J + ) and η has thegreater blowing up behavior at 0. Let η and η be defined in Theorem 1.5 and havingthe intermediate behaviors at + ∞ . Let ω = ( a , b ) be defined in Theorem 1.4. Withthese definitions, we can write ω = C ( n, d ) ω + C ( n, d ) η + C ( n, d ) η + C ( n, d ) η . Let us remark that ω and ω − C ( n, d ) ω form a base of the bounded solutions at 0, andthat ω − C ( n, d ) ω = o ( ω ) at + ∞ . So the problem of the existence of some boundedsolution is reduced to the problem C ( n, d ) = 0.Supposing that there exists a bounded solution for ( n , d ), d > n ≥ d + 1, we have,by Theorem 1.1, n ≤ d −
1. From now on, ( n, d ) is such that 1 ≤ d ≤ d + 1 and d ≤ n ≤ d . Clearly, ( d, | n − d | , n + d ) stays in a compact subset of D . This is sufficientfor the solutions η and η to be defined without ambiguity. The real numbers C i ( n, d )defined above can be computed by means of determinants involving the four components( a, a ′ , b, b ′ )( r ) of the five solutions present, for a given r >
0. Thus, C i is continuous wrt( d, γ , γ ) and consequently is continuous wrt ( d, n ). C i is also differentiable wrt γ andwrt γ and therefore wrt n , since n ≥ d . 22 emma 5.2 With the notation above, if there exists ( n , d ) , d ≥ , n ≥ d + 1 suchthat C ( n , d ) = 0 , then there exists a continuous map d n ( d ) , defined for d < d ,closed to d and verifying C ( n ( d ) , d ) = 0 . Proof
Let us prove that ∂C ∂n ( n , d ) = 0 . If ∂C ∂n ( n , d ) = 0 , then ∂∂n ( ω − C ( n, d ) ω )( n , d )is bounded at + ∞ . Let us denote ( a, b ) = ω − C ( n, d ) ω . Then ( a, b ) verifies the system(1.1), with ( n , d ) in place of ( n, d ), and ( ∂a∂n , ∂b∂n )( n , d ) verifies also a system, obtainedby differentiation wrt n , at ( n , d ), that is ( ∂a∂n ′′ + r ∂a∂n ′ − ( n − d ) r ∂a∂n − ( n − d ) r a − f d ∂b∂n = − (1 − f d ) ∂a∂n∂b∂n ′′ + r ∂b∂n ′ − ( n + d ) r ∂b∂n − ( n + d ) r b − f d ∂a∂n = − (1 − f d ) ∂b∂n (5.43)By combining the systems (1.1) and (5.43), for ( n , d ), an integration by parts gives Z + ∞ − n − d r a − n + d r b dr = 0and we conclude that a = b = 0, that is false.So, we have proved that ∂C ∂n ( n , d ) = 0. The Implicit Functions Theorem gives acontinuous map d n ( d ) such that C ( n ( d ) , d ) = 0, and defined in a neighborhood of d , with values in a neighborhood of n . The proof of Theorem 1.12 completed.
With the definitions given above, let us define the set E = { d ≥ d ≤ d + 1; ∃ n ≥ d + 12 , C ( n, d ) = 0 } . If d ∈ E , then n ≤ d −
1, by Theorem 1.1. Thus, E is a closed subset of [1 , + ∞ [, thanksto the continuity of C wrt ( n, d ). Since d ∈ E , E 6 = ∅ and we let d = inf E . Giventhat d ∈ E , there exists n ≥ d + such that C ( n , d ) = 0. According to Theorem1.11, n ≥ d + 1. If d >
1, we deduce from Lemma 5.2 that there exists d < d ,sufficiently closed to d in order to have n ( d ) > d + . Therefore n ( d ) ≥ d + , whichis in contradiction with d = inf E . This proves that d = 1. But 1
6∈ E , by Theorem1.1. This contradiction proves the non existence of ( n , d ) such that n ≥ d + 1 and C ( n , d ) = 0.The proof of Theorem 1.12 is complete. Proof of Theorem 1.9 (i).
Let us define n = q γ + γ − µd .Let ω ε = ( a ε , b ε ) ∈ H γ be an eigenvector associated to µ ( ε ). Using (1.15), we write µ ( ε ) ε Z r (1 − f )( a ε + b ε ) dr = Z ( ra ′ ε + rb ′ ε + γ r a ε + γ r b ε + rε f ( a ε + b ε ) ) dr. We use the definition (1.17) of m ( ε ) to get µ ( ε ) ε Z r (1 − f )( a ε + b ε ) dr m ( ε ) ε Z r (1 − f )( a ε + b ε ) dr + Z ( γ − d r a ε + γ − d r b ε + rε f ( a ε + b ε ) ) dr. Now, we use the trick of TC Lin (see [7]). Letting ˜ b ε = τ ˜ a ε , we consider the map H : τ γ − d r + γ − d r τ + rf d (1 + τ ) (6.44)and we minimize this map. The minimum is attained for τ verifying τ ( γ − d r + rf d ) + rf d = 0 and 1 + τ = γ − d r / ( γ − d r + rf d )and consequently H ( τ ) = γ − d r + ( rf dγ − d r + rf d ) ( γ − d r ) + rf d ( γ − d rγ − d r + rf d ) . We have H ( τ ) ∼ r → + ∞ ( γ + γ − d ) /r. Moreover, for all τ > , H ( τ ) ≥ H ( τ ) . Since γ + γ − d > , there exists some constants C > R >
0, independent of τ ,such that for all τ > H ( τ ) ≥ C r for all r > R . Then, for all
R > R and all ε < R , we write Z ε H ( r )˜ a ε ( r ) dr ≥ Z R H ( r )˜ a ε ( r ) dr + Z RR H ( r )˜ a ε ( r ) dr. Now a blows up exponentially at + ∞ , or as r n . We can choose R large enough and aconstant C > a ( r ) ≥ C ( e √ r √ r ) or C r n for all r > R . Since ˜ a ε → a as ε →
0, uniformly in [0 , R ], we can chose ε such that for all ε < ε Z R H ( r )˜ a ε ( r ) dr ≥ Z R H ( r ) a ( r ) dr. Moreover, for all
R > R , ˜ a ε → a as ε →
0, uniformly in [ R , R ]. Then, there exists ε ( R ) such that for all ε < ε ( R ) we have Z RR H ( r )˜ a ε ( r ) dr ≥ C Z RR r r n dr or Z RR H ( r )˜ a ε ( r ) dr ≥ C Z RR r ( e √ r √ r ) dr. And finally, for ε < ε ( R ), we have( µ ( ε ) − m ( ε ) ε ) Z r (1 − f )( a ε + b ε ) dr ≥ Z R H ( r ) a ( r ) dr +24 ( C C R RR r r n , if ( a , b ) ∼ + ∞ ( r n , − r n ) C C R RR r ( e √ r √ r ) dr, if ( a , b ) ∼ + ∞ ( J + , J + )where C and C , given above, are independent of R and ε . But we can choose R suchthat the lhs is positive.We deduce that µ ( ε ) − m ( ε ) > . Then we use Theorem 1.7 (i), that gives m ( ε ) − ε ≥ C and consequently µ ( ε ) − ε ≥ C . The lemma is proved. The proof of Theorem 1.8.
The proof for n = 2 and d = 2 is originally in [9].For real numbers d ≥ n ≥
1, let x = f ′ d r n − and y = d f d r n . A calculation gives ( − ( rx ′ ) ′ + γ r x − ξ r y − r (1 − f d ) x = − n − r n − f d (1 − f d ) − ( ry ′ ) ′ + γ r y − ξ r x − r (1 − f d ) y = 0 (6.45)For a = x + y and b = x − y , we deduce that ( − ( ra ′ ) ′ + γ r a + f d b − r (1 − f d ) a = − n − r n − f d (1 − f d ) − ( rb ′ ) ′ + γ r b + f d a − r (1 − f d ) b = − n − r n − f d (1 − f d ) (6.46)where, as usual, γ = | n − d | , γ = n + d , γ = γ + γ and ξ = γ − γ .We verify that x ∼ y ∼ dr d − n + O ( r d − n +2 ) and, at + ∞ , x = O ( r − n ) , y = O ( r − n ) , and consequently that a ∼ dr d − n + O ( r d − n +2 ) and b ∼ O ( r d − n +2 ) . let us suppose that d ≥ < n < d + 1. We can multiply the system (6.46)and integrate by parts. We obtain that Z + ∞ ( ra ′ + rb ′ + γ r a + γ r b + rf d ( a + b ) − r (1 − f d )( a + b )) dr = Z + ∞ − ( n − r n − f d (1 − f d )( a + b ) dr This gives R + ∞ ( ra ′ + rb ′ + γ r a + γ r b + rf d ( a + b ) ) dr R + ∞ r (1 − f d )( a + b ) dr = 1 − C n with C n = R + ∞ n − r n − f d (1 − f d )( a + b ) dr R + ∞ r (1 − f d )( a + b ) dr > . Now we use an approximation argument, valid as soon as n >
0. For example for a givenconstant 0 < N < a ε , b ε )( r ) = ( ( a, b )( rε ) in [0 , N ]= ( a ( r ) (1 − r ) (1 − N ) , b ( r ) (1 − r ) (1 − N ) ) in [ N, . We have that ( a ε , b ε ) ∈ H | n − d | andthat R ( ra ′ ε + rb ′ ε + γ r a ε + γ r b ε + r ε f ( a ε + b ε ) ) dr ε R r (1 − f )( a ε + b ε ) dr = R Nε ( ra ′ + rb ′ + γ r a + γ r b + rf d ( a + b ) ) dr + O ( ε n ) R Nε r (1 − f d )( a + b ) dr + O ( ε n ) → − C n , as ε tends to 0 . We deduce that, if 1 < n < d + 1, m d − n,d + n ( ε ) < − C n , for ε small enough and theproof of Theorem 1.8 is complete. References [1] Beaulieu, Anne,
Some remarks on the linearized operator about the radial solutionfor the Ginzburg-Landau equation.
Nonlinear Anal. 54 (2003), no. 6, 1079-1119;[2] Beaulieu, Anne
The kernel of the linearized Ginzburg-Landau operator , preprint2018, HAL Id : hal-01724551;[3] Bethuel, Fabrice ; Brezis, Ha¨ım ; H´elein, Fr´ed´eric, Ginzburg-Landau Vortices,Birkha¨user, 1994;[4] Crandall, Mickael G. ; Rabinowitz, Paul H.
Bifurcation, perturbation of simpleeigenvalues and linearized stability.
Arch. Rational Mech. Anal. 52 (1973), 161180[5] Herv´e, Rose-Marie ; Herv´e, Michel, ´Etude qualitative des solutions r´eelles d’une´equation diff´erentielle li´ee `a l’´equation de Ginzburg-Landau. (French) Ann. Inst. H.Poincar´e Anal. Non Lin´eaire 11 (1994), no. 4, 427-440;[6] Lieb, E. H. ; Loss, M.
Symmetry of the Ginzburg-Landau minimizer in a disc.
Math.Res. Lett. 1 (1994), no. 6, 701-715;[7] Lin, Tai-Chia,
The stability of the radial solution to the Ginzburg-Landau equation.
Comm. Partial Differential Equations 22 (1997), no. 3-4, 619-632;[8] Lin, Tai-Chia,
Spectrum of the linearized operator for the Ginzburg-Landau equa-tion.
Electron. J. Differential Equations 2000, no. 42, 25;[9] Mironescu, Petru,