Infinite series asymptotic expansions for decaying solutions of dissipative differential equations with non-smooth nonlinearity
DAT CAO, LUAN HOANG∗, AND THINH KIEU

Abstract.
We study the precise asymptotic behavior of a non-trivial solution that converges to zero, as time tends to infinity, of dissipative systems of nonlinear ordinary differential equations. The nonlinear term of the equations may not possess a Taylor series expansion about the origin. This absence technically cripples previous proofs establishing an asymptotic expansion, as an infinite series, for such a decaying solution. In the current paper, we overcome this limitation and obtain an infinite series asymptotic expansion as time goes to infinity. This series expansion provides large-time approximations for the solution with errors decaying exponentially at any given rate. The main idea is to shift the center of the Taylor expansions for the nonlinear term to a non-zero point. Such a point turns out to come from the non-trivial asymptotic behavior of the solution, which we prove by a new and simple method. Our result applies to different classes of nonlinear equations that have not been dealt with previously.
Contents
1. Introduction
2. Notation, definitions and background
3. The first asymptotic approximation
4. The series expansion
5. Extended results
6. Specific cases and examples
References
1. Introduction
The Navier–Stokes equations (NSE) for a viscous, incompressible fluid in bounded or periodic domains with a potential body force can be written in the functional form as

$$\frac{dy}{dt} + Ay + B(y, y) = 0,$$ (1.1)

where $A$ is the (linear) Stokes operator and $B$ is a bilinear form in appropriate functional spaces. In [18], Foias and Saut prove that any regular solution $y(t)$ of (1.1) has the following asymptotic behavior, as $t \to \infty$,

$$e^{\lambda t} y(t) \to \xi \quad \text{for some } \lambda > 0 \text{ and } \xi \ne 0 \text{ with } A\xi = \lambda\xi.$$ (1.2)

Date: September 16, 2020. ∗Corresponding author.
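The convergence (1.2) is easy to observe numerically in a low-dimensional caricature of (1.1). The sketch below is our own illustration, not from [18]: the matrix $A = \mathrm{diag}(1, 2)$, the bilinear term $B(u, v) = (u_1 v_1, u_1 v_2)$, and the Runge–Kutta integrator are chosen purely for demonstration.

```python
# 2-D caricature of (1.1): y' + A y + B(y, y) = 0 with A = diag(1, 2) and a
# sample bilinear term B(u, v) = (u1*v1, u1*v2); none of this is the actual
# Stokes operator or the Navier-Stokes bilinear form.
import math

def rhs(y1, y2):
    return (-y1 - y1 * y1, -2.0 * y2 - y1 * y2)

def rk4(y1, y2, dt, steps):
    # classical fourth-order Runge-Kutta integrator
    for _ in range(steps):
        a1, a2 = rhs(y1, y2)
        b1, b2 = rhs(y1 + 0.5 * dt * a1, y2 + 0.5 * dt * a2)
        c1, c2 = rhs(y1 + 0.5 * dt * b1, y2 + 0.5 * dt * b2)
        d1, d2 = rhs(y1 + dt * c1, y2 + dt * c2)
        y1 += dt * (a1 + 2 * b1 + 2 * c1 + d1) / 6.0
        y2 += dt * (a2 + 2 * b2 + 2 * c2 + d2) / 6.0
    return y1, y2

t_end = 12.0
y1, y2 = rk4(0.5, 0.5, 0.01, 1200)
# rescale by e^{lambda t} with lambda = 1, the smallest eigenvalue of A
w = (math.exp(t_end) * y1, math.exp(t_end) * y2)
print(w)  # close to (1/3, 0), an eigenvector of A for the eigenvalue 1
```

Here the first component solves the scalar Bernoulli equation $y_1' = -y_1 - y_1^2$, whose exact solution gives $e^t y_1(t) \to 1/3$, while the faster mode $y_2$ is damped out; the rescaled solution therefore converges to an eigenvector for the eigenvalue $\lambda = 1$, as in (1.2).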
This result is extended later by Ghidaglia [21] to a more general class of parabolic inequalities. The proof in [21] uses the same Dirichlet quotient technique as Foias–Saut [18]. In [19], Foias and Saut go further and prove the following asymptotic expansion, as $t \to \infty$,

$$y(t) \sim \sum_{n=1}^{\infty} q_n(t) e^{-\mu_n t},$$ (1.3)

in all Sobolev spaces, where the $q_n(t)$ are polynomials in $t$, valued in the space of smooth functions. See Definition 2.1 below for the precise meaning of (1.3). Their proof of (1.3) does not require the knowledge of (1.2) and uses a completely different technique. The expansion (1.3) is studied deeply in later work [15–17, 20] concerning its convergence, associated normalization map, normal form, invariant nonlinear manifolds, relation with the Poincaré–Dulac theory, etc. It is applied to the analysis of physics-oriented aspects of fluid flows [13, 14], is established for the NSE in different contexts such as with the Coriolis force [25] or with non-potential forces [8, 10, 24], is extended to dissipative wave equations in [27], and is investigated for general ordinary differential equations (ODE) without forcing functions in [26] and with forcing functions in [9]. The consideration of ODE in [9, 26] turns out to be fruitful and prompted the recently obtained asymptotic expansions for the Lagrangian trajectories of viscous, incompressible fluid flows in [23].

In the same spirit as [9, 26], we study, in this paper, ODE systems in $\mathbb{R}^d$ of the form

$$\frac{dy}{dt} + Ay = F(y), \quad t > 0,$$ (1.4)

where $A$ is a $d \times d$ constant (real) matrix and $F$ is a vector field on $\mathbb{R}^d$. Our goal is to obtain the asymptotic expansion (1.3), as $t \to \infty$, for any decaying solution $y(t)$ of (1.4), where the $q_n(t)$'s are $\mathbb{R}^d$-valued polynomials in $t$. (For other approaches to the asymptotic analysis of the solutions, see the discussion in Remark 6.13 below.) In all of the above cited papers, the function $F$ in (1.4) must be infinitely differentiable at the origin.
This is due to the requirement that $F(y)$ can be approximated, up to arbitrary orders, near the limit of $y(t)$, i.e. the origin, by the polynomials that come from the Taylor series of $F$. The current paper investigates the situation when this is not the case, and hence the results in [9, 26, 27] do not apply. A standard and intuitive way to find expansion (1.3) is substituting it into equation (1.4), expanding both sides in $t$, and equating the coefficient functions of corresponding exponential terms. Because of the lack of a Taylor series of $F(y)$ about the origin, one does not know how to find the expansion in $t$ for $F(y(t))$ on the right-hand side of (1.4). The task seems to be impossible. However, as will be proved later in this paper, we are still able to obtain the infinite series asymptotic expansion (1.3) for $y(t)$ in many cases. This is achieved by combining Foias–Saut's method in [19] with the following new idea. For illustrative purposes, we consider an example,

$$\frac{dy}{dt} + Ay = F(y) = \frac{|y|^{1/2} y}{1 + |y|^{1/2}}.$$ (1.5)

First, we use the geometric series to approximate $F(y)$ by a series

$$F(y) \sim \sum_{k=1}^{\infty} F_k(y) \quad \text{as } y \to 0,$$ (1.6)

where the $F_k$'s are positively homogeneous functions of strictly increasing degrees $\beta_k \to \infty$. (In general cases, (1.6) is a hypothesis.) See Definition 2.2 and Assumption 4.1 for details. After establishing the asymptotic approximation (1.2) for some eigenvector $\xi$ of $A$, we approximate each $F_k$ by using its Taylor series about $\xi \ne 0$. Therefore, we can bypass the lack of a Taylor series of $F$ about $0$. This, of course, is just a brief description and must be supported by capable techniques. The paper is organized as follows.
In section 2, we set the assumptions for the matrix $A$, establish basic properties, and recall a crucial approximation lemma, Lemma 2.4. In section 3, we prove, for a more general equation (3.1) with a general structure (3.2), that any non-trivial, decaying solution has the first asymptotic approximation (1.2); see Theorem 3.3. This result can be obtained by repeating Foias–Saut's proof in [18, Proposition 3] or by applying [21, Theorem 1.1]. However, our new proof provides an alternative method and, at least for the current setting, is shorter. See Remark 3.4 for comparisons between the proofs. The paper's main result is in section 4. In Theorem 4.3, we prove that any non-trivial, decaying solution of (1.4) has an asymptotic expansion of the form (1.3). In order to implement the general scheme of Foias–Saut [19], we use the first approximation $e^{-\lambda t}\xi$ in (1.2). By the positive homogeneity of each function $F_k$ in (1.6), we can scale $y(t)$ by the factor $e^{-\lambda t}$ and then shift the Taylor expansions of the $F_k$'s from center zero to center $\xi \ne 0$. Because of the above scaling and its effect during complicated iterations, the exponential rates must be shifted back, see the set $\widetilde{S}$ in (4.8), and forth, see the set $S$ in (4.10), when being generated in Definition 4.2. Although we focus on infinite series expansions in this paper, we consider, in the first part of section 5, the case when the function $F(y)$ has only a finite sum approximation, see (5.1). We prove in Theorem 5.1 that any decaying solution $y(t)$ has a corresponding finite sum approximation. In the second part of section 5, Theorem 5.3 generalizes Theorems 4.3 and 5.1 by relaxing the conditions on the functions $F$ and $F_k$, in accordance with the knowledge of the eigenspaces of $A$. Section 6 is devoted to identifying some specific classes of functions $F$; see Theorems 6.1, 6.5 and 6.9.
Briefly speaking, these functions can be expanded in terms of power-like functions of the types $x_i^{\gamma_i}$, $|x_i|^{\gamma_i}$, $|x_i|^{\gamma_i} \mathrm{sign}(x_i)$ for coordinates $x_i$ of $x \in \mathbb{R}^d$, or of type $\|x\|_p^{\gamma}$, or, more generally, $\|P(x)\|_p^{\gamma}$ with $\ell^p$-norms $\|\cdot\|_p$, where $P$ is a homogeneous polynomial. Lastly, we compare, in Remark 6.13, our results with other asymptotic expansion theories for ODE, notably the one developed by Bruno (Bryuno) and collaborators, see [3–5, 7] and references therein.

2. Notation, definitions and background
We will use the following notation throughout the paper.
• $\mathbb{N} = \{1, 2, 3, \ldots\}$ denotes the set of natural numbers, and $\mathbb{Z}_+ = \mathbb{N} \cup \{0\}$.
• Denote $\mathbb{R}_* = \mathbb{R} \setminus \{0\}$, and, for $n \in \mathbb{N}$, $\mathbb{R}^n_* = (\mathbb{R}_*)^n$ and $\mathbb{R}^n_0 = \mathbb{R}^n \setminus \{0\}$.
• For any vector $x \in \mathbb{R}^n$, we denote by $|x|$ its Euclidean norm, and by $x^{(k)}$ the $k$-tuple $(x, \ldots, x)$ for $k \ge 1$, with $x^{(0)} = 1$.
• For an $m \times n$ matrix $M$, its Euclidean norm in $\mathbb{R}^{mn}$ is denoted by $|M|$.
• Let $f$ be an $\mathbb{R}^m$-valued function and $h$ be a non-negative function, both defined in a neighborhood of the origin in $\mathbb{R}^n$. We write $f(x) = O(h(x))$ as $x \to 0$ if there are positive numbers $r$ and $C$ such that $|f(x)| \le C h(x)$ for all $x \in \mathbb{R}^n$ with $|x| < r$.
• Let $f : [T_0, \infty) \to \mathbb{R}^n$ and $h : [T_0, \infty) \to [0, \infty)$ for some $T_0 \in \mathbb{R}$. We write $f(t) = O(h(t))$, implicitly meaning as $t \to \infty$, if there exist numbers $T \ge T_0$ and $C > 0$ such that $|f(t)| \le C h(t)$ for all $t \ge T$.
• Let $T_0 \in \mathbb{R}$, functions $f, g : [T_0, \infty) \to \mathbb{R}^n$, and $h : [T_0, \infty) \to [0, \infty)$. We will conveniently write $f(t) = g(t) + O(h(t))$ to indicate $f(t) - g(t) = O(h(t))$.
The type of asymptotic expansion at time infinity studied in this paper is the following.

Definition 2.1.
Let $(X, \|\cdot\|_X)$ be a normed space and $(\alpha_n)_{n=1}^{\infty}$ be a strictly increasing sequence of non-negative real numbers. A function $f : [T, \infty) \to X$, for some $T \ge 0$, is said to have an asymptotic expansion

$$f(t) \sim \sum_{n=1}^{\infty} f_n(t) e^{-\alpha_n t} \quad \text{in } X,$$ (2.1)

where each $f_n : \mathbb{R} \to X$ is a polynomial, if one has, for any $N \ge 1$, that

$$\Big\| f(t) - \sum_{n=1}^{N} f_n(t) e^{-\alpha_n t} \Big\|_X = O(e^{-(\alpha_N + \varepsilon_N)t}) \quad \text{for some } \varepsilon_N > 0.$$ (2.2)

One can see, e.g. [9, Lemma 4.1], that the polynomials $f_1, f_2, \ldots, f_N$ in (2.2) are unique. In the case $\alpha_n \to \infty$ as $n \to \infty$, the (infinite series) asymptotic expansion (2.1) provides exponentially precise approximations for $f(t)$, as $t \to \infty$. More specifically, for any $\gamma > 0$, the partial sum $\sum_{n=1}^{N} f_n(t) e^{-\alpha_n t}$ of the series, with sufficiently large $N$, approximates $f(t)$, as $t \to \infty$, with an error of order $O(e^{-\gamma t})$. Regarding the nonlinearity in (1.4), the function $F$ will be approximated near the origin by functions, not necessarily polynomials, in the following class.

Definition 2.2.
Let $(X, \|\cdot\|_X)$ and $(Y, \|\cdot\|_Y)$ be two (real) normed spaces. A function $F : X \to Y$ is positively homogeneous of degree $\beta \ge 0$ if

$$F(tx) = t^{\beta} F(x) \quad \text{for any } x \in X \text{ and any } t > 0.$$ (2.3)

Define $\mathcal{H}_{\beta}(X, Y)$ to be the set of positively homogeneous functions of degree $\beta$ from $X$ to $Y$, and denote $\mathcal{H}_{\beta}(X) = \mathcal{H}_{\beta}(X, X)$. For a function $F \in \mathcal{H}_{\beta}(X, Y)$, define

$$\|F\|_{\mathcal{H}_{\beta}} = \sup_{\|x\|_X = 1} \|F(x)\|_Y = \sup_{x \ne 0} \frac{\|F(x)\|_Y}{\|x\|_X^{\beta}}.$$

The following are immediate properties.
(a) If $F \in \mathcal{H}_{\beta}(X, Y)$ with $\beta > 0$, then taking $x = 0$ and $t = 2$ in (2.3) gives

$$F(0) = 0.$$ (2.4)

If, in addition, $F$ is bounded on the unit sphere in $X$, then $\|F\|_{\mathcal{H}_{\beta}} \in [0, \infty)$ and

$$\|F(x)\|_Y \le \|F\|_{\mathcal{H}_{\beta}} \|x\|_X^{\beta} \quad \forall x \in X.$$ (2.5)

(b) The zero function (from $X$ to $Y$) belongs to $\mathcal{H}_{\beta}(X, Y)$ for all $\beta \ge 0$, and a constant function (from $X$ to $Y$) belongs to $\mathcal{H}_0(X, Y)$.
(c) Each $\mathcal{H}_{\beta}(X, Y)$, for $\beta \ge 0$, is a linear space.
(d) If $F_1 \in \mathcal{H}_{\beta_1}(X, \mathbb{R})$ and $F_2 \in \mathcal{H}_{\beta_2}(X, Y)$, then $F_1 F_2 \in \mathcal{H}_{\beta_1 + \beta_2}(X, Y)$.
(e) If $F : X \to Y$ is a homogeneous polynomial of degree $m \in \mathbb{Z}_+$, then $F \in \mathcal{H}_m(X, Y)$.
In (e) above and throughout the paper, a constant function, even when it is zero, is considered as a homogeneous polynomial of degree zero. The space $\mathcal{H}_{\beta}(X, Y)$ can contain much more complicated functions than homogeneous polynomials. For example, let $s \in \mathbb{Z}_+$, let the numbers $\nu_j$, for $1 \le j \le s$, be positive, and let each $P_j$, for $1 \le j \le s$, be a homogeneous polynomial of degree $m_j \in \mathbb{N}$ from $X$ to a normed space $(Y_j, \|\cdot\|_{Y_j})$. Let $P : X \to Y$ be a homogeneous polynomial of degree $m \in \mathbb{Z}_+$. Consider the function $F$ defined by

$$F(x) = \|P_1(x)\|_{Y_1}^{\nu_1} \|P_2(x)\|_{Y_2}^{\nu_2} \cdots \|P_s(x)\|_{Y_s}^{\nu_s} P(x), \quad \text{for } x \in X.$$ (2.6)

Then one has

$$F \in \mathcal{H}_{\beta}(X, Y), \quad \text{where } \beta = m + \sum_{j=1}^{s} m_j \nu_j.$$ (2.7)

Thanks to (2.7) and property (c) above, any linear combination of functions of the form in (2.6) with the same number $\beta$ also belongs to $\mathcal{H}_{\beta}(X, Y)$. If $n, m, k \in \mathbb{N}$ and $\mathcal{L}$ is an $m$-linear mapping from $(\mathbb{R}^n)^m$ to $\mathbb{R}^k$, the norm of $\mathcal{L}$ is defined by

$$\|\mathcal{L}\| = \max\{ |\mathcal{L}(x_1, x_2, \ldots, x_m)| : x_j \in \mathbb{R}^n,\ |x_j| = 1 \text{ for } 1 \le j \le m \}.$$ (2.8)

It is known that the norm $\|\mathcal{L}\|$ belongs to $[0, \infty)$, and one has

$$|\mathcal{L}(x_1, x_2, \ldots, x_m)| \le \|\mathcal{L}\| \cdot |x_1| \cdot |x_2| \cdots |x_m| \quad \forall x_1, x_2, \ldots, x_m \in \mathbb{R}^n.$$ (2.9)

In particular, when $m = 1$, (2.8) yields the operator norm for any $k \times n$ matrix $\mathcal{L}$. Let the space's dimension $d \in \mathbb{N}$ be fixed throughout the paper. Consider the ODE system (1.4).

Assumption 2.3.
Hereafter, the matrix $A$ is diagonalizable with positive eigenvalues.

Thanks to Assumption 2.3, the spectrum $\sigma(A)$ of the matrix $A$ consists of eigenvalues $\Lambda_k$, for $1 \le k \le d$, which are positive and increasing in $k$. Then there exists an invertible matrix $S$ such that

$$A = S^{-1} \widehat{A} S, \quad \text{where } \widehat{A} = \mathrm{diag}[\Lambda_1, \Lambda_2, \ldots, \Lambda_d].$$

Denote the distinct eigenvalues of $A$ by $\lambda_j$, strictly increasing in $j$, i.e.,

$$0 < \lambda_1 = \Lambda_1 < \lambda_2 < \cdots < \lambda_{d_*} = \Lambda_d \quad \text{with } 1 \le d_* \le d.$$

For $1 \le k, \ell \le d$, let $E_{k\ell}$ be the elementary $d \times d$ matrix $(\delta_{ki}\delta_{\ell j})_{1 \le i, j \le d}$, where $\delta_{ki}$ and $\delta_{\ell j}$ are the Kronecker delta symbols. For $\lambda \in \sigma(A)$, define

$$\widehat{R}_{\lambda} = \sum_{1 \le i \le d,\ \Lambda_i = \lambda} E_{ii} \quad \text{and} \quad R_{\lambda} = S^{-1} \widehat{R}_{\lambda} S.$$

Then one immediately has

$$I_d = \sum_{j=1}^{d_*} R_{\lambda_j}, \quad R_{\lambda_i} R_{\lambda_j} = \delta_{ij} R_{\lambda_j}, \quad A R_{\lambda_j} = R_{\lambda_j} A = \lambda_j R_{\lambda_j},$$ (2.10)

and there exists $c \ge 1$ such that

$$c^{-1} |x| \le \sum_{j=1}^{d_*} |R_{\lambda_j} x| \le c |x| \quad \text{for all } x \in \mathbb{R}^d.$$ (2.11)

Below, we recall a key approximation lemma for linear ODEs. It is Lemma 2.2 of [9], which originates from Foias–Saut's work [19] and is based on the first formalized version [24, Lemma 4.2].

Lemma 2.4 ([9, Lemma 2.2]). Let $p(t)$ be an $\mathbb{R}^d$-valued polynomial and $g : [T, \infty) \to \mathbb{R}^d$, for some $T \in \mathbb{R}$, be a continuous function satisfying $|g(t)| = O(e^{-\alpha t})$ for some $\alpha > 0$. Suppose $\lambda > 0$ and $y \in C([T, \infty), \mathbb{R}^d)$ is a solution of

$$y'(t) = -(A - \lambda I_d) y(t) + p(t) + g(t), \quad \text{for } t \in (T, \infty).$$

If $\lambda > \lambda_1$, assume further that

$$\lim_{t \to \infty} \big( e^{(\bar{\lambda} - \lambda)t} |y(t)| \big) = 0, \quad \text{where } \bar{\lambda} = \max\{\lambda_j : 1 \le j \le d_*,\ \lambda_j < \lambda\}.$$ (2.12)

Then there exists a unique $\mathbb{R}^d$-valued polynomial $q(t)$ such that

$$q'(t) = -(A - \lambda I_d) q(t) + p(t) \quad \text{for } t \in \mathbb{R},$$ (2.13)

and

$$|y(t) - q(t)| = O(e^{-\varepsilon t}) \quad \text{for some } \varepsilon > 0.$$ (2.14)

In fact, the polynomial $q(t)$ in Lemma 2.4 can be defined explicitly as follows.
We write, with the use of (2.10), $q(t) = \sum_{j=1}^{d_*} R_{\lambda_j} q(t)$, where, for each $1 \le j \le d_*$ and $t \in \mathbb{R}$,

$$R_{\lambda_j} q(T + t) = \begin{cases} e^{-(\lambda_j - \lambda)t} \int_0^t e^{(\lambda_j - \lambda)\tau} R_{\lambda_j} p(T + \tau)\, d\tau & \text{if } \lambda_j > \lambda, \\ R_{\lambda_j} y(T) + \int_0^{\infty} R_{\lambda_j} g(T + \tau)\, d\tau + \int_0^t R_{\lambda_j} p(T + \tau)\, d\tau & \text{if } \lambda_j = \lambda, \\ -e^{-(\lambda_j - \lambda)t} \int_t^{\infty} e^{(\lambda_j - \lambda)\tau} R_{\lambda_j} p(T + \tau)\, d\tau & \text{if } \lambda_j < \lambda. \end{cases}$$ (2.15)

In the case $p(t) \equiv 0$, it follows from (2.15) that $q(t) \equiv \xi$, a constant vector in $\mathbb{R}^d$. Then (2.13) and (2.14) read as

$$(A - \lambda I_d)\xi = 0 \quad \text{and} \quad |y(t) - \xi| = O(e^{-\varepsilon t}).$$ (2.16)

3. The first asymptotic approximation
Consider the following ODE on $\mathbb{R}^d$, which is more general than (1.4),

$$\frac{dy}{dt} + Ay = F(t, y), \quad t > 0.$$ (3.1)

Assumption 3.1.
The function $F$, mapping $(t, x) \in [0, \infty) \times \mathbb{R}^d$ to $F(t, x) \in \mathbb{R}^d$, is continuous in $[0, \infty) \times \mathbb{R}^d$, locally Lipschitz with respect to $x$ in $[0, \infty) \times \mathbb{R}^d$, and there exist positive numbers $c_*$, $\varepsilon_*$, $\alpha$ such that

$$|F(t, x)| \le c_* |x|^{1+\alpha} \quad \forall t \ge 0,\ \forall x \in \mathbb{R}^d \text{ with } |x| \le \varepsilon_*.$$ (3.2)

It follows from (3.2) that $F(t, 0) = 0$ for all $t \ge 0$. By the uniqueness/backward uniqueness of the ODE system (3.1), a solution $y(t) \in C([0, \infty))$ of (3.1) has the property

$$y(0) = 0 \text{ if and only if } y(t) = 0 \text{ for all } t \ge 0.$$ (3.3)

Thanks to Assumption 2.3 and (3.2), it is well-known that the trivial solution $y(t) \equiv 0$ of (3.1) is asymptotically stable. Any solution $y(t) \in C([0, \infty))$ of (3.1) that satisfies

$$y(0) \ne 0 \quad \text{and} \quad \lim_{t \to \infty} y(t) = 0$$ (3.4)

will be referred to as a non-trivial, decaying solution. These solutions will be the focus of our study. The following elementary result provides, for non-trivial, decaying solutions, a more precise upper bound, compared to (3.4), and an additional lower bound.

Proposition 3.2.
Let $y(t)$ be a non-trivial, decaying solution of (3.1). Then there exists a number $C_1 > 0$ such that

$$|y(t)| \le C_1 e^{-\Lambda_1 t} \quad \text{for all } t \ge 0.$$ (3.5)

Moreover, for any $\varepsilon > 0$, there exists a number $C_2 = C_2(\varepsilon) > 0$ such that

$$|y(t)| \ge C_2 e^{-(\Lambda_d + \varepsilon)t} \quad \text{for all } t \ge 0.$$ (3.6)

Proof.
Set $Y(t) = \big( \sum_{j=1}^{d_*} |R_{\lambda_j} y(t)|^2 \big)^{1/2}$. Applying $R_{\lambda_j}$ to equation (3.1), taking the dot product of the resulting equation with $R_{\lambda_j} y$, using the last property in (2.10), and then summing over $j$, we obtain

$$\frac{1}{2} \frac{d}{dt} Y^2(t) = \frac{1}{2} \frac{d}{dt} \sum_{j=1}^{d_*} |R_{\lambda_j} y|^2 = -\sum_{j=1}^{d_*} \lambda_j |R_{\lambda_j} y|^2 + \sum_{j=1}^{d_*} R_{\lambda_j} F(t, y) \cdot R_{\lambda_j} y.$$ (3.7)

Note that

$$\Lambda_1 \sum_{j=1}^{d_*} |R_{\lambda_j} y|^2 \le \sum_{j=1}^{d_*} \lambda_j |R_{\lambda_j} y|^2 \le \Lambda_d \sum_{j=1}^{d_*} |R_{\lambda_j} y|^2.$$ (3.8)

Denote $C_0 = \sum_{j=1}^{d_*} \|R_{\lambda_j}\|^2$. Let $\varepsilon > 0$ be arbitrary. Since $y(t) \to 0$ as $t \to \infty$, there exists $T_{\varepsilon} \ge 0$ such that

$$|y(t)| \le \varepsilon_* \quad \text{and} \quad C_0 c_* c^2 |y(t)|^{\alpha} \le \varepsilon \quad \forall t \ge T_{\varepsilon}.$$ (3.9)

We have, for $t \ge T_{\varepsilon}$,

$$\Big| \sum_{j=1}^{d_*} R_{\lambda_j} F(t, y) \cdot R_{\lambda_j} y \Big| \le \sum_{j=1}^{d_*} \|R_{\lambda_j}\|^2 |F(t, y)| \cdot |y| \le C_0 c_* |y|^{2+\alpha}.$$ (3.10)

Combining (3.10) with (2.11) and (3.9) gives

$$\Big| \sum_{j=1}^{d_*} R_{\lambda_j} F(t, y) \cdot R_{\lambda_j} y \Big| \le C_0 c_* |y|^{\alpha} \cdot c^2 Y^2(t) \le \varepsilon Y^2(t) \quad \forall t \ge T_{\varepsilon}.$$ (3.11)

Proof of (3.5). By equation (3.7), the first inequality in (3.8), and (3.11), we have

$$\frac{1}{2} \frac{d}{dt} Y^2 \le -(\Lambda_1 - \varepsilon) Y^2 \quad \forall t \ge T_{\varepsilon}.$$

Thus, for $t \ge T_{\varepsilon}$,

$$Y(t) \le Y(T_{\varepsilon}) e^{-(\Lambda_1 - \varepsilon)(t - T_{\varepsilon})}.$$
Using this estimate and (3.10) in (3.7) gives, for $t > T_{\varepsilon}$,

$$\frac{1}{2} \frac{d}{dt} Y^2 \le -\Lambda_1 Y^2 + C_0 c_* (c Y)^{2+\alpha} \le -\Lambda_1 Y^2 + C' e^{-(2+\alpha)(\Lambda_1 - \varepsilon)(t - T_{\varepsilon})},$$

hence,

$$\frac{d}{dt} Y^2 \le -2\Lambda_1 Y^2 + 2C' e^{-\beta(t - T_{\varepsilon})},$$ (3.12)

where $\beta = (2 + \alpha)(\Lambda_1 - \varepsilon)$ and $C'$ is a positive number. Choose $\varepsilon$ sufficiently small so that $\beta > 2\Lambda_1$. Applying Gronwall's inequality to (3.12), for $t \ge T_{\varepsilon}$, yields

$$Y^2(t) \le e^{-2\Lambda_1 (t - T_{\varepsilon})} Y^2(T_{\varepsilon}) + 2C' \int_{T_{\varepsilon}}^{t} e^{-2\Lambda_1 (t - \tau)} e^{-\beta(\tau - T_{\varepsilon})}\, d\tau,$$

and, also by (2.11),

$$|y(t)| \le c\, Y(t) \le e^{-\Lambda_1 (t - T_{\varepsilon})} c \Big( Y^2(T_{\varepsilon}) + \frac{2C'}{\beta - 2\Lambda_1} \Big)^{1/2}.$$

Therefore, we obtain the inequality in (3.5) for some constant $C_1 > 0$, but only for all $t \ge T_{\varepsilon}$. Combining this with the boundedness of $|y(t)|$ on $[0, T_{\varepsilon}]$, we then obtain estimate (3.5) for all $t \ge 0$ with an adjusted constant $C_1 > 0$.

Proof of (3.6). By equation (3.7), the second inequality in (3.8), and (3.11), we have

$$\frac{1}{2} \frac{d}{dt} Y^2 \ge -\Lambda_d Y^2 - \varepsilon Y^2 = -(\Lambda_d + \varepsilon) Y^2 \quad \forall t > T_{\varepsilon}.$$

Hence,

$$Y(t) \ge Y(T_{\varepsilon}) e^{-(\Lambda_d + \varepsilon)(t - T_{\varepsilon})} \quad \forall t \ge T_{\varepsilon}.$$

By virtue of (3.3), $|y(t)| > 0$ for all $t \ge 0$. It follows that

$$|y(t)| \ge c^{-1} Y(t) \ge c^{-2} |y(T_{\varepsilon})| e^{-(\Lambda_d + \varepsilon)(t - T_{\varepsilon})} = C_2' e^{-(\Lambda_d + \varepsilon)t} \quad \forall t \ge T_{\varepsilon},$$ (3.13)

where $C_2' > 0$. Since $y \in C([0, T_{\varepsilon}], \mathbb{R}^d)$ and $|y(t)| > 0$ on $[0, T_{\varepsilon}]$, $|y(t)|$ is bounded below by a positive constant on $[0, T_{\varepsilon}]$. Combining this fact with estimate (3.13) for $t \ge T_{\varepsilon}$, we obtain the all-time estimate (3.6). □

The lower bound (3.6) in Proposition 3.2 can be derived by using results for abstract problems in infinite dimensional spaces such as [22, Theorems 1.1 and 1.2]; see also [12]. Nonetheless, the proof above is included for being self-contained and simple. As discussed in the Introduction, the next theorem either follows from the proof of [18, Proposition 3] or is a consequence of [21, Theorem 1.1]. However, the proof presented below uses a new method, which may be useful in other problems.
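Before turning to that theorem, Proposition 3.2 itself can be watched numerically. The sketch below is our own illustration with $A = \mathrm{diag}(1, 2)$ (so $\Lambda_1 = 1$ and $\Lambda_d = 2$) and the non-smooth nonlinearity $F(y) = |y|^{1/2} y$, which is not twice differentiable at the origin but satisfies (3.2) with $\alpha = 1/2$.

```python
# A = diag(1, 2), F(y) = |y|^{1/2} y, so |F(y)| = |y|^{3/2} as required by (3.2).
import math

def rhs(y1, y2):
    r = math.hypot(y1, y2) ** 0.5
    return (-y1 + r * y1, -2.0 * y2 + r * y2)

dt, steps = 0.01, 1000              # integrate on [0, 10] with RK4
y1, y2 = 0.4, 0.4
norm_at = {}
for n in range(steps + 1):
    if n in (600, 1000):            # record |y(t)| at t = 6 and t = 10
        norm_at[n] = math.hypot(y1, y2)
    a1, a2 = rhs(y1, y2)
    b1, b2 = rhs(y1 + 0.5 * dt * a1, y2 + 0.5 * dt * a2)
    c1, c2 = rhs(y1 + 0.5 * dt * b1, y2 + 0.5 * dt * b2)
    d1, d2 = rhs(y1 + dt * c1, y2 + dt * c2)
    y1 += dt * (a1 + 2 * b1 + 2 * c1 + d1) / 6.0
    y2 += dt * (a2 + 2 * b2 + 2 * c2 + d2) / 6.0

slope = (math.log(norm_at[1000]) - math.log(norm_at[600])) / 4.0
print(slope)  # observed decay exponent of |y(t)|, consistent with (3.5)-(3.6)
```

Over the window $[6, 10]$ the logarithmic slope comes out slightly above $-\Lambda_1 = -1$, because the nonlinear term is still inflating the prefactor a little, and far above $-(\Lambda_d + \varepsilon)$; both observations are consistent with the two-sided bounds (3.5) and (3.6).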
Theorem 3.3.
Let $y(t)$ be a non-trivial, decaying solution of (3.1). Then there exist an eigenvalue $\lambda_*$ of $A$ and a corresponding eigenvector $\xi_*$ such that

$$|y(t) - e^{-\lambda_* t} \xi_*| = O(e^{-(\lambda_* + \delta)t}) \quad \text{for some } \delta > 0.$$ (3.14)

Proof.
Define the set

$$S' = \Big\{ \sum_{j=1}^{n} \lambda'_j + m \alpha \lambda_1 : n \in \mathbb{N},\ \lambda'_j \in \sigma(A),\ 0 \le m \in \mathbb{Z} \Big\}.$$ (3.15)

The set $S'$ can be arranged as a strictly increasing sequence $\{\nu_n\}_{n=1}^{\infty}$. Note that $\nu_1 = \lambda_1$ and $\nu_n \to \infty$ as $n \to \infty$. For any $n \in \mathbb{N}$, one has $\nu_n + \alpha \lambda_1 > \nu_n$ and $\nu_n + \alpha \lambda_1 \in S'$. Hence, by the strict increase of the $\nu_n$'s, we have

$$\nu_n + \alpha \lambda_1 \ge \nu_{n+1}.$$ (3.16)

Step 1.
First, by Proposition 3.2, $|y(t)| \le C e^{-\nu_1 t}$. Let $w(t) = e^{\nu_1 t} y(t)$. Then $w(t)$ satisfies

$$w'(t) + (A - \nu_1 I_d) w(t) = g(t) \overset{\text{def}}{=} e^{\nu_1 t} F(t, y(t)).$$ (3.17)

We estimate the right-hand side by

$$|g(t)| \le C e^{\nu_1 t} |y(t)|^{1+\alpha} \le C e^{\nu_1 t} e^{-\nu_1 (1+\alpha)t} = O(e^{-\alpha \nu_1 t}).$$ (3.18)

By equation (3.17) and estimate (3.18), we can apply Lemma 2.4 to $y(t) = w(t)$ and $p(t) \equiv 0$. Then, by (2.16), there exist a vector $\xi \in \mathbb{R}^d$ and a number $\varepsilon > 0$ such that

$$A \xi = \nu_1 \xi,$$ (3.19)

$$|w(t) - \xi| = O(e^{-\varepsilon t}), \quad \text{that is}, \quad |y(t) - e^{-\nu_1 t} \xi| = O(e^{-(\nu_1 + \varepsilon)t}).$$ (3.20)

Step 2.
Set

$$M = \{ n \in \mathbb{N} : |y(t)| = O(e^{-(\nu_n + \delta)t}) \text{ for some } \delta > 0 \}.$$

Suppose $n \in M$. Let $w_n(t) = e^{\nu_{n+1} t} y(t)$. Then

$$w'_n(t) + (A - \nu_{n+1} I_d) w_n(t) = g_{n+1}(t) \overset{\text{def}}{=} e^{\nu_{n+1} t} F(t, y(t)).$$ (3.21)

To estimate the last term, we note from (3.16) that $\nu_n (1 + \alpha) \ge \nu_n + \lambda_1 \alpha \ge \nu_{n+1}$. Then, for large $t$,

$$|g_{n+1}(t)| \le C e^{\nu_{n+1} t} |y(t)|^{1+\alpha} \le C e^{\nu_{n+1} t} e^{-(\nu_n + \delta)(1+\alpha)t} = O(e^{-\delta(1+\alpha)t}).$$ (3.22)

By (3.21) and (3.22), we, again, can apply Lemma 2.4 to $y(t) = w_n(t)$ and $p(t) \equiv 0$; note that condition (2.12) is fulfilled, since $n \in M$ gives $|w_n(t)| = O(e^{(\nu_{n+1} - \nu_n - \delta)t})$ while $\bar{\lambda} \le \nu_n$ because $\sigma(A) \subset S'$. Then, by (2.16), there exist a vector $\xi_{n+1} \in \mathbb{R}^d$ and a number $\varepsilon > 0$ such that

$$A \xi_{n+1} = \nu_{n+1} \xi_{n+1}, \quad |w_n(t) - \xi_{n+1}| = O(e^{-\varepsilon t}), \quad \text{that is}, \quad |y(t) - e^{-\nu_{n+1} t} \xi_{n+1}| = O(e^{-(\nu_{n+1} + \varepsilon)t}).$$

Step 3.
If the vector $\xi$ in Step 1 is not zero, then, thanks to (3.20) and (3.19), the theorem is proved with $\lambda_* = \nu_1$ and $\xi_* = \xi$. Now, consider $\xi = 0$. By (3.20) with $\xi = 0$, one has $1 \in M$, hence $M$ is a non-empty subset of $\mathbb{N}$. By (3.6) and the fact $\nu_n \to \infty$, the set $M$ must be finite. Let $k$ be the maximum number of $M$. By the result in Step 2 applied to $n = k$, there exist $\xi_{k+1} \in \mathbb{R}^d$ and $\varepsilon > 0$ such that

$$A \xi_{k+1} = \nu_{k+1} \xi_{k+1},$$ (3.23)

$$|y(t) - e^{-\nu_{k+1} t} \xi_{k+1}| = O(e^{-(\nu_{k+1} + \varepsilon)t}).$$ (3.24)

If $\xi_{k+1} = 0$, then (3.24) implies $k + 1 \in M$, which is a contradiction. Thus, $\xi_{k+1} \ne 0$, which, together with (3.23), implies that $\lambda_* = \nu_{k+1}$ is an eigenvalue of $A$ and $\xi_* = \xi_{k+1}$ is a corresponding eigenvector. Then estimate (3.14) follows from (3.24). □

Remark 3.4.
We compare the above proof of Theorem 3.3 with Foias–Saut's proof in [18]. We recall from [18] that the Dirichlet quotient $Ay(t) \cdot y(t)/|y(t)|^2$ is proved to converge, as $t \to \infty$, to an eigenvalue $\lambda_*$ of $A$ first, and then, based on this, the two limits $e^{\lambda_* t} R_{\lambda_*} y(t) \to \xi_* \ne 0$ and $e^{\lambda_* t}(I_d - R_{\lambda_*}) y(t) \to 0$ are established by analyzing $y(t)/|y(t)|$, see [18, Proposition 1]. We, instead, do not use the Dirichlet quotient to determine the exponential rate, but create the set $S'$ of possible rates, see (3.15), and find the first $\lambda_* \in S'$ such that $e^{\lambda_* t}|y(t)|$ does not decay exponentially. Then, by virtue of the approximation Lemma 2.4, estimate (3.14) is established without analyzing $y(t)/|y(t)|$. This idea, in fact, is inspired by Foias–Saut's proof in [19] of the asymptotic expansion (1.3). However, we restrict it solely to the problem of the first asymptotic approximation, and hence make it significantly simpler.

4. The series expansion
In this section, we focus on obtaining the asymptotic expansion, as $t \to \infty$, for solutions of equation (1.4). Regarding the equation's nonlinearity, we assume the following.

Assumption 4.1.
The mapping $F : \mathbb{R}^d \to \mathbb{R}^d$ has the following properties.
(i) $F$ is locally Lipschitz on $\mathbb{R}^d$ and $F(0) = 0$.
(ii) Either (H1) or (H2) below is satisfied.
(H1) There exist numbers $\beta_k$, for $k \in \mathbb{N}$, which belong to $(1, \infty)$ and increase strictly to infinity, and functions $F_k \in \mathcal{H}_{\beta_k}(\mathbb{R}^d) \cap C^{\infty}(\mathbb{R}^d_0)$, for $k \in \mathbb{N}$, such that it holds, for any $N \in \mathbb{N}$, that

$$\Big| F(x) - \sum_{k=1}^{N} F_k(x) \Big| = O(|x|^{\beta}) \text{ as } x \to 0, \text{ for some } \beta > \beta_N.$$ (4.1)

(H2) There exist $N_* \in \mathbb{N}$, strictly increasing numbers $\beta_k$ in $(1, \infty)$, and functions $F_k \in \mathcal{H}_{\beta_k}(\mathbb{R}^d) \cap C^{\infty}(\mathbb{R}^d_0)$, for $k = 1, 2, \ldots, N_*$, such that

$$\Big| F(x) - \sum_{k=1}^{N_*} F_k(x) \Big| = O(|x|^{\beta}) \text{ as } x \to 0, \text{ for all } \beta > \beta_{N_*}.$$ (4.2)

In Assumption 4.1(ii), we conveniently write case (H1) as

$$F(x) \sim \sum_{k=1}^{\infty} F_k(x),$$ (4.3)

and case (H2) as

$$F(x) \sim \sum_{k=1}^{N_*} F_k(x).$$ (4.4)

The following remarks on Assumption 4.1 are in order.
(a) Applying (2.4) and (2.5) to each function $F_k$, one has

$$F_k(0) = 0, \quad \|F_k\|_{\mathcal{H}_{\beta_k}} < \infty, \quad \text{and} \quad |F_k(x)| \le \|F_k\|_{\mathcal{H}_{\beta_k}} |x|^{\beta_k} \text{ for all } x \in \mathbb{R}^d.$$

Hence, (4.1) indicates that the remainder $F(x) - \sum_{k=1}^{N} F_k(x)$ between $F(x)$ and its approximate sum $\sum_{k=1}^{N} F_k(x)$ is small, as $x \to 0$, of a higher order (of $|x|$) than that in the approximate sum.
(b) With functions $F_k$ as in (H2) of Assumption 4.1, if $F(x) = \sum_{k=1}^{N_*} F_k(x)$, then $F$ satisfies (4.4). For the relation between (4.3) and (4.4), see Remark 4.4 below.
(c) By the remark (e) after Definition 2.2, if $F$ is a $C^{\infty}$-vector field on the entire space $\mathbb{R}^d$ with $F(0) = 0$ and $F'(0) = 0$, then $F$ satisfies Assumption 4.1, with the right-hand side of (4.3) being simply the Taylor expansion of $F(x)$ about the origin.
(d) Note that we do not require the convergence of the formal series on the right-hand side of (4.3). Even when the convergence occurs, the limit is not necessarily the function $F$. For instance, if $h : \mathbb{R}^d \to \mathbb{R}^d$ satisfies $|x|^{-\alpha} h(x) \to 0$ as $x \to 0$ for all $\alpha > 0$, then $F$ and $F + h$ have the same expansion (4.3).
(e) The class of functions $F$ that satisfy Assumption 4.1 contains much more than smooth vector fields; see section 6 below.
By Assumption 4.1, for each $N \in \mathbb{N}$ in the case of (4.3), or $N \in \mathbb{N} \cap [1, N_*]$ in the case of (4.4), there is $\varepsilon_N > 0$ such that

$$\Big| F(x) - \sum_{k=1}^{N} F_k(x) \Big| = O(|x|^{\beta_N + \varepsilon_N}) \quad \text{as } x \to 0.$$ (4.5)

Note from (4.5) with $N = 1$ that, as $x \to 0$,

$$|F(x)| \le |F_1(x)| + |F(x) - F_1(x)| \le \|F_1\|_{\mathcal{H}_{\beta_1}} |x|^{\beta_1} + O(|x|^{\beta_1 + \varepsilon_1}) = O(|x|^{\beta_1}).$$

Thus, there exist numbers $c_*, \varepsilon_* > 0$ such that

$$|F(x)| \le c_* |x|^{\beta_1} \quad \forall x \in \mathbb{R}^d \text{ with } |x| < \varepsilon_*.$$ (4.6)

By property (4.6) and Assumption 4.1, the function $F$ satisfies the conditions in Assumption 3.1. Therefore, the facts about trivial and non-trivial solutions in section 3 still apply to equation (1.4), and Theorem 3.3 holds true for solutions of (1.4). Hereafter, $y(t)$ is a non-trivial, decaying solution of (1.4). Let the eigenvalue $\lambda_* = \lambda_{n_*}$, for some $1 \le n_* \le d_*$, and its corresponding eigenvector $\xi_*$ be as in Theorem 3.3. It follows from (3.14) that

$$|y(t)| = O(e^{-\lambda_* t}).$$ (4.7)

To describe the exponential rates in a possible asymptotic expansion of the solution $y(t)$, we use the following sets $\widetilde{S}$ and $S$.

Definition 4.2.
We define a set $\widetilde{S} \subset [0, \infty)$ as follows. In the case of (4.3), let $\alpha_k = \beta_k - 1 > 0$ for $k \in \mathbb{N}$, and

$$\widetilde{S} = \Big\{ \sum_{k=n_*}^{d_*} m_k (\lambda_k - \lambda_*) + \sum_{j=1}^{\infty} z_j \alpha_j \lambda_* : m_k, z_j \in \mathbb{Z}_+, \text{ with } z_j > 0 \text{ for only finitely many } j\text{'s} \Big\}.$$ (4.8)

In the case of (4.4), let $\alpha_k = \beta_k - 1 > 0$ for $k = 1, 2, \ldots, N_*$, and

$$\widetilde{S} = \Big\{ \sum_{k=n_*}^{d_*} m_k (\lambda_k - \lambda_*) + \sum_{j=1}^{N_*} z_j \alpha_j \lambda_* : m_k, z_j \in \mathbb{Z}_+ \Big\}.$$ (4.9)

In both cases, the set $\widetilde{S}$ has countably, infinitely many elements. Arrange $\widetilde{S}$ as a strictly increasing sequence $(\widetilde{\mu}_n)_{n=1}^{\infty}$ of non-negative numbers. Set $\mu_n = \widetilde{\mu}_n + \lambda_*$ for $n \in \mathbb{N}$, and define

$$S = \{ \mu_n : n \in \mathbb{N} \}.$$ (4.10)

The set $\widetilde{S}$ has the following elementary properties.
(a) For $n_* \le \ell \le d_*$, by choosing $m_k = \delta_{k\ell}$ and $z_j = 0$ for all $j$ in (4.8) or (4.9), we have $\lambda_{\ell} - \lambda_* \in \widetilde{S}$. Hence,

$$\lambda_{\ell} \in S \quad \text{for all } \ell = n_*, n_* + 1, \ldots, d_*.$$ (4.11)

(b) Clearly, $\widetilde{\mu}_1 = 0$ and $\mu_1 = \lambda_*$. The numbers $\mu_n$ are positive and strictly increasing. Also,

$$\widetilde{\mu}_n \to \infty \quad \text{and} \quad \mu_n \to \infty \quad \text{as } n \to \infty.$$ (4.12)

(c) For all $x, y \in \widetilde{S}$ and $k \in \mathbb{N}$, one has

$$x + y,\ x + \alpha_k \lambda_* \in \widetilde{S}.$$ (4.13)

As a consequence of (4.13), one has

$$\widetilde{\mu}_n + \alpha_k \lambda_* \ge \widetilde{\mu}_{n+1} \quad \text{for all } n, k.$$ (4.14)

Let $r \in \mathbb{N}$ and $s \in \mathbb{Z}_+$. Since $F_r$ is a $C^{\infty}$-function in a neighborhood of $\xi_* \ne 0$, we have the following Taylor expansion, for any $h \in \mathbb{R}^d$,

$$F_r(\xi_* + h) = \sum_{m=0}^{s} \frac{1}{m!} D^m F_r(\xi_*) h^{(m)} + g_{r,s}(h),$$ (4.15)

where $D^m F_r(\xi_*)$ is the $m$-th order derivative of $F_r$ at $\xi_*$, and

$$g_{r,s}(h) = O(|h|^{s+1}) \quad \text{as } h \to 0.$$ (4.16)

For $m \ge 0$, denote

$$\mathcal{F}_{r,m} = \frac{1}{m!} D^m F_r(\xi_*).$$ (4.17)

When $m = 0$, (4.17) reads as $\mathcal{F}_{r,0} = F_r(\xi_*)$. When $m \ge 1$, $\mathcal{F}_{r,m}$ is an $m$-linear mapping from $(\mathbb{R}^d)^m$ to $\mathbb{R}^d$. By (2.9), one has, for any $r \ge 1$, $m \ge 1$, and $y_1, y_2, \ldots, y_m \in \mathbb{R}^d$, that

$$|\mathcal{F}_{r,m}(y_1, y_2, \ldots, y_m)| \le \|\mathcal{F}_{r,m}\| \cdot |y_1| \cdot |y_2| \cdots |y_m|.$$ (4.18)

For our convenience, we write inequality (4.18) even when $m = 0$ with $\|\mathcal{F}_{r,0}\| \overset{\text{def}}{=} |F_r(\xi_*)|$. Our main result is the following theorem.

Theorem 4.3.
There exist polynomials $q_n : \mathbb{R} \to \mathbb{R}^d$ such that $y(t)$ has an asymptotic expansion, in the sense of Definition 2.1,

$$y(t) \sim \sum_{n=1}^{\infty} q_n(t) e^{-\mu_n t} \quad \text{in } \mathbb{R}^d,$$ (4.19)

where the $\mu_n$'s are defined in Definition 4.2, and $q_n(t)$ satisfies, for any $n \ge 1$,

$$q'_n + (A - \mu_n I_d) q_n = J_n \overset{\text{def}}{=} \sum_{\substack{r \ge 1,\ m \ge 0,\ k_1, k_2, \ldots, k_m \ge 2, \\ \sum_{j=1}^{m} \widetilde{\mu}_{k_j} + \alpha_r \lambda_* = \widetilde{\mu}_n}} \mathcal{F}_{r,m}(q_{k_1}, q_{k_2}, \ldots, q_{k_m}) \quad \text{in } \mathbb{R}.$$ (4.20)

We clarify the notation in Theorem 4.3.
(a) In the case of assumption (4.3), the index $r$ in $J_n$ is taken over the whole set $\mathbb{N}$. In the case of assumption (4.4), the index $r$ in $J_n$ is restricted to $1, 2, \ldots, N_*$; thus, we explicitly have

$$J_n = \sum_{r=1}^{N_*} \sum_{\substack{m \ge 0,\ k_1, k_2, \ldots, k_m \ge 2, \\ \sum_{j=1}^{m} \widetilde{\mu}_{k_j} + \alpha_r \lambda_* = \widetilde{\mu}_n}} \mathcal{F}_{r,m}(q_{k_1}, q_{k_2}, \ldots, q_{k_m}).$$ (4.21)

(b) When $m = 0$, the terms $q_{k_j}$ in $J_n$ are not needed, see the explanation after (4.17); hence the condition on the $k_j$'s is void, and the corresponding part of $J_n$ becomes

$$\sum F_r(\xi_*) \quad \text{for } \alpha_r \lambda_* = \widetilde{\mu}_n, \text{ that is, } \beta_r \lambda_* = \mu_n.$$ (4.22)

Thus, we rewrite (4.20) more explicitly, by considering $m = 0$ and $m \ge 1$ in $J_n$, as

$$q'_n + (A - \mu_n I_d) q_n = \sum_{\substack{r \ge 1, \\ \alpha_r \lambda_* = \widetilde{\mu}_n}} F_r(\xi_*) + \sum_{\substack{r \ge 1,\ m \ge 1,\ k_1, k_2, \ldots, k_m \ge 2, \\ \sum_{j=1}^{m} \widetilde{\mu}_{k_j} + \alpha_r \lambda_* = \widetilde{\mu}_n}} \mathcal{F}_{r,m}(q_{k_1}, q_{k_2}, \ldots, q_{k_m}).$$ (4.23)

Note, in (4.22), that such an index $r$ may or may not exist. In the latter case, the term is understood to be zero. In the former case, $r$ is uniquely determined and we have only one term.
(c) When $n = 1$, we have $\widetilde{\mu}_1 = 0$, and there are no indices satisfying the constraints for the sum in $J_1$. Hence $J_1 = 0$, and (4.20) becomes

$$q'_1 + (A - \mu_1 I_d) q_1 = 0.$$ (4.24)

(d) Consider $n = 2$. If $m \ge 1$, then, for the second sum on the right-hand side of (4.23), one has at least $\widetilde{\mu}_{k_1} \ge \widetilde{\mu}_2$. Hence $\sum_{j=1}^{m} \widetilde{\mu}_{k_j} + \alpha_r \lambda_* > \widetilde{\mu}_{k_1} \ge \widetilde{\mu}_2$. Therefore, the last condition for the indices in the second sum on the right-hand side of (4.23) is not met. Thus, (4.23) becomes

$$q'_2 + (A - \mu_2 I_d) q_2 = J_2 = \sum_{\substack{r \ge 1, \\ \alpha_r \lambda_* = \widetilde{\mu}_2}} F_r(\xi_*) = \sum_{\substack{r \ge 1, \\ \beta_r \lambda_* = \mu_2}} F_r(\xi_*).$$

(e) We verify that the sum in $J_n$ is a finite sum. Let $n \ge 2$. Firstly, the indices in the sum of $J_n$ satisfy

$$\widetilde{\mu}_n = \sum_{j=1}^{m} \widetilde{\mu}_{k_j} + \alpha_r \lambda_* \ge \alpha_r \lambda_* = \alpha_r \mu_1, \quad \text{which yields } \alpha_r \le \widetilde{\mu}_n / \mu_1.$$ (4.25)

Secondly, for $m \ge 1$, one has

$$\widetilde{\mu}_n = \sum_{j=1}^{m} \widetilde{\mu}_{k_j} + \alpha_r \lambda_* > \sum_{j=1}^{m} \widetilde{\mu}_{k_j} \ge m \widetilde{\mu}_2, \quad \text{which yields } m < \widetilde{\mu}_n / \widetilde{\mu}_2.$$ (4.26)

Note that condition (4.26) cannot be met for $n = 2$ and $m \ge 1$, which agrees with (d) above. Thirdly, for $m \ge 1$ and each $1 \le j \le m$, one has

$$\widetilde{\mu}_n = \sum_{i=1}^{m} \widetilde{\mu}_{k_i} + \alpha_r \lambda_* > \widetilde{\mu}_{k_j}, \quad \text{which yields } k_j < n.$$ (4.27)
2, suppose r ∗ , m ∗ , k ∗ are non-negative integers such that α r ∗ ≥ e µ n /µ , m ∗ ≥ e µ n / e µ , k ∗ ≥ n − . (4.28)Then J n can be equivalently written as J n = r ∗ X r =1 m ∗ X m =0 X ≤ k ,k ,...,k m ≤ k ∗ , P mj =1 e µ kj + α r µ = e µ n F r,m ( q k , q k , . . . , q k m ) . (4.29)Clearly, the right-hand side of (4.29) is a part of the sum in J n , and the converseis also true thanks to (4.25), (4.26) and (4.27) above. Thus, the sums on both sidesof (4.29) are the same.(g) In case of (4.4) and n ≥ J n is given by (4.21), and relation (4.29) under condition(4.28) can be recast as J n = N ∗ X r =1 m ∗ X m =0 X ≤ k ,k ,...,k m ≤ k ∗ , P mj =1 e µ kj + α r µ = e µ n F r,m ( q k , q k , . . . , q k m ) , (4.30)for any non-negative integers m ∗ , k ∗ satisfying m ∗ ≥ e µ n / e µ and k ∗ ≥ n − . (4.31)We are ready to prove Theorem 4.3 now. Proof of Theorem 4.3.
We will prove for the case (4.3) first, and then make necessary changesfor the case (4.4) later.Part A: Proof for the case of (4.3). For any N ∈ N , we denote by ( T N ) the following state-ment: There exist R d -valued polynomials q ( t ) , q ( t ) , . . . , q N ( t ) such that equation (4.20) holds true for n = 1 , , . . . , N , and (cid:12)(cid:12)(cid:12) y ( t ) − N X n =1 q n ( t ) e − µ n t (cid:12)(cid:12)(cid:12) = O ( e − ( µ N + δ N ) t ) as t → ∞ , (4.32) for some δ N > . We will prove ( T N ) for all N ∈ N by induction in N . First step (N=1). By Theorem 3.3 and the fact µ = λ ∗ , the statement ( T ) is true with q ( t ) = ξ ∗ for all t ∈ R , and some δ > Induction step.
Let N ≥
1. Suppose there are polynomials q n ’s for 1 ≤ n ≤ N suchthat the statement ( T N ) holds true.For n = 1 , . . . , N , let y n ( t ) = q n ( t ) e − µ n t , u n ( t ) = y ( t ) − P nk =1 y k ( t ). By induction hypothe-ses, the polynomials q n ’s satisfy (4.24), (4.20) and u N ( t ) = O ( e − ( µ N + δ N ) t ) . (4.33) nfinite Series Asymptotic Expansions for Decaying Solutions of Dissipative Differential Equations 15 Let w N ( t ) = e µ N +1 t u N ( t ). We derive the differential equation for w N ( t ). w ′ N − µ N +1 w N = u ′ N e µ N +1 t = ( y ′ − N X k =1 y ′ k ) e µ N +1 t = ( − Ay + F ( y ) − N X k =1 y ′ k ) e µ N +1 t = (cid:16) − Au N − N X k =1 Ay k + F ( y ) − N X k =1 y ′ k (cid:17) e µ N +1 t . Thus w ′ N + ( A − µ N +1 I d ) w N = e µ N +1 t F ( y ) − e µ N +1 t N X k =1 ( Ay k + y ′ k ) . (4.34)By (4.12), we can choose a number r ∗ ∈ N such that β r ∗ ≥ µ N +1 /µ , which is equivalent to α r ∗ ≥ e µ N +1 /µ . (4.35)By (4.5), one has F ( x ) = r ∗ X r =1 F r ( x ) + O ( | x | β r ∗ + ε r ∗ ) as x → . (4.36)Using (4.36) with x = y ( t ) and utilizing property (4.7), we write the first term on theright-hand side of (4.34) as e µ N +1 t F ( y ( t )) = e µ N +1 t r ∗ X r =1 F r ( y ( t )) + e µ N +1 t O ( | y ( t ) | β r ∗ + ε r ∗ ) = E ( t ) + e µ N +1 t O ( e − λ ∗ ( β r ∗ + ε r ∗ ) t ) , where E ( t ) = e µ N +1 t r ∗ X r =1 F r ( y ( t )) . (4.37)Because of condition for β r ∗ in (4.35), we then have e µ N +1 t F ( y ( t )) = E ( t ) + O ( e − e δ N t ) , where e δ N = λ ∗ ε r ∗ . (4.38)The term P r ∗ r =1 F r ( y ) in (4.37) will be calculated as below. For k = 1 , . . . , N , denote e y k ( t ) = y k ( t ) e λ ∗ t = q k ( t ) e − e µ k t and e u k ( t ) = u k ( t ) e λ ∗ t . When 2 ≤ k ≤ N , one has e y k ( t ) = q k ( t ) e − e µ k t = O ( e − ( e µ k − ε ) t ) for any ε ∈ (0 , e µ k ). (4.39)By (4.33), e u N ( t ) = u N ( t ) e λ ∗ t = O ( e − ( e µ N + δ N ) t ) . 
(4.40)Also, from ( T ), we similarly have e u ( t ) = u ( t ) e λ ∗ t = O ( e − δ t ) . (4.41)Then F r ( y ( t )) = F r ( y + u ) = F r (cid:0) e − λ ∗ t ( ξ ∗ + e u ) (cid:1) = e − β r λ ∗ t F r ( ξ ∗ + e u ) . (4.42)Let s ∗ ∈ N satisfy s ∗ δ + β λ ∗ ≥ µ N +1 and s ∗ ≥ e µ N +1 / e µ . (4.43) By Taylor’s expansion (4.15) with s = s ∗ , using the notation in (4.17), F r ( ξ ∗ + e u ) = s ∗ X m =0 F r,m e u ( m )1 + g r,s ∗ ( e u ) . (4.44)It follows (4.42) and (4.44) that F r ( y ( t )) = e − β r λ ∗ t F r ( ξ ∗ ) + s ∗ X m =1 F r,m e u ( m )1 ! + e − β r λ ∗ t g r,s ∗ ( e u ) . (4.45)The terms in (4.45) are further calculated as follows.For the last term in (4.45), by using (4.16), (4.41) and the first condition in (4.43), wefind that e − β r λ ∗ t g r,s ∗ ( e u ) = e − β r λ ∗ t O ( | e u ( t ) | s ∗ +1 ) = e − β r λ ∗ t O ( e − δ ( s ∗ +1) t )= O ( e − ( β λ ∗ + δ s ∗ + δ ) t ) = O ( e − ( µ N +1 + δ ) t ) . (4.46)For the remaining terms on the right-hand side of (4.45), we write F r,m e u ( m )1 = F r,m (cid:16) N X k =2 e y k + e u N (cid:17) ( m ) = F r,m (cid:16) N X k =2 e y k + e u N , N X k =2 e y k + e u N , . . . , N X k =2 e y k + e u N (cid:17) = F r,m (cid:16) N X k =2 e y k (cid:17) ( m ) + X finitely many F r,m ( z , . . . , z N ) . (4.47)Note, in the case N = 1, that the sum P Nk =2 e y k and, hence, the term F r,m ( P Nk =2 e y k ) ( m ) arenot present in the calculations in (4.47). In the last sum of (4.47), each z , . . . , z N is either P Nk =2 e y k or e u N , and at least one of z j ’s must be e u N . By inequality (4.18), estimate (4.39)for e y k , and estimates (4.40), (4.41) for e u N , we have |F r,m ( z , . . . , z N ) | ≤ kF r,m k · | z | . . . | z N | = O ( | e u N | ) = O ( e − ( e µ N + δ N ) t ) . Therefore, s ∗ X m =0 F r,m e u ( m )1 = F r ( ξ ∗ ) + s ∗ X m =1 F r,m (cid:16) N X k =2 e y k (cid:17) ( m ) + O ( e − ( e µ N + δ N ) t )= s ∗ X m =0 N X k ,...,k m ≥ F r,m ( e y k , e y k , . . . 
, e y k m ) + O ( e − ( e µ N + δ N ) t )= s ∗ X m =0 N X k ,...,k m ≥ e − t P mj =1 e µ kj F r,m ( q k , q k , . . . , q k m ) + O ( e − ( e µ N + δ N ) t ) . Thus, e − β r λ ∗ t s ∗ X m =0 F r,m e u ( m )1 = s ∗ X m =0 N X k ,...,k m =2 e − t ( P mj =1 e µ kj + β r λ ∗ ) F r,m ( q k , q k , . . . , q k m )+ O ( e − ( e µ N + β r λ ∗ + δ N ) t ) . (4.48) nfinite Series Asymptotic Expansions for Decaying Solutions of Dissipative Differential Equations 17 Again, in the case N = 1, the last double summation has only one term corresponding to m = 0, which is F r ( ξ ∗ ).Using property (4.14), we have e µ N + β r λ ∗ + δ N = e µ N + α r λ ∗ + λ ∗ + δ N ≥ e µ N +1 + λ ∗ + δ N = µ N +1 + δ N . Hence, the last term in (4.48) can be estimated as O ( e − ( e µ N + β r λ ∗ + δ N ) t ) = O ( e − ( µ N +1 + δ N ) t ) . (4.49)Therefore, by formula of E ( t ) in (4.37), and (4.45), (4.46), (4.48), (4.49), we have E ( t ) = e µ N +1 t (cid:16) J + O ( e − ( µ N +1 + δ N ) t ) + O ( e − ( µ N +1 + δ ) t ) (cid:17) = e µ N +1 t J + O ( e − min { δ ,δ N } t ) , (4.50)where J = r ∗ X r =1 s ∗ X m =0 N X k ,...,k m ≥ e − t ( P mj =1 e µ kj + β r λ ∗ ) F r,m ( q k , q k , . . . , q k m ) . (4.51)Denote µ = e µ k + . . . + e µ k m + α r λ ∗ . When m = 0, one has µ = α r λ ∗ , which belongs to e S . When m ≥
1, by property (4.13), µ also belongs to e S . Clearly, µ > e µ . Thus, inboth cases of m , the number µ must equal e µ p for a unique p ≥
2. Because of the indices r, m, k , . . . , k m being finitely many, there are only finitely many such numbers p ’s. Thus,there is p ∗ ∈ N such that any index p above satisfies p ≤ p ∗ . Hence, the exponent in (4.51)is m X j =1 e µ k j + β r λ ∗ = µ + λ ∗ = e µ p + λ ∗ = µ p for some integer p ∈ [2 , p ∗ ] . (4.52)Using index p in (4.52), we can split the sum in J into two parts corresponding to p ≤ N +1and p ≥ N + 2. We then write J = S + S , where S = N +1 X p =2 r ∗ X r =1 s ∗ X m =0 X ≤ k ,...,k m ≤ N, P mj =1 e µ kj + β r λ ∗ = µ p e − µ p t F r,m ( q k , q k , . . . , q k m ) ,S = p ∗ X p = N +2 r ∗ X r =1 s ∗ X m =0 X ≤ k ,...,k m ≤ N, P mj =1 e µ kj + β r λ ∗ = µ p e − µ p t F r,m ( q k , q k , . . . , q k m ) . We re-write S = P N +1 k =2 e − µ k t J k , where J k = r ∗ X r =1 s ∗ X m =0 X ≤ k ,...,k m ≤ N, P mj =1 e µ kj + β r λ ∗ = µ k F r,m ( q k , q k , . . . , q k m ) for k = 1 , , . . . , N + 1. (4.53)We estimate S . Set δ ′ N = min { e δ N , δ , δ N , ( µ N +2 − µ N +1 ) / } >
0. Using inequality (4.18)to estimate |F r,m ( q k , q k , . . . , q k m ) | , and recalling that q k j ’s are polynomials in t , we have |F r,m ( q k , q k , . . . , q k m ) | ≤ kF r,m k · | q k | · | q k | . . . | q k m | = O ( e δ ′ N t ) . For e − µ p t , we use µ p ≥ µ N +2 , and obtain S = O ( e − µ N +2 t e δ ′ N t ) = O ( e − ( µ N +1 + δ ′ N ) t ) . (4.54) Combining the above calculations from (4.50) to (4.54) gives E ( t ) = e µ N +1 t N +1 X k =2 e − µ k t J k + O ( e − δ ′ N t ) . (4.55)Thus, by (4.34), (4.38) and (4.55), w ′ N + ( A − µ N +1 I d ) w N = (cid:16) N +1 X k =2 e − µ k t J k − N X k =1 ( Ay k + y ′ k ) (cid:17) e µ N +1 t + O ( e − δ ′ N t ) . Using the fact Ay k + y ′ k = e − µ k t ( q ′ k + ( A − µ k I d ) q k ), for k = 1 , , . . . , N , we deduce w ′ N + ( A − µ N +1 I d ) w N = − e µ N +1 t N X k =1 e − µ k t χ k + J N +1 + O ( e − δ ′ N t ) , (4.56)where χ = q ′ + ( A − µ I d ) q , χ k = q ′ k + ( A − µ k I d ) q k − J k for 2 ≤ k ≤ N. We already know χ = 0. Let us focus on the sum P Nk =1 e − µ k t χ k on the right-hand side of(4.56). In case N = 1, this sum is already zero.Consider N ≥
2. Note that condition P mj =1 e µ k j + β r λ ∗ = µ k in formula (4.53) of J k isequivalent to P mj =1 e µ k j + α r λ ∗ = e µ k . Then, for each k = 1 , , . . . , N + 1, by the virtue ofrelation (4.29) for n = k ≤ N + 1, r ∗ = r ∗ , m ∗ = s ∗ and k ∗ = N , one has J k = J k for k = 1 , , . . . , N + 1 . (4.57)Above, condition (4.28) is met thanks to the condition for α r ∗ in (4.35), the second conditionfor s ∗ in (4.43), and the fact N ≥ k − χ k = 0 for 2 ≤ k ≤ N . Hence, (4.56)becomes w ′ N + ( A − µ N +1 I d ) w N = J N +1 + O ( e − δ ′ N t ) . (4.58)Note that µ N +1 > µ ≥ λ . Let λ i is an eigenvalue of A with λ i < µ N +1 . If λ i ≤ λ n = µ then λ i ≤ µ N . If λ i > λ n , then, according to property (4.11), λ i ∈ S , hence, by theconstraint λ i < µ N +1 , we have λ i ≤ µ N . Therefore, in both cases e ( λ i − µ N +1 ) t | w N ( t ) | = e λ i t | u N ( t ) | = e λ i t O ( e − ( µ N + δ N ) t ) = O ( e − δ N t ) . That is, condition (2.12) is satisfied.Applying Lemma 2.4 to the equation (4.58), there exists polynomial q N +1 : R → R d anda number δ N +1 > | w N ( t ) − q N +1 ( t ) | = O ( e − δ N +1 t ) . (4.59)Moreover q N +1 ( t ) solves q ′ N +1 + ( A − µ N +1 I d ) q N +1 = J N +1 = J N +1 , that is, equation (4.20) holds for n = N + 1.Multiplying (4.59) by e − µ N +1 t gives (cid:12)(cid:12)(cid:12) y ( t ) − N +1 X n =1 q n ( t ) e − µ n t (cid:12)(cid:12)(cid:12) = O ( e − ( µ N +1 + δ N +1 ) t ) , which proves (4.32) for N := N + 1. nfinite Series Asymptotic Expansions for Decaying Solutions of Dissipative Differential Equations 19 Hence the statement ( T N +1 ) holds true. Conclusion for Part A.
By the induction principle, the statement ( T N ) holds true forall N ∈ N . Note also that, the polynomials ( T N +1 ) are exactly the ones from ( T N ). Hence,the polynomials q n ’s exist for all n ∈ N , for which ( T N ) holds true for all N ∈ N . Therefore,we obtain the desired expansion (4.19).Part B: Proof for the case of (4.4). We follow the proof in Part A with the following adjust-ments. The number r ∗ is simply N ∗ , and condition (4.35) for r ∗ is not required anymore. Allthe sum P r ≥ appearing in the proof that involves F r or F r,m will be replaced with P ≤ r ≤ N ∗ .From (4.36) to the end of the proof in Part A, positive number ε r ∗ is arbitrary, and number β r ∗ in calculations from (4.36) to (4.38) is replaced with any number β ∗ ≥ µ N +1 /µ . Then(4.36) still holds true thanks to (4.2). We also take into account that J n is given by (4.21),and one has relation (4.30) under condition (4.31). With these changes, the above proof inPart A goes through, and we obtain the desired statement for this case (4.4).The proof of Theorem 4.3 is now complete. (cid:3) Remark 4.4.
Assume we have (4.4), then by adding more functions F k = 0 and numbers β k ’s, for k > N ∗ , such that β k increases strictly to infinity, one can convert (4.4) into (4.3).(For example, one can take β k = β N ∗ + k for k > N ∗ .) However, we did not use this fact inPart B of the proof of Theorem 4.3 above. The reason is to have simpler constructions of e S and q n ’s in (4.9) and (4.21) for the case (4.4), as opposed to (4.8) and (4.20) if it is convertedto (4.3). 5. Extended results
In this section, we extend Theorem 4.3 to situations that require less of the function $F$.

First, we consider the case when the function $F$ in (1.4) only has a finite-sum approximation. We will find a finite-sum asymptotic approximation for decaying solutions of (1.4). Assume the function $F$ satisfies (i) and (H2) of Assumption 4.1 with (4.2) being replaced with
$$\Big| F(x)-\sum_{k=1}^{N_*} F_k(x) \Big| = O(|x|^{\beta_{N_*}+\bar\varepsilon}) \text{ as } x\to 0, \text{ for some number } \bar\varepsilon>0. \quad (5.1)$$
Note that (5.1) is different from (4.2) due to the restriction of $\bar\varepsilon$. Also, we usually think of $\bar\varepsilon$ as a small number, but, in (5.1), it can be large. This happens when the remainder $F(x)-\sum_{k=1}^{N_*}F_k(x)$ has a very precise approximation, i.e., large $\bar\varepsilon$, but does not have a homogeneous structure that we can take advantage of.

From (5.1), one can see that estimate (4.5) still holds for all $N\in\mathbb N\cap[1,N_*]$, where $\delta_N$ is any number in $(0,\beta_{N+1}-\beta_N)$ when $N<N_*$, and is $\bar\varepsilon$ when $N=N_*$. Consequently, (4.6) is still valid, and the facts and results in Section 3 apply.

Let $y(t)$ be a non-trivial, decaying solution of (1.4). Applying Theorem 3.3, we have the first approximation (3.14). For more precise approximations, define the sets $\widetilde S$ and $S$ by (4.9) and (4.10), respectively. Let $\bar N\in\mathbb N$ be defined by
$$\bar N=\max\{N\in\mathbb N : \lambda_*(\beta_{N_*}+\bar\varepsilon)>\mu_N\}. \quad (5.2)$$
From the definition of $\widetilde S$, we see that $\alpha_{N_*}\lambda_*\in\widetilde S$. Therefore, there exists a unique number $N'\in\mathbb N$ such that $\alpha_{N_*}\lambda_*=\widetilde\mu_{N'}$, which is equivalent to $\mu_{N'}=\beta_{N_*}\lambda_*$. The last expression gives $\mu_{N'}>\lambda_*=\mu_1$, thus, one must have $N'\ge$
2. Note that $N'$ belongs to the set on the right-hand side of (5.2); hence $\bar N\ge N'\ge 2$.

Theorem 5.1.
There exist $\mathbb R^d$-valued polynomials $q_n(t)$'s, for $1\le n\le\bar N$, and a number $\delta>0$ such that
$$\Big| y(t)-\sum_{n=1}^{\bar N} q_n(t)e^{-\mu_n t} \Big| = O(e^{-(\mu_{\bar N}+\delta)t}), \quad (5.3)$$
where each polynomial $q_n(t)$, for $1\le n\le\bar N$, satisfies the equation
$$q_n'+(A-\mu_n I_d)q_n=\sum_{r=1}^{N_*}\ \sum_{\substack{m\ge 0,\ k_1,k_2,\dots,k_m\ge 2,\\ \sum_{j=1}^m\widetilde\mu_{k_j}+\alpha_r\lambda_*=\widetilde\mu_n}} F_{r,m}(q_{k_1},q_{k_2},\dots,q_{k_m}) \quad\text{in } \mathbb R. \quad (5.4)$$

Proof.
We follow Part A of the proof of Theorem 4.3, with some changes similar to those inPart B.First, we take r ∗ = N ∗ , 1 ≤ r ≤ N ∗ and replace ε r ∗ with number ¯ ε in (5.1).Second, we replace condition (4.35) with λ ∗ ( β r ∗ + ¯ ε ) > µ ¯ N , which is satisfied by definitionof ¯ N in (5.2).Third, for 1 ≤ N ≤ ¯ N −
1, the calculations (4.36)–(4.38) are still valid with number e δ N in (4.38) being changed to e δ N = λ ∗ ( β r ∗ + ¯ ε ) − µ N +1 . Note that e δ N ≥ λ ∗ ( β r ∗ + ¯ ε ) − µ ¯ N > N for 1 ≤ N ≤ ¯ N and obtain ( T ¯ N ), which, by (4.32), yields(5.3). Here, each polynomial q n ( t ), for 1 ≤ n ≤ ¯ N , satisfies equation (4.20) with J n beinggiven by (4.21) particularly; that is, we obtain equation (5.4). (cid:3) Next, we relax the regularity requirements for F and F k ’s.Regarding F , its local Lipschitz property is imposed to guarantee the existence and unique-ness of solutions at least starting with small initial data. However, in some problems, F isnot that regular, but a small solution y ( t ), for t ∈ [0 , ∞ ), already exists and is given. Thenour results obtained above apply to this solution y ( t ).Regarding F k ’s, what we need in the proofs of Theorems 4.3 and 5.1 is that each F k , inaddition to being positively homogeneous, has the Taylor series approximation of all ordersabout ξ ∗ , where ξ ∗ is from Theorem 3.3. Because ξ ∗ depends on y ( t ) and varies in R d ,function F k is required in Assumption 4.1 to be smooth on the entire set R d . However, inmany cases, F k is only known to be smooth on an open set V strictly smaller than R d . Thenone needs ξ ∗ to belong to V as well. This is possible when more information about ξ ∗ , as aneigenvector of matrix A , is provided.These two points will be reflected in Theorem 5.3 below. Definition 5.2.
For an open set V in R d , denote by X ( V ) , respectively X ( V ) , the set oflocally Lipschitz continuous, respectively continuous, functions on R d , with approximation (4.3) or (4.4) where F k ∈ H β k ( R d ) ∩ C ∞ ( V ) for all respective k ’s. The sets b X ( V ) and b X ( V ) are defined similarly with (5.1) replacing (4.3) and (4.4) . In particular, denote X = X ( R d ) and X = X ( R d ) . nfinite Series Asymptotic Expansions for Decaying Solutions of Dissipative Differential Equations 21 Note that X is the set of functions that satisfy Assumption 4.1.An extension of the results in Theorems 4.3 and 5.1 is the following general theorem. Theorem 5.3.
Suppose that all eigenvectors of matrix A belong to an open set V in R d . (i) Then Theorem 4.3 applies to any function F ∈ X ( V ) , and Theorem 5.1 applies toany function F ∈ b X ( V ) , for any non-trivial, decaying solution y ( t ) of (1.4) . (ii) If F ∈ X ( V ) , respectively F ∈ b X ( V ) , then Theorem 4.3, respectively Theorem 5.1,still holds true for a solution y ( t ) ∈ C ([0 , ∞ )) of (1.4) that satisfies y ( t ) → as t → ∞ , and there is a divergent, strictly increasing sequence ( t n ) ∞ n =1 in (0 , ∞ ) suchthat y ( t n ) = 0 for all n ∈ N .Proof. (i) In the proofs of Theorems 4.3 and 5.1, the eigenvector ξ ∗ belongs to V , and,thanks to the condition F k ∈ C ∞ ( V ), we can still use the Taylor expansions of F k ’s about ξ ∗ . Therefore, both proofs are unchanged and produce respective conclusions.(ii) We re-examine Proposition 3.2. Select T ε = t n for sufficiently large n such that (3.9)still holds. Then we still obtain upper bound (3.5). With y ( T ε ) = y ( t n ) = 0, the estimate(3.13) holds for some C ′ >
0. Thus, the inequality in (3.6) holds for all $t\ge T_\varepsilon$. With such a lower bound of $|y(t)|$, we can still prove Theorem 3.3. After that, the argument in (i) continues to be valid. □

The sets defined in Definition 5.2 and used in Theorem 5.3 will be explored further in Section 6 below. Here, we state a first basic property.
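The first asymptotic approximation from Theorem 3.3, which underlies all of the results above, is easy to observe numerically. The following minimal Python sketch is ours, not part of the paper; the matrix, nonlinearity, time horizon and tolerances are all illustrative choices. It integrates a two-dimensional instance of (1.4) with the non-smooth nonlinearity $F(y)=|y|^{1/2}y$ (the type of Example 6.2 with $\alpha=1/2$) and checks that $e^{\lambda_1 t}y(t)$ approaches an eigenvector of $A$ associated with the smallest eigenvalue $\lambda_1$.

```python
import math

# Illustrative sketch (ours, not from the paper): a 2-D instance of
# y' + A y + F(y) = 0 with the non-smooth nonlinearity
# F(y) = |y|^(1/2) y.  A has eigenvalues 1 and 3; the eigenspace of
# lambda_1 = 1 is spanned by (1, -1).  Theorem 3.3 predicts that
# e^{lambda_1 t} y(t) converges to a nonzero eigenvector xi_* of A.
A = [[2.0, 1.0], [1.0, 2.0]]

def rhs(y):
    # right-hand side of y' = -A y - |y|^(1/2) y
    f = (y[0] ** 2 + y[1] ** 2) ** 0.25  # |y|^(1/2)
    return [-(A[0][0] * y[0] + A[0][1] * y[1]) - f * y[0],
            -(A[1][0] * y[0] + A[1][1] * y[1]) - f * y[1]]

def rk4(y, t_end, dt):
    # classical 4th-order Runge-Kutta integration of y' = rhs(y)
    t = 0.0
    while t < t_end - 1e-12:
        k1 = rhs(y)
        k2 = rhs([y[i] + 0.5 * dt * k1[i] for i in range(2)])
        k3 = rhs([y[i] + 0.5 * dt * k2[i] for i in range(2)])
        k4 = rhs([y[i] + dt * k3[i] for i in range(2)])
        y = [y[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
             for i in range(2)]
        t += dt
    return y

T = 15.0
y_T = rk4([1.0, 0.0], T, 0.002)
v = [math.exp(T) * c for c in y_T]  # rescaled solution, close to xi_*
```

In a run of this sketch, `v` stays essentially parallel to $(1,-1)$ while $|y(T)|$ is already of order $e^{-T}$; the higher-order terms of the expansion in Theorem 4.3 would refine this leading behavior at the rates $\mu_2,\mu_3,\dots$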
Proposition 5.4.
For any open set $V$ in $\mathbb R^d$, the four sets defined in Definition 5.2 are linear spaces.

Proof. We give a proof for $X(V)$; the other sets can be treated similarly. Thanks to Remark 4.4, it suffices to prove that the sum of any two functions of the form (4.3) is also of the form (4.3). Suppose $F(x)$ is the same as in (4.3), and
$$G(x)\sim \sum_{k=1}^\infty G_k(x), \quad (5.5)$$
where each $G_k$ is similar to $F_k$, but with degrees $\beta_k'$ in place of $\beta_k$. Arrange the set $\{\beta_k,\beta_j' : k,j\in\mathbb N\}$ as a strictly increasing sequence $(\bar\beta_k)_{k=1}^\infty$. Clearly, $\bar\beta_k\to\infty$ as $k\to\infty$, and $(\beta_k)_{k=1}^\infty$ and $(\beta_k')_{k=1}^\infty$ are subsequences of $(\bar\beta_k)_{k=1}^\infty$. By inserting the zero function into (4.3) and (5.5) when needed, one can rewrite (and verify) $F$ and $G$ as
$$F(x)\sim\sum_{k=1}^\infty \widetilde F_k(x)\quad\text{and}\quad G(x)\sim\sum_{k=1}^\infty \widetilde G_k(x),$$
where $\widetilde F_k(x)$ and $\widetilde G_k(x)$ are in $C^\infty(V)$, positively homogeneous of the same degree $\bar\beta_k$. Then $F+G$ is, obviously, of the form (4.3) with $\widetilde F_k+\widetilde G_k$ replacing $F_k$, and $\bar\beta_k$ replacing $\beta_k$. □

6. Specific cases and examples
We specify many cases for the function $F$ in Theorem 5.3, i.e., describe classes of functions in the spaces defined in Definition 5.2. For $n\in\mathbb N$, $p\in[1,\infty)$ and $x=(x_1,x_2,\dots,x_n)\in\mathbb R^n$, the $\ell^p$-norm of $x$ is
$$\|x\|_p=\Big(\sum_{j=1}^n |x_j|^p\Big)^{1/p}.$$
We recall that all these norms $\|\cdot\|_p$ on $\mathbb R^n$ are equivalent to each other. For any $n\in\mathbb N$, $p\ge 1$ and $\alpha>0$, one has the following.

(a) The function $x\in\mathbb R^n\mapsto \|x\|_p^\alpha$ belongs to $C(\mathbb R^n)\cap C^\infty(\mathbb R^n_*)\cap H_\alpha(\mathbb R^n)$.

(b) Assume, additionally, that $p$ is an even number. Then the function $x\in\mathbb R^n\mapsto \|x\|_p^\alpha$ belongs to $C^\infty(\mathbb R^n)$.

The first class of functions in $X$ we describe is in the next theorem, which involves the $\ell^p$-norms of $x$ and polynomials on $\mathbb R^d$.

Theorem 6.1.
Let δ > and m ∈ N . Suppose G : ( − δ, ∞ ) m → R be a C ∞ -function with G (0) = 0 , and G : R d → R d is a homogeneous polynomial of degree m ∈ Z + . Define afunction F : R d R d by F ( x ) = G ( k x k s p , k x k s p , . . . , k x k s m p m ) G ( x ) for x ∈ R d , (6.1) where p j ∈ [1 , ∞ ) and s j ∈ (0 , ∞ ) for j = 1 , , . . . , m , are given real numbers.Let ¯ s = min { s j : j = 1 , , . . . , m } . Assume ¯ s + m > . Then the following statementshold true. (i) F (0) = 0 and F ∈ C ( R d ) ∩ C ∞ ( R d ∗ ) . (ii) F ∈ X ( R d ∗ ) . (iii) If p , . . . , p m > , then F ∈ C ( R d ) , and, consequently, F is locally Lipschitz in R d . (iv) If p , p , . . . , p m are even numbers, then F ∈ X .Proof. In part (i), the property F (0) = 0 follows the fact G (0) = 0. The proof of theremaining statement in (i) is elementary, using the chain rule for derivatives and property(a) right before this theorem.We prove (ii). By using the Taylor expansion of G ( z ), for z ∈ ( − δ, ∞ ) m , about the originof R m , we can approximate G ( k x k s p , k x k s p , . . . , k x k s m p m ), for k ∈ N , by X γ =( γ ,γ ,...,γ m ) ∈ Z m + , | γ |≤ k c γ k x k s γ p k x k s γ p . . . k x k s m γ m p m , where each γ is a multi-index with length | γ | = γ + γ + . . . + γ m , and c γ = | γ | ! γ ! γ ! . . . γ m ! · ∂ | γ | G (0) ∂x γ ∂x γ . . . ∂x γ m m , (6.2)with the remainder being O (( k x k s p + k x k s p + . . . + k x k s m p m ) k +1 ) = O ( | x | ¯ s ( k +1) ) as x → n m + m X j =1 s j γ j : γ j ∈ Z + , ( γ , γ , . . . , γ m ) = 0 o as a strictly increasing sequence ( β k ) ∞ k =1 . Note that β k → ∞ as k → ∞ , and, because of theassumption ¯ s + m >
1, we have β k > k ∈ N . nfinite Series Asymptotic Expansions for Decaying Solutions of Dissipative Differential Equations 23 Then we can re-write F ( x ) in the form of (4.3), where F k ( x ) = X γ =( γ ,γ ,...,γ m ) ∈ Z m + ,m + P mj =1 s j γ j = β k c γ k x k s γ p k x k s γ p . . . k x k s m γ m p m G ( x ) . (6.3)By property (a) right before this theorem and property (d) after Definition 2.2, F k ∈H β k ( R d ) ∩ C ∞ ( R d ∗ ). By this and the facts F (0) = 0, F ∈ C ( R d ) in (i), we conclude F ∈X ( R d ∗ ).We prove (iii). Because G is a homogeneous polynomial of degree m , there is C > G ( x ) and its derivative matrix DG ( x ) can be estimated, for any x ∈ R d , by | G ( x ) | ≤ C | x | m and | DG ( x ) | ( ≤ C | x | m − if m ≥ m = 0. (6.4)By using the linear approximation of G ( z ) for z near 0 in R m , we have G ( z ) = O ( | z | ) = O ( | z | + . . . + | z m | ) , as z = ( z , . . . , z m ) → . Applying this property to z = ( k x k s p , k x k s p , . . . , k x k s m p m ), we have G ( k x k s p , k x k s p , . . . , k x k s m p m ) = O ( k x k s p + k x k s p + . . . + k x k s m p m ) = O ( | x | ¯ s ) as x → F ( x ) = O ( | x | ¯ s + m ) as x →
0. (6.5)Since ¯ s + m > F (0) = 0, it follows (6.5) that DF (0) = 0 . (6.6)For 1 ≤ i ≤ m and 1 ≤ j ≤ d , one has the partial derivative, thanks to p i > x = ( x , . . . , x d ) ∈ R d ∂ ( | x j | p i ) ∂x j = p i | x j | p i − sign( x j ) , which is a continuous function on R d .For x ∈ R d \ { } and j = 1 , , . . . , d , we have ∂F ( x ) ∂x j = m X i =1 ∂G ( z ) ∂z i (cid:12)(cid:12)(cid:12) z =( k x k s p , k x k s p ,..., k x k smpm ) s i k x k s i − p i p i | x j | p i − sign( x j ) G ( x )+ G ( k x k s p , k x k s p , . . . , k x k s m p m ) ∂G ( x ) ∂x j . (6.7)Clearly, ∂F ( x ) /∂x j is continuous on R d \ { } . Consider its continuity at the origin.For the first summation on the right-hand side of (6.7), ∂G ( z ) ∂z i (cid:12)(cid:12)(cid:12) z =( k x k s p , k x k s p ,..., k x k smpm ) = O (1) as x → , (6.8)and, thanks to the first estimate in (6.4), k x k s i − p i p i | x j | p i − | sign( x j ) G ( x ) | ≤ O ( | x | s i − | x | m ) = O ( | x | ¯ s + m − ) as x → By the second estimate in (6.4), the last term in (6.7), it is zero when m = 0, and can beestimated, when m ≥
1, by (cid:12)(cid:12)(cid:12) G ( k x k s p , k x k s p , . . . , k x k s m p m ) ∂G ( x ) ∂x j (cid:12)(cid:12)(cid:12) ≤ O ( | x | ¯ s ) C | x | m − = O ( | x | ¯ s + m − ) as x →
0. (6.9)The above estimates from (6.8) to (6.9) for the right-hand side of (6.7) yieldlim x → ∂F ( x ) ∂x j = 0 . Together with (6.6), this limit implies that ∂F ( x ) /∂x j is continuous at the origin for j =1 , , . . . , d . Therefore, F ∈ C ( R d ), and, consequently, F is locally Lipschitz in R d .Finally, we prove (iv). In case all p j ’s are even numbers, then, by property (b) right beforeTheorem 6.1, all F k ’s in (6.3) belong to C ∞ ( R d ). Combining this fact with (ii) and (iii)above, we have F ∈ X . (cid:3) Example 6.2.
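To make the construction in the proof of Theorem 6.1 concrete, here is a small Python sketch (ours, not from the paper) for the illustrative choice $G_1(z)=\sin z$, $G_2(x)=x$, $p_1=2$, $s_1=3/2$, so that $F(x)=\sin(|x|^{3/2})\,x$ on $\mathbb R^2$. Taylor-expanding $\sin$ about the origin, as in the proof, yields the homogeneous components $F_1(x)=|x|^{3/2}x$ of degree $\beta_1=5/2$ and $F_2(x)=-\tfrac16|x|^{9/2}x$ of degree $\beta_2=11/2$; the sketch checks the positive homogeneity of $F_1$ and the remainder order $O(|x|^{11/2})$ numerically.

```python
import math

# Illustration of the homogeneous components in the proof of Theorem 6.1
# for F(x) = sin(|x|^{3/2}) x on R^2 (our concrete choice):
#   F1(x) = |x|^{3/2} x          (degree beta_1 = 5/2),
#   F2(x) = -(1/6) |x|^{9/2} x   (degree beta_2 = 11/2), ...

def norm2(x):
    return math.sqrt(sum(c * c for c in x))

def F(x):
    r = norm2(x)
    return [math.sin(r ** 1.5) * c for c in x]

def F1(x):
    # first homogeneous component, positively homogeneous of degree 5/2
    r = norm2(x)
    return [r ** 1.5 * c for c in x]

u = [0.6, -0.8]  # a unit vector in R^2

# positive homogeneity: F1(t u) = t^{5/2} F1(u) for t > 0
t = 0.37
lhs = F1([t * c for c in u])
rhs_ = [t ** 2.5 * c for c in F1(u)]
hom_err = max(abs(a - b) for a, b in zip(lhs, rhs_))

# remainder order: |F(x) - F1(x)| ~ (1/6) |x|^{11/2} as x -> 0
ratios = []
for s in (1e-1, 1e-2, 1e-3):
    x = [s * c for c in u]
    diff = [a - b for a, b in zip(F(x), F1(x))]
    ratios.append(norm2(diff) / s ** 5.5)
```

The computed ratios approach $1/6$, the coefficient of the next component $F_2$, confirming that the error of the one-term approximation is $O(|x|^{\beta_2})$ as in (4.5).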
Let $\alpha$ be any number in $(0,\infty)$ that is not an even integer, and
$$F(x)=|x|^\alpha x \quad\text{for } x\in\mathbb R^d. \quad (6.10)$$
Applying Theorem 6.1(iv) to $m=1$, $G_1(z)=z$ for $z\in\mathbb R$, $G_2(x)=x$, $p_1=2$ and $s_1=\alpha$, we have $F\in X$. Even in this simple case, the asymptotic expansion obtained in Theorem 4.3 is new.

Example 6.3.
Given a constant $d\times d$ matrix $M$, even numbers $p_1,p_2\ge 2$, and real numbers $\alpha,\beta>0$, let
$$F(x)=\frac{\|x\|_{p_1}^\alpha M x}{1+\|x\|_{p_2}^\beta} \quad\text{for } x\in\mathbb R^d. \quad (6.11)$$
Applying Theorem 6.1(iv) to the functions $G_1(z_1,z_2)=z_1/(1+z_2)$, $G_2(x)=Mx$ and the numbers $s_1=\alpha$, $s_2=\beta$, one has $F\in X$. The explicit form of (4.3) can be obtained quickly as follows. For $x\in\mathbb R^d$ with $\|x\|_{p_2}<1$, we expand $1/(1+\|x\|_{p_2}^\beta)$ using the geometric series, and can verify that
$$F(x)\sim \sum_{k=1}^\infty (-1)^{k-1}\|x\|_{p_1}^\alpha \|x\|_{p_2}^{(k-1)\beta} M x, \quad (6.12)$$
in the sense of (H1) in Assumption 4.1. This yields (4.3) with $\beta_k=1+\alpha+(k-1)\beta$. When $\|\cdot\|_{p_1}=\|\cdot\|_{p_2}=|\cdot|$, the function $F$ in (6.11) covers the particular case discussed in (1.5), and expansion (6.12) simply reads as
$$F(x)\sim \sum_{k=1}^\infty (-1)^{k-1} |x|^{\alpha+(k-1)\beta} M x.$$

Example 6.4.
(a) For $k\in\mathbb N$, let $M_k$ be a constant $d\times d$ matrix, $p_k\ge 2$ an even number, and $\alpha_k>0$ such that the sequence $(\alpha_k)_{k=1}^\infty$ strictly increases to infinity. Then the function $x\in\mathbb R^d\mapsto \|x\|_{p_k}^{\alpha_k} M_k x$ can play the role of $F_k$ in (4.3) or (5.1). In this case, we write, respectively,
$$F(x)\sim \sum_{k=1}^\infty \|x\|_{p_k}^{\alpha_k} M_k x, \quad\text{or}\quad \Big| F(x)-\sum_{k=1}^{N_*} \|x\|_{p_k}^{\alpha_k} M_k x \Big| = O(|x|^{\alpha_{N_*}+1+\bar\varepsilon}) \text{ as } x\to 0. \quad (6.13)$$
In particular, thanks to Theorem 6.1(iv), the function F ( x ) = N ∗ X k =1 k x k α k p k M k x, for x ∈ R d , belongs to X .(b) We can replace M k x in (6.13) with an R d -valued homogeneous polynomial in x ofdegree m k ∈ Z + . Of course, the set { α k + m k : k ∈ N } is required to be in (1 , ∞ ) and canbe re-arranged as a sequence that strictly increases to infinity.In Examples 6.2, 6.3 and 6.4 above, we can also consider more complicated variations. Forexample, in (6.10), (6.11) and (6.13), we can replace | x | or k x k p k with k S k x k p k , where S k ’sare invertible d × d matrices.Note that a positively homogeneous function of the form (2.6), in general, does not belongto C ∞ ( R d ). Hence, it cannot play a role of an F k in (4.3) or (5.1). However, in some cases,see (6.14) and (6.15) below, it can. Theorem 6.5.
Consider function F ( x ) given by (2.6) with X = Y = R d , s ≥ and ( Y j , k · k Y j ) = ( R n j , k · k p j ) for j = 1 , . . . , s . Suppose, for j = 1 , . . . , s ,the number p j is even, and (6.14) the only solution of equation P j ( x ) = 0 is x = 0 . (6.15)(i) One has F ∈ H β ( R d ) ∩ C ( R d ) ∩ C ∞ ( R d ) , where number β is defined in (2.7) . (ii) If β > , then F ∈ X . (iii) Let ¯ ν = min { ν j : j = 1 , . . . , s } and assume m + ¯ ν > . Then F ∈ C ( R d ) .Consequently, F ∈ X .Proof. For part (i), the fact F ∈ H β ( R d ) is due to (2.7), while the other fact F ∈ C ( R d ) ∩ C ∞ ( R d ) is clear. Part (ii) comes from part (i).We prove part (iii) now. Same as (6.4), there is C > j = 0 , , . . . , s , andany x ∈ R d , | P j ( x ) | ≤ C | x | m j and | DP j ( x ) | ( ≤ C | x | m j − if m j ≥ m j = 0, (6.16)Because s ≥ m j ≥ j ≥
1, we have β = m + P sj =1 m j ν j ≥ m + ¯ ν > F (0) = 0 and, by the first estimate in (6.16), F ( x ) = O ( | x | m + P mj =1 ν j m j ) = O ( | x | β ) as x → β >
1, we have the derivative matrix DF (0) = 0.For j = 1 , , . . . , s , write P j = ( P j, , P j, , . . . , P j,n j ).Let x = ( x , . . . , x d ) ∈ R d \ { } . Then, thanks to condition(6.15), P j ( x ) = 0 for j =1 , , . . . , s . For i = 1 , , . . . , d , we have the partial derivative ∂F ( x ) ∂x i = k P ( x ) k ν p k P ( x ) k ν p . . . k P s ( x ) k ν s p s ∂P ( x ) ∂x i + s X j =1 Y ≤ j ′ ≤ s,j ′ = j k P j ′ ( x ) k ν j ′ p j ′ ν j k P j ( x ) k ν j − p j p j n j X ℓ =1 ( P j,ℓ ( x )) p j − ∂P j,ℓ ( x ) ∂x i ! P ( x ) . (6.17) One can see that this partial derivative is continuous on R d \ { } . For the continuity of ∂F ( x ) /∂x j at the origin, we estimate the right-hand side of (6.17). On the one hand, k P ( x ) k ν p k P ( x ) k ν p . . . k P s ( x ) k ν s p s (cid:12)(cid:12)(cid:12) ∂P ( x ) ∂x j (cid:12)(cid:12)(cid:12) is zero if m = 0,or, in the case m ≥
1, it can be estimated, with the use of (6.16), by k P ( x ) k ν p k P ( x ) k ν p . . . k P s ( x ) k ν s p s (cid:12)(cid:12)(cid:12) ∂P ( x ) ∂x j (cid:12)(cid:12)(cid:12) ≤ C ′ | x | P sj =1 m j ν j | x | m − = C ′ | x | β − , for some generic constant C ′ >
0. Here, and also in calculations below, we use the equivalencebetween any norm k · k p j and | · | .On the other hand, for each j = 1 , . . . , s , and ℓ = 1 , . . . , n j , by using the estimates in(6.16) again, we have ν j Y ≤ j ′ ≤ s,j ′ = j k P j ′ ( x ) k ν j ′ p j ′ k P j ( x ) k ν j − p j p j | P j,ℓ ( x ) | p j − (cid:12)(cid:12)(cid:12)(cid:12) ∂P j,ℓ ( x ) ∂x i (cid:12)(cid:12)(cid:12)(cid:12) | P ( x ) |≤ C ′ Y ≤ j ′ ≤ s,j ′ = j | x | m j ′ ν j ′ k P j ( x ) k ν j − p j | x | m j − | x | m ≤ C ′ Y ≤ j ′ ≤ m j ,j ′ = j | x | m j ′ ν j ′ | x | m j ( ν j − | x | m j − | x | m = C ′ | x | m + P mj ′ =1 ν j ′ m j ′ − = C ′ | x | β − . Summing up the above estimates after (6.17) and passing x →
0, with β >
1, givelim x → ∂F ( x ) ∂x i = 0 = ∂F (0) ∂x i . The last relation comes from the fact DF (0) = 0 obtained earlier. Thus, ∂F/∂x i is con-tinuous on R d , for i = 1 , . . . , d . Because F ∈ C ( R d ) from part (i), we obtain F ∈ C ( R d ).Consequently, F is locally Lipschitz, and, by combining this with the facts in part (i), weconclude F ∈ X . (cid:3) In Theorem 6.5, we usually consider the case ν j /p j N for all j . Indeed, for an index j with ν j /p j ∈ N , the corresponding term k P j ( x ) k ν j p j is a polynomial, and we can combine itwith the polynomial P ( x ). Example 6.6.
Regarding condition (6.15), it can be met for many forms of $P_j$. For example, if $P_j(x)=(x^{\mathrm T} M_1 x)\, M_2 x$ for $x\in\mathbb R^d$, where $M_1$ is a positive definite $d\times d$ matrix and $M_2$ is an invertible $d\times d$ matrix, then $P_j$ satisfies (6.15).

Example 6.7.
Consider d = 2 and let F ( x , x ) = ( | x − x | p + | x + x | p ) α/p · ( | x x | p + | x − x | p ) β/p M ( x , x ) , nfinite Series Asymptotic Expansions for Decaying Solutions of Dissipative Differential Equations 27 where p , p ≥ M is a R -valued homogeneous polynomials of degree m ∈ Z + , and α, β >
0. Then F is of the form (2.6) with s = 2, n = n = 2, m = 3, ν = α , m = 2, ν = β , and P ( x ) = ( x − x , x + x ) , P ( x ) = ( x x , x − x ) . One can verify that P and P satisfy (6.15). If m +min { α, β } >
1, then, thanks to Theorem6.5(iii), F ∈ X .In the remainder of this section, we focus on functions constituted essentially by x γ i i , where x i ’s are coordinates of a vector x ∈ R d . We will consider more general forms of these powerfunctions, and also combine them with other positively homogeneous functions such as k x k γ i p i . Notation 6.8.
We will use the following notation for different types of power functions.

• Define ω, a subset of R^2, by ω = (Z_+ × {0}) ∪ ([0, ∞) × {−1, 1}).

• For x ∈ R and (γ, τ) ∈ ω, denote ⟨x⟩^γ_τ as follows:

⟨x⟩^0_0 = ⟨x⟩^0_1 = ⟨x⟩^0_{−1} = 1, for γ = 0, and (6.18)

⟨x⟩^γ_0 = x^γ, ⟨x⟩^γ_1 = |x|^γ, ⟨x⟩^γ_{−1} = |x|^γ sign(x), for γ > 0. (6.19)

• For γ = (γ_1, γ_2, ..., γ_n) ∈ R^n and τ = (τ_1, τ_2, ..., τ_n) ∈ R^n, denote

[γ, τ] = ((γ_1, τ_1), (γ_2, τ_2), ..., (γ_n, τ_n)) ∈ (R^2)^n.

• For a vector x = (x_1, x_2, ..., x_n) ∈ R^n, a multi-index γ = (γ_1, γ_2, ..., γ_n) ∈ [0, ∞)^n and τ = (τ_1, τ_2, ..., τ_n) ∈ {−1, 0, 1}^n with [γ, τ] ∈ ω^n, denote

⟨x⟩^γ_τ = ⟨x_1⟩^{γ_1}_{τ_1} · ⟨x_2⟩^{γ_2}_{τ_2} ··· ⟨x_n⟩^{γ_n}_{τ_n}. (6.20)

• For x ∈ R^n, p = (p_1, p_2, ..., p_n) ∈ [1, ∞)^n and γ = (γ_1, γ_2, ..., γ_n) ∈ [0, ∞)^n, denote

‖x‖^γ_p = ‖x‖^{γ_1}_{p_1} · ‖x‖^{γ_2}_{p_2} ··· ‖x‖^{γ_n}_{p_n}, with the convention ‖x‖^0_{p_i} = 1.

The last type of power in (6.19) can be used to rewrite terms like |x_i|^α x_i as ⟨x_i⟩^{α+1}_{−1}. Also, when some power γ_i in (6.20) is zero, then, thanks to (6.18), the corresponding term ⟨x_i⟩^{γ_i}_{τ_i} is 1 regardless of the value of x_i.

Let m ∈ N, p ∈ [1, ∞)^m, ν ∈ [0, ∞)^m, and γ, τ ∈ R^d with [γ, τ] ∈ ω^d, and let c ∈ R^d be a constant vector. Then the function

x ∈ R^d ↦ ‖x‖^ν_p ⟨x⟩^γ_τ c belongs to H_{|ν|+|γ|}(R^d) ∩ C(R^d) ∩ C^∞(R^d_*), (6.21)

where |ν| and |γ| denote the lengths of the multi-indices, see (6.2).

In the following presentation, the condition |ν| = 0 is used to indicate that the term ‖x‖^ν_p is not present in (6.21). In this case, the values of m and p are irrelevant. When, in general, the term ⟨x⟩^γ_τ is a homogeneous polynomial, or, in particular, |γ| = 0, the function in (6.21) reduces to the form (6.1), which was already dealt with in Theorem 6.1.

Theorem 6.9.
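A key point in the theorem below is the threshold 1 in its conditions (b) and (c): the map t ↦ |t|^γ is locally Lipschitz near 0 exactly when γ ≥ 1. The following numerical illustration of this dichotomy is ours, not part of the proof:

```python
def max_difference_quotient(gamma, kmax=15):
    """Largest value of |h|^gamma / |h| over h = 10^{-k}, k = 1..kmax.

    This quotient blows up as h -> 0 when gamma < 1 (no Lipschitz bound
    at the origin) and stays bounded when gamma >= 1.
    """
    return max((10.0 ** (-k)) ** gamma / 10.0 ** (-k) for k in range(1, kmax + 1))

# gamma = 1/2: quotient grows like h^{-1/2} as h -> 0
# gamma = 3/2: quotient decays like h^{1/2}, bounded on (0, 1)
```

This is why exponents below 1 (other than 0) are excluded in conditions (b) and (c).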
Assume that all eigenvectors of the matrix A belong to V = R^d_*.

(i) Suppose the function F : R^d → R^d and the number β ∈ (1, ∞) are such that F is a finite sum of functions of the form (6.21) with |ν| + |γ| = β. Then F(0) = 0 and

F ∈ H_β(R^d) ∩ C(R^d) ∩ C^∞(V).

Consequently, F belongs to X(V), and can also play the role of a function F_k in (4.3) or (5.1) with β_k = β.

(ii) Suppose F is a finite sum of functions of the form (6.21) with multi-indices ν = (ν_1, ..., ν_m) and γ = (γ_1, ..., γ_d) satisfying

(a) |ν| + |γ| > 1, and

(b) |ν| = 0 or (∀ i = 1, ..., m : ν_i ≥ 1), and

(c) ∀ j = 1, ..., d : γ_j = 0 or γ_j ≥ 1.

Then F ∈ X(V).

Proof. Part (i) clearly comes from property (6.21) and the fact that β > 1. For part (ii), it suffices to consider a single function F(x) = ‖x‖^ν_p ⟨x⟩^γ_τ c given as in (6.21) with p = (p_1, ..., p_m) and τ = (τ_1, ..., τ_d). By (6.21), F ∈ H_β(R^d) ∩ C^∞(V), with β = |ν| + |γ|, which is greater than 1 thanks to condition (a). Conditions (b) and (c) guarantee that the functions

x ∈ R^d ↦ ‖x‖^{ν_i}_{p_i}, for i = 1, ..., m, and x = (x_1, ..., x_d) ∈ R^d ↦ ⟨x_j⟩^{γ_j}_{τ_j}, for j = 1, ..., d,

are locally Lipschitz on R^d. Therefore the function F, being a product of these scalar functions with the constant vector c, is locally Lipschitz. Altogether, we have F ∈ X(V). □

Example 6.10.
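The system in this example has linear part A = [[2, 1], [1, 2]], read off from its left-hand sides. The eigenvalue data quoted in the example can be checked mechanically; a sketch of ours:

```python
import numpy as np

# Linear part of the system (coefficients of y1, y2 on the left-hand sides)
A = np.array([[2.0, 1.0], [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)
order = np.argsort(eigvals)               # eig does not guarantee ordering
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# expected: eigenvalue 1 with eigenvector parallel to (-1, 1),
#           eigenvalue 3 with eigenvector parallel to (1, 1)
```

Both eigenvectors are nonzero, hence lie in V = R^2_*, as the example asserts.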
Consider the following system of ODEs in R^2:

y_1′ + 2y_1 + y_2 = |y_1|^{3/2} |y_2|^{1/2} y_2,
y_2′ + y_1 + 2y_2 = ‖y‖^{1/2}_{3/2} y_1 |y_2|^{3/2} sign(y_2).

The corresponding matrix A has eigenvalues and bases of the corresponding eigenspaces as follows: λ_1 = 1, basis {(−1, 1)}, and λ_2 = 3, basis {(1, 1)}. Then any eigenvector of A belongs to V = R^2_*. The corresponding function F belongs to X(V), thanks to Theorem 6.9(i), and we can apply Theorem 5.3(ii).

Example 6.11.
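The system in the example below has linear part A = [[1, 0], [1, 2]], again read off from the left-hand sides. The following sketch of ours verifies its spectral data and that both eigenvectors have nonzero second coordinate, hence lie in V = R × R_*:

```python
import numpy as np

A = np.array([[1.0, 0.0], [1.0, 2.0]])    # linear part of the system

eigvals, eigvecs = np.linalg.eig(A)
order = np.argsort(eigvals)
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

def in_V(x):
    """Membership in V = R x R_*, i.e. second coordinate nonzero."""
    return x[1] != 0.0

# expected: eigenvalue 1 with eigenvector parallel to (1, -1),
#           eigenvalue 2 with eigenvector parallel to (0, 1)
```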
Consider the following system in R^2:

y_1′ + y_1 = −|y_2|^α y_1,
y_2′ + y_1 + 2y_2 = −y_1 y_2,

where α > 0. For the matrix A, its eigenvalues and bases of the corresponding eigenspaces are

A = ( 1  0
      1  2 ),   λ_1 = 1, basis {(1, −1)},   λ_2 = 2, basis {(0, 1)}.

In this case, F = f + g, where

f(x_1, x_2) = (−|x_2|^α x_1, 0) ∈ H_{α+1}(R^2) and g(x_1, x_2) = (0, −x_1 x_2) ∈ H_2(R^2). (6.22)

One finds that any eigenvector of A belongs to V = R × R_*, and

f, g ∈ C^∞(V). (6.23)

Hence, F ∈ X(V) and we can apply Theorem 5.3(ii).

In case α ≥ 1, we have F is locally Lipschitz on R^2. This fact, together with (6.22) and (6.23), implies that F ∈ X(V) and we can apply Theorem 5.3(i).

Example 6.12.
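The three-dimensional example below asserts that every eigenvector of A lies in the set V. With the eigenvectors ξ_1, ξ_2, ξ_3 taken as in our reading of the example (the specific components are illustrative; the verification pattern is the point), the claim can be spot-checked:

```python
# Eigenvector components as in our reading of the example (illustrative)
xi1, xi2, xi3 = (1.0, 1.0, 0.0), (0.0, 1.0, 1.0), (1.0, 0.0, -1.0)

def in_V(x):
    """V = {(x1, x2, x3) : x2 != 0 or x1 * x3 != 0}."""
    return x[1] != 0.0 or x[0] * x[2] != 0.0

def comb(c1, c2):
    """Eigenvector c1 * xi1 + c2 * xi2 for the double eigenvalue."""
    return tuple(c1 * a + c2 * b for a, b in zip(xi1, xi2))

# nontrivial coefficient pairs, including the delicate case c2 = -c1,
# where the second coordinate vanishes but x1 * x3 = -c1^2 does not
samples = [comb(c1, c2) for c1, c2 in
           [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, -1.0), (2.0, -1.0), (-3.0, 0.5)]]
samples += [tuple(c * a for a in xi3) for c in (1.0, -2.0)]
```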
There are many other situations, especially in dimensions higher than two. We give one more example. Let d = 3, and assume the 3 × 3 matrix A has the following eigenvalues and bases of the corresponding eigenspaces:

λ_1 = λ_2 = 1, basis {ξ_1 = (1, 1, 0), ξ_2 = (0, 1, 1)}, and λ_3 = 2, basis {ξ_3 = (1, 0, −1)}.

Let ξ be an eigenvector of A. Then ξ = c_1 ξ_1 + c_2 ξ_2 with c_1^2 + c_2^2 > 0, or ξ = c_3 ξ_3 with c_3 ≠ 0. One can verify that

ξ ∈ V = {(x_1, x_2, x_3) : x_2 ≠ 0 or x_1 x_3 ≠ 0} = (R × R_* × R) ∪ (R_* × R × R_*) = (R^2_* × R) ∩ (R × R^2_*).

Let F(x) = (x_1^2 + x_2^2)^{1/2} · (x_2^2 + x_3^2)^{1/2} P(x), where P is a polynomial vector field on R^3 of degree m ∈ N without the constant term, i.e., P(0) = 0. Then, thanks to Theorem 6.5(iii), F ∈ X(V) and we can apply Theorem 5.3(i).

Remark 6.13.
In case F is analytic, Lyapunov's First Method yields that a decaying solution y(t) of (1.4) equals a series Σ_{n=1}^∞ q_n(t) e^{−µ_n t} for sufficiently large t, where the q_n(t)'s are polynomials. See e.g. [1, Chapter I, §4], where the proof is based on the Poincaré–Dulac normal form. Bruno (Bryuno) investigates a much larger class of equations of differential sums, which are not necessarily of a dissipative type like ours. He develops the theory of power geometry and finds solutions that have certain forms of asymptotic expansions. Specific algorithms are developed to calculate those expansions. See [2–7] and the references therein. His equations can have complex values, and the nonlinearity is composed of power functions. His method and results are entirely different from ours. For example, he does not obtain the particular expansion (1.3). Also, we obtain the asymptotic expansions for any given non-trivial, decaying solution, and our nonlinearity, in the case of real-valued functions, can contain more general terms such as those in (2.6) and (6.21).
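For a concrete, if elementary, instance of such a series, consider the scalar equation y′ + y + y^2 = 0 (a toy example of ours, not taken from the references). Its decaying solutions are y(t) = K e^{−t}/(1 − K e^{−t}) = Σ_{n≥1} K^n e^{−nt}, so µ_n = n and the q_n = K^n are constant polynomials. The sketch below checks that truncating the series leaves an error of order e^{−(N+1)t}:

```python
import math

def exact(K, t):
    """Decaying solution y(t) = K e^{-t} / (1 - K e^{-t}) of y' + y + y^2 = 0."""
    u = K * math.exp(-t)
    return u / (1.0 - u)

def partial_sum(K, t, N):
    """First N terms of the expansion sum_{n>=1} K^n e^{-n t}."""
    return sum((K * math.exp(-t)) ** n for n in range(1, N + 1))

# the truncation error is a geometric tail of size O(e^{-(N+1) t}),
# mirroring the exponentially accurate approximations of the main theorems
```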
References

[1] Bibikov, Y. N. Local theory of nonlinear analytic ordinary differential equations, vol. 702 of Lecture Notes in Mathematics. Springer-Verlag, Berlin-New York, 1979.
[2] Bruno, A. D. Local methods in nonlinear differential equations. Springer Series in Soviet Mathematics. Springer-Verlag, Berlin, 1989.
[3] Bruno, A. D. Power geometry in algebraic and differential equations, vol. 57 of North-Holland Mathematical Library. North-Holland Publishing Co., Amsterdam, 2000.
[4] Bruno, A. D. On complicated expansions of solutions to ODEs. Comput. Math. Math. Phys. 58, 3 (2018), 328–347.
[5] Bryuno, A. D. Asymptotic behavior and expansions of solutions of an ordinary differential equation. Uspekhi Mat. Nauk 59, 3(357) (2004), 31–80.
[6] Bryuno, A. D. Power-logarithmic expansions of solutions of a system of ordinary differential equations. Dokl. Akad. Nauk 419, 3 (2008), 298–302.
[7] Bryuno, A. D. Power-exponential expansions of solutions of an ordinary differential equation. Dokl. Akad. Nauk 444, 2 (2012), 137–142.
[8] Cao, D., and Hoang, L. Asymptotic expansions in a general system of decaying functions for solutions of the Navier-Stokes equations. Annali di Matematica Pura ed Applicata 199, 3 (2020), 1023–1072.
[9] Cao, D., and Hoang, L. Asymptotic expansions with exponential, power, and logarithmic functions for non-autonomous nonlinear differential equations. Journal of Evolution Equations (2020), 1–45. In press, DOI: 10.1007/s00028-020-00622-w. Preprint https://arxiv.org/abs/1911.11077.
[10] Cao, D., and Hoang, L. Long-time asymptotic expansions for Navier-Stokes equations with power-decaying forces. Proceedings of the Royal Society of Edinburgh: Section A Mathematics 150, 2 (2020), 569–606.
[11] Coddington, E. A., and Levinson, N. Theory of ordinary differential equations. McGraw-Hill Book Company, Inc., New York-Toronto-London, 1955.
[12] Cohen, P. J., and Lees, M. Asymptotic decay of solutions of differential inequalities. Pacific J. Math. 11 (1961), 1235–1249.
[13] Foias, C., Hoang, L., and Nicolaenko, B. On the helicity in 3D-periodic Navier-Stokes equations. I. The non-statistical case. Proc. Lond. Math. Soc. (3) 94, 1 (2007), 53–90.
[14] Foias, C., Hoang, L., and Nicolaenko, B. On the helicity in 3D-periodic Navier-Stokes equations. II. The statistical case. Comm. Math. Phys. 290, 2 (2009), 679–717.
[15] Foias, C., Hoang, L., Olson, E., and Ziane, M. On the solutions to the normal form of the Navier-Stokes equations. Indiana Univ. Math. J. 55, 2 (2006), 631–686.
[16] Foias, C., Hoang, L., Olson, E., and Ziane, M. The normal form of the Navier-Stokes equations in suitable normed spaces. Ann. Inst. H. Poincaré Anal. Non Linéaire 26, 5 (2009), 1635–1673.
[17] Foias, C., Hoang, L., and Saut, J.-C. Asymptotic integration of Navier-Stokes equations with potential forces. II. An explicit Poincaré-Dulac normal form. J. Funct. Anal. 260, 10 (2011), 3007–3035.
[18] Foias, C., and Saut, J.-C. Asymptotic behavior, as t → +∞, of solutions of Navier-Stokes equations and nonlinear spectral manifolds. Indiana Univ. Math. J. 33, 3 (1984), 459–477.
[19] Foias, C., and Saut, J.-C. Linearization and normal form of the Navier-Stokes equations with potential forces. Ann. Inst. H. Poincaré Anal. Non Linéaire 4, 1 (1987), 1–47.
[20] Foias, C., and Saut, J.-C. Asymptotic integration of Navier-Stokes equations with potential forces. I. Indiana Univ. Math. J. 40, 1 (1991), 305–320.
[21] Ghidaglia, J.-M. Long time behaviour of solutions of abstract inequalities: applications to thermohydraulic and magnetohydrodynamic equations. J. Differential Equations 61, 2 (1986), 268–294.
[22] Ghidaglia, J.-M. Some backward uniqueness results. Nonlinear Anal. 10, 8 (1986), 777–790.
[23] Hoang, L. Asymptotic expansions for the Lagrangian trajectories from solutions of the Navier-Stokes equations. Comm. Math. Phys. (2020), 1–14. In press, DOI: 10.1007/s00220-020-03863-5.
[24] Hoang, L. T., and Martinez, V. R. Asymptotic expansion for solutions of the Navier-Stokes equations with non-potential body forces. J. Math. Anal. Appl. 462, 1 (2018), 84–113.
[25] Hoang, L. T., and Titi, E. S. Asymptotic expansions in time for rotating incompressible viscous fluids. Ann. Inst. H. Poincaré Anal. Non Linéaire (2020), 1–29. In press, DOI: 10.1016/j.anihpc.2020.06.005.
[26] Minea, G. Investigation of the Foias-Saut normalization in the finite-dimensional case. J. Dynam. Differential Equations 10, 1 (1998), 189–207.
[27] Shi, Y. A Foias-Saut type of expansion for dissipative wave equations. Comm. Partial Differential Equations 25, 11-12 (2000), 2287–2331.

Department of Mathematics and Statistics, Minnesota State University, Mankato, Mankato, MN 56001, U. S. A.
E-mail address : [email protected] Department of Mathematics and Statistics, Texas Tech University, 1108 Memorial Cir-cle, Lubbock, TX 79409–1042, U. S. A.
E-mail address : [email protected] Department of Mathematics, University of North Georgia, Gainesville Campus, 3820Mundy Mill Rd., Oakwood, GA 30566, U. S. A.
E-mail address ::