High Order Numerical Homogenization for Dissipative Ordinary Differential Equations
ZEYU JIN∗ AND RUO LI†

Abstract.
We propose a high order numerical homogenization method for dissipative ordinary differential equations (ODEs) containing two time scales. Essentially, only a first order homogenized model valid globally in time can be derived analytically. To achieve a high order method, we have to adopt a numerical approach in the framework of the heterogeneous multiscale method (HMM). By a successively refined microscopic solver, accuracy improvement up to arbitrary order is attained, provided that the input data are smooth enough. Based on the formulation of the high order microscopic solver we derive, an iterative formula to calculate the microscopic solver is then proposed. Using the iterative formula, we develop an efficient implementation of the method for practical applications. Several numerical examples are presented to validate the new models and numerical methods.
Key words.
Multiscale methods; Dissipative systems; Homogenization; Correction model
AMS subject classifications.
1. Introduction.
Multiple-time-scale problems are often encountered in many disciplines such as chemical kinetics [23], molecular dynamics [4, 32] and celestial mechanics [21]. There are many studies concerning stiff systems of ordinary differential equations (ODEs), especially those with two time scales. In general, these systems can be divided into two categories [9]. One is dissipative systems, where fast variables tend to the stationary state at an exponential rate. The other is oscillatory systems, where fast variables oscillate in some orbits.

It is impossible to resolve all the time scales and capture all the variables numerically due to limited computing power. In many problems, we are only interested in the dynamics of slow macroscopic variables. There is some work on designing efficient numerical algorithms for stiff ODEs, such as implicit Runge-Kutta methods [30], backward differentiation formulas [2], Rosenbrock methods [20] and projective methods [14]. It is well known [24] that, as ε → 0, the dynamics of the slow variables satisfy a limiting equation, which can be obtained by averaging methods. For simple systems, the limiting equation can be derived by analytical tools. For complex systems, however, we have to sample the fast variables to approximate the limiting equation. A famous method of this type is the heterogeneous multiscale method (HMM) [10, 1, 11]. In HMM, there is a microscopic solver to sample the fast variables, and a macroscopic solver to evolve the slow variables. Some results of numerical analysis on this method can be found in [9]. There are three main sources of error in HMM: the modeling error, the sampling error and the truncation error of the macroscopic solver. The modeling error is of order O(ε). When ε is sufficiently small, the modeling error is small enough. However, when ε is relatively but not extremely small, the modeling error is not negligible. Therefore, it is necessary to propose a correction model to reduce the modeling error in this situation.

There have been several attempts to reduce the modeling error in multiscale methods. For example, in [6], B-series are used to derive high-order stroboscopic averaged equations for a class of highly oscillatory systems. In [19, 31], a first order correction model is developed for sediment transport in the sub-critical case, where the modeling error can be reduced to O(ε²).

In this paper, we develop a novel high-order correction model and the corresponding numerical algorithms for stiff dissipative systems of ODEs. Our studies begin with the theory of invariant manifolds for ODEs [15, 25]. It can be proved that the invariant manifold is a global attractor. Actually, the microscopic solver of HMM is designed to find an approximation to the invariant manifold with an error of O(ε). In order to derive the correction models, we need to find high-order approximations to the invariant manifold.
By the asymptotic approximation method, the analytical expressions of the first several terms in the formal expansion can be obtained. We can prove that this approximation shares similar properties with the invariant manifold. In other words, the trajectories tend to this approximation of the invariant manifold at an exponential rate. Once the trajectories are close to this approximation, they can be approximated by a reduced model over a finite time horizon. However, the terms obtained by the formal expansion are too complicated to be implemented, especially for higher-order methods. In addition, the evaluation of high-order derivatives is required. We therefore design an iterative formula for the sake of practicality. We can prove that the iterative method matches the formal expansion in some sense, and that the approximation accuracy reaches O(ε^{k+1}) after k iteration steps. By the technique of numerical derivatives, we design a recursion method using the iterative scheme. We also present some numerical analysis of our algorithms.

The rest of this paper is arranged as follows. In Section 2, we briefly introduce multiscale dissipative systems and the heterogeneous multiscale method. In Section 3, we present our models for high-order homogenization and analyze their properties. In Section 4, we develop two types of algorithms in the framework of HMM. Some numerical analysis is presented in Section 5. Numerical results are shown in Section 6 and the paper ends with a brief summary and conclusion in Section 7.

∗Yuanpei College & School of Mathematical Sciences, Peking University ([email protected]).
†CAPT, LMAM & School of Mathematical Sciences, Peking University ([email protected]).
2. Preliminaries.

2.1. Dissipative systems.
Let us consider the following ODEs with scale separation [25]:

(2.1)  dx/dt = f(x, y),  dy/dt = (1/ε) g(x, y),  x|_{t=0} = x₀,  y|_{t=0} = y₀,

where x ∈ ℝ^{n_x} is the slow variable, y ∈ ℝ^{n_y} is the fast variable, and n_x and n_y are the dimensions of x and y, respectively. The parameter satisfies 0 < ε ≤ ε₀ ≪ 1. Let y_x be the solution of

(2.2)  dy_x/dt = (1/ε) g(x, y_x),  y_x|_{t=0} = y₀,

for any x fixed. Suppose that dμ_x(y) is the corresponding invariant measure of (2.2), satisfying that for any φ ∈ L¹(ℝ^{n_y}; dμ_x),

lim_{T→+∞} (1/T) ∫₀^T φ(y_x(t)) dt = ∫_{ℝ^{n_y}} φ(y) dμ_x(y),  for μ_x-a.e. y₀ ∈ ℝ^{n_y}.

Let

F(x) = lim_{T→+∞} (1/T) ∫₀^T f(x, y_x(t)) dt = ∫_{ℝ^{n_y}} f(x, y) dμ_x(y).

Under some appropriate assumptions [24], the trajectory of x(t) tends to a solution of the limiting equation

(2.3)  dX/dt = F(X),

as ε → 0. Our analysis relies on regularity assumptions on f and g. Precisely, we assume the following.

Assumption 2.1. f and g are sufficiently smooth. In addition, there exists K ∈ ℕ \ {0} such that

f ∈ W^{K,∞}(ℝ^{n_x} × ℝ^{n_y}, ℝ^{n_x}),  ∇g ∈ W^{K,∞}(ℝ^{n_x} × ℝ^{n_y}, ℝ^{n_y×(n_x+n_y)}).

Here we adopt standard notations of Sobolev spaces. The space W^{k,∞}(ℝ^n, ℝ^m) is equipped with the norm defined by

‖ξ‖_{W^{k,∞}(ℝ^n,ℝ^m)} := max_{j=0,1,...,k} ‖∇^j ξ‖_{L^∞},  ξ ∈ W^{k,∞}(ℝ^n, ℝ^m).

When no ambiguity is possible, ‖·‖_{W^{k,∞}(ℝ^n,ℝ^m)} and W^{k,∞}(ℝ^n, ℝ^m) are abbreviated as ‖·‖_{k,∞} and W^{k,∞}, respectively. Let |·|_{k,∞}, defined by |ξ|_{k,∞} := ‖∇^k ξ‖_{L^∞}, be the Sobolev semi-norm.

The next assumption concerns the dissipativity of g.

Assumption 2.2. For any x ∈ ℝ^{n_x}, there exists γ(x) ∈ ℝ^{n_y} such that

(2.4)  g(x, γ(x)) = 0.

In addition, there exists β > 0 such that

(2.5)  ⟨g(x, y) − g(x, ỹ), y − ỹ⟩ ≤ −β |y − ỹ|²,  ∀x ∈ ℝ^{n_x}, ∀y, ỹ ∈ ℝ^{n_y}.
By Assumption 2.2, one can see that the dynamics for y with x fixed has a unique, globally exponentially attracting stationary point [25]. In this case, the system (2.1) is called a dissipative system. It is clear that the invariant measure dμ_x(y) at fixed x is a one-point distribution at y = γ(x). In addition, one has that F(x) = f(x, γ(x)). It was pointed out in [25] that, under some appropriate assumptions, the modeling error between the solutions of (2.1) and (2.3) is of order O(ε) in a finite time horizon.

Remark 2.3. Let g̃_x = g(x, ·): ℝ^{n_y} → ℝ^{n_y}. If g(x, y) satisfies (2.5), then we say that g̃_x is a β-strongly dissipative operator, or that −g̃_x is a β-strongly monotonic operator on ℝ^{n_y}, by the definitions in [27, 3]. It can be deduced [17] that, if g̃_x is L-Lipschitz continuous and β-strongly dissipative, then g̃_x is a bijection on ℝ^{n_y}, and g̃_x^{-1}: ℝ^{n_y} → ℝ^{n_y} is (1/β)-Lipschitz continuous and (β/L²)-strongly dissipative.

2.2. The heterogeneous multiscale method. The heterogeneous multiscale method (HMM) is a general strategy for multiscale problems [1, 9]. It makes use of two solvers: a macroscopic solver and a microscopic solver. Let us consider the system (2.1). As a macroscopic solver, a conventional explicit ODE solver is chosen to evolve (2.3). For example, take the forward Euler method with a time step Δt as the macroscopic solver, which can be expressed as

x_{n+1} = x_n + Δt F(x_n).

To evaluate F(x_n), a microscopic solver is chosen to resolve the microscopic scale. In the case of the forward Euler method with a time step δt, one gets that

(2.6a)  y_{n,m+1} = y_{n,m} + (δt/ε) g(x_n, y_{n,m}),  m = 0, 1, ..., M − 1,
(2.6b)  y_{n,0} suitably chosen.

Then one can estimate F(x_n) by

F(x_n) ≈ Σ_{m=0}^{M} K_{m,M} f(x_n, y_{n,m}),

where the weights {K_{m,M}} should satisfy the constraint Σ_{m=0}^{M} K_{m,M} = 1. As suggested in [9], for dissipative systems, one can choose the weights as

K_{m,M} = 0,  m = 0, 1, ..., M − 1,  K_{M,M} = 1.
In other words, one may estimate that F(x_n) = f(x_n, γ(x_n)) ≈ f(x_n, y_{n,M}). In this situation, the microscopic solver (2.6) can be regarded as a nonlinear solver for the equation g(x_n, y) = 0 with respect to y.
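As a concrete illustration, the scheme above (forward Euler as both the macroscopic and the microscopic solver, with the weights K_{m,M} concentrated on the last sample y_{n,M}) can be sketched in a few lines. The toy right-hand sides f, g and all step sizes below are hypothetical choices for demonstration, not taken from the paper.

```python
import math

# Toy dissipative system (hypothetical example):
#   dx/dt = f(x, y) = -y,   dy/dt = (1/eps) * g(x, y),  g(x, y) = x - y.
# Here gamma(x) = x solves g(x, gamma(x)) = 0, so F(x) = f(x, gamma(x)) = -x.
def f(x, y): return -y
def g(x, y): return x - y

def hmm_forward_euler(x0, y0, eps, T, dt, M, dt_micro):
    """Basic HMM: forward Euler macro step using F(x_n) ~ f(x_n, y_{n,M}),
    where y_{n,M} comes from M forward Euler micro steps on dy/dt = g/eps
    (i.e. the weights K_{m,M} keep only the last sample)."""
    x, y = x0, y0
    for _ in range(int(round(T / dt))):
        # microscopic solver: relax y toward the zero of g(x_n, .)
        for _ in range(M):
            y = y + (dt_micro / eps) * g(x, y)
        # macroscopic solver: forward Euler with the estimated force
        x = x + dt * f(x, y)
    return x

x_T = hmm_forward_euler(x0=1.0, y0=0.0, eps=1e-4, T=1.0, dt=0.01, M=50, dt_micro=0.5e-4)
exact = math.exp(-1.0)  # limiting equation dX/dt = -X gives X(1) = e^{-1}
```

The remaining gap between `x_T` and `exact` is dominated by the O(Δt) truncation error of the forward Euler macroscopic solver.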
3. Models.
In this section, we present our high-order correction models of the limiting equation (2.3) and analyze their properties. We begin with the asymptotic approximation to the invariant manifold and then design an iterative formula to generate high-order correction models automatically. The modeling error can be reduced to O(ε^{k+1}) in the k-th order correction model.
3.1. Invariant manifold. To begin with, we introduce the concept of an invariant manifold [25]. Assume that there exists an invariant set S_ε of (2.1), and that it can be represented as a smooth graph over x; namely, there exists a function Γ: ℝ^{n_x} × (0, ε₀] → ℝ^{n_y}, differentiable with respect to x, such that

S_ε = {(x, y) : y = Γ(x, ε), x ∈ ℝ^{n_x}},  ∀ε ∈ (0, ε₀].

This implies that

(3.1)  g(x, Γ(x, ε)) = ε ∇_x Γ(x, ε) f(x, Γ(x, ε)),  ∀x ∈ ℝ^{n_x}.

The equation (3.1) plays a central role in our discussions. It can be proved under some appropriate conditions that S_ε is a global attractor of the system (2.1); that is, for any initial values,

(3.2)  lim_{t→+∞} |y(t) − Γ(x(t), ε)| = 0.

Proposition 3.1.
Suppose that ∇_y f(x, y) and ∇_x Γ(x, ε) are bounded. Then S_ε is a global attractor of the system (2.1) for sufficiently small ε.

Proof. Assume that |∇_y f(x, y)| ≤ C₁ and |∇_x Γ(x, ε)| ≤ C₂. Let z(t) = y(t) − Γ(x(t), ε). By Assumption 2.2, one gets that

(1/2) d|z|²/dt = ⟨z, dz/dt⟩
  = ⟨y − Γ(x, ε), (1/ε) g(x, y) − ∇_x Γ(x, ε) f(x, y)⟩
  = ⟨y − Γ(x, ε), (1/ε) g(x, y) − (1/ε) g(x, Γ(x, ε))⟩ + ⟨y − Γ(x, ε), ∇_x Γ(x, ε) f(x, Γ(x, ε)) − ∇_x Γ(x, ε) f(x, y)⟩
  ≤ (−β/ε + C₁C₂) |z|².

This implies that |z(t)| ≤ e^{−βt/(2ε)} |z(0)| when 0 < ε ≤ min{β/(2C₁C₂), ε₀}, and then the proof is completed.

Proposition 3.1 says that the trajectory of (x(t), y(t)) tends to the set S_ε at an exponential rate. Later we refer to this type of property as the attractive property. Since S_ε is invariant, one obtains that, if the initial values lie on S_ε, then the trajectory of (x(t), y(t)) stays on S_ε for any t > 0. In this case, one may use

(3.3)  dX/dt = f(X, Γ(X, ε))

rather than (2.1) to calculate the trajectory of x(t). In other words, the system (2.1) can be decoupled. We refer to this type of property as the decouplable property. It should be emphasized that Proposition 3.1 depends on the existence of Γ(x, ε); however, the rest of this article does not depend on it.

As an example, we study the invariant manifold of the linearized equation of the system (2.1).

Example 3.2. Consider the linear system

(3.4)  dx/dt = A₁₁x + A₁₂y + b₁,  dy/dt = (1/ε)(A₂₁x + A₂₂y + b₂),

where A₁₁ ∈ ℝ^{n_x×n_x}, A₁₂ ∈ ℝ^{n_x×n_y}, A₂₁ ∈ ℝ^{n_y×n_x}, A₂₂ ∈ ℝ^{n_y×n_y}, b₁ ∈ ℝ^{n_x}, b₂ ∈ ℝ^{n_y}. We assume that A₂₂ is a negative definite matrix so that Assumption 2.2 is satisfied.

Suppose that the function Γ(x, ε) has the form Γ(x, ε) = Cx + d, where C = C(ε) ∈ ℝ^{n_y×n_x} and d = d(ε) ∈ ℝ^{n_y}. By (3.1), one gets the following equations:

(3.5a)  εCA₁₂C + εCA₁₁ − A₂₂C − A₂₁ = 0,
(3.5b)  (A₂₂ − εCA₁₂) d = εCb₁ − b₂.

It is known that (3.5a) is an algebraic matrix Riccati equation [13, 28]. We can prove the well-posedness of (3.5) when ε is sufficiently small; see Theorem A.1 in Appendix A.

3.2. Asymptotic approximation. In this part, we consider the asymptotic approximation to Γ(x, ε). We calculate the first several terms in the formal asymptotic expansion, and then prove the attractive property and the decouplable property of this approximation.
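Before proceeding, the linear Example 3.2 can be checked numerically. The sketch below uses hypothetical scalar coefficients: it solves the scalar version of the Riccati equation (3.5a) in closed form, computes d from (3.5b), and verifies that trajectories of the stiff system are attracted to the line y = Cx + d, as Proposition 3.1 predicts.

```python
import math

# Scalar instance of the linear system (3.4) (hypothetical coefficients):
#   dx/dt = a*x + b*y + b1,  dy/dt = (1/eps)*(c*x - lam*y + b2),  A22 = -lam < 0.
a, b, b1 = -1.0, 0.5, 0.2
c, lam, b2 = 1.0, 2.0, -0.3
eps = 1e-2

# (3.5a) reduces to the scalar quadratic  eps*b*C^2 + (eps*a + lam)*C - c = 0;
# we take the root that tends to c/lam = -A22^{-1}*A21 as eps -> 0.
A_, B_, C_ = eps * b, eps * a + lam, -c
disc = math.sqrt(B_ * B_ - 4 * A_ * C_)
Cst = min(((-B_ + disc) / (2 * A_), (-B_ - disc) / (2 * A_)),
          key=lambda r: abs(r - c / lam))

# (3.5b):  (A22 - eps*C*A12) d = eps*C*b1 - b2
dst = (eps * Cst * b1 - b2) / (-lam - eps * Cst * b)

# The graph y = Cst*x + dst should attract trajectories exponentially:
x, y = 1.0, 5.0
dt = eps / 20.0
for _ in range(int(1.0 / dt)):
    x, y = x + dt * (a * x + b * y + b1), y + dt / eps * (c * x - lam * y + b2)
z = abs(y - (Cst * x + dst))   # distance to the invariant line after t = 1
```

Since the fast contraction rate is of order 1/ε, the distance `z` is far below machine-visible size after t = 1.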
Suppose that Γ(x, ε) satisfies (3.1). We try formal expansions of the form

(3.6a)  Γ(x, ε) = γ₀(x) + εγ₁(x) + ε²γ₂(x) + O(ε³),
(3.6b)  ∇_x Γ(x, ε) = ∇γ₀(x) + ε∇γ₁(x) + ε²∇γ₂(x) + O(ε³).

By Assumption 2.2, we specify that γ₀(x) = γ(x). By Taylor's expansion of g(x, y) at (x, γ₀(x)) and the formal expansion in (3.6a), one notices that the left-hand side of (3.1) can be written as

(3.7)  g(x, Γ(x, ε)) = g(x, γ₀(x) + εγ₁(x) + ε²γ₂(x) + O(ε³))
      = εG_y(x)γ₁(x) + ε² ( G_y(x)γ₂(x) + (1/2) [ Σ_{j,k=1}^{n_y} G_{yy}^{i,j,k} γ₁^j(x) γ₁^k(x) ]_{i=1}^{n_y} ) + O(ε³),

where G_y(x) = [G_y^{i,j}(x)]_{i,j=1}^{n_y} = [∂g_i/∂y_j(x, γ₀(x))]_{i,j=1}^{n_y} and G_{yy}(x) = [G_{yy}^{i,j,k}(x)]_{i,j,k=1}^{n_y} = [∂²g_i/(∂y_j∂y_k)(x, γ₀(x))]_{i,j,k=1}^{n_y}. Similarly, by Taylor's expansion of f(x, y) at (x, γ₀(x)) and the formal expansions in (3.6), one obtains that the right-hand side of (3.1) can be written as

(3.8)  ε∇_xΓ(x, ε) f(x, Γ(x, ε)) = ε(∇γ₀(x) + ε∇γ₁(x))(F(x) + εF_y(x)γ₁(x)) + O(ε³)
      = ε∇γ₀(x)F(x) + ε²(∇γ₀(x)F_y(x)γ₁(x) + ∇γ₁(x)F(x)) + O(ε³),

where F_y(x) = [F_y^{i,j}(x)]_{i=1,j=1}^{n_x,n_y} = [∂f_i/∂y_j(x, γ₀(x))]_{i=1,j=1}^{n_x,n_y}. By Assumption 2.2, G_y(x) is invertible. By comparing (3.7) and (3.8), one gets the analytic expressions of γ₀(x), γ₁(x) and γ₂(x):

(3.9a)  γ₀(x) = γ(x),
(3.9b)  γ₁(x) = G_y(x)^{-1} ∇γ₀(x) F(x),
(3.9c)  γ₂(x) = G_y(x)^{-1} ( ∇γ₀(x)F_y(x)γ₁(x) + ∇γ₁(x)F(x) − (1/2) [ Σ_{j,k=1}^{n_y} G_{yy}^{i,j,k} γ₁^j(x) γ₁^k(x) ]_{i=1}^{n_y} ).

Remark 3.3. Taking the gradient of (2.4) with respect to x, one gets that

(3.10)  G_x(x) + G_y(x)∇γ₀(x) = 0,

where G_x(x) = [G_x^{i,j}(x)]_{i=1,j=1}^{n_y,n_x} = [∂g_i/∂x_j(x, γ₀(x))]_{i=1,j=1}^{n_y,n_x}. Therefore, one can obtain that ∇γ₀(x) = −G_y(x)^{-1}G_x(x).
This implies that γ₁(x), ∇γ₀(x) and γ₂(x) can be rewritten as algebraic expressions of function values and derivatives of f(x, y) and g(x, y) at (x, γ₀(x)). Therefore, one can obtain that γ₀, γ₁ and γ₂ are sufficiently smooth. In addition, by Assumptions 2.1 and 2.2, and the expressions in (3.9), one can deduce that ∇γ₀ ∈ W^{K,∞}, γ₁ ∈ W^{K,∞} and γ₂ ∈ W^{K−1,∞}. (See the Lemmas in Appendix C.1.)
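The expansion (3.6) and the expression (3.9b) can be sanity-checked numerically. In the sketch below, the scalar system g(x, y) = sin(x) − y, f(x, y) = cos(y) is a hypothetical example (so γ₀ = sin, G_y = −1, and γ₁(x) = −cos(x)cos(sin(x)) by (3.9b)): halving ε should divide the residual of the invariance relation (3.1) for γ₀ + εγ₁ by about four, confirming that the defect is O(ε²).

```python
import math

# Hypothetical scalar system: g(x, y) = sin(x) - y (gamma0 = sin, G_y = -1),
# f(x, y) = cos(y), so F(x) = cos(sin(x)) and, by (3.9b),
# gamma1(x) = G_y^{-1} * gamma0'(x) * F(x) = -cos(x) * cos(sin(x)).
def f(x, y): return math.cos(y)
def g(x, y): return math.sin(x) - y

def gamma1(x): return -math.cos(x) * math.cos(math.sin(x))

def defect(x, eps, h=1e-6):
    """Residual g(x, G) - eps * dG/dx * f(x, G) of the invariance relation
    (3.1) for the first-order approximation G = gamma0 + eps*gamma1."""
    G = lambda s: math.sin(s) + eps * gamma1(s)
    dG = (G(x + h) - G(x - h)) / (2 * h)   # central finite difference
    return g(x, G(x)) - eps * dG * f(x, G(x))

x0 = 0.7
ratio = defect(x0, 1e-3) / defect(x0, 5e-4)   # ~4 indicates an O(eps^2) defect
```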
As an approximation to Γ(x, ε), the function Γ̃₂(x, ε) := γ₀(x) + εγ₁(x) + ε²γ₂(x) should share similar properties with Γ(x, ε). Now we make the above statement rigorous. In this part, we let S_ε^(2) := {(x, y) : y = Γ̃₂(x, ε), x ∈ ℝ^{n_x}} and z(t) = y(t) − Γ̃₂(x(t), ε).

The attractive property that is parallel to Proposition 3.1 can be formulated as in Theorem 3.4, which says that the system goes quickly from arbitrary initial values into a small vicinity of the approximate invariant manifold S_ε^(2).

Theorem 3.4. There exists a constant C > 0 independent of ε such that

|z(t)| ≤ Cε³ + e^{−βt/(2ε)} |z(0)|,

for sufficiently small ε. In particular, as long as t is sufficiently large, then |z(t)| = O(ε³).

Proof. By Taylor's expansion of f(x, y) and g(x, y) at (x, γ₀(x)), one gets that

(3.11)  (1/ε) g(x, Γ̃₂(x, ε)) − ∇_xΓ̃₂(x, ε) f(x, Γ̃₂(x, ε))
  = G_y(x)(γ₁(x) + εγ₂(x)) + (ε/2) [ Σ_{j,k=1}^{n_y} G_{yy}^{i,j,k} γ₁^j(x)γ₁^k(x) ]_{i=1}^{n_y} − (∇γ₀(x) + ε∇γ₁(x))(F(x) + εF_y(x)γ₁(x)) + O(ε²)
  = (G_y(x)γ₁(x) − ∇γ₀(x)F(x)) + ε ( G_y(x)γ₂(x) + (1/2) [ Σ_{j,k=1}^{n_y} G_{yy}^{i,j,k} γ₁^j(x)γ₁^k(x) ]_{i=1}^{n_y} − ∇γ₀(x)F_y(x)γ₁(x) − ∇γ₁(x)F(x) ) + O(ε²)
  = O(ε²),

where the O(ε²) term can be controlled uniformly thanks to Assumption 2.1. By Assumption 2.2, Remark 3.3 and (3.11), there exists a constant C₀ > 0 such that

(1/2) d|z|²/dt = ⟨z, dz/dt⟩
  = ⟨y − Γ̃₂(x, ε), (1/ε)g(x, y) − ∇_xΓ̃₂(x, ε)f(x, y)⟩
  = ⟨y − Γ̃₂(x, ε), (1/ε)g(x, y) − (1/ε)g(x, Γ̃₂(x, ε))⟩ + ⟨y − Γ̃₂(x, ε), (1/ε)g(x, Γ̃₂(x, ε)) − ∇_xΓ̃₂(x, ε)f(x, Γ̃₂(x, ε))⟩ + ⟨y − Γ̃₂(x, ε), ∇_xΓ̃₂(x, ε)(f(x, Γ̃₂(x, ε)) − f(x, y))⟩
  ≤ (−β/ε + C₀)|z|² + C₀ε²|z|
  ≤ −(β/(2ε))|z|² + (C₀²/β)ε⁵,

where 0 < ε ≤ min{β/(4C₀), ε₀}. Here the last inequality is due to the Cauchy–Schwarz inequality. By Gronwall's inequality, one gets that

|z(t)|² ≤ (2C₀²/β²) ε⁶ (1 − e^{−βt/ε}) + e^{−βt/ε}|z(0)|²,

which completes the proof.

Now we consider the decouplable property. Substituting Γ̃₂(x, ε) for Γ(x, ε) in (3.3), one obtains the following equation:

(3.12)  dX/dt = f(X, Γ̃₂(X, ε)).

Once the trajectory is close to S_ε^(2), one may use (3.12) to approximate the trajectory of x(t) in some sense. To be rigorous, we have the following theorem.

Theorem 3.5.
There exists a constant C > 0 independent of ε such that

|x(t) − X(t)| ≤ e^{Ct} ( |x(0) − X(0)| + ε³ + √(ε/β) |z(0)| ),

for sufficiently small ε. In particular, if |x(0) − X(0)| = O(ε³) and |z(0)| = O(ε^{5/2}), then |x(t) − X(t)| = O(ε³) for t ∼ O(1).

Proof. Since ∇_x f(x, y), ∇_y f(x, y) and ∇_x Γ̃₂(x, ε) are all bounded, there exists a constant C₁ > 0 such that

|d(x − X)/dt| = |f(x, y) − f(X, Γ̃₂(X, ε))|
  ≤ |f(x, z + Γ̃₂(x, ε)) − f(x, Γ̃₂(X, ε))| + |f(x, Γ̃₂(X, ε)) − f(X, Γ̃₂(X, ε))|
  ≤ C₁|z| + C₁|x − X|.

Then one obtains that

(1/2) d|x − X|²/dt = ⟨x − X, d(x − X)/dt⟩ ≤ |x − X| · |d(x − X)/dt| ≤ C₁|z| · |x − X| + C₁|x − X|².

Therefore, by Theorem 3.4 and the Cauchy–Schwarz inequality, there exists a constant C > 0 such that

d|x − X|²/dt ≤ C(|x − X|² + ε⁶) + e^{−βt/ε}|z(0)|².

By Gronwall's inequality,

|x(t) − X(t)|² ≤ e^{Ct}|x(0) − X(0)|² + ε⁶(e^{Ct} − 1) + ((e^{Ct} − e^{−βt/ε})/(β/ε + C)) |z(0)|²,

which completes the proof.

Remark 3.6. Theorem 3.4 says that (x(t), y(t)) always tends to a state where |y − Γ̃₂(x, ε)| = O(ε³) at an exponential rate. Theorem 3.5 says that, once the trajectory arrives at this state, one may use (3.12), which is not stiff, to approximate the trajectory of x(t) in a finite time horizon with an accuracy of O(ε³). In the literature, this phenomenon is referred to as an initial layer or a boundary layer [25, 26, 29, 8, 22]. This remark plays a guiding role in designing our numerical algorithms later.

Remark 3.7. Only low-order approximations to Γ(x, ε) are studied in this part. One may guess that Γ̃_k(x, ε) := Σ_{j=0}^{k} ε^j γ_j(x) should have similar properties. However, it is expected that the expression of γ_k(x) is very complex when k is large. In addition, the evaluation of high-order derivatives is involved in the expressions of γ_k(x). It is thus not convenient to conduct theoretical analysis or to design numerical algorithms in this way.

3.3. An iterative method. In this part, an iterative method is presented to approximate the invariant manifold S_ε. We put forward an iterative formula, which can be used to produce a series of successively refined approximations to Γ(x, ε). This method overcomes the difficulties in Remark 3.7 due to its concise form. It can be proved theoretically that this method is consistent with the asymptotic approximation in some sense, and that the approximation accuracy reaches O(ε^{k+1}) after k iteration steps.
Inspired by (3.1), we propose the following fixed-point iterative formula to approximate Γ(x, ε):

(3.13a)  Γ₀(x, ε) = γ₀(x),
(3.13b)  g(x, Γ_{k+1}(x, ε)) = ε∇_xΓ_k(x, ε) f(x, Γ_k(x, ε)),  k = 0, 1, 2, ....

Remark 2.3 implies the existence and uniqueness of the solution Γ_{k+1}(x, ε) of (3.13b), since g(x, ·) is Lipschitz continuous and strongly dissipative. Using the notation of Remark 2.3, one can write (3.13b) as

Γ_{k+1}(x, ε) = g̃_x^{-1}( ε∇_xΓ_k(x, ε) f(x, Γ_k(x, ε)) ).

In this part, we study the relationship between the iterative formula (3.13) and the asymptotic approximation. It can be proved that the iterative formula reproduces the approximations in (3.9) in the first two iteration steps. The proof of Theorem 3.8 can be found in Appendix B.
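The fixed-point iteration (3.13) is easy to try on a scalar toy problem where g̃_x^{-1} is available in closed form (a hypothetical example, with the x-derivative replaced by a finite difference in the spirit of Section 4): the successive increments |Γ_k − Γ_{k−1}| should behave like O(ε^k).

```python
import math

# Hypothetical scalar system: g(x, y) = sin(x) - y, f(x, y) = cos(y).
# g~_x^{-1} is explicit here: g(x, y) = v  <=>  y = sin(x) - v, so (3.13b) reads
#   Gamma_{k+1}(x) = sin(x) - eps * dGamma_k(x) * f(x, Gamma_k(x)).
def f(x, y): return math.cos(y)

def make_next(G, eps, h=1e-5):
    """One step of the fixed-point iteration (3.13b); the x-derivative of
    Gamma_k is approximated by a central finite difference."""
    def G_next(s):
        dG = (G(s + h) - G(s - h)) / (2 * h)
        return math.sin(s) - eps * dG * f(s, G(s))
    return G_next

def iterates(eps, k):
    G, Gs = math.sin, [math.sin]          # Gamma_0 = gamma_0
    for _ in range(k):
        G = make_next(G, eps)
        Gs.append(G)
    return Gs

x0 = 0.7
def diff(eps, k):                         # |Gamma_k - Gamma_{k-1}| = O(eps^k)
    Gs = iterates(eps, k)
    return abs(Gs[k](x0) - Gs[k - 1](x0))

r1 = diff(2e-2, 1) / diff(1e-2, 1)        # ~2: first increment is O(eps)
r2 = diff(2e-2, 2) / diff(1e-2, 2)        # ~4: second increment is O(eps^2)
```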
Theorem 3.8. We have the following conclusions:

(3.14a)  |Γ₁(x, ε) − γ₀(x) − εγ₁(x)| = O(ε²),
(3.14b)  |∇_xΓ₁(x, ε) − ∇γ₀(x) − ε∇γ₁(x)| = O(ε²),
(3.14c)  |Γ₂(x, ε) − γ₀(x) − εγ₁(x) − ε²γ₂(x)| = O(ε³),

where the bounds can be controlled uniformly in x ∈ ℝ^{n_x}. In other words,

‖Γ₁(·, ε) − γ₀ − εγ₁‖_{0,∞} = O(ε²),  ‖Γ₂(·, ε) − γ₀ − εγ₁ − ε²γ₂‖_{0,∞} = O(ε³).

We prove in Theorem 3.8 that the iterative formula (3.13) matches the formal expansions in the first two iteration steps. As for higher-order approximations, it would be tedious to prove similar results due to the complex expressions of γ_k(x) when k is large. However, we can get rid of the specific expressions of γ_k(x) and prove the attractive property and the decouplable property parallel to Theorems 3.4 and 3.5. Proofs of the following two theorems can be found in Appendix C. The interpretations of these properties are similar to Remark 3.6. In this part, we let z_k(t) = y(t) − Γ_k(x(t), ε) and let X_k be defined as the solution of

(3.15)  dX_k/dt = f(X_k, Γ_k(X_k, ε)).

Theorem 3.9.
For any k = 0, 1, ..., K and for any A ∈ (0, β), there exists a constant C_k > 0 such that

|z_k(t)| ≤ C_k ( ε^{k+1} + e^{−At/ε} |z_k(0)| ),

where C_k is dependent on k and independent of ε. In particular, as long as t is sufficiently large, then |z_k(t)| = O(ε^{k+1}).

Theorem 3.10.
For any k = 0, 1, ..., K, there exists a constant C_k > 0, dependent on k and independent of ε, such that

|x(t) − X_k(t)| ≤ e^{C_k t} ( |x(0) − X_k(0)| + ε^{k+1} + C_k √(ε/β) |z_k(0)| ).

In particular, if |x(0) − X_k(0)| = O(ε^{k+1}) and |z_k(0)| = O(ε^{k+1/2}), then |x(t) − X_k(t)| = O(ε^{k+1}) for t ∼ O(1).

Now we consider using the iterative formula (3.13) on the linearized equation (3.4). One may notice that (3.4) does not satisfy Assumption 2.1, since f(x, y) in this case is not bounded. However, we can still prove similar results, namely that the iterative formula (3.13) improves the approximation accuracy. Actually, we can also prove the convergence of (3.13) in this case.
By(3.16a), there exists a constant M > ε and k such that (cid:107) C k +1 − C ∗ (cid:107) ≤ εM (cid:107) C k − C ∗ (cid:107) . By Theorem A.1, (cid:107) C − C ∗ (cid:107) = O ( ε ). Therefore, (cid:107) C k − C ∗ (cid:107) = O ( ε k +1 ) for fixed k .By (3.16b), there exists a constant M > ε and k such that(3.18) | d k +1 − d ∗ | ≤ εM (cid:107) C k − C ∗ (cid:107) + εM | d k − d ∗ | , which indicates that | d k − d ∗ | is uniformly bounded when ε is sufficiently small. Thus, lim k →∞ | d k − d ∗ | < + ∞ .By taking upper limit of (3.18), one gets that(1 − εM ) lim k →∞ | d k − d ∗ | ≤ . Therefore, when ε is sufficiently small, the sequence { d k } converges to d ∗ . By Theorem A.1, | d − d ∗ | = O ( ε ).Therefore, by (3.18), one obtains that | d k − d ∗ | = O ( ε k +1 ) for fixed k .In conclusion, we have already proven that C k → C ∗ and d k → d ∗ as k → ∞ . Furthermore, for fixed k ,we have that (cid:107) C k − C ∗ (cid:107) = O ( ε k +1 ) and | d k − d ∗ | = O ( ε k +1 ).
4. Numerical Scheme.
In this section, we develop numerical algorithms to implement the models in Section 3.
4.1. The framework. Basically, our numerical scheme falls into the framework of HMM, which contains a microscopic solver to compute the steady state, and a macroscopic solver to evolve the slow variables. As seen in Remark 3.6, the numerical simulations can be divided into two stages:
• the first stage: solving the coupled system (2.1) in the initial layer by a coupled solver until |z_k(t)| = |y(t) − Γ_k(x(t), ε)| is sufficiently small;
• the second stage: using the HMM-type algorithms developed later to solve the decoupled system (3.15).
Briefly, the HMM-type algorithms contain two parts:
• approximating the invariant manifold, i.e., calculating Γ_k(x, ε) by a microscopic solver;
• solving the decoupled system of the slow variables by a macroscopic solver.
In our numerical scheme, we need to calculate Γ_k(x, ε), as mentioned before. Let Γ̂_k(x, ε) ≈ Γ_k(x, ε) be the numerical approximation to Γ_k(x, ε). Hereinafter, we denote the algorithms in which Γ_k(x, ε) is needed by HMM_k. Given a grid {t_n}, let (x_n, y_n) be the numerical approximation to (x(t_n), y(t_n)).

In the first stage, we solve the coupled system (2.1) numerically on the grid {t_n = nΔt_c}_{n=0}^{N_c} with N_cΔt_c = T_c, where Δt_c is the time step size of the coupled solver and N_c can be determined by the criterion in Remark 4.1. One may choose an explicit one-step scheme as the coupled solver, which can be written as

(x_{n+1}, y_{n+1}) = (x_n, y_n) + Δt_c φ_c((x_n, y_n), (f, g/ε), Δt_c),  n = 0, 1, ..., N_c − 1.

In the second stage, we solve the decoupled system (3.15) numerically with initial value x_{N_c} on the grid {t_n = T_c + (n − N_c)Δt}_{n=N_c}^{N} with (N − N_c)Δt = T − T_c, where Δt is the time step size of the macroscopic solver. Similarly, we may choose an explicit one-step scheme as the macroscopic solver:

x_{n+1} = x_n + Δt φ_d(x_n, f(·, Γ̂_k(·, ε)), Δt),  n = N_c, N_c + 1, ..., N − 1.
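The two-stage structure above can be sketched as follows for a hypothetical toy system (all right-hand sides, step sizes and the layer-exit threshold are illustrative; forward Euler is used for the coupled, microscopic and macroscopic solvers, with k = 0). A fully resolved integration of the coupled system serves as the reference.

```python
import math

# Two-stage driver for a hypothetical toy system:
#   f(x, y) = cos(y),  g(x, y) = sin(x) - y  (so gamma_0(x) = sin(x)).
eps, T, dt_c, dt = 1e-3, 1.0, 2e-5, 1e-2

def f(x, y): return math.cos(y)
def g(x, y): return math.sin(x) - y

def gamma_hat0(x, M=60, dty=0.5 * eps):
    # microscopic solver: forward Euler on dy/dt = g(x, y)/eps (k = 0 model)
    y = 0.0
    for _ in range(M):
        y += (dty / eps) * g(x, y)
    return y

# stage 1: coupled forward Euler through the initial layer
x, y, t = 2.0, -1.0, 0.0
while abs(y - gamma_hat0(x)) > 10 * eps:       # crude layer-exit test
    x, y = x + dt_c * f(x, y), y + (dt_c / eps) * g(x, y)
    t += dt_c
# stage 2: macroscopic forward Euler on dX/dt = f(X, Gamma_hat_0(X))
while t < T:
    x = x + dt * f(x, gamma_hat0(x))
    t += dt

# reference: resolve the full coupled system with the small step
xr, yr, tr = 2.0, -1.0, 0.0
while tr < T:
    xr, yr = xr + dt_c * f(xr, yr), yr + (dt_c / eps) * g(xr, yr)
    tr += dt_c
```

The gap between `x` and `xr` combines the O(ε) modeling error of the k = 0 model with the O(Δt) macroscopic truncation error.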
4.2. The microscopic solver. In our numerical scheme, we may need to solve the equation

(4.1)  g(x̃, ỹ) = εh

with respect to ỹ, where h is a given quantity which may depend on x̃. Remark 2.3 ensures the existence and uniqueness of the solution of (4.1). Let us denote the solution by g̃_{x̃}^{-1}(εh). Consider the following ODE with respect to ỹ:

(4.2)  dỹ/dt = (1/ε) g(x̃, ỹ) − h,

whose stationary point is exactly g̃_{x̃}^{-1}(εh). The microscopic solver solves (4.2) numerically:

ỹ_{m+1} = ỹ_m + δt φ_m(ỹ_m, g(x̃, ·)/ε − h, δt),  m = 0, 1, ..., M − 1,  ỹ₀ suitably chosen,

where ỹ_m is short for ỹ_m(x̃, h), δt is the time step size and M is the number of steps of the microscopic solver. One may estimate g̃_{x̃}^{-1}(εh) by ỹ_M(x̃, h). We will discuss the selection of the initial value ỹ₀ in Remark 4.4.

In the initial layer [0, T_c], i.e., the first stage of the simulation, the system tends to the invariant manifold quickly. According to Theorem 3.10, the terminal time T_c of the coupled solver should be taken such that

|y(T_c) − Γ_k(x(T_c), ε)| = O(ε^{k+1/2}).

By Theorem 3.9, T_c should be of order O(ε|log ε|) for fixed initial values and k. This indicates that the coupled solver is only needed for a short time.

Remark 4.1. It is difficult to estimate T_c in numerical simulations. Here we present a possible empirical criterion inspired by Theorem 3.9: if, for some positive integer n₀ with n₀ ≡ 0 (mod n_p),

|y_{n₀} − Γ̂_k(x_{n₀}, ε)| ≥ μ |y_{n₀−n_p} − Γ̂_k(x_{n₀−n_p}, ε)|,

where n_p ∈ ℕ \ {0} is given beforehand, then terminate the coupled solver and let N_c be n₀. We let μ = exp(−(β̂/ε) n_p Δt_c), where −β̂ is an upper bound for the eigenvalues of ∂g/∂y(x, y). We calculate Γ̂_k every n_p steps in order to reduce the computational cost. In our numerical experiments, we set n_p = 10.
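A minimal sketch of the microscopic solver for (4.1)–(4.2), on a hypothetical scalar g whose root is known exactly so the result can be checked:

```python
import math

# Microscopic solver sketch (hypothetical toy g): solve g(x, y) = eps*h by
# forward Euler on the relaxation ODE (4.2), dy/dt = g(x, y)/eps - h.
def g(x, y): return math.sin(x) - y          # 1-strongly dissipative in y

def micro_solve(x, h, eps, y0=0.0, M=100, dty_over_eps=0.5):
    y = y0
    for _ in range(M):
        y += dty_over_eps * eps * (g(x, y) / eps - h)
    return y

x, h, eps = 0.7, 0.3, 1e-3
y_star = math.sin(x) - eps * h               # exact root of g(x, y) = eps*h
y_num = micro_solve(x, h, eps)
```

Each step contracts the error toward the stationary point of (4.2) by the factor 1 − δt/ε (here 1/2), so M = 100 steps reach machine accuracy.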
Notice that an explicit one-step scheme for the ODE: d z d t = h ( z ) can be written in the form of z n +1 = z n + ∆ tφ ( z n , h, ∆ t ).IGH ORDER NUMERICAL HOMOGENIZATION In this part, we present several numerical ap-proaches to approximating Γ k ( x, ε ). Remark 3.3 tells us that γ ( x ) and γ ( x ) can be rewritten as expressionsof function values and derivatives of f ( x, y ) and g ( x, y ) at ( x, γ ( x )). In addition, γ ( x ) can be approximatedby(4.3) γ ( x ) ≈ ˜ y M ( x, . Therefore, Γ k ( x, ε ) can be approximated according to the expressions (3.9) when k = 0 , ,
2. However, theseexpressions are too complicated. In this part, we would like to design an algorithm that is easy to implement.The algorithm is based on the analytical expressions in (3.9) and it requires to evaluate the Jacobian matrixof g ( x, y ).First, we notice by (3.9b) and (3.10) that(4.4) γ ( x ) = − G y ( x ) − G x ( x ) F ( x ) . By (3.9), one can obtain that(4.5) Γ ( x, ε ) = γ ( x ) + εγ ( x ) + ε γ ( x ) + O ( ε )= γ ( x ) + εG y ( x ) − ( ∇ γ ( x ) + ε ∇ γ ( x )) f ( x, γ ( x ) + εγ ( x )) − ε G y ( x ) − n y (cid:88) j,k =1 G i,j,kyy γ j ( x ) γ k ( x ) n y i =1 + O ( ε ) . Now we consider how the terms in (4.5) are evaluated. One can notice that( ∇ γ ( x ) + ε ∇ γ ( x )) f ( x, γ ( x ) + εγ ( x ))= lim τ → γ ( x + F ( x ) τ ) + εγ ( x + F ( x ) τ ) − γ ( x ) − εγ ( x ) τ , where F ( x ) := f ( x, γ ( x ) + εγ ( x )). Therefore, numerical derivatives can be used to approximate this term,that is,(4.6) ( ∇ γ ( x ) + ε ∇ γ ( x )) f ( x, γ ( x ) + εγ ( x )) ≈ ∆ + yτ , where ∆ + y = γ ( x + F ( x ) τ ) + εγ ( x + F ( x ) τ ) − γ ( x ) − εγ ( x ) with τ suitably chosen. Besides, Taylor’sexpansion of g ( x, y ) at ( x, γ ( x )) yields that(4.7) g ( x, γ ( x ) + εγ ( x )) = εG y ( x ) γ ( x ) + 12 ε (cid:104) n y (cid:88) j,k =1 G i,j,kyy γ j ( x ) γ k ( x ) (cid:105) n y i =1 + O ( ε ) . By (4.5), (4.6) and (4.7), one can obtain thatΓ ( x, ε ) ≈ γ ( x ) + εγ ( x ) + G y ( x ) − (cid:18) ε ∆ + yτ − g ( x, γ ( x ) + εγ ( x )) (cid:19) , where γ ( x ) and γ ( x ) can be approximated by (4.3) and (4.4), respectively. Now we would like to utilize the iterative formula (3.13) todesign an high-order HMM-type algorithm.In order to obtain Γ k ( x, ε ), we need to approximate ∇ x Γ k − ( x, ε ) f ( x, Γ k − ( x, ε )). Actually, this termis the directional derivative of Γ k − ( x, ε ) along f ( x, Γ k − ( x, ε )). 
Similar to what we did previously in Subsection 4.3.1, we notice that

∇_x Γ_{k−1}(x, ε) f(x, Γ_{k−1}(x, ε)) = lim_{τ→0} [ Γ_{k−1}(x + f(x, Γ_{k−1}(x, ε)) τ, ε) − Γ_{k−1}(x, ε) ] / τ.

Therefore, one may use a numerical derivative to approximate this term, that is,

∇_x Γ_{k−1}(x, ε) f(x, Γ_{k−1}(x, ε)) ≈ Δ⁺y / τ,

where Δ⁺y = Γ_{k−1}(x + f(x, Γ_{k−1}(x, ε)) τ, ε) − Γ_{k−1}(x, ε), with τ suitably chosen. Then we need to solve an equation in the form of (4.1), whose solution can be approximated by the microscopic solver.

Remark. This approach obtains Γ_k(x, ε) by recursion; one can also use the algorithm introduced in Subsection 4.3.1 to obtain Γ_k(x, ε) for k ≥ 2.

We summarize the algorithms introduced in Subsections 4.3.1 and 4.3.2 in Algorithms 4.1 and 4.2, respectively.
Algorithm 4.1
Approximating Γ_k(x, ε) using the methods in Subsection 4.3.1.

function HMMtype1(x, ε, k)
    if k equals 0 then
        Γ̂_0(x, ε) ← ỹ_M(x, 0);
        return Γ̂_0(x, ε);
    else if k equals 1 then
        γ̂_0(x) ← ỹ_M(x, 0);
        compute γ̂_1(x) according to (4.4);
        Γ̂_1(x, ε) ← γ̂_0(x) + ε γ̂_1(x);
        return Γ̂_1(x, ε);
    else if k equals 2 then
        γ̂_0(x) ← ỹ_M(x, 0) and Ĝ_y(x) ← ∂g/∂y(x, γ̂_0(x));
        compute γ̂_1(x) according to (4.4);
        Γ̂_1(x, ε) ← γ̂_0(x) + ε γ̂_1(x);
        F̂(x) ← f(x, Γ̂_1(x, ε));
        Γ̂_1(x + F̂(x)τ, ε) ← HMMtype1(x + F̂(x)τ, ε, 1);
        Γ̂_2(x, ε) ← Γ̂_1(x, ε) + Ĝ_y(x)^{-1} ( ε (Γ̂_1(x + F̂(x)τ, ε) − Γ̂_1(x, ε)) / τ − g(x, Γ̂_1(x, ε)) );
        return Γ̂_2(x, ε);
    else
        Γ̂_{k−1}(x, ε) ← HMMtype1(x, ε, k − 1);
        F̂_{k−1}(x) ← f(x, Γ̂_{k−1}(x, ε));
        Γ̂_{k−1}(x + F̂_{k−1}(x)τ, ε) ← HMMtype1(x + F̂_{k−1}(x)τ, ε, k − 1);
        Γ̂_k(x, ε) ← ỹ_M( x, (Γ̂_{k−1}(x + F̂_{k−1}(x)τ, ε) − Γ̂_{k−1}(x, ε)) / τ );
        return Γ̂_k(x, ε);
    end if
end function

Remark. To obtain Γ̂_k(x, ε) once, Algorithm 4.2 needs to call the microscopic solver (2^{k+1} − 1) times, while Algorithm 4.1 needs one call if k ≤ 1, and (3 × 2^{k−2} − 1) calls otherwise. One can notice that Algorithm 4.1 calls the microscopic solver fewer times than Algorithm 4.2, at the expense of computing the Jacobian matrix of g(x, y). The computational cost increases exponentially as k increases. In addition, a larger k may lead to more accumulation of numerical error (see Remark 5.5).

Remark. In both algorithms, when calculating Γ̂_k(x, ε), we need to approximate the directional derivative of Γ_{k−1}(x, ε) along f(x, Γ_{k−1}(x, ε)). For example, in Algorithm 4.2 at a time step t = t_n, we need Δ⁺y/τ to approximate ∇_x Γ_{k−1}(x_n, ε) f(x_n, Γ_{k−1}(x_n, ε)). At the next time step t = t_{n+1}, one can use Γ̂_k(x_n, ε) + (Δ⁺y/τ) Δt as an estimate of y at t_{n+1} and take it as the initial value of the microscopic solver, which may reduce the sampling error. Similar strategies can be used in a Runge-Kutta type macroscopic solver.

Algorithm 4.2
Approximating Γ_k(x, ε) using the methods in Subsection 4.3.2.

function HMMtype2(x, ε, k)
    if k equals 0 then
        Γ̂_0(x, ε) ← ỹ_M(x, 0);
        return Γ̂_0(x, ε);
    else
        Γ̂_{k−1}(x, ε) ← HMMtype2(x, ε, k − 1);
        F̂_{k−1}(x) ← f(x, Γ̂_{k−1}(x, ε));
        Γ̂_{k−1}(x + F̂_{k−1}(x)τ, ε) ← HMMtype2(x + F̂_{k−1}(x)τ, ε, k − 1);
        Γ̂_k(x, ε) ← ỹ_M( x, (Γ̂_{k−1}(x + F̂_{k−1}(x)τ, ε) − Γ̂_{k−1}(x, ε)) / τ );
        return Γ̂_k(x, ε);
    end if
end function
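The recursion of Algorithm 4.2 can be sketched on a toy fast–slow pair f(x, y) = y, g(x, y) = x − y (the structure of the first example in Section 6). For simplicity the microscopic solver is replaced here by the exact solve of g(x, y) = εw; the names and parameter values are our own illustrative choices, and we also verify the (2^{k+1} − 1) call count from the remark above.

```python
# Toy fast-slow system with f(x, y) = y and g(x, y) = x - y.  The invariant
# slow manifold is y = c* x with 1 - c* = eps * c*^2, i.e.
#   c* = (-1 + sqrt(1 + 4*eps)) / (2*eps).
import math

eps, tau = 1e-2, 1e-6
calls = [0]                      # counts microscopic-solver invocations

def micro_solver(x, w):
    calls[0] += 1
    return x - eps * w           # exact solution of g(x, y) = eps * w

def f(x, y):
    return y

def gamma_hat(x, k):
    """Recursive approximation of Gamma_k(x, eps), mimicking Algorithm 4.2."""
    if k == 0:
        return micro_solver(x, 0.0)
    g_prev = gamma_hat(x, k - 1)
    F = f(x, g_prev)
    g_shift = gamma_hat(x + F * tau, k - 1)       # for the numerical derivative
    return micro_solver(x, (g_shift - g_prev) / tau)

c_star = (-1.0 + math.sqrt(1.0 + 4.0 * eps)) / (2.0 * eps)
errs = []
for k in range(3):
    calls[0] = 0
    errs.append(abs(gamma_hat(1.0, k) - c_star))
    # Each level makes two recursive calls plus one solve: 2^(k+1) - 1 in total.
    assert calls[0] == 2 ** (k + 1) - 1
# The modeling error drops by roughly a factor eps per level: errs[k] = O(eps^(k+1)).
```

On this linear model the directional derivative is computed exactly, so the remaining error is purely the O(ε^{k+1}) modeling error.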
5. Numerical Analysis.
Now we present some results of numerical analysis of our algorithms. We focus on Algorithm 4.2; some results here are also applicable to Algorithm 4.1.

As mentioned before, there are three main sources of error in HMM: the modeling error, the sampling error and the truncation error of the macroscopic solver. We have proven that the modeling error can be reduced to O(ε^{k+1}). The truncation error depends on the choice of the macroscopic solver. In classical HMM, the sampling error mainly comes from the microscopic solver; numerical analysis of the sampling error in classical HMM can be found in [9]. In our algorithms, numerical derivatives are an additional source of sampling error. In addition, numerical error may accumulate in our recursive algorithms.

We first analyze the sampling error generated by the numerical derivatives directly. In this part, the round-off error and the error from the microscopic solver are disregarded. In other words, we would like to analyze the error between Γ_k(x, ε) and Γ̂^d_k(x, ε), which are defined as follows:

(5.1)  Γ_0(x, ε) = Γ̂^d_0(x, ε) = γ_0(x),
       g(x, Γ_{k+1}(x, ε)) = ε ∇_x Γ_k(x, ε) f(x, Γ_k(x, ε)),  k ∈ ℕ,
       g(x, Γ̂^d_{k+1}(x, ε)) = ε [ Γ̂^d_k(x + f(x, Γ̂^d_k(x, ε)) τ, ε) − Γ̂^d_k(x, ε) ] / τ,  k ∈ ℕ.

The last equality of (5.1) can also be written as

Γ̂^d_{k+1}(x, ε) = g̃_x^{-1}( ε [ Γ̂^d_k(x + f(x, Γ̂^d_k(x, ε)) τ, ε) − Γ̂^d_k(x, ε) ] / τ ).

We have the following result. The proof of Theorem 5.1 can be found in Appendix C.
Theorem 5.1. For any k = 0, 1, . . . , ⌊K/2⌋,

(5.2)  ‖Γ_k(·, ε) − Γ̂^d_k(·, ε)‖_{K−2k, ∞} = O(ετ).
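The O(ετ) bound can be checked in closed form on a small illustrative model of our own choosing (not from the paper): with f(x, y) = y and g(x, y) = x² − y, we have γ_0(x) = x², and both Γ_1 and its forward-difference counterpart Γ̂^d_1 from (5.1) are explicit.

```python
# For f(x, y) = y, g(x, y) = x**2 - y:
#   Gamma_1(x, eps)   = x**2 - 2*eps*x**3                          (exact derivative)
#   Gamma_1^d(x, eps) = x**2 - eps*((x + x**2*tau)**2 - x**2)/tau  (forward difference)
# Expanding the square shows Gamma_1 - Gamma_1^d = eps * tau * x**4 exactly.
def gamma0(x):
    return x * x

def Gamma1(x, eps):
    return x * x - 2.0 * eps * x ** 3

def Gamma1_d(x, eps, tau):
    F = gamma0(x)                  # f(x, gamma_0(x)) equals gamma_0(x) here
    return x * x - eps * (gamma0(x + F * tau) - gamma0(x)) / tau

x = 1.3
for eps in (1e-2, 1e-3):
    for tau in (1e-3, 1e-4):
        err = abs(Gamma1(x, eps) - Gamma1_d(x, eps, tau))
        # The sampling error matches eps * tau * x**4 up to rounding.
        assert abs(err - eps * tau * x ** 4) < 1e-3 * eps * tau
```

The error is proportional to ετ, as Theorem 5.1 predicts for the forward difference formula.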
Remark 5.2. Let e^d_k = ‖Γ_k(·, ε) − Γ̂^d_k(·, ε)‖_{0,∞}. Then

g(x, Γ̂^d_{k+1}(x, ε)) = ε [ Γ̂^d_k(x + f(x, Γ̂^d_k(x, ε)) τ, ε) − Γ̂^d_k(x, ε) ] / τ
    = ε [ Γ_k(x + f(x, Γ_k(x, ε)) τ, ε) − Γ_k(x, ε) ] / τ + O(ε e^d_k / τ)
    = ε ∇_x Γ_k(x, ε) f(x, Γ_k(x, ε)) + O(ε e^d_k / τ + ετ)
    = g(x, Γ_{k+1}(x, ε)) + O(ε e^d_k / τ + ετ).

Therefore, e^d_{k+1} = O(ε e^d_k / τ + ετ). In this way, one can calculate that

e^d_0 = 0,  e^d_1 = O(ετ),  e^d_2 = O(ε² + ετ),  e^d_3 = O(ε³/τ + ε² + ετ),  . . . .

However, this is not a good estimate. For example, when k = 2, the modeling error turns out to be O(ε³), while the above bound for the sampling error is only O(ε² + ετ). Actually, if we have some additional assumptions on the regularity of f and g, then we can improve the bound for this type of error to O(ετ).

Remark 5.3. For any k = 0, 1, . . . , ⌊K/2⌋,

‖Γ_k(·, ε) − Γ̂^d_k(·, ε)‖_{K−2k, ∞} = O(ετ²),

where the forward difference in the definition of Γ̂^d_k is replaced by the central difference formula. The proof is exactly like that of Theorem 5.1.

Due to the round-off error and the error from the microscopic solver, one cannot obtain Γ̂^d_k(x, ε) exactly as in (5.1). This may lead to accumulation of numerical error. In this part, we aim to analyze the error between Γ̂^d_k and Γ̂^r_k, where Γ̂^d_k is defined in (5.1), and Γ̂^r_k satisfies

‖Γ̂^r_0(·, ε) − γ_0‖_{0,∞} = η_0,
‖Γ̂^r_{k+1}(·, ε) − g̃_x^{-1}( ε [ Γ̂^r_k(· + f(·, Γ̂^r_k(·, ε)) τ, ε) − Γ̂^r_k(·, ε) ] / τ )‖_{0,∞} = η_{k+1},  k ∈ ℕ.

Theorem 5.4.
For any k = 0, 1, . . . , ⌊(K+1)/2⌋,

‖Γ̂^r_k(·, ε) − Γ̂^d_k(·, ε)‖_{0,∞} = O( Σ_{j=0}^{k} (ε/τ)^j η_{k−j} ).

Proof. Let e^r_k = ‖Γ̂^r_k(·, ε) − Γ̂^d_k(·, ε)‖_{0,∞}. Notice that ∇_x Γ̂^d_k(x, ε) is bounded when k = 0, 1, . . . , ⌊(K−1)/2⌋, and

|Γ̂^r_{k+1}(x, ε) − Γ̂^d_{k+1}(x, ε)|
  ≤ |Γ̂^r_{k+1}(x, ε) − g̃_x^{-1}( ε [ Γ̂^r_k(x + f(x, Γ̂^r_k(x, ε)) τ, ε) − Γ̂^r_k(x, ε) ] / τ )|
  + |g̃_x^{-1}( ε [ Γ̂^r_k(x + f(x, Γ̂^r_k(x, ε)) τ, ε) − Γ̂^r_k(x, ε) ] / τ ) − g̃_x^{-1}( ε [ Γ̂^d_k(x + f(x, Γ̂^r_k(x, ε)) τ, ε) − Γ̂^d_k(x, ε) ] / τ )|
  + |g̃_x^{-1}( ε [ Γ̂^d_k(x + f(x, Γ̂^r_k(x, ε)) τ, ε) − Γ̂^d_k(x, ε) ] / τ ) − g̃_x^{-1}( ε [ Γ̂^d_k(x + f(x, Γ̂^d_k(x, ε)) τ, ε) − Γ̂^d_k(x, ε) ] / τ )|
  ≤ O( η_{k+1} + (ε/τ) e^r_k ).

Therefore, e^r_{k+1} = O( η_{k+1} + (ε/τ) e^r_k ), and then the proof is completed by induction.

Remark 5.5. For fixed k, let η = max_{j=0,1,...,k} η_j. By Theorem 5.4, if ε = O(τ), then

‖Γ̂^r_k(·, ε) − Γ̂^d_k(·, ε)‖_{0,∞} = O(η);

if τ = o(ε), then

‖Γ̂^r_k(·, ε) − Γ̂^d_k(·, ε)‖_{0,∞} = O( (ε^k/τ^k) η ).

In Subsections 5.1 and 5.2, we have analyzed the numerical error made when calculating Γ_k(x, ε). Now we would like to figure out how it influences the global error. Let

κ = ‖Γ_k(·, ε) − Γ̂_k(·, ε)‖_{0,∞} < +∞.

We would like to compare the solutions of the following two ODEs:

dX_k/dt = f(X_k, Γ_k(X_k, ε)),  and  dX̂_k/dt = f(X̂_k, Γ̂_k(X̂_k, ε)).
By Gronwall's inequality and the boundedness of ∇_x Γ_k(x, ε) for k = 0, 1, . . . , K (see Corollary C.6), one can deduce the following proposition. By Proposition 5.6, the order of the global error between X_k(t) and X̂_k(t) on a finite time horizon is the same as the order of the L^∞-error between Γ_k(·, ε) and Γ̂_k(·, ε), provided X_k(0) = X̂_k(0).

Proposition 5.6. For k = 0, 1, . . . , K, there exists a constant C > 0 such that

|X_k(t) − X̂_k(t)| ≤ e^{Ct} ( |X_k(0) − X̂_k(0)| + κ ).
6. Numerical Experiments.
In this section, we perform numerical simulations on several examples to demonstrate the numerical efficiency of the models and algorithms developed in the previous sections.

The numerical experiments are set up as follows. We conduct the simulations on the time interval [0, T]. We use the coupled solver on the interval [0, T_c], where T_c is determined by the criterion in Remark 4.1. The coupled solver utilizes the common fourth-order explicit Runge-Kutta scheme (RK4) with time step Δt_c. The HMM-type algorithms are used on the interval [T_c, T]. The macroscopic solver utilizes RK4 with time step size Δt. The microscopic solver utilizes the forward Euler scheme (FE) with M steps and time step size δt = αε, where α is a given constant depending on the problem. We compare the ℓ²-norm error of the slow variable, that is, |x(T) − x_N|. In all numerical experiments, the initial values (x_0, y_0) and the terminal time T are specified to ensure the dissipativity of the system on [0, T].

In this part, we use a simple example to test the accuracy and efficiency of our algorithms.
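A minimal sketch of such a forward Euler microscopic solver, assuming the relaxation form dy/ds = (g(x, y) − εw)/ε with x frozen; the function name and this relaxation equation are our own illustrative choices, not the paper's exact formulation.

```python
# M forward Euler steps of size dt = alpha*eps on the fast relaxation
#   dy/ds = (g(x, y) - eps*w) / eps,
# with x frozen, so that y tends to the root of g(x, y) = eps*w.
def micro_solve(g, x, w, y0, eps, alpha, M):
    y, dt = y0, alpha * eps
    for _ in range(M):
        y = y + dt * (g(x, y) - eps * w) / eps
    return y

# Linear test: g(x, y) = x - y has the root y* = x - eps*w, and each Euler
# step shrinks the error by the factor (1 - alpha).
eps, alpha, M = 1e-3, 0.5, 10
x, w, y0 = 2.0, 0.3, 0.0
y_star = x - eps * w
y_M = micro_solve(lambda x, y: x - y, x, w, y0, eps, alpha, M)
# |y_M - y*| = (1 - alpha)**M * |y0 - y*|, a contraction by ~1e-3 here.
```

Because the step size scales with ε, the cost of the microscopic solver is independent of the stiffness.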
Example.
    dx/dt = y,
    dy/dt = (1/ε)(x − y),
    x|_{t=0} = x_0,  y|_{t=0} = y_0,
with x_0 = 1, y_0 = 2, T = 4. The exact solution of this example is

x(t) = [(−λ_2 x_0 + y_0)/(λ_1 − λ_2)] e^{λ_1 t} + [(λ_1 x_0 − y_0)/(λ_1 − λ_2)] e^{λ_2 t},

where λ_1 = (−1 + √(1 + 4ε))/(2ε) and λ_2 = (−1 − √(1 + 4ε))/(2ε).

Parameters: ε = 1.0 × 10^{−}, τ = 1.0 × 10^{−}, M = 1, α = 1.0, Δt_c = 1.0 × 10^{−}, Δt = 5.0 × 10^{−}, β̂ = 1. Algorithm 4.1 and the forward difference formula are adopted. We compare our HMMk solvers with the coupled solver with respect to the numerical error and the total computing time. To ensure fairness when comparing the accuracy of the different HMM-type algorithms, we use the same T_c; in other words, we take k = 2 in Remark 4.1 for all the HMM-type algorithms. The numerical results are reported in Table 1.

Table 1
Numerical results for Example .

    solver    error        time (s)   T_c
    coupled   2.1832e-09   5.44       4.0e+00
    HMM0      2.1836e-03   0.02       4.0e-04
    HMM1      4.6017e-08   0.04       4.0e-04
    HMM2      2.3441e-09   0.09       4.0e-04

For one thing, our high order numerical homogenization method reduces the numerical error compared with the classical HMM0 solver. For another, the HMM2 solver uses far less time than the coupled solver to reach roughly the same error. In this numerical example, the time cost of the HMM-type algorithms is mainly due to the macroscopic solver, since here it is easy to compute γ_0(x) by the microscopic solver. This numerical experiment exhibits the accuracy and efficiency of our method.

In the previous section, we have already proven that the modeling error of the HMMk method turns out to be O(ε^{k+1}). In this part, we would like to test and verify this modeling error estimate by numerical experiments. Let us consider the following three numerical examples. For each numerical example, we compare the numerical error between the numerical solutions and the reference solution.

Example.
    dx/dt = −x + (x + c)y,
    dy/dt = (1/ε)(x − (x + 1)y),
    x|_{t=0} = x_0,  y|_{t=0} = y_0,
with x_0 = 1, y_0 = 0, c = 0., T = 1.
Parameters: τ = 1.0 × 10^{−}, M = 10, α = 0.5, Δt_c = 1.0 × 10^{−}, β̂ = 1. The reference solution is given by the coupled solver with time step size Δt_c = 1.0 × 10^{−}. See Figure 1(a) for fixed Δt = 1.0 × 10^{−} and different ε, and Figure 1(b) for fixed ε = 1.0 × 10^{−} and different Δt.

Example.
    dx^{(1)}/dt = −y + a sin(2π x^{(2)}),
    dx^{(2)}/dt = b,
    dy/dt = (1/ε)( y + x^{(1)} − y ),
    x|_{t=0} = (x^{(1)}, x^{(2)})|_{t=0} = x_0,  y|_{t=0} = y_0,
with x_0 = (3, ), y_0 = 1, a = 2, b = 1, T = 1.
Parameters: τ = 1.0 × 10^{−}, M = 25, α = 0.1, Δt_c = 1.0 × 10^{−}, β̂ = 0.. The reference solution is given by the coupled solver with time step size Δt_c. See Figure 2(a) for fixed Δt = 1.0 × 10^{−} and different ε, and Figure 2(b) for fixed ε = 1.0 × 10^{−} and different Δt.

Example.
    dx^{(1)}/dt = −d x^{(2)},
    dx^{(2)}/dt = −a y + x^{(1)} + b x^{(2)},
    dy/dt = (1/ε)( x^{(2)} − c_1 y − c_2 y² − c_3 y³ ),
    x|_{t=0} = (x^{(1)}, x^{(2)})|_{t=0} = x_0,  y|_{t=0} = y_0,
with x_0 = (1, ), y_0 = 1, a = 0., b = 0., c_1 = 7, c_2 = 15, c_3 = 20, d = 1, T = 1.
Parameters: τ = 1.0 × 10^{−}, M = 10, α = 0.1, Δt_c = 1.0 × 10^{−}, β̂ = 10. Algorithm 4.2 and the central difference formula are adopted. The reference solution is given by the coupled solver with time step size Δt_c. See Figure 3(a) for fixed Δt = 1.0 × 10^{−} and different ε, and Figure 3(b) for fixed ε = 1.0 × 10^{−} and different Δt.

The results in Figures 1(a), 2(a), and 3(a) show the order of the numerical error. By numerical investigation, the theoretical result that the modeling error of HMMk is of order O(ε^{k+1}) is verified. The results in Figures 1(b), 2(b), and 3(b) indicate whether the modeling error or the truncation error of the macroscopic solver dominates the numerical error.

It has been proven that the sampling error generated by numerical derivatives is O(ετ) if the forward difference formula is adopted. As seen in Remark 5.3, when the central difference formula is adopted, this part of the error turns out to be O(ετ²). Let us consider the following numerical example.

Fig. 1. Numerical results for Example : (a) different ε and fixed Δt; (b) different Δt and fixed ε (curves HMM0–HMM3).

Fig. 2. Numerical results for Example : (a) different ε and fixed Δt; (b) different Δt and fixed ε (curves HMM0–HMM2).

Example.
    dx/dt = y,
    dy/dt = −(1/ε)( (x − )y + x ),
    x|_{t=0} = x_0,  y|_{t=0} = y_0,
with x_0 = 4, y_0 = 2, T = 5.
Parameters: M = 20, α = 0.1, Δt_c = 1.0 × 10^{−}, Δt = 2.0 × 10^{−}, β̂ = 3. Algorithm 4.2 is adopted. The reference solution is given by the coupled solver with time step size Δt_c. We test the numerical error for different τ and different ε. See Figure 4(a) for the forward difference formula and Figure 4(b) for the central difference formula.

From the numerical results, one can see that, when τ is relatively large, the numerical error is approximately O(ετ) for the forward difference formula and O(ετ²) for the central difference formula. Therefore, the theoretical analysis in Subsection 5.1 is verified.
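The two difference formulas can be compared on a scalar illustration of our own (not the example above): with γ_0(x) = x³ and f(x, γ_0(x)) = x³, the exact directional derivative is 3x⁵, and the observed convergence orders in τ are about 1 for the forward formula and 2 for the central formula.

```python
import math

# Forward and central difference quotients for the directional derivative
# (d/dx gamma_0)(x) * f(x, gamma_0(x)) with gamma_0(x) = x**3, f = y.
def fwd(x, tau):
    a = x ** 3
    return ((x + a * tau) ** 3 - x ** 3) / tau

def ctr(x, tau):
    a = x ** 3
    return ((x + a * tau) ** 3 - (x - a * tau) ** 3) / (2.0 * tau)

x = 1.1
exact = 3.0 * x ** 5
orders = []
for diff in (fwd, ctr):
    e1 = abs(diff(x, 1e-3) - exact)
    e2 = abs(diff(x, 1e-4) - exact)
    orders.append(math.log10(e1 / e2))   # ~1 for forward, ~2 for central
```

Multiplying these derivative errors by ε reproduces the O(ετ) and O(ετ²) sampling-error scalings observed in Figure 4.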
7. Conclusions.
We proposed a high order numerical homogenization method for dissipative ordinary differential equations. We developed the correction models based on asymptotic approximations and a novel iterative formula. The corresponding numerical algorithms were designed in the framework of the heterogeneous multiscale method. We provided theoretical analysis of our algorithms. By numerical investigation, the error estimates were verified and the efficiency of our methods was exhibited.
Acknowledgements.
Zeyu Jin is supported by the Elite Undergraduate Training Program of the School of Mathematical Sciences, Peking University. Ruo Li is partially supported by the National Key R&D Program of China (No. 2020YFA0712000) and the National Natural Science Foundation of China (No. 11971041).
Appendix A. Well-posedness of (3.5) for Small ε.

Theorem A.1. Under the assumptions in Example , for each M > ‖A⁻¹A‖, there exists δ ∈ (0, ε_0] such that for each ε ∈ (0, δ], there exists a unique solution (C*, d*) to (3.5) with ‖C*‖ ≤ M. Furthermore, C* + A⁻¹A = O(ε) and d* + A⁻¹b = O(ε).

Proof. First, we consider equation (3.5a). It is easy to see that the solution of (3.5a) is the fixed point of the mapping

T : C ↦ −A⁻¹A + εA⁻¹CA + εA⁻¹CAC.

We define the set E = { C ∈ R^{n_y × n_x} : ‖C‖ ≤ M }, where M > ‖A⁻¹A‖ is arbitrary. Let M_1 = ‖A⁻¹‖(‖A‖ + ‖A‖). When C ∈ E, we have

‖T C‖ ≤ ‖A⁻¹A‖ + εM_1‖C‖ + εM_1‖C‖² ≤ ‖A⁻¹A‖ + εM_1(M + M²).

We take δ_1 = min{ ε_0, (M − ‖A⁻¹A‖)/(M_1(M + M²)) } > 0. For each ε ∈ (0, δ_1], we have T : E → E. For each C, C̃ ∈ E,

‖T C − T C̃‖ = ‖εA⁻¹(C − C̃)A + εA⁻¹(C − C̃)AC + εA⁻¹C̃A(C − C̃)‖ ≤ εM_1(1 + 2M)‖C − C̃‖.

We take δ_2 = min{ δ_1, 1/(2M_1(1 + 2M)) } > 0. Then for each ε ∈ (0, δ_2], T is a contraction mapping on E. Therefore, there exists a unique solution C* to (3.5a) in E. We take δ = min{ δ_2, 1/(2M_1 M) }. For each ε ∈ (0, δ], ‖εA⁻¹C*A‖ ≤ εM_1 M ≤ 1/2 < 1. Thus, A − εC*A is invertible. With C* fixed, there exists a unique solution d* to (3.5b) in R^{n_y}, and d* is uniformly bounded in ε ∈ (0, δ]. Furthermore, we notice that

C* + A⁻¹A = ε( A⁻¹C*A + A⁻¹C*AC* ) = O(ε),
d* + A⁻¹b = ε( A⁻¹C*A d* + A⁻¹C*b ) = O(ε).

Then the proof is completed.
Fig. 3. Numerical results for Example : (a) different ε and fixed Δt; (b) different Δt and fixed ε (curves HMM0–HMM2).

Fig. 4. Numerical results for Example : (a) forward difference formula; (b) central difference formula.
Proof of Theorem . By Remark 2.3, one can obtain that | Γ ( x, ε ) − γ ( x ) − εγ ( x ) | ≤ β | g ( x, Γ ( x, ε )) − g ( x, γ ( x ) + εγ ( x )) | = 1 β | ε ∇ γ ( x ) f ( x, γ ( x )) − g ( x, γ ( x ) + εγ ( x )) | . By (3.9b) and Taylor’s expansion of g ( x, y ) at ( x, γ ( x )), ε ∇ γ ( x ) f ( x, γ ( x )) − g ( x, γ ( x ) + εγ ( x ))= εG y ( x ) γ ( x ) − (cid:0) g ( x, γ ( x )) + εG y ( x ) γ ( x ) + O ( ε ) (cid:1) = O ( ε ) , where O ( ε ) can be controlled uniformly thanks to Assumption 2.1. In other words, there exists a constant C > | ε ∇ γ ( x ) f ( x, γ ( x )) − g ( x, γ ( x ) + εγ ( x )) | ≤ C ε . Therefore, | Γ ( x, ε ) − γ ( x ) − εγ ( x ) | ≤ C β ε , which completes the proof of (3.14a).By the expression of γ ( x ) in (3.9b), one gets that(B.1) g ( x, Γ ( x, ε )) = εG y ( x ) γ ( x ) . For the sake of clarity, we consider the i -th component of (B.1), that is,(B.2) g i ( x, Γ ( x, ε )) = ε n y (cid:88) k =1 ∂g i ∂y k ( x, γ ( x )) γ k ( x ) . Taking the derivative of (B.2) with respect to x j , we obtain that(B.3) ∂g i ∂x j ( x, Γ ( x, ε )) + n y (cid:88) k =1 ∂g i ∂y k ( x, Γ ( x, ε )) ∂ Γ k ∂x j ( x, ε )= ε n y (cid:88) k =1 (cid:18) ∂ g i ∂y k ∂x j ( x, γ ( x )) + n y (cid:88) (cid:96) =1 ∂ g i ∂y k ∂y (cid:96) ( x, γ ( x )) ∂γ (cid:96) ∂x j ( x ) (cid:19) γ k ( x )+ ε n y (cid:88) k =1 ∂g i ∂y k ( x, γ ( x )) ∂γ k ∂x j ( x ) . Actually (B.3) yields that ∇ x Γ ( x, ε ) is uniformly bounded in ε ∈ (0 , ε ], since ∇ x Γ ( x, ε ) = G y ( x ) − A, where A = A ( x, ε ) = [ A i,j ] n y ,n x i =1 ,j =1 , A i,j is defined as A i,j = ε n y (cid:88) k =1 (cid:18) ∂ g i ∂y k ∂x j ( x, γ ( x )) + n y (cid:88) (cid:96) =1 ∂ g i ∂y k ∂y (cid:96) ( x, γ ( x )) ∂γ (cid:96) ∂x j ( x ) (cid:19) γ k ( x )+ ε n y (cid:88) k =1 ∂g i ∂y k ( x, γ ( x )) ∂γ k ∂x j ( x ) − ∂g i ∂x j ( x, Γ ( x, ε )) , ZEYU JIN AND RUO LI and it is obvious that A ( x, ε ) is uniformly bounded in ε ∈ (0 , ε ] due to Assumption 2.1. 
By (3.14a) andTaylor’s expansion of ∂g i ∂x j and ∂g i ∂y k at ( x, γ ( x )), one obtains that(B.4) ∂g i ∂x j ( x, Γ ( x, ε )) = ∂g i ∂x j ( x, γ ( x )) + ε n y (cid:88) k =1 ∂ g i ∂x j ∂y k ( x, γ ( x )) γ k ( x ) + O ( ε ) , and(B.5) ∂g i ∂y k ( x, Γ ( x, ε )) = ∂g i ∂y k ( x, γ ( x )) + ε n y (cid:88) (cid:96) =1 ∂ g i ∂y k ∂y (cid:96) ( x, γ ( x )) γ (cid:96) ( x ) + O ( ε ) , where both O ( ε ) can be controlled uniformly due to Assumption 2.1. In addition, the ( i, j )-component of(3.10) can be written as(B.6) ∂g i ∂x j ( x, γ ( x )) = − n y (cid:88) k =1 ∂g i ∂y k ( x, γ ( x )) ∂γ k ∂x j ( x ) . By (B.3), (B.4), (B.5), (B.6) and the uniform boundedness of ∇ x Γ ( x, ε ), one can obtain that(B.7) n y (cid:88) k =1 (cid:18) ∂g i ∂y k ( x, γ ( x )) + εB i,k (cid:19) · (cid:18) ∂ Γ k ∂x j ( x, ε ) − ∂γ k ∂x j ( x ) − ε ∂γ k ∂x j ( x ) (cid:19) = O ( ε ) , where O ( ε ) can be controlled uniformly, B = B ( x ) = [ B i,k ] n y i,k =1 defined as B i,k = n y (cid:88) (cid:96) =1 ∂ g i ∂y k ∂y (cid:96) ( x, γ ( x )) γ (cid:96) ( x ) . (B.7) can be rewritten as(B.8) ( G y ( x ) + εB )( ∇ x Γ ( x, ε ) − ∇ γ ( x ) − ε ∇ γ ( x )) = O ( ε ) . Notice that B ( x ) is uniformly bounded thanks to Assumption 2.1. Therefore, as long as ε is sufficientlysmall, G y ( x ) + εB is invertible and ( G y ( x ) + εB ) − is uniformly bounded. Therefore, the proof of (3.14b)is completed by (B.8).By Remark 2.3, one obtains that | Γ ( x, ε ) − γ ( x ) − εγ ( x ) − ε γ ( x ) |≤ β | g ( x, Γ ( x, ε )) − g ( x, γ ( x ) + εγ ( x ) + ε γ ( x )) |≤ β | ε ∇ x Γ ( x, ε ) f ( x, Γ ( x, ε )) − g ( x, γ ( x ) + εγ ( x ) + ε γ ( x )) | . By (3.14a), (3.14b) and Taylor’s expansion of f ( x, y ) at ( x, γ ( x )), one gets that(B.9) ε ∇ x Γ ( x, ε ) f ( x, Γ ( x, ε ))= ε (cid:0) ∇ γ ( x ) + ε ∇ γ ( x ) (cid:1)(cid:0) F ( x ) + εF y ( x ) γ ( x ) (cid:1) + O ( ε )= ε ∇ γ ( x ) F ( x ) + ε (cid:0) ∇ γ ( x ) F y ( x ) γ ( x ) + ∇ γ ( x ) F ( x ) (cid:1) + O ( ε ) . 
By Taylor’s expansion of g ( x, y ) at ( x, γ ( x )), one gets that(B.10) g ( x, γ ( x ) + εγ ( x ) + ε γ ( x ))= εG y ( x ) γ ( x ) + ε (cid:32) G y ( x ) γ ( x ) + 12 (cid:104) n y (cid:88) j,k =1 G i,j,kyy γ j ( x ) γ k ( x ) (cid:105) n y i =1 (cid:33) + O ( ε ) . IGH ORDER NUMERICAL HOMOGENIZATION | ε ∇ x Γ ( x, ε ) f ( x, Γ ( x, ε )) − g ( x, γ ( x ) + εγ ( x ) + ε γ ( x )) | = O ( ε ) , where O ( ε ) can be controlled uniformly thanks to Assumption 2.1. In other words, there exists a constant C > | ε ∇ x Γ ( x, ε ) f ( x, Γ ( x, ε )) − g ( x, γ ( x ) + εγ ( x ) + ε γ ( x )) | ≤ C ε . Therefore, | Γ ( x, ε ) − γ ( x ) − εγ ( x ) − ε γ ( x ) | ≤ C β ε . Thus, the proof for (3.14c) is completed.
Appendix C. Proofs of Theorems 3.9, 3.10, and 5.1.

C.1. Several lemmas.
In the proofs of these three theorems, we have to calculate high-order gradientsof vector-valued functions. Here we present several facts about high-order derivatives.
Lemma
C.1.
Suppose that the functions A , ξ , η in this lemma are sufficiently smooth. We have thefollowing conclusions. Assume that k ≥ and ξ, η ∈ W k, ∞ ( R n , R ) , then | ξη | k, ∞ (cid:46) k (cid:88) j =0 | ξ | j, ∞ | η | k − j, ∞ . where the bound is dependent on k , and independent of ξ and η . Assume that k ≥ . Let ξ : R n → R and η : R m → R n be sufficiently smooth functions. If ∇ ξ ∈ W k, ∞ and ∇ η ∈ W k, ∞ , then | ξ ◦ η | k +1 , ∞ (cid:46) k +1 (cid:88) j =1 | ξ | j, ∞ (cid:107)∇ η (cid:107) jk +1 − j, ∞ , where the bound is dependent on k , and independent of ξ and η . In particular, if ξ ∈ W k +1 , ∞ and ∇ η ∈ W k, ∞ , then ξ ◦ η ∈ W k +1 , ∞ . Assume that k ≥ . Let A : R m → R n × n is a matrix-valued function. If ∇ A ∈ W k, ∞ and A ( · ) − ∈ L ∞ , then | A ( · ) − | k +1 , ∞ (cid:46) k +1 (cid:88) j =1 (cid:13)(cid:13) A ( · ) − (cid:13)(cid:13) j +10 , ∞ (cid:107)∇ A (cid:107) jk +1 − j, ∞ , where the bound is dependent on k , and independent of A .Proof. Item 1 is a direct corollary of Leibniz rule.As for Item 2, first we notice that ∇ x (( ξ ◦ η )( x )) = (( ∇ ξ ) ◦ η ) ( x ) · ∇ η ( x ) , which yields directly that the conclusion holds for k = 0. 
Now we assume that the conclusion holds for allnon-negative integers that are less than k , and then one can obtain by Item 1 and the induction hypothesis4 ZEYU JIN AND RUO LI that | ξ ◦ η | k +2 , ∞ = (cid:12)(cid:12)(cid:0) ( ∇ ξ ) ◦ η (cid:1) · ∇ η (cid:12)(cid:12) k +1 , ∞ (cid:46) k +1 (cid:88) i =0 | ( ∇ ξ ) ◦ η | i, ∞ |∇ η | k +1 − i, ∞ (cid:46) | ξ | , ∞ |∇ η | k +1 , ∞ + k +1 (cid:88) i =1 i (cid:88) j =1 | ξ | j +1 , ∞ (cid:107)∇ η (cid:107) ji − j, ∞ |∇ η | k +1 − i, ∞ ≤ | ξ | , ∞ |∇ η | k +1 , ∞ + k +1 (cid:88) j =1 k +1 (cid:88) i = j | ξ | j +1 , ∞ (cid:107)∇ η (cid:107) j +1 k +1 − j, ∞ (cid:46) k +2 (cid:88) j =1 | ξ | j, ∞ (cid:107)∇ η (cid:107) jk +2 − j, ∞ , which completes the proof.As for Item 3, one can notice first that A ( x ) − is continuous, which yields that ∂A ( x ) − ∂x i = − A ( x ) − ∂A ( x ) ∂x i A ( x ) − , ∀ i = 1 , , . . . , m. Then the conclusion can be proved similarly to Item 2.
Lemma
C.2.
Suppose that
Φ : R n × [ − δ, δ ] → R is sufficiently smooth. If there exists k, (cid:96) ∈ N such thatthe function Φ( x, s ) satisfies ∂ i Φ ∂s i ( x,
0) = 0 , ∀ i = 0 , , . . . , (cid:96), and sup s ∈ [ − δ,δ ] (cid:13)(cid:13)(cid:13)(cid:13) ∂ (cid:96) +1 Φ ∂s (cid:96) +1 ( · , s ) (cid:13)(cid:13)(cid:13)(cid:13) k, ∞ < + ∞ , then (cid:107) Φ( · , h ) (cid:107) k, ∞ = O ( | h | (cid:96) +1 ) , as h → . Proof.
By Taylor’s expansion, for each j = 0 , , . . . , k , there exists θ ∈ [0 ,
1] such that |∇ jx Φ( x, h ) | = 1( (cid:96) + 1)! (cid:12)(cid:12)(cid:12)(cid:12) ∇ jx ∂ (cid:96) +1 Φ ∂s (cid:96) +1 ( x, θh ) h (cid:96) +1 (cid:12)(cid:12)(cid:12)(cid:12) (cid:46) sup s ∈ [ − δ,δ ] (cid:13)(cid:13)(cid:13)(cid:13) ∂ (cid:96) +1 Φ ∂s (cid:96) +1 ( · , s ) (cid:13)(cid:13)(cid:13)(cid:13) k, ∞ | h | (cid:96) +1 , which completes the proof. Corollary
C.3.
Suppose that ξ : R n → R and η : R n → R n are sufficiently smooth, and η ∈ W k, ∞ .We have the following conclusions. If ∇ ξ ∈ W k, ∞ , then (cid:107) ξ ( · + η ( · ) h ) − ξ ( · ) − h ∇ ξ ( · ) η ( · ) (cid:107) k, ∞ = O ( | h | ) , as h → . If ∇ ξ ∈ W k, ∞ , then (cid:107) ξ ( · + η ( · ) h ) − ξ ( · − η ( · ) h ) − h ∇ ξ ( · ) η ( · ) (cid:107) k, ∞ = O ( | h | ) , as h → . Proof.
Let Φ( x, s ) = ξ ( x + η ( x ) s ) − ξ ( x ) − s ∇ ξ ( x ) η ( x ). By direct calculation, one can obtain that ∂ Φ ∂s ( x, s ) = ∇ ξ (cid:0) x + η ( x ) s (cid:1) η ( x ) − ∇ ξ ( x ) η ( x ) ,∂ Φ ∂s ( x, s ) = n (cid:88) i ,i =1 ∂ ξ∂x i ∂x i (cid:0) x + η ( x ) s (cid:1) η i ( x ) η i ( x ) ,∂ Φ ∂s ( x, s ) = n (cid:88) i ,i ,i =1 ∂ ξ∂x i ∂x i ∂x i (cid:0) x + η ( x ) s (cid:1) η i ( x ) η i ( x ) η i ( x ) . IGH ORDER NUMERICAL HOMOGENIZATION x,
0) = ∂ Φ ∂s ( x,
0) = 0. By Lemma C.1, Φ( x, s ) satisfies the conditions of Lemma C.2 when (cid:96) = 1, which yields Item 1. Similarly, Φ( x, s ) − Φ( x, − s ) satisfies Lemma C.2 when (cid:96) = 2, which completesthe proof. Lemma
C.4. If z , z ∈ C k ( R n x , R n y ) for some k = 1 , , . . . , K , then there exist unique y , y ∈C k ( R n x , R n y ) satisfying that g ( x, y i ( x )) = z i ( x ) for i = 0 , . In addition, if z i ∈ W k, ∞ for i = 0 , ,then (cid:107) y − y (cid:107) k, ∞ (cid:46) (cid:107) z − z (cid:107) k, ∞ , where the bound is dependent on β , k and (cid:107) z i (cid:107) k, ∞ where i = 0 , .Proof. By Remark 2.3 and implicit function theorem for g ( x, y ) − sz ( x ) − (1 − s ) z ( x ) = 0 with respectto y , there exists a unique function y = y ( x, s ) ∈ C k ( R n x × [0 , , R n y ) such that(C.1) g ( x, y ( x, s )) = sz ( x ) + (1 − s ) z ( x ) . Now we assume that z i ∈ W k, ∞ for i = 0 ,
1. Let y i ( x ) = y ( x, i ) for i = 0 ,
1. For each j = 0 , , . . . , k ,there exists θ ∈ [0 ,
1] such that(C.2) |∇ j y ( x ) − ∇ j y ( x ) | ≤ (cid:12)(cid:12)(cid:12)(cid:12) ∂∂s ∇ jx y ( x, θ ) (cid:12)(cid:12)(cid:12)(cid:12) . By (C.1), one obtains that(C.3) ∂y∂s ( x, s ) = (cid:18) ∂g∂y ( x, y ( x, s )) (cid:19) − ( z ( x ) − z ( x ))and(C.4) ∇ x y ( x, s ) = (cid:18) ∂g∂y ( x, y ( x, s )) (cid:19) − (cid:18) s ∇ z ( x ) + (1 − s ) ∇ z ( x ) − ∂g∂x ( x, y ( x, s )) (cid:19) . We assert that sup s ∈ [0 , (cid:107)∇ x y ( · , s ) (cid:107) k − , ∞ is bounded, where the bound is dependent on β , k and (cid:107) z i (cid:107) k, ∞ .By (C.4), sup s ∈ [0 , (cid:107)∇ x y ( · , s ) (cid:107) , ∞ is bounded. Now assume that sup s ∈ [0 , (cid:107)∇ x y ( · , s ) (cid:107) j, ∞ is bounded for some j = 0 , , . . . , k −
2. Lemma C.1 yields that sup s ∈ [0 , (cid:13)(cid:13)(cid:13)(cid:13) ∂g∂ ( x, y ) ( · , y ( · , s )) (cid:13)(cid:13)(cid:13)(cid:13) j +1 , ∞ and then sup s ∈ [0 , (cid:107)∇ x y ( · , s ) (cid:107) j +1 , ∞ are bounded, which yields the assertion.By the above assertion and Lemma C.1, (cid:18) ∂g∂y ( x, y ( x, s )) (cid:19) − is bounded in W k, ∞ , where the bounddepends on β , k and (cid:107) z i (cid:107) k, ∞ . Therefore, by (C.3), one obtains thatsup s ∈ [0 , (cid:13)(cid:13)(cid:13)(cid:13) ∂y∂s ( · , s ) (cid:13)(cid:13)(cid:13)(cid:13) k, ∞ (cid:46) (cid:107) z − z (cid:107) k, ∞ . Together with (C.2), the proof is completed.
Lemma
C.5.
Assume that y i : R n x → R n y satisfies that y i ∈ C k +1 and ∇ y i ∈ W k, ∞ for each i = 0 , and some k = 0 , , . . . , K − . If y − y ∈ L ∞ , then (cid:107)∇ y ( · ) f ( · , y ( · )) − ∇ y ( · ) f ( · , y ( · )) (cid:107) k, ∞ (cid:46) (cid:107) y − y (cid:107) k +1 , ∞ , where the bound depends on f , k and (cid:107)∇ y i (cid:107) k, ∞ with i = 0 , .Proof. Let Φ( x, s ) = sy ( x ) + (1 − s ) y ( x ) and Ψ( x, s ) = ∇ x Φ( x, s ) f ( x, Φ( x, s )). For each j = 0 , , . . . , k ,there exists θ ∈ [0 ,
1] such that |∇ jx Ψ( x, − ∇ jx Ψ( x, | ≤ (cid:12)(cid:12)(cid:12)(cid:12) ∂∂s ∇ jx Ψ( x, θ ) (cid:12)(cid:12)(cid:12)(cid:12) . ZEYU JIN AND RUO LI
Actually,(C.5) ∂ Ψ ∂s ( x, s ) = ∇ x (cid:18) ∂ Φ ∂s ( x, s ) (cid:19) f ( x, Φ( x, s )) + ∇ x Φ( x, s ) ∂f∂y ( x, Φ( x, s )) ∂ Φ ∂s ( x, s )= ( ∇ y ( x ) − ∇ y ( x )) f ( x, Φ( x, s )) + ∇ x Φ( x, s ) ∂f∂y ( x, Φ( x, s ))( y ( x ) − y ( x )) . By Lemma C.1, one obtains that sup s ∈ [0 , (cid:107) f ( · , Φ( · , s )) (cid:107) k, ∞ and sup s ∈ [0 , (cid:13)(cid:13)(cid:13)(cid:13) ∇ x Φ( · , s ) ∂f∂y ( · , Φ( · , s )) (cid:13)(cid:13)(cid:13)(cid:13) k, ∞ are bounded,and the bound depends on f , k and (cid:107)∇ y i (cid:107) k, ∞ with i = 0 ,
1. Therefore,sup s ∈ [0 , (cid:13)(cid:13)(cid:13)(cid:13) ∂ Ψ ∂s ( · , s ) (cid:13)(cid:13)(cid:13)(cid:13) k, ∞ (cid:46) (cid:107) y − y (cid:107) k +1 , ∞ , which completes the proof. Corollary
C.6.
For k = 0 , , . . . , K , (C.6) (cid:107) Γ k +1 ( · , ε ) − Γ k ( · , ε ) (cid:107) K − k, ∞ = O ( ε k +1 ) . Proof.
By (3.10) and Lemma C.1, one can prove by induction that (cid:107)∇ x Γ ( · , ε ) (cid:107) K, ∞ is bounded.Therefore, (cid:107) ε ∇ x Γ ( · , ε ) f ( · , Γ ( · , ε )) (cid:107) K, ∞ = O ( ε ). By Lemma C.4, (cid:107) Γ ( · , ε ) − Γ ( · , ε ) (cid:107) K, ∞ = O ( ε ). Nowassume that (C.6) holds for 0 , , . . . , k where k ≤ K −
1. By the induction hypothesis, (cid:107)∇ x Γ j ( · , ε ) (cid:107) K − k − , ∞ are bounded for j = 0 , , . . . , k + 1. By Lemma C.5, one obtains that (cid:107)∇ x Γ k +1 ( · , ε ) f ( · , Γ k +1 ( · , ε )) − ∇ x Γ k ( · , ε ) f ( · , Γ k ( · , ε )) (cid:107) K − k − , ∞ = O ( ε k +1 ) . By Lemma C.4, the proof is completed.
C.2. Proof of Theorem 3.9.
Proof of Theorem 3.9. We prove this theorem by induction. When $k = 0$, there exists a constant $C > 0$ such that
\[
\frac{\mathrm{d} |z_0|^2}{\mathrm{d} t}
= \frac{2}{\varepsilon} \langle z_0, g(x,y) - g(x, \gamma(x)) \rangle - 2 \langle z_0, \nabla \gamma(x) f(x,y) \rangle
\le -\frac{2\beta}{\varepsilon} |z_0|^2 + C |z_0|
\le -\frac{A}{\varepsilon} |z_0|^2 + \frac{C^2}{4(2\beta - A)} \varepsilon.
\]
Here the last inequality is due to the Cauchy inequality. By Gronwall's inequality,
\[
|z_0(t)|^2
\le \frac{C^2}{4A(2\beta - A)} \varepsilon^2 \left( 1 - e^{-\frac{A}{\varepsilon} t} \right) + e^{-\frac{A}{\varepsilon} t} |z_0(0)|^2
\le \frac{C^2}{4A(2\beta - A)} \varepsilon^2 + e^{-\frac{A}{\varepsilon} t} |z_0(0)|^2.
\]
Now assume that the theorem holds for $k - 1$, where $k = 1, 2, \dots, K$. One obtains that
(C.7)
\[
\frac{\mathrm{d} |z_k|^2}{\mathrm{d} t}
= \frac{2}{\varepsilon} \langle z_k, g(x,y) - g(x, \Gamma_k(x,\varepsilon)) \rangle + 2 \langle z_k, \nabla_x \Gamma_{k-1}(x,\varepsilon) f(x, \Gamma_{k-1}(x,\varepsilon)) - \nabla_x \Gamma_k(x,\varepsilon) f(x,y) \rangle
\le -\frac{2\beta}{\varepsilon} |z_k|^2 + 2 |z_k| \cdot \left| \nabla_x \Gamma_{k-1}(x,\varepsilon) f(x, \Gamma_{k-1}(x,\varepsilon)) - \nabla_x \Gamma_k(x,\varepsilon) f(x,y) \right|.
\]
Corollary C.6 and Assumption 2.1 yield that there exists a constant $C > 0$ such that
(C.8)
\[
\begin{aligned}
&\left| \nabla_x \Gamma_{k-1}(x,\varepsilon) f(x, \Gamma_{k-1}(x,\varepsilon)) - \nabla_x \Gamma_k(x,\varepsilon) f(x,y) \right| \\
&\quad \le \left| \nabla_x \Gamma_{k-1}(x,\varepsilon) f(x, \Gamma_{k-1}(x,\varepsilon)) - \nabla_x \Gamma_k(x,\varepsilon) f(x, \Gamma_{k-1}(x,\varepsilon)) \right| + \left| \nabla_x \Gamma_k(x,\varepsilon) f(x, \Gamma_{k-1}(x,\varepsilon)) - \nabla_x \Gamma_k(x,\varepsilon) f(x,y) \right| \\
&\quad \le C \varepsilon^k + C |z_{k-1}|.
\end{aligned}
\]
Take $\tilde{A} \in (A, 2\beta)$. Combining (C.7) and (C.8), one gets by the Cauchy inequality that
(C.9)
\[
\frac{\mathrm{d} |z_k|^2}{\mathrm{d} t}
\le -\frac{2\beta}{\varepsilon} |z_k|^2 + C \varepsilon^k |z_k| + C |z_{k-1}| \cdot |z_k|
\le -\frac{\tilde{A}}{\varepsilon} |z_k|^2 + \frac{C^2}{2(2\beta - \tilde{A})} \varepsilon^{2k+1} + \frac{C^2}{2(2\beta - \tilde{A})} \varepsilon |z_{k-1}|^2.
\]
The induction hypothesis says that there exists $C_{k-1} > 0$ such that
(C.10)
\[
|z_{k-1}(t)|^2 \le C_{k-1}^2 \left( \varepsilon^{2k} + e^{-\frac{\tilde{A}}{\varepsilon} t} |z_{k-1}(0)|^2 \right).
\]
By (C.9), (C.10) and Gronwall's inequality, one gets that
(C.11)
\[
\begin{aligned}
|z_k(t)|^2
&\le \frac{C^2 (1 + C_{k-1}^2)}{2 \tilde{A} (2\beta - \tilde{A})} \left( 1 - e^{-\frac{\tilde{A}}{\varepsilon} t} \right) \varepsilon^{2k+2} + e^{-\frac{\tilde{A}}{\varepsilon} t} |z_k(0)|^2 + \frac{C^2 C_{k-1}^2}{2(2\beta - \tilde{A})}\, t\, e^{-\frac{\tilde{A}}{\varepsilon} t}\, \varepsilon\, |z_{k-1}(0)|^2 \\
&\le \frac{C^2 (1 + C_{k-1}^2)}{2 \tilde{A} (2\beta - \tilde{A})} \varepsilon^{2k+2} + e^{-\frac{A}{\varepsilon} t} |z_k(0)|^2 + \frac{C^2 C_{k-1}^2}{2 e (\tilde{A} - A)(2\beta - \tilde{A})}\, e^{-\frac{A}{\varepsilon} t}\, \varepsilon^2\, |z_{k-1}(0)|^2.
\end{aligned}
\]
By Corollary C.6, one gets that
(C.12)
\[
|z_{k-1}(0)|^2 \le 2 |z_k(0)|^2 + 2 |z_k(0) - z_{k-1}(0)|^2 \le 2 |z_k(0)|^2 + C \varepsilon^{2k}.
\]
By (C.11) and (C.12), the theorem holds for $k$. The theorem is thus proved.

C.3. Proof of Theorem 3.10.
Proof of Theorem 3.10. Since $\nabla_x f(x,y)$, $\nabla_y f(x,y)$ and $\nabla_x \Gamma_k(x,\varepsilon)$ are all bounded, there exists a constant $C > 0$ such that
\[
\left| \frac{\mathrm{d}(x - X_k)}{\mathrm{d} t} \right|
= \left| f(x,y) - f(X_k, \Gamma_k(X_k,\varepsilon)) \right|
\le \left| f(x, z_k + \Gamma_k(x,\varepsilon)) - f(x, \Gamma_k(X_k,\varepsilon)) \right| + \left| f(x, \Gamma_k(X_k,\varepsilon)) - f(X_k, \Gamma_k(X_k,\varepsilon)) \right|
\le C |z_k| + C |x - X_k|.
\]
Then one obtains that
\[
\frac{1}{2} \frac{\mathrm{d} |x - X_k|^2}{\mathrm{d} t}
= \left\langle x - X_k, \frac{\mathrm{d}(x - X_k)}{\mathrm{d} t} \right\rangle
\le |x - X_k| \cdot \left| \frac{\mathrm{d}(x - X_k)}{\mathrm{d} t} \right|
\le C |z_k| \cdot |x - X_k| + C |x - X_k|^2.
\]
Therefore, by Theorem 3.9 and the Cauchy–Schwarz inequality, there exists a constant $C_k > 0$ such that
\[
\frac{\mathrm{d} |x - X_k|}{\mathrm{d} t} \le C_k \left( |x - X_k| + \varepsilon^{k+1} + e^{-\frac{\beta}{\varepsilon} t} |z_k(0)| \right).
\]
By Gronwall's inequality,
\[
|x(t) - X_k(t)| \le e^{C_k t} |x(0) - X_k(0)| + \varepsilon^{k+1} \left( e^{C_k t} - 1 \right) + C_k \frac{e^{C_k t} - e^{-\frac{\beta}{\varepsilon} t}}{\frac{\beta}{\varepsilon} + C_k} |z_k(0)|,
\]
which completes the proof.

C.4. Proof of Theorem 5.1.
Proof of Theorem 5.1. When $k = 0$, (5.2) is straightforward since $\Gamma_0(x,\varepsilon) = \hat{\Gamma}^d_0(x,\varepsilon)$. Assume that (5.2) holds for $0, 1, \dots, k$, where $k = 0, 1, \dots, \lfloor K \rfloor - 1$. By Lemma C.4, it suffices to show that
(C.13)
\[
\left\| \hat{\Gamma}^d_k\big( \cdot + f(\cdot, \hat{\Gamma}^d_k(\cdot,\varepsilon))\, \tau, \varepsilon \big) - \hat{\Gamma}^d_k(\cdot,\varepsilon) - \tau \nabla_x \Gamma_k(\cdot,\varepsilon)\, f(\cdot, \Gamma_k(\cdot,\varepsilon)) \right\|_{K-k-1,\infty} = O(\varepsilon\tau + \tau^2).
\]
By the induction hypothesis, $\big\| \nabla_x \hat{\Gamma}^d_k(\cdot,\varepsilon) \big\|_{K-k-1,\infty}$ is bounded. By Lemma C.5,
(C.14)
\[
\left\| \tau \nabla_x \hat{\Gamma}^d_k(\cdot,\varepsilon)\, f(\cdot, \hat{\Gamma}^d_k(\cdot,\varepsilon)) - \tau \nabla_x \Gamma_k(\cdot,\varepsilon)\, f(\cdot, \Gamma_k(\cdot,\varepsilon)) \right\|_{K-k-1,\infty} = O(\varepsilon\tau).
\]
Since $\big\| \nabla_x \hat{\Gamma}^d_k(\cdot,\varepsilon) \big\|_{K-k-1,\infty}$ is bounded, one obtains by Corollary C.3 that
(C.15)
\[
\left\| \hat{\Gamma}^d_k\big( \cdot + f(\cdot, \hat{\Gamma}^d_k(\cdot,\varepsilon))\, \tau, \varepsilon \big) - \hat{\Gamma}^d_k(\cdot,\varepsilon) - \tau \nabla_x \hat{\Gamma}^d_k(\cdot,\varepsilon)\, f(\cdot, \hat{\Gamma}^d_k(\cdot,\varepsilon)) \right\|_{K-k-1,\infty} = O(\tau^2).
\]
Combining (C.14) and (C.15) yields (C.13), which completes the proof.
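The slow-variable tracking estimate of Theorem 3.10 (in its lowest-order case, $k = 0$) is easy to observe numerically: the homogenized model follows the stiff system with an error that shrinks with $\varepsilon$. A minimal sketch, using an assumed toy right-hand side and plain forward Euler rather than the paper's scheme:

```python
# Toy two-scale dissipative system (hypothetical, for illustration only):
#   dx/dt = f(x, y) = -x + y
#   dy/dt = g(x, y) / eps,  g(x, y) = x**2 - y,  so gamma(x) = x**2.
# The k = 0 homogenized model is  dX/dt = f(X, gamma(X)) = -X + X**2.

def slow_error(eps, T=1.0):
    """Integrate both the stiff system and the reduced model with forward
    Euler (step small enough to resolve the eps scale) and return the
    slow-variable error |x(T) - X(T)|."""
    dt = eps / 200.0
    n = int(round(T / dt))
    x, y = 0.5, 0.25          # y(0) = gamma(x(0)): start on the slow manifold
    X = 0.5
    for _ in range(n):
        x, y = x + dt * (-x + y), y + dt * (x * x - y) / eps
        X = X + dt * (-X + X * X)
    return abs(x - X)

# The error should shrink roughly linearly in eps (first order model).
errs = [slow_error(eps) for eps in (0.1, 0.05, 0.025)]
print(errs)
```

Starting on the slow manifold removes the initial-layer term $|z_k(0)|$ from the estimate, so only the $O(\varepsilon^{k+1})$ contribution is visible.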
REFERENCES

[1] A. Abdulle, W. E, B. Engquist, and E. Vanden-Eijnden, The heterogeneous multiscale method, Acta Numerica, 21 (2012), pp. 1–87.
[2] R. K. Brayton, F. G. Gustavson, and G. D. Hachtel, A new efficient algorithm for solving differential-algebraic systems using implicit backward differentiation formulas, Proceedings of the IEEE, 60 (1972), pp. 98–108.
[3] H. Brezis, Functional analysis, Sobolev spaces and partial differential equations, Springer Science & Business Media, 2010.
[4] R. Car and M. Parrinello, Unified approach for molecular dynamics and density-functional theory, Physical Review Letters, 55 (1985), p. 2471.
[5] J. Carr, Applications of centre manifold theory, vol. 35, Springer Science & Business Media, 2012.
[6] P. Chartier, A. Murua, and J. M. Sanz-Serna, Higher-order averaging, formal series and numerical integration I: B-series, Foundations of Computational Mathematics, 10 (2010), pp. 695–727.
[7] L. Chua, M. Komuro, and T. Matsumoto, The double scroll family, IEEE Transactions on Circuits and Systems, 33 (1986), pp. 1072–1118.
[8] S. M. Cox and A. J. Roberts, Initial conditions for models of dynamical systems, Physica D: Nonlinear Phenomena, 85 (1995), pp. 126–141.
[9] W. E, Analysis of the heterogeneous multiscale method for ordinary differential equations, Communications in Mathematical Sciences, 1 (2003), pp. 423–436.
[10] W. E, The heterogeneous multiscale method: A ten-year review, in American Physical Society, 2012.
[11] B. Engquist and Y.-H. Tsai, Heterogeneous multiscale methods for stiff ordinary differential equations, Mathematics of Computation, 74 (2005), pp. 1707–1742.
[12] K. Eriksson, C. Johnson, and A. Logg, Explicit time-stepping for stiff ODEs, SIAM Journal on Scientific Computing, 25 (2004), pp. 1142–1157.
[13] G. Freiling, A survey of nonsymmetric Riccati equations, Linear Algebra and its Applications, 351 (2002), pp. 243–270.
[14] C. W. Gear and I. G. Kevrekidis, Projective methods for stiff differential equations: problems with gaps in their eigenvalue spectrum, SIAM Journal on Scientific Computing, 24 (2003), pp. 1091–1106.
[15] D. Givon, R. Kupferman, and A. Stuart, Extracting macroscopic dynamics: model problems and algorithms, Nonlinearity, 17 (2004), pp. R55–R127.
[16] J. Guckenheimer, K. Hoffman, and W. Weckesser, The forced van der Pol equation I: The slow flow and its bifurcations, SIAM Journal on Applied Dynamical Systems, 2 (2003), pp. 1–35.
[17] S. He and Q.-L. Dong, An existence-uniqueness theorem and alternating contraction projection methods for inverse variational inequalities, Journal of Inequalities and Applications, (2018), pp. 1–19.
[18] F. G. Heineken, H. M. Tsuchiya, and R. Aris, On the mathematical status of the pseudo-steady state hypothesis of biochemical kinetics, Mathematical Biosciences, 1 (1967), pp. 95–113.
[19] Y. Jiang, R. Li, and S. Wu, A second order time homogenized model for sediment transport, Multiscale Modeling & Simulation, 14 (2016), pp. 965–996.
[20] P. Kaps, S. W. H. Poon, and T. D. Bui, Rosenbrock methods for stiff ODEs: A comparison of Richardson extrapolation and embedding technique, Computing, 34 (1985), pp. 17–40.
[21] J. Laskar, Large-scale chaos in the solar system, Astronomy and Astrophysics, 287 (1994), pp. L9–L12.
[22] F. Legoll, T. Lelievre, and G. Samaey, A micro-macro parareal algorithm: application to singularly perturbed ordinary differential equations, SIAM Journal on Scientific Computing, 35 (2013), pp. A1951–A1986.
[23] S. Macnamara, K. Burrage, and R. B. Sidje, Multiscale modeling of chemical kinetics via the master equation, Multiscale Modeling & Simulation, 6 (2008), pp. 1146–1168.
[24] G. C. Papanicolaou, Some probabilistic problems and methods in singular perturbations, The Rocky Mountain Journal of Mathematics, (1976), pp. 653–674.
[25] G. Pavliotis and A. Stuart, Multiscale methods: averaging and homogenization, Springer Science & Business Media, 2008.
[26] A. J. Roberts, Appropriate initial conditions for asymptotic descriptions of the long term evolution of dynamical systems, Journal of the Australian Mathematical Society, 31 (1989), pp. 48–75.
[27] E. K. Ryu and S. Boyd, Primer on monotone operator methods, Applied & Computational Mathematics, 15 (2016), pp. 3–43.
[28] W. C. Su, Z. Gajic, and X. M. Shen, The exact slow-fast decomposition of the algebraic Riccati equation of singularly perturbed systems, IEEE Transactions on Automatic Control, 37 (1992), pp. 1456–1459.
[29] F. Verhulst, Methods and applications of singular perturbations: boundary layers and multiple timescale dynamics, vol. 50, Springer Science & Business Media, 2005.
[30] G. Wanner and E. Hairer, Solving ordinary differential equations II, vol. 375, Springer Berlin Heidelberg, 1996.
[31] S. Wu, Multiscale modelling and simulation for the channel morphodynamic problems in large timescale, PhD thesis, Peking University, 2015.
[32]