Computing stability of multi-dimensional travelling waves
Veerle Ledoux, Simon J.A. Malham, Jitse Niesen, Vera Thümmler
Abstract.
We present a numerical method for computing the pure-point spectrum associated with the linear stability of multi-dimensional travelling fronts to parabolic nonlinear systems. Our method is based on the Evans function shooting approach. Transverse to the direction of propagation we project the spectral equations onto a finite Fourier basis. This generates a large, linear, one-dimensional system of equations for the longitudinal Fourier coefficients. We construct the stable and unstable solution subspaces associated with the longitudinal far-field zero boundary conditions, retaining only the information required for matching, by integrating the Riccati equations associated with the underlying Grassmannian manifolds. The Evans function is then the matching condition measuring the linear dependence of the stable and unstable subspaces and thus determines eigenvalues. As a model application, we study the stability of two-dimensional wrinkled front solutions to a cubic autocatalysis model system. We compare our shooting approach with the continuous orthogonalization method of Humpherys and Zumbrun. We then also compare these with standard projection methods that directly project the spectral problem onto a finite multi-dimensional basis satisfying the boundary conditions.
Key words. multi-dimensional stability, parabolic systems, Evans function
AMS subject classifications.
1. Introduction.
Our goal in this paper is to present a practical extension of the Evans function shooting method to computing the stability of multi-dimensional travelling wave solutions to parabolic nonlinear systems. The problem is thus to solve the linear spectral equations for small perturbations of the travelling wave. Hence we need to determine the values of the spectral parameter for which solutions match longitudinal far-field zero boundary conditions and, in more than one spatial dimension, the transverse boundary conditions. These are the eigenvalues. There are two main approaches to solving such eigenvalue problems.
• Projection:
Project the spectral equations onto a finite dimensional basis, which by construction satisfies the boundary conditions. Then solve the resulting matrix eigenvalue problem.
• Shooting:
Start with a specific value of the spectral parameter and the correct boundary conditions at one end. Shoot/integrate towards the far end. Examine how close this solution is to satisfying the boundary conditions at the far end. Re-adjust the spectral parameter accordingly. Repeat the shooting procedure until the solution matches the far boundary conditions.

The projection method can be applied to travelling waves of any dimension. Its main disadvantage is that increasing accuracy or including adaptation is costly and complicated, as we have to re-project onto the finer or adapted basis. Further, from the set of eigenvalues that are produced by the resulting large algebraic eigenvalue problem, care must be taken to distinguish two subsets as follows. In the limit of vanishing discretization scale, one subset corresponds to isolated eigenvalues of finite multiplicity, while the other subset converges to the absolute or essential spectrum, depending on the boundary conditions (see Sandstede and Scheel [26]). The latter subset can be detected by changing the truncated domain size and observing which eigenvalues drift in the complex spectral parameter plane. In contrast, for the shooting method it is much easier to fine-tune accuracy and adaptivity, and zeros of the matching condition are isolated eigenvalues of finite multiplicity (we do not have to worry about spurious eigenvalues as described above). Its disadvantage though is that by its very description it is a one-dimensional method.

When implementing the shooting method, a Wronskian-like determinant measuring the discrepancy in the boundary matching condition in the spectral problem is typically used to locate eigenvalues in the complex spectral parameter plane. This is known as the Evans function or miss-distance function—see Alexander, Gardner and Jones [1] or Greenberg and Marletta [13]. It is a function of the spectral parameter whose zeros correspond to eigenvalues, i.e. a match. In practice, we match at a point roughly centred at the front.

∗ VL is a postdoctoral fellow of the Fund of Scientific Research—Flanders (F.W.O.–Vlaanderen). SJAM and JN were supported by EPSRC First Grant GR/S22134/01. SJAM was also supported by Nuffield Foundation Newly Appointed Science Lecturers Grant SCI/180/97/129 and London Mathematical Society Scheme 2 Grant 2611. JN was also supported by Australian Research Council grant DP0559083.
† Vakgroep Toegepaste Wiskunde en Informatica, Ghent University, Krijgslaan 281-S9, B-9000 Gent, Belgium ([email protected])
‡ Maxwell Institute of Mathematical Sciences and School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh EH14 4AS, UK ([email protected])
§ School of Mathematics, University of Leeds, Leeds, LS2 9JT, UK ([email protected])
¶ Fakultät für Mathematik, Universität Bielefeld, 33501 Bielefeld, Germany ([email protected])
Further, for parabolic systems it is typically analytic in the right-half complex parameter plane, and the argument principle allows us to globally determine the existence of zeros, and therefore unstable eigenvalues, by integrating the Evans function along suitably chosen contours.

The one-dimensional label of the Evans function has started to be overturned, and in particular Deng and Nii [9], Gesztesy, Latushkin and Makarov [11], and Gesztesy, Latushkin and Zumbrun [12] have extended the Evans function theoretically to multi-dimensions. Importantly from a computational perspective, Humpherys and Zumbrun [19] outlined a computational method crucial to its numerical evaluation in the multi-dimensional scenario. Their method overcame a restriction on the order of the system that could be investigated numerically using shooting, in particular using exterior product spaces. It is based on continuous orthogonalization and is related to integrating along the underlying Grassmannian manifold whilst evolving the coordinate representation for the Grassmannian. More recently Ledoux, Malham and Thümmler [21] provided an alternative (though related) approach to that of Humpherys and Zumbrun. This approach is based on chosen coordinatizations of the Grassmannian manifold and integrating the resulting Riccati systems.

The main idea of this paper is to determine the stability of multi-dimensional travelling waves by essentially combining the two main approaches in the natural way: we project the spectral problem transversely and shoot longitudinally. The basic details are as follows.
1. Transverse to the direction of propagation of the travelling wave we project onto a finite Fourier basis. This generates a large, linear, one-dimensional system of equations for the longitudinal Fourier coefficients.
2.
We construct the stable and unstable solution subspaces associated with the longitudinal far-field zero boundary conditions by integrating with given Grassmannian coordinatizations (chosen differently for each subspace).
3. The Evans function is the determinant of the matrix of vectors spanning both subspaces and measures their linear dependence. We evaluate it at an intermediate longitudinal point.

We consider parabolic nonlinear systems of the form

∂_t U = B ∆U + c ∂_x U + F(U), (1.1)

on the cylindrical domain R × T. Here U is a vector of component fields and F(U) represents a nonlinear coupling reaction term. The diagonal matrix B encodes the positive constant diffusion coefficients. We suppose we have Dirichlet boundary conditions in the infinite longitudinal direction x ∈ R and periodic boundary conditions in the transverse direction y ∈ T. We assume also that we are in a reference frame travelling in the longitudinal direction with velocity c. Though we assume only two spatial dimensions, our subsequent analysis in the rest of this paper is straightforwardly extended to higher dimensional domains which are transversely periodic. We also suppose we have a travelling wave solution U = U_c(x, y) of velocity c satisfying the boundary conditions, whose stability is the object of our scrutiny.

As a specific application we consider a cubic autocatalysis reaction-diffusion system that admits wrinkled cellular travelling fronts. They appear as solutions bifurcating from planar travelling wave solutions as the autocatalytic diffusion parameter is increased. The initiation of this first bifurcation is known as the cellular instability and has been studied in some detail in the reaction kinetics and combustion literature; see for example Horváth, Petrov, Scott and Showalter [18] or Terman [29]. However our main interest here is the stability of these cellular fronts themselves.
It has been shown by direct numerical simulation of the reaction-diffusion system that the cellular fronts become unstable as the autocatalyst diffusion parameter is increased further—see for example Horváth, Petrov, Scott and Showalter [18] and Malevanets, Careta and Kapral [23]. We apply our multi-dimensional Evans function shooting method to this problem and we reveal the explicit form of the spectrum associated with such fronts. We compare our shooting methods with standard direct projection methods, obtaining high accuracy concurrency. We confirm the literature on these instabilities and establish the shooting method as an effective and accurate alternative to standard direct projection methods.

Our paper is structured as follows. In Section 2 we review Grassmannian and Stiefel manifolds. We show how to construct the stable and unstable solution subspaces of the spectral problem using a given Grassmannian coordinatization, and define the Evans function using coordinatizations for each subspace. We show how to project the spectral problem onto a finite dimensional transverse basis in detail in Section 3. A large portion of numerical effort is devoted to accurately constructing the multi-dimensional fronts whose stability we wish to study, and in Section 4 we detail the technique we used to do this. In Section 5 we outline our main application—the cubic autocatalysis problem—and review the known travelling wave solutions and stability properties. We then bring to bear our full range of numerical techniques on the cubic autocatalysis problem in Section 6, comparing their accuracy. In Section 7 we provide some concluding remarks and outline future directions of investigation.
2. Review: Grassmann flows and spectral matching.

2.1. Grassmannian coordinate representations. An m-frame is an m-tuple of m ≤ n linearly independent vectors in C^n. The Stiefel manifold V(n, m) of m-frames is the open subset of C^{n×m} of all m-frames centred at the origin. Hence any element Y ∈ V(n, m) can be represented by an n × m matrix of rank m:

Y = \begin{pmatrix} Y_{11} & \cdots & Y_{1m} \\ \vdots & & \vdots \\ Y_{n1} & \cdots & Y_{nm} \end{pmatrix}. (2.1)

The set of m-dimensional subspaces of C^n forms a complex manifold Gr(n, m) called the Grassmann manifold of m-planes in C^n; it is compact and connected (see Steenrod [28, p. 35] or Griffiths and Harris [14, p. 193]). There is a quotient map from V(n, m) to Gr(n, m), sending each m-frame centred at the origin to the m-plane it spans—see Milnor and Stasheff [24, p. 56]. Any m-plane in C^n can be represented by an n × m matrix of rank m, like Y above. However, any two such matrices Y and Y′ related by a rank m transformation u ∈ GL(m), so that Y′ = Y u, will represent the same m-plane. Hence we can cover the Grassmann manifold Gr(n, m) by coordinate patches U_i, labelled with multi-index i = {i_1, . . . , i_m} ⊂ {1, . . . , n}, where U_i is the set of m-planes Y ∈ Gr(n, m) which have a matrix representation y_{i°} ∈ C^{n×m} whose m × m submatrix, designated by the rows i, is the identity matrix. Each such patch is an open dense subset of Gr(n, m) isomorphic to C^{(n−m)m}. For example, if i = {1, . . . , m}, then any element of U_{1,...,m} can be uniquely represented by a matrix of the form

y_{i°} = \begin{pmatrix} I_m \\ \hat y \end{pmatrix} (2.2)

with i° = {m + 1, . . . , n} and ŷ ∈ C^{(n−m)×m}. For each i, there is a bijective map ϕ_i : U_i → C^{(n−m)m} given by ϕ_i : y_{i°} ↦ ŷ, representing the local coordinate chart for the coordinate patch U_i. For more details see Griffiths and Harris [14, p. 193–4].
Consider a non-autonomous linear vector field defined on the Stiefel manifold of the form V(x, Y) = A(x) Y, where Y ∈ V(n, m) and A : R → gl(n), the general linear algebra of rank n matrices. Following the exposition in Ledoux, Malham and Thümmler [21], fix a coordinate patch U_i for Gr(n, m) for some i, and decompose Y ∈ V(n, m) into Y = y_{i°} u. If we substitute this form into the ordinary differential system Y′ = V(x, Y) we obtain

y′_{i°} u + y_{i°} u′ = (A_i + A_{i°} ŷ) u,

where A_i denotes the submatrix obtained by restricting the matrix A(x) to its i-th columns. If we project this last system onto its i°-th and i-th rows, respectively, we generate the following pair of equations for the coordinate chart variables ŷ = ϕ_i ◦ y_{i°} and transformations u, respectively:

ŷ′ = c + d ŷ − ŷ a − ŷ b ŷ (2.3a)

and

u′ = (a + b ŷ) u. (2.3b)

Here a, b, c and d denote the i × i, i × i°, i° × i and i° × i° submatrices of A, respectively. Hence the flow in V(n, m) generated by the linear vector field V(x, Y) can be decomposed into an independent Riccati flow in the coordinate chart variables of a fixed coordinate patch U_i of Gr(n, m), and a slaved linear flow of transformations in GL(m). Hence we can reconstruct the flow on the Stiefel manifold generated by the linear vector field V(x, Y) by simply solving the Riccati equation above, and then, if required, subsequently integrating the linear equation for the transformations. In general, solutions to the Riccati equation (2.3a) can blow up. However, this is simply the manifestation of a poorly chosen local representative coordinate patch. This can be gleaned from the fact that simultaneously the determinant of u becomes zero—after all, the linear Stiefel flow, with globally smooth coefficients, does not blow up. To resolve this, we simply change patch and keep a record of the determinant of the patch-swapping transformation.
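As an illustration, the Riccati equation (2.3a) can be integrated with any standard time-stepper. The following Python sketch uses a classical fourth-order Runge–Kutta step and, purely for simplicity of the illustration, assumes constant coefficient blocks a, b, c, d (in the application they depend on x); the function names are hypothetical.

```python
import numpy as np

def riccati_rhs(yhat, a, b, c, d):
    """Right-hand side of the matrix Riccati equation (2.3a):
    yhat' = c + d yhat - yhat a - yhat b yhat."""
    return c + d @ yhat - yhat @ a - yhat @ b @ yhat

def integrate_riccati(yhat0, a, b, c, d, x0, x1, steps=1000):
    """Integrate (2.3a) from x0 to x1 with classical RK4
    (constant coefficient blocks assumed in this sketch)."""
    h = (x1 - x0) / steps
    y = yhat0.copy()
    for _ in range(steps):
        k1 = riccati_rhs(y, a, b, c, d)
        k2 = riccati_rhs(y + 0.5 * h * k1, a, b, c, d)
        k3 = riccati_rhs(y + 0.5 * h * k2, a, b, c, d)
        k4 = riccati_rhs(y + h * k3, a, b, c, d)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y
```

In a production shooting code one would of course use an adaptive stiff integrator and monitor det u for patch swapping; the sketch only shows the structure of the flow.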
Further details and strategies can be found in Ledoux, Malham and Thümmler [21].

The continuous orthogonalization method of Humpherys and Zumbrun [19] can be thought of as the flow on the Grassmann manifold corresponding to the linear vector field V, with the coordinatization evolving according to a given unitary flow (see Ledoux, Malham and Thümmler [21]). Indeed their method is specified by a Drury–Oja flow for the n × m orthogonal matrix Q, together with the flow for the determinant of the matrix R, in the QR-decomposition of Y:

Q′ = (I_n − Q Q†) A(x) Q, (2.4a)
(det R)′ = Tr(Q† A(x) Q) (det R). (2.4b)

Here Q† denotes the Hermitian transpose of Q. In practice the determinant of R is exponentially rescaled—see Humpherys and Zumbrun [19]. We also did not find it necessary to apply any of the stabilization techniques suggested therein.

Consider the linear spectral problem on R with spectral parameter λ in standard first order form with coefficient matrix A(x; λ) ∈ C^{n×n}:

Y′ = A(x; λ) Y. (2.5)

We assume there exists a subdomain Ω ⊂ C containing the right-half complex plane that does not intersect the essential spectrum. For λ ∈ Ω, we know that there exist exponential dichotomies on R− and R+ with the same Morse index m in each case (see Sandstede [25]). Hence for λ ∈ Ω, let Y−(x; λ) ∈ V(n, m) denote the matrix whose columns are solutions to (2.5) which span the unstable subspace of solutions decaying exponentially to zero as x → −∞, and let Y+(x; λ) ∈ V(n, n − m) denote the matrix whose columns are the solutions which span the stable subspace of solutions decaying exponentially to zero as x → +∞. The values of the spectral parameter λ for which the columns of Y− and the columns of Y+ are linearly dependent on R are pure-point eigenvalues.
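The right-hand sides of the Drury–Oja flow (2.4) above can be sketched in a few lines (a minimal illustration assuming NumPy, not the implementation of [19]). A convenient check is that Q† Q′ vanishes identically along the flow, so orthonormality of the columns of Q is preserved.

```python
import numpy as np

def drury_oja_rhs(Q, A):
    """Drury-Oja flow (2.4a): Q' = (I - Q Q^dagger) A(x) Q.
    The projection (I - Q Q^dagger) keeps the columns of Q orthonormal."""
    n = Q.shape[0]
    return (np.eye(n) - Q @ Q.conj().T) @ A @ Q

def log_detR_rhs(Q, A):
    """Logarithmic form of (2.4b): (log det R)' = Tr(Q^dagger A(x) Q),
    i.e. the exponential rescaling of det R used in practice."""
    return np.trace(Q.conj().T @ A @ Q)
```

Since Q†(I − QQ†) = 0 whenever Q†Q = I, the tangent vector returned by `drury_oja_rhs` is orthogonal to the columns of Q, which is exactly why the flow stays on the Stiefel manifold.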
The Evans function D(λ) is the measure of the linear dependence between the two basis sets Y− and Y+:

D(λ) ≡ e^{−∫_0^x Tr A(ξ; λ) dξ} det( Y−(x; λ)  Y+(x; λ) ), (2.6)

(see Alexander, Gardner and Jones [1] or Sandstede [25] for more details). Note that for the decompositions Y− = y_{i°−} u− and Y+ = y_{i°+} u+, assuming det u− ≠ 0 and det u+ ≠ 0, we have

det( Y−  Y+ ) = det( y_{i°−}  y_{i°+} ) · det u− · det u+.
Let Y−0(λ) denote the n × m matrix whose columns are the m eigenvectors of A(−∞; λ) corresponding to eigenvalues with positive real part. Analogously, let Y+0(λ) denote the n × (n − m) matrix whose columns are the (n − m) eigenvectors of A(+∞; λ) corresponding to eigenvalues with negative real part.

Suppose we fix a coordinate patch for Gr(n, m) labelled by i− (which has cardinality m). We integrate the corresponding Riccati differential problem (2.3a) for ŷ− = ϕ_{i−} ◦ y_{i°−} from x = −∞ to x = x∗. The initial data ŷ−(−∞; λ) is generated by postmultiplying the i°− × {1, . . . , m} submatrix of Y−0(λ) by the inverse of the i− × {1, . . . , m} submatrix of Y−0(λ) (i.e. perform the simple decomposition into y_{i°−} and u− for Y−0(λ), and use the i°− × {1, . . . , m} block of y_{i°−} as the initial data for ŷ−). Similarly, we fix a patch for Gr(n, n − m) labelled by i+ (with cardinality n − m) and integrate the Riccati differential problem (2.3a) for ŷ+ = ϕ_{i+} ◦ y_{i°+} from x = +∞ to x = x∗. We perform the analogous decomposition on Y+0(λ) with index i+ to generate the initial data ŷ+(+∞; λ). If the solutions to both Riccati problems do not blow up, so that det u−(x; λ) ≠ 0 for all x ∈ (−∞, x∗] and det u+(x; λ) ≠ 0 for all x ∈ [x∗, ∞), then we define the modified Evans function, which is analytic in λ, by

D(λ; x∗) ≡ det( y_{i°−}(x∗; λ)  y_{i°+}(x∗; λ) ), (2.7)

where x∗ ∈ R is the matching point. In our practical implementation in Section 5, we took the patches labelled by i− = {1, . . . , m} and i+ = {m + 1, . . . , n} for the left and right problems, respectively. We used the generally accepted rule of thumb for Evans function calculations: to match at a point roughly centred at the front.
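The construction of the far-field initial data for the left patch i− = {1, . . . , m} can be sketched as follows (a toy NumPy illustration with hypothetical function names; the eigenvector ordering and normalization are assumptions of the sketch):

```python
import numpy as np

def unstable_subspace(Ainf):
    """Columns: eigenvectors of A(-inf; lambda) whose eigenvalues have
    positive real part, i.e. a basis for Y0^-(lambda)."""
    w, V = np.linalg.eig(Ainf)
    return V[:, w.real > 0]

def riccati_initial_data(Y0, m):
    """Initial data for yhat^-: the lower (n-m) x m block of Y0
    postmultiplied by the inverse of its upper m x m block
    (the decomposition Y0 = y_{i_o} u for patch i^- = {1,...,m})."""
    return Y0[m:, :] @ np.linalg.inv(Y0[:m, :])
```

The same recipe with the roles of the upper and lower blocks exchanged gives the initial data for ŷ+ on the right patch.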
In our example application we took x∗ = 50, and did not observe singularities in ŷ− or ŷ+. However, generally, varying x∗ may introduce singularities for ŷ− or ŷ+ in their respective domains (−∞, x∗] and [x∗, ∞). Indeed this was observed in several examples considered in Ledoux, Malham and Thümmler [21]. There, a robust remedy is provided which, on numerical evidence, allows matching anywhere in the computational domain (and which in future work we intend to apply to our context here).

If we use the Humpherys and Zumbrun continuous orthogonalization approach then we define the Evans function to be

D_HZ(λ; x∗) ≡ det( Q−(x∗; λ)  Q+(x∗; λ) ) · det R−(x∗; λ) · det R+(x∗; λ). (2.8)

This is analytic in λ when we include the det R± terms.

We can also measure the angle between the unstable and stable linear subspaces V(n, m) and V(n, n − m); see Björck and Golub [7]. When the columns of the matrices Y−(x; λ) and Y+(x; λ), which span V(n, m) and V(n, n − m) respectively, are nearly linearly dependent, the angle will be small. It is defined as follows—without loss of generality we assume m ≥ n/2. Let Q− and Q+ be unitary bases for V(n, m) and V(n, n − m). They are specified by the QR-decompositions Y− = Q− R− and Y+ = Q+ R+. The cosine of the smallest angle θ between V(n, m) and V(n, n − m) is given by the largest singular value of the matrix (Q−)† Q+. However, when θ is small it is more accurate to compute sin θ, which is given by the smallest singular value of (I_n − Q−(Q−)†) Q+. Hence if the singular values of this matrix are σ_i, i = 1, . . . , n − m, define

θ(λ; x∗) ≡ arcsin min_{i=1,...,n−m} σ_i,

where again x∗ is the matching point. Eigenvalues correspond to zeros of θ(λ; x∗).
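The angle computation just described is a few lines of linear algebra; here is a minimal NumPy sketch (the function name is hypothetical):

```python
import numpy as np

def subspace_angle(Ym, Yp):
    """Smallest principal angle between span(Ym) and span(Yp),
    computed via sin(theta) for accuracy at small angles
    (Bjorck-Golub): the smallest singular value of
    (I - Q- Q-^dagger) Q+."""
    Qm, _ = np.linalg.qr(Ym)
    Qp, _ = np.linalg.qr(Yp)
    n = Qm.shape[0]
    M = (np.eye(n) - Qm @ Qm.conj().T) @ Qp
    s = np.linalg.svd(M, compute_uv=False)
    return float(np.arcsin(np.clip(s.min(), 0.0, 1.0)))
```

The clip guards against singular values that exceed one by rounding error before the arcsine is taken.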
3. Projection onto a finite transverse Fourier basis.
Consider the parabolic nonlinear system (1.1) posed on the cylindrical domain R × T. Recall that we assume Dirichlet boundary conditions in the infinite longitudinal direction x ∈ R and periodic boundary conditions in the transverse direction y ∈ T (here T is the one-torus with period/circumference L). Suppose we have a travelling wave solution U = U_c(x, y) of velocity c satisfying the boundary conditions. The stability of this solution is determined by the spectrum of the linear differential operator

L = B ∆ + c ∂_x + D F(U_c), (3.1)

where D F is the Jacobian of the interaction term. Hence the eigenvalue problem is

B ∆U + c ∂_x U + D F(U_c) U = λU. (3.2)

We wish to define and compute the Evans function for this spectral problem. The idea is to perform a Fourier decomposition in the transverse direction. This transforms the problem to a one-dimensional problem, for which we can apply the Evans function shooting approach. Hence suppose that we can write

U(x, y) = Σ_{k=−∞}^{∞} Û_k(x) e^{iky/L̃}, (3.3)

where Û_k ∈ C^N are the Fourier coefficients (with N denoting the number of components of U) and L̃ = L/2π—this implicitly restricts our attention to perturbations U with period L. Similarly, we expand D F(U_c) as

D F(U_c(x, y)) = Σ_{k=−∞}^{∞} D̂_k(x) e^{iky/L̃}, (3.4)

where the Fourier coefficients D̂_k ∈ C^{N×N}. Substituting the Fourier expansions (3.3) and (3.4) for U and D F(U_c) into the eigenvalue problem (3.2) yields

B(∂_xx + ∂_yy) Σ_{k=−∞}^{∞} Û_k e^{iky/L̃} + c ∂_x Σ_{k=−∞}^{∞} Û_k e^{iky/L̃} + ( Σ_{ℓ=−∞}^{∞} D̂_ℓ e^{iℓy/L̃} )( Σ_{k=−∞}^{∞} Û_k e^{iky/L̃} ) = λ Σ_{k=−∞}^{∞} Û_k e^{iky/L̃}.

Reordering the double sum,

Σ_{ℓ=−∞}^{∞} D̂_ℓ e^{iℓy/L̃} Σ_{k=−∞}^{∞} Û_k e^{iky/L̃} = Σ_{k=−∞}^{∞} ( Σ_{ν=−∞}^{∞} D̂_{k−ν} Û_ν ) e^{iky/L̃},

yields that for all k ∈ Z we must have

∂_xx Û_k − (k/L̃)² Û_k + c B⁻¹ ∂_x Û_k + Σ_{ν=−∞}^{∞} B⁻¹ D̂_{k−ν} Û_ν = λ B⁻¹ Û_k.

In practical implementation, we consider only the first K Fourier modes.
Hence we replace the above infinite system of equations by the following finite system of equations,
which we now also express in first order form:

∂_x Û_k = P̂_k, (3.5a)
∂_x P̂_k = λ B⁻¹ Û_k + (k/L̃)² Û_k − c B⁻¹ P̂_k − Σ_{ν=−K}^{K} B⁻¹ D̂_{k−ν} Û_ν, (3.5b)

for k = −K, −K + 1, . . . , K. This is a large system of ordinary differential equations involving the spectral parameter λ, so we can apply the standard Evans function approach to it. In matrix form this is

∂_x \begin{pmatrix} \hat U \\ \hat P \end{pmatrix} = \begin{pmatrix} O_{N(2K+1)} & I_{N(2K+1)} \\ A_1(x; λ) & A_2 \end{pmatrix} \begin{pmatrix} \hat U \\ \hat P \end{pmatrix}, (3.6)

where Û is the vector of C^N-valued components Û_{−K}, Û_{−K+1}, . . . , Û_K and P̂ is the vector of C^N-valued components P̂_{−K}, P̂_{−K+1}, . . . , P̂_K. The lower right block matrix is A_2 = −c B⁻¹ ⊗ I_{2K+1}. The lower left block matrix is given by A_1 = Ã(λ) + Â(x), where, if we set E_k(λ) ≡ λ B⁻¹ + (k/L̃)² I_N, then

\tilde A(λ) = \begin{pmatrix} E_{−K}(λ) & O & \cdots & O \\ O & E_{−K+1}(λ) & \cdots & O \\ \vdots & \vdots & \ddots & \vdots \\ O & \cdots & O & E_{+K}(λ) \end{pmatrix}

and

\hat A(x) = −B^{−1} ⊗ \begin{pmatrix} \hat D_0 & \hat D_{−1} & \cdots & \hat D_{−2K} \\ \hat D_1 & \hat D_0 & \cdots & \hat D_{−2K+1} \\ \vdots & \vdots & \ddots & \vdots \\ \hat D_{2K} & \hat D_{2K−1} & \cdots & \hat D_0 \end{pmatrix}.

A similar Galerkin approximation for nonplanar fronts can be found in Gesztesy, Latushkin and Zumbrun [12, p. 28, 37].
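To make the bookkeeping concrete, here is a hypothetical NumPy sketch assembling the coefficient matrix of (3.6) at a single x-station. The mode-major stacking of Û and the callable `Dhat`, mapping an integer offset to the N × N Fourier coefficient of D F(U_c), are assumptions of this illustration, not the paper's data structures.

```python
import numpy as np

def assemble_coefficient_matrix(lam, c, B, Dhat, Ltilde, K):
    """Assemble [[0, I], [A1, A2]] from (3.6) at one x-station.
    A1 = Atilde(lam) + Ahat(x): diagonal blocks E_k(lam) plus the
    block-Toeplitz convolution blocks -B^{-1} Dhat(k - nu)."""
    N = B.shape[0]
    M = N * (2 * K + 1)
    Binv = np.linalg.inv(B)
    A1 = np.zeros((M, M), dtype=complex)
    modes = range(-K, K + 1)
    for row, k in enumerate(modes):
        i = row * N
        # E_k(lambda) = lambda B^{-1} + (k/Ltilde)^2 I_N on the diagonal
        A1[i:i+N, i:i+N] += lam * Binv + (k / Ltilde) ** 2 * np.eye(N)
        for col, nu in enumerate(modes):
            j = col * N
            A1[i:i+N, j:j+N] -= Binv @ Dhat(k - nu)
    # block-diagonal -c B^{-1}, one copy per transverse mode
    A2 = np.kron(np.eye(2 * K + 1), -c * Binv)
    Z, I = np.zeros((M, M)), np.eye(M)
    return np.block([[Z, I], [A1, A2]])
```

The resulting matrix is of size 2N(2K+1), matching the system size n = 4(2K+1) quoted later for the two-component autocatalysis problem.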
4. The freezing method.
To study the long-time behaviour of the wrinkled front solution we use the method of freezing travelling waves described in Beyn and Thümmler [6] and analyzed in Thümmler [30]. This method transports the problem into a moving frame whose velocity is computed during the computation. For the travelling waves whose stability is our concern here, the freezing method has two advantages over direct simulation. Firstly, we can do long-time computations of time-dependent (for example oscillating) solutions. This is because the method picks out the correct speed for the moving frame in which the wave is roughly stationary (in particular it does not drift out of the chosen domain). Secondly, the freezing method can be used to obtain a suitably accurate guess for the waveform, which can then be used to initiate a Newton solver that computes the travelling wave as a stationary solution in the co-moving frame. For completeness we will briefly review the idea of the freezing method here. Consider a parabolic nonlinear system of form (1.1) in a stationary frame of reference:

∂_t U = B ∆U + F(U). (4.1)

Substituting the ansatz U(x, y, t) = V(x − γ(t), y, t) into this equation, where γ(t) is the unknown reference position for a wave travelling in the longitudinal x-direction, we generate the partial differential equation for V and γ′:

∂_t V = B ∆V + γ′(t) ∂_x V + F(V). (4.2a)

In order to compensate for the additional degree of freedom which has been introduced by the unknown position γ, we specify the following additional algebraic constraint:

0 = ∫_{R×T} ( ∂_x V̂(x, y, t) )^T ( V̂(x, y, t) − V(x, y, t) ) dx dy. (4.2b)

This constraint selects a unique solution from the one-parameter group of possible solutions generated by longitudinal translation.
It is a phase condition that normalizes the system with respect to a given template function—for example, a suitable choice is to set V̂(x, y, t) to be the initial data V(x, y, 0). We then solve the partial differential algebraic system (4.2) for V(x, y, t) and ζ(t) = γ′(t), with initial data V(x, y, 0) and ζ(0) within the attractive set of the stable travelling wave and its velocity.

A stationary solution (V, ζ) = (V_c, c) of the partial differential algebraic system (4.2) represents a travelling wave solution U_c(x, y, t) = V_c(x − ct, y) with fixed speed c to the parabolic nonlinear problem (4.1). If the travelling wave U_c is stable then the stationary solution (V_c, c) is also stable and the time evolution of the partial differential algebraic system (4.2) will converge to (V_c, c). Thus the numerical method consists of solving the partial differential algebraic system (4.2) until the solution (V, ζ) has reached the equilibrium (V_c, c). Alternatively, one can also solve the partial differential algebraic system (4.2) until some time t = T when the solution is deemed sufficiently close to equilibrium and then use a direct method (e.g. Newton's method) to solve the stationary equations

0 = B ∆v + θ ∂_x v + F(v), (4.3a)
0 = ∫_{R×T} ( ∂_x V̂(x, y) )^T ( V̂(x, y) − v(x, y) ) dx dy, (4.3b)

for (v, θ), starting with initial values v(x, y) = V(x, y, T) and θ = ζ(T).
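As a discrete illustration of the phase condition (4.2b), the integral can be approximated on a uniform tensor grid (a sketch under that grid assumption, not the discretization used in the computations; the function name is hypothetical):

```python
import numpy as np

def phase_condition_residual(Vhat, V, dx, dy):
    """Approximate the phase-condition integral (4.2b): the L^2 inner
    product of d(Vhat)/dx with (Vhat - V), using a second-order finite
    difference in x and a simple Riemann sum over the grid."""
    dVhat_dx = np.gradient(Vhat, dx, axis=0)
    return float(np.sum(dVhat_dx * (Vhat - V)) * dx * dy)
```

For V equal to the template the residual vanishes, while a small longitudinal shift of V produces a residual proportional to the shift, which is what makes the constraint pin down γ(t).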
5. Review: autocatalytic system.
As our model application we consider the cubic autocatalysis system

u_t = ∆u + c ∂_x u − uv², (5.1a)
v_t = δ ∆v + c ∂_x v + uv². (5.1b)

Here u(x, y, t) is the concentration of the reactant and v(x, y, t) is the concentration of the autocatalyst, in the infinitely extended medium (x, y) ∈ R × T. In the far field, we suppose (u, v) approaches the stable homogeneous steady state (0, 1) as x → −∞, and the unstable homogeneous steady state (1, 0) as x → +∞. The parameter δ is the ratio of the diffusivity of the autocatalyst to that of the reactant. This system is globally well-posed for smooth initial data and any finite δ > 0. Planar travelling wave solutions exist for all wave speeds c ≥ c_min. The unique travelling wave for c = c_min converges exponentially to the homogeneous steady states. They are readily constructed by shooting (see for example Balmforth, Craster and Malham [3]) and represent the planar front solution referred to hereafter.

It is well known that these planar travelling wave solutions become unstable to transverse perturbations when δ is greater than a critical ratio δ_cr. If we consider perturbations of the form U(x) e^{λt + i(k/L̃)y}, for any k ∈ Z, then the associated linear operator L whose spectrum we wish to determine has the form

L = B( ∂_xx − (k/L̃)² I ) + c ∂_x + D F(U_c). (5.2)

The spectrum of L consists of the pure-point spectrum, i.e. isolated eigenvalues of finite multiplicity, together with the essential spectrum. By a result of Henry [15], if we consider L as an operator on the space of square integrable functions, we know that the essential spectrum is contained within the parabolic curves of the continuous spectrum in the left-half complex plane. The continuous spectrum for this model problem is given, for all μ ∈ R and each fixed k ∈ Z, by the locus in the complex λ-plane of the curves

λ = −δ(k/L̃)² + i cμ − δμ²,
λ = −1 − (k/L̃)² + i cμ − μ²,
λ = −(k/L̃)² + i cμ − μ²,

where the first curve appears with multiplicity two. Hence the continuous spectrum touches the imaginary axis at the origin when k is zero. The origin is a simple eigenvalue for L due to translational invariance of the underlying travelling wave U_c(x) in the longitudinal direction.
Thus, even though the essential spectrum does not affect linear stability, we cannot automatically deduce asymptotic stability unless we consider a gently weighted space of square integrable functions, which will shift the essential spectrum to the left.

We now notice that the spectral problem for the linear differential operator L is fourth order and one-dimensional; the transverse perturbation information is encoded through the transverse wavenumber parameter k. Hence we can employ the Evans function shooting approach. In particular, in Figure 5.1 we plot the dispersion relation for planar fronts corresponding to different values of the physical parameter δ. The dispersion relation gives the relationship between the real part of the exponential growth factor of small perturbations, Re(λ), and the transverse wave number k (in fact, to make comparisons later easier, we have used k/L̃ on the abscissa). To construct the graph shown, we locate zeros of the Evans function on the real λ axis and follow them as the transverse wave number k is varied. We see in Figure 5.1 that the interval (0, k_δ) of unstable transverse wavenumbers k increases as δ increases (at least up to δ = 5). The wavenumber where the maximum of Re(λ) occurs has largest growth and is thus typically the transverse mode expressed in the dynamics.

Fig. 5.1. Dispersion relations (growth rate Re(λ) against transverse wave number) for the planar front for δ = 2, 3, 4, 5. For δ = 2, all perturbations decay and the planar front is linearly stable, but for the other values of δ there is a growing interval (0, k_δ) of unstable transverse wavenumbers.

However our real concern here is the stability of the wrinkled cellular fronts themselves as we increase the diffusion ratio parameter δ beyond δ_cr. We know the planar waves are unstable in this regime, and the question is whether the bifurcating wrinkled cellular fronts are stable for values of δ just beyond δ_cr but become unstable at another critical value of δ. Using direct simulations of the reaction-diffusion system, Malevanets, Careta and Kapral [23] demonstrated that when δ = 5 the transverse structure of the propagating fronts was very complex, with spatio-temporal dynamics of its own. Our goal is to elicit in practice the initial mechanisms that precipitate the behaviour they observe through a careful study of the spectra of the underlying multi-dimensional travelling fronts.
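The zero-following continuation used to draw dispersion curves can be sketched as a secant iteration warm-started from the previous wavenumber's root (a hypothetical toy; `D` below is a simple stand-in dispersion function, not the Evans function of the paper):

```python
def follow_root(D, lam0, lam1, kvals, tol=1e-12):
    """Follow a real zero of lambda -> D(lambda, k) as the transverse
    wavenumber k varies, warm-starting the secant iteration at each k
    from the root found at the previous k."""
    roots = []
    for k in kvals:
        a, b = lam0, lam1
        fa, fb = D(a, k), D(b, k)
        for _ in range(100):
            if abs(fb - fa) < 1e-300:   # guard against zero division
                break
            c = b - fb * (b - a) / (fb - fa)
            a, fa, b, fb = b, fb, c, D(c, k)
            if abs(b - a) < tol:
                break
        roots.append(b)
        lam0, lam1 = b - 1e-3, b        # warm start for the next k
    return roots
```

The warm start matters: eigenvalue branches can approach the essential spectrum or each other, and a cold-started root finder may jump between branches.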
6. Numerics.

6.1. The method.
We will compute the spectrum of the two-dimensional travelling front by computing the Evans function associated with the linear operator L defined in (3.1). First, we compute the front with the freezing method as explained in Section 4. Then, as in Section 3, we project the problem onto a finite number of transverse Fourier modes k = −K, . . . , K. This generates the large linear system (3.6), which is of the standard linear form (2.5) where the coefficient matrix is given by

A(x; λ) = [ O_{N(2K+1)}  I_{N(2K+1)} ; A₁(x; λ)  A₂ ],

where the matrices A₁ and A₂ are explicitly given at the end of Section 3. This coefficient matrix depends on both the travelling wave solution U_c and the spectral parameter λ. It also depends on the nonlinear reaction term. For our autocatalytic problem the Jacobian DF of the nonlinear reaction term and its Fourier transform D̂ are given by

DF(U) = [ −v²  −2uv ; v²  2uv ]   and   D̂ = [ −v̂ ∗ v̂  −2û ∗ v̂ ; v̂ ∗ v̂  2û ∗ v̂ ],

where û and v̂ are the Fourier transforms of u and v respectively, and ∗ denotes the convolution defined by (û ∗ v̂)_k = Σ_{ℓ=−∞}^{∞} û_ℓ v̂_{k−ℓ}.

As described in Section 2.5, we construct the unstable subspace Y₀⁻(λ) of (3.6) at x = −∞ as the matrix whose columns are the eigenvectors of A(−∞; λ) corresponding to eigenvalues with a positive real part. Similarly, we construct the stable subspace Y₀⁺(λ) at x = +∞ from the eigenvectors of A(+∞; λ) corresponding to eigenvalues with a negative real part. Then, to construct the modified Evans function (2.7) at the matching point x∗, we proceed as follows (for both the planar and wrinkled front cases for the autocatalytic problem below). The full system is of size n = 4(2K + 1). We use the coordinate patch identified by i⁻ = {1, . . . , m}, where m = 2(2K + 1), for the interval (−∞, x∗].
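The truncated convolution (û ∗ v̂)_k = Σ_ℓ û_ℓ v̂_{k−ℓ}, restricted to the retained modes −K, . . . , K, can be written directly. This is a minimal sketch (not the code used in the paper), checked against NumPy's full linear convolution:

```python
import numpy as np

def conv(uh, vh, K):
    """Truncated convolution (uh*vh)_k = sum_l uh_l vh_{k-l}, with
    coefficients indexed l = -K..K stored at array position l+K;
    modes outside the cut-off are dropped."""
    out = np.zeros(2*K + 1, dtype=complex)
    for k in range(-K, K + 1):
        for l in range(-K, K + 1):
            if -K <= k - l <= K:
                out[k + K] += uh[l + K] * vh[k - l + K]
    return out

K = 4
rng = np.random.default_rng(0)
uh = rng.standard_normal(2*K + 1) + 1j * rng.standard_normal(2*K + 1)
vh = rng.standard_normal(2*K + 1) + 1j * rng.standard_normal(2*K + 1)

# The truncated convolution is the middle slice of the full linear one
# (full indices run over k = -2K..2K; keep k = -K..K).
full = np.convolve(uh, vh)
middle = full[K:3*K + 1]
```

The slice identity holds because restricting −K ≤ k−ℓ ≤ K simply drops the convolution terms that involve modes outside the cut-off.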
On this interval we solve the Riccati equation (2.3a) for ŷ⁻; with the chosen coordinate patch, the coefficient matrices in the Riccati equation are a = O_m, b = I_m, c = A₁ and d = A₂. The initial data for ŷ⁻ is simply the lower (n − m) × m block of Y₀⁻(λ) postmultiplied by the inverse of the upper m × m block of Y₀⁻(λ). For the interval [x∗, +∞) we use the coordinate patch identified by i⁺ = {m + 1, . . . , n}. We solve the Riccati equation (2.3a) for ŷ⁺; the coefficient matrices in this case are a = A₂, b = A₁, c = I_m and d = O_m. The initial data for ŷ⁺ is the upper m × (n − m) block of Y₀⁺(λ) postmultiplied by the inverse of the lower (n − m) × (n − m) block of Y₀⁺(λ). Of course, in practical calculations our intervals of integration are [L_x⁻, x∗] and [x∗, L_x⁺] for suitable choices of L_x⁻ and L_x⁺. In particular, we assume the interval [L_x⁻, L_x⁺] contains the travelling front and extends far enough in both directions that we can assume the front is sufficiently close to the far-field homogeneous steady states. Further, the respective coordinate patches for the left and right intervals, and the matching point x∗, are chosen to avoid singularities in the Riccati flows in these intervals (in all our calculations for the autocatalysis problem we were always able to find such a matching point when using these two patches). Hence we evaluate the Evans function by computing the determinant (2.7). This information is used to find the spectrum of the travelling wave.

6.2. The planar front. We first consider planar travelling waves. This will not give us new information, but it is a good test for our procedure. The travelling waves can be found by the freezing method, but also by the shooting method explained in Balmforth, Craster and Malham [3] for the one-dimensional case. The next step is to find the unstable subspace of (3.5) at x = −∞.
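The initialization of the Riccati variable described in Section 6.1, the lower (n−m)×m block of Y₀⁻(λ) postmultiplied by the inverse of its upper m×m block, takes only a few lines. The eigenvector matrix below is a random stand-in for the true unstable basis:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 12, 6                           # stand-ins for n = 4(2K+1), m = 2(2K+1)
Y_minus = rng.standard_normal((n, m))  # columns: unstable eigenvectors (stand-in)

# Riccati initial data for the patch i^- = {1,...,m}:
# lower block postmultiplied by the inverse of the upper block.
upper = Y_minus[:m, :]
lower = Y_minus[m:, :]
y_minus0 = lower @ np.linalg.inv(upper)

# (I; y_minus0) spans the same subspace as the columns of Y_minus:
recovered = np.vstack([np.eye(m), y_minus0])
coeffs = np.linalg.lstsq(recovered, Y_minus, rcond=None)[0]
residual = np.linalg.norm(recovered @ coeffs - Y_minus)
```

Only this (n−m)×m Riccati variable is propagated, rather than the full n×m basis, which is the dimensional saving behind the Grassmannian approach.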
The travelling wave U_c is planar and thus independent of y, so the Fourier coefficient D̂_k vanishes for k ≠ 0. This implies that the differential equation (3.5) decouples into 2K + 1 systems, one for every k = −K, . . . , K, each of the form

∂_x Û_k = P̂_k,
∂_x P̂_k = λ B⁻¹ Û_k + (k/˜L)² Û_k − B⁻¹ D̂₀(x) Û_k − c B⁻¹ P̂_k.

For the autocatalytic system (5.1), this becomes

∂_x [ Û_k ; P̂_k ] = [ 0  0  1  0 ;
                      0  0  0  1 ;
                      (λ + u₂²)/δ + (k/˜L)²   2u₁u₂/δ   −c/δ   0 ;
                      −u₂²   λ + (k/˜L)² − 2u₁u₂   0   −c ] [ Û_k ; P̂_k ].   (6.1)

Here (u₁, u₂) denotes the planar front profile, and (u₁, u₂) → (0, 1) in the limit x → −∞. The coefficient matrix in (6.1) has two unstable eigenvectors when λ is to the right of the continuous spectrum, and these eigenvectors are

(1, 1/ν, µ, µ/ν)ᵀ, where µ = −c/(2δ) + sqrt( c²/(4δ²) + (k/˜L)² + (λ + 1)/δ ) and ν = (k/˜L)² + λ − cµ − µ²,   (6.2a)

and

(0, 1, 0, µ)ᵀ, where µ = −c/2 + sqrt( c²/4 + (k/˜L)² + λ ).   (6.2b)

These eigenvectors are all analytic functions of λ except at those points where either ν or one of the expressions under the square root sign vanishes. A small calculation shows that the latter can only happen when λ is on the negative real axis, so it will not interfere with stability computations, which concern eigenvalues with positive real part. In contrast, we may have ν = 0 for positive λ, but in the situations considered here ν vanishes only for values of λ which are so large that they do not concern us. Alternatively, the singularities caused by ν = 0 can easily be avoided by using (ν, 1, µν, µ)ᵀ as eigenvector instead of (1, 1/ν, µ, µ/ν)ᵀ.

We thus have two unstable directions for every wavenumber k. Together these form the 2(2K + 1)-dimensional unstable subspace Y₀⁻(λ) at x = −∞. Using this, as described in Section 6.1 above, we integrate the Riccati equation (2.3a) from x = L_x⁻ to x = x∗, where L_x⁻ corresponds to a position sufficiently far behind the front. The integration is done with the Matlab function ode45, which is based on the explicit fifth-order Runge–Kutta method due to Dormand and Prince. In all experiments, we use an absolute tolerance of 10^− and a relative tolerance of 10^−.

In the other limit, where x → +∞, we need the two stable eigenvectors of the coefficient matrix in (6.1), which are

(1, 0, µ, 0)ᵀ, where µ = −c/(2δ) − sqrt( c²/(4δ²) + (k/˜L)² + λ/δ ),   (6.3a)

and

(0, 1, 0, µ)ᵀ, where µ = −c/2 − sqrt( c²/4 + (k/˜L)² + λ ).   (6.3b)

Again, we get a 2(2K + 1)-dimensional subspace Y₀⁺(λ).
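The eigenpair formulas (6.2) can be verified directly against the coefficient matrix of (6.1) in the limit x → −∞, where (u₁, u₂) = (0, 1). The parameter values below (δ, c, λ and the wavenumber) are sample values chosen for illustration, not the paper's:

```python
import numpy as np

# Sample parameter values (assumed, for illustration only).
delta, c, lam = 3.0, 0.5, 0.1
ktil = 0.0314                          # stand-in for the wavenumber k / L~

# Coefficient matrix of (6.1) in the limit x -> -infty, where (u1, u2) -> (0, 1).
A = np.array([
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [(lam + 1.0) / delta + ktil**2, 0.0, -c / delta, 0.0],
    [-1.0, lam + ktil**2, 0.0, -c],
])

# Unstable eigenvalues and eigenvectors (6.2a), (6.2b):
mu1 = -c / (2 * delta) + np.sqrt(c**2 / (4 * delta**2) + ktil**2 + (lam + 1.0) / delta)
nu = ktil**2 + lam - c * mu1 - mu1**2
v1 = np.array([1.0, 1.0 / nu, mu1, mu1 / nu])

mu2 = -c / 2 + np.sqrt(c**2 / 4 + ktil**2 + lam)
v2 = np.array([0.0, 1.0, 0.0, mu2])

r1 = np.linalg.norm(A @ v1 - mu1 * v1)   # residual of the eigenpair (mu1, v1)
r2 = np.linalg.norm(A @ v2 - mu2 * v2)   # residual of the eigenpair (mu2, v2)
```

Both residuals vanish to rounding error, and both exponents µ are positive, confirming that these are the unstable (growing as x → −∞ is left behind) directions.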
We use this to generate the initial condition for the Riccati equation (2.3a), which we integrate backwards from x = L_x⁺ to x = x∗ (note that the coefficients are different to those for the left interval; see Section 6.1 above). Here L_x⁺ is a longitudinal position sufficiently far ahead of the front. In our actual calculations we used L_x^± = ±25 and x∗ = 0. Finally, the Evans function is computed using the modified form (2.7).

This concludes the explanation of the Riccati approach. The process for the Drury–Oja approach due to Humpherys and Zumbrun [19] is very similar. The difference is that it uses the Drury–Oja flow (2.4) to propagate the subspaces and (2.8) to evaluate the Evans function.

As an illustration, we show in Figure 6.1 a graph of the Evans function along the real axis for δ = 3, with the Fourier cut-off at K = 24. We see that the Evans function has a zero at λ = 0 and double zeros at several small positive values of λ. The zero at λ = 0 corresponds to the eigenvalue at the origin caused by translational symmetry. The other zeros correspond to double eigenvalues. They have positive real part, and thus we can conclude that the planar wave is unstable for δ = 3, consistent with the long-established statements in Section 5.

Fig. 6.1. The Evans function along the real axis for the planar front at δ = 3. The Fourier cut-off is K = 24 and the transversal length of the domain is L = 200. The Evans function is computed using the Drury–Oja approach. A very similar plot can be produced using the Riccati approach.

In fact, the zeros of the Evans function can be related to the dispersion curve in Figure 5.1. As mentioned above, the differential equation (3.5) decouples into 2K + 1 subsystems of the form (6.1), each corresponding to the linear operator L given in (5.2) for a particular wave number. The basis vectors we chose for the unstable subspace are zero for all but one of these subsystems. Since the subsystems are decoupled, the solution at x = x∗ is also zero for all but one of these subsystems.
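The decoupling just described means the matching matrix is, after reordering, block diagonal, so its determinant is the product of the per-wavenumber block determinants. That elementary identity is easy to confirm numerically; the blocks here are random stand-ins for the per-mode systems:

```python
import numpy as np

rng = np.random.default_rng(2)
K = 3
blocks = [rng.standard_normal((4, 4)) for _ in range(2 * K + 1)]

# Assemble the block-diagonal matrix (one 4x4 block per wavenumber k).
N = 4 * (2 * K + 1)
M = np.zeros((N, N))
for i, B in enumerate(blocks):
    M[4 * i:4 * i + 4, 4 * i:4 * i + 4] = B

# det of the whole matrix = product of the per-block determinants:
# the structure behind the factorization of the Evans function.
D_full = np.linalg.det(M)
D_prod = np.prod([np.linalg.det(B) for B in blocks])
```

This is precisely the structure that makes the two-dimensional Evans function for the planar wave factor into one-dimensional Evans functions, one per transverse mode.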
Thus, the matrix ( Y⁻(x∗; λ)  Y⁺(x∗; λ) ) at the matching point is (after reordering) a block-diagonal matrix consisting of 2K + 1 four-by-four blocks, and its determinant is the product of the determinants of the 4 × 4 blocks. Each block corresponds to the Evans function D_k of the operator (5.2) for a particular wave number k. Thus, the Evans function for the two-dimensional planar wave is the product of the Evans functions for the one-dimensional wave with respect to transverse perturbations:

D(λ) = ∏_{k=−K}^{K} D_k(λ).   (6.4)

We chose L = 200, so k = 1 corresponds to a wave number of 2π/200 = π/100 ≈ 0.031. D₁(λ) has a zero at a small positive value of λ, and D₋₁(λ) has a zero at the same value. This explains why the two-dimensional Evans function has a double zero there. The other double zeros in Figure 6.1 correspond to k = 2 and k = 3, respectively.

As Figure 6.1 shows, the Evans function is of the order 10⁻⁶⁰ in the interval of interest. It is important to emphasize that we are primarily interested in the zeros of the Evans function, which correspond to eigenvalues of the operator L defined in (3.1). The scale of the Evans function between the zeros, which depends on the normalization of the eigenvectors (6.2) and (6.3), is of lesser relevance in this context. However, for completeness, we address this issue in Section 6.5.

6.3. The wrinkled front. Having established that our method works for planar fronts, we now turn to the wrinkled front. The most immediate problem is that of computing reliable steady wrinkled fronts. The procedure we used to do this is outlined as follows:

Fig. 6.2. The wrinkled front for δ = 3. The left panel shows the u component and the right panel shows the v component.

• The partial differential algebraic system (4.2) is solved on the computational domain [−, ] × [−60,
60] with a second-order finite element method on a grid of right triangles, formed by dividing a grid of 300 × 240 rectangles along their diagonals. We used the Finite Element package Comsol Multiphysics™ [8], which includes a version of the DAE solver daspk.
• To obtain reasonable starting waveform profiles for the partial differential algebraic system (4.2), we constructed the one-dimensional waveform, which we then swept out uniformly in the transverse direction.
• For travelling waves that appeared to be stable, we used the freezing method described in Section 4 to obtain good starting waveform profiles to initiate the use of Newton's method in Comsol.
• For travelling waves that appeared to be unstable, we used simple parameter continuation (also using Comsol), starting with a (parametrically) nearby stable wave.

Figure 6.2 shows the front which results for δ = 3.

The wrinkled front varies along the transverse y-direction. Therefore, the Fourier coefficients D̂_k do not vanish for k ≠ 0, and the differential equation (3.6) does not decouple. In the limits x → ±∞, we have restricted ourselves to wrinkled front profiles that approach a y-independent state, which is the same as for the planar front. Hence, the computation of the stable and unstable subspaces for the planar front, which we reported above, remains valid for the wrinkled front.

The computation now proceeds as with the planar front. We solve the Riccati differential equation (2.3a) in the left and right intervals (as described in Section 6.1 above) for the Riccati approach, and (2.4) for the Drury–Oja approach. The front U_c is determined as explained above, with interpolation being used for those points that fall between the grid points of the finite element solution. We use cubic interpolation, because linear interpolation is not sufficiently accurate. Finally, the Evans function is computed using either the modified form (2.7) or the Humpherys and Zumbrun form (2.8).

Figure 6.3 shows a plot of the Evans function along the real axis for δ = 3, computed using the Riccati approach.
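The interpolation remark can be quantified with a stand-in profile: on a coarse grid, cubic interpolation of a smooth tanh front is far more accurate between grid points than linear interpolation. The profile and grid spacing here are assumptions for illustration, not the finite element data:

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.linspace(-25, 25, 101)          # coarse grid, spacing 0.5 (assumed)
f = 0.5 * (1 - np.tanh(x))             # smooth front-like stand-in profile
xf = np.linspace(-20, 20, 1001)        # evaluation points between grid points
exact = 0.5 * (1 - np.tanh(xf))

# Maximum interpolation error over the evaluation points.
err_lin = np.max(np.abs(interp1d(x, f, kind='linear')(xf) - exact))
err_cub = np.max(np.abs(interp1d(x, f, kind='cubic')(xf) - exact))
```

For a smooth profile the linear error scales like h² and the cubic error like h⁴ in the grid spacing h, which is why the cubic variant wins comfortably even on a modest grid.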
The result of the Drury–Oja approach is shown in Figure 6.4. For both approaches, we used projection on K = 9 modes, and we solved the Riccati differential equations (2.3a) (with appropriate coefficients for the left and right intervals) and the Drury–Oja equations (2.4) by the Matlab ODE solver ode45 with absolute and relative tolerances of 10^− and 10^−. The domain is [−, ] × [−60, 60], and as matching point we have chosen x∗ = 50, which is roughly the front location (see Figure 6.2). Both approaches produce Evans functions with zeros at (approximately) the same λ-values: there is one eigenvalue with a positive real part, together with several eigenvalues with negative real part (see Table 6.1). Since an eigenvalue with positive real part exists, the wrinkled front is unstable for δ = 3.

Fig. 6.3. The Evans function along the real axis for the wrinkled front at δ = 3, computed using the Riccati approach. The three plots in the bottom row zoom in on the zeros of the Evans function.

Fig. 6.4. Same as Figure 6.3 but with the Drury–Oja approach.

However, the Riccati and Drury–Oja approaches do differ in some respects. The Riccati approach involves integrating a system of half the dimension as compared to the Drury–Oja approach, so we can expect it to be faster, especially for large values of K (in comparable experiments for large K, it was typically two to three times faster). See Ledoux, Malham and Thümmler [21], where some detailed accuracy versus cost comparisons can be found. We also note that the Drury–Oja approach again produced very small values for the Evans function, which, as explained in Section 6.5, is not as alarming as it first seems. The Riccati approach, which uses a different scaling of the Evans function, yields only moderately small values, though again see Section 6.5.

We also used the angle between the unstable and stable subspaces, θ(λ; x∗), to determine the location of the eigenvalues. Though this scales better than the Evans determinant, it is non-negative by definition, and zeros occur as touch-downs in the complex spectral parameter plane. Actual zeros of the function are thus harder to establish compared to sign changes in the real and imaginary parts of the Evans determinant. Further, θ(λ; x∗) is not analytic in λ.

More accurate values for the zeros of the Evans function can be found by using a standard root-finding method. This yields the values listed in Table 6.1. The results of the Riccati and Drury–Oja approaches agree (to the precision shown) in all cases. The table also shows results for different values of the wave number cut-off K. The eigenvalues converge quickly when K increases, and projection on K = 7 modes is sufficient to get six digits of accuracy, at least in this example.

Table 6.1. Zeros of the Evans function for the wrinkled front at δ = 3 for different values of the wave number cut-off K, as computed with the Riccati and Drury–Oja approaches. The last row shows the eigenvalues computed by ARPACK.
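Refining a zero of D(λ) once a sign change has been bracketed on the real axis is a standard scalar root-finding task. The sketch below uses SciPy's brentq on a hypothetical analytic stand-in with a known simple zero; in practice each evaluation of D would be one shooting computation:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical stand-in for the Evans function on the real axis:
# analytic, with a simple zero at lam0.
lam0 = 4.3e-4
def D(lam):
    return (lam - lam0) * np.exp(-lam)

# Bracket a sign change on a coarse grid, then refine with brentq.
grid = np.linspace(-2e-3, 2e-3, 9)
vals = [D(g) for g in grid]
i = next(j for j in range(len(grid) - 1) if vals[j] * vals[j + 1] < 0)
root = brentq(D, grid[i], grid[i + 1], xtol=1e-12)
```

Bracketing by sign change is also why the determinant is preferred over the subspace angle θ(λ; x∗) here: a non-negative function offers no sign change to bracket.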
The largest computation we performed is with K = 24, in which case the one-dimensional system has order 196.

We also computed the eigenvalues by a projection method to check these results. The Comsol package, which we used to compute the travelling front, has an eigenvalue solver based on the Arnoldi method as implemented in ARPACK [22]. This solver uses a shift σ to allow the user to choose on which part of the spectrum to concentrate. We found that the choice of this shift is not straightforward. It is unwise to use a shift which is exactly an eigenvalue, so in particular, the default value of σ = 0 should not be used. We used a shift of σ = 1 or σ = 3, which proved to be safe. The resulting eigenvalues are also listed in Table 6.1. They are in close agreement with the eigenvalues found using the Evans function, giving additional confidence in our computation.

Figure 6.5 shows the contours where the real and imaginary parts are zero; eigenvalues correspond with the intersections of these contours. All eigenvalues are real. Interestingly, the contours are well separated on the boundary of the depicted region of the complex plane, even though the eigenvalues are packed closely together. This suggests that it may be better to study the Evans function on a contour surrounding the eigenvalues instead of concentrating on the eigenvalues themselves; see Section 6.4.

Fig. 6.5. The locus where the real and imaginary parts of the Evans function vanish for the wrinkled front at δ = 3. The Evans function was computed on 5000 grid points in the top half of the depicted region of the complex plane and extended by symmetry to the bottom half.

Figures 6.6 and 6.7 show the Evans function for the case δ = 2.5. The Evans function no longer has a zero with positive real part, meaning that there are no unstable eigenvalues. Hence for this value of δ the wrinkled front is in fact stable.

Fig. 6.6. The Evans function along the real axis for the wrinkled front at δ = 2.5, computed using the Riccati approach. The right panel zooms in on the flat plateau around the origin in the left panel.

Fig. 6.7. Same as Figure 6.6 but with the Drury–Oja approach.

We have thus established the following overall stability scenario for travelling wave solutions of the cubic autocatalysis problem. When δ is less than δ_cr, planar fronts are stable; for δ beyond this instability there exist wrinkled fronts that are stable for δ = 2.
5. However, for δ = 3 these wrinkled fronts themselves become unstable. These stability results may depend on the transverse domain size (which we fixed at L = 120); see Horváth, Petrov, Scott and Showalter [18].

6.4. Detecting unstable eigenvalues. If one is only interested in the stability of the underlying front, then several methods exist to detect eigenvalues in the right-half complex spectral parameter plane which also require relatively few Evans function evaluations. One important method uses the argument principle. We have the following sectorial estimate for the spectrum of L: if λ ∈ σ(L) then

Re(λ) ≤ ‖DF(U_c)‖_{L∞(R×T)},   (6.5a)
Re(λ) + |Im(λ)| ≤ c²/(4κ) + 2‖DF(U_c)‖_{L∞(R×T)},   (6.5b)

where κ ≡ min{B₁, B₂}; we provide a proof in Appendix A. Hence we can restrict our search for unstable eigenvalues to the region of the complex λ-plane bounded by these inequalities.

In our case, we have c ≈ 0.577 and ‖DF(U_c)‖_{L∞} ≈ 1.422 for δ = 2.5, and c ≈ 0.548 and ‖DF(U_c)‖_{L∞} ≈ 1.450 for δ = 3. So the above estimate implies that any eigenvalue λ satisfies Re(λ) < 1.5 and Re(λ) + |Im(λ)| < 3. Any unstable eigenvalue is thus inside the contour bounded by these lines and the imaginary axis; this contour is depicted in the left part of Figure 6.8 (the small semi-circle around the origin is to exclude the double eigenvalue at the origin, as explained below). The Evans function is analytic inside this contour, and thus we can count the number of zeros inside the contour by computing the change of the argument of the Evans function as λ goes around the contour.

This suggests the following method. We divide the contour C into small segments, compute the Evans function at the end points of the segments, find the change in the argument over each segment, and sum these to find the total change over the whole contour. The number of zeros is then the change of the argument divided by 2π. The segments into which the contour is divided have to be chosen small enough that the change of the argument over each segment is smaller than π. Adaptive procedures to achieve this have been proposed by, amongst others, Ying and Katz [31], but for simplicity we determined the partition of the contour by trial and error.

The Evans function, with our choice of initial conditions, satisfies the conjugate symmetry D(λ̄) = \overline{D(λ)}. Hence, it suffices to traverse half the contour C. The plot at the right of Figure 6.8 shows how the argument of D(λ) changes as λ traverses the top half of the contour. We see that the change over the whole contour is 2π, and thus there is one zero inside the contour. This is the zero with positive real part found above.

Fig. 6.8. The left figure shows the contour C in the complex plane. The right figure shows the argument of D(λ) when λ traverses the top half of this contour (for the wrinkled front with δ = 3). The plot is split up into four parts, corresponding to the quarter circle, the segment along the imaginary axis, the diagonal segment, and the segment going down. The scale on the horizontal axis is linear on the first, third and fourth parts and logarithmic on the second part.

The operator L is invariant under translations, and hence it has two eigenvalues at λ = 0. Thus, the Evans function is zero at λ = 0 and its argument is undefined there. For this reason the contour C includes a small semi-circle of radius 10^− to avoid the origin. It is of course possible that we miss some eigenvalues because of this deformation. To guard against this, we also compute the change in the argument of D(λ) as λ traverses a circle around the origin with radius 10^−.
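The segment-wise argument summation described above can be sketched generically; `f` below is a stand-in analytic function with two zeros inside the unit circle (for the actual computation, each evaluation would be of D(λ) on the contour C):

```python
import numpy as np

def winding_number(f, contour):
    """Change of arg f along a closed, discretized contour, divided by 2*pi.
    The discretization must be fine enough that each per-segment jump
    of the argument is smaller than pi."""
    vals = np.array([f(z) for z in contour])
    dargs = np.angle(vals[1:] / vals[:-1])   # per-segment change, in (-pi, pi]
    assert np.all(np.abs(dargs) < np.pi)     # segments small enough
    return np.sum(dargs) / (2 * np.pi)

# Stand-in analytic function: zeros at 0.3 and -0.4i (inside |z| = 1)
# and at 2.0 (outside).
def f(z):
    return (z - 0.3) * (z + 0.4j) * (z - 2.0)

theta = np.linspace(0, 2 * np.pi, 401)       # closed contour |z| = 1
contour = np.exp(1j * theta)
n_zeros = winding_number(f, contour)
```

By the argument principle the result counts exactly the zeros enclosed by the contour (here two); the in-code assertion enforces the smaller-than-π condition that the trial-and-error partition of C has to satisfy.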
We find that the argument changes by 4π, and hence the Evans function has two zeros inside the circle. This shows that we have not missed any eigenvalues by the deformation of C around the origin.

Another method is the parity-index argument (which we have not investigated here, but we include a description for completeness). To determine if there are eigenvalues on the positive real axis, we could use the asymptotic form of D(λ) as λ → +∞, which we derive in Appendix B. A change in sign of the Evans function along the real axis indicates an unstable eigenvalue. This can be exposed by comparing the sign of the Evans function in the asymptotic limit λ → +∞ to its sign elsewhere on the positive real axis, or to the sign of the derivative or second derivative of the Evans function along the real axis evaluated at the origin. Note that to implement this technique we need to ensure that the Evans function evaluated in the neighbourhood of the origin and that generating the asymptotic limit have been normalized, as far as their sign is concerned, in the same way. This may require keeping careful track of the non-zero analytic multiples dropped in our definition of the modified Evans function, i.e. the normalization of the initial data, the scalar exponential factor and the determinants of u⁻ and u⁺ (which can also be computed analytically in the asymptotic limit).

6.5. Scaling of the Evans function. The results reported in Table 6.1 show that the values of λ for which the Evans function D(λ) is zero converge as K → ∞. However, they do not show how the Evans function scales for other values of λ (not in the spectrum) as K becomes large. We see in Figure 6.1 for the planar front that, in the domain of interest with λ of order 10⁻³, the Evans function computed using the Drury–Oja approach with K = 24 is of order 10⁻⁶⁰. Similarly, for the wrinkled front, we see in Figures 6.4 and 6.7, also computed using the Drury–Oja approach with K = 9, that the scale of the Evans function is of order 10⁻⁴² for λ of order 10⁻³.
These small values we find for the Evans function, far smaller than machine precision, suggest that the computation might be inaccurate because of round-off error. This is not the case here. The solution of the Drury–Oja equation (2.4) does not contain such small numbers. They only appear when the determinant is computed at the matching point, which is a stable computation; see for example Higham [17]. Indeed, the two-dimensional Evans function for K = 24 is the product of 2K + 1 = 49 one-dimensional Evans functions, and if each of them is 0.06, not a very small number, then their product is of the order 10⁻⁶⁰, which is a very small number.

For the modified Evans function we construct using the Riccati approach, we see in Figure 6.3 that, in the domain of interest, it is of order 10⁻⁶ when K = 9. If we reproduce Figure 6.3 for increasing values of K, we find that the modified Evans function actually grows. For large K, it increases roughly by a factor of 10 per unit increase in K. However, it is important to emphasize that the solutions to the Riccati systems, which are used to construct the modified Evans function, remain bounded. It is only when we evaluate the determinant that the large scale appears: for example, when K = 21 the scale of the corresponding plot is of order 10⁶. Hence the question of the scaling of the Evans function comes into play in the multi-dimensional context.

In the one-dimensional case, the standard definition of the Evans function is (2.6), given in Alexander, Gardner and Jones [1]. Of course, this definition stands modulo any analytic non-zero multiplicative factor; after all, it is the zeros of the Evans function we are typically after. A different appropriate normalization of the initial data Y₀^±(λ) rescales the Evans function by such a factor. Further, it is common to drop the scalar exponential factor.
Hence, for example, the difference between the Humpherys and Zumbrun Evans function (2.8) and the standard Evans function is that the scalar exponential factor is dropped and the initial data might be normalized differently when it is QR-factorized. For the modified Evans function in (2.7), we dropped the exponential factor and also the determinants of some full-rank submatrices in the determinant of the standard Evans function.

However, such analytic non-zero factors impact the scale of the Evans function in the multi-dimensional context, as they depend on K. This can lead to growth or decay of the Evans function employed away from the eigenvalues as K is increased, depending on which factors are kept and which are dropped. This is precisely what we have observed for the Drury–Oja and Riccati approaches, which decay and grow, respectively, as K increases.

Gesztesy, Latushkin and Zumbrun [12] consider the 2-modified Fredholm determinant associated with the operator L. Importantly, the 2-modified Fredholm determinant converges as K → ∞. They treat the self-adjoint case; the non-self-adjoint operator L we consider can, using suitable exponential weight functions, be transformed into a self-adjoint one (numerically, though, this is not necessary and could introduce further complications due to the introduction of exponential factors in the potential term). Gesztesy, Latushkin and Zumbrun define the Evans function much like we have, through a Galerkin approximation, using the matching condition (2.6). Their normalization, natural to their context, involves ensuring the Evans function does not depend on the coordinate system chosen and is invariant to similarity transformations (see Gesztesy, Latushkin and Makarov [11]).
Crucially, Gesztesy, Latushkin and Zumbrun demonstrate in Theorem 4.15 that their 2-modified Fredholm determinant equals the Evans function multiplied by an exponential factor (non-zero and analytic) whose exponent diverges as K → ∞ (it can diverge to positive or negative infinity depending on the precise form of the coefficient matrix A(x; λ)). Hence their Evans function diverges as well. Consequently, the 2-modified Fredholm determinant is a natural candidate for the canonical Evans function.

With this in mind, our observation regarding the measured divergence of the modified and Humpherys and Zumbrun Evans functions in our experiments is not too surprising. Their divergence (the modified Evans function to large values, and the Humpherys and Zumbrun Evans function to small values) is simply a reflection of the over/under-estimate of the crucial, correct multiplicative exponential factor that equates these Evans functions with the 2-modified Fredholm determinant of Gesztesy, Latushkin and Zumbrun, which guarantees a finite limit as K → ∞. It would be interesting to see whether one can explicitly compute the factor relating the Evans function of Gesztesy, Latushkin and Makarov to the Evans function as defined in this paper.
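The determinant-scale effect discussed in this section is elementary to reproduce: forty-nine factors, none of them individually small, multiply to a number far below machine epsilon.

```python
import math

K = 24
factors = [0.06] * (2 * K + 1)        # 49 moderate one-dimensional values
product = math.prod(factors)          # scale of the 2D Evans determinant
log10_product = sum(math.log10(f) for f in factors)
```

Working with log magnitudes, rather than the raw product, is the usual safeguard once this underflow territory is in play; double precision only reaches down to about 10⁻³⁰⁸ before the product itself would flush to zero.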
7. Concluding remarks.
Our goal in this paper was to show, for the first time, that spectral shooting methods can be extended to study the stability of genuinely multi-dimensional travelling fronts. It was already obvious to the people working in this field that this can be done by a Fourier decomposition in the transverse dimension, thus reducing the problem to a large one-dimensional problem; however, no-one had yet managed to tackle the resulting large one-dimensional problems. This issue was overcome in theory by the work of Humpherys and Zumbrun [19] and latterly that of Ledoux, Malham and Thümmler [21], which showed that enough information for matching can be retained by integrating along the natural underlying Grassmannian manifolds (though these ideas have been well known in control theory and quantum chemistry for a long while; see for example Hermann and Martin [16] and Johnson [20]). The dimension of the Grassmannian manifolds scales with the square of the order of the original problem. These methods brought the possibility of multi-dimensional shooting within our grasp, but there were still challenges to be overcome. We have addressed the most immediate of these in this paper:
1. How to project the problem onto the finite transverse Fourier basis and ensure a numerically well-posed large system of linear ordinary differential spectral equations;
2. How to construct the genuinely multi-dimensional fronts; this was a particular challenge which we managed to overcome using the combined techniques of freezing and parameter continuation (the latter for the unstable multi-dimensional fronts);
3. How to choose a non-trivial but well-known example problem, which exhibits the development of planar front instabilities (an important initial testing ground for us) but also two-dimensional wrinkled fronts that themselves become unstable as a physical parameter is varied;
4. How to apply the Humpherys and Zumbrun continuous orthogonalization as well as the Riccati methods to study the stability of the wrinkled fronts, in particular integrating a large system of order 196 (using K = 24 transverse modes); hitherto the largest system considered in Evans function calculations was of order 6 (see Allen and Bridges [2]);
5. Ultimately, demonstrating the feasibility, accuracy, numerical convergence and robustness of multi-dimensional shooting to determine eigenvalues.

However, there is still more to do. Though multi-dimensional shooting is competitive in terms of accuracy, how does it compare in terms of computational cost? There are multiple sources of error, including: approximation of the underlying wrinkled front, domain truncation, finite projection onto the transverse Fourier subspace, approximation of the solution to the projected one-dimensional spectral equations, and the matching position. Our experience is that the accuracy of the underlying wrinkled front is especially important. Although we have endeavoured to keep all these errors small, not only for additional confidence in our numerical results but also for optimal efficiency, we would like to know the relative influence of each of these sources of error in order to target our computational effort more effectively. Finally, we would also like to see how the method transfers to hyperbolic travelling waves and more complicated mixed partial differential systems.

Acknowledgements.
SJAM attended a workshop at AIM in Palo Alto in May 2005 on “Stability criteria for multi-dimensional waves and patterns”, organised by Chris Jones, Yuri Latushkin, Bob Pego, Björn Sandstede and Arnd Scheel, that instigated this work. The authors would like to thank Marcel Oliver for his invaluable advice and help at the beginning of this project, and also Gabriel Lord, Tom Bridges, Björn Sandstede and Jacques Vanneste for useful discussions. A large part of this work was conceived and computed while all four authors were visiting the Isaac Newton Institute in the Spring of 2007. We would like to thank Arieh Iserles and Ernst Hairer for inviting us and providing so much support and enthusiasm. Lastly, we would like to thank the anonymous referees whose helpful comments and suggestions markedly improved the overall presentation of the paper.
Appendix A. Sectorial estimate on the spectrum.
We follow the standard arguments that prove that the parabolic linear operator $\mathcal{L}$ is sectorial; see for example Brin [5, p. 26]. First consider the $L^2$ complex inner product with the spectral problem
$$ B\,\Delta U + c\,\partial_x U + \mathrm{D}F(U_c)\,U = \lambda U. $$
Premultiplying this spectral equation by $U^\dagger$, and integrating over the domain $\mathbb{R}\times\mathbb{T}$, we get
$$ -\int_{\mathbb{R}\times\mathbb{T}} \nabla U^\dagger\, B\,\nabla U \,\mathrm{d}x\,\mathrm{d}y + \int_{\mathbb{R}\times\mathbb{T}} c\,U^\dagger\partial_x U + U^\dagger\,\mathrm{D}F(U_c)\,U \,\mathrm{d}x\,\mathrm{d}y = \lambda\,\|U\|_{L^2}^2. $$
In this last equation, if we set $B = P^{\mathrm{T}}P$ where $P \equiv \operatorname{diag}\bigl\{B_1^{1/2}, B_2^{1/2}\bigr\}$, we get
$$ \lambda\,\|U\|_{L^2}^2 + \|P\nabla U\|_{L^2}^2 = \int_{\mathbb{R}\times\mathbb{T}} c\,U^\dagger\partial_x U + U^\dagger\,\mathrm{D}F(U_c)\,U \,\mathrm{d}x\,\mathrm{d}y. \tag{A.1} $$
If we consider the complex conjugate transpose of the spectral equation, postmultiply by $U$, and integrate over the domain $\mathbb{R}\times\mathbb{T}$, we get
$$ \bar\lambda\,\|U\|_{L^2}^2 + \|P\nabla U\|_{L^2}^2 = \int_{\mathbb{R}\times\mathbb{T}} c\,(\partial_x U^\dagger)\,U + U^\dagger\bigl(\mathrm{D}F(U_c)\bigr)^{\mathrm{T}}U \,\mathrm{d}x\,\mathrm{d}y. $$
If we now add these last two equations and divide by two, the advection terms combine to $\tfrac{c}{2}\int \partial_x(U^\dagger U)\,\mathrm{d}x\,\mathrm{d}y$, which vanishes by the far-field decay of $U$, and we get
$$ \operatorname{Re}(\lambda)\,\|U\|_{L^2}^2 + \|P\nabla U\|_{L^2}^2 = \tfrac12\int_{\mathbb{R}\times\mathbb{T}} U^\dagger\bigl(\mathrm{D}F + (\mathrm{D}F)^{\mathrm{T}}\bigr)(U_c)\,U \,\mathrm{d}x\,\mathrm{d}y \le \|\mathrm{D}F(U_c)\|_{L^\infty}\,\|U\|_{L^2}^2. \tag{A.2} $$
Now dividing through by $\|U\|_{L^2}^2$ we get (6.5a).
Next consider taking the imaginary part of (A.1), followed by the absolute value on both sides, and use Hölder's inequality to get
$$ |\operatorname{Im}(\lambda)|\,\|U\|_{L^2}^2 \le c\,\|U\|_{L^2}\,\|\nabla U\|_{L^2} + \|\mathrm{D}F(U_c)\|_{L^\infty}\,\|U\|_{L^2}^2. \tag{A.3} $$
Adding inequalities (A.2) and (A.3), using that $\|P\nabla U\|_{L^2}^2 \ge \kappa\,\|\nabla U\|_{L^2}^2$, where $\kappa \equiv \min\{B_1, B_2\}$, and the arithmetic-geometric mean inequality
$$ c\,\|U\|_{L^2}\,\|\nabla U\|_{L^2} \le \frac{c^2}{4\kappa}\,\|U\|_{L^2}^2 + \kappa\,\|\nabla U\|_{L^2}^2, $$
we get (6.5b) after dividing through by $\|U\|_{L^2}^2$.

Appendix B. Evans function when spectral parameter is large.
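The reduction below rests on the Kronecker determinant identity $\det(C\otimes I_k) = (\det C)^k$, which factors the characteristic polynomial of the far-field constant-coefficient system. As an illustrative sanity check of that identity (a sketch only, not part of the paper's computations; the matrix `C` here is a generic complex 2x2 block standing in for $\mu^2 I - \tilde{B}$):

```python
import numpy as np

# Check det(C (x) I_k) = (det C)^k for a generic complex 2x2 block C.
rng = np.random.default_rng(7)
C = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
k = 5  # stands in for the 2K + 1 transverse Fourier modes

lhs = np.linalg.det(np.kron(C, np.eye(k)))  # determinant of the 2k x 2k Kronecker product
rhs = np.linalg.det(C) ** k

assert np.isclose(lhs, rhs)
```

The identity holds because the eigenvalues of $C\otimes I_k$ are those of $C$, each repeated $k$ times.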
We follow the arguments for such results given in Alexander, Gardner and Jones [1, p. 186], and rescale $\tilde{x} = \sqrt{|\lambda|}\,x$ in (3.5) for each $k = -K, -K+1, \ldots, K$, taking the limit $|\lambda| \to \infty$ to get
$$ \partial_{\tilde{x}\tilde{x}}\hat{U}_k - \mathrm{e}^{\mathrm{i}\arg\lambda}\,B^{-1}\hat{U}_k = 0. $$
Hence we have a decoupled system of $2(2K+1)$ equations of the form
$$ \partial_{\tilde{x}}\begin{pmatrix}\hat{U}\\ \hat{P}\end{pmatrix} = \begin{pmatrix} O_{2(2K+1)} & I_2\otimes I_{2K+1} \\ \tilde{B}\otimes I_{2K+1} & O_{2(2K+1)} \end{pmatrix}\begin{pmatrix}\hat{U}\\ \hat{P}\end{pmatrix}, $$
where $\tilde{B} = \mathrm{e}^{\mathrm{i}\arg\lambda}\,B^{-1}$. This is a constant-coefficient linear problem and the so-called spatial eigenvalues $\mu$ are the roots of the characteristic polynomial
$$ \det\bigl((\mu^2 I_2 - \tilde{B})\otimes I_{2K+1}\bigr) = \bigl(\det(\mu^2 I_2 - \tilde{B})\bigr)^{2K+1}, $$
where for any matrix $C$ we have used that $\det\bigl(C\otimes I_k\bigr) \equiv \det\bigl(C^k\bigr) \equiv \bigl(\det C\bigr)^k$. Hence the spatial eigenvalues are $\pm\tilde{B}_1^{1/2}$, $\pm\tilde{B}_2^{1/2}$, each with algebraic multiplicity $2K+1$. Associated with each of these four basic spatial eigenvalue forms we have the $(2K+1)$-dimensional eigenspaces:
$$ \mu = +\tilde{B}_1^{1/2}\colon\ \operatorname{span}\{e_{2k}(+\tilde{B}_1^{1/2}) : k = -K, -K+1, \ldots, +K\}, $$
$$ \mu = -\tilde{B}_1^{1/2}\colon\ \operatorname{span}\{e_{2k}(-\tilde{B}_1^{1/2}) : k = -K, -K+1, \ldots, +K\}, $$
$$ \mu = +\tilde{B}_2^{1/2}\colon\ \operatorname{span}\{e_{2k+1}(+\tilde{B}_2^{1/2}) : k = -K, -K+1, \ldots, +K\}, $$
$$ \mu = -\tilde{B}_2^{1/2}\colon\ \operatorname{span}\{e_{2k+1}(-\tilde{B}_2^{1/2}) : k = -K, -K+1, \ldots, +K\}, $$
where $e_i(\beta)$ is the vector with $1$ in position $i$, with $\beta$ in position $2(2K+1)+i$, and zeros elsewhere. Hence as $|\lambda| \to \infty$, the Evans function has the asymptotic form
$$ D(\lambda) \sim \det\begin{pmatrix} I_2\otimes I_{2K+1} & -I_2\otimes I_{2K+1} \\ \tilde{B}^{1/2}\otimes I_{2K+1} & \tilde{B}^{1/2}\otimes I_{2K+1} \end{pmatrix} = 2^{2(2K+1)}\det\bigl(\tilde{B}^{1/2}\otimes I_{2K+1}\bigr) = 2^{2(2K+1)}\bigl(\delta^{-1/2}\mathrm{e}^{\mathrm{i}\arg\lambda}\bigr)^{2K+1}. $$

REFERENCES

[1] J.C. Alexander, R. Gardner, and C.K.R.T. Jones,
A topological invariant arising in the stability analysis of traveling waves, J. Reine Angew. Math. 410 (1990), pp. 167–212.
[2] L. Allen and T.J. Bridges, Numerical exterior algebra and the compound matrix method, Numer. Math. 92(2) (2002), pp. 131–149.
[3] N.J. Balmforth, R.V. Craster and S.J.A. Malham, Unsteady fronts in an autocatalytic system, Proc. R. Soc. Lond. A 455 (1999), pp. 1401–1433.
[4] J. Billingham and D. Needham, The development of travelling waves in quadratic and cubic autocatalysis with unequal diffusion rates, I and II, Phil. Trans. R. Soc. Lond. A 334 (1991), pp. 1–124, and 336 (1991), pp. 497–539.
[5] L.Q. Brin, Numerical testing of the stability of viscous shock waves, PhD thesis, Indiana University, Bloomington, 1998.
[6] W.-J. Beyn and V. Thümmler, Freezing solutions of equivariant evolution equations, SIAM J. Appl. Dyn. Syst. 3(2) (2004), pp. 85–116.
[7] Å. Björck and G.H. Golub, Numerical methods for computing angles between linear subspaces, Math. Comp. 27 (1973), pp. 579–594.
[9] J. Deng and S. Nii, Infinite-dimensional Evans function theory for elliptic eigenvalue problems in a channel, J. Differential Equations 225 (2006), pp. 57–89.
[10] J.W. Evans, Nerve axon equations, IV: The stable and unstable impulse, Indiana Univ. Math. J. 24 (1975), pp. 1169–1190.
[11] F. Gesztesy, Y. Latushkin and K.A. Makarov, Evans functions, Jost functions, and Fredholm determinants, Arch. Rational Mech. Anal. 186 (2007), pp. 361–421.
[12] F. Gesztesy, Y. Latushkin and K. Zumbrun, Derivatives of (modified) Fredholm determinants and stability of standing and travelling waves, J. Math. Pures Appl. 90(2) (2008), pp. 160–200.
[13] J. Greenberg and M. Marletta, Numerical solution of non-self-adjoint Sturm-Liouville problems and related systems, SIAM J. Numer. Anal. 38(6), pp. 1800–1845.
[14] P. Griffiths and J. Harris, Principles of algebraic geometry, Wiley Classics Library Edition, 1994.
[15] D. Henry, Geometric theory of semilinear parabolic equations, Lecture Notes in Mathematics 840, Springer-Verlag, 1981.
[16] R. Hermann and C. Martin, Lie and Morse theory for periodic orbits of vector fields and matrix Riccati equations, I: general Lie-theoretic methods, Math. Systems Theory 15 (1982), pp. 277–284.
[17] N. Higham, Accuracy and Stability of Numerical Algorithms, second edition, SIAM, 2002.
[18] D. Horváth, V. Petrov, S.K. Scott and K. Showalter, Instabilities in propagating reaction-diffusion fronts, J. Chem. Phys. 98(8) (1993), pp. 6332–6343.
[19] J. Humpherys and K. Zumbrun, An efficient shooting algorithm for Evans function calculations in large systems, Physica D 220 (2006), pp. 116–126.
[20] B.R. Johnson, New numerical methods applied to solving the one-dimensional eigenvalue problem, The Journal of Chemical Physics 67(9) (1977), pp. 4086–4093.
[21] V. Ledoux, S.J.A. Malham and V. Thümmler, Computing the Evans function using Grassmannians, submitted to Math. Comp., arXiv:0710.1037v2, 2 Sept 2008.
[22] R.B. Lehoucq, D.C. Sorensen and C. Yang, ARPACK users guide: Solution of large scale eigenvalue problems with implicitly restarted Arnoldi methods, SIAM, Philadelphia, 1998.
[23] A. Malevanets, A. Careta and R. Kapral, Biscale chaos in propagating fronts, Phys. Rev. E 52 (1995), pp. 4724–4735.
[24] J.W. Milnor and J.D. Stasheff, Characteristic classes, Annals of Mathematics Studies 76, Princeton University Press, 1974.
[25] B. Sandstede, Stability of travelling waves, in Handbook of Dynamical Systems II, B. Fiedler, ed., Elsevier (2002), pp. 983–1055.
[26] B. Sandstede and A. Scheel, Absolute and convective instabilities of waves on unbounded and large bounded domains, Physica D 145 (2000), pp. 233–277.
[27] J. Schiff and S. Shnider, A natural approach to the numerical integration of Riccati differential equations, SIAM J. Numer. Anal. 36(5) (1999), pp. 1392–1413.
[28] N. Steenrod, The topology of fibre bundles, Princeton University Press, 1951.
[29] D. Terman, Stability of planar wave solutions to a combustion model, SIAM J. Math. Anal. 21 (1990), pp. 1139–1171.
[30] V. Thümmler, Numerical analysis of the method of freezing traveling waves, PhD thesis, Bielefeld University, 2005.
[31] X. Ying and I. Katz,