EXPODE -- Advanced Exponential Time Integration Toolbox for MATLAB
GEORG JANSING, Heinrich-Heine-Universität Düsseldorf
April 18, 2014
Abstract
We present a MATLAB toolbox for five different classes of exponential integrators for solving (mildly) stiff ordinary differential equations or time-dependent partial differential equations. For the efficiency of such exponential integrators it is essential to approximate the products of the matrix functions arising in these integrators with vectors in a stable, reliable and efficient way. The toolbox contains options for computing the matrix functions directly by diagonalization or by Padé approximation. For large scale problems, Krylov subspace methods are implemented as well.

The main motivation for this toolbox was to provide general tools which, on the one hand, allow one to easily run experiments with different exponential integrators and, on the other hand, are easily extensible, making it simple to include other methods or other options to compute the matrix functions. Thus we implemented the toolbox to be compatible with the ODE solvers included in MATLAB. Most of the methods can be used with adaptive stepping.
1 Introduction

In this paper we present our new MATLAB toolbox EXPODE for exponential integrators, containing some of the most prominent integrators developed recently. Our code is based on the implementations provided in [10] and [8]. We strongly recommend the reader to consult the recent and well-written review on exponential integrators by Hochbruck and Ostermann [11]. Historical remarks can be found in [16].

Exponential integrators are usually applied to stiff ordinary differential equations where the stiffness is generated by a linear term. The non-linearity is usually assumed to be well approximated by polynomials. Typical applications are abstract or discretized parabolic or hyperbolic partial differential equations. The unprecedented memory capacity of modern computers can handle correspondingly fine spatial discretizations. Implicit time integration schemes need to solve non-linear systems of equations whose dimension is a multiple of the dimension of the underlying differential equation. As a consequence, the solution of these systems becomes very costly. Classical explicit schemes, while generally less costly per time step, pose a different problem: their time step sizes and spatial element sizes are often coupled by stability requirements. Explicit exponential methods can often avoid, or at least weaken, these conditions and thus allow a more efficient use of the memory capacity.

Target equations of our toolbox are very high-dimensional systems, where the overhead of careful memory management and arithmetic is justified by the high computational costs. We allow direct computation of the actions of the arising matrix functions on vectors, or their approximation by a Krylov method. We provide adaptive step size control for some of the methods and support both autonomous and non-autonomous equations.

For an easy entry we are as compatible as possible with MATLAB's ODE toolbox. We also give fine-grained control over the integration process and have a focus on extensibility, such that new functionality can be added with only minimal changes to EXPODE's code.

In some numerical experiments we benchmark EXPODE's performance. First we use the well-known van der Pol equation to test our adaptive step size choices, and then use problems typical for studying exponential integrators [7, 10]. In a more extensive experiment at the end we investigate EXPODE's potential when applied to the hyperbolic Maxwell's equations, with the spatial discretization provided by another specialized and well-written MATLAB package, the discontinuous Galerkin finite elements of [6].
2 Exponential Integrators

In this section we briefly describe the class of problems which can be solved using our toolbox and the methods which are implemented.
2.1 Problem Statement

We consider nonlinear initial value problems
$$ u'(t) = F\bigl(t, u(t)\bigr), \qquad u(t_0) = u_0, \tag{1} $$
where $u : \mathbb{R} \to \mathbb{C}^d$ and $F : \mathbb{R} \times \mathbb{C}^d \to \mathbb{C}^d$. Exponential integrators are based on a linearization of $F$. One can usually distinguish two different types. The first one uses a fixed linearization,
$$ u'(t) = J u(t) + g\bigl(t, u(t)\bigr), \qquad u(t_0) = u_0, \tag{2} $$
where $J \in \mathbb{C}^{d \times d}$ is an approximation to the Jacobian of $F$. Roughly speaking, an exponential integrator can be expected to be efficient if $J$ contains the stiff part of $F$ and if $g$ is nice in the sense that $g(t, u(t))$ is a smooth function of $t$, where $u$ is the solution of (1).

The second option is to linearize in each time step,
$$ u'(t) = F(t, u(t)) = J_n u(t) + d_n t + g_n\bigl(t, u(t)\bigr), \qquad u(t_n) = u_n, \tag{3a} $$
where
$$ J_n = \frac{\partial F}{\partial u}(t_n, u_n), \qquad d_n = \frac{\partial F}{\partial t}(t_n, u_n), \qquad g_n(t, u(t)) = F(t, u(t)) - J_n u(t) - d_n t. \tag{3b} $$
For details on the construction and on the analysis of exponential integrators for time-dependent partial differential equations we refer to the review [11] and the references therein.

2.2 Exponential One-Step Methods

2.2.1 Exponential Runge–Kutta Methods

A general exponential Runge–Kutta scheme applied to (2) is of the form
$$ Y_{ni} = e^{c_i h_n J} u_n + h_n \sum_{j=1}^{s} a_{ij}(h_n J)\, G_{nj}, \qquad 1 \le i \le s, \tag{4a} $$
$$ u_{n+1} = e^{h_n J} u_n + h_n \sum_{i=1}^{s} b_i(h_n J)\, G_{ni}, \tag{4b} $$
where
$$ G_{ni} := g(t_n + c_i h_n, Y_{ni}). \tag{5} $$
The coefficient functions $b_i(z)$ are linear combinations of the entire functions
$$ \varphi_k(z) = \int_0^1 e^{(1-\tau) z}\, \frac{\tau^{k-1}}{(k-1)!}\, \mathrm{d}\tau, \qquad k \ge 1, \tag{6} $$
and the $a_{ij}(z)$ of $\varphi_k(c_i z)$, respectively. The following recurrence formula is satisfied:
$$ \varphi_{k+1}(z) = \frac{\varphi_k(z) - \varphi_k(0)}{z}, \qquad \varphi_0(z) = e^z. \tag{7} $$
These functions play a very important role in exponential integrators, and the efficient computation or approximation of $\varphi_k(\gamma h_n J)\, u$ for $\gamma \in [0, 1]$ is essential, see section 3.2 below.

If the following simplifying assumptions on the coefficient functions $a_{ij}$ and $b_i$ are satisfied,
$$ \sum_{i=1}^{s} b_i(z) = \varphi_1(z), \qquad \sum_{j=1}^{s} a_{ij}(z) = c_i \varphi_1(c_i z), \qquad 1 \le i \le s, \tag{8} $$
cf.
[7], the scheme (4) can be reformulated as
$$ Y_{ni} = u_n + c_i h_n \varphi_1(c_i h_n J)\, F(t_n, u_n) + h_n \sum_{j=1}^{s} a_{ij}(h_n J)\, D_{nj}, \qquad 1 \le i \le s, \tag{9a} $$
$$ u_{n+1} = u_n + h_n \varphi_1(h_n J)\, F(t_n, u_n) + h_n \sum_{i=1}^{s} b_i(h_n J)\, D_{ni}, \tag{9b} $$
where
$$ D_{nj} = g(t_n + c_j h_n, Y_{nj}) - g(t_n, u_n). \tag{10} $$
Thus, all (internal) stages can be interpreted as corrected exponential Euler steps. In our package, we restrict ourselves to explicit schemes ($a_{ij}(z) = 0$ for $j \ge i$) satisfying (8). Note that this implies $D_{n1} = 0$, and the sum over the inner stages in (9) actually starts with two.

The package includes implementations of the exponential Euler method ($s = 1$), two two-stage schemes proposed by Strehmel and Weiner [20, Example 4.2.2] (cf. (2.39) and (2.40) in [11]), two three-stage methods by Hochbruck and Ostermann (cf. (5.8) and (5.9) in [7]), two methods with four stages, namely the ETD4RK method by Cox and Matthews [3] and the ETD4RK-B method by Krogstad [14], and a method with five stages (cf. (5.19) by Hochbruck and Ostermann in [7]). For details on the convergence properties of these methods in a framework of parabolic partial differential equations we refer to [7]. Our implementation allows one to easily add other explicit Runge–Kutta schemes, see section 4.3 below. We implemented these methods for constant step sizes only.

2.2.2 Exponential Rosenbrock-Type Methods

If we apply an exponential Runge–Kutta scheme to the linearized equation (3) we obtain exponential Rosenbrock-type methods [8]. For an efficient implementation, these methods should also be reformulated such that most of the matrix functions are multiplied with vectors of small norm:
$$ D_{nj} = g_n(t_n + c_j h_n, Y_{nj}) - g_n(t_n, u_n), \tag{11} $$
where $g_n(t, u) = F(t, u) - J_n u - d_n t$. This yields
$$ Y_{ni} = u_n + c_i h_n \varphi_1(c_i h_n J_n)\, F(t_n, u_n) + c_i^2 h_n^2 \varphi_2(c_i h_n J_n)\, d_n + h_n \sum_{j=2}^{i-1} a_{ij}(h_n J_n)\, D_{nj}, \tag{12a} $$
$$ u_{n+1} = u_n + h_n \varphi_1(h_n J_n)\, F(t_n, u_n) + h_n^2 \varphi_2(h_n J_n)\, d_n + h_n \sum_{i=2}^{s} b_i(h_n J_n)\, D_{ni}. \tag{12b} $$
For autonomous problems, we have $d_n = 0$.

The toolbox contains the exponential Rosenbrock–Euler method, where $s = 1$, and the methods exprb3 and exprb4 that are proposed in section 5.1 of [8]. For all three methods an error estimator is available, which makes it possible to use variable step sizes. The latter two methods use an embedded scheme for the estimation of the local error. For the exponential Rosenbrock–Euler method we used the error estimator described in [2].

2.2.3 exp4

The exp4 scheme proposed in [10] was the seed for many activities on exponential integrators and matrix functions. With the paper, the authors provided
MATLAB and C-codes of exp4, which can be used easily. The exp4 scheme has two different error estimators and also features a dense output formula to evaluate the numerical solution at arbitrary times. Our EXPODE package is actually inspired by the original exp4 codes. In particular, a lot of fine tuning of the Krylov method for the approximation of the matrix functions (see section 3.2) was motivated by this integrator. exp4 can be interpreted as a special case of an exponential Rosenbrock-type method which uses the $\varphi_1$ function only.

2.3 Exponential Multistep Methods

2.3.1 Exponential Adams Methods

Motivated by classical Adams methods [5, 17], their exponential counterpart, exponential Adams methods, were constructed for the solution of semilinear problems (2) in [?]. In contrast to classical methods, the interpolation is done for the nonlinearity $g$ instead of the full right-hand side $F$.

For a constant step size $h$, an exponential $k$-step Adams method has the form
$$ u_{n+1} = e^{-hJ} u_n + h \sum_{j=0}^{k-1} \gamma_j(-hJ)\, \nabla^j G_n = u_n + h \varphi_1(-hJ)\,(G_n - J u_n) + h \sum_{j=1}^{k-1} \gamma_j(-hJ)\, \nabla^j G_n \tag{13} $$
with coefficient functions
$$ \gamma_j(z) = (-1)^j \int_0^1 e^{(1-\tau) z} \binom{-\tau}{j} \mathrm{d}\tau, \qquad \text{where} \quad \binom{-\tau}{j} = \frac{1}{j!} \prod_{k=0}^{j-1} (-\tau - k). \tag{14} $$
Here,
$$ \nabla^0 G_n = G_n, \qquad \nabla^{j+1} G_n = \nabla^j G_n - \nabla^j G_{n-1}, \qquad j \ge 0, $$
denote the backward differences for $G_j = g(t_j, u_j)$, $j = 1, \ldots, n$. The coefficient functions are linear combinations of the $\varphi$ functions (6). An analysis of these methods is given in [11]. To start the $k$-step methods, we used the fixed point iteration proposed in [12]. Alternatively, we also provide the option to use an exponential Runge–Kutta method to compute the starting values. The package contains exponential $k$-step Adams methods for $k = 1, \ldots$.

2.3.2 Linearized Exponential Multistep Methods

The same idea of interpolation is now applied to the linearized equation (3). An additional order of accuracy is gained by exploiting
$$ \frac{\partial g_n}{\partial u}(t_n, u_n) = 0, \qquad \frac{\partial g_n}{\partial t}(t_n, u_n) = 0. $$
See [12] for details. The interpolation polynomial $\widehat{p}_n$ now additionally satisfies $\widehat{p}\,'_n(t_n) = 0$ accordingly.
These linearized exponential multistep methods are defined as
$$ u_{n+1} = u_n + h \varphi_1(h J_n)\, F(t_n, u_n) + h^2 \varphi_2(h J_n)\, d_n + h \sum_{j=1}^{k-1} \widehat{\gamma}_{j+1}(h J_n) \sum_{\ell=1}^{j} \ell\, \nabla^{\ell} G_{n,n}, \tag{15a} $$
with weights
$$ \widehat{\gamma}_{j+1}(z) = (-1)^{j+1} \int_0^1 e^{(1-\tau) z}\, \tau \binom{-\tau}{j} \mathrm{d}\tau \tag{15b} $$
and backward differences now based on $G_{n,m} = g_n(t_m, u_m)$, keeping the first index fixed. In addition to this general scheme, an implementation of the scheme proposed in [21, formula (39)] is contained in our toolbox. Again using the fixed point iteration proposed in [12], we implemented integrators for $k = 1, \ldots$. Using an exponential Rosenbrock scheme for the initial steps instead, we can obtain linearized exponential multistep methods up to $k = 4$ due to the accuracy of the one-step methods.

3 Implementation

In this section we discuss some details of our implementation.
3.1 Step Size Control

Step size control is provided for exponential Rosenbrock-type methods and exp4 via a standard Gustafsson approach [5, pp. 31–35 and pp. 550ff], together with different norms of the scaled error vector
$$ \widehat{e}_n = \left( \frac{e_n^{(i)}}{sc^{(i)}} \right)_{i=1}^{d}, \qquad sc = ATol + \max\{|u_n|, |u_{n-1}|\} \cdot RTol, $$
where $(i)$ denotes the $i$-th component and $e_n$ is the estimated error in the $n$-th time step. By default we use the maximum norm (matching MATLAB's defaults), but one can easily switch to other norms like the Euclidean norm, or to user-defined norms given by a MATLAB function or by an inner product defined via its Gramian matrix. This is implemented in the options NormControl and NormFunction.

The desired accuracy is determined by the absolute (ATol) and relative (RTol) tolerances and the chosen norm. An implementation using an iterative process for the evaluation of the matrix functions also has an impact on the step size selection. We will discuss this in the following section.
3.2 Matrix Functions

We now turn our attention to the matrix functions arising in exponential integrators. Since the $\varphi$ functions are analytic, $\varphi_k(hJ)$ can be computed via diagonalization if $J$ is diagonalizable. The evaluation of $\varphi_k$ at the eigenvalues can be computed directly from (6) via
$$ \varphi_k(z) = \begin{cases} \dfrac{e^z - \sum_{j=0}^{k-1} z^j / j!}{z^k}, & z \ne 0, \\[1ex] \dfrac{1}{k!}, & z = 0. \end{cases} \tag{16} $$
In finite precision arithmetic this formula suffers from cancellation near the origin, so we use a sufficiently high-order Padé approximation in a neighborhood of zero. Alternatively, one could break down the recursion formula (7) to the evaluation of the exponential function $\varphi_0(z) = e^z$, or use contour integration [13, 19, 15, 22].

Diagonalization is inefficient for large matrices, but fortunately, exponential integrators do not require the complete matrix function but only products $\varphi_k(h_n J)\, v$ with a vector $v$. These can be approximated in a Krylov subspace with respect to $J$ and $v$, see [4, 9]. Krylov methods have the advantage that they require only the evaluation of matrix-vector products $Jv$. It is not necessary to compute the full matrix $J$ explicitly. The error is controlled via the error estimators proposed in [10, 18].

We implemented configurable maximal Krylov subspace dimensions (cf. the KrylovMaxDim option). If the Krylov approximation fails to converge within the maximum allowed dimension of the Krylov space, the step size $h$ has to be reduced such that, with the reduced step size, the error estimator fulfills the accuracy requirements. If we cannot reduce the step size (e.g. if we run a code for constant step sizes), a warning is triggered. Note that all products of $\varphi$ functions with the same vector $v$ can be approximated in one Krylov subspace. The dimension of this subspace is chosen such that all $\varphi$ functions are approximated sufficiently well.

In addition, we reuse data from previous steps if possible.
For instance, if we compute matrix functions via diagonalization to solve semilinear ODEs (2) and use the same step size in the next time step, we do not have to recompute $\varphi_k(h_n J)$. We also reuse the Krylov subspace for $\varphi_k(h_n J_n)\, F(t_n, u_n)$ if we have to reduce the step size due to a step rejection.

For statistical purposes, we provide data about the Krylov process. This data is displayed at the end of the integration process by default. A typical output looks like this:

statistics:
[ ... ]
number of matrix function evaluation times vector: 331
number of Krylov subspaces: 147
total number of Krylov steps: 1455
number of step size reductions due to Krylov: 0
number of recycled subspaces: 10
maximal dimensions of subspaces: F1: 15, v: 15, D2: 15, D3: 11

The first line reports the number of products of a $\varphi$ function with a vector; the second line contains the number of Krylov subspaces that have been built for different vectors. The third line counts the sum of all Krylov subspace dimensions over the whole integration process. Next we have the number of step size reductions which were necessary to fulfill the error tolerances, followed by the number of reused subspaces. Those can arise after a step rejection – either by the error estimator of the integrator or by the Krylov process. The maximal dimensions in the last line correspond to the maximal dimension of a subspace for a specific vector. Their meanings can be looked up in table 1. Note that the multistep methods require initial steps, such that – depending on the choice of their computation – some of the labels for Runge–Kutta and Rosenbrock-type methods may appear in their statistics as well.

name                      vector                 integrator type(s)
F1                        F(t_n, u_n)            linearized
v                         ∂F/∂t(t_n, u_n)        linearized
D_ι, ι = 2, 3             D_{nι}                 Rosenbrock-type
Y_ι plusA, ι = …          G_{nι} − J u_n         Runge–Kutta
d_ι, ι = …                d_ι                    exp4
GDiff_ι, ι = …            ∇^ι G_n                both multistep
GDiffInit_ι, ι = …        ∆^ι G_n                both multistep

Table 1: Meaning of the labels in the Krylov statistics output

4 Using EXPODE

In this section we give a brief introduction to the
MATLAB package EXPODE. A more extensive documentation is contained in the EXPODE package (manual.pdf).

4.1 Requirements and Installation

We will now describe the minimal requirements and the installation of the EXPODE toolbox. EXPODE runs on all recent and middle-aged computers. The performance strongly depends on the problem and on the available hardware. The toolbox was tested on MATLAB versions down to MATLAB … . EXPODE is available as a user package and as a developer package; usually the user package should be sufficient.

To install, just unpack the archive. This will create a new expode subdirectory. To make it available in MATLAB, just add the package's root to MATLAB's path and run the initPaths function:

>> addpath mydownloadpath/expode;
>> initPaths;

To make a permanent installation for the current user, put the above lines into your startup.m file. See MATLAB's help for more information.
4.2 First Steps

To get a first impression of EXPODE we start with running some of the examples included. To access the examples we add the examples directory to the MATLAB path. Run the following commands

>> addpath mydownloadpath/expode/examples;
>> [t, y] = Heat1D([], [], 'run');

to solve a heat equation with a time-dependent source term in one dimension. Use

>> help Heat1D;

to obtain information on the example. The solution will be visualized in a mesh plot. All examples contained in the package can be run by simply calling them without arguments. Short information on the problems is available via help. To work with the solver, it is convenient to run the example manually:

>> % parameters
>> N = 100; epsilon = 0.1; gamma = 0.1;
>>
>> % get initial conditions
>> [tspan, y0, options] = Heat1D([], [], 'init', N, epsilon, gamma);
>>
>> % run the example
>> [t, y] = expode(@Heat1D, tspan, y0, options);
Now we can start playing with options and parameters. To switch to the direct solver for the matrix functions, we use

>> options = expset(options, 'MatrixFunctions', 'direct');

Other options are set similarly. Some checks on the values set for an option are applied automatically. An overview of the available options for an integrator is provided by calling the integrator info command without arguments. More detailed information on a specific option can be shown with this command as well:

>> exprbinfo          % prints all available options for exprb
>> exprbinfo MinStep  % prints helptext for MinStep option
A common task is to create order plots, where the problem is solved on a fixed time interval with a number of different time steps and the error is plotted over these step sizes. Computing the error, or an approximation to it, requires one to evaluate the exact solution or a very accurate reference solution first. If an exact solution is available, this can be done by calling ode(t, [], 'exact'). As an example of how simply this can be done with the package, we consider the semi1 example. We refer to its helptext for detailed information. An order plot for a finite difference spatial discretization with N = 50 grid points for all Rosenbrock-type methods is created via

>> allMethods(@semi1, 'exprb', '', [], 50);

The input argument '' chooses the direct solver for the evaluation of the matrix functions. The chosen step sizes depend on the problem's tspan data and are chosen uniformly logarithmically. This choice and other parameters can be manipulated with options to allMethods.

To implement a new differential equation, we recommend modifying one of the example files in the examples directory. You should start in examples/Hello_World, where we put some introductory files. MinEx.m is a very simple example, while Template.m uses more advanced features. Both files contain many helpful comments.

4.3 Running Specific Integrators
Here we briefly describe how the specific exponential integrators can be invoked. The calling sequence of all EXPODE integrators is

>> [t, y] = integrator(@ode, {@jac}, {tspan, y0}, {opts}, {varargin});

where integrator has to be substituted by either expode or one of the specific integrators below. Arguments in braces ({}) are optional. The at sign (@) represents either a function handle, a function name as a string, or an inline object. The function @ode has to evaluate the data of the differential equation required for the solution. It follows MATLAB's standard syntax, though it should be callable with

>> res = ode(t, y, flag, {varargin});

where the flag controls what the function returns. The ode function has to return the evaluation of the right-hand side of the differential equation when the empty string ('') is given as flag. This is needed for all solvers. In addition, other flags have to be handled depending on the integrator, see table 2.

flag       meaning                                        integrator type(s)
''         F(t, u)                                        all
jacobian   J_n = ∂F/∂u(t, u)                              linearized
linop      J from (2)                                     semilinear (req.) and linearized (opt.)
gfun       g(t, u) from (2)                               semilinear (req.) and linearized (opt.)
df_dt      d_n = ∂F/∂t(t, u)                              linearized (opt.)
dg_dy      ∂g/∂u(t, u)                                    semilinear (req.) and linearized (opt.)
init       return default [tspan, y0, opts]               all (opt.)

Table 2: Flags for the ode-file for the different integrators

The @jac argument is only available for the linearized integrators and is a handle to a function evaluating the Jacobian. Alternatively, the ode function will be queried with the 'jacobian' flag. tspan = [t_0, T] is the integration interval, y0 the initial condition. opts is an options structure, set with one of the set commands, and varargin will be passed to ode to configure parameters of the differential equation.

The command to use an exponential Runge–Kutta integrator is exprk. To select one of the schemes described in section 2.2, use the Scheme option. The default scheme is 'Krogstad'. To set the parameters appearing in some of the methods, use the Parameters option. For a detailed overview of the available schemes we refer to the integrator documentation.

The command to use an exponential Rosenbrock-type integrator is exprb. To select one of the three available schemes, use the Order option with value 'two', 'three' or 'four', where the latter is the default. To control the parameters of the error estimator for the order four integrator, use ErrorEstimate. We refer to the documentation for details on the schemes.

The exp4 method is called with the exp4 command. It has a built-in dense output generator that allows one to evaluate the numerical solution at arbitrary times. Specify t = [t_0, ..., t_m] to use this feature, where the sequence {t_j}_{j=0}^{m} is either strictly increasing or strictly decreasing.

The semilinear k-step methods are available via expmssemi, where k is set with the kStep option. Our implementation allows constant step sizes only.

The linearized k-step methods are called with expms, where k is defined as for expmssemi. You can also select 'Tokman' to use the scheme suggested in [21, eq. (39)].

4.4 Differential Equation Properties

ODE property                     option          choices                             integrators
autonomous/non-autonomous        NonAutonomous   {'off'}, 'on'                       linearized
semilinear equation (2)          Semilin         {'off'}, 'on'                       linearized
constant/non-constant Jacobian   JConstant       {'off'}, 'on'                       linearized
complex/real valued solution     Complex         'off', {'on'}                       all
structure of the Jacobian        Structure       {'none'}, 'normal', 'symmetric',    all
or linear part                                   'skewsymmetric', 'diagonal'

Table 3: Options corresponding to properties of differential equations. Default values are set in braces.

Some problems allow one to exploit certain properties to improve the efficiency of the integrator. In table 3 we collected some properties, together with information on how to exploit them in EXPODE. Note that for non-autonomous equations you might get wrong results when the appropriate option is not set.
4.5 Matrix Function Evaluation

As mentioned before, the evaluation of the matrix functions is a crucial point in the implementation of exponential integrators. The default setting for the matrix function evaluator is to compute and store the full matrix functions. This is suitable for small or medium sized problems. It requires one to evaluate the full Jacobian or its linear part. For linearized problems, the Jacobian has to be computed in each time step, while for methods based on a fixed linearization it has to be computed only once.

We also provide an implementation of the Arnoldi process to approximate the matrix functions, which can be used for large scale problems. It should not be used for very small problems, where the direct evaluation is more efficient and more reliable. To switch on the Krylov method, set

>> options = exprbset('MatrixFunctions', 'arnoldi');

Krylov subspace methods can be implemented by using the matrices J_n or J explicitly (stored as sparse matrices) or in a matrix-free fashion, where subroutines for the evaluation of the matrix-vector products J_n v or Jv, respectively, are provided. The matrix-free versions are activated by setting

>> options = exprbset(options, 'JacobianV', 'on');
>> options = exprbset(options, 'LinOpV', 'on');

respectively. If one of these options is 'on', then the corresponding ODE file should provide the flags 'jacobian_v' or 'linop_v', respectively. Alternatively, a function handle to a function defined as function res = evalFun(t, y) can be provided instead of 'on' to evaluate the required parts in its own routine.

To add more flexibility, we also enable the user to provide his or her own implementation to compute the matrix functions. This is especially interesting for situations where the matrix functions can be computed more cheaply, more easily, or in a structure-preserving way due to special properties of the matrix. Then, instead of the 'direct' or 'arnoldi' settings, a function handle has to be given to a function defined as

function [hOut, varargout] = matFun(job, t, y, h, flag, v, reusable, reuse, facs)

We refer to the documentation contained in the package for detailed instructions. Here we will only give a short introduction. Due to the possible complexity of this task, a number of stages have to be incorporated into the process: in addition to the evaluation itself there is an 'init', a 'registerjobs' and an 'initstep' phase to precompute data – indicated by the corresponding value of flag. It might happen that the evaluator needs to reduce the step size to guarantee the prescribed accuracy. Therefore it is possible to return a value hOut different from the input step size h to indicate such a time step reduction. Details of the implementation can be found in the manual. We suggest taking the two EXPODE internal evaluators – matFun/matFunDirect.m and matFun/matFunKrylov.m – as a guideline.
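The phase structure of such a custom evaluator can be sketched as follows. This is schematic Python pseudocode of the call protocol, not MATLAB and not EXPODE's actual interface; only the phase names are taken from the text, everything else (argument handling, the stored state, the placeholder result) is illustrative:

```python
def make_custom_evaluator():
    # sketch of the phase-based protocol for a user-supplied matrix
    # function evaluator: the driver calls it with different flags
    state = {}

    def mat_fun(job, t, y, h, flag, v=None):
        if flag == 'init':            # once, before the integration starts
            state['steps'] = 0
            return h
        if flag == 'registerjobs':    # announce which products are needed
            state['jobs'] = job
            return h
        if flag == 'initstep':        # per-step precomputation, e.g. new J_n
            state['steps'] += 1
            return h
        # evaluation phase: return a (possibly reduced) step size and the
        # result; returning h_out < h signals that the step must be redone
        h_out = h                     # here: no reduction ever necessary
        result = v                    # placeholder for phi_k(h*J) @ v
        return h_out, result

    return mat_fun

f = make_custom_evaluator()
f(None, 0.0, None, 0.1, 'init')
f(['phi1*F'], 0.0, None, 0.1, 'registerjobs')
f(None, 0.0, None, 0.1, 'initstep')
h_out, res = f(None, 0.0, None, 0.1, '', v=[1.0, 2.0])
```

The essential design point is the back channel for the step size: the evaluator, not only the integrator, may force a step rejection.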
4.6 Extensions

The EXPODE package was implemented such that it is open for user-specified extensions in many different ways. An example was pointed out in the previous section for the evaluation of the matrix functions. It is also possible to add new time integrators to the package. To do so, one should use the developer package of EXPODE, which contains some useful tools for this purpose. The integration steps of the integrators are written as plugins to the expode routine. The developer does not need to worry about things like direct user interaction, syntactical options checking, output control and other management tasks.

We provide scripts to generate an integrator stub and for deployment. For a rudimentary integrator only three files have to be edited after running the generation script: a setup routine which gives some information to expode, a routine that provides the options for the user, and the integration step itself. Detailed information on the development process is available in the documentation contained in the EXPODE package.
5 Numerical Experiments

In this section we discuss some numerical examples.
5.1 Van der Pol Equation

To benchmark the adaptive step size implementation, let us consider the well-known van der Pol equation [23]. It describes a non-linearly damped oscillator. Written as a second order ODE it reads
$$ u''(t) - \mu \bigl(1 - u(t)^2\bigr)\, u'(t) + u(t) = 0. $$
For large $\mu$ the equation becomes very stiff. A plot of the solution with $u(0) = 2$ and $\mu = 1000$ can be found in figure 1. At the vertical edges the first derivative of the solution becomes very large, so that small step sizes have to be chosen. We plotted the step sizes selected by the step size control of exprb4, exprb3 and exp4. Due to their larger (classical) order, exprb4 and exp4 can choose larger step sizes.

Figure 1: Left: t vs. the solution (y(t), y'(t)) of the van der Pol equation (µ = 1000); right: t vs. step sizes h chosen by exprb4, exprb3 and exp4 with adaptive step size.

5.2 Comparison with EXPINT

Another MATLAB package implementing exponential Runge–Kutta methods and exponential multistep methods is EXPINT [1]. Note that the evaluation of matrix functions in EXPINT corresponds to the 'direct' option in our package. This favors our package for large scale problems. We compare the efficiency of both packages on the semilinear problem
$$ u' = \Delta u + \frac{1}{1 + u^2} + \Phi, \qquad u(0, x, y) = x(1 - x)\, y(1 - y), \tag{17} $$
with homogeneous Dirichlet conditions on $\Omega = [0, 1]^2$. $\Phi$ is chosen such that the exact solution is given by $u(t, x, y) = x(1 - x)\, y(1 - y)\, e^{t}$, see e.g. [1] or [7]. The space discretization was done with finite differences with $N = 50$ inner grid points in each dimension. This yields a system of ODEs of medium size, of dimension 2500. We compared EXPINT's and our implementation of the Hochbruck–Ostermann exponential Runge–Kutta scheme [7], and our implementation of the three stage exponential Rosenbrock method. Additionally, we solved the one-dimensional version of this problem with $N = 100$ grid points to obtain a low-dimensional ODE with 100 degrees of freedom. The results are shown in figure 2.

Figure 2: Left: time vs. error plots for the one-dimensional version; middle: step size vs. error; right: time vs. error plots for the two-dimensional version of the semilinear problem (17). Dotted: EXPINT; dashed: EXPODE with diagonalization; dash-dotted: EXPODE with Krylov method for the matrix functions; solid: exprb; thin dashed in the middle image: slope line of order four.

In the left image we see that for low-dimensional systems and small target accuracies, EXPINT is clearly preferable, since it is much faster. Especially the Krylov matrix function evaluator has a negative impact on the performance, since it has the overhead of building up the Krylov subspaces in each time step, while EXPINT only computes some matrix exponentials via Padé approximation. For matrices of this small size, the solution of the linear systems arising there is quite cheap.

In the middle image we see numerical order four for all solvers. EXPINT uses Padé approximations and a scaling and squaring technique to compute the $\varphi$ functions (6). Let $J$ be the discretized Laplacian; then $hJ$ has to be scaled such that its norm is smaller than a given bound. This implies that smaller time step sizes lead to fewer scaling and squaring steps. After computing the $\varphi$ functions of the matrix once, only matrix-vector products are needed for the time stepping, so that a smaller time step size actually leads to faster computations.

For the direct solver in EXPODE we have an initialization phase, where we diagonalize the operator $hA$, which takes the same time independently of $h$. Then we apply $\varphi_k(hA)$ directly, as EXPINT does, so we get only slightly increased costs for higher accuracy with smaller time step sizes.

In EXPODE with the Arnoldi method for the matrix functions we build up Krylov subspaces in each time step and cannot reuse data from previous time steps. Nevertheless, this is much faster than computing the matrix functions themselves.

5.3 Brusselator
As our next example we use the Brusselator example, which was also used to benchmark theoriginal exp4 code [10]. It models a reaction-diffusion process with two species u = u ( t, x, y ) and v = v ( t, x, y ) . The equation reads dd t (cid:18) uv (cid:19) = A (cid:18) uv (cid:19) + (cid:18) γ + u vuu − u v (cid:19) , A = (cid:20) α ∆ − ( β + 1) 0 β α ∆ (cid:21) together with homogeneous Neumann boundary conditions. The initial value for u is takenfrom MATLAB ’s peaks function and v (0 , · , · ) ≡ . The stiffness results from the diffusion term A . We choose the parameters γ = 1 , β = 3 . and the diffusion coefficient α = 10 − . Spacediscretization is done via finite differences with grid points in Ω = [0 , . This results in asystem of 20.000 unknowns.In figure 3 we present time step vs. error and work-precision diagrams for the fourth orderrepresentatives of each of our integrator classes. The multistep integrators use the fixed pointiteration for their starting values. As matrix function evaluator we choose the Arnoldi Krylovmethod with a maximal dimension of 36 for the Krylov subspaces for the linearized one-stepmethods and 100 for the others. We choose to use constant step sizes with matching tolerancesfor the Arnoldi process, since not all integrators allow adaptive step sizes. The error is mea-sured in the maximum norm against a reference solution computed by MATLAB ’s ode15s withsufficiently high accuracy requirements. This means that we only consider the ODE error andnot the spatial error. We added dashed lines for slope of order four.In the time vs. error plot we arranged data for the same runs and additionally gave twocurves for the multistep integrators using exprb (linearized) and exprk (exponential Adams)for the initial values. −3 −2 −1 −12 −10 −8 −6 −4 −2 step size e rr o r −10 −8 −6 −4 −2 time [s] e rr o r Figure 3: Step size vs. error and time vs. error plots for different fourth order integrators appliedto the Brusselator example. 
Integrators: 3 stage exprb (solid line), exp4 (dashed line), 5 stage Hochbruck-Ostermann exponential Runge-Kutta scheme (dash-dotted line), 3 step expms with initial values computed by fixed point iteration (dotted, plus markers) and with exprb initial values (dotted, square markers), and 4 step expmssemi with Runge-Kutta initial values (dashed, plus markers) and with initial values computed by fixed point iteration (dashed, square markers).

In the first image we observe fourth order convergence for all schemes as h → 0. The two linearized one-step methods have the best error constants (offset on the error axis), closely followed by the exponential Runge-Kutta scheme.

In the second one we see that the linearized one-step methods lead to the best accuracy for a given computation time. Here the exponential Runge-Kutta scheme lags a bit further behind, since it is computationally more expensive due to its five internal stages. exp4 has seven of them, but the first three and the second three of them can be computed simultaneously, see [10], so exprb4 and exp4 have approximately the same computational costs. The multistep integrators (with plus sign markers) are much weaker for larger time steps. The reason for this is the fixed point iteration, which gets relatively more expensive as the time step size increases.

In figure 4 we show the number of fixed point steps required versus the step size chosen and an approximation of the normal time step costs versus the fixed point step costs. Each fixed point step costs approximately as much as the number of backward steps in use times the cost of one time step. So one expms fixed point step costs 3 normal time steps, one expmssemi fixed point step 4 normal time steps. The plot assumes constant time for the matrix function evaluator. We also give curves for the multistep methods with exprb/exprk initial values, which are significantly more efficient, especially for larger step sizes in this case. Note that multistep methods of order higher than four can only be obtained using the fixed point iteration.

Figure 4: Left: number of fixed point steps required to fulfill the accuracy requirement for the different step sizes in the Brusselator example; expms: dotted, plus markers, expmssemi: dashed, plus markers. Right: approximate costs for the fixed point iteration, line styles and markers as in the left picture; approximate costs for the (non-startup) time steps and approximate costs for the whole integration, line styles as left, now square markers.

Maxwell's equations

As an example for a hyperbolic problem we consider Maxwell's equations in two space dimensions. For a charge-free domain the equations are given by

\[ \mu \frac{\partial}{\partial t} \vec{H} = -\nabla \times \vec{E}, \qquad \epsilon \frac{\partial}{\partial t} \vec{E} = \nabla \times \vec{H}. \]

Here \vec{E} denotes the electric field, \vec{H} the magnetic field, ε the permittivity and µ the permeability. Assuming constant µ and ε and using normalized Cartesian coordinates we can eliminate the two material parameters. We assume the arising quantities to be constant in the z-direction to reduce to two spatial dimensions.

As domain we use a magnetic box Ω = [−1, 1]². Our initial conditions are chosen such that

\[ \vec{E} = \begin{pmatrix} 0 \\ 0 \\ \sin(k\pi x)\sin(k\pi y)\cos(\sqrt{2}\,k\pi t) \end{pmatrix}, \qquad \vec{H} = \begin{pmatrix} -\tfrac{1}{\sqrt{2}}\sin(k\pi x)\cos(k\pi y)\sin(\sqrt{2}\,k\pi t) \\ \tfrac{1}{\sqrt{2}}\cos(k\pi x)\sin(k\pi y)\sin(\sqrt{2}\,k\pi t) \\ 0 \end{pmatrix} \]

is the exact solution. The space discretization is done using the discontinuous Galerkin method (DG method) from the codes by [6]. We created grids with a tiny triangle in the center to simulate very filigree structures and bad quality grids.
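The cavity-mode solution above can be checked symbolically against the normalized (µ = ε = 1) two-dimensional Maxwell system. A small Python sketch (the mode number k = 2 is an arbitrary choice; any integer works):

```python
import sympy as sp

x, y, t = sp.symbols("x y t")
k = sp.Integer(2)            # arbitrary mode number
w = sp.sqrt(2) * k * sp.pi   # angular frequency of the cavity mode

Ez = sp.sin(k*sp.pi*x) * sp.sin(k*sp.pi*y) * sp.cos(w*t)
Hx = -sp.sin(k*sp.pi*x) * sp.cos(k*sp.pi*y) * sp.sin(w*t) / sp.sqrt(2)
Hy = sp.cos(k*sp.pi*x) * sp.sin(k*sp.pi*y) * sp.sin(w*t) / sp.sqrt(2)

# 2D (TM) form of the normalized Maxwell system:
#   dHx/dt = -dEz/dy,  dHy/dt = dEz/dx,  dEz/dt = dHy/dx - dHx/dy
r1 = sp.simplify(sp.diff(Hx, t) + sp.diff(Ez, y))
r2 = sp.simplify(sp.diff(Hy, t) - sp.diff(Ez, x))
r3 = sp.simplify(sp.diff(Ez, t) - sp.diff(Hy, x) + sp.diff(Hx, y))
print(r1, r2, r3)  # prints: 0 0 0
```

All three residuals vanish identically, confirming that the stated fields solve the equations for the chosen frequency √2 kπ.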
We run four different ratios between the maximal area of the outer elements and the area of the center triangle, see figure 5 for examples. For grid generation we use [?].

Figure 5: Examples of the grids used in the Maxwell example for two of the area ratios (left and right).

We were interested in how the existence of tiny elements in an otherwise mostly coarse grid would affect the stability requirements of an exponential integrator. In our numerical experiment we created time vs. error plots. The error was measured in the L² norm against the exact solution. The spatial resolution was refined and the time step sizes were chosen such that the solvers are stable and the time and the spatial discretization errors are almost equal.

We compared the explicit low-storage order four Runge-Kutta method used in [6] with our three stage exponential Rosenbrock-type (exprb) and the exp4 integrators, both using the Arnoldi method to compute the matrix functions. The time step size in the Runge-Kutta solver is automatically chosen such that the solver is stable. This automatically leads to a time discretization error of the magnitude of the spatial error. For exprb and exp4 we set the accuracy requirement to the values retrieved from a reference solution and allowed adaptive step size choice. We also used the L² norm for the step size estimator. This was done by providing a custom error norm function using the mass scalar product from the DG codes via the NormFunction option.

The results are shown in figure 6. We give plots of spatial step size (∆x) vs. error (measured against the exact solution, not the reference solution), ∆x vs. chosen time step sizes (h) and time vs.
error plots.

In the first row of images we see that the accuracy matches the requirements of the spatial discretization.

For the smallest grid ratio we see that the exponential methods choose larger time steps. Still we get no speed gain, because we have to build an up to 36-dimensional Krylov subspace in each time step. While both the Runge-Kutta and the exponential methods decrease their time step sizes due to the growing irregularity of the grid for shrinking grid ratios, the gap between the curves grows. This also corresponds to the time vs. error plots, where the exponential methods get relatively faster for more irregular grids. In the following table we present the average quotient of the time step sizes of the exprb integrator and the Runge-Kutta solver and the average speedup by grid ratio (ratios decreasing from left to right):

    avg. h quotient (exprb vs. Runge-Kutta):  8.06053   15.481    26.1532   35.5925
    avg. speedup (exprb vs. Runge-Kutta):     1.23683   2.23478   3.39336   4.46899

We see that the exponential methods need to decrease their step size much less than the Runge-Kutta solver when decreasing the ratio, so that they become relatively more efficient on the more irregular grids. On the last grid exprb is about 4.5 times faster than the conventional integrator.

The step size estimator was tested by manually increasing the time step size. In this case the solver became unstable, so the step size estimator actually detects the stability requirement of the method.

Figure 6: Results for the Maxwell example. Integrators: 3 stage exprb (solid line), exp4 (dashed line) and Runge-Kutta (dash-dotted line); one column per grid ratio. Rows: ∆x vs. error (L² norm), ∆x vs. h, and time (seconds) vs. error (L² norm).

It should be noted that EXPODE saves the solution in one long vector, while the
MATLAB
DG-codes use a custom format, where each field component is saved as one rectangular matrix containing the degrees of freedom for each finite element in its columns. For this reason our ODE file has to switch between these two formats in each function evaluation, which slows down the computation.
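The two storage formats can be illustrated with a small sketch, written here in Python rather than MATLAB; all sizes and helper names are hypothetical and not part of either code base:

```python
import numpy as np

# Hypothetical sizes: DOFs per element, number of elements, number of field components.
n_dof, n_elem, n_fields = 10, 64, 3

def to_long_vector(fields):
    """Stack per-field (n_dof x n_elem) matrices into one long solution vector."""
    return np.concatenate([F.reshape(-1, order="F") for F in fields])

def to_field_matrices(y):
    """Split the long solution vector back into per-field rectangular matrices."""
    return [b.reshape(n_dof, n_elem, order="F") for b in np.split(y, n_fields)]

# Round trip: the two conversions are exact inverses of each other.
rng = np.random.default_rng(0)
fields = [rng.standard_normal((n_dof, n_elem)) for _ in range(n_fields)]
y = to_long_vector(fields)
recovered = to_field_matrices(y)
```

Each conversion is a cheap copy, but performing two of them in every right-hand-side evaluation adds up over the many function evaluations an integrator performs, which is the overhead noted above.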
Conclusions

We presented a new advanced toolbox for exponential integrators, which implements several exponential integration schemes from recent research and uses modern techniques to approximate the arising matrix functions. We designed the toolbox to solve large systems of differential equations, and we have shown its applicability to such systems here. We also noticed rather weak performance for lower-dimensional systems, but the time cost was relatively small nonetheless. This coincides with the package's philosophy: develop with small problems, then use it with larger ones.

We allow substantial control over the integration process without the need to change the EXPODE code directly; see for instance the usage of the mass norm in the Maxwell example. This makes it easier to combine
EXPODE with other
MATLAB packages that perform well for other aspects of the equation. For example, one may want to find a spatial discretization using the DG-codes from [6].

It has been shown that our Krylov matrix function evaluator is able to absorb spatial irregularity to some extent. This led to some ideas for extending the package by an improved Krylov method that can use the problem's mass matrix and mass scalar product to respect spatial structures. We are currently implementing this idea, and the initial tests are quite promising.

The
EXPODE package is flexible, and we expect to implement further exponential schemes in the future.
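To make the role of the Krylov evaluator, and of the proposed mass-aware variant, concrete, here is a minimal sketch of an Arnoldi approximation of exp(hA)v with an optional M-weighted inner product. It is written in Python rather than MATLAB for brevity; the function name and interface are hypothetical and not part of EXPODE:

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expv(A, v, h, m=30, M=None):
    """Approximate exp(h*A) @ v from an m-dimensional Krylov subspace.

    M, if given, is an SPD mass matrix defining the inner product x' M y,
    as in the mass-aware extension discussed above.
    """
    inner = (lambda x, y: x @ y) if M is None else (lambda x, y: x @ (M @ y))
    n = v.size
    V = np.zeros((n, m + 1))   # Krylov basis, orthonormal w.r.t. `inner`
    H = np.zeros((m + 1, m))   # upper Hessenberg projection of A
    beta = np.sqrt(inner(v, v))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):             # modified Gram-Schmidt orthogonalization
            H[i, j] = inner(V[:, i], w)
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.sqrt(inner(w, w))
        if H[j + 1, j] < 1e-12:            # happy breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # Project onto the small subspace: exp(h*A) v ≈ beta * V_m exp(h*H_m) e_1
    return beta * V[:, :m] @ expm(h * H[:m, :m])[:, 0]
```

For m = n the approximation is exact up to round-off; in practice a moderate m (36 and 100 in the experiments above) suffices, so only a small matrix exponential ever has to be formed explicitly.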
Acknowledgements
We thank Marlis Hochbruck for the initial idea for this package and for providing a lot of feedback. We also thank Julia Schweitzer for the initial exprb code
EXPODE is built upon, and Achim Schädle for many great suggestions and a lot of valuable discussions. Patches provided by Abdullah Demirel to include order five exponential Rosenbrock-type schemes might be included in the next version.

We thank the many testers, especially Anke Wortmann, Alexander Beckmann, Magdalena Weiss-Ribicky, Florian Kleen, and everyone else who provided valuable feedback.
References

[1] Håvard Berland, Bård Skaflestad, and Will M. Wright. EXPINT — a MATLAB package for exponential integrators. ACM Trans. Math. Softw., 33(1), March 2007.
[2] M. Caliari and A. Ostermann. Implementation of exponential Rosenbrock-type integrators. Appl. Numer. Math., 59(3-4):568–581, 2009.
[3] S. M. Cox and P. C. Matthews. Exponential time differencing for stiff systems. J. Comput. Phys., 176(2):430–455, 2002.
[4] E. Gallopoulos and Y. Saad. Efficient solution of parabolic equations by Krylov approximation methods. SIAM J. Sci. Stat. Comput., 13(5):1236–1264, 1992.
[5] E. Hairer and G. Wanner. Solving Ordinary Differential Equations II. Stiff and Differential-Algebraic Problems, volume 14 of Springer Series in Computational Mathematics. Springer, Berlin, Heidelberg, 2nd edition, 1996.
[6] J. S. Hesthaven and T. Warburton. Nodal Discontinuous Galerkin Methods: Algorithms, Analysis, and Applications. Texts in Applied Mathematics. Springer, New York, 2008.
[7] M. Hochbruck and A. Ostermann. Explicit exponential Runge–Kutta methods for semilinear parabolic problems. SIAM J. Numer. Anal., 43(3):1069–1090, 2005.
[8] M. Hochbruck, A. Ostermann, and J. Schweitzer. Exponential Rosenbrock-type methods. SIAM J. Numer. Anal., 47(1):786–803, 2009.
[9] Marlis Hochbruck and Christian Lubich. On Krylov subspace approximations to the matrix exponential operator. SIAM J. Numer. Anal., 34(5):1911–1925, 1997.
[10] Marlis Hochbruck, Christian Lubich, and Hubert Selhofer. Exponential integrators for large systems of differential equations. SIAM J. Sci. Comput., 19(5):1552–1574, 1998.
[11] Marlis Hochbruck and Alexander Ostermann. Exponential integrators. Acta Numerica, 19:209–286, 2010.
[12] Marlis Hochbruck and Alexander Ostermann. Exponential multistep methods of Adams-type. BIT, 51(4):889–908, 2011.
[13] Aly-Khan Kassam and Lloyd N. Trefethen. Fourth-order time-stepping for stiff PDEs. SIAM J. Sci. Comput., 26(4):1214–1233, 2005.
[14] S. Krogstad. Generalized integrating factor methods for stiff PDEs. J. Comput. Phys., 203(1):72–88, 2005.
[15] María López-Fernández. A quadrature based method for evaluating exponential-type functions for exponential methods. BIT, 50(3):631–655, 2010.
[16] Borislav Ventseslavov Minchev and William Matthew Wright. A review of exponential integrators for first order semi-linear problems. Technical report, The Norwegian University of Science and Technology, Trondheim, Norway, 2005.
[17] S. P. Nørsett. An A-stable modification of the Adams-Bashforth methods. In Conference on the Numerical Solution of Differential Equations, volume 109 of Lecture Notes in Mathematics, pages 214–219. Springer, Berlin, Heidelberg, 1969.
[18] Y. Saad. Analysis of some Krylov subspace approximations to the matrix exponential operator. SIAM J. Numer. Anal., 29(1):209–228, 1992.
[19] Thomas Schmelzer and Lloyd N. Trefethen. Evaluating matrix functions for exponential integrators via Carathéodory-Fejér approximation and contour integrals. Electron. Trans. Numer. Anal., 29:1–18, 2007/08.
[20] K. Strehmel and R. Weiner. Linear-implizite Runge–Kutta-Methoden und ihre Anwendungen, volume 127 of Teubner-Texte zur Mathematik. Teubner, Stuttgart, 1992.
[21] M. Tokman. Efficient integration of large stiff systems of ODEs with exponential propagation iterative (EPI) methods. J. Comput. Phys., 213(2):748–776, 2006.
[22] L. N. Trefethen, J. A. C. Weideman, and T. Schmelzer. Talbot quadratures and rational approximations. BIT, 46(3):653–670, 2006.
[23] B. Van der Pol. A theory of the amplitude of free and forced triode vibrations.