Exponential quadrature rules for linear fractional differential equations
Roberto Garrappa
Università degli Studi di Bari - Dipartimento di Matematica, Via E. Orabona 4 - 70125 Bari - Italy, [email protected]
Marina Popolizio
Università del Salento - Dipartimento di Matematica e Fisica "Ennio De Giorgi", Via per Arnesano - 73100 Lecce - Italy, [email protected]
October 21, 2013
Abstract
This paper focuses on the numerical solution of initial value problems for fractional differential equations of linear type. The approach we propose is based on expressing the solution in terms of some integrals weighted by a generalized Mittag–Leffler function. Suitable quadrature rules are then devised and order conditions of algebraic type are derived. Theoretical findings are validated by means of numerical experiments, and the effectiveness of the proposed approach is illustrated by means of comparisons with other standard methods.
1 Introduction

Fractional calculus is nowadays receiving great attention from scientists of different branches; models involving derivatives of non-integer order are indeed successfully employed to describe many complex processes and real-world phenomena in several areas such as biology, engineering, finance, physics and so forth (e.g., see [1, 2, 6, 28, 34, 43]).

At the same time, the subject of the numerical solution of fractional differential equations (FDEs) has been extensively addressed in order to provide efficient tools for the simulation of fractional-order models, and new important contributions have been recently published; we cite, among the others, [15, 20, 18, 26, 32, 35] and references therein. Recently, in [19] the investigation of numerical methods has been extended also to FDEs with discontinuous right-hand side [10, 11, 12].

A typical impasse in the numerical solution of FDEs is the long and persistent memory, a common problem with methods for Volterra integral equations [3, 4]. However, one of the major difficulties in the development of numerical methods for FDEs is to obtain a sufficiently high accuracy.
Classical approaches for ordinary differential equations (ODEs) based on polynomial replacements usually fail to provide satisfactory results when applied to FDEs: indeed, as shown in [31], the asymptotic expansion of the solution of a FDE possesses real powers which cannot be suitably approximated by means of polynomials. Due to this lack of smoothness, methods like product integration (PI) rules (which can be considered as the counterpart of Adams multistep methods for ODEs) suffer from a severe order barrier when applied in a fractional-order context; it has been shown [16] that, regardless of the degree of the polynomial replacement, an order of convergence not exceeding 2 is expected for FDEs in the most general situation (see also [14, 27]).

The main aim of this paper is to investigate alternative approaches in which the special nature of the problem can be exploited in order to devise suitable numerical methods capable of providing accuracy of high order. This is the case of linear FDEs, whose generic expression is

$$D^{\alpha}_{t_0} y(t) + \lambda y(t) = f(t), \qquad (1.1)$$

where α > 0 is the fractional order, λ ∈ ℝ, y(t) : [t_0, T] → ℝ and the forcing term f(t) is assumed sufficiently smooth. The symbol $D^{\alpha}_{t_0}$ denotes the derivative operator of non-integer order α, with respect to the origin t_0, according to the Caputo definition [13, 25, 39]

$$D^{\alpha}_{t_0} y(t) = \frac{1}{\Gamma(m-\alpha)} \int_{t_0}^{t} \frac{y^{(m)}(u)}{(t-u)^{\alpha+1-m}}\, du,$$

with Γ(·) the Euler gamma function, m = ⌈α⌉ the smallest integer such that α ≤ m, and y^{(m)} the standard derivative of integer order; equation (1.1) is usually coupled with initial conditions in the form

$$y^{(k)}(t_0) = y_{0,k}, \qquad k = 0, \dots, m-1. \qquad (1.2)$$

The scalar equation (1.1) represents a model problem having importance in several areas, for instance in control theory.
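To make the Caputo definition concrete: for y(t) = t^p (with 0 < α < 1, so m = 1) the definition yields the classical formula D^α_0 t^p = Γ(p+1)/Γ(p+1−α) t^{p−α}. The following sketch (ours, not part of the paper) checks this numerically; scipy's `quad` with the `weight='alg'` option handles the integrable endpoint singularity (t−u)^{−α}:

```python
from math import gamma, isclose
from scipy.integrate import quad

def caputo_monomial(p, alpha, t):
    """Caputo derivative D^alpha of y(u) = u**p at time t, for 0 < alpha < 1,
    computed from the definition (1/Gamma(1-alpha)) * int_0^t y'(u) (t-u)^(-alpha) du.
    weight='alg', wvar=(0, -alpha) tells quad the integrand carries a (t-u)^(-alpha) factor."""
    dy = lambda u: p * u ** (p - 1)              # y'(u)
    val, _ = quad(dy, 0.0, t, weight='alg', wvar=(0.0, -alpha))
    return val / gamma(1.0 - alpha)

# Known closed form: D^alpha t^p = Gamma(p+1)/Gamma(p+1-alpha) * t^(p-alpha)
p, alpha, t = 2, 0.5, 0.8
exact = gamma(p + 1) / gamma(p + 1 - alpha) * t ** (p - alpha)
assert isclose(caputo_monomial(p, alpha, t), exact, rel_tol=1e-8)
```

The same computation illustrates why low smoothness matters: for non-integer p the result t^{p−α} is itself a real power, the kind of term polynomial-based methods approximate poorly.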
Anyway, more involved models, such as multidimensional systems (possibly coming from the spatial discretization of partial differential equations with time and/or space fractional derivatives), can be considered and analyzed starting from (1.1).

The basic strategy underlying the approach proposed in this paper consists in reformulating (1.1) in terms of some special weighted integrals having just the forcing term f(t) as integrand, and hence in devising specific quadrature rules for their numerical approximation. In this way we expect high accuracy under suitable and reasonable assumptions of smoothness just on the integrand f(t) (a quite common situation in real-life applications); the same goal can hardly be achieved by using a classical formulation in which quadrature rules are applied to the whole, usually non-smooth, vector field f(t) − λy(t). Rules of this kind can be considered as a generalization of the exponential quadrature rules employed in the numerical treatment of linear ODEs [24] to improve stability properties: as we will see, convergence properties can also benefit from approaches of this kind when applied to FDEs.

This paper is organized as follows. In Section 2 we present the alternative formulation for (1.1) in terms of some generalized Mittag–Leffler functions and we present a brief review of the main properties of these important functions. In Section 3 we introduce the new quadrature rules, we provide a systematic theory concerning their order of accuracy and we derive the conditions for generating rules of any prescribed degree of precision. By using these results, we perform in Section 4 the error analysis for the application to the numerical solution of FDEs. Section 5 is hence devoted to the evaluation of the quadrature weights by means of some techniques combining the inversion of the Laplace transform with rational approximations; the multidimensional case is then addressed.
Finally, in Section 6 we present some numerical tests to validate the theoretical results and compare the approach investigated in this paper with classical methods.

2 An alternative formulation in terms of Mittag–Leffler functions

By means of the Laplace transform, it is immediate to see that problem (1.1)-(1.2) can be rewritten in the spectral domain as

$$s^{\alpha} Y(s) - \sum_{k=0}^{m-1} s^{\alpha-k-1} y_{0,k} + \lambda Y(s) = F(s),$$

where Y(s) and F(s) are the Laplace transforms of y(t) and f(t) respectively [39]; hence Y(s) can be obtained as

$$Y(s) = \sum_{k=0}^{m-1} \frac{s^{\alpha-k-1}}{s^{\alpha}+\lambda}\, y_{0,k} + \frac{F(s)}{s^{\alpha}+\lambda}.$$

Thus, turning back to the temporal domain, the solution of (1.1)-(1.2) can be written as

$$y(t) = \sum_{k=0}^{m-1} e_{\alpha,k+1}(t-t_0;\lambda)\, y_{0,k} + \int_{t_0}^{t} e_{\alpha,\alpha}(t-s;\lambda) f(s)\, ds, \qquad (2.1)$$

where $e_{\alpha,\beta}(t;\lambda)$ is the inverse Laplace transform of $s^{\alpha-\beta}/(s^{\alpha}+\lambda)$; it is a generalization of the Mittag–Leffler (ML) function

$$E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k+\beta)}, \qquad z \in \mathbb{C},$$

and can be evaluated [34, 39] according to

$$e_{\alpha,\beta}(t;\lambda) = t^{\beta-1} E_{\alpha,\beta}(-t^{\alpha}\lambda).$$

For any z ∈ ℂ the behavior of $e_{\alpha,\beta}(t;z)$ for t close to the origin can be studied directly from its definition; indeed, it is elementary to observe that $e_{\alpha,\beta}(0;z) = 0$ when β > 1, while $e_{\alpha,1}(0;z) = 1$ and $\lim_{t\to 0^+} e_{\alpha,\beta}(t;z) = +\infty$ for β < 1. Further properties of $e_{\alpha,\beta}$ can be found in [39].

Lemma 2.1.
Let a < t, ℜ(α) > 0, β > 0 and r ∈ ℝ such that r > −1. Then for any z ∈ ℂ

$$\frac{1}{\Gamma(r+1)} \int_a^t e_{\alpha,\beta}(t-s;z)(s-a)^r\, ds = e_{\alpha,\beta+r+1}(t-a;z).$$

To keep the notation as compact as possible, for a < b ≤ t and r ∈ ℝ we conveniently introduce the function

$$R_{\alpha,\beta,r}(t,a,b;z) = \frac{1}{\Gamma(r+1)} \int_a^b e_{\alpha,\beta}(t-s;z)(s-a)^r\, ds. \qquad (2.2)$$

The function $R_{\alpha,\beta,r}(t,a,b;z)$ is itself a generalization of $e_{\alpha,\beta}(t;z)$; indeed, Lemma 2.1 implies that $R_{\alpha,\beta,r}(t,a,t;z) = e_{\alpha,\beta+r+1}(t-a;z)$.

In several situations it has been observed that it can be convenient to scale the function $e_{\alpha,\beta}$ according to

$$e_{\alpha,\beta}(t;z) = h^{\beta-1} e_{\alpha,\beta}(t/h;\, h^{\alpha}z), \qquad (2.3)$$

where h is any positive real value [20, 21]. A similar scaling holds also for $R_{\alpha,\beta,k}(t,a,b;z)$ since, as can be proved by using Lemma 7.3 in the Appendix, it is

$$R_{\alpha,\beta,k}(t,a,b;z) = h^{\beta+k} R_{\alpha,\beta,k}\Bigl(\frac{t-a}{h},\, 0,\, 1;\, h^{\alpha}z\Bigr), \qquad h = b-a. \qquad (2.4)$$

3 Exponential quadrature rules

Let us consider on [t_0, T] an equispaced mesh t_j = t_0 + jh, with step-size h > 0, and rewrite equation (2.1) in a piecewise manner as

$$y(t_n) = \sum_{k=0}^{m-1} e_{\alpha,k+1}(t_n-t_0;\lambda)\, y_{0,k} + \sum_{j=0}^{n-1} \int_{t_j}^{t_{j+1}} e_{\alpha,\alpha}(t_n-s;\lambda) f(s)\, ds. \qquad (3.1)$$

To approximate the weighted integrals

$$I_j[f, t_n] = \int_{t_j}^{t_{j+1}} e_{\alpha,\alpha}(t_n-s;\lambda) f(s)\, ds$$

we fix ν nodes c_1, c_2, …, c_ν ∈ [0, 1] and consider quadrature rules of the type

$$Q_{n,j}[f] = \sum_{r=1}^{\nu} b_r(n-j)\, f(t_j + c_r h) \qquad (3.2)$$

(obviously, the weights b_r(j) depend on α and λ but for simplicity we omit this dependence in the notation). The remainder of the approximation is

$$T_{n,j}[f] = I_j[f, t_n] - Q_{n,j}[f],$$

which we can better analyze by extending to $Q_{n,j}$ the Peano theorem for classical quadrature rules (e.g., see [7]). To this purpose it is useful to first consider, for any k ∈ ℕ and ξ ∈ ℝ, the generalized step function

$$G_{\xi,k}(t) \equiv (t-\xi)^k_+ = \begin{cases} (t-\xi)^k & \text{if } \xi \le t \\ 0 & \text{if } \xi > t. \end{cases}$$

Theorem 3.1. Let f ∈ C^K([t_j, t_{j+1}]) and assume f^{(K+1)} ∈ C(]t_j, t_{j+1}[). Then

$$T_{n,j}[f] = \sum_{k=0}^{K} \frac{f^{(k)}(t_j)}{k!}\, T_{n,j}[G_{t_j,k}] + \int_{t_j}^{t_{j+1}} \frac{f^{(K+1)}(\xi)}{K!}\, T_{n,j}[G_{\xi,K}]\, d\xi.$$

Proof.
By replacing the function f in $I_j[f, t_n]$ and $f(t_j + c_r h)$ in (3.2) respectively with their Taylor expansions

$$f(s) = \sum_{k=0}^{K} \frac{f^{(k)}(t_j)}{k!}(s-t_j)^k + \int_{t_j}^{s} \frac{(s-\xi)^K}{K!} f^{(K+1)}(\xi)\, d\xi,$$

$$f(t_j+c_r h) = \sum_{k=0}^{K} \frac{f^{(k)}(t_j)}{k!}\, h^k c_r^k + \int_{t_j}^{t_j+c_r h} \frac{(t_j+c_r h-\xi)^K}{K!} f^{(K+1)}(\xi)\, d\xi,$$

it is immediate, after subtraction, to see that

$$T_{n,j}[f] = \sum_{k=0}^{K} \frac{f^{(k)}(t_j)}{k!}\, T_{n,j}[G_{t_j,k}] + \tilde T_{n,j}[f],$$

where $\tilde T_{n,j}[f]$ is given by

$$\tilde T_{n,j}[f] = \int_{t_j}^{t_{j+1}} e_{\alpha,\alpha}(t_n-s;\lambda) \int_{t_j}^{s} \frac{(s-\xi)^K}{K!} f^{(K+1)}(\xi)\, d\xi\, ds - \sum_{r=1}^{\nu} b_r(n-j) \int_{t_j}^{t_j+c_r h} \frac{(t_j+c_r h-\xi)^K}{K!} f^{(K+1)}(\xi)\, d\xi.$$

To study $\tilde T_{n,j}[f]$ we preliminarily observe that by inverting the order of integration we obtain

$$\int_{t_j}^{t_{j+1}} e_{\alpha,\alpha}(t_n-s;\lambda) \int_{t_j}^{s} \frac{(s-\xi)^K}{K!} f^{(K+1)}(\xi)\, d\xi\, ds = \int_{t_j}^{t_{j+1}} \frac{f^{(K+1)}(\xi)}{K!}\, I_j[G_{\xi,K}, t_n]\, d\xi$$

and, furthermore, since

$$\int_{t_j}^{t_j+c_r h} f^{(K+1)}(\xi)(t_j+c_r h-\xi)^K\, d\xi = \int_{t_j}^{t_{j+1}} f^{(K+1)}(\xi)(t_j+c_r h-\xi)^K_+\, d\xi,$$

it is elementary to see that

$$\tilde T_{n,j}[f] = \int_{t_j}^{t_{j+1}} \frac{f^{(K+1)}(\xi)}{K!}\, T_{n,j}[G_{\xi,K}]\, d\xi,$$

from which the proof follows.

The classical concept of degree of precision applies to the quadrature $Q_{n,j}$ as stated in the following definition.

Definition 3.2.
A quadrature rule $Q_{n,j}$ is said to have degree of precision d if it is exact for any polynomial of degree not exceeding d but is not exact for some polynomial of degree d + 1.

The following proposition shows that the function $T_{n,j}[G_{\xi,K}]/K!$ is the K-th Peano kernel of the error of the quadrature $Q_{n,j}$ when the rule has degree of precision K. It is an immediate consequence of Definition 3.2 and Theorem 3.1 and therefore we omit the proof.

Proposition 3.3. Let f ∈ C^K([t_j, t_{j+1}]) and assume f^{(K+1)} ∈ C(]t_j, t_{j+1}[). If $Q_{n,j}$ has degree of precision K then

$$T_{n,j}[f] = \int_{t_j}^{t_{j+1}} \frac{f^{(K+1)}(\xi)}{K!}\, T_{n,j}[G_{\xi,K}]\, d\xi. \qquad (3.3)$$

Corollary 3.4. Let f ∈ C^K([t_j, t_{j+1}]) and assume f^{(K+1)} ∈ C(]t_j, t_{j+1}[). If $Q_{n,j}$ has degree of precision K and its Peano kernel does not change its sign in [t_j, t_{j+1}], then there exists a ζ ∈ ]t_j, t_{j+1}[ such that

$$T_{n,j}[f] = \frac{f^{(K+1)}(\zeta)}{(K+1)!}\, T_{n,j}[G_{t_j,K+1}].$$

Proof.
Since $T_{n,j}[G_{\xi,K}]$ has the same sign in [t_j, t_{j+1}], the mean value theorem can be applied to (3.3) to obtain

$$T_{n,j}[f] = \frac{f^{(K+1)}(\zeta)}{K!} \int_{t_j}^{t_{j+1}} T_{n,j}[G_{\xi,K}]\, d\xi, \qquad t_j < \zeta < t_{j+1}.$$

The application of (3.3) to $G_{t_j,K+1}$ leads to

$$T_{n,j}[G_{t_j,K+1}] = (K+1) \int_{t_j}^{t_{j+1}} T_{n,j}[G_{\xi,K}]\, d\xi,$$

which allows to conclude the proof.

Under reasonable assumptions of smoothness for the forcing term f(t), Proposition 3.3 and its Corollary 3.4 allow to express the error of a quadrature rule (3.2) in terms of its degree of precision, in a similar way as in the integer-order case. The next step is to find necessary and sufficient conditions of algebraic type to characterize the degree of precision of the quadrature rules and hence derive the main coefficients. To this purpose we consider the following result.

Proposition 3.5. A quadrature rule $Q_{n,j}$ has degree of precision d if and only if

$$\sum_{r=1}^{\nu} b_r(j)\, c_r^k = R_{\alpha,k}(j;\, h^{\alpha}\lambda), \qquad k = 0, 1, \dots, d, \qquad (3.4)$$

where for shortness we put $R_{\alpha,k}(j;\, h^{\alpha}\lambda) = k!\, h^{\alpha} R_{\alpha,\alpha,k}(j, 0, 1;\, h^{\alpha}\lambda)$.

Proof. By (2.2) it is $I_j[G_{t_j,k}, t_n] = k!\, R_{\alpha,\alpha,k}(t_n, t_j, t_{j+1}; \lambda)$ and, after using (2.4), we have

$$T_{n,j}[G_{t_j,k}] = h^k k! \Bigl( h^{\alpha} R_{\alpha,\alpha,k}(n-j, 0, 1;\, h^{\alpha}\lambda) - \sum_{r=1}^{\nu} \frac{b_r(n-j)\, c_r^k}{k!} \Bigr)$$

and the proof follows thanks to Theorem 3.1.

Remark. Observe that when α = 1 and λ = 0, $I_j[f, t_n]$ becomes a standard (i.e., non-weighted) integral, with $Q_{n,j}$ the corresponding classical quadrature rule; on the other side, thanks to Lemma 7.2 in the Appendix, the order conditions (3.4) become

$$\sum_{r=1}^{\nu} b_r(j)\, c_r^k = \frac{h}{k+1},$$

which are the standard conditions for the degree of precision of classical rules (note that in this case, as expected, the weights b_r(j) are constant with respect to j).

4 Error analysis

We can now investigate the behavior of the quadrature rules $Q_{n,j}$ when applied for solving the linear FDE (1.1).
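Before proceeding, the classical limit of the Remark (α = 1, λ = 0) offers a quick sanity check of the Peano representation in Theorem 3.1 and Proposition 3.3. The sketch below (illustrative only, not the paper's code) uses the trapezoidal rule on [0, 1] (nodes c = {0, 1}, degree of precision K = 1) and verifies that the quadrature error for f(t) = t³ equals the Peano-kernel integral:

```python
from scipy.integrate import quad

# Trapezoidal rule on [0, 1]: nodes 0 and 1, weights 1/2 and 1/2 (degree of precision K = 1).
nodes, weights = [0.0, 1.0], [0.5, 0.5]

def I(f):                           # exact integral over [0, 1] (the alpha = 1, lambda = 0 limit)
    return quad(f, 0.0, 1.0)[0]

def Q(f):                           # quadrature rule
    return sum(w * f(c) for c, w in zip(nodes, weights))

def T(f):                           # remainder T[f] = I[f] - Q[f]
    return I(f) - Q(f)

f = lambda t: t ** 3
d2f = lambda t: 6.0 * t             # f''

# Peano representation (Proposition 3.3 with K = 1):
#   T[f] = int_0^1 f''(xi) * T[G_{xi,1}] / 1! dxi,  G_{xi,1}(t) = max(t - xi, 0).
def kernel(xi):
    G = lambda t: max(t - xi, 0.0)
    return T(G)

peano, _ = quad(lambda xi: d2f(xi) * kernel(xi), 0.0, 1.0)
assert abs(T(f) + 0.25) < 1e-10     # direct error of the trapezoidal rule on t^3 is -1/4
assert abs(T(f) - peano) < 1e-7     # ... and it matches the Peano-kernel integral
```

The kernel T[G_{ξ,1}] = −ξ(1−ξ)/2 does not change sign, so Corollary 3.4 applies as well, recovering the familiar −f″(ζ)/12 error of the trapezoidal rule.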
As described before, the idea is to resort to the expression (3.1) for the solution; then, in each interval [t_j, t_{j+1}], one applies the quadrature rule $Q_{n,j}$ in (3.2). The resulting numerical approximation y_n of y(t_n) reads as the convolution quadrature

$$y_n = \sum_{k=0}^{m-1} e_{\alpha,k+1}(t_n-t_0;\lambda)\, y_{0,k} + \sum_{j=0}^{n-1} \sum_{r=1}^{\nu} b_r(n-j)\, f(t_j+c_r h). \qquad (4.1)$$

Thanks to the results on the error $T_{n,j}[f]$ of each quadrature rule $Q_{n,j}$, we are able to study the accuracy of the numerical scheme (4.1) by means of the following result.

Proposition 4.1. Let f ∈ C^d([t_0, T]) and assume that, for any j = 0, …, n−1, f^{(d+1)} ∈ C(]t_j, t_{j+1}[) and $Q_{n,j}$ has degree of precision d. Then

$$y(t_n) - y_n = h^{d+1} \sum_{j=0}^{n-1} \int_0^1 f^{(d+1)}(t_j+uh) \Bigl( h^{\alpha} R_{\alpha,\alpha,d}(n-j, u, 1;\, h^{\alpha}\lambda) - \sum_{r=1}^{\nu} b_r(n-j)\, \frac{(c_r-u)^d_+}{d!} \Bigr) du.$$

Proof. By subtracting (4.1) from (3.1), the error y(t_n) − y_n can be written as a sum, for j = 0, …, n−1, of the errors $T_{n,j}[f]$ of each quadrature $Q_{n,j}$, and Proposition 3.3 can hence be applied to each error term. Observe that for ξ ∈ [t_j, t_{j+1}] it is

$$I_j[G_{\xi,d}, t_n] = \int_{\xi}^{t_{j+1}} e_{\alpha,\alpha}(t_n-s;\lambda)(s-\xi)^d\, ds$$

and hence, from Lemma 7.3,

$$I_j[G_{\xi,d}, t_n] = d!\, h^{\alpha+d}\, R_{\alpha,\alpha,d}(n-j, u, 1;\, h^{\alpha}\lambda), \qquad u = (\xi - t_j)/h.$$

We can therefore write the d-th Peano kernel of the quadrature $Q_{n,j}$ as

$$T_{n,j}[G_{\xi,d}] = d!\, h^d \Bigl( h^{\alpha} R_{\alpha,\alpha,d}(n-j, u, 1;\, h^{\alpha}\lambda) - \sum_{r=1}^{\nu} b_r(n-j)\, \frac{(c_r-u)^d_+}{d!} \Bigr),$$

from which the proof immediately follows.

Proposition 4.1 provides useful information relating the accuracy of the numerical method (4.1) to the degree of precision of the underlying quadrature rule. We recall that a numerical method like (4.1) converges with order p if y(t_n) − y_n = O(h^p) as h → 0. Thus, if the quadrature rules $Q_{n,j}$ have degree of precision d, the resulting method (4.1) converges with order d + 1.

Anyway, the analysis based only on the degree of precision and the Peano kernel is sometimes not adequate to disclose full information on the behavior of the quadrature (4.1). To make a deeper analysis we focus on rules devised in an optimal way: in practice, if a degree of precision K − 1 is required, we use exactly K nodes. Indeed, the K algebraic conditions (3.4) for the degree of precision K − 1 uniquely determine the K weights once K distinct nodes are fixed.

Proposition 4.2. Let f ∈ C^{K+1}([t_j, t_{j+1}]) and assume that c_1, …, c_K are K distinct nodes. If $Q_{n,j}$ has degree of precision K − 1 then

$$T_{n,j}[f] = \frac{h^{K+1}}{K!} \sum_{i=0}^{\infty} \frac{(t_n-t_0)^{\alpha i+\alpha-1}(-\lambda)^i}{\Gamma(\alpha i+\alpha)}\, \Omega^{[i,K]}_{n,j}[f] + O\bigl(h^{K+2}\bigr),$$

where

$$\Omega^{[i,K]}_{n,j}[f] = \int_0^1 \pi_K(u) \Bigl(1 - \frac{j}{n} - \frac{u}{n}\Bigr)^{\alpha i+\alpha-1} f^{(K)}(t_j+uh)\, du$$

and π_K(u) = (u−c_1)···(u−c_K).

Proof. Consider the polynomial p, of degree K − 1, interpolating the function f at the nodes t_j + c_1 h, …, t_j + c_K h, for which it is elementary to observe that

$$I_j[p, t_n] = \sum_{r=1}^{K} \tilde b_r(n,j)\, f(t_j+c_r h), \qquad \tilde b_r(n,j) = \int_{t_j}^{t_{j+1}} e_{\alpha,\alpha}(t_n-s;\lambda)\, L_r\bigl((s-t_j)/h\bigr)\, ds,$$

where $\{L_r(u)\}_{r=1,\dots,K}$ denotes the usual Lagrange basis. $I_j[p, t_n]$ provides a quadrature rule (of the same kind as (3.2)) which is exact when f(t) = (t−t_j)^k, k = 0, …, K−1. Hence the weights $\tilde b_r(n,j)$ satisfy the order conditions (3.4). Moreover, since the nodes c_r are distinct, the system defined by (3.4) has a unique solution. As a consequence $b_r(n-j) = \tilde b_r(n,j)$ and the quadrature rule $Q_{n,j}$ coincides with the interpolatory rule $I_j[p, t_n]$. Hence also the errors coincide; since for any u ∈ [0, 1] they can be written as

$$f(t_j+uh) - p(t_j+uh) = h^K \frac{\pi_K(u)}{K!}\, f^{(K)}(t_j+uh) + O\bigl(h^{K+1}\bigr),$$

then

$$T_{n,j}[f] = \frac{h^{K+1}}{K!} \int_0^1 f^{(K)}(t_j+uh)\, e_{\alpha,\alpha}(t_n-t_j-uh;\lambda)\, \pi_K(u)\, du + O\bigl(h^{K+2}\bigr).$$

Using the series expansion of the function $e_{\alpha,\alpha}$ leads to

$$T_{n,j}[f] = \frac{h^{K+1}}{K!} \sum_{i=0}^{\infty} \frac{(t_n-t_0)^{\alpha i+\alpha-1}(-\lambda)^i}{\Gamma(\alpha i+\alpha)} \int_0^1 \pi_K(u) \Bigl(1-\frac{j}{n}-\frac{u}{n}\Bigr)^{\alpha i+\alpha-1} f^{(K)}(t_j+uh)\, du + O\bigl(h^{K+2}\bigr),$$

from which the proof follows.

Thanks to the above result we are now able to represent the error of the convolution quadrature (4.1) in a more meaningful way.

Proposition 4.3.
Let f ∈ C^{K+1}([t_0, T]) and assume that c_1, …, c_K are K distinct nodes. If each $Q_{n,j}$ has degree of precision K − 1 then

$$y(t_n) - y_n = h^K \frac{P_K}{K!} \int_{t_0}^{t_n} e_{\alpha,\alpha}(t_n-s;\lambda)\, f^{(K)}(s)\, ds + O\bigl(h^{K+\min\{\alpha,1\}}\bigr),$$

where $P_K = \int_0^1 \pi_K(u)\, du$.

Proof. The starting point of this proof is Proposition 4.2. Observe that

$$\frac{1}{n} \sum_{j=0}^{n-1} \Bigl(1-\frac{j}{n}-\frac{u}{n}\Bigr)^{\alpha i+\alpha-1} f^{(K)}(t_j+uh)$$

is the generalized trapezoidal rule $R^{[n]}[g;\gamma]$, γ = 2u − 1, applied to the function $g(s) = (1-s)^{\alpha i+\alpha-1} f^{(K)}(t_0+s(t_n-t_0))$; this problem has been studied in [33], where it has been shown that, if $I[g] = \int_0^1 g(s)\, ds$, then

$$E^{[n]}[g;\gamma] = I[g] - R^{[n]}[g;\gamma] = \Bigl(u-\frac{1}{2}\Bigr)\frac{f^{(K)}(t_0)}{n} + O\bigl(n^{-\min\{2,\,\alpha(i+1)\}}\bigr).$$

Thus we have

$$\frac{1}{n} \sum_{j=0}^{n-1} \Omega^{[i,K]}_{n,j}[f] = \int_0^1 \pi_K(u)\, R^{[n]}[g; 2u-1]\, du = \int_0^1 \pi_K(u)\, I[g]\, du - \int_0^1 \pi_K(u)\, E^{[n]}[g; 2u-1]\, du$$

and hence, after putting $\eta_n(s) = t_0 + s(t_n-t_0)$, it is

$$\frac{1}{n} \sum_{j=0}^{n-1} \Omega^{[i,K]}_{n,j}[f] = \int_0^1 \pi_K(u)\, du \int_0^1 (1-s)^{\alpha i+\alpha-1} f^{(K)}\bigl(\eta_n(s)\bigr)\, ds + O\bigl(n^{-\min\{1,\,\alpha(i+1)\}}\bigr).$$

Therefore

$$y(t_n) - y_n = h^{K+1} n\, \frac{P_K}{K!} \sum_{i=0}^{\infty} \frac{(t_n-t_0)^{\alpha i+\alpha-1}(-\lambda)^i}{\Gamma(\alpha i+\alpha)} \int_0^1 (1-s)^{\alpha i+\alpha-1} f^{(K)}\bigl(\eta_n(s)\bigr)\, ds + O\bigl(h^{K+\min\{\alpha,1\}}\bigr).$$

The proof now follows after moving the summation inside the integral and applying a simple change of variable.

Proposition 4.3 is of practical importance since it gives the explicit expression of the principal term of the error, thus allowing to monitor it and even to zero it out. In fact, the constant P_K varies according to the selected nodes and can be set to zero by choosing them in a suitable way. It is elementary to see that when K = 1 the choice c_1 = 1/2 gives P_1 = 0, whilst for K = 2 any selection of nodes with (c_1 + c_2)/2 − c_1 c_2 = 1/3 gives P_2 = 0 (for instance, c_1 = 0, c_2 = 2/3 or c_1 = 1/3, c_2 = 1). Similarly for higher values of K.
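The constant P_K is easy to compute for any node choice, since π_K is just a polynomial; the small helper below (ours, not the paper's code) confirms the optimal choices mentioned above:

```python
import numpy as np

def P(nodes):
    """P_K = int_0^1 prod_i (u - c_i) du, computed via the polynomial coefficients."""
    pi_K = np.polynomial.polynomial.polyfromroots(nodes)   # coefficients of prod (u - c_i)
    return sum(a / (k + 1) for k, a in enumerate(pi_K))    # integrate each monomial on [0, 1]

assert abs(P([0.5])) < 1e-12                    # K = 1: c1 = 1/2 gives P_1 = 0
assert abs(P([0.0, 2.0 / 3.0])) < 1e-12         # K = 2: c = {0, 2/3} gives P_2 = 0
assert abs(P([1.0 / 3.0, 1.0])) < 1e-12         # K = 2: c = {1/3, 1} gives P_2 = 0
assert abs(P([0.0, 1.0]) + 1.0 / 6.0) < 1e-12   # c = {0, 1}: P_2 = 1/3 - 1/2 = -1/6 != 0
```

The last line shows why the plain endpoint choice c = {0, 1} cannot benefit from the extra min{α, 1} of order.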
In general, with an odd number K of nodes it suffices to choose the nodes symmetrically with respect to the midpoint of [0, 1] to obtain P_K = 0. When P_K = 0, the order of the quadrature rule increases from K to K + min{α, 1}; the order K + 1 is hence obtained only when α ≥ 1, while for 0 < α < 1 only a fraction α of order can be gained. For ease of presentation we summarize the above discussion in the following Corollary.

Corollary 4.4.
Let f ∈ C^{K+1}([t_0, T]) and assume that c_1, …, c_K are K distinct nodes chosen such that conditions (3.4) hold for d = K − 1. Then method (4.1) converges with order K. Moreover, for nodes chosen such that P_K = 0, the order of convergence is K + min{α, 1}.

5 Evaluation of the quadrature weights

We now discuss some technical aspects concerning the evaluation of the quadrature weights b_r(j) in order to fulfill the order conditions (3.4). From Proposition 3.5, the weights vector $b_{\nu}(j) = (b_1(j), b_2(j), \dots, b_{\nu}(j))^T$ solves the linear system $C_{\nu}\, b_{\nu}(j) = R_{\alpha,\nu}(j;\, h^{\alpha}\lambda)$, where

$$C_{\nu} = \begin{pmatrix} 1 & \dots & 1 \\ c_1 & \dots & c_{\nu} \\ \vdots & & \vdots \\ c_1^{\nu-1} & \dots & c_{\nu}^{\nu-1} \end{pmatrix}, \qquad R_{\alpha,\nu}(j;z) = \begin{pmatrix} R_{\alpha,0}(j;z) \\ R_{\alpha,1}(j;z) \\ \vdots \\ R_{\alpha,\nu-1}(j;z) \end{pmatrix}.$$

The appearance of the mildly ill-conditioned Vandermonde matrix $C_{\nu}$ does not represent a particular difficulty; indeed, very low dimensions are involved in practical implementations, since the number of quadrature nodes usually does not exceed 3 or 4. The most remarkable task concerns the evaluation of the generalized ML functions $e_{\alpha,\beta}(j;z)$ appearing in $R_{\alpha,k}(j;z)$ (see Lemma 7.1), which is non-trivial and can represent a bottleneck if not well organized. Moreover, the convolution quadrature (4.1) demands the computation of a large number of ML functions and therefore fast algorithms should be provided in order to keep the whole computation at a reasonable level. To this purpose, methods based on the numerical inversion of the Laplace transform can be of help.
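The weight computation therefore reduces to one small Vandermonde solve per time level. In the classical limit of the Remark (α = 1, λ = 0) the right-hand side entries reduce to h/(k+1), so the familiar Newton–Cotes weights must come out; a sketch (illustrative, not the paper's code):

```python
import numpy as np

def quadrature_weights(nodes, rhs):
    """Solve C_nu b = rhs, where row k of C_nu holds (c_1^k, ..., c_nu^k)."""
    C = np.vander(nodes, increasing=True).T   # row k = k-th powers of the nodes
    return np.linalg.solve(C, rhs)

# Classical limit alpha = 1, lambda = 0: the entries R_{alpha,k} reduce to h/(k+1).
h = 1.0
nodes = np.array([0.0, 0.5, 1.0])
rhs = np.array([h / (k + 1) for k in range(len(nodes))])
b = quadrature_weights(nodes, rhs)
assert np.allclose(b, [1 / 6, 4 / 6, 1 / 6])  # Simpson's rule, as expected
```

In the fractional case only the right-hand side changes: the entries h/(k+1) are replaced by the quantities R_{α,k}(j; h^α λ), which depend on the time level j and must be evaluated via the ML functions discussed next.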
Recently there has been a renewed interest in the numerical inversion of the Laplace transform (see, for instance, [30, 45, 22]), which turns out to be suitable and highly competitive for evaluating functions, like the ML ones, whose Laplace transform has a simpler representation than the function itself. Indeed, it is a well-known result [34] that the Laplace transform of $e_{\alpha,\beta}(t;z)$ is

$$\mathcal{L}\bigl(e_{\alpha,\beta}(t;z);\, s\bigr) = \frac{s^{\alpha-\beta}}{s^{\alpha}+z}, \qquad \Re(s) > |z|^{1/\alpha},$$

with a branch cut along the negative real semi-axis to uniquely define the real powers of s. After putting τ = sj, the integral representation of $e_{\alpha,\beta}(j;z)$ can be written as

$$e_{\alpha,\beta}(j;z) = \frac{j^{\beta-1}}{2\pi i} \int_{\mathcal{C}} e^{\tau}\, \frac{\tau^{\alpha-\beta}}{\tau^{\alpha}+j^{\alpha}z}\, d\tau, \qquad (5.1)$$

where the contour $\mathcal{C}$ represents a deformation of the Bromwich line which begins and ends in the left half-plane and encircles the circular disk $|\tau| = j|z|^{1/\alpha}$ in the positive sense.

Among several possible approaches to numerically evaluate formulas like (5.1), we consider here the approach described in [44] (and already exploited in the numerical solution of linear FDEs in [21]) which, broadly speaking, consists in replacing the exponential function $e^{\tau}$ by an (N−1, N) rational approximation $R_N(\tau) = \sum_{k=1}^{N} r_k/(\tau-\tau_k)$. By assuming that the contour $\mathcal{C}$ winds in the negative sense once around each pole $\tau_k$, the application of the Cauchy integral formula in (5.1), with $e^{\tau}$ replaced by $R_N(\tau)$, together with Lemma 7.1, leads to an approximation of $R_{\alpha,r}(j;z)$ in the form

$$R^{[N]}_{\alpha,r}(j;z) = r! \Biggl( -j^{\alpha+r} \sum_{k=1}^{N} \frac{r_k\, \tau_k^{-r-1}}{\tau_k^{\alpha}+j^{\alpha}z} + \sum_{l=0}^{r-1} \frac{(j-1)^{\alpha+l}}{(r-l)!} \sum_{k=1}^{N} \frac{r_k\, \tau_k^{-l-1}}{\tau_k^{\alpha}+(j-1)^{\alpha}z} \Biggr). \qquad (5.2)$$

The computation of the above formula is not very expensive, since the powers $\tau_k^{\alpha}$ and $\tau_k^{l}$, l = 0, 1, …, r, can be computed once and reused at each time level j; furthermore, poles and residues occur in complex conjugate pairs, thus allowing, for real λ, to almost halve the computational cost.

The accuracy of the approximation $R^{[N]}_{\alpha,r}(j;z)$ to $R_{\alpha,r}(j;z)$ strictly depends on the approximation $R_N(\tau)$ used for $e^{\tau}$. It seems quite natural to focus on Chebyshev rational functions (whose residues and poles are readily available thanks to the work of Cody et al. [5]), which represent the best rational approximation to the exponential on (−∞, 0] and whose error decays geometrically with N [38], thus allowing to reach high precision, nearly close to the round-off error, with a reasonable degree N.

The extension of the results discussed in the previous sections to the multidimensional case, i.e. when λ is a matrix and f a vector-valued function, is straightforward. First observe that the matrix function $e_{\alpha,\beta}(t;\, h^{\alpha}\lambda)$ is defined, in a similar way as for scalar instances, by inverting its Laplace transform

$$\mathcal{L}\bigl(e_{\alpha,\beta}(t;\, h^{\alpha}\lambda);\, s\bigr) = s^{\alpha-\beta}\bigl(s^{\alpha}I + h^{\alpha}\lambda\bigr)^{-1},$$

where I is the identity matrix of the same size as λ and s satisfies $\Re(s) > h|\lambda_i|^{1/\alpha}$, with $\lambda_i$ denoting the eigenvalues of λ. For simplicity we will assume that the spectrum of λ lies on the positive real semi-axis. In a similar way the definition of $R_{\alpha,r}(j;\, h^{\alpha}\lambda)$ can be extended to matrix arguments.

The multidimensional case thus involves the numerical evaluation of matrix functions; this topic receives great attention since matrix functions are useful tools not only in applied mathematics and scientific computing, but also in various other fields like control theory and complex networks.
A wide literature is devoted to this topic; we just cite, among the others, [8, 36, 29, 9, 41, 42, 37] and the thorough book by Higham [23].

The weights of the quadrature rule (4.1) are now matrices, say B_r(j), which are evaluated again by means of (3.4) as the solution of the system $(C_{\nu} \otimes I)\, B(j) = R_{\alpha,\nu}(j;\, h^{\alpha}\lambda)$, with $B(j) = (B_1(j)^T, B_2(j)^T, \dots, B_{\nu}(j)^T)^T$ and ⊗ the Kronecker product. Note that this is a set of linear systems, one for each column of B(j) or $R_{\alpha,\nu}(j;\, h^{\alpha}\lambda)$. In these cases, once a suitable factorization of the coefficient matrix is computed, it can be applied to all the systems. In our situation, thanks to the Kronecker structure, only a factorization of $C_{\nu}$ is needed, as in the scalar case.

The matrix counterpart of (5.2) for the numerical approximation of $R_{\alpha,k}(j;\, h^{\alpha}\lambda)$ involves the inversion of some matrices. These evaluations obviously lead to an increase of the computational effort necessary for the integration of the FDE (1.1). By means of some numerical experiments, we will observe that this issue does not impair the advantages in accuracy of the proposed approach.

6 Numerical experiments

In this section we present numerical experiments to validate the theoretical results and compare the approach under investigation with some other schemes for FDEs. As a first case, we consider the test problem (1.1) with the forcing term

$$f(t) = \frac{1}{\Gamma(p+1-\alpha)}\,(t-t_0)^{p-\alpha}, \qquad p > \lceil\alpha\rceil - 1, \qquad (6.1)$$

where the parameter p will be selected, for each experiment, according to the smoothness requirements investigated in Sections 3 and 4, and depending on the number ν of nodes. When coupled with the initial condition y(t_0) = y_0 for 0 < α < 1 (or y(t_0) = y_0 and y′(t_0) = 0 for 1 < α < 2), the corresponding exact solution is

$$y(t) = E_{\alpha,1}\bigl(-(t-t_0)^{\alpha}\lambda\bigr)\, y_0 + (t-t_0)^p\, E_{\alpha,p+1}\bigl(-(t-t_0)^{\alpha}\lambda\bigr)$$

and can be evaluated, with accuracy up to round-off error, by means of the Matlab mlf function [40] with a very small tolerance.
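Where the mlf routine is not at hand, a truncated-series evaluation of E_{α,β} (adequate only for moderate arguments, and much slower than the Laplace-inversion approach of Section 5) can serve as a stand-in for checking reference solutions. A hedged sketch, ours rather than the paper's:

```python
from math import gamma, exp, cos, isclose

def ml(z, alpha, beta=1.0, tol=1e-16):
    """Truncated series E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta).
    Reliable only for moderate |z|; stops when terms become negligible
    (and before math.gamma overflows past arguments of about 171)."""
    s, k = 0.0, 0
    while alpha * k + beta < 170.0:
        term = z ** k / gamma(alpha * k + beta)
        s += term
        if k > 0 and abs(term) < tol * max(abs(s), 1.0):
            break
        k += 1
    return s

def e_ml(t, alpha, beta, lam):
    """Generalized ML function e_{alpha,beta}(t; lam) = t^(beta-1) E_{alpha,beta}(-t^alpha lam)."""
    return t ** (beta - 1) * ml(-(t ** alpha) * lam, alpha, beta)

# Sanity checks against classical special cases:
assert isclose(ml(0.5, 1.0), exp(0.5), rel_tol=1e-12)               # E_{1,1}(z) = e^z
assert isclose(ml(-1.0, 2.0), cos(1.0), rel_tol=1e-12)              # E_{2,1}(-t^2) = cos(t), t = 1
assert isclose(e_ml(0.7, 1.0, 1.0, 2.0), exp(-1.4), rel_tol=1e-12)  # e_{1,1}(t; lam) = e^(-lam t)
```

For large arguments the series suffers cancellation, which is precisely why the paper resorts to numerical inversion of the Laplace transform with rational approximations.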
A rational approximation with degree N = 15 is used in (5.2) with the aim of computing the weights with a negligible error. In the following experiments we use λ = 3, t_0 = 0 and we evaluate the numerical solution at T = 1 for a sequence of decreasing step-sizes h. Errors E(h), with respect to the exact solution, are reported together with an estimate of the order of convergence, denoted by EOC and obtained as log_2(E(h)/E(h/2)). All the experiments are performed using Matlab ver. 7.7 on a computer running the 32-bit Windows Vista operating system and equipped with the Intel quad-core CPU Q9400 at 2.66 GHz.

For the first group of experiments we consider methods (4.1) with ν = 1 and quadrature weights derived from the order conditions (3.4) with d = 0. We solve the test problem with different values of the single node c_1 to point out the different behavior according to the choice of the node. In the optimal case (i.e., c_1 = 1/2), since the factor P_1 = 1/2 − c_1 in the principal part of the error vanishes, we expect from Proposition 4.3 an order of convergence equal to 1 + min{α, 1}, while in the remaining cases order 1 is expected. Results in Tables 1 and 2, respectively for α = 0.5 and α = 1.5, agree with the theoretical findings and confirm the importance of selecting the nodes in a proper way to improve the convergence properties.
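The EOC column is obtained from successive errors on halved step-sizes; a minimal helper (ours) matching the definition above:

```python
from math import log2, isclose

def eoc(errors):
    """Estimated orders of convergence from errors E(h), E(h/2), E(h/4), ...
    computed as EOC = log2(E(h) / E(h/2))."""
    return [log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]

# For a method of exact order p, errors E(h) = C h^p on halved steps give EOC = p:
C, p = 0.3, 1.5
errors = [C * (2.0 ** -i) ** p for i in range(1, 6)]   # h = 1/2, 1/4, ..., 1/32
assert all(isclose(r, p, rel_tol=1e-12) for r in eoc(errors))
```

In practice the EOC values in the tables approach the theoretical order only asymptotically, as the principal error term comes to dominate.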
Table 1: Errors and EOC for ν = 1 with forcing term (6.1), α = 0.5, for the nodes c_1 = 0, c_1 = 1/2 and c_1 = 1.
Table 2: Errors and EOC for ν = 1 with forcing term (6.1), α = 1.5, for the nodes c_1 = 0, c_1 = 1/2 and c_1 = 1.

A second group of experiments concerns rules with ν = 2 constructed after imposing the order conditions (3.4) for d = 1; we first consider the 2 nodes c = {0, 1}, for which an order of convergence equal to 2 is predicted, and hence we consider two combinations of optimal nodes, namely c = {0, 2/3} and c = {1/3, 1}, for which the expected order of convergence is 2 + min{1, α}. Also these experiments (see Table 3 for α = 0.5 and Table 4 for α = 1.5) validate in a quite clear manner the results of Section 4.
Table 3: Errors and EOC for ν = 2 with forcing term (6.1), α = 0.5, for the node sets c = {0, 1}, c = {0, 2/3} and c = {1/3, 1}.
Table 4: Errors and EOC for ν = 2 with forcing term (6.1), α = 1.5, for the node sets c = {0, 1}, c = {0, 2/3} and c = {1/3, 1}.

A further group of experiments concerns rules with ν = 3 and d = 2, and the results are presented in Table 5. For brevity, in this case we omit to present the case α > 1.
Table 5: Errors and EOC for ν = 3 with forcing term (6.1), α = 0.5.

Finally, we compare the errors for α = 0.5, obtained with an increasing number ν of nodes chosen in the optimal way described at the end of Section 4, with the results provided by the fractional PECE method [15]. We have chosen a forcing term smooth enough to assure high accuracy for all the selected methods (in this case p = 6). The improvement in the accuracy with respect to the fractional PECE method is remarkable for all the methods and becomes striking as ν increases (see Table 6). Note that even with ν = 4 an accuracy very close to the machine precision is reached; simulations with larger ν are therefore not reported.
Table 6: Errors and EOC for various methods with forcing term (6.1), α = 0. and p = 6.

Figure 1: Accuracy versus time for the experiments in Table 6.

As the plot clearly shows, the execution times of the methods under investigation are reasonable and do not increase exceedingly as the number ν of weights and nodes increases. Furthermore, any given level of accuracy is obtained by the proposed scheme with a computational cost lower than that of the classical fractional PECE method (for instance, an accuracy of 10− is obtained by the fractional PECE method in 10− CPU seconds and in approximately 10− CPU seconds by the new method with ν = 1, or less when ν > 1).

The next test involves the forcing term

f(t) = sin(t − t) + 3 cos(t − t). (6.2)

As reference solution we consider the one obtained by the method (4.1) with ν = 4 and a very small stepsize. Results in Table 7 and Figure 2 show the effectiveness of the proposed method also for this test problem.
Table 7: Errors and EOC for various methods with forcing term (6.2) and α = 0. .

Figure 2: Accuracy versus time for the experiments in Table 7.

Our last experiment concerns a multidimensional system
\[
D^{\alpha} U(t) + A U(t) = F(t) \tag{6.3}
\]
where $A$ is an $M \times M$ matrix and $U$ and $F$ are $M$-valued functions. Problems of this kind occur frequently in applications; for instance, a time–fractional partial differential equation (PDE)
\[
\frac{\partial^{\alpha}}{\partial t^{\alpha}} u(x,t) - \frac{\partial^{2}}{\partial x^{2}} u(x,t) = \frac{t^{p}}{\Gamma(p+1)} \sin \pi x \tag{6.4}
\]
for $t > 0$, $x \in [0,1]$, subject to the initial and boundary conditions
\[
u(x,0) = \sin \pi x, \qquad u(0,t) = u(1,t) = 0,
\]
can be transformed into the system of FDEs (6.3) by the so-called Method of Lines: once a mesh grid $x_j = j \Delta x$, $j = 0, 1, \dots, M+1$, with $\Delta x = 1/(M+1)$, has been fixed on $[0,1]$, $A$ is the tridiagonal matrix with entries $(-1, 2, -1)/(\Delta x)^2$ and
\[
U(t) = \bigl( u(x_1,t), u(x_2,t), \dots, u(x_M,t) \bigr)^{T}, \qquad
F(t) = \frac{t^{p}}{\Gamma(p+1)} \bigl( \sin(\pi x_1), \sin(\pi x_2), \dots, \sin(\pi x_M) \bigr)^{T} \in \mathbb{R}^{M}.
\]
More generally, $F(t)$ will include not only data arising from inhomogeneous terms in the differential operator but also possible inhomogeneous boundary data.

It is elementary to verify that the exact solution of the fractional PDE (6.4) is given by
\[
u(t,x) = \sin(\pi x) \bigl( e_{\alpha,1}(t; \pi^{2}) + e_{\alpha,p+\alpha+1}(t; \pi^{2}) \bigr)
\]
and its graph is plotted in Figure 3.

Figure 3: Exact solution of the fractional PDE (6.4) for α = 0. and p = 3.

In Table 8 we present the results of the computation for α = 0. , p = 3 and M = 8. We note that for large values of $h$ the fractional PECE method fails in most cases to provide correct results due to instability problems. Indeed, the eigenvalues of $-A$ lie in the interval $[-4/(\Delta x)^2, 0]$ and, since the method is actually explicit, a very strong restriction on the step-size $h$, of the kind $h^{\alpha} < (\Delta x)^{2} C_{\alpha}/4$ with $C_{\alpha} \in [1, 2]$ (see [17]), has to be imposed in order to obtain a stable behavior. Thus, to make a meaningful comparison, we have also considered the implicit PI trapezoidal rule (the rule used as the corrector in the PECE algorithm), which possesses better stability properties.

Also in this case we observe that method (4.1) reaches a greater accuracy than the PI rule. Moreover, the joint comparison of errors and computational costs, shown in the first plot of Figure 4, emphasizes that the proposed method is more competitive especially when high accuracy is required.
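The Method of Lines construction just described can be sketched as follows (a pure-Python illustration; the function name and the dense storage of the tridiagonal A are choices of this sketch, not of the paper):

```python
import math

def assemble_mol_system(M, p):
    """Discretize (6.4) on the grid x_j = j*dx, dx = 1/(M+1), into the
    system D^alpha U(t) + A U(t) = F(t) of (6.3)."""
    dx = 1.0 / (M + 1)
    # tridiagonal matrix with entries (-1, 2, -1)/dx^2, stored as dense rows
    A = [[0.0] * M for _ in range(M)]
    for i in range(M):
        A[i][i] = 2.0 / dx**2
        if i > 0:
            A[i][i - 1] = -1.0 / dx**2
        if i < M - 1:
            A[i][i + 1] = -1.0 / dx**2
    x = [(j + 1) * dx for j in range(M)]  # interior nodes x_1, ..., x_M

    def F(t):
        c = t**p / math.gamma(p + 1)
        return [c * math.sin(math.pi * xj) for xj in x]

    return A, F

A, F = assemble_mol_system(8, 3)
# the eigenvalues of -A lie in [-4/dx^2, 0], which drives the stability
# restriction on explicit schemes discussed above
```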
Table 8: Errors and EOC for test problem (6.3) for α = 0. , M = 8 and p = 3.

The same computation is then repeated with M = 16 and α = 0. ; the corresponding results are reported in Table 9.
Table 9: Errors and EOC for test problem (6.3) for α = 0. , M = 16 and p = 3.

Figure 4: Accuracy versus time for test problem (6.3) with α = 0. , M = 8 and p = 3 (left) and α = 0. , M = 16 and p = 3 (right).

Lemma 7.1.
Let $a < b \le t$, $\Re(\alpha) > 0$, $\beta > 0$ and $k \in \mathbb{N}$. Then
\[
R_{\alpha,\beta,k}(t,a,b;z) = e_{\alpha,\beta+k+1}(t-a;z) - \sum_{\ell=0}^{k} \frac{(b-a)^{k-\ell}}{(k-\ell)!}\, e_{\alpha,\beta+\ell+1}(t-b;z).
\]

Proof.
First write
\[
\int_{a}^{b} e_{\alpha,\beta}(t-s;z)(s-a)^{k}\,ds = \int_{a}^{t} e_{\alpha,\beta}(t-s;z)(s-a)^{k}\,ds - \int_{b}^{t} e_{\alpha,\beta}(t-s;z)(s-a)^{k}\,ds
\]
and hence, since for $h = b - a$ it is
\[
(s-a)^{k} = (s-b+h)^{k} = \sum_{\ell=0}^{k} \binom{k}{\ell} h^{k-\ell} (s-b)^{\ell},
\]
the application of Lemma 2.1 leads to the desired result since $\Gamma(k+1) = k!$, being $k \in \mathbb{N}$.

Lemma 7.2.
Let $a < b \le t$ and $k \in \mathbb{N}$. Then $R_{\alpha,1,k}(t,a,b;0) = (b-a)^{k+1}/(k+1)!$.

Proof. It is elementary to see that $e_{\alpha,\beta}(t;0) = t^{\beta-1}/\Gamma(\beta)$ and hence Lemma 7.1 leads to
\[
R_{\alpha,1,k}(t,a,b;0) = \frac{(t-a)^{k+1}}{\Gamma(k+2)} - \sum_{\ell=0}^{k} \frac{h^{k-\ell}}{(k-\ell)!} \frac{(t-b)^{\ell+1}}{\Gamma(\ell+2)},
\]
where $h = b - a$. By recalling that $\Gamma(n+1) = n!$ and
\[
\frac{1}{(k-\ell)!\,(\ell+1)!} = \frac{1}{(k+1)!} \binom{k+1}{\ell+1},
\]
we can write
\[
R_{\alpha,1,k}(t,a,b;0) = \frac{1}{(k+1)!} \Biggl( (t-a)^{k+1} - \sum_{\ell=1}^{k+1} \binom{k+1}{\ell} h^{k-\ell+1} (t-b)^{\ell} \Biggr)
= \frac{1}{(k+1)!} \Bigl( (t-a)^{k+1} - (t-b+h)^{k+1} + h^{k+1} \Bigr),
\]
from which the proof easily follows since $t - b + h = t - a$.

Lemma 7.3.
Let $a < b \le t$, $h = b - a$ and $k \in \mathbb{N}$. Then for any $\xi \in [a,b]$ it is
\[
\frac{1}{\Gamma(k+1)} \int_{\xi}^{b} e_{\alpha,\beta}(t-s;z)(s-\xi)^{k}\,ds = h^{\beta+k}\, R_{\alpha,\beta,k}\Bigl( \frac{t-a}{h}, \frac{\xi-a}{h}, 1; h^{\alpha} z \Bigr).
\]

Proof.
First consider $u = (\xi - a)/h$. Thus
\[
\int_{\xi}^{b} e_{\alpha,\beta}(t-s;z)(s-\xi)^{k}\,ds = \int_{a+uh}^{b} e_{\alpha,\beta}(t-s;z)(s-a-uh)^{k}\,ds
\]
and the change of variable $s = a + hr$ leads to
\[
\int_{\xi}^{b} e_{\alpha,\beta}(t-s;z)(s-\xi)^{k}\,ds = h^{k+1} \int_{u}^{1} e_{\alpha,\beta}(t-a-rh;z)(r-u)^{k}\,dr.
\]
By using (2.3) we have $e_{\alpha,\beta}(t-a-rh; z) = h^{\beta-1} e_{\alpha,\beta}\bigl( (t-a)/h - r; h^{\alpha} z \bigr)$, from which the proof follows in a straightforward way.

References

[1] A. Bueno-Orovio, D. Kay, V. Grau, B. Rodriguez, K. Burrage, Fractional diffusion models of cardiac electrical propagation: role of structural heterogeneity in dispersion of repolarization, Tech. Rep. OCCAM 13/35, Oxford Centre for Collaborative Applied Mathematics, Oxford (UK) (2013).
[2] D. Cafagna, Fractional calculus: A mathematical tool from the past for present engineers [past and present], IEEE Industrial Electronics Magazine 1 (2) (2007) 35–40.
[3] A. Cardone, D. Conte, Multistep collocation methods for Volterra integro-differential equations, Appl. Math. Comput. 221 (2013) 770–785.
[4] A. Cardone, L. G. Ixaru, B. Paternoster, Exponential fitting direct quadrature methods for Volterra integral equations, Numer. Algorithms 55 (4) (2010) 467–480.
[5] W. J. Cody, G. Meinardus, R. S. Varga, Chebyshev rational approximations to $e^{-x}$ in $[0, +\infty)$ and applications to heat-conduction problems, J. Approximation Theory 2 (1969) 50–65.
[6] M.-F. Danca, R. Garrappa, W. K. S. Tang, G. Chen, Sustaining stable dynamics of a fractional-order chaotic financial system by parameter switching, Comput. Math. Appl. 66 (5) (2013) 702–716.
[7] P. J. Davis, P. Rabinowitz, Methods of Numerical Integration, Computer Science and Applied Mathematics, 2nd ed., Academic Press Inc., Orlando, FL, 1984.
[8] N. Del Buono, L. Lopez, Geometric integration on manifold of square oblique rotation matrices, SIAM J. Matrix Anal. Appl. 23 (4) (2002) 974–989.
[9] N. Del Buono, L. Lopez, R. Peluso, Computation of the exponential of large sparse skew-symmetric matrices, SIAM J. Sci. Comput. 27 (1) (2005) 278–293.
[10] L. Dieci, L. Lopez, Sliding motion in Filippov differential systems: theoretical results and a computational approach, SIAM J. Numer. Anal. 47 (3) (2009) 2023–2051.
[11] L. Dieci, L. Lopez, Sliding motion on discontinuity surfaces of high co-dimension. A construction for selecting a Filippov vector field, Numer. Math. 117 (4) (2011) 779–811.
[12] L. Dieci, L. Lopez, A survey of numerical methods for IVPs of ODEs with discontinuous right-hand side, J. Comput. Appl. Math. 236 (16) (2012) 3967–3991.
[13] K. Diethelm, The Analysis of Fractional Differential Equations, vol. 2004 of Lecture Notes in Mathematics, Springer-Verlag, Berlin, 2010.
[14] K. Diethelm, N. J. Ford, A. D. Freed, Detailed error analysis for a fractional Adams method, Numer. Algorithms 36 (1) (2004) 31–52.
[15] K. Diethelm, A. D. Freed, The FracPECE subroutine for the numerical solution of differential equations of fractional order, in: S. Heinzel, T. Plesser (eds.), Forschung und wissenschaftliches Rechnen 1998, Gesellschaft für wissenschaftliche Datenverarbeitung, Göttingen, 1999.
[16] J. Dixon, On the order of the error in discretization methods for weakly singular second kind Volterra integral equations with nonsmooth solutions, BIT 25 (4) (1985) 624–634.
[17] R. Garrappa, On linear stability of predictor–corrector algorithms for fractional differential equations, Int. J. Comput. Math. 87 (10) (2010) 2281–2290.
[18] R. Garrappa, A family of Adams exponential integrators for fractional linear systems, Comput. Math. Appl. 66 (5) (2013) 717–727.
[19] R. Garrappa, On some generalizations of the implicit Euler method for discontinuous fractional differential equations, Math. Comput. Simulat. URL http://dx.doi.org/10.1016/j.matcom.2012.04.009
[20] R. Garrappa, M. Popolizio, Generalized exponential time differencing methods for fractional order problems, Comput. Math. Appl. 62 (3) (2011) 876–890.
[21] R. Garrappa, M. Popolizio, On accurate product integration rules for linear fractional differential equations, J. Comput. Appl. Math. 235 (5) (2011) 1085–1097.
[22] R. Garrappa, M. Popolizio, Evaluation of generalized Mittag–Leffler functions on the real line, Adv. Comput. Math. 39 (1) (2013) 205–225.
[23] N. J. Higham, Functions of Matrices, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2008.
[24] M. Hochbruck, A. Ostermann, Exponential integrators, Acta Numer. 19 (2010) 209–286.
[25] A. A. Kilbas, H. M. Srivastava, J. J. Trujillo, Theory and Applications of Fractional Differential Equations, vol. 204 of North-Holland Mathematics Studies, Elsevier Science B.V., Amsterdam, 2006.
[26] C. Li, A. Chen, J. Ye, Numerical approaches to fractional calculus and fractional ordinary differential equation, J. Comput. Phys. 230 (9) (2011) 3352–3368.
[27] C. Li, C. Tao, On the fractional Adams method, Comput. Math. Appl. 58 (8) (2009) 1573–1588.
[28] P. Lino, G. Maione, Loop-shaping and easy tuning of fractional-order proportional integral controllers for position servo systems, Asian Journal of Control 15 (3) (2013) 796–805.
[29] L. Lopez, V. Simoncini, Preserving geometric properties of the exponential matrix by block Krylov subspace methods, BIT 46 (4) (2006) 813–830.
[30] M. López-Fernández, C. Palencia, A. Schädle, A spectral order method for inverting sectorial Laplace transforms, SIAM J. Numer. Anal. 44 (3) (2006) 1332–1350.
[31] C. Lubich, Runge–Kutta theory for Volterra and Abel integral equations of the second kind, Math. Comp. 41 (163) (1983) 87–102.
[32] C. Lubich, Discretized fractional calculus, SIAM J. Math. Anal. 17 (3) (1986) 704–719.
[33] J. N. Lyness, B. W. Ninham, Numerical quadrature and asymptotic expansions, Math. Comp. 21 (1967) 162–178.
[34] F. Mainardi, Fractional Calculus and Waves in Linear Viscoelasticity, Imperial College Press, London, 2010.
[35] I. Moret, A note on Krylov methods for fractional evolution problems, Numer. Funct. Anal. Optim. 34 (5) (2013) 539–556.
[36] I. Moret, P. Novati, RD-rational approximations of the matrix exponential, BIT 44 (3) (2004) 595–615.
[37] I. Moret, M. Popolizio, The restarted shift-and-invert Krylov method for matrix functions, Numer. Linear Algebra Appl. URL http://dx.doi.org/10.1002/nla.1862