Solving delay differential equations through RBF collocation
Francisco Bernal (Universidad Carlos III, 28911 Leganés, Madrid) and Gail Gutiérrez (Universidad Pontificia Bolivariana, Medellín). January 3, 2017.
Abstract
A general and easy-to-code numerical method based on radial basis function (RBF) collocation is proposed for the solution of delay differential equations (DDEs). It relies on the interpolation properties of infinitely smooth RBFs, which allow for a large accuracy over a scattered and relatively small discretization support. Hardy's multiquadric is chosen as RBF and combined with the Residual Subsampling Algorithm of Driscoll and Heryudono for support adaptivity. The performance of the method is very satisfactory, as demonstrated over a cross-section of benchmark DDEs, and by comparison with existing general-purpose and specialized numerical schemes for DDEs.
In this work, we present a general numerical approach for solving DDEs based on the RBF collocation method invented by Kansa [18][19], also known as Kansa's method. Due to its many advantages (which include superior interpolation accuracy, spectral convergence, robustness with respect to the discretization support, and ease of coding), Kansa's method is becoming increasingly popular for the solution of ordinary and partial differential equations (ODEs and PDEs, respectively). Its performance in the solution of DDEs, however, has scarcely been explored, with the exception of a recent paper on the solution of neutral DDEs with multiquadrics [21]. This paper is organized as follows. In Section 2, Kansa's method is adapted to a general formulation of (first order) DDEs. The basic algorithm is further improved by the inclusion of several heuristic observations concerning the tunable shape parameter which appears in the multiquadric RBF, and by the residual subsampling algorithm (RSA) of Driscoll and Heryudono [7]. The RSA is at the core of the high accuracy attained by the multiquadric interpolant. Section 2 is closed by some remarks concerning the solution of nonlinear problems with Kansa's method. Section 3 tests the proposed method against a cross-section of benchmark problems taken from the literature. As we shall see, not only does Kansa's method attain excellent results in well-understood (first order) DDEs, but also in the less explored neutral and higher-order DDEs, which may make it an additional tool for looking into this kind of problem. Finally, Section 4 concludes the paper.
Consider the following linear DDE

y′(x) − p(x)y(x) − q(x)y[x − τ(x)] = s(x)   if x ∈ [a, b]   (1)
y(x) = h(x)   if x ≤ a   (2)

It will be convenient to split (1)-(2) into a DDE and an ODE:

y′(x) − p(x)y(x) − q(x)y[x − τ(x)] = s(x)   if x − τ(x) > a   (3)
y′(x) − p(x)y(x) = q(x)h[x − τ(x)] + s(x)   if x − τ(x) ≤ a   (4)
y(a) = h(a)   (5)

Discretize [a, b] into a set of N scattered nodes ξ = {x_j, j = 1, …, N} (with x_1 = a and x_N = b), and consider as well the outside point x_0 = a − λ, λ > 0. We seek an approximate solution to (3)-(5) in the form of an expansion of N + 1 RBFs φ_j(r):

y(x) = Σ_{j=0}^{N} α_j φ(‖x − x_j‖)   (6)

The addition of an RBF at x_0 allows one to enforce both the initial condition and the DDE at x = a, thus contributing to the accuracy (this is the PDECB strategy discussed in [11]). Once the coefficients α_j are available, the approximate RBF solution can be reconstructed anywhere in [a, b]. In order to solve for the coefficients, (3)-(5) are enforced over (6) on a set of N collocation nodes, usually ξ. Notice that no equation is collocated on x_0, but two of them are on x_1 = a. For i = 1, …, N, this leads to the linear system of dimension N + 1

Σ_{j=0}^{N} α_j {φ′_j(r_ij) − p(x_i)φ_j(r_ij) − q(x_i)φ_j(‖x_i − τ(x_i) − x_j‖)} = s(x_i)   if x_i − τ(x_i) > a   (7)
Σ_{j=0}^{N} α_j {φ′_j(r_ij) − p(x_i)φ_j(r_ij)} = q(x_i)h[x_i − τ(x_i)] + s(x_i)   if x_i − τ(x_i) ≤ a   (8)
Σ_{j=0}^{N} α_j φ_j(r_ij) = h(a)   if x_i = a   (9)

where r_ij = ‖x_i − x_j‖. In the remainder of this paper we will restrict ourselves to the well-tested Hardy's multiquadric (MQ),

φ_j(r_j) = √(‖x − x_j‖² + c_j²)   (10)

whose derivative is

φ′_j(r_j) = (x − x_j) / √(‖x − x_j‖² + c_j²)   (11)

as the RBF of choice. The shape of the MQ depends on the free parameter c_j (hence the name shape parameter). The fact that the MQ has global support leads to fully populated matrices. It is a hallmark of Kansa's method that the best accuracy can only be obtained at the expense of extreme ill-conditioning, as will be discussed next. In order to improve stability, the direct inversion of the linear system (7)-(9) has been replaced by the use of the Moore-Penrose pseudoinverse.

The shape parameter c_j

Although the accuracy of the interpolant (6) is largely influenced by the values c_j, j = 0, . . .
, N, theoretical results regarding the choice of an 'optimal' set of values are not yet available, and heuristic rules must be used instead, which mostly address the homogeneous case c_j = c. In this case, the error of the interpolant (6) has been proven to converge as Λ^{c/h} in interpolation problems [22], and has been shown to obey Λ^{√c/h} in elliptic PDEs [6], where 0 < Λ < 1 and h is the distance between nodes. Therefore, the accuracy could seemingly be improved at no computational cost by increasing c. However, as c → ∞, the MQ profile becomes increasingly flat and the collocation system (7)-(9) becomes extremely ill-conditioned, dictating in practice a limit for the accuracy attainable at a given resolution h and machine precision. A trade-off principle arises between accuracy and stability, which is actually common to all parameter-dependent RBFs, not only MQs [27]. Optimal results are obtained by pushing c as large as possible before incurring numerical instability. Since in the approximation of differential equations the exact solution is unknown, other estimators are used instead, which replicate the behavior of the error curves with c and are available at run time. Examples are the 'leave-one-out' strategy [26][10], or the residual of the ODE/PDE [6]. For instance, in (3)-(5), the pointwise residual is defined as

R(x) = s(x) − Σ_{j=0}^{N} α_j φ′_j(x) + p(x) Σ_{j=0}^{N} α_j φ_j(x) + q(x) Σ_{j=0}^{N} α_j φ_j[x − τ(x)]   (12)

The case where c is center-dependent has been less investigated, although it may outperform MQ collocation with constant c, as shown in a numerical investigation by Kansa and Carlson [20]. Carlson and Foley showed that c_j is related to the curvature of the function to be interpolated at x ≈ x_j [4]. In [14], Hon and Mao let c_j = Mj + b, where j is the center index and M and b are chosen so as to keep the condition number κ close to a prescribed value. In [30], Wertz et al.
reported improved accuracy in a 2D problem if c_j ≫ c for (x_j, y_j) ∈ ∂Ω and c_j = µ(1 + γ(−1)^j) if (x_j, y_j) ∈ Ω, for some constants µ and γ. These findings were confirmed in the 1D case in a later work by Fornberg and Zuev [12]. Another common strategy has been to set c proportional to the distance to the closest node in the point set, e.g. in [7].

In the case that the DDE is nonlinear, or that the lagged argument is a function of the solution itself (a state-delay DDE), the collocation of the interpolant (6) leads to a system of nonlinear algebraic equations for the unknowns α_0, …, α_N. Let us write this system as

F_1(α_0, …, α_N) = 0
…
F_N(α_0, …, α_N) = 0   (13)

In order to solve the system F =
0, a gradient-based method may be used. In the MATLAB routine fsolve, the user can choose between providing the analytical Jacobian J to the solver,

J = ( ∂F_i/∂α_j ),   i = 1, …, N,   j = 0, …, N   (14)

or allowing it to construct J based on finite differences. In order to keep the implementation of Kansa's method as simple and general as possible, we have only explored the latter possibility. However, there is a practical drawback: while it is well known that the convergence of Newton-type methods is very sensitive to the condition number of the Jacobian, RBF interpolation needs to push κ for the best accuracy, often beyond the ill-conditioning threshold of our MATLAB environment. Consequently, we have used instead a trust-region method (Powell's [25]) in our numerical experiments, with good results. Nevertheless, the condition number must be kept lower than in linear DDEs in order to guarantee convergence, which is likely to prevent optimal accuracy as well.

Another important yet open issue in Kansa's method is the optimal number and location of RBF centers/collocation nodes. We will restrict ourselves to the case where both point sets are identical (save for the extra RBF center at x_0 added in order to enforce the equation at x = a). In 1D problems, regular grids are often preferred for simplicity, although experimental evidence suggests that the optimal placement of nodes is problem-dependent, i.e. is determined by the function to be interpolated. We will be using an algorithm introduced by Driscoll and Heryudono [7] which works well in practice, both for interpolation and for differential equations, and not only in 1D. The idea is to monitor the residual R of the differential equation at midpoints and iteratively refine the point set until R drops below some user-defined threshold. The reader is referred to the original paper for details. Here, we present a slightly modified version of the algorithm which we have preferred.
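The finite-difference Jacobian option can be mimicked in a few lines. The sketch below is only illustrative: it collocates the made-up nonlinear proportional-delay DDE y′(x) = −y(x)y(x/2), y(0) = 1 (not a problem from this paper), and solves the resulting algebraic system with a plain Newton iteration whose Jacobian is built by finite differences, cf. (14), using the pseudoinverse to guard against ill-conditioning; the node count and shape parameter are arbitrary choices.

```python
import numpy as np

# MQ support on [0, 1] for the toy nonlinear DDE y'(x) = -y(x) y(x/2), y(0) = 1
# (a made-up proportional-delay example; N and c are arbitrary choices)
N, c = 10, 0.2
x = np.linspace(0.0, 1.0, N)
ctr = np.r_[-(x[1] - x[0]), x]                 # extra center x_0 < a (PDECB)
phi = lambda p: np.sqrt((p[:, None] - ctr[None, :]) ** 2 + c ** 2)
dphi = lambda p: (p[:, None] - ctr[None, :]) / phi(p)

def F(alpha):
    """Collocation residual: DDE at every node plus the initial condition."""
    y, yp, ylag = phi(x) @ alpha, dphi(x) @ alpha, phi(0.5 * x) @ alpha
    return np.r_[yp + y * ylag, phi(np.array([0.0])) @ alpha - 1.0]

def newton_fd(F, a, h=1e-7, tol=1e-10, itmax=30):
    """Newton iteration with a finite-difference Jacobian, cf. (14)."""
    for _ in range(itmax):
        r = F(a)
        if np.abs(r).max() < tol:
            break
        J = np.empty((r.size, a.size))
        for j in range(a.size):
            e = np.zeros_like(a)
            e[j] = h
            J[:, j] = (F(a + e) - r) / h
        a = a - np.linalg.pinv(J) @ r          # pseudoinverse guards conditioning
    return a

alpha0 = np.linalg.pinv(phi(x)) @ np.ones(N)   # start from the interpolant of 1
alpha = newton_fd(F, alpha0)
print(np.abs(F(alpha)).max())
```

A real trust-region solver (such as fsolve) would be more robust far from the solution; here the interpolant of the constant 1 already provides a good enough starting guess.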
Residual Subsampling Algorithm (RSA)

• Initially, discretize [a, b] into a grid of N^(0) nodes with spacing ∆ = (b − a)/(N^(0) − 1), x_j = a + (j − 1)∆, j = 1, …, N^(0), ξ^(0) = {x_j}, and x_0 = a − ∆. The N^(0) + 1 starting MQ centers are the set x_0 ∪ ξ^(0). Define the values of the adjustable parameters λ > 0, µ > 0, γ > 0, η > 0, θ_max > θ_min > 0, and itmax.

• For k = 0, 1, … until max_j |R_j^(k)| < θ_max or k > itmax:

– Distribute the shape parameters as c_0 = c_{N^(k)} = λµd_1 and c_j = µd_j[1 + γ(−1)^j], j = 1, …, N^(k) − 1, where d_j is the distance to the closest collocation node from x_j.
– Compute the set of midpoints z_j = (x_j + x_{j+1})/2, j = 1, …, N^(k) − 1.
– Solve the DDE through Kansa's method with y(x) = Σ_{j=0}^{N^(k)} α_j φ_j(‖x − x_j‖).
– Compute the residuals {R_j^(k)} of the differential equation at the midpoints.
– Set Θ^(k) = max(θ_max, max_{j=1…N^(k)−1} |R_j^(k)/η|).
– Define the point set Ξ^(k) = ξ^(k) ∪ {z_j such that |R_j^(k)| > Θ^(k)}.
– Delete from Ξ^(k) those points x_i, i = 2, …, N^(k) − 1, such that both |R_{i−1}^(k)| < θ_min and |R_{i+1}^(k)| < θ_min.
– Let ξ^(k+1) = Ξ^(k) and {x_1, …, x_{N^(k+1)}} = ξ^(k+1).
– Update {d_j} for j = 1, …, N^(k+1).
– Consider the set of 1 + N^(k+1) MQs centered at x_0 ∪ ξ^(k+1) and iterate.

In the above algorithm, the shape parameters are adjusted after each iteration in order to prevent the condition number from skyrocketing. As further nodes are included, however, the onset of instability is eventually reached and the accuracy of the MQ approximation begins to deteriorate. The only tweaks to the original RSA in [7] are: PDECB, the use of the recipe in [30] in the distribution of c's, and the substitution of θ_max by Θ^(k) upon enlargement of the point set.

In the remainder of the paper, we will refer to the method described in Section 2 as MQCM (multiquadric collocation method). The MQCM is coded in MATLAB 7 running on a laptop with a 1.8 GHz CPU and 1 GB RAM. In this section, the MQCM is tested against a cross-section of benchmark DDEs taken from the literature. The performance of the MQCM is compared with that of the MATLAB built-in general-purpose routines DDE23 [28] by Shampine and Thompson, and DDESD [29] by Shampine, which are both based on Runge-Kutta-type schemes. DDE23 is restricted to constant delays, while the more recent DDESD can handle variable- and state-delay equations as well.
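Before moving on to the examples, the residual-subsampling idea described in Section 2 can be condensed into a short sketch. For simplicity it is applied here to plain MQ interpolation rather than to a DDE, with the Runge test function, thresholds and node counts all being illustrative choices of ours, not the paper's:

```python
import numpy as np

def mq_fit(xc, fv, cj):
    """MQ interpolant through (xc, fv) with per-center shape parameters cj."""
    A = np.sqrt((xc[:, None] - xc[None, :]) ** 2 + cj[None, :] ** 2)
    alpha = np.linalg.pinv(A) @ fv                 # pseudoinverse for stability
    return lambda z: np.sqrt((np.asarray(z)[:, None] - xc[None, :]) ** 2
                             + cj[None, :] ** 2) @ alpha

def rsa(f, a, b, n0=11, mu=1.0, theta_max=1e-2, itmax=15):
    """Residual subsampling: refine at midpoints whose residual exceeds theta_max."""
    x = np.linspace(a, b, n0)
    for k in range(itmax):
        d = np.diff(x)
        dj = np.minimum(np.r_[d, d[-1]], np.r_[d[0], d])   # closest-node distances
        s = mq_fit(x, f(x), mu * dj)               # c_j proportional to d_j
        z = 0.5 * (x[:-1] + x[1:])                 # midpoints
        res = np.abs(s(z) - f(z))                  # interpolation residual
        if res.max() < theta_max or k == itmax - 1:
            break
        x = np.sort(np.r_[x, z[res > theta_max]])
    return x, res.max()

nodes, err = rsa(lambda t: 1.0 / (1.0 + 25.0 * t ** 2), -1.0, 1.0)
print(nodes.size, f"{err:.1e}")
```

The full algorithm additionally coarsens the node set where neighboring residuals fall below θ_min, and for a DDE the residual (12) replaces the interpolation error monitored here.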
For Examples 4 and 5, DDENSD, a routine based on DDESD for DDEs of neutral type, has been used instead of DDESD. The fact that these three programs are written in MATLAB allows for a direct comparison of error estimates and CPU times with the MQCM. In particular, the root mean squared error is defined as
RMS(ǫ) = √( (1/N_ev) Σ_{i=1}^{N_ev} [u_NUM(z_i) − u_EX(z_i)]² )   (15)

where u_EX is the exact solution, u_NUM the approximation yielded by the considered numerical scheme, ǫ is the point-wise error, and z_i, i = 1, …, N_ev = 10³ is a set of equispaced evaluation points in [a, b]. In some of the examples presented, published results of some specialized method for the kind of DDE considered have been included as further reference. In such cases, not all the estimators are available for comparison. CPU times, in particular, cannot be directly compared, which is denoted by adding an asterisk (*) to the corresponding entry. In all of the numerical examples which follow, the working parameters for the RSA have been set to

λ = 10,  µ = √(·/N^(0)),  γ = 0.·,  η = 10,  θ_max = 10^(−·),  θ_min = 10^(−·)   (16)

except in Example 5, where µ = √(·/N^(0)) with a different numerator. The initial discretization is N^(0) = 6 in Examples 1-4, and N^(0) = 10 in Examples 5 and 6.

Consider the following DDE with a stiffness parameter p (Example 1 in [16]):

y′(x) = Ay(x) + y(x − π/2) − A sin(x) + 2 cos(x),   x ∈ [0, b]
y(x) = e^(px) + sin(x),   x ∈ [−π/2, 0]   (17)

where A = p − e^(−πp/2). The exact solution is given by y_EX(x) = e^(px) + sin(x). Since p < 0 also enters the equation exponentially, its effect on the stiffness of the problem is dramatic. Table 1 compares the performance of the MQCM with that of DDE23 and with that of the spectral method in [16] (SPC). In Table 1, an entry like 5.·(−
15) means a mantissa times 10^(−15), and so on. DoF (degrees of freedom) stands for the size of the support of the given discretization scheme (the number of MQ centers in the MQCM). The listed results for DDE23 are the best within reasonable computing time and/or memory restrictions.

The MQCM is barely affected, if at all, by the increasing stiffness of the problem. In fact, the advantages of the MQCM in dealing with stiff ODEs were already reported in [14]. In terms of efficiency, the MQCM outperforms DDE23. The inversion of full matrices required by the MQCM is made up for by the gain in the size of the discretization support. On the other hand, the accuracy of the SPC can be improved by increasing the order of the scheme, as happens in Table 1 for different p. The SPC is more efficient than the MQCM, but is affected by the increasing value of p (see discussion in [16]). Moreover, it is restricted to constant delays.

Table 2 shows the performance of the RSA throughout the iterations for this problem. While it converges on average, the scheme is clearly not monotone. It is surprising that the convergence can be sustained at such high condition numbers. While in our implementation of Kansa's method the ill-conditioning problem is, at least partially, ameliorated by the use of the pseudoinverse (instead of the direct inversion of the matrix), this phenomenon of high accuracy at very high condition numbers has already been reported when smooth functions are interpolated with MQs [14].

Table 1: Comparison to other methods (Example 1)

Table 2: RSA iterations (Example 1)

Figure 1: Plots of the exact solution of Example 1 for p = −0.1, −1, −2
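The error measure (15) reduces to a few lines of code. The sanity check below uses a made-up pair of functions: shifting the exact solution by a constant 10^(−6) must yield RMS(ǫ) = 10^(−6).

```python
import numpy as np

def rms_error(u_num, u_ex, a, b, n_ev=1000):
    """Root mean squared error (15) over n_ev equispaced evaluation points."""
    z = np.linspace(a, b, n_ev)
    eps = u_num(z) - u_ex(z)
    return np.sqrt(np.sum(eps ** 2) / n_ev)

# sanity check: a constant offset of 1e-6 must give RMS(eps) = 1e-6
val = rms_error(lambda t: np.sin(t) + 1e-6, np.sin, 0.0, np.pi)
print(f"{val:.3e}")
```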
Consider the following pantograph differential equation (see also [3]):

y′(x) = −y(x) + (q/2) y(qx) − (q/2) e^(−qx),   y(0) = 1,   0 ≤ x ≤ T,   0 < q < 1   (18)

whose solution is y_EX(x) = e^(−x). Numerical methods for DDEs like (18) are a topical subject of research because of two features associated with a proportional delay of the form τ(x) = (1 − q)x, 0 < q < 1, namely: it vanishes at x = 0 and becomes unbounded as x → ∞. The former leads to difficulties in carrying out the integration of the first step, while the latter entails the need for a vast amount of computer memory if long-term integration (T ≫ 0) is required. In what follows we set T = 10. Table 3 compares the MQCM with DDESD and with the specialized reference method (REF) in [3], which works on a specific (geometric) kind of mesh in order to attain superconvergence. The listed results for DDESD are not the best attainable, but those for which the CPU time is comparable to that of the MQCM.

Table 3: Comparison to other methods (Example 2)
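The pantograph equation also makes a convenient end-to-end test of the collocation system (7)-(9), since the delayed argument qx never leaves [a, b]. The sketch below assumes the standard form y′(x) = −y(x) + (q/2)y(qx) − (q/2)e^(−qx) with exact solution e^(−x), and solves it for q = 1/2 on [0, 1] with a uniform support and a single shape parameter; all numerical choices are illustrative, and the paper's adaptive c-distribution is omitted.

```python
import numpy as np

def solve_pantograph(q=0.5, b=1.0, N=21, c=0.2):
    """Kansa MQ collocation for the pantograph equation
       y'(x) = -y(x) + (q/2) y(qx) - (q/2) exp(-qx),  y(0) = 1,
    whose exact solution is y(x) = exp(-x)."""
    x = np.linspace(0.0, b, N)                 # collocation nodes, x_1 = 0
    lam = x[1] - x[0]
    ctr = np.r_[-lam, x]                       # extra center x_0 = -lam (PDECB)
    phi = lambda p: np.sqrt((p[:, None] - ctr[None, :]) ** 2 + c ** 2)
    dphi = lambda p: (p[:, None] - ctr[None, :]) / phi(p)
    # DDE rows at every node (the delayed argument qx never leaves [0, b]) ...
    A = dphi(x) + phi(x) - 0.5 * q * phi(q * x)
    rhs = -0.5 * q * np.exp(-q * x)
    # ... plus the initial condition collocated at x = 0
    A = np.vstack([A, phi(np.array([0.0]))])
    rhs = np.r_[rhs, 1.0]
    alpha = np.linalg.pinv(A) @ rhs            # pseudoinverse for stability
    return lambda z: phi(np.asarray(z, float)) @ alpha

y = solve_pantograph()
z = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(y(z) - np.exp(-z)))
print(f"max error on [0,1]: {err:.2e}")
```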
A recent improvement on the reference method [3] is [15], more efficient than the former when long integration times T are required. For (18) with T = 10 and q = 0.5, it attains |y(T) − y_EX(T)| of order 10^(−13) with 1280 nodes. The results of the MQCM throughout the first 11 iterations of the RSA are shown in Table 4. A fifth column has been added that lists the errors (in absolute value) of the MQCM solution at x = T.

In the event that the solution y(x) of the DDE has discontinuities or low-order derivative singularities, Kansa's method performs relatively poorly. The reason is that nonsmooth features do not belong to the interpolation space, which is made up of infinitely differentiable MQs that cannot possibly capture them accurately. Any attempt to do so will bring about Gibbs oscillations around the singularities, whose amplitude will not be damped by letting N → ∞. An interesting approach is to include MQs with c = 0 close to the singularities, as in [17]. However, although the oscillations are indeed reduced, we have not been able to recover the high convergence rate attained with smooth solutions. In order to solve DDEs with piecewise smooth solutions, Kansa's method can still be applied sequentially if the domain is partitioned into subintervals on which the solution is C^∞. For instance, assume that it can be predicted that the only three singularities take place at a < x_1 < x_2 < x_3 < b. First, the DDE is solved in the subdomain a ≤ x < x_1 to yield y_app^(1)(x). Then, y_app^(1)(x) is used as history function for the second subdomain x_1 < x < x_2, yielding y_app^(2)(x), and so on.

The next example ([16], Example 4; also [24], 1.1.12) deals with a DDE having a piecewise C^∞ initial function: y′(x) = y(x) + y(x −
1)   (19)

y(x) = h(x),   x ∈ [−1, 0]   (20)

where the initial function h takes one constant value on [−1, −1/3) and another on [−1/3, 0]. The analytical solution for x ∈ [0, 7/3], given piecewise in (21), consists of combinations of exponential and polynomial-times-exponential terms (e^x, x e^(x−1), …) on each of the subintervals [0, 2/3], [2/3, 1], [1, 5/3], [5/3, 2] and [2, 7/3], with constants C_1, …, C_4 fixed by continuity at the breakpoints. Notice that the discontinuity of the initial function propagates in x, giving rise to singularities of order k at the points x_k = −1/3 + k, k ≥ 0. For both the MQCM and the SPC to cope with this problem, the integration domain [a, b] = [0, 7/3] must be divided into the 5 smooth subintervals of (21). The results of the MQCM, SPC and DDE23 are listed in Table 5. DDE23 has a 'Jumps' option, which has been set to a vector that contains the locations of the discontinuities.
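The sequential strategy can be sketched on a simpler constant-history problem, y′(x) = y(x) + y(x − 1) with y(x) = 1 for x ≤ 0 (a textbook case of ours, not Example 3 itself): on each smooth segment the delayed values are already known, so the DDE reduces to a forced ODE, and the segment-1 interpolant supplies the history for segment 2. On [0, 1] the exact solution is 2e^x − 1, and on [1, 2] it is (2 − 4/e)e^x + 2x e^(x−1) + 1.

```python
import numpy as np

def solve_segment(A, B, ylag, y0, N=25):
    """Kansa solve of y'(x) - y(x) = ylag(x - 1) on [A, B], y(A) = y0.
    ylag supplies already-known delayed values, so each smooth segment
    reduces to a forced ODE."""
    x = np.linspace(A, B, N)
    lam = x[1] - x[0]
    c = 4.0 * lam                              # illustrative shape parameter
    ctr = np.r_[A - lam, x]                    # extra center left of A (PDECB)
    phi = lambda p: np.sqrt((p[:, None] - ctr[None, :]) ** 2 + c ** 2)
    dphi = lambda p: (p[:, None] - ctr[None, :]) / phi(p)
    M = np.vstack([dphi(x) - phi(x), phi(np.array([A]))])
    rhs = np.r_[ylag(x - 1.0), y0]
    alpha = np.linalg.pinv(M) @ rhs            # pseudoinverse for stability
    return lambda z: phi(np.asarray(z, float)) @ alpha

# history y(x) = 1 for x <= 0: solve on [0, 1], then reuse as history on [1, 2]
y1 = solve_segment(0.0, 1.0, lambda t: np.ones_like(t), 1.0)
y2 = solve_segment(1.0, 2.0, y1, y1(np.array([1.0]))[0])
# exact value at x = 2: (2 - 4/e) e^2 + 4 e + 1
err2 = abs(y2(np.array([2.0]))[0] - ((2.0 - 4.0 / np.e) * np.e ** 2 + 4.0 * np.e + 1.0))
print(f"{err2:.1e}")
```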
Neutral DDEs (which involve lagged derivatives) are considered tougher to handle with numerical methods than retarded DDEs and are an active research field. In the following problem, taken from [23] (see also [24], 2.3.4), the delay is a function of the solution itself, and the nonlinear system (13) is solved with the 'dogleg' algorithm of the MATLAB nonlinear solver fsolve:

y′(x) = −y′(y(x) − 2),   x ≥ 0
y(x) = 1 − x,   x ≤ 0   (22)

The exact solution is y_EX(x) = 1 + x for 0 ≤ x ≤ 1. The initial guess y(x) required to trigger Powell's method is y_GUESS(x) = 0. The entry f in Table 6 means that Powell's method fails to converge in the maximum number of iterations allowed (set to 30). Nevertheless, it yields an approximation good enough to be used as a guess for the nonlinear solution with 13 MQs (whose solution is in turn used as a guess for the next RSA iteration, and so on). For reference, DDENSD yields RMS(ǫ_REF) = 2.·(−·).

Table 4: RSA iterations (Example 2)

it   DoF  RMS(ǫ)    Condition ♯  ǫ(x = T)
0    7    0.01      1.9(+11)     0.02
1    12   2.2(-5)   3.3(+15)     5.3(-6)
2    14   2.6(-5)   1.7(+14)     1.3(-4)
3    16   4.3(-7)   1.1(+14)     7.3(-7)
4    23   1.4(-8)   4.2(+16)     3.5(-9)
5    30   6.0(-10)  2.2(+17)     3.0(-10)
6    49   2.3(-10)  1.1(+18)     1.2(-10)
7    64   2.6(-11)  7.3(+18)     1.8(-10)
8    65   3.2(-11)  3.3(+18)     3.5(-11)
9    77   1.7(-12)  1.2(+18)     2.4(-12)
10   135  1.3(-13)  3.9(+18)     2.8(-13)
11   177  2.2(-13)  8.5(+18)     8.7(-15)
CPU  7.8

This example is a nonlinear neutral differential equation with vanishing state delay. It was first proposed in [8] as a modification of a problem originally considered in [5]:

y′(x) = cos(x)[1 + y(x y²(x))] + c y(x) y′(x y²(x)) + g(x),   0 ≤ x ≤ π
g(x) = (1 − c) sin(x) cos(x sin²(x)) − sin(x + x sin²(x))
y(0) = 0   (23)

For every choice of the parameter c, the exact solution is y_EX(x) = sin(x). Because the delay vanishes at x = 0, π/2, 3π/2, …, the numerical solution of (23) by Runge-Kutta methods causes some difficulties.
For the MQCM, the main difficulty is that the condition number must be kept low enough for the nonlinear solver to converge in a reasonable number of nonlinear iterations (again Powell's algorithm in the fsolve routine). Therefore, we have set N^(0) = 11 and µ = √(·/N^(0)). The initial hint of the solution is y_GUESS = 1/·. For the reference errors (ǫ_REF), we have taken those of [13] (Example 2), where (23) is solved by the Radau-type code RADAR5 (Table 7). In the case c = 1, there is a singularity at x = π/2, where y′(π/2) is not well defined, and the MQCM with default parameters fails.

Table 5: Comparison to other methods (Example 3)

Table 6: RSA iterations (Example 4)
It   DoF  RMS(ǫ)  Condition  NL iter
0    7    0.0452  7.5(+11)   f

Example 6: Second order DDE

The last example illustrates the ability of the MQCM to accurately solve higher-order DDEs. Since equations of this type are less common in the literature, most solvers are not designed to handle them. In order to compare, we have transformed a system of two state-delay DDEs into a second-order DDE:

y″(x) = (exp[1 − y(x)] − x) y′(x − exp[1 − y(x)]) y′(x)²,   x ≥ 1
y(x) = log(x),   0 < x ≤ 1

This is achieved by setting y_2(x) = y′(x) and inserting it into y″(x) in

y_1′(x) = y_2(x),   x ≥ 1
y_2′(x) = (exp[1 − y_1(x)] − x) y_2(x − exp[1 − y_1(x)]) y_2(x)²,   x ≥ 1
y_1(x) = log(x),   0 < x ≤ 1
y_2(x) = 1/x,   0 < x ≤ 1

The exact solution is y_EX(x) = y_{1,EX}(x) = log(x). We consider the interval [a, b] with a = 1, and two extra MQ centers for PDECB (since y, y′, and y″ are enforced at x = a). Such centers are placed at x_0 = a − ∆ and x_{−1} = a − 2∆. For reference, solving the first-order system with a MATLAB routine yields RMS(ǫ_REF) = 2.·(−
13) (for y(x)) in 2.1 s. Notice that the ill-conditioning must be kept lower in order to ensure convergence of the nonlinear solver, thus limiting accuracy. A possible alternative would be to use a Newton-type routine with the analytical Jacobian of the given DDE. The good performance shown by the MQCM relies on the accuracy with which numerical derivatives are reproduced in Kansa's method. While not every system of m DDEs can be transformed into a single DDE of order m, there are many cases where this transformation can be an advantageous alternative for the solution of DDE systems with the MQCM.

A novel numerical method for the solution of DDEs has been presented. It relies on the multiquadric collocation method introduced by Kansa, combined with the RSA algorithm

Table 7: RSA iterations (Example 5)

c     DoF  Condition  CPU   RMS(ǫ)   ǫ_MQ(x = π)  ǫ_REF(x = π)
-1.0  65   1.8(13)    58.3  4.7(-9)  3.3(-9)      1.8(-9)
-0.7  44   2.5(11)    20.4  3.2(-8)  9.5(-9)      4.2(-9)
-0.3  44   9.1(10)    9.8   3.2(-8)  7.5(-8)      1.7(-10)
0.0   69   6.8(12)    30.0  3.0(-8)  1.6(-8)      1.2(-9)
0.3   46   9.9(10)    17.1  4.3(-9)  2.5(-9)      1.0(-9)
0.7   49   3.0(11)    36.2  1.1(-9)  7.2(-10)     5.3(-9)
1.0   f

Table 8: RSA iterations (Example 6)
It   DoF  RMS(ǫ)  Condition  NL iter
0    12   0.103   2.4(+10)   f   f
by Driscoll and Heryudono for node adaptivity, which uses the residual as refinement criterion. As long as the solution of the DDE is smooth (or piecewise smooth, with the positions of the singularities known in advance), the present method can accurately handle a large variety of such problems, including state-delay, neutral, and high-order DDEs. Moreover, the scheme is straightforward to code and enjoys spectral convergence. Because in this paper the stress is placed on simplicity, nonlinearities are fed to a general-purpose solver, without attempting to optimize. Possible improvements include providing an analytical Jacobian, and linearizing the DDE along the lines of [9][2].
References

[1] A.N. Al-Mutib,
An Explicit One-Step Method of Runge-Kutta Type for Solving Delay Diff. Eqns., Utilitas Math., 67-80 (1987).
[2] F. Bernal and M. Kindelan, A meshless solution to the p-Laplace equation, in Progress on Meshless Methods, A.J.M. Ferreira, E.J. Kansa, G.E. Fasshauer, V. Leitão (eds.), Springer (to appear).
[3] H. Brunner, Q. Hu and Q. Lin, Geometric meshes in collocation methods for Volterra integral equations with proportional delays, IMA J. Numer. Anal., 783-798 (2001).
[4] R.E. Carlson and T.A. Foley, The parameter R² in multiquadric interpolation, Comput. Math. Appl., 29-42 (1991).
[5] R.N. Castleton and L.J. Grimm, A first order method for differential equations of neutral type, Math. Comp., 571-577 (1973).
[6] A.H.-D. Cheng, M.A. Golberg, E.J. Kansa and T. Zammito, Exponential convergence and h-c multiquadric collocation method for partial differential equations, Numer. Methods Part. Differ. Equat.
[7] T.A. Driscoll and A.R.H. Heryudono, Adaptive residual subsampling methods for radial basis function interpolation and collocation problems, Computers Math. Appl., 927-939 (2007).
[8] W.H. Enright and H. Hayashi, A delay differential equation solver based on a continuous Runge-Kutta method with defect control, Numer. Alg., 349-364 (1998).
[9] G.E. Fasshauer, Nonsymmetric Multilevel RBF Collocation within an Operator Newton Framework for Nonlinear PDEs, in Trends in Approximation Theory, K. Kopotun, T. Lyche and M. Neamtu (eds.), Vanderbilt University Press, 103-112 (2001).
[10] G.E. Fasshauer and J. Zhang, On Choosing "Optimal" Shape Parameters for RBF Approximation, Numerical Algorithms.
[11] A.I. Fedoseyev, M.J. Friedman and E.J. Kansa, Continuation for Nonlinear Elliptic Partial Differential Equations Discretized by the Multiquadric Method, Int. J. Bifur. Chaos, 481-492 (2000).
[12] B. Fornberg and J. Zuev, The Runge phenomenon and spatially variable shape parameters in RBF interpolation, Comput. Math. Appl., 379-398 (2007).
[13] N. Guglielmi and E. Hairer, Implementing Radau IIA Methods for Stiff Delay Differential Equations, Computing (1), 1-12 (2000).
[14] Y.C. Hon and X.Z. Mao, A multiquadric interpolation method for solving initial value problems, Sci. Comput., No. 1, 51-55 (1997).
[15] E. Ishiwata and Y. Muroya, Rational approximation method for delay differential equations with proportional delay, Applied Mathematics and Computation, issue 2, 741-747 (2007).
[16] K. Ito, H.T. Tran and A. Manitius, A Fully-Discrete Spectral Method for Delay Differential Equations, SIAM J. Numerical Analysis, 1121-1140 (1991).
[17] J.-H. Jung, A note on the Gibbs phenomenon with multiquadric radial basis functions, Applied Numerical Mathematics, Issue 2, 213-229 (2007).
[18] E.J. Kansa, Multiquadrics - a scattered data approximation scheme with applications to computational fluid-dynamics. I. Surface approximations and partial derivative estimates, Comput. Math. Appl., 127-145 (1990).
[19] E.J. Kansa, Multiquadrics - a scattered data approximation scheme with applications to computational fluid-dynamics. II. Solutions to parabolic, hyperbolic and elliptic partial differential equations, Comput. Math. Appl., 147-161 (1990).
[20] E.J. Kansa and R.E. Carlson, Improved accuracy of multiquadric interpolation using variable shape parameters, Comput. Math. Appl., 99-120 (1992).
[21] S. Karimi Vanani and A. Aminataei, Multiquadric approximation scheme on the numerical solution of delay differential systems of neutral type, Mathematical and Computer Modelling, in press (2008).
[22] W.R. Madych, Miscellaneous error bounds for multiquadric and related interpolators, Comput. Math. Appl., 121-138 (1992).
[23] S.B. Norkin and L.E. El'sgol'ts, Introduction to the Theory and Applications of Differential Equations with Deviating Arguments, Math. in Sci. and Eng., 44-45 (1973).
[24] C.A.H. Paul, A test set of functional differential equations, Technical Report 243, University of Manchester (1992).
[25] M.J.D. Powell, A Fortran Subroutine for Solving Systems of Nonlinear Algebraic Equations, in Numerical Methods for Nonlinear Algebraic Equations, P. Rabinowitz (ed.), Ch. 7 (1970).
[26] S. Rippa, An algorithm for selecting a good value for the parameter c in radial basis function interpolation, Adv. Comput. Math. (2-3), 193-210 (1999).
[27] R. Schaback, Error estimates and condition numbers for radial basis function interpolation, Adv. Comput. Math., 251-264 (1995).
[28] L.F. Shampine and S. Thompson, Solving DDEs in Matlab, Appl. Numer. Math., 441-458 (2001).
[29] L.F. Shampine, Solving ODEs and DDEs with residual control, Appl. Numer. Math., 113-127 (2005).
[30] J. Wertz, E.J. Kansa and L. Ling, The role of the multiquadric shape parameters in solving elliptic partial differential equations, Computers and Mathematics with Applications, 51 (2006).