Numerical Integration as an Initial Value Problem
Daniel Gebremedhin, Charles Weatherford
Physics Department, Florida A&M University, Tallahassee, FL, USA.
Abstract
Numerical integration (NI) packages commonly used in scientific research are limited to returning the value of a definite integral at the upper integration limit, also commonly referred to as numerical quadrature. These quadrature algorithms are typically of a fixed accuracy and have only a limited ability to adapt to the application. In this article, we present a highly adaptive algorithm that not only can efficiently compute definite integrals encountered in physical problems, but can also be applied to other problems such as indefinite integrals, integral equations, and linear and non-linear eigenvalue problems. More specifically, a finite element based algorithm is presented that numerically solves first order ordinary differential equations (ODE) by propagating the solution function from a given initial value (lower integration value). The algorithm incorporates powerful techniques including adaptive step size choice of elements and local error checking, and enforces continuity of both the integral and the integrand across consecutive elements.
Keywords: numerical integration, ordinary differential equations
1. Introduction
Numerical integration (NI) is one of the most useful numerical tools and is routinely utilized in all scientific and engineering applications. There are general and specialized methods that efficiently compute definite integrals even for many pathological cases. Most of the integration algorithms are in software packages and are available in almost all scientific libraries. Ref. [1] is an excellent book on the subject, containing a comprehensive reference to scientific articles and the accompanying software.

Email addresses: [email protected] (Daniel Gebremedhin), [email protected] (Charles Weatherford)

Preprint submitted to Elsevier, June 6, 2018

However, as efficient and advanced as those quadrature algorithms are, and perhaps for this very reason, the techniques implemented in those algorithms do not necessarily carry over to treat other related numerical problems. Applications pertinent to NI are usually much more complex than simply a value at a single point at the upper integration limit. A broader and more versatile advantage can be gained if integration is viewed from its general mathematical perspective, as a specific case of an initial value problem. In this paper, we present an algorithm that solves ODEs and that, with slight modification, can be used on many relevant applications, one of the most important of which is NI. As far as numerical quadrature is concerned, the efficiency of the method given here is comparable to the common NI packages available in scientific libraries. Moreover, the present method is so simple to use that it can be readily modified and applied to a broader spectrum of numerical applications, as long as they can be set up as initial value problems.

The algorithm in question has recently been applied to a second order ODE to treat a challenging system, namely the soft Coulomb problem, where its versatility and power are self-evident [2]. In this work, we discuss in some detail how the same algorithm can be adapted to solve a first-order ODE and thereby apply it, in a purely mathematical setting, to the calculation of the numerical integral of a given function. Illustrative examples that are best solved by the present method, and that thereby highlight its important features, will also be included.
2. Description of Algorithm
The main intent of this paper is to present a finite element algorithm that numerically approximates a solution function y to the following first order ODE

d/dx y(x) = f(x)   (1)

with a given initial condition y(x = a) = y_a, where both y_a and a are assumed to be finite. Generally, f can be a simple function of y, such that the above equation may be a linear or non-linear eigenvalue problem. f can also be a kernel function of a homogeneous or inhomogeneous integral equation [3]. In particular, if f(x) is a predetermined simple function and y_a = 0, then the above equation is equivalent to the single integral

y(x) = ∫_a^x f(t) dt.   (2)

Begin the numerical solution to eq. (1) by breaking the x-axis into finite elements and mapping each element into a local variable τ with domain −1 ≤ τ ≤ +1:

x = x_i + q_i (τ + 1),   x_i ≤ x ≤ x_{i+1}.   (3)

Here, i = 1, 2, ..., i_max labels the elements, with x_1 = a, while q_i = (x_{i+1} − x_i)/2 is the half-width of element i. In the local variable, eq. (1) becomes

d/dτ ȳ(τ) = q f̄(τ).   (4)

The over-bar indicates the appropriate change in functional form, while the element index i is dropped for simplification of notation. At this point, we will expand ȳ(τ) in a polynomial basis set as follows:

ȳ(τ) = Σ_{μ=0}^{M−1} u_μ(τ) B_μ + s_0(τ) q f̄(−1) + ȳ(−1).   (5)

Notice that f̄(−1) ≡ f(x_i), and similarly for ȳ(−1). The main import of the above expansion is that it allows us to enforce continuity of both f and y across the boundary of two consecutive elements. This is because the basis functions u and their derivatives (denoted by s) identically vanish at τ = −1. The polynomials u and s are defined in terms of Legendre polynomials of the first kind P [4] as

s_μ(τ) = ∫_{−1}^τ P_μ(t) dt,   u_μ(τ) = ∫_{−1}^τ s_μ(t) dt.   (6)

They are discussed in [2] in more detail. These polynomials satisfy the following recurrence identities, presented here for the first time. It is surprising to see how these relations remain three-term, with no surface values, despite the fact that the polynomials s and u are sequentially primitives of Legendre polynomials:

s_0(τ) = τ + 1,   s_1(τ) = (1/2)(τ² − 1),
(μ + 1) s_μ(τ) = (2μ − 1) τ s_{μ−1}(τ) − (μ − 2) s_{μ−2}(τ),   μ ≥ 2,   (7)

u_0(τ) = (1/2)(τ + 1)²,   u_1(τ) = (1/6)(τ + 1)²(τ − 2),
(μ + 2) u_μ(τ) = (2μ − 1) τ u_{μ−1}(τ) − (μ − 3) u_{μ−2}(τ),   μ ≥ 2.   (8)

Substituting eq. (5) into eq. (4) and demanding that the result hold at the M Gauss–Legendre (GL) nodes τ_ν, ν = 1, ..., M, yields the linear system

Σ_{μ=0}^{M−1} s_μ(τ_ν) B_μ = q [ f̄(τ_ν) − f̄(−1) ],   (9)

where τ_ν is a root of an M-th order Legendre polynomial. This technique, known as the collocation method, is an alternate way of constructing a linear system of equations, compared to the more familiar projection integrals. The s_μ(τ_ν) are now elements of a square matrix which, for a given size M, is constant and hence merely needs to be constructed once, LU decomposed, and stored for all time. Solving for the unknown coefficients B thus only involves back substitution. Construction of the right-hand side column, on the other hand, requires an evaluation of the function f at M + 1 points, including at the beginning of the element. In the examples that follow, we will denote the total number of function evaluations by N.

After the coefficients B are calculated this way, the value of the integral at the end point of element i can be obtained by evaluating eq. (5) at τ = +1. The result is simply

ȳ(+1) = 2 B_0 − (2/3) B_1 + 2 q f̄(−1) + ȳ(−1).   (10)

One of the advantages of representing the solution in the form given in eq. (5) is that it allows us to evaluate the value of the integral at any continuous point inside the element. In this sense, y is simply a function whose domain extends into all of the solved elements as the propagation proceeds. Hence, from a numerical perspective, this algorithm is best implemented by way of object-oriented programming, in order to incorporate and preserve all the necessary quantities of all the elements in a hierarchy of derived variables. The solution function y(x), for instance, can be saved (as an object) for later use by including, among other quantities, the grid containing the coordinates of the steps {x_i}, i = 1, ..., i_max. Locating the index to which a given point x belongs is then an interpolation exercise for which efficient codes already exist; see, for instance, subroutine hunt and its accompanying notes in ref. [5]. For a sorted array, as is our case, this search generally takes about log i_max tries, while further points within close proximity can be located rather easily.

An even more important advantage of this form of the solution is its suitability for estimating the size of the next element via the method described in [2]. This adaptive step size choice relies on knowledge of the derivatives of y up to 4th order at the end of a previous element. Those derivatives can be directly computed from eq. (5). Instead of solving two quadratic equations as we did in the last paper, however, we will solve one cubic equation, which is more suited for the present purpose. The user is required to guess only the size of the very first element. Overestimation of the size of an element may cause an increase in the number of function evaluations, but there will be no compromise in the accuracy of the solution, as will be made clear in a moment.

One other attractive feature is that we can precisely measure the error of the calculated solution directly from eq. (4).
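As a concrete check on the three-term recurrences for s_μ and u_μ given above, the following short Python sketch (illustrative only, not the authors' code) evaluates the polynomials by recurrence and verifies them against the closed form s_μ = (P_{μ+1} − P_{μ−1})/(2μ+1), as well as the endpoint values s_0(1) = 2, u_0(1) = 2, u_1(1) = −2/3, and u_μ(1) = 0 for μ ≥ 2 that underlie eq. (10).

```python
def s_poly(mu_max, tau):
    """s_mu(tau) = integral of P_mu from -1 to tau, via the recurrence (7)."""
    s = [tau + 1.0, 0.5 * (tau * tau - 1.0)]
    for mu in range(2, mu_max + 1):
        s.append(((2 * mu - 1) * tau * s[mu - 1] - (mu - 2) * s[mu - 2]) / (mu + 1))
    return s

def u_poly(mu_max, tau):
    """u_mu(tau) = integral of s_mu from -1 to tau, via the recurrence (8)."""
    u = [0.5 * (tau + 1.0) ** 2, (tau + 1.0) ** 2 * (tau - 2.0) / 6.0]
    for mu in range(2, mu_max + 1):
        u.append(((2 * mu - 1) * tau * u[mu - 1] - (mu - 3) * u[mu - 2]) / (mu + 2))
    return u

def legendre(mu_max, tau):
    """Legendre polynomials P_0..P_mu_max by Bonnet's recurrence."""
    p = [1.0, tau]
    for mu in range(2, mu_max + 1):
        p.append(((2 * mu - 1) * tau * p[mu - 1] - (mu - 1) * p[mu - 2]) / mu)
    return p

# Check s_mu = (P_{mu+1} - P_{mu-1}) / (2 mu + 1) at a sample point.
tau = 0.3
p = legendre(9, tau)
s = s_poly(8, tau)
for mu in range(1, 9):
    assert abs(s[mu] - (p[mu + 1] - p[mu - 1]) / (2 * mu + 1)) < 1e-14

# Endpoint values used in eq. (10): s_0(1) = 2, u_0(1) = 2, u_1(1) = -2/3,
# and u_mu(1) = 0 for mu >= 2.
u1 = u_poly(8, 1.0)
assert abs(u1[0] - 2.0) < 1e-14 and abs(u1[1] + 2.0 / 3.0) < 1e-14
assert all(abs(v) < 1e-13 for v in u1[2:])
```

The vanishing of u_μ(1) for μ ≥ 2 is exactly why only B_0 and B_1 survive in eq. (10).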
Specifically, at the end of a solved element i, the error between the exact f(x_{i+1}) and the calculated derivative

d/dx y(x) |_{x = x_{i+1}} = (1/q_i) d/dτ ȳ(τ) |_{τ = +1} = 2 B_0 / q_i + f(x_i)   (11)

can be obtained. Notice that f(x_{i+1}) will in any case be needed at the beginning point of the next element. If the resulting error is not satisfactory, a bisection step is taken to reduce the size of the element by moving the upper limit x_{i+1} closer to x_i and re-solving. The main purpose of the adaptive step size choice is then to estimate a priori an optimum step size, thereby reducing the number of bisections and/or the number of function evaluations necessary. Since this error is that of the integrand f, and not of the integral y, it is not necessary to demand that it be as small as the machine precision. The reason is that higher derivatives of y computed from eq. (5) successively deteriorate in accuracy as the order increases. In a 16 digit calculation, a relative error in f(x_{i+1}) a few orders of magnitude above machine precision often suffices, depending on how smooth the integrand is, for calculating the solution function y correctly to within O(10^{−16}). It is also not effective to adapt by changing the order of the rule itself, as in Gauss–Kronrod type schemes [6], since the half or double of a given order is often either too small or excessive. Rather, a more effective way is to estimate an adaptive step size, fix an optimum order of expansion, and reduce the size of the element whenever necessary, which is what is done here.

Implementation of the algorithm starts by fixing the size of the first step q_1 and the size of the linear system M. Also, define f(a) = f(x_1). The rest of the procedure is sketched below.

1. Construct the right-hand side of eq. (9) by evaluating the function f̄(τ_ν) at the GL nodes.
2. Calculate the solution coefficients B by back substitution.
3. Calculate the error between the two sides of eq. (1) at the end point, using eq. (11) for the left-hand side and evaluating f(x_{i+1}) directly for the right-hand side. Retain the value of f(x_{i+1}), which will be needed at the beginning of the next element.
4. If the error is not satisfactory, reduce q_i → q_i/2 and go back to step 1.
5. Otherwise, calculate y as per eq. (10) and its higher derivatives at τ = 1 using eq. (5), and estimate the size of the next step q_{i+1} using the method presented in [2].
6. For finite domain systems with an upper limit b, make sure the resulting step size does not stride beyond b, i.e., take q_{i+1} ← min(q_{i+1}, (b − x_{i+1})/2).
7. Repeat from step 1 until the solution y converges.

Finally, we may sometimes prefer not to evaluate the function f(a) at the lower limit of the integration. A good example is an integrand containing singular terms at the origin. In those instances, only for the very first element, the main expansion given in eq. (5) can be altered to be in terms of the s polynomials as follows:

ȳ(τ) = Σ_{μ=0}^{M−1} s_μ(τ) B_μ + ȳ(−1).   (12)

The rest of the propagation can then resume normally, and the other modifications that follow can be worked out straightforwardly.

3. Numerical Examples
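The procedure above can be sketched as a self-contained Python program (a simplified illustration, not the authors' C++ implementation): it builds the collocation matrix s_μ(τ_ν) of eq. (9), solves for B, applies the end-point formulas (10) and (11) with the error test of eq. (13) below, and bisects the element when the test fails. The cubic-equation step-size estimator of [2] is replaced here by a naive doubling heuristic, so only steps 1, 2, 3, 4, 6, and 7 are represented faithfully.

```python
import math

def gauss_legendre_nodes(M):
    """Roots of the Legendre polynomial P_M, by Newton iteration."""
    nodes = []
    for k in range(1, M + 1):
        t = math.cos(math.pi * (k - 0.25) / (M + 0.5))   # classic initial guess
        for _ in range(100):
            p0, p1 = 1.0, t
            for n in range(2, M + 1):                    # Bonnet's recurrence
                p0, p1 = p1, ((2 * n - 1) * t * p1 - (n - 1) * p0) / n
            dp = M * (t * p1 - p0) / (t * t - 1.0)       # derivative P_M'(t)
            step = p1 / dp
            t -= step
            if abs(step) < 1e-15:
                break
        nodes.append(t)
    return nodes

def s_values(tau, M):
    """s_0..s_{M-1} at tau, via the three-term recurrence quoted in the text."""
    s = [tau + 1.0, 0.5 * (tau * tau - 1.0)]
    for mu in range(2, M):
        s.append(((2 * mu - 1) * tau * s[mu - 1] - (mu - 2) * s[mu - 2]) / (mu + 1))
    return s[:M]

def solve(A, rhs):
    """Gaussian elimination with partial pivoting (the text LU-decomposes once)."""
    n = len(rhs)
    aug = [row[:] + [r] for row, r in zip(A, rhs)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        for r in range(c + 1, n):
            fac = aug[r][c] / aug[c][c]
            for k in range(c, n + 1):
                aug[r][k] -= fac * aug[c][k]
    B = [0.0] * n
    for r in range(n - 1, -1, -1):
        B[r] = (aug[r][n] - sum(aug[r][k] * B[k] for k in range(r + 1, n))) / aug[r][r]
    return B

def propagate(f, a, b, y_a=0.0, q0=0.25, M=13, d_rel=1e-13, d_abs=1e-15):
    """Propagate y' = f(x) from y(a) = y_a to x = b, following steps 1-7."""
    tau = gauss_legendre_nodes(M)
    A = [s_values(t, M) for t in tau]            # collocation matrix s_mu(tau_nu)
    x, y, q, fx = a, y_a, q0, f(a)
    while b - x > 1e-14:
        q = min(q, 0.5 * (b - x))                # never stride beyond b (step 6)
        while True:
            rhs = [q * (f(x + q * (t + 1.0)) - fx) for t in tau]   # step 1
            B = solve(A, rhs)                                      # step 2
            f_next = f(x + 2.0 * q)
            # end-point derivative 2 B_0 / q + f(x_i), tested against f(x_{i+1})
            if abs(2.0 * B[0] / q + fx - f_next) <= abs(f_next) * d_rel + d_abs:
                break
            q *= 0.5                             # bisect and re-solve (step 4)
        y += 2.0 * B[0] - (2.0 / 3.0) * B[1] + 2.0 * q * fx        # eq. (10)
        x, fx = x + 2.0 * q, f_next
        q *= 2.0   # naive growth heuristic in place of the cubic estimator of [2]
    return y

# Example: integral of cos from 0 to 2 equals sin(2).
assert abs(propagate(math.cos, 0.0, 2.0) - math.sin(2.0)) < 1e-10
```

Note how continuity of both y and f across elements is automatic: fx and the accumulated y carry f̄(−1) and ȳ(−1) into the next element.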
We will now consider illustrative examples that demonstrate the typical behavior of the numerical algorithm discussed above. All of the examples chosen are difficult to calculate with an accuracy close to working precision without breaking the range of integration into smaller intervals. Whenever applicable, comparisons will be made with DQAG [7], which is one of the integration subroutines compiled in QUADPACK. DQAG is an adaptive method that keeps bisecting the element with the highest error estimate until the value of the overall integration is achieved to within the desired error.

In all of the examples that follow, the error at the end of the elements has been determined by

| 2 B_0 / q_i + f(x_i) − f(x_{i+1}) | ≤ | f(x_{i+1}) | δ_rel + δ_abs,   (13)

where δ_abs and δ_rel denote the absolute and relative error tolerances, respectively. Fixed small values of δ_rel and δ_abs will be used unless otherwise specified. The size of the first step is fixed to be 0.5, that is, q_1 = 0.25, and the number of basis functions used is M = 13. DQAG, on the other hand, has been run with equal absolute and relative error tolerances, δ_rel = δ_abs, and the lowest order (key = 1) has been used in order to keep the number of function evaluations at a minimum, so as to provide a fair comparison. Our program is written in modern C++ and run on a late 2013, 2.8 GHz Intel Core i7 MacBook Pro laptop computer with the Apple LLVM 8.1 compiler.

We consider the 15 closed integrals studied in [8]. These integrals exhibit different properties and were chosen by Bailey et al. to test three other methods of NI. Herein we attempt all of them and report the output. Table 1 shows the results for the numerical values of those integrals calculated using the present method as well as DQAG. δ_rel with respect to the exact values and the number of function evaluations N are also indicated, along with δ_rel and N of the output from DQAG for comparison. The two methods are essentially, qualitatively similar in performance. Notice that the labels of the integrals are taken from the article [8].

Table 1: Results for 15 closed integrals taken from [8]. The values of the integral y(b) from the present method are shown in the second column. δ_rel and N of both the present method and DQAG are also shown.

label   y(b)                      δ_rel       δ_rel (DQAG)  N     N (DQAG)
1        0.250 000 000 000 000    1.110[-16]  0.000         29    15
2        0.210 657 251 225 807    0.000       1.318[-16]    29    45
3        1.905 238 690 482 68     0.000       0.000         191   15
4        0.514 041 895 890 071    0.000       0.000         29    45
5       -0.444 444 444 444 445    8.743[-16]  3.747[-16]    871   885
6        0.785 398 163 397 448    1.414[-16]  2.827[-16]    974   795
7        1.198 140 227 142 81     6.337[-9]   6.159[-9]     2129  1725
8        1.999 999 999 999 98     8.438[-15]  4.441[-16]    922   1545
9       -1.088 793 045 151 79     9.993[-15]  2.855[-15]    1243  1335
10       2.221 441 454 672 65     6.485[-9]   2.907[-9]     2032  1725
11       1.570 796 326 794 90     0.000       0.000         29    105
12       1.772 453 840 168 93     6.057[-9]   5.929[-9]     2439  1455
13       1.253 314 137 315 62     9.514[-14]  0.000         96    255
14       0.500 000 000 000 001    1.110[-15]  0.000         231   375
15       1.570 796 326 794 90     1.414[-16]  2.218[-11]    1523  210

Figure 1: (Color online) Plots of the functions given in eqs. (14) and (15). f varies very rapidly towards the upper integration limit, causing numerical difficulty.

Let us take a closer look at one of the integrals (index 6) in order to demonstrate the behavior of our algorithm. Example 6 has integration limits [0, 1] and the integrand f given below:

f(x) = √(1 − x²),   0 ≤ x ≤ 1,   (14)

y(x) = (1/2) [ x √(1 − x²) + arcsin(x) ],   (15)

with y(1) = π/4. Both functions f and y are plotted in Fig. 1. The calculated value of the integral is y(1) = 0.785 398 163 397 448, with a δ_rel of ≈ 1.4 × 10^{−16} compared to the exact value. It took N = 974 function evaluations and i_max = 35 steps to propagate the integral y from 0 to 1, where the size of the first step was set to 0.5.
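Equations (14) and (15) can be verified directly; this short snippet (an illustrative check, not part of the method) confirms that the stated y is an antiderivative of f and that y(1) = π/4.

```python
import math

f = lambda x: math.sqrt(1.0 - x * x)                               # eq. (14)
y = lambda x: 0.5 * (x * math.sqrt(1.0 - x * x) + math.asin(x))    # eq. (15)

# End-point values: y(0) = 0 and y(1) = pi/4.
assert y(0.0) == 0.0 and abs(y(1.0) - math.pi / 4.0) < 1e-15
# Spot-check y'(x) = f(x) by a central difference away from the endpoint.
h, x0 = 1e-6, 0.3
assert abs((y(x0 + h) - y(x0 - h)) / (2.0 * h) - f(x0)) < 1e-9
```

The derivative of f itself blows up as x → 1, which is the source of the numerical difficulty discussed next.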
Figure 2: (Color online) Step size of the finite elements (blue dots) and δ_rel of the integral at the end of the elements (red stars) are shown. The algorithm spends more time and picks up more errors towards the upper integration limit. Vertical axis values are the log of the actual values.

As shown in Fig. 2, the step sizes chosen by our algorithm kept decreasing towards the upper limit, which is consistent with the singularity of the derivative of the integrand f at the upper integration limit. Fig. 2 also shows the relative errors of the integral y at the end of all the elements, in comparison with the exact values from eq. (15). Clearly, the maximum error occurs at the last step near 1, which is what motivated our choice of this example. Rapidly changing functions, such as those with an integrable singularity, are generally troublesome for the algorithm, because the step size may not be small enough and/or the order of the polynomials high enough to accommodate portions of the integrand with (nearly) vertical shape. In comparison, DQAG takes 795 function evaluations and calculates the integral with a δ_rel of ≈ 2.8 × 10^{−16}.

3.2. Nonlinear Problem

Bender et al. have studied the following interesting nonlinear eigenvalue problem, for which the numerical solution can be very challenging [9]:

d/dx y(x) = cos[ π x y(x) ],   x ≥ 0.   (16)

It can be solved by the iteration

d/dx y_{σ+1}(x) = f_σ(x),   σ = 1, 2, ...,   (17)

where f_σ(x) = cos[ π x y_σ(x) ] and σ labels the levels of the iteration. Notice that at any stage of the iteration, only the values of f at the GL nodes are required. The first iterate of the array f is seeded from the solution vector B of the final result in the preceding element. This is not only convenient but also an excellent approximation, because the adaptive step choice implemented here is based on the assumption that the solution function between two consecutive elements remains constant up to a fourth order Taylor series expansion.
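The iteration of eq. (17) can be illustrated with a plain Picard iteration on a single interval, using the integral form y_{σ+1}(x) = y_a + ∫_0^x cos[π t y_σ(t)] dt on a uniform trapezoid grid. This is a deliberately simplified stand-in for the per-element collocation iteration of the text; the grid size and iteration cap are illustrative choices.

```python
import math

def picard(y_a, x_max=1.0, n=2000, iters=60):
    """Fixed-point (Picard) iteration for y' = cos(pi*x*y), y(0) = y_a,
    on a uniform trapezoid grid (illustrative parameters)."""
    h = x_max / n
    xs = [i * h for i in range(n + 1)]
    y = [float(y_a)] * (n + 1)                   # constant seed
    for _ in range(iters):
        g = [math.cos(math.pi * x * yv) for x, yv in zip(xs, y)]
        new, acc = [float(y_a)], float(y_a)
        for i in range(1, n + 1):
            acc += 0.5 * h * (g[i - 1] + g[i])   # trapezoid accumulation
            new.append(acc)
        done = max(abs(u - v) for u, v in zip(new, y)) < 1e-12
        y = new
        if done:
            break
    return xs, y

xs, y = picard(1.0)
h = xs[1] - xs[0]
# Residual of the ODE, via central differences away from the endpoints.
res = max(abs((y[i + 1] - y[i - 1]) / (2.0 * h) - math.cos(math.pi * xs[i] * y[i]))
          for i in range(1, len(xs) - 1))
assert res < 1e-5
```

Since the Lipschitz constant of cos(πxy) in y is bounded by πx on this interval, the Picard iterates contract and the loop terminates well before the iteration cap.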
The first element has been started by setting all the elements of the solution vector B to unity. Note that, for this problem only, the first two steps of the procedure given in Section 2 must be repeated until the iteration in eq. (17) converges.

The results of our calculation of y(x), x ∈ [0, 24], with initial conditions y_a = n, n = 1, 2, ..., 10, are shown in Fig. 3. Similar plots have been reported in ref. [9], which are the result of point-wise convergent calculations. Our solution function, on the other hand, has been propagated from the origin outward. The very large number of steps the solution required for such a modest distance of x = 24 is quite remarkable. Table 2 summarizes the output of our program. It took millions of steps, with an average step size as low as ≈ 5.2 × 10^{−6} and an average number of function evaluations per element of up to N_ave ≈ 254. N_ave includes all the iterations, which, along with the brevity of the elapsed times shown, indicates the efficiency of our implementation. We have also included the final numerical values of y(24) for reference. For this example, the required error tolerance has been set lower, for an obvious reason. In order to check the validity of our results, we have re-run the program with a still lower error tolerance; with the lowered tolerance, the indicated values of y(24) vary only by a tiny amount. The computational times in the last column of Table 2 grow with the number of steps.

Figure 3: (Color online) Calculated solution functions to eq. (16) for 0 ≤ x ≤ 24 with initial conditions y_a = 1, 2, ..., 10 are shown. The functions are oscillatory near the origin before they evolve into an asymptotic discrete bundle.

Table 2: Results for y(24) to eq. (16) for the respective initial conditions y(0). The rest of the columns are: total number of steps, average step size, average number of function evaluations per element, and time elapsed.

y(0)   y(24)                      i_max      q_ave (×10^{-5})  N_ave    t [sec]
1      0.020 844 865 419 015 3    1 633 376  1.469             254.338  63
2      0.104 224 327 270 128      1 701 378  1.411             253.336  66
3      0.270 983 253 633 302      1 908 989  1.257             251.829  73
4      0.437 742 187 280 245      2 068 413  1.160             250.609  79
5      0.687 880 611 222 152      2 397 070  1.001             249.380  91
6      0.938 019 076 811 230      2 636 633  0.9103            248.157  100
7      1.271 537 122 002 93       3 041 709  0.7890            247.071  116
8      1.688 434 875 810 57       3 586 933  0.6691            245.951  135
9      2.105 332 915 403 23       4 021 621  0.5968            244.952  151
10     2.605 611 041 676 66       4 626 563  0.5187            244.110  174

3.3. Double Integral

In this example, we consider a double integral for which one of the limits of the inner integral is identical to the variable of the outer integral. These types of integrals are common in studies of many-particle dynamical systems involving Green's functions or double range addition theorems such as the Laplace expansion. In particular, we will look at the following integral, which is the most prominent radial integral encountered in calculations that involve exponential type orbitals or Geminals in a spherical coordinate system. It stems from the addition theorem for r^n e^{−αr} given in [10], and the resulting integrals are still topics of interest in recent research [11, 12, 13]. Let us define the integral as

I^{μ_1 μ_2}_{λ_1 λ_2}(α_1, β_1, α_2, β_2) = ∫_0^∞ dy e^{−α_1 y} y^{μ_1} î_{λ_1}(β_1 y) ∫_y^∞ dx e^{−α_2 x} x^{μ_2} k̂_{λ_2}(β_2 x),   (18)

where î and k̂ are spherical modified Bessel functions of the first and second kind, respectively [4]. The screening parameters α_1, β_1, α_2, β_2 are positive real numbers, while all of the indices μ_1, λ_1, μ_2, λ_2 are integers. The correct composition of these parameters is such that both the inner and outer integrals remain finite, as is the case in physical applications.

In order to propagate from the origin, both lower integration limits need to be set to zero. This can be attained by switching the order of the two integrals using the following identity, which maintains the identical x–y region of integration [14]:

∫_0^∞ dy f(y) ∫_y^∞ dx g(x) ≡ ∫_0^∞ dx g(x) ∫_0^x dy f(y).   (19)

Hence, eq. (18) can be written as

I^{μ_1 μ_2}_{λ_1 λ_2}(α_1, β_1, α_2, β_2) = ∫_0^∞ dx e^{−α_2 x} x^{μ_2} k̂_{λ_2}(β_2 x) J^{μ_1}_{λ_1}(α_1, β_1; x),   (20)

where J now represents the inner integral given below, which needs to be propagated only once from the origin until the integral converges:

J^{μ_1}_{λ_1}(α_1, β_1; x) = ∫_0^x dy e^{−α_1 y} y^{μ_1} î_{λ_1}(β_1 y),   0 ≤ x ≤ x_{i_max+1}.   (21)

Here x_{i_max+1} signifies the end point of the last element, where the result of the above integral has converged to its value at infinity. Hence, beyond this point J is considered to be a constant function, i.e., J(x) = J(x_{i_max+1}) for x > x_{i_max+1}. Once the above integral is done and all the relevant parameters are stored, it can be evaluated at any desired point x > 0, which is a significant computational gain, since the double integral I given in eq. (18) has essentially been reduced to two simple integrals. This demonstrates one of the main advantages contained in the present algorithm.

Table 3 shows a sample calculation of the integral I in eq. (20) for several values of β_1 and β_2, with the remaining parameters fixed (μ_2 = 14, α_1 = 2β_1, and α_2 = 2β_2). The Bessel functions î and k̂ have been computed using the subroutines in the GNU Scientific Library (GSL). For the first element, eq. (12) has been used in order to avoid an evaluation at the origin. We have also calculated the exact values of the integrals using Mathematica [15], the first 18 digits of which are displayed in the last column. Comparison with the present calculated results reveals that the integral was done accurately. The table also shows the total number of function evaluations N for the inner and outer integrals.

Table 3: Exact and calculated values of the integral I for several β_1, β_2. Columns 3 and 4 show the number of function evaluations N taken in the integrals of eq. (21) and eq. (20), respectively. Numbers in square brackets signify powers of ten.
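The order-swapping identity, eq. (19), is easy to sanity-check numerically. The sketch below (illustrative only) uses the simple choices f(y) = e^{−y} and g(x) = e^{−2x} in place of the Bessel integrands of the text, for which both orderings reduce to the exact value 1/6.

```python
import math

f = lambda y: math.exp(-y)           # stand-in for the inner integrand of eq. (18)
g = lambda x: math.exp(-2.0 * x)     # stand-in for the outer integrand

def trapz(func, a, b, n=100000):
    """Plain trapezoid rule, accurate enough for this smoke test."""
    h = (b - a) / n
    return h * (0.5 * func(a) + 0.5 * func(b) + sum(func(a + i * h) for i in range(1, n)))

# Left-hand side of eq. (19): the inner x-integral of g from y to infinity
# is exp(-2y)/2 in closed form.
lhs = trapz(lambda y: f(y) * math.exp(-2.0 * y) / 2.0, 0.0, 40.0)
# Right-hand side: the inner y-integral J(x) = 1 - exp(-x), cf. eq. (21).
rhs = trapz(lambda x: g(x) * (1.0 - math.exp(-x)), 0.0, 40.0)

# Both orderings give the exact value 1/6.
assert abs(lhs - 1.0 / 6.0) < 1e-6 and abs(rhs - 1.0 / 6.0) < 1e-6
```

The right-hand-side form is the computationally favorable one: J can be propagated once from the origin and thereafter evaluated at any x, exactly as described for eq. (21).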
4. Conclusion
There are many physical applications that can be modeled as initial value problems, and the methods available to compute them are equally diverse. No single algorithm is known to address all of them at once, and hence researchers usually digress from their main area of interest so as to familiarize themselves with many of the computational options. The present algorithm is by no means capable of solving all initial value problems, but it comes pragmatically close, especially for most physical applications. This is possible because it produces solutions on finite elements whose size is chosen locally and adaptively, checks the validity of the solution, and communicates the solution function and its first derivative to the next element, maintaining continuity of both. More importantly, it can be applied to any ODE, because it is easy to implement and modify, especially with the proper use of object-oriented programming techniques. The basis functions u and s discussed above are based on Legendre polynomials. In the future, we will use other classical orthogonal polynomials, such as Chebyshev or Jacobi, to determine whether further advantages can be gained.
5. Acknowledgement
DHG and CAW were partially supported by the Department of Energy, National Nuclear Security Administration, under Award Number(s) DE-NA0002630. CAW was also supported in part by the Defense Threat Reduction Agency.
6. References

[1] P. Davis, P. Rabinowitz, Methods of Numerical Integration, Dover Books on Mathematics Series, Dover Publications, 2007. URL https://books.google.com/books?id=gGCKdqka0HAC

[2] D. H. Gebremedhin, C. A. Weatherford, Calculations for the one-dimensional soft Coulomb problem and the hard Coulomb limit, Phys. Rev. E 89 (2014) 053319. doi:10.1103/PhysRevE.89.053319. URL http://link.aps.org/doi/10.1103/PhysRevE.89.053319

[3] P. Kythe, P. Puri, Computational Methods for Linear Integral Equations, Birkhäuser Boston, 2002. URL https://books.google.com/books?id=CmX1aHur7jcC

[4] G. B. Arfken, H. J. Weber, F. E. Harris, Mathematical Methods for Physicists: A Comprehensive Guide, 6th Edition, Academic Press, 2005.

[5] W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery, Numerical Recipes in FORTRAN (2nd Ed.): The Art of Scientific Computing, Cambridge University Press, New York, NY, USA, 1992.

[6] S. Ehrich, Stieltjes polynomials and the error of Gauss-Kronrod quadrature formulas, in: Applications and Computation of Orthogonal Polynomials: Conference at the Mathematical Research Institute Oberwolfach, Germany, March 22-28, 1998, Birkhäuser Basel, Basel, 1999, pp. 57-77. doi:10.1007/978-3-0348-8685-7_4. URL http://dx.doi.org/10.1007/978-3-0348-8685-7_4

[7] R. Piessens, E. de Doncker-Kapenga, C. Überhuber, D. Kahaner, Quadpack, Springer Series in Computational Mathematics, Springer-Verlag Berlin Heidelberg, 1983.

[8] D. H. Bailey, K. Jeyabalan, X. S. Li, A comparison of three high-precision quadrature schemes, Experimental Mathematics 14 (3) (2005) 317-329. doi:10.1080/10586458.2005.10128931. URL http://dx.doi.org/10.1080/10586458.2005.10128931

[9] C. M. Bender, A. Fring, J. Komijani, Nonlinear eigenvalue problems, Journal of Physics A: Mathematical and Theoretical 47 (23) (2014) 235204. URL http://stacks.iop.org/1751-8121/47/i=23/a=235204

[10] D. Gebremedhin, C. Weatherford, Canonical two-range addition theorem for Slater-type orbitals, International Journal of Quantum Chemistry 113 (1) (2013) 71-75. doi:10.1002/qua.24319. URL http://dx.doi.org/10.1002/qua.24319

[11] L. G. Jiao, Y. K. Ho, Computation of two-electron screened Coulomb potential integrals in Hylleraas basis sets, Computer Physics Communications 188 (2015) 140-147. doi:10.1016/j.cpc.2014.11.019.

[12] J. F. Rico, R. López, G. Ramírez, I. Ema, Repulsion integrals involving Slater-type functions and Yukawa potential, Theoretical Chemistry Accounts 132 (1) (2012) 1-9. doi:10.1007/s00214-012-1304-x. URL http://dx.doi.org/10.1007/s00214-012-1304-x

[13] J. J. Fernández, R. López, I. Ema, G. Ramírez, J. Fernández Rico, Auxiliary functions for molecular integrals with Slater-type orbitals. I. Translation methods, International Journal of Quantum Chemistry 106 (9) (2006) 1986-1997. doi:10.1002/qua.21002. URL http://dx.doi.org/10.1002/qua.21002