On a Method for Solving Infinite Series

Henrik Stenlund∗

Visilab Signal Technologies Oy, Finland

6th May, 2014
Abstract
This paper is about a method for solving infinite series in closed form by using inverse and forward Laplace transforms. Instead of the series, a resulting integral is to be solved. The method is extended by parametrizing the series; a further Laplace transform with respect to the parameter offers more options for solving a series.
Keywords: summation of series, infinite series, Laplace transform, inverse Laplace transform
MSC: 44A10, 11M41, 16W60, 20F14, 40A25, 65B10
There is a crack in the mountain.
∗ The author is obliged to Visilab Signal Technologies for supporting this work. (Visilab Report v2: two references added, with comments.)

Contents

A.1 Alternating Series
A.2 Shifted Series
A.3 Shifted Alternating Series
A.4 Power Factor Series
A.5 Power Factor Alternating Series
A.6 Exponential Factor Series
A.7 Exponential Factor Alternating Series
A.8 Differentiated Series
A.9 Differentiated Alternating Series
A.10 Integrated Series
A.11 Integrated Alternating Series
A.12 Added Constant Parameter Series
A.13 Alternating Added Constant Parameter Series
A.14 Hyperbolic Inverted Sine Series
A.15 Hyperbolic Inverted Cosine Series
A.16 Hyperbolic Inverted Sine Series With a Complex Argument
A.17 Hyperbolic Inverted Cosine Series With a Complex Argument
A.18 Hyperbolic Sine Series
A.19 Hyperbolic Cosine Series
A.20 Square Root Series
A.21 Alternating Square Root Series
A.22 Exponential Series
A.23 Negative Exponential Series
Introduction
An infinite series of the form below, assuming it to converge,

$$a = \sum_{k=1}^{\infty} g(k) \qquad (1)$$

needs to be solved. If there are any parameters involved, the result is either in closed form or in terms of some special function, which may itself be defined as an infinite series. Otherwise the result is purely numerical but may consist of a function of constants, like $\pi$. Another infinite series or an integral is often acceptable as a solution, being more suitable for further analysis. There exists just a handful of general ways for solving series. Of those, the most important are the Euler method [1] and the Euler-Maclaurin and Poisson formulas [6], [9], [11], in addition to integral transforms. Other, very specific methods exist [2] but they have less general interest; such are the Ramanujan summation [13] and the Voronoi summation [9]. Hardy's monograph [10] on divergent series is most useful, showing ways of handling various types of series.

Wheelon [5] has developed a method for solving series in closed form. He starts from a known Laplace transform pair and creates a sum to obtain an integral. The method resembles the simple one described in this paper, though the approach is different. MacFarlane [3] has an analogous approach using the Mellin transform. McFadden [4] and Glasser [8] have shared interesting formulas for solving series; the latter are similar to the simple method here.

Euler [1] has given a method which can sometimes be applied to solve simple convergent series of the type

$$e = \sum_{k=1}^{\infty} a_k \qquad (2)$$

The series in equation (2) can be extended to a power series as follows:

$$e(x) = \sum_{k=1}^{\infty} a_k x^{k} \qquad (3)$$

This power series can possibly be identified as some known expansion; then the limit $x \to 1$ is taken.

Using any of the known integral transforms, like the Laplace and Fourier transforms, has been the simplest way of finding solutions to infinite series with a parameter. If there is no suitable parameter, one can be added and, at the end, set to 1 or another value to make it disappear.
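Euler's device in equations (2)-(3) can be sketched numerically. The example series and function names below are illustrative choices of mine, not from the paper: the alternating harmonic series is extended to a power series, recognized as the expansion of $\ln(1+x)$, and the limit $x \to 1$ is approached.

```python
import math

# Euler's method for e = sum_{k>=1} (-1)^(k+1)/k:
# extend to e(x) = sum_{k>=1} (-1)^(k+1) x^k / k, which is the
# known expansion of ln(1 + x); then let x -> 1.

def e_partial(x, n_terms):
    """Partial sum of the extended power series e(x)."""
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, n_terms + 1))

# Approach the limit x -> 1 along x = 1 - 1/m.
approximations = [e_partial(1 - 1 / m, 20000) for m in (10, 100, 1000)]

# The closed form identified from the known expansion:
closed_form = math.log(2)

print(approximations, closed_form)
```

The successive approximations move toward $\ln 2$, which is the value Euler's method assigns to the series.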
The parameter must be located in the summand in such a functional position that the sum is still uniformly convergent and can be Laplace transformed in either direction. We start with

$$s(\alpha) = \sum_{k=1}^{\infty} g(k, \alpha) \qquad (4)$$

and subject it to the inverse Laplace transform:

$$\mathcal{L}^{-1}_{\alpha}\left[\sum_{k=1}^{\infty} g(k, \alpha)\right](x) = \sum_{k=1}^{\infty} G(k, x) = \mathcal{L}^{-1}_{\alpha}[s(\alpha)](x) \qquad (5)$$

To proceed, the elements of the sum must be transformable for this to be of any use. The transformed sum is attempted to be solved by any available methods. In a successful case it delivers the function

$$\mathcal{L}^{-1}_{\alpha}[s(\alpha)](x) = S(x) \qquad (6)$$

which can next be Laplace transformed to get the solution:

$$\mathcal{L}_{x}[S(x)](\alpha) = s(\alpha) \qquad (7)$$

The validity of this approach rests on the validity of each operation in the process. Success of the method depends on the resulting parameter function and its transforms making the sum in equation (5) solvable. The Fourier transform can be used in a similar way, as can the Fourier sine and cosine transforms. The methods of integral transforms are somewhat ineffective and easily produce divergent series or give some formal solution which may turn out to be false.

The Euler-Maclaurin formula [6], [12], [9] is the most important method for solving infinite series. It is applicable to sequences and is generally used for finite-difference problems. It gives a relationship between the sum and integrals as follows, assuming $a, b$ to be integers. Let the first $2n$ derivatives of $f(x)$ be continuous on an interval $[a, b]$. The interval is divided into equal parts $h = (b - a)/m$. Then, for some $\theta$, $0 \le \theta \le 1$,

$$\sum_{k=0}^{m} f(a + kh) = \frac{1}{h}\int_{a}^{b} f(x)\,dx + \frac{1}{2}\left(f(a) + f(b)\right) + \sum_{k=1}^{n-1} \frac{B_{2k}\, h^{2k-1}\left[f^{(2k-1)}(b) - f^{(2k-1)}(a)\right]}{(2k)!} + \frac{h^{2n}}{(2n)!}\, B_{2n} \sum_{k=0}^{m-1} f^{(2n)}(a + kh + \theta h) \qquad (8)$$

The $B_i$ are the Bernoulli numbers. This formula is a bit tedious to use, and the amount of work depends on the derivatives.
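A minimal numeric sketch of equation (8): with $h = 1$ and only the first Bernoulli correction term $B_2 = 1/6$ retained, the formula already reproduces a finite sum to high accuracy. The function $f(x) = 1/x^2$ and the interval are arbitrary choices of mine for illustration.

```python
# Euler-Maclaurin estimate of sum_{k=a}^{b} f(k) with h = 1,
# keeping the integral, the endpoint term and the B_2 = 1/6 correction.

def f(x):
    return 1.0 / x**2

def f_prime(x):
    return -2.0 / x**3

a, b = 10, 100

integral = (1.0 / a) - (1.0 / b)          # exact integral of 1/x^2 over [a, b]
endpoint = 0.5 * (f(a) + f(b))
b2_term = (1.0 / 6.0) / 2.0 * (f_prime(b) - f_prime(a))  # B_2/2! * [f'(b) - f'(a)]

euler_maclaurin = integral + endpoint + b2_term
direct = sum(f(k) for k in range(a, b + 1))

print(euler_maclaurin, direct)
```

With this smooth $f$ the first omitted term is already of order $10^{-7}$, so the two numbers agree to about six decimals.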
A closed-form solution is rare; usually it gives only an approximation.

Poisson Formula

The Poisson formula usually does not give an immediate solution but is a transform allowing other procedures to be applied:

$$\sum_{k=-\infty}^{\infty} F(k) = \sum_{m=-\infty}^{\infty} \int_{-\infty}^{\infty} e^{2\pi i m x}\, F(x)\,dx \qquad (9)$$

In sum calculus, partial summation is a useful tool [7]:

$$\sum y(k)\,\Delta z(k) = y(k)\,z(k) - \sum z(k+1)\,\Delta y(k) \qquad (10)$$

where $\Delta$ is the difference operator

$$\Delta y(k) = y(k+1) - y(k) \qquad (11)$$

This is analogous to integration by parts and may produce a solution.

If we have a power series of the form (see [7])

$$\sum_{k=0}^{\infty} a_k x^{k} = \sum_{k=0}^{\infty} u_k v_k x^{k} \qquad (12)$$

where $u_k$ can be written as a polynomial in $k$ and $v_k$ is defined by

$$V(x) = \sum_{k=0}^{\infty} v_k x^{k} \qquad (13)$$

then we have

$$\sum_{k=0}^{\infty} a_k x^{k} = V(x)\,u_0 + \frac{x\, V'(x)\,\Delta u_0}{1!} + \frac{x^{2}\, V''(x)\,\Delta^{2} u_0}{2!} + \dots \qquad (14)$$

Because $u_k$ is a polynomial, the series on the right terminates.

A simple way of summation is the following [7]. We have a difference

$$\Delta u(k) = u(k+1) - u(k) = f(k) \qquad (15)$$

We can subject it to the summation operator $\Delta^{-1}$:

$$\Delta^{-1} \Delta u_k = \Delta^{-1} f(k) \qquad (16)$$

which is equal to

$$\sum_{k=1}^{N} f(k) = u(N+1) - u(1) \qquad (17)$$

This is the sum of the differences of the $u(k)$.

Composition

In Section 2 we derive the method and its extension and explain the process of how to apply it. In Section 3 we illustrate the method with some example cases. Appendix A extends the equivalences further to better suit various cases. We have made the presentation very explicit, avoiding mathematical rigor and detailed proofs. We use throughout lower-case symbols ($h(s)$) for Laplace-transformed functions and upper case ($H(t)$) for inverse-transformed functions.

The Method

We have an infinite series

$$\sum_{k=1}^{\infty} g(k) \qquad (18)$$

to be solved, with the assumption that it converges. We assume $g(k)$ to have an inverse Laplace transform $G(t)$:

$$g(k) = \int_{0}^{\infty} e^{-kt}\, G(t)\,dt \qquad (19)$$

The discrete argument $k \in \mathbb{N}^{+}$ is a subset of the complex continuum. We can temporarily extend the range of the variable to $k \in \mathbb{C}$, formally used in the Laplace transform. Thus we get

$$\sum_{k=1}^{\infty} g(k) = \sum_{k=1}^{\infty} \int_{0}^{\infty} e^{-kt}\, G(t)\,dt \qquad (20)$$

Since the sum is uniformly convergent, we can interchange the order of summation and integration:

$$\sum_{k=1}^{\infty} g(k) = \int_{0}^{\infty} \sum_{k=1}^{\infty} e^{-kt}\, G(t)\,dt = \int_{0}^{\infty} dt\, G(t) \sum_{k=1}^{\infty} e^{-kt} \qquad (21)$$

We know the sum to be equal to

$$\sum_{k=1}^{\infty} e^{-kt} = \frac{1}{e^{t} - 1} \qquad (22)$$

and therefore

$$\sum_{k=1}^{\infty} g(k) = \int_{0}^{\infty} \frac{dt\, G(t)}{e^{t} - 1} \qquad (23)$$

The requirements are that

1. the series (18) converges,
2. $g(k)$ has an inverse Laplace transform $G(t)$,
3. the resulting integral converges.

Failure to comply with these requirements will lead to either a diverging integral or a false solution. The series has been transformed to a rather simple integral.
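Equation (23) lends itself to a direct numeric check. The choice $g(k) = 1/(k(k+1))$ below is mine, for illustration: its inverse Laplace transform is $G(t) = 1 - e^{-t}$, the integrand reduces to $e^{-t}$, and both sides equal 1.

```python
import math

# Check eq. (23): sum_{k>=1} g(k) = int_0^inf G(t)/(e^t - 1) dt
# for g(k) = 1/(k(k+1)), whose inverse Laplace transform is G(t) = 1 - e^{-t}.
# Analytically the integrand reduces to (1 - e^{-t})/(e^t - 1) = e^{-t}.

def integrand(t):
    return (1.0 - math.exp(-t)) / math.expm1(t)

def simpson(f, a, b, n):               # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

integral = simpson(integrand, 1e-9, 40.0, 20000)
series = sum(1.0 / (k * (k + 1)) for k in range(1, 200001))  # telescopes to 1 - 1/(N+1)

print(integral, series)
```

The integrand decays like $e^{-t}$, so truncating the integration at $t = 40$ costs only about $4\cdot 10^{-18}$.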
We can parametrize equation (23) with $\alpha \in \mathbb{C}$, $\Re(\alpha) > 0$:

$$\sum_{k=1}^{\infty} g(\alpha k) = \int_{0}^{\infty} \frac{dt\, G(t)}{e^{\alpha t} - 1} = f(\alpha) \qquad (24)$$

The proof of this goes in the same way as for equation (23). Uniform convergence of the series to $f(\alpha)$ is required. Since $f(\alpha)$ is a function of the parameter, we may assume it to be the result of a Laplace transform. This is not necessarily so, but at this point we can take it as an assumption. Subjecting this equation to an inverse Laplace transform gives

$$F(x) = \mathcal{L}^{-1}_{\alpha}[f(\alpha)](x) \qquad (25)$$

The left side of equation (24) is subjected to the same transform, and we have

$$F(x) = \sum_{k=1}^{\infty} \frac{G(x/k)}{k} \qquad (26)$$

The proof of this follows directly from the basic properties of the Laplace transform. These equations are equivalent, being either Laplace transforms or inverse transforms of each other. The new series in equation (26) must be uniformly convergent to $F(x)$, $x \in \mathbb{R}$, $x \ge 0$. The schematic below shows the structure:

$$\int_{0}^{\infty} \frac{dt\, G(t)}{e^{\alpha t} - 1} = \sum_{k=1}^{\infty} g(\alpha k) = f(\alpha) \qquad (27)$$

$$\mathcal{L}_{x}[\,\cdot\,](\alpha)\;\Uparrow\qquad\Downarrow\;\mathcal{L}^{-1}_{\alpha}[\,\cdot\,](x)$$

$$\sum_{k=1}^{\infty} \frac{G(x/k)}{k} = F(x) \qquad (28)$$

Any of these expressions can be converted to any of the others, giving more possibilities for solving series than equation (23) alone does. The functions $F(x)$ and $f(\alpha)$ are solutions to the equations above. They cannot be understood in the sense that any function could be expanded as a series using these equations. One should carefully note the distinction between the two ways of applying the Laplace transform: to the index function itself,

$$g(k) = \int_{0}^{\infty} e^{-kt}\, G(t)\,dt = \mathcal{L}_{t}[G(t)](k) \qquad (29)$$

and to the parameter-dependent functions,

$$F(x) = \mathcal{L}^{-1}_{\alpha}[f(\alpha)](x) \qquad (30)$$

Using the functions $f(\alpha)$ and $F(x)$ poses the additional requirements that the pertinent series converge uniformly and that the functions are connected by the Laplace transform. Appendix A shows several modifications of equations (27) and (28); each of them is proven in the same way.

We may have a series of type A in parametrized form,

$$\sum_{k=1}^{\infty} g(\alpha k) \qquad (31)$$

or we might have a series similar to type B,

$$\sum_{k=1}^{\infty} \frac{G(x/k)}{k} \qquad (32)$$

It is possible to proceed in different ways to solve a series:

• Type A, equation (31), integral
1. start from equation (27);
2. determine the inverse Laplace transform $G(t)$;
3. solve the integral in equation (27) to get $f(\alpha)$;
4. set the parameter $\alpha$ to 1. If only equation (27) is applied, the parameter $\alpha$ is not necessary. The solution is $f(1)$.

• Type A, equation (31), via a type B series

1. start from equation (27);
2. calculate the inverse Laplace transform $G(t)$;
3. solve the new series (28), getting $F(x)$;
4. Laplace transform $F(x)$ to get $f(\alpha)$. The solution is $f(1)$.

• Type B series, equation (32), via a type A series

1. start from equation (28);
2. generate the Laplace transform $g(k)$ from $G(t)$;
3. parametrize it to $g(\alpha k)$;
4. solve the sum in equation (27);
5. inverse transform the resulting $f(\alpha)$ to get $F(x)$, the solution;
6. set $x$ to some final value, depending on the fitting in equation (28).

• Type B series, equation (32), via the integral and an inverse transform

1. start from equation (28);
2. use $G(t)$ to solve the integral in equation (27) to get $f(\alpha)$;
3. inverse transform $f(\alpha)$ to get $F(x)$;
4. set $x$ to some final value, depending on the fitting in equation (28).

Success of this method depends on the existence and eventual finding of the necessary Laplace transforms and inverse transforms, and on solving the new series or the resulting integral. One is thus able to work in either space. This structure has an analogy with a solid-state lattice having spatial vectors and its reciprocal lattice with wave vectors.

Examples

In the following we display a few example applications of the method presented. We have systematically kept the $\alpha$ parameter in the equations, though it might not be used. It would be necessary only if equation (28) were required or if the function $f(\alpha)$ itself were of interest. We avoid proofs for simplicity.

Riemann Zeta Function

A simple example for applying equation (27) is the series of inverse power functions, with $\Re(z) > 1$:

$$\sum_{k=1}^{\infty} g(k) = \sum_{k=1}^{\infty} \frac{1}{k^{z}} \qquad (33)$$

and this is parametrized to

$$\sum_{k=1}^{\infty} g(\alpha k) = \sum_{k=1}^{\infty} \frac{1}{(\alpha k)^{z}} = f(\alpha) \qquad (34)$$

The inverse Laplace transform of $g(k)$ is

$$G(t) = \frac{t^{z-1}}{\Gamma(z)} \qquad (35)$$

Therefore we obtain by equation (27)

$$\sum_{k=1}^{\infty} g(\alpha k) = \int_{0}^{\infty} \frac{dt\, G(t)}{e^{\alpha t} - 1} = \frac{1}{\Gamma(z)} \int_{0}^{\infty} \frac{dt\, t^{z-1}}{e^{\alpha t} - 1} \qquad (36)$$

The integral is the standard representation of $\zeta(z)$, and thus

$$\sum_{k=1}^{\infty} g(\alpha k) = \frac{\zeta(z)}{\alpha^{z}} \qquad (37)$$

As $\alpha \to 1$,

$$\sum_{k=1}^{\infty} g(k) = \zeta(z) \qquad (38)$$

closing the loop. To demonstrate equation (28) we turn our eyes to

$$\sum_{k=1}^{\infty} \frac{G(x/k)}{k} = \sum_{k=1}^{\infty} \frac{x^{z-1}}{k^{z}\, \Gamma(z)} = \frac{x^{z-1}\, \zeta(z)}{\Gamma(z)} = F(x) \qquad (39)$$

Applying the Laplace transform to this, we get

$$\mathcal{L}_{x}[F(x)](\alpha) = \frac{\zeta(z)}{\Gamma(z)} \int_{0}^{\infty} dx\, x^{z-1}\, e^{-\alpha x} = \frac{\zeta(z)}{\alpha^{z}} = f(\alpha) \qquad (40)$$

getting the same result via another path.

Simple Trigonometric Series

The cosine series is a good test case since the outcome is simple. The parameter $\alpha$ is required to satisfy $0 < \alpha < \pi$:

$$\sum_{k=1}^{\infty} \cos(\alpha k) = \sum_{k=1}^{\infty} g(\alpha k) \qquad (41)$$

The inverse Laplace transform of $g(k)$ is

$$G(t) = \frac{1}{2}\left[\delta(t + i) + \delta(t - i)\right] \qquad (42)$$

The integral in equation (27) gives us

$$\sum_{k=1}^{\infty} g(\alpha k) = \int_{0}^{\infty} \frac{dt\, G(t)}{e^{\alpha t} - 1} = \frac{1}{2} \int_{0}^{\infty} \frac{dt\, \left[\delta(t + i) + \delta(t - i)\right]}{e^{\alpha t} - 1} \qquad (43)$$

and hence

$$\sum_{k=1}^{\infty} g(\alpha k) = -\frac{1}{2} \qquad (44)$$

irrespective of $\alpha$.

The integrated cosine series is a bit more complex; we therefore use the corresponding result in Appendix A. The parameter $\alpha$ is within $0 < \alpha < \pi$:

$$\sum_{k=1}^{\infty} \frac{\sin(\alpha k)}{k} = \sum_{k=1}^{\infty} \frac{g(\alpha k)}{k} \qquad (45)$$

The inverse Laplace transform of $g(k)$ is

$$G(t) = \frac{1}{2i}\left[\delta(t + i) - \delta(t - i)\right] \qquad (46)$$

We use the integral in equation (88) and obtain

$$\sum_{k=1}^{\infty} \frac{g(\alpha k)}{k} = -\int_{0}^{\infty} dt\, G(t)\, \ln(1 - e^{-\alpha t}) = -\frac{1}{2i} \int_{0}^{\infty} dt\, \ln(1 - e^{-\alpha t})\, \left[\delta(t + i) - \delta(t - i)\right] \qquad (47)$$

The integral becomes

$$\sum_{k=1}^{\infty} \frac{\sin(\alpha k)}{k} = -\frac{1}{2i}\ln(-e^{i\alpha}) = \pm\frac{\pi}{2} - \frac{\alpha}{2} \qquad (48)$$

valid for $0 < \alpha < \pi$ or $0 > \alpha > -\pi$, respectively. The equation should read

$$\sum_{k=1}^{\infty} \frac{\sin(\alpha k)}{k} = \frac{\pi \cdot \mathrm{sgn}(\alpha)}{2} - \frac{\alpha}{2} \qquad (49)$$

Converting a Fractional Series to a Power Series

A more demanding case is the following convergent series, with $|a| < 1$ and $\Re(\beta) > 1$:

$$\sum_{k=1}^{\infty} g(k) = \sum_{k=1}^{\infty} \frac{1}{(a + k)^{\beta}} \qquad (50)$$

The inverse Laplace transform of $g(k)$ is

$$G(t) = \frac{t^{\beta-1}\, e^{-a t}}{\Gamma(\beta)} \qquad (51)$$

We have by equation (27)

$$\sum_{k=1}^{\infty} g(\alpha k) = \sum_{k=1}^{\infty} \frac{1}{(a + \alpha k)^{\beta}} = \frac{1}{\Gamma(\beta)} \int_{0}^{\infty} \frac{dt\, e^{-a t}\, t^{\beta-1}}{e^{\alpha t} - 1} \qquad (52)$$

Expanding $e^{-at}$ as a power series and integrating termwise with the integral representation of $\zeta(s)$, $\Re(s) > 1$, gives

$$\sum_{k=1}^{\infty} g(\alpha k) = \sum_{n=0}^{\infty} \frac{(-1)^{n}\, a^{n}\, \Gamma(\beta + n)\, \zeta(\beta + n)}{\Gamma(n + 1)\, \Gamma(\beta)\, \alpha^{\beta + n}} \qquad (53)$$

As $\alpha \to 1$,

$$\sum_{k=1}^{\infty} g(k) = \sum_{k=1}^{\infty} \frac{1}{(a + k)^{\beta}} = \sum_{n=0}^{\infty} \frac{(-1)^{n}\, a^{n}\, \Gamma(\beta + n)\, \zeta(\beta + n)}{\Gamma(n + 1)\, \Gamma(\beta)} \qquad (54)$$

This has been verified numerically within the ranges of validity mentioned above. The advantage of this result is that the original, less attractive function on the left is expanded as a power series in $a$. On the other hand, this series is closely related to the Hurwitz zeta function.
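The expansion (54) admits a direct numeric check. The parameter values $a = 0.3$, $\beta = 2$ below are arbitrary choices within the stated ranges, and $\zeta$ is evaluated by a plain truncated sum with an integral tail correction; the helper names are mine.

```python
import math

# Numeric check of eq. (54):
#   sum_{k>=1} 1/(a+k)^beta
#     = sum_{n>=0} (-1)^n a^n Gamma(beta+n) zeta(beta+n) / (Gamma(n+1) Gamma(beta))
a, beta = 0.3, 2.0

# Left side: direct sum plus a midpoint-rule estimate of the tail integral.
N = 200_000
lhs = sum((a + k) ** -beta for k in range(1, N + 1))
lhs += (a + N + 0.5) ** (1.0 - beta) / (beta - 1.0)

def zeta(s, M=20_000):
    """Truncated zeta sum with an integral tail correction."""
    return sum(k ** -s for k in range(1, M + 1)) + M ** (1.0 - s) / (s - 1.0)

rhs = sum(
    (-1) ** n * a ** n * math.gamma(beta + n) * zeta(beta + n)
    / (math.gamma(n + 1) * math.gamma(beta))
    for n in range(60)
)

print(lhs, rhs)
```

Since $|a| < 1$, the terms of the right-hand series decay geometrically and 60 terms are ample.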
A more complicated case is the following alternating series, with $b, a \in \mathbb{C}$, $\Re(b + ia + 1) > 0$, $a \neq 0$, and $|\Im(a)| < \Re(b) + 1$:

$$\sum_{k=1}^{\infty} \frac{(-1)^{k+1}\, \sin(a \cdot \ln(k))}{k^{b+1}} = \sum_{k=1}^{\infty} (-1)^{k+1}\, g(k) \qquad (55)$$

We plan to use equation (70) and need the inverse Laplace transform of

$$g(k) = \frac{\sin(a \cdot \ln(k))}{k^{b+1}} \qquad (56)$$

We can use the method for calculating inverse Laplace transforms of implicit logarithmic functions [14]: with $h(s) = \mathcal{L}_{t}[H(t)](s)$,

$$\mathcal{L}^{-1}_{s}\left[\frac{h(c \cdot \ln(s))}{s^{b+1}}\right](t) = \int_{0}^{\infty} \frac{t^{cu+b}\, H(u)\,du}{\Gamma(cu + b + 1)} \qquad (57)$$

With

$$h(s) = \sin(a s) \qquad (58)$$

$$H(t) = \frac{1}{2i}\left[\delta(t + ia) - \delta(t - ia)\right] \qquad (59)$$

we obtain

$$G(t) = \frac{1}{2i}\left[\frac{t^{b - ia}}{\Gamma(b - ia + 1)} - \frac{t^{b + ia}}{\Gamma(b + ia + 1)}\right] \qquad (60)$$

We use the basic integral representation of the Riemann zeta function for $\Re(s) > 0$:

$$\zeta(s) = \frac{1}{(1 - 2^{1-s})\, \Gamma(s)} \int_{0}^{\infty} \frac{dt\, t^{s-1}}{e^{t} + 1} \qquad (61)$$

We now have all the information needed to apply equation (70):

$$\sum_{k=1}^{\infty} (-1)^{k+1}\, g(\alpha k) = \frac{\alpha^{-b-1}}{2i}\left[\alpha^{ia}\, \zeta(b - ia + 1)\left(1 - 2^{-b+ia}\right) - \alpha^{-ia}\, \zeta(b + ia + 1)\left(1 - 2^{-b-ia}\right)\right] \qquad (62)$$

As $\alpha \to 1$,

$$\sum_{k=1}^{\infty} (-1)^{k+1}\, g(k) = \frac{1}{2i}\left[\zeta(b - ia + 1)\left(1 - 2^{-b+ia}\right) - \zeta(b + ia + 1)\left(1 - 2^{-b-ia}\right)\right] = -\Im\left[\zeta(b + ia + 1)\left(1 - 2^{-b-ia}\right)\right] \qquad (63)$$

It is tempting to see how the companion function $\cos(a \cdot \ln(k))$ behaves. We process the equations through, starting from

$$\sum_{k=1}^{\infty} (-1)^{k+1}\, m(k) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}\, \cos(a \cdot \ln(k))}{k^{b+1}} \qquad (64)$$

The inverse Laplace transform is

$$M(t) = \frac{1}{2}\left[\frac{t^{b - ia}}{\Gamma(b - ia + 1)} + \frac{t^{b + ia}}{\Gamma(b + ia + 1)}\right] \qquad (65)$$

and we get

$$\sum_{k=1}^{\infty} (-1)^{k+1}\, m(k) = \frac{1}{2}\left[\zeta(b - ia + 1)\left(1 - 2^{-b+ia}\right) + \zeta(b + ia + 1)\left(1 - 2^{-b-ia}\right)\right] = \Re\left[\zeta(b + ia + 1)\left(1 - 2^{-b-ia}\right)\right] \qquad (66)$$

If we now combine equations (63) and (66) properly, we get

$$\sum_{k=1}^{\infty} \frac{(-1)^{k+1}\left(\cos(a \cdot \ln(k)) - i \cdot \sin(a \cdot \ln(k))\right)}{k^{b+1}} = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k^{b + ia + 1}} = \zeta(b + ia + 1)\left(1 - 2^{-b-ia}\right) \qquad (67)$$

The series expression on the left is equal to $\zeta(b + ia + 1)(1 - 2^{-b-ia})$ when $\Re(b + ia + 1) > 0$. Indeed, this was all about $\zeta(s)$.

Partial Summation

Equation (23) can be extended to a kind of partial summation if we apply it only to the function $f(k)$ in the following sum:

$$\sum_{k=1}^{\infty} g(k)\, f(k) = \int_{0}^{\infty} dt\, F(t) \sum_{k=1}^{\infty} g(k)\, e^{-kt} \qquad (68)$$

Here $F(t)$ is the inverse Laplace transform of $f(k)$:

$$f(k) = \int_{0}^{\infty} e^{-kt}\, F(t)\,dt \qquad (69)$$

The remaining summation may be more attractive to solve; the integral remains to be solved.

Conclusions

We have shown that the method presented offers a new way of generating solutions to infinite series. Various types of functions $g(k)$ of the index can be handled. The simple formula (23) is in many cases quite effective in offering a solution in the form of a rather simple integral. A parameter $\alpha$ can be added to the functions as a multiplier of the index in order to extend the method. The resulting equations (27) and (28) offer more options for obtaining solutions; they are equivalent to each other via a mediating Laplace transform. The method and the working practice are explained in Section 2.

The conditions stated after equation (23) should be honored. It is essential for a new solution to be verified, and its range of validity needs to be determined. Success of the method depends on finding the necessary forward or inverse Laplace transforms. Convergence, or uniform convergence, is required of the series.

References
[1] Euler, L.: Instit. Calc. Diff., Paris, II, p. 289 (1755)

[2] Whittaker, E.T., Watson, G.N.: A Course of Modern Analysis, Merchant Books (1915), 2nd Edition

[3] MacFarlane, G.: The Application of Mellin Transforms to the Summation of Slowly Convergent Series, Phil. Mag. Vol. 40, 188 (1949)

[4] McFadden, J.: Summation of Fourier Series, Journal of Applied Physics, Vol. 24, p. 364 (1953)

[5] Wheelon, A.: On the Summation of Infinite Series in Closed Form, Journal of Applied Physics, Vol. 25, No. 1 (1954)

[6] Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions, Dover (1970), 9th Edition

[7] Spiegel, M.R.: Finite Differences and Difference Equations, McGraw-Hill (1971), 1st Edition

[8] Glasser, M.: The Summation of Series, SIAM J. Math. Anal., Vol. 2, No. 4 (1971)

[9] Ivic, A.: The Riemann Zeta-Function, Dover Publications (1985), 1st Edition

[10] Hardy, G.: Divergent Series, AMS Chelsea Publishing (1991), 2nd Edition

[11] Patterson, S.J.: An Introduction to the Theory of the Riemann Zeta-Function, Cambridge Studies in Advanced Mathematics (1995), 1st Edition

[12] Jeffrey, A., Dai, Hui-Hui: Handbook of Mathematical Formulas and Integrals, Elsevier (2008), 4th Edition

[13] Candelpergher, B., Gopalkrishna Gadiyar, H., Padma, R.: Ramanujan Summation and the Exponential Generating Function, arXiv:0901.3452v1 [math.NT] (2009)

[14] Stenlund, H.: A Note on Laplace Transforms of Some Particular Function Types, arXiv:1402.2876v1 [math.GM] (2014)

Appendix A. Various Particular Forms of Equivalence
We can generate different equations by modifying equations (27) and (28) to better suit practical applications. These forms are generally valid, and they may be more useful than the basic formulas, as their forms are already bent in the right direction. The proofs are similar to those before. The functions $f(\alpha)$ and $F(x)$ differ from case to case. The formulas are given names referring to their expected crude behavior as a series. Notice the different index ranges in some cases. These equations can be used in the simple form of the upper formula with $\alpha = 1$. In each case the upper formula ($f(\alpha)$) and the lower formula ($F(x)$) are connected by the transform pair $\mathcal{L}_{x}$ (upward) and $\mathcal{L}^{-1}_{\alpha}$ (downward), as in equations (27)-(28).

A.1 Alternating Series

$$\int_{0}^{\infty} \frac{dt\, G(t)}{e^{\alpha t} + 1} = \sum_{k=1}^{\infty} (-1)^{k+1}\, g(\alpha k) = f(\alpha) \qquad (70)$$

$$\sum_{k=1}^{\infty} \frac{(-1)^{k+1}\, G(x/k)}{k} = F(x) \qquad (71)$$

A.2 Shifted Series

$$\int_{0}^{\infty} \frac{dt\, G(t)\, e^{-\beta t}}{e^{\alpha t} - 1} = \sum_{k=1}^{\infty} g(\alpha k + \beta) = f(\alpha) \qquad (72)$$

$$\sum_{k=1}^{\infty} \frac{e^{-\beta x/k}\, G(x/k)}{k} = F(x) \qquad (73)$$

A.3 Shifted Alternating Series

$$\int_{0}^{\infty} \frac{dt\, G(t)\, e^{-\beta t}}{e^{\alpha t} + 1} = \sum_{k=1}^{\infty} (-1)^{k+1}\, g(\alpha k + \beta) = f(\alpha) \qquad (74)$$

$$\sum_{k=1}^{\infty} \frac{(-1)^{k+1}\, e^{-\beta x/k}\, G(x/k)}{k} = F(x) \qquad (75)$$

A.4 Power Factor Series

With $|\gamma| > 1$:

$$\int_{0}^{\infty} \frac{dt\, G(t)}{\gamma\, e^{\alpha t} - 1} = \sum_{k=1}^{\infty} \frac{g(\alpha k)}{\gamma^{k}} = f(\alpha) \qquad (76)$$

$$\sum_{k=1}^{\infty} \frac{G(x/k)}{\gamma^{k}\, k} = F(x) \qquad (77)$$

A.5 Power Factor Alternating Series

With $|\gamma| > 1$:

$$\int_{0}^{\infty} \frac{dt\, G(t)}{\gamma\, e^{\alpha t} + 1} = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}\, g(\alpha k)}{\gamma^{k}} = f(\alpha) \qquad (78)$$

$$\sum_{k=1}^{\infty} \frac{(-1)^{k+1}\, G(x/k)}{\gamma^{k}\, k} = F(x) \qquad (79)$$

A.6 Exponential Factor Series

With $\Re(\beta) > 0$:

$$\int_{0}^{\infty} \frac{dt\, G(t)}{e^{\alpha t + \beta} - 1} = \sum_{k=1}^{\infty} e^{-\beta k}\, g(\alpha k) = f(\alpha) \qquad (80)$$

$$\sum_{k=1}^{\infty} \frac{e^{-\beta k}\, G(x/k)}{k} = F(x) \qquad (81)$$

A.7 Exponential Factor Alternating Series

With $\Re(\beta) > 0$:

$$\int_{0}^{\infty} \frac{dt\, G(t)}{e^{\alpha t + \beta} + 1} = \sum_{k=1}^{\infty} (-1)^{k+1}\, e^{-\beta k}\, g(\alpha k) = f(\alpha) \qquad (82)$$

$$\sum_{k=1}^{\infty} \frac{(-1)^{k+1}\, e^{-\beta k}\, G(x/k)}{k} = F(x) \qquad (83)$$

A.8 Differentiated Series

$$\int_{0}^{\infty} \frac{dt\, G(t)}{(e^{\alpha t} - 1)^{2}} = \sum_{k=1}^{\infty} k \cdot g(\alpha(k+1)) = f(\alpha) \qquad (84)$$

$$\sum_{k=1}^{\infty} \frac{k\, G(x/(k+1))}{k + 1} = F(x) \qquad (85)$$

A.9 Differentiated Alternating Series

$$\int_{0}^{\infty} \frac{dt\, G(t)}{(e^{\alpha t} + 1)^{2}} = \sum_{k=1}^{\infty} (-1)^{k+1}\, k \cdot g(\alpha(k+1)) = f(\alpha) \qquad (86)$$

$$\sum_{k=1}^{\infty} \frac{(-1)^{k+1}\, k\, G(x/(k+1))}{k + 1} = F(x) \qquad (87)$$

A.10 Integrated Series

$$\int_{0}^{\infty} dt\, G(t)\, \ln(1 - e^{-\alpha t}) = -\sum_{k=1}^{\infty} \frac{g(\alpha k)}{k} = f(\alpha) \qquad (88)$$

$$-\sum_{k=1}^{\infty} \frac{G(x/k)}{k^{2}} = F(x) \qquad (89)$$

A.11 Integrated Alternating Series

$$\int_{0}^{\infty} dt\, G(t)\, \ln(1 + e^{-\alpha t}) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}\, g(\alpha k)}{k} = f(\alpha) \qquad (90)$$

$$\sum_{k=1}^{\infty} \frac{(-1)^{k+1}\, G(x/k)}{k^{2}} = F(x) \qquad (91)$$

A.12 Added Constant Parameter Series

We assume $\beta$ to be an extra constant parameter with $|\alpha| > |\beta|$:

$$\int_{0}^{\infty} \frac{dt\, G(t)}{e^{\alpha t} - e^{\beta t}} = \sum_{k=1}^{\infty} g(\alpha k - \beta(k - 1)) = f(\alpha) \qquad (92)$$

$$e^{\beta x} \sum_{k=1}^{\infty} \frac{e^{-\beta x/k}\, G(x/k)}{k} = F(x) \qquad (93)$$

A.13 Alternating Added Constant Parameter Series

We assume $\beta$ to be an extra constant parameter with $|\alpha| > |\beta|$:

$$\int_{0}^{\infty} \frac{dt\, G(t)}{e^{\alpha t} + e^{\beta t}} = \sum_{k=1}^{\infty} (-1)^{k+1}\, g(\alpha k - \beta(k - 1)) = f(\alpha) \qquad (94)$$

$$e^{\beta x} \sum_{k=1}^{\infty} \frac{(-1)^{k+1}\, e^{-\beta x/k}\, G(x/k)}{k} = F(x) \qquad (95)$$

A.14 Hyperbolic Inverted Sine Series

$$\int_{0}^{\infty} \frac{dt\, G(t)}{e^{\alpha t} - e^{-\alpha t}} = \sum_{k=1}^{\infty} g(\alpha(2k - 1)) = f(\alpha) \qquad (96)$$

$$\sum_{k=1}^{\infty} \frac{G(x/(2k - 1))}{2k - 1} = F(x) \qquad (97)$$

A.15 Hyperbolic Inverted Cosine Series

$$\int_{0}^{\infty} \frac{dt\, G(t)}{e^{\alpha t} + e^{-\alpha t}} = \sum_{k=1}^{\infty} (-1)^{k+1}\, g(\alpha(2k - 1)) = f(\alpha) \qquad (98)$$

$$\sum_{k=1}^{\infty} \frac{(-1)^{k+1}\, G(x/(2k - 1))}{2k - 1} = F(x) \qquad (99)$$

A.16 Hyperbolic Inverted Sine Series With a Complex Argument

$$\int_{0}^{\infty} \frac{dt\, G(t)}{e^{t(\beta + i\alpha)} - e^{-t(\beta + i\alpha)}} = \sum_{k=1}^{\infty} g((\beta + i\alpha)(2k - 1)) = f(\alpha) \qquad (100)$$

$$\sum_{k=1}^{\infty} \frac{G(x/(i(2k - 1)))\, e^{i x \beta}}{i(2k - 1)} = F(x) \qquad (101)$$

A.17 Hyperbolic Inverted Cosine Series With a Complex Argument

$$\int_{0}^{\infty} \frac{dt\, G(t)}{e^{t(\beta + i\alpha)} + e^{-t(\beta + i\alpha)}} = \sum_{k=1}^{\infty} (-1)^{k+1}\, g((\beta + i\alpha)(2k - 1)) = f(\alpha) \qquad (102)$$

$$\sum_{k=1}^{\infty} \frac{(-1)^{k+1}\, G(x/(i(2k - 1)))\, e^{i x \beta}}{i(2k - 1)} = F(x) \qquad (103)$$

A.18 Hyperbolic Sine Series

$$\int_{0}^{\infty} dt\, G(t)\, \sinh(e^{-\alpha t}) = \sum_{k=1}^{\infty} \frac{g(\alpha(2k - 1))}{(2k - 1)!} = f(\alpha) \qquad (104)$$

$$\sum_{k=1}^{\infty} \frac{G(x/(2k - 1))}{(2k - 1)\,(2k - 1)!} = F(x) \qquad (105)$$

A.19 Hyperbolic Cosine Series

$$\int_{0}^{\infty} dt\, G(t)\, \cosh(e^{-\alpha t}) = g(0) + \sum_{n=1}^{\infty} \frac{g(2\alpha n)}{(2n)!} = f(\alpha) \qquad (106)$$

$$\delta(x)\, g(0) + \sum_{n=1}^{\infty} \frac{G(x/(2n))}{2n\,(2n)!} = F(x) \qquad (107)$$

A.20 Square Root Series

$$\int_{0}^{\infty} \frac{dt\, G(t)}{\sqrt{1 - e^{-\alpha t}}} = g(0) + \sum_{n=1}^{\infty} \frac{1 \cdot 3 \cdots (2n - 1)\, g(\alpha n)}{2 \cdot 4 \cdots (2n)} = f(\alpha) \qquad (108)$$

$$\delta(x)\, g(0) + \sum_{n=1}^{\infty} \frac{1 \cdot 3 \cdots (2n - 1)\, G(x/n)}{2 \cdot 4 \cdots (2n)\, n} = F(x) \qquad (109)$$

A.21 Alternating Square Root Series

$$\int_{0}^{\infty} \frac{dt\, G(t)}{\sqrt{1 + e^{-\alpha t}}} = g(0) + \sum_{n=1}^{\infty} \frac{(-1)^{n}\, 1 \cdot 3 \cdots (2n - 1)\, g(\alpha n)}{2 \cdot 4 \cdots (2n)} = f(\alpha) \qquad (110)$$

$$\delta(x)\, g(0) + \sum_{n=1}^{\infty} \frac{(-1)^{n}\, 1 \cdot 3 \cdots (2n - 1)\, G(x/n)}{2 \cdot 4 \cdots (2n)\, n} = F(x) \qquad (111)$$

A.22 Exponential Series

$$\int_{0}^{\infty} dt\, G(t)\, e^{e^{-\alpha t}} = g(0) + \sum_{k=1}^{\infty} \frac{g(\alpha k)}{k!} = f(\alpha) \qquad (112)$$

$$\delta(x)\, g(0) + \sum_{k=1}^{\infty} \frac{G(x/k)}{k \cdot k!} = F(x) \qquad (113)$$

A.23 Negative Exponential Series

$$\int_{0}^{\infty} dt\, G(t)\, e^{-e^{-\alpha t}} = g(0) + \sum_{k=1}^{\infty} \frac{(-1)^{k}\, g(\alpha k)}{k!} = f(\alpha) \qquad (114)$$

$$\delta(x)\, g(0) + \sum_{k=1}^{\infty} \frac{(-1)^{k}\, G(x/k)}{k \cdot k!} = F(x) \qquad (115)$$
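The alternating form (70) can be checked numerically. The choice $g(k) = 1/k^{2}$, with inverse Laplace transform $G(t) = t$, is mine for illustration; both sides then equal $\eta(2) = \pi^{2}/12$, and the quadrature routine below is a plain Simpson rule.

```python
import math

# Check eq. (70): sum_{k>=1} (-1)^(k+1) g(k) = int_0^inf G(t)/(e^t + 1) dt
# for g(k) = 1/k^2, whose inverse Laplace transform is G(t) = t.
# Both sides should equal eta(2) = pi^2 / 12.

def integrand(t):
    return t / (math.exp(t) + 1.0)

def simpson(f, a, b, n):               # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

integral = simpson(integrand, 0.0, 50.0, 20000)
series = sum((-1) ** (k + 1) / k ** 2 for k in range(1, 100001))
eta2 = math.pi ** 2 / 12

print(integral, series, eta2)
```

For the alternating series the truncation error is bounded by the first omitted term, about $10^{-10}$ here, so both values match $\pi^{2}/12$ closely.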