A New Linear Inversion Formula for a Class of Hypergeometric Polynomials
R. NASRI(*), A. SIMONIAN(*) AND F. GUILLEMIN (**)
Abstract.
Given complex parameters $x$, $\nu$, $\alpha$, $\beta$ and $\gamma \notin -\mathbb{N}$, consider the infinite lower triangular matrix $A(x,\nu;\alpha,\beta,\gamma)$ with elements

$$A_{n,k}(x,\nu;\alpha,\beta,\gamma) = (-1)^k \binom{n+\alpha}{k+\alpha} \cdot F(k-n, -(\beta+n)\nu; -(\gamma+n); x)$$

for $1 \le k \le n$, depending on the Hypergeometric polynomials $F(-n,\cdot\,;\cdot\,;x)$, $n \in \mathbb{N}^*$. After stating a general criterion for the inversion of infinite matrices in terms of associated generating functions, we prove that the inverse matrix $B(x,\nu;\alpha,\beta,\gamma) = A(x,\nu;\alpha,\beta,\gamma)^{-1}$ is given by

$$B_{n,k}(x,\nu;\alpha,\beta,\gamma) = (-1)^k \binom{n+\alpha}{k+\alpha} \cdot \left[\frac{\gamma+k}{\beta+k}\, F(k-n, (\beta+k)\nu; \gamma+k; x) + \frac{\beta-\gamma}{\beta+k}\, F(k-n, (\beta+k)\nu; 1+\gamma+k; x)\right]$$

for $1 \le k \le n$, thus providing a new class of linear inversion formulas. Functional relations for the generating functions of related sequences $S$ and $T$, that is, $T = A(x,\nu;\alpha,\beta,\gamma)\,S \iff S = B(x,\nu;\alpha,\beta,\gamma)\,T$, are also provided.

1. Introduction
We address a new class of linear inversion formulas with coefficients involving Hypergeometric polynomials. After an overview of the state of the art in the associated fields, we summarize our main contributions.

1.1. Motivation.
Consider the following inversion problem: let $x \in\, ]0,1[$ and $\nu < 1$. Solve the infinite lower-triangular linear system

(1.1) $\forall\, b \in \mathbb{N}^*, \qquad \sum_{\ell=1}^{b} (-1)^\ell \binom{b}{\ell}\, Q_{b,\ell}\, E_\ell = K_b,$

with unknowns $E_\ell$, $\ell \in \mathbb{N}^*$, and where the matrix $Q = (Q_{b,\ell})_{b,\ell\in\mathbb{N}^*}$ is given by

(1.2) $Q_{b,\ell} = -\frac{\Gamma(b)\,\Gamma(1-b\nu)}{\Gamma(b-b\nu)}\cdot\frac{x^{1-b}}{1-x}\, F(\ell-b, -b\nu; -b; x), \quad \ell \le b.$

This inversion problem is motivated by the resolution of an integral equation arising from Queuing Theory [1, Prop. 5.2]. In (1.2), $\Gamma$ is the Euler Gamma function and

(1.3) $F(\alpha,\beta;\gamma;x) = \sum_{m\ge 0} \frac{(\alpha)_m(\beta)_m}{(\gamma)_m}\,\frac{x^m}{m!}$

denotes the Gauss Hypergeometric series with complex parameters $\alpha$, $\beta$, $\gamma \notin -\mathbb{N}$ (here $(c)_m$, $m \in \mathbb{N}$, denotes the Pochhammer symbol for any $c \in \mathbb{C}$, with $(c)_0 = 1$ [2, 5.2(iii)]). Recall that the function $F(\alpha,\beta;\gamma;\cdot)$ reduces to a polynomial of degree $-\alpha$ (resp. $-\beta$) if $\alpha$ (resp. $\beta$) is a nonpositive integer; expression (1.2) for the coefficient $Q_{b,\ell}$ thus involves a Hypergeometric polynomial of degree $b-\ell$ in both $x$ and $\nu$. The diagonal coefficients $Q_{b,b}$, $b \ge 1$, are non-zero, so that the lower-triangular system (1.1) has a unique solution. To make this solution explicit in terms of the parameters, write system (1.1) equivalently as

(1.4) $\forall\, b \in \mathbb{N}^*, \qquad \sum_{\ell=1}^{b} A_{b,\ell}(x,\nu)\, E_\ell = \widetilde{K}_b,$

with the reduced right-hand side $(\widetilde{K}_b)$ defined by

$\widetilde{K}_b = -\frac{\Gamma(b-b\nu)}{\Gamma(b)\,\Gamma(1-b\nu)}\,(1-x)\, x^{b-1}\cdot K_b, \quad b \ge 1,$

and with the matrix $A(x,\nu) = (A_{b,\ell}(x,\nu))$ given by

(1.5) $A_{b,\ell}(x,\nu) = (-1)^\ell \binom{b}{\ell}\, F(\ell-b, -b\nu; -b; x), \quad \ell \le b.$

As shown in this paper, the linear relation (1.4) into which the initial system (1.1) has been recast can be explicitly inverted for any right-hand side $(K_b)_{b\in\mathbb{N}^*}$; this consequently fully solves system (1.1). As developed below, our inversion procedure will actually address a larger family $A(x,\nu;\alpha,\beta,\gamma)$ of infinite matrices depending on three other arbitrary parameters $\alpha$, $\beta$, $\gamma$ and including our initial matrix (1.5) as a special case. As $A(x,\nu;\alpha,\beta,\gamma)$ itself, the inverse matrix $B(x,\nu;\alpha,\beta,\gamma) = A(x,\nu;\alpha,\beta,\gamma)^{-1}$ will prove to involve a specific class of Gauss Hypergeometric polynomials.

Date: Version of July 7, 2020.

1.2. State-of-the-art.
We first review known classes of linear inversion formulas for the resolution of infinite linear systems. Most of these inversion formulas have been motivated by problems from pure Combinatorics, together with the determination of remarkable relations on special functions:

a) given complex sequences $(a_j)_{j\in\mathbb{Z}}$, $(b_j)_{j\in\mathbb{Z}}$ and $(c_j)_{j\in\mathbb{Z}}$ with $c_j \ne c_k$ for $j \ne k$, it has been shown [3] that the lower triangular matrices $A$ and $B$ with coefficients

(1.6) $A_{n,k} = \frac{\prod_{j=k}^{n-1}(a_j+b_jc_k)}{\prod_{j=k+1}^{n}(c_j-c_k)}, \qquad B_{n,k} = \frac{a_k+b_kc_k}{a_n+b_nc_n}\cdot\frac{\prod_{j=k+1}^{n}(a_j+b_jc_n)}{\prod_{j=k}^{n-1}(c_j-c_n)}$

for $k \le n$, are inverses. A generalization of (1.6) to the multi-dimensional case, when $A = (A_{\mathbf{n},\mathbf{k}})$ with multi-indexes $\mathbf{n}, \mathbf{k} \in \mathbb{Z}^r$, $r \in \mathbb{N}$, is also provided in [4]. As an application, the obtained relations provide summation formulas for multidimensional basic Hypergeometric series.

The matrix $A = A(x,\nu)$ introduced in (1.4)-(1.5), however, cannot be cast into the product form (1.6): in fact, such a product form for the coefficients of $A(x,\nu)$ should involve the $n-k$ zeros $c_{j,n,k}$, $k \le j \le n-1$, of the specific Hypergeometric polynomial $F(k-n,-n\nu;-n;x)$, $k \le n$, in the variable $x$; but such zeros depend on all indexes $j$, $n$ and $k$, which precludes the use of a factorization such as (1.6), where sequences with one index only intervene;

b) given a sequence $(\beta_n)_{n\in\mathbb{N}}$, the inverse $M$ of the triangular matrix $L$ with coefficients $L_{n,k} = \binom{n}{k}\beta_{n-k}$, $k \le n$, has been shown [5, Theorem 9] to be given by

$M_{n,k} = \binom{n}{k}\, A_{n-k}(0), \quad k \le n,$

where $A_k$, $k \in \mathbb{N}$, is the family of Appell polynomials associated with $(\beta_n)_{n\in\mathbb{N}}$; each coefficient $A_k(0)$ can generally be written in terms of a determinant of order $k$. This framework does not apply, however, to the matrix (1.5), since the ratio $A_{n,k}/\binom{n}{k}$ does not depend there on the difference $n-k$ only;

c) given constants $\alpha$, $\beta$, $x$ and the family of Jacobi polynomials

$P^{(\alpha,\beta)}_n(x) = \frac{(\alpha+1)_n}{n!}\, F\!\left(-n,\, n+\alpha+\beta+1;\, \alpha+1;\, \frac{1-x}{2}\right), \quad n \in \mathbb{N},$

it is established in [6, Theorem 4.1] that the lower triangular matrices $L$ and $M$, where

$L^{(\alpha,\beta)}_{n,k} = P^{(\alpha+k,\beta+k)}_{n-k}(x) \quad\text{and}\quad M^{(\alpha,\beta)}_{n,k} = \frac{n+\beta}{k+\alpha}\, P^{(-\alpha-n,-\beta-n)}_{n-k}(x) + \frac{\alpha-\beta}{k+\alpha}\, P^{(-\alpha-n,-\beta-n)}_{n-k-1}(x)$

for all $k \le n$, are inverses. A $q$-analogue of this result in terms of $q$-ultraspherical polynomials is considered in [7]. Closed analytical formulae for generalized linearization coefficients for Jacobi polynomials and other special polynomials have also been addressed in [8, 9]. Writing the latter coefficient $L^{(\alpha,\beta)}_{n,k}$ as

$L^{(\alpha,\beta)}_{n,k} = \frac{(\alpha+k+1)_{n-k}}{(n-k)!}\, F\!\left(k-n,\, n+k+\alpha+\beta+1;\, k+\alpha+1;\, \frac{1-x}{2}\right),$

the second argument in $F$ depends on $n+k$ only. This does not fit, however, the considered case (1.5), where the second argument of $F$ is $-n\nu$, with $\nu \ne 1$ in general.

In this paper, we will actually consider the inversion of the much larger family of lower triangular matrices $A(x,\nu;\alpha,\beta,\gamma)$ with coefficients

(1.7) $A_{n,k}(x,\nu;\alpha,\beta,\gamma) = (-1)^k \binom{n+\alpha}{k+\alpha} \cdot F(k-n, -(\beta+n)\nu; -(\gamma+n); x)$

for $1 \le k \le n$, depending on three other complex parameters $\alpha$, $\beta$, $\gamma \notin -\mathbb{N}$; our introductory case (1.5) thus corresponds to the specific values $\alpha = \beta = \gamma = 0$. Using functional operations on exponential generating series related to the coefficients of the matrix $A(x,\nu;\alpha,\beta,\gamma)$, we will show how it can be inverted through a fully explicit procedure. As developed below, the remarkable structure of the inverse $B(x,\nu;\alpha,\beta,\gamma) = A(x,\nu;\alpha,\beta,\gamma)^{-1}$ brings a new contribution to the field of linear inversion formulas, namely infinite matrices with coefficients involving a class of Hypergeometric polynomials depending on five parameters.
1.3. Paper contribution.
Our main contributions can be summarized as follows:

• in Section 2, we first establish an inversion criterion for a general class of infinite lower-triangular matrices, enabling us to state the inversion formula for the class of lower triangular matrices (1.7);

• in Section 3, functional relations are obtained for the ordinary (resp. exponential) generating functions of sequences $(S_n)_{n\in\mathbb{N}^*}$ and $(T_n)_{n\in\mathbb{N}^*}$ related by the inversion formula.

2. Lower-Triangular Systems
Let $(a_m)_{m\in\mathbb{N}}$ and $(b_m)_{m\in\mathbb{N}}$ be complex sequences such that $a_0 = b_0 = 1$, and denote by $f(x)$ and $g(x)$ their respective exponential generating series, that is,

(2.1) $f(x) = \sum_{m=0}^{+\infty} \frac{a_m}{m!}\, x^m, \qquad g(x) = \sum_{m=0}^{+\infty} \frac{b_m}{m!}\, x^m.$

We use the notation $[x^n]f(x)$ for the coefficient of $x^n$, $n \in \mathbb{N}$, in the series $f(x)$. For all $x, \alpha \in \mathbb{C}$, define the infinite lower-triangular matrices $A(x,\alpha) = (A_{n,k}(x,\alpha))_{n,k\in\mathbb{N}^*}$ and $B(x,\alpha) = (B_{n,k}(x,\alpha))_{n,k\in\mathbb{N}^*}$ by

(2.2) $A_{n,k}(x,\alpha) = (-1)^k \binom{\alpha+n}{\alpha+k} \sum_{m=0}^{n-k} \frac{(k-n)_m\, a_m}{m!}\, x^m, \qquad B_{n,k}(x,\alpha) = (-1)^k \binom{\alpha+n}{\alpha+k} \sum_{m=0}^{n-k} \frac{(k-n)_m\, b_m}{m!}\, x^m.$

From definition (2.2), the matrices $A(x,\alpha)$ and $B(x,\alpha)$ have diagonal elements $A_{k,k}(x,\alpha) = B_{k,k}(x,\alpha) = (-1)^k$, $k \in \mathbb{N}^*$, and are thus invertible.

2.1. An inversion criterion.
The sequences $(a_m)_{m\in\mathbb{N}}$ and $(b_m)_{m\in\mathbb{N}}$ will be said to be independent if, for any pair $(n,k)$,

• they may depend on one index $n$ or $k$, but not on both;

• they do not depend on the same index, that is, if $(a_m)_{m\in\mathbb{N}}$ depends on $n$ (resp. on $k$), then $(b_m)_{m\in\mathbb{N}}$ depends on $k$ (resp. on $n$) and not on $n$ (resp. not on $k$).

To alleviate notation, the dependence of either $(a_m)_{m\in\mathbb{N}}$ or $(b_m)_{m\in\mathbb{N}}$ on the indexes $n$ or $k$ is specified below only when necessary. We now state the following inversion criterion.

Theorem 2.1.
Given independent sequences $(a_m)_{m\in\mathbb{N}}$ and $(b_m)_{m\in\mathbb{N}}$, the associated matrices $A(x,\alpha)$ and $B(x,\alpha)$ defined by (2.2) are inverses of each other if and only if the condition

(2.3) $[x^{n-k}]\; f(-x)\,g(x) = \delta_{n,k}, \quad k \le n,$

on the generating functions $f$ and $g$ holds, with $\delta_{n,k} = 1$ if $n = k$ and $0$ otherwise.
For conciseness of notation again, the dependence of the generating functions $f$ and $g$ on the parameter $\alpha$ is omitted. The proof of Theorem 2.1 requires the following technical lemma, whose proof is deferred to Appendix 5.1.

Lemma 2.1.
Let $N \in \mathbb{N}^*$ and complex numbers $\lambda$, $\mu$. Defining

$D_N(\lambda,\mu) = \sum_{r=0}^{N-1} \frac{(-1)^r}{\Gamma(1+r-\lambda)\,\Gamma(1-r+\mu)},$

we then have

(2.4) $D_N(\lambda,\mu) = \begin{cases} \dfrac{1}{\mu-\lambda}\left[\dfrac{1}{\Gamma(-\lambda)\Gamma(1+\mu)} - \dfrac{(-1)^N}{\Gamma(N-\lambda)\Gamma(1-N+\mu)}\right], & \mu \ne \lambda, \\[2mm] \dfrac{\sin(\pi\lambda)}{\pi}\left[\psi(-\lambda) - \psi(N-\lambda)\right], & \mu = \lambda, \end{cases}$

where $\psi = \Gamma'/\Gamma$. We now proceed with the justification of Theorem 2.1.
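The closed form (2.4) is easy to sanity-check numerically. The sketch below (illustrative parameter values, not from the paper) compares the finite sum defining $D_N$ with both branches of (2.4) using mpmath.

```python
from mpmath import mp, gamma, digamma, sin, pi, fsum

mp.dps = 30  # working precision (decimal digits)

def D(N, lam, mu):
    """Finite sum D_N(lambda, mu) = sum_{r=0}^{N-1} (-1)^r / (Gamma(1+r-lam) Gamma(1-r+mu))."""
    return fsum((-1)**r / (gamma(1 + r - lam) * gamma(1 - r + mu)) for r in range(N))

def D_closed(N, lam, mu):
    """Right-hand side of (2.4)."""
    if mu == lam:
        return sin(pi*lam)/pi * (digamma(-lam) - digamma(N - lam))
    return (1/(mu - lam)) * (1/(gamma(-lam)*gamma(1 + mu))
                             - (-1)**N / (gamma(N - lam)*gamma(1 - N + mu)))

# Non-integer values keep all Gamma factors finite.
N, lam, mu = 5, mp.mpf('0.3'), mp.mpf('1.7')
assert abs(D(N, lam, mu) - D_closed(N, lam, mu)) < mp.mpf('1e-20')      # branch mu != lam
assert abs(D(N, lam, lam) - D_closed(N, lam, lam)) < mp.mpf('1e-20')    # branch mu == lam
```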
Proof.
Let $(a_m)_{m\in\mathbb{N}}$ and $(b_m)_{m\in\mathbb{N}}$ be two independent sequences. Without loss of generality, we assume that, for every $m \in \mathbb{N}$, $a_m$ depends on $n$ and not on $k$, while $b_m$ depends on $k$ and not on $n$. Both matrices $A(x,\alpha)$ and $B(x,\alpha)$ being lower-triangular, so is their product $C(x,\alpha) = A(x,\alpha)B(x,\alpha)$. After definition (2.2), the coefficient

$C_{n,k}(x,\alpha) = \sum_{\ell \ge 1} A_{n,\ell}(x,\alpha)\, B_{\ell,k}(x,\alpha), \quad k \le n$

(where the latter sum over index $\ell$ is actually finite) of the matrix $C(x,\alpha)$ reads

$C_{n,k}(x,\alpha) = \sum_{\ell=1}^{+\infty} (-1)^\ell \binom{\alpha+n}{\alpha+\ell} \sum_{m=0}^{n-\ell} \frac{(-1)^m (n-\ell)!\; a_m}{(n-\ell-m)!\, m!}\, x^m \times (-1)^k \binom{\alpha+\ell}{\alpha+k} \sum_{m'=0}^{\ell-k} \frac{(-1)^{m'} (\ell-k)!\; b_{m'}}{(\ell-k-m')!\, m'!}\, x^{m'}$

after writing $(-r)_m = (-1)^m\, r!/(r-m)!$ for any positive integer $r$. Using the identity

$\binom{\alpha+n}{\alpha+\ell}\binom{\alpha+\ell}{\alpha+k}\,(n-\ell)!\,(\ell-k)! = \frac{\Gamma(\alpha+n+1)}{\Gamma(\alpha+k+1)},$

$C_{n,k}(x,\alpha)$ simplifies to

(2.5) $C_{n,k}(x,\alpha) = (-1)^k\, \frac{\Gamma(\alpha+n+1)}{\Gamma(\alpha+k+1)} \times \sum_{\ell=1}^{+\infty} (-1)^\ell \sum_{m=0}^{n-\ell} \frac{(-1)^m a_m\, x^m}{m!\,(n-\ell-m)!} \sum_{m'=0}^{\ell-k} \frac{(-1)^{m'} b_{m'}\, x^{m'}}{m'!\,(\ell-k-m')!}.$

Since $a_m$ depends only on $n$ and $b_m$ depends only on $k$, $a_m$ and $b_m$ are both independent of the index $\ell$ in the sum (2.5). We can then exchange the summation order to sum first on $\ell$, giving

(2.6) $C_{n,k}(x,\alpha) = (-1)^k\, \frac{\Gamma(\alpha+n+1)}{\Gamma(\alpha+k+1)} \sum_{(m,m')\in\Delta_{n,k}} \frac{(-1)^m a_m\, x^m}{m!}\, \frac{(-1)^{m'} b_{m'}\, x^{m'}}{m'!} \times \sum_{k \le \ell \le n} \frac{(-1)^\ell}{(n-\ell-m)!\,(\ell-k-m')!}$

with the subset $\Delta_{n,k} = \{(m,m') \in \mathbb{N}^2,\; m+m' \le n-k\}$ for given $k \le n$, and where the latter sum on index $\ell$ can be equivalently written as

$\sum_{k \le \ell \le n} \frac{(-1)^\ell}{(n-\ell-m)!\,(\ell-k-m')!} = \sum_{r=0}^{n-k} \frac{(-1)^{n-r}}{(r-m)!\,(n-r-k-m')!} = (-1)^n\, D_{n-k+1}(m,\, n-k-m')$

with the index change $\ell = n-r$ and the notation of Lemma 2.1.
The expression (2.6) for the coefficient $C_{n,k}(x,\alpha)$ consequently reduces to

(2.7) $C_{n,k}(x,\alpha) = (-1)^{n+k}\, \frac{\Gamma(\alpha+n+1)}{\Gamma(\alpha+k+1)} \sum_{(m,m')\in\Delta_{n,k}} \frac{(-1)^m a_m\, x^m}{m!}\, \frac{(-1)^{m'} b_{m'}\, x^{m'}}{m'!} \times D_{n-k+1}(m,\, n-k-m')$

and we are left to calculate $D_{n-k+1}(m, n-k-m')$ for all nonnegative $m$ and $m'$, $(m,m') \in \Delta_{n,k}$. By Lemma 2.1 applied to $\lambda = m$ and $\mu = n-k-m'$, we successively derive that:

(a) if $\mu > \lambda \iff m+m' < n-k$, formula (2.4) entails

$D_{n-k+1}(m,\, n-k-m') = \frac{1}{n-k-(m+m')}\left[\frac{1}{\Gamma(-m)\,\Gamma(1+n-k-m')} - \frac{(-1)^{n-k+1}}{\Gamma(n-k+1-m)\,\Gamma(-m')}\right];$

as $\Gamma(-m) = \Gamma(-m') = \infty$ for all nonnegative integers $m \ge 0$, $m' \ge 0$, we obtain

(2.8) $D_{n-k+1}(m,\, n-k-m') = 0, \quad m+m' < n-k;$

(b) if $\lambda = \mu \iff m+m' = n-k$, formula (2.4) entails

(2.9) $D_{n-k+1}(m,m) = \lim_{\lambda\to m} \frac{\sin(\pi\lambda)}{\pi}\left[\psi(-\lambda) - \psi(n-k+1-\lambda)\right].$

We have $\sin(m\pi) = 0$ while the function $\psi$ has a polar singularity at every nonpositive integer; the limit (2.9) is thus indeterminate ($0 \times \infty$), but this is solved via the reflection formula $\psi(z) - \psi(1-z) = -\pi\cot(\pi z)$, $z \notin \mathbb{Z}$, for the function $\psi$ [2, Chap. 5, 5.5.4]. In fact, applying the latter to $z = -\lambda$ first gives $\sin(\pi\lambda)\,\psi(-\lambda) = \sin(\pi\lambda)\,\psi(1+\lambda) + \pi\cos(\pi\lambda)$, whence

$\lim_{\lambda\to m} \frac{\sin(\pi\lambda)}{\pi}\,\psi(-\lambda) = 0 \times \psi(1+m) + (-1)^m = (-1)^m;$

besides, the second term $\psi(n-k+1-\lambda)$ in (2.9) has a finite limit when $\lambda \to m$, since $m+m' = n-k \Rightarrow m \le n-k$, so that $n-k+1-\lambda$ tends to a positive integer. From (2.9) and the latter discussion, we are left with

(2.10) $D_{n-k+1}(m,m) = (-1)^m, \quad m+m' = n-k.$

In view of items (a) and (b), identities (2.8) and (2.10) together reduce expression (2.7) to

$C_{n,k}(x,\alpha) = \frac{\Gamma(\alpha+n+1)}{\Gamma(\alpha+k+1)} \sum_{m=0}^{n-k} \frac{(-1)^m a_m\, x^m}{m!}\cdot\frac{b_{n-k-m}\, x^{n-k-m}}{(n-k-m)!} = \frac{\Gamma(\alpha+n+1)}{\Gamma(\alpha+k+1)}\; x^{n-k}\, [x^{n-k}]\; f(-x)\,g(x),$

where $f$ and $g$ denote the exponential generating functions of the sequence $(a_m)_{m\in\mathbb{N}}$ and the sequence $(b_m)_{m\in\mathbb{N}}$, respectively. It follows that $C(x,\alpha) = A(x,\alpha)B(x,\alpha)$ is the identity matrix $\mathrm{Id}$ if and only if condition (2.3) holds, as claimed. □

2.2. The inversion formula.
We now formulate the inversion formula for a whole family of lower-triangular matrices involving Hypergeometric polynomials.
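Before applying it, the criterion of Theorem 2.1 can be illustrated on a toy pair of independent (here constant) sequences: taking $a_m = b_m = c^m$ gives $f(x) = g(x) = e^{cx}$, so $f(-x)g(x) = 1$ and condition (2.3) holds trivially; the matrices (2.2) then have entries $(-1)^k\binom{\alpha+n}{\alpha+k}(1-cx)^{n-k}$ and must be mutually inverse. The sketch below (illustrative rational values, not from the paper) checks this exactly.

```python
from fractions import Fraction as Fr

def gbinom(n, k, alpha):
    """C(alpha+n, alpha+k) = prod_{j=k+1}^{n} (alpha+j) / (n-k)! for integers n >= k."""
    num, den = Fr(1), Fr(1)
    for j in range(k + 1, n + 1):
        num *= alpha + j
        den *= j - k
    return num / den

# a_m = b_m = c^m gives sum_{m<=n-k} (k-n)_m (c x)^m / m! = (1 - c x)^{n-k}.
x, c, alpha = Fr(1, 3), Fr(5, 2), Fr(1, 4)
N = 6

def entry(n, k):
    return (-1)**k * gbinom(n, k, alpha) * (1 - c*x)**(n - k)

M = [[entry(n, k) if k <= n else Fr(0) for k in range(1, N + 1)] for n in range(1, N + 1)]
# Here A = B, so the criterion predicts that M is an involution: M @ M = Id.
P = [[sum(M[i][l] * M[l][j] for l in range(N)) for j in range(N)] for i in range(N)]
assert all(P[i][j] == (1 if i == j else 0) for i in range(N) for j in range(N))
```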
Theorem 2.2.
Let $x, \nu, \alpha, \beta \in \mathbb{C}$ and $\gamma \in \mathbb{C}\setminus\mathbb{Z}^{*-}$. Define the lower-triangular matrices $A(x,\nu;\alpha,\beta,\gamma)$ and $B(x,\nu;\alpha,\beta,\gamma)$ by

(2.11) $A_{n,k}(x,\nu;\alpha,\beta,\gamma) = (-1)^k \binom{n+\alpha}{k+\alpha}\, F(k-n, -(\beta+n)\nu; -(\gamma+n); x),$

$B_{n,k}(x,\nu;\alpha,\beta,\gamma) = (-1)^k \binom{n+\alpha}{k+\alpha} \cdot \left[\frac{\gamma+k}{\beta+k}\, F(k-n, (\beta+k)\nu; \gamma+k; x) + \frac{\beta-\gamma}{\beta+k}\, F(k-n, (\beta+k)\nu; 1+\gamma+k; x)\right]$

for $k \le n$. The inversion formula

(2.12) $T_n = \sum_{k=1}^{n} A_{n,k}(x,\nu;\alpha,\beta,\gamma)\, S_k \iff S_n = \sum_{k=1}^{n} B_{n,k}(x,\nu;\alpha,\beta,\gamma)\, T_k, \quad n \in \mathbb{N}^*,$

then holds for any pair of complex sequences $(S_n)_{n\in\mathbb{N}^*}$ and $(T_n)_{n\in\mathbb{N}^*}$.

Proof.
To show that $AB = \mathrm{Id}$ for the matrices $A$ and $B$ defined in (2.11), we apply Theorem 2.1. From definition (2.2), we first specify the sequences $(a_{m;n,k})_{m\in\mathbb{N}}$ and $(b_{m;n,k})_{m\in\mathbb{N}}$ for a given pair $(n,k)$. From the standard definition (1.3) of the Gauss Hypergeometric function $F(\alpha,\beta;\gamma;\cdot)$, the sequences $(a_{m;n,k})_{m\in\mathbb{N}}$ and $(b_{m;n,k})_{m\in\mathbb{N}}$ respectively associated with the matrices $A$ and $B$ defined in (2.11) are readily given by

$a_{m;n,k} = \frac{(-(\beta+n)\nu)_m}{(-\gamma-n)_m}, \qquad b_{m;n,k} = \frac{\gamma+k}{\beta+k}\,\frac{((\beta+k)\nu)_m}{(\gamma+k)_m} + \frac{\beta-\gamma}{\beta+k}\,\frac{((\beta+k)\nu)_m}{(1+\gamma+k)_m}$

for all $m \ge 0$ and $n, k \in \mathbb{N}^*$, $1 \le k \le n$; in particular, $a_{0;n,k} = b_{0;n,k} = 1$. It appears that $a_{m;n,k}$ does not depend on $k$, while $b_{m;n,k}$ does not depend on $n$; the sequences $(a_{m;n,k})$ and $(b_{m;n,k})$ are therefore independent. In the following, we remove the index $k$ from $a_{m;n,k}$ and the index $n$ from $b_{m;n,k}$. To verify the inversion criterion (2.3), let $f_n$ and $g_k$ denote the exponential generating functions of the sequences $(a_{m;n})_{m\ge 0}$ and $(b_{m;k})_{m\ge 0}$, respectively. The product $f_n(-x)g_k(x)$ is then given by

$f_n(-x)\,g_k(x) = \sum_{m\ge 0} \frac{(-1)^m a_{m;n}}{m!}\, x^m \sum_{m\ge 0} \frac{b_{m;k}}{m!}\, x^m = \sum_{\ell\ge 0} U^{(n,k)}_\ell\, x^\ell$

where we set, for any $\ell \ge 0$,

(2.13) $U^{(n,k)}_\ell = \sum_{m=0}^{\ell} \frac{(-1)^m\,(-(\beta+n)\nu)_m}{(-\gamma-n)_m\; m!} \times \left[\frac{\gamma+k}{\beta+k}\,\frac{((\beta+k)\nu)_{\ell-m}}{(\ell-m)!\,(\gamma+k)_{\ell-m}} + \frac{\beta-\gamma}{\beta+k}\,\frac{((\beta+k)\nu)_{\ell-m}}{(\ell-m)!\,(1+\gamma+k)_{\ell-m}}\right].$

Criterion (2.3) thus amounts to showing that $U^{(n,k)}_{n-k} = 1$ if $n = k$ and $U^{(n,k)}_{n-k} = 0$ if $1 \le k < n$.
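This reduction can be verified directly by computing the coefficients (2.13) at $\ell = n-k$ with exact rational arithmetic (sample rational parameters, for illustration only; note that $x$ plays no role in the coefficient itself):

```python
from fractions import Fraction as Fr

def poch(c, m):
    """Pochhammer symbol (c)_m = c (c+1) ... (c+m-1)."""
    p = Fr(1)
    for i in range(m):
        p *= c + i
    return p

def U(n, k, nu, be, ga):
    """Coefficient U^{(n,k)}_{n-k} of x^{n-k} in f_n(-x) g_k(x), per (2.13)."""
    l = n - k
    total = Fr(0)
    for m in range(l + 1):
        a = (-1)**m * poch(-(be + n)*nu, m) / (poch(-ga - n, m) * poch(Fr(1), m))
        fact_lm = poch(Fr(1), l - m)  # (l - m)!
        b = ((ga + k)/(be + k) * poch((be + k)*nu, l - m) / (fact_lm * poch(ga + k, l - m))
             + (be - ga)/(be + k) * poch((be + k)*nu, l - m) / (fact_lm * poch(1 + ga + k, l - m)))
        total += a * b
    return total

nu, be, ga = Fr(2, 7), Fr(1, 3), Fr(1, 5)
for n in range(1, 7):
    for k in range(1, n + 1):
        assert U(n, k, nu, be, ga) == (1 if n == k else 0)
```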
Let then $n \ge k$ and consider expression (2.13) applied to $\ell = n-k$; using the identities

$\frac{1}{(-\gamma-n)_m\,(\gamma+k)_{n-k-m}} = \frac{(-1)^m\,(\gamma+n-m)\,\Gamma(\gamma+k)}{\Gamma(1+\gamma+n)}, \qquad \frac{1}{(-\gamma-n)_m\,(1+\gamma+k)_{n-k-m}} = \frac{(-1)^m\,\Gamma(1+\gamma+k)}{\Gamma(1+\gamma+n)}$

and writing $(\gamma+k)\,\Gamma(\gamma+k) = \Gamma(1+\gamma+k)$, the term $\Gamma(1+\gamma+k)$ factors out and we obtain

(2.14) $U^{(n,k)}_{n-k} = \frac{\Gamma(1+\gamma+k)}{(\beta+k)\,\Gamma(1+\gamma+n)}\; X^{(n,k)}_{n-k}$

where the sum

$X^{(n,k)}_{n-k} = \sum_{m=0}^{n-k} \frac{(-(\beta+n)\nu)_m}{m!}\cdot\frac{(\beta+n-m)\,((\beta+k)\nu)_{n-k-m}}{(n-k-m)!}$

is independent of $\gamma$. Splitting the term $(\beta+n-m)$ of the second factor inside this sum into $(\beta+n)$ and $-m$, $X^{(n,k)}_{n-k}$ can be written as the sum of convolution terms

(2.15) $X^{(n,k)}_{n-k} = [x^{n-k}]\left\{(\beta+n)\cdot h(x;(\beta+n)\nu)\, h(x;-(\beta+k)\nu) - x\cdot\frac{\partial h}{\partial x}(x;(\beta+n)\nu)\, h(x;-(\beta+k)\nu)\right\}$

where we set $h(x;\lambda) = \sum_{m\ge 0} (-\lambda)_m\, x^m/m! = (1-x)^\lambda$ for $|x| < 1$; applying this definition of $h(x;\lambda)$ successively to the arguments $\lambda = (\beta+n)\nu$ and $\lambda = -(\beta+k)\nu$ then enables us to reduce (2.15) to

(2.16) $X^{(n,k)}_{n-k} = [x^{n-k}]\left\{(\beta+n)(1-x)^{(n-k)\nu} + (\beta+n)\nu\, x\,(1-x)^{(n-k)\nu-1}\right\} = (\beta+n)\,\frac{((k-n)\nu)_{n-k}}{(n-k)!} + (\beta+n)\nu\,\frac{(1+(k-n)\nu)_{n-k-1}}{(n-k-1)!}.$

After simplification, the latter expression eventually yields $X^{(n,k)}_{n-k} = (\beta+n)\,\delta_{n,k}$ (for $n = k$, note that the second term in (2.16) is zero since it involves the denominator $(-1)! = \infty$); the latter equality and (2.14) together provide $U^{(n,k)}_{n-k} = \delta_{n,k}$. The inversion condition (2.3) is therefore fulfilled for all $n, k \ge 1$. We conclude that the inverse relation (2.12) holds for any pair of sequences $(S_n)_{n\ge 1}$ and $(T_n)_{n\ge 1}$. □

3. Generating Functions
Functional relations are now derived for the Ordinary (resp. Exponential) Generating Functions, OGF's (resp. EGF's), of sequences related by the inversion formula.

3.1. Relations for OGF's.
When $\alpha = \gamma$, the inversion formula (2.12) translates into reciprocal relations for the OGF's of the related sequences $(S_n)_{n\in\mathbb{N}^*}$ and $(T_n)_{n\in\mathbb{N}^*}$; note that the restriction $\gamma \notin -\mathbb{N}$ in Theorem 2.2 cancels out when $\alpha = \gamma$. There is generally no such explicit relation, however, when $\alpha \ne \gamma$.

Corollary 3.1.
For given complex parameters $x, \nu, \beta$ and $\gamma$, let $(S_n)_{n\in\mathbb{N}^*}$ and $(T_n)_{n\in\mathbb{N}^*}$ be sequences related by the inversion formulas (2.12) of Theorem 2.2, that is, $S = B(x,\nu;\gamma,\beta,\gamma)\cdot T \iff T = A(x,\nu;\gamma,\beta,\gamma)\cdot S$. Denote by $G_S(z)$ and $G_T(z)$ the formal OGF's of $S$ and $T$, respectively. Defining the mapping $\Xi$ (depending on the parameters $x$ and $\nu$) by

(3.1) $\Xi(z) = \frac{z}{z-1}\left(\frac{1-z}{1-(1-x)z}\right)^{\nu},$

the relations

(3.2) $G_S(z) = \left[\frac{1-\nu}{1-z} + \frac{\nu}{1-(1-x)z}\right](-\Xi(z)/z)^{\beta}\,(1-z)^{\beta-\gamma}\cdot G_T(\Xi(z)),$

$\phantom{(3.2)}\; G_T(\xi) = \left[\frac{1-\nu}{1-\Omega(\xi)} + \frac{\nu}{1-(1-x)\Omega(\xi)}\right]^{-1}(-\Omega(\xi)/\xi)^{\beta}\,(1-\Omega(\xi))^{\gamma-\beta}\cdot G_S(\Omega(\xi))$

hold, where $\Omega$ is the inverse mapping: $\Xi(z) = \xi \iff z = \Omega(\xi)$.

Proof. a) From the definition (2.11) of the matrix $B(x,\nu;\gamma,\beta,\gamma)$, the generating function of the sequence $S = B(x,\nu;\gamma,\beta,\gamma)\cdot T$ is given by

$G_S(z) = \sum_{n\ge 1} z^n\left(\sum_{k=1}^{n} B_{n,k}(x,\nu;\gamma,\beta,\gamma)\, T_k\right) = \sum_{n\ge 1} z^n \sum_{k=1}^{n} T_k\,(-1)^k \binom{n+\gamma}{k+\gamma}\left[\frac{\gamma+k}{\beta+k}\, F(k-n,(\beta+k)\nu;\gamma+k;x) + \frac{\beta-\gamma}{\beta+k}\, F(k-n,(\beta+k)\nu;1+\gamma+k;x)\right],$

that is,

(3.3) $G_S(z) = \sum_{k\ge 1} (-z)^k\, T_k\left[\frac{\gamma+k}{\beta+k}\, U(\gamma+k,\, (\beta+k)\nu,\, \gamma+k;\, z, x) + \frac{\beta-\gamma}{\beta+k}\, U(\gamma+k,\, (\beta+k)\nu,\, 1+\gamma+k;\, z, x)\right]$

where we define $U(\alpha_1,\alpha_2,\alpha_3; z, x)$ by

(3.4) $U(\alpha_1,\alpha_2,\alpha_3; z, x) = \sum_{n\ge 0} \frac{(1+\alpha_1)_n}{n!}\, z^n\, F(-n, \alpha_2; \alpha_3; x).$

Applying definition (1.3) to the Hypergeometric polynomial $F(-n,\alpha_2;\alpha_3;x)$ and writing $(-n)_m = (-1)^m\, n!/(n-m)!$, then interchanging the summation order, an expression for $U$ is easily derived in terms of another Gauss Hypergeometric function, namely

(3.5) $U(\alpha_1,\alpha_2,\alpha_3; z, x) = (1-z)^{-1-\alpha_1}\cdot F\!\left(1+\alpha_1,\, \alpha_2;\, \alpha_3;\, \frac{xz}{z-1}\right).$
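Identity (3.5) can be checked numerically by truncating the series (3.4); the sketch below (illustrative parameter values, not from the paper) compares the truncation against the closed form computed with mpmath's non-terminating `hyp2f1`.

```python
from mpmath import mp, hyp2f1, rf, factorial, fsum

mp.dps = 30

def F_poly(n, a2, a3, x):
    """Terminating Gauss series F(-n, a2; a3; x)."""
    return fsum(rf(-n, m) * rf(a2, m) / (rf(a3, m) * factorial(m)) * x**m
                for m in range(n + 1))

def U_series(a1, a2, a3, z, x, terms=80):
    """Truncation of the series (3.4); terms decay geometrically for small |z|."""
    return fsum(rf(1 + a1, n) / factorial(n) * z**n * F_poly(n, a2, a3, x)
                for n in range(terms))

a1, a2, a3 = mp.mpf('0.7'), mp.mpf('0.45'), mp.mpf('1.2')
z, x = mp.mpf('0.1'), mp.mpf('0.3')
closed = (1 - z)**(-1 - a1) * hyp2f1(1 + a1, a2, a3, x*z/(z - 1))
assert abs(U_series(a1, a2, a3, z, x) - closed) < mp.mpf('1e-20')
```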
Setting $\alpha_1 = \gamma+k$, $\alpha_2 = (\beta+k)\nu$ and either $\alpha_3 = \gamma+k$ or $\alpha_3 = 1+\gamma+k$ in (3.5), the right-hand side of (3.3) then reads

$\frac{\gamma+k}{\beta+k}\, U(\gamma+k, (\beta+k)\nu, \gamma+k; z, x) + \frac{\beta-\gamma}{\beta+k}\, U(\gamma+k, (\beta+k)\nu, 1+\gamma+k; z, x) = (1-z)^{-1-\gamma-k}\left[\frac{\gamma+k}{\beta+k}\, F\!\left(1+\gamma+k,\, (\beta+k)\nu;\, \gamma+k;\, \frac{xz}{z-1}\right) + \frac{\beta-\gamma}{\beta+k}\, F\!\left(1+\gamma+k,\, (\beta+k)\nu;\, 1+\gamma+k;\, \frac{xz}{z-1}\right)\right]$

which, after invoking the known identities $F(\alpha,\beta;\alpha;x) = (1-x)^{-\beta}$ together with $F(1+\alpha,\beta;\alpha;x) = (1-x)^{-\beta} + \beta x(1-x)^{-1-\beta}/\alpha$, further simplifies to

$(1-z)^{-1-\gamma-k}\left[\left\{\frac{\gamma+k}{\beta+k}\left(1-\frac{xz}{z-1}\right)^{-(\beta+k)\nu} + \nu\,\frac{xz}{z-1}\left(1-\frac{xz}{z-1}\right)^{-1-(\beta+k)\nu}\right\} + \frac{\beta-\gamma}{\beta+k}\left(1-\frac{xz}{z-1}\right)^{-(\beta+k)\nu}\right] = (1-z)^{-1-\gamma-k}\left(1-\frac{xz}{z-1}\right)^{-(\beta+k)\nu}\left(1-\frac{\nu xz}{1-(1-x)z}\right).$

Replacing the latter in the right-hand side of (3.3), the expression of $G_S(z)$ then reduces to

$G_S(z) = \frac{1}{1-z}\left(1-\frac{\nu xz}{1-(1-x)z}\right)(1-z)^{-\gamma}\left(1-\frac{xz}{z-1}\right)^{-\beta\nu} \times \sum_{k\ge 1}\left(\frac{-z}{1-z}\left(1-\frac{xz}{z-1}\right)^{-\nu}\right)^{k} T_k,$

that is,

(3.6) $G_S(z) = \frac{1}{1-z}\left(1-\frac{\nu xz}{1-(1-x)z}\right)(-\Xi(z)/z)^{\beta}\,(1-z)^{\beta-\gamma}\, G_T(\Xi(z))$

with $\Xi(z)$ defined as in (3.1). Writing

$\frac{1}{1-z}\left[1-\frac{\nu xz}{1-(1-x)z}\right] = \frac{1-\nu}{1-z} + \frac{\nu}{1-z(1-x)},$

(3.6) eventually provides the first relation (3.2).

b) For any parameters $x$ and $\nu$, the function $z \mapsto \Xi(z)$ is analytic in a neighborhood of $z = 0$, with $\Xi(0) = 0$ and $\Xi(z) \sim -z$ as $z \to 0$, hence $\Xi'(0) = -1 \ne 0$. By the Implicit Function Theorem, $\Xi$ has an analytic inverse $\Omega : \xi \mapsto \Omega(\xi)$ in a neighborhood of $\xi = 0$; the inversion of the first relation (3.2) consequently provides the second relation (3.2), as claimed. □

Relations (3.2) between formal generating series can also be understood as a functional identity between the analytic functions $z \mapsto G_S(z)$ and $z \mapsto G_T(z)$ in some neighborhood of the origin $z = 0$ in the complex plane. Now, Corollary 3.1 can be supplemented by making explicit the inverse mapping $\Omega$ involved in the second relation (3.2). To this end, we state some preliminary properties (in the sequel, $\log$ will denote the determination of the logarithm in the complex plane cut along the negative semi-axis $]-\infty, 0]$, with $\log(1) = 0$).
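As a numerical illustration of Corollary 3.1 (sample parameter values and a finitely supported sequence $T$, for illustration only), one can compare both sides of the first relation (3.2) on truncated series:

```python
from mpmath import mp, gamma, factorial, rf, fsum

mp.dps = 40

def F_poly(n, a2, a3, x):
    """Terminating Gauss series F(-n, a2; a3; x)."""
    return fsum(rf(-n, m) * rf(a2, m) / (rf(a3, m) * factorial(m)) * x**m
                for m in range(n + 1))

def B_entry(n, k, x, nu, be, ga):
    """B_{n,k}(x, nu; gamma, beta, gamma) from (2.11) with alpha = gamma."""
    binom = gamma(n + ga + 1) / (gamma(k + ga + 1) * factorial(n - k))
    return (-1)**k * binom * ((ga + k)/(be + k) * F_poly(n - k, (be + k)*nu, ga + k, x)
                              + (be - ga)/(be + k) * F_poly(n - k, (be + k)*nu, 1 + ga + k, x))

x, nu, be, ga = mp.mpf('0.3'), mp.mpf('0.4'), mp.mpf('0.25'), mp.mpf('0.6')
T = {1: mp.mpf(1), 2: mp.mpf('0.5')}   # finitely supported sequence T
N = 40                                  # truncation order for G_S
z = mp.mpf('0.05')                      # small |z|: truncation error ~ z^{N+1}

S = [fsum(B_entry(n, k, x, nu, be, ga) * T[k] for k in T if k <= n)
     for n in range(1, N + 1)]
G_S = fsum(S[n - 1] * z**n for n in range(1, N + 1))

Xi = z/(z - 1) * ((1 - z)/(1 - (1 - x)*z))**nu
G_T = fsum(T[k] * Xi**k for k in T)
prefac = ((1 - nu)/(1 - z) + nu/(1 - (1 - x)*z)) * (-Xi/z)**be * (1 - z)**(be - ga)
assert abs(G_S - prefac * G_T) < mp.mpf('1e-25')
```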
Lemma 3.1.
Let $R(\nu) = |e^{-\psi(\nu)}|$ where

$\psi(\nu) = \begin{cases} (1-\nu)\log(1-\nu) + \nu\log(-\nu), & \nu \in \mathbb{C}\setminus[0,+\infty[, \\ (1-\nu)\log(1-\nu) + \nu\log(\nu), & \nu \in \mathbb{R},\; 0 \le \nu < 1, \\ (1-\nu)\log(\nu-1) + \nu\log(\nu), & \nu \in \mathbb{R},\; \nu > 1. \end{cases}$

The power series

$\boldsymbol{\Sigma}(w) = \sum_{b\ge 1} \frac{\Gamma(b(1-\nu))}{\Gamma(b)\,\Gamma(1-b\nu)}\cdot w^b, \quad |w| < R(\nu),$

is given by

(3.7) $\boldsymbol{\Sigma}(w) = \frac{\Theta(w)-1}{\nu\,\Theta(w)+1-\nu}$

where $\Theta : w \mapsto \Theta(w)$ denotes the unique analytic solution (depending on $\nu$) to the implicit equation

(3.8) $1 - \Theta + w\cdot\Theta^{1-\nu} = 0, \quad |w| < R(\nu),$

verifying $\Theta(0) = 1$. The proof of Lemma 3.1 is detailed in Appendix 5.2.
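A quick numerical check of (3.7)-(3.8) (illustrative values $\nu = -1/2$ and $w = 0.1$, well inside $|w| < R(\nu) \approx 0.38$, not from the paper):

```python
from mpmath import mp, gamma, findroot, fsum

mp.dps = 30

nu = mp.mpf('-0.5')
w = mp.mpf('0.1')

# Partial sum of the power series Sigma(w); terms decay like (|w|/R(nu))^b here.
sigma = fsum(gamma(b*(1 - nu)) / (gamma(b) * gamma(1 - b*nu)) * w**b
             for b in range(1, 120))

# Theta(w): the solution of 1 - Theta + w * Theta^(1-nu) = 0 near Theta(0) = 1.
theta = findroot(lambda t: 1 - t + w * t**(1 - nu), mp.mpf(1))

assert abs(sigma - (theta - 1) / (nu*theta + 1 - nu)) < mp.mpf('1e-20')
```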
Corollary 3.2.
For all $\nu \in \mathbb{C}$ and $x \ne 0$, the inverse mapping $\Omega$ of $\Xi$ defined in (3.1) can be expressed by

(3.9) $\Omega(\xi) = \frac{\boldsymbol{\Sigma}(x\xi)}{(1-x(1-\nu))\,\boldsymbol{\Sigma}(x\xi) - x}, \quad |\xi| < \frac{R(\nu)}{|x|},$

in terms of the power series $\boldsymbol{\Sigma}(\cdot)$ defined in Lemma 3.1.

Proof. (i)
The homographic transform $h : z \mapsto \theta$ with $\theta = (1-z)/(1-z(1-x))$ is an involution, with inverse $h^{-1}$ given by

(3.10) $z = h^{-1}(\theta) = \frac{1-\theta}{1-\theta(1-x)}.$

Let then $\xi = \Xi(z)$ with the function $\Xi$ defined as in (3.1); we first claim that the corresponding $\theta = h(z)$ equals $\theta = \Theta(x\xi)$, where $\Theta$ is the function defined by the implicit equation (3.8). In fact, definition (3.1) for $\Xi$ and expression (3.10) for $z$ in terms of $\theta$ together entail

$\xi = \Xi(z) = \frac{z}{z-1}\,\theta^{\nu} = \frac{\theta-1}{x\,\theta}\,\theta^{\nu}$

and the two sides of the latter equalities give $1 - \theta + x\xi\,\theta^{1-\nu} = 0$, hence the identity $\theta = \Theta(x\xi)$, as claimed.

(ii) The corresponding inverse $z = \Omega(\xi)$ can now be expressed as follows: equality (3.7) applied to $w = x\xi$ can first be solved for $\Theta(x\xi)$, giving

$\Theta(x\xi) = \frac{1+(1-\nu)\,\boldsymbol{\Sigma}(x\xi)}{1-\nu\,\boldsymbol{\Sigma}(x\xi)};$

it then follows from (3.10) and this expression of $\Theta(x\xi)$ that

$z = \Omega(\xi) = \frac{1-\Theta(x\xi)}{1-(1-x)\,\Theta(x\xi)},$

which easily reduces to formula (3.9). □

3.2. Relation for EGF's.
We now turn to an identity between the exponential generating functions of related sequences $S$ and $T$.

Corollary 3.3.
When $\alpha = 0$ and given sequences $S$ and $T$ related by the inversion formulae $S = B(x,\nu;0,\beta,\gamma)\cdot T \iff T = A(x,\nu;0,\beta,\gamma)\cdot S$, the EGF $G^*_S$ of the sequence $S$ can be expressed by

(3.11) $G^*_S(z) = \exp(z)\cdot\sum_{k\ge 1} \frac{(-1)^k\, T_k\, z^k}{k!}\left[\frac{\gamma+k}{\beta+k}\,\Phi((\beta+k)\nu;\, \gamma+k;\, -xz) + \frac{\beta-\gamma}{\beta+k}\,\Phi((\beta+k)\nu;\, 1+\gamma+k;\, -xz)\right]$

for all $z \in \mathbb{C}$, where $\Phi(\lambda;\mu;\cdot)$ denotes the Confluent Hypergeometric function with parameters $\lambda$, $\mu \notin -\mathbb{N}$.

Proof.
For $\alpha = 0$, a calculation similar to that of Corollary 3.1 gives

(3.12) $G^*_S(z) = \sum_{n\ge 1} \frac{z^n}{n!}\left(\sum_{k=1}^{n} B_{n,k}(x,\nu;0,\beta,\gamma)\, T_k\right) = I + J$

where the terms $I$ and $J$, after interchanging the summation order between $n$ and $k$ and using the index change $m = n-k$, can be written as

$I = \sum_{k\ge 1} \frac{(-1)^k\, T_k\, z^k}{k!}\,\frac{\gamma+k}{\beta+k}\, R^{(1)}_k(z,x;\beta,\gamma), \qquad J = \sum_{k\ge 1} \frac{(-1)^k\, T_k\, z^k}{k!}\,\frac{\beta-\gamma}{\beta+k}\, R^{(2)}_k(z,x;\beta,\gamma)$

respectively, with

$R^{(1)}_k(z,x;\beta,\gamma) = \sum_{m\ge 0} \frac{z^m}{m!}\, F(-m, (\beta+k)\nu; \gamma+k; x), \qquad R^{(2)}_k(z,x;\beta,\gamma) = \sum_{m\ge 0} \frac{z^m}{m!}\, F(-m, (\beta+k)\nu; 1+\gamma+k; x).$

Writing $F(-m,\eta;\zeta;x) = m!\sum_{j\le m} (\eta)_j(-x)^j/\{j!\,(m-j)!\,(\zeta)_j\}$ after definition (1.3) for any $\eta$ and $\zeta$, the latter sum $R^{(1)}_k(z,x;\beta,\gamma)$ reduces to

$R^{(1)}_k(z,x;\beta,\gamma) = \sum_{j\ge 0} \frac{((\beta+k)\nu)_j}{(\gamma+k)_j\, j!}\,(-x)^j \sum_{m\ge j} \frac{z^m}{(m-j)!} = e^z \sum_{j\ge 0} \frac{((\beta+k)\nu)_j}{(\gamma+k)_j\, j!}\,(-xz)^j;$

from the expansion of $\Phi((\beta+k)\nu; \gamma+k; -xz)$ in powers of $-xz$, we then obtain $R^{(1)}_k(z,x;\beta,\gamma) = e^z\,\Phi((\beta+k)\nu; \gamma+k; -xz)$, and similarly for $R^{(2)}_k$ with $\gamma+k$ replaced by $1+\gamma+k$. Applying these identities to the sums $I$ and $J$ above, equality (3.12) provides (3.11). □

4. Conclusions
As argued in the Introduction, the explicit inversion of the five-parameter family of lower-triangular matrices $A(x,\nu;\alpha,\beta,\gamma)$ has been motivated by the resolution of the linear system (1.1), whose coefficients depend on a specific family of Gauss Hypergeometric polynomials. Other important applications of inversion formulas involving Gauss Hypergeometric polynomials (such as Jacobi, Chebyshev, Ultraspherical, etc.) can also be found in weighted quadrature rules [12], whose errors can be controlled by some generalized classical inequalities [13].

The general inversion criterion stated in Theorem 2.1 could possibly be applied to other examples of so-called independent sequences $(a_m)$ and $(b_m)$ in order to obtain new instances of inversion formulas. Similarly, remarkable functional identities can be derived through Corollary 3.1 for OGF's. As to Corollary 3.3 for EGF's, a further application of relation (3.11) to the specific matrix (1.5) seems promising, as it can provide interesting integral representations for the associated EGF $G^*_E$ of the solution $E = (E_k)_{k\ge 1}$. This is an object of forthcoming study.

References

[1] Guillemin F, Quintuna Rodriguez VK, Simonian A, Nasri R.
Sojourn Time in a $M^{[X]}/M/1$ Processor Sharing Queue with Batch Arrivals (II). arXiv preprint arXiv:2006.02198; 2020.
[2] Olver FW, Lozier DW, Boisvert RF, et al. (ed.).
NIST Handbook of Mathematical Functions. Cambridge University Press; 2010.
[3] Krattenthaler C.
A New Matrix Inverse. Proceedings of the American Mathematical Society. 1996; 124: 47-59.
[4] Schlosser M.
Multidimensional Matrix Inversions and $A_r$ and $D_r$ Basic Hypergeometric Series. The Ramanujan Journal. 1997; 243-274.
[5] Costabile FA, Longo E.
Algebraic Theory of Appell Polynomials with Application to General Interpolation Problem. Chapter 2 in Linear Algebra. IntechOpen; 2012. http://dx.doi.org/10.5772/46482.
[6] Cagliero L, Koornwinder TH.
Explicit Matrix Inverses for Lower Triangular Matrices with Entries Involving Jacobi Polynomials. Journal of Approximation Theory. 2015; 193: 20-38.
[7] Aldenhoven N.
Explicit Matrix Inverses for Lower Triangular Matrices with Entries Involving Continuous $q$-ultraspherical Polynomials. Journal of Approximation Theory. 2015; 199: 1-12.
[8] Chaggara H, Koepf W. On Linearization Coefficients of Jacobi Polynomials. Applied Mathematics Letters. 2010; 23: 609-614.
[9] Foupouagnigni M, Koepf W, Tcheutia DD.
Connection and Linearization Coefficients of the Askey-Wilson Polynomials. Journal of Symbolic Computation. 2013; 53: 96-118.
[10] Erdelyi A.
Higher Transcendental Functions. Vol. 1. New York: McGraw-Hill; 1981.
[11] Gradshteyn IS, Ryzhik IM.
Table of Integrals, Series and Products. Academic Press; 2007.
[12] Eslahchi MR, Dehghan M, Masjed-Jamei M.
On Numerical Improvement of the First Kind Gauss-Chebyshev Quadrature Rules. Applied Mathematics and Computation. 2005; 165: 5-21.
[13] Masjed-Jamei M, Dragomir SS, Srivastava HM.
Some Generalizations of the Cauchy-Schwarz and the Cauchy-Bunyakovsky Inequalities Involving Four Free Parameters and their Applications. Mathematical and Computer Modelling. 2009; 49: 1960-1968.
[14] Polya G, Szego G.
Problems and Theorems in Analysis. Vol. I. Springer Science & Business Media; 1972.

5. Appendix
5.1. Proof of Lemma 2.1. a)
By the reflection formula $\Gamma(z)\Gamma(1-z) = \pi/\sin(\pi z)$, $z \notin \mathbb{Z}$ [2, Sect. 5.5.3], applied to the argument $z = r-\mu$, the generic term $d_r(\lambda,\mu)$ of the sum $D_N(\lambda,\mu)$ equivalently reads

$d_r(\lambda,\mu) = \frac{(-1)^r}{\Gamma(1+r-\lambda)\,\Gamma(1-r+\mu)} = -\frac{\sin(\pi\mu)}{\pi}\,\frac{\Gamma(r-\mu)}{\Gamma(1+r-\lambda)}$

and Stirling's formula [2, Sect. 5.11.3] entails that $d_r(\lambda,\mu) = O(r^{\lambda-\mu-1})$ for large $r$; the series $\sum_{r\ge 0} d_r(\lambda,\mu)$ is thus convergent if and only if $\mathrm{Re}(\mu) > \mathrm{Re}(\lambda)$. Write then the finite sum $D_N(\lambda,\mu)$ as the difference

$\sum_{r=0}^{+\infty} \frac{(-1)^r}{\Gamma(1+r-\lambda)\Gamma(1-r+\mu)} - \sum_{r=N}^{+\infty} \frac{(-1)^r}{\Gamma(1+r-\lambda)\Gamma(1-r+\mu)} = \sum_{r=0}^{+\infty} \frac{(-1)^r}{\Gamma(1+r-\lambda)\Gamma(1-r+\mu)} - \sum_{r=0}^{+\infty} \frac{(-1)^{r+N}}{\Gamma(1+r+N-\lambda)\Gamma(1-r-N+\mu)};$

applying similarly the reflection formula to the argument $z = r-\mu+N$ for the second sum, we obtain

$D_N(\lambda,\mu) = \frac{\sin(\pi\mu)}{\pi}\left[\sum_{r=0}^{+\infty} \frac{\Gamma(r-\mu+N)}{\Gamma(1+r+N-\lambda)} - \sum_{r=0}^{+\infty} \frac{\Gamma(r-\mu)}{\Gamma(1+r-\lambda)}\right] = \frac{\sin(\pi\mu)}{\pi}\left[\sum_{r=0}^{+\infty} \frac{(N-\mu)_r\,\Gamma(N-\mu)}{(1+N-\lambda)_r\,\Gamma(1+N-\lambda)} - \sum_{r=0}^{+\infty} \frac{(-\mu)_r\,\Gamma(-\mu)}{(1-\lambda)_r\,\Gamma(1-\lambda)}\right]$

when introducing Pochhammer symbols of order $r$, hence

$D_N(\lambda,\mu) = \frac{\sin(\pi\mu)}{\pi}\left[\frac{\Gamma(N-\mu)}{\Gamma(1+N-\lambda)}\, F(1, N-\mu; 1+N-\lambda; 1) - \frac{\Gamma(-\mu)}{\Gamma(1-\lambda)}\, F(1, -\mu; 1-\lambda; 1)\right]$

in terms of the Hypergeometric function $F$. Now, recall the identity [11, Sect. 9.122.1]

(5.1) $F(\alpha,\beta;\gamma;1) = \frac{\Gamma(\gamma)\,\Gamma(\gamma-\alpha-\beta)}{\Gamma(\gamma-\alpha)\,\Gamma(\gamma-\beta)}, \quad \mathrm{Re}(\gamma) > \mathrm{Re}(\alpha+\beta);$

when applying (5.1) to the values $\alpha = 1$, $\beta = N-\mu$, $\gamma = 1+N-\lambda$ (resp. $\alpha = 1$, $\beta = -\mu$, $\gamma = 1-\lambda$), the latter sum $D_N(\lambda,\mu)$ consequently reduces to

(5.2) $D_N(\lambda,\mu) = \frac{\sin(\pi\mu)}{\pi}\,\frac{\Gamma(\mu-\lambda)}{\Gamma(1-\lambda+\mu)}\left[\frac{\Gamma(N-\mu)}{\Gamma(N-\lambda)} - \frac{\Gamma(-\mu)}{\Gamma(-\lambda)}\right], \quad \mathrm{Re}(\mu) > \mathrm{Re}(\lambda).$
By the reflection formula for the function $\Gamma$ again, we have

$\Gamma(N-\mu)\,\Gamma(1-N+\mu) = -\frac{(-1)^N\,\pi}{\sin(\pi\mu)}, \qquad \Gamma(-\mu)\,\Gamma(1+\mu) = -\frac{\pi}{\sin(\pi\mu)},$

so that expression (5.2) eventually yields

$D_N(\lambda,\mu) = -\frac{\Gamma(\mu-\lambda)}{\Gamma(1-\lambda+\mu)}\left[\frac{(-1)^N}{\Gamma(N-\lambda)\Gamma(1-N+\mu)} - \frac{1}{\Gamma(-\lambda)\Gamma(1+\mu)}\right] = \frac{1}{\lambda-\mu}\left[\frac{(-1)^N}{\Gamma(N-\lambda)\Gamma(1-N+\mu)} - \frac{1}{\Gamma(-\lambda)\Gamma(1+\mu)}\right],$

which states the first identity (2.4) for $\mathrm{Re}(\mu) > \mathrm{Re}(\lambda)$.

b) The reflection formula for $\Gamma$ applied to $z = r-\lambda$ enables us to write

$D_N(\lambda,\lambda) = \sum_{r=0}^{N-1} \frac{(-1)^r}{\Gamma(1+r-\lambda)\,\Gamma(1-r+\lambda)} = -\frac{\sin(\pi\lambda)}{\pi}\sum_{r=0}^{N-1} \frac{\Gamma(r-\lambda)}{\Gamma(1+r-\lambda)} = -\frac{\sin(\pi\lambda)}{\pi}\sum_{r=0}^{N-1} \frac{1}{r-\lambda} = \frac{\sin(\pi\lambda)}{\pi}\left[\psi(-\lambda) - \psi(N-\lambda)\right]$

after the expansion formula [2, Chap. 5, Sect. 5.7.6] for the function $\psi$, and the second identity (2.4) for $\mu = \lambda$ follows.

c) The first identity (2.4), stated for $\mathrm{Re}(\mu) > \mathrm{Re}(\lambda)$, defines an analytic function of the variables $\lambda \in \mathbb{C}$ and $\mu \in \mathbb{C}$ for $\mu \ne \lambda$; besides, it is easily verified that this function has the limit given by $D_N(\lambda,\lambda)$ when $\mu \to \lambda$. On the other hand, the finite sum $D_N(\lambda,\mu)$ itself defines an entire function of both variables $\lambda \in \mathbb{C}$ and $\mu \in \mathbb{C}$; by analytic continuation, identity (2.4) consequently holds for any pair $(\lambda,\mu) \in \mathbb{C}\times\mathbb{C}$. ■

5.2. Proof of Lemma 3.1. a)
We first determine the convergence radius of the power series $\boldsymbol{\Sigma}(w)$ in terms of the complex parameter $\nu$. For large $b$:

• if $1-\nu \notin\, ]-\infty,0]$ and $-\nu \notin\, ]-\infty,0]$, that is, $\nu \in \mathbb{C}\setminus[0,+\infty[$, the generic term $\sigma_b$ of this series is asymptotic to

$\sigma_b = \frac{\Gamma(b(1-\nu))}{\Gamma(b)\,\Gamma(1-b\nu)} = -\frac{1}{\nu}\cdot\frac{\Gamma(b(1-\nu))}{b!\;\Gamma(-b\nu)} \sim \frac{e^{b\,\varphi_-(\nu)}}{\sqrt{2\pi(-\nu)(1-\nu)\, b}}$

after Stirling's formula $\Gamma(z) \sim \sqrt{2\pi}\, e^{z\log z - z}/\sqrt{z}$ for large $z$ with $|\arg(z)| \le \pi-\eta$, $\eta > 0$, where $\varphi_-(\nu) = (1-\nu)\log(1-\nu) + \nu\log(-\nu)$;

• if $1-\nu \notin\, ]-\infty,0]$ and $\nu \in [0,+\infty[$ (the parameter $\nu$ is consequently real), that is, $0 \le \nu < 1$, write $\Gamma(1-b\nu) = \pi/[\sin(\pi b\nu)\,\Gamma(b\nu)]$ after the reflection formula, so that the generic term $\sigma_b$ is now asymptotic to

$\sigma_b = \frac{\Gamma(b(1-\nu))\,\Gamma(b\nu)}{\pi\,\Gamma(b)}\,\sin(\pi b\nu) \sim \sqrt{\frac{2}{\pi\,\nu(1-\nu)}}\;\frac{\sin(\pi b\nu)}{\sqrt{b}}\; e^{b\,\varphi_0(\nu)}$

after Stirling's formula (ibid.), where $\varphi_0(\nu) = (1-\nu)\log(1-\nu) + \nu\log(\nu)$;

• finally, if $\nu-1 \in [0,+\infty[$, that is, if $\nu > 1$, write $\Gamma(1-b\nu) = \pi/[\sin(\pi b\nu)\,\Gamma(b\nu)]$ together with $\Gamma(b(1-\nu)) = \pi/[\sin(\pi b(1-\nu))\,\Gamma(1-b(1-\nu))]$ after the reflection formula, so that the generic term $\sigma_b$ is asymptotic to

$\sigma_b = (-1)^{b-1}\,\frac{\Gamma(b\nu)}{\Gamma(b)\,\Gamma(1-b(1-\nu))} \sim \frac{(-1)^{b-1}}{\sqrt{2\pi\,\nu(\nu-1)\, b}}\; e^{b\,\varphi_+(\nu)}$

after Stirling's formula, where $\varphi_+(\nu) = (1-\nu)\log(\nu-1) + \nu\log(\nu)$.

b) By the latter discussion, it therefore follows that the power series $\boldsymbol{\Sigma}(w)$ has the finite convergence radius $R(\nu) = |e^{-\psi(\nu)}|$ with $\psi(\nu) = \varphi_-(\nu)$, $\psi(\nu) = \varphi_0(\nu)$ or $\psi(\nu) = \varphi_+(\nu)$ according to the value of $\nu$, as stated in Lemma 3.1.

Now, by the above expression of $\sigma_b$ for $\nu \in \mathbb{C}\setminus[0,+\infty[$, write

(5.3) $\sigma_b = -\frac{1}{\nu}\cdot\frac{\Gamma(b(1-\nu))}{b!\;\Gamma(-b\nu)} = -\frac{1}{\nu}\cdot\binom{-1+b(1-\nu)}{b} = -\frac{1}{\nu}\cdot\binom{\alpha+b\beta}{b}$

for all $b \ge 1$, where we set $\alpha = -1$ and $\beta = 1-\nu$. From [14, Problem 216, p. 146 and p. 349], it is known that

(5.4) $1 + \sum_{b\ge 1} \binom{\alpha+b\beta}{b}\, w^b = \frac{\Theta(w)^{\alpha+1}}{(1-\beta)\,\Theta(w)+\beta}$

for any pair $\alpha$ and $\beta$, where $\Theta = \Theta(w)$ denotes the unique solution to the implicit equation $1-\Theta+w\,\Theta^{\beta} = 0$ with $\Theta(0) = 1$. By expression (5.3) and relation (5.4) applied to the specific values $\alpha = -1$ and $\beta = 1-\nu$, we can consequently assert that the series $\boldsymbol{\Sigma}(w)$ equals

$\boldsymbol{\Sigma}(w) = \sum_{b\ge 1} \sigma_b\, w^b = -\frac{1}{\nu}\left[\frac{1}{\nu\,\Theta(w)+1-\nu} - 1\right] = \frac{\Theta(w)-1}{\nu\,\Theta(w)+1-\nu}$

for $|w| < R(\nu)$, as claimed. The validity of equality (3.7) for real $\nu \in [0,+\infty[$ follows by analytic continuation. ■

Addresses:
(*) Orange Labs, OLN/NMP, Orange Gardens, 44 avenue de la République, CS 50010, 92326 Châtillon Cedex, France.
(**) Orange Labs Networks Lannion, 2 avenue Pierre Marzin, 22307 Lannion Cedex, France.