Denominator Bounds for Systems of Recurrence Equations using ΠΣ-Extensions
Johannes Middeke and Carsten Schneider
Abstract
We consider linear systems of recurrence equations whose coefficients are given in terms of indefinite nested sums and products covering, e.g., the harmonic numbers, hypergeometric products, q-hypergeometric products or their mixed versions. These linear systems are formulated in the setting of ΠΣ-extensions and our goal is to find a denominator bound (also known as universal denominator) for the solutions; i.e., a non-zero polynomial d such that the denominator of every solution of the system divides d. This is the first step in computing all rational solutions of such a rather general recurrence system. Once the denominator bound is known, the problem of solving for rational solutions is reduced to the problem of solving for polynomial solutions.

Johannes Middeke, Research Institute for Symbolic Computation (RISC), Johannes Kepler University, Altenbergerstraße 69, A-4040 Linz, Austria, e-mail: [email protected]

Carsten Schneider, Research Institute for Symbolic Computation (RISC), Johannes Kepler University, Altenbergerstraße 69, A-4040 Linz, Austria, e-mail: [email protected]

∗ Supported by the Austrian Science Fund (FWF) grant SFB F50 (F5009-N15)

Introduction

Difference equations are one of the central tools within symbolic summation. In one of its simplest forms, the telescoping equation plays a key role: given a sequence f(k), find a solution g(k) of

f(k) = g(k+1) − g(k).

Finding such a g(k) in a given ring/field or in an appropriate extension of it (in which the needed sequences are represented accordingly) yields a closed form of the indefinite sum ∑_{k=a}^{b} f(k) = g(b+1) − g(a). Slightly more generally, solving the creative telescoping and, more generally, the parameterized telescoping equation enables one to search for linear difference equations of definite sums. Finally, linear recurrence solvers enhance the summation toolbox to find closed form solutions of definite sums.
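As a concrete instance of the telescoping equation (a standard textbook example, not taken from this article): for f(k) = k·k! one may take g(k) = k!, since (k+1)! − k! = k·k!. A quick numerical check in Python:

```python
from math import factorial

def f(k):
    # summand: f(k) = k * k!
    return k * factorial(k)

def g(k):
    # telescoper: g(k) = k!, so that g(k+1) - g(k) = f(k)
    return factorial(k)

# the telescoping equation f(k) = g(k+1) - g(k)
assert all(f(k) == g(k + 1) - g(k) for k in range(25))

# and the resulting closed form: sum_{k=a}^{b} f(k) = g(b+1) - g(a)
a, b = 1, 12
assert sum(f(k) for k in range(a, b + 1)) == g(b + 1) - g(a)
print("telescoping verified")
```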
This interplay between the different algorithms to solve difference equations has been worked out in [2, 29, 56, 41, 42, 40] for hypergeometric sums and has been improved, e.g., by difference ring algorithms [33, 22, 51, 53, 54] to the class of nested sums over hypergeometric products, q-hypergeometric products or their mixed versions, or by holonomic summation algorithms [55, 25] to the class of sequences/functions that can be described by linear difference/differential equations.

More generally, coupled systems of difference equations are heavily used to describe problems coming from practical problem solving. E.g., big classes of Feynman integrals in the context of particle physics can be described by coupled systems of linear difference equations; for details and further references see [1]. Here one ends up at n Feynman integrals I_1(k), ..., I_n(k) which are solutions of a coupled system. More precisely, we are given matrices A_0(k), ..., A_l(k) ∈ K(k)^{m×n} with entries from the rational function field K(k), K a field containing the rational numbers, and a vector b(k) of length m in terms of nested sums over hypergeometric products such that the following coupled system holds:

A_l(k) (I_1(k+l), ..., I_n(k+l))^T + A_{l−1}(k) (I_1(k+l−1), ..., I_n(k+l−1))^T + ··· + A_0(k) (I_1(k), ..., I_n(k))^T = b(k).   (1)

Then one of the main challenges is to solve such a system, e.g., in terms of d'Alembertian [15, 16] or Liouvillian solutions [30, 43]. Furthermore, solving coupled systems arises as a crucial subproblem within holonomic summation algorithms [25]. In many situations, one proceeds as follows to get the solutions of such a coupled system: first decouple the system using any of the algorithms described in [17, 57, 16, 36, 21] such that one obtains a scalar linear recurrence in only one of the unknown functions, say I_1(k), and such that the remaining integrals I_2(k), ...,
I_n(k) can be expressed as linear combinations of the shifted versions of I_1(k) and the entries of b(k) over K(k). Thus solving the system (1) reduces to the problem of solving the derived linear recurrence and, if this is possible, of combining the solutions such that I_1(k) can be expressed by them. Given this solution, one then obtains for free also the solutions of the remaining integrals I_2(k), ..., I_n(k). In general this approach is often rather successful, since one can rely on the very well explored solving algorithms [3, 14, 15, 16, 4, 31, 30, 32, 22, 45, 49, 43] to determine, e.g., d'Alembertian and Liouvillian solutions of scalar linear recurrence relations, and one can heavily use summation algorithms [33, 50, 52, 53] to simplify the found solutions.

The main drawback of this rather general tactic of solving a decoupled system is efficiency. First, the decoupling algorithms themselves can be very costly; for further details see [21]. Second, the obtained scalar recurrences have high orders with rather huge coefficients, and the existing solving algorithms might utterly fail to find the desired solutions in reasonable time. Thus it is highly desirable to attack the original system (1) directly and to avoid any expensive reduction mechanisms and possible blow-ups to a big scalar equation. Restricting to the first-order case (m = n, l = 1), i.e., given A(t) from K(t)^{n×n} and b(t) ∈ K(t)^n, one seeks all solutions y(t) = (y_1(t), ..., y_n(t)) ∈ K(t)^n such that

y(t+1) − A(t) y(t) = b(t),   (2)

and one usually proceeds in three steps:

1. Compute a denominator bound, i.e., a d(t) ∈ K[t] \ {0} such that for any solution y(t) ∈ K(t)^n of (2) we have d(t) y(t) ∈ K[t]^n.
2. Given such a d(t), plugging y(t) = y′(t)/d(t) into (2) yields an adapted system for the unknown polynomial vector y′(t) ∈ K[t]^n. Now compute a degree bound, i.e., a b ∈ N such that the degrees of the entries of y′ are bounded by b.
3.
Finally, inserting the potential solution y′ = y_0 + y_1 t + ··· + y_b t^b into the adapted system yields a linear system in the components (y_{01}, ..., y_{0n}, ..., y_{b1}, ..., y_{bn}) ∈ K^{n(b+1)} of the unknown vectors y_0, ..., y_b ∈ K^n. Solving this system yields all y_0, ..., y_b ∈ K^n and thus all solutions y(t) ∈ K(t)^n of the original system (2).

For an improved version exploiting also ideas from [31] see [11]. Similarly, the q-rational case (i.e., t ↦ q t instead of t ↦ t +
1) has been elaborated in [5, 6]. In addition, the higher-order case (m = n, l ∈ N) has been considered in [12] for the rational case.

In this article, we push further the calculation of universal denominators (see reduction step 1) to the general difference field setting of ΠΣ-fields [33] and, more generally, to the framework of ΠΣ-extensions [33]. Here we will utilise, similarly as in [12, 9], algorithms from [20] to transform the coupled system in a preprocessing step to an appropriate form. Given this modified system, we succeed in generalising compact formulas of universal denominator bounds from [18, 24] to the general case of ΠΣ-fields. In particular, we generalise the available denominator bounds in the setting of ΠΣ-fields of [22, 45] from scalar difference equations to coupled systems. As a consequence, the earlier work on denominator bounding algorithms is covered in this general framework, and big parts of the q-rational, multibasic and mixed multibasic case [19] for higher-order linear systems are elaborated. More generally, these denominator bound algorithms enable one to search for solutions of coupled systems (1) where the matrices A_i(k) and the vector b(k) might consist of expressions in terms of indefinite nested sums and products and the solutions might be derived in terms of such sums and products. Furthermore, these algorithms can be used to tackle matrices A_i(k) which are not necessarily square. Solving such systems will play an important role for holonomic summation algorithms that work over such general difference fields [47]. In particular, the technologies described in the following can be seen as a first step towards new efficient solvers of coupled systems that arise frequently within the field of particle physics [1].

The outline of the article is as follows.
In Section 2 we present some basic properties of ΠΣ-theory and our main result concerning the computation of the aperiodic part of a universal denominator of coupled systems in a ΠΣ-extension. In Section 3 we present some basic facts on Ore polynomials, which we use as an algebraic model for recurrence operators, and introduce some basic definitions for matrices. With this set up, we show in Section 4 how the aperiodic part of a universal denominator can be calculated under the assumption that the coupled system is brought into a particular regularised form. This regularisation is carried out in Section 6, which relies on row reduction as introduced in Section 5. We present examples in Section 7 and conclude in Section 8 with a general method that enables one to search for solutions in the setting of ΠΣ-fields.

ΠΣ-Theory and the Main Result

In the following we model the objects in (1), i.e., the entries of A_0(k), ..., A_l(k) and of b(k), with elements from a field F. Further we describe the shift operation acting on these elements by a field automorphism σ: F → F. In short, we call such a pair (F, σ), consisting of a field equipped with a field automorphism, a difference field.

Example 1.
1. Consider the rational function field F = K(t) for some field K and the field automorphism σ: F → F defined by σ(c) = c for all c ∈ K and σ(t) = t + 1. (F, σ) is also called the rational difference field over K.
2. Consider the rational function field K = K′(q) over the field K′ and the rational function field F = K(t) over K. Further define the field automorphism σ: F → F by σ(c) = c for all c ∈ K and σ(t) = q t. (F, σ) is also called the q-rational difference field over K.
3. Consider the rational function field K = K′(q_1, ..., q_e) over the field K′ and the rational function field F = K(t_1, ..., t_e) over K. Further define the field automorphism σ: F → F by σ(c) = c for all c ∈ K and σ(t_i) = q_i t_i for all 1 ≤ i ≤ e. (F, σ) is also called the (q_1, ..., q_e)-multibasic rational difference field over K.
4. Consider the rational function field K = K′(q_1, ..., q_e) over the field K′ and the rational function field F = K(t_1, ..., t_e, t) over K. Further define the field automorphism σ: F → F by σ(c) = c for all c ∈ K, σ(t) = t + 1 and σ(t_i) = q_i t_i for all 1 ≤ i ≤ e. (F, σ) is also called the mixed (q_1, ..., q_e)-multibasic rational difference field over K.

More generally, we consider difference fields that are built by the following type of extensions. Let (F, σ) be a difference field, i.e., a field F together with an automorphism σ: F → F. Elements of F which are left fixed by σ are referred to as constants. (Throughout this article, all fields contain the rational numbers Q as a subfield.) We denote the set of all constants by const F = {c ∈ F | σ(c) = c}. A ΠΣ-extension (F(t), σ) of (F, σ) is given by the rational function field F(t) in the indeterminate t over F and an extension of σ to F(t) which can be built as follows: either
1. σ(t) = t + β for some β ∈ F \ {0} (a Σ-monomial), or
2.
σ(t) = α t for some α ∈ F \ {0} (a Π-monomial),

where in both cases we require that const F(t) = const F. More generally, we consider a tower (F(t_1)...(t_e), σ) of such extensions where the t_i are either Π-monomials or Σ-monomials adjoined to the field F(t_1)...(t_{i−1}) below. Such a construction is also called a ΠΣ-extension (F(t_1)...(t_e), σ) of (F, σ). If F(t_1)...(t_e) consists only of Π-monomials or only of Σ-monomials, it is also called a Π-extension or a Σ-extension, respectively. If F = const F, then (F(t_1)...(t_e), σ) is called a ΠΣ-field over F.

Note that all difference fields from Example 1 are ΠΣ-fields over K. Further note that ΠΣ-extensions enable one to model indefinite nested sums and products that may arise as rational expressions in the numerator and denominator. See [33] or [51] for examples of how that modelling works.

Let (F, σ) be an arbitrary difference field and (F(t), σ) be a ΠΣ-extension of (F, σ). In this work, we take a look at systems of the form

A_ℓ σ^ℓ y + ... + A_1 σ y + A_0 y = b   (3)

where A_0, ..., A_ℓ ∈ F[t]^{m×n} and b ∈ F[t]^m. Our long-term goal is to find all rational solutions of such a system, i.e., rational vectors y ∈ F(t)^n which satisfy (3), following the three steps presented in the introduction. In this article we look at the first step: compute a so-called denominator bound (also known as a universal denominator). This is a polynomial d ∈ F[t] \ {0} such that d y ∈ F[t]^n is polynomial for all solutions y of (3). Once that is done, we can simply substitute the denominator bound into the system and then it is sufficient to search for polynomial solutions. In future work, it will be a key challenge to derive such degree bounds; compare the existing results [33, 44, 46] for scalar equations. Degree bounds for the rational case (l =
1) and the q-rational case (l arbitrary) applied to the system (3) can be found in [8] and [37], respectively. Once a degree bound for the polynomial solutions is known, the latter problem translates to solving linear systems over F if F = const F. Otherwise, one can apply similar strategies as worked out in [33, 22, 49] to reduce the problem of finding polynomial solutions to the problem of solving coupled systems in the smaller field F. Further comments on this proposed machinery will be given in the conclusion.

In order to derive our denominator bounds for system (3), we rely heavily on the following concept [2, 22]. Let a, b ∈ F[t] \ {0} be two non-zero polynomials. We define the spread of a and b as

spread(a, b) = {k ≥ 0 | gcd(a, σ^k(b)) ∉ F}.
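To illustrate the remark above that a known degree bound reduces the search for polynomial solutions to linear algebra over the constants, here is a minimal sketch on a toy scalar recurrence (our own example, not a system from the text): y(t+1) − y(t) = 2t + 1 with assumed degree bound b = 2, whose polynomial solutions are y(t) = t² plus an arbitrary constant.

```python
from fractions import Fraction
from math import comb

b = 2                                            # assumed degree bound
rhs = [Fraction(1), Fraction(2), Fraction(0)]    # 2t + 1 as [c0, c1, c2]

# Matrix of the linear map y(t) -> y(t+1) - y(t) on polynomials of
# degree <= b: column j holds the coefficients of (t+1)^j - t^j.
M = [[Fraction(comb(j, i) - (1 if i == j else 0)) for j in range(b + 1)]
     for i in range(b + 1)]

# M is strictly upper triangular, so y0 is a free parameter (constants
# solve the homogeneous equation); the last row reads 0 = rhs[b], a
# solvability condition.  Solve for y1, ..., yb by back substitution.
y = [Fraction(0)] * (b + 1)
for i in range(b - 1, -1, -1):
    s = sum(M[i][j] * y[j] for j in range(i + 2, b + 1))
    y[i + 1] = (rhs[i] - s) / M[i][i + 1]

assert y == [0, 0, 1]        # i.e. y(t) = t^2 (up to an additive constant)
print("polynomial solution coefficients:", y)
```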
In this regard note that σ^k(b) ∈ F[t] for any k ∈ Z and b ∈ F[t]. In particular, if b is an irreducible polynomial, then so is σ^k(b). The dispersion of a and b is defined as the maximum of the spread, i.e., we declare disp(a, b) = max spread(a, b), where we use the conventions max ∅ = −∞ and max S = ∞ if S is infinite. As an abbreviation we will sometimes use spread(a) = spread(a, a) and similarly disp(a) = disp(a, a). We call a ∈ F[t] periodic if disp(a) is infinite and aperiodic if disp(a) is finite.

It is shown in [33, 22] (see also [44, Theorem 2.5.5]) that in the case of Σ-extensions the spread of two polynomials is always a finite set (possibly empty). For Π-extensions the spread is certainly infinite if t | a and t | b, since σ^k(t) | t for all k. It can be shown in [33, 22] (see also [44, Theorem 2.5.5]), however, that this is the only problematic case. Summarising, the following property holds.

Lemma 1.
Let (F(t), σ) be a ΠΣ-extension of (F, σ) and a ∈ F[t] \ {0}. Then a is periodic if and only if t is a Π-monomial and t | a.

This motivates the following definition.
Definition 1.
Let (F(t), σ) be a ΠΣ-extension of (F, σ) and a ∈ F[t] \ {0}. We define the periodic part of a as

per(a) = 1 if t is a Σ-monomial, and per(a) = t^m if t is a Π-monomial, where m ∈ N is maximal such that t^m | a;

and the aperiodic part as ap(a) = a / per(a). Note that ap(a) = a if t is a Σ-monomial. In this article we focus on the problem of computing the aperiodic part of a denominator bound d of the system (3). Before we state our main result, we have to clarify what we mean by the denominator of a vector.

Definition 2.
Let y ∈ F(t_1, ..., t_e)^n be a rational column vector. We say that y = d^{−1} z is a reduced representation for y if d ∈ F[t_1, ..., t_e] \ {0} and z = (z_1, ..., z_n) ∈ F[t_1, ..., t_e]^n are such that gcd(z, d) := gcd(z_1, ..., z_n, d) = 1.

Theorem 1.
Let (F(t), σ) be a ΠΣ-extension of (F, σ) and let A_0, ..., A_l ∈ F[t]^{m×n}, b ∈ F[t]^m. If one can compute the dispersion of polynomials in F[t], then one can compute the aperiodic part of a denominator bound of (3). This means that one can compute a d ∈ F[t] \ {0} such that for any solution q^{−1} p ∈ F(t)^n of (3) with q^{−1} p being in reduced representation we have ap(q) | d. (If z is the zero vector, then the assumption gcd(z, d) = 1 forces d = 1.)

Note that such a d in Theorem 1 forms a complete denominator bound if t is a Σ-monomial. Otherwise, if t is a Π-monomial, there exists an m ∈ N such that t^m d is a denominator bound. Finding such an m algorithmically in the general ΠΣ-case is so far an open problem. For the q-rational case we refer to [37].

In order to prove Theorem 1, we will perform a preprocessing step and regularise the system (3) to a more suitable form (see Theorem 3 in Section 6); for similar strategies to accomplish such a regularisation see [12, 9]. Afterwards, we will apply Theorem 2 in Section 4, which is a generalisation of [18, 24]. Namely, besides computing the dispersion in F[t], one only has to perform certain σ- and gcd-computations in F[t] in order to derive the desired aperiodic part of the universal denominator bound.

Summarising, our proposed denominator bound method is applicable if the dispersion can be computed. To this end, we will elaborate under which assumptions the dispersion can be computed in F[t]. Define for f ∈ F \ {0} and k ∈ Z the following functions:

f_{(k,σ)} := f·σ(f)⋯σ^{k−1}(f) if k > 0;  1 if k = 0;  (σ^{−1}(f)⋯σ^{k}(f))^{−1} if k < 0,

f_{{k,σ}} := f_{(0,σ)} + f_{(1,σ)} + ··· + f_{(k−1,σ)} if k > 0;  0 if k = 0;  −(f_{(−1,σ)} + ··· + f_{(k,σ)}) if k < 0.

Then, analysing Karr's algorithm [33], one can extract the following (algorithmic) properties that are relevant to calculate the dispersion in ΠΣ-extensions; compare [34].

Definition 3. (F, σ) is weakly σ∗-computable if the following holds.
1. There is an algorithm that factors multivariate polynomials over F and that solves linear systems with multivariate rational functions over F.
2. (F, σ^r) is torsion free for all r ∈ Z, i.e., for all r ∈ Z, all k ∈ Z \ {0} and all g ∈ F \ {0}, the equality (σ^r(g)/g)^k = 1 implies σ^r(g)/g = 1.
3. Π-Regularity. Given f, g ∈ F with f not a root of unity, there is at most one n ∈ Z such that f_{(n,σ)} = g. There is an algorithm that finds, if possible, this n.
4. Σ-Regularity. Given k ∈ Z \ {0} and f, g ∈ F with f = 1 or f not a root of unity, there is at most one n ∈ Z such that f_{{n,σ^k}} = g. There is an algorithm that finds, if possible, this n.

Namely, we get the following result based on Karr's reduction algorithms.

Lemma 2.
Let (F(t), σ) be a ΠΣ-extension of (F, σ). Then the following holds.
1. If (F, σ) is weakly σ∗-computable, one can compute the spread and the dispersion of two polynomials a, b ∈ F[t] \ F.
2. If (F, σ) is weakly σ∗-computable, then (F(t), σ) is weakly σ∗-computable.

Proof. (1) By Lemma 1 of [45] the spread is computable if the shift equivalence problem is solvable. This is possible if (F, σ) is weakly σ∗-computable; see Corollary 1 of [34] (using heavily results of [33]). (2) holds by Theorem 1 of [34]. ⊓⊔
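For the rational difference field (σ(t) = t + 1 over Q) the notions of spread, dispersion and periodic part can be computed directly. Below is a brute-force sketch (our own illustration: it searches shifts only up to a fixed bound, whereas the algorithms behind Lemma 2 solve the shift equivalence problem exactly). Polynomials are coefficient lists over Q, lowest degree first.

```python
from fractions import Fraction
from math import comb

def trim(p):
    # drop trailing zero coefficients in place
    while p and p[-1] == 0:
        p.pop()
    return p

def pgcd(a, b):
    # Euclidean algorithm in Q[t]
    a = trim([Fraction(c) for c in a])
    b = trim([Fraction(c) for c in b])
    while b:
        while len(a) >= len(b) and a:          # a := a mod b
            c, d = a[-1] / b[-1], len(a) - len(b)
            for i, bc in enumerate(b):
                a[i + d] -= c * bc
            trim(a)
        a, b = b, a
    return a

def shift(p, k):
    # sigma^k applied to p: the coefficients of p(t + k)
    return trim([sum(Fraction(comb(j, i)) * Fraction(k) ** (j - i) * Fraction(p[j])
                     for j in range(i, len(p))) for i in range(len(p))])

def spread(a, b, bound=50):
    # {k >= 0 : gcd(a, sigma^k(b)) is non-constant}, searched up to `bound`
    return {k for k in range(bound + 1) if len(pgcd(a, shift(b, k))) > 1}

# a = t(t+3), b = t: gcd(a, sigma^k(b)) is non-trivial exactly for k in {0, 3}
a, b = [0, 3, 1], [0, 1]
assert spread(a, b) == {0, 3}          # hence disp(a, b) = 3

# If t were a Pi-monomial, per(a) = t^m with m the number of trailing
# zero low-order coefficients; here per(a) = t and ap(a) = t + 3.
m = next(i for i, c in enumerate(a) if c != 0)
assert m == 1
```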
Thus, by iterative application of Lemma 2, we end up at the following result, which supplements our Theorem 1.
Corollary 1.
Let (F(t), σ) be a ΠΣ-extension of (F, σ) where (F, σ) with F = G(t_1)...(t_e) is a ΠΣ-extension of a weakly σ∗-computable difference field (G, σ). Then the dispersion of two polynomials a, b ∈ F[t] \ F is computable.

Finally, we list some difference fields (G, σ) that one may choose for Corollary 1. Namely, the following ground fields (G, σ) are weakly σ∗-computable.
1. By [48] we may choose const G = G where G is a rational function field over an algebraic number field; note that (F, σ) is then a ΠΣ-field over G.
2. By [34], (G, σ) can be a free difference field over a constant field that is weakly σ∗-computable (see item 1).
3. By [35], (G, σ) can be a radical difference field over a constant field that is weakly σ-computable (see item 1).

Note that all the difference fields introduced in Example 1 are ΠΣ-fields which are weakly σ∗-computable if the constant field K is a rational function field over an algebraic number field (see item 1 in the previous paragraph), and thus the dispersion can be computed in such fields. For the difference fields given in Example 1 one may also use the optimised algorithms worked out in [19].

Using Theorem 1 we immediately obtain the following multivariate version in the setting of ΠΣ-extensions, which can be applied, for instance, to the multibasic and mixed multibasic rational difference fields defined in Example 1.

Corollary 2.
Let (E, σ) be a ΠΣ-extension of (F, σ) with E = F(t)(t_1)...(t_e) where σ(t_i) = α_i t_i + β_i (α_i ∈ F∗, β_i ∈ F) for 1 ≤ i ≤ e. Let A_0, ..., A_l ∈ E^{m×n}, b ∈ E^m. Then there is an algorithm that computes a d ∈ F[t_1, ..., t_e, t] \ {0} such that d′ := t_1^{m_1} ⋯ t_e^{m_e} d is a universal denominator bound of system (3) for some m_1, ..., m_e ∈ N, where m_i = 0 if t_i is a Σ-monomial. That is, for any solution y = q^{−1} p ∈ E^n of (3) in reduced representation we have that q | d′.

Proof. Note that one can reorder the generators in E = F(t, t_1, ..., t_e) without changing the constant field const E = const F. Hence for any i with 1 ≤ i ≤ e, (A_i(t_i), σ) is a ΠΣ-extension of (A_i, σ) with A_i = F(t)(t_1)...(t_{i−1})(t_{i+1})...(t_e). Thus for each i with 1 ≤ i ≤ e, we can apply Theorem 1 (more precisely, Theorems 3 and 2 below) to compute the aperiodic part d_i ∈ A_i[t_i] \ {0} of a denominator bound of (3). W.l.o.g. we may suppose that d_1, ..., d_e ∈ A := F[t, t_1, ..., t_e]; otherwise one clears denominators (for d_i one uses a factor from A_i). Finally, compute d := lcm(d_1, ..., d_e) in A. Suppose that d t_1^{m_1} ⋯ t_e^{m_e} is not a denominator bound for any choice m_1, ..., m_e ∈ N where, for 1 ≤ i ≤ e, m_i = 0 if t_i is a Σ-monomial. Then we find a solution y = q^{−1} p of (3) in reduced representation with p ∈ A^n and q ∈ A, and an irreducible h ∈ A \ F with h | q and h ∤ d, where h ≠ t_i for all i for which t_i is a Π-monomial. Let j with 1 ≤ j ≤ e be such that h ∈ A_j[t_j] \ A_j. Since d_j is the aperiodic part of a denominator bound w.r.t. t_j, and the case h = t_j is excluded if t_j is a Π-monomial, it follows that h w = d_j for some w ∈ A_j[t_j]. Write w = u v with u ∈ A and v ∈ A_j. Since d_j ∈ A, h w ∈ A and thus the factor v must be contained in h. But since h is irreducible in A, v ∈ F \ {0} and thus w ∈ A.
Hence h divides d_j and thus it divides also d = lcm(d_1, ..., d_e) in A, a contradiction. ⊓⊔

Ore Polynomials and Matrices

For this section, let (F, σ) be a fixed difference field. An alternative way of expressing the system (3) is to use operator notation. Formally, we consider the ring of Ore polynomials F(t)[σ] over the rational functions F(t) w.r.t. the automorphism σ and the trivial σ-derivation. Ore polynomials are named after Øystein Ore who first described them in his paper [39]. They provide a natural algebraic model for linear differential, difference, recurrence or q-difference operators (see, e.g., [39], [23], [26], [13] and the references therein).

We briefly recall the definition of Ore polynomials and refer to the aforementioned papers for details. As a set they consist of all polynomial expressions

a_ν σ^ν + ... + a_1 σ + a_0

with coefficients in F(t), where we regard σ as a variable. Addition of Ore polynomials works just as for regular polynomials. Multiplication, on the other hand, is governed by the commutation rule

σ · a = σ(a) · σ for all a ∈ F(t).

(Note that in the above equation σ appears in two different roles: as the Ore variable and as the automorphism applied to a.) Using the associative and distributive laws, this rule lets us compute products of arbitrary Ore polynomials. It is possible to show that this multiplication is well-defined and that F(t)[σ] is a (non-commutative) ring (with unity).

For an operator L = a_ν σ^ν + ... + a_0 ∈ F(t)[σ] we declare the application of L to a rational function α ∈ F(t) to be

L(α) = a_ν σ^ν(α) + ... + a_1 σ(α) + a_0 α.

Note that this turns F(t) into a left F(t)[σ]-module. We extend this to matrices of operators by setting L(α) = (∑_{j=1}^{n} L_{ij}(α_j))_i for a matrix L = (L_{ij})_{ij} ∈ F(t)[σ]^{m×n} and a vector of rational functions α = (α_j)_j ∈ F(t)^n. It is easy to see that the action of F(t)[σ]^{m×n} on F(t)^n is linear over const F.
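The commutation rule is all one needs to multiply Ore polynomials mechanically. A small sketch for the rational difference field (σ(t) = t + 1), restricting coefficients to polynomials in Q[t] for simplicity (our own illustration): an operator is a dict mapping each power of σ to a coefficient list.

```python
from fractions import Fraction
from math import comb

def shift_poly(p, k):
    # sigma^k applied to a polynomial coefficient list: p(t + k)
    return [sum(Fraction(comb(j, i)) * Fraction(k) ** (j - i) * Fraction(p[j])
                for j in range(i, len(p))) for i in range(len(p))]

def poly_mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            r[i + j] += Fraction(a) * Fraction(c)
    return r

def poly_add(p, q):
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(max(len(p), len(q)))]

def ore_mul(L, M):
    # product in F(t)[sigma]: (a sigma^i)(c sigma^j) = a sigma^i(c) sigma^(i+j)
    res = {}
    for i, a in L.items():
        for j, c in M.items():
            res[i + j] = poly_add(res.get(i + j, []),
                                  poly_mul(a, shift_poly(c, i)))
    return res

sigma = {1: [1]}        # the operator sigma
t_op = {0: [0, 1]}      # multiplication by t

# the commutation rule: sigma * t = (t + 1) * sigma ...
assert ore_mul(sigma, t_op) == {1: [1, 1]}
# ... while t * sigma stays t * sigma: the ring is non-commutative
assert ore_mul(t_op, sigma) == {1: [0, 1]}
```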
With this notation, we can express the system (3) simply as A(y) = b where

A = A_ℓ σ^ℓ + ... + A_1 σ + A_0 ∈ F(t)[σ]^{m×n}.

(Some authors would denote F(t)[σ] by the more precise F(t)[σ; σ, 0]. Also, a more rigorous treatment would introduce a new symbol for the Ore variable; however, a lot of authors simply reuse σ, and we decided to join them.)

The powers of σ form a (left and right) Ore set within F(t)[σ] (see, e.g., [28, Chapter 5] for a definition and a brief description of localisation over non-commutative rings). Thus, we may localise by σ, obtaining the Ore Laurent polynomials F(t)[σ, σ^{−1}]. We can extend the action of F(t)[σ] on F(t) to F(t)[σ, σ^{−1}] in the obvious way.

We need to introduce some notation and naming conventions. We denote the n-by-n identity matrix by 1_n (or simply 1 if the size is clear from the context). Similarly, 0_{m×n} (or just 0) denotes the m-by-n zero matrix. With diag(a_1, ..., a_n) we mean the diagonal n-by-n matrix with the entries of the main diagonal being a_1, ..., a_n.

We say that a matrix or a vector is polynomial if all its entries are polynomials in F[t]; we call it rational if its entries are fractions of polynomials; and we speak of operator matrices if the entries are Ore or Ore Laurent polynomials.

Let M be a square matrix over F[t] (or F(t)[σ] or F(t)[σ, σ^{−1}]). We say that M is unimodular if M possesses a (two-sided) inverse over F[t] (or F(t)[σ] or F(t)[σ, σ^{−1}], respectively). We call M regular if its rows are linearly independent over F[t] (or F(t)[σ] or F(t)[σ, σ^{−1}], respectively), and singular if they are not linearly independent. For the special case of a polynomial matrix M ∈ F[t]^{n×n}, we can characterise these concepts using determinants: here, M is singular if det M = 0; regular if det M ≠ 0; and unimodular if det M ∈ F \ {0}. Another equivalent characterisation of regular polynomial matrices is that they have a rational inverse M^{−1} ∈ F(t)^{n×n}.

We denote the set of all unimodular polynomial matrices by GL_n(F[t]) and that of all unimodular operator matrices by GL_n(F(t)[σ]) or by GL_n(F(t)[σ, σ^{−1}]). We do not have a special notation for the set of regular matrices.

Remark 1.
Let A ∈ F(t)[σ]^{m×n} and b ∈ F(t)^m. Assume that we are given two unimodular operator matrices P ∈ GL_m(F(t)[σ, σ^{−1}]) and Q ∈ GL_n(F(t)[σ, σ^{−1}]). Then the system A(y) = b has the solution y if and only if (PAQ)(ỹ) = P(b) has the solution ỹ = Q^{−1}(y): Assume first that A(y) = b. Then also P(A(y)) = (PA)(y) = P(b), and furthermore we have P(b) = (PA)(y) = (PA)(QQ^{−1}(y)) = (PAQ)(Q^{−1}(y)) = (PAQ)(ỹ). Because P and Q are unimodular, we can easily go back as well. Thus, we can freely switch from one system to the other.

Definition 4.
We say that the systems A(y) = b and (PAQ)(ỹ) = P(b) in Remark 1 are related to each other. (Note that the rings F(t)[σ] and F(t)[σ, σ^{−1}] do not admit determinants since they lack commutativity.)

The Aperiodic Part of the Denominator Bound

Let (F(t), σ) be again a ΠΣ-extension of (F, σ). Recall that we are considering the system (3), which has the form

A_ℓ σ^ℓ y + ... + A_1 σ y + A_0 y = b

where A_0, ..., A_ℓ ∈ F[t]^{m×n} and b ∈ F[t]^m. We start this section by identifying systems with good properties.

Definition 5.
We say that the system in equation (3) is head regular if m = n (i.e., all the matrices are square) and det A_ℓ ≠ 0.

Definition 6.
We say that the system in equation (3) is tail regular if m = n and det A_0 ≠ 0.

Definition 7.
The system A(y) = b in equation (3) is called fully regular if it is head regular and there exists a unimodular operator matrix P ∈ GL_n(F(t)[σ, σ^{−1}]) such that the related system (PA)(ỹ) = P(b) is tail regular.

We will show later in Section 6 that any head regular system is actually already fully regular, and how the transformation matrix P from Definition 7 can be computed. Moreover, in Definition 7 we can always choose P in such a way that the coefficient matrices Ã_0, ..., Ã_ℓ̃ and the right hand side of the related system (PA)(ỹ) = P(b) are polynomial: simply multiplying by a common denominator does not change the unimodularity of P.

The preceding Definition 7 is very similar to strongly row-reduced matrices [9, Def. 4]. The main difference is that we allow an arbitrary transformation P which translates between a head and a tail regular system, while [9] require their transformation (also called P) to be of the shape diag(σ^{m_1}, ..., σ^{m_n}) for some specific exponents m_1, ..., m_n ∈ Z. At this time, we do not know which of the two forms is more advantageous; it would be an interesting topic for future research to explore whether the added flexibility that our definition gives can be used to make the algorithm more efficient.

Remark 2.
In the situation of Definition 7, the denominators of the solutions of the original system A(y) = b and the related system Ã(ỹ) = b̃ are the same: by Remark 1, we know that y solves the original system if and only if ỹ solves the related system. The matrix Q of Remark 1 is just the identity in this case.

We are now ready to state the main result of this section. For the rational difference field this result appears in various specialised forms, e.g., in the version m = n = l = 1.

Theorem 2.
Let the system in equation (3) be fully regular, and let y = d^{−1} z ∈ F(t)^n be a solution in reduced representation. Let (PA)(ỹ) = P(b) be a tail regular related system with trailing coefficient matrix Ã_0 ∈ F[t]^{n×n}. Let m be the common denominator of A_ℓ^{−1} and let p be the common denominator of Ã_0^{−1}. Then

disp(ap(d)) ≤ disp(σ^{−ℓ}(ap(m)), ap(p)) =: D   (4)

and

ap(d) | gcd( ∏_{j=0}^{D} σ^{−ℓ−j}(ap(m)), ∏_{j=0}^{D} σ^{j}(ap(p)) ).   (5)

We will show in Section 6 that any coupled system of the form (3) can be brought to a system which is fully regular and which has the same solutions as the original system. Note further that the denominator bound of the aperiodic part given on the right hand side of (5) can be computed if the dispersion of polynomials in F[t] (more precisely, if D) can be computed. Summarising, Theorem 1 is established once Theorem 2 is proven and the transformation of system (3) to a fully regular version is worked out in Section 6.

Remark 3.
Let (F(t_1)...(t_e), σ) be a ΠΣ-extension of (F, σ) with σ(t_i) = α_i t_i + β_i (α_i ∈ F∗, β_i ∈ F) for 1 ≤ i ≤ e. In this setting a multivariate aperiodic denominator bound d ∈ F[t_1, ..., t_e] \ {0} has been provided for a coupled system in Corollary 2. Namely, within its proof we determine the aperiodic denominator bound d by applying Theorem 1 (and thus internally Theorem 2) for each ΠΣ-monomial t_i. Finally, we merge the different denominator bounds d_i to the global aperiodic denominator bound d = lcm(d_1, ..., d_e). In other words, the formula (5) is reused e times (with possibly different Ds). This observation gives rise to the following improvement: it suffices to compute for 1 ≤ i ≤ e the dispersions D_i (using the formula (4) for the different ΠΣ-monomials t_i), to set D = max(D_1, ..., D_e), and to apply the formula (5) only once. We will illustrate this tactic in an example of Section 7.

For the sake of clarity we split the proof into two lemmata.

Lemma 3.
With the notations of Theorem 2, it holds that

  disp(ap(d)) ≤ disp(σ^{-ℓ}(ap(m)), ap(p)) = D.

Proof.
For the ease of notation, we will simply write p instead of ap(p) and we will do the same with m = ap(m) and d = ap(d).

Assume that disp(d) = λ > D for some λ ∈ N. Then we can find two irreducible aperiodic factors a, g ∈ F[t] of d such that σ^λ(a)/g ∈ F. In particular, due to Lemma 1 we can choose a, g with this property such that λ is maximal.

We distinguish two cases. First, assume that a | p. We claim that in this case we have σ^ℓ(g) ∤ m. Otherwise, it was g | σ^{-ℓ}(m), which together with g | σ^λ(a) | σ^λ(p) implied λ ∈ spread(σ^{-ℓ}(m), p); this contradicts D < λ. Moreover, σ^ℓ(g) cannot occur in σ^i(d) for 0 ≤ i < ℓ because else σ^ℓ(g) | σ^i(d) and thus g̃ = σ^{ℓ-i}(g) | d implied that a and g̃ are factors of d. Now, since σ^{λ+ℓ-i}(a)/g̃ = σ^{ℓ-i}(σ^λ(a)/g) ∈ F, this contradicts the maximality of λ. Thus, σ^ℓ(g) must occur in the denominator of

  A_ℓ σ^ℓ(y) + A_{ℓ-1} σ^{ℓ-1}(y) + ... + A_1 σ(y) + A_0 y = b ∈ F[t]^n    (6)

for at least one component: Let A_ℓ^{-1} = m^{-1} U for some U ∈ F[t]^{n×n}. Then U A_ℓ = m 1_n and

  U A_ℓ σ^ℓ(y) + U A_{ℓ-1} σ^{ℓ-1}(y) + ... + U A_1 σ(y) + U A_0 y
    = m σ^ℓ(z) / (α σ^ℓ(g)) + ( ∑_{j=0}^{ℓ-1} ( ∏_{k=0, k≠j}^{ℓ-1} σ^k(d) ) U A_j σ^j(z) ) / ∏_{j=0}^{ℓ-1} σ^j(d)
    = U b ∈ F[t]^n

for some α ∈ F[t] such that σ^ℓ(d) = α σ^ℓ(g). The equation is equivalent to

  ( ∏_{j=0}^{ℓ-1} σ^j(d) ) m σ^ℓ(z) = ( ( ∏_{j=0}^{ℓ-1} σ^j(d) ) U b − ∑_{j=0}^{ℓ-1} ( ∏_{k=0, k≠j}^{ℓ-1} σ^k(d) ) U A_j σ^j(z) ) α σ^ℓ(g).

Note that (every component of the vector on) the right hand side is divisible by σ^ℓ(g). For the left hand side, we have σ^ℓ(g) ∤ m ∏_{j=0}^{ℓ-1} σ^j(d). Also, we know that g ∤ z_j for at least one j. Thus, σ^ℓ(g) does not divide (at least one component of) the left hand side.
This is a contradiction.

We now turn our attention to the second case a ∤ p. Here, we consider the related tail regular system Ã_ℓ̃ σ^ℓ̃(y) + ... + Ã_0 y = b̃ instead of the original system. Recall that y remains unchanged due to Remark 2. Similar to the first case, let Ã_0^{-1} = p^{-1} V, i.e., V Ã_0 = p 1_n for some V ∈ F[t]^{n×n}. Note that a ∤ σ^i(d) for all i ≥ 1; otherwise, σ^{-i}(a) was a factor of d with σ^{λ+i}(σ^{-i}(a))/g = σ^λ(a)/g ∈ F, contradicting the maximality of λ. Let now

  V Ã_ℓ̃ σ^ℓ̃(y) + ... + V Ã_1 σ(y) + p 1_n y = ξ̃ ∈ F[t]^n.

We write again y = d^{-1} z. Then, after multiplying with the common denominator d σ(d) ··· σ^ℓ̃(d) and rearranging the terms we obtain

  ( p ∏_{k=1}^{ℓ̃} σ^k(d) ) z = ( ∏_{j=0}^{ℓ̃} σ^j(d) ) ξ̃ − ∑_{j=1}^{ℓ̃} ( ∏_{k=0, k≠j}^{ℓ̃} σ^k(d) ) V Ã_j σ^j(z)

where every term on the right hand side is divisible by a. However, on the left hand side a does not divide the scalar factor p ∏_{k=1}^{ℓ̃} σ^k(d), and because of gcd(z, d) = 1 there is at least one component of z which is not divisible by a. Thus, a does not divide the left hand side, which is a contradiction. ⊓⊔

Lemma 4.
With the notations of Theorem 2, we have

  ap(d) | gcd( ∏_{j=0}^{D} σ^{-ℓ-j}(m), ∏_{j=0}^{D} σ^{j}(p) ).

Proof.
Again, we will simply write p, m and d instead of ap(p), ap(m) and ap(d), respectively. As in the proof of Lemma 3, we let U ∈ F[t]^{n×n} be such that U A_ℓ = m 1_n. Multiplication by U from the left and isolating the highest order term transforms the system (3) into

  σ^ℓ(y) = m^{-1} U ( b − ∑_{j=0}^{ℓ-1} A_j σ^j(y) ).    (7)

Now, we apply σ^{-1} to both sides of the equation in order to obtain an identity for σ^{ℓ-1}(y):

  σ^{ℓ-1}(y) = σ^{-1}(m)^{-1} σ^{-1}(U) ( σ^{-1}(b) − ∑_{j=0}^{ℓ-1} σ^{-1}(A_j) σ^{j-1}(y) ).

We can substitute this into (7) in order to obtain a representation

  σ^ℓ(y) = m^{-1} U ( b − A_{ℓ-1} σ^{-1}(m)^{-1} σ^{-1}(U) ( σ^{-1}(b) − ∑_{j=0}^{ℓ-1} σ^{-1}(A_j) σ^{j-1}(y) ) − ∑_{j=0}^{ℓ-2} A_j σ^j(y) )
         = ( m σ^{-1}(m) )^{-1} Ũ ( b̃ − ∑_{j=-1}^{ℓ-2} Ã_j σ^j(y) )

for σ^ℓ(y) in terms of σ^{ℓ-2}(y), ..., σ^{-1}(y) where b̃ ∈ F[t]^n and Ã_{ℓ-2}, ..., Ã_{-1}, Ũ ∈ F[t]^{n×n}. We can continue this process, shifting the terms on the right side further with each step. Eventually, after D steps, we will arrive at a system of the form

  σ^ℓ(y) = ( m σ^{-1}(m) ··· σ^{-D}(m) )^{-1} U′ ( b′ − ∑_{j=-D}^{ℓ-D-1} A′_j σ^j(y) )    (8)

where b′ ∈ F[t]^n and A′_{-D}, ..., A′_{ℓ-D-1}, U′ ∈ F[t]^{n×n}.

Assume now that y = d^{-1} z is a solution of (3) and thus of (8) which is in reduced representation for some d ∈ F[t] and a vector z ∈ F[t]^n. Substituting this in equation (8) yields

  σ^ℓ(d)^{-1} σ^ℓ(z) = ( m σ^{-1}(m) ··· σ^{-D}(m) )^{-1} U′ ( b′ − ∑_{j=-D}^{ℓ-D-1} σ^j(d)^{-1} A′_j σ^j(z) )
    = ( ∏_{j=0}^{D} σ^{-j}(m) · ∏_{j=-D}^{ℓ-D-1} σ^j(d) )^{-1} U′ ( ∏_{j=-D}^{ℓ-D-1} σ^j(d) · b′ − ∑_{j=-D}^{ℓ-D-1} ( ∏_{k=-D, k≠j}^{ℓ-D-1} σ^k(d) ) A′_j σ^j(z) )

or, equivalently after clearing denominators,

  ∏_{j=0}^{D} σ^{-j}(m) · ∏_{j=-D}^{ℓ-D-1} σ^j(d) · σ^ℓ(z) = σ^ℓ(d) · U′ ( ∏_{j=0}^{ℓ-1} σ^{j-D}(d) · b′ − ∑_{j=-D}^{ℓ-D-1} ( ∏_{k=-D, k≠j}^{ℓ-D-1} σ^k(d) ) A′_j σ^j(z) ).    (9)

Let further q ∈ F[t] be an irreducible factor of the aperiodic part of d. Then σ^ℓ(q) divides the right hand side of equation (9). Looking at the left hand side, we see that σ^ℓ(q) cannot divide ∏_{j=0}^{ℓ-1} σ^{j-D}(d) since D ≥ disp(d), and there is at least one entry z_k of z with 1 ≤ k ≤ n such that q ∤ z_k because d^{-1} z is in reduced representation. It follows that σ^ℓ(q) | ∏_{j=0}^{D} σ^{-j}(m) or, equivalently, q | ∏_{j=0}^{D} σ^{-ℓ-j}(m). We can thus cancel q from the equation. Reasoning similarly for the other irreducible factors of the aperiodic part of d we obtain d | ∏_{j=0}^{D} σ^{-ℓ-j}(m).

In order to prove d | ∏_{j=0}^{D} σ^j(p), we consider once more the related tail regular system Ã_ℓ̃ σ^ℓ̃(y) + ... + Ã_0 y = b̃. Recall that by Remark 2, y is both a solution of the original and of the related system. Let V Ã_0 = p 1_n for some V ∈ F[t]^{n×n}. Multiplying the related system by V and isolating y yields

  y = p^{-1} V ( b̃ − ∑_{j=1}^{ℓ̃} Ã_j σ^j(y) ).

Now, an analogous computation allows us to shift the orders of the terms on the right hand side upwards. We obtain an equation

  y = ( p σ(p) ··· σ^D(p) )^{-1} V′ ( b̃′ − ∑_{j=1}^{ℓ̃} Ã′_j σ^{D+j}(y) )

for suitable matrices V′, Ã′_1, ..., Ã′_ℓ̃ ∈ F[t]^{n×n} and b̃′ ∈ F[t]^n. Substituting again y = d^{-1} z and clearing denominators, we arrive at an equation similar to (9), and using the same reasoning we can show that d | ∏_{j=0}^{D} σ^j(p). ⊓⊔

We will show in Section 6 below that it is actually possible to make any system of the form (3) fully regular. One of the key ingredients for this will be row (and column) reduction, which we are going to introduce in this section. The whole exposition closely follows the one in [20]. We will concentrate on row reduction since column reduction works mutatis mutandis. Consider an arbitrary operator matrix A ∈ F(t)[σ]^{m×n}.
When we speak about the degree of A, we mean the maximum of the degrees (in σ) of all the entries of A. Similarly, the degree of a row of A will be the maximum of the degrees in that row. Let ν be the degree of A and let ν_1, ..., ν_m be the degrees of the rows of A. For simplicity, we first assume that none of the rows of A is zero. When we multiply A by the matrix Δ = diag(σ^{ν-ν_1}, ..., σ^{ν-ν_m}) from the left, then for each i = 1, ..., m we multiply the i-th row by σ^{ν-ν_i}. The resulting row will have degree ν. That is, multiplication by Δ brings all rows to the same degree. We will write the product as

  Δ A = A_ν σ^ν + ... + A_1 σ + A_0

where A_0, ..., A_ν ∈ F(t)^{m×n} are rational matrices. Since none of the rows of A is zero, also none of the rows of A_ν is zero. We call A_ν the leading row coefficient matrix of A and denote it by A_ν = LRCM(A). In general, if some rows of A are zero, then we simply define the corresponding rows in LRCM(A) to be zero, too.

Definition 8.
The matrix A ∈ F(t)[σ]^{m×n} is row reduced (w.r.t. σ) if LRCM(A) has full row rank.

Remark 4. If A(y) = b is a head regular system where A = A_ℓ σ^ℓ + ... + A_1 σ + A_0 for A_0, ..., A_ℓ ∈ R^{n×n}, then A is row reduced. This is obvious since in this case LRCM(A) = A_ℓ and det A_ℓ ≠ 0. Conversely, if A is row reduced, then Δ A (with Δ as above) is head regular.

It can be shown that for any matrix A ∈ F(t)[σ]^{m×n} there exists a unimodular operator matrix P ∈ GL_m(F(t)[σ]) such that

  P A = ( Ã ; 0 ),

i.e., the first r rows form Ã and the remaining rows are zero, for some row reduced Ã ∈ F(t)[σ]^{r×n} where r is the (right row) rank of A over F(t)[σ]. (For more details, see [20, Thm. 2.2] and [20, Thm. A.2].)

It is a simple exercise to derive an analogous column reduction of A. Moreover, it can easily be shown that it has similar properties. In particular, we can always compute Q ∈ GL_n(F(t)[σ]) such that the product will be

  A Q = ( Â 0 )

for some column reduced Â ∈ F(t)[σ]^{m×r} where r is the (left column) rank of A. We remark that in fact r in both cases will be the same number since the left column rank of A equals the right row rank by, e.g., [27, Thm. 8.1.1]. Therefore, in the following discussion we will simply refer to it as the rank of A.

In Theorem 2, we had assumed that we were dealing with a fully regular system. This section will explain how every arbitrary system can be transformed into a fully regular one with the same set of solutions.

Represent the system (3) by an operator matrix A ∈ F(t)[σ]^{m×n}. We first apply column reduction to A, which gives a unimodular operator matrix Q ∈ GL_n(F(t)[σ]) such that the non-zero columns of AQ are column reduced. Next, we apply row reduction to AQ, obtaining P ∈ GL_m(F(t)[σ]) such that in total

  P A Q = ( Ã 0 ; 0 0 )

where Ã ∈ F(t)[σ]^{r×r} will now be a row reduced square matrix and r is the rank of A. If we define the matrix Δ as in the previous Section 5, then the leading coefficient matrix of Δ P A Q coincides with the leading row coefficient matrix of P A Q. Moreover, since Δ is unimodular over F(t)[σ, σ^{-1}], also Δ P is unimodular over F(t)[σ, σ^{-1}]. Thus, if we define Â ∈ F(t)[σ]^{r×r} by

  Δ P A Q = ( Â 0 ; 0 0 ),

then we can write Â = Â_ν σ^ν + ... + Â_1 σ + Â_0 where ν is the degree of Â and where Â_0, ..., Â_ν ∈ F(t)^{r×r} are rational matrices. Since Â is still row reduced, we obtain that its leading row coefficient matrix Â_ν has full row rank.

Assume now that we started with the system A(y) = b. Then (Δ P A Q)(y) = (Δ P)(b) is a related system with the same solutions as per Remark 1. More concretely, let us write

  y = ( y_1 ; y_2 ) and (Δ P)(b) = ( b̃_1 ; b̃_2 )

where y_1 and b̃_1 ∈ F(t)^r are vectors of length r, y_2 ∈ F(t)^{n-r} has length n − r, and b̃_2 ∈ F(t)^{m-r} has length m − r. Then (Δ P A Q)(y) = (Δ P)(b) can be expressed as Â(y_1) = b̃_1 and 0 = b̃_2. The requirement that b̃_2 = 0 is called the compatibility condition. Moreover, we see that the variables in y_2 can be chosen freely.

If the compatibility condition does not hold, then the system does not have any solutions and we may abort the computation right here. Otherwise, A(y) = b is equivalent to a system Â(y_1) = b̃_1 of (potentially) smaller size. Clearing denominators in the last system does not change its solvability nor the fact that Â is row reduced. Thus, we have arrived at an equivalent head regular system.

It remains to explain how we can turn a head regular system into a fully regular one. Thus, as above we assume now that the first regularisation step is already done and that the operator matrix A ∈ F(t)[σ]^{n×n} is such that A(y) = b is head regular. That does in particular imply that A is row reduced and hence that n equals the rank of A over F(t)[σ]. We claim that n is also the rank of A over F(t)[σ, σ^{-1}], i.e., that the rows of A are linearly independent over F(t)[σ, σ^{-1}]. Assume that v A = 0 for some non-zero v ∈ F(t)[σ, σ^{-1}]^n. There is a power σ^k of σ such that σ^k v ∈ F(t)[σ]^n. Since then (σ^k v) A = 0, we obtain that A did not have full rank over F(t)[σ]. The claim follows by contraposition. Note that also the other direction obviously holds.

Let ℓ be the degree of A and write A as A = A_ℓ σ^ℓ + ... + A_1 σ + A_0 where A_0, ..., A_ℓ ∈ F(t)^{n×n}. We multiply A over F(t)[σ, σ^{-1}] by σ^{-ℓ} from the left. This does not change the rank. The resulting matrix σ^{-ℓ} A will be in F(t)[σ^{-1}]^{n×n}. Using a similar argument as above, we see that the rank of σ^{-ℓ} A over F(t)[σ^{-1}] is still n. We have

  σ^{-ℓ} A = σ^{-ℓ}(A_0) σ^{-ℓ} + ... + σ^{-ℓ}(A_{ℓ-1}) σ^{-1} + σ^{-ℓ}(A_ℓ),

i.e., σ^{-ℓ} A is similar to A with the coefficients in reverse order.

We can now apply row reduction to σ^{-ℓ} A w.r.t. the Ore variable σ^{-1}. Just as before we may also shift all the rows afterwards to bring them to the same degree. Let the result be

  W σ^{-ℓ} A = Ã_0 σ^{-ℓ̃} + ... + Ã_{ℓ̃-1} σ^{-1} + Ã_ℓ̃

where ℓ̃ is the new degree, W ∈ GL_n(F(t)[σ, σ^{-1}]) is a unimodular operator matrix, the matrices Ã_0, ..., Ã_ℓ̃ ∈ F(t)^{n×n} are rational, and where the non-zero rows of Ã_0 are independent. However, since the rank of σ^{-ℓ} A is n (over F(t)[σ^{-1}]), we obtain that Ã_0 does in fact not possess any zero rows at all. Thus, Ã_0 has full rank.

Multiplication by σ^ℓ̃ from the left brings everything back into F(t)[σ]^{n×n}; i.e., we have

  σ^ℓ̃ W σ^{-ℓ} A = σ^ℓ̃(Ã_ℓ̃) σ^ℓ̃ + ... + σ^ℓ̃(Ã_1) σ + σ^ℓ̃(Ã_0)

where σ^ℓ̃(Ã_0) still has full rank and where the transformation matrix σ^ℓ̃ W σ^{-ℓ} is unimodular over F(t)[σ, σ^{-1}]. In other words, we have found a related tail regular system. Since we started with a head regular system, we even found that it is fully regular.

We can summarise the results of this section in the following way. An overview of the process is shown in Figure 1.

Theorem 3.
Any system of the form (3) can be transformed into a related fully regular system. Along the way we acquire some compatibility conditions indicating whether the system may be solvable.
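To make the row reduction machinery behind this theorem concrete, here is a minimal sketch in sympy. It assumes a simplified encoding, chosen for illustration only, in which an operator matrix A = A_0 + A_1 σ + ... + A_ν σ^ν over Q(t) is stored as the list [A_0, ..., A_ν] of its coefficient matrices; the helper names row_degrees, lrcm and is_row_reduced are ours, not from the paper:

```python
import sympy as sp

t = sp.symbols('t')

def row_degrees(coeffs):
    """sigma-degree of each row of A = sum_i coeffs[i] * sigma^i (-1 for a zero row)."""
    degs = []
    for r in range(coeffs[0].rows):
        d = -1
        for i, Ai in enumerate(coeffs):
            if any(Ai[r, c] != 0 for c in range(Ai.cols)):
                d = i  # row r has a non-zero entry in the coefficient of sigma^i
        degs.append(d)
    return degs

def lrcm(coeffs):
    """Leading row coefficient matrix: row r is taken from the coefficient matrix
    of its highest sigma-power; rows that are zero in A stay zero."""
    L = sp.zeros(coeffs[0].rows, coeffs[0].cols)
    for r, d in enumerate(row_degrees(coeffs)):
        if d >= 0:
            L[r, :] = coeffs[d][r, :]
    return L

def is_row_reduced(coeffs):
    """A is row reduced iff LRCM(A) has full row rank."""
    return lrcm(coeffs).rank() == coeffs[0].rows

# A = [[sigma, 1], [sigma, t]]: both rows have sigma-degree 1.
A0 = sp.Matrix([[0, 1], [0, t]])
A1 = sp.Matrix([[1, 0], [1, 0]])
print(lrcm([A0, A1]))            # Matrix([[1, 0], [1, 0]]), which has rank 1 only
print(is_row_reduced([A0, A1]))  # False
```

Subtracting the first row from the second (a unimodular operation) turns the rows into (σ, 1) and (0, t − 1); the leading row coefficient matrix then becomes regular and the transformed operator is row reduced, which is exactly the effect of the unimodular matrix P in the discussion above.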
We would like to once more compare our approach to the one taken in [9]. They show how to turn a system into strongly row-reduced form (their version of fully regular, as explained after Definition 7) in the proof of their [9, Prop. 5]. Although they start out with an input of full rank, this is not a severe restriction as the same preprocessing step (from A to Â) which we used could be applied in their case, too. Just like our approach, their method requires two applications of row reduction. (Note that the commutation rule σ^{-1} a = σ^{-1}(a) σ^{-1} follows immediately from the rule σ a = σ(a) σ.)

Fig. 1 Outline of the regularisation: an arbitrary A ∈ F(t)[σ]^{m×n} is brought by row/column reduction w.r.t. σ to (Δ P) A Q = ( Â 0 ; 0 0 ) with head regular Â ∈ F(t)[σ]^{r×r}; assuming the compatibility conditions hold, row reduction w.r.t. σ^{-1} turns Â into the tail regular σ^ℓ̃ W σ^{-ℓ} Â.
They do, however, obtain full regularity in the opposite order: Their first reduction makes the system tail regular while the second reduction works on the leading matrix. In our case, the first row reduction removes unnecessary equations and makes the system head regular while the second one works on the tail. The other big difference is that our second reduction is w.r.t. σ^{-1} while [9] rewrite the system in terms of the difference operator Δ = σ − 1. As mentioned after Definition 7, we cannot tell with certainty yet which of the two approaches is preferable. That will be a topic for future research.

As a first example, we consider the system

  ( − t − t + − t − t − t − t + t + − t − t − t − t − t − t − t ) y(t + 1) + ( t − t + t t − t + t t + t + t + t + t + t ) y(t) = ( t + t + t + t + t ).

Here, we have F = Q and we are in the Σ-extension case with σ(t) = t +
1. We can easily see that the leading and trailing matrices are both regular. Inverting them and computing common denominators, we arrive at

  m = ( t − ) t ( t + t + )( t − t + )( t + ) and p = t ( t + )( t − t + )( t + t + ).

We have spread(σ^{-1}(m), p) = {0} and thus the dispersion is 0. We obtain the denominator bound

  gcd(σ^{-1}(m), p) = t ( t − t + ).

This does fit well with the actual solutions, for which a Q-basis is given by

  1/( t ( t − t + ) ) · ( − t ( t − t + ) t − t + ) and 1/( t ( t − t + ) ) · ( − t ( t − t + ) t − t − t + ).

(We can easily check that those are solutions; and according to [9, Thm. 6] the dimension of the solution space is 2.)

For the second example, we consider a ( , )-multibasic rational difference field over F = Q; i.e., we consider Q(t_1, t_2) with σ(t_1) = q_1 t_1 and σ(t_2) = q_2 t_2. The system in this example is

  A_2 σ^2(y) + A_1 σ(y) + A_0 y = 0 with y = ( y_1(t_1, t_2) ; y_2(t_1, t_2) ),

where

  A_2(t_1, t_2) = ( (t_1 t_2 − )(t_1 t_2 − )  −(t_1 t_2 − )(t_1 t_2 − ) ; (t_1 − t_2)(t_1 − t_2)  (t_1 − t_2)(t_1 − t_2) ),
  A_1(t_1, t_2) = ( −(t_1 t_2 − )(t_1 t_2 − )  (t_1 t_2 − )(t_1 t_2 − ) ; −(t_1 − t_2)(t_1 − t_2)  −(t_1 − t_2)(t_1 − t_2) ),
  A_0(t_1, t_2) = ( (t_1 t_2 − )(t_1 t_2 − )  −(t_1 t_2 − )(t_1 t_2 − ) ; (t_1 − t_2)(t_1 − t_2)  (t_1 − t_2)(t_1 − t_2) ).

This is a 2-by-2 system of order 2 over Q(t_1, t_2)[σ]. Both the σ-leading matrix A_2 and the σ-trailing matrix A_0 are invertible, which means that the system is both head and tail regular; hence it is fully regular. The denominator of A_2^{-1} is

  m = (t_1 t_2 − )(t_1 t_2 − )(t_1 − t_2)(t_1 − t_2)

and the denominator of A_0^{-1} is

  p = (t_1 t_2 − )(t_1 t_2 − )(t_1 − t_2)(t_1 − t_2).

We have ap(m) = m and ap(p) = p. Following the strategy/algorithm proposed in Remark 3 we compute the dispersions w.r.t. t_1 and t_2 (which turn out to be the same in this example), obtaining

  D = disp_{t_1, t_2}(σ^{-1}(ap(m)), ap(p)) = 0.
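Spreads and dispersions like the ones in these examples can be computed mechanically. The following minimal sympy sketch is a toy illustration and not the systems above: it assumes the ordinary shift case σ(t) = t + 1 and the convention spread(p, q) = {k ∈ N : gcd(σ^k(p), q) ∉ F}, and finds the candidate shifts as nonnegative integer roots of a resultant:

```python
import sympy as sp

t, k = sp.symbols('t k')

def spread(p, q):
    """All k >= 0 such that gcd(p(t + k), q(t)) is non-trivial, for the shift t -> t + 1."""
    res = sp.resultant(p.subs(t, t + k), q, t)  # vanishes exactly at the matching shifts k
    if res == 0:
        return None  # a common factor stays common under every shift
    return sorted(r for r in sp.roots(sp.Poly(res, k)) if r.is_integer and r >= 0)

def dispersion(p, q):
    """Maximum of the spread; -1 encodes an empty spread, oo a periodic common factor."""
    s = spread(p, q)
    if s is None:
        return sp.oo
    return max(s) if s else -1

# p = t*(t - 3) and q = t share a factor after shifting p by 0 or by 3:
print(spread(t*(t - 3), t))      # [0, 3]
print(dispersion(t*(t - 3), t))  # 3
```

The resultant in k vanishes precisely for those shifts at which the two polynomials acquire a common root, so its integer roots enumerate the spread without any factorisation over extension fields.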
By Corollary 2 it follows that the denominator bound for this system is

  d = gcd(σ^{-1}(ap(m)), ap(p)) = (t_1 t_2 − )(t_1 − t_2).

This fits perfectly with the actual Q-basis of the solution space, which is given by

  1/( (t_1 t_2 − )(t_1 − t_2) ) · ( ( t + )( t − ) ( t − )( t + ) ),
  1/( t_1 − t_2 ) · ( t − t t + − t + t t + ),
  1/( t_1 − t_2 ) · ( t − t t + t − t − t + t t + t − t ),
  and 1/( 4 (t_1 t_2 − )(t_1 − t_2) ) · ( t t − t t − t + t t t − t t − t + t ).

(It is easy to check that these are solutions; and they are a basis of the solutions since the dimension of the solution space is 4 according to [9].)

Given a ΠΣ-extension (F(t), σ) of (F, σ) and a coupled system of the form (3) whose coefficients are from F(t), we presented algorithms that compute an aperiodic denominator bound d ∈ F[t] for the solutions under the assumption that the dispersion can be computed in F[t] (see Theorem 1). If t represents a sum, i.e., it has the shift behaviour σ(t) = t + β for some β ∈ F, this is the complete denominator bound. If t represents a product, i.e., it has the shift behaviour σ(t) = α t for some α ∈ F^*, then t^m d will be a complete denominator bound for a sufficiently large m. It is so far an open problem to determine this m in the Π-monomial case by an algorithm; so far a solution is only given for the q-case with σ(t) = q t in [38]. In the general case, one can still guess m ∈ N, i.e., one can choose a possibly large enough m (m = 0 if t is a Σ-monomial) and continue. Namely, plugging y = y′/(t^m d) with the unknown numerator y′ ∈ F[t]^n into the system (3) yields a new system in y′ where one has to search for all polynomial solutions y′ ∈ F[t]^n. It is still an open problem to determine a degree bound b ∈ N that bounds the degrees of all entries of all solutions y′; for the rational case σ(t) = t + 1 and the q-case σ(t) = q t see [38].
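The computable part of this machinery, the aperiodic bound of formula (5), is just a gcd of two finite products of shifted denominators once D is known. The following small sympy sketch assumes the ordinary shift case σ(t) = t + 1; the inputs m, p, ℓ = 1 and D = 2 are invented toy data for illustration, not taken from the examples above:

```python
import sympy as sp

t = sp.symbols('t')

def aperiodic_bound(m, p, ell, D):
    """gcd( prod_{j=0..D} sigma^(-ell-j)(m), prod_{j=0..D} sigma^(j)(p) ), as in formula (5)."""
    left = sp.Integer(1)
    right = sp.Integer(1)
    for j in range(D + 1):
        left *= m.subs(t, t - ell - j)   # sigma^(-ell-j) acts as t -> t - ell - j
        right *= p.subs(t, t + j)        # sigma^(j) acts as t -> t + j
    return sp.factor(sp.gcd(sp.expand(left), sp.expand(right)))

# Toy data: a first-order system (ell = 1) whose inverted leading/trailing
# matrices have the common denominators
m = t*(t - 1)
p = t*(t - 2)
# For these inputs D = disp(sigma^(-1)(m), p) = 2, and every solution
# denominator's aperiodic part divides the gcd below:
print(aperiodic_bound(m, p, 1, 2))
```

Here the two products are (t − 1)(t − 2)²(t − 3)²(t − 4) and t²(t + 1)(t + 2)(t − 1)(t − 2), so the bound works out to (t − 1)(t − 2).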
In the general case, one can guess a degree bound b, i.e., one can choose a possibly large enough b ∈ N and continue to find all solutions y′ whose component degrees are at most b. This means that one has to determine the coefficients up to degree b in the difference field (F, σ).

If F = const(F, σ), this task can be accomplished by reducing the problem to a linear system and solving it. Otherwise, suppose that F itself is a ΠΣ-field over a constant field K. Note that in this case we can compute d (see Lemma 1), i.e., we only supplemented a tuple (m, b) of nonnegative integers to reach this point. Now one can use degree reduction strategies as worked out in [33, 22, 49] to determine the coefficients of the polynomial solutions by solving several coupled systems in the smaller field F. In other words, we can apply our strategy again to solve these systems in F = F′(τ) where τ is again a ΠΣ-monomial: compute the aperiodic denominator bound d′ ∈ F′[τ], guess an integer m′ ≥ 0 (only needed if τ is a Π-monomial) for a complete denominator bound τ^{m′} d′, guess a degree bound b′ ≥ 0, and solve the resulting systems in the smaller field F′. Eventually, we end up at the constant field and solve the problem there by linear algebra.

Summarising, we obtain a method that enables one to search for all solutions of a coupled system in a ΠΣ-field where one has to adjust certain nonnegative integer tuples (m, b) to guide our machinery. Restricting to scalar equations with coefficients from a ΠΣ-field, the bounds of the periodic denominator part and the degree bounds have been determined only some years ago [10]. Till then we used the above strategy also for scalar equations [49] and could derive the solutions in concrete problems in a rather convincing way. It is thus expected that this approach will also be rather helpful for future calculations.

References
1. Jakob Ablinger, Arnd Behring, Johannes Blümlein, Abilio De Freitas, Andreas von Manteuffel, and Carsten Schneider, Calculating three loop ladder and V-topologies for massive operator matrix elements by computer algebra, Comput. Phys. Comm. (2016), 33–112 (English), arXiv:1509.08324 [hep-ph].
2. Sergei A. Abramov, On the summation of rational functions, Zh. vychisl. mat. Fiz. (1971), 1071–1074.
3. , Rational solutions of linear differential and difference equations with polynomial coefficients, U.S.S.R. Comput. Math. Math. Phys. (1989), no. 6, 7–12.
4. Sergei A. Abramov, Rational solutions of linear difference and q-difference equations with polynomial coefficients, Programming and Computer Software (1995), no. 6, 273–278, Translated from Russian.
5. Sergei A. Abramov, Rational solutions of first order linear q-difference systems, Proc. FPSAC'99, 1999.
6. Sergei A. Abramov, A direct algorithm to compute rational solutions of first order linear q-difference systems, Discrete Mathematics (2002), no. 246, 3–12.
7. Sergei A. Abramov and Moulay A. Barkatou, Rational solutions of first order linear difference systems, Proceedings of ISSAC'98 (Rostock), 1998.
8. , Rational solutions of first order linear difference systems, ISSAC'98, 1998.
9. Sergei A. Abramov and Moulay A. Barkatou, On solution spaces of products of linear differential or difference operators, ACM Communications in Computer Algebra (2014), no. 4, 155–165.
10. Sergei A. Abramov, Manuel Bronstein, Marko Petkovšek, and Carsten Schneider, In preparation (2017).
11. Sergei A. Abramov, Amel Gheffar, and Denis E. Khmelnov, Rational solutions of linear difference equations: Universal denominators and denominator bounds, Programming and Computer Software (2011), 78–86.
12. Sergei A. Abramov and Denis E. Khmelnov, Denominators of rational solutions of linear difference systems of an arbitrary order, Programming and Computer Software (2012), no. 2, 84–91.
13. Sergei A. Abramov, H. Q. Le, and Ziming Li, Univariate Ore polynomial rings in computer algebra, Journal of Mathematical Sciences (2005), no. 5, 5885–5903.
14. Sergei A. Abramov, Peter Paule, and Marko Petkovšek, q-hypergeometric solutions of q-difference equations, Discrete Mathematics (1998), no. 180, 3–22.
15. Sergei A. Abramov and Marko Petkovšek, D'Alembertian solutions of linear differential and difference equations, Proc. ISSAC'94 (J. von zur Gathen, ed.), ACM Press, 1994, pp. 169–174.
16. Sergei A. Abramov and Eugene V. Zima, D'Alembertian solutions of inhomogeneous linear equations (differential, difference, and some other), Proc. ISSAC'96, ACM Press, 1996, pp. 232–240.
17. Moulay A. Barkatou, An algorithm for computing a companion block diagonal form for a system of linear differential equations, Appl. Algebra Eng. Commun. Comput. (1993), 185–195.
18. Moulay A. Barkatou, Rational solutions of matrix difference equations. Problem of equivalence and factorization, Proceedings of ISSAC (Vancouver, BC), 1999, pp. 277–282.
19. Andrej Bauer and Marko Petkovšek, Multibasic and mixed hypergeometric Gosper-type algorithms, Journal of Symbolic Computation (1999), no. 4–5, 711–736.
20. Bernhard Beckermann, Howard Cheng, and George Labahn, Fraction-free row reduction of matrices of Ore polynomials, Journal of Symbolic Computation (2006), 513–543.
21. Alin Bostan, Frédéric Chyzak, and Élie de Panafieu, Complexity estimates for two uncoupling algorithms, Proceedings of ISSAC'13 (Boston), June 2013.
22. Manuel Bronstein, On solutions of linear ordinary difference equations in their coefficient field, J. Symbolic Comput. (2000), no. 6, 841–877.
23. Manuel Bronstein and Marko Petkovšek, An introduction to pseudo-linear algebra, Theoretical Computer Science (1996), no. 157, 3–33.
24. William Y. Chen, Peter Paule, and Husam L. Saad, Converging to Gosper's algorithm, Adv. in Appl. Math. (2008), no. 3, 351–364. MR 2449596
25. Frédéric Chyzak, An extension of Zeilberger's fast algorithm to general holonomic functions, Discrete Math. (2000), 115–134.
26. Frédéric Chyzak and Bruno Salvy, Non-commutative elimination in Ore algebras proves multivariate identities, Journal of Symbolic Computation (1998), no. 2, 187–227.
27. Paul Moritz Cohn, Free rings and their relations, 2nd ed., Monographs, no. 19, London Mathematical Society, London, 1985.
28. , Introduction to ring theory, Springer Undergraduate Mathematics Series, Springer-Verlag, London, 2000.
29. R. William Gosper, Decision procedures for indefinite hypergeometric summation, Proc. Nat. Acad. Sci. U.S.A. (1978), 40–42.
30. Peter A. Hendriks and Michael F. Singer, Solving difference equations in finite terms, J. Symbolic Comput. (1999), no. 3, 239–259.
31. Mark van Hoeij, Rational solutions of linear difference equations, Proc. ISSAC'98, 1998, pp. 120–123.
32. , Finite singularities and hypergeometric solutions of linear recurrence equations, J. Pure Appl. Algebra (1999), no. 1–3, 109–131.
33. Michael Karr, Summation in finite terms, Journal of the Association for Computing Machinery (1981), no. 2, 305–350.
34. Manuel Kauers and Carsten Schneider, Indefinite summation with unspecified summands, Discrete Math. (2006), no. 17, 2021–2140.
35. , Symbolic summation with radical expressions, Proc. ISSAC'07 (C. W. Brown, ed.), 2007, pp. 219–226.
36. Johannes Middeke, A computational view on normal forms of matrices of Ore polynomials, Ph.D. thesis, Johannes Kepler University, Linz, July 2011, Research Institute for Symbolic Computation (RISC).
37. , Denominator bounds and polynomial solutions for systems of q-recurrences over k(t) for constant k, to appear in Proc. ISSAC'17, 2017.
38. Johannes Middeke, Waterloo Workshop on Computer Algebra, ch. Denominator Bounds for Higher Order Recurrence Systems over ΠΣ* Fields, submitted, 2017.
39. Øystein Ore, Theory of non-commutative polynomials, The Annals of Mathematics (1933), no. 3, 480–508.
40. Peter Paule, Greatest factorial factorization and symbolic summation, J. Symbolic Comput. (1995), no. 3, 235–268.
41. Marko Petkovšek, Hypergeometric solutions of linear recurrences with polynomial coefficients, J. Symbolic Comput. (1992), no. 2–3, 243–264.
42. Marko Petkovšek, Herbert S. Wilf, and Doron Zeilberger, A = B.
43. Solving linear recurrence equations with polynomial coefficients, Computer Algebra in Quantum Field Theory: Integration, Summation and Special Functions (Carsten Schneider and Johannes Blümlein, eds.), Texts and Monographs in Symbolic Computation, Springer, 2013, pp. 259–284.
44. Carsten Schneider, Symbolic summation in difference fields, Tech. Report 01-17, RISC-Linz, J. Kepler University, November 2001, PhD thesis.
45. , A collection of denominator bounds to solve parameterized linear difference equations in ΠΣ-extensions, An. Univ. Timişoara Ser. Mat.-Inform. (2004), no. 2, 163–179, Extended version of Proc. SYNASC'04.
46. , Degree bounds to find polynomial solutions of parameterized linear difference equations in ΠΣ-fields, Appl. Algebra Engrg. Comm. Comput. (2005), no. 1, 1–32.
47. , A new Sigma approach to multi-summation, Adv. in Appl. Math. (2005), no. 4, 740–767.
48. , Product representations in ΠΣ-fields, Ann. Comb. (2005), no. 1, 75–99.
49. , Solving parameterized linear difference equations in terms of indefinite nested sums and products, J. Differ. Equations Appl. (2005), no. 9, 799–821.
50. , Simplifying sums in ΠΣ-extensions, J. Algebra Appl. (2007), no. 3, 415–441.
51. , Simplifying multiple sums in difference fields, Computer Algebra in Quantum Field Theory: Integration, Summation and Special Functions (Carsten Schneider and Johannes Blümlein, eds.), Texts and Monographs in Symbolic Computation, Springer, 2013, arXiv:1304.4134 [cs.SC], pp. 325–360.
52. , Fast algorithms for refined parameterized telescoping in difference fields, Computer Algebra and Polynomials (J. Gutierrez, J. Schicho, M. Weimann, eds.), Lecture Notes in Computer Science (LNCS), no. 8942, Springer, 2015, arXiv:1307.7887 [cs.SC], pp. 157–191.
53. ,