In-depth comparison of the Berlekamp–Massey–Sakata and the Scalar-FGLM algorithms: the non-adaptive variants
Jérémy Berthomieu∗, Jean-Charles Faugère

Sorbonne Universités, UPMC Univ Paris 06, CNRS, INRIA, Laboratoire d'Informatique de Paris 6 (LIP6), Équipe PolSys, 4 place Jussieu, 75252 Paris Cedex 05, France

Abstract
We compare thoroughly the Berlekamp–Massey–Sakata algorithm and the Scalar-FGLM algorithm, which both compute the ideal of relations of a multidimensional linear recurrent sequence.

Surprisingly, their behaviors differ. We detail in which way they do and prove that it is not possible to tweak one of the algorithms in order to mimic exactly the behavior of the other.

Keywords:
The BMS algorithm, the Scalar-FGLM algorithm, Gröbner basis computation, multidimensional linear recurrent sequence, algorithms comparison
∗ Laboratoire d'Informatique de Paris 6, Université Pierre-et-Marie-Curie, boîte courrier 169, 4 place Jussieu, F-75252 Paris Cedex 05, France.
Email addresses: [email protected] (Jérémy Berthomieu), [email protected] (Jean-Charles Faugère)
Preprint submitted to Journal of Symbolic Computation, November 21, 2018.

1. Introduction
Computing the smallest linear recurrence relation satisfied by a sequence is a fundamental problem in Computer Science. It is the shortest linear feedback shift register (LFSR) which generates the sequence. The length of this relation estimates the linear complexity of the sequence.

In the 18th century, Gauß was interested in predicting the next term of a sequence. Given a discrete set $(u_i)_{i \in \mathbb{N}}$, find the best coefficients, in the least-squares sense, $(\alpha_i)_{1 \le i \le d}$ that will approximate $u_i$ by $-\sum_{k=1}^{d} \alpha_k\, u_{i-k}$. Least-squares sense means that the solution minimizes the sum of the squares of the errors.

This problem has also been extensively used in Digital Signal Processing theory and applications. Numerically, the Levinson–Durbin recursion method can be used to solve this problem. Hence, to some extent, the original Levinson–Durbin problem in Norbert Wiener's Ph.D. thesis, Levinson (1947); Wiener (1964), predates the Hankel interpretation of the Berlekamp–Massey algorithm, see for instance Jonckheere and Ma (1989).

The Berlekamp–Massey algorithm (BM, Berlekamp (1968); Massey (1969)) guesses a solution of this problem for sequences with one parameter, i.e. in the one-dimensional case. This algorithm has been tremendously studied and many variants were designed. We refer the reader to Kaltofen and Pan (1991); Kaltofen and Yuhasz (2013a,b) for a very nice classification of the BM algorithms for solving this problem, and for its generalization to matrix sequences.

Classically, two designs of the BM algorithm are used.

The first one assumes that the coefficients of the sequence $(u_i)_{i \in \mathbb{N}}$ are given online, i.e. $u_{i+1}$ is known only after $u_i$, and that a bound $d$ is given such that $u_{d-1}$ will be computed but not $u_d$. Then, the BM algorithm guesses a linear recurrence relation satisfied by the table $(u_0, \ldots, u_i)$, checks if this relation is satisfied by $(u_0, \ldots, u_{i+1})$ and updates the relation if it was not.
The algorithm stops when reaching $u_{d-1}$.

The other one assumes that the table $(u_0, \ldots, u_{d-1})$ is known at once. Then, the algorithm finds the kernel of the Hankel matrix of size $\lceil d/2 \rceil \times \lfloor d/2 \rfloor$ associated with the sequence $(u_i)_{i \in \mathbb{N}}$. A complexity breakthrough is reached since this comes down to calling the extended Euclidean algorithm between $x^d$ and $U(x) = \sum_{i=0}^{d-1} u_i\, x^{d-i-1}$ and stopping it prematurely when reaching a remainder of degree strictly less than $\lfloor d/2 \rfloor$. The relation is then given by the Bézout coefficient of $U(x)$ associated with this remainder. See Blackburn (1997); Dornstetter (1987).

Sakata extended the BM algorithm to 2 dimensions in Sakata (1988) and then to $n$ dimensions in Sakata (1990, 2009). The so-called Berlekamp–Massey–Sakata algorithm (BMS) guesses a Gröbner basis of the ideal of relations satisfied by the first terms of the input sequence, (Sakata, 1990, Lemma 5).

In a way, the BMS algorithm extends the first design of the BM algorithm, as when calling the BMS algorithm on a univariate sequence, it behaves exactly like the BM algorithm on this sequence.

The so-called Scalar-FGLM algorithm, presented in Berthomieu et al. (2015, 2016), guesses the reduced Gröbner basis of the ideal of relations of a sequence. It extends the second design of the BM algorithm through the computation of the kernel of a multi-Hankel matrix, the multivariate generalization of a Hankel matrix. However, no fast method is currently known for computing this kernel.

While the second design of the BM algorithm seems more efficient than the first one, mainly thanks to fast Euclidean algorithms, it is not clear how their multidimensional extensions compare. Surprisingly, the BMS and the Scalar-FGLM algorithms behave so differently that it is not possible to apply a small modification to either algorithm in order to simulate the behavior of the other.
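To make the first, online design concrete, here is a minimal Python sketch of the BM algorithm over the rationals (exact arithmetic via `fractions`). The names and conventions are ours, not the paper's: the relation is stored as the coefficient list of $C(x) = 1 + c_1 x + \cdots + c_L x^L$ with $u_n + c_1 u_{n-1} + \cdots + c_L u_{n-L} = 0$.

```python
from fractions import Fraction

def berlekamp_massey(seq):
    """Online BM: return (C, L) where C = [1, c_1, ..., c_L] encodes a shortest
    relation u_n + c_1 u_{n-1} + ... + c_L u_{n-L} = 0 for the given table."""
    u = [Fraction(t) for t in seq]
    C, B = [Fraction(1)], [Fraction(1)]   # current / previous relation polynomial
    L, m, b = 0, 1, Fraction(1)           # L: current length; m: steps since B changed
    for n in range(len(u)):
        # discrepancy of the current relation against the newly revealed term u_n
        d = u[n] + sum(C[i] * u[n - i] for i in range(1, L + 1))
        if d == 0:
            m += 1
            continue
        coef, T = d / b, C[:]
        C = C + [Fraction(0)] * (len(B) + m - len(C))
        for i, Bi in enumerate(B):
            C[i + m] -= coef * Bi          # C(x) -= (d/b) x^m B(x)
        if 2 * L <= n:                     # the relation must grow: remember the old one
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L
```

On the Fibonacci table $(0, 1, 1, 2, 3, 5, 8, 13)$ this sketch returns $C(x) = 1 - x - x^2$ and $L = 2$, i.e. the LFSR $u_n = u_{n-1} + u_{n-2}$.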
Computing linear recurrence relations of multi-dimensional sequences finds applications in Coding Theory, Computer Algebra and Combinatorics.

Historically, the BM algorithm was designed to decode cyclic codes, like the BCH codes, Bose and Ray-Chaudhuri (1960); Hocquenghem (1959). Therefore, decoding $n$-dimensional cyclic codes, a generalization of Reed–Solomon codes, was Sakata's motivation for designing the BMS algorithm in Sakata (1991).

On the other hand, as the output of the BMS and the Scalar-FGLM algorithms is a Gröbner basis, a natural application in Computer Algebra is the computation of a Gröbner basis of an ideal for another order, typically from a total degree ordering to an elimination ordering. In fact, the latest versions of the Sparse-FGLM algorithm rely heavily on the BM and BMS algorithms, see Faugère and Mou (2011, 2017).

Finally, computing linear recurrence relations with polynomial coefficients finds applications in Computer Algebra for computing properties of univariate and multivariate Special Functions. The Dynamic Dictionary of Mathematical Functions (DDMF, Benoit et al. (2010)) generates automatically web-pages on univariate special functions through the differential equations they satisfy. Equivalently, they could be generated through the linear recurrence relations satisfied by the sequence of coefficients of their Taylor series. This motivated the extension of the Scalar-FGLM algorithm to handle relations with polynomial coefficients in Berthomieu and Faugère (2016).

The main goal of this paper is to compare both the BMS and the Scalar-FGLM algorithms.
As it is not possible to store the whole input sequence, both algorithms take a bound as an input and only handle sequence terms up to this bound.

We start by recalling some classical notation and definitions that shall be used in the proofs and the algorithms of the paper in Section 2. Then, in order to be self-contained, we dedicate the next two sections to a presentation of each algorithm.

A lot of articles, such as Bras-Amorós and O'Sullivan (2006); Sakata (1988, 1990, 2009), or book chapters, such as (Cox et al., 2005, Chapter 10), present the BMS algorithm. Some of them deal with the very general case of an ordered domain. We specialize this description to the simpler case of a polynomial ring $\mathbb{K}[x_1, \ldots, x_n]$. In the BMS algorithm, the input bound is a monomial, so that the algorithm shall visit every monomial in increasing order up to the bound.

On the other hand, in Section 4, we describe the Scalar-FGLM algorithm with a point of view closer to the BMS algorithm. In the Scalar-FGLM algorithm, the input bound is a set of terms which contains the staircase of the computed Gröbner basis.

These presentations shall help us to first design a new algorithm in between both of them in Section 5. Then, they will help us to compare them in Section 6, the main contribution of this paper. We detail exactly how both algorithms behave similarly and how, depending on the input, they can surprisingly differ.

A main likeness between both algorithms is that they determine which monomials are in the Gröbner basis staircase. However, they handle the leading terms outside of this staircase differently.

Theorem 1.
Let $u = (u_{i,j})_{(i,j) \in \mathbb{N}^2}$ be a sequence and let $\prec$ be a degree monomial ordering. Assuming we call each algorithm on $u$, $\prec$ and a bound allowing us to find the same set $S$ as the staircase, then

• for any monomial $m$ on the border of $S$, the BMS algorithm returns a relation with leading term $m$; therefore, the computed ideal of relations is zero-dimensional;

• the Scalar-FGLM algorithm returns relations with leading terms on the border of $S$ but may fail to close the staircase; therefore, the computed ideal of relations might be positive-dimensional.

If $u$ is linear recurrent and the bound is big enough, then both algorithms compute correctly the ideal of relations of $u$.

The last part of the theorem is important, as in most applications $u$ is linear recurrent. Therefore, both algorithms are able to retrieve the ideal of relations of $u$. We refer to Theorem 15 for a more precise and general version of this result.

By design, these algorithms return a set of relations, satisfied by the sequence terms, and their shifts, i.e. how far these relations have been tested. The following theorem proves that the outputs of the algorithms are quite different. This should convince the reader that the algorithms do not compute the same thing whenever the bound is too low or $u$ is not linear recurrent. It is a specialization of Theorem 19 to the binomial sequence.

Theorem 2.
Let $b = \left(\binom{i}{j}\right)_{(i,j) \in \mathbb{N}^2}$ be the sequence of the binomial coefficients and let $\prec$ be a total degree monomial ordering. Assume we call each algorithm on $b$, $\prec$ and a bound allowing us to retrieve the same relations $x\,y - y - 1$, $y^d$, $(x-1)^d$, with $d > 1$.

• The Scalar-FGLM algorithm ensures that the shifts of the three relations are equal: they are still valid when multiplied by all the monomials of degree at most $d-1$.

• The
BMS algorithm ensures that the shifts of $y^d$ and $(x-1)^d$ are less than the shift of $x\,y - y - 1$: relations $y^d$ and $(x-1)^d$ are still valid when multiplied by all the monomials of degree at most $d-1$, while relation $x\,y - y - 1$ is still valid when multiplied by all the monomials of degree at most $2d-3$.

In other words, the lesser the leading monomial of a relation computed by the BMS algorithm, the greater its shift.
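The validity claims on the binomial sequence can be checked directly. The sketch below (helper names such as `bracket` are ours) evaluates $[x^{s_1} y^{s_2} f]_b$ on the table $b_{i,j} = \binom{i}{j}$ and confirms that, for $d = 3$, the three relations remain valid when multiplied by every monomial of degree at most $d - 1$, while $y^d$ fails once multiplied by $x^d$:

```python
from math import comb

def bracket(f, shift=(0, 0)):
    """[x^s1 y^s2 f]_b on the binomial table b_{i,j} = C(i, j);
    f maps exponent pairs (i, j) of x^i y^j to coefficients."""
    s1, s2 = shift
    return sum(c * comb(i + s1, j + s2) for (i, j), c in f.items())

d = 3
pascal = {(1, 1): 1, (0, 1): -1, (0, 0): -1}     # x*y - y - 1 (Pascal's rule)
y_d = {(0, d): 1}                                 # y^d
x1_d = {(k, 0): comb(d, k) * (-1) ** (d - k) for k in range(d + 1)}  # (x - 1)^d

# every shift monomial x^a y^b of degree at most d - 1
shifts = [(a, b) for a in range(d) for b in range(d) if a + b <= d - 1]
ok = all(bracket(f, s) == 0 for f in (pascal, y_d, x1_d) for s in shifts)
```

Here `ok` is `True`, whereas `bracket(y_d, (d, 0))` equals $\binom{d}{d} = 1 \ne 0$: the relation $y^d$ is only valid up to a bounded shift, as the theorem states.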
We mentioned earlier that the Sparse-FGLM algorithm was a possible application of these algorithms. Although they are not meant to be run with the lexicographical ordering, we prove the following result to illustrate the difference in behaviors of these algorithms. This result is extended to any dimension in Theorem 20.

Theorem 3.
Let $u = (u_{i,j})_{(i,j) \in \mathbb{N}^2}$ be a linear recurrent sequence whose ideal of relations $I = \langle g(y), x - f(y) \rangle$ is in shape position for the lex$(y \prec x)$ ordering, with $\deg f < \deg g = d$ and $g$ squarefree. Assume we call each algorithm on $u$, the lex$(y \prec x)$ ordering, and a bound on the sequence terms.

• The Scalar-FGLM algorithm, with the set of terms $T = \{1, y, \ldots, y^{d-1}\}$, yields the ideal $I$.

• The
BMS algorithm, visiting the monomials $1, y, \ldots, y^{2d}$, yields the ideal $\langle g(y), x \rangle$. This ideal is not $I$ unless $f = 0$.

In other words, the Scalar-FGLM algorithm can retrieve an ideal of relations in shape position while, in general, the BMS algorithm cannot.
Finally, in Section 7, we compare the algorithms based on the number of basic operations and the number of table queries they perform. We show that the Scalar-FGLM algorithm performs in general more queries to the table than the BMS algorithm. Yet, in the best-case scenario, where the leading terms of the Gröbner basis of the ideal are all the monomials of a given degree, the Scalar-FGLM algorithm has a better behavior than the BMS algorithm.
We are now in a position where the BMS algorithm and the Scalar-FGLM algorithm are well understood and where we know that each algorithm has strengths and weaknesses.

As anticipated in the original paper, the naive linear algebra solver in the Scalar-FGLM algorithm is its main weakness. Therefore, a fast multi-Hankel solver could improve this algorithm. Moreover, although its presentation is that of a global algorithm, it can be turned into an iterative one using naive Gaussian elimination. Thus, a fast multi-Hankel arithmetic could also be useful for an iterative variant of the algorithm.

On the other hand, the BMS algorithm is a truly iterative algorithm: if, in addition to the relations, one outputs the set of failing relations (see Remark 10), then one could continue the computation up to a farther bound with no additional cost. Moreover, it is a faster algorithm since it uses polynomial arithmetic instead of linear algebra.

A consequence of this paper could be the design of a hybrid algorithm taking advantage of both the BMS and the Scalar-FGLM algorithms. Another direction would be the study of adaptive variants of the algorithms. The Adaptive Scalar-FGLM algorithm (Berthomieu et al. (2015, 2016)) is a more efficient variant of the Scalar-FGLM algorithm trying not to test the computed relations too far in order to minimize the table queries and the complexity. Likewise, one could design an adaptive variant of the BMS algorithm based on this philosophy and study their complexities.

In summary, the goal would be to take a step further in the hybrid approach, using the efficiency of the polynomial arithmetic in the BMS algorithm to compute the relations and the smaller number of queries performed by the Adaptive Scalar-FGLM algorithm.
2. Preliminaries
In this section, we present classical notation that shall be used all along the paper. We also present some definitions that will be useful for all the proofs and algorithms.
Let $n \ge 1$. We write $\mathbf{i} = (i_1, \ldots, i_n) \in \mathbb{N}^n$. Likewise, we denote $\mathbf{x} = (x_1, \ldots, x_n)$ and, for $\mathbf{i} \in \mathbb{N}^n$, we write $\mathbf{x}^{\mathbf{i}} = x_1^{i_1} \cdots x_n^{i_n}$. Let $u = (u_{\mathbf{i}})_{\mathbf{i} \in \mathbb{N}^n}$ be an $n$-dimensional sequence over the field $\mathbb{K}$. If there exists a finite set of indices $K \subset \mathbb{N}^n$ and numbers $(\alpha_{\mathbf{k}})_{\mathbf{k} \in K}$ in the field $\mathbb{K}$ such that

$$\forall\, \mathbf{i} \in \mathbb{N}^n, \quad \sum_{\mathbf{k} \in K} \alpha_{\mathbf{k}}\, u_{\mathbf{k}+\mathbf{i}} = 0, \qquad (1)$$

then we say that $u$ satisfies the linear recurrence relation (simply relation in the following) defined by $\alpha = (\alpha_{\mathbf{k}})_{\mathbf{k} \in K}$.

Example 1. Let $b$ be the 2-dimensional sequence of the binomial coefficients, $b = \left(\binom{i}{j}\right)_{(i,j) \in \mathbb{N}^2}$. Then Pascal's rule:

$$\forall\,(i,j) \in \mathbb{N}^2, \quad b_{i+1,j+1} - b_{i,j+1} - b_{i,j} = 0$$

is a linear recurrence relation for the sequence $b$.

As we can only work with a finite number of terms of a sequence, in this paper, a table shall denote a finite subset of terms of a sequence: it is one of the input parameters of the algorithms.

Given a finite table extracted from the sequence $u$, the main purpose of the BMS and the Scalar-FGLM algorithms is to, loosely speaking, determine a minimal set of relations that will allow us to generate this finite table using only the values of $u$ on their supports.

In order to study the relations satisfied by the sequence $u$, it will be useful to associate them with polynomials in $\mathbb{K}[\mathbf{x}]$.

Definition 1.
Let $f = \sum_{\mathbf{k} \in K} \alpha_{\mathbf{k}}\, \mathbf{x}^{\mathbf{k}} \in \mathbb{K}[\mathbf{x}]$. We will denote by $[f]_u$, or $[f]$ when no ambiguity arises, the linear combination $\sum_{\mathbf{k} \in K} \alpha_{\mathbf{k}}\, u_{\mathbf{k}}$. Moreover, if $\alpha$ defines a relation for $u$, that is, for all $\mathbf{i} \in \mathbb{N}^n$, $[\mathbf{x}^{\mathbf{i}} f] = 0$, then we say that $f$ is the polynomial of this relation.

The main benefit of the $[\,]$ notation resides in the immediate fact that for all indices $\mathbf{i}$, $[\mathbf{x}^{\mathbf{i}} f] = \sum_{\mathbf{k} \in K} \alpha_{\mathbf{k}}\, u_{\mathbf{k}+\mathbf{i}}$.

In the previous example, the Pascal's rule relation is associated with the polynomial $P = x\,y - y - 1$, so that $\forall\,(i,j) \in \mathbb{N}^2$, $[x^i y^j P] = 0$.

Definition 2 (Fitzpatrick and Norton (1990); Sakata (1988)). Let $u = (u_{\mathbf{i}})_{\mathbf{i} \in \mathbb{N}^n}$ be an $n$-dimensional sequence with coefficients in $\mathbb{K}$. The sequence $u$ is linear recurrent if, from a nonzero finite number of initial terms $\{u_{\mathbf{i}}, \mathbf{i} \in S\}$ and a finite number of linear recurrence relations, without any contradiction, one can compute any term of the sequence.

Equivalently, $u$ is linear recurrent if its ideal of relations $\{f, \forall\, m \in \mathbb{K}[\mathbf{x}],\ [m\,f] = 0\}$ is zero-dimensional.

2.2. Gröbner bases

We let $\mathcal{T}$ be the set of all monomials in $\mathbb{K}[\mathbf{x}]$, i.e. $\mathcal{T} = \{\mathbf{x}^{\mathbf{i}}, \mathbf{i} \in \mathbb{N}^n\}$. A monomial ordering $\prec$ on $\mathbb{K}[\mathbf{x}]$ is an order relation satisfying the following three classical properties:

1. for all $m \in \mathcal{T}$, $1 \preceq m$;
2. for all $m, m', s \in \mathcal{T}$, $m \prec m' \Rightarrow m\,s \prec m'\,s$;
3. every subset of $\mathcal{T}$ has a least element for $\prec$.

For a monomial ordering $\prec$ on $\mathbb{K}[\mathbf{x}]$, the leading monomial of $f$, denoted $\mathrm{lm}(f)$, is the greatest monomial in the support of $f$ for $\prec$. The leading coefficient of $f$, denoted $\mathrm{lc}(f)$, is the nonzero coefficient of $\mathrm{lm}(f)$. The leading term of $f$, $\mathrm{lt}(f)$, is defined as $\mathrm{lt}(f) = \mathrm{lc}(f)\,\mathrm{lm}(f)$. For an ideal $I$, we denote, classically, $\mathrm{lm}(I) = \{\mathrm{lm}(f), f \in I\}$.

We recall briefly the definition of a Gröbner basis and a staircase.

Definition 3.
Let $I$ be a nonzero ideal of $\mathbb{K}[\mathbf{x}]$ and let $\prec$ be a monomial ordering. A set $G \subseteq I$ is a Gröbner basis of $I$ if for all $f \in I$, there exists $g \in G$ such that $\mathrm{lm}(g) \mid \mathrm{lm}(f)$.

The set $G$ is a minimal Gröbner basis of $I$ if for any $g \in G$, $G \setminus \{g\}$ does not span $I$.

Furthermore, $G$ is (minimal) reduced if for any $g, g' \in G$, $g \ne g'$, and any monomial $m \in \mathrm{supp}\, g'$, $\mathrm{lt}(g) \nmid m$.

Let $G$ be a reduced Gröbner basis; the staircase of $G$ is

$$S = \mathrm{Staircase}(G) = \{s \in \mathcal{T},\ \forall g \in G,\ \mathrm{lm}(g) \nmid s\}.$$

It is also the canonical basis of $\mathbb{K}[\mathbf{x}]/I$.
By definition, a staircase is stable under division: that is, for any $s, s' \in \mathcal{T}$, if $s$ is in the staircase of $G$ and $s' \mid s$, then $s'$ is also in the staircase of $G$.

In some instances, the goal will be to make the smallest Gröbner basis staircase from a monomial set $S$: this is done by adding all the divisors of the elements of $S$. We denote this by stabilizing $S$ with the $\mathrm{Stabilize}(S)$ command.

We now present the notation of Sakata (1988, 1990, 2009) and relate it to polynomials and polynomial ideals. This definition shall act like a dictionary between Sakata's notation in these papers and the polynomial algebra notation. We also refer to Guisse (2017), (Mora, 2009, Section 1) and (Sakata, 2009, Section 2) for this kind of dictionary.
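These two operations are straightforward on exponent tuples. The following sketch (our own helper names, monomials encoded as exponent tuples) computes a staircase from a set of leading monomials restricted to a finite box, and the Stabilize closure:

```python
from itertools import product

def divides(s, m):
    """x^s divides x^m, componentwise on exponent tuples."""
    return all(a <= b for a, b in zip(s, m))

def staircase(lead, box):
    """Monomials of `box` divisible by no leading monomial in `lead`."""
    return {m for m in box if not any(divides(g, m) for g in lead)}

def stabilize(S):
    """Close a monomial set under division by adding every divisor."""
    return {t for m in S for t in product(*(range(e + 1) for e in m))}

box = set(product(range(4), repeat=2))   # bivariate monomials x^a y^b, a, b <= 3
```

For instance, with the leading monomials of $\langle x^2, x\,y, y^2 \rangle$ (the ideal $J$ revisited later in this section), `staircase({(2, 0), (1, 1), (0, 2)}, box)` is $\{1, x, y\}$.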
Definition 4.
Given a set of polynomials $G \subseteq \mathbb{K}[\mathbf{x}]$:

• $\Sigma(G) = \{\mathbf{x}^{\mathbf{i}},\ \exists g \in G,\ \mathrm{lm}(g) \mid \mathbf{x}^{\mathbf{i}}\}$. Whenever $G$ is a Gröbner basis of an ideal $I$, $\Sigma(G)$ is by definition $\mathrm{lm}(I)$.

• As $\Sigma(G)$ satisfies $\forall s \in \Sigma(G), m \in \mathcal{T}$, if $s \mid m$, then $m \in \Sigma(G)$, it has minimal elements for the division. They form the set $\sigma(G) = \min_{|}(\Sigma(G))$. Whenever $G$ is a minimal Gröbner basis of an ideal $I$, $\sigma(G)$ is by definition $\mathrm{lm}(G)$.

• $\Delta(G) = \mathcal{T} \setminus \Sigma(G)$. Whenever $G$ is a Gröbner basis of an ideal $I$, $\Delta(G)$ is its staircase, the canonical basis of $\mathbb{K}[\mathbf{x}]/I$.

• As $\Delta(G)$ satisfies $\forall d \in \Delta(G), m \in \mathcal{T}$, if $m \mid d$, then $m \in \Delta(G)$, it has maximal elements for the division. They form the set $\delta(G) = \max_{|}(\Delta(G))$. Whenever $G$ is a Gröbner basis of an ideal $I$, $\delta(G)$ is by definition the corner set of the staircase.

Gröbner basis theory allows us to choose any monomial ordering $\prec$. Among all the monomial orderings, we will mainly use the

• lex$(x_n \prec \cdots \prec x_1)$ ordering, which compares monomials as follows: $\mathbf{x}^{\mathbf{i}} \prec \mathbf{x}^{\mathbf{i}'}$ if, and only if, there exists $k$, $1 \le k \le n$, such that for all $\ell < k$, $i_\ell = i'_\ell$ and $i_k < i'_k$, see (Cox et al., 2015, Chapter 2, Definition 3);

• drl$(x_n \prec \cdots \prec x_1)$ ordering, which compares monomials as follows: $\mathbf{x}^{\mathbf{i}} \prec \mathbf{x}^{\mathbf{i}'}$ if, and only if, $i_1 + \cdots + i_n < i'_1 + \cdots + i'_n$, or $i_1 + \cdots + i_n = i'_1 + \cdots + i'_n$ and there exists $k$, $2 \le k \le n$, such that for all $\ell > k$, $i_\ell = i'_\ell$ and $i_k > i'_k$. Equivalently, there exists $k$, $1 \le k \le n$, such that for all $\ell > k$, $i_1 + \cdots + i_\ell = i'_1 + \cdots + i'_\ell$ and $i_1 + \cdots + i_k < i'_1 + \cdots + i'_k$, see (Cox et al., 2015, Chapter 2, Definition 6).

However, in the BMS algorithm, we need to be able to enumerate all the monomials up to a bound monomial. This forces the user to take an ordering $\prec$ such that for all $M \in \mathcal{T}$, the set $\{m \prec M,\ m \in \mathcal{T}\}$ is finite.
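For concreteness, the two comparisons can be sketched on exponent tuples $(i_1, \ldots, i_n)$, with $i_1$ the exponent of the $\prec$-largest variable $x_1$ (a sketch under our conventions):

```python
def lex_less(i, j):
    """lex(x_n < ... < x_1): exponent tuples compare left to right,
    which is exactly Python's tuple comparison."""
    return i < j

def drl_less(i, j):
    """drl(x_n < ... < x_1): first by total degree, then by the last
    differing exponent, with the inequality reversed."""
    if sum(i) != sum(j):
        return sum(i) < sum(j)
    for a, b in zip(reversed(i), reversed(j)):
        if a != b:
            return a > b
    return False
```

For $n = 2$ with tuples $(i_x, i_y)$, drl$(y \prec x)$ gives $y \prec x$ and $x\,y \prec x^2$, while for lex$(y \prec x)$ every power $y^k$ is below $x$; for drl, in contrast, $x \prec y^5$, which illustrates why only (weighted) degree orderings leave finitely many monomials below a bound.

```python
```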
Such an ordering $\prec$ makes $(\mathbb{N}^n, \prec)$ isomorphic to $(\mathbb{N}, <)$, thus it makes sense to speak about the next monomial for $\prec$. This requirement excludes for instance the lex ordering, and more generally any elimination ordering. In other words, only weighted degree orderings, or weight orderings, should be used. It is well known that any monomial ordering $\prec$ on $\mathcal{T}$ can be obtained from a matrix $A \in \mathbb{R}^{n \times n}$ through: $\mathbf{x}^{\mathbf{i}} \prec \mathbf{x}^{\mathbf{j}}$ if, and only if, $\mathbf{x}^{A \cdot \mathbf{i}} \prec_{\mathrm{lex}} \mathbf{x}^{A \cdot \mathbf{j}}$, see Erdös (1956). Such a matrix $A \in \mathbb{R}^{n \times n}$ defines a monomial ordering if its first row is nonnegative. It defines a weight ordering if its first row is positive, see Robbiano (1986) and (Cox et al., 2015, Chapter 2, Exercises 4.10 and 4.11).

Definition 5.
Let $I$ be a homogeneous ideal of $\mathbb{K}[\mathbf{x}]$ and let $\prec$ be a monomial ordering. A set $G \subseteq I$ is a $d$-truncated Gröbner basis, or truncated Gröbner basis of $I$ up to degree $d$, if for all $g \in G$, $\deg g \le d$, and for all $f \in I$, if $\deg f \le d$, then there exists a $g \in G$ such that $\mathrm{lt}(g) \mid \mathrm{lt}(f)$.

This can be computed using any Gröbner basis algorithm by discarding critical pairs of degree greater than $d$.

For an affine ideal $I$, an analogous definition of $d$-truncated Gröbner basis exists. It is the output of a Gröbner basis algorithm discarding all critical pairs $(f, f')$ with $\deg \mathrm{lt}(f) + \deg \mathrm{lt}(f') - \deg \mathrm{lcm}(\mathrm{lt}(f), \mathrm{lt}(f')) > d$, i.e. with degree higher than $d$. In this situation, a $d$-truncated Gröbner basis $G$ will span the subspace of polynomials $\sum_{g \in G} h_g\, g$ with $\deg h_g \le d - \deg g$.

A truncated Gröbner basis $G$ is reduced if for any $g, g' \in G$ and any monomial $m \in \mathrm{supp}\, g$, $\mathrm{lm}(g') \nmid m$.

The following definition extends the definition of the staircase of a Gröbner basis to truncated Gröbner bases.

Definition 6.
Let $G$ be a reduced truncated Gröbner basis; the staircase of $G$ is

$$S = \mathrm{Staircase}(G) = \{s \in \mathcal{T},\ \forall g \in G,\ \mathrm{lm}(g) \nmid s\}.$$

From any ideal $J \subseteq \mathbb{K}[\mathbf{x}]$, it is clear that one can construct a sequence $u = (u_{\mathbf{i}})_{\mathbf{i} \in \mathbb{N}^n}$ whose ideal of relations contains $J$: from a Gröbner basis $G$ of $J$ with staircase $S$, set the values of the sequence terms $u_{\mathbf{i}} = [\mathbf{x}^{\mathbf{i}}]$, for $\mathbf{x}^{\mathbf{i}} \in S$, as desired and then compute the terms $u_{\mathbf{j}} = [\mathbf{x}^{\mathbf{j}}]$, for $\mathbf{x}^{\mathbf{j}} \in \mathrm{lm}(I)$, using the relations given by $G$.

However, Proposition 3.3 in Brachat et al. (2010) proves that there are nonzero ideals of $\mathbb{K}[\mathbf{x}]$ that cannot be the ideals of relations of linear recurrent sequences, whenever $n \ge 2$. Indeed, the ideal of relations is necessarily Gorenstein, Gorenstein (1952); Macaulay (1934), and problems occur only if $J$ has a zero of multiplicity at least 2.

For instance, there is no bivariate sequence $u = (u_{i,j})_{(i,j) \in \mathbb{N}^2}$ whose ideal of relations $I$ is $J = \langle x^2, x\,y, y^2 \rangle$. That is, any sequence $u$ satisfying $u_{2+i,j} = u_{1+i,1+j} = u_{i,2+j} = 0$, for all $(i,j) \in \mathbb{N}^2$, satisfies a relation induced by a degree-1 polynomial. Hence, $I$ strictly contains $J$.

The following theorem can also be found in (Elkadi and Mourrain, 2007, Theorem 8.3).

Theorem 5.
Let $I \subseteq \mathbb{K}[\mathbf{x}]$ be a $0$-dimensional ideal and let $R = \mathbb{K}[\mathbf{x}]/I$. The ideal $I$ (resp. ring $R$) is Gorenstein if, equivalently:

• $R$ and its dual are isomorphic as $R$-modules;

• there exists a $\mathbb{K}$-linear form $\tau$ on $R$ such that the following bilinear form is nondegenerate:

$$R \times R \to \mathbb{K}, \quad (a, b) \mapsto \tau(a\,b).$$

On the one hand, this result is important for the Sparse-FGLM application. If the input ideal is not Gorenstein, the output ideal will be bigger. However, this can be easily tested by comparing the degrees of the input and output ideals. On the other hand, this yields a probabilistic test for the Gorenstein property of an ideal $J$: pick initial conditions at random, construct a sequence thanks to these initial conditions and $J$, and then compute the ideal $I$ of relations of the sequence. If $I = J$, then $J$ is Gorenstein. We refer to Daleo and Hauenstein (2016) for another test of the Gorenstein property of an ideal.

A matrix $H \in \mathbb{K}^{m \times n}$ is Hankel if there exists a sequence $u = (u_i)_{i \in \mathbb{N}}$ such that for all $(i, i')$, the coefficient $h_{i,i'}$ lying on the $i$th row and $i'$th column of $H$ satisfies $h_{i,i'} = u_{i+i'}$.

In a multivariate setting, we can extend this notion of Hankel matrices to multi-Hankel matrices. Indexing the rows and columns with monomials $\mathbf{x}^{\mathbf{i}} = x_1^{i_1} \cdots x_n^{i_n}$ and $\mathbf{x}^{\mathbf{i}'} = x_1^{i'_1} \cdots x_n^{i'_n}$, the coefficient of $H$ lying on the row labeled with $\mathbf{x}^{\mathbf{i}}$ and the column labeled with $\mathbf{x}^{\mathbf{i}'}$ is $u_{\mathbf{i}+\mathbf{i}'}$. Given two sets of monomials $U$ and $T$, we let $H_{U,T}$ be the multi-Hankel matrix with rows (resp. columns) indexed with monomials in $U$ (resp. $T$).

Example 2.
Let $u = (u_{i,j})_{(i,j) \in \mathbb{N}^2}$ be a sequence. Let $U = \{1, y, y^2, x, x\,y, x\,y^2, x^2, x^2 y, x^2 y^2\}$ and $T = \{1, y, x, x\,y, x^2, x^2 y\}$. Then, with rows labeled by $U$ and columns by $T$ in these orders,

$$H_{U,T} = \begin{pmatrix}
u_{0,0} & u_{0,1} & u_{1,0} & u_{1,1} & u_{2,0} & u_{2,1} \\
u_{0,1} & u_{0,2} & u_{1,1} & u_{1,2} & u_{2,1} & u_{2,2} \\
u_{0,2} & u_{0,3} & u_{1,2} & u_{1,3} & u_{2,2} & u_{2,3} \\
u_{1,0} & u_{1,1} & u_{2,0} & u_{2,1} & u_{3,0} & u_{3,1} \\
u_{1,1} & u_{1,2} & u_{2,1} & u_{2,2} & u_{3,1} & u_{3,2} \\
u_{1,2} & u_{1,3} & u_{2,2} & u_{2,3} & u_{3,2} & u_{3,3} \\
u_{2,0} & u_{2,1} & u_{3,0} & u_{3,1} & u_{4,0} & u_{4,1} \\
u_{2,1} & u_{2,2} & u_{3,1} & u_{3,2} & u_{4,1} & u_{4,2} \\
u_{2,2} & u_{2,3} & u_{3,2} & u_{3,3} & u_{4,2} & u_{4,3}
\end{pmatrix}.$$

We can see that this matrix is a $3 \times 3$ block matrix with Hankel blocks of size $3 \times 2$.

Let $T = \{1, y, x, y^2, x\,y, x^2\}$; then the following matrix has a less obvious structure:

$$H_{T,T} = \begin{pmatrix}
u_{0,0} & u_{0,1} & u_{1,0} & u_{0,2} & u_{1,1} & u_{2,0} \\
u_{0,1} & u_{0,2} & u_{1,1} & u_{0,3} & u_{1,2} & u_{2,1} \\
u_{1,0} & u_{1,1} & u_{2,0} & u_{1,2} & u_{2,1} & u_{3,0} \\
u_{0,2} & u_{0,3} & u_{1,2} & u_{0,4} & u_{1,3} & u_{2,2} \\
u_{1,1} & u_{1,2} & u_{2,1} & u_{1,3} & u_{2,2} & u_{3,1} \\
u_{2,0} & u_{2,1} & u_{3,0} & u_{2,2} & u_{3,1} & u_{4,0}
\end{pmatrix}.$$
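A sketch of the multi-Hankel construction (helper names are ours), together with the kernel fact that drives the Scalar-FGLM algorithm: on the binomial sequence of Example 1, the coefficient vector $(-1, -1, 1)$ of the Pascal relation $x\,y - y - 1$ lies in the kernel of any $H_{U,T}$ whose columns are its support $\{1, y, x\,y\}$:

```python
from itertools import product
from math import comb

def multi_hankel(u, U, T):
    """H_{U,T}: the entry on row x^i (i in U) and column x^j (j in T) is u(i + j)."""
    return [[u(*(a + b for a, b in zip(i, j))) for j in T] for i in U]

def b(i, j):
    """The binomial sequence b_{i,j} = C(i, j) of Example 1."""
    return comb(i, j)

# rows and columns labeled 1, y, x, i.e. exponent pairs (i_x, i_y)
U = [(0, 0), (0, 1), (1, 0)]
H = multi_hankel(b, U, U)

# columns restricted to the support {1, y, x y} of x y - y - 1
rows = sorted(product(range(3), repeat=2))
K = multi_hankel(b, rows, [(0, 0), (0, 1), (1, 1)])
in_kernel = all(-r[0] - r[1] + r[2] == 0 for r in K)
```

Here `H` is the $3 \times 3$ matrix $H_{\{1,y,x\},\{1,y,x\}}$ for $b$, and `in_kernel` is `True`: each row identity $-b_{i,j} - b_{i,j+1} + b_{i+1,j+1} = 0$ is exactly Pascal's rule.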
3. The BMS algorithm
As in Guisse (2017), we specialize to $\mathbb{K}[\mathbf{x}]$ the presentation of the BMS algorithm given in Bras-Amorós and O'Sullivan (2006), Cox et al. (2005) and Sakata (2009) in the more general case of ordered domains.
Given a table $u = (u_{\mathbf{i}})_{\mathbf{i} \in \mathbb{N}^n}$ and a weight ordering $\prec$ for $\mathbf{x}$, we let $\mathcal{T} = \{0\} \cup \{\mathbf{x}^{\mathbf{i}},\ \mathbf{i} \in \mathbb{N}^n\}$ and extend $\prec$ (still denoted by $\prec$) to $\mathcal{T}$ with the convention that $0 \prec m$. The algorithm proceeds monomial by monomial, by only considering, at each step $m$, the table $(u_{\mathbf{k}})_{\mathbf{k},\ \mathbf{x}^{\mathbf{k}} \preceq m}$. As we only know the table $u$ partially, we need to define some notions according to this partial knowledge at step $m$.
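This bookkeeping can be made concrete on the binomial table of Example 1. The sketch below (our helper names, bivariate drl$(y \prec x)$ only) enumerates shift monomials in increasing order and reports the first monomial at which a relation's bracket becomes nonzero — the "fail" formalized in the next definition:

```python
from math import comb

def bracket(f, shift):
    """[x^s1 y^s2 f]_b on the binomial table b_{i,j} = C(i, j)."""
    return sum(c * comb(i + shift[0], j + shift[1]) for (i, j), c in f.items())

def drl_monomials(maxdeg):
    """Bivariate monomials (i_x, i_y) in increasing drl(y < x) order."""
    for d in range(maxdeg + 1):
        for jy in range(d, -1, -1):
            yield (d - jy, jy)

def fail(f, lm, maxdeg):
    """First monomial t * lm(f), t running in drl order, with [t f]_b != 0;
    None if no discrepancy is found up to the degree bound."""
    for t in drl_monomials(maxdeg):
        if bracket(f, t) != 0:
            return (t[0] + lm[0], t[1] + lm[1])
    return None
```

For instance `fail({(2, 0): 1, (1, 0): -1}, (2, 0), 4)` returns `(2, 1)`: the relation $x^2 - x$ fails at $x^2 y$, matching Example 3 below, while Pascal's relation $x\,y - y - 1$ never fails.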
Let $m \in \mathcal{T}$ and let $f \in \mathbb{K}[\mathbf{x}]$. We say that the relation $f$ is valid up to $m$ whenever

$$\forall\, t \in \mathcal{T},\ \mathrm{lm}(t\,f) \preceq m \Rightarrow [t\,f] = 0.$$

We thus define the shift of $f$ as $\mathrm{shift}(f) = \frac{m}{\mathrm{lm}(f)}$.

We say that the relation $f$ fails at $m$ whenever

$$\forall\, t \in \mathcal{T},\ \mathrm{lm}(t\,f) \prec m \Rightarrow [t\,f] = 0, \quad \text{and} \quad \left[\frac{m}{\mathrm{lm}(f)}\, f\right] \ne 0.$$

We define the fail of $f$ as $\mathrm{fail}(f) = m$. If the relation $f$ never fails, that is, for all $t \in \mathcal{T}$, $[t\,f] = 0$, then by convention $\mathrm{fail}(f) = \mathrm{shift}(f) = +\infty$.

Proposition 6.
Let $u$ be a table and $f \in \mathbb{K}[\mathbf{x}]$ such that $\mathrm{fail}(f) \succ m$. For all $g \in \mathbb{K}[\mathbf{x}]$, if $\mathrm{lm}(g\,f) \preceq m$, then $[g\,f] = 0$.

The following proposition shows how to combine two failing relations with the same shift in order to obtain a new relation valid with a bigger shift.

Proposition 7.
Let $f_1$ and $f_2$ be two relations such that $v = \frac{\mathrm{fail}(f_1)}{\mathrm{lm}(f_1)} = \frac{\mathrm{fail}(f_2)}{\mathrm{lm}(f_2)}$ and $e_1 = [v\,f_1]$, $e_2 = [v\,f_2]$. Let $f$ be the nonzero polynomial $f_1 - \frac{e_1}{e_2}\, f_2$. Then, for $i \in \{1, 2\}$, $\mathrm{fail}(f) \succ \mathrm{fail}(f_i)$, i.e. $\frac{\mathrm{fail}(f)}{\mathrm{lm}(f)} \succ v$.

Proof. For any $c \in \mathbb{K}$ and any $\mu \in \mathbb{K}[\mathbf{x}]$ such that $\mathrm{lm}(\mu) \prec v$, we have $[\mu\,(f_1 + c\,f_2)] = [\mu\,f_1] + c\,[\mu\,f_2] = 0$, hence $\mathrm{fail}(f_1 + c\,f_2) \succeq \mathrm{fail}(f_i)$.

It remains to prove that for a good choice of $c$, we have a strict inequality: as $[v\,(f_1 + c\,f_2)] = [v\,f_1] + c\,[v\,f_2] = e_1 + c\,e_2$, it is clear that $[v\,f] = [v\,(f_1 - \frac{e_1}{e_2}\, f_2)] = 0$, hence $\mathrm{fail}(f) \succ v\,\mathrm{lm}(f) \succeq \mathrm{fail}(f_i)$.

Definition 8.
Using the same notation as in Definition 6, we let

$$I_m = \{f \in \mathbb{K}[\mathbf{x}],\ \mathrm{fail}(f) \succ m\},$$

and we let $G_m$ be the set of the least elements of $I_m$ for $\prec$; it is a truncated Gröbner basis of $I_m$:

$$G_m = \min_{\prec}\{g,\ g \in I_m\}, \qquad S_m = \mathrm{Staircase}(G_m).$$

Example 3.
Let us go back to Example 1 with the sequence $b = \left(\binom{i}{j}\right)_{(i,j) \in \mathbb{N}^2}$. Consider $\mathbb{K}[x, y]$ with the drl$(y \prec x)$ ordering, and $m = x^2$.

[table of the first binomial coefficients $b_{i,j}$]

From this table, on the one hand, we can deduce that

• since the table is not identically $0$, there is no relation with leading monomial $1$ valid up to $x^2$, hence $1 \in S_{x^2}$;

• since $[y + \alpha] = \alpha$ and $[x\,(y + \alpha)] = 1 + \alpha$, there is no relation with leading monomial $y$ valid up to $x\,y$ and thus $x^2$, hence $y \in S_{x^2}$;

• since $[y\,(x + \beta\,y + \alpha)] = 1$, there is no relation with leading monomial $x$ valid up to $x\,y$ and thus $x^2$, hence $x \in S_{x^2}$.

On the other hand, we can check that

• since $[y^2] = 0$, relation $y^2$ is valid up to $y^2$ and thus $x^2$, hence $y^2 \in \mathcal{T} \setminus S_{x^2}$;

• since $[x\,y - 1] = 0$, relation $x\,y - 1$ is valid up to $x\,y$ and thus $x^2$, hence $x\,y \in \mathcal{T} \setminus S_{x^2}$;

• since $[x^2 - x] = 0$, relation $x^2 - x$ is valid up to $x^2$, hence $x^2 \in \mathcal{T} \setminus S_{x^2}$.

Therefore, $S_{x^2} = \{1, y, x\}$, $\max_{|}(S_{x^2}) = \{y, x\}$ and $\min_{|}(\mathcal{T} \setminus S_{x^2}) = \{y^2, x\,y, x^2\}$. This is summed up in the following diagram.

[staircase diagram; $\mathsf{J}$: $\min_{|}(\mathcal{T} \setminus S_{x^2})$, $\mathsf{N}$: $\max_{|}(S_{x^2})$]

Let us notice that many relations with respective leading monomials $y^2$, $x\,y$, $x^2$ actually suit. These would be $y^2 - \alpha_1\,x + \alpha_2\,y + \alpha_1$, $x\,y - (1 + \alpha_1)\,x + \alpha_2\,y + \alpha_1$ and $x^2 - (1 + \alpha_1)\,x + \alpha_2\,y + \alpha_1$.

Furthermore, $I_{x^2}$ is not stable under addition: $(x^2 - x), (x^2 - 2x + 1) \in I_{x^2}$ but $(x^2 - x) - (x^2 - 2x + 1) = (x - 1) \notin I_{x^2}$ since $\mathrm{fail}(x - 1) = x\,y$. Hence, $I_{x^2}$ is not an ideal of $\mathbb{K}[x, y]$.

For $m = x^3$, with the following table, we find that

[table of the first binomial coefficients $b_{i,j}$]

• since $[y^2] = [y\,y^2] = [x\,y^2] = 0$, then $y^2$ is valid up to $x\,y^2$ and thus $x^3$;

• since $[x\,y - 1] = [y\,(x\,y - 1)] = 0$ and $[x\,(x\,y - 1)] = 1$, then $x\,y - 1$ fails at $x^2 y$. Yet, since $[y] = [y\,y] = 0$ and $[x\,y] = 1$, then by Proposition 7, $[x\,y - y - 1] = [y\,(x\,y - y - 1)] = 0$ and $[x\,(x\,y - y - 1)]$ vanishes as well. Hence, $x\,y - y - 1$ is valid up to $x^2 y$ and thus $x^3$;

• since $[x^2 - x] = 0$ and $[y\,(x^2 - x)] = 1$, then $x^2 - x$ fails at $x^2 y$.
Likewise, since $[x - 1] = 0$ and $[y\,(x - 1)] = 1$, then $[x^2 - 2x + 1] = 0$ and $[y\,(x^2 - 2x + 1)] = 0$. Furthermore, $[x\,(x^2 - 2x + 1)] = 0$, so that $x^2 - 2x + 1$ is valid up to $x^3$.

Therefore, $S_{x^3} = \{1, y, x\}$, $\max_{|}(S_{x^3}) = \{y, x\}$ and $\min_{|}(\mathcal{T} \setminus S_{x^3}) = \{y^2, x\,y, x^2\}$. We can also check that these relations span the only valid relations with support in $S_{x^3} \cup \{y^2, x\,y, x^2\}$.

[staircase diagram; $\mathsf{J}$: $\min_{|}(\mathcal{T} \setminus S_{x^3})$, $\mathsf{N}$: $\max_{|}(S_{x^3})$]

Although $I_m$ is not an ideal in general, we have the following results:

Proposition 8.
Using the notation of Definitions 7 and 8:

• $I_m$ is closed under multiplication by elements of $\mathbb{K}[\mathbf{x}]$;

• for all monomials $t, t'$ such that $t \mid t'$: (a) if $t' \in S_m$, then $t \in S_m$; (b) if $t \in \mathcal{T} \setminus S_m$, then $t' \in \mathcal{T} \setminus S_m$.

Moreover, it is clear that the sequence $(I_m)_{m \in \mathcal{T}}$ is decreasing and that if $u$ is linear recurrent then $I = \bigcap_{m \in \mathcal{T}} I_m$. Therefore, $(S_m)_{m \in \mathcal{T}}$ is increasing and its limit is $S$, the finite target staircase. Hence, for $m$ big enough, $S_m$ will be the target staircase. We will give an upper bound in Proposition 11.

The following result gives an intrinsic characterization of $S_m$ that is key in the iteration of the BMS algorithm.

Proposition 9.
For all monomials $m \in \mathcal{T}$, $S_m = \left\{\frac{\mathrm{fail}(f)}{\mathrm{lm}(f)},\ f \notin I_m\right\}$.

Furthermore, let $m^+$ be the successor of $m$ and let $s$ be a monomial in the staircase $S_{m^+}$. Then, $s$ was added at step $m^+$, i.e. $s \notin S_m$, if, and only if, $s \mid m^+$ and $\frac{m^+}{s} \in S_{m^+} \setminus S_m$.

Proof. We shall prove the first assertion by double inclusion. If $s = \frac{\mathrm{fail}(f)}{\mathrm{lm}(f)}$, then for all $g \in \mathbb{K}[\mathbf{x}]$ such that $\mathrm{lm}(g) = s$, $\mathrm{fail}(g) \preceq m$, hence $s \notin \mathrm{lm}(I_m)$, $s \in S_m$.

The reverse inclusion is proved by induction on $m$. For $m = 1$, $S_1 = \emptyset$ and there is nothing to do. Let us assume the inclusion is satisfied for a monomial $m$. Let $s \in S_{m^+}$. On the one hand, if $s \in S_m$, then there exists $f \in \mathbb{K}[\mathbf{x}] \setminus I_m \subseteq \mathbb{K}[\mathbf{x}] \setminus I_{m^+}$ such that $s = \frac{\mathrm{fail}(f)}{\mathrm{lm}(f)}$.

If, on the other hand, $s \in S_{m^+} \setminus S_m$, then there exists a relation $f \in \mathbb{K}[\mathbf{x}]$ such that $\mathrm{lm}(f) = s$ and $m \prec \mathrm{fail}(f) \preceq m^+$, hence $\mathrm{fail}(f) = m^+$ and $s$ divides $m^+$. Let us assume that for all $g \in \mathbb{K}[\mathbf{x}]$ with $\mathrm{lm}(g) = \frac{m^+}{s}$, we have $\mathrm{fail}(g) \preceq m \prec m^+$. Therefore, $\frac{m^+}{s} \in S_m$ and there exists $h \notin I_m$ such that $\frac{\mathrm{fail}(h)}{\mathrm{lm}(h)} = \frac{m^+}{s}$. By Proposition 7, there is $\alpha \in \mathbb{K}$ such that $\mathrm{fail}(f - \alpha\,h) \succ m^+$. Since $\mathrm{fail}(h) \preceq m \prec m^+$, then $\mathrm{lm}(h) \preceq s$ and $\mathrm{lm}(f - \alpha\,h) = s$, hence $\frac{\mathrm{fail}(f - \alpha\,h)}{\mathrm{lm}(f - \alpha\,h)} \succ \frac{m^+}{s}$. This contradicts the fact that $\frac{m^+}{s} \in S_m$. Thus there exists a $g \in \mathbb{K}[\mathbf{x}]$ with $\mathrm{lm}(g) = \frac{m^+}{s}$ and $\mathrm{fail}(g) \succeq m^+$.

Let $g$ be such a relation; since $\mathrm{fail}(f) = m^+$, then $\left[\frac{m^+}{\mathrm{lm}(g)}\,g\right] \ne 0$ and $\mathrm{fail}(g) = m^+$. Therefore, $\frac{\mathrm{fail}(g)}{\mathrm{lm}(g)} = \frac{m^+}{m^+/s} = s$, so that $s \in \left\{\frac{\mathrm{fail}(f)}{\mathrm{lm}(f)},\ f \notin I_{m^+}\right\}$.

Now, we proved that $s \in S_{m^+} \setminus S_m$ implies $s \mid m^+$ and $\frac{m^+}{s} \in S_{m^+} \setminus S_m$. This implication is clearly an equivalence.

From this proposition, it follows that if $m \in \mathcal{T}$ and $m^+$ is its successor:

$$\max_{|}(S_{m^+}) = \max_{|}\left(\max_{|}(S_m) \cup \left(\left\{\tfrac{m^+}{s},\ s \in \min_{|}(\mathcal{T} \setminus S_m)\right\} \cap S_{m^+}\right)\right). \qquad (2)$$
(2)Relation 2 allows us to construct, iterating on the monomial m , the set of re-lations G m representing the truncated Gr¨obner basis of I m . Relations g ∈ G m areindexed by their leading monomials, describing T \ S m . Remark 10.
We can also construct another set, describing the edge of S m , stilldenoted S m , as there is a one-to-one correspondence between a staircase and itsedge. The relations h ∈ S m are indexed by their ratio fail( h ) lm ( h ) between their fail andtheir leading monomial, describing the full staircase of I m .When two relations h and h ′ in S m are such that fail( h ) lm ( h ) = fail( h ′ ) lm ( h ′ ) , then we onlyneed to keep one. Since the goal is to combine a relation of S m with a relationfailing at m + to make a new one with a bigger shift, as in Proposition 7, it is bestto handle smaller polynomials. This yields Algorithm 1.We saw that for m big enough, S m will be the target staircase. We now give anupper bound. Proposition 11.
Let u be a linear recurrent sequence and I be its ideal of relations. Let S be the staircase of I for ≺ and let s_max be the largest monomial in S. Then, for m ⪰ (s_max)², S_m = S.
Let G be a minimal Gröbner basis of I for ≺ and let g_max be the largest leading monomial of G. Then, for m ⪰ s_max · max_≺(g_max, s_max), the BMS algorithm returns a minimal Gröbner basis of I for ≺.

Example 4.
For the drl(y ≺ x) ordering and I = ⟨x^p, y^q⟩ with q > p ≥ 1, we have s_max = x^{p−1} y^{q−1} and g_max = y^q. Therefore, the right staircase is found at most at step m = x^{2p−2} y^{2q−2}, while the Gröbner basis is found at most at step x^{p−1} y^{q−1} · max_≺(x^{p−1} y^{q−1}, y^q), i.e. y^{2q−1} if p = 1 and x^{2p−2} y^{2q−2} otherwise.
From Propositions 9 and 11, we can deduce that S = { fail(f)/lm(f), f ∉ I }.

Algorithm 1: The BMS algorithm.
Input:
A table u = ( u i ) i ∈ N n with coe ffi cients in K , a monomial ordering ≺ and amonomial M as the stopping condition. Output:
A set G of relations generating I_M.

T := {m ∈ K[x], m ⪯ M}. // ordered for ≺
G := {1}. // the future Gröbner basis
S := ∅. // staircase edge, elements will be [h, fail(h)/lm(h)]
For all m ∈ T do
  S′ := S.
  For g ∈ G do
    If lm(g) | m then
      e := [(m/lm(g)) g]_u.
      If e ≠ 0 then S′ := S′ ∪ {[g/e, m/lm(g)]}.
  S′ := min_{fail(h)∈S′} {[h, fail(h)/lm(h)]}. // see Remark 10
  G′ := Border(S′).
  For g′ ∈ G′ do
    Let g ∈ G be such that lm(g) | lm(g′).
    If lm(g) ∤ m then g′ := (lm(g′)/lm(g)) g. // translates the relation
    Else if ∃ h ∈ S, m/lm(g′) | fail(h) then g′ := (lm(g′)/lm(g)) g − [(m/lm(h)) h]_u ((lm(g′) fail(h))/m) h. // see Proposition 7
    Else g′ := g.
  G := G′. S := S′.
Return G.

Example 5. We give the trace of the algorithm called on the binomial sequence b for the drl(y ≺ x) ordering up to the monomial x³ (hence visiting all the monomials of degree at most 3). To simplify the reading, whenever a relation succeeds in m or cannot be tested in m, we skip the updating part, as this relation remains the same. We start with the empty staircase S and the relation G = {1}.

For the monomial 1:
The relation g = 1 fails since [b_{0,0}] = 1. Thus S′ = {[1, 1]}.
S′ is updated to {[1, 1]} and G′ = {y, x}.
For the relation g′ = y, y ∤ 1, thus g′ = y.
For the relation g′ = x, x ∤ 1, thus g′ = x.
We update G := G′ = {y, x} and S := S′ = {[1, 1]}.

For the monomial y:
The relation g = y succeeds since [b_{0,1}] = 0.
Nothing must be done for the relation g = x.
S′ is set to {[1, 1]} and G′ = {y, x}.
We set g′ = y and g′ = x.
We update G := G′ = {y, x} and S := S′ = {[1, 1]}.

For the monomial x:
Nothing must be done for the relation g = y.
The relation g = x fails since [b_{1,0}] = 1. Thus S′ = {[1, 1], [x, 1]}.
S′ is set to {[1, 1]} and G′ = {y, x}.
We set g′ = y.
For the relation g′ = x, x | x and x/x | fail(1), hence g′ = x − 1.
We update G := G′ = {y, x − 1} and S := S′ = {[1, 1]}.

For the monomial y²:
The relation g = y succeeds since [b_{0,2}] = 0.
Nothing must be done for the relation g = x − 1.
S′ is set to {[1, 1]} and G′ = {y, x}.
We set g′ = y and g′ = x − 1.
We update G := G′ = {y, x − 1} and S := S′ = {[1, 1]}.

For the monomial xy:
The relation g = y fails since [b_{1,1}] = 1. Thus S′ = {[1, 1], [y, x]}.
The relation g = x − 1 fails since [b_{1,1} − b_{0,1}] = 1. Thus S′ = {[1, 1], [y, x], [x − 1, y]}.
S′ is set to {[y, x], [x − 1, y]} and G′ = {y², xy, x²}.
For the relation g′ = y², y² ∤ xy, thus g′ = y².
For the relation g′ = xy, xy | xy and xy/xy | fail(y), hence g′ = xy − 1.
For the relation g′ = x², x² ∤ xy, thus g′ = x² − x.
We update G := G′ = {y², xy − 1, x² − x} and S := S′ = {[y, x], [x − 1, y]}.

For the monomial x²:
Nothing must be done for the relation g = y².
Nothing must be done for the relation g = xy − 1.
The relation g = x² − x succeeds since [b_{2,0} − b_{1,0}] = 0.
S′ is set to {[y, x], [x − 1, y]} and G′ = {y², xy, x²}.
We set g′ = y², g′ = xy − 1 and g′ = x² − x.
We update G := G′ = {y², xy − 1, x² − x} and S := S′ = {[y, x], [x − 1, y]}.

For the monomial y³:
The relation g = y² succeeds since [b_{0,3}] = 0.
Nothing must be done for the relation g = xy − 1.
Nothing must be done for the relation g = x² − x.
S′ is set to {[y, x], [x − 1, y]} and G′ = {y², xy, x²}.
We set g′ = y², g′ = xy − 1 and g′ = x² − x.
We update G := G′ = {y², xy − 1, x² − x} and S := S′ = {[y, x], [x − 1, y]}.

For the monomial xy²:
The relation g = y² succeeds since [b_{1,2}] = 0.
The relation g = xy − 1 succeeds since [b_{1,2} − b_{0,1}] = 0.
Nothing must be done for the relation g = x² − x.
S′ is set to {[y, x], [x − 1, y]} and G′ = {y², xy, x²}.
We set g′ = y², g′ = xy − 1 and g′ = x² − x.
We update G := G′ = {y², xy − 1, x² − x} and S := S′ = {[y, x], [x − 1, y]}.

For the monomial x²y:
Nothing must be done for the relation g = y².
The relation g = xy − 1 fails since [b_{2,1} − b_{1,0}] = 1. Thus S′ = {[y, x], [x − 1, y], [xy − 1, x]}.
The relation g = x² − x fails since [b_{2,1} − b_{1,1}] = 1. Thus S′ = {[y, x], [x − 1, y], [xy − 1, x], [x² − x, y]}.
S′ is set to {[y, x], [x − 1, y]} and G′ = {y², xy, x²}.
We set g′ = y².
For the relation g′ = xy, xy | x²y and x²y/xy | fail(y), hence g′ = xy − y − 1.
For the relation g′ = x², x² | x²y and x²y/x² | fail(x − 1), hence g′ = x² − 2x + 1.
We update G := G′ = {y², xy − y − 1, x² − 2x + 1} and S := S′ = {[y, x], [x − 1, y]}.

For the monomial x³:
Nothing must be done for the relation g = y².
Nothing must be done for the relation g = xy − y − 1.
The relation g = x² − 2x + 1 succeeds since [b_{3,0} − 2 b_{2,0} + b_{1,0}] = 0.
S′ is set to {[y, x], [x − 1, y]} and G′ = {y², xy, x²}.
We set g′ = y², g′ = xy − y − 1 and g′ = x² − 2x + 1.
We update G := G′ = {y², xy − y − 1, x² − 2x + 1} and S := S′ = {[y, x], [x − 1, y]}.

The algorithm returns the relations y², xy − y − 1 and x² − 2x + 1, all three with a shift x³.

3.2. A Linear Algebra interpretation of the BMS algorithm
In order to make the presentation of the BMS algorithm closer to that of the Scalar-FGLM algorithm, we propose to replace every evaluation using the [ ] operator with a matrix-vector product.
As stated above, given a monic relation f = lm(f) + Σ_{s∈S} α_s s, testing the shift of this relation by a monomial m is done with the bracket operator, i.e. testing whether [m f] = 0. Denoting by f⃗ the vector of the coefficients of f, indexed by S ∪ {lm(f)}, with α_s in the coordinate of s ∈ S and 1 in the coordinate of lm(f), this can also be done by testing whether the matrix-vector product
H_{{m}, S∪{lm(f)}} f⃗ = ( ⋯ [m s] ⋯ [m lm(f)] ) (⋯, α_s, ⋯, 1)^T
is 0. The shift and the fail of a relation, i.e. Definition 7, then become as follows.

Definition 9.
Let f = lt(f) + Σ_{s∈S} α_s s be a polynomial.
The monomial m is a shift of f if
H_{{1,…,m}, S∪{lm(f)}} f⃗ = 0,
where the rows of the multi-Hankel matrix are indexed by all the monomials from 1 to m, its columns by S ∪ {lm(f)}, and the row of a monomial μ reads ( ⋯ [μ s] ⋯ [μ lm(f)] ).
Let m⁺ be the successor of m; m⁺ lm(f) is the fail of f if
H_{{1,…,m,m⁺}, S∪{lm(f)}} f⃗ = (0, …, 0, e)^T, with e ≠ 0.
We can also write another proof of Proposition 7 with a matrix viewpoint.
Proof of Proposition 7.
Let f₁ = lm(f₁) + Σ_{s∈S} α_s s and f₂ = lm(f₂) + Σ_{s∈S′} β_s s be monic. Let v⁻ be the predecessor of v and let S̃ = (S ∪ S′) \ {lm(f₁), lm(f₂)}. Assuming lm(f₁) ≠ lm(f₂), we have
H_{{1,…,v⁻,v}, S̃∪{lm(f₁),lm(f₂)}} (f⃗₁ + c f⃗₂) = (0, …, 0, e₁ + c e₂)^T,
where the last coordinate corresponds to the row of v. It is now clear that the vector f⃗₁ − (e₁/e₂) f⃗₂ is in the kernel of this matrix. That is, the polynomial f₁ − (e₁/e₂) f₂ has a shift v.
Changing every evaluation into a matrix-vector product in the BMS algorithm yields the following presentation of the BMS algorithm, namely Algorithm 2.

Algorithm 2: Linear Algebra variant of the BMS algorithm.
Input:
A table u = ( u i ) i ∈ N n with coe ffi cients in K , a monomial ordering ≺ and amonomial M as the stopping condition. Output:
A set G of relations generating I_M.

T := {m ∈ K[x], m ⪯ M}. // ordered for ≺
G := {1}. // the future Gröbner basis
S := ∅. // staircase edge, elements will be [h, fail(h)/lm(h)]
For all m ∈ T do
  S′ := S.
  For g ∈ G do
    If lm(g) | m then
      e := H_{{m/lm(g)}, supp(g)} g⃗.
      If e ≠ 0 then S′ := S′ ∪ {[g/e, m/lm(g)]}.
  S′ := min_{fail(h)∈S′} {[h, fail(h)/lm(h)]}. // see Remark 10
  G′ := Border(S′).
  For g′ ∈ G′ do
    Let g ∈ G be such that lm(g) | lm(g′).
    If lm(g) ∤ m then g′ := (lm(g′)/lm(g)) g. // shifts the relation
    Else if ∃ h ∈ S, m/lm(g′) | fail(h) then g′ := (lm(g′)/lm(g)) g − (H_{{m/lm(h)}, supp(h)} h⃗) ((lm(g′) fail(h))/m) h. // see Prop. 7
    Else g′ := g.
  G := G′. S := S′.
Return G.

4. The Scalar-FGLM algorithm

This section is devoted to the description of the Scalar-FGLM algorithm introduced in Berthomieu et al. (2015, 2016).
The Scalar-FGLM algorithm aims at computing linear recurrence relations of a multidimensional sequence with a matrix viewpoint and an approach close to the FGLM algorithm, see Faugère et al. (1993).
The main idea is to shift the linear recurrence relations in order to determine their coefficients. As we can only know a finite number of the sequence terms, we need the following definition.

Definition 10.
Let f ∈ K[x] and let T be a set of monomials in x. We say that f has a shift T if
∀ m ∈ T, [m f] = 0. (3)

Remark 12.
We would like to emphasize that this definition is close to Definition 7 of the shift for the BMS algorithm.
Whenever T is the set of monomials T_M = {m, m ⪯ M}, f has a shift T_M if, and only if, f has a shift M, i.e. fail(f) ≻ lm(f) M.
Unless stated otherwise, we will now always assume that the set T is stable by division.
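To make Definition 10 concrete, here is a minimal Python sketch of the shift test, with exact rational arithmetic. It is our illustration, not the authors' implementation, and it assumes the binomial sequence is b_{i,j} = C(i, j), a formula consistent with the relation xy − y − 1 and with the staircases computed in the examples of this paper.

```python
# Sketch of Definition 10: f has a shift T iff [m f] = 0 for every m in T.
# Monomials are (x-exponent, y-exponent) tuples; a polynomial is a dict
# mapping monomials to coefficients; u maps a monomial to a sequence term.
from fractions import Fraction
from math import comb

def bracket(f, u):
    """[f] = sum, over the support of f, of coefficient * u_m."""
    return sum(Fraction(c) * u(m) for m, c in f.items())

def shift_by(f, m):
    """The polynomial m * f."""
    return {(t[0] + m[0], t[1] + m[1]): c for t, c in f.items()}

def has_shift(f, T, u):
    """True iff [m f] = 0 for all monomials m in T."""
    return all(bracket(shift_by(f, m), u) == 0 for m in T)

# Assumed model of the binomial sequence: b_{i,j} = C(i, j).
b = lambda m: Fraction(comb(m[0], m[1]))
f = {(1, 1): 1, (0, 1): -1, (0, 0): -1}   # the relation x*y - y - 1
T = [(i, j) for i in range(4) for j in range(4)]
```

Here has_shift(f, T, b) returns True, since xy − y − 1 is a true relation of the binomial sequence (Pascal's rule), whereas a polynomial such as y alone already fails at the shift m = x, where [x y] = b_{1,1} ≠ 0.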
From the relations [x^{i+d} + Σ_{k∈K} α_k x^{i+k}] = 0, valid for all x^i ∈ T, one can deduce that the polynomial P = x^d + Σ_{k∈K} α_k x^k satisfies [m P] = 0 for all m ∈ T. In other words, P has a shift T.
To determine P with a shift T_M, it suffices to solve the linear system
[x^d + Σ_{k∈K} α_k x^k] = 0,
⋮
[m x^d + Σ_{k∈K} α_k m x^k] = 0,
⋮
[M x^d + Σ_{k∈K} α_k M x^k] = 0.
Before determining the coefficients of the relations, one needs to determine their support.

Definition 11.
Let T be a finite subset of terms. We say that a finite set S ⊂ T is a useful staircase with respect to u, T and ≺ if
Σ_{t∈S} β_t [m t] = 0, ∀ m ∈ S
implies that β_t = 0 for all t ∈ S, S is maximal for the inclusion and minimal for ≺. We compare two ordered sets for ≺ by seeing them as tuples of their elements and then comparing them lexicographically.
We recall that for two sets of terms U and T, the multi-Hankel matrix associated with U and T is
H_{U,T} = ( [m m′] )_{m′∈U, m∈T},
i.e. the matrix whose row indexed by m′ ∈ U and column indexed by m ∈ T contains [m m′]. Whenever U = {1, x, …, x^{k−1}} and T = {1, x, …, x^{ℓ−1}}, then H_{U,T} is a classical Hankel matrix of size k × ℓ.
Definition 11 can be rewritten in terms of a matrix rank.

Definition 12.
Let T be a finite subset of terms. We say that a finite set S ⊂ T is a useful staircase with respect to u, T and ≺ if the matrix H_{S,S} has full rank, equal to #S and to rank H_{T,T}, and S is minimal for the inclusion and for ≺. We compare two ordered sets for ≺ by seeing them as tuples of their elements and then comparing them lexicographically.
In other words, S is the column rank profile of the matrix H_{T,T}.
As noted by the authors, it is important to notice that useful staircases need not be Gröbner basis staircases, as proven by the following example. Though, if the set of terms T contains the true staircase of the ideal of relations I with respect to ≺, then the useful staircase will be this staircase, as expected.

Example 6.
We consider the bivariate sequence u = (1_{i=j=1})_{(i,j)∈N²}, i.e. the table whose only nonzero term is u_{1,1} = 1, whose ideal of relations is ⟨y², x²⟩. The useful staircase with respect to u, T = {1, y, x, y²} and drl(y ≺ x) is S = {y, x}, as the columns labeled with 1 and y² of the matrix

H_{T,T} =
        1  y  x  y²
1    [  0  0  0  0 ]
y    [  0  0  1  0 ]
x    [  0  1  0  0 ]
y²   [  0  0  0  0 ]

are zero. However, for a bigger set T′ = {1, y, x, y², xy, x²}, the useful staircase of the matrix

H_{T′,T′} =
        1  y  x  y² xy x²
1    [  0  0  0  0  1  0 ]
y    [  0  0  1  0  0  0 ]
x    [  0  1  0  0  0  0 ]
y²   [  0  0  0  0  0  0 ]
xy   [  1  0  0  0  0  0 ]
x²   [  0  0  0  0  0  0 ]

is the true staircase {1, y, x, xy}, which is stable by division.

Proposition 13.
If S is the useful staircase with respect to the finite subset T and ≺, then for all m ∈ T \ S, there exists a relation with support in S ∪ {m}, but not in S, with a shift T. In particular, we can always pick m in the border of S.
Proof. If m ∈ T \ S, then S ∪ {m} is bigger than S. As the rank of H_{T,S∪{m}} cannot be #(S ∪ {m}) = #S + 1 = rank H_{T,S} + 1, it must be #S. Therefore, the last column of H_{T,S∪{m}}, labeled with m, is a linear combination of the previous ones, i.e. there is a relation with support in S ∪ {m} but not in S.
Finding this relation is straightforward, as it suffices to solve the nondegenerate linear system H_{T,S} α + H_{T,{m}} = 0. Solving only H_{S,S} α + H_{S,{m}} = 0 also yields a relation with support in S ∪ {m}; nothing ensures that this relation has a shift T whenever m ∉ T, though.
In the Scalar-FGLM algorithm presented in Berthomieu et al. (2015, 2016), a relation was returned for every m in the border of S, whether m was in T or not, by solving the linear system H_{S,S} α + H_{S,{m}} = 0. This would mean that some relations could be returned without even being tested with a shift T, see also Example 10. Therefore, it seems preferable to only return relations with support in T, to ensure the shift T.
This yields Algorithm 3, which thus differs a little bit from the one in the aforementioned articles.

Example 7.
We give the trace of the algorithm called on two sequences: the sequence u = (2^i 3^j (i + 1))_{(i,j)∈N²} and the binomial sequence b, with the drl(y ≺ x) ordering, and on the set T = {1, y, x, y², xy, x²} of all the monomials of degree at most d = 2.

Algorithm 3: The Scalar-FGLM algorithm.
Input:
A table u = ( u i ) i ∈ N n with coe ffi cients in K , ≺ a monomial ordering and T aset of terms in x stable by division. Output:
A reduced truncated Gröbner basis with respect to ≺ of the ideal of relations of u with staircase included in T.

Build the matrix H_{T,T}.
Compute the useful staircase (column rank profile) S of H_{T,T} such that rank H_{T,T} = rank H_{S,S}.
S′ := Stabilize(S). // the staircase (stable under division)
L := T \ S′. // the set of next terms to study
G := ∅. // the future Gröbner basis
While L ≠ ∅ do
  t := min_≺(L).
  Find α such that H_{S,S} α + H_{S,{t}} = 0.
  G := G ∪ {t + Σ_{s∈S} α_s s}.
  Remove multiples of t in L and sort L by increasing order (with respect to ≺).
Return G.

We build the matrix

H_{T,T} =
        1    y    x   y²   xy   x²
1    [  1    3    4    9   12   12 ]
y    [  3    9   12   27   36   36 ]
x    [  4   12   12   36   36   32 ]
y²   [  9   27   36   81  108  108 ]
xy   [ 12   36   36  108  108   96 ]
x²   [ 12   36   32  108   96   80 ]

The useful staircase of this matrix is S = {1, x}. It is stable by division, so S′ = S.
We set L = {y, y², xy, x², y³, xy², x²y, x³} and G = ∅.
We take t = y and solve H_{S,S} α + H_{S,{y}} = 0, which yields the relation y − 3, so G = {y − 3} and L is updated to {x², x³}.
We take t = x² and solve H_{S,S} α + H_{S,{x²}} = 0, which yields the relation x² − 4x + 4, so G = {y − 3, x² − 4x + 4} and L is updated to ∅.
We return G = {y − 3, x² − 4x + 4}.
Furthermore, the relations g ∈ G satisfy [m g] = 0 for all m ∈ T = {1, y, x, y², xy, x²}, i.e. have a shift T.

We build the matrix

H_{T,T} =
        1  y  x  y² xy x²
1    [  1  0  1  0  1  1 ]
y    [  0  0  1  0  0  2 ]
x    [  1  1  1  0  2  1 ]
y²   [  0  0  0  0  0  1 ]
xy   [  1  0  2  0  1  3 ]
x²   [  1  2  1  1  3  1 ]

The useful staircase of this matrix is S = {1, y, x, y², x²}. It is stable by division, so S′ = S.
We set L = {xy, xy², x²y} and G = ∅.
We take t = xy and solve H_{S,S} α + H_{S,{xy}} = 0, which yields the relation xy − y − 1, so G = {xy − y − 1} and L is updated to ∅.
We return G = {xy − y − 1}.
Furthermore, this relation g ∈ G satisfies [m g] = 0 for all m ∈ T = {1, y, x, y², xy, x²}, i.e. has a shift T.
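The computation of Example 7 can be replayed mechanically. The following self-contained Python sketch — our own illustration, not the authors' code — rebuilds H_{T,T}, extracts the useful staircase as a column rank profile and solves for the missing border relation, on the binomial sequence assumed to be b_{i,j} = C(i, j):

```python
# Sketch of the Scalar-FGLM loop on the binomial sequence b_{i,j} = C(i, j),
# for drl(y < x) and T = all monomials of degree <= 2 in x, y.
from fractions import Fraction
from math import comb

b = lambda m: Fraction(comb(m[0], m[1]))        # the binomial sequence
mul = lambda m, n: (m[0] + n[0], m[1] + n[1])   # product of two monomials
drl = lambda m: (m[0] + m[1], m[0])             # sort key for drl(y < x)

def hankel(rows, cols, u):
    """Multi-Hankel matrix H_{rows,cols} with entries [m m'] = u(m m')."""
    return [[u(mul(r, c)) for c in cols] for r in rows]

def column_rank_profile(M):
    """Indices of the first linearly independent columns (exact elimination)."""
    A = [row[:] for row in M]
    profile, piv = [], 0
    for c in range(len(A[0]) if A else 0):
        p = next((r for r in range(piv, len(A)) if A[r][c] != 0), None)
        if p is None:
            continue                             # dependent column: skip it
        A[piv], A[p] = A[p], A[piv]
        for r in range(len(A)):
            if r != piv and A[r][c] != 0:
                f = A[r][c] / A[piv][c]
                A[r] = [a - f * q for a, q in zip(A[r], A[piv])]
        profile.append(c)
        piv += 1
    return profile

def solve(M, rhs):
    """Solve M a = rhs for a square nonsingular M (Gauss-Jordan)."""
    n = len(M)
    A = [M[r][:] + [rhs[r]] for r in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        A[c] = [a / A[c][c] for a in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                A[r] = [a - A[r][c] * q for a, q in zip(A[r], A[c])]
    return [A[r][n] for r in range(n)]

T = sorted(((i, j) for i in range(3) for j in range(3) if i + j <= 2), key=drl)
S = [T[c] for c in column_rank_profile(hankel(T, T, b))]
# S is the useful staircase {1, y, x, y^2, x^2}; the next term to study is xy.
t = (1, 1)
alpha = solve(hankel(S, S, b), [-b(mul(m, t)) for m in S])
relation = {t: Fraction(1), **{s: a for s, a in zip(S, alpha) if a != 0}}
# relation now encodes x*y - y - 1.
```

The exact arithmetic (Fraction) matters here: the staircase is a rank profile, and floating-point rank decisions could misclassify a border column.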
5. Another linear algebra solver inspired by the BMS algorithm
In this section, we design an algorithm for computing the ideal of relations of a sequence that is close to both the BMS algorithm and the Scalar-FGLM algorithm. The main idea will be to increase the number of rows and columns of several multi-Hankel matrices and to check whether the ranks of these matrices are increasing.
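The rank test underlying this idea can be sketched as follows (our Python illustration, with exact arithmetic over Q, and with the binomial sequence assumed to be b_{i,j} = C(i, j)): a monomial t enters the staircase exactly when its column increases the rank of the multi-Hankel matrix built on the rows examined so far.

```python
# Sketch of the incremental rank test: does adding the column of t to
# H_{rows, S} increase its rank?
from fractions import Fraction
from math import comb

b = lambda m: Fraction(comb(m[0], m[1]))   # assumed binomial sequence

def hankel(rows, cols, u):
    return [[u((r[0] + c[0], r[1] + c[1])) for c in cols] for r in rows]

def rank(M):
    """Exact rank over Q by Gaussian elimination."""
    A = [row[:] for row in M]
    r = 0
    for c in range(len(A[0]) if A else 0):
        p = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        for i in range(r + 1, len(A)):
            if A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * q for a, q in zip(A[i], A[r])]
        r += 1
    return r

def enters_staircase(rows, S, t, u):
    """True iff the column of t increases the rank of H_{rows, S}."""
    return rank(hankel(rows, S + [t], u)) > rank(hankel(rows, S, u))
```

For instance, with rows {1, y, x} the column of y raises the rank of the matrix from 1 to 2, so y enters the staircase, while with the single row {1} it does not — this is exactly the dichotomy appearing in the trace of Example 9.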
Proposition 14.
Let S be a staircase and g be a relation on sequence u such that lm ( g ) lies on the border of S and supp( g ) ⊆ S ∪ { lm ( g ) } . Assume furthermore thatg has a shift m, that is [ g µ ] = for all µ (cid:22) m.Let m + be the successor of m. If [ m + g ] , , then the linear system P s ∈ S α s [ s ] + [ lm ( g )] = ... P s ∈ S α s [ m s ] + [ m lm ( g )] = P s ∈ S α s [ m + s ] + [ m + lm ( g )] = has no solution and there is no nonzero valid relation with support in Stabilize (cid:0) S ∪ { m + } (cid:1) .Proof. This is a consequence of Proposition 9.28 xample 8. Let us consider the binomial sequence b and relations y andx − . We know that the relation y has a shift y, i.e. [ y ] = [ y ] = , and wewant to check if a relation with leading monomial y has a shift x. Therefore,we need to solve α [1] + [ y ] = α [ y ] + [ y ] = α [ x ] + [ x y ] = ⇐⇒ α = = α + = which has no solution. Hence x is in the staircase. Thanks to Proposition 9,since relation [ y ] fails in x y for [ x y ] = , we can also determine that x is inthe staircase.Likewise, we know that the relation x − has a shift , i.e. [ x − = , and wewant to check if a relation with leading monomial x has a shift y. Therefore,we need to solve α [1] + [ x ] = α [ y ] + [ x y ] = ⇐⇒ α + = = which has no solution. Hence y is in the staircase. We still consider the binomial sequence b but with relations y , x y − andx − x. We know that the relation x − x has a shift , i.e. [ x − x ] = , and wewant to check if a relation with leading monomial x has a shift y. Therefore,we need to solve α [1] + α y [ y ] + α x [ x ] + [ x ] = α [ y ] + α y [ y ] + α x [ x y ] + [ x y ] = ⇐⇒ α + α x + = α x + = whose solution is α x = − , α = and α y is any. Hence, although therelation x − x fails at x y for [ x y − x y ] = , the relation x − x + doesnot and has a shift y. This yields Algorithm 4.
Example 9.
We detail how Algorithm 4 behaves on the binomial sequence b up tomonomial x . We start with the empty staircase S and the relation , with V = ∅ .For the monomial , the matrix H { } , ∅ has rank while the matrix H { } , { } = ( ) has rank , hence S is updated to { } and the relations are now y, withV y = ∅ , and x, with V x = ∅ .For the monomial y, lgorithm 4: Linear Algebra solver.
Input:
A table u = ( u i ) i ∈ N n with coe ffi cients in K , a monomial ordering ≺ and amonomial M as the stopping condition. Output:
A set G of relations generating I M . T : = { m ∈ K [ x ] , m (cid:22) M } . // ordered for ≺ G : = { [1 , ∅ ] } . // the future Gb, elements will be [ g , V g ] S : = ∅ . // the staircase For all m ∈ T do S ′ : = S For g ∈ G doIf lm ( g ) | m thenIf rank H V g ∪ n m lm ( g ) o , S < rank H V g ∪ n m lm ( g ) o , S ∪{ lm ( g ) } then S ′ : = S ′ ∪ n m lm ( g ) o . Else V g : = V g ∪ n m lm ( g ) o . S : = Stabilize( S ′ ). G : = Border( S ). For g ∈ G do V g : = { µ ∈ K [ x ] , µ lm ( g ) (cid:22) m } For g ∈ G do Find α such that H V g , S α + H V g , { lm ( g ) } = g : = g + P s ∈ S α s s . Return G . oth matrices H { } , { } = ( ) and H { } , { , y } = ( ) have rank , hence V y is updated to { } ;as x does not divide y, nothing is done.For the monomial x,as y does not divide x, nothing is done;both matrices H { } , { } = ( ) and H { } , { , x } = ( ) have rank , hence V x is updated to { } .For the monomial y ,both matrices H { , y } , { } = (cid:16) (cid:17) and H { , y } , { , y } = (cid:16) (cid:17) have rank , henceV y is updated to { , y } ;as x does not divide y, nothing is done.For the monomial x y,the matrix H { , y , x } , { } = (cid:18) (cid:19) has rank while the matrix H { , y , x } , { , y } = (cid:18) (cid:19) has rank , hence S is updated to { , y } .the matrix H { , y } , { } = (cid:16) (cid:17) has rank while the matrix H { , y } , { , x } = (cid:16) (cid:17) has rank , hence S is updated to { , y , x } and the relations are now y ,with V y = { } , x y, with V x y = { } , and x , with V x = ∅ .For the monomial x ,as y does not divide x , nothing is done;as x y does not divide x , nothing is done;both matrices H { } , { , y , x } = ( ) and H { } , { , y , x , x } = ( ) haverank , hence V x is updated to { } .For the monomial y ,both matrices H { , y } , { , y , x } = (cid:16) (cid:17) and H { , y } , { , y , x , y } = (cid:16) (cid:17) haverank , hence V y is updated to { , y } .as x y does not divide y , nothing is done;as x does not divide y , 
nothing is done.For the monomial x y ,both matrices H { , y , x } , { , y , x } = (cid:18) (cid:19) and H { , y , x } , { , y , x , y } = (cid:18) (cid:19) have rank , hence V y is updated to { , y , x } .both matrices H { , y } , { , y , x } = (cid:16) (cid:17) and H { , y } , { , y , x , x y } = (cid:16) (cid:17) haverank , hence V x y is updated to { , y } .as x does not divide x y , nothing is done.For the monomial x y,as y does not divide x y, nothing is done;both matrices H { , y , x } , { , y , x } = (cid:18) (cid:19) and H { , y , x } , { , y , x , x y } = (cid:18) (cid:19) have rank , hence V x y is updated to { , y , x } . oth matrices H { , y } , { , y , x } = (cid:16) (cid:17) and H { , y } , { , y , x , x } = (cid:16) (cid:17) haverank , hence V x is updated to { , y } .For the monomial x ,as y does not divide x , nothing is done;as x y does not divide x , nothing is done;both matrices H { , y , x } , { , y , x } = (cid:18) (cid:19) and H { , y , x } , { , y , x , x } = (cid:18) (cid:19) have rank , hence V x is updated to { , y , x } .Solving the linear systems yields relations y , with a shift x, x y − y − , with a shiftx, and x − x + , with a shift x.
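The shifts reported at the end of this trace can be double-checked numerically. The following Python sketch — ours, under the same assumption b_{i,j} = C(i, j) — finds the first monomial, in drl(y ≺ x) order, at which a candidate relation fails, mirroring the notion of fail used throughout the paper:

```python
# Sketch of the "fail" of a relation: the first monomial m, in drl(y < x)
# order, with [m g] != 0, or None if g holds for every tested shift.
from fractions import Fraction
from math import comb

b = lambda m: Fraction(comb(m[0], m[1]))   # assumed binomial sequence

def bracket_shift(g, m, u):
    """[m g] for a polynomial g given as {monomial: coefficient}."""
    return sum(c * u((t[0] + m[0], t[1] + m[1])) for t, c in g.items())

def first_fail(g, u, bound):
    """First monomial of degree <= bound where g fails, in drl(y < x) order."""
    monomials = sorted(((i, j) for i in range(bound + 1)
                        for j in range(bound + 1) if i + j <= bound),
                       key=lambda m: (m[0] + m[1], m[0]))
    return next((m for m in monomials if bracket_shift(g, m, u) != 0), None)
```

Under this model, x*y − y − 1 never fails (Pascal's rule makes it a true relation of the binomial sequence), while the candidate y³ first fails at the shift x³, i.e. at the exponent pair (3, 0).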
6. Analogies and differences

In this section, we present a list of similarities and differences in the behaviors and outputs of the BMS and the Scalar-FGLM algorithms. This should convince the reader that these algorithms are not the same and that it is not possible to tweak one of them to mimic the behavior of the other.
Although both algorithms first compute a set of elements in the staircase, one of the main differences between the BMS and the Scalar-FGLM algorithms is how they handle the leading terms outside of this staircase.

Theorem 15.
Let u be a sequence and ≺ be a monomial ordering.Calling the BMS algorithm on u , ≺ and a stopping monomial M yields a trun-cated Gr¨obner basis of a zero-dimensional ideal.Calling the S calar -FGLM algorithms on u , ≺ and a set of terms T stable bydivision yields a truncated Gr¨obner basis of an ideal, with leading monomials inT , which is not necessarily a zero-dimensional ideal.Furthermore, if the BMS and the S calar -FGLM algorithms compute the idealof relations of u , then the ideal computed by the BMS algorithm is included in theideal computed by the S calar -FGLM algorithm. These ideals are equal if, andonly if, u is linear recurrent.Proof. The proof of the first part comes directly from the line G ′ : = Border( S ′ )in the description of the BMS algorithm and then to the manipulations done to g ′ ∈ G ′ .The proof of the second part comes from the fact that the potential leadingterms in the S calar -FGLM algorithm are taken in the intersection of the border ofthe staircase and the input set of terms. Nothing may ensure that this set has a purepower of every variable. See also Example 10.32his is illustrated in the following examples. Example 10. We let u = (cid:16) i + j + i + j > (cid:17) ( i , j ) ∈ N be a sequence and con-sider the drl ( y ≺ x ) ordering.The BMS algorithm called on u and the stopping monomial y returns theideal of relations h x − y , y − y i .The S calar -FGLM algorithm called on u and the set of terms T = { , y , x , y } returns the ideal of relations h y − y + i . We consider now the binomial sequence b and the drl ( y ≺ x ) ordering.The BMS algorithm called on b and the stopping monomial x returns h x y − y − , y , ( x − i .The S calar -FGLM algorithm called on b and the set of terms T of all themonomials of degree at most returns h x y − y − i .The first ideal is obviously included in the second which is the true ideal ofrelations of the binomial sequence. Remark 16.
It is possible to tweak the Scalar-FGLM algorithm so that it tries to close the staircase. The idea is to pick the potential leading terms in the border of the staircase. Then, for t such a potential leading term, if t is not in the input set of terms T, one tries to solve H_{T,S} α + H_{T,{t}} = 0 instead of only H_{S,S} α + H_{S,{t}} = 0, so that the relation t + Σ_{s∈S} α_s s has a shift T. See Algorithm 5.

Algorithm 5:
Tweaked Scalar-FGLM algorithm.
Input:
A table u = ( u i ) i ∈ N n with coe ffi cients in K , ≺ a monomial ordering and T aset of terms in x stable by division. Output:
A reduced truncated Gröbner basis with respect to ≺ of the ideal of relations of u with staircase included in T.

Build the matrix H_{T,T}.
Compute the useful staircase (column rank profile) S of H_{T,T} such that rank H_{T,T} = rank H_{S,S}.
S′ := Stabilize(S).
L := (T ∪ ∪_{i=1}^{n} x_i S′) \ S′.
G := ∅.
While L ≠ ∅ do
  t := min_≺(L).
  Find α such that H_{S,S} α + H_{S,{t}} = 0.
  If t ∈ T or H_{T\S,S} α + H_{T\S,{t}} = 0 then // has a shift T!
    G := G ∪ {t + Σ_{s∈S} α_s s}.
  Remove multiples of t in L and sort L by increasing order (with respect to ≺).
Return G.

Let us notice that this tweaked version of the Scalar-FGLM algorithm can still fail to close the staircase.

Example 11.
We call Algorithm 5 on sequence u = (cid:16) i + j + i + j > (cid:17) ( i , j ) ∈ N , theset T = { , y , x , y } and the drl ( y ≺ x ) ordering as in Example 10.We build the matrix H T , T = y x y y x y . The useful staircase of this matrix is S = { , y , x } .It is stable by division so S ′ = S .We set L = { , y , x , y , x y , x } \ { , y , x } = { y , x y , x } and G = ∅ .We take t = y and solve H S , S α + H S , { y } = which yields relation y − y + ,so G = { y − y + } and L is updated to { x y , x } .We take t = x y and solveH S , S ∪{ x y } α α y α x = y x x y y x α α y α x = , which yields relation x y − x − y + . We check thatH T \ S , S ∪{ x y } − − = (cid:16) y x x y (cid:17) − − = , set G = { y − y + , x y − x − y + } and update L to { x } .We take t = x and solveH S , S ∪{ x y } α α y α x = y x x y y x α α y α x = , which yields relation x − x − x + . However,H T \ S , S ∪{ x } − − = (cid:16) y x x y (cid:17) − − = , o the relation is not valid when shifted by y . Hence, we let G = { y − y + , x y − x − y + } and update L to ∅ .We return G = { y − y + , x y − x − y + } .Furthermore, each relation g ∈ G satisfies [ m g ] = , for all m ∈ T = { , y , x , y } , i.e. has a shift T .6.2. Reduction of relations Even though the BMS and the S calar -FGLM algorithms may compute thesame ideal of relations for a given sequence, the Gr¨obner bases they compute maydi ff er. However, it is possible to tweak the BMS algorithm so that it returns thesame Gr¨obner basis of the ideal as the S calar -FGLM algorithm. Theorem 17.
Let u be a sequence and ≺ be a monomial ordering.
Calling the Scalar-FGLM algorithm on u, ≺ and a set of terms T stable by division yields a truncated minimal reduced Gröbner basis of an ideal.
Calling the BMS algorithm on u, ≺ and a stopping monomial M yields a truncated minimal Gröbner basis of an ideal, which is not necessarily reduced.
Furthermore, even if u is linear recurrent and the Scalar-FGLM algorithm computes the ideal of relations of u, there is no reason for the output of the BMS algorithm to be reduced.
Proof. When updating a relation g thanks to a failing relation h in the BMS algorithm, nothing ensures that g has support in S ∪ {lm(g)}, where S is the current staircase, as supp h may not be included in S. This prevents the returned Gröbner basis from being reduced, see also Example 12.
As the Scalar-FGLM algorithm computes a staircase S and the monomials on the border of S, and then solves a multi-Hankel linear system indexed by S and one of the monomials on this border, it is clear that the output truncated Gröbner basis is reduced.
The following example shows which Gröbner bases are returned by the BMS and the Scalar-FGLM algorithms for the same sequence.

Example 12.
We let u = (cid:16) i + j − (cid:17) ( i , j ) ∈ N be a sequence and consider the drl ( y ≺ x ) ordering. The ideal of relations of u is I = h x y − x − y + , x − y − x + y , y − y + y − i .The BMS algorithm called on u and the stopping monomial y returns g = x y − x − y + , with shift x , g = x − x y − y − x + y − , with shift x andg = y − x y − y + x + y − , with shift y . We can notice that { g , g , g } is a minimal Gr¨obner basis but not a reduced Gr¨obner basis of I. he S calar -FGLM algorithm called on u and the set of all the monomials ofdegree at most yields relations g ′ = x y − x − y + , g ′ = x − y − x + y , g ′ = y − y + y − . We can notice that { g ′ , g ′ , g ′ } = { g , g + g , g + g } is aminimal reduced Gr¨obner basis of I. Remark 18.
Let g′₁, g′₂ be two relations computed by the BMS algorithm and let µ be a monomial. Assume µ lm(g′₂) ⪯ lm(g′₁); then shift(µ g′₂) ⪰ shift(g′₁) = v. Therefore g′₁ − c µ g′₂ still has shift v for any scalar c: hence one can replace g′₁ by g′₁ − c µ g′₂, i.e. one can reduce g′₁ by g′₂ into g₁ and replace g′₁ by g₁. Let us notice that we can tweak the BMS algorithm so that, at each step, the set of relations is a truncated reduced Gröbner basis. It suffices to perform an inter-reduction of the computed relations at the end of each step of the For loop, see Algorithm 6.
Algorithm 6:
Tweaked BMS algorithm
Input:
A table u = ( u i ) i ∈ N n with coe ffi cients in K , a monomial ordering ≺ and amonomial M as the stopping condition. Output:
A set G of relations generating I_M.

T := {m ∈ K[x], m ⪯ M}.
G := {1}.
S := ∅.
For all m ∈ T do
  S′ := S.
  For g ∈ G do
    If lm(g) | m then
      e := [(m/lm(g)) g]_u.
      If e ≠ 0 then S′ := S′ ∪ {[g/e, m/lm(g)]}.
  S′ := min_{fail(h)∈S′} {[h, fail(h)/lm(h)]}.
  G′ := Border(S′).
  For g′ ∈ G′ do
    Let g ∈ G be such that lm(g) | lm(g′).
    If lm(g) ∤ m then g′ := (lm(g′)/lm(g)) g.
    Else if ∃ h ∈ S, m/lm(g′) | fail(h) then g′ := (lm(g′)/lm(g)) g − [(m/lm(h)) h]_u ((lm(g′) fail(h))/m) h.
    Else g′ := g.
  G := InterReduce(G′). S := S′.
Return G.

6.3. Validity of relations

We compare the relationship between relations and shifts as they are computed by the BMS and the Scalar-FGLM algorithms.
Theorem 19.
Let u be a sequence and ≺ be a monomial ordering.
Calling the BMS algorithm on u, ≺ and a stopping monomial M yields relations g_1, …, g_r and shifts v_1, …, v_r such that, for all i, 1 ≤ i ≤ r, v_i lm(g_i) ≼ M and g_i is valid with a shift v_i, potentially 0.
Calling the Scalar-FGLM algorithm on u, ≺ and the set of terms T_M = {m, m ≼ M} yields relations g′_1, …, g′_{r′} such that, for all i, 1 ≤ i ≤ r′, lm(g′_i) ≼ M and g′_i has a shift T_M, i.e. is valid with a shift M.
Proof. The BMS algorithm tests its relations up to M, i.e. it shifts them up to M. In the worst case, the leading term of a relation is greater than M, but then it has a shift 0.
The Scalar-FGLM algorithm returns relations g = lt(g) + Σ_{s ∈ S} α_s s such that H_{T,S} α + H_{T,{lm(g)}} =
0, i.e. they are valid when shifted by any monomial in T.
We illustrate this with the following example. Example 13.
We let b be the binomial sequence, b_{i,j} = (i+j choose i), and consider the drl(y ≺ x) ordering.
The BMS algorithm called on b and the stopping monomial x⁴ returns Pascal's rule x y − x − y, with a shift x²; a relation with leading term a pure power of y, with a shift x; and a relation with leading term a pure power of x, with a shift x.
In the matrix viewpoint, the first relation corresponds to a vanishing combination of the columns of a multi-Hankel matrix whose rows are labeled by all the monomials μ ≼ x², while the matrices certifying the other two relations only have rows labeled by the monomials μ ≼ x. We can notice that the first matrix has many more rows than the other two.
The Scalar-FGLM algorithm called on b and the set T of all the monomials of degree at most 2 returns x y − x − y with a shift x². In the matrix viewpoint, one has H_{T,T} α = 0, where α is the coefficient vector of x y − x − y on the columns labeled 1, y, x, y², x y, x².
Likewise, calling Algorithm 5 on the same input returns the three relations above, all three valid up to x⁴. In the matrix viewpoint, the three corresponding matrices all have rows labeled by the monomials of degree at most 2. We can see that the last two matrices have as many rows as the first one.
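The validity of Pascal's rule on the binomial sequence, and its matrix interpretation, can be checked directly. The sketch below is our own illustration: the convention b_{i,j} = (i+j choose i), the exponent encoding and every name are assumptions, not the paper's notation.

```python
from math import comb

# Binomial sequence of the example; b(i, j) = C(i+j, i) is our convention.
def b(i, j):
    return comb(i + j, i)

# T = monomials of degree <= 2, encoded as (x-exponent, y-exponent)
T = [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]

pascal = {(1, 1): 1, (1, 0): -1, (0, 1): -1}        # xy - x - y

def bracket(rel, a, c):
    """<x^a y^c . rel>_b : the relation shifted by x^a y^c, evaluated on b."""
    return sum(k * b(a + i, c + j) for (i, j), k in rel.items())

# Pascal's rule is valid with every shift mu of degree <= 2 ...
assert all(bracket(pascal, a, c) == 0 for (a, c) in T)

# ... equivalently, in the multi-Hankel matrix with rows labeled by T,
# the column labeled mu*xy is the sum of the columns labeled mu*x and mu*y.
def col(m):
    return [b(r[0] + m[0], r[1] + m[1]) for r in T]

for (a, c) in T:
    assert col((a + 1, c + 1)) == [s + t for s, t in zip(col((a + 1, c)),
                                                         col((a, c + 1)))]
```

The second loop is exactly the column dependence discussed after the example: each shifted copy of the relation is one linear dependency among the columns of the multi-Hankel matrix.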
That being said, for a monomial m = μ lm(g) with μ ∈ T, the column labeled with m is also linearly dependent on the previous ones. In particular, this allows us to verify that the relation μ g is valid with a shift T, i.e. that g is valid with a shift T ∪ μ T. Example 14.
Resuming Example 13, the columns labeled with x y², x² y, x y³, x² y² and x³ y are linearly dependent on the previous ones. For instance, the column labeled with x y² is the sum of the columns labeled with y² and x y. Thus, Pascal's rule x y − x − y is also valid with shifts y T, x T, y² T, x y T and x² T. Since T is the set of the monomials of degree at most 2, ∪_{μ ∈ {1, y, x, y², x y, x²}} μ T is the set of all the monomials of degree at most 4.
All in all, like for the BMS algorithm, we find that Pascal's rule is valid with a shift x².

6.4. Monomial ordering and Set of Terms

Given a linear recurrent sequence u with ideal of relations defined by a Gröbner basis G for a monomial ordering ≺, the BMS and the Scalar-FGLM algorithms can return G only if the input set of terms contains the staircase defined by G. That is why it is preferable to run both of them with an ordering ≺ such that, for every monomial M ∈ K[x], T_M = {m, m ≼ M} is finite. In particular, the lex ordering does not satisfy such a property.
However, we can try to see how they behave when calling them with the lex monomial ordering. We relate this to the randomized reduction to the BM algorithm presented in (Berthomieu et al., 2015, 2016, Section 3), where the authors perform a randomized linear change of variables so that, generically, the ideal of relations is in shape position. We also relate this to the Sparse-FGLM application, where the input is a sequence made from a Gröbner basis, typically for the drl ordering, and the output is the ideal of relations of this sequence for another ordering, typically lex, see Faugère and Mou (2011, 2017).

Theorem 20. Let u be a linear recurrent sequence whose ideal of relations I is in shape position for the lex(x_n ≺ ⋯ ≺ x_2 ≺ x_1) ordering, i.e. there exist g_n squarefree and f_{n−1}, …, f_1 ∈ K[x_n] with deg g_n = d, deg f_i < d such that I = ⟨g_n(x_n), x_{n−1} − f_{n−1}(x_n), …
, x_1 − f_1(x_n)⟩.
Calling the Scalar-FGLM algorithm as designed in Berthomieu et al. (2015, 2016), or its tweaked version Algorithm 5, on u, a set of terms T containing at least {1, x_n, …, x_n^{2d−1}} and lex(x_n ≺ ⋯ ≺ x_2 ≺ x_1) allows one to retrieve I.
Calling the BMS algorithm on u, with the stopping monomial x_n^e and lex(x_n ≺ ⋯ ≺ x_2 ≺ x_1), yields ⟨g_n(x_n), x_{n−1}, …, x_1⟩. This ideal is not equal to I, unless f_1 = ⋯ = f_{n−1} = 0.
In other words, the Scalar-FGLM algorithm can retrieve an ideal of relations in shape position while, in general, the BMS algorithm cannot.
Proof.
When calling the Scalar-FGLM algorithm on u with the lex(x_n ≺ ⋯ ≺ x_2 ≺ x_1) ordering and with a set of terms T containing {1, x_n, …, x_n^{2d−1}}, the algorithm shall determine that the useful staircase is S = {1, x_n, …, x_n^{d−1}}. Then, the set of potential leading monomials is {x_n^d, x_{n−1}, …, x_1}. For x_n^d, it solves H_{S,S} α + H_{S,{x_n^d}} = 0 and finds g_n(x_n), while for any k, 1 ≤ k ≤ n −
1, it solves H_{S,S} α + H_{S,{x_k}} = 0 and finds x_k − f_k(x_n). Then it tests that these relations have a shift T, and since they are true relations of u, they do.
When calling the BMS algorithm with the lex(x_n ≺ ⋯ ≺ x_2 ≺ x_1) ordering and with the stopping monomial M = x_n^e, the algorithm behaves mutatis mutandis like the BM algorithm, except that as soon as 1 is detected to be in the staircase, the BMS algorithm adds the polynomials x_1, …, x_{n−1} to the truncated Gröbner basis. As these relations can never be tested further, the output shall always be G = ⟨g_n(x_n), x_{n−1}, …, x_1⟩. See also Example 15 below.
We illustrate the behavior of the Scalar-FGLM algorithm with an example. Example 15.
We let u = (F_{2i+j+k})_{(i,j,k) ∈ ℕ³} be a sequence, where (F_i)_{i ∈ ℕ} is the Fibonacci sequence, and consider the lex(z ≺ y ≺ x) ordering. The ideal of relations of u is I = ⟨z² − z − 1, y − z, x − z − 1⟩.
The Scalar-FGLM algorithm called on u and the set of terms T = {1, z, …, z^{d+1}} yields ⟨g_1, g_2, g_3⟩, which is indeed the ideal of relations of the sequence. In detail:
It creates the matrix H_{T,T}, whose rows and columns are labeled 1, z, …, z^{d+1} with entry F_{i+j} in row z^i and column z^j, and finds it has rank 2 with useful staircase S = {1, z}.
It solves H_{S,S} (α_1, α_z)ᵀ + H_{S,{z²}} = 0 and finds the relation g_1 = z² − z − 1.
Then, it solves H_{S,S} (α_1, α_z)ᵀ + H_{S,{y}} = 0 and H_{S,S} (α_1, α_z)ᵀ + H_{S,{x}} = 0 and finds the relations g_2 = y − z and g_3 = x − z − 1. It also checks that the last two have a shift T, with H_{T∖S, S∪{y}} (0, −1, 1)ᵀ = 0 and H_{T∖S, S∪{x}} (−1, −1, 1)ᵀ = 0.
Finally, it returns g_1, g_2, g_3.
The BMS algorithm called on u and the stopping monomial z^{d+1} returns ⟨g_1, y, x⟩, which is not the ideal of relations of the sequence, as neither y nor x is in I. In detail:
The algorithm tests the relation g = 1 on u_{0,0,0} = F_0 = 0, where it succeeds. It tests g on u_{0,0,1} = F_1 = 1, where it fails. It now has relations g_1 = x, g_2 = y and g_3 = z, all three with a shift 1.
Going on testing z on u_{0,0,2} = F_2 = 1, u_{0,0,3} = F_3 and so on, it is able to update g_3 into z² − z − 1 but is never able to test either g_1 or g_2.
Finally, it returns g_3 with a shift z^{d−1} and g_1, g_2 with a shift 1. Although g_3 is in the ideal of relations, g_1 and g_2 are not.

Remark 21.
Let us notice though that, whenever the user knows the degree d of the ideal of relations of a linear recurrent sequence, both algorithms can be tweaked to recover the full ideal of relations.
On the one hand, it suffices to call the Scalar-FGLM algorithm with the set of monomials T = {m, deg m ≤ d} and the lex(x_n ≺ ⋯ ≺ x_2 ≺ x_1) ordering.
On the other hand, it suffices to change a little how we enumerate the monomials less than the stopping monomial M in the BMS algorithm. In most implementations, the monomials less than or equal to M are given by the ordered set of terms T_M = {m, m ≼ M}. If one knows the degree d of the ideal in advance, then it suffices to enumerate the monomials in {m, deg m ≤ 2 d − 1, m ≼ M} and to call the BMS algorithm with the lex(x_n ≺ ⋯ ≺ x_2 ≺ x_1) ordering. This tweaked version of the BMS algorithm was implemented for the Sparse-FGLM application in Faugère and Mou (2011, 2017).
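The small linear systems of Example 15 can be reproduced directly. The sketch below is our own illustration: the indexing u_{i,j,k} = F_{2i+j+k}, the exponent encoding and every name are assumptions. It solves H_{S,S} α = −H_{S,{m}} for the staircase S = {1, z} and recovers the three relations of the example.

```python
from fractions import Fraction

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# u_{i,j,k} = F_{2i+j+k}: the sequence of the example, under our encoding
# of monomials as (x-exponent, y-exponent, z-exponent) triples.
def u(e):
    return Fraction(fib(2 * e[0] + e[1] + e[2]))

S = [(0, 0, 0), (0, 0, 1)]            # the staircase {1, z}

def add(m, n):
    return (m[0] + n[0], m[1] + n[1], m[2] + n[2])

def solve2(m):
    """Solve H_{S,S} (a1, az)^T = -H_{S,{m}} by Cramer's rule."""
    H = [[u(add(r, c)) for c in S] for r in S]
    rhs = [-u(add(r, m)) for r in S]
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    a1 = (rhs[0] * H[1][1] - H[0][1] * rhs[1]) / det
    az = (H[0][0] * rhs[1] - rhs[0] * H[1][0]) / det
    return a1, az

assert solve2((0, 0, 2)) == (-1, -1)   # z^2 - z - 1
assert solve2((0, 1, 0)) == (0, -1)    # y - z
assert solve2((1, 0, 0)) == (-1, -1)   # x - z - 1
```

Each returned pair (a1, az) gives the relation m + az·z + a1·1, so the three candidate leading monomials z², y and x yield the shape-position generators, exactly as in the example.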
7. Complexity and Benchmarks
In this section, we present some benchmarks to compare the behaviors of the BMS and the Scalar-FGLM algorithms. We relate them to the announced complexity of each algorithm.
Three families of ideals of relations are used to make the sequences.
• In the first family, the leading monomials of the ideal of relations are ⟨y^⌊d/2⌋, x^d⟩. Thus, its staircase is a rectangle of size around d²/2. In three variables, the leading monomials are ⟨z^⌈d/3⌉, y^⌊d/2⌋, x^d⟩, so that the staircase is a rectangular cuboid of size around d³/6. This family will be called Rectangle.
• In the second family, the leading monomials of the ideal of relations are ⟨x y, y^d, x^d⟩. Thus, its staircase looks like an L and has size 2 d − 1. In three variables, the leading monomials are ⟨y z, x z, x y, z^d, y^d, x^d⟩, so that the staircase has size 3 d − 2. This family will be called L shape. It was considered as the worst case in Berthomieu et al. (2015, 2016) for the Adaptive Scalar-FGLM algorithm, a variant of the Scalar-FGLM algorithm, for the number of queries. It should also be a worst case for the BMS algorithm.
• In the last family, the leading monomials of the ideal of relations are all the monomials of degree d. Thus, its staircase is a simplex and has size (d+1 choose 2) = d (d + 1)/2 in two variables. In three variables, the staircase has size (d+2 choose 3) = d (d + 1) (d + 2)/6. This family will be called Simplex. It should be the best case for both the Scalar-FGLM and the BMS algorithms.
For all these families, we called the algorithms with the drl(y ≺ x) ordering. For the BMS algorithm, we used Proposition 11 to estimate sharply the stopping monomial. For the Scalar-FGLM algorithm, we took all the monomials of the largest degree appearing in the staircase and the minimal Gröbner basis.
Thanks to Proposition 11, which gives a monomial M such that at step M the BMS algorithm recovers a Gröbner basis of the ideal of relations of the input sequence, we can deduce the following proposition.

Proposition 22.
Let u = (u_i)_{i ∈ ℕ^n} be a sequence and G be a minimal Gröbner basis of its ideal of relations for a total degree ordering.
Let d_S be the greatest degree of the elements in the staircase of G, d_G be the greatest degree of the elements in G and d_max = max(d_S, d_G).
Let S(d) be the simplex of all the monomials of degree at most d.
Then, the BMS algorithm needs to perform at least # S(d_S + d_max − 1) = (n + d_S + d_max − 1 choose n) and at most # S(d_S + d_max) = (n + d_S + d_max choose n) queries to u.
The Scalar-FGLM algorithm, called on T = S(d_max), the set of all the monomials of degree at most d_max, needs to perform # S(2 d_max) = (n + 2 d_max choose n) queries to u.
For n fixed, these numbers grow as O(d_max^n).
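The counting formulas above, and the staircase sizes of the three benchmark families, are easy to cross-check. This is our own sketch; the function names are assumptions, and the staircase formulas simply restate the family descriptions of this section.

```python
from itertools import product
from math import comb

def simplex_size(n, d):
    """# S(d): number of monomials of degree <= d in n variables."""
    return comb(n + d, n)

# sanity check against direct enumeration in 2 and 3 variables
for n in (2, 3):
    for d in range(6):
        direct = sum(1 for e in product(range(d + 1), repeat=n)
                     if sum(e) <= d)
        assert simplex_size(n, d) == direct

# staircase sizes of the three benchmark families, in two variables
d = 7
rect    = d * (d // 2)          # staircase of <y^(d//2), x^d>
lshape  = 2 * d - 1             # staircase of <xy, y^d, x^d>
simplex = d * (d + 1) // 2      # staircase of all monomials of degree < d
assert (rect, lshape, simplex) == (21, 13, 28)
```

The query bounds of the proposition are then obtained by plugging d_S + d_max (for the BMS algorithm) or 2 d_max (for the Scalar-FGLM algorithm) into `simplex_size`.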
Figure 1: Number of table queries (2D)
Figure 2: Number of table queries (3D)
In the experiments of Figures 1 and 2, we report on the ratio between the number of queries and the size of the staircase for the three families of polynomials. Not surprisingly, the Scalar-FGLM algorithm always performs the most queries. This is due to the fact that, in Proposition 22, either d_G = d_S + 1 or d_S ≥ d_G, hence d_max ∈ {d_S, d_S + 1} and d_S + d_max ∈ {2 d_max − 1, 2 d_max}.
Though, we can see that for the Rectangle family, each algorithm performs exactly as many queries as the other.
For the Rectangle and Simplex families, the size of the staircase and the number of queries grow like O(d^n), where n = 2, 3 is the dimension. This is why the ratio seems rather constant.
However, for the L shape family, the size of the staircase only grows as O(d) while the number of queries grows as O(d^n). This is also confirmed by our experiments, where the ratio between the number of queries and the size of the staircase grows much faster in dimension 3 than in dimension 2.
In fact, each algorithm performs as many queries for the L shape family as for the Simplex family. Thus, we can see that neither is able to take advantage of the small size of the staircase.
The complexity of the BMS algorithm has been studied in Sakata (2009).

Proposition 23.
Let u = (u_i)_{i ∈ ℕ^n} be a sequence, G be a minimal Gröbner basis of its ideal of relations for a total degree ordering and S be the staircase of G.
Then, the BMS algorithm performs at most O((# S)² # lm(G)) operations to recover the ideal of relations of u.
The Scalar-FGLM algorithm computes the column rank profile of a matrix of size # S(d_max). Then, it solves as many linear systems with the submatrix of size # S as there are polynomials in the Gröbner basis. All in all, we have the following result.

Proposition 24.
Let u = (u_i)_{i ∈ ℕ^n} be a sequence, G be a reduced Gröbner basis of its ideal of relations for a total degree ordering and S be the staircase of G. Let d_max be the maximal degree of the elements of S and G.
Then, the number of operations performed by the Scalar-FGLM algorithm to recover the ideal of relations of u is at most O((# S(d_max))³ + (# S)² # lm(G)).
In the following Figures 3 and 4, we report on the ratio between the number of basic operations and the cube of the size of the staircase.
Figure 3: Number of basic operations (2D)
For the Rectangle family, we have # S ∈ O(d^n), # S(d_max) ∈ O(d^n) and # lm(G) ∈ O(1), hence (# S)² # lm(G) ∈ O(d^{2n}). This is why we can see, first, a constant ratio between the number of basic operations done by the Scalar-FGLM algorithm and the cube of the size of the staircase and, then, a decreasing ratio for the BMS algorithm. An analogous analysis explains why, for the L shape family, the ratio is increasing for the Scalar-FGLM algorithm and quite constant for the BMS algorithm.
Unexpectedly, the Scalar-FGLM algorithm performs fewer basic operations than the BMS algorithm for the Simplex family. This is mainly due to the fact that, for this family, the term (# S)² # lm(G) is in fact larger than (# S(d_max))³.
Figure 4: Number of basic operations (3D)
We now compare the ratio between the number of basic operations and the number of queries made by each algorithm in Figures 5 and 6.
As we can see, except for the Simplex family, where the Scalar-FGLM algorithm performed fewer operations but more queries than the BMS algorithm, the polynomial arithmetic of the BMS algorithm allows it to have a much better behavior than the Scalar-FGLM algorithm.
This reinforces the conviction that a hybrid approach between the BMS and the Scalar-FGLM algorithms, or a fast multi-Hankel solver, should be investigated.
Figure 5: Number of basic operations by queries (2D)
Figure 6: Number of basic operations by queries (3D)

References

Banderier, C., Flajolet, P., 2002. Basic analytic combinatorics of directed lattice paths. Theoret. Comput. Sci. 281 (1–2), 37–80. Selected Papers in honour of Maurice Nivat.
Benoit, A., Chyzak, F., Darrasse, A., Gerhold, S., Mezzarobba, M., Salvy, B., 2010.The Dynamic Dictionary of Mathematical Functions (DDMF). In: Fukuda, K.,Hoeven, J. v. d., Joswig, M., Takayama, N. (Eds.), Mathematical Software –ICMS 2010. Springer, Berlin, Heidelberg, pp. 35–41.URL http://dx.doi.org/10.1007/978-3-642-15582-6_7
Berlekamp, E., 1968. Nonbinary BCH decoding. IEEE Trans. Inform. Theory 14 (2), 242–242.
Berthomieu, J., Boyer, B., Faugère, J.-Ch., 2015. Linear Algebra for Computing Gröbner Bases of Linear Recursive Multidimensional Sequences. In: Proceedings of the 40th International Symposium on Symbolic and Algebraic Computation. Bath, United Kingdom, pp. 61–68.
Berthomieu, J., Boyer, B., Faugère, J.-Ch., 2016. Linear Algebra for Computing Gröbner Bases of Linear Recursive Multidimensional Sequences. Journal of Symbolic Computation, 48.
Berthomieu, J., Faugère, J.-Ch., 2016. Guessing Linear Recurrence Relations of Sequence Tuples and P-recursive Sequences with Linear Algebra. In: 41st International Symposium on Symbolic and Algebraic Computation. Waterloo, ON, Canada, pp. 95–102.
Blackburn, S. R., 1997. Fast rational interpolation, Reed–Solomon decoding, and the linear complexity profiles of sequences. IEEE Transactions on Information Theory 43 (2), 537–548.
Bose, R., Ray-Chaudhuri, D., 1960. On a class of error correcting binary group codes. Information and Control 3 (1), 68–79.
Bostan, A., Bousquet-Mélou, M., Kauers, M., Melczer, S., 2014. On 3-dimensional lattice walks confined to the positive octant. To appear in Annals of Combinatorics.
Bousquet-Mélou, M., Mishna, M., 2010. Walks with small steps in the quarter plane. In: Algorithmic probability and combinatorics. Vol. 520 of Contemp. Math. Amer. Math. Soc., Providence, RI, pp. 1–39. URL http://dx.doi.org/10.1090/conm/520/10252
Bousquet-Mélou, M., Petkovšek, M., 2003. Walks confined in a quadrant are not always D-finite. Theoret. Comput. Sci. 307 (2), 257–276. Random Generation of Combinatorial Objects and Bijective Combinatorics.
Brachat, J., Comon, P., Mourrain, B., Tsigaridas, E. P., 2010. Symmetric tensor decomposition. Linear Algebra Appl. 433 (11–12), 1851–1872.
Bras-Amorós, M., O'Sullivan, M. E., 2006. The correction capability of the Berlekamp–Massey–Sakata algorithm with majority voting. Applicable Algebra in Engineering, Communication and Computing 17 (5), 315–335. URL http://dx.doi.org/10.1007/s00200-006-0015-8
Cox, D., Little, J., O'Shea, D., 2015. Ideals, Varieties, and Algorithms, 4th Edition. Undergraduate Texts in Mathematics. Springer, New York. An introduction to computational algebraic geometry and commutative algebra.
Cox, D. A., Little, J., O'Shea, D., 2005. Using Algebraic Geometry, 2nd Edition. Vol. 185 of Graduate Texts in Mathematics. Springer, New York.
Daleo, N. S., Hauenstein, J. D., 2016. Numerically testing generically reduced projective schemes for the arithmetic Gorenstein property. In: Kotsireas, I. S., Rump, S. M., Yap, C. K. (Eds.), Mathematical Aspects of Computer and Information Sciences: 6th International Conference, MACIS 2015, Berlin, Germany, November 11–13, 2015, Revised Selected Papers. Springer International Publishing, Cham, pp. 137–142. URL http://dx.doi.org/10.1007/978-3-319-32859-1_11
Dornstetter, J., 1987. On the equivalence between Berlekamp's and Euclid's algorithms (Corresp.). IEEE Transactions on Information Theory 33 (3), 428–431.
Elkadi, M., Mourrain, B., 2007. Introduction à la résolution des systèmes polynomiaux. Vol. 59 of Mathématiques et Applications. Springer.
Erdős, J., 1956. On the structure of ordered real vector spaces. Publ. Math. Debrecen 4, 334–343.
Faugère, J.-Ch., Gianni, P., Lazard, D., Mora, T., 1993. Efficient Computation of Zero-dimensional Gröbner Bases by Change of Ordering. J. Symbolic Comput. 16 (4), 329–344.
Faugère, J.-Ch., Mou, C., 2011. Fast Algorithm for Change of Ordering of Zero-dimensional Gröbner Bases with Sparse Multiplication Matrices. In: Proc. of the 36th ISSAC. ACM, pp. 115–122.
Faugère, J.-Ch., Mou, C., 2017. Sparse FGLM algorithms. Journal of Symbolic Computation 80 (3), 538–569.
Fitzpatrick, P., Norton, G., 1990. Finding a basis for the characteristic ideal of an n-dimensional linear recurring sequence. IEEE Trans. Inform. Theory 36 (6), 1480–1487.
Gorenstein, D., 1952. An arithmetic theory of adjoint plane curves. Trans. Amer. Math. Soc. 72, 414–436.
Guisse, V., Sep. 2017. Algèbre linéaire dédiée pour les algorithmes Scalar-FGLM et Berlekamp-Massey-Sakata. Master's thesis, Université Paris Diderot (Paris 7). URL https://hal.inria.fr/hal-01516249
Hocquenghem, A., 1959. Codes correcteurs d'erreurs. Chiffres 2, 147–156.
Jonckheere, E., Ma, C., 1989. A simple Hankel interpretation of the Berlekamp-Massey algorithm. Linear Algebra Appl. 125, 65–76.
Kaltofen, E., Pan, V., 1991. Processor efficient parallel solution of linear systems over an abstract field. In: SPAA '91. ACM Press, New York, N.Y., pp. 180–191.
Kaltofen, E., Yuhasz, G., 2013a. A fraction free Matrix Berlekamp/Massey algorithm. Linear Algebra Appl. 439 (9), 2515–2526.
Kaltofen, E., Yuhasz, G., 2013b. On the Matrix Berlekamp-Massey Algorithm. ACM Trans. Algorithms 9 (4), 33:1–33:24. URL http://doi.acm.org/10.1145/2500122
Levinson, N., 1947. The Wiener RMS (Root-Mean-Square) error criterion in filter design and prediction. J. Math. Phys. 25, 261–278.
Macaulay, F. S., 1934. Modern algebra and polynomial ideals. Mathematical Proceedings of the Cambridge Philosophical Society 30, 27–46. URL http://journals.cambridge.org/article_S0305004100012354
Massey, J. L., 1969. Shift-register synthesis and BCH decoding. IEEE Trans. Inform. Theory IT-15, 122–127.
Mora, T., 2009. Gröbner technology. In: Sala, M., Sakata, S., Mora, T., Traverso, C., Perret, L. (Eds.), Gröbner Bases, Coding, and Cryptography. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 11–25. URL http://dx.doi.org/10.1007/978-3-540-93806-4_2
Robbiano, L., 1986. On the theory of graded structures. Journal of Symbolic Computation 2 (2), 139–170.
Sakata, S., 1988. Finding a minimal set of linear recurring relations capable of generating a given finite two-dimensional array. J. Symbolic Comput. 5 (3), 321–337.
Sakata, S., 1990. Extension of the Berlekamp-Massey algorithm to N Dimensions.Inform. and Comput. 84 (2), 207–239.URL http://dx.doi.org/10.1016/0890-5401(90)90039-K
Sakata, S., 1991. Decoding binary 2-D cyclic codes by the 2-D Berlekamp-Masseyalgorithm. IEEE Trans. Inform. Theory 37 (4), 1200–1203.URL http://dx.doi.org/10.1109/18.86974
Sakata, S., 2009. The BMS algorithm. In: Sala, M., Sakata, S., Mora, T., Traverso, C., Perret, L. (Eds.), Gröbner Bases, Coding, and Cryptography. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 143–163. URL http://dx.doi.org/10.1007/978-3-540-93806-4_9