In-depth comparison of the Berlekamp--Massey--Sakata and the Scalar-FGLM algorithms: the adaptive variants
Jérémy Berthomieu∗, Jean-Charles Faugère

Sorbonne Université, CNRS, INRIA, Laboratoire d'Informatique de Paris 6, LIP6, Équipe PolSys, 4 place Jussieu, 75252 Paris Cedex 05, France

Abstract
The Berlekamp–Massey–Sakata algorithm and the Scalar-FGLM algorithm both compute the ideal of relations of a multidimensional linear recurrent sequence. Whenever querying a single sequence element is prohibitive, the bottleneck of these algorithms becomes the computation of all the needed sequence terms. As such, having adaptive variants of these algorithms, reducing the number of sequence queries, becomes mandatory. A native adaptive variant of the Scalar-FGLM algorithm was presented by its authors, the so-called Adaptive Scalar-FGLM algorithm.

In this paper, our first contribution is to make the Berlekamp–Massey–Sakata algorithm more efficient by making it adaptive to avoid some useless relation testings. This variant allows us to divide by four in dimension 2 and by seven in dimension 3 the number of basic operations performed on some sequence family.

Then, we compare the two adaptive algorithms. We show that their behaviors differ in a way that it is not possible to tweak one of the algorithms in order to mimic exactly the behavior of the other. We detail precisely the differences and the similarities of both algorithms and conclude that in general the Adaptive Scalar-FGLM algorithm needs fewer queries and performs fewer basic operations than the Adaptive Berlekamp–Massey–Sakata algorithm.

We also show that these variants are always more efficient than the original algorithms.

Keywords:
The BMS algorithm, the Scalar-FGLM algorithm, Gröbner basis computation, multidimensional linear recurrent sequence, algorithms comparison

∗ Laboratoire d'Informatique de Paris 6, Sorbonne Université, Campus Pierre-et-Marie-Curie, boîte courrier 169, 4 place Jussieu, F-75252 Paris Cedex 05, France.
Email addresses: [email protected] (Jérémy Berthomieu), [email protected] (Jean-Charles Faugère)
Preprint submitted to Journal of Symbolic Computation, June 5, 2018
Contents
1 Introduction
2 Preliminaries
3 An Adaptive version of the BMS algorithm
4 The Adaptive Scalar-FGLM algorithm
5 Analogies and differences of the adaptive variants
References
Appendices
A The BMS algorithm
A.1 A Polynomial interpretation of the BMS algorithm
A.2 A Linear Algebra interpretation of the BMS algorithm
1. Introduction
A fundamental problem in Computer Science is to estimate the linear complexity of an infinite sequence S: this is the smallest length of a recurrence with constant coefficients satisfied by S, or the length of the shortest linear feedback shift register (LFSR) which generates it.

Linear Prediction dates back to Gauß in the 18th century: given a discrete set of original values (u_i)_{i∈ℕ}, the goal is to find the best coefficients (α_k)_{1≤k≤d}, in the least-squares sense, that approximate u_i by −∑_{k=1}^{d} α_k u_{i−k}. Least-squares sense means that the solution minimizes the sum of the squares of the errors. This yields a linear system whose matrix is Hankel. This problem has also been extensively used in Digital Signal Processing theory and applications. Numerically, the Levinson–Durbin recursion method can be used to solve this problem. Hence, to some extent, the original Levinson–Durbin problem in Norbert Wiener's Ph.D. thesis, Levinson (1947); Wiener (1964), predates the Hankel interpretation of the Berlekamp–Massey algorithm, see for instance Jonckheere and Ma (1989).

The Berlekamp–Massey algorithm (BM, Berlekamp (1968); Massey (1969)) is a famous algorithm guessing a solution of this problem for a one-dimensional sequence. This algorithm has been tremendously studied and many variants were designed. We refer the reader to Kaltofen and Pan (1991); Kaltofen and Yuhasz (2013a,b) for a very nice classification of the BM algorithms for solving this problem, and for its generalization to matrix sequences.

A generalization of the BM algorithm to 2 dimensions was first designed in Sakata (1988). It was then further generalized to n dimensions in Sakata (1990, 2009). The so-called Berlekamp–Massey–Sakata algorithm (BMS) guesses a Gröbner basis of the ideal of relations satisfied by the first terms of the input sequence, (Sakata, 1990, Lemma 5).

In Berthomieu et al. (2015, 2017), the authors designed the Scalar-FGLM algorithm.
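For intuition, the one-dimensional guessing problem above can be sketched in a few lines. The following is a minimal textbook variant of the Berlekamp–Massey algorithm over a prime field F_p (an illustrative sketch with a hypothetical helper name, not one of the optimized variants classified in the references): it returns the coefficients c_0 = 1, c_1, ..., c_L of the shortest recurrence ∑_{k=0}^{L} c_k u_{i−k} = 0 satisfied by the given terms.

```python
# Textbook Berlekamp--Massey over F_p (illustrative sketch).
def berlekamp_massey(seq, p):
    """Shortest recurrence (as a coefficient list) generating seq modulo p."""
    C, B = [1], [1]          # current and previous connection polynomials
    L, m, b = 0, 1, 1        # recurrence length, gap since last update, last discrepancy
    for n, s in enumerate(seq):
        # discrepancy: how far C fails to predict seq[n]
        d = s % p
        for k in range(1, L + 1):
            d = (d + C[k] * seq[n - k]) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p          # d / b mod p
        T = C[:]
        C = C + [0] * (len(B) + m - len(C))      # pad so C[k + m] exists
        for k, Bk in enumerate(B):
            C[k + m] = (C[k + m] - coef * Bk) % p
        if 2 * L <= n:                           # the recurrence must lengthen
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C[:L + 1]
```

On the Fibonacci sequence modulo 101 it recovers [1, 100, 100], i.e. the recurrence u_i − u_{i−1} − u_{i−2} = 0 of length 2.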
The Scalar-FGLM algorithm also guesses a reduced Gröbner basis of the ideal of relations of a sequence. While the BM algorithm can be seen as the computation of the kernel of a Hankel matrix, the Scalar-FGLM algorithm computes the kernel of a multi-Hankel matrix, its multivariate generalization.

In some applications, computing even a term of the input sequence is costly, or even the bottleneck of the Scalar-FGLM algorithm. An adaptive variant of the algorithm, called the Adaptive Scalar-FGLM algorithm, was designed in Berthomieu et al. (2015, 2017) in order to minimize the number of sequence queries.

More recently, the authors proposed a new algorithm, Polynomial Scalar-FGLM, in Berthomieu and Faugère (2018) for computing the linear recurrence relations of a sequence based on multivariate polynomial arithmetic. It extends the BMS algorithm through the use of polynomial divisions and is a complete revision of the Scalar-FGLM algorithm without any linear algebra operations. Yet, in this paper the algorithms are treated as high-level ones, with linear algebra operations. We do not try to improve them using polynomial arithmetic as in Berthomieu and Faugère (2018).

Finally, let us recall that, as it is not possible to store the whole input sequence, all these algorithms take a bound as an input and only handle sequence terms up to this index bound. This is why they can only guess the ideal of relations.

Computing linear recurrence relations of multi-dimensional sequences finds applications in Coding Theory, Computer Algebra and Combinatorics. Historically, the BM algorithm was designed to decode cyclic codes, like the BCH codes, Bose and Ray-Chaudhuri (1960); Hocquenghem (1959).
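The Hankel-kernel viewpoint mentioned above can be checked on a toy one-dimensional example (a pure-Python sketch with hypothetical helper names): the vector of recurrence coefficients lies in the kernel of a Hankel matrix built from the sequence terms.

```python
# Kernel viewpoint of the BM algorithm: recurrence coefficients annihilate
# every row of a Hankel matrix built from the sequence (illustrative sketch).
def hankel(u, rows, cols):
    """Hankel matrix (u_{i+j}) with the given numbers of rows and columns."""
    return [[u[i + j] for j in range(cols)] for i in range(rows)]

fib = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
H = hankel(fib, 5, 3)
c = [-1, -1, 1]                      # u_{i+2} - u_{i+1} - u_i = 0
residues = [sum(h * v for h, v in zip(row, c)) for row in H]
print(residues)                      # [0, 0, 0, 0, 0]
```

The Scalar-FGLM algorithm plays the same game with the multi-Hankel matrices recalled in Section 2.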
Therefore, decoding n-dimensional cyclic codes, a generalization of Reed–Solomon codes, was Sakata's motivation for designing the BMS algorithm in Sakata (1991).

On the other hand, as the output of the BMS and the Scalar-FGLM algorithms is a Gröbner basis, a natural application in Computer Algebra is the computation of a Gröbner basis of an ideal for another order, typically from a total degree ordering to an elimination ordering. In fact, the latest versions of the Sparse-FGLM algorithm rely heavily on the BM and BMS algorithms, see Faugère and Mou (2011, 2017). These notions are recalled in a concise way in Section 2, see also (Berthomieu and Faugère, 2017, Section 2).

Finally, computing linear recurrence relations with polynomial coefficients finds applications in Computer Algebra for computing properties of univariate and multivariate Special Functions. The Dynamic Dictionary of Mathematical Functions (DDMF, Benoit et al. (2010)) generates automatically web pages on univariate special functions through the differential equations they satisfy. Equivalently, they could be generated through the linear recurrence relations satisfied by their Taylor series sequence of coefficients. The authors extended the Scalar-FGLM algorithm to handle relations with polynomial coefficients in Berthomieu and Faugère (2016).

Following the open question in Berthomieu and Faugère (2017) whether an adaptive variant of the BMS algorithm, reducing the number of sequence queries, exists or not, first we answer positively. Then, the goal of this paper is to compare this adaptive variant and the Adaptive Scalar-FGLM algorithm.

In Section 3, we design an adaptive variant of the BMS algorithm, namely the Adaptive BMS algorithm, reducing the number of sequence queries. To our knowledge, some early termination criteria were proposed for the BMS algorithm, see Sakata (2009). However, these criteria did not allow one to skip some relation testings. Here, the Adaptive
BMS algorithm can skip some relation testings and still test some further relations. In practice, this variant is more efficient than the BMS algorithm thanks to these skippings. To do so, it uses an a priori upper bound on the staircase size to prevent some useless relation testings. In some favorable cases, this can even allow us to require fewer sequence elements than when calling the BMS algorithm. The presentation of this variant follows the linear algebra description of the BMS algorithm introduced in (Berthomieu and Faugère, 2017, Section 3.2), see also Appendix A.2.

In Section 4, we deal with the Adaptive Scalar-FGLM algorithm, first presented in Berthomieu et al. (2015). Compared to the BMS algorithm, we iteratively increase the size of the staircase. Although it can drastically decrease the number of sequence queries, one of its drawbacks is that it can fail to compute the true ideal of relations of a sequence.

Therefore, it is essential to investigate when these algorithms output a Gröbner basis of the ideal of relations. To do so, we focus on their similarities and differences of behaviors. We report here simplified and synthetic versions of the results obtained in Section 5. A first similarity is that they both output a zero-dimensional ideal of relations.

Theorem 1 (Theorem 7). Let u = (u_{i,j})_{(i,j)∈ℕ²} be a sequence, let ≺ be a degree monomial ordering and d be the size of the staircase. Calling each algorithm on u, ≺, d yields a truncated Gröbner basis of a zero-dimensional ideal.

In the Gröbner basis change of ordering application, like the Sparse-FGLM algorithm, one needs to use the lexicographical ordering. Although the BMS algorithm is not designed to handle such an ordering, the Adaptive
BMS algorithm can perfectly be called with this ordering. Indeed, if the ideal is in shape position, then, as a second similarity, both algorithms output the ideal correctly.
Theorem 2 (Theorem 10). Let u = (u_{i,j})_{(i,j)∈ℕ²} be a linear recurrent sequence whose ideal of relations I = ⟨g(y), x − f(y)⟩ is in shape position for the lex(y ≺ x) ordering, with deg f < deg g = d and g squarefree. Assuming no error is thrown in the execution of the Adaptive Scalar-FGLM algorithm called on u, d and the lex(y ≺ x) ordering, then the output ideal is I. Likewise, calling the Adaptive BMS algorithm on u, d and lex(y ≺ x) yields ideal I.

Although the previous two theorems seem to show that both algorithms have very similar outputs, their outputs can still differ. Indeed, as neither algorithm can test whether its output relations are valid on the whole sequence, they intrinsically return the shifts of the relations, that is, the set of translation monomials for which the relations are valid. Thus, the larger the shift, the more the relation has been tested. Therefore, it reinforces the confidence one can have in the guessed output ideal. Even if both algorithms output the same ideal, they usually do so while outputting different shifts.

Theorem 3 (Theorem 9). Let u = (u_{i,j})_{(i,j)∈ℕ²} be a sequence, ≺ be a monomial ordering and d be the size of the output staircase S. Let us assume that both algorithms return a common relation g when called on u, ≺, d and some stopping monomial M for the Adaptive BMS algorithm. Then, the shift associated to g the Adaptive BMS algorithm yields is the monomial set {m, m lm(g) ≼ M}. In other words, the smaller lm(g), the larger its shift. The shift associated to g the Adaptive Scalar-FGLM algorithm returns is either S if lm(g) ≻ max_≺(S), or {m ∈ S, m ≺ lm(g)} ∪ {lm(g)} otherwise. In other words, the larger lm(g), the larger its shift.
As a consequence of these differences of behavior, it is not possible to tweak one of the algorithms in order to mimic exactly the behavior of the other.

Finally, in Section 6, we compare both algorithms based on the number of sequence queries they perform and their number of basic operations. We show that the Adaptive BMS algorithm is able to perform four (resp. seven) times fewer operations than the BMS algorithm to output the ideal of relations of a family of bidimensional (resp. tridimensional) sequences. We also show that the Adaptive Scalar-FGLM algorithm needs fewer queries and fewer basic operations to recover the whole ideal of relations of several families of sequences. However, it seems that asymptotically the ratios between the number of basic operations and the number of sequence queries made by both algorithms could be the same.

We now understand better the advantages of each algorithm. On the one hand, the Adaptive Scalar-FGLM algorithm can fail to return the right answer; on the other hand, we can tweak it to test the computed relations further, allowing us to discard wrong relations. Furthermore, it generally returns the right ideal of relations, and it usually does so faster than the Adaptive BMS algorithm. However, the Adaptive
BMS algorithm seems to be the safer one. If the upper bound on the staircase size is correct, it will always return the right ideal of relations. Though, its performance speedup relies on the number of skipped relation testings and thus on the sharpness of this bound. Moreover, it seems hard to predict in advance which monomials will be totally skipped during the execution of the algorithm.

Combining the design of the Polynomial Scalar-FGLM algorithm, based on polynomial arithmetic in Berthomieu and Faugère (2018), and the comparison of the Adaptive BMS and Adaptive Scalar-FGLM algorithms in this paper could lead to the design of a hybrid algorithm taking advantage of all these algorithms. In particular, this algorithm could replace the linear algebra arithmetic by a polynomial one. Indeed, the goal would be to mix the efficiency of the polynomial arithmetic in the Polynomial Scalar-FGLM algorithm and the small number of queries performed by the Adaptive BMS and the Adaptive Scalar-FGLM algorithms to compute the relations.
2. Preliminaries
In this section, we give a brief description of the classical notation used throughout the paper. We refer the reader to (Berthomieu and Faugère, 2017, Section 2) for a more detailed presentation.
For n ≥ 1, we let i = (i_1, . . . , i_n) ∈ ℕⁿ. Classically, we write x = (x_1, . . . , x_n) and x^i = x_1^{i_1} ··· x_n^{i_n}. An n-dimensional sequence u = (u_i)_{i∈ℕⁿ} over a field K satisfies the (linear recurrence) relation induced by α = (α_k)_{k∈K} ∈ K^{|K|}, with K ⊂ ℕⁿ finite, if

∀ i ∈ ℕⁿ, ∑_{k∈K} α_k u_{k+i} = 0.    (1)

Example 1.
Let b be the 2-dimensional sequence of the binomial coefficients, b = (\binom{i}{j})_{(i,j)∈ℕ²}. Then Pascal's rule:

∀ (i, j) ∈ ℕ², b_{i+1,j+1} − b_{i,j+1} − b_{i,j} = 0

is a linear recurrence relation for the sequence b.

As we can only work with a finite number of terms of a sequence, in this paper a table shall denote a finite subset of terms of a sequence: it is one of the input parameters of the algorithms. Given a finite table extracted from the sequence u, the main purpose of the BMS and the Scalar-FGLM algorithms is to, loosely speaking, determine a minimal set of relations that will allow us to generate this finite table using only the values of u on their supports.

Relations satisfied by a sequence can be added and shifted; therefore it is natural to associate them with multivariate polynomials in K[x].

Definition 1. Let f = ∑_{k∈K} α_k x^k ∈ K[x]. We will denote by [f]_u, or [f] when no ambiguity arises, the linear combination ∑_{k∈K} α_k u_k. Moreover, if α defines a relation for u, that is, for all i ∈ ℕⁿ, [x^i f] = 0, then we say that f is the polynomial of this relation.

The main benefit of the [ ] notation resides in the immediate fact that for all indices i, [x^i f] = ∑_{k∈K} α_k u_{k+i}. In the previous example, Pascal's rule is associated with the polynomial P = x y − y − 1, so that ∀ (i, j) ∈ ℕ², [x^i y^j P] = 0.

Definition 2 (Fitzpatrick and Norton (1990); Sakata (1988)). Let u = (u_i)_{i∈ℕⁿ} be an n-dimensional sequence with coefficients in K. The sequence u is linear recurrent if, from a nonzero finite number of initial terms {u_i, i ∈ S}, and a finite number of linear recurrence relations, without any contradiction, one can compute any term of the sequence. Equivalently, u is linear recurrent if its ideal of relations {f, ∀ m ∈ K[x], [m f] = 0} is zero-dimensional.

2.2. Gröbner bases

Let T = {x^i, i ∈ ℕⁿ} be the set of all monomials in K[x]. A monomial ordering ≺ on K[x] is an order relation satisfying the following three classical properties:
1. for all m ∈ T, 1 ≼ m;
2. for all m, m′, s ∈ T, m ≺ m′ ⇒ m s ≺ m′ s;
3. every subset of T has a least element for ≺.

For a monomial ordering ≺ on K[x], the leading monomial of f, denoted lm(f), is the greatest monomial in the support of f for ≺. The leading coefficient of f, denoted lc(f), is the nonzero coefficient of lm(f). The leading term of f, lt(f), is defined as lt(f) = lc(f) lm(f). For an ideal I, we denote, classically, lm(I) = {lm(f), f ∈ I}. We recall briefly the definition of a Gröbner basis and a staircase.

Definition 3.
Let I be a nonzero ideal of K[x] and let ≺ be a monomial ordering. A set G ⊆ I is a Gröbner basis of I if for all f ∈ I, there exists g ∈ G such that lm(g) | lm(f). The set G is a minimal Gröbner basis of I if for any g ∈ G, G \ {g} does not span I. Furthermore, G is (minimal) reduced if for any g, g′ ∈ G, g ≠ g′, and any monomial m ∈ supp g′, lt(g) ∤ m.

Let G be a reduced truncated Gröbner basis; the staircase of G is

S = Staircase(G) = {s ∈ T, ∀ g ∈ G, lm(g) ∤ s}.

It is also the canonical basis of K[x]/I.

Gröbner basis theory allows us to choose any monomial ordering ≺. Among all the monomial orderings, we will mainly use the
• lex(x_n ≺ ··· ≺ x_1) ordering, which compares monomials as follows: x^i ≺ x^{i′} if, and only if, there exists k, 1 ≤ k ≤ n, such that for all ℓ < k, i_ℓ = i′_ℓ and i_k < i′_k, see (Cox et al., 2015, Chapter 2, Definition 3);
• drl(x_n ≺ ··· ≺ x_1) ordering, which compares monomials as follows: x^i ≺ x^{i′} if, and only if, i_1 + ··· + i_n < i′_1 + ··· + i′_n, or i_1 + ··· + i_n = i′_1 + ··· + i′_n and there exists k, 2 ≤ k ≤ n, such that for all ℓ > k, i_ℓ = i′_ℓ and i_k > i′_k. Equivalently, there exists k, 1 ≤ k ≤ n, such that for all ℓ > k, i_1 + ··· + i_ℓ = i′_1 + ··· + i′_ℓ and i_1 + ··· + i_k < i′_1 + ··· + i′_k, see (Cox et al., 2015, Chapter 2, Definition 6).

However, in the BMS algorithm, we need to be able to enumerate all the monomials up to a bound monomial. This forces the user to take an ordering ≺ such that for all M ∈ T, the set {m ≺ M, m ∈ T} is finite. Such an ordering ≺ makes (ℕⁿ, ≺) isomorphic to (ℕ, <); thus it makes sense to speak about the next monomial for ≺. This requirement excludes for instance the lex ordering, and more generally any elimination ordering. In other words, only a weighted degree ordering, or weight ordering, should be used.

A matrix H ∈ K^{m×n} is Hankel if there exists a sequence u = (u_i)_{i∈ℕ} such that for all (i, i′) ∈ {0, . . . , m − 1} × {0, . . . , n − 1}, the coefficient h_{i,i′} lying on the ith row and i′th column of H satisfies h_{i,i′} = u_{i+i′}. In a multivariate setting, we can extend this notion of Hankel matrices to multi-Hankel matrices. Indexing the rows and columns with monomials x^i = x_1^{i_1} ··· x_n^{i_n} and x^{i′} = x_1^{i′_1} ··· x_n^{i′_n}, the coefficient of H lying on the row labeled with x^i and column labeled with x^{i′} is u_{i+i′}. Given two sets of monomials U and T, we let H_{U,T} be the multi-Hankel matrix with rows (resp. columns) indexed with monomials in U (resp. T).

Example 2.
Let u = (u_{i,j})_{(i,j)∈ℕ²} be a sequence. Let U = {1, y, y², x, x y, x y², x², x² y, x² y²} and T = {1, y, x, x y, x², x² y, x³, x³ y}; then

H_{U,T} =
        1       y       x       xy      x²      x²y     x³      x³y
1     u_{0,0} u_{0,1} u_{1,0} u_{1,1} u_{2,0} u_{2,1} u_{3,0} u_{3,1}
y     u_{0,1} u_{0,2} u_{1,1} u_{1,2} u_{2,1} u_{2,2} u_{3,1} u_{3,2}
y²    u_{0,2} u_{0,3} u_{1,2} u_{1,3} u_{2,2} u_{2,3} u_{3,2} u_{3,3}
x     u_{1,0} u_{1,1} u_{2,0} u_{2,1} u_{3,0} u_{3,1} u_{4,0} u_{4,1}
xy    u_{1,1} u_{1,2} u_{2,1} u_{2,2} u_{3,1} u_{3,2} u_{4,1} u_{4,2}
xy²   u_{1,2} u_{1,3} u_{2,2} u_{2,3} u_{3,2} u_{3,3} u_{4,2} u_{4,3}
x²    u_{2,0} u_{2,1} u_{3,0} u_{3,1} u_{4,0} u_{4,1} u_{5,0} u_{5,1}
x²y   u_{2,1} u_{2,2} u_{3,1} u_{3,2} u_{4,1} u_{4,2} u_{5,1} u_{5,2}
x²y²  u_{2,2} u_{2,3} u_{3,2} u_{3,3} u_{4,2} u_{4,3} u_{5,2} u_{5,3}

We can see that this matrix is a 3 × 4 block-Hankel matrix with Hankel blocks of size 3 × 2.

Let T = {1, y, x, y², x y, x², y³, x y², x² y, x³}; then the following matrix has a less obvious structure:

H_{T,T} =
        1       y       x       y²      xy      x²      y³      xy²     x²y     x³
1     u_{0,0} u_{0,1} u_{1,0} u_{0,2} u_{1,1} u_{2,0} u_{0,3} u_{1,2} u_{2,1} u_{3,0}
y     u_{0,1} u_{0,2} u_{1,1} u_{0,3} u_{1,2} u_{2,1} u_{0,4} u_{1,3} u_{2,2} u_{3,1}
x     u_{1,0} u_{1,1} u_{2,0} u_{1,2} u_{2,1} u_{3,0} u_{1,3} u_{2,2} u_{3,1} u_{4,0}
y²    u_{0,2} u_{0,3} u_{1,2} u_{0,4} u_{1,3} u_{2,2} u_{0,5} u_{1,4} u_{2,3} u_{3,2}
xy    u_{1,1} u_{1,2} u_{2,1} u_{1,3} u_{2,2} u_{3,1} u_{1,4} u_{2,3} u_{3,2} u_{4,1}
x²    u_{2,0} u_{2,1} u_{3,0} u_{2,2} u_{3,1} u_{4,0} u_{2,3} u_{3,2} u_{4,1} u_{5,0}
y³    u_{0,3} u_{0,4} u_{1,3} u_{0,5} u_{1,4} u_{2,3} u_{0,6} u_{1,5} u_{2,4} u_{3,3}
xy²   u_{1,2} u_{1,3} u_{2,2} u_{1,4} u_{2,3} u_{3,2} u_{1,5} u_{2,4} u_{3,3} u_{4,2}
x²y   u_{2,1} u_{2,2} u_{3,1} u_{2,3} u_{3,2} u_{4,1} u_{2,4} u_{3,3} u_{4,2} u_{5,1}
x³    u_{3,0} u_{3,1} u_{4,0} u_{3,2} u_{4,1} u_{5,0} u_{3,3} u_{4,2} u_{5,1} u_{6,0}
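As a sanity check of this construction, the following sketch (hypothetical helper `multi_hankel`, instantiated on the binomial sequence of Example 1) builds the first matrix of the example; rows and columns are indexed by exponent pairs (a, b) standing for x^a y^b, and the entry for row (a, b) and column (c, d) is u_{a+c, b+d}.

```python
# Multi-Hankel matrix H_{U,T} for a bivariate sequence (illustrative sketch).
from math import comb

def multi_hankel(u, U, T):
    """Rows indexed by U, columns by T; entry = u(row + column exponents)."""
    return [[u(a + c, b + d) for (c, d) in T] for (a, b) in U]

def b(i, j):
    """Binomial sequence b_{i,j} = C(i, j) from Example 1."""
    return comb(i, j)

U = [(a, bb) for a in range(3) for bb in range(3)]   # 1, y, y^2, x, ..., x^2 y^2
T = [(c, d) for c in range(4) for d in range(2)]     # 1, y, x, xy, ..., x^3 y
H = multi_hankel(b, U, T)                            # 9 x 8, with 3 x 2 Hankel blocks
```

Entries depend only on the sum of the row and column exponents, which is exactly the block-Hankel structure observed above.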
3. An Adaptive version of the BMS algorithm
The BMS algorithm was first presented in Sakata (1988) for the dimension-2 case and was then extended to dimension n in Sakata (1990, 2009). In (Berthomieu and Faugère, 2017, Section 3) or Appendix A, we give a description of the algorithm mainly based on linear algebra.

The BMS algorithm is an iterative algorithm, visiting each term u_i = [x^i] of the input sequence in increasing order for the input monomial ordering. At each step, it has a truncated Gröbner basis of the ideal of relations and tests its elements in the visited monomial. If some of them fail, the algorithm updates the Gröbner basis with new valid relations. When a relation g fails at monomial m, two situations arise: either m/lm(g) was already in the staircase, and then a new relation g′ with lm(g′) = lm(g) is computed, or it was not, and both lm(g) and m/lm(g) are added to the staircase. New relations are then computed depending on the possible new leading monomials. See (Berthomieu and Faugère, 2017, Proposition 9) and Proposition A.4.

This is summed up in the following example; it is a truncated version of (Berthomieu and Faugère, 2017, Example 10) and Example A.3.

Example 3.
We give the trace of the algorithm called on the binomial sequence b for the drl(y ≺ x) ordering from monomial y⁵ up to monomial x⁵. To simplify the reading, whenever a relation succeeds in m or cannot be tested in m, we skip the updating part, as this relation remains the same.

We start with the nonempty staircase S = {[y², x²], [x² − 2x + 1, y²]} and the relations G = {x y − y − 1, y³, x³ − 2x² + x}. This means that, on the one hand, the relations in G have been tested up to all their multiples less than y⁵, while relation y² (resp. x² − 2x + 1) in S fails when multiplied by x² (resp. y²) but does not fail when multiplied by a lesser monomial.

For the monomial y⁵:
Nothing must be done for the relation g₁ = x y − y − 1.
The relation g₂ = y³ succeeds since [b_{0,5}] = 0.
Nothing must be done for the relation g₃ = x³ − 2x² + x.

For the monomial x y⁴:
The relation g₁ = x y − y − 1 succeeds since [b_{1,4} − b_{0,4} − b_{0,3}] = 0.
The relation g₂ = y³ succeeds since [b_{1,4}] = 0.
Nothing must be done for the relation g₃ = x³ − 2x² + x.

For the monomial x² y³:
The relation g₁ = x y − y − 1 succeeds since [b_{2,3} − b_{1,3} − b_{1,2}] = 0.
The relation g₂ = y³ succeeds since [b_{2,3}] = 0.
Nothing must be done for the relation g₃ = x³ − 2x² + x.

For the monomial x³ y²:
The relation g₁ = x y − y − 1 succeeds since [b_{3,2} − b_{2,2} − b_{2,1}] = 0.
Nothing must be done for the relation g₂ = y³.
The relation g₃ = x³ − 2x² + x fails since [b_{3,2} − 2 b_{2,2} + b_{1,2}] = 1 ≠ 0. Thus S′ = {[y², x²], [x² − 2x + 1, y²], [x³ − 2x² + x, y²]}.
S′ is set to {[y², x²], [x² − 2x + 1, y²]} and G′ = {y³, x y, x³}.
We set g′₁ = x y − y − 1 and g′₂ = y³.
For the relation g′₃ with leading monomial x³: x³ | x³y² and x³y²/x³ divides fail(x² − 2x + 1), hence g′₃ = x³ − 3x² + 3x − 1.
We update G := G′ = {y³, x y − y − 1, x³ − 3x² + 3x − 1} and S := S′ = {[y², x²], [x² − 2x + 1, y²]}.

For the monomial x⁴ y:
The relation g₁ = x y − y − 1 succeeds since [b_{4,1} − b_{3,1} − b_{3,0}] = 0.
Nothing must be done for the relation g₂ = y³.
The relation g₃ = x³ − 3x² + 3x − 1 succeeds since [b_{4,1} − 3 b_{3,1} + 3 b_{2,1} − b_{1,1}] = 0.

For the monomial x⁵:
Nothing must be done for the relation g₁ = x y − y − 1.
Nothing must be done for the relation g₂ = y³.
The relation g₃ = x³ − 3x² + 3x − 1 succeeds since [b_{5,0} − 3 b_{4,0} + 3 b_{3,0} − b_{2,0}] = 0.

The algorithm returns the relations x y − y − 1, y³, x³ − 3x² + 3x − 1, the first one with a shift x³ and the last two with a shift x².

The problem is now to understand when the Gröbner basis of the ideal of relations has actually been computed. Assuming the sequence is linear recurrent, Proposition 4 provides an answer to this question (see also (Berthomieu and Faugère, 2017, Proposition 11) and Proposition A.6).
Proposition 4.
Let u be a linear recurrent sequence and I be its ideal of relations. Let S be the staircase of I for ≺. Let s_max be the largest monomial in S. Then, at step m ≽ (s_max)², the computed staircase is equal to S.

Let G be a minimal Gröbner basis of I for ≺ and let g_max be the largest leading monomial of G. Then, at step m ≽ s_max · max_≺(g_max, s_max), the computed Gröbner basis is a minimal Gröbner basis of I for ≺.
For the drl(y ≺ x) ordering, I = ⟨x^p, y^q⟩ and q > p ≥ 1, we have s_max = x^{p−1} y^{q−1} and g_max = y^q. Therefore, the right staircase is found at most at step m = x^{2(p−1)} y^{2(q−1)}, while the Gröbner basis is found at most at step x^{p−1} y^{q−1} · max_≺(x^{p−1} y^{q−1}, y^q), i.e. y^{2q−1} if p = 1 and x^{2(p−1)} y^{2(q−1)} otherwise.
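Under these assumptions, the bound of Example 4 can be recomputed mechanically. The sketch below (hypothetical helper names, exponent pairs (a, b) standing for x^a y^b) encodes the drl(y ≺ x) comparison and the Gröbner basis step bound of Proposition 4 for I = ⟨x^p, y^q⟩:

```python
# Step bound of Proposition 4 for I = <x^p, y^q>, q > p >= 1 (sketch).
def drl_max(m1, m2):
    """Larger of two monomials (a, b) ~ x^a y^b for drl(y < x)."""
    (a1, b1), (a2, b2) = m1, m2
    if a1 + b1 != a2 + b2:
        return m1 if a1 + b1 > a2 + b2 else m2
    return m1 if b1 < b2 else m2          # equal degree: smaller y-exponent wins

def gb_step_bound(p, q):
    """s_max * max(g_max, s_max) as an exponent pair."""
    s_max = (p - 1, q - 1)                # largest staircase monomial
    g_max = (0, q)                        # y^q is the largest leading monomial
    m = drl_max(g_max, s_max)
    return (s_max[0] + m[0], s_max[1] + m[1])

print(gb_step_bound(1, 2))   # (0, 3), i.e. y^3 as in Remark 5
print(gb_step_bound(3, 4))   # (4, 6), i.e. x^4 y^6
```

For p = 1 the bound degenerates to y^{2q−1}, and for p ≥ 2 to x^{2(p−1)} y^{2(q−1)}, matching the case distinction of the example.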
In some favourable cases though, it is not necessary to go up to this bound to guess the right relations. In Example 4, for p = 1 and q = 2, the right staircase is found at step y. In fact, the right Gröbner basis is already guessed as well, while Proposition 4 only ensures that it will be correctly guessed at step y³.

It could therefore be very fruitful to have a heuristic helping us determine whether the current Gröbner basis is the right one when the size of the staircase is known in advance. Indeed, it could allow us to end the run of the BMS algorithm earlier. Unfortunately, it is not rare that an interrupted BMS algorithm does not return the correct Gröbner basis; in fact, such an interrupted BMS algorithm will never return the right Gröbner basis for any of the four families of sequences used in Section 6. The goal is thus to reduce the number of testings differently.

Let us recall that at step m, whenever a relation g such that lm(g) | m fails, if m/lm(g) is not in the staircase, then the algorithm adds both lm(g) and m/lm(g) to the new staircase. Assuming we know in advance the size of the staircase of the output Gröbner basis, during the execution of the algorithm we can detect that testing the relation g in m is useless if the staircase becomes too big after adding the two monomials. Let us show in the following example how we can take advantage of this strategy.

Example 5.
Let us reconsider Example 3 with the assumption that the staircase has a size at most 6.

We start with the nonempty staircase S = {[y², x²], [x² − 2x + 1, y²]} and the relations G = {x y − y − 1, y³, x³ − 2x² + x}. This means that, on the one hand, the relations in G have been tested up to all their multiples less than y⁵, while relation y² (resp. x² − 2x + 1) in S fails when multiplied by x² (resp. y²) but does not fail when multiplied by a lesser monomial.

For the monomial y⁵:
Nothing must be done for the relation g₁ = x y − y − 1.
The relation g₂ = y³ succeeds since [b_{0,5}] = 0.
Nothing must be done for the relation g₃ = x³ − 2x² + x.

For the monomial x y⁴:
Should the relation g₁ = x y − y − 1 fail in x y⁴, we would have to add x y and y³ to the staircase, raising its size to 7. We skip testing g₁.
Should the relation g₂ = y³ fail in x y⁴, we would have to add y³ and x y to the staircase, raising its size to 7. We skip testing g₂.
Nothing must be done for the relation g₃ = x³ − 2x² + x.

For the monomial x² y³:
Should the relation g₁ = x y − y − 1 fail in x² y³, we would have to add x y and x y² to the staircase, raising its size to 7. We skip testing g₁.
The relation g₂ = y³ succeeds since [b_{2,3}] = 0.
Nothing must be done for the relation g₃ = x³ − 2x² + x.

For the monomial x³ y²:
Should the relation g₁ = x y − y − 1 fail in x³ y², we would have to add x y and x² y to the staircase, raising its size to 7. We skip testing g₁.
Nothing must be done for the relation g₂ = y³.
The relation g₃ = x³ − 2x² + x fails since [b_{3,2} − 2 b_{2,2} + b_{1,2}] = 1 ≠ 0. Thus S′ = {[y², x²], [x² − 2x + 1, y²], [x³ − 2x² + x, y²]}.
S′ is set to {[y², x²], [x² − 2x + 1, y²]} and G′ = {y³, x y, x³}.
We set g′₁ = x y − y − 1 and g′₂ = y³.
For the relation g′₃ with leading monomial x³: x³ | x³y² and x³y²/x³ divides fail(x² − 2x + 1), hence g′₃ = x³ − 3x² + 3x − 1.
We update G := G′ = {y³, x y − y − 1, x³ − 3x² + 3x − 1} and S := S′ = {[y², x²], [x² − 2x + 1, y²]}.

For the monomial x⁴ y:
Should the relation g₁ = x y − y − 1 fail in x⁴ y, we would have to add x y and x³ to the staircase, raising its size to 7. We skip testing g₁.
Nothing must be done for the relation g₂ = y³.
Should the relation g₃ = x³ − 3x² + 3x − 1 fail in x⁴ y, we would have to add x³ and x y to the staircase, raising its size to 7. We skip testing g₃.

For the monomial x⁵:
Nothing must be done for the relation g₁ = x y − y − 1.
Nothing must be done for the relation g₂ = y³.
The relation g₃ = x³ − 3x² + 3x − 1 succeeds since [b_{5,0} − 3 b_{4,0} + 3 b_{3,0} − b_{2,0}] = 0.

The algorithm returns the relations x y − y − 1, y³, x³ − 3x² + 3x − 1, the first one with a shift x² and the other two with a shift x².

In this example, skipping some relation testings allowed us to skip all the testings in a loop, namely loops x y⁴ and x⁴ y. As a byproduct, we also reduced the number of table queries. Integrating this strategy in the BMS algorithm yields an adaptive variant, Algorithm 1, reducing the number of relation testings and table queries.

This version was motivated by a remark in Sakata (2009), where the author announced that in applications where an approximate size of the staircase is known, one can stop the execution of the BMS algorithm early. Yet, we do not know whether such a strategy is classical and whether it is exactly the one described in Algorithm 1.

Predicting how many monomials will be completely skipped in order to reduce the number of table queries can be a hard task. Indeed, it is clear that if relation g can be skipped at monomial m, it will also be skipped at any multiple of m. Yet, even if m is completely skipped, a relation that cannot be tested in m might need to be tested in m x_i for some i. Therefore, even if m is completely skipped, m x_i might not be. We illustrate this phenomenon with the following example.

Example 6.
Let u = (u_{i,j})_{(i,j)∈ℕ²} be the sequence defined by u_{4,1} = 1 and u_{i,j} = 0 if (i, j) ≠ (4, 1). Running the BMS algorithm on these arguments yields the relations

Algorithm 1: Adaptive BMS (Linear Algebra variant).
Input:
A table u = (u_i)_{i∈ℕⁿ} with coefficients in K, a monomial ordering ≺, a given bound d and a monomial M as the stopping condition. Output:
A set G of relations generating I_M.

T := {m ∈ K[x], m ≼ M}. G := {1}. S := ∅.
For all m ∈ T do
    S′ := S.
    For g ∈ G do
        If lm(g) | m then
            If m/lm(g) ∉ Stabilize(S) and # Stabilize(S ∪ {lm(g), m/lm(g)}) > d then
                next.  // skip this relation testing
            e := [(m/lm(g)) g]_u
            If e ≠ 0 then S′ := S′ ∪ {[g/e, m/lm(g)]}.
    S′ := min_{fail(h) ∈ S′} {[h, fail(h)/lm(h)]}.
    G′ := Border(S′).
    For g′ ∈ G′ do
        Let g ∈ G such that lm(g) | lm(g′).
        If lm(g) ∤ m then g′ := (lm(g′)/lm(g)) g.
        Else if ∃ h ∈ S, m/lm(g′) | fail(h) then
            g′ := (lm(g′)/lm(g)) g − [(m/lm(h)) h]_u · (lm(g′) fail(h))/(m lm(h)) · h.
        Else g′ := g.
    G := G′. S := S′.
Return G.

y² and x⁵, so that the staircase has size 10. We assume though that the only known upper bound on the staircase size is 14.

We give a short trace of the algorithm called on u for the drl(y ≺ x) ordering up to monomial x⁹. Therefore, we also input 14 as the upper bound on the size of the output staircase to the Adaptive BMS algorithm.

For all the monomials from 1 to x³ y²:
The relation g = 1 succeeds.
For the monomial x⁴ y:
The relation g = 1 fails since [u_{4,1}] = 1 ≠ 0.
Thus S′ = {[1, x y]}. S′ is set to {[1, x y]} and G′ = {y, x}; we set g′ = y and g′ = x.
For the relation g′ = y, y ∤ x y, thus g′ = y. For the relation g′ = x, x ∤ x y, thus g′ = x. We update G := G′ = {y, x} and S := S′ = {[1, x y]}.
For the following monomials, the relations g = y and g = x succeed repeatedly.
For the monomial x y: should the relation g = y fail, we would have to add y and x y to the staircase, raising its size beyond the bound. We skip testing g.
For the monomial x y: should the relation g = y fail, we would have to add y and x y to the staircase, raising its size beyond the bound. We skip testing g.
For the next monomials, the relations g = y and g = x succeed.
For the monomial y: should the relation g = y fail, we would have to add y and y to the staircase, raising its size beyond the bound. We skip testing g.
For the monomials x y, x y, x y and x y: g was not tested at a divisor of the current monomial, so we skip testing g.
For the monomial x y: should the relation g = y fail, we would have to add y and x y to the staircase, raising its size beyond the bound.
We skip testing g.
The trace continues in the same fashion: at most monomials the relations g = y and g = x succeed, while at several monomials the tests are skipped, either because a failure would raise the staircase size beyond the bound, or because g could not be tested at a divisor of the current monomial.
The algorithm returns the relations y and x, the first one with a shift x and the other one with a shift x.
The following figure shows the visited monomials where at least one relation was tested (·) and those completely skipped (×).
[Staircase diagram: tested monomials marked · and completely skipped monomials marked ×.]
Although the monomial x y was completely skipped, x y is not, thanks to the relation x that must be tested there.
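The skip criterion used throughout this trace can be sketched in a few lines. The following is our own simplified reading, not the paper's implementation: the names `divisors` and `would_skip` are ours, and the real Stabilize and fail bookkeeping of Algorithm 1 is richer. Monomials are exponent tuples, the staircase is divisor-closed, and a test is skipped when a failure would force the staircase past the bound d.

```python
from itertools import product

def divisors(m):
    # all exponent tuples dividing the monomial m (componentwise <=)
    return set(product(*(range(e + 1) for e in m)))

def would_skip(S, lm_g, m, d):
    # Skip testing the relation with leading monomial lm_g at the monomial m:
    # if the test failed, the staircase would have to absorb m / lm_g together
    # with all of its divisors, so skip whenever that would exceed the bound d.
    q = tuple(a - b for a, b in zip(m, lm_g))
    if q in S:                  # quotient already inside the staircase: test
        return False
    return len(S | divisors(q)) > d

# toy staircase {1, x, y} with bound 3, exponent tuples in the variables (x, y)
S = {(0, 0), (1, 0), (0, 1)}
assert would_skip(S, (2, 0), (2, 2), 3)      # failure would force #S = 4 > 3
assert not would_skip(S, (2, 0), (3, 0), 3)  # quotient x is already in S
```

As the text observes, a monomial skipped for one relation may still have to be visited for another, so whole loops are skipped only when every relation test at that monomial is skipped.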
4. The adaptive version of the Scalar-FGLM algorithm
While the BMS and Adaptive BMS algorithms are iterative, the Scalar-FGLM algorithm is global, see Berthomieu et al. (2015, 2017) and (Berthomieu and Faugère, 2017, Section 4). It finds the Gröbner basis of the ideal of relations by computing the column rank profile of a big multi-Hankel matrix indexed by a set of monomials T. In practice, this set T must contain all the monomials less than the monomials in the Gröbner basis of relations.

To circumvent the inherent complexity of computing the rank profile of a big multi-Hankel matrix, the authors proposed an adaptive algorithm behaving more closely to the FGLM algorithm, see Faugère et al. (1993).

The goal is to iterate over a monomial t and to determine, for a set S such that H_{S,S} is full rank, whether H_{S∪{t},S∪{t}} is also full rank. If it is, then t is added to S; otherwise a relation with support in S ∪ {t} has been found, and no further relation with leading term a multiple of t will be computed. When a given lower bound on the size of the staircase is reached, the algorithm stops and computes the remaining relations from the leading terms lying on the border of the staircase.

This yields the Adaptive Scalar-FGLM algorithm: Algorithm 2.

Example 7.
We give the trace of the algorithm on the sequence u = (2^i j(i + ·))_{(i,j) ∈ N²} with the drl(y ≺ x) ordering and a lower bound · on the staircase size.
We set L = {1}, S = ∅, G′ = ∅.
We set t = 1 and build the matrix H_{S∪{1},S∪{1}} = (u_{0,0}), which is full rank. Hence S = {1} and L = {y, x}.
We set t = y and build the matrix H_{S∪{y},S∪{y}}, which is not full rank. Solving H_{S,S} α + H_{S,{y}} = 0 yields the relation y − ·, so G = {y − ·}, G′ = {y} and L is updated to {x}.
We set t = x and build the matrix H_{S∪{x},S∪{x}}, which is full rank. Hence S = {1, x} and L = {x²}.
Now #S is greater than or equal to the bound. Solving H_{S,S} α + H_{S,{x²}} = 0 yields the relation x² − x + ·, so G = {y − ·, x² − x + ·} and L is updated to ∅.
Furthermore, the relation y − · has been tested with a shift {1, y}, while the relation x² − x + · has been tested with a shift {1, x}.

Remark 6.
If no lower bound on the size of S were given, then an infinite loop might occur on a non linear recurrent sequence. For instance, on the factorial sequence (i!)_{i ∈ N}, all the monomials x^i would be found in the staircase.

Algorithm 2: Adaptive Scalar-FGLM (simple version).
Input: A table u = (u_i)_{i ∈ N^n} with coefficients in K, ≺ a monomial ordering and d a given bound.
Output: A reduced truncated Gröbner basis of a zero-dimensional ideal of degree ≥ d.
L := {1}.    // set of next terms to study
S := ∅.     // the useful staircase with respect to ≺
G := ∅, G′ := ∅.
While L ≠ ∅ do
    t := min_≺(L).
    If H_{S∪{t},S∪{t}} is full rank then
        S := S ∪ {t} and L := L ∪ {x_i t, i = 1, …, n} \ {t}.
        Remove multiples of elements of G′ in L.
        If #S ≥ d then    // early termination
            While L ≠ ∅ do
                t′ := min_≺(L).
                Find α such that H_{S,S} α + H_{S,{t′}} = 0.
                G := G ∪ {t′ + Σ_{s ∈ S} α_s s}.
                Remove multiples of t′ in L.
            Return G.
    Else
        Find α such that H_{S,S} α + H_{S,{t}} = 0.
        G′ := G′ ∪ {t}. G := G ∪ {t + Σ_{s ∈ S} α_s s}.
        Remove multiples of t in L and sort L by increasing order.
Error “Run Scalar-FGLM”.

If we know the sequence is linear recurrent, then we can remove this bound. In that case, the last step of Example 7 becomes:
We set t = x² and build the matrix H_{S∪{x²},S∪{x²}}, which is not full rank. Solving H_{S,S} α + H_{S,{x²}} = 0 yields the relation x² − x + ·, so G = {y − ·, x² − x + ·}, G′ = {y, x²} and L is updated to ∅.
Furthermore, the relation y − · has been tested with a shift {1, y}, while the relation x² − x + · has been tested with a shift {1, x, x²}.

For a generic sequence, the algorithm computes the ideal of relations of the sequence. However, it is easy to build a sequence on which the algorithm fails: it suffices to have a sequence whose staircase S has a subset S′ such that the matrix H_{S′,S′} has a rank defect. This motivated the authors to extend the algorithm to bypass this issue in Berthomieu et al. (2017).

We give an example of what can happen when wrong relations are computed, and describe their shifts.

Example 8.
We consider the ideal I = ⟨y² − y, x² y − x y², x⁴ − x³ + x² − x⟩ ⊆ F[x, y] and a sequence u = (u_{i,j})_{(i,j) ∈ N²} over F made from this ideal and some initial conditions. [Table of the first terms of u omitted.] We also call the algorithm with the drl(y ≺ x) ordering.
We set L = {1}, S = ∅, G′ = ∅.
We set t = 1 and build the matrix H_{S∪{1},S∪{1}}, which is full rank. Hence S = {1} and L = {y, x}.
We set t = y and build the matrix H_{S∪{y},S∪{y}}, which is full rank. Hence S = {1, y} and L = {x, y², x y}.
We set t = x and build the matrix H_{S∪{x},S∪{x}}, which is full rank. Hence S = {1, y, x} and L = {y², x y, x²}.
We set t = y² and build the matrix H_{S∪{y²},S∪{y²}}, which is not full rank. Solving H_{S,S} α + H_{S,{y²}} = 0 yields the relation y² − y with a shift {1, y, x, y²}, so G = {y² − y}, G′ = {y²} and L is updated to {x y, x²}.
We set t = x y and build the matrix H_{S∪{x y},S∪{x y}}, which is not full rank. Solving H_{S,S} α + H_{S,{x y}} = 0 yields the relation x y − x − y + · with a shift {1, y, x, x y}, so G = {y² − y, x y − x − y + ·}, G′ = {y², x y} and L is updated to {x²}.
We set t = x² and build the matrix H_{S∪{x²},S∪{x²}}, which is full rank. Hence S = {1, y, x, x²} and L = {x³}.
We set t = x³ and build the matrix H_{S∪{x³},S∪{x³}}, which is not full rank. Solving H_{S,S} α + H_{S,{x³}} = 0 yields the relation g = x³ + x² + x + y + · with a shift {1, y, x, x², x³}, so G = {y² − y, x y − x − y + ·, x³ + x² + x + y + ·}, G′ = {y², x y, x³} and L is updated to ∅.
We can notice that:
• the first relation, y² − y, is really a relation of u, but has only, a priori, a shift {1, y, x, y²}, i.e. its shift is y²;
• the second relation, x y − x − y + ·, is not a real relation of u and is known to have a shift {1, y, x, x y}. Actually, we can check that it remains valid when multiplied by y but fails when multiplied by x, so that the relation has a shift {1, y, x, y², x y}, i.e. its shift is x y and its fail is x² y;
• the third relation, x³ + x² + x + y + ·, is not a true relation of u and is known to have a shift {1, y, x, x², x³}. Actually, we can check that it remains valid when multiplied by y but fails when multiplied by x y, i.e. its shift is y and its fail is x y.
All in all, we computed the relation x³ + x² + x + y + · assuming it should be valid when multiplied by x² or x³, while it cannot even be valid when multiplied by x y ≺ x² ≺ x³.
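The rank tests driving these traces can be replayed on a simpler sequence. Below is a minimal stdlib-only sketch (ours, not the authors' implementation; `seq`, `hankel` and `rank` are assumed names) for u_{i,j} = 2^i 3^j, whose ideal of relations is ⟨y − 3, x − 2⟩: the multi-Hankel matrix stays full rank on the staircase {1} and loses rank as soon as y is adjoined, revealing the relation y − 3.

```python
from fractions import Fraction

def seq(i, j):
    # sample sequence u_{i,j} = 2^i * 3^j; its ideal of relations is <y - 3, x - 2>
    return Fraction(2**i * 3**j)

def hankel(rows, cols):
    # multi-Hankel matrix H_{U,T}: the entry for monomials x^a y^b and x^c y^d
    # is the sequence term u_{a+c, b+d}
    return [[seq(a + c, b + d) for (c, d) in cols] for (a, b) in rows]

def rank(M):
    # exact Gaussian elimination over the rationals
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

one, y, x = (0, 0), (0, 1), (1, 0)           # monomials as exponent tuples
S = [one]
assert rank(hankel(S, S)) == 1               # H_{S,S} full rank: 1 enters the staircase
assert rank(hankel(S + [y], S + [y])) == 1   # rank drop: a relation with lm y exists

# solving H_{S,S} alpha + H_{S,{y}} = 0 gives alpha = -3, i.e. the relation y - 3
alpha = -hankel(S, [y])[0][0] / hankel(S, S)[0][0]
assert alpha == -3
```

The same computation with t = x detects x − 2. On a sequence such as the one of Example 8, analogous rank checks on a degenerate subset S′ of the staircase are exactly what produce the wrong relations discussed above.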
5. Analogies and differences of the adaptive variants

We now compare the Adaptive BMS and the Adaptive Scalar-FGLM algorithms theoretically. As the Adaptive BMS algorithm differs from the BMS algorithm only in its execution (mainly, some testings are skipped), results from (Berthomieu and Faugère, 2017, Section 6) are still valid for the Adaptive BMS algorithm. On the other hand, the Adaptive Scalar-FGLM algorithm does not necessarily provide the same output as the Scalar-FGLM algorithm.

In (Berthomieu and Faugère, 2017, Section 5.1, Theorem 7), we show that the BMS algorithm always returns a zero-dimensional ideal, while the Scalar-FGLM algorithm can return a zero-dimensional or a positive-dimensional ideal. This is in fact one of the main differences between these two algorithms.

In the following theorem, we prove that the Adaptive BMS algorithm and the Adaptive Scalar-FGLM algorithm are closer on that matter, assuming one knows the size of the output staircase in advance.

Theorem 7. Let u be a sequence, ≺ be a monomial ordering and d be the size of the staircase.
Calling the Adaptive BMS algorithm on u, ≺, d and a stopping monomial M yields a truncated Gröbner basis of a zero-dimensional ideal.
Calling the Adaptive Scalar-FGLM algorithm on u, ≺ and d yields a truncated Gröbner basis of a zero-dimensional ideal.

Proof. The first part of the result comes directly from the line G′ := Border(S′) in the description of the Adaptive BMS algorithm, Algorithm 1.
The second part comes from the fact that the leading terms of the relations lie in the border of the staircase and are minimal for both ≺ and |. Thus, for any variable x_i, there always exists a relation with leading term a pure power of x_i.

It is possible to change this early termination procedure so that the Adaptive Scalar-FGLM algorithm is closer to the Scalar-FGLM algorithm, yielding a potentially positive-dimensional output. If we still want to try to close the staircase as much as possible with degenerate square matrices, it suffices to check that the relation t′ + Σ_{s ∈ S} α_s s is valid with a shift S ∪ {t′}. This yields Algorithm 3.

The Adaptive Scalar-FGLM algorithm computes a staircase and then relations with support in the staircase, except for their leading terms, which lie on the border. On the other hand, although the Adaptive BMS algorithm may compute the same ideal of relations as the Adaptive Scalar-FGLM algorithm, their Gröbner bases can be different.

Theorem 8.
Let u be a sequence, ≺ be a monomial ordering and d be the size of the staircase.
Calling the Adaptive Scalar-FGLM algorithm on u, ≺ and d yields a truncated reduced Gröbner basis of an ideal.
Calling the Adaptive BMS algorithm on u, ≺, d and a stopping monomial M yields a truncated minimal Gröbner basis of an ideal, which is not necessarily reduced.
Furthermore, even if u is linear recurrent and the Adaptive Scalar-FGLM algorithm computes the ideal of relations of u, there is no reason for the output of the Adaptive BMS algorithm to be reduced.

Algorithm 3: Tweaked Adaptive Scalar-FGLM.
Input: A table u = (u_i)_{i ∈ N^n} with coefficients in K, ≺ a monomial ordering and d a given bound.
Output: A reduced truncated Gröbner basis of a zero-dimensional ideal of degree ≥ d.
L := {1}.    // set of next terms to study
S := ∅.     // the useful staircase with respect to ≺
G := ∅, G′ := ∅.
While L ≠ ∅ do
    t := min_≺(L).
    If H_{S∪{t},S∪{t}} is full rank then
        S := S ∪ {t} and L := L ∪ {x_i t, i = 1, …, n} \ {t}.
        Remove multiples of elements of G′ in L.
        If #S ≥ d then    // early termination
            While L ≠ ∅ do
                t′ := min_≺(L).
                Find α such that H_{S,S} α + H_{S,{t′}} = 0.
                If H_{{t′},S} α + H_{{t′},{t′}} = 0 then
                    G := G ∪ {t′ + Σ_{s ∈ S} α_s s}.
                Remove multiples of t′ in L.
            Return G.
    Else
        Find α such that H_{S,S} α + H_{S,{t}} = 0.
        G′ := G′ ∪ {t}. G := G ∪ {t + Σ_{s ∈ S} α_s s}.
        Remove multiples of t in L and sort L by increasing order.
Error “Run Scalar-FGLM”.

Proof. For two distinct polynomials g, g′ in the Gröbner basis returned by the Adaptive Scalar-FGLM algorithm, lt(g) does not divide any monomial in the support of g′. Hence the Gröbner basis is reduced.
For two distinct polynomials g, g′ in the Gröbner basis returned by the Adaptive BMS algorithm, lt(g) does not divide lt(g′). Hence the Gröbner basis is minimal. However, there is no reason for lt(g) not to divide a monomial in the support of g′.

Example 9.
We let u = (i + j − ·)_{(i,j) ∈ N²} be a sequence and consider the drl(y ≺ x) ordering. The ideal of relations of u is I = ⟨x y − x − y + ·, x − y − x + y, y − y + y − ·⟩.
The Adaptive BMS algorithm called on u and the stopping monomial y returns g = x y − x − y + ·, with a shift x, g = x − x y − y − x + y − ·, with a shift x, and g = y − x y − y + x + y − ·, with a shift y. We can notice that {g, g, g} is a Gröbner basis, but not a reduced Gröbner basis, of I.
The Adaptive Scalar-FGLM algorithm called on u and the set of all the monomials of degree at most · yields the relations g′ = x y − x − y + ·, g′ = x − y − x + y, g′ = y − y + y − ·. We can notice that {g′, g′, g′} = {g, g + g, g + g} is a reduced Gröbner basis of I.

As for the BMS algorithm, it is not hard to tweak the Adaptive BMS algorithm so that it returns a reduced Gröbner basis: it suffices to perform an inter-reduction of the relations, either at the end of each step of the main For loop or just before returning the Gröbner basis, see Algorithm 4.
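The inter-reduction step added in Algorithm 4 can be sketched as follows. This is our own minimal illustration (the names `key`, `lm` and `reduce_tail` are ours, and we use a graded lexicographic order with x > y rather than drl): the tail of each basis element is reduced modulo the leading terms of the others, so that {y − 2, x² − y} becomes the reduced basis {y − 2, x² − 2}.

```python
from fractions import Fraction

def key(m):
    # graded lexicographic order with x > y on exponent tuples (ex, ey)
    return (sum(m), m)

def lm(p):
    # leading monomial of a polynomial stored as {exponent tuple: coefficient}
    return max(p, key=key)

def reduce_tail(p, G):
    # fully reduce the polynomial p modulo the leading terms of the basis G
    again = True
    while again:
        again = False
        for g in G:
            t, c = lm(g), g[lm(g)]
            for m in sorted(p, key=key, reverse=True):
                if all(a >= b for a, b in zip(m, t)):   # lm(g) divides m
                    q = tuple(a - b for a, b in zip(m, t))
                    f = p[m] / c
                    for mg, cg in g.items():            # p := p - f * x^q * g
                        mm = tuple(a + b for a, b in zip(q, mg))
                        p[mm] = p.get(mm, Fraction(0)) - f * cg
                        if p[mm] == 0:
                            del p[mm]
                    again = True
                    break
    return p

# inter-reduce {y - 2, x^2 - y}: the tail -y of x^2 - y rewrites to -2
g1 = {(0, 1): Fraction(1), (0, 0): Fraction(-2)}   # y - 2
g2 = {(2, 0): Fraction(1), (0, 1): Fraction(-1)}   # x^2 - y
tail = reduce_tail({m: c for m, c in g2.items() if m != lm(g2)}, [g1])
g2 = {lm(g2): Fraction(1), **tail}
assert g2 == {(2, 0): 1, (0, 0): -2}               # x^2 - 2
```

Performing this on every element of the returned basis is exactly the InterReduce step: leading terms are untouched, so minimality is preserved while the basis becomes reduced.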
One of the main differences between the BMS and the Scalar-FGLM algorithms is the validity of the relations they return, given a Gröbner basis returned by either algorithm. Loosely speaking, the Scalar-FGLM algorithm only ensures that all the relations in the Gröbner basis have the same shifts, while for the BMS algorithm, the smaller the leading term of a relation is, the larger its computed shift is. See (Berthomieu and Faugère, 2017, Theorem 19).

Naturally, if the upper bound on the size of the staircase given to the Adaptive BMS algorithm is correct, then the shifts computed by the Adaptive BMS algorithm are the same as those computed by the BMS algorithm.

In Examples 7 and 8, we can see that the shifts computed by the Adaptive Scalar-FGLM algorithm are not all the same. This is the main difference between the Scalar-FGLM and the Adaptive Scalar-FGLM algorithms. In fact, we prove in the following Theorem 9 that the larger the leading term of a computed relation, the larger its shift.

Algorithm 4: Tweaked Adaptive BMS algorithm.
Input: A table u = (u_i)_{i ∈ N^n} with coefficients in K, a monomial ordering ≺, a given bound d and a monomial M as the stopping condition.
Output: A set G of relations generating I_M.
T := {m ∈ K[x], m ⪯ M}. G := {1}. S := ∅.
For all m ∈ T do
    S′ := S.
    For g ∈ G do
        If lm(g) | m then
            If m/lm(g) ∉ Stabilize(S) and #(S ∪ {lm(g), m/lm(g)}) > d then
                next.    // skip this relation testing
            e := [(m/lm(g)) g]_u.
            If e ≠ 0 then S′ := S′ ∪ {[g/e, m/lm(g)]}.
    S′ := min_{fail(h) ∈ S′} {[h, fail(h)/lm(h)]}.
    G′ := Border(S′).
    For g′ ∈ G′ do
        Let g ∈ G such that lm(g) | lm(g′).
        If lm(g) ∤ m then g′ := (lm(g′)/lm(g)) g.
        Else if ∃ h ∈ S, m lm(g′) | fail(h) then g′ := (lm(g′)/lm(g)) g − [(m/lm(h)) h]_u (lm(g′)/fail(h)) m h.
        Else g′ := g.
    G := InterReduce(G′). S := S′.
Return G.

Theorem 9. Let u be a sequence, ≺ be a monomial ordering and d be the size of the output staircase S. Let S_M = {m ∈ S, m ≺ M}.
Calling the Adaptive BMS algorithm on u, ≺, d and a stopping monomial M yields relations g_1, …, g_r and shifts v_1, …, v_r such that, for all i, 1 ≤ i ≤ r, v_i lm(g_i) ⪯ M and g_i is valid with a shift v_i.
Calling the Adaptive Scalar-FGLM algorithm on u, ≺ and d yields relations g′_1, …, g′_{r′} such that, for all i, 1 ≤ i ≤ r′, deg lm(g′_i) ≤ d and g′_i has a shift S_{lm(g′_i)} ∪ {lm(g′_i)} if lm(g′_i) ≻ max_≺(S), and S otherwise.

Proof. The first part is clear from the behavior of both the BMS and the Adaptive BMS algorithms.
The second part comes from the fact that if g′_i, with lm(g′_i) = t, is found before S is completed, then it is because the matrix H_{S_t∪{t},S_t∪{t}} had a rank defect, where S_t is the state of S at loop t. Furthermore, S_t = S_{lm(g′_i)}. Otherwise, it is computed by solving H_{S,S} α + H_{S,{t′}} = 0 with a shift S.

In a way, the behavior of the Adaptive Scalar-FGLM algorithm is the opposite of that of the BMS and the Adaptive BMS algorithms. Furthermore, if one uses Algorithm 3 instead of the Adaptive Scalar-FGLM algorithm, then each returned relation g′_i has a shift S_{lm(g′_i)} ∪ {lm(g′_i)}.

Example 10.
Let us consider the sequence u = (F_{i+·})_{(i,j) ∈ N²}, where (F_i)_{i ∈ N} is the Fibonacci sequence. Its ideal of relations is ⟨y − 1, x² − x − 1⟩, so that its staircase has size 2.
Calling the Adaptive Scalar-FGLM algorithm on this sequence with this bound on the staircase makes us create the matrices:
H_{{1},{1}}, which is full rank, hence 1 ∈ S;
H_{{1,y},{1,y}}, which is not full rank, hence the relation y − 1 is found with a shift {1, y};
H_{{1,x},{1,x}}, which is full rank, hence x ∈ S.
Now the staircase is found, so it remains to solve H_{S,S} α + H_{S,{x²}} = 0, yielding the relation x² − x − 1 with a shift S.

5.4. Monomial ordering and set of terms

In this section, we study how both algorithms handle a monomial ordering that is not a weighted degree ordering. The classical specification of the BMS algorithm is that the ordering must be a weighted ordering. However, when running the Adaptive BMS algorithm, the upper bound on the staircase size ensures that we never visit monomials of degree more than twice this bound. Therefore, we can now use any monomial ordering with the Adaptive BMS algorithm by simply enumerating, in increasing order, all the monomials of degree at most twice the upper bound. This allows us to deal with ideals in shape position with both the Adaptive BMS and the Adaptive Scalar-FGLM algorithms.

Theorem 10.
Let u be a linear recurrent sequence whose ideal of relations I is in shape position for the lex(x_n ≺ · · · ≺ x_2 ≺ x_1) ordering, i.e. there exist g_n squarefree and f_{n−1}, …, f_1 ∈ K[x_n], with deg g_n = d and deg f_i < d, such that I = ⟨g_n(x_n), x_{n−1} − f_{n−1}(x_n), …, x_1 − f_1(x_n)⟩.
Assuming no error is thrown in the execution of the Adaptive Scalar-FGLM algorithm called on u, d and lex(x_n ≺ · · · ≺ x_1), the output is I.
Calling the Adaptive BMS algorithm on u, d and lex(x_n ≺ · · · ≺ x_1) yields I.

Proof. Assuming no error is thrown during the execution of the Adaptive Scalar-FGLM algorithm, the staircase is incrementally updated from ∅ to {1, x_n, …, x_n^{d−1}}. Then the staircase size is reached and the early termination procedure solves the systems H_{S,S} α + H_{S,{t}} = 0 for t ∈ {x_n^d, x_{n−1}, …, x_1}, yielding g_n(x_n), x_{n−1} − f_{n−1}(x_n), …, x_1 − f_1(x_n).
For the Adaptive BMS algorithm, we visit every monomial of degree at most 2d − 1. The first relation, g_n(x_n), is computed by the algorithm visiting the monomials 1, x_n, …, x_n^{2d−1}, like the BM algorithm. Then each relation x_i − f_i(x_n) is computed by visiting the monomials x_i, x_i x_n, …, x_i x_n^{d−1}, all of degree less than 2d − 1.

Example 11. We let u = (F_{i+·k+·})_{(i,j,k) ∈ N³}, where (F_i)_{i ∈ N} is the Fibonacci sequence. The ideal of relations of u is I = ⟨z² − z − 1, y − 1, x − z − ·⟩, with a staircase of size 2.
For the Adaptive Scalar-FGLM algorithm called on u, d = 2 and the lex(z ≺ y ≺ x) ordering, the algorithm creates the matrices:
H_{{1},{1}}, which is full rank, hence 1 ∈ S;
H_{{1,z},{1,z}}, which is full rank, hence z ∈ S.
Now the staircase is found, so it remains to solve:
H_{S,S} α + H_{S,{z²}} = 0, yielding the relation g_1 = z² − z − 1;
H_{S,S} α + H_{S,{y}} = 0, yielding the relation g_2 = y − 1;
H_{S,S} α + H_{S,{x}} = 0, yielding the relation g_3 = x − z − ·.
The algorithm returns ⟨g_1, g_2, g_3⟩ = I.
Calling the Adaptive BMS algorithm on u, d = 2, the stopping monomial x z and the lex(z ≺ y ≺ x) ordering makes us visit the set of all monomials of degree at most 2d − 1 = 3 that are less than x z, i.e. {1, z, z², z³, y, y z, y z², y², y² z, y³, x, x z}.
The algorithm tests the relation g = 1 at u_{0,0,0} = F_· ≠ 0, where it fails. It now has the relations g = x, g = y and g = z.
Testing g = z at the next term, it updates the relation to g = z − ·. Going on testing g = z − · at the following terms, it is able to guess that g = z² − z − 1. The staircase is now {1, z}, of size 2, so it has been found. As anticipated, there is no need to go further in that direction.
Testing g = y, the relation is updated to g = y − 1. Then it checks that this relation is valid at the next term, but skips the following ones thanks to its criterion.
It remains to test g = x. It fails and the algorithm updates the relation to g = x − ·. Finally, g = x − · is tested once more and the relation is updated to g = x − z − ·.
The algorithm returns ⟨g_1, g_2, g_3⟩ = I.
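The final solving step of Examples 10 and 11 can be replayed numerically. Below is a small sketch (ours, taking the convention F_1 = F_2 = 1 for the elided offsets): with the staircase S = {1, x} of a Fibonacci-based sequence, solving H_{S,S} α = −H_{S,{x²}} over the rationals by Cramer's rule recovers the relation x² − x − 1.

```python
from fractions import Fraction

# first terms F_1, F_2, ... of the Fibonacci sequence (constant in the other indices)
F = [1, 1, 2, 3, 5, 8, 13]

# staircase S = {1, x}; the multi-Hankel matrix H_{S,S} only sees the i-direction
H = [[Fraction(F[0]), Fraction(F[1])],
     [Fraction(F[1]), Fraction(F[2])]]
rhs = [Fraction(F[2]), Fraction(F[3])]   # the column H_{S,{x^2}}

# solve H * (a0, a1) = -rhs by Cramer's rule: the relation is x^2 + a1*x + a0
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
a0 = -(rhs[0] * H[1][1] - rhs[1] * H[0][1]) / det
a1 = -(H[0][0] * rhs[1] - H[1][0] * rhs[0]) / det
assert (a0, a1) == (-1, -1)              # recovers x^2 - x - 1
```

Since every 2 × 2 Hankel matrix of consecutive Fibonacci numbers has determinant ±1, the same computation works whatever the elided offset is.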
6. Complexity and benchmarks of the adaptive variants

In this section, we present some benchmarks to compare how the Adaptive BMS and the Adaptive Scalar-FGLM algorithms behave. Four families of ideals of relations are used to make the sequences.
• In the first family, the leading monomials of the ideal of relations are ⟨y^{⌊d/2⌋}, x^d⟩. Thus, its staircase is a rectangle of size around d²/2. In three variables, the leading monomials are ⟨z^{⌈d/3⌉}, y^{⌊d/2⌋}, x^d⟩, so that the staircase is a rectangular cuboid of size around d³/6. This family will be called Rectangle.
• In the second family, the leading monomials of the ideal of relations are ⟨x y, y^d, x^d⟩. Thus, its staircase looks like an L and has size 2d − 1. In three variables, the leading monomials are ⟨y z, x z, x y, z^d, y^d, x^d⟩, so that the staircase has size 3d − 2. This family will be called L shape. It was considered as the worst case in Berthomieu et al. (2015, 2017) for the Adaptive Scalar-FGLM algorithm in terms of the number of queries. It should also be a worst case for the Adaptive BMS algorithm.
• In the third family, the leading monomials of the ideal of relations are all the monomials of degree d. Thus, its staircase is a simplex and has size C(d+1, 2) = d(d+1)/2 in two variables. In three variables, the staircase has size C(d+2, 3) = d(d+1)(d+2)/6. This family will be called Simplex. It should be the best case for both the Scalar-FGLM and the BMS algorithms.
• In the last family, the leading monomials of the ideal of relations are ⟨y^d, x⟩. Thus, its staircase looks like a line and has size d. In three variables, the leading monomials are ⟨z^d, y, x⟩, so that the staircase also has size d. This is the generic family for a lex(z ≺ y ≺ x) basis, and this example corresponds to the change-of-ordering application, see Section 5.4. This family will be called Shape position.
For the first three families, we called the algorithms with the drl(z ≺ y ≺ x) ordering; for the last one, we called them with the lex(z ≺ y ≺ x) ordering. For the Adaptive BMS algorithm, we used Proposition 4 to estimate sharply the stopping monomial.
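For reference, the staircase sizes quoted above can be tabulated directly. The helper below is ours; the Rectangle family is omitted because its exact exponents depend on the floor and ceiling choices above, its size being roughly d²/2 in two variables and d³/6 in three.

```python
from math import comb

def staircase_sizes(d, n):
    # |S| for three of the benchmark families, in n variables
    return {
        "L shape": n * d - (n - 1),       # 2d - 1 in 2 variables, 3d - 2 in 3
        "Simplex": comb(d + n - 1, n),    # all monomials of degree < d
        "Shape position": d,              # a line of length d
    }

assert staircase_sizes(5, 2) == {"L shape": 9, "Simplex": 15, "Shape position": 5}
assert staircase_sizes(5, 3) == {"L shape": 13, "Simplex": 35, "Shape position": 5}
```

These are the normalizing quantities #S used on the vertical axes of the figures below.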
The Adaptive Scalar-FGLM algorithm computes all the multi-Hankel matrices whose rows and columns are indexed by the terms that are in the staircase or are a leading monomial of the Gröbner basis. Likewise, the Adaptive BMS algorithm needs to test each relation, with support in S ∪ lm(G), shifted by as many monomials as there are in S. Therefore, we have the following proposition.

Proposition 11.
Let u = (u_i)_{i ∈ N^n} be a sequence and G be a reduced Gröbner basis of its ideal of relations for a total degree ordering. Let S be the staircase of G and S⁺ = S ∪ lm(G). Let S + T = {s t, s ∈ S, t ∈ T} and 2S = S + S = {s s′, s, s′ ∈ S}. Let d_S be the greatest degree of the elements in S, d_G be the greatest degree of the elements in G and d_max = max(d_S, d_G). Let S(d) be the simplex of all monomials of degree d.
Then, the Adaptive BMS algorithm needs to perform at least #(S + S⁺) and at most #S(d_S + d_max) = C(n + d_S + d_max, n) queries to the sequence.
The Adaptive Scalar-FGLM algorithm needs to perform at least #(2S) and fewer than #(2S⁺) queries to u. In the worst case, this number grows as #(S⁺)².

In the experiments of Figures 1 and 2, we can see that for the Rectangle family, the Adaptive Scalar-FGLM algorithm performs much fewer queries than the Adaptive BMS algorithm.

Figure 1: Number of table queries (2D): Adaptive Scalar-FGLM & Adaptive BMS (queries normalized by #S, as a function of d, for the Rectangle, L shape, Simplex and Shape position families).
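The query lower bounds of Proposition 11 can be made concrete on a small instance. A quick stdlib sketch (the names `minkowski`, `lower_fglm` and `lower_bms` are ours): for the 2-variable L shape staircase with d = 4, we count #(2S) and #(S + S⁺), the minimal numbers of queries of the Adaptive Scalar-FGLM and the Adaptive BMS algorithms respectively.

```python
def minkowski(A, B):
    # S + T for monomial sets represented as exponent tuples (exponents add)
    return {tuple(a + b for a, b in zip(s, t)) for s in A for t in B}

# L shape staircase in 2 variables for d = 4: S = {1, y, y^2, y^3, x, x^2, x^3}
d = 4
S = {(0, j) for j in range(d)} | {(i, 0) for i in range(1, d)}
lmG = {(1, 1), (0, d), (d, 0)}             # leading monomials <x y, y^d, x^d>
Splus = S | lmG

assert len(S) == 2 * d - 1                 # staircase size 2d - 1
lower_fglm = len(minkowski(S, S))          # at least #(2S) queries (Adaptive Scalar-FGLM)
lower_bms = len(minkowski(S, Splus))       # at least #(S + S^+) queries (Adaptive BMS)
assert lower_fglm <= lower_bms
```

Already on this toy example the Adaptive Scalar-FGLM lower bound is the smaller of the two, in line with the Rectangle and L shape measurements reported in the figures.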
For the L shape family, the size of the staircase only grows as O(d). Our experiments suggest that the number of queries grows as O(d^n) for the Adaptive BMS algorithm, while it only grows as O(d) for the Adaptive Scalar-FGLM algorithm. This can be a huge advantage in dimension at least 3.
We can see that the Adaptive BMS algorithm cannot benefit from the small size of the staircase in the L shape family, as it needs as many queries as in the Simplex family. Yet, although the L shape family is a worst case for the Adaptive Scalar-FGLM algorithm, it is still able to query fewer sequence terms for the L shape family than for the Simplex family.

The complexity of the BMS algorithm has been studied in Sakata (2009), yielding the following proposition.
Proposition 12.
Let u = (u_i)_{i ∈ N^n} be a sequence, G be a minimal Gröbner basis of its ideal of relations for a total degree ordering and S be the staircase of G.
Then, the BMS algorithm performs at most O((#S)² #lm(G)) operations to recover the ideal of relations of u.

Obviously, the bound of Proposition 12 on the number of basic operations applies to the Adaptive BMS algorithm. Yet, since the number of skipped relation testings is hard to predict, it is not clear how to make this bound sharper for the Adaptive BMS algorithm.

Figure 2: Number of table queries (3D): Adaptive Scalar-FGLM & Adaptive BMS (queries normalized by #S, as a function of d).

The Adaptive Scalar-FGLM algorithm computes the rank of a matrix of size at most #S⁺. Furthermore, it solves as many linear systems with this matrix as there are polynomials in the Gröbner basis. All in all, we have the following result.

Proposition 13.
Let u = (u_i)_{i ∈ N^n} be a sequence, G be a reduced Gröbner basis of its ideal of relations for a total degree ordering and S be the staircase of G.
Then, the number of operations performed by the Adaptive Scalar-FGLM algorithm to recover the ideal of relations of u is at most O((#S)² (#S + #lm(G))).

In the following Figures 3 and 4, we report on the ratio between the number of basic operations and the cube of the size of the staircase. It seems that the Adaptive Scalar-FGLM algorithm always performs fewer operations than the Adaptive BMS algorithm. Though, it is possible that, in dimension 2, for larger parameters, the Adaptive BMS algorithm becomes more efficient than the Adaptive Scalar-FGLM algorithm, as suggested by the graphs. Concerning the L shape family, although the Adaptive BMS algorithm does not reduce its number of table queries much, it in fact performs much fewer basic operations than the BMS algorithm. For instance, in (Berthomieu and Faugère, 2017, Section 6), we can see that the BMS algorithm performs four times (resp. seven times) as many basic operations as the Adaptive BMS algorithm in dimension 2 (resp. dimension 3).

Figure 3: Number of basic operations (2D): Adaptive Scalar-FGLM & Adaptive BMS (basic operations normalized by (#S)³, as a function of d).

Figure 4: Number of basic operations (3D): Adaptive Scalar-FGLM & Adaptive BMS (basic operations normalized by (#S)³, as a function of d).
It is also possible that the larger number of operations the Adaptive BMS algorithm performs, compared to the Scalar-FGLM algorithm, is due to the larger number of queries it needs to recover the relations. Therefore, we now also compare the ratio between their numbers of basic operations and their numbers of queries, in Figures 5 and 6.

Figure 5: Number of basic operations by queries (2D): Adaptive Scalar-FGLM & Adaptive BMS.

In dimension 2, the Adaptive Scalar-FGLM algorithm seems to have a better ratio between the number of operations and the number of queries than the Adaptive BMS algorithm. Yet, once again, it is possible that this statement no longer holds for larger d. In dimension 3, however, our experiments lead us to believe that this ratio will always be larger for the Adaptive BMS algorithm than for the Adaptive Scalar-FGLM algorithm.

Figure 6: Number of basic operations by queries (3D): Adaptive Scalar-FGLM & Adaptive BMS.
References
Banderier, C., Flajolet, P., 2002. Basic analytic combinatorics of directed latticepaths. Theoret. Comput. Sci. 281 (1–2), 37–80, selected Papers in honour ofMaurice Nivat.URL
Benoit, A., Chyzak, F., Darrasse, A., Gerhold, S., Mezzarobba, M., Salvy, B., 2010. The Dynamic Dictionary of Mathematical Functions (DDMF). In: Fukuda, K., Hoeven, J. v. d., Joswig, M., Takayama, N. (Eds.), Mathematical Software – ICMS 2010. Springer, Berlin, Heidelberg, pp. 35–41. URL http://dx.doi.org/10.1007/978-3-642-15582-6_7
Berlekamp, E., 1968. Nonbinary BCH decoding. IEEE Trans. Inform. Theory 14 (2), 242–242.
Berthomieu, J., Boyer, B., Faugère, J.-Ch., 2015. Linear Algebra for Computing Gröbner Bases of Linear Recursive Multidimensional Sequences. In: Proceedings of the 40th International Symposium on Symbolic and Algebraic Computation. Bath, United Kingdom, pp. 61–68.
Berthomieu, J., Boyer, B., Faugère, J.-Ch., 2017. Linear Algebra for Computing Gröbner Bases of Linear Recursive Multidimensional Sequences. Journal of Symbolic Computation 83 (Supplement C), 36–67. Special issue on the conference ISSAC 2015: Symbolic computation and computer algebra. URL https://hal.inria.fr/hal-01253934
Berthomieu, J., Faugère, J.-Ch., 2016. Guessing Linear Recurrence Relations of Sequence Tuples and P-recursive Sequences with Linear Algebra. In: 41st International Symposium on Symbolic and Algebraic Computation. Waterloo, ON, Canada, pp. 95–102.
Berthomieu, J., Faugère, J.-Ch., 2017. In-depth comparison of the Berlekamp–Massey–Sakata and the Scalar-FGLM algorithms: the non adaptive variants. Preprint. URL https://hal.inria.fr/hal-01516708
Berthomieu, J., Faugère, J.-Ch., 2018. A Polynomial-Division-Based Algorithm for Computing Linear Recurrence Relations. In: ISSAC 2018 – 43rd International Symposium on Symbolic and Algebraic Computation. New York, United States, p. 8. URL https://hal.inria.fr/hal-01784369
Bose, R., Ray-Chaudhuri, D., 1960. On a class of error correcting binary group codes. Information and Control 3 (1), 68–79.
Bostan, A., Bousquet-Mélou, M., Kauers, M., Melczer, S., 2014. On 3-dimensional lattice walks confined to the positive octant. To appear in Annals of Combinatorics.
Bousquet-Mélou, M., Mishna, M., 2010. Walks with small steps in the quarter plane. In: Algorithmic probability and combinatorics. Vol. 520 of Contemp. Math. Amer. Math. Soc., Providence, RI, pp. 1–39. URL http://dx.doi.org/10.1090/conm/520/10252
Bousquet-Mélou, M., Petkovšek, M., 2003. Walks confined in a quadrant are not always D-finite. Theoret. Comput. Sci. 307 (2), 257–276. Random Generation of Combinatorial Objects and Bijective Combinatorics.
Bras-Amorós, M., O'Sullivan, M. E., 2006. The correction capability of the Berlekamp–Massey–Sakata algorithm with majority voting. Applicable Algebra in Engineering, Communication and Computing 17 (5), 315–335. URL http://dx.doi.org/10.1007/s00200-006-0015-8
Cox, D. A., Little, J., O'Shea, D., 2005. Using Algebraic Geometry, 2nd Edition. Vol. 185 of Graduate Texts in Mathematics. Springer, New York.
Faugère, J.-Ch., Gianni, P., Lazard, D., Mora, T., 1993. Efficient Computation of Zero-dimensional Gröbner Bases by Change of Ordering. J. Symbolic Comput. 16 (4), 329–344.
Faugère, J.-Ch., Mou, C., 2011. Fast Algorithm for Change of Ordering of Zero-dimensional Gröbner Bases with Sparse Multiplication Matrices. In: Proc. of the 36th ISSAC. ACM, pp. 115–122.
Faugère, J.-Ch., Mou, C., 2017. Sparse FGLM algorithms. Journal of Symbolic Computation 80 (3), 538–569.
Fitzpatrick, P., Norton, G., 1990. Finding a basis for the characteristic ideal of an n-dimensional linear recurring sequence. IEEE Trans. Inform. Theory 36 (6), 1480–1487.
Guisse, V., 2016. Algèbre linéaire dédiée pour les algorithmes Scalar-FGLM et Berlekamp–Massey–Sakata. Master's thesis, Université Paris-Diderot.
Hocquenghem, A., 1959. Codes correcteurs d'erreurs. Chiffres 2, 147–156.
Jonckheere, E., Ma, C., 1989. A simple Hankel interpretation of the Berlekamp–Massey algorithm. Linear Algebra Appl. 125 (0), 65–76.
Kaltofen, E., Pan, V., 1991. Processor efficient parallel solution of linear systems over an abstract field. In: SPAA '91. ACM Press, New York, N.Y., pp. 180–191.
Kaltofen, E., Yuhasz, G., 2013a. A fraction free Matrix Berlekamp/Massey algorithm. Linear Algebra Appl. 439 (9), 2515–2526.
Kaltofen, E., Yuhasz, G., 2013b. On the Matrix Berlekamp–Massey Algorithm. ACM Trans. Algorithms 9 (4), 33:1–33:24. URL http://doi.acm.org/10.1145/2500122
Levinson, N., 1947. The Wiener RMS (Root-Mean-Square) error criterion in filter design and prediction. J. Math. Phys. 25, 261–278.
Massey, J. L., 1969. Shift-register synthesis and BCH decoding. IEEE Trans. Inform. Theory IT-15, 122–127.
Sakata, S., 1988. Finding a minimal set of linear recurring relations capable of generating a given finite two-dimensional array. J. Symbolic Comput. 5 (3), 321–337.
Sakata, S., 1990. Extension of the Berlekamp–Massey algorithm to N dimensions. Inform. and Comput. 84 (2), 207–239. URL http://dx.doi.org/10.1016/0890-5401(90)90039-K
Sakata, S., 1991. Decoding binary 2-D cyclic codes by the 2-D Berlekamp–Massey algorithm. IEEE Trans. Inform. Theory 37 (4), 1200–1203. URL http://dx.doi.org/10.1109/18.86974
Sakata, S., 2009. The BMS Algorithm. In: Sala, M., Sakata, S., Mora, T., Traverso, C., Perret, L. (Eds.), Gröbner Bases, Coding, and Cryptography. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 143–163. URL http://dx.doi.org/10.1007/978-3-540-93806-4_9
Wiener, N., 1964. Extrapolation, Interpolation, and Smoothing of Stationary Time Series. The MIT Press.
Appendix A The BMS algorithm
This appendix can also be found in (Berthomieu and Faugère, 2017, Section 3).
As in Guisse (2016), we specialize to K[x] the presentation of the BMS algorithm given in Bras-Amorós and O'Sullivan (2006), Cox et al. (2005) and Sakata (2009) in the more general case of ordered domains.

A.1 A Polynomial interpretation of the BMS algorithm
Given a table u = (u_i)_{i∈ℕⁿ} and a weight ordering ≺ for x, we let T = {0} ∪ {xⁱ, i ∈ ℕⁿ} and extend ≺ (still denoted ≺) to T with the convention that 0 ≺ m for any monomial m. The BMS algorithm iterates on the monomials m ∈ T, by only considering, at each step, the table (u_k)_{k, x^k ⪯ m}. As we only know the table u partially, we need to define some notions according to this partial knowledge at step m.

Definition A.1.
Let m ∈ T and let f ∈ K[x]. We say that the relation f is valid up to m whenever, for all t ∈ T, lm(t f) ⪯ m ⇒ [t f] = 0. We thus define the shift of f as shift(f) = m/lm(f).

We say that the relation f fails at m whenever, for all t ∈ T, lm(t f) ≺ m ⇒ [t f] = 0, while [(m/lm(f)) f] ≠ 0. We define the fail of f as fail(f) = m. If the relation f never fails, that is, if [t f] = 0 for all t ∈ T, then by convention fail(f) = shift(f) = +∞.

Proposition A.1.
Let u be a table and let f ∈ K[x] be such that fail(f) ≻ m. For all g ∈ K[x], if lm(g f) ⪯ m, then [g f] = 0.

The following proposition shows how to combine two failing relations with the same shift in order to obtain a new relation valid with a bigger shift.
Proposition A.2.
Let f₁ and f₂ be two relations such that v = fail(f₁)/lm(f₁) = fail(f₂)/lm(f₂), and let e₁ = [v f₁], e₂ = [v f₂]. Let f be the nonzero polynomial f₁ − (e₁/e₂) f₂. Then, for i ∈ {1, 2}, fail(f) ≻ fail(f_i), i.e. fail(f)/lm(f) ≻ v.

Proof. For any c ∈ K and any µ ∈ T such that µ ≺ v, we have [µ (f₁ + c f₂)] = [µ f₁] + c [µ f₂] = 0, hence fail(f₁ + c f₂) ⪰ fail(f_i). It remains to prove that, for a good choice of c, we have a strict inequality: as [v (f₁ + c f₂)] = [v f₁] + c [v f₂] = e₁ + c e₂, it is clear that [v f] = [v (f₁ − (e₁/e₂) f₂)] = 0, hence fail(f) ≻ v lm(f) ⪰ fail(f_i).

Definition A.2.
Using the same notation as in Definition 3, we let I_m = { f ∈ K[x], fail(f) ≻ m }, and we let G_m be the set of least elements of I_m for ≺; it is a truncated Gröbner basis of I_m: G_m = min_≺ { g, g ∈ I_m }, S_m = Staircase(G_m).

Example A.1.
Let us go back to Example 1 with the sequence b = (C(i, j))_{(i, j)∈ℕ²} of binomial coefficients. Consider K[x, y] with the drl(y ≺ x) ordering, and m = x². The known part of the table is

y² | 0
y  | 0  1
1  | 1  1  1
   +---------
     1  x  x²

From this table, on the one hand, we can deduce that:
• since the table is not identically 0, there is no relation with leading monomial 1 valid up to x², hence 1 ∈ S_{x²};
• since [y + α] = α and [x (y + α)] = 1 + α, there is no relation with leading monomial y valid up to x y and thus x², hence y ∈ S_{x²};
• since [y (x + β y + α)] = 1 ≠ 0, there is no relation with leading monomial x valid up to x y and thus x², hence x ∈ S_{x²}.

On the other hand, we can check that:
• since [y²] = 0, the relation y² is valid up to y² and thus x², hence y² ∈ T \ S_{x²};
• since [x y − 1] = 0, the relation x y − 1 is valid up to x y and thus x², hence x y ∈ T \ S_{x²};
• since [x² − x] = 0, the relation x² − x is valid up to x², hence x² ∈ T \ S_{x²}.

Therefore, S_{x²} = {1, y, x}, max_|(S_{x²}) = {y, x} and min_|(T \ S_{x²}) = {y², x y, x²}. This is summed up in the following diagram.

y² ◦
y  •  ◦
1     •  ◦        ◦ : min_|(T \ S_{x²})    • : max_|(S_{x²})
      x  x²

Let us notice that many relations with respective leading monomials y², x y, x² actually suit. These would be y² − α x + β y + α, x y − (1 + α) x + β y + α and x² − (1 + α) x + β y + α. Furthermore, I_{x²} is not stable by addition: (x² − x), (x² − 2 x + 1) ∈ I_{x²} but (x² − x) − (x² − 2 x + 1) = x − 1 ∉ I_{x²} since fail(x − 1) = x y. Hence, I_{x²} is not an ideal of K[x, y].

For m = x³, with the following table

y³ | 0
y² | 0  0
y  | 0  1  2
1  | 1  1  1  1
   +------------
     1  x  x²  x³

we find that
• since [y²] = [y · y²] = [x y²] = 0, the relation y² is valid up to x y² and thus x³;
• since [x y − 1] = [y (x y − 1)] = 0 and [x (x y − 1)] ≠ 0, the relation x y − 1 fails at x² y. Yet, since [y] = [y · y] = 0 and [x y] ≠ 0, then by Proposition A.2, [x y − y − 1] = [y (x y − y − 1)] = 0 and [x (x y − y − 1)] vanishes as well. Hence, x y − y − 1 is valid up to x² y and thus x³;
• since [x² − x] = 0 and [y (x² − x)] ≠ 0, the relation x² − x fails at x² y. Likewise, since [x − 1] = 0 and [y (x − 1)] ≠ 0, then [x² − 2 x + 1] = 0 and [y (x² − 2 x + 1)] = 0. Furthermore, [x (x² − 2 x + 1)] = 0, so that x² − 2 x + 1 is valid up to x³.

Therefore, S_{x³} = {1, y, x}, max_|(S_{x³}) = {y, x} and min_|(T \ S_{x³}) = {y², x y, x²}. We can also check that these relations span the only valid relations with support in S_{x³} ∪ {y², x y, x²}.

y³
y² ◦
y  •  ◦
1     •  ◦
      x  x²

Although I_m is not an ideal in general, we have the following results:

Proposition A.3.
Using the notation of Definitions A.1 and A.2, I_m is closed under multiplication by elements of K[x], and for all monomials t, t′ such that t | t′:
(a) if t′ ∈ S_m, then t ∈ S_m;
(b) if t ∈ T \ S_m, then t′ ∈ T \ S_m.

Moreover, it is clear that the sequence (I_m)_{m∈T} is decreasing and that, if u is linear recurrent, then I = ⋂_{m∈T} I_m. Therefore, (S_m)_{m∈T} is increasing and its limit is S, the finite target staircase. Hence, for m big enough, S_m will be the target staircase. We will give an upper bound in Proposition A.6.

The following result gives an intrinsic characterization of S_m that is key in the iteration of the BMS algorithm.

Proposition A.4.
For all monomials m ∈ T, S_m = { fail(f)/lm(f), f ∉ I_m }.

Furthermore, let m⁺ be the successor of m and let s be a monomial in the staircase S_{m⁺}. Then, s was added at step m⁺, i.e. s ∉ S_m, if, and only if, s | m⁺ and m⁺/s ∈ S_{m⁺} \ S_m.

Proof. We shall prove the first assertion by double inclusion. If s = fail(f)/lm(f) for some f ∉ I_m, then for all g ∈ K[x] such that lm(g) = s, fail(g) ⪯ m; hence s ∉ lm(I_m) and s ∈ S_m.

The reverse inclusion is proved by induction on m. For m = 0, S_m = ∅ and there is nothing to do. Let us assume the inclusion is satisfied for a monomial m, and let s ∈ S_{m⁺}. On the one hand, if s ∈ S_m, then there exists f ∈ K[x] \ I_m ⊆ K[x] \ I_{m⁺} such that s = fail(f)/lm(f).

If, on the other hand, s ∈ S_{m⁺} \ S_m, then there exists a relation f ∈ K[x] such that lm(f) = s and m ≺ fail(f) ⪯ m⁺, hence fail(f) = m⁺ and s divides m⁺.

Let us assume that for all g ∈ K[x] with lm(g) = m⁺/s, we have fail(g) ⪯ m ≺ m⁺. Therefore, m⁺/s ∈ S_m and there exists h ∉ I_m such that fail(h)/lm(h) = m⁺/s. By Proposition A.2, there is α ∈ K such that fail(f − α h) ≻ m⁺. Since fail(h) ⪯ m ≺ m⁺, then lm(h) ⪯ s and lm(f − α h) = s, hence fail(f − α h)/lm(f − α h) ≻ m⁺/s. This contradicts the fact that m⁺/s ∈ S_m. Thus, there exists g ∈ K[x] with lm(g) = m⁺/s and fail(g) ⪰ m⁺. Let g be such a relation: since fail(f) = m⁺, we have [(m⁺/lm(g)) g] ≠ 0, hence fail(g) = m⁺. Therefore, fail(g)/lm(g) = m⁺/(m⁺/s) = s, so that s ∈ { fail(f)/lm(f), f ∉ I_{m⁺} }.

Now, we proved that s ∈ S_{m⁺} \ S_m implies s | m⁺ and m⁺/s ∈ S_{m⁺} \ S_m. This implication is clearly an equivalence.

From this proposition, it follows that, if m ∈ T and if m⁺ is its successor:

max_|(S_{m⁺}) = max_|( max_|(S_m) ∪ { m⁺/s, s ∈ min_|(T \ S_m) ∩ S_{m⁺} } )
(2)

Relation (2) allows us to construct, iterating on the monomial m, the set of relations G_m representing the truncated Gröbner basis of I_m. The relations g ∈ G_m are indexed by their leading monomials, which describe T \ S_m.

Remark A.5.
We can also construct another set, describing the edge of S_m, still denoted S_m, as there is a one-to-one correspondence between a staircase and its edge. The relations h ∈ S_m are indexed by their ratio fail(h)/lm(h) between their fail and their leading monomial, describing the full staircase of I_m.

When two relations h and h′ in S_m are such that fail(h)/lm(h) = fail(h′)/lm(h′), then we only need to keep one. Since the goal is to combine a relation of S_m with a relation failing at m⁺ to make a new one with a bigger shift, as in Proposition A.2, it is best to handle smaller polynomials. This yields Algorithm 5.

We saw that for m big enough, S_m will be the target staircase. We now give an upper bound.

Proposition A.6.
Let u be a linear recurrent sequence and let I be its ideal of relations. Let S be the staircase of I for ≺ and let s_max be the largest monomial in S. Then, for m ⪰ (s_max)², S_m = S.

Algorithm 5: The BMS algorithm.
Input:
A table u = (u_i)_{i∈ℕⁿ} with coefficients in K, a monomial ordering ≺ and a monomial M as the stopping condition.

Output:
A set G of relations generating I_M.

T := { m ∈ K[x], m ⪯ M }. // ordered for ≺
G := {1}. // the future Gröbner basis
S := ∅. // staircase edge, elements will be [h, fail(h)/lm(h)]
For all m ∈ T do
  S′ := S.
  For g ∈ G do
    If lm(g) | m then
      e := [ (m/lm(g)) g ]_u.
      If e ≠ 0 then S′ := S′ ∪ { [g/e, m/lm(g)] }.
  S′ := min_| { [h, fail(h)/lm(h)], h ∈ S′ }. // keep one relation per ratio, see Remark A.5
  G′ := Border(S′).
  For g′ ∈ G′ do
    Let g ∈ G be such that lm(g) | lm(g′).
    If lm(g′) ∤ m then g′ := (lm(g′)/lm(g)) g. // translates the relation
    Else if ∃ h ∈ S, m/lm(g′) | fail(h) then
      g′ := (lm(g′)/lm(g)) g − [ (m/lm(g)) g ]_u (lm(g′) fail(h))/(m lm(h)) h. // see Proposition A.2
    Else g′ := g.
  G := G′. S := S′.
Return G.

Let G be a minimal Gröbner basis of I for ≺ and let g_max be the largest leading monomial of G. Then, for m ⪰ s_max · max_≺(g_max, s_max), the BMS algorithm returns a minimal Gröbner basis of I for ≺.

Example A.2.
For the drl(y ≺ x) ordering and I = ⟨x^p, y^q⟩ with q > p ≥ 1, we have s_max = x^{p−1} y^{q−1} and g_max = y^q. Therefore, the right staircase is found at most at step m = x^{2p−2} y^{2q−2}, while the Gröbner basis is found at most at step x^{p−1} y^{q−1} · max_≺(x^{p−1} y^{q−1}, y^q), i.e. y^{2q−1} if p = 1 and x^{2p−2} y^{2q−2} otherwise.

From Propositions A.4 and A.6, we can deduce that S = { fail(f)/lm(f), f ∉ I }.

Example A.3.
We give the trace of the algorithm called on the binomial sequence b for the drl(y ≺ x) ordering up to the monomial x³ (hence visiting all the monomials of degree at most 3). To simplify the reading, whenever a relation succeeds in m or cannot be tested in m, we skip the updating part, as this relation remains the same. We start with the empty staircase S and the relation G = {1}.

For the monomial 1:
The relation g = 1 fails since [b_{0,0}] ≠ 0. Thus S′ = {[1, 1]}.
S′ is updated to {[1, 1]} and G′ = {y, x}.
For the relation g′ = y, y ∤ 1, thus g′ = y.
For the relation g′ = x, x ∤ 1, thus g′ = x.
We update G := G′ = {y, x} and S := S′ = {[1, 1]}.

For the monomial y:
The relation g = y succeeds since [b_{0,1}] = 0.
Nothing must be done for the relation g = x.
S′ is set to {[1, 1]} and G′ = {y, x}.
We set g′ = y and g′ = x.
We update G := G′ = {y, x} and S := S′ = {[1, 1]}.

For the monomial x:
Nothing must be done for the relation g = y.
The relation g = x fails since [b_{1,0}] ≠ 0. Thus S′ = {[1, 1], [x, 1]}.
S′ is set to {[1, 1]} and G′ = {y, x}.
We set g′ = y.
For the relation g′ = x, x | x and x/x | fail(1), hence g′ = x − 1.
We update G := G′ = {y, x − 1} and S := S′ = {[1, 1]}.

For the monomial y²:
The relation g = y succeeds since [b_{0,2}] = 0.
Nothing must be done for the relation g = x − 1.
S′ is set to {[1, 1]} and G′ = {y, x}.
We set g′ = y and g′ = x − 1.
We update G := G′ = {y, x − 1} and S := S′ = {[1, 1]}.

For the monomial x y:
The relation g = y fails since [b_{1,1}] ≠ 0. Thus S′ = {[1, 1], [y, x]}.
The relation g = x − 1 fails since [b_{1,1} − b_{0,1}] ≠ 0. Thus S′ = {[1, 1], [y, x], [x − 1, y]}.
S′ is set to {[y, x], [x − 1, y]} and G′ = {y², x y, x²}.
For the relation g′ = y², y² ∤ x y, thus g′ = y².
For the relation g′ = x y, x y | x y and (x y)/(x y) | fail(1), hence g′ = x y − 1.
For the relation g′ = x², x² ∤ x y, thus g′ = x² − x.
We update G := G′ = {y², x y − 1, x² − x} and S := S′ = {[y, x], [x − 1, y]}.

For the monomial x²:
Nothing must be done for the relation g = y².
Nothing must be done for the relation g = x y − 1.
The relation g = x² − x succeeds since [b_{2,0} − b_{1,0}] = 0.
S′ is set to {[y, x], [x − 1, y]} and G′ = {y², x y, x²}.
We set g′ = y², g′ = x y − 1 and g′ = x² − x.
We update G := G′ = {y², x y − 1, x² − x} and S := S′ = {[y, x], [x − 1, y]}.

For the monomial y³:
The relation g = y² succeeds since [b_{0,3}] = 0.
Nothing must be done for the relation g = x y − 1.
Nothing must be done for the relation g = x² − x.
S′ is set to {[y, x], [x − 1, y]} and G′ = {y², x y, x²}.
We set g′ = y², g′ = x y − 1 and g′ = x² − x.
We update G := G′ = {y², x y − 1, x² − x} and S := S′ = {[y, x], [x − 1, y]}.

For the monomial x y²:
The relation g = y² succeeds since [b_{1,2}] = 0.
The relation g = x y − 1 succeeds since [b_{1,2} − b_{0,1}] = 0.
Nothing must be done for the relation g = x² − x.
S′ is set to {[y, x], [x − 1, y]} and G′ = {y², x y, x²}.
We set g′ = y², g′ = x y − 1 and g′ = x² − x.
We update G := G′ = {y², x y − 1, x² − x} and S := S′ = {[y, x], [x − 1, y]}.

For the monomial x² y:
Nothing must be done for the relation g = y².
The relation g = x y − 1 fails since [b_{2,1} − b_{1,0}] ≠ 0. Thus S′ = {[y, x], [x − 1, y], [x y − 1, x]}.
The relation g = x² − x fails since [b_{2,1} − b_{1,1}] ≠ 0. Thus S′ = {[y, x], [x − 1, y], [x y − 1, x], [x² − x, y]}.
S′ is set to {[y, x], [x − 1, y]} and G′ = {y², x y, x²}.
We set g′ = y².
For the relation g′ = x y, x y | x² y and (x² y)/(x y) | fail(y), hence g′ = x y − y − 1.
For the relation g′ = x², x² | x² y and (x² y)/x² | fail(x − 1), hence g′ = x² − 2 x + 1.
We update G := G′ = {y², x y − y − 1, x² − 2 x + 1} and S := S′ = {[y, x], [x − 1, y]}.

For the monomial x³:
Nothing must be done for the relation g = y².
Nothing must be done for the relation g = x y − y − 1.
The relation g = x² − 2 x + 1 succeeds since [b_{3,0} − 2 b_{2,0} + b_{1,0}] = 0.
S′ is set to {[y, x], [x − 1, y]} and G′ = {y², x y, x²}.
We set g′ = y², g′ = x y − y − 1 and g′ = x² − 2 x + 1.
We update G := G′ = {y², x y − y − 1, x² − 2 x + 1} and S := S′ = {[y, x], [x − 1, y]}.

The algorithm returns the relations y², x y − y − 1 and x² − 2 x + 1, all three with a shift x.

A.2 A Linear Algebra interpretation of the BMS algorithm
In order to make the presentation of the BMS algorithm closer to that of the Scalar-FGLM algorithm, we propose to replace every evaluation using the [ ] operator with a matrix-vector product.

As stated above, given a monic relation f = lm(f) + Σ_{s∈S} α_s s, testing the shift of this relation by a monomial m is done with the bracket operator, i.e. testing whether [m f] = 0. Encoding f by its coefficient vector ~f, with coordinates α_s for s ∈ S and 1 for lm(f), this can also be done by testing whether the matrix-vector product

H_{{m}, S∪{lm(f)}} ~f = ( ··· [m s] ··· [m lm(f)] ) ~f

vanishes. The shift and the fail of a relation, i.e. Definition A.1, become as follows.

Definition A.3. Let f = lm(f) + Σ_{s∈S} α_s s be a polynomial. The monomial m is a shift of f if

H_{{1,...,m}, S∪{lm(f)}} ~f = 0,

where H_{{1,...,m}, S∪{lm(f)}} is the matrix whose rows are indexed by the monomials 1, ..., m, whose columns are indexed by S ∪ {lm(f)} and whose entry in row t and column s is [t s].

Let m⁺ be the successor of m; m⁺ lm(f) is the fail of f if

H_{{1,...,m,m⁺}, S∪{lm(f)}} ~f = (0, ..., 0, e)ᵀ with e ≠ 0.

We can also write another proof of Proposition A.2 with a matrix viewpoint.
Proof of Proposition A.2.
Let f₁ = lm(f₁) + Σ_{s∈S} α_s s and f₂ = lm(f₂) + Σ_{s∈S′} β_s s be monic. Let v⁻ be the predecessor of v and let S̃ = S ∪ S′ \ {lm(f₁), lm(f₂)}. Assuming lm(f₂) ≠ lm(f₁), we have

H_{{1,...,v⁻,v}, S̃∪{lm(f₂),lm(f₁)}} (~f₁ + c ~f₂) = (0, ..., 0, e₁ + c e₂)ᵀ.

It is now clear that the vector ~f₁ − (e₁/e₂) ~f₂ is in the kernel of this matrix. That is, the polynomial f₁ − (e₁/e₂) f₂ has a shift v.

Changing every evaluation into a matrix-vector product in the BMS algorithm yields the following presentation of the BMS algorithm, namely Algorithm 6.

Algorithm 6: Linear Algebra variant of the BMS algorithm.
Input:
A table u = (u_i)_{i∈ℕⁿ} with coefficients in K, a monomial ordering ≺ and a monomial M as the stopping condition.

Output:
A set G of relations generating I_M.

T := { m ∈ K[x], m ⪯ M }. // ordered for ≺
G := {1}. // the future Gröbner basis
S := ∅. // staircase edge, elements will be [h, fail(h)/lm(h)]
For all m ∈ T do
  S′ := S.
  For g ∈ G do
    If lm(g) | m then
      e := H_{{m/lm(g)}, supp(g)} ~g.
      If e ≠ 0 then S′ := S′ ∪ { [g/e, m/lm(g)] }.
  S′ := min_| { [h, fail(h)/lm(h)], h ∈ S′ }. // keep one relation per ratio, see Remark A.5
  G′ := Border(S′).
  For g′ ∈ G′ do
    Let g ∈ G be such that lm(g) | lm(g′).
    If lm(g′) ∤ m then g′ := (lm(g′)/lm(g)) g. // shifts the relation
    Else if ∃ h ∈ S, m/lm(g′) | fail(h) then
      g′ := (lm(g′)/lm(g)) g − ( H_{{m/lm(g)}, supp(g)} ~g ) (lm(g′) fail(h))/(m lm(h)) h. // see Prop. A.2
    Else g′ := g.
  G := G′. S := S′.
Return G.
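The matrix-vector tests of Definition A.3 and Algorithm 6 can be reproduced on the binomial table of Example A.1. The Python sketch below — an illustration with helper names of our own, not code from the paper — builds the multi-Hankel matrix H with rows indexed by the monomials of degree at most 3 and columns indexed by the support {1, y, x y}, and checks that the relation x y − y − 1 found in Example A.3 lies in its kernel.

```python
from math import comb

# binomial table b_{i,j} = C(i, j); math.comb already returns 0 when j > i
def b(i, j):
    return comb(i, j)

def multi_hankel(rows, cols):
    """H[t][s] = [t * s]: the table value at the sum of the exponent
    vectors of the row monomial t and the column monomial s."""
    return [[b(ti + si, tj + sj) for (si, sj) in cols] for (ti, tj) in rows]

def matvec(H, v):
    return [sum(h * c for h, c in zip(row, v)) for row in H]

# monomials are encoded by their exponent pairs (i, j), standing for x^i y^j
support = [(0, 0), (0, 1), (1, 1)]                            # 1, y, x y
rows = [(i, d - i) for d in range(4) for i in range(d + 1)]   # degree <= 3

H = multi_hankel(rows, support)
f = [-1, -1, 1]                                # coefficients of x y - y - 1
print(matvec(H, f))                            # all zeros: the relation is valid

H2 = multi_hankel(rows, [(0, 0), (1, 1)])      # support of x y - 1
print(matvec(H2, [-1, 1]))                     # nonzero in the row x: x y - 1 fails
```

Swapping in the candidate x y − 1 produces a nonzero entry in the row indexed by x: this is exactly the discrepancy e ≠ 0 that makes the relation fail at x² y in the trace of Example A.3.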