Fast Computation of Minimal Interpolation Bases in Popov Form for Arbitrary Shifts
Claude-Pierre Jeannerod, Vincent Neiger, Eric Schost, Gilles Villard
Claude-Pierre Jeannerod
Inria, Université de Lyon
Laboratoire LIP (CNRS, Inria, ENSL, UCBL)
[email protected]

Vincent Neiger
ENS de Lyon, Université de Lyon
Laboratoire LIP (CNRS, Inria, ENSL, UCBL)
[email protected]

Éric Schost
University of Waterloo
David R. Cheriton School of Computer Science
[email protected]

Gilles Villard
CNRS, Université de Lyon
Laboratoire LIP (CNRS, Inria, ENSL, UCBL)
[email protected]
ABSTRACT
We compute minimal bases of solutions for a general interpolation problem, which encompasses Hermite-Padé approximation and constrained multivariate interpolation, and has applications in coding theory and security.

This problem asks to find univariate polynomial relations between m vectors of size σ; these relations should have small degree with respect to an input degree shift. For an arbitrary shift, we propose an algorithm for the computation of an interpolation basis in shifted Popov normal form with a cost of O~(m^{ω−1} σ) field operations, where ω is the exponent of matrix multiplication and the notation O~(·) indicates that logarithmic terms are omitted.

Earlier works, in the case of Hermite-Padé approximation [34] and in the general interpolation case [18], compute non-normalized bases. Since for arbitrary shifts such bases may have size Θ(m²σ), the cost bound O~(m^{ω−1} σ) was feasible only with restrictive assumptions on the shift that ensure small output sizes. The question of handling arbitrary shifts with the same complexity bound was left open.

To obtain the target cost for any shift, we strengthen the properties of the output bases, and of those obtained during the course of the algorithm: all the bases are computed in shifted Popov form, whose size is always O(mσ). Then, we design a divide-and-conquer scheme. We recursively reduce the initial interpolation problem to sub-problems with more convenient shifts by first computing information on the degrees of the intermediate bases.

Keywords
M-Padé approximation; Hermite-Padé approximation; order basis; polynomial matrix; shifted Popov form.
ISSAC'16, July 19–22, 2016, Waterloo, ON, Canada. ACM ISBN. DOI: http://dx.doi.org/10.1145/2930889.2930928
1. INTRODUCTION

1.1 Problem and main result
We focus on the following interpolation problem from [31, 2]. For a field K and some positive integer σ, we have as input m vectors e_1, ..., e_m in K^{1×σ}, seen as the rows of a matrix E ∈ K^{m×σ}. We also have a multiplication matrix J ∈ K^{σ×σ} which specifies the multiplication of vectors e ∈ K^{1×σ} by polynomials p ∈ K[X] as p · e = e p(J). Then, we want to find K[X]-linear relations between these vectors, that is, some p = (p_1, ..., p_m) ∈ K[X]^m such that p · E = p_1 · e_1 + ... + p_m · e_m = 0. Such a p is called an interpolant for (E, J).

Hereafter, the matrix J is in Jordan canonical form: this assumption is satisfied in many interesting applications, as explained below. The notion of interpolant we consider is directly related to the one introduced in [31, 2]. Suppose that J has n Jordan blocks of dimensions σ_1 × σ_1, ..., σ_n × σ_n and with respective eigenvalues x_1, ..., x_n; in particular, σ = σ_1 + ... + σ_n. Then, one may identify K^σ with

F = K[X]/(X^{σ_1}) × ... × K[X]/(X^{σ_n}),

by mapping any f = (f_1, ..., f_n) in F to the vector e ∈ K^σ made from the concatenation of the coefficient vectors of f_1, ..., f_n. Over F, the K[X]-module structure on K^σ given by p · e = e p(J) becomes

p · f = (p(X + x_1) f_1 mod X^{σ_1}, ..., p(X + x_n) f_n mod X^{σ_n}).

Now, if (e_1, ..., e_m) ∈ K^{m×σ} is associated to (f_1, ..., f_m) ∈ F^m, with f_i = (f_{i,1}, ..., f_{i,n}) and f_{i,j} in K[X]/(X^{σ_j}) for all i, j, the relation p_1 · e_1 + ... + p_m · e_m = 0 means that for all j in {1, ..., n}, we have

p_1(X + x_j) f_{1,j} + ... + p_m(X + x_j) f_{m,j} = 0 mod X^{σ_j};

applying a translation by −x_j, this is equivalent to

p_1 f_{1,j}(X − x_j) + ... + p_m f_{m,j}(X − x_j) = 0 mod (X − x_j)^{σ_j}.

Thus, in terms of vector M-Padé approximation as in [31, 2], (p_1, ..., p_m) is an interpolant for (f_1, ..., f_m), x_1, ..., x_n, and σ_1, ..., σ_n.

The set of all interpolants for (E, J) is a free K[X]-module of rank m. We are interested in computing a basis of this module, represented as a matrix in K[X]^{m×m} and called an interpolation basis for (E, J). Its rows are interpolants for (E, J), and any interpolant for (E, J) can be written as a unique K[X]-linear combination of its rows.

Besides, we look for interpolants that have some type of minimal degree. Following [31, 34], for a nonzero p = [p_1, ..., p_m] ∈ K[X]^{1×m} and a shift s = (s_1, ..., s_m) ∈ Z^m, we define the s-degree of p as max_{1≤j≤m}(deg(p_j) + s_j). Up to a change of sign, this notion of s-degree is equivalent to the one in [3] and to the notion of defect from [1, Definition 3.1].

Then, the s-row degree of a matrix P ∈ K[X]^{k×m} of rank k is the tuple rdeg_s(P) = (d_1, ..., d_k) ∈ Z^k with d_i the s-degree of the i-th row of P. The s-leading matrix of P = [p_{ij}]_{i,j} is the matrix in K^{k×m} whose entry (i, j) is the coefficient of degree d_i − s_j of p_{ij}. Then, P is s-reduced if its s-leading matrix has rank k; see [3].

Our aim is to compute an s-minimal interpolation basis for (E, J), that is, one which is s-reduced: equivalently, it is an interpolation basis whose s-row degree, once written in nondecreasing order, is lexicographically minimal. This corresponds to Problem 1 below. In particular, an interpolant of minimal degree can be read off from an s-minimal interpolation basis for the uniform shift s = 0.

Problem 1 (Minimal interpolation basis).
Input:
• the base field K,
• the dimensions m and σ,
• a matrix E ∈ K^{m×σ},
• a Jordan matrix J ∈ K^{σ×σ},
• a shift s ∈ Z^m.
Output: an s-minimal interpolation basis for (E, J).

A well-known particular case of this problem is Hermite-Padé approximation, that is, the computation of order bases (or σ-bases, or minimal approximant bases), where J has 0 as its only eigenvalue. Previous work on this case includes [1, 14, 30, 34], with algorithms focusing on J with n blocks of identical size σ/n. For a shift s ∈ N^m with nonnegative entries, we write |s| for the sum of its entries. Then, in this context, the cost bound O~(m^{ω−1} σ) has been obtained under each of the following assumptions:

(H1) max(s) − min(s) ∈ O(σ/m) in [34, Theorem 5.3], and more generally |s − min(s)| ∈ O(σ) in [33, Section 4.1];
(H2) |max(s) − s| ∈ O(σ) in [34, Theorem 6.14].

These assumptions imply in particular that any s-minimal basis has size in O(mσ), where by size we mean the number of field elements used to represent the matrix.

An interesting example of a shift not covered by (H1) or (H2) is h = (0, σ, 2σ, ..., (m−1)σ), which is related to the Hermite form [3, Lemma 2.6]. In general, as detailed in Appendix A, one may assume without loss of generality that min(s) = 0, max(s) ∈ O(mσ), and |s| ∈ O(m²σ).

There are also applications of Problem 1 to multivariate interpolation, where J is not nilpotent anymore, and for which we have neither (H1) nor (H2), as we will see in Subsection 1.3. It was left as an open problem in [34, Section 7] to obtain algorithms with cost bound O~(m^{ω−1} σ) for such matrices J and for arbitrary shifts. In this paper, we solve this open problem.

An immediate challenge is that for an arbitrary shift s, the size of an s-minimal interpolation basis may be beyond our target cost: we show this in Appendix B with an example of Hermite-Padé approximation.
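For intuition on the setting above, the following sketch checks, over the stand-in field GF(97) and with helper names of our own choosing, that when J is a single nilpotent Jordan block the module action p · e = e p(J) computes the coefficients of p f mod X^σ, so that interpolants for (E, J) are exactly Hermite-Padé approximants:

```python
# A minimal check, over GF(97), that the action p . e = e p(J) with J a
# nilpotent Jordan block (ones on the superdiagonal) recovers Hermite-Pade
# approximation: e p(J) is the coefficient vector of p*f mod X^sigma.
# The field GF(97) and all function names here are our own illustration.
P_MOD = 97
SIGMA = 6

def polymulmod(p, f, sigma):
    """Product p*f truncated mod X^sigma, coefficients in GF(97)."""
    out = [0] * sigma
    for i, a in enumerate(p):
        for j, b in enumerate(f):
            if i + j < sigma:
                out[i + j] = (out[i + j] + a * b) % P_MOD
    return out

def jordan_block_action(e, p):
    """e p(J) for the sigma x sigma Jordan block J with eigenvalue 0:
    multiplying a row vector by J shifts its coefficients up by one."""
    acc = [0] * len(e)
    cur = e[:]                      # cur = e J^i
    for a in p:
        acc = [(x + a * y) % P_MOD for x, y in zip(acc, cur)]
        cur = [0] + cur[:-1]        # multiply by J: shift one position up
    return acc

f = [3, 1, 4, 1, 5, 9]              # f mod X^6; e is its coefficient vector
p = [2, 0, 7]                       # p = 2 + 7 X^2
assert jordan_block_action(f, p) == polymulmod(p, f, SIGMA)
```

In particular, p · e = 0 if and only if p f = 0 mod X^σ, matching the translation between interpolants and M-Padé approximation described above.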
Our answer is to compute a basis in s-Popov form: among its many interesting features, it can be represented using at most m(σ + 1) elements from K, and it is canonical: for every nonsingular A ∈ K[X]^{m×m} and s ∈ Z^m, there is a unique matrix P in s-Popov form which is left-unimodularly equivalent to A. We use the definition from [2, Section 7], phrased using the notion of pivot [19, Section 6.7.2].

Definition 1.1 (Pivot of a row).
Let p = [p_j]_j ∈ K[X]^{1×m} be a nonzero row vector and let s ∈ Z^m. The s-pivot index of p is the largest index j ∈ {1, ..., m} such that rdeg_s(p) = deg(p_j) + s_j; then, p_j and deg(p_j) are called the s-pivot entry and the s-pivot degree of p.

Definition 1.2 (Popov form).
Let P ∈ K[X]^{m×m} be nonsingular and let s ∈ Z^m. Then, P is said to be in s-Popov form if its s-pivot entries are monic and on its diagonal, and if in each column of P the nonpivot entries have degree less than the pivot entry.

We call s-Popov interpolation basis for (E, J) the unique interpolation basis for (E, J) which is in s-Popov form; in particular, it is an s-minimal one. For small values of σ, namely σ ∈ O(m), we gave in [18, Section 7] an algorithm which computes the s-Popov interpolation basis in O~(σ^{ω−1} m) operations for an arbitrary s [18, Theorem 1.4]. Hence, in what follows, we focus on the case m ∈ O(σ).

We use the convenient assumption that J is given to us as a list of eigenvalues and block sizes:

J = ((x_1, σ_{1,1}), ..., (x_1, σ_{1,r_1}), ..., (x_t, σ_{t,1}), ..., (x_t, σ_{t,r_t})),

for some pairwise distinct eigenvalues x_1, ..., x_t, with r_1 ≥ ... ≥ r_t and σ_{i,1} ≥ ... ≥ σ_{i,r_i} for all i; we say that this representation is standard.

Theorem 1.3.
Assuming that J ∈ K^{σ×σ} is a Jordan matrix given by a standard representation, there is a deterministic algorithm which solves Problem 1 using

O(m^{ω−1} M(σ) log(σ) log(σ/m)) if ω > 2,
O(m M(σ) log(σ) log(σ/m) log(m)) if ω = 2

operations in K and returns the s-Popov interpolation basis for (E, J).

In this result, M(·) is such that polynomials of degree at most d in K[X] can be multiplied using M(d) operations in K, and M(·) satisfies the super-linearity properties of [13, Chapter 8]. It follows from [8] that M(d) can be taken in O(d log(d) log(log(d))). The exponent ω is so that we can multiply m × m matrices in O(m^ω) ring operations on any ring, the best known bound being ω < 2.38 [11, 22].

Compared to our work in [18], our algorithm here has two key new features:
• it supports arbitrary shifts with a cost of O~(m^{ω−1} σ);
• it computes the basis in s-Popov form.
To the best of our knowledge, no algorithm for Problem 1 with cost O~(m^{ω−1} σ) was known previously for arbitrary shifts, even for the specific case of order basis computation.

If J is given as an arbitrary list ((x_1, σ_1), ..., (x_n, σ_n)), we can reorder it (and permute the columns of E accordingly) to obtain an equivalent standard representation in time O(M(σ) log(σ)) [5, Proposition 12]; if K is equipped with an order, and if we assume that comparisons take unit time, this can of course be done in time O(σ log(σ)).

Several previous algorithms for order basis computation, such as those in [1, 14], follow a divide-and-conquer scheme inspired by the Knuth-Schönhage-Moenck algorithm [20, 29, 23]. This paper builds on our previous work in [18], where we extended this recursive approach to more general interpolation problems. However, the main algorithm in [18] does not handle an arbitrary shift s with a satisfactory complexity; here, we use it as a black box, after showing how to reduce the problem to a new one with suitable shift.

Let E, J, and s be our input, and write J^{(1)} and J^{(2)} for the ⌈σ/2⌉ × ⌈σ/2⌉ and ⌊σ/2⌋ × ⌊σ/2⌋ leading and trailing diagonal blocks of J. First, compute an s-minimal interpolation basis P^{(1)} for J^{(1)} and the first ⌈σ/2⌉ columns of E; then, compute the last ⌊σ/2⌋ columns E^{(2)} of the residual P^{(1)} · E; then, compute a t-minimal interpolation basis P^{(2)} for (E^{(2)}, J^{(2)}) with t = rdeg_s(P^{(1)}); finally, return the matrix product P^{(2)} P^{(1)}. This approach allows one to solve Problem 1 using O~(m^ω σ) operations in K. In the case of Hermite-Padé approximation, this is the divide-and-conquer algorithm in [1]. Besides, an s-minimal basis computed by this method has degree at most σ and thus size in O(m²σ), and there are indeed instances of Problem 1 for which this size reaches Θ(m²σ). In Appendix B, we show such an instance for the algorithm in [1], in the case of Hermite-Padé approximation.

It is known that the average degree of the rows of any s-minimal interpolation basis is at most (σ + ξ)/m, where ξ = |s − min(s)| [31, Theorem 4.1]. In [18], focusing on the case where ξ is small compared to σ, and preserving such a property in recursive calls via changes of shifts, we obtained the cost bound

O(m^{ω−1} M(σ) log(σ) log(σ/m) + m^{ω−1} M(ξ) log(ξ/m))   (1)

to solve Problem 1; this cost is for ω > 2, and a similar one holds for ω = 2, both being in O~(m^{ω−1}(σ + ξ)). The fundamental reason for this kind of improvement over O~(m^ω σ), already seen with [34], is that one controls the average row degree of the bases P^{(2)} and P^{(1)}, and of their product P^{(2)} P^{(1)}.

This result is O~(m^{ω−1} σ) for ξ in O(σ). The main difficulty to extend it to any shift s is to control the size of the computed bases: the Hermite-Padé example pointed out above corresponds to ξ = Θ(mσ) and leads to an output of size Θ(m²σ) for the algorithm of [18] as well.

The key ingredient to control this size is to work with bases in s-Popov form: for any s, the s-Popov interpolation basis P for (E, J) has average column degree at most σ/m and size at most m(σ + 1), as detailed in Section 2.

Now, suppose that we have computed recursively the bases P^{(1)} and P^{(2)} in s- and t-Popov form, respectively; we want to output the s-Popov form P of P^{(2)} P^{(1)}. In general, this product is not normalized and may have size Θ(m²σ): its computation is beyond our target cost. Thus, one main idea is that we will not rely on polynomial matrix multiplication to combine the bases obtained recursively; instead, we use a minimal interpolation basis computation for a shift that has good properties, as explained below.

An important remark is that if we know a priori the column degree δ of P, then the problem becomes easier. This idea was already used in algorithms for the Hermite form H of a polynomial matrix [15, 33], which first compute the column degree δ of H, and then obtain H as a submatrix of some minimal nullspace basis for a shift involving −δ.

In Section 4, we study the problem of computing the s-Popov interpolation basis P for (E, J) having its column degree δ as an additional input. We show that this reduces to the computation of a d-minimal interpolation basis R with the specific shift d = −δ.
The properties of this shift d allow us first to compute R in O~(m^{ω−1} σ) operations using the partial linearization framework from [30, Section 3] and the minimal interpolation basis algorithm in [18, Section 3], and second to easily retrieve P from R.

Still, in general we do not know δ. We will thus compute it, relying on a variation of the divide-and-conquer strategy at the beginning of this subsection. We stop the recursion as soon as σ ≤ m, in which case we do not need δ to achieve efficiency: the algorithm from [18, Section 7] computes the s-Popov interpolation basis in O~(σ^{ω−1} m) operations for any s [18, Theorem 1.4]. Then, we show in Section 3 that from P^{(1)} and P^{(2)} computed recursively in shifted Popov form, we can obtain δ for free. Finally, instead of considering P^{(2)} P^{(1)}, we use the knowledge of δ to compute the basis P from scratch as explained in the previous paragraph. This summarizes our main algorithm, which is presented in Section 2.

As a particular case of Problem 1, when all the eigenvalues of J are zero, we obtain the following complexity result about order basis computation [34, Definition 2.2].

Theorem.
Let m, n ∈ Z_{>0}, let (σ_1, ..., σ_n) ∈ Z_{>0}^n, let s ∈ Z^m, and let F ∈ K[X]^{m×n} with its j-th column F_{∗,j} of degree less than σ_j. The unique basis P ∈ K[X]^{m×m} in s-Popov form of the K[X]-module of approximants

{ p ∈ K[X]^{1×m} | p F_{∗,j} = 0 mod X^{σ_j} for each j }

can be computed deterministically using

O(m^{ω−1} M(σ) log(σ) log(σ/m)) if ω > 2,
O(m M(σ) log(σ) log(σ/m) log(m)) if ω = 2

operations in K, where σ = σ_1 + ... + σ_n.

Previous work on this problem includes [1, 14, 30, 34, 18], mostly with identical orders σ_1 = ... = σ_n; an interesting particular case is Hermite-Padé approximation with n = 1. To simplify matters, for all our comparisons, we consider ω > 2. For order basis computation with σ_1 = ... = σ_n and n ≤ m, the cost bound O(m^ω M(σ/m) log(σ/n)) was achieved in [34] under either of the assumptions (H1) and (H2) on the shift. Still, the corresponding algorithm returns a basis P which is only s-reduced, and because both the shift s and the degrees in P may be unbalanced, one cannot directly rely on the fastest known normalization algorithm [28] to compute the s-Popov form of P within the target cost.

Another application of Problem 1 is a multivariate interpolation problem that arises for example in the first step of algorithms for the list-decoding of Parvaresh-Vardy codes [26] and of folded Reed-Solomon codes [16], as well as in robust Private Information Retrieval [12]. The bivariate case corresponds to the interpolation steps of Kötter and Vardy's soft-decoding [21] and Guruswami and Sudan's list-decoding [17] algorithms for Reed-Solomon codes.

Given a set of points in K^{r+1} and associated multiplicities, this problem asks to find a multivariate polynomial Q(X, Y_1, ..., Y_r) such that: (a) Q has prescribed exponents for the Y variables, so that the problem can be linearized with respect to Y, leaving us with a linear algebra problem over K[X]; (b) Q vanishes at all the given points with their multiplicities, inducing a structure of K[X]-module on the set of solutions; (c) Q has some type of minimal weighted degree, which can be seen as the minimality of the shifted degree of the vector over K[X] that represents Q.

Following the coding theory context [17, 26], given a point (x, y) ∈ K × K^r and a set of exponents µ ⊂ N^{r+1}, we say that the polynomial Q(X, Y) ∈ K[X, Y_1, ..., Y_r] vanishes at (x, y) with multiplicity support µ if the shifted polynomial Q(X + x, Y + y) has no monomial with exponent in µ. We will only consider supports that are stable under division, meaning that if (γ_0, γ_1, ..., γ_r) is in µ, then any (γ'_0, γ'_1, ..., γ'_r) with γ'_j ≤ γ_j for all j is also in µ.

Now, given a set of exponents Γ ⊂ N^r, we represent Q(X, Y) = Σ_{γ∈Γ} p_γ Y^γ as the row p = [p_γ]_{γ∈Γ} ∈ K[X]^{1×m}, where m is the cardinality of Γ. Again, we assume that the exponent set Γ is stable under division; then, the set of solutions is a free K[X]-module of rank m. In the mentioned applications, we typically have Γ = {(γ_1, ..., γ_r) ∈ N^r | γ_1 + ... + γ_r ≤ ℓ} for an integer ℓ called the list-size parameter.

Besides, we are given some weights w = (w_1, ..., w_r) ∈ N^r on the variables Y = Y_1, ..., Y_r, and we are looking for Q(X, Y) which has minimal w-weighted degree, that is, the degree in X of the polynomial

Q(X, X^{w_1} Y_1, ..., X^{w_r} Y_r) = Σ_{γ∈Γ} p_γ X^{γ_1 w_1 + ... + γ_r w_r} Y_1^{γ_1} ... Y_r^{γ_r}.

This exactly requires that the s-degree of p = [p_γ]_γ be minimal, for s = [γ_1 w_1 + ... + γ_r w_r]_γ. We note that it is sometimes important, for example in [12], to return a whole s-minimal interpolation basis and not only one interpolant of small s-degree.

Problem 2 (Multivariate interpolation).
Input:
• number of Y variables r > 0,
• set Γ ⊂ N^r of cardinality m, stable under division,
• pairwise distinct points {(x_k, y_k) ∈ K × K^r}_{1≤k≤p},
• supports {µ_k ⊂ N^{r+1}}_{1≤k≤p}, stable under division,
• a shift s ∈ Z^m.
Output: a matrix P ∈ K[X]^{m×m} such that
• the rows of P form a basis of the K[X]-module

{ p = [p_γ]_{γ∈Γ} ∈ K[X]^{1×m} | Σ_{γ∈Γ} p_γ(X) Y^γ vanishes at (x_k, y_k) with support µ_k for 1 ≤ k ≤ p },

• P is s-reduced.

For more details about the reduction from Problem 2 to Problem 1, explaining how to build the input matrices (E, J) with J a Jordan matrix in standard representation, we refer the reader to [18, Subsection 2.4]. In particular, the dimension σ is the sum of the cardinalities of the multiplicity supports. In the mentioned applications to coding theory, we have m = binom(r + ℓ, r) where ℓ is the list-size parameter; and σ is the so-called cost in the soft-decoding context [21, Section III], that is, the number of linear equations when linearizing the problem over K. As a consequence of Theorem 1.3, we obtain the following complexity result.

Theorem.
Let σ = Σ_{1≤k≤p} card(µ_k). There is a deterministic algorithm which solves Problem 2 using

O(m^{ω−1} M(σ) log(σ) log(σ/m)) if ω > 2,
O(m M(σ) log(σ) log(σ/m) log(m)) if ω = 2

operations in K, and returns the unique basis of solutions which is in s-Popov form.

Under the assumption that the x_k are pairwise distinct, the cost bound O(m^{ω−1} M(σ) log(σ)) was achieved for an arbitrary shift using fast structured linear algebra [9, Theorems 1 and 2], following work by [25, 27, 32]. However, the corresponding algorithm is randomized and returns only one interpolant of small s-degree. For a broader overview of previous work on this problem, we refer the reader to the introductory sections of [4, 9] and to [18, Section 2].

The term O(m^{ω−1} M(ξ) log(ξ/m)) reported in (1) for the cost of the algorithm of [18] can be neglected if ξ ∈ O(σ); this is for instance satisfied in the context of bivariate interpolation for soft- or list-decoding of Reed-Solomon codes [18, Sections 2.5 and 2.6]. However, we do not have this bound on ξ in the list-decoding of Parvaresh-Vardy codes and folded Reed-Solomon codes and in Private Information Retrieval. Thus, in these cases our algorithm achieves the best known cost bound, improving upon [7, 6, 10, 12, 18].
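To make the weighted-degree shift of this subsection concrete, here is a small sketch for the typical exponent set Γ = {γ : γ_1 + ... + γ_r ≤ ℓ}. The function names and the choice of parameters are ours, for illustration only:

```python
# Building the shift s = [gamma_1 w_1 + ... + gamma_r w_r]_gamma used in the
# coding-theory applications, for the typical exponent set
# Gamma = {gamma in N^r : |gamma| <= l}. Names and parameters are ours.
from itertools import product
from math import comb

def exponent_set(r, l):
    """Gamma = {gamma in N^r : gamma_1 + ... + gamma_r <= l},
    which is stable under division."""
    return sorted(g for g in product(range(l + 1), repeat=r) if sum(g) <= l)

def weighted_shift(gamma_set, w):
    """Shift entry for gamma: the w-weighted degree contribution of Y^gamma."""
    return [sum(gi * wi for gi, wi in zip(g, w)) for g in gamma_set]

# list-decoding-style parameters: r = 2 variables, list-size l = 2
G = exponent_set(2, 2)
assert len(G) == comb(2 + 2, 2)        # m = binom(r + l, r) = 6
s = weighted_shift(G, (3, 5))          # illustrative weights w = (3, 5)
assert min(s) == 0                     # the constant term gamma = 0 has shift 0
```

Note that such shifts are typically far from uniform, which is precisely the regime where neither (H1) nor (H2) holds.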
2. FAST POPOV INTERPOLATION BASIS
In this section, we present our main result, Algorithm 1. It relies on three subroutines; two of them are from [18], while the third is a key new ingredient, detailed in Section 4.

• LinearizationMIB [18, Algorithm 9] solves the base case σ ≤ m using linear algebra over K. The inputs are E, J, s, as well as an integer for which we can take the first power of two greater than or equal to σ.
• ComputeResiduals [18, Algorithm 5] (with an additional pre-processing detailed at the end of Section 4) computes the residual P^{(1)} · E from the first basis P^{(1)} obtained recursively.
• KnownMinDegMIB, detailed in Section 4, computes the s-Popov interpolation basis when one knows a priori the s-minimal degree of (E, J) (see below).

In what follows, by s-minimal degree of (E, J) we mean the tuple of degrees of the diagonal entries of the s-Popov interpolation basis P for (E, J). Because P is in s-Popov form, this is also the column degree of P, and the sum of these degrees is deg(det(P)). As a consequence, using Theorem 4.1 in [31] (or following the lines of [19] and [2]), we obtain the following lemma, which implies in particular that the size of P is at most m(σ + 1).

Lemma 2.1. Let E ∈ K^{m×σ}, J ∈ K^{σ×σ}, s ∈ Z^m, and let (δ_1, ..., δ_m) be the s-minimal degree of (E, J). Then, we have δ_1 + ... + δ_m ≤ σ.

Algorithm 1: PopovMIB
Input:
• a matrix E ∈ K^{m×σ},
• a Jordan matrix J ∈ K^{σ×σ} in standard representation,
• a shift s ∈ Z^m.
Output:
• the s-Popov interpolation basis P for (E, J),
• the s-minimal degree δ = (δ_1, ..., δ_m) of (E, J).

1. If σ ≤ m, return LinearizationMIB(E, J, s, 2^{⌈log_2(σ)⌉})
2. Else
  a. E^{(1)} ← first ⌈σ/2⌉ columns of E
  b. (P^{(1)}, δ^{(1)}) ← PopovMIB(E^{(1)}, J^{(1)}, s)
  c. E^{(2)} ← last ⌊σ/2⌋ columns of P^{(1)} · E = ComputeResiduals(J, P^{(1)}, E)
  d. (P^{(2)}, δ^{(2)}) ← PopovMIB(E^{(2)}, J^{(2)}, s + δ^{(1)})
  e. P ← KnownMinDegMIB(E, J, s, δ^{(1)} + δ^{(2)})
  f. Return (P, δ^{(1)} + δ^{(2)})

Taking for granted the results in the next sections, we now prove our main theorem.

Proof of Theorem 1.3.
For the case σ ≤ m, the correctness and the cost bound of Algorithm 1 both follow from [18, Theorem 1.4]: it uses O(σ^{ω−1} m + σ^ω log(σ)) operations (with an extra log(σ) factor if ω = 2).

Now, we consider the case σ > m. Using the notation in the algorithm, assume that P^{(1)} is the s-Popov interpolation basis for (E^{(1)}, J^{(1)}), and P^{(2)} is the t-Popov interpolation basis for (E^{(2)}, J^{(2)}), where t = s + δ^{(1)} = rdeg_s(P^{(1)}), and δ^{(1)} and δ^{(2)} are the s- and (s + δ^{(1)})-minimal degrees of (E^{(1)}, J^{(1)}) and (E^{(2)}, J^{(2)}), respectively.

We claim that P^{(2)} P^{(1)} is s-reduced: this will be proved in Lemma 3.2. Let us then prove that P^{(2)} P^{(1)} is an interpolation basis for (E, J). Let p ∈ K[X]^{1×m} be an interpolant for (E, J). Since J is upper triangular, p is in particular an interpolant for (E^{(1)}, J^{(1)}), so there exists v ∈ K[X]^{1×m} such that p = v P^{(1)}. Besides, we have P^{(1)} · E = [0 | E^{(2)}], so that 0 = p · E = v P^{(1)} · E = [0 | v · E^{(2)}], and thus v · E^{(2)} = 0. Then, there exists w ∈ K[X]^{1×m} such that v = w P^{(2)}, which gives p = w P^{(2)} P^{(1)}.

In particular, the s-Popov interpolation basis for (E, J) is the s-Popov form of P^{(2)} P^{(1)}. Thus, Lemma 3.2 combined with Lemma 3.3 will show that the s-minimal degree of (E, J) is δ^{(1)} + δ^{(2)}. As a result, Proposition 4.3 states that Step e correctly computes the s-Popov interpolation basis for (E, J).

Concerning the cost bound, the recursion stops when σ ≤ m, and thus the algorithm uses O(m^ω log(m)) operations at the base case (with an extra log(m) factor if ω = 2). The depth of the recursion is O(log(σ/m)); we have two recursive calls in dimensions m × σ/2, and two calls to subroutines with cost bounds given in Corollary 4.5 and Proposition 4.3, respectively. The conclusion follows from the super-linearity properties of M(·).
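As a concrete point of reference for what these recursive calls compute, here is the classical iterative (quadratic-time) computation of a shifted minimal approximant basis, i.e., the order-basis special case of Problem 1 where J is nilpotent. This is a simplified rendering, over the stand-in field GF(97) and with our own function names, in the spirit of the iterative algorithms of [31, 1]; it is not the fast algorithm of this paper:

```python
# Iterative s-minimal approximant basis (order basis) over GF(97): order
# conditions are treated one at a time, and at each step the row of minimal
# current s-degree is used as pivot. A quadratic-time sketch (names are ours).
P_MOD = 97

def polymul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % P_MOD
    return out

def order_basis(F, sigma, s):
    """Basis of {p in K[X]^(1 x m) : p . F = 0 mod X^sigma} with small
    s-row degrees, for a column F of m series given to precision sigma."""
    m = len(F)
    P = [[[1 if i == j else 0] for j in range(m)] for i in range(m)]  # identity
    d = list(s)                                   # current s-row degrees
    for k in range(sigma):
        # discrepancy: coefficient of X^k in each row's residual p_i . F
        c = [sum(polymul(P[i][j], F[j])[k] for j in range(m)
                 if k < len(polymul(P[i][j], F[j]))) % P_MOD
             for i in range(m)]
        nz = [i for i in range(m) if c[i]]
        if not nz:
            continue
        pi = min(nz, key=lambda i: d[i])          # pivot: minimal s-degree
        for i in nz:
            if i != pi:                           # cancel the discrepancy
                f = (c[i] * pow(c[pi], P_MOD - 2, P_MOD)) % P_MOD
                P[i] = [[(a - f * b) % P_MOD
                         for a, b in zip(pij + [0] * len(ppj),
                                         ppj + [0] * len(pij))]
                        for pij, ppj in zip(P[i], P[pi])]
        P[pi] = [[0] + q for q in P[pi]]          # multiply pivot row by X
        d[pi] += 1
    return P

F = [[3, 1, 4, 1], [1, 5, 9, 2]]                  # two series mod X^4
B = order_basis(F, 4, [0, 0])
for row in B:                                     # every row is an approximant
    r = [0] * 4
    for pij, fj in zip(row, F):
        pr = polymul(pij, fj)
        r = [(a + (pr[t] if t < len(pr) else 0)) % P_MOD
             for t, a in enumerate(r)]
    assert r == [0, 0, 0, 0]
```

Each of the σ steps touches every row, hence the quadratic behavior that the divide-and-conquer algorithms discussed in this paper avoid.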
3. OBTAINING THE MINIMAL DEGREE FROM RECURSIVE CALLS
In this section, we show that the s-minimal degree of (E, J) can be deduced for free from the two bases computed recursively as in Algorithm 1. To do this, we actually prove a slightly more general result about the degrees of the s-pivot entries of so-called weak Popov matrix forms [24].

Definition 3.1 (Weak Popov form, pivot degree).
Let P ∈ K[X]^{m×m} be nonsingular and let s ∈ Z^m. Then, P is said to be in s-weak Popov form if the s-pivot indices of its rows are pairwise distinct; P is said to be in s-diagonal weak Popov form if its s-pivot entries are on its diagonal. If P is in s-weak Popov form, the s-pivot degree of P is the tuple (δ_1, ..., δ_m) where for j ∈ {1, ..., m}, δ_j is the s-pivot degree of the row of P which has s-pivot index j.

We recall from Section 1 that for P ∈ K[X]^{k×m}, its s-leading matrix lm_s(P) ∈ K^{k×m} is formed by the coefficients of degree 0 of X^{−d} P X^{s}, where d = rdeg_s(P) and X^{s} stands for the diagonal matrix with entries X^{s_1}, ..., X^{s_m}. Then, a nonsingular P ∈ K[X]^{m×m} is in s-diagonal weak Popov form with s-pivot degree δ if and only if lm_s(P) is lower triangular and invertible and rdeg_s(P) = s + δ.

For example, at all stages of the algorithms in [31, 1, 18] for Problem 1 (as well as [14] if avoiding row permutations at the base case of the recursion), the computed bases are in shifted diagonal weak Popov form. This is due to the compatibility of this form with matrix multiplication, as stated in the next lemma.

Lemma 3.2.
Let s ∈ Z^m, let P^{(1)} ∈ K[X]^{m×m} be in s-diagonal weak Popov form with s-pivot degree δ^{(1)}, let t = s + δ^{(1)} = rdeg_s(P^{(1)}), and let P^{(2)} ∈ K[X]^{m×m} be in t-diagonal weak Popov form with t-pivot degree δ^{(2)}. Then, P^{(2)} P^{(1)} is in s-diagonal weak Popov form with s-pivot degree δ^{(1)} + δ^{(2)}.

Proof.
By the predictable-degree property [19, Theorem 6.3-13], we have rdeg_s(P^{(2)} P^{(1)}) = rdeg_t(P^{(2)}) = t + δ^{(2)} = s + δ^{(1)} + δ^{(2)}. The result follows since lm_s(P^{(2)} P^{(1)}) = lm_t(P^{(2)}) lm_s(P^{(1)}) is lower triangular and invertible.

For matrices in s-Popov form, the s-pivot degree coincides with the column degree: in particular, the s-minimal degree of (E, J) is the s-pivot degree of the s-Popov interpolation basis for (E, J). With the notation of Algorithm 1, the previous lemma proves that the s-pivot degree of P^{(2)} P^{(1)} is δ^{(1)} + δ^{(2)}. In the rest of this section, we prove that the s-Popov form of P^{(2)} P^{(1)} has the same s-pivot degree as P^{(2)} P^{(1)}. Consequently, the s-minimal degree of (E, J) is δ^{(1)} + δ^{(2)} and thus can be found from P^{(2)} and P^{(1)} without computing their product.

It is known that left-unimodularly equivalent s-reduced matrices have the same s-row degree up to permutation [19, Lemma 6.3-14]. Here, we prove that the s-pivot degree is invariant among left-unimodularly equivalent matrices in s-weak Popov form.

Lemma 3.3. Let s ∈ Z^m and let P and Q in K[X]^{m×m} be two left-unimodularly equivalent nonsingular polynomial matrices in s-weak Popov form. Then P and Q have the same s-pivot degree.

Proof.
Since row permutations preserve both the s-pivot degrees and left-unimodular equivalence, we can assume that P and Q are in s-diagonal weak Popov form. The s-pivot degrees of P and Q are then rdeg_s(P) − s and rdeg_s(Q) − s, and it remains to check that rdeg_s(P) = rdeg_s(Q).

For any nonsingular W ∈ K[X]^{m×m} in s-weak Popov form, we have |rdeg_s(W)| = deg(det(W)) + |s| [19, Section 6.3.2]. Thus, if W is furthermore comprised entirely of rows in the K[X]-row space of P (that is, W is a left multiple of P), then we must have |rdeg_s(W)| ≥ |rdeg_s(P)|.

To arrive at a contradiction, suppose there exists a row index i such that the s-degree of P_{i,∗} differs from that of Q_{i,∗}, and without loss of generality assume that the s-degree of Q_{i,∗} is strictly less than that of P_{i,∗}. Then the matrix W obtained from P by replacing the i-th row of P with Q_{i,∗} is in s-diagonal weak Popov form. This is a contradiction, since |rdeg_s(W)| < |rdeg_s(P)| and Q_{i,∗} is in the K[X]-row space of P, for Q is left-unimodularly equivalent to P.

In particular, any nonsingular matrix in s-weak Popov form has the same s-pivot degree as its s-Popov form, which proves our point about the s-Popov form of P^{(2)} P^{(1)}.
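Lemma 3.2 can be checked on a small instance. The sketch below, over the stand-in field GF(97) and with helper names of our own, multiplies a t-diagonal weak Popov matrix by an s-diagonal weak Popov one and verifies that the pivot degrees add up:

```python
# Numerical check of Lemma 3.2 over GF(97): for P1 in s-diagonal weak Popov
# form with pivot degree delta1 and P2 in t-diagonal weak Popov form
# (t = s + delta1) with pivot degree delta2, the product P2 P1 is in
# s-diagonal weak Popov form with pivot degree delta1 + delta2.
P_MOD = 97

def deg(p):
    d = len(p) - 1
    while d >= 0 and p[d] % P_MOD == 0:
        d -= 1
    return d if d >= 0 else float("-inf")

def pivot(row, s):
    """(s-pivot index, s-pivot degree): largest j attaining the s-degree."""
    d = max(deg(pj) + sj for pj, sj in zip(row, s))
    j = max(j for j, (pj, sj) in enumerate(zip(row, s)) if deg(pj) + sj == d)
    return j, deg(row[j])

def matmul(A, B):
    """Polynomial matrix product over GF(97) (degrees stay below 16 here)."""
    m, n, l = len(A), len(B[0]), len(B)
    out = [[[0] * 16 for _ in range(n)] for _ in range(m)]
    for i in range(m):
        for k in range(l):
            for j in range(n):
                for a, ca in enumerate(A[i][k]):
                    for b, cb in enumerate(B[k][j]):
                        out[i][j][a + b] = (out[i][j][a + b] + ca * cb) % P_MOD
    return out

s = [0, 1]
P1 = [[[4, 0, 1], [2]], [[0, 3], [5, 1]]]    # [[X^2+4, 2], [3X, X+5]]
assert [pivot(r, s) for r in P1] == [(0, 2), (1, 1)]   # delta1 = (2, 1)
t = [s[0] + 2, s[1] + 1]                     # t = s + delta1 = (2, 2)
P2 = [[[1, 1], [6]], [[2], [7, 0, 1]]]       # [[X+1, 6], [2, X^2+7]]
assert [pivot(r, t) for r in P2] == [(0, 1), (1, 2)]   # delta2 = (1, 2)
prod = matmul(P2, P1)
# s-pivot degree of P2 P1 is delta1 + delta2 = (3, 3)
assert [pivot(r, s) for r in prod] == [(0, 3), (1, 3)]
```

The pivot-index convention ("largest j attaining the s-degree") follows Definition 1.1.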
4. COMPUTING INTERPOLATION BASES WITH KNOWN MINIMAL DEGREE
In this section, we propose an efficient algorithm for computing the s-Popov interpolation basis P for (E, J) when the s-minimal degree δ of (E, J) is known a priori. First, we show that the shift d = −δ leads to the same d-Popov interpolation basis P as the initial shift s. Then, we prove that P can be easily recovered from any interpolation basis which is simply d-reduced. The following lemma extends [28, Lemmas 15 and 17] to the case of an arbitrary shift s.

Lemma 4.1.
Let s ∈ Z^m, and let P ∈ K[X]^{m×m} be in s-Popov form with column degree δ = (δ_1, ..., δ_m). Then P is also in d-Popov form for d = (−δ_1, ..., −δ_m), and we have rdeg_d(P) = (0, ..., 0). In particular, for any matrix R ∈ K[X]^{m×m} which is unimodularly equivalent to P and d-reduced, R has column degree δ, and P = lm_d(R)^{−1} R.

Proof.
Let us denote P = [p_{ij}]_{i,j}, and let i ∈ {1, ..., m}. Since P is in s-Popov form, it is enough to prove that the d-pivot entries of the rows of P are on its diagonal. We have deg(p_{ij}) < deg(p_{jj}) = δ_j for all j ≠ i, and deg(p_{ii}) = δ_i. Then, the i-th row of P has d-pivot index i and d-degree 0. Thus P is in d-Popov form with d-row degree (0, ..., 0).

Now, let R be a d-reduced matrix left-unimodularly equivalent to P. Then, rdeg_d(R) = rdeg_d(P) = (0, ..., 0), so that we can write R = lm_d(R) X^{δ} + Q with the j-th column of Q of degree less than δ_j. In particular, since lm_d(R) is invertible, the column degree of R is δ. Besides, we obtain lm_d(R)^{−1} R = X^{δ} + lm_d(R)^{−1} Q, and the j-th column of lm_d(R)^{−1} Q has degree less than δ_j. Thus lm_d(R)^{−1} R is in d-Popov form and unimodularly equivalent to P, hence equal to P.

In particular, if δ is the s-minimal degree of (E, J) and d = −δ, any d-minimal interpolation basis R for (E, J) has size at most m² + m|δ|, which for σ > m is in O(mσ). Still, the algorithm in [18] cannot directly be used to compute such an R efficiently, because |d − min(d)| can be as large as Θ(mσ), for example when δ = (σ, 0, ..., 0); in that case the bound (1) becomes O~(m^ω σ) operations.

By Lemma 2.1, however, d = −δ satisfies |max(d) − d| ≤ σ. For this type of unbalanced shift, a solution in O~(m^{ω−1} σ) already exists in the particular case of order basis computation [34, Section 6], building upon the partial linearization technique in [30, Section 3]. Here, we adopt a similar approach, taking advantage of the a priori knowledge of the column degree of the output matrix.

Lemma 4.2.
Lemma 4.2. Let E ∈ K^{m×σ}, J ∈ K^{σ×σ}, and s ∈ Z^m, and let δ = (δ_1, …, δ_m) denote the s-minimal degree of (E, J). Then, let δ̄ = ⌈σ/m⌉ > 0, and for i ∈ {1, …, m} write δ_i = (α_i − 1)δ̄ + β_i with α_i ≥ 1 and 0 ≤ β_i < δ̄, and let m̃ = α_1 + ⋯ + α_m. Then, define δ̃ ∈ N^{m̃} as

δ̃ = (δ̄, …, δ̄, β_1, …, δ̄, …, δ̄, β_m),    (2)

where for each i the i-th block (δ̄, …, δ̄, β_i) has α_i entries, and the expansion-compression matrix ℰ ∈ K[X]^{m̃×m} as the block-diagonal matrix

ℰ = diag([1, X^{δ̄}, …, X^{(α_1−1)δ̄}]^T, …, [1, X^{δ̄}, …, X^{(α_m−1)δ̄}]^T).    (3)

Let further d̃ = −δ̃ ∈ Z^{m̃} and R ∈ K[X]^{m̃×m̃} be a d̃-minimal interpolation basis for (ℰ · E, J). Then, the s-Popov interpolation basis for (E, J) is the submatrix of lm_{d̃}(R)^{−1} R ℰ formed by its rows at indices α_1 + ⋯ + α_i for 1 ≤ i ≤ m.

Proof.
Let P denote the s-Popov interpolation basis for (E, J); P has column degree δ. First, we partially linearize the columns of P in degree δ̄ to obtain P̃ ∈ K[X]^{m×m̃}; more precisely, P̃ is the unique matrix of degree less than δ̄ such that P = P̃ ℰ. Then, we define P̄ ∈ K[X]^{m̃×m̃} as follows:
• for 1 ≤ i ≤ m, the row α_1 + ⋯ + α_i of P̄ is the row i of P̃;
• for 0 ≤ i ≤ m − 1 and 1 ≤ j ≤ α_{i+1} − 1, the row α_1 + ⋯ + α_i + j of P̄ is the row [0, ⋯, 0, X^{δ̄}, −1, 0, ⋯, 0] ∈ K[X]^{1×m̃} with the entry X^{δ̄} at column index α_1 + ⋯ + α_i + j.

Since P is in s-Popov form with column degree δ, it is in −δ-Popov form by Lemma 4.1. Then, one can check that P̄ is in d̃-Popov form and has d̃-row degree (0, …, 0), and that each row of P̄ is an interpolant for (ℰ · E, J). In particular, since R is an interpolation basis for (ℰ · E, J), there is a matrix U ∈ K[X]^{m̃×m̃} such that P̄ = U R. Besides, there exists no interpolant p ∈ K[X]^{1×m̃} for (ℰ · E, J) which has d̃-degree less than 0: otherwise, p ℰ would be an interpolant for (E, J), and it is easily checked that it would have −δ-degree less than 0, which is impossible. Thus every row of R has d̃-degree at least 0, and the predictable degree property [19, Theorem 6.3.13] shows that U is a constant matrix, and therefore unimodular. Then, P̄ is an interpolation basis for (ℰ · E, J), and since it is in d̃-Popov form, by Lemma 4.1 we obtain that P̄ = lm_{d̃}(R)^{−1} R. The conclusion follows.

Then, it remains to prove that such a basis R can be computed efficiently using the algorithm MinimalInterpolationBasis in [18]; this leads to Algorithm 2.
Algorithm 2: KnownMinDegMIB
Input:
• a matrix E ∈ K^{m×σ} with σ > m ≥ 1,
• a Jordan matrix J ∈ K^{σ×σ} in standard representation,
• a shift s ∈ Z^m,
• δ = (δ_1, …, δ_m) ∈ N^m the s-minimal degree of (E, J).

Output: the s-Popov interpolation basis P for (E, J).

1. δ̄ ← ⌈σ/m⌉, α_i ← ⌊δ_i/δ̄⌋ + 1 for 1 ≤ i ≤ m, m̃ ← α_1 + ⋯ + α_m
2. Let δ̃ ∈ N^{m̃} be as in (2) and d̃ ← −δ̃ ∈ Z^{m̃}
3. Let ℰ ∈ K[X]^{m̃×m} be as in (3) and Ẽ ← ℰ · E
4. R ← MinimalInterpolationBasis(Ẽ, J, d̃ + (δ̄, …, δ̄))
5. P̄ ← lm_{d̃}(R)^{−1} R
6. Return the submatrix of P̄ ℰ formed by the rows at indices α_1 + ⋯ + α_i for 1 ≤ i ≤ m
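The row selection of Step 6 keeps one row per original row, namely the last row of each expansion block; as a one-line sketch of this bookkeeping (the helper name `kept_row_indices` is ours):

```python
# Indices (1-based) of the rows kept in the final step: the partial sums
# alpha_1 + ... + alpha_i, one per original row. Our own illustration.
from itertools import accumulate

def kept_row_indices(alphas):
    return list(accumulate(alphas))

assert kept_row_indices([4, 1, 1, 1]) == [4, 5, 6, 7]
```

For the instance δ = (7, 0, 1, 0) with σ = 8 this keeps rows 4, 5, 6, 7 of the 7-row matrix P̄ℰ.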
Proposition 4.3. Assuming that J ∈ K^{σ×σ} is a Jordan matrix given by a standard representation, and assuming we have the s-minimal degree of (E, J) as an additional input, there is a deterministic algorithm KnownMinDegMIB which solves Problem 1 using

O(m^{ω−1} M(σ) log(σ) log(σ/m)) operations in K if ω > 2, and
O(m M(σ) log(σ) log(σ/m) log(m)) operations in K if ω = 2.

Proof.
We focus on the case σ > m; otherwise, a better cost bound can be achieved even without knowing δ [18, Theorem 1.4]. The correctness of Algorithm 2 follows from Lemma 4.2. We remark that it uses d̃ + (δ̄, …, δ̄) rather than d̃ because the minimal interpolation basis algorithm in [18] requires the input shift to have non-negative entries. Since adding a constant to every entry of d̃ does not change the notion of d̃-reducedness, the basis R obtained at Step 4 is a d̃-minimal interpolation basis for (Ẽ, J).

Concerning the cost bound, we will show that it is dominated by the time spent in Step 4. First, we prove that |d̃ − min(d̃)| ∈ O(σ), so that the cost of Step 4 follows from [18, Theorem 1.5]. We have α_i = 1 + ⌊δ_i/δ̄⌋ ≤ 1 + mδ_i/σ for all i. Thus, m̃ = α_1 + ⋯ + α_m ≤ m + Σ_{1≤i≤m} mδ_i/σ ≤ 2m thanks to Lemma 2.1. Then, since all entries of d̃ are in {−δ̄, …, 0}, we obtain |d̃ − min(d̃)| ≤ m̃δ̄ ≤ 2m(1 + σ/m) ≤ 4σ.

Step 3 can be done in O(m M(σ) log(σ)) operations according to Lemma 4.4 below. Lemma 4.1 proves that the sum of the column degrees of R is |δ̃| = |δ| ≤ σ. Then, the product in Step 5 can be done in O(m^{ω−1} σ) operations, by first linearizing the columns of R into an m̃ × (m̃ + |δ̃|) matrix over K, then left-multiplying this matrix by lm_{d̃}(R)^{−1} (itself computed using O(m^ω) operations), and finally performing the inverse linearization. Because of the degrees in P̄ and the definition of ℰ, the output in Step 6 can be formed without using any arithmetic operation.

The efficient computation of ℰ · E can be done with the algorithm for computing residuals in [18, Section 6].
Lemma 4.4. The product ℰ · E at Step 3 of Algorithm 2 can be computed using O(m M(σ) log(σ)) operations in K.

Proof.
The product ℰ · E has m̃ rows, with m̃ ≤ 2m as above. Besides, by definition of ℰ, each row of ℰ · E is a product of the form X^{iδ̄} · E_{j,∗}, where 0 ≤ i ≤ α_j − 1, 1 ≤ j ≤ m, and E_{j,∗} denotes the row j of E. In particular, iδ̄ ≤ σ; then, according to [18, Proposition 6.1], each of these m̃ products can be performed using O(M(σ) log(σ)) operations in K.

This lemma and the partial linearization technique can also be used to compute the residual in Algorithm 1, that is, a product of the form P · E with the sum of the column degrees of P bounded by σ. First, we expand the high-degree columns of P to obtain P̃ ∈ K[X]^{m×m̃} of degree less than ⌈σ/m⌉ such that P = P̃ ℰ; then, we compute Ẽ = ℰ · E; and finally we rely on the algorithm in [18, Proposition 6.1] to compute P · E = P̃ · Ẽ efficiently.
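Entrywise, the column expansion P = P̃ℰ is just the base-X^{δ̄} chunking of polynomials. A minimal sketch (our own helpers `chunk` and `unchunk`, not code from the paper):

```python
# Partial linearization of a single polynomial (our own sketch): cut a
# coefficient list into slices of length dbar, so that
# p = sum_i X^(i*dbar) * p_i with deg(p_i) < dbar, then recombine.

def chunk(coeffs, dbar):
    """Split a coefficient list (lowest degree first) into slices."""
    return [coeffs[i:i + dbar] for i in range(0, len(coeffs), dbar)]

def unchunk(chunks, dbar):
    """Recombine: sum_i X^(i*dbar) * chunks[i]."""
    out = [0] * (dbar * len(chunks))
    for i, c in enumerate(chunks):
        for j, a in enumerate(c):
            out[i * dbar + j] = a
    return out

p = [5, 0, 1, 7, 0, 2, 3]                # degree 6
pieces = chunk(p, 3)                      # [[5, 0, 1], [7, 0, 2], [3]]
assert unchunk(pieces, 3)[:len(p)] == p
assert all(len(c) <= 3 for c in pieces)   # each piece has degree < 3
```

Applying `chunk` to every entry of a column of degree δ_j produces the α_j low-degree columns that multiply the corresponding block [1, X^{δ̄}, …, X^{(α_j−1)δ̄}]^T of ℰ.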
Corollary 4.5. Let E ∈ K^{m×σ} with σ > m, and let J ∈ K^{σ×σ} be a Jordan matrix given by a standard representation. Let P ∈ K[X]^{m×m} with column degree (δ_1, …, δ_m) such that δ_1 + ⋯ + δ_m ≤ σ. Then, the product P · E can be computed using O(m^{ω−1} M(σ) log(σ)) operations in K.

Acknowledgments.
We thank B. Beckermann and G. Labahn for their valuable comments, as well as an anonymous referee for suggesting a shorter proof of Lemma 3.3. C.-P. Jeannerod and G. Villard were partly supported by the ANR project HPAC (ANR 11 BS02 013). V. Neiger was supported by the international mobility grants Explo'ra Doc from Région Rhône-Alpes, PALSE, and Mitacs Globalink - Inria. É. Schost was supported by NSERC.
5. REFERENCES

[1] B. Beckermann and G. Labahn. A uniform approach for the fast computation of matrix-type Padé approximants. SIAM J. Matrix Anal. Appl., 15(3):804–823, 1994.
[2] B. Beckermann and G. Labahn. Fraction-free computation of matrix rational interpolants and matrix gcds. SIAM J. Matrix Anal. Appl., 22(1):114–144, 2000.
[3] B. Beckermann, G. Labahn, and G. Villard. Normal forms for general polynomial matrices. J. Symbolic Comput., 41(6):708–737, 2006.
[4] P. Beelen and K. Brander. Key equations for list decoding of Reed-Solomon codes and how to solve them. J. Symbolic Comput., 45(7):773–786, 2010.
[5] A. Bostan, C.-P. Jeannerod, and É. Schost. Solving structured linear systems with large displacement rank. Theor. Comput. Sci., 407(1-3):155–181, 2008.
[6] K. Brander. Interpolation and List Decoding of Algebraic Codes. PhD thesis, Technical University of Denmark, 2010.
[7] P. Busse. Multivariate List Decoding of Evaluation Codes with a Gröbner Basis Perspective. PhD thesis, University of Kentucky, 2008.
[8] D. G. Cantor and E. Kaltofen. On fast multiplication of polynomials over arbitrary algebras. Acta Inform., 28(7):693–701, 1991.
[9] M. Chowdhury, C.-P. Jeannerod, V. Neiger, É. Schost, and G. Villard. Faster algorithms for multivariate interpolation with multiplicities and simultaneous polynomial approximations. IEEE Trans. Inf. Theory, 61(5):2370–2387, 2015.
[10] H. Cohn and N. Heninger. Approximate common divisors via lattices. In Tenth Algorithmic Number Theory Symposium, pages 271–293. Mathematical Sciences Publishers (MSP), 2012–2013.
[11] D. Coppersmith and S. Winograd. Matrix multiplication via arithmetic progressions. J. Symbolic Comput., 9(3):251–280, 1990.
[12] C. Devet, I. Goldberg, and N. Heninger. Optimally robust private information retrieval. In USENIX Security 12, pages 269–283. USENIX, 2012.
[13] J. von zur Gathen and J. Gerhard. Modern Computer Algebra (third edition). Cambridge University Press, 2013.
[14] P. Giorgi, C.-P. Jeannerod, and G. Villard. On the complexity of polynomial matrix computations. In ISSAC'03, pages 135–142. ACM, 2003.
[15] S. Gupta and A. Storjohann. Computing Hermite forms of polynomial matrices. In ISSAC'11, pages 155–162. ACM, 2011.
[16] V. Guruswami and A. Rudra. Explicit codes achieving list decoding capacity: Error-correction with optimal redundancy. IEEE Trans. Inf. Theory, 54(1):135–150, 2008.
[17] V. Guruswami and M. Sudan. Improved decoding of Reed-Solomon and algebraic-geometry codes. IEEE Trans. Inf. Theory, 45(6):1757–1767, 1999.
[18] C.-P. Jeannerod, V. Neiger, É. Schost, and G. Villard. Computing minimal interpolation bases. HAL open archive, https://hal.inria.fr/hal-01241781, 2015.
[19] T. Kailath. Linear Systems. Prentice-Hall, 1980.
[20] D. E. Knuth. The analysis of algorithms. In Congrès int. Math., Nice, France, volume 3, pages 269–274, 1970.
[21] R. Koetter and A. Vardy. Algebraic soft-decision decoding of Reed-Solomon codes. IEEE Trans. Inf. Theory, 49(11):2809–2825, 2003.
[22] F. Le Gall. Powers of tensors and fast matrix multiplication. In ISSAC'14, pages 296–303. ACM, 2014.
[23] R. T. Moenck. Fast computation of GCDs. In Proc. 5th ACM Symp. Theory Comp., pages 142–151, 1973.
[24] T. Mulders and A. Storjohann. On lattice reduction for polynomial matrices. J. Symbolic Comput., 35:377–401, 2003.
[25] V. Olshevsky and M. A. Shokrollahi. A displacement approach to efficient decoding of algebraic-geometric codes. In STOC'99, pages 235–244. ACM, 1999.
[26] F. Parvaresh and A. Vardy. Correcting errors beyond the Guruswami-Sudan radius in polynomial time. In FOCS'05, pages 285–294. IEEE, 2005.
[27] R. M. Roth and G. Ruckenstein. Efficient decoding of Reed-Solomon codes beyond half the minimum distance. IEEE Trans. Inf. Theory, 46(1):246–257, 2000.
[28] S. Sarkar and A. Storjohann. Normalization of row reduced matrices. In ISSAC'11, pages 297–304. ACM, 2011.
[29] A. Schönhage. Schnelle Berechnung von Kettenbruchentwicklungen. Acta Inform., 1:139–144, 1971.
[30] A. Storjohann. Notes on computing minimal approximant bases. In Dagstuhl Seminar Proceedings, 2006.
[31] M. Van Barel and A. Bultheel. A general module theoretic framework for vector M-Padé and matrix rational interpolation. Numer. Algorithms, 3:451–462, 1992.
[32] A. Zeh, C. Gentner, and D. Augot. An interpolation procedure for list decoding Reed-Solomon codes based on generalized key equations. IEEE Trans. Inf. Theory, 57(9):5946–5959, 2011.
[33] W. Zhou. Fast Order Basis and Kernel Basis Computation and Related Problems. PhD thesis, University of Waterloo, 2012.
[34] W. Zhou and G. Labahn. Efficient algorithms for order basis computation. J. Symbolic Comput., 47(7):793–819, 2012.
APPENDIX

A. REDUCING THE ENTRIES OF THE SHIFT

Let A ∈ K[X]^{m×m} be nonsingular, let s ∈ Z^m, and consider σ ∈ N such that σ > deg(det(A)). Here, we show how to construct a shift t ∈ N^m such that
• the s-Popov form P of A is also in t-Popov form;
• min(t) = 0, max(t) ≤ (m − 1)σ, and |t| ≤ m²σ/2.

Let ŝ = (s_{π(1)}, …, s_{π(m)}) where π is a permutation of {1, …, m} such that ŝ is non-decreasing. Then, we define t̂ = (t̂_1, …, t̂_m) by t̂_1 = 0 and, for 2 ≤ i ≤ m,

t̂_i − t̂_{i−1} = σ if ŝ_i − ŝ_{i−1} > σ, and t̂_i − t̂_{i−1} = ŝ_i − ŝ_{i−1} otherwise.

Let t = (t̂_{π^{−1}(1)}, …, t̂_{π^{−1}(m)}). Since the diagonal entries of P have degree at most deg(det(A)) < σ, we obtain that P is in t-weak Popov form with its pivots on the diagonal, and thus in t-Popov form.

B. EXAMPLE OF ORDER BASIS WITH SIZE BEYOND OUR TARGET COST
We focus on a Hermite-Padé approximation problem of order σ, with input F of dimension 2m × 1 where σ > m, and shift s = (0, …, 0, σ, …, σ) ∈ N^{2m} with m entries 0 and m entries σ. Let f be a polynomial in X with nonzero constant coefficient, and let f_1, …, f_m be generic polynomials in X of degree less than σ. Then, we consider the following input with all entries truncated modulo X^σ:

F = [f, f + Xf_1, X(f + Xf_1), ⋯, X^{m−1}(f + Xf_1), f_2, ⋯, f_m]^T.

After m steps, the iterative algorithm in [1] has computed an s-minimal basis P^{(m)} of approximants for F and order m, which is such that t = rdeg_s(P^{(m)}) = (1, …, 1, σ, …, σ) and

P^{(m)} F = [0, ⋯, 0, X^m f, X^m g_2, ⋯, X^m g_m]^T mod X^σ,

for some polynomials g_2, …, g_m.

Now we finish the process up to order σ. Since the coefficient of degree m of X^m f is nonzero and because of the specific shift t, the obtained s-minimal basis P of approximants for F has degree profile

P = ⎡ [1]   [0]                          ⎤
    ⎢  ⋮     ⋱     ⋱                    ⎥
    ⎢ [1]   ⋯    [1]   [0]              ⎥
    ⎢ [d+1] ⋯   [d+1] [d+1]             ⎥
    ⎢ [d]   ⋯    [d]   [d]  [0]         ⎥
    ⎢  ⋮    ⋯     ⋮     ⋮        ⋱      ⎥
    ⎣ [d]   ⋯    [d]   [d]         [0]  ⎦

where d = σ − m, [i] denotes an entry of degree i, the entries left blank correspond to the zero polynomial, and the entries [d + 1] are on the m-th row. In particular, P has size Θ(m²σ).
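To make the Θ(m²σ) size concrete, the sketch below (our own accounting, under our reading of the profile above: rows 1 to m−1 with i entries of degree 1 and one of degree 0, row m with m entries of degree d+1, and rows m+1 to 2m with m entries of degree d plus a degree-0 diagonal entry) counts stored coefficients, an entry of degree k taking k + 1 of them.

```python
# Coefficient count (ours) for the degree profile of P described above;
# an entry of degree k stores k + 1 field elements.

def profile_size(m, sigma):
    d = sigma - m
    top = sum(2 * i + 1 for i in range(1, m))     # rows 1..m-1
    mid = m * (d + 2)                             # row m: m entries of degree d+1
    bottom = m * (m * (d + 1) + 1)                # rows m+1..2m
    return top + mid + bottom

m, sigma = 10, 100
assert profile_size(m, sigma) >= m * m * (sigma - m)   # size in Omega(m^2 * sigma)
```

The bottom block alone contributes m²(d + 1) ≥ m²(σ − m) coefficients, which dominates and already exceeds the O(mσ) size of a shifted Popov basis.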