Coordinate sections of generic Hankel matrices
Rainelly Cunha, Maral Mostafazadehfard, Zaqueu Ramos, Aron Simis
Abstract
One deals with degenerations by coordinate sections of the square generic Hankel matrix over a field k of characteristic zero, along with its main related structures, such as the determinant of the matrix, the ideal generated by its partial derivatives, the polar map defined by these derivatives, the Hessian matrix and the ideal of the submaximal minors of the matrix. It is proved that the polar map is dominant for any such degeneration, and not homaloidal in the generic case. The problem of whether the determinant f of the matrix is a factor of the Hessian with the (Segre) expected multiplicity is considered, for which the expected lower bound on the dimension of the dual variety of V(f) is established.

Introduction
A good amount of algebraic work has addressed various aspects of Hankel matrices, not to mention the innumerable list of papers dealing with their applications – often under the disguise of Toeplitz matrices – ranging from Functional Analysis to Orthogonal Polynomial Theory to the Moment Problem and Probability. In commutative algebra and algebraic geometry they appear with functional entries, or actually polynomial entries. An expressive case is that where the entries are actually linear forms, in fact even variables, giving room to some of the so-called determinantal ideals or varieties. Typical geometric objects defined by Hankel matrices and their ideals of minors are secant varieties and normal scrolls. Those in turn have many applications in discrete geometry, including graph theory.

In commutative algebra, the work of Watanabe ([25]), Conca ([5]), Eisenbud ([10] and [11]), among others, has a distinct role. It is rarely the case that a paper on the subject does not quote one of these sources.

AMS 2010 Mathematics Subject Classification (2010 Revision). Primary 13C40, 13D02, 13H10, 14E05, 14E07; Secondary 13C15, 14M10, 14M12, 14M15.

Key Words and Phrases: Hankel matrix, Hessian, ideal of minors, polar map, gradient ideal, linear rank, special fiber, Cohen–Macaulay.

Partially supported by a CAPES-PNPD post-doctoral fellowship (1723357/2017) from the Federal University of Sergipe, Brazil, during parts of this work. This author thanks the Department of Mathematics for providing the appropriate environment to carry out the research. Part of the work was done while this author held a CAPES-PNPD Post-Doctoral fellowship (88882.317200/2019-01) from IMPA, Brazil. She thanks IMPA for providing the appropriate environment to carry out the research. Part of the work was done while this author held a grant from CAPES/FAPITEC (88887.157386/2017-00). Partially supported by a CNPq grant (302298/2014-2).
Part of the work was done while this author held a PVNS Fellowship from CAPES (5742201241/2016), and subsequently a Senior Visiting Research Grant at the ICMC-USP (São Carlos, SP). He thanks the Department of Mathematics of the Federal University of Paraíba and the ICMC-USP for providing an excellent environment for discussions on this work.

The degenerations considered here are dubbed coordinate sections, honoring [12]. The geometric idea behind this terminology is that one is looking at coordinate hyperplane sections of the associated determinantal varieties. This sort of degeneration has been thoroughly dealt with in [7] and [8], and considered before by other authors ([12], [11]). The main subject is thus a square Hankel matrix H_m[r] of order m, with r zeros along the lower stretch of its last column. The first section is about the basics of such matrices and the gradient ideals of the respective determinants.

Section 2 deals with properties of the ideals of lower minors of the above degenerations. We are particularly interested in the ideal of submaximal minors. Several properties relating I_{m-1}(H_m) (here H_m = H_m[0], the generic case) to the gradient ideal J generated by the partial derivatives of det H_m are proved in [18]. One of these is that I_{m-1}(H_m) is the minimal component of the primary decomposition of J. The main subject in the section is the codimension, primeness and Cohen–Macaulayness of ideals of minors of the degeneration H_m[r]. A central question addressed, but unfortunately still pending, is the precise structure of the ideal of submaximal minors and its associated algebras, such as the special fiber.

Having dealt for a while with ideals of minors of the degeneration matrix H_m[r], in Section 3 we consider more closely its differential properties. Throughout and henceforth, we assume that the ground characteristic is zero.
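The objects just introduced are easy to experiment with in a computer algebra system. The following minimal sketch (our own; the helper name hankel_degeneration is not from the paper) builds H_m[r] and the generators of the gradient ideal of its determinant with sympy:

```python
# Build the degeneration H_m[r] of the square generic Hankel matrix and the
# gradient ideal of its determinant (illustrative sketch, not the paper's code).
import sympy as sp

def hankel_degeneration(m, r):
    # Ground ring k[x_1, ..., x_{2m-r-1}]; the 0-based entry (i, j) carries
    # x_{i+j+1}, which degenerates to 0 once i + j + 1 >= 2m - r.
    xs = sp.symbols(f'x1:{2*m - r}')
    M = sp.Matrix(m, m, lambda i, j: xs[i + j] if i + j + 2 <= 2*m - r else 0)
    return M, xs

M, xs = hankel_degeneration(4, 1)            # H_4[1] over k[x_1, ..., x_6]
f = M.det()                                  # a nonzero quartic form
grad = [sp.diff(f, x) for x in xs]           # generators of the gradient ideal J
print(M[3, 3], sp.expand(f) != 0)            # the bottom-right entry is 0
```

Here r counts the zeros in the last column; by the Hankel shape they propagate along the anti-diagonals of the lower-right corner.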
Thus, we look at the ideal J = J(f) ⊂ R generated by the partial derivatives of f := det H_m[r] – usually called the Jacobian ideal of f, a terminology here deferred in favor of gradient ideal of f. The Jacobian matrix of the partial derivatives is the so-called Hessian matrix of f. By abuse, its determinant is called the Hessian of f. First one considers the Hessian h(f) of f = det H_m[r]. Since we are assuming characteristic zero, the algebraic translation of its non-vanishing is that the analytic spread of the gradient ideal of f is maximal, i.e., equal to dim R in this case. The geometric impact of this result is that the polar map of f is dominant for any value of r. It immediately raises the supplementary question as to whether the polar map is actually a Cremona map (i.e., whether f is a homaloidal polynomial) – this problem will be considered in Section 4.2. This state of affairs is quite surprising, as the analogous degeneration in the case of the generic square matrix has a vanishing Hessian determinant as soon as r ≥ 1. The case r = m − 2 (sub-Hankel) has been proved in [6] by showing that the corresponding Hessian is a power of x_{m+1} up to a nonzero coefficient from k. At the other end of the spectrum, the fully generic case (i.e., r = 0) has been given in [17, Proposition 3.3.11] by a method of degenerating the Hessian matrix by setting every variable to zero, except x_m. Unfortunately, this simple degeneration does not work well all the way for arbitrary r ≥ 1. However, the idea of suitably degenerating the Hessian matrix can still be used, as is shown in the Appendix. The proof draws on degenerating the Hessian matrix by as many coordinate sections as possible. It is not apparent what is a best choice (if ever), since it may depend on both m and r. Some choices may work in one case, but not elsewhere. For any promising degeneration the argument will be painfully long. Luckily, in the case r = 0 (i.e., the fully generic case), one can alternatively look at the more elegant argument in Corollary 4.7.

An interesting question in general is whether f := det H_m[r] is a factor of its Hessian determinant h(f) with multiplicity ≥ 1. If this is the case, then f is said in addition to have the expected multiplicity (according to Segre) if its multiplicity as a factor of h(f) is codim(V(f)*) − 1, where V(f)* denotes the dual variety to the hypersurface V(f) (see [6]). In a similar context this question has been tackled in [18, Section 3.2] (see also [7, Remark 2.7 (2)] for a couple of corrections). In the case of the m × m generic matrix G, the Hessian of f = det G equals f^{m(m−2)}. Now, it is well-known that the dual variety to f in this case is defined by the ideal of 2-minors of the same matrix in the dual variables. Since the latter has codimension (m − 1)^2 = m(m − 2) + 1, f is a divisor of its Hessian determinant with the expected multiplicity. An extension of this result has been conjectured in [18], namely for the class of higher leap Hankel-wise m × m matrices, and confirmed for a few low values of m and the leap.

In a different direction, the question comes up for degenerations of G that preserve the non-vanishing of the corresponding Hessian determinant. A case has been dealt with in [7, Section 2.3], where the same phenomenon has been proved to hold. On the other hand, for the analogue of the degeneration studied here as applied to G the Hessian always vanishes for r ≥ 1, so the question does not come up. It is somewhat surprising that the degenerations H_m[r] by coordinate sections recover the non-vanishing of the corresponding Hessian determinant as mentioned above. We prove that dim V(f)* is at least m − 1.

Next we take up the gradient ideal J itself. Here the main result is the structure of its minimal prime ideals in the case where 1 ≤ r ≤ m − 3. This question in the cases where r = 0 (generic) or r = m − 2 (sub-Hankel) has been dealt with before. In the present range, J is never a reduction of the ideal of submaximal minors. This lies at the crux of the typical difficulty in handling the case of 1 ≤ r ≤ m − 3. The complete set of associated primes of J remains a mystery even in the generic case.

The last topic of the section is the linear behavior of the gradient ideal J. The two notions of linear behavior for a homogeneous ideal – its linear rank and its linear type property – have no apparent relation in general. Yet, for a homogeneous ideal generated by its linear system in the initial degree, it has been noted that the two notions intertwine as regards the birationality of the map defined by this linear system (see [9, Theorem 3.2 and Proposition 3.4] and earlier references mentioned there). For the generic Hankel matrix H_m, at least in null characteristic, it was proved that the linear rank of J is 3 ([17, Theorem 3.3.5]). At the other end of the spectrum, for the degeneration H_m[m − 2] it was proved that the linear rank of J is the maximal possible (= m) ([6]) and that J is in addition of linear type ([18, Section 4.1, Theorem 4.8]). Here we conjecture that, for 1 ≤ r ≤ m − 3, the linear rank of J is 2. We give a proof of this assertion based on another conjecture regarding a regular sequence modulo the gradient ideal in the generic case. The linear rank conjecture itself fails in positive characteristic.

Section 4 is mainly about properties of the generic Hankel matrix, as a natural sequel to questions posed in [18]. Thus, we first make a thorough review of the background combinatorics, including Plücker relations, that affects this case. The main results are Proposition 4.1 and Theorem 4.8. The first essentially states that the defining equations of the algebra of submaximal minors are the same as those of a Grassmannian (namely, the Plücker relations) – a somewhat surprising result. The second tells that the polar map in the generic case is not homaloidal and, in addition, that the gradient ideal is a reduction of the ideal of submaximal minors. The first of these results has also been obtained by N. Medeiros by a geometric argument. The last part of the section contains a discussion of the elements to prove a parallel result to the effect that the polar map is not homaloidal also in the case 1 ≤ r ≤ m − 3.

Let (R, m) denote a Noetherian local ring and its maximal ideal (respectively, a standard graded ring over a field and its irrelevant ideal). For an ideal I ⊂ m (respectively, a homogeneous ideal I ⊂ m), the special fiber of I is the ring R(I)/mR(I), where R(I) denotes the Rees algebra of I. Note that this is an algebra over the residue field of R. The (Krull) dimension of this algebra is called the analytic spread of I and is denoted ℓ(I).

Quite generally, given ideals J ⊂ I in a ring R, J is said to be a reduction of I if there exists an integer n ≥ 0 such that I^{n+1} = J I^n. A reduction J of I is called minimal if no ideal strictly contained in J is a reduction of I. The reduction number of I with respect to a reduction J is the minimum integer n such that J I^n = I^{n+1}. It is denoted by red_J(I).
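To fix ideas, here is a toy illustration of these notions (our own example, not from the paper): in k[x, y] the ideal J = (x^2, y^2) is a minimal reduction of I = (x^2, xy, y^2), since J I = I^2; thus red_J(I) = 1, and J is generated by ℓ(I) = 2 elements. The inclusion to be checked is between monomial ideals, hence a divisibility test:

```python
# Toy reduction check (not from the paper): J = (x^2, y^2) is a reduction of
# I = (x^2, xy, y^2) in k[x, y], because J*I = I^2.
import sympy as sp
from itertools import product

x, y = sp.symbols('x y')
I = [x**2, x*y, y**2]
J = [x**2, y**2]

def in_monomial_ideal(mono, gens):
    # Membership in a monomial ideal amounts to divisibility by some generator.
    return any((mono / g).cancel().is_polynomial(x, y) for g in gens)

I2 = [a * b for a, b in product(I, repeat=2)]   # generators of I^2
JI = [a * b for a, b in product(J, I)]          # generators of J*I
# J*I <= I^2 is automatic; the reduction condition is the reverse inclusion:
print(all(in_monomial_ideal(g, JI) for g in I2))   # True
```

By contrast, xy ∈ I does not lie in J itself, so the reduction is proper.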
The (absolute) reduction number of I is defined as red(I) = min{red_J(I) | J ⊂ I is a minimal reduction of I}. If R/m is infinite, every minimal reduction of I is minimally generated by exactly ℓ(I) elements. In particular, every reduction of I contains a reduction generated by ℓ(I) elements. The following invariants are related in the case of (R, m):

codim(I) ≤ ℓ(I) ≤ min{μ(I), dim(R)},

where μ(I) stands for the minimal number of generators of I. If the rightmost inequality turns out to be an equality, one says that I has maximal analytic spread. By and large, the ideals considered in this work will have dim R ≤ μ(I), hence being of maximal analytic spread means in this case that ℓ(I) = dim R.

Suppose now that R is standard graded over a field k and I is an ideal of grade ≥ 1 generated by n + 1 forms of a given degree s. One has a free graded presentation

R(−(s+1))^t ⊕ (⊕_{j≥2} R(−(s+j))^{t_j}) --φ--> R(−s)^{n+1} --> I --> 0,

for suitable shifts −(s+j) and integers t, t_j ≥ 0. Of much interest is the image of R(−(s+1))^t by φ, the so-called linear part of φ – often denoted φ_1. Since φ has a rank, so does φ_1. One says that the rank of φ_1 is the linear rank of φ (or of I) and that φ has maximal linear rank provided its linear rank is n (= rank(φ)). Clearly, the latter condition is trivially satisfied if φ = φ_1, in which case I is said to have linear presentation (or to be linearly presented).

Given a monomial order on the polynomial ring R over a field, if f ∈ R we denote by in(f) the initial term of f and by in(I) the ideal generated by the initial terms of the elements of I, called the initial ideal of I. For the general theory of monomial ideals and Gröbner bases we refer to [15].

The generic Hankel matrix of order s × (n − s + 1) in n variables is the catalecticant

H_{s,n−s+1} :=
( x_1  x_2      x_3      . . .  x_s      . . .  x_{n−s}    x_{n−s+1}
  x_2  x_3      x_4      . . .  x_{s+1}  . . .  x_{n−s+1}  x_{n−s+2}
  x_3  x_4      x_5      . . .  x_{s+2}  . . .  x_{n−s+2}  x_{n−s+3}
  . . .
  x_s  x_{s+1}  x_{s+2}  . . .  x_{2s−1} . . .  x_{n−1}    x_n ),        (1)

where, say, s ≤ n − s + 1 (i.e., n ≥ 2s − 1). When s = n − s + 1 := m, the square Hankel matrix of order m is henceforth denoted H_m. This work focuses on certain degenerations of the generic Hankel matrix.

As usual, the ideal of t-minors of an arbitrary matrix M will be denoted I_t(M). The ideals of minors of a Hankel matrix have a notable behavior. It has been proved in [11, Proposition 4.3] that, for any 1 ≤ t ≤ min{s, n − s + 1}, the ideal of t-minors of the generic Hankel matrix H_{s,n−s+1} is prime and of codimension n − 2t + 2. The proof of this result uses an important property of Hankel matrices first made explicit in the work of Gruson and Peskine:

Proposition 1.1. ([14])
For any t ≤ s, one has I_t(H_{s,n−s+1}) = I_t(H_{t,n−t+1}).

This property allows one to reduce to the case of maximal minors. In this case, to get the codimension one may use the fact that the Hankel matrix specializes to a well-known shape involving only 2(m − t) + 1 variables – but see Lemma 2.3 later on for a more precise statement. In particular, in the case of the square Hankel matrix H_m, its ideal of submaximal minors coincides with the ideal of maximal minors of H_{m−1,m+1}.

Remark 1.2. It is perhaps worth observing that the notation of the Hankel matrix in (1) should include the set of variables used. Instead it is customary to fix the set of variables once and for all, while administering various changes of sizes. In this regard, given H_{s,n−s+1} and t ≤ s as in Proposition 1.1, a matrix such as H_{t,n−t+1} is uniquely defined.

The following result originally appeared in [13] in a different context. It has independently been obtained in [17, Proposition 5.3.1] in the presently stated form.

Proposition 1.3.
Let M denote a square matrix over R = k[x_1, . . . , x_n] such that every entry is either 0 or x_i for some i = 1, . . . , n. Then, for each i = 1, . . . , n, the partial derivative of f = det(M) with respect to x_i is the sum of the (signed) cofactors of the entry x_i in all its slots as an entry of M.

Let R = k[x] = k[x_1, . . . , x_n] denote a standard graded polynomial ring over a field k of characteristic zero. We will only consider degenerations induced by a k-algebra homomorphism of R and, in fact, those induced by coordinate sections. More particularly, our focus is on the following degenerations of the square version of (1):

( x_1      x_2      . . .  x_{m−1}   x_m
  x_2      x_3      . . .  x_m       x_{m+1}
  . . .
  x_{m−1}  x_m      . . .  x_{2m−3}  x_{2m−2}
  x_m      x_{m+1}  . . .  x_{2m−2}  0 ),

( x_1      x_2      . . .  x_{m−2}   x_{m−1}   x_m
  x_2      x_3      . . .  x_{m−1}   x_m       x_{m+1}
  . . .
  x_{m−2}  x_{m−1}  . . .  x_{2m−5}  x_{2m−4}  x_{2m−3}
  x_{m−1}  x_m      . . .  x_{2m−4}  x_{2m−3}  0
  x_m      x_{m+1}  . . .  x_{2m−3}  0         0 ),

. . . ,

( x_1  x_2      x_3  . . .  x_{m−1}  x_m
  x_2  x_3      x_4  . . .  x_m      x_{m+1}
  x_3  x_4      x_5  . . .  x_{m+1}  0
  . . .
  x_m  x_{m+1}  0    . . .  0        0 ).

We will denote by H_m[r] a Hankel degeneration as above, where r denotes the number of zeros on the last column. The last matrix in the above thread was dubbed sub-Hankel in [6] (see also [17], [18]). This notation will also be used when the Hankel matrix is not necessarily square, namely, H_{s,n−s+1}[r].

For a given r, the ground ring for H_m[r] is the polynomial ring k[x_1, . . . , x_{2m−r−1}]. If no confusion arises, when r is fixed in the discussion, we will denote this ring simply by R. It is quite clear that taking minors commutes with ring homomorphisms. In the present situation this is expressed in the following convenient way:

Lemma 1.4.
Given an integer r ≥ 0, let Φ be the endomorphism of R = k[x_1, . . . , x_n] such that

Φ(x_i) = 0 if i > n − r, and Φ(x_i) = x_i otherwise.

Then

(a) I_t(H_{s,n−s+1}[r]) = Φ(I_t(H_{s,n−s+1})), for all 1 ≤ t ≤ s.

(b) I_t(H_{s,n−s+1}[r]) = I_t(H_{t,n−t+1}[r]), for all 1 ≤ t ≤ s.

Proof. (a) This is clear.

(b) It follows from item (a) and Proposition 1.1.
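Item (b) can be verified directly on a small case with a computer algebra system. The sketch below (our own, not the paper's) compares I_2(H_{3,3}[1]) and I_2(H_{2,4}[1]) inside k[x_1, . . . , x_4] via reduced Gröbner bases:

```python
# Check of Lemma 1.4(b) in the smallest interesting case: the 2-minors of
# H_{3,3}[1] and of H_{2,4}[1] generate the same ideal of k[x1, ..., x4].
import sympy as sp
from itertools import combinations

xs = sp.symbols('x1:5')
x1, x2, x3, x4 = xs
H33 = sp.Matrix([[x1, x2, x3], [x2, x3, x4], [x3, x4, 0]])   # H_{3,3}[1]
H24 = sp.Matrix([[x1, x2, x3, x4], [x2, x3, x4, 0]])         # H_{2,4}[1]

def two_minors(M):
    dets = [M[list(r), list(c)].det()
            for r in combinations(range(M.rows), 2)
            for c in combinations(range(M.cols), 2)]
    return [d for d in dets if d != 0]

gb1 = sp.groebner(two_minors(H33), *xs, order='grevlex')
gb2 = sp.groebner(two_minors(H24), *xs, order='grevlex')
print(set(gb1.exprs) == set(gb2.exprs))   # True: the two ideals coincide
```

Since reduced Gröbner bases (for a fixed order of the variables) are canonical, equality of the two bases is equality of the two ideals.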
The proof of the following proposition is inspired by an elementary fact observed in the case of the sub-Hankel matrix in [6, Remark 4.6 (c)], suitably generalized to the case of an arbitrary Hankel degeneration. Actually, the observation works for the generic Hankel matrix itself, thus avoiding drawing upon the general result about this matrix being 1-generic [11].
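As a computational sanity check of the statement below on a small instance (our own sketch, using sympy): for m = 4, r = 1 the determinant of H_4[1] is a nonzero irreducible quartic over the rationals.

```python
# det H_4[1] is nonzero and irreducible over Q (small instance, m = 4, r = 1).
import sympy as sp

x1, x2, x3, x4, x5, x6 = sp.symbols('x1:7')
H41 = sp.Matrix([
    [x1, x2, x3, x4],
    [x2, x3, x4, x5],
    [x3, x4, x5, x6],
    [x4, x5, x6, 0],
])
f = sp.expand(H41.det())
_, factors = sp.factor_list(f)
print(f != 0, len(factors), factors[0][1])   # nonzero, one factor, multiplicity 1
```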
Proposition 2.1.
Let H_m[r] denote the degeneration of the m × m generic Hankel matrix considered in the previous section. Let R denote the polynomial ring on the distinct nonzero entries of the matrix. Then

(i) det H_m[r] ≠ 0.

(ii) det H_m[r] ∈ R is irreducible if and only if r ≤ m − 2.

Proof.
Set f := det H_m[r].

(i) There are many elementary ways of verifying the non-vanishing of f. Perhaps an easy one is to see that f has a unique nonzero pure term in x_m, namely, the product of the entries along the main anti-diagonal.

(ii) The "only if" part is obvious, since for r > m − 2 the determinant would be a power of x_m or zero. For the reverse implication we will induct on m. The initial step of the induction will be subsumed in the general step. We may assume that r ≤ m − 3, the case r = m − 2 (sub-Hankel) having been settled in [6]. Since x_1 only appears once, and on the first row, one easily sees that f = x_1 f_1 + g, for some g ∈ k[x_2, . . . , x_{2m−r−1}], where f_1 ∈ k[x_3, . . . , x_{2m−r−1}] is the determinant of the Hankel degeneration of type H_{m−1}[r] obtained by omitting the first row and the first column of the original Hankel degeneration. To show that f is irreducible it suffices to prove that it is a primitive polynomial (of degree 1) in k[x_2, . . . , x_{2m−r−1}][x_1]. Now, on one hand, f_1 is irreducible by the inductive hypothesis, since one is assuming that r ≤ m − 3 = (m − 1) − 2. Therefore, it is enough to see that f_1 is not a factor of g. For this, one verifies their initial terms in the revlex monomial order: in(f_1) = x_{m+1}^{m−1} and in(g) = in(f) = x_m^m.

Remark 2.2.
Since f is homogeneous, an alternative argument for the case r ≤ m − 3 is to prove that R/(f) is normal. Since R/(f) is a hypersurface ring, it suffices to prove that it is locally regular in codimension one. By Proposition 3.4 below, proved independently, the gradient ideal J has codimension 3 = 1 + 2 provided r ≤ m − 3. This proves that f is irreducible when r ≤ m − 3.

2.2 Ideals of lower minors

As a preliminary we need a result about the lower minors of a not necessarily square generic Hankel matrix. Let H denote the generic Hankel matrix of order a × b, with a ≤ b and entries x_1, . . . , x_{a+b−1}. Set R = k[x_1, . . . , x_{a+b−1}] for the ground polynomial ring on these entries. Then the ideal I_a(H) of maximal minors has the expected codimension b − a + 1; in particular, R/I_a(H) is a Cohen–Macaulay ring of dimension 2(a − 1).

Lemma 2.3.
With the above notation, an explicit system of parameters of R/I_a(H) is given by the residues of the elements x_1, . . . , x_{a−1}, x_{b+1}, . . . , x_{a+b−1}.

Proof. Consider the corresponding fully generic a × b matrix G = (y_{i,j}), for which it is well-known that I_a(G) ⊂ S := k[y_{i,j}] has codimension b − a + 1. Let D denote the ideal of S generated by the (a − 1)(b − 1) independent linear forms y_{i,j} − y_{i+1,j−1}, for 1 ≤ i < a, 2 ≤ j ≤ b. Clearly, there is an isomorphism S/D ≃ R specializing G to H and inducing an isomorphism

S/I_a(G) (mod D) ≃ R/I_a(H).

Since the dimensions of S/I_a(G) and R/I_a(H) are ab − (b − a + 1) = (b + 1)(a − 1) and a + b − 1 − (b − a + 1) = 2(a − 1), respectively, and since S/I_a(G) is Cohen–Macaulay, the (a − 1)(b − 1) generators of D form a regular sequence on S/I_a(G). On the other hand, the initial ideal of I_a(G) in the reverse lex order is generated by the products of the entries along the anti-diagonals of the a-minors. Therefore, it follows that the stated set {x_1, . . . , x_{a−1}, x_{b+1}, . . . , x_{a+b−1}} is a system of parameters (regular sequence) on R/I_a(H).

For the next result we set R := k[x_1, . . . , x_{2m−1}] and ¯R := k[x_1, . . . , x_{2m−r−1}], the latter viewed as the residue ring of the first by the ideal (x_{2m−r}, . . . , x_{2m−1}). We emphasize that the first is the ground polynomial ring of H_m and the second that of H_m[r].

Proposition 2.4.
Assume that 1 ≤ r ≤ m − 2 and let 1 ≤ t ≤ m. Then

(i) codim I_t(H_m[r]) = min{2(m − t) + 1, 2m − t − r} and ¯R/I_t(H_m[r]) is a Cohen–Macaulay ring.

(ii) I_t(H_m[r]) is a prime ideal if and only if either t = 1 or else t ≥ r + 2.

Proof. (i) We apply Lemma 1.4 (b), by which the ideal I_t(H_m[r]) ⊂ ¯R is generated by the maximal minors of H_{t,2m−t}[r]. Therefore its codimension is at most (2m − t) − t + 1 = 2(m − t) + 1. We analyse the two cases separately:

(1) t ≥ r + 2. Let H denote the uniquely defined generic Hankel matrix of size t × (2m − t) over the ground ring R (see Remark 1.2). By Proposition 1.1, one has I_t(H_m) = I_t(H). Setting A := R/I_t(H), one has R/(x_{2m−r}, . . . , x_{2m−1}) ≃ ¯R and

A/(x_{2m−r}, . . . , x_{2m−1})A ≃ ¯R/I_t(H_m[r]).

Since {x_{2m−r}, . . . , x_{2m−1}} is a subset of the system of parameters determined in Lemma 2.3, as applied with a = t, b = 2m − t, it is a regular sequence over A. Therefore, ¯R/I_t(H_m[r]) is Cohen–Macaulay. Moreover, since dim(¯R/I_t(H_m[r])) = dim(A) − r, one gets codim(I_t(H_m[r])) = 2(m − t) + 1.

(2) t ≤ r + 1. In this situation, we get (2m − t) − (2m − r − 1) = r − t + 1 null columns at the right end of the matrix H_{t,2m−t}[r], blurring the original requirement for the nature of the degeneration. Yet, removing these additional null columns yields a matrix of the shape H_{t,2m−r−1}[t − 1], hence I_t(H_{t,2m−r−1}[t − 1]) = I_t(H_m[r]). Applying [11, Proposition 4.3] to the generic Hankel matrix H of size t × (2m − r − 1) over R′ = k[x_1, . . . , x_{2m−r+t−2}] yields that R′/I_t(H) has dimension 2(t − 1). Again, {x_{2m−r}, . . . , x_{2m−r+t−2}} is a subset of the system of parameters determined in Lemma 2.3, hence it is a regular sequence over A = R′/I_t(H). Therefore, ¯R/I_t(H_m[r]) is Cohen–Macaulay. Doing the arithmetic once more, one gets codim I_t(H_m[r]) = 2m − t − r.

(ii) Clearly, I_1(H_m[r]) is prime. Now, suppose that 2 ≤ t ≤ r + 1. If t = r + 1, the t-minor on the bottom-right corner of the matrix is, up to sign, a product of variables, hence the ideal of t-minors is not prime. Clearly, this implies that no ideal of t-minors is prime either, for 2 ≤ t ≤ r + 1. For the reverse implication we proceed as follows. Note that the hypothesis puts us in the first case of (i), in which the ideal I_t(H_m[r]) coincides with the ideal of maximal minors of the matrix H_{t,2m−t}[r] and has codimension 2(m − t) + 1. Since the generic Hankel matrix is 1-generic (see [11, Proposition 4.3]), then by [10, Theorem 1 (ii)] the ideal (x_{i_1}, . . . , x_{i_s}, I_t(H_{t,2m−t})) is prime provided s ≤ t − 2, where H_{t,2m−t} denotes the corresponding generic Hankel matrix. But since r ≤ t − 2, we are done.

Remark 2.5.
Note that a particular feature of the above result and its method of proof is that, for any given 1 ≤ t ≤ m and any 0 ≤ r ≤ m − 2, the dimension of the k-linear span of the t-minors of H_m[r] is the same as that of H_m. For convenience, we isolate two special cases of the above proposition.

Corollary 2.6.
Assume that 1 ≤ r ≤ m − 2. One has:

(i) If m ≥ 3, I_{m−1}(H_m[r]) has codimension 3 and is a prime ideal if and only if r ≤ m − 3.

(ii) If m ≥ 4, I_{m−2}(H_m[r]) has codimension 5 and is a prime ideal if and only if r ≤ m − 4.

Remark 2.7.
It is interesting to note that, even for r ≤ m − 3, the ring R/I_{m−1}(H_m[r]) is not always normal, a property that may require m >> r. We add some additional considerations about the ideal of submaximal minors. The next result is a non-generic version of [2, Theorem 10.16 (b)], with the same proof.

Lemma 2.8.
Let M be a square matrix whose entries are either variables over a field k or zeros, such that det(M) ≠ 0. Let R denote the polynomial ring over k on the nonzero entries of M and let S ⊂ R denote the k-subalgebra generated by the submaximal minors. Then the extension S ⊂ R is algebraic at the level of the respective fields of fractions.

A consequence is the following:
Proposition 2.9.
With the notation of
Proposition 2.4, the ideal I_{m−1}(H_m[r]) ⊂ ¯R has maximal analytic spread.

Proof.
The analytic spread of an ideal is also the dimension of the special fiber algebra of its Rees algebra. In this case, the ideal is a homogeneous ideal of the polynomial ring R generated in one single degree. Thus, this algebra is isomorphic to the k-subalgebra S ⊂ R generated by the minors. Now apply the previous lemma.

A tall order in classical invariant theory is the structure of the ideal of the defining polynomial relations of the submaximal minors of a matrix of linear entries. It may be interesting to compare the generic, generic symmetric and Hankel situations as regards invariants of the coordinate section degeneration in the square case. Thus, let the size be m × m with 0 ≤ r ≤ m − 2. Let PI and CM be short for polar image and Cohen–Macaulay, respectively. The overall picture looks as follows:

            | rank of Hessian       | polar image (PI)  | primality of I_{m−1}      | fiber of I_{m−1}
Generic     | m^2 − r(r + 1)        | Gorenstein ladder | if m ≥ (r+1 choose 2) + 3 | cone over PI
Symmetric   | (m+1 choose 2) − o(r) | CM ladder         | ?                         | cone over PI
Hankel      | 2m − r − 1            | ?                 | m ≥ r + 3                 | ?

For the first two rows of the above table see [7] and [8], respectively. In regard to the question mark at the end of the third row, one can reduce to the case where I_{m−1} is replaced by the maximal minors of an (m − 1) × (m + 1) degenerate Hankel matrix as stated in Lemma 1.4 (b) and dealt with in the previous subsection. In the case where r = 0 (generic Hankel), one knows that the defining ideal of the fiber is generated by Plücker relations ([5, Theorem 4.7]) – more precisely, the fiber is isomorphic as a k-algebra to the homogeneous coordinate ring of the Grassmannian G(m − 1, m + 1), as is proved in Proposition 4.1. However, already for m = 4, r = 1, there is a minimal defining relation of degree 3 besides the quadratic Plücker relations. This state of affairs may lead to the following:

Question 2.10.
Let 1 ≤ r ≤ m − 2.

• Is the special fiber of I_{m−1}(H_m[r]) Cohen–Macaulay?

• Is the Rees algebra of I_{m−1}(H_m[r]) Cohen–Macaulay and of fiber type?

• Is the defining ideal of the special fiber of I_{m−1}(H_m[r]) minimally generated by quadrics and cubics?

• Are the cubic ones merely forced by the degeneration assumption or do they have a deeper GL-representation meaning as described in [3]?

These questions are specially intriguing because they actually refer to maximal minors. For low values of m, a computer verification has been carried out for the first three questions, the case r = m − 2 being curiously special.

3 The Hessian and the Gradient ideal
We emphasize that throughout the ground field k has characteristic zero (or sufficiently large as compared to the size m of the matrix). The gradient ideal J = J(f) ⊂ R of f := det H_m[r] is the ideal generated by the partial derivatives of f. The Jacobian matrix of the partial derivatives is the so-called Hessian matrix of f, and its determinant is called the Hessian of f. The basic result is

Theorem 3.1.
Let f = det H_m[r]. For 0 ≤ r ≤ m − 2, the Hessian h(f) does not vanish.

Due to its length, the proof is given in the Appendix. An argument in the spirit of Corollary 4.7, obtained in the fully generic case, is more than welcome, but so far we have been unable to hit it.
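The smallest degenerate instance can be checked directly (our own sketch, using sympy): for m = 3, r = 1 the matrix H_3[1] is already sub-Hankel, and the Hessian of its determinant is a nonzero pure power of x_4 = x_{m+1}:

```python
# Theorem 3.1 for the smallest degenerate case: the Hessian of f = det H_3[1]
# does not vanish (here m = 3, r = 1, ground ring k[x1, ..., x4]).
import sympy as sp

xs = sp.symbols('x1:5')
x1, x2, x3, x4 = xs
f = sp.Matrix([[x1, x2, x3], [x2, x3, x4], [x3, x4, 0]]).det()
H = sp.hessian(f, xs)              # 4 x 4 matrix of second partials
h = sp.expand(H.det())
print(h)                           # 16*x4**4, a nonzero power of x_{m+1}
```

Note that for m = 3 the only value in the allowed range is r = 1 = m − 2, so this instance is simultaneously the sub-Hankel case; the Hessian being a scalar times x_{m+1}^{(m+1)(m−2)} = x_4^4 is the shape given by [6].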
It is a classical question as to when a form f is a factor of its Hessian h(f) with multiplicity ≥ 1. If this takes place, then f is said in addition to have the expected multiplicity (according to Segre) if its multiplicity as a factor of h(f) is codim(V(f)*) − 1, where V(f)* denotes the dual variety to the hypersurface V(f) (see [6]). In this part we elaborate on the case of f := det H_m[r]. The following is a first step regarding the Segre expected multiplicity property.

Proposition 3.2. Let 1 ≤ r ≤ m − 2, m ≥ 3, and f = det H_m[r]. Then dim V(f)* is at least m − 1.

Proof. A formula due to Segre, as transcribed in [20, Lemma 7.2.7], says that

dim(V(f)*) = rank H(f) (mod f) − 2,

where H(f) denotes the Hessian matrix of f. It will then suffice to show that H(f) has rank at least m + 1 modulo f (note, as a slight control, that m + 1 ≤ 2m − r − 1 since r ≤ m − 2). Consider the sub-Hankel matrix H_m[m − 2] obtained by further degenerating, i.e., setting x_{m+2} = · · · = x_{2m−r−1} = 0. The Hessian matrix of det H_m[m − 2] can be viewed as the (m + 1) × (m + 1) submatrix Θ of H(f) of the first m + 1 rows and columns modulo (x_{m+2}, . . . , x_{2m−r−1}). By [6, Theorem 4.4(iii)], det Θ modulo (x_{m+2}, . . . , x_{2m−r−1}) is a nonzero scalar multiple of x_{m+1}^{(m+1)(m−2)}. Thus, det Θ does not vanish. In addition, det Θ does not vanish modulo f because f does not have a pure term in x_{m+1}. Therefore, H(f) has rank at least m + 1 modulo f.

The context immediately prompts us to a few essential questions, which we choose to state as:

Conjecture 3.3.
Let 1 ≤ r ≤ m − 2, m ≥ 3, and f = det H_m[r]. Then:

(i) dim V(f)* = m − 1, and hence the expected multiplicity of f as a factor of its Hessian determinant is (2m − r − 2) − (m − 1) − 1 = m − r − 2.

(ii) f is a factor of its Hessian determinant with the expected multiplicity.

(iii) Let D denote the defining ideal of V(f)* in its natural embedding. Then the initial degree of D is m and the subideal (D_m) generated in the initial degree has codimension m − r − 1. (In addition, if r = m − 3 then (D_m) is a linearly presented codimension 2 perfect ideal.)

(iv) V(f)* is arithmetically Cohen–Macaulay if and only if r = m − 2, in which case V(f) is self-dual up to a coordinate change.

Since a conjecture ought to be grounded on reasons other than mere computer experimentation, we give some elements for potential proofs:

• The conjectured statement in (i) accommodates the generic case as well as the sub-Hankel degeneration (r = m − 2) – for the latter, f is not a factor of its Hessian determinant (cf. [6, Theorem 4.4 (iii)]).

• The value of the dimension in item (i) follows from Proposition 3.2 and the conjectured codimension in item (iii). Note that (iii) tells, in particular, that the Hankel degenerations of the sort considered here have geometric behavior totally distinct from the fully generic and the symmetric counterparts – for the latter the dual variety is defined by quadrics and often by ladder 2-minors (see [7] and [8]).

• As for (iv), one has an affirmative answer in one direction, as follows. Thus, suppose that r = m − 2. We argue that V(f)* is self-dual up to coefficients (in particular, V(f)* is arithmetically Cohen–Macaulay). Set ¯R := R/(f) and ¯J := (J, f)/(f), where J is the gradient ideal of f. By [6, Lemma 4.2] and Euler's formula the syzygy matrix of ¯J contains an (m + 1) × (m + 1) submatrix of the following shape (here and below, ∗ and • denote unspecified nonzero coefficients):

φ := ( mx_1       0           x_2      x_3    . . .  x_{m−1}  x_m
       (m−1)x_2   −x_2        ∗x_3     ∗x_4   . . .  ∗x_m     x_{m+1}
       (m−2)x_3   −2x_3       ∗x_4     ∗x_5   . . .  x_{m+1}  0
       . . .
       x_m        −(m−1)x_m   x_{m+1}  0      . . .  0        0
       0          −mx_{m+1}   0        0      . . .  0        0 )    (mod f).

Since φ is a submatrix of the syzygy matrix, its rank is at most m. Expanding the determinant of φ by Laplace along the last row, one gets x_{m+1} det ẽφ ≡ 0 (mod f), up to a nonzero scalar, where ẽφ is obtained from φ by omitting the last row and the second column:

ẽφ = ( mx_1       x_2      x_3   x_4  . . .  x_m
       (m−1)x_2   ∗x_3     ∗x_4  ∗x_5 . . .  x_{m+1}
       (m−2)x_3   ∗x_4     ∗x_5  ∗x_6 . . .  0
       . . .
       x_m        x_{m+1}  0     0    . . .  0 ).

Since x_{m+1} ∉ (f) and ¯R is a domain, we have det ẽφ ∈ (f). Note that det ẽφ ≠ 0. In particular, for reasons of degree, det ẽφ is a nonzero scalar multiple of f.

Let B be the unique (m + 1) × (m + 1) linear matrix with entries in k[t_1, . . . , t_{m+1}] = k[t] such that (t) · φ = (x) · B, where (t) and (x) denote the row vectors of the respective variables. In particular, B has the following shape:

B = ( mt_1       0           0          0         . . .  0
      (m−1)t_2   −t_2        t_1        0         . . .  0
      (m−2)t_3   −2t_3       •t_2       t_1       . . .  0
      . . .
      t_m        −(m−1)t_m   •t_{m−1}   •t_{m−2}  . . .  t_1
      0          −mt_{m+1}   t_m        t_{m−1}   . . .  t_2 ).

Note that B^t is a submatrix of the Jacobian dual matrix of ¯J.

Let k[t_1, . . . , t_{m+1}]/P denote the homogeneous coordinate ring of V(f)*. By [6] one knows that the Hessian matrix H(f) has rank m + 1 modulo (f), hence V(f)* is a hypersurface and P is a principal (prime) ideal. Since the rank of the Jacobian dual matrix of ¯J modulo P is at most m, we have det B ∈ P. Expanding det B by Laplace along the first row yields det B = mt_1 det ẽB, where ẽB is obtained from B by omitting the first row and the first column:

ẽB = ( −t_2        t_1        0          . . .  0
       −2t_3       •t_2       t_1        . . .  0
       −3t_4       •t_3       •t_2       . . .  0
       . . .
       −(m−1)t_m   •t_{m−1}   •t_{m−2}   . . .  t_1
       −mt_{m+1}   t_m        t_{m−1}    . . .  t_2 ).

In particular, t_1 det ẽB ≡ 0 (mod P). Since P is prime and t_1 ∉ P, then det ẽB ∈ P. On the other hand, the same sort of argument as in [6, Remark 4.6 (c)] shows that det ẽB is irreducible. Thus, P = (det ẽB). To conclude, note that ẽB can be replaced by the obvious sub-Hankel matrix with nonzero coefficients obtained by exchanging rows equidistant from the extremes.

The codimension of the gradient ideal J of det(H_m[r]) comes in as one of the basic ingredients in the paper. The result below is a neat consequence of the nature of J in its relation to the cofactors of H_m[r] and of the elementary cofactor formulas.

Proposition 3.4. (char(k) = 0) Let J ⊂ R = k[x_1, . . . , x_{2m−r−1}] denote the gradient ideal of det(H_m[r]), where m − r ≥ 2. Then

codim(J) = 2 if m − r = 2, and codim(J) = 3 otherwise.

Proof. By Proposition 1.3, J is contained in I_{m−1}(H_m[r]), for every degeneration step. The latter has codimension 3 by Corollary 2.6. Therefore, J has codimension at most 3. The case where m − r = 2 is easily checked and, in any case, sufficiently studied in [6] and [18]. Thus, we assume that m − r ≥ 3 and induct on m − r. When m − r = 3, one proceeds as follows. Let Δ_{i,j} denote the (signed) cofactor of the (j, i)-entry of H_m[r]. Given a prime ideal P containing J, we will prove that P ⊃ I_{m−1}(H_m[r]) or P ⊃ (x_m, x_{m+1}, x_{m+2}), which implies that any minimal prime of J has codimension at least 3. For this, we divide the proof in two cases.
Case 1: x_{m+2} ∈ P.

Note that, up to sign, f_1 = x_{m+1}^{m−1} + H with H ∈ (x_{m+2}). Thus, since (x_{m+2}, f_1) ⊂ P, then x_{m+1} ∈ P. By a similar token, up to sign, f_m = m x_m^{m−1} + G with G ∈ (x_{m+1}, x_{m+2}). This way, since (x_{m+1}, x_{m+2}, f_m) ⊂ P, we see that x_m ∈ P. Therefore, (x_m, x_{m+1}, x_{m+2}) ⊂ P.

Case 2: x_{m+2} ∉ P.

We claim that, for every 2 ≤ k ≤ m, the entries of the matrix (Δ_{i,j})_{1 ≤ i,j ≤ k} belong to P. Induct on k.

Let k = 2. As a consequence of the classical formula for the matrix of cofactors one has the following relation:

x_m Δ_{1,1} + x_{m+1} Δ_{2,1} + x_{m+2} Δ_{3,1} = 0.    (2)

Since f_1 = Δ_{1,1} ∈ P and Δ_{2,1} = (1/2) f_2 ∈ P, it follows from (2) that Δ_{3,1} ∈ P. Thus, since f_3 = Δ_{2,2} + 2Δ_{3,1}, we have Δ_{2,2} ∈ P. Hence, for k = 2 the statement is true.

Now, assuming the statement for some k ≥ 2, we show it holds for k + 1. Consider the matrix (Δ_{i,j})_{1 ≤ i,j ≤ k+1}. Again the cofactor formula yields the following relations:

x_{m−k+2} Δ_{1,j} + ··· + x_{m+1} Δ_{k,j} + x_{m+2} Δ_{k+1,j} = 0, for all 1 ≤ j ≤ k + 1.    (3)

These equalities, along with the inductive hypothesis, yield

Δ_{j,k+1}, Δ_{k+1,j} ∈ P    (4)

for every 1 ≤ j ≤ k. Finally, for j = k + 1 one has

x_{m−k+2} Δ_{1,k+1} + ··· + x_{m+1} Δ_{k,k+1} + x_{m+2} Δ_{k+1,k+1} = 0.

From this and (4), it follows that Δ_{k+1,k+1} ∈ P. Therefore, taking k = m we have I_{m−1}(H_m[r]) ⊂ P, as was to be shown. Thus, we are through with the case where m − r = 3.

We now induct on m − r. For the inductive step, note that the ascending induction step from m − r to m − r + 1 = m − (r −
1) corresponds to a descending induction step from r to r − 1. Thus, we are given the matrix H_m[r − 1], with r − 1 ≤ m − 4, and the corresponding gradient ideal J ⊂ R = k[x_1, …, x_{2m−r}], and we assume by induction that the gradient ideal J′ ⊂ R′ = k[x_1, …, x_{2m−r−1}] of det(H_m[r]) has codimension 3. Since the latter matrix is a degeneration of the former obtained by setting x_{2m−r} ↦ 0, clearly (J′, x_{2m−r}) ⊂ (J, x_{2m−r}) as ideals in the bigger ring R; hence the codimension of J is at least 3, since (J′, x_{2m−r}) has codimension 4.

Remark 3.5. (1) The proposition will be a consequence of the more encompassing Theorem 3.9 (a) below. Indeed, the proofs bear some similarity, but are mutually independent.

(2) It has been seen in the proof of Proposition 2.4 (i) that, for arbitrary r ≤ m −
3, theentry subset { x m − r , x m − ( r − , . . . , x m − } of the fully generic m × m Hankel matrix H m isa regular sequence modulo the submaximal minors of H m . It would seem that this sequenceis equally regular modulo the gradient ideal of det( H m ) (see Conjecture 3.11 below). Thus,the specialization still has codimension 3 in the entry ring of H m [ r ]. Unfortunately, thespecialized ideal is larger than the gradient ideal of det( H m [ r ]), hence one resorts to theinductive argument above to bypass this apparent obstruction. In this part we will suppose throughout that r ≤ m −
3. The case where r = m − 2 is excluded. Fix an index 1 ≤ j ≤ m − 1. Consider block partitions of H_m[r] and its cofactor matrix, as follows:

H_m[r] = [ U_{m−j} ; D_j ],    (5)

where U_{m−j} consists of the first m − j rows and D_j of the last j rows;

cof(H_m[r]) = [ A_{m−j}  B_{m−j} ; B′_j  C_j ],  with B′_j = B^t_{m−j},    (6)

where A_{m−j} is a square (m − j)-rowed matrix. The notation is such that the subscript of any of the blocks denotes its number of rows. Block multiplication and the cofactor formula yield

f I_m = cof(H_m[r]) H_m[r] = [ A_{m−j} U_{m−j} + B_{m−j} D_j ; B^t_{m−j} U_{m−j} + C_j D_j ],    (7)

where f = det H_m[r]. Since char(k) = 0, Euler's identity implies that f ∈ J = J_m[r] ⊂ R = k[x_1, …, x_{2m−r−1}], and hence the entries of the rightmost matrix in (7) belong to J as well.

Lemma 3.6. For any ideal I ⊂ R containing J, the following holds:

I_1(A_{m−j} U_{m−j}) ⊂ I ⇒ I_1(B_{m−j}) I_j(D_j) ⊂ I.

By the same token,

I_1(B^t_{m−j} U_{m−j}) ⊂ I ⇒ I_1(C_j) I_j(D_j) ⊂ I.

Proof.
It suffices to consider the first implication. As has been noted above, one has I_1(A_{m−j} U_{m−j} + B_{m−j} D_j) ⊂ J, hence I_1(A_{m−j} U_{m−j} + B_{m−j} D_j) ⊂ I and, since I_1(A_{m−j} U_{m−j}) ⊂ I by hypothesis, then I_1(B_{m−j} D_j) ⊂ I.

Note that I_j(D_j) is the ideal of maximal minors of D_j. Thus, letting M denote an arbitrary j × j submatrix of D_j, one has I_1(B_{m−j} M) ⊂ I. Thus, I_1(B_{m−j} M cof(M)) = I_1(det(M) B_{m−j}) ⊂ I. Therefore, I_j(D_j) I_1(B_{m−j}) ⊂ I, as required.

Note that Lemma 3.6 still holds by replacing the four block matrices of cof(H_m[r]) by their respective ith rows, for an arbitrary i. Given a matrix M and an index i, L_i(M) will stand for its ith row.

For the next lemma, write [L_1(A_2) L_1(B_2)] for the first row of the matrix cof(H_m[r]) as decomposed in (6) with j = m − 2. Recall a previous convention by which Δ_{i,j} denotes the (signed) cofactor of the (j, i)-entry of H_m[r].

Lemma 3.7. (j = m − 2) I_{m−2}(D_{m−2}) · I_1([L_1(A_2) L_1(B_2)]) ⊂ J.

Proof.
Clearly, L_1(A_2) = [Δ_{1,1} Δ_{1,2}], hence its entries belong to J since ∂f/∂x_1 = Δ_{1,1} and ∂f/∂x_2 = 2Δ_{1,2}. Clearly, then I_1(L_1(A_2) U_2) ⊂ J. By Lemma 3.6 one deduces that I_{m−2}(D_{m−2}) · I_1(L_1(B_2)) ⊂ J.

So far, j was fixed. Now we make it vary in a certain range in order to decide when a given prime ideal containing J contains or not the ideal I_{m−1}(H_m[r]). For example, the prime ideal Q = (x_m, x_{m+1}, …, x_{2m−r−1}) contains J but not I_{m−1}(H_m[r]), while adding the entry x_{m−1} to Q gives a prime ideal containing the submaximal minors. Precisely, one has:

Proposition 3.8.
Suppose that r ≤ m − 3. Let P ⊃ J be a prime ideal. If there exists an index j in the range r + 2 ≤ j ≤ m − 2 such that I_j(D_j) ⊂ P and I_{j−1}(D_{j−1}) ⊄ P, then I_{m−1}(H_m[r]) ⊂ P.

Proof.
Since I_j(D_j) ⊂ P, (7) implies that the entries of the block [A_{m−j} B_{m−j}] (resp., [A_{m−j} B_{m−j}]^t) belong to P. On the other hand, one has:

Δ_{m−j+1,m−j+1} = ∂f/∂x_{2(m−j+1)−1} − 2 ∑_{i = max{1, m−2j+2}}^{m−j} Δ_{2(m−j+1)−i, i} ∈ P.

Therefore, the row entries of L_{m−j+1}(A_{m−j+1}) = [Δ_{m−j+1,1} … Δ_{m−j+1,m−j+1}] belong to P. It follows that I_1(L_{m−j+1}(A_{m−j+1}) · U_{m−j+1}) ⊂ P. Then the upshot from Lemma 3.6 is that I_{j−1}(D_{j−1}) · I_1(L_{m−j+1}(B_{m−j+1})) ⊂ P. But since I_{j−1}(D_{j−1}) ⊄ P, then

I_1(L_{m−j+1}(B_{m−j+1})) = (Δ_{m−j+1,m−j+2}, …, Δ_{m−j+1,m}) ⊂ P.

Consequently,

I_1([A_{m−j+1} B_{m−j+1}]) ⊂ P (resp., I_1([A_{m−j+1} B_{m−j+1}]^t) ⊂ P).    (8)

Thus, I_1(B^t_{m−j+1} · U_{m−j+1}) ⊂ P and once again Lemma 3.6 implies that I_{j−1}(D_{j−1}) I_1(C_{j−1}) ⊂ P. It follows that

I_1(C_{j−1}) ⊂ P.    (9)

A moment's reflection will convince us that (8) and (9) imply that I_{m−1}(H_m[r]) ⊂ P, as was to be shown.

Theorem 3.9.
Let J ⊂ R denote the gradient ideal of the determinant of H_m[r], with 1 ≤ r ≤ m − 3, and let Q denote the ideal generated by the m − r nonzero variables of its last column. Then:

(a) The minimal primes of R/J are Q and P := I_{m−1}(H_m[r]).

(b) J is not a reduction of P.

(c) The unmixed and minimal components of J coincide if and only if r = m − 3.

Proof. (a) It suffices to prove that any prime ideal P containing J necessarily contains either Q = (x_m, x_{m+1}, …, x_{2m−r−1}) or I_{m−1}(H_m[r]). Divide the argument in two cases:
Case 1: I_{r+1}(D_{r+1}) ⊂ P.

Since x_{2m−r−1}^{r+1} ∈ I_{r+1}(D_{r+1}), then x_{2m−r−1} ∈ P. More: for any m ≤ u ≤ 2m − r − 2 there is an (r + 1)-minor in I_{r+1}(D_{r+1}) of the form

x_u^{r+1} + x_{u+1} g_{u+1} + ··· + x_{2m−r−1} g_{2m−r−1},

for certain forms g_{u+1}, …, g_{2m−r−1} ∈ R. Decreasing induction on u then wraps up the inclusion (x_m, …, x_{2m−r−1}) ⊂ P.

Case 2: I_{r+1}(D_{r+1}) ⊄ P.

If there exists an index j in the range r + 2 ≤ j ≤ m − 2 such that I_j(D_j) ⊂ P then Proposition 3.8 implies that I_{m−1}(H_m[r]) ⊂ P. Thus, assume that I_j(D_j) ⊄ P for every r + 2 ≤ j ≤ m − 2. In particular, I_{m−2}(D_{m−2}) ⊄ P. By Lemma 3.7, I_{m−2}(D_{m−2}) · I_1([L_1(A_2) L_1(B_2)]) ⊂ P and since I_{m−2}(D_{m−2}) ⊄ P, then

I_1([L_1(A_2) L_1(B_2)]) ⊂ P.    (10)

Thus, Δ_{1,3} ∈ P. But Δ_{2,2} = ∂f/∂x_3 − 2Δ_{1,3}, with ∂f/∂x_3 ∈ J ⊂ P; hence Δ_{2,2} ∈ P. Therefore, the entries of L_2(A_2) = [Δ_{2,1} Δ_{2,2}] belong to P. Then, from Lemma 3.6 one has I_{m−2}(D_{m−2}) · I_1([L_2(A_2) L_2(B_2)]) ⊂ P, hence

I_1([L_2(A_2) L_2(B_2)]) ⊂ P.    (11)

From (10) and (11) one has

I_1([A_2 B_2]) ⊂ P (resp., I_1([A_2 B_2]^t) ⊂ P).    (12)

From these inclusions and Lemma 3.6 it obtains I_{m−2}(D_{m−2}) · I_1(C_{m−2}) ⊂ P, and hence

I_1(C_{m−2}) ⊂ P.    (13)

Clearly, (12) and (13) imply that I_{m−1}(H_m[r]) ⊂ P.

(b) If J is a reduction of P, then at least √J = P, which would contradict the result in (a).

(c) This is an immediate consequence of (a).

Remark 3.10. (i) Computational evidence suggests that the Q-primary component of J is generated by forms of degree r (but it only coincides with the rth power of Q when r = 1).

(ii) The structure of the embedded associated primes of R/J is quite involved. The following two primes seem to be candidates in general: (x_{m−1}, Q) (in codimension m − r + 1) and √(I_{m−2}(H_m[r])) (in codimension 5).

We once more emphasize that the ground characteristic is assumed to be zero. For the generic Hankel matrix H_m it was proved that the linear rank of J is 3 ([17, Theorem 3.3.5]). At the other end of the spectrum, for the degeneration H_m[m −
2] it was proved that the linear rank of J is maximal possible (= m) ([6]) and J is in addition of linear type ([18, Section 4.1, Theorem 4.8]). Though not obvious at all, one expects that the linear rank of J_m[r] for 1 ≤ r ≤ m − 3 stays as small as possible:

Conjecture 3.11. If J = J_m[0] is the gradient ideal of the fully generic m × m Hankel matrix, then the sequence {x_{m+3}, …, x_{2m−1}} is regular modulo J.

Conjecture 3.12.
Let 1 ≤ r ≤ m − 3. Then the linear rank of J_m[r] is 2.
Conjecture 3.11 implies Conjecture 3.12.
Proof.
Set R := k [ x , . . . , x m − ], ¯ R := R/ ( x m − r , . . . , x m − ) , ¯ J := J ¯ R ⊂ ¯ R, so¯ J = ( J, x m − r , . . . , x m − ) / ( x m − r , . . . , x m − ) ⊂ k [ x , . . . , x m − r − ] . One asserts that the module of linear syzygies of ¯ J is a free ¯ R -module of rank 3.To see this, note that since r ≤ m − m − r ≥ m + 3, Conjecture 3.11 impliesthat { x m − r , . . . , x m − } is a regular sequence modulo J .The minimal graded resolution of R/J specializes to that of ¯ R/ ¯ J . In particular, themodule of linear syzygies of ¯ J has the same structure as that of J . But, in the fully genericcase, this module has been shown to be free of rank 3 ([17]).18ow, the generators of J m [ r ] are part of a minimal set of generators of ¯ J in the naturalorder of the partial derivatives, as follows from Proposition 1.3. In particular, any linearsyzygy of J m [ r ] gives one of ¯ J by filling down enough zeros. This implies that the moduleof syzygies of J m [ r ] is a submodule of that of ¯ J and is free of rank ≤ J has the matrix form β x γ x α x β x γ x ... ... ...α m − x m − β m − x m − γ m − x m − α m − x m − γ m − x m − (14)with α i , β j , γ l elements of k , where α i = 0, for 2 ≤ i ≤ m −
1. Note that there is no k -elementary operation that kills the last coordinate of the first syzygy above. Since themodule specializes to the module of linear syzygies of ¯ J , the latter is obtained by settingto zero the entries x m − r , . . . , x m − . Thus, there is a linear syzygy of ¯ J whose (2 m − r )thcoordinate α m − r x m − r − does not vanish. Since f m [ r ] has only 2 m − r − J m [ r ]. Therefore, the module of syzygies of J m [ r ] is freeof rank at most 2.But, by the same token, the β and γ version of the syzygies of ¯ J are syzygies of J m [ r ].We conclude that the module of syzygies of J m [ r ] is free of rank exactly 2; in particular,it has linear rank 2. Remark 3.13. (1) Conjecture 3.12 fails in positive characteristic. For example, in char-acteristic 3 the linear rank of J [1] is 3. The reason might be the juggling with thenonvanishing syzygies coordinates of (14) made possible in null characteristic.(2) If r = m − x m +2 is notregular on R/ ( J, x m − , . . . , x m +3 ), and neither on J m [ m −
2] for that matter. And, in fact, we know that J_m[m −
2] has a whole batch of new linear syzygies forcing maximal linear rank.

(3) What are the odds against the first of the above conjectures? First, since one is in a homogeneous environment, one can just as well consider the sequence {x_{2m−1}, …, x_{m+3}} in reverse order. At each step this way one is actually asking about the associated primes of J̄, not those of J_m[r] exactly. Thus, it is urgent to compare both, a task that can be carried under severe hypotheses. In any case, it would seem that the conjecture could use information about the associated primes of J_m[r], a problem that has been poorly accessed so far (Remark 3.10).

Question 3.14.
Assume that 1 ≤ r ≤ m − 2. Is J of linear type?

One can check this up to small values of m by computer calculation. So far, the only proved case is r = m − 2. At this point it is not even clear that J satisfies property (F_1), a front-runner feature of an ideal of linear type. In the generic case, J is at least a complete intersection locally at its unique minimal prime ([18, Proof of Theorem 3.14]). Proving affirmatively this question would give a short argument for both Theorem 4.8 (c) and Theorem 4.10 below.

4 Special results in the generic case
We now focus on the square generic Hankel matrix H_m of order m and on its determinant f. Let R = k[x] = k[x_1, …, x_{2m−1}] stand for the corresponding ground polynomial ring on the entries of H_m. Let f stand for the set of partial derivatives of f.

Recall that H_{r,s} denotes the r × s generic Hankel matrix. When r = s =: m, one has I_{m−1}(H_{m,m}) = I_{m−1}(H_{(m−1)×(m+1)}). The advantage of this transfer is that not only one deals with the more pliable maximal minors (of the simplest case of a non-square generic Hankel matrix), but also the minimal number of generators becomes the predicted one, \binom{m+1}{2}.

For the current purpose, the maximal minor with columns i_1 < ··· < i_{m−1} will be denoted [i_1, …, i_{m−1}], following a pretty much established notation. By and large, one refers to some of the details developed in [18, Section 3.3]. Pretty much as in the case of a generic matrix [2, Chapter 4], the set of maximal minors of the matrix H_{(m−1)×(m+1)} is partially ordered by setting

[i_1, …, i_{m−1}] ≤ [j_1, …, j_{m−1}] ⇔ i_1 ≤ j_1, …, i_{m−1} ≤ j_{m−1}.

As an illustration, in the case of m = 5, the poset is, level by level:

[1234]
[1235]
[1236] [1245]
[1246] [1345]
[1256] [1346] [2345]
[1356] [2346]
[1456] [2356]
[2456]
[3456]

where a link to the successive upper level denotes ≤.

This poset has some remarkable properties reflected in the above diagram:

(1) Let \binom{m}{2} ≤ l ≤ \binom{m+2}{2} − 3. The lth level of the poset is the subset of minors [i_1, …, i_{m−1}] such that i_1 + ··· + i_{m−1} = l (i.e., the minors indexed by the ordered partitions of l).

(2) A maximal minor in the lth level admits at most two comparable maximal minors on the (l + 1)th level. To see this, note that a maximal minor [i_1, …, i_{m−1}] is alternatively designated by its two complementary columns in the matrix. Thus, one can write

A = [i_1, …, i_{m−1}] = [1, …, i − 1, î, …, j − 1, ĵ, …, m + 1],

where the hat denotes deletion. Therefore, in this notation there are at most two comparable minors with A in the successive upper level, namely

[1, …, \widehat{i−1}, i, …, j − 1, ĵ, …, m + 1] and [1, …, i − 1, î, …, \widehat{j−1}, j, …, m + 1].

Thus far, the catalogue of the properties depends only on the size of the matrix, regardless as to what nature of specialization of the generic matrix one is considering. For convenience, one sets y_{t,u} = x_j, where j = t + u − 1, 1 ≤ j ≤ 2m − 1. Thus, H_{m,m} = (y_{t,u}). Subtler properties of the maximal minors of H_{m−1,m+1} are as follows:

(3) (Level elements) The cofactors of y_{t,u} and y_{u,t} coincide, and the maximal minors on the lth level of the poset are the distinct cofactors along one and the same anti-diagonal {y_{t,u} | t + u constant} of H_{m,m}. In particular, there is a “central” level with the largest possible number of elements.

(4) (Partial derivatives) Let f_j, 1 ≤ j ≤ 2m − 1, denote the partial derivative of det(H_{m,m}) with respect to x_j. Then f_j is a k-linear combination of the maximal minors on the corresponding level of the poset.

(5) (Defining relations of the maximal minors) In the generic case it is a classical result that the so-called Plücker relations generate the defining ideal of the maximal minors. The following result is slightly surprising:
Proposition 4.1.
Let k be a field of characteristic zero and let G = (y_{u,v}), 1 ≤ u ≤ m − 1, 1 ≤ v ≤ m + 1, and H = (x_{i+j−1}), 1 ≤ i ≤ m − 1, 1 ≤ j ≤ m + 1, denote, respectively, the (m − 1) × (m + 1) generic matrix over k and the generic Hankel matrix of the same size over k. Then the respective special fiber algebras of the ideals I_{m−1}(G) and I_{m−1}(H) of maximal minors are isomorphic as graded k-algebras.

Proof.
Consider the natural k-algebra specialization map

k[y_{u,v}] ↠ k[x_1, …, x_{2m−1}], y_{u,v} ↦ x_{u+v−1}, 2 ≤ u + v ≤ 2m,    (15)

of the corresponding ground polynomial rings. Since each of the ideals in question is equigenerated, its fiber algebra is graded k-isomorphic to the respective k-subalgebra of the polynomial ring generated by the maximal minors, which one denotes here by k[∆(G)] and k[∆(H)], respectively. Clearly, the map (15) restricts to a k-algebra surjection

k[∆(G)] ↠ k[∆(H)].    (16)

But both algebras have Krull dimension 2m − 1: k[∆(G)] is the homogeneous coordinate ring of the cone over the Grassmannian G(m − 1, m + 1), where the latter has the well-known dimension 2(m − 1) = 2m − 2. As to the equality dim k[∆(H)] = 2m − 1, one first identifies I_{m−1}(H) with the ideal of submaximal minors of the associated m × m Hankel matrix (see Proposition 1.1). After this is done the dimension comes out immediately off Proposition 2.9 or, alternatively, by the non-vanishing of the Hessian determinant of the square matrix as proved in [17, Proposition 3.3.11], or yet by Corollary 4.7, whose proof is independent.

To conclude, the kernel of the surjection (16) is a prime ideal of height zero in a domain, hence must be the zero ideal.
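For m = 3 the dimension count in the proof can be spot-checked by machine: the subalgebras generated by the maximal minors of the generic and of the Hankel 2 × 4 matrix both have Krull dimension 5 = 2m − 1, computed as the generic rank of the Jacobian of the minor map. A minimal sympy sketch (the variable names and the helper function are mine, not the paper's):

```python
import sympy as sp
from itertools import combinations

# m = 3: compare the generic 2 x 4 matrix G with the Hankel one H
y = sp.symbols('y1:9')
x = sp.symbols('x1:6')
G = sp.Matrix(2, 4, lambda i, j: y[4 * i + j])
H = sp.Matrix(2, 4, lambda i, j: x[i + j])

def minor_map_rank(M, variables):
    # generic rank of the Jacobian of the six maximal minors, i.e. the
    # Krull dimension of the k-subalgebra they generate (char 0)
    mins = [M[:, [i, j]].det() for i, j in combinations(range(4), 2)]
    return sp.Matrix([mins]).jacobian(list(variables)).rank()

print(minor_map_rank(G, y), minor_map_rank(H, x))  # 5 5, i.e. 2m - 1 twice
```

That the two ranks agree is exactly the dimension equality used in the proof above.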
Remark 4.2.
For arbitrary size this turns out to be false, simply by having differentdimensions, but perhaps more crucial as regards the present material, since minimalquadratic relations other than Pl¨ucker relations come in the picture as is explainedin [4] and [5].(6) (Shape of the Pl¨ucker relation) The Pl¨ucker relation in the case of a generic matrixof arbitrary size p × q has a somewhat cumbersome expression, with many indicesfloating around (see [2, Lemma 4.4]). Luckily, in the case of present interest, where p = m − , q = m + 1, assuming that the characteristic of the ground field is zero, theexpression becomes simpler and these simpler expressions can be shown to generatethe ideal of relations. This is based on the elementary observation to the effect thattwo maximal minors have in common at least m − { i , . . . , i m − } ∩ ( { , . . . , m + 1 } \ { k , . . . , k m − } )has at most 2 elements. Consequently, the typical Pl¨ucker relation has at most3 terms, each a product of two minors, while the relevant ones correspond to thecase where the above intersection has exactly 2 elements, while the cases of 0 and 1element yield, respectively, an empty equation or to an identity. Thus, the shape canbe described up to certain signs by assuming that, e.g., k = i , . . . , k m − = i m − and the above intersection is the set { i m − , i m − } , which affords the relation [ i , . . . , i m − ] · [ k , . . . , k m − ] = [ i , . . . , i m − , i m − , i m − ] · [ i , . . . , i m − , k m − , k m − ]= [ i , . . . , i m − , [ i m − , i m − , k m − ] · [ i m − , i , . . . , i m − , [ k m − , k m − ]+ [ i , . . . , i m − , i m − , [ i m − , k m − ] · [ i m − , i , . . . , i m − , [ k m − , k m − ] . Up to signs and reordering of indices, this is the shape of a Pl¨ucker relation thatintervenes in the proof of Theorem 4.4.
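Properties (4) and (6) above can be verified directly for m = 3. In the sketch below the explicit integer coefficients (e.g. f_3 = [14] + 3[23]) were obtained by inspection of this small case and are not taken from the text:

```python
import sympy as sp
from itertools import combinations

x = sp.symbols('x1:6')
f = sp.Matrix(3, 3, lambda i, j: x[i + j]).det()        # det(H_{3,3})
H24 = sp.Matrix(2, 4, lambda i, j: x[i + j])            # H_{2,4}

# maximal minors [i, j] of H_{2,4}, keyed by their (1-based) column pairs
mnr = {(i + 1, j + 1): H24[:, [i, j]].det()
       for i, j in combinations(range(4), 2)}

# property (4): every partial derivative of f is a Z-linear combination of
# the minors on a single level of the poset
checks = [
    sp.diff(f, x[0]) - mnr[(3, 4)],                      # f_1 = [34]
    sp.diff(f, x[1]) + 2 * mnr[(2, 4)],                  # f_2 = -2 [24]
    sp.diff(f, x[2]) - (mnr[(1, 4)] + 3 * mnr[(2, 3)]),  # f_3 = [14] + 3 [23]
    sp.diff(f, x[3]) + 2 * mnr[(1, 3)],                  # f_4 = -2 [13]
    sp.diff(f, x[4]) - mnr[(1, 2)],                      # f_5 = [12]
]
assert all(sp.expand(c) == 0 for c in checks)

# property (6): the single 3-term Pluecker relation in this size
pl = mnr[(1, 2)] * mnr[(3, 4)] - mnr[(1, 3)] * mnr[(2, 4)] + mnr[(1, 4)] * mnr[(2, 3)]
assert sp.expand(pl) == 0
```

Note how f_1 and f_5 are single minors while f_3 mixes the two minors of the central level, matching the level description in (3) and (4).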
Remark 4.3.
As will be noted later, Plücker relations are not the right tool to deal with the degenerations H[r].

According to common usage, the polar map of f is the rational map P^{2m−2} ⇢ P^{2m−2} defined by the partial derivatives of f. In a more classical terminology, f is a homaloidal polynomial if this map is birational, i.e., a Cremona map.

Dealing with these notions moves the emphasis on to suitable k-subalgebras of R rather than on its ideals. Since k[f] is a subalgebra of R generated in degree m −
1, the polar map is birational if and only if the homogeneous inclusion k[f] ⊂ k[R_{m−1}] induces an isomorphism of the respective fields of fractions.

As a tool, we use yet another subalgebra, namely, let ∆ denote the set of (m − 1)-minors of H_m. By Proposition 1.3, one has a homogeneous inclusion k[f] ⊂ k[∆].

Theorem 4.4. √((f) k[∆]) = (∆) k[∆].

Proof.
It clearly suffices to show that any minor ∆ ∈ ∆ has a power in ( f ) k [ ∆ ]. Thisfact has been essentially obtained in [18, Proposition 3.13] although the statement of theproposition there claimed less than was actually shown in the proof.For convenience and precision we retrace the main parts of the argument in [18, Propo-sition 3.13] by pointing out the crucial re-editing. Thus, one first lists f = { f , . . . , f m − } and expresses each f j as a sum of maximal minors of H m − ,m +1 , as explained in property(4) above.As no harm is done, one keeps denoting any of these maximal minors and their set bythe same symbols ∆ , ∆ , respectively. The argument is then by descending induction on j = 1 , . . . , m − f j has apower in ( f ) k [ ∆ ].For j = 2 m − , m − f m − , f m − are themselves (signed) minorsup to a coefficient. As in the proof of [18, Proposition 3.13] and for later reasons, onedisplays the details of one additional explicit step in the induction – although this step isreally embodied in the general inductive step. Namely, one deals now with the maximalminor ∆ = [1 , . . . , n − , n − , n ], a summand of the partial derivative f m − . For this,consider the following Pl¨ucker relation involving ∆ and ∆ ′ = [1 , . . . , n − , n + 1]:∆∆ ′ = 1 / , . . . , n − , n − , n + 1] f m − − [1 , . . . , n − , n, n + 1] f m − , (17)where it will crucial that at least one element of each right-side factor belong to J .On the other hand, by the above properties (3) and (4) together one knows that f m − is a k -linear (actually Z -linear) combination of the minors ∆ , ∆ ′ .Now, take the obvious relation:∆ − / λ ∆ + ∆ ′ ) + 1 /λ ∆∆ ′ = 0 . (18)where λ is the suitable integer coefficient that appears in the latter Z -linear combination.From this follows immediately that ∆ ∈ ( f ) k [ ∆ ].For the general inductive step the argument is similar, by showing that any ∆ whichappears as a summand of f j , for j ≤ m −
3, satisfies an equation of integral dependence of the form

∆² + a ∆ f_j + b g = 0,    (19)

for suitable a, b ∈ k, where g is a (signed) sum of products of two maximal minors, one of which always appears as a summand of some f_t, with t > j. By the inductive hypothesis the latter minors have a power in (f) k[∆], and therefore so does every such g. Finally, (19) implies that some power of ∆ belongs to (f) k[∆] as well.

Remark 4.5. In [18, Proposition 3.13] the stated result is that some power of ∆ belongs to the ideal J = (f), but the argument clearly shows the stronger fact that this power belongs to (f) k[∆].

Corollary 4.6.
The ring inclusion k [ f ] ⊂ k [ ∆ ] is an integral extension. In particular, dim k [ f ] = dim k [ ∆ ] . Proof.
Fairly generally, let A = k[A_1] ⊂ B = k[B_1] denote a homogeneous inclusion of standard graded algebras over k. Then B_1 ⊂ √(A_1 B) implies that B is finitely generated as an A-module. In fact, letting B_N ⊂ (A_1 B)_N = A_1 B_{N−1} for certain N ≥ 1, then it is clear that B is generated by {B_0, B_1, …, B_{N−1}} as an A-module.

One also retrieves the result of [17, Proposition 3.3.11]:

Corollary 4.7.
The Hessian determinant of f = det(H_m) does not vanish. In particular, the partial derivatives of f are algebraically independent over k.

The following theorem solves affirmatively some of the questions in [18, Conjecture 3.16].
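Before stating it, note that Corollary 4.7 admits a direct machine check for small m; a minimal sympy sketch for m = 3, with the indexing x_1, …, x_5 used above:

```python
import sympy as sp

x = sp.symbols('x1:6')
f = sp.Matrix(3, 3, lambda i, j: x[i + j]).det()   # det of the generic Hankel H_3
hdet = sp.expand(sp.hessian(f, x).det())           # a homogeneous form of degree 5
print(hdet != 0)   # True: the Hessian determinant of det(H_3) does not vanish
```

Since the Hessian is the Jacobian matrix of the partial derivatives, its non-vanishing also confirms their algebraic independence in this case.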
Theorem 4.8.
Let H_m denote the m × m (m ≥ 3) generic Hankel matrix and let k[f] ⊂ k[∆] be as above. Then:

(a) The fiber algebra k[∆] is a Gorenstein unique factorization domain with regularity m − 2.

(b) The extension k[f] ⊂ k[∆] is a Noether normalization.

(c) The polar map of f = det(H_m) is not homaloidal.

(d) The ideal J := (f) ⊂ R is a reduction of the ideal of minors I_{m−1}(H_m) with reduction number m − 2.

Proof. (a) It is classical that the homogeneous coordinate ring of a Grassmannian is Gorenstein, as it follows from a result of [19], by using that it is Cohen–Macaulay ([16]) and a unique factorization domain ([21]).

As for the regularity, one knows that the degree of the Hilbert series (as a rational function) of a Cohen–Macaulay standard graded k-algebra C coincides with its a-invariant a(C). On the other hand, under the same hypothesis, the Castelnuovo–Mumford regularity of C coincides with the degree of the polynomial in the numerator of its Hilbert series. Since the denominator has degree dim C, it follows that the regularity in the case of the Grassmannian (with the notation of Proposition 4.1) is 2m − 1 + a(k[∆(G)]). On the other hand, a(k[∆(G)]) = −(m + 1) ([1, Corollary 1.4]). Therefore, the regularity of k[∆(G)] is 2m − 1 − (m + 1) = m − 2.

(b) As seen above, the extension k[f] ⊂ k[∆] is integral. By [17, Proposition 3.3.11], the Hessian of f = det(H_m) does not vanish, hence f is an algebraically independent set over k. In other words, k[f] is a polynomial ring over k.

(c) Since k[f] is a polynomial ring it is integrally closed in its field of fractions. If the extension k[f] ⊂ k[∆] is birational then Corollary 4.6 implies that k[f] = k[∆]. But this is impossible since, for m ≥ 3, the k-linear span of ∆ has dimension \binom{m+1}{2}, while the k-linear span of f has dimension at most 2m − 1, and \binom{m+1}{2} > 2m − 1 for m ≥ 3.

(d) The k-algebra k[∆] is graded isomorphic to the special fiber of the ideal I_{m−1}(H_m) = (∆) ⊂ R. Then the result is explained in the proof of [24, Theorem 1.77]. For the reduction number, by [24, Proposition 1.85], since k[∆] is Cohen–Macaulay, the reduction number of J is the degree of the polynomial in the numerator of its Hilbert series. But this is also the regularity by the facts in the proof of (a). Therefore, one gets the required value.

Remark 4.9.
A similar result to Theorem 4.8 (c) has been obtained by Nivaldo Medeirosby a geometric argument.
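The dimension count underlying Theorem 4.8 (c) – \binom{m+1}{2} independent minors against at most 2m − 1 independent partial derivatives – can be spot-checked for m = 3; the helper below is mine:

```python
import sympy as sp
from itertools import combinations

x = sp.symbols('x1:6')
f = sp.Matrix(3, 3, lambda i, j: x[i + j]).det()       # det of H_3
H24 = sp.Matrix(2, 4, lambda i, j: x[i + j])           # companion 2 x 4 Hankel

minors = [H24[:, [i, j]].det() for i, j in combinations(range(4), 2)]
partials = [sp.diff(f, v) for v in x]

# all quadratic monomials occurring in either family
monoms = sorted(set().union(*(sp.Poly(p, *x).as_dict() for p in minors + partials)))

def span_dim(polys):
    # dimension of the k-linear span = rank of the coefficient matrix
    rows = [sp.Poly(p, *x).as_dict() for p in polys]
    return sp.Matrix([[row.get(mo, 0) for mo in monoms] for row in rows]).rank()

print(span_dim(minors), span_dim(partials))  # 6 5
```

So the six minors span a strictly larger space than the five partial derivatives, which is the obstruction to k[f] = k[∆] exploited in the proof.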
Recall, as mentioned before, that f := det H_m[r] is homaloidal for r = m − 2.

Theorem 4.10. (Conjectured)
Assume that 1 ≤ r ≤ m − 2. If f := det H_m[r] is homaloidal then r = m − 2.

The following collects a few elements detected in this regard.
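For orientation, the borderline case r = m − 2 is completely explicit for m = 3. The sketch below assumes my reading of the definition of H_3[1], namely the generic 3 × 3 Hankel matrix with the last variable x_5 set to zero:

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1:5')
# H_3[1]: the 3x3 Hankel matrix with its last variable x5 set to zero
f = sp.Matrix([[x1, x2, x3], [x2, x3, x4], [x3, x4, 0]]).det()
# f equals -x1*x4**2 + 2*x2*x3*x4 - x3**3, the classical sub-Hankel cubic
h = sp.expand(sp.hessian(f, (x1, x2, x3, x4)).det())
print(h)  # 16*x4**4
```

The Hessian determinant 16 x_4^4 is nonzero (so the polar map is dominant), yet f is not one of its factors, consistent with the behavior of the sub-Hankel degeneration recalled earlier.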
Caveat 1.
The reason one cannot extend the method of proof of Theorem 4.8 (a) to the degenerated environment is that the extension k[f] ⊂ k[∆] is not integral anymore. In fact, at least if r ≤ m − 3, then P ⊄ √J, since R/J admits a minimal prime other than P. From the point of view of the actual argument of Theorem 4.8 (a), although one still has a poset of maximal minors, the existing Plücker relations do not necessarily come up with at least one factor belonging to k[f].

Caveat 2.
Since k[∆] has maximal dimension, for a degeneration H_m[r] one still has dim k[f] = dim k[∆] due to Theorem 3.1 – of course, this equality is equivalent to the non-vanishing of the Hessian, so an a priori proof of the dimension equality would give a more elegant proof of the latter. Anyway, if one assumes that f is homaloidal for some r ≤ m − 3, then, since k[f] is a polynomial ring, by [22, Proposition 3.7 and Proposition 6.1 (a)] one has the equality

e(k[∆]) = e(k[f], k[∆]) + 1,

where e(k[f], k[∆]) is the relative multiplicity introduced in [22]. Note that the latter relative multiplicity can be computed as the ordinary multiplicity of the graded algebra G/(∆)G, where G stands for the associated graded ring of the ideal (f) k[∆] of the k-algebra k[∆]. In order to derive a contradiction one needs an a priori knowledge of a sufficiently large lower bound for e(k[∆]), a presently unseen goal.

Caveat 3.
Picking up from a slightly different angle, still assuming that f is homaloidal, let ∆ ∈ ∆ denote the cofactor of the (m, m) entry; the extension A := k[f] ⊂ B := k[f, ∆] is birational for even more reason. Since ∆ has the same degree as the elements of f, it must be an element of the field of fractions k(f) of the form G(f)/F(f), with F(f), G(f) ∈ k[f] homogeneous of degrees, say, s − 1, s, respectively. Since k[f] is a polynomial ring over k, this forces a presentation of B as a k-algebra over a polynomial ring k[t, u], with t = {t_1, …, t_{2m−r−1}}, as follows:

B ≃ k[t, u]/(uF(t) + G(t)),

where t_i ↦ f_i, u ↦ ∆, and F, G are forms in t of degrees s − 1, s, respectively. The proof along these steps would then claim that this is impossible unless r = m − 2, where f_1 = ∂f/∂x_1 is a pure power, in which case necessarily f_1 = ±x_{2m−r−1}^{m−1} = ±x_{m+1}^{m−1}.

Caveat 4:
It is equivalent to show that if r < m − 2, the defining equation e of B associated to a presentation with set of generators {f, ∆} has u-degree ≥ 2 (e has no pure power term in u since u is not integral).

Caveat 5:
A computer calculation for the simplest situation (m = 4, r = 1) gives that e has degree 15 and u-degree 5, well beyond what is needed.

Note that either F(t) or G(t) must involve t_1, because the set {f_2, …, f_{2m−r−1}, ∆} is algebraically independent over k. Suppose that F(t) effectively involves t_1. Then by a k-linear change of variables fixing t_1 – i.e., of the form t_1 ↦ t_1, t_i ↦ t_i + α_i t_1, for 2 ≤ i ≤ 2m − r − 1 – F(t) has a nonvanishing pure power term α t_1^{s−1}. Then we would have ∆ f_1^{s−1} ∈ J^s. In this case it would suffice to prove that for no exponent l one has ∆ ∈ J^l : f_1^{l−1}. The feeling is that an early obstruction is the initial degree of this colon ideal, which may turn out to be > m − 1. Note that the colon ideal contains the ideal J^l : J^{l−1}, but one cannot derive anything from this as J is seemingly Ratliff–Rush closed – at least if one takes for granted that the associated graded ring G of J has positive depth, given the expectation (Question 3.14) that J is actually of linear type (not just analytically independent) and G is Cohen–Macaulay.

As a slight confirmation, if r = m − 2 the defining equation is uF(t) + G(t) with F = t_1^{e(B)−1}, that is, F(f) = f_1^{e(B)−1} = ±x_{m+1}^{(m−1)(e(B)−1)}.

Caveat 6:
Early obstructions in the case $r < m-2$: $R$ has dimension at least $m+2$ (while the dimension is $m+1$ in the case $r = m-2$) and $J \subset R$ – and hence $(J, \Delta) \subset R$ too – is an ideal of codimension at least 3 (while $J$ has codimension 2 and $(J, \Delta)$ has codimension 3 in the case $r = m-2$). Proof of the non-vanishing of the Hessian.
The method consists in sufficiently “degenerating” the Hessian matrix to allow direct calculation with some of the submatrices, to eventually get a non-vanishing expression. Namely, consider the ring endomorphism $\varphi$ of $R$ mapping any variable in $v := \{x_1, x_{m-r-1}, x_{2m-r-1}\}$ to itself and mapping any variable off $v$ to zero. We will show that by applying $\varphi$ to the entries of the Hessian matrix $H(f)$ the resulting matrix $H(f)(v)$ has non-vanishing determinant.

For visualization we depict the matrix $H_m[r]$ for arbitrary $r \leq m-2$; it is the $m \times m$ matrix whose $(i,j)$th entry is $x_{i+j-1}$ for $i+j \leq 2m-r$, the remaining entries being zero:
$$H_m[r] = \left( x_{i+j-1} \right)_{1 \leq i,j \leq m}, \qquad \text{where } x_{i+j-1} := 0 \text{ for } i+j > 2m-r.$$

The goal is to isolate terms of the partial derivatives of $f$ that have in their support a product of at least two variables off $v$, since such terms will produce at least one variable off $v$ in the entries of $H(f)$ and hence will vanish upon applying $\varphi$. The expression “terms of degree at least 2 off $v$” will next appear recurrently in the sense just explained. In order to avoid tedious repetition we replace the expression by the letter $T$.

Now, recall that the partial derivatives of $f$ are sums of signed $(m-1)$-minors: for $k = 1, \ldots, 2m-r-1$, we have
$$f_k = \sum_{i+j=k+1} M_{i,j}, \qquad (20)$$
where $M_{i,j}$ is the (signed) cofactor of the $(i,j)$th entry. Let us pick up the shape of such a partial derivative of $f$ as we go through the various relevant intervals for the sum $i+j$.

Observe that for $i+j \leq m-r$, expanding the minor $M_{i,j}$ by the Laplace rule along its first $m-r-2$ rows yields
$$M_{i,j} = D_{i,j}\, x_{2m-r-1}^{r+1} + T,$$
where $D_{i,j}$ is the cofactor of the $(i,j)$th entry of the submatrix
$$D = \begin{pmatrix} x_1 & x_2 & \ldots & x_{m-r-1} \\ x_2 & x_3 & \ldots & x_{m-r} \\ \vdots & \vdots & \cdots & \vdots \\ x_{m-r-1} & x_{m-r} & \ldots & x_{2(m-r)-3} \end{pmatrix}. \qquad (21)$$

Lemma 5.1.
Assume that $i+j \leq m-r$. Then:

(a) For $k+1 = i+j < m-r$, one has
$$\varphi\left(\frac{\partial f_k}{\partial x_l}\right) = \begin{cases} \pm k\, x_{m-r-1}^{m-r-3} x_{2m-r-1}^{r+1}, & \text{if } l = 2(m-r)-(k+1)-1 \\ 0, & \text{otherwise.} \end{cases}$$

(b) For $k+1 = i+j = m-r$, one has
$$\varphi\left(\frac{\partial f_{m-r-1}}{\partial x_l}\right) = \begin{cases} \pm (m-r-1)(m-r-2)\, x_{m-r-1}^{m-r-3} x_{2m-r-1}^{r+1}, & \text{if } l = m-r-1 \\ \pm (m-r-1)(r+1)\, x_{m-r-1}^{m-r-2} x_{2m-r-1}^{r}, & \text{if } l = 2m-r-1 \\ \pm (m-r-3)\, x_1 x_{m-r-1}^{m-r-4} x_{2m-r-1}^{r+1}, & \text{if } l = 2(m-r)-3 \\ 0, & \text{otherwise.} \end{cases}$$

Proof. (a) ($i+j < m-r$.) Expanding $D_{i,j}$ for $i+j < m-r$ yields:
$$D_{i,j} = \pm z_{m-r-i,\, m-r-j}\, x_{m-r-1}^{m-r-3} + T = \pm x_{2(m-r)-(i+j)-1}\, x_{m-r-1}^{m-r-3} + T.$$
Then (20) becomes
$$f_k = \sum_{i+j=k+1} D_{i,j}\, x_{2m-r-1}^{r+1} + T = \pm \sum_{i+j=k+1} x_{2(m-r)-(i+j)-1}\, x_{m-r-1}^{m-r-3} x_{2m-r-1}^{r+1} + T = \pm k \cdot x_{2(m-r)-(k+1)-1}\, x_{m-r-1}^{m-r-3} x_{2m-r-1}^{r+1} + T.$$
Clearly, $\varphi(\partial f_k/\partial x_l)$ vanishes if $l \neq 2(m-r)-(k+1)-1$. Moreover,
$$\varphi\left(\frac{\partial f_k}{\partial x_{2(m-r)-(k+1)-1}}\right) = \pm k\, x_{m-r-1}^{m-r-3} x_{2m-r-1}^{r+1}.$$

(b) For $i+j = m-r = k+1$ one has
$$D_{i,j} = \begin{cases} \pm x_{m-r-1}^{m-r-2} + T, & \text{if } i = 1 \text{ or } j = 1 \\ \pm x_{m-r-1}^{m-r-2} \pm x_1 x_{2(m-r)-3}\, x_{m-r-1}^{m-r-4} + T, & \text{if } i \neq 1 \text{ and } j \neq 1. \end{cases}$$
Observe that, when $i \neq 1$ and $j \neq 1$, then we have necessarily $m-r \geq 4$. Thus,
$$f_{m-r-1} = \sum_{i+j=m-r} D_{i,j}\, x_{2m-r-1}^{r+1} + T = \pm (m-r-1)\, x_{m-r-1}^{m-r-2} x_{2m-r-1}^{r+1} \pm (m-r-3)\, x_1 x_{2(m-r)-3}\, x_{m-r-1}^{m-r-4} x_{2m-r-1}^{r+1} + T.$$
Taking derivatives with respect to $x_l$ shows that $\varphi(\partial f_{m-r-1}/\partial x_l)$ is as in the statement.

So far we have discussed the first $m-r-1$ columns of $H(f)(v)$ and hence, by symmetry, its first $m-r-1$ rows as well. In particular, the columns $m-r, \ldots, 2(m-r)-3$ of $H(f)(v)$ are partially obtained. For the concluding argument on the entire shape of $H(f)(v)$ it will suffice to move all the way to the interval $2(m-r)-1 \leq i+j \leq 2m-r$, which will give the shape of columns $2(m-r)-2, \ldots, 2m-r-1$ of $H(f)(v)$.

Lemma 5.2. Assume that $2(m-r)-1 \leq i+j \leq 2m-r$. Then:

(a) For $l = 2(m-r)-2, \ldots, 2m-r-2$ one has
$$\varphi\left(\frac{\partial f_l}{\partial x_k}\right) = \begin{cases} \pm 2q = \pm 2\, x_1 x_{m-r-1}^{m-r-3} x_{2m-r-1}^{r}, & \text{if } k = 4m-3r-l-4 \\ 0, & \text{if } k > 4m-3r-l-4. \end{cases}$$

(b) $\varphi\left(\dfrac{\partial f_{2m-r-1}}{\partial x_{2m-r-1}}\right) = \pm (r+1)r\, x_{m-r-1}^{m-r-1} x_{2m-r-1}^{r-1}$.

Proof. (a) Let us look at the cofactors $M_{i,j}$ for $2(m-r)-1 \leq i+j < 2m-r$. Write $M_{i,j} = (-1)^{i+j} \det(C_{i,j})$, where $C_{i,j}$ is the submatrix of $H_m[r]$ obtained by omitting its $i$th row and its $j$th column. If $i > m-r-1$ and $j > m-r-1$, then $C_{i,j}$ misses the entry $x_{2m-r-1}$ sitting on the $(i, 2m-r-i)$th and $(2m-r-j, j)$th slots of $H_m[r]$. Therefore, in this case $C_{i,j}$ has only $m-2$ columns with some entry in $v$, and hence $M_{i,j}$ cannot have any term supported on $v$; in addition, by the same token, any of its terms having degree 1 in variables off $v$ involves necessarily the $m-r-1$ entries $x_{m-r-1}$ and the $r-1$ entries $x_{2m-r-1}$ on the matrix $C_{i,j}$, together with the entry of $H_m[r]$ in the $(2m-r-j, 2m-r-i)$th slot. But the latter is zero since $(2m-r-j)+(2m-r-i) > 2m-r$ when $2(m-r)-1 \leq i+j < 2m-r$.

Summing up, we have shown that, for $i > m-r-1$ and $j > m-r-1$, the cofactor $M_{i,j}$ has neither terms supported on $v$ nor terms having degree 1 in variables off $v$. Thus, we are left with the next two possibilities:

(i) $i < m-r-1$ or $j < m-r-1$, say, $i < m-r-1$. In this case $C_{i,j}$ misses the entries $x_{m-r-1}$ and $x_{2m-r-1}$ in the $(i, m-r-i)$th and $(2m-r-j, j)$th slots of $H_m[r]$, respectively. Clearly, $M_{i,j}$ does not have terms supported in $v$. Moreover, any term of $M_{i,j}$ involving $x_1$ cannot simultaneously involve the variables of $v$ in the slots $(1, m-r-1)$ and $(m-r-1, 1)$ of $C_{i,j}$, and hence ought to have degree at least 2 in the variables off $v$. Then one has
$$M_{i,j} = \pm x_{m-r-1}^{m-r-2} x_{2m-r-1}^{r}\, x_{3m-2r-(i+j)-1} + T.$$

(ii) $i = m-r-1$ or $j = m-r-1$, say, $i = m-r-1$. A similar argument as above, concerning the slots $(m-r-1, 1)$ and $(2m-r-j, j)$ of $H_m[r]$, will do and one gets
$$M_{i,j} = \pm x_{m-r-1}^{m-r-2} x_{2m-r-1}^{r}\, x_{2m-r-j} \pm x_1 x_{m-r-1}^{m-r-3} x_{2m-r-1}^{r}\, x_{3m-2r-j-2} + T = \pm x_{m-r-1}^{m-r-2} x_{2m-r-1}^{r}\, x_{3m-2r-l-2} \pm x_1 x_{m-r-1}^{m-r-3} x_{2m-r-1}^{r}\, x_{4m-3r-l-4} + T,$$
where $i+j = l+1$ and $i = m-r-1$. Hence, for $l = 2(m-r)-2, \ldots, 2m-r-2$,
$$f_l = \pm 2\, x_1 x_{m-r-1}^{m-r-3} x_{2m-r-1}^{r}\, x_{4m-3r-l-4} \pm c_l\, x_{m-r-1}^{m-r-2} x_{2m-r-1}^{r}\, x_{3m-2r-l-2} + T,$$
for some $c_l \in k$. Therefore, for $l = 2(m-r)-2, \ldots, 2m-r-2$, one gets
$$\varphi\left(\frac{\partial f_l}{\partial x_{4m-3r-l-4}}\right) = \pm 2q = \pm 2\, x_1 x_{m-r-1}^{m-r-3} x_{2m-r-1}^{r}.$$
In addition, since $3m-2r-l-2 < 4m-3r-l-4$, one has $\varphi(\partial f_l/\partial x_k) = 0$ if $k > 4m-3r-l-4$.

(b) $k+1 = i+j = 2m-r$. Expanding along the first $m-r-1$ columns, $M_{i,j} = \det D \cdot x_{2m-r-1}^{r} + T$. Expanding $\det D$ one gets
$$M_{i,j} = \pm x_1 x_{m-r-1}^{m-r-3} x_{2(m-r)-3}\, x_{2m-r-1}^{r} \pm x_{m-r-1}^{m-r-1} x_{2m-r-1}^{r} + T$$
and therefore
$$f_{2m-r-1} = \pm (r+1)\, x_1 x_{m-r-1}^{m-r-3} x_{2(m-r)-3}\, x_{2m-r-1}^{r} \pm (r+1)\, x_{m-r-1}^{m-r-1} x_{2m-r-1}^{r} + T.$$
Taking the derivative with respect to $x_{2m-r-1}$ shows that
$$\varphi\left(\frac{\partial f_{2m-r-1}}{\partial x_{2m-r-1}}\right) = \pm (r+1)r\, x_{m-r-1}^{m-r-1} x_{2m-r-1}^{r-1}.$$

We now proceed to the proof of Theorem 3.1 proper. Collecting the information gathered so far, we see that by applying $\varphi$ to the entries of the Hessian matrix $H(f)$, one obtains a matrix of the form
$$H(f)(v) = \begin{pmatrix} A & B^t \\ B & A' \end{pmatrix}.$$
Here $A$ and $B$ are matrices of sizes $(2(m-r)-3) \times (2(m-r)-3)$ and $(r+2) \times (2(m-r)-3)$, respectively. By Lemma 5.1, the only nonzero entries of $A$ are the antidiagonal entries $\pm k\, p$ on the slots $(k, 2(m-r)-2-k)$ and $(2(m-r)-2-k, k)$, for $k = 1, \ldots, m-r-2$, the entry $\pm (m-r-1)(m-r-2)\, p$ on the middle slot $(m-r-1, m-r-1)$, and the entry $d$ on the slots $(m-r-1, 2(m-r)-3)$ and $(2(m-r)-3, m-r-1)$, where
$$p = x_{m-r-1}^{m-r-3} x_{2m-r-1}^{r+1} \quad \text{and} \quad d = \pm (m-r-3)\, x_1 x_{m-r-1}^{m-r-4} x_{2m-r-1}^{r+1};$$
likewise, the only nonzero entry of $B$ sitting on the columns $1, \ldots, m-r-1$ is $\pm (m-r-1)(r+1)\, x_{m-r-1}^{m-r-2} x_{2m-r-1}^{r}$, on the slot $(2m-r-1, m-r-1)$ of $H(f)(v)$.

As for the matrix $A'$, its shape follows from Lemma 5.2: its entries on the slots $(k, l)$ with $k+l > 4m-3r-4$ vanish, except for the corner entry $\pm (r+1)r\, x_{m-r-1}^{m-r-1} x_{2m-r-1}^{r-1}$ on the slot $(2m-r-1, 2m-r-1)$, while the entries on the antidiagonal $k+l = 4m-3r-4$ are $\pm 2q$, with
$$q = x_1 x_{m-r-1}^{m-r-3} x_{2m-r-1}^{r}.$$

Now expand the above determinant by the Laplace rule along the first $2(m-r)-3$ rows. The only (possibly) nonzero $(2(m-r)-3)$-minors involving these rows are $A$ itself and the matrix $X$ obtained upon replacing the $(m-r-1)$th column of $A$ with the last column of $B^t$ (i.e., the transpose of the last row of $B$). Their complementary matrices are, respectively, $A'$ and the matrix $X'$ obtained, up to a permutation of columns, by replacing the last column of $A'$ with the restriction of the $(m-r-1)$th column of $H(f)(v)$ to the last $r+2$ rows. Thereof, we obtain
$$\det H(f)(v) = \pm \det A\, \det A' \pm \det X\, \det X'.$$
Expanding the various determinants in this expression gives
$$\det H(f)(v) = 2^{r+1}(r+1)(m-r-1)\left((m-r-2)!\right)^2 p^{2(m-r)-4} q^{r+1} \cdot \left( \pm r(m-r-2)\, p\, x_{m-r-1}^{m-r-1} x_{2m-r-1}^{r-1} \pm (m-r-1)(r+1)\, x_{m-r-1}^{2(m-r-2)} x_{2m-r-1}^{2r} \right).$$
The first factor is a term in $p$ and $q$, hence does not vanish. The second factor does not vanish either: its two terms involve the same monomial $x_{m-r-1}^{2(m-r-2)} x_{2m-r-1}^{2r}$, and the integer coefficients $r(m-r-2)$ and $(m-r-1)(r+1)$ differ (by $m-1$), so they cannot cancel. Therefore, the expression is nonzero. Remark 5.3.
It would seem that there might exist an easy argument for the non-vanishing of the Hessian determinant of an intermediate degeneration $H_m[r]$, since it is “squeezed” between the extreme situations $r = 0$ and $r = m-2$, where we know the Hessian does not vanish. Unfortunately, we may need the specifics of the present setup, as for arbitrary threads of degenerations some intermediate Hessian determinants may vanish or not (see, e.g., [6], also [23]).
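The non-vanishing in the smallest intermediate case can be probed by direct computation. The following sympy sketch is ours, not part of the paper: the helper name `hankel_degeneration` and the choice $m = 4$, $r = 1$ are illustrative, and the specialization keeps only the variables $x_1$, $x_{m-r-1}$, $x_{2m-r-1}$, as in the proof of Theorem 3.1 above.

```python
# Sketch (not from the paper): check non-vanishing of the Hessian of
# f = det H_m[r] for the smallest intermediate degeneration m = 4, r = 1,
# via the specialization phi keeping only x_1, x_{m-r-1}, x_{2m-r-1}.
import sympy as sp

def hankel_degeneration(m, r):
    # Variables x_1, ..., x_{2m-r-1}; the (i,j) entry of the m x m matrix
    # is x_{i+j-1} when i+j <= 2m-r, and 0 otherwise.
    xs = sp.symbols(f"x1:{2*m - r}")
    H = sp.Matrix(m, m, lambda i, j: xs[i + j] if i + j + 2 <= 2*m - r else 0)
    return xs, H

m, r = 4, 1
xs, H = hankel_degeneration(m, r)
f = H.det()                       # a form of degree m in 2m-r-1 variables
hess = sp.hessian(f, xs)          # the (2m-r-1) x (2m-r-1) Hessian matrix H(f)

# phi: keep v = {x_1, x_{m-r-1}, x_{2m-r-1}}, send every other variable to 0
v = {xs[0], xs[m - r - 2], xs[2*m - r - 2]}
phi = {x: 0 for x in xs if x not in v}
det_spec = sp.expand(hess.subs(phi).det())   # det H(f)(v)

assert det_spec != 0              # hence det H(f) itself is nonzero
print(det_spec)
```

Since $\varphi$ is a ring map, $\varphi(\det H(f)) = \det H(f)(v)$, so the assertion indeed forces the Hessian determinant of $f$ to be nonzero; larger pairs $(m, r)$ with $r \leq m-3$ can be probed the same way, at growing symbolic cost.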
References

[1] W. Bruns and J. Herzog, On the computation of a-invariants, Manuscripta Math. (1992), 201–213.

[2] W. Bruns and U. Vetter, Determinantal Rings, Lecture Notes in Mathematics, Springer-Verlag, 1988.

[3] W. Bruns, A. Conca and M. Varbaro, Relations between the minors of a generic matrix, Adv. Math. (2013), 171–206.

[4] A. Conca, J. Herzog and G. Valla, Sagbi bases with applications to blow-up algebras, J. Reine Angew. Math. (1996), 113–138.

[5] A. Conca, Straightening law and powers of determinantal ideals of Hankel matrices, Adv. Math. (1998), 263–292.

[6] C. Ciliberto, F. Russo and A. Simis, Homaloidal hypersurfaces and hypersurfaces with vanishing Hessian, Adv. Math. (2008), 1759–1805.

[7] R. Cunha, Z. Ramos and A. Simis, Degenerations of the generic square matrix. Polar map and determinantal structure, Intern. J. Algebra Comput. (2018), 1255–1297.

[8] R. Cunha, Z. Ramos and A. Simis, Symmetry preserving degenerations of the generic symmetric matrix, J. Algebra (2019), 154–191.

[9] A. Doria, H. Hassanzadeh and A. Simis, A characteristic free criterion of birationality, Adv. Math. (2012), 390–413.

[10] D. Eisenbud, On the resiliency of determinantal ideals, Proceedings of the U.S.–Japan Seminar, Kyoto 1985, in: Advanced Studies in Pure Math. II, Commutative Algebra and Combinatorics, ed. M. Nagata and H. Matsumura, North-Holland (1987), 29–38.

[11] D. Eisenbud, Linear sections of determinantal varieties, Amer. J. Math. (1988), 541–575.

[12] M. Giusti and M. Merle, Sections des variétés déterminantielles par les plans de coordonnées, in: Proc. Int. Conf. on Algebraic Geometry (La Rábida 1981, Spain), Lecture Notes in Mathematics 961, Springer-Verlag (1982), 103–118.

[13] M. A. Golberg, The derivative of a determinant, Amer. Math. Monthly (1972), 1124–1126.

[14] L. Gruson and C. Peskine, Courbes de l'espace projectif: variétés de sécantes, pp. 1–32 in: P. Le Barz and Y. Hervier (eds), Enumerative Geometry and Classical Algebraic Geometry, Progress in Mathematics, vol. 24, Birkhäuser Boston, 1982.

[15] J. Herzog and T. Hibi, Monomial Ideals, Graduate Texts in Mathematics, Springer-Verlag, 2011.

[16] M. Hochster, Grassmannians and their Schubert subvarieties are arithmetically Cohen–Macaulay, J. Algebra (1973), 40–57.

[17] M. Mostafazadehfard, Hankel and sub-Hankel determinants – a detailed study of their polar ideals, PhD Thesis, Universidade Federal de Pernambuco (Recife, Brazil), July 2014.

[18] M. Mostafazadehfard and A. Simis, Homaloidal determinants, J. Algebra (2016), 59–101.

[19] M. P. Murthy, A note on factorial rings, Arch. Math. (1964), 418–420.

[20] F. Russo, On the Geometry of Some Special Projective Varieties, Lecture Notes of the Unione Matematica Italiana, Springer, 2015.

[21] P. Samuel, Lectures on Unique Factorization Domains, Math. Ser. Vol. 30, Tata Institute of Fundamental Research, Bombay, 1964.

[22] A. Simis, B. Ulrich and W. V. Vasconcelos, Codimension, multiplicities and integral extensions, Math. Proc. Camb. Phil. Soc. (2001), 237–257.

[23] A. Simis et al., Apocryphal homaloidal determinants, preliminary notes, 2018.

[24] W. Vasconcelos, Integral Closure. Rees Algebras, Multiplicities, Algorithms, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2005.

[25] J. Watanabe, Hankel matrices and Hankel ideals, Proc. School Sci. Tokai Univ. (1997), 11–21.

Addresses:
Rainelly Cunha
Instituto Federal de Educação, Ciência e Tecnologia do Rio Grande do Norte
59015-000 Natal, RN, Brazil
e-mail: [email protected]
Maral Mostafazadehfard
Instituto de Matemática, CT – Bloco C
Universidade Federal do Rio de Janeiro
21941-909 Rio de Janeiro, RJ, Brazil
e-mail: [email protected]
Zaqueu Ramos
Departamento de Matemática, CCET
Universidade Federal de Sergipe
49100-000 São Cristóvão, Sergipe, Brazil
e-mail: [email protected]
Aron Simis
Departamento de Matemática, CCEN
Universidade Federal de Pernambuco
50740-560 Recife, PE, Brazil
e-mail: