Kronecker powers of tensors and Strassen's laser method
Austin Conner, Fulvio Gesmundo, Joseph M. Landsberg, Emanuele Ventura
Abstract.
We answer a question, posed implicitly in [16] and explicitly in [7, Problem 9.8], showing that the border rank of the Kronecker square of the little Coppersmith-Winograd tensor T_cw,q equals the square of its border rank for all q > 2, a negative result for complexity theory. We further show that when q > 4, the analogous result holds for the Kronecker cube. In the positive direction, we enlarge the list of explicit tensors potentially useful for the laser method. We observe that a well-known tensor, the 3 × 3 determinant polynomial regarded as a tensor, det_3 ∈ C^9 ⊗ C^9 ⊗ C^9, could potentially be used in the laser method to prove the exponent of matrix multiplication is two. Because of this, we prove new upper bounds on its Waring rank and rank (both 18), and on its border rank and Waring border rank (both 17), which, in addition to being promising for the laser method, are of interest in their own right. We discuss "skew" cousins of the little Coppersmith-Winograd tensor and indicate why they may be useful for the laser method. We establish general results regarding border ranks of Kronecker powers of tensors, and make a detailed study of Kronecker squares of tensors in C^3 ⊗ C^3 ⊗ C^3. In particular, we show numerically that for generic tensors in C^3 ⊗ C^3 ⊗ C^3, rank and border rank are strictly sub-multiplicative.

1. Introduction
This paper studies Kronecker powers of several tensors, with particular focus on their ranks and border ranks. Our main motivation comes from theoretical computer science, more precisely from upper bounds for the exponent of matrix multiplication. Independently of complexity, the results are of geometric interest in their own right.

The exponent ω of matrix multiplication is defined as

ω := inf { τ | n × n matrices may be multiplied using O(n^τ) arithmetic operations }.

The exponent is a fundamental constant governing the complexity of the basic operations in linear algebra. It is conjectured that ω = 2. There was steady progress in the research on upper bounds from 1968 to 1988: after Strassen's famous ω < 2.81 [37], Bini et al. [6], using border rank (see below), showed ω < 2.78; then a major breakthrough by Schönhage [34] (the asymptotic sum inequality) was used to show ω < 2.55; then Strassen's laser method was introduced and used by Strassen to show ω < 2.48, and it was further refined by Coppersmith and Winograd to show ω < 2.3755. The current world record is ω < 2.373.
Key words and phrases: matrix multiplication complexity, tensor rank, asymptotic rank, laser method. Landsberg supported by NSF grant AF-1814254. Gesmundo acknowledges support from VILLUM FONDEN via the QMATH Centre of Excellence (Grant no. 10059).
All upper bounds since 1984 have been obtained via Strassen's laser method, a technique that proves upper bounds on the exponent indirectly, via an auxiliary tensor that is easy to analyze. In 2014, the authors of [3] gave an explanation for the limited progress since 1988, followed by further explanations in [1, 2, 11]: there are limitations to the laser method applied to certain auxiliary tensors. These limitations are referred to as barriers. Our main motivation is to explore ways of overcoming these barriers via auxiliary tensors that avoid them.
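For concreteness, the bound ω < 2.81 quoted above comes from a rank-7 expression for the 2 × 2 matrix multiplication tensor M⟨2⟩. A minimal sketch (the standard Strassen formulas; the function name and setup are ours, not the paper's):

```python
import numpy as np

def strassen_2x2(X, Y):
    """Multiply 2x2 matrices with 7 multiplications (Strassen),
    witnessing R(M<2>) <= 7 and hence omega <= log2(7) < 2.81."""
    m1 = (X[0, 0] + X[1, 1]) * (Y[0, 0] + Y[1, 1])
    m2 = (X[1, 0] + X[1, 1]) * Y[0, 0]
    m3 = X[0, 0] * (Y[0, 1] - Y[1, 1])
    m4 = X[1, 1] * (Y[1, 0] - Y[0, 0])
    m5 = (X[0, 0] + X[0, 1]) * Y[1, 1]
    m6 = (X[1, 0] - X[0, 0]) * (Y[0, 0] + Y[0, 1])
    m7 = (X[0, 1] - X[1, 1]) * (Y[1, 0] + Y[1, 1])
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

rng = np.random.default_rng(0)
X, Y = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
assert np.allclose(strassen_2x2(X, Y), X @ Y)
```

Recursing on 2 × 2 blocks gives the O(n^{log2 7}) algorithm; the laser method discussed below extracts better exponents from auxiliary tensors other than M⟨n⟩ itself.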
Remark 1.1.
Another approach to upper bounds is the group-theoretic method of Cohn and Umans, see, e.g., [12, 13]. One can show ω < 2.62 by this method [13].
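The exponent bounds quoted in this introduction all arise from rank or border rank statements by simple bookkeeping: R(M⟨n⟩) ≤ r gives ω ≤ log_n r, and, by Bini's theorem and symmetrization, a border rank bound on a rectangular matrix multiplication tensor M⟨n,m,p⟩ gives ω ≤ 3 log r / log(nmp). A quick check of the historical numbers (our own illustration, not the paper's code):

```python
import math

def omega_from_rank(n, r):
    """R(M<n>) <= r (or border rank, by Bini's theorem) implies omega <= log_n(r)."""
    return math.log(r, n)

def omega_from_rect(n, m, p, r):
    """Border rank R(M<n,m,p>) <= r implies omega <= 3 log(r) / log(n*m*p)."""
    return 3 * math.log(r) / math.log(n * m * p)

# Strassen: R(M<2>) <= 7 gives omega < 2.81
assert abs(omega_from_rank(2, 7) - 2.8074) < 1e-4
# Bini et al.: border rank of M<2,2,3> at most 10 gives omega < 2.78
assert abs(omega_from_rect(2, 2, 3, 10) - 2.7799) < 1e-4
```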
1.1. Definitions and notation. Let A, B, C be complex vector spaces. We work with tensors in A ⊗ B ⊗ C. Let GL(A) denote the general linear group of A. Unless stated otherwise, we write {a_i} for a basis of A, and similarly for bases of B and C. Often we assume that all tensors involved in the discussion belong to the same space A ⊗ B ⊗ C; this is not restrictive, since we may re-embed the spaces A, B, C into larger spaces whenever needed. We say that two tensors are isomorphic if they agree up to a change of bases in A, B and C.

Let M⟨n⟩ denote the matrix multiplication tensor, the tensor corresponding to the bilinear map C^{n×n} × C^{n×n} → C^{n×n} sending a pair of n × n matrices to their product.

Given T, T′ ∈ A ⊗ B ⊗ C, we say that T degenerates to T′ if T′ lies in the closure of the orbit of T under the natural action of GL(A) × GL(B) × GL(C) on A ⊗ B ⊗ C.

A tensor T ∈ A ⊗ B ⊗ C has rank one if T = a ⊗ b ⊗ c for some a ∈ A, b ∈ B, c ∈ C. The rank of T, denoted R(T), is the smallest r such that T is a sum of r rank one tensors. The border rank of T, denoted R̲(T), is the smallest r such that T is the limit of a sequence of rank r tensors. Equivalently, R̲(T) ≤ r if and only if M⟨1⟩^{⊕r} degenerates to T. Border rank is upper semi-continuous under degeneration: if T′ is a degeneration of T, then R̲(T′) ≤ R̲(T). The border subrank of T, denoted Q(T), is the largest q such that T degenerates to M⟨1⟩^{⊕q}.

Given T ∈ A ⊗ B ⊗ C and T′ ∈ A′ ⊗ B′ ⊗ C′, the Kronecker product of T and T′ is the tensor T ⊠ T′ := T ⊗ T′ ∈ (A ⊗ A′) ⊗ (B ⊗ B′) ⊗ (C ⊗ C′), regarded as a 3-way tensor. Given T ∈ A ⊗ B ⊗ C, the Kronecker powers of T are T^{⊠N} ∈ A^{⊗N} ⊗ B^{⊗N} ⊗ C^{⊗N}, defined iteratively. The matrix multiplication tensor has the following important self-reproducing property: M⟨n⟩ ⊠ M⟨m⟩ = M⟨nm⟩.
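These definitions are easy to make concrete. The sketch below (our own index conventions, not the paper's) builds M⟨n⟩ as a 3-way array, checks that contracting it with two matrices performs matrix multiplication, and verifies the self-reproducing property M⟨2⟩ ⊠ M⟨2⟩ = M⟨4⟩ after matching the composite indices:

```python
import numpy as np

def matmul_tensor(n):
    """M<n>: entry [(i,j), (j,k), (k,i)] = 1, indices flattened as i*n + j."""
    T = np.zeros((n * n, n * n, n * n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                T[i * n + j, j * n + k, k * n + i] = 1
    return T

def kron3(T, S):
    """Kronecker product of 3-way tensors: group the three pairs of factors."""
    a, b, c = T.shape
    d, e, f = S.shape
    return np.einsum('abc,def->adbecf', T, S).reshape(a * d, b * e, c * f)

def pair_perm(n, m):
    """Match composite indices ((i,j),(i',j')) of kron3 with indices of M<n*m>."""
    N = n * m
    p = np.empty(N * N, dtype=int)
    for i in range(n):
        for j in range(n):
            for ip in range(m):
                for jp in range(m):
                    p[(i * n + j) * m * m + ip * m + jp] = (i * m + ip) * N + (j * m + jp)
    return p

n = 2
T = matmul_tensor(n)
X, Y = np.random.default_rng(0).standard_normal((2, n, n))
# contracting M<n> with vec(X), vec(Y) recovers the product
# (up to transpose, an artifact of our index convention)
Z = np.einsum('abc,a,b->c', T, X.ravel(), Y.ravel()).reshape(n, n)
assert np.allclose(Z.T, X @ Y)

# self-reproducing property: M<2> x M<2> = M<4> after regrouping indices
K = kron3(matmul_tensor(2), matmul_tensor(2))
p = pair_perm(2, 2)
assert np.array_equal(K, matmul_tensor(4)[np.ix_(p, p, p)])
```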
We have R(T ⊠ T′) ≤ R(T) R(T′), and similarly for border rank. The asymptotic rank of T is R̃(T) := lim_{N→∞} R(T^{⊠N})^{1/N} and the asymptotic subrank of T is Q̃(T) := lim_{N→∞} Q(T^{⊠N})^{1/N}. These limits exist and are finite, see [39]. Moreover, R̃(T) ≤ R̲(T) and Q̃(T) ≥ Q(T).

A tensor T ∈ A ⊗ B ⊗ C is concise if the induced linear maps T_A : A^* → B ⊗ C, T_B : B^* → A ⊗ C, T_C : C^* → A ⊗ B are injective. We say that a concise tensor T ∈ C^m ⊗ C^m ⊗ C^m has minimal rank (resp. minimal border rank) if R(T) = m (resp. R̲(T) = m).

Bini [5] showed that the exponent of matrix multiplication may also be defined as

ω = inf { τ : R̲(M⟨n⟩) ∈ O(n^τ) } = log_n R̃(M⟨n⟩).

1.2. The laser method and the Coppersmith-Winograd tensors.
The best auxiliary tensor for the laser method so far has been the big Coppersmith-Winograd tensor, which is

(1) T_CW,q := Σ_{j=1}^q (a_0 ⊗ b_j ⊗ c_j + a_j ⊗ b_0 ⊗ c_j + a_j ⊗ b_j ⊗ c_0) + a_0 ⊗ b_0 ⊗ c_{q+1} + a_0 ⊗ b_{q+1} ⊗ c_0 + a_{q+1} ⊗ b_0 ⊗ c_0 ∈ (C^{q+2})^{⊗3}.

It was used to obtain the current world record ω < 2.373, and all bounds below ω < 2.41. The barrier identified in [3] says that T_CW,q cannot be used in the laser method to prove ω < 2.3. In [11] it was shown that Q̃(M⟨n⟩) = n^2, which is maximal; this is used to show that any tensor with non-maximal asymptotic subrank cannot be used to prove ω = 2 by the laser method, and Strassen [40] had shown that Q̃(T_CW,q) is non-maximal.

The second best tensor for the laser method so far has been the little Coppersmith-Winograd tensor, which is

(2) T_cw,q := Σ_{j=1}^q (a_0 ⊗ b_j ⊗ c_j + a_j ⊗ b_0 ⊗ c_j + a_j ⊗ b_j ⊗ c_0) ∈ (C^{q+1})^{⊗3}.

The laser method was used to prove the following inequality:
Theorem 1.2. [16]
For all k and q,

(3) ω ≤ log_q ( (4/27) (R̲(T_cw,q^{⊠k}))^{3/k} ).

More precisely, the ingredients needed for the proof, but not the statement, appear in [16]. It was pointed out in [9, Ex. 15.24] that the statement holds with (R̲(T_cw,q^{⊠k}))^{3/k} replaced by R̃(T_cw,q)^3, and the proof implicitly uses (3). The equation does appear in [27, Thm. 5.1.5.1]. An easy calculation shows R̲(T_cw,q) = q + 2 (one more than minimal). Applying Theorem 1.2 to T_cw,8 with k = 1 gives ω ≤ 2.41 [16]. Theorem 1.2 shows that, unlike T_CW,q, T_cw,2 is not subject to the barriers of [3, 1, 2, 11] for proving ω = 2, and T_cw,q, for 2 ≤ q ≤ 10, is not subject to the barriers for proving ω < 2.3. Thus, if some Kronecker power of T_cw,q for 2 ≤ q ≤ 10 had border rank strictly smaller than the corresponding power of q + 2, one could obtain a new upper bound on ω, and if it were the case that R̃(T_cw,2) = 3, one would obtain that ω is two.

Remark 1.3.
Although we know little about the asymptotic rank of explicit tensors beyond matrix multiplication, generic tensors have asymptotic rank less than their border rank: for a generic tensor T ∈ C^m ⊗ C^m ⊗ C^m with m > 3, Lickteig showed that R̲(T) = ⌈ m^3 / (3m − 2) ⌉ [33]. Strassen [40, Lemma 3.5] implicitly showed that for any T ∈ C^m ⊗ C^m ⊗ C^m, if R̲(T) > m^{2ω/3}, then R̃(T) < R̲(T). It is worth recalling Strassen's proof: any T ∈ C^m ⊗ C^m ⊗ C^m is a degeneration of M⟨1,m,m⟩ ∈ C^m ⊗ C^{m^2} ⊗ C^m, so T^{⊠3} is a degeneration of M⟨m^2,m^2,m^2⟩ = M⟨1,m,m⟩ ⊠ M⟨m,1,m⟩ ⊠ M⟨m,m,1⟩. In particular R(T^{⊠3}) ≤ R(M⟨m^2,m^2,m^2⟩), and R̃(T)^3 = R̃(T^{⊠3}) ≤ R̃(M⟨m^2,m^2,m^2⟩) = m^{2ω}, so R̃(T) ≤ m^{2ω/3}. Since ω < 2.373, we obtain R̃(T) ≤ m^{1.582} for all T ∈ C^m ⊗ C^m ⊗ C^m.

2. Results
2.1. Lower bounds for Kronecker powers of T_cw,q. We address Problem 9.8 of [7], which was motivated by Theorem 1.2: is R̲(T_cw,q^{⊠2}) < (q + 2)^2? We give an almost complete answer:

Theorem 2.1. For all q > 2, R̲(T_cw,q^{⊠2}) = (q + 2)^2, and 15 ≤ R̲(T_cw,2^{⊠2}) ≤ 16.

We also examine the Kronecker cube:
Theorem 2.2.
For all q > 4, R̲(T_cw,q^{⊠3}) = (q + 2)^3.

Proofs are given in §4.

Corollary 2.3.
For all q > 4 and all N, R̲(T_cw,q^{⊠N}) ≥ (q + 2)^3 (q + 1)^{N−3}, and R̲(T_cw,2^{⊠N}) ≥ 15 · 3^{N−2}.

Previously, in [8] it had been shown that R̲(T_cw,q^{⊠N}) ≥ (q + 1)^N + 2^N − q, whereas the bound in Corollary 2.3 is (q + 1)^N + 3(q + 1)^{N−1} + 3(q + 1)^{N−2} + (q + 1)^{N−3}.

Previous to this work, one might have hoped to prove ω < 2.3 via the Kronecker square of T_cw,2. Now, the smallest possible calculation to give a new upper bound on ω would be, e.g., to prove that the fourth Kronecker power of a little Coppersmith-Winograd tensor achieves the lower bound of Corollary 2.3 (which we do not expect to happen). Of course, one could work directly with the matrix multiplication tensor, in which case the cheapest possible upper bound would come from proving a sufficiently small upper bound on the border rank of the 6 × 6 matrix multiplication tensor.

Corollary 2.4.
A general tensor T ∈ C^m ⊗ C^m ⊗ C^m of border rank m + 1 satisfies R̲(T^{⊠2}) = R̲(T)^2 = (m + 1)^2 for m ≥ 4, and R̲(T^{⊠3}) = R̲(T)^3 = (m + 1)^3 for m ≥ 6.

2.2. A skew cousin of T_cw,q. In light of the negative results for complexity theory above, one might try to find a better tensor than T_cw,q that is also not subject to the barriers. In [14], for q even, we introduced a skew cousin of the big Coppersmith-Winograd tensor, which has the largest symmetry group of any tensor in its space satisfying a natural genericity condition. However, this tensor turns out not to be useful for the laser method. Inspired by it, we introduce a skew cousin of the little Coppersmith-Winograd tensor, for q even:

(4) T_skewcw,q := Σ_{j=1}^q (a_0 ⊗ b_j ⊗ c_j + a_j ⊗ b_0 ⊗ c_j) + Σ_{ξ=1}^{q/2} (a_ξ ⊗ b_{ξ+q/2} − a_{ξ+q/2} ⊗ b_ξ) ⊗ c_0 ∈ (C^{q+1})^{⊗3}.

In the language of [9], T_skewcw,q has the same "block structure" as T_cw,q, which immediately implies that Theorem 1.2 also holds for T_skewcw,q:
Theorem 2.5.
For all k,

(5) ω ≤ log_q ( (4/27) (R̲(T_skewcw,q^{⊠k}))^{3/k} ).

In particular, the known barriers do not apply to T_skewcw,2 for proving ω = 2, nor to any T_skewcw,q for q ≤ 10 for proving ω < 2.3. Unfortunately, we have
Proposition 2.6. R̲(T_skewcw,q) ≥ q + 3.

Proposition 2.6 is proved in §4. Note that R̲(T_skewcw,q) > R̲(T_cw,q) for all q; in particular R̲(T_skewcw,2) = 5, since R̲(T) ≤ 5 for all T ∈ C^3 ⊗ C^3 ⊗ C^3. Despite this larger border rank than T_cw,2, substantial strict sub-multiplicativity holds for the Kronecker square of T_skewcw,2:

Theorem 2.7. R̲(T_skewcw,2^{⊠2}) ≤ 17.

Remark 2.8.
Regarding strict sub-multiplicativity of border rank under Kronecker powers for other explicit tensors, little is known. For matrix multiplication, the only explicit drop under a Kronecker power known to us is [35]: R̲(M⟨2⟩^{⊠2}) ≤ 46 < 49 = R̲(M⟨2⟩)^2, and M⟨2⟩ is the only matrix multiplication tensor for which any bound on the Kronecker square other than the trivial R̲(T^{⊠2}) ≤ R̲(T)^2 is known. In [10], it is shown that the tensor

T_CGJ,m := a_1 ⊗ b_1 ⊗ c_1 + a_2 ⊗ b_2 ⊗ c_2 + a_3 ⊗ b_3 ⊗ c_3 + (Σ_{i=1}^3 a_i) ⊗ (Σ_{j=1}^3 b_j) ⊗ (Σ_{k=1}^3 c_k) + 2(a_1 + a_2) ⊗ (b_1 + b_2) ⊗ (c_1 + c_2) + a_1 ⊗ (Σ_{s=4}^m b_s ⊗ c_s) ∈ C^3 ⊗ C^m ⊗ C^m

satisfies R̲(T_CGJ,m) = m + 2 and R̲(T_CGJ,m^{⊗2}) ≤ (m + 2)^2 − 1. Of course, for any tensor T, R̲(T^{⊠2}) ≤ R̲(T^{⊗2}), and strict inequality, e.g., with M⟨2⟩, is possible. This is part of a general theory in [10] for constructing examples with a drop of one when the last non-trivial secant variety is a hypersurface.

We also show:

Theorem 2.9. R(T_skewcw,2^{⊠2}) ≤ 18.

Theorems 2.7 and 2.9 are proved below.

2.3. Two familiar tensors with no known laser method barriers.
Recall from above that either R̃(T_cw,2) = 3 or R̃(T_skewcw,2) = 3 would imply ω = 2. Let det_3 ∈ (C^9)^{⊗3} and perm_3 ∈ (C^9)^{⊗3} be the 3 × 3 determinant and permanent polynomials regarded as tensors. We add these two tensors to the list of those potentially useful for proving ω = 2: either R̃(det_3) = 9 or R̃(perm_3) = 9 would imply ω = 2. This observation is an immediate consequence of the following lemma:

Lemma 2.10.
We have the following isomorphisms of tensors: T_cw,2^{⊠2} ≅ perm_3 and T_skewcw,2^{⊠2} ≅ det_3.
Lemma 2.10 is proved in §3. In particular, Theorems 2.7 and 2.9 give R̲(det_3) ≤ 17 and R(det_3) ≤ 18. Although it is not necessarily relevant for complexity theory, we actually prove stronger statements, which are important for geometry. A symmetric tensor T ∈ S^3 C^m ⊆ C^m ⊗ C^m ⊗ C^m has Waring rank one if T = a ⊗ a ⊗ a for some a ∈ C^m. The Waring rank of T, denoted R_S(T), is the smallest r such that T is a sum of r tensors of Waring rank one. The Waring border rank of T, denoted R̲_S(T), is the smallest r such that T is a limit of a sequence of tensors of Waring rank r. We actually show:

Theorem 2.11. R̲_S(det_3) ≤ 17.

and

Theorem 2.12. R_S(det_3) ≤ 18.

Proofs are given below.

2.4. Generic tensors in C^3 ⊗ C^3 ⊗ C^3. A generic tensor in C^3 ⊗ C^3 ⊗ C^3 has border rank five. We have obtained the following numerical result, labeled with an asterisk because it is only proven to hold numerically:

Theorem* 2.13.
For all T ∈ C^3 ⊗ C^3 ⊗ C^3, R̲(T^{⊠2}) ≤ 24 < 25.

This result is obtained by starting with a tensor whose entries are obtained from draws according to the uniform distribution on [−1, 1].

Problem 2.14.
Write a symbolic proof of Theorem 2.13. Even better, give a geometric proof.
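To give the flavor of the numerics behind Theorem* 2.13, the sketch below (our own illustrative code, not the authors'; it does not by itself certify a border rank bound) draws T with entries uniform on [−1, 1], forms T^{⊠2} ∈ C^9 ⊗ C^9 ⊗ C^9, and runs alternating least squares on a rank-24 ansatz. Each ALS update minimizes the fitting error exactly over one factor, so the recorded errors are non-increasing:

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.uniform(-1.0, 1.0, (3, 3, 3))
# Kronecker square of T, a tensor in C^9 x C^9 x C^9
T2 = np.einsum('abc,def->adbecf', T, T).reshape(9, 9, 9)

r = 24
A = rng.standard_normal((9, r))
B = rng.standard_normal((9, r))
C = rng.standard_normal((9, r))

def khatri_rao(U, V):
    # column-wise Kronecker product: rows indexed by pairs (u-row, v-row)
    return np.einsum('ir,jr->ijr', U, V).reshape(U.shape[0] * V.shape[0], -1)

def err(A, B, C):
    return np.linalg.norm(T2 - np.einsum('ar,br,cr->abc', A, B, C))

errors = [err(A, B, C)]
for _ in range(30):  # ALS sweeps: one exact least-squares solve per factor
    A = np.linalg.lstsq(khatri_rao(B, C), T2.reshape(9, 81).T, rcond=None)[0].T
    B = np.linalg.lstsq(khatri_rao(C, A), T2.transpose(1, 2, 0).reshape(9, 81).T, rcond=None)[0].T
    C = np.linalg.lstsq(khatri_rao(A, B), T2.transpose(2, 0, 1).reshape(9, 81).T, rcond=None)[0].T
    errors.append(err(A, B, C))

assert all(b <= a + 1e-8 for a, b in zip(errors, errors[1:]))
```

A border rank 24 decomposition need not be approachable by exact rank-24 tensors at any fixed distance, so the paper's actual computation works with curves of decompositions; this sketch only shows the optimization setup.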
Theorem 2.13 is not too surprising because C^3 ⊗ C^3 ⊗ C^3 is secant defective, in the sense that, by a dimension count, one would expect the maximum border rank of a tensor to be 4, while the actual maximum is 5. This means that for a generic tensor there is an 8-parameter family of border rank 5 decompositions, and it is not surprising that the naïve 64-parameter family of decompositions of the square has decompositions of lower border rank in its boundary.

3. Symmetries of tensors and the proof of Lemma 2.10
3.1. Symmetry groups of tensors and polynomials. The group GL(A) × GL(B) × GL(C) acts naturally on A ⊗ B ⊗ C. The map Φ : GL(A) × GL(B) × GL(C) → GL(A ⊗ B ⊗ C) has a two dimensional kernel, ker Φ = { (λ Id_A, µ Id_B, ν Id_C) : λµν = 1 } ≅ (C^*)^{×2}. In particular, the group (GL(A) × GL(B) × GL(C)) / (C^*)^{×2} is naturally identified with a subgroup of GL(A ⊗ B ⊗ C). Given T ∈ A ⊗ B ⊗ C, the symmetry group of T is the stabilizer of T in (GL(A) × GL(B) × GL(C)) / (C^*)^{×2}, that is,

(6) G_T := { g ∈ (GL(A) × GL(B) × GL(C)) / (C^*)^{×2} | g · T = T }.

Let S_k be the permutation group on k elements. We record the following observation:
Proposition 3.1.
For any tensor T ∈ A ⊗ B ⊗ C, G_{T^{⊠N}} ⊃ S_N.

Proof. Write T^{⊠N} = Σ_{I,J,K} T_{I,J,K} a_I ⊗ b_J ⊗ c_K, where I = (i_1, ..., i_N), a_I = a_{i_1} ⊗ ··· ⊗ a_{i_N}, and similarly for J and K. For σ ∈ S_N, define σ · T^{⊠N} = Σ_{I,J,K} T_{I,J,K} a_{σ(I)} ⊗ b_{σ(J)} ⊗ c_{σ(K)}. Since T_{I,J,K} = T_{i_1 j_1 k_1} ··· T_{i_N j_N k_N}, we have T_{I,J,K} = T_{σ(I),σ(J),σ(K)}, and we conclude. □

Remark 3.2.
For a symmetric tensor (equivalently, a homogeneous polynomial) T ∈ S^d A, we also consider the symmetry group G^s_T := { g ∈ GL(A) | g · T = T }, where the action is the induced action on polynomials.

3.2. Proof of Lemma 2.10.
Write (−1)^σ for the sign of a permutation σ. Let

det_3 = Σ_{σ,τ ∈ S_3} (−1)^{στ} a_{σ(1)τ(1)} ⊗ b_{σ(2)τ(2)} ⊗ c_{σ(3)τ(3)},
perm_3 = Σ_{σ,τ ∈ S_3} a_{σ(1)τ(1)} ⊗ b_{σ(2)τ(2)} ⊗ c_{σ(3)τ(3)}

be the 3 × 3 determinant and permanent regarded as tensors in C^9 ⊗ C^9 ⊗ C^9.

Proof of Lemma 2.10.
After a change of basis in B and C, and after identifying the three spaces, T_skewcw,2 becomes a_0 ∧ a_1 ∧ a_2, the unique (up to scale) skew-symmetric tensor in C^3 ⊗ C^3 ⊗ C^3. In particular, T_skewcw,2 is invariant under the diagonal action of SL_3 on C^3 ⊗ C^3 ⊗ C^3. Consequently, the stabilizer of T_skewcw,2^{⊠2} in GL_9 contains (and in fact equals) (SL_3 × SL_3) ⋊ Z_2. This is the stabilizer of the determinant polynomial det_3. Since the determinant is characterized by its stabilizer, we conclude.

The tensor T_cw,2 is symmetric and, after identifying the three spaces, it coincides with a_0(a_1^2 + a_2^2) ∈ S^3 C^3. After the change of basis ã_1 := a_1 + i a_2, ã_2 := a_1 − i a_2, we obtain that T_cw,2 = a_0 ã_1 ã_2 ∈ S^3 C^3 is the square-free monomial of degree 3. The stabilizer of T_cw,2 under the action of GL_3 on S^3 C^3 is T^{SL_3} ⋊ S_3, where T^{SL_3} denotes the torus of diagonal matrices with determinant one, and S_3 acts by permuting the three basis elements. Consequently, the stabilizer of T_cw,2^{⊠2} in GL_9 contains (and in fact equals) (T^{SL_3} ⋊ S_3)^{×2} ⋊ Z_2. This is the stabilizer of the permanent polynomial perm_3. Since the permanent is characterized by its stabilizer, we conclude. □

Remark 3.3.
For the reader's convenience, here are short proofs that det_m and perm_m are characterized by their stabilizers. To see that det_m is characterized by its stabilizer, note that SL_m × SL_m = SL(E) × SL(F), acting on S^m(E ⊗ F), decomposes it as

⊕_{|π| = m} S_π E ⊗ S_π F,

which is multiplicity free, with the only trivial module S_{(1^m)} E ⊗ S_{(1^m)} F = Λ^m E ⊗ Λ^m F. To see that perm_m is characterized by its stabilizer, take the above decomposition and consider the T^{SL(E)} × T^{SL(F)}-invariants; these are the weight zero spaces (S_π E)_0 ⊗ (S_π F)_0. By [22], the weight zero spaces decompose as S_m^E × S_m^F-modules as (S_π E)_0 ⊗ (S_π F)_0 = [π]_E ⊗ [π]_F, where [π] denotes the Specht module. The only one of these that is trivial is the case π = (m).

Remark 3.4.
Even Kronecker powers of T_skewcw,2 are invariant under products of copies of SL_3 and coincide, up to a change of basis, with the Pascal determinants (see, e.g., [25]): for k even, T_skewcw,2^{⊠k} = PasDet_{3,k}, the unique, up to scale, tensor spanning (Λ^3 C^3)^{⊗k} ⊂ S^3((C^3)^{⊗k}).

Remark 3.5.
One can regard the 3 × 3 determinant and permanent as trilinear maps C^3 × C^3 × C^3 → C, where the three copies of C^3 are the first, second and third columns of a 3 × 3 matrix. Under this identification, the tensor given by the determinant is T_skewcw,2 and the one given by the permanent is T_cw,2. This perspective, combined with the notion of product rank, immediately provides the upper bounds R(perm_3) ≤ 16 (which is also a consequence of Lemma 2.10) and R(det_3) ≤ 20; see [17, 24].
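The tensor expressions for det_3 and perm_3 above are easy to sanity-check numerically. In the sketch below (our conventions: the matrix index (i, τ(i)) is flattened to 3i + τ(i), everything 0-based), contracting the symmetric tensor of a cubic polynomial against three copies of the same matrix recovers 3! = 6 times the polynomial:

```python
import itertools
import numpy as np

def sgn(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

perms = list(itertools.permutations(range(3)))

det3 = np.zeros((9, 9, 9))
perm3 = np.zeros((9, 9, 9))
for s in perms:
    for t in perms:
        idx = [3 * s[k] + t[k] for k in range(3)]
        det3[idx[0], idx[1], idx[2]] += sgn(s) * sgn(t)
        perm3[idx[0], idx[1], idx[2]] += 1

def permanent3(M):
    return sum(M[0, p[0]] * M[1, p[1]] * M[2, p[2]] for p in perms)

M = np.random.default_rng(3).standard_normal((3, 3))
v = M.ravel()
# polarization identity: <T, v x v x v> = 6 * (polynomial at M)
assert np.isclose(np.einsum('abc,a,b,c->', det3, v, v, v), 6 * np.linalg.det(M))
assert np.isclose(np.einsum('abc,a,b,c->', perm3, v, v, v), 6 * permanent3(M))
# both are symmetric tensors, as they must be
assert np.allclose(det3, det3.transpose(2, 0, 1))
assert np.allclose(perm3, perm3.transpose(1, 0, 2))
```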
Remark 3.6.
A change of basis similar to the one performed in the second part of the proof of Lemma 2.10 shows that, up to a change of basis, T_skewcw,q ∈ Λ^3 C^{q+1}. In particular, its even Kronecker powers are symmetric tensors.

4. Koszul flattenings and lower bounds for Kronecker powers
In this section we review Koszul flattenings, prove a result on the propagation of Koszul flattening lower bounds under Kronecker products, and prove Theorems 2.1 and 2.2. We give two proofs of Theorem 2.1: the first is elementary, and the method of the second generalizes to give the proof of Theorem 2.2.

4.1. Definition. Fix bases {a_i}, {b_j}, {c_k} of the vector spaces A, B, C respectively. Given T = Σ_{ijk} T^{ijk} a_i ⊗ b_j ⊗ c_k ∈ A ⊗ B ⊗ C, define the linear map

T^{∧p}_A : Λ^p A ⊗ B^* → Λ^{p+1} A ⊗ C,
X ⊗ β ↦ Σ_{ijk} T^{ijk} β(b_j) (a_i ∧ X) ⊗ c_k.

Then [29, Proposition 4.1.1] states

(7) R̲(T) ≥ rank(T^{∧p}_A) / (dim(A) − 1 choose p).

In practice, one takes a subspace A′^* ⊆ A^* of dimension 2p + 1 and restricts T (considered as a trilinear form) to A′^* × B^* × C^* to get an optimal bound, so the denominator (dim(A) − 1 choose p) is replaced by (2p choose p) in (7). Write φ : A → A/(A′^*)^⊥ =: A′ for the projection onto the quotient: the corresponding Koszul flattening map gives a lower bound for R̲(φ(T)), which, by linearity, is a lower bound for R̲(T). The case p = 1 is equivalent to Strassen's equations [38]. There are numerous expositions of Koszul flattenings and their generalizations; see, e.g., [25].

4.2. Proof of Proposition 2.6.
Write q = 2u. Fix a space A′ = ⟨e_0, e_1, e_2⟩ and define φ : A → A′ by

φ(a_0) = e_0,
φ(a_i) = e_1 for i = 1, ..., u,
φ(a_i) = e_2 for i = u + 1, ..., q.

As an element of Λ^3 A, we have T_skewcw,q = a_0 ∧ Σ_{i=1}^u a_i ∧ a_{u+i}.

We prove that if T = T_skewcw,q, then rank(T^{∧1}_{A′}) = 2(q + 2) + 1. This provides the lower bound R̲(T) ≥ ⌈(2(q + 2) + 1)/2⌉ = q + 3.

We record the images under T^{∧1}_{A′} of a basis of A′ ⊗ B^*. Fix the range i = 1, ..., u:

T^{∧1}_{A′}(e_0 ⊗ β_0) = −(e_0 ∧ e_1) ⊗ Σ_{i=1}^u c_i − (e_0 ∧ e_2) ⊗ Σ_{i=1}^u c_{u+i},
T^{∧1}_{A′}(e_0 ⊗ β_i) = (e_0 ∧ e_2) ⊗ c_0,
T^{∧1}_{A′}(e_0 ⊗ β_{u+i}) = −(e_0 ∧ e_1) ⊗ c_0,
T^{∧1}_{A′}(e_1 ⊗ β_0) = −(e_1 ∧ e_2) ⊗ Σ_{i=1}^u c_{u+i},
T^{∧1}_{A′}(e_1 ⊗ β_i) = (e_0 ∧ e_1) ⊗ c_i + (e_1 ∧ e_2) ⊗ c_0,
T^{∧1}_{A′}(e_1 ⊗ β_{u+i}) = (e_0 ∧ e_1) ⊗ c_{u+i},
T^{∧1}_{A′}(e_2 ⊗ β_0) = (e_1 ∧ e_2) ⊗ Σ_{i=1}^u c_i,
T^{∧1}_{A′}(e_2 ⊗ β_i) = (e_0 ∧ e_2) ⊗ c_i,
T^{∧1}_{A′}(e_2 ⊗ β_{u+i}) = (e_0 ∧ e_2) ⊗ c_{u+i} + (e_1 ∧ e_2) ⊗ c_0.

Notice that the image of Σ_{i=1}^u (e_1 ⊗ β_i) + Σ_{i=1}^u (e_2 ⊗ β_{u+i}) + e_0 ⊗ β_0 is (up to scale) (e_1 ∧ e_2) ⊗ c_0. This shows that the image of T^{∧1}_{A′} contains

Λ^2 A′ ⊗ c_0 + (e_1 ∧ e_2) ⊗ ⟨Σ_{i=1}^u c_i, Σ_{i=1}^u c_{u+i}⟩ + ⟨e_0 ∧ e_1, e_0 ∧ e_2⟩ ⊗ ⟨c_1, ..., c_q⟩.

These summands are clearly in disjoint subspaces, so we conclude rank(T^{∧1}_{A′}) ≥ 3 + 2 + 2q = 2q + 5. □

4.3. Propagation of lower bounds under Kronecker products.
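Before the formal statement, note the linear-algebra engine of this subsection: matrix rank is multiplicative under the Kronecker product, so a Koszul flattening of T_1 tensored with a full-rank slice of T_2 has rank equal to the product of the ranks. A quick numerical illustration (ours, with arbitrary small matrices):

```python
import numpy as np

rng = np.random.default_rng(4)

# a rank-2 "flattening" F and a (generically) full-rank 3x3 "slice" S
F = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))   # rank 2 by construction
S = rng.standard_normal((3, 3))

rF = np.linalg.matrix_rank(F)
rS = np.linalg.matrix_rank(S)
assert (rF, rS) == (2, 3)

# rank is multiplicative under the Kronecker product
K = np.kron(F, S)
assert np.linalg.matrix_rank(K) == rF * rS
```

In Proposition 4.1 below, this is applied with F a Koszul flattening of T_1 and S = T_2(α) for a suitable α ∈ A_2^*.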
A tensor T ∈ A ⊗ B ⊗ C with dim B = dim C is 1_A-generic if T(A^*) ⊆ B ⊗ C contains a full rank element. Here is a partial multiplicativity result for Koszul flattening lower bounds under Kronecker products:

Proposition 4.1. Let T_1 ∈ A_1 ⊗ B_1 ⊗ C_1, with dim B_1 = dim C_1, be a tensor with a Koszul flattening lower bound for border rank R̲(T_1) ≥ r_1 given by (T_1)^{∧p}_{A_1} (possibly after a restriction φ_1). Let T_2 ∈ A_2 ⊗ B_2 ⊗ C_2, with dim B_2 = dim C_2 = b_2, be 1_{A_2}-generic. Then

(8) R̲(T_1 ⊠ T_2) ≥ ⌈ rank((T_1)^{∧p}_{A_1}) · b_2 / (2p choose p) ⌉.

In particular, if rank((T_1)^{∧p}_{A_1}) / (2p choose p) ∈ Z, then R̲(T_1 ⊠ T_2) ≥ r_1 b_2.

Proof. After applying a restriction φ_1 as described above, we may assume dim A_1 = 2p + 1, so that the lower bound for T_1 is R̲(T_1) ≥ ⌈ rank((T_1)^{∧p}_{A_1}) / (2p choose p) ⌉. Let α ∈ A_2^* be such that T_2(α) ∈ B_2 ⊗ C_2 has full rank b_2, which exists by 1_{A_2}-genericity. Define ψ : A_1 ⊗ A_2 → A_1 by ψ = Id_{A_1} ⊗ α, and set Ψ := ψ ⊗ Id_{B_1 ⊗ B_2 ⊗ C_1 ⊗ C_2}. Then (Ψ(T_1 ⊠ T_2))^{∧p}_{A_1} provides the desired lower bound. Indeed, the linear map (Ψ(T_1 ⊠ T_2))^{∧p}_{A_1} coincides with (T_1)^{∧p}_{A_1} ⊠ T_2(α). Since matrix rank is multiplicative under the Kronecker product, we conclude. □

4.4. First proof of Theorem 2.1.
Write a_{ij} = a_i ⊗ a_j ∈ A^{⊗2}, and similarly for B^{⊗2} and C^{⊗2}. Let A′ = ⟨e_0, e_1, e_2⟩ and define the linear map φ : A^{⊗2} → A′ via

φ(a_{00}) = φ(a_{11}) = φ(a_{22}) = e_0 + e_1,
φ(a_{12}) = e_0,
φ(a_{01}) = φ(a_{10}) = e_1 + e_2,
φ(a_{02}) = φ(a_{20}) = e_2,
φ(a_{0i}) = φ(a_{i0}) = e_2 for i = 3, ..., q,
φ(a_{ij}) = 0 for all other pairs (i, j).

Write T_q := T_cw,q^{⊠2} |_{A′^* ⊗ B^{*⊗2} ⊗ C^{*⊗2}}. Consider the p = 1 Koszul flattening (T_q)^{∧1}_{A′} : A′ ⊗ B^{⊗2 *} → Λ^2 A′ ⊗ C^{⊗2}. We are going to prove that

rank((T_q)^{∧1}_{A′}) = 2(q + 2)^2.

This provides the lower bound R̲(T_cw,q^{⊠2}) ≥ (q + 2)^2, and equality follows from the sub-multiplicativity of border rank under the Kronecker product.

We proceed by induction on q. When q = 3, the result is true by a direct calculation using the p = 2 Koszul flattening with a sufficiently generic C^5 ⊂ A^{⊗2 *}, which is left to the reader. When q = 4, one does a direct computation with the p = 1 Koszul flattening, which is also left to the reader, and which provides the base of the induction.

Write W_j = a_0 ⊗ b_j ⊗ c_j + a_j ⊗ b_0 ⊗ c_j + a_j ⊗ b_j ⊗ c_0. Then T_cw,q = Σ_{j=1}^q W_j, so that T_cw,q^{⊠2} = Σ_{i,j} W_i ⊠ W_j. If q ≥ 4, write T_cw,q = T_cw,q−1 + W_q, so that

T_cw,q^{⊠2} = T_cw,q−1^{⊠2} + T_cw,q−1 ⊠ W_q + W_q ⊠ T_cw,q−1 + W_q ⊠ W_q.

Let S_q = (T_cw,q−1 ⊠ W_q + W_q ⊠ T_cw,q−1 + W_q ⊠ W_q) |_{A′^* ⊗ B^{*⊗2} ⊗ C^{*⊗2}}.

Write U_1 = A′ ⊗ ⟨β_{ij} : i, j = 0, ..., q − 1⟩ and U_2 = A′ ⊗ ⟨β_{qi}, β_{iq} : i = 0, ..., q⟩, so that U_1 ⊕ U_2 = A′ ⊗ B^{⊗2 *}. Similarly, define V_1 = Λ^2 A′ ⊗ ⟨c_{ij} : i, j = 0, ..., q − 1⟩ and V_2 = Λ^2 A′ ⊗ ⟨c_{qi}, c_{iq} : i = 0, ..., q⟩, so that V_1 ⊕ V_2 = Λ^2 A′ ⊗ C^{⊗2}. Observe that (T_{q−1})^{∧1}_{A′} is identically 0 on U_2 and its image is contained in V_1. Moreover, the image of U_2 under (S_q)^{∧1}_{A′} is contained in V_2. Representing the Koszul flattenings in blocks with respect to these decompositions, we have

(T_{q−1})^{∧1}_{A′} = [ M_{11} 0 ; 0 0 ],   (S_q)^{∧1}_{A′} = [ N_{11} 0 ; N_{21} N_{22} ],

and therefore rank((T_q)^{∧1}_{A′}) ≥ rank(M_{11} + N_{11}) + rank(N_{22}).

First, we prove that rank(M_{11} + N_{11}) ≥ rank(M_{11}) = 2(q + 1)^2. This follows by a degeneration argument. Consider the linear map given by precomposing the Koszul flattening with the projection onto U_1. Its rank is semicontinuous under degeneration. Since T_cw,q^{⊠2} degenerates to T_cw,q−1^{⊠2}, we deduce rank(M_{11} + N_{11}) ≥ rank(M_{11}). The equality rank(M_{11}) = 2(q + 1)^2 follows from the induction hypothesis.

Next, we show that rank(N_{22}) = 2(2q + 3). One records the images under (S_q)^{∧1}_{A′} of the basis vectors e_s ⊗ β_{qj} and e_s ⊗ β_{jq} (s = 0, 1, 2 and j = 0, ..., q), working modulo V_1 and, at each step, modulo the images already obtained; each computation is a straightforward application of the Koszul flattening map, which in these cases can always be performed on some copy of W_i ⊠ W_j. The images obtained in this way are linearly independent, and adding all the contributions together we obtain

rank(N_{22}) = 2(2q + 3),

as desired. Since 2(q + 1)^2 + 2(2q + 3) = 2(q + 2)^2, this concludes the proof. □

A short detour on computing ranks of equivariant maps.
We briefly explain how to exploit Schur's Lemma (see, e.g., [21]) to compute ranks of equivariant linear maps. Let G be a reductive group; in the proofs of Theorems 2.1 and 2.2, G will be a product of symmetric groups. Let Λ_G be the set of irreducible representations of G. For λ ∈ Λ_G, let W_λ denote the corresponding irreducible module.

Suppose U, V are two representations of G. Write U = ⊕_{λ ∈ Λ_G} W_λ^{⊕m_λ} and V = ⊕_{λ ∈ Λ_G} W_λ^{⊕ℓ_λ}, where m_λ is the multiplicity of W_λ in U and ℓ_λ is the multiplicity of W_λ in V. The direct summand corresponding to λ is called the isotypic component of type λ.

Let f : U → V be a G-equivariant map. By Schur's Lemma [21], f decomposes as f = ⊕_λ f_λ, where f_λ : W_λ^{⊕m_λ} → W_λ^{⊕ℓ_λ}. Consider multiplicity spaces M_λ, L_λ with dim M_λ = m_λ and dim L_λ = ℓ_λ, so that W_λ^{⊕m_λ} ≅ M_λ ⊗ W_λ as a G-module, where G acts trivially on M_λ, and similarly W_λ^{⊕ℓ_λ} ≅ L_λ ⊗ W_λ. By Schur's Lemma, the map f_λ : M_λ ⊗ W_λ → L_λ ⊗ W_λ decomposes as f_λ = φ_λ ⊗ Id_{W_λ}, where φ_λ : M_λ → L_λ. Thus rank(f) is uniquely determined by the ranks rank(φ_λ) for λ ∈ Λ_G.

The ranks rank(φ_λ) can be computed via restrictions of f. For every λ, fix a vector w_λ ∈ W_λ, so that M_λ ⊗ ⟨w_λ⟩ is a subspace of U. Here and in what follows, for a subset X ⊂ V, ⟨X⟩ denotes the span of X. Then the rank of the restriction of f to M_λ ⊗ ⟨w_λ⟩ coincides with the rank of φ_λ. We conclude rank(f) = Σ_λ rank(φ_λ) · dim W_λ.

The second proof of Theorem 2.1 and the proof of Theorem 2.2 follow the algorithm described above, exploiting the symmetries of T_cw,q. Consider the action of the symmetric group S_q on A ⊗ B ⊗ C defined by permuting the basis elements with indices {1, ..., q}. More precisely, a permutation σ ∈ S_q induces the linear map defined by σ(a_i) = a_{σ(i)} for i = 1, ..., q and σ(a_0) = a_0. The group S_q acts on B, C similarly, and the simultaneous action on the three factors defines an S_q-action on A ⊗ B ⊗ C. The tensor T_cw,q is invariant under this action.

4.5.
Second Proof of Theorem 2.1.
When q = 3, as before, one uses the p = 2 Koszul flattening with a sufficiently generic C^5 ⊂ A^{⊗2 *}. For q ≥ 4, we apply the p = 1 Koszul flattening map to the same restriction of T_cw,q^{⊠2} as in the first proof, although, to be consistent with the code at the website, we use the less appealing swap of the roles of a_1 and a_2 in the projection φ.

Since T_cw,q is invariant under the action of S_q, T_cw,q^{⊠2} is invariant under the action of S_q × S_q acting on A^{⊗2} ⊗ B^{⊗2} ⊗ C^{⊗2}. Let Γ := S_{q−3} × S_{q−3}, where S_{q−3} is the permutation group on {4, ..., q}, so that T_cw,q^{⊠2} is invariant under the action of Γ. Note further that Γ acts trivially on A′, so (T_q)^{∧1}_{A′} is Γ-equivariant, because, in general, Koszul flattenings are equivariant under the product of the three general linear groups, which is GL(A′) × GL(B^{⊗2}) × GL(C^{⊗2}) in our case. (We remind the reader that T_q := T_cw,q^{⊠2} |_{A′^* ⊗ B^{*⊗2} ⊗ C^{*⊗2}}.) We now apply the method described above to compute rank((T_q)^{∧1}_{A′}).
Let [triv] denote the trivial S q − -representation and let V denote the standard representation,that is the Specht module associated to the partition ( q − ,
1) of q −
3. We have dim[triv] = 1and dim V = q −
4. (When q = 4 only the trivial representation appears.) The spaces B and C are isomorphic as S_{q-3}-modules and they decompose as B = C = [triv]^{⊕5} ⊕ V. After fixing a 5-dimensional multiplicity space C^5 for the trivial isotypic component, we write B* = C^{q+1} = (C^5 ⊗ [triv]) ⊕ V. To distinguish the two S_{q-3}-actions, we write

B ⊗ B = ([triv]_L^{⊕5} ⊕ V_L) ⊗ ([triv]_R^{⊕5} ⊕ V_R).

Thus

B*^{⊗2} = C^{(q+1)^2} = ((C^5 ⊗ [triv]_L) ⊕ V_L) ⊗ ((C^5 ⊗ [triv]_R) ⊕ V_R)
 = (C^5 ⊗ C^5) ⊗ ([triv]_L ⊗ [triv]_R) ⊕ C^5 ⊗ ([triv]_L ⊗ V_R) ⊕ C^5 ⊗ (V_L ⊗ [triv]_R) ⊕ (V_L ⊗ V_R).

Write W_1, ..., W_4 for the four irreducible representations in the decomposition above and let M_1, ..., M_4 be the four corresponding multiplicity spaces.

Recall from [20] that a basis of V is given by standard Young tableaux of shape (q-4, 1) (with entries in 4, ..., q for consistency with the action of S_{q-3}); let w_std be the vector corresponding to the standard tableau having 4, 6, 7, ..., q in the first row and 5 in the second row. We refer to [20, §7] for the straightening laws of the tableaux. Let w_triv be a generator of the trivial representation [triv].

For each of the four isotypic components in the decomposition above, we fix a vector w_i ∈ W_i and explicitly realize the subspaces M_i ⊗ ⟨w_i⟩ of B*^{⊗2} as follows:

W_i | w_i | dim M_i | M_i ⊗ ⟨w_i⟩
[triv]_L ⊗ [triv]_R | w_triv ⊗ w_triv | 25 | ⟨β_{ij} : i, j = 0, ..., 3⟩ ⊕ ⟨Σ_{j=4}^q β_{ij} : i = 0, ..., 3⟩ ⊕ ⟨Σ_{i=4}^q β_{ij} : j = 0, ..., 3⟩ ⊕ ⟨Σ_{i,j=4}^q β_{ij}⟩
[triv]_L ⊗ V_R | w_triv ⊗ w_std | 5 | ⟨β_{i4} - β_{i5} : i = 0, ..., 3⟩ ⊕ ⟨Σ_{i=4}^q (β_{i4} - β_{i5})⟩
V_L ⊗ [triv]_R | w_std ⊗ w_triv | 5 | ⟨β_{4j} - β_{5j} : j = 0, ..., 3⟩ ⊕ ⟨Σ_{j=4}^q (β_{4j} - β_{5j})⟩
V_L ⊗ V_R | w_std ⊗ w_std | 1 | ⟨β_{44} - β_{45} - β_{54} + β_{55}⟩

The subspaces in C^{⊗2} are realized similarly.

Since (T_{cw,q}^{⊠2})^{∧A′} is Γ-equivariant, by Schur's Lemma it has the isotypic decomposition (T_{cw,q}^{⊠2})^{∧A′} = f_1 ⊕ f_2 ⊕ f_3 ⊕ f_4, where f_i : A′ ⊗ (M_i ⊗ W_i) → Λ²A′ ⊗ (M_i ⊗ W_i). As explained above, it suffices to compute the ranks of the restrictions Φ_i : A′ ⊗ M_i ⊗ ⟨w_i⟩ → Λ²A′ ⊗ M_i ⊗ ⟨w_i⟩. Using the bases presented in the fourth column of the table above, we write down the four matrices representing the maps Φ_1, ..., Φ_4.

The map Φ_4 is represented by an explicit 3 × 3 matrix, and rank(Φ_4) = 2. The map Φ_2 is represented by an explicit 15 × 15 matrix whose entries are integers and linear forms in q′ = q - 4 (the matrix is reproduced in Appendix F). We prove this matrix and those that follow are as asserted for all q in §7. The proof goes by showing each entry must be a low degree polynomial in q, and then one simply tests enough small cases to fix the polynomials. Thus rank(Φ_2) = 12, and similarly rank(Φ_3) = 12. The map Φ_1 is represented by a 75 × 75 matrix that can be presented in block form with blocks built from three explicit matrices X, Y, Z, where Y has entries involving q′ (again see Appendix F). We compute rank(Φ_1) = 72.

Although these matrices are of fixed size, they are obtained via intermediate tensors whose dimensions depend on q, which created a computational challenge. Two ways of addressing the challenge (including the one utilized in the code) are explained in §7. The matrices are in Appendix F, and the code computing their ranks is in Appendix H. The ranks are bounded below by taking a matrix M (which has some entries depending linearly on q), multiplying it on the left by a rectangular matrix P whose entries are rational functions of q, and on the right by a rectangular matrix Q whose entries are constant, to obtain a square matrix PMQ that is upper triangular with ±1's on the diagonal; the equations making the denominators appearing in P vanish have no integral solutions in the relevant range of q. Adding all the contributions gives

rank((T_{cw,q}^{⊠2})^{∧A′}) = 2·dim(V ⊗ V) + 12·dim([triv] ⊗ V) + 12·dim(V ⊗ [triv]) + 72·dim([triv] ⊗ [triv])
 = 2(q-4)² + 12(q-4) + 12(q-4) + 72 = 2(q+2)².

This concludes the proof of Theorem 2.1.
Remark 4.2. One might have hoped to exploit the full symmetry group S_q × S_q to simplify the argument further. However, there is no choice of a restriction map ψ that is S_{q-s} × S_{q-s}-invariant for small s.

Proof of Theorem 2.2.
We will use a Koszul flattening with p = 2, so we need a 5-dimensional subspace of (A*)^{⊗3}. Let A′* ⊂ (A*)^{⊗3} be the 5-dimensional subspace spanned by α_{000}, Σ_{i=1}^q (α_{i00} + α_{0i0} + α_{00i}), and three further explicit signed sums of basis vectors α_{ijk} (they are reproduced in the code in Appendix H). Write φ : A^{⊗3} → A′ for the resulting projection map and, abusing notation, also for the induced map A^{⊗3} ⊗ B^{⊗3} ⊗ C^{⊗3} → A′ ⊗ B^{⊗3} ⊗ C^{⊗3}. Write T = φ(T_{cw,q}^{⊠3}), suppressing the q from the notation. Consider the Koszul flattening:

T^{∧A′} : Λ²A′ ⊗ B*^{⊗3} → Λ³A′ ⊗ C^{⊗3}.

We will show rank(T^{∧A′}) = 6(q+2)³, which implies R(T_{cw,q}^{⊠3}) ≥ (q+2)³, since a rank one tensor contributes at most (4 choose 2) = 6 to the rank of such a flattening.

In order to compute rank(T^{∧A′}), we follow the same strategy as before. The matrices that arise are in Appendix F, and the code that generates them is in Appendix H. The explanation of how we proved they are as asserted is outlined in §7.

The map T^{∧A′} is invariant under the action of Γ = S_{q-4} × S_{q-4} × S_{q-4}, where the first copy of S_{q-4} permutes the basis elements with indices 5, ..., q of the first factors, and similarly for the other copies of S_{q-4}. Let [triv] be the trivial S_{q-4}-representation and let V be the standard representation, namely the Specht module associated to the partition (q-5, 1). Then dim V = q-5, so if q = 5, only the trivial representation appears.

The S_{q-4}-isotypic decomposition of B (and C) is (C^6 ⊗ [triv]) ⊕ V, and this induces the decomposition of B*^{⊗3} ≃ C^{⊗3} given by

B*^{⊗3} ≃ C^{⊗3} = (C^6)^{⊗3} ⊗ ([triv] ⊗ [triv] ⊗ [triv])
 ⊕ (C^6)^{⊗2} ⊗ [([triv] ⊗ [triv] ⊗ V) ⊕ ([triv] ⊗ V ⊗ [triv]) ⊕ (V ⊗ [triv] ⊗ [triv])]
 ⊕ C^6 ⊗ [([triv] ⊗ V ⊗ V) ⊕ (V ⊗ V ⊗ [triv]) ⊕ (V ⊗ [triv] ⊗ V)]
 ⊕ V ⊗ V ⊗ V,

consisting of eight isotypic components. As in the previous proof, for each of the eight irreducible components W_i, we fix w_i ∈ W_i and compute the rank of the restriction of the Koszul flattening to Λ²A′ ⊗ M_i ⊗ ⟨w_i⟩; call this restriction Φ_i. The ranks of the restrictions are recorded in the following table:

W_i | dim(Λ²A′ ⊗ M_i) | rank(Φ_i)
[triv] ⊗ [triv] ⊗ [triv] | 6³·(5 choose 2) = 2160 | 2058
[triv] ⊗ [triv] ⊗ V (and permutations) | 6²·(5 choose 2) = 360 | 294
[triv] ⊗ V ⊗ V (and permutations) | 6·(5 choose 2) = 60 | 42
V ⊗ V ⊗ V | (5 choose 2) = 10 | 6

The relevant matrices are available in Appendix G, with the code to do the computation in Appendix H. As before, the ranks are bounded below by taking a matrix M (which has some entries depending linearly on q), multiplying it on the left by a rectangular matrix P whose entries are rational functions of q, and on the right by a rectangular matrix Q whose entries are constant, to obtain a square matrix PMQ that is upper triangular with ±1's on the diagonal; the equations making the denominators of P vanish have no integral solutions in the relevant range of q. Adding all the contributions gives

rank(T^{∧A′}) = 6·dim(V ⊗ V ⊗ V) + 42·3·dim([triv] ⊗ V ⊗ V) + 294·3·dim([triv] ⊗ [triv] ⊗ V) + 2058·dim([triv] ⊗ [triv] ⊗ [triv])
 = 6(q-5)³ + 126(q-5)² + 882(q-5) + 2058 = 6(q+2)³.

This concludes the proof.

5. Upper bounds for Waring rank and border rank of det_3
Let ϑ = exp(2 πi/
6) and let ϑ be its inverse. The matrices inthe following decomposition represent elements of C = C ⊗ C . The tensor det = T ⊠ skewcw, ∈ S ( C ⊗ C ) has the following Waring rank 18 decomposition:det = − ϑ −
00 0 ϑ ⊗ + − ϑ −
00 0 ϑ ⊗ + − ϑ ϑ
00 0 ϑ ⊗ + − − ϑ − ϑ ⊗ + ϑ − ϑ ⊗ + ϑ − a − ϑ ⊗ + ϑ − a ⊗ + ϑ − ϑ − ϑ ⊗ + ϑ − ϑ ⊗ + − ϑ
00 0 ϑ − ⊗ + − ϑ
00 0 ϑ − ⊗ +
00 0 − − ⊗ + − − ⊗ + ϑ ϑ ⊗ + ϑ ϑ ⊗ + ϑ − ϑ
01 0 0 ⊗ + ϑ − ϑ − ϑ ⊗ + ϑ − ϑ
01 0 0 ⊗ . The verification of the equality is straight-forward. (cid:3)
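For intuition, a toy version of such a verification can be carried out in exact integer arithmetic. The classical identity 24xyz = (x+y+z)³ − (x+y−z)³ − (x−y+z)³ + (x−y−z)³ is a Waring decomposition of the monomial xyz into four cubes of linear forms; since both sides have degree at most 3 in each variable, it suffices to compare them on a 4 × 4 × 4 grid of integer points. (This is our own illustration; the 18-term decomposition above is checked in the same spirit, with coefficients involving ϑ.)

```python
from itertools import product

# The four linear forms, as coefficient triples, with the sign of each cube.
forms = [((1, 1, 1), 1), ((1, 1, -1), -1), ((1, -1, 1), -1), ((1, -1, -1), 1)]

def rhs(x, y, z):
    """Right-hand side: signed sum of cubes of the four linear forms."""
    return sum(s * (c[0] * x + c[1] * y + c[2] * z) ** 3 for c, s in forms)

# Two polynomials of degree <= 3 in each variable agree iff they agree on the
# grid {0, 1, 2, 3}^3, so this loop is a complete (exact) verification.
assert all(rhs(x, y, z) == 24 * x * y * z
           for x, y, z in product(range(4), repeat=3))
print("24xyz = (x+y+z)^3 - (x+y-z)^3 - (x-y+z)^3 + (x-y-z)^3 verified")
```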
Proof of Theorem 2.12. We first present the proof and then explain how it was arrived at.

Proof. Set w_1(t), ..., w_17(t) to be seventeen explicit 3 × 3 matrices whose entries are polynomials in t and in parameters z_α (the matrices are given in the appendices on the website). We will identify algebraic values for the z_α ∈ Q̄, occurring in number fields of extension degree at most 81, such that

(9) t^e det_3 + O(t^{e+1}) = Σ_{i=1}^{17} w_i(t)^{⊗3}

for an explicit exponent e, which will complete the proof: (9) exhibits t^e det_3 as the limit of a sequence of Waring rank 17 tensors.

First, we record machine precision approximations of each z_α (the approximate values are listed in Appendix C). These values satisfy a system of polynomial equations with rational coefficients extracted from (9); there turn out to be just 55 independent equations in the system, each an integer-coefficient quadratic or cubic relation among the z_α.

Since the equations (9) have rational coefficients, each z_α is an algebraic number, and the z_α will thus be determined by their minimal polynomials and the approximate solutions. For example, z_1 is the root closest to its numerical approximation of an explicit integer polynomial recorded in Appendix C. See Appendix C on the website for the minimal polynomials of all the parameters.

It remains to prove these algebraic numbers solve the system (9). In what follows, we explain how to reduce the calculation to the verification of (1), (2), and (3) below. The data to carry out the verifications is provided in Appendices C and D on the website. The idea of the proof is essentially standard: we wish to identify the smallest number field of Q̄ containing all the parameters.
For our purposes, we identify number fields by distinguished primitive roots, or equivalently as fields Q[x]/(p(x)) equipped with an embedding into Q̄. Elements in a number field are represented as polynomials in the primitive root. Given such a representation, exact arithmetic is possible, so in principle one could simply evaluate the polynomials of equation (9) in this field and check that the result is zero. However, identifying this global number field appears to be difficult, so we instead exploited additional ad hoc observations. The first observation is that the subfield containing all the cubes z_α³ is as simple as possible: the number field of degree 27 obtained by adjoining any one z_α³ to Q contains all the rest of the z_β³'s. The expression of each z_α³ as a polynomial in a primitive root, as well as the minimal polynomial of the primitive root, are provided on the website. Call this number field containing the cubes K. Clearly, any monomial in the z_α also has its cube in K. Hence, to prove that an equation of (9) is satisfied, one could compute inside K the cubes of the monomials which appear and take cubic field extensions of K until one arrives at an extension containing all the monomials themselves. It turned out that for each equation of (9), at most one such cubic extension was needed. That is, if one of the cubes of the monomials appearing does not have a cube root in K, adjoining this cube root to K yields a field containing all the rest of the monomials. With the monomials represented in a common field, we checked the equation is satisfied with exact arithmetic in this field. Hence, for each equation of (9), there is a number field F containing all the monomials which appear.
The minimal polynomial of a primitive root of F, the expressions of the monomial values as polynomials in this primitive root, and the description of the embedding K → F by the image of the primitive root of K expressed as a polynomial in the primitive root of F, are all provided in Appendix C on the website. Then, to check the claims, one must check, for each equation:
(1) The embedding K → F is well defined and injective. This is checked by computing the minimal polynomial in F of the claimed image of the primitive root of K, and verifying it agrees with the minimal polynomial in K.
(2) The cubes of the values of the monomials in F equal the values of the monomials computed in the z_α inside K, then embedded into F.
(3) The equation is satisfied using exact arithmetic in F. □

Explanations:
Many steps were accomplished by finding solutions of polynomial equations by nonlinear optimization. In each case, this was accomplished using a variant of Newton's method applied to the mapping of variable values to corresponding polynomial values. The result of this procedure in each case is limited precision machine floating point numbers.

First, we attempted to solve the equations describing a Waring rank 17 decomposition of det_3 with nonlinear optimization, namely det_3 = Σ_{i=1}^{17} (w′_i)^{⊗3}, where w′_i ∈ C^{3×3}. Instead of finding a solution to working precision, we obtained a sequence of local refinements to an approximate solution where the norm of the defect is slowly converging to zero, and some of the parameter values are exploding to infinity. Numerically, these are Waring decompositions of polynomials very close to det_3.

Next, this approximate solution needed to be upgraded to a solution to equation (9). We found a choice of parameters in the neighborhood of a solution, and then applied local optimization to solve to working precision. We used the following method: consider the linear mapping M : C^{17} → S³(C^{3×3}), M(e_i) = (w′_i)^{⊗3}, and let M = UΣV* be its singular value decomposition (with respect to the standard inner products for the natural coordinate systems). We observed that the singular values seemed to be naturally partitioned by order of magnitude. We estimated this magnitude factor as t_0, and wrote Σ′ for Σ with each singular value multiplied by (t/t_0)^k, with k chosen to agree with this observed partitioning, so that the constants remaining were reasonably sized. Finally, we let M′ = UΣ′V*, which has entries in C[[t]]; M′ is thus a representation of the map M with a parameter t.

Next, for each i, we optimized to find a best fit to the equation (a_i + t b_i + t² c_i)^{⊗3} = M′(e_i), which is defined by polynomial equations in the entries of a_i, b_i, and c_i.
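The precision-doubling behavior this pipeline relies on can be seen in a minimal self-contained sketch (ours, not the paper's code): Newton's method run in exact rational arithmetic, where each iteration roughly doubles the number of correct bits.

```python
from fractions import Fraction

def newton_refine(f, df, x0, iters):
    """Refine an approximate root of f by Newton's method in exact arithmetic."""
    x = Fraction(x0)
    for _ in range(iters):
        x -= f(x) / df(x)
    return x

# Refine a crude approximation of sqrt(2), a root of f(x) = x^2 - 2. Correct
# bits roughly double per iteration, so 14 iterations starting from ~4 correct
# bits leave a residual far below 2^-10000.
f = lambda x: x * x - 2
df = lambda x: 2 * x
x = newton_refine(f, df, Fraction(3, 2), 14)
assert abs(f(x)) < Fraction(1, 2 ** 10000)
```

The same doubling makes it cheap to push approximate parameter values to the thousands of bits needed before attempting integer relation detection.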
The a_i, b_i, and c_i we constructed in this way proved to be a good initial guess to optimize equation (9), and we immediately saw quadratic convergence to a solution to machine precision. At this point, we greedily sparsified the solution by speculatively zeroing values and re-optimizing, rolling back one step in case of failure. After sparsification, it turned out the c_i were not needed. The resulting matrices are those given in the proof.

To compute the minimal polynomials and other integer relationships between quantities, we used Lenstra-Lenstra-Lovász (LLL) integer lattice basis reduction [32]. As an example, let ζ ∈ R be approximately an algebraic number of degree k. Let N be a large number inversely proportional to the error of ζ. Consider the integer lattice with basis {e_i + ⌊Nζ^i⌋ e_{k+1}} ⊂ Z^{k+2}, for 0 ≤ i ≤ k. Then elements of this lattice are of the form v_0 e_0 + ··· + v_k e_k + E e_{k+1}, where E ≈ N p(ζ) for p(x) = v_0 + v_1 x + ··· + v_k x^k. Polynomials p for which ζ is an approximate root are thus distinguished by the property of having relatively small Euclidean norm in this lattice, and computing a small norm vector in an integer lattice is accomplished by LLL reduction of a known basis.

For example, the fact that the number field of degree 27 obtained by adjoining any z_α³ to Q contains all the rest was determined via LLL reduction, looking for expressions of z_α³ as a polynomial in z_β³ for some fixed β. These expressions in a common number field can be checked to have the correct minimal polynomial, and thus agree with our initial description of the z_α. LLL reduction was also used to find the expressions of values as polynomials in the primitive root of the various number fields.

After refining the known values of the parameters to 10,000 bits of precision using Newton's method, LLL reduction was successful in identifying the minimal polynomials. The degrees were simply guessed, and the results checked by evaluating the computed polynomials in the parameters to higher precision.
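To illustrate the lattice just described, the following toy sketch (our own, using a small textbook LLL over exact rationals, and not the actual computation, which involved degrees up to 81 and thousands of bits) recovers the minimal polynomial x² − 2 of ζ = √2, of degree k = 2, from a 20-digit integer approximation.

```python
from fractions import Fraction
from math import isqrt

def gram_schmidt(B):
    """Exact Gram-Schmidt over Q; returns orthogonal vectors and mu matrix."""
    Bs, mu = [], [[Fraction(0)] * len(B) for _ in B]
    for i, b in enumerate(B):
        v = [Fraction(x) for x in b]
        for j in range(i):
            d = sum(x * x for x in Bs[j])
            mu[i][j] = sum(x * y for x, y in zip(b, Bs[j])) / d
            v = [x - mu[i][j] * y for x, y in zip(v, Bs[j])]
        Bs.append(v)
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    """Textbook LLL reduction; Gram-Schmidt is recomputed from scratch after
    every update, which is slow in general but fine for this tiny lattice."""
    B = [list(b) for b in B]
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < len(B):
        for j in range(k - 1, -1, -1):          # size reduction
            r = round(mu[k][j])
            if r:
                B[k] = [x - r * y for x, y in zip(B[k], B[j])]
                Bs, mu = gram_schmidt(B)
        n2 = lambda v: sum(x * x for x in v)
        if n2(Bs[k]) >= (delta - mu[k][k - 1] ** 2) * n2(Bs[k - 1]):
            k += 1                              # Lovasz condition holds
        else:
            B[k], B[k - 1] = B[k - 1], B[k]     # swap and step back
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

# Lattice basis {e_i + floor(N * zeta^i) e_{k+1}}, zeta = sqrt(2), k = 2.
N = 10 ** 20
powers = [N, isqrt(2 * N * N), 2 * N]           # floor(N * zeta^i), i = 0, 1, 2
basis = [[int(i == j) for j in range(3)] + [powers[i]] for i in range(3)]
reduced = lll(basis)

# The shortest reduced vector encodes the relation -2 + 0*zeta + 1*zeta^2 = 0,
# i.e. (up to sign and scale) the minimal polynomial x^2 - 2.
shortest = min(reduced, key=lambda v: sum(x * x for x in v))
c0, c1, c2, err = shortest
assert err == 0 and c1 == 0 and c0 == -2 * c2 and c2 != 0
```

In practice one would use an optimized implementation (e.g. fpylll) rather than this exact-rational textbook version, but the lattice construction is the same.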
Remark 5.1. In fact, all of the z_α have algebraic degree 81, with their cubes having algebraic degree 27. Not all of the z_α are algebraic integers.

Remark 5.2. With the minimal polynomial information, it is possible to check that equation (9) is satisfied to any desired precision by the parameters.

6. Tight Tensors in C³ ⊗ C³ ⊗ C³

Following an analysis started in [15], we consider Kronecker squares of tight tensors in C³ ⊗ C³ ⊗ C³. We compute their symmetry groups and numerically give bounds on their tensor rank and border rank, highlighting the submultiplicativity properties.
6.1. Tight tensors. The differential dΦ of Φ from (6) induces a map at the level of Lie algebras: we write g_T for the annihilator of T under the action of (gl(A) ⊕ gl(B) ⊕ gl(C))/C². A tensor T ∈ A ⊗ B ⊗ C is tight if it is annihilated by a regular semisimple element of (gl(A) ⊕ gl(B) ⊕ gl(C))/C² under its natural action on A ⊗ B ⊗ C. Tightness can be defined combinatorially with respect to a basis, see e.g. [15, Def. 1.3].

Tensors useful for the laser method are often tight: T_CW,q and T_cw,q are tight (if one uses the coordinate definition of tightness, one must make a change of basis).

Regarding propagation of symmetry, in [15] it was shown that if T_1 ∈ A_1 ⊗ B_1 ⊗ C_1 and T_2 ∈ A_2 ⊗ B_2 ⊗ C_2 are concise, then

(10) g_{T_1 ⊠ T_2} ⊇ g_{T_1} ⊗ Id_{A_2 ⊗ B_2 ⊗ C_2} + Id_{A_1 ⊗ B_1 ⊗ C_1} ⊗ g_{T_2},

and that if g_{T_1} = 0 and g_{T_2} = 0 then g_{T_1 ⊠ T_2} = 0. The containment of (10) can be strict, which happens in the case of the matrix multiplication tensor. In [15], we proposed to characterize tensors T ∈ A ⊗ B ⊗ C such that g_T ⊗ Id + Id ⊗ g_T is strictly contained in g_{T^{⊠2}} ⊂ gl(A^{⊗2}) ⊕ gl(B^{⊗2}) ⊕ gl(C^{⊗2}).

6.2. Tight supports in C³ ⊗ C³ ⊗ C³.

Proposition 6.1. Up to isomorphism, the tight tensors in C³ ⊗ C³ ⊗ C³ with unextendable tight support are the following: eleven isolated tensors, denoted T^t_1, ..., T^t_10 and T^t_-, together with a one-parameter family denoted T^t_μ. Each is an explicit sum of monomials a_i ⊗ b_j ⊗ c_k: T^t_1 and T^t_2 are sums of four monomials, T^t_3, ..., T^t_8 are sums of five, T^t_9 and T^t_10 are sums of six, T^t_- is a sum of seven monomials with one coefficient equal to -1, and T^t_μ is a sum of seven monomials with one coefficient equal to μ ∈ C* ∖ {-1}.

Proof. In [15] we gave an exhaustive list of unextendable tight supports for tensors in C³ ⊗ C³ ⊗ C³. There were 13 such; however, J. Hauenstein pointed out to us that three of the supports gave rise to isomorphic tensors. Fix a support S and a tensor T with support S. For all (ijk) ∈ S, set L_{ijk} = (T_{ijk})^{-1}. Write elements of the torus as (diag(α_1, α_2, α_3), diag(β_1, β_2, β_3), diag(γ_1, γ_2, γ_3)) with α_i, β_j, γ_k ∈ C*. By (6) there are effectively only seven free parameters. We need to show that in all cases but the last, the system of equations {L_{ijk} = α_i β_j γ_k | (ijk) ∈ S} has a consistent solution, and in the last, the system with the equation for the distinguished monomial deleted has a consistent solution. This is a straightforward calculation: for example, in the last case one solves explicitly for the remaining α_i, β_j, γ_k in terms of the L_{ijk}, and one cannot normalize the final coefficient of T. □

We remark that one of the tensors in the list equals T_{CW,1} = T_{C[x]/(x³)}, i.e., it has interpretations both as the first big Coppersmith-Winograd tensor and as the structure tensor of the algebra C[x]/(x³). The tensors T_cw,2 and T_skewcw,2 respectively appear as degenerations of T^t_μ with μ = 1 and of T^t_-.

Proposition 6.2.
The dimensions of the symmetry Lie algebras, the symmetry Lie algebras of the Kronecker squares, the border ranks, and estimates of the border ranks of the Kronecker squares of the unextendable tight tensors in C³ ⊗ C³ ⊗ C³ are as follows, where the upper border rank bounds are only numerical unless the last column is labeled N/A:

[Table: for each of T^t_1, ..., T^t_10, T^t_-, T^t_μ, the columns record dim(g_T), dim(g_{T^{⊠2}}), R(T), R(T^{⊠2}), and the ℓ² error for the upper bound in the T^{⊠2} decomposition.]

The symmetry Lie algebras are computed by a straightforward calculation; see [14] for the system of linear equations. The lower bounds are obtained via Koszul flattenings. The upper border rank bounds should be considered numerical evidence only: evidence for a border rank upper bound of r for T consists of a tensor T′ of rank r near to T in the ℓ² sense (under the standard inner product in the given basis).

In summary:
Theorem 6.3. The tensors T^t_1, ..., T^t_10 all satisfy dim g_{T^{⊠2}} > 2 dim g_T. The largest jump is for the tensor equal to T_{CW,1}, whose symmetry Lie algebra goes from 6-dimensional to 28-dimensional.

7. Justification of the matrices
In this section, we describe two ways of proving that the matrices appearing in the second proof of Theorem 2.1 and in the proof of Theorem 2.2 are as asserted, one of which is carried out explicitly in the code on the website.

The computational issue is that, although the sizes of the matrices are fixed, they are obtained via intermediate matrices whose dimensions depend on q, so one needs a way of encoding such matrices and tensors efficiently. The first method of proof critically relies on the definition of a class of tensors, which we call box parameterized, whose entries and dimensions depend on a parameter q in a very structured way. In this proof one shows the entries of the output matrices are low degree, say δ, polynomials in q, and then by computing the first δ + 1 cases directly, one has proven they are as asserted for all q. The second method, which is implemented in the code, does not rely on the structure to prove anything, but the structure allows an efficient coding of the tensors that significantly facilitates the computation.

A k-way sequence of tensors T_q ∈ A¹_q ⊗ ··· ⊗ A^k_q parametrized by q ∈ N is basic box parameterized if it is of the form T_q = p(q) Σ_{(i_1,...,i_k) ∈ Φ} t_{i_1,...,i_k}, where {a_{α,s}} is a basis of A^α_q, t_{i_1,...,i_k} = a_{1,i_1} ⊗ ··· ⊗ a_{k,i_k}, p is a polynomial, and the index set Φ is defined by conditions f_j q + h_j ≤ i_j ≤ g_j q + d_j, with f_j, g_j ∈ {0, 1} and h_j, d_j ∈ Z, for each j, together with any number of equalities i_j = i_k between indices. We sometimes abuse notation and consider Φ to be its set of indices or the set of equations and inequalities defining the set of indices; no confusion should arise.

Tensor products of basic box parameterized tensors are basic box parameterized:

(p_1(q) Σ_{(i_1,...,i_k) ∈ Φ_1} t_{i_1,...,i_k}) ⊗ (p_2(q) Σ_{(j_1,...,j_l) ∈ Φ_2} t_{j_1,...,j_l}) = p_1(q) p_2(q) Σ_{(i_1,...,i_k,j_1,...,j_l) ∈ Φ_1 × Φ_2} t_{i_1,...,i_k,j_1,...,j_l}.

We next show that the contraction of a basic box parameterized tensor is basic box parameterized when q ≥ max_{i,j} {|h_i − h_j|, |d_i − d_j|}, where i and j range over those indices related by equality to the ones being contracted. To do this, we first show they are closed under summing along a coordinate (with the same restriction on q), which we may take to be i_1 without loss of generality. (This corresponds to contracting with the vector Σ_i a*_{1,i} ∈ (A¹_q)*.) That is, we wish to show that p(q) Σ_{(i_1,...,i_k) ∈ Φ} t_{i_2,...,i_k} is basic box parameterized with the above restriction on q. For this consider two cases. First, suppose there is a coordinate j ≠ 1 such that i_1 = i_j is a condition of Φ. To construct the summed tensor, adjoin to Φ the equalities i_j = i_k for all k for which i_1 = i_k is a condition of Φ. Then, deleting i_1 from the indices and replacing the bounds on i_j with

max(f_j q + h_j, f_1 q + h_1) ≤ i_j ≤ min(g_j q + d_j, g_1 q + d_1)

yields the summed tensor. The max and the min can be replaced with one of their arguments provided q ≥ max(|h_1 − h_j|, |d_1 − d_j|), so the sum is basic box parameterized with our restriction on q. Otherwise, suppose no condition of Φ equates i_1 with another index. Then the summed tensor is (g_1 q + d_1 − f_1 q − h_1 + 1) p(q) Σ_{(i_2,...,i_k) ∈ Φ′} t_{i_2,...,i_k}, where Φ′ is Φ with the conditions on i_1 deleted, which is basic box parameterized. Finally, to compute the contraction, say between indices i_j and i_k, adjoin i_j = i_k as a condition to Φ and then sum over i_j and then over i_k using the previous technique.

Call a tensor box parameterized if it is a finite sum of basic box parameterized tensors. Clearly box parameterized tensors are closed under tensor products and contraction, possibly with an easily computed restriction on q. Now, T_cw,q ∈ (C^{q+1})^{⊗3} = A ⊗ B ⊗ C is clearly box parameterized as a 3-way tensor. The tensors φ_2 ∈ A′ ⊗ (A^{⊗2})* (where dim A′ = 3) and φ_3 ∈ A′ ⊗ (A^{⊗3})* (where dim A′ = 5) defining the projection maps are box parameterized as 3-way and 4-way tensors, respectively.
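The definitions above are concrete enough to script directly. The following sketch (our own minimal model, handling a single equality constraint rather than arbitrarily many, and not the paper's implementation) encodes a basic box parameterized tensor by its coefficient polynomial and bound data (f, h, g, d) per index, implements the two cases of summing along the first index, and checks the result against brute-force expansion for several values of q.

```python
from itertools import product

class BoxTensor:
    """A basic box parameterized tensor: a coefficient polynomial p(q), bounds
    f*q + h <= i <= g*q + d (with f, g in {0, 1}) for each index, and, in this
    toy model, at most one equality constraint eq = (a, b) between positions."""
    def __init__(self, p, bounds, eq=None):
        self.p, self.bounds, self.eq = p, list(bounds), eq

    def instantiate(self, q):
        """Expand into an explicit {index tuple: value} dict for a concrete q."""
        ranges = [range(f * q + h, g * q + d + 1) for (f, h, g, d) in self.bounds]
        return {idx: self.p(q) for idx in product(*ranges)
                if self.eq is None or idx[self.eq[0]] == idx[self.eq[1]]}

def sum_index0(T):
    """Sum a BoxTensor along index 0, following the two cases in the text."""
    (f0, h0, g0, d0), rest = T.bounds[0], T.bounds[1:]
    if T.eq is not None and 0 in T.eq:
        j = T.eq[0] + T.eq[1]        # the position tied to index 0
        fj, hj, gj, dj = rest[j - 1]
        # Intersect the two bounds: for q >= max(|h0 - hj|, |d0 - dj|) the
        # max/min of the two linear functions is the lexicographically larger
        # (f, h) pair, resp. smaller (g, d) pair.
        lo, hi = max((f0, h0), (fj, hj)), min((g0, d0), (gj, dj))
        rest[j - 1] = (lo[0], lo[1], hi[0], hi[1])
        return BoxTensor(T.p, rest, None)
    # No equality involves index 0: summing multiplies the coefficient by the
    # length of the index-0 range, itself a polynomial in q.
    new_eq = None if T.eq is None else (T.eq[0] - 1, T.eq[1] - 1)
    return BoxTensor(lambda q: ((g0 - f0) * q + d0 - h0 + 1) * T.p(q), rest, new_eq)

# Example: T_q = (q+1) * sum of e_{i1,i2,i3} over i1 = i2, 0 <= i1 <= q,
# 1 <= i2 <= q, 4 <= i3 <= q+2. Sum out i1 and compare with brute force.
T = BoxTensor(lambda q: q + 1, [(0, 0, 1, 0), (0, 1, 1, 0), (0, 4, 1, 2)], eq=(0, 1))
S = sum_index0(T)
for q in range(5, 10):
    brute = {}
    for (i1, i2, i3), v in T.instantiate(q).items():
        brute[(i2, i3)] = brute.get((i2, i3), 0) + v
    assert brute == S.instantiate(q)
```

The point of such an encoding is that the symbolic representation is independent of q, so a single symbolic contraction certifies the result for every sufficiently large q at once.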
The tensors KF_2 ∈ (A′ ⊗ B^{⊗2} ⊗ C^{⊗2})* ⊗ ((A′)* ⊗ B^{⊗2}) ⊗ (Λ²A′ ⊗ C^{⊗2}) and KF_3 ∈ (A′ ⊗ B^{⊗3} ⊗ C^{⊗3})* ⊗ ((Λ²A′)* ⊗ B^{⊗3}) ⊗ (Λ³A′ ⊗ C^{⊗3}) defining the Koszul flattenings are also box parameterized, as they are the tensor product of tensors of fixed size with identity tensors, which are basic box parameterized. From this, we see that the corresponding Koszul flattenings are box parameterized, viewed in A′* ⊗ B^{⊗2} ⊗ Λ²A′ ⊗ C^{⊗2} as a 6-way tensor for the square and in Λ²A′* ⊗ B^{⊗3} ⊗ Λ³A′ ⊗ C^{⊗3} as an 8-way tensor for the cube.

Finally, consider the change of basis map which block diagonalizes the flattening according to Schur's lemma. We explain the square case; the cube case is available in the Appendix. This change of basis is the Kronecker product of the 3 × 3 identity matrix with two copies of an explicit (q+1) × (q+1) matrix. Let E_1 denote the projection operator to the isotypic component of the trivial representation; in bases, this corresponds to the first five rows of that matrix. Let E_2 denote the projection onto the standard representation, which corresponds to the sixth row. The first six columns of the inverse form an explicit matrix whose entries are rational in q with denominator q − 3. Write F_1 for the inclusion of the trivial representation into the space in the original basis, which is represented by the first five columns of the inverse, and F_2 for the inclusion of the standard representation, which is represented by the sixth column. Write V_1 for the trivial representation of S_{q−3} and V_2 for the standard representation. Then

f_{V_i ⊠ V_j} = (Id_{Λ²A′} ⊠ E_i ⊠ E_j) ∘ (T_{cw,q}^{⊠2})^{∧A′} ∘ (Id_{A′} ⊠ F_i ⊠ F_j).

(These four maps were labelled f_1, ..., f_4 in §4.) Since E_i and (q−3)F_i are clearly box parametrized, it follows that (q−3)² f_{V_i ⊠ V_j} is box parametrized. A similar argument shows that in the cube case a suitable power of (q−4) times f_{V_i ⊠ V_j ⊠ V_k} is box parameterized.

At this point the first method shows the entries of the matrices are low degree polynomials in q, so one can conclude by checking the first few cases. The fact that all tensors involved are basic box parameterized guided us in how to encode these maps efficiently so that they could be computed by direct calculation; this provides the second method and is described in Appendix H.

References
1. J. Alman and V. V. Williams,
Further Limitations of the Known Approaches for Matrix Multiplication, 9th Innov. Th. Comp. Science Conf., ITCS 2018, January 11-14, 2018, Cambridge, MA, USA, 2018, pp. 25:1–25:15.
2. , Limits on all known (and some unknown) approaches to matrix multiplication, (2018), 580–591.
3. A. Ambainis, Y. Filmus, and F. Le Gall, Fast matrix multiplication: limitations of the Coppersmith-Winograd method, Proc. of the 47th ACM Symp. Th. Comp., 2015, pp. 585–593.
4. E. Ballico, A. Bernardi, M. Christandl, and F. Gesmundo,
On the partially symmetric rank of tensor products of W-states and other symmetric tensors, Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl. (2019), 93–124.
5. D. Bini, Relations between exact and approximate bilinear algorithms. Applications, Calcolo (1980), no. 1, 87–97.
6. D. Bini, G. Lotti, and F. Romani, Approximate solutions for the bilinear form computational problem, SIAM J. Comput. (1980), no. 4, 692–697.
7. M. Bläser, Fast Matrix Multiplication, Theory of Computing, Graduate Surveys (2013), 1–60.
8. M. Bläser and V. Lysikov, On degeneration of tensors and algebras, 41st International Symposium on Mathematical Foundations of Computer Science, LIPIcs. Leibniz Int. Proc. Inform., vol. 58, Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, 2016, pp. Art. No. 19, 11.
9. P. Bürgisser, M. Clausen, and M. A. Shokrollahi, Algebraic complexity theory, Grundlehren der Mathematischen Wissenschaften, vol. 315, Springer-Verlag, Berlin, 1997.
10. M. Christandl, F. Gesmundo, and A. K. Jensen,
Border rank is not multiplicative under the tensor product, SIAM J. Appl. Alg. Geom. (2019), 231–255.
11. M. Christandl, P. Vrana, and J. Zuiddam, Barriers for fast matrix multiplication from irreversibility, arXiv:1812.06952 (2018).
12. H. Cohn, R. Kleinberg, B. Szegedy, and C. Umans, Group-theoretic algorithms for matrix multiplication, Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (Washington, DC, USA), FOCS '05, IEEE Computer Society, 2005, pp. 379–388.
13. H. Cohn and C. Umans, Fast matrix multiplication using coherent configurations, Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SIAM, Philadelphia, PA, 2012, pp. 1074–1087. MR 3202968.
14. A. Conner, F. Gesmundo, J. M. Landsberg, and E. Ventura, Tensors with maximal symmetries, in preparation.
15. A. Conner, F. Gesmundo, J. M. Landsberg, E. Ventura, and Y. Wang, A geometric study of Strassen's asymptotic rank conjecture and its variants, arXiv:1811.05511 (2018).
16. D. Coppersmith and S. Winograd, Matrix multiplication via arithmetic progressions, J. Symb. Comput. (1990), no. 3, 251–280.
17. H. Derksen, On the nuclear norm and the singular value decomposition of tensors, Found. Comp. Math. (2016), no. 3, 779–811.
18. H. Derksen and V. Makam, On non-commutative rank and tensor rank, Linear Multilinear Algebra (2018), no. 6, 1069–1084. MR 3781583.
19. K. Efremenko, A. Garg, R. Oliveira, and A. Wigderson, Barriers for rank methods in arithmetic complexity, 9th Innovations in Theoretical Computer Science, LIPIcs. Leibniz Int. Proc. Inform., vol. 94, Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, 2018, pp. Art. No. 1, 19. MR 3761737.
20. W. Fulton, Young tableaux. With applications to representation theory and geometry, London Mathematical Society Student Texts, vol. 35, Cambridge University Press, Cambridge, 1997.
21. W. Fulton and J. Harris, Representation theory: a first course, Graduate Texts in Mathematics, vol. 129, Springer-Verlag, New York, 1991.
22. D. A. Gay,
Characters of the Weyl group of SU(n) on zero weight spaces and centralizers of permutation representations, Rocky Mountain J. Math. (1976), no. 3, 449–455. MR 0414794.
23. F. Gesmundo, C. Ikenmeyer, and G. Panova, Geometric complexity theory and matrix powering, Diff. Geom. Appl. (2017), 106–127.
24. N. Ilten and Z. Teitler, Product ranks of the 3 × 3 determinant and permanent, Canad. Math. Bull. (2016), no. 2, 311–319.
25. J. M. Landsberg, Tensors: Geometry and Applications, Graduate Studies in Mathematics, vol. 128, American Mathematical Society, Providence, RI, 2012. MR 2865915.
26. ,
Geometry and complexity theory, Cambridge Studies in Advanced Mathematics, vol. 169, Cambridge University Press, Cambridge, 2017.
27. ,
Tensors: Asymptotic Geometry and Developments 2016–2018, CBMS Regional Conference Series in Mathematics, vol. 132, AMS, 2019.
28. J. M. Landsberg and M. Michałek, A 2n² − log₂(n) − 1 lower bound for the border rank of matrix multiplication, Int. Math. Res. Not. (2018), no. 15, 4722–4733.
29. J. M. Landsberg and G. Ottaviani, Equations for secant varieties of Veronese and other varieties, Ann. Mat. Pura Appl. (4) (2013), no. 4, 569–606.
30. ,
New lower bounds for the border rank of matrix multiplication, Th. of Comp. (2015), no. 11, 285–298.
31. F. Le Gall, Powers of tensors and fast matrix multiplication, Proc. 39th Int. Symp. Symb. Alg. Comp., ACM, 2014, pp. 296–303.
32. A. K. Lenstra, H. W. Lenstra, Jr., and L. Lovász, Factoring polynomials with rational coefficients, Math. Ann. (1982), no. 4, 515–534. MR 682664.
33. T. Lickteig, Typical tensorial rank, Lin. Alg. Appl. (1985), 95–120.
34. A. Schönhage, Partial and total matrix multiplication, SIAM J. Comput. (1981), no. 3, 434–455. MR 623057 (82h:68070).
35. A. V. Smirnov, The Approximate Bilinear Algorithm of Length 46 for Multiplication of 4 × 4 Matrices, arXiv:1412.1687 (2014).
36. A. Stothers,
On the Complexity of Matrix Multiplication , Ph.D. thesis, U. Edinburgh, 2010.37. V. Strassen,
Gaussian elimination is not optimal, Numerische Mathematik (1969), no. 4, 354–356.
38. , Rank and optimal computation of generic tensors, Lin. Alg. Appl. (1983), 645–685.
39. ,
Relative bilinear complexity and matrix multiplication , J. Reine Angew. Math. (1987),406–443.40. ,
The asymptotic spectrum of tensors , J. Reine Angew. Math. (1988), 102–152.41. V. V. Williams,
Multiplying matrices faster than Coppersmith-Winograd, Proc. 44th ACM Symp. Th. Comp. – STOC'12, ACM, 2012, pp. 887–898.
(A. Conner, J. M. Landsberg, E. Ventura) Department of Mathematics, Texas A&M University, College Station, TX 77843-3368, USA
E-mail address , A. Conner: [email protected]
E-mail address , J. M. Landsberg: [email protected]
E-mail address , E. Ventura: [email protected] (F. Gesmundo)
(F. Gesmundo) QMATH, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen Ø, Denmark
E-mail address, F. Gesmundo: