Isomorphism problems for tensors, groups, and cubic forms: completeness and reductions
Joshua A. Grochow ∗ Youming Qiao † July 2, 2019
Abstract
In this paper we consider the problems of testing isomorphism of tensors, p-groups, cubic forms, algebras, and more, which arise from a variety of areas, including machine learning, group theory, and cryptography. These problems can all be cast as orbit problems on multi-way arrays under different group actions. Our first two main results are:

1. All the aforementioned isomorphism problems are equivalent under polynomial-time reductions, in conjunction with the recent results of Futorny–Grochow–Sergeichuk (Lin. Alg. Appl., 2019).

2. Isomorphism of d-tensors reduces to isomorphism of 3-tensors, for any d ≥ 3.

All but one of the reductions for the preceding contributions work over arbitrary fields. Together they suggest that the aforementioned isomorphism problems form a rich and robust equivalence class, which we call Tensor Isomorphism-complete, or TI-complete for short. Furthermore, this provides a unified viewpoint on these hard isomorphism testing problems arising from a variety of areas.

We then leverage the techniques used in the above results to prove two first-of-their-kind results for Group Isomorphism (GpI):

3. We give a reduction from testing isomorphism of p-groups of exponent p and small class (c < p) to isomorphism of p-groups of exponent p and class 2. The latter are widely believed to be the hardest cases of GpI, but as far as we know, this is the first reduction from any more general class of groups to this class.

4. We give a search-to-decision reduction for isomorphism of p-groups of exponent p and class 2 in time |G|^{O(log log |G|)}. While search-to-decision reductions for Graph Isomorphism (GI) have been known for more than 40 years, as far as we know this is the first non-trivial search-to-decision reduction in the context of GpI.

Our main technique for (1), (3), and (4) is a linear-algebraic analogue of the classical graph coloring gadget, which was used to obtain the search-to-decision reduction for GI.
This gadget construction may be of independent interest and utility. The technique for (2) gives a method for encoding an arbitrary tensor into an algebra.

∗ Departments of Computer Science and Mathematics, University of Colorado, Boulder. [email protected]
† Centre for Quantum Software and Information, University of Technology Sydney. [email protected]
Introduction
Isomorphism problems in light of Babai’s breakthrough on Graph Isomorphism.
In late 2015, Babai presented a quasipolynomial-time algorithm for
Graph Isomorphism (GI) [Bab16]. This is widely regarded as one of the major breakthroughs in theoretical computer science of the past decade. Indeed, GI has been at the heart of complexity theory nearly since its inception: both Cook and Levin were thinking about GI when they defined NP [AD17, Sec. 1], Graph (Non-)Isomorphism played a special role in the creation of the class AM [Bab85, GMR85, BM88], and it still stands today as one of the few natural candidates for a problem that is "NP-intermediate," that is, in NP, but neither in P nor NP-complete [Lad75] (see [Exc] for additional candidates). Beyond its practical applications (e.g., [SV17, Irn05] and references therein) and its naturality, part of its fascination comes from its universal property: GI is universal for isomorphism problems for "explicitly given" structures [ZKT85, Sec. 15], that is, first-order structures on a set V where, e.g., a k-ary relation on V is given by listing out a subset R ⊆ V^k.

In light of Babai's breakthrough on GI [Bab16], it is natural to ask "what's next?" for isomorphism problems. That is, what isomorphism problems stand as crucial bottlenecks to further improvements on GI, and what isomorphism problems should naturally draw our attention for further exploration? Of course, one of the main open questions in the area remains whether or not GI is in P. Babai [Bab16, arXiv version, Sec. 13.2 and 13.4] already lists several isomorphism problems for further study, including Group Isomorphism, Linear Code Equivalence, and
Permutation Group Conjugacy. In this paper we expand this list in what we argue is a very natural direction, namely to isomorphism problems for multi-way arrays, also known as tensors.

Group actions on 3-way arrays. Throughout, F denotes a field, so a 3-way array is just A = (a_{i,j,k}), i ∈ [ℓ], j ∈ [n], k ∈ [m], with a_{i,j,k} ∈ F. Let GL(n, F) be the general linear group of degree n over F, and let M(n, F) denote the set of n × n matrices. There are three natural group actions on M(n, F): for A ∈ M(n, F), (1) (P, Q) ∈ GL(n, F) × GL(n, F) sends A to P^t A Q, (2) P ∈ GL(n, F) sends A to P^{-1} A P, and (3) P ∈ GL(n, F) sends A to P^t A P. These three actions endow A with different algebraic/geometric interpretations: (1) a linear map from a vector space V to another vector space W, (2) a linear map from V to itself, and (3) a bilinear map from V × V to F.

Likewise, 3-way arrays A = (a_{i,j,k}), i, j, k ∈ [n], can be naturally acted on by GL(n, F) × GL(n, F) × GL(n, F) in one way, by GL(n, F) × GL(n, F) in two different ways, and by GL(n, F) in two different ways. These five actions endow various families of 3-way arrays with different algebraic/geometric meanings, including 3-tensors, bilinear maps, matrix (associative or Lie) algebras, and trilinear forms (a.k.a. non-commutative cubic forms). (See Sec. 2 for detailed explanations.) Over finite fields, the associated isomorphism problems are in NP ∩ coAM, following essentially the same coAM protocol as for GI.

With these group actions in mind, 3-way arrays capture a variety of important structures in several mathematical and computational disciplines. They arise naturally in quantum mechanics (states are described by tensors), the complexity of matrix multiplication (matrix multiplication is described by a tensor, and its algebraic complexity is essentially its tensor rank), the Geometric Complexity Theory approach [Mul11] to the Permanent versus Determinant Conjecture [Val79] (tensors describe the boundary of the determinant orbit closure, e.g., [Lan12, Sec. 13.6.3] and [Gro12b, Sec. 3.5.1] for introductions, and [HL16, Hüt17] for applications), data analysis [KB09], and more.

(Footnote: There have been some disputes on the terminology; see the preface of [Lan12]. Our approach is to use multi-way arrays as the basic underlying object, and to use tensors as the multi-way arrays under a certain group action.)
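To make the change-of-basis action on 3-way arrays concrete, here is a small self-contained sketch (our own illustration, not code from the paper; the helper name `act` is our invention) of the GL(ℓ, F) × GL(n, F) × GL(m, F) action over a prime field, computed entrywise from the formula ((P, Q, R) · A)(i′, j′, k′) = ∑_{i,j,k} A(i, j, k) P_{ii′} Q_{jj′} R_{kk′} given in Sec. 2:

```python
# Illustrative sketch: the GL x GL x GL action on a 3-way array over F_p.
p = 5  # work over the prime field F_5

def act(P, Q, R, A):
    """Apply (P, Q, R) to the 3-way array A, entrywise mod p."""
    l, n, m = len(A), len(A[0]), len(A[0][0])
    B = [[[0] * m for _ in range(n)] for _ in range(l)]
    for ip in range(l):
        for jp in range(n):
            for kp in range(m):
                s = 0
                for i in range(l):
                    for j in range(n):
                        for k in range(m):
                            s += A[i][j][k] * P[i][ip] * Q[j][jp] * R[k][kp]
                B[ip][jp][kp] = s % p
    return B

I2 = [[1, 0], [0, 1]]
SW = [[0, 1], [1, 0]]              # permutation matrix swapping the two indices
A = [[[1, 2], [3, 4]], [[0, 1], [2, 3]]]  # a 2x2x2 array over F_5
assert act(I2, I2, I2, A) == A            # identities act trivially
assert act(SW, I2, I2, A) == [A[1], A[0]]  # swapping in the first direction
```

Acting by permutation matrices simply permutes slices in the corresponding direction, which is a convenient sanity check on the index conventions.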
Main results.
The five natural actions on 3-way arrays mentioned above each lead to a different isomorphism problem on 3-way arrays; we discuss these problems and their interpretations in Sec. 2. Our first main result, Thm. A, shows that these isomorphism problems for 3-way arrays are all equivalent under polynomial-time reductions. Due to the algebraic or geometric interpretations, these problems are further equivalent to isomorphism problems on certain classes of groups, cubic forms, trilinear forms (a.k.a. non-commutative cubic forms), associative algebras, and Lie algebras. One consequence of these results (Cor. P), along with those of [FGS19], is a reduction from GpI for p-groups of exponent p and class < p to GpI for p-groups of exponent p and class 2. Although the latter have long been believed to be the hardest cases of GpI, as far as we are aware, this is the first reduction from a more general class of groups to this class.

Although these equivalences may have been expected by some experts, they were not immediately clear to us for some time during this project. To get a sense of the non-obviousness, consider the following hypothetical question. Recall that two matrices A, B ∈ M(n, F) are called equivalent if there exist P, Q ∈ GL(n, F) such that P^{-1}AQ = B, and they are conjugate if there exists P ∈ GL(n, F) such that P^{-1}AP = B. Can we reduce testing Matrix Conjugacy to testing Matrix Equivalence? Of course, since they are both in P there is a trivial reduction; to avoid this, let us consider only reductions r which send a matrix A to a matrix r(A) such that A and B are conjugate iff r(A) and r(B) are equivalent. Nearly all reductions between isomorphism problems that we are aware of have this form (so-called "kernel reductions" [FG11]; cf. functorial reductions [Bab14]). After some thought, we realize that this is essentially impossible. The reason is that the equivalence class of a matrix is completely determined by its rank, while the conjugacy class of a matrix is determined by its rational canonical form. Among n × n matrices there are only n + 1 equivalence classes, but there are at least |F|^n rational canonical forms (coming from the choice of minimal polynomial/companion matrix). Even when F is a finite field, such a reduction would thus require an exponential increase in dimension, and when F is infinite, such a reduction is impossible (regardless of running time).

Nonetheless, one of our results is that for spaces of matrices (one form of 3-way arrays), conjugacy testing does indeed reduce to equivalence testing! This is in sharp contrast to the case of single matrices. In the above setting, it means that there exists a polynomial-time computable map φ from M(n, F) to subspaces of M(s, F), such that A, B are conjugate up to a scalar if and only if φ(A), φ(B) ≤ M(s, F) are equivalent as matrix spaces. Such a reduction may not be clear at first sight.

Our second main result reduces dTI to 3TI, for any fixed d ≥ 3. From one viewpoint, this can be seen as a linear-algebraic analogue of the now-classical reduction from d-uniform Hypergraph Isomorphism to GI (e.g., [ZKT85]). However, as the reader will see, the reduction here is quite a bit more involved, using quiver algebras and the Wedderburn–Mal'cev Theorem on complements of the Jacobson radical in associative algebras.
From another viewpoint, this can be seen as a step towards showing that 3TI is not only universal among isomorphism problems on 3-way arrays [FGS19], but perhaps is already universal for isomorphism problems on d-way arrays for any d; see Sec. 10.1. These first two results indicate the robustness and naturality of the notion of TI-completeness.

Our next set of results reduce Graph Isomorphism and Linear Code Equivalence to these isomorphism problems for 3-way arrays (Sec. 3.2). This shows that these isomorphism problems for 3-way arrays form a set of potentially harder problems than these two problems, as also supported by the current difference in their practical difficulties. It currently seems unlikely to us that either Graph Isomorphism or Code Equivalence is TI-complete.

Finally, our third main contribution is to show a search-to-decision reduction for these tensor problems (Thm. C), which may be of independent interest, leveraging our technique from above. While such a reduction has long been known for GI, for Group Isomorphism in general this remains a long-standing open question. Our techniques allow us to give a search-to-decision reduction for isomorphism of p-groups of class 2 and exponent p in time |G|^{O(log log |G|)} in the model of matrix groups over finite fields. This group class is widely regarded as the hardest case of Group Isomorphism. As far as we know, this is the first non-trivial search-to-decision reduction for testing isomorphism of a class of finite groups.

(Footnote: There is a heuristic algorithm for Linear Code Equivalence by Sendrier [Sen00], which is practically effective in many cases, though for self-dual codes it reverts to an exponential search.)
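The single-matrix contrast invoked earlier (equivalence is determined by rank alone; conjugacy is strictly finer) can be checked on a toy example. The following sketch is ours, not from the paper; it works over F_5 with hand-rolled helpers:

```python
# Toy check: A and B below both have rank 1, hence lie in the same
# equivalence class, yet A^2 = 0 while B^2 = B != 0, so A and B are
# not conjugate (conjugacy preserves such polynomial relations).
p = 5

def rank_mod_p(M):
    """Rank of M over F_p by Gaussian elimination on a copy."""
    M = [row[:] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)          # inverse of the pivot mod p
        M[r] = [x * inv % p for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] % p:
                f = M[i][c]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[r])]
        r += 1
    return r

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) % p
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[0, 1], [0, 0]]
B = [[1, 0], [0, 0]]
assert rank_mod_p(A) == rank_mod_p(B) == 1   # same equivalence class
assert matmul(A, A) == [[0, 0], [0, 0]]      # A is nilpotent
assert matmul(B, B) == B                     # B is idempotent, so not conjugate to A
```

This is exactly the counting obstruction described above in miniature: rank is the only equivalence invariant, while conjugacy sees the rational canonical form.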
Implications of main results for practical computations.
Our first main result may partly help to explain the difficulties encountered in various areas when dealing with these isomorphism problems. There is currently a significant difference between isomorphism problems for 3-way arrays and those for graphs. Namely, in sharp contrast to Graph Isomorphism—for which very effective practical algorithms have existed for some time [McK80, MP14]—the problems we consider here all still pose great difficulty even on relatively small examples in practice. Indeed, such problems have been proposed as difficult enough for cryptographic purposes [Pat96, JQSY19]. As further evidence of their practical difficulty, current algorithms implemented for Alternating Matrix Space Isometry—a problem we show is TI-complete—can handle only quite small 3-way arrays over small finite fields, but not moderately larger ones, even though in the latter case the input can still be stored in only a few megabytes. In [PSS18], motivated by machine learning applications, computations on one TI-complete problem were performed in Macaulay2 [GS], but these could not go beyond small examples either. Our results imply that the complexities of these problems arising in many fields—from computational group theory to cryptography to machine learning—are all equivalent.

Isomorphism problems for 3-way arrays as a bottleneck for graph isomorphism.
In addition to their many incarnations and practical uses mentioned above, the isomorphism problems we consider on 3-way arrays can be further motivated by their relationship to GI. Specifically, these problems both form a key bottleneck to putting GI into P, and pose a great challenge for extending the techniques used to solve GI.

Isomorphism problems for 3-way arrays stand as a key bottleneck to putting GI in P. This is because, as Babai pointed out [Bab16], Group Isomorphism is a key bottleneck to putting GI into P. Indeed, the current-best upper bounds on these two problems are now quite close: n^{O(log n)} for Group Isomorphism (originally due to [FN70, Mil78], with improved constants [Wil14, Ros13a, Ros13b]), and n^{O(log² n)} for GI [Bab16] (see [HBD17] for calculation of the exponent). Within Group Isomorphism, it is widely regarded, for several reasons (e.g., [Bae38, Hig60, Ser77, Wil15]), that the bottleneck is the class of p-groups of class 2 and exponent p (i.e., G/Z(G) is abelian and g^p = 1 for all g, p odd). Then 3-way arrays enter the picture by Baer's Correspondence [Bae38], which shows that the isomorphism problem for these groups is equivalent to telling whether two linear spaces of skew-symmetric matrices over F_p are equivalent up to transformations of the form A ↦ P^t A P. This is the Alternating Matrix Space Isometry problem, which we show in this paper is TI-complete.

(Footnote: An n × n matrix A over F is alternating if for every v ∈ F^n, v^t A v = 0. When F is not of characteristic 2, this is equivalent to the skew-symmetry condition.)

(Footnote: We thank James B. Wilson, who maintains a suite of algorithms for p-group isomorphism testing, for communicating this insight to us from his hands-on experience. We of course maintain responsibility for any possible misunderstanding, or lack of knowledge regarding the performance of other implemented algorithms.)

(Footnote: Miller attributes this to Tarjan.)

To see why the techniques for GI face great difficulty when dealing with isomorphism problems for multi-way arrays, recall that most algorithms for GI, including Babai's [Bab16], are built on two families of techniques: group-theoretic and combinatorial. One of the main differences is that the underlying group action for GI is a permutation group acting on a combinatorial structure, whereas the underlying group actions for isomorphism problems for 3-way arrays are matrix groups acting on (multi)linear structures.

Already in moving from permutation groups to matrix groups, we find many new computational difficulties that arise naturally in basic subroutines used in isomorphism testing. For example, the membership problem for permutation groups is well known to be efficiently solvable by Sims's algorithm [Sim78] (see, e.g., [Ser03] for a textbook treatment), while for matrix groups this was only recently shown to be solvable with a number-theoretic oracle over finite fields of odd characteristic [BBS09]. Correspondingly, when moving from combinatorial structures to (multi)linear algebraic structures, we also find severe limitations on the use of most combinatorial techniques, like individualizing a vertex. For example, it is quite expensive to enumerate all vectors in a vector space, while it is usually considered efficient to go through all elements of a set. Similarly, within a set, any subset has a unique complement, whereas within F_q^n, a subspace can have as many as q^{Θ(n)} complements.

Given all the differences between the combinatorial and linear-algebraic worlds, it may be surprising that combinatorial techniques for Graph Isomorphism can nonetheless be useful for
GroupIsomorphism . Indeed, guided by the postulate that alternating matrix spaces can be viewed as alinear algebraic analogue of graphs, Li and the second author [LQ17] adapted the individualisationand refinement technique, as used by Babai, Erdős and Selkow [BES80], to tackle
AlternatingMatrix Space Isometry over F q . This algorithm was recently improved [BGL + q n · poly( n, log q ) time, seems stilllimited to getting q O ( n ) -time algorithms. New techniques.
Our first new technique for the above results on 3-way arrays is to develop a linear-algebraic analogue of the coloring gadget used in the context of Graph Isomorphism (see, e.g., [KST93]). These gadgets help us restrict to various subgroups of the general linear group. Recall that, in relating GI with other isomorphism problems, coloring is a very useful idea. Given a graph G = (V, E), a coloring of vertices is a function c : V → C where C is a set of "colors." Colored isomorphism between two vertex-colored graphs asks only for isomorphisms that send vertices of one color to vertices of that same color. If we are interested in making a specific vertex v ∈ V special ("individualizing" that vertex), we can assign this vertex a unique color. Reducing Colored Graph Isomorphism to ordinary Graph Isomorphism uses certain gadgets, and we adapt this idea to the context of 3-way arrays. We note that [FGS19] constructs a related such gadget. In this paper, we develop a new gadget which we use both by itself and in combination with the gadget from [FGS19] (albeit in a new context); see Sec. 4 and Sec. 7.

Our second new technique, used to show the reduction from dTI to 3TI, is a simultaneous generalization of our reduction from 3TI to Algebra Isomorphism and the technique Grigoriev used [Gri81] to show that isomorphism in a certain restricted class of algebras is equivalent to GI. In brief outline: a 3-way array A specifies the structure constants of an algebra with basis x_1, . . . , x_n via x_i · x_j := ∑_k A(i, j, k) x_k, and this is essentially how we use it in the reduction from 3TI to Algebra Isomorphism. For arbitrary d ≥ 3, we would like to similarly use a d-way array A to specify how d-tuples of elements in some algebra 𝒜 multiply. The issue is that for 𝒜 to be an algebra, our construction must still specify how pairs of elements multiply. The basic idea is to let pairs (and triples, and so on, up to (d − 2)-tuples) multiply "freely" (that is, without additional relations), and then to use A to rewrite any product of d − 1 generators as a linear combination of the original generators. While this construction as described already gives one direction of the reduction (if the arrays are isomorphic, then so are the corresponding algebras), the other direction is trickier. For that, we modify the construction to an algebra in which short products (fewer than d − 1 generators) do not quite multiply freely, but almost. After the fact, we found out that this construction generalizes the one used by Grigoriev [Gri81] to show that GI is equivalent to Algebra Isomorphism for a certain class of algebras (see Sec. 4 for a comparison).

(Footnote: Because of the difference in verbosity of inputs, solving Group Isomorphism for this class of groups in time poly(|G|) is equivalent to solving Alternating Matrix Space Isometry in time p^{O(n+m)} for n × n matrix spaces of dimension m over F_p. The current state of the art corresponds to the nearly-trivial upper bound of |G|^{O(log |G|)} on Group Isomorphism.)
Organization.
We aim to reach as wide an audience as possible, so we start with a detailed introduction to the various isomorphism problems on 3-way arrays, and their algebraic and geometric interpretations, in Sec. 2. We then describe our results in detail in Sec. 3 and consider related work in Sec. 4. An illustration of the key technique is in Sec. 5. These sections may be viewed as an extended abstract.

The remainder of the paper gives detailed proofs of all results. Sec. 6 contains additional preliminaries. In Sec. 7, we present those reductions which use the linear-algebraic coloring technique, thus proving Thm. A(2) and Thm. C. We then finish the proof of Thm. A by presenting the remaining reductions in Sec. 8. Thm. B is proved in Sec. 9. In Sec. 10, we put forward a theory of universality for basis-explicit linear structures, in analogy with [ZKT85]. While not yet complete, this seems to provide another justification for studying Tensor Isomorphism and related problems, and it motivates some interesting open questions. In Appendix A we give a reduction from Cubic Form Equivalence to Degree-d Form Equivalence for any d ≥ 4 (for d > 4 this is easy; for d = 4 it requires some work).

The formulas for most natural group actions on 3-way arrays are somewhat unwieldy; our experience suggests that they are more easily digested when presented in the context of some of the natural interpretations of 3-way arrays as mathematical objects. To connect the interpretations with the formulas themselves, one technical tool is very useful: given a 3-way array A(i, j, k), we define its frontal slices to be the matrices A_k defined by A_k(i, j) := A(i, j, k); that is, we think of the box of A as arranged so that the i and j axes lie in the page, while the k-axis is perpendicular to the page. Similarly, its lateral slices (viewing the 3D box of A "from the side") are defined by L_j(i, k) := A(i, j, k). An ℓ × n × m array thus has m frontal slices and n lateral slices.

A natural action on arrays of size ℓ × n × m is that of GL(ℓ, F) × GL(n, F) × GL(m, F) by change of basis in each of the 3 directions, namely ((P, Q, R) · A)(i′, j′, k′) = ∑_{i,j,k} A(i, j, k) P_{ii′} Q_{jj′} R_{kk′}. We will see several interpretations of this action below.

A 3-way array A(i, j, k), where i ∈ [ℓ], j ∈ [n], and k ∈ [m], is naturally identified with a vector in F^ℓ ⊗ F^n ⊗ F^m. Letting e_i denote the i-th standard basis vector of F^n, a standard basis of F^ℓ ⊗ F^n ⊗ F^m is {e_i ⊗ e_j ⊗ e_k}. Then A represents the vector ∑_{i,j,k} A(i, j, k) e_i ⊗ e_j ⊗ e_k in F^ℓ ⊗ F^n ⊗ F^m. The natural action by GL(ℓ, F) × GL(n, F) × GL(m, F) above corresponds to changes of basis of the three vector spaces in the tensor product. The problem of deciding whether two 3-way arrays are the same under this action is called 3-Tensor Isomorphism.

(Footnote: Some authors call this Tensor Equivalence; we use "Isomorphism" both because this is the natural notion of isomorphism for such objects, and because we will be considering many different equivalence relations on essentially the same underlying objects.)

Matrix spaces. Given a 3-way array A, it is natural to consider the linear span of its frontal slices, 𝒜 = ⟨A_1, . . . , A_m⟩, also called a matrix space. One convenience of this viewpoint is that the action of GL(m, F) becomes implicit: it corresponds to change of basis within the matrix space 𝒜. This allows us to generalize the three natural equivalence relations on matrices to matrix spaces: (1) two ℓ × n matrix spaces 𝒜 and ℬ are equivalent if there exists (P, Q) ∈ GL(ℓ, F) × GL(n, F) such that P𝒜Q = ℬ, where P𝒜Q := {PAQ : A ∈ 𝒜}; (2) two n × n matrix spaces 𝒜, ℬ are conjugate if there exists P ∈ GL(n, F) such that P𝒜P^{-1} = ℬ; and (3) they are isometric if P𝒜P^t = ℬ. The corresponding decision problems, when 𝒜 is given by a basis A_1, . . . , A_d, are Matrix Space Equivalence, Matrix Space Conjugacy, and Matrix Space Isometry, respectively.
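A quick consistency check relating the slice and array viewpoints (our own sketch, not code from the paper): with R = I, the (P, Q, R)-action acts on each frontal slice as A_k ↦ P^t A_k Q.

```python
# Sketch: frontal slices A_k(i,j) = A(i,j,k); with R = I, the
# (P, Q, R)-action acts slice-by-slice as A_k -> P^t A_k Q, over F_5.
p = 5

def frontal_slices(A):
    l, n, m = len(A), len(A[0]), len(A[0][0])
    return [[[A[i][j][k] for j in range(n)] for i in range(l)] for k in range(m)]

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y))) % p
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

def act(P, Q, A):
    """(P, Q, I) acting on the 3-way array A."""
    l, n, m = len(A), len(A[0]), len(A[0][0])
    return [[[sum(A[i][j][k] * P[i][ip] * Q[j][jp]
                  for i in range(l) for j in range(n)) % p
              for k in range(m)] for jp in range(n)] for ip in range(l)]

A = [[[1, 2], [3, 4]], [[0, 1], [2, 3]]]
P = [[1, 1], [0, 1]]
Q = [[2, 0], [1, 1]]   # both P and Q are invertible over F_5
lhs = frontal_slices(act(P, Q, A))
rhs = [matmul(matmul(transpose(P), Ak), Q) for Ak in frontal_slices(A)]
assert lhs == rhs
```

This slice-wise description is exactly why the GL(m, F) factor "disappears" in the matrix-space viewpoint: R only mixes the slices among themselves, which is change of basis inside the span.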
Nilpotent groups. If A, B are two subsets of a group G, then [A, B] denotes the subgroup generated by all elements of the form [a, b] = aba^{-1}b^{-1}, for a ∈ A, b ∈ B. The lower central series of a group G is defined as follows: γ_1(G) = G, γ_{k+1}(G) = [γ_k(G), G]. A group is nilpotent if there is some c such that γ_{c+1}(G) = 1; the smallest such c is called the nilpotency class of G, or sometimes just "class" when it is understood from context. A finite group is nilpotent if and only if it is the product of its Sylow subgroups; in particular, all groups of prime power order are nilpotent.

Bilinear maps, finite groups, and systems of polynomials.
While the matrix space viewpoint has the merit of drawing an analogy with the more familiar object of matrices, other interpretations lead to standard complexity problems that may be more familiar to some readers. For example, from an ℓ × n × m array A, we can construct a bilinear map (= system of m bilinear forms) f_A : F^ℓ × F^n → F^m, sending (u, v) ∈ F^ℓ × F^n to (u^t A_1 v, . . . , u^t A_m v)^t, where the A_k are the frontal slices of A. The group action defining Matrix Space Equivalence is equivalent to the action of GL(ℓ, F) × GL(n, F) × GL(m, F) on such bilinear maps. When ℓ = n, the action in Matrix Space Isometry is equivalent to the natural action of GL(n, F) × GL(m, F) on such bilinear maps. Two bilinear maps that are essentially the same up to such basis changes are sometimes called pseudo-isometric [BW12].

Bilinear maps of the form V × V → W turn out to arise naturally in group theory and algebraic geometry. When the A_k are skew-symmetric over F_p, p an odd prime, Baer's correspondence [Bae38] gives a bijection between finite p-groups of class 2 and exponent p (that is, in which g^p = 1 for all g and in which [G, G] ≤ Z(G)) and their corresponding bilinear maps G/Z(G) × G/Z(G) → [G, G], given by (gZ(G), hZ(G)) ↦ [g, h] = ghg^{-1}h^{-1}. Two such groups are isomorphic if and only if their corresponding bilinear maps are pseudo-isometric, if and only if, in the matrix space terminology, the matrix spaces they span are isometric. When the A_k are symmetric, by the classical correspondences between symmetric matrices and homogeneous quadratic forms, a symmetric bilinear map naturally yields a quadratic map from F^n to F^m. Two quadratic maps are isomorphic if and only if the corresponding bilinear maps are pseudo-isometric.

Cubic forms & trilinear forms.
From a 3-way array A we can also construct a cubic form (= homogeneous degree-3 polynomial) ∑_{i,j,k} A(i, j, k) x_i x_j x_k, where the x_i are formal variables. If we consider the variables as commuting—or, equivalently, if A is symmetric, meaning it is unchanged by permuting its three indices—we get an ordinary cubic form; if we consider them as non-commuting, we get a trilinear form (or "non-commutative cubic form"). In either case, the natural notion of isomorphism here comes from the action of GL(n, F) on the n variables x_i, in which P ∈ GL(n, F) transforms the preceding form into ∑_{i,j,k} A(i, j, k) (∑_{i′} P_{ii′} x_{i′}) (∑_{j′} P_{jj′} x_{j′}) (∑_{k′} P_{kk′} x_{k′}). In terms of 3-way arrays, we get (P · A)(i′, j′, k′) = ∑_{i,j,k} A(i, j, k) P_{ii′} P_{jj′} P_{kk′}. The corresponding isomorphism problems are called Cubic Form Equivalence (in the commutative case) and Trilinear Form Equivalence.

(Footnote: In this paper elements of F^n are column vectors.)

Algebras. We may also consider a 3-way array A(i, j, k), i, j, k ∈ [n], as the structure constants of an algebra (which need not be associative, commutative, nor unital), say with basis x_1, . . . , x_n, and with multiplication given by x_i · x_j = ∑_k A(i, j, k) x_k, then extended (bi)linearly. Here the natural notion of equivalence comes from the action of GL(n, F) by change of basis on the x_i. Despite the seeming similarity of this action to that on cubic forms, it turns out to be quite different: given P ∈ GL(n, F), let x⃗′ = P x⃗; then we have x′_{i′} · x′_{j′} = (∑_i P_{i′i} x_i) · (∑_j P_{j′j} x_j) = ∑_{i,j} P_{i′i} P_{j′j} x_i · x_j = ∑_{i,j,k} P_{i′i} P_{j′j} A(i, j, k) x_k = ∑_{i,j,k,k′} P_{i′i} P_{j′j} A(i, j, k) (P^{-1})_{kk′} x′_{k′}. Thus A becomes (P · A)(i′, j′, k′) = ∑_{i,j,k} A(i, j, k) P_{i′i} P_{j′j} (P^{-1})_{kk′}. The inverse in the third factor here is the crucial difference between this case and that of cubic or trilinear forms above, similar to the difference between matrix conjugacy and matrix isometry. The corresponding isomorphism problem is called Algebra Isomorphism.

Summary.
The isomorphism problems of the above structures all have 3-way arrays as the underlying object, but are determined by different group actions. It is not hard to see that there are essentially five group actions in total: 3-Tensor Isomorphism, Matrix Space Conjugacy, Matrix Space Isometry, Trilinear Form Equivalence, and Algebra Isomorphism. It turns out that these cover all the natural isomorphism problems on 3-way arrays in which the group acting is a product of GL(n, F)'s (where n is the side length of the arrays); see Sec. 6.1 for discussion.

Definition 3.1 (dTI, TI). For any field F, dTI_F denotes the class of problems that are polynomial-time Turing (Cook) reducible to d-Tensor Isomorphism over F. When we write dTI without mentioning the field, the result holds for any field. TI_F = ∪_{d ≥ 1} dTI_F.

We now state our first main theorem.

Theorem A. 3-Tensor Isomorphism reduces to each of the following problems in polynomial time.

1. Group Isomorphism for p-groups of exponent p (g^p = 1 for all g) and class 2 (G/Z(G) is abelian) given by generating matrices over F_{p^e}. Here we consider only F_{p^e} where p is an odd prime.

2. Matrix Space Isometry, even for alternating or symmetric matrix spaces.

3. Matrix Space Conjugacy, and even the special cases:
(a) Matrix Lie Algebra Conjugacy, for solvable Lie algebras L of derived length 2.
(b) Associative Matrix Algebra Conjugacy.

4. Algebra Isomorphism, and even the special cases:
(a) Associative Algebra Isomorphism, for algebras that are commutative and unital, and for algebras that are commutative and 3-nilpotent (abc = 0 for all a, b, c ∈ A).
(b) Lie Algebra Isomorphism, for 2-step nilpotent Lie algebras ([u, [v, w]] = 0 for all u, v, w).

5. Cubic Form Equivalence and Trilinear Form Equivalence.

The algebras in (3) are given by a set of matrices which linearly span the algebra, while in (4) they are given by structure constants (see "Algebras" in Sec. 2).

(Footnote: We follow a natural convention: when F is a finite field, a fixed algebraic extension of a finite field such as F̄_p, the rationals, or a fixed algebraic extension of the rationals such as Q̄, we consider the usual model of Turing machines; when F is R, C, the p-adic rationals Q_p, or other more "exotic" fields, we consider this in the Blum–Shub–Smale model over F.)

(Footnote: And even further, where L/[L, L] ≅ F.)

(Footnote: Even for algebras A whose Jacobson radical J(A) squares to zero and A/J(A) ≅ F.)
Remark 3.2.
Agrawal & Saxena [AS05, Thm. 5] gave a reduction from Cubic Form Equivalence over F to Ring Isomorphism for commutative, unital, associative algebras over F, when every element of F has a cube root. For finite fields F_q, the only such fields are those for which q = p^{2e+1} and p ≡ 2 (mod 3), which is asymptotically half of all primes. As explained after the proof of [AS05, Thm. 5], the use of cube roots seems inherent in their reduction.

Using our results in conjunction with [FGS19], we get a new reduction from Cubic Form Equivalence to Ring Isomorphism (for the same class of rings) which works over any field of characteristic 0 or p > 3. Note that our reduction is very different from the one in [AS05].

Figure 1 below summarizes where the various parts of Thm. A are proven.

We then resolve an open question well known to the experts:

Theorem B. d-Tensor Isomorphism reduces to Algebra Isomorphism.

Since the main result of [FGS19] reduces the problems in Theorem A to 3-Tensor Isomorphism (cf. [FGS19, Rmk. 1.1]), we have:
Corollary B.
Each of the problems listed in Theorem A is TI-complete. In particular, d-TI and 3-TI are equivalent.
Remark 3.3.
This phenomenon is reminiscent of the transition in hardness from 2 to 3 in k-SAT, k-Coloring, k-Matching, and many other NP-complete problems. It is interesting that an analogous phenomenon, a transition to some sort of "universality" from 2 to 3, occurs in the setting of isomorphism problems, which we believe are not NP-complete over finite fields. Remark 3.4.
Here is a brief summary of what is known about the complexity of some of these problems. Over a finite field F_q, these problems are in NP ∩ coAM. For ℓ × n × m arrays, the brute-force algorithm runs in time q^{O(ℓ² + n² + m²)}, as GL(n, F_q) can be enumerated in time q^{Θ(n²)}. Note that polynomial time in the input size here would be poly(ℓ, n, m, log q). Over any field F, these problems are in NP_F in the Blum–Shub–Smale model. When the input arrays are over Q and we ask for isomorphism over C or R, these problems are in PSPACE using quantifier elimination. By Koiran's celebrated result on Hilbert's Nullstellensatz, for equivalence over C they are in AM assuming the Generalized Riemann Hypothesis [Koi96]. When the input is over Q and we ask for equivalence over Q, it is unknown whether these problems are even decidable; classically this is studied under Algebra Isomorphism for associative, unital algebras over Q (see, e.g., [AS06, Poo14]), but by Cor. B, the question of decidability is open for all of these problems. (We asked several experts who knew of the question, but we were unable to find a written reference. Interestingly, Oldenburger [Old36] worked on what we would call d-Tensor Isomorphism as far back as the 1930s. We would be grateful for any prior written reference to the question of whether d-TI reduces to 3-TI.) For
Cubic Form Equivalence, we only show that it is in TI_F when char F > 3 or char F = 0.

[Figure 1: Reductions for Thm. A. An arrow A → B indicates that A reduces to B, i.e., A ≤_m^p B. The nodes are: d-Tensor Iso. (U_1 ⊗ ··· ⊗ U_d); 3-Tensor Iso. (U ⊗ V ⊗ W); Bilinear Map Iso. / p-Group Iso. (V ⊗ V ⊗ W); Matrix Space Conjugacy (V ⊗ V* ⊗ W); Trilinear Form Equiv. (V ⊗ V ⊗ V); Algebra Iso. (V ⊗ V ⊗ V*); Cubic Form Equiv. (a special case of Trilinear Form Equiv. when 6 is a unit); and Commutative Algebra Iso. (a special case of Algebra Iso., related to Cubic Form Equiv. via [AS05, AS06]). The arrows are labeled by Thm. B, Prop. 7.3, Cor. 7.6, Prop. 8.1, Prop. 8.3, and Cor. 8.5. For Cor. B, the five tensor problems in the center circle all reduce to 3-TI via [FGS19]. For the "V ⊗ V ⊗ W" notation, see Sec. 6.1.]

Over finite fields, several of these problems can be solved efficiently when one of the side lengths of the array is small. For d-dimensional spaces of n × n matrices, Matrix Space Conjugacy and
Isometry can be solved in q^{O(n²)} · poly(d, n, log q) time: once we fix an element of GL(n, F_q), the isomorphism problem reduces to solving linear systems of equations. Less trivially, Matrix Space Conjugacy can be solved in time q^{O(d²)} · poly(d, n, log q), and for n × m × d tensors in time q^{O(d²)} · poly(d, n, m, log q), since once we fix an element of GL(d, F_q), the isomorphism problem either becomes an instance of, or reduces to [IQ18], Module Isomorphism, which admits several polynomial-time algorithms [BL08, CIK97, IKS10, Ser00]. Finally, one can solve
Matrix Space Isometry in time q^{O(d²)} · poly(d, n, log q): once one fixes an element of GL(d, F_q), there is a rather involved algorithm [IQ18], which uses the ∗-algebra technique that originated in the study of computing with p-groups [Wil09, BW12]. We then observe that
Graph Isomorphism and
Code Equivalence reduce to 3-TI. In particular, the class TI contains the classical graph isomorphism class GI.

Recall that Code Equivalence asks whether two linear codes are equivalent under a linear transformation preserving the Hamming weights of codewords. Here the linear codes are just subspaces of F_q^n of dimension d, represented by linear bases. Linear transformations preserving Hamming weight include permutations and monomial transformations; recall that the latter consist of matrices in which every row and every column has exactly one non-zero entry. Indeed, over many fields this is without loss of generality, as Hamming-weight-preserving linear maps are always induced by monomial transformations (first proved over finite fields [Mac62], and more recently over much more general algebraic objects, e.g., [GNW04]). CodeEq has long been studied in the coding theory community; see e.g. [PR97, Sen00]. For
Code Equivalence , we observe that previous results already combine to give:
Observation 3.5.
Code Equivalence (under permutations) reduces to 3-TI.

Proof.
Code Equivalence reduces to
Matrix Lie Algebra Conjugacy [Gro12a], a special case of
Matrix Space Conjugacy, which in turn reduces to 3-TI [FGS19].

Using the linear-algebraic coloring gadget, we can extend this to equivalence of codes under monomial transformations (see Sec. 5). Given two d × n matrices A, B over F of rank d, the Monomial Code Equivalence problem is to decide whether there exist Q ∈ GL(d, F) and a monomial matrix P ∈ Mon(n, F) ≤ GL(n, F) (the product of a diagonal matrix and a permutation matrix) such that QAP = B.

Proposition 3.6.
Monomial Code Equivalence reduces to 3-TI.

Since
Graph Isomorphism reduces to
Code Equivalence [Luk93] (see [Miy96]) and [PR97](even over arbitrary fields [Gro12a]), by Obs. 3.5 and Thm. A, we have the following.
Corollary 3.7.
Graph Isomorphism reduces to
Alternating Matrix Space Isometry. Using our linear-algebraic gadgets, we also reprove this result using a much more direct reduction (see Prop. 7.1). Besides being a different construction, another reason for the additional proof is that the technique leads to the search-to-decision reduction, which we discuss below.

3.3 Application to
Group Isomorphism : reducing the nilpotency class
For several reasons, the hardest cases of
Group Isomorphism are believed to be p-groups of class 2 and exponent p; recall that these are groups in which every element has order p, the order of the group is p^n, and G/Z(G) is abelian. See "Nilpotent groups" above. While this belief has been widely held for many decades, we are not aware of any prior reduction from a more general class of groups to this class. However, by combining our results with the Lazard correspondence, we immediately get such a reduction.

Corollary P.
Let p be an odd prime. For groups generated by m matrices of size n × n, Group Isomorphism for p-groups of exponent p and class c < p reduces to Group Isomorphism for p-groups of exponent p and class 2 in time poly(n, m, log p).

Proof. By the Lazard correspondence (reproduced as Thm. 6.4 below), two p-groups of exponent p and class c < p are isomorphic if and only if their corresponding F_p-Lie algebras are. By Prop. 6.5, we can construct a generating set for the corresponding Lie algebra by applying the power series for the logarithm to the generating matrices of G. This Lie algebra is thus a subalgebra of n × n matrices, so we can generate the entire Lie algebra (using the linear-algebra version of breadth-first search; its dimension is ≤ n²) and compute its structure constants in time polynomial in n, m, and log p. Then use [FGS19] to reduce isomorphism of Lie algebras to 3-TI, and then apply Thm. A (specifically, Cor. 7.6) to reduce to isomorphism of p-groups of exponent p and class 2 given by a matrix generating set.

The only obstacle to getting this proof to work in the Cayley table model is that our reduction from 3-TI to Alternating Matrix Space Isometry (Prop. 7.3) blows up the dimension quadratically, which means the size of the group becomes |G|^{O(log |G|)} after the reduction. See Question 10.5.

Reducing search problems to their associated decision problems is a classical and intriguing topic in complexity theory. Aside from the now-standard search-to-decision reduction for SAT, one of the earliest results in this direction was by Valiant in the 1970s [Val76]. A celebrated result of Bellare and Goldwasser shows that, assuming EE ≠ NEE, there exists a language in NP for which search does not reduce to decision under polynomial-time reductions [BG94]. However, as usual for such statements based on complexity-theoretic assumptions, the problems constructed by such a proof are considered somewhat unnatural.
For natural problems, on the one hand, there are search-to-decision reductions for NP-complete problems and for GI. On the other hand, such a reduction is not known, nor expected to exist, for Nash Equilibrium [CDT09] (for which decision is trivial).

Reducing search to decision is particularly intriguing for testing isomorphism of groups. One difficulty is that it is not clear how to guess a partial solution and then make progress by restricting to a subgroup. In general, testing isomorphism of certain algebraic structures (groups, algebras, etc.) forms a large family of problems for which search-to-decision reductions are not known. Because of the close relationship between GpI and isomorphism of various algebraic structures, one might expect similar difficulties in reducing search to decision for GpI, and thus for TI-complete problems as well. Nonetheless, for Alternating Matrix Space Isometry, we are able to use the linear-algebraic coloring gadgets to get a non-trivial search-to-decision reduction.
Theorem C.
There is a search-to-decision reduction for
Alternating Matrix Space Isometry which, given n × n alternating matrix spaces A, B over F_q, computes an isometry between them if they are isometric, in time q^{Õ(n)}. The reduction queries the decision oracle with inputs of dimension at most O(n).
As a consequence, a q^{Õ(√n)}-time decision algorithm would result in a q^{Õ(n)}-time search algorithm, in contrast with the brute-force q^{O(n²)} running time. Note that in this context, the size of the input is poly(n, log q), so a q^{Õ(√n)} running time is still quite generous.

By the connection between Alternating Matrix Space Isometry and
Group Isomorphism for p-groups of class 2 and exponent p, we have the following. Note that the natural succinct input representation mentioned in the following result can have size poly(ℓ, log p) = poly(log |G|).

Corollary C.
Let p be an odd prime, and let GpIso2Exp(p) denote the isomorphism problem for p-groups of class 2 and exponent p in the model of matrix groups over F_p. For groups of order p^ℓ, there is a search-to-decision reduction for GpIso2Exp(p) running in time |G|^{O(log log |G|)} = p^{Õ(ℓ)}.

The most closely related work is that of Futorny, Grochow, and Sergeichuk [FGS19]. They show that a large family of isomorphism problems on 3-way arrays, including those involving multiple 3-way arrays simultaneously, 3-way arrays that are partitioned into blocks, or 3-way arrays where some of the blocks or sides are acted on by the same group (e.g.,
Matrix Space Isometry), all reduce to 3-TI. Our work complements theirs in that all our reductions for Thm. A go in the opposite direction, reducing 3-TI to the other problems. Some of our other results relate GI and Code Equivalence to 3-TI; the latter problems were not considered in [FGS19]. Thm. B considers d-tensors for any d ≥ 3, which were not considered in [FGS19].

In [AS05, AS06], Agrawal and Saxena considered Cubic Form Equivalence and testing isomorphism of commutative, associative, unital algebras. They showed that GI reduces to Algebra Isomorphism; Commutative Algebra Isomorphism reduces to
Cubic Form Equivalence; and
Homogeneous Degree- d Form Equivalence reduces to
Algebra Isomorphism, assuming that the underlying field has a d-th root for every field element. By combining a reduction from [FGS19], Prop. 7.3, and Cor. 8.5, we get a new reduction from Cubic Form Equivalence to Algebra Isomorphism that works over any field in which 6 is a unit, that is, fields of characteristic 0 or p > 3.

There are several other works which consider related isomorphism problems. Grigoriev [Gri81] showed that GI is equivalent to isomorphism of unital, associative algebras A such that the radical R(A) squares to zero and A/R(A) is abelian. Interestingly, we show TI-completeness for conjugacy of matrix algebras with the same abstract structure (even when A/R(A) is only 1-dimensional). Note the latter problem is equivalent to asking whether two representations of A are equivalent up to automorphisms of A. In the proof of Thm. B, which uses algebras in which R(A)^d = 0 when reducing from d-TI, we use Grigoriev's result.

Brooksbank and Wilson [BW15] showed a reduction from Associative Algebra Isomorphism (when given by structure constants) to
Matrix Algebra Conjugacy. Grochow [Gro12a], among other things, showed that GI and CodeEq reduce to
Matrix Lie Algebra Conjugacy ,which is a special case of
Matrix Space Conjugacy.

In [KS06], Kayal and Saxena considered testing isomorphism of finite rings when the rings are given by structure constants. This problem generalizes testing isomorphism of algebras over finite fields. They put this problem in NP ∩ coAM [KS06, Thm. 4.1], reduce GI to this problem [KS06, Thm. 4.4], and prove that counting the number of ring automorphisms is in FP^{AM ∩ coAM} [KS06, Thm. 5.1]. They also present a ZPP reduction from GI to Ring Automorphism.

To summarize this zoo of isomorphism problems and reductions, we include Figure 2 for reference.

[Figure 2: Summary of isomorphism problems around Graph Isomorphism and Tensor Isomorphism. A → B indicates that A reduces to B, i.e., A ≤_m^p B. Unattributed arrows indicate A is clearly a special case of B. The nodes include: Symmetric d-Tensor Diagonal Iso., Matrix p-Group Iso. (class 2, exp. p), d-Tensor Iso., Degree-d Form Eq., Alt. Matrix Space Isometry, 3-Tensor Iso., Matrix Space Conjugacy, Cubic Form Eq., Ring Iso. (basis), Ring Iso. (gens/rels), Unital Assoc. Algebra Iso., Matrix Assoc. Algebra Conj., Matrix Lie Algebra Conj. (with its commutative, semisimple, and diagonal special cases), Monomial Code Eq., Permutation Code Eq., String Isomorphism, GI, Permutation Group Conj., Factoring Integers, Alt. Matrix Space Isometry (F_{p^e}, verbose), p-Group Iso. (class 2, exp. p, table), and Group Iso. (table); the arrows are attributed to Thm. A, Thm. B, Obs. 6.3, Props. 3.6, 7.1, 8.3, and A.1, and to [FGS19], [AS05, AS06], [BW15], [Bae38], [Gro12a], [PR97, Luk93], [Luk82], [BCGQ11], [KS06], [Rón88], and [ZKT85] (classical, cf. [ZKT85], for Group Iso. in the table model). Note that the definition of ring used in [AS05] is commutative, finite, and unital; by "algebra" we mean an algebra (not necessarily associative, let alone commutative or unital) over a field. The reductions between Ring Iso. (in the basis representation) and Degree-d Form Eq. and Unital Associative Algebra Isomorphism are for rings over a field. The equivalences between Alternating Matrix Space Isometry and p-Group Isomorphism are for matrix spaces over F_{p^e}. Some TI-complete problems from Thm. A are left out for clarity.
* These results only hold over fields where every element has a d-th root. In particular, Degree-d Form Equivalence and Symmetric d-Tensor Isomorphism are TI-complete over fields with d-th roots. A finite field F_q has this property if and only if d is coprime to q − 1.
† These results only hold over rings where d! is a unit.
‡ Assuming the Generalized Riemann Hypothesis, Rónyai [Rón88] shows a Las Vegas randomized polynomial-time reduction from factoring square-free integers (probably not much easier than the general case) to isomorphism of 4-dimensional algebras over Q. Despite the additional hypotheses, this is notable as the target of the reduction is algebras of constant dimension, in contrast to all other reductions in this figure.]

Overview of one new technique, and one full proof
In this section we describe one of the key new techniques in this paper: a linear-algebraic coloring gadget. We exhibit this gadget by giving the full proof of Prop. 3.6 as an example. A related gadget was used in [FGS19] to show reductions to 3-TI; our reductions all go in the opposite direction. Furthermore, whereas the gadgets used in [FGS19] were primarily to ensure that two different blocks could not be mixed, our gadgets allow us to ensure that certain slices of a tensor can be permuted, while disallowing more general linear transformations.

In the context of GI, there are many ways to reduce Colored GI to ordinary GI; here we give one example, which will serve as an analogy for our linear-algebraic gadget. To individualize a vertex v ∈ G (give it a unique color), attach to it a large "star": if |V(G)| = n, add n + 1 new vertices to G and attach them all to v; call the resulting graph G_v. This has the effect that any automorphism of G_v must fix v, since v has degree strictly larger than any other vertex. Furthermore, if H_w is obtained by a similar construction, then there is an isomorphism G → H which sends v to w if and only if G_v ≅ H_w. Finally, if we attach stars of size n + 1 to multiple vertices v_1, ..., v_k, then any automorphism of G must permute the v_i amongst themselves, and there is an isomorphism G → H sending {v_1, ..., v_k} to {w_1, ..., w_k} if and only if the corresponding enlarged graphs are isomorphic.

We adapt this idea to the context of 3-way arrays. Let A be an ℓ × n × m 3-way array with lateral slices L_1, L_2, ..., L_n (each an ℓ × m matrix). For any vector v ∈ F^n, we get an associated lateral matrix L_v, which is a linear combination of the lateral slices as given, namely L_v := Σ_{j=1}^n v_j L_j (note that when v = e⃗_j is the j-th standard basis vector, the associated lateral matrix is indeed L_j). By analogy with adjacency matrices of graphs, L_v is a natural analogue of the neighborhood of a vertex in a graph.
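To make the star gadget on graphs concrete, here is a minimal sketch (the adjacency-set representation, helper name, and vertex labels are ours, purely illustrative):

```python
# Sketch of the graph "star" individualization gadget described above.

def attach_star(adj, v):
    """Attach n + 1 new pendant vertices to v, where n = |V(G)|."""
    n = len(adj)
    new_adj = {u: set(nbrs) for u, nbrs in adj.items()}
    for i in range(n + 1):
        w = ("star", v, i)              # fresh vertex attached only to v
        new_adj[w] = {v}
        new_adj[v] = new_adj[v] | {w}
    return new_adj

# A 4-cycle: every vertex has degree 2, so its automorphism group is large.
G = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
Gv = attach_star(G, 0)

deg = {u: len(nbrs) for u, nbrs in Gv.items()}
# Vertex 0 now has degree 2 + (4 + 1) = 7, strictly larger than any other
# vertex, so every automorphism of Gv must fix it.
assert deg[0] == 7
assert all(d < deg[0] for u, d in deg.items() if u != 0)
```

The same recipe, applied to several vertices at once, forces any automorphism to permute the individualized vertices among themselves.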
Correspondingly, we get a notion of "degree," which we may define as

deg_A(v) := rk L_v = rk( Σ_{j=1}^n v_j L_j ) = dim span{ L_v w⃗ : w⃗ ∈ F^m } = dim span{ u⃗^t L_v : u⃗ ∈ F^ℓ }.

The last two characterizations are analogous to the fact that the degree of a vertex v in a graph G may be defined as the number of "in-neighbors" (nonzero entries in the corresponding row of the adjacency matrix) or the number of "out-neighbors" (nonzero entries in the corresponding column).

To "individualize" v, we can enlarge A with a gadget to increase deg_A(v), as in the graph case. Note that deg_A(v) ≤ min{ℓ, m} because the lateral matrices are all of size ℓ × m. For notational simplicity, let us individualize v = e⃗_1 = (1, 0, ..., 0)^t. To individualize v, we will increase its degree by d = min{ℓ, m} + 1 > max_{v ∈ F^n} deg_A(v). Extend A to a new 3-way array A_v of size (ℓ + d) × n × (m + d); in the "first" ℓ × n × m "corner", we will have the original array A, and then we will append to it an identity matrix in one slice to increase deg(v). More specifically, the lateral slices of A_v will be

L'_1 = [ L_1  0 ; 0  I_d ]   and   L'_j = [ L_j  0 ; 0  0 ]   for j > 1.

Now we have that deg_{A_v}(v) ≥ d. This almost does what we want, but now note that any vector w = (w_1, ..., w_n) with w_1 ≠ 0 has deg_{A_v}(w) = rk(w_1 L'_1 + Σ_{j≥2} w_j L'_j) ≥ d. We can nonetheless consider this a sort of linear-algebraic individualization.

Leveraging this trick, we can then individualize an entire basis of F^n simultaneously, so that d ≤ deg(v) < 2d for any vector v in our basis, and deg(v') ≥ 2d for any nonzero v' outside the basis (not a scalar multiple of one of the basis vectors), as we do in the following proof of Prop. 3.6. This is also a 3-dimensional analogue of the reduction from GI to CodeEq [Luk93, Miy96, PR97] (where they use Hamming weight instead of rank).
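Before the full proof, here is a minimal computational sketch of this rank-based degree and of the individualization gadget; the array entries and helper names are ours, chosen for illustration, and rank is computed by exact Gaussian elimination over Q:

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix (list of rows) via row reduction over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def lateral(A, v):
    """L_v = sum_j v_j L_j, where L_j(i, k) = A[i][j][k] is an l x m matrix."""
    l, n, m = len(A), len(A[0]), len(A[0][0])
    return [[sum(v[j] * A[i][j][k] for j in range(n)) for k in range(m)]
            for i in range(l)]

deg = lambda A, v: rank(lateral(A, v))

# A small 2 x 2 x 2 array (illustrative values, not from the paper).
A = [[[1, 0], [0, 1]],
     [[0, 1], [1, 0]]]
assert deg(A, [1, 0]) == 2

# Individualize v = e_1: enlarge to (l+d) x n x (m+d), appending an I_d
# block to the first lateral slice, with d = min(l, m) + 1.
l, n, m = 2, 2, 2
d = min(l, m) + 1
Av = [[[A[i][j][k] if i < l and k < m else 0 for k in range(m + d)]
       for j in range(n)] for i in range(l + d)]
for t in range(d):                        # the I_d block in slice L'_1
    Av[l + t][0][m + t] = 1

assert deg(Av, [1, 0]) >= d               # e_1 now has large degree
assert deg(Av, [0, 1]) <= min(l, m)       # e_2's degree is unchanged
assert deg(Av, [1, 1]) >= d               # any w with w_1 != 0 is also large
```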
Proof of Prop. 3.6. Without loss of generality we assume d > 1, as the problem is easily solvable when d = 1. We treat a d × n matrix A as a 3-way array of size d × n × 1, and then follow the outline proposed above, individualizing the entire standard basis e⃗_1, ..., e⃗_n. Since the third direction only has length 1, the maximum degree of any column is 1, so it suffices to use gadgets of rank 2. More specifically, we build a (d + 2n) × n × (1 + 2n) 3-way array A whose lateral slice L_j is the (d + 2n) × (1 + 2n) matrix whose first column is (a_{1,j}, ..., a_{d,j}, 0, ..., 0)^t, which has an identity block I_2 in rows d + 2(j−1) + {1, 2} and columns 1 + 2(j−1) + {1, 2}, and whose remaining entries are 0.

It will also be useful to visualize the frontal slices of A, as follows. Here each entry of the "matrix" below is actually a (1 + 2n)-dimensional vector, "coming out of the page":

A =
[ ã_{1,1}  ã_{1,2}  ...  ã_{1,n} ]
[   ...      ...    ...    ...   ]
[ ã_{d,1}  ã_{d,2}  ...  ã_{d,n} ]
[ e_{1,1}    0      ...    0     ]
[ e_{1,2}    0      ...    0     ]
[   0      e_{2,1}  ...    0     ]
[   0      e_{2,2}  ...    0     ]
[   ...      ...    ...    ...   ]
[   0        0      ...  e_{n,1} ]
[   0        0      ...  e_{n,2} ]

where ã_{i,j} = (a_{i,j}, 0, ..., 0)^t ∈ F^{1+2n} and e_{i,j} = e⃗_{1+2(i−1)+j} ∈ F^{1+2n} for i ∈ [n], j ∈ [2], and the frontal slices are

A_1 = [ A ; 0_{2n×n} ],   A_{1+2(i−1)+j} = E_{d+2(i−1)+j, i}   for i ∈ [n], j ∈ [2].

(In A we turn the vectors ã_{i,j} and e_{i,j} "on their side" so they become perpendicular to the page.)

We claim that A and B are monomially equivalent as codes if and only if A and B are isomorphic as 3-tensors.

(⇒) Suppose QADP = B, where Q ∈ GL(d, F), D = diag(α_1, ..., α_n), and P ∈ S_n ≤ GL(n, F). Then by examining the frontal slices it is not hard to see that for

Q' = [ Q  0 ; 0  (DP)^{−1} ⊗ I_2 ]

(where (DP)^{−1} ⊗ I_2 denotes a 2n × 2n block matrix, in which the pattern of the nonzero blocks and the scalars are governed by (DP)^{−1}, and each 2 × 2 block is either zero or a scalar multiple of I_2), we have Q' A_1 (DP) = B_1 and Q' A_{1+2(i−1)+j} (DP) = B_{1+2(π(i)−1)+j}, where π is the permutation corresponding to P. Thus A and B are isomorphic tensors, via the isomorphism (Q', DP, diag(1, P ⊗ I_2)).

(⇐) Suppose there exist Q ∈ GL(d + 2n, F), P ∈ GL(n, F), and R ∈ GL(1 + 2n, F) such that Q A P = B^R. First, note that every lateral slice of A is of rank either 2 or 3, and the actions of Q and R do not change the ranks of the lateral slices. Furthermore, any non-trivial linear combination of more than one lateral slice results in a lateral matrix of rank ≥ 4. It follows that P cannot take nontrivial linear combinations of the lateral slices, hence it must be monomial.

Now consider the frontal slices. Note that, as we assume d > 1, every frontal slice of Q A P, except the first one, is of rank 1. Therefore, R must be of the form

[ r_{1,1}  0_{1×2n} ; r⃗'  R' ],

where R' is 2n × 2n. Since R is invertible, we must have r_{1,1} ≠ 0, and the first frontal slice of B^R contains all the rows of B scaled by r_{1,1} in its first d rows. The first frontal slice of Q A P is a matrix that generates, by definition (and since we've shown P is monomial), a code monomially equivalent to A. Since the first frontal slices of Q A P and B^R are equal, and the latter is, in its first d rows, just a scalar multiple of B, we have that A and B are monomially equivalent as codes as well. □

Font                     Object         Space of objects
A, B, ...                matrix         M(n, F) or M(ℓ × n, F)
A, B, ... (boldface)     matrix tuple   M(n, F)^m or M(ℓ × n, F)^m
A, B, ... (calligraphic) matrix space   subspaces of M(n, F) or Λ(n, F)
A, B, ... (teletype)     3-way array    T(ℓ × n × m, F)

Table 1: Summary of notation related to 3-way arrays and tensors.
Vector spaces.
Let F be a field. In this paper we only consider finite-dimensional vector spaces over F. We use F^n to denote the vector space of length-n column vectors. The i-th standard basis vector of F^n is denoted e⃗_i. Depending on the context, 0 may denote the zero vector space, a zero vector, or an all-zero matrix. Let S be a subset of vectors. We use ⟨S⟩ to denote the subspace spanned by the elements of S.

Some groups.
The general linear group of degree n over a field F is denoted by GL(n, F). The symmetric group of degree n is denoted by S_n. The natural embedding of S_n into GL(n, F) represents permutations by permutation matrices. A monomial matrix in M(n, F) is a matrix in which each row and each column has exactly one non-zero entry. The monomial matrices form a subgroup of GL(n, F), which we call the monomial subgroup and denote Mon(n, F); it is isomorphic to the semidirect product (F^*)^n ⋊ S_n. The subgroup of GL(n, F) consisting of block upper-triangular matrices with a fixed block structure is called a (standard) parabolic subgroup.

Matrices.
Let M(ℓ × n, F) be the linear space of ℓ × n matrices over F, and M(n, F) := M(n × n, F). Given A ∈ M(ℓ × n, F), A^t denotes the transpose of A. A matrix A ∈ M(n, F) is symmetric if for any u, v ∈ F^n, u^t A v = v^t A u, or equivalently A = A^t; that is, A represents a symmetric bilinear form. A matrix A ∈ M(n, F) is alternating if for any u ∈ F^n, u^t A u = 0; that is, A represents an alternating bilinear form. Note that in characteristic ≠ 2, alternating is the same as skew-symmetric, but in characteristic 2 they differ (in characteristic 2, skew-symmetric = symmetric). The linear space of n × n alternating matrices over F is denoted by Λ(n, F). The n × n identity matrix is denoted by I_n, and when n is clear from the context, we may just write I. The elementary matrix E_{i,j} is the matrix with the (i, j)-th entry being 1 and all other entries being 0. The (i, j)-th elementary alternating matrix is E_{i,j} − E_{j,i}.

Matrix tuples.
We use M(ℓ × n, F)^m to denote the linear space of m-tuples of ℓ × n matrices. Boldface letters like A and B denote matrix tuples. Let A = (A_1, ..., A_m), B = (B_1, ..., B_m) ∈ M(ℓ × n, F)^m. Given P ∈ M(ℓ, F) and Q ∈ M(n, F), P A Q := (P A_1 Q, ..., P A_m Q) ∈ M(ℓ × n, F)^m. Given R = (r_{i,j})_{i,j ∈ [m]} ∈ M(m, F), A^R := (A'_1, ..., A'_m) ∈ M(ℓ × n, F)^m, where A'_i = Σ_{j ∈ [m]} r_{j,i} A_j.

Remark 6.1.
In particular, note that A'_i corresponds to the entries in the i-th column of R. While this choice is immaterial (we could have chosen the opposite convention), all of our later calculations are consistent with this convention.

Given A, B ∈ M(ℓ × n, F)^m, we say that A and B are equivalent if there exist P ∈ GL(ℓ, F) and Q ∈ GL(n, F) such that P A Q = B. Let A, B ∈ M(n, F)^m. Then A and B are conjugate if there exists P ∈ GL(n, F) such that P^{−1} A P = B, and A and B are isometric if there exists P ∈ GL(n, F) such that P^t A P = B. Finally, A and B are pseudo-isometric if there exist P ∈ GL(n, F) and R ∈ GL(m, F) such that P^t A P = B^R.
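The column convention of Remark 6.1 is exactly what makes R ↦ A^R a right action, i.e., (A^R)^S = A^{RS}; a small sketch checking this on illustrative integer matrices (all names are ours):

```python
# Check that the column convention A'_i = sum_j r_{j,i} A_j gives an action.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def act_R(A, R):
    """A^R: the i-th output matrix is sum_j R[j][i] * A_j."""
    m = len(A)
    return [[[sum(R[j][i] * A[j][r][c] for j in range(m))
              for c in range(len(A[0][0]))] for r in range(len(A[0]))]
            for i in range(m)]

# A tuple of two 2 x 2 matrices, and two invertible 2 x 2 mixing matrices.
A = [[[1, 2], [3, 4]], [[0, 1], [1, 0]]]
R = [[1, 1], [0, 1]]
S = [[2, 0], [1, 1]]

assert act_R(A, [[1, 0], [0, 1]]) == A          # identity acts trivially
assert act_R(act_R(A, R), S) == act_R(A, mat_mul(R, S))
```

With the opposite (row) convention one would instead get (A^R)^S = A^{SR}; either choice works, but all later calculations here use the column one.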
Matrix spaces. Linear subspaces of M(ℓ × n, F) are called matrix spaces. Calligraphic letters like A and B denote matrix spaces. By a slight abuse of notation, for A ∈ M(ℓ × n, F)^m, we use ⟨A⟩ to denote the subspace spanned by the matrices in A.

3-way arrays. Let T(ℓ × n × m, F) be the linear space of ℓ × n × m 3-way arrays over F. We use the fixed-width teletype font for 3-way arrays, like A, B, etc. Given A ∈ T(ℓ × n × m, F), we can think of A as a 3-dimensional table, whose (i, j, k)-th entry is denoted A(i, j, k) ∈ F. We can slice A along one direction and obtain several matrices, which are then called slices. For example, slicing along the first coordinate, we obtain the horizontal slices, namely ℓ matrices A_1, ..., A_ℓ ∈ M(n × m, F), where A_i(j, k) = A(i, j, k). Similarly, we obtain the lateral slices by slicing along the second coordinate, and the frontal slices by slicing along the third coordinate.

We will often represent a 3-way array as a matrix whose entries are vectors. That is, given A ∈ T(ℓ × n × m, F), we can write

A =
[ w_{1,1}  w_{1,2}  ...  w_{1,n} ]
[ w_{2,1}  w_{2,2}  ...  w_{2,n} ]
[   ...      ...    ...    ...   ]
[ w_{ℓ,1}  w_{ℓ,2}  ...  w_{ℓ,n} ]

where w_{i,j} ∈ F^m, so that w_{i,j}(k) = A(i, j, k). Note that, while the w_{i,j} ∈ F^m are column vectors, in the above representation of A we should think of them as along the direction "orthogonal to the paper." Following [KB09], we call the w_{i,j} the tube fibers of A. Similarly, we have the row fibers v_{i,k} ∈ F^n, with v_{i,k}(j) = A(i, j, k), and the column fibers u_{j,k} ∈ F^ℓ, with u_{j,k}(i) = A(i, j, k).

Given P ∈ M(ℓ, F) and Q ∈ M(n, F), let P A Q be the ℓ × n × m 3-way array whose k-th frontal slice is P A_k Q. For R = (r_{i,j}) ∈ GL(m, F), let A^R be the ℓ × n × m 3-way array whose k-th frontal slice is Σ_{k' ∈ [m]} r_{k',k} A_{k'}. Note that these notations are consistent with the notations for matrix tuples above, when we consider the matrix tuple A = (A_1, ..., A_m) of frontal slices of A.

Let A ∈ T(ℓ × n × m, F) be a 3-way array. We say that A is non-degenerate as a 3-tensor if the horizontal slices of A are linearly independent, the lateral slices are linearly independent, and the frontal slices are linearly independent. Let A = (A_1, ..., A_m) ∈ M(ℓ × n, F)^m be the matrix tuple consisting of the frontal slices of A. Then it is easy to see that the frontal slices of A are linearly independent if and only if dim(⟨A⟩) = m. The lateral (resp., horizontal) slices of A are linearly independent if and only if the intersection of the right (resp., left) kernels of the A_i is zero.
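The three families of slices can be extracted directly from the entry function A(i, j, k); a sketch with hypothetical helper names, indexing slices from 0:

```python
# Slices of an l x n x m 3-way array A[i][j][k], following the definitions
# above: A_i(j,k), lateral L_j(i,k), and frontal slices indexed by k.

def horizontal_slices(A):   # l matrices of size n x m
    return [[[A[i][j][k] for k in range(len(A[0][0]))]
             for j in range(len(A[0]))] for i in range(len(A))]

def lateral_slices(A):      # n matrices of size l x m
    return [[[A[i][j][k] for k in range(len(A[0][0]))]
             for i in range(len(A))] for j in range(len(A[0]))]

def frontal_slices(A):      # m matrices of size l x n
    return [[[A[i][j][k] for j in range(len(A[0]))]
             for i in range(len(A))] for k in range(len(A[0][0]))]

# A 2 x 3 x 2 example with the index (i, j, k) encoded in each entry.
A = [[[100 * i + 10 * j + k for k in range(2)] for j in range(3)]
     for i in range(2)]
assert len(horizontal_slices(A)) == 2   # l slices
assert len(lateral_slices(A)) == 3      # n slices
assert len(frontal_slices(A)) == 2      # m slices
# The (i, j, k) entry is the same no matter how we slice:
assert horizontal_slices(A)[1][2][0] == lateral_slices(A)[2][1][0] \
       == frontal_slices(A)[0][1][2] == A[1][2][0]
```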
Observation 6.2. Given 3-way arrays A and B, we can construct non-degenerate 3-way arrays A' and B' in polynomial time, such that A and B are isomorphic as 3-tensors if and only if A' and B' are isomorphic as 3-tensors.
Multi-way arrays. For d ≥ 4, we use similar notation to 3-way arrays, which we will not belabor. Here we merely observe:

Observation 6.3. For any d' ≥ d, d-TI reduces to d'-TI.

Proof. Given an n_1 × ··· × n_d d-way array A, we embed it as a d'-way array Ã of format n_1 × ··· × n_d × 1 × 1 × ··· × 1. If A ≅ B as d-tensors, say via (P_1, ..., P_d), then Ã ≅ B̃ as d'-tensors via (P_1, ..., P_d, 1, 1, ..., 1). Conversely, if Ã ≅ B̃ via (P_1, ..., P_d, α_{d+1}, ..., α_{d'}), then A ≅ B via (α_{d+1} α_{d+2} ··· α_{d'} P_1, P_2, ..., P_d). That is, all that can "go wrong" under this embedding is multiplication by scalars, but those scalars can be absorbed into any one of the P_i.
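The padding in the proof of Obs. 6.3 is straightforward to implement; a sketch using nested lists (the helper name is ours), embedding a 3-way array as a 5-way array:

```python
# Obs. 6.3's embedding: append trailing singleton dimensions to a d-way
# array, here shown for d = 3, d' = 5.

def pad(A, extra):
    """Wrap each scalar entry in `extra` nested singleton lists."""
    if not isinstance(A, list):
        out = A
        for _ in range(extra):
            out = [out]
        return out
    return [pad(x, extra) for x in A]

A = [[[1, 2], [3, 4]]]          # a 1 x 2 x 2 array
A5 = pad(A, 2)                  # now a 1 x 2 x 2 x 1 x 1 array
assert A5[0][1][0] == [[3]]     # entry (1,2,1) survives, wrapped twice
# An isomorphism (P1, P2, P3) of 3-tensors extends by the scalars (1, 1)
# on the two singleton sides; conversely, any scalars acting on the
# singleton sides can be absorbed into P1, as in the proof above.
```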
Algebras and their algorithmic representations. An algebra A consists of a vector space V and a bilinear map ◦ : V × V → V. This bilinear map defines the product ◦ of the algebra. Note that we do not assume A to be unital (having an identity), associative, alternating, nor satisfying the Jacobi identity. In the literature, an algebra without such properties is sometimes called a non-associative algebra (but also, as usual, associative algebras are a special case of non-associative algebras). As in Section 1, after fixing an ordered basis (b_1, ..., b_n), b_i ∈ F^n, of V ≅ F^n, this bilinear map ◦ can be represented by an n × n × n 3-way array A, such that b_i ◦ b_j = Σ_{k ∈ [n]} A(i, j, k) b_k. This is the structure-constant representation of A. Algorithms for associative algebras and Lie algebras have been studied intensively in this model, e.g., [IR99, dG00].

It is also natural to consider matrix spaces that are closed under multiplication or commutator. More specifically, let A ≤ M(n, F) be a matrix space. If A is closed under multiplication, that is, for any A, B ∈ A, AB ∈ A, then A is a matrix (associative) algebra with the product being matrix multiplication. If A is closed under commutator, that is, for any A, B ∈ A, [A, B] = AB − BA ∈ A, then A is a matrix Lie algebra with the product being the commutator. Algorithms for matrix algebras and matrix Lie algebras have also been studied, e.g., [EG00, Iva00, IR99].

The Lazard correspondence for p-groups. The Lazard correspondence is a correspondence between certain classes of groups and Lie algebras, which extends the usual correspondence between Lie groups and Lie algebras (say, over R) to some groups and Lie algebras in positive characteristic. Here we state just enough to give a sense of it; for further details we refer to Khukhro's book [Khu98] and Naik's thesis [Nai13].
While the thesis is quite long, it also includes a reader's guide, and collects many results scattered across the literature or well-known to the experts in one place, building the theory from the ground up and with many examples.

Recall that a Lie ring is an abelian group L equipped with a bilinear map [·, ·], called the Lie bracket, which is (1) alternating ([x, x] = 0 for all x ∈ L) and (2) satisfies the Jacobi identity [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0 for all x, y, z ∈ L. Let L_1 = L, and L_{i+1} = [L, L_i], which is the subgroup (of the underlying additive group) generated by all elements of the form [x, y] for x ∈ L, y ∈ L_i. Then L is nilpotent if L_{c+1} = 0 for some finite c; the smallest such c is the nilpotency class. (Lie algebras are just Lie rings over a field.)

The correspondence between Lie algebras and Lie groups over R uses the Baker–Campbell–Hausdorff (BCH) formula to convert between a Lie algebra and a Lie group, so we start there. The BCH formula is the solution to the problem that for non-commuting matrices X, Y, e^X e^Y ≠ e^{X+Y} in general (where the matrix exponential here is defined using the power series for e^x). Rather, using commutators [A, B] = AB − BA, we have

exp(X) exp(Y) = exp( X + Y + (1/2)[X, Y] + (1/12)([X, [X, Y]] − [Y, [X, Y]]) − (1/24)[Y, [X, [X, Y]]] + · · · ),

where the remaining terms are iterated commutators that all involve at least 5 Xs and Ys, and successive terms involve more and more. Applying the exponential function to a Lie algebra in characteristic zero yields a Lie group. The BCH formula can be inverted, giving the correspondence in the other direction.

In a nilpotent Lie algebra, the BCH formula has only finitely many nonzero terms, so issues of convergence disappear and we may consider applying the correspondence over finite fields or rings; the only remaining obstacle is that the denominators appearing in the formula must be units in the ring. It turns out that the correspondence continues to work in characteristic p so long as one does not need to use the p-th term of the BCH formula (which includes division by p), and the latter is avoided whenever a nilpotent group has class strictly less than p. While the correspondence does apply more generally, here we only state the version for finite groups. For any fixed nilpotency class c, computing the Lazard correspondence is efficient in theory; for how to compute it in practice when the groups are given by polycyclic presentations, see [CdGVL12].

Let Grp_{p,n,c} denote the set of finite groups of order p^n and class c, and let Lie_{p,n,c} denote the set of Lie rings of order p^n and class c. We note that for nilpotency class 2, the Baer correspondence is the same as the Lazard correspondence.

Theorem 6.4 (Lazard Correspondence for finite groups, see, e. g., [Khu98, Ch. 9 & 10] or [Nai13, Ch. 6]). For any prime p and any 1 ≤ c < p, there are functions log : Grp_{p,n,c} ↔ Lie_{p,n,c} : exp such that (1) log and exp are inverses of one another, (2) two groups G, H ∈ Grp_{p,n,c} are isomorphic if and only if log(G) and log(H) are isomorphic, and (3) if G has exponent p, then the underlying abelian group of log(G) has exponent p. More strongly, log is an isomorphism of categories Grp_{p,n,c} ≅ Lie_{p,n,c}.
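To make the BCH computation above concrete: for 3 × 3 strictly upper triangular matrices, all commutators of length ≥ 3 vanish, so the BCH series truncates after the (1/2)[X, Y] term and the identity can be checked exactly. The following small Python sketch does so in exact rational arithmetic (the helper names are ours, purely for illustration):

```python
from fractions import Fraction

def mat(rows):
    return [[Fraction(x) for x in row] for row in rows]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def add(*Ms):
    n = len(Ms[0])
    return [[sum(M[i][j] for M in Ms) for j in range(n)] for i in range(n)]

def scale(c, M):
    return [[Fraction(c) * x for x in row] for row in M]

def comm(A, B):
    # commutator [A, B] = AB - BA
    return add(mul(A, B), scale(-1, mul(B, A)))

I = mat([[1, 0, 0], [0, 1, 0], [0, 0, 1]])

def exp3(N):
    # exp(N) = I + N + N^2/2, exact for 3x3 strictly upper triangular N (N^3 = 0)
    return add(I, N, scale(Fraction(1, 2), mul(N, N)))

X = mat([[0, 1, 2], [0, 0, 3], [0, 0, 0]])
Y = mat([[0, 4, 5], [0, 0, 6], [0, 0, 0]])

lhs = mul(exp3(X), exp3(Y))
# truncated BCH: Z = X + Y + (1/2)[X, Y]; all longer commutators vanish here
Z = add(X, Y, scale(Fraction(1, 2), comm(X, Y)))
rhs = exp3(Z)
assert lhs == rhs
```

The only denominator needed is 2, so the same computation goes through verbatim over any field of characteristic p > 2; this is the class < p condition in miniature.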
Part (3) can be found as a special case of [Nai13, Lemma 6.1.2].For p -groups given by d × d matrices over the finite field F p e , we will need one additional factabout the correspondence, namely that it also results in a Lie algebra of d × d matrices. (Beingable to bound the dimension of the Lie algebra and work with it in a simple linear-algebraic wayseems crucial for our reduction to work efficiently.) In fact, the BCH correspondence is easier to seefor matrix groups using the matrix exponential and matrix logarithm; most of the work for BCHand Lazard is to get the correspondence to work even without the matrices. In some sense, this isthus the “original” setting of this correspondence. Though it is surely not new, we could not find aconvenient reference for this fact about matrix groups over finite fields, so we state it formally here. Proposition 6.5.
Let G ≤ GL(d, F_{p^e}) be a finite p-subgroup of d × d matrices over a finite field of characteristic p. Then log(G) (from the Lazard correspondence) can be realized as a finite Lie subalgebra of d × d matrices over F_{p^e}. Given a generating set for G of m matrices, a generating set for log(G) can be constructed in poly(d, n, log p) time.

Proof sketch. G is conjugate in GL(d, F_{p^e}) to a group of upper unitriangular matrices (upper triangular with all 1s on the diagonal); this is a standard fact that can be seen in several ways, for example, by noting that the group U of all upper unitriangular matrices in GL(d, F_{p^e}) is a Sylow p-subgroup, and applying Sylow's Theorem. (Note that we do not need to do this conjugation algorithmically, though it is possible to do so; this is only for the proof.) Thus we may write every g ∈ G as 1 + n, where the sum here is the ordinary sum of matrices, 1 denotes the identity matrix, and n is strictly upper triangular. In particular, n^d = 0 (ordinary exponentiation of matrices). Thus the Taylor series for the logarithm

log(1 + n) = n − n²/2 + n³/3 − · · ·

has only finitely many terms, so we may use it even over F_{p^e}.

In the Lie algebra we would like addition to be ordinary matrix addition; however, it turns out that we can write this addition in terms of a formula involving only commutators of group elements. Deriving this formula (the so-called first BCH inverse formula) for the matrices will be the same, step for step, as deriving the first inverse BCH formula in general. Since the formulae are identical, the additive structures on log(G) (using the matrix logarithm) and log(G) (from the Lazard correspondence) are identical. Similar considerations apply to the matrix commutator [log(g), log(h)] = log(g) log(h) − log(h) log(g), now using the second BCH inverse formula.
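To illustrate the truncated logarithm in the proof sketch, here is a minimal Python computation of log and exp for a 3 × 3 unipotent matrix over F_5: here n³ = 0, so the only denominator needed is 2, which is a unit mod 5. The helper functions are our own illustration, not code from the paper:

```python
p = 5                        # characteristic; need nilpotency degree < p
inv2 = pow(2, p - 2, p)      # 1/2 mod p, via Fermat's little theorem

def mmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)] for i in range(n)]

def mcomb(A, B, s):
    # entrywise A + s*B, reduced mod p
    return [[(A[i][j] + s * B[i][j]) % p for j in range(len(A))] for i in range(len(A))]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def log_unipotent(g):
    # g = 1 + n with n strictly upper triangular; n^3 = 0 for 3x3,
    # so log(1 + n) = n - n^2/2 exactly (the series truncates before the p-th term)
    n = mcomb(g, I, -1)
    return mcomb(n, mmul(n, n), -inv2)

def exp_nilpotent(x):
    # exp(x) = 1 + x + x^2/2 for strictly upper triangular 3x3 x
    return mcomb(mcomb(I, x, 1), mmul(x, x), inv2)

g = [[1, 1, 2], [0, 1, 3], [0, 0, 1]]   # a unipotent matrix over F_5
assert exp_nilpotent(log_unipotent(g)) == g
```

The round trip exp(log(g)) = g is exactly the inverse pair in Theorem 6.4, specialized to matrix groups as in Proposition 6.5.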
Overall,we conclude that log ( G ) (using Lazard) and log( G ) (using the matrix logarithm) are isomorphicLie algebras.Equivalently, we may note that the derivation of the inverse BCH formula in [Khu98, Nai13]uses a free nilpotent associative algebra as an ambient setting in which both the group and thecorresponding Lie algebra live; in our case, we may replace the ambient free nilpotent associativealgebra with the algebra of d × d strictly upper-triangular matrices over F p e , and all the derivationsremain the same, mutatis mutandis . See, for example, [Khu98, p. 105, “Another remark...”]. To see that those problems in Section 2 exhaust distinct isomorphism problems coming from change-of-basis on 3-way arrays (without introducing multiple arrays, or block structure, or going to sub-groups of
GL( n, F ) ), and to keep track of the relation between all the above problems, we usestandard mathematical notation for spaces of tensors (however, we won’t actually need the fullabstract definition here; for a formal introduction see, e. g., [Lan12]).Much as the three natural equivalence relations on matrices differ by how the groups act onthe rows and columns, the same is true for tensors, but on the rows, columns, and depths (the“row-like” sub-arrays which are “perpendicular to the page”). There are two aspects to the notation:first, we keep track of which group is acting where by introducing names U, V, W for the differentvector spaces involved (this is also the standard basis-free notation, e. g., [Lan12]) and the groupsacting on them, viz.
GL(U), GL(V), GL(W), etc. Thus, while it is possible that dim U = dim V and thus GL(U) ≅ GL(V), the notation helps make clear which group is acting where. Second, to take into account the contragredient (“inverse”) action, given a vector space V, V∗ denotes its dual space, consisting of the linear functions V → F. GL(V) acts on V∗ by sending a linear function ℓ ∈ V∗ to the function (g · ℓ)(v) = ℓ(g^{-1}(v)). In this notation, the three different actions on matrices correspond to the notations U ⊗ V (left-right action), V ⊗ V∗ (conjugacy), and V ⊗ V (isometry).

When we have a matrix space A ⊆ M(n × m, F) instead of a single matrix A, we introduce an additional vector space W, which is naturally isomorphic to A as a vector space. The action of GL(W) on W serves to change basis within the matrix space, while leaving the space itself unchanged. In this notation, the problems we mention above are listed in Table 2.

Notation       Name                              Group Action
U ⊗ V ⊗ W     Matrix Space Equivalence /        A ↦ g A h^{-1}
               3-Tensor Isomorphism
V ⊗ V ⊗ W     Matrix Space Isometry /           A ↦ g A g^t
               Bilinear Map Pseudo-Isometry
V ⊗ V∗ ⊗ W    Matrix Space Conjugacy            A ↦ g A g^{-1}
V ⊗ V ⊗ V     Trilinear Form Equivalence        f(x⃗) ↦ f(g^{-1} x⃗)
V ⊗ V ⊗ V∗    Algebra Isomorphism               µ(x⃗, y⃗) ↦ g µ(g^{-1} x⃗, g^{-1} y⃗)

Table 2: The cast of isomorphism problems on 3-way arrays. In Section 6.1 we show how this exhausts the possibilities.

To see that the family of problems in Table 2 exhausts the possible isomorphism problems on (undecorated) 3-way arrays, we note that in this notation there are some “different-looking” isomorphism problems that are trivially equivalent. The first is re-ordering the spaces: the isomorphism problem for V ⊗ V ⊗ W is trivially equivalent to that for V ⊗ W ⊗ V, simply by permuting indices, viz. A′(i, j, k) = A(i, k, j). The second is about dual vector spaces. Although a vector space V and its dual V∗ are technically different, and the group action differs by an inverse transpose, we can choose bases in V and V∗ so that there is a linear isomorphism V → V∗ which induces a bijection between orbits; for example, the orbits of the action g · A = g A g^t are the same as the orbits of the action g · A = g^{-t} A g^{-1}, even though technically the former corresponds to V ⊗ V and the latter to V∗ ⊗ V∗. This means that if we are considering the isomorphism problem in a tensor space such as V ⊗ V ⊗ W, we can dualize each of the vector spaces V, W separately, so long as when we do so, we dualize all instances of that vector space. For example, the isomorphism problem in V ⊗ V ⊗ W is trivially equivalent to that in V∗ ⊗ V∗ ⊗ W, but is not obviously equivalent to that in V ⊗ V∗ ⊗ W (though we will show such a reduction in this paper). As a consequence, when the action on all three directions comes from the same group, there are only two choices: V ⊗ V ⊗ V and V ⊗ V ⊗ V∗; the remaining choices are trivially equivalent to one of these two. Using these, we see that Table 2 in fact covers all possibilities up to these trivial equivalences.

Special cases of interest.
As in the case of isometry of matrices, wherein skew-symmetric and symmetric matrices play a special role, the same is true for isometry of matrix spaces. We say a matrix space A is symmetric if every matrix A ∈ A is symmetric, and similarly for skew-symmetric or alternating. Symmetric Matrix Space Isometry is equivalent to asking whether two polynomial maps from F^n to F^m specified by homogeneous quadratic forms are the same under the action of GL(n, F) × GL(m, F). This problem has been proposed by Patarin [Pat96] as the basis of security for certain identification and signature schemes. Alternating Matrix Space Isometry is a particular case of interest, being in many ways a linear-algebraic analogue of GI [LQ17] (in addition to its close relation with Group Isomorphism for p-groups of class 2 and exponent p).

Among trilinear forms, we can identify cubic forms as those for which the coefficient 3-way array A is symmetric under all 6 permutations of its 3 indices: A(i, j, k) = A(j, i, k) = · · · = A(k, i, j). Over rings in which 6 is a unit, cubic forms embed into trilinear forms via the standard map f ↦ T where T_{i_1, i_2, i_3} = Σ_{π ∈ S_3} [x_{i_{π(1)}} x_{i_{π(2)}} x_{i_{π(3)}}] f, where [x^e] f denotes the coefficient of x^e in f. Thus, over such rings Cubic Form Equivalence, as studied by Agrawal and Saxena [AS05, AS06], is a special case of
Trilinear Form Equivalence .Special cases of
Algebra Isomorphism that are of interest include those of unital, associativealgebras (commutative, e. g., as studied in [AS05,AS06,KS06], and non-commutative, such as groupalgebras) and Lie algebras.Interesting cases of
Matrix Space Conjugacy arise naturally whenever we have an algebra A (say, associative or Lie) that is given to us as a subalgebra of the algebra M(n, F) of n × n matrices. Two such matrix algebras can be isomorphic as abstract algebras, but the more natural notion of “isomorphism of matrix algebras” is that of conjugacy, which respects both the algebra structure and the presentation in terms of matrices. This is the linear-algebraic analogue of permutational isomorphism (= conjugacy) of permutation groups, and has been studied for matrix Lie algebras [Gro12a] and associative matrix algebras [BW15]. (For those who know what a representation is: it also turns out to be equivalent to asking whether two representations of an algebra A are equivalent up to automorphisms of A, a problem which naturally arises as a subroutine in, e. g., Group Isomorphism, where it is often known as
Action Compatibility , e. g., [GQ17].)
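Returning to the embedding of cubic forms into trilinear forms above, the following Python sketch makes it concrete: it builds the symmetric 3-way array T from a cubic form f (stored here as a dictionary of coefficients indexed by sorted index triples; this encoding is our own, not the paper's) and checks that f is recovered as (1/6) Σ_{i,j,k} T_{ijk} x_i x_j x_k:

```python
from fractions import Fraction
from itertools import permutations, product

n = 3
# f = x0^3 + 2*x0*x1*x2 + 5*x1^2*x2, keyed by sorted index triples
f = {(0, 0, 0): Fraction(1), (0, 1, 2): Fraction(2), (1, 1, 2): Fraction(5)}

def coeff_ordered(triple):
    # coefficient of the *ordered* monomial x_i x_j x_k: nonzero only when sorted
    return f.get(triple, Fraction(0)) if tuple(sorted(triple)) == triple else Fraction(0)

# T_{i1,i2,i3} = sum over pi in S_3 of [x_{i_pi(1)} x_{i_pi(2)} x_{i_pi(3)}] f
T = {}
for idx in product(range(n), repeat=3):
    T[idx] = sum(coeff_ordered(tuple(idx[p] for p in pi)) for pi in permutations(range(3)))

# T is symmetric under all 6 permutations of its indices
assert all(T[idx] == T[tuple(sorted(idx))] for idx in T)

def eval_f(x):
    return sum(c * x[i] * x[j] * x[k] for (i, j, k), c in f.items())

def eval_T(x):
    # recover f from T: f(x) = (1/6) * sum_{i,j,k} T_{ijk} x_i x_j x_k
    return Fraction(1, 6) * sum(T[(i, j, k)] * x[i] * x[j] * x[k] for (i, j, k) in T)

x = [Fraction(2), Fraction(-1), Fraction(3)]
assert eval_f(x) == eval_T(x)
```

The division by 6 in the recovery step is exactly why the embedding is stated over rings in which 6 is a unit.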
As these problems arise from several different fields, there are various properties one might hope for in the notion of reduction. Most of our reductions satisfy all of the following properties; see Remark 6.6 below for details.

1. Kernel reductions: there is a function r from objects of one type to objects of the other such that A ∼ B if and only if r(A) ∼ r(B). See [FG11] for some discussion on the relation between kernel reductions and more general reductions.

2. Efficiently computable: r as above is computable in polynomial time. In fact, we believe, though have not checked fully, that all of our reductions are computable by uniform constant-depth (algebraic) circuits; over finite fields and algebraic number fields, we believe they are in uniform TC⁰ (the threshold gates are needed to do some simple arithmetic on the indices). That is, there is a small circuit which, given A, i, j, k as input will output the (i, j, k) entry of the output.

3. Polynomial-size projections (“p-projections”) [Val84]: each coordinate of the output is either one of the input variables or a constant, and the dimension of the output is polynomially bounded by the dimension of the input. (In fact, in many cases, the dimension of the output is only linearly larger than that of the input.)

4. Functorial. For each type of tensor there is naturally a category of such tensors (see [Mac71] for generalities on categories). For example, for U ⊗ V ⊗ W, the objects of the category are 3-tensors, and a morphism between A ∈ U ⊗ V ⊗ W and B ∈ U′ ⊗ V′ ⊗ W′ is given by linear maps P : U → U′, Q : V → V′, and R : W → W′ such that (P, Q, R) · A = B. Isomorphism of 3-tensors is the special case when all three of P, Q, R are invertible. Analogous categories can be defined for the other problems we consider, such as V ⊗ V∗ ⊗ W.
A functor between two categories C, D is a pair of maps (both denoted r) such that (1) r maps objects of C to objects of D, (2) if f : A → B is a morphism in C, then r(f) : r(A) → r(B) is a morphism in D, (3) for any A ∈ C, r(id_A) = id_{r(A)}, and (4) if f : A → B and g : B → C are morphisms in C, then r(g ◦ f) = r(g) ◦ r(f).

All our reductions are functorial on the categories in which we only consider isomorphisms; we suspect that they are also functorial on the entire categories (that is, including non-invertible homomorphisms). Furthermore, all our reductions yield another map s such that for any isomorphism f′ : r(A) → r(B), s(f′) is an isomorphism A → B, and s(r(f)) = f for any isomorphism f : A → B. If we only consider isomorphisms (and not other homomorphisms), we believe all known reductions between isomorphism problems have this form, cf. [Bab14].

5. Containment in the sense used in the literature on wildness. There are several definitions in the literature which typically are equivalent when restricted to so-called matrix problems. For a few such definitions, see, e. g., [FGS19, Def. 1.2], [Ser00], or [SS07, Def. XIX.1.3]. For those problems in this paper to which the preceding definitions could apply, our reductions have the defined property. However, since we are working in a slightly more general setting, we would like to suggest the following natural generalization of these notions. Given two pairs (G, V) and (H, W) of algebraic groups G, H acting on algebraic varieties
V, W, an algebraic containment is an algebraic map r : V → W (each coordinate of the output is given by a polynomial in the coordinates of the input) that is also a kernel reduction. In our case, all our spaces V, W are affine spaces F^n for some n, and our maps r are in fact of degree 1. (It might be interesting to consider whether using higher degree allows for more efficient reductions.) We may also require it to be “functorial,” in the sense that there is a homomorphism of algebraic groups r : G → H (simultaneously an algebraic map and a group homomorphism) such that r(g) · r(v) = r(g · v), and a section s : H ⇢ G, such that s ◦ r = id_G and h · r(v) = r(v′) =⇒ s(h) · v = v′, where the dashed arrow indicates that s need only be defined on a subset of H, namely, those h ∈ H such that there exist v, v′ ∈ V with h · r(v) = r(v′) (but on this subset it should still act like a homomorphism, in the sense that s(hh′) = s(h)s(h′)).

Remark 6.6.
We believe all of our reductions satisfy all of the above properties, with the possible exceptions that Prop. 7.3 and Prop. 8.1 are only projections (3) and algebraic containments (5) on the set of non-degenerate tensors.
In this section, we present the remaining reductions that use the linear algebraic coloring idea. Wefirst reduce
Graph Isomorphism to Alternating Matrix Space Isometry, using a gadget to restrict the full general linear group to the monomial matrix group, similar to that in Section 5. However, unlike there, the use here requires slightly more care because of the alternating condition. We then reduce 3-Tensor Isomorphism to Alternating Matrix Space Isometry. The gadget there restricts the full general linear group to a parabolic subgroup. We note that such a gadget has appeared in [FGS19], while ours is a slight modification of that to be compatible with the alternating structure. Finally, we combine the two gadgets to give a search-to-decision reduction for
Alternating Matrix Space Isometry over finite fields.
Graph Isomorphism to Alternating Matrix Space Isometry
Proposition 7.1.
Graph Isomorphism reduces to
Alternating Matrix Space Isometry.

For this proof we will need the concept of monomial isometry; see “Some Groups” above. Recall that a matrix is monomial if, equivalently, it can be written as DP where D is a nonsingular diagonal matrix and P is a permutation matrix. We say two matrix spaces A, B are monomially isometric if there is some M ∈ Mon(n, F) such that M^t A M = B.

Proof. For a graph G = ([n], E), let A_G be the alternating matrix tuple A_G = (A_1, . . . , A_{|E|}) with A_e = E_{i,j} − E_{j,i} where e = {i, j} ∈ E, and let A_G = ⟨A_G⟩ be the alternating matrix space spanned by that tuple. If P is a permutation matrix giving an isomorphism between two graphs G and H, then it is easy to see that P^t A_G P = A_H, and thus the corresponding matrix spaces are isometric. The converse direction is not clear (and may even be false). Instead, we will first extend the spaces A_G and A_H by gadgets which enforce that A_G and A_H are isometric iff they are monomially isometric (Lemma 7.2). Given Lemma 7.2, it thus suffices to reduce GI to Alternating Matrix Space Monomial Isometry.

Let us establish the latter reduction. We will show that G ≅ H if and only if A_G and A_H are monomially isometric. The forward direction was handled above. For the converse, suppose P^t D^t A_G D P = A_H where D is diagonal and P is a permutation matrix. We claim that in this case, P in fact gives an isomorphism from G to H. First let us establish that P alone gives an isometry between A_G and A_H. Note that for any diagonal matrix D = diag(α_1, . . . , α_n) and any elementary alternating matrix E_{i,j} − E_{j,i}, we have D^t (E_{i,j} − E_{j,i}) D = α_i α_j (E_{i,j} − E_{j,i}). Since A_G has a basis of elementary alternating matrices, the action of D on this basis is just to re-scale each basis element, and thus D^t A_G D = A_G.
Thus, we have P^t A_G P = A_H. Finally, note that P^t (E_{i,j} − E_{j,i}) P = E_{π(i),π(j)} − E_{π(j),π(i)} = A_{π(e)}, where π ∈ S_n is the permutation corresponding to P, and by abuse of notation we write π(e) = π({i, j}) = {π(i), π(j)} as well. Since the elementary alternating matrices are linearly independent, and A_H has a basis of elementary alternating matrices, the only way for A_{π(e)} to be in A_H is for it to be equal to one of the basis elements (one of the matrices in A_H). In other words, π(e) must be an edge of H. As P is invertible, we thus have that P gives an isomorphism G ≅ H.

Lemma 7.2.
Alternating Matrix Space Monomial Isometry reduces to
Alternating Matrix Space Isometry.

More specifically, there is a poly(n, m)-time algorithm r taking alternating matrix tuples to alternating matrix tuples, such that for A, B ∈ Λ(n, F)^m, the matrix spaces A = ⟨A⟩ and B = ⟨B⟩ are monomially isometric if and only if the matrix spaces ⟨r(A)⟩ and ⟨r(B)⟩ are isometric.

Proof. For A = (A_1, . . . , A_m) ∈ Λ(n, F)^m, define r(A) to be the alternating matrix tuple Ã = (Ã_1, . . . , Ã_{m+n²}) ∈ Λ(n + n², F)^{m+n²}, where

1. For k = 1, . . . , m, Ã_k = [ A_k 0 ; 0 0 ].
2. For k = m + (i − 1)n + j, i ∈ [n], j ∈ [n], Ã_k is the elementary alternating matrix E_{i, in+j} − E_{in+j, i}.

At this point, some readers may wish to look at the block form in Equation (1) and/or at Figure 3. It is clear that r can be computed in time Õ((m + n²)(n + n²)²) = poly(n, m). Given alternating matrix tuples A, B, let A, B be the corresponding matrix spaces they span, and let Ã = ⟨r(A)⟩ and B̃ = ⟨r(B)⟩. We claim that A and B are monomially isometric if and only if Ã and B̃ are isometric.

To prove this, it will help to think of our matrix tuples A, Ã, etc. as (corresponding to) 3-way arrays, and to view these 3-way arrays from two different directions. Towards this end, write the 3-way array corresponding to A as

A = [ 0 a_{1,2} · · · a_{1,n} ; −a_{1,2} 0 · · · a_{2,n} ; · · · ; −a_{1,n} −a_{2,n} · · · 0 ],

where the a_{i,j} are vectors in F^m (“coming out of the page”), namely a_{i,j}(k) = A_k(i, j). The frontal slices of this array are precisely the matrices A_1, . . . , A_m.

The 3-way array corresponding to Ã = r(A) is then the (n+1)n × (n+1)n × (m+n²) array with block form

Ã = [ A′ E ; −E^t 0 ],  (1)

where A′ is the n × n block with entries ã_{i,j} = [ a_{i,j} ; 0 ] ∈ F^{m+n²} (here think of the vector a_{i,j} as a column vector, not coming out of the page; in the above array we then lay the column vector ã_{i,j} “on its side” so that it is coming out of the page), and E is the n × n² block whose only nonzero entries are the e_{i,j} := e⃗_{m+(i−1)n+j} ∈ F^{m+n²} in positions (i, (i−1)n + j); we can equivalently write e_{i,j} as [ 0_m ; e_i ⊗ e_j ], where we think of e_i ⊗ e_j as a vector of length n². Note that all the nonzero blocks besides the upper-left “A” block only have nonzero entries that are strictly behind the nonzero entries in the upper-left block.

Figure 3: Pictorial representation of the reduction for Lemma 7.2.

The second viewpoint, which we will also use below, is to consider the lateral slices of Ã, or equivalently, to view Ã from the side. When viewing Ã from the side, we see the (n+1)n × (m+n²) × (n+1)n array Ã^lat, laid out as follows: (2) for i ∈ [n], row i contains the vectors ℓ_{i,1}, . . . , ℓ_{i,m} in its first m positions, followed by the gadget entries e⃗_{in+1}, . . . , e⃗_{in+n} in positions m + (i−1)n + 1, . . . , m + (i−1)n + n; each of the remaining n² rows, say row in + j, contains the single entry −e⃗_i, in position m + (i−1)n + j. Here every ℓ_{i,k} ∈ F^{n+n²} has only the first n components being possibly non-zero, namely, ℓ_{i,k}(j) = A_k(i, j) for i ∈ [n], j ∈ [n], k ∈ [m], and ℓ_{i,k}(j) = 0 for any j > n.

For the only if direction, suppose there exist P ∈ Mon(n, F) and Q ∈ GL(m, F), such that P^t A P = B^Q. We can construct P̃ ∈ Mon(n + n², F) and Q̃ ∈ GL(m + n², F) such that P̃^t Ã P̃ = B̃^Q̃. In fact, we will show that we can take P̃ = [ P 0 ; 0 P′ ] where P′ ∈ Mon(n², F), and Q̃ = [ Q 0 ; 0 Q′ ] where Q′ ∈ Mon(n², F).
It is not hard to see that this form already ensures that the first m matrices in the tuple P̃^t Ã P̃ and those of B̃^Q̃ are the same, since when P̃, Q̃ are of this form, those first m matrices are controlled entirely by the P (resp., Q) in the upper-left block of P̃ (resp., Q̃).

The remaining question is then how to design appropriate P′ and Q′ to take care of the last n² matrices in these tuples. This actually boils down to applying the following simple identity, but “in 3 dimensions:” Let P be the permutation matrix corresponding to σ ∈ S_n, so that P e_i = e_{σ(i)}, and e_i^t P = e_{σ^{-1}(i)}^t. Let D = diag(α_1, . . . , α_n) be a diagonal matrix. Then

P^t D P = diag(α_{σ^{-1}(1)}, . . . , α_{σ^{-1}(n)}).  (3)

To see how Equation 3 helps in our setting, it is easier to focus attention on the lower right n² × n² sub-array of Ã^lat, which can be represented as a symbolic matrix

M = diag(x_1 I_n, x_2 I_n, . . . , x_n I_n).

Here we think of the x_i's as independent variables, whose indices correspond to “how far into the page” they are. That is, x_i corresponds to the vector e⃗_i in Ã^lat, which is coming out of the page and has its only nonzero entry i slices back from the page.

Then the action of P permutes the x_i's and multiplies them by some scalars, the action of P′ is on the left-hand side, and the action of Q′ is on the right-hand side. Let σ be the permutation supporting P. Then P sends M to

M^P = diag(α_{σ(1)} x_{σ(1)} I_n, α_{σ(2)} x_{σ(2)} I_n, . . . , α_{σ(n)} x_{σ(n)} I_n).

So setting P′ = P_σ ⊗ I_n (with P_σ the permutation matrix of σ), and Q′ the monomial matrix supported by P_σ ⊗ I_n with scalars being the 1/α_i's, we have P′^t M^P Q′ = M by Equation 3.

For the if direction, suppose there exist P̃ ∈ GL(n + n², F) and Q̃ ∈ GL(m + n², F), such that P̃^t Ã P̃ = B̃^Q̃.
The key feature of these gadgets now comes into play: consider the lateral slices of Ã, which are the frontal slices of Ã^lat (which may be easier to visualize by looking at Equation 2 and Figure 3). The first n lateral slices of Ã and B̃ are of rank ≥ n and < 2n, while the other lateral slices are of rank < n (in fact, they are of rank 1; note that without loss of generality we may assume n > 1, for the only 1 × 1 alternating matrix space is the zero space). Furthermore, left multiplying a lateral slice by P̃^t and right multiplying it by Q̃ does not change its rank. However, the action of P̃ here is by P̃^t Ã P̃, and while the P̃^t here corresponds to left multiplication on the lateral slices (= frontal slices of Ã^lat), the P̃ on the right here corresponds to taking linear combinations of the lateral slices. In other words, just as Ã^lat is the “side view” of Ã, (P̃^t Ã^lat Q̃)^P̃ is the side view of (P̃^t Ã P̃)^Q̃. Taking linear combinations of the lateral slices could, in principle, alter their rank; we will use the latter possibility to show that P̃ must be of a constrained form.

Write P̃ = [ P_{1,1} P_{1,2} ; P_{2,1} P_{2,2} ] where P_{1,1} is of size n × n. We first claim that P_{1,2} = 0. For if not, then in (Ã^lat)^P̃ (the side view), one of the last n² frontal slices receives a nonzero contribution from one of the first n frontal slices of Ã^lat. Looking at the form of these slices from Equation 2, we see that any such nonzero combination will have rank ≥ n, but this is a contradiction since the corresponding slice in B̃^lat has rank 1. Thus P_{1,2} = 0, and therefore P_{1,1} must be invertible, since P̃ is.

Finally, we claim that P_{1,1} has to be a monomial matrix. If not, then some frontal slice of (Ã^lat)^P̃ among the first n would have a contribution from more than one of these n slices. Considering the lower-right n² × n² sub-matrix of such a slice, we see that it would have rank exactly kn for some k ≥ 2, which is again a contradiction since the first n slices of B̃^lat all have rank < 2n. It follows that P_{1,1}^t A_i P_{1,1}, i ∈ [m], are in B, and thus A and B are monomially isometric via P_{1,1}.
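The two constructions above (the graph gadget of Proposition 7.1 and the monomial-isometry gadget of Lemma 7.2) can be sketched in a few lines of Python. The code below is our own illustration, not from the paper: it checks that a graph isomorphism induces an isometry of the spanned matrix spaces, and that the lateral slices of r(A) exhibit the rank separation used in the proof (ranks are computed over the reals here, only to illustrate the claim):

```python
import numpy as np

def elem_alt(N, i, j):
    # elementary alternating matrix E_{ij} - E_{ji} (0-indexed)
    M = np.zeros((N, N))
    M[i, j], M[j, i] = 1.0, -1.0
    return M

def graph_tuple(n, edges):
    # A_G = (A_e)_{e in E} with A_e = E_{ij} - E_{ji}
    return [elem_alt(n, i, j) for (i, j) in edges]

def monomial_gadget(A, n):
    # r(A) from Lemma 7.2: pad each A_k into the top-left n x n corner of an
    # (n + n^2) x (n + n^2) matrix, then append the n^2 elementary alternating
    # matrices E_{i, n + i*n + j} - E_{n + i*n + j, i} (0-indexed analogue)
    N = n + n * n
    out = []
    for Ak in A:
        M = np.zeros((N, N))
        M[:n, :n] = Ak
        out.append(M)
    for i in range(n):
        for j in range(n):
            out.append(elem_alt(N, i, n + i * n + j))
    return out

n = 4
edges = [(0, 1), (1, 2), (2, 3)]           # the path graph P_4
A = graph_tuple(n, edges)

# a graph isomorphism gives a (permutation) isometry of the spanned spaces
perm = [2, 0, 3, 1]                         # a relabelling of the vertices
P = np.zeros((n, n))
for a in range(n):
    P[a, perm[a]] = 1.0
H_edges = [tuple(sorted((perm[i], perm[j]))) for (i, j) in edges]
B = graph_tuple(n, H_edges)
flat = lambda Ms: np.array([M.flatten() for M in Ms])
PB = [P.T @ Ae @ P for Ae in A]
assert np.linalg.matrix_rank(np.vstack([flat(B), flat(PB)])) == np.linalg.matrix_rank(flat(B))

# rank separation of the lateral slices of r(A)
R = monomial_gadget(A, n)
N = n + n * n
L = [np.array([Rk[i, :] for Rk in R]).T for i in range(N)]   # L_i[c, k] = R_k[i, c]
ranks = [np.linalg.matrix_rank(Li) for Li in L]
assert all(r >= n for r in ranks[:n])       # first n slices: rank >= n
assert all(r == 1 for r in ranks[n:])       # gadget slices: rank exactly 1
```

Over a finite field the ranks would have to be computed by Gaussian elimination in F rather than numerically, but the 0/±1 pattern of the gadget slices makes the separation field-independent.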
Matrix GroupIsomorphism
Proposition 7.3. 3-Tensor Isomorphism reduces to
Alternating Matrix Space Isometry. Symbolically, isomorphism in U ⊗ V ⊗ W reduces to isomorphism in V′ ⊗ V′ ⊗ W′ (or even to V V′ ⊗ W), where ℓ = dim U ≤ n = dim V and m = dim W, dim V′ = ℓ + 7n + 3 and dim W′ = m + ℓ(2n + 1) + n(4n + 2).

Proof. We will exhibit a function r from 3-way arrays to matrix tuples such that two 3-way arrays A, B ∈ T(ℓ × n × m, F) which are non-degenerate as 3-tensors, are isomorphic as 3-tensors if and only if the matrix spaces ⟨r(A)⟩, ⟨r(B)⟩ are isometric. Note that we can assume our input tensors are non-degenerate by Observation 6.2. The construction is a bit involved, so we will first describe the construction in detail, and then prove the desired statement.

The gadget construction.
Given a 3-way array A ∈ T(ℓ × n × m, F), let A denote the corresponding m-tuple of matrices, A ∈ M(ℓ × n, F)^m. The first step is to construct s(A) ∈ Λ(ℓ + n, F)^m, defined by s(A) = (A_1^Λ, …, A_m^Λ), where

$$A_i^{\Lambda} = \begin{pmatrix} 0 & A_i \\ -A_i^t & 0 \end{pmatrix}.$$

Already, note that if A ≅ B, then s(A) and s(B) are pseudo-isometric matrix tuples (equivalently, ⟨s(A)⟩ and ⟨s(B)⟩ are isometric matrix spaces). However, it is not clear whether the converse should hold. Indeed, suppose P s(A) P^t = s(B)^Q for some P ∈ GL(ℓ + n, F), Q ∈ GL(m, F). If we write P as a block matrix $\begin{pmatrix} P_{1,1} & P_{1,2} \\ P_{2,1} & P_{2,2} \end{pmatrix}$, where P_{1,1} ∈ M(ℓ, F) and P_{2,2} ∈ M(n, F), then by considering the (1,2) block we get that

$$P_{1,1} A_i P_{2,2}^t - P_{1,2} A_i^t P_{2,1}^t = \sum_{j=1}^m q_{ij} B_j \quad \text{for all } i = 1, \dots, m,$$

whereas what we would want is the same equation but without the P_{1,2} A_i^t P_{2,1}^t term. To remedy this, it would suffice if we could extend the tuple s(A) to r(A) so that any pseudo-isometry (P, Q) between r(A) and r(B) will have P_{1,2} = 0 and P_{2,1} = 0.

To achieve this, we start from s(A) = A^Λ ∈ Λ(ℓ + n, F)^m, and construct r(A) ∈ Λ(ℓ + 7n + 3, F)^{m + ℓ(2n+1) + n(4n+2)} as follows. Here we write it out symbolically, below is the same thing in matrix format, and Figure 4 is a picture of the construction. Let s = m + ℓ(2n+1) + n(4n+2). Write r(A) = (Ã_1, …, Ã_s), where the Ã_i ∈ Λ(ℓ + 7n + 3, F) are defined as follows:

• For 1 ≤ i ≤ m, $\tilde A_i = \begin{pmatrix} A_i^{\Lambda} & 0 \\ 0 & 0 \end{pmatrix}$. Recall that A_i^Λ ∈ Λ(ℓ + n, F).

• For the next ℓ(2n+1) slices, that is, for m + 1 ≤ i ≤ m + ℓ(2n+1), we can naturally represent i − m by (p, q), where p ∈ [ℓ] and q ∈ [2n+1]. We then let Ã_i be the elementary alternating matrix E_{p, ℓ+n+q} − E_{ℓ+n+q, p}.

• For the next n(4n+2) slices, that is, for m + ℓ(2n+1) + 1 ≤ i ≤ m + ℓ(2n+1) + n(4n+2), we can naturally represent i − m − ℓ(2n+1) by (p, q), where p ∈ [n] and q ∈ [4n+2]. We then let Ã_i be the elementary alternating matrix E_{ℓ+p, ℓ+3n+1+q} − E_{ℓ+3n+1+q, ℓ+p}.

We may view the above construction as follows. Write the frontal view of A as

$$\mathsf{A} = \begin{pmatrix} a'_{1,1} & \dots & a'_{1,n} \\ \vdots & \ddots & \vdots \\ a'_{\ell,1} & \dots & a'_{\ell,n} \end{pmatrix},$$

where a′_{i,j} ∈ F^m, which we think of as a column vector, but when placed in the above array, we think of it as coming out of the page.

Let Ã be the 3-way array whose frontal slices are the Ã_i, so Ã ∈ T((ℓ+7n+3) × (ℓ+7n+3) × (m + ℓ(2n+1) + n(4n+2)), F). Then the frontal view of Ã is, in block form with row and column blocks of sizes ℓ, n, 2n+1, and 4n+2,

$$\tilde{\mathsf{A}} = \begin{pmatrix} 0 & (a_{j,k}) & (e_{i,j}) & 0 \\ -(a_{j,k})^t & 0 & 0 & (f_{i,k}) \\ -(e_{i,j})^t & 0 & 0 & 0 \\ 0 & -(f_{i,k})^t & 0 & 0 \end{pmatrix},$$

where (a_{j,k}) is the ℓ × n array with entries a_{j,k}, (e_{i,j}) is the ℓ × (2n+1) array whose (j, i) entry is e_{i,j}, and (f_{i,k}) is the n × (4n+2) array whose (k, i) entry is f_{i,k}, with

$$a_{j,k} = \begin{pmatrix} a'_{j,k} \\ 0 \end{pmatrix} \in F^{m + \ell(2n+1) + n(4n+2)}, \qquad e_{i,j} = \vec e_{m + (j-1)(2n+1) + i}, \qquad f_{i,k} = \vec e_{m + \ell(2n+1) + (k-1)(4n+2) + i}.$$
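To make the construction concrete, here is a minimal Python sketch of the gadget r(A) (our own illustration, not from the paper; the helper names `embed`, `elem_alt`, and `r_gadget` are ours). It builds the tuple r(A) from the frontal slices A_1, …, A_m, using 0-indexed coordinates.

```python
# Sketch of the gadget r(A) from Proposition 7.3 (our own illustration).
# Matrices are lists of lists of integers; indices are 0-based.

def zeros(r, c):
    return [[0] * c for _ in range(r)]

def embed(M, N):
    """Place M as the alternating block [[0, M], [-M^t, 0]] in the
    top-left corner of an N x N zero matrix."""
    l, n = len(M), len(M[0])
    out = zeros(N, N)
    for i in range(l):
        for j in range(n):
            out[i][l + j] = M[i][j]
            out[l + j][i] = -M[i][j]
    return out

def elem_alt(p, q, N):
    """Elementary alternating matrix E_{p,q} - E_{q,p} (0-indexed)."""
    out = zeros(N, N)
    out[p][q], out[q][p] = 1, -1
    return out

def r_gadget(A):
    """A = list of m matrices of size l x n (with l <= n); returns the
    tuple r(A) of m + l(2n+1) + n(4n+2) alternating (l+7n+3)-dim matrices."""
    m, l, n = len(A), len(A[0]), len(A[0][0])
    N = l + 7 * n + 3
    slices = [embed(Ai, N) for Ai in A]
    # l(2n+1) slices: identity gadgets attached to the first l rows
    for p in range(l):
        for q in range(2 * n + 1):
            slices.append(elem_alt(p, l + n + q, N))
    # n(4n+2) slices: identity gadgets attached to the next n rows
    for p in range(n):
        for q in range(4 * n + 2):
            slices.append(elem_alt(l + p, l + 3 * n + 1 + q, N))
    return slices

# tiny example: one 2x3 frontal slice, i.e. l = 2, n = 3, m = 1
example = r_gadget([[[1, 2, 0], [0, 1, 3]]])
```

The checks confirm the claimed parameters: 57 = 1 + 2·7 + 3·14 slices, each a 26 × 26 alternating matrix.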
Figure 4: Pictorial representation of the reduction for Proposition 7.3.

We now examine the ranks of the lateral slices L_i of Ã. We claim:

• for 1 ≤ i ≤ ℓ: 2n+1 ≤ rk(L_i) ≤ 3n+1;
• for ℓ+1 ≤ i ≤ ℓ+n: 4n+2 ≤ rk(L_i) ≤ 5n+2;
• for ℓ+n+1 ≤ i ≤ ℓ+7n+3: rk(L_i) ≤ n.

To see why these hold:

• For 1 ≤ i ≤ ℓ, the i-th lateral slice L_i is block-diagonal with two non-zero blocks. One block, of size n × m, comes from the entries of −A^t, and the other is −I_{2n+1}. Therefore 2n+1 ≤ rk(L_i) ≤ 3n+1.

• For ℓ+1 ≤ i ≤ ℓ+n, the i-th lateral slice L_i is also block-diagonal with two non-zero blocks. One block, of size ℓ × m, comes from the entries of A, and the other is −I_{4n+2}. Therefore 4n+2 ≤ rk(L_i) ≤ 5n+2.

• For ℓ+n+1 ≤ i ≤ ℓ+7n+3, after rearranging the columns, the i-th lateral slice L_i has one non-zero block, which is I_ℓ for the first 2n+1 of these slices and I_n for the next 4n+2. Therefore rk(L_i) = ℓ or n, and since we have assumed ℓ ≤ n, in either case we have rk(L_i) ≤ n.

We then consider the ranks of linear combinations of the lateral slices.

• As long as the linear combination involves some L_i with ℓ+1 ≤ i ≤ ℓ+n, the resulting matrix has rank at least 4n+2, because of the matrix −I_{4n+2} in the last 4n+2 rows.

• If the linear combination does not involve any L_i with ℓ+1 ≤ i ≤ ℓ+n, then the resulting matrix has rank at most 4n+1, because in this case there are at most ℓ + n + 2n + 1 ≤ 4n+1 non-zero rows.

• If the linear combination involves some L_i with 1 ≤ i ≤ ℓ, then the resulting matrix has rank at least 2n+1, because of the matrix −I_{2n+1} in the (ℓ+n+1)-th to the (ℓ+3n+1)-th rows.

We then prove that A and B are isomorphic as 3-tensors if and only if ⟨r(A)⟩ and ⟨r(B)⟩ are isometric as matrix spaces.
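All of these rank bounds rest on the same elementary fact: a matrix that is block-diagonal, with one block of rank r and an identity block I_k, has rank r + k. Here is a quick Python check of this fact over F_p (the Gaussian-elimination helper is our own, not from the paper):

```python
def rank_mod_p(M, p):
    """Rank of an integer matrix M over the field F_p (p prime),
    via Gaussian elimination with entries reduced mod p."""
    M = [[x % p for x in row] for row in M]
    rows = len(M)
    cols = len(M[0]) if rows else 0
    rank = 0
    for col in range(cols):
        piv = next((r for r in range(rank, rows) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], p - 2, p)  # Fermat inverse, since p is prime
        M[rank] = [(x * inv) % p for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def block_diag(X, k):
    """[[X, 0], [0, I_k]] -- the shape of the lateral slices above."""
    rX, cX = len(X), len(X[0])
    out = [row + [0] * k for row in X]
    for i in range(k):
        out.append([0] * cX + [1 if j == i else 0 for j in range(k)])
    return out
```

For example, [[1, 2], [2, 4]] has rank 1 over F_5, and padding it with I_3 yields rank 1 + 3 = 4.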
At first glance, the only if direction seems the easy one, as one expects to extend a 3-tensor isomorphism between A and B to an isometry between ⟨r(A)⟩ and ⟨r(B)⟩ easily. However, it turns out that this direction becomes somewhat technical because of the gadget introduced. This is handled in the following.

For the if direction, suppose P^t Ã P = B̃^Q for some P ∈ GL(ℓ+7n+3, F) and Q ∈ GL(m + ℓ(2n+1) + n(4n+2), F). Write P as

$$P = \begin{pmatrix} P_{1,1} & P_{1,2} & P_{1,3} \\ P_{2,1} & P_{2,2} & P_{2,3} \\ P_{3,1} & P_{3,2} & P_{3,3} \end{pmatrix},$$

where P_{1,1} is of size ℓ × ℓ, P_{2,2} is of size n × n, and P_{3,3} is of size (6n+3) × (6n+3). By the discussion on the ranks of the linear combinations of the lateral slices, we have P_{2,1} = 0, P_{3,1} = 0, P_{1,2} = 0, and P_{3,2} = 0. So

$$P = \begin{pmatrix} P_{1,1} & 0 & P_{1,3} \\ 0 & P_{2,2} & P_{2,3} \\ 0 & 0 & P_{3,3} \end{pmatrix},$$

where P_{1,1}, P_{2,2}, and P_{3,3} are invertible. Then consider the action of such a P on the first m frontal slices of Ã. The first m frontal slices of Ã are of the form $\begin{pmatrix} 0 & A_i & 0 \\ -A_i^t & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$, where A_i is of size ℓ × n. Using the zero pattern of P, the (1,2) block of P^t Ã_i P is P_{1,1}^t A_i P_{2,2}. From the fact that Q is invertible and P^t Ã P = B̃^Q, by considering the (1,2) block, we find that every frontal slice of P_{1,1}^t A P_{2,2} lies in ⟨B⟩ (since the gadget does not affect the block-(1,2) position), which gives an isomorphism of tensors, as desired.
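The key block computation in the if direction is easy to verify mechanically. The following Python sketch (our own illustration, with illustrative block sizes) checks that for any P with the zero pattern forced by the rank argument, the (1,2) block of P^t Ã_i P equals P_{1,1}^t A_i P_{2,2}.

```python
# Check of the (1,2)-block computation in the "if" direction of
# Proposition 7.3 (our own sketch; block sizes are illustrative).
import random

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(M):
    return [list(r) for r in zip(*M)]

l, n, g = 2, 3, 4                      # g stands in for the gadget size 6n+3
N = l + n + g
A = [[1, 2, 0], [0, 1, 3]]             # A_i, of size l x n

# frontal slice [[0, A_i, 0], [-A_i^t, 0, 0], [0, 0, 0]]
M = [[0] * N for _ in range(N)]
for i in range(l):
    for j in range(n):
        M[i][l + j] = A[i][j]
        M[l + j][i] = -A[i][j]

# P with the forced zero pattern: [[P11, 0, P13], [0, P22, P23], [0, 0, P33]]
random.seed(1)
P = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(N):
        top_left = i < l and j < l
        middle = l <= i < l + n and l <= j < l + n
        bottom = i >= l + n and j >= l + n
        right = j >= l + n and i < l + n       # the P13 and P23 blocks
        if top_left or middle or bottom or right:
            P[i][j] = random.randint(-3, 3)

PtMP = mul(mul(transpose(P), M), P)
block_12 = [row[l:l + n] for row in PtMP[:l]]
P11 = [row[:l] for row in P[:l]]
P22 = [row[l:l + n] for row in P[l:l + n]]
expected = mul(mul(transpose(P11), A), P22)
```

The identity holds for any P of this shape; invertibility is only needed later, to conclude that the slices span ⟨B⟩.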
For the only if direction, suppose A and B are isomorphic as 3-tensors, that is, P^t A Q = B^R for some P ∈ GL(ℓ, F), Q ∈ GL(n, F), and R ∈ GL(m, F). We show that there exist U ∈ GL(6n+3, F) and V ∈ GL(ℓ(2n+1) + n(4n+2), F) such that, setting

Q̃ = diag(P, Q, U) ∈ GL(ℓ+7n+3, F) and R̃ = diag(R, V) ∈ GL(m + ℓ(2n+1) + n(4n+2), F),

we have Q̃^t r(A) Q̃ = r(B)^{R̃}, which will demonstrate that r(A) and r(B) are pseudo-isometric.

Since we are claiming that R̃ = diag(R, V) ∈ GL(m, F) × GL(ℓ(2n+1) + n(4n+2), F) works, and R̃ is block-diagonal, it suffices to consider the first m frontal slices separately from the remaining slices. For the first m frontal slices, we have

$$\tilde Q^t \tilde A_i \tilde Q = \begin{pmatrix} P^t & & \\ & Q^t & \\ & & U^t \end{pmatrix} \begin{pmatrix} 0 & A_i & 0 \\ -A_i^t & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} P & & \\ & Q & \\ & & U \end{pmatrix} = \begin{pmatrix} 0 & P^t A_i Q & 0 \\ -Q^t A_i^t P & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$

It follows from the fact that P^t A Q = B^R that the first m frontal slices of Q̃^t r(A) Q̃ and of r(B)^{R̃} are the same.

We now consider the remaining frontal slices separately. Towards that end, let Ã′ ∈ T((ℓ+7n+3) × (ℓ+7n+3) × (ℓ(2n+1) + n(4n+2)), F) be the 3-way array obtained by removing the first m frontal slices from Ã. That is, the i-th frontal slice of Ã′ is the (m+i)-th frontal slice of Ã. Similarly construct B̃′ from B̃. We are left to show that Ã′ and B̃′ are pseudo-isometric under some Q̃ = diag(P, Q, U) and V. Note that P and Q come from the isomorphism between A and B, while U and V are what we still need to design.

We first note that both Ã′ and B̃′ can be viewed as block 3-way arrays whose frontal slices have the two block forms

$$\begin{pmatrix} 0 & 0 & E & 0 \\ 0 & 0 & 0 & 0 \\ -E^t & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & F \\ 0 & 0 & 0 & 0 \\ 0 & -F^t & 0 & 0 \end{pmatrix},$$

where the block 3-way array E is of size ℓ × (2n+1) × ℓ(2n+1), and F is of size n × (4n+2) × n(4n+2). Although these are already identical in Ã′ and B̃′, the issue here is that P and Q may alter the slices of Ã′ when they act on Ã, so we need a way to "undo" this action to bring it back to the same slices in B̃′.

We now claim that we may further handle these two kinds of slices—the "E"-slices and the "F"-slices—separately; that is, we may take U = diag(U_1, U_2) and V = diag(V_1, V_2), where U_1 ∈ GL(2n+1, F), U_2 ∈ GL(4n+2, F), V_1 ∈ GL(ℓ(2n+1), F), and V_2 ∈ GL(n(4n+2), F).

To handle E, first note that we have

$$\begin{pmatrix} P^t & & & \\ & Q^t & & \\ & & U_1^t & \\ & & & U_2^t \end{pmatrix} \begin{pmatrix} 0 & 0 & E & 0 \\ 0 & 0 & 0 & 0 \\ -E^t & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} P & & & \\ & Q & & \\ & & U_1 & \\ & & & U_2 \end{pmatrix} = \begin{pmatrix} 0 & 0 & P^t E U_1 & 0 \\ 0 & 0 & 0 & 0 \\ -U_1^t E^t P & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$

where E ∈ M(ℓ × (2n+1), F).

Now we examine the lateral slices of E. The i-th lateral slice of E (up to a suitable permutation) is

L_i = ( 0 ⋯ 0 I_ℓ 0 ⋯ 0 ),

where each 0 is of size ℓ × ℓ, I_ℓ is the i-th block, and there are 2n+1 block matrices in total. The action of P on L_i is by left multiplication, so it sends L_i to P^t L_i = ( 0 ⋯ P^t ⋯ 0 ). If we set U_1 to be the identity and V_1 = diag(P^t, …, P^t), where there are 2n+1 copies of P^t on the diagonal, then we have L_i V_1 = P^t L_i, and thus P^t E U_1 = E^{V_1}.

It is easy to check that F can be handled in the same way, where now Q, U_2, and V_2 play the roles that P, U_1, and V_1 played before, respectively. This produces the desired U_1, U_2, V_1, and V_2, and concludes the proof.

Corollary 7.4. 3-Tensor Isomorphism reduces to
Symmetric Matrix Space Isometry.

Proof. In the proof of Proposition 7.3, we can easily replace A_i^Λ with $A_i^s = \begin{pmatrix} 0 & A_i \\ A_i^t & 0 \end{pmatrix}$, and the elementary alternating matrices with the elementary symmetric matrices, and the resulting proof goes through mutatis mutandis.

Finally, we show how to reduce to Group Isomorphism for matrix groups. We begin with a lemma that we also need for the search-to-decision reduction below. We believe this lemma to be classical, but have not found a reference stating it in quite the form we need.
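The symmetric substitution used in Corollary 7.4 is mechanical; here is a minimal Python sketch of it (ours, not from the paper):

```python
def sym_embed(M):
    """The symmetric analogue of the alternating embedding:
    A_i^s = [[0, A_i], [A_i^t, 0]], of size (l+n) x (l+n)."""
    l, n = len(M), len(M[0])
    N = l + n
    out = [[0] * N for _ in range(N)]
    for i in range(l):
        for j in range(n):
            out[i][l + j] = M[i][j]
            out[l + j][i] = M[i][j]
    return out

S = sym_embed([[1, 2, 3], [4, 5, 6]])   # a 5x5 symmetric matrix
```

The result is symmetric (S = S^t) rather than alternating, with the original slice sitting in the off-diagonal block.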
Lemma 7.5 (Constructive version of Baer's correspondence for matrix groups). Let p be an odd prime. Over the finite field F = F_{p^e}, Alternating Matrix Space Isometry is equivalent to
Group Isomorphism for matrix groups over F that are p-groups of class 2 and exponent p. More precisely, there are functions computable in time poly(n, m, log|F|):

• G : Λ(n, F)^m → M(n+m+1, F)^{n+m}, and
• Alt : M(n, F)^m → Λ(m, F)^{O(m)},

such that: (1) for an alternating bilinear map A, the group generated by G(A) is the Baer group corresponding to A; (2) G and Alt are mutually inverse, in the sense that the group generated by G(Alt(M_1, …, M_m)) is isomorphic to the group generated by M_1, …, M_m, and conversely Alt(G(A)) is pseudo-isometric to A.

Proof. First, let G be a p-group of class 2 and exponent p given by m generating matrices of size n × n over F. From the generating matrices of G, we first compute a generating set of [G, G], by just computing all the commutators of the given generators. We can then remove the redundant elements from this generating set in time poly(log|[G,G]|, log|F|), using Luks's result on computing with solvable matrix groups [Luk92]. We then compute a set of representatives of a non-redundant generating set of G/[G, G], again using Luks's aforementioned result. From these data we can compute an alternating bilinear map representing the commutator map of G in time poly(n, m, log|F|).

Conversely, let an alternating bilinear map be given by A = (A_1, …, A_m) ∈ Λ(n, F)^m. From A, for i ∈ [n], construct B_i = [A_1 ⃗e_i, …, A_m ⃗e_i] ∈ M(n × m, F). That is, the j-th column of B_i is the i-th column of A_j. Then for i ∈ [n] and j ∈ [m], construct

$$\tilde B_i = \begin{pmatrix} 1 & e_i^t & 0 \\ 0 & I_n & B_i \\ 0 & 0 & I_m \end{pmatrix} \in \mathrm{GL}(1+n+m, F), \qquad \tilde C_j = \begin{pmatrix} 1 & 0 & e_j^t \\ 0 & I_n & 0 \\ 0 & 0 & I_m \end{pmatrix} \in \mathrm{GL}(1+n+m, F).$$

Let G(A) be the matrix group generated by the B̃_i and C̃_j. Then it can be verified easily that G(A) is isomorphic to the Baer group corresponding to the alternating bilinear map defined by A. In particular, [G, G] ≅ F^m ≅ ℤ_p^{em} (an isomorphism of abelian groups), and G/[G, G] ≅ F^n ≅ ℤ_p^{en}. This construction can be done in time poly(n, m, log|F|).

Corollary 7.6.
Let p be an odd prime. 3-Tensor Isomorphism over F = F_{p^e} reduces to Group Isomorphism for p-groups of class 2 and exponent p given by matrices over F, in time poly(n, log|F|) (where n is the max of the dimensions of the 3-tensor).

Proof. Combine Proposition 7.3 with Lemma 7.5. Note that for this direction of the reduction, we only need the function G from Lemma 7.5, which can be computed in time poly(n, log p).

Search-to-decision reductions for p-Group Isomorphism and Alternating Matrix Space Isometry
Theorem C.
Given an oracle deciding
Alternating Matrix Space Isometry, there is a q^{O(n)} · n! = q^{Õ(n)}-time algorithm to find an isometry between two alternating matrix spaces A, B ∈ Λ(n, F_q), if one exists, using at most q^{O(n)} oracle queries, each of size at most O(n²). In particular, if Alternating Matrix Space Isometry can be decided in q^{Õ(√n)} time, then isometries between such spaces can be found in q^{Õ(n)} time. See Question 10.5.

Proof.
As before, we first present the gadget construction, which is a combination of the two gadgetsintroduced in Sections 7.1 and 7.2, respectively. Then based on this gadget, we present the search-to-decision reduction.
Gadget construction.
Let A = (A_1, …, A_m) be an ordered linear basis of A, and let A ∈ T(n × n × m, F_q) be the 3-way array constructed from A, so we can write

$$\mathsf A = \begin{pmatrix} 0 & a_{1,2} & a_{1,3} & \dots & a_{1,n} \\ -a_{1,2} & 0 & a_{2,3} & \dots & a_{2,n} \\ -a_{1,3} & -a_{2,3} & 0 & \dots & a_{3,n} \\ \vdots & & \ddots & \ddots & \vdots \\ -a_{1,n} & -a_{2,n} & -a_{3,n} & \dots & 0 \end{pmatrix},$$

where a_{i,j} ∈ F^m for 1 ≤ i < j ≤ n, each thought of as a vector coming out of the page.

For any 1 ≤ i ≤ n−1, we construct from A a 3-way array Ã_i, which attaches one rank gadget to each of the first i rows individually, and a common rank gadget to the remaining n−i rows. In block form, with row and column blocks of sizes n, 2ni, and n,

$$\tilde{\mathsf A}_i = \begin{pmatrix} \mathsf A & -E & -F \\ E^t & 0 & 0 \\ F^t & 0 & 0 \end{pmatrix},$$

where:

• E is the n × 2ni block supported on its first i rows: for j ∈ [i], row j of E contains the vectors e_{j,1}, …, e_{j,2n} in columns 2n(j−1)+1, …, 2nj;

• F is the n × n block supported on its last n−i rows: for j ∈ [n−i], row i+j of F contains the vectors f_{j,1}, …, f_{j,n};

• the e_{j,k} and f_{j,k} are distinct standard basis vectors in the depth (tube) direction, supported beyond the first m frontal slices.

Consider the lateral slices of Ã_i.

• The first i lateral slices have rank in [2n, 3n). Note that the rank is strictly less than 3n because some tube fibers (coming out of the page) are in the upper-left n × n sub-array.

• The next n−i lateral slices have rank in [n, 2n).

• The remaining 2ni+n lateral slices have rank in [1, n) (since i ≥ 1).

By combining the arguments for the two gadgets introduced in Sections 7.1 and 7.2, respectively, we have the following. From Sec. 7.2, for invertible matrices P and Q to satisfy P^t Ã_i P = B̃_i^Q, P has to be of the form

$$P = \begin{pmatrix} P_{1,1} & 0 & P_{1,3} \\ 0 & P_{2,2} & P_{2,3} \\ 0 & 0 & P_{3,3} \end{pmatrix},$$

where P_{1,1} is of size i × i, P_{2,2} is of size (n−i) × (n−i), and P_{3,3} is of size (2ni+n) × (2ni+n). Furthermore, from Sec. 7.1, P_{1,1} is a monomial matrix. In particular, if such P and Q exist, then A and B are isometric by a matrix of the form diag(P_{1,1}, P_{2,2}), where P_{1,1} is a monomial matrix of size i × i. Note that the presence of the blocks P_{1,3} and P_{2,3} does not interfere here, because of the argument in the if direction in the proof of Proposition 7.3. On the other hand, if A and B are isometric by a matrix of such a form, then Ã_i and B̃_i are also isometric.

The search-to-decision reduction.
Given these preparations, we now present the search-to-decision reduction for
Alternating Matrix Space Isometry. Recall that this requires us to use the decision oracle O to compute an explicit isometry P ∈ GL(n, q), if A and B are indeed isometric. Think of P as sending the standard basis (⃗e_1, …, ⃗e_n) to another basis (v_1, …, v_n), where the e_i and v_i are in F_q^n.

In the first step, we guess v_1, the image of e_1, and a complement subspace of ⟨v_1⟩, at a cost of q^{O(n)}. For each such guess, let P_1 be the matrix which sends e_1 to v_1 and sends ⟨e_2, …, e_n⟩ to the chosen complement subspace in some fashion. We apply P_1 to A, and call the resulting 3-way array A in the following. Then we construct Ã_1 and B̃_1, and feed these two instances to the oracle O. Note that, since P_{1,1} (using the notation above) must be monomial, any equivalence between Ã_1 and B̃_1 must preserve our choice of v_1 up to scale. Thus, if A and B are indeed isometric and we guess the correct image of e_1, then the oracle O will return yes (and conversely).

In the second step, we guess v_2, the image of e_2, and a complement subspace of ⟨v_2⟩ within ⟨e_2, …, e_n⟩, at a cost of q^{O(n)}. Note here that the previous step guarantees that there is an isometry respecting the direct sum decomposition ⟨v_1⟩ ⊕ ⟨e_2, …, e_n⟩, so we need only search for a complement of v_2 within ⟨e_2, …, e_n⟩, and not a more general complement of ⟨v_1, v_2⟩ in all of F_q^n. This is crucial for the runtime, as at the (n/2)-th step, the latter strategy would result in searching through q^{Θ(n²)} possibilities.

For each such guess, we apply the corresponding transformation to A (and again call the resulting 3-way array A). Then we construct Ã_2 and B̃_2, and feed these two instances to the oracle O. Clearly, if A and B are indeed isometric and we guess the correct images of e_2 and of e_1 from the previous step, then the oracle O will return yes.
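After all n−1 steps, the gadget pins the isometry down to a monomial matrix, and there are exactly (q−1)^n · n! of those — few enough to enumerate. This count is easy to sanity-check by brute force in Python (our own illustration; nonzero field elements are represented as 1, …, q−1, which suffices for counting):

```python
from itertools import permutations, product

def monomial_matrices(n, q):
    """Yield all n x n monomial matrices over F_q: permutation matrices
    with each 1 replaced by an arbitrary nonzero scalar."""
    for perm in permutations(range(n)):
        for scalars in product(range(1, q), repeat=n):
            M = [[0] * n for _ in range(n)]
            for i, (j, s) in enumerate(zip(perm, scalars)):
                M[i][j] = s
            yield M

count = sum(1 for _ in monomial_matrices(3, 4))   # n = 3, q = 4
```

For n = 3 and q = 4 this yields (4−1)³ · 3! = 162 matrices, matching the (q−1)^n · n! formula.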
However, there is a small caveat here: we may guess some image of e_2 such that A and B are actually isometric by some matrix P of the form diag(P_{1,1}, P_{2,2}) where P_{1,1} is a monomial matrix of size 2 × 2. But this is fine, as it still means that our choice of {v_1, v_2} is correct as a set, up to scaling. So we proceed.

In general, in the i-th step, we know that A and B are isometric by some P = diag(P_{1,1}, P_{2,2}) where P_{1,1} is a monomial matrix of size (i−1) × (i−1). We guess v_i, the image of e_i, in ⟨e_i, …, e_n⟩, and a complement subspace of ⟨v_i⟩ within ⟨e_i, …, e_n⟩. This costs q^{O(n)}. For each such guess, we apply the corresponding transformation to A (and call the resulting 3-way array A). Then we construct Ã_i and B̃_i, and feed these two instances to the oracle O. Once we guess correctly, we ensure that A and B are isometric by some P = diag(P_{1,1}, P_{2,2}) where P_{1,1} is a monomial matrix of size i × i.

So after the (n−1)-th step, we know that A and B are isometric by a monomial transformation. The number of monomial transformations is (q−1)^n · n! ≤ q^n · 2^{n log n} = q^{Õ(n)}. Therefore we can enumerate all monomial transformations and check each correspondingly. Note that all the instances we feed into the oracle O are of size O(n²). This concludes the proof.

Corollary C (Search to decision for testing isomorphism of a class of p-groups). Let p be an odd prime. Given an oracle deciding isomorphism of p-groups of class 2 and exponent p given by generating matrices over F_p of size poly(n), there is a |G|^{O(log log |G|)}-time algorithm to find an isomorphism between such groups, using at most poly(|G|) oracle queries, each of size at most poly(n).

Proof. The result follows from Theorem C together with the constructive version of Baer's Correspondence in the model of matrix groups over finite fields (Lemma 7.5).

In more detail, given Lemma 7.5 we can follow the procedure in the proof of Theorem C.
For the given p-groups, we compute their commutator maps. Then, whenever we need to feed the decision oracle, we transform the alternating bilinear map into a generating set of a p-group of class 2 and exponent p with this bilinear map as its commutator map. After getting the desired pseudo-isometry for the alternating bilinear maps, we can easily recover an isomorphism between the originally given p-groups. This concludes the proof.

In this section, we present other reductions to finish the proof of Theorem A. The reductions here are based on constructions which may be summarized as "putting the given 3-way array into an appropriate corner of a larger 3-way array." This idea is quite classical in the context of matrix problems and wildness [GP69]; here we use the same idea for problems on 3-way arrays.
Proposition 8.1. 3-Tensor Isomorphism reduces to Matrix Space Conjugacy. Symbolically, U ⊗ V ⊗ W reduces to V′ ⊗ V′* ⊗ W, where dim V′ = dim U + dim V.

Proof. The construction.
For a 3-way array A ∈ T(ℓ × n × m, F), let A = (A_1, …, A_m) ∈ M(ℓ × n, F)^m be the matrix tuple consisting of the frontal slices of A. Construct Ã = (Ã_1, …, Ã_m) ∈ M(ℓ+n, F)^m from A, where

$$\tilde A_i = \begin{pmatrix} 0 & A_i \\ 0 & 0 \end{pmatrix}.$$

See Figure 5.

Given two non-degenerate 3-way arrays A, B which we wish to test for isomorphism (we can assume non-degeneracy without loss of generality, see Observation 6.2), we claim that A ≅ B as 3-tensors if and only if the matrix spaces ⟨Ã⟩ and ⟨B̃⟩ are conjugate.

For the only if direction, since A and B are isomorphic as 3-tensors, there exist P ∈ GL(ℓ, F), Q ∈ GL(n, F), and R ∈ GL(m, F) such that P^t A Q = B^R = (B′_1, …, B′_m) ∈ M(ℓ × n, F)^m. Let P̃ = diag(P^{−t}, Q). Then

$$\tilde P^{-1} \tilde A_i \tilde P = \begin{pmatrix} P^t & 0 \\ 0 & Q^{-1} \end{pmatrix} \begin{pmatrix} 0 & A_i \\ 0 & 0 \end{pmatrix} \begin{pmatrix} P^{-t} & 0 \\ 0 & Q \end{pmatrix} = \begin{pmatrix} 0 & P^t A_i Q \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & B'_i \\ 0 & 0 \end{pmatrix}.$$

It follows that P̃^{−1} Ã P̃ = B̃^R, which just says that P̃^{−1} ⟨Ã⟩ P̃ = ⟨B̃⟩.

Figure 5: Pictorial representation of the reduction for Proposition 8.1.
For the if direction, since ⟨Ã⟩ and ⟨B̃⟩ are conjugate, there exist P̃ ∈ GL(ℓ+n, F) and R̃ ∈ GL(m, F) such that P̃ Ã P̃^{−1} = B̃^{R̃}. Write B̃^{R̃} =: B̃′ = (B̃′_1, …, B̃′_m), where $\tilde B'_i = \begin{pmatrix} 0 & B'_i \\ 0 & 0 \end{pmatrix}$, B′_i ∈ M(ℓ × n, F). Let $\tilde P = \begin{pmatrix} P_{1,1} & P_{1,2} \\ P_{2,1} & P_{2,2} \end{pmatrix}$, where P_{1,1} ∈ M(ℓ, F). Then, as P̃ Ã = B̃′ P̃, we have for every i ∈ [m],

$$\begin{pmatrix} P_{1,1} & P_{1,2} \\ P_{2,1} & P_{2,2} \end{pmatrix} \begin{pmatrix} 0 & A_i \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & P_{1,1} A_i \\ 0 & P_{2,1} A_i \end{pmatrix} = \begin{pmatrix} B'_i P_{2,1} & B'_i P_{2,2} \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & B'_i \\ 0 & 0 \end{pmatrix} \begin{pmatrix} P_{1,1} & P_{1,2} \\ P_{2,1} & P_{2,2} \end{pmatrix}. \quad (4)$$

This in particular implies that for every i ∈ [m], P_{2,1} A_i = 0. In other words, every row of P_{2,1} lies in the common left kernel of the A_i, i ∈ [m]. Since A is non-degenerate, P_{2,1} must be the zero matrix. It follows that $\tilde P = \begin{pmatrix} P_{1,1} & P_{1,2} \\ 0 & P_{2,2} \end{pmatrix} \in \mathrm{GL}(\ell+n, F)$, so P_{1,1} and P_{2,2} are both invertible matrices. By Equation (4), we have P_{1,1} A = B^{R̃} P_{2,2}, where P_{1,1} ∈ GL(ℓ, F), P_{2,2} ∈ GL(n, F), and R̃ ∈ GL(m, F), which just says that A and B are isomorphic as 3-tensors.

Corollary 8.2. 3-Tensor Isomorphism reduces to:

1.
Matrix Lie Algebra Conjugacy, where L is commutative;

2. Associative Matrix Algebra Conjugacy, where A is commutative (and in fact has the property that ab = 0 for all a, b ∈ A; note that A is not unital);

3. Matrix Lie Algebra Conjugacy, where L is solvable of derived length 2 and L/[L, L] ≅ F; and

4. Associative Matrix Algebra Conjugacy, where the Jacobson radical J(A) squares to zero and A/J(A) ≅ F.

Proof. We use the notation from the proof of Proposition 8.1. Note that the matrix spaces constructed there, e.g. ⟨Ã⟩, are all subspaces of the (ℓ+n) × (ℓ+n) matrix space

$$U := \begin{pmatrix} 0 & \mathrm{M}(\ell \times n, F) \\ 0 & 0 \end{pmatrix}.$$

For (1) and (2), observe that for any two matrices A, A′ ∈ U, we have AA′ = 0, and thus [A, A′] = AA′ − A′A = 0 as well. Thus any matrix subspace of U is both a commutative matrix Lie algebra and a commutative associative matrix algebra with zero product.

For (3) and (4), we note that we can alter the construction of Proposition 8.1 by including the matrix $M_0 = \begin{pmatrix} I_\ell & 0 \\ 0 & 0 \end{pmatrix}$ in both matrix spaces ⟨Ã⟩ and ⟨B̃⟩ without disrupting the reduction. Indeed, for the forward direction we have (again following the notation above) that

$$\tilde P^{-1} M_0 \tilde P = \begin{pmatrix} P^t & 0 \\ 0 & Q^{-1} \end{pmatrix} \begin{pmatrix} I_\ell & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} P^{-t} & 0 \\ 0 & Q \end{pmatrix} = \begin{pmatrix} I_\ell & 0 \\ 0 & 0 \end{pmatrix} = M_0.$$

For the reverse direction, we then have that, for B̃′ = B̃^{R̃}, the slices take the form $\tilde B'_i = \begin{pmatrix} \alpha I_\ell & B'_i \\ 0 & 0 \end{pmatrix}$ for some α ∈ F. Let $\tilde P = \begin{pmatrix} P_{1,1} & P_{1,2} \\ P_{2,1} & P_{2,2} \end{pmatrix}$, where P_{1,1} ∈ M(ℓ, F). Then we have, for every i ∈ [m],

$$\begin{pmatrix} P_{1,1} & P_{1,2} \\ P_{2,1} & P_{2,2} \end{pmatrix} \begin{pmatrix} \alpha I_\ell & A_i \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} \alpha P_{1,1} & P_{1,1} A_i \\ \alpha P_{2,1} & P_{2,1} A_i \end{pmatrix} = \begin{pmatrix} \alpha P_{1,1} + B'_i P_{2,1} & \alpha P_{1,2} + B'_i P_{2,2} \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} \alpha I_\ell & B'_i \\ 0 & 0 \end{pmatrix} \begin{pmatrix} P_{1,1} & P_{1,2} \\ P_{2,1} & P_{2,2} \end{pmatrix}. \quad (5)$$

Considering the (2,1) block of this equation, we find that if α ≠ 0, then immediately P_{2,1} = 0. But even if α = 0, we are back to the same argument as in Proposition 8.1; namely, by the non-degeneracy of A, we still get P_{2,1} = 0 by considering the (2,2) block. The remainder of the argument only depended on the (1,2) block of the preceding equation, which is the same as before.

Finally, to see the structure of the corresponding algebras, we must consider how our new element M_0 interacts with the others. Easy calculations reveal:

M_0² = M_0,  M_0 Ã_i = Ã_i,  Ã_i M_0 = 0,  [M_0, Ã_i] = M_0 Ã_i − Ã_i M_0 = Ã_i.

(3) For the structure of the Lie algebra, we have from the above equations that any commutator is either 0 or lands in U. And since [M_0, Ã_i] = Ã_i, we have that [L, L] is the subspace of U that we started with before including M_0. Since everything in that subspace commutes, we get [[L, L], [L, L]] = 0, and thus the Lie algebra is solvable of derived length 2. Moreover, L/[L, L] is spanned by the image of M_0, whence it is isomorphic to F.

(4) Recall that for rings without an identity, the Jacobson radical can be characterized as J(A) = {a ∈ A | (∀b ∈ A)(∃c ∈ A)[c + ba = cba]} [Lam91, p. 63]. Note that the only nontrivial cases to check are those for which b = M_0, since otherwise ba = 0 and then we may take c = 0 as well. So we have J(A) = {a ∈ A | (∃c ∈ A)[c + M_0 a = cM_0 a]}. But since M_0 is a left identity, this latter equation is just c + a = ca.
For any a ∈ U, we may take c = −a, since then both sides of the equation are zero, and thus J(A) includes all the matrices in the original space from Proposition 8.1. However, M_0 ∉ J(A), for there is no c such that c + M_0 = cM_0: any element of A can be written as αM_0 + u for some u ∈ U. Writing c this way, we are trying to solve the equation αM_0 + u + M_0 = (αM_0 + u)M_0 = αM_0. Thus we conclude u = 0, and then we get α + 1 = α, a contradiction. So M_0 ∉ J(A), and thus A/J(A) is spanned by the image of M_0, whence it is isomorphic to F.

Proposition 8.3.
Matrix Space Isometry reduces to
Algebra Isomorphism and
Trilinear Form Equivalence. Symbolically, V ⊗ V ⊗ W reduces to V′ ⊗ V′ ⊗ V′* and to V′ ⊗ V′ ⊗ V′, where dim V′ = dim V + dim W.

Proof. The construction.
Given a matrix space A by an ordered linear basis A = (A_1, …, A_m), construct the 3-way array A′ ∈ T((n+m) × (n+m) × (n+m), F) whose frontal slices are:

$$A'_i = 0 \ (\text{for } i \in [n]), \qquad A'_{n+i} = \begin{pmatrix} A_i & 0 \\ 0 & 0 \end{pmatrix} \ (\text{for } i \in [m]).$$

Let Alg(A′) denote the algebra whose structure constants are defined by A′, and let f_{A′} denote the trilinear form whose coefficients are given by A′.

Given two matrix spaces A, B, we claim that A and B are isometric if and only if Alg(A′) ≅ Alg(B′) (isomorphism of algebras), if and only if f_{A′} and f_{B′} are equivalent as trilinear forms. The proofs are broken into the following two lemmas, which then complete the proof of the proposition.

Lemma 8.4.
Let the notation be as above. The matrix spaces A, B are isometric if and only if Alg(A′) and Alg(B′) are isomorphic.

Proof. Let A, B be the ordered bases of A, B, respectively. Recall that A and B are isometric if and only if there exist (P, R) ∈ GL(n, F) × GL(m, F) such that P^t A P = B^R. Also recall that Alg(A′) and Alg(B′) are isomorphic as algebras if and only if there exists P̃ ∈ GL(n+m, F) such that P̃^t A′ P̃ = B′^{P̃}. Since the A_i (resp. B_i) form a linear basis of A (resp. B), the A_i (resp. B_i) are linearly independent.

The only if direction is easy to verify. Given an isometry (P, R) between A and B, let P̃ = diag(P, R). Let P̃^t A′ P̃ = (A″_1, …, A″_{n+m}). Then for i ∈ [n], A″_i = 0, and for n+1 ≤ i ≤ n+m,

$$A''_i = \begin{pmatrix} P^t A_{i-n} P & 0 \\ 0 & 0 \end{pmatrix}.$$

Let B′^{P̃} = (B″_1, …, B″_{n+m}). Then for i ∈ [n], B″_i = 0, and for n+1 ≤ i ≤ n+m, the upper-left block of B″_i is the (i−n)-th matrix in B^R, which in turn equals P^t A_{i−n} P by the assumption on P and R. This proves the only if direction.

For the if direction, let $\tilde P = \begin{pmatrix} P & X \\ Y & R \end{pmatrix} \in \mathrm{GL}(n+m, F)$ be an algebra isomorphism, where P is of size n × n. Let P̃^t A′ P̃ = (A″_1, …, A″_{n+m}) and B′^{P̃} = (B″_1, …, B″_{n+m}). Since A′_i = 0 for i ∈ [n], we have A″_i = 0 = B″_i for i ∈ [n]. Therefore Y has to be 0, because the B_i are linearly independent. It follows that $\tilde P = \begin{pmatrix} P & X \\ 0 & R \end{pmatrix}$, where P and R are invertible. So for 1 ≤ i ≤ m, we have

$$\tilde P^t A'_{n+i} \tilde P = \begin{pmatrix} P^t & 0 \\ X^t & R^t \end{pmatrix} \begin{pmatrix} A_i & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} P & X \\ 0 & R \end{pmatrix} = \begin{pmatrix} P^t A_i P & P^t A_i X \\ X^t A_i P & X^t A_i X \end{pmatrix}.$$

Also, the last m matrices in B′^{P̃} are of the form $\begin{pmatrix} B''_i & 0 \\ 0 & 0 \end{pmatrix}$, where B″_i is the i-th matrix in B^R. This implies that P ∈ GL(n, F) and R ∈ GL(m, F) together form an isometry between A and B.

Corollary 8.5.
Matrix Space Isometry reduces to:

1. Associative Algebra Isomorphism, for algebras that are commutative and unital;

2. Associative Algebra Isomorphism, for algebras that are commutative and 3-nilpotent (abc = 0 for all a, b, c ∈ A); and

3. Lie Algebra Isomorphism, for Lie algebras that are 2-step nilpotent ([u, [v, w]] = 0 for all u, v, w ∈ L).

Proof. We follow the notation from the proof of Lemma 8.4. We begin by observing that Alg(A′) is a 3-nilpotent algebra, and therefore is automatically associative. Let V′ = V ⊕ W, where dim V = n and dim W = m, and, as a subspace of V′ ≅ F^{n+m}, V has the basis e_1, …, e_n and W has the basis e_{n+1}, …, e_{n+m}. Let ∘ denote the product in Alg(A′), so that x_i ∘ x_j = Σ_k A′(i, j, k) x_k. Note that, because the lower m rows and the rightmost m columns of each frontal slice of A′ are zero, we have w ∘ x = x ∘ w = 0 for any w ∈ W and any x ∈ V′. Thus the only way to get a nonzero product is of the form v ∘ v′ with v, v′ ∈ V, and this product ends up in W, since the only nonzero frontal slices are those numbered n+1, …, n+m. Since any nonzero product ends up in W, and anything in W times anything at all is zero, we have abc = 0 for all a, b, c ∈ Alg(A′); that is, Alg(A′) is 3-nilpotent. Any 3-nilpotent algebra is automatically associative, since the associativity condition only depends on products of three elements.

(2) If instead of general Matrix Space Isometry we start from Symmetric Matrix Space Isometry (which is also TI-complete, by Corollary 7.4), then we see that the algebra is commutative, for we then have A′(i, j, k) = A′(j, i, k), which corresponds to x_i ∘ x_j = x_j ∘ x_i.

(1) As is standard, from the algebra A = Alg(A′) we may adjoin a unit by considering A[e]/(e ∘ x = x ∘ e = x | x ∈ A). In terms of vector spaces, the unit-adjoined algebra is A ⊕ F, where the new F summand is spanned by the identity e. This standard algebraic construction has the property that two such algebras A, B are isomorphic if and only if their corresponding unit-adjoined algebras are (see, e.g., [Dor32, Wik19]).

(3) By starting from an alternating matrix space A (and noting that Alternating Matrix Space Isometry is still TI-complete, by Corollary 7.4), we get that Alg(A′) is alternating, that is, v ∘ v = 0. Since we still have 3-nilpotency, a ∘ b ∘ c = 0, the product ∘ automatically satisfies the Jacobi identity. An alternating product satisfying the Jacobi identity is, by definition, a Lie bracket (that is, we can define [v, w] := v ∘ w), and thus we get a Lie algebra with structure constants A′. Translating the 3-nilpotency condition a ∘ b ∘ c = 0 into the Lie bracket notation, we get [a, [b, c]] = 0; in other words, the Lie algebra is nilpotent of class 2.

Corollary 8.6. Matrix Space Isometry reduces to
Cubic Form Equivalence.

Proof. Agrawal and Saxena [AS06] show that Commutative Algebra Isomorphism reduces to Cubic Form Equivalence. Combine with Corollary 8.5(1). The reduction from V ⊗ V ⊗ W to V′ ⊗ V′ ⊗ V′ is achieved by the same construction.

Lemma 8.7.
Let A, B, A′, and B′ be as above. Then A and B are pseudo-isometric if and only if A′ and B′ are equivalent as trilinear forms.

Proof. Recall that A and B are pseudo-isometric if there exist P ∈ GL(n, F), R ∈ GL(m, F) such that P^t A P = B^R. Also recall that A′ and B′ are equivalent as trilinear forms if there exists P̃ ∈ GL(n+m, F) such that applying P̃ in all three directions sends A′ to B′, i.e., (P̃^t A′ P̃)^{P̃} = B′. Since the A_i (resp. B_i) form a linear basis of A (resp. B), the A_i (resp. B_i) are linearly independent.

The only if direction is easy to verify. Given a pseudo-isometry P, R between A and B, let

P̃ = [ P 0 ; 0 R^{-1} ].

Then it can be verified easily that P̃ is a trilinear form equivalence between A′ and B′, following the same approach as in the proof of Lemma 8.4.

For the if direction, let

P̃ = [ P X ; Y R ] ∈ GL(n+m, F)

be a trilinear form equivalence between A′ and B′. We first observe that the last m matrices in P̃^t A′ P̃ are still linearly independent. Then, because the first n frontal slices of B′ are all zero matrices, Y has to be the zero matrix. It follows that P̃ = [ P X ; 0 R ], where P and R are invertible. Then it can be verified easily that P and R^{-1} form a pseudo-isometry between A and B, following the same approach as in the proof of Lemma 8.4.

9 Reducing d-Tensor Isomorphism to Algebra Isomorphism

Theorem B. d-Tensor Isomorphism reduces to
Algebra Isomorphism. If the input tensor has size n_1 × n_2 × ··· × n_d, then the output algebra has dimension O(d² n^{d−2}), where n = max{n_i}.

Remark 9.1.
One cannot do too much better in terms of the size of the output, as the following argument suggests. Over finite fields, we may count the number of orbits, which provides a rigorous lower bound on the size blow-up of any kernel reduction (see, e. g., [FG11, Sec. 4.2.4]). Over infinite fields, if we consider algebraic reductions, they must preserve dimension, so we can make a similar (albeit more heuristic) argument by considering the "dimension" of the set of orbits. Here we have put "dimension" in quotes because the set of orbits is not a well-behaved topological space (it is typically not even T_0), but even in this case the same argument should essentially hold. The space of n × n × ··· × n d-tensors has dimension n^d, and the group GL_n × ··· × GL_n has dimension dn², so the "dimension" of the set of orbits is at least n^d − dn² ∼ n^d (for d ≥ 3); over F_q, the number of orbits is at least q^{n^d − dn²}. For algebras of dimension N, the space of such algebras (given by structure constants) is ≤ N³-dimensional, so the "dimension" of the set of orbits is at most N³; over F_q, the number of orbits is at most q^{N³}. Thus we need N³ ≳ n^d, whence N ≳ n^{d/3}.

Proof idea.
The idea here is similar to the reduction from 3-Tensor Isomorphism to Algebra Isomorphism: we want to create an algebra in which all products eventually land in an ideal, and multiplication of algebra elements by elements in the ideal is described by the tensor we started with. For a 3-tensor this was very natural, as the structure constants of any algebra form a 3-tensor. In that case, we are using it to say how to write the product of 2 elements as a linear combination (the third factor of the tensor) of basis elements. With a d-tensor for d ≥ 4, we now want to use it to describe how to write the product of d−1 elements as a linear combination of basis elements. The tricky part here is that in an algebra we must still describe the product of any two elements. The idea is to create a set of generators, let them freely generate monomials up to degree d−2, and then when we get a product of d−1 generators, rewrite it as a linear combination of generators according to the given tensor. This idea almost provides one direction of the reduction: if two d-tensors A, B are isomorphic, then the corresponding algebras A, B are isomorphic. However, there is an issue with implementing this, namely that monomials are commutative, but our tensors A, B need not be symmetric, and moreover, they need not even be "square" (have all side lengths equal). In [AS05, Thm. 5] they reduce Degree-d Form Equivalence to Commutative Algebra Isomorphism along similar lines, but there the starting objects are themselves commutative, so this was not an issue. In our case, we will get around this using a certain noncommutative algebra where the only nonzero products are those which come "in the right order."

Another potentially tricky aspect of the reduction is the converse: suppose we build our algebras A, B as above from two d-tensors, and A, B are isomorphic; how can we guarantee that A and B are isomorphic?
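The rewriting rule at the heart of this idea can be sketched in a few lines (a toy encoding of ours for d = 4; the tensor A and the generator names are illustrative only, not the paper's construction):

```python
# Toy sketch (ours) of the rewriting idea for d = 4: a product of d-1 = 3
# generators x_{1,i} x_{2,j} x_{3,k} is rewritten as the linear combination
# sum_l A[i][j][k][l] x_{4,l} of the degree-one generators x_{4,l}.

def rewrite(A, i, j, k):
    """Linear combination {l: coefficient} replacing x_{1,i} x_{2,j} x_{3,k}."""
    return {l: c for l, c in enumerate(A[i][j][k]) if c != 0}

A = [[[[1, 0], [0, 2]]]]          # a 1 x 1 x 2 x 2 tensor
print(rewrite(A, 0, 0, 0))        # {0: 1}
print(rewrite(A, 0, 0, 1))        # {1: 2}
```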
For this, we would like to be able to identify certain subsets of the algebras as characteristic (invariant under any automorphism), so that those characteristic subsets force the isomorphism to take a particular form, which we can then massage into an isomorphism between the tensors A, B. Our way of doing this is to encode the "degree" structure into the path algebra of a graph, as described in the next section. If the graph has no automorphisms, then the path algebra has the advantage that for any two vertices i, j, the subset of A spanned by the paths from i to j is nearly characteristic in a way we make precise below.

9.1 Preliminaries for Theorem B

To make the above proof idea precise, we will need a little background on Leavitt path algebras (a.k.a. quiver algebras) and their quotients. For a textbook reference on these algebras, see [ASS06, Ch. II], and for a textbook treatment of Wedderburn–Artin theory and the Jacobson radical, see [Lam91]. Aside from the definition of path algebra, most of this section will end up being used as a black box; we include it mostly for ease of reference.

We start with some important, classical results on the structure of associative algebras. The
Jacobson radical of an associative algebra A, here denoted R(A), is the intersection of all maximal right ideals. Equivalently, R(A) = {a ∈ A : every element of 1 + AaA is invertible}. A unital algebra A over a field F is semisimple if R(A) = 0; in this case, by Wedderburn's Theorem (see below), A is isomorphic to a direct sum of matrix algebras over finite-degree division rings extending F. An algebra A is called separable if it is semisimple over every field extending F, that is, A ⊗_F K is semisimple for all fields K extending F. Equivalently, A is separable if it is isomorphic to ⊕_{i=1}^{d} M(d_i, F_i), where each F_i is a division ring extending F such that the center Z(F_i) is a separable field extension of F. If F has characteristic zero or is perfect (which includes all finite fields), then all its extensions are separable. The algebra we construct will simply be a direct sum of copies of F, so it is automatically separable over any field.

An element a ∈ A is idempotent if a² = a. An idempotent e is primitive if it cannot be written as the sum of two nonzero idempotents. Two idempotents e, f are orthogonal if ef = fe = 0. A complete set of primitive orthogonal idempotents of A is a set {e_1, ..., e_n} of primitive idempotents which are pairwise orthogonal, and such that the set is maximal subject to this condition.

Theorem 9.2 (Wedderburn–Mal'cev, see, e. g., [Far05]). Let A be a finite-dimensional, associative, unital algebra over a field F. Then

1. A/R(A) ≅ ⊕_{i=1}^{d} M(d_i, F_i) (as algebras), where each F_i is a division ring of finite degree over F.
2. If A/R(A) is separable, then there exists a subalgebra S ⊆ A such that A = S ⊕ R(A) (as F-vector spaces).
3. If T ⊆ A is any separable subalgebra, then there exists r ∈ R(A) such that (1+r)T(1+r)^{−1} ⊆ S.
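As a small numerical illustration (ours, not from the text) of the characterization R(A) = {a ∈ A : every element of 1 + AaA is invertible}: in the algebra of upper-triangular matrices, the radical is the strictly upper-triangular matrices, and for a in the radical every 1 + xay is unitriangular, hence has determinant exactly 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_upper(n):
    """Random integer upper-triangular matrix (an element of the algebra A)."""
    return np.triu(rng.integers(-5, 6, size=(n, n)))

n = 3
a = np.triu(rng.integers(-5, 6, size=(n, n)), k=1)  # strictly upper: a in R(A)
ok = True
for _ in range(200):
    x, y = rand_upper(n), rand_upper(n)
    m = np.eye(n, dtype=int) + x @ a @ y
    # x @ a @ y is strictly upper triangular, so m is unitriangular: det(m) = 1,
    # i.e., every element of 1 + AaA is invertible, consistent with a in R(A)
    ok &= round(np.linalg.det(m)) == 1
print(ok)  # True
```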
The last part of the preceding theorem is what we will use to show that the set of paths i → j in our graph is "nearly characteristic"; that is, it is not characteristic, but it is characteristic up to conjugacy (that is, up to inner automorphisms).

Definition 9.3 (Leavitt path algebra). Given a directed multigraph G (possibly with parallel edges and self-loops, a.k.a. a quiver), its Leavitt path algebra
Path(G) is the algebra of paths in G, where multiplication is given by concatenation of paths (when this is well-defined), and zero otherwise. That is, Path(G) is generated by {e_v : v ∈ V(G)} ∪ {x_a : a ∈ E(G)}, where the generators e_v are thought of as the "path of length 0" at vertex v. The defining relations in Path(G) are that the product of two paths is their concatenation if the end of the first equals the start of the second, and zero otherwise. More formally, the relations are:

e_v e_w = δ_{v,w} e_v
e_v x_a = δ_{v,start(a)} x_a
x_a e_v = δ_{v,end(a)} x_a
x_a x_b = 0 if start(b) ≠ end(a),

where δ_{x,y} is the Kronecker delta: it is 1 if x = y and 0 otherwise. Note that we are allowed to take formal linear combinations of paths in this algebra, as it is an F-algebra (so in particular, it is an F-vector space). The arrow ideal of Path(G) is the two-sided ideal generated by the arrows, and has a basis consisting of all paths of length ≥ 1; it is denoted R_G.

Lemma 9.4 (See [ASS06, Cor. II.1.11]). If G is finite, connected, and acyclic, then R(Path(G)) is the arrow ideal R_G, and has a basis consisting of all paths of length ≥ 1, and the set {e_v : v ∈ V(G)} is a complete set of primitive orthogonal idempotents.

Corollary 9.5.
Let G be a finite, connected, acyclic graph, and I an ideal of Path(G) contained in R_G; let A = Path(G)/I. Then (1) R(A) = R_G/I, (2) A/R(A) ≅ F^{⊕|V(G)|}, whence A/R(A) is separable, and (3) {ē_v : v ∈ V(G)} is a complete set of primitive orthogonal idempotents, where ē_v is the image of e_v under the quotient map Path(G) → Path(G)/I = A.

Proof. (1) This holds for any ideal contained in the radical of any finite-dimensional associative unital algebra [Lam91, Prop. 4.6].

(2) It is clear that as vector spaces, Path(G) = ⟨e_1, ..., e_n⟩ ⊕ R_G (where n = |V(G)|), and the span of the e_i is easily seen to be an algebra isomorphic to F^n, where the i-th copy of F is spanned by π(e_i), where π : Path(G) → Path(G)/R_G is the natural projection. Thus Path(G)/R_G ≅ F^n. Since R(A) = R_G/I, we have A/R(A) = (Path(G)/I)/(R_G/I) ≅ Path(G)/R_G ≅ F^n. As a semisimple algebra, we thus have that A/R(A) ≅ ⊕ M(1, F), and as F is always a separable extension over itself, A/R(A) is separable.

(3) The property of being a set of primitive orthogonal idempotents is preserved by homomorphisms, so there are only two things to check here: first, that none of the ē_v is zero modulo I, and second, that there are no additional primitive idempotents in A that are mutually orthogonal with every ē_v. To see that none of the ē_v are zero, note that π : Path(G) → Path(G)/R_G factors through A; then since π(e_v) ≠ 0 for any v (from the previous paragraph), it must be the case that ē_v ≠ 0 as well. Finally, we must show this is a complete set of primitive orthogonal idempotents. Suppose not; that is, suppose there is some e ∉ {ē_v : v ∈ V(G)} such that e is a primitive idempotent that is orthogonal in A to every ē_v. First, we claim that e ∉ R(A) = R_G/I.
For, since G is a finite acyclic graph, its arrow ideal R_G is nilpotent: there are no paths longer than n − 1 = |V(G)| − 1, so we must have R_G^n = 0, whence R_G cannot contain any idempotents. Since R_G is nilpotent, the same must be true of R_G/I, whence R_G/I cannot contain any idempotents, so e cannot be in R_G/I. But then the image of e in A/R(A) is nonzero (since e ∉ R(A)), so its image is another primitive idempotent orthogonal to every π(e_v) in Path(G)/R_G ≅ A/R(A). But this is a contradiction, since {π(e_v)} is already a complete set of primitive orthogonal idempotents for A/R(A).

Finally, in the course of the proof, we will use the following construction of Grigoriev:

Theorem 9.6 (Grigoriev [Gri81, Theorem 1]). Graph Isomorphism is equivalent to
Algebra Isomorphism for algebras A such that the radical squares to zero and A/R(A) is abelian.

In our proof, all we will need aside from Grigoriev's result is the construction itself, which we recall here in language consistent with ours.
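To make Definition 9.3 concrete before recalling Grigoriev's construction, here is a minimal sketch (our own encoding, not from the text) of the basis-path multiplication rule in Path(G):

```python
# A basis path is a pair (v, arrows): the trivial path e_v is (v, ()) and a
# path of length k lists its k arrows in order.  Each arrow is a triple
# (name, start, end).  Multiplication is concatenation when the end of the
# first path equals the start of the second, and zero (None) otherwise.

def path_start(p):
    v, arrows = p
    return arrows[0][1] if arrows else v

def path_end(p):
    v, arrows = p
    return arrows[-1][2] if arrows else v

def multiply(p, q):
    """Concatenate basis paths; return None for the zero product."""
    if path_end(p) != path_start(q):
        return None
    return (path_start(p), p[1] + q[1])

a = ("u", (("a", "u", "v"),))   # arrow a: u -> v
b = ("v", (("b", "v", "w"),))   # arrow b: v -> w
e_u, e_v = ("u", ()), ("v", ())
assert multiply(e_u, a) == a and multiply(a, e_v) == a   # idempotent relations
assert multiply(a, b) == ("u", (("a", "u", "v"), ("b", "v", "w")))
assert multiply(b, a) is None   # start(a) != end(b): zero product
print("ok")
```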
Construction [Gri81].
Given a graph G, construct an algebra A_G as follows: it is generated by {e_i : i ∈ V(G)} ∪ {e_ij : (i, j) ∈ E(G)} subject to the following relations: e_i e_j = δ_{ij} e_i, e_i e_kj = δ_{ik} e_kj, e_kj e_i = δ_{ji} e_kj, e_ij e_kl = 0 when j ≠ k; R(A_G) is generated by {e_ij}; and the radical squares to zero. It is immediate that this is just Path(G)/R_G². From any such algebra A, Grigoriev recovers a corresponding weighted graph, where the weight on (i, j) is dim e_i A e_j. In our setting we use multiple parallel edges rather than weights, but the proof goes through mutatis mutandis.

9.2 Proof of Theorem B

Proof.
Let A be an n_1 × n_2 × ··· × n_d d-tensor. Let G be the following directed multigraph (see Figure 6): it has d vertices, labeled 1, ..., d, and for i = 1, ..., d−1, it has n_i parallel arrows from vertex i to vertex i+1, and n_d parallel arrows from 1 to d.

Figure 6: The graph G whose path algebra we take a quotient of to construct the reduction for Theorem B.

Because of the structure of this graph, we can index the generators of Path(G) a little more mnemonically than in the preliminaries above: let the generators corresponding to the n_i arrows from i → (i+1) be x_{i,a} for a = 1, ..., n_i, and let the generators corresponding to the n_d arrows 1 → d be x_{d,a} for a = 1, ..., n_d. Let A be the quotient of Path(G) by the relations

x_{1,i_1} x_{2,i_2} ··· x_{d−1,i_{d−1}} = Σ_{j=1}^{n_d} A(i_1, i_2, ..., i_{d−1}, j) x_{d,j}.   (6)

At the moment, we only have A in terms of generators and relations; however, it will be easy to turn it into its basis representation. The key is to bound its dimension, which we do now. Except for paths of length d−1 (because of the nontrivial relations (6)), this is just counting the number of paths in the graph described above. The only nonzero monomials of degree k+1 are those of the form x_{i,a_i} x_{i+1,a_{i+1}} x_{i+2,a_{i+2}} ··· x_{i+k,a_{i+k}}. For a given choice of i ∈ {1, ..., d−1−k}, there are exactly n_i n_{i+1} ··· n_{i+k} such monomials, so we have

dim A = |{e_i}| + n_d + Σ_{k=0}^{d−3} Σ_{i=1}^{d−1−k} n_i n_{i+1} ··· n_{i+k} = O(d² n^{d−2}).

By means of breadth-first search, we may thus list a basis for A and its structure constants with respect to that basis.

We claim that the map A ↦ A is a reduction. Suppose B is another tensor of the same dimension, and let B be the associated algebra as above.
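The dimension count above can be cross-checked with a short script (our own check, not from the paper; it enumerates the idempotents e_1, ..., e_d, the arrows x_{d,a}, and the chain monomials of degree 1 through d−2):

```python
# dim A = d + n_d + sum over degrees k+1 <= d-2 of the chain monomial counts.
from math import prod

def dim_A(ns):  # ns = (n_1, ..., n_d)
    d = len(ns)
    total = d + ns[-1]                     # e_1..e_d and x_{d,1}..x_{d,n_d}
    for k in range(0, d - 2):              # monomials of degree k+1 <= d-2
        for i in range(1, d - 1 - k + 1):  # start index i uses arrows i..i+k
            total += prod(ns[i - 1 : i + k])
    return total

print(dim_A((2, 3, 4)))     # d=3: 3 + 4 + (n_1 + n_2) = 12
print(dim_A((2, 2, 2, 2)))  # d=4: 4 + 2 + (2+2+2) + (4+4) = 20
```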
We claim that A ≅ B as d-tensors if and only if A ≅ B as algebras.

For the only if direction, suppose A ≅ B via (P_1, P_2, ..., P_d) ∈ GL(n_1, F) × ··· × GL(n_d, F), that is,

A(i_1, ..., i_d) = Σ_{j_1,...,j_d} (P_1)_{i_1,j_1} ··· (P_d)_{i_d,j_d} B(j_1, ..., j_d)

for all i_1, ..., i_d. Then we claim that the block-diagonal matrix P = diag(P_1, P_2, ..., P_{d−1}, P_d^{−t}) ∈ GL(n, F) (where n = Σ_{i=1}^{d} n_i), together with mapping e_i to e_i, induces an isomorphism from A to B. Note that P itself is not an isomorphism, as dim A is much larger than d + n, but P specifies a linear map on the generators of A, which we may then extend to all of A.

First let us see that P indeed gives a well-defined homomorphism A → B. Since P is only defined on the generators and is, by definition, extended by distributivity, the only thing to check here is that P sends the relations of A into the relations of B. Let y_{1,1}, ..., y_{1,n_1}, ..., y_{d,n_d}, e_1, ..., e_d denote the basis of B as above. The map P is defined by

P(e_i) = e_i,   P(x_{i,a}) = Σ_{a′=1}^{n_i} (P_i)_{a,a′} y_{i,a′} for i = 1, ..., d−1,   and   P(x_{d,a}) = Σ_{a′=1}^{n_d} (P_d^{−t})_{a,a′} y_{d,a′}.

By left multiplying by P_d^t, we may rewrite this last equation as

y_{d,a} = Σ_{a′=1}^{n_d} (P_d)_{a′,a} P(x_{d,a′});

note the transpose.

To check the relations, let us write out the Leavitt path algebra relations explicitly for our graph, in our notation. The generators of A are x_{1,1}, x_{1,2}, ..., x_{1,n_1}, x_{2,1}, x_{2,2}, ..., x_{2,n_2}, ..., x_{d,n_d}, e_1, ...
, e_d, and the relations are (6) and the quiver relations:

e_i e_j = δ_{i,j} e_i
e_i x_{j,a} = (δ_{i,j}(1 − δ_{j,d}) + δ_{i,1} δ_{j,d}) x_{j,a}
x_{j,a} e_i = (δ_{j+1,i} + δ_{j,d} δ_{i,d}) x_{j,a}
x_{i,a} x_{d,b} = 0   (7)
x_{d,b} x_{i,a} = 0   (i < d)
x_{i,a} x_{j,b} = 0 if j ≠ i + 1 (i, j < d).

Note that the set e_i A e_j is linearly spanned by the paths i → j in this graph.

The relations involving the e_i are easy to verify, since they only depend on the first subscript of x_{i,a} (resp., y_{j,b}), and P does not alter this subscript.
Finally, since dim A = dim B < ∞ , any linear surjective map A → B is automatically bijective, so this map isindeed an isomorphism of algebras. For the if direction, suppose that f : A → B is an isomorphism of algebras. Since theJacobson radical is characteristic, we have f ( R ( A )) = R ( B ) . Then { f ( e v ) : v ∈ V } is a setof primitive orthogonal idempotents in B , and their span T = h f ( e v ) : v ∈ V i is a separablesubalgebra (isomorphic to F n ) such that B = T ⊕ R ( B ) . By the Wedderburn–Mal’cev Theorem(Theorem 9.2(3)), there is some r ∈ R ( B ) such that (1 + r ) T (1 + r ) − = h e , . . . , e n i =: S . Sincethe e i are the only primitive idempotents in S , we must have that (1 + r ) f ( e i )(1 + r ) − = e π ( i ) forall i and some permutation π ∈ S n .Next we will show that this permutation is in fact the identity, so that (1 + r ) f ( e i )(1 + r ) − = e i for all i . For this, consider A ′ = A /R ( A ) and similarly B ′ . These are precisely the algebrasconsidered by Grigoriev [Gri81] (reproduced as Theorem 9.6 above). Since R ( A ) is characteristic,so is its square, and thus f induces an isomorphism A ′ ∼ = → B ′ . By Theorem 1 of Grigoriev [Gri81],any isomorphism A ′ → B ′ induces an isomorphism of the corresponding graphs, so this isomorphism45ust map e i to e i for each i (since our graph G has no automorphisms). Thus π must be the identity,and (1 + r ) f ( e i )(1 + r ) − = e i for all i .Since conjugation is an automorphism, let f ′ : A → B be c r ◦ f , where c r ( b ) = (1 + r ) b (1 + r ) − . By the above, f ′ ( e i ) = e i for all i . Thus f ′ ( e i A e j ) = e i B e j . In particular, define P i to be therestriction of f ′ to e i A e i +1 for i = 1 , . . . , d − and P d to be the restriction of f ′ to e A e d . Then wehave that P i is a linear bijection from the span of x i, , . . . , x i,n i to the span of y i, , . . . , y i,n i for all i . We claim that P = ( P , . . . 
, P d − , P − td ) is a tensor isomorphism A → B , that is, A ( i , . . . , i d ) = X j ,...,j d ( P ) i ,j · · · ( P − td ) i d ,j d B ( j , . . . , j d ) . From the fact that f ′ is an isomorphism, we have n d X i d =1 A ( i , . . . , i d ) f ′ ( x d,i d ) = f ′ ( x ,i x ,i · · · x d − ,i d − ) n d X i d =1 A ( i , . . . , i d ) n d X j d =1 ( P d ) i d ,j d y d,j d = f ′ ( x ,i ) f ′ ( x ,i ) · · · f ′ ( x d − ,i d − )= X j ,...,j d − ( P ) i ,j ( P ) i ,j · · · ( P d − ) i d − ,j d − y ,j y ,j · · · y d − ,j d − = X j ,...,j d − ( P ) i ,j ( P ) i ,j · · · ( P d − ) i d − ,j d − n d X j d =1 B ( j , . . . , j d ) y d,j d For each j d ∈ { , . . . , n d } , equating the coefficient of y d,j d gives n d X i d =1 A ( i , . . . , i d )( P d ) i d ,j d = X j ,...,j d − ( P ) i ,j ( P ) i ,j · · · ( P d − ) i d − ,j d − B ( j , . . . , j d ) Let A ( i , . . . , i d − , − ) be the natural row vector of length n d , and similarly for B ( j , . . . , j d − , − ) .Then we may rewrite the preceding set of n d equations (one for each choice of j d ) in matrix notationas A ( i , . . . , i d − , − ) · P d = X j ,...,j d − ( P ) i ,j ( P ) i ,j · · · ( P d − ) i d − ,j d − B ( j , . . . , j d − , − ) Right multiplying by P − d , we then get A ( i , . . . , i d − , − ) = X j ,...,j d − ( P ) i ,j ( P ) i ,j · · · ( P d − ) i d − ,j d − B ( j , . . . , − ) P − d A ( i , . . . , i d ) = X j ,...,j d − ,j d ( P ) i ,j ( P ) i ,j · · · ( P d − ) i d − ,j d − B ( j , . . . , j d )( P − d ) j d ,i d = X j ,...,j d ( P ) i ,j ( P ) i ,j · · · ( P d − ) i d − ,j d − ( P − td ) i d ,j d B ( j , . . . , j d ) , as claimed. 46 A classic result is that GI is complete for isomorphism problems of explicitly given structures (see,e. g., [ZKT85, Section 15]). 
Here we formally state the linear-algebraic analogue of this result, and observe trivially that the results of [FGS19] already show that 3-Tensor Isomorphism is universal among what we call "basis-explicit" (multi)linear structures of degree 2.

First let us recall the statement of the result for GI, so we can develop the appropriate analogue for tensor isomorphism. A first-order signature is a list of positive integers (r_1, r_2, ..., r_k; f_1, ..., f_ℓ); a model of this signature consists of a set V (colloquially referred to as "vertices"), k relations R_i ⊆ V^{r_i}, and ℓ functions F_i : V^{f_i} → V. The numbers r_i are thus the arities of the relations R_i, and the f_i are the arities of the functions F_i. Two such models (V; R_1, ..., R_k; F_1, ..., F_ℓ) and (V′; R′_1, ..., R′_k; F′_1, ..., F′_ℓ) are isomorphic if there is a bijection ϕ : V → V′ that sends R_i to R′_i for all i and F_i to F′_i for all i. In symbols, ϕ is an isomorphism if (v_1, ..., v_{r_i}) ∈ R_i ⇔ (ϕ(v_1), ..., ϕ(v_{r_i})) ∈ R′_i for all i and all v_* ∈ V, and similarly if ϕ(F_i(v_1, ..., v_{f_i})) = F′_i(ϕ(v_1), ..., ϕ(v_{f_i})) for all i and all v_* ∈ V. By an "explicitly given structure" or "explicit model" we mean a model where each relation R_i is given by a list of its elements and each function is given by listing all of its input-output pairs. Fixing a signature, the isomorphism problem for that signature is to decide, given two explicit models of that signature, whether they are isomorphic. This isomorphism problem is directly encoded into the isomorphism problem for edge-colored hypergraphs, which can then be reduced to GI using standard gadgets.

For example, the signature for directed graphs (possibly with self-loops) is simply σ = (2;): its models are simply binary relations.
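For illustration (our sketch, not from the text; brute force, so exponential in |V| and usable only for tiny models), the isomorphism problem for explicit models of the directed-graph signature σ = (2;) can be decided directly:

```python
from itertools import permutations

# An explicit model of σ = (2;) is (V, R) with R ⊆ V x V, listed out.

def isomorphic(V1, R1, V2, R2):
    """Try every bijection V1 -> V2 and check that it carries R1 onto R2."""
    if len(V1) != len(V2) or len(R1) != len(R2):
        return False
    V1, V2 = list(V1), list(V2)
    for perm in permutations(V2):
        phi = dict(zip(V1, perm))
        if {(phi[u], phi[v]) for (u, v) in R1} == set(R2):
            return True
    return False

C3 = {(0, 1), (1, 2), (2, 0)}   # directed 3-cycle
C3r = {(1, 0), (2, 1), (0, 2)}  # its reversal: isomorphic to C3
H = {(0, 1), (1, 2), (2, 2)}    # has a self-loop: not isomorphic to C3
print(isomorphic(range(3), C3, range(3), C3r))  # True
print(isomorphic(range(3), C3, range(3), H))    # False
```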
If one wants to consider graphs without self-loops, this is a special case of the isomorphism problem for the signature σ, namely, for those explicit models in which (v, v) ∉ R for any v. Note that a graph without self-loops is never isomorphic to a graph with self-loops, and two directed graphs without self-loops are isomorphic as directed graphs if and only if they are isomorphic as models of the signature σ. In other words, the isomorphism problem for simple directed graphs really is just a special case. The same holds for undirected graphs without self-loops, which are simply models of the signature σ in which (v, v) ∉ R and R is symmetric. As another example, the signature for finite groups is γ = (1; 1, 2): the first relation R_1 will be a singleton, indicating which element is the identity, the function F_1 is the inverse function F_1(g) = g^{−1}, and the second function F_2 is the group multiplication F_2(g, h) = gh. Of course, models of the signature γ can include many non-groups as well, but, as was the case with directed graphs, a group will never be isomorphic to a non-group, and two groups are isomorphic as models of γ iff they are isomorphic as groups.

A natural linear-algebraic analogue of the above is as follows. One additional feature we add here for purposes of generality is that we need to make room for dual vector spaces. A linear signature is then a list of pairs of nonnegative integers ((r_1, r*_1), ..., (r_k, r*_k); (f_1, f*_1), ..., (f_ℓ, f*_ℓ)) with the property that r_i + r*_i > 0 and f_i + f*_i > 0 for all i. By the arity of the i-th relation (resp., function) we mean the sum r_i + r*_i (resp., f_i + f*_i).

Definition 10.1 (Linear signature, basis-explicit). Given a linear signature σ = ((r_1, r*_1), ..., (r_k, r*_k); (f_1, f*_1), ..., (f_ℓ, f*_ℓ)), a linear model is defined as follows. A side remark first: sometimes one also includes constants in the definition, but these can be handled as relations of arity 1.
While we could have done the same for functions, treating a function of arity f as its graph, which is a relation of arity f + 1, distinguishing between relations and functions will be useful when we come to our linear-algebraic analogue.

A linear model for σ over a field F consists of an F-vector space V, linear subspaces R_i ≤ V^{⊗r_i} ⊗ (V*)^{⊗r*_i} for 1 ≤ i ≤ k, and linear maps F_i : V^{⊗f_i} ⊗ (V*)^{⊗f*_i} → V for 1 ≤ i ≤ ℓ. Two such linear models (V; R_1, ..., R_k; F_1, ..., F_ℓ), (V′; R′_1, ..., R′_k; F′_1, ..., F′_ℓ) are isomorphic if there is a linear bijection ϕ : V → V′ that sends R_i to R′_i for all i and F_i to F′_i for all i (details below).

A basis-explicit linear model is given by a basis for each R_i, and, for each element of a basis of the domain of F_i, the value of F_i on that element. Vectors here are written out in their usual dense coordinate representation. In particular, this means that an element of V^{⊗r} (say, a basis element of R_i) is written out as a vector of length (dim V)^r. We will only be concerned with finite-dimensional linear models.

Given ϕ : V → V′, let ϕ^{⊗r_i ⊗ r*_i} denote the linear map ϕ^{⊗r_i ⊗ r*_i} : V^{⊗r_i} ⊗ (V*)^{⊗r*_i} → V′^{⊗r_i} ⊗ (V′*)^{⊗r*_i} which is defined on basis vectors factor-wise:

ϕ^{⊗r_i ⊗ r*_i}(v_1 ⊗ ··· ⊗ v_{r_i} ⊗ ℓ_1 ⊗ ··· ⊗ ℓ_{r*_i}) = ϕ(v_1) ⊗ ··· ⊗ ϕ(v_{r_i}) ⊗ ϕ*(ℓ_1) ⊗ ··· ⊗ ϕ*(ℓ_{r*_i}),

and then extended to the whole space by linearity. (Recall that V* = Hom(V, F), so elements of V* are linear maps ℓ : V → F, and thus ϕ*(ℓ) := ℓ ∘ ϕ^{−1} is a map from V′ → V → F, i. e., an element of V′*, as desired.) Similarly, when we say that ϕ sends F_i to F′_i, we mean that ϕ(F_i(v_1 ⊗ ··· ⊗ v_{f_i} ⊗ ℓ_1 ⊗ ··· ⊗ ℓ_{f*_i})) = F′_i(ϕ^{⊗f_i ⊗ f*_i}(v_1 ⊗ ··· ⊗ v_{f_i} ⊗ ℓ_1 ⊗ ··· ⊗ ℓ_{f*_i})).

Remark 10.2.
We use the term "basis-explicit" rather than just "explicit," because over a finite field, one may also consider a linear model of σ as an explicit model of a different signature (where the different signature additionally encodes the structure of a vector space on V, namely, the addition and scalar multiplication), and then one may talk of a single mathematical object having explicit representations, where everything is listed out, and basis-explicit representations, where things are described in terms of bases. An example of this distinction arises when considering isomorphism of p-groups of class 2: the "explicit" version is when they are given by their full multiplication table (which reduces to GI), while the "basis-explicit" version is when they are given by a generating set of matrices or a polycyclic presentation (which GI reduces to).

Theorem 10.3 (Futorny–Grochow–Sergeichuk [FGS19]). Given any linear signature σ where all relation arities are at most 3 and all function arities are at most 2, the isomorphism problem for finite-dimensional basis-explicit linear models of σ reduces to 3-Tensor Isomorphism in polynomial time.

Because of the equivalence between d-Tensor Isomorphism and 3-Tensor Isomorphism (Theorem B + [FGS19]), we expect the analogous result to hold for arbitrary d. Thus an analogue of the results of [FGS19] for d-tensors would yield the full analogue of the universality result for GI.

Open Question 10.4. Is d-Tensor Isomorphism universal for isomorphism problems on d-way arrays? That is, prove the analogue of the results of [FGS19] for d-way arrays for any d ≥ 4.

Our search-to-decision reduction (Theorem C) produces instances of dimension O(n²) from instances of dimension n. As stated, this means that a simply-exponential (q^{Õ(n)}-time) decision algorithm would result only in a q^{Õ(n²)} search algorithm, but the latter runtime is trivial.
We note that it may be possible to alleviate this blow-up by attempting to generalize the logarithmic-size "coloring palette" construction for reducing Colored GI to GI from the graph case to the linear-algebraic case.

Open Question 10.5. Is there a search-to-decision reduction for Alternating Matrix Space Isometry (and, consequently, isomorphism of p-groups of class 2 and exponent p, given in their natural succinct encoding) that runs in time q^{Õ(n)}, and produces instances of quasi-linear (Õ(n)) dimension?

In Section 3.2 we gave several different reductions from GI to Alternating Matrix Space Isometry. To summarize, they are:

1. A direct reduction from GI to Alternating Matrix Space Isometry (Prop. 7.1);
2. GI ≤ Matrix Lie Algebra Conjugacy [Gro12a], which in turn reduces to 3-Tensor Isomorphism [FGS19], and then to Alternating Matrix Space Isometry (Thm. A);
3. GI ≤ CodeEq [PR97, Luk93, Miy96], CodeEq ≤ Matrix Lie Algebra Conjugacy [Gro12a], and then follow the same reductions as in (2);
4. GI ≤ Monomial Code Equivalence (the same reduction from [PR97] works for monomial equivalence of codes, see [Gro12a]), which in turn reduces to 3-Tensor Isomorphism (Prop. 3.6), and thence to Alternating Matrix Space Isometry (Thm. A);
5. GI ≤ Algebra Isomorphism [Gri81, AS05], which reduces to 3-Tensor Isomorphism [FGS19], and then to Alternating Matrix Space Isometry (Thm. A).

Can one prove that these reductions are all distinct? Are some of them equivalent in some natural sense, e. g., up to a change of basis?

Next, most of our results hold for arbitrary fields, or arbitrary fields with minor restrictions. However, in all of our reductions, we reduce one problem over F to another problem over the same field F.

Open Question 10.6. What is the relationship between TI over different fields? In particular, what is the relationship between TI_{F_p} and TI_{F_{p^e}}, between TI_{F_p} and TI_{F_q} for coprime p, q, or between TI_{F_p} and TI_Q?

We note that even the relationship between TI_{F_p} and TI_{F_{p^e}} is not particularly clear.
For matrix tuples (rather than spaces; equivalently, representations of finitely generated algebras) it is the case that for any extension field K ⊇ F, two matrix tuples over F are F-equivalent (resp., conjugate) if and only if they are K-equivalent [KL86] (see [dSP10] for a simplified proof). However, for equivalence of tensors this need not be the case. This seems closely related to the so-called "problem of forms" for various algebras, namely the existence of algebras that are not isomorphic over F, but which become isomorphic over an extension field.

Example 10.7 (Non-isomorphic tensors isomorphic over an extension field). Over R, let M_1 = I_4 and let M_2 = diag(1, 1, 1, −1). Since these two matrices have different signatures, they are not isometric over R; since they have the same rank, they are isometric over C. To turn this into an example of 3-tensors, first we consider the corresponding instance of Matrix Space Isometry given by M_1 = ⟨M_1⟩ and M_2 = ⟨M_2⟩. Note that M_1 = {λI : λ ∈ R}, so the signatures of all matrices in M_1 are (4, 0), (0, 0), or (0, 4). Similarly, the signatures appearing in M_2 are (3, 1), (0, 0), and (1, 3), so these two matrix spaces are not isometric over R, though they are isometric over C since M_1 and M_2 are. Finally, apply the reduction from Matrix Space Isometry to 3-Tensor Isomorphism [FGS19] to get two 3-tensors A_1, A_2. Since the reduction itself is independent of the field, if we consider it over R we find that A_1 and A_2 are not isomorphic 3-tensors over R, but if we consider the reduction over C we find that they are isomorphic as 3-tensors over C.

Similar examples can be constructed over finite fields F of odd characteristic, taking M_1 = I_2 and M_2 = diag(1, α) where α is a non-square in F (and replacing the role of C with that of K = F[x]/(x² − α)). Instead of signature, isometry types of matrices over F are characterized by their rank and whether their determinant is a square or not.
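The invariants used in Example 10.7 can be checked numerically (our own sketch; the modulus p = 7 is an arbitrary stand-in for an odd-characteristic field):

```python
import numpy as np

# Over R: M1 = I_4 and M2 = diag(1,1,1,-1) have equal rank but different
# signatures, so they are congruent over C but not over R.
M1 = np.eye(4)
M2 = np.diag([1.0, 1.0, 1.0, -1.0])

def signature(M):
    ev = np.linalg.eigvalsh(M)
    return (int((ev > 1e-9).sum()), int((ev < -1e-9).sum()))

assert np.linalg.matrix_rank(M1) == np.linalg.matrix_rank(M2) == 4
print(signature(M1), signature(M2))  # (4, 0) (3, 1)

# Over F_p: for 2x2 matrices, det(c*M) = c^2 det(M), so "determinant is a
# (non)square" is an invariant of the whole line <M>.
p = 7
squares = {x * x % p for x in range(1, p)}
alpha = next(a for a in range(2, p) if a not in squares)  # a non-square mod p
for c in range(1, p):
    assert (c * c) % p in squares                  # det of c * I_2
    assert (c * c * alpha) % p not in squares      # det of c * diag(1, alpha)
print("alpha =", alpha)  # alpha = 3
```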
In this case, since our matrices are even-dimensional diagonal matrices, scaling them multiplies their determinant by a square. Thus every matrix in 𝓜_1 will have its determinant a square in F, and every nonzero matrix in 𝓜_2 will not, but in K they are all squares.

It would also be interesting to study the complexity of other group actions on tensors and how they relate to the problems here. For example, the action of unitary groups U(C^{n_1}) × ··· × U(C^{n_d}) on C^{n_1} ⊗ ··· ⊗ C^{n_d} classifies pure quantum states up to "local unitary operations," and the action of SL(U_1) × ··· × SL(U_d) on U_1 ⊗ ··· ⊗ U_d, over C, is the well-studied action by stochastic local operations with classical communication (SLOCC) on quantum states (e. g., [GW13, Miy04, CĐ07]). Isomorphism of m-dimensional lattices in n-dimensional space can be seen as the natural action of O_n(R) × GL_m(Z) by left and right multiplication on n × m matrices. As another example, orbits for several of the natural actions of GL_n(Z) × GL_m(Z) × GL_r(Z) on 3-tensors over Z, even for small values of n, m, r, are the fundamental objects in Bhargava's seminal work on higher composition laws [Bha04a, Bha04b, Bha04c, Bha08]. We note that the orthogonal group O(V) is the stabilizer of a 2-form on V (that is, an element of V ⊗ V) and SL(V) is the stabilizer of the induced action on Λ^{dim V} V (by the determinant), so gadgets similar to those in this paper might be useful for those groups; GL_n(Z), by contrast, is not the stabilizer of any such structure.

In Remark 9.1 we observed that any reduction (in the sense of Sec. 6.2) from dTI to 3TI must have a blow-up in dimension which is asymptotically n^{d/3}, while our construction uses dimension O(d · n^{d−2}).

Open Question 10.8. Is there a reduction from dTI to 3TI (as in Sec.
6.2) such that the dimension of the output is poly(d) · n^{d/3 + o(1)}?

Finally, in terms of practical algorithms, we wonder how well modern SAT solvers would do on instances of 3TI over F_2 (or over other finite fields, encoded into bit-strings).

Acknowledgments

The authors would like to thank James B. Wilson for related discussions, and Uriya First, Lek-Heng Lim, and J. M. Landsberg for help searching for references asking whether dTI could reduce to 3TI. J. A. G. would like to thank V. Futorny and V. V. Sergeichuk for their collaboration on the related work [FGS19]. Ideas leading to this work originated from the workshop "Wildness in computer science, physics, and mathematics" at the Santa Fe Institute. Both authors were supported by NSF grant DMS-1750319. Y. Q. was partly supported by Australian Research Council DECRA DE150100720.

A Reducing Cubic Form Equivalence to Degree-d Form Equivalence

Proposition A.1. Cubic Form Equivalence reduces to Degree-d Form Equivalence, for any d ≥ 3.

We suspect that a similar construction would give a reduction from Degree-d′ Form Equivalence to Degree-d Form Equivalence for any d′ ≤ d, but our argument relies on a case analysis that is somewhat specific to d′ = 3. Our argument might be adaptable to any fixed value of d′ one desires, with a consequently more complicated case analysis, but to prove it for all d′ simultaneously seems to require a different argument.

Proof. The reduction itself is quite simple: f ↦ z^{d−3} · f, where z is a new variable not appearing in f. If A is an equivalence between f and g, that is, f(x) = g(Ax), then diag(A, 1) is an equivalence from z^{d−3} f to z^{d−3} g. Conversely, suppose f̃ = z^{d−3} f is equivalent to g̃ = z^{d−3} g via f̃(x) = g̃(Bx). We split the proof into several cases.

If d = 3, then z is not present, so we already have that f and g are equivalent.
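The forward direction of this proof can be checked mechanically. Below is a small sanity check of ours (a sketch, assuming sympy is available): starting from a cubic g and an invertible A, it builds f(x) = g(Ax) and verifies that diag(A, 1), which fixes z, is an equivalence from z^{d−3} f to z^{d−3} g for d = 5.

```python
import sympy as sp

x0, x1, x2, z = sp.symbols('x0 x1 x2 z')
xs = sp.Matrix([x0, x1, x2])

g = x0**3 + x0*x1*x2                               # a cubic form
A = sp.Matrix([[1, 1, 0], [0, 1, 0], [0, 2, 1]])   # an invertible matrix (det = 1)
subs_g = dict(zip([x0, x1, x2], A * xs))           # x |-> Ax

# f(x) = g(Ax), so f ~ g by construction.
f = sp.expand(g.subs(subs_g, simultaneous=True))

d = 5
f_tilde = z**(d - 3) * f
g_tilde = z**(d - 3) * g
# Apply diag(A, 1): the x-variables transform by A, and z maps to z.
g_tilde_transformed = sp.expand(g_tilde.subs(subs_g, simultaneous=True))
assert sp.expand(f_tilde - g_tilde_transformed) == 0  # z^{d-3} f = (z^{d-3} g)(diag(A,1) x)
```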
If f is not divisible by ℓ^{d−3} for any linear form ℓ, then z^{d−3} is the unique factor in both z^{d−3} f and z^{d−3} g which is raised to the (d − 3)rd power. Thus any equivalence B between these two must map z to itself, hence has the form

B = ( ∗ ⋯ ∗ 0 ; ⋮ ⋱ ⋮ ⋮ ; ∗ ⋯ ∗ 0 ; ∗ ⋯ ∗ ∗ )

(rows separated by semicolons; here we put z last in our basis, and think of the matrix as acting on the left of the column vectors corresponding to the variables). However, since both f and g do not depend on z, it must be the case that whatever contributions z makes to g(Bx), they all cancel. More precisely, all monomials involving z in g(Bx) must cancel, so if we alter B into B̃ such that B̃x_i never includes z (that is, if we make the stars in the last row above all zero), then g(B̃x) = g(Bx), hence f(x) = g(B̃x), so f and g are equivalent.

The preceding case always applies when d > 6, for then d − 3 > 3, but deg f = 3. We are left to handle the following cases:

1. d ≤ 6 and f is a product of linear forms;

2. d = 4, and f is a product of a linear form and an irreducible quadratic form.

Suppose f is a product of linear forms; then let us define rk(f) as the number of linearly independent linear forms appearing in the factorization of f. Note that if rk(f) = 1, then f = αℓ^3 for some α ∈ F; if rk(f) = 2, then f = ℓ_1^2 ℓ_2 (now we can absorb any constant into ℓ_2); and if rk(f) = 3, then f = ℓ_1 ℓ_2 ℓ_3 with all ℓ_i linearly independent. Then we have that f ∼ g if and only if g is also a product of linear forms of the same rank. For GL_n acts transitively on k-tuples of linearly independent vectors for all k ≤ n, and in order to have rk(f) linearly independent forms, we must have n ≥ rk(f). Since we have supposed z^{d−3} f ∼ z^{d−3} g, by uniqueness of factorization g must be a product of linear forms of the same rank as f, and thus indeed f ∼ g.
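The transitivity argument in the rank-3 case can be made concrete. The following sketch (ours, assuming sympy) takes two products of three linearly independent linear forms, with coefficient matrices L and M whose rows are the forms, and checks that A = M^{-1} L gives an equivalence: each m_i(Ax) = ℓ_i(x), hence g(Ax) = f(x).

```python
import sympy as sp

x = sp.Matrix(sp.symbols('x0 x1 x2'))
L = sp.Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])   # f = x0 * x1 * x2
M = sp.Matrix([[1, 1, 0], [0, 1, 1], [0, 0, 1]])   # g = (x0+x1) * (x1+x2) * x2

Lx, Mx = L * x, M * x
f = Lx[0] * Lx[1] * Lx[2]
g = Mx[0] * Mx[1] * Mx[2]

# m_i(Ax) = row_i(M A) . x, so M A = L forces A = M^{-1} L.
A = M.inv() * L
g_of_Ax = g.subs(dict(zip(list(x), A * x)), simultaneous=True)
assert sp.expand(g_of_Ax - f) == 0   # g(Ax) = f(x): same rank implies equivalent
```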
If d = 4 and f = ℓϕ where ℓ is linear and ϕ is an irreducible quadratic, then to understand the situation we begin by first doing a change of basis on f to put ϕ into a form in which its kernel is evident. Note that none of these simplifications are part of the reduction; rather, they are to help us prove that the reduction works. Thinking of ϕ as given by its matrix M_ϕ such that ϕ(x) = x^t M_ϕ x, we can always change basis to get M_ϕ into the form ( M′ 0 ; 0 0_{n−r} ), where r = rk(M_ϕ) = rk(M′). Since ϕ does not depend on z, if we think of ϕ as a quadratic form on {x_1, …, x_n, z}, then the matrices are the same, but larger by one additional zero row and column.

Next we will try to simplify ℓ as much as possible while maintaining the (new) form of M_ϕ = diag(M′, 0). For this we first compute the stabilizer of the new form of M_ϕ. We can compute the stabilizer as the set of invertible matrices A such that:

( A_{11}^t A_{21}^t ; A_{12}^t A_{22}^t ) · ( M′ 0 ; 0 0_{n−r+1} ) · ( A_{11} A_{12} ; A_{21} A_{22} ) = ( M′ 0 ; 0 0_{n−r+1} ).

This turns into the following equations on the blocks of A:

A_{11}^t M′ A_{11} = M′
A_{11}^t M′ A_{12} = 0
A_{12}^t M′ A_{11} = 0
A_{12}^t M′ A_{12} = 0

From the first equation and the fact that M′ is full rank, we find that A_{11} must be an invertible r × r matrix. From the next equation and the fact that both M′ and A_{11} are full rank, we then find that A_{12} = 0. Thus the stabilizer of M_ϕ is:

S := { ( A_{11} 0 ; A_{21} A_{22} ) : A_{11}^t M′ A_{11} = M′ and A_{22} is invertible }.

Now we simplify ℓ. Note that S acts on ℓ as a column vector. Consider ℓ = Σ_{i=1}^{n+1} ℓ_i x_i, with ℓ_i ∈ F (where x_{n+1} denotes z); we will say "ℓ contains x_i" if and only if ℓ_i ≠ 0. If ℓ contains some x_{r+k} with k ≥ 1, then by setting A_{11} = I_r and A_{21} = 0, we may choose A_{22} to be any invertible matrix which sends (ℓ_{r+1}, …, ℓ_n, ℓ_{n+1}) (recall the trailing ℓ_{n+1} for the z coordinate) to (1, 0, …
, 0), and thus without loss of generality we may assume that ℓ only contains x_i with 1 ≤ i ≤ r + 1.

Next, note that if ℓ contains some x_i for 1 ≤ i ≤ r and x_{r+1}, then we may use the action of S to eliminate the x_{r+1}. Namely, take A_{11} = I_r, A_{22} = I_{n−r+1}, and A_{21} = (−ℓ_{r+1}/ℓ_i) E_{1i}. This makes ℓ_i x_i in ℓ contribute −ℓ_{r+1} to the x_{r+1} coordinate, eliminating x_{r+1}. Thus, under the action of S, we need only consider two cases for linear forms: a linear form is equivalent to either

a. one which contains some x_i with 1 ≤ i ≤ r, in which case we can bring it to a form in which it contains no x_{r+j} with j ≥ 1 (and no z), or

b. one which contains no x_i with 1 ≤ i ≤ r, in which case we can use the action of S to bring it to the form ℓ = x_{r+1}.

Let us call the corresponding linear forms "type (a)" and "type (b)." Note that the linear form z is of type (b).

Now, write f = ℓϕ and g = ℓ′ϕ′, and assume that we have applied the preceding change of basis to bring f to the form specified above. Recall that we are assuming f̃ ∼ g̃, and need to show that f ∼ g. If, after applying the same change of basis to g, we do not have M_{ϕ′} = M_ϕ, then f ≁ g and also f̃ ≁ g̃ (contrary to our assumption), since ϕ (resp., ϕ′) is the unique irreducible quadratic factor of f̃ (resp., g̃). So we may assume that, after this change of basis, ϕ = ϕ′, both of which have M_ϕ = diag(M′, 0_{n−r+1}) with r = rank(M_ϕ).

Next, since we are assuming f̃ ∼ g̃, and z itself is of type (b), it must be the case that the types of ℓ, ℓ′ are the same. Thus we have two cases to consider: either both are of type (a), or both are of type (b).

Suppose both ℓ, ℓ′ are of type (a). In this case, the equivalence between f̃ and g̃ cannot send z to ℓ′ and ℓ to z, for both ℓ, ℓ′ are of type (a), whereas z is of type (b).
Thus the equivalence between f̃ and g̃ must restrict to an equivalence between f and g (when we ignore z, or set its contribution to the other variables to zero, as in the above case where f was not divisible by ℓ^{d−3}).

Suppose both ℓ, ℓ′ are of type (b). In this case, it is possible that the equivalence from f̃ to g̃ could send z to ℓ′ and ℓ to z (since all three of ℓ, ℓ′, z are in case (b)); however, we will see that even such a situation will not cause an issue. Without loss of generality, by the changes of basis described above, we have f̃ = z x_{r+1} ϕ and g̃ = z ℓ′ ϕ (the same ϕ), where ℓ′ contains no x_i with 1 ≤ i ≤ r. Using elements of S with A_{11} = I_r and A_{21} = 0, we then get an action of GL_{n−r+1} (via A_{22}) on linear forms in the variables x_{r+1}, …, x_n, z. Since ℓ′ is linearly independent from z (in particular, it does not contain z) and the action of GL_{n−r+1} is transitive on pairs of linearly independent vectors, we may use S to fix ϕ and z, and send x_{r+1} to ℓ′, giving the desired equivalence f ∼ g.

References

[AD17] Eric Allender and Bireswar Das. Zero knowledge and circuit minimization. Inf. Comput., 256:2–8, 2017. doi:10.1016/j.ic.2017.04.004.

[AS05] Manindra Agrawal and Nitin Saxena. Automorphisms of finite rings and applications to complexity of problems. In STACS 2005, 22nd Annual Symposium on Theoretical Aspects of Computer Science, Proceedings, pages 1–17, 2005. doi:10.1007/978-3-540-31856-9_1.

[AS06] Manindra Agrawal and Nitin Saxena. Equivalence of F-algebras and cubic forms. In STACS 2006, 23rd Annual Symposium on Theoretical Aspects of Computer Science, Proceedings, pages 115–126, 2006. doi:10.1007/11672142_8.

[ASS06] Ibrahim Assem, Daniel Simson, and Andrzej Skowroński. Elements of the representation theory of associative algebras. Vol. 1, volume 65 of London Mathematical Society Student Texts. Cambridge University Press, Cambridge, 2006. Techniques of representation theory.
doi:10.1017/CBO9780511614309.

[Bab85] L. Babai. Trading group theory for randomness. In Proceedings of the Seventeenth Annual ACM Symposium on Theory of Computing, STOC '85, pages 421–429. ACM, 1985. doi:10.1145/22145.22192.

[Bab14] László Babai. On the automorphism groups of strongly regular graphs I. In Proceedings of the 5th Conference on Innovations in Theoretical Computer Science, ITCS '14, pages 359–368, 2014. doi:10.1145/2554797.2554830.

[Bab16] László Babai. Graph isomorphism in quasipolynomial time [extended abstract]. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, pages 684–697, 2016. arXiv:1512.03547 [cs.DS] version 2. doi:10.1145/2897518.2897542.

[Bae38] Reinhold Baer. Groups with abelian central quotient group. Trans. AMS, 44(3):357–386, 1938. doi:10.1090/S0002-9947-1938-1501972-1.

[BBS09] László Babai, Robert Beals, and Ákos Seress. Polynomial-time theory of matrix groups. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing, STOC 2009, pages 55–64, 2009. doi:10.1145/1536414.1536425.

[BCGQ11] László Babai, Paolo Codenotti, Joshua A. Grochow, and Youming Qiao. Code equivalence and group isomorphism. In Proceedings of the Twenty-Second Annual ACM–SIAM Symposium on Discrete Algorithms (SODA11), pages 1395–1408, Philadelphia, PA, 2011. SIAM. doi:10.1137/1.9781611973082.107.

[BES80] László Babai, Paul Erdős, and Stanley M. Selkow. Random graph isomorphism. SIAM J. Comput., 9(3):628–635, 1980. doi:10.1137/0209047.

[BG94] Mihir Bellare and Shafi Goldwasser. The complexity of decision versus search. SIAM J. Comput., 23(1):97–119, 1994. doi:10.1137/S0097539792228289.

[BGL+19] Peter A. Brooksbank, Joshua A. Grochow, Yinan Li, Youming Qiao, and James B. Wilson. Incorporating Weisfeiler–Leman into algorithms for group isomorphism. arXiv:1905.02518 [cs.CC], 2019.

[Bha04a] Manjul Bhargava. Higher composition laws. I. A new view on Gauss composition, and quadratic generalizations. Ann. of Math.
(2), 159(1):217–250, 2004. doi:10.4007/annals.2004.159.217.

[Bha04b] Manjul Bhargava. Higher composition laws. II. On cubic analogues of Gauss composition. Ann. of Math. (2), 159(2):865–886, 2004. doi:10.4007/annals.2004.159.865.

[Bha04c] Manjul Bhargava. Higher composition laws. III. The parametrization of quartic rings. Ann. of Math. (2), 159(3):1329–1360, 2004. doi:10.4007/annals.2004.159.1329.

[Bha08] Manjul Bhargava. Higher composition laws. IV. The parametrization of quintic rings. Ann. of Math. (2), 167(1):53–94, 2008. doi:10.4007/annals.2008.167.53.

[BL08] Peter A. Brooksbank and Eugene M. Luks. Testing isomorphism of modules. J. Algebra, 320(11):4020–4029, 2008. doi:10.1016/j.jalgebra.2008.07.014.

[BM88] L. Babai and S. Moran. Arthur–Merlin games: A randomized proof system, and a hierarchy of complexity classes. Journal of Computer and System Sciences, 36(2):254–276, 1988. doi:10.1016/0022-0000(88)90028-1.

[BMW18] Peter A. Brooksbank, Joshua Maglione, and James B. Wilson. Rosenberg–Zelinsky sequences for tensors and non-associative algebras. arXiv preprint arXiv:1812.00275 [math.RA], 2018.

[BW12] Peter A. Brooksbank and James B. Wilson. Computing isometry groups of Hermitian maps. Trans. Amer. Math. Soc., 364:1975–1996, 2012. doi:10.1090/S0002-9947-2011-05388-2.

[BW15] Peter A. Brooksbank and James B. Wilson. The module isomorphism problem reconsidered. Journal of Algebra, 421:541–559, 2015. doi:10.1016/j.jalgebra.2014.09.004.

[CĐ07] Oleg Chterental and Dragomir Ž. Đoković. Normal forms and tensor ranks of pure states of four qubits. In G. D. Ling, editor, Linear Algebra Research Advances, chapter 4, pages 133–167. Nova Science Publishers, New York, 2007. arXiv:quant-ph/0612184.

[CdGVL12] Serena Cicalò, Willem A. de Graaf, and Michael Vaughan-Lee. An effective version of the Lazard correspondence. J. Algebra, 352(1):430–450, 2012. doi:10.1016/j.jalgebra.2011.11.031.

[CDT09] Xi Chen, Xiaotie Deng, and Shang-Hua Teng.
Settling the complexity of computing two-player Nash equilibria. J. ACM, 56(3):Art. 14, 57, 2009. doi:10.1145/1516512.1516516.

[CIK97] Alexander Chistov, Gábor Ivanyos, and Marek Karpinski. Polynomial time algorithms for modules over finite dimensional algebras. In Proceedings of the 1997 International Symposium on Symbolic and Algebraic Computation, ISSAC '97, pages 68–74. ACM, 1997. doi:10.1145/258726.258751.

[dG00] W. A. de Graaf. Lie Algebras: Theory and Algorithms, volume 56 of North-Holland Mathematical Library. Elsevier Science, 2000.

[Dor32] J. L. Dorroh. Concerning adjunctions to algebras. Bull. AMS, 38(2):85–88, 1932. doi:10.1090/S0002-9904-1932-05333-2.

[dSP10] Clément de Seguins Pazzis. Invariance of simultaneous similarity and equivalence of matrices under extension of the ground field. Linear Algebra Appl., 433(3):618–624, 2010. doi:10.1016/j.laa.2010.03.022.

[EG00] Wayne Eberly and Mark Giesbrecht. Efficient decomposition of associative algebras over finite fields. Journal of Symbolic Computation, 29(3):441–458, 2000. doi:10.1006/jsco.1999.0308.

[Exc] Theoretical Computer Science Stack Exchange. Problems between P and NPC. https://cstheory.stackexchange.com/questions/79/problems-between-p-and-npc/.

[Far05] Rolf Farnsteiner. The theorem of Wedderburn–Malcev: H^2(A, N) and extensions. Lecture at BIREP: Representations of Finite Dimensional Algebras and Quantum Groups at Bielefeld, 2005. URL: .

[FG11] Lance Fortnow and Joshua A. Grochow. Complexity classes of equivalence problems revisited. Inform. and Comput., 209(4):748–763, 2011. Also available as arXiv:0907.4775 [cs.CC]. doi:10.1016/j.ic.2011.01.006.

[FGS19] Vyacheslav Futorny, Joshua A. Grochow, and Vladimir V. Sergeichuk. Wildness for tensors. Lin. Alg. Appl., 566:212–244, 2019. doi:10.1016/j.laa.2018.12.022.

[FN70] V. Felsch and J. Neubüser. On a programme for the determination of the automorphism group of a finite group. In J.
Leech, editor, Computational Problems in Abstract Algebra (Proceedings of a Conference on Computational Problems in Algebra, Oxford, 1967), pages 59–60. Pergamon Press, Oxford, 1970.

[GMR85] S. Goldwasser, S. Micali, and C. Rackoff. The knowledge complexity of interactive proof-systems. In Proceedings of the Seventeenth Annual ACM Symposium on Theory of Computing, STOC '85, pages 291–304. ACM, 1985. doi:10.1145/22145.22178.

[GNW04] Marcus Greferath, Alexandr Nechaev, and Robert Wisbauer. Finite quasi-Frobenius modules and linear codes. J. Algebra Appl., 3(3):247–272, 2004. doi:10.1142/S0219498804000873.

[GP69] I. M. Gelfand and V. A. Ponomarev. Remarks on the classification of a pair of commuting linear transformations in a finite-dimensional space. Functional Anal. Appl., 3:325–326, 1969. doi:10.1007/BF01076321.

[GQ17] Joshua A. Grochow and Youming Qiao. Algorithms for group isomorphism via group extensions and cohomology. SIAM J. Comput., 46(4):1153–1216, 2017. Preliminary version in IEEE Conference on Computational Complexity (CCC) 2014 (DOI:10.1109/CCC.2014.19). Also available as arXiv:1309.1776 [cs.DS] and ECCC Technical Report TR13-123. doi:10.1137/15M1009767.

[Gri81] D. Ju. Grigoriev. Complexity of "wild" matrix problems and of the isomorphism of algebras and graphs. Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI), 105:10–17, 198, 1981. Theoretical applications of the methods of mathematical logic, III. doi:10.1007/BF01084390.

[Gro12a] Joshua A. Grochow. Matrix Lie algebra isomorphism. In IEEE Conference on Computational Complexity (CCC12), pages 203–213, 2012. Also available as arXiv:1112.2012 [cs.CC] and ECCC Technical Report TR11-168. doi:10.1109/CCC.2012.34.

[Gro12b] Joshua A. Grochow. Symmetry and equivalence relations in classical and geometric complexity theory. PhD thesis, University of Chicago, Chicago, IL, 2012. URL: .

[GS] Daniel R. Grayson and Michael E. Stillman. Macaulay2, a software system for research in algebraic geometry.
Available at https://faculty.math.illinois.edu/Macaulay2/.

[GW13] Gilad Gour and Nolan R. Wallach. Classification of multipartite entanglement of all finite dimensionality. Phys. Rev. Lett., 111:060502, Aug 2013. arXiv:1304.7259 [quant-ph]. doi:10.1103/PhysRevLett.111.060502.

[HBD17] Harald Andrés Helfgott, Jitendra Bajpai, and Daniele Dona. Graph isomorphisms in quasi-polynomial time. arXiv:1710.04574 [math.GR], 2017.

[Hig60] Graham Higman. Enumerating p-groups. I. Inequalities. Proc. London Math. Soc. (3), 10:24–30, 1960. doi:10.1112/plms/s3-10.1.24.

[HL16] Jesko Hüttenhain and Pierre Lairez. The boundary of the orbit of the 3-by-3 determinant polynomial. Comptes Rendus Mathematique, 354(9):931–935, 2016. arXiv:1512.02437 [math.AG]. doi:10.1016/j.crma.2016.07.002.

[Hüt17] Jesko Hüttenhain. Geometric Complexity Theory and Orbit Closures of Homogeneous Forms. PhD thesis, TU Berlin, 2017. URL: .

[IKS10] Gábor Ivanyos, Marek Karpinski, and Nitin Saxena. Deterministic polynomial time algorithms for matrix completion problems. SIAM J. Comput., 39(8):3736–3751, 2010. doi:10.1137/090781231.

[IQ18] Gábor Ivanyos and Youming Qiao. Algorithms based on *-algebras, and their applications to isomorphism of polynomials with one secret, group isomorphism, and polynomial identity testing. In Artur Czumaj, editor, Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7-10, 2018, pages 2357–2376. SIAM, 2018. doi:10.1137/1.9781611975031.152.

[IR99] Gábor Ivanyos and Lajos Rónyai. Computations in associative and Lie algebras. In Some tapas of computer algebra, pages 91–120. Springer, 1999. doi:10.1007/978-3-662-03891-8_5.

[Irn05] Christophe-André Mario Irniger. Graph matching—filtering databases of graphs using machine learning techniques. PhD thesis, Universität Bern, 2005.

[Iva00] Gábor Ivanyos. Fast randomized algorithms for the structure of matrix algebras over finite fields.
In Proceedings of the 2000 International Symposium on Symbolic and Algebraic Computation, pages 175–183. ACM, 2000. doi:10.1145/345542.345620.

[JQSY19] Zhengfeng Ji, Youming Qiao, Fang Song, and Aaram Yun. General linear group action on tensors: A candidate for post-quantum cryptography. arXiv:1906.04330 [cs.CR], 2019.

[KB09] Tamara G. Kolda and Brett W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009. doi:10.1137/07070111X.

[Khu98] E. I. Khukhro. p-automorphisms of finite p-groups, volume 246 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 1998. doi:10.1017/CBO9780511526008.

[KL86] Lee Klingler and Lawrence S. Levy. Sweeping-similarity of matrices. Linear Algebra Appl., 75:67–104, 1986. doi:10.1016/0024-3795(86)90182-5.

[Koi96] Pascal Koiran. Hilbert's Nullstellensatz is in the polynomial hierarchy. J. Complexity, 12(4):273–286, 1996. doi:10.1006/jcom.1996.0019.

[KS06] Neeraj Kayal and Nitin Saxena. Complexity of ring morphism problems. Computational Complexity, 15(4):342–390, 2006. doi:10.1007/s00037-007-0219-8.

[KST93] Johannes Köbler, Uwe Schöning, and Jacobo Torán. The graph isomorphism problem: its structural complexity. Birkhäuser Verlag, Basel, Switzerland, 1993. doi:10.1007/978-1-4612-0333-9.

[Lad75] Richard E. Ladner. On the structure of polynomial time reducibility. J. ACM, 22(1):155–171, 1975. doi:10.1145/321864.321877.

[Lam91] T. Y. Lam. A first course in noncommutative rings, volume 131 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1991. doi:10.1007/978-1-4684-0406-7.

[Lan12] J. M. Landsberg. Tensors: Geometry and Applications, volume 128 of Graduate Studies in Mathematics. American Mathematical Soc., 2012. doi:10.1090/gsm/128.

[LQ17] Yinan Li and Youming Qiao. Linear algebraic analogues of the graph isomorphism problem and the Erdős–Rényi model. In Chris Umans, editor, 58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017, pages 463–474. IEEE Computer Society, 2017.
doi:10.1109/FOCS.2017.49.

[Luk82] Eugene M. Luks. Isomorphism of graphs of bounded valence can be tested in polynomial time. J. Comput. Syst. Sci., 25(1):42–65, 1982. doi:10.1016/0022-0000(82)90009-5.

[Luk92] Eugene M. Luks. Computing in solvable matrix groups. In FOCS 1992, 33rd Annual Symposium on Foundations of Computer Science, pages 111–120. IEEE Computer Society, 1992. doi:10.1109/SFCS.1992.267813.

[Luk93] Eugene M. Luks. Permutation groups and polynomial-time computation. In Groups and computation (New Brunswick, NJ, 1991), volume 11 of DIMACS Ser. Discrete Math. Theoret. Comput. Sci., pages 139–175. Amer. Math. Soc., Providence, RI, 1993.

[Mac62] Florence Jessie MacWilliams. Combinatorial problems of elementary abelian groups. PhD thesis, Radcliffe College, 1962.

[Mac71] Saunders MacLane. Categories for the working mathematician. Springer-Verlag, New York-Berlin, 1971. Graduate Texts in Mathematics, Vol. 5. doi:10.1007/978-1-4757-4721-8.

[McK80] Brendan D. McKay. Practical graph isomorphism. Congr. Numer., pages 45–87, 1980.

[Mil78] Gary L. Miller. On the n^{log n} isomorphism technique (a preliminary report). In STOC, pages 51–58. ACM, 1978. doi:10.1145/800133.804331.

[Miy96] Takunari Miyazaki. Luks's reduction of graph isomorphism to code equivalence. Comment to E. W. Clark, https://groups.google.com/forum/

[Miy04] Akimasa Miyake. Multipartite entanglement under stochastic local operations and classical communication. Int. J. Quant. Info., pages 65–77, 2004. arXiv:quant-ph/0401023. doi:10.1142/S0219749904000080.

[MP14] Brendan D. McKay and Adolfo Piperno. Practical graph isomorphism, II. J. Symbolic Comput., 60:94–112, 2014. doi:10.1016/j.jsc.2013.09.003.

[Mul11] Ketan Mulmuley. On P vs. NP and geometric complexity theory: Dedicated to Sri Ramakrishna. J. ACM, 58(2):5:1–5:26, 2011. doi:10.1145/1944345.1944346.

[Nai13] Vipul Naik. Lazard correspondence up to isoclinism. PhD thesis, The University of Chicago, 2013. URL: https://vipulnaik.com/thesis/.

[Old36] Rufus Oldenburger. Non-singular multilinear forms and certain p-way matrix factorizations. Trans. Amer. Math. Soc.
, 39(3):422–455, 1936. doi:10.2307/1989760.

[Pat96] Jacques Patarin. Hidden fields equations (HFE) and isomorphisms of polynomials (IP): two new families of asymmetric algorithms. In Advances in Cryptology - EUROCRYPT '96, International Conference on the Theory and Application of Cryptographic Techniques, Saragossa, Spain, May 12-16, 1996, Proceedings, pages 33–48, 1996. doi:10.1007/3-540-68339-9_4.

[Poo14] Bjorn Poonen. Undecidable problems: a sampler. In Interpreting Gödel, pages 211–241. Cambridge Univ. Press, Cambridge, 2014. arXiv:1204.0299 [math.LO].

[PR97] Erez Petrank and Ron M. Roth. Is code equivalence easy to decide? IEEE Trans. Inf. Theory, 43(5):1602–1604, 1997. doi:10.1109/18.623157.

[PSS18] Max Pfeffer, Anna Seigal, and Bernd Sturmfels. Learning paths from signature tensors. arXiv preprint arXiv:1809.01588 [math.NA], 2018.

[Rón88] Lajos Rónyai. Zero divisors in quaternion algebras. J. Algorithms, 9(4):494–506, 1988. doi:10.1016/0196-6774(88)90014-4.

[Ros13a] David J. Rosenbaum. Bidirectional collision detection and faster deterministic isomorphism testing. arXiv preprint arXiv:1304.3935 [cs.DS], 2013.

[Ros13b] David J. Rosenbaum. Breaking the n^{log n} barrier for solvable-group isomorphism. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1054–1073. SIAM, 2013. Preprint arXiv:1205.0642 [cs.DS].

[Sen00] Nicolas Sendrier. Finding the permutation between equivalent linear codes: The support splitting algorithm. IEEE Trans. Information Theory, 46(4):1193–1203, 2000. doi:10.1109/18.850662.

[Ser77] V. V. Sergeichuk. The classification of metabelian p-groups. In Matrix problems (Russian), pages 150–161. Akad. Nauk Ukrain. SSR Inst. Mat., Kiev, 1977.

[Ser00] Vladimir V. Sergeichuk. Canonical matrices for linear matrix problems. Linear Algebra Appl., 317(1-3):53–102, 2000. doi:10.1016/S0024-3795(00)00150-6.

[Ser03] Ákos Seress. Permutation group algorithms, volume 152. Cambridge University Press, 2003.
doi:10.1017/CBO9780511546549.

[Sim78] Charles C. Sims. Some group-theoretic algorithms. In Topics in algebra, pages 108–124. Springer, 1978. doi:10.1007/BFb0103126.

[SS07] Daniel Simson and Andrzej Skowroński. Elements of the representation theory of associative algebras. Vol. 3, volume 72 of London Mathematical Society Student Texts. Cambridge University Press, Cambridge, 2007. Representation-infinite tilted algebras.

[SV17] Rachna Somkunwar and Vinod Moreshwar Vaze. A comparative study of graph isomorphism applications. International Journal of Computer Applications, 162(7):34–37, Mar 2017. doi:10.5120/ijca2017913414.

[Val76] Leslie G. Valiant. Relative complexity of checking and evaluating. Inf. Process. Lett., 5(1):20–23, 1976. doi:10.1016/0020-0190(76)90097-1.

[Val79] Leslie G. Valiant. Completeness classes in algebra. In Michael J. Fischer, Richard A. DeMillo, Nancy A. Lynch, Walter A. Burkhard, and Alfred V. Aho, editors, Proceedings of the 11th Annual ACM Symposium on Theory of Computing, April 30 - May 2, 1979, Atlanta, Georgia, USA, pages 249–261. ACM, 1979. doi:10.1145/800135.804419.

[Val84] L. G. Valiant. An algebraic approach to computational complexity. In Proceedings of the International Congress of Mathematicians, Vol. 2 (Warsaw, 1983), pages 1637–1643. PWN, Warsaw, 1984. URL: .

[Wik19] Wikipedia contributors. Rng (algebra): adjoining an identity element — Wikipedia, the free encyclopedia, 2019. [Online; accessed 19-Feb-2019]. URL: https://en.wikipedia.org/wiki/Rng_(algebra).

[Wil09] James B. Wilson. Decomposing p-groups via Jordan algebras. J. Algebra, 322:2642–2679, 2009. doi:10.1016/j.jalgebra.2009.07.029.

[Wil14] James B. Wilson. 2014 conference on Groups, Computation, and Geometry at Colorado State University, co-organized by P. Brooksbank, A. Hulpke, T. Penttila, J. Wilson, and W. Kantor. Personal communication, 2014.

[Wil15] James B. Wilson. Surviving in the wilderness.
Talk presented at the Santa Fe Institute Workshop on Wildness in Computer Science, Physics, and Mathematics, 2015.

[ZKT85] V. N. Zemlyachenko, N. M. Korneenko, and R. I. Tyshkevich. Graph isomorphism problem. J. Soviet Math., 29(4):1426–1481, May 1985. doi:10.1007/BF02104746.