aa r X i v : . [ m a t h . R A ] F e b A construction of swap or switch polynomials
Claudio ProcesiFebruary 23, 2021
Contents
Goldman element . . . . . . . . . . . 32.7 The case of generic matrices . . . . . . . . . . . . . . . . . . . 4 × matrices 6 d = 2 h even . . . . . . . . . . . . . 184.13.1 The case d odd . . . . . . . . . . . . . . . . . . . . . . 19 Given a positive integer d and a field F denote by M d ( F ) the algebra of d × d matrices with entries in F . We will assume that F has characteristic0 and in fact work with F = Q and then denote M d := M d ( Q ) . Denote by F h X i = F h x , . . . , x i , . . . i the free algebra in the variables x i . The elements of F h X i are usually called non commutative polynomials .Given an associative algebra A over F by definition the homomorphisms of F h X i to A correspond to maps X → A and can be thought of as evaluations of the variables X in A . Given any positive integer k we may consider thealgebra A ⊗ k and then for every evaluation π : F h X i → A of X in A we havea corresponding evaluation π ⊗ k : F h X i ⊗ k → A ⊗ k . Definition . The elements of F h X i ⊗ k will be called tensor polynomials .They can be thought of as (non commutative) polynomials in the tensorvariables x ( i ) j := 1 ⊗ i − ⊗ x j ⊗ ⊗ k − . An element f ∈ F h X i ⊗ k is calleda tensor polynomial identity for A (short a TPI) if it vanishes under allevaluations of X in A . 1e then devote our study to permutation valued polynomials and inparticular to swap polynomials . Definition . Consider a non commutative 2–tensor polynomial f ∈ F h X i ⊗ in variables x i . Then f is called a swap polynomial for d × d matrices if, whenevaluated in the algebra of d × d matrices it takes values only multiples ofthe exchange (or swap) operator (1 ,
2) : a ⊗ b b ⊗ a , (1 , ∈ M ⊗ d .A 2–tensor polynomial f = P i A i ⊗ B i is balanced if all A i , B i are homo-geneous of the same degrees.This notion appears in a recent preprint Translating Uncontrolled Sys-tems in Time, by David Trillo, Benjamin Dive, and Miguel Navascu´es, arXiv:1903.10568v2[quant–ph] 28 May 2020 [19]. The authors introduce some special tensor val-ued polynomials. In particular they prove the existence of such operatorsusing the classical theory of central polynomials developed independentlyby Formanek and Razmyslov. The resulting swap polynomials have usuallyvery large degrees, although they exhibit such a polynomial for 2 tensor 2 × x i ⊗ x j of degree 1 whilein this paper we consider it of degree 2 in the matrix variables, so theirpolynomial is of degree 5 in the tensor variables.Swap polynomials belong to the interesting class of permutation valuedpolynomials that is polynomials whose values are a scalar times a constantlinear combination of permutation operators.The existence of such polynomials is a consequence of the existence ofcentral polynomials and follows easily from the so called Goldman elements as in Saltman, [18], see Theorem 2.3.We discuss first the case of 2 × d × d matrices we proposea canonical construction of a swap polynomial of degree 2 d in 2 d variables.In particular we exhibit a swap polynomial of degree 8 (or 4 in theirlanguage) but in 8 variables for 2 × d variables whichimplies that we know an explicit formula for their values.The paper [19] was pointed out to me by Felix Huber while discussinghis recent work Positive maps and trace polynomials from the SymmetricGroup arXiv:2002.12887 28–2–2020. 2
Tensor Polynomials
Goldman element
The following Definition and Theorem is attributed, with no reference, toOscar Goldman in the book of M. A. Knus, M. Ojanguren,
Th´eorie de ladescente et alg`ebres d’Azumaya page 112 [10].Let R be a rank n Azumaya algebra over its center A . By definition ofAzumaya algebra the map π : R ⊗ A R op → End A ( R ) , π ( X i a i ⊗ b i )( x ) = X i a i xb i is an isomorphism, moreover there is a faithfully flat extension A → B sothat B ⊗ A R ≃ M n ( B ) and the trace of M n ( B ) restricted to R takes valuesin A , so we have an A –linear map tr : R → A ⊂ R . Definition . We define the
Goldman element t ∈ R ⊗ A R by π ( t )( x ) := tr ( x ) . Theorem 2.3.
The Goldman element satisfies t = 1 , t ( a ⊗ b ) t − = b ⊗ a. (1) Proof.
Under the faithfully flat extension A → B the element t must map to P ni,j =1 e i,j ⊗ e j,i , by uniqueness since this element satisfies the same property: n X i,j =1 e i,j e h,k e j,i = ( h = k h = k . Then properties of (1) can be verified directly: t = n X i,j =1 e i,j ⊗ e j,i n X h,k =1 e h,k ⊗ e k,h = n X i,j =1 e i,i ⊗ e j,j = 1 . The second property can again be verified in the split algebra and it is( n X i,j =1 e i,j ⊗ e j,i ) e a,b ⊗ e c,d ( n X h,k =1 e h,k ⊗ e k,h ) = e c,d ⊗ e a,b . By Corollary 10.4.3 of [1] we can take as B = A n ( R ) the commutativering giving the universal map into matrices that is for any map j : R → M n ( C ) one has a map ¯ j : A n ( R ) making commutative the diagram. R i / / j $ $ ❏❏❏❏❏❏❏❏❏❏❏ M n ( A n ( R )) M n (¯ j ) (cid:15) (cid:15) M n ( C ) (2)3t follows that if ρ : R → S is a ring homomorphism of rank n Azumayaalgebras we have R i R / / j (cid:15) (cid:15) M n ( A n ( R )) M n (¯ i S ◦ ρ ) (cid:15) (cid:15) S i S / / M n ( A n ( S )) (3)and t R , t S the respective Goldman elements we have ρ ( t R ) = t S , hence: Corollary 2.4.
Under any splitting C ⊗ A R ≃ M n ( C ) , the element t mapsthe switch operator on C n ⊗ C n .Proof. The element t maps to P ni,j =1 e i,j ⊗ e j,i which is the switch operatoron C n ⊗ C n since n X i,j =1 e i,j ⊗ e j,i ( e a ⊗ e b ) = e b ⊗ e a . Remark . The properties of Formula (1) determine t only up to a multi-plicative scalar α with α = 1.In particular if A is a domain α = ± R is a free rank n module over A with basis a , a , . . . , a n then thereis a unique dual basis for the trace form tr ( xy ). That is, there are uniqueelements a ∗ , a ∗ , . . . , a ∗ n with tr ( a i a ∗ j ) = δ ji . Then we have t = n X i =1 a i ⊗ a ∗ i . (4)This depends upon the fact that the element P n i =1 a i ⊗ a ∗ i is independentof the basis chosen. For R = M n ( A ) the dual basis of elementary matricesis e ∗ i,j = e j,i . So under the faithfully flat splitting the element P n i =1 a i ⊗ a ∗ i coincides with P ni,j =1 e i,j ⊗ e j,i . Remark . In the Physics literature for n = 2 and A = C one has the Paulimatrices which form, up to the normalizing factor √ , an orthonormal basisfor the trace form. restricted to Hermitian matrices where it is positive. In the Theory of algebras with polynomial identities one has the basic alge-bra R k,d := F [ ξ , . . . , ξ k ] of polynomials in k generic d × d matrices ξ i = ( ξ ( i ) h,k )which one identifies with the non commutative polynomial functions in k ma-trix variables . Here the variables ξ ( i ) h,k are the coordinates of the kd vector4pace M d ( F ) k . See [1] for details. This algebra is a domain with a cen-ter Z . If G is the field of fractions of Z we have, as soon as k >
1, that R k,d ⊗ Z G := D k,d is a division algebra of dimension d over its center G .Moreover G is the field of rational functions on M kd invariant under con-jugation action by GL ( d, F ) the group of invertible matrices. Finally thepolynomial functions on M kd invariant under conjugation are generated bythe traces of the monomials in the ξ i . Thus we have the Goldman element t ∈ D k,d ⊗ G D k,d .Notice that this element is independent of the variables ξ i in particularwe may find it in the algebra of just two variables.In general a rational function f ∈ D k,d can be evaluated on an open setof matrices and on this set the evaluation of t is the switch operator (1 , c for t which is a scalar polynomial, that is a central polynomial . In fact this canbe done in several ways.The algebra R k,d := F [ ξ , . . . , ξ k ] , k ≥ f ∈ F [ ξ , . . . , ξ k ] is an element ofthe center with no constant term then inverting c one has that the algebra F [ ξ , . . . , ξ k ][ c − ] is Azumaya of rank n , Corollary 10.3.5 of [1]. Thus it hasa Goldman element which in fact coincides with that t of D k,d .Thus for each such c there exist a i , b i ∈ F [ ξ , . . . , ξ k ] and h > t = c − h P i a i ⊗ b i , a i , b j ∈ F [ ξ , . . . , ξ k ]. In other words, by addingan extra variable ζ we have the identity c h t = X i a i ⊗ b i , X i a i ζb i = c h tr ( ζ ) . (5)The element P i a i ⊗ b i is thus a swap polynomial. Theorem 2.8.
A tensor polynomial P i a i ⊗ b i is a swap polynomial if andonly if adding a variable ζ we have that P i a i ζb i is a central polynomialwhich vanishes for ζ with tr ( ζ ) = 0 .Proof. If P i a i ⊗ b i is a swap polynomial then it is equal, as function onmatrices, to α (1 ,
2) = α P ni,j =1 e i,j ⊗ e j,i with α an invariant scalar function.Then X i a i ζb i = α n X i,j =1 e i,j ζe j,i = α · tr ( ζ )1 . Conversely if P i a i ζb i = β β some polynomialinvariant, which vanishes for ζ with tr ( ζ ) = 0 then β = tr ( ζ ) α is divisibleby tr ( ζ ). Since β is linear in ζ we have α is independent of ζ and P i a i ζb i = tr ( ζ ) α X i a i ⊗ b i = α t . emark . For a swap polynomial P i a i ⊗ b i = α t we have nα = P i a i b i soas soon as the characteristic does not divide n we also have that nα = P i a i b i is a central polynomial. × matrices In [19] the authors explain a method to construct balanced swap polyno-mials, Definition 1.2. The condition of being balanced is necessary for theirapplications. They exhibit one, let us denote it by Q ( x, y ), in two variables x, y for 2 × Q ( x, y ) := (6) xy xy ⊗ xy xy − xy xy ⊗ y x y − xy x ⊗ xy xy + xy x ⊗ xy x + xy x ⊗ yxyxy − xy x ⊗ y xyx − xy ⊗ xyxyx + xy ⊗ yx yx − yxy x ⊗ xy xy − yxy x ⊗ xy x + yxy x ⊗ yxyxy + yxy x ⊗ y x − yxy ⊗ xyxyx + yxy ⊗ xy x + yxy ⊗ yxyx − yxy ⊗ y x + y xyx ⊗ xy xy − y xyx ⊗ xy x − y xyx ⊗ yxyxy + y xyx ⊗ yxy x − y xy ⊗ xyx y + y xy ⊗ xyxyx + y xy ⊗ yx y − y xy ⊗ yx yx + y x ⊗ xy x − y x ⊗ yxyxy + y x ⊗ y x y − y x ⊗ y x + y xy ⊗ xyxyx − y xy ⊗ xy x − y xy ⊗ yxyx + y xy ⊗ y x − y x ⊗ x y x + y x ⊗ xyx y + y x ⊗ xyxyx − y x ⊗ yx y − y x ⊗ yxyx + y x ⊗ y x + y ⊗ x yx − y ⊗ xyx This has been built by a computer program, and I have verified it, and nowI want to give a theoretical explanation for its existence and that of manyother swap polynomials, Theorem 3.12. Its value, checked by Computer, is tr ( y ) det([ x, y ]) t = tr ( y ) [ x, y ] t . (7) Formulas
This topic is treated in detail in Chapter 9 of [1]. We want torecall some formulas which will be useful to understand swap polynomials.The case of two 2 × x, y is fully treated in the 1981 paper ofFormanek, Halpin [6] and will be quickly reviewed here. Start with Exercise9.1.1 of [1]. Proposition 3.2.
The ring T of invariants of 2, × matrices x, y is thepolynomial ring in 5 generators tr ( x ) , det( x ) , tr ( y ) , det( y ) , tr ( xy ) . (8) The ring S of equivariant maps of 2, × matrices x, y to × matrices(also called the trace algebra ) is a free module S = T + T x + T y + T xy overthe ring of invariants with basis , x, y, xy. (9)6 he multiplication table is given by Cayley–Hamilton: x = tr ( x ) x − det( x ) , y = tr ( y ) y − det( y ) ,yx = − xy + tr ( x ) y + tr ( y ) x + tr ( xy ) − tr ( x ) tr ( y ) . (10)Of course in characteristic 0 we may replace the generators of Formula(8) with tr ( x ) , tr ( x ) , tr ( y ) , tr ( y ) , tr ( xy ) . (11)Let R := F h x, y i ⊂ S be the subalgebra generated by the two genericmatrices x, y . This is also the free algebra in two variables modulo the idealof polynomial identities of 2 × R is deduced in [6] from the following identities:[ x, y ] x = tr ( x )[ x, y ] − x [ x, y ] , [ x, y ] y = tr ( y )[ x, y ] − y [ x, y ]det( x )[ x, y ] = x [ x, y ] x, det( y )[ x, y ] = y [ x, y ] ytr ( xy )[ x, y ] = xyxy − yxyx. (12)From which it follows (Lemma (2) of [6]): Proposition 3.3.
The commutator ideal R [ x, y ] R equals S [ x, y ] . From this Theorem (3) of [6]
Theorem 3.4.
We have the decomposition of R as vector space: R = ⊕ i,j F x i y j ⊕ S [ x, y ] . (13)Recall that, given an inclusion A ⊂ B of rings, the conductor (of A in B ) is the maximal ideal I of A which is also an ideal of B . In other wordsit is the set of a ∈ A with Ba + aB ⊂ A . Proposition 3.5.
The commutator ideal R [ x, y ] R is the conductor of R in S .Proof. By the previous proposition R [ x, y ] R is an ideal in S so it is containedin the conductor. Now the conductor is a bigraded ideal and R/R [ x, y ] R has a basis of the ordered monomials x i y j . So if we had an element inthe conductor not in R [ x, y ] R we would have one of those monomials is inthe conductor. If x i y j is in the conductor we have x i y j tr ( x ) = f ( x, y ) forsome non commutative polynomial. Setting y = 1 we obtain some identity x i tr ( x ) = αx i +1 , α ∈ F . This implies tr ( x ) = αx a contradiction.The aim of the paper [6] was in particular to compute the Poincar´eseries P ( R ) = P ∞ i,j =1 dim( R i,j ) t i s j where dim( R i,j ) is the dimension of R inbidegree i, j with respect to x, y . Now we clearly have P ( T ) = 1(1 − t )(1 − s )(1 − t )(1 − s )(1 − ts ) , ( S ) = (1 + t + s + ts ) P ( T ) . P ( R ) = 1(1 − t )(1 − s ) + ts P ( S ) , P ( Z ) = 1 + ( s t ) P ( T ) . (14)Here Z is the center of R and the last Formula follows from Theorem 3.9.Finally the Poincar´e series of the free algebra is (1 − s − t ) − so thePoincar´e series of the polynomial identities in 2 variables for 2 × s t ( s + t − st )(1 − s ) − (1 − t ) − (1 − st ) − (1 − s − t ) − . In fact it is important to describe some basic elements in the conduc-tor of the ring R n of generic 2 × n ≥ S n . Lemma 3.6.
An element f ( x , . . . , x n ) ∈ R n is in the conductor of S m for all m ≥ n if and only if adding an extra variable x n +1 we have that tr ( x n +1 ) f ( x , . . . , x n ) ∈ R n +1 . Proof.
Write the polarized form of the Cayley–Hamilton identity in the form tr ( z ) tr ( w ) = − zw − wz + tr ( z ) w + tr ( w ) z + tr ( zw ) . One has recursively that Q mi =1 tr ( z i ) = P j A j tr ( B j ) with A j , B j explicitnon commutative polynomials.The algebra S m is generated over R m by the elements tr ( M ) with M amonomial in the generic matrices (of degree ≤ f ( x , . . . , x n ) ∈ R n is in the conductor of S m if it absorbs the elements Q mi =1 tr ( z i ). But bythe previous identity one is reduced to a single trace. Remark . In Theorem 10.4.8 of [1] we generalize to generic matrices ofany size proving that f ( x , . . . , x n ) ∈ R n is in the conductor of S m if andonly if for an extra variable x n +1 we have det( x n +1 ) f ( x , . . . , x n ) ∈ R n +1 . Corollary 3.8.
1) The polynomial [[ x , x ] , x ] is in the conductor of thegeneric matrices inside the trace algebra.2) The central polynomial [ x, y ] is in the conductor of the center ofthe generic matrices inside the invariant ring, that is for all m we have Q mi =1 tr ( z i )[ x, y ] is also a non commutative central polynomial.Proof. Recall the basic Formula 9.50 of [1]:[ z [ x , x ] + [ x , x ] z, x ] = tr ( z )[[ x , x ] , x ] . (15)With some elementary manipulations one obtains from this tr ( z )[ x, y ] = [ zx [ x, y ] + x [ x, y ] z, y ] − x [ z [ x, y ] + [ x, y ] z, y ] = (16) zx [ x, y ] y − yzx [ x, y ] − xz [ x, y ] y + xyz [ x, y ] + [ x, y ] z. x, y ] = − det([ x, y ]) is an irreducible polynomial van-ishing exactly on the subvariety V of pairs of matrices x, y which are NOTirreducible. If c ( x, y ) is any central polynomial in these two variables (withno constant term) then it vanishes on V (Proposition 10.2.2 of [1]). Thereforewe have c ( x, y ) = [ x, y ] α, α ∈ T . Conversely Theorem (5) of [6] Theorem 3.9.
The center Z of R equals F + [ x, y ] T .Proof. Every element of the form c ( x, y ) = [ x, y ] α, α ∈ T can be expressedas a central polynomial by Corollary 3.8 2).Formula (16) gives, by the discussion of § P ( x, y ) := 1 ⊗ ([ x, y ] + x [ x, y ] y ) − y ⊗ x [ x, y ] − x ⊗ [ x, y ] y + xy ⊗ [ x, y ] (17)with value [ x, y ] t but quite unbalanced.But now from this we can build a balanced swap polynomial taking thesame value as Q ( x, y ) of Formula (6).In fact Q ( x, y ) = tr ( y ) [ x, y ] P ( x, y ) which equals tr ( y ) [ x, y ] t = tr ( y )[ x, y ] ⊗ ([ x, y ] + x [ x, y ] y ) tr ( y ) − [ x, y ] y ⊗ tr ( y ) x [ x, y ] − [ x, y ] x ⊗ tr ( y ) [ x, y ] y + tr ( y )[ x, y ] xy ⊗ tr ( y )[ x, y ] This is balanced but contains traces, but each of these expressions, bothon the left and on the right of ⊗ in each summand, is a multiple of [ x, y ]which is in the conductor of R in S (Proposition 3.5). Thus using Formulas(12) each of the terms can be written as a pure non commutative polynomial! Theorem 3.10.
By applying the Formulas (12) we obtain a balanced swappolynomial of degree 10.Remark . I have not verified if, by applying these formulas, one obtainsexactly formula (6) or another formula giving the same swap polynomial upto a tensor polynomial identity (and one should also decide in which orderto apply Formulas (12)).Of course one may also exchange x and y . A more symmetric swap poly-nomial of degree 5 both in x and y is, by the same argument, tr ( x ) tr ( y )[ x, y ] t .We will refer to these 3 polynomials as Q i ( x, y ) , i = 1 , , <
10 and only these 3 in degree 10.Later we will see that there is a balanced swap polynomial of degree 8 butin 8 variables, Theorem 4.12.
Theorem 3.12.
For every invariant A = A A product of h > factorsof degree 1 giving A and k factors of degree 2 giving A . If either k = 2 ℓ is even, or k = 2 ℓ + 1 and h ≥ we have that A [ x, y ] t is the value of abalanced swap polynomial. roof. First if k = 2 ℓ we split A = B B each with ℓ elements and A = C C C with C = tr ( a ) tr ( b ) , a, b ∈ { x, y } a product of 2 traces and C , C of the same degree.Then C [ x, y ] P ( x, y ) = Q i ( x, y ), depending on the variables appearingin the traces. Then we multiply by B B C C and we distribute B C onthe first factor and B C on the second. We obtain a balanced polynomialinvolving traces but again by the same argument all terms can be expressedas non commutative polynomials. The second case is similar.It remains open the question if one can have a balanced swap polynomialwhich does not satisfy the previous conditions or even just whose value isnot divisible by [ x, y ] . My guess is NO. This may be related to the fact that − [ x, y ] is the discriminant of the basis (9).The algebra S becomes an Azumaya algebra after inverting the element[ x, y ] and in fact Proposition 3.13.
The polynomial P ( x, y ) is also an expression of [ x, y ] t by using the dual basis to , x, y, xy as in Formula (4) .Proof. In fact, recall from page 377 of [1] the matrix of the trace form, ofthe basis (9), is: D = (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) tr (1) tr ( x ) tr ( y ) tr ( xy ) tr ( x ) tr ( x ) tr ( xy ) tr ( x y ) tr ( y ) tr ( yx ) tr ( y ) tr ( xy ) tr ( xy ) tr ( x y ) tr ( xy ) tr (( xy ) ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) , det( D ) = − [ x, y ] (18)One can compute that the cofactor matrix ¯Λ of D is divisible by [ x, y ] so is ¯Λ = [ x, y ] Λ. Setting= det( x ) tr ( y ) − tr ( xy ) tr ( x ) tr ( y )+ tr ( x ) det[ y ) − x ) det[ y )+ tr ( xy ) := A Λ = (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) A − tr ( x ) det( y ) − det( x ) tr ( y ) tr ( x ) tr ( y ) − tr ( xy ) − tr ( x ) det( y ) 2 det( y ) tr ( xy ) − tr ( y ) − det( x ) tr ( y ) tr ( xy ) 2 det( x ) − tr ( x ) tr ( x ) tr ( y ) − tr ( xy ) − tr ( y ) − tr ( x ) 2 (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) From this one has that the dual basis, for the trace form of the basis (9), upto the scalar [ x, y ] is([ x, y ] + x [ x, y ] y ) , − [ x, y ] y, − x [ x, y ] , [ x, y ]Then [ x, y ] t is given by the dual bases expression (4)= 1 ⊗ ([ x, y ] + x [ x, y ] y ) − x ⊗ [ x, y ] y − y ⊗ x [ x, y ] + xy ⊗ [ x, y ]is again the polynomial P ( x, y ) of Formula (17).It remains to exhibit with an explicit formula a balanced swap polyno-mial g ( ξ ) = f ( ξ )(1 ,
2) for all d . An explicit construction is performed in § § Swap polynomials
A general approach to balanced swap polynomials is the following. Startfrom any swap trace polynomial G := X i A i ⊗ B i = f ( x , . . . , x n ) t with A i , B i trace polynomials on d × d matrices, f ( x , . . . , x n ) a scalar invari-ant function. This can be for instance constructed, for any n ≥
2, by taking n monomials A i in the generic matrices with ∆ := det( tr ( A i A j )) = 0. Solv-ing for the dual basis (by Cramer’s rule) B i = P d i =1 x i,j A i the equations∆ δ ij = tr ( A j B i ) = d X i =1 x i,j tr ( A j A i ) (4) = ⇒ n X i =1 A i ⊗ B i = ∆ t . One sees first that the homogeneous components of G of degree h (i.e.deg A i + deg B i = h ) are still swap trace polynomials. It is easy to constructfrom this a balanced swap polynomial. First by multiplying by a suitableproduct Q I tr ( z i ). This can be distributed in each of the two factors to makethem balanced.Next one has special central polynomials u ( x ) which are in the conductorof the inclusion of the ring of central polynomial inside the ring of invariants.That is if f ( x ) is any invariant, i.e. a polynomial in the traces we have u ( x ) f ( x ) is a central polynomial (traces disappear). So taking two suchelements of the same degree one has F ( x ) := X i u ( x ) A i ⊗ u ( x ) B i = u ( x ) u ( x ) f ( x )(1 , F ( x ) is a balanced swap polynomial. Moreover u ( x ) u ( x ) f ( x ) is acentral polynomial. The previous general procedure is more effective using the approach to cen-tral polynomials of Razmyslov, [17] i.e. antisymmetry as follows.Given a noncommutative polynomial f in several variables which is linearin a given variable x i , write it in the form f = P k a k x i b k . Consider it as afunction on matrices, set f i := P k b k a k we then have: tr ( f ) = tr ( x i f i ) . If f = P k a k x i b k depends linearly also upon another variable x j and itchanges sign by exchange of x i , x j , then when we substitute in f , x i with x j we get P k a k x j b k = 0 and we also deduce: tr ( x j f i ) = tr ( X k a k x j b k ) = 0 . f ( x , . . . , x n , y ) is a noncommutative polynomial which islinear in each variable x i and alternating in x , . . . , x n and tr ( f ( x , . . . , x n , y ))is different from 0 we have tr ( x j f i ) = δ ij tr ( f ( x , . . . , x n , y )). Thus we have,up to the scalar function tr ( f ( x , . . . , x n , y )) the polynomials f i form a dualbasis of the n variables x j . From Formula (4) we also have that t = tr ( f ( x , . . . , x n , y )) − n X i =1 x i ⊗ f i , the Goldman element= ⇒ h ( x, y ) := n X i =1 x i y f i ( x , . . . , ˇ x i , . . . , x n , y ) = tr ( y ) tr ( f ( x, y )) . (19)Thus we have a central polynomial h ( x, y ) of degree deg( f ) + 1 and, byFormula (4), the swap polynomial: H := n X i =1 x i ⊗ f i ( x , . . . , ˇ x i , . . . , x n , y ) = tr ( f ( x , . . . , x n , y )) t . (20)This H is of course unbalanced, but if m is the degree of f and we chooseany central polynomial c of degree m − c · H = n X i =1 c · x i ⊗ f i ( x , . . . , ˇ x i , . . . , x n , y ) = c · tr ( f ( x , . . . , x n , y )) t is balanced. At worst, if we cannot find such a central polynomial, replacing f with ¯ f := f uzw of degree m + 3 we may take as c = h ( x, y ) of degree m + 1 = ( m + 3) − f satisfying the previous conditions is the Capelli polyno-mial C n , of degree 2 n −
1. Following Razmyslov for each m one sets C m ( x , x , . . . , x m ; y , y , . . . , y m − ):= X σ ∈ S m ǫ σ x σ (1) y x σ (2) y . . . x σ ( m − y m − x σ ( m ) . (21)In fact this is only an analogy of the classical Capelli identity, which isinstead an identity of differential operators (see [15]).Thus we have an explicit central polynomial of degree 2 n and an explicitbalanced swap polynomial of degree 4 n + 2.We will see later, in § Let us recall some basic facts. Denote for simplicity by G = GL ( n, F ) thegroup of invertible matrices which acts on n × n matrices by conjugation.12 heorem 4.3. The invariants of n × n matrices are generated by elements tr( M ) where M are monomials (of degree ≤ n by Razmyslov). Among these invariants the ones that are multilinear and alternatinghave a very special structure.In fact these invariants have an exterior multiplication. The algebra ofthese invariants, under exterior multiplication, is the algebra of invariantmultilinear alternating functions ( V M d ( F ) ∗ ) G .In turn this algebra can be identified to the cohomology of the unitarygroup. As all such cohomology algebras it is a Hopf algebra and by Hopf’sTheorem it is the exterior algebra generated by the primitive elements.The primitive elements of ( V M d ( F ) ∗ ) G are [11]: T i − = T i − ( x , . . . , x i − ) : (23) = tr( St i − ( x , . . . , x i − )) . (22)Recall the standard polynomial in k variables is defined as St k ( X ) := X σ ∈ S k ǫ σ x σ (1) · · · x σ ( k ) = Alt X x x . . . x k . (23)In particular, since these elements generate an exterior algebra, a productof elements T i is non zero if and only if the T i involved are all distinct. Given k distinct T i their product depends on the order only up to sign.The 2 n different products form a basis of ( V M d ( F ) ∗ ) G . In dimension d the only non-zero product of these elements containing d variables is T d ( x , x , . . . , x d ) = T ∧ T ∧ T ∧ · · · ∧ T d − . (24)Notice that tr( St i ( x , . . . , x i )) = 0 , ∀ i .As a consequence, we have: Proposition 4.4.
Any multilinear antisymmetric function of x , . . . , x d isa multiple of T ∧ T ∧ T ∧ · · · ∧ T d − .Remark . The function det( x , . . . , x d ) is an alternating invariant of ma-trices, so it must have an expression as in Formula (24). In fact the com-putable integer constant is known up to a sign [5]: T d ( x , . . . , x d ) = C d det( x , . . . , x d ) , C d := ± · · · (2 d − · · · ( d − ∈ Z . (25) ±{ , , , , , , . . . } Denote by Σ n ( F d ) ⊂ M d ( F ) ⊗ n the algebra of operators spanned bythe permutations σ ∈ S n , which by Schur–Weyl Theory is the algebra ofinvariants under the conjugation action by G .Take a tensor polynomial G ( X , . . . , X k ) ∈ F h X i ⊗ n , multilinear alter-nating in each of the k sets of d variables X : = { x i, , . . . , x i,d } ⊂ X. G in in the algebra M d ( F ) of d × d matrices we have ageneralization of Proposition 8 of [9]. Proposition 4.6.
1. There is an element A G ∈ M d ( F ) ⊗ n invariant un-der the diagonal action of the linear group G = GL ( d ) such that G ( X , . . . , X k ) = k Y i =1 T d ( X i ) A G , A G ∈ Σ n ( F d ) ⊂ M d ( F ) ⊗ n . (26)
2. We have tr ( σ ◦ G ( X , . . . , X k ) = k Y i =1 T d ( X i ) tr ( σ ◦ A G ) , ∀ σ ∈ S n .
3. If n = 2 then G ( X , . . . , X k ) = 0 is a swap polynomial if and only if: d · tr ( G ( X , . . . , X k )) = tr ((1 , G ( X , . . . , X k )) ⇐⇒ d · tr ( A G ) = tr ((1 , A G ) . (27) Proof.
The first two parts are clear. As for 3. we have A G = a · Id + b (1 , G ( X , . . . , X k ) is a swap polynomial if and only if a = 0 (and b = 0): tr ( A G ) = tr ( a · Id + b (1 , a · d + b · dtr ((1 , A G ) = tr ( a (1 ,
2) + b · Id ) = a · d + b · d so a · d + b · d = d ( a · d + b · d ) ⇐⇒ a = 0 . Let us also remark:
Remark . tr ( a · Id + b (1 , , ⇐⇒ a = − d · b, tr ((1 , a · Id + b (1 , , ⇐⇒ b = − d · a. Of course for a 2-tensor valued such polynomial G ( X, Y ) we have that G ( X, Y ) = 0 if and only if tr ( G ( X, Y )) = tr ((1 , G ( X, Y )) = 0.A method to compute A G is by using the transform Φ introduced byCollins [2] and the so called Weingarten function, which with a differentterminology was already studied by Formanek [5].Φ( A G ) := X σ ∈ S n tr( σ ◦ A G ) σ − , Wg ( n, d ) := Φ( ) − = ⇒ A G = Φ( A G ) Wg ( n, d ) . (28)One has that the element Φ( ) is invertible in the center of Σ n ( F d ). Hence Wg ( n, d ) = P µ ⊢ d a µ c µ ∈ Σ n ( F d ) is the image of a class function ( c µ denotes14he sum over the conjugacy class relative to the partition µ ). Of course theexpression is unique only if d ≥ n .An explicit formula through characters is, see [2], or [14] a µ = X λ ⊢ n, ht ( λ ) ≤ d χ λ (1) χ λ ( µ ) s λ,d (1) . (29)Where µ is the cycle partition of σ , χ λ ( µ ) the character of σ in theirreduciblerepresentation of S n corresponding to λ and s λ,d (1) is the dimension of thecorresponding irreducible representation of GL ( d, F ).It is known [13] that he function a µ is always nonzero and positive if µ is the cycle partition of an even permutation and negative for odd permuta-tions.For a computation using Mathematica of the list d ! P µ ⊢ d a µ c µ and d ≤
8, for n = d see [14]. Remark . In Proposition 16 of [9] we have shown that, if we distribute the d variables Y in k monomials n i ( Y ) each of degree h i (with P i h ki =1 = d )then Alt Y n ( Y ) ⊗ · · · ⊗ n k ( Y )) is 0 unless the h i are a permutation ofa refinement of the sequence δ d := 1 , , . . . , d −
1. In this case we have tr ( σ − G d ( Y , . . . , Y d ) = 0 unless σ glues together the monomials so to re-cover the partition δ d . In this case tr ( σ − G d ( Y , . . . , Y d )) = ±T d ( Y ) . Thesign is that of the permutation that σ induces on the subset of the h i whichare odd.In particular if k = d this means that the h i are a permutation of thesequence δ d . For the sequence δ d it follows n i ( Y ) = y ( i − +1 . . . y i , G d ( Y , . . . , Y d ) := Alt Y ( n ( Y ) ⊗ · · · ⊗ n d ( Y )) tr ( σ ◦ G d ( Y , . . . , Y d )) = ( T d ( Y ) if σ = . (30)That is Φ( G d ( Y , . . . , Y d )) = T d ( Y ) and, Proposition 26 of [9]: G d ( Y , . . . , Y d ) := Alt Y ( n ( Y ) ⊗ · · · ⊗ n d ( Y )) = T d ( Y ) Wg ( d, d ) . (31) Our final construction of swap polynomials is based on Proposition 4.6. Sup-pose we have two tensor polynomials G i ( X , . . . , X k ) ∈ F h X i ⊗ , i = 1 , G i ( X , . . . , X k ) = k Y j =1 T d ( X j ) A i , A i ∈ Σ ( F d ) ⊂ M d ( F ) ⊗ , i = 1 , . (32)We then have A i = a i Id + b i (1 , , i = 1 , . Then we may assume a i = 0 , i = 1 , heorem 4.10. − a G + a G = T d ( X ) T d ( Y )( − a b + a b )(1 , is a swap polynomial (provided that ( − a b + a b ) = 0 ) that is the twopolynomials are not proportional. So the issue is to find a pair of such polynomials. First it is easy to see,cf. [9], that if k = 1 a multilinear balanced antisymmetric tensor polynomial G ( x , . . . , x d ) ∈ M d ( F ) ⊗ , d ≥ d × d matrices,so the minimum number of sets X i to consider is 2. The approach to findtwo such polynomials (balanced) for k = 2, rests on the Theory of Formanekdeveloped to prove Theorem 4.11.In principle a simple way of finding a polynomial multilinear and alternat-ing in each of two sets of d variables X := ( x , . . . , x d ); Y = ( y , . . . , y d )is the following. Take any monomial M ( X, Y ) product in some order of the2 d variables X, Y and alternate F M ( X, Y ) :=
Alt X Alt Y M ( X, Y ).Also split M ( X, Y ) := AB with A, B each of length d , and alternate G M ( X, Y ) :=
Alt X Alt Y A ⊗ B . The real issue is to choose M ( X, Y ) so that G M ( X, Y ) is not a tensor polynomial identity. Of course if F M ( X, Y ) is nota polynomial identity. then G M ( X, Y ) is not a tensor polynomial identity.A theorem of Formanek relative to a conjecture of Regev, see [5], statesthat a certain explicit polynomial F ( X, Y ) in d variables X = { x , . . . , x d } and d variables Y = { y , . . . , y d } is non zero. A general discussion can befound in theorems of Giambruno Valenti [7].From the Regev polynomial we shall construct two tensor polynomialswith the required properties.The definition of F ( X, Y ) is this. Decompose d = 1+3+5+ . . . +(2 d − d variables X and the d variables Y in thelist and construct the monomials m i ( X ) , i = 1 , . . . , d and similarly m i ( Y ) as m i ( X ) = x ( i − +1 . . . x i , m i ( Y ) = y ( i − +1 . . . y i .m ( X ) = x , m ( X ) = x x x , m ( X ) = x x x x x , . . . . (33)We finally define F ( X, Y ) :=
Alt X Alt Y ( m ( X ) m ( Y ) m ( X ) m ( Y ) . . . m d ( X ) m d ( Y )) , (34)where Alt X (resp. Alt Y ) is the operator of alternation in the variables X (resp. Y ). Theorem 4.11. [see [5] or [14]] F ( X, Y ) = ( − d − d !) (2 d − T d ( X ) T d ( Y ) Id d (35) (25) = ( − d − C d ( d !) (2 d −
1) ∆( X )∆( Y ) Id d ; ∆( X ) = det( x , . . . , x d ) .
16n fact this follows from the special value for σ = (1 , , . . . , d ), the fullcycle, of the Weingarten function: a σ = ( − d − d ( d !) (2 d − F ( X, Y ) is a central polynomial. In fact it has also the property ofbeing in the conductor of the ring of polynomials in generic matrices insidethe trace ring. In other words by multiplying F ( X, Y ) by any invariantwe still can write this as a non commutative polynomial. This follows bypolarizing in z the identity, cf Theorem 10.4.8 of [1]det( z ) d F ( X, Y ) = F ( zX, Y ) = F ( X, zY ) . In order to use this for tensor polynomials we start from a general fact.Let us consider two decompositions of d as a sum of c ≤ d positiveintegers: h := ( h , . . . , h c ); k := ( k , . . . , k c ) | X i h i = X i k i = d . (36)Decompose accordingly the list X (resp Y ) in c ≤ d lists with the i th listformed by the h i (resp k i ) variables successive to the ones of the previouslists. Define m i ( X ) the ordered product of the variables in the i th list relativeto X similarly n j ( Y ) for Y . We have m i ( X ) has degree h i and n j ( Y ) hasdegree k j . Define next N i = N i ( X, Y ) := m i ( X ) n i ( Y ) , i = 1 , . . . , c . N ⊗ N ⊗ . . . ⊗ N d = m ( X ) ⊗ m ( X ) ⊗ . . . ⊗ m d ( X ) · n ( Y ) ⊗ n ( Y ) ⊗ . . . ⊗ n d ( Y ) . From Remark 4.8 it follows that the element
Alt X Alt Y ( N ⊗ N ⊗ . . . ⊗ N d )is 0 unless both h and k are permutations η, ζ of the sequence 1 , , , . . . , d −
1. In this last case always by the same remark since
Alt Y ( n ( Y ) ⊗ n ( Y ) ⊗ . . . ⊗ n d ( Y )) = ±T d ( Y ) Wg ( d, d )we have by Formula (30) ± Alt X Alt Y tr ( σ − ◦ N ⊗ N ⊗ . . . ⊗ N d ) (37)= T d ( Y ) X τ a τ tr ( σ − ◦ Alt X m ( X ) ⊗ m ( X ) ⊗ . . . ⊗ m d ( X ) ◦ τ ) (38)= ±T d ( Y ) T d ( X ) a σ . The sign is ǫ η ǫ ζ if h and k are permutations η, ζ of 1 , , , . . . , d − M ( X, Y ) product in some order of the 2 d variables X, Y . If we split M ( X, Y ) = AB as a product of two factors each of length d we may construct the 2–tensor valued polynomial F ⊗ M ( X, Y ) :=
Alt X Alt Y A ⊗ B (39)If F M ( X, Y ) is not a polynomial identity we can take G ( X, Y ) = F ⊗ M ( X, Y )from the previous Formula.Instead, if F M ( X, Y ) is a polynomial identity for d × d matrices, then tr ((1 , F ⊗ M ( X, Y )) = tr ( F M ( X, Y )) = 0 so F ⊗ M ( X, Y ) can be taken as G ( X, Y ) provided tr ( F ⊗ M ( X, Y )) =
Alt X Alt Y tr ( A ) tr ( B ) = 0 . For the first we can take the polynomial of Formula (35), we need to findthe second. d = 2 h even Let us use Formula 38 to construct two polynomials G , G satisfying thehypotheses of Theorem 4.10 when d = 2 h is even.Consider the three monomials products of d factors in the variables X, YA = m ( X ) m ( Y ) m ( X ) . . . m d − ( X ) m d ( Y ) ,B = m d ( X ) m d − ( Y ) . . . m ( Y ) m ( X ) m ( Y ) .C = m ( Y ) m d ( X ) m d − ( Y ) . . . m ( Y ) m ( X ) . All of degree d and A, B or A, C involving disjoint sets of variables. Set G ( X, Y ) =
Alt X Alt Y A ⊗ B, G ( X, Y ) =
Alt X Alt Y A ⊗ C. (40)Both are multilinear balanced tensor polynomials in the variables X, Y .We have
Alt X Alt Y AB = F ( X, Y ) = 0 , Alt X Alt Y AC = 0 . Apply Formula 38 to the N i decomposing AB and AC ; tr ( G ( X, Y )) = tr ( G ( X, Y )) =
Alt X Alt Y tr ( A ) tr ( B ) (38) = T d ( X ) T d ( Y ) a h,h . (41) tr ((1 , ◦ G ( X, Y )) =
Alt X Alt Y tr ( AB ) (38) = T d ( X ) T d ( Y ) a d . (42) tr ((1 , ◦ G ( X, Y )) = 0 . Set A i := A G i = a i Id + b i (1 , , i = 1 , .tr ( A i ) = a i · d + b i · d, tr ((1 , A i = a i · d + b i · d From Formulas (41) and (42) a d + b d = 0 , a d + b d = a d , a · d + b · d = a · d + b · d = a h,h a = a d − d · a h,h d (1 − d ) , b = − d · a d − a h,h d (1 − d ) , b = a h,h d (1 − d ) , a = − a h,h (1 − d )We want to use Formula (27) and hence Theorem 4.12. A G = − a A + a A is a swap polynomial with value T d ( X ) T d ( Y )( − a b + a b )(1 , . d · a h,h G ( X, Y )+( a d − d · a h,h ) G ( X, Y ) = a h,h ( d · a d − a h,h )(1 − d )(1 − d ) d ! T d ( X ) T d ( Y )(1 , d = 2 , , d ! that a = − , a = − , a = − , a , = 43 , a , = 2235 , a , = 300539 . G − G = − T ( X ) T ( Y )(1 ,
2) = − D ( X ) D ( Y )(1 , G − G = − D ( X ) D ( Y )(1 , G − G = 22 · · · T ( X ) T ( Y )(1 ,
2) = 113 · · T ( X ) T ( Y )(1 , G − G = − T ( X ) T ( Y )(1 , . Remark . We have that a h,h > a d < a h,h ( d · a d − a h,h )(1 − d )(1 − d ) d ! is < d odd The previous construction does not apply to the case d odd. In this case A = m ( X ) m ( Y ) m ( X ) . . . m d − ( Y ) m d ( X ) ,B = m d ( Y ) m d − ( X ) . . . m ( Y ) m ( X ) m ( Y ) .tr ( Alt X A ) = tr ( Alt X m ( X ) m d ( X ) m ( Y ) m ( X ) . . . m d − ( Y ) = 0since Alt X m ( X ) m d ( X ) = St d ( X ) = 0.We construct G ( X, Y ) =
Alt X Alt Y A ⊗ B and have tr ( G ( X, Y )) = 0 ,tr ((1 , ◦ G ( X, Y )) = T d ( X ) T d ( Y ) a d .G ( X, Y ) = T d ( X ) T d ( Y )( a + (1 , b, ad + bd = 0 , ad + bd = a d = ⇒ b = − a d (1 − d ) , a = a d d (1 − d ) . We need another tensor polynomial G ( X, Y ) with tr ( G ( X, Y )) = 0. Infact we will give one with tr ((1 , G ( X, Y )) = 0 . d = 3 and the general case is similar. We start from themonomials of Formula (33) m ( X ) = x , m ( X ) = x x x , m ( X ) = x x x x x . We split the monomials m and consider A = x y x x m ( Y ) , B = y x y y m ( X ) G ( X, Y ) :=
Alt X Alt Y A ⊗ B, tr ((1 , G ( X, Y )) = tr ( Alt X Alt Y AB ) = 0since in AB appears the factor m ( Y ) y of degree 6 whose alternation is 0.We need to compute tr ( G ( X, Y )) =
Alt X Alt Y tr ( A ) tr ( B ) . Next tr ( B ) = tr ( m ( X ) y x y y ) consider M := x ⊗ x x ⊗ m ( X ) ⊗ x , M := y ⊗ m ( Y ) ⊗ y ⊗ y y (44) P = M M , tr ((1 , , P ) = tr ( A ) tr ( B ) Lemma 4.14.
Alt X tr ( σ − M ) = Alt Y tr ( σ − M ) = 0 except the followingcases: tr ((1 , x ⊗ x x ⊗ m ( X ) ⊗ x = tr ( x ) tr ( x x x ) tr ( m ( X )) tr ((2 , x ⊗ x x ⊗ m ( X ) ⊗ x = tr ( x ) tr ( x x x ) tr ( m ( X )) tr ((1 , y ⊗ m ( Y ) ⊗ y ⊗ y y ) = tr ( y ) tr ( y y y ) tr ( m ( Y )) tr ((3 , y ⊗ m ( Y ) ⊗ y ⊗ y y ) = tr ( y ) tr ( y y y ) tr ( m ( Y )) Proof.
The other σ give a product of trace monomials of lenghts differentfrom the partition 1 , , Alt Y M Formula (46)
Alt Y tr ((3 , M ) = T ( Y ) , Alt Y tr ((1 , M ) = −T ( Y ); (45)Φ( Alt Y M ) = T ( Y )[(3 , − (1 , ⇒ Alt Y M = T ( Y )[(3 , − (1 , Wg (4 , . (46)We deduce for Alt X M Formula (48)
Alt X tr ((2 , M ) = T ( X ) , Alt X tr ((1 , M ) = −T ( X ); (47)Φ( Alt X M ) = T ( X )[(2 , − (1 , ⇒ Alt X M = T ( X )[(2 , − (1 , Wg (4 , . (48) Alt Y P = x ⊗ x x ⊗ m ( X ) ⊗ x · T ( Y )[(3 , − (1 , Wg (4 , Alt X tr ((1 , , x ⊗ x x ⊗ m ( X ) ⊗ x · [(3 , − (1 , Wg (4 , . = X σ b σ Alt X tr ((1 , , x ⊗ x x ⊗ m ( X ) ⊗ x · [(3 , − (1 , σ ) . (49)We know that we have non zero contributions ± b σ from Wg (4 ,
3) = P σ b σ σ only when − b σ if (3 , σ (1 , ,
4) = (1 , , b σ if (3 , σ (1 , ,
4) = (2 , b σ if (1 , σ (1 , ,
4) = (1 , , − b σ if (1 , σ (1 , ,
4) = (2 , . The corresponding 4 values of σ are: σ = (3 , , , ,
4) = 1 σ = (3 , , , ,
4) = (1 , , σ = (1 , , , ,
4) = (1 , , σ = (1 , , , ,
4) = (2 , , . But b σ is a class function so the contribution is − b + b , Alt X Alt Y tr ( A ) tr ( B ) = T ( X ) T ( Y )[ − b + b , ](4)! [ b , − b , ] = − −
35 = − . Now the expression of Wg (4 ,
3) = P σ b σ σ is not unique since P σ ǫ σ σ = 0but the difference b σ − b τ for two permutations of the same parity or thesum b σ + b τ for two permutations of opposite parity is well defined.Examples of b λ · ( d + 1)! for d = 2 , , , − c + 14 c , + 174 c , , ; 3715 c − c , − c , − c , , + 615 c , , , − c + − c , + 503168 c , + 6124 c , , − c , , − c , , , + 5227168 c , , , , c − c , − c , − c , , − c , + 151210 c , , + 991105 c , , , + 701210 c , , + 28970 c , , , − c , , , , + 5227168 c , , , , , The general case d + 1 = 2 h reduces to this by considering the twomonomials C = m ( Y ) m ( X ) . . . m d − ( Y ) m d ( X ); D = m ( X ) m ( Y ) . . . m d − ( X ) m d ( Y )21ubstituting Alt Y Alt X AC ⊗ BD, Alt Y Alt X ACBD = 0 tr ( AC )( BD ) = tr ( τ − h Q ) , τ h = (1 , , . . . , h )( h + 1 , . . . , h ) , Q = Q · Q Q = x ⊗ x x ⊗ m ( X ) ⊗ x ⊗ m ( X ) ⊗ m ( X ) ⊗ . . . ⊗ m d ( X ) Q = y ⊗ m ( Y ) ⊗ y y ⊗ y ⊗ m ( Y ) ⊗ m ( Y ) ⊗ . . . ⊗ m d ( Y )continuing in the same way we need the non zero contributions ± b σ from Wg ( d + 1 , d ) = P σ b σ σ only when − b σ if (3 , στ − h = (1 , , b σ if (3 , στ − h = (2 , b σ if (1 , στ − h = (1 , , − b σ if (1 , στ − h = (2 , . The corresponding 4 values of σ are: σ = (1 , , τ h = (2 , , . . . , h )( h + 1 , . . . , h ) σ = (3 , , τ h = (2 , , τ h = (1 , , , , . . . , h )( h + 1 , . . . , h ) σ = (1 , , τ h = (1 , , τ h = (2 , , , , . . . , h )( h + 1 , . . . , h ) σ = (1 , , τ h = (1 , , τ h = (2 , , . . . , h )( h + 1 , . . . , h ) . − b ,h − ,h + 2 b h,h − b , ,h − ,h d = 5 , h = 3 , (5!) [ − b , + 2 b , − b , , ] = − − − − . It remains to prove that for all h we have − b ,h − ,h + 2 b h,h − b , ,h − ,h = 0 . Unfortunately we cannot use directly Novak’s argument, so we leave thisopen.
References [1] E. Aljadeff, A. Giambruno, C. Procesi, A. Regev.
Rings with polynomialidentities and finite dimensional representations of algebras , A.M.S. Col-loquium Publications, to appear. 2.1, 2.7, 3.1, 3.7, 3.1, 3.1, 3.1, 4.9[2] Collins, Benoit Moments and cumulants of polynomial random variableson unitary groups, the Itzykson-Zuber integral, and free probability. Int.Math. Res. Not. 2003, no. 17, 953–982 4.2, 4.2[3] Collins, Benoit; ´Sniady, Piotr Integration with respect to the Haar mea-sure on unitary, orthogonal and symplectic group. Comm. Math. Phys.264 (2006), no. 3, 773–795.[4] E. Formanek,
Central polynomials for matrix rings , J. Algebra (1972), 129-132. 2.7 225] E. Formanek, A conjecture of Regev about the Capelli polynomial , J.Algebra (1987), 93–114. 4.5, 4.2, 4.9, 4.11[6] E. Formanek, P. Halpin, Wen Ch’ing Winnie Li,
The Poincar´e series ofthe ring of × generic matrices , J. Algebra (1981), no. 1, 105–112.3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1[7] A. Giambruno and A. Valenti, Central polynomials and matrix invari-ants , Israel J. Math. (1996), 281–297. 4.9[8] F. Huber, Positive Maps and Matrix Contractions from the SymmetricGroup arXiv:2002.12887 28–2–2020.[9] F. Huber, C. Procesi
Tensor polynomial identities
Israel Journal ofMathematics, to appear. 4.2, 4.8, 4.8, 4.9[10] M.-A. Knus, M. Ojanguren,
Th´eorie de la descente et alg`ebresd’Azumaya , Lecture Notes in Mathematics, Vol.
Springer-Verlag,Berlin-New York, 1974. iv+163 pp. 2.1[11] B. Kostant,
A theorem of Frobenius, a theorem of Amitsur-Levitzki andcohomology theory , J. Mathematics and Mechanics (1958), no. 2, 237–264. 4.2[12] Matsumoto, Sho; Novak, Jonathan Jucys–Murphy elements and unitarymatrix integrals.
Int. Math. Res. Not. IMRN 2013, no. 2, 362–397.[13] Novak, Jonathan I,
Jucys-Murphy elements and the unitary Weingartenfunction.
Noncommutative harmonic analysis with applications to prob-ability II, 231–235, Banach Center Publ., 89, Polish Acad. Sci. Inst.Math., Warsaw, 2010. 4.2, 4.13[14] C. Procesi,
A note on the Weingarten function ,http://arxiv.org/abs/2008.11129 4.2, 4.2, 4.11[15] C. Procesi,
Lie Groups, An approach through invariants and represen-tations , Springer Universitext, 2007 pp. xxiv+596, 4.1[16] C. Procesi,
Tensor fundamental theorems of invariant theory ,http://arxiv.org/abs/2011.10820[17] Yu. P. Razmyslov,
A certain problem of Kaplansky , (Russian) Izv. Akad.Nauk SSSR Ser. Mat. (1973), 483–501. 2.7, 4.1[18] Saltman, David J. Lectures on division algebras.
CBMS Regional Con-ference Series in Mathematics, 94. Published by American Mathemati-cal Society, Providence, RI; on behalf of Conference Board of the Math-ematical Sciences, Washington, DC, 1999. viii+120 pp. 12319] David Trillo, Benjamin Dive, and Miguel Navascu´es,