arXiv [math.QA]

Manin matrices for quadratic algebras
Alexey Silantyev∗

Joint Institute for Nuclear Research, 141980 Dubna, Moscow region, Russia
State University "Dubna", 141980 Dubna, Moscow region, Russia
Abstract
We give a general definition of Manin matrices for arbitrary quadratic algebras in terms of idempotents. We establish their main properties and give their interpretation in terms of category theory. The notion of minors is generalised to a general Manin matrix. We give some examples of Manin matrices, their relations with Lax operators, and obtain formulae for some minors. In particular, we consider Manin matrices of the types B, C and D introduced by A. Molev and their relation with Brauer algebras. Infinite-dimensional Manin matrices and their connection with Lax operators are also considered.

Contents

2.3 A-Manin matrices
2.4 (A, B)-Manin matrices
2.5 Comma categories and Manin matrices
2.6 Products of Manin matrices and co-algebra structure
2.7 Infinite-dimensional case: Manin operators
3.2 q-Manin matrices
3.3 Multi-parametric case: (q̂, p̂)-Manin matrices
3.4 A 4-parametric quadratic algebra
4 Lax operators
4.1 Lax operators of U_q(gl_n) type and q-Manin matrices
4.2 Lax operators of Yangian type as Manin operators
5.1 q-minors and permanents
5.2 Dual quadratic algebras and their pairings
5.3 Pairing operators
5.4 Minor operators
5.5 Properties of the minor operators
5.6 Minors for left equivalent idempotents
5.7 Construction of pairing operators
6.1 Pairing operators for (q̂, p̂)-Manin matrices
6.2 Pairing operators for q-Manin matrices from Hecke algebras
6.3 Pairing operators for the 4-parametric case
7 Manin matrices of types B, C and D
7.2 B, C, D cases and Brauer algebras
Conclusion
Appendices
A. Reduced expression and set of inversions
B. Lie algebras as quadratic algebras

∗ [email protected]
In the second half of the 80s Yuri Manin proposed to consider non-commutative quadratic algebras as a generalisation of vector spaces and called them 'quantum linear spaces' [Man87, Man88, Man91]. A usual finite-dimensional vector space is presented by the algebra of polynomials. The linear maps correspond to the homomorphisms of these algebras in the category of graded algebras. One can consider homomorphisms of a quadratic algebra over a non-commutative ring (algebra). Such 'non-commutative' homomorphisms of finitely generated quadratic algebras ('finite-dimensional' quantum linear spaces) are described by matrices with non-commutative entries satisfying some commutation relations. We call them Manin matrices.

The main example proposed by Manin is the 'quantum plane' defined by the algebra with 2 generators x, y and the commutation relation yx = qxy. He established the connection of the quantum plane with some quantum group (Hopf algebra). This quantum group gives a 'non-commutative' endomorphism of the quantum plane.

This picture was also generalised to the case of n generators with the commutation relations x_j x_i = q x_i x_j, i < j. The algebra describing all the 'non-commutative' endomorphisms of the corresponding quantum linear space was called the right quantum algebra in [GLZ], where the authors proved a q-analogue of the MacMahon Master Theorem. The 'non-commutative' endomorphisms and homomorphisms of these quadratic algebras are described by square and rectangular matrices respectively. We call them q-Manin matrices.

'Non-commutative' endomorphisms and homomorphisms of a usual finite-dimensional vector space generalise the matrices with commutative entries to matrices with non-commutative entries satisfying certain commutation relations (these are the commutation relations of the right quantum algebra for q = 1). This type of matrices was observed in Talalaev's formula [T] and then called Manin matrices [CF].
These Manin matrices have a lot of applications to integrable systems, Sugawara operators, affine Lie algebras and quantum groups [CF, RTS, CM]. Since they are an immediate generalisation of the usual matrices to the non-commutative case, almost all the formulae of the matrix theory are valid for these Manin matrices [CFR]. Most of them are also generalised to the q-Manin matrices [CFRS].

The 'non-commutative' homomorphisms between two quantum linear spaces are described by an algebra that can be constructed as an internal hom [Man88]. These 'non-commutative' homomorphisms give us a notion of a Manin matrix for a pair of finitely generated quadratic algebras (A, B). These are the matrices with non-commutative entries satisfying the commutation relations of the internal hom algebra. In previous works on Manin matrices the attention was concentrated on the case A = B corresponding to the 'non-commutative' endomorphisms. The general case allows us to consider more examples and to answer some questions about q-Manin matrices which were unclear before.

For instance, the column determinant is a natural generalisation of the usual determinant for the q = 1 Manin matrices, but some properties of this determinant are proved by using only a part of the commutation relations for the entries of these matrices. We show here that this part of the commutation relations defines Manin matrices for a pair of different quadratic algebras.

For q ≠ 1 the column determinant is generalised to the (column) q-determinant, which is a natural generalisation for q-Manin matrices. A permutation of the columns of a q-Manin matrix gives a matrix which is not a q-Manin matrix; however, its natural analogue of the determinant is again the q-determinant. In contrast, the q-determinant is not a relevant operation for a q-Manin matrix with permuted rows.
To consider the permuted q-Manin matrices as another type of Manin matrices and to define natural determinants for them one needs to consider some pairs of different quadratic algebras. It is convenient to do it in a more universal case, when the role of the quadratic algebras is played by multi-parametric deformations of the polynomial algebras.

A multi-parametric deformation of super-vector spaces was considered by Manin in the article [Man89]. The super-versions of Manin matrices and of the MacMahon Master Theorem for the non-deformed case were also considered in [MR]. Here we do not consider the super-case, but we plan to do it in future works.

The notion of Manin matrices for two different quadratic algebras includes Manin matrices of types B, C and D introduced by Molev in [Molev].

In this work we use tensor notations to describe quadratic algebras and Manin matrices by generalising the tensor approach given in [CFR, CFRS]. In this frame a quadratic algebra is defined by an idempotent that gives the commutation relations for this algebra. For example, the commutative algebra of polynomials is defined by the anti-symmetrizer of the tensor product of two copies of a vector space. Two different idempotents may define the same quadratic algebra. This gives an equivalence relation between the idempotents.

The relations for Manin matrices can be written in tensor notations with the corresponding idempotents. In the case of two different quadratic algebras these commutation relations are given by a pair of idempotents A and B, so we call the matrices satisfying these relations (A, B)-Manin matrices. In the case A = B we call them A-Manin matrices.

An important notion in the matrix theory is the notion of a minor, which is usually defined as the determinant of a square submatrix. The minors (defined via the column determinant) play the role of decomposition coefficients in the expression for a 'non-commutative' homomorphism of the anti-commutative polynomial algebras (Grassmann algebras).
The dual notion is the permanent of square submatrices (where some rows and columns may be repeated), which gives the coefficients in the case of commutative polynomial algebras.

This is directly generalised to some kinds of Manin matrices, including the q-Manin matrices and the multi-parametric case. However, in the general situation an analogue of minors is not defined by an operation (determinant or permanent) on submatrices. The minors are defined immediately, without such operations. To do it we introduce some auxiliary 'dual' quadratic algebras for a given idempotent and define a non-degenerate pairing between quadratic algebras (if it exists). In the tensor notation this pairing is written by using some higher idempotents, which we call pairing operators. These operators allow one to define minors as entries of some operators with non-commutative entries, which we call minor operators. For the main examples the pairing operators are related with the representation theory of the symmetric groups, Hecke algebras and Brauer algebras.

The paper is organised as follows.

In Section 2 we consider quadratic algebras and related Manin matrices. In Subsection 2.1 we consider the quadratic algebras in terms of idempotents. Subsection 2.2 is devoted to the equivalence relations between idempotents. In Subsections 2.3 and 2.4 we define A- and (A, B)-Manin matrices in terms of matrix commutation relations and in terms of quadratic algebras, and we give the main properties of these matrices. In Subsection 2.5 we interpret Manin matrices in terms of comma categories and define a general right quantum algebra via adjoint functors. Subsection 2.6 is devoted to the multiplication of Manin matrices and to the bialgebra structure on the right quantum algebra.
A generalisation of Manin matrices to the infinite-dimensional case is given in Subsection 2.7.

In Section 3 we consider the following particular cases: the Manin matrices of [CF, CFR], the q-Manin matrices [CFRS] and the multi-parametric case [Man89] (Subsections 3.1, 3.2 and 3.3 respectively). We recall basic properties of these Manin matrices and their determinants. In Subsection 3.4 we consider a generalisation of the n = 3 case from Subsection 3.3.

Section 4 is devoted to the relationship between Manin matrices and Lax operators. In Subsection 4.1 we explain the known connection of q-Manin matrices with some Lax operators (without spectral parameter) via a decomposition of an R-matrix into idempotents. This gives an important example of the idempotent equivalence. In Subsection 4.2 we interpret Lax operators (with a spectral parameter) for the rational R-matrix as infinite-dimensional Manin matrices.

In Section 5 we generalise the notion of minors for Manin matrices. Subsection 5.1 has a motivating role: there we consider minors for q-Manin matrices (defined via determinant and permanent) as some decomposition coefficients. In Subsections 5.2 and 5.3 we define 'dual' quadratic algebras and the corresponding pairings via pairing operators. In Subsections 5.4 and 5.5 we define minors as entries of minor operators and give some properties of these operators. In Subsection 5.6 we investigate the relation of pairing and minor operators for equivalent idempotents. Subsection 5.7 is devoted to an algebraic approach to the construction of pairing operators.

We consider examples of pairing and minor operators in Section 6. Subsection 6.1 is devoted to the multi-parametric case (which includes the case of q-Manin matrices). In Subsection 6.2 we again consider the case of q-Manin matrices, but we construct the pairing operators by using another idempotent (equivalent to the idempotent used in Subsection 6.1), which was found in Subsection 4.1. We show how they give related minor operators.
Subsection 6.3 is devoted to the case defined in Subsection 3.4.

Section 7 is devoted to the Manin matrices of types B, C and D. In Subsection 7.1 we recall Molev's definition of these matrices and interpret them as Manin matrices for pairs of idempotents. We consider the corresponding quadratic algebras. In Subsection 7.2 we construct the pairing operators by means of the Brauer algebra.

In Appendix A we give a formula for the set of inversions in the case of an arbitrary Weyl group. In Appendix B the universal enveloping algebras of Lie algebras are represented as quadratic algebras.

Acknowledgements. The author is grateful to A. Chervov, V. Rubtsov, An. Kirillov and A. Molev for useful discussions and advice.
As a basic field we choose the field of complex numbers C (though any algebraically closed field of characteristic zero can be taken). All the vector spaces are supposed to be defined over C. By an algebra we understand an associative unital algebra over C (not necessarily commutative). By a graded algebra we mean an N-graded associative unital algebra over C, where N is the set of non-negative integers: the grading of such an algebra has the form A = ⨁_{k∈N} A_k and the condition A_k A_l ⊂ A_{k+l} is implied for all k, l ∈ N. By a quadratic algebra we mean a graded algebra of the form A = R ⊗ (TV/I), where R is an arbitrary algebra, TV = ⨁_{k∈N} V^{⊗k} = C ⊕ V ⊕ (V ⊗ V) ⊕ ... is the tensor algebra of a finite-dimensional vector space V and I ⊂ TV is its (two-sided) ideal generated by a subspace of V ⊗ V; in particular, A_0 = R, A_1 = R ⊗ V.

Let C^n be the space of complex column vectors of size n. Its dual (C^n)^* = Hom(C^n, C) is the space of complex row vectors. Their standard bases (e_i)_{i=1}^n and (e^i)_{i=1}^n are dual to each other: e^i e_j = δ_{ij}. Let E_{ij} be the n × m matrix with entries (E_{ij})_{kl} = δ_{ik} δ_{jl}. It acts on C^m from the left and on (C^n)^* from the right as E_{ij} e_k = δ_{jk} e_i, e^i E_{jk} = δ_{ij} e^k. We have E_{ij} = e_i e^j.

We use the following tensor notations. Let M ∈ R ⊗ Hom(C^m, C^n) be an n × m matrix over an algebra R. It can be considered as an operator from C^m to C^n with entries M_{ij} ∈ R, that is M = Σ_{i,j} M_{ij} E_{ij}. Introduce the notation

M^{(a)} = Σ_{i,j} M_{ij} (1 ⊗ ··· ⊗ 1 ⊗ E_{ij} ⊗ 1 ⊗ ··· ⊗ 1),   (2.1)

where E_{ij} is placed at the a-th site and the 1's are unit matrices (of different sizes in general). This is an operator from C^{m_1} ⊗ ··· ⊗ C^{m_r} to C^{n_1} ⊗ ··· ⊗ C^{n_r}, where m_a = m, n_a = n and m_b = n_b for all b ≠ a. Analogously, for a matrix T = Σ_{i,j,k,l} T_{ij,kl} E_{ik} ⊗ E_{jl} ∈ R ⊗ Hom(C^m ⊗ C^{m′}, C^n ⊗ C^{n′}) with entries T_{ij,kl} ∈ R we denote

T^{(ab)} = Σ_{i,j,k,l} T_{ij,kl} (1 ⊗ ··· ⊗ E_{ik} ⊗ ··· ⊗ E_{jl} ⊗ ··· ⊗ 1),   (2.2)

with E_{ik} in the a-th site and E_{jl} in the b-th site. The numbers a and b should be different, but both cases a < b and a > b are possible. In particular,

T^{(21)} = Σ_{i,j,k,l} T_{ij,kl} (E_{jl} ⊗ E_{ik}).   (2.3)

In the same way the notations M^{(a)} and T^{(ab)} can be defined for any vector spaces V, W, V′, W′ and any operators M ∈ R ⊗ Hom(V, W), T ∈ R ⊗ Hom(V ⊗ V′, W ⊗ W′) (in the infinite-dimensional case the tensor product with R may be completed in some way).

Consider a quadratic algebra with n generators, that is an algebra generated by elements x_1, ..., x_n over C with some quadratic commutation relations. Since the number of the elements x_i x_j equals n², the number of independent quadratic relations is less than or equal to n². It means that these relations can be presented in the form

Σ_{i,j=1}^n A_{kl,ij} x_i x_j = 0,   k, l = 1, ..., n,   (2.4)

where A_{kl,ij} ∈ C. The quadratic algebra with these relations is the quotient TV/I, where V = (C^n)^* and I is the ideal generated by the elements Σ_{i,j=1}^n A_{kl,ij} e^i ⊗ e^j. An element x_i ∈ TV/I is the class of e^i ∈ (C^n)^* ⊂ T(C^n)^*.

The coefficients A_{kl,ij} can be considered as entries of a matrix A acting on C^n ⊗ C^n. In terms of the basis this action looks as follows: A(e_i ⊗ e_j) = Σ_{k,l=1}^n A_{kl,ij} (e_k ⊗ e_l). Note that for any invertible matrix G ∈ Aut(C^n ⊗ C^n), the product GA defines the same quadratic algebra, since the relations Σ_{i,j=1}^n (GA)_{kl,ij} x_i x_j = 0 are equivalent to (2.4).

Proposition 2.1.
For each quadratic algebra the matrix A defining its relations can be chosen to be an idempotent:

A² = A.   (2.5)

Proof.
Let the given quadratic algebra be defined by the relations with a matrix A. If one proves that there exists an invertible G ∈ Aut(C^n ⊗ C^n) such that (GA)² = GA, then we can choose GA as an idempotent matrix defining the quadratic relations for this algebra. Since G should be invertible, the equation GAGA = GA is equivalent to AGA = A. Let us reduce the matrix A to a Jordan form. In the corresponding basis of C^n ⊗ C^n it takes the block-diagonal form

A = \begin{pmatrix} α_1 & & \\ & \ddots & \\ & & α_r \end{pmatrix},   (2.6)

where the α_k are Jordan cells. Let us find the solution in the form (in the same basis)

G = \begin{pmatrix} β_1 & & \\ & \ddots & \\ & & β_r \end{pmatrix},   (2.7)

where each β_k is an invertible square submatrix of the same dimension as the Jordan cell α_k. Then the equation AGA = A is equivalent to the system of r equations α_k β_k α_k = α_k. If the Jordan cell α_k corresponds to a non-zero eigenvalue of the matrix A, then the matrix α_k is invertible and we can choose β_k = α_k^{-1}. Otherwise α_k is a nilpotent Jordan cell,

α_k = \begin{pmatrix} 0 & 1 & & \\ & 0 & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{pmatrix},   (2.8)

and β_k can be chosen as the cyclic permutation matrix

β_k = \begin{pmatrix} 0 & \cdots & 0 & 1 \\ 1 & & & 0 \\ & \ddots & & \vdots \\ & & 1 & 0 \end{pmatrix}.   (2.9)

For α_k = 0 we choose β_k = 1. In all the cases the matrices β_k so chosen are invertible.

Further we will suppose that the matrix A is an idempotent unless otherwise specified. Denote the corresponding quadratic algebra by X_A(C).

More generally, for an algebra R define the quadratic algebra X_A(R) = R ⊗ X_A(C). This is a graded algebra generated by the elements x_1, ..., x_n over R with the quadratic commutation relations (2.4) and r x_i = x_i r, r ∈ R, i = 1, ..., n. The formulae deg x_i = 1, deg r = 0 ∀r ∈ R give the grading on X_A(R). An idempotent A ∈ End(C^n ⊗ C^n) defines a functor X_A from the category of algebras to the category of graded algebras: X_A(f)(r x_{i_1} ··· x_{i_k}) = f(r) x_{i_1} ··· x_{i_k}, ∀f ∈ Hom(R, R′), ∀r ∈ R. Due to Proposition 2.1 any quadratic algebra is isomorphic to X_A(R) for some A and R.

The relations (2.4) can be rewritten in matrix notations as follows. Consider the column vector X = Σ_{i=1}^n x_i e_i. Then the relations (2.4) take the form

A(X ⊗ X) = 0,   (2.10)

where X ⊗ X = Σ_{i,j=1}^n x_i x_j (e_i ⊗ e_j).

The fact that there are no other independent relations for the x_i besides (2.4) can be reformulated as follows: if Σ_{i,j=1}^n T_{ij} x_i x_j = 0 for some T_{ij} ∈ R, then T_{ij} = Σ_{k,l=1}^n G_{kl} A_{kl,ij} for some G_{kl} ∈ R.

Lemma 2.2.
The following equations for a matrix T ∈ R ⊗ End(C^n ⊗ C^n) are equivalent:

T(X ⊗ X) = 0,   (2.11)
T(1 − A) = 0.   (2.12)

Proof.
The equation (2.11) is derived from (2.12) by multiplying by (X ⊗ X) from the right. Conversely, if the equation (2.11) holds, then we have n² relations Σ_{i,j=1}^n T_{ab,ij} x_i x_j = 0 enumerated by a, b = 1, ..., n, where the T_{ab,ij} are the entries of the matrix T. This implies T_{ab,ij} = Σ_{k,l=1}^n G_{ab,kl} A_{kl,ij} for some matrix G = (G_{ab,kl}), that is T = GA. By multiplying the last equality by (1 − A) on the right and by taking into account (2.5) one obtains (2.12).

Let us consider the 'change of variables' y_i = Σ_{k=1}^m M_{ik} x_k, i = 1, ..., n, where x_1, ..., x_m are the generators of X_A(C). In general, the transition matrix M is an n × m matrix with non-commutative entries M_{ik} ∈ R, so that y_i ∈ X_A(R). In terms of Y = Σ_{i=1}^n y_i e_i we have

Y = MX.   (2.13)

Lemma 2.3.
The equation

A(Y ⊗ Y) = 0   (2.14)

is equivalent to

A M^{(1)} M^{(2)} (1 − A) = 0.   (2.15)

Proof.
Note that the generators x_i ∈ X_A(C) commute with the entries M_{kl} ∈ R as elements of the algebra X_A(R) = R ⊗ X_A(C). Then by substituting (2.13) into (2.14) we obtain A M^{(1)} M^{(2)} (X ⊗ X) = 0, which in turn is equivalent to (2.15) by Lemma 2.2.

Consider the quadratic algebra Ξ_A(R) generated by the elements ψ_1, ..., ψ_n over R with the commutation relations

ψ_i ψ_j = Σ_{k,l=1}^n A_{kl,ij} ψ_k ψ_l   (2.16)

and r ψ_i = ψ_i r, r ∈ R. Each idempotent A ∈ End(C^n ⊗ C^n) defines a functor Ξ_A from the category of algebras to the category of graded algebras: Ξ_A(R) = R ⊗ Ξ_A(C). Consider the row vector Ψ = Σ_{i=1}^n ψ_i e^i = (ψ_1, ..., ψ_n). Then the relations (2.16) take the form

(Ψ ⊗ Ψ)(1 − A) = 0.   (2.17)

Remark 2.4. If X_A(C) is a Koszul algebra, then the algebra Ξ_A(C) is its Koszul dual algebra.

We will use the following conventions. We write X_A(R) = X_{A′}(R) iff there exists an algebra homomorphism X_A(R) → X_{A′}(R) identical on the generators, that is x_i ↦ x_i ∀i and r ↦ r ∀r ∈ R. We write Ξ_A(R) = Ξ_{A′}(R) iff there exists an algebra homomorphism Ξ_A(R) → Ξ_{A′}(R) identical on the generators ψ_i and on r ∈ R. We also write Ξ_A(R) = X_{A′}(R) iff there exists an algebra homomorphism Ξ_A(R) → X_{A′}(R) such that ψ_i ↦ x_i ∀i and r ↦ r ∀r ∈ R.

Remark 2.5.
We have X_A(R) = X_{A′}(R), Ξ_A(R) = Ξ_{A′}(R) or Ξ_A(R) = X_{A′}(R) iff X_A(C) = X_{A′}(C), Ξ_A(C) = Ξ_{A′}(C) or Ξ_A(C) = X_{A′}(C) respectively. The latter equalities are exactly the isomorphisms of the quadratic algebras TV/I as TV-modules (left, right or two-sided), where in the case Ξ_A(C) = X_{A′}(C) we identify C^n with (C^n)^* via e_i ↦ e^i.

Note that S = 1 − A is an idempotent iff A is an idempotent. The transformation S → SG by an invertible matrix G ∈ Aut(C^n ⊗ C^n) does not change the relation (2.17). By transposing the relation (2.17) we see that it is equivalent to the relation (2.10) with X replaced by Ψ^⊤ and A replaced by S^⊤ = 1 − A^⊤, where (·)^⊤ means the matrix transposition. Hence Ξ_A(R) = X_{S^⊤}(R), and the functor Ξ_A can be identified with the functor X_{S^⊤} = X_{1−A^⊤}.

2.2 Left and right equivalence of idempotents

A fixed quadratic algebra can be defined by different idempotents. Here we give a condition when this happens. To do it we introduce two equivalence relations for idempotent endomorphisms acting on a vector space. We prove some useful properties of these idempotents and their equivalence relations.

Let V be a finite-dimensional vector space. Note that the Jordan form of an idempotent A ∈ End(V) (in an appropriate basis of V) is the diagonal matrix diag(1, ..., 1, 0, ..., 0):

A = \begin{pmatrix} 1_r & 0 \\ 0 & 0 \end{pmatrix}   (2.18)

with blocks of sizes r × r, r × d, d × r, d × d, where r is the rank of A and d = dim V − r. We obtain the first property of the idempotents.

Proposition 2.6.
The rank of an idempotent A ∈ End(V) equals its trace: rk A = tr A.

Let us say that two idempotents A, A′ ∈ End(V) are left-equivalent (to each other) if there exists G ∈ Aut(V) such that A′ = GA, and we call them right-equivalent if there exists G ∈ Aut(V) such that A′ = AG. Both relations are indeed equivalence relations.

Proposition 2.7.
Two idempotents A, A′ ∈ End(V) are left-equivalent iff the dual idempotents S = 1 − A and S′ = 1 − A′ are right-equivalent.

Proof.

Let us fix a basis of V such that the idempotent A has the form (2.18). Suppose that A is left-equivalent to A′; then there exists an invertible matrix G such that A′ = GA. The matrices G and A′ have the form

G = \begin{pmatrix} α & β \\ γ & δ \end{pmatrix},   A′ = GA = \begin{pmatrix} α & 0 \\ γ & 0 \end{pmatrix},   (2.19)

where α, β, γ and δ are r × r, r × d, d × r and d × d matrices respectively. The idempotentness of A′ implies AA′ = A, that is α = 1. Hence A′ = A_γ, where A_γ := \begin{pmatrix} 1 & 0 \\ γ & 0 \end{pmatrix}. Since S = 1 − A = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} and S′ = 1 − A′ = \begin{pmatrix} 0 & 0 \\ −γ & 1 \end{pmatrix}, we have S′ = S G_{−γ}, where G_γ := \begin{pmatrix} 1 & 0 \\ γ & 1 \end{pmatrix}.

Further, note that two idempotents S_1 and S_2 are right-equivalent iff the idempotents S_1^⊤ and S_2^⊤ are left-equivalent. Thus the right equivalence of S and S′ implies the left equivalence of A = 1 − S and A′ = 1 − S′.

Remark 2.8.
We see from the proof that any matrix left-equivalent to (2.18) has the form A_γ = G_γ A. Analogously, any matrix right-equivalent to S = 1 − A has the form S_γ = 1 − A_γ = S G_{−γ}.

Proposition 2.9.
If two idempotents S, S′ ∈ End(V) are right-equivalent, then SV = S′V. If two idempotents A, A′ ∈ End(V) are left-equivalent, then V^*A = V^*A′, where V^* = Hom(V, C).

Proof. If G ∈ Aut(V), then GV = V, so from S′ = SG one obtains S′V = SGV = SV. The second statement follows from the first one for S = A^⊤, S′ = (A′)^⊤.

Remark 2.10.
By using the Jordan forms one can prove the converse statement: the equalities SV = S′V and V^*A = V^*A′ imply the right equivalence of S, S′ and the left equivalence of A, A′ respectively.

Lemma 2.11.
If two idempotents A, A′ ∈ End(V) satisfy the relations A′(1 − A) = 0 and A(1 − A′) = 0, then they are left-equivalent.

Proof. By substituting (2.18) and A′ = \begin{pmatrix} α & β \\ γ & δ \end{pmatrix} into these relations we obtain β = 0, δ = 0 and α = 1, so that A′ = A_γ = G_γ A (see the proof of Proposition 2.7).

Consider the case V = C^n ⊗ C^n. Recall that any idempotent A ∈ End(C^n ⊗ C^n) defines the quadratic algebras X_A(C) and Ξ_A(C).

Proposition 2.12.
Let A, A′ ∈ End(C^n ⊗ C^n) be idempotents. Then the following conditions are equivalent:

(a) A is left-equivalent to A′;
(b) S = 1 − A is right-equivalent to S′ = 1 − A′;
(c) X_A(C) = X_{A′}(C);
(d) Ξ_A(C) = Ξ_{A′}(C).

Proof.
The conditions (a) and (b) are equivalent due to Proposition 2.7. Two left-equivalent idempotents A and A′ define the same commutation relations (2.10). This means that (a) implies (c). Let us prove the converse implication. If X_A(C) = X_{A′}(C), then A′(X ⊗ X) = 0 and A(X ⊗ X) = 0. By Lemma 2.2 we obtain the relations A′(1 − A) = 0 and A(1 − A′) = 0. Due to Lemma 2.11 these relations imply the left equivalence of A and A′. Since Ξ_A(C) = X_{S^⊤}(C), the equivalence of (b) and (d) is derived by considering the transposed matrices.

2.3 A-Manin matrices

Definition 2.13. An n × n matrix M over some algebra R satisfying the equation

A M^{(1)} M^{(2)} (1 − A) = 0   (2.20)

is called a Manin matrix corresponding to the idempotent A, or simply an A-Manin matrix.

The definition (2.20) can be rewritten in the following form:

A M^{(1)} M^{(2)} = A M^{(1)} M^{(2)} A.   (2.21)

This relation means that the expression A M^{(1)} M^{(2)} is invariant with respect to multiplication by A from the right.

Proposition 2.14. Let M be an n × n matrix over R. Let x_i and ψ_i be the generators of X_A(C) and Ξ_A(C) respectively. Consider the elements y_i ∈ X_A(R) and φ_i ∈ Ξ_A(R) defined as follows:

\begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} M_{11} & \cdots & M_{1n} \\ \vdots & & \vdots \\ M_{n1} & \cdots & M_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix},   (2.22)

(φ_1, ..., φ_n) = (ψ_1, ..., ψ_n) \begin{pmatrix} M_{11} & \cdots & M_{1n} \\ \vdots & & \vdots \\ M_{n1} & \cdots & M_{nn} \end{pmatrix}.   (2.23)

Then the following three conditions are equivalent:

• The matrix M is an A-Manin matrix.
• The elements y_i satisfy the relations (2.4), i.e. A(Y ⊗ Y) = 0, where Y = Σ_{i=1}^n y_i e_i.
• The elements φ_i satisfy the relations (2.16), i.e. (Φ ⊗ Φ)(1 − A) = 0, where Φ = Σ_{i=1}^n φ_i e^i.

Proof.
Note that the formulae (2.22) and (2.23) have the form Y = MX and Φ = ΨM. The second condition is equivalent to the first one due to Lemma 2.3. The third condition can be written as (1 − A^⊤)(Φ^⊤ ⊗ Φ^⊤) = 0. Since the operator 1 − A^⊤ is also an idempotent, we can again apply Lemma 2.3, so the third condition is equivalent to the equation

(1 − A^⊤)(M^⊤)^{(1)}(M^⊤)^{(2)} A^⊤ = 0.   (2.24)

Transposition of this equation and the formula ((M^⊤)^{(1)}(M^⊤)^{(2)})^⊤ = M^{(1)} M^{(2)} give exactly (2.20), which is the first condition.

Note that the relation (2.20) has the form A M^{(1)} M^{(2)} S = 0, where S = 1 − A. It is equivalent to

M^{(1)} M^{(2)} S = S M^{(1)} M^{(2)} S.   (2.25)

Let P = 1 − 2A. Then P² = 1. We have A = (1 − P)/2 and S = (1 + P)/2. The idempotents A and S are often given by a matrix P satisfying P² = 1. The relation (2.20) in terms of P takes the form

M^{(1)} M^{(2)} − P M^{(1)} M^{(2)} + M^{(1)} M^{(2)} P − P M^{(1)} M^{(2)} P = 0.   (2.26)

Remark 2.15.
The notations A, S, P and the sign of P are chosen according to the basic case: the algebra of commutative polynomials. In this case the roles of A, S and P are played by the antisymmetrizer, the symmetrizer and the permutation matrix respectively. See Subsection 3.1 for details.

Consider some trivial examples. For A = 0 we have P = S = 1. The algebra X_0(C) = C⟨x_1, ..., x_n⟩ is the algebra of all the non-commutative polynomials in the variables x_1, ..., x_n (without any relations). The algebra Ξ_0(C) is defined by the relations ψ_i ψ_j = 0, i, j = 1, ..., n. As a linear space it has the form Ξ_0(C) = C ⊕ Cψ_1 ⊕ ... ⊕ Cψ_n. For A = 1 we have P = −1, S = 0, X_1(C) = C ⊕ Cx_1 ⊕ ... ⊕ Cx_n ≅ Ξ_0(C) and Ξ_1(C) = C⟨ψ_1, ..., ψ_n⟩ ≅ X_0(C). In both cases the relation defining Manin matrices has the form 0 = 0, so that any n × n matrix M is a Manin matrix for the idempotent A = 0 as well as for the idempotent A = 1.

Due to Propositions 2.12 and 2.14 the notion of an A-Manin matrix does not change if we substitute A by a left-equivalent idempotent A′. This means that M is an A-Manin matrix iff it is an A′-Manin matrix. This implies that A-Manin matrices are associated to the quadratic algebras X_A(C) and Ξ_A(C) (with fixed generators).

More generally, the definition of an A-Manin matrix can be written in the form

Ã M^{(1)} M^{(2)} (1 − A′) = 0,   (2.27)

where Ã = G̃A, A′ = GA and G̃, G ∈ Aut(C^n ⊗ C^n) are such that A′ is an idempotent (i.e. AGA = A). The most general form is

Ã M^{(1)} M^{(2)} S̃ = 0,   (2.28)

where Ã = G̃A and S̃ = SG for any G̃, G ∈ Aut(C^n ⊗ C^n).

2.4 (A, B)-Manin matrices

Let A ∈ End(C^n ⊗ C^n) and B ∈ End(C^m ⊗ C^m) be two idempotents and let R be an algebra. Let x_1, ..., x_m be the generators of X_B(C) and ψ_1, ..., ψ_n the generators of Ξ_A(C). They satisfy the relations B(X ⊗ X) = 0 and (Ψ ⊗ Ψ)(1 − A) = 0, where X = Σ_{j=1}^m x_j e_j and Ψ = Σ_{k=1}^n ψ_k e^k. Let M be an n × m matrix with entries M_{ij} ∈ R.
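As an illustration of the defining relation (2.20) in the simplest classical case (a sketch of ours, not taken from the text; the classical idempotent is the one discussed in Subsection 3.1), one can check symbolically that for n = 2 and A the antisymmetrizer on C² ⊗ C² the non-zero entries of A M^{(1)} M^{(2)} (1 − A) are exactly the classical Manin relations [a, c] = 0, [b, d] = 0, [a, d] = [c, b]:

```python
import sympy as sp

# Sketch (ours): n = 2, A = (1 - P)/2 the antisymmetrizer on C^2 (x) C^2,
# where P is the permutation operator P(u (x) v) = v (x) u, as in (2.26).
# We verify that the entries of A M^(1) M^(2) (1 - A) are spanned by the
# classical Manin relations [a,c] = 0, [b,d] = 0, [a,d] = [c,b] of [CF].

def kron(X, Y):
    """Kronecker product of explicit sympy matrices, keeping the order of
    the (non-commutative) entry products X[i,j] * Y[k,l]."""
    n, m = X.shape
    p, q = Y.shape
    Z = sp.zeros(n * p, m * q)
    for i in range(n):
        for j in range(m):
            for k in range(p):
                for l in range(q):
                    Z[i * p + k, j * q + l] = X[i, j] * Y[k, l]
    return Z

a, b, c, d = sp.symbols('a b c d', commutative=False)
M = sp.Matrix([[a, b], [c, d]])
I2, I4 = sp.eye(2), sp.eye(4)

P = sp.Matrix([[1, 0, 0, 0],
               [0, 0, 1, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1]])
A = (I4 - P) / 2

# M^(1) = M (x) 1 and M^(2) = 1 (x) M in the tensor notation (2.1)
E = (A * kron(M, I2) * kron(I2, M) * (I4 - A)).applyfunc(sp.expand)

# the only non-zero rows are multiples of the three Manin relations
assert sp.expand(E[1, 0] - (a*c - c*a) / 2) == 0
assert sp.expand(E[1, 3] - (b*d - d*b) / 2) == 0
assert sp.expand(E[1, 1] - (a*d - d*a - c*b + b*c) / 4) == 0
print("relation (2.20) encodes exactly the classical Manin relations")
```

So for this idempotent the matrix equation (2.20) vanishes identically precisely modulo the three commutator relations, consistent with the classical case recalled in Subsection 3.1.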
Consider a 'change of variables'

y_i = Σ_{j=1}^m M_{ij} x_j ∈ X_B(R),   i = 1, ..., n,   (2.29)
φ_j = Σ_{i=1}^n ψ_i M_{ij} ∈ Ξ_A(R),   j = 1, ..., m.   (2.30)

Consider the n × 1 and 1 × m matrices Y = Σ_{i=1}^n y_i e_i and Φ = Σ_{j=1}^m φ_j e^j. Then one can rewrite (2.29), (2.30) in the matrix form: Y = MX and Φ = ΨM. Now let us generalise Proposition 2.14 to this case.

Proposition 2.16.
The following three conditions are equivalent:

A M^{(1)} M^{(2)} (1 − B) = 0,   (2.31)
A(Y ⊗ Y) = 0,   (2.32)
(Φ ⊗ Φ)(1 − B) = 0.   (2.33)

The proof of this proposition repeats the proof of Proposition 2.14 with B instead of A in the appropriate places.

Definition 2.17. An n × m matrix M over an algebra R satisfying the equation (2.31) is called an (A, B)-Manin matrix, or a Manin matrix corresponding to the pair of idempotents (A, B).

The defining relation for the (A, B)-Manin matrices can also be written in terms of the dual idempotents S and the matrices P. For instance, the relation (2.31) is equivalent to (2.26), where the matrices P placed to the left of the matrices M should be understood as 1 − 2A and the matrices P placed to the right as 1 − 2B. In a similar way one can generalise the relation (2.25).

Proposition 2.18.
Let A be an algebra and M be an ( A, B ) -Manin matrix with entries M ij ∈ A . Let x , . . . , x m ∈ A be elements commuting with all the entries M ij and satisfying m P i,j =1 B kl,ij x i x j = 0 , k, l = 1 , . . . , m , (in particular, it is valid for the algebra A = X B ( R ) ,the generators x i ∈ X B ( C ) and an ( A, B ) -Manin matrix M over R ). Then the elements y i = m P j =1 M ij x j satisfy n P i,j =1 A kl,ij y i y j = 0 , k, l = 1 , . . . , n . Proof.
We use the matrix notation introduced above. From Y = MX and (2.31) we obtain

A(Y ⊗ Y) = A M^{(1)} M^{(2)} (X ⊗ X) = A M^{(1)} M^{(2)} B (X ⊗ X).

The right hand side vanishes since B(X ⊗ X) = 0.

Note that if M is an (A, B)-Manin matrix then M^⊤ is a (1 − B^⊤, 1 − A^⊤)-Manin matrix. By means of the identification of the functors Ξ_A and X_{1−A^⊤} we can write Proposition 2.18 in the following form.

Proposition 2.19.
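The first equality in this computation, A(Y ⊗ Y) = A M^{(1)} M^{(2)} (X ⊗ X), uses only that the x_j commute with the entries M_ij. A quick symbolic check of this step (an illustration assuming sympy; the x_j are declared commutative, so they pass freely through the noncommutative M_ij):

```python
import sympy as sp

M11, M12, M21, M22 = sp.symbols('M11 M12 M21 M22', commutative=False)
x1, x2 = sp.symbols('x1 x2')                 # commute with everything
M = sp.Matrix([[M11, M12], [M21, M22]])
X = sp.Matrix([x1, x2])
Y = M * X                                    # y_i = sum_j M_ij x_j

def vec2(U, V):
    # (U ⊗ V)_{(ik)} = U_i V_k, factors kept in this order
    return sp.Matrix([U[i] * V[k] for i in range(2) for k in range(2)])

def op1(T):   # T^(1) = T ⊗ 1
    return sp.Matrix(4, 4, lambda r, c: T[r // 2, c // 2] if r % 2 == c % 2 else 0)

def op2(T):   # T^(2) = 1 ⊗ T
    return sp.Matrix(4, 4, lambda r, c: T[r % 2, c % 2] if r // 2 == c // 2 else 0)

lhs = vec2(Y, Y)
rhs = op1(M) * op2(M) * vec2(X, X)
assert sp.expand(lhs - rhs) == sp.zeros(4, 1)   # Y ⊗ Y = M^(1) M^(2) (X ⊗ X)
```

Applying any operator A on the left then gives the first equality of the proof verbatim.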
Let A be an algebra and M be an (A, B)-Manin matrix with entries M_ij ∈ A. Let ψ_1, …, ψ_n ∈ A be elements commuting with all the entries M_ij and satisfying Σ_{i,j=1}^n A_{ij,kl} ψ_i ψ_j = ψ_k ψ_l, k, l = 1, …, n (in particular, this is valid for the algebra A = Ξ_A(R), the generators ψ_i ∈ Ξ_A(C) and an (A, B)-Manin matrix M over R). Then the elements φ_j = Σ_{i=1}^n M_ij ψ_i satisfy Σ_{i,j=1}^m B_{ij,kl} φ_i φ_j = φ_k φ_l, k, l = 1, …, m.

For arbitrary operators A′ ∈ End(C^n ⊗ C^n) and S′ ∈ End(C^m ⊗ C^m) (not necessarily idempotents) the relation A′ M^{(1)} M^{(2)} S′ = 0 is equivalent to (G̃A′) M^{(1)} M^{(2)} (S′G) = 0, where G̃ ∈ Aut(C^n ⊗ C^n) and G ∈ Aut(C^m ⊗ C^m). Hence it is equivalent to (2.31) for some idempotents A and B, which have the forms A = G̃A′ and B = 1 − S′G. In particular, if A′ is an idempotent it is left-equivalent to the idempotent A; if S′ is an idempotent it is right-equivalent to 1 − B, which means that 1 − S′ is left-equivalent to B (see Proposition 2.7). Thus we obtain the following statement.

Proposition 2.20. Let M be an n × m matrix over R. Let A, A′ ∈ End(C^n ⊗ C^n) and B, B′ ∈ End(C^m ⊗ C^m) be idempotents. Suppose A is left-equivalent to A′ and B is left-equivalent to B′ (equivalently, 1 − A is right-equivalent to 1 − A′ and 1 − B is right-equivalent to 1 − B′). Then M is an (A, B)-Manin matrix iff it is an (A′, B′)-Manin matrix.

We see from this proposition that the property of the matrix M to be an (A, B)-Manin matrix effectively depends on the algebras X_A(C) = X_{A′}(C) and X_B(C) = X_{B′}(C) (with fixed generators). So the notion of (A, B)-Manin matrix can be associated with the pair of X-quadratic algebras X_A(C) and X_B(C). Alternatively it can be associated with the pair of Ξ-quadratic algebras Ξ_A(C) and Ξ_B(C). We will see this in Subsection 2.5 more explicitly.

Consider the question of permutation of rows and columns of a Manin matrix.
Let S_n be the symmetric group of degree n. For a permutation σ ∈ S_n let us permute the rows of an n × m matrix M in the following way: we put the i-th row in the place of the σ(i)-th row. We denote the obtained matrix by ^σM. By permuting the columns of M with a permutation σ ∈ S_m in the same way we obtain a matrix denoted by _σM. More explicitly, (^σM)_{σ(i),j} = M_ij and (_σM)_{i,σ(j)} = M_ij. We write the permutation index on the left since ^τ(^σM) = ^{τσ}M and _τ(_σM) = _{τσ}M for any τ, σ from S_n and S_m respectively. Note that the space C^n has a structure of S_n-module: a permutation σ ∈ S_n acts by the formula σe_i = e_{σ(i)}, and we denote the corresponding operator C^n → C^n by the same letter σ. In this notation we have ^σM = σM and _σM = Mσ^{-1}.

Proposition 2.21.
Let M be an n × m matrix over R. Let σ ∈ S_n and τ ∈ S_m or, more generally, σ ∈ GL(n, C), τ ∈ GL(m, C). Then the following statements are equivalent.

• M is an (A, B)-Manin matrix.
• ^σM = σM is a ((σ ⊗ σ)A(σ^{-1} ⊗ σ^{-1}), B)-Manin matrix.
• _τM = Mτ^{-1} is an (A, (τ ⊗ τ)B(τ^{-1} ⊗ τ^{-1}))-Manin matrix.

Proof.
Multiplication of the condition (2.31) by σ ⊗ σ from the left gives the relation (σ ⊗ σ)A(σ^{-1} ⊗ σ^{-1})(σM)^{(1)}(σM)^{(2)}(1 − B) = 0. By multiplying (2.31) by τ^{-1} ⊗ τ^{-1} from the right we obtain A(Mτ^{-1})^{(1)}(Mτ^{-1})^{(2)}(τ ⊗ τ)(1 − B)(τ^{-1} ⊗ τ^{-1}) = 0.

Proposition 2.21 can be interpreted in terms of a linear change of generators of the corresponding quadratic algebras. Indeed, consider other generators ψ̃_i = Σ_{j=1}^n α_ji ψ_j of the algebra Ξ_A(C), where α_ji are entries of an invertible matrix α ∈ GL(n, C). Let σ ∈ GL(n, C) be the inverse matrix: Σ_{i=1}^n α_ji σ_ik = δ_jk. By substituting ψ_k = Σ_{i=1}^n σ_ik ψ̃_i into (2.30) we obtain φ_j = Σ_{i=1}^n ψ̃_i (σM)_ij. The quadratic commutation relations for the generators ψ̃_i are Σ_{i,j} Ã_{ij,kl} ψ̃_i ψ̃_j = ψ̃_k ψ̃_l, where Ã_{ij,kl} are entries of the idempotent matrix Ã = (σ ⊗ σ)A(σ^{-1} ⊗ σ^{-1}). The formulae ψ̃_i = Σ_{j=1}^n α_ji ψ_j, ψ_k = Σ_{i=1}^n σ_ik ψ̃_i describe the isomorphism Ξ_A(C) ≅ Ξ_Ã(C). It corresponds to the change of basis in (C^n)^*, namely Ψ = Σ_{i=1}^n ψ_i e^i = Σ_{k=1}^n ψ̃_k ẽ^k, where ẽ^k = Σ_{i=1}^n σ_ki e^i. Since X_A(C) = Ξ_{1−A^⊤}(C) we have an isomorphism X_A(C) ≅ X_Ã(C), where A is an arbitrary idempotent and Ã = (σ ⊗ σ)A(σ^{-1} ⊗ σ^{-1}).

Analogously, the matrix Mτ^{-1} corresponds to the new generators x̃_i = Σ_{j=1}^m τ_ij x_j of the algebra X_B(C), where τ_ij are entries of τ ∈ GL(m, C). The transition to these generators corresponds to a change of basis in C^m. The new basis elements are ẽ_j = Σ_{i=1}^m β_ij e_i, where the β_ij are such that Σ_{j=1}^m β_ij τ_jk = δ_ik. The formulae x̃_i = Σ_{j=1}^m τ_ij x_j, x_k = Σ_{i=1}^m β_ki x̃_i define an isomorphism X_B(C) ≅ X_B̃(C), where B̃ = (τ ⊗ τ)B(τ^{-1} ⊗ τ^{-1}).

Thus the notion of (A, B)-Manin matrix is not associated with a pair of isomorphism classes of quadratic algebras.
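For matrices over a commutative ring, the equivalence in Proposition 2.21 reduces to the operator identity (σ⊗σ)A(σ^{-1}⊗σ^{-1}) · (σM)^{(1)}(σM)^{(2)} · (1 − B) = (σ⊗σ) · A M^{(1)} M^{(2)} (1 − B), which can be tested numerically. A hedged numpy sketch (random orthogonal projectors stand in for the idempotents A and B; the sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def projector(N, r):
    # random rank-r idempotent (orthogonal projector) on C^N
    Q, _ = np.linalg.qr(rng.normal(size=(N, r)))
    return Q @ Q.T

n, m = 3, 4
A = projector(n * n, 4)
B = projector(m * m, 7)
M = rng.normal(size=(n, m))                     # commuting (numeric) entries
sig = rng.normal(size=(n, n)) + n * np.eye(n)   # generic invertible sigma
sig_inv = np.linalg.inv(sig)

In, Im = np.eye(n), np.eye(m)
A_t = np.kron(sig, sig) @ A @ np.kron(sig_inv, sig_inv)
sM = sig @ M

lhs = A_t @ np.kron(sM, In) @ np.kron(Im, sM) @ (np.eye(m * m) - B)
rhs = np.kron(sig, sig) @ (A @ np.kron(M, In) @ np.kron(Im, M) @ (np.eye(m * m) - B))

assert np.allclose(A @ A, A)        # A is indeed idempotent
assert np.allclose(lhs, rhs)        # transformed relation = (sigma x sigma) * original
```

Since σ ⊗ σ is invertible, the transformed relation vanishes exactly when the original one does, which is the content of the proposition in this commutative setting.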
However, an isomorphism of quadratic algebras gives a transformation of the Manin matrix in the same way as changes of bases of vector spaces V and W transform the matrix of a linear operator V → W. The notion associated with isomorphism classes of quadratic algebras is the notion of Manin operator, which we will introduce in Subsection 2.7.

Finally, consider the case when A = 0 or B = 1: in both cases the defining relation (2.31) reads 0 = 0. This means that any n × m matrix is a (0, B)-Manin matrix as well as an (A, 1)-Manin matrix, where 0 ∈ End(C^n ⊗ C^n) and 1 ∈ End(C^m ⊗ C^m). Further, the definition (2.31) for B = 0 ∈ End(C^m ⊗ C^m) has the form A M^{(1)} M^{(2)} = 0. By multiplying it by (1 − B) from the right with an arbitrary idempotent B ∈ End(C^m ⊗ C^m) we obtain (2.31). Thus any (A, 0)-Manin matrix is also an (A, B)-Manin matrix. Analogously, any (1, B)-Manin matrix is also an (
A, B )-Manin matrix, where 1 ∈ End( C n ⊗ C n ). Consider first the set Hom (cid:0) X A ( C ) , X B ( R ) (cid:1) with A and B as above. It consists of algebrahomomorphisms f : X A ( C ) → X B ( R ) preserving the grading. Let x A , . . . , x nA and x B , . . . , x mB be the generators of X A ( C ) and X B ( C ) respectively, they satisfy P ni,j =1 A kl,ij x iA x jA = 0 and P mi,j =1 B kl,ij x iB x jB = 0. To give a homomorphism f ∈ Hom (cid:0) X A ( C ) , X B ( R ) (cid:1) it is enough togive its value on the generators x iA ; since f ( x iA ) has degree 1 in X B ( C ) it has the form f ( x iA ) = m X j =1 M ij x jB , (2.34)where M ij ∈ R . This means that each f is given by a matrix M = ( M ij ) over R . Proposi-tion 2.14 implies that the formula (2.34) defines a homomorphism f iff the matrix M = ( M ij )is an ( A, B )-Manin matrix. Denote this homomorphism f : X A ( C ) → X B ( R ) by f M . Thus weobtain a bijection f M ↔ M between Hom (cid:0) X A ( C ) , X B ( R ) (cid:1) and the set of all ( A, B )-Maninmatrices over the algebra R . Note that this bijection depends on the choice of generators ofthe algebras X A ( C ) and X B ( C ).More generally, consider the set Hom (cid:0) X A ( S ) , X B ( R ) (cid:1) for two algebras S and R . It canbe identified with a subset of Hom( S , R ) × Hom (cid:0) X A ( C ) , X B ( R ) (cid:1) , since each graded homo-morphism f : X A ( S ) → X B ( R ) is given on the zero-degree elements s ∈ S and the generators x iA ∈ X A ( C ). Let α : S → R be an algebra homomorphism and f M ∈ Hom (cid:0) X A ( C ) , X B ( R ) (cid:1) be a graded algebra homomorphism defined by an ( A, B )-Manin matrix M = ( M ij ), then16he formulae f ( s ) = α ( s ) , s ∈ S , f ( x iA ) = f M ( x iA ) = m X j =1 M ij x jB , i = 1 , . . . , n, (2.35)define a homomorphism f : X A ( S ) → X B ( R ) iff α ( s ) M ij = M ij α ( s ) for all s ∈ S , i = 1 , . . . , n and j = 1 , . . . , m . We write f = ( α, f M ) in this case. 
We have

Hom(X_A(S), X_B(R)) = { (α, f_M) ∈ Hom(S, R) × Hom(X_A(C), X_B(R)) | [α(s), M_ij] = 0 ∀ s, i, j }.    (2.36)

Analogously, we obtain that the set Hom(Ξ_B(C), Ξ_A(R)) consists of the algebra homomorphisms f_M : Ξ_B(C) → Ξ_A(R) defined on generators as f_M(ψ_j^B) = Σ_{i=1}^n M_ij ψ_i^A, where the M_ij ∈ R are entries of an (A, B)-Manin matrix M = (M_ij). More generally,

Hom(Ξ_B(S), Ξ_A(R)) = { (α, f_M) ∈ Hom(S, R) × Hom(Ξ_B(C), Ξ_A(R)) | [α(s), M_ij] = 0 ∀ s, i, j }.

Definition 2.22. [MacLane] Let C and D be categories, c be an object of C and G : D → C be a functor. The comma category (c ↓ G) consists of the pairs (d, f), where d is an object of D and f ∈ Hom_C(c, Gd) is a morphism in C. A morphism in (c ↓ G) from (d, f) to (d′, f′) is a morphism h ∈ Hom_D(d, d′) such that the triangle (2.37) formed by the arrows f : c → Gd, f′ : c → Gd′ and Gh : Gd → Gd′ is commutative, that is Gh ∘ f = f′.

Let A be the category of associative unital algebras over C and G be the category of associative unital N-graded algebras over C. By setting C = G, D = A, c = X_A(C) and G = X_B in Definition 2.22 we obtain the comma category (X_A(C) ↓ X_B). It consists of the pairs (R, f_M), where R is an algebra and f_M ∈ Hom(X_A(C), X_B(R)) is the homomorphism corresponding to some (A, B)-Manin matrix M over R. Thus we can interpret the (A, B)-Manin matrices as objects of the comma category (X_A(C) ↓ X_B).

The morphisms in (X_A(C) ↓ X_B) from (R, f_M) to (R′, f_M′) are all the homomorphisms h : R → R′ such that h(M_ij) = M′_ij, where M_ij and M′_ij are entries of the corresponding (A, B)-Manin matrices M and M′.
Two (A, B)-Manin matrices M ∈ R ⊗ Hom(C^m, C^n) and M′ ∈ R′ ⊗ Hom(C^m, C^n) are isomorphic to each other iff there exists an isomorphism h : R → R′ such that h(M_ij) = M′_ij (note that (A, B)-Manin matrices which have the same entries but are formally considered over different algebras may be non-isomorphic).

In the same way the (
A, B)-Manin matrices can be interpreted as objects of the comma category (Ξ_B(C) ↓ Ξ_A), so the latter is equivalent to (X_A(C) ↓ X_B).

One of the applications of comma categories in category theory is the universal morphism (universal arrow).

Definition 2.23. [MacLane] Let C and D be categories, c be an object of C and G : D → C be a functor. The universal morphism from c to G is the initial object of the comma category (c ↓ G), that is, a pair (r, u) of an object r ∈ D and a morphism u ∈ Hom_C(c, Gr) such that for any object d ∈ D and any morphism f ∈ Hom_C(c, Gd) there is a unique morphism h ∈ Hom_D(r, d) making the diagram (2.38) with arrows u : c → Gr, f : c → Gd and Gh : Gr → Gd commutative, that is Gh ∘ u = f.

As an initial object, the universal morphism is unique up to an isomorphism. Let us describe the universal morphism (U_{A,B}, u
_{A,B}) from the object X_A(C) to the functor X_B. It consists of the algebra U_{A,B} generated by the elements M_ij, i = 1, …, n, j = 1, …, m, with the commutation relations A M^{(1)} M^{(2)} (1 − B) = 0, where M is the n × m matrix with entries M_ij. Due to Definition 2.17 it is an (A, B)-Manin matrix. It defines a homomorphism u_{A,B} = f_M : X_A(C) → X_B(U_{A,B}). One can check that (U_{A,B}, u
_{A,B}) is the universal morphism from X_A(C) to X_B: for any morphism f = f_M : X_A(C) → X_B(R) corresponding to an (A, B)-Manin matrix M with entries M_ij ∈ R there is a unique homomorphism h : U_{A,B} → R such that f = X_B(h) u_{A,B}; namely, h sends the generators M_ij of U_{A,B} to the corresponding entries M_ij ∈ R. Via the equivalence of the comma categories (X_A(C) ↓ X_B) and (Ξ_B(C) ↓ Ξ_A) we obtain the universal morphism from the object Ξ_B(C) to the functor Ξ_A; this is the pair (U_{A,B}, ũ_{A,B}), where ũ_{A,B} = f_M : Ξ_B(C) → Ξ_A(U_{A,B}).

Let us call the matrix M the universal (A, B)-Manin matrix. The algebra U_{A,B} is a generalisation of the right quantum algebra [GLZ], so we call it the right quantum algebra for the pair of idempotents (A, B).

Consider the more general comma category (X_A(S) ↓ X_B). It consists of pairs (R, f) of an algebra R and a homomorphism f ∈ Hom(X_A(S), X_B(R)). Such a homomorphism f can be identified with a pair of homomorphisms in the sense of the formulae (2.35), (2.36). If f = (α, f_M) we also write (R, f) = (R, α, f_M). A morphism h : (R, α, f_M) → (R′, α′, f_M′) is a homomorphism h : R → R′ such that hα = α′ and h(M_ij) = M′_ij.

The initial object in the comma category (X_A(S) ↓ X_B) is the universal morphism (S ⊗ U_{A,B}, ι, u
_{A,B}), where ι : S ↪ S ⊗ U_{A,B} is the inclusion. A homomorphism f = (α, f_M) : X_A(S) → X_B(R) gives the unique homomorphism h : S ⊗ U_{A,B} → R defined by the formulae h(s) = α(s) and h(M_ij) = M_ij and making the diagram (2.39) with arrows (ι, u_{A,B}) : X_A(S) → X_B(S ⊗ U_{A,B}), f : X_A(S) → X_B(R) and X_B(h) : X_B(S ⊗ U_{A,B}) → X_B(R) commutative.

Now let us calculate a composition

X_A(S) --(α, f_M)--> X_B(R) --(β, f_N)--> X_C(T),    (2.40)

where C ∈ End(C^k ⊗ C^k) is an idempotent, T is an algebra, β : R → T is a homomorphism and f_N : X_B(C) → X_C(T) is defined by a (B, C)-Manin matrix N = (N_jl). On the elements s ∈ S we have (β, f_N)(α, f_M)(s) = β(α(s)). On the generators x_A^i ∈ X_A(C) we obtain (β, f_N)(α, f_M)(x_A^i) = (β, f_N)(Σ_{j=1}^m M_ij x_B^j) = Σ_{j=1}^m Σ_{l=1}^k β(M_ij) N_jl x_C^l. Thus,

(β, f_N)(α, f_M) = (βα, f_K),    (2.41)

where K = β(M)N is the n × k matrix with entries K_il = Σ_{j=1}^m β(M_ij) N_jl ∈ T. Since (2.41) is a morphism, the matrix K is an (A, C)-Manin matrix.

Recall also (see [MacLane]) that a functor F : C → D is called left adjoint to the functor G : D → C (while G : D → C is called right adjoint to F : C → D) iff there is an isomorphism Hom_C(c, Gd) ≅ Hom_D(Fc, d) natural in c and d. To construct a left adjoint functor to a functor G : D → C it is enough to construct a universal morphism (r_c, u_c) from each c ∈ C to G. Then the left adjoint functor on objects is defined as Fc = r_c. For a morphism α : c → c′ the morphism Fα : Fc → Fc′ is the unique morphism h : Fc → Fc′ making the square (2.42) with arrows u_c : c → GFc, α : c → c′, u_{c′} : c′ → GFc′ and Gh : GFc → GFc′ commutative, that is Gh ∘ u_c = u_{c′} ∘ α (as a consequence, we obtain a natural transformation u : id_C → GF with components u_c : c → GFc).

Let Q be the full subcategory of G consisting of the graded algebras of the form X_A(S). This is the category of quadratic algebras.
We can consider the functors X_A and Ξ_A as functors from A to Q. This means that we can substitute G by Q in the previous considerations. From the diagram (2.39) we see that the functor G = X_B : A → Q has a left adjoint functor F = F_B : Q → A. It is defined on objects as F_B(X_A(S)) = S ⊗ U_{A,B}. In particular, on the quadratic algebras X_A(C) ∈ Q the functor F_B gives the right quantum algebra U_{A,B}.

Let us calculate the functor F_B on a morphism (α, f_N) : X_A(S) → X_{A′}(S′), where α : S → S′ is a homomorphism and f_N is defined by an (A, A′)-Manin matrix N = (N_ij). We need to construct a commutative diagram (2.43) with arrows (ι, u_{A,B}) : X_A(S) → X_B(S ⊗ U_{A,B}), (α, f_N) : X_A(S) → X_{A′}(S′), (ι′, u_{A′,B}) : X_{A′}(S′) → X_B(S′ ⊗ U_{A′,B}), f : X_A(S) → X_B(S′ ⊗ U_{A′,B}) and X_B(h) : X_B(S ⊗ U_{A,B}) → X_B(S′ ⊗ U_{A′,B}). Recall that u_{A,B} = f_M and u_{A′,B} = f_{M′}, where M′ is the universal (A′, B)-Manin matrix. By the formula (2.41) we have f = (ι′α, f_K), where K = NM′. Thus we obtain the homomorphism F_B(α, f_N) = h : S ⊗ U_{A,B} → S′ ⊗ U_{A′,B} defined by the formulae

h(s) = α(s), s ∈ S,    h(M_ij) = Σ_l N_il M′_lj.    (2.44)

The left adjoint to the functor Ξ_A : A → Q is the functor F̃_A : Q → A defined on objects as F̃_A(Ξ_B(S)) = S ⊗ U_{A,B}. The value of F̃_A on (α, f_N) ∈ Hom(Ξ_B(S), Ξ_{B′}(S′)) is the homomorphism h : S ⊗ U_{A,B} → S′ ⊗ U_{A,B′} such that h(s) = α(s) and h(M_ij) = Σ_l M′_il N_lj, where M = (M_ij) and M′ = (M′_il) are the universal (A, B)- and (
A, B ′ )-Manin matrices.In particular, for S = C we have the natural bijectionsHom (cid:0) X A ( C ) , X B ( R ) (cid:1) = Hom (cid:0) Ξ B ( C ) , Ξ A ( R ) (cid:1) = Hom( U A,B , R ) . (2.45)This means that the set of ( A, B )-Manin matrices over R is identified with the set of algebrahomomorphisms f : U A,B → R . Namely, each ( A, B )-Manin matrix M = ( M ij ) has the form M ij = f ( M ij ) for some homomorphism f : U A,B → R and any homomorphism f : U A,B → R gives an ( A, B )-Manin matrix M . In terms of non-commutative geometry [Man91] thismeans that an ( A, B )-Manin matrix over R is an R -point of the algebra U A,B . Proposition 2.24.
Let A ∈ End(C^n ⊗ C^n) and A′ ∈ End(C^m ⊗ C^m) be idempotents. Then the following three statements are equivalent.

• X_A(C) ≅ X_{A′}(C) as graded algebras.
• Ξ_A(C) ≅ Ξ_{A′}(C) as graded algebras.
• n = m and there exists σ ∈ GL(n, C) such that A is left-equivalent to the idempotent (σ ⊗ σ)A′(σ^{-1} ⊗ σ^{-1}) (that is, A′ = G(σ^{-1} ⊗ σ^{-1})A(σ ⊗ σ) for some G ∈ Aut(C^n ⊗ C^n) and σ ∈ Aut(C^n)).

Proof. If A is left-equivalent to Ã = (σ ⊗ σ)A′(σ^{-1} ⊗ σ^{-1}), then X_A(C) = X_Ã(C) ≅ X_{A′}(C) and Ξ_A(C) = Ξ_Ã(C) ≅ Ξ_{A′}(C) (see Subsection 2.4). Conversely, let f : X_A(C) → X_{A′}(C) be an isomorphism and f^{-1} : X_{A′}(C) → X_A(C) be its inverse. They correspond to complex (A, A′)- and (A′, A)-Manin matrices σ ∈ Hom(C^m, C^n) and σ^{-1} ∈ Hom(C^n, C^m) respectively. Due to the invertibility of these matrices we must have n = m. The relations A(σ ⊗ σ)(1 − A′) = 0 and A′(σ^{-1} ⊗ σ^{-1})(1 − A) = 0 imply A(1 − Ã) = 0 and Ã(1 − A) = 0. By virtue of Lemma 2.11 the idempotents A and Ã are left-equivalent. One can similarly prove that the second statement implies the third one.

Remark 2.25.
In the same way one can prove that the quadratic algebras X_A(S) and X_{A′}(R) are isomorphic as graded algebras iff n = m, S ≅ R and there exist invertible matrices M ∈ Z(R) ⊗ End(C^n) and G ∈ R ⊗ End(C^n ⊗ C^n) such that A′ = GA(M ⊗ M), where Z(R) = {z ∈ R | [z, r] = 0 ∀ r ∈ R} is the centre of R. The isomorphisms are (α, f_M) : X_A(S) → X_{A′}(R) and (α^{-1}, f_N) : X_{A′}(R) → X_A(S), where α : S → R is an isomorphism in A and N = α^{-1}(M^{-1}) ∈ Z(S) ⊗ End(C^n).

The main property of Manin matrices is that their product is also a Manin matrix, under a suitable commutativity condition.
Proposition 2.26.
Let R be an algebra. Let A ∈ End( C n ⊗ C n ) , B ∈ End( C m ⊗ C m ) and C ∈ End( C k ⊗ C k ) be idempotents. Let M and N be n × m and m × k matrices over R .Suppose that M and N are ( A, B ) - and ( B, C ) -Manin matrices respectively and that [ M ij , N j ′ l ] = 0 ∀ i = 1 , . . . , n, j, j ′ = 1 , . . . , m, l = 1 , . . . , k. (2.46) Then the n × k matrix K = M N is an ( A, C ) -Manin matrix over R . Proof.
Since M is an ( A, B )-Manin matrix we have AM (1) M (2) = AM (1) M (2) B . Thecommutativity condition (2.46) implies that N (1) M (2) = M (2) N (1) , hence AK (1) K (2) (1 − C ) = AM (1) N (1) M (2) N (2) (1 − C ) = AM (1) M (2) N (1) N (2) (1 − C ) = AM (1) M (2) BN (1) N (2) (1 − C ) . The right hand side vanishes since N is a ( B, C )-Manin matrix.
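When all entries are scalars (so that the commutativity assumption (2.46) holds trivially), the key interchange N^{(1)} M^{(2)} = M^{(2)} N^{(1)} and the factorisation K^{(1)} K^{(2)} = M^{(1)} N^{(1)} M^{(2)} N^{(2)} used in this proof become Kronecker-product identities that can be checked numerically. A hedged numpy sketch (the sizes n, m, k are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 2, 3, 4
M = rng.normal(size=(n, m))     # plays the role of an (A,B)-Manin matrix
N = rng.normal(size=(m, k))     # plays the role of a (B,C)-Manin matrix
K = M @ N

In, Im, Ik = np.eye(n), np.eye(m), np.eye(k)

# N^(1) M^(2) = M^(2) N^(1): both sides equal N (x) M
left  = np.kron(N, In) @ np.kron(Ik, M)
right = np.kron(Im, M) @ np.kron(N, Im)
assert np.allclose(left, np.kron(N, M))
assert np.allclose(right, np.kron(N, M))

# K^(1) K^(2) = M^(1) N^(1) M^(2) N^(2): both sides equal K (x) K
lhs = np.kron(K, In) @ np.kron(Ik, K)
rhs = (np.kron(M, In) @ np.kron(N, In)
       @ np.kron(Ik, M) @ np.kron(Ik, N))
assert np.allclose(lhs, rhs)
```

In the non-commutative setting these manipulations are justified precisely by the condition (2.46), which lets the entries of N in the first tensor factor pass through the entries of M in the second.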
Remark 2.27.
Proposition 2.26 can also be proved by using the functors X_A or Ξ_A. In the case of X_A one needs to consider the elements y_j = Σ_l N_jl x_l and z_i = Σ_j M_ij y_j of the algebra A = X_C(R), where x_l = x_C^l. By applying Proposition 2.18 twice we obtain the relations Σ_{k,l} A_{ij,kl} z_k z_l = 0. Since z_i = Σ_l K_il x_l, the matrix K is an (A, C)-Manin matrix by virtue of Proposition 2.16. Analogously, one can apply Propositions 2.19 and 2.16 to the elements φ_j = Σ_i M_ij ψ_i, χ_l = Σ_j N_jl φ_j of the algebra Ξ_A(R).

In a particular case the property claimed in Proposition 2.26 was already deduced in Subsection 2.5 (see the formula (2.41) and the text after it). The role of R is played by the algebra T. The homomorphism β : R → T gives an R-algebra structure on T and maps the (A, B)-Manin matrix M = (M_ij) over R entry-wise to an (A, B)-Manin matrix β(M) = (β(M_ij)) over T. The entries of this matrix commute with the entries of N since [β(r), N_ij] = 0 for any r ∈ R.

The typical case of Proposition 2.26 is R = S ⊗ S′, M_ij ∈ S, N_jl ∈ S′. In this case it can be formulated in terms of comma categories as follows. Let (S, f_M) and (S′, f_N) be objects of the categories (X_A(C) ↓ X_B) and (X_B(C) ↓ X_C) corresponding to (A, B)- and (
B, C )-Manin matrices M and N . Then ( S ⊗ S ′ , f K ) is an object of (cid:0) X A ( C ) ↓ X C (cid:1) , where K = M N . In particular, for A = B = C we obtain a structure of tensor category on thecomma category (cid:0) X A ( C ) ↓ X A (cid:1) with the unit object ( C , id X A ( C ) ) corresponding to the unitmatrix.Proposition 2.26 can be formulated in terms of right quantum algebras. This formulationwas described in the works [Man87, Man88, Man91]. Proposition 2.28.
Let M , N and K be universal ( A, B ) -, ( B, C ) - and ( A, C ) -Manin ma-trices respectively. They have entries M ij ∈ U A,B , N ij ∈ U B,C and K ij ∈ U A,C . Then theformula ∆ A,B,C ( K il ) = m X j =1 M ij ⊗ N jl (2.47) defines a homomorphism ∆ A,B,C : U A,C → U A,B ⊗ U B,C . Proof.
Let R = U A,B ⊗ U B,C . The matrix e K with entries e K il = ∆ A,B,C ( K il ) ∈ R is theproduct of the matrices M and N . We need to check that A e K (1) e K (2) (1 − C ) = 0. The lastequality means that e K is an ( A, C )-Manin matrix, but this follows from Proposition 2.26.Note that in terms of functors F A , e F A : Q → A constructed in Subsection 2.5 (as leftadjoint functors to the functors X A , Ξ A : A → Q ) we have ∆
_{A,B,C} = F_C(u_{A,B}) = F̃_A(ũ_{B,C}), where u_{A,B} = f_M : X_A(C) → X_B(U_{A,B}) and ũ_{B,C} = f_N : Ξ_C(C) → Ξ_B(U_{B,C}) are the corresponding universal homomorphisms.

Consider the algebra U_A = U_{A,A}. Proposition 2.28 implies that the map ∆_A = ∆_{A,A,A} is a homomorphism U_A → U_A ⊗ U_A. Moreover, it is easy to check that ∆_A is a comultiplication, that is (id ⊗ ∆_A)∆_A = (∆_A ⊗ id)∆_A. More generally, we have the following commutative diagram:

U_{A,D} ----∆_{A,B,D}----> U_{A,B} ⊗ U_{B,D}
   |                               |
∆_{A,C,D}                   id ⊗ ∆_{B,C,D}
   v                               v
U_{A,C} ⊗ U_{C,D} --∆_{A,B,C} ⊗ id--> U_{A,B} ⊗ U_{B,C} ⊗ U_{C,D}    (2.48)

which reflects the associativity of the matrix multiplication.

Proposition 2.29.
The algebra U A has a bialgebra structure defined by the following comul-tiplication ∆ A : U A → U A ⊗ U A and counit ε A : U A → C : ∆ A ( M ik ) = n X j =1 M ij ⊗ M jk , ε A ( M ij ) = δ ij . (2.49)The formula (2.34) for the universal A -Manin matrix M gives a coaction of the bialgebra U A on the algebra X A ( C ). This is a homomorphism δ = f M : X A ( C ) → U A ⊗ X A ( C ) definedas δ ( x i ) = n X j =1 M ij ⊗ x j . (2.50)It satisfies the coaction axiom (id ⊗ δ ) δ = (∆ ⊗ id) δ . In terms of non-commutative geometrythe algebra X A ( C ) is interpreted as an algebra of functions on a non-commutative space andthe coaction δ as an action on this space. Thus the bialgebra U A (or its dual) plays therole of algebra of endomorphisms of a non-commutative space corresponding to the algebra X A ( C ).More generally, for arbitrary A, B, C we have (id ⊗ u B,C ) u A,B = (∆
A,B,C ⊗ id) u A,C and(id ⊗ e u A,B ) e u B,C = (∆ o pA,B,C ⊗ id) e u A,C , where ∆ o pA,B,C : U A,C → U B,C ⊗ U A,B is an algebra ho-momorphism defined as ∆ o pA,B,C ( K il ) = P mj =1 N jl ⊗ M ij . In particular, the homomorphism e u A,A : Ξ A ( C ) → U A ⊗ Ξ A ( C ) is a coaction on the algebra Ξ A ( C ) with respect to the comul-tiplication ∆ o pA = ∆ o pA,A,A : U A → U A ⊗ U A . Remark 2.30.
The bialgebra U_A is not a Hopf algebra: an antipode S for the bialgebra structure (2.49) should have the form S(M) = M^{-1}, but the matrix M is not invertible over U_A. One can extend the algebra U_A by adding new generators M̃_ij, the entries of the formal inverse matrix M̃ = M^{-1}. Then the matrix S(M)^⊤ should be inverse to M̃^⊤; however, its invertibility is not guaranteed in this extended algebra. The universal construction of a Hopf algebra extending the bialgebra U_A is the Hopf envelope [Man91]. This is the algebra H_A generated by the entries of the infinite series of matrices M_k, k ∈ N, with the relations

M_k^⊤ M_{k+1} = M_{k+1} M_k^⊤ = 1,
A M_{2k}^{(1)} M_{2k}^{(2)} (1 − A) = 0,    (1 − A^⊤) M_{2k+1}^{(2)} M_{2k+1}^{(1)} A^⊤ = 0,    k ∈ N.

Its Hopf structure is given by the formulae

∆_A((M_k)_{il}) = Σ_{j=1}^n (M_k)_{ij} ⊗ (M_k)_{jl},    ε_A(M_k) = 1,    S(M_k) = M_{k+1}^⊤.    (2.51)

The algebra U_A is mapped to H_A by the formula M ↦ M_0. Note also that the matrix M̃ = M_0^{-1} = M_1^⊤ satisfies the relation

A M̃^{(2)} M̃^{(1)} (1 − A) = 0.    (2.52)

The complex square and rectangular matrices are often interpreted as homomorphisms in a category with objects n ∈ N and homomorphisms Hom(m, n) = Mat_{n×m}(C) (it is equivalent to the category of finite-dimensional vector spaces). One can interpret the Manin matrices in a similar way. Let A′ ⊂ A be a small full tensor subcategory; this means that A′ is a full subcategory of A such that C ∈ A′, R ⊗ S ∈ A′ for all R, S ∈ A′, and Ob(A′) is a small set (in practice one needs only a small set of algebras, so we can take the full tensor subcategory generated by this set as A′). Define the following category M_{A′}. The objects of M_{A′} are all the idempotents A ∈ End(C^n ⊗ C^n), n ∈ N. The homomorphisms A → B are all the (A, B)-Manin matrices M over algebras R ∈ A′. In other words, we define Hom(A, B) = ⊔_{R∈A′} Hom(X_A(C), X_B(R)); these sets are small since A′ is small.
The composition ofhomomorphisms ( M, R ) ∈ Hom (cid:0) X A ( C ) , X B ( R ) (cid:1) and ( N, S ) ∈ Hom (cid:0) X B ( C ) , X C ( R ) (cid:1) wedefine as ( N M, R ⊗ S ). Due to Proposition 2.26 the product N M is an (
A, C )-Maninmatrix over R ⊗ S .If two idempotents are left-equivalent, then they are isomorphic as objects of M A ′ , so it isenough to take the quadratic algebras X A ( C ) as objects of M A ′ instead of the idempotents.Moreover, one can prove that X A ( C ) and X A ′ ( C ) are isomorphic as objects of M A ′ iff theyare isomorphic as objects of Q . Indeed, if ( R , f M ) ∈ Hom (cid:0) X A ( C ) , X B ( R ) (cid:1) is an isomorphism M A ′ and ( S , f N ) ∈ Hom (cid:0) X B ( C ) , X A ( S ) (cid:1) is its inverse then R ⊗ S = C , hence R = S = C and f M : X A ( C ) → X B ( C ), f N : X A ( C ) → X B ( C ) are isomorphisms in Q . Remark 2.31.
Due to Proposition 2.12 the formula X_A(C) ↦ Ξ_A(C) correctly defines an operation on the quadratic algebras A ∈ M_{A′}. It was denoted in the works of Manin [Man87, Man88, Man91] by A ↦ A^!. The first equality in (2.45) gives a contravariant fully faithful functor (·)^! : M_{A′} → M_{A′}. Since any object of M_{A′} is isomorphic to Ξ_A(C) for some idempotent A, this functor is an anti-autoequivalence of the category M_{A′}. The (quasi-)inverse of (·)^! is (·)^! itself.

Remark 2.32.
One can extend the category M_{A′} to a category M̂_{A′} by taking all the algebras X_A(S) ∈ Q as objects of M̂_{A′}. The sets of homomorphisms in M̂_{A′} can be defined as

Hom(X_A(S), X_B(S′)) = ⊔_{R∈A′} { (α, M) | α ∈ Hom(S, S′), (M, R) ∈ Hom(X_A(C), X_B(S′ ⊗ R)), [α(s), M_ij] = 0 ∀ s ∈ S }

with some natural composition rule. In these settings X_A(S) and X_{A′}(S′) are isomorphic as objects of M̂_{A′} iff they are isomorphic as objects of Q (see Remark 2.25).

Let V and W be vector spaces (possibly infinite-dimensional). Let A ∈ End(V ⊗ V) and B ∈ End(W ⊗ W) be idempotents and R be an algebra. Instead of a matrix over R we need to take an element M of the space R ⊗ Hom(
W, V) or of some completion of this space. We consider M as an operator with entries in the algebra R: this means that λMw ∈ R, where λ ∈ V* and w ∈ W (in the case of a completion the covector λ runs over a subset of the dual space V* such that λMw is well defined).

Definition 2.33.
The operator M (from the space R ⊗ Hom(
W, V ) or its completion) iscalled an (
A, B)-Manin operator if it satisfies the relation A M^{(1)} M^{(2)} (1 − B) = 0, where M^{(1)} = M ⊗ id_V and M^{(2)} = id_W ⊗ M. In the case V = W, A = B it is called an A-Manin operator.

Let (v_i) be a basis of the space V, so that any vector v ∈ V has the form v = Σ_i α_i v_i for unique coefficients α_i ∈ C, the sum being finite. Then the action of the operator A can be written as A(v_k ⊗ v_l) = Σ_{i,j} A_{ij,kl} v_i ⊗ v_j for unique coefficients A_{ij,kl} ∈ C. Since the sum Σ_{i,j} must be finite, there are only finitely many non-zero coefficients A_{ij,kl} for any fixed k, l; we call this the column finiteness condition for the matrix (A_{ij,kl}). This condition allows us to define the quadratic algebra Ξ_A(C) generated by the ψ_i with the commutation relations ψ_k ψ_l = Σ_{i,j} A_{ij,kl} ψ_i ψ_j. Formally this is the quotient Ξ_A(C) = T/I, where T is the algebra of all non-commutative polynomials in the formal variables ψ_i, while I is the ideal of T generated by the elements ψ_k ψ_l − Σ_{i,j} A_{ij,kl} ψ_i ψ_j ∈ T. Another choice of the basis (v_i) leads to an isomorphic quadratic algebra, so it essentially depends on the operator A only.

To interpret an (A, B)-Manin operator M as an object of a comma category as in Subsection 2.5 we also need to define an algebra Ξ_A(R). Let (v_i) and (w_i) be bases of V and W. The formula Mw_j = Σ_i M_ij v_i defines the entries M_ij ∈ R of an (A, B)-Manin operator M in these bases. If M ∈ R ⊗ Hom(
W, V ) (without completion), then there are only finitelymany non-zero entries M ij for any fixed j (column finiteness condition for the matrix ( M ij ))and hence the sum P i M ij ψ i is finite. Define Ξ A ( R ) = R ⊗ Ξ A ( C ) (non-completed). Thenthe set of ( A, B )-Manin operators M ∈ R ⊗ Hom(
W, V ) bijectively corresponds to the setHom (cid:0) Ξ B ( C ) , Ξ A ( R ) (cid:1) .To consider the case of arbitrary infinite matrix ( M ij ) without any finiteness conditionwe define a completion of the space R ⊗ Hom(
W, V ) as the set of all the infinite formal sums P i,j M ij E ij where M ij ∈ R and E ij ∈ Hom(
W, V ) are operators acting as E ij w k = δ kj v i .Denote this completion by R b ⊗ Hom(
W, V ). It can be identified with Hom( W, R b ⊗ V ) where R b ⊗ V is the completion of R ⊗ V consisting of all the infinite formal sums P i r i v i , r i ∈ R .The operator M = P i,j M ij E ij acts by the formula M w j = P i M ij v i .Note that the completion R b ⊗ V (and hence R b ⊗ Hom(
W, V) = Hom(W, R ⊗̂ V)) does not depend on the choice of the basis (w_i), but it does depend on the choice of the basis (v_i). Namely, a basis (v_i) defines the following topology on the R-module R ⊗ V: the neighbourhoods of 0 are the R-submodules generated by all the v_i except finitely many of them. The module R ⊗̂ V is the completion of R ⊗ V with respect to this topology. Any two bases (v_i) and (v′_i) are related by v_i = Σ_k α_ki v′_k, v′_j = Σ_k β_kj v_k; they define the same topology on R ⊗ V iff for any k there are only finitely many non-zero α_ki and β_kj.

Suppose A satisfies the row finiteness condition: there are only finitely many non-zero A_{ij,kl} for fixed i, j. This condition means exactly that the operator A : V ⊗ V → V ⊗ V is continuous with respect to the topology corresponding to the basis (v_i ⊗ v_j). Define Ξ̂_A(R) = T̂/Î, where T̂ = ⊕_{k∈N} T̂_k is the graded algebra with grading component T̂_k consisting of the infinite formal sums

Σ_{i_1,…,i_k} r_{i_1…i_k} ψ_{i_1} ⋯ ψ_{i_k},    r_{i_1…i_k} ∈ R,

and Î = { Σ_{k,l} t_{kl} (ψ_k ψ_l − Σ_{i,j} A_{ij,kl} ψ_i ψ_j) t′_{kl} | t_{kl}, t′_{kl} ∈ T̂ } (due to the row finiteness condition the sum over k and l is correctly defined). The algebra Ξ̂_A(R) is a completion of Ξ_A(R); it depends on the choice of the basis (v_i) due to the identification of the degree-1 component of Ξ_A(C) with V via ψ_i ↔ v_i.

The proof of Lemma 2.2 holds for the completed algebra Ξ̂_A(R), so that the system of equations Σ_{i,j} ψ_i ψ_j T_{ij,ab} = 0 for T_{ij,ab} ∈ R is equivalent to the system Σ_{k,l} A_{ij,kl} T_{kl,ab} = 0. Hence we have a bijection between the set of (A, B)-Manin operators M ∈ R ⊗̂ Hom(
W, V ) and theset Hom (cid:0) Ξ B ( C ) , b Ξ A ( R ) (cid:1) .Let V b ⊗ V be the completion of V ⊗ V with respect to the topology corresponding to thebasis ( v i ⊗ v j ). It consists of all the infinite formal sums P i,j α ij v i ⊗ v j , α ij ∈ C . Any matrix( A ij,kl ) satisfying row finiteness condition define a continuous operator A : V b ⊗ V → V b ⊗ V bythe formula Av k ⊗ v l = P i,j A ij,kl v i ⊗ v j . Denote by End( V b ⊗ V ) the space of all the continuousoperators V b ⊗ V → V b ⊗ V . Note that we do not need the column finiteness condition to definethe algebra b Ξ A ( R ). Hence the ( A, B )-Manin operators M ∈ R b ⊗ Hom(
W, V ) for arbitraryidempotents A ∈ End( V b ⊗ V ) and B ∈ End( W ⊗ W ) can be identified with the elements of theset Hom (cid:0) Ξ B ( C ) , b Ξ A ( R ) (cid:1) . Explicitly, the relations P k,l,a,b A ij,kl M ka M lb ( δ ar δ bs − B ab,rs ) = 0are correctly defined in this case since all the sums in these relations are finite.Thus the ( A, B )-Manin operators in the non-completed or completed case are objectsof the comma category (cid:0) Ξ B ( C ) ↓ Ξ A (cid:1) or (cid:0) Ξ B ( C ) ↓ b Ξ A (cid:1) for the functor Ξ A : A → G or b Ξ A : A → G respectively.One can also generalise the graded algebras X A ( C ) to the case of infinite-dimensionalmatrices ( A ij,kl ). To do it one needs to require the row finiteness condition, which meansthat this matrix defines a continuous operator A : V b ⊗ V → V b ⊗ V . For any idempotent A ∈ End( V b ⊗ V ) define the quadratic algebra X A ( C ) as the algebra generated by x i withthe commutation relations P k,l A ij,kl x k x l = 0. That is X A ( C ) = Ξ id − A ⊤ ( C ), where A ⊤ isthe operator V ⊗ V → V ⊗ V defined as A ⊤ v i ⊗ v j = P k,l A ij,kl v k ⊗ v l . Let W b ⊗ W bethe completion of W ⊗ W with respect to the basis ( w i ⊗ w j ) and B ∈ End( W ⊗ W ) be anidempotent. The corresponding matrix ( B ij,kl ) satisfy the column finiteness condition. Define b X B ( R ) = b Ξ id − B ⊤ ( R ) where B ⊤ ∈ End( W b ⊗ W ) is the continuous operator W b ⊗ W → W b ⊗ W acting as B ⊤ w i ⊗ w j = P k,l B ij,kl w k ⊗ w l . Then we have the comma category ( X A ( C ) ↓ b X B )equivalent to (cid:0) Ξ B ( C ) ↓ b Ξ A (cid:1) .In particular, the completions allow us to consider the universal ( A, B ) -Manin operator .It has the form M = P i,j M ij E ij , where M ij are generators of the algebra U A,B withthe commutation relations P k,l,a,b A ij,kl M ka M lb ( δ ar δ bs − B ab,rs ) = 0. These relations arecorrectly defined iff the matrices ( A ij,kl ) and ( B ij,kl ) satisfy the row and column finitenessconditions respectively. 
26roposition 2.26 can be generalised if we additionally suppose that the sums P j M ij N jl are well defined: if nether M nor N satisfies a needed finiteness condition, then we need tocomplete R in order to include all these sums. In particular, the map ∆ A,B,C is a homo-morphism from U A,C to a completion of U A,B ⊗ U B,C which contains the sums P j M ij N jl (we need to suppose that A , B and C satisfy the row, both and column finiteness conditionsrespectively). In the case A = B = C the map ∆ A is a ‘completed’ comultiplication for U A = U A,B . In terms of representations this means that tensor product of U A -modules arenot always defined: we need to impose some finiteness condition on the modules to guaranteethe existence of their tensor product.Finally, we consider some examples of completions used below. The simplest exampleof the infinite-dimensional space is the space of polynomials V = C [ u ]. It has the basis( v i = u i − ) i > . The completion of R [ u ] = R ⊗ C [ u ] is the algebra of formal series R [[ u ]].It consists of all the formal infinite sums ∞ P k =0 r k u k , r k ∈ R . The algebra of finite Laurentpolynomials R [ u, u − ] = R ⊗ C [ u, u − ] can be completed to the algebra R (( u )) consistingof the series ∞ P k = N r k u k , N ∈ Z , r k ∈ R . It has another completion – the algebra R (( u − ))consisting of N P k = −∞ r k u k , N ∈ Z , r k ∈ R . Note that the space R [[ u, u − ]] consisting of ∞ P k = −∞ r k u k is also a completion of R [ u, u − ], but it is not an algebra in the usual sense. Here we consider the main examples corresponding to the polynomial and Grassmann al-gebras and their deformations. We also consider a generalisation of a deformed polynomialalgebra with three variables. More examples will appear in Sections 4, 6, 7 and Appendix B.
Let $P_n \in \mathrm{End}(\mathbb{C}^n \otimes \mathbb{C}^n)$ be the permutation operator acting as
$$P_n(v \otimes w) = w \otimes v, \qquad v, w \in \mathbb{C}^n. \qquad (3.1)$$
Substituting basis elements $v = e_k$ and $w = e_l$ we obtain the entries of this operator: $(P_n)_{ij,kl} = \delta_{il}\delta_{jk}$. Since $P_n^2 = 1$ the operators
$$A_n = \tfrac12\big(1 - P_n\big), \qquad S_n = \tfrac12\big(1 + P_n\big) = 1 - A_n \qquad (3.2)$$
are idempotents: $A_n^2 = A_n$, $S_n^2 = S_n$. These are the anti-symmetrizer and symmetrizer for two tensor factors respectively. Note that the permutation operator satisfies the braid relation
$$P_n^{(23)} P_n^{(12)} P_n^{(23)} = P_n^{(12)} P_n^{(23)} P_n^{(12)} \qquad (3.3)$$
(this is an equality of operators acting on the space $\mathbb{C}^n \otimes \mathbb{C}^n \otimes \mathbb{C}^n$).

The commutation relations (2.4) and (2.16) for $A = A_n$ have the form $x_i x_j - x_j x_i = 0$ and $\psi_i \psi_j + \psi_j \psi_i = 0$. Hence the algebra $X_{A_n}(\mathbb{C})$ is the polynomial algebra $\mathbb{C}[x_1, \dots, x_n]$ and $\Xi_{A_n}(\mathbb{C})$ is the Grassmann algebra with Grassmann variables $\psi_1, \dots, \psi_n$.

The $(A_n, A_m)$-Manin matrices are $n \times m$ matrices $M$ over an algebra $R$ satisfying the relation
$$A_n M^{(1)} M^{(2)} S_m = 0. \qquad (3.4)$$
These matrices were called Manin matrices in [CF]. In this subsection we call them just Manin matrices if there is no confusion with the general notion of Manin matrix.

To write the matrix relation (3.4) in terms of entries one can substitute $(P_n)_{ij,kl} = \delta_{il}\delta_{jk}$ into the formula (2.26) written in entries. We obtain the following system of commutation relations:
$$[M_{ik}, M_{jk}] = 0, \qquad i < j, \qquad (3.5)$$
$$[M_{ik}, M_{jl}] + [M_{il}, M_{jk}] = 0, \qquad i < j, \ k < l \qquad (3.6)$$
(the relations (3.5) are in fact the relations (3.6) for $k = l$). The commutation relations (3.5) mean that any two entries of the same column commute. The formula (3.6) is the so-called cross-relation for the $2 \times 2$ submatrix with rows $i, j$ and columns $k, l$.

For example, consider a $2 \times 2$ matrix
$$M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}. \qquad (3.7)$$
It is a Manin matrix iff its entries $a, b, c, d \in R$ satisfy $[a, c] = 0$, $[b, d] = 0$, $[a, d] + [b, c] = 0$.
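The operators above are finite-dimensional and easy to check numerically. The following sketch (not from the paper; the variable names are ours) builds $P_n$ as an $n^2 \times n^2$ matrix and verifies $P_n^2 = 1$, the idempotency of $A_n$ and $S_n$, and the braid relation (3.3).

```python
import numpy as np

n = 3
I = np.eye(n)

# Permutation operator on C^n ⊗ C^n: P(v ⊗ w) = w ⊗ v,
# i.e. entries (P_n)_{ij,kl} = delta_{il} delta_{jk}.
P = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        P[i * n + j, j * n + i] = 1.0

A = 0.5 * (np.eye(n * n) - P)  # anti-symmetrizer A_n
S = 0.5 * (np.eye(n * n) + P)  # symmetrizer S_n

assert np.allclose(P @ P, np.eye(n * n))                 # P_n^2 = 1
assert np.allclose(A @ A, A) and np.allclose(S @ S, S)   # idempotents
assert np.allclose(A @ S, np.zeros((n * n, n * n)))      # A_n S_n = 0

# Braid relation (3.3) on C^n ⊗ C^n ⊗ C^n:
P12 = np.kron(P, I)
P23 = np.kron(I, P)
assert np.allclose(P23 @ P12 @ P23, P12 @ P23 @ P12)
```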
(3.8)

In the case $n \geqslant 2$, $m \geqslant 2$ an $n \times m$ matrix is a Manin matrix iff any of its $2 \times 2$ submatrices is a Manin matrix. An $(A_n, A_m)$-Manin matrix is a non-commutative generalisation of a matrix for which most of the properties of the usual matrices are inherited (with some generalisation of the notion of determinant). These properties are described in detail in the works [CF] and [CFR].

It is clear that a Manin matrix remains a Manin matrix after the following operations: taking a submatrix, permutation of rows or columns, doubling of a row or a column. In other words, if $M = (M_{ij})$ is an $n \times m$ Manin matrix and $i_1, \dots, i_k \in \{1, \dots, n\}$, $j_1, \dots, j_l \in \{1, \dots, m\}$ then the new $k \times l$ matrix $N$ with the entries $N_{st} = M_{i_s j_t}$ is also a Manin matrix. Note that in the case of a permutation this fact follows from Proposition 2.21 and $(\sigma \otimes \sigma) A_n (\sigma^{-1} \otimes \sigma^{-1}) = A_n$ for all $\sigma \in S_n$.

Let us recall one more important fact on Manin matrices [CFR].

Proposition 3.1. A matrix $M \in R \otimes \mathrm{Hom}(\mathbb{C}^m, \mathbb{C}^n)$ has pairwise commuting entries (i.e. $[M_{ij}, M_{kl}] = 0$ for any $i, j, k, l$) iff $M$ and its transpose $M^\top$ are both Manin matrices.

Proof.
If all entries of $M$ commute with each other then $M$ as well as $M^\top$ is a Manin matrix. In the converse direction it is enough to prove the statement for $2 \times 2$ submatrices (for $n = 1$ or $m = 1$ there are no $2 \times 2$ submatrices of $M^\top$ or $M$ respectively). The condition that the matrix $M^\top = \begin{pmatrix} a & c \\ b & d \end{pmatrix}$ is a Manin matrix is equivalent to the relations $[a, b] = 0$, $[c, d] = 0$, $[a, d] - [b, c] = 0$. Together with the relations (3.8) they imply that all the entries $a, b, c, d$ pairwise commute.

For a general Manin matrix the entries from different columns need not commute, so the notion of determinant should be generalised in a special way. It turns out that the natural generalisation is the so-called column determinant. For an $n \times n$ matrix $M$ over an algebra (or a ring) $R$ it is defined as
$$\det M = \det\nolimits^{\mathrm{col}} M = \sum_{\sigma \in S_n} (-1)^\sigma M_{\sigma(1),1} M_{\sigma(2),2} \cdots M_{\sigma(n),n}, \qquad (3.9)$$
where $(-1)^\sigma$ is the sign of the permutation $\sigma$. This is the usual expression for the determinant, but with a specified order of the entries in each term: they are ordered in accordance with the order of columns. If $M$ is a Manin matrix then the order of columns can be chosen in a different way and this leads to the same result (see [CF], [CFR]):
$$\sum_{\sigma \in S_n} (-1)^\sigma M_{\sigma(i_1), i_1} M_{\sigma(i_2), i_2} \cdots M_{\sigma(i_n), i_n} = \det\nolimits^{\mathrm{col}} M, \qquad \begin{pmatrix} 1 & 2 & \dots & n \\ i_1 & i_2 & \dots & i_n \end{pmatrix} \in S_n. \qquad (3.10)$$
However, we can not take a different order for different terms. For example, the determinant of the $2 \times 2$ matrix (3.7) is $\det(M) = ad - cb$. If it is a Manin matrix then due to the last relation in (3.8) we have $\det(M) = da - bc$, but in general $\det(M)$ does not equal $ad - bc$ or $da - cb$ even for a Manin matrix.

An important property of the column determinant of Manin matrices is its behaviour under permutations of rows and columns. The determinant of an $n \times n$ Manin matrix $M$ changes its sign under a transposition of two columns or rows.
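The column determinant (3.9) is straightforward to implement by ordering the factors of each term by column index. Below is a minimal sketch (the helper names `sign` and `column_det` are ours, not from the paper); for a matrix with commuting numeric entries it reduces to the ordinary determinant, as the last assertion illustrates.

```python
from itertools import permutations

import numpy as np


def sign(perm):
    """Sign of a permutation given as a tuple of 0-based values."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s


def column_det(M):
    """Column determinant (3.9): in each term the entries are multiplied
    left to right in the order of columns; M is a list of rows whose
    entries need not commute (any type supporting * and +)."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        term = sign(perm)
        for col in range(n):
            term = term * M[perm[col]][col]
        total = total + term
    return total


# With commuting (numeric) entries the column determinant is the
# ordinary determinant.
M = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
assert np.isclose(column_det(M), np.linalg.det(np.array(M)))
```

Replacing the numeric entries by non-commuting symbols (e.g. sympy objects with `commutative=False`) gives the genuinely ordered expression $ad - cb$ in the $2 \times 2$ case.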
In the notations ofSubsection 2.4 we have det( τ M ) = ( − τ det M and det( τ M ) = ( − τ det M for any τ ∈ S n .The first formula is deduced asdet( τ M ) = X σ ∈ S n ( − σ M τ − σ (1) , · · · M τ − σ ( n ) ,n = ( − τ det M, where we made the substitution σ → τ σ and used ( − τσ = ( − τ ( − σ . The second formulafollows from (3.10) in a similar way.Since any submatrix of a Manin matrix M is also a Manin matrix, it is natural to define k × k minors of M to be the column determinants of k × k submatrices. We say that a Manin29atrix has rank r if there is non-zero r × r minor and all the k × k minors vanish for all k > r .In fact, it is enough to check it for k = r + 1 (see [CF], [CFR]). Many important propertiesof the Manin matrices are formulated in terms of column determinants and minors. Inparticular, one can construct ‘spectral invariants’ of square Manin matrices [CF], [CFR].The theory of Manin matrices are applies to the Yangians Y ( gl n ), affine Lie algebras b gl n ,Heisenberg gl n XXX -chain, Gaudin gl n model [CF], to elliptic versions of these models [RTS]etc.Let us, for example, present the connection of the notion of A n -Manin matrix with theYangian Y ( gl n ). Consider the rational R -matrix R ( u ) = u − P n . The Yangian Y ( gl n ) isdefined as the algebra generated by t rij , i, j = 1 , . . . , n , r ∈ Z > , with the commutationrelation R ( u − v ) T (1) ( u ) T (2) ( v ) = T (2) ( v ) T (1) ( u ) R ( u − v ) , (3.11)where u and v are formal variables and T ( u ) = 1 + ∞ P r =1 t r . . . t r n ... . . . ... t rn . . . t rnn u − r . Note that for u = 1we have R ( u ) = 2 A n . Substituting v = u − S n = 1 − A n from the right we obtain A n T (1) ( u ) T (2) ( u − S n = 0 (we understand ( u − − r as a seriesin u − ). This relation means that the matrix M = T ( u ) e − ∂∂u is a Manin matrix over thealgebra R = Y ( gl n )[[ u − ]][ e − ∂∂u ]. 
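The rational $R$-matrix $R(u) = u - P_n$ used in the Yangian construction above is easy to check numerically. The sketch below (the helper `emb` for embedding into tensor factors is ours) verifies the Yang–Baxter equation $R^{(12)}(u-v) R^{(13)}(u) R^{(23)}(v) = R^{(23)}(v) R^{(13)}(u) R^{(12)}(u-v)$ and the observation $R(1) = 2A_n$.

```python
import numpy as np

n = 2
I = np.eye(n)

# Permutation operator on C^n ⊗ C^n.
P = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        P[i * n + j, j * n + i] = 1.0

def R(u):
    """Rational R-matrix R(u) = u - P_n."""
    return u * np.eye(n * n) - P

def emb(M, pos):
    """Embed an operator on C^n ⊗ C^n into the factors `pos`
    of C^n ⊗ C^n ⊗ C^n."""
    if pos == (1, 2):
        return np.kron(M, I)
    if pos == (2, 3):
        return np.kron(I, M)
    P23 = np.kron(I, P)          # pos == (1, 3)
    return P23 @ np.kron(M, I) @ P23

u, v = 1.7, 0.4
lhs = emb(R(u - v), (1, 2)) @ emb(R(u), (1, 3)) @ emb(R(v), (2, 3))
rhs = emb(R(v), (2, 3)) @ emb(R(u), (1, 3)) @ emb(R(u - v), (1, 2))
assert np.allclose(lhs, rhs)                      # Yang-Baxter equation

# At u = 1 the R-matrix is proportional to the anti-symmetrizer.
assert np.allclose(R(1.0), np.eye(n * n) - P)     # R(1) = 2 A_n
```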
See details in [CF].The column determinant of M = T ( u ) e − ∂∂u is related with the notion of quantum deter-minant important in the theory of Yangians. Namely, we have det M = (cid:0) qdet T ( u ) (cid:1) e − n ∂∂u ,where qdet T ( u ) = X σ ∈ S n ( − σ T σ (1) , ( u ) T σ (2) , ( u − · · · T σ ( n ) ,n ( u − n + 1) (3.12)is the quantum determinant . The coefficients of the series qdet T ( u ) ∈ Y ( gl n )[[ u − ]] generatethe centre of the Yangian Y ( gl n ) (see [MNO]).Now let us consider ( A n , M = ( M ij ), where 0 ∈ End( C m ⊗ C m ). Theyare defined by the relation M ik M jl = M jk M il where i, j = 1 , . . . , n , k, l = 1 . . . , m . The 2 × A , ad = cb , bc = da , ac = ca , bd = db .These are the relations (3.8) plus the condition det M = 0. Again one can see that an n × m matrix is an ( A n , × A , A n , R coincides with the set of rank R of the size n × m .Recall that the formula (3.10) is valid for all matrices satisfying the cross-relations (3.6).This is more general case than the case of Manin matrices. To describe it we first considera 2 × φ = aψ + cψ , φ = bψ + dψ . (3.13)30hen we have φ φ + φ φ = ( ab + ba ) ψ + ( ad + bc ) ψ ψ + ( da + cb ) ψ ψ + ( cd + dc ) ψ .The relation [ a, d ] + [ b, c ] = 0 follows from the relations ψ ψ + ψ ψ = 0 and φ φ + φ φ = 0 only. Hence by refusing the relations φ i = 0 we have only the cross-relations (3.6)without commutation in columns (3.5). If we refuse from ψ i = 0, then we obtain theanti-commutation in rows: M ik M il + M il M ik = 0 , k < l. (3.14)Let e P n ∈ End( C n ⊗ C n ) be the matrix with the entries ( e P n ) ij,kl = ( − δ ij δ il δ jk = ( − δ kl δ il δ jk .Then the commutation relations ψ i ψ j + ψ j ψ i = 0, i < j (without ψ i = 0) can be written as(Ψ ⊗ Ψ)(1 + e P n ) = 0, so these commutation relations define the quadratic algebra Ξ e A n ( C ),where e A n = 12 (1 − e P n ). Let us conclude. • The matrix M is an ( A n , e A n )-Manin matrix iff it satisfies (3.6). 
• The matrix M is an ( e A n , e A n )-Manin matrix iff it satisfies (3.6) and (3.14). • The matrix M is an ( e A n , A n )-Manin matrix iff it satisfies (3.5), (3.6) and (3.14).The notion of Manin matrix can be generalized to the infinite-dimensional case. Denotefor any vector space V the operator P V ∈ End( V ⊗ V ) by P V ( v ⊗ v ′ ) = v ′ ⊗ v, v, v ′ ∈ V. (3.15)Let A V = 1 − P V S V = 1 − A V = 1 + P V A V , A W )-Manin operator M ∈ R ⊗ Hom(
V, W ) just Maninoperator (over R , from V to W ).Consider the case of the space of polynomials: V = W = C [ u ]. Its tensor square can beidentified with the space of polynomials of two variables: V ⊗ V = C [ u , u ]. In terms of thisidentification the operator P C [ u ] can be interpreted as the operator permuting the variables u and u , we denote it by P u ,u . Let ( v i ) ∞ i =1 be a basis of the space C [ u ]. For example,one can take v i = u i − . Acting by the operator M to the basis we obtain M v j = P i M ij v i (the sum is finite), so we have infinite-dimensional matrix ( M ij ) with the column finitenesscondition. The relation A V M (1) M (2) S V = 0 in the basis ( v i ) takes the form (3.5), (3.6) where i, j, k, l = 1 , . . . , ∞ . This means that M is a Manin operator iff any 2 × M ij )is a Manin matrix. The set of Manin operators includes all the n × m Manin matrices aswell as ( A n , A C [ u ] )- and ( A C [ u ] , A m )-Manin operators (all over a fixed R ).To generalise the consideration to the case of any infinite matrix ( M ij ) without anyfiniteness condition we should suppose that the operator M belongs to the completion of R ⊗ End (cid:0) C [ u ] (cid:1) consisting of the formal infinite sums ∞ P i,j =1 M ij E ij , where M ij ∈ R and E ij v k = δ jk v i . This completion is the space Hom (cid:0) C [ u ] , R [[ u ]] (cid:1) .31 .2 q -Manin matrices Let q be a non-zero complex number. Consider the q -commuting variables x , . . . , x n , thatis x j x i = qx i x j for i < j . By means of the notationsgn( k ) = +1 , if k > , if k = 0, − , if k <
0, (3.16)one can write these relations as x i x j = q sgn( i − j ) x j x i , i, j = 1 , . . . , n. (3.17)In the matrix form they have the form ( X ⊗ X ) = P qn ( X ⊗ X ), where X = n P i =1 x i e i and P qn ∈ End( C n ⊗ C n ) is the q -permutation operator acting as P qn ( e i ⊗ e j ) = q − sgn( i − j ) e j ⊗ e i ,that is P qn = n X i,j =1 q sgn( i − j ) E ij ⊗ E ji . (3.18)Its entries are ( P qn ) ij,kl = q sgn( i − j ) δ il δ jk = q sgn( l − k ) δ il δ jk . It also satisfies the braid relation( P qn ) (23) ( P qn ) (12) ( P qn ) (23) = ( P qn ) (12) ( P qn ) (23) ( P qn ) (12) . (3.19)Since ( P qn ) = 1 the matrices A qn = 12 (1 − P qn ) , S qn = 12 (1 + P qn ) = 1 − A qn (3.20)are idempotents. The corresponding algebra X A qn ( C ) is generated by x i with the rela-tions (3.17). It can be interpreted as an ‘algebra of functions’ on the n -dimensional quantumspace C nq . The algebra Ξ A qn ( C ) is the q -Grassmann algebra generated by ψ , . . . , ψ n with therelations ψ i ψ j = − q − sgn( i − j ) ψ j ψ i , i, j = 1 , . . . , n. (3.21)The matrix M ∈ R ⊗ Hom( C m , C n ) is an ( A qn , A qm )-Manin matrix iff the following equiv-alent relations hold A qn M (1) M (2) S qm = 0 , (3.22)(1 − P qn ) M (1) M (2) (1 + P qm ) = 0 , (3.23) A qn M (1) M (2) = A qn M (1) M (2) A qm , (3.24) M (1) M (2) S qm = S qn M (1) M (2) S qm . (3.25)32n terms of entries these relations have the form M ik M jk = q − M jk M ik , i < j, (3.26)[ M ik , M jl ] + qM il M jk − q − M jk M il = 0 , i < j, k < l. (3.27)The ( A qn , A qm )-Manin matrices are called q -Manin matrices . The Manin matrices consideredin Subsection 3.1 are q -Manin matrices for q = 1. 
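As in the $q = 1$ case, the defining properties of the $q$-permutation operator — $(P_n^q)^2 = 1$, the idempotency of $A_n^q$ and $S_n^q$, and the braid relation (3.19) — can be confirmed numerically. A sketch (variable names are ours):

```python
import numpy as np

n, q = 3, 0.7

# q-permutation operator: P_q (e_k ⊗ e_l) = q^{-sgn(k-l)} e_l ⊗ e_k.
P = np.zeros((n * n, n * n))
for k in range(n):
    for l in range(n):
        P[l * n + k, k * n + l] = q ** (-np.sign(k - l))

A = 0.5 * (np.eye(n * n) - P)  # A_n^q
S = 0.5 * (np.eye(n * n) + P)  # S_n^q

assert np.allclose(P @ P, np.eye(n * n))   # (P_n^q)^2 = 1
assert np.allclose(A @ A, A)               # A_n^q idempotent
assert np.allclose(S @ S, S)               # S_n^q idempotent

# Braid relation (3.19):
I = np.eye(n)
P12, P23 = np.kron(P, I), np.kron(I, P)
assert np.allclose(P23 @ P12 @ P23, P12 @ P23 @ P12)
```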
The properties of the Manin matricesdescribed in [CF], [CFR] were generalised to the q -case in the work [CFRS].A natural generalisation of the column determinant to the case of q -Manin matrices is q -determinant defined for a matrix M ∈ R ⊗ End( C n ) asdet q ( M ) = X σ ∈ S n ( − q ) − inv( σ ) M σ (1) , M σ (2) , · · · M σ ( n ) ,n , (3.28)where inv( σ ) is the number of inversions: it is equal to the number of pairs ( i, j ) such that1 i < j n and σ ( i ) > σ ( j ). It coincides with the length of σ defined as the minimal l such that σ can be presented as a product of l elementary transpositions σ i = σ i,i +1 (seeAppendix A for details).For the 2 × M = (cid:18) a bc d (cid:19) the q -determinant has the from det q M = ad − q − cb .This matrix M is a q -Manin matrix iff ca = qac, db = qbd, ad − da + qbc − q − cb = 0 . (3.29)In this case the q -determinant can be rewritten as det q M = da − qbc .A general n × m matrix is a q -Manin matrix iff any 2 × q -Manin matrix ( n > n × n q -Manin matrix we can change the order of columnsin the expression of q -determinant in the following way [CFRS]: X σ ∈ S n ( − σ M σ (1) ,τ (1) M σ (2) ,τ (2) · · · M σ ( n ) ,τ ( n ) = ( − q ) − inv( τ ) det q M, τ ∈ S n . (3.30)By changing τ by τ − in (3.30) and by taking into account inv( τ − ) = inv( τ ) one canwrite this formula in the form det q ( τ M ) = ( − q ) − inv( τ ) det q M . In contrast with the case ofSubsection 3.1 the q -determinant of the matrix τ M obtained from an n × n q -Manin matrixby a permutation of rows does not related with det q M (the proof done for the q = 1 case doesnot work since q inv( τσ ) = q inv( τ ) q inv( σ ) ). Moreover, neither τ M nor τ M are q -Manin matricesin general. However they are Manin matrices for another idempotents. Namely, by virtue ofProposition 2.21 they are (cid:0) ( τ ⊗ τ ) A qn ( τ − ⊗ τ − ) , A qm (cid:1) - and (cid:0) A qn , ( τ ⊗ τ ) A qm ( τ − ⊗ τ − ) (cid:1) -Maninmatrices respectively (see Section 3.3). 
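The $q$-determinant (3.28) differs from the column determinant (3.9) only in the weight $(-q)^{-\mathrm{inv}(\sigma)}$ replacing the sign. A minimal implementation (the helpers `inv` and `det_q` are ours); at $q = 1$ it reduces to the ordinary determinant for commuting entries, and for a numeric $2 \times 2$ matrix it reproduces $ad - q^{-1}cb$:

```python
from itertools import permutations

import numpy as np


def inv(perm):
    """Number of inversions of a permutation (tuple of 0-based values)."""
    return sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
               if perm[i] > perm[j])


def det_q(M, q):
    """q-determinant (3.28): sum over sigma of (-q)^{-inv(sigma)} times
    the column-ordered product M[sigma(1)][1] ... M[sigma(n)][n]."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        term = (-q) ** (-inv(perm))
        for col in range(n):
            term = term * M[perm[col]][col]
        total = total + term
    return total


M = [[2.0, 1.0], [1.0, 3.0]]
# At q = 1 the q-determinant is the ordinary (column) determinant.
assert np.isclose(det_q(M, 1.0), np.linalg.det(np.array(M)))
# For a 2x2 matrix, det_q = a d - q^{-1} c b.
q = 0.5
assert np.isclose(det_q(M, q), 2.0 * 3.0 - (1 / q) * 1.0 * 1.0)
```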
As we will see in Subsection 6.1 the q -determinant is anatural operation for ( A qn , B )-Manin matrices for any B , but not for ( B, A qm )-Manin matrices(the symmetry of the q -determinant of an ( A qn , B )-Matrix with respect to permutation ofcolumns depends on the choice of the idempotent B ).Analogously to the case q = 1, the formula (3.30) is valid for any M ∈ R ⊗ End( C n )satisfying the cross-relations (3.27), that is for any ( A qn , e A qn )-Manin matrix M , where e A qn =1 − e P qn e P qn ) ij,kl = ( − δ ij q sgn( i − j ) δ il δ jk , i, j, k, l = 1 , . . . , n .33 .3 Multi-parametric case: ( b q, b p ) -Manin matrices In Subsection 3.2 we introduced a q -deformation of the polynomial algebra C [ x , . . . , x n ]. The q -commutation of variables was defined by a unique parameter q . However we can considermulti-parameter deformation [Man89]. One has n ( n − / n × n matrix b q = ( q ij ) is parameter matrix iff it has entries q ij ∈ C \{ } satisfying the conditions q ij = q − ji , q ii = 1 . (3.31)A parameter matrix b q defines the commutation relations x j x i = q ij x i x j , (3.32)where i, j = 1 , . . . , n are arbitrary or subjected to i < j . It gives the algebra X A b q ( C ) where A b q = 1 − P b q , S b q = 1 + P b q , ( P b q ) kl,ij = q ij δ kj δ li . (3.33)It is immediately checked that ( P b q ) = 1 and that P b q satisfies the braid relation P (23) b q P (12) b q P (23) b q = P (12) b q P (23) b q P (12) b q . (3.34)The corresponding algebra Ξ A b q ( C ) is defined by the relations ψ j ψ i = − q − ij ψ i ψ j . (3.35)The independent relations are (3.35) for i < j and ψ i = 0.Let b p = ( p ij ) be an m × m parameter matrix. An ( A b q , A b p )-Manin matrix M is an n × m matrix over an algebra R satisfying A b p M (1) M (2) S b p = 0 . 
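The conditions (3.31) on a parameter matrix are exactly what is needed for $P_{\hat q}$ from (3.33) to square to the identity and to satisfy the braid relation (3.34). A numeric sanity check with a random parameter matrix (a sketch; names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Random parameter matrix: q_ii = 1 and q_ij q_ji = 1, cf. (3.31).
Q = np.ones((n, n))
for i in range(n):
    for j in range(i + 1, n):
        Q[i, j] = rng.uniform(0.5, 2.0)
        Q[j, i] = 1.0 / Q[i, j]

# P_qhat (e_i ⊗ e_j) = q_ij e_j ⊗ e_i, cf. the entries in (3.33).
P = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        P[j * n + i, i * n + j] = Q[i, j]

assert np.allclose(P @ P, np.eye(n * n))   # P_qhat^2 = 1, so A_qhat, S_qhat are idempotents

# Braid relation (3.34):
I = np.eye(n)
P12, P23 = np.kron(P, I), np.kron(I, P)
assert np.allclose(P23 @ P12 @ P23, P12 @ P23 @ P12)
```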
(3.36)In terms of entries this relation can be written as M ik M jk = q ji M jk M ik , (3.37) M ik M jl − q ji p kl M jl M ik + p kl M il M jk − q ji M jk M il = 0 (3.38)These conditions are empty for i = j and they do not change under i ↔ j or k ↔ l , henceit is enough to check (3.37) for i < j and (3.38) for i < j , k < l (the relation (3.37) is therelation (3.38) for k = l ). Definition 3.2.
A matrix $M$ satisfying the relations (3.37), (3.38) is called a $(\hat q, \hat p)$-Manin matrix. A square matrix $M$ satisfying these relations with $\hat q = \hat p$ is called a $\hat q$-Manin matrix.

Now let us consider the permutation of rows and columns of such matrices.

Proposition 3.3.
Let $M$ be an $n \times m$ matrix over $R$, $\sigma \in S_n$ and $\tau \in S_m$. Then the following statements are equivalent.

• $M$ is a $(\hat q, \hat p)$-Manin matrix.
• ${}^\sigma M = \sigma M$ is a $(\sigma \hat q \sigma^{-1}, \hat p)$-Manin matrix.
• $M^\tau = M \tau^{-1}$ is a $(\hat q, \tau \hat p \tau^{-1})$-Manin matrix.

The matrix $\sigma \hat q \sigma^{-1}$ has entries
$$(\sigma \hat q \sigma^{-1})_{ij} = q_{\sigma^{-1}(i), \sigma^{-1}(j)}. \qquad (3.39)$$

Proof.
It follows from Proposition 2.21 and the formula( σ ⊗ σ ) P b q ( σ − ⊗ σ − ) = P σ b qσ − , (3.40)which in turn is deduced by direct calculation. Remark 3.4.
Proposition 3.3 does not hold for arbitrary operators $\sigma \in GL(n, \mathbb{C})$ and $\tau \in GL(m, \mathbb{C})$, since the formula (3.40) is not valid for general $\sigma \in GL(n, \mathbb{C})$.

Let us consider a more general situation: one can apply the following operations to a matrix $M$: taking a submatrix, permutation of rows or columns, doubling of a row or a column. The result of a sequence of such operations is a new matrix $N = M_{IJ}$ considered below.

Theorem 3.5.
Let $I = (i_1, \dots, i_k)$ and $J = (j_1, \dots, j_l)$ where $1 \leqslant i_s \leqslant n$ and $1 \leqslant j_t \leqslant m$ for any $s = 1, \dots, k$ and $t = 1, \dots, l$. Let $M = (M_{ij})$ be an $n \times m$ matrix over $R$ and $M_{IJ}$ be the $k \times l$ matrix with entries $(M_{IJ})_{st} = M_{i_s j_t}$. Let $\hat q$ and $\hat p$ be $n \times n$ and $m \times m$ parameter matrices and let $\hat q_{II}$ and $\hat p_{JJ}$ be the $k \times k$ and $l \times l$ matrices with entries $(\hat q_{II})_{su} = q_{i_s i_u}$, $s, u = 1, \dots, k$, $(\hat p_{JJ})_{tv} = p_{j_t j_v}$, $t, v = 1, \dots, l$. They are also parameter matrices. If $M$ is a $(\hat q, \hat p)$-Manin matrix then $M_{IJ}$ is a $(\hat q_{II}, \hat p_{JJ})$-Manin matrix.

Proof.
By substituting i → i s , j → i u , k → j t , l → j v to (3.38) we obtain the relations (3.38)for the matrix M IJ with coefficients defined by the parameter matrices b q II and b p JJ .For a non-zero complex number q let us denote by q [ n ] the n × n parameter matrix withentries ( q [ n ] ) ij = q sgn( j − i ) . Then P q [ n ] = P qn and the ( q [ n ] , q [ m ] )-Manin matrices are exactlythe n × m q -Manin matrices. A permutation of rows or columns of a such matrix M givesa ( σq [ n ] σ − , q [ m ] )- or ( q [ n ] , τ q [ m ] τ − )-Manin matrix respectively. In general these are not q -Manin matrices any more (see Subsection 3.2), but they are related with the quadraticalgebras isomorphic to X A qn ( C ) and X A qm ( C ), so that they have the same properties permutedin some sense. For instance, properties of the q -determinant of σ M are similar to ones of the q -determinant of a q -Manin matrix. 35et x , . . . , x n be the generators of X A qn ( C ). Then the n × n diagonal matrix M = x . . . x . . . . . . x n (3.41)is an ( A qn , A n )-Manin matrix, i.e. a ( q [ n ] , [ n ] )-Manin matrix. More generally, this is a (cid:0) ( pq ) [ n ] , p [ n ] (cid:1) -Manin matrix for any p ∈ C \{ } .An analogue of the q -determinant for a ( b q, b p )-Manin matrix depends on b q , but not on b p .We call it b q -determinant , it is defined for an n × n matrix M as follows [Man89]:det b q ( M ) = X σ ∈ S n ( − σ Y i
Let ψ , . . . , ψ n satisfy (3.35) . Then ψ i ψ i · · · ψ i n = 0 , if i k = i l for some k = l ; (3.43) ψ σ (1) ψ σ (2) · · · ψ σ ( n ) = ( − σ ψ ψ · · · ψ n Y i
The relations (3.35) allow us to permute the factors in the left hand side of (3.43). In particular, one can place the factors $\psi_{i_k}$ and $\psi_{i_l}$ at neighbouring sites. If $i_k = i_l$ then $\psi_{i_k} \psi_{i_l} = 0$.

Let us rewrite the formula (3.44) in terms of Appendix A. Consider the root system for the reflection group $S_n$. Denote $q_\alpha = q_{ij}$ for the root $\alpha = e_i - e_j$. Then due to (A.2) the formula (3.44) takes the form
$$\psi_{\sigma(1)} \cdots \psi_{\sigma(n)} = (-1)^\sigma \, \psi_1 \cdots \psi_n \prod_{\alpha \in R^+_{\sigma^{-1}}} q_\alpha^{-1}. \qquad (3.45)$$
We prove the formula (3.45) by induction on the length $\ell = \ell(\sigma)$. Let $\sigma = \sigma_{i_1} \cdots \sigma_{i_{\ell-1}} \sigma_{i_\ell}$ be a reduced expression and $\tau = \sigma_{i_1} \cdots \sigma_{i_{\ell-1}}$. Then $\psi_{\sigma(1)} \cdots \psi_{\sigma(n)} = \psi_{\tau \sigma_{i_\ell}(1)} \cdots \psi_{\tau \sigma_{i_\ell}(n)} = -q^{-1}_{\tau(\alpha_{i_\ell})} \psi_{\tau(1)} \cdots \psi_{\tau(n)}$. Note that $\ell(\tau) = \ell - 1$. From (A.4) we obtain $R^+_{\sigma^{-1}} = R^+_{\tau^{-1}} \sqcup \{\tau(\alpha_{i_\ell})\}$. Together with the induction assumption this implies the formula (3.45).

Lemma 3.6 implies that for any $n \times n$ matrix $M$ such that $[M_{ij}, \psi_k] = 0$ we have
$$\phi_1 \phi_2 \cdots \phi_n = \det\nolimits_{\hat q}(M) \, \psi_1 \psi_2 \cdots \psi_n, \qquad (3.46)$$
where $\phi_j = \sum_{i=1}^n \psi_i M_{ij}$. A permutation of rows or columns corresponds to a permutation of $\psi_1, \dots, \psi_n$ or $\phi_1, \dots, \phi_n$ respectively:
$$\phi_j = \sum_{i=1}^n ({}^\tau M)_{ij} \psi_{\tau^{-1}(i)}, \qquad \phi_{\tau^{-1}(j)} = \sum_{i=1}^n (M^\tau)_{ij} \psi_i. \qquad (3.47)$$

Theorem 3.7.
Let b q = ( q ij ) and b p = ( p ij ) be n × n parameter matrices and τ ∈ S n . Let M be a ( b q, b p ) -Manin matrix over an algebra R . Then the generalised determinants of the ( τ b qτ − , b p ) -Manin matrix τ M and of the ( b q, τ b pτ − ) -Manin matrix τ M have the form det τ b qτ − ( τ M ) = ( − τ det b q ( M ) Y i
Let ψ i be generators of the algebra Ξ A b q ( C ). Then Proposition 2.16 implies thatthe elements φ j = n P i =1 M ij ψ i ∈ Ξ A b q ( R ) satisfy the commutation relations (3.35) with theparameter matrix b p . Due to the formula (3.39) the elements ψ ′ i = ψ τ − ( i ) and φ ′ j = φ τ − ( j ) satisfy the commutation relations (3.35) with the parameter matrices σ b qσ − and σ b pσ − respectively. From the formulae (3.46) and (3.47) we obtain φ · · · φ n = ψ ′ · · · ψ ′ n det τ b qτ − ( τ M ) , φ ′ · · · φ ′ n = ψ · · · ψ n det b q ( τ M ) . (3.50)The formula (3.44) for φ j and ψ i with σ = τ − takes the form ψ ′ · · · ψ ′ n = ( − τ ψ · · · ψ n Y i
Let σ ij ∈ S k be the transposition of i and j . Suppose i < j (without loss ofgenerality). The conditions q il = q jl imply σ ij b qσ ij = b q and Y s
Let M be an n × m matrix over R . Let b q and b p be n × n and m × m parametermatrices. Let I = ( i , . . . , i k ) and J = ( j , . . . , j k ) where i s n and j s m for all s = 1 , . . . , k . Let τ ∈ S k , K = ( i τ (1) , . . . , i τ ( k ) ) and L = ( j τ (1) , . . . , j τ ( k ) ) . • We have det b q KK ( M KJ ) = ( − τ det b q II ( M IJ ) Q s
Note that τ − b q II τ = b q KK , M KJ = τ − ( M IJ ) and M IL = τ − ( M IJ ), so the statementsfollow from Theorems 3.5, 3.7 and Corollary 3.8. Remark 3.10.
The formula (3.44) follows from the relations ψ j ψ i = − q − ij ψ i ψ j , i < j .As consequence, we did not need the relations φ i = 0 to prove the formula (3.49). Therelations ψ j ψ i = − p − ij ψ i ψ j , i < j , define the algebra Ξ e A b p ( C ), where e A b p = 1 − e P b p e P b p ) kl,ij = ( − δ ij p ij δ kj δ li . Hence the formula (3.49) is valid for any ( A b q , e A b p )-Manin matrix M . As consequence, the second statement of Corollary 3.8 is valid for these matrices if b p is generic. However they are not valid for some b p , so it is necessary to require M to be a( b q, b p )-Manin matrix. Moreover the third and forth statements of Corollary 3.9 are not validfor ( A b q , e A b p )-Manin matrices even if b p is generic since we used Theorem 3.5. For example, let M = (cid:18) a bc d (cid:19) , I = (1 , J = (1 , q -determinant of the matrix M IJ = (cid:18) a ac c (cid:19) is38et q ( M IJ ) = ac − q − ca . It vanishes if M is a ( q [2] , p [2] )-Manin matrix, but the cross relation ad − q − pda + pbc − q − cb = 0 is not enough for det q ( M IJ ) = 0. The cross relation for thematrix (cid:18) a ac c (cid:19) itself gives ac − qp − ca + pac − q − ca = 0. This imply det q ( M ) = 0 unless p = − Remark 3.11.
Let n = k + l , q ij = − i, j = k + 1 , . . . , n , i = j , and q ij = 1 forother i, j . By factorizing the algebra X A b q ( C ) over the relations x i = 0, i = k + 1 , . . . , n ,and introducing a Z -grading we obtain the free super-commutative quadratic algebra with k even and l odd generators. However the approach of Section 2 applied to this algebra doesnot give super-Manin matrices considered in [Man89, MR]. The reason is that we supposecommutativity of x i with entries of M , which should be replaced by super-commutativityin the super-case. We will consider Manin matrices for quadratic super-algebras in futureworks. Consider the algebra with generators x, y, z and relations axy − a − yx = κz ,byz − b − zy = κx , (3.51) czx − c − xz = κy , where a, b, c ∈ C \{ } , κ ∈ C .Let x = x , x = y , x = z , a = a , a = b , a = c , a ij = a − ji , a ii = 1. Let ε ijk be thetotally antisymmetric tensor such that ε = 1. Then (3.51) is equivalent to the system a ij x i x j − a ji x j x i = κ X k =1 ε ijk x k x k , i, j = 1 , , . (3.52)These relations can be written as x i x j = P k,l =1 P ij,kl x k x l where P ij,kl = a ji δ jk δ il + κa ji δ kl ε ijk . (3.53)The operator P ∈ End( C ⊗ C ) with the entries (3.53) satisfies P = 1. Hence the operator A a,b,cκ := 1 − P X A a,b,cκ ( C ). Bysetting κ = 0 we obtain the quadratic algebra X A b q ( C ) with the parameters q ij = a ij , sothe algebra X A a,b,cκ ( C ) is a generalisation of the 3-dimensional case of the algebra X A b q ( C )considered in Subsection 3.3 (this is not a κ -deformation in general, see Remark 6.4).39et us consider some examples of Manin matrices by taking A = A a,b,cκ as one of theidempotents. 
A 2 × M = (cid:18) α α α β β β (cid:19) is an ( A q , A a,b,cκ )-Manin matrix iff q ( a ji α i β j + a ij α j β i ) = a ji β i α j + a ij β j α i , (3.54) q ( a ij α k β k + κα i β j ) = a ij β k α k + κβ i α j (3.55)for all cyclic permutation ( i, j, k ) of (1 , , q ij = q sgn( j − i ) and p ij = a ij , while the relation (3.55) is ageneralisation of the q -commutation (3.37).A 3 × M = α β α β α β is an ( A a,b,cκ , A q )-Manin matrix iff the relations (3.52)are satisfied by the substitutions x i = α i and x i = β i and a ij ( α i β j + qβ i α j ) − a ji ( α j β i + qβ j α i ) = κ ( α k β k + qβ k α k ) (3.56)for all cyclic permutations ( i, j, k ) of (1 , , Lax operators are different square matrices and endomorphisms of vector spaces arisen in thetheories of integrable systems and quantum groups. We will consider Lax operators satisfying
RLL -relations with some R -matrices (solutions of the Yang-Baxter equation). Different R -matrices give different types of Lax operators. Since many quantum groups can be defined by RLL -relations the Lax operators of a certain type are related with the representation theoryof the corresponding quantum group. Here we consider connections between Manin matricesassociated with some quadratic algebras and the Lax operators associated with the quantumgroups U q ( gl n ), Y ( gl n ). Notice also that a connection between the q -Manin matrices andLax operators associated the affine quantum group U q ( b gl n ) was described in [CFRS]. U q ( gl n ) type and q -Manin matrices A relationship between Lax operator and q -Manin matrices was first described by Manin,see [Man88]. We investigate this relationship by applying a decomposition of the correspond-ing R -matrix.Let us first write the relations for a transposed q -Manin matrix. Recall that the matrices P n defined by (3.1) permute the factors Hom( C m , C n ) ⊗ Hom( C m , C n ) in the following way: P n T P m = T (21) P n M (1) N (2) P m = M (2) N (1) , (4.1)where T ∈ R ⊗ Hom( C m ⊗ C m , C n ⊗ C n ), M, N ∈ R ⊗ Hom( C m , C n ). Let us note that( P qn ) (21) = P q − n , ( A qn ) (21) = A q − n , ( S qn ) (21) = S q − n . (4.2)40ote also that transposition gives the same:( P qn ) ⊤ = P q − n , ( A qn ) ⊤ = A q − n , ( S qn ) ⊤ = S q − n . (4.3) Lemma 4.1.
Let $M \in \mathfrak{R} \otimes \mathrm{Hom}(\mathbb{C}^m, \mathbb{C}^n)$. The transposed matrix $M^\top$ is a $q$-Manin matrix iff the matrix $M$ satisfies one of the following equivalent relations:
$$S^{q^{-1}}_n M^{(1)} M^{(2)} A^{q^{-1}}_m = 0, \qquad (4.4)$$
$$S^{q}_n M^{(2)} M^{(1)} A^{q}_m = 0. \qquad (4.5)$$

Proof.
The relation (3.22) for the $m\times n$ matrix $M^\top$ has the form $A^q_m (M^\top)^{(1)} (M^\top)^{(2)} S^q_n = 0$. If we transpose both sides and take into account $(M^{(1)} M^{(2)})^\top = (M^\top)^{(1)} (M^\top)^{(2)}$ and (4.3), we obtain (4.4). Due to (4.2) the permutation of tensor factors yields (4.5).

Suppose that $q^2 \ne -1$. Consider the $R$-matrix
$$R^q = R^q_n = q^{-1}\sum_{i=1}^n E_{ii}\otimes E_{ii} + \sum_{i\ne j} E_{ii}\otimes E_{jj} + (q^{-1}-q)\sum_{i>j} E_{ij}\otimes E_{ji}. \qquad (4.6)$$
It satisfies the Yang–Baxter equation
$$(R^q)^{(12)}(R^q)^{(13)}(R^q)^{(23)} = (R^q)^{(23)}(R^q)^{(13)}(R^q)^{(12)}. \qquad (4.7)$$
A Lax operator of $U_q(\mathfrak{gl}_n)$ type is an $n\times n$ matrix $L \in \mathfrak{R}\otimes\mathrm{End}(\mathbb{C}^n)$ satisfying the $RLL$-relation $R^q L^{(1)} L^{(2)} = L^{(2)} L^{(1)} R^q$. More generally, consider an $n\times m$ matrix $L \in \mathfrak{R}\otimes\mathrm{Hom}(\mathbb{C}^m, \mathbb{C}^n)$ satisfying
$$R^q_n L^{(1)} L^{(2)} = L^{(2)} L^{(1)} R^q_m. \qquad (4.8)$$

Remark 4.2.
The commutation relations for the quantum group $U_q(\mathfrak{gl}_n)$ can be written as three matrix relations
$$R^q_n L_\pm^{(1)} L_\pm^{(2)} = L_\pm^{(2)} L_\pm^{(1)} R^q_n, \qquad R^q_n L_+^{(1)} L_-^{(2)} = L_-^{(2)} L_+^{(1)} R^q_n$$
for some matrices $L_+, L_- \in U_q(\mathfrak{gl}_n)\otimes\mathrm{End}(\mathbb{C}^n)$ [FRT, RTF].

By multiplying the relation (4.8) by $P_n$ from the left and by taking into account (4.1) we obtain the equivalent relation
$$\hat R^q_n L^{(1)} L^{(2)} = L^{(1)} L^{(2)} \hat R^q_m, \qquad (4.9)$$
where
$$\hat R^q = \hat R^q_n := P_n R^q_n = q^{-1}\sum_{i=1}^n E_{ii}\otimes E_{ii} + \sum_{i\ne j} E_{ij}\otimes E_{ji} + (q^{-1}-q)\sum_{i<j} E_{ii}\otimes E_{jj}.$$

The formulae (4.12), (4.13), (4.14) and the first of (4.15) are obtained directly from the definitions of $\hat R^q_+ = \dfrac{q + \hat R^q_n}{q+q^{-1}}$ and $\hat R^q_- = \dfrac{q^{-1} - \hat R^q_n}{q+q^{-1}}$.

Theorem 4.4. Let $L\in\mathfrak{R}\otimes\mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$; then the following statements are equivalent.
• $L$ satisfies (4.8), that is $R^q_n L^{(1)}L^{(2)} = L^{(2)}L^{(1)}R^q_m$.
• $L$ satisfies (4.9), that is $\hat R^q_n L^{(1)}L^{(2)} = L^{(1)}L^{(2)}\hat R^q_m$.
• $L$ satisfies the relation $\hat R^{qn}_+ L^{(1)}L^{(2)} = L^{(1)}L^{(2)}\hat R^{qm}_+$. (4.21)
• $L$ satisfies the relation $\hat R^{qn}_- L^{(1)}L^{(2)} = L^{(1)}L^{(2)}\hat R^{qm}_-$. (4.22)
• $L$ satisfies the relation $A^q_n L^{(1)}L^{(2)} = L^{(2)}L^{(1)}A^q_m$. (4.23)
• $L$ satisfies the relations $A^q_n L^{(1)}L^{(2)}S^q_m = 0$, $S^q_n L^{(2)}L^{(1)}A^q_m = 0$. (4.24)
• The matrices $L$ and $L^\top$ are both $q$-Manin matrices.

Proof. By adding $qL^{(1)}L^{(2)}$ to both sides of (4.9) and dividing by $q+q^{-1}$, we obtain the equivalent relation (4.21). The equivalence of the relations (4.9) and (4.22) is proved similarly. Further, by using (4.17) one establishes the equivalence of (4.22) and (4.23). The relations (4.24) are obtained from (4.23) by multiplying by $S^q_m$ from the right and by $S^q_n$ from the left, respectively. Conversely, suppose that $L$ satisfies the relations (4.24). By virtue of the formulae (4.2) the second of the relations (4.24) can be written in the form $S^{q^{-1}}_n L^{(1)}L^{(2)} A^{q^{-1}}_m = 0$.
Thus we have
$$A^q_n L^{(1)}L^{(2)} A^q_m = A^q_n L^{(1)}L^{(2)}, \qquad L^{(1)}L^{(2)} A^{q^{-1}}_m = A^{q^{-1}}_n L^{(1)}L^{(2)} A^{q^{-1}}_m. \qquad (4.25)$$
These relations imply
$$A^q_n L^{(1)}L^{(2)} A^q_m A^{q^{-1}}_m = A^q_n L^{(1)}L^{(2)} A^{q^{-1}}_m = A^q_n A^{q^{-1}}_n L^{(1)}L^{(2)} A^{q^{-1}}_m. \qquad (4.26)$$
By using (4.20) one obtains
$$A^q_n L^{(1)}L^{(2)} A^q_m P_m = A^q_n P_n L^{(1)}L^{(2)} A^{q^{-1}}_m. \qquad (4.27)$$
Multiplication by $P_m$ from the right gives
$$A^q_n L^{(1)}L^{(2)} A^q_m = A^q_n L^{(2)}L^{(1)} A^q_m. \qquad (4.28)$$
By taking into account (4.24) we obtain (4.23) in the following way:
$$A^q_n L^{(1)}L^{(2)} = A^q_n L^{(1)}L^{(2)}(A^q_m + S^q_m) = A^q_n L^{(1)}L^{(2)} A^q_m = A^q_n L^{(2)}L^{(1)} A^q_m = (A^q_n + S^q_n) L^{(2)}L^{(1)} A^q_m = L^{(2)}L^{(1)} A^q_m. \qquad (4.29)$$
Finally, by virtue of Lemma 4.1 the relations (4.24) mean exactly that $L$ and $L^\top$ are $q$-Manin matrices.

Theorem 4.4 implies that a Lax operator of $U_q(\mathfrak{gl}_n)$ type is a particular case of a $q$-Manin matrix. Some properties of these Lax operators can be generalised to the case of $q$-Manin matrices. The $q$-determinant (3.28) arose as a natural generalisation of the determinant for the Lax operators of $U_q(\mathfrak{gl}_n)$ type; its properties were generalised to the case of $q$-Manin matrices in [CFRS].

Note that the fact that the $RLL$-relation (4.8) is equivalent to the claim that $L$ and $L^\top$ are both $q$-Manin matrices can be proved in the same way as Proposition 3.1 (see [Man88, CFR]). The approach considered here explains this fact in terms of left equivalence of idempotents, which will be applied in Subsection 6.2. This allows us to explain why the Newton identities for $q$-Manin matrices proved in [CFRS] differ from the Newton identities for $L$-operators and $\hat R^{qn}_-$-Manin matrices deduced in [PS, IOP98, IOP99, IO].

The decomposition of the operator $\hat R^q$ into dual idempotents described in Lemma 4.3 gives a general idea of how to connect Lax operators with Manin matrices. It can be applied to a rather general class of $R$-matrices.
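This decomposition can be tested numerically. The following sketch (NumPy, with the illustrative choice $n = 3$, $q = 0.7$; any $q$ with $q+q^{-1}\ne 0$ works) checks the Yang–Baxter equation (4.7), the Hecke relation for $\hat R^q = P_nR^q_n$, and that the operators $\hat R^q_\pm$ above are complementary orthogonal idempotents.

```python
import numpy as np

n, q = 3, 0.7  # an illustrative choice; any q with q + 1/q != 0 works

def E(i, j):
    m = np.zeros((n, n)); m[i, j] = 1.0
    return m

# R-matrix (4.6)
R = sum((1/q) * np.kron(E(i, i), E(i, i)) for i in range(n))
R = R + sum(np.kron(E(i, i), E(j, j)) for i in range(n) for j in range(n) if i != j)
R = R + (1/q - q) * sum(np.kron(E(i, j), E(j, i)) for i in range(n) for j in range(n) if i > j)

P = sum(np.kron(E(i, j), E(j, i)) for i in range(n) for j in range(n))  # permutation P_n
I2, I1 = np.eye(n * n), np.eye(n)
P23 = np.kron(I1, P)

def op12(T): return np.kron(T, I1)
def op23(T): return np.kron(I1, T)
def op13(T): return P23 @ op12(T) @ P23  # conjugate the (12)-embedding by P^(23)

# Yang-Baxter equation (4.7)
assert np.allclose(op12(R) @ op13(R) @ op23(R), op23(R) @ op13(R) @ op12(R))

# \hat R^q = P_n R^q satisfies the Hecke relation (\hat R^q - q^{-1})(\hat R^q + q) = 0
Rh = P @ R
assert np.allclose((Rh - I2 / q) @ (Rh + q * I2), 0)

# hence \hat R^q_+ = (q + \hat R^q)/(q + q^{-1}), \hat R^q_- = (q^{-1} - \hat R^q)/(q + q^{-1})
# are orthogonal idempotents summing to the identity
Rp = (q * I2 + Rh) / (q + 1/q)
Rm = (I2 / q - Rh) / (q + 1/q)
assert np.allclose(Rp @ Rp, Rp) and np.allclose(Rm @ Rm, Rm)
assert np.allclose(Rp @ Rm, 0) and np.allclose(Rp + Rm, I2)
print("YBE, Hecke relation and idempotent decomposition: OK")
```

The Hecke relation is exactly what makes the affine combinations of $1$ and $\hat R^q$ idempotent, which is the mechanism behind the equivalences in Theorem 4.4.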
4.2 Lax operators of Yangian type as Manin operators

The decomposition method described in Subsection 4.1 is generalised here to the case of the rational $R$-matrix. This gives an interpretation of the corresponding Lax matrices as a class of Manin operators.

Let $h\colon Y(\mathfrak{gl}_n)\to\mathfrak{R}$ be a homomorphism from the Yangian to some algebra $\mathfrak{R}$. It is defined by the image of the matrix $T(u)$. This image has the form $L(u) = 1 + \sum_{r=1}^\infty\sum_{i,j=1}^n \ell^{(r)}_{ij} E_{ij} u^{-r}$, where $\ell^{(r)}_{ij} = h(t^{(r)}_{ij})$, and satisfies the $RLL$-relation
$$R(u_1-u_2) L^{(1)}(u_1) L^{(2)}(u_2) = L^{(2)}(u_2) L^{(1)}(u_1) R(u_1-u_2), \qquad (4.30)$$
where $R(u) = R_n(u) = u - P_n$. Conversely, any $n\times n$ matrix over $\mathfrak{R}$ which has this form and satisfies (4.30) defines a homomorphism $Y(\mathfrak{gl}_n)\to\mathfrak{R}$. These are Lax operators of Yangian type.

We consider a more general matrix $L(u)\in\mathfrak{R}((u^{-1}))\otimes\mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$ satisfying the $RLL$-relation
$$R_n(u_1-u_2) L^{(1)}(u_1) L^{(2)}(u_2) = L^{(2)}(u_2) L^{(1)}(u_1) R_m(u_1-u_2). \qquad (4.31)$$
It could be the Lax operator of the $\mathfrak{gl}_n$ XXX-model (which is an image of $T(u)$ under some representation of $Y(\mathfrak{gl}_n)$ multiplied by a polynomial), the Lax operator of the $n$-particle Toda chain etc.

Note that the matrix $R(u)$ satisfies the Yang–Baxter equation
$$R^{(12)}(u_{12}) R^{(13)}(u_{13}) R^{(23)}(u_{23}) = R^{(23)}(u_{23}) R^{(13)}(u_{13}) R^{(12)}(u_{12}) \qquad (4.32)$$
and
$$R^{(21)}(-u) R^{(12)}(u) = 1 - u^2, \qquad (4.33)$$
where $u_{ij} = u_i - u_j$.

Define an operator on $\big(\mathbb{C}^n((u^{-1}))\big)^{\otimes 2} = \mathbb{C}^n\otimes\mathbb{C}^n((u_1^{-1},u_2^{-1}))$ by the formula
$$\hat R = \hat R_n := P_{u_1,u_2} P_n R_n(u_{12}) = P_{u_1,u_2}\big(u_{12} P_n - 1\big), \qquad (4.34)$$
where $P_{u_1,u_2}$ is the operator permuting $u_1$ and $u_2$, that is $(P_{u_1,u_2}f)(u_1,u_2) = f(u_2,u_1)$. The relation (4.31) is equivalent to
$$\hat R_n L^{(1)}(u_1) L^{(2)}(u_2) = L^{(1)}(u_1) L^{(2)}(u_2) \hat R_m. \qquad (4.35)$$
The relation (4.33) takes the form
$$\hat R_n \hat R_n = 1 - u_{12}^2. \qquad (4.36)$$

Remark 4.5. The operator (4.34) satisfies the braid relation $\hat R^{(23)}\hat R^{(12)}\hat R^{(23)} = \hat R^{(12)}\hat R^{(23)}\hat R^{(12)}$.
(4.37)
It is obtained from the Yang–Baxter equation (4.32) via multiplication by $P_{u_1,u_2}P_{u_2,u_3}P_{u_1,u_2} = P_{u_2,u_3}P_{u_1,u_2}P_{u_2,u_3}$ and (3.3) from the left. Due to the formula (4.36) this operator gives a representation of the group $S_k$ for an arbitrary $k$ after some renormalisation. Namely, the normalised rational $R$-matrix $\overline R(u) = (1-u^2)^{-1/2}R(u)$ satisfies $\overline R^{(21)}(-u)\overline R^{(12)}(u) = 1$ and the same Yang–Baxter equation (4.32). Hence the operator $\tilde R = P_{u_1,u_2}P_n\overline R(u_{12})$ satisfies $\tilde R^2 = 1$ and the braid relation (4.37). This implies that the map $\sigma_a\mapsto\tilde R^{(a,a+1)}$ gives a representation of $S_k$ on the space $(\mathbb{C}^n)^{\otimes k}(u_1,\dots,u_k)$.

Proposition 4.6. The operators
$$\hat R_+ = \hat R_{n+} := \frac{1 - u_{12}^{-1} - u_{12}^{-1}\hat R_n}{2}, \qquad \hat R_- = \hat R_{n-} := \frac{1 + u_{12}^{-1} + u_{12}^{-1}\hat R_n}{2} \qquad (4.38)$$
are orthogonal idempotents dual to each other:
$$\hat R_+ + \hat R_- = 1, \quad (\hat R_+)^2 = \hat R_+, \quad (\hat R_-)^2 = \hat R_-, \quad \hat R_+\hat R_- = \hat R_-\hat R_+ = 0. \qquad (4.39)$$

Proof. The first of (4.39) is obvious. By using (4.36) and $\hat R f(u_1,u_2) = f(u_2,u_1)\hat R$ we obtain $(\hat R + 1 - u_{12})(\hat R - 1 - u_{12}) = 0$. This implies the rest of (4.39).

Now consider
$$\hat A_n := \hat R_{n-} = \frac{1 + u_{12}^{-1} - P_{u_1,u_2}P_nR(u_{12})\,u_{12}^{-1}}{2} = \frac{1 + u_{12}^{-1} + P_{u_1,u_2}(u_{12}^{-1} - P_n)}{2}. \qquad (4.40)$$

Lemma 4.7. The operator $\hat A_n$ is an idempotent acting on the space $\mathbb{C}^n[u_1,u_1^{-1}]\otimes\mathbb{C}^n[u_2,u_2^{-1}]$. It preserves the subspaces $\mathbb{C}^n[u_1]\otimes\mathbb{C}^n[u_2]$ and $\mathbb{C}^n[u_1^{-1}]\otimes\mathbb{C}^n[u_2^{-1}]$.

Proof. Let us rewrite (4.40) in the form
$$\hat A_n = \frac{1 - P_{u_1,u_2}P_n}{2} + \frac{1}{2(u_1-u_2)}(1 - P_{u_1,u_2}). \qquad (4.41)$$
The first term obviously preserves all three spaces. Let us prove that so does the second term. By acting by $1 - P_{u_1,u_2}$ on a Laurent polynomial $p(u_1,u_2)\in\mathbb{C}[u_1,u_1^{-1},u_2,u_2^{-1}]$ we obtain the Laurent polynomial $q(u_1,u_2) = p(u_1,u_2) - p(u_2,u_1)$. Since $q(u_1,u_1) = 0$ we have $q(u_1,u_2) = (u_1-u_2)r(u_1,u_2)$ for some $r(u_1,u_2)\in\mathbb{C}[u_1,u_1^{-1},u_2,u_2^{-1}]$. Hence $\frac{1}{u_1-u_2}(1-P_{u_1,u_2})p(u_1,u_2) = r(u_1,u_2)$ is also a Laurent polynomial. If $p(u_1,u_2)\in\mathbb{C}[u_1,u_2]$ then $r(u_1,u_2)\in\mathbb{C}[u_1,u_2]$.
Analogously, for $p(u_1,u_2)\in\mathbb{C}[u_1^{-1},u_2^{-1}]$ we obtain $q(u_1,u_2) = p(u_1,u_2) - p(u_2,u_1) = (u_1^{-1}-u_2^{-1})r(u_1,u_2)$ for some $r(u_1,u_2)\in\mathbb{C}[u_1^{-1},u_2^{-1}]$, and hence $\frac{1}{u_1-u_2}(1-P_{u_1,u_2})p(u_1,u_2) = -u_1^{-1}u_2^{-1}r(u_1,u_2)\in\mathbb{C}[u_1^{-1},u_2^{-1}]$.

By multiplying $2\hat A_n = 1 + u_{12}^{-1} + u_{12}^{-1}\hat R$ with itself we obtain
$$4\hat A_n^2 = (1+u_{12}^{-1})^2 + (1+u_{12}^{-1})u_{12}^{-1}\hat R + u_{12}^{-1}\hat R(1+u_{12}^{-1}) + u_{12}^{-1}\hat R\,u_{12}^{-1}\hat R = 1 + 2u_{12}^{-1} + u_{12}^{-2} + (1+u_{12}^{-1}+1-u_{12}^{-1})u_{12}^{-1}\hat R - u_{12}^{-2}\hat R^2.$$
By taking into account (4.36) we obtain $4\hat A_n^2 = 2 + 2u_{12}^{-1} + 2u_{12}^{-1}\hat R = 4\hat A_n$.

The basis of $\mathbb{C}^n\otimes\mathbb{C}^n[u_1,u_1^{-1},u_2,u_2^{-1}]$ is $(e_i\otimes e_j\,u_1^ku_2^l)$, where $k,l\in\mathbb{Z}$, $i,j = 1,\dots,n$, and the completion with respect to this basis is the space $\mathbb{C}^n\otimes\mathbb{C}^n[[u_1,u_1^{-1},u_2,u_2^{-1}]]$. Since $\hat A_n$ preserves the space $\mathbb{C}^n\otimes\mathbb{C}^n[u_1,u_1^{-1},u_2,u_2^{-1}]$, its matrix in this basis satisfies the column finiteness condition introduced in Subsection 2.7. However, it does not satisfy the row finiteness condition, since it can not be extended to the completed tensor product space $\mathbb{C}^n\otimes\mathbb{C}^n[[u_1,u_1^{-1},u_2,u_2^{-1}]]$. Explicitly this can be seen from the formulae
$$2\hat A_n(e_i\otimes e_j\,u_1^ku_2^l) = e_i\otimes e_j\,u_1^ku_2^l - e_j\otimes e_i\,u_1^lu_2^k + (e_i\otimes e_j)\,\frac{u_1^ku_2^l - u_1^lu_2^k}{u_1-u_2}, \qquad (4.42)$$
$$\frac{u_1^ku_2^l - u_1^lu_2^k}{u_1-u_2} = \begin{cases} -\sum_{m=k}^{l-1} u_1^mu_2^{k+l-1-m}, & k < l, \\ 0, & k = l, \\ \sum_{m=l}^{k-1} u_1^mu_2^{k+l-1-m}, & k > l. \end{cases} \qquad (4.43)$$
For instance, for any $k > 0$ the element $2\hat A_n(e_i\otimes e_j\,u_1^ku_2^{-k})$ has a non-zero coefficient at the term $e_i\otimes e_j\,u_1^0u_2^{-1}$.

Consider the topology of the space $V = \mathbb{C}^n[u,u^{-1}]$ defined by the neighbourhoods of 0 of the form $V_r = \big\{\sum_{k=-r}^N t_ku^k \mid N > -r,\ t_k\in\mathbb{C}^n\big\}$. The completion of $V$ with respect to this topology is the space $\mathbb{C}^n((u^{-1}))$. The corresponding completion of the space $\mathfrak{R}\otimes V$ is $\mathfrak{R}\,\hat\otimes\,V = \mathfrak{R}((u^{-1}))\otimes\mathbb{C}^n$. The neighbourhoods $V_r\otimes V_s$ define the topology of $V\otimes V$, which gives the completion $\mathbb{C}^n\otimes\mathbb{C}^n((u_1^{-1},u_2^{-1}))$. The operator $\hat A_n\in\mathrm{End}(V\otimes V)$ is continuous with respect to this topology.
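The explicit action (4.42)–(4.43) can be checked directly. The following sketch (exact rational arithmetic, $n = 2$; the encoding of basis monomials as index tuples is ours) applies $\hat A_n$ to a sample element of $\mathbb{C}^2\otimes\mathbb{C}^2[u_1,u_2]$ and confirms the idempotency proved in Lemma 4.7.

```python
from fractions import Fraction
from collections import defaultdict

# elements of C^n ⊗ C^n [u1,u2] encoded as {(i, j, k, l): coeff} for e_i⊗e_j u1^k u2^l

def ratio(k, l):
    # expansion (4.43) of (u1^k u2^l - u1^l u2^k)/(u1 - u2) as pairs ((m, k+l-1-m), sign)
    if k == l:
        return []
    if k > l:
        return [((m, k + l - 1 - m), 1) for m in range(l, k)]
    return [((m, k + l - 1 - m), -1) for m in range(k, l)]

def Ahat(vec):
    # action of \hat A_n via (4.42): (1 - P_u P_n)/2 plus the divided-difference term
    out = defaultdict(Fraction)
    for (i, j, k, l), c in vec.items():
        out[(i, j, k, l)] += c            # identity
        out[(j, i, l, k)] -= c            # - P_u P_n
        for key, s in ratio(k, l):        # (u1^k u2^l - u1^l u2^k)/(u1 - u2)
            out[(i, j) + key] += s * c
    return {key: c / 2 for key, c in out.items() if c != 0}

# idempotency (Lemma 4.7) on a sample polynomial element, in exact arithmetic
v = {(0, 1, 3, 0): Fraction(2), (1, 0, 1, 2): Fraction(-5), (1, 1, 4, 4): Fraction(7)}
assert Ahat(v) == Ahat(Ahat(v))
print("Ahat^2 = Ahat on the sample element")
```

Note that the output of `Ahat` on a polynomial element stays polynomial, in line with the lemma; the divided-difference term never produces negative powers here.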
In particular, it means that $\hat A_n$ is extended to the completion $\mathbb{C}^n\otimes\mathbb{C}^n((u_1^{-1},u_2^{-1}))$. Thus an $(\hat A_n,\hat A_m)$-Manin operator is an element $M$ of $\mathrm{Hom}\big(\mathbb{C}^m[[u,u^{-1}]],\,\mathfrak{R}\,\hat\otimes\,V\big) = \mathrm{Hom}\big(\mathbb{C}^m[[u,u^{-1}]],\,\mathfrak{R}((u^{-1}))\otimes\mathbb{C}^n\big)$ satisfying
$$\hat A_n M^{(1)} M^{(2)} = \hat A_n M^{(1)} M^{(2)} \hat A_m. \qquad (4.44)$$
Note also that the operators $e^{a(\partial_{u_1}+\partial_{u_2})} = e^{a\partial_{u_1}}\otimes e^{a\partial_{u_2}}$ commute with $\hat A_n$. Hence if $M$ is an $(\hat A_n,\hat A_m)$-Manin operator, then $e^{a\partial_u} M e^{b\partial_u}$ is also an $(\hat A_n,\hat A_m)$-Manin operator for any $a,b\in\mathbb{C}$ (this follows from Proposition 2.21 generalised to the infinite-dimensional case).

Theorem 4.8. Let $L(u)\in\mathfrak{R}((u^{-1}))\otimes\mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$. The matrix $L(u)$ is an $(\hat A_n,\hat A_m)$-Manin operator iff it satisfies the $RLL$-relation (4.31).

Proof. Recall first that the relation (4.31) is equivalent to (4.35). Let us multiply
$$\hat A_n L^{(1)}(u_1) L^{(2)}(u_2) = \hat A_n L^{(1)}(u_1) L^{(2)}(u_2)\hat A_m \qquad (4.45)$$
by $4u_{12}$ from the right and substitute $2u_{12}\hat A_n = 1 + u_{12} + \hat R_n$, $2\hat A_m = 1 + u_{12}^{-1} + u_{12}^{-1}\hat R_m$. This gives the equivalent relation
$$(u_{12}-u_{12}^{-1})L^{(1)}(u_1)L^{(2)}(u_2) + u_{12}^{-1}\hat R_nL^{(1)}(u_1)L^{(2)}(u_2)\hat R_m = (1+u_{12}^{-1})L^{(1)}(u_1)L^{(2)}(u_2)\hat R_m - (1+u_{12}^{-1})\hat R_nL^{(1)}(u_1)L^{(2)}(u_2). \qquad (4.46)$$
The left-hand side of (4.46) does not change under the conjugation $T\mapsto u_{12}Tu_{12}^{-1}$, while the right-hand side changes its sign. This means that (4.46) is valid iff both sides vanish. Due to (4.36) the vanishing of each side of (4.46) is equivalent to (4.35).

Let $L(u)$ be an $(\hat A_n,\hat A_m)$-Manin operator. Then $M = L(u+a)e^{b\partial_u}$ is also an $(\hat A_n,\hat A_m)$-Manin operator for any $a,b\in\mathbb{C}$. In particular, the $A_n$-Manin matrix $M = L(u)e^{-\partial_u}$ considered in Subsection 3.1 is an $\hat A_n$-Manin operator.

Remark 4.9. One can renormalise the matrix $L(u)\in\mathfrak{R}((u^{-1}))\otimes\mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$ by multiplying it by a function in $u$.
Such a renormalisation does not violate the $RLL$-relation (4.31). Hence one can suppose that $L(u)\in\mathfrak{R}[[u^{-1}]]\otimes\mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$ by multiplying $L(u)$ by $u^{-k}$ for big enough $k$ if needed. A matrix $L(u)\in\mathfrak{R}[[u^{-1}]]\otimes\mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$ satisfying the $RLL$-relation is exactly an $\big((\hat A_n)^-_{\mathrm{res}},(\hat A_m)^-_{\mathrm{res}}\big)$-Manin operator, where $(\hat A_n)^-_{\mathrm{res}}$ is the restriction of $\hat A_n$ to $\mathbb{C}^n\otimes\mathbb{C}^n[u_1^{-1},u_2^{-1}]$. We see from the formulae (4.42), (4.43) that $(\hat A_n)^-_{\mathrm{res}}$ satisfies the row finiteness condition. In particular, we have the right quantum algebra $\mathcal{U}_{(\hat A_n)^-_{\mathrm{res}}}$, and Theorem 4.8 implies that the Yangian $Y(\mathfrak{gl}_n)$ is a factor algebra of $\mathcal{U}_{(\hat A_n)^-_{\mathrm{res}}}$.

Remark 4.10. The facts described in this subsection work also for a matrix $L(u)\in\mathfrak{R}((u))\otimes\mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$, since we can consider another completion of $\mathfrak{R}[u,u^{-1}]$. Again, one can suppose that $L(u)\in\mathfrak{R}[[u]]\otimes\mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$ by making a renormalisation if needed. A matrix $L(u)\in\mathfrak{R}[[u]]\otimes\mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$ satisfying the $RLL$-relation is an $\big((\hat A_n)^+_{\mathrm{res}},(\hat A_m)^+_{\mathrm{res}}\big)$-Manin operator, where $(\hat A_n)^+_{\mathrm{res}}$ is the restriction of $\hat A_n$ to $\mathbb{C}^n\otimes\mathbb{C}^n[u_1,u_2]$.

5 Minors

The notion of a minor (determinant of a submatrix) is an important tool in classical matrix theory. It can be interpreted in terms of the Grassmann algebra as certain coefficients. This gives minors of $(A_n,A_m)$-Manin matrices, which we defined in Subsection 3.1 as column determinants of submatrices. The same can be done for $q$- and $(\hat q,\hat p)$-Manin matrices by considering the quadratic algebras $\Xi_A(\mathbb{C})$ for $A = A^q_n$ and $A^{\hat q}$ respectively. There is a dual notion of minors corresponding to the quadratic algebras $X_A(\mathbb{C})$. In the case of the polynomial algebras these dual minors are written via permanents.

For a general $(A,B)$-Manin matrix $M$ we define two types of minors corresponding to the homomorphisms $f_M\colon X_A(\mathbb{C})\to X_B(\mathfrak{R})$ and $f_M\colon\Xi_B(\mathbb{C})\to\Xi_A(\mathfrak{R})$.
In fact these minors give the graded components of these homomorphisms. As with usual minors, they behave well under multiplication of Manin matrices and under permutations of rows and columns. In future works we hope to find more properties of these minors by generalising the properties of the usual minors.

5.1 The q-minors and permanents

First we recall that the $q$-determinant (3.28) is the coefficient of proportionality for the product of the $q$-Grassmann variables [CFRS] (see also the formula (3.46) for the $\hat q$-version). Namely, let $\psi_1,\dots,\psi_n$ be the generators of $\Xi_{A^q_n}(\mathbb{C})$, let $M$ be an $n\times n$ matrix over an algebra $\mathfrak{R}$ and $\phi_j = \sum_{i=1}^n\psi_iM_{ij}$, where $j = 1,\dots,n$; then
$$\phi_1\phi_2\cdots\phi_n = \mathrm{det}_q(M)\,\psi_1\psi_2\cdots\psi_n. \qquad (5.1)$$
More generally, let $M$ be an $n\times m$ matrix over $\mathfrak{R}$ and $\phi_j = \sum_{i=1}^n\psi_iM_{ij}$, where $j = 1,\dots,m$. For two $k$-tuples of indices $I = (i_1,\dots,i_k)$ and $J = (j_1,\dots,j_k)$ we denote by $M_{IJ}$ the $k\times k$ matrix over $\mathfrak{R}$ with entries
$$(M_{IJ})_{ab} = M_{i_aj_b}, \qquad a,b = 1,\dots,k, \qquad (5.2)$$
where we suppose $1\le i_a\le n$ and $1\le j_b\le m$. If $i_1 < \dots < i_k$ and $j_1 < \dots < j_k$ then $M_{IJ}$ is a $k\times k$ submatrix of $M$ and $\mathrm{det}_q(M_{IJ})$ is a $q$-analogue of a minor defined in Subsection 3.1. The $q$-determinants $\mathrm{det}_q(M_{IJ})$ are the coefficients in the decomposition
$$\phi_{j_1}\phi_{j_2}\cdots\phi_{j_k} = \sum_{I=(i_1<\dots<i_k)} \mathrm{det}_q(M_{IJ})\,\psi_{i_1}\cdots\psi_{i_k}. \qquad (5.3)$$

5.2 Dual quadratic algebras and their pairings
In contrast to the situation described in Remark 2.31, one can not correctly define an operation on quadratic algebras by mapping an algebra $X_A(\mathbb{C})$ to $X^*_A(\mathbb{C})$. This happens because the equality $X_A(\mathbb{C}) = X_{A'}(\mathbb{C})$ does not imply an isomorphism of $X^*_A(\mathbb{C})$ and $X^*_{A'}(\mathbb{C})$ (see Subsection 5.6 for details).

To write the formulae (5.6), (5.7) in a matrix form we introduce some conventions. Let $V$ and $W$ be vector spaces and $W^* = \mathrm{Hom}(W,\mathbb{C})$. The product $\pi\xi$ of elements $\pi\in V$ and $\xi\in W^*$ is usually identified with the linear operator $W\to V$ acting as $(\pi\xi)(w) = \xi(w)\cdot\pi$, $w\in W$. For example, $e_ie^j = E_{ij}\in\mathrm{End}(\mathbb{C}^n)$. More generally, $e_{i_1\dots i_k}e^{j_1\dots j_k} = E^{(1)}_{i_1j_1}\cdots E^{(k)}_{i_kj_k}$, where $e^{j_1\dots j_k} = e^{j_1}\otimes\cdots\otimes e^{j_k}$ and $e_{i_1\dots i_k} = e_{i_1}\otimes\cdots\otimes e_{i_k}$.

Let us introduce the notation $\langle\alpha,\beta\rangle$ for $\alpha\in X_A(\mathbb{C})\otimes V$ and $\beta\in X^*_A(\mathbb{C})\otimes W^*$. The pairing acts on the first tensor factors, while in the second factors the elements are multiplied as above: $\langle u\pi, v\xi\rangle = \langle u,v\rangle\,\pi\xi\in\mathrm{Hom}(W,V)$, where $u\in X_A(\mathbb{C})$, $v\in X^*_A(\mathbb{C})$, $\pi\in V$ and $\xi\in W^*$. In particular,
$$\Big\langle \sum_{i_1,\dots,i_k} u_{i_1\dots i_k}\,e_{i_1\dots i_k},\ \sum_{j_1,\dots,j_k} v^{j_1\dots j_k}\,e^{j_1\dots j_k}\Big\rangle = \sum_{i_1,\dots,i_k,\,j_1,\dots,j_k} \langle u_{i_1\dots i_k},\,v^{j_1\dots j_k}\rangle\, E^{(1)}_{i_1j_1}\cdots E^{(k)}_{i_kj_k}.$$
By using this notation we can write the pairing of degree 1 elements in matrix form:
$$\langle X, X^*\rangle = \sum_{i,j}\langle x_i, x^j\rangle\,E_{ij} = \sum_{i,j}\delta_{ij}E_{ij} = 1. \qquad (5.8)$$
Consider the operators $S^{(k)} = \sum_{i_1,\dots,i_k,\,j_1,\dots,j_k} S^{i_1\dots i_k}_{j_1\dots j_k} E^{(1)}_{i_1j_1}\cdots E^{(k)}_{i_kj_k}\in\mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$. For $k=1$ the operator $S^{(1)}$ is the $n\times n$ unit matrix. The formula (5.6) in matrix form reads
$$\langle X\otimes\cdots\otimes X,\ X^*\otimes\cdots\otimes X^*\rangle = S^{(k)} \qquad (5.9)$$
(here and below dots mean that we have $k$ tensor factors). The condition (5.7) is written as
$$A^{(a,a+1)}S^{(k)} = S^{(k)}A^{(a,a+1)} = 0, \qquad (5.10)$$
$a = 1,\dots,k-1$. In terms of $P = 1 - 2A$ we can rewrite (5.10) in the form $P^{(a,a+1)}S^{(k)} = S^{(k)}P^{(a,a+1)} = S^{(k)}$.
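For $q = 1$ the $q$-Grassmann algebra becomes the ordinary Grassmann algebra and (5.1) reduces to the classical characterisation of the determinant. The following minimal sketch (our own encoding of Grassmann monomials as sorted index tuples with signs) verifies this specialisation numerically.

```python
import numpy as np

def grassmann_mul(x, y):
    # multiply two Grassmann-algebra elements given as {tuple(sorted indices): coeff}
    out = {}
    for ix, cx in x.items():
        for iy, cy in y.items():
            if set(ix) & set(iy):
                continue  # psi_i^2 = 0
            sign, idx = 1, list(ix + iy)
            # bubble-sort the indices, tracking the sign (psi_j psi_i = -psi_i psi_j)
            for a in range(len(idx)):
                for b in range(len(idx) - 1 - a):
                    if idx[b] > idx[b + 1]:
                        idx[b], idx[b + 1] = idx[b + 1], idx[b]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0.0) + sign * cx * cy
    return out

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))

# phi_j = sum_i psi_i M_ij as degree-1 Grassmann elements
phi = [{(i,): M[i, j] for i in range(n)} for j in range(n)]
prod = phi[0]
for f in phi[1:]:
    prod = grassmann_mul(prod, f)

# coefficient of psi_1...psi_n in phi_1...phi_n equals det(M), as in (5.1) with q = 1
top = prod[tuple(range(n))]
assert np.isclose(top, np.linalg.det(M))
print("phi_1...phi_n = det(M) psi_1...psi_n verified")
```

The lower-degree coefficients of such products are exactly the minors of (5.3) in this commutative specialisation.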
Analogously, one can define the algebra $\Xi^*_A(\mathbb{C})$ generated by the elements $\psi^1,\dots,\psi^n$ over $\mathbb{C}$ with the commutation relations $\psi^i\psi^j = \sum_{k,l} A_{ij,kl}\psi^k\psi^l$, i.e.
$$(1-A)(\Psi^*\otimes\Psi^*) = 0, \qquad (5.11)$$
where $\Psi^* = \sum_{i=1}^n\psi^ie_i$. The pairing $\langle\cdot,\cdot\rangle\colon\Xi^*_A(\mathbb{C})\times\Xi_A(\mathbb{C})\to\mathbb{C}$ is defined by the formula
$$\langle\psi^{i_1}\cdots\psi^{i_l},\ \psi_{j_1}\cdots\psi_{j_k}\rangle = \delta_{kl}\,A^{i_1\dots i_k}_{j_1\dots j_k}, \qquad (5.12)$$
where $A^{i_1\dots i_k}_{j_1\dots j_k}\in\mathbb{C}$, $A^i_j = \delta^i_j$, $A^{\varnothing}_{\varnothing} = 1$. The formula (5.12) for $k = l$ can be written as
$$\langle\Psi^*\otimes\cdots\otimes\Psi^*,\ \Psi\otimes\cdots\otimes\Psi\rangle = A^{(k)}, \qquad (5.13)$$
where $A^{(k)} = \sum_{i_1,\dots,i_k,\,j_1,\dots,j_k} A^{i_1\dots i_k}_{j_1\dots j_k} E^{(1)}_{i_1j_1}\cdots E^{(k)}_{i_kj_k}$, $A^{(1)} = 1$. Again, we should require
$$S^{(a,a+1)}A^{(k)} = A^{(k)}S^{(a,a+1)} = 0, \qquad (5.14)$$
where $a = 1,\dots,k-1$ and $S = 1-A$, or, equivalently, $P^{(a,a+1)}A^{(k)} = A^{(k)}P^{(a,a+1)} = -A^{(k)}$.

Lemma 5.3. Let $V$ and $W$ be vector spaces. For any operators $T\in\mathrm{Hom}\big((\mathbb{C}^n)^{\otimes k}, V\big)$ and $\tilde T\in\mathrm{Hom}\big(W,(\mathbb{C}^n)^{\otimes k}\big)$ we have
$$\big\langle T(X\otimes\cdots\otimes X),\ (X^*\otimes\cdots\otimes X^*)\tilde T\big\rangle = TS^{(k)}\tilde T, \qquad (5.15)$$
$$\big\langle T(\Psi^*\otimes\cdots\otimes\Psi^*),\ (\Psi\otimes\cdots\otimes\Psi)\tilde T\big\rangle = TA^{(k)}\tilde T \qquad (5.16)$$
(we understand $T(X\otimes\cdots\otimes X)$, $(X^*\otimes\cdots\otimes X^*)\tilde T$, $T(\Psi^*\otimes\cdots\otimes\Psi^*)$ and $(\Psi\otimes\cdots\otimes\Psi)\tilde T$ as elements of $X_A(\mathbb{C})\otimes V$, $X^*_A(\mathbb{C})\otimes W^*$, $\Xi^*_A(\mathbb{C})\otimes V$ and $\Xi_A(\mathbb{C})\otimes W^*$ respectively).

Proof. By substituting $T(X\otimes\cdots\otimes X) = \sum_{i_1,\dots,i_k} x_{i_1}\cdots x_{i_k}\,Te_{i_1\dots i_k}$ and $(X^*\otimes\cdots\otimes X^*)\tilde T = \sum_{j_1,\dots,j_k} x^{j_1}\cdots x^{j_k}\,e^{j_1\dots j_k}\tilde T$ we derive
$$\big\langle T(X\otimes\cdots\otimes X),\ (X^*\otimes\cdots\otimes X^*)\tilde T\big\rangle = \sum_{i_1,\dots,i_k,\,j_1,\dots,j_k}\langle x_{i_1}\cdots x_{i_k},\,x^{j_1}\cdots x^{j_k}\rangle\,Te_{i_1\dots i_k}e^{j_1\dots j_k}\tilde T = \sum_{i_1,\dots,i_k,\,j_1,\dots,j_k} S^{i_1\dots i_k}_{j_1\dots j_k}\,TE^{(1)}_{i_1j_1}\cdots E^{(k)}_{i_kj_k}\tilde T = TS^{(k)}\tilde T.$$
The formula (5.16) is proved in a similar way (it can be considered as the formula (5.15) for the algebras $\Xi_A(\mathbb{C}) = X^*_{1-A}(\mathbb{C})$ and $\Xi^*_A(\mathbb{C}) = X_{1-A}(\mathbb{C})$).
Corollary 5.4. Let $T\in\mathrm{Hom}\big((\mathbb{C}^n)^{\otimes k}, V\big)$ and $\tilde T\in\mathrm{Hom}\big(W,(\mathbb{C}^n)^{\otimes k}\big)$.
(a) If $T(X\otimes\cdots\otimes X) = 0$ then $TS^{(k)} = 0$.
(b) If $(X^*\otimes\cdots\otimes X^*)\tilde T = 0$ then $S^{(k)}\tilde T = 0$.
(c) If $(\Psi\otimes\cdots\otimes\Psi)\tilde T = 0$ then $A^{(k)}\tilde T = 0$.
(d) If $T(\Psi^*\otimes\cdots\otimes\Psi^*) = 0$ then $TA^{(k)} = 0$.
In particular, these statements are valid for $T,\tilde T\in\mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ and for $T\in\big((\mathbb{C}^n)^{\otimes k}\big)^*$, $\tilde T\in(\mathbb{C}^n)^{\otimes k}$.

Proof. The statement (a) is obtained from (5.15) in the case $W = (\mathbb{C}^n)^{\otimes k}$, $\tilde T = 1\in\mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$. For $V = (\mathbb{C}^n)^{\otimes k}$, $T = 1\in\mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ we obtain (b). The statements (c) and (d) are obtained from (5.16) in the same way. In the cases $V = W = (\mathbb{C}^n)^{\otimes k}$ and $V = W = \mathbb{C}$ we obtain $\mathrm{Hom}\big((\mathbb{C}^n)^{\otimes k}, V\big) = \mathrm{Hom}\big(W,(\mathbb{C}^n)^{\otimes k}\big) = \mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ and $\mathrm{Hom}\big((\mathbb{C}^n)^{\otimes k}, V\big) = \big((\mathbb{C}^n)^{\otimes k}\big)^*$, $\mathrm{Hom}\big(W,(\mathbb{C}^n)^{\otimes k}\big) = (\mathbb{C}^n)^{\otimes k}$ respectively.

5.3 Pairing operators

Let $A\in\mathrm{End}(\mathbb{C}^n\otimes\mathbb{C}^n)$ be an idempotent and $S = 1-A$. We formulate some conditions on the operators (5.9) and (5.13) that guarantee the non-degeneracy of the corresponding pairings.

Definition 5.5. Operators $S^{(k)}, A^{(k)}\in\mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ are called pairing operators (for the idempotent $A$) if they satisfy the following conditions:
$$A^{(a,a+1)}S^{(k)} = S^{(k)}A^{(a,a+1)} = 0, \qquad (5.17)$$
$$(X^*\otimes\cdots\otimes X^*)S^{(k)} = X^*\otimes\cdots\otimes X^*, \qquad (5.18)$$
$$S^{(k)}(X\otimes\cdots\otimes X) = X\otimes\cdots\otimes X, \qquad (5.19)$$
$$S^{(a,a+1)}A^{(k)} = A^{(k)}S^{(a,a+1)} = 0, \qquad (5.20)$$
$$(\Psi\otimes\cdots\otimes\Psi)A^{(k)} = \Psi\otimes\cdots\otimes\Psi, \qquad (5.21)$$
$$A^{(k)}(\Psi^*\otimes\cdots\otimes\Psi^*) = \Psi^*\otimes\cdots\otimes\Psi^*. \qquad (5.22)$$
We call them the $k$-th $S$-operator and the $k$-th $A$-operator.

The conditions (5.19) and (5.21) for $k = 1$ imply that $S^{(1)} = A^{(1)} = 1$.
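In the classical example $A = \frac{1}{2}(1-P_n)$ (so that $X_A(\mathbb{C})$ is the polynomial algebra and $\Xi_A(\mathbb{C})$ the Grassmann algebra) the pairing operators are the full symmetrizer and antisymmetrizer on $k$ tensor factors. A numerical sketch checking the operator conditions (5.17), (5.20) and the idempotency/orthogonality of Proposition 5.7 for $n = k = 3$:

```python
import numpy as np
from itertools import permutations

n, k = 3, 3
N = n ** k

def perm_op(sigma):
    # operator on (C^n)^{⊗3} permuting the tensor factors according to sigma
    T = np.zeros((N, N))
    for idx in np.ndindex(*([n] * k)):
        tgt = tuple(idx[sigma[a]] for a in range(k))
        T[np.ravel_multi_index(tgt, [n] * k), np.ravel_multi_index(idx, [n] * k)] = 1.0
    return T

def sgn(sigma):
    return (-1) ** sum(sigma[a] > sigma[b] for a in range(k) for b in range(a + 1, k))

# candidate pairing operators: full symmetrizer and antisymmetrizer on three factors
Sk = sum(perm_op(s) for s in permutations(range(k))) / 6.0
Ak = sum(sgn(s) * perm_op(s) for s in permutations(range(k))) / 6.0

# two-factor idempotents A = (1-P)/2, S = 1-A embedded at positions (a, a+1)
P2 = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        P2[j * n + i, i * n + j] = 1.0
A2, S2 = (np.eye(n * n) - P2) / 2, (np.eye(n * n) + P2) / 2
A12, A23 = np.kron(A2, np.eye(n)), np.kron(np.eye(n), A2)
S12, S23 = np.kron(S2, np.eye(n)), np.kron(np.eye(n), S2)

for B in (A12, A23):
    assert np.allclose(B @ Sk, 0) and np.allclose(Sk @ B, 0)  # (5.17)
for B in (S12, S23):
    assert np.allclose(B @ Ak, 0) and np.allclose(Ak @ B, 0)  # (5.20)
# Proposition 5.7: idempotency and orthogonality
assert np.allclose(Sk @ Sk, Sk) and np.allclose(Ak @ Ak, Ak)
assert np.allclose(Sk @ Ak, 0) and np.allclose(Ak @ Sk, 0)
print("pairing-operator conditions hold for the (anti)symmetrizers")
```

Note also that here $S^{(3)} + A^{(3)} \ne 1$, illustrating the remark below Proposition 5.7.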
For $k = 2$ the equations (5.17)–(5.19) and (5.20)–(5.22) have the solutions $S^{(2)} = S$ and $A^{(2)} = A$ respectively. Let us prove the uniqueness of the solutions of these equations for any $k$ (we do not prove their existence for $k > 2$).

Proposition 5.6. The pairing operators are unique. In particular, $S^{(2)} = S$ and $A^{(2)} = A$.

Proof. Let $S'^{(k)}$ and $\bar S^{(k)}$ be $k$-th $S$-operators for the same idempotent $A$. By applying part (a) of Corollary 5.4 for $S^{(k)} = S'^{(k)}$, $T = 1 - \bar S^{(k)}$ and part (b) of Corollary 5.4 for $S^{(k)} = \bar S^{(k)}$, $\tilde T = 1 - S'^{(k)}$ we obtain $S'^{(k)} = \bar S^{(k)}S'^{(k)}$ and $\bar S^{(k)} = \bar S^{(k)}S'^{(k)}$. Hence we obtain $S'^{(k)} = \bar S^{(k)}$. The uniqueness of the $A$-operators follows similarly from parts (c) and (d) of Corollary 5.4.

Proposition 5.7. Pairing operators $S^{(k)}$ and $A^{(k)}$ are idempotents; they are orthogonal for $k\ge 2$:
$$\big(S^{(k)}\big)^2 = S^{(k)}, \qquad \big(A^{(k)}\big)^2 = A^{(k)}, \qquad S^{(k)}A^{(k)} = A^{(k)}S^{(k)} = 0, \qquad (5.23)$$
$$(X^*\otimes\cdots\otimes X^*)A^{(k)} = 0, \qquad A^{(k)}(X\otimes\cdots\otimes X) = 0, \qquad (5.24)$$
$$(\Psi\otimes\cdots\otimes\Psi)S^{(k)} = 0, \qquad S^{(k)}(\Psi^*\otimes\cdots\otimes\Psi^*) = 0. \qquad (5.25)$$

Proof. Due to Corollary 5.4 the formula (5.19) gives $\big(S^{(k)}\big)^2 = S^{(k)}$, while the formula (5.21) implies $\big(A^{(k)}\big)^2 = A^{(k)}$. Further, from (5.17) and (5.20) we derive $S^{(k)}A^{(k)} = S^{(k)}P^{(a,a+1)}A^{(k)} = -S^{(k)}A^{(k)}$ and $A^{(k)}S^{(k)} = A^{(k)}P^{(a,a+1)}S^{(k)} = -A^{(k)}S^{(k)}$. The formulae (5.24) and (5.25) follow from the orthogonality and (5.18), (5.19), (5.21), (5.22). For example, we have $A^{(k)}(X\otimes\cdots\otimes X) = A^{(k)}S^{(k)}(X\otimes\cdots\otimes X) = 0$.

Below we suppose that $S^{(k)}$ and $A^{(k)}$ are pairing operators for an idempotent $A$. Note that in general the sum $S^{(k)} + A^{(k)}$ does not coincide with the identity operator.

The following formulae generalise the equalities (5.23). They are proved in the same way.

Proposition 5.8.
If $\ell\le k$ then
$$\big(S^{(\ell)}\big)^{(a,\dots,a+\ell-1)}S^{(k)} = S^{(k)}\big(S^{(\ell)}\big)^{(a,\dots,a+\ell-1)} = S^{(k)}, \qquad \big(A^{(\ell)}\big)^{(a,\dots,a+\ell-1)}A^{(k)} = A^{(k)}\big(A^{(\ell)}\big)^{(a,\dots,a+\ell-1)} = A^{(k)},$$
$$\big(A^{(\ell)}\big)^{(a,\dots,a+\ell-1)}S^{(k)} = S^{(k)}\big(A^{(\ell)}\big)^{(a,\dots,a+\ell-1)} = 0, \qquad \big(S^{(\ell)}\big)^{(a,\dots,a+\ell-1)}A^{(k)} = A^{(k)}\big(S^{(\ell)}\big)^{(a,\dots,a+\ell-1)} = 0,$$
where $a = 1,\dots,k-\ell+1$ and we used the notation $T^{(a,\dots,a+\ell-1)} = \mathrm{id}^{\otimes(a-1)}\otimes T\otimes\mathrm{id}^{\otimes(k-\ell-a+1)}$ for $T\in\mathrm{End}\big((\mathbb{C}^n)^{\otimes\ell}\big)$.

Since $\Xi_A(\mathbb{C}) = X^*_S(\mathbb{C})$ and $\Xi^*_A(\mathbb{C}) = X_S(\mathbb{C})$, the conditions (5.17), (5.18), (5.19) interchange with the conditions (5.20), (5.21), (5.22) under the substitutions
$$A\leftrightarrow S, \qquad P\leftrightarrow -P, \qquad A^{(k)}\leftrightarrow S^{(k)}, \qquad X\leftrightarrow\Psi^*, \qquad X^*\leftrightarrow\Psi. \qquad (5.26)$$
Thus if $S^{(k)}$ and $A^{(k)}$ are pairing operators for $A$, then $A^{(k)}$ and $S^{(k)}$ are pairing operators for $S = 1-A$.

The following property of the pairing operators shows their role for the quadratic algebras. Recall that we have the identifications $X_A(\mathbb{C})_1 = (\mathbb{C}^n)^*$ and $\Xi_A(\mathbb{C})_1 = \mathbb{C}^n$. Let us identify the higher graded components $X_A(\mathbb{C})_k$ and $\Xi_A(\mathbb{C})_k$ with subspaces of $\big((\mathbb{C}^n)^{\otimes k}\big)^*$ and $(\mathbb{C}^n)^{\otimes k}$ by using the idempotents $S^{(k)}$ and $A^{(k)}$ respectively.

Proposition 5.9. We have the following isomorphisms of vector spaces:
$$\big((\mathbb{C}^n)^{\otimes k}\big)^*S^{(k)} \cong X_A(\mathbb{C})_k, \qquad \xi\mapsto\xi(X\otimes\cdots\otimes X), \qquad (5.27)$$
$$S^{(k)}\big((\mathbb{C}^n)^{\otimes k}\big) \cong X^*_A(\mathbb{C})_k, \qquad \pi\mapsto(X^*\otimes\cdots\otimes X^*)\pi, \qquad (5.28)$$
$$A^{(k)}\big((\mathbb{C}^n)^{\otimes k}\big) \cong \Xi_A(\mathbb{C})_k, \qquad \pi\mapsto(\Psi\otimes\cdots\otimes\Psi)\pi, \qquad (5.29)$$
$$\big((\mathbb{C}^n)^{\otimes k}\big)^*A^{(k)} \cong \Xi^*_A(\mathbb{C})_k, \qquad \xi\mapsto\xi(\Psi^*\otimes\cdots\otimes\Psi^*). \qquad (5.30)$$
The pairings $\langle\cdot,\cdot\rangle\colon X_A(\mathbb{C})_k\times X^*_A(\mathbb{C})_k\to\mathbb{C}$ and $\langle\cdot,\cdot\rangle\colon\Xi^*_A(\mathbb{C})_k\times\Xi_A(\mathbb{C})_k\to\mathbb{C}$ correspond to the multiplication $(\xi,\pi)\mapsto\xi\pi\in\mathbb{C}$.

Proof. All these maps are linear. Consider the map (5.27). Due to (5.19) the image of $\xi = e^{i_1\dots i_k}S^{(k)}$ is $e^{i_1\dots i_k}S^{(k)}(X\otimes\cdots\otimes X) = e^{i_1\dots i_k}(X\otimes\cdots\otimes X) = x_{i_1}\cdots x_{i_k}$. Hence this map is surjective.
Let us check injectivity. If $\xi(X\otimes\cdots\otimes X) = 0$ then by using Corollary 5.4 for $T = \xi$ we obtain $\xi S^{(k)} = 0$. Since $\xi\in\big((\mathbb{C}^n)^{\otimes k}\big)^*S^{(k)}$ we have $\xi = \xi S^{(k)} = 0$. The bijectivity of the other maps is proved in the same way. The last statement follows from Lemma 5.3; for example, $\big\langle\xi(X\otimes\cdots\otimes X),\ (X^*\otimes\cdots\otimes X^*)\pi\big\rangle = \xi S^{(k)}\pi = \xi\pi$.

Let $d_k = \mathrm{rk}\,S^{(k)}$ and $r_k = \mathrm{rk}\,A^{(k)}$. Then Proposition 5.9 implies
$$\dim X_A(\mathbb{C})_k = \dim X^*_A(\mathbb{C})_k = d_k, \qquad \dim\Xi_A(\mathbb{C})_k = \dim\Xi^*_A(\mathbb{C})_k = r_k. \qquad (5.31)$$

Proposition 5.10. Let $S^{(k)}$ and $A^{(k)}$ be the pairing operators for the idempotent $A$. Then the pairings $\langle\cdot,\cdot\rangle\colon X_A(\mathbb{C})\times X^*_A(\mathbb{C})\to\mathbb{C}$ and $\langle\cdot,\cdot\rangle\colon\Xi^*_A(\mathbb{C})\times\Xi_A(\mathbb{C})\to\mathbb{C}$ defined by the formulae (5.9) and (5.13) are non-degenerate.

Proof. Since the pairing $\langle\cdot,\cdot\rangle\colon X_A(\mathbb{C})\times X^*_A(\mathbb{C})\to\mathbb{C}$ respects the grading, its non-degeneracy is equivalent to the non-degeneracy of the restricted pairings $\langle\cdot,\cdot\rangle\colon X_A(\mathbb{C})_k\times X^*_A(\mathbb{C})_k\to\mathbb{C}$. Due to Proposition 5.9 it is enough to prove that the pairings $\big((\mathbb{C}^n)^{\otimes k}\big)^*S^{(k)}\times S^{(k)}\big((\mathbb{C}^n)^{\otimes k}\big)\to\mathbb{C}$ given by $(\xi,\pi)\mapsto\xi\pi$ are non-degenerate, but this fact follows from part (1) of Proposition 5.1. The non-degeneracy of the pairing $\langle\cdot,\cdot\rangle\colon\Xi^*_A(\mathbb{C})\times\Xi_A(\mathbb{C})\to\mathbb{C}$ is obtained in the same way.

The space $(\mathbb{C}^n)^{\otimes k}$ is decomposed into the direct sum of $S^{(k)}(\mathbb{C}^n)^{\otimes k}$ and $(1-S^{(k)})(\mathbb{C}^n)^{\otimes k}$. Let $v_\alpha\in(\mathbb{C}^n)^{\otimes k}$ be eigenvectors: $S^{(k)}v_\alpha = v_\alpha$ for $\alpha = 1,\dots,d_k$ and $S^{(k)}v_\alpha = 0$ for $\alpha = d_k+1,\dots,n^k$. Then $(v_\alpha)$ is a basis of $(\mathbb{C}^n)^{\otimes k}$ such that $(v_\alpha)_{\alpha\le d_k}$ and $(v_\alpha)_{\alpha>d_k}$ are bases of the subspaces $S^{(k)}(\mathbb{C}^n)^{\otimes k}$ and $(1-S^{(k)})(\mathbb{C}^n)^{\otimes k}$ respectively. Let $(v^\alpha)$ be the dual basis of $\big((\mathbb{C}^n)^{\otimes k}\big)^*$, i.e. $v^\alpha v_\beta = \delta^\alpha_\beta$. Since $v^\alpha S^{(k)}v_\beta = \delta^\alpha_\beta$ for $\beta\le d_k$ and $v^\alpha S^{(k)}v_\beta = 0$ for $\beta > d_k$, we obtain $v^\alpha S^{(k)} = v^\alpha$ for $\alpha = 1,\dots$
$d_k$ and $v^\alpha S^{(k)} = 0$ for $\alpha = d_k+1,\dots,n^k$. In coordinates we have $v_\alpha = \sum_{i_1,\dots,i_k} v^{i_1\dots i_k}_\alpha e_{i_1\dots i_k}$, $v^\alpha = \sum_{i_1,\dots,i_k} v^\alpha_{i_1\dots i_k}e^{i_1\dots i_k}$ for some $v^{i_1\dots i_k}_\alpha, v^\alpha_{i_1\dots i_k}\in\mathbb{C}$ such that $\sum_{i_1,\dots,i_k} v^{i_1\dots i_k}_\alpha v^\beta_{i_1\dots i_k} = \delta^\beta_\alpha$ and $\sum_{\alpha=1}^{n^k} v^{i_1\dots i_k}_\alpha v^\alpha_{j_1\dots j_k} = \delta^{i_1}_{j_1}\cdots\delta^{i_k}_{j_k}$.

Analogously, let $w_\alpha = \sum_{i_1,\dots,i_k} w^{i_1\dots i_k}_\alpha e_{i_1\dots i_k}$ and $w^\alpha = \sum_{i_1,\dots,i_k} w^\alpha_{i_1\dots i_k}e^{i_1\dots i_k}$ form dual bases of $(\mathbb{C}^n)^{\otimes k}$ and $\big((\mathbb{C}^n)^{\otimes k}\big)^*$ such that $A^{(k)}w_\alpha = w_\alpha$ for $\alpha = 1,\dots,r_k$ and $A^{(k)}w_\alpha = 0$ for $\alpha = r_k+1,\dots,n^k$. Then $w^\alpha A^{(k)} = w^\alpha$ for $\alpha = 1,\dots,r_k$ and $w^\alpha A^{(k)} = 0$ for $\alpha = r_k+1,\dots,n^k$.

Proposition 5.11. The elements
$$x^\alpha_{(k)} = v^\alpha(X\otimes\cdots\otimes X) = \sum_{i_1,\dots,i_k} v^\alpha_{i_1\dots i_k}\,x_{i_1}\cdots x_{i_k}, \qquad \alpha = 1,\dots,d_k, \qquad (5.32)$$
$$x^{(k)}_\alpha = (X^*\otimes\cdots\otimes X^*)v_\alpha = \sum_{i_1,\dots,i_k} v^{i_1\dots i_k}_\alpha\,x^{i_1}\cdots x^{i_k}, \qquad \alpha = 1,\dots,d_k, \qquad (5.33)$$
$$\psi^{(k)}_\alpha = (\Psi\otimes\cdots\otimes\Psi)w_\alpha = \sum_{i_1,\dots,i_k} w^{i_1\dots i_k}_\alpha\,\psi_{i_1}\cdots\psi_{i_k}, \qquad \alpha = 1,\dots,r_k, \qquad (5.34)$$
$$\psi^\alpha_{(k)} = w^\alpha(\Psi^*\otimes\cdots\otimes\Psi^*) = \sum_{i_1,\dots,i_k} w^\alpha_{i_1\dots i_k}\,\psi^{i_1}\cdots\psi^{i_k}, \qquad \alpha = 1,\dots,r_k, \qquad (5.35)$$
form dual bases of the $k$-th graded components $X_A(\mathbb{C})_k$, $X^*_A(\mathbb{C})_k$, $\Xi_A(\mathbb{C})_k$, $\Xi^*_A(\mathbb{C})_k$.

Proof. This is a consequence of Proposition 5.9 and the fact that $(v^\alpha)_{\alpha=1}^{d_k}$, $(v_\alpha)_{\alpha=1}^{d_k}$, $(w_\alpha)_{\alpha=1}^{r_k}$ and $(w^\alpha)_{\alpha=1}^{r_k}$ are dual bases of $\big((\mathbb{C}^n)^{\otimes k}\big)^*S^{(k)}$, $S^{(k)}\big((\mathbb{C}^n)^{\otimes k}\big)$, $A^{(k)}\big((\mathbb{C}^n)^{\otimes k}\big)$ and $\big((\mathbb{C}^n)^{\otimes k}\big)^*A^{(k)}$ respectively.

In contrast to the uniqueness, the existence of the pairing operators is not guaranteed for an arbitrary idempotent $A$. In many interesting cases the pairing operators can be found explicitly. In the general situation we can only claim the existence of $S^{(1)}$, $A^{(1)}$, $S^{(2)}$ and $A^{(2)}$. In Subsection 6.3 we consider cases when the third pairing operators do not exist.
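In the classical polynomial/Grassmann example the dimensions (5.31) are the familiar binomial coefficients: $d_k = \binom{n+k-1}{k}$ and $r_k = \binom{n}{k}$. A short numerical check of this, computing $d_k = \mathrm{rk}\,S^{(k)}$ and $r_k = \mathrm{rk}\,A^{(k)}$ for the full (anti)symmetrizers:

```python
import numpy as np
from itertools import permutations
from math import comb, factorial

n = 3

def perm_op(sigma, n):
    # operator permuting the tensor factors of (C^n)^{⊗k} according to sigma
    k = len(sigma)
    T = np.zeros((n ** k, n ** k))
    for idx in np.ndindex(*([n] * k)):
        tgt = tuple(idx[sigma[a]] for a in range(k))
        T[np.ravel_multi_index(tgt, [n] * k), np.ravel_multi_index(idx, [n] * k)] = 1.0
    return T

def sgn(sigma):
    k = len(sigma)
    return (-1) ** sum(sigma[a] > sigma[b] for a in range(k) for b in range(a + 1, k))

for k in (2, 3, 4):
    Sk = sum(perm_op(s, n) for s in permutations(range(k))) / factorial(k)
    Ak = sum(sgn(s) * perm_op(s, n) for s in permutations(range(k))) / factorial(k)
    d_k, r_k = np.linalg.matrix_rank(Sk), np.linalg.matrix_rank(Ak)
    assert d_k == comb(n + k - 1, k)  # dim of degree-k symmetric polynomials in n variables
    assert r_k == comb(n, k)          # dim of the degree-k Grassmann component
    print(f"k={k}: d_k={d_k}, r_k={r_k}")
```

For $k = 4 > n$ the antisymmetrizer vanishes ($r_4 = 0$), matching the vanishing of the Grassmann component of degree above $n$.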
Now we give necessary and sufficient conditions for the existence of a pairing operator.

Theorem 5.12. Let $A\in\mathrm{End}(\mathbb{C}^n\otimes\mathbb{C}^n)$ be an idempotent and $k\ge 2$. Consider the subspaces
$$V_k = \{\pi\in(\mathbb{C}^n)^{\otimes k}\mid A^{(a,a+1)}\pi = 0,\ a = 1,\dots,k-1\}, \qquad \overline V_k = \{\xi\in\big((\mathbb{C}^n)^{\otimes k}\big)^*\mid \xi A^{(a,a+1)} = 0,\ a = 1,\dots,k-1\},$$
$$W_k = \{\pi\in(\mathbb{C}^n)^{\otimes k}\mid S^{(a,a+1)}\pi = 0,\ a = 1,\dots,k-1\}, \qquad \overline W_k = \{\xi\in\big((\mathbb{C}^n)^{\otimes k}\big)^*\mid \xi S^{(a,a+1)} = 0,\ a = 1,\dots,k-1\}.$$
We have isomorphisms $X_A(\mathbb{C})_k\cong V_k^*$, $X^*_A(\mathbb{C})_k\cong\overline V_k^{\,*}$, $\Xi_A(\mathbb{C})_k\cong\overline W_k^{\,*}$, $\Xi^*_A(\mathbb{C})_k\cong W_k^*$ given by the pairings
$$\big\langle\xi(X\otimes\cdots\otimes X),\ \pi\big\rangle = \xi\pi, \qquad \xi\in\big((\mathbb{C}^n)^{\otimes k}\big)^*,\ \pi\in V_k, \qquad (5.36)$$
$$\big\langle(X^*\otimes\cdots\otimes X^*)\pi,\ \xi\big\rangle = \xi\pi, \qquad \pi\in(\mathbb{C}^n)^{\otimes k},\ \xi\in\overline V_k, \qquad (5.37)$$
$$\big\langle(\Psi\otimes\cdots\otimes\Psi)\pi,\ \xi\big\rangle = \xi\pi, \qquad \pi\in(\mathbb{C}^n)^{\otimes k},\ \xi\in\overline W_k, \qquad (5.38)$$
$$\big\langle\xi(\Psi^*\otimes\cdots\otimes\Psi^*),\ \pi\big\rangle = \xi\pi, \qquad \xi\in\big((\mathbb{C}^n)^{\otimes k}\big)^*,\ \pi\in W_k. \qquad (5.39)$$
• The $k$-th $S$-operator $S^{(k)}$ for the idempotent $A$ exists iff the spaces $V_k$ and $\overline V_k$ are dual via the natural pairing, that is, $\dim V_k = \dim\overline V_k$ and there are bases $(v_\alpha)$ of $V_k$ and $(v^\alpha)$ of $\overline V_k$ such that $v^\alpha v_\beta = \delta^\alpha_\beta$. This pairing operator has the form $S^{(k)} = \sum_\alpha v_\alpha v^\alpha$.
• The $k$-th $A$-operator $A^{(k)}$ for the idempotent $A$ exists iff the spaces $W_k$ and $\overline W_k$ are dual via the natural pairing, that is, $\dim W_k = \dim\overline W_k$ and there are bases $(w_\alpha)$ of $W_k$ and $(w^\alpha)$ of $\overline W_k$ such that $w^\alpha w_\beta = \delta^\alpha_\beta$. This pairing operator is $A^{(k)} = \sum_\alpha w_\alpha w^\alpha$.

Proof. Due to the symmetry (5.26) it is enough to prove the statements concerning $V_k$ and $\overline V_k$. By definition the algebra $X^*_A(\mathbb{C})$ is the quotient of the tensor algebra $T\mathbb{C}^n = \bigoplus_{k\in\mathbb{N}}(\mathbb{C}^n)^{\otimes k}$ by its two-sided ideal $I$ generated by the elements $\sum_{s,t=1}^n e_{st}A_{st,ij}$. In the $k$-th graded component we have $X^*_A(\mathbb{C})_k = (\mathbb{C}^n)^{\otimes k}/I_k$, where $I_k$ is the subspace of $(\mathbb{C}^n)^{\otimes k}$ spanned by the vectors $\sum_{s,t=1}^n e_{i_1\dots i_{a-1}}\otimes(e_{st}A_{st,i_ai_{a+1}})\otimes e_{i_{a+2}\dots i_k} = A^{(a,a+1)}e_{i_1\dots i_k}$, $a = 1,\dots$
$k-1$, $i_1,\dots,i_k = 1,\dots,n$. That is, $I_k = \sum_{a=1}^{k-1}A^{(a,a+1)}(\mathbb{C}^n)^{\otimes k}$. This definition implies that the element $x^i$ is the class $[e_i]$, hence $[e_{i_1\dots i_k}] = [e_{i_1}\otimes\cdots\otimes e_{i_k}] = x^{i_1}\cdots x^{i_k} = (X^*\otimes\cdots\otimes X^*)e_{i_1\dots i_k}$, so the canonical projection $p_k\colon(\mathbb{C}^n)^{\otimes k}\to X^*_A(\mathbb{C})_k$ has the form $p_k\colon\pi\mapsto(X^*\otimes\cdots\otimes X^*)\pi$. The subspace orthogonal to $\mathrm{Ker}\,p_k = I_k$ is $I_k^\perp = \{\xi\in\big((\mathbb{C}^n)^{\otimes k}\big)^*\mid\xi I_k = 0\} = \overline V_k$.

Note that for any subspace $U_0$ of a vector space $U$ we have an isomorphism $U_0^\perp\cong(U/U_0)^*$ via the pairing $\langle[u],\lambda\rangle = \lambda(u)$, where $\lambda\in U_0^\perp$ and $[u]\in U/U_0$ is the class of a vector $u\in U$. Hence we obtain the isomorphism $\overline V_k = I_k^\perp\cong\big((\mathbb{C}^n)^{\otimes k}/I_k\big)^* = \big(X^*_A(\mathbb{C})_k\big)^*$ given by the pairing (5.37). In the same way we obtain the isomorphism $V_k\cong\big(X_A(\mathbb{C})_k\big)^*$.

If $S^{(k)}$ exists then there are dual bases $(v_\alpha)_{\alpha=1}^{d_k}$ and $(v^\alpha)_{\alpha=1}^{d_k}$ of the spaces $S^{(k)}(\mathbb{C}^n)^{\otimes k}$ and $\big((\mathbb{C}^n)^{\otimes k}\big)^*S^{(k)}$. Since $S^{(k)}A^{(a,a+1)} = 0$ we have $\big((\mathbb{C}^n)^{\otimes k}\big)^*S^{(k)}\subset\overline V_k$. The equality (5.31) and the isomorphism $\overline V_k\cong\big(X^*_A(\mathbb{C})_k\big)^*$ imply $\dim\overline V_k = \dim X^*_A(\mathbb{C})_k = d_k = \dim\big((\mathbb{C}^n)^{\otimes k}\big)^*S^{(k)}$, hence $\big((\mathbb{C}^n)^{\otimes k}\big)^*S^{(k)} = \overline V_k$. Similarly we obtain $\dim V_k = d_k$ and $S^{(k)}(\mathbb{C}^n)^{\otimes k} = V_k$. Thus $(v_\alpha)_{\alpha=1}^{d_k}$ and $(v^\alpha)_{\alpha=1}^{d_k}$ are dual bases of $V_k$ and $\overline V_k$ respectively.

Conversely, let $d = \dim V_k = \dim\overline V_k$ and let $(v_\alpha)_{\alpha=1}^{d}$ and $(v^\alpha)_{\alpha=1}^{d}$ be dual bases of $V_k$ and $\overline V_k$. Since the vectors of $V_k$ are not orthogonal to $\overline V_k$, they do not belong to $I_k$, that is $V_k\cap I_k = 0$. Moreover, $\dim I_k = n^k - \dim\overline V_k = n^k - \dim V_k$ and hence $(\mathbb{C}^n)^{\otimes k} = V_k\oplus I_k$. This implies that the restriction of the projection $p_k$ to the subspace $V_k$ is an isomorphism, and hence the elements $x^{(k)}_\alpha := (X^*\otimes\cdots\otimes X^*)v_\alpha$ form a basis of $X^*_A(\mathbb{C})_k$.
In particular, x i · · · x i k = P dα =1 c αi ...i k x ( k ) α for some c αi ...i k ∈ C , so that ( X ∗ ⊗ · · · ⊗ X ∗ ) = P dα =1 x ( k ) α ξ α where ξ α = P ni ,...,i k =1 c αi ...i k e i ...i k ∈ (cid:0) ( C n ) ⊗ k (cid:1) ∗ . By multiplying this by A ( a,a +1) from theright and taking into account (5.5) we obtain P dα =1 x ( k ) α ξ α A ( a,a +1) = 0. Since the elements x ( k ) α are linearly independent, we have ξ α A ( a,a +1) = 0 for all a = 1 , . . . , k − 1, so that ξ α ∈ V k .Multiplication of the same relation by v β from the right gives ξ α v β = δ αβ , hence ξ α = v α . Let S ( k ) = P dα =1 v α v α , then we have ( X ∗ ⊗ · · · ⊗ X ∗ ) S ( k ) = P dα,β =1 x ( k ) α v α v β v β = P dα =1 x ( k ) α v α =( X ∗ ⊗· · ·⊗ X ∗ ). Analogously we obtain S ( k ) ( X ⊗· · ·⊗ X ) = ( X ⊗· · ·⊗ X ). Since S ( k ) satisfiesalso A ( a,a +1) S ( k ) = S ( k ) A ( a,a +1) = 0 it is the k -th S -operator for the idempotent A .57 emark 5.13. Theorem 5.12 means in fact that the non-degeneracies of the pairings V k × V k → C and W k × W k → C given by h ξ, π i = ξπ imply that they induce non-degeneratepairings X ∗ A ( C ) k × X A ( C ) k → C and Ξ A ( C ) k × Ξ ∗ A ( C ) k → C respectively. Conversely, ifthere exists a non-degenerate pairing X ∗ A ( C ) k × X A ( C ) k → C or Ξ A ( C ) k × Ξ ∗ A ( C ) k → C then the induced pairing V k × V k → C or W k × W k → C is non-degenerate but it maydiffer from the corresponding pairing defined by h ξ, π i = ξπ . In other words, the conditionsdim X ∗ A ( C ) k = dim X A ( C ) k and dim Ξ ∗ A ( C ) k = dim Ξ A ( C ) k do not guarantee the existence of S ( k ) and A ( k ) respectively (e.g. see Subsection 6.3). Remark 5.14. If some pairing S - or A -operators do not exist then one can consider the dualspace (cid:0) X A ( C ) (cid:1) ∗ = ∞ L k =0 V k or (cid:0) Ξ A ( C ) (cid:1) ∗ = ∞ L k =0 W k (without a structure of algebra) instead ofthe algebra X ∗ A ( C ) or Ξ ∗ A ( C ) respectively. 
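As a concrete numerical illustration (not taken from the paper), the dual-basis construction of Theorem 5.12 can be carried out for the simplest idempotent $A = \frac{1}{2}(1-P)$ on $\mathbb{C}^2\otimes\mathbb{C}^2$, for which $X_A(\mathbb{C})$ is the polynomial algebra and $S^{(3)}$ must be the symmetrizer of $(\mathbb{C}^2)^{\otimes 3}$. The sketch below computes $V_3$ and $V^3$ as joint null spaces, forms dual bases via the Gram matrix of the natural pairing, and assembles $S^{(3)} = \sum_\alpha v_\alpha v^\alpha$; the variable names are ad hoc.

```python
import numpy as np

def null_space(M, tol=1e-10):
    """Columns form a basis of the right null space of M (via SVD)."""
    _, s, vt = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return vt[rank:].T

n, k = 2, 3
P = np.zeros((n * n, n * n))              # flip operator on C^n (x) C^n
for i in range(n):
    for j in range(n):
        P[j * n + i, i * n + j] = 1.0
A = (np.eye(n * n) - P) / 2               # idempotent A; X_A(C) = C[x_1,...,x_n]

def emb(op, a):
    """Embed a two-site operator into sites (a, a+1) of (C^n)^{(x)k}."""
    return np.kron(np.kron(np.eye(n ** a), op), np.eye(n ** (k - a - 2)))

# V_k = {pi : A^(a,a+1) pi = 0},  V^k = {xi : xi A^(a,a+1) = 0}
V_low = null_space(np.vstack([emb(A, a) for a in range(k - 1)]))      # columns v_alpha
V_up = null_space(np.hstack([emb(A, a) for a in range(k - 1)]).T).T   # rows spanning V^k

gram = V_up @ V_low                       # natural pairing; invertible iff V_k, V^k are dual
v_up = np.linalg.inv(gram) @ V_up         # rows v^alpha with v^alpha v_beta = delta
S3 = V_low @ v_up                         # S^(3) = sum_alpha v_alpha v^alpha

assert np.allclose(S3 @ S3, S3)           # idempotent
assert all(np.allclose(emb(A, a) @ S3, 0) and np.allclose(S3 @ emb(A, a), 0)
           for a in range(k - 1))
print(round(np.trace(S3)))                # rank of S^(3) = dim Sym^3(C^2)
```

The printed rank is $4 = \dim\mathrm{Sym}^3(\mathbb{C}^2)$, confirming that $S^{(3)}$ projects onto the symmetric tensors.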
The algebra structures on the spaces $X^*_A(\mathbb{C})$ and $\Xi^*_A(\mathbb{C})$ are auxiliary: they do not agree with the algebra structures of $X_A(\mathbb{C})$ and $\Xi_A(\mathbb{C})$, but they are used to define these spaces in a more convenient way.

Let $S^{(k)}, A^{(k)} \in \mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ and $\widetilde S^{(k)}, \widetilde A^{(k)} \in \mathrm{End}\big((\mathbb{C}^m)^{\otimes k}\big)$ be pairing operators for idempotents $A \in \mathrm{End}(\mathbb{C}^n\otimes\mathbb{C}^n)$ and $\widetilde A \in \mathrm{End}(\mathbb{C}^m\otimes\mathbb{C}^m)$ respectively. Let $X$, $X^*$, $\Psi$, $\Psi^*$ denote the same as in the previous subsection. The corresponding column and row vectors for $\widetilde A$ we denote by $\widetilde X$, $\widetilde X^*$, $\widetilde\Psi^*$, $\widetilde\Psi$.

By virtue of Proposition 5.9 any graded linear operator $X_A(\mathbb{C}) \to X_{\widetilde A}(R)$ is given by the formula
$$\xi(X\otimes\cdots\otimes X) \mapsto \xi T_k(\widetilde X\otimes\cdots\otimes\widetilde X), \qquad \xi \in \big((\mathbb{C}^n)^{\otimes k}\big)^* \tag{5.40}$$
for some operators $T_k \in R \otimes \mathrm{Hom}\big((\mathbb{C}^m)^{\otimes k},(\mathbb{C}^n)^{\otimes k}\big)$ such that $S^{(k)}T_k\widetilde S^{(k)} = T_k\widetilde S^{(k)}$. At the same time, a graded linear operator $X^*_{\widetilde A}(\mathbb{C}) \to X^*_A(R)$ has the form
$$(\widetilde X^*\otimes\cdots\otimes\widetilde X^*)\pi \mapsto (X^*\otimes\cdots\otimes X^*)T_k\pi, \qquad \pi \in (\mathbb{C}^m)^{\otimes k} \tag{5.41}$$
for some $T_k \in R \otimes \mathrm{Hom}\big((\mathbb{C}^m)^{\otimes k},(\mathbb{C}^n)^{\otimes k}\big)$ such that $S^{(k)}T_k\widetilde S^{(k)} = S^{(k)}T_k$.

Analogously, a graded linear operator $\Xi_{\widetilde A}(\mathbb{C}) \to \Xi_A(R)$ can be written as
$$(\widetilde\Psi\otimes\cdots\otimes\widetilde\Psi)\pi \mapsto (\Psi\otimes\cdots\otimes\Psi)R_k\pi, \qquad \pi \in (\mathbb{C}^m)^{\otimes k} \tag{5.42}$$
for $R_k \in R \otimes \mathrm{Hom}\big((\mathbb{C}^m)^{\otimes k},(\mathbb{C}^n)^{\otimes k}\big)$ such that $A^{(k)}R_k\widetilde A^{(k)} = A^{(k)}R_k$, while a graded linear operator $\Xi^*_A(\mathbb{C}) \to \Xi^*_{\widetilde A}(R)$ has the form
$$\xi(\Psi^*\otimes\cdots\otimes\Psi^*) \mapsto \xi R_k(\widetilde\Psi^*\otimes\cdots\otimes\widetilde\Psi^*), \qquad \xi \in \big((\mathbb{C}^n)^{\otimes k}\big)^* \tag{5.43}$$
for some operators $R_k \in R \otimes \mathrm{Hom}\big((\mathbb{C}^m)^{\otimes k},(\mathbb{C}^n)^{\otimes k}\big)$ such that $A^{(k)}R_k\widetilde A^{(k)} = R_k\widetilde A^{(k)}$.

Note that $T_k$ and $R_k$ can be replaced by $S^{(k)}T_k\widetilde S^{(k)}$ and $A^{(k)}R_k\widetilde A^{(k)}$ respectively; this does not change the maps (5.40), (5.41), (5.42), (5.43).
Hence we can always suppose $S^{(k)}T_k\widetilde S^{(k)} = T_k$ and $A^{(k)}R_k\widetilde A^{(k)} = R_k$.

Denote $X^*_A(R) = R \otimes X^*_A(\mathbb{C})$ and $\Xi^*_A(R) = R \otimes \Xi^*_A(\mathbb{C})$. The pairings are invariant in the following sense.

Proposition 5.15. Let $t\colon X_A(\mathbb{C}) \to X_{\widetilde A}(R)$ and $t^*\colon X^*_{\widetilde A}(\mathbb{C}) \to X^*_A(R)$ be the operators (5.40) and (5.41) defined by the same $T_k \in R \otimes \mathrm{Hom}\big((\mathbb{C}^m)^{\otimes k},(\mathbb{C}^n)^{\otimes k}\big)$ satisfying $S^{(k)}T_k\widetilde S^{(k)} = T_k$. Let $r\colon \Xi_{\widetilde A}(\mathbb{C}) \to \Xi_A(R)$ and $r^*\colon \Xi^*_A(\mathbb{C}) \to \Xi^*_{\widetilde A}(R)$ be the operators (5.42) and (5.43) defined by the same $R_k \in R \otimes \mathrm{Hom}\big((\mathbb{C}^m)^{\otimes k},(\mathbb{C}^n)^{\otimes k}\big)$ satisfying $A^{(k)}R_k\widetilde A^{(k)} = R_k$. Then for any $u \in X_A(\mathbb{C})$, $v \in X^*_{\widetilde A}(\mathbb{C})$, $\nu \in \Xi^*_A(\mathbb{C})$, $\mu \in \Xi_{\widetilde A}(\mathbb{C})$ we have
$$\langle u, t^*(v)\rangle = \langle t(u), v\rangle, \qquad \langle\nu, r(\mu)\rangle = \langle r^*(\nu), \mu\rangle, \tag{5.44}$$
where the pairings $X_A(\mathbb{C}) \times X^*_A(\mathbb{C}) \to \mathbb{C}$, $\Xi^*_A(\mathbb{C}) \times \Xi_A(\mathbb{C}) \to \mathbb{C}$ and $X_{\widetilde A}(\mathbb{C}) \times X^*_{\widetilde A}(\mathbb{C}) \to \mathbb{C}$, $\Xi^*_{\widetilde A}(\mathbb{C}) \times \Xi_{\widetilde A}(\mathbb{C}) \to \mathbb{C}$ are defined by the pairing operators $S^{(k)}, A^{(k)}$ and $\widetilde S^{(k)}, \widetilde A^{(k)}$ respectively.

Proof. Let $u = \xi(X\otimes\cdots\otimes X)$ and $v = (\widetilde X^*\otimes\cdots\otimes\widetilde X^*)\pi$. Then by using Lemma 5.3 we have
$$\langle u, t^*(v)\rangle = \big\langle\xi(X\otimes\cdots\otimes X), (X^*\otimes\cdots\otimes X^*)T_k\pi\big\rangle = \xi S^{(k)}T_k\pi = \xi T_k\widetilde S^{(k)}\pi = \big\langle\xi T_k(\widetilde X\otimes\cdots\otimes\widetilde X), (\widetilde X^*\otimes\cdots\otimes\widetilde X^*)\pi\big\rangle = \langle t(u), v\rangle.$$
The second formula in (5.44) is proved similarly.

Recall that in Subsection 2.5 we introduced the homomorphisms $f_M\colon X_A(\mathbb{C}) \to X_{\widetilde A}(R)$ and $f^M\colon \Xi_{\widetilde A}(\mathbb{C}) \to \Xi_A(R)$ for an $(A,\widetilde A)$-Manin matrix $M \in R \otimes \mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$. Their definition on generators can be written in the matrix form as
$$f_M(X) = M\widetilde X, \qquad f^M(\widetilde\Psi) = \Psi M. \tag{5.45}$$
Since homomorphisms preserve multiplication, we obtain
$$f_M(X\otimes\cdots\otimes X) = M^{(1)}\cdots M^{(k)}(\widetilde X\otimes\cdots\otimes\widetilde X), \tag{5.46}$$
$$f^M(\widetilde\Psi\otimes\cdots\otimes\widetilde\Psi) = (\Psi\otimes\cdots\otimes\Psi)M^{(1)}\cdots M^{(k)}. \tag{5.47}$$
For general elements of $X_A(\mathbb{C})$ and $\Xi_{\widetilde A}(\mathbb{C})$ the values of the maps $f_M$ and $f^M$ are obtained via multiplication of these formulae by $\xi$ and $\pi$ respectively, so these maps have the form (5.40) and (5.42) with $T_k = R_k = M^{(1)}\cdots M^{(k)}$. In this way we obtain the following generalisation of (2.21) and (2.25).

Proposition 5.16. Any $(A,\widetilde A)$-Manin matrix $M \in R \otimes \mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$ satisfies the relations
$$M^{(1)}\cdots M^{(k)}\widetilde S^{(k)} = S^{(k)}M^{(1)}\cdots M^{(k)}\widetilde S^{(k)}, \tag{5.48}$$
$$A^{(k)}M^{(1)}\cdots M^{(k)} = A^{(k)}M^{(1)}\cdots M^{(k)}\widetilde A^{(k)}. \tag{5.49}$$

Proof. Note that the left hand sides of (5.46) and (5.47) are invariant under multiplication by $S^{(k)}$ from the left and by $\widetilde A^{(k)}$ from the right respectively. As a consequence, we obtain
$$S^{(k)}M^{(1)}\cdots M^{(k)}(\widetilde X\otimes\cdots\otimes\widetilde X) = M^{(1)}\cdots M^{(k)}(\widetilde X\otimes\cdots\otimes\widetilde X),$$
$$(\Psi\otimes\cdots\otimes\Psi)M^{(1)}\cdots M^{(k)}\widetilde A^{(k)} = (\Psi\otimes\cdots\otimes\Psi)M^{(1)}\cdots M^{(k)}.$$
Then (5.48) and (5.49) are derived by application of Corollary 5.4 for $V = R \otimes (\mathbb{C}^n)^{\otimes k}$, $T = S^{(k)}M^{(1)}\cdots M^{(k)} - M^{(1)}\cdots M^{(k)} \in \mathrm{Hom}\big((\mathbb{C}^m)^{\otimes k}, V\big)$ and for $W = R^* \otimes (\mathbb{C}^m)^{\otimes k}$, $\widetilde T = M^{(1)}\cdots M^{(k)}\widetilde A^{(k)} - M^{(1)}\cdots M^{(k)} \in \mathrm{Hom}\big(W, (\mathbb{C}^n)^{\otimes k}\big)$ respectively.

For an arbitrary $M \in R \otimes \mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$ define the linear operators $t_M\colon X_A(\mathbb{C}) \to X_{\widetilde A}(R)$ and $t^*_M\colon X^*_{\widetilde A}(\mathbb{C}) \to X^*_A(R)$ by the operators $T_k = S^{(k)}M^{(1)}\cdots M^{(k)}\widetilde S^{(k)}$, that is
$$t_M(X\otimes\cdots\otimes X) = S^{(k)}M^{(1)}\cdots M^{(k)}(\widetilde X\otimes\cdots\otimes\widetilde X), \tag{5.50}$$
$$t^*_M(\widetilde X^*\otimes\cdots\otimes\widetilde X^*) = (X^*\otimes\cdots\otimes X^*)M^{(1)}\cdots M^{(k)}\widetilde S^{(k)}. \tag{5.51}$$
Analogously, for an arbitrary matrix $M \in R \otimes \mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$ define the linear operators $r_M\colon \Xi_{\widetilde A}(\mathbb{C}) \to \Xi_A(R)$ and $r^*_M\colon \Xi^*_A(\mathbb{C}) \to \Xi^*_{\widetilde A}(R)$ by the operators $R_k = A^{(k)}M^{(1)}\cdots M^{(k)}\widetilde A^{(k)}$, that is
$$r_M(\widetilde\Psi\otimes\cdots\otimes\widetilde\Psi) = (\Psi\otimes\cdots\otimes\Psi)M^{(1)}\cdots M^{(k)}\widetilde A^{(k)}, \tag{5.52}$$
$$r^*_M(\Psi^*\otimes\cdots\otimes\Psi^*) = A^{(k)}M^{(1)}\cdots M^{(k)}(\widetilde\Psi^*\otimes\cdots\otimes\widetilde\Psi^*). \tag{5.53}$$

Remark 5.17. If $M$ is an $(A,\widetilde A)$-Manin matrix, then the maps (5.50) and (5.52) are homomorphisms: $t_M = f_M$ and $r_M = f^M$ (however, the maps $t^*_M$ and $r^*_M$ are not homomorphisms). Conversely, if $t_M$ or $r_M$ is a homomorphism, then $M$ is an $(A,\widetilde A)$-Manin matrix. The maps $t^*_M$ and $r^*_M$ are homomorphisms iff $M$ is an $(S,\widetilde S)$-Manin matrix, where $S = 1 - A$, $\widetilde S = 1 - \widetilde A$.

For a matrix $M \in R \otimes \mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$ we introduce the minor operator corresponding to a pair of operators $T \in \mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ and $\widetilde T \in \mathrm{End}\big((\mathbb{C}^m)^{\otimes k}\big)$ by the formula
$$\mathrm{Min}^T_{\widetilde T}\,M := T\,M^{(1)}\cdots M^{(k)}\,\widetilde T. \tag{5.54}$$
From (5.50)–(5.53) we obtain
$$\mathrm{Min}^{S^{(k)}}_{\widetilde S^{(k)}}M = \big\langle t_M(X\otimes\cdots\otimes X), \widetilde X^*\otimes\cdots\otimes\widetilde X^*\big\rangle = \big\langle X\otimes\cdots\otimes X, t^*_M(\widetilde X^*\otimes\cdots\otimes\widetilde X^*)\big\rangle,$$
$$\mathrm{Min}^{A^{(k)}}_{\widetilde A^{(k)}}M = \big\langle\Psi^*\otimes\cdots\otimes\Psi^*, r_M(\widetilde\Psi\otimes\cdots\otimes\widetilde\Psi)\big\rangle = \big\langle r^*_M(\Psi^*\otimes\cdots\otimes\Psi^*), \widetilde\Psi\otimes\cdots\otimes\widetilde\Psi\big\rangle.$$
If $M$ is an $(A,\widetilde A)$-Manin matrix, then due to Proposition 5.16 these operators take the form
$$\mathrm{Min}^{S^{(k)}}_{\widetilde S^{(k)}}M = S^{(k)}M^{(1)}\cdots M^{(k)}\widetilde S^{(k)} = M^{(1)}\cdots M^{(k)}\widetilde S^{(k)}, \tag{5.55}$$
$$\mathrm{Min}^{A^{(k)}}_{\widetilde A^{(k)}}M = A^{(k)}M^{(1)}\cdots M^{(k)}\widetilde A^{(k)} = A^{(k)}M^{(1)}\cdots M^{(k)}. \tag{5.56}$$
In these cases the minor operators are determined by one operator only, $\widetilde S^{(k)}$ and $A^{(k)}$ respectively, so we can denote them as
$$\mathrm{Min}_{\widetilde S^{(k)}}M := M^{(1)}\cdots M^{(k)}\widetilde S^{(k)} = \big\langle f_M(X\otimes\cdots\otimes X), \widetilde X^*\otimes\cdots\otimes\widetilde X^*\big\rangle, \tag{5.57}$$
$$\mathrm{Min}^{A^{(k)}}M := A^{(k)}M^{(1)}\cdots M^{(k)} = \big\langle\Psi^*\otimes\cdots\otimes\Psi^*, f^M(\widetilde\Psi\otimes\cdots\otimes\widetilde\Psi)\big\rangle. \tag{5.58}$$

Definition 5.18. Let $M \in R \otimes \mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$ be an $(A,\widetilde A)$-Manin matrix. Then the minor operators (5.57) and (5.58) are called the $S$-minor and $A$-minor operators respectively. We also call them minor operators for the $(A,\widetilde A)$-Manin matrix. Their entries
$$(\mathrm{Min}_{\widetilde S^{(k)}}M)^{i_1\dots i_k}_{j_1\dots j_k} = e^{i_1\dots i_k}(\mathrm{Min}_{\widetilde S^{(k)}}M)\,e_{j_1\dots j_k} = \big\langle f_M(x_{i_1}\cdots x_{i_k}), \widetilde x_{j_1}\cdots\widetilde x_{j_k}\big\rangle, \tag{5.59}$$
$$(\mathrm{Min}^{A^{(k)}}M)^{i_1\dots i_k}_{j_1\dots j_k} = e^{i_1\dots i_k}(\mathrm{Min}^{A^{(k)}}M)\,e_{j_1\dots j_k} = \big\langle\psi^{i_1}\cdots\psi^{i_k}, f^M(\widetilde\psi_{j_1}\cdots\widetilde\psi_{j_k})\big\rangle \tag{5.60}$$
are called $S$-minors and $A$-minors of order $k$, or simply minors, for the $(A,\widetilde A)$-Manin matrix (here $\widetilde x_j$ and $\widetilde\psi_j$ are the generators of $X^*_{\widetilde A}(\mathbb{C})$ and $\Xi_{\widetilde A}(\mathbb{C})$ respectively).

In terms of $y_i = f_M(x_i) = \sum_{j=1}^m M_{ij}\widetilde x_j$ and $\phi_j = f^M(\widetilde\psi_j) = \sum_{i=1}^n \psi_i M_{ij}$ the minors can be written as
$$(\mathrm{Min}_{\widetilde S^{(k)}}M)^{i_1\dots i_k}_{j_1\dots j_k} = \langle y_{i_1}\cdots y_{i_k}, \widetilde x_{j_1}\cdots\widetilde x_{j_k}\rangle, \tag{5.61}$$
$$(\mathrm{Min}^{A^{(k)}}M)^{i_1\dots i_k}_{j_1\dots j_k} = \langle\psi^{i_1}\cdots\psi^{i_k}, \phi_{j_1}\cdots\phi_{j_k}\rangle. \tag{5.62}$$
They are the coefficients in the decompositions
$$y_{i_1}\cdots y_{i_k} = \sum_{j_1,\dots,j_k=1}^m \big(\mathrm{Min}_{\widetilde S^{(k)}}M\big)^{i_1\dots i_k}_{j_1\dots j_k}\,\widetilde x_{j_1}\cdots\widetilde x_{j_k}, \tag{5.63}$$
$$\phi_{j_1}\cdots\phi_{j_k} = \sum_{i_1,\dots,i_k=1}^n \big(\mathrm{Min}^{A^{(k)}}M\big)^{i_1\dots i_k}_{j_1\dots j_k}\,\psi_{i_1}\cdots\psi_{i_k}. \tag{5.64}$$
In matrix form these formulae read
$$(Y\otimes\cdots\otimes Y) = (\mathrm{Min}_{\widetilde S^{(k)}}M)(\widetilde X\otimes\cdots\otimes\widetilde X), \qquad (\Phi\otimes\cdots\otimes\Phi) = (\Psi\otimes\cdots\otimes\Psi)(\mathrm{Min}^{A^{(k)}}M),$$
where $Y = \sum_{i=1}^n y_i e_i$ and $\Phi = \sum_{j=1}^m \phi_j e^j$. The formulae (5.63) and (5.64) are not decompositions with respect to bases.
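For orientation, here is a numerical sanity check (an illustration, not taken from the paper): for the antisymmetrizer idempotent $A = \frac{1}{2}(1-P)$, a matrix with commuting entries is a Manin matrix, the $A$-minor operator is $A^{(k)}M^{(1)}\cdots M^{(k)} = A^{(k)}M^{\otimes k}$, and its entry at a pair of multi-indices $(I,J)$ reproduces the ordinary $k\times k$ minor $\det M_{IJ}$ up to the factor $1/k!$ coming from the normalisation of the antisymmetrizer.

```python
import numpy as np
from itertools import permutations
from math import factorial

def perm_op(p, n, k):
    """pi(sigma): e_{i_1...i_k} -> e_{i_{sigma^{-1}(1)}} (x) ... (x) e_{i_{sigma^{-1}(k)}}."""
    inv = [0] * k
    for a, v in enumerate(p):
        inv[v] = a
    op = np.zeros((n ** k, n ** k))
    for col in range(n ** k):
        i = [(col // n ** (k - 1 - a)) % n for a in range(k)]
        row = sum(i[inv[a]] * n ** (k - 1 - a) for a in range(k))
        op[row, col] = 1.0
    return op

def sign(p):
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

n, k = 3, 2
A_k = sum(sign(p) * perm_op(p, n, k) for p in permutations(range(k))) / factorial(k)

rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))        # commuting (numeric) entries: a Manin matrix
minors = A_k @ np.kron(M, M)           # A-minor operator A^(k) M^(1) ... M^(k)

I, J = (0, 1), (0, 2)                  # strictly increasing multi-indices
row, col = I[0] * n + I[1], J[0] * n + J[1]
expected = np.linalg.det(M[np.ix_(I, J)]) / factorial(k)
assert np.isclose(minors[row, col], expected)
print("ok")
```

The same comparison works for any $k$ if `np.kron(M, M)` is replaced by the $k$-fold Kronecker power.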
However, due to Proposition 5.9, the formulae (5.63) and (5.64) have the form of the decompositions considered in part (2) of Proposition 5.1. For example, for the formula (5.63) we need to set $V = \big((\mathbb{C}^m)^{\otimes k}\big)^*$ and $W = (\mathbb{C}^m)^{\otimes k}$, while the operator $\widetilde S^{(k)}$ acting on $\xi \in \big((\mathbb{C}^m)^{\otimes k}\big)^*$ from the right plays the role of the idempotent $A$. Thus the minors of an $(A,\widetilde A)$-Manin matrix $M$ are the coefficients of the decompositions (5.63) and (5.64) satisfying the conditions
$$\sum_{l_1,\dots,l_k=1}^m \big(\mathrm{Min}_{\widetilde S^{(k)}}M\big)^{i_1\dots i_k}_{l_1\dots l_k}\,\widetilde S^{\,l_1\dots l_k}_{j_1\dots j_k} = \big(\mathrm{Min}_{\widetilde S^{(k)}}M\big)^{i_1\dots i_k}_{j_1\dots j_k}, \tag{5.65}$$
$$\sum_{l_1,\dots,l_k=1}^n A^{i_1\dots i_k}_{l_1\dots l_k}\,\big(\mathrm{Min}^{A^{(k)}}M\big)^{l_1\dots l_k}_{j_1\dots j_k} = \big(\mathrm{Min}^{A^{(k)}}M\big)^{i_1\dots i_k}_{j_1\dots j_k}. \tag{5.66}$$
In operator form these symmetries can be written as
$$\big(\mathrm{Min}_{\widetilde S^{(k)}}M\big)\widetilde S^{(k)} = \mathrm{Min}_{\widetilde S^{(k)}}M, \qquad A^{(k)}\big(\mathrm{Min}^{A^{(k)}}M\big) = \mathrm{Min}^{A^{(k)}}M.$$
The expressions for the $S$- and $A$-minors of an $(A,\widetilde A)$-Manin matrix $M$ depend on the pairing operators $\widetilde S^{(k)}$ and $A^{(k)}$ only; hence they are defined whenever these pairing operators exist, even if $S^{(k)}$ and $\widetilde A^{(k)}$ do not exist. The condition that $M$ is an $(A,\widetilde A)$-Manin matrix implies the symmetry of the minor operators with respect to the upper and lower indices:
$$S^{(a,a+1)}\big(\mathrm{Min}_{\widetilde S^{(k)}}M\big) = \mathrm{Min}_{\widetilde S^{(k)}}M, \qquad \big(\mathrm{Min}^{A^{(k)}}M\big)\widetilde A^{(a,a+1)} = \mathrm{Min}^{A^{(k)}}M.$$
If $S^{(k)}$ and $\widetilde A^{(k)}$ do exist, these symmetries can be written in the form
$$S^{(k)}\big(\mathrm{Min}_{\widetilde S^{(k)}}M\big) = \mathrm{Min}_{\widetilde S^{(k)}}M, \qquad \big(\mathrm{Min}^{A^{(k)}}M\big)\widetilde A^{(k)} = \mathrm{Min}^{A^{(k)}}M.$$
The determinant of usual complex matrices is multiplicative: $\det(MN) = \det(M)\det(N)$. The generalisation of this property to the case of $k\times k$ minors is the Cauchy–Binet formula:
$$\det\big((MN)_{IJ}\big) = \sum_{L=(l_1<\cdots<l_k)} \det(M_{IL})\det(N_{LJ}),$$
where $I = (i_1<\cdots<i_k)$ and $J = (j_1<\cdots<j_k)$ are strictly increasing multi-indices and $M_{IL}$ denotes the corresponding $k\times k$ submatrix.
Theorem 5.19. Let $S^{(k)}, A^{(k)}$; $\widetilde S^{(k)}, \widetilde A^{(k)}$ and $\widehat S^{(k)}, \widehat A^{(k)}$ be the pairing operators for idempotents $A \in \mathrm{End}(\mathbb{C}^n\otimes\mathbb{C}^n)$, $\widetilde A \in \mathrm{End}(\mathbb{C}^m\otimes\mathbb{C}^m)$ and $\widehat A \in \mathrm{End}(\mathbb{C}^l\otimes\mathbb{C}^l)$ respectively. Let $M$ and $N$ be $n\times m$ and $m\times l$ matrices over an algebra $R$. Suppose that the entries of the first one commute with the entries of the second one: $[M_{ij}, N_{kl}] = 0$.

• If $N$ is an $(\widetilde A,\widehat A)$-Manin matrix, then
$$\mathrm{Min}^{S^{(k)}}_{\widehat S^{(k)}}(MN) = (\mathrm{Min}^{S^{(k)}}_{\widetilde S^{(k)}}M)(\mathrm{Min}_{\widehat S^{(k)}}N). \tag{5.67}$$
• If $M$ is an $(A,\widetilde A)$-Manin matrix, then
$$\mathrm{Min}^{A^{(k)}}_{\widehat A^{(k)}}(MN) = (\mathrm{Min}^{A^{(k)}}M)(\mathrm{Min}^{\widetilde A^{(k)}}_{\widehat A^{(k)}}N). \tag{5.68}$$
• If $M$ and $N$ are Manin matrices for the pairs $(A,\widetilde A)$ and $(\widetilde A,\widehat A)$ respectively, then $MN$ is a Manin matrix for the pair $(A,\widehat A)$ and the formulae (5.67), (5.68) take the form
$$\mathrm{Min}_{\widehat S^{(k)}}(MN) = (\mathrm{Min}_{\widetilde S^{(k)}}M)(\mathrm{Min}_{\widehat S^{(k)}}N), \tag{5.69}$$
$$\mathrm{Min}^{A^{(k)}}(MN) = (\mathrm{Min}^{A^{(k)}}M)(\mathrm{Min}^{\widetilde A^{(k)}}N). \tag{5.70}$$

Proof. The first and second statements follow from Proposition 5.16. For instance, the formula (5.68) is derived in the following way:
$$A^{(k)}(MN)^{(1)}\cdots(MN)^{(k)}\widehat A^{(k)} = A^{(k)}M^{(1)}\cdots M^{(k)}N^{(1)}\cdots N^{(k)}\widehat A^{(k)} = A^{(k)}M^{(1)}\cdots M^{(k)}\widetilde A^{(k)}N^{(1)}\cdots N^{(k)}\widehat A^{(k)},$$
where we used the commutativity in the form $N^{(i)}M^{(j)} = M^{(j)}N^{(i)}$, $i < j$. The last statement is implied by Proposition 2.26 and the formulae (5.55), (5.56).

Now we give formulae for permutations of rows and columns. For any $n$ and $\sigma \in GL(n,\mathbb{C})$ denote the conjugation by the element $\sigma^{\otimes k} = \sigma\otimes\cdots\otimes\sigma$ as
$$\iota_\sigma T := \sigma^{\otimes k}\,T\,(\sigma^{\otimes k})^{-1}, \tag{5.71}$$
where $T \in \mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ and $(\sigma^{\otimes k})^{-1} = \sigma^{-1}\otimes\cdots\otimes\sigma^{-1}$. Note that $\iota_\sigma S^{(k)}$ and $\iota_\sigma A^{(k)}$ are pairing operators for the idempotent $\iota_\sigma A = (\sigma\otimes\sigma)A(\sigma^{-1}\otimes\sigma^{-1})$.

Proposition 5.20.
For any matrix $M \in R \otimes \mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$ and operators $\sigma \in GL(n,\mathbb{C})$, $\tau \in GL(m,\mathbb{C})$ we have
$$\mathrm{Min}^{\iota_\sigma S^{(k)}}_{\iota_\tau\widetilde S^{(k)}}(\sigma M\tau^{-1}) = \sigma^{\otimes k}(\mathrm{Min}^{S^{(k)}}_{\widetilde S^{(k)}}M)(\tau^{\otimes k})^{-1}, \tag{5.72}$$
$$\mathrm{Min}^{\iota_\sigma A^{(k)}}_{\iota_\tau\widetilde A^{(k)}}(\sigma M\tau^{-1}) = \sigma^{\otimes k}(\mathrm{Min}^{A^{(k)}}_{\widetilde A^{(k)}}M)(\tau^{\otimes k})^{-1}. \tag{5.73}$$
If $M$ is an $(A,\widetilde A)$-Manin matrix, then
$$\mathrm{Min}_{\iota_\tau\widetilde S^{(k)}}(\sigma M\tau^{-1}) = \sigma^{\otimes k}(\mathrm{Min}_{\widetilde S^{(k)}}M)(\tau^{\otimes k})^{-1}, \tag{5.74}$$
$$\mathrm{Min}^{\iota_\sigma A^{(k)}}(\sigma M\tau^{-1}) = \sigma^{\otimes k}(\mathrm{Min}^{A^{(k)}}M)(\tau^{\otimes k})^{-1}. \tag{5.75}$$

Proof. The formulae (5.72), (5.73) follow directly from the definition (5.54). If $M$ is an $(A,\widetilde A)$-Manin matrix, then Proposition 2.21 implies that $\sigma M\tau^{-1}$ is a $(\iota_\sigma A, \iota_\tau\widetilde A)$-Manin matrix. Thus we obtain (5.74), (5.75).

In particular, Proposition 5.20 gives the minors of ${}^\sigma M = \sigma M$ and ${}^\tau M = M\tau^{-1}$ for $M \in R \otimes \mathrm{Hom}(\mathbb{C}^m,\mathbb{C}^n)$, $\sigma \in S_n$ and $\tau \in S_m$.

Let us consider the minor operators $\mathrm{Min}_{\widetilde S^{(k)}}M$ and $\mathrm{Min}^{A^{(k)}}M$ for an $(A,\widetilde A)$-Manin matrix $M$ as $n^k\times m^k$ matrices over $R$ with the entries (5.61) and (5.62). They are also Manin matrices for some pairs of idempotents.

Proposition 5.21. Let $M$ be an $(A,\widetilde A)$-Manin matrix. Let $S^{(k)}, A^{(k)}$ and $\widetilde S^{(k)}, \widetilde A^{(k)}$ be the pairing operators for $A$ and $\widetilde A$. For any $k, \ell > 0$ we have
$$(\mathrm{Min}_{\widetilde S^{(k)}}M)\otimes(\mathrm{Min}_{\widetilde S^{(\ell)}}M)\,\widetilde S^{(k+\ell)} = M^{(1)}\cdots M^{(k+\ell)}\widetilde S^{(k+\ell)} = \mathrm{Min}_{\widetilde S^{(k+\ell)}}M, \tag{5.76}$$
$$A^{(k+\ell)}(\mathrm{Min}^{A^{(k)}}M)\otimes(\mathrm{Min}^{A^{(\ell)}}M) = A^{(k+\ell)}M^{(1)}\cdots M^{(k+\ell)} = \mathrm{Min}^{A^{(k+\ell)}}M. \tag{5.77}$$
In particular, $\mathrm{Min}_{\widetilde S^{(k)}}M$ and $\mathrm{Min}^{A^{(k)}}M$ are $(1-S^{(2k)}, 1-\widetilde S^{(2k)})$- and $(A^{(2k)}, \widetilde A^{(2k)})$-Manin matrices respectively.

Proof. The formulae (5.76) and (5.77) follow from Proposition 5.8. To prove the second statement one needs to put $\ell = k$ in these formulae and to apply Proposition 5.16 for $2k$.

Let us finally write the minor operators in terms of bases.
Denote $y^\alpha_{(k)} = v^\alpha(Y\otimes\cdots\otimes Y)$, $\widetilde x^\alpha_{(k)} = \widetilde v^\alpha(\widetilde X\otimes\cdots\otimes\widetilde X)$, $\phi^{(k)}_\alpha = (\Phi\otimes\cdots\otimes\Phi)\widetilde w_\alpha$, where $\widetilde v^\alpha = \widetilde v^\alpha\widetilde S^{(k)}$ and $\widetilde w_\alpha = \widetilde A^{(k)}\widetilde w_\alpha$ are eigenvectors of the idempotents $\widetilde S^{(k)}$ and $(\widetilde A^{(k)})^\top$ respectively. Let $\widetilde d_k$ and $\widetilde r_k$ be the ranks of $\widetilde S^{(k)}$ and $\widetilde A^{(k)}$. Consider the matrix entries of the minor operators:
$$(\mathrm{Min}_{\widetilde S^{(k)}}M)^\alpha_\beta := v^\alpha(\mathrm{Min}_{\widetilde S^{(k)}}M)\widetilde v_\beta = v^\alpha M^{(1)}\cdots M^{(k)}\widetilde v_\beta, \tag{5.78}$$
$$(\mathrm{Min}^{A^{(k)}}M)^\gamma_\delta := w^\gamma(\mathrm{Min}^{A^{(k)}}M)\widetilde w_\delta = w^\gamma M^{(1)}\cdots M^{(k)}\widetilde w_\delta, \tag{5.79}$$
where $1\le\alpha\le d_k$, $1\le\beta\le\widetilde d_k$, $1\le\gamma\le r_k$ and $1\le\delta\le\widetilde r_k$. Then the formulae (5.63), (5.64) are rewritten as
$$y^\alpha_{(k)} = \sum_{\gamma=1}^{\widetilde d_k}(\mathrm{Min}_{\widetilde S^{(k)}}M)^\alpha_\gamma\,\widetilde x^\gamma_{(k)}, \qquad \phi^{(k)}_\beta = \sum_{\gamma=1}^{r_k}(\mathrm{Min}^{A^{(k)}}M)^\gamma_\beta\,\psi^{(k)}_\gamma, \tag{5.80}$$
where $\alpha = 1,\dots,d_k$, $\beta = 1,\dots,\widetilde r_k$. These are decompositions with respect to bases. They generalise the formulae given in Subsection 5.1.

Note that $y^\alpha_{(k)} = f_M(x^\alpha_{(k)})$ and $\phi^{(k)}_\alpha = f^M(\widetilde\psi^{(k)}_\alpha)$. Thus the formulae (5.80) describe the homomorphisms $f_M\colon X_A(\mathbb{C}) \to X_{\widetilde A}(R)$ and $f^M\colon \Xi_{\widetilde A}(\mathbb{C}) \to \Xi_A(R)$ in terms of the bases (5.32)–(5.35).

The formulae (5.69) and (5.70) are written in terms of bases in the form
$$\big(\mathrm{Min}_{\widehat S^{(k)}}(MN)\big)^\alpha_\gamma = \sum_{\beta=1}^{\widetilde d_k}(\mathrm{Min}_{\widetilde S^{(k)}}M)^\alpha_\beta(\mathrm{Min}_{\widehat S^{(k)}}N)^\beta_\gamma, \tag{5.81}$$
$$\big(\mathrm{Min}^{A^{(k)}}(MN)\big)^\alpha_\gamma = \sum_{\beta=1}^{\widetilde r_k}(\mathrm{Min}^{A^{(k)}}M)^\alpha_\beta(\mathrm{Min}^{\widetilde A^{(k)}}N)^\beta_\gamma. \tag{5.82}$$

5.6 Minors for left equivalent idempotents

In contrast with the quadratic algebras $X_A(\mathbb{C})$ and $\Xi_A(\mathbb{C})$, their dual algebras $X^*_A(\mathbb{C})$ and $\Xi^*_A(\mathbb{C})$ do not in general coincide with the corresponding algebras for a left equivalent idempotent $A'$, but they do coincide for a right equivalent idempotent $A'$. Indeed, by applying Proposition 2.12 to $A^\top$ and $(A')^\top$ we see that the following four conditions are equivalent: $X^*_A(\mathbb{C}) = X^*_{A'}(\mathbb{C})$; $\Xi^*_A(\mathbb{C}) = \Xi^*_{A'}(\mathbb{C})$; $A$ is right equivalent to $A'$; $S = 1-A$ is left equivalent to $S' = 1-A'$.

Proposition 5.22.
Let $S^{(k)}, A^{(k)}$ and $S'^{(k)}, A'^{(k)}$ be the pairing operators for idempotents $A$ and $A'$. If $A$ is left equivalent to $A'$, then $A^{(k)}$ is left equivalent to $A'^{(k)}$, while $S^{(k)}$ is right equivalent to $S'^{(k)}$, for each $k > 1$. If $A$ is right equivalent to $A'$, then $A^{(k)}$ is right equivalent to $A'^{(k)}$, while $S^{(k)}$ is left equivalent to $S'^{(k)}$, for each $k > 1$.

Proof. The left equivalence of $A$ and $A'$ implies $\Xi_A(\mathbb{C}) = \Xi_{A'}(\mathbb{C})$, and hence we obtain $(\Psi\otimes\cdots\otimes\Psi)A^{(k)} = (\Psi\otimes\cdots\otimes\Psi) = (\Psi\otimes\cdots\otimes\Psi)A'^{(k)}$. By applying Corollary 5.4 we obtain $A'^{(k)}(1-A^{(k)}) = 0$ and $A^{(k)}(1-A'^{(k)}) = 0$. Then, due to Lemma 2.11, the idempotents $A^{(k)}$ and $A'^{(k)}$ are left equivalent. Analogously we obtain $(1-S^{(k)})S'^{(k)} = 0$ and $(1-S'^{(k)})S^{(k)} = 0$; by Lemma 2.11 this implies the left equivalence of $1-S^{(k)}$ and $1-S'^{(k)}$, which in turn means the right equivalence of $S^{(k)}$ and $S'^{(k)}$ by Proposition 2.7.

Let $A, A' \in \mathrm{End}(\mathbb{C}^n\otimes\mathbb{C}^n)$ be left equivalent idempotents and let $S^{(k)}, A^{(k)}$ and $S'^{(k)}, A'^{(k)}$ be the corresponding pairing operators. From Proposition 5.22 we obtain $A'^{(k)} = G^{[k]}A^{(k)}$, $S'^{(k)} = S^{(k)}G^{(k)}$ for some $G^{[k]}, G^{(k)} \in \mathrm{Aut}\big((\mathbb{C}^n)^{\otimes k}\big)$. Then, by means of Proposition 5.9, the identification $X_A(\mathbb{C})_k = X_{A'}(\mathbb{C})_k$ induces an isomorphism $\big((\mathbb{C}^n)^{\otimes k}\big)^*S^{(k)} \cong \big((\mathbb{C}^n)^{\otimes k}\big)^*S'^{(k)}$. Explicitly it has the form $\xi \mapsto \xi' = \xi G^{(k)} = \xi S'^{(k)}$, $\xi \in \big((\mathbb{C}^n)^{\otimes k}\big)^*S^{(k)}$. Indeed, we have $\xi' \in \big((\mathbb{C}^n)^{\otimes k}\big)^*S'^{(k)}$ and $\xi'(X\otimes\cdots\otimes X) = \xi S'^{(k)}(X\otimes\cdots\otimes X) = \xi(X\otimes\cdots\otimes X)$. The inverse map is $\xi' \mapsto \xi'G^{(k)\,-1} = \xi'S^{(k)}$. Analogously, the identification $\Xi_A(\mathbb{C})_k = \Xi_{A'}(\mathbb{C})_k$ gives $A^{(k)}\big((\mathbb{C}^n)^{\otimes k}\big) \cong A'^{(k)}\big((\mathbb{C}^n)^{\otimes k}\big)$, $\pi \mapsto \pi' = G^{[k]}\pi = A'^{(k)}\pi$, $\pi \in A^{(k)}\big((\mathbb{C}^n)^{\otimes k}\big)$.
Proposition 2.9 implies the equalities of subspaces $S^{(k)}\big((\mathbb{C}^n)^{\otimes k}\big) = S'^{(k)}\big((\mathbb{C}^n)^{\otimes k}\big)$ and $\big((\mathbb{C}^n)^{\otimes k}\big)^*A^{(k)} = \big((\mathbb{C}^n)^{\otimes k}\big)^*A'^{(k)}$. Proposition 5.9 in turn gives the isomorphisms of vector spaces $X^*_A(\mathbb{C})_k \cong X^*_{A'}(\mathbb{C})_k$ and $\Xi^*_A(\mathbb{C})_k \cong \Xi^*_{A'}(\mathbb{C})_k$ (these are not homomorphisms of algebras). They are given by the formulae $x^A_{i_1}\cdots x^A_{i_k} \mapsto x^{A'}_{i_1}\cdots x^{A'}_{i_k}$ and $\psi^{i_1}_A\cdots\psi^{i_k}_A \mapsto \psi^{i_1}_{A'}\cdots\psi^{i_k}_{A'}$, where $x^A_i$, $x^{A'}_i$, $\psi^i_A$ and $\psi^i_{A'}$ are the generators of the algebras $X^*_A(\mathbb{C})$, $X^*_{A'}(\mathbb{C})$, $\Xi^*_A(\mathbb{C})$ and $\Xi^*_{A'}(\mathbb{C})$ respectively.

Let $\{v_\alpha\}_{\alpha=1}^{d_k}$, $\{v^\alpha\}_{\alpha=1}^{d_k}$ and $\{w_\alpha\}_{\alpha=1}^{r_k}$, $\{w^\alpha\}_{\alpha=1}^{r_k}$ be dual bases of $S^{(k)}\big((\mathbb{C}^n)^{\otimes k}\big)$, $\big((\mathbb{C}^n)^{\otimes k}\big)^*S^{(k)}$, $A^{(k)}\big((\mathbb{C}^n)^{\otimes k}\big)$ and $\big((\mathbb{C}^n)^{\otimes k}\big)^*A^{(k)}$ respectively. Then
$$v'_\alpha = v_\alpha, \qquad (v')^\alpha = v^\alpha G^{(k)} = v^\alpha S'^{(k)}, \qquad \alpha = 1,\dots,d_k, \tag{5.83}$$
$$(w')^\alpha = w^\alpha, \qquad w'_\alpha = G^{[k]}w_\alpha = A'^{(k)}w_\alpha, \qquad \alpha = 1,\dots,r_k, \tag{5.84}$$
are dual bases of $S'^{(k)}\big((\mathbb{C}^n)^{\otimes k}\big)$, $\big((\mathbb{C}^n)^{\otimes k}\big)^*S'^{(k)}$, $\big((\mathbb{C}^n)^{\otimes k}\big)^*A'^{(k)}$ and $A'^{(k)}\big((\mathbb{C}^n)^{\otimes k}\big)$.

By substituting $(v')^\alpha$ and $w'_\alpha$ into the formulae (5.32) and (5.34) we obtain the same bases of $X_{A'}(\mathbb{C})_k = X_A(\mathbb{C})_k$ and $\Xi_{A'}(\mathbb{C})_k = \Xi_A(\mathbb{C})_k$:
$$(v')^\alpha(X^*\otimes\cdots\otimes X^*) = v^\alpha S'^{(k)}(X^*\otimes\cdots\otimes X^*) = v^\alpha(X^*\otimes\cdots\otimes X^*) = x^\alpha_{(k)}, \tag{5.85}$$
$$(\Psi\otimes\cdots\otimes\Psi)w'_\alpha = (\Psi\otimes\cdots\otimes\Psi)A'^{(k)}w_\alpha = (\Psi\otimes\cdots\otimes\Psi)w_\alpha = \psi^{(k)}_\alpha. \tag{5.86}$$
Note that the bases of $X^*_{A'}(\mathbb{C})_k$ and $X^*_A(\mathbb{C})_k$ defined by the formula (5.33) for $v'_\alpha = v_\alpha$ are not identified, since they are elements of different algebras.
The same is valid for the bases (5.35) of $\Xi^*_{A'}(\mathbb{C})_k$ and $\Xi^*_A(\mathbb{C})_k$.

Let $\widetilde S^{(k)}, \widetilde A^{(k)}$ and $\widetilde S'^{(k)}, \widetilde A'^{(k)}$ be the pairing operators for left equivalent idempotents $\widetilde A, \widetilde A' \in \mathrm{End}(\mathbb{C}^m\otimes\mathbb{C}^m)$. We have $\widetilde S'^{(k)} = \widetilde S^{(k)}\widetilde G^{(k)}$ and $\widetilde A'^{(k)} = \widetilde G^{[k]}\widetilde A^{(k)}$ for some matrices $\widetilde G^{(k)}, \widetilde G^{[k]} \in \mathrm{Aut}\big((\mathbb{C}^m)^{\otimes k}\big)$. Let $M$ be an $(A,\widetilde A)$-Manin matrix. Due to Proposition 2.20 this means that $M$ is also an $(A',\widetilde A')$-Manin matrix. We can consider different minor operators for $M$, but they are related by multiplication by a complex matrix in the following way.

Proposition 5.23. For any $(A,\widetilde A)$-Manin matrix $M$ we have
$$\mathrm{Min}_{\widetilde S'^{(k)}}M = \big(\mathrm{Min}_{\widetilde S^{(k)}}M\big)\widetilde G^{(k)} = \big(\mathrm{Min}_{\widetilde S^{(k)}}M\big)\widetilde S'^{(k)}, \tag{5.87}$$
$$\mathrm{Min}^{A'^{(k)}}M = G^{[k]}\big(\mathrm{Min}^{A^{(k)}}M\big) = A'^{(k)}\big(\mathrm{Min}^{A^{(k)}}M\big). \tag{5.88}$$

Proof. The formulae are obtained from the definitions of the minors:
$$\mathrm{Min}_{\widetilde S'^{(k)}}M = M^{(1)}\cdots M^{(k)}\widetilde S'^{(k)} = M^{(1)}\cdots M^{(k)}\widetilde S^{(k)}\widetilde G^{(k)} = \big(\mathrm{Min}_{\widetilde S^{(k)}}M\big)\widetilde G^{(k)},$$
$$\mathrm{Min}_{\widetilde S'^{(k)}}M = M^{(1)}\cdots M^{(k)}\widetilde S'^{(k)}\widetilde S'^{(k)} = \big(\mathrm{Min}_{\widetilde S^{(k)}}M\big)\widetilde S'^{(k)},$$
$$\mathrm{Min}^{A'^{(k)}}M = A'^{(k)}M^{(1)}\cdots M^{(k)} = G^{[k]}A^{(k)}M^{(1)}\cdots M^{(k)} = G^{[k]}\big(\mathrm{Min}^{A^{(k)}}M\big),$$
$$\mathrm{Min}^{A'^{(k)}}M = A'^{(k)}A'^{(k)}M^{(1)}\cdots M^{(k)} = A'^{(k)}\big(\mathrm{Min}^{A^{(k)}}M\big).$$

Note that an $n^k\times m^k$ matrix is an $(A^{(2k)},\widetilde A^{(2k)})$-Manin matrix iff it is an $(A'^{(2k)},\widetilde A'^{(2k)})$-Manin matrix, and it is a $(1-S^{(2k)}, 1-\widetilde S^{(2k)})$-Manin matrix iff it is a $(1-S'^{(2k)}, 1-\widetilde S'^{(2k)})$-Manin matrix. Due to Proposition 5.21 both matrices $\mathrm{Min}_{\widetilde S^{(k)}}M$ and $\mathrm{Min}_{\widetilde S'^{(k)}}M$ are $(1-S^{(2k)}, 1-\widetilde S^{(2k)})$-Manin matrices as well as $(1-S'^{(2k)}, 1-\widetilde S'^{(2k)})$-Manin matrices.
They are related by the change of basis in the space $(\mathbb{C}^m)^{\otimes k}$ corresponding to the matrix $\widetilde G^{(k)\,-1}$ (see Subsection 2.4). In the same way the matrices $\mathrm{Min}^{A^{(k)}}M$ and $\mathrm{Min}^{A'^{(k)}}M$ are $(A^{(2k)},\widetilde A^{(2k)})$-Manin matrices as well as $(A'^{(2k)},\widetilde A'^{(2k)})$-Manin matrices, and they are related by the change of basis in the space $(\mathbb{C}^n)^{\otimes k}$ corresponding to the matrix $G^{[k]}$.

Let $\{\widetilde v_\alpha\}_{\alpha=1}^{\widetilde d_k}$, $\{\widetilde v^\alpha\}_{\alpha=1}^{\widetilde d_k}$ and $\{\widetilde w_\alpha\}_{\alpha=1}^{\widetilde r_k}$, $\{\widetilde w^\alpha\}_{\alpha=1}^{\widetilde r_k}$ be dual bases of $\widetilde S^{(k)}\big((\mathbb{C}^m)^{\otimes k}\big)$, $\big((\mathbb{C}^m)^{\otimes k}\big)^*\widetilde S^{(k)}$, $\widetilde A^{(k)}\big((\mathbb{C}^m)^{\otimes k}\big)$ and $\big((\mathbb{C}^m)^{\otimes k}\big)^*\widetilde A^{(k)}$. Let
$$\widetilde v'_\alpha = \widetilde v_\alpha, \qquad (\widetilde v')^\alpha = \widetilde v^\alpha\widetilde G^{(k)} = \widetilde v^\alpha\widetilde S'^{(k)}, \qquad \alpha = 1,\dots,\widetilde d_k,$$
$$(\widetilde w')^\alpha = \widetilde w^\alpha, \qquad \widetilde w'_\alpha = \widetilde G^{[k]}\widetilde w_\alpha = \widetilde A'^{(k)}\widetilde w_\alpha, \qquad \alpha = 1,\dots,\widetilde r_k.$$

Proposition 5.24. Consider the entries of the minor operators (5.78) and (5.79) in the bases defined above:
$$(\mathrm{Min}_{\widetilde S^{(k)}}M)^\alpha_\beta = v^\alpha(\mathrm{Min}_{\widetilde S^{(k)}}M)\widetilde v_\beta, \qquad (\mathrm{Min}_{\widetilde S'^{(k)}}M)^\alpha_\beta = (v')^\alpha(\mathrm{Min}_{\widetilde S'^{(k)}}M)\widetilde v'_\beta,$$
$$(\mathrm{Min}^{A^{(k)}}M)^\gamma_\delta = w^\gamma(\mathrm{Min}^{A^{(k)}}M)\widetilde w_\delta, \qquad (\mathrm{Min}^{A'^{(k)}}M)^\gamma_\delta = (w')^\gamma(\mathrm{Min}^{A'^{(k)}}M)\widetilde w'_\delta.$$
These entries coincide:
$$\big(\mathrm{Min}_{\widetilde S^{(k)}}M\big)^\alpha_\beta = \big(\mathrm{Min}_{\widetilde S'^{(k)}}M\big)^\alpha_\beta, \qquad \big(\mathrm{Min}^{A^{(k)}}M\big)^\gamma_\delta = \big(\mathrm{Min}^{A'^{(k)}}M\big)^\gamma_\delta.$$

Proof. By using $\big(\mathrm{Min}^{A'^{(k)}}M\big)\widetilde A'^{(k)} = \mathrm{Min}^{A'^{(k)}}M$, Proposition 5.23 and $(w')^\gamma A'^{(k)} = (w')^\gamma$ we obtain
$$\big(\mathrm{Min}^{A'^{(k)}}M\big)^\gamma_\delta = (w')^\gamma\big(\mathrm{Min}^{A'^{(k)}}M\big)\widetilde w'_\delta = (w')^\gamma\big(\mathrm{Min}^{A'^{(k)}}M\big)\widetilde A'^{(k)}\widetilde w_\delta = (w')^\gamma\big(\mathrm{Min}^{A'^{(k)}}M\big)\widetilde w_\delta = (w')^\gamma A'^{(k)}\big(\mathrm{Min}^{A^{(k)}}M\big)\widetilde w_\delta = (w')^\gamma\big(\mathrm{Min}^{A^{(k)}}M\big)\widetilde w_\delta = \big(\mathrm{Min}^{A^{(k)}}M\big)^\gamma_\delta.$$
The equality of the entries of the $S$-minors is proved similarly.
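The multiplicativity of minor operators in Theorem 5.19 can be illustrated numerically; the sketch below (an ad hoc example with commuting numeric entries and antisymmetrizer idempotents, not from the paper) checks the identity (5.70), $\mathrm{Min}^{A^{(k)}}(MN) = (\mathrm{Min}^{A^{(k)}}M)(\mathrm{Min}^{\widetilde A^{(k)}}N)$, for random rectangular matrices.

```python
import numpy as np
from itertools import permutations
from math import factorial

def perm_op(p, n, k):
    """Operator permuting the tensor factors of (C^n)^{(x)k}."""
    inv = [0] * k
    for a, v in enumerate(p):
        inv[v] = a
    op = np.zeros((n ** k, n ** k))
    for col in range(n ** k):
        i = [(col // n ** (k - 1 - a)) % n for a in range(k)]
        row = sum(i[inv[a]] * n ** (k - 1 - a) for a in range(k))
        op[row, col] = 1.0
    return op

def sign(p):
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def antisym(n, k):
    """Normalised antisymmetrizer A^(k) on (C^n)^{(x)k}."""
    return sum(sign(p) * perm_op(p, n, k) for p in permutations(range(k))) / factorial(k)

n, m, l, k = 2, 3, 2, 2
rng = np.random.default_rng(1)
M = rng.standard_normal((n, m))    # numeric entries commute, so M and N are
N = rng.standard_normal((m, l))    # Manin matrices for the antisymmetrizer idempotents

# Min^{A^(k)}(MN) versus (Min^{A^(k)} M)(Min^{A~^(k)} N), formula (5.70)
lhs = antisym(n, k) @ np.kron(M @ N, M @ N)
rhs = (antisym(n, k) @ np.kron(M, M)) @ (antisym(m, k) @ np.kron(N, N))
assert np.allclose(lhs, rhs)
print("ok")
```

The key step hidden in the assertion is $A^{(k)}M^{\otimes k} = M^{\otimes k}\widetilde A^{(k)}$, which holds because permutation operators intertwine Kronecker powers.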
Theorem 5.12 gives a formula for the pairing operators via dual bases. However, there is a basis-free method of construction. It uses the representation theory of groups and algebras, for which some appropriate idempotents are already constructed.

The operators $A^{(1,2)}, A^{(2,3)}, \dots, A^{(k-1,k)} \in \mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ generate a subalgebra $U_k$ of the algebra $\mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$. Equivalently, the algebra $U_k$ can be defined as the subalgebra of $\mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ generated by the $P^{(a,a+1)}$ or by the $S^{(a,a+1)}$. Let $I^+_k$ and $I^-_k$ be the ideals of $\mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ generated by the $A^{(a,a+1)}$ and the $S^{(a,a+1)}$ respectively. The subspaces $U^+_k := U_k \cap I^+_k$ and $U^-_k := U_k \cap I^-_k$ are non-unital subalgebras generated by the $A^{(a,a+1)}$ and the $S^{(a,a+1)}$; they are maximal ideals of the algebra $U_k$. In these terms the conditions (5.17) and (5.20) can be written in the form
$$TS^{(k)} = S^{(k)}T = 0 \quad \forall\,T \in U^+_k, \tag{5.89}$$
$$TA^{(k)} = A^{(k)}T = 0 \quad \forall\,T \in U^-_k. \tag{5.90}$$
The commutation relations for the algebras $X^*_A(\mathbb{C})$, $X_A(\mathbb{C})$, $\Xi_A(\mathbb{C})$ and $\Xi^*_A(\mathbb{C})$ imply
$$(X^*\otimes\cdots\otimes X^*)T = 0, \qquad T(X\otimes\cdots\otimes X) = 0 \quad \forall\,T \in U^+_k, \tag{5.91}$$
$$(\Psi\otimes\cdots\otimes\Psi)T = 0, \qquad T(\Psi^*\otimes\cdots\otimes\Psi^*) = 0 \quad \forall\,T \in U^-_k. \tag{5.92}$$
Thus, if $S^{(k)} \in U_k$ satisfies (5.89) and $1-S^{(k)} \in U^+_k$, then $S^{(k)}$ is the $k$-th $S$-operator. Analogously, an operator $A^{(k)} \in U_k$ satisfying (5.90) with $1-A^{(k)} \in U^-_k$ is the $k$-th $A$-operator. If the algebra $U_k$ admits the involution $\omega\colon P^{(a,a+1)} \mapsto -P^{(a,a+1)}$, then $A^{(k)} = \omega(S^{(k)})$, so by using this involution one can obtain the $k$-th $A$-operator from the $k$-th $S$-operator and vice versa.

Let us consider the case when the algebra $U_k$ is the group algebra of a finite group.

Proposition 5.25. Let $G^+_k$ and $G^-_k$ be the subgroups of $\mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ generated by the operators $P^{(1,2)}, P^{(2,3)}, \dots, P^{(k-1,k)}$ and $-P^{(1,2)}, -P^{(2,3)}, \dots, -P^{(k-1,k)}$ respectively.
The group $G^+_k$ is finite iff the group $G^-_k$ is finite. In this case the pairing operators exist and have the form
$$S^{(k)} = \frac{1}{|G^+_k|}\sum_{g\in G^+_k} g, \qquad A^{(k)} = \frac{1}{|G^-_k|}\sum_{g\in G^-_k} g. \tag{5.93}$$

Proof. Suppose $G^+_k$ is finite. If $-1 \in G^-_k$, then $G^-_k = G^+_k \cup (-G^+_k)$, so that $G^-_k$ is also finite (more precisely, we have $G^-_k = G^+_k$ if $-1 \in G^+_k$, or $G^-_k = G^+_k \sqcup (-G^+_k)$ if $-1 \notin G^+_k$). Hence we can suppose $-1 \notin G^-_k$. For brevity we denote $g_a = P^{(a,a+1)}$ and $g^*_a = -g_a = -P^{(a,a+1)}$. The finiteness of the group $G^+_k$ means that there exists $N \in \mathbb{N}$ such that any element $g \in G^+_k$ can be written as $g_{a_1}g_{a_2}\cdots g_{a_m}$ with $m \le N$. Then any product $g^*_{a_1}\cdots g^*_{a_{N+1}} = (-1)^{N+1}g_{a_1}\cdots g_{a_{N+1}}$ can be written as $(-1)^{N+1}g_{a'_1}\cdots g_{a'_m} = (-1)^{N+1+m}g^*_{a'_1}\cdots g^*_{a'_m}$ for some $m \le N$. By using $(g^*_s)^2 = 1$ we obtain $(-1)^{N+1+m} = g^*_{a_1}\cdots g^*_{a_{N+1}}g^*_{a'_m}\cdots g^*_{a'_1} \in G^-_k$. Since $-1 \notin G^-_k$ we have $(-1)^{N+1+m} = 1$, so for any $a_1,\dots,a_N,a_{N+1}$ there exist $m \le N$ and $a'_1,\dots,a'_m$ such that $g^*_{a_1}\cdots g^*_{a_{N+1}} = g^*_{a'_1}\cdots g^*_{a'_m}$. This means that by induction we can write any element of $G^-_k$ as a product $g^*_{a_1}\cdots g^*_{a_m}$ for some $m \le N$, which implies the finiteness of the group $G^-_k$. The converse implication is obtained by changing the sign of $P$.

We have $U_k = \mathbb{C}[G^+_k] = \mathbb{C}[G^-_k]$. The spaces $U^\pm_k$ are spanned by the elements $T = 1-g$, $g \in G^\pm_k$. Note that the operators (5.93) satisfy $gS^{(k)} = S^{(k)}g = S^{(k)}$ for all $g \in G^+_k$ and $gA^{(k)} = A^{(k)}g = A^{(k)}$ for all $g \in G^-_k$, hence the conditions (5.89) and (5.90) are valid. Since $1-S^{(k)} = \frac{1}{|G^+_k|}\sum_{g\in G^+_k}(1-g) \in U^+_k$ and $1-A^{(k)} = \frac{1}{|G^-_k|}\sum_{g\in G^-_k}(1-g) \in U^-_k$, the operators $S^{(k)}$ and $A^{(k)}$ are the pairing operators for $A$.

At $k = 2$ the groups consist of exactly two elements: $G^+_2 = \{1, P\}$, $G^-_2 = \{1, -P\}$. Hence we obtain $S^{(2)} = \frac{1}{2}(1+P) = S$ and $A^{(2)} = \frac{1}{2}(1-P) = A$.
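The averaging formula (5.93) is easy to test numerically. The sketch below (an illustration with $n = 2$, $k = 3$, not from the paper) generates $G^\pm_3$ by naively closing the generators $\pm P^{(a,a+1)}$ under multiplication and checks that the group averages are the idempotent pairing operators:

```python
import numpy as np

n, k = 2, 3
P = np.zeros((n * n, n * n))              # flip operator on C^n (x) C^n
for i in range(n):
    for j in range(n):
        P[j * n + i, i * n + j] = 1.0

def emb(op, a):
    """Embed a two-site operator into sites (a, a+1) of (C^n)^{(x)k}."""
    return np.kron(np.kron(np.eye(n ** a), op), np.eye(n ** (k - a - 2)))

def generate(gens):
    """Close a finite set of matrices under multiplication (naive BFS)."""
    elems = [np.eye(n ** k)]
    frontier = list(elems)
    while frontier:
        new = []
        for g in frontier:
            for h in gens:
                gh = g @ h
                if not any(np.allclose(gh, e) for e in elems):
                    elems.append(gh)
                    new.append(gh)
        frontier = new
    return elems

Gplus = generate([emb(P, a) for a in range(k - 1)])    # G^+_3, isomorphic to S_3
Gminus = generate([-emb(P, a) for a in range(k - 1)])  # G^-_3
S = sum(Gplus) / len(Gplus)                            # S^(3) per (5.93)
Aop = sum(Gminus) / len(Gminus)                        # A^(3) per (5.93)

print(len(Gplus))                                      # |S_3| = 6
assert np.allclose(S @ S, S) and np.allclose(Aop @ Aop, Aop)
assert all(np.allclose(emb(P, a) @ S, S) for a in range(k - 1))
assert all(np.allclose(-emb(P, a) @ Aop, Aop) for a in range(k - 1))
```

Here the averages come out as the symmetrizer and antisymmetrizer of $(\mathbb{C}^2)^{\otimes 3}$, in agreement with the braid-relation case discussed below Proposition 5.28.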
This is in accordance with Proposition 5.6.

Let $\mathcal{U}_k$ be an abstract algebra with an augmentation $\varepsilon\colon \mathcal{U}_k \to \mathbb{C}$. We call an element $s^{(k)}$ left invariant or right invariant (with respect to $\varepsilon$) if $us^{(k)} = \varepsilon(u)s^{(k)}\ \forall u \in \mathcal{U}_k$ or $s^{(k)}u = \varepsilon(u)s^{(k)}\ \forall u \in \mathcal{U}_k$ respectively. If an element $s^{(k)} \in \mathcal{U}_k$ is left or right invariant and normalised as $\varepsilon(s^{(k)}) = 1$, then it is an idempotent.

Let $\rho\colon \mathcal{U}_k \to \mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ be an algebra homomorphism (representation) such that $U_k \subset \rho(\mathcal{U}_k)$ and $\rho(u) - \varepsilon(u) \in U^+_k$ for all $u \in \rho^{-1}(U_k)$. Then the left and right invariance of $s^{(k)}$ implies that $S^{(k)} = \rho(s^{(k)})$ satisfies (5.89). If $\rho(\mathcal{U}_k) = U_k$, then $S^{(k)} \in U_k$, so due to the formulae (5.91) the operator $S^{(k)}$ is the $k$-th $S$-operator. In the more general case one needs to check the conditions (5.18), (5.19). Note that due to the condition $\varepsilon(s^{(k)}) = 1$ it is enough to show that the operators $\rho(u) - \varepsilon(u)$ annihilate $(X^*\otimes\cdots\otimes X^*)$ and $(X\otimes\cdots\otimes X)$ by acting from the right and left respectively, where $u$ runs over the algebra $\mathcal{U}_k$ or at least over a set of its generators.

The pairing operators $A^{(k)}$ are obtained in the same way. Usually one needs to consider the same $\mathcal{U}_k$, $\varepsilon$, $s^{(k)}$ with a different representation $\rho$, or the same representation with different $s^{(k)}$ and $\varepsilon$. Let us summarise.

Theorem 5.26. Let $\rho\colon \mathcal{U}_k \to \mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ and $\varepsilon\colon \mathcal{U}_k \to \mathbb{C}$ be algebra homomorphisms. Suppose that $U_k \subset \rho(\mathcal{U}_k)$. Let $s^{(k)} \in \mathcal{U}_k$ be a normalised left and right invariant element:
$$us^{(k)} = s^{(k)}u = \varepsilon(u)s^{(k)} \quad \forall\,u \in \mathcal{U}_k, \qquad \varepsilon(s^{(k)}) = 1. \tag{5.94}$$
• If $\rho(u) - \varepsilon(u) \in U^+_k$ for all $u \in \mathcal{U}_k$ such that $\rho(u) \in U_k$, and
$$(X^*\otimes\cdots\otimes X^*)\rho(u) = \varepsilon(u)(X^*\otimes\cdots\otimes X^*), \tag{5.95}$$
$$\rho(u)(X\otimes\cdots\otimes X) = \varepsilon(u)(X\otimes\cdots\otimes X) \tag{5.96}$$
for all $u \in \mathcal{U}_k$, then $S^{(k)} = \rho(s^{(k)}) \in \mathrm{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ is the $k$-th $S$-operator.
• If ρ(u) − ε(u) ∈ U_k^− for all u ∈ U_k such that ρ(u) ∈ U_k, and

(Ψ ⊗ ··· ⊗ Ψ) ρ(u) = ε(u) (Ψ ⊗ ··· ⊗ Ψ),  (5.97)
ρ(u) (Ψ* ⊗ ··· ⊗ Ψ*) = ε(u) (Ψ* ⊗ ··· ⊗ Ψ*)  (5.98)

for all u ∈ U_k, then A^(k) = ρ(s^(k)) ∈ End((C^n)^{⊗k}) is the k-th A-operator.

Remark 5.27. The augmentation ε: U_k → C defines the ideal I_k := Ker(ε) of U_k. It is a maximal ideal consisting of the elements u − ε(u), where u ∈ U_k. The conditions ρ(u) − ε(u) ∈ U_k^± ∀ u ∈ ρ^{−1}(U_k) are equivalent to ρ(I_k) ∩ U_k ⊂ U_k^±. They can be written in the form ρ(u) ∈ U_k^± ∀ u ∈ I_k ∩ ρ^{−1}(U_k). In terms of this ideal the conditions (5.94) take the form u s^(k) = s^(k) u = 0 ∀ u ∈ I_k and 1 − s^(k) ∈ I_k.

Conversely, for any maximal ideal I_k of the algebra U_k there is a unique algebra isomorphism U_k/I_k ≅ C, so the canonical projection U_k ↠ U_k/I_k defines an augmentation ε: U_k → C. In particular, the ideals U_k^± give the augmentations U_k ↠ U_k/U_k^± ≅ C.

If the algebra U_k admits an anti-automorphism which does not change the generators, then it is enough to check only the left invariance (or only the right invariance) due to the following fact.

Proposition 5.28. Let ε: U_k → C be a homomorphism.
• Let s^(k) be a left invariant and s̄^(k) a right invariant element of the algebra U_k. If ε(s^(k)) = ε(s̄^(k)) = 1, then s^(k) = s̄^(k).
• Let s^(k) ∈ U_k be left invariant and ε(s^(k)) = 1. If there exists an anti-automorphism α: U_k → U_k such that εα = ε, then s̄^(k) := α(s^(k)) is right invariant and ε(s̄^(k)) = 1. Hence s^(k) = s̄^(k), and s^(k) is right invariant as well.
• If a solution of (5.94) exists then it is unique.

Proof. The first part is proved as Proposition 5.6, that is, s̄^(k) s^(k) = ε(s̄^(k)) s^(k) = s^(k) and s̄^(k) s^(k) = ε(s^(k)) s̄^(k) = s̄^(k), whence s^(k) = s̄^(k).
The second part follows from from the fact that ¯ s ( k ) = α ( s ( k ) ) isright invariant with respect to the augmentation εα − = ε .Consider the case when P satisfies the braid relation P (12) P (23) P (12) = P (23) P (12) P (23) .Since P = 1 we have the homomorphisms ρ ± : C [ S k ] → End (cid:0) ( C n ) ⊗ k (cid:1) defined by the for-mulae ρ ± ( σ a ) = ± P ( a,a +1) . The role of U k is played by C [ S k ]. Since G ± k = ρ ± ( S k ) the groups G + k and G − k are finite. The augmentation ε : C [ S k ] → C is the counit ε ( σ ) = 1 ∀ σ ∈ S k .The operators (5.93) coincide with the image of s ( k ) = k ! P σ ∈ S k σ under the homomorphisms ρ + and ρ − . Note that A ( k ) can be obtained as the image of a ( k ) = k ! P σ ∈ S k ( − σ σ under ρ + .In this case one need to consider the augmentation ε ( σ a ) = − 1. Anyway, we obtain S ( k ) = 1 k ! X σ ∈ S k π + ( σ ) , A ( k ) = 1 k ! X σ ∈ S k ( − σ π + ( σ ) . (5.99) Here we construct pairing operators for the examples given in Section 3 and consider thecorresponding minors. Since the Manin matrices described in Subsections 3.1 and 3.2 areparticular cases of the ( b q, b p )-Manin matrices it is sufficient to consider minors for the caseof Subsection 3.3. The formulae for S - and A -minors of the ( b q, b p )-Manin matrices are validfor more general case: for ( B, A b p )- and ( A b q , B )-Manin matrices respectively. By startingwith the idempotents b R q − introduced in Subsection 4.1 we write another pairing operators,which gives related minor operators for q -Manin matrices. Finally we investigate the case ofSubsection 3.4. ( b q, b p ) -Manin matrices Consider the idempotent A = A b q . The pairing operators for A are given by the formu-lae (5.99) where ρ + = ρ b q : C [ S k ] → End (cid:0) ( C n ) ⊗ k (cid:1) , ρ b q ( σ a ) = P ( a,a +1) b q .70et I = ( i , . . . 
, i k ) and b q II be the corresponding k × k matrix with entries ( b q II ) st = q i s i t .Then we have homomorphism Ξ A b qII ( C ) → Ξ A b q ( C ) given by the formula ψ s ψ i s . Byapplying it to (3.44) one yields ψ i σ (1) · · · ψ i σ ( k ) = ( − σ µ ( b q II , σ ) ψ i · · · ψ i k , (6.1)where µ ( b q, σ ) := Y s 72o consider S -operators for A = A b q and corresponding S -minors we first note the formula x σ (1) · · · x σ ( n ) = x · · · x n Y i The formulae (6.44) define idempotents s ± ( k ) satisfying ε ± ( s ± ( k ) ) = 1 , us ± ( k ) = s ± ( k ) u = ε ± ( u ) s ± ( k ) ∀ u ∈ H qk . (6.45) Hence S ( k ) = ρ ( s +( k ) ) and A ( k ) = ρ ( s − ( k ) ) are pairing operators for A = b R qn − . Proof. The first formula follows from (6.42) and the equality ε ± ( s ± ( k − ) = 1 which can besupposed by induction. Further we prove the left invariance. Suppose us ± ( k − = ε ± ( u ) s ± ( k − ∀ u ∈ H qk − . We need to prove h a s ± ( k ) = ± q ∓ s ± ( k ) for a = 1 , . . . , k − 1. Denote by I ± b,k the leftideal of H qk generated by the elements u − ε ± ( u ), u ∈ H qb . Due to the induction assumptionwe have I ± b,k s ± ( k − = 0 for b = 1 , . . . , k − 1. Then the relations we need to prove follows fromthe formulae h k − l t ± k ∈ ± q ∓ t ± k + I ± k − l,k which we prove by induction in l > 1. For l = 1 wehave h k − t ± k = q ± ( k − h k − ± q ± ( k − h k − + h k − k − X a =2 ( ± a q ∓ a h k − a · · · h k − h k − h k − . By using the relations (6.35)–(6.37) we obtain h k − t ± k = q ± ( k − h k − ± q ± ( k − ± q ± ( k − ( q − − q ) h k − + k − X a =2 ( ± a q ∓ a h k − a · · · h k − h k − h k − h k − . By taking into account q k − + q k − ( q − − q ) = q − q k − and q − k − q − k ( q − − q ) = qq − k wederive h k − t ± k ∈ ± q ∓ t ± k + I ± k − ,k . 
Suppose that h k − l t ± k ∈ ± q ∓ t ± k + I ± k − l,k for some l > k > l then for any k > l + 1 we obtain h k − l − t ± k = q ± ( k − h k − l − ± h k − l − t ± k − h k − ∈ q ± ( k − h k − l − ± ( ± q ∓ ) t ± k − h k − + I ± k − l − ,k − h k − . Since uh k − = h k − u for any u ∈ H qk − l − we have the inclusion I ± k − l − ,k − h k − ⊂ I ± k − l − ,k , sothat h k − l − t ± k ∈ ± q ∓ t ± k + I ± k − l − ,k . The right invariance of s ± ( k ) now follows from Proposi-tion 5.28, since the formula ρ ( h a ) = h a defines an anti-automorphism of H qk . Corollary 6.2. We have the following symmetric recurrent formulae: s ± ( k ) = q ± ( k − k q s ± ( k − ± ( k − q k q s ± ( k − h k − s ± ( k − , (6.46) S ( k ) = q k − k q S ( k − + ( k − q k q S ( k − (cid:0) b R qn (cid:1) ( k − ,k ) S ( k − , (6.47) A ( k ) = q − k k q A ( k − − ( k − q k q A ( k − (cid:0) b R qn (cid:1) ( k − ,k ) A ( k − . (6.48)78 roof. By multiplying t ± k = q ± ( k − ± P k − a =0 ( ± a q ± ( k − − a ) h k − − a · · · h k − by s ± ( k − from theright and taking into account (6.45) for k − s ± ( k − t ± k = q ± ( k − s ± ( k − ± k − X a =0 ( ± a q ± ( k − − a ) ε ± ( h k − − a · · · h k − ) s ± ( k − h k − = q ± ( k − s ± ( k − ± k − X a =0 q ± ( k − − a ) q ∓ a s ± ( k − h k − = q ± ( k − s ± ( k − ± ( k − q s ± ( k − h k − . Substituting it to s ± ( k ) = s ± ( k − s ± ( k ) = k q s ± ( k − t ± k s ± ( k − we obtain (6.46). Application of thehomomorphism ρ to (6.46) gives the formulae (6.47) and (6.48).Let us calculate the entries of the A -operators for the idempotent A = R qn − . Since X A ( C ) = X A b q ( C ) and X ∗ A ( C ) = X ∗ A b p ( C ) for q ij = q sgn( j − i ) and p ij = q sgn( i − j ) the formu-lae (6.1), (6.3) take the form ψ i σ (1) · · · ψ i σ ( k ) = ( − σ q − inv( σ ) ψ i · · · ψ i k , ψ i σ (1) · · · ψ i σ ( k ) = ( − σ q − inv( σ ) ψ i · · · ψ i k , (6.49)where i < . . . < i k . 
Hence A i σ (1) ...i σ ( k ) j τ (1) ...j τ ( k ) = ( − σ ( − τ q − inv( σ ) − inv( τ ) A i ...i k j ...j k , (6.50)for i < . . . < i k , j < . . . < j k and the other entries of A ( k ) vanish.To find A i ...i k j ...j k we first write the formula (6.50) for the longest permutation σ = τ : A i k ...i j k ...j = q − k ( k − A i ...i k j ...j k . (6.51)Let us prove that there are numbers λ k ∈ C such that A i k ...i j k ...j = λ k δ i k j k · · · δ i j (6.52)for any i k > i k − > . . . > i and j k > j k − > . . . > j . This holds for k = 1 with λ = 1.Suppose it holds for k − 1, then by taking the corresponding entries in (6.48) we obtain A i k ...i j k ...j = q − k k q A i k ...i j k ...j δ i j − ( k − q k q X l k ...l l ′ A i k ...i l k ...l (cid:0) b R qn (cid:1) l i l ′ j A l k ...l l ′ j k ...j j . (6.53)If the term in the sum does not vanish then l = i s for some s = k, . . . , l ′ = j t for some t = k, . . . , 2, so we have l > i and l ′ > j . The expression (4.10) implies that (cid:0) b R qn (cid:1) ii ′ jj ′ = 0for i > i ′ and j > j ′ . Hence the sum vanishes and we obtain (6.52) with λ k = q − k k q λ k − .Iteratively we derive λ k = q − k ( k − / k q ! , where k q ! = k q ( k − q · · · q q . Thus due to (6.50),(6.51) and (6.52) we obtain A i σ (1) ...i σ ( k ) j τ (1) ...j τ ( k ) = q k ( k − / k q ! ( − σ ( − τ q − inv( σ ) − inv( τ ) δ i j · · · δ i k j k (6.54)79or i < . . . < i k and j < . . . < j k .Since r k = dim Ξ b R qn − ( C ) k = dim Ξ A qn ( C ) k = (cid:0) kn (cid:1) we have tr A ( k ) = (cid:0) kn (cid:1) . Explicit calcula-tion of the trace (cf. (6.6)) gives us the formula X σ ∈ S k q − σ ) = q − k ( k − / k q ! (6.55)(it can be also checked by induction).Denote by A ′ ( k ) the k -th A -operator for A ′ = A qn . Its non-zero entries are given by theformula (6.5) for b q = q [ n ] , that is( A ′ ) i σ (1) ...i σ ( k ) j τ (1) ...j τ ( k ) = 1 k ! ( − τ ( − σ q inv( σ ) − inv( τ ) δ IJ , (6.56)where I = ( i < . . . < i k n ), J = ( j < . . . 
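The identity (6.55), read with the symmetric q-numbers [m]_q = (q^m − q^{−m})/(q − q^{−1}), states that Σ_{σ ∈ S_k} q^{−2 inv(σ)} = q^{−k(k−1)/2} [k]_q!. This reading can be confirmed exactly for small k; the rational test value of q below is an arbitrary choice.

```python
from fractions import Fraction
from itertools import permutations

def inv(sigma):
    """Number of inversions of a permutation given as a tuple."""
    m = len(sigma)
    return sum(1 for a in range(m) for b in range(a + 1, m) if sigma[a] > sigma[b])

def q_factorial(k, q):
    """Symmetric q-factorial [k]_q! with [m]_q = (q^m - q^{-m})/(q - q^{-1})."""
    out = Fraction(1)
    for m in range(1, k + 1):
        out *= (q**m - q**(-m)) / (q - q**(-1))
    return out

q = Fraction(3, 2)   # a generic rational test point
for k in range(1, 6):
    lhs = sum(q ** (-2 * inv(s)) for s in permutations(range(k)))
    rhs = q ** (-(k * (k - 1)) // 2) * q_factorial(k, q)
    assert lhs == rhs   # exact equality of rational numbers
print("identity (6.55) verified for k = 1..5")
```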
< j k n ). Due Proposition 5.22 theidempotents A ′ ( k ) and A ( k ) are left equivalent, so that A ′ ( k ) = G [ k ] A ( k ) for some invertiblematrix G [ k ] . The latter can be chosen as a diagonal matrix with entries( G [ k ] ) i σ (1) ...i σ ( k ) i σ (1) ...i σ ( k ) = k q ! k ! q σ ) − k ( k − / , for i < . . . < i k , σ ∈ S k , (6.57)( G [ k ] ) l ...l k l ...l k = 1 , if l s = l t for some s = t. (6.58)Let M be an ( b R qn − , A b p )-Manin matrix, i.e. a ( q [ n ] , b p )-Manin matrix. Its A -minors arerelated by the formula (5.88). Substitution b q = q [ n ] to (6.7) yields(Min A ′ ( k ) M ) i σ (1) ...i σ ( k ) j τ (1) ...j τ ( k ) = 1 k ! ( − τ ( − σ q inv( σ ) µ ( b p JJ , τ ) det q ( M IJ ) . (6.59)By dividing this by (6.57) we obtain the A -minors(Min A ( k ) M ) i σ (1) ...i σ ( k ) j τ (1) ...j τ ( k ) = q k ( k − / k q ! ( − τ ( − σ q − inv( σ ) µ ( b p JJ , τ ) det q ( M IJ ) . (6.60)Let w ′ I = A ′ ( k ) e I be the basis (6.9) for b q = q [ n ] . Then by the formula (5.84) we obtain thecorresponding basis of A ( k ) (cid:0) ( C n ) ⊗ k (cid:1) : w I = G − n ] w ′ I = G − n ] A ′ ( k ) e I = A ( k ) e I = q k ( k − / k q ! X σ ∈ S k ( − σ q − inv( σ ) e i σ (1) ...i σ ( k ) . (6.61)One can directly check that the basis ψ ( k ) I defined by the bases w I and w ′ I coincide (see (5.86)):by using the formulae (6.61), (6.1) and (6.55) we derive ψ ( k ) I = q k ( k − / k q ! X σ ∈ S k ( − σ q − inv( σ ) ψ i σ (1) · · · ψ i σ ( k ) = ψ i · · · ψ i k . Due to Proposition 5.24 the entries of the corresponding A -minor operators coincide for thecorresponding bases: w I (Min A ′ ( k ) ) e w J = w I (Min A ( k ) ) e w J , (6.62)where e w J is the same as in Subsection 6.1. 80 .3 Pairing operators for the -parametric case Here we consider the case described in Subsection 3.4. The idempotent A = A a,b,cκ isparametrised by a, b, c ∈ C \{ } and κ ∈ C . The commutation relations for the algebraΞ A ( C ) are ψ i ψ j + P k,l =1 ψ k ψ l P kl,ij = 0. 
They have the form ψ i ψ j + a ij ψ j ψ i = 0, i = j , and2 ψ i = P k,l =1 ε ikl a lk ψ k ψ l or ψ i = κa jk ψ k ψ j = − κa kj ψ j ψ k , ( i, j, k ) ∈ C , (6.63)where C is the set of cyclic permutations (1 , , , (2 , , , (3 , , ∗ A ( C ) is defined by the commutation relations ψ i ψ j + P k,l =1 P ij,kl ψ k ψ l = 0,these are ψ i ψ j + a ji ψ j ψ i + κa ji P k =1 ε ijk ψ k ψ k = 0. For i = j we obtain ψ i ψ i = 0, so we haveΞ ∗ A a,b,cκ ( C ) = Ξ ∗ A b q ( C ) where q ij = a ij . This implies that the idempotents A a,b,cκ and A b q areright equivalent. In particular, Ξ ∗ A a,b,cκ ( C ) = L k =0 Ξ ∗ A a,b,cκ ( C ) k and the dimensions of the compo-nents are dim Ξ ∗ A a,b,cκ ( C ) = 1, dim Ξ ∗ A a,b,cκ ( C ) = 3, dim Ξ ∗ A a,b,cκ ( C ) = 3, dim Ξ ∗ A a,b,cκ ( C ) = 1,dim Ξ ∗ A a,b,cκ ( C ) k = 0 for k > A a,b,cκ ( C ) = 3. However, the dimension ofΞ A a,b,cκ ( C ) is not always equal to 1, it depends on the parameters. The following theoremdescribes these dependence and gives the necessary and sufficient condition for existence ofthe corresponding pairing operator. Note that it is enough to consider the case κ = 0 sincethe case of the idempotent A a,b,c = A b q was considered in Subsection 6.1. Theorem 6.3. Assume κ = 0 . Consider the conditions (i) a = b = c ; (ii) a b = b c = c a = − , κ = − a b − c ; (6.64) (iii) a c = b a = c b = − , κ = a − b − c. Any two of these conditions implies the third one. We have Ξ A a,b,cκ ( C ) = L k =0 Ξ A a,b,cκ ( C ) k withthe dimensions of the components dim Ξ A a,b,cκ ( C ) = 1 , dim Ξ A a,b,cκ ( C ) = 3 , dim Ξ A a,b,cκ ( C ) = 3 , dim Ξ A a,b,cκ ( C ) = iff all three conditions (6.64) hold, iff one and only one of three conditions (6.64) holds, iff no one of three conditions (6.64) holds, dim Ξ A a,b,cκ ( C ) k = 0 for k > .The third A -operator exists iff the condition (i) holds and (ii) , (iii) do not, that is iff a = b = c and ( a = − or κ = − abc ). 
It equals A (3) = w w , where w = 16 ( e + e + e ) − a e + e + e ) , (6.65) w = e + e + e − a − ( e + e + e ) − κ ( b − e + c − e + a − e ) . (6.66)81 n this case the elements ψ i ψ j ψ k ∈ Ξ A a,b,cκ ( C ) have the form ψ i ψ j ψ k = ψ (3)1 , ψ i ψ k ψ j = − a − ψ (3)1 , ( i, j, k ) ∈ C , (6.67) ψ i ψ j = ψ i ψ j ψ i = ψ j ψ i = 0 , i = j (6.68) ψ = − κb − ψ (3)1 , ψ = − κc − ψ (3)1 , ψ = − κa − ψ (3)1 , (6.69) where ψ (3)1 = (Ψ ⊗ Ψ ⊗ Ψ) w = ψ ψ ψ . Proof. Under the condition (i) both conditions (ii) and (iii) gives a = − 1, which in turnimplies − a b − c = a − b − c . Hence (i) implies equivalence of (ii) and (iii) . Further, bycomparing the conditions (ii) and (iii) we see that they imply (i) .Now let us use Theorem 5.12. Since the idempotents S a,b,cκ = 1 − A a,b,cκ and S b q = 1 − A b q areleft equivalent the space W k for the case A = A a,b,cκ coincide with the space W k for A = A b q . Inparticular, W = C w . The space W consists of the covectors ξ = P i,j,k =1 ξ ijk e ijk satisfying ξP (12) = ξP (23) = − ξ . This gives us the system of equations ξ ijl = − a ij ξ jil , ξ lij = − a ij ξ lji , i = j, (6.70) ξ iil = κa jk ξ kjl , ξ lii = κa jk ξ lkj , ( i, j, k ) ∈ C . (6.71)The coefficients ξ ijk can be divided into three sets: { ξ iii | i = 1 , , } ∪ { ξ ijk | i = j = k = i } , (6.72) { ξ iij , ξ iji , ξ jii | ( i, j, k ) ∈ C } , (6.73) { ξ ijj , ξ jij , ξ jji | ( i, j, k ) ∈ C } . (6.74)The relations (6.70), (6.71) imply that any two coefficients from the same set are proportionalto each other (with a non-zero coefficient of proportionality), so that dim W 3. Theisomorphism Ξ A ( C ) ∼ = W ∗ implies that dim Ξ A ( C ) = dim W . Let us prove that non-vanishing of the coefficients from the sets (6.72), (6.73), (6.74) corresponds exactly to theconditions (i) , (ii) , (iii) respectively.Note that there are two types of symmetries of the system of equations (6.70), (6.71).First, it is invariant under the cyclic permutations of indices 1 1. 
Second, it is invariant under other permutations i ↔ j , i = j , with the sign change κ 7→ − κ . This system implies that ξ = κa ξ = κa a a ξ = a a ξ . A cyclicpermutation gives ξ = a a ξ . If the coefficients from the set (6.72) do not vanishthen a = a = a . This is the condition (i) . Conversely, if the condition (i) holds thenthere exists a solution with non-vanishing coefficients (6.72), namely ξ = w (the coefficientsfrom (6.73) and (6.74) vanish in this solution).Write down some relations between the coefficients (6.73) that follows from the sys-tem (6.70), (6.71): ξ = − a ξ = a ξ = κa a ξ , (6.75) ξ = κa ξ = − κa a ξ , (6.76) ξ = − κa ξ = κ a a ξ . (6.77)82ll other relations is obtained by the symmetry of the first type. By equating the same coef-ficients we obtain the condition of existence of non-vanishing solution of (6.75)–(6.77), theyare a a = − κ = − a a − a . The symmetry gives the whole condition (ii) . Thus thereis a non-zero solution ξ = w with vanishing coefficients from (6.72) and (6.74). By applyingthe symmetry of the second type we obtain the relations between the coefficients (6.74) andthe condition (iii) . This means that (iii) is necessary and sufficient condition for existenceof a non-zero solution ξ = w with vanishing coefficients (6.72) and (6.73).The isomorphism Ξ A ( C ) ∼ = W ∗ identifies the elements ψ i ψ j ψ k with linear functions on W . By substituting π = e ijk to the formula (5.38) we obtain ψ i ψ j ψ k ( ξ ) = ξ ijk , where i, j, k =1 , , ξ = P i,j,k =1 ξ ijk e ijk ∈ W k . Thus the elements ψ i ψ j ψ k for i = j = k = i and ψ i do not vanish iff (i) holds. Similarly, (ii) and (iii) are conditions of non-vanishing of theelements ψ i ψ j , ψ i ψ j ψ i , ψ j ψ i where ( i, j ) = (1 , , (2 , , (3 , 1) and ( i, j ) = (2 , , (3 , , (1 , A ( C ) k = 0 for k > k = 4. Note thatthe relations (6.63) implies that all the elements ψ i ψ i ψ i ψ i are proportional to each other,that is dim Ξ A ( C ) 1. 
If dim Ξ A ( C ) < ψ i ψ i ψ i vanishes for some i , i , i andhence Ξ A ( C ) = 0. In the case dim Ξ A ( C ) = 3 all the conditions (6.64) hold. In par-ticular, a = − 1. Suppose that dim Ξ A ( C ) = 0 and hence all the elements ψ i ψ i ψ i ψ i do not vanish. From the chain of relations ψ ψ ψ = κa ψ ψ = κ a a a ψ ψ ψ = κ a a a ψ ψ ψ = − κ a a a a a a ψ ψ ψ we obtain κ a a a = − 1. By sub-stituting (i) and (ii) we obtain − − a b − ca − b c − = − a b c − = − a , which implies a = 1. This contradicts with a = − 1, hence Ξ A ( C ) = 0.The existence of A (3) implies dim W = dim W = 1, so that one and only one of theconditions (6.64) holds. Namely W has the form C w , C w or C w under the condition (i) , (ii) or (iii) respectively. But w α w = 0 for α = 2 , 3, so that only the condition (i) isrelevant. Since w w = 1 we have A (3) = w w . Note that ψ i ψ j ψ k = ( X ⊗ X ⊗ X ) e ijk =( X ⊗ X ⊗ X ) A (3) e ijk = ψ (3)1 w e ijk , i, j, k = 1 , , 3. By substituting the explicit expression for w we derive the formulae (6.67)–(6.69). Remark 6.4. The dimension of the space X A ( C ) also depends on the values of the param-eters a, b, c, κ . By using the relations (3.51) one can relate xyz with zyx in two differentways. As a result we obtain two different expressions for xyz − a − b − c zyx , namely, κb − x − κcb − y + κa − b − c z = κa − z − κca − y + κb − a − c x . Assume κ = 0. Then we see that the elements x , y , z are linearly independent iff thecondition (i) holds. One can also obtain the relation( κa − bc − + a ) x y + ( κa − b c − − a − ) yx = κ ( a + a − c − ) xz + κa − ( c − + b c − ) y z. One can deduce that the dimension of the subspace spanned by the elements x i x j , x i x j x i , x j x i ,( i, j, k ) ∈ C , equals 4 if the condition (ii) holds and it equals 3 if this condition is false. Thedimension of the subspace spanned by the elements x j x i , x j x i x j , x i x j , ( i, j, k ) ∈ C , depends83n the condition (iii) in the same way. 
Thus the difference dim X_A(C)_3 − dim Ξ_A(C)_3 does not depend on the values of the parameters and equals 9. Moreover, the third S-operator S^(3) for the idempotent A^{a,b,c}_κ exists iff the A-operator A^(3) exists.

Finally we write the A-minors for an (A^{a,b,c}_κ, B)-Manin matrix M. By substituting the expression A_{ij,kl} = (1/2)(δ_{ik} δ_{jl} − a_{ji} δ_{il} δ_{jk} − κ a_{ji} δ_{kl} ε_{ijk}) into (Min_{A^(2)} M)^{ij}_{j_1 j_2} = Σ_{k,l=1}^3 A_{ij,kl} M_{k j_1} M_{l j_2} we obtain

(Min_{A^(2)} M)^{ij}_{j_1 j_2} = (1/2)(M_{i j_1} M_{j j_2} − a_{ji} M_{j j_1} M_{i j_2} − κ a_{ji} M_{k j_1} M_{k j_2}),
(Min_{A^(2)} M)^{ji}_{j_1 j_2} = −a_{ij} (Min_{A^(2)} M)^{ij}_{j_1 j_2},
(Min_{A^(2)} M)^{ii}_{j_1 j_2} = 0,

where (i, j, k) ∈ C_3.

Let the condition (i) hold and the conditions (ii), (iii) fail. The components of the pairing operator A^(3) = w_1 w^1 have the form A^{i_1 i_2 i_3}_{k_1 k_2 k_3} = (w_1)^{i_1 i_2 i_3} (w^1)_{k_1 k_2 k_3}. Hence (Min_{A^(3)} M)^{i_1 i_2 i_3}_{j_1 j_2 j_3} = (w_1)^{i_1 i_2 i_3} Σ_{k_1,k_2,k_3=1}^3 (w^1)_{k_1 k_2 k_3} M_{k_1 j_1} M_{k_2 j_2} M_{k_3 j_3}. By using (6.65), (6.66) we obtain

(Min_{A^(3)} M)^{123}_{j_1 j_2 j_3} = (1/6) [ Σ_{(i,j,k) ∈ C_3} (M_{i j_1} M_{j j_2} M_{k j_3} − a^{−1} M_{j j_1} M_{i j_2} M_{k j_3}) − κ (b^{−1} M_{1 j_1} M_{1 j_2} M_{1 j_3} + c^{−1} M_{2 j_1} M_{2 j_2} M_{2 j_3} + a^{−1} M_{3 j_1} M_{3 j_2} M_{3 j_3}) ],

and (Min_{A^(3)} M)^{i_1 i_2 i_3}_{j_1 j_2 j_3} equals (Min_{A^(3)} M)^{123}_{j_1 j_2 j_3} if (i_1, i_2, i_3) is a cyclic permutation of (1, 2, 3), equals −a (Min_{A^(3)} M)^{123}_{j_1 j_2 j_3} if (i_1, i_2, i_3) is an anticyclic permutation of (1, 2, 3), and vanishes if i_k = i_l for some k ≠ l.

7 Manin matrices of types B, C and D

Recall that the A_n-Manin matrices and the A^q_n-Manin matrices are related with the Yangian Y(gl_n) and the quantum affine algebra U_q(\hat{gl}_n) respectively. The Lie algebra gl_n = gl(n, C) is usually considered as the case of 'type A', since gl_n ≅ C ⊕ sl_n, where sl_n = sl(n, C) is the simple Lie algebra of the type A_{n−1}. Hence these Manin matrices can be referred to the type A. Moreover, the minor operators for the more general (q̂, p̂)-Manin matrices are described by using the symmetric groups S_k, which are the Weyl groups of the type A_{k−1} and participate in the Schur–Weyl duality with the Lie algebras gl_n.

A generalisation of the A_n-Manin matrices to the types B, C and D was introduced by A. Molev.
Recall that so_{2r+1} = so(2r+1, C), sp_{2r} = sp(2r, C) and so_{2r} = so(2r, C) are the simple Lie algebras of types B_r, C_r and D_r respectively (where r ≥ 2 for the D_r case). In this section we assume that n = 2r + 1 for the B_r case and n = 2r for the C_r and D_r cases.

7.1 Molev's definition and corresponding quadratic algebras

By starting with the definition of Manin matrices of types B, C and D we interpret them as Manin matrices for quadratic algebras. To do it we use the notation i' = n − i + 1 and

ε_i = +1 for i = 1, ..., r,   ε_i = −1 for i = r + 1, ..., n

(the notation ε_i is used for the case C_r only). Introduce the operators Q_n ∈ End(C^n ⊗ C^n) for the B and D cases and \tilde Q_n ∈ End(C^n ⊗ C^n) for the C case:

Q_n = Σ_{i,j=1}^n E_{ij} ⊗ E_{i'j'},   \tilde Q_n = Σ_{i,j=1}^n ε_i ε_j E_{ij} ⊗ E_{i'j'}.

One can check that these operators satisfy the following relations:

(Q_n)^2 = n Q_n,   P_n Q_n = Q_n P_n = Q_n,  (7.1)
(\tilde Q_n)^2 = n \tilde Q_n,   P_n \tilde Q_n = \tilde Q_n P_n = −\tilde Q_n,  (7.2)

where P_n is the permutation operator defined in Section 3.1.

Definition 7.1. [Molev] A matrix M ∈ R ⊗ End(C^n) is a Manin matrix of the type B (for odd n) or D (for even n) if it satisfies

(1 − P_n) M^(1) M^(2) (1 + P_n − (2/n) Q_n) = 0.  (7.3)

A matrix M ∈ R ⊗ End(C^n) for even n is a Manin matrix of the type C if it satisfies

(1 − P_n − (2/n) \tilde Q_n) M^(1) M^(2) (1 + P_n) = 0.  (7.4)

Introduce operators B_n ∈ End(C^n ⊗ C^n) for the B and D cases and \tilde B_n ∈ End(C^n ⊗ C^n) for the C case as

B_n = (1 − P_n)/2 + Q_n/n = A_n + Q_n/n,   \tilde B_n = (1 − P_n)/2 − \tilde Q_n/n = A_n − \tilde Q_n/n.  (7.5)

The formulae (7.1), (7.2) imply that these operators are idempotents. We see that the relations (7.3) and (7.4) can be written in the form of the definition (2.31) by means of the idempotents A_n = (1 − P_n)/2, B_n and \tilde B_n.

Proposition 7.2. A matrix M ∈ R ⊗ End(C^n) is a Manin matrix of type B or D iff it is an (A_n, B_n)-Manin matrix.
A matrix M ∈ R ⊗ End(C^n) is a Manin matrix of type C iff it is a (\tilde B_n, A_n)-Manin matrix (for even n).

Let us describe the quadratic algebras corresponding to the idempotents B_n and \tilde B_n.

Proposition 7.3. The commutation relations of the algebras X_{B_n}(C) and Ξ_{\tilde B_n}(C) are

P_n (X ⊗ X) = X ⊗ X,   Q_n (X ⊗ X) = 0   and  (7.6)
(Ψ ⊗ Ψ) P_n = −(Ψ ⊗ Ψ),   (Ψ ⊗ Ψ) \tilde Q_n = 0  (7.7)

respectively.

Proof. The relation B_n (X ⊗ X) = 0 has the form

((1 − P_n)/2 + Q_n/n) (X ⊗ X) = 0.  (7.8)

Multiplication by Q_n from the left gives Q_n (X ⊗ X) = 0, so we derive (7.6). Analogously, (7.7) can be obtained from (Ψ ⊗ Ψ)(1 − \tilde B_n) = 0 by multiplication by \tilde Q_n from the right. The converse implications are obvious.

The algebra X_{B_n}(C) is the quotient of the polynomial algebra C[x_1, ..., x_n] = X_{A_n}(C) by the relation

Σ_{i=1}^n x_i x_{i'} = 0.  (7.9)

The group of matrices G ∈ GL(n, C) preserving the symmetric bilinear form g_n(x, y) = Σ_{i=1}^n x_i y_{i'} is isomorphic to O(n, C).

The algebra Ξ_{B_n}(C) is generated by λ, ψ_1, ..., ψ_n with the relations

ψ_i ψ_j + ψ_j ψ_i = λ δ_{i,j'}.  (7.10)

Note that λ = ψ_1 ψ_n + ψ_n ψ_1 = (2/n) Σ_{i=1}^n ψ_i ψ_{i'} is a central element and the Grassmann algebra Ξ_{A_n}(C) is the quotient of Ξ_{B_n}(C) by the relation λ = 0 (by fixing a non-zero value of λ we obtain the Clifford algebra Cl_n(C)).

The algebra X_{\tilde B_n}(C) is generated by \tilde λ, x_1, ..., x_n with the relations

x_i x_j − x_j x_i = \tilde λ ε_i δ_{i,j'},  (7.11)

where n = 2r. If n > 2 the element \tilde λ = x_1 x_n − x_n x_1 = (1/r) Σ_{i=1}^r (x_i x_{i'} − x_{i'} x_i) is central and X_{\tilde B_n}(C) is the universal enveloping algebra of the (n + 1)-dimensional Heisenberg Lie algebra (by fixing a non-zero value of \tilde λ we obtain the Weyl algebra A_r(C)).

The algebra Ξ_{\tilde B_n}(C) is the quotient of the Grassmann algebra Ξ_{A_n}(C) with the Grassmann variables ψ_1, ..., ψ_n by the relation

Σ_{i=1}^r ψ_i ψ_{i'} = 0.  (7.12)

Note that Σ_{i=1}^r ψ_i ψ_{i'} = (1/2) Σ_{i=1}^r (ψ_i ψ_{i'} − ψ_{i'} ψ_i).
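The operator identities (7.1), (7.2) and the idempotency of B_n and \tilde B_n can be checked numerically. The sketch below assumes the reading B_n = (1 − P_n)/2 + Q_n/n and \tilde B_n = (1 − P_n)/2 − \tilde Q_n/n of (7.5); the even size n = 4 is an illustrative choice.

```python
import numpy as np

n = 4                       # illustrative even size, so that ~Q_n is defined
r = n // 2
eps = [1] * r + [-1] * r    # the signs eps_i used in the C case

def E(i, j):
    m = np.zeros((n, n)); m[i, j] = 1.0; return m

# P, Q, ~Q on C^n (x) C^n; i' = n - i + 1 becomes n - 1 - i in 0-based indexing
P  = sum(np.kron(E(i, j), E(j, i)) for i in range(n) for j in range(n))
Q  = sum(np.kron(E(i, j), E(n - 1 - i, n - 1 - j)) for i in range(n) for j in range(n))
Qt = sum(eps[i] * eps[j] * np.kron(E(i, j), E(n - 1 - i, n - 1 - j))
         for i in range(n) for j in range(n))

# relations (7.1) and (7.2)
assert np.allclose(Q @ Q, n * Q) and np.allclose(P @ Q, Q) and np.allclose(Q @ P, Q)
assert np.allclose(Qt @ Qt, n * Qt) and np.allclose(P @ Qt, -Qt) and np.allclose(Qt @ P, -Qt)

# idempotency of B_n and ~B_n (assumed reading of (7.5))
I2 = np.eye(n * n)
B  = (I2 - P) / 2 + Q / n
Bt = (I2 - P) / 2 - Qt / n
assert np.allclose(B @ B, B) and np.allclose(Bt @ Bt, Bt)
print("(7.1), (7.2) and idempotency of B_n, ~B_n hold")
```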
The group of matrices G ∈ GL(n, C) preserving the antisymmetric bilinear form ω_r(x, y) = Σ_{i=1}^r (x_i y_{i'} − x_{i'} y_i) is isomorphic to Sp(2r, C).

These forms of the quadratic algebras X_{B_n}(C) and Ξ_{\tilde B_n}(C) give us the relation between the Manin matrices of the types B, C, D and the Manin matrices (of type A) considered in Subsection 3.1.

Proposition 7.4. Let M ∈ R ⊗ Hom(C^m, C^n).
• If M is an (A_n, A_m)-Manin matrix, then M is an (A_n, B_m)-Manin matrix and a (\tilde B_n, A_m)-Manin matrix at the same time.
• If M is a (B_n, B_m)-Manin matrix, then M is an (A_n, B_m)-Manin matrix.
• If M is a (\tilde B_n, \tilde B_m)-Manin matrix, then M is a (\tilde B_n, A_m)-Manin matrix.

This proposition follows from Proposition 7.3 and Proposition 2.16. Alternatively one can use the algebras X_{\tilde B_n}(C) and Ξ_{B_n}(C). One can also prove Proposition 7.4 directly by multiplying the definition (2.31) from the left or from the right by an appropriate matrix: for example, multiplication of (1 − P_n) M^(1) M^(2) (1 + P_m) = 0 by 1 − Q_m/m from the right gives (1 − P_n) M^(1) M^(2) (1 + P_m − (2/m) Q_m) = 0.

Consider a 2 × 2 matrix M = ( a b ; c d ). It is an (A_2, B_2)-Manin matrix (a Manin matrix of type D) iff [a, c] = [b, d] = 0. Note that \tilde B_2 = 0, so the algebra X_{\tilde B_2}(C) is a free non-commutative algebra with 2 generators (without relations). This implies that any 2 × 2 matrix is a (\tilde B_2, A_2)-Manin matrix (a Manin matrix of type C).

Let us generalise the relations (3.5) and (3.6) to the case of Manin matrices of the types B, C, D.

Proposition 7.5. A matrix M ∈ R ⊗ Hom(C^m, C^n) is an (A_n, B_m)-Manin matrix iff

[M_{ik}, M_{jl}] + [M_{il}, M_{jk}] = Λ_{ij} δ_{k+l,m+1},   i < j, k ≤ l,  (7.13)

where Λ_{ij} = [M_{i1}, M_{jm}] + [M_{im}, M_{j1}]. The matrix M is a (\tilde B_n, A_m)-Manin matrix iff

[M_{ik}, M_{jl}] + [M_{il}, M_{jk}] = \tilde Λ_{kl} δ_{i+j,n+1},   i < j, k ≤ l,  (7.14)

where \tilde Λ_{kl} = [M_{1k}, M_{nl}] + [M_{1l}, M_{nk}].

Proof. We use Proposition 2.16. Let ψ_1, . . .
, ψ_n ∈ Ξ_{A_n}(C) be the Grassmann variables. We need to subject the new variables φ_k = Σ_{i=1}^n ψ_i M_{ik} ∈ Ξ_{A_n}(R) to the relations of the algebra Ξ_{B_m}(C); we derive φ_k φ_l + φ_l φ_k = λ δ_{k+l,m+1}, where k ≤ l and λ = φ_1 φ_m + φ_m φ_1. Substitution of φ_k = Σ_{i=1}^n ψ_i M_{ik} and pairing with ψ_i ψ_j = −ψ_j ψ_i ∈ Ξ*_{A_n}(C) gives the formula (7.13). The relations (7.14) are obtained similarly by considering the elements y_i = Σ_{k=1}^m M_{ik} x_k ∈ X_{A_m}(R), where x_k are the generators of X_{A_m}(C) = C[x_1, ..., x_m].

Corollary 7.6. The algebra U_{A_n,B_m} is generated by M_{ik}, i = 1, ..., n, k = 1, ..., m, and Λ_{ij}, i, j = 1, ..., n, i < j, with the relations (7.13). The algebra U_{\tilde B_n,A_m} is generated by M_{ik}, i = 1, ..., n, k = 1, ..., m, and \tilde Λ_{kl}, k, l = 1, ..., m, k ≤ l, with the relations (7.14).

A Manin matrix of type B, C or D does not always remain one under such operations as taking a submatrix, permutation of rows or columns, doubling of a row or column, or a composition of them, but sometimes it does.

Corollary 7.7. Let I = (i_1, ..., i_k) and J = (j_1, ..., j_l), where i_s ≤ n and j_t ≤ m for any s = 1, ..., k and t = 1, ..., l. Let M = (M_{ij}) be an n × m matrix over R and M_IJ be the k × l matrix with entries (M_IJ)_{st} = M_{i_s j_t}.
• Let M be an (A_n, B_m)-Manin matrix and j_s + j_t ≠ m + 1 for any s, t = 1, ..., l; then M_IJ is an (A_k, A_l)-Manin matrix.
• Let M be an (A_n, B_m)-Manin matrix and j_s + j_t = m + 1 iff s + t = l + 1 (this condition implies that j_1, ..., j_l are pairwise different and hence l ≤ m); then the matrix M_IJ is an (A_k, B_l)-Manin matrix.
• Let M be a (\tilde B_n, A_m)-Manin matrix and i_s + i_t ≠ n + 1 for any s, t = 1, ..., k; then M_IJ is an (A_k, A_l)-Manin matrix.
• Let M be a (\tilde B_n, A_m)-Manin matrix and i_s + i_t = n + 1 iff s + t = k + 1 (this condition implies that i_1, . . .
, i_k are pairwise different and hence k ≤ n); then M_IJ is a (\tilde B_k, A_l)-Manin matrix.

Finally we give the relation of the Manin matrices of types B, C and D with the Yangians [AACFR, Molev]. Let g_n = so_n for the B and D cases and g_n = sp_n for the C case. Consider the R-matrices

R^{so_n}(u) = 1 − P_n/u + Q_n/(u − n/2 + 1),   R^{sp_n}(u) = 1 − P_n/u + \tilde Q_n/(u − n/2 − 1).  (7.15)

The Yangian Y(g_n) is the algebra generated by t^{(r)}_{ij}, i, j = 1, ..., n, r ∈ Z_{>0}, with the commutation relations

R^{g_n}(u − v) T^(1)(u) T^(2)(v) = T^(2)(v) T^(1)(u) R^{g_n}(u − v)  (7.16)

and T'(u) T(u) = 1, where T(u) = 1 + Σ_{r>0} Σ_{i,j=1}^n t^{(r)}_{ij} E_{ij} u^{−r},

T'(u) = 1 + Σ_{r>0} Σ_{i,j=1}^n t^{(r)}_{j'i'} E_{ij} (u + n/2 − 1)^{−r}   for g_n = so_n,  (7.17)
T'(u) = 1 + Σ_{r>0} Σ_{i,j=1}^n ε_i ε_j t^{(r)}_{j'i'} E_{ij} (u + n/2 + 1)^{−r}   for g_n = sp_n.  (7.18)

Note that R^{so_n}(−1) = 2(1 − B_n) and R^{sp_n}(1) = 2 \tilde B_n, hence by substituting u = v − 1 and v = u − 1 into (7.16) we obtain

(1 − B_n) T^(1)(v − 1) T^(2)(v) = T^(2)(v) T^(1)(v − 1) (1 − B_n)   for g_n = so_n,  (7.19)
\tilde B_n T^(1)(u) T^(2)(u − 1) = T^(2)(u − 1) T^(1)(u) \tilde B_n   for g_n = sp_n.  (7.20)

By multiplying (7.19) by B_n from the left and by exchanging the tensor factors we derive that M = T(v) e^{−∂/∂v} is a B_n-Manin matrix. Due to Propositions 7.2 and 7.4 this is a Manin matrix of type B or D.

By multiplying (7.20) by 1 − \tilde B_n from the right we see that M = T(u) e^{−∂/∂u} is a \tilde B_n-Manin matrix and hence is a Manin matrix of the type C.

7.2 Pairing operators for the B, C, D cases and Brauer algebras

Recall that the pairing operators S^(k) and A^(k) for the idempotent A_n are the symmetrizers and anti-symmetrizers of the k-th tensor power. We will denote them by S^{gl_n}_(k) and A^{gl_n}_(k) to distinguish them from other pairing operators. The A-minors for a Manin matrix of type B or D (or, more generally, for an (A_n, B_m)-Manin matrix) are the column determinants of submatrices.
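For non-commuting entries the column determinant orders each summand by the column index, cdet M = Σ_{σ ∈ S_k} (−1)^σ M_{σ(1)1} M_{σ(2)2} ··· M_{σ(k)k}. A minimal sketch (the helper name `cdet` is ours, not the paper's):

```python
from itertools import permutations
from sympy import Symbol, S, expand

def sign(sigma):
    m = len(sigma)
    inv = sum(1 for a in range(m) for b in range(a + 1, m) if sigma[a] > sigma[b])
    return (-1) ** inv

def cdet(M):
    """Column determinant: in every summand the factors are multiplied in the
    order of the column index, so the sum is well defined even when the
    entries do not commute."""
    size = len(M)
    total = S.Zero
    for sigma in permutations(range(size)):
        term = S.One
        for col in range(size):
            term = term * M[sigma[col]][col]   # row sigma(col), column col
        total += sign(sigma) * term
    return expand(total)

# Non-commutative 2x2 example: the column determinant picks the order a*d - c*b,
# not a*d - b*c.
a, b, c, d = [Symbol(x, commutative=False) for x in "abcd"]
print(cdet([[a, b], [c, d]]))
```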
The S-minors for a Manin matrix M of type C (or, more generally, for a (\tilde B_n, A_m)-Manin matrix M) are the normalised row permanents ν_J perm(M_IJ) (see the formulae (5.4) and (6.15)).

The S-minors for the Manin matrices of types B, D and the A-minors for the Manin matrices of type C are given by the k-th S-operators for B_n and by the k-th A-operators for \tilde B_n respectively. They can be constructed by the method described in Subsection 5.7. In this case the role of the algebra U_k is played by the Brauer algebra [Br].

Definition 7.8. Let ω ∈ C. The Brauer algebra B_k(ω) is the algebra generated by the elements σ_1, ..., σ_{k−1} and ǫ_1, ..., ǫ_{k−1} with the relations

σ_a^2 = 1,   ǫ_a^2 = ω ǫ_a,   σ_a ǫ_a = ǫ_a σ_a = ǫ_a,   a = 1, ..., k − 1,  (7.21)
σ_a σ_b = σ_b σ_a,   ǫ_a ǫ_b = ǫ_b ǫ_a,   σ_a ǫ_b = ǫ_b σ_a,   |a − b| > 1,  (7.22)
σ_a σ_{a+1} σ_a = σ_{a+1} σ_a σ_{a+1},   ǫ_a ǫ_{a+1} ǫ_a = ǫ_a,   ǫ_{a+1} ǫ_a ǫ_{a+1} = ǫ_{a+1},
σ_a ǫ_{a+1} ǫ_a = σ_{a+1} ǫ_a,   ǫ_{a+1} ǫ_a σ_{a+1} = ǫ_{a+1} σ_a,   a = 1, ..., k − 2.  (7.23)

The subalgebra of B_k(ω) generated by σ_1, ..., σ_{k−1} is naturally identified with the group algebra C[S_k]. Also, for b ≤ k the subalgebra of B_k(ω) generated by σ_1, ..., σ_{b−1}, ǫ_1, ..., ǫ_{b−1} is naturally identified with B_b(ω).

If ω is a positive integer or an even negative integer, then the Brauer algebra B_k(ω) has the following representation on a tensor power of a finite-dimensional vector space, which extends the representation ρ_+ or ρ_− of the symmetric group (see e.g. [Molev]).

Proposition 7.9. The formulae

ρ(σ_a) = P^{(a,a+1)}_n,   ρ(ǫ_a) = Q^{(a,a+1)}_n,  (7.24)
\tilde ρ(σ_a) = −P^{(a,a+1)}_n,   \tilde ρ(ǫ_a) = −\tilde Q^{(a,a+1)}_n  (7.25)

define algebra homomorphisms ρ: B_k(n) → End((C^n)^{⊗k}) and \tilde ρ: B_k(−n) → End((C^n)^{⊗k}).

Proof. The relations for the generators σ_a are valid since P_n satisfies the braid relation. The relations (7.21) follow from (7.1), (7.2).
The relations (7.22) are obvious. Further, by a direct calculation we obtain

Q^{(12)}_n Q^{(23)}_n Q^{(12)}_n = Q^{(12)}_n,  P^{(12)}_n Q^{(23)}_n Q^{(12)}_n = P^{(23)}_n Q^{(12)}_n,   (7.26)
\tilde Q^{(12)}_n \tilde Q^{(23)}_n \tilde Q^{(12)}_n = \tilde Q^{(12)}_n,  P^{(12)}_n \tilde Q^{(23)}_n \tilde Q^{(12)}_n = −P^{(23)}_n \tilde Q^{(12)}_n.   (7.27)

They give some of the relations (7.23). The rest of (7.23) is a consequence of the formulae (7.26), (7.27), (4.1), Q^{(21)}_n = Q^{(12)}_n and \tilde Q^{(21)}_n = \tilde Q^{(12)}_n.

The subalgebras U_k and \tilde U_k of End((C^n)^{⊗k}) generated by B^{(a,a+1)}_n and \tilde B^{(a,a+1)}_n are contained in the images of ρ and \tilde ρ respectively, but they do not coincide with these images. Define an augmentation ε: B_k(ω) → C by

ε(σ_a) = 1,  ε(ǫ_a) = 0.   (7.28)

Then for any ρ(u) ∈ U_k and \tilde ρ(\tilde u) ∈ \tilde U_k we have ρ(u) − ε(u) ∈ U^+_k and \tilde ρ(\tilde u) − ε(\tilde u) ∈ \tilde U^−_k, where U^+_k and \tilde U^−_k are the maximal ideals of U_k and \tilde U_k generated by B^{(a,a+1)}_n and 1 − \tilde B^{(a,a+1)}_n respectively. Proposition 7.3 implies that the augmentations ε: B_k(±n) → C satisfy the conditions (5.95), (5.96) and (5.97), (5.98) respectively. Hence, if we construct an element s^{(k)} ∈ B_k(ω) such that

σ_a s^{(k)} = s^{(k)} σ_a = s^{(k)},  ǫ_a s^{(k)} = s^{(k)} ǫ_a = 0,  ε(s^{(k)}) = 1   (7.29)

for any a = 1, ..., k − 1, then we obtain a k-th S-operator S^{(k)} = ρ(s^{(k)}) for A = B_n and a k-th A-operator A^{(k)} = \tilde ρ(s^{(k)}) for A = \tilde B_n.

Let τ_{ab} = σ_a σ_{a+1} ··· σ_{b−1}, a < b. This is the cycle (a, a + 1, ..., b) ∈ S_k. Note that the transpositions have the form σ_{ab} = τ_{a,b−1} σ_{b−1} τ_{a,b−1}^{−1}. Define ǫ_{ab} = τ_{a,b−1} ǫ_{b−1} τ_{a,b−1}^{−1}. We have σ σ_{ab} σ^{−1} = σ_{σ(a)σ(b)} and σ ǫ_{ab} σ^{−1} = ǫ_{σ(a)σ(b)} for any σ ∈ S_k. These elements can be presented graphically [Br, Molev] in the following way. Recall that an element σ ∈ S_k is presented by a diagram where each number i from the top row is connected with σ(i) from the bottom row.
In particular, the diagram corresponding to the element σ_{ab} connects a in the top row with b in the bottom row and b in the top row with a in the bottom row, all the other strands being vertical; the diagram of ǫ_{ab} has a horizontal arc joining a and b in the top row and a horizontal arc joining a and b in the bottom row. To multiply two elements one puts the corresponding diagrams one over the other and replaces each arising loop by a factor ω. This allows one to simplify calculations in the Brauer algebra.

Consider the elements

y_b = Σ_{a=1}^{b−1} (σ_{ab} − ǫ_{ab}) ∈ B_b(ω).   (7.30)

These elements (up to a constant term) were introduced in [N] as analogues of the Jucys–Murphy elements for the Brauer algebra. One can check that each element y_b commutes with the subalgebra B_{b−1}(ω). This implies, in particular, the commutativity [y_a, y_b] = 0 (see [N] for details). The images of the elements y_b ∈ B_k(n) and y_b ∈ B_k(−n) under the homomorphisms ρ and \tilde ρ are the matrices

Y_b = ρ(y_b) = Σ_{a=1}^{b−1} (P^{(ab)}_n − Q^{(ab)}_n),  \tilde Y_b = \tilde ρ(y_b) = −Σ_{a=1}^{b−1} (P^{(ab)}_n − \tilde Q^{(ab)}_n),   (7.31)

respectively. The following analogue of the symmetrizer can be written in terms of the Jucys–Murphy elements (see [HX, Molev]):

s^{(k)} = (1/k!) ∏_{b=2}^{k} (y_b + 1)(y_b + ω + b − 3) / (ω + 2b − 4).   (7.32)

Proposition 7.10. The elements (7.32) satisfy the conditions (7.29). Hence the matrices

S^{(k)} = S^{so_n}_{(k)} = ρ(s^{(k)}) = (1/k!) ∏_{b=2}^{k} (Y_b + 1)(Y_b + n + b − 3) / (n + 2b − 4),   (7.33)
A^{(k)} = A^{sp_n}_{(k)} = \tilde ρ(s^{(k)}) = (1/k!) ∏_{b=2}^{k} (\tilde Y_b + 1)(\tilde Y_b − n + b − 3) / (2b − n − 4),  2 ≤ k ≤ r + 1,   (7.34)

are the k-th S-operator for B_n and the k-th A-operator for \tilde B_n respectively.

Proof. By substituting ε(y_b) = b − 1 we obtain ε(s^{(k)}) = 1. Further, we proceed by induction: suppose that σ_a s^{(k−1)} = s^{(k−1)} and ǫ_a s^{(k−1)} = 0 for all a = 1, ..., k − 2.
Let I_{k−1,k} be the left ideal of B_k(ω) generated by the elements u − ε(u), u ∈ B_{k−1}(ω); then u s^{(k−1)} = 0 for any u ∈ I_{k−1,k}. Since y_k commutes with the elements of B_{k−1}(ω) and s^{(k)} is proportional to (y_k + ω + k − 3)(y_k + 1) s^{(k−1)}, we have σ_a s^{(k)} = s^{(k)} and ǫ_a s^{(k)} = 0 for all a = 1, ..., k − 2. Consider

ǫ_{k−1}(y_k + ω + k − 3) = ǫ_{k−1} σ_{k−1} − ǫ_{k−1}² + ǫ_{k−1} Σ_{a=1}^{k−2} (σ_{ak} − ǫ_{ak}) + (ω + k − 3) ǫ_{k−1}.   (7.35)

Note that ǫ_{k−1} σ_{ak} = ǫ_{k−1} ǫ_{a,k−1} ∈ I_{k−1,k} and ǫ_{k−1} ǫ_{ak} = ǫ_{k−1} σ_{a,k−1} ∈ ǫ_{k−1} + I_{k−1,k} for a ≤ k − 2, so the element (7.35) belongs to I_{k−1,k}. Hence ǫ_{k−1} s^{(k)} is proportional to the element ǫ_{k−1}(y_k + ω + k − 3)(y_k + 1) s^{(k−1)} ∈ I_{k−1,k}(y_k + 1) s^{(k−1)} = 0. Further, we obtain

σ_{k−1}(1 + y_k) = σ_{k−1} + 1 − ǫ_{k−1} + Σ_{a=1}^{k−2} (σ_{a,k−1} σ_{k−1} − σ_{k−1} ǫ_{ak}) ∈ 1 + y_k + I_{k−1,k} + Σ_{a=1}^{k−2} B_k(ω) ǫ_{ak}.

By using the formulae ǫ_{ak} = σ_{a,k−1} ǫ_{k−1} σ_{a,k−1}, (σ_{a,k−1} − 1) y_k = y_k (σ_{a,k−1} − 1) ∈ I_{k−1,k} and the fact that (7.35) belongs to I_{k−1,k}, one obtains ǫ_{ak}(y_k + ω + k − 3) ∈ I_{k−1,k}. This implies the inclusion σ_{k−1}(1 + y_k)(y_k + ω + k − 3) ∈ (1 + y_k)(y_k + ω + k − 3) + I_{k−1,k}, so that σ_{k−1} s^{(k)} = s^{(k)}. Thus s^{(k)} is left invariant with respect to ε. Due to Proposition 5.28 we obtain the right invariance of s^{(k)}. The rest follows from Theorem 5.26 and Proposition 7.3 (see the reasoning above).

The element (7.32) can be rewritten in the following form [IM, IMO]:

s^{(k)} = (1/k!) ∏_{1≤a<b≤k} ( 1 + σ_{ab}/(b − a) − ǫ_{ab}/(ω/2 + b − a − 1) ),

where the product is taken in the lexicographical order of the pairs (a, b).
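The Jucys–Murphy construction of the symmetrizer can be verified numerically in the representation ρ. The sketch below is an illustration under the same assumed conventions as before (antidiagonal bilinear form, so Q_n = |v⟩⟨v| with v = Σ_i e_i ⊗ e_{n+1−i}); it builds S^{(k)} from the product formula (7.33) for n = k = 3 and checks the defining properties of a pairing S-operator together with its rank.

```python
import numpy as np
from math import factorial, comb

def two_site(op2, a, b, n, k):
    """Embed a two-site operator op2 (n^2 x n^2) at tensor positions a < b of (C^n)^{(x)k}."""
    dim = n ** k
    M = np.zeros((dim, dim))
    for col in range(dim):
        idx = [(col // n ** (k - 1 - p)) % n for p in range(k)]
        for i in range(n):
            for j in range(n):
                new = list(idx)
                new[a], new[b] = i, j
                row = sum(d * n ** (k - 1 - p) for p, d in enumerate(new))
                M[row, col] += op2[i * n + j, idx[a] * n + idx[b]]
    return M

def P_and_Q(n):
    P = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            P[j * n + i, i * n + j] = 1.0
    v = np.zeros(n * n)
    for i in range(n):
        v[i * n + (n - 1 - i)] = 1.0   # antidiagonal form (assumed convention)
    return P, np.outer(v, v)

n, k = 3, 3
P2, Q2 = P_and_Q(n)
dim = n ** k

# S^{(k)} via the Jucys-Murphy product formula (7.33)
S = np.eye(dim) / factorial(k)
for b in range(2, k + 1):
    Y = sum(two_site(P2 - Q2, a - 1, b - 1, n, k) for a in range(1, b))
    S = S @ (Y + np.eye(dim)) @ (Y + (n + b - 3) * np.eye(dim)) / (n + 2 * b - 4)

assert np.allclose(S @ S, S)                              # idempotency
assert np.allclose(two_site(P2, 0, 1, n, k) @ S, S)       # P^{(12)} S = S
assert np.allclose(two_site(Q2, 0, 1, n, k) @ S, 0 * S)   # Q^{(12)} S = 0
# rank = trace = (n+2k-2)/k * binom(n+k-3, k-1)
assert np.isclose(np.trace(S), (n + 2 * k - 2) / k * comb(n + k - 3, k - 1))
```

For n = k = 3 the resulting projector has rank 7, matching the dimension of the space of degree-3 harmonic polynomials in 3 variables.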
One can define another augmentation ε′: B_k(ω) → C by the formulae ε′(σ_a) = −1, ε′(ǫ_a) = 0. The corresponding idempotent coincides with the usual anti-symmetrizer a^{(k)} = (1/k!) Σ_{σ∈S_k} (−1)^σ σ, since σ_b a^{(k)} = −a^{(k)}, ǫ_b a^{(k)} = ǫ_b σ_b a^{(k)} = −ǫ_b a^{(k)} (so that ǫ_b a^{(k)} = 0) for all b = 1, ..., k − 1 and ε′(a^{(k)}) = 1. Hence the operators A^{gl_n}_{(k)} = ρ(a^{(k)}) and S^{gl_n}_{(k)} = \tilde ρ(a^{(k)}) define the pairings Ξ^∗_{B_n}(C) × Ξ_{B_n}(C) → C and X_{\tilde B_n}(C) × X^∗_{\tilde B_n}(C) → C respectively. But these pairings are degenerate, since the conditions (5.95), (5.96), (5.97), (5.98) are not valid for ε′.

Let us calculate the ranks of the pairing operators S^{so_n}_{(k)} and A^{sp_n}_{(k)}. They are equal to their traces. According to [Molev, formulae (1.52), (1.72), (1.77)] the (full) traces of these pairing operators are

tr S^{so_n}_{(k)} = (n + 2k − 2)/(n + k − 2) · binom(n + k − 2, k) = (n + 2k − 2)/k · binom(n + k − 3, k − 1),  k ≥ 2,
tr A^{sp_n}_{(k)} = (−1)^k (−n + 2k − 2)/(−n + k − 2) · binom(−n + k − 2, k) = (n − 2k + 2)/k · binom(n + 1, k − 1),  2 ≤ k ≤ r + 1.

In particular, by substituting n = 2r and k = r + 1 into the expression for tr A^{sp_n}_{(k)} we derive rk A^{sp_n}_{(k)} = tr A^{sp_n}_{(k)} = 0. Hence

A^{sp_n}_{(k)} = 0 and Ξ_{\tilde B_n}(C)_k = 0 for all k ≥ r + 1.   (7.38)

This explains the inequality for k in the formula (7.34): for other k the operator A^{sp_n}_{(k)} vanishes (it vanishes even for the last value k = r + 1). Let us summarise.

Proposition 7.12. The dimension of the component X_{B_n}(C)_k is not zero for any k ≥ 0. The dimension of Ξ_{\tilde B_n}(C)_k is not zero iff k ≤ r. Namely, we have

dim X_{B_n}(C)_k = (n + 2k − 2)/k · binom(n + k − 3, k − 1),  k ≥ 1,
dim Ξ_{\tilde B_n}(C)_k = (n − 2k + 2)/k · binom(n + 1, k − 1),  1 ≤ k ≤ r.
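The symplectic side of these rank statements can be checked numerically as well. The sketch below builds A^{sp_n}_{(2)} from the product formula (7.34) under an assumed convention for the symplectic form (antidiagonal, with ε_i = 1 for i ≤ r and ε_i = −1 for i > r, so \tilde Q_n = |w⟩⟨w| with w = Σ_i ε_i e_i ⊗ e_{n+1−i}), and verifies that it vanishes for n = 2 (where k = 2 = r + 1) while for n = 4 it is a projector whose trace matches (n − 2k + 2)/k · binom(n + 1, k − 1).

```python
import numpy as np
from math import comb

def sp_A2(n):
    """Second A-operator of (7.34) for sp_n (n even): (Yt+1)(Yt-n-1)/(-2n)."""
    r = n // 2
    P = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            P[j * n + i, i * n + j] = 1.0
    w = np.zeros(n * n)
    for i in range(n):
        eps = 1.0 if i < r else -1.0           # assumed convention for the form
        w[i * n + (n - 1 - i)] = eps
    Qt = np.outer(w, w)                        # tilde Q_n, with Qt^2 = n Qt
    Yt = -(P - Qt)                             # tilde Y_2 from (7.31)
    I = np.eye(n * n)
    return (Yt + I) @ (Yt - (n + 1) * I) / (-2 * n)

A2 = sp_A2(2)       # n = 2, r = 1: k = 2 = r + 1, so the operator vanishes
assert np.allclose(A2, 0 * A2)

A4 = sp_A2(4)       # n = 4, r = 2: a genuine projector
assert np.allclose(A4 @ A4, A4)
n, k = 4, 2
assert np.isclose(np.trace(A4), (n - 2 * k + 2) / k * comb(n + 1, k - 1))
```

For n = 4 the trace is 5, the dimension of the second fundamental sp_4-module (antisymmetric tensors modulo the symplectic form), in agreement with Proposition 7.12.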
The minor operators for Manin matrices M of the types B and D have the form

Min^{(k)}_S(M) = M^{(1)} ··· M^{(k)} S^{so_n}_{(k)},  Min^{(k)}_A(M) = A^{gl_n}_{(k)} M^{(1)} ··· M^{(k)}   (7.39)

(A^{gl_n}_{(k)} = 0 for k ≥ n + 1). For Manin matrices M of the type C we have

Min^{(k)}_S(M) = M^{(1)} ··· M^{(k)} S^{gl_n}_{(k)},  Min^{(k)}_A(M) = A^{sp_n}_{(k)} M^{(1)} ··· M^{(k)}   (7.40)

(A^{sp_n}_{(k)} = 0 for k ≥ r + 1).

Let us give an example. The bases of X_{B_2}(C)_2 and X^∗_{B_2}(C)_2 consist of x^i_{(2)} = (x^i)² and x^{(2)}_i = (x_i)², i = 1, 2, respectively. The operator S^{so_2}_{(2)} = 1 − B_2 defines the pairing ⟨x^j_{(2)}, x^{(2)}_i⟩ = δ^j_i. Let y^i = Σ_{j=1,2} M^i_j x^j; then y^i_{(2)} = (y^i)². Since x¹x² = x²x¹ = 0 we have y^i_{(2)} = Σ_{j=1,2} (M^i_j)² x^j_{(2)}. Thus the second S-minor of a (B_2, A_2)-Manin matrix M = (a b; c d) in the basis {x¹_{(2)}, x²_{(2)}} has the form

Min^{(2)}_S(M) = (a² b²; c² d²).   (7.41)

Remark 7.13. The existence of the pairing operators for the idempotents A = B_n and A = \tilde B_n can be deduced from Theorem 5.12. Since A^⊤ = A, we have isomorphisms V_k ≅ V_k^∗ and W_k ≅ W_k^∗ given by restriction of the map (C^n)^{⊗k} → ((C^n)^{⊗k})^∗, e_{i_1...i_k} ↦ e^{i_1...i_k}. Under these isomorphisms the natural pairings V_k^∗ × V_k → C and W_k^∗ × W_k → C are identified with restrictions of the standard bilinear form ⟨e_{i_1...i_k}, e_{j_1...j_k}⟩ = δ_{i_1 j_1} ··· δ_{i_k j_k} to the spaces V_k and W_k respectively. Since the subspaces V_k and W_k are given by linear equations with real coefficients, the restrictions of this bilinear form are non-degenerate, so the pairing operators S^{(k)} and A^{(k)} exist for any k. In particular, the pairing A-operators for B_n and the pairing S-operators for \tilde B_n also exist.
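The n = 2 example can be made concrete. The sketch below uses the same assumed conventions as the earlier checks (antidiagonal form, B_2 = (1 − P)/2 + Q/2). It verifies that B_2 projects onto the relation space spanned by e_1⊗e_2 and e_2⊗e_1 (encoding x¹x² = x²x¹ = 0), that S^{so_2}_{(2)} = 1 − B_2 is a rank-2 idempotent, and it evaluates the second S-minor on a sample matrix. The sample M is deliberately degenerate: a generic matrix with commuting numeric entries is not a (B_2, A_2)-Manin matrix, so we take M = [[2, 0], [0, 0]], for which the squared-entry pattern of (7.41) can be read off.

```python
import numpy as np

n = 2
I = np.eye(n * n)
P = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        P[j * n + i, i * n + j] = 1.0
v = np.zeros(n * n)
v[1] = v[2] = 1.0                 # v = e_1 (x) e_2 + e_2 (x) e_1 (antidiagonal form)
Q = np.outer(v, v)
B = (I - P) / 2 + Q / n           # B_2 (assumed normalisation)
S = I - B                         # S^{so_2}_{(2)}

assert np.allclose(S @ S, S) and round(np.trace(S)) == 2
e11, e12, e21, e22 = np.eye(4)
# B fixes the relation vectors, S fixes the basis vectors (x^i)^2
assert np.allclose(B @ e12, e12) and np.allclose(B @ e21, e21)
assert np.allclose(S @ e11, e11) and np.allclose(S @ e22, e22)

# Degenerate sample (B_2, A_2)-Manin matrix with commuting entries
M = np.array([[2.0, 0.0], [0.0, 0.0]])
K = np.kron(M, M) @ S             # the operator Min^{(2)}_S(M)
# In the basis {x^1_(2), x^2_(2)}: entries are the squares of the entries of M
assert K[0, 0] == 4.0 and K[3, 3] == 0.0 and K[0, 3] == 0.0 and K[3, 0] == 0.0
```

The diagonal entries of K at the positions of (x¹)² and (x²)² are a² = 4 and d² = 0, as (7.41) predicts.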
These results show the importance of the non-commutative geometry developed by Manin in [Man87, Man88, Man89, Man91, Man92] for many questions of mathematics and mathematical physics.

By switching the consideration from A-Manin matrices to the more general (A, B)-Manin matrices we obtain a larger class of useful examples such as the (\hat q, \hat p)-Manin matrices and the Manin matrices of types B, C and D. In particular, the theory of the (\hat q, \hat p)-Manin matrices gives a more complete picture of the q-Manin matrices.

It was shown that tensor notation and the use of idempotents give a convenient approach to Manin matrices. In particular, this approach allows one to generalise the notion of minors to the general case. The resulting general theory of minors relates Manin matrices with the representation theory of the symmetric groups and of their generalisations such as the Hecke and Brauer algebras. This alludes to a possible relation between the theory of Manin matrices and the Schur–Weyl duality.

The left equivalence of idempotents gives a relationship between different considerations of Manin matrices. For example, the minors of q-Manin matrices can be considered by means of the q-antisymmetrizer arising from a representation of the symmetric group as well as by using higher idempotents of the Hecke algebra; the left equivalence implies a simple relation between the minor operators constructed in these two different ways. Moreover, the right equivalence can be used to investigate the corresponding quadratic algebras, as it was done in Subsection 6.3.

It was also discovered that some Lax matrices with spectral parameter are a type of Manin matrix generalised to the infinite-dimensional case. Namely, we defined an idempotent \hat A_n acting on a completed tensor square (C^n[u, u^{−1}])^{⊗2} and proved that these Lax matrices are exactly the Manin operators that act on the tensor factor C[u, u^{−1}] by multiplication by a function.

Appendices

Appendix A.
Reduced expressions and sets of inversions

Let R be a finite root system and W be the corresponding reflection group in the sense of [Hum, Ch. 1]. Let R_+ ⊂ R be a subset of positive roots. Denote the corresponding simple roots by α_1, ..., α_r, where r is the rank of the root system R. The simple reflections s_i = s_{α_i} generate the group W. The length of w ∈ W, denoted by ℓ(w), is the minimal k ∈ N such that w can be presented as a product of k simple reflections. If ℓ = ℓ(w), then any expression w = s_{i_1} ··· s_{i_ℓ} is called a reduced expression for the element w ∈ W.

Note that ℓ(w^{−1}) = ℓ(w). Indeed, since s_i² = 1 we have (s_{i_1} ··· s_{i_k})^{−1} = s_{i_k} ··· s_{i_1}, so the element w can be presented as a product of k simple reflections iff the element w^{−1} can be presented as a product of k simple reflections.

Let R_− = −R_+ be the set of negative roots and let R_+^w := R_+ ∩ w^{−1}R_−. The latter is the set of positive roots α ∈ R_+ such that wα ∈ R_−. As proved in [Hum], the length of w equals the number of elements of R_+^w, that is ℓ(w) = |R_+^w|. Note also that from −wR_+^w = (−wR_+) ∩ (−R_−) = R_+ ∩ wR_− we obtain

R_+^{w^{−1}} = −wR_+^w.   (A.1)

The symmetric group S_n is a reflection group with the root system R = {e_i − e_j | i ≠ j} of rank r = n − 1. In this case we have R_+ = {e_i − e_j | i < j}, R_− = {e_i − e_j | i > j}, α_i = e_i − e_{i+1}, s_i = σ_i. For a permutation σ ∈ S_n the set R_+^σ is identified with the set of inversions of σ:

R_+^σ = {e_i − e_j | i < j, σ(i) > σ(j)}.   (A.2)

In particular, the number of inversions is equal to the length: inv(σ) = ℓ(σ). Note that the number of elements σ ∈ S_k with a fixed length ℓ(σ) can be calculated from the generating function (6.55).

Proposition A.1. Let ℓ(w) = ℓ and w = s_{i_1} ··· s_{i_ℓ} be a reduced expression. Then

R_+^w = {α_{i_ℓ}, s_{i_ℓ} α_{i_{ℓ−1}}, ..., s_{i_ℓ} ··· s_{i_2} α_{i_1}} = {s_{i_ℓ} ··· s_{i_{k+1}} α_{i_k} | k = 1, ..., ℓ},   (A.3)
R_+^{w^{−1}} = {α_{i_1}, s_{i_1} α_{i_2}, ..., s_{i_1} ··· s_{i_{ℓ−1}} α_{i_ℓ}} = {s_{i_1} ··· s_{i_{k−1}} α_{i_k} | k = 1, ..., ℓ}.   (A.4)

Proof. First we prove that all the roots β_k := s_{i_ℓ} ··· s_{i_{k+1}} α_{i_k} belong to R_+^w. Note that s_{i_1} ··· s_{i_k} is a reduced expression for any k = 1, ..., ℓ, since otherwise ℓ(w) < ℓ. By virtue of [Hum], the equality ℓ(s_{i_1} ··· s_{i_{k−1}} s_{i_k}) = ℓ(s_{i_1} ··· s_{i_{k−1}}) + 1 implies

s_{i_1} ··· s_{i_{k−1}} α_{i_k} ∈ R_+,  k = 1, ..., ℓ.   (A.5)

In the same way, for the reduced expression w^{−1} = s_{i_ℓ} ··· s_{i_1} we obtain

β_k = s_{i_ℓ} ··· s_{i_{k+1}} α_{i_k} ∈ R_+,  k = 1, ..., ℓ.   (A.6)

Since wβ_k = s_{i_1} ··· s_{i_{k−1}} s_{i_k} α_{i_k} = −s_{i_1} ··· s_{i_{k−1}} α_{i_k} ∈ R_−, we derive β_k ∈ R_+^w. Due to ℓ(w) = |R_+^w| we only need to prove that the roots β_1, ..., β_ℓ are pairwise different. We prove it by induction on ℓ. For ℓ = 0 and ℓ = 1 there is nothing to check. Let w′ = s_{i_1} ··· s_{i_{ℓ−1}} and β′_k = s_{i_{ℓ−1}} ··· s_{i_{k+1}} α_{i_k} ∈ R_+^{w′}. Due to the induction assumption the roots β′_1, ..., β′_{ℓ−1} are pairwise different, hence the roots β_k = s_{i_ℓ} β′_k, k = 1, ..., ℓ − 1, are also pairwise different. Suppose that β_ℓ = β_k for some k = 1, ..., ℓ − 1. By applying s_{i_ℓ} to both sides we obtain −α_{i_ℓ} = β′_k, but the roots β′_k ∈ R_+^{w′} are positive. This contradiction ends the proof of the formula (A.3). The formula (A.4) is obtained from (A.1) and (A.3).

Appendix B. Lie algebras as quadratic algebras

Here we present a quadratic algebra closely related to a finite-dimensional Lie algebra g. Let n = dim g + 1 and let {x^1, ..., x^{n−1}} be a basis of g with the commutation relations [x^i, x^j] = Σ_{k=1}^{n−1} C^{ij}_k x^k, C^{ij}_k ∈ C. Consider the quadratic algebra with generators x^1, ..., x^{n−1}, x^n and relations

x^i x^j − x^j x^i = Σ_{k=1}^{n−1} C^{ij}_k x^k x^n,  i, j = 1, ..., n − 1,   (B.1)
x^i x^n − x^n x^i = 0,  i = 1, ..., n − 1.   (B.2)

Let C_g ∈ End(C^n ⊗ C^n) be the matrix with entries

(C_g)_{ij,kl} = C^{ij}_k δ_{ln} + C^{ij}_l δ_{kn},  i, j, k, l = 1, ..., n,   (B.3)

where we set C^{ij}_n = C^{in}_k = C^{nj}_k = 0.
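The properties of the matrix (B.3) stated next in the text can be verified numerically for a concrete Lie algebra. The sketch below takes g = sl_2 (so n = 4) and checks C_g² = 0, C_g A_n = 0, A_n C_g = C_g and the idempotency of A_g = A_n − C_g; the choice of sl_2 and its structure constants is ours, for illustration only.

```python
import numpy as np

dim_g = 3
n = dim_g + 1                      # n = dim g + 1 for g = sl_2
C = np.zeros((n, n, n))            # structure constants; C^{ij}_n = C^{in}_k = C^{nj}_k = 0
# sl_2 basis x^1 = e, x^2 = f, x^3 = h: [e, f] = h, [h, e] = 2e, [h, f] = -2f
C[0, 1, 2], C[1, 0, 2] = 1.0, -1.0
C[2, 0, 0], C[0, 2, 0] = 2.0, -2.0
C[2, 1, 1], C[1, 2, 1] = -2.0, 2.0

A = np.zeros((n * n, n * n))       # antisymmetrizer A_n on C^n (x) C^n
Cg = np.zeros((n * n, n * n))      # the matrix (B.3)
for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                A[i * n + j, k * n + l] = ((i == k) * (j == l) - (i == l) * (j == k)) / 2
                Cg[i * n + j, k * n + l] = C[i, j, k] * (l == n - 1) + C[i, j, l] * (k == n - 1)

assert np.allclose(Cg @ Cg, 0 * Cg)    # C_g^2 = 0
assert np.allclose(Cg @ A, 0 * Cg)     # C_g A_n = 0
assert np.allclose(A @ Cg, Cg)         # A_n C_g = C_g
Ag = A - Cg
assert np.allclose(Ag @ Ag, Ag)        # A_g = A_n - C_g is an idempotent
```

The same check works for any finite-dimensional Lie algebra once its structure constants are filled into the array C.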
Since (B.3) is antisymmetric in i, j and symmetric in k, l, we have the formulae

C_g² = 0,  C_g A_n = 0,  A_n C_g = C_g.   (B.4)

They imply that the operator A_g = A_n − C_g is an idempotent. The relations (B.1) and (B.2) define the algebra X_{A_g}(C).

Remark B.1. By fixing a non-zero value of the central element x^n we obtain the universal enveloping algebra U(g). In other words, the algebra X_{A_g}(C) is a quantisation of the algebra S(g) = C[g^∗] with the Lie–Poisson bracket {x^i, x^j} = Σ_{k=1}^{n−1} C^{ij}_k x^k. The central element x^n plays the role of the quantisation parameter.

References

[AACFR] Arnaudon, D.; Avan, J.; Crampé, N.; Frappat, L.; Ragoucy, E.: R-matrix presentation for super-Yangians Y(osp(m|n)), J. Math. Phys. (2003), 302–308; arXiv:math/0111325 [math.QA].

[Br] Brauer, R.: On Algebras Which are Connected with the Semisimple Continuous Groups, Ann. Math. (1937), 857–872.

[CF] Chervov, A.; Falqui, G.: Manin matrices and Talalaev's formula, J. Phys. A: Math. Theor. (2008), No. 19, 194006 (28 pp.); arXiv:0711.2236 [math.QA].

[CM] Chervov, A.; Molev, A.: On higher order Sugawara operators, IMRN, no. 9 (2009), 1612–1635; arXiv:0808.1947 [math.RT].

[CFR] Chervov, A.; Falqui, G.; Rubtsov, V.: Algebraic properties of Manin matrices I, Adv. in Appl. Math. (2009), no. 3, 239–315; arXiv:0901.0235 [math.QA].

[CFRS] Chervov, A.; Falqui, G.; Rubtsov, V.; Silantyev, A.: Algebraic properties of Manin matrices II: q-analogues and integrable systems, Adv. in Appl. Math. (2014), 25–89; arXiv:1210.3529 [math.QA].

[FRT] Faddeev, L. D.; Reshetikhin, N. Y.; Takhtajan, L. A.: Quantization of Lie Groups and Lie Algebras, Algebraic Analysis (1988), 129–139.

[GLZ] Garoufalidis, S.; Lê, Thang TQ.; Zeilberger, D.: The quantum MacMahon Master Theorem, Proc. Natl. Acad. of Sci. (PNAS) (2006), No.
38, 13928–13931; arXiv:math.QA/0303319.

[Gur] Gurevich, D. I.: Algebraic aspects of the quantum Yang–Baxter equation, Algebra i Analiz (1990), 119–148 (in Russian); translation in: Leningrad Math. J. (1991), 801–828.

[HX] Hu, J.; Xiao, Z.: On tensor spaces for Birman–Murakami–Wenzl algebras, J. Algebra (2010), 2893–2922.

[Hum] Humphreys, J. E.: Reflection groups and Coxeter groups, CUP, (1990).

[IM] Isaev, A. P.; Molev, A. I.: Fusion procedure for the Brauer algebra, St. Petersburg Math. J. (2011), 437–446; arXiv:0812.4113 [math.RT].

[IMO] Isaev, A. P.; Molev, A. I.; Ogievetsky, O. V.: A new fusion procedure for the Brauer algebra and evaluation homomorphisms, Int. Math. Res. Not., issue 11 (2012), 2571–2606; arXiv:1101.1336 [math.RT].

[IOP98] Isaev, A. P.; Ogievetsky, O. V.; Pyatov, P. N.: Generalized Cayley–Hamilton–Newton identities, Czechoslovak J. Phys. (1998), no. 11, 1369–1374; arXiv:math.QA/9809047.

[IOP99] Isaev, A. P.; Ogievetsky, O. V.; Pyatov, P. N.: On quantum matrix algebras satisfying the Cayley–Hamilton–Newton identities, J. Phys. A (1999), no. 9, L115–L121; arXiv:math.QA/9809170.

[IO] Isaev, A. P.; Ogievetsky, O. V.: Half-quantum linear algebra, Nankai Series in Pure, Applied Mathematics and Theoretical Physics: Symmetries and Groups in Contemporary Physics (2013), 479–486; arXiv:1303.3991 [math.QA].

[MacLane] MacLane, S.: Categories for the Working Mathematician, Springer-Verlag New York, Heidelberg, Berlin.

[Man87] Manin, Y.: Some remarks on Koszul algebras and quantum groups, Ann. de l'Inst. Fourier, no. 4 (1987), 191–205.

[Man88] Manin, Y.: Quantum Groups and Non-Commutative Geometry, University of Montreal, Centre de Recherches Mathématiques, Montreal, QC, (1988), 91 pp.

[Man89] Manin, Y.: Multiparametric quantum deformation of the general linear supergroup, Comm. Math. Phys. (1989), 163–175.

[Man91] Manin, Y.: Topics in Non Commutative Geometry, M. B.
Porter Lectures, Princeton University Press, (1991), 164 pp.

[Man92] Manin, Y.: Notes on Quantum Groups and Quantum de Rham Complexes, Theoretical and Mathematical Physics, Number 3 / September, (1992).

[MNO] Molev, A. I.; Nazarov, M. L.; Olshanskii, G. I.: Yangians and classical Lie algebras, Uspekhi Mat. Nauk (1996), issue 2, 27–104 (in Russian); translation in: Russian Math. Surveys (1996), 205–282; arXiv:hep-th/9409025.

[MR] Molev, A.; Ragoucy, E.: The MacMahon Master Theorem for right quantum superalgebras and higher Sugawara operators for gl(m|n), Moscow Mathematical Journal (2014), 83–119; arXiv:0911.3447 [math.RT].

[Molev] Molev, A.: Sugawara operators for classical Lie algebras, Providence, RI: American Mathematical Society, (2018), 321 pp.

[N] Nazarov, M.: Young's orthogonal form for Brauer's centralizer algebra, J. Algebra (1996), 664–693.

[PS] Pyatov, P.; Saponov, P.: Characteristic Relations for Quantum Matrices, J. Phys. A (1995), no. 15, 4415–4421; arXiv:q-alg/9502012.

[RTF] Reshetikhin, N. Y.; Takhtadzhyan, L. A.; Faddeev, L. D.: Quantization of Lie groups and Lie algebras, Algebra i Analiz (1989), No. 1, 178–206 (in Russian); translation in: Leningr. Math. J. (1990), No. 1, 193–225.

[RTS] Rubtsov, V.; Silantyev, A.; Talalaev, D.: Manin Matrices, Quantum Elliptic Commutative Families and Characteristic Polynomial of Elliptic Gaudin model, SIGMA (2009), 110, 22 pages; arXiv:0908.4064 [math-ph].

[T] Talalaev, D.: Quantization of the Gaudin system, Funct. Anal. Appl. (2006), no. 1, 73–77; arXiv:hep-th/0404153.