Traces on the Algebra of Observables of Rational Calogero Model based on the Root System
S.E. Konstein∗ and I.V. Tyutin†‡

I.E. Tamm Department of Theoretical Physics, P.N. Lebedev Physical Institute, 117924, Leninsky Prospect 53, Moscow, Russia.
Abstract
It is shown that H_{W(R)}(ν), the algebra of observables of the rational Calogero model based on the root system R ⊂ R^N, possesses T_R independent traces, where T_R is the number of conjugacy classes of elements without eigenvalue 1 belonging to the Coxeter group W(R) ⊂ End(R^N) generated by the root system R.

Simultaneously, we reproduce an older result: the algebra H_{W(R)}(ν), considered as a superalgebra with a natural parity, possesses ST_R independent supertraces, where ST_R is the number of conjugacy classes of elements without eigenvalue −1 belonging to W(R).

It was shown in [8] and [10] that for every associative superalgebra H_{W(R)}(ν) of observables of the rational Calogero model based on the root system R, the space of supertraces is nonzero. The dimensions of these spaces for every root system are listed in [13]. Here we also consider these superalgebras as algebras (parity forgotten) and find the conditions of existence and the dimensions of the spaces of traces on these algebras. Astonishingly, the proof differs from the one in [8] and [10] in several signs only, and we provide it here, indicating the changes of signs by means of a parameter κ, with κ = −1 for the supertraces and κ = +1 for the traces. As a result, some parts of this text are almost copypasted from [8] and [10], especially Subsection 4.3 and the Appendices.

∗ E-mail: [email protected]
† E-mail: [email protected]
‡ This work is supported by the Russian Fund for Basic Research, Grant 11-02-00685.

Let A be an associative superalgebra with parity π. All expressions of linear algebra are given for homogeneous elements only and are supposed to be extended to inhomogeneous elements via linearity.

A linear function str on A is called a supertrace if str(fg) = (−1)^{π(f)π(g)} str(gf) for all f, g ∈ A. A linear function tr on A is called a trace if tr(fg) = tr(gf) for all f, g ∈ A.

Let κ = ±1. We can unify the definitions of trace and supertrace by introducing a κ-trace. We say that a linear function sp on A is a κ-trace if
sp(fg) = κ^{π(f)π(g)} sp(gf) for all f, g ∈ A. (1)
(The name sp comes from the German word Spur.)

A linear function L is even if L(f) = 0 for any f ∈ A such that π(f) = 1, and odd if L(f) = 0 for any f ∈ A such that π(f) = 0.

Let A_1 and A_2 be associative superalgebras with parities π_1 and π_2, respectively. Then A = A_1 ⊗ A_2 has a natural parity π defined by the formula π(a ⊗ b) = π_1(a) + π_2(b). Let T_i be traces on A_i. Clearly, the function T given by the formula T(a ⊗ b) = T_1(a) T_2(b) is a trace on A. Let S_i be even supertraces on A_i. Then the function S such that S(a ⊗ b) = S_1(a) S_2(b) is an even supertrace on A.

In what follows, we use three types of brackets:
[f, g] = fg − gf, {f, g} = fg + gf, [f, g]_κ = fg − κ^{π(f)π(g)} gf.

The superalgebra H_{W(R)}(ν) of observables of the rational Calogero model based on the root system R is a deformation of the skew product of the Weyl algebra and the group algebra of a finite group generated by reflections. (Let A and B be superalgebras such that A is a B-module. We say that the superalgebra A ∗ B is a skew product if A ∗ B = A ⊗ B as a superspace and (a_1 ⊗ b_1) · (a_2 ⊗ b_2) = a_1 b_1(a_2) ⊗ b_1 b_2.) We will define it by Definition 1.1; first let us describe the necessary ingredients.

Let V = R^N be endowed with a non-degenerate symmetric bilinear form (·,·), and let the vectors a_i constitute an orthonormal basis in V, i.e. (a_i, a_j) = δ_{ij}. Let x^i be the coordinates of x ∈ V, i.e. x = a_i x^i. Then (x, y) = Σ_{i=1}^N x^i y^i for any x, y ∈ V. The indices i are raised and lowered by means of the forms δ^{ij} and δ_{ij}.

For any nonzero v ∈ V = R^N, define the reflection R_v as follows:
R_v(x) = x − 2 (x, v)/(v, v) v for any x ∈ V.
(2)

The reflections (2) have the following properties:
R_v(v) = −v, R_v² = 1, (R_v(x), u) = (x, R_v(u)) for any v, x, u ∈ V. (3)

A finite set of vectors R ⊂ V is said to be a root system if the following conditions hold:
i) R is R_v-invariant for any v ∈ R;
ii) if v_1, v_2 ∈ R are collinear, then either v_1 = v_2 or v_1 = −v_2.
The group W(R) ⊂ O(N, R) ⊂ End(V) generated by all the reflections R_v with v ∈ R is finite.

As follows from this definition of a root system, we consider both crystallographic and non-crystallographic root systems. We also consider the empty root system, denoted by A_0, assuming that it generates the trivial group consisting of the unity element only.

Let H_α, where α = 0, 1, be two copies of V with orthonormal bases a_i^α ∈ H_α, where i = 1, ..., N. For every vector v = Σ_{i=1}^N a_i v^i ∈ V, let v^α ∈ H_α be the vectors v^α = Σ_{i=1}^N a_i^α v^i, so four bilinear forms on H_0 ⊕ H_1 can be defined by the expression
(x^α, y^β) = (x, y) for α, β = 0, 1, (4)
where x, y ∈ V and x^α, y^α ∈ H_α are their copies. The reflections R_v act on H_α as follows:
R_v(h^α) = h^α − 2 (h^α, v^α)/(v, v) v^α for any h^α ∈ H_α. (5)
So the W(R)-action on the spaces H_α is defined.

Let C[W(R)] be the group algebra of W(R), i.e., the set of all linear combinations Σ_{g∈W(R)} α_g ḡ, where α_g ∈ C; we temporarily use the notation ḡ to distinguish g considered as an element of W(R) ⊂ End(V) from the same element considered as an element ḡ ∈ C[W(R)] of the group algebra. The addition in C[W(R)] is defined as
Σ_{g∈W(R)} α_g ḡ + Σ_{g∈W(R)} β_g ḡ = Σ_{g∈W(R)} (α_g + β_g) ḡ,
and the multiplication is defined by setting ḡ_1 ḡ_2 = (g_1 g_2)¯.
Note that the additions in C[W(R)] and in End(V) differ. For example, if I ∈ W(R) is the unity and the matrix K = −I from End(V) belongs to W(R), then I + K = 0 in End(V) while Ī + K̄ ≠ 0 in C[W(R)].
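The defining properties (3) and the invariance condition i) are easy to check numerically. The sketch below is our illustration; the realization of the A_2 root system inside the plane x_1 + x_2 + x_3 = 0 of R^3 is a standard choice of realization, not fixed by the text.

```python
# Numerical sketch: the reflection R_v(x) = x - 2 (x,v)/(v,v) v,
# the identities (3), and the R_v-invariance of a root system,
# checked for A_2 realized inside R^3 (roots e_i - e_j, i != j).
import itertools

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def reflect(v, x):                      # R_v(x)
    c = 2 * dot(x, v) / dot(v, v)
    return tuple(a - c * b for a, b in zip(x, v))

# A_2 roots: e_i - e_j for i != j
roots = [tuple((1 if k == i else -1 if k == j else 0) for k in range(3))
         for i, j in itertools.permutations(range(3), 2)]

x = (0.3, -1.2, 0.9)                    # arbitrary test vectors
u = (2.0, 0.5, -2.5)
for v in roots:
    # R_v(v) = -v
    assert reflect(v, v) == tuple(-a for a in v)
    # R_v^2 = 1
    assert all(abs(a - b) < 1e-12 for a, b in zip(reflect(v, reflect(v, x)), x))
    # (R_v(x), u) = (x, R_v(u)): R_v is symmetric
    assert abs(dot(reflect(v, x), u) - dot(x, reflect(v, u))) < 1e-12
    # condition i): the root system is R_v-invariant
    for w in roots:
        assert any(all(abs(a - b) < 1e-12 for a, b in zip(reflect(v, w), r))
                   for r in roots)
print("all reflection identities hold")
```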
In what follows, if the matrix K = −I belongs to W(R), the corresponding element K̄ ∈ H_{W(R)}(ν) is a Klein operator.

Let ν be a set of constants ν_v with v ∈ R such that ν_v = ν_w if R_v and R_w belong to the same conjugacy class of W(R).

Definition 1.1.
The superalgebra H_{W(R)}(ν) is the associative superalgebra with unity of polynomials in the a_i^α with coefficients in the group algebra C[W(R)] subject to the relations
ḡ h^α = g(h^α) ḡ for any g ∈ W(R) and h^α ∈ H_α, (6)
[x^α Ī, y^β Ī] = ε^{αβ} ( (x, y) 1̄ Ī + Σ_{v∈R} ν_v (x, v)(y, v)/(v, v) 1̄ R̄_v ) for any x^α ∈ H_α and y^β ∈ H_β, (7)
where ε^{αβ} is the antisymmetric tensor, ε^{01} = 1, and 1̄ is the unity in C[a_i^α]. The element 1 = 1̄ · Ī is the unity of H_{W(R)}(ν). The action of any operator g ∈ End(V) is given by a matrix g^i_j:
g(a_i^α h^i) = a_i^α g^i_j h^j, g_1(g_2(h^α)) = (g_1 g_2)(h^α) for any h^α = a_i^α h^i ∈ H_α, (8)
g(1̄) = 1̄. (9)
The commutation relations (7) suggest defining the parity π by setting
π(a_i^α ḡ) = 1 for any α, i and ḡ ∈ C[W(R)], π(1̄ ḡ) = 0 for any ḡ ∈ C[W(R)]. (10)
We say that H_{W(R)}(ν) is the superalgebra of observables of the Calogero model based on the root system R. These algebras (with parity forgotten) are particular cases of Symplectic Reflection Algebras [12] and are also known as rational Cherednik algebras.

Below we will usually denote 1, 1̄, I and Ī by 1, write F instead of F Ī = Ī F for any F ∈ C[a_i^α], and G instead of 1̄ G = G 1̄ for any G ∈ C[W(R)]. Besides, we will simply write g instead of ḡ, because it will always be clear whether g ∈ W(R) or ḡ ∈ C[W(R)].

The associative algebra H_{W(R)}(ν) has a faithful representation via the Dunkl differential-difference operators D_i, see [3], acting on the space of smooth functions on V. Namely, let v^i = δ^{ij} v_j, x^i = δ^{ij} x_j,
D_i = ∂/∂x^i + (1/2) Σ_{v∈R} ν_v v_i/(x, v) (1 − R_v), (11)

(Footnote: Let A be an associative superalgebra with parity π. Following M. Vasiliev, see, e.g., [5], we say that an element K ∈ A is a Klein operator if π(K) = 0, Kf = (−1)^{π(f)} f K for any f ∈ A, and K² = 1.
Every Klein operator belongs to the anticenter of the superalgebra A, see [14], p. 41. Any Klein operator, if it exists, establishes an isomorphism between the space of even traces and the space of even supertraces on A. Namely, if f ↦ T(f) is an even trace, then f ↦ T(fK) is a supertrace, and if f ↦ S(f) is an even supertrace, then f ↦ S(fK) is a trace. It is proved in [15] that if H_{W(R)}(ν) has isomorphic spaces of traces and supertraces, then H_{W(R)}(ν) contains a Klein operator. Clearly, H_{W(R)}(ν) contains no Klein operator if −I ∉ W(R).)

and, see [6, 7],
a_i^α = (1/√2) ( x_i + (−1)^α D_i ) for α = 0, 1. (12)
The reflections R_v transform the deformed creation and annihilation operators (12) as vectors:
R_v a_i^α = Σ_{j=1}^N ( δ_{ij} − 2 v_i v_j/(v, v) ) a_j^α R_v. (13)
Since [D_i, D_j] = 0, see [3], it follows that
[a_i^α, a_j^β] = ε^{αβ} ( δ_{ij} + Σ_{v∈R} ν_v v_i v_j/(v, v) R_v ), (14)
which manifestly coincides with (7).

Observe an important property of the superalgebra H_{W(R)}(ν): the Lie superalgebra of its inner derivations contains the sl(2) generated by the operators
T^{αβ} = (1/2) Σ_{i=1}^N {a_i^α, a_i^β}, (15)
which commute with C[W(R)], i.e., [T^{αβ}, R_v] = 0, and act on the a_i^α as on vectors of irreducible 2-dimensional sl(2)-modules:
[T^{αβ}, a_i^γ] = ε^{αγ} a_i^β + ε^{βγ} a_i^α, where i = 1, ..., N. (16)
The restriction of the operator T^{01} in the representation (12) to the subspace of W(R)-invariant functions on V is a second-order differential operator which is the well-known Hamiltonian of the rational Calogero model, see [1], based on the root system R, see [2]. One of the relations (16), namely, [T^{01}, a_i^α] = −(−1)^α a_i^α, allows one to find the solutions of the equation T^{01}ψ = ǫψ and the eigenvalues ǫ via the usual Fock procedure with the vacuum |0⟩ such that a_i^0 |0⟩ = 0 for any i, see [7]. After W(R)-symmetrization these eigenfunctions become the wave functions of the Calogero Hamiltonian.
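For the simplest nonempty root system A_1 (N = 1, W(A_1) = {1, R} with (Rp)(x) = p(−x)), the Dunkl operator (11) reduces to D = d/dx + (ν/x)(1 − R), and the relation (14) reduces to [D, x] = 1 + 2νR. This can be checked directly on polynomials; the sketch below is our illustration, with the numerical value of ν an arbitrary choice.

```python
# Sketch for R = A_1 in R^1: D = d/dx + (nu/x)(1 - R), (R p)(x) = p(-x);
# we verify [D, x] = 1 + 2*nu*R, the A_1 case of eq. (14).
# A polynomial is stored as a coefficient list p[n] = coefficient of x^n.

nu = 0.37                           # arbitrary test value of the coupling

def deriv(p):                       # p -> dp/dx
    return [n * p[n] for n in range(1, len(p))]

def refl(p):                        # p(x) -> p(-x)
    return [(-1) ** n * c for n, c in enumerate(p)]

def dunkl(p):                       # D p = p' + nu*(p - R p)/x
    q = [c - rc for c, rc in zip(p, refl(p))]     # p - R p has only odd powers,
    return [dp + nu * qq for dp, qq in zip(deriv(p), q[1:])]  # so q/x is a polynomial

def mulx(p):                        # p -> x p
    return [0.0] + p

p = [1.0, -2.0, 0.5, 3.0, -1.0]     # arbitrary test polynomial
lhs = [a - b for a, b in zip(dunkl(mulx(p)), mulx(dunkl(p)))]   # [D, x] p
rhs = [c + 2 * nu * rc for c, rc in zip(p, refl(p))]            # (1 + 2 nu R) p
assert len(lhs) == len(rhs)
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
print("[D, x] = 1 + 2 nu R verified on a test polynomial")
```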
κ-traces on H_{W(R)}(ν)

Every κ-trace sp(·) on A generates the following bilinear form on A:
B_sp(f, g) = sp(f · g) for any f, g ∈ A. (17)
It is obvious that if such a bilinear form B_sp is degenerate, then the null-vectors of this form (i.e., the v ∈ A such that B_sp(v, x) = 0 for any x ∈ A) constitute a two-sided ideal I ⊂ A. If the κ-trace generating the degenerate bilinear form is homogeneous (even or odd), then the corresponding ideal is a superalgebra.

If κ = −1, ideals of this sort are present, for example, in the superalgebras H_{W(A_1)}(ν) (corresponding to the two-particle Calogero model) at ν = k + 1/2, see [5], and in the superalgebras H_{W(A_2)}(ν) (corresponding to the three-particle Calogero model) at ν = k + 1/2 and ν = k ± 1/3, see [9], for every integer k. For all other values of ν all supertraces on these superalgebras generate nondegenerate bilinear forms (17).

The general case of H_{W(A_{n−1})}(ν) for arbitrary n is considered in [11]. Theorem 5.8.1 of [11] states that the associative algebra H_{W(A_{n−1})}(ν) is not simple if and only if ν = q/m, where q, m are mutually prime integers and 1 < m ≤ n, and presents the structure of the corresponding ideals.

Conjecture: Each of the ideals found in [11] is the set of null-vectors of the degenerate bilinear form (17) for some κ-trace sp on H_{W(A_{n−1})}(ν).

Theorem 2.2.
Each nonzero κ-trace on H_{W(R)}(ν) is even.

Proof.
The space of the superalgebra H_{W(R)}(ν) can be decomposed into a direct sum of irreducible sl(2)-modules (the Lie algebra sl(2) is defined by eq. (15)). Clearly, each κ-trace must vanish on all these irreducible modules except the singlets, and can take nonzero values only on the singlets, i.e., on the elements f ∈ H_{W(R)}(ν) such that [T^{αβ}, f] = 0 for α, β = 0, 1. So, if sp(f) ≠ 0, then [T^{01}, f] = 0, which implies π(f) = 0.

Theorem 2.3.
The dimension of the space of κ-traces on the superalgebra H_{W(R)}(ν) is equal to the number of conjugacy classes of elements without eigenvalue κ belonging to the Coxeter group W(R) ⊂ End(R^N) generated by the finite root system R ⊂ R^N.

Proof.
This theorem follows from Theorems 4.11 and 3.6.

Clearly, Theorem 2.3 implies the following theorem:
Theorem 2.4.
Let the Coxeter group W(R) ⊂ End(R^N) generated by the finite root system R ⊂ R^N have T_R conjugacy classes of elements without eigenvalue +1 and ST_R conjugacy classes of elements without eigenvalue −1. Then the superalgebra H_{W(R)}(ν) possesses T_R independent traces and ST_R independent supertraces.

(Footnote: The dimension of the space of supertraces on H_{W(A_{n−1})}(ν) is the number of partitions of n into odd parts, while the space of traces on H_{W(A_{n−1})}(ν) is one-dimensional for n > 1.)

Clearly, 1̄ · C[W(R)] is a subalgebra of H_{W(R)}(ν) isomorphic to C[W(R)].

It is easy to describe all κ-traces on C[W(R)]. Every κ-trace on C[W(R)] is completely determined by its values on W(R) and, due to W(R)-invariance, is a central function on W(R), i.e., a function constant on the conjugacy classes. Thus, the number of independent κ-traces on C[W(R)] is equal to the number of conjugacy classes in W(R).

Since C[W(R)] ⊂ H_{W(R)}(ν), some additional restrictions on these functions follow from the definition (1) of the κ-trace and the defining relations (7) for H_{W(R)}(ν). Namely, consider g ∈ W(R) and elements c_i^α ∈ H_α such that
g c_i^α = κ c_i^α g. (18)
Then eqs. (1) and (18) imply that
sp(c_i^0 c_j^1 g) = κ sp(c_j^1 g c_i^0) = sp(c_j^1 c_i^0 g),
and therefore
sp([c_i^0, c_j^1] g) = 0. (19)
Since [c_i^0, c_j^1] g ∈ C[W(R)], the conditions (19) select the central functions on C[W(R)] which can in principle be extended to κ-traces on H_{W(R)}(ν), and Theorem 4.11 states that each central function on C[W(R)] which satisfies the conditions (19) can be extended to a κ-trace on H_{W(R)}(ν). In [8], the conditions (19) are called Ground Level Conditions.

The Ground Level Conditions (19) constitute an overdetermined system of linear equations for the central functions on C[W(R)]. The dimension of the space of its solutions is given in Theorem 3.6.

Let us introduce the gradation E on the vector space of C[W(R)].
For any g ∈ W(R), consider the subspaces E^α(g) ⊂ H_α:
E^α(g) = { h ∈ H_α | g(h) = κh }. (20)
Clearly, dim E^0(g) = dim E^1(g). Set
E(g) = dim E^α(g). (21)
For any g ∈ W(R), E(g) is an integer such that 0 ≤ E(g) ≤ N.

Notation. Let W_l denote the subset of all elements g ∈ W(R) such that E(g) = l. Clearly,
W(R) = ∪_{l=0}^N W_l. (22)
The set W_l is W(R)-invariant, and we can introduce W_l^*, the space of W(R)-invariant functions on W_l.

Theorem 3.5. Each function S ∈ W_0^* can be extended uniquely to a central function on W(R) satisfying the Ground Level Conditions.

The following theorem follows from Theorem 3.5:

Theorem 3.6. The dimension of the space of solutions of the Ground Level Conditions (19) is equal to the number of conjugacy classes in W(R) with E(g) = 0.

Theorems 3.5 and 3.6 are proved below simultaneously. The following lemmas are needed to prove these theorems.
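Before turning to the lemmas, the counting in Theorem 3.6 (and hence in Theorems 2.3 and 2.4) can be illustrated on the smallest nontrivial example. The sketch below is our illustration: it realizes W(A_2), the dihedral group of order 6, by 2 × 2 orthogonal matrices, computes its conjugacy classes by brute force, and counts the classes without eigenvalue κ for κ = ±1, reproducing T_{A_2} = 1 and ST_{A_2} = 2.

```python
# Count conjugacy classes of W(A_2) without eigenvalue kappa, for kappa = +1
# (traces) and kappa = -1 (supertraces).  W(A_2) is realized as the dihedral
# group of order 6: three rotations and three reflections of R^2.
import math

def rot(t):                # rotation by angle t (det = +1)
    return ((math.cos(t), -math.sin(t)), (math.sin(t), math.cos(t)))

def mirror(t):             # reflection (orthogonal, det = -1)
    return ((math.cos(t), math.sin(t)), (math.sin(t), -math.cos(t)))

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def inv(a):                # inverse of an orthogonal matrix = transpose
    return tuple(tuple(a[j][i] for j in range(2)) for i in range(2))

def close(a, b):
    return all(abs(a[i][j] - b[i][j]) < 1e-9 for i in range(2) for j in range(2))

group = [rot(2 * math.pi * k / 3) for k in range(3)] \
      + [mirror(2 * math.pi * k / 3) for k in range(3)]

def conj_classes(G):
    cls = []
    for g in G:
        if not any(any(close(g, c) for c in cl) for cl in cls):
            cls.append([mul(mul(h, g), inv(h)) for h in G])
    return cls

def has_eigenvalue(g, kappa):   # 2x2 case: kappa is an eigenvalue iff det(g - kappa) = 0
    return abs((g[0][0] - kappa) * (g[1][1] - kappa) - g[0][1] * g[1][0]) < 1e-9

cls = conj_classes(group)
T = sum(1 for cl in cls if not has_eigenvalue(cl[0], 1.0))    # no eigenvalue +1
ST = sum(1 for cl in cls if not has_eigenvalue(cl[0], -1.0))  # no eigenvalue -1
assert (len(cls), T, ST) == (3, 1, 2)
print("W(A_2): 3 conjugacy classes, T_R =", T, ", ST_R =", ST)
```

Only the class of ±120° rotations has no eigenvalue +1 (one trace), while the identity class and the rotation class have no eigenvalue −1 (two supertraces).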
Lemma 3.7.
Let g be an orthogonal N × N real matrix without eigenvalue κ, i.e., the matrix g − κ is invertible. Then the matrix R_v g has exactly one eigenvalue equal to κ.

(It follows from Lemma 3.8 formulated below that if κ = −1, then ρ(g) = E(g) mod 2 is a parity on the group algebra C[W(R)]; it is the well-known parity of the elements of the Coxeter group W(R). Besides, (E(g)|_{κ=+1} − E(g)|_{κ=−1}) mod 2 = N mod 2.)

Proof.
Consider the equation R_v g x − κx = 0 or, equivalently, g x − κ R_v x = 0, for an eigenvector x corresponding to the eigenvalue κ. Using the definition of R_v, this equation can be expressed as
g x − κ ( x − 2 (v, x)/|v|² v ) = 0;
hence,
x = −2κ (v, x)/|v|² (g − κ)⁻¹ v. (23)
It remains to show that this equation has a nonzero solution. Let v = (g − κ) w; then it follows from eq. (23) that x = µw, where µ ∈ R. Then
|v|² = 2( |w|² − κ (w, g w) ), −2κ (v, x) = 2( |w|² − κ (w, g w) ) µ,
and eq. (23) becomes the identity µw = µw. So the vector x = (g − κ)⁻¹ v is the only solution, up to a factor.

Lemma 3.8.
Let g be an orthogonal N × N real matrix and c_i, where i = 1, ..., E(g), a complete orthonormal set of its eigenvectors corresponding to the eigenvalue κ. Then
i) E(R_v g) = E(g) + 1 if (v, c_i) = 0 for all i;
ii) if there exists an i such that (v, c_i) ≠ 0, then E(R_v g) = E(g) − 1, and the space of the eigenvectors of R_v g corresponding to the eigenvalue κ is the subspace of span{c_1, ..., c_{E(g)}} orthogonal to v.

Proof.
Let C := span{c_1, ..., c_{E(g)}} and let V = C ⊕ B be the orthogonal direct sum. Clearly, gB = B.

Let us seek the null-vectors z of the operator R_v g − κ, i.e., the solutions of the equation
R_v g z − κ z = 0, (24)
in the form z = c + b, where c ∈ C and b ∈ B. The definition of R_v and (24) yield
−(2/(v, v)) ( κ(c, v) + (g b, v) ) v + (g − κ) b = 0. (25)
Represent v in the form v = v_c + v_b, where v_c ∈ C and v_b ∈ B, and let v_b = (g − κ) w (such a w ∈ B exists since g − κ is invertible on B). Then eq. (24) is equivalent to the system
−(2/(v, v)) ( κ(c, v_c) + (g b, (g − κ) w) ) v_c = 0, (26)
−(2/(v, v)) ( κ(c, v_c) + (g b, (g − κ) w) ) w + b = 0. (27)
Consider the two cases:

i) Let (v, c_i) = 0 for all i = 1, ..., E(g). Then v_c = 0, and hence v ∈ B, so eq. (27) acquires the form
−(2/(v, v)) (g b, (g − κ) w) w + b = 0. (28)
It is easy to check that b = w is, up to a factor, the only nonzero solution of (28). So, all the solutions of eq. (24) are linear combinations of the vectors z_i = c_i, where i = 1, ..., E(g), and z_{E(g)+1} = w.

ii) Let v_c ≠ 0. Then eq. (26) gives
κ(c, v_c) + (g b, (g − κ) w) = 0, (29)
which reduces eq. (27) to b = 0, which, in its turn, reduces eq. (29) to (c, v) = 0.

Let P be the projection C[W(R)] → C[W(R)] defined as
P( Σ_i α_i g_i ) = Σ_{i: g_i ≠ 1} α_i g_i for any g_i ∈ W(R), α_i ∈ C. (30)

Lemma 3.9.
Let g ∈ W(R) and let c_1^α, c_2^α ∈ E^α(g) ⊂ H_{W(R)}(ν) (i.e., g c_1^α = κ c_1^α g and g c_2^α = κ c_2^α g). Then
E( P([c_1^α, c_2^β]) g ) = E(g) − 1 for any g ∈ W(R), (31)
i.e., every element of W(R) entering P([c_1^α, c_2^β]) g with a nonzero coefficient has gradation E(g) − 1.

Proof.
The proof easily follows from the formula
P([c_1^α, c_2^β]) = ε^{αβ} Σ_{v∈R} ν_v (c_1, v)(c_2, v)/(v, v) R_v. (32)
Indeed, if (c_1, v)(c_2, v) ≠ 0, then Lemma 3.8 implies that E(R_v g) = E(g) − 1.

Due to Lemma 3.9, some of the Ground Level Conditions express the κ-trace of elements g with E(g) = l via the κ-traces of elements R_v g with E(R_v g) = l − 1:
sp(g) = −sp( ([c_i^0, c_i^1] − 1) g ) if (c_i, c_i) = 1. (33)
We prove Theorems 3.5 and 3.6 using induction on E(g).

The first step is simple: if E(g) = 0, then sp(g) is an arbitrary central function. The next step is also simple: if E(g) = 1, then there exist a unique (up to a factor) element c^0 ∈ E^0(g) and a unique element c^1 ∈ E^1(g) such that |c^α| = 1 and g c^α = κ c^α g. Since ([c^0, c^1] − 1) g ∈ C[W(R)] and E( ([c^0, c^1] − 1) g ) = 0,
sp(g) = −sp( ([c^0, c^1] − 1) g ) (34)
is the unique possible value for sp(g) with E(g) = 1. In such a way, W_0^* is extended to W_1^*. A priori these values are not consistent with the other Ground Level Conditions.

Suppose that the Ground Level Conditions (19),
sp([c_i^0, c_j^1] g) = 0,
considered for all g with E(g) ≤ l and for all c_i^α ∈ E^α(g) such that (c_i^α, c_j^β) = δ_ij, where i, j = 1, ..., l, have Q_l independent solutions.

Statement 3.10.
The value Q_l does not depend on l.

Proof.
It was shown above that Q_0 = Q_1. Let l ≥ 1. Let us consider g ∈ W(R) with E(g) = l + 1, and let c_i^α ∈ E^α(g), where i = 1, 2, be such that (c_i^α, c_j^β) = δ_ij. These elements c_i^α give the conditions:
sp(g) = −sp( ([c_1^0, c_1^1] − 1) g ), (35)
sp(g) = −sp( ([c_2^0, c_2^1] − 1) g ), (36)
sp([c_1^0, c_2^1] g) = 0. (37)
Below we prove that eqs. (35) and (36) are equivalent and that eq. (37) follows from them. So, we will prove that eq. (35) considered for all g ∈ W_s, where 0 < s ≤ l + 1, realizes the extension of W_0^* to W_{l+1}^*.

Let us transform (35):
sp(g) = sp(S_1) − sp(S_2), where (38)
S_1 = −( [c_1^0, c_1^1] − 1 ) g + S_2 = −Σ_{v∈R: (v,c_1)(v,c_2)=0} ν_v (v, c_1)²/|v|² R_v g = −Σ_{v∈R: (v,c_2)=0} ν_v (v, c_1)²/|v|² R_v g, (39)
S_2 = Σ_{v∈R: (v,c_1)(v,c_2)≠0} ν_v (v, c_1)²/|v|² R_v g. (40)
It is clear from eq. (39) and Lemma 3.8 that E(S_1) = l and S_1 c_2^α = κ c_2^α S_1. Hence, due to eq. (33) and the inductive hypothesis,
sp(S_1) = −sp( ([c_2^0, c_2^1] − 1) S_1 ) = sp( ([c_2^0, c_2^1] − 1)( ([c_1^0, c_1^1] − 1) g − S_2 ) ), (41)
and as a result
sp(S_1) = sp( ([c_2^0, c_2^1] − 1)([c_1^0, c_1^1] − 1) g ) − sp( [c_2^0, c_2^1] S_2 ) + sp(S_2). (42)
Finally, eq. (35) is equivalent under the inductive hypothesis to
sp(g) = sp( ([c_2^0, c_2^1] − 1)([c_1^0, c_1^1] − 1) g ) − sp( [c_2^0, c_2^1] S_2 ). (43)
Analogously, eq. (36) is equivalent under the inductive hypothesis to
sp(g) = sp( ([c_1^0, c_1^1] − 1)([c_2^0, c_2^1] − 1) g ) − sp( [c_1^0, c_1^1] S̃_2 ), (44)
where
S̃_2 = Σ_{v∈R: (v,c_1)(v,c_2)≠0} ν_v (v, c_2)²/|v|² R_v g. (45)
Now, let us compare the corresponding terms in eqs. (43) and (44). First, the relation
sp( ([c_2^0, c_2^1] − 1)([c_1^0, c_1^1] − 1) g ) = sp( ([c_1^0, c_1^1] − 1)([c_2^0, c_2^1] − 1) g ) (46)
is identically true for every κ-trace on C[W(R)], since [c_i^0, c_i^1] commutes with g. Second,
sp( [c_2^0, c_2^1] S_2 ) = sp( [c_1^0, c_1^1] S̃_2 ) (47)
since
sp( [c_2^0, c_2^1] (v, c_1)² R_v g ) = sp( [c_1^0, c_1^1] (v, c_2)² R_v g ) (48)
for every v ∈ R such that (v, c_1)(v, c_2) ≠ 0.
Indeed, the element c_3 = α c_1 + β c_2, where
α = −(v, c_2) ≠ 0 and β = (v, c_1) ≠ 0, (49)
is orthogonal to v:
(v, c_3) = 0, (50)
and satisfies the relation
R_v g c_3^α = κ c_3^α R_v g (51)
due to Lemma 3.8. This fact, together with the fact that
E( P([c_i^α, c_3^β]) R_v g ) = l − 1 for i = 1, 2, (52)
implies by the inductive hypothesis that
sp([c_i^0, c_3^1] R_v g) = sp([c_3^0, c_i^1] R_v g) = 0 for i = 1, 2. (53)
Substituting c_1 = α⁻¹(c_3 − β c_2) and c_2 = β⁻¹(c_3 − α c_1) in the left-hand side of eq. (48) and using eqs. (50) and (53), one obtains the right-hand side of eq. (48). Thus, eq. (35) is equivalent to eq. (36); hence
sp( ([c_1^0, c_1^1] − 1) g ) − sp( ([c_2^0, c_2^1] − 1) g ) = 0 (54)
for every orthonormal pair c_1, c_2 ∈ E(g). Consequently,
sp([c_1^0, c_2^1] g) = 0, (55)
which finishes the proof of Statement 3.10 and Theorem 3.6.

κ-traces on H_{W(R)}(ν)

For the proof of the following theorem, see this and the subsequent sections.
Theorem 4.11.
Every κ-trace on the algebra C[W(R)] satisfying the equations
sp([h^1, h^2] g) = 0 for any g ∈ W(R) with E(g) ≠ 0 and h^α ∈ E^α(g) (56)
can be uniquely extended to a κ-trace on H_{W(R)}(ν).

For each g ∈ W(R), introduce eigenbases b_i^α in C ⊗ H_α (i = 1, ..., N, α = 0, 1) such that
g b_i^0 = λ_i b_i^0 g, (57)
g b_i^1 = λ_i^{−1} b_i^1 g, (58)
(b_i^0, b_j^1) = δ_ij. Let B_g be the set of all these b_i^α for a fixed g.

In what follows we use the generalized indices I, J, ... instead of the pairs (α, i) and sometimes write i(I), λ_I, α(I), meaning that
b_I = b_{i(I)}^{α(I)}, g b_I = λ_I b_I g. (59)
Introduce also the symplectic form
C_{IJ} = [b_I, b_J]|_{ν=0} (60)
and let f_{IJ} be the ν-dependent part of the commutator [b_I, b_J]:
F_{IJ} := [b_I, b_J] = C_{IJ} + f_{IJ}. (61)
The indices I, J are raised and lowered with the help of the symplectic forms C^{IJ} and C_{IJ}:
µ^I = Σ_J C^{IJ} µ_J, µ_I = Σ_J µ^J C_{JI}; Σ_M C^{IM} C_{MJ} = −δ^I_J. (62)
Let M(g) be the matrix of the map sending the basis {a_i^α} to B_g:
b_I = Σ_{i,α} M^α_{iI}(g) a_i^α. (63)
Obviously this map is invertible. Using the matrix notation one can rewrite (59) as
g b_I = Σ_J Λ^J_I(g) b_J g, (64)
where the matrix Λ^J_I is diagonal, i.e., Λ^J_I = δ^J_I λ_I.

We will say that the monomial b_{I_1} b_{I_2} ... b_{I_k} g is regular if b_{I_s} ∈ B_g for all s = 1, ..., k and at least one of the λ_{I_s} is not equal to κ.

We will say that the monomial b_{I_1} b_{I_2} ... b_{I_k} g is special if b_{I_s} ∈ B_g for all s = 1, ..., k and λ_{I_s} = κ for all s. Clearly, in this case E(g) > 0.

We introduce an ordering on H_{W(R)}(ν) as follows. Let M_1 := P_1(a_i^α) g_1 and M_2 := P_2(a_i^α) g_2, where P_1 and P_2 are polynomials and g_1, g_2 ∈ W(R). We say that
M_1 > M_2 if deg P_1 > deg P_2 or if deg P_1 = deg P_2 and E(g_1) > E(g_2). (65)

κ-trace of General Elements

To find the κ-trace we consider the defining relations (1) as a system of linear equations for the linear function sp. Clearly, this system can be reduced to
sp([b_I, P(a) g]_κ) = 0, (66)
sp( τ⁻¹ P(a) g τ ) = sp( P(a) g ) (67)
for arbitrary polynomials P and arbitrary g, τ ∈ W(R). Since each κ-trace is even, the equation (66) can be rewritten in the form
sp( b_I P(a) g − κ P(a) g b_I ) = 0.
(68)

Clearly, it is possible to express the κ-trace of any monomial in H_{W(R)}(ν) in terms of the κ-trace on C[W(R)] using eq. (68). Indeed, this can be done in a finite number of the following step operations.

Regular step operation.
Let b_{I_1} b_{I_2} ... b_{I_k} g be a regular monomial; we may assume, without loss of generality, that λ_{I_1} ≠ κ. Then
sp(b_{I_1} b_{I_2} ... b_{I_k} g) = κ sp(b_{I_2} ... b_{I_k} g b_{I_1}) = κ λ_{I_1} sp(b_{I_2} ... b_{I_k} b_{I_1} g),
which implies
sp(b_{I_1} b_{I_2} ... b_{I_k} g) − κ λ_{I_1} sp(b_{I_1} b_{I_2} ... b_{I_k} g) = κ λ_{I_1} sp([b_{I_2} ... b_{I_k}, b_{I_1}] g).
Thus,
sp(b_{I_1} b_{I_2} ... b_{I_k} g) = κ λ_{I_1}/(1 − κ λ_{I_1}) sp([b_{I_2} ... b_{I_k}, b_{I_1}] g). (69)
This step operation expresses the κ-trace of any regular degree k monomial in terms of the κ-traces of degree k − 2 polynomials.

Special step operation.
Let M := b_{I_1} b_{I_2} ... b_{I_k} g be a special monomial and E(g) = l > 0. We can choose a basis b_I in E^0 ⊕ E^1 such that C_{IJ}|_{E^0 ⊕ E^1} has the canonical form:
C_{IJ}|_{E^0 ⊕ E^1} = ( 0 , 1_{E(g)} ; −1_{E(g)} , 0 ).
Up to a polynomial of lesser degree, the monomial M can be expressed in the form
M = b_I^p b_J^q b_{L_1} ... b_{L_{k−p−q}} g + lesser degree polynomial,
where
0 ≤ p, q ≤ k, p + q ≤ k, λ_I = λ_J = λ_{L_s} = κ for any s, (70)
C_{IJ} = 1, C_{I L_s} = 0, C_{J L_s} = 0 for any s.
Let M′ := b_I^p b_J^q b_{L_1} ... b_{L_{k−p−q}}.

Now we can derive an equation for sp(M′ g). Consider
sp(b_J b_I M′ g) = κ sp(b_I M′ g b_J) = sp(b_I M′ b_J g),
which implies
sp([b_I M′, b_J] g) = 0. (71)
Transform the expression [b_I M′, b_J]:
[b_I^{p+1} b_J^q b_{L_1} ... b_{L_{k−p−q}}, b_J] = Σ_{t=0}^{p} b_I^t (1 + f_{IJ}) b_I^{p−t} b_J^q b_{L_1} ... b_{L_{k−p−q}} + Σ_{t=1}^{k−p−q} b_I^{p+1} b_J^q b_{L_1} ... b_{L_{t−1}} f_{L_t J} b_{L_{t+1}} ... b_{L_{k−p−q}}. (72)
So, eq. (71) can be rewritten in the form
sp(M′ g) = −1/(p+1) sp( Σ_{t=0}^{p} b_I^t f_{IJ} b_I^{p−t} b_J^q b_{L_1} ... b_{L_{k−p−q}} g + Σ_{t=1}^{k−p−q} b_I^{p+1} b_J^q b_{L_1} ... b_{L_{t−1}} f_{L_t J} b_{L_{t+1}} ... b_{L_{k−p−q}} g ), (73)
which is the desired equation for sp(M′ g). Due to Lemma 3.8 it is easy to see that eq. (73) can be rewritten in the form
sp(M′ g) = Σ_{g̃ ∈ W(R): E(g̃) = E(g) − 1} sp( P_{g̃}(a_i^α) g̃ ), (74)
where the P_{g̃} are some polynomials such that deg P_{g̃} ≤ deg M′. So, the special step operation expresses the κ-trace of a special polynomial in terms of the κ-traces of polynomials lesser in the sense of the ordering (65).

Thus, we have shown that it is possible to express the κ-trace of any polynomial in terms of the κ-trace on C[W(R)] using a finite number of regular and special step operations.
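As a worked illustration (our example) of how the step operations terminate on C[W(R)], take the simplest case R = A_1: here H_{W(A_1)}(ν) is generated by a^0, a^1 and the Klein element K = R_v, with [a^0, a^1] = 1 + 2νK, K a^α = −a^α K, K² = 1, and a single application of (1) to the degree-2 elements fixes the κ-trace on C[W(A_1)] = span{1, K}.

```latex
\begin{align*}
\kappa = -1:\quad
& \mathrm{str}(a^0 a^1 K) = -\mathrm{str}(a^1 K a^0) = \mathrm{str}(a^1 a^0 K)
  \;\Longrightarrow\; \mathrm{str}\bigl([a^0,a^1]\,K\bigr) = 0\\
& \Longrightarrow\; \mathrm{str}(K) + 2\nu\,\mathrm{str}(1) = 0
  \;\Longrightarrow\; \mathrm{str}(K) = -2\nu\,\mathrm{str}(1);\\[2pt]
\kappa = +1:\quad
& \mathrm{tr}(a^0 a^1) = \mathrm{tr}(a^1 a^0)
  \;\Longrightarrow\; \mathrm{tr}\bigl([a^0,a^1]\bigr) = 0\\
& \Longrightarrow\; \mathrm{tr}(1) + 2\nu\,\mathrm{tr}(K) = 0
  \;\Longrightarrow\; \mathrm{tr}(1) = -2\nu\,\mathrm{tr}(K).
\end{align*}
```

For κ = −1 one has E(1) = 0 and E(K) = 1, so str(1) is the free parameter and str(K) is fixed (ST_{A_1} = 1). For κ = +1 one has E(K) = 0 and E(1) = 1, so tr(K) is free and tr(1) is fixed (T_{A_1} = 1); at ν = 0 the trace of unity vanishes, consistent with the absence of a trace on the undeformed Weyl algebra.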
Since each step operation is manifestly W(R)-invariant, and the κ-trace on C[W(R)] is also W(R)-invariant, the resulting κ-trace is W(R)-invariant.

This is not a proof of Theorem 4.11 yet, because the resulting values of the κ-trace may a priori depend on the sequence of step operations used and impose additional constraints on the values of the κ-trace on C[W(R)]. Below we prove that the value of the κ-trace does not in fact depend on the sequence of step operations used. We use the following inductive procedure:

(⋆) Let F := P(a_i^α) g ∈ H_{W(R)}(ν), where P is a polynomial such that deg P = 2k and g ∈ W(R). Assuming that the κ-trace is correctly defined for all elements of H_{W(R)}(ν) lesser than F relative to the ordering (65), we prove that sp(F) is also well defined, without imposing additional constraints on the solution of the Ground Level Conditions.

The central point of the proof is the consistency conditions (87), (88) and (103) proved in Appendices 1 and 2.

Assume that the Ground Level Conditions hold. The proof of Theorem 4.11 will be given in a constructive way by the following double induction procedure, equivalent to (⋆):

(i) Assume that sp([b_I, P_p(a) g]_κ) = 0 for any P_p(a), g and I provided that b_I ∈ B_g and
λ(I) ≠ κ, p ≤ k, or
λ(I) = κ, E(g) ≤ l, p ≤ k, or
λ(I) = κ, p ≤ k − 2,
where P_p(a) is an arbitrary degree p polynomial in the a_i^α and p is odd. This implies that there exists a unique extension of the κ-trace such that the same is true for l replaced with l + 1.

(ii) Assuming that sp(b_I P_p(a) g − κ P_p(a) g b_I) = 0 for any P_p(a), g and b_I ∈ B_g, where p ≤ k, one proves that there exists a unique extension of the κ-trace such that the assumption (i) is true for k replaced with k + 2 and l = 0.

As a result, this inductive procedure uniquely extends any solution of the Ground Level Conditions to a κ-trace on the whole H_{W(R)}(ν).
(Recall that the κ-trace of any odd element of H_{W(R)}(ν) vanishes because the κ-trace is even.)

It is convenient to work with the exponential generating functions
Ψ_g(µ) = sp( e^S g ), where S = Σ_L µ^L b_L, (75)
where g is a fixed element of W(R), b_L ∈ B_g, and the µ^L ∈ C are independent parameters.

By differentiating eq. (75) with respect to the µ^L one can obtain an arbitrary polynomial in the b_L as a coefficient of g. The exponential form of the generating functions implies that these polynomials are symmetrized. In these terms, the induction on the degree of polynomials is equivalent to the induction on the homogeneity degree in µ of the power series expansions of Ψ_g(µ).

As a consequence of the general properties of the κ-trace, the generating function Ψ_g(µ) must be W(R)-invariant:
Ψ_{τ g τ⁻¹}(µ) = Ψ_g(µ̃), (76)
where the W(R)-transformed parameters are of the form
µ̃^I = Σ_J ( M(τ g τ⁻¹) M⁻¹(τ) Λ⁻¹(τ) M(τ) M⁻¹(g) )^I_J µ^J (77)
and the matrices M(g) and Λ(g) are defined in eqs. (63) and (64).

The necessary and sufficient conditions for the existence of an even κ-trace are the W(R)-covariance conditions (76) and the condition that
sp( [b_L, e^S g]_κ ) = 0 for any g and L, (78)
or, equivalently,
sp( b_L e^S g − κ e^S g b_L ) = 0 for any g and L. (79)
To transform eq. (79) to a form convenient for the proof, we use the following two general relations, true for arbitrary operators X and Y and a parameter µ ∈ C:
X exp(Y + µX) = ∂/∂µ exp(Y + µX) + ∫ t_2 exp(t_1(Y + µX)) [X, Y] exp(t_2(Y + µX)) D¹t, (80)
exp(Y + µX) X = ∂/∂µ exp(Y + µX) − ∫ t_1 exp(t_1(Y + µX)) [X, Y] exp(t_2(Y + µX)) D¹t, (81)
with the convention that
D^{n−1}t = δ(t_1 + ... + t_n − 1) θ(t_1) ... θ(t_n) dt_1 ... dt_n. (82)
The relations (80) and (81) can be derived with the help of partial integration (e.g., over t_1) and the following formula
∂/∂µ exp(Y + µX) = ∫ exp(t_1(Y + µX)) X exp(t_2(Y + µX)) D¹t, (83)
which can be proven by expanding in power series. The well-known formula
[X, exp(Y)] = ∫ exp(t_1 Y) [X, Y] exp(t_2 Y) D¹t (84)
is a consequence of eqs. (80) and (81).

With the help of eqs. (80), (81) and (59) one rewrites eq. (79) as
(1 − κ λ_L) ∂/∂µ^L Ψ_g(µ) = ∫ (−κ λ_L t_1 − t_2) sp( exp(t_1 S) [b_L, S] exp(t_2 S) g ) D¹t. (85)
This condition should be true for any g and L and plays the central role in the analysis in this section. Eq. (85) is an overdetermined system of linear equations for sp; we show below that it has a unique solution extending any fixed solution of the Ground Level Conditions.

There are two essentially distinct cases, λ_L ≠ κ and λ_L = κ. In the latter case eq. (85) takes the form
0 = ∫ sp( exp(t_1 S) [b_L, S] exp(t_2 S) g ) D¹t, λ_L = κ. (86)
In Appendix 1 we prove by induction that eqs. (85) and (86) are consistent in the following sense:
(1 − κ λ_K) ∂/∂µ^K ∫ (−κ λ_L t_1 − t_2) sp( exp(t_1 S) [b_L, S] exp(t_2 S) g ) D¹t − (L ↔ K) = 0 (87)
for λ_L ≠ κ, λ_K ≠ κ, and

(Footnote: The independent proof of eq. (84) follows from the equalities
[X, exp(Y)] = lim_{n→∞} [X, (exp(Y/n))^n] = lim_{n→∞} Σ_{k=0}^{n−1} (exp(Y/n))^k [X, 1 + Y/n] (exp(Y/n))^{n−k−1}.
The same trick can be used for the proof of eq. (83).)
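The identity (84) can also be verified numerically on matrices. The sketch below is our illustration, with arbitrarily chosen 2 × 2 matrices X and Y: the exponentials are computed by truncated power series and the integral over D¹t by the midpoint rule in t_1, with t_2 = 1 − t_1.

```python
# Numerical sketch of eq. (84): for 2x2 matrices X, Y,
#   [X, exp(Y)] = \int exp(t1 Y) [X, Y] exp(t2 Y) D^1 t,  t1 + t2 = 1.

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(a, b, s=1.0):
    return [[a[i][j] + s * b[i][j] for j in range(2)] for i in range(2)]

def smul(c, a):
    return [[c * a[i][j] for j in range(2)] for i in range(2)]

def expm(a, terms=40):              # exp(a) by truncated power series
    r = [[1.0, 0.0], [0.0, 1.0]]
    t = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        t = smul(1.0 / n, mul(t, a))
        r = add(r, t)
    return r

X = [[0.3, -0.7], [0.5, 0.2]]       # arbitrary test matrices
Y = [[-0.4, 0.6], [0.1, 0.8]]

eY = expm(Y)
lhs = add(mul(X, eY), mul(eY, X), s=-1.0)        # [X, exp(Y)]
comm = add(mul(X, Y), mul(Y, X), s=-1.0)         # [X, Y]

n = 2000
rhs = [[0.0, 0.0], [0.0, 0.0]]
for k in range(n):
    t1 = (k + 0.5) / n                           # midpoint rule in t1; t2 = 1 - t1
    term = mul(mul(expm(smul(t1, Y)), comm), expm(smul(1.0 - t1, Y)))
    rhs = add(rhs, smul(1.0 / n, term))

assert max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2)) < 1e-6
print("eq. (84) verified numerically")
```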
    (1 − κλ_K) ∂/∂μ_K ∫ sp( exp(t_1 S) [b_L, S] exp(t_2 S) g ) D^1 t = 0   for λ_L = κ.    (88)

Note that this part of the proof is quite general and does not depend on the concrete form of the commutation relations between the a^α_i in eq. (7).

By expanding the exponential e^S in eq. (75) into a power series in μ_K (equivalently, b_K) we conclude that eq. (85) uniquely reconstructs the κ-trace of monomials containing b_K with λ_K ≠ κ (i.e., of regular monomials) in terms of κ-traces of some lower degree polynomials. Then the consistency conditions (87) and (88) guarantee that eq. (85) does not impose any additional conditions on the κ-traces of lower degree polynomials and allow one to represent the generating function in the form

    Ψ_g = Φ_g(μ) + Σ_{L: λ_L ≠ κ} ∫_0^{μ_L} dτ/(1 − κλ_L) ∫ D^1 t (−κλ_L t_2 − t_1) sp( e^{t_1(τS″ + S′)} [b_L, (τS″ + S′)] e^{t_2(τS″ + S′)} g ),    (89)

where we introduced the generating functions Φ_g for the κ-traces of special polynomials, i.e., the polynomials depending only on the b_L with λ_L = κ:

    Φ_g(μ) := sp( e^{S′} g ) = Ψ_g(μ) |_{μ_I = 0 ∀ I: λ_I ≠ κ}    (90)

and

    S′ = Σ_{L: b_L ∈ B_g, λ_L = κ} μ_L b_L ;   S″ = S − S′.    (91)

The relation (89) successively expresses the κ-traces of higher degree regular polynomials via the κ-traces of lower degree polynomials.

One can see that the arguments above prove the inductive hypotheses (i) and (ii) for the particular case where the polynomials P_p(a) are regular and/or λ_I ≠ κ. Note that for this case the induction (i) on the gradation E is trivial: one simply proves that the degree of the polynomial can be increased by two.

Let us now turn to the less trivial case of the special polynomials:

    sp( b_I e^{S′} g − κ e^{S′} g b_I ) = 0,   where λ_I = κ.    (92)

This equation implies

    sp( [ b_I, e^{S′} ] g ) = 0,   where λ_I = κ.
(93)

Consider the part of sp( [b_I, exp S′] g ) which is of degree k in μ, and let E(g) = l + 1. By eq. (86) the conditions (93) give

    0 = ∫ sp( exp(t_1 S′) [b_I, S′] exp(t_2 S′) g ) D^1 t.    (94)

Substituting [b_I, S′] = μ_I + Σ_M f_{IM} μ_M, where the quantities f_{IJ} and μ_I are defined in eqs. (61)-(62), one can rewrite eq. (94) in the form

    μ_I Φ_g(μ) = − ∫ sp( exp(t_1 S′) Σ_M f_{IM} μ_M exp(t_2 S′) g ) D^1 t.    (95)

Now we use the inductive hypothesis (i). The right hand side of eq. (95) is a κ-trace of a polynomial of degree (k − 1) in a^α_i in the sector of degree k polynomials in μ, and E(f_{IM} g) = l. Therefore one can use the inductive hypothesis (i) to obtain the equality

    ∫ sp( exp(t_1 S′) Σ_M f_{IM} μ_M exp(t_2 S′) g ) D^1 t = ∫ sp( exp(t_1 S′) exp(t_2 S′) Σ_M f_{IM} μ_M g ) D^1 t,

where we used that sp(S′ F g) = κ sp(F g S′) = sp(F S′ g) by the definition of S′.

As a result, the inductive hypothesis allows one to transform eq. (92) to the following form:

    X_I := μ_I Φ_g(μ) + sp( exp(S′) Σ_M f_{IM} μ_M g ) = 0.    (96)

By differentiating this equation with respect to μ_J one obtains after symmetrization

    ∂/∂μ_J ( μ_I Φ_g(μ) ) + (I ↔ J) = − ∫ sp( e^{t_1 S′} b_J e^{t_2 S′} Σ_M f_{IM} μ_M g ) D^1 t + (I ↔ J).    (97)

An important point is that the system of equations (97) is equivalent to the original equations (96) except for the ground level part Φ_g(0). This can be easily seen from the simple fact that the general solution of the system of equations

    ∂X_I(μ)/∂μ_J + ∂X_J(μ)/∂μ_I = 0

for entire functions X_I(μ) is of the form

    X_I(μ) = X_I(0) + Σ_J c_{IJ} μ_J,

where X_I(0) and c_{JI} = −c_{IJ} are some constants (indeed, differentiating once more shows that ∂_K ∂_J X_I is symmetric in K, J and antisymmetric in J, I, hence vanishes, so each X_I is linear in μ). The part of eq. (96) linear in μ is, however, equivalent to the Ground Level Conditions analyzed in Section 3. Thus, eq. (97) contains all the information of eq. (19) additional to the Ground Level Conditions. For this reason we will from now on analyze the equation (97).

Using again the inductive hypothesis, we move b_J to the left and to the right of the right hand side of eq. (97) with equal weights 1/2 to get

    ∂/∂μ_J ( μ_I Φ_g(μ) ) + (I ↔ J) = − Σ_M sp( exp(S′) {b_J, f_{IM}} μ_M g ) − ∫ Σ_{L,M} (t_1 − t_2) sp( exp(t_1 S′) F_{JL} μ_L exp(t_2 S′) f_{IM} μ_M g ) D^1 t + (I ↔ J).    (98)

The last term on the right hand side of this expression can be shown to vanish under the κ-trace due to the factor t_1 − t_2, so that one is left with the equation

    L_{IJ} Φ_g(μ) = − R_{IJ}(μ),    (99)

where

    R_{IJ}(μ) = Σ_M sp( exp(S′) {b_J, f_{IM}} μ_M g ) + (I ↔ J)    (100)

and

    L_{IJ} = ∂/∂μ_J μ_I + ∂/∂μ_I μ_J.    (101)

The differential operators L_{IJ} satisfy the standard sp(2E(g)) commutation relations

    [L_{IJ}, L_{KL}] = − ( C_{IK} L_{JL} + C_{IL} L_{JK} + C_{JK} L_{IL} + C_{JL} L_{IK} ).    (102)

In Appendix 2 we show by induction that this sp(2E(g)) Lie algebra of differential operators is consistent with the right-hand side of the basic relation (99), i.e., that

    [L_{IJ}, R_{KL}] − [L_{KL}, R_{IJ}] = − ( C_{IK} R_{JL} + C_{JL} R_{IK} + C_{JK} R_{IL} + C_{IL} R_{JK} ).    (103)

Generally, these consistency conditions guarantee that eqs. (99) express Φ_g(μ) in terms of R_{IJ} in the following way:

    Φ_g(μ) = Φ_g(0) + 1/(8E(g)) Σ_{I,J=1}^{2E(g)} ∫_0^1 (dt/t) ( 1 − t^{2E(g)} ) ( L_{IJ} R_{IJ} )(tμ),    (104)

provided

    R_{IJ}(0) = 0.    (105)

The latter condition must hold for the consistency of eqs. (99), since the left hand side of (99) vanishes at μ_I = 0; in the expression (104) it guarantees that the integral over t converges. In the case under consideration the property (105) is indeed true as a consequence of the definition (100).

Taking into account Lemma 3.8 and the explicit form (100) of R_{IJ}, one concludes that eq. (104) uniquely expresses the κ-traces of special polynomials in terms of the κ-traces of polynomials of lower degrees, or in terms of the κ-traces of special polynomials of the same degree multiplied by elements of W(R) with a lower value of E, provided that the μ-independent term Φ_g(0) is an arbitrary solution of the Ground Level Conditions. This completes the proof of Theorem 4.11. ∎

H_W(R)(0)

Consider H_W(R)(0).
This algebra is the skew product of the Weyl superalgebra and the group algebra of the finite group W(R) generated by a root system R ⊂ V = R^N. Algebras of this type, and their generalizations, were considered in [4].

The superalgebra H_W(R)(0) is an associative superalgebra of polynomials in a^α_i, where α = 0, 1 and i = 1, ..., N, with coefficients in the group algebra C[W(R)], subject to the relations

    g a^α_i = Σ_{k=1}^N g_{ki} a^α_k g   for any g ∈ W(R) and a^α_i,    (106)

    [a^α_i, a^β_j] = ε^{αβ} δ_{ij},    (107)

where ε^{αβ} is the antisymmetric tensor, ε^{01} = 1, and g_{ki} is the matrix realizing the representation of g ∈ W(R) in End(V). The commutation relations (106)-(107) suggest to define the parity π by setting

    π(a^α_i) = 1   and   π(g) = 0   for any g ∈ W(R).    (108)

Unifying the indices i and α in one index I, one can rewrite eq. (107) as

    [a^I, a^J] = ω^{IJ},    (109)

where ω^{IJ} is a symplectic form.

It is easy to find the general solution of eqs. (85) and (86) for the generating function of κ-traces:

1. If g ∈ W(R) and E(g) ≠ 0, then sp( P(a^I) g ) = 0 for any polynomial P.

2. If g ∈ W(R) and E(g) = 0, then sp(g) is an arbitrary central function on W(R).

3. Let E(g) = 0. There exists a complete set b_{α,k} of eigenvectors of g, for each α, such that (with the multi-index K = {α, k})

    g b_K = λ_K b_K g,

and C_{KL} = [b_K, b_L] is a nondegenerate skew-symmetric form such that C_{KL} ≠ 0 only if λ_K λ_L = 1. In this notation, let

    S(μ, b) = Σ_K μ_K b_K ,   Q(μ) = (1/4) Σ_{K,L} μ_K μ_L C̃_{KL},   where   C̃_{KL} = − (1 + κλ_K)/(1 − κλ_K) C_{KL} = C̃_{LK}.

Then

    sp( e^{S(μ,b)} g ) = e^{Q(μ)} sp(g).    (110)

The solution (110) can be obtained in the initial basis as well. Let

    S = Σ_{α,i} μ_{αi} a^α_i ,   Ψ(g, μ, t) = sp( e^{tS} g ),   Ψ(g, μ) = sp( e^S g ) = Ψ(g, μ, 1).

Then

    sp( [a^α_i, e^{tS} g]_κ ) = sp( t ε^{αβ} δ_{ij} μ_{βj} e^{tS} g + e^{tS} a^α_j g p_{ji} ),   where   p_{ji} = (1 − κg)_{ji}.    (111)

Since E(g) = 0, the matrix p_{ji} is invertible, so eq.
(111) gives

    (d/dt) Ψ(g, μ, t) = − t μ_{αi} ε^{αβ} q_{ki} δ_{kj} μ_{βj} Ψ(g, μ, t),   where   q_{ki} = ( I/(I − κg) )_{ki} = (1/2) ( (I + κg)/(I − κg) )_{ki} + (1/2) δ_{ki}.

So

    (d/dt) Ψ(g, μ, t) = − t μ_{αi} ε^{αβ} ω̃_{ij} μ_{βj} Ψ(g, μ, t),   where   ω̃_{ij} = (1/2) ( (I + κg)/(I − κg) )_{ki} δ_{kj} = − ω̃_{ji}

(the symmetric part (1/2)δ_{ki} of q drops out of the contraction with the antisymmetric tensor ε^{αβ}), and finally

    Ψ(g, μ) = exp( − (1/2) μ_{αi} ε^{αβ} ω̃_{ij} μ_{βj} ) sp(g).

Acknowledgments
The authors are grateful to Oleg Ogievetsky and Rafael Stekolshchik for useful discussions.
Appendix 1. The proof of the consistency conditions (87) for λ ≠ κ

Let the parameters μ_1 := μ_{K_1} and μ_2 := μ_{K_2} be such that λ_1 := λ_{K_1} ≠ κ and λ_2 := λ_{K_2} ≠ κ, and let b_1 and b_2 denote b_{K_1} and b_{K_2}, correspondingly. Let us prove by induction that eqs. (87) are true.

To implement the induction, we select the part of degree k in μ from eq. (85) and observe that this part contains a degree k polynomial in b_M on the left-hand side of eq. (85), while the part of the same degree in μ on the right hand side of the differential version (85) of eq. (78) has degree k − 1 in b_M. This happens because of the presence of the commutator [b_L, S], which is a zero degree polynomial due to the basic relations (7). As a result, the inductive hypothesis allows us to use the properties of the κ-trace provided that the above commutator is always handled as the right hand side of eq. (7), i.e., we are not allowed to represent it again as a difference of second-degree polynomials.

Direct differentiation with the help of eq. (83) gives

    (1 − κλ_2) ∂/∂μ_2 ∫ (−κλ_1 t_2 − t_1) sp( e^{t_1 S} [b_1, S] e^{t_2 S} g ) D^1 t − (1 ↔ 2) =

    = ( ∫ (1 − κλ_2)(−κλ_1 t_2 − t_1) sp( e^{t_1 S} [b_1, b_2] e^{t_2 S} g ) D^1 t − (1 ↔ 2) )

    + ( ∫ (1 − κλ_2)(−κλ_1 (t_2 + t_3) − t_1) sp( e^{t_1 S} b_2 e^{t_2 S} [b_1, S] e^{t_3 S} g ) D^2 t − (1 ↔ 2) )

    + ( ∫ (1 − κλ_2)(−κλ_1 t_3 − t_1 − t_2) sp( e^{t_1 S} [b_1, S] e^{t_2 S} b_2 e^{t_3 S} g ) D^2 t − (1 ↔ 2) ).    (A1.1)

We have to show that the right hand side of eq. (A1.1) vanishes. Let us first transform the second and the third terms on the right-hand side of eq. (A1.1). The idea is to move the operators b_2 through the exponentials towards the commutator [b_1, S] in order to then use the Jacobi identity for the double commutators. This can be done in two different ways inside the κ-trace, so that one has to fix appropriate weight factors for each of these processes.
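The simplified forms of the weights appearing in (A1.2) and (A1.3) below follow from nothing more than the simplex constraint t_1 + t_2 + t_3 = 1 built into the measure (82); explicitly:

```latex
% Weight of the second term of (A1.1), eliminating t_2 + t_3 = 1 - t_1:
-\kappa\lambda_1 (t_2 + t_3) - t_1
  = -\kappa\lambda_1 (1 - t_1) - t_1
  = -\kappa\lambda_1 - t_1 (1 - \kappa\lambda_1).
% Weight of the third term of (A1.1), eliminating t_1 + t_2 = 1 - t_3:
-\kappa\lambda_1 t_3 - t_1 - t_2
  = -\kappa\lambda_1 t_3 - (1 - t_3)
  = (1 - \kappa\lambda_1)\, t_3 - 1.
```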
The correct weights turn out to be

    D^2 t (−κλ_1 (t_2 + t_3) − t_1) b_2 ≡ D^2 t (−κλ_1 − t_1 (1 − κλ_1)) b_2 = D^2 t ( (λ_1λ_2 − κλ_2)/(λ_1λ_2 − 1) − t_1 (1 − κλ_1) ) ( →b_2 + ((1 − κλ_1)/(1 − κλ_2)) ←b_2 )    (A1.2)

and

    D^2 t (−κλ_1 t_3 − t_1 − t_2) b_2 ≡ D^2 t ( (1 − κλ_1) t_3 − 1 ) b_2 = D^2 t ( t_3 (1 − κλ_1) − (λ_1λ_2 − κλ_2)/(λ_1λ_2 − 1) ) ( ←b_2 − ((1 − κλ_1)/(1 − κλ_2)) →b_2 )    (A1.3)

in the second and third terms on the right hand side of eq. (A1.1), respectively. Here the notations →A and ←A imply that the operator A has to be moved from its position to the right and to the left, respectively. Using eq. (84) along with the simple formula

    ∫ φ(t_1, …, t_{n+1}) D^n t = ∫ t_1 φ(t_1, …, t_n) D^{n−1} t    (A1.4)

we find that all terms which involve both [b_1, S] and [b_2, S] cancel pairwise after the antisymmetrization 1 ↔ 2, and the second and third terms of eq. (A1.1) combine into

    ∫ ( λ_1λ_2 t_1 + t_2 − t_1 t_2 (1 − κλ_1)(1 − κλ_2) ) sp( exp(t_1 S) [S, [b_1, b_2]] exp(t_2 S) g ) D^1 t.    (A1.5)

Finally, we observe that this expression can be equivalently rewritten in the form

    ∫ ( λ_1λ_2 t_1 + t_2 − t_1 t_2 (1 − κλ_1)(1 − κλ_2) ) ( ∂/∂t_1 − ∂/∂t_2 ) sp( exp(t_1 S) [b_1, b_2] exp(t_2 S) g ) D^1 t    (A1.6)

and after integration by parts it cancels the first term on the right-hand side of eq. (A1.1). Thus, it is shown that eqs. (85) are mutually compatible for the case λ_{1,2} ≠ κ.

Analogously, we can show that eqs. (85) are consistent with eq. (86). Indeed, let λ_1 ≠ κ and λ_2 = κ. Let us prove that

    ∂/∂μ_1 sp( [b_2, exp(S)] g ) = 0    (A1.7)

provided that the κ-trace is well-defined for the lower degree polynomials. The explicit differentiation gives

    ∂/∂μ_1 sp( [b_2, exp(S)] g ) = ∫ sp( [b_2, exp(t_1 S) b_1 exp(t_2 S)] g ) D^1 t = (1 − κλ_1)^{−1} sp( [b_2, ( b_1 exp(S) − κλ_1 exp(S) b_1 )] g ) + …    (A1.8)

where the dots denote terms of the form sp( [b_2, B] g ) involving more commutators inside B, which therefore amount to some lower degree polynomials and vanish by the inductive hypothesis.
As a result, we find that

    ∂/∂μ_1 sp( [b_2, exp(S)] g ) = (1 − κλ_1)^{−1} sp( ( b_1 [b_2, exp(S)] − κλ_1 [b_2, exp(S)] b_1 ) g ) + (1 − κλ_1)^{−1} sp( ( [b_2, b_1] exp(S) − κλ_1 exp(S) [b_2, b_1] ) g ).    (A1.9)

This expression vanishes by the inductive hypothesis, too.

Appendix 2. The proof of the consistency conditions (103) (the case of special polynomials)
In order to prove eq. (103) we use the inductive hypothesis (i). In this appendix we use the convention that any expression with coinciding upper or lower indices is automatically symmetrized, e.g., F_{II} := (1/2)( F_{I_1 I_2} + F_{I_2 I_1} ). Let us write the identity

    0 = Σ_M sp( [ exp(S′) {b_I, f_{IM}} μ_M , b_J b_J ] g ) − (I ↔ J),    (A2.10)

which holds due to Lemma 3.9 for all terms of degree k − 2 in μ with E(g) ≤ l + 1 and for all lower degree polynomials in μ (one can always move f_{IJ} to g in eq. (A2.10), combining f_{IJ} g into a combination of elements of W(R) analyzed in Lemma 3.9).

A straightforward calculation of the commutator in the right-hand side of eq. (A2.10) gives 0 = X_1 + X_2 + X_3, where

    X_1 = − Σ_{M,L} ∫ sp( exp(t_1 S′) {b_J, F_{JL}} μ_L exp(t_2 S′) {b_I, f_{IM}} μ_M g ) D^1 t − (I ↔ J),

    X_2 = Σ_M sp( exp(S′) { {b_J, F_{IJ}}, f_{IM} } μ_M g ) − (I ↔ J),

    X_3 = Σ_M sp( exp(S′) { b_I, { b_J, [f_{IM}, b_J] } } μ_M g ) − (I ↔ J).    (A2.11)

The terms bilinear in f in X_2 cancel due to the antisymmetrization (I ↔ J) and the inductive hypothesis (i). As a result, one can transform X_2 to the form

    X_2 = ( − (1/2) [L_{JJ}, R_{II}] + 2 sp( e^{S′} {b_I, f_{IJ}} μ_J g ) ) − (I ↔ J).    (A2.12)

Substituting F_{IJ} = C_{IJ} + f_{IJ} and f_{IM} = [b_I, b_M] − C_{IM}, one transforms X_3 to the form

    X_3 = 2 C_{IJ} R_{IJ} − ( sp( e^{S′} {b_J, f_{IJ}} μ_I g ) − (I ↔ J) ) + Y,    (A2.13)

where

    Y = sp( e^{S′} { {b_J, f_{IJ}}, [b_I, S′] } g ) − (I ↔ J).    (A2.14)

Using that

    sp( exp(S′) [ P f_{IJ} Q, S′ ] g ) = 0    (A2.15)

provided that the inductive hypothesis can be used, one transforms Y to the form

    Y = sp( e^{S′} ( − [ f_{IJ}, ( b_I S′ b_J + b_J S′ b_I ) ] − b_I [f_{IJ}, S′] b_J − b_J [f_{IJ}, S′] b_I + [ f_{IJ}, {b_I, b_J} ] S′ ) g ).    (A2.16)

Let us rewrite X_3 in the form X_3 = X_s + X_a, where

    X_s = (1/2) Σ_M sp( e^{S′} ( { b_I, { b_J, [f_{IM}, b_J] } } + { b_J, { b_I, [f_{IM}, b_J] } } ) μ_M g ) − (I ↔ J),

    X_a = (1/2) Σ_M sp( e^{S′} ( { b_I, { b_J, [f_{IM}, b_J] } } − { b_J, { b_I, [f_{IM}, b_J] } } ) μ_M g ) − (I ↔ J).

With the help of the Jacobi identity [f_{IM}, b_J] − [f_{JM}, b_I] = [f_{IJ}, b_M] one expresses X_s in the form

    X_s = (1/2) sp( e^{S′} ( {b_I, b_J} [f_{IJ}, S′] + [f_{IJ}, S′] {b_I, b_J} + 2 b_I [f_{IJ}, S′] b_J + 2 b_J [f_{IJ}, S′] b_I ) g ).

Let us transform the expression for X_a to the form

    X_a = (1/2) Σ_M sp( e^{S′} [ F_{IJ}, [f_{IM}, b_J] ] μ_M g ) − (I ↔ J).    (A2.17)

Substitute F_{IJ} = C_{IJ} + f_{IJ} and f_{IM} = [b_I, b_M] − C_{IM} in eq. (A2.17). After simple transformations we find that Y + X_s = 0. From eqs. (A2.12) and (A2.13) it follows that the right hand side of eq. (A2.10) is equal to

    (1/2) ( [L_{II}, R_{JJ}] − [L_{JJ}, R_{II}] ) + 2 C_{IJ} R_{IJ}.

This completes the proof of the consistency conditions (103).
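Since the C's are central, the Jacobi identity [f_{IM}, b_J] − [f_{JM}, b_I] = [f_{IJ}, b_M] used for X_s reduces to the ordinary Jacobi identity for double commutators, [[A,C],B] − [[B,C],A] = [[A,B],C]. A quick matrix sanity check (numpy assumed; the matrices merely stand in for the b's):

```python
# Check [[A,C],B] - [[B,C],A] = [[A,B],C], the double-commutator form of
# the Jacobi identity behind [f_IM, b_J] - [f_JM, b_I] = [f_IJ, b_M].
import numpy as np

def comm(x, y):
    return x @ y - y @ x

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((5, 5)) for _ in range(3))

lhs = comm(comm(A, C), B) - comm(comm(B, C), A)
rhs = comm(comm(A, B), C)
assert np.allclose(lhs, rhs)
```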
References

[1] F. Calogero, J. Math. Phys. (1969) 2191, 2197; ibid. (1971) 419.
[2] M. A. Olshanetsky and A. M. Perelomov, Phys. Rep. (1983) 313.
[3] C. F. Dunkl, Trans. Am. Math. Soc. (1989) 167.
[4] D. S. Passman, Infinite Crossed Products, Pure and Applied Math., vol. 135, Academic Press, San Diego, 1989.
[5] M. A. Vasiliev, JETP Letters (1989) 344-347; Int. J. Mod. Phys. A6 (1991) 1115.
[6] A. Polychronakos, Phys. Rev. Lett. (1992) 703.
[7] L. Brink, H. Hansson and M. A. Vasiliev, Phys. Lett. B286 (1992) 109.
[8] S. E. Konstein and M. A. Vasiliev, J. Math. Phys. (1996) 2872.
[9] S. E. Konstein, Teor. Mat. Fiz. (1998) 122.
[10] S. E. Konstein, "Supertraces on the Superalgebra of Observables of Rational Calogero Model based on the Root System", arXiv:math-ph/9904032.
[11] Ivan Losev, "Completions of symplectic reflection algebras", arXiv:1001.0239v4 [math.RT].
[12] P. Etingof and V. Ginzburg, "Symplectic reflection algebras, Calogero-Moser space, and deformed Harish-Chandra homomorphism", Inv. Math. 147 (2002) 243-348; arXiv:math.AG/0011114.
[13] S. E. Konstein and R. Stekolshchik, "The Number of Supertraces on the Superalgebra of Observables of Rational Calogero Model based on the Root System", arXiv:0811.2487.
[14] Seminar on Supersymmetry (v. 1. Algebra and Calculus: Main Chapters), J. Bernstein, D. Leites, V. Molotkov, V. Shander, ed. by D. Leites, MCCME, Moscow, 2011.