A local-global principle for linear dependence in enveloping algebras of Lie algebras
JAKA CIMPRIČ AND ALJAŽ ZALAR

Abstract.
For an associative algebra A and a class C of representations of A the following question (related to the nullstellensatz) makes sense: characterize all tuples of elements a_1, ..., a_n ∈ A such that the vectors π(a_1)v, ..., π(a_n)v are linearly dependent for every π ∈ C and every v in the representation space of π. We answer this question in the following cases:
(1) A = U(L) is the enveloping algebra of a finite-dimensional complex Lie algebra L and C is the class of all finite-dimensional representations of A.
(2) A = U(sl_2(C)) and C is the class of all finite-dimensional irreducible representations of A.
(3) A = U(sl_3(C)) and C is the class of all finite-dimensional irreducible representations of A with sufficiently high weights.
In case (1) the answer is: tuples that are linearly dependent over C, while in cases (2) and (3) the answer is: tuples that are linearly dependent over the center of A. Similar results have been proved before for free algebras and Weyl algebras.

Let A be a complex associative algebra and let C be a class of representations of A. We say that the elements p_1, ..., p_k ∈ A are C-locally linearly dependent (abbreviated C-LLD) if for every representation π : A → End(V_π) in C the operators π(p_1), ..., π(p_k) are linearly dependent. We say that the elements p_1, ..., p_k ∈ A are C-locally directionally linearly dependent (abbreviated C-LDLD) if for every representation π : A → End(V_π) in C and every vector v ∈ V_π the vectors π(p_1)v, ..., π(p_k)v are linearly dependent. Clearly, linear dependence implies C-LLD, which in turn implies C-LDLD. The opposite implications are false in general. The motivation for this terminology comes from [2]. Our first main result is the following theorem, proved in Section 1.

Theorem 1.
Let L be a finite-dimensional complex Lie algebra, U(L) its universal enveloping algebra and R the class of all finite-dimensional representations of U(L). For any elements p_1, ..., p_k ∈ U(L) the following are equivalent:
(1) p_1, ..., p_k are linearly dependent.
(2) p_1, ..., p_k are R-locally linearly dependent.
(3) p_1, ..., p_k are R-locally directionally linearly dependent.

Date: March 13, 2020.
2010 Mathematics Subject Classification.
Key words and phrases. Lie algebra, linear dependence, local linear dependence, Nullstellensatz.
Supported by the Slovenian Research Agency grant P1-0222. Supported by the Slovenian Research Agency grant P1-0288.

The analogue of Theorem 1 for free algebras was proved in [2]. The analogue for the algebra M_n(C) of all complex n × n matrices is trivial. Namely, let π be the direct sum of n copies of the identity representation of M_n(C) and let v be the direct sum of all elements of the standard basis of C^n. Then (3) implies that π(p_1)v, ..., π(p_k)v are linearly dependent, which implies (1) since each π(p_i)v is just a vectorization of p_i. See also Lemma 1 below.

Our second main result, whose proof is given in Section 2, is:

Theorem 2.
Let sl_2 be the Lie algebra of trace-zero 2 × 2 complex matrices and I the class of all finite-dimensional irreducible representations of its universal enveloping algebra U(sl_2). For any elements p_1, ..., p_k ∈ U(sl_2) the following are equivalent:
(1) There exist z_1, ..., z_k in the center of U(sl_2), not all zero, such that z_1 p_1 + ... + z_k p_k = 0.
(2) p_1, ..., p_k are I-locally linearly dependent.
(3) p_1, ..., p_k are I-locally directionally linearly dependent.

To obtain the analogue of Theorem 2 for the enveloping algebra U(sl_3), which is our third main result, proved in Section 4, we consider a smaller class of irreducible representations. Namely, for each d ∈ N we define I_d to be the class of all finite-dimensional irreducible representations of sl_3 with highest weight (m_1, m_2) satisfying m_1 ≥ d, m_2 ≥ d.

Theorem 3.
Let sl_3 be the Lie algebra of trace-zero 3 × 3 complex matrices. For any elements p_1, ..., p_k ∈ U(sl_3) the following are equivalent:
(1) There exist z_1, ..., z_k in the center of U(sl_3), not all zero, such that z_1 p_1 + ... + z_k p_k = 0.
(2) There exists d ∈ N such that p_1, ..., p_k are I_d-locally linearly dependent.
(3) There exists d ∈ N such that p_1, ..., p_k are I_d-locally directionally linearly dependent.

Here is a list of a few results related to Theorems 2 and 3 that are either known or trivial:
(1) If A = M_n(C) and C = {id}, then C-LLD is equivalent to linear dependence, but C-LDLD is not, as it is equivalent to the usual notion of locally linearly dependent matrices; see [3]. For instance, for n ≥ 2 the matrix units E_11, E_12 ∈ M_n(C) are C-LDLD (both map every vector into the span of e_1) although they are linearly independent.
(2) If A = M_n(C[X_1, ..., X_m]) and C = {ev_a | a ∈ C^m} is the set of all evaluations at m-tuples of complex numbers, then C-LLD is equivalent to linear dependence over C[X_1, ..., X_m] (see below), but C-LDLD is not (it suffices to consider constant matrices; see (1) above).
Pick any matrices P_1, ..., P_k ∈ A and consider the matrix P = [p_1, ..., p_k], where p_i is the vectorization of P_i. Note that P_1, ..., P_k are C-LLD iff for every a ∈ C^m every maximal subdeterminant of P(a) is zero, iff every maximal subdeterminant of P is zero, iff P_1, ..., P_k are linearly dependent over C(X_1, ..., X_m).
(3) If A = A_n(C) is the n-th Weyl algebra and C = {π}, where π is the Schrödinger representation of A, then C-LLD and C-LDLD are equivalent to linear dependence; see [5]. Recall that A_n(C) has generators x_1, ..., x_n, y_1, ..., y_n and relations y_i x_j − x_j y_i = δ_ij, x_i x_j = x_j x_i and y_i y_j = y_j y_i for i, j = 1, ..., n, and that π acts on the vector space S(R^n) of rapidly decreasing C^∞-functions f : R^n → C by (x_j f)(t) = t_j f(t) and (y_i f)(t) = (∂f/∂t_i)(t), where f ∈ S(R^n) and t := (t_1, ..., t_n) ∈ R^n. Note that the center of A_n(C) is C; see [12, Example 2.5.2].

Recall from linear algebra that the span of elements p_1, ..., p_k in a complex vector space is the set span{p_1, ..., p_k} of all complex linear combinations of p_1, ..., p_k. For an algebra A and a class C of representations of A, two more notions of span of elements p_1, ..., p_k ∈ A will be used throughout the paper: the C-local linear span of p_1, ..., p_k, denoted by Loc_C{p_1, ..., p_k}, is the set of all q ∈ A such that
(A) π(q) ∈ span{π(p_1), ..., π(p_k)} for all π ∈ C;
and the C-reflexive closure of p_1, ..., p_k, denoted by Ref_C{p_1, ..., p_k}, is the set of all q ∈ A with
(B) π(q)v ∈ span{π(p_1)v, ..., π(p_k)v} for every π : A → End(V_π) in C and every v ∈ V_π.
Clearly, span{p_1, ..., p_k} ⊆ Loc_C{p_1, ..., p_k} ⊆ Ref_C{p_1, ..., p_k}, and Theorem 1 implies that span{p_1, ..., p_k} = Loc_R{p_1, ..., p_k} = Ref_R{p_1, ..., p_k} for every p_1, ..., p_k ∈ U(L). On the other hand, we do not have a similar result in U(sl_2) for Ref_I and Loc_I. We will provide several counterexamples in Section 3.1 (see Theorem 4). For finite-dimensional complex solvable Lie algebras we will give explicit descriptions of Ref_I and Loc_I in Section 3.2.

Our motivation for studying Loc and Ref comes from their relation to the nullstellensatz. Namely, assume that the class C contains only finite-dimensional representations. Then the properties (A) and (B) are respectively equivalent to the properties (A') and (B') below:
(A') For every π : A → End(V_π) in C and every matrix B ∈ End(V_π),
tr(π(p_1)B) = ... = tr(π(p_k)B) = 0 implies tr(π(q)B) = 0.
(B') For every π ∈ C, v ∈ V_π and w ∈ V_π*,
⟨π(p_1)v, w⟩ = ... = ⟨π(p_k)v, w⟩ = 0 implies ⟨π(q)v, w⟩ = 0.
Here V_π* stands for the dual of V_π and ⟨u, w⟩ = w(u).
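The duality behind (A') can be illustrated numerically. The following sketch (illustrative code, not part of the paper; all names are ad hoc) checks, for random matrices, that q lies in span{P_1, ..., P_k} exactly when every B with tr(P_i B) = 0 for all i also satisfies tr(q B) = 0:

```python
# Numerical illustration of the trace-pairing duality behind (A'):
# q ∈ span{P_1, ..., P_k}  iff  every B annihilating all P_i also annihilates q.
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 2
P = [rng.standard_normal((n, n)) for _ in range(k)]

# The functional B -> tr(P B) is the dot product of vec(P.T) with vec(B).
M = np.array([p.T.ravel() for p in P])          # k x n^2 matrix of functionals
_, _, Vt = np.linalg.svd(M)
null_basis = Vt[np.linalg.matrix_rank(M):]      # basis of {vec(B) : tr(P_i B) = 0 for all i}

def killed_by_annihilator(q):
    """True iff tr(q B) = 0 for every B with tr(P_i B) = 0 for all i."""
    return np.allclose(null_basis @ q.T.ravel(), 0)

q_in = 2.0 * P[0] - 3.0 * P[1]                  # lies in the span
q_out = rng.standard_normal((n, n))             # generically not in the span
print(killed_by_annihilator(q_in))   # True
print(killed_by_annihilator(q_out))  # False (generically)
```

The design choice here is the standard one: membership in a span of matrices is reduced to orthogonality against the null space of the matrix of trace functionals, which is exactly the "second orthogonal complement" fact invoked in the text.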
These equivalences are easy to prove: the proof of the equivalence of (A) and (A') uses the fact that the span of π(p_1), ..., π(p_k) is equal to its second orthogonal complement in End(V_π) with the inner product defined by the trace map; the proof of the equivalence of (B) and (B') is based on the span of π(p_1)v, ..., π(p_k)v being equal to its second annihilator in V_π.

Acknowledgement.
The authors would like to thank the anonymous referee for a very detailed report and numerous suggestions which improved the exposition of the manuscript.

1. Proof of Theorem 1
Let sl_n denote the Lie algebra of all complex n × n matrices with zero trace. A theorem of Ado, see [6, 2.5.6], implies that for every finite-dimensional complex Lie algebra L there exists an embedding ι : L → sl_n for some n. Let U(L) be the universal enveloping algebra of L. By the PBW theorem [11], ι induces an embedding of U(L) into U(sl_n). If f_1, ..., f_{n²−1} is a basis of sl_n, then the monomials f_1^{m_1} ··· f_{n²−1}^{m_{n²−1}}, m_j ∈ N, form a basis of U(sl_n). We write R for the class of all finite-dimensional representations of L. Proposition 1 below reduces Theorem 1 to a statement about a special linearly independent set in U(sl_n).
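The exponent tuples indexing the PBW monomials above are easy to enumerate. The following sketch (illustrative Python, not part of the paper) lists all tuples (m_1, ..., m_{n²−1}) of total degree at most d, as used in statement (3) of Proposition 1 below, and checks their count against the binomial formula:

```python
# Enumerate the exponent tuples of the PBW monomials f_1^{m_1} ... f_{n^2-1}^{m_{n^2-1}}
# of total degree at most d.  For sl_2 (dim sl_2 = 3) and d = 2 there are C(3+2, 2) = 10.
from itertools import product
from math import comb

def pbw_exponents(dim, d):
    """All (m_1, ..., m_dim) in N^dim with m_1 + ... + m_dim <= d."""
    return [m for m in product(range(d + 1), repeat=dim) if sum(m) <= d]

dim_sl2 = 2 ** 2 - 1            # dim sl_n = n^2 - 1
exps = pbw_exponents(dim_sl2, 2)
print(len(exps), comb(dim_sl2 + 2, 2))  # 10 10
```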
Proposition 1.
The following statements are equivalent:
(1) For every finite-dimensional Lie algebra L over C, every finite R-locally directionally linearly dependent subset of U(L) is linearly dependent.
(2) For every finite-dimensional Lie algebra L over C and every linearly independent set p_1, ..., p_k ∈ U(L), there exist π : U(L) → End(V_π) in R and a vector v ∈ V_π such that π(p_1)v, ..., π(p_k)v are linearly independent.
(3) For all n, d ∈ N there exist a finite-dimensional representation π_{n,d} : U(sl_n) → End(V_{π_{n,d}}) and a vector v_{n,d} ∈ V_{π_{n,d}} such that all vectors of the form
π_{n,d}(f_1)^{m_1} ··· π_{n,d}(f_{n²−1})^{m_{n²−1}} v_{n,d},
where f_1, ..., f_{n²−1} is a basis for sl_n and m_1 + ... + m_{n²−1} ≤ d, are linearly independent.

Proof. Clearly, (1) is equivalent to (2), and (3) is a special case of (2). It remains to prove the implication (3) ⇒ (2). Let L be a finite-dimensional complex Lie algebra and let p_1, ..., p_k ∈ U(L) be linearly independent. We first identify U(L) with a subalgebra of U(sl_n) for some n. Then, using the basis f_1, ..., f_{n²−1} of sl_n, for each ℓ ∈ N let W_ℓ denote the subspace of U(sl_n) spanned by all elements f_1^{m_1} ··· f_{n²−1}^{m_{n²−1}} with m_1 + ... + m_{n²−1} ≤ ℓ. Let d ∈ N be such that p_1, ..., p_k ∈ W_d. Choose p_{k+1}, ..., p_N ∈ W_d so that p_1, ..., p_N is a basis of W_d. Let q_1, ..., q_N be another basis of W_d consisting of all monomials f_1^{m_1} ··· f_{n²−1}^{m_{n²−1}} with m_1 + ... + m_{n²−1} ≤ d. By assumption (3) there exist a representation π_{n,d} : U(sl_n) → End(V_{π_{n,d}}) and a vector v_{n,d} ∈ V_{π_{n,d}} such that the vectors π_{n,d}(q_1)v_{n,d}, ..., π_{n,d}(q_N)v_{n,d} are linearly independent. Claim (2) will follow from the linear independence of π_{n,d}(p_1)v_{n,d}, ..., π_{n,d}(p_N)v_{n,d}, which we now show. There are γ_{ij} ∈ C such that p_i = Σ_{j=1}^N γ_{ij} q_j for i = 1, ..., N. Assume that Σ_{i=1}^N α_i π_{n,d}(p_i)v_{n,d} = 0 for some α_i ∈ C. Then Σ_{j=1}^N β_j π_{n,d}(q_j)v_{n,d} = 0, where β_j = Σ_{i=1}^N α_i γ_{ij} for every j = 1, ..., N. Since the vectors π_{n,d}(q_j)v_{n,d} are linearly independent, it follows that β_j = 0 for j = 1, ..., N. Since the matrix [γ_{ij}]_{i,j} is invertible, it follows that α_i = 0 for i = 1, ..., N.

Let ρ_n : sl_n → End(C^n) be the standard representation of sl_n defined by ρ_n(X)u := Xu for every X ∈ sl_n and u ∈ C^n. Its unique extension to U(sl_n) will be denoted by the same symbol. Let π_n = ⊕_{i=1}^n ρ_n be the direct sum of n copies of ρ_n and let v = ⊕_{i=1}^n e_i, where e_1, ..., e_n is the standard basis of C^n. Note that v belongs to V := ⊕_{i=1}^n C^n = C^{n²} and that π_n maps into End(V). Let f_1, ..., f_{n²−1} be a basis of sl_n. The following is clear:

Lemma 1.
With the above notation, the vectors v, π_n(f_1)v, ..., π_n(f_{n²−1})v are linearly independent.

For every k ∈ N let V^{⊗k} be the k-th tensor power of V and let Sym^k(V) be the k-th symmetric power of V. Recall that Sym^k(V) is the subset of V^{⊗k} consisting of all elements that are invariant under the natural action of the symmetric group S_k on V^{⊗k}. We define a representation Sym^k(π_n) : sl_n → End(Sym^k(V)) by
Sym^k(π_n)(x) := Σ_{i=0}^{k−1} I^{⊗i} ⊗ π_n(x) ⊗ I^{⊗(k−i−1)},
where I ∈ End(V) is the identity. Its extension to U(sl_n) is unique and it will be denoted by the same symbol.

Lemma 2.
Let π_n : sl_n → End(V), v ∈ V and f_1, ..., f_{n²−1} ∈ sl_n be as in Lemma 1. Let F_k be the subspace of Sym^k(V) generated by all elements of the form
Σ_{σ∈S_k} u_{σ(1)} ⊗ u_{σ(2)} ⊗ ··· ⊗ u_{σ(k)},
where u_1, ..., u_k ∈ V and u_1 = v. Then the vectors
Sym^k(π_n)(f_{i_1} ··· f_{i_k}) v^{⊗k}, where 1 ≤ i_1 ≤ i_2 ≤ ... ≤ i_k ≤ n²−1,
are linearly independent in Sym^k(V)/F_k.

Proof. By Lemma 1, the vectors v_i := π_n(f_i)v, 1 ≤ i ≤ n²−1, and v_0 := v are linearly independent. We have that
(1) Sym^k(π_n)(f_{i_1} ··· f_{i_k}) v^{⊗k} − Σ_{σ∈S_k} v_{i_{σ(1)}} ⊗ v_{i_{σ(2)}} ⊗ ··· ⊗ v_{i_{σ(k)}} ∈ F_k.
Note that the projection of the set
(2) { Σ_{σ∈S_k} v_{i_{σ(1)}} ⊗ v_{i_{σ(2)}} ⊗ ··· ⊗ v_{i_{σ(k)}} : 1 ≤ i_1 ≤ i_2 ≤ ... ≤ i_k ≤ n²−1 }
into the vector space Sym^k(V)/F_k is a linearly independent set. By (1) and (2) the conclusion of the lemma follows.

Proof of Theorem 1.
The implications (1) ⇒ (2) ⇒ (3) are trivial. It remains to prove the implication (3) ⇒ (1). It suffices to prove statement (3) of Proposition 1. Fix n, d ∈ N. With the notation from Lemma 2 we define a representation π_{n,d} := ⊕_{k=1}^d Sym^k(π_n) and a vector v_{n,d} := ⊕_{k=1}^d v^{⊗k}. To prove that the vectors
(3) π_{n,d}(f_1)^{m_1} ··· π_{n,d}(f_{n²−1})^{m_{n²−1}} v_{n,d}, where m_1 + ... + m_{n²−1} ≤ d,
are linearly independent, we assume that
(4) Σ_{m_1+...+m_{n²−1} ≤ d} λ_{m_1,...,m_{n²−1}} π_{n,d}(f_1)^{m_1} ··· π_{n,d}(f_{n²−1})^{m_{n²−1}} v_{n,d} = 0.
Project this onto
⊕_{k=1}^d Sym^k(V) / ( ⊕_{k=1}^{d−1} Sym^k(V) ⊕ F_d ) ≅ Sym^d(V)/F_d
to conclude, by Lemma 2, that λ_{m_1,...,m_{n²−1}} = 0 whenever m_1 + ... + m_{n²−1} = d. Repeating this argument for d−1, d−2, ... in place of d, we prove that λ_{m_1,...,m_{n²−1}} = 0 for all m_1 + ... + m_{n²−1} ≤ d.

2. Proof of Theorem 2
2.1. Irreducible representations of sl_2. The main result of this subsection, Proposition 2, describes an irreducible representation of the Lie algebra sl_2 of trace-zero 2 × 2 complex matrices and a vector v making the monomials of the form (5) linearly independent. This result will be needed in the proof of Theorem 2, given in Subsection 2.2 below.

Let e_1, ..., e_k be the standard basis of C^k, let E_ij, 1 ≤ i, j ≤ 2, be the standard basis of M_2(C) and let X := E_12, Y := E_21 and H := E_11 − E_22 be the standard basis of sl_2. Recall [6] that for every k ∈ N there is a unique (up to equivalence) irreducible representation ρ_k : sl_2 → End(C^k) defined by
ρ_k(X)e_i = x_{k,i−1} e_{i−1}, ρ_k(Y)e_i = y_{k,i} e_{i+1}, ρ_k(H)e_i = h_{k,i} e_i,
where
x_{k,i} := k − i if i = 1, ..., k−1, and 0 otherwise,
y_{k,i} := i if i = 1, ..., k−1, and 0 otherwise,
h_{k,i} := k + 1 − 2i for i = 1, ..., k.
We denote by 0_ℓ a sequence of ℓ zeroes.

Proposition 2.
Assume the notation as above. For every t ∈ N ∪ {0} and the vector v ∈ C^{(d+1)²+t}, d ∈ N, of the form
v = [0_d, 1, 0_{2d−1}, 1, 0_{2d−3}, 1, ..., 0_3, 1, 0_1, 1, 0_t]^T,
all vectors of the form
(5) ρ_{(d+1)²+t}(X)^{m_1} ρ_{(d+1)²+t}(Y)^{m_2} ρ_{(d+1)²+t}(H)^{m_3} v
with m_1, m_2, m_3 ∈ N, 0 ≤ m_1 + m_2 + m_3 ≤ d and m_1 m_2 = 0, are linearly independent.

Proof. Let e_1, ..., e_{(d+1)²+t} be the standard basis of C^{(d+1)²+t}. Then
(6) v = e_{i_1} + ... + e_{i_{d+1}}, where i_1 = d + 1 and i_{k+1} = i_k + 2(d − k + 1) for k = 1, ..., d.
Note that i_{d+1} = (d+1)². For every k = −d, ..., d and ℓ = 0, ..., d we write
(7) Z_k = ρ_{(d+1)²+t}(X)^k if k > 0, Z_0 = 1, Z_k = ρ_{(d+1)²+t}(Y)^{−k} if k < 0, and H_ℓ = ρ_{(d+1)²+t}(H)^ℓ.
To prove that all Z_k H_ℓ v with |k| + ℓ ≤ d are linearly independent we assume that
(8) Σ_{|k|+ℓ ≤ d} α_{k,ℓ} Z_k H_ℓ v = 0.
Since (d+1)² + t is fixed in the proof, we abbreviate x_j := x_{(d+1)²+t,j}, y_j := y_{(d+1)²+t,j} and h_j := h_{(d+1)²+t,j}. Thus
(9) Z_k H_ℓ e_j = z_{j,k} (h_j)^ℓ e_{j−k},
where z_{j,k} = 0 if j − k ∉ {1, ..., (d+1)² + t}, while in the other cases
z_{j,k} = x_{j−k} ··· x_{j−1} if k > 0, z_{j,0} = 1, and z_{j,k} = y_j ··· y_{j−k−1} if k < 0.
Since x_j and y_j are nonzero for j = 1, ..., (d+1)² + t − 1, it follows that z_{j,k} is also nonzero when 1 ≤ j − k ≤ (d+1)² + t. If we substitute (6) and (9) into (8), we get
(10) Σ_{k=−d}^{d} Σ_{r=1}^{d+1} ( Σ_{ℓ=0}^{d−|k|} α_{k,ℓ} (h_{i_r})^ℓ ) z_{i_r,k} e_{i_r−k} = 0.
We prove by backward induction on |k| that equation (10) implies α_{k,ℓ} = 0 for all k and ℓ such that |k| + ℓ ≤ d. This means we prove:
• Induction base: α_{d,0} = α_{−d,0} = 0.
• Induction step: Fix m ∈ {0, ..., d−1}. Suppose α_{k,ℓ} = 0 for |k| ≥ m + 1 and prove that α_{k,ℓ} = 0 for |k| = m.
To establish the base of the induction we first compute the coefficient of e_1 in (10). Note that e_{i_r−k} = e_1 iff r = 1 and k = d, so that |k| + ℓ ≤ d forces ℓ = 0. Since z_{i_1,d} ≠ 0, it follows that α_{d,0} = 0. Next we compute the coefficient of e_{2d+1} in (10). Note that e_{i_r−k} = e_{2d+1} iff r = 1, k = −d or r = 2, k = d. In both cases it follows that ℓ = 0. Since α_{d,0} = 0 and z_{i_1,−d} ≠ 0, it follows that α_{−d,0} = 0.
To prove the induction step we assume that α_{k,ℓ} = 0 for every k with |k| ≥ m + 1. Then equation (10) implies that
(11) Σ_{k=−m}^{m} Σ_{r=1}^{d+1} ( Σ_{ℓ=0}^{d−|k|} α_{k,ℓ} (h_{i_r})^ℓ ) z_{i_r,k} e_{i_r−k} = 0.
Suppose that s ∈ {1, ..., d − m + 1}. We claim that equation (11) contains only one term with e_{i_s−m} and only one term with e_{i_s+m}. Namely, if i_r − k = i_s − m for some r = 1, ..., d+1 and k = −m, ..., m, then m − k = i_s − i_r. Clearly, 0 ≤ m − k ≤ 2m, so s ≥ r. If r = s then k = m and we are done. Otherwise, 2m ≥ i_s − i_r ≥ i_s − i_{s−1} = 2(d − s + 2), which implies that s ≥ d − m + 2. The other case is similar. It follows that for every s = 1, ..., d − m + 1
(12) ( Σ_{ℓ=0}^{d−m} α_{m,ℓ} (h_{i_s})^ℓ ) z_{i_s,m} = 0 and ( Σ_{ℓ=0}^{d−m} α_{−m,ℓ} (h_{i_s})^ℓ ) z_{i_s,−m} = 0.
We divide out by z_{i_s,m} and z_{i_s,−m} to obtain two Vandermonde systems
(13) Σ_{ℓ=0}^{d−m} α_{m,ℓ} (h_{i_s})^ℓ = 0 and Σ_{ℓ=0}^{d−m} α_{−m,ℓ} (h_{i_s})^ℓ = 0, for s = 1, ..., d − m + 1.
Since the h_{i_s} are distinct for different s, the Vandermonde coefficient matrices of both systems are invertible. It follows that
(14) α_{m,0} = ... = α_{m,d−m} = 0 and α_{−m,0} = ... = α_{−m,d−m} = 0,
which completes the proof of the induction step.

2.2. I-local directional linear dependence in sl_2.
In this subsection we prove Theorem 2, which is a characterization of the situation when finitely many elements of U(sl_2) are I-locally directionally linearly dependent, where I stands for the class of all finite-dimensional irreducible representations of U(sl_2).

Recall from the previous subsection that E_ij, 1 ≤ i, j ≤ 2, is the standard basis of M_2(C), and X := E_12, Y := E_21, H := E_11 − E_22 is the standard basis of sl_2. Let ρ_k, for k ∈ N, be the unique (up to equivalence) irreducible representation of sl_2 of dimension k. The element
C := XY + (1/2)H² + YX = 2XY + (1/2)H² − H
of the enveloping algebra U(sl_2) is called the Casimir element. It is well known that C generates the center Z of U(sl_2), i.e., Z = C[C], and that ρ_k(C) = ((k² − 1)/2) I_k, where I_k is the identity matrix of size k (see [11]). We write c_k := (k² − 1)/2 for all k ∈ N. Moreover, every element p ∈ U(sl_2) can be written in the form p = Σ_{i=1}^m f_i s_i, where the f_i are monomials of the form X^{i_1} Y^{i_2} H^{i_3} with i_1 i_2 = 0 and the s_i ∈ C[C] are central elements.

Before we proceed with the proof of Theorem 2, we need the following lemma:

Lemma 3.
Suppose u_1, ..., u_k ∈ C(z)^ℓ, for k, ℓ ∈ N, are linearly dependent for infinitely many complex values of z. Then they are linearly dependent over C(z).

Proof. Assume to the contrary that u_1, ..., u_k are linearly independent over C(z). Then we can add vectors u_{k+1}, ..., u_ℓ ∈ C(z)^ℓ such that u_1, ..., u_ℓ form a basis of C(z)^ℓ over C(z). The determinant of the matrix with columns u_1, ..., u_ℓ is a nonzero rational function p(z)/r(z) ∈ C(z), which has only finitely many zeros, a contradiction with the hypothesis that infinitely many evaluations of u_1, ..., u_k are C-linearly dependent.

Proof of Theorem 2.
To prove the implication (1) ⇒ (2), we first divide out the greatest common divisor of z_1, ..., z_k ∈ C[C] from the equation Σ_{i=1}^k z_i p_i = 0. Hence we can assume WLOG that z_1, ..., z_k do not have a common zero. Applying each ρ_n ∈ I, for n ∈ N, to Σ_{i=1}^k z_i p_i = 0, one gets 0 = Σ_{i=1}^k z_i(c_n) ρ_n(p_i). Since z_1, ..., z_k are without common zeros, this linear combination is non-trivial and hence p_1, ..., p_k are I-locally linearly dependent.

The implication (2) ⇒ (3) is trivial.

It remains to prove the implication (3) ⇒ (1). We write p_j = Σ_{i=1}^m f_i t_{ij}, m ∈ N, where the t_{ij} ∈ C[C] are central elements and the f_i are distinct monomials of the form X^{i_1} Y^{i_2} H^{i_3} with i_1, i_2, i_3 ∈ N and i_1 i_2 = 0. By Proposition 2, for all sufficiently large n ∈ N there exist vectors v_n ∈ V_{ρ_n} such that the vectors ρ_n(f_i)v_n, i = 1, ..., m, are linearly independent. Therefore, for those n, the vectors ρ_n(p_1)v_n, ..., ρ_n(p_k)v_n are linearly dependent if and only if the vectors [t_{1j}(c_n), ..., t_{mj}(c_n)]^T, j = 1, ..., k, are linearly dependent. Since this is true for infinitely many n, Lemma 3 implies that the vectors [t_{1j}(C), ..., t_{mj}(C)]^T, j = 1, ..., k, are C(C)-linearly dependent, and hence there exist v_j(C) ∈ C(C), j = 1, ..., k, not all zero, such that 0 = Σ_{j=1}^k v_j(C) [t_{1j}(C), ..., t_{mj}(C)]^T. Multiplying by the least common denominator z ∈ C[C] of the nonzero v_1, ..., v_k, we obtain 0 = Σ_{j=1}^k z_j [t_{1j}(C), ..., t_{mj}(C)]^T for some z_1, ..., z_k ∈ C[C], not all zero, and hence 0 = z_1 p_1 + ... + z_k p_k.

3. Reflexive closures
3.1. Reflexive closures in sl_2. Assume the notation from the previous section. Let q, p_1, ..., p_k be elements of U(sl_2). Theorem 4 gives a closely related sufficient condition (1) and a necessary condition (4) for q to belong to the I-local span, resp. the I-reflexive closure, of p_1, ..., p_k. The conditions differ only in the assumptions on the zero set of the central element z_0.

Theorem 4. Let q, p_1, ..., p_k be elements of U(sl_2) and consider the following statements:
(1) There exist central elements z_0, z_1, ..., z_k ∈ C[C] such that z_0 is nonzero, z_0(c_n) ≠ 0 whenever ρ_n(q) ≠ 0, and z_0 q = z_1 p_1 + ... + z_k p_k.
(2) q ∈ Loc_I{p_1, ..., p_k}.
(3) q ∈ Ref_I{p_1, ..., p_k}.
(4) There exist central elements z_0, z_1, ..., z_k ∈ C[C] such that z_0 is nonzero and z_0 q = z_1 p_1 + ... + z_k p_k.
Then (1) ⇒ (2) ⇒ (3) ⇒ (4) and the reverse implications do not hold.

The proof of Theorem 4 uses the following trivial consequence of Lemma 3.
Lemma 4.
Suppose s, u_1, ..., u_k ∈ C(z)^ℓ, for k, ℓ ∈ N, have the property that s(t) ∈ span_C{u_1(t), ..., u_k(t)} for infinitely many t ∈ C. Then s(z) ∈ span_{C(z)}{u_1(z), ..., u_k(z)}.

Proof of Theorem 4.
To prove (1) ⇒ (2), note that z_0 q = z_1 p_1 + ... + z_k p_k implies that
z_0(c_n) ρ_n(q) = z_1(c_n) ρ_n(p_1) + ... + z_k(c_n) ρ_n(p_k).
If ρ_n(q) = 0, then clearly ρ_n(q) ∈ span{ρ_n(p_1), ..., ρ_n(p_k)}. Otherwise ρ_n(q) ≠ 0, which implies by assumption that z_0(c_n) ≠ 0, and hence again ρ_n(q) ∈ span{ρ_n(p_1), ..., ρ_n(p_k)}.

The implication (2) ⇒ (3) is trivial.

The proof of (3) ⇒ (4) is analogous to the proof of the implication (3) ⇒ (1) in Theorem 2, only that we use Lemma 4 instead of Lemma 3.

It remains to construct counterexamples for the reverse implications. To prove that (2) does not imply (1), take q = H, p_1 = CX² + c_2 H and p_2 = c_2 X² + CH. First, we prove that q ∈ Loc_I{p_1, p_2}. Since ρ_2(X)² = 0, we have ρ_2(p_1) = ρ_2(p_2) = c_2 ρ_2(H) = c_2 ρ_2(q), so ρ_2(q) ∈ span{ρ_2(p_1), ρ_2(p_2)}. For n ≠ 2 we have (c_2² − c_n²) ρ_n(q) = c_2 ρ_n(p_1) − c_n ρ_n(p_2), which also implies that ρ_n(q) ∈ span{ρ_n(p_1), ρ_n(p_2)}. Second, we show that each triplet of central elements z_0, z_1, z_2 that satisfies z_0 q = z_1 p_1 + z_2 p_2 must have z_0(c_2) = 0. By comparing the coefficients at X² and H we get the system 0 = z_1 C + z_2 c_2 and z_0 = z_1 c_2 + z_2 C. Hence c_2 z_0 = z_1 (c_2² − C²) and z_0(c_2) = 0. Since ρ_2(q) ≠ 0, statement (1) fails.

To prove that (3) does not imply (2), take q = X, p_1 = 1 + H, p_2 = X + Y and p_3 = (C − c_2)X. Clearly
ρ_2(q) = E_12 ∉ span{2E_11, E_12 + E_21, 0} = span{ρ_2(p_1), ρ_2(p_2), ρ_2(p_3)},
which implies that q ∉ Loc_I{p_1, p_2, p_3}. Since
[y, 0]^T ∈ span{[x, 0]^T, [y, x]^T} for every x, y ∈ C,
we have that ρ_2(q)v ∈ span{ρ_2(p_1)v, ρ_2(p_2)v, ρ_2(p_3)v} for every v ∈ C². Clearly, we also have that ρ_n(q)v = (1/(c_n − c_2)) ρ_n(p_3)v for all n ≠ 2 and all v ∈ C^n, which implies that q ∈ Ref_I{p_1, p_2, p_3}.

To prove that (4) does not imply (3), take q = 1 and p_1 = (C − c_2)·1 and notice that (C − c_2)q = p_1, but q ∉ Ref_I{p_1} since ρ_2(q)e_1 = e_1 ∉ {0} = span{ρ_2(p_1)e_1}.

As seen in the proof, statement (4) of Theorem 4 does not suffice to conclude q ∈ Ref_I{p_1, ..., p_k}. The failure of the reverse implications in Theorem 4 is caused by representations ρ_n of small dimension n. The following theorem says that, for n big enough, the same reverse implications hold true.

Theorem 5.
Let q, p_1, ..., p_k be elements of U(sl_2). Then the following statements are equivalent:
(1) There exist central elements z_0, z_1, ..., z_k ∈ C[C] such that z_0 ≠ 0 and z_0 q = z_1 p_1 + ... + z_k p_k.
(2) ρ_n(q) ∈ span{ρ_n(p_1), ..., ρ_n(p_k)} for every big enough n ∈ N.
(3) ρ_n(q)v ∈ span{ρ_n(p_1)v, ..., ρ_n(p_k)v} for every big enough n ∈ N and every vector v.

Proof. To prove (1) ⇒ (2), one takes n big enough that z_0(c_r) ≠ 0 for every r ≥ n. Notice that for all such r we have ρ_r(z_0) = z_0(c_r) ≠ 0 and hence ρ_r(q) = z_0(c_r)^{−1} Σ_{i=1}^k z_i(c_r) ρ_r(p_i). The implication (2) ⇒ (3) is clear. The implication (3) ⇒ (1) follows easily from the proof of the implication (3) ⇒ (4) in Theorem 4, since for n big enough there exist vectors v_n ∈ V_{ρ_n} such that ρ_n(q)v_n ∈ span{ρ_n(p_i)v_n : i = 1, ..., k}.

3.2. Reflexive closures in solvable Lie algebras.
By Lie's theorem [8, Theorem 9.11], every irreducible representation π of a finite-dimensional complex solvable Lie algebra L is one-dimensional. It follows that π annihilates L' := [L, L], hence it factors through the abelian Lie algebra L/L'. Let R be the left (equivalently, the right) ideal of U(L) generated by L'. By [6, Proposition 2.2.14], the canonical homomorphism from U(L) to U(L/L') is surjective with kernel R, and so U(L)/R ≅ U(L/L'). Clearly, every irreducible representation of U(L) factors through U(L)/R.

Theorem 6.
Let L be a finite-dimensional complex solvable Lie algebra and R the two-sided ideal of U(L) generated by L' = [L, L]. Pick p_1, ..., p_k, q ∈ U(L), write J for the two-sided ideal of U(L) generated by p_1, ..., p_k, and write I for the class of all irreducible representations of U(L). The following are equivalent:
(1) For some n ∈ N we have that q^n ∈ J + R.
(2) Every irreducible representation of U(L) which annihilates p_1, ..., p_k also annihilates q.
(3) q ∈ Loc_I{p_1, ..., p_k}.
(4) q ∈ Ref_I{p_1, ..., p_k}.

Proof. The equivalence of (1) and (2) follows from Hilbert's Nullstellensatz and U(L)/R ≅ U(L/L'). Namely, since U(L/L') is isomorphic to a polynomial algebra, the following are equivalent for any p'_1, ..., p'_k, q' ∈ U(L/L'):
• q' belongs to the radical of the ideal generated by p'_1, ..., p'_k.
• Every character φ of U(L/L') which annihilates p'_1, ..., p'_k also annihilates q'.
The equivalence of (2) and (3) follows from the trivial observation that for complex numbers α_1, ..., α_k, β we have that β ∈ span{α_1, ..., α_k} iff (α_1 = ... = α_k = 0 implies β = 0). Since all irreducible representations are one-dimensional, (3) is equivalent to (4).

4. Proof of Theorem 3

4.1. I_d-local directional linear dependence in sl_3. The Lie algebra of all trace-zero complex 3 × 3 matrices is denoted by sl_3. We refer the reader to [10, Chapter 6] for the theory of representations of sl_3; here we recall the basics. The standard basis of sl_3 is
(15) X_1 := E_12, X_2 := E_23, X_3 := E_13, Y_1 := E_21, Y_2 := E_32, Y_3 := E_31, H_1 := E_11 − E_22, H_2 := E_22 − E_33.
We write V_1 = V_2 = C³. Let e_1, e_2, e_3 be the standard basis of V_1 and let f_1 = e_3, f_2 = −e_2, f_3 = e_1 be a basis of V_2. The action of sl_3 on V_1 is defined by π_1(Z)v := Zv and its action on V_2 is defined by π_2(Z)v := −Z^T v. (Note that π_1 is the standard representation and π_2 is its dual.) For every m_1, m_2 ∈ N, we identify the m_1-th symmetric power Sym^{m_1}(V_1) of V_1 with the vector space of all homogeneous polynomials of degree m_1 in e_1, e_2, e_3.
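The structure constants of the basis (15) that are used later (the roots of Y_1, Y_2, Y_3 with respect to (H_1, H_2) and the relation Y_2 Y_1 = Y_1 Y_2 + Y_3) are easy to verify numerically. A small sketch (illustrative code, not from the paper):

```python
# Verify the sl_3 structure constants used in the proof of Lemma 5:
# roots of Y_1, Y_2, Y_3 and the relation [Y_2, Y_1] = Y_3.
import numpy as np

def E(i, j):
    m = np.zeros((3, 3)); m[i - 1, j - 1] = 1.0; return m

X1, X2, X3 = E(1, 2), E(2, 3), E(1, 3)
Y1, Y2, Y3 = E(2, 1), E(3, 2), E(3, 1)
H1, H2 = E(1, 1) - E(2, 2), E(2, 2) - E(3, 3)

def ad(h, z):  # commutator [h, z]
    return h @ z - z @ h

# roots: Y_1 -> (-2, 1), Y_2 -> (1, -2), Y_3 -> (-1, -1)
assert np.allclose(ad(H1, Y1), -2 * Y1) and np.allclose(ad(H2, Y1), Y1)
assert np.allclose(ad(H1, Y2), Y2) and np.allclose(ad(H2, Y2), -2 * Y2)
assert np.allclose(ad(H1, Y3), -Y3) and np.allclose(ad(H2, Y3), -Y3)
# relation used in Case 1 of the induction: [Y_2, Y_1] = Y_3
assert np.allclose(Y2 @ Y1 - Y1 @ Y2, Y3)
print("sl_3 structure constants verified")
```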
Similarly, we identify Sym^{m_2}(V_2) with the vector space of all homogeneous polynomials of degree m_2 in f_1, f_2, f_3. Let ψ_1 be the representation of sl_3 on Sym^{m_1}(V_1) defined by
ψ_1(Z)(e_{i_1} e_{i_2} ··· e_{i_{m_1}}) := Σ_{j=1}^{m_1} e_{i_1} ··· e_{i_{j−1}} (π_1(Z)e_{i_j}) e_{i_{j+1}} ··· e_{i_{m_1}};
ψ_2 is defined analogously. The representations ψ_1 and ψ_2 are irreducible, but their tensor product ψ := ψ_1 ⊗ ψ_2, defined by
ψ(Z)(v_1 ⊗ v_2) := ψ_1(Z)v_1 ⊗ v_2 + v_1 ⊗ ψ_2(Z)v_2,
is not irreducible. Let W be the subspace of Sym^{m_1}(V_1) ⊗ Sym^{m_2}(V_2) generated by all elements of the form
(16) v_{i,j,k} := ψ(Y_1^i Y_2^j Y_3^k)(e_1^{m_1} ⊗ f_1^{m_2}), i, j, k ∈ N.
It turns out that W is an invariant subspace for ψ(sl_3) and the subrepresentation π_{m_1,m_2} := ψ|_W is irreducible. Recall that a weight of a representation π is a pair of integers (z_1, z_2) such that π(H_i)v = z_i v for i = 1, 2, where v is some nonzero vector, called a weight vector. The weight (m_1, m_2) is the highest weight if for every weight (m'_1, m'_2) we have
(m_1, m_2) − (m'_1, m'_2) = a(2, −1) + b(−1, 2)
for some a, b ≥ 0. The highest weight of the representation on the irreducible subspace W generated by the vectors (16) is (m_1, m_2) and its highest weight vector is v_{0,0,0} := e_1^{m_1} ⊗ f_1^{m_2}. In order to prove an analogue of Proposition 2, we start with the following proposition.
Proposition 3.
For every d, m_1, m_2 ∈ N with m_1 ≥ d and m_2 ≥ d, the vectors v_{k,ℓ,m} with k, ℓ, m ∈ N such that k + ℓ + m ≤ d are linearly independent.

Proof. Denote S_d := {(k, ℓ, m) ∈ N³ : k + ℓ + m ≤ d} and assume that
(17) Σ_{(k,ℓ,m)∈S_d} α_{k,ℓ,m} v_{k,ℓ,m} = 0
for some α_{k,ℓ,m} ∈ C. We have to prove that each α_{k,ℓ,m} is zero. After a short computation, which depends on the formula
π_{m_1,m_2}(Y_i^j)(u_1 ⊗ u_2) = Σ_{q=0}^{j} (j choose q) ψ_1(Y_i^q)u_1 ⊗ ψ_2(Y_i^{j−q})u_2
for each i and j, we get that
(18) v_{k,ℓ,m} = Σ_{t=0}^{m} Σ_{s=0}^{k} β^{k,ℓ,m}_{s,t} e_1^{m_1−s−t} e_2^{s} e_3^{t} ⊗ f_1^{m_2+t−ℓ−m} f_2^{ℓ−k+s} f_3^{m+k−s−t},
where β^{k,ℓ,m}_{s,t} ∈ R and in particular
β^{k,ℓ,m}_{k,m} = m! ℓ! k! (m_1 choose m)(m_2 choose ℓ)(m_1−m choose k) ≠ 0.
For a, b, c ∈ N we denote by P_{a,b,c} the projection onto the linear subspace
Lin{ e_1^{i_1} e_2^{b} e_3^{a} ⊗ f_1^{i_2} f_2^{c} f_3^{i_3} : i_j ∈ N }.
Applying the projections P_{a,b,c} repeatedly, in the lexicographic ordering of the indices (a, b, c) with (b, c, a) ∈ S_d, to (17) and using (18), we deduce inductively that each α_{k,ℓ,m} in (17) is zero. Namely, first
0 = P_{d,0,0}( Σ_{(k,ℓ,m)∈S_d} α_{k,ℓ,m} v_{k,ℓ,m} ) = α_{0,0,d} β^{0,0,d}_{0,d} e_1^{m_1−d} e_3^{d} ⊗ f_1^{m_2}
implies that α_{0,0,d} = 0 (since β^{0,0,d}_{0,d} ≠ 0). Now fix (a_0, b_0, c_0) and assume that α_{b,c,a} = 0 for all (a, b, c) ≻_lex (a_0, b_0, c_0). Then
0 = P_{a_0,b_0,c_0}( Σ_{(k,ℓ,m)∈S_d} α_{k,ℓ,m} v_{k,ℓ,m} ) = α_{b_0,c_0,a_0} β^{b_0,c_0,a_0}_{b_0,a_0} e_1^{m_1−b_0−a_0} e_2^{b_0} e_3^{a_0} ⊗ f_1^{m_2−c_0} f_2^{c_0}
implies that α_{b_0,c_0,a_0} = 0 (since β^{b_0,c_0,a_0}_{b_0,a_0} ≠ 0).

Lemma 5.
For every $d, m_1, m_2 \in \mathbb{N}$ with $m_1 \geq d$ and $m_2 \geq d$, the generators of $\mathfrak{sl}_3$ map the vectors $v_{k,\ell,m}$, $k, \ell, m \in \mathbb{N}$, by the following rules:
(19) $\pi_{m_1,m_2}(H_1)\, v_{k,\ell,m} = \alpha\, v_{k,\ell,m},$
(20) $\pi_{m_1,m_2}(H_2)\, v_{k,\ell,m} = \beta\, v_{k,\ell,m},$
(21) $\pi_{m_1,m_2}(Y_1)\, v_{k,\ell,m} = v_{k+1,\ell,m},$
(22) $\pi_{m_1,m_2}(Y_2)\, v_{k,\ell,m} = v_{k,\ell+1,m} + k\, v_{k-1,\ell,m+1},$
(23) $\pi_{m_1,m_2}(Y_3)\, v_{k,\ell,m} = v_{k,\ell,m+1},$
(24) $\pi_{m_1,m_2}(X_1)\, v_{k,\ell,m} = \gamma\, v_{k-1,\ell,m} - m\, v_{k,\ell+1,m-1},$
(25) $\pi_{m_1,m_2}(X_2)\, v_{k,\ell,m} = \delta\, v_{k,\ell-1,m} + m\, v_{k+1,\ell,m-1},$
(26) $\pi_{m_1,m_2}(X_3)\, v_{k,\ell,m} = \xi\, v_{k-1,\ell-1,m} + \zeta\, v_{k,\ell,m-1},$
where
$$\alpha = m_1 - 2k + \ell - m, \quad \beta = m_2 + k - 2\ell - m, \quad \gamma = k(m_1 - k + 1 + \ell - m),$$
$$\delta = \ell(m_2 - \ell + 1), \quad \xi = -k\ell(m_2 - \ell + 1), \quad \zeta = m(m_1 + m_2 + 1 - k - \ell - m).$$

Proof.
We write $\pi := \pi_{m_1,m_2}$. Equalities (19) and (20) follow from the following facts:
• $v_{0,0,0}$ is a weight vector corresponding to the weight $(m_1, m_2)$.
• $\pi(Y_1)$, $\pi(Y_2)$, $\pi(Y_3)$ are root vectors corresponding to the roots $(-2,1)$, $(1,-2)$, $(-1,-1)$, respectively.
• Each vector $v_{k,\ell,m}$ is nonzero by Proposition 3.
The equality (21) is clear, while (23) follows from the fact that $Y_3$ commutes with $Y_1$ and $Y_2$ in $U(\mathfrak{sl}_3)$. The remaining equalities can be proved by induction on lexicographically increasing triples $(k, \ell, m)$. As examples we will prove (22) and (24).

The base of induction $(k, \ell, m) = (0, 0, 0)$ for (22) is established by calculating $\pi(Y_2)\, v_{0,0,0} = v_{0,1,0}$. Now fix a triple $(k_0, \ell_0, m_0)$ and assume that (22) is true for every triple $(k, \ell, m)$ such that $(k_0, \ell_0, m_0) \succ_{\mathrm{lex}} (k, \ell, m)$. We separate two cases:

Case 1: $k_0 > 0$. By the relation $Y_2 Y_1 = Y_1 Y_2 + Y_3$ in $U(\mathfrak{sl}_3)$ and the fact that $Y_3$ commutes with $Y_1$ and $Y_2$ we have that
$$\pi(Y_2)\, v_{k_0,\ell_0,m_0} = \pi(Y_1)\pi(Y_2)\, v_{k_0-1,\ell_0,m_0} + v_{k_0-1,\ell_0,m_0+1}.$$
Now we use the induction hypothesis for $(k_0 - 1, \ell_0, m_0)$ and get
$$\pi(Y_2)\, v_{k_0,\ell_0,m_0} = v_{k_0,\ell_0+1,m_0} + k_0\, v_{k_0-1,\ell_0,m_0+1}.$$

Case 2: $k_0 = 0$. We have $\pi(Y_2)\, v_{0,\ell_0,m_0} = v_{0,\ell_0+1,m_0}$, which is (22).

Now we prove (24). The base of induction $(k, \ell, m) = (0, 0, 0)$ is established by calculating
$$\pi(X_1)\, v_{0,0,0} = \psi_1(X_1) e_1^{m_1} \otimes f_3^{m_2} + e_1^{m_1} \otimes \psi_2(X_1) f_3^{m_2} = 0.$$
Now fix a triple $(k_0, \ell_0, m_0)$ and assume that (24) is true for every triple $(k, \ell, m)$ such that $(k_0, \ell_0, m_0) \succ_{\mathrm{lex}} (k, \ell, m)$. We separate three cases:

Case 1: $k_0 > 0$. By the relation $X_1 Y_1 = Y_1 X_1 + H_1$ in $U(\mathfrak{sl}_3)$ we have that
$$\pi(X_1)\, v_{k_0,\ell_0,m_0} = \pi(Y_1)\pi(X_1)\, v_{k_0-1,\ell_0,m_0} + \pi(H_1)\, v_{k_0-1,\ell_0,m_0}.$$
Now we use the induction hypothesis for $(k_0 - 1, \ell_0, m_0)$ for the first term and the equality (19) for the second term, and after a short calculation we get (24).

Case 2: $k_0 = 0$, $\ell_0 > 0$. By the relation $X_1 Y_2 = Y_2 X_1$ in $U(\mathfrak{sl}_3)$ we have that
$$\pi(X_1)\, v_{0,\ell_0,m_0} = \pi(Y_2)\pi(X_1)\, v_{0,\ell_0-1,m_0},$$
and by the induction hypothesis for $(0, \ell_0 - 1, m_0)$ we get (24).

Case 3: $k_0 = 0$, $\ell_0 = 0$, $m_0 > 0$. By the relation $X_1 Y_3 = Y_3 X_1 - Y_2$ in $U(\mathfrak{sl}_3)$ we have that
$$\pi(X_1)\, v_{0,0,m_0} = \pi(Y_3)\pi(X_1)\, v_{0,0,m_0-1} - v_{0,1,m_0-1},$$
and by the induction hypothesis for $(0, 0, m_0 - 1)$ we get (24). $\square$
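The commutation relations invoked in the proof can be checked directly in the defining representation of $\mathfrak{sl}_3$; the following sketch (not from the paper) verifies them on $3 \times 3$ matrices, with $Y_1 = E_{21}$, $Y_2 = E_{32}$, $Y_3 = E_{31}$, $X_1 = E_{12}$, $H_1 = E_{11} - E_{22}$.

```python
# Verify the relations used in the proof of Lemma 5 in the defining
# representation of sl3:  Y2 Y1 = Y1 Y2 + Y3,  Y3 commutes with Y1, Y2,
# X1 Y1 = Y1 X1 + H1,  X1 Y2 = Y2 X1,  X1 Y3 = Y3 X1 - Y2.

def E(i, j):
    return [[1 if (r, c) == (i - 1, j - 1) else 0 for c in range(3)]
            for r in range(3)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(3)] for i in range(3)]

Y1, Y2, Y3 = E(2, 1), E(3, 2), E(3, 1)
X1 = E(1, 2)
H1 = sub(E(1, 1), E(2, 2))

assert mul(Y2, Y1) == add(mul(Y1, Y2), Y3)   # Y2 Y1 = Y1 Y2 + Y3
assert mul(Y3, Y1) == mul(Y1, Y3)            # Y3 commutes with Y1
assert mul(Y3, Y2) == mul(Y2, Y3)            # Y3 commutes with Y2
assert mul(X1, Y1) == add(mul(Y1, X1), H1)   # X1 Y1 = Y1 X1 + H1
assert mul(X1, Y2) == mul(Y2, X1)            # X1 Y2 = Y2 X1
assert mul(X1, Y3) == sub(mul(Y3, X1), Y2)   # X1 Y3 = Y3 X1 - Y2
```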
Proposition 4.
For every $d, m_1, m_2 \in \mathbb{N}$ with $m_1, m_2$ big enough, the vectors
(27) $\pi_{m_1,m_2}(Y_1^{j_1} Y_2^{j_2} Y_3^{j_3} X_1^{\ell_1} X_2^{\ell_2} X_3^{\ell_3} H_1^{r_1} H_2^{r_2})\Big(\sum_{t=1}^{L} v_{k(t),\ell(t),m(t)}\Big)$
are linearly independent, where the powers $j_1, j_2, j_3, \ell_1, \ell_2, \ell_3, r_1, r_2 \in \mathbb{N}$ are such that
$$\sum_{i=1}^{3} j_i + \sum_{i=1}^{3} \ell_i + \sum_{i=1}^{2} r_i \leq d, \qquad j_2 \ell_2 = 0, \qquad r_2 \leq 2,$$
and the indices $k(t), \ell(t), m(t)$ for $t = 1, \ldots, L$, with $L := 4d^3 + 4d^2 + 2d + 1$, are defined by
$$k(t) = (3d+1)t, \qquad \ell(t) = t^{2d+1}, \qquad m(t) = t^{4d^2+4d+1}.$$

Proof.
We write $\vec{Y} := (Y_1, Y_2, Y_3)$, $\vec{X} := (X_1, X_2, X_3)$, $\vec{H} := (H_1, H_2)$, $\vec{j} := (j_1, j_2, j_3)$, $\vec{\ell} := (\ell_1, \ell_2, \ell_3)$, $\vec{r} := (r_1, r_2)$ and
$$\vec{Y}^{\vec{j}} \vec{X}^{\vec{\ell}} \vec{H}^{\vec{r}} := Y_1^{j_1} Y_2^{j_2} Y_3^{j_3} X_1^{\ell_1} X_2^{\ell_2} X_3^{\ell_3} H_1^{r_1} H_2^{r_2}.$$
Lemma 5 implies that
(28) $\pi_{m_1,m_2}(\vec{Y}^{\vec{j}} \vec{X}^{\vec{\ell}} \vec{H}^{\vec{r}})\, v_{k,\ell,m} = \sum_{s=0}^{j_2+\ell_1+\ell_2+\ell_3} c_{\vec{j},\vec{\ell},\vec{r},s}(k,\ell,m) \cdot v_{k-\ell_1-\ell_3-j_2+j_1+s,\; \ell-\ell_2-\ell_3+s,\; m+j_2+j_3-s},$
where the $c_{\vec{j},\vec{\ell},\vec{r},s}(k,\ell,m)$ are polynomials in $k, \ell, m$. Let $S$ be the endomorphism of $V_{\pi_{m_1,m_2}}$ defined by
(29) $S(v_{k,\ell,m}) = v_{k+1,\ell+1,m-1}$ if $m \geq 1$, and $S(v_{k,\ell,0}) = 0$.
Consider the operator
$$C_{\vec{j},\vec{\ell},\vec{r}}(k,\ell,m,S) := \sum_{s=0}^{j_2+\ell_1+\ell_2+\ell_3} c_{\vec{j},\vec{\ell},\vec{r},s}(k,\ell,m)\, S^s.$$
The equation (28) can now be rewritten as
(30) $\pi_{m_1,m_2}(\vec{Y}^{\vec{j}} \vec{X}^{\vec{\ell}} \vec{H}^{\vec{r}})\, v_{k,\ell,m} = C_{\vec{j},\vec{\ell},\vec{r}}(k,\ell,m,S)\, v_{k-\ell_1-\ell_3-j_2+j_1,\; \ell-\ell_2-\ell_3,\; m+j_2+j_3}.$
To compute the leading term of $C_{\vec{j},\vec{\ell},\vec{r}}(k,\ell,m,S)$ with respect to a monomial ordering $\succ$ defined below, we first introduce new variables
(31) $x := m_2 + k - 2\ell - m, \qquad y := m_1 - k + \ell - m, \qquad z := m_1 - 2k + \ell - m.$
Note that we have that
(32) $k = y - z, \qquad \ell = \tfrac{1}{3}(-x + 3y - 2z + m_2 - m_1), \qquad m = \tfrac{1}{3}(-x - 3y + z + 2m_1 + m_2).$
Now consider the lexicographic ordering induced by
(33) $x \succ y \succ z \succ S.$
Using Lemma 5 we see that the leading term of $C_{\vec{j},\vec{\ell},\vec{r}}(k,\ell,m,S)$ is the same as the leading term of
$$(k + S)^{j_2}\, (\gamma - mS)^{\ell_1}\, (\delta + mS)^{\ell_2}\, (\xi + \zeta S)^{\ell_3}\, (m_1 - 2k + \ell - m)^{r_1}\, (m_2 + k - 2\ell - m)^{r_2},$$
with $\gamma, \delta, \xi, \zeta$ as in Lemma 5, which is equal to
$$(-1)^{\ell_2} \Big(\tfrac{1}{3}\Big)^{\ell_1 + 2(\ell_2+\ell_3)} x^{\ell_1 + 2(\ell_2+\ell_3) + r_2}\, y^{j_2+\ell_3}\, z^{r_1}\, S^{\ell_1}.$$
We denote by $\Gamma_d$ the set of all tuples $(\vec{j}, \vec{\ell}, \vec{r})$ satisfying
$$\sum_{i=1}^{3} j_i + \sum_{i=1}^{3} \ell_i + \sum_{i=1}^{2} r_i \leq d, \qquad j_2 \ell_2 = 0 \quad \text{and} \quad r_2 \leq 2.$$
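The change of variables (31)–(32), as reconstructed here (the coefficients involving $m_1, m_2$ are part of the reconstruction), can be sanity-checked with exact rational arithmetic; this sketch is illustrative and not part of the original paper.

```python
from fractions import Fraction as F
from itertools import product

# Check that the substitution (32) exactly inverts (31),
# for the reconstructed affine change of variables.
def forward(k, l, m, m1, m2):
    x = m2 + k - 2 * l - m
    y = m1 - k + l - m
    z = m1 - 2 * k + l - m
    return x, y, z

def backward(x, y, z, m1, m2):
    k = y - z
    l = F(-x + 3 * y - 2 * z + m2 - m1, 3)
    m = F(-x - 3 * y + z + 2 * m1 + m2, 3)
    return k, l, m

m1, m2 = 10, 7
for k, l, m in product(range(4), repeat=3):
    assert backward(*forward(k, l, m, m1, m2), m1, m2) == (k, l, m)
```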
Assume that
(34) $\sum_{(\vec{j},\vec{\ell},\vec{r}) \in \Gamma_d} \lambda_{\vec{j},\vec{\ell},\vec{r}} \cdot \pi_{m_1,m_2}(\vec{Y}^{\vec{j}} \vec{X}^{\vec{\ell}} \vec{H}^{\vec{r}})\Big(\sum_{t=1}^{L} v_{k(t),\ell(t),m(t)}\Big) = 0.$
By the choice of $k(t), \ell(t), m(t)$, the triples of $v$-indices appearing in $\pi_{m_1,m_2}(\vec{Y}^{\vec{j}} \vec{X}^{\vec{\ell}} \vec{H}^{\vec{r}})\, v_{k(t),\ell(t),m(t)}$ are always different from the triples of $v$-indices appearing in $\pi_{m_1,m_2}(\vec{Y}^{\vec{j}} \vec{X}^{\vec{\ell}} \vec{H}^{\vec{r}})\, v_{k(t'),\ell(t'),m(t')}$ if $t \neq t'$. Therefore, the equation (34) implies that for every $t = 1, \ldots, L$ we have that
$$\sum_{(\vec{j},\vec{\ell},\vec{r}) \in \Gamma_d} \lambda_{\vec{j},\vec{\ell},\vec{r}} \cdot \pi_{m_1,m_2}(\vec{Y}^{\vec{j}} \vec{X}^{\vec{\ell}} \vec{H}^{\vec{r}})\, v_{k(t),\ell(t),m(t)} = 0.$$
The equation (30) implies that
(35) $0 = \sum_{(\vec{j},\vec{\ell},\vec{r}) \in \Gamma_d} \lambda_{\vec{j},\vec{\ell},\vec{r}} \cdot C_{\vec{j},\vec{\ell},\vec{r}}(k(t), \ell(t), m(t), S)\, v_{k(t)-\ell_1-\ell_3-j_2+j_1,\; \ell(t)-\ell_2-\ell_3,\; m(t)+j_2+j_3}.$
For every $(\vec{j},\vec{\ell},\vec{r}) \in \Gamma_d$ let us define the set
$$\Delta_{(\vec{j},\vec{\ell},\vec{r})} := \Big\{(d_1, d_2, d_3) \in \mathbb{Z}^3 : d_1 = -\ell_1 - \ell_3 - j_2 + j_1 + s,\; d_2 = -\ell_2 - \ell_3 + s,\; d_3 = j_2 + j_3 - s \text{ for some } 0 \leq s \leq j_2 + \sum_{i=1}^{3} \ell_i\Big\}.$$
Fix a vector $\vec{e} := (e_1, e_2) \in \mathbb{Z}^2$ and define the set
$$\Lambda_{\vec{e}} := \{(\vec{j},\vec{\ell},\vec{r}) \in \Gamma_d : (e_1 + d,\, d,\, e_2 - d) \in \Delta_{(\vec{j},\vec{\ell},\vec{r})} \text{ for some } d \in \mathbb{Z}\} = \{(\vec{j},\vec{\ell},\vec{r}) \in \Gamma_d : j_1 = j_2 + \ell_1 - \ell_2 + e_1,\; j_3 = -j_2 + \ell_2 + \ell_3 + e_2\}.$$
Note that the sets $\Lambda_{\vec{e}}$ are pairwise disjoint and that they cover $\Gamma_d$. Let us define a vector function $\vec{f}$ of $j_2, \vec{\ell}, \vec{e}$ by
$$\vec{f}(j_2, \vec{\ell}, \vec{e}) = (j_2 + \ell_1 - \ell_2 + e_1,\; j_2,\; -j_2 + \ell_2 + \ell_3 + e_2).$$
Clearly, $\Lambda_{\vec{e}} = \{(\vec{j},\vec{\ell},\vec{r}) \in \Gamma_d : (j_1, j_2, j_3) = \vec{f}(j_2, \vec{\ell}, \vec{e})\}$. Let $\Lambda'_{\vec{e}}$ be the projection of $\Lambda_{\vec{e}}$ along $j_1$ and $j_3$. The equation (35) implies that
$$0 = \sum_{(j_2,\vec{\ell},\vec{r}) \in \Lambda'_{\vec{e}}} \lambda_{\vec{f}(j_2,\vec{\ell},\vec{e}),\vec{\ell},\vec{r}} \cdot C_{\vec{f}(j_2,\vec{\ell},\vec{e}),\vec{\ell},\vec{r}}(k(t), \ell(t), m(t), S)\, v_{k(t)+e_1-\ell_2-\ell_3,\; \ell(t)-\ell_2-\ell_3,\; m(t)+e_2+\ell_2+\ell_3}$$
$$= \sum_{(j_2,\vec{\ell},\vec{r}) \in \Lambda'_{\vec{e}}} \Big(\lambda_{\vec{f}(j_2,\vec{\ell},\vec{e}),\vec{\ell},\vec{r}} \cdot C_{\vec{f}(j_2,\vec{\ell},\vec{e}),\vec{\ell},\vec{r}}(k(t), \ell(t), m(t), S)\, S^{\,d-\ell_2-\ell_3}\Big)\, v_{k(t)+e_1-d,\; \ell(t)-d,\; m(t)+e_2+d},$$
where $d - \ell_2 - \ell_3 \geq 0$ for every tuple in $\Gamma_d$.
Defining the operators
$$P_{j_2,\vec{\ell},\vec{r}} := C_{\vec{f}(j_2,\vec{\ell},\vec{e}),\vec{\ell},\vec{r}}(k(t), \ell(t), m(t), S)\, S^{\,d-\ell_2-\ell_3},$$
we get that
(36) $0 = \sum_{(j_2,\vec{\ell},\vec{r}) \in \Lambda'_{\vec{e}}} \big(\lambda_{\vec{f}(j_2,\vec{\ell},\vec{e}),\vec{\ell},\vec{r}}\, P_{j_2,\vec{\ell},\vec{r}}\big)\, v_{k(t)+e_1-d,\; \ell(t)-d,\; m(t)+e_2+d}.$
We will prove by contradiction that $\lambda_{\vec{f}(j_2,\vec{\ell},\vec{e}),\vec{\ell},\vec{r}} = 0$ for all $j_2, \vec{\ell}, \vec{r}$, and hence $\lambda_{\vec{j},\vec{\ell},\vec{r}} = 0$ for all $(\vec{j},\vec{\ell},\vec{r}) \in \Gamma_d$ in (34). Among the tuples $(j_2, \vec{\ell}, \vec{r})$ with $\lambda_{\vec{f}(j_2,\vec{\ell},\vec{e}),\vec{\ell},\vec{r}} \neq 0$, choose a tuple $(j_2', \vec{\ell}', \vec{r}')$ such that the operator $P_{j_2,\vec{\ell},\vec{r}}$ has the highest leading term with respect to the monomial ordering (33). By the following claim such a tuple is unique.

Claim 1:
Different operators $P_{j_2,\vec{\ell},\vec{r}}$ have different leading terms.

From the discussion above it follows that the leading term of the operator $P_{j_2,\vec{\ell},\vec{r}}$ is
$$(-1)^{\ell_2} \Big(\tfrac{1}{3}\Big)^{\ell_1 + 2(\ell_2+\ell_3)} x(t)^{\ell_1 + 2(\ell_2+\ell_3) + r_2}\, y(t)^{j_2+\ell_3}\, z(t)^{r_1}\, S^{\,\ell_1 + d - \ell_2 - \ell_3}.$$
Pick any $\alpha, \beta, \gamma, \delta \in \mathbb{N}$. We will show that there exists at most one tuple $(j_2, \vec{\ell}, \vec{r}) \in \mathbb{N}^6$ such that
(37) $\ell_1 + 2(\ell_2 + \ell_3) + r_2 = \alpha,$
(38) $j_2 + \ell_3 = \beta,$
(39) $r_1 = \gamma,$
(40) $\ell_1 + d - \ell_2 - \ell_3 = \delta,$
(41) $j_2 \ell_2 = 0,$
(42) $r_2 \leq 2.$
Subtracting (40) from (37) we obtain
(43) $3(\ell_2 + \ell_3) + r_2 = \alpha - \delta + d,$
which together with (42) implies that
(44) $\ell_2 + \ell_3 = (\alpha - \delta + d) \,\mathrm{div}\, 3 =: \varepsilon,$
(45) $r_2 = (\alpha - \delta + d) \bmod 3.$
Equations (40) and (44) imply that
(46) $\ell_1 = \delta + \varepsilon - d.$
Subtracting (44) from (38) we obtain
(47) $j_2 - \ell_2 = \beta - \varepsilon,$
which together with (41) implies that
(48) $j_2 = (\beta - \varepsilon)_+ := \max\{\beta - \varepsilon, 0\},$
(49) $\ell_2 = (\beta - \varepsilon)_- := \max\{\varepsilon - \beta, 0\}.$
From (44) and (49) we obtain
(50) $\ell_3 = \varepsilon - (\beta - \varepsilon)_-.$
We already know that $r_1 = \gamma$ from (39). This proves Claim 1.
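The recovery procedure (44)–(50) behind Claim 1 is effective, and it can be checked by brute force for a small $d$ that the tuple $(j_2, \ell_1, \ell_2, \ell_3, r_1, r_2)$ is determined by the four exponents. The sketch below is illustrative (the function names are ours, not the paper's).

```python
from itertools import product

d = 3

def data(j2, l1, l2, l3, r1, r2):
    # exponents of the leading term of P_{j2, l, r}
    alpha = l1 + 2 * (l2 + l3) + r2
    beta = j2 + l3
    gamma = r1
    delta = l1 + d - l2 - l3
    return alpha, beta, gamma, delta

def recover(alpha, beta, gamma, delta):
    # equations (44)-(50)
    eps, r2 = divmod(alpha - delta + d, 3)   # (44), (45)
    l1 = delta + eps - d                     # (46)
    j2 = max(beta - eps, 0)                  # (48)
    l2 = max(eps - beta, 0)                  # (49)
    l3 = eps - l2                            # (50)
    return j2, l1, l2, l3, gamma, r2

# all admissible tuples: total degree <= d, j2*l2 = 0, r2 <= 2
tuples = [t for t in product(range(d + 1), repeat=6)
          if sum(t) <= d and t[0] * t[2] == 0 and t[5] <= 2]
for t in tuples:
    assert recover(*data(*t)) == t
```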
For the tuple $(j_2', \vec{\ell}', \vec{r}')$ let $\alpha', \beta', \gamma', \delta'$ be defined as in (37)–(40). Now we observe the coefficients at the vector $v_{k(t)+e_1-d+\delta',\; \ell(t)-d+\delta',\; m(t)+e_2+d-\delta'}$ on both sides of (36) and get
$$0 = (-1)^{\ell_2'} \Big(\tfrac{1}{3}\Big)^{\ell_1' + 2(\ell_2'+\ell_3')} x(t)^{\alpha'} y(t)^{\beta'} z(t)^{\gamma'} + \sum_{\substack{\alpha, \beta, \gamma \in \mathbb{N},\; 0 \leq \alpha+\beta+\gamma \leq 3d,\\ (\alpha',\beta',\gamma') \succ (\alpha,\beta,\gamma)}} c_{\alpha,\beta,\gamma}\, x(t)^{\alpha} y(t)^{\beta} z(t)^{\gamma}$$
for some $c_{\alpha,\beta,\gamma} \in \mathbb{C}$. Since this must hold for all $t = 1, \ldots, L$, this is a contradiction by the following claim.

Claim 2:
All vectors $\big(x(t)^{\alpha_1} y(t)^{\alpha_2} z(t)^{\alpha_3}\big)_{t=1,\ldots,L}$, where $(\alpha_1, \alpha_2, \alpha_3)$ runs over the exponent triples appearing above, are linearly independent.

By a Vandermonde determinant argument one can show that the vectors
$$\big(k(t)^{\alpha_1}\, \ell(t)^{\alpha_2}\, m(t)^{\alpha_3}\big)_{t=1,\ldots,L} = \big((3d+1)^{\alpha_1} \cdot t^{\,\alpha_1 + \alpha_2 (2d+1) + \alpha_3 (4d^2+4d+1)}\big)_{t=1,\ldots,L}$$
are linearly independent. Indeed, for different triples $(\alpha_1, \alpha_2, \alpha_3)$ appearing above, the exponents $\alpha_1 + \alpha_2 (2d+1) + \alpha_3 (4d^2+4d+1)$ are different, and the highest of them, attained at $\alpha_1 = \alpha_2 = 0$, $\alpha_3 = d$, equals $4d^3 + 4d^2 + d \leq L - 1$. By using (31) and (32) we see that
$$\mathrm{span}\big\{\big(x(t)^{\alpha_1} y(t)^{\alpha_2} z(t)^{\alpha_3}\big)_{t=1,\ldots,L}\big\} \quad \text{is equal to} \quad \mathrm{span}\big\{\big(k(t)^{\alpha_1} \ell(t)^{\alpha_2} m(t)^{\alpha_3}\big)_{t=1,\ldots,L}\big\}.$$
Therefore also all the vectors $\big(x(t)^{\alpha_1} y(t)^{\alpha_2} z(t)^{\alpha_3}\big)_{t=1,\ldots,L}$ are linearly independent. This proves Claim 2. $\square$

4.2. An explicit basis over the center of $U(\mathfrak{sl}_3)$. It is well known that the center of $U(\mathfrak{sl}_3)$ is generated by two algebraically independent elements $Z_1$ and $Z_2$, which are also called Casimir operators. An algorithm for computing $Z_1$ and $Z_2$ can be found in [9], while explicit expressions are in [4, p. 984]. We have
$$Z_1 = H_1^2 + H_1 H_2 + H_2^2 + 3 Y_1 X_1 + 3 Y_2 X_2 + 3 Y_3 X_3 + 3 H_1 + 3 H_2$$
and
$$Z_2 = 3 Y_1 Y_2 X_3 + 3 Y_3 X_1 X_2 + \tfrac{1}{9}(H_1 + 2H_2)(6 + 2H_1 + H_2)(3 - H_1 + H_2) + Y_1 X_1 (H_1 + 2H_2) - Y_2 X_2 (6 + 2H_1 + H_2) + Y_3 X_3 (3 - H_1 + H_2).$$
(Our choice of $Z_1$ and $Z_2$ is equal to $h$ and $-k-h$ in the notation of [4, p. 984].) By Schur's lemma [11] an irreducible representation maps a central element into a scalar multiple of the identity. Therefore it is enough to calculate $\pi_{m_1,m_2}(Z_i)\, v_{0,0,0}$ to determine this scalar. From Lemma 5 we get that
(51) $d_1(m_1, m_2) := \pi_{m_1,m_2}(Z_1) = m_1^2 + m_1 m_2 + m_2^2 + 3 m_1 + 3 m_2,$
$\phantom{(51)}$ $d_2(m_1, m_2) := \pi_{m_1,m_2}(Z_2) = \tfrac{1}{9}(m_1 + 2m_2)(6 + 2m_1 + m_2)(3 - m_1 + m_2).$

Proposition 5.
Monomials
(52) $Y_1^{j_1} Y_2^{j_2} Y_3^{j_3} X_1^{\ell_1} X_2^{\ell_2} X_3^{\ell_3} H_1^{r_1} H_2^{r_2},$
where the powers $j_1, j_2, j_3, \ell_1, \ell_2, \ell_3, r_1, r_2 \in \mathbb{N}$ are such that $j_2 \ell_2 = 0$ and $r_2 \leq 2$, form a basis of $U(\mathfrak{sl}_3)$ over its center.

Proof. Linear independence of the monomials (52) follows from Proposition 4. It remains to prove that they span $U(\mathfrak{sl}_3)$ over its center.

Let $U(\mathfrak{sl}_3)_k$ denote the $\mathbb{C}$-linear span of the monomials of the form
(53) $Y_1^{\ell_1} X_1^{j_1} H_1^{r_1} Y_2^{\ell_2} X_2^{j_2} Y_3^{\ell_3} X_3^{j_3} H_2^{r_2}$
of degree at most $k$, where the degree equals the sum of the exponents. We write $\deg(m)$ for the degree of a monomial of the form (53). We define the set
$$M_k := \{m \text{ of the form (53)} : \deg(m) \leq k,\; j_2 \ell_2 = 0 \text{ and } r_2 \leq 2\}.$$
We will prove that
(54) $U(\mathfrak{sl}_3)_k = \mathrm{span}_Z(M_k),$
where $Z$ stands for the center of $U(\mathfrak{sl}_3)$. It suffices to prove that every monomial of the form (53) belongs to $\mathrm{span}_Z(M_k)$. Let us order the monomials (53) with respect to the degree reverse lexicographic ordering. Note that the largest monomial in the definition of $Z_1$ is $3 Y_2 X_2$ and that the largest monomial in the definition of $Z_2 + \tfrac{1}{3} Z_1 (6 + 2H_1 + H_2)$ is a multiple of $H_2^3$. If we express $Y_2 X_2$ by $Z_1$ and the other monomials, and similarly $H_2^3$ by $Z_2 + \tfrac{1}{3} Z_1 (6 + 2H_1 + H_2)$ and the other monomials, we get two substitution rules. (Note that the first substitution rule decreases $\min\{j_2, \ell_2\}$ but it can increase $r_2$, and that the second substitution rule decreases $r_2$ but can increase $\min\{j_2, \ell_2\}$.) If we start with a monomial with either $j_2 \ell_2 > 0$ or $r_2 \geq 3$ and apply the substitution rules repeatedly, we end up with a $Z$-linear combination of monomials with $j_2 \ell_2 = 0$ and $r_2 \leq$
2. This proves (54).

By the PBW theorem we know that every element of $U(\mathfrak{sl}_3)$ belongs to $U(\mathfrak{sl}_3)_k$ for some $k \in \mathbb{N}$. We define the set
$$\widetilde{M}_k := \{m \text{ of the form (52)} : \deg(m) \leq k,\; j_2 \ell_2 = 0 \text{ and } r_2 \leq 2\},$$
where $\deg(m)$ is the sum of the exponents in $m$. To finish the proof of the proposition it remains to prove that
(55) $\mathrm{span}_Z M_k = \mathrm{span}_Z \widetilde{M}_k.$
We prove (55) by induction on $k$. The base of induction $k = 1$ is clear. Assume that (55) holds for all $k \leq n$ for some $n \in \mathbb{N}$. By the relations in $U(\mathfrak{sl}_3)$ we have that
$$Y_1^{\ell_1} X_1^{j_1} H_1^{r_1} Y_2^{\ell_2} X_2^{j_2} Y_3^{\ell_3} X_3^{j_3} H_2^{r_2} = Y_1^{\ell_1} Y_2^{\ell_2} Y_3^{\ell_3} X_1^{j_1} X_2^{j_2} X_3^{j_3} H_1^{r_1} H_2^{r_2} + m',$$
where $m'$ is a $Z$-linear combination of monomials of the form (53) of degree at most $\sum_{i=1}^{3} (\ell_i + j_i) + r_1 + r_2 - 1 \leq n - 1$. By (54) we have $m' \in \mathrm{span}_Z M_{n-1}$ and by the induction hypothesis we have $m' \in \mathrm{span}_Z \widetilde{M}_{n-1}$. This proves (55). $\square$

Lemma 6. (1) Every polynomial $g \in \mathbb{C}[x, y]$ which satisfies $g(m_1, m_2) = 0$ for all sufficiently large integers $m_1, m_2$ is equal to zero.
(2) Every polynomial $f \in \mathbb{C}[x, y]$ which satisfies
(56) $f(d_1(m_1, m_2), d_2(m_1, m_2)) = 0$
for all sufficiently large integers $m_1, m_2$ is equal to zero.
(3) All vectors $u_1, \ldots, u_k \in \mathbb{C}[x, y]^n$ such that the vectors
(57) $u_i(d_1(m_1, m_2), d_2(m_1, m_2)), \qquad i = 1, \ldots, k,$
are linearly dependent over $\mathbb{C}$ for all sufficiently large integers $m_1, m_2$, are linearly dependent over $\mathbb{C}[x, y]$.

Proof. Part (1) is well known and easy to prove.

To prove part (2), assume that (56) is true for all sufficiently large integers $m_1, m_2$. By part (1), it follows that (56) is true for all $m_1, m_2 \in \mathbb{C}$. Let us compute the partial derivatives of (56) with respect to $m_1$ and $m_2$ by using the chain rule. We get that
(58) $\Big[\dfrac{\partial f}{\partial x}(d_1, d_2),\; \dfrac{\partial f}{\partial y}(d_1, d_2)\Big] \begin{bmatrix} \frac{\partial d_1}{\partial m_1} & \frac{\partial d_1}{\partial m_2} \\[2pt] \frac{\partial d_2}{\partial m_1} & \frac{\partial d_2}{\partial m_2} \end{bmatrix} = [0, 0].$
Since the determinant
(59) $\det \begin{bmatrix} \frac{\partial d_1}{\partial m_1} & \frac{\partial d_1}{\partial m_2} \\[2pt] \frac{\partial d_2}{\partial m_1} & \frac{\partial d_2}{\partial m_2} \end{bmatrix} = 3(1 + m_1)(1 + m_2)(2 + m_1 + m_2)$
is nonzero for all positive reals $m_1$ and $m_2$, it follows that
(60) $\dfrac{\partial f}{\partial x}(d_1(m_1, m_2), d_2(m_1, m_2)) = 0,$
(61) $\dfrac{\partial f}{\partial y}(d_1(m_1, m_2), d_2(m_1, m_2)) = 0$
for all positive reals $m_1$ and $m_2$, and thus for all complex $m_1, m_2$ by part (1). By induction we show that
(62) $\dfrac{\partial^{i+j} f}{\partial x^i \partial y^j}(d_1(m_1, m_2), d_2(m_1, m_2)) = 0$
for all $i, j \in \mathbb{N}$ and all $m_1, m_2 \in \mathbb{C}$. It follows that $f \equiv 0$.

To prove part (3), let $U = [u_1, \ldots, u_k]$. By assumption, each maximal subdeterminant of $U(d_1(m_1, m_2), d_2(m_1, m_2))$ is zero for all sufficiently large integers $m_1$ and $m_2$.
By part (2), it follows that each maximal subdeterminant of $U$ is identically zero. Thus $u_1, \ldots, u_k$ are linearly dependent over $\mathbb{C}(x, y)$. By clearing denominators, we see that they are also linearly dependent over $\mathbb{C}[x, y]$. $\square$

We are now ready to prove Theorem 3.
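As an aside, the nonvanishing determinant of the Jacobian of $(d_1, d_2)$ can be double-checked with exact arithmetic from the formulas in (51); both the formulas and the constant $3$ in the factorization below come from this editor's reconstruction, so this Python sketch is a consistency check, not part of the paper.

```python
from fractions import Fraction as F

# d1, d2 as in (51) (reconstructed), with exact partial derivatives.
def grad_d1(m1, m2):
    # d1 = m1^2 + m1 m2 + m2^2 + 3 m1 + 3 m2
    return (2 * m1 + m2 + 3, m1 + 2 * m2 + 3)

def grad_d2(m1, m2):
    # d2 = (1/9) P Q R with the three linear factors below
    P, Q, R = m1 + 2 * m2, 6 + 2 * m1 + m2, 3 - m1 + m2
    d_m1 = F(Q * R * 1 + P * R * 2 + P * Q * (-1), 9)  # product rule
    d_m2 = F(Q * R * 2 + P * R * 1 + P * Q * 1, 9)
    return (d_m1, d_m2)

def jacobian(m1, m2):
    a, b = grad_d1(m1, m2)
    c, d = grad_d2(m1, m2)
    return a * d - b * c

# check the factored form of the determinant on a grid of points
for m1 in range(6):
    for m2 in range(6):
        assert jacobian(m1, m2) == 3 * (1 + m1) * (1 + m2) * (2 + m1 + m2)
```

Since both sides are polynomials of degree three, agreement on a $6 \times 6$ grid is strong evidence for the identity (and it is strictly positive for positive $m_1, m_2$, as the proof of Lemma 6 requires).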
Proof of Theorem 3.
To prove the implication (1) ⇒ (2), one has to show that there exists $d \in \mathbb{N}$ such that for every $m_1, m_2 \in \mathbb{N}$ with $m_1 \geq d$, $m_2 \geq d$, the elements $\pi_{m_1,m_2}(p_1), \ldots, \pi_{m_1,m_2}(p_k)$ are linearly dependent, where $\pi_{m_1,m_2} \in I_d$. Dividing the equation $\sum_{i=1}^{k} z_i p_i = 0$, where $z_i \in \mathbb{C}[Z_1, Z_2]$ for each $i$, by the common factor of $z_1, \ldots, z_k$, we may assume that $z_1, \ldots, z_k$ do not share a common nontrivial factor. Applying $\pi_{m_1,m_2}$ to the equation $\sum_{i=1}^{k} z_i p_i = 0$, one gets
$$0 = \sum_{i=1}^{k} z_i(d_1(m_1, m_2), d_2(m_1, m_2))\, \pi_{m_1,m_2}(p_i).$$
It suffices to prove that there exists $d \in \mathbb{N}$ such that for every $m_1 \geq d$, $m_2 \geq d$, at least one of the coefficients $z_i(d_1(m_1, m_2), d_2(m_1, m_2))$ is nonzero. Let us assume on the contrary that such $d$ does not exist. Then there exists a sequence $(m_1^{(n)}, m_2^{(n)}) \in \mathbb{N}^2$, $n \in \mathbb{N}$, satisfying $\max\{m_1^{(n)}, m_2^{(n)}\} < \min\{m_1^{(n+1)}, m_2^{(n+1)}\}$ for every $n \in \mathbb{N}$ and $z_i(d_1(m_1^{(n)}, m_2^{(n)}), d_2(m_1^{(n)}, m_2^{(n)})) = 0$ for each $i$ and every $n \in \mathbb{N}$. By the form of $d_1(m_1, m_2)$, the sequence $d_1(m_1^{(n)}, m_2^{(n)})$ is strictly increasing, and hence the polynomials $z_1, \ldots, z_k$ share infinitely many common zeroes. This implies by Bezout's theorem (see [7]) that they share a nontrivial factor, leading to a contradiction.

The implication (2) ⇒ (3) is trivial.

The proof of the implication (3) ⇒ (1) is almost the same as the proof of the same implication of Theorem 2. Namely, the form of the monomials $f_i$ is given by Proposition 5, the coefficients $t_i$ belong to $\mathbb{C}[Z_1, Z_2]$, while Proposition 2 and Lemma 3 are replaced by Proposition 4 and part (3) of Lemma 6, respectively. $\square$

References

[1] C. Ambrozie, B. Kuzma, V. Müller, An upper bound on the dimension of the reflexivity closure, Proc. Amer. Math. Soc. 138 (2010) 1721–1731. https://doi.org/10.1090/S0002-9939-09-10184-3.
[2] M. Brešar, I. Klep, A local-global principle for linear dependence of noncommutative polynomials, Israel J. Math. 193 (2013) 71–82.
https://doi.org/10.1007/s11856-012-0066-4.
[3] M. Brešar, P. Šemrl, On locally linearly dependent operators and derivations, Trans. Amer. Math. Soc. 351 (1999) 1257–1275. https://doi.org/10.1090/S0002-9947-99-02370-3.
[4] S. Catoiu, Prime ideals of the enveloping algebra $U(\mathfrak{sl}_3)$, Communications in Algebra 28:2 (2000) 981–1027. https://doi.org/10.1080/00927870008826874.
[5] J. Cimprič, Local linear dependence of linear partial differential operators, Integr. Equ. Oper. Theory (2018) 90:38. https://doi.org/10.1007/s00020-018-2466-2.
[6] J. Dixmier, Enveloping algebras, Graduate Studies in Mathematics 11, Amer. Math. Soc., Providence, 1996.
[7] W. Fulton, Algebraic curves: an introduction to algebraic geometry, Addison-Wesley Publishing Company, Redwood City, CA, 1989.
[8] W. Fulton, J. Harris, Representation theory, a first course, Springer, New York, 1991.
[9] M.A. Gauger, Some remarks on the center of the universal enveloping algebra of a classical simple Lie algebra, Pacific J. Math. 62 (1976) 93–97. https://doi.org/10.2140/pjm.1976.62.93.
[10] B. C. Hall, Lie groups, Lie algebras, and representations: an elementary introduction, Graduate Texts in Mathematics 222, Springer, New York, 2015.
[11] J. E. Humphreys, Introduction to Lie algebras and representation theory, Springer, New York, 1994.
[12] K. Schmüdgen, Unbounded operator algebras and representation theory, Operator Theory: Advances and Applications 37, Birkhäuser Verlag, Basel, 1989.
Jaka Cimprič, Faculty of Mathematics and Physics, University of Ljubljana, Slovenia
E-mail address : [email protected] Aljaˇz Zalar, Faculty of Computer and Information Science, University of Ljubljana,Slovenia
E-mail address: