Primitive ideals of noetherian generalized down-up algebras
Iwan Praton
Abstract
We classify the primitive ideals of noetherian generalized down-up algebras.
Keywords: down-up algebra; generalized down-up algebra; enveloping algebra; primitive ideal.
We can construct U(sl₂), the enveloping algebra of sl₂, as the algebra generated by three elements h, x, y, subject to the relations

hx − xh = x,  yh − hy = y,  xy − yx = h.

(The first two relations are somewhat nontraditional.) There are various ways of generalizing this construction. One could increase the number of generators; this could lead to enveloping algebras of other Lie algebras. Another path would be to stay with three generators but modify the defining relations. We would then get an algebra with a reasonably small dimension, making the algebra computationally tractable.

One such example is Smith's algebras similar to U(sl₂); see [S]. These algebras are generated by h, x, y subject to

hx − xh = x,  yh − hy = y,  xy − yx = φ(h),

where φ is a polynomial. Thus h in this case is no longer the commutator of x and y, but a polynomial root of said commutator. The resulting algebras share many properties with U(sl₂).

Another example, going in a different direction, is the down-up algebras of Benkart and Roby [BR]. These can be defined as the algebras generated by h, u, d subject to the relations

hu − ruh = γu,  dh − rhd = γd,  du − sud = h,

where r, s, and γ are constants. When γ ≠ 0, these algebras have defining relations similar to those of U(sl₂), except that the commutators have been modified. Down-up algebras, especially when rsγ ≠ 0, also share many properties with U(sl₂).

Other generalizations of U(sl₂) exist, some of which can be considered as a mixture of Smith's algebras and down-up algebras. For example, Rueda has studied algebras that are generated by h, x, y subject to

hx − xh = x,  yh − hy = y,  xy − syx = φ(h),

where s is a constant and φ is a polynomial [R].
For another example, consider the conformal sl₂ enveloping algebras of Le Bruyn [LB], which are generated by h, u, d subject to

hu − ruh = u,  dh − rhd = d,  du − sud = ah² + h,

where r, s, a are constants with rs ≠ 0.

In 2004 Cassidy and Shelton introduced the ultimate mixture of Smith's algebras and down-up algebras [CS]. These algebras are generated by h, u, d subject to

hu − ruh = γu,  dh − rhd = γd,  du − sud = φ(h),

where r, s, γ are constants and φ is a polynomial. It is these algebras that are the subject of this paper.

More specifically, we classify the primitive ideals of noetherian generalized down-up algebras, hence completing the project begun in [P2] and [P3]. As before, we try to provide a reasonably explicit list of generators for these primitive ideals. Most of the necessary techniques are straightforward generalizations of [P1]. In particular, most of the time we will be using explicit computation: part of the charm of studying down-up algebras is that elementary computations actually lead to useful results; it is not necessary to rely on heavy theoretical machinery.

1.2 Notations

As usual, we denote the complex numbers by C; C× will denote the nonzero complex numbers. If z ∈ C×, it is convenient to define o(z) to be the order of z in the multiplicative group C×. Thus z is a primitive nth root of unity if and only if o(z) = n. If z is not a root of unity, we set o(z) = ∞. By convention, when we write o(z) = n or o(z) = m, we take n or m to be finite, i.e., z is a root of unity.

We also need to use an ordering on the degrees of two-variable polynomials, so we describe here the ordering that we will use. The monomial x^i y^j has degree (i, j); the degrees are ordered "alphabetically by last name", i.e., (i, j) > (i′, j′) iff j > j′, or j = j′ and i > i′. The degree of a polynomial is the highest degree of its constituent monomials.

We write ⟨a, b, …, z⟩ to denote the (two-sided) ideal generated by the elements a, b, …, z.
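The "alphabetically by last name" comparison amounts to sorting degree pairs on the reversed tuple. A minimal sketch (the sample degrees below are arbitrary illustrations, not data from the paper):

```python
# "Alphabetically by last name": compare degrees (i, j) by j first, then by i.
def degree_key(degree):
    i, j = degree
    return (j, i)

# Sample monomial degrees (i, j) for x**i * y**j, listed in no particular order.
degrees = [(2, 1), (5, 0), (0, 3), (4, 1)]
degrees.sort(key=degree_key)
assert degrees == [(5, 0), (2, 1), (4, 1), (0, 3)]

# The degree of a polynomial is the highest degree of its constituent monomials.
assert max(degrees, key=degree_key) == (0, 3)
```

Sorting on the reversed pair is exactly the stated rule: (i, j) > (i′, j′) iff j > j′, or j = j′ and i > i′.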
We now state a careful definition of our object of study. A generalized down-up algebra is an algebra over C parametrized by three complex numbers and a complex polynomial. Specifically, the algebra L(φ, r, s, γ), where r, s, γ ∈ C and φ ∈ C[x], is the C-algebra generated by three generators u, d, and h, subject to the relations

hu − ruh = γu,  dh − rhd = γd,  du − sud = φ(h).

(We follow the convention in [P3], which is slightly different from [CS].) We often just write L for L(φ, r, s, γ) when the parameter values are implicitly known. The algebra L is noetherian if and only if rs ≠ 0. Primitive ideals in the non-noetherian case were described in [P3], so in this paper we always assume that rs ≠ 0. This assumption implies that L is a domain [CS, Proposition 2.5].

In order to do computations, we need a basis for L. The standard basis consists of the monomials {u^i h^j d^k : i, j, k ≥ 0}. Since we are assuming that L is a domain, we can arrange the u's, d's, and h's in any order, i.e., the monomials {u^i d^k h^j}, {d^k h^j u^i}, {d^k u^i h^j}, {h^j u^i d^k}, and {h^j d^k u^i} (where i, j, k ≥ 0) are all bases of L [CS, Theorem 2.1].

1.5 Grading

There is a useful grading on L that results from declaring that u has degree +1, d has degree −1, and h has degree 0. (Thus the monomial u^i h^j d^k has degree i − k.) Clearly L₀, the set of elements of degree 0, is itself an algebra. It turns out to be a polynomial algebra on the two variables h and ud [CS, Proposition 4.1]. If i > 0, then any element of L_i can be written (uniquely) as u^i f(h, ud), where f ∈ C[x, y]. Similarly, if k < 0, then any element of L_k can be written (uniquely) as g(h, ud)d^{−k}, where g ∈ C[x, y].

As in the commutative case, we say that an element of L_i is homogeneous of degree i. Any x ∈ L can be written as a sum of homogeneous elements: x = Σ_{i∈Z} x_i, where x_i ∈ L_i and only finitely many of the x_i are nonzero. We define the length ℓ(x) of x to be the number of nonzero x_i: ℓ(x) = #{i ∈ Z : x_i ≠ 0}. (Thus an element of length 1 is a nonzero homogeneous element.)

Different values of the parameters φ, r, s, and γ can lead to isomorphic algebras. For example, L(φ, r, s, γ) and L(ψ, r, s, cγ) are isomorphic, where c ≠ 0 and ψ(x) = φ(cx). (The isomorphism sends u to u′, d to d′, and h to ch′.) Thus we can assume that either γ = 0 or γ = 1 without loss of generality. Similarly, there is an isomorphism between L(φ, r, s, γ) and L(cφ, r, s, γ) via u ↦ cu′, d ↦ d′, and h ↦ h′. Thus we can assume that the polynomial φ is either monic or zero. We will, however, continue to use γ and φ without additional assumptions, since assuming γ = 1 or φ monic does not significantly ease our workload. But there is an isomorphism between generalized down-up algebras that we will exploit heavily; see [CL, Proposition 1.7].

Lemma 1.1.
Suppose r ≠ 1. Then L(φ, r, s, γ) is isomorphic to L(ψ, r, s, 0), where ψ(x) = φ(x − γ/(r − 1)).

This lemma clarifies the role of γ: in most cases, γ is not necessary! The information carried by γ can be transferred into the polynomial φ. Note that this phenomenon is not visible in the original formulation of down-up algebras, since in that setting we do not have flexibility in the choice of φ. We will thus treat the cases r = 1 and r ≠ 1 differently. When r = 1, we will consider both γ = 0 and γ ≠ 0. But when r ≠ 1, we will assume that γ = 0; this cuts down the number of cases we have to consider by almost half.

1.7 Conformal algebras

Here is an example of the usefulness of Lemma 1.1. Recall that the algebra L(φ, r, s, γ) is called conformal if there exists a polynomial ψ such that sψ(x) − ψ(rx + γ) = φ(x). Conformal algebras are nice because we can then define H = ud + ψ(h); we shall see that having such an element is quite useful. For one, any polynomial in ud and h can be written as a polynomial in H and h; the commutation relations involving H are more convenient than those involving ud. Specifically, it is straightforward to show that Hu = suH and dH = sHd.

Determining exactly when an algebra is conformal is not a complete triviality, mostly due to the presence of γ, but Lemma 1.1 allows us to ignore γ most of the time, so we can determine quickly which of our algebras are conformal [CL, Lemma 1.6, Proposition 1.8].

Lemma 1.2.
Suppose φ(x) = Σ_{i=0}^n a_i x^i. If s ≠ r^j for all 0 ≤ j ≤ n, then L(φ, r, s, 0) is conformal. If s = r^j for some 0 ≤ j ≤ n and a_j = 0, then L(φ, r, s, 0) is also conformal. Otherwise L(φ, r, s, 0) is not conformal. Additionally, when γ ≠ 0, L(φ, 1, s, γ) is conformal.

Since conformal algebras behave somewhat differently from nonconformal ones, we will treat these two cases separately. It turns out that the nonconformal case, even when γ ≠ 0, has a similar flavor to the situation when γ = 0.

Finally, one of our main weapons is the following result, well known to representation theorists as Dixmier's version of Schur's lemma [D, 2.6.5].
Lemma 1.3.
Let A be a C-algebra and M a simple A-module whose dimension is countable. If ξ ∈ Hom_C(M, M) commutes with the action of A on M, then ξ acts as a scalar on M.

In particular, the center of A acts as scalar operators on M. We will use this result so often that we will not mention it explicitly.

2 Weight modules

In this section we describe weight modules; for our purposes, we are especially interested in universal weight modules and finite-dimensional modules (which are instances of weight modules). When simple, these modules provide almost all of our primitive ideals. In some cases, we do have to consider modules that are not weight modules, but such cases are exceptional, and we will deal with them when they arise.

Finite-dimensional simple modules have been classified in [CS, Section 4], so all we have to do here is figure out their annihilators.
2.1 Weight modules

We first recall the definition of weight modules. If M is an L-module, then v ∈ M is said to have weight (λ, β) ∈ C² if h·v = λv and (ud)·v = βv. The weight space M_(λ,β) is the linear space consisting of all elements of M with weight (λ, β). The module M is a weight module if it is the (direct) sum of its weight spaces.

There is a nice relation between weight modules and the grading of L described in section 1.5. We need to recall the (invertible) operation on weights given by Φ : (λ, β) ↦ (rλ + γ, sβ + φ(λ)). (See [CS, section 4].) Then a simple calculation shows that if M is a weight module, then L_i M_(λ,β) ⊆ M_{Φ^i(λ,β)} (i ∈ Z), i.e., elements of degree i transform vectors of weight (λ, β) into vectors whose weight is i steps away.

There exist universal weight modules, which we now describe. Let (λ, β) be an arbitrary weight, and define (λ_i, β_i) = Φ^i(λ, β) (i ∈ Z) as above. Then the universal weight module W(λ, β) is the module with basis {v_i : i ∈ Z} and

hv_i = λ_i v_i, i ∈ Z;
uv_i = v_{i+1}, dv_{−i} = v_{−i−1}, i ≥ 0;
uv_{−i} = β_{−i+1} v_{−i+1}, dv_i = β_i v_{i−1}, i > 0.

Any weight module is a quotient of a universal weight module. (It is possible to describe universal weight modules more succinctly as a tensor product, but since we want to do explicit calculations on these modules, it is better to have explicit formulas for the action of L.)

We will pay a lot of attention to whether W(λ, β) is simple. A straightforward result along these lines is as follows. If the weights (λ_i, β_i) are all distinct, and β_i ≠ 0 for all i ∈ Z, then W(λ, β) is simple.

We now turn to finite-dimensional modules. All simple finite-dimensional modules are weight modules, and hence quotients of the universal weight modules. We need, however, to have more detailed information.
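The explicit formulas for the action on W(λ, β) are easy to machine-check. The sketch below uses illustrative parameter values (r = 2, s = 3, γ = 1, φ(x) = x² + 1, and starting weight (1, 1); these are arbitrary choices, not values singled out by the paper) and verifies the three defining relations of L on the basis vectors v_i:

```python
from fractions import Fraction

# Illustrative parameters (not from the paper): r = 2, s = 3, gamma = 1, phi(x) = x**2 + 1.
r, s, gamma = Fraction(2), Fraction(3), Fraction(1)
phi = lambda t: t * t + 1

# Weights (lambda_i, beta_i) = Phi^i(lambda, beta) for the starting weight (1, 1).
N = 6
lam, beta = {0: Fraction(1)}, {0: Fraction(1)}
for i in range(N):                 # forward orbit: Phi(l, b) = (r*l + gamma, s*b + phi(l))
    lam[i + 1] = r * lam[i] + gamma
    beta[i + 1] = s * beta[i] + phi(lam[i])
for i in range(0, -N, -1):         # backward orbit: invert Phi
    lam[i - 1] = (lam[i] - gamma) / r
    beta[i - 1] = (beta[i] - phi(lam[i - 1])) / s

# Action of u, d, h on W(lambda, beta); a vector is a dict {i: coefficient of v_i}.
def u(v): return {i + 1: (c if i >= 0 else beta[i + 1] * c) for i, c in v.items()}
def d(v): return {i - 1: (beta[i] * c if i > 0 else c) for i, c in v.items()}
def h(v): return {i: lam[i] * c for i, c in v.items()}

def add(v, w): return {i: v.get(i, 0) + w.get(i, 0) for i in set(v) | set(w)}
def scale(a, v): return {i: a * c for i, c in v.items()}
def eq(v, w): return all(v.get(i, 0) == w.get(i, 0) for i in set(v) | set(w))

# Check hu - r*uh = gamma*u, dh - r*hd = gamma*d, du - s*ud = phi(h) on each v_i.
for i in range(-4, 5):
    v = {i: Fraction(1)}
    assert eq(add(h(u(v)), scale(-r, u(h(v)))), scale(gamma, u(v)))
    assert eq(add(d(h(v)), scale(-r, h(d(v)))), scale(gamma, d(v)))
    assert eq(add(d(u(v)), scale(-s, u(d(v)))), {i: phi(lam[i])})
```

The third relation holds precisely because β_{i+1} = sβ_i + φ(λ_i), i.e., because the weights lie on a single Φ-orbit.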
It turns out that there are two types of finite-dimensional simple modules: those with highest (and lowest) weights, and those that are cyclic.

The highest weight simple modules can be described as follows. Start with a weight (λ, 0), and write (λ_i, β_i) for Φ^i(λ, 0). (Thus λ₀ = λ and β₀ = 0 in this notation.) Suppose the weights (λ_i, β_i) are all distinct, β_{n+1} = 0, and β_i ≠ 0 for 1 ≤ i ≤ n. Then there is a simple module of dimension n + 1, say with basis {v₀, v₁, …, v_n}. Each v_i is a vector with weight (λ_i, β_i). The action of L on this module is

hv_i = λ_i v_i, 0 ≤ i ≤ n;
dv₀ = 0, dv_i = β_i v_{i−1}, 1 ≤ i ≤ n;
uv_n = 0, uv_i = v_{i+1}, 0 ≤ i ≤ n − 1.

We denote this simple module by F_hw(λ), similar to the notation in [CS].

There are two types of cyclic simple modules, denoted by F_c(ζ, ρ) and F_c′(ζ, ρ) in [CS, Definition 4.6]. We describe F_c(ζ, ρ) first. Start with a weight (λ₀, β₀), and suppose its orbit under Φ is finite; say the orbit has m distinct values (λ_i, β_i), 0 ≤ i ≤ m − 1. Assume that (λ_i, β_i) ≠ (0, 0) for all i. Denote this set of weights by ζ. Let ρ be a nonzero complex number. Then there is a simple module F_c(ζ, ρ) of dimension m with basis, say, {v₀, v₁, …, v_{m−1}}, where the basis vectors are indexed by the cyclic group Z/mZ. Thus, for example, v_{m+1} is the same as v₁. Each v_i is a vector of weight (λ_i, β_i). The action of L on F_c(ζ, ρ) is

hv_i = λ_i v_i,  uv_i = ρv_{i+1},  dv_i = β_i v_{i−1}.

The modules F_c′(ζ, ρ) are similar. The parameters and the weights are the same; the only difference with F_c(ζ, ρ) is in the action of L:

hv_i = λ_i v_i,  uv_i = β_{i+1} v_{i+1},  dv_i = ρv_{i−1}.

There is usually a lot of overlap between F_c(ζ, ρ) and F_c′(ζ, ρ′) as ρ and ρ′ vary over C×, but we need to consider both types of cyclic modules since F_c(ζ, ρ) and F_c′(ζ, ρ′) are nonisomorphic when ∏_{i∈Z/mZ} β_i = 0.

We now figure out the annihilators of these simple modules. First we look at F_hw(λ). In this case, let J_λ = {f ∈ L₀ = C[h, ud] : f(λ_i, β_i) = 0, 0 ≤ i ≤ n}. Then J_λ is a classical polynomial ideal; it is finitely generated, and for specific values of (λ_i, β_i) we can figure out a list of its generators.

Proposition 2.1. The annihilator of F_hw(λ) is ⟨u^{n+1}, d^{n+1}, J_λ⟩.

Proof. Write I for the ideal ⟨u^{n+1}, d^{n+1}, J_λ⟩. It is straightforward to verify that I annihilates F_hw(λ); we need to show the reverse inclusion Ann F_hw(λ) ⊆ I. If n = 0, then F_hw(λ) is one-dimensional; u and d act as the zero operator while h acts as the scalar λ. The ideal J_λ is generated by ud and h − λ, so in this case I = ⟨u, d, h − λ⟩. It is clear that I is the annihilator of F_hw(λ). Thus we assume that n ≥ 1.

Say y is an element of L that annihilates F_hw(λ). Since L_i v_j ⊆ Cv_{j+i}, it does no harm to assume that y is homogeneous, say of degree −k < 0. (If y is homogeneous of positive degree, the proof is similar. If k ≥ n + 1, then y ∈ ⟨d^{n+1}⟩ ⊆ I already, so we may take k ≤ n.)

We'll utilize the element x = u + d^n. Note that xv_i = v_{i+1} for 0 ≤ i ≤ n − 1, but xv_n = β₁ ··· β_n v₀; thus x^{n+1} v_i = ηv_i, where η = ∏_{i=1}^n β_i is nonzero. So x^{n+1} acts as a nonzero scalar on F_hw(λ).

Note also that x² = u² + ud^n + d^n u + d^{2n} ∈ u² + Σ_{j≤1−n} L_j + I; by induction we can show that x^i ∈ u^i + Σ_{j≤i−1−n} L_j + I for 1 ≤ i ≤ n + 1.

Recall that y ∈ d^k L₀. Thus yx^k ∈ (d^k L₀)(u^k + Σ_{j≤k−1−n} L_j + I) ⊆ L₀ + I, since every homogeneous element of degree at most −(n + 1) lies in I. In other words, yx^k ∈ f(h, ud) + I, where f is a polynomial in two variables. But yx^k annihilates everything in F_hw(λ), so f(h, ud) also annihilates everything, and hence f(h, ud) must be in J_λ. So yx^k is actually in I.

Thus, modulo I, we have 0 ≡ yx^k ≡ yx^k x^{n+1−k} = yx^{n+1} ≡ ηy. Since η ≠ 0, we conclude that y ≡ 0, which is what we want to show.

We now take a look at the annihilators of F_c(ζ, ρ) and F_c′(ζ, ρ). We again utilize the ideal J_ζ = {f(h, ud) ∈ L₀ : f(λ_i, β_i) = 0, i ∈ Z/mZ}. Write η = ∏_{i=0}^{m−1} β_i.

Proposition 2.2.
The annihilator of F_c(ζ, ρ) is ⟨u^m − ρ^m, d^m − η, J_ζ⟩, and the annihilator of F_c′(ζ, ρ) is ⟨d^m − ρ^m, u^m − η, J_ζ⟩.

Proof. The proof is similar to the previous proof. Write I = ⟨u^m − ρ^m, d^m − η, J_ζ⟩. As before, it is clear that I annihilates F_c(ζ, ρ); we want to show that Ann F_c(ζ, ρ) ⊆ I.

Write L′_k = Σ_{i ≡ k mod m} L_i. Then L′_k v_i ⊆ Cv_{i+k}. Suppose now that y annihilates F_c(ζ, ρ). It does no harm to assume that y ∈ L′_k for some k ∈ Z. Since u^{am+k} ∈ ρ^{am}u^k + I and d^{bm+(m−k)} ∈ η^b d^{m−k} + I, we can even assume that y ∈ u^k L₀ + d^{m−k} L₀, where 0 ≤ k ≤ m − 1. Then yu^{m−k} also annihilates F_c(ζ, ρ), and yu^{m−k} ∈ L_m + L₀ ⊆ L₀ + I. Therefore we can write yu^{m−k} ∈ f(h, ud) + I, where f is a polynomial of two variables. Since yu^{m−k} annihilates everything, we conclude that f ∈ J_ζ, i.e., yu^{m−k} ∈ I.

Therefore yu^{m−k}u^k = yu^m is also in I, and so modulo I we have 0 ≡ yu^m ≡ ρ^m y. Because ρ ≠ 0, we conclude that y ≡ 0, as required. The proof for F_c′(ζ, ρ) is similar.

It is also useful to be able to tell whether a simple module is finite-dimensional from partial information about its annihilator.
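Before turning to that criterion, here is a quick concrete check on the cyclic modules: the action formulas for F_c(ζ, ρ) immediately force u^m and d^m to act as the scalars ρ^m and η = ∏β_i, matching two of the generators in Proposition 2.2. A sketch with arbitrary illustrative values of m, ρ, and the β_i (not taken from the paper):

```python
from fractions import Fraction
from functools import reduce

m, rho = 4, Fraction(2)
beta = [Fraction(3), Fraction(1), Fraction(-2), Fraction(5)]   # illustrative beta_i, i in Z/4Z
eta = reduce(lambda a, b: a * b, beta)

# A vector in F_c is a dict {i: coefficient of v_i}, with i in Z/mZ.
def u(v): return {(i + 1) % m: rho * c for i, c in v.items()}       # u v_i = rho * v_{i+1}
def d(v): return {(i - 1) % m: beta[i] * c for i, c in v.items()}   # d v_i = beta_i * v_{i-1}

def power(f, k, v):
    for _ in range(k):
        v = f(v)
    return v

for i in range(m):
    v = {i: Fraction(1)}
    assert power(u, m, v) == {i: rho**m}   # u^m acts as the scalar rho^m
    assert power(d, m, v) == {i: eta}      # d^m acts as the scalar eta = prod(beta_i)
```

The m-fold cyclic shift returns each basis vector to itself, picking up the factor ρ^m (respectively, the product of all the β_i) along the way.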
Lemma 2.3.
Let M be a simple module such that (i) either d^m or u^m (m ≥ 1) acts as a scalar on M; (ii) M contains a weight vector. Then M is finite-dimensional.

Proof. We suppose that d^m acts as the scalar ρ^m. (The proof where u^m acts as a scalar is similar.) We first consider the case where ρ = 0.

In this case the first step is to show that M contains an element with weight (λ, 0). If v is a weight vector, then d^i v is also a weight vector, for all i ≥ 0. Since d^m v = 0, there must be a j ≥ 0 such that d^j v ≠ 0 and d^{j+1} v = 0. Then w = d^j v is an element of weight (λ, 0) for some λ ∈ C.

Now define w_i = u^i w, i ≥ 0. It is straightforward to check that span{w_i : i ≥ 0} is stable under u, d, and h; since M is simple, we have M = span{w_i : i ≥ 0}.

Note that dw_i = β_i w_{i−1} for some β_i ∈ C. Thus we have 0 = d^m w_m = β_m β_{m−1} ··· β₁ w₀. Since w₀ ≠ 0, we have β_k = 0 for some 1 ≤ k ≤ m. Then M′ = span{w_{k+i} : i ≥ 0} is a submodule of M, and hence must be 0 or M. If M′ = 0, then M is spanned by {w₀, …, w_{k−1}} and so is finite-dimensional. If M′ = M, then in particular w₀ is a linear combination of the elements w_{k+i}, i ≥ 0. Say w₀ = Σ_{i=1}^t c_i w_{n_i}, where c_i ≠ 0 and we arrange the indices so that n_i > n_j whenever i > j. Then w_{n_t} is a linear combination of w₀ and the elements w_{n_i}, i < t. Applying the operator u, we see that any w_j with j ≥ n_t is a linear combination of elements w_i with i < n_t. Thus M is spanned by {w_i : i < n_t}, and hence finite-dimensional.

We now consider the case where ρ ≠ 0. As before, suppose v ∈ M is a weight vector. Let v_i = d^i v for 0 ≤ i ≤ m − 1. Each v_i is still a weight vector; in particular, hv_i ∈ Cv_i for all values of i. Since each v_{i−1} is an eigenvector of ud, we get that uv_i = (ud)v_{i−1} ∈ Cv_{i−1} for 1 ≤ i ≤ m − 1. Since d^m acts as ρ^m on M, we get that dv_{m−1} = ρ^m v₀; thus uv₀ = ρ^{−m} u(dv_{m−1}) = ρ^{−m}(ud)v_{m−1} ∈ Cv_{m−1}. Therefore the span of {v_i : 0 ≤ i ≤ m − 1} is stable under L, and hence must be all of M. Thus M is finite-dimensional.

Corollary 2.4. Let M be a simple module, and suppose for some m ≥ 1 we have d^m ∈ Ann M, d^{m−1} ∉ Ann M, and f(h)d^{m−1}M = 0 for some nonconstant polynomial f. Then M is finite-dimensional. (The same conclusion holds if d is replaced by u.)

In particular, if d^m ∈ Ann M, d^{m−1} ∉ Ann M, and d^m u − s^m ud^m ≠ 0 in L, then M is finite-dimensional. (Similarly, if u^m ∈ Ann M, u^{m−1} ∉ Ann M, and du^m − s^m u^m d ≠ 0, then M is finite-dimensional.)

Proof. Suppose f has degree n ≥ 1. Pick a nonzero element v ∈ d^{m−1}M, and let W = span{v, hv, h²v, …, h^{n−1}v}. Then W is stable under h, and since W is finite-dimensional, h has an eigenvector in W, say w. Note that dhv = (rh + γ)dv = 0; similarly, dh^i v = 0 for all i ≥ 0. Thus dw = 0, and so w is a weight vector. We can now apply the lemma.

In the particular case, we show easily by induction that, for k ≥ 1, d^k u − s^k ud^k = f_k(h)d^{k−1} for some polynomial f_k. The hypothesis implies that f_m(h) ≠ 0. Since d^m ∈ Ann M, the element f_m(h)d^{m−1} = d^m u − s^m ud^m annihilates M. Note that f_m(h) cannot be a nonzero constant, since that would imply d^{m−1}M = 0, contradicting d^{m−1} ∉ Ann M. Thus f_m(h) is a nonconstant polynomial with f_m(h)d^{m−1}M = 0, and the conclusion follows.

It is also useful occasionally to detect when a simple module is one-dimensional.

Lemma 2.5.
Let M be a simple module. Suppose ud and h act as scalars on M. Then M is one-dimensional.

Proof. Say ud acts as the scalar β and h acts as the scalar λ. Pick a nonzero w ∈ M, set w₀ = w, and for i ≥ 1 set w_i = u^i w and w_{−i} = d^i w. For all i ∈ Z, we clearly have uw_i ∈ Cw_{i+1}; also, since du = sud + φ(h), we get that dw_i ∈ Cw_{i−1} for i ∈ Z. Thus span{w_i : i ∈ Z} is a submodule of M and hence must be all of M.

If i < 0, we have uw_i = udw_{i+1} = βw_{i+1}, so duw_i = βdw_{i+1} = βw_i. If i ≥ 0, then uduw_i = βuw_i = βw_{i+1}; but we also know that duw_i = αw_i for some α ∈ C (the operator du = sud + φ(h) acts as the scalar α = sβ + φ(λ)), so uduw_i = αw_{i+1}. Thus α = β and we conclude that duw_i = βw_i.

Therefore ud and du act as the same scalar on M. Thus the operators u, d, and h all commute with each other, and hence M is one-dimensional.

3 r = 1 and γ ≠ 0

In this section we tackle the case where γ ≠ 0. Recall from Lemma 1.1 that in this case it suffices to assume that r = 1. We then have hu = u(h + γ) and hd = d(h − γ). It follows easily that f(h)u = uf(h + γ) and f(h)d = df(h − γ), where f is any polynomial; also, hu^i = u^i(h + iγ) and hd^i = d^i(h − iγ) for i ≥ 0. We can summarize succinctly by saying that f(h)x_i = x_i f(h + iγ), where x_i ∈ L_i. We start with a result about homogeneous elements.
Lemma 3.1.
Let I be an ideal of L. Suppose x ∈ I and x = Σ_{i∈Z} x_i (where x_i ∈ L_i) is the homogeneous decomposition of x. Then x_i ∈ I for all i ∈ Z.

Proof. We use induction on the length of x. (Recall that the length of x is the number of nonzero x_i.) The lemma is certainly true for elements of length 1. Assume it is true for elements of length n − 1, and let x = Σ_{i∈Z} x_i (x_i ∈ L_i) be an element of length n in I. Thus exactly n of the x_i are nonzero. Pick an integer m so that x_m ≠ 0. Now

hx = Σ_{i∈Z} hx_i = Σ_{i∈Z} x_i(h + iγ),

so

xh + mγx − hx = Σ_{i∈Z} (m − i)γx_i.

This is an element of I of length n − 1 (the term with i = m vanishes). We apply the induction hypothesis and conclude that (m − i)γx_i ∈ I for all i ∈ Z, and thus x_i ∈ I for i ≠ m (recall that γ ≠ 0). Therefore y = Σ_{i≠m} x_i is an element of I, hence so is x_m = x − y.

Recall from Lemma 1.2 that the algebra L(φ, 1, s, γ) is conformal. Thus there exists a polynomial ψ ∈ C[x] such that sψ(x) − ψ(x + γ) = φ(x). Recall also that we define H as the element ud + ψ(h); then Hu = suH and dH = sHd. We conclude that f(h, H)u = uf(h + γ, sH) and f(h, H)d = df(h − γ, s^{−1}H) for any polynomial f ∈ C[x, y].

Note that if M is an L-module, then HM is stable under u and d (and certainly under h), so HM is a submodule. Thus if M is simple, then either HM = 0 or HM = M. We treat these two cases separately.

3.1 HM = 0

Suppose first that HM = 0. Of course, Ann M then contains ⟨H⟩, but if ψ is the zero polynomial, we can say more. In this case, du = sud, so uM and dM are submodules of M, and hence either uM = 0 or uM = M; similarly, either dM = 0 or dM = M. It cannot be the case that both uM = M and dM = M, for then HM = udM = M, contradicting HM = 0. Thus when ψ is the zero polynomial (and HM = 0), Ann M must contain either u or d.

Lemma 3.2.
Suppose M is simple and HM = 0. If Ann M ⊋ ⟨H⟩, then Ann M contains a power of u or a power of d. Furthermore, if ψ is the zero polynomial and Ann M ⊋ ⟨d⟩, then Ann M contains a power of u, while if Ann M ⊋ ⟨u⟩, then Ann M contains a power of d.

Proof. When ψ is not the zero polynomial, let I = ⟨H⟩, but when ψ is the zero polynomial, let I denote either ⟨u⟩ or ⟨d⟩. Suppose that Ann M strictly contains I. Choose an element x ∈ Ann M that is not in I; by Lemma 3.1 we can assume that x is homogeneous, say of degree k. If I = ⟨u⟩, then k ≤ 0; if I = ⟨d⟩, then k ≥ 0; if I = ⟨H⟩, then there is no restriction on k. To avoid repetition, we assume k ≥ 0 (so I = ⟨d⟩ if ψ = 0); the other possibilities are treated similarly. Then we can write x = u^k g(h, H), but since x ∉ I, we can assume that x = u^k f(h) for some polynomial f. Among all nonzero elements of the form u^k f(h), pick one where the degree of f is as small as possible. Then

xu − ux = u^{k+1}(f(h + γ) − f(h)).

The polynomial f(h + γ) − f(h) has smaller degree than f(h), hence it must be the zero polynomial by the choice of x. Thus we have f(h + γ) = f(h), and this implies that f is a constant polynomial. Thus Ann M contains u^k, where k ≥ 1 (k = 0 is impossible, since Ann M ≠ L).

Lemma 3.3.
Suppose M is simple and HM = 0. If ψ is not the zero polynomial and Ann M ⊋ ⟨H⟩, then M is finite-dimensional. If ψ is the zero polynomial and Ann M ⊋ ⟨u⟩ or Ann M ⊋ ⟨d⟩, then M is one-dimensional.

Proof. Suppose first that ψ = 0 and Ann M ⊋ ⟨d⟩. Then the previous lemma implies that u^k ∈ Ann M for some k ≥ 1; this implies that u ∈ Ann M (since either uM = 0 or uM = M). Hence Ann M contains both u and d, and therefore all three operators u, d, and h commute, so they all act as scalars. Thus M is one-dimensional. Similarly, if ψ = 0 and Ann M ⊋ ⟨u⟩, then M is one-dimensional.

On the other hand, suppose now that ψ ≠ 0. By the previous lemma, Ann M contains u^k or d^k; we'll say u^k for definiteness. We first show that ψ cannot be a constant. If ψ were the constant C ≠ 0, then ud = H − C would act as −C on M, and thus u^k d^k would act as the nonzero constant (−C)^k on M, contradicting u^k M = 0. Thus ψ must be a nonconstant polynomial. Let u^m be the smallest power of u that lies in Ann M. In this case,

du^m − s^m u^m d = [s^m ψ(h − (m − 1)γ) − ψ(h + γ)]u^{m−1}.

The polynomial s^m ψ(h − (m − 1)γ) − ψ(h + γ) is nonzero, since s^m ψ(h − (m − 1)γ) = ψ(h + γ) would imply that ψ has infinitely many roots. (If α is a root of ψ, then so are α + mγ, α + 2mγ, and so on.) Thus we can use Corollary 2.4 to conclude that M is finite-dimensional.

3.2 HM = M

When HM = M, the situation is more complicated, especially when o(s) < ∞. If o(s) = n, then H^n u = uH^n and dH^n = H^n d, so H^n is a central element of L. Thus H^n acts as a scalar c^n for some constant c. (We write the constant as an nth power for balance with H^n.) Thus any primitive ideal must contain H^n − c^n; since HM = M, we have c ≠ 0. There is a special case, however, that we need to look at more closely. It is complicated enough that we bestow upon it a separate lemma.

Lemma 3.4.
Suppose o(s) = n, ψ is a constant polynomial C ≠ 0, and H^n acts as the scalar C^n on a simple module M. Then Ann M must contain either u^n or d^n.

Proof. We have φ(h) = (s − 1)C, so du = sud + (s − 1)C. By induction we get that d^i u = s^i ud^i + (s^i − 1)Cd^{i−1} for i ≥ 1. In particular, d^n u = ud^n (since s^n = 1). Thus ud^n M = d^n uM; also, hd^n M = d^n(h − nγ)M. Therefore d^n M is a submodule of M, hence either d^n M = 0 or d^n M = M. Similarly, either u^n M = 0 or u^n M = M. (Note that this result holds for any simple module M, not just those where H^n acts as C^n.)

Recall that ud = H − C. By induction we establish that u^k d^k = ∏_{i=0}^{k−1}(s^{−i}H − C) = (−1)^k ∏_{i=0}^{k−1}(C − s^{−i}H). Thus u^n d^n = (−1)^n ∏_{i=0}^{n−1}(C − s^{−i}H) = (−1)^n(C^n − H^n), since the s^{−i} run over all the nth roots of unity. Since H^n acts as C^n on M, we conclude that u^n d^n acts as the zero operator on M. Therefore either u^n M = 0 or d^n M = 0, i.e., Ann M contains either u^n or d^n.

We remark here that in the situation of the lemma, any ideal that contains u^n or d^n must also contain H^n − C^n, since H^n − C^n = (−1)^{n+1}u^n d^n.

Summarizing so far, we have the following information when HM = M: if o(s) = n, then Ann M must contain H^n − c^n for some c ≠ 0; furthermore, if ψ = C ≠ 0 and c^n = C^n, then Ann M actually contains u^n or d^n. In other cases (i.e., if s is not a root of unity), we have no information about Ann M.

We now look at primitive ideals that are possibly larger than the minimal ones. We start with a result about polynomials of two variables: it is the two-variable analogue of the fact that if f(x + γ) = f(x), then f is a constant polynomial. (We used this fact in the proof of Lemma 3.2.)

Lemma 3.5.
Suppose g(x, y) = Σ_{i=0}^m g_i(x)y^i with m < o(s), and g(x + γ, sy) = s^n g(x, y) for some 0 ≤ n ≤ m. Then g(x, y) = cy^n for some constant c ∈ C.

Proof. We have Σ_{i=0}^m g_i(x + γ)s^i y^i = Σ_{i=0}^m s^n g_i(x)y^i. Comparing the coefficients of y^i, we see that s^i g_i(x + γ) = s^n g_i(x). Now if g_i has a root a, then a + γ, a + 2γ, …, are also roots. Thus g_i would have infinitely many roots, and hence must be the zero polynomial. If g_i does not have any roots, then it is a nonzero constant c, but then we would have s^i c = s^n c, which forces i = n (recall that i, n < o(s)). We conclude that g_i(x) = 0 for i ≠ n, while g_n(x) is a constant c. Therefore g(x, y) = cy^n, as required.

Lemma 3.6.
Let M be a simple module with HM = M. If s is not a root of unity and Ann M ≠ {0}, then Ann M contains a power of u or a power of d. If o(s) = n and Ann M strictly contains ⟨H^n − c^n⟩ for some c ≠ 0, then Ann M also contains a power of u or a power of d.

Additionally, suppose o(s) = n, ψ(h) = C ≠ 0, and Ann M ⊋ ⟨d^n⟩. Then u^n ∈ Ann M. Similarly, if Ann M ⊋ ⟨u^n⟩, then d^n ∈ Ann M.

Proof. If o(s) = ∞, let I = {0}; if o(s) = n, let I = ⟨H^n − c^n⟩, where 0 ≠ c ∈ C. Suppose that Ann M ⊋ I. We proceed as in the proof of Lemma 3.2. Choose an element x ∈ Ann M that is not in I; by Lemma 3.1 we can assume that x is homogeneous, say of degree k. For definiteness we take k ≥ 0; the case k < 0 is similar. Then Ann M contains an element of the form x = u^k g(h, H). If o(s) = n, i.e., I = ⟨H^n − c^n⟩, we can assume that the highest power of H that appears in g is at most n − 1. Among all nonzero elements of this form, choose one where the degree of g is minimal. Write g(h, H) = Σ_{i=0}^m f_i(h)H^i; here m < o(s). Then xu = u^k g(h, H)u = u^{k+1}g(h + γ, sH), so

xu − s^m ux = u^{k+1}[g(h + γ, sH) − s^m g(h, H)] = u^{k+1} Σ_{i=0}^m [s^i f_i(h + γ) − s^m f_i(h)]H^i.

The polynomial g′(h, H) = Σ_{i=0}^m [s^i f_i(h + γ) − s^m f_i(h)]H^i has smaller degree than g(h, H) (if g(h, H) has degree (ℓ, m), then g′(h, H) has degree at most (ℓ − 1, m)), so by the choice of g(h, H) we must have g′(h, H) = 0. Hence g(h + γ, sH) = s^m g(h, H), and by Lemma 3.5 we conclude that g(h, H) is a nonzero multiple of H^m. Thus Ann M contains u^k H^m. Since HM = M, we conclude that Ann M contains u^k.

Now suppose o(s) = n, ψ(h) = C ≠ 0, H^n acts as the scalar C^n on M, and Ann M ⊋ ⟨d^n⟩. Note that ⟨d^n⟩ contains d^{n−j}d^j u^j = d^{n−j}g_j(H), where g_j(H) = ∏_{i=1}^j (s^i H − C) is a polynomial of degree j.

As before, we use Lemma 3.1 to conclude that Ann M contains a homogeneous element x ∉ ⟨d^n⟩ of degree k. Clearly k > −n. If k ≥ 0, then we proceed as above, concluding that Ann M contains a power of u. Suppose however that k < 0. Set k = j − n, so 0 < j < n. Then we can write x = d^{n−j}f(h, H); since d^{n−j}g_j(H) ∈ ⟨d^n⟩, we can assume that the degree of H in f(h, H) is at most j − 1. Therefore the element g(h, H) = u^{n−j}x = u^{n−j}d^{n−j}f(h, H) is a nonzero element of Ann M; the degree of H in this element is at most n − 1. Write g(h, H) = Σ_{i=0}^m f_i(h)H^i, where m < n. We now proceed as above, concluding that Ann M contains a power of u. Since either u^n M = 0 or u^n M = M, we conclude that u^n ∈ Ann M.

Similarly, if Ann M ⊋ ⟨u^n⟩, then Ann M also contains d^n.

Lemma 3.7.
Let M be a simple module with HM = M. If o(s) = ∞ and Ann M ≠ {0}, then M is finite-dimensional. If o(s) = n, ψ is not a constant polynomial, and Ann M ⊋ ⟨H^n − c^n⟩ (c ∈ C×), then M is finite-dimensional. If o(s) = n, ψ(h) = C is constant, and Ann M ⊋ ⟨H^n − c^n⟩ (c ∈ C×), then c^n = C^n. (Thus Ann M must contain u^n or d^n.) If o(s) = n, ψ(h) = C is constant, and Ann M ⊋ ⟨u^n⟩ or Ann M ⊋ ⟨d^n⟩, then M is finite-dimensional.

Proof. In all the cases listed, the preceding lemma implies that Ann M contains a power of u or a power of d. For definiteness let's say that u^k ∈ Ann M.

Suppose now that o(s) = ∞ and Ann M ≠ {0}. We first show that ψ cannot be a constant polynomial. If ψ is the zero polynomial, then H = ud and u^k d^k = ∏_{i=0}^{k−1} s^{−i}H is in Ann M, i.e., a nonzero multiple of H^k is in Ann M. This contradicts HM = M.

If ψ is a nonzero constant C, we have to work harder. Let v be a nonzero element in ker u (such an element exists since u^k M = 0). Let v_i = d^i v. We check easily that for i ≥ 1, uv_i = (s^{−i} − 1)Cv_{i−1}. In particular, u^k v_k = cv, where c = C^k ∏_{i=1}^k (s^{−i} − 1) ≠ 0. This contradicts u^k ∈ Ann M. Thus ψ is nonconstant. We now use Corollary 2.4 to conclude that M is finite-dimensional.

Similarly, if o(s) = n, ψ is nonconstant, and Ann M ⊋ ⟨H^n − c^n⟩, we use Corollary 2.4 to conclude that M is finite-dimensional.

So it remains to consider the case where o(s) = n and ψ(h) = C is a constant polynomial. Now u^k M = 0 implies that u^n ∈ Ann M, so u^n d^n = ±(H^n − C^n) is also in Ann M. Thus the scalar c^n − C^n is in Ann M, hence c^n = C^n, as required.

Suppose now that ψ(h) = C and Ann M strictly contains ⟨u^n⟩ or ⟨d^n⟩. By the lemma above, Ann M contains both u^n and d^n. We now show that if u^n, d^n ∈ Ann M, then M is finite-dimensional. First, choose a nonzero element v such that dv = 0 (such an element clearly exists). Define v_i = u^i v; then v_j = 0 for j ≥ n, since u^n ∈ Ann M.
Note that $Hv = (ud + C)v = Cv$. It follows that $Hv_i = Hu^iv = s^iu^iHv = s^iCv_i$ for $0 \le i \le n-1$. Since $h$ commutes with $H$, we have $Hf_i(h)v_i = s^iCf_i(h)v_i$ for any polynomial $f_i(h)$. Thus each $f_i(h)v_i$ belongs to a different eigenspace of $H$, $0 \le i \le n-1$.

Now $M$ is simple, so there exists an element $x \in L$ such that $xhv = v$. We can rewrite this equation as $h\sum_{i=0}^{n-1}f_i(h)u^iv = v$ for some polynomials $f_i$. Thus $h\sum_{i=0}^{n-1}f_i(h)v_i = v$. For $i \ne 0$, the $H$-eigenvalue of $f_i(h)v_i$ is different from the $H$-eigenvalue of $v$, so we must have $f_i(h)v_i = 0$. Thus $hf_0(h)v = v$, or $(hf_0(h) - 1)v = 0$. In other words, we have found a nontrivial polynomial in $h$ that annihilates $v$. By Corollary 2.4, this implies that $M$ is finite-dimensional.

We now show that the ideals $\{0\}$, $\langle H \rangle$, $\langle H^n - c^n \rangle$, $\langle u \rangle$, $\langle d \rangle$, $\langle u^n \rangle$, and $\langle d^n \rangle$ are all primitive, under the right circumstances.

Lemma 3.8.
The following are primitive ideals:
1. $\langle H \rangle$, if $\psi$ is not identically zero;
2. $\{0\}$, if $o(s) = \infty$;
3. $\langle H^n - c^n \rangle$ ($c \ne 0$), if $o(s) = n$ and $\psi$ is not a constant polynomial;
4. $\langle H^n - c^n \rangle$ ($c \ne 0$), if $o(s) = n$ and $\psi$ is a constant $C$ with $C^n \ne c^n$;
5. $\langle u \rangle$ and $\langle d \rangle$, if $\psi(h) = 0$;
6. $\langle u^n \rangle$ and $\langle d^n \rangle$, if $o(s) = n$ and $\psi$ is a nonzero constant $C$.

Proof. We make use of the universal weight module $W(\lambda, \beta)$ from section 2.1. In the present case ($r = 1$, $\gamma \ne 0$), we have $\lambda_i = \lambda + i\gamma$ and $\beta_i = s^i\beta + s^i\psi(\lambda) - \psi(\lambda + i\gamma)$. Note that the $\lambda_i$ are always all distinct, so if $\beta_i \ne 0$ for all $i \in \mathbb{Z}$, then $W(\lambda, \beta)$ is simple.

We need to know how $H = ud + \psi(h)$ acts on $W(\lambda, \beta)$. A quick calculation shows that for $i \in \mathbb{Z}$, $Hv_i = s^icv_i$, where $c = \beta + \psi(\lambda)$. We then have $\beta_i = s^ic - \psi(\lambda + i\gamma)$. We usually set a value for $c$ first, then find $\lambda$ and $\beta$ to suit our purpose.

Suppose first that $\psi$ is a nonconstant polynomial. Then no matter what the value of $c$ is, we can find $\lambda$ and $\beta$ such that $\beta + \psi(\lambda) = c$ and $\beta_i \ne 0$. (The details: for a given $i \in \mathbb{Z}$, the polynomial equation $\psi(x) = s^ic$ has only finitely many solutions, so there are only countably many values of $\lambda$ where $\psi(\lambda + i\gamma) = s^ic$ for some $i \in \mathbb{Z}$. Thus we can certainly choose a value of $\lambda$ so that $\psi(\lambda + i\gamma) \ne s^ic$ for any $i \in \mathbb{Z}$. We can then choose $\beta$ to satisfy $\beta + \psi(\lambda) = c$.) Thus if $\psi$ is a nonconstant polynomial, we can construct a simple $W(\lambda, \beta)$ for any value of $c$.

When $c = 0$, we get a simple module $M$ with $HM = 0$. Additionally, note that even if $\psi$ is a constant $C \ne 0$, we can still choose $\lambda$ and $\beta$ so that $c = \beta + C = 0$ and $\beta_i = -C \ne 0$ for all $i \in \mathbb{Z}$. Thus we get a simple module $M$ with $HM = 0$ whenever $\psi$ is nonzero. Its annihilator is $\langle H \rangle$ by Lemma 3.3. This proves (1).

When $c \ne 0$ and $o(s) = n$, we get a simple module $M$ with $HM = M$; its annihilator must be $\langle H^n - c^n \rangle$ by Lemma 3.6.
This proves (3).

When $c \ne 0$ and $o(s) = \infty$, again we get a simple module with $HM = M$. Note that even if $\psi(h) = C$, a constant (which could be zero), we can still choose $\lambda$ and $\beta$ so that $c = \beta + C \ne 0$ and $\beta_i = s^ic - C \ne 0$ for all $i \in \mathbb{Z}$. Thus we have a simple module $M$ with $HM = M$ no matter what $\psi$ is. The annihilator of this module is $\{0\}$, by Lemma 3.6. This proves (2).

Suppose now that $c \ne 0$, $o(s) = n$, and $\psi$ is a constant $C$. We can still choose $\lambda$ and $\beta$ such that $\beta + C = c$ and $\beta_i = s^ic - C \ne 0$ for any $i \in \mathbb{Z}$, provided $C$ is not one of the values $s^ic$, i.e., provided $C^n \ne c^n$. We get a simple module $M$ such that $HM = M$. The annihilator must be $\langle H^n - c^n \rangle$ by Lemma 3.6. This proves (4).

It remains to prove (5) and (6). We now need to introduce modules that are not weight modules. Here is one possibility. Suppose $\psi(h) = C$ where $C \in \mathbb{C}$. Pick a positive integer $n$. Let $M$ be a module with basis $\{v_i : i \in \mathbb{Z}\}$ where
$$uv_i = v_{i+1}, \qquad hv_i = v_{i-n} + i\gamma v_i, \qquad dv_i = (s^i - 1)Cv_{i-1}.$$
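The action $uv_i = v_{i+1}$, $hv_i = v_{i-n} + i\gamma v_i$, $dv_i = (s^i - 1)Cv_{i-1}$ can be spot-checked numerically against the defining relations $hu = u(h+\gamma)$, $dh = (h+\gamma)d$, and $du = sud + (s-1)C$ of this section. The sketch below is only an illustration; the parameter values $n = 2$, $s = -1$ (so that $o(s) = 2$), $C = 5$, $\gamma = 3$ are arbitrary choices, and vectors are encoded as dictionaries mapping the index $i$ of $v_i$ to its coefficient.

```python
from fractions import Fraction as F

# Spot-check of the module action; arbitrary sample parameters:
# n = 2, s = -1 (so o(s) = 2), C = 5, gamma = 3.
n, C, g = 2, 5, 3
s = F(-1)

def norm(v):
    return {i: c for i, c in v.items() if c != 0}

def add(v, w):
    out = dict(v)
    for i, c in w.items():
        out[i] = out.get(i, 0) + c
    return norm(out)

def scale(a, v):
    return norm({i: a * c for i, c in v.items()})

def u(v):  # u v_i = v_{i+1}
    return norm({i + 1: c for i, c in v.items()})

def d(v):  # d v_i = (s^i - 1) C v_{i-1}
    return norm({i - 1: c * (s**i - 1) * C for i, c in v.items()})

def h(v):  # h v_i = v_{i-n} + i*gamma*v_i
    out = {}
    for i, c in v.items():
        out[i - n] = out.get(i - n, 0) + c
        out[i] = out.get(i, 0) + c * g * i
    return norm(out)

for i in range(-4, 5):
    e = {i: F(1)}
    assert h(u(e)) == add(u(h(e)), scale(g, u(e)))                   # hu = u(h + gamma)
    assert d(h(e)) == add(h(d(e)), scale(g, d(e)))                   # dh = (h + gamma)d
    assert d(u(e)) == add(scale(s, u(d(e))), scale((s - 1) * C, e))  # du = s ud + (s-1)C
    assert d(d(e)) == {}                                             # d^n acts as zero
```

The last assertion checks the observation used in the proof of (6) below: when $o(s) = n$, the operator $d^n$ annihilates this module, since one of the $n$ consecutive factors $s^{i-k} - 1$ always vanishes.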
We can check that this is indeed a module:
$$huv_i = v_{i+1-n} + (i+1)\gamma v_{i+1} = v_{i-n+1} + i\gamma v_{i+1} + \gamma v_{i+1} = u(h + \gamma)v_i;$$
$$dhv_i = (s^{i-n} - 1)Cv_{i-n-1} + i\gamma(s^i - 1)Cv_{i-1} = (h + \gamma)dv_i;$$
$$[sud + (s-1)C]v_i = [s^{i+1} - s]Cv_i + [s - 1]Cv_i = [s^{i+1} - 1]Cv_i = duv_i.$$

We now verify that this module is simple. Define the operator $T = u^nh - 1$; the action of $T$ on $v_i$ is $Tv_i = i\gamma v_{i+n}$. Thus $v_i$ is "almost" an eigenvector of $T$ with eigenvalue $i\gamma$. In this case "almost" is good enough.

Suppose $M'$ is a nonzero submodule of $M$. Let $x = \sum a_iv_i$ be a nonzero element in $M'$ with minimal length. Then $Tx = \sum i\gamma a_iv_{i+n}$, and for any $m$ with $a_m \ne 0$, we have $Tx - m\gamma u^nx = \sum(i - m)\gamma a_iv_{i+n}$, which has shorter length than $x$, and so must be zero. Thus $x = a_mv_m$, i.e., $M'$ contains a pure basis vector $v_m$.

Now $M$ can be generated by just one $v_m$. To get $v_{m+1}$ we simply apply $u$, and to get $v_{m-1}$ we apply $h$ and subtract $m\gamma v_m$, getting $v_{m-n}$, then apply $u$ repeatedly until we get $v_{m-1}$. Thus $M' = M$. So $M$ is indeed a simple module.

Now if $C = 0$ and $n = 1$, then $d$ annihilates $M$. By Lemma 3.3, $\operatorname{Ann} M$ cannot be bigger than $\langle d \rangle$. Thus $\langle d \rangle$ is indeed a primitive ideal. Similarly, $\langle u \rangle$ is a primitive ideal. This proves (5).

If $C \ne 0$ and $o(s) = n$, then $d^n$ acts as the zero operator. Thus $\operatorname{Ann} M$ contains $\langle d^n \rangle$. It cannot be any larger by Lemma 3.6. Thus $\langle d^n \rangle$ is a primitive ideal. Similarly, $\langle u^n \rangle$ is a primitive ideal. This proves (6).

Putting all these together, we get a complete list of the primitive ideals of $L(\phi, 1, s, \gamma)$ ($\gamma \ne 0$), summarized in the following table. Note that we only include primitive ideals that are not annihilators of finite-dimensional modules.
$o(s)$ | $\psi$ | primitive ideals
$\infty$ | $0$ | $\{0\}$, $\langle u \rangle$, $\langle d \rangle$
$\infty$ | nonzero | $\{0\}$, $\langle H \rangle$
$n$ | $0$ | $\langle H \rangle$, $\langle u \rangle$, $\langle d \rangle$, $\langle H^n - c^n \rangle$ ($c \ne 0$)
$n$ | $C \ne 0$ | $\langle H \rangle$, $\langle u^n \rangle$, $\langle d^n \rangle$, $\langle H^n - c^n \rangle$ ($c \ne 0$, $c^n \ne C^n$)
$n$ | nonconstant | $\langle H \rangle$, $\langle H^n - c^n \rangle$ ($c \ne 0$)

The preceding process of determining primitive ideals of $L(\phi, 1, s, \gamma)$ is fairly typical of our method in general. Using a central element or otherwise, we find an element that must be present in all primitive ideals. Using results similar to Lemmas 3.1 and 3.2 we hope to show that any primitive ideal larger than these minimal ideals must contain a power of $u$ or a power of $d$; usually this is enough to conclude that these large primitive ideals must be annihilators of finite-dimensional simple modules. Sometimes we will have to consider special cases that could very well involve modules that are not weight modules.

4. $L$ is conformal and $\gamma = 0$

In this section we assume that $\gamma = 0$ and $L$ is conformal. Recall that this means there exists a polynomial $\psi$ such that $s\psi(x) - \psi(rx) = \phi(x)$. We will use $\psi$ throughout instead of $\phi$.

It is very useful, especially when roots of unity are involved, to have a commutation formula between $u$ and $d^k$, $k \ge 1$; see Corollary 2.4. With the base case $du - sud = s\psi(h) - \psi(rh)$, we establish by induction that
$$d^ku - s^kud^k = [s^k\psi(h) - \psi(r^kh)]d^{k-1}.$$
There is of course a corresponding formula involving $d$ and $u^k$.

As before we define $H$ to be $ud + \psi(h)$; then $Hu = suH$ and $dH = sHd$. If $M$ is a simple $L$-module, then these relations imply that either $HM = M$ or $HM = 0$; similarly, either $hM = M$ or $hM = 0$. There are thus four cases to consider: when $hM = HM = 0$, when $hM = M$ and $HM = 0$, when $hM = 0$ and $HM = M$, and finally, when $hM = HM = M$. Before we treat these cases individually, we state the following result. It is a somewhat more complicated analogue of Lemma 3.1.
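The commutation formula $d^ku - s^kud^k = [s^k\psi(h) - \psi(r^kh)]d^{k-1}$ can be spot-checked numerically on a generic weight module, with $hv_i = r^i\lambda v_i$, $uv_i = v_{i+1}$, and $dv_i = (s^i\mu - \psi(r^i\lambda))v_{i-1}$, in the spirit of the universal weight module of section 2.1. The sketch below is only an illustration; the values of $r$, $s$, $\lambda$, $\mu$ and the sample polynomial $\psi$ are arbitrary choices.

```python
from fractions import Fraction as F

# Check d^k u - s^k u d^k = [s^k psi(h) - psi(r^k h)] d^(k-1) on a weight
# module: h v_i = r^i lam v_i, u v_i = v_{i+1},
# d v_i = (s^i mu - psi(r^i lam)) v_{i-1}.  All parameter values are arbitrary.
r, s, lam, mu = F(2), F(3), F(5), F(7)

def psi(x):
    return x**2 + 1            # sample polynomial psi

def u(v):
    return {i + 1: c for i, c in v.items()}

def d(v):
    return {i - 1: c * (s**i * mu - psi(r**i * lam)) for i, c in v.items()}

def dk(v, k):                  # apply d k times
    for _ in range(k):
        v = d(v)
    return v

def bracket(v, k):             # apply the operator s^k psi(h) - psi(r^k h)
    out = {}
    for j, c in v.items():
        w = c * (s**k * psi(r**j * lam) - psi(r**(k + j) * lam))
        if w != 0:
            out[j] = w
    return out

def sub(v, w):
    out = dict(v)
    for j, c in w.items():
        out[j] = out.get(j, F(0)) - c
    return {j: c for j, c in out.items() if c != 0}

def commutator_identity_holds(k, i):
    e = {i: F(1)}
    lhs = sub(dk(u(e), k), {j: s**k * c for j, c in u(dk(e, k)).items()})
    rhs = bracket(dk(e, k - 1), k)
    return lhs == rhs

assert all(commutator_identity_holds(k, i)
           for k in range(1, 5) for i in range(-3, 4))
```

The base case $k = 1$ is exactly the relation $du - sud = s\psi(h) - \psi(rh)$; the loop confirms the induction for a few higher $k$.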
Lemma 4.1.
Suppose $o(r) = \infty$ and $x$ is an element in an ideal $I$ of $L$. Let $x = \sum_{i \in \mathbb{Z}} x_i$ be its homogeneous decomposition (i.e., $x_i \in L_i$). Then $x_ih^{\ell(x)-1} \in I$ for all $i \in \mathbb{Z}$, where $\ell(x)$ is the length of $x$.

Proof. As before, we use induction on $\ell(x)$. The claim is certainly true if $\ell(x) = 1$. Assume that $\ell(x) > 1$. Then $hx = \sum hx_i = \sum r^ix_ih$. Pick $k \in \mathbb{Z}$ such that $x_k \ne 0$. Then
$$hx - r^kxh = \sum_{i \in \mathbb{Z}}(r^i - r^k)x_ih.$$
Since $r^i \ne r^k$ unless $i = k$, we have $\ell(hx - r^kxh) = \ell(x) - 1$. By the induction hypothesis, $(r^i - r^k)x_ih \cdot h^{\ell(x)-2} \in I$; hence $x_ih^{\ell(x)-1} \in I$ for all $i \ne k$. Thus $y = \sum_{i \ne k}x_ih^{\ell(x)-1} \in I$, so $x_kh^{\ell(x)-1} = xh^{\ell(x)-1} - y \in I$ also.

Remark 4.2.
The proof only uses the commutation relation between $h$ and elements of degree $i$. Since $H$ has similar commutation relations, the result is also true when $h$ is replaced by $H$ and $r$ by $s$.

We now consider each of the four cases in turn.

$hM = HM = 0$

We dispose of this case immediately. Both $h$ and $ud = H - \psi(h)$ act as scalars on $M$, so by Lemma 2.5, $M$ must be one-dimensional.

$hM = M$ and $HM = 0$

Suppose first that $o(r) = n$. Then $h^n$ commutes with $u$ and $d$, and hence $h^n$ must act as a scalar on $M$. This implies that $h$ has an eigenvector; thus $hv = \lambda v$ for some nonzero $v \in M$ and $\lambda \in \mathbb{C}$. Note that $Hv = 0$, so $(ud)v = -\psi(\lambda)v$, i.e., $v$ is also an eigenvector of $ud$. Thus $v$ is a weight vector. Now recall that
$$d^ku = s^kud^k + [s^k\psi(h) - \psi(r^kh)]d^{k-1} = s^k[H - \psi(h)]d^{k-1} + [s^k\psi(h) - \psi(r^kh)]d^{k-1} = [s^kH - \psi(r^kh)]d^{k-1}$$
for $k \ge 1$. In particular, since $H$ acts as zero on $M$, we have $d^nu = -\psi(h)d^{n-1}$ as operators on $M$. Compare this with $ud = H - \psi(h)$, i.e., $ud^n = Hd^{n-1} - \psi(h)d^{n-1}$, and we conclude that as operators on $M$, $d^n$ commutes with $u$. Clearly $d^n$ also commutes with $h$, and hence $d^n$ acts as a scalar on $M$. (Similarly, $u^n$ acts as a scalar on $M$.) By Lemma 2.3, $M$ is finite-dimensional.

Thus we can concentrate on the case where $r$ is not a root of unity. Clearly $\operatorname{Ann} M$ contains $\langle H \rangle$, but when $\psi(h) = 0$, we can say even more. As in section 3.2, in this case $\operatorname{Ann} M$ must actually contain $u$ or $d$.

Lemma 4.3.
Suppose $r$ is not a root of unity, $HM = 0$, $hM = M$, and $\operatorname{Ann} M \supsetneq \langle H \rangle$. Then $\operatorname{Ann} M$ contains a power of $u$ or a power of $d$. When in addition $\psi(h) = 0$ and $\operatorname{Ann} M \supsetneq \langle d \rangle$, then $\operatorname{Ann} M$ contains $u$. Similarly, if $\operatorname{Ann} M \supsetneq \langle u \rangle$, then $\operatorname{Ann} M$ contains $d$.

Proof. Denote by $J$ the ideal $\langle H \rangle$; in the special case where $\psi(h) = 0$, let $J$ denote the ideal $\langle d \rangle$. Pick an element $x$ in $\operatorname{Ann} M$ that is not in $J$. By Lemma 4.1, $\operatorname{Ann} M$ contains an element of the form $x_ih^k$ where $x_i \notin J$, $x_i \in L_i$. Since $hM = M$, we conclude that $x_i \in \operatorname{Ann} M$ also. In the special case where $\psi(h) = 0$, we clearly have $i \ge 0$. In the other cases, to be definite we also assume that $i \ge 0$; the case where $i \le 0$ is handled similarly. Thus $\operatorname{Ann} M$ contains an element of the form $u^if(h)$ for some nonzero polynomial $f$. Among all such elements, choose one where the degree of $f$ is as small as possible, say $m$. Then $u^if(h)u - r^mu^{i+1}f(h) = u^{i+1}[f(rh) - r^mf(h)]$ is also in $\operatorname{Ann} M$. But the polynomial $f(rh) - r^mf(h)$ has smaller degree than $m$, hence we must have $f(rh) = r^mf(h)$. If $f(h) = \sum_{j=0}^m a_jh^j$, then this implies that $\sum_{j=0}^m(r^j - r^m)a_jh^j = 0$. Since $r$ is not a root of unity, we have $a_j = 0$ for $j \ne m$. Thus $f(h)$ is a multiple of $h^m$. Since $hM = M$, we conclude that $\operatorname{Ann} M$ contains $u^i$.

In the case where $\psi(h) = 0$, recall that either $uM = M$ or $uM = 0$; since $u^i \in \operatorname{Ann} M$, we must have $uM = 0$, i.e., $u \in \operatorname{Ann} M$. The other cases of the lemma are treated similarly.

The lemma has immediate consequences for large primitive ideals.

Lemma 4.4.
Suppose $o(r) = \infty$, $M$ is a simple module, $HM = 0$, and $hM = M$. If $\operatorname{Ann} M \supsetneq \langle H \rangle$ and $\psi$ is not zero, then $M$ is finite-dimensional. If $\psi$ is zero and $\operatorname{Ann} M \supsetneq \langle u \rangle$ or $\operatorname{Ann} M \supsetneq \langle d \rangle$, then $M$ is one-dimensional.

Proof. By the previous lemma, $\operatorname{Ann} M$ contains a power of $u$ or a power of $d$; assume it is a power of $d$ for definiteness. Then there is a nonzero element $v \in M$ such that $dv = 0$. Since $H = ud + \psi(h)$ and $HM = 0$, it is clear that $\psi(h)v = 0$ also. By Corollary 2.4, $M$ is finite-dimensional.

If $\psi$ is zero, then the previous lemma implies that $\operatorname{Ann} M$ contains both $u$ and $d$. Thus as operators on $M$, the elements $u$, $d$, and $h$ all commute, so $M$ must be one-dimensional.

$hM = 0$ and $HM = M$

This case is similar to the previous one, with $H$ and $s$ interchanged with $h$ and $r$. The conclusions, however, are different, so we will go through the details.

As before, we start by looking at the case when $o(s) = n$. The operator $\psi(h)$ acts as the scalar $\psi(0)$ on $M$, so in this case we have $d^ku = s^kud^k + (s^k - 1)\psi(0)d^{k-1}$. Thus $d^nu = ud^n$, i.e., $d^n$ commutes with $u$ (and with $h$) as an operator on $M$. Thus $d^n$ acts as a scalar on $M$. Similarly, $u^n$ acts as a scalar on $M$.

Note that $H^n$ also commutes with $u$, $d$, and $h$, and hence acts as a scalar on $M$. Thus $H$ has an eigenvector in $M$. Since $h$ acts as a scalar on $M$ (namely, as zero), we can use Lemma 2.3 to conclude that $M$ is finite-dimensional.

We can thus concentrate on the case where $s$ is not a root of unity. Clearly $\operatorname{Ann} M$ contains $\langle h \rangle$. It turns out that $\operatorname{Ann} M$ cannot be larger than $\langle h \rangle$ in this case.

Lemma 4.5.
Suppose $s$ is not a root of unity, and $M$ is a simple module with $HM = M$, $hM = 0$. Then $\operatorname{Ann} M = \langle h \rangle$.

Proof. Assume that $\operatorname{Ann} M \supsetneq \langle h \rangle$. We will derive a contradiction. Using the same calculation as in Lemma 4.3, with $H$ and $s$ exchanging roles with $h$ and $r$, we conclude that $\operatorname{Ann} M$ contains a power of $u$ or a power of $d$. For definiteness we assume that $\operatorname{Ann} M$ contains $d^m$ but not $d^{m-1}$, where $m \ge 1$. Then $\operatorname{Ann} M$ also contains $d^mu - s^mud^m = (s^m - 1)\psi(0)d^{m-1}$. If $\psi(0) \ne 0$ we reach our contradiction. If $\psi(0) = 0$, then $ud = H$ as operators on $M$. Thus $u^md^m$ is a nonzero scalar multiple of $H^m$, but this is also a contradiction, since $u^md^m$ acts as zero while $H^m$ cannot act as zero since $HM = M$.

$hM = M$ and $HM = M$

As before, we first consider the case where both $r$ and $s$ are roots of unity. In this case there exists a positive integer $n$ such that $r^n = s^n = 1$. Then $h^n$ and $H^n$ are both central elements, so they both act as scalars on $M$. Since $h$ and $H$ commute, they have a common eigenvector. Therefore $h$ and $ud = H - \psi(h)$ also have a common eigenvector.

Recall that $d^ku - s^kud^k = [s^k\psi(h) - \psi(r^kh)]d^{k-1}$, which implies that $d^nu = ud^n$. Clearly $d^nh = r^nhd^n = hd^n$, thus $d^n$ is a central element. Hence $d^n$ also acts as a scalar on $M$. Similarly, $u^n$ also acts as a scalar on $M$. We can apply Lemma 2.3 and conclude that $M$ is actually finite-dimensional.

We thus can concentrate on the case where at least one of $r$ and $s$ is not a root of unity. In this case we can use Lemma 4.1 and the remark following it, either through $H$ and $s$, or through $h$ and $r$.

Since now both $h$ and $H$ act nontrivially on $M$, it is possible that they interact in somewhat complicated ways. To help keep track of this interaction we make use of the set $S(r, s) = \{(i, j) \in \mathbb{Z} \times \mathbb{Z} : r^i = s^j\}$. We will often just write $S$ for $S(r, s)$, since $r$ and $s$ are usually fixed.

Note that $S$ is an additive subgroup of $\mathbb{Z} \times \mathbb{Z}$. Since we are assuming that not both of $r$ and $s$ are roots of unity, it is straightforward to verify that $S$ is in fact generated by one element: there exists $(n, m) \in S$ such that $S = \{(kn, km) : k \in \mathbb{Z}\}$. We can thus divide the sets $S$ into two types: those that contain pairs $(i, j)$ where $i$ and $j$ have the same sign or zero, and those that contain pairs $(i, j)$ where $i$ and $j$ have opposite signs.
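As a concrete illustration of the two types of $S(r, s)$, the group and its generator can be found by brute force when $r$ and $s$ are rational; the numerical values below are arbitrary choices. Taking $r = 4$, $s = 8$ gives a group of the first type generated by $(3, 2)$, while $r = 2$, $s = 1/8$ gives one of the second type generated by $(3, -1)$.

```python
from fractions import Fraction as F

def S_elements(r, s, N=6):
    # all (i, j) with |i|, |j| <= N and r^i = s^j, using exact arithmetic
    return sorted((i, j) for i in range(-N, N + 1)
                  for j in range(-N, N + 1) if r**i == s**j)

# r = 4, s = 8: 4^i = 8^j iff 2i = 3j, so S is generated by (3, 2) (first type)
assert S_elements(F(4), F(8)) == [(-6, -4), (-3, -2), (0, 0), (3, 2), (6, 4)]

# r = 2, s = 1/8: 2^i = 2^(-3j) iff i = -3j, so S is generated by (3, -1) (second type)
assert S_elements(F(2), F(1, 8)) == [(-6, 2), (-3, 1), (0, 0), (3, -1), (6, -2)]
```

In both cases the listed elements are exactly the integer multiples of a single generator, matching the observation above that $S$ is generated by one element.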
For ease of reference, we adopt the following notational convention: when $S$ is of the first type, we write $S = \langle(n, m)\rangle$ to indicate that $S$ is generated by $(n, m)$ where $n, m \ge 0$; when $S$ is of the second type, we write $S = \langle(n, -m)\rangle$ to indicate that $S$ is generated by $(n, -m)$ where $n, m > 0$. Thus the type of $S$ is indicated by the presence or absence of the minus sign on $m$.

Now let $M$ be a simple module. Every element of $S$ gives rise to an element of $\operatorname{Ann} M$, as follows.

Suppose first that $S = \langle(n, m)\rangle$ and $(i, j) \in S$. The pair $(-i, -j)$ gives rise to the same element of $\operatorname{Ann} M$, so we can assume that $i, j \ge 0$. Since $hM = M$ (and $\ker h = 0$), the operator $h$ has an inverse $h^{-1}$ (although the element $h$ does not have an inverse in $L$). We have $h^{-1}u = r^{-1}uh^{-1}$ and $dh^{-1} = r^{-1}h^{-1}d$. Thus the operator $h^{-i}H^j$ commutes with everything in $L$, and hence acts as a scalar $c_{i,j}$ on $M$. The scalar $c_{i,j}$ must be nonzero because $hM = M$ and $HM = M$. This implies that $H^j$ acts the same way on $M$ as does $c_{i,j}h^i$. Therefore $\operatorname{Ann} M$ must contain $H^j - c_{i,j}h^i$.

We define $I_S$ as the ideal generated by all $H^j - c_{i,j}h^i$ as $(i, j)$ ranges over the elements of $S$; then $I_S \subseteq \operatorname{Ann} M$. Since $S$ is generated by the single element $(n, m)$, it is straightforward to verify that if $(i, j) = (kn, km)$, then $H^j - (c_{n,m})^kh^i \in \langle H^m - c_{n,m}h^n \rangle \subseteq I_S$. Since $H^j - c_{i,j}h^i \in I_S$, we conclude that $c_{i,j} = (c_{n,m})^k$ and that $I_S = \langle H^m - c_{n,m}h^n \rangle$.

Suppose now that $S = \langle(n, -m)\rangle$ and $(i, -j) \in S$. As before, the pair $(-i, j)$ gives rise to the same element of $\operatorname{Ann} M$, so we can assume that $i, j > 0$. This time we conclude that $h^iH^j - c_{i,-j} \in \operatorname{Ann} M$ for some nonzero scalar $c_{i,-j}$. A similar calculation as before shows that $c_{i,-j} = (c_{n,-m})^k$ if $(i, -j) = (kn, -km)$. If we define $I_S$ as the ideal generated by all $h^iH^j - c_{i,-j}$ as $(i, -j)$ ranges over $S$, then $I_S = \langle h^nH^m - c_{n,-m} \rangle$.

Here are some examples. Suppose $r$ and $s$ are algebraically independent. Then $S = \{(0, 0)\}$ and $I_S = \{0\}$. Suppose $o(s) = m < \infty$ and $o(r) = \infty$. Then $S = \{(0, km) : k \in \mathbb{Z}\}$ and $I_S = \langle H^m - c_{0,m} \rangle$. Suppose $rs = 1$ and $o(r) = o(s) = \infty$. Then $S = \{(k, -k) : k \in \mathbb{Z}\}$ and $I_S = \langle hH - c_{1,-1} \rangle$.

Inspired by these observations, we define, for a given $S$ and any $c \in \mathbb{C}^\times$, the ideal $I_c = \langle H^m - ch^n \rangle$ if $S = \langle(n, m)\rangle$, and $I_c = \langle h^nH^m - c \rangle$ if $S = \langle(n, -m)\rangle$. Note that if $(i, j) \in S$ with $i, j \ge 0$, then $(i, j) = (kn, km)$ for some $k \ge 0$, and the element $H^j - c^kh^i$ lies in $I_c$. Similarly, if $(i, -j) \in S$ with $i, j > 0$, then $(i, -j) = (kn, -km)$ for some $k \ge 0$, and the element $h^iH^j - c^k$ lies in $I_c$.

Any primitive ideal of $L$ contains $I_c$ for some $c \in \mathbb{C}^\times$. We need to determine which values of $c$ actually make $I_c$ a primitive ideal. The easy case, of course, is when $S = \{(0, 0)\}$. Then $I_c = \langle 1 - c \rangle$, and so only $I_1 = \{0\}$ can be a primitive ideal.

When $S \ne \{(0, 0)\}$, the situation is almost the complete opposite. It turns out that almost all nonzero values of $c$ make $I_c$ a primitive ideal, but as usual there are special cases where $\operatorname{Ann} M$ has to contain more than just $I_c$.

Lemma 4.6.
Let $M$ be a simple module with $hM = HM = M$. Suppose $(jm, m)$ is the generator of $S$ for some $j \ge 0$ and $m > 0$ (thus $s^m = r^{jm}$). Suppose further that $\psi(h) = Ch^j$ with $C \ne 0$. If $I_{C^m} \subseteq \operatorname{Ann} M$, i.e., if $H^m - C^mh^{jm} \in \operatorname{Ann} M$, then $\operatorname{Ann} M$ also contains $u^m$ or $d^m$.

Proof. We note that $r^js^{-1}$ is an $m$th root of unity. Since $du = sH - Cr^jh^j$, we get by induction that
$$d^ku^k = \prod_{i=1}^k(s^iH - r^{ij}Ch^j) = \prod_{i=1}^ks^i[H - (r^js^{-1})^iCh^j].$$
In particular, for $k = m$, we get that $d^mu^m$ is a nonzero multiple of $H^m - C^mh^{jm}$. Thus $d^mu^m$ acts as the zero operator on $M$.

Recall that $d^mu - s^mud^m = [s^m\psi(h) - \psi(r^mh)]d^{m-1}$. When $\psi(h) = Ch^j$ and $s^m = r^{jm}$, we get $d^mu = s^mud^m$. Thus $d^mM$ is a submodule of $M$, either $0$ or $M$. Similarly, $u^mM = 0$ or $u^mM = M$. Since $d^mu^m$ acts as the zero operator on $M$, it cannot be the case that both $u^mM = M$ and $d^mM = M$. Therefore $\operatorname{Ann} M$ contains either $u^m$ or $d^m$.

We now look at what happens when a primitive ideal is larger than $I_c$, or in the case of the lemma above, larger than $\langle u^m \rangle$ or $\langle d^m \rangle$. To do so, we need the following lemma about two-variable polynomials, analogous to Lemma 3.5. We first recall the definition of distinctive polynomials from [P1]. A set $T \subseteq \mathbb{Z} \times \mathbb{Z}$ is called distinctive if $r^is^j \ne r^{i'}s^{j'}$ for all distinct pairs $(i, j), (i', j') \in T$. A polynomial is called distinctive if its exponents lie in a distinctive set, i.e., $g(x, y) = \sum_{(i,j) \in T}a_{i,j}x^iy^j$ is distinctive if $T$ is a distinctive set. (We count the zero polynomial as distinctive.)

(Distinctive sets are not exotic. Note that $r^is^j = r^{i'}s^{j'}$ iff $r^{i-i'} = s^{-(j-j')}$, i.e., iff $(i - i', -(j - j')) \in S$. Thus $(i, j)$ and $(i', j')$ are in a distinctive set iff $(i, -j)$ and $(i', -j')$ are in different $S$-cosets. Therefore $T$ is a distinctive set iff the elements $(i, -j)$, where $(i, j) \in T$, are different $S$-coset representatives.)

Lemma 4.7.
Suppose $g(x, y) = \sum_{(i,j) \in T}a_{i,j}x^iy^j$ is a distinctive polynomial and $g(rx, sy) = r^as^bg(x, y)$ where $(a, b) \in T$. Then $g(x, y) = cx^ay^b$ for some $c \in \mathbb{C}$.

Proof. We have
$$\sum_{(i,j) \in T}a_{i,j}(r^is^j - r^as^b)x^iy^j = 0.$$
Since $T$ is distinctive, we have $r^is^j \ne r^as^b$ unless $(i, j) = (a, b)$. The conclusion follows.

Distinctive polynomials are useful because any element of $L$ is congruent modulo $I_c$ (where $c \in \mathbb{C}^\times$) to a distinctive polynomial in $h$ and $H$. To see this, suppose $g(h, H) = \sum_{(i,j) \in R}a_{i,j}h^iH^j$ is an arbitrary element of $L$, where $R$ is not necessarily a distinctive set. We will show that we can reduce the set $R$ until we get a distinctive set, without changing the congruence class of $g(h, H)$ mod $I_c$.

So suppose $(i, j), (i', j') \in R$ with $r^is^j = r^{i'}s^{j'}$. Then $r^{i-i'} = s^{j'-j}$, so $(i - i', j' - j) \in S$. Clearly it does no harm to assume that $i - i' \ge 0$. If $j' - j \ge 0$, then $H^{j'-j} - c^kh^{i-i'} \in I_c$ for some $k \ge 0$, so $h^{i'}H^{j'} \equiv c^kh^iH^j$ mod $I_c$. Thus we can replace the term $h^{i'}H^{j'}$ by $c^kh^iH^j$ without changing the congruence class of $g(h, H)$ mod $I_c$. On the other hand, if $j' - j < 0$, then $h^{i-i'}H^{j-j'} - c^k \in I_c$ for some $k \ge 0$. Thus $h^iH^j \equiv c^kh^{i'}H^{j'}$ mod $I_c$, and we can replace the term $h^iH^j$ by $c^kh^{i'}H^{j'}$ without changing the congruence class of $g(h, H)$. Clearly we can continue this process until we obtain a distinctive polynomial.

We use the lemma about distinctive polynomials to prove the following analogue of Lemma 3.2.

Lemma 4.8.
Let $M$ be a simple module with $hM = HM = M$. If $\operatorname{Ann} M \supsetneq I_c$, then $\operatorname{Ann} M$ contains a power of $u$ or a power of $d$. In the special case of Lemma 4.6, suppose $\operatorname{Ann} M \supsetneq \langle u^m \rangle$. Then $\operatorname{Ann} M$ contains $d^m$. Similarly, if $\operatorname{Ann} M \supsetneq \langle d^m \rangle$, then $\operatorname{Ann} M$ contains $u^m$.

Proof. By Lemma 4.1, $\operatorname{Ann} M$ contains a homogeneous element $x$ not in $I_c$. For definiteness assume that $x$ has degree $k \ge 0$; the case $k < 0$ is similar. Write $x = u^kf(h, H)$ where $f \in \mathbb{C}[x, y]$ is a nonzero polynomial. Since $x \notin I_c$, we can assume that $f$ is a nonzero distinctive polynomial. Among all elements of the form $u^kf(h, H)$ where $f$ is nonzero and distinctive, choose one where the degree of $f$ is minimal. Let's say the degree of $f$ is $(a, b)$. Then
$$xu - r^as^bux = u^kf(h, H)u - r^as^bu^{k+1}f(h, H) = u^{k+1}[f(rh, sH) - r^as^bf(h, H)].$$
The polynomial $f(rh, sH) - r^as^bf(h, H)$ is distinctive and has smaller degree than $f$, so it must be zero. By Lemma 4.7, $f(h, H)$ is a scalar multiple of $h^aH^b$, so $\operatorname{Ann} M$ contains $u^kh^aH^b$. Since $hM = HM = M$, we conclude that $\operatorname{Ann} M$ contains $u^k$.

Suppose now that we are in the special case of Lemma 4.6. Assume that $\operatorname{Ann} M \supsetneq \langle u^m \rangle$. As above, $\operatorname{Ann} M$ contains a homogeneous element $x$ of degree $k$ not in $\langle u^m \rangle$; here we can assume that $k < m$. We note that since $\operatorname{Ann} M$ contains $u^m$, it also contains $d^{m-k}u^m = d^{m-k}u^{m-k}u^k$. Here $d^{m-k}u^{m-k} = \prod_{i=1}^{m-k}s^i[H - (r^js^{-1})^iCh^j]$ is a polynomial whose $H$-degree is $m - k$. Thus we can write $x = u^kf(h, H)$ where $f$ is a polynomial whose $H$-degree is smaller than $m - k$. Therefore $\operatorname{Ann} M$ contains the element $d^kx = d^ku^kf(h, H) = g(h, H)$, a polynomial whose $H$-degree is at most $m - 1$. Since $S = \{i(jm, m) : i \in \mathbb{Z}\}$, the polynomial $g$ is distinctive. Thus $\operatorname{Ann} M$ contains an element of the form $d^ig(h, H)$ where $g$ is nonzero and distinctive. By the calculation above, $\operatorname{Ann} M$ contains a power of $d$. Since $d^mM = 0$ or $d^mM = M$, we conclude that $d^m \in \operatorname{Ann} M$, as required.

In most cases, if $\operatorname{Ann} M$ contains a power of $u$ or a power of $d$, then $M$ must be finite-dimensional.

Lemma 4.9.
Let $M$ be a simple module with $hM = HM = M$. Suppose $u^m \in \operatorname{Ann} M$ but $u^{m-1} \notin \operatorname{Ann} M$, $m \ge 1$. Then $\psi$ is not the zero polynomial and $M$ is finite-dimensional, except in the special case of Lemma 4.6. The same conclusion holds when $d^m \in \operatorname{Ann} M$ and $d^{m-1} \notin \operatorname{Ann} M$. In the special case of Lemma 4.6, if $\operatorname{Ann} M$ contains both $u^m$ and $d^m$, then $M$ is finite-dimensional.

Proof. If $\psi$ is the zero polynomial, then $du = sud$. Thus $uM$ is a submodule and hence must be either $0$ or $M$. (Similarly, either $dM = 0$ or $dM = M$.) Since $u^mM = 0$, we must have $uM = 0$. Thus $ud = H$ acts as the zero operator on $M$, contradicting $HM = M$.

So assume now that $\psi$ is nonzero. Recall that $d^mu - s^mud^m = [s^m\psi(h) - \psi(r^mh)]d^{m-1}$. If $\psi(h) = \sum_ia_ih^i$, then $s^m\psi(h) - \psi(r^mh) = \sum_ia_i(s^m - r^{im})h^i$, which is nonzero unless $s^m = r^{jm}$ for some $j \ge 0$ and $a_i = 0$ for $i \ne j$, i.e., unless we are in the special situation of Lemma 4.6. (Note that the equality $s^m = r^{jm}$ means that $r$ cannot be a root of unity, since that would force $s$ to be a root of unity also, and we have excluded this possibility. Thus $s^m = r^{jm}$ is true for at most one value of $j$.) We obtain the conclusion from Corollary 2.4.

Suppose now we are in the situation of Lemma 4.6. The proof in this case is quite similar to the proof of the last case of Lemma 3.7. Choose an element $v \in M$ such that $dv = 0$ (such an element exists since $d^m \in \operatorname{Ann} M$). Define $v_i = u^iv$ for $i \ge 0$; since $u^m \in \operatorname{Ann} M$, we have $v_{m+i} = 0$ for $i \ge 0$. Note that $0 = udv = (H - Ch^j)v$, so $Hv = Ch^jv$. Recall that as an operator on $M$, the operator $h$ has an inverse $h^{-1}$ (although the element $h$ is not invertible in $L$); thus $Hh^{-j}v = Cv$. Since $Hh^{-j}u = sr^{-j}uHh^{-j}$, we see that $Hh^{-j}v_i = (sr^{-j})^iCv_i$ for $0 \le i \le m - 1$. Each $v_i$ is an eigenvector of the operator $Hh^{-j}$, with distinct eigenvalues.

Since $M$ is simple, there exists $x \in L$ such that $xhv = v$. Since $dhv = 0$, we can assume that $x$ can be written as $\sum_ip_i(h)u^i$, so
$$\sum_ip_i(h)u^ihv = \sum_ir^{-i}p_i(h)hu^iv = \sum_ir^{-i}p_i(h)hv_i = v.$$
For each $i$, $r^{-i}p_i(h)hv_i$ is an eigenvector of $Hh^{-j}$ with eigenvalue $(sr^{-j})^iC$. They belong to distinct eigenspaces, so we must have $p_i(h)hv_i = 0$ for $i > 0$ and $p_0(h)hv = v$, i.e., $(p_0(h)h - 1)v = 0$. In other words, there exists a nontrivial polynomial in $h$ that annihilates $v$. Thus we can apply Lemma 2.3 to conclude that $M$ is finite-dimensional.

It remains to show that the ideals $I_c$, $\langle h \rangle$, $\langle H \rangle$, $\langle u \rangle$, $\langle d \rangle$, $\langle u^m \rangle$, $\langle d^m \rangle$ are all primitive ideals under the right conditions.

Lemma 4.10.
Suppose $L$ is conformal, with $\gamma = 0$. Then the following ideals are primitive:
1. $\langle h \rangle$ if $o(s) = \infty$;
2. $\langle H \rangle$ if $o(r) = \infty$ and $\psi$ is not identically zero;
3. $\langle u \rangle$ and $\langle d \rangle$ if $o(r) = \infty$ and $\psi$ is identically zero;
4. $\{0\}$ if $S(r, s) = \{(0, 0)\}$;
5. $\langle h^nH^m - c \rangle$ if $c \in \mathbb{C}^\times$ and $S(r, s) = \langle(n, -m)\rangle$ (where $n, m > 0$);
6. $\langle H^m - ch^n \rangle$, where $c \in \mathbb{C}^\times$, $S(r, s) = \langle(n, m)\rangle$ (where one of $n$ or $m$ is nonzero), and we are not in the case of Lemma 4.6;
7. $\langle u^m \rangle$ and $\langle d^m \rangle$ if we are in the case of Lemma 4.6, i.e., if $\psi(h) = Ch^j$ for some $C \ne 0$ and $j \ge 0$, and $S(r, s) = \langle(jm, m)\rangle$ for some $m > 0$.

Proof. We again make use of the universal weight module $W(\lambda, \beta)$ described in section 2.1. In this case, it is more convenient to use the parameters $\lambda$ and $\mu = \beta + \psi(\lambda)$ instead of $\lambda$ and $\beta$. We have $hv_i = r^i\lambda v_i$ and $Hv_i = s^i\mu v_i$, hence $udv_i = (H - \psi(h))v_i = [s^i\mu - \psi(r^i\lambda)]v_i$. The module $W(\lambda, \mu)$ is simple if the weights $(r^i\lambda, s^i\mu)$ are distinct and $s^i\mu - \psi(r^i\lambda) \ne 0$ for all $i \in \mathbb{Z}$.

Suppose now that $o(s) = \infty$. Set $\lambda = 0$. Then the weights $(r^i\lambda, s^i\mu) = (0, s^i\mu)$ are certainly distinct if $\mu \ne 0$, and we can choose a nonzero value of $\mu$ such that $s^i\mu - \psi(0) \ne 0$ for all $i \in \mathbb{Z}$. Thus $W(0, \mu)$ is simple; its annihilator is $\langle h \rangle$ by Lemma 4.5. This proves (1).

Suppose next that $o(r) = \infty$ and $\psi$ is not identically zero. Set $\mu = 0$. The weights $(r^i\lambda, s^i\mu) = (r^i\lambda, 0)$ are distinct if $\lambda \ne 0$. In this case, $s^i\mu - \psi(r^i\lambda)$ is equal to $-\psi(r^i\lambda)$. Since $\psi$ is not identically zero, there are only finitely many values $x$ such that $\psi(x) = 0$; thus there are only countably many values of $\lambda$ such that $\psi(r^i\lambda) = 0$ for some $i \in \mathbb{Z}$. If we choose a nonzero value of $\lambda$ outside these countably many special values, then $\psi(r^i\lambda) \ne 0$ for all $i \in \mathbb{Z}$. For this value of $\lambda$, $W(\lambda, 0)$ is simple. Its annihilator must be $\langle H \rangle$ by Lemma 4.4. This proves (2).

Suppose now that $S(r, s) = \langle(n, 0)\rangle$. Then $s$ is not a root of unity, so the weights $(r^i\lambda, s^i\mu)$ are distinct when $\mu \ne 0$. Now if $n \ne 0$, then for any $c \in \mathbb{C}^\times$, choose $\lambda$ so that $\lambda^n = 1/c$. If $n = 0$, we can choose any nonzero value for $\lambda$; in this case $c$ has to be 1. Then $h^n$ acts as the scalar $\lambda^n = 1/c$. Furthermore, there are only countably many values of $s^{-i}\psi(r^i\lambda)$ as $i$ ranges over $\mathbb{Z}$, so if we choose $\mu$ to be different from these countably many values, then we have $s^i\mu - \psi(r^i\lambda) \ne 0$ for all $i \in \mathbb{Z}$. Then $W(\lambda, \mu)$ is simple, and its annihilator must be $I_c$ (or $I_1 = \{0\}$ when $n = m = 0$) by Lemmas 4.8 and 4.9. This proves (4) and parts of (6).

Suppose next that $S(r, s) = \langle(n, -m)\rangle$. Then $h^nH^mv_i = r^{in}s^{im}\lambda^n\mu^mv_i = \lambda^n\mu^mv_i$, i.e., $h^nH^m$ acts as the scalar $\lambda^n\mu^m$ on $W(\lambda, \mu)$. I claim that for any $c \in \mathbb{C}^\times$, we can find values for $\lambda$ and $\mu$ such that $\lambda^n\mu^m = c$, the weights $(r^i\lambda, s^i\mu)$ are all distinct, and $s^i\mu - \psi(r^i\lambda) \ne 0$ for all $i \in \mathbb{Z}$. This would imply that $W(\lambda, \mu)$ is simple. Its annihilator contains $I_c$, and by Lemma 4.9 it cannot be larger than $I_c$.

To prove the claim, set $\mu^m = c/\lambda^n$. Then the equation $s^i\mu = \psi(r^i\lambda)$ implies that $s^{im}c = \lambda^n\psi(r^i\lambda)^m$. If $\psi$ is the zero polynomial, then there is no solution for any $i \in \mathbb{Z}$. Otherwise, for each $i \in \mathbb{Z}$, there are only finitely many values of $\lambda$ satisfying the equation (recall that $n > 0$, so $\lambda^n\psi(r^i\lambda)^m$ is a nontrivial polynomial in $\lambda$). Thus there are at most countably many values of $\lambda$ such that $s^{im}c = \lambda^n\psi(r^i\lambda)^m$ for some $i \in \mathbb{Z}$. If we choose a nonzero value of $\lambda$ different from any of these countably many values, we get $s^{im}c/\lambda^n \ne \psi(r^i\lambda)^m$ for all $i \in \mathbb{Z}$, and hence $s^i\mu - \psi(r^i\lambda) \ne 0$ for all $i \in \mathbb{Z}$. This proves the claim and finishes the proof of (5).

Suppose next that $S(r, s) = \langle(n, m)\rangle$ with $m \ne 0$. Then $H^mv_i = s^{im}\mu^mv_i = r^{in}\mu^mv_i = (\mu^m/\lambda^n)r^{in}\lambda^nv_i = (\mu^m/\lambda^n)h^nv_i$, i.e., $H^m - (\mu^m/\lambda^n)h^n$ is in $\operatorname{Ann} W(\lambda, \mu)$. I now claim that, except in the case of Lemma 4.6, for any $c \in \mathbb{C}^\times$ we can always find values of $\lambda$ and $\mu$ such that $c = \mu^m/\lambda^n$, the weights $(r^i\lambda, s^i\mu)$ are all distinct, and $s^i\mu - \psi(r^i\lambda) \ne 0$ for all $i \in \mathbb{Z}$. As before, this would imply that $W(\lambda, \mu)$ is simple; its annihilator contains $I_c$, and by Lemma 4.9 it cannot be larger than $I_c$.

To prove the claim, we set $\mu^m = c\lambda^n$. Then the equation $s^i\mu = \psi(r^i\lambda)$ implies that $s^{im}c\lambda^n = \psi(r^i\lambda)^m$. This is a nontrivial polynomial equation in $\lambda$, unless $\psi(h) = Ch^j$ for some $j \ge 0$ with $n = jm$ and $r^{ijm}C^m = s^{im}c$ (and note that the last equality implies $C^m = c$ since $r^{jm} = s^m$). Thus outside the special case of Lemma 4.6, there are at most countably many values of $\lambda$ such that $s^i\mu - \psi(r^i\lambda) = 0$ for some $i \in \mathbb{Z}$. All we have to do now is pick a nonzero value of $\lambda$ outside this countable set of values; as before, this suffices to prove the claim. We have now proven (6).

We now tackle the special case of Lemma 4.6. Recall that in this case we have $r^n = s^m$, $n = jm$, $\psi(h) = Ch^j$, and $o(r^js^{-1}) = m$. Write $\theta$ for $r^js^{-1}$. We construct an infinite-dimensional simple module whose annihilator is $\langle d^m \rangle$; we can show that $\langle u^m \rangle$ is primitive in a similar fashion.

The construction proceeds as follows. (Note that it works even when $j = 0$.) The module $M$ has basis $\{v_i : i \in \mathbb{Z}\}$, and the action of $L$ on $M$ is given by
$$uv_i = v_{i+1}, \qquad dv_i = Cs^i(1 - \theta^i)v_{i-n-1}, \qquad hv_i = r^{i+(n-m)/2}v_{i-m};$$
note that we might need a square root of $r$. It is not hard to check that $huv_i = ruhv_i$ for all $i \in \mathbb{Z}$. Verifying the other relations is not as straightforward. We have
$$dhv_i = r^{i+(n-m)/2}dv_{i-m} = Cr^{i+(n-m)/2}s^{i-m}(1 - \theta^{i-m})v_{i-m-n-1},$$
$$rhdv_i = rCs^i(1 - \theta^i)hv_{i-n-1} = Cr^{i-n+(n-m)/2}s^i(1 - \theta^i)v_{i-n-1-m}.$$
Since $\theta^m = 1$ and $r^n = s^m$, the two expressions are equal, proving that $dhv_i = rhdv_i$ for all $i \in \mathbb{Z}$.

To verify the last relation, we first calculate that $h^jv_i = r^{ij}v_{i-n}$. In the algebra $L$, we have $du - sud = (s - r^j)Ch^j$. Now
$$(du - sud)v_i = Cs^{i+1}[(1 - \theta^{i+1}) - (1 - \theta^i)]v_{i-n} = Cs^{i+1}\theta^i(1 - \theta)v_{i-n},$$
while
$$(s - r^j)Ch^jv_i = (s - r^j)Cr^{ij}v_{i-n} = C(s - s\theta)(s\theta)^iv_{i-n} = Cs^{i+1}\theta^i(1 - \theta)v_{i-n},$$
thus verifying that $M$ is indeed an $L$-module.

It is easy to show that $M$ is simple. Each $v_i$ is an eigenvector of $u^mh$ with eigenvalue $r^{(n-m)/2}r^i$. These eigenvalues are distinct (recall that $r$ is not a root of unity), implying that $M$ is simple. Note that $d^m \in \operatorname{Ann} M$; by Lemma 4.9 we must have $\operatorname{Ann} M = \langle d^m \rangle$. This proves (7).

We now note that the construction above works when $C = 0$ (hence $\psi = 0$) and $n = m = 1$. The action of $L$ becomes
$$uv_i = v_{i+1}, \qquad dv_i = 0, \qquad hv_i = r^iv_{i-1}.$$
It is straightforward to check that with this action of $L$, $M$ is still an $L$-module no matter the values of $r$ and $s$, as long as $\psi$ is identically zero. When $r$ is not a root of unity, the proof above shows that $M$ is simple. Also as above, its annihilator is $\langle d \rangle$. Thus $\langle d \rangle$ is primitive. Similarly, $\langle u \rangle$ is primitive. This proves (3).

We now summarize our results and provide a list of primitive ideals of $L$ when $L$ is conformal. As before we include only primitive ideals that are annihilators of infinite-dimensional modules.
In the first table below, we list the primitive ideals when r or s is a root of unity.

o(r) | o(s) | ψ               | primitive ideals
n    | m    | any             | −
n    | ∞    | any             | ⟨h⟩, ⟨h^n − c⟩ (c ∈ C^×)
∞    | m    | 0               | ⟨u⟩, ⟨d⟩, ⟨H^m − c⟩ (c ∈ C^×)
∞    | m    | constant C ≠ 0  | ⟨u^m⟩, ⟨d^m⟩, ⟨H⟩, ⟨H^m − c^m⟩ (c ∈ C^×, c^m ≠ C^m)
∞    | m    | nonconstant     | ⟨H⟩, ⟨H^m − c⟩ (c ∈ C^×)

In the second table, we list primitive ideals when both r and s are not roots of unity. The results are expressed in terms of the generator of S(r, s). In the table, n, m, and j are all positive integers.

generator       | ψ              | primitive ideals
(0, 0)          | 0              | {0}, ⟨h⟩, ⟨u⟩, ⟨d⟩
(0, 0)          | nonzero        | {0}, ⟨h⟩, ⟨H⟩
(n, −m)         | 0              | ⟨h⟩, ⟨u⟩, ⟨d⟩, ⟨h^n H^m − c⟩ (c ∈ C^×)
(n, −m)         | nonzero        | ⟨h⟩, ⟨H⟩, ⟨h^n H^m − c⟩ (c ∈ C^×)
(jm, m)         | Ch^j (C ≠ 0)   | ⟨h⟩, ⟨H⟩, ⟨u^m⟩, ⟨d^m⟩, ⟨H^m − ch^{jm}⟩ (c ∈ C^×, c ≠ C^m)
(jm, m)         | 0              | ⟨h⟩, ⟨u⟩, ⟨d⟩, ⟨H^m − ch^{jm}⟩ (c ∈ C^×)
(jm, m)         | ≠ 0, ≠ Ch^j    | ⟨h⟩, ⟨H⟩, ⟨H^m − ch^{jm}⟩ (c ∈ C^×)
(n, m), n ≠ jm  | 0              | ⟨h⟩, ⟨u⟩, ⟨d⟩, ⟨H^m − ch^n⟩ (c ∈ C^×)
(n, m), n ≠ jm  | nonzero        | ⟨h⟩, ⟨H⟩, ⟨H^m − ch^n⟩ (c ∈ C^×)

5 L is not conformal, γ = 0

In this section we assume that L is not conformal and γ = 0. If φ(h) = Σ_i a_i h^i, recall that our assumption means s = r^j and a_j ≠ 0 for some j ≥
0. If r is not a root of unity, then there is only one such value of j, but if r is a root of unity, then there could be more than one value of j.

We now establish some notation. Suppose first that r is a root of unity, with o(r) = n. We fix j so that s = r^j and 0 ≤ j ≤ n − 1. The polynomial φ can then be written as a sum of a conformal part φ_0 and a nonconformal part φ_1, where

φ_0(h) = Σ_{i ≢ j} a_i h^i,    φ_1(h) = Σ_{i ≡ j} a_i h^i

(the congruences are mod n). Note that φ_0 is conformal, while φ_1 can be written as φ_1(h) = s h^j φ̃(h^n) for some polynomial φ̃. (We use the constant factor s in order to make subsequent formulas easier to work with.) If ψ is a conformal polynomial corresponding to φ_0, then φ(h) = sψ(h) − ψ(rh) + s h^j φ̃(h^n).

The case n = 1 (i.e., r = s = 1) perhaps deserves a special mention. Here φ_0(h) is the zero polynomial, and φ̃(h^n) = φ̃(h) = s^{−1} φ(h). Thus we can take ψ(h) to be zero, and all the action takes place in the nonconformal part.

The situation is a bit simpler when r is not a root of unity. We still split φ into a conformal part and a nonconformal part, but the nonconformal part is just a monomial: φ_1(h) = sCh^j for some C ≠ 0. Note that this second case can be subsumed into the previous case with φ̃(h) = C. Thus we can include the second case in the first, even though h^n doesn't make sense when r and s are not roots of unity.

Indeed, the two cases can be made even more similar. Suppose again that o(r) = n. Since hu = ruh, we have h^n u = r^n u h^n = u h^n. Similarly, d h^n = h^n d. Thus h^n is a central element, so φ̃(h^n) also commutes with everything in L. It follows that if M is a simple module, then φ̃(h^n) acts as a scalar C = C(M). If C = 0, then the nonconformal part φ_1(h) acts as the zero operator on M, and thus we are in the same situation as the conformal case. Looking at the first line of the first table in section 4.7, we conclude that in this case M must be finite-dimensional. Therefore we can assume from now on that the scalar C is nonzero; we can then write φ(h) = sψ(h) − ψ(rh) + sCh^j as operators on M. This is just like the case where o(r) = ∞.

In either case, just as in the conformal case, we define H to be ud + ψ(h). A straightforward calculation then shows that Hu = su(H + φ̃(h^n)h^j) and dH = s(H + φ̃(h^n)h^j)d. Then Hx_i = s^i x_i (H + i φ̃(h^n) h^j) for x_i ∈ L_i. We can replace φ̃(h^n) by the nonzero constant C if the terms are viewed as operators on a simple module M.

We have the following lemma about homogeneous elements in ideals, the analogue of Lemmas 3.1 and 4.1. It is unavoidably more complicated in the case o(r) = n.

Lemma 5.1.
Let I be an ideal of L. If o(r) = n, assume also that φ̃(h^n) − C ∈ I, where C ≠ 0. Suppose x ∈ I and x = Σ_{i∈Z} x_i (where x_i ∈ L_i) is the homogeneous decomposition of x. Then there exists an integer m ≥ 0 such that x_i h^m ∈ I for all i ∈ Z.

Proof. If o(r) = ∞, this is simply Lemma 4.1. Suppose now that o(r) = n. We again use induction on the length of x. For a given k ∈ Z with x_k ≠ 0, we have

hx − r^k xh = Σ_{i∈Z} (r^i − r^k) x_i h.

The element hx − r^k xh thus has smaller length than x. Then for all i ∈ Z such that r^i ≠ r^k, we have x_i h^m ∈ I for some m ≥ 0. Therefore the element y = Σ_{i : r^i = r^k} x_i h^m is also in I. It follows that

Hy = Σ_{i : r^i = r^k} s^i x_i (H + i φ̃(h^n) h^j) h^m

is also in I. Since φ̃(h^n) − C is an element of I, we conclude that z = Σ_{i : r^i = r^k} s^i x_i (H + iCh^j) h^m ∈ I. Now recall that r^i = r^k implies that s^i = s^k. Thus

z − s^k yH − k s^k C y h^j = Σ_{i : r^i = r^k} [s^i x_i H h^m + i s^i C x_i h^{j+m}] − Σ_{i : r^i = r^k} [s^k x_i h^m H + k s^k C x_i h^{m+j}] = Σ_{i : r^i = r^k} s^k (i − k) C x_i h^{m+j}.

This element has smaller length than y, so by the induction hypothesis, for each i ≠ k, the homogeneous element x_i h^{m′} is in I. It follows that x_k h^{m′} is also in I.

Now let M be a simple module. As before, either hM = 0 or hM = M. We first look at the situation where hM = 0.

5.2 The case hM = 0

In this case h acts as the zero operator on M. Recall that the nonconformal part of φ(h) can be written as φ_1(h) = sCh^j, where C ≠ 0. Thus if j > 0, φ_1(h) also acts as the zero operator on M. In this case we have exactly the same situation as the conformal case: if s is not a root of unity, then Ann M cannot be larger than ⟨h⟩, but if s is a root of unity, then M must be finite-dimensional.

Suppose now that j = 0. Then s = 1 and φ_1(0) = C; also, Hu = u(H + C) and dH = (H + C)d. These relations are exactly analogous to the relations between h and u (and between h and d) in the case r = 1, γ ≠ 0, with H taking the role of h and C taking the role of γ. In particular, Lemmas 3.1 and 3.2 are valid; that is, if Ann M ⊋ ⟨h⟩, then Ann M contains a power of u or a power of d. Let's say Ann M contains d^k but not d^{k−1} for some k > 0. Since du = ud + C as operators on M, we have d^k u = u d^k + kC d^{k−1}; this is a contradiction, since it implies that C d^{k−1} annihilates M. Thus in this case, Ann M cannot be larger than just ⟨h⟩.

5.3 The case hM = M

If o(r) = n, recall that h^n acts as a scalar on M. Thus any primitive ideal must contain ⟨h^n − c⟩ for some c ∈ C. We are assuming that φ̃(c) = C ≠ 0; since hM = M, it is also true that c ≠ 0. (On the other hand, if o(r) = ∞, we have no obvious information on what a primitive ideal must contain.)

We need the following result on polynomials of two variables. It is a version of Lemmas 3.5 and 4.7, but unfortunately it is rather more complicated.

Lemma 5.2.
Suppose r^j = s and f(x, y) = Σ_{i=0}^{b} g_i(x) y^i is a two-variable polynomial of degree (a, b), where deg g_i(x) < o(r) for 0 ≤ i ≤ b. If f(rx, sCx^j + sy) = r^a s^b f(x, y) for some C ≠ 0, then f(x, y) is a constant multiple of x^a.

Proof. We first want to show that b = 0. Assume for a contradiction that b > 0. Then we can write f(x, y) = g_b(x) y^b + g_{b−1}(x) y^{b−1} + · · · , where g_b(x) has degree a. Then

f(rx, sCx^j + sy) = g_b(rx)(sCx^j + sy)^b + g_{b−1}(rx)(sCx^j + sy)^{b−1} + · · ·
= s^b g_b(rx) y^b + s^b g_b(rx) bCx^j y^{b−1} + s^{b−1} g_{b−1}(rx) y^{b−1} + · · ·
= s^b g_b(rx) y^b + [bC s^b x^j g_b(rx) + s^{b−1} g_{b−1}(rx)] y^{b−1} + · · · .

This is equal to r^a s^b g_b(x) y^b + r^a s^b g_{b−1}(x) y^{b−1} + · · · . Equating the coefficients of y^b and of y^{b−1}, we get

g_b(rx) = r^a g_b(x),
bC s^b x^j g_b(rx) + s^{b−1} g_{b−1}(rx) = r^a s^b g_{b−1}(x).

Simplifying the second equation gives us

bC r^{a+j} x^j g_b(x) = r^{a+j} g_{b−1}(x) − g_{b−1}(rx).

The left-hand side is a polynomial of degree a + j, since g_b has degree a. Now look at the right-hand side. If g_{b−1}(x) = Σ c_i x^i, then

r^{a+j} g_{b−1}(x) − g_{b−1}(rx) = Σ c_i (r^{a+j} − r^i) x^i,

so the coefficient of x^{a+j} is 0. We have the required contradiction, thus indeed b = 0.

We therefore conclude that f(x, y) = g_0(x). Suppose g_0(x) = Σ_{i=0}^{a} c_i x^i; recall that a < o(r). We require that g_0(rx) = r^a g_0(x), but this implies that Σ c_i (r^i − r^a) x^i = 0, thus showing that c_i = 0 for i ≠ a. We conclude that g_0(x) = c_a x^a, as required.

As usual, we use this lemma to prove that large primitive ideals contain a power of u or a power of d.

Lemma 5.3.
Suppose M is simple. If o(r) = n and Ann M ⊋ ⟨h^n − c⟩ (where c ≠ 0), then Ann M contains a power of u or a power of d. Similarly, if o(r) = ∞ and Ann M ≠ {0}, then Ann M contains a power of u or a power of d.

Proof. Write I for the zero ideal if o(r) = ∞ and for ⟨h^n − c⟩ if o(r) = n. Suppose Ann M ⊋ I. By Lemma 5.1, Ann M contains a homogeneous element not in I. It does no harm to assume that this homogeneous element has nonnegative degree. Thus Ann M contains elements of the form x = u^k f(h, H), where f is a nonzero polynomial that can be written as f(h, H) = Σ_i g_i(h) H^i with the degree of each g_i being less than o(r). Choose x = u^k f(h, H) to be such an element where the degree (a, b) of the polynomial f is as small as possible. Then

xu − r^a s^b ux = u^{k+1} [f(rh, sH + sCh^j) − r^a s^b f(h, H)]

is also in Ann M, and the degree of the polynomial f(rh, sH + sCh^j) − r^a s^b f(h, H) is smaller than (a, b). Thus it must be zero. By Lemma 5.2, f(h, H) must be a (nonzero) multiple of h^a. Since hM = M, we conclude that u^k ∈ Ann M, as required.

Lemma 5.4.
Let M be a simple module and suppose Ann M contains a power of u or a power of d. Then M is finite-dimensional.

Proof. Let's say Ann M contains d^k but not d^{k−1}. Since du = sud + φ_0(h) + sCh^j, we have

d^k u − s^k u d^k = [ Σ_{i=0}^{k−1} s^i φ_0(r^{k−i−1} h) + k s^k C h^j ] d^{k−1}.

The polynomial in square brackets is nonzero, since none of the monomials in φ_0 can cancel out h^j. By Corollary 2.4, M is finite-dimensional.

It remains to show that {0} is a primitive ideal if o(r) = ∞; also that ⟨h^n − c⟩ is a primitive ideal when o(r) = n.

Lemma 5.5.
Suppose L is not conformal, s = r^j, and r is not a root of unity. Then {0} is a primitive ideal.

On the other hand, suppose o(r) = n and c ∈ C^× satisfies φ̃(c) ≠ 0. Then the ideal ⟨h^n − c⟩ is a primitive ideal.

Suppose further that j = 0 and φ̃(0) ≠ 0. Then ⟨h⟩ is a primitive ideal.

Proof. As usual, we use the universal weight modules W(λ, β). As before, it is more convenient to use the parametrization (λ, µ), where µ = β_0 + ψ(λ) is the H-eigenvalue of the vector v_0. Since Hu = su(H + Ch^j) (where C = φ̃(c) if o(r) = n; if o(r) = ∞, C is simply the coefficient of h^j in φ(h)), we can figure out that Hv_i = s^i (µ + iCλ^j) v_i for all i ∈ Z. Thus udv_i = [s^i (µ + iCλ^j) − ψ(r^i λ)] v_i. Note that each v_i is an eigenvector of H with distinct eigenvalues; thus W(λ, µ) is simple if β_i = s^i (µ + iCλ^j) − ψ(r^i λ) is nonzero for all i ∈ Z. Now for any given value of λ, there are only countably many values of s^{−i} ψ(r^i λ) − iCλ^j as i ranges over Z. If we pick µ outside these values, then β_i ≠ 0 for all i ∈ Z and W(λ, µ) would be simple. We now see what happens for various values of λ.

If λ = 0, then h acts as the zero operator on W(λ, µ). If in addition j = 0 and φ̃(0) ≠ 0, then the annihilator of W(λ, µ) cannot be larger than ⟨h⟩, as discussed in section 5.2. Thus ⟨h⟩ is indeed a primitive ideal in this situation.

If λ ≠ 0, then h does not act as the zero operator, hence hW(λ, µ) = W(λ, µ). If in addition r is not a root of unity, then the annihilator of W(λ, µ) cannot be larger than {0}, by Lemmas 5.3 and 5.4, so {0} is indeed a primitive ideal.

If o(r) = n, then for any value c ∈ C^× such that φ̃(c) ≠ 0, we can pick λ ∈ C with λ^n = c (note that λ ≠ 0); after that, we pick µ as above to make W(λ, µ) a simple module. Then h^n acts as the scalar c on W(λ, µ). The annihilator of W(λ, µ) is just ⟨h^n − c⟩ by Lemmas 5.3 and 5.4.
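The eigenvalue formula Hv_i = s^i(µ + iCλ^j)v_i used in the proof can be sanity-checked numerically. In the sketch below (not part of the paper), the sample values r = 2, j = 1, C = 3, λ = 5, µ = 7 and the polynomial ψ(h) = h^2 + 1 are assumptions chosen purely for illustration; the script builds the coefficients δ_i of the weight-module action d·v_i = δ_i v_{i−1} from the defining relation du − sud = φ(h) = sψ(h) − ψ(rh) + sCh^j and confirms the formula for 0 ≤ i ≤ 40 in exact arithmetic.

```python
from fractions import Fraction

# Assumed sample data (hypothetical): r = 2, j = 1 (so s = r^j = 2),
# C = 3, lambda = 5, mu = 7, psi(h) = h^2 + 1.
r, j, C, lam, mu = Fraction(2), 1, Fraction(3), Fraction(5), Fraction(7)
s = r**j

def psi(t):
    return t**2 + 1

def phi(t):
    # phi(h) = s*psi(h) - psi(r*h) + s*C*h^j as operators on M
    return s*psi(t) - psi(r*t) + s*C*t**j

# On the weight module: u.v_i = v_{i+1}, h.v_i = r^i*lam*v_i,
# d.v_i = delta_i v_{i-1}; the relation du - s*ud = phi(h) forces
# delta_{i+1} = s*delta_i + phi(r^i * lam).
delta = {0: mu - psi(lam)}          # mu is the H-eigenvalue of v_0
for i in range(40):
    delta[i + 1] = s * delta[i] + phi(r**i * lam)

# Check H.v_i = s^i (mu + i*C*lam^j) v_i, where H = ud + psi(h)
# and (ud).v_i = delta_i v_i.
for i in range(41):
    assert delta[i] + psi(r**i * lam) == s**i * (mu + i * C * lam**j)

print("eigenvalue formula verified")
```

The check only exercises nonnegative i, but the recursion for δ_i runs the same way in both directions, so it illustrates how the conformal part ψ telescopes away, leaving the arithmetic-progression term iCλ^j.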
We can put all these results together to generate an explicit list of the primitive ideals of L when L is not conformal. Recall that s = r^j. If o(r) = n, we write φ(h) = φ_0(h) + s φ̃(h^n) h^j, where φ_0(h) = Σ_{i ≢ j} a_i h^i. As before, we list primitive ideals that are annihilators of infinite-dimensional modules.

j    | o(r) | φ̃(0)    | primitive ideals
any  | ∞    | n/a     | {0}, ⟨h⟩
0    | n    | zero    | ⟨h^n − c⟩ (c ∈ C^×, φ̃(c) ≠ 0)
0    | n    | nonzero | ⟨h⟩, ⟨h^n − c⟩ (c ∈ C^×, φ̃(c) ≠ 0)
> 0  | n    | any     | ⟨h^n − c⟩ (c ∈ C^×, φ̃(c) ≠ 0)

References

[BR] G. Benkart and T. Roby, Down-up algebras, J. Algebra (1998), 305–344; Addendum, J. Algebra (1999), 378.
[CL] P. Carvalho and S. Lopes, Automorphisms of generalized down-up algebras, Comm. Algebra (2009), 1622–1646.
[CS] T. Cassidy and B. Shelton, Basic properties of generalized down-up algebras, J. Algebra (2004), 402–421.
[D] J. Dixmier, Enveloping Algebras, American Mathematical Society, Providence, RI (1996).
[J] D. Jordan, Down-up algebras and ambiskew polynomial rings, J. Algebra (2000), 311–346.
[LB] L. Le Bruyn, Conformal sl_2 enveloping algebras, Comm. Algebra (1995), 1325–1362.
[MR] J. McConnell and J. Robson, Noncommutative Noetherian Rings, John Wiley and Sons, New York, 1987.
[P1] I. Praton, Primitive ideals of noetherian down-up algebras, Comm. Algebra (2004), 443–471.
[P2] I. Praton, Simple weight modules of non-noetherian generalized down-up algebras, Comm. Algebra (2007), 325–337.
[P3] I. Praton, Simple modules and primitive ideals of non-noetherian generalized down-up algebras, Comm. Algebra (2009), 811–839.
[PM] I. Praton and S. May, Primitive ideals of non-noetherian down-up algebras, Comm. Algebra (2005), 605–622.
[R] S. Rueda, Some algebras similar to the enveloping algebra of sl_2, Comm. Algebra (2002), 1127–1152.
[S] S. P. Smith, A class of algebras similar to the enveloping algebra of sl_2, Trans. Amer. Math. Soc. (1990), 285–314.

Department of Mathematics
Franklin and Marshall College, Lancaster, PA 17604
[email protected]@fandm.edu
J. Algebra (1998),305–344; Addendum, (1999), 378.[CL] P. Carvalho and S. Lopes, Automorphisms of generalized down-upalgebras,
Comm. Algebra (2009), 1622–1646.[CS] T. Cassidy and B. Shelton, Basic properties of generalized down-upalgebras, J. Algebra (2004), 402–421.[D] J. Dixmier,
Enveloping Algebras , American Mathematical Society,Providence, RI (1996).[J] D. Jordan, Down-up algebras and ambiskew polynomial rings,
J.Algebra (2000), 311–346.[LB] L. Le Bruyn, Conformal sl enveloping algebras, Comm. Algebra (1995), 1325–1362.[MR] J. McConnell and J. Robson, Noncommutative Noetherian Rings ,John Wiley and Sons, New York, 1987.[P1] I. Praton, Primitive ideals of noetherian down-up algebras,
Comm.Algebra (2004), 443–471.[P2] I. Praton, Simple weight modules of non-noetherian generalizeddown-up algebras, Comm. Algebra , (2007), 325–337.[P3] I. Praton, Simple Modules and Primitive Ideals of Non-NoetherianGeneralized Down-Up Algebras, Comm. Algebra (2009), 811–839. 37PM] I. Praton and S. May, Primitive ideals of non-noetherian down-upalgebras, Comm. Algebra (2005), 605–622.[R] S. Rueda, Some algebras similar to the enveloping algebra of sl , Comm. Algebra (2002), 1127–1152.[S] S. P. Smith, A class of algebras similar to the enveloping algebra of sl , Trans. Amer. Math. Soc (1990), 285–314.Department of MathematicsFranklin and Marshall College, Lancaster, PA 17604 [email protected]@fandm.edu