On primitive axial algebras of Jordan type
J.I. HALL, Y. SEGEV, S. SHPECTOROV
Dedicated to Professor Robert L. Griess, Jr. on the occasion of his birthday

Abstract.
In this note we give an overview of our knowledge regarding primitive axial algebras of Jordan type half and connections between 3-transposition groups and Matsuo algebras. We also show that primitive axial algebras of Jordan type η admit a Frobenius form, for any η.

1. Introduction
The purpose of this note is threefold. In §2 we give an overview of our knowledge regarding primitive axial algebras of Jordan type half. This is taken from [HSS]. In fact we focus in §2 on the case η = 1/2; the cases η ≠ 1/2 are reviewed (amongst other things) by Jon Hall in another paper of this volume. In §3 we complete, for the case η = 1/2, a result connecting 3-transposition groups and Matsuo algebras, established in [HRS, Theorem 6.3] for η ≠ 1/2. In §4 we prove that a primitive axial algebra of Jordan type η (any η) admits a Frobenius form.

We start by recalling a few definitions. We do not give the historical background, as it can best be found in the introduction to [HRS]. All algebras A in this note are commutative and non-associative over a field F of characteristic not 2. For a ∈ A the adjoint operator ad_a is multiplication by a, so

ad_a : A → A, x ↦ xa.

An axis in A is, by definition, a semisimple idempotent, i.e., an idempotent whose minimal ad-polynomial is a product of distinct linear factors; here the minimal ad-polynomial is the minimal polynomial of the linear operator ad_a (we are not assuming that A is finite dimensional; however, we are assuming that ad_a has a minimal polynomial).

Axial algebras, introduced recently by Hall, Rehren and Shpectorov ([HRS]), are, by definition, algebras generated by axes. When certain fusion rules, i.e. multiplication rules between the eigenspaces corresponding to an axis, are imposed, the structure of axial algebras remains interesting, yet it is more rigid.

Date: May 9, 2017.
2010 Mathematics Subject Classification. Primary: 17A99; Secondary: 17C99, 17B69.
Key words and phrases. Axial algebra, 3-transposition, Jordan algebra, Frobenius form.

Given an element a ∈ A and a scalar λ ∈ F, the λ-eigenspace of ad_a is denoted A_λ(a), so

A_λ(a) := { x ∈ A | xa = λx }.

(We allow A_λ(a) = 0.) Axial algebras of Jordan type η, where η ∉ {0, 1} is fixed, are algebras generated by a set of axes A such that for each a ∈ A:

(1) The minimal ad-polynomial of a divides (x − 1)x(x − η).
(2) The fusion rules imitate the Peirce multiplication rules in Jordan algebras. These fusion rules are:

A_1(a)A_1(a) ⊆ A_1(a) and A_0(a)A_0(a) ⊆ A_0(a),
A_1(a)A_0(a) = {0},
(A_1(a) + A_0(a))A_η(a) ⊆ A_η(a), and
A_η(a)A_η(a) ⊆ A_1(a) + A_0(a).

In particular, if we set A_+(a) = A_1(a) ⊕ A_0(a) and A_−(a) = A_η(a), then A_δ(a)A_ε(a) ⊆ A_{δε}(a), for δ, ε ∈ {+, −}. Thus, for example, Jordan algebras are axial algebras of Jordan type 1/2, provided that they are generated by idempotents.

An axis a ∈ A is absolutely primitive if A_1(a) = Fa (this is stronger than the usual notion of primitivity). We call an absolutely primitive axis a satisfying (1), (2) above an η-axis. A primitive axial algebra of Jordan type η is an algebra generated by η-axes. For η ≠ 1/2, primitive axial algebras of Jordan type η were thoroughly analyzed by Hall, Rehren, and Shpectorov in [HRS]. The case η = 1/2 is much less understood and is of a different nature. This case is the focus of [HSS] and of §§2–3.

Given an η-axis a ∈ A, recall that A = A_+(a) ⊕ A_−(a), where A_+(a) = A_1(a) ⊕ A_0(a) and A_−(a) = A_η(a). The map τ(a) : A → A defined by x^{τ(a)} = x_+ − x_−, where x = x_+ + x_− with x_+ ∈ A_+(a) and x_− ∈ A_−(a), is an automorphism of A of order 1 or 2. It is called the Miyamoto involution corresponding to a.

Jordan algebras of Clifford type.
A Jordan algebra of Clifford type J(V, B) consists of the following information:

(1) A vector space V over F together with a symmetric bilinear form B on V. The corresponding quadratic form is denoted q(v) = B(v, v).
(2) The Jordan algebra J(V, B) is F1 ⊕ V with multiplication defined by: 1 is the identity and

v ∗ w = B(v, w)1, for all v, w ∈ V.

The algebra J(V, B) comes from the associative Clifford algebra Cl(V, q): it is a sub-Jordan algebra of Cl(V, q)^+, where, as usual, A^+ denotes the special Jordan algebra that emerges from the associative algebra A.

Let J = J(V, B). It is easy to check that:

(a) For u ∈ V and α ∈ F, the element α1 + u is an idempotent if and only if α = 1/2 and q(u) = 1/4.
(b) Assume that a = (1/2)1 + u is an idempotent in J. Then
(i) J_1(a) = Fa, so a is a (1/2)-axis. (Thus J(V, B) is a primitive axial algebra of Jordan type 1/2 iff it is generated by idempotents.)
(ii) J_0(a) = F(1 − a) (of course 1 − a is a (1/2)-axis), and
(iii) J_{1/2}(a) = u^⊥ = J_{1/2}(1 − a), where u^⊥ = { v ∈ V | B(u, v) = 0 }.
(c) It follows that τ(a) = τ(1 − a), for any (1/2)-axis a.

The purpose of §2 is to establish a converse to these observations.

2. Primitive axial algebras of Jordan type half
Throughout this section A is a primitive axial algebra of Jordan type η, generated by a set A of η-axes. Let ∆ be the graph on the set of all η-axes of A, where distinct a, b form an edge iff ab ≠ 0. Let also ∆_A be the full subgraph of ∆ on the set A. The purpose of this section is to sketch a proof of the following theorem:

Theorem 2.1.
Assume that ∆_A is connected and that there are two distinct η-axes a, b ∈ A such that τ(a) = τ(b). Then η = 1/2, a + b = 1 is the identity of A, and A is a Jordan algebra of Clifford type.

In the remainder of this section we sketch a proof of Theorem 2.1. First we need a theorem that enables us to identify A as a Jordan algebra of Clifford type in the case η = 1/2.

Theorem 2.2.
Let η = 1/2. Assume that A contains two (1/2)-axes a, b ∈ A such that a + b = 1_A and such that v_a v_c ∈ F1_A, for all c ∈ A, where v_c = c − (1/2)1_A. Then A is a Jordan algebra of Clifford type.
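The facts (a)–(c) about J(V, B) quoted in §1 are easy to verify in a concrete instance. The following sketch (ours, not from the paper) realizes J(V, B) over the rationals with V = Q^3 and B the standard dot product, and checks that a = (1/2)1 + u with q(u) = 1/4 is an idempotent whose 0- and (1/2)-eigenvectors behave as described.

```python
from fractions import Fraction as Fr

# Sketch (not from the paper): J(V, B) = F1 ⊕ V over the rationals, with
# V = Q^3 and B the standard dot product (our illustrative choice).
# Elements are pairs (α, v) meaning α1 + v, multiplied by
# (α1 + v)(β1 + w) = (αβ + B(v, w))1 + (αw + βv).

def B(v, w):
    return sum(x * y for x, y in zip(v, w))

def mul(x, y):
    (al, v), (be, w) = x, y
    return (al * be + B(v, w),
            tuple(al * wi + be * vi for vi, wi in zip(v, w)))

half = Fr(1, 2)
u = (half, Fr(0), Fr(0))           # q(u) = B(u, u) = 1/4, as in fact (a)
a = (half, u)                      # the idempotent a = (1/2)1 + u

assert mul(a, a) == a              # fact (a): a is an idempotent

b = (half, tuple(-x for x in u))   # b = 1 - a
zero = (Fr(0), (Fr(0), Fr(0), Fr(0)))
assert mul(a, b) == zero           # fact (b)(ii): 1 - a lies in A_0(a)

v = (Fr(0), (Fr(0), Fr(1), Fr(0)))                 # a vector in u^⊥
assert mul(a, v) == (Fr(0), (Fr(0), half, Fr(0)))  # fact (b)(iii): eigenvalue 1/2
```

The same computation with any u of norm 1/4 and any vector orthogonal to it gives the full eigenspace decomposition of ad_a.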
We do not include a proof of Theorem 2.2; see [HSS, Theorem 5.4]. We will need some information about 2-generated subalgebras of A. This information is taken from [HRS]. Let a, b ∈ ∆ with a ≠ b. We denote by N_{a,b} the subalgebra generated by a and b. If N_{a,b} contains an identity element, we denote it by 1_{a,b}. Note that by [HRS], 2-generated subalgebras are at most 3-dimensional.

Lemma 2.3 (Lemma 3.1.2 in [HSS]). Let a, b ∈ ∆ with a ≠ b. Then N_{a,b} is 2-dimensional precisely in the following cases:

(1) ab = 0; we then denote N_{a,b} = 2B_{a,b}.
(2) η = −1, ab = −a − b; we then denote N_{a,b} = 3C(−1)^×_{a,b}.
(3) η = 1/2, ab = (1/2)(a + b); we then denote N_{a,b} = J_{a,b}.

Furthermore, (4) the algebras N_{a,b} in cases (2) and (3) above do not have an identity element.

The following proposition deals with 2-generated 3-dimensional subalgebras.
Proposition 2.4 (Proposition 4.6 in [HRS]). Let a, b ∈ ∆ with a ≠ b. Then N_{a,b} is 3-dimensional precisely when ab ≠ 0 and there exists 0 ≠ σ ∈ N_{a,b} and a scalar ϕ = ϕ_{a,b} ∈ F such that, if we set π = π_{a,b} = (1 − η)ϕ − η, then

(1) ab = σ + ηa + ηb;
(2) σv = πv, for all v ∈ {a, b, σ}.

Furthermore, (3) N_{a,b} contains an identity element if and only if π ≠ 0, in which case 1_{a,b} = (1/π)σ. When N_{a,b} is 3-dimensional we denote N_{a,b} = B(η, ϕ)_{a,b}, where ϕ ∈ F is the scalar mentioned above.

From now on we assume that ∆_A is connected. Note that by [HSS, Lemma 6.4], ∆_A is connected iff ∆ is connected. Further, we assume that a, b ∈ ∆ are distinct with τ(a) = τ(b).

Proposition 2.5 (Proposition 6.5 in [HSS]). ab = 0 and

(1) for any c ∈ ∆ ∖ {a, b} exactly one of the following holds:
(i) ac = bc = 0.
(ii) η = 1/2, and for some x ∈ {a, b}, writing {a, b} = {x, y}, we have that N_{x,c} = B(1/2, ϕ_{x,c}) is 3-dimensional, N_{y,c} = J_{y,c}, and N_{y,c} ⊂ N_{x,c}. Further, a + b = 1_{x,c}.
(iii) η = 1/2, N_{a,c} = N_{b,c} is 3-dimensional, and a + b = 1_{a,c}.
(2) If d is an η-axis in A such that τ(d) = τ(a), then d ∈ {a, b}.

Proof sketch. By [HSS, Lemma 3.2.1], for any c ∈ ∆, we have ac = 0 ⇐⇒ c^{τ(a)} = c, and since, by definition, a^{τ(b)} = a^{τ(a)} = a, we see that ab = 0. If ac = 0, then, as above, bc = 0 (and vice versa), so (i) holds. Hence we may assume that ac ≠ 0 ≠ bc.

If η ≠ 1/2, then by [HRS, Proposition 6.5], and since ∆ is connected, a = b, a contradiction. Thus η = 1/2.

Now consider V := N_{c, c^{τ(a)}} ⊆ N_{a,c} ∩ N_{b,c}. V is either 2- or 3-dimensional. If V is 3-dimensional, then N_{a,c} = V = N_{b,c}, and since ab = 0, one shows that a + b = 1_{a,c} ([HSS, Lemma 3.2.5]), so (iii) holds.

So suppose V is 2-dimensional. If both N_{a,c} and N_{b,c} are 2-dimensional, then they both equal N_{a,b} = Fa ⊕ Fb. But then c = a or b, a contradiction. Therefore without loss N_{a,c} is 3-dimensional and V is 2-dimensional.
If V = N_{b,c}, then (ii) holds: clearly N_{b,c} ⊂ N_{a,c} and a + b = 1_{a,c}, and then a careful analysis of the situation gives (ii). The case where both N_{a,c} and N_{b,c} are 3-dimensional and V is 2-dimensional is the hardest case, and some precise work is required to get a contradiction. □

Proposition 2.6. η = 1/2 and

(1) xa ≠ 0 ≠ xb, for all x ∈ ∆ ∖ {a, b};
(2) A contains an identity element 1 = a + b;
(3) for any x ∈ ∆ such that N_{a,x} is 3-dimensional we have 1 = 1_{a,x}.

Proof. Let d(·, ·) be the distance function on ∆. Let

∆_1(a) := { x ∈ ∆ | d(a, x) = 1 }.

Since ∆ is connected, ∆_1(a) ≠ ∅. Also, by Proposition 2.5(1i), ∆_1(a) = ∆_1(b). Let c ∈ ∆_1(a). By Proposition 2.5, η = 1/2 and, after perhaps interchanging a and b, N_{a,c} is 3-dimensional and a + b = 1_{a,c}. Set 1 := 1_{a,c} = a + b; then 1c = c, for all c ∈ ∆_1(a).

Let y ∈ ∆ ∖ ∆_1(a) be at distance 2 from a in ∆, and let x ∈ ∆_1(a) ∩ ∆_1(y). Without loss N_{a,x} is 3-dimensional and 1 = 1_{a,x}. Now

• ay = 0 = by ⇒ 1^{τ(y)} = (a + b)^{τ(y)} = a^{τ(y)} + b^{τ(y)} = a + b = 1.
• 1^{τ(x)} = 1, because 1 = 1_{a,x}.
• 1y = 0, so 1·y^{τ(x)} = 0.
• 1x = x, so 1·x^{τ(y)} = x^{τ(y)}.
• W := Span({y, y^{τ(x)}}) ∩ Span({x, x^{τ(y)}}) ≠ {0}. Indeed, W is the intersection of two 2-dimensional subspaces of N_{x,y}, which is of dimension at most 3.
• 1 both annihilates and acts as the identity on W, a contradiction.
Hence ∆_1(a) = ∆ ∖ {a, b}, and clearly d(a, b) = 2 in ∆. But now, as we saw above, 1c = c for all c ∈ ∆. It follows that 1 is the identity of A, and (3) holds as well. □

We are now in a position to prove Theorem 2.1.
Proof of Theorem 2.1.
We show that the hypotheses of Theorem 2.2 are satisfied. By Proposition 2.6, η = 1/2 and a + b = 1_A. Let c ∈ ∆. Then

v_a v_c = (a − (1/2)1)(c − (1/2)1) = ac − (1/2)a − (1/2)c + (1/4)1 = σ_{a,c} + (1/4)1.

Clearly v_a v_c ∈ F1 if c ∈ {a, b}. Otherwise, by Proposition 2.6(1), ac ≠ 0. If N_{a,c} is 2-dimensional, then since ac ≠ 0, σ_{a,c} = 0, and so v_a v_c ∈ F1. If N_{a,c} is 3-dimensional, then by Proposition 2.6(3), 1 = 1_{a,c}. Furthermore, by [HRS], σ_{a,c} = π_{a,c} 1_{a,c} = π_{a,c} 1, for some π_{a,c} ∈ F, and again v_a v_c ∈ F1. □
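Proposition 2.4 can be checked numerically in the smallest 3-dimensional example, the Matsuo algebra 3C(η) of §3 on three axes a, b, c, with x·y = (η/2)(x + y − z) for distinct axes x, y and z the third axis; there ϕ_{a,b} = η/2. The sketch below is ours (with an arbitrary test value of η): it confirms that σ = ab − η(a + b) acts as the scalar π = (1 − η)ϕ − η on a, b, and σ, and that (1/π)σ is the identity 1_{a,b} when π ≠ 0.

```python
from fractions import Fraction as Fr

# Sketch (not from the paper): Proposition 2.4 verified inside the Matsuo
# algebra 3C(η) on basis axes a, b, c, a concrete instance of B(η, ϕ)
# with ϕ = η/2.
eta = Fr(1, 3)            # any η outside {0, 1} will do for this check

def mul(u, v):
    """Multiply two elements written as coordinate vectors over (a, b, c)."""
    out = [Fr(0)] * 3
    for i in range(3):
        for j in range(3):
            coef = u[i] * v[j]
            if coef == 0:
                continue
            if i == j:
                out[i] += coef            # x·x = x
            else:
                k = 3 - i - j             # index of the third axis
                half = eta / 2
                out[i] += coef * half
                out[j] += coef * half
                out[k] -= coef * half     # x·y = (η/2)(x + y - z)
    return out

a, b = [Fr(1), Fr(0), Fr(0)], [Fr(0), Fr(1), Fr(0)]
ab = mul(a, b)
sigma = [ab[i] - eta * (a[i] + b[i]) for i in range(3)]   # σ = ab - η(a+b)

phi = eta / 2                      # ϕ_{a,b} in a Matsuo algebra
pi = (1 - eta) * phi - eta         # π = (1-η)ϕ - η, as in Proposition 2.4
assert mul(sigma, a) == [pi * x for x in a]        # σ acts as π on a
assert mul(sigma, sigma) == [pi * x for x in sigma]
e = [x / pi for x in sigma]        # identity 1_{a,b} = (1/π)σ when π ≠ 0
assert mul(e, a) == a and mul(e, b) == b
```

For η = 1/3 one gets π = −2/9 ≠ 0, so this 3C(η) is unital, in accordance with Proposition 2.4(3).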
3. 3-transpositions and Matsuo algebras
Recall that a set of axes A is closed iff a^{τ_b} ∈ A, for all a, b ∈ A. In this section A is a primitive axial algebra of Jordan type η generated by a closed set of η-axes A, such that |A| > 1.

Let G be a group generated by a normal set of involutions D. Recall that D is called a set of 3-transpositions in G if |st| ∈ {1, 2, 3}, for all s, t ∈ D. The group G is then called a 3-transposition group. Let D be a normal set of 3-transpositions in the group G that generates G. The Matsuo algebra associated with the pair (G, D), denoted here M_δ(G, D), is defined as follows. As a vector space over F it has the basis D. Multiplication is defined for x, y ∈ D as follows:

x · y = x, if y = x;
x · y = 0, if |xy| = 2;
x · y = δ(x + y − x^y), if |xy| = 3.

This is extended by linearity to the entire algebra. (Note that we denote multiplication in G by juxtaposition and in M_δ(G, D) by a dot.) By [HRS, Theorem 6.2], M_δ(G, D) is a primitive axial algebra of Jordan type 2δ. The purpose of this section is to prove the following theorem:

Theorem 3.1.
Suppose that the graph ∆_A is connected. Let D := { τ_a | a ∈ A } and G = ⟨D⟩. Assume that the map a ↦ τ_a on A is injective and that D is a set of 3-transpositions in G. Then A is a quotient of the Matsuo algebra M_{η/2}(G, D).

Remark 3.2.
Theorem 3.1 was proved in [HRS, Theorem 6.3] for η ≠ 1/2. The proof for η = 1/2 needed a correction, in view of [HSS]. Note that the summand ⊕_{i∈I} F does not appear in Theorem 3.1, since we are assuming that ∆_A is connected. We also mention that for η ≠ 1/2, the map on A defined by a ↦ τ_a is always injective, by [HSS, Proposition 6.5], and since ∆_A is connected. We include a proof of Theorem 3.1 for all η for completeness.

Lemma 3.3. ab = (η/2)a + ϕ_{a,b} b − (η/2)a^{τ_b}, for all a, b ∈ A.

Proof. Clearly this holds when a = b, so assume a ≠ b. Suppose first that N_{a,b} is 2-dimensional. We use [HSS, Lemma 3.1.2]. If N_{a,b} = 2B_{a,b}, then ab = 0, ϕ_{a,b} = 0, and a^{τ_b} = a (see also [HSS, Lemma 3.2.1]), so the claim holds. Suppose next that N_{a,b} = 3C(−1)^×_{a,b}. Then η = −1, ab = −a − b, ϕ_{a,b} = −1/2, and a^{τ_b} = −a − b (see also [HSS, Lemma 3.1.8]), so the claim holds. Assume that N_{a,b} = J_{a,b}. Then η = 1/2, ab = (1/2)(a + b), ϕ_{a,b} = 1, and a^{τ_b} = 2b − a (see also [HSS, Lemma 3.1.9]), so again the claim holds.

We may thus assume that N_{a,b} is 3-dimensional. Set ϕ := ϕ_{a,b}. By [HSS, Theorem 3.1.3(6)],

a^{τ(b)} = −(2/η)σ − (2(η − ϕ)/η)b − a.

Also, σ = ab − ηa − ηb. Hence we get

(2/η)σ = −a − (2(η − ϕ)/η)b − a^{τ_b} ⇐⇒ σ = −(η/2)a − (η − ϕ)b − (η/2)a^{τ_b} ⇐⇒ ab = (η/2)a + ϕb − (η/2)a^{τ_b}. □

Corollary 3.4 (See Corollary 1.2 in [HRS]). A is spanned over F by A.

Proof. This is immediate from Lemma 3.3 and the definition of a closed set of axes. □
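Lemma 3.3 can be tested exhaustively in a closed example. The sketch below is ours (not from the paper): it builds the Matsuo algebra M_{η/2}(G, D) for G = Sym(4) with D its six transpositions, where a^{τ_b} is realized as the conjugate b a b in G and, in line with Lemma 3.5 below, ϕ_{a,b} = η/2 when |τ_a τ_b| = 3 and ϕ_{a,b} = 0 when the transpositions commute.

```python
from fractions import Fraction as Fr
from itertools import permutations

# Sketch (not from the paper): the Matsuo algebra M_δ(G, D) for G = Sym(4)
# and D its six transpositions, with δ = η/2, checked against Lemma 3.3:
# ab = (η/2)a + ϕ_{a,b} b − (η/2) a^{τ_b}.
eta = Fr(1, 5)                       # any η ∉ {0, 1}; an arbitrary test value
delta = eta / 2

def compose(p, q):                   # permutations as tuples: (p∘q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

D = sorted(p for p in permutations(range(4))
           if sum(p[i] != i for i in range(4)) == 2)   # the 6 transpositions
idx = {t: i for i, t in enumerate(D)}

def mul(u, v):
    """Bilinear extension of the Matsuo product on coordinate vectors over D."""
    out = [Fr(0)] * len(D)
    for i, ci in enumerate(u):
        for j, cj in enumerate(v):
            c = ci * cj
            if c == 0:
                continue
            if i == j:
                out[i] += c                          # x·x = x
            else:
                x, y = D[i], D[j]
                conj = compose(compose(y, x), y)     # x^y = y x y (y an involution)
                if conj == x:                        # |xy| = 2: the product is 0
                    continue
                out[i] += c * delta                  # x·y = δ(x + y − x^y)
                out[j] += c * delta
                out[idx[conj]] -= c * delta
    return out

def basis(t):
    e = [Fr(0)] * len(D); e[idx[t]] = Fr(1); return e

for s in D:
    for t in D:
        a, b = basis(s), basis(t)
        conj = compose(compose(t, s), t)             # a^{τ_b} corresponds to s^t
        phi = Fr(1) if s == t else (Fr(0) if conj == s else eta / 2)
        rhs = [eta / 2 * a[k] + phi * b[k] - eta / 2 * basis(conj)[k]
               for k in range(len(D))]
        assert mul(a, b) == rhs                      # Lemma 3.3, pair by pair
```

The same loop, run over any 3-transposition group in place of Sym(4), would test the lemma in that Matsuo algebra.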
Lemma 3.5.
Suppose that

(∗) the map a ↦ τ_a on A is injective.

Let a, b ∈ A be distinct. Then

(1) if (τ_a τ_b)^2 = 1, then ab = 0;
(2) if (τ_a τ_b)^2 ≠ 1, then ϕ_{a,b} = η/2.

Proof. (1): By [HSS, Lemmas 3.2.7(2) and 3.1.6(2)] and by (∗), N_{a,b} = 2B_{a,b}, so (1) holds (see also [HSS, Lemma 3.1.2(1a)]).

(2): If η ≠ 1/2, then (2) follows from [HRS, Proposition 4.8]. So suppose η = 1/2. By [HSS, Lemma 3.2.7(1) and Corollary 3.3.2] and by (∗), we get ϕ_{a,b} = 1/4. □

We can now prove Theorem 3.1.
Proof of Theorem 3.1.
Set M := M_{η/2}(G, D). We claim that the map

f : M → A : τ_a ↦ a,

extended by linearity, is a surjective algebra homomorphism. Note that f is well defined since the map a ↦ τ_a is injective on A. Now f is surjective by Corollary 3.4. Next we need to check that

(∗) f(τ_a · τ_b) = ab, for all a, b ∈ A.

If a = b, then τ_a · τ_b = τ_a and ab = a, so (∗) holds. If |τ_a τ_b| = 2, then τ_a · τ_b = 0, while by Lemma 3.5(1), ab = 0, so (∗) holds in this case as well. Finally assume that |τ_a τ_b| = 3. Then

τ_a · τ_b = (η/2)(τ_a + τ_b − τ_a^{τ_b}) = (η/2)(τ_a + τ_b − τ_{a^{τ_b}}),

where the last equality follows from the standard fact that τ_a^{τ_b} = τ_{a^{τ_b}}. Thus f(τ_a · τ_b) = (η/2)(a + b − a^{τ_b}). However, by Lemma 3.5(2) and Lemma 3.3, ab = (η/2)(a + b − a^{τ_b}), so (∗) holds in this case as well, and the proof of the theorem is complete. □

4. The existence of a Frobenius form
Recall that a non-zero bilinear form (·, ·) on an algebra A is called Frobenius if the form associates with the algebra product, that is,

(ab, c) = (a, bc)

for all a, b, c ∈ A. For primitive axial algebras of Jordan type η, we specialize the concept of a Frobenius form further by asking that the condition (a, a) = 1 be satisfied for each η-axis a. The purpose of this section is to prove the following theorem:

Theorem 4.1.
Let A be a primitive axial algebra of Jordan type η. Then A admits a Frobenius form.

The proof of Theorem 4.1 depends on two properties of primitive axial algebras of Jordan type. The first is Corollary 3.4. The second is proven in [HRS] (Lemma 4.2 below). For an η-axis a ∈ A, let ϕ_a be the projection function with respect to a. That is, for u ∈ A, we have that

u = ϕ_a(u)a + u_0 + u_η,

where u_0 and u_η are eigenvectors of the adjoint linear transformation ad_a for the eigenvalues 0 and η, respectively.

Lemma 4.2 (Lemma 4.4 in [HRS]). For a primitive axial algebra A of Jordan type and for any η-axes a, b ∈ A, we have ϕ_a(b) = ϕ_b(a).

Note that the constant ϕ_{a,b} that we used earlier for η-axes a, b is the same as ϕ_a(b).

Proof of Theorem 4.1.
We start by defining the bilinear form (·, ·) on A. Using Corollary 3.4 we can select a basis B of A consisting of η-axes, and we let

(a, b) = ϕ_a(b), for all a, b ∈ B.

Extending by linearity we get the bilinear form (·, ·). Note that Lemma 4.2 implies that (·, ·) is symmetric.

Lemma 4.3. (1) (a, u) = ϕ_a(u), for all η-axes a ∈ A and all u ∈ A;
(2) (a, a) = 1, for all η-axes a ∈ A;
(3) (·, ·) is invariant under automorphisms of A.

Proof. (1&2): Let a be an η-axis and suppose that

(∗) ϕ_a(b) = (a, b), for all b ∈ B.

Since ϕ_a is linear, writing u = Σ_{b∈B} α_b b we get

ϕ_a(u) = ϕ_a(Σ_{b∈B} α_b b) = Σ_{b∈B} α_b ϕ_a(b) = Σ_{b∈B} α_b (a, b) = (a, Σ_{b∈B} α_b b) = (a, u),

and (1) holds for a. Now if a ∈ B, then (∗) holds by definition, so (1) holds for a. Suppose a ∉ B. Let b ∈ B. Then ϕ_a(b) = ϕ_b(a), by Lemma 4.2, and ϕ_b(a) = (b, a), as (1) holds for b. Finally, since (·, ·) is symmetric, (b, a) = (a, b), so ϕ_a(b) = (a, b), and (∗) holds for any η-axis a. This shows that (1) holds. In particular, for every η-axis a ∈ A, we have that (a, a) = 1, since, clearly, ϕ_a(a) = 1. Thus (2) holds.

(3): Let ψ ∈ Aut(A). If u = ϕ_a(u)a + u_0 + u_η is the decomposition of u ∈ A with respect to the η-axis a, then u^ψ = ϕ_a(u)a^ψ + u_0^ψ + u_η^ψ is the decomposition of u^ψ with respect to the η-axis a^ψ. Hence ϕ_{a^ψ}(u^ψ) = ϕ_a(u), and so (a^ψ, u^ψ) = (a, u). Finally, taking an arbitrary v ∈ A and decomposing it with respect to the basis B as v = Σ_{b∈B} α_b b, we get that

(v^ψ, u^ψ) = (Σ_{b∈B} α_b b^ψ, u^ψ) = Σ_{b∈B} α_b (b^ψ, u^ψ) = Σ_{b∈B} α_b (b, u) = (Σ_{b∈B} α_b b, u) = (v, u).

So indeed, (·, ·) is invariant under the automorphisms of A. □
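The form just defined can be computed explicitly in the Matsuo algebra 3C(η) on axes a, b, c, where ϕ_x(y) = η/2 for distinct axes, so the Gram matrix is 1 on the diagonal and η/2 off it. The sketch below is ours (with an arbitrary test value of η): it checks symmetry (Lemma 4.2), the normalization (a, a) = 1 (Lemma 4.3(2)), and the Frobenius identity (xy, z) = (x, yz) on all basis triples.

```python
from fractions import Fraction as Fr
from itertools import product

# Sketch (not from the paper): the form of Theorem 4.1 on the Matsuo algebra
# 3C(η) with basis axes a, b, c, where x·y = (η/2)(x + y − z) for distinct
# axes x, y (z the third axis). The Gram matrix is 1 on the diagonal and
# η/2 off it; η below is an arbitrary test value outside {0, 1}.
eta = Fr(2, 7)
n = 3

def mul(u, v):
    """Bilinear extension of the 3C(η) product on coordinate vectors."""
    out = [Fr(0)] * n
    for i in range(n):
        for j in range(n):
            c = u[i] * v[j]
            if c == 0:
                continue
            if i == j:
                out[i] += c                  # x·x = x
            else:
                k = 3 - i - j                # the third axis
                out[i] += c * eta / 2
                out[j] += c * eta / 2
                out[k] -= c * eta / 2
    return out

gram = [[Fr(1) if i == j else eta / 2 for j in range(n)] for i in range(n)]

def form(u, v):
    return sum(gram[i][j] * u[i] * v[j] for i in range(n) for j in range(n))

basis = [[Fr(int(i == j)) for j in range(n)] for i in range(n)]

for x in basis:
    assert form(x, x) == 1                   # (a, a) = 1 on axes (Lemma 4.3(2))
for x, y, z in product(basis, repeat=3):
    assert form(x, y) == form(y, x)          # symmetry (Lemma 4.2)
    assert form(mul(x, y), z) == form(x, mul(y, z))   # the Frobenius identity
```

By linearity, checking the Frobenius identity on basis triples suffices, which is exactly the reduction made in the proof below.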
For every η-axis a ∈ A, different eigenspaces of ad_a are orthogonal with respect to (·, ·).

Proof. Clearly, if u ∈ A_0(a) + A_η(a), then (a, u) = ϕ_a(u) = 0. Hence A_1(a) = Fa is orthogonal to both A_0(a) and A_η(a). It remains to show that these two are also orthogonal to each other. Let u ∈ A_0(a) and v ∈ A_η(a). The fact that (·, ·) is invariant under τ_a gives us (u, v) = (u^{τ_a}, v^{τ_a}) = (u, −v) = −(u, v). Clearly, this means that (u, v) = 0. □

We are now ready to complete the proof that (·, ·) associates with the algebra product. Note that the identity

(a, bc) = (ab, c)

that we need to prove is linear in a, b, and c. In particular, since A is spanned by η-axes, we may assume that b is an η-axis. Furthermore, since A decomposes as the sum of the eigenspaces of ad_b, we may assume that a and c are eigenvectors of ad_b, say, for the eigenvalues µ and ν. We have two cases. If µ = ν, then

(a, bc) = (a, νc) = ν(a, c) = µ(a, c) = (µa, c) = (ba, c) = (ab, c).

If µ ≠ ν, then

(a, bc) = ν(a, c) = 0 = µ(a, c) = (ab, c),

since A_µ(b) and A_ν(b) are orthogonal to each other. Thus, in both cases we have the desired equality (a, bc) = (ab, c), proving that the form (·, ·) is Frobenius. □

References

[HRS] J.I. Hall, F. Rehren, S. Shpectorov, Primitive axial algebras of Jordan type, J. Algebra (2015), 79–115.
[HSS] J.I. Hall, Y. Segev, S. Shpectorov, Miyamoto involutions in axial algebras of Jordan type half, to appear in Israel J. Math.
Jonathan I. Hall, Department of Mathematics, Michigan State University, Wells Hall, 619 Red Cedar Road, East Lansing, MI 48840, United States
E-mail address : [email protected] Yoav Segev, Department of Mathematics, Ben-Gurion University, Beer-Sheva 84105, Israel
E-mail address : [email protected] Sergey Shpectorov, School of Mathematics, University of Birmingham,Watson Building, Edgbaston, Birmingham, B15 2TT, United Kingdom
E-mail address ::