Some combinatorial models for reduced expressions in Coxeter groups
by Hugh Denoncourt
B.A., University of New Hampshire, 1997

A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Mathematics, 2009

Denoncourt, Hugh (Ph.D., Mathematics)

Some combinatorial models for reduced expressions in Coxeter groups

Thesis directed by Prof. Richard M. Green
Abstract
Stanley's formula for the number of reduced expressions of a permutation regarded as a Coxeter group element raises the question of how to enumerate the reduced expressions of an arbitrary Coxeter group element. We provide a framework for answering this question by constructing combinatorial objects that represent the inversion set and the reduced expressions for an arbitrary Coxeter group element. The framework also provides a formula for the length of an element formed by deleting a generator from a Coxeter group element. Fan and Hagiwara et al. showed that for certain Coxeter groups, the short-braid avoiding elements characterize those elements that give reduced expressions when any generator is deleted from a reduced expression. We provide a characterization that holds in all Coxeter groups. Lastly, we give applications to the freely braided elements introduced by Green and Losonczy, generalizing some of their results that hold in simply-laced Coxeter groups to the arbitrary Coxeter group setting.

Dedication

This thesis is dedicated to Mom and Dad.

Acknowledgements
I would especially like to thank Richard Green for being my advisor and for giving me great advice, especially with respect to the craft of writing research mathematics, but mostly I would like to thank him for his patience. I would also like to thank Nathaniel Thiem for being my second reader, and all other members of my committee for serving on the committee. I would also like to thank any and all fellow graduate students who have been there for me over these adventurous years.

Chapter 1 Introduction
Let W be an arbitrary Coxeter group with generating set S of involutions. The geometric representation of W is a canonically determined faithful representation of W on a vector space V of dimension |S|. Associated to the geometric representation is a special subset Φ of V, called the root system of W, upon which W acts. The root system splits into "positive" and "negative" roots, and it is known that a Coxeter group element w is determined by the set Φ(w) of positive roots sent negative by w.

The minimal length representations of Coxeter group elements with respect to the generators in S are called reduced expressions. The number of generators in a minimal length representation of a Coxeter group element w is called the length of w.

In this thesis, we study two problems associated with the combinatorics of reduced expressions. The first problem is to construct correspondences between the set of reduced expressions for an arbitrary Coxeter group element w and sets of combinatorial objects related to the set Φ(w). The second problem is to determine the reduction in length caused by deleting a generator from a reduced expression.

A general correspondence for reduced expressions of arbitrary Coxeter groups was given by Tits in [24], but that correspondence involves the Coxeter complex instead of the set Φ(w). In [8], Edelman and Greene gave a correspondence between the set of reduced expressions for a special type of element in a type A Coxeter group and what they call "balanced tableaux". These tableaux can be viewed as special labelings of the set Φ(w). A more general correspondence was given by Kraśkiewicz in [20], in which he gave a correspondence between the set of reduced expressions for w and what he calls standard w-tableaux.
While more general than the Edelman-Greene correspondence, Kraśkiewicz's correspondence only applies to Coxeter groups that are finite Weyl groups.

For an arbitrary element w of an arbitrary Coxeter group W, we introduce incidence structures with "betweenness" relations on the set of positive roots Φ+ and the set Φ(w). The main theorem of this thesis, Theorem 4.4.21, gives several correspondences between the set of reduced expressions for w and sets of combinatorial objects that are naturally compatible with these two incidence structures.

Additionally, Theorem 5.1.8 uses the incidence structure on Φ(w) to determine the reduction in length caused by deleting a generator from a reduced expression. We use Theorem 5.1.8 to give a characterization of the fully covering elements of W, which are the elements such that deletion of any generator from any reduced expression results in a reduced expression. Such elements were characterized by Fan for finite Weyl groups in [9] and by Hagiwara et al. for FC-finite simply-laced Coxeter groups in [17]. Our characterization applies in the setting of arbitrary Coxeter groups.

Lastly, we study the freely braided elements introduced by Green and Losonczy for simply-laced Coxeter groups in [12]. We generalize their definition to the setting of arbitrary Coxeter groups and characterize the freely braided elements in terms of statistics determined by the element.

This thesis is organized as follows:

In Chapter 1, we give the preliminaries associated to Coxeter groups and the geometric representation for a Coxeter group W acting on a vector space V with basis determined by S. Associated to the geometric representation are important subsets of the vector space V that W acts upon. These are the root system Φ and its associated positive system Φ+ and negative system Φ−.

Chapter 2 begins with the preliminaries for reduced expressions and their associated root sequences.
In Corollary 2.1.11, we show that the length of the element obtained by deleting a generator from a reduced expression x for w is determined by counting the roots in the root sequence of x that are sent negative by an associated reflection.

In Chapter 3, we study the dihedral subsystems of W by analyzing how the roots associated to a dihedral subsystem are formed. We introduce sequences of the roots in a dihedral subsystem that make transparent the restrictions on the order in which these roots must occur in a root sequence. Such restrictions are given by a property of subsets of roots called "biconvexity". Proposition 3.4.7 characterizes the biconvex subsets of Φ+ in terms of their intersections with dihedral subsystems.

In Chapter 4, we construct a discrete incidence structure with betweenness relations for the set of positive roots Φ+ that is compatible with the sequences of roots introduced in Chapter 3. For an arbitrary w ∈ W we also construct a discrete (and necessarily finite) incidence structure for the set Φ(w). We then give correspondences between reduced expressions for w and information placed "on top" of these structures, which is summarized in Theorem 4.4.21.

In Chapter 5, Corollary 5.1.6 shows that the sequences of roots in a dihedral subsystem derived in Chapter 3 can be used to determine which reflections applied to a positive root will send that positive root to a negative root. This information is naturally encoded in the discrete incidence structure on Φ(w) introduced in Chapter 4, so in Theorem 5.1.8, we use the incidence structure to determine the effect of deleting a generator from a reduced expression upon the length of the resulting Coxeter group element. We then use Theorem 5.1.8 to characterize the fully covering elements of a Coxeter group, which are the elements such that deletion of any generator from any reduced expression always results in a reduced expression. We then introduce the freely braided elements of Green and Losonczy. Definition 5.3.1 generalizes their definition to the setting of arbitrary Coxeter groups.
Theorem 5.3.14 characterizes the elements w that are freely braided in terms of the number of commutation classes of w and the number of contractible long inversion sets of w.

1.1 Preliminaries

We define a
Coxeter system to be a pair (W, S) consisting of a group W and a finite set of generators S ⊆ W subject only to relations of the form

(ss′)^{m_{s,s′}} = 1,

where m_{s,s} = 1 and m_{s,s′} ≥ 2 for s ≠ s′ in S. In case no relation occurs for a pair s, s′, we make the convention that m_{s,s′} = ∞. Thus, to specify a Coxeter system (W, S) is to specify a finite set S and a matrix m indexed by S with entries in Z ∪ {∞} satisfying the above conditions on m. We call the matrix m the Coxeter matrix of (W, S). See [19, Section 5.1] for details.

Let S∗ denote the free monoid on S. The elements of S∗ will be written as finite sequences. There is a natural surjective monoid morphism φ : S∗ → W given by φ(s_1, . . . , s_n) = s_1 · · · s_n. We say that x ∈ S∗ is an expression for w ∈ W if φ(x) = w. In general, given any finite sequence α, the length ℓ(α) of α is the number of entries in the finite sequence. Thus, for x = (s_1, . . . , s_n) ∈ S∗, we say that ℓ(x) = n. The length of an element w ∈ W, denoted ℓ(w), is the minimum of the lengths of the sequences in φ^{−1}({w}). If w = φ(x) and ℓ(x) = ℓ(w), then we say that x is a reduced expression for w. We denote the set of all reduced expressions for w by R(w).

We also sometimes specify a group element w ∈ W as a product w = w_1 w_2 · · · w_k where each w_i ∈ W is called a factor of w. The product w_1 w_2 · · · w_k is also called an expression for w.

For k ≥
0, we write (s, t)_k for the length k sequence (s, t, s, . . . ) in S∗ that begins with s and alternates between s and t. Similarly, if u, v ∈ W, we write (uv)_k for the alternating product uvu · · · with k factors. If k = 0, we adopt the convention that (s, t)_k and (uv)_k represent the identity element of S∗ and W, respectively.

Let V be the real vector space with basis ∆ = {α_s : s ∈ S}. Associated to (W, S) is the symmetric bilinear form B on V determined by B(α_s, α_t) = − cos(π/m_{s,t}) for all s, t ∈ S. We define an action of W on V by requiring that s(α_t) = α_t − 2B(α_s, α_t)α_s and extending linearly. It follows that the action of s on an arbitrary v ∈ V takes the same form. It can also be checked that B is invariant under the action of the generators in S, and hence under the action of any w ∈ W. This action gives rise to a canonical representation σ : W → GL(V) called the geometric representation of W. For details, see [19, Section 5.3].

We call the set Φ = {w(α_s) : s ∈ S, w ∈ W} the root system of (W, S), and we call the elements of Φ roots. Given s ∈ S, we call α_s a simple root and ∆ the simple system of (W, S). Let Φ+ be the set of all α ∈ Φ expressible as a nonnegative linear combination of the simple roots. We call the set Φ+ the positive system of (W, S) and we call the elements of Φ+ positive roots. Let Φ− = −Φ+. We call the set Φ− the negative system of (W, S) and we call the elements of Φ− negative roots. It is known (see [19, Section 5.4]) that Φ = Φ+ ∪ Φ− and that the union is disjoint.

Let α, β ∈ Φ+. Suppose that a, b are nonnegative real numbers such that a, b are not both 0 and suppose that aα + bβ ∈ Φ. Then it follows easily that aα + bβ ∈ Φ+. Similarly, a root that is a nonnegative linear combination of negative roots is itself a negative root. It is a consequence of the W-invariance of B that all roots are of unit length (i.e. α ∈ Φ implies B(α, α) = 1).
Hence, if α ∈ Φ, then ±α are the only scalar multiples of α in Φ. It follows that distinct positive roots are not scalar multiples of one another. For details, see [19, Section 5.4].

Once we specify a Coxeter system (W, S), we obtain the geometric representation of W and hence the associated simple system, positive (and negative) system, and the root system associated to σ. The terminology we use is the same as in [19, Section 1.3] except that we allow W (and hence Φ) to be infinite.

If α = w(α_s) for some s ∈ S, then we write s_α = wsw^{−1} and we call s_α a reflection. We denote the set of all reflections by R. The formula for a reflection s_α applied to an arbitrary vector λ ∈ V is given by

s_α(λ) = λ − 2B(λ, α)α.    (1.1)

Observe that s_α is independent of the choice of w and s, and that the action of s_α on V takes the same form as that of a simple reflection. That is, the action of s_α on V sends α to its negative, and since B(α, α) = 1, the orthogonal complement of span({α}) is a subspace of V of codimension 1 (i.e. a hyperplane) and this subspace is fixed pointwise by s_α. Also, we have s_α = s_{−α} for any α ∈ Φ. See [19, Section 5.7] for details.

To each w ∈ W one can associate the set Φ(w) = Φ+ ∩ w^{−1}(Φ−), which we call the inversion set of w. This is the set of positive roots sent to negative roots by w. It is well known (see [19, Proposition 5.6]) that Φ(w) has ℓ(w) elements and that w is uniquely determined by Φ(w).

We let N denote the set of all natural numbers, including zero.

Chapter 2 Root sequences, labelings, and reflection subgroups
Let (W, S) be a Coxeter system, let w ∈ W and let x = (s_1, . . . , s_n) be an expression for w (i.e. φ(x) = w). We form the sequence θ(x) = (θ_1, . . . , θ_n) given by θ_k = s_n · · · s_{k+1}(α_{s_k}) for 1 ≤ k ≤ n (with θ_n understood to be α_{s_n}), which we call the root sequence of x. It is well known (cf. [19, Exercise 5.6(1)]) that if x is reduced, then the entries of the root sequence are precisely the elements of Φ(w).

We use the following basic results from the theory of Coxeter groups, so we reproduce them here for convenience.

Proposition 2.1.1.
Let (W, S) be a Coxeter system with root system Φ. Then:

(1) If α, β ∈ Φ+ and β = w(α) for some w ∈ W, then ws_α w^{−1} = s_β.

(2) Let w ∈ W and s ∈ S. Then ℓ(ws) = ℓ(w) ± 1.

(3) If s ∈ S, then s sends α_s to −α_s, but permutes the remaining positive roots.

(4) Let w ∈ W and α ∈ Φ+. Then ℓ(ws_α) > ℓ(w) if and only if w(α) ∈ Φ+.

(5) Let w = s_1 · · · s_n (s_i ∈ S). Suppose s_α satisfies ℓ(ws_α) < ℓ(w). Then there is an index i for which ws_α = s_1 · · · ŝ_i · · · s_n (s_i is omitted).
(6) If w = s_1 · · · s_n and ℓ(w) < n, then there exist indices i and j such that w = s_1 · · · ŝ_i · · · ŝ_j · · · s_n.

(7) Let w, w′ ∈ W. If Φ(w) = Φ(w′), then w = w′.

(8) Let w ∈ W and let x be a reduced expression for w. Then the entries of θ(x) are precisely the elements of Φ(w).

(9) Let w ∈ W and let x and x′ be reduced expressions for w. Suppose θ(x) = θ(x′). Then x = x′.

Proof. See [19, Lemma 5.7] for a proof of statement (1). See [19, Proposition 5.2] for a proof of statement (2). See [19, Proposition 5.6a] for a proof of statement (3). See [19, Proposition 5.7] for a proof of statement (4). See [19, Theorem 5.8] for a proof of statement (5). See [19, Corollary 5.8] for a proof of statement (6). Statement (7) follows from [1, Proposition 2]. For statement (8), see Exercise 1 of [19, Section 5.6] or [3, Lemma 4.3]. Statement (9) can be proven by a straightforward induction argument, and is explicitly mentioned in [13, Section 1.2].

Statement (5) is called the strong exchange condition, while statement (6) is called the deletion condition. It is well known that if W is a group generated by a set S of involutions, then (W, S) is a Coxeter system if and only if (
W, S) satisfies the deletion condition if and only if (
W, S) satisfies the strong exchange condition (see [2, Theorem 1.5.1], for example).

We wish to make our root sequence manipulations precise. Thus, given two finite sequences α = (α_1, . . . , α_k) and β = (β_1, . . . , β_{k′}) of roots, we define the multiplication αβ by concatenation of sequences. In other words,

αβ = (α_1, . . . , α_k, β_1, . . . , β_{k′}).

Also, given w ∈ W, we let w act on a sequence θ = (θ_1, . . . , θ_k) by

w[θ] = (w(θ_1), . . . , w(θ_k)).

If we form the free monoid over the set Φ, we find that our multiplication of finite sequences of roots is isomorphic to the multiplication of the free monoid Φ∗. We do not pursue this except to point out that our sequence multiplication satisfies both the left and right cancelation properties. The next two lemmas record cancelation properties of our sequence multiplication, which are well known and trivial.

Lemma 2.1.2. Let α, β, and γ be sequences of roots. If αβ = αγ, then β = γ. Similarly, if αγ = βγ, then α = β.

Proof. This is clear.
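The root sequence θ(x) and Proposition 2.1.1(8) can be illustrated computationally. The sketch below is our own worked example, not the thesis's: it works in the type B_2 system with S = {s, t} and m_{s,t} = 4, and the names root_sequence and inversion_set are ours. It computes θ(x) for the reduced expression x = (s, t, s) and checks that its entries are precisely Φ(w) for w = sts.

```python
import math

# Our own illustration: root sequences in type B2 (S = {s, t}, m_{s,t} = 4),
# in coordinates relative to the basis {alpha_s, alpha_t}.
COS = math.cos(math.pi / 4)
SQRT2 = math.sqrt(2)
SIMPLE = {'s': (1.0, 0.0), 't': (0.0, 1.0)}
POSITIVE = [(1.0, 0.0), (0.0, 1.0), (SQRT2, 1.0), (1.0, SQRT2)]  # Phi^+

def B(u, v):
    return u[0]*v[0] + u[1]*v[1] - COS * (u[0]*v[1] + u[1]*v[0])

def act(gen, v):
    # s(v) = v - 2 B(alpha_s, v) alpha_s
    a = SIMPLE[gen]
    c = 2 * B(a, v)
    return (v[0] - c * a[0], v[1] - c * a[1])

def key(v):
    # Round so floating-point images of the same root coincide.
    return (round(v[0], 9), round(v[1], 9))

def apply_word(word, v):
    # word = (s_1, ..., s_n) acts as the product s_1 ... s_n; rightmost first.
    for g in reversed(word):
        v = act(g, v)
    return v

def root_sequence(word):
    # theta_k = s_n ... s_{k+1}(alpha_{s_k}): apply s_{k+1} first, s_n last.
    seq = []
    for k, g in enumerate(word):
        v = SIMPLE[g]
        for h in word[k + 1:]:
            v = act(h, v)
        seq.append(key(v))
    return seq

def inversion_set(word):
    # Phi(w): positive roots sent to negative roots by w = phi(word).
    # A root is negative exactly when both coordinates are nonpositive.
    return {key(v) for v in POSITIVE if max(apply_word(word, v)) < 1e-9}
```

For x = (s, t, s) this yields θ(x) = (α_s + √2 α_t, √2 α_s + α_t, α_s), three distinct positive roots whose set is exactly Φ(sts), as Proposition 2.1.1(8) predicts.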
Lemma 2.1.3.
Let α, β, γ, δ be sequences of roots and suppose αγ = βδ. If ℓ(α) = ℓ(β), then α = β and γ = δ.

Proof. This is clear.
Lemma 2.1.4.
Let a, b ∈ S∗, let x = ab, and let w^{−1} = φ(b). Then θ(x) = w[θ(a)] θ(b).

Proof. Starting at the ℓ(a) + 1 entry of θ(x), the calculations are identical to those of θ(b). If a = (s_1, . . . , s_m) and b = (s′_1, . . . , s′_n), then for i < ℓ(a) + 1, the i-th entry of θ(x) is s′_n · · · s′_1 s_m · · · s_{i+1}(α_{s_i}) = w(s_m · · · s_{i+1}(α_{s_i})), which is w applied to the i-th entry of θ(a).

It will be useful to know what change occurs in a root sequence upon substituting one expression for another in a word where the expressions represent the same element w ∈ W. For more complex decompositions of a word than what appears in Lemma 2.1.4, we need the following terminology:

Definition 2.1.5.
Let x = a_1 · · · a_k be a product in S∗. If θ(x) = θ_1 · · · θ_k and ℓ(θ_i) = ℓ(a_i) for all i such that 1 ≤ i ≤ k, then we say that the equation θ(x) = θ_1 · · · θ_k is the decomposition of θ(x) respecting a_1 · · · a_k.

Remark 2.1.6. Note that the θ_i's appearing in the decomposition denote subsequences of the root sequence of x, but are not themselves root sequences of the a_i. Rather, each θ_i is given by some w ∈ W acting on the root sequence of a_i, which can be made precise by repeatedly applying Lemma 2.1.4 to the factorization of x in Definition 2.1.5.

Lemma 2.1.7.
Let x = abc and x′ = ab′c. Let v^{−1} = φ(c) and u^{−1} = φ(b). Suppose φ(b) = φ(b′) and set θ′_2 = v[θ(b′)]. If θ(x) = θ_1 θ_2 θ_3 is the decomposition of θ(x) respecting abc, then θ(x′) = θ_1 θ′_2 θ_3 is the decomposition of θ(x′) respecting ab′c. Thus, substituting equivalent expressions changes only the corresponding substituted portion of the root sequence.

Proof. From Lemma 2.1.4, we have

θ(x) = vu[θ(a)] v[θ(b)] θ(c).

Thus, by Lemma 2.1.3 and Definition 2.1.5, θ_1 = vu[θ(a)], θ_2 = v[θ(b)], and θ_3 = θ(c). By Lemma 2.1.4 again,

θ(x′) = vu[θ(a)] v[θ(b′)] θ(c),

which gives the result.

The root sequence of a reduced expression x is related to the strong exchange condition by the following well known fact, which specifies the reflection that will remove the k-th generator of x.

Lemma 2.1.8.
Suppose that x = (s_1, . . . , s_n) is a reduced expression for w, θ(x) is the root sequence for x, and θ_k is the k-th root of θ(x). Then ws_{θ_k} = s_1 · · · ŝ_k · · · s_n.

Proof. This is a straightforward calculation based on Proposition 2.1.1(1), which appears in the proof of [19, Theorem 5.8].

We can state a version of the previous lemma that gives the effect of deleting a generator upon the root sequence of a reduced expression. It is probably well known, but we have not found it in the literature, so we prove it here.
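Lemma 2.1.8 can also be checked numerically in small cases. The sketch below is our own illustration in type B_2 (S = {s, t}, m_{s,t} = 4; the helper names are ours): for the reduced expression x = (s, t, s, t) it compares w s_{θ_k} with the word obtained by deleting the k-th letter, identifying group elements by their action on the simple roots.

```python
import math

# Our own check of Lemma 2.1.8 in type B2 (S = {s, t}, m_{s,t} = 4),
# in coordinates relative to {alpha_s, alpha_t}.
COS = math.cos(math.pi / 4)
SIMPLE = {'s': (1.0, 0.0), 't': (0.0, 1.0)}

def B(u, v):
    return u[0]*v[0] + u[1]*v[1] - COS * (u[0]*v[1] + u[1]*v[0])

def reflect(alpha, v):
    # General reflection formula (1.1): s_alpha(v) = v - 2 B(v, alpha) alpha.
    c = 2 * B(alpha, v)
    return (v[0] - c * alpha[0], v[1] - c * alpha[1])

def apply_word(word, v):
    for g in reversed(word):        # rightmost letter acts first
        v = reflect(SIMPLE[g], v)
    return v

def key(v):
    return (round(v[0], 9), round(v[1], 9))

def root_sequence(word):
    # theta_k = s_n ... s_{k+1}(alpha_{s_k})
    seq = []
    for k, g in enumerate(word):
        v = SIMPLE[g]
        for h in word[k + 1:]:
            v = reflect(SIMPLE[h], v)
        seq.append(v)
    return seq

def fingerprint(f):
    # Identify a linear map by its (rounded) action on the simple roots.
    return tuple(key(f(a)) for a in SIMPLE.values())

x = ('s', 't', 's', 't')            # a reduced expression in type B2
theta = root_sequence(x)

# For each k, w s_{theta_k} should act like x with its k-th letter deleted.
checks = []
for k in range(len(x)):
    lhs = fingerprint(lambda v, k=k: apply_word(x, reflect(theta[k], v)))
    rhs = fingerprint(lambda v, k=k: apply_word(x[:k] + x[k + 1:], v))
    checks.append(lhs == rhs)
```

All four comparisons agree, matching the lemma for this example.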
Lemma 2.1.9.
Let (W, S) be a Coxeter system and let w ∈ W. Let s ∈ S, let x = a(s)b be a reduced expression for w, and let θ(x) be the root sequence of x. Let j = ℓ(a) + 1 and let D_j(x) = x′ = ab denote the expression obtained by deleting the j-th letter from x. Let θ(x) = θ_1 (θ_j) θ_2 be the decomposition of θ(x) respecting a(s)b. Then θ(x′) = s_{θ_j}[θ_1] θ_2 is the decomposition of θ(x′) respecting ab.

Proof. Let θ(x′) = θ′_1 θ′_2 be the decomposition of θ(x′) respecting ab. By Definition 2.1.5, we have θ_2 = θ′_2 = θ(b). Let u^{−1} = φ((s)b). By Lemma 2.1.8, (s_{θ_j}u)^{−1} = φ(b). Thus, by Lemma 2.1.4 and Definition 2.1.5, we have θ_1 = u[θ(a)] and θ′_1 = s_{θ_j}u[θ(a)]. It follows that θ′_1 = s_{θ_j}[θ_1].

The root sequence of an expression is related to the deletion condition by the following result, which asserts that the number of times we must apply the deletion condition to an expression to reach a reduced expression is given by the number of negative roots in the root sequence.

Proposition 2.1.10. Let (W, S) be a Coxeter system. Let w ∈ W, let x = (s_1, . . . , s_n) be an expression for w, and let θ(x) be the root sequence of x. If d is the number of negative entries in θ(x), then ℓ(x) = ℓ(w) + 2d. In particular, x is reduced if and only if θ(x) consists only of positive roots.

Proof. The proof is by induction on d. If d = 0, then the entries in θ(x) are precisely the positive roots sent negative by w, so ℓ(x) = ℓ(w). Let d > 0 and let i be the greatest index such that (s_i, . . . , s_n) is not reduced. Since (s_n, . . . , s_{i+1}) is a reduced expression but (s_n, . . . , s_i) is not reduced, we have s_n · · · s_{i+1}(α_{s_i}) ∈ Φ− by Proposition 2.1.1(4). Thus if we let a = (s_1, . . . , s_{i−1}), b = (s_i, . . . , s_n), and we let θ(x) = θ_1 θ_2 be the decomposition of θ(x) respecting ab, then the first root (and only the first root) of θ_2 is negative.
Thus, θ_1 contains d − 1 negative roots. Since (s_n) is a reduced expression, i < n. Let u^{−1} = φ(s_{i+1}, . . . , s_n), so that (us_i)^{−1} = φ(b) and ℓ(u) = n − i. By Proposition 2.1.1(2), ℓ(us_i) = ℓ(u) ± 1, and since (s_i, . . . , s_n) is not reduced, this forces ℓ(us_i) = ℓ(u) − 1. Thus, since the expression (s_i, . . . , s_n) is not reduced, the deletion condition implies there exists an expression b′ = (s′_i, . . . , s′_{n−2}) such that φ(b′) = (us_i)^{−1} = φ(b). Thus x′ = ab′ is an expression for w such that ℓ(x′) = ℓ(x) − 2. Since ℓ(us_i) = ℓ(u) − 1 and ℓ(b′) = n − i − 1, b′ is a reduced expression for (us_i)^{−1}. If θ′ is the root sequence for b′, then θ(x′) = θ_1 θ′ is the root sequence for x′ by Lemma 2.1.4. The sequence θ′ has no negative roots since b′ is a reduced expression, and θ_1 contains d − 1 negative roots, so θ(x′) contains d − 1 negative roots. By induction, ℓ(x′) = ℓ(w) + 2(d − 1), and since ℓ(x) = ℓ(x′) + 2, this implies ℓ(x) = ℓ(w) + 2d.

Corollary 2.1.11.
Let (W, S) be a Coxeter system and let w ∈ W. Let x be a reduced expression for w, let s ∈ S, and let x = a(s)b. Let j = ℓ(a) + 1 and x′ = D_j(x) = ab. Let w′ = φ(x′) and let θ(x) = (θ_1, . . . , θ_{ℓ(w)}) be the root sequence of x. If d is the number of roots θ_k, 1 ≤ k < j, such that s_{θ_j}(θ_k) ∈ Φ−, then ℓ(w′) = ℓ(w) − 2d − 1. In particular, x′ is a reduced expression for w′ if and only if s_{θ_j}(θ_k) ∈ Φ+ for every k such that 1 ≤ k < j.

Proof. If we let θ(x) = θ_1 (θ_j) θ_2 be the decomposition of θ(x) respecting a(s)b, then by Lemma 2.1.9, θ(x′) = s_{θ_j}[θ_1] θ_2. Thus the roots in the root sequence θ(x′) not already in θ(x) are those of the form s_{θ_j}(θ_k), where 1 ≤ k < j. Since x is a reduced expression, the roots in θ(x) are positive, so the result follows by Proposition 2.1.10.

While we will use root sequences in the sequel, we shall also have some use for labelings of roots. In Definition 2.1.14, we introduce a labeling associated to an expression x, which carries the exact same information as the root sequence of x. We give correspondences in Chapter 4 between certain types of labelings of Φ+ and reduced expressions for a Coxeter group element w. While the correspondences can also be stated in terms of root sequences, we prefer the labeling approach for technical reasons.

Definition 2.1.12.
Let (W, S) be a Coxeter system and let Λ ⊆ Φ. A function T : Λ → N is called a labeling of Λ. The support of T is the set supp(T) of all λ ∈ Λ such that T(λ) ≠ 0. We call a labeling sequential if supp(T) is finite and T(supp(T)) = {1, . . . , |supp(T)|}.

Remark 2.1.13. The definition of labeling allows the restriction of a labeling T to its support to be non-injective. If we view the range of T as the labels of a labeling, then the definition allows the labels to skip numbers and be larger than |supp(T)|. By contrast, the nonzero labels of a sequential labeling T occur in order (sequentially), and the restriction of a sequential labeling T to its support is necessarily injective.

Definition 2.1.14. [20, Definition 2.4] Let x be a reduced expression for w and let θ(x) = (θ_1, . . . , θ_{ℓ(w)}) be the root sequence for x. Define T_x : Φ+ → N by T_x(θ_k) = k and T_x(λ) = 0 for λ ∉ Φ(w). We call this labeling of Φ+ the standard encoding of x and we say that T_x encodes x.

The following terminology is quite standard. In addition to the linear orders on Φ(w) determined by the root sequence of an expression, we shall have some use for partial orders on Φ(w) in Chapter 5.

Definition 2.1.15. A partial order is a binary relation ≤ on a set X that is:

(1) reflexive: for all x ∈ X, x ≤ x;

(2) antisymmetric: for all x, y ∈ X, if x ≤ y and y ≤ x, then x = y;

(3) transitive: for all x, y, z ∈ X, if x ≤ y and y ≤ z, then x ≤ z.

We call the pair (X, ≤) a partially ordered set. A linear order (or total order) on X is a partial order ≤ on X that satisfies the following additional property:

(4) For all x, y ∈ X, either x ≤ y or y ≤ x.

If properties (1)–(4) are satisfied, then we call the pair (X, ≤) a linearly ordered set (or a totally ordered set).

At this point we fix a Coxeter system (
W, S) and the root system Φ associated to the geometric representation of (
W, S). Thus we also fix Φ+, Φ−, and ∆ for the remainder of this thesis.

Subgroups of W generated by a set of (not necessarily simple) reflections are called reflection subgroups and were investigated by Deodhar in [4] and Dyer independently in [5]. In both papers it is shown that a reflection subgroup is a Coxeter system in its own right. Thus, if R is the set of reflections of (W, S), R′ ⊆ R, and W′ = ⟨R′⟩, then there exists a (canonically determined) set S′ ⊆ R′ such that (W′, S′) is a Coxeter system. If R(w) := {t ∈ R : ℓ(tw) < ℓ(w)}, then the canonical set S′ is the set of all t′ ∈ R such that R(t′) ∩ W′ = {t′} (see [19, Section 8.2] for a summary or the papers [4] and [5] for details).

Theorem 2.2.1 (Deodhar, Dyer). Let (W, S) be a Coxeter system and R be the set of reflections in W. Let W′ be the subgroup of W generated by R′ ⊆ R. Then, with S′ defined as above,

(1) Every reflection in R ∩ W′ is of the form w′s′w′^{−1}, where w′ ∈ W′ and s′ ∈ S′;

(2) The pair (W′, S′) is a Coxeter system.

Proof. See [5, Theorem 3.3(i)] for a proof of statement (1). See [4, Theorem] or [5, Theorem 3.3] for a proof of statement (2).

Proposition 2.1.1(1) induces a 1–1 correspondence between the set R of reflections in (W, S) and the set Φ+ of positive roots. Thus, from an arbitrary subset Λ ⊆ Φ+, we can form the reflection subgroup (W[Λ], S[Λ]) by letting R′ = {s_λ : λ ∈ Λ}. We associate to such an arbitrary Λ sets Φ[Λ], Φ+[Λ], Φ−[Λ], and ∆[Λ] that are analogs of the root system Φ, the positive system Φ+, the negative system Φ−, and the simple system ∆, respectively. We would like the following properties to hold for the associated root system in (W[Λ], S[Λ]):

(1) The positive root analogs are positive roots in the original Coxeter system. Similarly, the negative root analogs are negative roots in the original Coxeter system.
That is, Φ+[Λ] ⊆ Φ+ and Φ−[Λ] ⊆ Φ−.

(2) The reflections associated to the simple root analogs form the generating set S[Λ] of (W[Λ], S[Λ]). That is, S[Λ] = {s_δ : δ ∈ ∆[Λ]}.

(3) Every root in Φ[Λ] is either a nonnegative linear combination of roots in ∆[Λ] or a nonpositive linear combination of roots in ∆[Λ].

(4) Every root in Φ[Λ] is either a positive root or a negative root, but not both. In other words, Φ[Λ] = Φ+[Λ] ∪ Φ−[Λ] and the union is disjoint.

(5) Every root in Φ[Λ] can be obtained by applying an element in W[Λ] to a root in ∆[Λ].

Deodhar establishes properties (1) through (4) in the proof of [4, Theorem]. We give his constructions in Definitions 2.2.2 and 2.2.6. Property (1) is immediate from Definition 2.2.2. Property (2) is given by Lemma 2.2.7(1). Property (3) follows from Lemma 2.2.7(3) and Definition 2.2.2(5). Property (4) is Lemma 2.2.7(2). Lastly, Lemma 2.2.9 gives property (5).

Below we give the above mentioned subsets of Φ that we use from Deodhar's proof of [4, Theorem].

Definition 2.2.2.
Let Λ ⊆ Φ+ and let (W′, S′) = (W, S)[Λ] be the reflection subgroup generated by R′ = {s_λ ∈ R : λ ∈ Λ}. Then we set:

(1) W[Λ] = W′,

(2) S[Λ] = S′,

(3) Φ[Λ] = {w(γ) : w ∈ W′, s_γ ∈ R′} = {w(γ) : w ∈ W′, γ ∈ Λ},

(4) Φ+[Λ] = Φ[Λ] ∩ Φ+,

(5) Φ−[Λ] = Φ[Λ] ∩ Φ−.

We call Φ[Λ], Φ+[Λ], and Φ−[Λ] the root system, positive system, and negative system, respectively, of (W, S)[Λ]. We call S[Λ] the canonical generators for (W, S)[Λ].

Note that the root system, positive system, and negative system analogs are obtained from roots associated to the reflections present in the reflection subgroup W′. The simple system analog we will use can be similarly obtained from S[Λ], but to identify the simple root analogs we will need more specific information regarding such roots. The relation ≦_Λ in the next definition is from [4, Section 2].

Definition 2.2.3.
Let Λ ⊆ Φ+. We say that μ ∈ Φ+[Λ] is a positive summand of λ ∈ Φ+[Λ] if there exist a > 0 and a family {γ_i ∈ Λ}_{i∈I} of roots such that λ = aμ + Σ_{i∈I} a_{γ_i} γ_i, where a_{γ_i} ≥ 0 for all i ∈ I. If μ ∈ Φ+[Λ] is a positive summand of λ ∈ Φ+[Λ], then we write μ ≦_Λ λ.

Remark 2.2.4. It is noted in [4, Section 2] that ≦_Λ is a preorder relation (i.e. a reflexive and transitive relation). Thus there is an equivalence relation ∼_Λ, given by λ ∼_Λ μ if and only if λ ≦_Λ μ and μ ≦_Λ λ, that gives rise to a partial order on the equivalence classes. For our purposes, we only need the preorder relation.

Example 2.2.5. To illustrate the preorder ≦_Λ, we let (W, S) be the Coxeter system of type B_2 given by S = {s, t} and m_{s,t} = 4. It is known that the positive system is given by

Φ+ = {α_s, √2 α_s + α_t, α_s + √2 α_t, α_t}.

We have α_s ≦_{Φ+} √2 α_s + α_t by writing √2 α_s + α_t = √2 (α_s) + α_t, which makes α_s a positive summand of √2 α_s + α_t. Similarly α_s ≦_{Φ+} α_s + √2 α_t. Since √2 α_s + α_t = (√2/2)(α_s + √2 α_t) + (√2/2) α_s, we have α_s + √2 α_t ≦_{Φ+} √2 α_s + α_t. By a similar calculation, we also have √2 α_s + α_t ≦_{Φ+} α_s + √2 α_t. Note that α_s and α_t, the simple roots of (W, S), are incomparable and minimal with respect to ≦_{Φ+}.

Definition 2.2.6. Let

∆[Λ] = {λ ∈ Φ+[Λ] : (μ ∈ Φ+[Λ] and μ ≦_Λ λ) ⇒ λ = μ}.

We call the set ∆[Λ] the simple system of (W, S)[Λ].

The next lemma tells us that our constructions of Φ[Λ], Φ+[Λ], Φ−[Λ], and ∆[Λ], respectively, behave like a root system, a positive system, a negative system, and a simple system, respectively.

Lemma 2.2.7.
Let ∆[Λ] be the simple system of (W, S)[Λ]. Then:

(1) S[Λ] = {s_δ : δ ∈ ∆[Λ]}.

(2) Φ[Λ] = Φ+[Λ] ∪ (−Φ+[Λ]) = Φ+[Λ] ∪ Φ−[Λ], where the unions are disjoint.

(3) Any λ ∈ Φ+[Λ] can be written as a sum Σ_{δ∈∆[Λ]} a_δ δ, where a_δ ≥ 0 for all δ ∈ ∆[Λ].

Proof. See "step 4" of the proof of [4, Theorem].

For the next lemma, recall that B(λ, λ) = 1 for all λ ∈ Φ.

Lemma 2.2.8.
Let α, β ∈ Φ and suppose s_α = s_β. Then α = ±β.

Proof. Since s_α(β) = s_β(β), we have β − 2B(α, β)α = β − 2B(β, β)β. Thus B(α, β)α = B(β, β)β = β, so that α = μβ for some scalar μ. It follows that 1 = B(α, α) = B(μβ, μβ) = μ². Thus μ = ±1, so α = ±β.

Lemma 2.2.9.
Let Λ ⊆ Φ+ and let (W′, S′) = (W, S)[Λ] be the reflection subgroup generated by R′ = {s_λ ∈ R : λ ∈ Λ}. Then

Φ[Λ] = {w(δ) : w ∈ W[Λ], δ ∈ ∆[Λ]}.

Proof. For any w ∈ W[Λ] and δ ∈ ∆[Λ], we have w(δ) ∈ Φ[Λ] by Definition 2.2.2(3).

By Definition 2.2.2(3), to prove the converse, we need to show that for any s_λ ∈ R′ and u ∈ W[Λ], we have u(λ) = v(δ) for some δ ∈ ∆[Λ] and v ∈ W[Λ]. Thus, we let s_λ ∈ R′. By Theorem 2.2.1(1), s_λ = ws_μw^{−1}, where w ∈ W[Λ] and s_μ ∈ S[Λ]. By Lemma 2.2.7(1), s_μ = s_δ, where δ ∈ ∆[Λ]. By Proposition 2.1.1(1), s_λ = s_{w(δ)}. Lemma 2.2.8 then implies that λ = ±w(δ). Thus, if u ∈ W[Λ], then u(λ) = uw(δ) or u(λ) = uw(s_δ(δ)). In either case, u(λ) = v(δ) for some v ∈ W[Λ] and δ ∈ ∆[Λ].

If the positive system of a reflection subgroup contains a simple root α_i of (W, S), then α_i must be simple relative to the root system Φ[Λ]. This is the content of the next lemma.

Lemma 2.2.10.
Let Λ ⊆ Φ+ and let α_i ∈ Φ+[Λ] be a simple root relative to (W, S). Then α_i ∈ ∆[Λ].
Proof. Suppose λ ∈ Φ+[Λ] is such that α_i = aλ + Σ a_δ δ, where a > 0 and each a_δ ≥ 0. Since α_i is simple, we must have λ = α_i, a = 1, and a_δ = 0 for all δ in the sum. By Definition 2.2.6, α_i ∈ ∆[Λ].
It was noted in [6, Remark 3.2] that associated to each pair of distinct positive roots α and β there exists a unique maximal dihedral reflection subgroup containing the reflections s_α and s_β. In general, given an arbitrary set Λ of roots, we can form a reflection subgroup that is maximal with respect to the roots that lie in the subspace spanned by Λ. The next definition gives a name to these reflection subgroups. This section is devoted to useful properties of these maximal reflection subgroups that do not hold for arbitrary reflection subgroups.
Definition 2.3.1.
Let (W, S) be a Coxeter system. Let Λ ⊆ Φ. We call the Coxeter system (W, S)[span(Λ) ∩ Φ+] the local Coxeter system of Λ, which we denote by (W, S)_Λ. Similarly, we set:
(1) W_Λ = W[span(Λ) ∩ Φ+],
(2) S_Λ = S[span(Λ) ∩ Φ+],
(3) Φ_Λ = Φ[span(Λ) ∩ Φ+],
(4) Φ+_Λ = Φ+[span(Λ) ∩ Φ+],
(5) Φ−_Λ = Φ−[span(Λ) ∩ Φ+],
(6) ∆_Λ = ∆[span(Λ) ∩ Φ+].
We call Φ_Λ the local root system of Λ. We call Φ+_Λ the local positive system of Λ. We call Φ−_Λ the local negative system of Λ. We call ∆_Λ the local simple system of Λ. If |Λ| = 2, then we say that (W, S)_Λ is a dihedral subsystem.

Remark 2.3.2. Note that we do not allow arbitrary subsets of reflections in our formation of local Coxeter systems. A simple example of what we wish to disallow occurs with the Coxeter system (W, S) of type B2 specified by S = {s, t} and m_{s,t} = 4. We have
Φ+ = {α_s, √2·α_s + α_t, α_s + √2·α_t, α_t}.
Using R′ = {s, tst}, the reflection subgroup generated by R′ is the subgroup W′ = {1, s, tst, stst}. Letting Λ = {α_s, α_s + √2·α_t} (the positive roots associated to the reflections in R′), the local Coxeter system is (W, S) itself since span(Λ) ∩ Φ+ = Φ+. By Definition 2.2.2 parts (3) and (4), we can apply the elements of W′ to those of Λ to check that Φ+[Λ] = Λ, which is properly contained in Φ+. In the sequel we show that the positive root systems of distinct dihedral subsystems intersect in at most one root. This example shows that this fact does not hold for the positive root systems of distinct reflection subgroups generated by two reflections.
For local Coxeter systems, parts (3), (4) and (5) of Definition 2.2.2 can be made more explicit. Thus one advantage to using local Coxeter systems over arbitrary reflection subgroups is that we know what the root system Φ_Λ is without reference to the reflection subgroup W′.
Lemma 2.3.3.
Let Λ ⊆ Φ+. Then we have
(1) Φ_Λ = span(Λ) ∩ Φ;
(2) Φ+_Λ = span(Λ) ∩ Φ+;
(3) Φ−_Λ = span(Λ) ∩ Φ−.
Proof. Let λ ∈ span(Λ) ∩ Φ. Since s_α = s_{−α} for all α ∈ Φ, we have that s_λ is one of the generating reflections of W[span(Λ) ∩ Φ+]. Using w = 1 in part (3) of Definition 2.2.2, we have λ ∈ Φ_Λ, so span(Λ) ∩ Φ ⊆ Φ_Λ. Conversely, each reflection of the form s_γ with γ ∈ span(Λ) ∩ Φ+ fixes (setwise) the space span(Λ) by the reflection formula (1.1), so Φ_Λ ⊆ span(Λ) ∩ Φ, which proves the first assertion. Intersecting both sides of the equation Φ_Λ = span(Λ) ∩ Φ with Φ+ and Φ−, we obtain the stated formulas for Φ+_Λ and Φ−_Λ by Definition 2.3.1 and Definition 2.2.2.
Lemma 2.3.4.
Let Λ = {α, β} ⊆ Φ+ be such that α ≠ β. Let (W, S)_Λ be the local Coxeter system of Λ. Then |∆_Λ| = 2.
Proof. See [6, Remark 3.2].
Remark 2.3.5. Proposition 4.5.4 of [2] states that a subgroup generated by distinct reflections s_γ, s_δ is a finite dihedral group if |B(γ, δ)| < 1 and an infinite dihedral group if B(γ, δ) ≤ −1. This, combined with Lemma 2.3.4, shows that the subgroup W_{α,β} of W is a dihedral group. Thus, the name “dihedral subsystem” (as given in Definition 2.3.1) is justified whenever |Λ| = 2.
Definition 2.3.6.
Let α, β ∈ Φ+ be distinct. If ∆_{α,β} = {γ, δ} and S_{α,β} = {s_γ, s_δ}, then we call γ and δ the canonical simple roots, and s_γ, s_δ the canonical generators, for the dihedral Coxeter system (W, S)_{α,β}.
The next lemma shows that a dihedral subsystem and its associated root system are uniquely determined by the canonical simple roots. As a result, any two distinct roots within a dihedral subsystem determine the same dihedral subsystem.
Lemma 2.3.7.
Let α, β ∈ Φ+ be distinct and let γ and δ be the canonical simple roots for the local Coxeter system (W, S)_{α,β}. Then:
(1) (W, S)_{α,β} = (W, S)_{γ,δ}.
(2) ∆_{α,β} = ∆_{γ,δ}.
(3) Φ+_{α,β} = Φ+_{γ,δ}.
(4) Φ_{α,β} = Φ_{γ,δ}.
Proof. Recall that distinct positive roots are not scalar multiples of one another. Thus, since γ, δ ∈ span({α, β}) and span({γ, δ}) is two-dimensional, we have
span({α, β}) = span({γ, δ}),
so the generating set of reflections is the same for both localizations. By Theorem 2.2.1(2), the generating set of reflections uniquely determines the reflection subgroup and its canonical generating set. Therefore we have (W, S)_{α,β} = (W, S)_{γ,δ}. The remaining equations follow from Definition 2.2.2.
Lemma 2.3.8 shows that when an arbitrary w ∈ W acts on a dihedral subsystem, the result is a dihedral subsystem. Lemmas 2.3.9 and 2.3.10 show that if w does not send the canonical simple roots negative, then w maps positive roots to positive roots and canonical simple roots to canonical simple roots in the resulting subsystem.
Lemma 2.3.8.
Let w ∈ W and let α, β ∈ Φ be distinct roots. Then w(Φ_{α,β}) = Φ_{w(α),w(β)}.
Proof. Since W acts on Φ and each w ∈ W acts as a vector space automorphism on V, we have aα + bβ ∈ Φ if and only if aw(α) + bw(β) ∈ Φ. We also have
w(span({α, β})) = span({w(α), w(β)})
because w acts as a vector space automorphism on V. Thus
w(span({α, β}) ∩ Φ) = span({w(α), w(β)}) ∩ Φ.
By Lemma 2.3.3, the result follows.
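This equivariance can be verified numerically on a small example. The sketch below (assumptions: type A3 with unit root lengths and the Gram matrix of its bilinear form; all names are ad hoc) computes Φ_{α,β} as span{α, β} ∩ Φ in the spirit of Lemma 2.3.3, applies a reflection w, and compares with Φ_{w(α),w(β)}:

```python
from itertools import product

# Roots are written in coordinates over the simple roots of A3; the
# bilinear form B is given by the Gram matrix G.
G = [[1.0, -0.5, 0.0],
     [-0.5, 1.0, -0.5],
     [0.0, -0.5, 1.0]]

def B(u, v):
    return sum(u[i] * G[i][j] * v[j] for i in range(3) for j in range(3))

def reflect(lam, x):
    # s_lambda(x) = x - 2 B(lambda, x) lambda
    c = 2 * B(lam, x)
    return tuple(x[k] - c * lam[k] for k in range(3))

def rnd(x):
    return tuple(round(c, 9) for c in x)

# Generate the 12 roots of A3 by closing the simples under reflection.
roots = {(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)}
changed = True
while changed:
    changed = False
    for lam, x in list(product(roots, roots)):
        y = rnd(reflect(lam, x))
        if y not in roots:
            roots.add(y)
            changed = True
assert len(roots) == 12

def in_span(u, v, x):
    # x lies in span{u, v} iff the 3x3 determinant det[u v x] vanishes
    d = (u[0] * (v[1] * x[2] - v[2] * x[1])
         - u[1] * (v[0] * x[2] - v[2] * x[0])
         + u[2] * (v[0] * x[1] - v[1] * x[0]))
    return abs(d) < 1e-9

def local_roots(u, v):
    # Phi_{u,v} = span{u, v} intersected with Phi
    return {r for r in roots if in_span(u, v, r)}

alpha, beta = (1.0, 1.0, 1.0), (1.0, 0.0, 0.0)

def w(x):
    return reflect((0.0, 1.0, 0.0), x)  # w = s_t, a reflection outside the subsystem

lhs = {rnd(w(r)) for r in local_roots(alpha, beta)}
rhs = local_roots(rnd(w(alpha)), rnd(w(beta)))
assert lhs == rhs and len(lhs) == 6  # w(Phi_{alpha,beta}) = Phi_{w(alpha),w(beta)}
```

Here Φ_{α,β} is a dihedral subsystem with six roots, and applying w permutes it onto the dihedral subsystem determined by the images of α and β, as the lemma asserts.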
Lemma 2.3.9.
Let w ∈ W, let γ and δ be the canonical simple roots of Φ_{α,β}, and suppose that w(γ), w(δ) ∈ Φ+. Then
w(Φ+_{α,β}) = Φ+_{w(α),w(β)} and w(Φ−_{α,β}) = Φ−_{w(α),w(β)}.
Proof. Let w ∈ W and suppose γ and δ are the canonical simple roots of Φ_{α,β} and that w(γ), w(δ) ∈ Φ+. Let λ ∈ Φ+_{α,β}. We have λ = cγ + dδ for some c, d ≥ 0 by Lemma 2.2.7(3). Since w(γ), w(δ) ∈ Φ+, we have w(λ) ∈ Φ+. Similarly, we have that λ ∈ Φ−_{α,β} implies w(λ) ∈ Φ−. By Lemma 2.3.8, λ ∈ Φ_{α,β} implies w(λ) ∈ Φ_{w(α),w(β)}. Thus, by parts (4) and (5) of Definition 2.2.2, this proves
w(Φ+_{α,β}) ⊆ Φ+_{w(α),w(β)} and w(Φ−_{α,β}) ⊆ Φ−_{w(α),w(β)}.
To prove the converse, suppose λ ∈ Φ+_{w(α),w(β)}. By Lemma 2.2.7(3), we have λ = a·w(α) + b·w(β) for some a, b ∈ R, and λ ∈ Φ+. Then we have w⁻¹(λ) ∈ Φ_{α,β} by Lemma 2.3.8. Suppose towards a contradiction that w⁻¹(λ) ∈ Φ−_{α,β}. By Lemma 2.2.7, there exist c, d ∈ R with c, d ≤ 0 such that w⁻¹(λ) = cγ + dδ. Since w(Φ−_{α,β}) ⊆ Φ−_{w(α),w(β)}, we have w(w⁻¹(λ)) ∈ Φ−. This implies λ ∈ Φ−, and since it was assumed that λ ∈ Φ+, we have a contradiction. Thus w⁻¹(λ) ∈ Φ+. Now, since w⁻¹(λ) ∈ Φ+_{α,β}, it follows that λ ∈ w(Φ+_{α,β}). By a similar argument, we have that λ ∈ Φ−_{w(α),w(β)} implies λ ∈ w(Φ−_{α,β}).
Lemma 2.3.10.
Let w ∈ W and let α and β be distinct positive roots such that γ and δ are the canonical simple roots for (W, S)_{α,β}. Suppose that w(γ), w(δ) ∈ Φ+. Then the canonical simple roots of (W, S)_{w(α),w(β)} are w(γ) and w(δ).
Proof. Set Λ = span({α, β}) ∩ Φ+ and Λ′ = span({w(α), w(β)}) ∩ Φ+. We assumed that w(γ), w(δ) ∈ Φ+, so Lemma 2.3.9 implies that if µ ∈ Φ+_{w(α),w(β)} then we may write µ = w(λ) for some λ ∈ Φ+_{α,β}.
Suppose that w(λ) ≦_{Λ′} w(γ). Then, by Definition 2.2.3 there exist a > 0 and a family {a_ν}_{ν∈I} with a_ν ≥ 0 for all ν ∈ I, satisfying
w(γ) = a·w(λ) + Σ_{ν∈I} a_ν w(ν) = w(aλ + Σ_{ν∈I} a_ν ν),
where each ν ∈ Φ+_{α,β}. Thus we have
γ = aλ + Σ_{ν∈I} a_ν ν,
so that λ ≦_Λ γ. Since γ ∈ ∆_{α,β}, we have λ = γ by Definition 2.2.6. Thus w(λ) = w(γ), and it follows that w(γ) ∈ ∆_{w(α),w(β)}. The same argument applies to w(δ), so that {w(δ), w(γ)} ⊆ ∆_{w(α),w(β)}. By Lemma 2.3.4, |∆_{w(α),w(β)}| = 2, so ∆_{w(α),w(β)} = {w(γ), w(δ)}.
Recall that the alternating product (s_γ s_δ)_k has k factors (as opposed to 2k factors) and does not end in s_δ if k is odd.
Lemma 2.3.11.
Let ∆_{α,β} = {γ, δ}. Then the roots in Φ_{α,β} can be obtained by applying a product of the form (s_γ s_δ)_k or (s_δ s_γ)_k (k ≥ 0) to γ or to δ.
Proof. Let λ ∈ Φ_{α,β}. By Lemma 2.2.9, we have that λ = w(θ) for some w ∈ W_{α,β} and θ ∈ {γ, δ}. Since W_{α,β} is generated by s_γ and s_δ, we may express w as a product whose only factors are s_γ and s_δ. If any repeated factors of s_γ or s_δ occur, we may apply the relations s_γ² = 1 and s_δ² = 1 until there are no repeated factors.

Chapter 3

Dihedral subsystems
The alternating generators expression of Lemma 2.3.11 gives recurrences for the roots of a dihedral subsystem. These recurrences also naturally determine doubly infinite sequences, which give a natural total ordering on the positive roots of the subsystem. We then solve the recurrences explicitly in terms of substitutions into Chebyshev polynomials in Lemma 3.3.7. In Lemmas 3.3.15 and 3.3.17, as well as Corollaries 3.3.16 and 3.3.18, the solution is used to characterize when a root in one of the doubly infinite sequences lies in the convex cone spanned by two roots from the sequences. These characterizations are later used to describe when s_θ(µ) is negative in terms of where θ and µ lie in the ordering. The conditions are made precise in Corollary 5.1.6, which is used to prove Theorem 5.1.8.

Figure 3.1: The shaded region portrays the convex cone spanned by α and β. Included in the convex cone is the indefinite extension into the plane of the shaded region.

Definition 3.1.1.
Let A ⊆ V. We call the set c(A) of all nonnegative linear combinations of elements of A the convex cone spanned by the set A. If A = {α, β}, then we call c(A) = {aα + bβ : a, b ≥ 0} the convex cone spanned by α and β. If γ = aα + bβ where a, b > 0, then we say γ strictly lies in the convex cone spanned by α and β.
The following lemma is basic, but useful.
Lemma 3.1.2.
Let α, β, γ ∈ V be pairwise distinct nonzero vectors such that α is not a scalar multiple of β. Suppose γ strictly lies in the convex cone spanned by α and β. Then α does not lie in the convex cone spanned by β and γ. Similarly, β does not lie in the convex cone spanned by α and γ.
Proof. This is clear.
In [1, Section 3], it is shown that given a finite Weyl group W, the subsets of positive roots that have the form Φ(w) for some w ∈ W are characterized by a property called “biconvexity”. Corollary 4.1.6, which is probably folklore, shows that this characterization holds for arbitrary Coxeter groups.
Definition 3.1.3.
Let Λ ⊂ Φ+. We say that Λ is convex (or that Λ satisfies the convexity property) if for every α, β ∈ Φ+ the following condition holds:
(1) If α, β ∈ Λ and λ ∈ Φ+ lies in the convex cone spanned by α and β, then λ ∈ Λ.
If both Λ and Φ+ \ Λ are convex, then we say Λ is biconvex (or that Λ satisfies the biconvexity property). Equivalently, (1) and the following implication hold:
(2) If α, β ∉ Λ and λ ∈ Φ+ lies in the convex cone spanned by α and β, then λ ∉ Λ.
Recall that Φ(w) is the set of positive roots that are sent to a negative root by w. Lemma 3.1.4 is implicit in [1, Section 3] and stated exactly as we state it here in [12, Section 2] and [10, Section 2]. No proof is given in any of those papers, so we include a proof for the sake of completeness.
Lemma 3.1.4.
Let w ∈ W. Then the set Φ(w) satisfies the biconvexity property.
Proof. Recall that w acts as a linear transformation, and that a nonnegative linear combination of positive (respectively, negative) roots is a positive (respectively, negative) root.
Let α, β ∈ Φ(w) and suppose λ is a root such that λ = aα + bβ for some a, b ≥ 0. By the definition of Φ(w), α and β are positive roots, so λ is also a positive root. Since w(α) and w(β) are negative roots, we have that w(λ) = aw(α) + bw(β) is also a negative root. It follows that λ ∈ Φ(w).
Similarly, let α, β ∈ Φ+ \ Φ(w) and suppose λ is a root such that λ = aα + bβ for some a, b ≥ 0. Then λ is a positive root since α and β are positive roots. Since α, β ∉ Φ(w), w(α) and w(β) are positive roots. Thus, w(λ) is a positive root as well and it follows that λ ∉ Φ(w).
By Lemma 2.3.11, the roots of a dihedral subsystem can be obtained by applying an alternating product of canonical generators to one of the canonical roots. We can directly calculate the scalars arising in the linear combinations of the canonical simple roots by applying the reflection formula (1.1). The alternating products give rise to a recurrence that has the same form that the Chebyshev polynomials of the second kind have. The way in which the alternating products are calculated is independent of the canonical roots themselves, so the Chebyshev polynomials require an evaluation based on the canonical roots to calculate the actual scalars. In this section, we collect the results about Chebyshev polynomials that we need for later calculations.
In what follows, it will be convenient to use sequences indexed by the integers instead of the natural numbers. A typical recurrence for a sequence of natural numbers expresses a sequence entry in terms of previous entries in the sequence. For doubly infinite sequences, we also need a “backward recurrence”, which is a recurrence expressing a sequence entry in terms of entries indexed by larger integers. In the sequences we use, we can obtain these backward recurrences by rearranging the forward recurrences.
The following definition of the Chebyshev polynomials of the second kind agrees with [18, Definition 1.2] except that we shift the indices up by one and extend it to a doubly infinite sequence.
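As a numeric preview of the definition and recurrences that follow (all names here are ad hoc, and the conventions assumed are exactly the ones just described: indices shifted up by one, U_0 = 0, U_1 = 1, a forward recurrence U_{n+2}(x) = 2x·U_{n+1}(x) − U_n(x), and its rearranged backward form for negative indices):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def U(n, x):
    if n == 0:
        return 0.0
    if n == 1:
        return 1.0
    if n > 1:
        return 2 * x * U(n - 1, x) - U(n - 2, x)   # forward recurrence
    return 2 * x * U(n + 1, x) - U(n + 2, x)       # backward recurrence

# On [-1, 1] the doubly infinite sequence matches sin(n*theta)/sin(theta)
# with x = cos(theta), including at negative indices:
theta = 0.3
x = math.cos(theta)
for n in range(-8, 9):
    assert math.isclose(U(n, x), math.sin(n * theta) / math.sin(theta),
                        abs_tol=1e-9)

# The index shift gives the symmetry U_{-n} = -U_n, and U_n(1) = n:
assert all(math.isclose(U(-n, x), -U(n, x), abs_tol=1e-9) for n in range(9))
assert all(math.isclose(U(n, 1.0), float(n)) for n in range(9))
```

The backward branch is what makes the sequence doubly infinite: entries at negative indices are computed from entries at larger indices, exactly as in the discussion above.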
Definition 3.2.1.
Let x ∈ [−1, 1] and write x = cos θ for some θ ∈ [0, π]. Then for n ∈ Z, we define
U_n(x) = sin(nθ)/sin θ.
The functions in this doubly infinite sequence are known as the Chebyshev polynomials of the second kind.
Remark 3.2.2. The discontinuities that occur at θ = 0 and θ = π are removable. Since the functions U_n(x) turn out to be polynomials, they are continuous and hence, the discontinuities are not present when U_n(x) is viewed as a polynomial. Thus, U_n(1) = n and U_n(−1) = (−1)^{n+1}·n for all n ∈ Z.
The following lemma is our reason for shifting the indices in the standard definition.
Lemma 3.2.3.
For any n ∈ N, we have U_{−n}(x) = −U_n(x).
Proof. We have
U_{−n}(x) = sin(−nθ)/sin θ = −sin(nθ)/sin θ = −U_n(x).
Lemma 3.2.4.
Let {U_n(x)}_{n∈Z} be the Chebyshev polynomials of the second kind. Then U_n(x) satisfies the initial conditions
U_0(x) = 0, U_1(x) = 1 (3.1)
and the recurrence
U_{n+2}(x) = 2x·U_{n+1}(x) − U_n(x) (n ∈ Z). (3.2)
The associated backward recurrence is given by
U_n(x) = 2x·U_{n+1}(x) − U_{n+2}(x). (3.3)
Proof. Equation (3.2) is [18, (1.6a)], which is obtained by applying the identity
sin((n + 1)z) + sin((n − 1)z) = 2 cos(z) sin(nz)
to the definition. We obtain the backward recurrence (3.3) by rearranging (3.2).
Although the defining representation for the Chebyshev polynomials of the second kind applies only to the interval [−1, 1], we can extend each U_n(x) to R (or even C). In what follows, we will assume that for each function U_n(x), the domain and codomain are both R. Note that if we obtain an identity using the trigonometric definition, then the identity applies to the extended domain because two polynomials need only agree on a large enough finite set in order for them to agree everywhere. Thus, if two polynomials agree on [−1, 1], then they agree on all of R.
We will see that B(γ, δ) ≤ −1 whenever s_γ and s_δ generate an infinite dihedral subsystem. Thus the condition a ≥ 1 arises for substitutions of the form x = a, where a = −B(γ, δ).
Lemma 3.2.5.
Let a ≥ 1 be a fixed real number and let n ≥ 1. Then the sequence of real numbers {U_n(a)}_{n≥1} is positive and strictly increasing.
Proof. Since U_1(a) = 1 and U_2(a) = 2a where a ≥ 1, we have U_2(a) > U_1(a). Suppose that U_{n+1}(a) > U_n(a). Then, by the recurrence (3.2), we have
U_{n+2}(a) = 2a·U_{n+1}(a) − U_n(a) > 2a·U_{n+1}(a) − U_{n+1}(a).
Thus U_{n+2}(a) > (2a − 1)·U_{n+1}(a) ≥ U_{n+1}(a) since we assumed a ≥ 1. From U_1(a) > 0, we get that the sequence {U_n(a)}_{n≥1} is positive and strictly increasing by induction.
Lemma 3.2.6. Let a ≥ 1 be a fixed real number and for n ≥ 1, form the sequence of ratios
r_n = U_{n+1}(a)/U_n(a).
Then r_n is a decreasing sequence of positive real numbers such that r_n ≥ 1 for all n ≥ 1.
Proof. First note that by Lemma 3.2.5, the denominator of r_n is nonzero. For the base case, we prove that r_1 > r_2. By the recurrence relations and initial conditions, we have U_1(a) = 1, U_2(a) = 2a, and U_3(a) = 4a² − 1, so that r_1 = 2a and r_2 = 2a − 1/(2a). Since a ≥ 1, we have r_1 > r_2 ≥ 1, as desired. For the inductive step, if we divide both sides of (3.2) by U_{n+1}(a) we have
U_{n+2}(a)/U_{n+1}(a) = 2a − U_n(a)/U_{n+1}(a).
Thus we have r_{n+1} = 2a − 1/r_n. It follows that r_{n+2} = 2a − 1/r_{n+1}. If we assume that r_n > r_{n+1}, then r_{n+1} − r_{n+2} = 1/r_{n+1} − 1/r_n, so that r_{n+1} − r_{n+2} > 0. Also, if r_n ≥ 1, then 1/r_n ≤ 1, so r_{n+1} = 2a − 1/r_n ≥ 2a − 1 ≥ 1 since a ≥ 1. Thus, by induction, r_n ≥ 1 and r_n > r_{n+1} for all n ≥ 1.
Remark. The usual order on R ∪ {−∞, +∞} is determined by the condition that −∞ < x < +∞ for every x ∈ R. We interpret a fraction of the form a/0, where a > 0, as +∞. Such an interpretation is not standard, but our coefficients are nonnegative (so that we are approaching 0 from the right) and the inferences we use are consistent with this choice. In particular, for the hypothesis c₁/d₁ < c₃/d₃ < c₂/d₂ (or its reverse), only one of the fractions can have 0 as a denominator in order for the inequalities to be strict. Thus, if a/b < c/d where the denominator d is 0, we still have ad − bc < 0.
Lemma 3.2.7.
Suppose that {γ, δ} forms a basis for a two-dimensional subspace of V. Let α₁ = c₁γ + d₁δ, α₂ = c₂γ + d₂δ, and α₃ = c₃γ + d₃δ, where c_i, d_i ≥ 0 and (c_i, d_i) ≠ (0, 0) for each i ∈ {1, 2, 3}. Suppose that either c₁/d₁ < c₃/d₃ < c₂/d₂ or c₁/d₁ > c₃/d₃ > c₂/d₂. Then there exist positive scalars a, b > 0 satisfying aα₁ + bα₂ = α₃.
Proof. The given fractional inequalities imply inequalities involving determinants:
c₁/d₁ < c₃/d₃ < c₂/d₂ ⇒ c₁d₃ − d₁c₃ < 0, c₃d₂ − d₃c₂ < 0, and c₁d₂ − d₁c₂ < 0;
c₁/d₁ > c₃/d₃ > c₂/d₂ ⇒ c₁d₃ − d₁c₃ > 0, c₃d₂ − d₃c₂ > 0, and c₁d₂ − d₁c₂ > 0.
The solution to the vector equation aα₁ + bα₂ = α₃ is given by the matrix equation
[c₁ c₂; d₁ d₂]·[a; b] = [c₃; d₃].
Since c₁d₂ − d₁c₂ ≠ 0 in either case, the matrix in the matrix equation is invertible. By Cramer’s rule, the solution to this matrix equation is given by
a = (c₃d₂ − d₃c₂)/(c₁d₂ − d₁c₂) and b = (c₁d₃ − d₁c₃)/(c₁d₂ − d₁c₂).
If c₁/d₁ < c₃/d₃ < c₂/d₂, then the inequalities imply all of the above determinants are negative. If instead c₁/d₁ > c₃/d₃ > c₂/d₂, then the inequalities imply all of the above determinants are positive. In either case, a, b > 0.
Lemma 3.2.8.
Let a ≥ 1 be a fixed real number. Then, for n ∈ Z, the real numbers U_n(a) are pairwise distinct, where U_n(a) > 0 if n ≥ 1, U_n(a) < 0 if n ≤ −1, and U_n(a) = 0 if n = 0.
Proof. By Lemma 3.2.5, for n ≥ 1, we have that {U_n(a)}_{n≥1} is a strictly increasing sequence of positive real numbers. Thus, the numbers present in the sequence are pairwise distinct. Since U_0(a) = 0 and U_{−n}(a) = −U_n(a), the result follows.
With the exception of the shift of indices, our proof of the next lemma is identical to the one given for [25, (2.1)] and is included for completeness.
Lemma 3.2.9.
Let n, i, j ∈ Z and a ∈ R. Then we have
U_i(a)·U_{n+i+j}(a) + U_j(a)·U_n(a) = U_{i+j}(a)·U_{n+i}(a). (3.4)
Proof. First suppose a ∈ [−1, 1], and let θ ∈ [0, π] be such that a = cos θ. Then
U_i(a)·U_{n+i+j}(a) + U_j(a)·U_n(a)
= (sin(iθ)/sin θ)·(sin((n+i+j)θ)/sin θ) + (sin(jθ)/sin θ)·(sin(nθ)/sin θ)
= [cos((n+j)θ) − cos((n+2i+j)θ)]/(2 sin²θ) + [cos((n−j)θ) − cos((n+j)θ)]/(2 sin²θ)
= [cos((n−j)θ) − cos((n+2i+j)θ)]/(2 sin²θ)
= (sin((i+j)θ)/sin θ)·(sin((n+i)θ)/sin θ)
= U_{i+j}(a)·U_{n+i}(a).
The first and last equations are by Definition 3.2.1, while the second and fourth equations use the basic identity
sin(x) sin(y) = [cos(x − y) − cos(x + y)]/2.
Since the left hand side and the right hand side of (3.4) are both polynomials for fixed values of n, i, j ∈ Z, the fact that the equality holds for all a ∈ [−1, 1] implies that it holds for all a ∈ R.
We will see that B(γ, δ) = −cos(π/m) for some m < ∞ whenever s_γ and s_δ generate a finite dihedral subsystem. Thus, there are two substitutions into the Chebyshev polynomials of the second kind that are of importance to us. These are substitutions of the form x = a, where a = cos(π/m) for 2 ≤ m < ∞, and substitutions of the form x = a where a ≥ 1.
Lemma 3.2.10.
Let a = cos(π/m), where m ≥ 2. For −m ≤ n ≤ m − 1, the ordered pairs (U_n(a), U_{n+1}(a)) are pairwise distinct as ordered pairs of real numbers.
Proof. If a = cos(π/m), then
U_n(a) = sin(nπ/m)/sin(π/m) and U_{n+1}(a) = sin((n+1)π/m)/sin(π/m).
Suppose there exist n, n′ such that −m ≤ n, n′ ≤ m − 1, U_n(a) = U_{n′}(a), and U_{n+1}(a) = U_{n′+1}(a). Since
sin((n+1)π/m) = sin(nπ/m) cos(π/m) + sin(π/m) cos(nπ/m),
and U_{n+1}(a) = U_{n′+1}(a), we have
sin(nπ/m) cos(π/m) + sin(π/m) cos(nπ/m) = sin(n′π/m) cos(π/m) + sin(π/m) cos(n′π/m).
Since we are assuming U_n(a) = U_{n′}(a), and hence that sin(nπ/m) = sin(n′π/m), this last equation reduces to
cos(nπ/m) = cos(n′π/m).
As nπ/m, n′π/m ∈ [−π, π) and the two angles agree on both sine and cosine, we must have nπ/m = n′π/m. Thus n = n′, and the ordered pairs (U_n(a), U_{n+1}(a)) are pairwise distinct in the given range.
In this section we show that we can generate the roots of a dihedral subsystem using a recurrence very much like the one given by the Chebyshev polynomials. Though we only gave results for evaluating Chebyshev polynomials at x = cos(π/m) and x ≥ 1, those will be the only values at which we evaluate the Chebyshev polynomials. Specifically, we will be plugging in the value −B(γ, δ), which by a theorem of Dyer can only take on certain values. The next theorem is translated to the situation of dihedral subsystems.
Theorem 3.3.1 (Dyer). Let (W, S)_{α,β} be a dihedral subsystem of (W, S). Then, if γ and δ are the canonical simple roots of (W, S)_{α,β}, we must have
B(γ, δ) ∈ (−∞, −1] ∪ {−cos(π/n) : n ∈ N, n ≥ 2}.
Proof.
See [5, Theorem 4.4].
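The two regimes allowed by Dyer's theorem behave very differently under the Chebyshev recurrence. A small numeric sketch (names ad hoc; the interpretation of U_n(a) as coefficients of the roots of a dihedral subsystem appears later in this chapter), written in terms of a = −B(γ, δ):

```python
import math

def u_sequence(a, count):
    # U_0 = 0, U_1 = 1, U_{n+1}(a) = 2a*U_n(a) - U_{n-1}(a)
    seq = [0.0, 1.0]
    while len(seq) < count:
        seq.append(2 * a * seq[-1] - seq[-2])
    return seq

# Finite regime: B(gamma, delta) = -cos(pi/m), here with m = 4. The values
# return to zero at index m and repeat with period 2m, so only finitely
# many distinct values can arise.
m = 4
fin = u_sequence(math.cos(math.pi / m), 2 * m + 2)
assert math.isclose(fin[m], 0.0, abs_tol=1e-9)
assert all(math.isclose(fin[n], fin[n + 2 * m], abs_tol=1e-9) for n in range(2))

# Infinite regime: B(gamma, delta) <= -1 gives a >= 1, and the values are
# strictly increasing from index 1 on, so they never repeat.
inf_seq = u_sequence(1.25, 12)
assert all(inf_seq[n] < inf_seq[n + 1] for n in range(1, 11))
```

This is the dichotomy exploited below: a = cos(π/m) corresponds to a finite dihedral subsystem, while a ≥ 1 corresponds to an infinite one.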
Definition 3.3.2.
Let α, β ∈ Φ+ and let ∆_{α,β} = {γ, δ} be the simple system of (W, S)_{α,β}. Let {γ_i}_{i∈Z} and {δ_i}_{i∈Z} be the doubly infinite sequences of roots defined by the initial conditions
γ_1 = γ, δ_1 = δ (3.5)
and the recurrences
γ_{i+1} = s_γ(δ_i), δ_{i+1} = s_δ(γ_i). (3.6)
The associated backward recurrences are given by
γ_i = s_δ(δ_{i+1}), δ_i = s_γ(γ_{i+1}). (3.7)
We call {γ_i} the γ-sequence of Φ_{α,β} and {δ_i} the δ-sequence of Φ_{α,β}. Any such sequence is called a local root sequence.

Figure 3.2: The sequence of roots in a dihedral subsystem pictured will form what we call a “local root sequence” for the dihedral subsystem. This figure depicts the case when |s_γ s_δ| = 4.

Remark 3.3.3. The entries of the γ- and δ-sequences are calculated by multiplying alternating factors of s_γ and s_δ and applying the result to either γ or δ. In particular, each γ_i for i > 1 has s_γ as a leftmost factor, and similarly each δ_i for i > 1 has s_δ as a leftmost factor.

Example 3.3.4. Let (W, S) be the Coxeter system of type A3, with generating set S = {s, t, u} and relations given by m_{s,t} = m_{t,u} = 3 and m_{s,u} = 2. Let α = α_s + α_t + α_u and β = α_s. Then γ = α_t + α_u and δ = α_s are canonical simple roots for the local system (W, S)_{α,β}. We have
γ-sequence: (..., −(α_s + α_t + α_u), −α_s, α_t + α_u, α_s + α_t + α_u, α_s, −(α_t + α_u), ...),
where the displayed root α_t + α_u is meant to represent γ_1. Also,
δ-sequence: (..., −α_s, −(α_s + α_t + α_u), −(α_t + α_u), α_s, α_s + α_t + α_u, α_t + α_u, −α_s, ...),
where the displayed root α_s is meant to represent δ_1.

Figure 3.3: The geometric analogies break down somewhat for infinite dihedral subsystems. This figure treats the simple roots as perpendicular (they are not perpendicular in the geometric representation). The non-simple roots are then projected so that their coefficients add to 1. The picture does faithfully represent the property of a root lying in the convex cone of two roots in the subsystem.

The recurrences given in Definition 3.3.2 for the γ- and δ-sequences can, in a certain sense, be “solved” explicitly. Our first “solution” expresses the roots as an alternating product applied to either γ or δ.
Recall that if u, v ∈ W, then (uv)_k denotes the alternating product uvuv··· of u and v beginning with u and having k factors. Thus, if k is odd, then (uv)_k ends with the factor u; otherwise, (uv)_k ends with the factor v.
Lemma 3.3.5.
Let α, β ∈ Φ+ and let ∆_{α,β} = {γ, δ} be the simple system of (W, S)_{α,β}. Let {γ_i} and {δ_i} be the associated local root sequences. Then, for k ≥ 1, there exist θ_k, θ′_k ∈ {γ, δ} satisfying
(s_γ s_δ)_k (θ_k) = γ_{k+1},
(s_γ s_δ)_k (θ′_k) = δ_{−k+1},
(s_δ s_γ)_k (θ′_k) = δ_{k+1},
(s_δ s_γ)_k (θ_k) = γ_{−k+1},
where θ_k ≠ θ′_k.
Proof. For the base case of k = 1, we apply the recurrences (3.6) and (3.7) once to the initial conditions to obtain γ_2 = s_γ(δ), δ_0 = s_γ(γ), δ_2 = s_δ(γ), and γ_0 = s_δ(δ). Thus the equations are satisfied if θ_1 = δ and θ′_1 = γ. For the inductive step, we have:
γ_{k+1} = s_γ(δ_k) = s_γ (s_δ s_γ)_{k−1} (θ′_{k−1}) = (s_γ s_δ)_k (θ′_{k−1}),
δ_{−k+1} = s_γ(γ_{−(k−1)+1}) = s_γ (s_δ s_γ)_{k−1} (θ_{k−1}) = (s_γ s_δ)_k (θ_{k−1}),
δ_{k+1} = s_δ(γ_k) = s_δ (s_γ s_δ)_{k−1} (θ_{k−1}) = (s_δ s_γ)_k (θ_{k−1}),
γ_{−k+1} = s_δ(δ_{−(k−1)+1}) = s_δ (s_γ s_δ)_{k−1} (θ′_{k−1}) = (s_δ s_γ)_k (θ′_{k−1}),
where the first equation in each line follows from (3.6) or (3.7), and the second equation in each line follows from the inductive hypothesis. We also have that θ_{k−1}, θ′_{k−1} ∈ {γ, δ} and θ_{k−1} ≠ θ′_{k−1} by the inductive hypothesis. The result follows from the last equation of each line by using θ_k = θ′_{k−1} and θ′_k = θ_{k−1}.
The next lemma provides much of the motivation for introducing the γ- and δ-sequences.
Lemma 3.3.6. Let (W, S)_{α,β} be a dihedral subsystem with canonical generators s_γ and s_δ. Then every root in Φ_{α,β} is in either the γ-sequence or the δ-sequence.
Proof. By Lemma 2.3.11, every root in Φ_{α,β} can be obtained as a (possibly empty) alternating product of s_γ and s_δ applied to either γ or to δ. In particular, γ and δ can be obtained using the empty product applied to γ or δ. For an alternating product of fixed length k ≥ 1, the product either begins with s_γ or with s_δ, and is applied to either γ or to δ. These four cases are precisely the four cases given in Lemma 3.3.5.
The next lemma gives another “solution” to the recurrences (3.6) and (3.7). This solution gives an explicit description of the scalars of any root in a dihedral subsystem expressed as a linear combination of the canonical simple roots.
Lemma 3.3.7.
Let α, β ∈ Φ and ∆_{α,β} = {γ, δ}. Let a = −B(γ, δ). Then we have
γ_k = U_k(a)γ + U_{k−1}(a)δ and δ_k = U_{k−1}(a)γ + U_k(a)δ, (3.8)
for all k ∈ Z.
Proof. For k = 1, this is just the initial conditions given by (3.1) since U_0(a) = 0 and U_1(a) = 1. For k = 0, we have
γ_0 = s_δ(δ_1) = −δ = U_0(a)γ + U_{−1}(a)δ and δ_0 = s_γ(γ_1) = −γ = U_{−1}(a)γ + U_0(a)δ.
Suppose k > 1. Then, by induction,
γ_k = s_γ(δ_{k−1})
= s_γ(U_{k−2}(a)γ + U_{k−1}(a)δ)
= −U_{k−2}(a)γ + U_{k−1}(a)(δ − 2B(γ, δ)γ)
= (2a·U_{k−1}(a) − U_{k−2}(a))γ + U_{k−1}(a)δ
= U_k(a)γ + U_{k−1}(a)δ.
The equation δ_k = U_{k−1}(a)γ + U_k(a)δ can be obtained by reversing the roles of γ and δ in the above proof. For k < 0, we apply the backward recurrences and use induction, proving the equations hold for γ_k, δ_k assuming they hold for γ_{k+1}, δ_{k+1}:
γ_k = s_δ(δ_{k+1})
= s_δ(U_k(a)γ + U_{k+1}(a)δ)
= U_k(a)(γ − 2B(γ, δ)δ) − U_{k+1}(a)δ
= U_k(a)γ + (2a·U_k(a) − U_{k+1}(a))δ
= U_k(a)γ + U_{k−1}(a)δ.
The last equation applies the backward recurrence (3.3) given for the Chebyshev polynomials using n = k − 1. The equation for δ_k with k < 0 can be obtained by reversing the roles of γ and δ in the above proof.
Lemma 3.3.8.
Let α, β ∈ Φ+ and ∆_{α,β} = {γ, δ} be the simple system of (W, S)_{α,β}. If m = |s_γ s_δ|, then the roots of the form γ_i, 1 ≤ i ≤ m (i < m if m = ∞), are pairwise distinct and positive. Similarly, the roots of the form δ_i, 1 ≤ i ≤ m (i < m if m = ∞), are pairwise distinct and positive.
Proof. Suppose m is finite. Then B(γ, δ) > −1, so by Theorem 3.3.1 we have
B(γ, δ) = −cos(π/m′)
for some m′ ≥ 2. Since m′ determines the order of s_γ s_δ, m′ = m. If γ_k = γ_{k′} for 1 ≤ k, k′ ≤ m, then by Lemma 3.3.7, we have
U_k(a)γ + U_{k−1}(a)δ = U_{k′}(a)γ + U_{k′−1}(a)δ,
where a = cos(π/m). By Lemma 3.2.10, we must have k = k′. Since the scalars in δ_k can be obtained from the γ_k scalars by interchanging γ and δ, the same argument can be applied to the roots δ_i, 1 ≤ i ≤ m.
For all i satisfying 0 ≤ i ≤ m, we have sin(iπ/m) ≥ 0. Since a = cos(π/m), we have that the linear combination U_i(a)γ + U_{i−1}(a)δ has nonnegative coefficients for 1 ≤ i ≤ m by Definition 3.2.1. Thus γ_i is a positive root for all i satisfying 1 ≤ i ≤ m. The same argument applies to δ_i where 1 ≤ i ≤ m.
If m is infinite, then B(γ, δ) ≤ −1, so a = −B(γ, δ) ≥ 1. By Lemma 3.2.8, applied with a = −B(γ, δ), we have U_k(a) = U_{k′}(a) only if k = k′. Also, for k ≥ 1 we have U_k(a) > 0 and U_0(a) = 0, so the linear combination U_i(a)γ + U_{i−1}(a)δ has nonnegative coefficients for i ≥ 1. Thus, for i ≥ 1, the γ_i are pairwise distinct and positive. Similarly, the δ_i are pairwise distinct and positive for all i ≥ 1.
Lemma 3.3.9.
Let α, β ∈ Φ+ and ∆_{α,β} = {γ, δ} be the simple system of (W, S)_{α,β}. Then, for i ≥ 0, we have γ_{−i} = −δ_{i+1} and δ_{−i} = −γ_{i+1}.
Proof. First note that γ_0 = s_δ(δ_1) = −δ_1 and δ_0 = s_γ(γ_1) = −γ_1 by the backward recurrences of (3.7). Next, for i > 0, we have
γ_{−i} = s_δ(δ_{−i+1}) = s_δ(−γ_i) = −δ_{i+1}.
The second equality follows from induction while the last equality follows from the definition of the δ-sequence. A similar computation yields the result that δ_{−i} = −γ_{i+1}.
Corollary 3.3.10.
Let α, β ∈ Φ+ and ∆_{α,β} = {γ, δ} be the simple system of (W, S)_{α,β}. If m = |s_γ s_δ|, then the roots of the form γ_{−i}, 0 ≤ i ≤ m − 1 (with i < m if m = ∞), are negative and pairwise distinct. Similarly, the roots of the form δ_{−i}, 0 ≤ i ≤ m − 1 (with i < m if m = ∞), are negative and pairwise distinct.
Proof. The formulas for γ_{−i} and δ_{−i} given by Lemma 3.3.9 and Lemma 3.3.8 imply that the roots of the form γ_{−i}, for 0 ≤ i ≤ m − 1, are negative and pairwise distinct, as are the roots of the form δ_{−i}, 0 ≤ i ≤ m − 1.
Corollary 3.3.11.
Let γ, δ be the canonical simple roots for a dihedral subsystem and suppose m = |s_γ s_δ| is finite. Then the γ- and δ-sequences are periodic with period 2m.

Proof. It follows from the defining recurrences for the γ- and δ-sequences that γ_{i+2} = s_γ s_δ(γ_i) and δ_{i+2} = s_δ s_γ(δ_i) for all i. Since

(s_γ s_δ)^m = (s_δ s_γ)^m = 1,

we have γ_{i+2m} = γ_i and δ_{i+2m} = δ_i. Lemma 3.3.8 and Corollary 3.3.10 imply that the 2m roots γ_{−(m−1)}, . . . , γ_m are pairwise distinct, as are the 2m roots δ_{−(m−1)}, . . . , δ_m, so the result follows.

Lemma 3.3.12.
Let γ, δ be the canonical simple roots for a local Coxeter system and let γ and δ be the associated local root sequences. If m = |s_γ s_δ| is finite, then γ_i = δ_{m+1−i} and δ_i = γ_{m+1−i}.

Proof. Let a = −B(γ, δ) = cos(π/m). We have

γ_i = U_i(a)γ + U_{i−1}(a)δ and δ_{m+1−i} = U_{m−i}(a)γ + U_{m−(i−1)}(a)δ,

by Lemma 3.3.7. Since

sin((m − i)π/m) = sin(π − iπ/m) = sin(iπ/m)

for any i ∈ Z, we have U_{m−i}(a) = U_i(a) and U_{m−(i−1)}(a) = U_{i−1}(a). Thus we have γ_i = δ_{m+1−i} for all i ∈ Z. By the same calculation with γ and δ interchanged, we have δ_i = γ_{m+1−i} for all i ∈ Z.

Lemma 3.3.13.
Let Φ^+_{α,β} be an infinite dihedral subsystem with canonical simple roots γ and δ. Then no entry in the γ-sequence is an entry of the δ-sequence.

Proof. Let a = −B(γ, δ). Since Φ^+_{α,β} is infinite, we have a ≥ 1, so that {U_n(a)}_{n≥0} is a strictly increasing sequence of real numbers. By Lemma 3.3.7, for any i, j ≥ 1 we have

γ_i = U_i(a)γ + U_{i−1}(a)δ and δ_j = U_{j−1}(a)γ + U_j(a)δ.

Thus the γ coefficient is strictly larger than the δ coefficient for γ_i, whereas the γ coefficient is strictly smaller than the δ coefficient for δ_j. It follows that γ_i ≠ δ_j. By Lemma 3.3.9, this implies that γ_i ≠ δ_j for any i, j ≤ 0. If i ≤ 0 and j ≥ 1, then γ_i is negative by Corollary 3.3.10, whereas δ_j is positive by Lemma 3.3.8, so γ_i ≠ δ_j. The same reasoning applies to the case i ≥ 1 and j ≤ 0.

Proposition 3.3.14.
Let γ, δ be the canonical simple roots for a dihedral subsystem Φ_{γ,δ}. Let γ and δ be the associated local root sequences. Let m = |s_γ s_δ|. If m is finite, then

Φ^+_{α,β} = {γ_1, . . . , γ_m} = {δ_1, . . . , δ_m}.

If m is infinite, then

Φ^+_{α,β} = {γ_1, γ_2, . . .} ∪ {δ_1, δ_2, . . .},

and the union is disjoint.

Proof. By Lemma 3.3.6, every root of Φ^+_{α,β} is an entry of the γ-sequence or of the δ-sequence.

Suppose that m is finite. By Lemma 3.3.12, every root of the δ-sequence is an entry of the γ-sequence, so that every root of Φ^+_{α,β} is an entry of the γ-sequence. By Corollary 3.3.11, the γ-sequence is periodic with period 2m, so every root of Φ_{α,β} is in the set {γ_{−(m−1)}, . . . , γ_0, γ_1, . . . , γ_m}. Thus, Lemma 3.3.8 and Corollary 3.3.10 imply that Φ^+_{α,β} = {γ_1, . . . , γ_m}.

Suppose that m is infinite. By Lemma 3.3.8 and Corollary 3.3.10, the roots γ_i and δ_i are positive if i ≥ 1 and negative if i ≤ 0. It follows that every root of Φ^+_{α,β} is in the set {γ_1, γ_2, . . .} ∪ {δ_1, δ_2, . . .}. The union is disjoint by Lemma 3.3.13.

If we think of the indices of the γ- or δ-sequence as providing a total order on the roots in the sequences, then the next lemma says that given any three roots in a local root sequence that are close enough together in the sequence, the “middle root” lies in the convex cone spanned by the “outer roots”.

Lemma 3.3.15.
Let d_1, d_2 ≥ 1 and n ∈ Z. Let γ, δ be the canonical simple roots for a local Coxeter system and m = |s_γ s_δ| (where m is possibly infinite). Suppose d_1 + d_2 < m. Then γ_{n+d_1} lies in the convex cone spanned by γ_n and γ_{n+d_1+d_2}. Similarly, δ_{n+d_1} lies in the convex cone spanned by δ_n and δ_{n+d_1+d_2}. In particular, if i, j, k ∈ N^+ and 1 ≤ i < j < k ≤ m, then γ_j lies in the convex cone spanned by γ_i and γ_k. Similarly, δ_j lies in the convex cone spanned by δ_i and δ_k.

Proof. By Lemma 3.3.7, we have γ_{n+d_1} = U_{n+d_1}(a)γ + U_{n+d_1−1}(a)δ, where a = −B(γ, δ). Let x = U_{d_1}(a) and y = U_{d_2}(a). Then, by Lemma 3.2.9, we have

U_{d_1+d_2}(a) γ_{n+d_1} = U_{d_1+d_2}(a) U_{n+d_1}(a) γ + U_{d_1+d_2}(a) U_{n+d_1−1}(a) δ
= [x U_{n+d_1+d_2}(a) + y U_n(a)] γ + [x U_{n+d_1+d_2−1}(a) + y U_{n−1}(a)] δ
= x (U_{n+d_1+d_2}(a) γ + U_{n+d_1+d_2−1}(a) δ) + y (U_n(a) γ + U_{n−1}(a) δ)
= x γ_{n+d_1+d_2} + y γ_n.

Since 1 ≤ d_1, d_2 < m, we have x, y > 0. Since 2 ≤ d_1 + d_2 < m, if m is finite then sin((d_1 + d_2)π/m) > 0, so that U_{d_1+d_2}(a) > 0. If m is infinite, then Lemma 3.2.8 implies U_{d_1+d_2}(a) > 0. The result follows by solving for γ_{n+d_1}.

Recall by Definition 3.3.2 that the sequences γ and δ associated to a local dihedral Coxeter system with γ and δ as canonical roots are called local root sequences.

Corollary 3.3.16.
Let γ, δ be the canonical simple roots for a dihedral Coxeter system and let γ be an associated local root sequence. Let m = |s_γ s_δ| (where m is possibly infinite). Let i, j, k ∈ N^+, where 1 ≤ i < j ≤ m and 1 ≤ k ≤ m. If γ_k lies in the convex cone spanned by γ_i and γ_j, then i < k < j.

Proof. Suppose that k < i < j. Then by Lemma 3.3.15, γ_i lies in the convex cone spanned by γ_k and γ_j, which (together with the hypothesis that γ_k lies in the convex cone spanned by γ_i and γ_j) contradicts Lemma 3.1.2. Similarly, if i < j < k, then γ_j lies in the convex cone spanned by γ_i and γ_k, contradicting Lemma 3.1.2.

The previous two results apply to infinite dihedral subsystems whenever the roots involved lie within a single local root sequence. However, it is possible for two roots in an infinite dihedral subsystem to lie in distinct local root sequences. The next two lemmas show that in this case, results similar to the previous two lemmas hold.

Lemma 3.3.17.
Let Φ^+_{α,β} be an infinite dihedral subsystem with canonical simple roots γ and δ. Let γ and δ be the associated local root sequences. Suppose 1 ≤ i < j and k ≥ 1. Then:

(1) the root γ_j lies in the convex cone spanned by γ_i and δ_k;
(2) the root δ_j lies in the convex cone spanned by γ_k and δ_i.

Proof. Let a = −B(γ, δ). Since Φ^+_{α,β} is infinite, we have a ≥ 1. Let

r_1 = U_{k−1}(a)/U_k(a), r_2 = U_j(a)/U_{j−1}(a), and r_3 = U_i(a)/U_{i−1}(a).

By Lemma 3.2.5, we have r_1 < 1 and r_2, r_3 > 1. By Lemma 3.2.6, we have r_3 > r_2. Since each r_i gives the ratio of the γ coefficient to the δ coefficient, and r_1 < r_2 < r_3, the first assertion follows from Lemma 3.2.7. The second assertion is proven by interchanging γ and δ in the above argument.

Corollary 3.3.18.
Let γ, δ be the canonical simple roots for a dihedral Coxeter system. Suppose |s_γ s_δ| is infinite and let γ and δ be the associated local root sequences. Suppose i, j, k ≥ 1, i ≠ j, and that γ_j lies in the convex cone spanned by γ_i and δ_k. Then i < j. Similarly, if j ≠ k and δ_j lies in the convex cone spanned by γ_i and δ_k, then k < j.

Proof. Suppose i > j and γ_j lies in the convex cone spanned by γ_i and δ_k. Then by Lemma 3.3.17, γ_i lies in the convex cone spanned by γ_j and δ_k, contradicting Lemma 3.1.2. Similarly, if δ_j lies in the convex cone spanned by γ_i and δ_k, we get a contradiction if we assume k > j.

The sets that contain all the positive roots of a dihedral subsystem play a special role in the constructions of Chapter 4, so we give these sets a name. In [12], Green and Losonczy referred to any set of positive roots of the form {α, α + β, β} as an inversion triple. We view our definition of inversion set as a generalization of this notion.

Definition 3.4.1. We call any subset of Φ^+ of the form Φ^+_{α,β}, where α, β ∈ Φ are (possibly negative) roots that are not scalar multiples of one another, an inversion set. The set of all inversion sets is denoted by Inv(Φ^+). If m = |Φ^+_{α,β}|, then we also say that Φ^+_{α,β} is an inversion m-set.

Lemma 3.4.2.
Let Ψ be an inversion set. Then span(Ψ) is a two-dimensional subspace of V. Furthermore, given two distinct roots α, β ∈ Ψ, we have span({α, β}) = span(Ψ).

Proof. Since Ψ is an inversion set, Ψ = Φ^+_{α,β} for some α, β ∈ Φ. Let γ, δ be the canonical simple roots of (W, S)_{α,β}. By Lemma 2.3.11, every root in Φ^+_{α,β} can be obtained as an alternating product of s_γ and s_δ applied to γ or δ. By the reflection formula (1.1), s_γ and s_δ preserve (set-wise) the set span({γ, δ}). Thus every root in Ψ is in span({γ, δ}), so that span(Ψ) is a two-dimensional subspace of V, which proves the first assertion.

If α, β ∈ Ψ are distinct, then since roots in Ψ are positive, α and β are not scalar multiples of one another. Thus span({α, β}) is a two-dimensional subspace of span(Ψ), which is two-dimensional, so span({α, β}) = span(Ψ).

Lemma 3.4.3. Let Ψ and Υ be inversion sets with Ψ ≠ Υ. Then Ψ and Υ intersect in at most one root.

Proof. Suppose Ψ and Υ intersect in distinct positive roots α and β. We may then form the dihedral subsystem (W, S)_{α,β}. We have Ψ = Φ^+_{α′,β′} for some α′, β′ ∈ Φ, by Definition 3.4.1 and hypothesis. Also, we have α, β ∈ Ψ, by assumption. It follows that span({α, β}) = span({α′, β′}), since span({α, β}) is a two-dimensional subspace of span({α′, β′}), which itself is a two-dimensional space by Lemma 3.4.2. Similarly, Υ = Φ^+_{α″,β″} for some α″, β″ ∈ Φ and span({α, β}) = span({α″, β″}). Intersecting the span equations with Φ^+ gives Ψ = span({α, β}) ∩ Φ^+ = Υ by Lemma 2.3.3, contradicting the hypothesis that Ψ ≠ Υ.

Corollary 3.4.4.
Let α, β ∈ Φ^+ be distinct positive roots. Then there exists a unique inversion set Ψ containing α and β.

Proof. By Definition 3.4.1 and Lemma 2.3.3, Ψ = Φ^+_{α,β} is an inversion set containing α and β. The uniqueness of Ψ follows from Lemma 3.4.3.

The indices of the γ- and δ-sequences make a natural candidate for an order on the roots of an inversion set. However, in the case that the inversion set is infinite, the situation is not as straightforward. Though the next definition is made from the point of view of the γ-sequence, either canonical simple root can be “the γ root”.

Definition 3.4.5.
Let Ψ = Φ^+_{α,β} be an inversion m-set with canonical simple roots γ and δ. Then we define a total ordering ≤_{Ψ,γ} on Ψ as follows:

(1) If m is finite, α = γ_i, and β = γ_j, where 1 ≤ i, j ≤ m, then we say that α ≤_{Ψ,γ} β if i ≤ j.

(2) If m is infinite, then we say α ≤_{Ψ,γ} β if one of the following holds:
(a) α = γ_i and β = γ_j, where i, j ≥ 1 and i ≤ j;
(b) α = δ_i and β = δ_j, where i, j ≥ 1 and i ≥ j;
(c) α = γ_i and β = δ_j, where i, j ≥ 1.

Remark 3.4.6. We can graphically depict the total ordering of Ψ with respect to γ by placing the roots of Ψ on a line so that α is to the left of β if and only if α ≤_{Ψ,γ} β. If Ψ is finite, then the typical total ordering of Ψ with respect to γ is depicted below:

γ = γ_1   γ_2   · · ·   γ_m = δ
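As a concrete instance of this order (a worked example of ours, assuming the convention U_0(a) = 0, U_1(a) = 1 and the recurrence U_{i+1}(a) = 2aU_i(a) − U_{i−1}(a) implicit in Lemma 3.3.7), take m = 3, so that a = cos(π/3) = 1/2 and the inversion 3-set is the inversion triple {γ, γ + δ, δ}:

```latex
% Worked example (ours): m = 3, a = cos(\pi/3) = 1/2, so
%   U_1(a) = 1, \quad U_2(a) = 2a = 1, \quad U_3(a) = 2a\,U_2(a) - U_1(a) = 0.
% By Lemma 3.3.7 the gamma-sequence of the inversion 3-set is therefore
\gamma_1 = U_1(a)\gamma + U_0(a)\delta = \gamma, \qquad
\gamma_2 = U_2(a)\gamma + U_1(a)\delta = \gamma + \delta, \qquad
\gamma_3 = U_3(a)\gamma + U_2(a)\delta = \delta,
% and the total order of Definition 3.4.5(1) reads
\gamma \;\le_{\Psi,\gamma}\; \gamma + \delta \;\le_{\Psi,\gamma}\; \delta.
```

Note that the middle root γ + δ of the triple sits between the two canonical simple roots, as Lemma 3.3.15 requires.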
If Ψ is infinite, then the order begins with all the roots from γ and ends with the roots in δ. Thus, there are two endpoints, γ and δ, having infinitely many roots between them. We depict this situation by placing a bar between the two infinite sequences. Thus, the three cases given in Definition 3.4.5 for infinite inversion sets are translated pictorially as: the two roots are to the left of the bar (both roots are in γ), one root is to the left of the bar and one root is to the right of the bar (one root is in γ, one root is in δ), and the two roots are to the right of the bar (both roots are in δ).

γ = γ_1   γ_2   · · ·   |   · · ·   δ_2   δ_1 = δ

We use interval notation in the standard way. That is, given any totally ordered set (X, ≤), we have [a, b] = {x : a ≤ x ≤ b}.

Proposition 3.4.7.
Let Λ ⊆ Φ^+ be any set of positive roots. Then Λ is a biconvex set of positive roots if and only if for every inversion set Ψ with local simple system ∆, we have one of the following:

(1) Λ ∩ Ψ = ∅, so that Λ ∩ ∆ = ∅.

(2) There exists γ ∈ Λ ∩ ∆ such that Λ ∩ Ψ = {γ_1, γ_2, . . .}, the set of positive roots in the sequence γ.

(3) There exists γ ∈ Λ ∩ ∆ such that for some λ ∈ Ψ, Λ ∩ Ψ = [γ, λ] in the total order (Ψ, ≤_{Ψ,γ}).

Proof. We first show that if statement (1), (2), or (3) holds for every inversion set Ψ, then Λ is a convex set of positive roots. Suppose that α, β ∈ Λ are distinct and a, b ≥ 0. We must show that aα + bβ ∈ Φ^+ implies aα + bβ ∈ Λ. Thus we let Ψ = Φ^+_{α,β} and suppose that aα + bβ ∈ Φ^+. Since Λ ∩ Ψ is nonempty, (2) or (3) holds. We consider the following two cases:

(A) The roots α and β are both in the γ-sequence.

(B) The set Ψ is infinite and (without loss of generality) α is contained in the γ-sequence while β is in the δ-sequence.

Suppose α and β are in the γ-sequence, and that α = γ_i and β = γ_j for some i, j ≥ 1. We assume without loss of generality that i < j. If aα + bβ is in the δ-sequence but not the γ-sequence, then Corollary 3.3.18 implies that β is in the convex cone spanned by α and aα + bβ, contradicting Lemma 3.1.2. Thus aα + bβ = γ_k, where i < k < j by Corollary 3.3.16. If (2) holds, then aα + bβ ∈ Λ, since Λ contains all the roots of the γ-sequence. If (3) holds, then α, β ∈ [γ, λ] and we have α ≤_{Ψ,γ} aα + bβ ≤_{Ψ,γ} β, so that aα + bβ ∈ [γ, λ] and hence aα + bβ ∈ Λ.

Suppose Ψ is infinite, α = γ_i, and β = δ_j. If aα + bβ = γ_k for some k ≥ 1, then Corollary 3.3.18 implies that k > i. By Definition 3.4.5, we have α ≤_{Ψ,γ} aα + bβ ≤_{Ψ,γ} β, and hence aα + bβ ∈ Λ. Similarly, if aα + bβ = δ_k for some k ≥ 1, then Corollary 3.3.18 implies that k > j. Then we have aα + bβ ∈ [γ, λ], since α, β ∈ [γ, λ] ⊆ Λ.

To show that Λ is biconvex, we note that if Λ satisfies (1), (2), or (3) for every inversion set Ψ, then so does Φ^+ \ Λ. (This follows from Definition 3.4.5 or Remark 3.4.6.) Thus, we have that Φ^+ \ Λ is convex, and hence Λ is biconvex by Definition 3.1.3.

Turning to the converse, we suppose that Λ is a biconvex set of positive roots. Let Ψ be an inversion set with local simple system ∆ = {γ, δ} and suppose Λ ∩ Ψ ≠ ∅. If γ, δ ∉ Λ ∩ Ψ, then there exists λ ∈ Λ ∩ Ψ such that λ = cγ + dδ with c, d > 0. Since γ, δ ∈ Ψ, we have γ, δ ∉ Λ, contradicting the biconvexity of Λ. Thus we may assume there exists γ ∈ Λ ∩ ∆.

Suppose there exists a λ′ in the γ-sequence such that λ′ ∉ Λ ∩ Ψ. Let i be the smallest index such that γ_i ∉ Λ ∩ Ψ, and set λ = γ_{i−1}. Since γ_i ∈ Ψ, we have γ_i ∉ Λ. If there exists j ∈ N^+ such that i < j ≤ |s_γ s_δ| and γ_j ∈ Λ ∩ Ψ, then γ_i lies in the convex cone spanned by γ_{i−1} and γ_j, by Lemma 3.3.15. This contradicts the biconvexity of Λ. Similarly, if Ψ is infinite and there is a root µ of Λ ∩ Ψ in the δ-sequence, then by Lemma 3.3.17, γ_i lies in the convex cone spanned by γ_{i−1} and µ. Thus, in this case, it follows that Λ ∩ Ψ = [γ, λ] = [γ, γ_{i−1}].

Suppose there is no positive root λ′ in the γ-sequence such that λ′ ∉ Λ ∩ Ψ. Then if Ψ is finite, we have that Λ ∩ Ψ = Ψ = [γ, δ] by Proposition 3.3.14. If instead Ψ is infinite, then every root of the γ-sequence is in Λ ∩ Ψ. If there are no roots of the δ-sequence in Λ ∩ Ψ, then Condition (2) is satisfied.

Suppose every positive root of γ lies in Λ ∩ Ψ and that there are roots of Λ ∩ Ψ in the δ-sequence. If Λ ∩ Ψ = Ψ, then Λ ∩ Ψ = [γ, δ], so that statement (3) is satisfied. Thus we suppose there exists a λ′ in the δ-sequence such that λ′ ∉ Λ ∩ Ψ. Let i be the smallest index such that δ_i ∈ Λ ∩ Ψ. If i > 1, set λ = δ_i and note that δ_{i−1} ∉ Λ ∩ Ψ. If there exists j > i such that δ_j ∉ Λ ∩ Ψ, then Lemma 3.3.15 implies that δ_i lies in the convex cone of δ_{i−1} and δ_j. Since δ_{i−1}, δ_i, δ_j ∈ Ψ, this would imply that δ_{i−1} ∉ Λ, δ_i ∈ Λ, and δ_j ∉ Λ, which contradicts the biconvexity of Λ. Thus, if i > 1, Condition (3) is satisfied with Λ ∩ Ψ = [γ, λ] = [γ, δ_i]. Lastly, if i = 1, then γ, δ ∈ Λ ∩ Ψ. By Lemma 3.3.17, λ′ lies in the convex cone spanned by δ and γ, which contradicts the biconvexity of Λ because λ′ ∉ Λ.

Corollary 3.4.8.
Let w ∈ W and let Ψ be an inversion set with canonical simple roots γ and δ. Let γ and δ be the associated local root sequences. If m = |s_γ s_δ|, then at least one of the three following statements holds:

(1) For some k satisfying 1 ≤ k ≤ m (k < m if m = ∞), we have Φ(w) ∩ Ψ = [γ_1, γ_k] in the total order (Ψ, ≤_{Ψ,γ}).

(2) For some k satisfying 1 ≤ k ≤ m (k < m if m = ∞), we have Φ(w) ∩ Ψ = [δ_1, δ_k] in the total order (Ψ, ≤_{Ψ,δ}).

(3) We have Φ(w) ∩ Ψ = ∅.

Proof. By Lemma 3.1.4, Φ(w) is a biconvex set of roots, so we may apply Proposition 3.4.7. Since Φ(w) is finite, we can eliminate Condition (2) of Proposition 3.4.7. Thus, if Φ(w) ∩ Ψ is nonempty, we have Φ(w) ∩ Ψ = [γ, λ] for some λ ∈ Φ(w) ∩ Ψ in the total ordering (Ψ, ≤_{Ψ,γ}), or Φ(w) ∩ Ψ = [δ, λ] for some λ ∈ Φ(w) ∩ Ψ in the total ordering (Ψ, ≤_{Ψ,δ}). Since Φ(w) is finite, in the first case we must have λ = γ_k for some k ≤ m. Similarly, in the second case we must have λ = δ_k for some k ≤ m.

The following definition has many variants. In [20, Definition 2.5], we find the same definition applied to the situation where a, b = 1 in the definition given below. In [2, Section 5.2], the conditions are meant to apply to a total ordering on Φ^+. Our variant is introduced to characterize the labelings of Φ^+ that encode reduced expressions for some w ∈ W.

Definition 3.5.1. Let T : Φ^+ → N be a labeling of Φ^+. We say that T is a standard labeling of Φ^+ if for any pair of distinct positive roots α and β the following implication holds: if a root γ lies in the convex cone spanned by α and β, then we have either

T(α) ≤ T(γ) ≤ T(β) or T(α) ≥ T(γ) ≥ T(β).

Remark 3.5.2. Recall by Definition 2.1.12 that T : Φ^+ → N is sequential if T(supp(T)) = {1, . . . , |supp(T)|}. If T is both sequential and standard, the inequalities in Definition 3.5.1 are strict whenever α, β, γ ∈ supp(T), i.e. whenever T(α), T(β), T(γ) ≠ 0.

Lemma 3.5.3.
Let (W, S) be a Coxeter system and let w ∈ W. If x is any reduced expression for w and T_x : Φ^+ → N is the standard encoding of x, then T_x is a standard labeling.

Proof. Let α, β, aα + bβ ∈ Φ^+, where a, b > 0 and α ≠ β. Since Φ(w) is biconvex and α and β can be interchanged in all proofs of standardness, there are only four cases to consider:

(1) α ∈ Φ(w), aα + bβ ∈ Φ(w), β ∈ Φ(w);
(2) α ∈ Φ(w), aα + bβ ∈ Φ(w), β ∉ Φ(w);
(3) α ∈ Φ(w), aα + bβ ∉ Φ(w), β ∉ Φ(w);
(4) α ∉ Φ(w), aα + bβ ∉ Φ(w), β ∉ Φ(w).

Let x = (s_1, . . . , s_n). By Definition 2.1.14, T_x(λ) = 0 for λ ∉ Φ(w), so cases (3) and (4) satisfy T_x(β) ≤ T_x(aα + bβ) ≤ T_x(α), since the first two labels in the inequality are zero in both cases.

Thus, we first suppose α, β, aα + bβ ∈ Φ(w), where a, b > 0. Assume without loss of generality that T_x(α) < T_x(β). Towards a contradiction, we must consider the subcase where T_x(aα + bβ) is larger than both T_x(α) and T_x(β) and the subcase where T_x(aα + bβ) is smaller than both T_x(α) and T_x(β).

First suppose T_x(α) = k, T_x(β) = k′ > k, and T_x(aα + bβ) > k′. Then we can form the reduced expression x′ = (s_{k′+1}, . . . , s_n) for some w′ ∈ W. The root sequence for x′ contains neither α nor β, but it does contain aα + bβ. By Lemma 3.1.4, this contradicts the biconvexity of Φ(φ(x′)). Now suppose T_x(aα + bβ) < k. Then we can form the reduced expression x′ = (s_k, . . . , s_n) for some w′ ∈ W. The root sequence of x′ contains α and β, but not aα + bβ, a contradiction.

We next suppose that α, aα + bβ ∈ Φ(w) and β ∉ Φ(w). Suppose that T_x(aα + bβ) = k and k > T_x(α). Then the expression x′ = (s_k, . . . , s_n) is a reduced expression for some w′ ∈ W. The root sequence for x′ does not contain α or β, but it does contain aα + bβ, contradicting the biconvexity of Φ(w′). Thus T_x(α) > T_x(aα + bβ). Since T_x(β) = 0, the standardness property is satisfied for α, aα + bβ, β ∈ Φ^+. As all cases are exhausted, T_x is standard.

The following useful consequence of the biconvexity of Φ(w) is noted in [12, Section 2] and [10, Section 2] for simply laced Coxeter systems (that is, Coxeter systems such that for all i, j ∈ I, m_{ij} = 2 or m_{ij} = 3).

Corollary 3.5.4.
Let α, β, λ ∈ Φ^+ and suppose λ lies in the convex cone spanned by α and β. Let w ∈ W and let x be a reduced expression for w.

(1) If α, β, λ ∈ Φ(w), then λ occurs between α and β in the root sequence of x.

(2) If α, λ ∈ Φ(w), but β ∉ Φ(w), then λ must occur before α in the root sequence of x.

Proof. Let θ(x) = (θ_1, . . . , θ_{ℓ(w)}) be the root sequence of x. Let θ_i = α and θ_j = λ. By Definition 2.1.14, the standard encoding T_x of x satisfies T_x(θ_n) = n for all n satisfying 1 ≤ n ≤ ℓ(w).

Suppose α, β, λ ∈ Φ(w) and let θ_k = β. Then by Lemma 3.5.3 and Definition 3.5.1, either i < j < k or k < j < i, which proves the first assertion.

Suppose α, λ ∈ Φ(w), but β ∉ Φ(w). Then Lemma 3.5.3 and Definition 3.5.1 imply that T_x(β) = 0 < T_x(λ) < T_x(α), which proves that j < i. The second assertion follows.

Lemma 3.5.5.
Let T : Φ^+ → N be a sequential standard labeling of Φ^+ such that supp(T) is finite. Let γ, δ be the canonical simple roots for an inversion set Ψ, let m = |s_γ s_δ|, and let γ and δ be the associated local root sequences. We have:

(1) If T(γ) = 0 and T(δ) = 0, then T(λ) = 0 for all λ ∈ Ψ.

(2) If T(γ) ≠ 0 and T(δ) = 0, then there exists k < m such that T(γ_1) > · · · > T(γ_k) and T(γ_i) = 0 for i > k.

(3) If T(γ) ≠ 0 and T(δ) ≠ 0, then m is finite and we have either T(γ_1) < · · · < T(γ_m) or T(γ_1) > · · · > T(γ_m).

Proof. Since T is sequential with finite support, all the inequalities in Definition 3.5.1 are strict when they apply to roots in supp(T).

First suppose T(γ) = 0 and T(δ) = 0. Then by Lemmas 3.3.15 and 3.3.17 and Definition 3.5.1, we have T(γ_i) = T(δ_j) = 0 for any i, j ∈ N^+. Thus by Proposition 3.3.14, for any λ ∈ Ψ, T(λ) = 0.

For case (2), we suppose that T(γ) ≠ 0 and T(δ) = 0. Since supp(T) is finite, and since δ ∉ supp(T), there must exist a smallest index k ≥ 1 such that γ_k ∈ supp(T) and γ_{k+1} ∉ supp(T). Suppose towards a contradiction that γ_{k+j} ∈ supp(T) for some j > 1. We have T(γ) > T(γ_{k+1}) = 0 because γ ∈ supp(T). By Lemma 3.3.15 and Definition 3.5.1, either

T(γ) ≤ T(γ_{k+1}) ≤ T(γ_{k+j}) or T(γ) ≥ T(γ_{k+1}) ≥ T(γ_{k+j}),

contradicting the assumption that T(γ), T(γ_{k+j}) ≠ 0 and T(γ_{k+1}) = 0. By Lemma 3.3.15 and Definition 3.5.1, for any i satisfying 1 < i < k, we have γ_i ∈ supp(T). Thus, for 1 ≤ i ≤ k, we have γ_i ∈ supp(T), and for k < i ≤ m (with i finite if m = ∞), we have γ_i ∉ supp(T).

Let i be such that 1 < i ≤ k. Then γ_i is a positive linear combination of γ and δ and T(δ) = 0, so by Definition 3.5.1 we have

T(γ) > T(γ_i) > T(δ) = 0

for all i such that 1 < i ≤ k. Now, by Lemma 3.3.15 and Definition 3.5.1, we have T(γ) > T(γ_i) > T(γ_{i+1}) for any i such that 2 ≤ i ≤ k. This forces T(γ_1) > T(γ_2) > · · · > T(γ_k).

Turning to case (3), suppose γ, δ ∈ supp(T). Assume by way of contradiction that m = ∞. Then for each i > 1, by Lemma 3.3.17 and Definition 3.5.1, T(γ_i) lies between T(γ) and T(δ). This implies that supp(T) is infinite, contradicting the hypotheses. Thus m is finite.

If T(γ) < T(δ), then Lemma 3.3.15 and Definition 3.5.1 imply that

T(γ_1) < · · · < T(γ_m) = T(δ).

Similarly, if T(γ) > T(δ), then T(γ_1) > · · · > T(γ_m), as desired.

Chapter 4

Correspondences with reduced expressions for a Coxeter group element
The goal of this chapter is to establish a correspondence between subsets of Φ^+ and the elements of W, as well as a correspondence between standard labelings of Φ^+ with support equal to Φ(w) and the reduced expressions for w. In addition to these correspondences, we introduce an incidence structure that faithfully represents the local dihedral subsystem structure present in Φ^+ and Φ(w).

In [20], Kraśkiewicz gave a 1–1 correspondence between reduced expressions for an element w of a crystallographic Weyl group and standard w-tableaux. In adopting the labeling terminology of Fomin et al. [11], we will decouple any structure associated to w and its inversion set Φ(w) from structure imposed by a reduced expression. In our terminology, a standard w-tableau is called a standard sequential labeling. Kraśkiewicz's notion of encoding a reduced expression is the same as ours. However, we have generalized the meaning of standard labeling so that the correspondence applies in the expanded context of arbitrary Coxeter groups. We now recall Kraśkiewicz's theorem and his definition of “standard”, translated into our terminology for comparison purposes.

Definition 4.1.1. [20, Definition 2.5] We say that a labeling T is standard if for every γ ∈ Φ^+ and every decomposition γ = α + β of γ into a sum of two positive roots, T(γ) is between T(α) and T(β) (i.e. T(α) ≥ T(γ) ≥ T(β) or T(α) ≤ T(γ) ≤ T(β)).

Theorem 4.1.2 (Kraśkiewicz). Let W be the Weyl group of a crystallographic root system Φ. Let w ∈ W and let T be a sequential labeling of Φ^+ such that supp(T) = Φ(w). Then T encodes a reduced expression if and only if T is standard.

Proof. See [20, Theorem 2.6].
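As a small illustration of this theorem (a worked example of ours, not taken from [20]): let W be of type A_2 with simple roots α_1, α_2, and let w_0 = s_1 s_2 s_1 = s_2 s_1 s_2 be the longest element, so that Φ(w_0) = {α_1, α_1 + α_2, α_2}.

```latex
% Worked example (ours): W of type A_2, longest element w_0 = s_1 s_2 s_1,
%   \Phi(w_0) = \{\alpha_1,\ \alpha_1 + \alpha_2,\ \alpha_2\}.
% The two reduced expressions x = (s_1, s_2, s_1) and x' = (s_2, s_1, s_2)
% yield the two sequential labelings with support \Phi(w_0):
T_x    : \ \alpha_1 \mapsto 1, \quad \alpha_1 + \alpha_2 \mapsto 2, \quad \alpha_2 \mapsto 3,
\qquad
T_{x'} : \ \alpha_2 \mapsto 1, \quad \alpha_1 + \alpha_2 \mapsto 2, \quad \alpha_1 \mapsto 3.
% (Depending on the orientation convention for the root sequence, the roles of
% the two labelings may be interchanged.) Since \alpha_1 + \alpha_2 is the only
% root of \Phi(w_0) that decomposes as a sum of two positive roots, standardness
% forces its label to lie between the other two, i.e. to equal 2. Hence these
% are the only two standard sequential labelings with support \Phi(w_0),
% matching the two reduced expressions of w_0, as the theorem predicts.
```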
Lemma 4.1.3.
Let (W, S) be a Coxeter system with root system Φ. Let γ ∈ Φ^+. Suppose there do not exist α, β ∈ Φ^+ with α ≠ β and a, b > 0 such that γ = aα + bβ. Then γ is a simple root.

Proof. Suppose there do not exist α, β ∈ Φ^+ with α ≠ β and a, b > 0 such that γ = aα + bβ. Let δ ∈ Φ^+ be distinct from γ. By the reflection formula (1.1), s_γ(δ) = δ − 2B(δ, γ)γ. Suppose towards a contradiction that s_γ(δ) ∈ Φ^−. Then δ ∈ Φ^+ implies δ ≠ s_γ(δ). Since γ, δ ∈ Φ^+ and s_γ(δ) ∈ Φ^−, we have −B(δ, γ) < 0, so that

γ = (1/(2B(δ, γ))) δ + (1/(2B(δ, γ))) (−s_γ(δ)),

contradicting the assumption that no such decomposition exists. It follows that s_γ(δ) ∈ Φ^+ for each δ ∈ Φ^+ distinct from γ, so that Φ(s_γ) = {γ}. However, this implies that ℓ(s_γ) = 1, so γ is a simple root.

The proof of the next proposition is structurally identical to the proof of Theorem 4.1.2, but the details with respect to Lemma 4.1.3 and the calculations verifying that a labeling is standard in the proof of Proposition 4.1.4 are different. Our primary contribution to this proposition is in giving the characterization that applies for the weakened hypothesis on (W, S). For purposes of coherence, we include the full proof.

Recall from Definition 2.1.12 that the support of T, denoted supp(T), is the set of elements not mapped to zero. Also recall that a sequential labeling has finite support and satisfies T(supp(T)) = {1, . . . , |supp(T)|}.

Proposition 4.1.4.
Let (W, S) be a Coxeter system. Let T : Φ^+ → N be a labeling of Φ^+. Then T encodes a reduced expression x for some w ∈ W if and only if T is a sequential standard labeling of Φ^+.

Proof. Let A = supp(T). If T is the standard encoding of a reduced expression x for some w ∈ W, then by Definition 2.1.14, A = Φ(w) and T(A) = {1, . . . , |A|}, so T is a sequential labeling. Also, T is a standard labeling by Lemma 3.5.3.

We prove the converse by induction on |A|, which is finite since T is assumed to be a sequential labeling. For the base case, if A = ∅, then T(λ) = 0 for all λ ∈ Φ^+, so T is the standard encoding of the empty word, which is a reduced expression for the identity. Now let T be a sequential standard labeling of Φ^+ such that |A| = k, k > 0. The induction hypothesis states that if T′ : Φ^+ → N is a sequential standard labeling of Φ^+ such that |supp(T′)| < k, then T′ encodes a reduced expression x′ for some w′ ∈ W.

Let γ = T^{−1}(k). If γ is not a simple root, then γ = aα + bβ for some α, β ∈ Φ^+ with α ≠ β and a, b > 0, by Lemma 4.1.3. Since k is the largest value in the image of T and since T is a bijection when restricted to its support, we have that T(α) < k and T(β) < k. This contradicts the assumption that T is standard. Thus, γ must be simple. We define a labeling T′ : Φ^+ → N by

T′(λ) = T(s_γ(λ)) if λ ≠ γ; T′(λ) = 0 if λ = γ.

To show that T′ is standard, there are two cases to consider: one of the positive roots in a decomposition is γ, or neither of the positive roots in a decomposition is γ.

We first suppose that a, b > 0 and α, β, aα + bβ ∈ Φ^+ are such that α ≠ β and neither α nor β is γ. Then s_γ(α), s_γ(β), and a s_γ(α) + b s_γ(β) are all positive roots by Proposition 2.1.1(3). Thus

T(s_γ(α)) ≤ T(a s_γ(α) + b s_γ(β)) ≤ T(s_γ(β))

if and only if

T′(α) ≤ T′(aα + bβ) ≤ T′(β)

(and similarly for the reverse inequalities). Thus, in this case, T′ satisfies one of the inequalities required for T′ to be standard.

Now suppose that one of the roots in a decomposition is γ. That is, suppose a, b > 0, α, β, aα + bβ ∈ Φ^+, α ≠ β, and without loss of generality suppose α = γ. Since T(α) is the maximum value of T(Φ^+) and T is a standard labeling, we must have T(α) ≥ T(aα + bβ) ≥ T(β). Since α is a simple root, Proposition 2.1.1(3) implies that a s_α(α) + b s_α(β) and s_α(β) are positive roots. From a s_α(α) + b s_α(β) = −aα + b s_α(β), it follows that

s_α(β) = (1/b)(−aα + b s_α(β)) + (a/b) α

is a nonnegative linear combination of a s_α(α) + b s_α(β) and α. Since T is standard and T(α) is the maximum of T(Φ^+), we have T(s_α(aα + bβ)) ≤ T(s_α(β)) ≤ T(α). This implies T′(α) ≤ T′(aα + bβ) ≤ T′(β), because T′(α) = 0. Thus, in both cases, one of the inequalities required for T′ to be standard is satisfied. Thus T′ is standard.

Since T is sequential, s_γ is a bijection, and the root with the highest label in T is labeled 0 in T′, it follows that T′ is sequential. Thus, the inductive hypothesis implies that T′ is the standard encoding for a reduced expression x′ for some w′ ∈ W. Since γ is simple, we may assume γ = α_i for some i ∈ I. We let x be the expression obtained from x′ by appending s_i, and we let w = w′ s_γ. Then T′(γ) = 0 and T′ is the standard encoding for x′, so it follows from Definition 2.1.14 that γ ∉ Φ(w′) and w′(γ) is therefore a positive root. By Proposition 2.1.1 parts (2) and (4), ℓ(w′ s_γ) = ℓ(w′) + 1, so x is a reduced expression for w. Furthermore, we can apply Lemma 2.1.9 (with b being the empty string) to get θ(x) = s_γ[θ(x′)] · (γ). From the construction of T′ we get that T is the standard encoding for x.

The next corollary is used to construct the correspondence of Proposition 4.1.7.

Corollary 4.1.5.
Let (W, S) be a Coxeter system and let w ∈ W. Let T : Φ+ → N be a labeling of Φ+. Then T encodes a reduced expression x for w if and only if T is a sequential standard labeling of Φ+ such that supp(T) = Φ(w).

Proof. If T encodes a reduced expression x for w, then supp(T) = Φ(w) by Definition 2.1.14. By Proposition 4.1.4, T is a sequential standard labeling of Φ+.

Conversely, if T is a standard sequential labeling of Φ+, then Proposition 4.1.4 implies that T encodes a reduced expression x for some w ∈ W. Since we are also assuming that supp(T) = Φ(w), Definition 2.1.14 implies that Φ(w) = Φ(φ(x)). By Proposition 2.1.1(7), w = φ(x), so x is a reduced expression for w.

We regard the next corollary as fundamental to our work in Chapter 5, though it might be folklore. In the paper [1], Björner gives a characterization of subsets of Φ+ that take the form Φ(w). The characterization is that the set must be finite and biconvex (see [1, Proposition 3]), so long as W is a finite Coxeter group. In [10, Section 2], Fan and Stembridge assert that the same characterization holds in the setting of arbitrary simply-laced Coxeter groups. Since there is a natural correspondence between reflections and positive roots, it is also interesting to characterize the corresponding subsets of reflections. In [7, Lemma 2.11], Dyer characterizes such subsets as the initial sections of reflection orders. He then notes (see [7, Remark 2.12]) that through the natural correspondence between reflections and positive roots, the initial sections of reflection orders correspond to biconvex sets, but that the converse is open.

The next corollary asserts that sets of the form Φ(w) can be characterized as the finite biconvex sets of positive roots in the generalized setting of arbitrary Coxeter groups. Via Dyer's [7, Lemma 2.11], it then follows that finite biconvex sets of positive roots correspond to initial sections of reflection orders. Whether infinite biconvex sets of positive roots (in a necessarily infinite Coxeter group) correspond to initial sections of reflection orders is open.

Corollary 4.1.6.
Let A ⊆ Φ+. Then A = Φ(w) for some w ∈ W if and only if A is finite and biconvex.

Proof. If A = Φ(w), then A is finite and biconvex by Lemma 3.1.4. Conversely, suppose that A is finite and biconvex. By [2, Proposition 5.2.1], there exists a total ordering < on Φ+ such that if a, b > 0 and α, β, aα + bβ ∈ Φ+, then α < aα + bβ < β or β < aα + bβ < α. We form T : Φ+ → N by setting T(λ) = 0 for λ ∉ A and T(α) = i if α is the i-th highest root in A relative to the total ordering <. It follows that T is sequential and standard, so that by Proposition 4.1.4 there exists a reduced expression x for some w ∈ W such that T is the standard encoding of x. By Definition 2.1.14, A = Φ(w).

Proposition 4.1.7.
Let (W, S) be a Coxeter system and let w ∈ W. Then there is a 1–1 correspondence between the set R(w) of reduced expressions for w and the set Lab(w) of all sequential standard labelings T such that supp(T) = Φ(w). The correspondence is given by x ←→ T_x.

Proof. Let x be a reduced expression for w. By Corollary 4.1.5, the standard encoding T_x satisfies supp(T_x) = Φ(w) and is a sequential standard labeling of Φ+. Thus we may define F : R(w) → Lab(w) by F(x) = T_x. Conversely, Corollary 4.1.5 also shows that given any sequential standard labeling T of Φ+ such that supp(T) = Φ(w), there exists a reduced expression x for w such that T = T_x. By Definition 2.1.14 and Proposition 2.1.1(9), the reduced expression x is unique. Thus we may define G : Lab(w) → R(w) by G(T_x) = x.

Then we have G(F(x)) = G(T_x) = x, by the definitions of F and G. Similarly, by the definitions of F and G, we have F(G(T_x)) = F(x) = T_x, so that F and G are inverse to each other.

In this section, for any w ∈ W, we give a combinatorial model of the set Φ(w) that faithfully represents the local root structure. We give a related combinatorial representation of the reduced expressions for w. The terminology we introduce is very similar to that of hypergraphs or incidence structures from graph theory, but we need some additional features.

Definition 4.2.1. A segment structure is a triple I = (P, L, B), where P is a set of elements called points, L is a set of non-empty subsets L of P called lines satisfying |L| ≥ 2, and B is a ternary relation on P satisfying:

(B1) If B(p_1, p_2, p_3) holds, then p_1, p_2, and p_3 are pairwise distinct.

(B2) If B(p_1, p_2, p_3) holds, then there exists L ∈ L such that p_1, p_2, p_3 ∈ L.

(B3) If p_1, p_2, p_3 ∈ L for some L ∈ L and p_1, p_2, p_3 are pairwise distinct, then there exists a permutation σ : {p_1, p_2, p_3} → {p_1, p_2, p_3} such that B(σ(p_1), σ(p_2), σ(p_3)) holds.

(B4) If B(p_1, p_2, p_3) holds, then B(p_3, p_2, p_1) holds, and for any permutation σ : {p_1, p_2, p_3} → {p_1, p_2, p_3} other than the identity or the transposition (p_1 p_3), the ternary relation B(σ(p_1), σ(p_2), σ(p_3)) does not hold.

If B(p_1, p_2, p_3) holds, then we say that p_2 lies between p_1 and p_3. If p ∈ L and there do not exist p′, p′′ ∈ L such that B(p′, p, p′′) holds, then we say p is an endpoint of L. Otherwise, p is an intermediate point of L. If for all L ∈ L, p is an endpoint of L, then we say p is an endpoint of I.

Remark 4.2.2. Observe that the lines L ∈ L may be infinite. In the case that L has fewer than three elements, no betweenness relations are satisfied, so every point in L is an endpoint of L.

Pictorially, we represent a segment structure with points and (not necessarily straight) edges, similar to the drawings of graphs in graph theory. However, we prefer any collinear points to appear collinear and the betweenness relations to be respected in the pictorial representation.

Example 4.2.3. Let A_3 be the Coxeter system with generating set S = {s, t, u} and Coxeter relations m_{s,t} = m_{t,u} = 3 and m_{s,u} = 2. As an abstract group, W(A_3) is isomorphic to the symmetric group S_4. It is known that Φ+ = {α_s, α_t, α_u, α_s + α_t, α_t + α_u, α_s + α_t + α_u}. We let I = (P, L, B) be the segment structure obtained by letting P = Φ+ and L = {Φ+_{α,β} : α, β ∈ Φ+ and α ≠ β}. If γ, δ are the canonical simple roots for some inversion set Φ+_{α,β}, and γ_i, γ_j, γ_k ∈ Φ+ are entries in the associated γ-sequence, then we set B(γ_i, γ_j, γ_k) if and only if we have either i < j < k or k < j < i. The pictorial representation is given below.

[Figure: pictorial representation of the segment structure I on Φ+.]

Observe that α_s, α_t, and α_u satisfy the definition for an endpoint of I. On the other hand, α_s + α_t is an endpoint of the line L = {α_s + α_t, α_s + α_t + α_u, α_u}, but is an intermediate point of the line L = {α_s, α_s + α_t, α_t}, and hence is not an endpoint of I.

In the next definition we give a ternary relation on Φ+ that is intended to capture the notion of "betweenness" for inversion sets.

Definition 4.2.4.
Let λ, µ, ν ∈ Φ+ be pairwise distinct positive roots. If there exists an inversion set Ψ with a canonical simple root γ such that either λ <_{Ψ,γ} µ <_{Ψ,γ} ν or ν <_{Ψ,γ} µ <_{Ψ,γ} λ, then we say that µ is between λ and ν. We denote this ternary relation by B_W(λ, µ, ν) and say that B_W(λ, µ, ν) holds if such a Ψ exists. If λ, µ, ν are not pairwise distinct positive roots or if there does not exist such an inversion set Ψ, then we say that B_W(λ, µ, ν) does not hold.

Lemma 4.2.5.
Let (W, S) be a Coxeter system and let Φ+ be the associated positive system. Let W = (Φ+, Inv(Φ+), B_W), where B_W is the betweenness relation of Definition 4.2.4 and Inv(Φ+) is the set of all inversion sets contained in Φ+. Then W is a segment structure.

Proof. By definition, an inversion set consists of at least two positive roots, so all of the lines are non-empty. If B_W(λ, µ, ν) holds, then there exists an inversion set Ψ containing λ, µ, ν, which are distinct by Definition 4.2.4. Thus B_W satisfies properties (B1) and (B2).

Suppose that λ, µ, ν ∈ Ψ = Φ+_{α,β} are pairwise distinct and that γ is a canonical simple root for Φ+_{α,β}. Then, since ≤_{Ψ,γ} is a total ordering, there exists a permutation σ of {λ, µ, ν} such that σ(λ) <_{Ψ,γ} σ(µ) <_{Ψ,γ} σ(ν), so B_W(σ(λ), σ(µ), σ(ν)) holds, proving that B_W satisfies property (B3).

Suppose that B_W(λ, µ, ν) holds. Then there exists an inversion set Ψ with canonical simple root γ such that either λ <_{Ψ,γ} µ <_{Ψ,γ} ν or ν <_{Ψ,γ} µ <_{Ψ,γ} λ. Thus B_W(ν, µ, λ) holds also. Since <_{Ψ,γ} is a total order, no other permutation of λ, µ, ν is consistent with <_{Ψ,γ}. By Lemma 3.4.3, if Υ is another inversion set containing λ, it cannot contain µ or ν, and hence no other betweenness relations for λ, µ, ν are possible. This proves that B_W satisfies property (B4), and hence W is a segment structure.

Definition 4.2.6.
Let (W, S) be a Coxeter system and let Φ+ be the associated positive system. Let W be the segment structure of Lemma 4.2.5. We call W the standard segment structure of W.

Definition 4.2.7.
Let I = (P, L, B) be a segment structure and let T : P → N. Then we call T an I-labeling. We denote by supp(T) the set of all p such that T(p) ≠ 0. We say that T is a sequential I-labeling if T(supp(T)) = {1, . . . , |supp(T)|}. Suppose that whenever B(p_1, p_2, p_3) holds, either T(p_1) ≤ T(p_2) ≤ T(p_3) or T(p_3) ≤ T(p_2) ≤ T(p_1). Then we call T a standard I-labeling.

Lemma 4.2.8.
Let W be the standard segment structure of W and let λ, µ, ν ∈ Φ+ be distinct positive roots. Then B_W(λ, µ, ν) holds if and only if µ lies in the convex cone spanned by λ and ν.

Proof. Suppose B_W(λ, µ, ν) holds. Then there exists an inversion set Ψ and a canonical simple root γ of Ψ such that either λ <_{Ψ,γ} µ <_{Ψ,γ} ν or ν <_{Ψ,γ} µ <_{Ψ,γ} λ. If Ψ is finite, then Definition 3.4.5 and Lemma 3.3.15 imply that µ lies in the convex cone spanned by λ and ν. If Ψ is infinite, then Definition 3.4.5, Lemma 3.3.15, and Lemma 3.3.17 imply that µ lies in the convex cone spanned by λ and ν.

Conversely, suppose that µ lies in the convex cone spanned by λ and ν. Then we may form the inversion set Ψ = Φ+_{λ,ν} = span({λ, ν}) ∩ Φ+. Since µ ∈ Φ+_{λ,ν}, the roots λ, µ, ν are comparable relative to <_{Ψ,γ}, where γ is a canonical simple root of Ψ, since <_{Ψ,γ} is a total order. Lemma 3.1.2 shows that the only orders consistent with Lemma 3.3.15 and Lemma 3.3.17 are λ <_{Ψ,γ} µ <_{Ψ,γ} ν and ν <_{Ψ,γ} µ <_{Ψ,γ} λ. Thus B_W(λ, µ, ν) holds.

Proposition 4.2.9.
Let W = (Φ+, Inv(Φ+), B_W) be the standard segment structure of W. The function T : Φ+ → N is a standard W-labeling if and only if T is a standard labeling of Φ+.

Proof. Suppose α, β, aα + bβ ∈ Φ+, where a, b > 0 and α ≠ β. By Lemma 4.2.8, B_W(α, aα + bβ, β) holds. Thus, if T is a standard W-labeling, then T(α) ≤ T(aα + bβ) ≤ T(β) or T(β) ≤ T(aα + bβ) ≤ T(α). Since α, aα + bβ, β were arbitrary with a, b > 0 and α ≠ β, we have that T is a standard labeling of Φ+ by Definition 3.5.1.

Conversely, suppose T is a standard labeling of Φ+. If B_W(λ, µ, ν) holds, we have µ = aλ + bν, where a, b > 0 and λ ≠ ν, by Lemma 4.2.8. Thus, either T(λ) ≤ T(µ) ≤ T(ν) or T(ν) ≤ T(µ) ≤ T(λ). By Definition 4.2.7, T is a standard W-labeling.

Corollary 4.2.10.
Let W = (Φ+, Inv(Φ+), B_W) be the standard segment structure of W. Let w ∈ W and let Lab(W, w) denote the set of all sequential standard W-labelings T such that supp(T) = Φ(w). Let Lab(w) denote the set of all sequential standard labelings of Φ+ such that supp(T) = Φ(w). Then Lab(w) = Lab(W, w).

Proof. Both Lab(w) and Lab(W, w) consist of functions T : Φ+ → N such that supp(T) = Φ(w) and T(supp(T)) = {1, . . . , |Φ(w)|} (since the labelings are required to be sequential with support Φ(w)). Thus Proposition 4.2.9 implies that T ∈ Lab(w) if and only if T ∈ Lab(W, w).

Note that the labelings of Φ+ and the W-labelings of the previous corollary are defined in different ways. The usage we have in mind for W is that we compute the geometrical structure in advance, and the W-labelings are then placed on top of this structure. In this way, the reduced expression correspondence of the next corollary involves only combinatorial properties of a fixed discrete structure instead of the messy details of Φ+ and convex cones.

Corollary 4.2.11.
Let (W, S) be a Coxeter system and let w ∈ W. Let W = (Φ+, Inv(Φ+), B_W) be the standard segment structure of W. Then there is a 1–1 correspondence between the set R(w) of all reduced expressions for w and the set Lab(W, w) of all sequential standard W-labelings such that supp(T) = Φ(w). The correspondence is given by x ←→ T_x.

Proof.
This follows directly from Proposition 4.1.7 and Corollary 4.2.10.
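The correspondence above can be checked by brute force in small rank. The following sketch (the coordinate encoding of roots, the choice of the type A_3 element w = φ(t, s, t, u), and all identifier names are our own, introduced only for illustration) counts the sequential standard labelings of Φ+ with support Φ(w) and, independently, counts the reduced expressions of w, and finds the same number.

```python
from itertools import permutations, product

# Roots of A3 encoded as coefficient vectors over (alpha_s, alpha_t, alpha_u).
def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

AS, AT, AU = (1, 0, 0), (0, 1, 0), (0, 0, 1)
POS = [AS, AT, AU, add(AS, AT), add(AT, AU), add(AS, add(AT, AU))]

# In the simply-laced case every dependency a*alpha + b*beta in Phi+ has
# a = b = 1, so standardness only involves triples (alpha, alpha+beta, beta).
TRIPLES = [(a, add(a, b), b) for a in POS for b in POS
           if a < b and add(a, b) in POS]

PHI_W = [AS, add(AS, add(AT, AU)), add(AT, AU), AU]  # Phi(w), w = phi(t,s,t,u)

def is_standard(T):
    return all(T[a] <= T[m] <= T[b] or T[b] <= T[m] <= T[a]
               for a, m, b in TRIPLES)

# Sequential standard labelings of Phi+ with support Phi(w): label Phi(w)
# bijectively with 1..4 and every other positive root with 0.
labelings = 0
for perm in permutations(range(1, 5)):
    T = dict.fromkeys(POS, 0)
    T.update(zip(PHI_W, perm))
    labelings += is_standard(T)

# Independent count of reduced expressions: W(A3) acts as Sym(4), and since
# l(w) = 4, every 4-letter word whose product is w is automatically reduced.
def compose(p, q):                       # apply q first, then p
    return tuple(p[i] for i in q)

GENS = {'s': (1, 0, 2, 3), 't': (0, 2, 1, 3), 'u': (0, 1, 3, 2)}

def evaluate(word):
    m = (0, 1, 2, 3)
    for c in word:
        m = compose(m, GENS[c])
    return m

w = evaluate('tstu')
reduced = sum(evaluate(word) == w for word in product('stu', repeat=4))
```

Both counts come to three, matching the reduced expressions (t, s, t, u), (s, t, s, u), and (s, t, u, s).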
Example 4.2.12. The Coxeter matrix of the Coxeter group of type B_3 is determined by the entries m_{s,t} = 4, m_{s,u} = 2, and m_{t,u} = 3. The element w = φ(u, s, t, s, t, u) has four reduced expressions. We have

Φ(w) = {α_u, α_t + α_u, α_s + √2 α_t + √2 α_u, √2 α_s + α_t + α_u, α_s, √2 α_s + 2α_t + α_u}.

The positive roots of Φ+ not in Φ(w) are α_t, α_s + √2 α_t, and √2 α_s + α_t. To avoid clutter in our explanations and figures, we associate names to each root in Φ+. The first figure below depicts the standard segment structure of Φ+ and the chart to its right gives the names we use for each root in Φ+. Thus, "root F" refers to the root √2 α_s + 2α_t + α_u and "the B − E line" refers to the line {α_s, √2 α_s + α_t + α_u, α_s + √2 α_t + √2 α_u, α_t + α_u}.

Let W be the standard segment structure of Φ+. The four figures following the first one are the sequential standard W-labelings T such that supp(T) = Φ(w), which we explain after listing the figures. In all of the diagrams below, we omit the lines containing exactly two roots.

[Figure: the standard segment structure of Φ+, with roots labeled A through I.]
Name  Root
A     α_u
B     α_s
C     √2 α_s + α_t + α_u
D     α_s + √2 α_t + √2 α_u
E     α_t + α_u
F     √2 α_s + 2α_t + α_u
G     √2 α_s + α_t
H     α_s + √2 α_t
I     α_t

The W-labelings corresponding to the reduced expressions (u, s, t, s, t, u) and (s, u, t, s, t, u), respectively, are:

[Figure: the labeling with (A, B, C, D, E, F) = (6, 2, 3, 4, 5, 1) and the labeling with (A, B, C, D, E, F) = (6, 1, 3, 4, 5, 2); in both, roots G, H, I are labeled 0.]

The W-labelings corresponding to the reduced expressions (u, t, s, t, s, u) and (u, t, s, t, u, s), respectively, are:

[Figure: the labeling with (A, B, C, D, E, F) = (6, 5, 4, 3, 2, 1) and the labeling with (A, B, C, D, E, F) = (5, 6, 4, 3, 2, 1); in both, roots G, H, I are labeled 0.]

It is easy to check that the four W-labelings above are indeed sequential and standard and have Φ(w) as their support. For the converse, we make an ad hoc argument for this specific example.

Suppose that T is a sequential standard W-labeling with Φ(w) as support. The roots not in Φ(w) must then be labeled 0. Thus, the roots G, H, and I are labeled 0 by T.

Since T is standard and the roots G, H, and I are labeled 0 by T, the labels on the I − A, H − A, and G − A lines increase towards root A. Thus, root A necessarily has a higher label than any other root in Φ(w), except possibly root B, which does not lie on the I − A, H − A, or G − A line.

On the lines G − E, H − A, and I − C, and in the direction away from the points labeled 0, root F is the first root with a nonzero label. Thus root F must have a lower label than all roots lying on one of those three lines. Since root B is the only root not lying on those three lines, root F necessarily has a lower label than any root in Φ(w), except possibly root B.

The remaining roots of Φ(w) lie on the B − E line, so that the remaining nonzero labels must lie on that line. By Definition 4.2.6, the lines of W are inversion sets of Φ+. By Proposition 4.2.9 and Lemma 3.5.5(3), root E must have either the highest label or the lowest label on the B − E line for T to be a standard W-labeling. Furthermore, once this choice is made, the order of the remaining labels on the B − E line is determined by Lemma 3.5.5(3).
Thus, there are eight cases that exhaust all possibilities, according to whether:

(1) root A is labeled 5 or 6;

(2) root F is labeled 1 or 2;

(3) root E has the highest label on the B − E line or root E has the lowest label on the B − E line.

Choosing root A to have label 6 and root F to have label 1, we see that the roots on the B − E line must be labeled 2, 3, 4, and 5. The case where root E is labeled 5 gives the W-labeling corresponding to the reduced expression (u, s, t, s, t, u), while the case where root E is labeled 2 gives the W-labeling corresponding to the reduced expression (u, t, s, t, s, u).

The case of root A being labeled 5 and root F being labeled 2 is impossible, by the following reasoning: if root E is labeled 1, then T(G) = 0, T(F) = 2, and T(E) = 1, which contradicts the assumption that T is standard, since root F is between root G and root E on the G − E line. If root E is labeled 6, then T(I) = 0, T(E) = 6, and T(A) = 5, which again contradicts the assumption that T is standard, since root E is between root A and root I on the A − I line.

Now suppose that root A is labeled 6 and root F is labeled 2. Then root E must be labeled 1 or 5. If root E is labeled 1, then T(G) = 0, T(F) = 2, and T(E) = 1, which contradicts the assumption that T is standard. However, the case where root E is labeled 5 gives the W-labeling corresponding to the reduced expression (s, u, t, s, t, u).

If root A is labeled 5 and root F is labeled 1, then root E must be labeled either 2 or 6. If root E is labeled 6, we have T(I) = 0, T(E) = 6, and T(A) = 5, which contradicts the assumption that T is standard. However, the case where root E is labeled 2 gives the W-labeling corresponding to the reduced expression (u, t, s, t, u, s).

4.3 Φ(w)

The definitions and correspondences of the previous section concern labelings of Φ+ that have Φ(w) as support.
The conceptual advantage of this approach is that whenever an inversion set intersects Φ(w) but is not completely contained in Φ(w), our labeling scheme tells us which labeling inequalities must hold. Computationally, however, this is not optimal if the number of roots in Φ+ \ Φ(w) is large. For infinite Coxeter groups there are always infinitely many such roots, so it is desirable to have an alternative to Corollary 4.2.11.

Definition 4.3.1.
Let I = (P, L, B) be a segment structure and let A ⊆ P. For each L ∈ L, we set L_A = A ∩ L and define

L_A = {L_A : L ∈ L and |L_A| ≥ 2}.

Define a ternary relation B_A by the following condition: B_A(p_1, p_2, p_3) holds if and only if p_1, p_2, p_3 ∈ A and B(p_1, p_2, p_3) holds. We call the triple I_A = (A, L_A, B_A) the restriction of I to A.

Lemma 4.3.2.
Let I = (P, L, B) be a segment structure and let A ⊆ P. Let I_A = (A, L_A, B_A) be the restriction of I to A. Then I_A is a segment structure.

Proof. First note that the lines L ∈ L_A satisfy |L| ≥ 2 by construction. If B_A(p_1, p_2, p_3) holds, then B(p_1, p_2, p_3) holds, so p_1, p_2, and p_3 are pairwise distinct. Thus B_A satisfies property (B1).

If B_A(p_1, p_2, p_3) holds, then B(p_1, p_2, p_3) holds, which implies there exists L ∈ L such that p_1, p_2, p_3 ∈ L. Since p_1, p_2, p_3 ∈ A, this implies p_1, p_2, p_3 ∈ L_A for some L_A ∈ L_A, so B_A satisfies property (B2).

If p_1, p_2, p_3 ∈ L_A for some L_A ∈ L_A and p_1, p_2, p_3 are pairwise distinct, then by property (B3) of B, there exists a permutation σ such that B(σ(p_1), σ(p_2), σ(p_3)) holds. Since p_1, p_2, p_3 ∈ A, this implies B_A(σ(p_1), σ(p_2), σ(p_3)) holds, so B_A satisfies property (B3).

To show that B_A satisfies property (B4), let σ : {p_1, p_2, p_3} → {p_1, p_2, p_3} be a permutation. If B_A(p_1, p_2, p_3) holds, then p_1, p_2, p_3 ∈ A, and by definition B_A(σ(p_1), σ(p_2), σ(p_3)) holds if and only if B(σ(p_1), σ(p_2), σ(p_3)) holds. This proves that B_A satisfies property (B4).

Definition 4.3.3.
Let (W, S) be a Coxeter system and let W = (Φ+, Inv(Φ+), B_W) be the standard segment structure of W. For any w ∈ W, we call W_{Φ(w)} the standard segment structure of Φ(w).

Remark 4.3.4. Although the definition of restriction to a subset A allows infinite lines in the same way that Definition 4.2.1 does, the primary choice for restriction in this thesis is Φ(w), which is a finite set. Thus, the lines we see in this section are all finite.

Lemma 4.3.5.
Let w ∈ W and let W_{Φ(w)} = (Φ(w), Inv(Φ+)_{Φ(w)}, B_{Φ(w)}) be the standard segment structure of Φ(w). Let λ, µ, ν ∈ Φ+ be distinct positive roots. Then B_{Φ(w)}(λ, µ, ν) holds if and only if µ lies in the convex cone spanned by λ and ν and λ, µ, ν ∈ Φ(w).

Proof. By Lemma 4.3.2 and Definition 4.3.3, B_{Φ(w)}(λ, µ, ν) holds if and only if λ, µ, ν ∈ Φ(w) and B_W(λ, µ, ν) holds. By Lemma 4.2.8, B_W(λ, µ, ν) holds if and only if µ lies in the convex cone spanned by λ and ν. The conclusion follows.

Given any standard W-labeling T of the standard segment structure for W, if we restrict the labeling to the set Φ(w), where w ∈ W, then the result is a standard W_{Φ(w)}-labeling. However, the converse need not be true. The problem is that if an inversion set Ψ partially intersects Φ(w), then some roots of the inversion set must be labeled 0. This forces the roots in Ψ ∩ Φ(w) to have labels increasing towards the one canonical simple root of Ψ that is in Φ(w). However, a standard W_{Φ(w)}-labeling could have labels increasing in the opposite direction. Since we wish to produce a correspondence for our restricted segment structure W_{Φ(w)}, we must do some bookkeeping to account for this possibility.

Definition 4.3.6.
Let w ∈ W and let Ψ be an inversion set. If Ψ ∩ Φ(w) = Ψ, then we call Ψ ∩ Φ(w) a full inversion set of w. If |Ψ ∩ Φ(w)| ≥ 2 and Ψ ∩ Φ(w) ≠ Ψ, then we call Ψ ∩ Φ(w) a partial inversion set of w.

Remark 4.3.7. Observe that for an infinite inversion set Ψ, if |Ψ ∩ Φ(w)| ≥ 2, then Ψ ∩ Φ(w) is automatically a partial inversion set of w, since Φ(w) is finite.

Example 4.3.8. Let W = W(A_3), let S = {s, t, u}, and let w = φ(t, s, t, u). Then

Φ(w) = {α_s, α_s + α_t + α_u, α_t + α_u, α_u}.

Recall, by Definition 3.4.1, that an inversion 2-set is an inversion set Ψ such that |Ψ| = 2. Observe that (by Definition 4.3.6) an inversion 2-set cannot be a partial inversion set and that a partial inversion set cannot be a full inversion set. The sets

Ψ_1 = {α_s, α_s + α_t + α_u, α_t + α_u},
Ψ_2 = {α_t, α_t + α_u, α_u},
Ψ_3 = {α_s, α_s + α_t, α_t}, and
Ψ_4 = {α_s + α_t, α_s + α_t + α_u, α_u}

are all the inversion 3-sets in the positive root system Φ+ of W(A_3). Since Ψ_1 ⊆ Φ(w), the set Ψ_1 ∩ Φ(w) = Ψ_1 is a full inversion set of w. Since Ψ_2 ∩ Φ(w) = {α_t + α_u, α_u} ≠ Ψ_2, we see that Ψ_2 ∩ Φ(w) is a partial inversion set of w. However, note that Ψ_3 ∩ Φ(w) = {α_s} is neither a partial inversion set of w nor a full inversion set of w. (This choice of terminology is made because Ψ_3 ∩ Φ(w) fails to form a line in the restriction of W to Φ(w).) Lastly, note that Ψ_4 ∩ Φ(w) is a partial inversion set of w.

Lemma 4.3.9. Let w ∈ W and let Ψ be an inversion set such that Ψ ∩ Φ(w) is a partial inversion set of w. Then there exists exactly one canonical simple root γ of Ψ such that γ ∈ Ψ ∩ Φ(w).

Proof. By Definition 4.3.6, Ψ ∩ Φ(w) ≠ ∅. By Corollary 3.4.8, there exists a canonical simple root γ ∈ Ψ ∩ Φ(w). Suppose towards a contradiction that both canonical simple roots are in Ψ ∩ Φ(w). By Lemma 3.1.4, Φ(w) is biconvex, so it follows that Ψ ∩ Φ(w) = Ψ, which contradicts the hypothesis that Ψ ∩ Φ(w) is a partial inversion set.

Remark 4.3.10.
We will always use γ as a name for the unique canonical simple root of a partial inversion set.

Corollary 4.3.11.
Let w ∈ W and let Ψ ∩ Φ(w) be a partial inversion set of w. Then there exist canonical simple roots γ and δ for Ψ such that Ψ ∩ Φ(w) = [γ_1, γ_k] in the total order (Ψ, <_{Ψ,γ}) and such that k < |s_γ s_δ|.

Proof. By Lemma 4.3.9, there exists exactly one canonical simple root γ of Ψ such that γ ∈ Ψ ∩ Φ(w). Let δ be the canonical simple root of Ψ that is not in Ψ ∩ Φ(w). Then, by Corollary 3.4.8, we have Ψ ∩ Φ(w) = [γ_1, γ_k] in the total order (Ψ, <_{Ψ,γ}) for some k, because γ ∈ Ψ ∩ Φ(w). If |s_γ s_δ| is infinite, then k < |s_γ s_δ| since k is finite. Otherwise, δ ∉ Ψ ∩ Φ(w) and Proposition 3.3.14 implies that k < |s_γ s_δ|.

Definition 4.3.12.
Let w ∈ W. Let Ψ be an inversion set and let Ψ ∩ Φ(w) be a partial inversion set of w. We call the unique canonical simple root γ of Ψ that satisfies γ ∈ Ψ ∩ Φ(w) the canonical simple root of Ψ ∩ Φ(w).

Example 4.3.13. In Example 4.3.8, the partial inversion set Ψ_2 ∩ Φ(w) = {α_t + α_u, α_u} has α_u as its canonical simple root. The inversion set Ψ_2 has γ = α_u and δ = α_t as its canonical simple roots, but α_t ∉ Ψ_2 ∩ Φ(w). Note that Ψ_2 ∩ Φ(w) = [γ_1, γ_2], which is consistent with the assertions of Corollary 4.3.11.

Definition 4.3.14.
Let w ∈ W and let W_{Φ(w)} be the standard segment structure of Φ(w). Let Ψ ∩ Φ(w) be a partial inversion set of w, let γ = γ_1 be the canonical simple root of Ψ ∩ Φ(w) and, as in Corollary 4.3.11, write Ψ ∩ Φ(w) = [γ_1, γ_k]. We say that a W_{Φ(w)}-labeling T : Φ(w) → N satisfies the restrictions of Ψ ∩ Φ(w) if T(γ_1) > · · · > T(γ_k). We say that a W_{Φ(w)}-labeling T satisfies the restrictions of Φ(w) if T satisfies the restrictions of every partial inversion set Ψ ∩ Φ(w) of w. We denote the set of all sequential standard W_{Φ(w)}-labelings with supp(T) = Φ(w) that satisfy the restrictions of Φ(w) by Lab(W_{Φ(w)}, w).

Example 4.3.15. Again we let W = W(A_3) and let w = φ(t, s, t, u). Define T : Φ(w) → N by T(α_s) = 1, T(α_s + α_t + α_u) = 2, T(α_t + α_u) = 3, and T(α_u) = 4. From Example 4.3.8, we have that Ψ_2 ∩ Φ(w) = {α_t + α_u, α_u} and Ψ_4 ∩ Φ(w) = {α_s + α_t + α_u, α_u} are the only two partial inversion sets of w. The canonical simple root of Ψ_2 ∩ Φ(w) is α_u, and the canonical simple root of Ψ_4 ∩ Φ(w) is also α_u. We note that T(α_u) > T(α_t + α_u) and T(α_u) > T(α_s + α_t + α_u), so that T satisfies the restrictions of Φ(w).

If instead we chose the labeling T′ : Φ(w) → N defined by T′(α_s) = 1, T′(α_s + α_t + α_u) = 2, T′(α_u) = 3, and T′(α_t + α_u) = 4, then T′ satisfies the restrictions of Ψ_4 ∩ Φ(w), but not those of Ψ_2 ∩ Φ(w): this is because T′(α_u) < T′(α_t + α_u). As T′ does not satisfy the restrictions of all the partial inversion sets of w, T′ does not satisfy the restrictions of Φ(w). Note that T′ is, however, a sequential standard W_{Φ(w)}-labeling.

The restrictions made on W_{Φ(w)}-labelings are justified by the next lemma.

Lemma 4.3.16.
Let T : Φ(w) → N be a sequential standard W_{Φ(w)}-labeling that satisfies the restrictions of Φ(w) and has supp(T) = Φ(w). Then there exists a unique sequential standard W-labeling T′ : Φ+ → N such that supp(T′) = Φ(w) and T′|_{Φ(w)} = T.

Proof. To construct T′, we set T′(α) = 0 if α ∉ Φ(w); otherwise we set T′(α) = T(α). We have supp(T′) = supp(T) = Φ(w). Thus, T′ is sequential since T is sequential.

To show that T′ is a standard W-labeling, we let λ, µ, ν be distinct positive roots such that µ strictly lies in the convex cone spanned by λ and ν. By Corollary 3.4.4, there is a unique inversion set Ψ containing λ and ν (and hence µ). If Ψ ∩ Φ(w) is a partial inversion set, we apply Lemma 4.3.9 and let γ be the canonical simple root of Ψ ∩ Φ(w). Assume without loss of generality that λ <_{Ψ,γ} µ <_{Ψ,γ} ν (the reverse case being a symmetric argument and the other cases being eliminated by Lemma 4.2.8). Recall that by Definition 4.2.4 this means that B_W(λ, µ, ν) holds. We consider the following cases:

(1) λ, µ, ν ∈ Φ(w);

(2) λ, µ ∈ Φ(w), but ν ∉ Φ(w);

(3) λ ∈ Φ(w), but µ, ν ∉ Φ(w);

(4) λ, µ, ν ∉ Φ(w).

The assumed ordering of λ, µ, and ν with respect to <_{Ψ,γ} and Corollary 3.4.8 ensure that these are the only possibilities.

For case (1), Lemma 4.3.5 implies that B_{Φ(w)}(λ, µ, ν) holds. Since T is assumed to be a standard W_{Φ(w)}-labeling and T = T′ on Φ(w), we have either T′(λ) ≤ T′(µ) ≤ T′(ν) or T′(ν) ≤ T′(µ) ≤ T′(λ).

For case (2), we have that Ψ ∩ Φ(w) is a partial inversion set. By Definition 4.3.14, we have T(λ) > T(µ), since T satisfies the restrictions of Ψ ∩ Φ(w). Since λ, µ ∈ Φ(w), this means T′(λ) > T′(µ). We have T′(µ) > T′(ν) since T′(ν) = 0.

For case (3), we have that T′(λ) > 0, since λ ∈ Φ(w). Thus T′(λ) > T′(µ) ≥ T′(ν). Similarly, for case (4) we have T′(λ) ≥ T′(µ) ≥ T′(ν).
Having exhausted all cases, we see that T′ is a standard W-labeling by Definition 4.2.7.

Now suppose there exists a sequential standard W-labeling T′′ : Φ+ → N such that supp(T′′) = Φ(w) and T′′|_{Φ(w)} = T. Then, if α ∈ Φ(w), we have T′′(α) = T(α) = T′(α). If α ∉ Φ(w), then since supp(T′′) = Φ(w), we have T′′(α) = 0 = T′(α). Thus T′′ = T′, so that T′ is the unique W-labeling satisfying these properties.

Lemma 4.3.17.
Let T : Φ+ → N be a sequential standard W-labeling such that supp(T) = Φ(w). Then T|_{Φ(w)} is a sequential standard W_{Φ(w)}-labeling that satisfies the restrictions of Φ(w). Furthermore, (T|_{Φ(w)})′ = T, where ′ denotes the extension from the domain Φ(w) to Φ+ introduced in Lemma 4.3.16.

Proof. Since supp(T) = Φ(w), we have supp(T|_{Φ(w)}) = Φ(w). Thus T|_{Φ(w)} is sequential because T is sequential.

For every λ, µ, ν ∈ Φ(w) such that B_{Φ(w)}(λ, µ, ν) holds, B_W(λ, µ, ν) must also hold. Thus, if B_{Φ(w)}(λ, µ, ν) holds, then either

T|_{Φ(w)}(λ) ≤ T|_{Φ(w)}(µ) ≤ T|_{Φ(w)}(ν) or T|_{Φ(w)}(ν) ≤ T|_{Φ(w)}(µ) ≤ T|_{Φ(w)}(λ).

It then follows that T|_{Φ(w)} is a standard W_{Φ(w)}-labeling.

Now let Ψ ∩ Φ(w) be a partial inversion set of w. By Corollary 4.3.11, there exist canonical simple roots γ and δ of Ψ such that γ ∈ Ψ ∩ Φ(w), δ ∉ Ψ ∩ Φ(w), and such that we may write Ψ ∩ Φ(w) = [γ_1, γ_k] for some k < |s_γ s_δ|. By Lemma 3.5.5(2), we have T(γ_1) > · · · > T(γ_k), and hence T|_{Φ(w)}(γ_1) > · · · > T|_{Φ(w)}(γ_k). By Definition 4.3.14, T|_{Φ(w)} satisfies the restrictions imposed by Ψ ∩ Φ(w).

Lastly, (T|_{Φ(w)})′ = T follows from the uniqueness assertion of Lemma 4.3.16.

Corollary 4.3.18.
Let Lab(W, w) be the set of all sequential standard W-labelings T such that supp(T) = Φ(w). Let Lab(W_{Φ(w)}, w) be the set of all sequential standard W_{Φ(w)}-labelings that satisfy the restrictions of Φ(w) and have supp(T) = Φ(w). Then there is a 1–1 correspondence between Lab(W, w) and Lab(W_{Φ(w)}, w), given by T ↦ T|_{Φ(w)}, with inverse T ↦ T′.

Proof.
This follows from Lemmas 4.3.16 and 4.3.17.
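The restriction and extension bookkeeping above can be made concrete. The sketch below (the coordinate encoding and identifier names are our own; it hardcodes the inversion 3-sets of type A_3 from Example 4.3.8) tests whether a W_{Φ(w)}-labeling satisfies the restrictions of Φ(w), and reproduces the verdicts of Example 4.3.15: T satisfies the restrictions, while T′ does not.

```python
# Roots of A3 as coefficient vectors over (alpha_s, alpha_t, alpha_u).
def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

AS, AT, AU = (1, 0, 0), (0, 1, 0), (0, 0, 1)
ATU, ASTU = add(AT, AU), add(AS, add(AT, AU))
PHI_W = {AS, ASTU, ATU, AU}          # Phi(w) for w = phi(t, s, t, u)

# The four inversion 3-sets of Phi+, each listed in the total order
# <_{Psi,gamma} starting from one canonical simple root (Example 4.3.8).
INV3 = [
    [AS, ASTU, ATU],            # Psi_1
    [AT, ATU, AU],              # Psi_2
    [AS, add(AS, AT), AT],      # Psi_3
    [add(AS, AT), ASTU, AU],    # Psi_4
]

def satisfies_restrictions(T):
    """Check the restrictions of every partial inversion set of w."""
    for psi in INV3:
        inter = [r for r in psi if r in PHI_W]
        if len(inter) < 2 or len(inter) == len(psi):
            continue             # not a partial inversion set of w
        # Orient psi to start at the canonical simple root gamma_1, the
        # endpoint lying in Phi(w); the intersection is then an initial
        # segment [gamma_1, gamma_k] (Corollary 4.3.11).
        seg = psi if psi[0] in PHI_W else psi[::-1]
        seg = [r for r in seg if r in PHI_W]
        # labels must strictly decrease away from gamma_1
        if any(T[a] <= T[b] for a, b in zip(seg, seg[1:])):
            return False
    return True

T1 = {AS: 1, ASTU: 2, ATU: 3, AU: 4}   # the labeling T of Example 4.3.15
T2 = {AS: 1, ASTU: 2, AU: 3, ATU: 4}   # the labeling T' of Example 4.3.15
```

Here `satisfies_restrictions(T1)` holds and `satisfies_restrictions(T2)` fails, exactly as in Example 4.3.15.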
Corollary 4.3.19.
Let (W, S) be an arbitrary Coxeter system and let w ∈ W. Then the set R(w) of reduced expressions for w is in 1–1 correspondence with the set Lab(W_{Φ(w)}, w) of all sequential standard W_{Φ(w)}-labelings that satisfy the restrictions of Φ(w).

Proof. This correspondence can be obtained by composing the correspondence given in Corollary 4.3.18 with the one given in Corollary 4.2.11.
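As a numerical check of this correspondence, the following sketch (our own encoding, reusing the type A_3 element w = φ(t, s, t, u) of Examples 4.3.8 and 4.3.15) counts the sequential standard W_{Φ(w)}-labelings that satisfy the restrictions of Φ(w); the count agrees with the three reduced expressions (t, s, t, u), (s, t, s, u), and (s, t, u, s) of w.

```python
from itertools import permutations

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

AS, AT, AU = (1, 0, 0), (0, 1, 0), (0, 0, 1)
ATU, ASTU = add(AT, AU), add(AS, add(AT, AU))
PHI_W = [AS, ASTU, ATU, AU]            # Phi(w), w = phi(t, s, t, u)

# Within Phi(w), the only line of W_{Phi(w)} with three or more roots is
# Psi_1 = {alpha_s, alpha_s+alpha_t+alpha_u, alpha_t+alpha_u}.
FULL = [(AS, ASTU, ATU)]
# Partial inversion sets of w, each written starting from its canonical
# simple root alpha_u; labels must strictly decrease along each
# (Definition 4.3.14, via Examples 4.3.8 and 4.3.13).
PARTIAL = [(AU, ATU), (AU, ASTU)]

def admissible(T):
    standard = all(T[a] <= T[m] <= T[b] or T[b] <= T[m] <= T[a]
                   for a, m, b in FULL)
    restricted = all(T[g1] > T[g2] for g1, g2 in PARTIAL)
    return standard and restricted

# Sequential labelings with support Phi(w) assign 1..4 bijectively.
count = sum(admissible(dict(zip(PHI_W, perm)))
            for perm in permutations(range(1, 5)))
```

The count comes to three, as Corollary 4.3.19 predicts for this element.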
Example 4.3.20. For purposes of comparison, we demonstrate the correspondence of Corollary 4.3.19 using the same Coxeter group element as in Example 4.2.12, namely W = W(B_3) and w = φ(u, s, t, s, t, u). For this example, the vertices are only the roots of Φ(w) instead of all the roots in Φ+, and we pictorially represent the inversion sets with two roots. We represent the partial inversion sets with an arrow pointing in the direction that the labels must go. The first figure below depicts the standard segment structure of Φ(w) and the chart to its right gives the names we will use for the roots in Φ(w). Thus, the full inversion sets of w are {A, B}, {B, F}, and {B, C, D, E}. The partial inversion sets of w are {A, E}, {A, D, F}, {A, C}, {C, F}, and {E, F}.

The four figures following the first figure give the sequential standard W_{Φ(w)}-labelings that satisfy the restrictions of Φ(w), which we discuss after listing the figures.

[Figure: the standard segment structure of Φ(w), with roots labeled A through F.]
Name  Root
A     α_u
B     α_s
C     √2 α_s + α_t + α_u
D     α_s + √2 α_t + √2 α_u
E     α_t + α_u
F     √2 α_s + 2α_t + α_u

The labelings corresponding to the reduced expressions (u, s, t, s, t, u) and (s, u, t, s, t, u), respectively, are:

[Figure: the labeling with (A, B, C, D, E, F) = (6, 2, 3, 4, 5, 1) and the labeling with (A, B, C, D, E, F) = (6, 1, 3, 4, 5, 2).]

The labelings corresponding to the reduced expressions (u, t, s, t, s, u) and (u, t, s, t, u, s), respectively, are:

[Figure: the labeling with (A, B, C, D, E, F) = (6, 5, 4, 3, 2, 1) and the labeling with (A, B, C, D, E, F) = (5, 6, 4, 3, 2, 1).]

One can check that the four figures above are sequential standard W_{Φ(w)}-labelings satisfying the restrictions of Φ(w). Conversely, if we follow the arrows, we see many of the features we pointed out in Example 4.2.12. Root A must have a greater label than all other roots except possibly root B, so it must be labeled 5 or 6. The arrows also give that root F must have a smaller label than all other roots except possibly root B. As in Example 4.2.12, the remaining labels occur on the B − E line. Thus, we can again break our analysis into the eight cases according to whether root A is labeled 5 or 6, whether root F is labeled 1 or 2, and whether root E has the highest or lowest remaining label on the B − E line.

The main difference in the reasoning is that wherever we contradicted the assumption that T is standard in Example 4.2.12, we now contradict the assumption that T satisfies the restrictions of a particular partial inversion set. For example, in Example 4.2.12, we derived the following contradiction: "Now suppose that root A is labeled 6 and root F is labeled 2. Then root E must be labeled 1 or 5. If root E is labeled 1, then T(G) = 0, T(F) = 2, and T(E) = 1, which contradicts the assumption that T is standard."

The corresponding contradiction here is as follows: Suppose that root A is labeled 6 and root F is labeled 2. Then root E must be labeled 1 or 5.
If root E is labeled 1, then we contradict the assumption that T satisfies the restrictions of the partial inversion set {E, F}, which requires that T(F) < T(E), since root E is the canonical simple root of {E, F}.

Note that the above restriction on labelings for the partial inversion set {E, F} is caused by the absence of root G from Φ(w). The other contradictions derived in Example 4.2.12 are translated similarly to the setting of W_{Φ(w)}-labelings. One can also check that the four labelings given for this example are the same as those of Example 4.2.12, except that the labelings are restricted to the set Φ(w).

4.4 Labelings and Directed Acyclic Graphs

We establish a connection between the labelings we have obtained and directed acyclic graphs with restrictions imposed upon them. For this purpose, we need some basic terminology and basic results from graph theory.
Definition 4.4.1.
We call the pair G = (V, E) a simple directed graph if V is a set and E ⊆ V × V is such that (v, v) ∉ E for each v ∈ V. The elements of V are called vertices and the elements of E are called directed edges. We sometimes denote the assertion that (u, v) ∈ E by u → v. An oriented graph is a simple directed graph such that for any u, v ∈ V, there is at most one directed edge connecting u and v (i.e. u → v or v → u or neither).

Remark 4.4.2. Every oriented graph is a simple directed graph. However, a simple directed graph may have both u → v and v → u. The directed graphs that arise in this thesis are all simple and directed, so we do not bother with the full spectrum and generality of graph theoretic objects. It should also be remarked that all of the directed graphs we use have a finite vertex set V (and hence a finite edge set E).

Definition 4.4.3.
Let G = (V, E) be a directed graph. If v_1, . . . , v_n ∈ V, then a sequence of edges satisfying v_1 → v_2 → · · · → v_n → v_1 is called a directed cycle. If G contains no directed cycles, then we call G a directed acyclic graph.

Remark 4.4.4. A directed acyclic graph is necessarily oriented, since u → v and v → u gives a cycle.

Lemma 4.4.5.
Let G = ( V, E ) be a directed acyclic graph. Then thereexists a partial order ≤ on V such that for x, y ∈ V , x ≤ y whenever x → y .Proof. For any edge x → y we require that x ≤ y . Forming the reflexiveand transitive closure of the required relations makes ≤ a partial order. See[15, Section 3.2.3] for details. Definition 4.4.6. A topological sort of a directed acyclic graph G = ( V, E )is a linear ordering of V such that if G contains an edge u → v , then u appears before v in the linear ordering. A linear extension of a partiallyordered set ( X, ≤ ) is a linear ordering ≤ ′ of X such that x ≤ y implies x ≤ ′ y . Lemma 4.4.7.
There exists a topological sort of any finite directed acyclicgraph.Proof.
See Fact 23 of [15, Section 3.2.4].

As discussed in [15, Section 3.2.3], there is a close connection between directed acyclic graphs and partially ordered sets. From any directed acyclic graph we can produce a partially ordered set compatible with it, and associated to a partially ordered set is at least one directed acyclic graph (typically we represent partially ordered sets with "Hasse diagrams", which can be interpreted as directed acyclic graphs). Thus it is not surprising that we can also give a linear extension (via a topological sort) of a finite partially ordered set.
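The topological sort invoked above can be computed efficiently. The sketch below is not part of the thesis; it is a standard implementation of Kahn's algorithm (repeatedly removing vertices with no incoming edges), with hypothetical vertex names chosen for illustration. It verifies that the resulting linear order extends every edge, i.e., that it is a linear extension of the associated partial order.

```python
from collections import deque

def topological_sort(vertices, edges):
    """Kahn's algorithm: repeatedly remove a vertex with no incoming edges."""
    indegree = {v: 0 for v in vertices}
    succ = {v: [] for v in vertices}
    for u, v in edges:
        succ[u].append(v)
        indegree[v] += 1
    queue = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) != len(vertices):
        raise ValueError("graph contains a directed cycle")
    return order

# A small DAG; the edges are the covering relations of a 4-element poset.
verts = ["a", "b", "c", "d"]
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
order = topological_sort(verts, edges)
pos = {v: i for i, v in enumerate(order)}
assert all(pos[u] < pos[v] for u, v in edges)  # every edge respects the order
```

A directed cycle would leave some vertex with permanently positive indegree, which the final length check detects.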
Lemma 4.4.8.
Every finite partially ordered set ( X, ≤ ) has a linear exten-sion.Proof. See [16, Algorithm 12.2.2].
Definition 4.4.9.
We call a directed graph G = (V, E) a tournament on V if for every pair of distinct vertices x, y ∈ V, we have either x → y or y → x (but not both). A tournament is transitive if for all x, y, z ∈ V, (x → y and y → z) implies x → z.

Remark 4.4.10. According to [15, Section 3.3], tournaments arise as models of round-robin tournaments, which are competitions in which each participant competes against every other participant exactly once. Thus an edge u → v models the situation in which u defeats v in the tournament. In a transitive tournament, there is a unique overall winner, unique overall second place, and so forth. This is captured in the next lemma.

Lemma 4.4.11.
Let G = ( V, E ) be a transitive tournament. Then G is adirected acyclic graph and there is a unique topological sort of G .Proof. See Fact 14 of [15, Section 3.3.2].
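Lemma 4.4.11 can be checked by brute force on small examples. The following sketch is illustrative only (the vertex names and labels are invented); it builds a tournament from an injective labeling, in the spirit of the constructions of this section, verifies transitivity directly, and confirms that the topological sort is unique.

```python
from itertools import permutations

def is_transitive_tournament(vertices, edges):
    E = set(edges)
    for x in vertices:
        for y in vertices:
            if x != y and ((x, y) in E) == ((y, x) in E):
                return False  # not a tournament: need exactly one direction
    # transitivity: x -> y and y -> z forces x -> z
    return all((x, z) in E
               for x in vertices for y in vertices for z in vertices
               if (x, y) in E and (y, z) in E)

def topological_sorts(vertices, edges):
    """All linear orderings of the vertices that respect every edge."""
    E = set(edges)
    return [p for p in permutations(vertices)
            if all(p.index(u) < p.index(v) for u, v in E)]

# Tournament determined by labels: an edge u -> v iff label(u) < label(v).
verts = ["x", "y", "z"]
labels = {"x": 1, "y": 0, "z": 2}
edges = [(u, v) for u in verts for v in verts if u != v and labels[u] < labels[v]]
assert is_transitive_tournament(verts, edges)
sorts = topological_sorts(verts, edges)
assert sorts == [("y", "x", "z")]  # unique, and it recovers the label order
```

The unique topological sort simply lists the vertices by increasing label, which is the mechanism behind the bijections of this section.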
Lemma 4.4.12.
Let T : Φ( w ) → N be a sequential W Φ( w ) -labeling suchthat supp ( T ) = Φ( w ) . Let G T = ( V, E ) be the directed graph with vertex set Φ( w ) and edge set E = { ( α, β ) : T ( α ) < T ( β ) } . The directed graph G T is a transitive tournament on Φ( w ) .Proof. If α, β ∈ Φ( w ) are distinct positive roots, then by Definition 4.2.7,either T ( α ) < T ( β ) or T ( β ) < T ( α ). Thus, either α → β or β → α . Definition 4.4.13.
We call the directed graph G_T in Lemma 4.4.12 the transitive tournament on Φ(w) determined by T. It will be convenient to regard the labelings of Φ(w) in Corollary 4.3.19 as inducing linear orders on Ψ ∩ Φ(w) for any inversion set Ψ.

Definition 4.4.14. Let (W, S) be a Coxeter system and let w ∈ W. If Ψ ∩ Φ(w) = {γ_1, . . . , γ_k} is a partial inversion set of w, then we call the linear ordering (Ψ ∩ Φ(w), ≤) given by γ_1 > · · · > γ_k the induced ordering on Ψ ∩ Φ(w). Suppose Ψ = {γ_1, . . . , γ_m} is a full inversion set. Then we call the two linear orderings ≤_1 and ≤_2 on Ψ given by γ_1 < · · · < γ_m and γ_m < · · · < γ_1 the dual orderings of Ψ.

To phrase our correspondence in the terminology of tournaments, we need to account for the variety of restrictions that our correspondence from the previous section imposes.
Definition 4.4.15.
Let G = (V, E) be a transitive tournament. Suppose U ⊆ V and (U, ≤) is a linearly ordered set. We say that the tournament G is consistent with (U, ≤) if for any u, u′ ∈ U, u < u′ implies (u, u′) ∈ E.

Example 4.4.16. Let V = {A, B, C, D, E} and let G = (V, E) be the tournament having the edge set depicted in the graph below.
(Figure: a tournament on the vertices A, B, C, D, E, with, in particular, A → B → C and C → D → E.)
Let U = {A, B, C} and U′ = {C, D, E}. Let (U, ≤) be the linear order determined by the relations A < B < C, and let (U′, ≤′) be the linear order determined by the relations E <′ D <′ C. Then G is consistent with (U, ≤) since A → B → C in G and A < B < C. However, G is not consistent with (U′, ≤′) since C → D → E in G and E <′ D <′ C.

Lemma 4.4.17. Let G = (V, E) be a transitive tournament. Suppose (U, ≤) is a linearly ordered set and suppose that G is consistent with (U, ≤). Let (V, ≤′) be the unique topological sort of G. Then for any distinct u, u′ ∈ U we have u < u′ if and only if u <′ u′.

Proof. If u < u′, then since G is consistent with (U, ≤) we have u → u′. Thus, since <′ is given by the topological sort of G, we have u <′ u′. Conversely, suppose u <′ u′. Since G is a tournament, we have either u → u′ or u′ → u, but not both. As <′ is a topological sort of G, we must have u → u′. Now since G is consistent with (U, ≤), we have u < u′.

Definition 4.4.18.
Let G = (V, E) be a transitive tournament with vertex set V = Φ(w). Suppose that for any inversion set Ψ for which Ψ ∩ Φ(w) is a partial inversion set, the tournament G is consistent with the induced ordering on Ψ ∩ Φ(w). Also suppose that for any inversion set Ψ for which Ψ ∩ Φ(w) is a full inversion set, the tournament G is consistent with one of the two dual orders of Ψ. Then we say that G satisfies the restrictions imposed by Φ(w). The set of all transitive tournaments on Φ(w) that satisfy the restrictions imposed by Φ(w) is denoted by Tour(w).

Lemma 4.4.19.
Let (W, S) be an arbitrary Coxeter system and let w ∈ W. Then the set Lab(W_{Φ(w)}, w) of sequential standard labelings of W_{Φ(w)} satisfying the restrictions of Φ(w) is in 1–1 correspondence with the set Tour(w) of transitive tournaments satisfying the restrictions imposed by Φ(w).

Proof. Let T be a sequential standard labeling of W_{Φ(w)} satisfying the restrictions of Φ(w). Let Ψ ∩ Φ(w) be a partial inversion set. By Lemma 4.3.9, there exists a canonical simple root γ_1 ∈ Ψ ∩ Φ(w). By Corollary 4.3.11, we have Ψ ∩ Φ(w) = [γ_1, γ_k] in the total ordering (Ψ, <_{Ψ,γ_1}). By Definition 4.3.14, T(γ_1) > · · · > T(γ_k). In the transitive tournament G_T determined by the labeling T, we have γ_i → γ_j for any i > j. Thus, G_T is consistent with the induced ordering on Ψ ∩ Φ(w).

Similarly, if Ψ = {γ_1, . . . , γ_m} is a full inversion set, then since T is standard, either T(γ_1) < · · · < T(γ_m) or T(γ_m) < · · · < T(γ_1). In G_T, the subgraph induced by the vertices of Ψ is consistent with one of those orders. Thus we may define G : Lab(W_{Φ(w)}, w) → Tour(w) by G(T) = G_T.

Conversely, let G = (V, E) be a transitive tournament on the vertex set V = Φ(w) that satisfies the restrictions imposed by Φ(w). By Lemma 4.4.11, there is a unique topological sort (Φ(w), ≤). We define a labeling T by T(α) = k if α is the k-th element appearing in the linear ordering ≤. This immediately implies that T is sequential and supp(T) = Φ(w).

If Ψ ∩ Φ(w) = {γ_1, . . . , γ_k} is a partial inversion set of w, then γ_i → γ_j for any i > j since G satisfies the restrictions imposed by Φ(w). Since γ_i → γ_j for i > j gives γ_i < γ_j in the linear ordering given by the topological sort, it follows that T(γ_1) > · · · > T(γ_k) by our construction of T.

If Ψ = {γ_1, . . . , γ_m} is a full inversion set, then the restrictions imposed by Φ(w) give that either γ_1 → · · · → γ_m or γ_m → · · · → γ_1. By our definition of T, this implies that either T(γ_1) < · · · < T(γ_m) or T(γ_m) < · · · < T(γ_1), so that T is a standard W_{Φ(w)}-labeling. Thus, we may define F : Tour(w) → Lab(W_{Φ(w)}, w) by F(G) = T_G, where T_G is determined by the unique topological sort of G.

The labeling T can be reconstructed by knowing whether T(α) < T(β) or T(α) > T(β) for each pair α, β ∈ Φ(w). Thus we have T_{G_T} = T, since G_T faithfully encodes the order of the labels. Similarly, G_{T_G} = G, so that F and G are inverse to each other.

Corollary 4.4.20.
Let (W, S) be an arbitrary Coxeter system and let w ∈ W. Then the set Tour(w) of all transitive tournaments on Φ(w) satisfying the restrictions imposed by Φ(w) is in 1–1 correspondence with the set R(w) of reduced expressions for w.

Proof. This follows by combining the correspondences of Lemma 4.4.19 and Corollary 4.3.19.

We now combine the correspondences obtained so far into a single main theorem.
Theorem 4.4.21.
Let w ∈ W . Then the following sets are all in naturalbijection:(1) the set R ( w ) of reduced expressions for w ;(2) the set Lab( w ) of all sequential standard labelings with Φ( w ) as support;(3) the set Lab( W , w ) of all sequential standard W -labelings with Φ( w ) assupport;(4) the set Lab( W Φ( w ) , w ) of all sequential standard W Φ( w ) -labelings thatsatisfy the restrictions of Φ( w ) ;(5) the set Tour( w ) of all transitive tournaments satisfying restrictions im-posed by Φ( w ) .Proof. Proposition 4.1.7 establishes a bijection between (1) and (2). Corol-lary 4.2.11 establishes a bijection between (1) and (3). Corollary 4.3.18establishes a bijection between (1) and (4). Corollary 4.4.20 establishes abijection between (1) and (5).
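For permutations, the set R(w) of Theorem 4.4.21(1) can be enumerated directly. The sketch below is not the thesis's machinery but a standard descent recursion in the symmetric group (here s_i denotes the adjacent transposition swapping positions i and i + 1, an illustrative convention); it lists the reduced words of the longest element of S_3.

```python
def inversions(w):
    """Coxeter length of a permutation in one-line notation."""
    return sum(1 for i in range(len(w))
               for j in range(i + 1, len(w)) if w[i] > w[j])

def reduced_words(w):
    """All reduced words for the permutation w, built by peeling off descents."""
    w = tuple(w)
    if inversions(w) == 0:
        return [[]]
    words = []
    for i in range(len(w) - 1):
        if w[i] > w[i + 1]:  # s_i is a descent, so it can end a reduced word
            v = list(w)
            v[i], v[i + 1] = v[i + 1], v[i]
            words.extend(word + [i] for word in reduced_words(v))
    return words

words = reduced_words((3, 2, 1))  # longest element of S_3, length 3
assert len(words) == 2
assert sorted(words) == [[0, 1, 0], [1, 0, 1]]
```

By the bijections of Theorem 4.4.21, each of these words corresponds to one sequential standard labeling and one tournament satisfying the restrictions imposed by Φ(w).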
Example 4.4.22. To illustrate Theorem 4.4.21, we give a "side by side" view of the corresponding elements of the sets (1)–(5) for a single reduced expression of a Coxeter group element. As in Example 4.3.20, we let W = W(B_3) and w = φ(u, s, t, s, t, u). Let x = (u, s, t, s, t, u). Recall that

Φ+ = {α_s, α_t, α_u, √2 α_s + α_t, α_t + α_u, α_s + √2 α_t, √2 α_s + α_t + α_u, α_s + √2 α_t + √2 α_u, √2 α_s + 2 α_t + α_u}.

Also recall that

Φ(w) = {α_s, α_u, α_t + α_u, √2 α_s + α_t + α_u, α_s + √2 α_t + √2 α_u, √2 α_s + 2 α_t + α_u}.

Then the corresponding elements for (1)–(5) are given by:

(1) the reduced expression x = (u, s, t, s, t, u);

(2) the labeling T_x : Φ+ → {0, 1, 2, 3, 4, 5, 6} given by T_x(α_t) = T_x(α_s + √2 α_t) = T_x(√2 α_s + α_t) = 0, T_x(√2 α_s + 2 α_t + α_u) = 1, T_x(α_s) = 2, T_x(√2 α_s + α_t + α_u) = 3, T_x(α_s + √2 α_t + √2 α_u) = 4, T_x(α_t + α_u) = 5, T_x(α_u) = 6;

(3) the sequential standard W-labeling, using the root chart of Example 4.2.12 (diagram omitted; its label sequence reads 6 2345 1 000);

(4) the sequential standard W_{Φ(w)}-labeling that satisfies the restrictions of Φ(w) (diagram omitted; its label sequence reads 6 2345 1);

(5) the transitive tournament satisfying the restrictions imposed by Φ(w) (diagram omitted).

For the pictorial representation of the tournament in (5), we make the convention that we need not display arrows implied by transitivity. Furthermore, the arrows "outside the lines" give the restrictions on Φ(w) associated with the partial inversion sets of Φ(w). The arrows "along the lines" give the choice of direction made for each line of the standard segment structure W_{Φ(w)} of Φ(w). Observe that the arrows point in the directions of the labels in increasing order from the previous two examples.

Chapter 5

Reduced Expression Combinatorics
The goal of this chapter is to apply the constructions of the previous chapters to answer questions involving the combinatorics of reduced expressions. The main theorem of this chapter is Theorem 5.1.8, which gives a formula for the length of an element of W obtained by deleting a generator from a reduced expression for another element of W. In Section 5.2, we collect facts about the relationship between braid moves and root sequences, which will be applied in Section 5.3. In Section 5.3, we introduce the freely braided elements of Green and Losonczy, and give a characterization of the elements w that are freely braided in terms of statistics derived from w.

Segment structures do not come equipped with a metric. Though we can define one based on the betweenness relation, we only need such a metric for the standard segment structure W of Φ+.

Definition 5.1.1.
Let Ψ be an inversion set and let γ and δ be the canonical simple roots of Ψ. Let (γ_i) and (δ_i) be the associated γ- and δ-sequences. If α = γ_i and β = γ_j, then we define the distance between α and β to be |i − j|. Similarly, if α = δ_i and β = δ_j, then we define the distance between α and β to be |i − j|. If neither of the above two conditions holds, then the distance between α and β is undefined. We denote the distance between α and β by d(α, β).

Remark 5.1.2. We only use this metric in the context of the restriction of the standard segment structure W = (Φ+, Inv(Φ+), B_W) to W_{Φ(w)} = (Φ(w), Inv(Φ+)_{Φ(w)}, B_{Φ(w)}) for a prescribed w ∈ W. In this context, if α, β ∈ Ψ ∩ Φ(w) for some inversion set Ψ, then Corollary 3.4.8 implies α and β lie in the same local root sequence. It follows by Corollary 3.4.4 that if α, β ∈ Φ(w), then d(α, β) is defined.

Definition 5.1.3.
Let W_{Φ(w)} = (Φ(w), Inv(W)_{Φ(w)}, B_{Φ(w)}) be the standard segment structure of Φ(w). Let L ∈ Inv(W)_{Φ(w)} be a line in W_{Φ(w)} and let θ ∈ L. If L = [γ_1, γ_k], then we define the minimum distance of θ = γ_i to the endpoints of L to be min(d(γ_i, γ_1), d(γ_i, γ_k)) = min(i − 1, k − i), which we denote by |θ|_L. If θ ∉ L, we set |θ|_L = 0.

Example 5.1.4. Let W = W(B_3) and w = φ(u, s, t, s, t, u). Below, we represent the standard segment structure of Φ(w) in a way similar to Example 4.3.20, so that roots in the same inversion set (or partial inversion set) appear to be collinear. Throughout our explanation, we refer to the roots by their names given in the chart below. (Note that this naming is not a "labeling" in the sense of Definition 2.1.12.)

Let L be the full inversion set Ψ ∩ Φ(w) = {B, C, D, E}. Then root B and root E are the canonical simple roots of Ψ. Thus, if γ = E, so that E = γ_1, we have that E = γ_1, D = γ_2, C = γ_3, and B = γ_4. To find |C|_Ψ, we note that d(C, E) = 2 and d(C, B) = 1. Thus |C|_Ψ = min(1, 2) = 1. On the other hand, if Ψ′ is the full inversion set Ψ′ ∩ Φ(w) = {A, B}, then we have |C|_{Ψ′} = 0, since C ∉ Ψ′. In the partial inversion set Ψ′′ ∩ Φ(w) = {C, F}, we have |C|_{Ψ′′} = 0 because d(C, C) = 0 and C is an endpoint of Ψ′′.

One advantage to ordering the positive roots of a dihedral subsystem according to how many alternating reflections are applied to a root is that we can use statistics from the ordering to determine whether a reflection applied to a root within the dihedral subsystem is positive or negative.

Lemma 5.1.5.
Let γ, δ be the canonical simple roots for a local Coxeter system and let i ≥ 1, j ∈ Z. Then we have s_{γ_i}(γ_j) = δ_{j−2i+1}. Similarly, s_{δ_i}(δ_j) = γ_{j−2i+1}.

Proof. For i = 1, this is just the backward recurrence s_{γ_1}(γ_j) = δ_{j−1} of (3.7). For i > 1, we first show that s_{γ_i} = (s_{γ_1} s_{δ_1})^{i−1} s_{γ_1} and s_{δ_i} = (s_{δ_1} s_{γ_1})^{i−1} s_{δ_1} by induction:

s_{γ_i} = s_{s_{γ_1}(δ_{i−1})} = s_{γ_1} s_{δ_{i−1}} s_{γ_1} = s_{γ_1} (s_{δ_1} s_{γ_1})^{i−2} s_{δ_1} s_{γ_1} = (s_{γ_1} s_{δ_1})^{i−1} s_{γ_1}.

Now, since γ_j = s_{γ_1}(δ_{j−1}) and since the last factor of s_{γ_i} is s_{γ_1}, we have

s_{γ_i}(γ_j) = (s_{γ_1} s_{δ_1})^{i−1} s_{γ_1} s_{γ_1}(δ_{j−1})
= (s_{γ_1} s_{δ_1})^{i−2} s_{γ_1} s_{δ_1}(δ_{j−1})
= (s_{γ_1} s_{δ_1})^{i−2} s_{γ_1}(γ_{j−2})
= s_{γ_{i−1}}(γ_{j−2})
= δ_{(j−2)−2(i−1)+1} = δ_{j−2i+1}.

The third equality applies the backward recurrence of (3.7) to get the equality s_{δ_1}(δ_{j−1}) = γ_{j−2}. The final equality follows by induction. Interchanging the roles of γ and δ gives the last statement of the lemma.

Corollary 5.1.6.
Let γ, δ be canonical simple roots for a local Coxeter system and let i ≥ 1. Let m = |s_γ s_δ| and let j be such that i < j ≤ m (with no upper bound if m = ∞). Then s_{γ_i}(γ_j) is negative if j < 2i; otherwise s_{γ_i}(γ_j) is positive.

Proof. By Lemma 5.1.5, s_{γ_i}(γ_j) = δ_{j−2i+1}. If j < 2i, then since i < j ≤ m, we have −(m − 2) ≤ j − 2i + 1 ≤ 0, which implies that s_{γ_i}(γ_j) is negative by Lemma 3.3.9. If j ≥ 2i, then since j ≤ m and i ≥ 1, we have 1 ≤ j − 2i + 1 ≤ m − 1, which implies that s_{γ_i}(γ_j) is positive by Lemma 3.3.8.

Lemma 5.1.7.
Let w ∈ W and let Ψ be an inversion set intersecting Φ(w), so that |Ψ ∩ Φ(w)| ≥ 1. Let x be a reduced expression for w. Then there exists a unique canonical simple root γ of Ψ that is the last root of Ψ ∩ Φ(w) to occur in the root sequence θ(x).

Proof. If Ψ ∩ Φ(w) is a partial inversion set, then by Lemma 4.3.9, there is only one canonical simple root of Ψ in Φ(w). By Lemma 3.5.5(2), T_x(γ) has the highest label of the roots in Ψ ∩ Φ(w).

If Ψ ∩ Φ(w) is a full inversion set, then there are two distinct canonical simple roots in Φ(w). By Lemmas 3.5.5 and 3.3.12, the root labeled highest by T_x among all the roots of Ψ ∩ Φ(w) is one of the canonical simple roots of Ψ. We let γ be that highest labeled root and note that the root sequence result follows from Definition 2.1.14.

Recall that D_j(x) denotes the expression obtained from x by deleting the j-th generator. By Lemma 2.1.8, if x is a reduced expression for w and θ_j is the j-th entry of the root sequence θ(x), then D_j(x) is an expression for ws_{θ_j}. Thus, the length of the Coxeter group element φ(D_j(x)) is given by the length of ws_{θ_j}. Lastly, recall that by Corollary 2.1.11, the length of ws_{θ_j} is ℓ(w) − 2d − 1, where d is the number of θ_k in θ(x) satisfying 1 ≤ k < j and s_{θ_j}(θ_k) ∈ Φ−. The content of the next theorem is that the number d can be expressed in terms of a statistic associated to θ_j in the standard segment structure of Φ(w).

Theorem 5.1.8.
Let w ∈ W, let θ ∈ Φ(w), and let W_{Φ(w)} = (Φ(w), Inv(W)_{Φ(w)}, B_{Φ(w)}) be the standard segment structure of Φ(w). Define

D = Σ_{L ∈ Inv(W)_{Φ(w)}} |θ|_L,

where |θ|_L is as it is in Definition 5.1.3. Then

ℓ(ws_θ) = ℓ(w) − 1 − 2D.

Proof.
Let x be a reduced expression for w with root sequence θ(x) and let n be the index of the root sequence satisfying θ_n = θ. Since θ is fixed throughout the proof, so is the associated index n. Let

Θ = {θ_k ∈ θ(x) : 1 ≤ k < n and s_{θ_n}(θ_k) ∈ Φ−}

and set d = |Θ|. By Corollary 2.1.11, it suffices to show that D = d. For each L ∈ Inv(W)_{Φ(w)}, define Θ_L = Θ ∩ L.

By Corollary 3.4.4, for any k < n there exists a unique inversion set Ψ containing θ_n and θ_k. By Definition 4.3.3 each L ∈ Inv(W)_{Φ(w)} is of the form Ψ ∩ Φ(w) for some inversion set Ψ. Thus there exists a unique line L ∈ Inv(W)_{Φ(w)} containing both θ_n and θ_k by Corollary 3.4.4. By Lemma 3.4.3, if L, L′ ∈ Inv(W)_{Φ(w)} are distinct lines that contain θ_n, then they have no other points in common. In other words, (L \ {θ_n}) ∩ (L′ \ {θ_n}) = ∅. Since θ_n ∉ Θ, the set Θ is the disjoint union of sets of the form Θ ∩ L, where L contains θ_n. Thus we have |Θ| = Σ |Θ ∩ L|, where the sum is over all L ∈ Inv(W)_{Φ(w)} that contain θ = θ_n. Since |θ_n|_L = 0 for any L ∈ Inv(W)_{Φ(w)} such that θ_n ∉ L, it now suffices to show that |θ_n|_L = |Θ ∩ L| for all L ∈ Inv(W)_{Φ(w)} containing θ_n.

Now let L be an arbitrary line of Inv(W)_{Φ(w)} containing θ_n and let θ_l be a root occurring before θ_n in θ(x), so that 1 ≤ l < n. By Corollary 3.4.4, there exists a unique inversion set Ψ containing θ_n and θ_l. By Lemma 5.1.7, we may let γ denote the unique canonical simple root of Ψ that occurs latest in the root sequence θ(x). By Corollary 3.4.8, since L = Ψ ∩ Φ(w) and |L| ≥ 2, we have L = [γ_1, γ_k] for some k ≤ m = |s_γ s_δ|. Thus θ_n = γ_i for some i satisfying 1 ≤ i ≤ k. We have chosen γ so that the only roots in [γ_1, γ_k] occurring before θ_n in the root sequence θ(x) are the roots γ_{i+1}, . . . , γ_k. Thus, θ_l = γ_j where i < j. By Corollary 5.1.6, since i < j, s_{γ_i}(γ_j) is negative if and only if j < 2i.

There are two cases: k − i ≤ i − 1 and k − i > i − 1. Suppose first that k − i ≤ i − 1, so that k ≤ 2i − 1. Then, for any j satisfying i + 1 ≤ j ≤ k, we have j ≤ k < 2i. Thus s_{γ_i}(γ_j) is negative. There are k − i such j in this interval, so that, in this case, |θ_n|_L = min(k − i, i − 1) = k − i = |Θ ∩ L|. Now suppose i − 1 < k − i, so that 2i − 1 < k. Then, for j satisfying i + 1 ≤ j ≤ 2i − 1 < k, s_{γ_i}(γ_j) is negative. For j satisfying 2i ≤ j ≤ k, s_{γ_i}(γ_j) is positive. Thus the number of s_{γ_i}(γ_j) sent negative, where i < j ≤ k, is i − 1, which is min(k − i, i − 1) in this case as well. Thus, in both cases the number of roots in L occurring before θ_n in θ(x) is min(k − i, i − 1) = |θ_n|_L, as desired.

Example 5.1.9. Again we use the example W = W(B_3), x = (u, s, t, s, t, u), and w = φ(x). Using the naming scheme of Example 5.1.4, we have that the root sequence is θ(x) = (F, B, C, D, E, A). Thus, deletion of the root C in θ(x) (given by right multiplying by s_{√2 α_s + α_t + α_u}) corresponds to the expression x′ = (u, s, s, t, u). We have |C|_Ψ = 0 except for Ψ = {B, C, D, E}. In that case |C|_Ψ = 1, so that D = 1 in Theorem 5.1.8. Theorem 5.1.8 now predicts that ℓ(φ(x′)) = 6 − 1 − 2 = 3. Deleting D from θ(x), we get the expression x′ = (u, s, t, t, u). On the lines L = {B, C, D, E} and L′ = {A, D, F}, we have |D|_L = 1 and |D|_{L′} = 1, respectively. Thus, D = 2 in Theorem 5.1.8. It follows that ℓ(φ(x′)) = 6 − 1 − 4 = 1.

Recall that the fully commutative elements are those w ∈ W such that only relations of the form st = ts need to be applied to transform one reduced expression for w into another. The short-braid avoiding elements are those whose reduced expressions avoid substrings of the form sts, where s, t ∈ S are noncommuting generators. If the deletion of any generator from any reduced expression for w results in a reduced expression (for some w′ ∈ W), then the element is called fully covering. Fan ([9, Theorem 1]) shows that in a finite Weyl group, the short-braid avoiding elements are precisely the fully covering elements. In [17], Hagiwara, et al. show that the fully commutative elements of a simply-laced Coxeter group are precisely the fully covering elements so long as the group has finitely many fully commutative elements. Since fully commutative elements are precisely the short-braid avoiding elements in simply-laced Coxeter groups, this extends Fan's result. In Proposition 5.1.16, we give a characterization of the fully covering elements in terms of the geometry of W_{Φ(w)}, which applies in the setting of arbitrary Coxeter groups.

Definition 5.1.10.
Let (
W, S) be a Coxeter system and let u, w ∈ W. If there exists a reflection t ∈ T such that ut = w and ℓ(u) < ℓ(w), then we write u → w. We denote the transitive closure of → by ≺, which we call the Bruhat–Chevalley order of W.

Definition 5.1.11. We say that w covers v relative to ≺ if v ≺ w and v ≺ u ≺ w implies that either u = v or u = w. We say that w is fully covering if ℓ(w) = |{u ∈ W : w covers u}|.
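The covering condition is easy to test in a symmetric group, where ℓ(w) is the number of inversions and, by standard facts about the Bruhat–Chevalley order, w covers u exactly when u = wt for a transposition t with ℓ(u) = ℓ(w) − 1. The sketch below is illustrative only (permutations are written in one-line notation, a convention chosen here); it exhibits one fully covering element and one that is not.

```python
from itertools import combinations

def inversions(w):
    return sum(1 for i, j in combinations(range(len(w)), 2) if w[i] > w[j])

def covered_by(w):
    """Elements u = w * t (t a transposition of positions) with len(u) = len(w) - 1."""
    w = tuple(w)
    out = set()
    for i, j in combinations(range(len(w)), 2):
        u = list(w)
        u[i], u[j] = u[j], u[i]
        if inversions(u) == inversions(w) - 1:
            out.add(tuple(u))
    return out

w = (2, 3, 1)                     # length 2 as an element of S_3
assert inversions(w) == 2
assert len(covered_by(w)) == 2    # covers exactly l(w) elements: fully covering

w0 = (3, 2, 1)                    # longest element of S_3, length 3
assert len(covered_by(w0)) == 2   # only 2 < 3 covers: not fully covering
```

The second element fails because its reduced word (s, t, s) becomes non-reduced when the middle letter is deleted, matching the deletion characterization proved below.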
In the next proposition, we record some well-known facts about the Bruhat–Chevalley order. In Corollary 5.1.13, we give a trivial (but useful) translation of Definition 5.1.11 into a generator deletion property.
Proposition 5.1.12.
Let (W, S) be a Coxeter system and let ≺ denote the Bruhat–Chevalley order of W. Then:

(1) Suppose w = s_1 · · · s_k is a reduced expression for w ∈ W. Then for any u ∈ W, u ≺ w if and only if u = s_{i_1} · · · s_{i_j} for some choice of indices 1 ≤ i_1 < · · · < i_j ≤ k.

(2) For any u ∈ W, w covers u if and only if u ≺ w and ℓ(u) = ℓ(w) − 1.

Proof. For fact (1), see [19, Theorem 5.10]. For fact (2), see [19, Proposition 5.11].
Corollary 5.1.13.
Let w ∈ W. Let s_1 · · · s_k be a reduced expression for w. Then w is fully covering if and only if the expression s_1 · · · ŝ_i · · · s_k is reduced for every i satisfying 1 ≤ i ≤ k.

Proof. Let w be fully covering, so that w covers ℓ(w) = k elements in the Bruhat–Chevalley order. By Proposition 5.1.12, w covers u if and only if ℓ(u) = ℓ(w) − 1 and u can be realized as a subexpression of s_1 · · · s_k. The only subexpressions having k − 1 = ℓ(w) − 1 letters are those of the form s_1 · · · ŝ_i · · · s_k, and since the entries of the root sequence θ(x) are distinct, the elements ws_{θ_i} these expressions represent are distinct. Thus w covers ℓ(w) elements if and only if each of these k expressions is reduced.

Example 5.1.14. Let W = W(A_3) with S = {s, t, u}, m_{s,t} = 3, m_{t,u} = 3, and m_{s,u} = 2. Then the element w = φ(t, s, u, t) is fully covering since (s, u, t), (t, u, t), (t, s, t), and (t, s, u) are all reduced expressions in type A_3.

For an example of an element that is not fully covering, consider the Coxeter group W = W(Ã_2). This is the Coxeter group with generating set S = {s, t, u}, m_{s,t} = 3, m_{t,u} = 3, and m_{s,u} = 3. Thus, W is simply-laced, but it is known (see [22, Theorem 4.1]) that there are infinitely many fully commutative elements in type Ã_2. Thus Hagiwara et al.'s characterization ([17, Theorem 2.9]) of fully covering elements does not apply. Let w = φ(t, u, s, t, u). Then w is fully commutative since no Coxeter relations can be applied. However, (t, u, t, u) is a non-reduced subexpression, so w is not fully covering.

Lemma 5.1.15. Let (W, S) be an arbitrary Coxeter group and let W_{Φ(w)} = (Φ(w), Inv(W)_{Φ(w)}, B_{Φ(w)}) be the standard segment structure of Φ(w). Then w ∈ W is fully covering if and only if every point of the standard segment structure is an endpoint of the standard segment structure W_{Φ(w)} of Φ(w).

Proof. If w is fully covering, then given a reduced expression x for w, D_j(x) is reduced for every j satisfying 1 ≤ j ≤ ℓ(w). Thus, by Theorem 5.1.8, we have that for every µ ∈ Φ(w), |µ|_L = 0 for every line L containing µ.
Thus µ is an endpoint of (Φ(w), Inv(W)_{Φ(w)}, B_{Φ(w)}).

Conversely, if every point of Φ(w) is an endpoint, then by Theorem 5.1.8, ℓ(ws_µ) = ℓ(w) − 1 for every µ ∈ Φ(w). Thus, the number of roots of Φ(w) for which this happens is ℓ(w).

Recall by Definition 4.2.1 that every line of a segment structure contains at least two points.

Proposition 5.1.16.
Let w ∈ W and let W Φ( w ) = (Φ( w ) , Inv ( W ) Φ( w ) , B Φ( w ) ) be the standard segment structure of Φ( w ) . Then w ∈ W is fully covering ifand only if | L | = 2 for every line L ∈ Inv ( W ) Φ( w ) .Proof. Let λ be a positive root in Φ( w ) and suppose | L | = 2 for every line L ∈ Inv( W ) Φ( w ) . By Definition 4.2.1, λ is an endpoint of the standard seg-ment structure W Φ( w ) if λ is an endpoint for each line L ∈ Inv( W ) Φ( w ) thatcontains λ . However, if λ ∈ L , then λ is an endpoint of L since | L | = 2 byRemark 4.2.2.Conversely, if w ∈ W is fully covering, then by Lemma 5.1.15, every root λ ∈ Φ( w ) is an endpoint of the standard segment structure W Φ( w ) . However,if there exists a line L ∈ Inv( W ) Φ( w ) such that | L | >
2, then property (B3) of Definition 4.2.1 implies the existence of an intermediate point λ ∈ L. This contradicts that λ is an endpoint of the standard segment structure W_{Φ(w)}.

Example 5.1.17. Let W = W(Ã_2) and consider the element w = φ(t, u, s, t, u) given in Example 5.1.14. By calculation,

Φ(w) = {α_u, α_t + α_u, α_s + α_t + 2α_u, α_s + 2α_t + 2α_u, 2α_s + 2α_t + 3α_u}.

Then

L = {α_u, (α_s + α_t) + 2α_u, 2(α_s + α_t) + 3α_u}

is a partial inversion set: one can check (using Definitions 2.2.3 and 2.2.6) that the inversion set containing α_u and α_s + α_t has γ = α_u and δ = α_s + α_t as canonical simple roots, so that L is a partial inversion set. Since |L| = 3, we have that w is not fully covering by Proposition 5.1.16.

Definition 5.2.1.
Let a , b ∈ S ∗ and s, t ∈ S . Let x = a ( s, t ) m s,t b and x ′ = a ( t, s ) m s,t b . We call the substitution a ( s, t ) m s,t b = a ( t, s ) m s,t b an m -braid move and we say that x and x ′ are braid move related . We denote thebraid move transforming x into x ′ by x → x ′ . The reflexive and transitiveclosure of the braid move relation is an equivalence relation called braidequivalence .A well-known result of Matsumoto [21] and Tits [23] states that any reducedexpression for w can be transformed into any other by applying a finitesequence of braid moves. Thus, the set R ( w ) of reduced expressions for w form a single equivalence class with respect to braid equivalence. Theorem 5.2.2 ( Matsumoto, Tits ) . Let w ∈ W . Let x and y be reducedexpressions for w . Then x and y are braid equivalent.Proof. For a proof, see [2, Theorem 3.3.1(ii)].
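The Matsumoto–Tits theorem can be illustrated computationally: starting from a single reduced word and closing under braid moves recovers all of R(w). The sketch below is not from the thesis; words are tuples of generator names and the dictionary m (an assumed encoding) stores each bond m_{s,t}, which must be provided for every pair of generators that can appear adjacently.

```python
from collections import deque

def braid_orbit(word, m):
    """All words reachable from `word` by braid moves; m[frozenset({s,t})] = m_{s,t}."""
    def moves(w):
        for i in range(len(w) - 1):
            s, t = w[i], w[i + 1]
            if s == t:
                continue
            k = m[frozenset((s, t))]
            alt = tuple((s, t)[j % 2] for j in range(k))  # the factor (s,t,s,...), k letters
            if w[i:i + k] == alt:
                # replace it by (t,s,t,...) of the same length
                yield w[:i] + tuple((t, s)[j % 2] for j in range(k)) + w[i + k:]
    seen = {tuple(word)}
    queue = deque(seen)
    while queue:
        w = queue.popleft()
        for w2 in moves(w):
            if w2 not in seen:
                seen.add(w2)
                queue.append(w2)
    return seen

# Type A_2: S = {s, t} with m_{s,t} = 3; the longest element has two reduced words.
m = {frozenset(("s", "t")): 3}
orbit = braid_orbit(("s", "t", "s"), m)
assert orbit == {("s", "t", "s"), ("t", "s", "t")}
```

When the starting word is reduced, Theorem 5.2.2 guarantees that this breadth-first closure is exactly the set R(w).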
Definition 5.2.3.
Let θ = (θ_1, . . . , θ_k) be a sequence of roots. Then we denote the sequence (θ_k, . . . , θ_1) by θ^R and call θ^R the reversal of θ.

Recall that if x factors as x = x_1 · · · x_n, then we call θ(x) = θ_1 · · · θ_n the decomposition of θ(x) respecting x_1 · · · x_n if ℓ(x_i) = ℓ(θ_i) for each i satisfying 1 ≤ i ≤ n.

Lemma 5.2.4.
Let m = m_{s,t} be finite and let x = a(s,t)_m b be a reduced expression for some w ∈ W. Let θ(x) = θ_1 θ_2 θ_3 be the decomposition of θ(x) respecting a(s,t)_m b. Let x′ = a(t,s)_m b. Then θ(x′) = θ_1 θ_2^R θ_3 is the decomposition of θ(x′) respecting a(t,s)_m b.

Proof. Let v^{−1} = φ(b) and let α = θ((s,t)_m) = (θ′_1, . . . , θ′_m) be the root sequence for (s,t)_m. The first m entries of the associated α_s- and α_t-sequences give the root sequences for (s,t)_m and (t,s)_m respectively, so Lemma 3.3.12 implies that β = θ((t,s)_m) = (θ′_m, . . . , θ′_1) = α^R is the root sequence for (t,s)_m. Since φ((s,t)_m) = φ((t,s)_m), Lemma 2.1.7 implies that θ(x′) = θ_1 σ θ_3, where σ = v[θ((t,s)_m)] = v[β]. By Lemma 2.1.4, θ_2 = v[θ((s,t)_m)] = v[α]. Thus σ = v[α^R] = θ_2^R.

Lemma 5.2.5.
Let m = m_{s,t} be finite and let x = a(s,t)_m b be a reduced expression for some w ∈ W. Let θ(x) = θ_1 θ_2 θ_3 be the decomposition of θ(x) respecting x = a(s,t)_m b. If Ψ is the set of roots in the sequence θ_2, then Ψ is an inversion m-set.

Proof. Let v^{−1} = φ(b). If u = (s,t)_m, then the roots in θ(u) form an inversion m-set Ψ′. Let Ψ = v(Ψ′) be the set of roots in θ_2. Then, since all roots in v(Ψ′) are in Φ(w) (and thus positive), Lemma 2.3.9 implies that Ψ is an inversion m-set.

Definition 5.2.6.
Let x = a(s,t)_m b and x′ = a(t,s)_m b be reduced expressions for w that are braid move related. Let θ_1 θ_2 θ_3 be the decomposition of θ(x) respecting x = a(s,t)_m b. We call the set of roots Ψ in the sequence θ_2 the inversion set of the move x → x′.

Example 5.2.7. Let W = W(A_3) with S = {s, t, u}, and let x = (u, s, t, s, u). Let a = b = (u), so that x = a(s, t, s)b. Then θ_1 = (α_s + α_t), θ_2 = (α_t + α_u, α_s + α_t + α_u, α_s), and θ_3 = (α_u) determine the decomposition of θ(x) respecting a(s, t, s)b. Then Ψ = {α_t + α_u, α_s + α_t + α_u, α_s} is the inversion set of the move (u, s, t, s, u) → (u, t, s, t, u).

Lemma 5.2.8.
Let x be a reduced expression for w of the form x = abc, and let θ(x) = θ_1 θ_2 θ_3 be the decomposition of θ(x) respecting abc. Suppose that the roots of θ_2 form an inversion m-set. Then there exist distinct s, t ∈ S such that b = (s,t)_m with m_{s,t} = m.

Proof. Let v^{−1} = φ(c). Let s be the last entry of b and let t be the (m − 1)-st entry of b. Then α_s and s_s(α_t) are in the root sequence θ(b) = (θ′_1, . . . , θ′_m). Note that s_s(α_t) is a positive root in the local Coxeter system (W, S)_{{α_s, α_t}}. Since α_s, α_t are simple roots, ∆_{{α_s, α_t}} = {α_s, α_t} by Lemma 2.2.10. By hypothesis and Lemma 2.1.7, {v(θ′_1), . . . , v(θ′_m)} is an inversion m-set. Thus there exist α, β ∈ Φ such that

Φ+_{{α,β}} = {v(θ′_1), . . . , v(θ′_m)}

with canonical simple roots γ = v(θ′_i) and δ = v(θ′_j), where 1 ≤ i, j ≤ m. Since b is a reduced expression, v^{−1}(γ) and v^{−1}(δ) are positive roots. By Lemma 2.3.9 and Definition 3.4.1, {θ′_1, . . . , θ′_m} is an inversion m-set. In particular, we must have {θ′_1, . . . , θ′_m} = Φ+_{{α_s, α_t}} by Lemma 2.3.7.

Suppose u ∈ S is an entry occurring in b and suppose u ≠ s, t. Then, for some θ′_k in the root sequence θ(b), the α_u coefficient is nonzero by the reflection formula (1.1). This contradicts the fact that

span({θ′_1, . . . , θ′_m}) = span({α_s, α_t})

is a two-dimensional subspace of V by Lemma 3.4.2. Thus, every entry in b is either s or t, and since b is reduced we have b = (s,t)_m or b = (t,s)_m. Since |Φ+_{{α_s, α_t}}| = |st| = m_{s,t}, we have m = m_{s,t}.

Lemma 5.2.4 asserts that the effect of applying a braid move upon the root sequence is that the corresponding roots are reversed in the root sequence of the resulting expression. Lemma 5.2.5 asserts that associated to any braid move is an inversion set. Thus the roots of an inversion set are the roots being reversed upon applying a braid move.
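This reversal can be verified directly in type A_3, realizing s, t, u as the adjacent transpositions 0, 1, 2 acting on R^4 (an illustrative choice of model, not the thesis's setup). The sketch computes root sequences with the convention θ_j = s_ℓ · · · s_{j+1}(α_{s_j}) used in this section, and checks that the braid move of Example 5.2.7 reverses exactly the middle block.

```python
def alpha(i, n=4):
    """Simple root alpha_i = e_i - e_{i+1} of A_3 as a vector in R^4."""
    v = [0] * n
    v[i], v[i + 1] = 1, -1
    return tuple(v)

def reflect(i, v):
    """The simple reflection s_i swaps coordinates i and i+1."""
    v = list(v)
    v[i], v[i + 1] = v[i + 1], v[i]
    return tuple(v)

def root_sequence(word):
    """theta_j = s_l ... s_{j+1}(alpha_{s_j}), applying s_{j+1} first."""
    seq = []
    for j in range(len(word)):
        v = alpha(word[j])
        for i in word[j + 1:]:
            v = reflect(i, v)
        seq.append(v)
    return seq

# Example 5.2.7 with s, t, u = 0, 1, 2: x = (u,s,t,s,u) -> x' = (u,t,s,t,u).
x  = (2, 0, 1, 0, 2)
x2 = (2, 1, 0, 1, 2)
a, b = root_sequence(x), root_sequence(x2)
assert a[0] == b[0] and a[4] == b[4]  # the blocks theta_1, theta_3 are unchanged
assert a[1:4] == b[1:4][::-1]         # the middle block theta_2 is reversed
```

Here a[1:4] is (α_t + α_u, α_s + α_t + α_u, α_s), agreeing with the inversion set of the move computed in Example 5.2.7.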
Lemma 5.2.8 asserts that when the roots of an inversion set appear consecutively in the root sequence of an expression, there is a braid in the corresponding portion of the expression. In light of this, the next definition is natural in the context of braid moves.

Definition 5.2.9.
Let x be a reduced word for some w ∈ W with root sequence θ(x). We say that the inversion k-set Ψ is contracted in x if the roots in Ψ occur as a consecutive subsequence of θ(x). If Ψ is an inversion k-set and there exists a reduced expression y for w such that Ψ is contracted in y, then we say that Ψ is contractible in w. Otherwise we say that Ψ is non-contractible.

Remark. Though we do not pursue this, Lemmas 5.2.4, 5.2.5, and 5.2.8 combine in a natural way with the reduced expression correspondences of Chapter 4. In particular, when an inversion set is contracted, the associated labelings of Φ⁺ or Φ(w) given by Theorem 4.4.21 will be consecutive, and applying a braid move reverses the labelings.

Recall that two "braid move related" expressions differ by a single braid move.

Lemma 5.2.11.
Let x and x′ be reduced expressions for w that are braid move related. Let Ψ be the inversion set of the braid move x → x′. If Ψ′ ≠ Ψ is an inversion set, then the roots of Ψ′ ∩ Φ(w) appear in the same order in θ(x′) as they do in θ(x).

Proof. If |Ψ′ ∩ Φ(w)| <
2, the statement is vacuously true. Let µ, ν ∈ Ψ′ ∩ Φ(w) and suppose that µ and ν appear in a different order in θ(x′) than they do in θ(x). By Lemma 5.2.4, µ and ν can only appear in a different order in θ(x′) than they do in θ(x) if µ, ν ∈ Ψ. However, by Lemma 3.4.3, this would imply that Ψ′ = Ψ.

Lemma 5.2.12.
Let x be a reduced expression for some w ∈ W. Let θ(x) = (θ_1, ..., θ_{ℓ(w)}) and suppose that B(θ_i, θ_{i+1}) = 0 for some i such that 1 ≤ i ≤ ℓ(w) − 1. Then Ψ = {θ_i, θ_{i+1}} is an inversion 2-set.

Proof. By Corollary 3.4.4, there is a unique inversion set Ψ that contains θ_i and θ_{i+1}. By calculating θ_{i+1} and θ_i in the root sequence θ(x), we have θ_{i+1} = v(α_s) and θ_i = vs(α_t) for some v ∈ W and s, t ∈ S. Thus

B(θ_i, θ_{i+1}) = B(vs(α_t), v(α_s)) = B(s(α_t), α_s) = −B(α_t, α_s),

since B is invariant under multiplication in W. By hypothesis, B(α_s, α_t) = 0, so that m_{s,t} = 2. By Lemma 5.2.5, Ψ = {θ_i, θ_{i+1}} is an inversion 2-set, since the factor (s, t) = (s, t)_2 occurs as the associated factor in x.

Lemma 5.2.13.
Let Ψ be an inversion m-set, where m ≥ 2. Suppose that λ ∉ Ψ and that there are distinct roots α, β ∈ Ψ such that B(λ, α) = 0 and B(λ, β) = 0. Then, for any root µ in Ψ, we have B(λ, µ) = 0.

Proof. We can express any µ ∈ Ψ as µ = aα + bβ, since span({α, β}) = span(Ψ) by Lemma 3.4.2. Thus B(λ, µ) = B(λ, aα + bβ) = aB(λ, α) + bB(λ, β) = 0, since B is bilinear.

Lemma 5.2.14. Let Ψ be an inversion m-set with m > 2 and Ψ′ be an inversion m′-set with m′ > 2, and assume that |Ψ ∩ Ψ′| = 1. Then there exist distinct roots α ∈ Ψ and β ∈ Ψ′, neither of which is in Ψ ∩ Ψ′, such that B(α, β) ≠ 0. If α and β occur together in an inversion m′′-set Ψ′′, then m′′ > 2.

Proof. Let γ be the root in Ψ ∩ Ψ′. Choose α ∈ Ψ distinct from γ such that B(α, γ) ≠ 0. Since m >
2, Lemma 5.2.13 implies such an α exists. Suppose towards a contradiction that B(α, β) = 0 for all roots β ∈ Ψ′ such that β ≠ γ. Since m′ >
2, there are two distinct roots in Ψ′ orthogonal to α, so α is orthogonal to every root in Ψ′ by Lemma 5.2.13. In particular, since γ ∈ Ψ′, we have B(γ, α) = 0, a contradiction. Thus we may choose β such that B(α, β) ≠ 0, so that if α and β occur together in an inversion m′′-set, we have m′′ > 2.

Definition 5.2.15.
Let x = a (s, t) b and suppose m_{s,t} = 2. Then we say x′ = a (t, s) b is a 2-braid neighbor of x. The reflexive and transitive closure of the 2-braid neighbor relation is an equivalence relation that we call commutation equivalence. The equivalence classes are called commutation classes. The set of commutation classes for the reduced expressions for w is denoted C(w). We denote commutation equivalent expressions x and x′ by x ∼_C x′.

Remark. We can partition the reduced expressions for w into commutation classes. See [22, Section 1.1] for details.

Lemma 5.2.17.
Let x = ab and let x′ = ab′. Suppose b ∼_C b′. Then x ∼_C x′.

Proof. The sequence of 2-braid moves transforming b into b′ can be applied to transform x into x′ via 2-braid moves.

Definition 5.2.18.
Let Ψ be an inversion set. If |Ψ| = 2, then we call Ψ a short inversion set. If |Ψ| >
2, we call Ψ a long inversion set. The set of all contractible long inversion sets is denoted CInv(w).

Remark. By Proposition 3.3.14, if |Ψ| = 2, then m = |s_γ s_δ| = 2. Since B(γ, δ) = −cos(π/|s_γ s_δ|), the two roots in a short inversion set are necessarily orthogonal relative to B. In a long inversion set there exists at least one pair of nonorthogonal roots. The notion of "long inversion set" generalizes the notion of "inversion triple", as defined by Green and Losonczy in [12], from the context of simply-laced Coxeter groups to that of arbitrary Coxeter groups. That is, long inversion sets are inversion triples (and vice versa) whenever W is simply-laced, but long inversion sets need not be inversion triples in a Coxeter group that is not simply laced.

Lemma 5.2.20.
Let w ∈ W and let α, β ∈ Φ(w). Suppose there exist reduced expressions x and x′ such that α occurs before β in θ(x), but β occurs before α in θ(x′). Then there exists a contractible inversion set Ψ of w that contains α and β.

Proof. By Theorem 5.2.2, there exists a sequence of braid moves transforming x into x′. By Lemma 5.2.4 and Lemma 5.2.8, each braid move reverses the order, in the root sequence, of the roots in an inversion m-set Ψ, and any such Ψ is contracted where the move applies, hence contractible. If such a Ψ contains both α and β, then it is a contractible inversion set of w containing both α and β. Otherwise, Lemma 5.2.4 implies that α and β remain in the same order. Since α and β occur in a different order in θ(x′) than they do in θ(x), there must exist a contractible inversion set Ψ containing both α and β.

Corollary 5.2.21.
Let w ∈ W and let α, β ∈ Φ(w). Suppose α and β do not occur together in a contractible long inversion set, and suppose there exist reduced expressions x and x′ for w where T_x(α) > T_x(β) and T_{x′}(α) < T_{x′}(β). Then there exists a contractible inversion 2-set Ψ containing α and β, so that α ⊥ β.

Proof. By Lemma 5.2.20, there exists a contractible inversion set Ψ containing α and β. Since α and β are not contained together in a contractible long inversion set, Ψ is an inversion 2-set by Definition 5.2.18.

The elements whose contractible long inversion sets are pairwise disjoint are the freely braided elements. They were introduced by Green and Losonczy in [12] for simply-laced Coxeter groups. Our definition is for arbitrary Coxeter groups, but in many places our exposition closely follows that of [12].
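Commutation classes are easy to enumerate for small permutation examples, which is convenient for experimenting with the notions above. The following Python sketch is illustrative and not part of the thesis; it assumes the type A setting, where the generators s_1, ..., s_{n−1} are adjacent transpositions and m_{s_i, s_j} = 2 exactly when |i − j| ≥ 2, so that a 2-braid move swaps adjacent commuting letters of a reduced word.

```python
# Reduced words and commutation classes for a permutation w in S_n,
# regarded as an element of the Coxeter group of type A_{n-1}.

def reduced_words(p):
    """All reduced words of the permutation p (a tuple of 1..n)."""
    descents = [i for i in range(len(p) - 1) if p[i] > p[i + 1]]
    if not descents:
        return [[]]
    words = []
    for i in descents:
        q = list(p)
        q[i], q[i + 1] = q[i + 1], q[i]   # multiply on the right by s_{i+1}
        for word in reduced_words(tuple(q)):
            words.append(word + [i + 1])
    return words

def commutation_classes(words):
    """Partition reduced words by closing under swaps of adjacent commuting letters."""
    remaining = {tuple(w) for w in words}
    classes = []
    while remaining:
        seed = remaining.pop()
        cls, stack = {seed}, [seed]
        while stack:
            w = stack.pop()
            for k in range(len(w) - 1):
                if abs(w[k] - w[k + 1]) >= 2:   # a 2-braid (commutation) move
                    w2 = w[:k] + (w[k + 1], w[k]) + w[k + 2:]
                    if w2 in remaining:
                        remaining.remove(w2)
                        cls.add(w2)
                        stack.append(w2)
        classes.append(cls)
    return classes

words3 = reduced_words((3, 2, 1))       # longest element of S_3
words4 = reduced_words((4, 3, 2, 1))    # longest element of S_4
classes3 = commutation_classes(words3)
classes4 = commutation_classes(words4)
```

For the longest element of S_3 this produces the 2 reduced words (1, 2, 1) and (2, 1, 2), which form 2 commutation classes; for the longest element of S_4 it produces 16 reduced words falling into 8 commutation classes.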
Definition 5.3.1.
Let w ∈ W. We say that w is freely braided if for every pair of distinct contractible long inversion sets Ω, Ω′ ∈ CInv(w), we have Ω ∩ Ω′ = ∅.

Lemma 5.3.2.
Let Ψ = {γ_1, ..., γ_m} be a contractible long inversion set. Let x be a reduced expression for a freely braided element w ∈ W and let α ∈ Φ(w) be such that α ∉ Ψ. Then the following implications hold:

(1) If T_x(γ_1) < T_x(α) < T_x(γ_2), then α ⊥ γ_1.

(2) If T_x(γ_{m−1}) < T_x(α) < T_x(γ_m), then α ⊥ γ_m.

(3) If m > 3 and T_x(γ_i) < T_x(α) < T_x(γ_{i+1}) for some i satisfying the inequality 1 < i < m − 1, then α ⊥ µ for all µ ∈ Ψ.

Proof. Recall, by Lemmas 3.5.3 and 3.5.5(3), that T_x is monotone with respect to the indices of the elements γ_i of a local root sequence. That is, we either have T_x(γ_1) < ⋯ < T_x(γ_m) or T_x(γ_m) < ⋯ < T_x(γ_1).

We first prove statement (1), so suppose that T_x(γ_1) < T_x(α) < T_x(γ_2). The monotonicity of T_x with respect to Ψ implies that T_x(α) < T_x(γ_i) for i ≥
2. Since Ψ is contractible, there exists a reduced expression x′ for w in which Ψ is contracted. Since Ψ is contracted in x′, we either have T_{x′}(α) < T_{x′}(γ_i) for all 1 ≤ i ≤ m or T_{x′}(γ_i) < T_{x′}(α) for all 1 ≤ i ≤ m.

In the case that T_{x′}(α) < T_{x′}(γ_i) for all 1 ≤ i ≤ m, we have T_x(γ_1) < T_x(α) and T_{x′}(α) < T_{x′}(γ_1). By Corollary 5.2.21, we have α ⊥ γ_1.

In the case that T_{x′}(γ_i) < T_{x′}(α) for all 1 ≤ i ≤ m, we have in particular that T_{x′}(γ_{m−1}) < T_{x′}(α) and T_{x′}(γ_m) < T_{x′}(α). Since T_x(α) < T_x(γ_{m−1}) < T_x(γ_m), Corollary 5.2.21 implies that α ⊥ γ_{m−1} and that α ⊥ γ_m. Thus, in this case, α ⊥ γ_1 by Lemma 5.2.13 and statement (1) follows.

The argument for statement (2) can be obtained by interchanging γ_1 and γ_m in the proof of statement (1), by interchanging γ_2 and γ_{m−1} in the proof of statement (1), and by reversing the inequalities in the proof of statement (1).

Turning to statement (3), suppose that m > 3 and T_x(γ_i) < T_x(α) < T_x(γ_{i+1}) for some i satisfying 1 < i < m −
1. The monotonicity of T_x with respect to Ψ implies that T_x(γ_j) < T_x(α) for 1 ≤ j ≤ i and T_x(α) < T_x(γ_j) for i + 1 ≤ j ≤ m. Since Ψ is contractible, there exists a reduced expression x′ for w such that Ψ is contracted in x′. Since Ψ is contracted in x′, we either have T_{x′}(α) < T_{x′}(γ_j) for all 1 ≤ j ≤ m, or T_{x′}(γ_j) < T_{x′}(α) for all 1 ≤ j ≤ m.

In the case that T_{x′}(α) < T_{x′}(γ_j) for all 1 ≤ j ≤ m, we have in particular that T_{x′}(α) < T_{x′}(γ_1) and T_{x′}(α) < T_{x′}(γ_2). As T_x(α) > T_x(γ_1) and T_x(α) > T_x(γ_2), Corollary 5.2.21 implies that α ⊥ γ_1 and α ⊥ γ_2. Thus α ⊥ µ for all µ ∈ Ψ by Lemma 5.2.13.

In the case that T_{x′}(γ_j) < T_{x′}(α) for all 1 ≤ j ≤ m, we have in particular that T_{x′}(α) > T_{x′}(γ_m) and T_{x′}(α) > T_{x′}(γ_{m−1}). Since T_x(α) < T_x(γ_{m−1}) and T_x(α) < T_x(γ_m), Corollary 5.2.21 implies that α ⊥ γ_{m−1} and that α ⊥ γ_m. By Lemma 5.2.13, α ⊥ µ for all µ ∈ Ψ, proving statement (3).

Recall that sequence multiplication is given by concatenation, so that θ_1 (α) refers to the concatenation of the sequence θ_1 with the length 1 sequence (α).

Lemma 5.3.3.
Let w ∈ W and let x be a reduced expression for w that factors as x = x_1 x_2 (s) x_3 x_4, and let θ(x) = θ_1 θ_2 (α) θ_3 θ_4 be the decomposition of θ(x) respecting x_1 x_2 (s) x_3 x_4. Suppose that α ⊥ θ for every θ ∈ θ_2. Then x′ = x_1 (s) x_2 x_3 x_4 is a reduced expression for w such that x ∼_C x′ and θ(x′) = θ_1 (α) θ_2 θ_3 θ_4 is the decomposition of θ(x′) respecting x_1 (s) x_2 x_3 x_4. Similarly, if α ⊥ θ for every θ ∈ θ_3, then x′ = x_1 x_2 x_3 (s) x_4 is a reduced expression for w such that x ∼_C x′ and θ(x′) = θ_1 θ_2 θ_3 (α) θ_4 is the decomposition of θ(x′) respecting x_1 x_2 x_3 (s) x_4.

Proof. Let θ_2 = (α_1, ..., α_k). Since α ⊥ α_k, Lemma 5.2.12 implies that Ψ = {α, α_k} is an inversion 2-set that is contracted in θ(x). Thus, by Lemma 5.2.8, x_2 = x′_2 (t), where m_{s,t} = 2. Let x′ = x_1 x′_2 (s, t) x_3 x_4 be the reduced expression formed by applying the braid move (t, s) → (s, t). Furthermore, by Lemma 5.2.4, θ(x′) = θ_1 (α_1, ..., α_{k−1}, α, α_k) θ_3 θ_4. Also, since m_{s,t} = 2, we have x′ ∼_C x.

Since α ⊥ α_1, ..., α_{k−1}, we may proceed inductively to obtain a reduced expression y such that θ(y) = θ_1 (α) θ_2 θ_3 θ_4. Furthermore, y ∼_C x.

The statement for the situation in which α ⊥ θ for every θ ∈ θ_3 is proven similarly.

Lemma 5.3.4.
Let w ∈ W be freely braided. Let x be a reduced expression for w that factors as x = x_1 (s) x_2 (t) x_3, and let θ(x) = θ_1 (α) θ_2 (β) θ_3 be the decomposition of θ(x) respecting x_1 (s) x_2 (t) x_3. Suppose that Ψ is a contractible long inversion set containing both α and β, and that Ψ′ ≠ Ψ is a long inversion set that is contracted in x. Then the roots of Ψ′ occur consecutively in exactly one of the subsequences θ_1, θ_2, or θ_3.

Proof. Suppose that Ψ′ has roots in more than one of the subsequences. Since Ψ′ is contracted in x, the roots of Ψ′ occur consecutively in θ(x). It follows that Ψ′ contains α or Ψ′ contains β. Thus Ψ ∩ Ψ′ ≠ ∅, contradicting the assumption that w is freely braided.

Definition 5.3.5.
Let x be a reduced expression for a freely braided element w ∈ W. We say that x is a contracted reduced expression for w if every contractible long inversion set of w is contracted in θ(x).

Lemma 5.3.6.
Let w ∈ W be a freely braided element. Then there exists a contracted reduced expression x for w. Furthermore, every reduced expression for w is commutation equivalent to a contracted reduced expression.

Proof. Let y be a reduced expression for w such that the number of contracted long inversion sets in y is n < N(w). Let Ψ = {γ_1, ..., γ_m} be a contractible long inversion set that is not contracted in y. We suppose without loss of generality that the order in which the roots of Ψ appear in θ(y) is γ_1, ..., γ_m. Since m ≥ 3, γ_2 is a root that is neither γ_1 nor γ_m. Factor y as y = y_1 (s) y_2 (t) y_3 so that the decomposition of θ(y) respecting y_1 (s) y_2 (t) y_3 is given by θ(y) = θ_1 (γ_2) θ_2 (γ_3) θ_3. By Lemma 5.3.2, every root λ between γ_2 and γ_3 in θ(y) is orthogonal to γ_3. Thus, by Lemma 5.3.3, the expression y′ = y_1 (s, t) y_2 y_3 is a reduced expression for w such that y ∼_C y′ and θ(y′) = θ_1 (γ_2)(γ_3) θ_2 θ_3 is the decomposition of θ(y′) respecting y_1 (s, t) y_2 y_3.

Let Ψ′ ≠ Ψ be a long inversion set that is contracted in y. By Lemma 5.3.4, Ψ′ has all of its roots in exactly one of the subsequences θ_1, θ_2, or θ_3. Thus, Ψ′ remains contracted in y′.

We repeat this procedure of shifting γ_{i+1} to the left until it is consecutive with γ_i for all 2 ≤ i ≤ m −
1. This results in a reduced expression y′′ in which γ_2, ..., γ_m are consecutive in θ(y′′), and by the same logic as above, all inversion sets that are contracted in y remain contracted in y′′. Furthermore, y′′ ∼_C y.

With γ_2, ..., γ_m consecutive in θ(y′′), we apply Lemma 5.3.2 to get that every root λ between γ_1 and γ_2 in θ(y′′) is orthogonal to γ_1. Thus we may apply Lemma 5.3.3 to shift γ_1 to the right to obtain a reduced expression y′′′ in which γ_1 is consecutive with γ_2, and where the contracted long inversion sets of y are contracted in y′′′. Furthermore, y′′′ ∼_C y.

The result is that y′′′ has n + 1 long inversion sets that are contracted. By induction, there exists a reduced expression x such that there are N(w) contracted long inversion sets.

Since the only moves required to transform a reduced expression into a contracted reduced expression are commutation moves, the last statement of the lemma follows.

Definition 5.3.7.
Let x be a reduced expression for a Coxeter group element w and let θ(x) be the associated root sequence. Let

R = {(θ_i, θ_j) : i < j and θ_i is not orthogonal to θ_j}.

Then the reflexive and transitive closure of R is a partial order on Φ(w) that we denote by ≤_{θ(x)}.

Remark. Given distinct reduced expressions x and x′ for w, the partial orders ≤_{θ(x)} and ≤_{θ(x′)} are both binary relations on Φ(w). Thus, when we speak of equality of these partial orders in the sequel, we mean that ≤_{θ(x)} and ≤_{θ(x′)}, viewed as subsets of Φ(w) × Φ(w), are equal.

The statement and proof of [12, Proposition 3.1.5], which is formulated in the context of simply-laced Coxeter groups, generalizes exactly. We reproduce the statement and proof here for convenience.

Proposition 5.3.9 (Green and Losonczy). Let w ∈ W. Let x and x′ be reduced expressions for w. Then ≤_{θ(x)} and ≤_{θ(x′)} are equal as partially ordered sets if and only if x ∼_C x′.

Proof. Suppose x and x′ are reduced expressions that differ by a single 2-braid move. Then, by Lemma 5.2.8 and Definition 5.3.7, α ≤_{θ(x)} β if and only if α ≤_{θ(x′)} β. Thus if x and x′ are commutation equivalent, then ≤_{θ(x)} and ≤_{θ(x′)} are equal.

Conversely, suppose ≤_{θ(x)} and ≤_{θ(x′)} are equal as partially ordered sets. Write θ(x) = (θ_1, ..., θ_n) and θ(x′) = (θ′_1, ..., θ′_n). Let π ∈ S_n be the permutation of the indices satisfying θ_i = θ′_{π(i)}. Suppose θ_1 and θ_i are nonorthogonal. Then, since the partial orders are equal, π(1) < π(i), so that the roots occurring before θ_1 in θ(x′) are all orthogonal to θ_1.

Let (θ′_1, ..., θ′_k, θ′_{π(1)}) be the initial subsequence of θ(x′). By Lemma 5.2.8, we can form an expression x′′ by applying a sequence of 2-braid moves starting with x′ such that θ(x′′) = (θ′′_1, ..., θ′′_n) satisfies θ′′_1 = θ_1. Since the sequence consisted of 2-braid moves, we have x′′ ∼_C x′.

Now we factor x = (s_α) y and x′′ = (s_β) y′′.
Let v = φ(y) and v′′ = φ(y′′). Since the root sequences θ(y) and θ(y′′) have the same entries, v = v′′ by Proposition 2.1.1 parts (7) and (8). Similarly, we have s_α v = s_β v′′, so that s_α = s_β. By induction, since the partial orders ≤_{θ(y)} and ≤_{θ(y′′)} are equal, we have y ∼_C y′′. By Lemma 5.2.17, we now have x ∼_C x′′. It follows that x ∼_C x′′ ∼_C x′.

Following [12], we denote the number of contractible long inversion sets of w by N(w). One result of [12] and [13] we wish to generalize is that an element w ∈ W is freely braided if and only if the number of commutation classes of w is 2^{N(w)}. Towards this end we introduce a map from the set of reduced expressions for w to the set of 0-1 states indexed by the contractible long inversion sets of w, which will induce an injective map on commutation classes.

Definition 5.3.10.
Let w ∈ W and let x be a fixed reduced expression for w with standard encoding T_x. Let R(w) denote the set of reduced expressions for w. We define a map F_x : R(w) → {0, 1}^{CInv(w)} by setting F_x(x′)(Ω) = 0 if the roots of Ω occur in the same relative ordering in T_x as in T_{x′}, and F_x(x′)(Ω) = 1 otherwise.

Lemma 5.3.11.
Let w ∈ W and x be a fixed reduced expression for w. Let y and y′ be reduced expressions for w. If y ∼_C y′, then F_x(y) = F_x(y′).

Proof. Suppose y and y′ differ by a 2-braid move. By Lemma 5.2.4, there exist α, β ∈ Φ(w) and a label k such that T_y(α) = k, T_y(β) = k + 1, T_{y′}(α) = k + 1, and T_{y′}(β) = k. Since y and y′ differ by only a 2-braid move, T_y(γ) = T_{y′}(γ) whenever γ is neither α nor β. By Lemma 5.2.8, {α, β} is an inversion 2-set. Thus if Ω ∈ CInv(w), then Ω ∩ {α, β} ≠ {α, β} by Lemma 3.4.3. It follows that Ω is in the same relative ordering in T_y as in T_{y′}. Hence, F_x(y)(Ω) = F_x(y′)(Ω) for any Ω ∈ CInv(w). Repeatedly applying 2-braid moves gives the same result, so y ∼_C y′ implies F_x(y) = F_x(y′).

We now prove the converse of the previous assertion.

Lemma 5.3.12.
Let w ∈ W and x be a fixed reduced expression for w. Let y and y′ be reduced expressions for w. If F_x(y) = F_x(y′), then y ∼_C y′.

Proof. Let α and β be distinct nonorthogonal roots of Φ(w). Without loss of generality, suppose that α <_{θ(y)} β. By Corollary 3.4.4, α and β lie in some inversion set Ψ, and since α and β are nonorthogonal, Ψ is a long inversion set. If Ψ is contractible, then F_x(y)(Ψ) = F_x(y′)(Ψ), so α and β occur in the same relative order in θ(y) as they do in θ(y′); if Ψ is not contractible, the same conclusion follows from Lemma 5.2.20. It follows that ≤_{θ(y)} and ≤_{θ(y′)} are the same partial order. By Proposition 5.3.9, we have y ∼_C y′.

Corollary 5.3.13.
The map F_x induces an injective mapping F′_x : C(w) → {0, 1}^{CInv(w)}.

Proof.
By Lemma 5.3.11, the map F_x is well-defined on the commutation classes of w. By Lemma 5.3.12, the induced map F′_x is injective.

Theorem 5.3.14.
Let w ∈ W. Then w is freely braided if and only if the number of commutation classes of w is 2^{N(w)}.

Proof. Let w be freely braided and x be a reduced expression for w. Since w is freely braided, Lemma 5.3.6 implies that there is a contracted reduced expression y for w. By Lemma 5.2.4, applying a braid move to a contracted long inversion set Ψ results in a contracted reduced expression y′ for w such that the roots of Ψ in θ(y′) occur in the reverse order of those in θ(y). Thus, for each contracted long inversion set, we may specify that it be in either order and find a sequence of braid moves that transforms y into a contracted reduced expression with each contracted inversion set in the prescribed order. Thus, F_x is surjective. By Corollary 5.3.13, we have that |C(w)| = 2^{N(w)}.

Conversely, if w is not freely braided, then there exist distinct contractible long inversion sets Ψ and Ψ′ that intersect in a single root δ. By Lemma 5.2.14, there exist α, β ∈ Φ⁺, neither of which is δ, such that α ∈ Ψ, β ∈ Ψ′, and B(α, β) ≠ 0. By Corollary 3.4.4, there is an inversion set Ψ′′ containing α and β, but not δ.

If Ψ′′ is not contractible, then by Lemma 5.2.4, α is before β in every root sequence or vice versa. Suppose without loss of generality that T_x(α) > T_x(β) for all reduced expressions x for w. If there were 2^{N(w)} commutation classes for w, then there would be a commutation class representative y such that Ψ was in the relative ordering where T_y(α) < T_y(δ) and Ψ′ was in the relative ordering where T_y(δ) < T_y(β). This would then imply that T_y(α) < T_y(δ) < T_y(β) < T_y(α), a contradiction.

If instead the set Ψ′′ is a contractible inversion set of Φ(w), and there are 2^{N(w)} commutation classes, then there is a labeling T of Φ(w) such that the relative ordering of Ψ implies T(α) < T(δ), that of Ψ′ implies T(δ) < T(β), and that of Ψ′′ implies T(β) < T(α).
However, this relative ordering cannot exist, for it implies T(α) < T(α).

Bibliography

[1] A. Björner. Orderings of Coxeter groups. Contemporary Math., 34:175–195, 1984.
[2] A. Björner and F. Brenti. Combinatorics of Coxeter Groups. Springer, New York, NY, 2005.
[3] N. Bourbaki. Groupes et Algèbres de Lie, Chapitres IV–VI. Masson, Paris, 1981.
[4] V.V. Deodhar. A note on subgroups generated by reflections in Coxeter groups. Arch. Math., 53:543–546, 1989.
[5] M. Dyer. Reflection subgroups of Coxeter systems. J. Algebra, 135:57–73, 1990.
[6] M. Dyer. On the "Bruhat graph" of a Coxeter system. Comp. Math., 78:185–191, 1991.
[7] M. Dyer. Hecke algebras and shellings of Bruhat intervals. Comp. Math., 89:91–115, 1993.
[8] P. Edelman and C. Greene. Balanced tableaux. Advances in Mathematics, 63:42–99, 1987.
[9] C.K. Fan. Schubert varieties and short braidedness. Transformation Groups, 3(1):51–56, 1998.
[10] C.K. Fan and J.R. Stembridge. Nilpotent orbits and commutative elements. J. Algebra, 196:490–498, 1997.
[11] S. Fomin, C. Greene, V. Reiner, and M. Shimozono. Balanced labellings and Schubert polynomials. Europ. J. Combinatorics, 18:373–389, 1997.
[12] R.M. Green and J. Losonczy. Freely braided elements in Coxeter groups. Ann. Comb., 6:337–348, 2002.
[13] R.M. Green and J. Losonczy. Freely braided elements in Coxeter groups, II. Adv. in Appl. Math., 33:26–39, 2004.
[14] R.M. Green and J. Losonczy. Schubert varieties and free braidedness. Transformation Groups, 9(4):327–336, 2004.
[15] J. Gross and J. Yellen. Handbook of Graph Theory. CRC Press, Boca Raton, FL, 2004.
[16] J. Gross and J. Yellen. Graph Theory and Its Applications. Chapman and Hall/CRC, Boca Raton, FL, 2006.
[17] M. Hagiwara, M. Ishikawa, and H. Tagawa. A characterization of the simply-laced FC-finite Coxeter groups. Ann. Comb., 8:177–196, 2004.
[18] D.C. Handscomb and J.C. Mason. Chebyshev Polynomials. CRC Press, Boca Raton, FL, 2003.
[19] J.E. Humphreys. Reflection Groups and Coxeter Groups. Cambridge University Press, Cambridge, 1990.
[20] W. Kraśkiewicz. Reduced decompositions in Weyl groups. Europ. J. Combinatorics, 16:293–313, 1995.
[21] H. Matsumoto. Générateurs et relations des groupes de Weyl généralisés. C. R. Acad. Sci. Paris, 258:3419–3422, 1964.
[22] J.R. Stembridge. On the fully commutative elements of Coxeter groups. J. Algebraic Combin., 5:353–385, 1996.
[23] J. Tits. Le problème des mots dans les groupes de Coxeter. Ist. Naz. Alta Mat. (1968), Sympos. Math., 1:175–185, 1969.
[24] J. Tits. Buildings of Spherical Type and Finite BN-Pairs, volume 386 of Lecture Notes in Math. Springer-Verlag, New York, 1974.
[25] G. Udrea. A problem of Diophantos–Fermat and Chebyshev polynomials of the second kind.