Structure of Thin Irreducible Modules of a Q-polynomial Distance-Regular Graph
Diana R. Cerzo*

Abstract
Let Γ be a Q-polynomial distance-regular graph with vertex set X and diameter D ≥ 3. Fix x ∈ X and let A* = A*(x) be the corresponding dual adjacency matrix. Recall that the Terwilliger algebra T = T(x) is the subalgebra of Mat_X(C) generated by A and A*. Let W denote a thin irreducible T-module. It is known that the action of A and A* on W induces a linear algebraic object known as a Leonard pair. Over the past decade, many results have been obtained concerning Leonard pairs. In this paper, we apply these results to obtain a detailed description of W. In our description, we do not assume that the reader is familiar with Leonard pairs. Everything will be proved from the point of view of Γ.

Our results are summarized as follows. Let {E_i}_{i=0}^D be a Q-polynomial ordering of the primitive idempotents of Γ and let {E*_i}_{i=0}^D be the dual primitive idempotents of Γ with respect to x. Let r, t and d be the endpoint, dual endpoint and diameter of W, respectively. Let u and v be nonzero vectors in E_t W and E*_r W, respectively. We show that {E*_{r+i} A^i v}_{i=0}^d and {E_{t+i} (A*)^i u}_{i=0}^d are bases for W that are orthogonal with respect to the standard Hermitian dot product. We display the matrix representations of A and A* with respect to these bases. We associate with W two sequences of polynomials {p_i}_{i=0}^d and {p*_i}_{i=0}^d. We show that for 0 ≤ i ≤ d, p_i(A)v = E*_{r+i} A^i v and p*_i(A*)u = E_{t+i} (A*)^i u. Next, we show that {E*_{r+i} u}_{i=0}^d and {E_{t+i} v}_{i=0}^d are orthogonal bases for W; we call these the standard basis and dual standard basis for W, respectively. We display the matrix representations of A and A* with respect to these bases. The entries in these matrices will play an important role in our theory. We call these the intersection numbers and dual intersection numbers of W. Using these numbers, we compute all inner products involving the standard and dual standard bases.
We also use these numbers to define two normalizations u_i, v_i (resp. u*_i, v*_i) of p_i (resp. p*_i). Using the orthogonality of the standard and dual standard bases, we show that for each of the sequences {p_i}_{i=0}^d, {p*_i}_{i=0}^d, {u_i}_{i=0}^d, {u*_i}_{i=0}^d, {v_i}_{i=0}^d, {v*_i}_{i=0}^d the polynomials involved are orthogonal, and we display the orthogonality relations. We also show that each of these sequences satisfies a three-term recurrence and a relation known as Askey-Wilson duality. We then turn our attention to two more bases for W. We find the matrix representations of A and A* with respect to these bases. From the entries of these matrices we obtain two sequences of scalars known as the first split sequence and second split sequence of W. We associate with W a sequence of scalars called the parameter array. This sequence consists of the eigenvalues of the restriction of A to W, the eigenvalues of the restriction of A* to W, the first split sequence of W and the second split sequence of W. We express all the scalars and polynomials associated with W in terms of its parameter array. We show that the parameter array of W is determined by r, t, d and one more free parameter. From this we conclude that the isomorphism class of W is determined by these four parameters. Finally, we apply our results to the case in which Γ has q-Racah type or classical parameters.

1 Introduction

The Terwilliger algebra T of a distance-regular graph was first introduced in [9]. This algebra has been used extensively to study the Q-polynomial property [5, 6, 8]. In this paper, we continue this study, focusing on the structure of thin irreducible T-modules.

Let Γ be a Q-polynomial distance-regular graph with vertex set X, diameter D ≥ 3, and adjacency matrix A (see Section 2 for formal definitions). Fix x ∈ X and let A* = A*(x) be the corresponding dual adjacency matrix.
Recall that the Terwilliger algebra T = T(x) is the subalgebra of Mat_X(C) generated by A and A*. (*Institute of Mathematics, University of the Philippines, Diliman, Quezon City, Philippines.) Let W be a thin irreducible T-module. It is known that the action of A and A* on W induces a linear algebraic object called a Leonard pair; this notion was first introduced by Terwilliger in [12]. The theory of Leonard pairs has been developed over the past decade. We apply these results to obtain a detailed description of W. In our description, we do not assume that the reader is familiar with Leonard pairs. The results will be proved from the point of view of Γ.

Our results are summarized as follows. Let {E_i}_{i=0}^D be a Q-polynomial ordering of the primitive idempotents of Γ and let {E*_i}_{i=0}^D be the dual primitive idempotents of Γ with respect to x. Let r, t and d be the endpoint, dual endpoint and diameter of W, respectively. Let u and v be nonzero vectors in E_t W and E*_r W, respectively. We show that {E*_{r+i} A^i v}_{i=0}^d and {E_{t+i} (A*)^i u}_{i=0}^d are bases for W that are orthogonal with respect to the standard Hermitian dot product. We display the matrix representations of A and A* with respect to these bases. We associate with W two sequences of polynomials {p_i}_{i=0}^d and {p*_i}_{i=0}^d. We show that for 0 ≤ i ≤ d, p_i(A)v = E*_{r+i} A^i v and p*_i(A*)u = E_{t+i} (A*)^i u. Next, we show that {E*_{r+i} u}_{i=0}^d and {E_{t+i} v}_{i=0}^d are orthogonal bases for W; we call these the standard basis and dual standard basis for W, respectively. We display the matrix representations of A and A* with respect to these bases. The entries in these matrices will play an important role in our theory. We call these the intersection numbers and dual intersection numbers of W. Using these numbers, we compute all inner products involving the standard and dual standard bases. We also use these numbers to define two normalizations u_i, v_i (resp.
u*_i, v*_i) of p_i (resp. p*_i). Using the orthogonality of the standard and dual standard bases, we show that for each of the sequences {p_i}_{i=0}^d, {p*_i}_{i=0}^d, {u_i}_{i=0}^d, {u*_i}_{i=0}^d, {v_i}_{i=0}^d, {v*_i}_{i=0}^d the polynomials involved are orthogonal, and we display the orthogonality relations. We also show that each of these sequences satisfies a three-term recurrence and a relation known as Askey-Wilson duality. We then turn our attention to two more bases for W. We find the matrix representations of A and A* with respect to these bases. From the entries of these matrices we obtain two sequences of scalars known as the first split sequence and second split sequence of W. We associate with W a sequence of scalars called the parameter array. This sequence consists of the eigenvalues of the restriction of A to W, the eigenvalues of the restriction of A* to W, the first split sequence of W and the second split sequence of W. We express all the scalars and polynomials associated with W in terms of its parameter array. We show that the parameter array of W is determined by r, t, d and one more free parameter. From this we conclude that the isomorphism class of W is determined by these four parameters. Finally, we apply our results to the case in which Γ has q-Racah type or classical parameters.

2 Preliminaries

In this section, we recall some basic concepts concerning Q-polynomial distance-regular graphs. For more background information, see [2] and [4].

Let X be a non-empty finite set. Let Mat_X(C) denote the C-algebra of matrices whose rows and columns are indexed by X and whose entries are in C. We let I (resp. J) denote the identity matrix (resp. all-ones matrix) in Mat_X(C). Let V = C^X be the vector space over C consisting of column vectors whose coordinates are indexed by X and whose entries are in C. Observe that Mat_X(C) acts on V by left multiplication.
For u, v ∈ V, define ⟨u, v⟩ := u^t v̄, where u^t denotes the transpose of u and v̄ the entrywise complex conjugate of v. Observe that ⟨ , ⟩ is a positive definite Hermitian form on V. Note that ⟨Bu, v⟩ = ⟨u, B̄^t v⟩ for all B ∈ Mat_X(C) and u, v ∈ V. For y ∈ X, let ŷ denote the element of V with a 1 in the y coordinate and 0 in all other coordinates. Observe that {ŷ | y ∈ X} is an orthonormal basis for V.

Let Γ = (
X, R) be a finite, undirected, connected graph, without loops or multiple edges, with vertex set X and edge set R. Let ∂ denote the path-length distance function for Γ. Set D = max{∂(x, y) | x, y ∈ X}. We refer to D as the diameter of Γ. For x ∈ X and an integer i ≥ 0, let Γ_i(x) = {y | y ∈ X, ∂(x, y) = i}. Abbreviate Γ(x) := Γ_1(x). For an integer k ≥ 0, we say that Γ is regular with valency k whenever k = |Γ(x)| for all x ∈ X. We say that Γ is distance-regular whenever there exist scalars p^h_{ij} (0 ≤ h, i, j ≤ D) such that p^h_{ij} = |Γ_i(x) ∩ Γ_j(y)| for all x, y ∈ X with ∂(x, y) = h. We refer to the p^h_{ij} as the intersection numbers of Γ. For the rest of this paper, assume that Γ is distance-regular with diameter D ≥ 3. Note that by the triangle inequality, we have (i) p^h_{ij} = 0 if one of h, i, j is greater than the sum of the other two; (ii) p^h_{ij} ≠ 0 if one of h, i, j is equal to the sum of the other two. We abbreviate

    c_i := p^i_{1,i-1} (1 ≤ i ≤ D),   a_i := p^i_{1i} (0 ≤ i ≤ D),   b_i := p^i_{1,i+1} (0 ≤ i ≤ D − 1),

and for notational convenience set b_D = 0, c_0 = 0. Observe that Γ is regular with valency k = b_0. To avoid trivialities, we always assume that k ≥ 3. Note that c_i + a_i + b_i = k (0 ≤ i ≤ D). For 0 ≤ i ≤ D, let k_i = p^0_{ii}. Observe that k_i = |Γ_i(x)| for all x ∈ X. By [2, p. 195],

    k_i = (b_0 b_1 · · · b_{i−1}) / (c_1 c_2 · · · c_i)   (0 ≤ i ≤ D).   (1)

We refer to k_i as the ith valency of Γ.

We now recall the Bose-Mesner algebra of Γ. For 0 ≤ i ≤ D, define A_i ∈ Mat_X(C) to have (x, y)-entry equal to 1 if ∂(x, y) = i, and 0 otherwise. We refer to A_i as the ith distance matrix of Γ. Note that (i) A_0 = I; (ii) Σ_{i=0}^D A_i = J; (iii) A_i^t = A_i (0 ≤ i ≤ D); (iv) A_i A_j = Σ_{h=0}^D p^h_{ij} A_h (0 ≤ i, j ≤ D). Observe that {A_i}_{i=0}^D are linearly independent. Thus, they form a basis for a subalgebra M of Mat_X(C); M is called the Bose-Mesner algebra of Γ. Abbreviate A := A_1 and call this the adjacency matrix of Γ. By [2, p. 190], M is generated by A. By [2, p. 59], M has a second basis {E_i}_{i=0}^D which satisfies the following: (i) E_0 = |X|^{−1} J; (ii) Σ_{i=0}^D E_i = I; (iii) E_i^t = E_i = Ē_i (0 ≤ i ≤ D); (iv) E_i E_j = δ_{ij} E_i (0 ≤ i, j ≤ D). For notational convenience, define E_{−1} = 0, E_{D+1} = 0. For 0 ≤ i ≤ D, let m_i denote the rank of E_i; we call m_i the multiplicity of Γ associated with E_i. Since {E_i}_{i=0}^D is a basis for M, there exist complex scalars {θ_i}_{i=0}^D such that A = Σ_{i=0}^D θ_i E_i. Note that for 0 ≤ i ≤ D, AE_i = E_i A = θ_i E_i. Thus, E_i V is an eigenspace for A, and θ_i is the corresponding eigenvalue. Since A is symmetric, θ_i ∈ R. Since A generates M, the {θ_i}_{i=0}^D are mutually distinct. Note that

    V = Σ_{i=0}^D E_i V   (orthogonal direct sum),   (2)

and that

    E_i = Π_{0 ≤ j ≤ D, j ≠ i} (A − θ_j I)/(θ_i − θ_j)   (0 ≤ i ≤ D).   (3)

We call θ_i the eigenvalue of Γ associated with E_i.

We now recall the Krein parameters of Γ. Observe that A_i ◦ A_j = δ_{ij} A_i for 0 ≤ i, j ≤ D, where ◦ denotes entrywise multiplication. Thus, M is closed under ◦. Consequently, there exist complex scalars q^h_{ij} (0 ≤ h, i, j ≤ D) such that

    E_i ◦ E_j = |X|^{−1} Σ_{h=0}^D q^h_{ij} E_h   (0 ≤ i, j ≤ D).

The q^h_{ij} are known as the Krein parameters or dual intersection numbers of Γ. By [2, p. 69], the q^h_{ij} are real and nonnegative.

We now consider the Q-polynomial property. The graph Γ is said to be Q-polynomial (with respect to the given ordering {E_i}_{i=0}^D of the primitive idempotents) whenever both: (i) q^h_{ij} = 0 if one of h, i, j is greater than the sum of the other two; (ii) q^h_{ij} ≠ 0 if one of h, i, j is equal to the sum of the other two. For the rest of this paper, we assume that Γ is Q-polynomial with respect to {E_i}_{i=0}^D. We abbreviate

    c*_i := q^i_{1,i−1} (1 ≤ i ≤ D),   a*_i := q^i_{1i} (0 ≤ i ≤ D),   b*_i := q^i_{1,i+1} (0 ≤ i ≤ D − 1),

and for notational convenience set b*_D = 0, c*_0 = 0. By [2, p. 67], m_i = q^0_{ii} (0 ≤ i ≤ D).
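As a concrete illustration of the machinery so far (distance matrices, intersection numbers, primitive idempotents), here is a small numerical sketch. It uses the 7-cycle C7, a distance-regular graph with D = 3; the graph and all variable names are our own illustrative choices, not objects from the paper.

```python
# Toy check of the Bose-Mesner algebra identities on the 7-cycle C7.
import numpy as np
from itertools import product
from functools import reduce

n, D = 7, 3
A = np.zeros((n, n))
for y in range(n):
    A[y, (y + 1) % n] = A[y, (y - 1) % n] = 1.0    # adjacency matrix of C7

dist = lambda y, z: min((y - z) % n, (z - y) % n)   # path-length distance

# distance matrices A_0, ..., A_D
Ai = [np.array([[1.0 if dist(y, z) == i else 0.0 for z in range(n)]
                for y in range(n)]) for i in range(D + 1)]
assert np.array_equal(sum(Ai), np.ones((n, n)))     # sum_i A_i = J

# intersection numbers p^h_{ij} = |Gamma_i(x) cap Gamma_j(y)| with d(x,y) = h;
# on a cycle we may take x = 0 and y = h
p = {(h, i, j): sum(1 for z in range(n) if dist(0, z) == i and dist(h, z) == j)
     for h, i, j in product(range(D + 1), repeat=3)}
for i, j in product(range(D + 1), repeat=2):        # A_i A_j = sum_h p^h_{ij} A_h
    assert np.allclose(Ai[i] @ Ai[j],
                       sum(p[(h, i, j)] * Ai[h] for h in range(D + 1)))

# primitive idempotents via (3): E_i = prod_{j != i} (A - theta_j I)/(theta_i - theta_j)
theta = sorted(set(np.round(np.linalg.eigvalsh(A), 9)), reverse=True)
Ei = [reduce(np.matmul, [(A - t * np.eye(n)) / (th - t) for t in theta if t != th])
      for th in theta]
assert len(Ei) == D + 1
assert np.allclose(sum(Ei), np.eye(n))              # sum_i E_i = I
assert np.allclose(Ei[0], np.ones((n, n)) / n)      # E_0 = |X|^{-1} J
for i, j in product(range(D + 1), repeat=2):
    assert np.allclose(Ei[i] @ Ei[j], (i == j) * Ei[i])   # E_i E_j = delta_ij E_i
```

All assertions pass; for instance A_1 A_1 = 2A_0 + A_2 here, matching p^0_{11} = k = 2, p^1_{11} = 0 and p^2_{11} = 1 for C7.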
By [2, p. 196],

    m_i = (b*_0 b*_1 · · · b*_{i−1}) / (c*_1 c*_2 · · · c*_i)   (0 ≤ i ≤ D).   (4)

We now recall the dual Bose-Mesner algebra of Γ. For the rest of this paper, fix x ∈ X. For 0 ≤ i ≤ D, define E*_i = E*_i(x) to be the diagonal matrix in Mat_X(C) with (y, y)-entry

    (E*_i)_yy = 1 if ∂(x, y) = i, and 0 otherwise   (y ∈ X).   (5)

We refer to E*_i as the ith dual primitive idempotent of Γ with respect to x. For notational convenience, define E*_{−1} = 0, E*_{D+1} = 0. Note that (i) Σ_{i=0}^D E*_i = I; (ii) E*_i^t = E*_i = Ē*_i (0 ≤ i ≤ D); (iii) E*_i E*_j = δ_{ij} E*_i (0 ≤ i, j ≤ D). Observe that {E*_i}_{i=0}^D are linearly independent. Thus, they form a basis for a commutative subalgebra M* = M*(x) of Mat_X(C); M* is called the dual Bose-Mesner algebra of Γ with respect to x. For 0 ≤ i ≤ D, define A*_i = A*_i(x) to be the diagonal matrix in Mat_X(C) such that (A*_i)_yy = |X|(E_i)_xy for y ∈ X. By [9, p. 379], {A*_i}_{i=0}^D is a basis for M* and satisfies the following properties: (i) A*_0 = I; (ii) Σ_{i=0}^D A*_i = |X| E*_0; (iii) A*_i^t = A*_i = Ā*_i (0 ≤ i ≤ D); (iv) A*_i A*_j = Σ_{h=0}^D q^h_{ij} A*_h (0 ≤ i, j ≤ D). We refer to A*_i as the ith dual distance matrix of Γ with respect to x. Abbreviate A* := A*_1 and call this the dual adjacency matrix of Γ with respect to x. By [9, Lemma 3.11], M* is generated by A*. Since {E*_i}_{i=0}^D is a basis for M*, there exist complex scalars {θ*_i}_{i=0}^D such that A* = Σ_{i=0}^D θ*_i E*_i. Note that for 0 ≤ i ≤ D, A* E*_i = E*_i A* = θ*_i E*_i. Since A* is real, θ*_i ∈ R. Since A* generates M*, the {θ*_i}_{i=0}^D are mutually distinct. Observe that

    E*_i V = Span{ŷ | y ∈ X, ∂(x, y) = i}   (0 ≤ i ≤ D).

Moreover,

    V = Σ_{i=0}^D E*_i V   (orthogonal direct sum)   (6)

and

    E*_i = Π_{0 ≤ j ≤ D, j ≠ i} (A* − θ*_j I)/(θ*_i − θ*_j)   (0 ≤ i ≤ D).   (7)

We call θ*_i the dual eigenvalue of Γ associated with E*_i.

We now recall the Terwilliger algebra of Γ.
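Before doing so, the dual objects just defined can be checked numerically in the same toy setting (the 7-cycle, base vertex x = 0). This is an illustrative sketch only; the eigenvalue ordering used below is simply by decreasing eigenvalue, which need not be a Q-polynomial ordering, so only ordering-independent identities are tested.

```python
# Toy check of the dual idempotents E*_i and dual distance matrices A*_i on C7.
import numpy as np
from functools import reduce

n, D, x = 7, 3, 0
A = np.zeros((n, n))
for y in range(n):
    A[y, (y + 1) % n] = A[y, (y - 1) % n] = 1.0
dist = lambda y, z: min((y - z) % n, (z - y) % n)

# dual primitive idempotents (5): E*_i projects onto coordinates at distance i from x
Estar = [np.diag([1.0 if dist(x, y) == i else 0.0 for y in range(n)])
         for i in range(D + 1)]
assert np.allclose(sum(Estar), np.eye(n))            # sum_i E*_i = I

# A acts tridiagonally on the distance partition: E*_i A E*_j = 0 if |i - j| > 1
for i in range(D + 1):
    for j in range(D + 1):
        if abs(i - j) > 1:
            assert np.allclose(Estar[i] @ A @ Estar[j], 0)

# primitive idempotents E_i of A, as in the previous sketch
theta = sorted(set(np.round(np.linalg.eigvalsh(A), 9)), reverse=True)
Ei = [reduce(np.matmul, [(A - t * np.eye(n)) / (th - t) for t in theta if t != th])
      for th in theta]

# dual distance matrices: (A*_i)_yy = |X| (E_i)_{xy}
Astar = [np.diag(n * Ei[i][x]) for i in range(D + 1)]
assert np.allclose(Astar[0], np.eye(n))              # A*_0 = I
assert np.allclose(sum(Astar), n * Estar[0])         # sum_i A*_i = |X| E*_0
```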
Let T = T(x) denote the subalgebra of Mat_X(C) generated by M and M*. We refer to T as the Terwilliger algebra of Γ with respect to x. Observe that T is generated by A, A*. Moreover, T is semi-simple. By [9, Lemma 3.2],

    E*_i A_h E*_j = 0 if and only if p^h_{ij} = 0   (0 ≤ h, i, j ≤ D),   (8)
    E_i A*_h E_j = 0 if and only if q^h_{ij} = 0   (0 ≤ h, i, j ≤ D).   (9)

It follows from (8) and (9) that

    A E*_i V ⊆ E*_{i−1} V + E*_i V + E*_{i+1} V   (0 ≤ i ≤ D),
    A* E_i V ⊆ E_{i−1} V + E_i V + E_{i+1} V   (0 ≤ i ≤ D).

Moreover,

    E*_i A^h E*_j = 0 if h < |i − j|, and E*_i A^h E*_j ≠ 0 if h = |i − j|   (0 ≤ h, i, j ≤ D),   (10)
    E_i (A*)^h E_j = 0 if h < |i − j|, and E_i (A*)^h E_j ≠ 0 if h = |i − j|   (0 ≤ h, i, j ≤ D).   (11)

Lemma 2.1 For 0 ≤ i, j, k, l ≤ D with i + j = |k − l|,

    E*_l A^{i+j} E*_k = E*_l A^i E*_{l+i} A^j E*_k if i + j = k − l,
    E*_l A^{i+j} E*_k = E*_l A^i E*_{k+j} A^j E*_k if i + j = l − k.

Proof: In E*_l A^{i+j} E*_k, write A^{i+j} as A^i I A^j with I = Σ_{m=0}^D E*_m. Evaluate the result using (10). ✷

Lemma 2.2 For 0 ≤ i, j, k, l ≤ D with i + j = |k − l|,

    E_l (A*)^{i+j} E_k = E_l (A*)^i E_{l+i} (A*)^j E_k if i + j = k − l,
    E_l (A*)^{i+j} E_k = E_l (A*)^i E_{k+j} (A*)^j E_k if i + j = l − k.

Proof:
Similar to the proof of Lemma 2.1. ✷

3 T-modules

In this section, we recall some basic facts concerning the T-modules of Γ.

Let W be a subspace of V. We say that W is a T-module whenever TW ⊆ W. Note that V is a T-module. We refer to V as the standard module. Let W and W′ be T-modules. By a T-module isomorphism from W to W′, we mean a vector space isomorphism σ : W → W′ such that (σB − Bσ)W = 0 for all B ∈ T. If such a map exists, we say that W and W′ are isomorphic as T-modules. A T-module W is said to be irreducible whenever W ≠ 0 and W contains no T-modules other than 0 and W. W is said to be thin whenever dim E*_i W ≤ 1 for 0 ≤ i ≤ D. Similarly, W is said to be dual thin whenever dim E_i W ≤ 1 for 0 ≤ i ≤ D.

We now recall the notions of endpoint, dual endpoint, diameter and dual diameter. Observe that W = Σ E*_i W (orthogonal direct sum), where the sum is taken over all indices i (0 ≤ i ≤ D) such that E*_i W ≠ 0. Similarly, W = Σ E_i W (orthogonal direct sum), where the sum is taken over all indices i (0 ≤ i ≤ D) such that E_i W ≠ 0. Let r = min{i | 0 ≤ i ≤ D, E*_i W ≠ 0} and t = min{i | 0 ≤ i ≤ D, E_i W ≠ 0}. We call r and t the endpoint and dual endpoint of W, respectively. Let d = |{i | 0 ≤ i ≤ D, E*_i W ≠ 0}| − 1 and d* = |{i | 0 ≤ i ≤ D, E_i W ≠ 0}| − 1. We refer to d and d* as the diameter and dual diameter of W, respectively.

Lemma 3.1 [9, Lemma 3.9]
Let W be an irreducible T-module with endpoint r, dual endpoint t, diameter d and dual diameter d*. Then (i)–(v) below hold.

(i) A E*_i W ⊆ E*_{i−1} W + E*_i W + E*_{i+1} W (0 ≤ i ≤ D).
(ii) E*_i W ≠ 0 if and only if r ≤ i ≤ r + d (0 ≤ i ≤ D).
(iii) E*_i A E*_j W ≠ 0 if |i − j| = 1 (r ≤ i, j ≤ r + d).
(iv) W = Σ_{i=0}^d E*_{r+i} W (orthogonal direct sum).
(v) Suppose W is thin. Then E_i W = E_i E*_r W for 0 ≤ i ≤ D. Moreover, W is dual thin and d = d*.

Lemma 3.2 [9, Lemma 3.12]
Let W be as in Lemma 3.1. Then (i)–(v) below hold.

(i) A* E_i W ⊆ E_{i−1} W + E_i W + E_{i+1} W (0 ≤ i ≤ D).
(ii) E_i W ≠ 0 if and only if t ≤ i ≤ t + d* (0 ≤ i ≤ D).
(iii) E_i A* E_j W ≠ 0 if |i − j| = 1 (t ≤ i, j ≤ t + d*).
(iv) W = Σ_{i=0}^{d*} E_{t+i} W (orthogonal direct sum).
(v) Suppose W is dual thin. Then E*_i W = E*_i E_t W for 0 ≤ i ≤ D. Moreover, W is thin and d* = d.

Lemma 3.3 [9, Lemma 3.6]
There exists a unique irreducible T-module with endpoint 0, dual endpoint 0 and diameter D. Moreover, it is thin and dual thin. We refer to this module as the trivial T-module.

For the rest of this paper, we will make the following assumption on W.

Assumption 3.4 From now on, W will denote a thin irreducible T-module with endpoint r, dual endpoint t and diameter d. Unless otherwise stated, we assume that d > 0.

4 The algebra End(W)

With reference to Assumption 3.4, let End(W) = End_C(W) denote the C-algebra of all C-linear transformations from W to W. In this section, we will look at bases and generators of End(W). We begin with two lemmas whose proofs are routine and left to the reader.

Lemma 4.1
For 0 ≤ i ≤ d, let w_i be a nonzero vector in E*_{r+i} W. Note that {w_i}_{i=0}^d is a basis for W. With respect to this basis,

(i) the matrix representation of E*_{r+i} has (i, i)-entry 1 and all other entries 0 (0 ≤ i ≤ d);
(ii) the matrix representation of A* is diag(θ*_r, θ*_{r+1}, . . . , θ*_{r+d});
(iii) the matrix representation of A is tridiagonal with each entry nonzero on the superdiagonal and subdiagonal.

Lemma 4.2
For 0 ≤ i ≤ d, let w*_i be a nonzero vector in E_{t+i} W. Note that {w*_i}_{i=0}^d is a basis for W. With respect to this basis,

(i) the matrix representation of E_{t+i} has (i, i)-entry 1 and all other entries 0 (0 ≤ i ≤ d);
(ii) the matrix representation of A is diag(θ_t, θ_{t+1}, . . . , θ_{t+d});
(iii) the matrix representation of A* is tridiagonal with each entry nonzero on the superdiagonal and subdiagonal.

Definition 4.3
We refer to the sequence {θ_{t+i}}_{i=0}^d (resp. {θ*_{r+i}}_{i=0}^d) as the eigenvalue sequence (resp. dual eigenvalue sequence) of W.

Lemma 4.4 On W,

    Π_{i=0}^d (A − θ_{t+i} I) = 0,   Π_{i=0}^d (A* − θ*_{r+i} I) = 0.

Proof:
Immediate from Lemmas 4.1(ii) and 4.2(ii). ✷ Lemma 4.5
Let B (resp. B*) denote the matrix representation of A (resp. A*) with respect to the basis given in Lemma 4.1 (resp. Lemma 4.2). Then

    (B^h)_{ij} = 0 if h < |i − j|, and (B^h)_{ij} ≠ 0 if h = |i − j|   (0 ≤ h, i, j ≤ d),
    ((B*)^h)_{ij} = 0 if h < |i − j|, and ((B*)^h)_{ij} ≠ 0 if h = |i − j|   (0 ≤ h, i, j ≤ d).

Proof:
Routine using Lemmas 4.1(iii) and 4.2(iii). ✷

Using Lemma 4.5, we obtain the strengthening of (10) and (11) given in Lemma 4.6 below.
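As a quick sanity check, the band structure asserted in Lemma 4.5 can be verified numerically for a randomly chosen irreducible tridiagonal matrix (a hypothetical example with made-up entries, independent of any particular graph):

```python
# Numerical sketch of Lemma 4.5: for an irreducible tridiagonal matrix B,
# the h-th power vanishes at (i, j) when h < |i - j| and is nonzero when
# h = |i - j|.  The entries of B are arbitrary positive/random choices.
import numpy as np

d = 5
rng = np.random.default_rng(0)
B = (np.diag(rng.uniform(-1, 1, d + 1))
     + np.diag(rng.uniform(0.5, 2.0, d), 1)     # nonzero superdiagonal
     + np.diag(rng.uniform(0.5, 2.0, d), -1))   # nonzero subdiagonal

for h in range(d + 1):
    Bh = np.linalg.matrix_power(B, h)
    for i in range(d + 1):
        for j in range(d + 1):
            if h < abs(i - j):
                assert abs(Bh[i, j]) < 1e-12    # below the band: zero
            elif h == abs(i - j):
                assert abs(Bh[i, j]) > 1e-12    # on the band: nonzero
```

The on-band entry (B^h)_{ij} with h = |i − j| is the product of the off-diagonal entries along the unique monotone path from j to i, which is nonzero exactly because the super- and subdiagonals are.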
Lemma 4.6
For 0 ≤ h, i, j ≤ d, the following hold on W.

    E*_{r+i} A^h E*_{r+j} = 0 if h < |i − j|, and E*_{r+i} A^h E*_{r+j} ≠ 0 if h = |i − j|.   (12)
    E_{t+i} (A*)^h E_{t+j} = 0 if h < |i − j|, and E_{t+i} (A*)^h E_{t+j} ≠ 0 if h = |i − j|.   (13)

Proof:
Let B denote the matrix representation of A with respect to the basis given in Lemma 4.1. By construction, the matrix representation of E*_{r+i} A^h E*_{r+j} with respect to this basis has (i, j)-entry (B^h)_{ij} and all other entries 0. Line (12) follows from this and Lemma 4.5. The proof of (13) is similar. ✷

Theorem 4.7
Each of the following forms a basis for the C-vector space End(W):

(i) the actions of {A^m E*_r A^n | 0 ≤ m, n ≤ d} on W,
(ii) the actions of {(A*)^m E_t (A*)^n | 0 ≤ m, n ≤ d} on W.

Proof: Let S denote {A^m E*_r A^n | 0 ≤ m, n ≤ d}. Observe that |S| = (d + 1)^2, and this is equal to the dimension of End(W). It suffices to show that the actions of the elements of S on W are linearly independent. Let {w_i}_{i=0}^d be the basis for W in Lemma 4.1. With respect to this basis, let B and F*_r be the matrix representations of A and E*_r. We claim that for 0 ≤ m, n ≤ d, B^m F*_r B^n has entries

    (B^m F*_r B^n)_{ij} = 0 if i > m or j > n, and (B^m F*_r B^n)_{ij} ≠ 0 if i = m and j = n   (0 ≤ i, j ≤ d).   (14)

By Lemma 4.1(i), F*_r has (0, 0)-entry 1 and all other entries 0, so

    (B^m F*_r B^n)_{ij} = (B^m)_{i0} (B^n)_{0j}   (0 ≤ i, j ≤ d).

Combining this with Lemma 4.5, we obtain (14). It follows from (14) that the actions of the elements of S on W are linearly independent and hence form a basis for End(W). Similarly, (ii) can be shown to be a basis for End(W). ✷

Theorem 4.8
Each of the following is a generating set for the C-algebra End(W):

(i) the actions of A, E*_r on W,
(ii) the actions of A*, E_t on W,
(iii) the actions of A, A* on W.

Proof: By Theorem 4.7, (i) and (ii) are generating sets for End(W). The set (iii) is a generating set for End(W) by (i) and since E*_r is a polynomial in A* (see (7)). ✷

Definition 4.9
Define D (resp. D*) to be the subalgebra of End(W) generated by the action of A (resp. A*) on W.

Lemma 4.10 Each of the following forms a basis for the C-vector space D:

(i) the actions of {A^i}_{i=0}^d on W,
(ii) the actions of {E_{t+i}}_{i=0}^d on W.

Proof: (i) By Lemma 4.5, the actions of {A^i}_{i=0}^d on W are linearly independent. Combining this with Lemma 4.4, we obtain the result.
(ii) Immediate from (3) and (i). ✷

Lemma 4.11
Each of the following forms a basis for the C-vector space D*:

(i) the actions of {(A*)^i}_{i=0}^d on W,
(ii) the actions of {E*_{r+i}}_{i=0}^d on W.

Proof: Similar to the proof of Lemma 4.10. ✷

Corollary 4.12 Each of the following forms a basis for the C-vector space End(W):

(i) the actions of {E_{t+i} E*_r E_{t+j} | 0 ≤ i, j ≤ d} on W,
(ii) the actions of {E*_{r+i} E_t E*_{r+j} | 0 ≤ i, j ≤ d} on W.

Proof: Immediate from Theorem 4.7 and Lemmas 4.10, 4.11. ✷

5 The scalars a_i(W) and x_i(W)

Let W be as in Assumption 3.4. In this section, we associate with W two sequences of scalars called the a_i(W) and x_i(W). We will then describe the algebraic properties of these scalars.

Notation 5.1
For any Y ∈ T, tr_W Y denotes the trace of the action of Y on W.

Definition 5.2 Define

    a_i(W) = tr_W(E*_{r+i} A),   a*_i(W) = tr_W(E_{t+i} A*)   (0 ≤ i ≤ d),   (15)
    x_i(W) = tr_W(E*_{r+i} A E*_{r+i−1} A),   x*_i(W) = tr_W(E_{t+i} A* E_{t+i−1} A*)   (1 ≤ i ≤ d).   (16)

For notational convenience, define x_0(W) = 0 and x*_0(W) = 0.

Lemma 5.3
For 0 ≤ i ≤ d, let w_i be a nonzero vector in E*_{r+i} W. Let B denote the matrix representation of A with respect to {w_i}_{i=0}^d. Then (i)–(iii) below hold.

(i) B_{ii} = a_i(W) (0 ≤ i ≤ d).
(ii) B_{i,i−1} B_{i−1,i} = x_i(W) (1 ≤ i ≤ d).
(iii) x_i(W) ≠ 0 (1 ≤ i ≤ d).

Proof: (i) By Lemma 4.1(i), (iii), the (j, j)-entry of the matrix representation of E*_{r+i} A with respect to {w_i}_{i=0}^d is B_{ii} if j = i, and 0 otherwise (0 ≤ j ≤ d). Taking the trace of this matrix and using (15), we obtain the desired result.
(ii) By Lemma 4.1(i), (iii), the (j, j)-entry of the matrix representation of E*_{r+i} A E*_{r+i−1} A with respect to {w_i}_{i=0}^d is B_{i,i−1} B_{i−1,i} if j = i, and 0 otherwise (0 ≤ j ≤ d). Taking the trace of this matrix and using (16), we obtain the desired result.
(iii) Immediate from (ii) and Lemma 4.1(iii). ✷

Lemma 5.4
For 0 ≤ i ≤ d, let w*_i be a nonzero vector in E_{t+i} W. Let B* denote the matrix representation of A* with respect to {w*_i}_{i=0}^d. Then (i)–(iii) below hold.

(i) B*_{ii} = a*_i(W) (0 ≤ i ≤ d).
(ii) B*_{i,i−1} B*_{i−1,i} = x*_i(W) (1 ≤ i ≤ d).
(iii) x*_i(W) ≠ 0 (1 ≤ i ≤ d).

Proof: Similar to the proof of Lemma 5.3. ✷

Theorem 5.5
Let v be a nonzero vector in E*_r W. Then for 0 ≤ i ≤ d, E*_{r+i} A^i v is nonzero and hence is a basis for E*_{r+i} W. Moreover, {E*_{r+i} A^i v}_{i=0}^d is a basis for W.

Proof: Since v spans E*_r W, E*_{r+i} A^i v spans E*_{r+i} A^i E*_r W. By Lemma 4.6, E*_{r+i} A^i E*_r W ≠ 0. Hence, E*_{r+i} A^i v ≠ 0. The rest of the assertion follows. ✷

Theorem 5.6 Let u be a nonzero vector in E_t W. Then for 0 ≤ i ≤ d, E_{t+i} (A*)^i u is nonzero and hence is a basis for E_{t+i} W. Moreover, {E_{t+i} (A*)^i u}_{i=0}^d is a basis for W.

Proof: Similar to the proof of Theorem 5.5. ✷

Theorem 5.7
With respect to the basis given in Theorem 5.5, the matrix representation of A is the tridiagonal matrix with diagonal entries a_0(W), a_1(W), . . . , a_d(W), subdiagonal entries all 1, and superdiagonal entries x_1(W), x_2(W), . . . , x_d(W).   (17)

Proof: Let {w_i}_{i=0}^d be the basis for W in Theorem 5.5. Let B denote the matrix representation of A with respect to this basis. Note that for 0 ≤ i ≤ d − 1, E*_{r+i+1} A w_i = B_{i+1,i} w_{i+1}. By Lemma 2.1,

    E*_{r+i+1} A w_i = E*_{r+i+1} A E*_{r+i} A^i E*_r v = E*_{r+i+1} A^{i+1} E*_r v = w_{i+1}.

Thus, B_{i+1,i} = 1 for 0 ≤ i ≤ d − 1. The rest of the assertion follows from Lemma 5.3(i), (ii). ✷

Theorem 5.8
With respect to the basis given in Theorem 5.6, the matrix representation of A* is the tridiagonal matrix with diagonal entries a*_0(W), a*_1(W), . . . , a*_d(W), subdiagonal entries all 1, and superdiagonal entries x*_1(W), x*_2(W), . . . , x*_d(W).

Proof: Similar to the proof of Theorem 5.7. ✷

Lemma 5.9
The following hold on W.

(i) E*_{r+i} A E*_{r+i} = a_i(W) E*_{r+i} (0 ≤ i ≤ d).
(ii) E*_{r+i} A E*_{r+i−1} A E*_{r+i} = x_i(W) E*_{r+i} (1 ≤ i ≤ d).
(iii) E*_{r+i−1} A E*_{r+i} A E*_{r+i−1} = x_i(W) E*_{r+i−1} (1 ≤ i ≤ d).
(iv) E_{t+i} A* E_{t+i} = a*_i(W) E_{t+i} (0 ≤ i ≤ d).
(v) E_{t+i} A* E_{t+i−1} A* E_{t+i} = x*_i(W) E_{t+i} (1 ≤ i ≤ d).
(vi) E_{t+i−1} A* E_{t+i} A* E_{t+i−1} = x*_i(W) E_{t+i−1} (1 ≤ i ≤ d).

Proof: (i) Let {w_j}_{j=0}^d be the basis for W in Theorem 5.5. By (17), E*_{r+i} A E*_{r+i} w_j = δ_{ij} a_i(W) E*_{r+i} w_j. The result follows.
(ii) Let G_i denote the action of E*_{r+i} on W. Since W is thin, G_i End(W) G_i has dimension 1. Observe that G_i is a nonzero element of G_i End(W) G_i. Thus there exists α ∈ C such that E*_{r+i} A E*_{r+i−1} A E*_{r+i} = α E*_{r+i} on W. Take the trace of both sides of this equation. Evaluating this using Definition 5.2 and the fact that tr_W(E*_{r+i}) = 1, we find that α = x_i(W).
(iii) Similar to (ii).
(iv)–(vi) Similar to the proofs of (i)–(iii). ✷

Lemma 5.10
The following hold.

(i) Σ_{i=0}^d a_i(W) = Σ_{i=0}^d θ_{t+i}.
(ii) Σ_{i=0}^d a*_i(W) = Σ_{i=0}^d θ*_{r+i}.
(iii) a_i(W) ∈ R, a*_i(W) ∈ R (0 ≤ i ≤ d).
(iv) x_i(W) ∈ R, x_i(W) > 0 (1 ≤ i ≤ d).
(v) x*_i(W) ∈ R, x*_i(W) > 0 (1 ≤ i ≤ d).

Proof: (i) Immediate from Theorem 5.7 and the fact that {θ_{t+i}}_{i=0}^d are the eigenvalues of the action of A on W.
(ii) Similar to the proof of (i).
(iii) By Lemma 5.9(i), a_i(W) is an eigenvalue of the real symmetric matrix E*_{r+i} A E*_{r+i}. Thus, a_i(W) ∈ R. Similarly, a*_i(W) ∈ R.
(iv) By Lemma 5.9(ii), x_i(W) is an eigenvalue of the real symmetric matrix E*_{r+i} A E*_{r+i−1} A E*_{r+i}. Thus, x_i(W) ∈ R. Since

    E*_{r+i} A E*_{r+i−1} A E*_{r+i} = (E*_{r+i−1} A E*_{r+i})^t (E*_{r+i−1} A E*_{r+i})

is positive semidefinite and x_i(W) ≠ 0 by Lemma 5.3(iii), we have x_i(W) > 0.
(v) Similar to the proof of (iv). ✷

6 The polynomials p_i

Let W be as in Assumption 3.4. In the previous section, we defined two bases for W. In this section, we will use these bases to obtain two sequences of polynomials. We will investigate some properties of these polynomials. Let C[λ] denote the C-algebra of polynomials in λ with coefficients in C.

Definition 6.1
For 0 ≤ i ≤ d + 1, define p_i = p^W_i in C[λ] by p_0 = 1 and

    λ p_i = p_{i+1} + a_i(W) p_i + x_i(W) p_{i−1}   (0 ≤ i ≤ d),   (18)

where x_i(W) and a_i(W) are as in Definition 5.2 and p_{−1} = 0.

Definition 6.2 For 0 ≤ i ≤ d + 1, define p*_i = p*^W_i in C[λ] by p*_0 = 1 and

    λ p*_i = p*_{i+1} + a*_i(W) p*_i + x*_i(W) p*_{i−1}   (0 ≤ i ≤ d),   (19)

where x*_i(W) and a*_i(W) are as in Definition 5.2 and p*_{−1} = 0.

Lemma 6.3 For any nonzero u ∈ E_t W and nonzero v ∈ E*_r W,

    p_i(A) v = E*_{r+i} A^i v   (0 ≤ i ≤ d),   (20)
    p*_i(A*) u = E_{t+i} (A*)^i u   (0 ≤ i ≤ d).   (21)

Moreover, p_{d+1}(A) v = 0 and p*_{d+1}(A*) u = 0.

Proof: For 0 ≤ i ≤ d + 1, let w_i = E*_{r+i} A^i v and w′_i = p_i(A) v. Recall that by Theorem 5.5, {w_i}_{i=0}^d is a basis for W. By (17),

    A w_i = w_{i+1} + a_i(W) w_i + x_i(W) w_{i−1}   (0 ≤ i ≤ d).   (22)

By (18),

    A w′_i = w′_{i+1} + a_i(W) w′_i + x_i(W) w′_{i−1}   (0 ≤ i ≤ d).   (23)

Comparing (22) and (23) and using the fact that w_0 = w′_0, we find that w_i = w′_i for 0 ≤ i ≤ d + 1. Hence (20) holds. Since w_{d+1} = 0, p_{d+1}(A) v = 0. The rest of the assertion is proved similarly. ✷

Theorem 6.4
For 0 ≤ i ≤ d,

    p_i(A) E*_r W = E*_{r+i} W,   (24)
    p*_i(A*) E_t W = E_{t+i} W.   (25)

Proof: Let v be a nonzero vector in E*_r W. By (20), E*_{r+i} A^i v spans p_i(A) E*_r W. By Theorem 5.5, E*_{r+i} A^i v spans E*_{r+i} W. From these comments, we obtain (24). The proof of (25) is similar. ✷

Theorem 6.5 For 0 ≤ i ≤ d, the following hold on W.

    p_i(A) E*_r = E*_{r+i} A^i E*_r,
    p*_i(A*) E_t = E_{t+i} (A*)^i E_t.

Proof: Abbreviate ∆ := p_i(A) E*_r − E*_{r+i} A^i E*_r. We will show that ∆ = 0 on W. For 0 ≤ j ≤ d, let w_j be a nonzero vector in E*_{r+j} W. Note that ∆ w_j = 0 for 1 ≤ j ≤ d. By Lemma 6.3, ∆ w_0 = 0. Therefore, ∆ = 0 on W. The second assertion is proved similarly. ✷

Theorem 6.6 The following hold.

(i) p_{d+1} is both the minimal polynomial and the characteristic polynomial of the action of A on W.
(ii) p_{d+1} = Π_{i=0}^d (λ − θ_{t+i}).
(iii) p*_{d+1} is both the minimal polynomial and the characteristic polynomial of the action of A* on W.
(iv) p*_{d+1} = Π_{i=0}^d (λ − θ*_{r+i}).

Proof: (i) By Lemma 6.3, p_{d+1}(A) E*_r W = 0. For 1 ≤ i ≤ d,

    p_{d+1}(A) E*_{r+i} W = p_{d+1}(A) p_i(A) E*_r W   (by (24))
                          = p_i(A) p_{d+1}(A) E*_r W = 0.

Therefore, p_{d+1}(A) E*_{r+i} W = 0 for 0 ≤ i ≤ d. Hence by Lemma 3.1(iv), p_{d+1}(A) = 0 on W. By Theorem 5.5 and (20), p_i(A) ≠ 0 on W for 0 ≤ i ≤ d. From these comments, p_{d+1} is the minimal polynomial of the action of A on W. Since the characteristic polynomial of the action of A on W has degree d + 1, it follows that p_{d+1} is also the characteristic polynomial of this action.
(ii) Immediate from (i) and the fact that {θ_{t+i}}_{i=0}^d are the eigenvalues of the action of A on W.
(iii), (iv) Similar to the proofs of (i), (ii). ✷

7 The scalars m_i(W) and ν(W)

Let W be as in Assumption 3.4. In this section, we will investigate the algebraic properties of two more scalars associated with W, called the m_i(W) and ν(W).

Definition 7.1
For 0 ≤ i ≤ d , define m i ( W ) = tr W ( E t + i E ∗ r ) , (26) m ∗ i ( W ) = tr W ( E ∗ r + i E t ) . (27) Lemma 7.2
For ≤ i ≤ d , the following (i)–(iv) hold on W . (i) E t + i E ∗ r E t + i = m i ( W ) E t + i . (ii) E ∗ r E t + i E ∗ r = m i ( W ) E ∗ r . (iii) E ∗ r + i E t E ∗ r + i = m ∗ i ( W ) E ∗ r + i . (iv) E t E ∗ r + i E t = m ∗ i ( W ) E t .Proof: (i) Let H i denote the action of E t + i on W . Since W is thin, H i End( W ) H i has dimension 1. Notethat H i is a nonzero element of H i End( W ) H i , hence a basis for H i End( W ) H i . Thus there exists α ∈ C suchthat E t + i E ∗ r E t + i = αE t + i on W . Taking the trace of both sides of this equation and using Definition 7.1and the fact that tr W ( E ∗ r ) = 1, we find that α = m i ( W ).(ii) Let L r denote the action of E ∗ r on W . Since W is thin, L r End( W ) L r has dimension 1. Note that L r is a nonzero element of L r End( W ) L r , hence a basis for L r End( W ) L r . Thus there exists α ∈ C such that E ∗ r E t + i E ∗ r = αE ∗ r on W . Arguing as in the proof of (i), we find that α = m i ( W ).(iii), (iv) Similar to the proofs of (i), (ii). ✷ Lemma 7.3
The following hold.

(i) $\sum_{i=0}^{d} m_i(W) = 1$.

(ii) $\sum_{i=0}^{d} m^*_i(W) = 1$.

(iii) $m_i(W) \in \mathbb{R}$ and $m_i(W) > 0$ $(0 \leq i \leq d)$.

(iv) $m^*_i(W) \in \mathbb{R}$ and $m^*_i(W) > 0$ $(0 \leq i \leq d)$.

Proof: (i) Observe that on $W$, $\sum_{i=0}^{d}E_{t+i} = I$. In this equation, multiply each term on the right by $E^*_r$, take the trace, and use Definition 7.1 to obtain $\sum_{i=0}^{d}m_i(W) = 1$.
(ii) Similar to the proof of (i).
(iii) By Lemma 7.2(i), $m_i(W)$ is an eigenvalue of the real symmetric matrix $E_{t+i}E^*_rE_{t+i}$. Hence $m_i(W) \in \mathbb{R}$. Since $E_{t+i}E^*_rE_{t+i} = (E^*_rE_{t+i})^{t}(E^*_rE_{t+i})$ is positive definite, $m_i(W) > 0$.
(iv) Similar to the proof of (iii). ✷

Definition 7.4 Note that $m_0(W) = m^*_0(W)$. We let $\nu(W)$ denote the multiplicative inverse of this common value.

The following is an immediate consequence of Lemma 7.2 and Definition 7.4.

Lemma 7.5
The following hold on $W$.

(i) $\nu(W)E_tE^*_rE_t = E_t$.

(ii) $\nu(W)E^*_rE_tE^*_r = E^*_r$.

8 Two bases for $W$

Let $W$ be as in Assumption 3.4. In this section, we consider two bases for $W$, called the standard basis and the dual standard basis.

Theorem 8.1
Let $u$ and $v$ be nonzero vectors in $E_tW$ and $E^*_rW$, respectively. Then (i)–(ii) below hold.

(i) $\{E^*_{r+i}u\}_{i=0}^{d}$ is a basis for $W$.

(ii) $\{E_{t+i}v\}_{i=0}^{d}$ is a basis for $W$.

Proof: (i) By Lemma 3.1(iv), it suffices to show that $E^*_{r+i}u \neq 0$ for $0 \leq i \leq d$. By Lemmas 3.1(ii) and 3.2(v), $E^*_{r+i}E_tW = E^*_{r+i}W \neq 0$. Since $u$ spans $E_tW$, $E^*_{r+i}u$ spans $E^*_{r+i}E_tW$. Therefore $E^*_{r+i}u \neq 0$.
(ii) Similar to (i). ✷

Definition 8.2
Let $u$ and $v$ be nonzero vectors in $E_tW$ and $E^*_rW$, respectively. We call $\{E^*_{r+i}u\}_{i=0}^{d}$ (resp. $\{E_{t+i}v\}_{i=0}^{d}$) a standard (resp. dual standard) basis for $W$.

Theorem 8.3
Let $\{w_i\}_{i=0}^{d}$ be a standard basis for $W$ and let $\{w'_i\}_{i=0}^{d}$ be a sequence of vectors in $W$. Then the following are equivalent.

(i) $\{w'_i\}_{i=0}^{d}$ is a standard basis for $W$.

(ii) There exists a nonzero $\alpha \in \mathbb{C}$ such that $w'_i = \alpha w_i$ for $0 \leq i \leq d$.

Proof: By Definition 8.2, there exists a nonzero $u \in E_tW$ such that $w_i = E^*_{r+i}u$ for $0 \leq i \leq d$. Note that $\{w'_i\}_{i=0}^{d}$ is a standard basis for $W$ if and only if there exists a nonzero $u' \in E_tW$ such that $w'_i = E^*_{r+i}u'$ for $0 \leq i \leq d$. Since $u$ spans $E_tW$, $u' = \alpha u$ for some nonzero $\alpha \in \mathbb{C}$. The conclusion follows. ✷

Theorem 8.4
Let $\{v_i\}_{i=0}^{d}$ be a dual standard basis for $W$ and let $\{v'_i\}_{i=0}^{d}$ be a sequence of vectors in $W$. Then the following are equivalent.

(i) $\{v'_i\}_{i=0}^{d}$ is a dual standard basis for $W$.

(ii) There exists a nonzero $\alpha \in \mathbb{C}$ such that $v'_i = \alpha v_i$ for $0 \leq i \leq d$.

Proof: Similar to the proof of Theorem 8.3. ✷

We now give various characterizations of a standard basis and a dual standard basis.
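The notion of a standard basis can be illustrated in a small toy model. The sketch below (a hypothetical example, not the full graph setting) takes $W = \mathbb{C}^{d+1}$ with $A$ a tridiagonal matrix of constant row sum $\theta_t$ and $A^*$ diagonal; here the entries are the intersection numbers $b_i = d-i$, $c_i = i$ of the $3$-cube. The all-ones vector $u$ spans the $\theta_t$-eigenspace, the coordinate projections of $u$ give the vectors $e_i$, and the two conditions of Lemma 8.7 (constant row sum, diagonal $A^*$) hold for this basis.

```python
import numpy as np

# Toy model of a standard basis (illustrative sketch only):
# A is tridiagonal with b_i = d - i above and c_i = i below the diagonal,
# so every row sums to theta_t = d; A* is diagonal with distinct entries.
d = 3
A = np.zeros((d + 1, d + 1))
for i in range(d + 1):
    if i > 0:
        A[i, i - 1] = i          # c_i
    if i < d:
        A[i, i + 1] = d - i      # b_i
Astar = np.diag([float(d - 2 * i) for i in range(d + 1)])

u = np.ones(d + 1)                          # spans the theta_t-eigenspace
assert np.allclose(A @ u, d * u)            # A u = theta_t u
assert np.allclose(A.sum(axis=1), d)        # constant row sum theta_t
assert np.allclose(Astar, np.diag(np.diag(Astar)))  # A* is diagonal
```

In this model the coordinate vectors $e_i$ play the role of $E^*_{r+i}u$, and their sum is the eigenvector $u$, matching Theorem 8.5.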
Theorem 8.5
Let $\{w_i\}_{i=0}^{d}$ be a sequence of vectors in $W$, not all $0$. Then $\{w_i\}_{i=0}^{d}$ is a standard basis for $W$ if and only if both (i) and (ii) below hold.

(i) $w_i \in E^*_{r+i}W$ $(0 \leq i \leq d)$.

(ii) $\sum_{i=0}^{d}w_i \in E_tW$.

Proof: Suppose that $\{w_i\}_{i=0}^{d}$ is a standard basis for $W$. By Definition 8.2, there exists a nonzero $u \in E_tW$ such that $w_i = E^*_{r+i}u$ for $0 \leq i \leq d$. Thus (i) holds. Combining Lemma 3.1(ii) and the fact that $\sum_{j=0}^{D}E^*_j = I$, we have $u = \sum_{i=0}^{d}E^*_{r+i}u$. From this comment, we find that $\sum_{i=0}^{d}w_i = \sum_{i=0}^{d}E^*_{r+i}u = u \in E_tW$. Hence (ii) holds. Conversely, suppose that $\{w_i\}_{i=0}^{d}$ satisfies (i) and (ii). Let $u = \sum_{i=0}^{d}w_i$. By (ii) and the fact that not all of $\{w_i\}_{i=0}^{d}$ are $0$, $u$ is a nonzero vector in $E_tW$. By (i), $E^*_{r+i}u = w_i$ for $0 \leq i \leq d$. Therefore $\{w_i\}_{i=0}^{d}$ is a standard basis for $W$. ✷

Theorem 8.6
Let $\{v_i\}_{i=0}^{d}$ be a sequence of vectors in $W$, not all $0$. Then $\{v_i\}_{i=0}^{d}$ is a dual standard basis for $W$ if and only if both (i) and (ii) below hold.

(i) $v_i \in E_{t+i}W$ $(0 \leq i \leq d)$.

(ii) $\sum_{i=0}^{d}v_i \in E^*_rW$.

Proof: Similar to the proof of Theorem 8.5. ✷

Lemma 8.7
Let $\{w_i\}_{i=0}^{d}$ be a basis for $W$. With respect to this basis, let $B$ and $B^*$ denote the matrix representations of $A$ and $A^*$, respectively. Then $\{w_i\}_{i=0}^{d}$ is a standard basis for $W$ if and only if both (i) and (ii) below hold.

(i) $B$ has constant row sum $\theta_t$.

(ii) $B^* = \mathrm{diag}(\theta^*_r, \theta^*_{r+1}, \ldots, \theta^*_{r+d})$.

Proof: Let $w = \sum_{i=0}^{d}w_i$. Note that $Aw = \sum_{i=0}^{d}\sum_{j=0}^{d}B_{ji}w_j$. Since $E_tW$ is the eigenspace of $A$ corresponding to $\theta_t$, by the previous statement $B$ has constant row sum equal to $\theta_t$ if and only if $w \in E_tW$. Observe also that $w_i \in E^*_{r+i}W$ for $0 \leq i \leq d$ if and only if $B^* = \mathrm{diag}(\theta^*_r, \theta^*_{r+1}, \ldots, \theta^*_{r+d})$. The result follows from these comments and Theorem 8.5. ✷

Lemma 8.8
Let $\{v_i\}_{i=0}^{d}$ be a basis for $W$. With respect to this basis, let $B$ and $B^*$ be the matrix representations of $A$ and $A^*$, respectively. Then $\{v_i\}_{i=0}^{d}$ is a dual standard basis for $W$ if and only if both (i) and (ii) below hold.

(i) $B^*$ has constant row sum $\theta^*_r$.

(ii) $B = \mathrm{diag}(\theta_t, \theta_{t+1}, \ldots, \theta_{t+d})$.

Proof: Similar to the proof of Lemma 8.7. ✷

Definition 8.9
Define the two maps $\flat : \mathrm{End}(W) \to \mathrm{Mat}_{d+1}(\mathbb{C})$ and $\sharp : \mathrm{End}(W) \to \mathrm{Mat}_{d+1}(\mathbb{C})$ as follows. For every $Y \in \mathrm{End}(W)$, $Y^\flat$ (resp. $Y^\sharp$) is the matrix representation of $Y$ with respect to a standard basis (resp. dual standard basis) for $W$. Note that $Y^\flat$ (resp. $Y^\sharp$) is independent of the choice of standard basis (resp. dual standard basis) by Theorem 8.3 (resp. Theorem 8.4).

Theorem 8.10
With reference to Definition 8.9, the following hold.

(i) $A^\flat$ has constant row sum $\theta_t$.

(ii) $A^{*\flat} = \mathrm{diag}(\theta^*_r, \theta^*_{r+1}, \ldots, \theta^*_{r+d})$.

(iii) $A^{*\sharp}$ has constant row sum $\theta^*_r$.

(iv) $A^\sharp = \mathrm{diag}(\theta_t, \theta_{t+1}, \ldots, \theta_{t+d})$.

Proof: Immediate from Lemmas 8.7 and 8.8. ✷

9 The scalars $b_i(W)$, $c_i(W)$

Let $W$ be as in Assumption 3.4 and let $\flat, \sharp$ be the maps in Definition 8.9. In this section, we take a close look at the entries of $A^\flat$ and $A^{*\sharp}$. By Lemmas 4.1 and 4.2, the matrices $A^\flat$ and $A^{*\sharp}$ are tridiagonal. Moreover, by Lemma 5.3, the $(i,i)$-entries of these matrices are $a_i(W)$ and $a^*_i(W)$, respectively. We now take a close look at the superdiagonal and subdiagonal entries of these matrices.

Definition 9.1
Define
$$b_i(W) = (A^\flat)_{i,i+1}, \qquad b^*_i(W) = (A^{*\sharp})_{i,i+1} \qquad (0 \leq i \leq d-1),$$
$$c_i(W) = (A^\flat)_{i,i-1}, \qquad c^*_i(W) = (A^{*\sharp})_{i,i-1} \qquad (1 \leq i \leq d).$$
Thus,
$$A^\flat = \begin{pmatrix}
a_0(W) & b_0(W) & & & \\
c_1(W) & a_1(W) & b_1(W) & & \\
 & c_2(W) & \ddots & \ddots & \\
 & & \ddots & \ddots & b_{d-1}(W) \\
 & & & c_d(W) & a_d(W)
\end{pmatrix}, \qquad (28)$$
$$A^{*\sharp} = \begin{pmatrix}
a^*_0(W) & b^*_0(W) & & & \\
c^*_1(W) & a^*_1(W) & b^*_1(W) & & \\
 & c^*_2(W) & \ddots & \ddots & \\
 & & \ddots & \ddots & b^*_{d-1}(W) \\
 & & & c^*_d(W) & a^*_d(W)
\end{pmatrix}. \qquad (29)$$
For notational convenience, define $b_d(W) = 0$, $c_0(W) = 0$ (resp. $b^*_d(W) = 0$, $c^*_0(W) = 0$). Observe that by Lemmas 4.1(iii) and 4.2(iii), the $b_i(W), b^*_i(W)$ $(0 \leq i \leq d-1)$ and the $c_i(W), c^*_i(W)$ $(1 \leq i \leq d)$ are all nonzero.

Definition 9.2
By the intersection numbers (resp. dual intersection numbers) of $W$, we mean the $a_i(W), b_i(W), c_i(W)$ (resp. $a^*_i(W), b^*_i(W), c^*_i(W)$).

Lemma 9.3
The following hold.

(i) $b_{i-1}(W)c_i(W) = x_i(W)$ $(1 \leq i \leq d)$.

(ii) $c_i(W) + a_i(W) + b_i(W) = \theta_t$ $(0 \leq i \leq d)$.

(iii) $b^*_{i-1}(W)c^*_i(W) = x^*_i(W)$ $(1 \leq i \leq d)$.

(iv) $c^*_i(W) + a^*_i(W) + b^*_i(W) = \theta^*_r$ $(0 \leq i \leq d)$.

(v) $b_i(W) \in \mathbb{R}$, $c_i(W) \in \mathbb{R}$ $(0 \leq i \leq d)$.

(vi) $b^*_i(W) \in \mathbb{R}$, $c^*_i(W) \in \mathbb{R}$ $(0 \leq i \leq d)$.

Proof: (i) Immediate from Lemma 5.3(ii).
(ii) Immediate from Theorem 8.10(i).
(iii), (iv) Similar to the proofs of (i), (ii).
(v) Recall that $a_0(W) \in \mathbb{R}$ by Lemma 5.10(iii). Since $\theta_t \in \mathbb{R}$ and $a_0(W) + b_0(W) = \theta_t$, we have $b_0(W) \in \mathbb{R}$. By Lemma 5.10(iii), (iv), we obtain $a_i(W) \in \mathbb{R}$ and $x_i(W) \in \mathbb{R}$ for $0 \leq i \leq d$. Combining this with (i), (ii) and the fact that $b_0(W) \in \mathbb{R}$ and $b_i(W) \neq 0$ for $0 \leq i \leq d-1$, we find that $b_i(W) \in \mathbb{R}$ and $c_i(W) \in \mathbb{R}$ for $0 \leq i \leq d$.
(vi) Similar to the proof of (v). ✷

Lemma 9.4 For $0 \leq i \leq d$,
$$b_0(W)b_1(W)\cdots b_{i-1}(W) = p_i(\theta_t), \qquad (30)$$
$$b^*_0(W)b^*_1(W)\cdots b^*_{i-1}(W) = p^*_i(\theta^*_r), \qquad (31)$$
where $p_i = p^W_i$, $p^*_i = p^{*W}_i$ are from Definitions 6.1, 6.2.

Proof: We prove (30) by induction on $i$. It can be verified that (30) holds for $i = 0, 1$. Fix $2 \leq i \leq d$. By (18),
$$p_i(\theta_t) = (\theta_t - a_{i-1}(W))p_{i-1}(\theta_t) - x_{i-1}(W)p_{i-2}(\theta_t). \qquad (32)$$
Eliminate $x_{i-1}(W)$ and $a_{i-1}(W)$ in (32) using Lemma 9.3(i), (ii). Evaluate the result using the inductive hypothesis to obtain the desired result. Equation (31) is proved similarly. ✷

Theorem 9.5
The following (i)–(iv) hold.

(i) $b_i(W) = \dfrac{p_{i+1}(\theta_t)}{p_i(\theta_t)}$ $(0 \leq i \leq d-1)$.

(ii) $c_i(W) = \dfrac{x_i(W)\,p_{i-1}(\theta_t)}{p_i(\theta_t)}$ $(1 \leq i \leq d)$.

(iii) $b^*_i(W) = \dfrac{p^*_{i+1}(\theta^*_r)}{p^*_i(\theta^*_r)}$ $(0 \leq i \leq d-1)$.

(iv) $c^*_i(W) = \dfrac{x^*_i(W)\,p^*_{i-1}(\theta^*_r)}{p^*_i(\theta^*_r)}$ $(1 \leq i \leq d)$.

In the above lines, $p_j = p^W_j$, $p^*_j = p^{*W}_j$ are from Definitions 6.1, 6.2.

Proof: (i) Immediate from Lemma 9.4.
(ii) Immediate from (i) and Lemma 9.3(i).
(iii), (iv) Similar to the proofs of (i), (ii). ✷

Lemma 9.6 [10, Theorem 4.1(vi)]
Let $W$ be the trivial $T$-module. For $0 \leq i \leq D$, let $a_i, b_i, c_i$ (resp. $a^*_i, b^*_i, c^*_i$) be the intersection (resp. dual intersection) numbers of $\Gamma$. Then

(i) $a_i(W) = a_i$, $b_i(W) = b_i$, $c_i(W) = c_i$;

(ii) $a^*_i(W) = a^*_i$, $b^*_i(W) = b^*_i$, $c^*_i(W) = c^*_i$.

We finish this section with a few comments.
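Before turning to those comments, we note that the product formula (30) is easy to test numerically. The sketch below uses hypothetical data (arbitrary nonzero $b_i$, $c_i$, not taken from any particular graph): it forces $a_i = \theta_t - b_i - c_i$ as in Lemma 9.3(ii), sets $x_i = b_{i-1}c_i$ as in Lemma 9.3(i), builds the $p_i$ by the three-term recurrence (18), and confirms $p_i(\theta_t) = b_0 b_1 \cdots b_{i-1}$.

```python
import numpy as np

# Check of Lemma 9.4, equation (30), on hypothetical test data.
# Recurrence (18): p_{i+1}(lam) = (lam - a_i) p_i(lam) - x_i p_{i-1}(lam).
d, theta = 4, 7.0
rng = np.random.default_rng(1)
b = rng.uniform(1, 3, d + 1); b[d] = 0.0   # convention b_d(W) = 0
c = rng.uniform(1, 3, d + 1); c[0] = 0.0   # convention c_0(W) = 0
a = theta - b - c                          # Lemma 9.3(ii)

p_prev, p_cur = 0.0, 1.0                   # p_{-1}, p_0 evaluated at theta
prod_b = 1.0
for i in range(d):
    x = b[i - 1] * c[i] if i >= 1 else 0.0    # x_i = b_{i-1} c_i, Lemma 9.3(i)
    p_prev, p_cur = p_cur, (theta - a[i]) * p_cur - x * p_prev
    prod_b *= b[i]
    assert abs(p_cur - prod_b) < 1e-9      # p_{i+1}(theta) = b_0 ... b_i
```

The assertion inside the loop is exactly (30) at each step of the induction in the proof of Lemma 9.4.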
Lemma 9.7
Let
$W, W'$ be thin irreducible $T$-modules. The following are equivalent.

(i) $W$ and $W'$ are isomorphic $T$-modules.

(ii) $W$ and $W'$ have the same endpoint, dual endpoint, diameter and intersection numbers.

(iii) $W$ and $W'$ have the same endpoint, dual endpoint, diameter and dual intersection numbers.

Proof: (i) $\Rightarrow$ (ii): Suppose that $W$ and $W'$ are isomorphic $T$-modules. Let $\varphi : W \to W'$ be an isomorphism of $T$-modules. Thus $\varphi(E_iW) = E_iW'$. Hence $E_iW = 0$ if and only if $E_iW' = 0$. Similarly, $E^*_iW = 0$ if and only if $E^*_iW' = 0$. Therefore, $W$ and $W'$ have the same endpoint, dual endpoint and diameter. Since $W$ and $W'$ are isomorphic, the matrices representing the action of $A$ on $W$ and $W'$ are the same. Hence they have the same intersection numbers.
(ii) $\Rightarrow$ (i): Suppose that $W$ and $W'$ have the same endpoint $r$, dual endpoint $t$ and diameter $d$. Suppose also that they have the same intersection numbers. For $0 \leq i \leq d$, let $w_i = E^*_{r+i}u$ and $w'_i = E^*_{r+i}u'$, where $u$ and $u'$ are nonzero vectors in $E_tW$ and $E_tW'$, respectively. Since $W$ and $W'$ both have dimension $d+1$, there exists a vector space isomorphism $\varphi : W \to W'$ such that $\varphi(w_i) = w'_i$. Since $w_i \in E^*_{r+i}W$, it can be easily verified that $(\varphi A^* - A^*\varphi)w_i = 0$ for $0 \leq i \leq d$. By (28) and the fact that $W$ and $W'$ have the same intersection numbers, $(\varphi A - A\varphi)w_i = 0$ for $0 \leq i \leq d$. From these comments, $(\varphi A - A\varphi)W = 0$ and $(\varphi A^* - A^*\varphi)W = 0$. Since $T$ is generated by $A, A^*$, we find that $\varphi$ is a $T$-module isomorphism. Therefore, $W$ and $W'$ are isomorphic $T$-modules.
(i) $\Leftrightarrow$ (iii): Similar to the proof of (i) $\Leftrightarrow$ (ii). ✷
10 The scalars $k_i(W)$, $k^*_i(W)$

Let $W$ be as in Assumption 3.4. In this section, we will look at a sequence of scalars closely related to the $m_i(W)$.

Definition 10.1
For $0 \leq i \leq d$, define
$$k_i(W) = m^*_i(W)\,\nu(W), \qquad k^*_i(W) = m_i(W)\,\nu(W),$$
where $m_i(W)$, $m^*_i(W)$, $\nu(W)$ are from Definitions 7.1 and 7.4.

Lemma 10.2
The following (i)–(iv) hold.

(i) $k_0(W) = 1$, $k^*_0(W) = 1$.

(ii) $\sum_{i=0}^{d}k_i(W) = \nu(W)$.

(iii) $\sum_{i=0}^{d}k^*_i(W) = \nu(W)$.

(iv) $k_i(W) > 0$, $k^*_i(W) > 0$ $(0 \leq i \leq d)$.

In the above lines, $\nu(W)$ is from Definition 7.4.

Proof: (i) Immediate from Definition 10.1.
(ii) Immediate from Lemma 7.3(i) and Definition 10.1.
(iii) Similar to the proof of (ii).
(iv) Immediate from Lemma 7.3(iii), (iv) and Definitions 7.4, 10.1. ✷

We now relate the $k_i(W)$ (resp. $k^*_i(W)$) and the intersection (resp. dual intersection) numbers of $W$.

Lemma 10.3
For $1 \leq i \leq d$,
$$k_i(W)c_i(W) = k_{i-1}(W)b_{i-1}(W), \qquad (33)$$
$$k^*_i(W)c^*_i(W) = k^*_{i-1}(W)b^*_{i-1}(W), \qquad (34)$$
where $b_j(W), b^*_j(W), c_j(W), c^*_j(W)$ are from Definition 9.1 and $b_{-1}(W) = 0$, $b^*_{-1}(W) = 0$.

Proof: We proceed by induction on $i$. Since $c_0(W) = 0$, equation (33) holds for $i = 0$. Assume $1 \leq i \leq d$. By Definition 9.1, on $W$,
$$AE^*_{r+i}E_t = b_{i-1}(W)E^*_{r+i-1}E_t + a_i(W)E^*_{r+i}E_t + c_{i+1}(W)E^*_{r+i+1}E_t, \qquad (35)$$
where $c_{d+1}(W) = 0$. Take the trace of both sides of (35). Evaluate this using Definition 7.1 and the fact that $E_tA = \theta_tE_t$. Multiplying both sides of the resulting equation by $\nu(W)$ and using Definition 10.1, we obtain
$$\theta_tk_i(W) = b_{i-1}(W)k_{i-1}(W) + a_i(W)k_i(W) + c_{i+1}(W)k_{i+1}(W). \qquad (36)$$
Solving for $c_{i+1}(W)k_{i+1}(W)$ in (36) using the inductive hypothesis and Lemma 9.3(ii), we find that (33) holds for $i+1$. The proof of (34) is similar. ✷

Theorem 10.4 For $0 \leq i \leq d$,
$$k_i(W) = \frac{b_0(W)b_1(W)\cdots b_{i-1}(W)}{c_1(W)c_2(W)\cdots c_i(W)}, \qquad (37)$$
$$k^*_i(W) = \frac{b^*_0(W)b^*_1(W)\cdots b^*_{i-1}(W)}{c^*_1(W)c^*_2(W)\cdots c^*_i(W)}, \qquad (38)$$
where $b_j(W), b^*_j(W), c_j(W), c^*_j(W)$ are from Definition 9.1.

Proof: Solve for $k_i(W)$ and $k^*_i(W)$ in Lemma 10.3 recursively to obtain the desired result. ✷

Corollary 10.5
Let $W$ be the trivial $T$-module. Then for $0 \leq i \leq D$,
$$k_i(W) = k_i, \qquad k^*_i(W) = m_i,$$
where $k_i$ is the $i$th valency of $\Gamma$ and $m_i$ is the multiplicity of $\Gamma$ associated with $E_i$.

Proof: Immediate from (1), (4), Lemma 9.6 and Theorem 10.4. ✷
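Corollary 10.5 can be checked in one concrete case. For the $d$-cube $H(d,2)$ one has $b_i = d-i$ and $c_i = i$, so formula (37) gives $k_i(W) = \binom{d}{i}$, and by Lemma 10.2(ii) the $k_i(W)$ sum to $\nu(W) = 2^d = |X|$. The sketch below simply evaluates (37) with this cube data (the values $b_i = d-i$, $c_i = i$ are standard; the rest is arithmetic).

```python
from math import comb

# Evaluate formula (37) for the trivial module of the d-cube:
# k_i = (b_0 ... b_{i-1}) / (c_1 ... c_i) with b_i = d - i, c_i = i.
d = 6
b = [d - i for i in range(d + 1)]
c = [i for i in range(d + 1)]
k = []
for i in range(d + 1):
    num = 1
    for h in range(i):
        num *= b[h]
    den = 1
    for h in range(1, i + 1):
        den *= c[h]
    k.append(num // den)

assert k == [comb(d, i) for i in range(d + 1)]   # k_i = binom(d, i)
assert sum(k) == 2 ** d                          # sum k_i = nu = |X|
```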
11 The polynomials $u_i$ and $v_i$

Let $W$ be as in Assumption 3.4. In this section, we will look at two normalizations of the polynomials $p_i$ and $p^*_i$ in Definitions 6.1, 6.2.

Definition 11.1
Define $v_i = v^W_i$ and $v^*_i = v^{*W}_i$ in $\mathbb{C}[\lambda]$ by
$$v_i = \frac{p_i}{c_1(W)c_2(W)\cdots c_i(W)} \qquad (0 \leq i \leq d), \qquad (39)$$
$$v^*_i = \frac{p^*_i}{c^*_1(W)c^*_2(W)\cdots c^*_i(W)} \qquad (0 \leq i \leq d), \qquad (40)$$
where $p_i = p^W_i$, $p^*_i = p^{*W}_i$ are from Definitions 6.1, 6.2 and $c_j(W)$, $c^*_j(W)$ are from Definition 9.1. For notational convenience, define $v_{-1} = 0$, $v^*_{-1} = 0$.

Lemma 11.2
For $0 \leq i \leq d$,
$$v_i(\theta_t) = k_i(W), \qquad v^*_i(\theta^*_r) = k^*_i(W),$$
where $v_i = v^W_i$, $v^*_i = v^{*W}_i$ are from Definition 11.1.

Proof: Immediate from Lemma 9.4, Theorem 10.4 and Definition 11.1. ✷

Lemma 11.3
With reference to Definition 11.1, for $0 \leq i \leq d-1$,
$$\lambda v_i = b_{i-1}(W)v_{i-1} + a_i(W)v_i + c_{i+1}(W)v_{i+1}, \qquad (41)$$
$$\lambda v^*_i = b^*_{i-1}(W)v^*_{i-1} + a^*_i(W)v^*_i + c^*_{i+1}(W)v^*_{i+1}, \qquad (42)$$
where $b_{-1}(W) = 0$, $b^*_{-1}(W) = 0$. Moreover,
$$\lambda v_d - a_d(W)v_d - b_{d-1}(W)v_{d-1} = c^{-1}p_{d+1},$$
$$\lambda v^*_d - a^*_d(W)v^*_d - b^*_{d-1}(W)v^*_{d-1} = c^{*-1}p^*_{d+1},$$
where $c = c_1(W)c_2(W)\cdots c_d(W)$ and $c^* = c^*_1(W)c^*_2(W)\cdots c^*_d(W)$.

Proof: To obtain (41), divide both sides of (18) by $c_1(W)c_2(W)\cdots c_{i+1}(W)$ and eliminate $x_i(W)$ using Lemma 9.3(i). The proof of (42) is similar. ✷

Theorem 11.4
With reference to Definition 11.1, for $0 \leq i \leq d$,
$$v_i(A)E^*_ru = E^*_{r+i}u, \qquad v^*_i(A^*)E_tv = E_{t+i}v, \qquad (43)$$
where $u$ and $v$ are nonzero vectors in $E_tW$ and $E^*_rW$, respectively.

Proof: For $0 \leq i \leq d$, let $w_i = E^*_{r+i}u$ and $w'_i = v_i(A)E^*_ru$. By (28),
$$Aw_i = b_{i-1}(W)w_{i-1} + a_i(W)w_i + c_{i+1}(W)w_{i+1} \qquad (0 \leq i \leq d-1), \qquad (44)$$
where $b_{-1}(W) = 0$. Using (41), we obtain
$$Aw'_i = b_{i-1}(W)w'_{i-1} + a_i(W)w'_i + c_{i+1}(W)w'_{i+1} \qquad (0 \leq i \leq d-1). \qquad (45)$$
Using the fact that $w_0 = w'_0$ and comparing (44) and (45), we obtain the equation on the left of (43). The equation on the right of (43) can be similarly obtained. ✷

Definition 11.5
For $0 \leq i \leq d$, define $u_i = u^W_i$ and $u^*_i = u^{*W}_i$ in $\mathbb{C}[\lambda]$ as follows:
$$u_i = \frac{p_i}{p_i(\theta_t)}, \qquad (46)$$
$$u^*_i = \frac{p^*_i}{p^*_i(\theta^*_r)}, \qquad (47)$$
where $p_i = p^W_i$, $p^*_i = p^{*W}_i$ are from Definitions 6.1, 6.2. For notational convenience, define $u_{-1} = 0$, $u^*_{-1} = 0$.

Lemma 11.6
With reference to Definition 11.1, for $0 \leq i \leq d$,
$$v_i = k_i(W)u_i, \qquad v^*_i = k^*_i(W)u^*_i,$$
where $u_i = u^W_i$, $u^*_i = u^{*W}_i$ are from Definition 11.5 and $k_i(W)$, $k^*_i(W)$ are from Definition 10.1.

Proof: Immediate from Lemma 9.4, Theorem 10.4 and Definitions 11.1, 11.5. ✷

Lemma 11.7
With reference to Definition 11.5, for $0 \leq i \leq d-1$,
$$\lambda u_i = c_i(W)u_{i-1} + a_i(W)u_i + b_i(W)u_{i+1}, \qquad (48)$$
$$\lambda u^*_i = c^*_i(W)u^*_{i-1} + a^*_i(W)u^*_i + b^*_i(W)u^*_{i+1}. \qquad (49)$$
Moreover,
$$\lambda u_d - c_d(W)u_{d-1} - a_d(W)u_d = p_{d+1}/p_d(\theta_t),$$
$$\lambda u^*_d - c^*_d(W)u^*_{d-1} - a^*_d(W)u^*_d = p^*_{d+1}/p^*_d(\theta^*_r).$$

Proof: To obtain (48), divide both sides of (18) by $p_{i+1}(\theta_t)$ and eliminate $x_i(W)$ using Lemma 9.3(i). Evaluate the result using Lemma 9.4. The proof of (49) is similar. ✷

Theorem 11.8
With reference to Definition 11.5, for $0 \leq i, j \leq d$,
$$\theta_{t+j}u_i(\theta_{t+j}) = c_i(W)u_{i-1}(\theta_{t+j}) + a_i(W)u_i(\theta_{t+j}) + b_i(W)u_{i+1}(\theta_{t+j}),$$
$$\theta^*_{r+j}u^*_i(\theta^*_{r+j}) = c^*_i(W)u^*_{i-1}(\theta^*_{r+j}) + a^*_i(W)u^*_i(\theta^*_{r+j}) + b^*_i(W)u^*_{i+1}(\theta^*_{r+j}),$$
where $u_{d+1} = 0$, $u^*_{d+1} = 0$.

Proof: Immediate from (48) and (49) with $\lambda = \theta_{t+j}$ and $\lambda = \theta^*_{r+j}$; for $i = d$, also use the last two equations of Lemma 11.7 together with the fact that $p_{d+1}(\theta_{t+j}) = 0$ and $p^*_{d+1}(\theta^*_{r+j}) = 0$. ✷

12 The inner products

Let $W$ be as in Assumption 3.4. In this section, we will look at all inner products involving the elements of a standard basis and a dual standard basis for $W$. Using these inner products, we will show that the polynomials associated with $W$ satisfy relations known as the Askey-Wilson duality. Throughout the entire section, $u$ and $v$ are nonzero vectors in $E_tW$ and $E^*_rW$, respectively. Recall that by Definition 8.2, $\{E^*_{r+i}u\}_{i=0}^{d}$ (resp. $\{E_{t+i}v\}_{i=0}^{d}$) is a standard basis (resp. dual standard basis) for $W$. By (2) and (6), each of these bases is orthogonal. We now compute some square norms.

Theorem 12.1
For $0 \leq i \leq d$,
$$\|E^*_{r+i}u\|^2 = \|u\|^2\,k_i(W)/\nu(W), \qquad (50)$$
$$\|E_{t+i}v\|^2 = \|v\|^2\,k^*_i(W)/\nu(W), \qquad (51)$$
where $\nu(W)$ is from Definition 7.4 and $k_i(W)$, $k^*_i(W)$ are from Definition 10.1.

Proof: Note that
$$\|E^*_{r+i}u\|^2 = \langle E^*_{r+i}u, E^*_{r+i}u\rangle = \langle u, E^*_{r+i}u\rangle = \langle u, v_i(A)E^*_ru\rangle \quad\text{by Theorem 11.4}$$
$$= \langle v_i(A)u, E^*_ru\rangle = v_i(\theta_t)\langle u, E^*_ru\rangle = k_i(W)\langle u, E^*_ru\rangle \quad\text{by Lemma 11.2}.$$
Since $u \in E_tW$, $u = E_tu$. Using this we find that $\langle u, E^*_ru\rangle = \langle E_tu, E^*_rE_tu\rangle = \langle u, E_tE^*_rE_tu\rangle$. Evaluating $E_tE^*_rE_t$ using Lemma 7.5(i) we find that $\langle u, E^*_ru\rangle = \|u\|^2/\nu(W)$. Thus, we obtain (50). Equation (51) is proved similarly. ✷

Our next goal is to compute the inner product between the elements of $\{E^*_{r+i}u\}_{i=0}^{d}$ and $\{E_{t+i}v\}_{i=0}^{d}$. We need the following lemma.

Lemma 12.2
The following hold.

(i) $\langle E^*_ru, E_tv\rangle = \langle u, v\rangle/\nu(W)$.

(ii) $E^*_ru = \dfrac{\langle u, v\rangle}{\|v\|^2}\,v$.

(iii) $E_tv = \dfrac{\langle v, u\rangle}{\|u\|^2}\,u$.

(iv) $\langle u, v\rangle \neq 0$.

(v) $\nu(W)\,|\langle u, v\rangle|^2 = \|u\|^2\|v\|^2$.

In the above lines, $\nu(W)$ is from Definition 7.4.

Proof: (i) Since $v \in E^*_rW$, $v = E^*_rv$. Using this we find that $\langle E^*_ru, E_tv\rangle = \langle E^*_ru, E_tE^*_rv\rangle = \langle u, E^*_rE_tE^*_rv\rangle$. Evaluate $E^*_rE_tE^*_r$ using Lemma 7.5(ii) to obtain the desired result.
(ii) Since $v$ spans $E^*_rW$, $E^*_ru = \alpha v$ for some $\alpha \in \mathbb{C}$. Thus $\langle E^*_ru, v\rangle = \alpha\|v\|^2$. Since $\langle E^*_ru, v\rangle = \langle u, E^*_rv\rangle = \langle u, v\rangle$, we find that $\alpha = \langle u, v\rangle/\|v\|^2$.
(iii) Similar to the proof of (ii).
(iv) Observe that $E^*_ru \neq 0$ since it is an element of a standard basis. It follows from this and (ii) that $\langle u, v\rangle \neq 0$.
(v) Eliminate $E^*_ru$ and $E_tv$ in (i) using (ii) and (iii). ✷

Theorem 12.3 For $0 \leq i, j \leq d$,
$$\langle E^*_{r+i}u, E_{t+j}v\rangle = u_i(\theta_{t+j})\,k_i(W)k^*_j(W)\,\langle u, v\rangle/\nu(W), \qquad (52)$$
$$\langle E^*_{r+i}u, E_{t+j}v\rangle = u^*_j(\theta^*_{r+i})\,k_i(W)k^*_j(W)\,\langle u, v\rangle/\nu(W), \qquad (53)$$
where $\nu(W)$, $k_i(W)$, $k^*_j(W)$ are from Definitions 7.4, 10.1 and $u_i = u^W_i$, $u^*_j = u^{*W}_j$ are from Definition 11.5.

Proof: Note that
$$\langle E^*_{r+i}u, E_{t+j}v\rangle = \langle v_i(A)E^*_ru, E_{t+j}v\rangle \quad\text{by Theorem 11.4}$$
$$= \langle E^*_ru, v_i(A)E_{t+j}v\rangle = v_i(\theta_{t+j})\langle E^*_ru, E_{t+j}v\rangle = v_i(\theta_{t+j})\langle E^*_ru, v^*_j(A^*)E_tv\rangle \quad\text{by Theorem 11.4}$$
$$= v_i(\theta_{t+j})\langle v^*_j(A^*)E^*_ru, E_tv\rangle = v_i(\theta_{t+j})v^*_j(\theta^*_r)\langle E^*_ru, E_tv\rangle = v_i(\theta_{t+j})v^*_j(\theta^*_r)\langle u, v\rangle/\nu(W) \quad\text{by Lemma 12.2(i)}.$$
The result then follows from Lemmas 11.2 and 11.6. Equation (53) is proved similarly. ✷

Theorem 12.4
For $0 \leq i, j \leq d$,
$$u_i(\theta_{t+j}) = u^*_j(\theta^*_{r+i}), \qquad (54)$$
where $u_i = u^W_i$ and $u^*_j = u^{*W}_j$ are from Definition 11.5.

Proof: Compare (52) with (53). ✷

Theorem 12.5
For $0 \leq i, j \leq d$,
$$\frac{p_i(\theta_{t+j})}{p_i(\theta_t)} = \frac{p^*_j(\theta^*_{r+i})}{p^*_j(\theta^*_r)}, \qquad (55)$$
$$\frac{v_i(\theta_{t+j})}{k_i(W)} = \frac{v^*_j(\theta^*_{r+i})}{k^*_j(W)}, \qquad (56)$$
where $p_i = p^W_i$, $p^*_i = p^{*W}_i$, $v_i = v^W_i$, $v^*_i = v^{*W}_i$ are from Definitions 6.1, 6.2 and 11.1.

Proof: Immediate from Definition 11.5, Lemma 11.6 and Theorem 12.4. ✷

Equations (54), (55) and (56) are known as the
Askey-Wilson duality. Combining Theorem 11.8 and Theorem 12.4, we obtain the following result.
Theorem 12.6
For $0 \leq i, j \leq d$,
$$\theta_{t+j}u^*_j(\theta^*_{r+i}) = b_i(W)u^*_j(\theta^*_{r+i+1}) + a_i(W)u^*_j(\theta^*_{r+i}) + c_i(W)u^*_j(\theta^*_{r+i-1}), \qquad (57)$$
$$\theta^*_{r+j}u_j(\theta_{t+i}) = b^*_i(W)u_j(\theta_{t+i+1}) + a^*_i(W)u_j(\theta_{t+i}) + c^*_i(W)u_j(\theta_{t+i-1}), \qquad (58)$$
where $u_j = u^W_j$ and $u^*_j = u^{*W}_j$ are from Definition 11.5.
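The duality (54) can be checked on one explicit thin module: the trivial module of the $d$-cube, where (with $r = t = 0$) $\theta_{t+i} = \theta^*_{r+i} = d-2i$, $a_i = a^*_i = 0$, $b_i = b^*_i = d-i$ and $c_i = c^*_i = i$. The sketch below generates the table of values $u_i(\theta_{t+j})$ from the recurrence (48) and confirms that the table is symmetric, which in this self-dual case is exactly $u_i(\theta_{t+j}) = u^*_j(\theta^*_{r+i})$; it also confirms the orthogonality relation of the next section with the weights $k_h = \binom{d}{h}$ and $\nu = 2^d$. The cube data is standard; the script itself is an illustrative sketch.

```python
from fractions import Fraction
from math import comb

d = 5
theta = [Fraction(d - 2 * j) for j in range(d + 1)]   # theta_{t+j} = theta*_{r+j}
a = [Fraction(0)] * (d + 1)
b = [Fraction(d - i) for i in range(d + 1)]
c = [Fraction(i) for i in range(d + 1)]

# U[i][j] = u_i(theta_{t+j}), generated by (48):
#   lam * u_i = c_i u_{i-1} + a_i u_i + b_i u_{i+1}
U = [[Fraction(1)] * (d + 1)]                          # u_0 = 1
U.append([(theta[j] - a[0]) / b[0] for j in range(d + 1)])
for i in range(1, d):
    U.append([(theta[j] * U[i][j] - c[i] * U[i - 1][j] - a[i] * U[i][j]) / b[i]
              for j in range(d + 1)])

# Askey-Wilson duality (54): the value table is symmetric (self-dual case)
assert all(U[i][j] == U[j][i] for i in range(d + 1) for j in range(d + 1))

# orthogonality with k_h = binom(d,h) and nu = 2^d
k = [Fraction(comb(d, h)) for h in range(d + 1)]
for i in range(d + 1):
    for j in range(d + 1):
        s = sum(U[i][h] * U[j][h] * k[h] for h in range(d + 1))
        assert s == (Fraction(2 ** d) / k[i] if i == j else 0)
```

Exact rational arithmetic (`Fraction`) is used so the assertions test the identities exactly rather than up to rounding.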
13 The orthogonality relations
Let $W$ be as in Assumption 3.4. In this section, we display the transition matrix relating a standard basis and a dual standard basis. Using this and the results of the previous section, we display the orthogonality relations satisfied by the polynomials we have seen in this paper.

Theorem 13.1 Let $u$ and $v$ be nonzero vectors in $E_tW$ and $E^*_rW$, respectively. For $0 \leq i \leq d$,
$$E^*_{r+i}u = \frac{\langle u, v\rangle}{\|v\|^2}\sum_{j=0}^{d}v_i(\theta_{t+j})E_{t+j}v, \qquad (59)$$
$$E_{t+i}v = \frac{\langle v, u\rangle}{\|u\|^2}\sum_{j=0}^{d}v^*_i(\theta^*_{r+j})E^*_{r+j}u, \qquad (60)$$
where $v_i = v^W_i$, $v^*_i = v^{*W}_i$ are from Definition 11.1.

Proof: Combining Lemma 3.2(ii) and the fact that $\sum_{j=0}^{D}E_j = I$, we find that $v = \sum_{j=0}^{d}E_{t+j}v$. By Theorem 11.4 and Lemma 12.2(ii), $E^*_{r+i}u = \frac{\langle u, v\rangle}{\|v\|^2}v_i(A)v$. Therefore,
$$E^*_{r+i}u = \frac{\langle u, v\rangle}{\|v\|^2}\,v_i(A)v = \frac{\langle u, v\rangle}{\|v\|^2}\,v_i(A)\sum_{j=0}^{d}E_{t+j}v = \frac{\langle u, v\rangle}{\|v\|^2}\sum_{j=0}^{d}v_i(\theta_{t+j})E_{t+j}v.$$
Hence, (59) holds. Equation (60) is proved similarly. ✷

Theorem 13.2
For $0 \leq i, j \leq d$,
$$\sum_{h=0}^{d}v_i(\theta_{t+h})v_j(\theta_{t+h})\,k^*_h(W) = \delta_{ij}\,\nu(W)\,k_i(W), \qquad (61)$$
$$\sum_{h=0}^{d}v_h(\theta_{t+i})v_h(\theta_{t+j})\,(k_h(W))^{-1} = \delta_{ij}\,\nu(W)\,(k^*_i(W))^{-1}, \qquad (62)$$
and
$$\sum_{h=0}^{d}v^*_i(\theta^*_{r+h})v^*_j(\theta^*_{r+h})\,k_h(W) = \delta_{ij}\,\nu(W)\,k^*_i(W), \qquad (63)$$
$$\sum_{h=0}^{d}v^*_h(\theta^*_{r+i})v^*_h(\theta^*_{r+j})\,(k^*_h(W))^{-1} = \delta_{ij}\,\nu(W)\,(k_i(W))^{-1}, \qquad (64)$$
where $\nu(W)$, $k_h(W)$, $k^*_h(W)$ are from Definitions 7.4, 10.1 and $v_h = v^W_h$, $v^*_h = v^{*W}_h$ are from Definition 11.1.

Proof: Concerning (61), let $u$ be a nonzero vector in $E_tW$. We compute $\langle E^*_{r+i}u, E^*_{r+j}u\rangle$ in two ways. First, by (6) and (50), $\langle E^*_{r+i}u, E^*_{r+j}u\rangle = \delta_{ij}\|u\|^2k_i(W)/\nu(W)$. Secondly, we compute $\langle E^*_{r+i}u, E^*_{r+j}u\rangle$ by evaluating each of $E^*_{r+i}u$ and $E^*_{r+j}u$ using (59). Simplify the result using (51) and Lemma 12.2(v). We find that $\langle E^*_{r+i}u, E^*_{r+j}u\rangle$ is equal to $\|u\|^2/(\nu(W))^2$ times the left side of (61). Equation (61) follows from these comments. Similarly, we obtain (63). To obtain (62), evaluate (63) using (56). To obtain (64), evaluate (61) using (56). ✷

Theorem 13.3
For $0 \leq i, j \leq d$,
$$\sum_{h=0}^{d}u_i(\theta_{t+h})u_j(\theta_{t+h})\,k^*_h(W) = \delta_{ij}\,\nu(W)\,(k_i(W))^{-1}, \qquad (65)$$
$$\sum_{h=0}^{d}u_h(\theta_{t+i})u_h(\theta_{t+j})\,k_h(W) = \delta_{ij}\,\nu(W)\,(k^*_i(W))^{-1}, \qquad (66)$$
and
$$\sum_{h=0}^{d}u^*_i(\theta^*_{r+h})u^*_j(\theta^*_{r+h})\,k_h(W) = \delta_{ij}\,\nu(W)\,(k^*_i(W))^{-1}, \qquad (67)$$
$$\sum_{h=0}^{d}u^*_h(\theta^*_{r+i})u^*_h(\theta^*_{r+j})\,k^*_h(W) = \delta_{ij}\,\nu(W)\,(k_i(W))^{-1}, \qquad (68)$$
where $\nu(W)$, $k_h(W)$, $k^*_h(W)$ are from Definitions 7.4, 10.1 and $u_h = u^W_h$, $u^*_h = u^{*W}_h$ are from Definition 11.5.

Proof: Evaluate each of (61)–(64) using Lemma 11.6. ✷

Theorem 13.4
For $0 \leq i, j \leq d$,
$$\sum_{h=0}^{d}p_i(\theta_{t+h})p_j(\theta_{t+h})\,k^*_h(W) = \delta_{ij}\,\nu(W)\,x_1(W)x_2(W)\cdots x_i(W), \qquad (69)$$
$$\sum_{h=0}^{d}\frac{p_h(\theta_{t+i})p_h(\theta_{t+j})}{x_1(W)x_2(W)\cdots x_h(W)} = \delta_{ij}\,\nu(W)\,(k^*_i(W))^{-1}, \qquad (70)$$
and
$$\sum_{h=0}^{d}p^*_i(\theta^*_{r+h})p^*_j(\theta^*_{r+h})\,k_h(W) = \delta_{ij}\,\nu(W)\,x^*_1(W)x^*_2(W)\cdots x^*_i(W), \qquad (71)$$
$$\sum_{h=0}^{d}\frac{p^*_h(\theta^*_{r+i})p^*_h(\theta^*_{r+j})}{x^*_1(W)x^*_2(W)\cdots x^*_h(W)} = \delta_{ij}\,\nu(W)\,(k_i(W))^{-1}, \qquad (72)$$
where $x_h(W)$, $\nu(W)$, $k_h(W)$, $k^*_h(W)$ are from Definitions 5.2, 7.4, 10.1 and $p_h = p^W_h$, $p^*_h = p^{*W}_h$ are from Definitions 6.1, 6.2.

Proof: Evaluate each of (61)–(64) using Definition 11.1. Simplify the result using Lemma 9.3(i), (iii). ✷

We now present Theorem 13.2 in matrix form.
Definition 13.5
Define matrices $P = P(W)$ and $P^* = P^*(W)$ in $\mathrm{Mat}_{d+1}(\mathbb{C})$ as follows. For $0 \leq i, j \leq d$, their $(i,j)$-entries are
$$P_{ij} = v_j(\theta_{t+i}), \qquad P^*_{ij} = v^*_j(\theta^*_{r+i}),$$
where $v_j = v^W_j$, $v^*_j = v^{*W}_j$ are from Definition 11.1.

Theorem 13.6
With reference to Definition 13.5, $P^*P = \nu(W)I$, where $\nu(W)$ is from Definition 7.4.

Proof: We compute the $(i,j)$-entry of $P^*P$ using Definition 13.5 and (56). We find that this entry is equal to $(k_i(W))^{-1}$ times the left-hand side of (61). Using (61), we obtain $P^*P = \nu(W)I$. ✷

Theorem 13.7
Let $\flat$ and $\sharp$ be the maps in Definition 8.9. With reference to Definition 13.5, $Y^\sharp P = PY^\flat$ for all $Y \in \mathrm{End}(W)$.

Proof: By Theorem 13.1, the transition matrix from a standard basis to a dual standard basis for $W$ is a scalar multiple of $P$. Therefore, $Y^\sharp P = PY^\flat$. ✷

14 Two more bases for $W$

Let $W$ be as in Assumption 3.4. In Sections 8 and 9, we found two bases for $W$ with respect to which $A$ and $A^*$ are represented by tridiagonal and diagonal matrices. In this section, we will look at two more bases for $W$ with respect to which $A$ and $A^*$ are represented by lower bidiagonal and upper bidiagonal matrices.

Definition 14.1
For $0 \leq i \leq d$, define $\tau_i = \tau^W_i$, $\tau^*_i = \tau^{*W}_i$, $\eta_i = \eta^W_i$, $\eta^*_i = \eta^{*W}_i$ in $\mathbb{C}[\lambda]$ as follows:
$$\tau_i = \prod_{h=0}^{i-1}(\lambda - \theta_{t+h}), \qquad \tau^*_i = \prod_{h=0}^{i-1}(\lambda - \theta^*_{r+h}),$$
$$\eta_i = \prod_{h=0}^{i-1}(\lambda - \theta_{t+d-h}), \qquad \eta^*_i = \prod_{h=0}^{i-1}(\lambda - \theta^*_{r+d-h}).$$
Observe that each of $\tau_i, \tau^*_i, \eta_i, \eta^*_i$ is monic of degree $i$.

Lemma 14.2
For $0 \leq i, j \leq d$,

(i) each of $\tau_i(\theta_{t+j})$, $\tau^*_i(\theta^*_{r+j})$ is $0$ if $j < i$, and nonzero if $j = i$;

(ii) each of $\eta_i(\theta_{t+j})$, $\eta^*_i(\theta^*_{r+j})$ is $0$ if $j > d-i$, and nonzero if $j = d-i$.

Proof: Immediate from Definition 14.1. ✷

Lemma 14.3
Let $v$ be a nonzero vector in $E^*_rW$. Then $\{\tau_i(A)v\}_{i=0}^{d}$ is a basis for $W$.

Proof: By Theorem 5.5 and Lemma 6.3, $\{p_i(A)v\}_{i=0}^{d}$ is a basis for $W$. For $0 \leq i \leq d$, each of $\tau_i$ and $p_i$ is a polynomial of degree $i$. The result follows. ✷

Definition 14.4
For $0 \leq i \leq d$, define $U_i = \tau_i(A)E^*_rW$. For notational convenience, define $U_{-1} = 0$ and $U_{d+1} = 0$.

Lemma 14.5
With reference to Definition 14.4, $U_i$ has dimension $1$ for $0 \leq i \leq d$. Moreover,
$$W = \sum_{i=0}^{d}U_i \quad (\text{direct sum}). \qquad (73)$$

Proof: Immediate from Lemma 14.3 and Definition 14.4. ✷

Lemma 14.6
For $0 \leq i \leq d$,

(i) $\sum_{h=0}^{i}U_h = \sum_{h=0}^{i}E^*_{r+h}W$;

(ii) $\sum_{h=i}^{d}U_h = \sum_{h=i}^{d}E_{t+h}W$.

Proof: Let $v$ be a nonzero vector in $E^*_rW$.
(i) By Lemma 3.1(i), $\tau_j(A)v$ is contained in $\sum_{h=0}^{i}E^*_{r+h}W$ for $0 \leq j \leq i$. Hence $\sum_{h=0}^{i}U_h \subseteq \sum_{h=0}^{i}E^*_{r+h}W$. In this inclusion, equality holds since each side has dimension $i+1$.
(ii) For $i \leq j \leq d$,
$$\tau_j(A)v = \sum_{l=0}^{D}E_l\tau_j(A)v = \sum_{h=0}^{d}E_{t+h}\tau_j(A)v = \sum_{h=0}^{d}\tau_j(\theta_{t+h})E_{t+h}v = \sum_{h=j}^{d}\tau_j(\theta_{t+h})E_{t+h}v \quad\text{by Lemma 14.2}.$$
Hence $\tau_j(A)v \in \sum_{h=i}^{d}E_{t+h}W$ for $i \leq j \leq d$. Thus $\sum_{h=i}^{d}U_h \subseteq \sum_{h=i}^{d}E_{t+h}W$. In this inclusion, equality holds since each side has dimension $d-i+1$. ✷

Lemma 14.7
For $0 \leq i \leq d$,
$$U_i = \Big(\sum_{h=0}^{i}E^*_{r+h}W\Big) \cap \Big(\sum_{h=i}^{d}E_{t+h}W\Big).$$

Proof: By Lemma 14.5, $U_i = (U_0 + U_1 + \cdots + U_i) \cap (U_i + U_{i+1} + \cdots + U_d)$. Combining this with Lemma 14.6, we obtain the desired result. ✷

Lemma 14.8
For $0 \leq i \leq d$,

(i) $(A - \theta_{t+i}I)U_i = U_{i+1}$;

(ii) $(A^* - \theta^*_{r+i}I)U_i = U_{i-1}$.

Proof: (i) Immediate from Definition 14.4.
(ii) Assume $1 \leq i \leq d$; otherwise, we are done since $U_0 = E^*_rW$. Let $v$ be a nonzero vector in $E^*_rW$. Since $A^*E^*_{r+i} = \theta^*_{r+i}E^*_{r+i}$, we have $(A^* - \theta^*_{r+i}I)\big(\sum_{h=0}^{i}E^*_{r+h}W\big) \subseteq \sum_{h=0}^{i-1}E^*_{r+h}W$. By Lemma 3.2(i), we have $(A^* - \theta^*_{r+i}I)\big(\sum_{h=i}^{d}E_{t+h}W\big) \subseteq \sum_{h=i-1}^{d}E_{t+h}W$. Combining these comments with Lemma 14.7, we find that $(A^* - \theta^*_{r+i}I)U_i \subseteq U_{i-1}$. We now show equality holds. Suppose that $(A^* - \theta^*_{r+i}I)U_i \subsetneq U_{i-1}$. Then $(A^* - \theta^*_{r+i}I)U_i = 0$, since $\dim U_{i-1} = 1$. Let $W' = U_i + U_{i+1} + \cdots + U_d$. Observe that $W'$ is nonzero. By (i), $AW' \subseteq W'$. Since $(A^* - \theta^*_{r+i}I)U_i = 0$ and $(A^* - \theta^*_{r+j}I)U_j \subseteq U_{j-1}$ for $i+1 \leq j \leq d$, we find that $A^*W' \subseteq W'$. Hence $W'$ is a nonzero $T$-submodule of $W$. Since the $T$-module $W$ is irreducible, $W' = W$. This contradicts (73), since $i > 0$. Therefore, $(A^* - \theta^*_{r+i}I)U_i = U_{i-1}$. ✷

By Lemma 14.8, for $1 \leq i \leq d$, $U_i$ is invariant under $(A - \theta_{t+i-1}I)(A^* - \theta^*_{r+i}I)$ and the corresponding eigenvalue is nonzero.

Definition 14.9
For $1 \leq i \leq d$, let $\varphi_i = \varphi_i(W)$ be the eigenvalue of $(A - \theta_{t+i-1}I)(A^* - \theta^*_{r+i}I)$ corresponding to $U_i$. Observe that $\varphi_i \neq 0$. We refer to the sequence $\{\varphi_i\}_{i=1}^{d}$ as the first split sequence of $W$. For notational convenience, define $\varphi_0 = 0$.

Theorem 14.10
With respect to the basis for $W$ in Lemma 14.3, the matrices representing $A$, $A^*$ are
$$\begin{pmatrix}
\theta_t & & & & \\
1 & \theta_{t+1} & & & \\
 & 1 & \theta_{t+2} & & \\
 & & \ddots & \ddots & \\
 & & & 1 & \theta_{t+d}
\end{pmatrix},
\qquad
\begin{pmatrix}
\theta^*_r & \varphi_1 & & & \\
 & \theta^*_{r+1} & \varphi_2 & & \\
 & & \theta^*_{r+2} & \ddots & \\
 & & & \ddots & \varphi_d \\
 & & & & \theta^*_{r+d}
\end{pmatrix}.$$

Proof: Immediate from Definitions 14.4, 14.9 and Lemma 14.8. ✷

In Lemmas 14.3–14.8 and Theorem 14.10, we replace $E_{t+i}$ with $E_{t+d-i}$ for $0 \leq i \leq d$ and we routinely obtain the following results.

Lemma 14.11
Let $v$ be a nonzero vector in $E^*_rW$. Then $\{\eta_i(A)v\}_{i=0}^{d}$ is a basis for $W$.

Definition 14.12
For $0 \leq i \leq d$, define $U^\Downarrow_i = \eta_i(A)E^*_rW$. For notational convenience, define $U^\Downarrow_{-1} = 0$ and $U^\Downarrow_{d+1} = 0$.

Lemma 14.13
With reference to Definition 14.12,
$$W = \sum_{i=0}^{d}U^\Downarrow_i \quad (\text{direct sum}). \qquad (74)$$

Lemma 14.14 For $0 \leq i \leq d$,

(i) $\sum_{h=0}^{i}U^\Downarrow_h = \sum_{h=0}^{i}E^*_{r+h}W$;

(ii) $\sum_{h=i}^{d}U^\Downarrow_h = \sum_{h=0}^{d-i}E_{t+h}W$.

Lemma 14.15
For $0 \leq i \leq d$,

(i) $(A - \theta_{t+d-i}I)U^\Downarrow_i = U^\Downarrow_{i+1}$;

(ii) $(A^* - \theta^*_{r+i}I)U^\Downarrow_i = U^\Downarrow_{i-1}$.

By Lemma 14.15, for $1 \leq i \leq d$, $U^\Downarrow_i$ is invariant under $(A - \theta_{t+d-i+1}I)(A^* - \theta^*_{r+i}I)$ and the corresponding eigenvalue is nonzero.

Definition 14.16
For $1 \leq i \leq d$, let $\phi_i = \phi_i(W)$ be the eigenvalue of $(A - \theta_{t+d-i+1}I)(A^* - \theta^*_{r+i}I)$ corresponding to $U^\Downarrow_i$. Observe that $\phi_i \neq 0$. We refer to the sequence $\{\phi_i\}_{i=1}^{d}$ as the second split sequence of $W$.

Theorem 14.17
With respect to the basis for $W$ in Lemma 14.11, the matrices representing $A$, $A^*$ are
$$\begin{pmatrix}
\theta_{t+d} & & & & \\
1 & \theta_{t+d-1} & & & \\
 & 1 & \theta_{t+d-2} & & \\
 & & \ddots & \ddots & \\
 & & & 1 & \theta_t
\end{pmatrix},
\qquad
\begin{pmatrix}
\theta^*_r & \phi_1 & & & \\
 & \theta^*_{r+1} & \phi_2 & & \\
 & & \theta^*_{r+2} & \ddots & \\
 & & & \ddots & \phi_d \\
 & & & & \theta^*_{r+d}
\end{pmatrix}.$$

In [12, Lemma 12.7], it was shown that $\{\varphi_i\}_{i=1}^{d}$ and $\{\phi_i\}_{i=1}^{d}$ are related by the following:
$$\varphi_i = \phi_1\sum_{h=0}^{i-1}\frac{\theta_{t+h} - \theta_{t+d-h}}{\theta_t - \theta_{t+d}} + (\theta^*_{r+i} - \theta^*_r)(\theta_{t+i-1} - \theta_{t+d}) \qquad (1 \leq i \leq d), \qquad (75)$$
$$\phi_i = \varphi_1\sum_{h=0}^{i-1}\frac{\theta_{t+h} - \theta_{t+d-h}}{\theta_t - \theta_{t+d}} + (\theta^*_{r+i} - \theta^*_r)(\theta_{t+d-i+1} - \theta_t) \qquad (1 \leq i \leq d). \qquad (76)$$

Definition 14.18
By the parameter array of $W$, we mean the sequence of scalars
$$(\{\theta_{t+i}\}_{i=0}^{d},\; \{\theta^*_{r+i}\}_{i=0}^{d},\; \{\varphi_i\}_{i=1}^{d},\; \{\phi_i\}_{i=1}^{d}),$$
where $r, t, d$ are from Assumption 3.4, and the $\varphi_i$, $\phi_i$ are from Definitions 14.9, 14.16.
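The split sequences, the bidiagonal shapes of Theorems 14.10 and 14.17, and the relations (75), (76) can all be checked in a small concrete model: the trivial module of the $3$-cube realized by its tridiagonal quotient matrix, a thin module with $r = t = 0$ and $\theta_{t+i} = \theta^*_{r+i} = d-2i$. In the sketch below (illustrative only, not the general Terwilliger-algebra setting), the split sequences are read off as the superdiagonal entries of the matrix representing $A^*$ in the bases $\{\tau_i(A)e_0\}$ and $\{\eta_i(A)e_0\}$.

```python
import numpy as np

d = 3
theta = np.array([float(d - 2 * i) for i in range(d + 1)])  # = theta* (self-dual)
A = np.zeros((d + 1, d + 1))
for i in range(d + 1):
    if i > 0:
        A[i, i - 1] = i          # c_i
    if i < d:
        A[i, i + 1] = d - i      # b_i
Astar = np.diag(theta)

def reps(roots):
    # basis w_0 = e_0, w_{i+1} = (A - roots[i] I) w_i; return the matrices
    # representing A and A* with respect to this basis
    T = np.zeros((d + 1, d + 1)); T[0, 0] = 1.0
    for i in range(d):
        T[:, i + 1] = (A - roots[i] * np.eye(d + 1)) @ T[:, i]
    return np.linalg.solve(T, A @ T), np.linalg.solve(T, Astar @ T)

B1, S1 = reps(theta)             # tau-basis, Theorem 14.10
B2, S2 = reps(theta[::-1])       # eta-basis, Theorem 14.17
assert np.allclose(np.triu(B1, 1), 0) and np.allclose(np.diag(B1, -1), 1)
assert np.allclose(np.tril(S1, -1), 0) and np.allclose(np.tril(S2, -1), 0)

phi = np.diag(S1, 1)             # first split sequence  (varphi_1..varphi_d)
psi = np.diag(S2, 1)             # second split sequence (phi_1..phi_d)
for i in range(1, d + 1):        # verify (75) and (76)
    s = sum((theta[h] - theta[d - h]) / (theta[0] - theta[d]) for h in range(i))
    assert np.isclose(phi[i - 1],
                      psi[0] * s + (theta[i] - theta[0]) * (theta[i - 1] - theta[d]))
    assert np.isclose(psi[i - 1],
                      phi[0] * s + (theta[i] - theta[0]) * (theta[d - i + 1] - theta[0]))
```

For this example the computed sequences are $\{\varphi_i\} = (-6, -8, -6)$ and $\{\phi_i\} = (6, 8, 6)$, both nonzero as required by Definitions 14.9 and 14.16.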
15 Describing $W$ in terms of its parameter array

Let $W$ be as in Assumption 3.4. Up until now, we have associated with $W$ a number of polynomials and parameters. In this section, we express all these polynomials and parameters in terms of the parameter array $(\{\theta_{t+i}\}_{i=0}^{d}, \{\theta^*_{r+i}\}_{i=0}^{d}, \{\varphi_i\}_{i=1}^{d}, \{\phi_i\}_{i=1}^{d})$ of $W$. Recall the polynomials $\tau_i, \tau^*_i, \eta_i, \eta^*_i$ from Definition 14.1.

Theorem 15.1
For ≤ i ≤ d , u i = i X h =0 τ ∗ h ( θ ∗ r + i ) ϕ ϕ · · · ϕ h τ h , (77) u ∗ i = i X h =0 τ h ( θ t + i ) ϕ ϕ · · · ϕ h τ ∗ h , (78) where u i = u Wi , u ∗ i = u ∗ Wi are from Definition 11.5. roof: We first verify (77). Since u i has degree i , there exist complex scalars { α h } ih =0 such that u i = P ih =0 α h τ h . By Lemma 14.2(i), τ ( θ t ) = 1 and τ i ( θ t ) = 0 for 1 ≤ i ≤ d . From these comments and since u i ( θ t ) = 1, we have α = 1. Now assume i ≥
$1$, otherwise we are done. Let $v$ be a nonzero vector in $E^*_r W$. By Theorem 11.4 and Lemma 11.6, $u_i(A)v \in E^*_{r+i} W$. Thus,
$$
0 = (A^* - \theta^*_{r+i} I)\, u_i(A) v
= \sum_{h=0}^{i} \alpha_h A^* \tau_h(A) v - \theta^*_{r+i} \sum_{h=0}^{i} \alpha_h \tau_h(A) v
$$
$$
= \sum_{h=0}^{i} \alpha_h \bigl(\theta^*_{r+h} \tau_h(A) v + \varphi_h \tau_{h-1}(A) v\bigr) - \theta^*_{r+i} \sum_{h=0}^{i} \alpha_h \tau_h(A) v \qquad \text{by Theorem 14.10}
$$
$$
= \sum_{h=0}^{i-1} \bigl(\varphi_{h+1} \alpha_{h+1} + \alpha_h \theta^*_{r+h} - \theta^*_{r+i} \alpha_h\bigr) \tau_h(A) v.
$$
By Lemma 14.3, $\{\tau_h(A)v\}_{h=0}^{i-1}$ are linearly independent. Thus $\varphi_{h+1}\alpha_{h+1} + \alpha_h \theta^*_{r+h} - \theta^*_{r+i}\alpha_h = 0$ for $0 \le h < i$. From this recursion and the fact that $\alpha_0 = 1$, we find that $\alpha_h = \tau^*_h(\theta^*_{r+i})/(\varphi_1 \varphi_2 \cdots \varphi_h)$ for $0 \le h \le i$. Therefore, (77) holds. We now prove (78). Let $f_i$ be the polynomial on the right in (78). Using (77), we find that $f_i(\theta^*_{r+j}) = u_j(\theta_{t+i})$ for $0 \le j \le i$. By Theorem 12.4, $u^*_i(\theta^*_{r+j}) = u_j(\theta_{t+i})$. Therefore, $f_i(\theta^*_{r+j}) = u^*_i(\theta^*_{r+j})$ for $0 \le j \le i$. By this and since $u^*_i$, $f_i$ have degree $i$, we find that $u^*_i = f_i$. $\Box$ Lemma 15.2
For $0 \le i \le d$,
$$
p_i(\theta_t) = \frac{\varphi_1 \varphi_2 \cdots \varphi_i}{\tau^*_i(\theta^*_{r+i})}, \qquad
p^*_i(\theta^*_r) = \frac{\varphi_1 \varphi_2 \cdots \varphi_i}{\tau_i(\theta_{t+i})}, \tag{79}
$$
where $p_i = p_i^W$, $p^*_i = p_i^{*W}$ are from Definitions 6.1, 6.2.

Proof: We first prove the equation on the left in (79). We compute the coefficient of $\lambda^i$ in $u_i$ in two ways: one way using (77) and another way using Definition 11.5. Comparing the results, we obtain the equation on the left in (79). Argue similarly to obtain the equation on the right in (79). $\Box$ Theorem 15.3
For $0 \le i \le d-1$,
$$
b_i(W) = \varphi_{i+1}\, \frac{\tau^*_i(\theta^*_{r+i})}{\tau^*_{i+1}(\theta^*_{r+i+1})}, \qquad
b^*_i(W) = \varphi_{i+1}\, \frac{\tau_i(\theta_{t+i})}{\tau_{i+1}(\theta_{t+i+1})}. \tag{80}
$$
The $b_i(W)$, $b^*_i(W)$ are from Definition 9.1.

Proof: Immediate from Theorem 9.5(i), (iii) and Lemma 15.2. $\Box$ Theorem 15.4
With reference to Definition 5.2,
$$
a_0(W) = \theta_t + \frac{\varphi_1}{\theta^*_r - \theta^*_{r+1}}, \qquad
a_d(W) = \theta_{t+d} + \frac{\varphi_d}{\theta^*_{r+d} - \theta^*_{r+d-1}}, \tag{81}
$$
$$
a_0(W) = \theta_{t+d} + \frac{\phi_1}{\theta^*_r - \theta^*_{r+1}}, \qquad
a_d(W) = \theta_t + \frac{\phi_d}{\theta^*_{r+d} - \theta^*_{r+d-1}}. \tag{82}
$$
For $1 \le i \le d-1$,
$$
a_i(W) = \theta_{t+i} + \frac{\varphi_i}{\theta^*_{r+i} - \theta^*_{r+i-1}} + \frac{\varphi_{i+1}}{\theta^*_{r+i} - \theta^*_{r+i+1}} \tag{83}
$$
$$
= \theta_{t+d-i} + \frac{\phi_i}{\theta^*_{r+i} - \theta^*_{r+i-1}} + \frac{\phi_{i+1}}{\theta^*_{r+i} - \theta^*_{r+i+1}}. \tag{84}
$$
Proof: To obtain (83), we compute the coefficient of $\lambda^i$ in $u_{i+1}$ in two ways. One way is using Lemma 9.4 and Lemma 11.7. Using this approach, we find that the coefficient is equal to
$$
-\,\frac{\sum_{l=0}^{i} a_l(W)}{p_{i+1}(\theta_t)}. \tag{85}
$$
Another way is using (77). Using this approach, the coefficient is equal to
$$
\frac{\tau^*_i(\theta^*_{r+i+1})}{\varphi_1 \varphi_2 \cdots \varphi_i}
- \Bigl(\sum_{l=0}^{i} \theta_{t+l}\Bigr) \frac{\tau^*_{i+1}(\theta^*_{r+i+1})}{\varphi_1 \varphi_2 \cdots \varphi_{i+1}}. \tag{86}
$$
Evaluating (85) using (79) and comparing the result with (86), we obtain (83). Similarly, we obtain the two equations in (81). We now prove (84). Observe that by Definitions 5.2 and 14.16, replacing $E_{t+i}$ with $E_{t+d-i}$ for $0 \le i \le d$ has the effect of switching $(a_i(W), \theta_{t+i}, \varphi_i)$ to $(a_i(W), \theta_{t+d-i}, \phi_i)$. Applying this switching to (83), we obtain (84). Similarly, we obtain the two equations in (82). $\Box$ Theorem 15.5
With reference to Definition 5.2,
$$
a^*_0(W) = \theta^*_r + \frac{\varphi_1}{\theta_t - \theta_{t+1}}, \qquad
a^*_d(W) = \theta^*_{r+d} + \frac{\varphi_d}{\theta_{t+d} - \theta_{t+d-1}}, \tag{87}
$$
$$
a^*_0(W) = \theta^*_{r+d} + \frac{\phi_d}{\theta_t - \theta_{t+1}}, \qquad
a^*_d(W) = \theta^*_r + \frac{\phi_1}{\theta_{t+d} - \theta_{t+d-1}}. \tag{88}
$$
For $1 \le i \le d-1$,
$$
a^*_i(W) = \theta^*_{r+i} + \frac{\varphi_i}{\theta_{t+i} - \theta_{t+i-1}} + \frac{\varphi_{i+1}}{\theta_{t+i} - \theta_{t+i+1}} \tag{89}
$$
$$
= \theta^*_{r+d-i} + \frac{\phi_{d-i+1}}{\theta_{t+i} - \theta_{t+i-1}} + \frac{\phi_{d-i}}{\theta_{t+i} - \theta_{t+i+1}}. \tag{90}
$$
Proof:
To obtain (87) and (89), argue as in the proof of (83). We now prove (90). By Definitions 5.2 and 14.16, replacing $E_{t+i}$ with $E_{t+d-i}$ for $0 \le i \le d$ has the effect of switching $(a^*_i(W), \theta_{t+i}, \varphi_i)$ to $(a^*_{d-i}(W), \theta_{t+d-i}, \phi_i)$. Applying this switching to (89), we obtain
$$
a^*_{d-i}(W) = \theta^*_{r+i} + \frac{\phi_i}{\theta_{t+d-i} - \theta_{t+d-i+1}} + \frac{\phi_{i+1}}{\theta_{t+d-i} - \theta_{t+d-i-1}}. \tag{91}
$$
Changing $i$ to $d-i$ in (91), we obtain (90). $\Box$ Theorem 15.6
For $1 \le i \le d$, $\varphi_i$ is equal to each of the following:
$$
(\theta^*_{r+i} - \theta^*_{r+i-1}) \sum_{j=0}^{i-1} (\theta_{t+j} - a_j(W)), \qquad
(\theta^*_{r+i-1} - \theta^*_{r+i}) \sum_{j=i}^{d} (\theta_{t+j} - a_j(W)), \tag{92}
$$
$$
(\theta_{t+i} - \theta_{t+i-1}) \sum_{j=0}^{i-1} (\theta^*_{r+j} - a^*_j(W)), \qquad
(\theta_{t+i-1} - \theta_{t+i}) \sum_{j=i}^{d} (\theta^*_{r+j} - a^*_j(W)). \tag{93}
$$
The $a_h(W)$, $a^*_h(W)$ are from Definition 5.2.

Proof: To obtain the expression on the left in (92), solve for $\varphi_i$ recursively using (83). From this and Lemma 5.10(i), we obtain the expression on the right in (92). The remaining assertions can be shown similarly. $\Box$ Theorem 15.7
For $1 \le i \le d$, $\phi_i$ is equal to each of the following:
$$
(\theta^*_{r+i} - \theta^*_{r+i-1}) \sum_{j=0}^{i-1} (\theta_{t+d-j} - a_j(W)), \qquad
(\theta^*_{r+i-1} - \theta^*_{r+i}) \sum_{j=i}^{d} (\theta_{t+d-j} - a_j(W)), \tag{94}
$$
$$
(\theta_{t+d-i} - \theta_{t+d-i+1}) \sum_{j=0}^{i-1} (\theta^*_{r+j} - a^*_{d-j}(W)), \qquad
(\theta_{t+d-i+1} - \theta_{t+d-i}) \sum_{j=i}^{d} (\theta^*_{r+j} - a^*_{d-j}(W)). \tag{95}
$$
The $a_h(W)$, $a^*_h(W)$ are from Definition 5.2.

Proof: Similar to the proof of Theorem 15.6. $\Box$ Theorem 15.8
For $0 \le i \le d$, the polynomial $p_i = p_i^W$ from Definition 6.1 is equal to both
$$
\sum_{h=0}^{i} \frac{\varphi_1 \varphi_2 \cdots \varphi_i\, \tau^*_h(\theta^*_{r+i})}{\varphi_1 \varphi_2 \cdots \varphi_h\, \tau^*_i(\theta^*_{r+i})}\, \tau_h, \qquad
\sum_{h=0}^{i} \frac{\phi_1 \phi_2 \cdots \phi_i\, \tau^*_h(\theta^*_{r+i})}{\phi_1 \phi_2 \cdots \phi_h\, \tau^*_i(\theta^*_{r+i})}\, \eta_h. \tag{96}
$$
Proof:
The expression on the left in (96) is equal to $p_i$ by Definition 11.5, (77), and the equation on the left in (79). To show that $p_i$ is equal to the expression on the right in (96), write $u_i$ as a linear combination of $\{\eta_h\}_{h=0}^{i}$. Arguing as in the proof of (77), we find that
$$
u_i = u_i(\theta_{t+d}) \sum_{h=0}^{i} \frac{\tau^*_h(\theta^*_{r+i})}{\phi_1 \phi_2 \cdots \phi_h}\, \eta_h. \tag{97}
$$
To find $u_i(\theta_{t+d})$, we compute the coefficient of $\lambda^i$ in $u_i$ in two ways: one way using (77) and another way using (97). Comparing these results, we obtain
$$
u_i(\theta_{t+d}) = \frac{\phi_1 \phi_2 \cdots \phi_i}{\varphi_1 \varphi_2 \cdots \varphi_i}. \tag{98}
$$
Evaluating $p_i$ using Definition 11.5, (97), (98) and the equation on the left in (79), we find that $p_i$ is equal to the expression on the right in (96). $\Box$ Theorem 15.9
For $0 \le i \le d$, the polynomial $p^*_i = p_i^{*W}$ from Definition 6.2 is equal to both
$$
\sum_{h=0}^{i} \frac{\varphi_1 \varphi_2 \cdots \varphi_i\, \tau_h(\theta_{t+i})}{\varphi_1 \varphi_2 \cdots \varphi_h\, \tau_i(\theta_{t+i})}\, \tau^*_h, \qquad
\sum_{h=0}^{i} \frac{\phi_d \phi_{d-1} \cdots \phi_{d-i+1}\, \tau_h(\theta_{t+i})}{\phi_d \phi_{d-1} \cdots \phi_{d-h+1}\, \tau_i(\theta_{t+i})}\, \eta^*_h. \tag{99}
$$
Proof:
The expression on the left in (99) is equal to $p^*_i$ by Definition 11.5, (78), and the equation on the right in (79). We now prove that $p^*_i$ is equal to the expression on the right in (99). Comparing the equation on the left in (94) with the equation on the right in (95), we find that interchanging $A$ and $A^*$ has the effect of switching $\phi_i$ and $\phi_{d-i+1}$ for $1 \le i \le d$. Applying this switching to the sum on the right in (96), we obtain the sum on the right in (99). $\Box$ Lemma 15.10
For $0 \le i \le d$,
$$
p_i(\theta_{t+d}) = \frac{\phi_1 \phi_2 \cdots \phi_i}{\tau^*_i(\theta^*_{r+i})}, \qquad
p^*_i(\theta^*_{r+d}) = \frac{\phi_d \phi_{d-1} \cdots \phi_{d-i+1}}{\tau_i(\theta_{t+i})},
$$
where $p_i = p_i^W$, $p^*_i = p_i^{*W}$ are from Definitions 6.1, 6.2.

Proof: Immediate from the right side of lines (96) and (99). $\Box$ Theorem 15.11
For $1 \le i \le d$,
$$
c_i(W) = \phi_i\, \frac{\eta^*_{d-i}(\theta^*_{r+i})}{\eta^*_{d-i+1}(\theta^*_{r+i-1})}, \qquad
c^*_i(W) = \phi_{d-i+1}\, \frac{\eta_{d-i}(\theta_{t+i})}{\eta_{d-i+1}(\theta_{t+i-1})}. \tag{100}
$$
The $c_i(W)$, $c^*_i(W)$ are from Definition 9.1.

Proof: We first verify the equation on the right in (100). By (29), replacing $E_{t+i}$ with $E_{t+d-i}$ for $0 \le i \le d$ switches $b^*_i(W)$ and $c^*_{d-i}(W)$. Applying this switching to the equation on the right in (80), we find that for $0 \le i \le d-1$,
$$
c^*_{d-i}(W) = \phi_{i+1}\, \frac{\eta_i(\theta_{t+d-i})}{\eta_{i+1}(\theta_{t+d-i-1})}. \tag{101}
$$
Changing $i$ to $d-i$ in (101), we obtain the equation on the right in (100). We now verify the equation on the left in (100). Recall from the proof of Theorem 15.9 that interchanging $A$ and $A^*$ switches $\phi_i$ and $\phi_{d-i+1}$. Applying this switching to the equation on the right in (100), we obtain the equation on the left in (100). $\Box$ Theorem 15.12

With reference to Definition 7.4,
$$
\nu(W) = \frac{\eta_d(\theta_t)\, \eta^*_d(\theta^*_r)}{\phi_1 \phi_2 \cdots \phi_d}. \tag{102}
$$
Proof:
Let $0 \neq v \in E^*_r W$. By Theorem 14.17, $(A^* - \theta^*_{r+i} I)\eta_i(A)v = \phi_i \eta_{i-1}(A)v$ for $1 \le i \le d$. Hence $\eta^*_d(A^*)\eta_d(A)v = \phi_1 \phi_2 \cdots \phi_d\, v$. By (3) and (7), on $W$ we have $\eta_d(A) = \eta_d(\theta_t) E_t$ and $\eta^*_d(A^*) = \eta^*_d(\theta^*_r) E^*_r$. Thus
$$
\eta^*_d(A^*)\eta_d(A)v = \eta^*_d(\theta^*_r)\, \eta_d(\theta_t)\, E^*_r E_t v.
$$
From these comments and since $v \in E^*_r W$, we obtain
$$
\phi_1 \phi_2 \cdots \phi_d\, v = \eta^*_d(\theta^*_r)\, \eta_d(\theta_t)\, E^*_r E_t E^*_r v.
$$
Evaluate $E^*_r E_t E^*_r$ using Theorem 7.5(ii). The result follows. $\Box$ Theorem 15.13
With reference to Definitions 5.2 and 10.1,
$$
k_i(W) = \frac{\varphi_1 \varphi_2 \cdots \varphi_i}{\phi_1 \phi_2 \cdots \phi_i}\,
\frac{\eta^*_d(\theta^*_r)}{\tau^*_i(\theta^*_{r+i})\, \eta^*_{d-i}(\theta^*_{r+i})} \qquad (0 \le i \le d), \tag{103}
$$
$$
k^*_i(W) = \frac{\varphi_1 \varphi_2 \cdots \varphi_i}{\phi_d \phi_{d-1} \cdots \phi_{d-i+1}}\,
\frac{\eta_d(\theta_t)}{\tau_i(\theta_{t+i})\, \eta_{d-i}(\theta_{t+i})} \qquad (0 \le i \le d), \tag{104}
$$
$$
x_i(W) = \varphi_i \phi_i\, \frac{\tau^*_{i-1}(\theta^*_{r+i-1})\, \eta^*_{d-i}(\theta^*_{r+i})}{\tau^*_i(\theta^*_{r+i})\, \eta^*_{d-i+1}(\theta^*_{r+i-1})} \qquad (1 \le i \le d), \tag{105}
$$
$$
x^*_i(W) = \varphi_i \phi_{d-i+1}\, \frac{\tau_{i-1}(\theta_{t+i-1})\, \eta_{d-i}(\theta_{t+i})}{\tau_i(\theta_{t+i})\, \eta_{d-i+1}(\theta_{t+i-1})} \qquad (1 \le i \le d). \tag{106}
$$
Proof:
Evaluate the equations in (37), (38) and in Lemma 9.3(i), (iii) using Theorems 15.3 and 15.11. $\Box$

For the rest of this section, we find alternative formulae for the intersection and dual intersection numbers of $W$. The reason for doing this is that the formulae given in Theorems 15.3 and 15.11 involve large products which may not be easy to compute. We will need the following lemma. Lemma 15.14
For $0 \le i \le d$,
$$
c_i(W)\tau^*_1(\theta^*_{r+i-1}) + a_i(W)\tau^*_1(\theta^*_{r+i}) + b_i(W)\tau^*_1(\theta^*_{r+i+1})
= \varphi_1 + \theta_{t+1}\tau^*_1(\theta^*_{r+i}), \tag{107}
$$
$$
c^*_i(W)\tau_1(\theta_{t+i-1}) + a^*_i(W)\tau_1(\theta_{t+i}) + b^*_i(W)\tau_1(\theta_{t+i+1})
= \varphi_1 + \theta^*_{r+1}\tau_1(\theta_{t+i}). \tag{108}
$$
The $a_i(W)$, $b_i(W)$, $c_i(W)$ (resp. $a^*_i(W)$, $b^*_i(W)$, $c^*_i(W)$) are the intersection numbers (resp. dual intersection numbers) of $W$.

Proof: By (49), $u^*_1 = (\lambda - a^*_0(W))/b^*_0(W)$. Use this to evaluate (57) with $j = 1$. Eliminate $a^*_0(W)$ in the resulting equation using the expression on the left of (87). Simplify using Lemma 9.3(ii) to obtain (107). The proof of (108) is similar. $\Box$ Theorem 15.15
The intersection numbers of $W$ are as follows:
$$
b_0(W) = \frac{\varphi_1}{\theta^*_{r+1} - \theta^*_r}, \tag{109}
$$
$$
b_i(W) = \frac{(\theta_t - a_i(W))(\theta^*_{r+i} - \theta^*_{r+i-1}) + (\theta_t - \theta_{t+1})(\theta^*_r - \theta^*_{r+i}) + \varphi_1}{\theta^*_{r+i+1} - \theta^*_{r+i-1}} \qquad (1 \le i \le d-1), \tag{110}
$$
$$
c_i(W) = \frac{(\theta_t - a_i(W))(\theta^*_{r+i} - \theta^*_{r+i+1}) + (\theta_t - \theta_{t+1})(\theta^*_r - \theta^*_{r+i}) + \varphi_1}{\theta^*_{r+i-1} - \theta^*_{r+i+1}} \qquad (1 \le i \le d-1), \tag{111}
$$
$$
c_d(W) = \frac{\varphi_1 + (\theta_{t+1} - \theta_t)(\theta^*_{r+d} - \theta^*_r)}{\theta^*_{r+d-1} - \theta^*_{r+d}}. \tag{112}
$$
To obtain $b^*_i(W)$ and $c^*_i(W)$, replace $(\theta_{t+j}, \theta^*_{r+j}, a_j(W))$ with $(\theta^*_{r+j}, \theta_{t+j}, a^*_j(W))$.

Proof: To obtain (109), eliminate $a_0(W)$ in the equation on the left of (81) using Lemma 9.3(ii). To obtain (110) and (111), solve the system of equations given by Lemma 9.3(ii) and (107). To obtain (112), set $i = d$ in (107) and eliminate $a_d(W)$ using Lemma 9.3(ii). The proof of the assertion regarding the dual intersection numbers of $W$ is similar. $\Box$

By Theorems 15.4 and 15.15, the intersection numbers (resp. dual intersection numbers) of $W$ can be expressed in terms of the parameter array of $W$. By (75) and (76), the parameter array of $W$ is determined by the eigenvalue sequence of $W$, the dual eigenvalue sequence of $W$, and $\varphi_1(W)$. Hence, we now solve for the intersection numbers (resp. dual intersection numbers) of $W$ in terms of these parameters. But first we need the following lemmas. Lemma 15.16
Assume $d \ge 2$. Then the scalar $\varphi_2$ is equal to both
$$
\varphi_1\Bigl(1 + \frac{\theta_{t+1} - \theta_{t+d-1}}{\theta_t - \theta_{t+d}}\Bigr)
+ (\theta^*_{r+1} - \theta^*_r)(\theta_{t+d} + \theta_{t+d-1} - \theta_t - \theta_{t+1})
+ (\theta^*_{r+2} - \theta^*_r)(\theta_{t+1} - \theta_{t+d}), \tag{113}
$$
$$
\varphi_1\Bigl(1 + \frac{\theta^*_{r+1} - \theta^*_{r+d-1}}{\theta^*_r - \theta^*_{r+d}}\Bigr)
+ (\theta_{t+1} - \theta_t)(\theta^*_{r+d} + \theta^*_{r+d-1} - \theta^*_r - \theta^*_{r+1})
+ (\theta_{t+2} - \theta_t)(\theta^*_{r+1} - \theta^*_{r+d}). \tag{114}
$$
Proof:
To obtain (113), set $i = 2$ in (75) and evaluate $\phi_1$ using (76). Comparing the formula for $\varphi_i$ on the left in lines (92) and (93), we find that interchanging $A$ and $A^*$ has no effect on $\varphi_i$ for $1 \le i \le d$. Applying this switching to (113), we obtain (114). $\Box$ Lemma 15.17
Assume $d \ge 2$. Then for $0 \le i \le d$,
$$
c_i(W)\tau^*_2(\theta^*_{r+i-1}) + a_i(W)\tau^*_2(\theta^*_{r+i}) + b_i(W)\tau^*_2(\theta^*_{r+i+1})
= \varphi_2 \tau^*_1(\theta^*_{r+i}) + \theta_{t+2}\tau^*_2(\theta^*_{r+i}), \tag{115}
$$
$$
c^*_i(W)\tau_2(\theta_{t+i-1}) + a^*_i(W)\tau_2(\theta_{t+i}) + b^*_i(W)\tau_2(\theta_{t+i+1})
= \varphi_2 \tau_1(\theta_{t+i}) + \theta^*_{r+2}\tau_2(\theta_{t+i}). \tag{116}
$$
The $a_i(W)$, $b_i(W)$, $c_i(W)$ (resp. $a^*_i(W)$, $b^*_i(W)$, $c^*_i(W)$) are the intersection numbers (resp. dual intersection numbers) of $W$.

Proof: Eliminating $u^*_2$ in (57) with $j = 2$ using (78), we obtain
$$
c_i(W) + a_i(W) + b_i(W)
+ \frac{\tau_1(\theta_{t+2})}{\varphi_1}\bigl(c_i(W)\tau^*_1(\theta^*_{r+i-1}) + a_i(W)\tau^*_1(\theta^*_{r+i}) + b_i(W)\tau^*_1(\theta^*_{r+i+1})\bigr) \tag{117}
$$
$$
+ \frac{\tau_2(\theta_{t+2})}{\varphi_1 \varphi_2}\bigl(c_i(W)\tau^*_2(\theta^*_{r+i-1}) + a_i(W)\tau^*_2(\theta^*_{r+i}) + b_i(W)\tau^*_2(\theta^*_{r+i+1})\bigr)
$$
$$
= \theta_{t+2}\Bigl(1 + \frac{\tau_1(\theta_{t+2})}{\varphi_1}\tau^*_1(\theta^*_{r+i}) + \frac{\tau_2(\theta_{t+2})}{\varphi_1 \varphi_2}\tau^*_2(\theta^*_{r+i})\Bigr).
$$
Simplify the first three terms of (117) using Lemma 9.3(ii). Evaluating the coefficient of $\tau_1(\theta_{t+2})/\varphi_1$ in (117) using (107), we routinely obtain (115). The proof of (116) is similar. $\Box$ Theorem 15.18
The intersection numbers of $W$ are as follows:
$$
b_0(W) = \frac{\varphi_1}{\theta^*_{r+1} - \theta^*_r}, \tag{118}
$$
$$
b_i(W) = \frac{\varphi_1 f^+_i + g^+_i}{(\theta^*_{r+i+1} - \theta^*_{r+i})(\theta^*_{r+i+1} - \theta^*_{r+i-1})} \qquad (1 \le i \le d-1), \tag{119}
$$
$$
c_i(W) = \frac{\varphi_1 f^-_i + g^-_i}{(\theta^*_{r+i-1} - \theta^*_{r+i})(\theta^*_{r+i-1} - \theta^*_{r+i+1})} \qquad (1 \le i \le d-1), \tag{120}
$$
$$
c_d(W) = \frac{\varphi_1 + (\theta_{t+1} - \theta_t)(\theta^*_{r+d} - \theta^*_r)}{\theta^*_{r+d-1} - \theta^*_{r+d}}, \tag{121}
$$
where
$$
f^{\pm}_i = \theta^*_{r+1} - \theta^*_{r+i\mp 1} - \frac{(\theta^*_{r+i} - \theta^*_r)(\theta^*_{r+1} - \theta^*_{r+d-1})}{\theta^*_{r+d} - \theta^*_r},
$$
$$
g^{\pm}_i = (\theta^*_{r+i} - \theta^*_r)\bigl((\theta_{t+2} - \theta_{t+1})(\theta^*_{r+i} - \theta^*_{r+d}) - (\theta_{t+1} - \theta_t)(\theta^*_{r+i\mp 1} - \theta^*_{r+d-1})\bigr),
$$
provided $d \ge 2$. To obtain $b^*_i(W)$ and $c^*_i(W)$, replace $(\theta_{t+j}, \theta^*_{r+j})$ with $(\theta^*_{r+j}, \theta_{t+j})$.

Proof: Observe that (118) and (121) are (109) and (112). To obtain (119) and (120), eliminate $a_i(W)$ in (107) and (115) using Lemma 9.3(ii). Then for $1 \le i \le d-1$,
$$
c_i(W)(\theta^*_{r+i-1} - \theta^*_{r+i}) + b_i(W)(\theta^*_{r+i+1} - \theta^*_{r+i})
= \varphi_1 + (\theta_{t+1} - \theta_t)(\theta^*_{r+i} - \theta^*_r), \tag{122}
$$
$$
c_i(W)h_i(\theta^*_{r+i-1}) + b_i(W)h_i(\theta^*_{r+i+1})
= \varphi_2(\theta^*_{r+i} - \theta^*_r) + (\theta_{t+2} - \theta_t)(\theta^*_{r+i} - \theta^*_r)(\theta^*_{r+i} - \theta^*_{r+1}), \tag{123}
$$
where
$$
h_i(\lambda) = (\lambda - \theta^*_{r+i})(\lambda + \theta^*_{r+i} - \theta^*_{r+1} - \theta^*_r).
$$
Eliminate $\varphi_2$ in (123) using (114). Solving the system of equations (122), (123), we obtain (119) and (120). Argue similarly, evaluating $\varphi_2$ using (113), to obtain the formula for the dual intersection numbers of $W$. $\Box$ Lemma 15.19
Given vertices $y, z$ in $X$, let $W$ (resp. $W'$) be the trivial $T(y)$-module (resp. $T(z)$-module) of $\Gamma$. Then $W$ and $W'$ have the same parameter array. Proof:
By Lemma 9.6, $W$ and $W'$ have the same intersection numbers and dual intersection numbers. Thus $W$ and $W'$ both have eigenvalue sequence $\{\theta_i\}_{i=0}^{D}$ and dual eigenvalue sequence $\{\theta^*_i\}_{i=0}^{D}$. By (118), $\varphi_1(W) = \varphi_1(W')$. Using (75) and (76), we find that $W$ and $W'$ have the same first split sequence and second split sequence. $\Box$ Definition 15.20
By the parameter array of
Γ, we mean the parameter array of the trivial T ( x )-module.Observe that this parameter array is independent of the choice of x by Lemma 15.19.
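The relations (75), (76) between the two split sequences take a particularly simple form at $i = 1$ and $i = d$; these endpoint values are used repeatedly in Section 16. As a quick check (a short derivation, not in the original text):

```latex
% At i = 1 the sum in (76) is the single term h = 0, which equals 1, so
\phi_1 = \varphi_1 + (\theta^*_{r+1} - \theta^*_r)(\theta_{t+d} - \theta_t).
% At i = d the sum telescopes:
\sum_{h=0}^{d-1} \frac{\theta_{t+h} - \theta_{t+d-h}}{\theta_t - \theta_{t+d}}
= \frac{\sum_{h=0}^{d-1}\theta_{t+h} - \sum_{k=1}^{d}\theta_{t+k}}{\theta_t - \theta_{t+d}} = 1,
% so (76) at i = d reduces to
\phi_d = \varphi_1 + (\theta^*_{r+d} - \theta^*_r)(\theta_{t+1} - \theta_t).
```

The same telescoping applied to (75) gives the analogous evaluations of $\varphi_1$ and $\varphi_d$ in terms of $\phi_1$.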
16 Isomorphism Classes of Thin Irreducible T-modules
In Corollary 9.7, we mentioned a set of scalars that determines the isomorphism class of a thin irreducible $T$-module. As we have seen in Theorem 15.18, there are many relations among these scalars. We now consider a much smaller set of scalars that determines the isomorphism class. Let us first consider some consequences of (75), (76). Lemma 16.1
Let $W$ be as in Assumption 3.4. Let $\{\varphi_i\}_{i=1}^{d}$, $\{\phi_i\}_{i=1}^{d}$ denote the first split sequence and second split sequence of $W$, respectively. Then
$$
\phi_1 = \varphi_1 + (\theta^*_{r+1} - \theta^*_r)(\theta_{t+d} - \theta_t),
$$
$$
\phi_d = \varphi_1 + (\theta^*_{r+d} - \theta^*_r)(\theta_{t+1} - \theta_t),
$$
$$
\varphi_d = \phi_1 + (\theta^*_{r+d} - \theta^*_r)(\theta_{t+d-1} - \theta_{t+d}).
$$
Proof:
Immediate from (75) and (76). ✷ Lemma 16.2
Suppose that $W$ and $W'$ are thin irreducible $T$-modules with the same endpoint, dual endpoint and diameter $d > 0$. Then the following are equivalent:
$$
\varphi_1(W) = \varphi_1(W'), \qquad \varphi_d(W) = \varphi_d(W'),
$$
$$
\phi_1(W) = \phi_1(W'), \qquad \phi_d(W) = \phi_d(W'),
$$
$$
a_0(W) = a_0(W'), \qquad a_d(W) = a_d(W'),
$$
$$
a^*_0(W) = a^*_0(W'), \qquad a^*_d(W) = a^*_d(W').
$$
Proof:
Combine Lemma 16.1, (81), (82), (87), (88). ✷ Theorem 16.3
Suppose that $W$ and $W'$ are thin irreducible $T$-modules with common diameter $d$.
(i) Assume $d = 0$. Then $W$ and $W'$ are isomorphic as $T$-modules if and only if they have the same endpoint and dual endpoint.
(ii) Assume $d > 0$. Then $W$ and $W'$ are isomorphic as $T$-modules if and only if they have the same endpoint, dual endpoint and all of the quantities in Lemma 16.2.
Proof: (i) Immediate from Lemma 9.7.
(ii) By Theorem 15.18, $W$ and $W'$ have the same intersection numbers (resp. dual intersection numbers) if and only if they have the same endpoint, dual endpoint, diameter and $\varphi_1(W) = \varphi_1(W')$. Combining this with Lemmas 9.7 and 16.2, we obtain the desired result. $\Box$
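The mechanism behind the equivalences in Lemma 16.2 is worth spelling out once. A sketch (the remaining equivalences follow the same pattern, via (82), (87), (88) and Lemma 16.1):

```latex
% For W, W' as in Lemma 16.2, the sequences {\theta_{t+i}} and {\theta^*_{r+i}}
% are determined by \Gamma and the common r, t, d, hence shared by W and W'.
% Subtracting (81) for W' from (81) for W:
a_0(W) - a_0(W') = \frac{\varphi_1(W) - \varphi_1(W')}{\theta^*_r - \theta^*_{r+1}},
% and subtracting the first relation of Lemma 16.1 for W' from that for W:
\phi_1(W) - \phi_1(W') = \varphi_1(W) - \varphi_1(W').
% So each difference vanishes precisely when \varphi_1(W) = \varphi_1(W') does.
```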
17 Two examples of $Q$-polynomial distance-regular graphs

In this section, we apply the results obtained in Section 15 to two families of $Q$-polynomial distance-regular graphs. We continue to discuss the $T$-module $W$ in Assumption 3.4, but now we impose extra conditions on $\Gamma$. Definition 17.1
The graph $\Gamma$ is said to have $q$-Racah type whenever its parameter array $(\{\theta_i\}_{i=0}^{D}; \{\theta^*_i\}_{i=0}^{D}; \{\varphi_i\}_{i=1}^{D}; \{\phi_i\}_{i=1}^{D})$ satisfies the following. For $0 \le i \le D$,
$$
\theta_i = \theta_0 + h q^{-i}(1 - q^i)(1 - s q^{i+1}),
$$
$$
\theta^*_i = \theta^*_0 + h^* q^{-i}(1 - q^i)(1 - s^* q^{i+1}).
$$
For $1 \le i \le D$,
$$
\varphi_i = h h^* q^{1-2i}(1 - q^i)(1 - q^{i-D-1})(1 - r_1 q^i)(1 - r_2 q^i),
$$
$$
\phi_i = h h^* q^{1-2i}(1 - q^i)(1 - q^{i-D-1})(r_1 - s^* q^i)(r_2 - s^* q^i)/s^*.
$$
In the above, $q, h, h^*, r_1, r_2, s, s^*$ are complex scalars such that $r_1 r_2 = s s^* q^{D+1}$, $h h^* s s^* \neq 0$, and $q \notin \{-1, 0, 1\}$. Lemma 17.2
Let $\Gamma$ be as in Definition 17.1. Then for $0 \le i, j \le D$,
$$
\theta_i - \theta_j = h(q^i - q^j)(sq - q^{-i-j}),
$$
$$
\theta^*_i - \theta^*_j = h^*(q^i - q^j)(s^* q - q^{-i-j}).
$$
Proof:
Routine calculation using Definition 17.1. ✷ Lemma 17.3
With reference to Definition 17.1, none of $q^i$, $r_1 q^i$, $r_2 q^i$, $s^* q^i / r_1$, $s^* q^i / r_2$ is equal to $1$ for $1 \le i \le D$. Moreover, neither of $s q^i$, $s^* q^i$ is equal to $1$ for $2 \le i \le 2D$.

Proof: The first assertion follows from Definition 17.1 and the fact that for $1 \le i \le D$, $\theta_i \neq \theta_0$, $\theta^*_i \neq \theta^*_0$, $\varphi_i \neq 0$, $\phi_i \neq 0$. The second assertion is immediate from Lemma 17.2 and the fact that the eigenvalues (resp. dual eigenvalues) of $\Gamma$ are mutually distinct. $\Box$ Lemma 17.4
Let $\Gamma$ be as in Definition 17.1. Let $\{\varphi_i(W)\}_{i=1}^{d}$ and $\{\phi_i(W)\}_{i=1}^{d}$ be the first split sequence and second split sequence of $W$, respectively. Then there exists $\tau(W) \in \mathbb{C}$ such that for $1 \le i \le d$,
$$
\varphi_i(W) = h h^* (1 - q^i)(1 - q^{d-i+1})\bigl(\tau(W) - s s^* q^{r+t+i+1} - q^{-r-t-i-d}\bigr), \tag{124}
$$
$$
\phi_i(W) = h h^* (1 - q^i)(1 - q^{d-i+1})\bigl(\tau(W) - s^* q^{r-t-d+i} - s q^{t-r-i+1}\bigr). \tag{125}
$$
Proof:
Since $h$, $h^*$ are both nonzero and $q$, $q^d$ are both not equal to $1$, there exists $\tau(W)$ such that (124) holds for $i = 1$. Plugging $\varphi_1(W)$ into (76) and using Lemma 17.2, we routinely obtain that (125) holds for $1 \le i \le d$. Evaluating (75) using (125) with $i = 1$ and repeating the same argument, we find that (124) holds for $1 \le i \le d$. $\Box$

We make a comment about the notation used in Lemma 17.4. In the proof of [13, Theorem 35.15], there are scalars $\tau$, $h$, $h^*$. Our present $h$, $h^*$ are the same as those in [13, Theorem 35.15]; however, our $\tau(W)$ is equal to $\tau/(h h^*)$. Theorem 17.5

Let $\Gamma$ be as in Definition 17.1. Let $r_1(W)$, $r_2(W)$ be the roots of
$$
\lambda^2 - \tau(W)\, q^{r+t+d}\, \lambda + s s^* q^{2r+2t+d+1} = 0,
$$
where $\tau(W)$ is from Lemma 17.4. Then for $1 \le i \le d$,
$$
\varphi_i(W) = h h^* q^{1-2i-t-r}(1 - q^i)(1 - q^{i-d-1})(1 - r_1(W) q^i)(1 - r_2(W) q^i), \tag{126}
$$
$$
\phi_i(W) = h h^* q^{1-2i-t-r}(1 - q^i)(1 - q^{i-d-1})(r_1(W) - s^* q^{i+2r})(r_2(W) - s^* q^{i+2r})/(s^* q^{2r}). \tag{127}
$$
Proof:
Note that
$$
r_1(W)\, r_2(W) = s s^* q^{2r+2t+d+1}, \qquad r_1(W) + r_2(W) = \tau(W)\, q^{r+t+d}. \tag{128}
$$
Eliminating $\tau(W)$ and $s s^*$ in (124) using (128), we routinely obtain (126). Arguing similarly, we obtain (127). $\Box$

The next theorem involves basic hypergeometric series. For the definition, see [7, p. 4].
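As a consistency check (a derivation, not part of the original argument), the quadratic in Theorem 17.5 specializes for the trivial $T$-module to the constraint imposed in Definition 17.1:

```latex
% (128) is Vieta's formulas for the quadratic of Theorem 17.5:
r_1(W)\, r_2(W) = s s^* q^{2r+2t+d+1}, \qquad
r_1(W) + r_2(W) = \tau(W)\, q^{r+t+d}.
% For the trivial T-module one has r = t = 0 and d = D, so the first relation reads
r_1(W)\, r_2(W) = s s^* q^{D+1},
% which is exactly the constraint r_1 r_2 = s s^* q^{D+1} of Definition 17.1;
% thus r_1(W), r_2(W) may be taken to be the scalars r_1, r_2 of \Gamma.
```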
Theorem 17.6
Let $\Gamma$ be as in Definition 17.1. Then
$$
u_i(\theta_{t+j}) = {}_4\phi_3 \left(
\begin{matrix}
q^{-i},\ s^* q^{2r+i+1},\ q^{-j},\ s q^{2t+j+1} \\
r_1(W)\, q,\ r_2(W)\, q,\ q^{-d}
\end{matrix}
\ \middle|\ q,\, q \right) \qquad (0 \le i, j \le d),
$$
where $u_i = u_i^W$ is from Definition 11.5 and $r_1(W)$, $r_2(W)$ are from Theorem 17.5.

Proof: Routine calculation using (77), Lemma 17.2 and Theorem 17.5. $\Box$

The polynomials $u_i$ are $q$-Racah polynomials. For the definition of $q$-Racah polynomials, see [1]. Theorem 17.7
Let $\Gamma$ be as in Definition 17.1. Then the intersection numbers of $W$ are as follows:
$$
b_0(W) = \frac{h q^{-t}(1 - q^{-d})(1 - r_1(W) q)(1 - r_2(W) q)}{1 - s^* q^{2r+2}},
$$
$$
b_i(W) = \frac{h q^{-t}(1 - q^{i-d})(1 - s^* q^{2r+i+1})(1 - r_1(W) q^{i+1})(1 - r_2(W) q^{i+1})}{(1 - s^* q^{2r+2i+1})(1 - s^* q^{2r+2i+2})} \qquad (1 \le i \le d-1),
$$
$$
c_i(W) = \frac{h q^{-t}(1 - q^i)(1 - s^* q^{2r+i+d+1})(r_1(W) - s^* q^{2r+i})(r_2(W) - s^* q^{2r+i})}{s^* q^{2r+d}(1 - s^* q^{2r+2i})(1 - s^* q^{2r+2i+1})} \qquad (1 \le i \le d-1),
$$
$$
c_d(W) = \frac{h q^{-t}(1 - q^d)(r_1(W) - s^* q^{2r+d})(r_2(W) - s^* q^{2r+d})}{s^* q^{2r+d}(1 - s^* q^{2r+2d})},
$$
$$
a_i(W) = \theta_t - b_i(W) - c_i(W) \qquad (0 \le i \le d),
$$
where $r_1(W)$, $r_2(W)$ are from Theorem 17.5. To obtain the dual intersection numbers of $W$, replace $(h, s^*, r, t)$ with $(h^*, s, t, r)$.

Proof: Evaluate the equations on the left in (80) and (100) using Lemma 17.2 and Theorem 17.5. $\Box$ Corollary 17.8
Let $\Gamma$ be as in Definition 17.1. Then the intersection numbers of $\Gamma$ are as follows:
$$
b_0 = \frac{h(1 - q^{-D})(1 - r_1 q)(1 - r_2 q)}{1 - s^* q^2},
$$
$$
b_i = \frac{h(1 - q^{i-D})(1 - s^* q^{i+1})(1 - r_1 q^{i+1})(1 - r_2 q^{i+1})}{(1 - s^* q^{2i+1})(1 - s^* q^{2i+2})} \qquad (1 \le i \le D-1),
$$
$$
c_i = \frac{h(1 - q^i)(1 - s^* q^{i+D+1})(r_1 - s^* q^i)(r_2 - s^* q^i)}{s^* q^D (1 - s^* q^{2i})(1 - s^* q^{2i+1})} \qquad (1 \le i \le D-1),
$$
$$
c_D = \frac{h(1 - q^D)(r_1 - s^* q^D)(r_2 - s^* q^D)}{s^* q^D (1 - s^* q^{2D})},
$$
$$
a_i = \theta_0 - b_i - c_i \qquad (0 \le i \le D),
$$
where $r_1$, $r_2$ are from Definition 17.1. To obtain the dual intersection numbers of $\Gamma$, replace $(h, s^*)$ with $(h^*, s)$.

Proof: Apply Theorem 17.7 with $W$ equal to the trivial $T$-module and use Lemma 9.6. $\Box$

We now turn our attention to graphs with classical parameters.
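Before moving on, we record the routine calculation behind Lemma 17.2, since it underlies all of the $q$-Racah formulae above (the dual relation is identical, with $h$, $s$ replaced by $h^*$, $s^*$):

```latex
\theta_i - \theta_j
= h\bigl(q^{-i}(1 - q^i)(1 - s q^{i+1}) - q^{-j}(1 - q^j)(1 - s q^{j+1})\bigr)
% expanding q^{-m}(1 - q^m)(1 - s q^{m+1}) = q^{-m} - 1 - sq + s q^{m+1}:
= h\bigl(q^{-i} - q^{-j} + sq\,(q^{i} - q^{j})\bigr)
% and since q^{-i} - q^{-j} = -(q^i - q^j)\, q^{-i-j}:
= h\,(q^i - q^j)\bigl(sq - q^{-i-j}\bigr).
```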
Definition 17.9
Let $b, \alpha, \sigma \in \mathbb{C}$ with $b \notin \{-1, 0, 1\}$. The graph $\Gamma$ is said to have classical parameters $(D, b, \alpha, \sigma)$ whenever
$$
c_i = \frac{b^i - 1}{b - 1}\Bigl(1 + \alpha\, \frac{b^{i-1} - 1}{b - 1}\Bigr) \qquad (1 \le i \le D),
$$
$$
b_i = \frac{b^D - b^i}{b - 1}\Bigl(\sigma - \alpha\, \frac{b^i - 1}{b - 1}\Bigr) \qquad (0 \le i \le D-1).
$$
Theorem 17.10 [4, Corollary 8.4.4]
Let $\Gamma$ be as in Definition 17.9. The following hold.
(i) There exists an ordering $\{\theta_i\}_{i=0}^{D}$ of the eigenvalues of $\Gamma$ such that for $0 \le i \le D$,
$$
\theta_i = \eta + \mu b^i + h b^{-i},
$$
where
$$
\eta = \frac{(\sigma - 1)(1 - b) - \alpha(b^D + 1)}{(b - 1)^2}, \qquad
\mu = \frac{\alpha - b + 1}{(b - 1)^2}, \qquad
h = \frac{b^D(\sigma b - \sigma + \alpha)}{(b - 1)^2}.
$$
(ii) $\Gamma$ is $Q$-polynomial with respect to $\{\theta_i\}_{i=0}^{D}$.

Let $E_1$ be the primitive idempotent of $\Gamma$ corresponding to $\theta_1$. Let $\{\theta^*_i\}_{i=0}^{D}$ be its corresponding dual eigenvalue sequence. Our next goal is to express these values in terms of $\alpha, b, \sigma, D$. Theorem 17.11 [4, Corollary 8.4.4]
Let $\{\theta^*_i\}_{i=0}^{D}$ be the dual eigenvalues corresponding to $E_1$. Then for $0 \le i \le D$,
$$
\theta^*_i = \eta^* + h^* b^{-i}, \tag{129}
$$
where
$$
\eta^* = \theta^*_0\, \frac{b(\sigma - \alpha)(b^{D-1} - 1) - b(b - 1) - \sigma(b^D - 1)}{\sigma(b^D - 1)(b - 1)}, \qquad
h^* = \theta^*_0 - \eta^*. \tag{130}
$$
Observe that by (129), $h^* \neq 0$ since $\theta^*_i \neq \theta^*_0$ for $1 \le i \le D$. Later in this section, we will express $\theta^*_0$ in terms of $\alpha, b, \sigma, D$. Lemma 17.12
Let $\Gamma$ be as in Definition 17.9. Then for $0 \le i, j \le D$,
$$
\theta_i - \theta_j = (b^i - b^j)(\mu - h b^{-i-j}),
$$
$$
\theta^*_i - \theta^*_j = h^* b^{-i-j}(b^j - b^i),
$$
where $\mu, h, h^*$ are from Theorems 17.10, 17.11.

Proof: Routine calculation using Theorems 17.10, 17.11. $\Box$ Lemma 17.13

Let $\Gamma$ be as in Definition 17.9. Then there exists $\tau(W) \in \mathbb{C}$ such that the first split sequence and second split sequence of $W$ are given by
$$
\varphi_i(W) = (1 - b^i)(1 - b^{d-i+1})\bigl(\tau(W) - h h^* b^{-r-t-i-d}\bigr) \qquad (1 \le i \le d), \tag{131}
$$
$$
\phi_i(W) = (1 - b^i)(1 - b^{d-i+1})\bigl(\tau(W) - h^* \mu b^{t-r-i}\bigr) \qquad (1 \le i \le d), \tag{132}
$$
where $\mu, h, h^*$ are from Theorems 17.10, 17.11.

Proof: Similar to the proof of Lemma 17.4. $\Box$

Applying Lemma 17.13 with $W$ equal to the trivial $T$-module and using Definition 15.20, we obtain the following corollary. Corollary 17.14
Let $\Gamma$ be as in Definition 17.9. Then the first split sequence and second split sequence of $\Gamma$ are as follows:
$$
\varphi_i = (1 - b^i)(1 - b^{D-i+1})\bigl(\tau - h h^* b^{-i-D}\bigr) \qquad (1 \le i \le D),
$$
$$
\phi_i = (1 - b^i)(1 - b^{D-i+1})\bigl(\tau - h^* \mu b^{-i}\bigr) \qquad (1 \le i \le D),
$$
where $\mu, h, h^*$ are from Theorems 17.10, 17.11 and $\tau$ is the $\tau(W)$ associated with the trivial $T$-module.

Observe that the parameter $h$ given in Theorem 17.10 may or may not be zero. Consider a thin irreducible $T$-module $W$. Note that if $h = 0$, then $\tau(W) \neq 0$. This follows from (131) and the fact that $\varphi_i(W) \neq 0$ for $1 \le i \le d$. Theorem 17.15
Let $\Gamma$ be as in Definition 17.9. For $0 \le i, j \le d$,
$$
u_i(\theta_{t+j}) =
{}_3\phi_2 \left(
\begin{matrix}
b^{-i},\ b^{-j},\ \mu b^{2t+j}/h \\
b^{-d},\ \tau(W)\, b^{r+t+d+1}/(h h^*)
\end{matrix}
\ \middle|\ b,\, b \right)
\qquad \text{if } h \neq 0,
$$
$$
u_i(\theta_{t+j}) =
{}_2\phi_1 \left(
\begin{matrix}
b^{-i},\ b^{-j} \\
b^{-d}
\end{matrix}
\ \middle|\ b,\, \frac{\mu h^* b^{t-r+j-d}}{\tau(W)} \right)
\qquad \text{if } h = 0,
$$
where $u_i = u_i^W$ is from Definition 11.5 and $\mu, h, h^*, \tau(W)$ are from Theorems 17.10, 17.11 and Lemma 17.13.

Proof: Routine calculation using (77) and Lemmas 17.12, 17.13. $\Box$ Theorem 17.16
Let $\Gamma$ be as in Definition 17.9. Then the intersection numbers of $W$ are as follows:
$$
b_i(W) = b^{r+2i+1}(1 - b^{d-i})\bigl(\tau(W) - h h^* b^{-r-t-i-d-1}\bigr)/h^* \qquad (0 \le i \le d-1), \tag{133}
$$
$$
c_i(W) = b^{r+i}(b^i - 1)\bigl(\tau(W) - h^* \mu b^{t-r-i}\bigr)/h^* \qquad (1 \le i \le d), \tag{134}
$$
$$
a_i(W) = \theta_t - b_i(W) - c_i(W) \qquad (0 \le i \le d),
$$
where $\mu, h, h^*, \tau(W)$ are from Theorems 17.10, 17.11 and Lemma 17.13.

Proof: Evaluate the equations on the left in (80) and (100) using Lemmas 17.12, 17.13. $\Box$ Theorem 17.17
Let $\Gamma$ be as in Definition 17.9. Then the dual intersection numbers of $W$ are as follows:
$$
b^*_0(W) = \frac{(b^d - 1)\bigl(\tau(W) - h h^* b^{-r-t-d-1}\bigr)}{\mu b^t - h b^{-t-1}},
$$
$$
b^*_i(W) = \frac{b^{-i}(b^{d-i} - 1)\bigl(\tau(W) - h h^* b^{-r-t-i-d-1}\bigr)\bigl(\mu b^t - h b^{-t-i}\bigr)}{\bigl(\mu b^t - h b^{-t-2i-1}\bigr)\bigl(\mu b^t - h b^{-t-2i}\bigr)} \qquad (1 \le i \le d-1), \tag{135}
$$
$$
c^*_i(W) = \frac{b^{d-2i+1}(1 - b^i)\bigl(\tau(W) - h^* \mu b^{t-r-d+i-1}\bigr)\bigl(\mu b^t - h b^{-t-i-d}\bigr)}{\bigl(\mu b^t - h b^{-t-2i}\bigr)\bigl(\mu b^t - h b^{-t-2i+1}\bigr)} \qquad (1 \le i \le d-1), \tag{136}
$$
$$
c^*_d(W) = \frac{b^{-d+1}(1 - b^d)\bigl(\tau(W) - h^* \mu b^{t-r-1}\bigr)}{\mu b^t - h b^{-t-2d+1}},
$$
$$
a^*_i(W) = \theta^*_r - b^*_i(W) - c^*_i(W) \qquad (0 \le i \le d),
$$
where $\mu, h, h^*, \tau(W)$ are from Theorems 17.10, 17.11 and Lemma 17.13.

Proof: Note that for $1 \le i \le d-1$, $\mu b^t - h b^{-t-2i-1} \neq 0$, since this is a factor of $\theta_{t+i} - \theta_{t+i+1}$ by Lemma 17.12 and the eigenvalues of $\Gamma$ are mutually distinct. Similarly, $\mu b^t - h b^{-t-2i}$ and $\mu b^t - h b^{-t-2i+1}$ are nonzero, since these are factors of $\theta_{t+i-1} - \theta_{t+i+1}$ and $\theta_{t+i} - \theta_{t+i-1}$, respectively. Arguing as in the proof of Theorem 17.16, we obtain the desired result. $\Box$

In Definition 17.9, we gave a formula for the intersection numbers of $\Gamma$ in terms of $\alpha, b, \sigma, D$. We now give an alternate formula in terms of $\mu, h, h^*$. Theorem 17.18
Let $\Gamma$ be as in Definition 17.9. Then the intersection numbers of $\Gamma$ are as follows:
$$
b_i = b^{2i+1}(1 - b^{D-i})\bigl(\tau - h h^* b^{-i-D-1}\bigr)/h^* \qquad (0 \le i \le D-1),
$$
$$
c_i = b^i(b^i - 1)\bigl(\tau - h^* \mu b^{-i}\bigr)/h^* \qquad (1 \le i \le D), \tag{137}
$$
$$
a_i = \theta_0 - b_i - c_i \qquad (0 \le i \le D),
$$
where $\mu, h, h^*, \tau$ are from Theorems 17.10, 17.11 and Corollary 17.14.

Proof: Immediate from Lemma 9.6 and Theorem 17.16. $\Box$

We now give a formula for the dual intersection numbers of $\Gamma$.
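Looking ahead to Lemma 17.20: setting $i = 1$ in (137) and using $c_1 = 1$ already yields (140). A sketch of the computation:

```latex
% (137) at i = 1, with c_1 = 1:
c_1 = b\,(b - 1)\bigl(\tau - h^* \mu b^{-1}\bigr)/h^* = 1.
% Solving for \tau:
\tau = \frac{h^* \mu}{b} + \frac{h^*}{b(b - 1)}
     = \frac{h^*\,\bigl(1 + \mu b - \mu\bigr)}{b(b - 1)},
% which is (140); (139) then follows similarly from c_1^* = 1 in (138).
```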
Theorem 17.19
Let $\Gamma$ be as in Definition 17.9. Then the dual intersection numbers of $\Gamma$ are as follows:
$$
b^*_0 = \frac{(b^D - 1)\bigl(\tau - h h^* b^{-D-1}\bigr)}{\mu - h b^{-1}},
$$
$$
b^*_i = \frac{b^{-i}(b^{D-i} - 1)\bigl(\tau - h h^* b^{-i-D-1}\bigr)\bigl(\mu - h b^{-i}\bigr)}{\bigl(\mu - h b^{-2i-1}\bigr)\bigl(\mu - h b^{-2i}\bigr)} \qquad (1 \le i \le D-1),
$$
$$
c^*_i = \frac{b^{D-2i+1}(1 - b^i)\bigl(\tau - h^* \mu b^{-D+i-1}\bigr)\bigl(\mu - h b^{-i-D}\bigr)}{\bigl(\mu - h b^{-2i}\bigr)\bigl(\mu - h b^{-2i+1}\bigr)} \qquad (1 \le i \le D-1), \tag{138}
$$
$$
c^*_D = \frac{b^{-D+1}(1 - b^D)\bigl(\tau - h^* \mu b^{-1}\bigr)}{\mu - h b^{-2D+1}},
$$
$$
a^*_i = \theta^*_0 - b^*_i - c^*_i \qquad (0 \le i \le D),
$$
where $\mu, h, h^*, \tau$ are from Theorems 17.10, 17.11 and Corollary 17.14.

Proof: Immediate from Lemma 9.6 and Theorem 17.17. $\Box$

Recall that in Theorem 17.11, we gave a formula for the dual eigenvalues of $\Gamma$ in terms of $\alpha, b, \sigma, \theta^*_0$. We are now ready to solve for $\theta^*_0$. Lemma 17.20
Let $\Gamma$ be as in Definition 17.9. Let $h^*$ and $\tau$ be as in Theorem 17.11 and Corollary 17.14, respectively. Then
$$
h^* = \frac{-\,b^{D-1}(\mu b^2 - h)(\mu b - h)}{(\mu b^{D+1} - h)\bigl(b^{D-1} + \mu(b - 1)(b^{D-1} - 1)\bigr)}, \tag{139}
$$
$$
\tau = \frac{h^*(1 + \mu b - \mu)}{b(b - 1)}, \tag{140}
$$
$$
\theta^*_0 = \frac{h^* \sigma (b^D - 1)(1 - b)}{b\bigl((\sigma - \alpha)(b^{D-1} - 1) - b + 1 - \sigma(b^D - 1)\bigr)}. \tag{141}
$$
Proof:
To obtain (139) and (140), solve the system of equations given by (137) and (138) with $i = 1$, using the fact that $c_1 = 1 = c^*_1$. Line (141) is immediate from (130). $\Box$

In Theorem 17.16, we gave a formula for the intersection numbers of $W$. We now give an alternate formula which is reminiscent of Definition 17.9. Theorem 17.21
Let $\Gamma$ be as in Definition 17.9. Then
$$
c_i(W) = \frac{b^i - 1}{b - 1}\Bigl(c_1(W) + \alpha(W)\, \frac{b^{i-1} - 1}{b - 1}\Bigr) \qquad (1 \le i \le d), \tag{142}
$$
$$
b_i(W) = \frac{b^d - b^i}{b - 1}\Bigl(\sigma(W) - \alpha(W)\, \frac{b^i - 1}{b - 1}\Bigr) \qquad (0 \le i \le d-1), \tag{143}
$$
where
$$
\alpha(W) = \frac{\tau(W)\, b^{r+1}(b - 1)^2}{h^*}, \qquad
\sigma(W) = \frac{h b^{-t}(b - 1)^2 - \alpha(W)\, b^d}{b^d(b - 1)}.
$$
Proof:
Comparing the right side of (142) with that of (134), we find that (142) holds. Comparing the rightside of (143) with that of (133), we obtain (143). ✷ Acknowledgments
This paper was written while the author was an honorary fellow at the University of Wisconsin-Madison,January-December 2009, with support from HEDP-FDP Sandwich Program of the Commission on HigherEducation, Philippines. The author is greatly indebted to Professor Terwilliger for his many valuable ideasand suggestions.
References

[1] R. Askey and J. A. Wilson. A set of orthogonal polynomials that generalize the Racah coefficients or 6j-symbols. SIAM J. Math. Anal., 10 (1979), 1008–1016.
[2] E. Bannai and T. Ito. Algebraic Combinatorics I: Association Schemes. Benjamin/Cummings, London, 1984.
[3] N. Biggs. Algebraic Graph Theory. Cambridge University Press, Cambridge, 1993.
[4] A. E. Brouwer, A. M. Cohen, and A. Neumaier.
Distance-Regular Graphs. Springer-Verlag, Berlin, 1989.
[5] J. S. Caughman IV. The Terwilliger algebra of bipartite P- and Q-polynomial schemes. Discrete Math., 196 (1999), 65–95.
[6] B. Curtin. The Terwilliger algebra of a 2-homogeneous bipartite distance-regular graph. J. Combin. Theory Ser. B, 81 (2001), 125–141.
[7] G. Gasper and M. Rahman.
Basic hypergeometric series.
Encyclopedia of Mathematics and its Applications, vol. 35, Cambridge University Press, Cambridge, 1990.
[8] J. T. Go. The Terwilliger algebra of the hypercube. European J. Combin., 23 (2002), 399–429.
[9] P. Terwilliger. The subconstituent algebra of an association scheme (Part I). J. Algebraic Combin., 1(4) (1992), 363–388.
[10] P. Terwilliger. The subconstituent algebra of an association scheme (Part II). J. Algebraic Combin., 2(1) (1993), 73–103.
[11] P. Terwilliger. Leonard pairs and the q-Racah polynomials. Linear Algebra Appl., 387 (2004), 235–276.
[12] P. Terwilliger. Two linear transformations each tridiagonal with respect to an eigenbasis of the other. Linear Algebra Appl., 330 (2001), 149–203.
[13] P. Terwilliger. Two linear transformations each tridiagonal with respect to an eigenbasis of the other; an algebraic approach to the Askey scheme of orthogonal polynomials. arXiv:math/0408390.

Diana R. Cerzo
Institute of Mathematics
University of the Philippines
C.P. Garcia St., Diliman
Quezon City, Philippines 1101
email: [email protected]