An analogue of Szegő's limit theorem in free probability theory
Junhao Shen¹
Department of Mathematics and Statistics, University of New Hampshire, Durham, NH, 03824
Email: [email protected]
Abstract:
In this paper, we discuss orthogonal polynomials in free probability theory. In particular, we prove an analogue of Szegő's limit theorem in free probability theory.

Keywords: orthogonal polynomial, Szegő's limit theorem, free probability

Mathematics Subject Classification: Primary 42C05, Secondary 46L10
1. Introduction
Szegő's limit theorem plays an important role in the theory of orthogonal polynomials in one variable (see [ ], [ ]). Given a real random variable $x$ with compact support in a probability space, Szegő's limit theorem (see for example [ ]) provides information on the asymptotic behavior of the determinants of the Toeplitz (or Hankel) matrices associated with $x$ (equivalently, on the asymptotic behavior of the volumes of the parallelograms spanned by $1, x, \ldots, x^q$).

The theory of free probability was developed by Voiculescu beginning in the 1980s (see [ ]). One basic concept in free probability theory is "freeness", which is the analogue of "independence" in classical probability theory. The purpose of this paper is to study Szegő's limit theorem in the context of free probability theory.

Suppose $\langle\mathcal{M},\tau\rangle$ is a free probability space and $x_1,\ldots,x_n$ are random variables in $\mathcal{M}$ such that $x_1,\ldots,x_n$ are free with respect to $\tau$. Our main result (Theorem 1), an analogue of Szegő's limit theorem, describes the asymptotic behavior of the determinants of the Hankel matrices associated with $x_1,\ldots,x_n$. More specifically, we prove the following equation:
$$ \lim_{q\to\infty}\frac{\ln D_{q+1}(x_1,\ldots,x_n)}{q\cdot n^q} = \frac{2(n-1)}{n}\cdot\sum_{k=1}^n E_n(x_k), $$
where $D_{q+1}(x_1,\ldots,x_n)$ is the Hankel determinant associated with $x_1,\ldots,x_n$ (see Definition 3) and $E_n(x_k)$ is the $n$-th entropy number of $x_k$ (see Definition 5).

The organization of the paper is as follows. We review the process of Gram-Schmidt orthogonalization in section 2; in general, orthogonal polynomials can be computed by Gram-Schmidt orthogonalization. In section 3, we introduce families of orthogonal polynomials in several non-commutative variables and the concept of the Hankel determinant. The relationship between the Hankel determinant and the volume of the parallelogram spanned by a family of vectors is also mentioned in this section.

¹ The second author is supported by an NSF grant.
Szegő's limit theorem in one variable is recalled in section 4. We state and prove the main theorem, an analogue of Szegő's limit theorem in free probability theory, in section 5.
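To make the objects in the main theorem concrete, here is a small numerical sketch (not part of the original paper; Python with NumPy is assumed) computing the one-variable Hankel determinant $D_{q+1} = \det(\tau(x^{i+j}))_{0\le i,j\le q}$, the Gram determinant of $1, x, \ldots, x^q$, for a standard semicircular element, whose moments are the Catalan numbers. For this element every Hankel determinant equals $1$, reflecting the fact that all the associated monic orthogonal polynomials have norm $1$.

```python
import numpy as np
from math import comb

def moment(m):
    """tau(x^m) for the standard semicircular element:
    0 for odd m, the Catalan number C_{m/2} for even m."""
    if m % 2:
        return 0
    k = m // 2
    return comb(2 * k, k) // (k + 1)

q = 5
# Hankel (Gram) matrix of 1, x, ..., x^q: entries tau(x^i * x^j) = tau(x^{i+j})
H = np.array([[moment(i + j) for j in range(q + 1)]
              for i in range(q + 1)], dtype=float)
D = np.linalg.det(H)
print(D)  # ≈ 1.0: every Hankel determinant of the Catalan moment sequence is 1
```

The same script with any other compactly supported moment sequence illustrates the quantity whose growth rate the paper studies.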
2. Gram-Schmidt Orthogonalization
In this section, we review the process of Gram-Schmidt orthogonalization.

Suppose $H$ is a complex Hilbert space. Let $\{y_q\}_{q=1}^N$ be a family of linearly independent vectors in $H$, where $N$ is a positive integer or infinity. For each $2\le q\le N$, let $H_q$ be the closed subspace of $H$ linearly spanned by $\{y_1,\ldots,y_{q-1}\}$. Let $E_1 = 0$ and let $E_q$ be the projection from $H$ onto $H_q$ for $2\le q\le N$. Then, for each $1\le q\le N$, we have
$$ y_q - E_q(y_q) = \frac{1}{D_q}\begin{vmatrix}
\langle y_1,y_1\rangle & \langle y_2,y_1\rangle & \cdots & \langle y_q,y_1\rangle\\
\langle y_1,y_2\rangle & \langle y_2,y_2\rangle & \cdots & \langle y_q,y_2\rangle\\
\vdots & \vdots & & \vdots\\
\langle y_1,y_{q-1}\rangle & \langle y_2,y_{q-1}\rangle & \cdots & \langle y_q,y_{q-1}\rangle\\
y_1 & y_2 & \cdots & y_q
\end{vmatrix}, $$
where, for $q\ge 1$,
$$ D_{q+1} = \big|\big(\langle y_j,y_i\rangle\big)_{1\le i,j\le q}\big| = \begin{vmatrix}
\langle y_1,y_1\rangle & \langle y_2,y_1\rangle & \cdots & \langle y_q,y_1\rangle\\
\vdots & \vdots & & \vdots\\
\langle y_1,y_{q-1}\rangle & \langle y_2,y_{q-1}\rangle & \cdots & \langle y_q,y_{q-1}\rangle\\
\langle y_1,y_q\rangle & \langle y_2,y_q\rangle & \cdots & \langle y_q,y_q\rangle
\end{vmatrix} $$
and $D_1 = 1$.

The following proposition follows easily from the process of Gram-Schmidt orthogonalization.

Proposition 1.
For each $1\le q\le N$, we have
$$ D_{q+1} = \prod_{i=1}^q \|y_i - E_i(y_i)\|^2 = \prod_{i=1}^q \langle y_i - E_i(y_i),\, y_i - E_i(y_i)\rangle = \big(\text{volume of the parallelogram linearly spanned by } y_1,\ldots,y_q \text{ in } H\big)^2. $$
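The identity in Proposition 1 is easy to check numerically. The following sketch (not part of the paper; Python with NumPy is assumed) compares the Gram determinant $D_{q+1}$ of random vectors $y_1,\ldots,y_q$ with the product of the squared distances $\|y_i - E_i(y_i)\|^2$, which can be read off from the diagonal of the $R$ factor of a QR decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
q, dim = 5, 8
Y = rng.standard_normal((dim, q))  # columns y_1, ..., y_q (generically independent)

# D_{q+1}: determinant of the Gram matrix (<y_j, y_i>)_{1<=i,j<=q}
D = np.linalg.det(Y.T @ Y)

# In Y = QR, |R[i, i]| is the distance from column i to the span of the
# previous columns, i.e. ||y_{i+1} - E_{i+1}(y_{i+1})||, so the squared
# diagonal entries multiply out to the Gram determinant.
R = np.linalg.qr(Y, mode="r")
prod_sq = np.prod(np.diag(R) ** 2)

print(np.isclose(D, prod_sq))  # Gram determinant = squared parallelogram volume
```

This is exactly the Gram-Schmidt process of this section in matrix form: $Q$ stores the orthonormalized vectors and $R$ the projection coefficients.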
3. Definitions of Orthogonal Polynomials in Free Probability
A pair of objects $\langle\mathcal{M},\tau\rangle$ is called a free probability space when $\mathcal{M}$ is a finite von Neumann algebra and $\tau$ is a faithful normal tracial state on $\mathcal{M}$ (see [ ]). Let $H$ be the complex Hilbert space $L^2(\mathcal{M},\tau)$. Let $x_1,\ldots,x_n$ be a family of random variables in $\mathcal{M}$. Let $\mathcal{A}(x_1,\ldots,x_n)$ be the unital algebra consisting of non-commutative polynomials in $I, x_1,\ldots,x_n$ with complex coefficients, where $I$ is the identity element of $\mathcal{M}$.

Definition 1. Suppose $\Sigma$ is a totally ordered index set. Then $\{P_\alpha(x_1,\ldots,x_n)\}_{\alpha\in\Sigma}$ in $\mathcal{A}(x_1,\ldots,x_n)$ is called a family of orthogonal polynomials of $x_1,\ldots,x_n$ in $\mathcal{M}$ if, for all $\alpha,\beta$ in $\Sigma$ with $\alpha\ne\beta$,
$$ \tau\big(P_\beta(x_1,\ldots,x_n)^*\,P_\alpha(x_1,\ldots,x_n)\big) = 0. $$

3.1. Orthogonal polynomials in one variable. Suppose $x$ is an element in $\mathcal{M}$. Let $H_1 = \mathbb{C}I$ and, for each $q\ge 2$, let $H_q$ be the linear subspace spanned by the elements $\{I, x, x^2, \ldots, x^{q-1}\}$ in $H$. Let $E_q$ be the projection from $H$ onto $H_q$. For each $q$ in $\mathbb{N}$, we let $P_q(x)$ be $x^q - E_q(x^q)$, obtained by the process of Gram-Schmidt orthogonalization as in section 2. It is not hard to see that $\{P_q(x)\}_{q\in\mathbb{N}}$ is a family of orthogonal polynomials of $x$ in $\mathcal{M}$. The following recursive formula is well-known (see [ ] or [ ]).

Lemma 1.
Suppose $x$ is a self-adjoint element in a free probability space $\langle\mathcal{M},\tau\rangle$ and $\{P_q(x)\}_{q=1}^\infty$ is defined as in section 3.1. Then there are sequences of real numbers $\{a_q\}_{q=1}^\infty$, with $a_q > 0$ for all $q\ge 1$, and $\{b_q\}_{q=1}^\infty$ such that
$$ x P_q(x) = P_{q+1}(x) + b_{q+1}P_q(x) + a_q P_{q-1}(x), \qquad \text{for all } q\ge 1. $$
These $a_1, a_2, \ldots$ are called the coefficients of the Jacobi matrix associated with $x$. Moreover,
$$ \|P_q(x)\|^2 = \tau\big(P_q(x)^* P_q(x)\big) = a_1 a_2 \cdots a_q, \qquad \text{for all } q\ge 1. $$

Suppose $u$ is a unitary element in a free probability space $\mathcal{M}$ and $\{P_q(u)\}_{q=1}^\infty$ is defined as in section 3.1. For each $q\ge 1$, let $K_q$ be the linear subspace spanned by the set $\{u, u^2, \ldots, u^q\}$ in $H$. Let $F_q$ be the projection from $H$ onto $K_q$. Define $Q_q(u) = I - F_q(I)$ for each $q\ge 1$ (see [ ]). With the notations as above, we have the following result (see [ ] or [ ]).

Lemma 2. There exists a sequence of complex numbers $\{\alpha_q\}_{q=0}^\infty$ with $|\alpha_q|\le 1$ ($\forall q\ge 0$) so that
$$ P_{q+1}(u) = uP_q(u) - \bar\alpha_q\,Q_q(u), \qquad Q_{q+1}(u) = Q_q(u) - \alpha_q\,uP_q(u). $$
These $\alpha_0, \alpha_1, \ldots$ are called Verblunsky coefficients. Moreover,
$$ \|P_q(u)\|^2 = \prod_{j=0}^{q-1}\big(1 - |\alpha_j|^2\big). $$

3.2. Orthogonal polynomials in several variables. Suppose $x_1,\ldots,x_n$ is a family of elements in $\mathcal{M}$. Let $\Sigma_n = \mathbb{F}_n^+$ be the unital free semigroup generated by $n$ generators $X_1,\ldots,X_n$ with lexicographic order $\prec$, i.e.
$$ e \prec X_1 \prec X_2 \prec \cdots \prec X_n \prec X_1^2 \prec X_1X_2 \prec \cdots \prec X_1X_n \prec X_2X_1 \prec \cdots \prec X_n^2 \prec X_1^3 \prec \cdots $$
For each $\alpha = X_{i_1}X_{i_2}\cdots X_{i_q}$ in $\Sigma_n$ with $q\ge 1$, $1\le i_1,\ldots,i_q\le n$, we define $x_\alpha = x_{i_1}x_{i_2}\cdots x_{i_q}$ in $\mathcal{M}$. We also define $x_e = I$.

Let $H_e = \mathbb{C}I$. Note that each element in $\mathcal{M}$ can be canonically identified with a vector in $H$. For each $\alpha\in\Sigma_n$, let $H_\alpha$ be the linear subspace spanned by the set $\{x_\beta\}_{\beta\preceq\alpha}$ in $H$. Let $E_\alpha$ be the projection from $H$ onto the closure of the subspace $\bigcup_{\beta\prec\alpha}H_\beta$. For each $\alpha$ in $\Sigma_n$, we let $P_\alpha(x_1,\ldots,x_n)$ be $x_\alpha - E_\alpha(x_\alpha)$, obtained by the process of Gram-Schmidt orthogonalization on the family of vectors $\{x_\beta\}_{\beta\preceq\alpha}$ in $H$. It is not hard to see that $\{P_\alpha(x_1,\ldots,x_n)\}_{\alpha\in\Sigma_n}$ is a family of orthogonal polynomials of $x_1,\ldots,x_n$ in $\mathcal{M}$.

Remark 1. If $x_1,\ldots,x_n$ are algebraically free (i.e. satisfy no algebraic relation), then $P_\alpha(x_1,\ldots,x_n)\ne P_\beta(x_1,\ldots,x_n)$ for all $\alpha\ne\beta$ in $\Sigma_n$, and $P_\gamma\ne 0$ for all $\gamma$ in $\Sigma_n$.

Definition 2. Let $\Sigma_n = \mathbb{F}_n^+$ be the unital free semigroup generated by $n$ generators $X_1,\ldots,X_n$ with lexicographic order $\prec$. For every $\alpha$ in $\Sigma_n$, we define the length of $\alpha$ as
$$ |\alpha| = \begin{cases} 0, & \text{if } \alpha = e,\\ q, & \text{if } \alpha = X_{i_1}X_{i_2}\cdots X_{i_q} \text{ for some } q\ge 1,\ 1\le i_1,\ldots,i_q\le n. \end{cases} $$

Definition 3. For each $\gamma\in\Sigma_n$, let $m$ be the cardinality of the set $\{x_\alpha\}_{\alpha\prec\gamma}$ and let $A_\gamma$ be the $m\times m$ complex matrix whose $(\alpha,\beta)$-th entry is equal to $\tau(x_\alpha^* x_\beta)$ for $\alpha,\beta\prec\gamma$. Define the Hankel determinant $\bar D_\gamma(x_1,\ldots,x_n)$ to be the determinant of $A_\gamma$, i.e.
$$ \bar D_\gamma(x_1,\ldots,x_n) = |A_\gamma| = \big|\big(\tau(x_\alpha^* x_\beta)\big)_{\alpha,\beta\prec\gamma}\big|. $$
For each $q\ge 1$, let $k$ be the cardinality of the set $\{x_\alpha\}_{|\alpha|<q}$ and define $D_q(x_1,\ldots,x_n)$ to be the determinant of the $k\times k$ complex matrix $\big(\tau(x_\alpha^* x_\beta)\big)_{|\alpha|,|\beta|<q}$.

Proposition 2. For all $\gamma\in\Sigma_n$,
$$ \bar D_\gamma(x_1,\ldots,x_n) = \prod_{\alpha\prec\gamma}\|P_\alpha(x_1,\ldots,x_n)\|^2 = \big(\text{volume of the parallelogram linearly spanned by } \{x_\alpha\}_{\alpha\prec\gamma} \text{ in } H\big)^2, $$
and, for each $q\ge 1$,
$$ D_q(x_1,\ldots,x_n) = \prod_{|\alpha|<q}\|P_\alpha(x_1,\ldots,x_n)\|^2. $$

4. Szegő's Limit Theorem in One Variable

In this section, we recall Szegő's limit theorem.

4.1. Szegő's functions of class $G$. Let $G$ denote the class of functions $w(t)\ge 0$, defined and measurable in $[-1,1]$, for which the integrals
$$ \int_{-\pi}^{\pi} w(\cos\theta)\,|\sin\theta|\,d\theta, \qquad \int_{-\pi}^{\pi}\big|\log\big(w(\cos\theta)|\sin\theta|\big)\big|\,d\theta $$
exist, with the first integral supposed positive.

4.2. Szegő's limit theorem. We state Szegő's limit theorem as follows (see [ ]).

Lemma 3. Suppose $\mathcal{M}$ is a free probability space with a tracial state $\tau$. Let $x$ be a self-adjoint random variable in $\mathcal{M}$ with density function $w(t)$ defined on $[-1,1]$, i.e.
$$ \tau(x^q) = \int_{-1}^1 t^q\, w(t)\,dt, \qquad \text{for all } q\ge 0. $$
Suppose $P_q(x)$, $q = 1, 2, 3, \ldots$, are the orthogonal polynomials as defined in subsection 3.1. If the function $w(t)$ belongs to the class $G$, then, as $q\to\infty$,
$$ \|P_q(x)\| \simeq \sqrt{\pi}\; 2^{\frac12 - q}\,\exp\left(\frac{1}{4\pi}\int_{-\pi}^{\pi}\log\big(w(\cos\theta)|\sin\theta|\big)\,d\theta\right). $$

Remark 2. Combining Lemma 3 and Proposition 1, we obtain information on the asymptotic behavior of $D_q$, which is the determinant of a Toeplitz matrix when $x$ is a unitary element, or the determinant of a Hankel matrix when $x$ is a self-adjoint element.

5.
An Analogue of Szegő's Limit Theorem in Free Probability Theory

In this section, we follow the previous notations. Let $\langle\mathcal{M},\tau\rangle$ be a free probability space. Let $x_1,\ldots,x_n$ be a family of random variables in $\mathcal{M}$.

5.1. Orthogonal polynomials of a free family. For each $k$, $1\le k\le n$, and each integer $q\ge 1$, we let $P_{k,q}(x_k)$ be $x_k^q - E_{k,q}(x_k^q)$, where $E_{k,q}$ is the projection from the Hilbert space $L^2(\mathcal{M},\tau)$ onto the linear subspace spanned by $\{I, x_k, \ldots, x_k^{q-1}\}$ in $L^2(\mathcal{M},\tau)$. By section 3.1, $\{P_{k,q}(x_k)\}_{q=1}^\infty$ is the family of orthogonal polynomials associated with $x_k$ in $\mathcal{M}$.

Definition 4. (See [ ].) The von Neumann subalgebras $\mathcal{M}_i$, $i\in I$, of $\mathcal{M}$ are free with respect to the trace $\tau$ if $\tau(y_1\cdots y_m) = 0$ whenever $y_j\in\mathcal{M}_{i_j}$, $i_1\ne i_2\ne\cdots\ne i_m$ and $\tau(y_j) = 0$ for $1\le j\le m$, for every $m$ in $\mathbb{N}$. (Note that $i_1$ and $i_3$, for example, may be equal: only "adjacent" $\mathcal{M}_{i_j}$'s are required to be distinct.) A family of self-adjoint elements $\{x_1,\ldots,x_n\}$ is free with respect to the trace $\tau$ if the von Neumann subalgebras $\mathcal{M}_i$ generated by the $x_i$ are free with respect to the trace $\tau$.

Let $\Sigma_n = \mathbb{F}_n^+$ be the unital free semigroup generated by $n$ generators $X_1,\ldots,X_n$ with lexicographic order $\prec$. For each $\alpha = X_{i_1}X_{i_2}\cdots X_{i_q}$ in $\Sigma_n$ with $q\ge 1$, $1\le i_1,\ldots,i_q\le n$, we define $x_\alpha = x_{i_1}x_{i_2}\cdots x_{i_q}$ in $\mathcal{M}$. We let $P_\alpha(x_1,\ldots,x_n)$ be $x_\alpha - E_\alpha(x_\alpha)$, where $E_\alpha$ is the projection from $L^2(\mathcal{M},\tau)$ onto the linear subspace spanned by $\{x_\beta\}_{\beta\prec\alpha}$ in $L^2(\mathcal{M},\tau)$.

Lemma 4. Suppose $x_1,\ldots,x_n$ is a free family of random variables in $\mathcal{M}$ with respect to the tracial state $\tau$. Let $\Sigma_n$, $P_{k,q}(x_k)$ and $P_\alpha(x_1,\ldots,x_n)$ be defined as above. For each $\alpha = X_{i_1}^{j_1}X_{i_2}^{j_2}\cdots X_{i_m}^{j_m}$ in $\Sigma_n$ with $m\ge 1$, $j_1,\ldots,j_m\ge 1$, $1\le i_1\ne i_2\ne\cdots\ne i_m\le n$, we have
$$ P_\alpha(x_1,\ldots,x_n) = \prod_{k=1}^m P_{i_k,j_k}(x_{i_k}). $$

Proof. For each $\alpha = X_{i_1}^{j_1}X_{i_2}^{j_2}\cdots X_{i_m}^{j_m}$, let us denote $\prod_{k=1}^m P_{i_k,j_k}(x_{i_k})$ by $Q_\alpha(x_1,\ldots,x_n)$.
By the definition, for any $1\le i_k\le n$ and $j_k\ge 1$, we know that
$$ P_{i_k,j_k}(x_{i_k}) = x_{i_k}^{j_k} - E_{i_k,j_k}\big(x_{i_k}^{j_k}\big), $$
where $E_{i_k,j_k}$ is the projection from $H = L^2(\mathcal{M},\tau)$ onto the linear space spanned by $\{I, x_{i_k}, \ldots, x_{i_k}^{j_k-1}\}$ in $H$. It is not hard to see that
$$ Q_\alpha(x_1,\ldots,x_n) = \prod_{k=1}^m P_{i_k,j_k}(x_{i_k}) = x_{i_1}^{j_1}x_{i_2}^{j_2}\cdots x_{i_m}^{j_m} + Q(x_1,\ldots,x_n), $$
where $Q(x_1,\ldots,x_n)$ is a linear combination of $\{x_\beta\}_{\beta\prec\alpha}$, i.e. $E_\alpha(Q(x_1,\ldots,x_n)) = Q(x_1,\ldots,x_n)$. Thus the subspace spanned by $\{x_\beta\}_{\beta\prec\alpha}$ is equal to the subspace spanned by $\{Q_\beta(x_1,\ldots,x_n)\}_{\beta\prec\alpha}$ in $H$. On the other hand, it follows from the definition of freeness that
$$ \tau\big(Q_\beta(x_1,\ldots,x_n)^*\,Q_\alpha(x_1,\ldots,x_n)\big) = 0 $$
for any $\beta\ne\alpha$ in $\Sigma_n$. It follows that $Q_\alpha(x_1,\ldots,x_n)$ is orthogonal to the linear space spanned by $\{Q_\beta(x_1,\ldots,x_n)\}_{\beta\prec\alpha}$, whence $Q_\alpha(x_1,\ldots,x_n)$ is orthogonal to the linear space spanned by $\{x_\beta\}_{\beta\prec\alpha}$. So $E_\alpha(Q_\alpha(x_1,\ldots,x_n)) = 0$. Hence
$$ 0 = E_\alpha\big(Q_\alpha(x_1,\ldots,x_n)\big) = E_\alpha\big(x_{i_1}^{j_1}\cdots x_{i_m}^{j_m}\big) + E_\alpha\big(Q(x_1,\ldots,x_n)\big) = \big(x_{i_1}^{j_1}\cdots x_{i_m}^{j_m} - P_\alpha(x_1,\ldots,x_n)\big) + Q(x_1,\ldots,x_n) = Q_\alpha(x_1,\ldots,x_n) - P_\alpha(x_1,\ldots,x_n). $$
It follows that $P_\alpha(x_1,\ldots,x_n) = \prod_{k=1}^m P_{i_k,j_k}(x_{i_k})$. $\Box$

Lemma 5. Denote, for every integer $q\ge 1$,
$$ s_q = \prod_{\alpha\in\Sigma_n,\,|\alpha|=q}\|P_\alpha(x_1,\ldots,x_n)\|. $$
Then, for every $q\ge 2$, we have
$$ \frac{s_q}{\prod_{j=1}^{q-1}s_j^{\,n-1}} = \left(\prod_{k=1}^n\prod_{j=1}^{q-1}\|P_{k,j}(x_k)\|^{(n-1)n^{q-j-1}}\right)\cdot\left(\prod_{k=1}^n\|P_{k,q}(x_k)\|\right). $$

Proof. Note that the index set
$$ \Sigma_n = \{e\}\cup\big\{X_{i_1}^{j_1}X_{i_2}^{j_2}\cdots X_{i_m}^{j_m} : m\ge 1;\ 1\le i_1\ne i_2\ne\cdots\ne i_m\le n;\ j_1, j_2, \ldots, j_m\ge 1\big\}. $$
We let, for each integer $q\ge 1$ and $1\le k\le n$,
$$ A_q = \big\{X_{i_1}^{j_1}\cdots X_{i_m}^{j_m}\in\Sigma_n : m\ge 1;\ 1\le i_1\ne\cdots\ne i_m\le n;\ j_1 + j_2 + \cdots + j_m = q\big\}, $$
$$ B_{q,k} = \big\{X_{i_1}^{j_1}\cdots X_{i_m}^{j_m}\in A_q : i_1 = k\big\}, \qquad C_{q,k} = \big\{X_{i_1}^{j_1}\cdots X_{i_m}^{j_m}\in A_q : i_1\ne k\big\}.
$$
It is not hard to verify that $A_q = B_{q,k}\cup C_{q,k}$ for every $1\le k\le n$, and
$$ A_q = \bigcup_{k=1}^n B_{q,k} = \bigcup_{k=1}^n\left(\bigcup_{j=1}^{q-1}\big(X_k^j\cdot C_{q-j,k}\big)\cup\{X_k^q\}\right), $$
where $X_k^j\cdot C_{q-j,k} = \{X_k^j\beta : \beta\in C_{q-j,k}\}$. Note that $\big\{X_k^j\cdot C_{q-j,k},\ \{X_k^q\}\big\}_{1\le k\le n,\ 1\le j\le q-1}$ is a collection of disjoint subsets of $A_q$. So
$$ s_q = \prod_{\alpha\in\Sigma_n,\,|\alpha|=q}\|P_\alpha(x_1,\ldots,x_n)\| = \prod_{\alpha\in A_q}\|P_\alpha(x_1,\ldots,x_n)\| = \left(\prod_{k=1}^n\prod_{j=1}^{q-1}\prod_{\alpha\in X_k^j\cdot C_{q-j,k}}\|P_\alpha(x_1,\ldots,x_n)\|\right)\left(\prod_{k=1}^n\|P_{k,q}(x_k)\|\right). $$
From the fact that the cardinality of the set $C_{q-j,k}$ is equal to $(n-1)n^{q-j-1}$ and Lemma 4, it follows that
$$ \begin{aligned}
s_q &= \left(\prod_{k=1}^n\prod_{j=1}^{q-1}\prod_{\beta\in C_{q-j,k}}\big(\|P_{k,j}(x_k)\|\cdot\|P_\beta(x_1,\ldots,x_n)\|\big)\right)\left(\prod_{k=1}^n\|P_{k,q}(x_k)\|\right)\\
&= \left(\prod_{k=1}^n\prod_{j=1}^{q-1}\|P_{k,j}(x_k)\|^{(n-1)n^{q-j-1}}\right)\left(\prod_{k=1}^n\prod_{j=1}^{q-1}\prod_{\beta\in C_{q-j,k}}\|P_\beta(x_1,\ldots,x_n)\|\right)\left(\prod_{k=1}^n\|P_{k,q}(x_k)\|\right)\\
&= \left(\prod_{k=1}^n\prod_{j=1}^{q-1}\|P_{k,j}(x_k)\|^{(n-1)n^{q-j-1}}\right)\left(\prod_{j=1}^{q-1}\prod_{k=1}^n\frac{\prod_{\beta\in A_{q-j}}\|P_\beta(x_1,\ldots,x_n)\|}{\prod_{\beta\in B_{q-j,k}}\|P_\beta(x_1,\ldots,x_n)\|}\right)\left(\prod_{k=1}^n\|P_{k,q}(x_k)\|\right)\\
&= \left(\prod_{k=1}^n\prod_{j=1}^{q-1}\|P_{k,j}(x_k)\|^{(n-1)n^{q-j-1}}\right)\left(\prod_{j=1}^{q-1}s_{q-j}^{\,n-1}\right)\left(\prod_{k=1}^n\|P_{k,q}(x_k)\|\right).
\end{aligned} $$
That is,
$$ \frac{s_q}{\prod_{j=1}^{q-1}s_j^{\,n-1}} = \left(\prod_{k=1}^n\prod_{j=1}^{q-1}\|P_{k,j}(x_k)\|^{(n-1)n^{q-j-1}}\right)\left(\prod_{k=1}^n\|P_{k,q}(x_k)\|\right). \qquad\Box $$

The following lemma can be verified directly by a combinatorial argument.

Lemma 6. Suppose that $\{c_q\}_{q=1}^\infty$ and $\{d_q\}_{q=2}^\infty$ are two sequences of real numbers and $r > 0$. If
$$ c_q - r\cdot\sum_{j=1}^{q-1}c_j = d_q, \qquad \text{for } q\ge 2, $$
then
$$ c_q = r(1+r)^{q-2}c_1 + r\sum_{j=2}^{q-1}(1+r)^{q-1-j}d_j + d_q, \qquad \text{for } q\ge 2. $$

Combining Lemma 5 and Lemma 6, we have the following.

Lemma 7.
We have
$$ \ln s_1 = \sum_{k=1}^n\ln\|P_{k,1}(x_k)\| $$
and
$$ \ln s_q = (n-1)\,n^{q-2}\ln s_1 + d_q + (n-1)\sum_{j=2}^{q-1}n^{q-1-j}d_j, \qquad \text{for } q\ge 2, $$
where
$$ d_q = (n-1)\sum_{k=1}^n\sum_{j=1}^{q-1}n^{q-j-1}\ln\|P_{k,j}(x_k)\| + \sum_{k=1}^n\ln\|P_{k,q}(x_k)\|. $$

Proposition 3. We have
$$ \ln s_1 = \sum_{k=1}^n\ln\|P_{k,1}(x_k)\| $$
and, for all $q\ge 2$,
$$ \ln s_q = \sum_{k=1}^n\ln\|P_{k,q}(x_k)\| + 2(n-1)\sum_{j=1}^{q-1}n^{q-1-j}\left(\sum_{k=1}^n\ln\|P_{k,j}(x_k)\|\right) + (n-1)^2\sum_{j=1}^{q-1}(q-1-j)\,n^{q-2-j}\left(\sum_{k=1}^n\ln\|P_{k,j}(x_k)\|\right). $$

Proof. Let
$$ C_j = \sum_{k=1}^n\ln\|P_{k,j}(x_k)\|, \qquad \text{for } j\ge 1. $$
Then by Lemma 7 we have, for $q\ge 2$,
$$ \ln s_q = (n-1)n^{q-2}\ln s_1 + d_q + (n-1)\sum_{j=2}^{q-1}n^{q-1-j}d_j, \qquad d_q = (n-1)\sum_{j=1}^{q-1}n^{q-j-1}C_j + C_q. $$
Thus
$$ \ln s_q = (n-1)n^{q-2}C_1 + \left((n-1)\sum_{j=1}^{q-1}n^{q-j-1}C_j + C_q\right) + (n-1)\sum_{j=2}^{q-1}n^{q-1-j}\left((n-1)\sum_{m=1}^{j-1}n^{j-m-1}C_m + C_j\right), \qquad \text{for } q\ge 2. $$
Since $(n-1)n^{q-2}C_1 + (n-1)\sum_{j=2}^{q-1}n^{q-1-j}C_j = (n-1)\sum_{j=1}^{q-1}n^{q-1-j}C_j$ and
$$ (n-1)^2\sum_{j=2}^{q-1}n^{q-1-j}\sum_{m=1}^{j-1}n^{j-m-1}C_m = (n-1)^2\sum_{m=1}^{q-2}(q-1-m)\,n^{q-2-m}C_m, $$
we get
$$ \ln s_q = C_q + 2(n-1)\sum_{j=1}^{q-1}n^{q-1-j}C_j + (n-1)^2\sum_{j=1}^{q-1}(q-1-j)\,n^{q-2-j}C_j, $$
where $C_j = \sum_{k=1}^n\ln\|P_{k,j}(x_k)\|$ for $j\ge 1$. $\Box$

5.2. The $n$-th entropy number.

Definition 5. Suppose $x$ is an element in a free probability space $\mathcal{M}$ with a tracial state $\tau$. For each $j\ge 1$, let $P_j(x)$ be defined as in section 3.1. Then we define the $n$-th entropy number of $x$ by
$$ E_n(x) = \sum_{j=1}^\infty\frac{\ln\|P_j(x)\|}{n^j}. $$

By Lemma 1, we have:

Corollary 1. Suppose $x = x^*$ is a self-adjoint element in $\mathcal{M}$. For $n\ge 2$,
$$ E_n(x) = \frac{n}{2(n-1)}\sum_{j=1}^\infty\frac{\ln a_j}{n^j}, $$
where $a_1, a_2, \ldots$ are as defined in Lemma 1.

Corollary 2. Suppose $u$ is a unitary element in $\mathcal{M}$. For $n\ge 2$,
$$ E_n(u) = \frac{1}{2(n-1)}\sum_{j=0}^\infty\frac{\ln\big(1 - |\alpha_j|^2\big)}{n^j}, $$
where $\alpha_0, \alpha_1, \ldots$ are as defined in Lemma 2.

The following is the main result of the paper.

Theorem 1. Suppose $\langle\mathcal{M},\tau\rangle$ is a free probability space. Suppose $x_1,\ldots,x_n$ ($n\ge 2$) are random variables in $\mathcal{M}$ such that $x_1, \ldots$
$, x_n$ are free with respect to $\tau$. For each $q\ge 1$, let $D_q(x_1,\ldots,x_n)$ be defined as in Definition 3. Then we have
$$ \lim_{q\to\infty}\frac{\ln D_{q+1}(x_1,\ldots,x_n)}{q\cdot n^q} = \frac{2(n-1)}{n}\cdot\sum_{k=1}^n E_n(x_k), $$
where $E_n(x_k)$ is the $n$-th entropy number of $x_k$ as in section 5.2.

Proof. Let $\Sigma_n = \mathbb{F}_n^+$ be the unital free semigroup generated by $n$ generators $X_1,\ldots,X_n$ with lexicographic order $\prec$. For each $\alpha$ in $\Sigma_n$, let $P_\alpha(x_1,\ldots,x_n)$ be as defined in section 3.2. For each $q\ge 1$ and $1\le k\le n$, let $P_{k,q}(x_k)$ be as defined in section 5.1. Let, for each $q\ge 1$,
$$ s_q = \prod_{\alpha\in\Sigma_n,\,|\alpha|=q}\|P_\alpha(x_1,\ldots,x_n)\|. $$
By Proposition 3, we have
$$ \ln s_q = \sum_{k=1}^n\ln\|P_{k,q}(x_k)\| + 2(n-1)\sum_{j=1}^{q-1}n^{q-1-j}\left(\sum_{k=1}^n\ln\|P_{k,j}(x_k)\|\right) + (n-1)^2\sum_{j=1}^{q-1}(q-1-j)\,n^{q-2-j}\left(\sum_{k=1}^n\ln\|P_{k,j}(x_k)\|\right). $$
Dividing both sides by $qn^q$, we get
$$ \frac{\ln s_q}{qn^q} = \frac{1}{qn^q}\sum_{k=1}^n\ln\|P_{k,q}(x_k)\| + \frac{2(n-1)}{qn}\sum_{j=1}^{q-1}n^{-j}\left(\sum_{k=1}^n\ln\|P_{k,j}(x_k)\|\right) + \frac{(n-1)^2}{n^2}\sum_{j=1}^{q-1}n^{-j}\left(\sum_{k=1}^n\ln\|P_{k,j}(x_k)\|\right) - \frac{(n-1)^2}{qn^2}\sum_{j=1}^{q-1}(1+j)\,n^{-j}\left(\sum_{k=1}^n\ln\|P_{k,j}(x_k)\|\right). $$
Since $\|P_{k,q}(x_k)\|\le\|x_k^q\|\le\|x_k\|^q$, the terms
$$ \frac{1}{qn^q}\sum_{k=1}^n\ln\|P_{k,q}(x_k)\|, \qquad \frac{2(n-1)}{qn}\sum_{j=1}^{q-1}n^{-j}\left(\sum_{k=1}^n\ln\|P_{k,j}(x_k)\|\right) \qquad \text{and} \qquad \frac{(n-1)^2}{qn^2}\sum_{j=1}^{q-1}(1+j)\,n^{-j}\left(\sum_{k=1}^n\ln\|P_{k,j}(x_k)\|\right) $$
go to $0$ as $q$ goes to $\infty$. Hence,
$$ \lim_{q\to\infty}\frac{\ln s_q}{q\cdot n^q} = \frac{(n-1)^2}{n^2}\sum_{k=1}^n\sum_{j=1}^\infty\frac{\ln\|P_{k,j}(x_k)\|}{n^j} = \frac{(n-1)^2}{n^2}\sum_{k=1}^n E_n(x_k). $$
Note that, by Proposition 2,
$$ D_{q+1}(x_1,\ldots,x_n) = \prod_{|\alpha|\le q}\|P_\alpha(x_1,\ldots,x_n)\|^2 = \prod_{m=0}^q s_m^2, $$
so that $\ln D_{q+1}(x_1,\ldots,x_n) = 2\sum_{m=1}^q\ln s_m$. Since $\frac{\ln s_m}{m\,n^m}\to\frac{(n-1)^2}{n^2}\sum_{k=1}^n E_n(x_k)$ and $\sum_{m=1}^q m\,n^m\sim\frac{q\,n^{q+1}}{n-1}$ as $q\to\infty$, it follows that
$$ \lim_{q\to\infty}\frac{\ln D_{q+1}(x_1,\ldots,x_n)}{q\cdot n^q} = 2\cdot\frac{(n-1)^2}{n^2}\cdot\frac{n}{n-1}\sum_{k=1}^n E_n(x_k) = \frac{2(n-1)}{n}\sum_{k=1}^n E_n(x_k). \qquad\Box $$

Corollary 3. We assume the same notations as in Theorem 1. Suppose $x_1,\ldots,x_n$ is a free family of self-adjoint elements in $\mathcal{M}$. Then
$$ \lim_{q\to\infty}\frac{\ln D_{q+1}(x_1,\ldots,x_n)}{q\cdot n^q} = \sum_{k=1}^n\sum_{j=1}^\infty\frac{\ln a_{k,j}}{n^j}, $$
where $a_{k,1}, a_{k,2}, \ldots$ are the coefficients of the Jacobi matrix associated with $x_k$ (see Lemma 1).

Corollary 4. We assume the same notations as in Theorem 1. Suppose $u_1,\ldots,u_n$ is a free family of unitary elements in $\mathcal{M}$. Then
$$ \lim_{q\to\infty}\frac{\ln D_{q+1}(u_1,\ldots,u_n)}{q\cdot n^q} = \frac{1}{n}\sum_{k=1}^n\sum_{j=0}^\infty\frac{\ln\big(1 - |\alpha_{k,j}|^2\big)}{n^j}, $$
where $\alpha_{k,0}, \alpha_{k,1}, \ldots$ are the Verblunsky coefficients associated with $u_k$ (see Lemma 2).

References

[1] P. Deift, "Orthogonal polynomials and random matrices: a Riemann-Hilbert approach," Courant Lecture Notes in Mathematics, 3. New York University, Courant Institute of Mathematical Sciences, New York; American Mathematical Society, Providence, RI, 1999.

[2] B. Simon, "Orthogonal polynomials on the unit circle," Part 1, 2. Classical theory. American Mathematical Society Colloquium Publications, 54. American Mathematical Society, Providence, RI, 2005.