A FORMULA RELATED TO CMV MATRICES AND SZEGŐ COCYCLES
FENGPENG WANG
Abstract.
For Schrödinger operators, there is a well-known and widely used formula connecting the transfer matrices and Dirichlet determinants. No analog of this formula was previously known for CMV matrices. In this paper we fill this gap and provide the CMV analog of this formula.

1. Introduction
In recent years, orthogonal polynomials on the unit circle (OPUC) have been extensively studied; see [15] for an expository note and the books [16, 17] for details. In the study of orthogonal polynomials on the real line (OPRL), the Jacobi matrix representation is one of the key tools, hence people want to obtain a matrix realization of OPUC. In the OPUC case, different orthonormal bases correspond to different matrix representations. In 2003, M. Cantero, L. Moral, and L. Velázquez [6] gave the "right" basis and the corresponding matrix representation, which is named after them, i.e. the CMV matrix. Naturally, it is viewed as the unitary analog of Jacobi matrices.

Since Jacobi matrices have been studied for more than one hundred years, there are fruitful results in this area and their relation with OPRL is clear. As CMV matrices are the unitary analog of Jacobi matrices, people expect the results for Jacobi matrices to also hold for CMV matrices. Indeed, many of them have been carried out in Barry Simon's monographs [16, 17], while the formula in our paper is one of the exceptions.

A special class of Jacobi operators, the one-dimensional Schrödinger operators, can be viewed as tridiagonal matrices, and their Dirichlet determinants are related to the associated transfer matrices by an equation which plays an important role in the study of Schrödinger operators; for example, it can be used to prove Anderson localization. The formula in our paper is the CMV analog of this relation, which means it might be useful for obtaining Anderson localization of CMV matrices, and indeed it is.

The importance of Anderson localization is due to the seminal work by the physicist P. W. Anderson [2], after whom the phenomenon is named and which helped him win the 1977 Nobel Prize in Physics. In physics, Anderson localization refers to the phenomenon that disorder in the medium causes suppression of electron transport; in mathematics, it means that the corresponding operator has only pure point spectrum with exponentially decaying eigenfunctions.

Mathematically rigorous studies of the Anderson model and other models started in the 1970s, and several powerful methods have been found to prove Anderson localization, such as multiscale analysis (MSA) introduced by J. Fröhlich and T. Spencer [9], the fractional moments method (FMM) developed by M. Aizenman and S. Molchanov [1], etc. Recently, the method developed by J. Bourgain, M. Goldstein and W. Schlag [3, 4] for one-dimensional Schrödinger operators has been applied widely to other one-dimensional models [5, 7, 11, 12, 13], which motivated us to apply this method to CMV matrices.
F.W. was supported by CSC (No. 201606330003) and NSFC (No. 11571327).

2. Preliminaries
In this section, we recall the Schrödinger version of this formula and give some preparations for CMV matrices and the Szegő cocycle map.

2.1. Schrödinger case.
Consider the lattice Schrödinger operator $H_v$ acting on $\ell^2(\mathbb{Z})$,
$$[H_v u](n) = u(n+1) + u(n-1) + v_n u(n).$$
Then the solutions to $u(n+1) + u(n-1) + v_n u(n) = E u(n)$ must satisfy
$$\begin{pmatrix} u(n+1) \\ u(n) \end{pmatrix} = M^E_{v_n} \begin{pmatrix} u(n) \\ u(n-1) \end{pmatrix}, \quad\text{where}\quad M^E_{v_n} = \begin{pmatrix} E - v_n & -1 \\ 1 & 0 \end{pmatrix}$$
is usually referred to as the Schrödinger cocycle map.

It is well known that a Schrödinger operator can be viewed as a tridiagonal bi-infinite matrix. Let $P_{[a,b]}$ denote the projection $\ell^2(\mathbb{Z}) \to \ell^2([a,b])$ and define the restriction of the Schrödinger operator by
(2.1) $H_{v,[a,b]} = (P_{[a,b]})^* H_v P_{[a,b]}$
where $a < b$ and $a, b \in \mathbb{Z}$. Then the restriction of the Schrödinger operator to the interval $[1, n]$ is given by the tridiagonal matrix
$$H_{v,[1,n]} = \begin{pmatrix} v_1 & 1 & & \\ 1 & v_2 & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & v_n \end{pmatrix}$$
where $\{v_j\}_{j\in\mathbb{Z}}$ are the potentials, and the $n$-step transfer matrix is given by
$$M^E_n = \prod_{j=n}^{1} \begin{pmatrix} E - v_j & -1 \\ 1 & 0 \end{pmatrix}.$$
By induction, it is not hard to get the following well-known formula:
(2.2) $M^E_n = \begin{pmatrix} \det(E - H_{v,[1,n]}) & -\det(E - H_{v,[2,n]}) \\ \det(E - H_{v,[1,n-1]}) & -\det(E - H_{v,[2,n-1]}) \end{pmatrix}.$

Next, before giving the CMV analog of this formula, we need some general settings about CMV matrices.
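Formula (2.2) lends itself to a quick numerical sanity check. The sketch below uses arbitrary sample potentials (not from the paper) and the usual convention that the determinant over an empty interval is 1 (and 0 for a doubly empty one); it is an illustration, not part of the proof.

```python
import numpy as np

E = 0.7                                  # arbitrary spectral parameter
v = [0.3, -1.1, 0.8, 0.2, -0.5]          # arbitrary sample potentials v_1, ..., v_5
n = len(v)

def det_H(a, b):
    """det(E - H_{v,[a,b]}) for the Dirichlet restriction (1-based interval)."""
    if b == a - 1:                       # empty interval
        return 1.0
    if b == a - 2:                       # doubly empty interval
        return 0.0
    m = b - a + 1
    H = np.diag([v[j - 1] for j in range(a, b + 1)]) \
        + np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
    return np.linalg.det(E * np.eye(m) - H)

# n-step transfer matrix: M_n^E = A_n A_{n-1} ... A_1
M = np.eye(2)
for j in range(n, 0, -1):
    M = M @ np.array([[E - v[j - 1], -1.0], [1.0, 0.0]])

expected = np.array([[det_H(1, n), -det_H(2, n)],
                     [det_H(1, n - 1), -det_H(2, n - 1)]])
assert np.allclose(M, expected)          # formula (2.2)
```

The assertion holds up to floating-point error for any choice of potentials and energy.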
2.2. Definitions and notations related to CMV matrices.
In this section, we will introduce the CMV matrix from the viewpoint of orthogonal polynomials on the unit circle. Let the probability measure $\mu$ on the unit circle $\partial\mathbb{D}$ be nontrivial, which means that its support contains infinitely many points, and denote the monic orthogonal polynomials by $\Phi_n(z)$.

Definition 2.1. For any polynomial $Q_n(z)$ of degree $n$, define the reversed polynomial $Q^*_n(z)$ (also called the Szegő dual [18]) by the following equation,
$$Q^*_n(z) = z^n\, \overline{Q_n(1/\bar{z})}.$$
In particular, for $z \in \partial\mathbb{D}$, $Q^*_n(z) = z^n\, \overline{Q_n(z)}$.
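The two expressions in Definition 2.1 agree on the unit circle, which can be confirmed numerically; the polynomial below is an arbitrary example, not one from the paper.

```python
import numpy as np

coeffs = [2.0, 1j, -0.5, 1 + 1j]        # arbitrary Q(z) = 2 + i z - 0.5 z^2 + (1+i) z^3
n = len(coeffs) - 1

def Q(z):
    return sum(c * z**k for k, c in enumerate(coeffs))

def Q_star(z):
    # Szegő dual: Q*_n(z) = z^n * conj(Q(1/conj(z)))
    return z**n * np.conj(Q(1 / np.conj(z)))

z = np.exp(0.9j)                        # a point on the unit circle
assert np.isclose(Q_star(z), z**n * np.conj(Q(z)))   # simplified form for |z| = 1
```

Equivalently, $Q^*_n$ has the reversed, conjugated coefficient sequence of $Q_n$.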
Then the Szegő recurrence is given by
$$\Phi_{n+1}(z) = z\Phi_n(z) - \bar{\alpha}_n \Phi^*_n(z),$$
where the parameters $\alpha_0, \alpha_1, \dots$ are called Verblunsky coefficients and they all lie in the unit disc $\mathbb{D} = \{z \in \mathbb{C} : |z| < 1\}$.

The half-line CMV matrix associated with Verblunsky coefficients $\{\alpha_n\}_{n\in\mathbb{N}}$ is given by
$$\mathcal{C} = \begin{pmatrix}
\bar{\alpha}_0 & \bar{\alpha}_1\rho_0 & \rho_1\rho_0 & & & \\
\rho_0 & -\bar{\alpha}_1\alpha_0 & -\rho_1\alpha_0 & & & \\
 & \bar{\alpha}_2\rho_1 & -\bar{\alpha}_2\alpha_1 & \bar{\alpha}_3\rho_2 & \rho_3\rho_2 & \\
 & \rho_2\rho_1 & -\rho_2\alpha_1 & -\bar{\alpha}_3\alpha_2 & -\rho_3\alpha_2 & \\
 & & & \bar{\alpha}_4\rho_3 & -\bar{\alpha}_4\alpha_3 & \ddots \\
 & & & & \ddots & \ddots
\end{pmatrix}$$
where $\rho_n = (1 - |\alpha_n|^2)^{1/2}$, so it defines a unitary operator on $\ell^2(\mathbb{N})$.

According to Verblunsky's theorem (also called Favard's theorem for the circle), the map $\mu \to \{\alpha_n\}_{n\in\mathbb{N}}$ sets up a one-one correspondence between the set of nontrivial probability measures on $\partial\mathbb{D}$ and $\times_{j=0}^{\infty}\mathbb{D}$.

Similarly, an extended CMV matrix is a unitary operator on $\ell^2(\mathbb{Z})$ defined by a bi-infinite sequence $\{\alpha_n\}_{n\in\mathbb{Z}} \subset \mathbb{D}$,
$$\mathcal{E} = \begin{pmatrix}
\ddots & \ddots & \ddots & & & \\
 & -\bar{\alpha}_0\alpha_{-1} & \bar{\alpha}_1\rho_0 & \rho_1\rho_0 & & \\
 & -\rho_0\alpha_{-1} & -\bar{\alpha}_1\alpha_0 & -\rho_1\alpha_0 & & \\
 & & \bar{\alpha}_2\rho_1 & -\bar{\alpha}_2\alpha_1 & \bar{\alpha}_3\rho_2 & \rho_3\rho_2 \\
 & & \rho_2\rho_1 & -\rho_2\alpha_1 & -\bar{\alpha}_3\alpha_2 & -\rho_3\alpha_2 \\
 & & & & \ddots & \ddots
\end{pmatrix}$$
In the expressions of $\mathcal{C}$ and $\mathcal{E}$, all unspecified matrix entries are implicitly assumed to be zero.

2.3. Szegő cocycle map.
Recall the Szegő recurrence; it is equivalent to the following one,
(2.3) $\rho_n \varphi_{n+1}(z) = z\varphi_n(z) - \bar{\alpha}_n \varphi^*_n(z)$
where $\varphi_n(z) = \Phi_n(z)/\|\Phi_n(z)\|$. Applying $*$ to both sides of (2.3), we get
(2.4) $\rho_n \varphi^*_{n+1}(z) = \varphi^*_n(z) - \alpha_n z \varphi_n(z).$
Equations (2.3) and (2.4) can be written as
$$\begin{pmatrix} \varphi_{n+1} \\ \varphi^*_{n+1} \end{pmatrix} = S^z_{\alpha_n} \begin{pmatrix} \varphi_n \\ \varphi^*_n \end{pmatrix}, \quad\text{where}\quad S^z_{\alpha_n} = \frac{1}{\rho_n}\begin{pmatrix} z & -\bar{\alpha}_n \\ -\alpha_n z & 1 \end{pmatrix}$$
is the Szegő cocycle map, and the $n$-step transfer matrix is defined by
(2.5) $S^z_n = \prod_{j=n-1}^{0} S^z_{\alpha_j}.$
For $\mathcal{X} \in \{\mathcal{C}, \mathcal{E}\}$, let us define $\mathcal{X}_{[a,b]} = (P_{[a,b]})^* \mathcal{X} P_{[a,b]}$ and $\varphi^{\mathcal{X},z}_{[a,b]} = \det(z - \mathcal{X}_{[a,b]})$. Then the CMV analogs of formula (2.2) are as follows.

Theorem 1 (Half-line case). For $z \in \partial\mathbb{D}$, we have
(2.6) $S^z_n = \Big(\prod_{j=0}^{n-1}\rho_j\Big)^{-1} \begin{pmatrix} z\varphi^{\mathcal{C},z}_{[1,n-1]} & \varphi^{\mathcal{C},z}_{[0,n-1]} - z\varphi^{\mathcal{C},z}_{[1,n-1]} \\ z\big(\varphi^{\mathcal{C},z}_{[0,n-1]} - z\varphi^{\mathcal{C},z}_{[1,n-1]}\big)^* & \big(\varphi^{\mathcal{C},z}_{[1,n-1]}\big)^* \end{pmatrix}$

Theorem 2 (Extended case). For $z \in \partial\mathbb{D}$, we have
(2.7) $S^z_n = \Big(\prod_{j=0}^{n-1}\rho_j\Big)^{-1} \begin{pmatrix} z\varphi^{\mathcal{E},z}_{[1,n-1]} & \dfrac{z\varphi^{\mathcal{E},z}_{[1,n-1]} - \varphi^{\mathcal{E},z}_{[0,n-1]}}{\alpha_{-1}} \\ z\Big(\dfrac{z\varphi^{\mathcal{E},z}_{[1,n-1]} - \varphi^{\mathcal{E},z}_{[0,n-1]}}{\alpha_{-1}}\Big)^* & \big(\varphi^{\mathcal{E},z}_{[1,n-1]}\big)^* \end{pmatrix}$

Here $*$ denotes the reversal of Definition 2.1 taken at degree $n-1$.

Remark.
One may notice the term $\alpha_{-1}$ in (2.7), which suggests there might be a problem in the case $\alpha_{-1} = 0$. But we should mention that the numerator of $S^z_n(1,2)$ contains a factor $\alpha_{-1}$, and from our proof one can see that the term $S^z_n(1,2)$ does not depend on $\alpha_{-1}$.

Actually, Theorem 1 and Theorem 2 are equivalent. We will first prove the equivalency of them, and then give the proof of Theorem 1, since its proof needs some additional preparations.

3. Proof
Proof of the equivalency. In view of the special forms of the CMV matrices $\mathcal{C}$ and $\mathcal{E}$, it is not hard to see that $\mathcal{E}_{[a,b]} = \mathcal{C}_{[a,b]}$ for $1 \le a < b$; hence $\mathcal{E}_{[1,n-1]} = \mathcal{C}_{[1,n-1]}$.

Comparing the corresponding entries in the matrices from Theorem 1 and Theorem 2, the equivalency follows immediately if the following equation holds:
(3.1) $\det(z - \mathcal{C}_{[0,n-1]}) - z\det(z - \mathcal{E}_{[1,n-1]}) = \dfrac{z\det(z - \mathcal{E}_{[1,n-1]}) - \det(z - \mathcal{E}_{[0,n-1]})}{\alpha_{-1}}.$

Simple calculations (cofactor expansion along the first column) tell us that
$$\det(z - \mathcal{E}_{[0,n-1]}) = (z + \bar{\alpha}_0\alpha_{-1})\det(z - \mathcal{E}_{[1,n-1]}) - \rho_0\alpha_{-1}\det P_{n-1},$$
$$\det(z - \mathcal{C}_{[0,n-1]}) = (z - \bar{\alpha}_0)\det(z - \mathcal{E}_{[1,n-1]}) + \rho_0\det P_{n-1},$$
where $P_{n-1}$ is the minor of $z - \mathcal{E}_{[0,n-1]}$ obtained by deleting the second row and the first column (note that it does not involve $\alpha_{-1}$), namely
$$P_{n-1} = \begin{pmatrix}
-\bar{\alpha}_1\rho_0 & -\rho_1\rho_0 & & & \\
-\bar{\alpha}_2\rho_1 & z + \bar{\alpha}_2\alpha_1 & -\bar{\alpha}_3\rho_2 & -\rho_3\rho_2 & \\
-\rho_2\rho_1 & \rho_2\alpha_1 & z + \bar{\alpha}_3\alpha_2 & \rho_3\alpha_2 & \\
 & & & \ddots & \\
 & & & & z + \bar{\alpha}_{n-1}\alpha_{n-2}
\end{pmatrix}.$$
These two equations imply that
$$z\det(z - \mathcal{E}_{[1,n-1]}) - \det(z - \mathcal{E}_{[0,n-1]}) = -\bar{\alpha}_0\alpha_{-1}\det(z - \mathcal{E}_{[1,n-1]}) + \rho_0\alpha_{-1}\det P_{n-1}$$
$$= \alpha_{-1}\big(-\bar{\alpha}_0\det(z - \mathcal{E}_{[1,n-1]}) + \rho_0\det P_{n-1}\big) = \alpha_{-1}\big(\det(z - \mathcal{C}_{[0,n-1]}) - z\det(z - \mathcal{E}_{[1,n-1]})\big),$$
which means that (3.1) is true and hence Theorem 1 is equivalent to Theorem 2. □
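Theorem 1 itself can be sanity-checked numerically: build the half-line CMV matrix from its LM factorization, form the transfer-matrix product (2.5), and compare with the determinants in (2.6). Here the reversal * is taken at degree n-1, so on the unit circle it amounts to multiplying the complex conjugate by z^(n-1). The random coefficients below are arbitrary sample data, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 10, 4                                   # ambient truncation size; formula size
alpha = rng.uniform(-0.6, 0.6, N) + 1j * rng.uniform(-0.6, 0.6, N)
rho = np.sqrt(1 - np.abs(alpha) ** 2)

# C = LM with L = diag(Theta_0, Theta_2, ...), M = diag(1, Theta_1, Theta_3, ...)
L, M = np.eye(N, dtype=complex), np.eye(N, dtype=complex)
for j in range(0, N - 1, 2):
    L[j:j + 2, j:j + 2] = [[np.conj(alpha[j]), rho[j]], [rho[j], -alpha[j]]]
for j in range(1, N - 1, 2):
    M[j:j + 2, j:j + 2] = [[np.conj(alpha[j]), rho[j]], [rho[j], -alpha[j]]]
C = L @ M

z = np.exp(1.3j)
phi0 = np.linalg.det(z * np.eye(n) - C[:n, :n])        # det(z - C_[0,n-1])
phi1 = np.linalg.det(z * np.eye(n - 1) - C[1:n, 1:n])  # det(z - C_[1,n-1])

# S^z_n = S_{alpha_{n-1}} ... S_{alpha_0}
S = np.eye(2, dtype=complex)
for j in range(n):
    S = np.array([[z, -np.conj(alpha[j])],
                  [-alpha[j] * z, 1]]) / rho[j] @ S

A = phi0 - z * phi1                                    # the (1,2) entry of (2.6)
expected = np.array([[z * phi1, A],
                     [z**n * np.conj(A), z**(n - 1) * np.conj(phi1)]]) / np.prod(rho[:n])
assert np.allclose(S, expected)                        # Theorem 1, entrywise
assert np.allclose(C.conj().T @ C, np.eye(N))          # the CMV matrix is unitary
```

Varying the seed, the point z, or the size n gives a cheap regression test against sign and index conventions.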
Next, let us recall some definitions and notations from [16] which are necessary for the proof of Theorem 1.
Definition 3.1.
Let $\{\alpha_n\}_{n\in\mathbb{Z}}$ be a set of Verblunsky coefficients and $\lambda \in \partial\mathbb{D}$. We define $\Phi^\lambda_n$ by
$$\Phi^\lambda_n(z; d\mu) = \Phi_n(z; d\mu_\lambda)$$
with $d\mu_\lambda$, the Aleksandrov measures, defined by
$$\alpha_n(d\mu_\lambda) = \lambda\,\alpha_n(d\mu).$$
In particular, for the case $\lambda = -1$,
$$\Psi_n(z; d\mu) = \Phi^{\lambda=-1}_n(z; d\mu)$$
are called the second kind polynomials for $\mu$.

Let $\mathcal{C}'$ denote the CMV matrix whose Verblunsky coefficients are replaced by $\{-\alpha_n\}_{n\in\mathbb{N}}$, that is,
$$\mathcal{C}' = \begin{pmatrix}
-\bar{\alpha}_0 & -\bar{\alpha}_1\rho_0 & \rho_1\rho_0 & & & \\
\rho_0 & -\bar{\alpha}_1\alpha_0 & \rho_1\alpha_0 & & & \\
 & -\bar{\alpha}_2\rho_1 & -\bar{\alpha}_2\alpha_1 & -\bar{\alpha}_3\rho_2 & \rho_3\rho_2 & \\
 & \rho_2\rho_1 & \rho_2\alpha_1 & -\bar{\alpha}_3\alpha_2 & \rho_3\alpha_2 & \\
 & & & -\bar{\alpha}_4\rho_3 & -\bar{\alpha}_4\alpha_3 & \ddots \\
 & & & & \ddots & \ddots
\end{pmatrix}$$
The corresponding extended CMV matrix is
$$\mathcal{E}' = \begin{pmatrix}
\ddots & \ddots & \ddots & & & \\
 & -\bar{\alpha}_0\alpha_{-1} & -\bar{\alpha}_1\rho_0 & \rho_1\rho_0 & & \\
 & \rho_0\alpha_{-1} & -\bar{\alpha}_1\alpha_0 & \rho_1\alpha_0 & & \\
 & & -\bar{\alpha}_2\rho_1 & -\bar{\alpha}_2\alpha_1 & -\bar{\alpha}_3\rho_2 & \rho_3\rho_2 \\
 & & \rho_2\rho_1 & \rho_2\alpha_1 & -\bar{\alpha}_3\alpha_2 & \rho_3\alpha_2 \\
 & & & & \ddots & \ddots
\end{pmatrix}$$
Let $\Psi_n(z)$ be the second kind polynomial and $\mathcal{C}'_{[0,n-1]}$ denote the restriction of $\mathcal{C}'$; then
$$\Psi_n(z) = \det(z - \mathcal{C}'_{[0,n-1]}).$$

Proof. It is known that (Theorem 5.3 of [15])
$$\Phi_n(z) = \det(z - \mathcal{C}_{[0,n-1]}).$$
According to the definition of the second kind polynomials, the Verblunsky coefficients in $\Phi_n(z)$ and $\Psi_n(z)$ have opposite signs. In view of the way we defined the matrix $\mathcal{C}'$, this lemma follows from Theorem 5.3 in [15]. □
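Lemma 3.2, together with the determinant formula for Φ_n that it rests on, can be verified numerically: run the monic Szegő recurrence with coefficients α_n and -α_n and compare against the characteristic polynomials of C_[0,n-1] and C'_[0,n-1]. The coefficients below are arbitrary sample data.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 10, 4
alpha = rng.uniform(-0.5, 0.5, N) + 1j * rng.uniform(-0.5, 0.5, N)
z = np.exp(0.4j)

def cmv_block(a):
    """Top-left n x n block of the half-line CMV matrix with coefficients a."""
    rho = np.sqrt(1 - np.abs(a) ** 2)
    L, M = np.eye(N, dtype=complex), np.eye(N, dtype=complex)
    for j in range(0, N - 1, 2):
        L[j:j + 2, j:j + 2] = [[np.conj(a[j]), rho[j]], [rho[j], -a[j]]]
    for j in range(1, N - 1, 2):
        M[j:j + 2, j:j + 2] = [[np.conj(a[j]), rho[j]], [rho[j], -a[j]]]
    return (L @ M)[:n, :n]

def monic(a):
    """Phi_n(z) from the monic Szegő recurrence."""
    phi, phi_star = 1.0 + 0j, 1.0 + 0j
    for k in range(n):
        phi, phi_star = z * phi - np.conj(a[k]) * phi_star, phi_star - a[k] * z * phi
    return phi

Phi, Psi = monic(alpha), monic(-alpha)      # second kind: flip all the signs
assert np.isclose(Phi, np.linalg.det(z * np.eye(n) - cmv_block(alpha)))   # [15, Thm 5.3]
assert np.isclose(Psi, np.linalg.det(z * np.eye(n) - cmv_block(-alpha)))  # Lemma 3.2
```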
Recall the definition of $P_{n-1}$ in the proof of the equivalency, and let $P'_{n-1}$ be the matrix obtained from $P_{n-1}$ by replacing $\{\alpha_j\}$ with $\{-\alpha_j\}$.

Lemma 3.3. Let $\mathcal{E}_{[1,n-1]}$, $\mathcal{E}'_{[1,n-1]}$, $P_{n-1}$ and $P'_{n-1}$ be as above. Then we have
$$\det(z - \mathcal{E}_{[1,n-1]}) = \det(z - \mathcal{E}'_{[1,n-1]}), \qquad \det P_{n-1} = -\det P'_{n-1}.$$

Proof. The first equation can be proved by induction and the second one follows by a direct calculation. More precisely, it is easy to check that $\det(z - \mathcal{E}_{[n-1,n-1]}) = \det(z - \mathcal{E}'_{[n-1,n-1]})$. For the inductive step, we assume that $\det(z - \mathcal{E}_{[j,n-1]}) = \det(z - \mathcal{E}'_{[j,n-1]})$ holds for all $k < j < n-1$; then by a direct calculation,
$$\det(z - \mathcal{E}_{[k,n-1]}) = (z + \bar{\alpha}_k\alpha_{k-1})\det(z - \mathcal{E}_{[k+1,n-1]}) + \bar{\alpha}_{k+1}\alpha_{k-1}\rho_k^2\det(z - \mathcal{E}_{[k+2,n-1]}) + \cdots + \bar{\alpha}_{n-1}\alpha_{k-1}\prod_{j=k}^{n-2}\rho_j^2$$
$$= (z + \bar{\alpha}_k\alpha_{k-1})\det(z - \mathcal{E}'_{[k+1,n-1]}) + \bar{\alpha}_{k+1}\alpha_{k-1}\rho_k^2\det(z - \mathcal{E}'_{[k+2,n-1]}) + \cdots + \bar{\alpha}_{n-1}\alpha_{k-1}\prod_{j=k}^{n-2}\rho_j^2 = \det(z - \mathcal{E}'_{[k,n-1]}),$$
since every term contains exactly one factor $\bar{\alpha}_j$ and one factor $\alpha_{k-1}$ and is therefore invariant under the sign flip. In particular, taking $k = 1$, we get the first equation
$$\det(z - \mathcal{E}_{[1,n-1]}) = \det(z - \mathcal{E}'_{[1,n-1]}).$$
For the second equation, since $\det(z - \mathcal{E}_{[j,n-1]}) = \det(z - \mathcal{E}'_{[j,n-1]})$ holds for all $0 \le j \le n-1$, the expansions
$$\det(z - \mathcal{E}_{[0,n-1]}) = (z + \bar{\alpha}_0\alpha_{-1})\det(z - \mathcal{E}_{[1,n-1]}) - \rho_0\alpha_{-1}\det P_{n-1},$$
$$\det(z - \mathcal{E}'_{[0,n-1]}) = (z + \bar{\alpha}_0\alpha_{-1})\det(z - \mathcal{E}'_{[1,n-1]}) + \rho_0\alpha_{-1}\det P'_{n-1}$$
imply $\det P_{n-1}(z) = -\det P'_{n-1}(z)$. □

Proof of Theorem 1. From [16, Section 3.2], we have
$$S^z_n = \Big(\prod_{j=0}^{n-1}\rho_j\Big)^{-1}\begin{pmatrix} zB^*_{n-1}(z) & A^*_{n-1}(z) \\ zA_{n-1}(z) & B_{n-1}(z) \end{pmatrix},$$
where $\Phi_n(z) = zB^*_{n-1}(z) + A^*_{n-1}(z)$ and
$$A_{n-1}(z) = \frac{\Phi^*_n(z) - \Psi^*_n(z)}{2z}, \qquad B_{n-1}(z) = \frac{\Phi^*_n(z) + \Psi^*_n(z)}{2}.$$
Comparing this existing result with our Theorem 1, it is sufficient to prove that
$$zB^*_{n-1}(z) = z\det(z - \mathcal{E}_{[1,n-1]}),$$
that is,
(3.2) $\Phi_n(z) + \Psi_n(z) = 2z\det(z - \mathcal{E}_{[1,n-1]}).$
By Lemma 3.2 and some calculations, we have
$$\Phi_n(z) = \det(z - \mathcal{C}_{[0,n-1]}) = (z - \bar{\alpha}_0)\det(z - \mathcal{E}_{[1,n-1]}) + \rho_0\det P_{n-1}(z)$$
and
$$\Psi_n(z) = \det(z - \mathcal{C}'_{[0,n-1]}) = (z + \bar{\alpha}_0)\det(z - \mathcal{E}'_{[1,n-1]}) + \rho_0\det P'_{n-1}(z).$$
Applying Lemma 3.3, it is obvious that (3.2) holds, and hence Theorem 1. □

4. Application

In this section, we explain how to obtain Anderson localization of half-line CMV matrices via the method developed in [3], where our formula plays an important role; the extended CMV case can be carried out similarly.

From now on, we consider a special class of Verblunsky coefficients generated by an analytic function $\alpha(x) \in \mathbb{D}$, i.e. $\alpha_n(x) = \alpha(x + n\omega)$, where $x, \omega \in \mathbb{T}$.

Recall the Szegő cocycle map, which is given by
$$S^z_{\alpha_n(x)} = \frac{1}{\rho_n(x)}\begin{pmatrix} z & -\bar{\alpha}_n(x) \\ -z\alpha_n(x) & 1 \end{pmatrix}.$$
Since $\det S^z_{\alpha_n(x)} = z \in \partial\mathbb{D}$ and the method in [3] only concerns the norm of the $n$-step transfer matrix, it is equivalent to study the following map,
$$M^z_{\alpha_n(x)} = \frac{1}{\rho_n(x)}\begin{pmatrix} \sqrt{z} & -\bar{\alpha}_n(x)/\sqrt{z} \\ -\sqrt{z}\,\alpha_n(x) & 1/\sqrt{z} \end{pmatrix}$$
and the corresponding $n$-step transfer matrix
$$M^z_n(x) = \prod_{j=n-1}^{0} M^z_{\alpha_j(x)}.$$
Define
$$L_n(z) = \frac{1}{n}\int_{\mathbb{T}} \log\|M^z_n(x)\|\,dx;$$
then the Lyapunov exponent is given by $L(z) = \inf_n L_n(z) = \lim_{n\to\infty} L_n(z)$.

Theorem 4.1. Consider the family $\{\mathcal{C}_\omega\}_{\omega\in\mathbb{T}}$ of half-line CMV matrices with Verblunsky coefficients $\{\alpha_n(x)\}_{n\in\mathbb{N}}$ generated by an analytic function $\alpha : \mathbb{T} \to \mathbb{D}$, that is, $\alpha_n(x) = \alpha(x + n\omega)$ where $x, \omega \in \mathbb{T}$. Assume the Lyapunov exponent $L(z)$ satisfies $L(z) > \delta > 0$ for all $z \in Z$ and $\omega \in I$, where $Z \subset \partial\mathbb{D}$ and $I \subset \mathbb{T}$ are compact intervals. Then for almost every $\omega \in I$, $\mathcal{C}_\omega(0)$ has pure point spectrum on $Z$ with exponentially decaying eigenfunctions (i.e. Anderson localization).

Remark.
In the proof of this theorem, we use the method developed in [3] and replace equation (2.2) there by our formula.

To make our theorem meaningful, we should mention that the assumption of this theorem is guaranteed by [19], where Zhenghe Zhang provided an example whose Lyapunov exponent is uniformly positive. More specifically, the analytic function is $\alpha(x) = \lambda e^{2\pi i h(x)}$ where $\lambda \in (0,1)$ and $h(x) \in C^\omega(\mathbb{T}, \mathbb{T})$.

4.1. A large deviation estimate. For analytic Schrödinger cocycles with Diophantine frequencies, the first large deviation theorem (LDT) was established by J. Bourgain and M. Goldstein [3]. From the proof one can see that this LDT holds for all analytic $\mathrm{SL}(2,\mathbb{R})$ cocycles whose $n$-step transfer matrices satisfy
(4.1) $\|M_n(x)\| + \|M_n(x)^{-1}\| \le C^n.$
It is well known that $M^z_{\alpha_n(x)}$ is unitarily equivalent to an $\mathrm{SL}(2,\mathbb{R})$ matrix via
$$Q = \frac{-1}{1+i}\begin{pmatrix} 1 & -i \\ 1 & i \end{pmatrix},$$
that is, $Q^* M^z_{\alpha_n(x)} Q \in \mathrm{SL}(2,\mathbb{R})$. Since our Verblunsky coefficients $\alpha_n(x) \in \mathbb{D}$ are analytic and $\rho_n(x) = \sqrt{1 - |\alpha_n(x)|^2}$, each one-step map satisfies $\|M^z_{\alpha_n(x)}\| + \|M^z_{\alpha_n(x)}{}^{-1}\| \le C$, so our $n$-step transfer matrices satisfy condition (4.1).

Therefore we have the LDT for Szegő cocycle maps, which is as follows.

Lemma 4.2. Assume $\omega$ satisfies a Diophantine condition $\mathrm{DC}_{A,c}$:
$$\|k\omega\| > c|k|^{-A} \quad\text{for } k \in \mathbb{Z}\setminus\{0\}.$$
Then there is $\sigma > 0$ such that
$$\mathrm{mes}\Big\{x \in \mathbb{T} : \Big|\frac{1}{n}\log\|M^z_n(x)\| - L_n(z)\Big| > n^{-\sigma}\Big\} < e^{-n^\sigma}$$
for all $n \in \mathbb{Z}_+$ and all $z \in Z$.

Remark. In [3], the statement of the LDT is not uniform in $z$, but from the proof one can see that it holds uniformly, which is enough to prove Anderson localization. In addition, it is easy to check that the results in [3, Sections 2 and 3] also hold for $M^z_n(x)$.

4.2. Elimination of the eigenvalues. In [3, Section 4], the elimination of double resonances at a fixed point is proved based on the following fact for self-adjoint operators:
$$\mathrm{dist}(E, \sigma(H_n)) = \|(H_n - E)^{-1}\|^{-1}.$$
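The unitary equivalence to SL(2,R) mentioned above is easy to confirm numerically: Q* M Q has real entries and determinant one. The values of α and z below are arbitrary samples.

```python
import numpy as np

alpha = 0.3 - 0.2j                         # sample Verblunsky coefficient in D
rho = np.sqrt(1 - abs(alpha) ** 2)
sz = np.exp(0.35j)                         # sqrt(z) for z = exp(0.7i) on the circle

M = np.array([[sz, -np.conj(alpha) / sz],
              [-sz * alpha, 1 / sz]]) / rho
Q = np.array([[1, -1j], [1, 1j]]) * (-1 / (1 + 1j))

R = Q.conj().T @ M @ Q
assert np.isclose(np.linalg.det(M), 1)     # M lies in SU(1,1)
assert np.allclose(R.imag, 0)              # conjugated matrix is real ...
assert np.isclose(np.linalg.det(R), 1)     # ... with determinant 1, i.e. in SL(2,R)
```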
Although $\mathcal{C}_{[0,n-1]}$ is not self-adjoint, or even normal, we can still get the same estimate as in [3, Lemma 4.1] using the following result from [8] instead of the above fact.

Proposition 4.3. Let $\mathcal{M}_n$ be the set of pairs $(A, z)$, where $A$ is an $n \times n$ matrix and $z \in \mathbb{C}$ with $|z| \ge \|A\|$ and $z \notin \sigma(A)$. Then
$$\sup_{\mathcal{M}_n} \mathrm{dist}(z, \sigma(A))\,\|(A - z)^{-1}\| = \cot\Big(\frac{\pi}{4n}\Big).$$

Then we have the CMV analog of [3, Lemma 4.1].

Lemma 4.4. Let $\log\log\bar{n} \ll \log n$. Denote by $S \subset \mathbb{T} \times \mathbb{T}$ the set of $(\omega, x)$ such that $\|k\omega\| > c|k|^{-A}$ for $k \in \mathbb{Z}\setminus\{0\}$, and there are $n_0 < \bar{n}$ and $z$ such that
(4.2) $\|(z - \mathcal{C}_{[0,n_0-1]}(0))^{-1}\| > C^{\bar{n}}$
and
(4.3) $\frac{1}{n}\log\|M^z_n(x)\| < L_n(z) - n^{-\sigma}.$
Then $\mathrm{mes}\, S < e^{-n^{\sigma}}$.

4.3. Semi-algebraic sets and frequency estimates. According to the method in [3], in order to get Anderson localization, we need to remove double resonances along the orbits $\{x + n\omega\}$, where $n$ should be large enough. For i.i.d. Verblunsky coefficients this result is not so hard to obtain [5], but for analytic Verblunsky coefficients it becomes much more complicated. We need to rewrite the resonance conditions (4.2) and (4.3) as polynomials, and then use the tool developed in [14] to estimate the complexity of $S$.

More specifically, we need an upper bound for the number of intervals in the set $S_x = \{\omega \in \mathbb{T} : (\omega, x) \in S\}$, and we want to show that it is at most polynomially large. By unitary conjugacy, all statements in [3, Sections 5 and 6] hold for the CMV case, which means that we have the following estimate.

Lemma 4.5. Choose $\delta > 0$ and let $n$ be a sufficiently large integer. Denote by $\Omega_{n,\delta} \subset \mathbb{T}$ the set of frequencies $\omega \in \mathrm{DC}_{A,c}$ for which there are $n_0 < n^C$, $2^{(\log n)^2} \le \ell \le 2^{(\log n)^3}$, and $z$ such that
$$\|(z - \mathcal{C}_{[0,n_0-1]}(0))^{-1}\| > C^{n}, \qquad \frac{1}{n}\log\|M^z_n(\ell\omega)\| < L_n(z) - \delta.$$
Then $\mathrm{mes}\,\Omega_{n,\delta} < 2^{-(\log n)^2}$.

Proof of Theorem 4.1.
Now we are ready to prove Anderson localization of CMV matrices with analytic Verblunsky coefficients using our formula (2.6).

Denote by $\Omega_{n,\delta}$ the frequency set obtained in Lemma 4.5 and define
$$\Omega_\delta = \bigcap_{n'} \bigcup_{n > n'} \Omega_{n,\delta}, \qquad \Omega = \bigcup_{\delta} \Omega_\delta;$$
then $\mathrm{mes}\,\Omega = 0$.

Take $\omega \in I \cap (\mathrm{DC}_{A,c} \setminus \Omega)$ and let $z \in Z$ and $\xi = (\xi_n)_{n\in\mathbb{N}}$ satisfy the equation $\mathcal{C}(0)\xi = z\xi$ with $\xi_0 = 1$ and $|\xi_n| \le n^C$.

Fix $\delta > 0$ and assume there is $n_0 < n^C$ such that
$$\|(z - \mathcal{C}_{[0,n_0-1]}(0))^{-1}\| > C^n;$$
then Lemma 4.5 tells us that for all $2^{(\log n)^2} \le \ell \le 2^{(\log n)^3}$,
(4.4) $\frac{1}{n}\log\|M^z_n(\ell\omega)\| > L_n(z) - \delta.$
Similarly to [3, Lemma 2.1], we have
(4.5) $\frac{1}{n}\log\|M^z_n(x)\| < L_n(z) + \delta$
for all $z \in Z$.

According to Cramer's rule and direct calculations, we have
(4.6) $|(z - \mathcal{C}_{[0,n-1]})^{-1}(n_1, n_2)| \le C\,\Big|\dfrac{\det(z - \tilde{\mathcal{C}}_{[0,n_1-1]})\det(z - \tilde{\mathcal{C}}_{[n_2+1,n-1]})}{\det(z - \mathcal{C}_{[0,n-1]})}\Big|,$
where $|\det(z - \tilde{\mathcal{C}}_{[a,b]})| \asymp |\det(z - \mathcal{C}_{[a,b]})|$.

Next, Theorem 1 allows us to estimate (4.6). Recall that $\|M^z_n(x)\| = \|S^z_n(x)\|$; then Theorem 1 implies (using $|z| = 1$ and the fact that the norm of a matrix dominates half the sum of the absolute values of any two of its entries)
$$\|M^z_n(x)\| \ge \frac{1}{2}\Big(\prod_{j=0}^{n-1}\rho_j\Big)^{-1}\big(|\det(z - \mathcal{C}_{[1,n-1]})| + |\det(z - \mathcal{C}_{[0,n-1]}) - z\det(z - \mathcal{C}_{[1,n-1]})|\big)$$
$$\ge \frac{1}{2}\Big(\prod_{j=0}^{n-1}\rho_j\Big)^{-1}\big(|\det(z - \mathcal{C}_{[1,n-1]})| + |\det(z - \mathcal{C}_{[0,n-1]})| - |\det(z - \mathcal{C}_{[1,n-1]})|\big)$$
(4.7) $\ \ge \frac{1}{2}\Big(\prod_{j=0}^{n-1}\rho_j\Big)^{-1}|\det(z - \mathcal{C}_{[0,n-1]})|,$
and, since each entry of the matrix in (2.6) is bounded in terms of the two determinants,
(4.8) $\|M^z_n(x)\| \le C\Big(\prod_{j=0}^{n-1}\rho_j\Big)^{-1}\max\big\{|\det(z - \mathcal{C}_{[0,n-1]})|,\ |\det(z - \mathcal{C}_{[1,n-1]})|\big\}.$
Applying inequalities (4.4), (4.5), (4.7) and (4.8) to (4.6), for each $2^{(\log n)^2} \le \ell \le 2^{(\log n)^3}$, one of the matrices $\mathcal{C}_{[0,n-1]}(\ell\omega)$, $\mathcal{C}_{[1,n-1]}(\ell\omega)$, and thus $\mathcal{C}_\Lambda(0)$ for some $\Lambda \in \{[\ell, \ell+n-1], [\ell+1, \ell+n-1]\}$, will satisfy
$$|G_\Lambda(n_1, n_2)| = |(z - \mathcal{C}_\Lambda)^{-1}(n_1, n_2)| \le e^{-L_n(z)|n_1 - n_2| + o(n)} \le e^{-\delta|n_1 - n_2| + o(n)},$$
where we use the fact that $\mathcal{C}_{[a,b]}(x + \omega) = \mathcal{C}^T_{[a+1,b+1]}(x)$, which means that the corresponding determinants have the same absolute value, and the second inequality holds since $L_n(z) \ge L(z) > \delta$.

Next, according to the paving property in [3], for $2^{(\log n)^2+1} < N < 2^{(\log n)^3-1}$ the Green's function $G_{[N_1,N_2]}$, for a suitable interval $[N_1, N_2]$ containing $N$, satisfies
$$|G_{[N_1,N_2]}(n_1, n_2)| < e^{-\delta|n_1 - n_2| + o(N)}.$$
Restricting the eigenequation $\mathcal{C}(0)\xi = z\xi$ to the interval $[N_1, N_2]$, direct calculations imply
$$|\xi_N| \le e^{-\delta N}.$$
It remains to show that there is $n_0 < n^C$ such that
$$\|(z - \mathcal{C}_{[0,n_0-1]}(0))^{-1}\| > C^n,$$
which is almost the same as in [3]. □

Acknowledgements

I would like to thank Zhenghe Zhang for suggesting that I figure out this formula and for many useful discussions. I am grateful to my advisors David Damanik and Daxiong Piao for their support.

References

[1] M. Aizenman, S. Molchanov, Localization at large disorder and at extreme energies: an elementary derivation, Comm. Math. Phys. (1993), 245–278.
[2] P. W. Anderson, Absence of diffusion in certain random lattices, Phys. Rev. (1958), 1492–1505.
[3] J. Bourgain, M. Goldstein, On nonperturbative localization with quasi-periodic potential, Ann. of Math. (2000), 835–879.
[4] J. Bourgain, W. Schlag, Anderson localization for Schrödinger operators on Z with strongly mixing potentials, Comm. Math. Phys. (2000), 143–175.
[5] V. Bucaj, D. Damanik, J. Fillman, V. Gerbuz, T. VandenBoom, F. Wang, Z. Zhang, Localization for the one-dimensional Anderson model via positivity and large deviations for the Lyapunov exponent, in preparation.
[6] M. Cantero, L. Moral, L. Velázquez, Minimal representations of unitary operators and orthogonal polynomials on the unit circle, Linear Algebra Appl. (2003), 40–65.
[7] F. Wang, D. Damanik, Spectral properties of quasi-periodic CMV operators, in preparation.
[8] E. B. Davies, B. Simon, Eigenvalue estimates for non-normal matrices and the zeros of random orthogonal polynomials on the unit circle, J. Approx. Theory (2006), 189–213.
[9] J. Fröhlich, T. Spencer, Absence of diffusion in the Anderson tight binding model for large disorder or low energy, Comm. Math. Phys. (1983), 151–184.
[10] M. Goldstein, W. Schlag, Hölder continuity of the integrated density of states for quasi-periodic Schrödinger equations and averages of shifts of subharmonic functions, Ann. of Math. (2001), 155–203.
[11] S. Klein, Anderson localization for the discrete one-dimensional quasi-periodic Schrödinger operator with potential defined by a Gevrey-class function, J. Funct. Anal. (2005), 255–292.
[12] S. Klein, Localization for quasiperiodic Schrödinger operators with multivariable Gevrey potential functions, J. Spectr. Theory (2014), 431–484.
[13] S. Klein, Anderson localization for one-frequency quasi-periodic block Jacobi operators, J. Funct. Anal. (2017), 1140–1164.
[14] J. Milnor, On the Betti numbers of real varieties, Proc. Amer. Math. Soc. (1964), 275–280.
[15] B. Simon, OPUC on one foot, Bull. Amer. Math. Soc. (2005), 431–460.
[16] B. Simon, Orthogonal Polynomials on the Unit Circle. Part 1: Classical Theory, Amer. Math. Soc. Colloq. Publ., vol. 54, Amer. Math. Soc., Providence, RI, 2005.
[17] B. Simon, Orthogonal Polynomials on the Unit Circle. Part 2: Spectral Theory, Amer. Math. Soc. Colloq. Publ., vol. 54, Amer. Math. Soc., Providence, RI, 2005.
[18] B. Simon, CMV matrices: Five years after, J. Comput. Appl. Math. (2007), 120–154.
[19] Z. Zhang, Positive Lyapunov exponents for quasiperiodic Szegő cocycles, Nonlinearity (2012), 1771–1797.
Ocean University of China, Qingdao 266100, Shandong, China and Rice University, Houston, TX 77005, USA

E-mail address: