Spectral and Dynamical Properties of Certain Random Jacobi Matrices with Growing Parameters
arXiv [math.SP]
JONATHAN BREUER
Abstract. In this paper, a family of random Jacobi matrices with off-diagonal terms that exhibit power-law growth is studied. Since the growth of the randomness is slower than that of these terms, it is possible to use methods applied in the study of Schrödinger operators with random decaying potentials. A particular result of the analysis is the existence of operators with arbitrarily fast transport whose spectral measure is zero dimensional. The results are applied to the infinite Dumitriu-Edelman model [6] and its spectral properties are analyzed.

1. Introduction
For a sequence of positive numbers, $\{a(n)\}_{n=1}^{\infty}$, and a sequence, $\{b(n)\}_{n=1}^{\infty}$, of real numbers, let $J(\{a(n)\}_{n=1}^{\infty}, \{b(n)\}_{n=1}^{\infty})$ denote the Jacobi matrix with off-diagonal elements given by $\{a(n)\}_{n=1}^{\infty}$ and diagonal elements given by $\{b(n)\}_{n=1}^{\infty}$. That is,
$$
J(\{a(n)\}_{n=1}^{\infty}, \{b(n)\}_{n=1}^{\infty}) =
\begin{pmatrix}
b(1) & a(1) & 0 & 0 & \cdots \\
a(1) & b(2) & a(2) & 0 & \cdots \\
0 & a(2) & b(3) & a(3) & \cdots \\
\vdots & \ddots & \ddots & \ddots & \ddots
\end{pmatrix}. \tag{1.1}
$$
For $\eta_1 \in (0,1)$ and $\lambda_1 > 0$, let $J_{\lambda_1,\eta_1}$ be the Jacobi matrix whose parameters are $a_{\lambda_1,\eta_1}(n) = \lambda_1 n^{\eta_1}$ and $b_{\lambda_1,\eta_1}(n) \equiv 0$. Since $\eta_1 < 1$, $J_{\lambda_1,\eta_1}$ is a self-adjoint operator on $\ell^2(\mathbb{N})$ [1]. For any such operator we may define the spectral measure, $\mu$, as the unique measure satisfying
$$
\bigl(\delta_1, (J-z)^{-1}\delta_1\bigr) = \int_{\mathbb{R}} \frac{d\mu(x)}{x-z}, \qquad z \in \mathbb{C} \setminus \mathbb{R},
$$
where $(\cdot\,,\cdot)$ denotes the inner product in $\ell^2$. It follows from the work of Janas and Naboko [8] that $J_{\lambda_1,\eta_1}$ has absolutely continuous spectrum covering the whole real line.

This paper deals with random perturbations of $J_{\lambda_1,\eta_1}$ that are weak, in the sense that the variance of the perturbing random parameters grows like $\sim n^{2\eta_2}$ with $\eta_2 < \eta_1$.

The case $a(n) \equiv 1$, $\eta_2 < 0$ ("$\eta_1 = 0$"), with no perturbation off the diagonal, is the extensively studied family of one-dimensional discrete Schrödinger operators with a random decaying potential [3, 4, 5, 13, 14, 19]. For these Schrödinger operators it has been established that when $\eta_2 < -1/2$, the absolutely continuous spectrum of the Laplacian is a.s. preserved. When $\eta_2 > -1/2$, however, the disorder wins over and the spectrum is pure point with eigenfunctions that decay at a super-polynomial rate. At the critical point ($\eta_2 = -1/2$) the spectral behavior exhibits a sensitive dependence on the coupling constant: the generalized eigenfunctions decay polynomially, and the spectral measure is pure point or singular continuous according to whether these generalized eigenfunctions are in $\ell^2$ or not (for a comprehensive treatment of discrete Schrödinger operators with random decaying potentials, see Section 8 of [13]).

From this perspective, the extension presented in this paper consists in allowing growth of the off-diagonal terms. Intuitively, these terms are responsible for transport, and thus their growth should have an effect on the spectrum similar to that of the decay of the potential (the diagonal terms). Indeed, a particular case of our analysis is that of $\eta_2 = 0$, namely, the case where the diagonal terms are i.i.d. random variables. We show that the critical point here is $\eta_1 = 1/2$. Below this value, the spectrum is a.s. pure point, whereas above it the spectral measure is one-dimensional.

More generally, let $\{X_\omega(n)\}_{n=1}^{\infty}$ be a sequence of i.i.d. random variables. Let $\{Y_\omega(n)\}_{n=1}^{\infty}$ be another such sequence (the distributions of the $X$'s and the $Y$'s need not be the same). Assume the following:

(i) For all $n$,
$$
\langle X_\omega(n) \rangle = \langle Y_\omega(n) \rangle = 0 \tag{1.2}
$$
(where $\langle f(\omega) \rangle \equiv \int_\Omega f(\omega)\,dp$ and $(\Omega, \mathcal{F}, dp)$ is the underlying probability space).

(ii) For any $k \in \mathbb{N}$,
$$
\langle |Y_\omega(n)|^k \rangle < \infty, \qquad \langle |X_\omega(n)|^k \rangle < \infty. \tag{1.3}
$$

(iii)
$$
\langle X_\omega(n)^2 \rangle = 1, \qquad \langle Y_\omega(n)^2 \rangle = \tfrac{1}{4}. \tag{1.4}
$$

(iv) The common distribution of $X_\omega(n)$ is absolutely continuous with respect to Lebesgue measure.
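Before introducing the random model, here is a small numerical illustration (a sketch that is not part of the original text; the parameter values are arbitrary) of the objects defined so far: a finite truncation of the matrix (1.1) with power-law off-diagonal entries, and the spectral measure of $\delta_1$ obtained by diagonalization.

```python
import numpy as np

def jacobi_matrix(a, b):
    """Dense N x N truncation of the Jacobi matrix (1.1):
    b(1..N) on the diagonal, a(1..N-1) on the two off-diagonals."""
    J = np.diag(np.asarray(b, dtype=float))
    J += np.diag(a, 1) + np.diag(a, -1)
    return J

N = 200
n = np.arange(1, N + 1)
a = 2.0 * n[:-1] ** 0.3    # a(n) = lambda_1 * n^eta_1, illustrative lambda_1 = 2, eta_1 = 0.3
b = np.zeros(N)            # b(n) = 0: the unperturbed operator

J = jacobi_matrix(a, b)
eigvals, eigvecs = np.linalg.eigh(J)

# Spectral measure of delta_1: mu = sum_k |(delta_1, v_k)|^2 * delta_{E_k}
weights = eigvecs[0, :] ** 2
assert np.isclose(weights.sum(), 1.0)                    # mu is a probability measure
assert np.isclose((weights * eigvals).sum(), J[0, 0])    # first moment of mu equals b(1)
```

The weights reproduce the defining property of $\mu$: its moments are the matrix elements $(\delta_1, J^k \delta_1)$, which the assertions check for $k = 0, 1$.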
Given the quadruple $\Upsilon = (\eta_1, \eta_2, \lambda_1, \lambda_2)$ with $0 \leq \eta_2 < \eta_1 < 1$ and $\lambda_1, \lambda_2 > 0$, define the two random sequences
$$
b_{\Upsilon,\omega}(n) \equiv b_{\lambda_2,\eta_2;\omega}(n) \equiv \lambda_2 n^{\eta_2} X_\omega(n), \qquad
\alpha_{\lambda_2,\eta_2;\omega}(n) \equiv \lambda_2 n^{\eta_2} Y_\omega(n).
$$
Let $a_{\Upsilon,\omega}(n) \equiv a_{\lambda_1,\eta_1}(n) + \alpha_{\lambda_2,\eta_2;\omega}(n)$ and define
$$
J_{\Upsilon,\omega} = J\bigl(\{a_{\Upsilon,\omega}(n)\}_{n=1}^{\infty}, \{b_{\Upsilon,\omega}(n)\}_{n=1}^{\infty}\bigr). \tag{1.5}
$$
The assumptions on the parameters defining $J_{\Upsilon,\omega}$ do not exclude the possibility that some of the off-diagonal terms vanish. However, with probability one, this may happen only a finite number of times, so that $J_{\Upsilon,\omega}$ has an infinite part with strictly positive off-diagonal entries. In the following, when we refer to $J_{\Upsilon,\omega}$, we refer to this part.

We shall prove

Theorem 1.1.
For the model above, let $\gamma = \eta_1 - \eta_2$ and let $\Lambda = \frac{1}{2}(\lambda_2/\lambda_1)^2$. The following holds with probability one:

(1) If $\gamma > 1/2$, the spectrum of $J_{\Upsilon,\omega}$ is $\mathbb{R}$ and $\mu_{\Upsilon,\omega}$, the spectral measure of $J_{\Upsilon,\omega}$, is one-dimensional, meaning that it does not give weight to sets of Hausdorff dimension less than $1$.

(2) In the case $\gamma = 1/2$, the spectrum of $J_{\Upsilon,\omega}$ is $\mathbb{R}$ and we have the following two possibilities:
(a) If $\Lambda > 1 - \eta_1$, then $\mu_{\Upsilon,\omega}$ is pure point with eigenfunctions decaying like $|\psi^E_\omega(n)| \sim n^{-(\Lambda+\eta_1)/2}$.
(b) If $\Lambda \leq 1 - \eta_1$, then $\mu_{\Upsilon,\omega}$ is purely singular continuous with exact Hausdorff dimension equal to $1 - \frac{\Lambda}{1-\eta_1}$.

(3) If $\gamma < 1/2$, then the spectrum is pure point with eigenfunctions decaying like $|\psi^E_\omega(n)| \sim e^{-\frac{\Lambda}{2(1-2\gamma)} n^{1-2\gamma}}$. In this case, if $\eta_2 > 0$ then the spectrum fills $\mathbb{R}$.

Remark. We say that a measure, $\mu$, has exact Hausdorff dimension $\varrho$ if it is supported on a set of Hausdorff dimension $\varrho$ and does not give weight to sets of Hausdorff dimension less than $\varrho$. For more information concerning the decomposition of general measures with respect to their Hausdorff-dimensional properties, consult [15] and references therein.

Remark. In analogy to the Schrödinger case, one would expect to have absolutely continuous spectrum for $\gamma > 1/2$. Unfortunately, though we believe this is true, a one-dimensional spectral measure is all we could get.
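To see where the dividing line between cases 2(a) and 2(b) comes from, the standard $\ell^2$ test can be made explicit (this short computation is added here as a reasoning step; it is implicit in the statement of the theorem):

```latex
% Decay rate of the decaying solution in the critical case gamma = 1/2:
%   |psi_omega^E(n)| ~ n^{-(Lambda + eta_1)/2}.
% It is square-summable precisely above the threshold of case 2(a):
\sum_{n \ge 1} \bigl|\psi^E_\omega(n)\bigr|^2
  \sim \sum_{n \ge 1} n^{-(\Lambda + \eta_1)} < \infty
  \iff \Lambda + \eta_1 > 1
  \iff \Lambda > 1 - \eta_1 .
```

The same boundary is where the exact dimension $1 - \Lambda/(1-\eta_1)$ of case 2(b) reaches zero, so the two cases fit together continuously.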
Remark. The requirement that $\{X_\omega(n)\}_{n=1}^{\infty}$ and $\{Y_\omega(n)\}_{n=1}^{\infty}$ be identically distributed sequences is not really necessary and is made here only to simplify the discussion.

As in the Schrödinger case, the proof of this theorem proceeds by analyzing the asymptotics of solutions to the formal eigenfunction equation "$J\psi = E\psi$". Namely, we shall analyze the solutions to the difference equation
$$
a(n)\psi(n+1) + b(n)\psi(n) + a(n-1)\psi(n-1) = E\psi(n), \qquad n > 1. \tag{1.6}
$$
By a theorem of Kiselev and Last [12, Theorem 1.2], the results obtained have implications for the quantum dynamics associated with $J$, namely, the behavior of a given vector $\psi$ under the operation of the one-parameter unitary group $e^{-itJ}$. More precisely, let $\hat{X}$ be the position operator defined by
$$
\bigl(\hat{X}\psi\bigr)(n) = n\psi(n).
$$
Theorem 1.2 of [12], which can be seen to hold in our setting, says that Theorem 1.1 and Proposition 2.7 below lead to
Theorem 1.2. Assume $\gamma = 1/2$ and $\Lambda \leq 1 - \eta_1$ (namely, case 2(b) of Theorem 1.1). Then for any $\varepsilon > 0$, $m \in \mathbb{N}$, $T > 0$ and $\psi \in \ell^2$,
$$
\frac{1}{T}\int_0^T \Bigl| \bigl( \hat{X}^m e^{-itJ_{\Upsilon,\omega}}\psi,\, e^{-itJ_{\Upsilon,\omega}}\psi \bigr) \Bigr|\, dt \geq C(\omega,\psi,m,\varepsilon)\, T^{\frac{m}{1-\eta_1} - \varepsilon} \tag{1.7}
$$
with probability one.

Note that $\eta_1$ may be chosen arbitrarily close to $1$, while the spectral measure may have any dimension in $[0,1]$; in particular, one obtains operators with arbitrarily fast transport whose spectral measure is zero dimensional.

Our analysis also applies to a family of tridiagonal random matrix models for the Gaussian $\beta$ ensembles arising naturally in the context of Random Matrix Theory: the eigenvalue distribution functions for the three classical Gaussian ensembles are given by
$$
f_{\beta,N}(E_1, \dots, E_N) = \frac{1}{G_{\beta N}} \exp\Bigl( -\sum_{j=1}^N E_j^2 \Bigr) \prod_{1 \leq j < k \leq N} |E_j - E_k|^{\beta} \tag{1.8}
$$
with $\beta = 1, 2, 4$. Fix $\beta > 0$. The random family of Jacobi matrices $J_{\beta,\omega} \equiv J\bigl(\{a_{\beta,\omega}(n)\}_{n=1}^{\infty}, \{b_{\beta,\omega}(n)\}_{n=1}^{\infty}\bigr)$ is defined by:

(1) The random variables $\{a_{\beta,\omega}(n)\}_{n=1}^{\infty}$, $\{b_{\beta,\omega}(n)\}_{n=1}^{\infty}$ are all independent.
(2) The $b_{\beta,\omega}(n)$ are all standard Gaussian variables (that is, with zero mean and variance $1$), irrespective of $\beta$ and $n$.
(3) The probability distribution function of $a_{\beta,\omega}(n)$ is given by
$$
P\{\omega \mid a_{\beta,\omega}(n) < C\} = \frac{2}{\Gamma\bigl(\frac{\beta n}{2}\bigr)} \int_0^C x^{\beta n - 1} e^{-x^2}\, dx. \tag{1.9}
$$
In [6], Dumitriu and Edelman showed that the eigenvalue distribution function of the finite matrix, obtained as the restriction of $J_{\beta,\omega}$ to the $N \times N$ upper left corner, is $f_{\beta,N}$ for any $\beta > 0$. A computation shows that
$$
\langle a_{\beta,\omega}(n) \rangle \equiv \int_\Omega a_{\beta,\omega}(n)\, d\omega
= \frac{\Gamma\bigl(\frac{\beta n + 1}{2}\bigr)}{\Gamma\bigl(\frac{\beta n}{2}\bigr)}
= \sqrt{\frac{\beta n}{2}}\Bigl(1 - \frac{1}{4\beta n}\Bigr) + O\bigl(n^{-3/2}\bigr), \tag{1.10}
$$
$$
\bigl\langle \bigl(a_{\beta,\omega}(n) - \langle a_{\beta,\omega}(n) \rangle\bigr)^2 \bigr\rangle = \frac{1}{4} + O\Bigl(\frac{1}{n}\Bigr). \tag{1.11}
$$
Thus we see that the family $J_{\beta,\omega}$ corresponds to the case $\eta_1 = 1/2$, $\eta_2 = 0$ of the general matrices introduced above (with $\lambda_1 = \sqrt{\beta/2}$ and $\lambda_2 = 1$). Technically, the following theorem is not a corollary of Theorem 1.1, because of the $O(n^{-3/2})$ term in (1.10) and the $O(n^{-1})$ term in (1.11). The proof of Theorem 1.1, however, is robust with respect to such a change, and we have

Theorem 1.4.
For any $\beta$, the spectrum of $J_{\beta,\omega}$ is $\mathbb{R}$ with probability one.

If $\beta < 2$, then, with probability one, the spectral measure, $\mu_{\beta,\omega}$, corresponding to $J_{\beta,\omega}$ and $\delta_1$, is pure point with eigenfunctions decaying as $|\psi_\omega(n)| \sim n^{-(\frac{1}{4} + \frac{1}{2\beta})}$.

If $\beta \geq 2$, then, with probability one, $\mu_{\beta,\omega}$ has exact dimension $1 - \frac{2}{\beta}$. Furthermore, for $\beta \geq 2$, we have that, almost surely,
$$
\frac{1}{T}\int_0^T \Bigl| \bigl( \hat{X}^m e^{-itJ_{\beta,\omega}}\psi,\, e^{-itJ_{\beta,\omega}}\psi \bigr) \Bigr|\, dt \geq C(\omega,\psi,m,\varepsilon)\, T^{2m-\varepsilon} \tag{1.12}
$$
for any $\psi$, $\varepsilon > 0$ and $m$.

This result, without the dynamical part, was announced in [2]. We note that the analogous Circular $\beta$ Ensembles can be realized as eigenvalues of truncated CMV matrices. This was shown by Killip and Nenciu [10] and later used by Killip and Stoiciu in their analysis of level statistics for ensembles of random CMV matrices [11]. The bulk spectral properties of the appropriate matrices were analyzed by Simon [21, Section 12.7].

The proof of Theorem 1.1 is given in the next section. Since the proof of the spectral part of Theorem 1.4 is precisely the same, it is not given separately. As noted earlier, the dynamical part of our analysis (Theorem 1.2 and the corresponding statement in Theorem 1.4) follows immediately from Theorem 1.1 and Proposition 2.7, by Theorem 1.2 of [12].

The method we use is a variation on the one used by Kiselev-Last-Simon [13, Section 8] in their analysis of the Schrödinger case described above. A notable difference is the fact that, due to the growth of the $a(n)$, the effective energy parameter, $E/\tilde{a}(n)$, vanishes in the limit. This, in addition to requiring a modification in the technique of proof (see Lemma 2.4 and Proposition 2.5 below), leads to the fact that the asymptotics of the generalized eigenfunctions are constant over $\mathbb{R}$.
At the critical point ($\gamma = \eta_1 - \eta_2 = 1/2$), this implies uniformity of the local Hausdorff dimensions of the spectral measure.

A modified Combes-Thomas estimate, for operators with unbounded off-diagonal terms, enters our analysis in the identification of the spectrum of $J_{\Upsilon,\omega}$. Such an estimate may be of independent interest and is therefore presented in the Appendix.

2. Proof of Theorem 1.1

We begin with a simple lemma showing that, in a certain sense, $J_{\Upsilon,\omega}$ is a random relatively decaying perturbation of $J_{\lambda_1,\eta_1}$.

Lemma 2.1. For any $\varepsilon > 0$ there exists, with probability one, a constant $C = C(\omega,\varepsilon)$ for which
$$
\frac{|n^{\eta_2} X_\omega(n)|}{n^{\eta_1}} \leq C n^{-\gamma+\varepsilon} \tag{2.1}
$$
and
$$
\frac{|n^{\eta_2} Y_\omega(n)|}{n^{\eta_1}} \leq C n^{-\gamma+\varepsilon}, \tag{2.2}
$$
where $\gamma = \eta_1 - \eta_2$.

Proof. By (1.3) and Chebyshev's inequality we have, for any $k \in \mathbb{N}$,
$$
P_n \equiv P\Bigl\{\omega \;\Big|\; \frac{|n^{\eta_2} X_\omega(n)|}{n^{\eta_1}} \geq n^{-\gamma+\varepsilon}\Bigr\} \leq \frac{C(k)}{n^{2k\varepsilon}}.
$$
By choosing $2k > \varepsilon^{-1}$ we see that $\sum_{n=1}^{\infty} P_n < \infty$. (2.1) follows now from Borel-Cantelli. The proof of (2.2) is the same. $\Box$

As stated in the Introduction, we follow the strategy of [13]. In particular, we will deduce the spectral properties of $J_{\Upsilon,\omega}$ from the asymptotics of the solutions to the corresponding eigenfunction equation. In order to fix notation, for a given Jacobi matrix $J(\{a(n)\}_{n=1}^{\infty}, \{b(n)\}_{n=1}^{\infty})$ and a fixed $E \in \mathbb{R}$, denote by $\psi^E$ a solution to the equation
$$
a(n)\psi^E(n+1) + b(n)\psi^E(n) + a(n-1)\psi^E(n-1) = E\psi^E(n), \qquad n > 1. \tag{2.3}
$$
It is customary to extend this equation to $n = 1$ by defining $a(0) = 1$. Clearly, the space of sequences $\{\psi^E(n)\}_{n=0}^{\infty}$ solving (2.3) is a two-dimensional vector space, and any such sequence is completely determined by its values at $0$ and $1$.
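Since (2.3) is a second-order linear difference equation, the two-dimensionality of its solution space is easy to check numerically. The sketch below (illustrative only; the coefficient sequences and initial data are hypothetical, not from the paper) iterates the recurrence with the convention $a(0) = 1$ and verifies the superposition principle.

```python
import numpy as np

def solve(a, b, E, psi0, psi1):
    """Iterate a(n)psi(n+1) + b(n)psi(n) + a(n-1)psi(n-1) = E psi(n),
    n = 1..N-1, with the convention a(0) = 1, from psi(0), psi(1)."""
    N = len(b)
    psi = np.zeros(N + 1)
    psi[0], psi[1] = psi0, psi1
    for n in range(1, N):
        a_prev = a[n - 2] if n >= 2 else 1.0   # a(0) = 1
        psi[n + 1] = (E * psi[n] - b[n - 1] * psi[n] - a_prev * psi[n - 1]) / a[n - 1]
    return psi

rng = np.random.default_rng(0)
N = 50
n = np.arange(1, N + 1)
a = 1.5 * n ** 0.4 + 0.1 * rng.standard_normal(N)   # illustrative growing a(n)
b = 0.2 * rng.standard_normal(N)
E = 0.7

u = solve(a, b, E, 1.0, 0.0)
v = solve(a, b, E, 0.0, 1.0)
w = solve(a, b, E, 2.0, -3.0)
# Any solution is the corresponding linear combination of u and v:
assert np.allclose(w, 2.0 * u - 3.0 * v)
```

The assertion checks exactly the statement in the text: a solution is determined by its values at $0$ and $1$, linearly.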
We let $\psi^E_\phi(n)$ stand for the solution of (2.3) satisfying
$$
\psi^E_\phi(0) = \sin(\phi), \qquad \psi^E_\phi(1) = \cos(\phi). \tag{2.4}
$$
We note that, formally, $\psi^E(n)$ satisfies $J\psi^E = E\psi^E$. Let
$$
S^E(n) = \begin{pmatrix} \frac{E-b(n)}{a(n)} & -\frac{a(n-1)}{a(n)} \\ 1 & 0 \end{pmatrix}
\qquad \text{and} \qquad
T^E(n) = S^E(n) \cdot S^E(n-1) \cdots S^E(1).
$$
Then, for any $\phi$,
$$
\begin{pmatrix} \psi^E_\phi(n+1) \\ \psi^E_\phi(n) \end{pmatrix} = T^E(n) \begin{pmatrix} \psi^E_\phi(1) \\ \psi^E_\phi(0) \end{pmatrix},
$$
and so
$$
T^E(n) = \begin{pmatrix} \psi^E_0(n+1) & \psi^E_{\pi/2}(n+1) \\ \psi^E_0(n) & \psi^E_{\pi/2}(n) \end{pmatrix}.
$$
We call the matrices $S^E(n)$ defined above one-step transfer matrices, and for the matrices $T^E(n)$ we use the name $n$-step transfer matrices. Our main technical result is

Theorem 2.2. Let $J_{\Upsilon,\omega}$ be the family of random Jacobi matrices described in the Introduction. Then, for any $E \in \mathbb{R}$, the following holds with probability one:

(1) If $\gamma > 1/2$,
$$
\lim_{n\to\infty} \frac{\log \|T^E_\omega(n)\|}{\log(n)} = -\frac{\eta_1}{2}. \tag{2.5}
$$
(2) If $\gamma = 1/2$,
$$
\lim_{n\to\infty} \frac{\log \|T^E_\omega(n)\|}{\log(n)} = \frac{\Lambda - \eta_1}{2}. \tag{2.6}
$$
(3) If $\gamma < 1/2$,
$$
\lim_{n\to\infty} \frac{\log \|T^E_\omega(n)\|}{n^{1-2\gamma}} = \frac{\Lambda}{2(1-2\gamma)}. \tag{2.7}
$$

The EFGP transform (see [13]) is a useful tool for studying the asymptotic behavior of $\|T^E(n)\|$ in the Schrödinger case ($a(n) \equiv 1$). Since in our case $a(n) \to \infty$, certain modifications are needed. We proceed to present a version that is suitable for our purposes.

Let $J(\{a_\omega(n)\}_{n=1}^{\infty}, \{b_\omega(n)\}_{n=1}^{\infty})$ be a Jacobi matrix whose entries are all independent random variables. Let $\tilde{a}(n) = \langle a_\omega(n) \rangle$ and $\alpha_\omega(n) = a_\omega(n) - \tilde{a}(n)$, and assume that
$$
\lim_{n\to\infty} \tilde{a}(n) = \infty \tag{2.8}
$$
and that
$$
\lim_{n\to\infty} \frac{\alpha_\omega(n)}{\tilde{a}(n)} = 0 \tag{2.9}
$$
with probability one. These properties clearly hold for $J_{\Upsilon,\omega}$ (see Lemma 2.1). In the analysis that follows we keep $E \in \mathbb{R}$ fixed, so we omit it from the notation. Define
$$
K_\omega(n) = \begin{pmatrix} 1 & 0 \\ 0 & a_\omega(n) \end{pmatrix}.
$$
Then
$$
\tilde{S}_\omega(n) \equiv K_\omega(n) S(n) K_\omega(n-1)^{-1} = \begin{pmatrix} \frac{E-b_\omega(n)}{a_\omega(n)} & -\frac{1}{a_\omega(n)} \\ a_\omega(n) & 0 \end{pmatrix}
$$
and
$$
\tilde{T}_\omega(n) \equiv \tilde{S}_\omega(n) \cdot \tilde{S}_\omega(n-1) \cdots \tilde{S}_\omega(1) = K_\omega(n) T_\omega(n).
$$
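A small numerical check of the transfer-matrix formalism (a sketch; the coefficient sequences are hypothetical). One exact identity worth noting: $\det S^E(n) = a(n-1)/a(n)$, so the product telescopes to $\det T^E(n) = a(0)/a(n) = 1/a(n)$; this determinant reappears later as $d_n$ in the application of Lemma 2.6.

```python
import numpy as np

def one_step(a, b, E, n):
    """S_E(n) as in the text; n is 1-based and a(0) = 1."""
    a_prev = a[n - 2] if n >= 2 else 1.0
    return np.array([[(E - b[n - 1]) / a[n - 1], -a_prev / a[n - 1]],
                     [1.0, 0.0]])

def transfer(a, b, E, n):
    """T_E(n) = S_E(n) ... S_E(1)."""
    T = np.eye(2)
    for m in range(1, n + 1):
        T = one_step(a, b, E, m) @ T
    return T

rng = np.random.default_rng(1)
N = 40
k = np.arange(1, N + 1)
a = 1.2 * k ** 0.5 + 0.05 * rng.standard_normal(N)   # illustrative growing a(n)
b = 0.3 * rng.standard_normal(N)
E = 1.3

T = transfer(a, b, E, N)
# Telescoping determinant: det T_E(N) = 1 / a(N)
assert np.isclose(np.linalg.det(T), 1.0 / a[N - 1])
```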
For any $\phi$, define the sequences $\{u_{\omega,\phi}(n)\}_{n=1}^{\infty}$ and $\{v_{\omega,\phi}(n)\}_{n=1}^{\infty}$ by
$$
\begin{pmatrix} u_{\omega,\phi}(n) \\ v_{\omega,\phi}(n) \end{pmatrix} = \tilde{T}_\omega(n) \begin{pmatrix} \cos(\phi) \\ \sin(\phi) \end{pmatrix},
$$
so that, from the definition of $\psi_{\omega,\phi}$ ($= \psi_\phi$ for the random Jacobi parameters), we see that
$$
u_{\omega,\phi}(n) = \psi_{\omega,\phi}(n+1), \qquad v_{\omega,\phi}(n) = a_\omega(n)\psi_{\omega,\phi}(n). \tag{2.10}
$$
By (2.8) we see that for any $E \in \mathbb{R}$ and sufficiently large $n$, we may define $k_n \in (0,\pi)$ by
$$
2\cos(k_n) = \frac{E}{\tilde{a}(n)}. \tag{2.11}
$$
Clearly, $k_n \to \pi/2$ as $n \to \infty$. Now, define $R_{\omega,\phi}(n)$ and $\theta_{\omega,\phi}(n)$ through
$$
R_{\omega,\phi}(n)\sin(\theta_{\omega,\phi}(n)) = v_{\omega,\phi}(n)\sin(k_n) \tag{2.12}
$$
and
$$
R_{\omega,\phi}(n)\cos(\theta_{\omega,\phi}(n)) = \tilde{a}(n)u_{\omega,\phi}(n) - v_{\omega,\phi}(n)\cos(k_n), \tag{2.13}
$$
so that (using (2.10) and (2.11))
$$
R_{\omega,\phi}(n)^2 = v_{\omega,\phi}(n)^2 + \tilde{a}(n)^2 u_{\omega,\phi}(n)^2 - E\, u_{\omega,\phi}(n) v_{\omega,\phi}(n)
= a_\omega(n)^2 \psi_{\omega,\phi}(n)^2 + \tilde{a}(n)^2 \psi_{\omega,\phi}(n+1)^2 - E\, a_\omega(n) \psi_{\omega,\phi}(n+1)\psi_{\omega,\phi}(n),
$$
which leads to
$$
\frac{R_{\omega,\phi}(n)^2}{\tilde{a}(n)^2\bigl(\psi_{\omega,\phi}(n)^2 + \psi_{\omega,\phi}(n+1)^2\bigr)}
= 1 + \Bigl( \frac{2\alpha_\omega(n)}{\tilde{a}(n)} + \frac{\alpha_\omega(n)^2}{\tilde{a}(n)^2} \Bigr) \frac{\psi_{\omega,\phi}(n)^2}{\psi_{\omega,\phi}(n)^2 + \psi_{\omega,\phi}(n+1)^2}
- \frac{E\, a_\omega(n)}{\tilde{a}(n)^2}\, \frac{\psi_{\omega,\phi}(n)\psi_{\omega,\phi}(n+1)}{\psi_{\omega,\phi}(n)^2 + \psi_{\omega,\phi}(n+1)^2}. \tag{2.14}
$$
By (2.9), the right hand side converges to one with probability $1$, uniformly in $\phi$, so that almost surely, for sufficiently large $n$, there are constants $C_1, C_2 > 0$ with
$$
C_1 R_{\omega,\phi}(n)^2 \leq \tilde{a}(n)^2\bigl(\psi_{\omega,\phi}(n)^2 + \psi_{\omega,\phi}(n+1)^2\bigr) \leq C_2 R_{\omega,\phi}(n)^2.
$$
Now, by a straightforward adaptation of Lemma 2.2 of [13], it follows that for any two angles $\phi_1 \neq \phi_2$ there are constants $C_1, C_2 > 0$ such that
$$
C_1 \max\bigl(R_{\omega,\phi_1}(n), R_{\omega,\phi_2}(n)\bigr) \leq \tilde{a}(n)\|T_\omega(n)\| \leq C_2 \max\bigl(R_{\omega,\phi_1}(n), R_{\omega,\phi_2}(n)\bigr). \tag{2.15}
$$
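The algebra behind $R_{\omega,\phi}(n)$ can be checked numerically. The sketch below (hypothetical sequences; $\tilde a(n)$ is the deterministic part of $a(n)$) builds a solution of the eigenfunction equation, forms $u$, $v$ and the Prüfer radius from (2.12)-(2.13), and verifies the identity $R^2 = a(n)^2\psi(n)^2 + \tilde a(n)^2\psi(n+1)^2 - E\,a(n)\psi(n)\psi(n+1)$ used above.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 60
idx = np.arange(1, N + 1)
a_mean = 1.5 * idx ** 0.5                      # tilde-a(n), deterministic part
a = a_mean + 0.1 * rng.standard_normal(N)      # random a(n)
b = 0.2 * rng.standard_normal(N)
E = 0.9

# Solve the eigenfunction equation for psi(0..N), with a(0) = 1
psi = np.zeros(N + 1)
psi[0], psi[1] = 0.3, 0.8
for m in range(1, N):
    a_prev = a[m - 2] if m >= 2 else 1.0
    psi[m + 1] = (E * psi[m] - b[m - 1] * psi[m] - a_prev * psi[m - 1]) / a[m - 1]

n = N - 1                       # pick an index where psi(n) and psi(n+1) exist
u = psi[n + 1]                  # u(n) = psi(n+1)
v = a[n - 1] * psi[n]           # v(n) = a(n) psi(n)
kn = np.arccos(E / (2.0 * a_mean[n - 1]))      # 2 cos(k_n) = E / tilde-a(n)
R = np.hypot(v * np.sin(kn), a_mean[n - 1] * u - v * np.cos(kn))

lhs = R ** 2
rhs = (a[n - 1] ** 2 * psi[n] ** 2 + a_mean[n - 1] ** 2 * psi[n + 1] ** 2
       - E * a[n - 1] * psi[n] * psi[n + 1])
assert np.isclose(lhs, rhs)
```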
Thus, we are led to examine the asymptotic properties of $\log R_{\omega,\phi}(n)$. Let us formulate a recursion relation for $R_{\omega,\phi}(n)$: (2.12) and (2.13) mean
$$
\begin{pmatrix} R_{\omega,\phi}(n)\sin(\theta_{\omega,\phi}(n)) \\ R_{\omega,\phi}(n)\cos(\theta_{\omega,\phi}(n)) \end{pmatrix}
= \begin{pmatrix} 0 & \sin(k_n) \\ \tilde{a}(n) & -\cos(k_n) \end{pmatrix}
\begin{pmatrix} u_{\omega,\phi}(n) \\ v_{\omega,\phi}(n) \end{pmatrix}
= \begin{pmatrix} 0 & \sin(k_n) \\ \tilde{a}(n) & -\cos(k_n) \end{pmatrix}
K_\omega(n)
\begin{pmatrix} \psi_{\omega,\phi}(n+1) \\ \psi_{\omega,\phi}(n) \end{pmatrix}.
$$
We also know that
$$
\begin{pmatrix} \psi_{\omega,\phi}(n+2) \\ \psi_{\omega,\phi}(n+1) \end{pmatrix} = S_\omega(n+1)\begin{pmatrix} \psi_{\omega,\phi}(n+1) \\ \psi_{\omega,\phi}(n) \end{pmatrix},
$$
so
$$
\begin{pmatrix} R_{\omega,\phi}(n+1)\sin(\theta_{\omega,\phi}(n+1)) \\ R_{\omega,\phi}(n+1)\cos(\theta_{\omega,\phi}(n+1)) \end{pmatrix}
= \begin{pmatrix} 0 & \sin(k_{n+1}) \\ \tilde{a}(n+1) & -\cos(k_{n+1}) \end{pmatrix}
\tilde{S}_\omega(n+1)
\begin{pmatrix} 0 & \sin(k_n) \\ \tilde{a}(n) & -\cos(k_n) \end{pmatrix}^{-1}
\begin{pmatrix} R_{\omega,\phi}(n)\sin(\theta_{\omega,\phi}(n)) \\ R_{\omega,\phi}(n)\cos(\theta_{\omega,\phi}(n)) \end{pmatrix}. \tag{2.16}
$$
Now, write
$$
\tilde{S}_\omega(n+1) = \begin{pmatrix} \frac{E-b_\omega(n+1)}{a_\omega(n+1)} & -\frac{1}{a_\omega(n+1)} \\ a_\omega(n+1) & 0 \end{pmatrix}
= \frac{\tilde{a}(n+1)}{a_\omega(n+1)}\left( \begin{pmatrix} \frac{E}{\tilde{a}(n+1)} & -\frac{1}{\tilde{a}(n+1)} \\ \tilde{a}(n+1) & 0 \end{pmatrix}
+ \begin{pmatrix} -\frac{b_\omega(n+1)}{\tilde{a}(n+1)} & 0 \\ \frac{a_\omega(n+1)^2 - \tilde{a}(n+1)^2}{\tilde{a}(n+1)} & 0 \end{pmatrix} \right).
$$
We define
$$
Z_\omega(n+1) = \begin{pmatrix} 0 & \sin(k_{n+1}) \\ \tilde{a}(n+1) & -\cos(k_{n+1}) \end{pmatrix}
\begin{pmatrix} \frac{E}{\tilde{a}(n+1)} & -\frac{1}{\tilde{a}(n+1)} \\ \tilde{a}(n+1) & 0 \end{pmatrix}
\begin{pmatrix} 0 & \sin(k_n) \\ \tilde{a}(n) & -\cos(k_n) \end{pmatrix}^{-1}
\begin{pmatrix} \sin(\theta_{\omega,\phi}(n)) \\ \cos(\theta_{\omega,\phi}(n)) \end{pmatrix}
$$
and
$$
W_\omega(n+1) = \begin{pmatrix} 0 & \sin(k_{n+1}) \\ \tilde{a}(n+1) & -\cos(k_{n+1}) \end{pmatrix}
\begin{pmatrix} -\frac{b_\omega(n+1)}{\tilde{a}(n+1)} & 0 \\ \frac{a_\omega(n+1)^2 - \tilde{a}(n+1)^2}{\tilde{a}(n+1)} & 0 \end{pmatrix}
\begin{pmatrix} 0 & \sin(k_n) \\ \tilde{a}(n) & -\cos(k_n) \end{pmatrix}^{-1}
\begin{pmatrix} \sin(\theta_{\omega,\phi}(n)) \\ \cos(\theta_{\omega,\phi}(n)) \end{pmatrix}
$$
(we ignore the dependence on $\phi$ since we keep it fixed).
Then, from (2.16), we see that
$$
\frac{R_{\omega,\phi}(n+1)}{R_{\omega,\phi}(n)} = \frac{\tilde{a}(n+1)}{a_\omega(n+1)}\, \|Z_\omega(n+1) + W_\omega(n+1)\|. \tag{2.17}
$$
$\theta_{\omega,\phi}(n)$ satisfies a recurrence relation as well: from (2.10) we have
$$
v_{\omega,\phi}(n+1) = a_\omega(n+1) u_{\omega,\phi}(n) \tag{2.18}
$$
and
$$
a_\omega(n+1) u_{\omega,\phi}(n+1) + b_\omega(n+1) u_{\omega,\phi}(n) + a_\omega(n) u_{\omega,\phi}(n-1) = E u_{\omega,\phi}(n). \tag{2.19}
$$
Write, using (2.12)-(2.13),
$$
\cot(\theta_{\omega,\phi}(n+1)) = \frac{\tilde{a}(n+1)u_{\omega,\phi}(n+1) - \cos(k_{n+1})v_{\omega,\phi}(n+1)}{\sin(k_{n+1})v_{\omega,\phi}(n+1)}
= \frac{\tilde{a}(n+1)u_{\omega,\phi}(n+1) - a_\omega(n+1)\cos(k_{n+1})u_{\omega,\phi}(n)}{\sin(k_{n+1})\,a_\omega(n+1)u_{\omega,\phi}(n)}. \tag{2.20}
$$
Furthermore, observing that
$$
R_{\omega,\phi}(n)\sin(\theta_{\omega,\phi}(n) + k_n) = \tilde{a}(n)\sin(k_n)u_{\omega,\phi}(n)
$$
and
$$
R_{\omega,\phi}(n)\cos(\theta_{\omega,\phi}(n) + k_n) = \tilde{a}(n)\cos(k_n)u_{\omega,\phi}(n) - v_{\omega,\phi}(n)
= \tilde{a}(n)\cos(k_n)u_{\omega,\phi}(n) - a_\omega(n)u_{\omega,\phi}(n-1),
$$
we may write
$$
\cot(\theta_{\omega,\phi}(n) + k_n) = \frac{\tilde{a}(n)\cos(k_n)u_{\omega,\phi}(n) - a_\omega(n)u_{\omega,\phi}(n-1)}{\tilde{a}(n)\sin(k_n)u_{\omega,\phi}(n)}. \tag{2.21}
$$
Substituting $u_{\omega,\phi}(n+1)$ from (2.19) into (2.20), and then $a_\omega(n)u_{\omega,\phi}(n-1)$ from (2.21), we get
$$
\cot(\theta_{\omega,\phi}(n+1)) = \frac{\tilde{a}(n+1)\tilde{a}(n)}{a_\omega(n+1)^2}\,\frac{\sin(k_n)}{\sin(k_{n+1})}\,\cot(\theta_{\omega,\phi}(n) + k_n)
+ \cot(k_{n+1})\Bigl( \frac{\tilde{a}(n+1)^2}{a_\omega(n+1)^2} - 1 \Bigr)
- \frac{\tilde{a}(n+1)}{\sin(k_{n+1})\,a_\omega(n+1)^2}\, b_\omega(n+1)
\equiv \kappa_\omega(n+1)\cot(\bar\theta_{\omega,\phi}(n)) + \zeta_\omega(n+1), \tag{2.22}
$$
where $\bar\theta_{\omega,\phi}(n) = \theta_{\omega,\phi}(n) + k_n$.

By (2.15), and picking two angles $\phi_1 \neq \phi_2$, Theorem 2.2 follows from

Proposition 2.3. Let $J_{\Upsilon,\omega}$ be the family of random Jacobi matrices described in the Introduction. Then, for any $E \in \mathbb{R}$ and for any $\phi$, the following holds with probability one:

(1) If $\gamma > 1/2$,
$$
\lim_{n\to\infty} \frac{\log R^E_{\omega,\phi}(n)}{\log(n)} = \frac{\eta_1}{2}. \tag{2.23}
$$
(2) If $\gamma = 1/2$,
$$
\lim_{n\to\infty} \frac{\log R^E_{\omega,\phi}(n)}{\log(n)} = \frac{\Lambda + \eta_1}{2}. \tag{2.24}
$$
(3) If $\gamma < 1/2$,
$$
\lim_{n\to\infty} \frac{\log R^E_{\omega,\phi}(n)}{n^{1-2\gamma}} = \frac{\Lambda}{2(1-2\gamma)}. \tag{2.25}
$$

Proof. As in [13], we shall prove the statement by using the recursion relation for $R_\omega(n)$ (equation (2.17)).
Namely, we shall prove that
$$
\frac{1}{F_\gamma(n)} \sum_{j=1}^n \Bigl( \log\bigl( \|Z_\omega(j) + W_\omega(j)\|^2 \bigr) - \log\Bigl( \frac{a_\omega(j)^2}{\tilde{a}(j)^2} \Bigr) \Bigr) \tag{2.26}
$$
converges to the appropriate limit, where $F_\gamma(n) = \log(n)$ for $\gamma \geq 1/2$ and $F_\gamma(n) = \frac{n^{1-2\gamma}}{1-2\gamma}$ otherwise. From this point on, $a_\omega(n) = a_{\Upsilon,\omega}(n)$ and $b_\omega(n) = b_{\Upsilon,\omega}(n)$.

We shall need some estimate on the behavior of $\theta_\omega(n)$. We start with

Lemma 2.4. For any $\varepsilon > 0$ there exists, with probability one, a constant $\tilde{C} = \tilde{C}(\omega,\varepsilon)$ such that
$$
|\theta_{\omega,\phi}(n+1) - \bar\theta_{\omega,\phi}(n)| \leq \tilde{C}\max\bigl(n^{-\gamma+\varepsilon}, n^{-1}\bigr). \tag{2.27}
$$
Now, choosing ε < η − η , by Lemma 2.1 I ≤ (cid:12)(cid:12)(cid:12)(cid:12) ˜ a ( n + 1) (˜ a ( n ) − ˜ a ( n + 1)) a ω ( n + 1) (cid:12)(cid:12)(cid:12)(cid:12) + 4 (cid:12)(cid:12)(cid:12)(cid:12) α ω ( n + 1)˜ a ( n + 1) a ω ( n + 1) (cid:12)(cid:12)(cid:12)(cid:12) + 2 (cid:12)(cid:12)(cid:12)(cid:12) α ω ( n + 1) a ω ( n + 1) (cid:12)(cid:12)(cid:12)(cid:12) ≤ C ( ω, ε ) max( n − γ + ε , n − ) I ≤ | cos( k n +1 ) | (cid:12)(cid:12)(cid:12)(cid:12) α ω ( n + 1)˜ a ( n + 1) + α ω ( n + 1) a ω ( n + 1) (cid:12)(cid:12)(cid:12)(cid:12) ≤ C ( ω, ε ) n γ − ε , and I ≤ (cid:12)(cid:12)(cid:12)(cid:12) ˜ a ( n + 1) b ω ( n + 1) a ω ( n + 1) (cid:12)(cid:12)(cid:12)(cid:12) ≤ C ( ω, ε ) n γ − ε almost surely. We have shown above that I ≤ C n η so we see that there exists, with probability one, a constant C ( ω, ε )for which | κ ω ( n + 1) − | + | ζ ω ( n + 1) | ≤ C max ( n − γ + ε , n − ) . (2.28)Now, use e ix = 1 − 21 + i cot x and (2.22) to see that, if | κ ω ( n + 1) − | + 2 | ζ ω ( n + 1) | < , (whichindeed happens almost surely, for large enough n ), then (cid:12)(cid:12)(cid:12) e iθ ω,φ ( n +1) − e i ¯ θ ω,φ ( n ) (cid:12)(cid:12)(cid:12) ≤ | κ ω ( n + 1) − | + | ζ ω ( n + 1) | ) , which implies (by | e ix − | ≥ | x | π ) that | θ ω,φ ( n + 1) − ¯ θ ω,φ ( n ) | ≤ π ( | κ ω ( n + 1) − | + | ζ ω ( n + 1) | ) . This, together with (2.28), implies (2.27) and concludes the proof ofthe lemma. (cid:3) The direct consequence of this is Proposition 2.5. Assume f ( n ) is a function that satisfies f ( n ) = o ( n r ) and f ( n + 1) − f ( n ) = o (cid:0) n r − (cid:1) , with r = γ − γ ≥ / η > − γ if γ < / η > η − γ ≥ / η ≤ η − γ if γ < / η ≤ . (2.29) Then, for any φ , lim n →∞ F γ ( n ) n X j =1 f ( j ) cos(2 θ ω,φ ( j )) = 0 (2.30) almost surely. The same statement holds with θ replaced by ¯ θ .Proof of the Proposition. We shall prove the statement for θ . 
By sum-mation by parts, (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) n X j =1 f ( j ) cos(2 θ ω,φ ( j )) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) = (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) f ( n ) n X j =1 cos(2 θ ω,φ ( j )) − n − X j =1 j X l =1 cos(2 θ ω,φ ( l )) ( f ( j + 1) − f ( j )) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) . (2.31) ANDOM JACOBI MATRICES WITH GROWING PARAMETERS 15 Thus we are led to examine P jl =1 cos(2 θ ω,φ ( l )). Assume that j = 2 m is even. Then (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) j X l =1 cos(2 θ ω,φ ( l )) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) = (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) m X l =1 (cos(2 θ ω,φ (2 l )) − cos(2 θ ω,φ (2 l − 1) + π )) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ m X l =1 | θ ω,φ (2 l ) − θ ω,φ (2 l − − π |≤ m X l =1 (cid:12)(cid:12) θ ω,φ (2 l ) − ¯ θ ω,φ (2 l − (cid:12)(cid:12) + (cid:12)(cid:12)(cid:12) k l − − π (cid:12)(cid:12)(cid:12) ≤ C ω max( j − γ + ε , j − η ) (2.32)almost surely, by Lemma 2.4 and by (cid:12)(cid:12)(cid:12) k n − π (cid:12)(cid:12)(cid:12) ≤ | cos( k n ) | , which holds for sufficiently large n . Thus, for any j , we get (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) j X l =1 cos(2 θ ω,φ ( l )) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ C ω max( j − γ + ε , j − η ) + 1 ≤ ( C ω + 1) max( j − γ + ε , j − η ) . (2.33)A simple calculation finishes the proof for θ . The proof for ¯ θ followsthe same argument, with an additional | k l − k l − | term in (2.32). (cid:3) We abbreviate A ω ( j ) = a ω ( j ) − ˜ a ( j ) ˜ a ( j ) = 2 α ω ( j )˜ a ( j ) + α ω ( j ) ˜ a ( j ) = 2 λ λ Y ω ( j ) j γ + (cid:18) λ λ (cid:19) Y ω ( j ) j γ , (2.34)and B ω ( j ) = b ω ( j )˜ a ( j ) = λ λ X ω ( j ) j γ (2.35)By a straightforward calculation we get k W ω ( j ) k = sin (¯ θ ω ( j − a ( j ) sin ( k j − )˜ a ( j − × A ω ( j ) + B ω ( j ) + 2 cos( k j ) B ω ( j ) A ω ( j ) ! 
, (2.36) k Z ω ( j ) k = 1 + sin (¯ θ ω ( j − a ( j ) − ˜ a ( j − ˜ a ( j − + ˜ a ( j ) sin (¯ θ ω ( j − a ( j − sin ( k j ) − sin ( k j − )sin ( k j − ) (2.37)and( Z ω ( j ) , W ω ( j )) = ˜ a ( j )˜ a ( j − × A ω ( j ) sin(¯ θ ω ( j − ( k j − ) (cid:16) ˜ a ( j )˜ a ( j − 1) sin(¯ θ ω ( j − − cos( k j ) sin( θ ω ( j − (cid:17) − B ω ( j ) sin(2¯ θ ω ( j − k j − ) ! (2.38)(Recall ¯ θ ω ( j ) ≡ ¯ θ ω,φ ( j ) = θ ω,φ ( j ) + k j ≡ θ ω ( j ) + k j .)Let ε < γ . By Theorem 2.1, the fact that cos( k j ) ∼ a ( j ) and theidentity | sin ( x ) − sin ( y ) | = | cos ( x ) − cos ( y ) | , it follows that k W ω ( j ) k = O ω ( j − γ +2 ε ), ( Z ω ( j ) , W ω ( j )) = O ω ( j − γ + ε )and ( k Z ω ( j ) k − 1) = O ( j − ) with probability 1, where the notation O ω indicates that the implicit constant depends on ω . Thus, we canuse log(1 + x ) = x − x O ( x ) , together with the observation that (almost surely) (cid:16) k Z ω ( j ) k − Z ω ( j ) , W ω ( j ))+ k W ω ( j ) k (cid:17) = 4( Z ω ( j ) , W ω ( j )) + O ω (cid:18) j γ − ε (cid:19) , to see that, with probability one, for large enough j we have thatlog (cid:16) k Z ω ( j ) k − Z ω ( j ) , W ω ( j ))+ k W ω ( j ) k (cid:17) = ( k Z ω ( j ) k − 1) + 2( Z ω ( j ) , W ω ( j ))+ k W ω ( j ) k − Z ω ( j ) , W ω ( j )) + O ω (cid:18) j γ − ε + 1 j γ − ε (cid:19) . (2.39) ANDOM JACOBI MATRICES WITH GROWING PARAMETERS 17 Therefore, since the edition of a finite number of terms is inconsequen-tial, it follows thatlim n →∞ F γ ( n ) n X j =1 log (cid:0) k Z ω ( j ) + W ω ( j ) k (cid:1) = lim n →∞ F γ ( n ) n X j =1 (cid:16) ( k Z ω ( j ) k − 1) + 2( Z ω ( j ) , W ω ( j ))+ k W ω ( j ) k − Z ω ( j ) , W ω ( j )) (cid:17) (2.40)with probability one, in the sense that both limits exist together andare equal if they do.Similarly,lim n →∞ − F γ ( n ) n X j =1 log (cid:18) a ω ( j ) ˜ a ( j ) (cid:19) = lim n →∞ − F γ ( n ) n X j =1 log (1 + A ω ( j ))= lim n →∞ − F γ ( n ) n X j =1 (cid:18) A ω ( j ) − A ω ( j ) (cid:19) . 
(2.41)Thus, our problem is reduced to computing the limits: ξ Z ≡ lim n →∞ F γ ( n ) n X j =1 (cid:0) k Z ω ( j ) k − (cid:1) ξ W ≡ lim n →∞ F γ ( n ) n X j =1 k W ω ( j ) k ξ ZW ≡ lim n →∞ F γ ( n ) n X j =1 ( Z ω ( j ) , W ω ( j )) ξ ZW ≡ lim n →∞ − F γ ( n ) n X j =1 ( Z ω ( j ) , W ω ( j )) ξ A ≡ lim n →∞ − F γ ( n ) n X j =1 (cid:18) A ω ( j ) − A ω ( j ) (cid:19) . By (2.37), k Z ω ( j ) k − (¯ θ ω ( j − a ( j ) − ˜ a ( j − ˜ a ( j − + ˜ a ( j ) sin (¯ θ ω ( j − a ( j − sin ( k j ) − sin ( k j − )sin ( k j − ) . The last term on the right is absolutely summable (= O ( n − − η )) sowe only need to look at sin (¯ θ ω ( j − ˜ a ( j ) − ˜ a ( j − ˜ a ( j − . But, by sin ( α ) = − cos(2 α )2 and by Proposition 2.5, we see that (recall ˜ a ( j ) = λ j − η ) ξ Z = 12 lim n →∞ F γ ( n ) n X j =1 ˜ a ( j ) − ˜ a ( j − ˜ a ( j − = 12 lim n →∞ F γ ( n ) n X j =1 η j = (cid:26) η if γ ≥ / 20 otherwise . (2.42)For the other four limits, we shall use extensively Lemma 8.4 of [13],in order to replace A ω and B ω by their means. For example, write k W ω ( j ) k = sin (¯ θ ω ( j − a ( j ) sin ( k j − )˜ a ( j − A ω ( j ) − (cid:10) A ω ( j ) (cid:11) ! + sin (¯ θ ω ( j − a ( j ) sin ( k j − )˜ a ( j − B ω ( j ) − (cid:10) B ω ( j ) (cid:11) ! + sin (¯ θ ω ( j − a ( j ) sin ( k j − )˜ a ( j − k j ) B ω ( j ) A ω ( j ) ! + sin (¯ θ ω ( j − a ( j ) sin ( k j − )˜ a ( j − (cid:0)(cid:10) A ω ( j ) (cid:11) + (cid:10) B ω ( j ) (cid:11)(cid:1) . (2.43)Then the first three terms have mean zero and so, by Lemma 8.4 of[13] we get that ξ W ≡ lim n →∞ F γ ( n ) ∞ X j =1 sin (¯ θ ω ( j − a ( j ) sin ( k j − )˜ a ( j − (cid:0)(cid:10) A ω ( j ) (cid:11) + (cid:10) B ω ( j ) (cid:11)(cid:1) . 
ANDOM JACOBI MATRICES WITH GROWING PARAMETERS 19 Expanding h A ω ( j ) i , throwing out terms that are o ( j − γ ), and applyingProposition 2.5 (writing, again, sin ( α ) = − cos(2 α )2 ), we get γ W = lim n →∞ F γ ( n ) n X j =1 j η ( j − η sin ( k j − ) Λ j − γ × (cid:0)(cid:10) Y ω ( j ) (cid:11) + (cid:10) X ω ( j ) (cid:11)(cid:1) = lim n →∞ F γ ( n ) n X j =1 j − γ = (cid:26) γ > / 22Λ otherwise , (2.44)(recall h Y ω ( j ) i = ). Applying the same procedure to ξ ZW and ξ A weget ξ ZW = lim n →∞ F γ ( n ) n X j =1 j η ( j − η sin (¯ θ ω ( j − ( k j − ) (cid:28) Y ω ( j ) j γ (cid:29) = lim n →∞ F γ ( n ) n X j =1 Λ2 1 j γ = (cid:26) γ > / Λ2 otherwise (2.45)and ξ A = lim n →∞ − F γ ( n ) n X j =1 (cid:28) Y ω ( j ) j γ (cid:29) = lim n →∞ F γ ( n ) n X j =1 (cid:18) Λ2 1 j γ (cid:19) = (cid:26) γ > / Λ2 otherwise . (2.46)The computation of ξ ZW involves sin (¯ θ ω ) = − cos(2¯ θ ω )2 + cos(4¯ θ ω )8 , forwhich Proposition 2.5 is useless. Luckily, the cos(4¯ θ ω ( j )) cancels out.As before ξ ZW = lim n →∞ − F γ ( n ) n X j =1 j γ × h Y ω ( j ) i sin (¯ θ ω ( j − ( k j − ) j η ( j − η + h X ω ( j ) i sin (2¯ θ ω ( j − ! Now write h Y ω ( j ) i sin (¯ θ ω ( j − ( k j − ) j η ( j − η + h X ω ( j ) i sin (2¯ θ ω ( j − ! = sin (¯ θ ω ( j − ( k j − ) j η ( j − η + sin (2¯ θ ω ( j − ! = j η j − η sin ( k j − ) + 18 ! + j η j − η sin ( k j − ) cos(2¯ θ ω ( j − θ ω ( j − j η ( j − η sin ( k j − ) − ! = j η j − η sin ( k j − ) + 18 ! + j η j − η sin ( k j − ) cos(2¯ θ ω ( j − θ ω ( j − O ( n − ) + O ( n − η ) ! . to see that ξ ZW = (cid:26) γ > / − 2Λ otherwise , (2.47)where the cos(2¯ θ ω ) term vanishes by Proposition 2.5. Summing up thevarious limits, the proposition is proved. (cid:3) Proof of Theorem 2.2. By (2.15), the theorem follows from Proposition2.3. (cid:3) Theorem 1.1 almost follows immediately from Theorem 2.2 andProposition 2.3. As in the Schr¨odinger situation, the case γ = re-quires some subtle reasoning. 
We have established that, in this case,with probability one, equation (2.3) has a solution, ψ , with | ψ ( n ) | ≍ n Λ − η . (2.48)In order to use subordinacy theory, we need the existence of anothersolution with faster decay at infinity. The following is Lemma 8.7 of[13], formulated for general regular matrices: ANDOM JACOBI MATRICES WITH GROWING PARAMETERS 21 Lemma 2.6. Let u φ = (cos φ, sin φ ) ∈ R . For any matrix, A ∈ GL ( R ) with det( A ) = d > , let φ ( A ) be the unique φ ∈ ( − π , π ] with √ d k Au φ k = √ d k A k − . Define ρ ( A ) = k Au kk Au π/ k .Let A n be a sequence of matrices in GL ( R ) with det( A n ) = d n > ,that satisfy √ d n k A n k→ ∞ and d n k A n +1 A − n kk A n kk A n +1 k → as n → ∞ . Let ρ n = ρ ( A n ) and φ n = φ ( A n ) . Then: (1) φ n has a limit φ ∞ if and only if lim n →∞ ρ n = ρ ∞ exists ( ρ ∞ = ∞ is allowed, but then we only have | φ n | → π ) . (2) Suppose φ n has a limit φ ∞ = 0 , π (equivalently, ρ ∞ = 0 , ∞ ).Then lim n →∞ log k A n u ∞ k − log √ d n log k A n k − log √ d n = − if and only if lim sup n →∞ log | ρ n − ρ ∞ | log k A n k − log √ d n ≤ − . Proposition 2.7. Let J Υ ,ω be the family of random Jacobi matricesdescribed in Theorem 2.2, with γ = . Then, for any E ∈ R , thereexists, with probability one, an initial condition Ψ φ ( ω ) = (cid:18) cos( φ ( ω ))sin( φ ( ω )) (cid:19) such that lim n →∞ log k T Eω ( n )Ψ φ ( ω ) k log n = − 12 (Λ + η ) . (2.49) Proof. We imitate the proof of [13, Lemma 8.8]. By Proposition 2.3lim n →∞ log | R ω, ( n ) | log n = 12 (Λ + η )and lim n →∞ log | R ω,π/ ( n ) | log n = 12 (Λ + η )for almost every ω . By (2.12)-(2.13), R ω, ( n ) R ω,π/ ( n ) sin( θ ω,π/ ( n ) − θ ω, ( n )) = ˜ a ( n ) sin k n so that, for almost every ω ,lim n →∞ log | θ ω, ( n ) − θ ω,π/ ( n ) | log n = − Λ (2.50)(recall ˜ a ( n ) = λ n η ).Let ρ ω ( n ) = R ω, ( n ) R ω,π/ ( n ) . 
Then
\[
L_\omega(n)\equiv\log\rho_\omega(n+1)-\log\rho_\omega(n)=\log\big(1+X_{\omega,0}(n)\big)-\log\big(1+X_{\omega,\pi/2}(n)\big)
\tag{2.51}
\]
where
\[
X_{\omega,\phi}(n)=\big\|Z_{\omega,\phi}(n+1)+W_{\omega,\phi}(n+1)\big\|^2-1.
\tag{2.52}
\]
Since $X_{\omega,\phi}(j)=O\big(\frac{1}{\sqrt j}\big)$ almost surely, we may apply a finite Taylor expansion to the above (using (2.36), (2.37) and (2.38)) to see that, with probability one, for large enough $n$,
\[
\begin{aligned}
L_\omega(n)={}&\Delta^1_\omega(n)\big(\sin^2(\bar\theta_{\omega,0}(n-1))-\sin^2(\bar\theta_{\omega,\pi/2}(n-1))\big)\\
&+\Delta^2_\omega(n)\big(\sin(2\bar\theta_{\omega,0}(n-1))-\sin(2\bar\theta_{\omega,\pi/2}(n-1))\big)\\
&+\Delta^3_\omega(n)\big(\sin(\bar\theta_{\omega,0}(n-1))\cos(\bar\theta_{\omega,0}(n-1))-\sin(\bar\theta_{\omega,\pi/2}(n-1))\cos(\bar\theta_{\omega,\pi/2}(n-1))\big)\\
&+O\big(n^{-(1+\Lambda-\varepsilon)}\big)
\end{aligned}
\]
where
\[
\langle\Delta^1_\omega(n)\rangle=\langle\Delta^2_\omega(n)\rangle=\langle\Delta^3_\omega(n)\rangle=0
\quad\text{and}\quad
\big\langle\big(\Delta^i_\omega(n)\big)^2\big\rangle\le\frac{C}{n}, \qquad i=1,2,3.
\]
(The $O\big(n^{-(1+\Lambda-\varepsilon)}\big)$ for the remainder follows from Lemma 2.1 and (2.50).)

A standard application of Kolmogorov's inequality and the Borel–Cantelli Lemma shows that, with probability one, for large enough $k$, for each $i=1,2,3$ and $m=2^{k-1}+1,\dots,2^k$,
\[
\Bigg|\sum_{j=2^{k-1}+1}^{m}\Delta^i_\omega(j)\Bigg|\le k
\tag{2.53}
\]
and also
\[
\sup_{m=2^{k-1}+1,\dots,2^k-1}\Bigg|\sum_{j=m}^{2^k}\Delta^i_\omega(j)\Bigg|\le k.
\tag{2.54}
\]
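The dyadic-block bounds above can be visualized with a toy simulation; the Gaussian choice for $\Delta^i_\omega(j)$ and the constant $C=1$ are my assumptions for illustration. Each block sum has variance $\sum_j C/j\approx C\log 2$, uniformly in $k$, so the maximal partial sums stay bounded while the allowed bound $k$ in (2.53) grows, and Borel-Cantelli upgrades Kolmogorov's inequality to an almost-sure statement.

```python
import math, random

# Toy version of the dyadic-block estimate (2.53): independent, mean-zero
# Delta(j) with variance C/j (Gaussian here; my assumption), summed over
# the blocks 2^{k-1}+1, ..., 2^k.
random.seed(7)
C = 1.0
block_maxima = []
for k in range(1, 16):
    partial, worst = 0.0, 0.0
    for j in range(2 ** (k - 1) + 1, 2 ** k + 1):
        partial += random.gauss(0.0, math.sqrt(C / j))
        worst = max(worst, abs(partial))
    block_maxima.append(worst)

# Each block sum has variance sum_j C/j ~ C*log(2), independent of k, so
# the running maxima stay O(1) while the bound k in (2.53) grows with k.
print(max(block_maxima))
```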
Combining this with the fact that, with probability one, for large enough $n$,
\[
\big(\sin^2(\bar\theta_{\omega,0}(n-1))-\sin^2(\bar\theta_{\omega,\pi/2}(n-1))\big)=O\big(n^{-\Lambda+\varepsilon}\big),
\]
\[
\big(\sin(2\bar\theta_{\omega,0}(n-1))-\sin(2\bar\theta_{\omega,\pi/2}(n-1))\big)=O\big(n^{-\Lambda+\varepsilon}\big)
\]
and
\[
\big(\sin(\bar\theta_{\omega,0}(n-1))\cos(\bar\theta_{\omega,0}(n-1))-\sin(\bar\theta_{\omega,\pi/2}(n-1))\cos(\bar\theta_{\omega,\pi/2}(n-1))\big)=O\big(n^{-\Lambda+\varepsilon}\big),
\]
it follows that $\sum_{n=1}^{\infty}L_\omega(n)$ exists and
\[
\Bigg|\sum_{n=N}^{\infty}L_\omega(n)\Bigg|\le C_\omega N^{-\Lambda+\varepsilon}
\]
almost surely. Thus, with probability one,
\[
\lim_{n\to\infty}\frac{R_{\omega,0}(n)}{R_{\omega,\pi/2}(n)}=\lim_{n\to\infty}\rho_\omega(n)=\rho_\omega(\infty)
\]
exists and
\[
\limsup_{n\to\infty}\frac{\log|\rho_\omega(n)-\rho_\omega(\infty)|}{\log n}\le-\Lambda.
\]
Lemma 2.6 completes the proof (note that $d_n\equiv\det T^E_\omega(n)$ is proportional to $\tilde a_\omega(n)^{-1}$). □

We are now ready to complete the

Proof of Theorem 1.1. (1) By Theorem 2.2, Fubini's Theorem, the fact that the distribution of $X_\omega(n)$ is absolutely continuous with respect to Lebesgue measure, and the theory of rank-one perturbations ([20]), it follows that, with probability one, the spectral measure is supported on the set of energies where, for every $\varepsilon>0$ and all sufficiently large $n$, $\|T^E_\omega(n)\|^2\le n^{-\eta+\varepsilon}$. From Corollary 4.4 of [9] it follows now that the spectral measure is continuous with respect to $(1-\varepsilon)$-dimensional Hausdorff measure, for any $\varepsilon>0$. Thus the spectral measure is one-dimensional. Since $\psi(n)=\big(\delta_1,(J-z)^{-1}\delta_n\big)$ solves the eigenvalue equation for $z$ (away from $n=1$), Theorem A.1 and Wronskian conservation imply that if $z\in\mathbb{R}$ were to be outside of the spectrum, the transfer matrices would have to exhibit exponential growth. Since this is not the case, it follows that the spectrum is $\mathbb{R}$.
(2) In this case again, the fact that the spectrum is $\mathbb{R}$ follows from the polynomial bound on the transfer matrices in Theorem 2.2 and Theorem A.1 below.
As for the properties of the spectral measure, these follow from Theorem 1.2 in [9], using (2.48), Proposition 2.7 and the theory of rank-one perturbations.
(3) The existence, with probability one, of an exponentially decaying eigenfunction for every $E\in\mathbb{R}$ follows from Theorem 2.2 and Theorem 8.3 of [16]. Fubini and the theory of rank-one perturbations imply that the spectral measure is supported, with probability one, on the set where these eigenfunctions exist. Comparing powers of $n$ in the exponent, Theorem A.1 implies that, as long as $\tilde\eta>\eta/2$, the spectrum fills $\mathbb{R}$. □

Appendix A. A Combes–Thomas Estimate for Jacobi Matrices with Unbounded Parameters

This section presents a Combes–Thomas estimate, suitable for application to Jacobi matrices with unbounded off-diagonal terms. See also [18] for a related result.

Theorem A.1. Let $J=J(\{a(n)\}_{n=1}^{\infty},\{b(n)\}_{n=1}^{\infty})$ be a self-adjoint Jacobi matrix such that $0<a(n)\le f(n)$ for a nondecreasing function $f(n)$. Let $z\in\mathbb{C}$ be such that $\operatorname{dist}(z,\operatorname{Spec}(J))=\sigma>0$ (where $\operatorname{Spec}(J)$ is the spectrum of $J$). Then
\[
\big|\big(\delta_1,(J-z)^{-1}\delta_N\big)\big|\le\frac{2e}{\sigma}\,e^{-\alpha_N\cdot N}
\tag{A.1}
\]
where $\alpha_N=\min\!\big(1,\frac{\sigma}{4ef(N)}\big)$. In particular,
\[
\big|\big(\delta_1,(J_{\Upsilon,\omega}-z)^{-1}\delta_N\big)\big|\le\frac{2e}{\sigma}\,e^{-C(\sigma,\omega)\cdot N^{1-\eta}}
\tag{A.2}
\]
almost surely.

Remark. The monotonicity of $f$ is not essential. One may instead replace $f(N)$ in the formula for $\alpha_N$ by $\max(f(1),\dots,f(N))$.

Proof. Let $R_N$ be the diagonal matrix defined by
\[
R_N(n,n)=\begin{cases} e^{\alpha_N\cdot n} & n\le N,\\ e^{\alpha_N\cdot N} & n>N.\end{cases}
\tag{A.3}
\]
Then
\[
e^{\alpha_N\cdot(N-1)}\big(\delta_1,(J-z)^{-1}\delta_N\big)=\big(\delta_1,R_N^{-1}(J-z)^{-1}R_N\delta_N\big),
\]
so it suffices to bound $\|R_N^{-1}(J-z)^{-1}R_N\|$. Noting that $R_N^{-1}(J-z)^{-1}R_N=\big(R_N^{-1}(J-z)R_N\big)^{-1}\equiv C_N(z)$, we may apply the resolvent identity to get
\[
C_N(z)=(J-z)^{-1}+C_N(z)\cdot\big(J-z-R_N^{-1}(J-z)R_N\big)(J-z)^{-1}.
\]
A simple computation shows that
\[
\big\|J-z-R_N^{-1}(J-z)R_N\big\|\le 2ef(N)\alpha_N
\]
if $\alpha_N\le 1$. Thus, by $\|(J-z)^{-1}\|\le\frac{1}{\sigma}$, we see that
\[
\|C_N(z)\|\le\frac{1}{\sigma}+\|C_N(z)\|\,\frac{2ef(N)\alpha_N}{\sigma},
\]
so, by $\alpha_N\le\frac{\sigma}{4ef(N)}$, we see that $\|C_N(z)\|\le\frac{2}{\sigma}$, which finishes the proof. □

Acknowledgments. We are grateful to Peter Forrester and Uzy Smilansky for presenting us with the problem that led to this paper. We also thank Yoram Last and Uzy Smilansky for many useful discussions. We thank the referee for useful remarks. This research was supported in part by THE ISRAEL SCIENCE FOUNDATION (grant no. 1169/06) and by grant no. 2002068 from the United States–Israel Binational Science Foundation (BSF), Jerusalem, Israel.

References

[1] J. Berezanskii, Expansions in Eigenfunctions of Selfadjoint Operators, Transl. Math. Monographs, Amer. Math. Soc., Providence, RI, 1968.
[2] J. Breuer, P. Forrester and U. Smilansky, Random discrete Schrödinger operators from random matrix theory, J. Phys. A: Math. Theor. (2007), F1–F8.
[3] F. Delyon, Appearance of a purely singular continuous spectrum in a class of random Schrödinger operators, J. Stat. Phys. (1985), 621–630.
[4] F. Delyon, B. Simon and B. Souillard, From power-localized to extended states in a class of one-dimensional disordered systems, Phys. Rev. Lett. (1984), 2187–2189.
[5] F. Delyon, B. Simon and B. Souillard, From power pure point to continuous spectrum in disordered systems, Ann. Inst. H. Poincaré Phys. Théor. (1985), 283–309.
[6] I. Dumitriu and A. Edelman, Matrix models for beta ensembles, J. Math. Phys. (2002), 5830–5847.
[7] D. J. Gilbert and D. B. Pearson, On subordinacy and analysis of the spectrum of one-dimensional Schrödinger operators, J. Math. Anal. Appl. (1987), 30–56.
[8] J. Janas and S. Naboko, Spectral analysis of selfadjoint Jacobi matrices with periodically modulated entries, J. Funct. Anal. (2002), 318–324.
[9] S. Jitomirskaya and Y.
Last, Power-law subordinacy and singular spectra, I: Half-line operators, Acta Math. (1999), 171–189.
[10] R. Killip and I. Nenciu, Matrix models for circular ensembles, Int. Math. Res. Not. (2004), 2665–2701.
[11] R. Killip and M. Stoiciu, Eigenvalue statistics for CMV matrices: from Poisson to clock via CβE, preprint math-ph/0608002.
[12] A. Kiselev and Y. Last, Solutions, spectrum and dynamics for Schrödinger operators on infinite domains, Duke Math. J. (2000), 125–150.
[13] A. Kiselev, Y. Last and B. Simon, Modified Prüfer and EFGP transforms and the spectral analysis of one-dimensional Schrödinger operators, Commun. Math. Phys. (1998), 1–45.
[14] S. Kotani and N. Ushiroya, One-dimensional Schrödinger operators with random decaying potentials, Commun. Math. Phys. (1988), 247–266.
[15] Y. Last, Quantum dynamics and decompositions of singular continuous spectra, J. Funct. Anal. (1996), 406–445.
[16] Y. Last and B. Simon, Eigenfunctions, transfer matrices, and absolutely continuous spectrum of one-dimensional Schrödinger operators, Invent. Math. (1999), 329–367.
[17] M. Reed and B. Simon, Methods of Modern Mathematical Physics, I: Functional Analysis, Academic Press, New York, 1972.
[18] J. Sahbani, Spectral properties of Jacobi matrices of certain birth and death processes, J. Operator Theory (2006), 377–390.
[19] B. Simon, Some Jacobi matrices with decaying potential and dense point spectrum, Commun. Math. Phys. (1982), 253–258.
[20] B. Simon, "Spectral analysis of rank one perturbations and applications", in Proc. Mathematical Quantum Theory, II: Schrödinger Operators, CRM Proceedings and Lecture Notes 8, edited by J. Feldman, R. Froese and L. Rosen, 109–149, Amer. Math. Soc., Providence, RI, 1995.
[21] B. Simon, Orthogonal Polynomials on the Unit Circle, Part 2: Spectral Theory, Amer. Math. Soc., Providence, RI, 2005.