The IDS and Asymptotic of the Largest Eigenvalue of Random Schrödinger Operators with Decaying Random Potential
Dhriti Ranjan Dolai
Indian Institute of Technology Dharwad
Dharwad - 580011, India.
Email: [email protected]
Abstract:
In this work we obtain the integrated density of states for Schrödinger operators with decaying random potentials acting on ℓ²(Z^d). We also study the asymptotics of the largest and smallest eigenvalues of its finite-volume approximation.

Mathematics Subject Classification (2010):
Keywords:
Random Schrödinger operators, integrated density of states, decaying random potential, ground states.
The model we consider is given by

    H^ω = ∆ + V^ω,  ω ∈ Ω,                                          (1.1)
    (∆u)(n) = Σ_{|k−n|=1} u(k),  u = {u(n)} ∈ ℓ²(Z^d),
    (V^ω u)(n) = ω_n/(1 + |n|)^α u(n),  α > 0,

here {ω_n}_{n∈Z^d} are i.i.d. real random variables with common distribution µ. We consider the probability space (R^{Z^d}, B_{R^{Z^d}}, P), where P = ⊗_{n∈Z^d} µ is constructed via the Kolmogorov theorem. We refer to this probability space as (Ω, B_Ω, P) and denote ω = (ω_n)_{n∈Z^d} ∈ Ω. The operator ∆ is known as the discrete Laplacian, and the decaying potential V^ω is nothing but the operator of multiplication on ℓ²(Z^d) by the sequence {ω_n/(1+|n|)^α}_{n∈Z^d}. We note that the operators {H^ω}_{ω∈Ω} are self-adjoint and have a common core consisting of the vectors with finite support.

The author was partially supported by the Inspire Grant DST/INSPIRE/04/2017/000109.

Define Λ_L ⊂ Z^d to be the cube of side length 2L+1 centered at the origin, namely

    Λ_L = {n = (n_1, n_2, ..., n_d) ∈ Z^d : |n_i| ≤ L, i = 1, 2, ..., d}.

Let χ_L be the orthogonal projection onto ℓ²(Λ_L). We define the matrices H^ω_L, ∆_L and V^ω_L of size (2L+1)^d as

    H^ω_L = ∆_L + V^ω_L,  ∆_L = χ_L ∆ χ_L,  V^ω_L = χ_L V^ω χ_L.      (1.2)

Since the spectra of H^ω_L and ∆_L consist of real eigenvalues, one can define the eigenvalue counting functions up to energy E ∈ R:

    N^ω_L(E) := #{n : λ^ω_n ≤ E, λ^ω_n ∈ σ(H^ω_L)},                  (1.3)
    N_L(E)   := #{n : λ_n ≤ E, λ_n ∈ σ(∆_L)}.                        (1.4)

With these definitions in place, we state our main result:

Theorem 1.1.
Under the assumption E(ω_0²) < ∞, the integrated density of states of the decaying model H^ω as in (1.1) agrees with that of the free Laplacian. In other words we have

    lim_{L→∞} N^ω_L(E)/(2L+1)^d = N(E),  E ∈ R,  a.e. ω,             (1.5)

where N(E) is the integrated density of states of the discrete Laplacian ∆.

One can note that the density of the distribution function N(·) is the density (w.r.t. Lebesgue measure) of the measure ⟨δ_0, E_∆(·)δ_0⟩, where E_∆(·) is the spectral measure of the discrete Laplacian ∆ and {δ_n}_{n∈Z^d} is the standard basis of ℓ²(Z^d). An explicit calculation of ⟨δ_0, e^{it∆}δ_0⟩, the Fourier transform of the measure ⟨δ_0, E_∆(·)δ_0⟩, is given in [21, Lemma 4.1.8].

Remark 1.2.
The techniques used in proving Theorem 1.1 will also work for potentials of the form V^ω = Σ_{n∈Z^d} a_n ω_n |δ_n⟩⟨δ_n|, a_n ∈ R, and give the same result as long as Σ_{n∈Λ_L} a_n² = o((2L+1)^d) and E(ω_0²) < ∞.

In the ergodic case (α = 0) the IDS can be defined as the thermodynamic limit of the eigenvalue counting function. We refer to [8] for the proof of the existence of the limit. The basic facts about the integrated density of states can be found in any of the standard books in this area, for example Figotin-Pastur [12], Cycon-Froese-Kirsch-Simon [14], Carmona-Lacroix [13] and Veselić [15]. However, in the absence of ergodicity (of the potential) the existence of the IDS is not immediate and very few results are known so far.

In [7], Gordon-Jakšić-Molčanov-Simon considered the model on ℓ²(Z^d) with growing potential

    H^ω = −∆ + Σ_{n∈Z^d} (1 + |n|^α) ω_n |δ_n⟩⟨δ_n|,  α > 0,

where {ω_n}_{n∈Z^d} are i.i.d. random variables uniformly distributed on [0, 1]. There exists a sequence {a_j}_{j∈N} (with a_0 = ∞) such that if we take d/(k+1) < α < d/k, k ∈ N, and E ∈ (a_j, a_{j−1}), then

    lim_{L→∞} N^ω_L(E)/L^{d−jα} = N_j(E)  a.e. ω,  1 ≤ j ≤ k.

Here the N_j(·) are independent of ω; for the proof we refer to [7, Theorem 1.4].

Böcker in his thesis [5] showed a strong law of large numbers for sparse random potentials. He also studied the density of surface states for some non-stationary potentials; using a Laplace transform, the asymptotic behaviour of the integrated density of surface states for random Gaussian surface potentials was obtained there. In [6], Böcker-Kirsch-Stollmann review some results on the spectral theory of non-stationary random potentials (see also [9]). They present various models with decaying and sparse random potentials, including those where the sparse set itself is random. Their results include a definition of the integrated density of states and some results on Lifshitz tails for such models.
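As a purely illustrative numerical sketch (not part of the paper's argument), one can diagonalize the finite-volume matrices H^ω_L of (1.2) in dimension d = 1 and compare N^ω_L(E)/(2L+1) with the free IDS; in d = 1 the IDS of the free Laplacian has the standard closed form N(E) = 1 − arccos(E/2)/π on [−2, 2]. The Gaussian choice of ω_n and the value of α below are arbitrary.

```python
import numpy as np

def free_ids_1d(E):
    """IDS of the 1-d discrete Laplacian: N(E) = 1 - arccos(E/2)/pi on [-2, 2]."""
    return 1.0 - np.arccos(np.clip(E / 2.0, -1.0, 1.0)) / np.pi

def finite_volume_ids(L, alpha, E, rng):
    """N_L^w(E)/(2L+1) for H_L^w = Delta_L + V_L^w on the cube {-L, ..., L}."""
    n = np.arange(-L, L + 1)
    N = 2 * L + 1
    H = (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)   # Delta_L
         + np.diag(rng.standard_normal(N) / (1.0 + np.abs(n)) ** alpha))
    return np.sum(np.linalg.eigvalsh(H) <= E) / N

rng = np.random.default_rng(0)
for L in (50, 200, 800):
    print(L, abs(finite_volume_ids(L, alpha=0.75, E=0.7, rng=rng)
                 - free_ids_1d(0.7)))
```

Theorem 1.1 predicts that the printed discrepancies shrink as L grows; no rate is asserted here.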
In [16], the author found the candidate normalization for the analogue of the IDS outside [−2d, 2d] for the decaying model defined by (1.1). If we take dµ/dx(x) = Θ(|x|^{−(1+δ)}) as |x| → ∞ with 0 < αδ < d, then the normalizing quantity (outside [−2d, 2d]) is given by β_L = (2L+1)^{d−αδ}. For the details we refer to [16, Theorem 1.4] and [16, Theorem 1.8]. In other words, inside the region R \ [−2d, 2d] the average spacing between two consecutive eigenvalues of H^ω_L is of order (2L+1)^{−(d−αδ)}.

Theorem 1.1 shows that if we compute the IDS with the normalization (2L+1)^d (for any α > 0), the density of states measure is supported inside [−2d, 2d]. The average spacing between two consecutive eigenvalues in [−2d, 2d] is of order (2L+1)^{−d}.

To investigate the asymptotic distributions of the highest and lowest eigenvalues of H^ω_L we have to assume some additional properties of the probability measure µ and the exponent α.

Hypothesis 1.3.
1. The measure µ is absolutely continuous with respect to Lebesgue measure and its density is given by

    dµ/dx(x) = { 0                 if |x| < 1,
                 δ/(2 |x|^{1+δ})   if |x| > 1,     δ > 0.

2. The pair (α, δ) satisfies the condition 0 < αδ ≤ d.

3. Since

    Σ_{n∈Λ_L} 1/(1+|n|)^{αδ} = Θ((2L+1)^{d−αδ})   if 0 < αδ < d,
                               Θ(ln(2L+1))        if αδ = d,

we define

    b := lim_{L→∞} (1/Γ_L^δ) Σ_{n∈Λ_L} 1/(1+|n|)^{αδ},

where

    Γ_L := (2L+1)^{(d−αδ)/δ}   if 0 < αδ < d,
           (ln(2L+1))^{1/δ}    if αδ = d.

Remark 1.4. Here we have taken an explicit expression for dµ/dx(x) in order to keep the calculations (in the proofs) simple. The result is still valid if the density of the single-site distribution µ is of the form dµ/dx(x) = Θ(|x|^{−(1+δ)}), δ > 0.

In [17], Kirsch-Krishna-Obermeit considered the same model H^ω (1.1) and investigated its spectral properties (see also [10], [11]). Under Hypothesis 1.3 it is easy to verify that n-supp(µ) = R; see [17, Definition 2.1] for more details. Now [17, Theorem 2.4, Corollary 2.5] implies σ(H^ω) = R a.e. ω, and one of its implications is that max{σ(H^ω_L)} and min{σ(H^ω_L)} converge to ∞ and −∞, respectively, as L → ∞. Therefore, to find the asymptotic distributions of the highest and lowest eigenvalues of H^ω_L we scale them down by a non-random constant depending only on L.

Let us set a few notations before describing our second result:

    E^max_L(ω) := max σ(H^ω_L),   Ẽ^max_L(ω) := max σ(V^ω_L),        (1.6)
    E^min_L(ω) := min σ(H^ω_L),   Ẽ^min_L(ω) := min σ(V^ω_L).        (1.7)

The next theorem gives information about the distribution of the highest (1.6) and lowest (1.7) eigenvalues of H^ω_L (1.2).

Theorem 1.5.

The asymptotic distribution of E^max_L, the highest eigenvalue of H^ω_L, is given by

    lim_{L→∞} P(ω : E^max_L(ω)/Γ_L ≤ x) = { e^{−(b/2) x^{−δ}}   if x > 0,
                                            0                   if x ≤ 0.    (1.8)

The asymptotic distribution of E^min_L, the lowest eigenvalue of H^ω_L, is given by

    lim_{L→∞} P(ω : E^min_L(ω)/Γ_L ≤ x) = { 1                         if x ≥ 0,
                                            1 − e^{−(b/2) |x|^{−δ}}   if x < 0.   (1.9)

In the above, Γ_L and b > 0 are as defined in Hypothesis 1.3.

Remark 1.6.
The above result can also be achieved for the ergodic model, i.e., when α = 0, provided δ is positive. For δ > 0, the ergodic model has spectrum the whole real line. In that case our Γ_L := (2L+1)^{d/δ} and b = 1.

In [1], McKean considered the operator −d²/dx² + q on L²[0, M], in which q is the standard white noise potential, and showed that the distribution of M N(λ_1(M)) converges weakly, as M → ∞, to e^{−x} dx, where N(·) is the integrated density of states and λ_1(M) is the ground state. In [3], the authors studied the distribution (as M → ∞) of the individual eigenvalues of −d²/dx² + dQ(x)/dx on L²[0, M], where Q(x) is a one-dimensional compound Poisson process. More precisely, they found the limiting distribution of M N(λ_k(M)) as M → ∞, for each k. In [2], the limit of the joint distribution of the first k eigenvalues λ_1(M), λ_2(M), ..., λ_k(M) was obtained under suitable normalization.

In this section we prove Theorems 1.1 and 1.5. Before going to the proofs, we collect a few results about the distance between the eigenvalues of two symmetric matrices and the weak convergence of probability measures, which are very useful in our proofs. Let us first describe the Hoffman-Wielandt inequality; the proof can be found in [19, Lemma 2.1.19].
Lemma 2.1.
Let A, B be N × N symmetric matrices with eigenvalues λ^A_1 ≤ λ^A_2 ≤ ... ≤ λ^A_N and λ^B_1 ≤ λ^B_2 ≤ ... ≤ λ^B_N. Then

    Σ_{j=1}^N |λ^A_j − λ^B_j|² ≤ tr((A − B)²).                       (2.1)

Let µ, ν be two probability measures on R; then W_p(µ, ν), 1 ≤ p < ∞, the L^p-Wasserstein distance between them, is defined by

    W_p(µ, ν) := ( inf_{π∈Π(µ,ν)} ∫_{R×R} |x − y|^p dπ(x, y) )^{1/p}.
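As a quick sanity check (illustrative only), the Hoffman-Wielandt inequality (2.1) can be verified numerically on random symmetric matrices; numpy's `eigvalsh` returns eigenvalues in the increasing order the lemma uses.

```python
import numpy as np

def hoffman_wielandt_gap(A, B):
    """tr((A-B)^2) minus sum_j |lam_j^A - lam_j^B|^2; Lemma 2.1 says this is >= 0."""
    la = np.linalg.eigvalsh(A)   # eigenvalues in increasing order
    lb = np.linalg.eigvalsh(B)
    return float(np.trace((A - B) @ (A - B)) - np.sum((la - lb) ** 2))

rng = np.random.default_rng(0)
N = 30
A = rng.standard_normal((N, N)); A = (A + A.T) / 2   # symmetrize
B = rng.standard_normal((N, N)); B = (B + B.T) / 2
print(hoffman_wielandt_gap(A, B) >= -1e-9)   # True
```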
In the above definition, Π(µ, ν) denotes the collection of all probability measures defined on R × R such that the first marginal is µ and the second marginal is ν. One can define probability measures µ_A and µ_B on R associated with the eigenvalues of A and B, considered in Lemma 2.1:

    µ_A(·) := (1/N) Σ_{k=1}^N δ_{λ^A_k}(·),                          (2.2)
    µ_B(·) := (1/N) Σ_{k=1}^N δ_{λ^B_k}(·).                          (2.3)

Proposition 2.2.
An estimate of the L²-Wasserstein distance between µ_A and µ_B is given by

    (W_2(µ_A, µ_B))² ≤ (1/N) tr((A − B)²).                           (2.4)

Proof.
We define a probability measure π(·,·) on R × R whose first marginal is µ_A and second marginal is µ_B, given by

    π(·) = (1/N) Σ_{k=1}^N δ_{(λ^A_k, λ^B_k)}(·).

The L²-Wasserstein distance between µ_A and µ_B can then be bounded as

    W_2(µ_A, µ_B) := ( inf_{π∈Π(µ_A,µ_B)} ∫_{R×R} |x − y|² dπ(x, y) )^{1/2}
                   ≤ ( ∫_{R×R} |x − y|² dπ(x, y) )^{1/2}
                   = ( (1/N) Σ_{k=1}^N |λ^A_k − λ^B_k|² )^{1/2}
                   ≤ (1/√N) ( tr((A − B)²) )^{1/2}.

In the last line we used Lemma 2.1.

The next theorem characterizes weak convergence of measures in the Wasserstein sense; for the proof we refer to [18, Theorem 6.9].

Theorem 2.3.
Let {µ_L}_{L≥1} and µ be probability measures defined on R; then µ_L converges to µ weakly if and only if W_p(µ_L, µ) → 0 as L → ∞.

With all these preliminary results in place, we are in a position to present the proof of the main theorem.
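Before turning to the proof, Proposition 2.2 also admits a direct numerical check: for empirical measures with N equally weighted atoms on R, the optimal W_2-coupling is the monotone (sorted) pairing, so W_2(µ_A, µ_B) can be computed exactly. The matrices below are arbitrary symmetric test data.

```python
import numpy as np

def w2_spectral(A, B):
    """Exact W_2 distance between the empirical spectral measures of A and B."""
    la = np.sort(np.linalg.eigvalsh(A))   # monotone pairing is W_2-optimal
    lb = np.sort(np.linalg.eigvalsh(B))
    return float(np.sqrt(np.mean((la - lb) ** 2)))

rng = np.random.default_rng(0)
N = 40
A = rng.standard_normal((N, N)); A = (A + A.T) / 2   # symmetrize
B = A + np.diag(rng.standard_normal(N)) / 10.0       # small diagonal perturbation
lhs = w2_spectral(A, B) ** 2
rhs = float(np.trace((A - B) @ (A - B))) / N         # right-hand side of (2.4)
print(lhs <= rhs + 1e-12)   # True
```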
Proof of the Theorem 1.1:
Let us start with the definitions

    µ^ω_L(·) := (1/(2L+1)^d) Σ_j δ_{λ^ω_j}(·),  λ^ω_j ∈ σ(H^ω_L),    (2.5)
    µ_L(·)  := (1/(2L+1)^d) Σ_j δ_{λ_j}(·),     λ_j ∈ σ(∆_L).        (2.6)

Since m := E(ω_0²) < ∞, using Proposition 2.2 and H^ω_L − ∆_L = V^ω_L we write

    (W_2(µ^ω_L, µ_L))² ≤ (1/(2L+1)^d) tr((V^ω_L)²)
      = (1/(2L+1)^d) Σ_{n∈Λ_L} ω_n²/(1+|n|)^{2α}
      = (1/(2L+1)^d) Σ_{n∈Λ_L} (ω_n² − m)/(1+|n|)^{2α} + (m/(2L+1)^d) Σ_{n∈Λ_L} 1/(1+|n|)^{2α}
      ≤ (C/(2L+1)^ǫ) Σ_{n∈Λ_L} (ω_n² − m)/(1+|n|)^{d+2α−ǫ} + (mC/(2L+1)^ǫ) Σ_{n∈Λ_L} 1/(1+|n|)^{d+2α−ǫ}
      = (C/(2L+1)^ǫ) X_L(ω) + (mC/(2L+1)^ǫ) A_L.                     (2.7)

In the fourth line of the above we used 1/(2L+1)^{d−ǫ} ≤ C/(1+|n|)^{d−ǫ}, n ∈ Λ_L, where C is a positive constant that does not depend on L, and we choose 0 < ǫ < min{α, d}. Now an application of Theorem 2.3 will give the proof of our result, provided we have

    lim_{L→∞} A_L/(2L+1)^ǫ = 0  and  lim_{L→∞} X_L(ω)/(2L+1)^ǫ = 0  a.e. ω.   (2.8)

Since {ω_n² − m}_n are i.i.d. random variables with zero mean, we find that the conditional expectation of X_L given X_1, ..., X_{L−1} is

    E(X_L(ω) | X_1(ω), ..., X_{L−1}(ω)) = X_{L−1}(ω) + Σ_{‖n‖_∞ = L} E(ω_n² − m)/(1+|n|)^{d+2α−ǫ} = X_{L−1}(ω),

showing that X_L(ω) is a martingale. Since sup_L E(|X_L(ω)|) < ∞, the martingale convergence theorem [20, Theorem 5.7] shows that X_L(ω) converges a.e. ω to a random variable which is finite almost everywhere, which implies the second part of (2.8), namely

    X_L(ω)/(2L+1)^ǫ → 0  as L → ∞  a.e. ω.

The first part of (2.8) is immediate, as the choice ǫ < α gives

    Σ_{n∈Z^d} 1/(1+|n|)^{d+2α−ǫ} < ∞.

Hence the theorem.

Now we are going to prove Theorem 1.5. The key idea of the proof is that, since ∆ is a bounded operator on ℓ²(Z^d), the rate of growth of the absolute value of the largest or smallest eigenvalue of H^ω_L := ∆_L + V^ω_L and of V^ω_L will be the same as L → ∞.
In that case we have to find the asymptotics of the (2L+1)^d-th or 1st order statistic of the collection of independent random variables {ω_n/(1+|n|)^α : n ∈ Λ_L}, the eigenvalues of V^ω_L.

Proof of the Theorem 1.5:
Denote by {λ^ω_n}_n and {λ̃^ω_n}_n the sets of all eigenvalues of H^ω_L and V^ω_L, respectively. We can write down the explicit expression for λ̃^ω_n: it is given by ω_n/(1+|n|)^α, n ∈ Λ_L, the diagonal elements of V^ω_L. Now a simple application of the min-max theorem gives

    |λ^ω_n − λ̃^ω_n| ≤ ‖∆_L‖ ≤ 2d  for all n ∈ Λ_L.                  (2.9)

Therefore, using the notations defined in (1.6) and (1.7) we get

    |E^max_L(ω) − Ẽ^max_L(ω)| ≤ 2d,  |E^min_L(ω) − Ẽ^min_L(ω)| ≤ 2d  a.e. ω.   (2.10)

Since Γ_L → ∞ as L → ∞ (see Hypothesis 1.3), the above inequalities lead to

    lim_{L→∞} E^max_L(ω)/Γ_L = lim_{L→∞} Ẽ^max_L(ω)/Γ_L  a.e. ω,    (2.11)
    lim_{L→∞} E^min_L(ω)/Γ_L = lim_{L→∞} Ẽ^min_L(ω)/Γ_L  a.e. ω.    (2.12)

Now to prove our theorem it is enough to calculate the right-hand sides of both of the above limits. Let us begin with the first one:

    P(ω : Ẽ^max_L(ω)/Γ_L ≤ x) = Π_{n∈Λ_L} P(ω_n ≤ (1+|n|)^α Γ_L x)
                              = Π_{n∈Λ_L} (1 − P(ω_n > (1+|n|)^α Γ_L x)) =: M_L(x).   (2.13)

Using Hypothesis 1.3 we get, for x ≤ 0, the estimate

    M_L(x) = Π_{n∈Λ_L} (1 − P(ω_n > (1+|n|)^α Γ_L x)) ≤ (1/2)^{(2L+1)^d}.   (2.14)

Again using Hypothesis 1.3, for x >
0, we write

    ln M_L(x) = Σ_{n∈Λ_L} ln(1 − P(ω_n ≥ (1+|n|)^α Γ_L x))
              = Σ_{n∈Λ_L} ln(1 − x^{−δ}/(2 (1+|n|)^{αδ} Γ_L^δ))
              = −(x^{−δ}/2) (1/Γ_L^δ) Σ_{n∈Λ_L} 1/(1+|n|)^{αδ} + O(E_L).   (2.15)

In the last line we used the Taylor series expansion of ln(1 − t): for fixed positive x and large enough L we have 0 < x^{−δ}/(2Γ_L^δ) < 1. The error term E_L can be estimated by

    E_L := (2L+1)^{−d}      if 0 < αδ < d,
           (ln(2L+1))^{−δ}  if αδ = d.
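The computation (2.13)-(2.15) can be probed by simulation in d = 1. Everything below assumes one natural reading of Hypothesis 1.3, namely the single-site density δ/(2|x|^{1+δ}) for |x| > 1, so that P(ω_0 > t) = t^{−δ}/2 for t ≥ 1; |ω_n| is sampled by inverse transform as a Pareto(δ) variable with a fair random sign, and all parameter choices are illustrative only.

```python
import numpy as np

def sample_omega(size, delta, rng):
    """Inverse-transform sample from dmu/dx = delta/(2|x|^{1+delta}), |x| > 1."""
    mag = rng.random(size) ** (-1.0 / delta)          # Pareto(delta) magnitude
    return rng.choice([-1.0, 1.0], size=size) * mag   # fair random sign

def scaled_max(L, alpha, delta, rng):
    """Max of the diagonal of V_L^w divided by Gamma_L (d = 1, alpha*delta < 1)."""
    n = np.arange(-L, L + 1)
    omega = sample_omega(2 * L + 1, delta, rng)
    gamma_L = (2 * L + 1) ** ((1.0 - alpha * delta) / delta)
    return np.max(omega / (1.0 + np.abs(n)) ** alpha) / gamma_L

rng = np.random.default_rng(1)
w = sample_omega(200_000, delta=2.0, rng=rng)
print(abs(np.mean(w > 2.0) - 2.0 ** -2.0 / 2) < 0.01)    # tail is t^{-delta}/2: True

samples = [scaled_max(L=2000, alpha=0.25, delta=2.0, rng=rng) for _ in range(200)]
print(0.1 < float(np.median(samples)) < 10.0)            # order-one limit: True
```

The rescaled maximum stabilises at an order-one value, consistent with the Fréchet-type limit law of Theorem 1.5; the precise limiting constant is not asserted here.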
We substitute (2.15) and (2.14) into (2.13) and use Hypothesis 1.3 to get

    lim_{L→∞} P(ω : Ẽ^max_L(ω)/Γ_L ≤ x) = { e^{−(b/2) x^{−δ}}   if x > 0,
                                            0                   if x ≤ 0.    (2.16)

In view of the equivalence relation (2.11) we have (1.8). For the smallest eigenvalue we compute

    P(ω : Ẽ^min_L(ω)/Γ_L ≤ x) = 1 − P(ω : Ẽ^min_L(ω)/Γ_L > x)
        = 1 − Π_{n∈Λ_L} P(ω_n > (1+|n|)^α Γ_L x)
        = 1 − Π_{n∈Λ_L} (1 − P(ω_n ≤ (1+|n|)^α Γ_L x)) =: 1 − M̃_L(x).    (2.17)

If we take (2.12) into account, an estimation of M̃_L(x) similar to the one carried out for M_L(x) in (2.14) and (2.15) leads to the proof of (1.9).

We end our article with a note about the distribution of the intermediate eigenvalues which are close to the edges of σ(H^ω_L). As one can observe, the method used in the proof of Theorem 1.5 can also be applied to find the asymptotic distributions of the first (or last) k eigenvalues of H^ω_L with the same normalization Γ_L, where k is a positive integer. In view of (2.9), we only have to find the k top (or bottom) order statistics of the collection of independent random variables {ω_n/(1+|n|)^α}_{n∈Λ_L}.

References

[1] McKean, H.:
A limit law for the ground state of Hill's equation, J. Statist. Phys., 1227-.

[2] On the basic states of one dimensional disordered structures, Comm. Math. Phys., 101-.

[3] On asymptotics of eigenvalues for a certain 1-dimensional random Schrödinger operator, Osaka J. Math., 69-.

[4] The ground state eigenvalue of Hill's equation with white noise potential, Comm. Pure Appl. Math., 1277-.

[5] Böcker, S.: Zur integrierten Zustandsdichte von Schrödingeroperatoren mit zufälligen, inhomogenen Potentialen, Doctoral thesis, Ruhr-Universität Bochum (2003).

[6] Böcker, S., Kirsch, W., Stollmann, P.: Spectral theory for nonstationary random potentials, in: Interacting Stochastic Systems, 103-.

[7] Gordon, A., Jakšić, V., Molčanov, S., Simon, B.: Spectral properties of random Schrödinger operators with unbounded potentials, Comm. Math. Phys., 23-50, 1993.

[8] Kirsch, W.: An Invitation to Random Schrödinger Operators (with an appendix by Frédéric Klopp), Panor. Synthèses, Random Schrödinger Operators, Soc. Math. France, Paris, 1-.

[9] The integrated density of states for random Schrödinger operators, in: Spectral Theory and Mathematical Physics, Proc. Sympos. Pure Math. 76, part 2, Amer. Math. Soc., Providence, RI, 649-.

[10] Anderson model with decaying randomness: existence of extended states, Proc. Indian Acad. Sci. Math. Sci., 285-.

[11] From power pure point to continuous spectrum in disordered systems, Ann. Inst. H. Poincaré Phys. Théor. (Gauthier-Villars), 283-.

[12] Pastur, L., Figotin, A.: Spectra of Random and Almost-Periodic Operators, Springer-Verlag, Berlin, 1992.

[13] Carmona, R., Lacroix, J.: Spectral Theory of Random Schrödinger Operators, Birkhäuser, Boston, 1990.

[14] Cycon, H., Froese, R., Kirsch, W., Simon, B.: Schrödinger Operators, Texts and Monographs in Physics, Springer-Verlag, 1985.

[15] Veselić, I.: Existence and Regularity Properties of the Integrated Density of States of Random Schrödinger Operators, Lecture Notes in Mathematics, Springer-Verlag, 2008.

[16] Dolai, D.: Some estimates regarding integrated density of states for random Schrödinger operator with decaying random potentials, Oper. Theory Adv. Appl., 119-.

[17] Kirsch, W., Krishna, M., Obermeit, J.: Anderson model with decaying randomness: mobility edge, Math. Z., 421-.

[18] Villani, C.: Optimal Transport: Old and New, Grundlehren der mathematischen Wissenschaften, Springer Science & Business Media, 2008.

[19] Anderson, G., Guionnet, A., Zeitouni, O.: An Introduction to Random Matrices, Cambridge Studies in Advanced Mathematics, Cambridge University Press, 2010.

[20] Varadhan, S.: Probability Theory, Courant Lecture Notes 7, American Mathematical Society, Providence, RI, 2000.

[21] Demuth, M., Krishna, M.: Determining Spectra in Quantum Theory, Birkhäuser.