Eigenvalue Statistics for higher rank Anderson model over Canopy tree
Narayanan P. A.
The Institute of Mathematical Sciences, Taramani, Chennai 600113, India
email: [email protected]
June 20, 2018
Abstract
This work is focused on the local eigenvalue statistics for the Anderson tight binding model with non-rank-one perturbations over the canopy tree, at large disorder. On the Hilbert space $\ell^2(\mathcal{C})$, where $\mathcal{C}$ is the canopy tree, the random operator we consider is $\Delta_{\mathcal{C}} + \sum_{y \in J} \omega_y P_y$, where $\Delta_{\mathcal{C}}$ is the adjacency operator over the tree, $\{\omega_y\}_{y \in J}$ are i.i.d. real random variables following some absolutely continuous distribution having a bounded density with compact support, and $P_y$ is the projection onto $\ell^2(\{x \in \mathcal{C} : d(y,x) \le m \ \&\ y \prec x\})$. For this operator, we show that the eigenvalue-counting point process converges to a compound Poisson process.
Introduction
In the theory of disordered systems, the Anderson tight binding model is well studied for its spectral and dynamical properties. The spectral theory for the Anderson tight binding Hamiltonian over the Bethe lattice has a rich structure, and it is one of the models for which the existence of both the absolutely continuous [3, 12, 18] and the pure point [1, 2, 13] spectrum is proven. Naturally, the next question concerns the local structure of the spectrum, so the eigenvalue statistics is an important object to study. However, the eigenvalue statistics as defined by Minami [21] does not describe the eigenvalue statistics over the Bethe lattice, but over the canopy tree (as explained by Aizenman-Warzel [4]). The main focus of this manuscript is to study the local eigenvalue statistics for the Anderson tight binding model over the canopy tree when the single site potential affects a collection of vertices of the tree. To define the point process, though, we look at the cut-off operator on the Bethe lattice.

To describe our main result we need to set up a few notations first. Let $\mathcal{B} = (V_{\mathcal{B}}, E_{\mathcal{B}})$ denote the infinite rooted tree with root $0 \in V_{\mathcal{B}}$, in which every vertex has $K+1$ neighbours (in the figure, $K$ is 2). On the Hilbert space $\ell^2(\mathcal{B})$ we have the graph Laplacian $\Delta$ defined by
$$(\Delta\psi)(x) = \sum_{d(x,y)=1} \psi(y), \qquad \forall x \in V_{\mathcal{B}},\ \psi \in \ell^2(\mathcal{B}).$$
Here, $d(x,y)$ is the usual distance on graphs, which is the length (i.e., the number of edges) of the shortest path between the vertices $x$ and $y$. The higher rank Anderson type operator on the Bethe lattice $\mathcal{B}$ is defined as
$$H^{\omega}_{\lambda} := \Delta + \lambda \sum_{y \in J} \omega_y P_y, \tag{1.1}$$
where $\lambda > 0$ is the disorder parameter, and $\{\omega_y\}_{y \in J}$ are independent identically distributed real random variables following an absolutely continuous distribution $\rho(x)\,dx$, where $\rho \in L^{\infty}(\mathbb{R})$ and $\mathrm{supp}(\rho)$ is compact. The projections $P_y$ are defined by
$$(P_y\psi)(x) = \begin{cases} \psi(x), & d(y,x) \le m \ \&\ y \prec x, \\ 0, & \text{otherwise}, \end{cases} \tag{1.2}$$
for $y \in J$. Note that $\mathrm{rank}(P_y) = \frac{K^{m+1}-1}{K-1}$, which we will denote by $M$.
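As a concrete illustration of the setup above, the following sketch builds the matrix of the operator (1.1) restricted to a finite ball of the Bethe lattice. This is not code from the paper; the uniform density on $[-1,1]$, the parameter values, and all function names are illustrative assumptions.

```python
import numpy as np

def bethe_ball(K, L):
    """Vertices of Lambda_L(0) in the Bethe lattice (root 0; every vertex
    has K+1 neighbours), returned as parent pointers and depths."""
    parent, depth = [-1], [0]
    frontier = [0]
    for _ in range(L):
        nxt = []
        for v in frontier:
            # the root has K+1 forward neighbours, every other vertex has K
            for _ in range(K + 1 if v == 0 else K):
                parent.append(v)
                depth.append(depth[v] + 1)
                nxt.append(len(parent) - 1)
        frontier = nxt
    return parent, depth

def anderson_operator(K, L, m, lam, rng):
    """H = Delta + lam * sum_{y in J} omega_y P_y restricted to Lambda_L,
    with P_y the projection onto the forward ball of radius m at y."""
    parent, depth = bethe_ball(K, L)
    n = len(parent)
    H = np.zeros((n, n))
    for v in range(1, n):
        H[v, parent[v]] = H[parent[v], v] = 1.0   # adjacency operator Delta
    omegas = {}
    for x in range(n):
        # block root of x: the ancestor at depth (d // (m+1)) * (m+1); this is
        # the unique y in J with y preceding x and d(y, x) <= m
        y, target = x, (depth[x] // (m + 1)) * (m + 1)
        while depth[y] > target:
            y = parent[y]
        if y not in omegas:
            omegas[y] = rng.uniform(-1.0, 1.0)   # bounded density, compact support
        H[x, x] += lam * omegas[y]
    return H

rng = np.random.default_rng(0)
H = anderson_operator(K=2, L=4, m=1, lam=10.0, rng=rng)
print(H.shape)
```

Each vertex receives the coupling of the unique block root above it, so the random potential is constant on each forward ball $\Lambda'_m(y)$, $y \in J$, and every block has rank $M = 1 + K + \cdots + K^m$.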
Here, $y \prec x$ means that the vertex $x$ satisfies $d(0,x) = d(0,y) + d(y,x)$; i.e., $y$ lies on the path from $0$ to $x$ (equivalently, $x$ is forward to $y$). Finally, the indexing set $J$ is defined by
$$J = \{x \in V_{\mathcal{B}} : d(0,x) \in (m+1)\mathbb{N} \cup \{0\}\}. \tag{1.3}$$
Since our main concern is to study the eigenvalue process, we will work with the cut-off operator
$$H^{\omega}_{\lambda,L} = \chi_{\Lambda_L(0)}\, H^{\omega}_{\lambda}\, \chi_{\Lambda_L(0)} \tag{1.4}$$
on $\ell^2(\Lambda_L(0))$, where $\Lambda_L(x) = \{y \in V_{\mathcal{B}} : d(x,y) \le L\}$, and the projection $\chi_U$, for $U \subseteq V_{\mathcal{B}}$, is defined by
$$(\chi_U\psi)(x) = \begin{cases} \psi(x), & x \in U, \\ 0, & \text{otherwise}, \end{cases} \qquad \forall \psi \in \ell^2(\mathcal{B}).$$
From now on, for convenience, we will denote $\Lambda_L(0)$ by $\Lambda_L$.

To study the local eigenvalue statistics at $E \in \mathbb{R}$, we will look at the limit of the random point processes $\{\mu^{\omega,\lambda}_{E,L}\}_{L \in \mathbb{N}}$ defined by
$$\mu^{\omega,\lambda}_{E,L}(f) = \mathrm{Tr}\big(f\big(|\Lambda_L|(H^{\omega}_{\lambda,L} - E)\big)\big), \qquad \forall f \in C_c(\mathbb{R}), \tag{1.5}$$
where $C_c(\mathbb{R})$ is the set of all continuous functions with compact support on $\mathbb{R}$. As stated earlier, this method of defining the point process does not provide local eigenvalue statistics over the Bethe lattice, but over the canopy tree. The canopy tree $\mathcal{C} = (V_{\mathcal{C}}, E_{\mathcal{C}})$ is defined recursively, layer by layer, starting from the boundary vertices $\mathcal{C}_0 = \partial\mathcal{C}$ (a countable set of vertices). Each layer $\mathcal{C}_n$ (a countable collection of vertices) is partitioned into sets of $K$ vertices, which are joined to a single unique vertex in the layer $\mathcal{C}_{n+1}$. Notice that, in the graph defined through this process, for any $x \in \mathcal{C}_n$ we have $d(x, \partial\mathcal{C}) = n$. [See Figure 1.]

[Figure 1: First few recursion steps for the canopy tree for $K = 2$.]

On the canopy graph we have the random operator
$$H^{\omega}_{\mathcal{C},\lambda} = \Delta_{\mathcal{C}} + \lambda \sum_{y \in J_{\mathcal{C}}} \omega_y P_y, \tag{1.6}$$
where $P_y := \chi_{\tilde\Lambda_m(y)}$ for $y \in J_{\mathcal{C}}$, $\lambda > 0$ is the disorder parameter, and $\{\omega_y\}_{y \in J_{\mathcal{C}}}$ are i.i.d. real random variables following the distribution $\rho(x)\,dx$. Here,
$$J_{\mathcal{C}} := \{y \in V_{\mathcal{C}} : d(\partial\mathcal{C}, y) = m + (m+1)k \text{ for some } k \in \mathbb{N} \cup \{0\}\},$$
and
$$\tilde\Lambda_m(y) = \{x \in V_{\mathcal{C}} : d(y,x) \le m \ \&\ d(\partial\mathcal{C}, y) = d(\partial\mathcal{C}, x) + d(x,y)\}.$$
Note that, removing the root of $\Lambda_L$ of the Bethe lattice, we are left with a collection of $K+1$ sub-trees, each of which can be identified with a sub-tree $\tilde\Lambda_{L-1}(y)$ of the canopy tree, for some $y$ such that $d(y, \partial\mathcal{C}) = L-1$. Intuitively, from the perspective of the root of $\Lambda_L$, as $L \to \infty$ it describes the Bethe lattice; but from the perspective of the vertices near the boundary (in other words, the canopy) of $\Lambda_L$, it describes the canopy tree.

With these definitions in place we have:

Theorem 1.1.
Let $H^{\omega}_{\lambda,L}$ be defined as in (1.4). Then, for any $0 < s < 1$, $E \in \mathbb{R}$, and $\gamma > 0$, there exist $\lambda_{\gamma,s} > 0$ and $C > 0$ such that
$$\sup_{\epsilon > 0}\, \mathbb{E}^{\omega}\Big[\big|\big\langle \delta_x, \big(H^{\omega}_{\lambda,L} - E - i\epsilon\big)^{-1} \delta_y \big\rangle\big|^s\Big] \le C e^{-\gamma\, d(x,y)} \tag{1.7}$$
for all $\lambda > \lambda_{\gamma,s}$ and $L$ large enough that $x, y \in \Lambda_L$.

The above theorem describes the exponential decay of the Green's function. What is more important, however, is the fact that any rate of decay is achievable by adjusting the disorder parameter. The next theorem concerns the regularity of the density of states for the model.
Theorem 1.2.
For any interval $I \subset \mathbb{R}$, we have
$$\frac{1}{|\Lambda_L|}\, \mathrm{Tr}\big(E_{H^{\omega}_{\lambda,L}}(I)\big) \xrightarrow{L \to \infty} n_{\mathcal{C},\lambda}(I) \quad a.s., \tag{1.8}$$
where
$$n_{\mathcal{C},\lambda}(I) = \frac{K-1}{K} \sum_{n=0}^{\infty} K^{-n}\, \mathbb{E}^{\omega}\Big[\big\langle \delta_{x_n}, E_{H^{\omega}_{\mathcal{C},\lambda}}(I)\, \delta_{x_n} \big\rangle\Big]. \tag{1.9}$$
Here, $\{x_n\}_{n=0}^{\infty}$ is a sequence of vertices of $V_{\mathcal{C}}$ such that $d(\partial\mathcal{C}, x_n) = n$. The measure $n_{\mathcal{C},\lambda}$ is absolutely continuous w.r.t. the Lebesgue measure.

Theorem 1.3.
For any $E \in \mathbb{R}$ and $\lambda > 0$ large enough, define the sequence of measures $\{\mu^{\omega,\lambda}_{E,L}\}_{L \in \mathbb{N}}$ by (1.5). For any bounded interval $I$, there exists a sequence of natural numbers $\{L_n\}_{n \in \mathbb{N}}$ such that the random variables $\{\mu^{\omega,\lambda}_{E,L_n}(I)\}_n$ converge in distribution to $P^{\omega}_I$, a compound Poisson random variable. The characteristic function $\mathbb{E}[e^{\iota t P^{\omega}_I}]$ is of the form $\exp\big(\sum_{k=1}^{M} (e^{\iota t k} - 1)\, p_k(I)\big)$, with the property $p_k(I) \le \frac{K\, n_{\mathcal{C},\lambda}(E)}{k}\, |I|$ for all $1 \le k \le M$.

It should be noted that the operators $H^{\omega}_{\mathcal{C},\lambda}$ and $H^{\omega}_{\lambda,L}$ can have non-trivial multiplicity. This is because any symmetry of the tail sub-trees ($\tilde\Lambda_m(y)$ for $d(y,\partial\mathcal{C}) = m$) produces a unitary operator which commutes with $H^{\omega}_{\mathcal{C},\lambda}$ (a similar thing happens in the case of $H^{\omega}_{\lambda,L}$).

The eigenvalue statistics in one dimension was studied by Molchanov [22], and later in higher dimensions by Minami [21]. In the region of fractional localization (where (1.7) holds), they showed that the statistics is Poisson. Subsequently, Poisson statistics was shown for trees by Aizenman-Warzel [4], and for regular graphs by Geisinger [14]. In some recent results, Germinet-Klopp [15] extended the results of Killip-Nakano [17]; these works are focused on eigenfunction statistics in the regime of pure point spectrum. An analogue of Minami's work [21] was done by Dolai-Krishna [11], with α-Hölder continuous single site distribution. There are also works in the region of absolutely continuous spectrum, such as Kotani-Nakano [19], Avila-Last-Simon [5], and Mallick-Dolai [20]. There are a few results on spectral statistics for the non-rank-one case, for example Hislop-Krishna [16] and Combes-Germinet-Klein [8]. This work is inclined towards the works of Aizenman-Warzel [4] and Hislop-Krishna [16]. In [4], the authors obtained a simple Poisson point process as the eigenvalue statistics for the Anderson tight binding model over the canopy tree.
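The limiting object in Theorem 1.3 can be illustrated numerically: a compound Poisson variable $\sum_{k=1}^{M} k N_k$ with independent $N_k \sim \mathrm{Poisson}(p_k)$ has characteristic function $\exp\big(\sum_k (e^{\iota t k}-1)p_k\big)$. The sketch below checks this against an empirical average; the intensities in `p` and all parameters are hypothetical, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.7, 0.4, 0.2])      # hypothetical intensities p_k(I), k = 1..M
M, t, n = len(p), 0.9, 200_000
# compound Poisson variable: sum over k of k * N_k, N_k ~ Poisson(p_k) independent
samples = sum((k + 1) * rng.poisson(p[k], size=n) for k in range(M))
empirical = np.mean(np.exp(1j * t * samples))
exact = np.exp(sum((np.exp(1j * t * (k + 1)) - 1) * p[k] for k in range(M)))
print(abs(empirical - exact))
```

With 200,000 samples the empirical characteristic function agrees with the closed form to a few decimal places, which is the sense in which the jump sizes $1, \dots, M$ (here $M$ is the rank of the perturbations) are visible in the limit.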
One of the important points they raised is the fact that the infinite divisibility of the eigenvalue process cannot be obtained as in the $\mathbb{Z}^d$ case. This is because $\frac{|\partial\Lambda_L|}{|\Lambda_L|}$ does not converge to zero as $L \to \infty$. But, because of the exponential growth of the surface area, and the fact that we can achieve any rate of decay in Theorem 1.1, we can get the infinite divisibility needed for the compound Poisson process by dividing the tree into sub-trees of height $\approx \alpha L$ (for $0 < \alpha < 1$).

In this section, some important results are established which are essential for proving the main results. Before that, a few notations are needed. For $y \in \Lambda_L$, we will denote
$$\Lambda'_l(y) := \{x \in \Lambda_L : d(x,y) \le l \ \&\ y \prec x\}, \tag{2.1}$$
for $l \in \mathbb{N}$. Therefore, for any $p \in J$, the projection $P_p = \chi_{\Lambda'_m(p)}$. Using the resolvent equation between $H^{\omega}_{\lambda,L}$ and
$$\tilde{H}^{\omega}_L := (I - P_p)\, H^{\omega}_{\lambda,L}\, (I - P_p) + P_p \Delta P_p + \lambda\omega_p P_p,$$
we have
$$\big\langle \delta_x, (H^{\omega}_{\lambda,L} - z)^{-1} \delta_y \big\rangle = -\Big\langle \delta_x, \Big[P_p \Delta P_p + (\lambda\omega_p - z) P_p - P_p \Delta (\chi_{\Lambda_L} - P_p)(\tilde{H}^{\omega}_L - z)^{-1} (\chi_{\Lambda_L} - P_p) \Delta P_p\Big]^{-1} P_p \Delta (\chi_{\Lambda_L} - P_p)(\tilde{H}^{\omega}_L - z)^{-1} \delta_y \Big\rangle \tag{2.2}$$
for $x \in \Lambda'_m(p)$ and $y \in \Lambda_L \setminus \Lambda'_m(p)$. By taking $y \in \Lambda'_m(p)$, we can also show that
$$P_p (H^{\omega}_{\lambda,L} - z)^{-1} P_p = \Big[P_p \Delta P_p + (\lambda\omega_p - z) P_p - P_p \Delta (\chi_{\Lambda_L} - P_p)(\tilde{H}^{\omega}_L - z)^{-1} (\chi_{\Lambda_L} - P_p) \Delta P_p\Big]^{-1}. \tag{2.3}$$
Using the fact that there is a unique path from $x$ to $y$ (in the sense that if we remove any edge within this path, then $x$ and $y$ lie in different components), say $x = x_0, \dots, x_n = y$, and taking $n_0 < n$ so that $x_{n_0} \in \Lambda'_m(p)$ and $x_{n_0+1} \in \Lambda_L \setminus \Lambda'_m(p)$, the expression (2.2) gives us
$$\big\langle \delta_x, (H^{\omega}_{\lambda,L} - z)^{-1} \delta_y \big\rangle = \Gamma_{x,\, x_{n_0}} \big\langle \delta_{x_{n_0+1}}, (\tilde{H}^{\omega}_L - z)^{-1} \delta_y \big\rangle, \tag{2.4}$$
where $\Gamma_{a,b}$ is
$$\Big\langle \delta_a, \Big[P_p \Delta P_p + (\lambda\omega_p - z) P_p - P_p \Delta (\chi_{\Lambda_L} - P_p)(\tilde{H}^{\omega}_L - z)^{-1} (\chi_{\Lambda_L} - P_p) \Delta P_p\Big]^{-1} \delta_b \Big\rangle \tag{2.5}$$
for $a, b \in \Lambda'_m(p)$.
Repeating this procedure inductively, we have
$$\big\langle \delta_x, (H^{\omega}_{\lambda,L} - z)^{-1} \delta_y \big\rangle = \prod_{i=0}^{m_0} \Gamma_{x_{n_i+1},\, x_{n_{i+1}}}, \tag{2.6}$$
with $x = x_0, \dots, x_n = y$ the shortest path between $x$ and $y$, and $\{n_i\}_{i=1}^{m_0}$ with the property that for each $i$ there exists $p_i \in J$ such that $x_{n_{i-1}+1}, x_{n_i} \in \Lambda'_m(p_i)$; here $n_0 = 0$ and $n_{m_0+1} = n$. Finally,
$$\Gamma_{x_{n_i+1},\, x_{n_{i+1}}} = \Big\langle \delta_{x_{n_i+1}}, \Big[P_{p_i} \Delta P_{p_i} + (\lambda\omega_{p_i} - z) P_{p_i} - P_{p_i} \Delta \Big(\chi_{\Lambda_L} - \sum_{j=1}^{i} P_{p_j}\Big)(\tilde{H}^{\omega}_{i,L} - z)^{-1} \Big(\chi_{\Lambda_L} - \sum_{j=1}^{i} P_{p_j}\Big) \Delta P_{p_i}\Big]^{-1} \delta_{x_{n_{i+1}}} \Big\rangle, \tag{2.7}$$
where
$$\tilde{H}^{\omega}_{i,L} := (\chi_{\Lambda_L} - P_{p_i})\, \tilde{H}^{\omega}_{i-1,L}\, (\chi_{\Lambda_L} - P_{p_i}) + P_{p_i} \Delta P_{p_i} + \lambda\omega_{p_i} P_{p_i},$$
with $\tilde{H}^{\omega}_{0,L} := H^{\omega}_{\lambda,L}$.

Observe that $P_p \Delta (\chi_{\Lambda_L} - P_p)(\tilde{H}^{\omega}_L - z)^{-1} (\chi_{\Lambda_L} - P_p) \Delta P_p$ is a multiplication operator over the boundary of the sub-tree $\Lambda'_m(p)$. After removing the sub-tree $\Lambda'_m(p)$, we are left with disjoint trees, and the restrictions of $\tilde{H}^{\omega}_L$ to these sub-trees are independent of each other. For $y \in \Lambda'_m(p)$, define
$$N_y = \{x \in \Lambda_L : d(x,y) = 1 \ \&\ x \notin \Lambda'_m(p)\},$$
which is the set of neighbours of the vertex $y$ lying outside $\Lambda'_m(p)$. We have
$$P_p \Delta (\chi_{\Lambda_L} - P_p)(\tilde{H}^{\omega}_L - z)^{-1} (\chi_{\Lambda_L} - P_p) \Delta P_p = \sum_{y \in \Lambda'_m(p)} |\delta_y\rangle\langle\delta_y| \sum_{x \in N_y} \big\langle \delta_x, (\tilde{H}^{\omega}_L - z)^{-1} \delta_x \big\rangle,$$
and the independence of $\tilde{H}^{\omega}_L$ on each of the sub-trees implies the independence of $\big\{\langle \delta_x, (\tilde{H}^{\omega}_L - z)^{-1} \delta_x \rangle\big\}_x$ for $x \in \cup_{y \in \Lambda'_m(p)} N_y$.

With these notations, we are ready to establish the Wegner and the Minami estimates. Notice that $\mathrm{rank}(P_p)$ (which we have called $M$) is the same as $|\Lambda'_m(p)|$. Even though there are multiple proofs of the Wegner estimate, for example [7, 9, 10], those proofs are in more general settings and use more sophisticated techniques. In the case of projection valued perturbations, the proof can be done using the rank one case as the basis, as done here.

Lemma 2.1.
(Wegner Estimate) For any bounded interval $I \subset \mathbb{R}$, we have
$$\mathbb{E}^{\omega}\big[\big\langle \delta_x, E_{H^{\omega}_{\lambda,L}}(I)\, \delta_x \big\rangle\big] \le C|I|, \quad\text{and} \tag{2.8}$$
$$\mathbb{E}^{\omega}\big[\mathrm{Tr}\big(E_{H^{\omega}_{\lambda,L}}(I)\big)\big] \le C|I|\,|\Lambda_L|. \tag{2.9}$$

Proof.
The proof follows similar steps as in the rank one case. Using Stone's formula [23, Theorem VII.13], we have
$$\frac{1}{2}\Big(E_{H^{\omega}_{\lambda,L}}(\bar I) + E_{H^{\omega}_{\lambda,L}}(I^{\circ})\Big) = \operatorname*{s-lim}_{\epsilon \downarrow 0} \frac{1}{\pi} \int_I \Im\big(H^{\omega}_{\lambda,L} - E - \iota\epsilon\big)^{-1}\, dE$$
(here, $\bar I$ and $I^{\circ}$ are the closure and interior of $I$, respectively), and since $\Im(H^{\omega}_{\lambda,L} - E - \iota\epsilon)^{-1}$ is non-negative definite, we can use Tonelli's theorem [6, Theorem 3.7.7] to get
$$\mathbb{E}^{\omega}\Big[\Big\langle \delta_x, \frac{1}{2}\big(E_{H^{\omega}_{\lambda,L}}(\bar I) + E_{H^{\omega}_{\lambda,L}}(I^{\circ})\big)\, \delta_x \Big\rangle\Big] = \lim_{\epsilon \downarrow 0} \frac{1}{\pi} \int_I \mathbb{E}^{\omega}\big[\big\langle \delta_x, \Im(H^{\omega}_{\lambda,L} - E - \iota\epsilon)^{-1} \delta_x \big\rangle\big]\, dE. \tag{2.10}$$
So, to get (2.8) and (2.9), we need to estimate $\mathbb{E}^{\omega}[\langle \delta_x, \Im(H^{\omega}_{\lambda,L} - E - \iota\epsilon)^{-1} \delta_x \rangle]$ independently of $E$ and $\epsilon$.

We can rewrite (2.3) as
$$P_p (H^{\omega}_{\lambda,L} - z)^{-1} P_p = \big[\lambda\omega_p I - A^{\omega}(z)\big]^{-1}, \qquad \forall z \in \mathbb{C}^+,$$
where we have collected all the terms occurring in (2.3) other than $\lambda\omega_p$ into $A^{\omega}(z)$. We can see that $A^{\omega}(z)$ does not depend on $\omega_p$. Now, let $\{E^{\tilde\omega,p,z}_i\}_i$ denote the eigenvalues of the matrix $A^{\omega}(z)$. Because $P_p(H^{\omega}_{\lambda,L} - z)^{-1} P_p$ is a matrix valued Herglotz function, one can see that $A^{\omega}(z)$ is also a matrix valued Herglotz function, and so all its eigenvalues have positive imaginary part. Hence,
$$\mathbb{E}^{\omega}\big[\mathrm{Tr}\big(\Im P_p (H^{\omega}_{\lambda,L} - z)^{-1} P_p\big)\big] = \mathbb{E}^{\tilde\omega}\Big[\int \sum_i \Im \frac{1}{\lambda x - E^{\tilde\omega,p,z}_i}\, \rho(x)\, dx\Big] \le \frac{\|\rho\|_{\infty}}{\lambda} \sum_i \mathbb{E}^{\tilde\omega}\Big[\int \frac{\Im E^{\tilde\omega,p,z}_i}{\big(u - \Re E^{\tilde\omega,p,z}_i\big)^2 + \big(\Im E^{\tilde\omega,p,z}_i\big)^2}\, du\Big] \le \frac{\pi M \|\rho\|_{\infty}}{\lambda}. \tag{2.11}$$
Using the above estimate we get
$$\mathbb{E}^{\omega}\big[\mathrm{Tr}(E_{H^{\omega}_{\lambda,L}}(I))\big] = \sum_{y \in J \cap \Lambda_L} \mathbb{E}^{\omega}\big[\mathrm{Tr}(P_y E_{H^{\omega}_{\lambda,L}}(I) P_y)\big] = \sum_{y \in J \cap \Lambda_L} \lim_{\epsilon \downarrow 0} \frac{1}{\pi}\int_I \mathbb{E}^{\omega}\big[\mathrm{Tr}\big(\Im P_y (H^{\omega}_{\lambda,L} - E - \iota\epsilon)^{-1} P_y\big)\big]\, dE \le \frac{M\|\rho\|_{\infty}}{\lambda}\, |\Lambda_L|\, |I|,$$
and (2.8) follows in the same way, keeping only the single diagonal term. $\square$

Since the model we are concerned with involves higher rank perturbations, operators in our model might have eigenvalues of multiplicity greater than one. Therefore, in general, we might not be able to get a proper Minami estimate. Below, we prove an extended version of the Minami estimate.
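Before turning to the Minami estimate, the spectral-averaging inequality at the heart of the Wegner proof, namely $\mathbb{E}_{\omega}\big[\sum_i \Im(\omega - E_i)^{-1}\big] \le \pi M \|\rho\|_{\infty}$ for any fixed $E_i$ with positive imaginary part, can be sanity-checked by Monte Carlo. This is a sketch under stated assumptions (coupling $\lambda$ set to 1, a uniform density, and arbitrarily chosen eigenvalues), not a verification of the paper's operators.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical eigenvalues E_i of the Herglotz matrix A^omega(z): Im E_i > 0
Ei = np.array([0.3 + 0.05j, -1.1 + 0.40j, 2.0 + 0.01j])
M = len(Ei)                        # stands in for rank(P_p)
rho_sup = 0.5                      # sup of the density of omega ~ Uniform(-1, 1)
omega = rng.uniform(-1.0, 1.0, size=500_000)
# Im sum_i (omega - E_i)^{-1} is positive, and its average over omega is
# bounded by pi * M * ||rho||_infty, uniformly in the choice of the E_i
vals = sum(np.imag(1.0 / (omega - E)) for E in Ei)
print(vals.mean(), np.pi * M * rho_sup)
```

The point of the bound is its uniformity: no matter how close the $E_i$ come to the real axis, the Lorentzian peaks each integrate to at most $\pi \|\rho\|_{\infty}$ against the bounded density.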
Lemma 2.2. (Extended Minami Estimate)
For any bounded interval $I \subset \mathbb{R}$, we have
$$\sum_{m \ge M} \mathbb{P}\big[\mathrm{Tr}\big(E_{H^{\omega}_{\lambda,L}}(I)\big) > m\big] \le \big(\pi \|\rho\|_{\infty}\, |\Lambda_L|\, |I|\big)^2, \tag{2.12}$$
where $M$ is the common rank of the perturbing projections.

Proof. Without loss of generality assume $\mathrm{supp}(\rho) \subseteq [a,b]$. Following the notations from the previous lemma, for any $y \in J$, let $H^{\tilde\omega}_L$ denote $H^{\omega}_{\lambda,L}$ with the $\omega_y$-term removed; then
$$H^{\omega}_{\lambda,L} \le H^{\tilde\omega}_L + \lambda(b + t) P_y \qquad \forall t > 0.$$
Since the difference of the two sides is a non-negative perturbation of rank at most $M$, $\big|\mathrm{Tr}(E_{H^{\omega}_{\lambda,L}}(I)) - \mathrm{Tr}(E_{H^{\tilde\omega}_L + \lambda(b+t)P_y}(I))\big| \le M$, so that
$$\mathrm{Tr}\big(E_{H^{\omega}_{\lambda,L}}(I)\big) - M \le \mathrm{Tr}\big(E_{H^{\tilde\omega}_L + \lambda(b+t)P_y}(I)\big),$$
and in particular,
$$\mathrm{Tr}\big(E_{H^{\omega}_{\lambda,L}}(I)\big) - M \le \int \mathrm{Tr}\big(E_{H^{\tilde\omega}_L + \lambda(b+\lambda_y)P_y}(I)\big)\, \rho(\lambda_y + a)\, d\lambda_y.$$
Hence,
$$\sum_{m \ge M} \mathbb{P}\big[\mathrm{Tr}(E_{H^{\omega}_{\lambda,L}}(I)) > m\big] \le \mathbb{E}^{\omega}\Big[\mathrm{Tr}(E_{H^{\omega}_{\lambda,L}}(I))\,\big(\mathrm{Tr}(E_{H^{\omega}_{\lambda,L}}(I)) - M\big)\, \chi\big(\mathrm{Tr}(E_{H^{\omega}_{\lambda,L}}(I)) > M\big)\Big]$$
$$= \sum_{y \in J \cap \Lambda_L} \mathbb{E}^{\omega}\Big[\mathrm{Tr}(P_y E_{H^{\omega}_{\lambda,L}}(I) P_y)\,\big(\mathrm{Tr}(E_{H^{\omega}_{\lambda,L}}(I)) - M\big)\, \chi\big(\mathrm{Tr}(E_{H^{\omega}_{\lambda,L}}(I)) > M\big)\Big]$$
$$\le \sum_{y} \mathbb{E}^{\omega}\Big[\mathrm{Tr}(P_y E_{H^{\omega}_{\lambda,L}}(I) P_y)\, \int \mathrm{Tr}\big(E_{H^{\tilde\omega}_L + \lambda(b+\lambda_y)P_y}(I)\big)\, \rho(\lambda_y + a)\, d\lambda_y\Big]$$
$$\le \sum_{y} \mathbb{E}^{\tilde\omega}\Big[\Big(\int \mathrm{Tr}\big(E_{H^{\tilde\omega}_L + \lambda(b+\lambda_y)P_y}(I)\big)\, \rho(\lambda_y + a)\, d\lambda_y\Big)\Big(\int_a^b \mathrm{Tr}\big(P_y E_{H^{\tilde\omega}_L + \lambda x P_y}(I) P_y\big)\, \rho(x)\, dx\Big)\Big]$$
$$\le \pi \|\rho\|_{\infty}\, M\, |I| \sum_{y} \mathbb{E}^{\tilde\omega}\Big[\int \mathrm{Tr}\big(E_{H^{\tilde\omega}_L + \lambda(b+\lambda_y)P_y}(I)\big)\, \rho(\lambda_y + a)\, d\lambda_y\Big].$$
So, taking $\{\lambda_y\}_{y \in J}$ to be i.i.d. random variables with distribution $\rho(x+a)\,dx$, independent of $\{\omega_y\}_{y \in J}$, we can use the Wegner estimate (2.9) to get (2.12). $\square$

Proof of Theorem 1.1. To prove the theorem we will use the expression (2.6). Notice that, in that expression, $\Gamma_{x_{n_i+1},\, x_{n_{i+1}}}$ is independent of the random variables $\{\omega_{p_j}\}_{j=1}^{i-1}$. So we have
$$\mathbb{E}^{\omega}\big[\big|\langle \delta_x, (H^{\omega}_{\lambda,L} - z)^{-1} \delta_y \rangle\big|^s\big] = \mathbb{E}^{\omega^{\perp}_{p_1},\dots,\omega^{\perp}_{p_{m_0}}}\Big[\mathbb{E}^{\omega_{p_1}}\Big[|\Gamma_{x_1, x_{n_1}}|^s\, \mathbb{E}^{\omega_{p_2}}\big[\cdots \mathbb{E}^{\omega_{p_{m_0}}}\big[|\Gamma_{x_{n_{m_0}+1},\, x_{n_{m_0+1}}}|^s\big]\cdots\big]\Big]\Big].$$
Therefore, all we need to do is to estimate $\mathbb{E}^{\omega_{p_i}}\big[|\Gamma_{x_{n_i+1},\, x_{n_{i+1}}}|^s\big]$ independently of $\{\omega_n\}_{n \ne p_i}$. Let $\{E^{\omega}_j(z)\}_{j=1}^{\mathrm{rank}(P_{p_i})}$, counted with multiplicity, denote the eigenvalues of
$$P_{p_i} \Delta P_{p_i} - P_{p_i} \Delta \Big(\chi_{\Lambda_L} - \sum_{j=1}^{i} P_{p_j}\Big)(\tilde{H}^{\omega}_{i,L} - z)^{-1} \Big(\chi_{\Lambda_L} - \sum_{j=1}^{i} P_{p_j}\Big) \Delta P_{p_i}.$$
Then, by the definition of $\Gamma$ (see (2.7)), we have
$$|\Gamma_{x_{n_i+1},\, x_{n_{i+1}}}| \le \sum_{j=1}^{\mathrm{rank}(P_{p_i})} \frac{1}{|E^{\omega}_j(z) - \lambda\omega_{p_i} - z|}.$$
Hence,
$$\mathbb{E}^{\omega_{p_i}}\big[|\Gamma_{x_{n_i+1},\, x_{n_{i+1}}}|^s\big] \le \mathbb{E}^{\omega_{p_i}}\Big[\sum_{j=1}^{\mathrm{rank}(P_{p_i})} \frac{1}{|E^{\omega}_j(z) - \lambda\omega_{p_i} - z|^s}\Big] \le \frac{C|\Lambda'_m(p_i)|}{\lambda^s}.$$
Therefore, for large enough $\lambda$, $C|\Lambda'_m(p_i)|\lambda^{-s} < 1$. So, using
$$\mathbb{E}^{\omega}\big[\big|\langle \delta_x, (H^{\omega}_{\lambda,L} - z)^{-1} \delta_y \rangle\big|^s\big] \le \Big(\frac{C|\Lambda'_m(p_i)|}{\lambda^s}\Big)^{m_0}, \tag{3.1}$$
we get the estimate (1.7), proving the theorem. $\square$

Proof of Theorem 1.2. To show (1.8), it is enough to show
$$\lim_{L \to \infty} \frac{1}{|\Lambda_L|}\, \mathrm{Tr}\big(f(H^{\omega}_{\lambda,L})\big) = \int f(x)\, dn_{\mathcal{C},\lambda}(x)$$
for $f \in C_0(\mathbb{R})$. (For us it is enough to have the above for functions in $C_c(\mathbb{R})$; since $C_c(\mathbb{R})$ is contained in $C_0(\mathbb{R})$, this clearly suffices.) Notice that
$$\frac{1}{|\Lambda_L|}\, \mathrm{Tr}\big(f(H^{\omega}_{\lambda,L})\big) = \frac{1}{|\Lambda_L|} \sum_{r=0}^{L}\ \sum_{d(0,x)=L-r} \big\langle \delta_x, f(H^{\omega}_{\lambda,L})\, \delta_x \big\rangle,$$
which can be written as
$$\frac{1}{|\Lambda_L|}\, \mathrm{Tr}\big(f(H^{\omega}_{\lambda,L})\big) = \frac{1}{|\Lambda_L|}\Big(\big\langle \delta_0, f(H^{\omega}_{\lambda,L})\, \delta_0 \big\rangle + \sum_{r=0}^{L-1} (K+1)K^{L-r-1}\, C^{\omega}_{L,r}(f)\Big),$$
where
$$C^{\omega}_{L,r}(f) := \frac{1}{(K+1)K^{L-r-1}} \sum_{d(0,x)=L-r} \big\langle \delta_x, f(H^{\omega}_{\lambda,L})\, \delta_x \big\rangle;$$
that is, the inner sum runs over the set of vertices at distance $r$ from the boundary. Notice that $|C^{\omega}_{L,r}(f)| \le \|f\|_{\infty}$. Hence, it is enough to show
$$\lim_{L \to \infty} C^{\omega}_{L,r}(f) = \mathbb{E}^{\omega}\big[\langle \delta_{x_r}, f(H^{\omega}_{\mathcal{C},\lambda})\, \delta_{x_r} \rangle\big],$$
where $x_r \in V_{\mathcal{C}}$ is such that $d(x_r, \partial\mathcal{C}) = r$. Since linear combinations of $\Im(\cdot - z)^{-1}$, $z \in \mathbb{C}^+$, are dense in $C_0(\mathbb{R})$, it is enough to show
$$\lim_{L \to \infty} C^{\omega}_{L,r}\big(\Im(\cdot - z)^{-1}\big) = \mathbb{E}^{\omega}\big[\Im \langle \delta_{x_r}, (H^{\omega}_{\mathcal{C},\lambda} - z)^{-1} \delta_{x_r} \rangle\big].$$
For $L \in \mathbb{N}$, choose a sequence $r_L \to \infty$ with $r > r_L$ excluded (i.e., $r_L > r$), define
$$S^{L-r_L}_L := \{x \in \Lambda_L : d(x, \Lambda_{L-r_L}(0)) = 1\},$$
and for $y \in S^{L-r_L}_L$ set $\Lambda_{L,y} := \{x \in \Lambda_L : y \prec x\}$. Then, using the resolvent identity between $H^{\omega}_{\lambda,L}$ and
$$\chi_{\Lambda_{L-r_L}(0)}\, H^{\omega}_{\lambda,L}\, \chi_{\Lambda_{L-r_L}(0)} + \sum_{x \in S^{L-r_L}_L} \chi_{\Lambda_{L,x}}\, H^{\omega}_{\lambda,L}\, \chi_{\Lambda_{L,x}},$$
for $x \in \Lambda_L$ such that $d(x,0) = L-r$ we get
$$\big\langle \delta_x, (H^{\omega}_{\lambda,L} - z)^{-1} \delta_x \big\rangle = \big\langle \delta_x, (H^{\omega}_{\lambda,L,A_x} - z)^{-1} \delta_x \big\rangle - \big\langle \delta_x, (H^{\omega}_{\lambda,L} - z)^{-1} \delta_{P_{A_x}} \big\rangle \big\langle \delta_{A_x}, (H^{\omega}_{\lambda,L,A_x} - z)^{-1} \delta_x \big\rangle,$$
where $A_x$ is the unique vertex in $S^{L-r_L}_L$ such that $x \in \Lambda_{L,A_x}$, $H^{\omega}_{\lambda,L,y} := \chi_{\Lambda_{L,y}}\, H^{\omega}_{\lambda,L}\, \chi_{\Lambda_{L,y}}$, and $P_{A_x} \in \Lambda_{L-r_L}(0)$ is such that $d(A_x, P_{A_x}) = 1$. We choose $r_L$ (a multiple of $m+1$) so that the operators $H^{\omega}_{\lambda,L,x}$ are i.i.d. for $x \in S^{L-r_L}_L$. Hence,
$$\Big|(K+1)K^{L-r-1}\, C^{\omega}_{L,r}\big((\cdot - z)^{-1}\big) - \sum_{y \in S^{L-r_L}_L}\ \sum_{\substack{x \in \Lambda_{L,y} \\ d(0,x)=L-r}} \big\langle \delta_x, (H^{\omega}_{\lambda,L,y} - z)^{-1} \delta_x \big\rangle\Big| = \Big|\sum_{y \in S^{L-r_L}_L}\ \sum_{\substack{x \in \Lambda_{L,y} \\ d(0,x)=L-r}} \big\langle \delta_x, (H^{\omega}_{\lambda,L} - z)^{-1} \delta_{P_y} \big\rangle \big\langle \delta_y, (H^{\omega}_{\lambda,L,y} - z)^{-1} \delta_x \big\rangle\Big|.$$
Using $|\langle \delta_x, (H^{\omega}_{\lambda,L} - z)^{-1} \delta_y \rangle| \le (\Im z)^{-1}$ we get
$$\Big|C^{\omega}_{L,r}\big((\cdot - z)^{-1}\big) - \frac{1}{(K+1)K^{L-r-1}} \sum_{y \in S^{L-r_L}_L}\ \sum_{\substack{x \in \Lambda_{L,y} \\ d(0,x)=L-r}} \big\langle \delta_x, (H^{\omega}_{\lambda,L,y} - z)^{-1} \delta_x \big\rangle\Big| \le \frac{(\Im z)^{-(2-s)}}{(K+1)K^{L-r-1}} \sum_{y \in S^{L-r_L}_L}\ \sum_{\substack{x \in \Lambda_{L,y} \\ d(0,x)=L-r}} \big|\big\langle \delta_y, (H^{\omega}_{\lambda,L,y} - z)^{-1} \delta_x \big\rangle\big|^s. \tag{3.2}$$
Now, observe that the graphs $\Lambda_{L,y}$ are isomorphic to $\tilde\Lambda_{r_L-1}(\tilde y)$ for any $\tilde y \in \mathcal{C}$ such that $d(\partial\mathcal{C}, \tilde y) = r_L - 1$. So, using the independence of the $H^{\omega}_{\lambda,L,y}$ and viewing them as cut-off operators for $H^{\omega}_{\mathcal{C},\lambda}$, we have
$$\lim_{L \to \infty} \frac{1}{|S^{L-r_L}_L|} \sum_{y \in S^{L-r_L}_L} \frac{1}{K^{r_L-r-1}} \sum_{\substack{x \in \Lambda_{L,y} \\ d(0,x)=L-r}} \big\langle \delta_x, (H^{\omega}_{\lambda,L,y} - z)^{-1} \delta_x \big\rangle = \frac{1}{K^{r_L-r-1}} \sum_{\substack{x \in \tilde\Lambda_{r_L-1}(\tilde y) \\ d(\partial\mathcal{C},x)=r}} \mathbb{E}^{\omega}\Big[\big\langle \delta_x, \big(\chi_{\tilde\Lambda_{r_L-1}(\tilde y)} H^{\omega}_{\mathcal{C},\lambda} \chi_{\tilde\Lambda_{r_L-1}(\tilde y)} - z\big)^{-1} \delta_x \big\rangle\Big],$$
which follows from the strong law of large numbers applied to the i.i.d. bounded summands, giving almost sure convergence. Finally, using the symmetry of the tree $\tilde\Lambda_{r_L-1}(\tilde y)$, we conclude that
$$\frac{1}{K^{r_L-r-1}} \sum_{\substack{x \in \tilde\Lambda_{r_L-1}(\tilde y) \\ d(\partial\mathcal{C},x)=r}} \mathbb{E}^{\omega}\Big[\big\langle \delta_x, \big(\chi_{\tilde\Lambda_{r_L-1}(\tilde y)} H^{\omega}_{\mathcal{C},\lambda} \chi_{\tilde\Lambda_{r_L-1}(\tilde y)} - z\big)^{-1} \delta_x \big\rangle\Big] = \mathbb{E}^{\omega}\Big[\big\langle \delta_{x_r}, \big(\chi_{\tilde\Lambda_{r_L-1}(\tilde y)} H^{\omega}_{\mathcal{C},\lambda} \chi_{\tilde\Lambda_{r_L-1}(\tilde y)} - z\big)^{-1} \delta_{x_r} \big\rangle\Big]$$
for some $x_r \in \tilde\Lambda_{r_L-1}(\tilde y)$ with $d(\partial\mathcal{C}, x_r) = r$. Using this in (3.2), together with (1.7), we have
$$\Big|C^{\omega}_{L,r}\big((\cdot - z)^{-1}\big) - \mathbb{E}^{\omega}\Big[\big\langle \delta_{x_r}, \big(\chi_{\tilde\Lambda_{r_L-1}(\tilde y)} H^{\omega}_{\mathcal{C},\lambda} \chi_{\tilde\Lambda_{r_L-1}(\tilde y)} - z\big)^{-1} \delta_{x_r} \big\rangle\Big]\Big| \le C (\Im z)^{-(2-s)}\, K^{r_L-r-1} e^{-\gamma(r_L-r-1)},$$
which goes to zero after choosing $\gamma > \ln K$; this is possible precisely because any rate of decay is achievable in Theorem 1.1. Next, using the resolvent identity between $H^{\omega}_{\mathcal{C},\lambda}$ and $\chi_{\tilde\Lambda_{r_L-1}(\tilde y)} H^{\omega}_{\mathcal{C},\lambda} \chi_{\tilde\Lambda_{r_L-1}(\tilde y)}$ at $x_r$, and using (1.7) again, we have
$$\Big|\mathbb{E}^{\omega}\Big[\big\langle \delta_{x_r}, \big(\chi_{\tilde\Lambda_{r_L-1}(\tilde y)} H^{\omega}_{\mathcal{C},\lambda} \chi_{\tilde\Lambda_{r_L-1}(\tilde y)} - z\big)^{-1} \delta_{x_r} \big\rangle - \big\langle \delta_{x_r}, (H^{\omega}_{\mathcal{C},\lambda} - z)^{-1} \delta_{x_r} \big\rangle\Big]\Big| \le \mathbb{E}\Big[\Big|\big\langle \delta_{x_r}, \big(\chi_{\tilde\Lambda_{r_L-1}(\tilde y)} H^{\omega}_{\mathcal{C},\lambda} \chi_{\tilde\Lambda_{r_L-1}(\tilde y)} - z\big)^{-1} \delta_{\tilde y} \big\rangle \big\langle \delta_{P_{\tilde y}}, (H^{\omega}_{\mathcal{C},\lambda} - z)^{-1} \delta_{x_r} \big\rangle\Big|\Big] \le C (\Im z)^{-(2-s)} e^{-\gamma(r_L-r)}.$$
Combining the above results and letting $L \to \infty$ (so that $r_L \to \infty$), we get
$$\lim_{L \to \infty} C^{\omega}_{L,r}\big((\cdot - z)^{-1}\big) = \mathbb{E}^{\omega}\big[\langle \delta_{x_r}, (H^{\omega}_{\mathcal{C},\lambda} - z)^{-1} \delta_{x_r} \rangle\big].$$
This completes the proof of the theorem. $\square$
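The convergence in Theorem 1.2 can be watched numerically by diagonalizing the cut-off operator for increasing $L$ and recording the normalized eigenvalue count in a fixed interval. This sketch uses the same hypothetical block-potential construction as before; the parameters and names are assumptions, and the trees here are far too small to exhibit the asymptotics sharply.

```python
import numpy as np

def ball(K, L):
    # Lambda_L in the rooted tree where every vertex has K+1 neighbours
    parent, depth = [-1], [0]
    frontier = [0]
    for _ in range(L):
        nxt = []
        for v in frontier:
            for _ in range(K + 1 if v == 0 else K):
                parent.append(v)
                depth.append(depth[v] + 1)
                nxt.append(len(parent) - 1)
        frontier = nxt
    return parent, depth

def normalized_count(K, L, m, lam, I, rng):
    # (1/|Lambda_L|) Tr E_H(I) for one disorder realization
    parent, depth = ball(K, L)
    n = len(parent)
    H = np.zeros((n, n))
    for v in range(1, n):
        H[v, parent[v]] = H[parent[v], v] = 1.0
    om = {}
    for x in range(n):
        y, target = x, (depth[x] // (m + 1)) * (m + 1)  # block root above x
        while depth[y] > target:
            y = parent[y]
        if y not in om:
            om[y] = rng.uniform(-1.0, 1.0)
        H[x, x] += lam * om[y]
    ev = np.linalg.eigvalsh(H)
    return float(np.mean((ev >= I[0]) & (ev <= I[1])))

rng = np.random.default_rng(3)
results = {L: normalized_count(K=2, L=L, m=1, lam=5.0, I=(-1.0, 1.0), rng=rng)
           for L in (3, 5, 7)}
print(results)
```

Because the layer at distance $n$ from the boundary carries a fraction $\approx \frac{K-1}{K} K^{-n}$ of all vertices, the normalized count is dominated by the canopy, which is the mechanism behind the limit (1.8).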
To show the infinite divisibility of the sequence of measures $\{\mu^{\omega,\lambda}_{E,L}\}_L$, first define the measures $\eta^{\omega,\lambda}_{E,L,x}$ for $x \in \Lambda_L$ as
$$\eta^{\omega,\lambda}_{E,L,x}(f) := \mathrm{Tr}\Big(f\Big(|\Lambda_L|\big(\chi_{\Lambda'_{L-N}(x)}\, H^{\omega}_{\lambda,L}\, \chi_{\Lambda'_{L-N}(x)} - E\big)\Big)\Big), \qquad f \in C_c(\mathbb{R}), \tag{3.3}$$
where $N = d(0,x)$.

The following lemma says that the processes $\mu^{\omega,\lambda}_{E,L}$ and $\sum_{x : d(0,x)=l_L} \eta^{\omega,\lambda}_{E,L,x}$ have the same set of limit points in the topology of distributional convergence. We have used the Fourier transform characterization of distributional convergence.

Lemma 3.1.
For $0 < \alpha < 1$, set $l_L = (m+1)\big\lfloor \frac{\alpha L}{m+1} \big\rfloor$. Then, for $f \in C_c(\mathbb{R})$,
$$\lim_{L \to \infty} \mathbb{E}^{\omega}\Big[\Big| e^{\iota \mu^{\omega,\lambda}_{E,L}(f)} - e^{\iota \sum_{x : d(0,x)=l_L} \eta^{\omega,\lambda}_{E,L,x}(f)} \Big|\Big] = 0. \tag{3.4}$$

Proof. Using the decomposition $f = f_1 - f_2 + \iota(f_3 - f_4)$, where the $f_i \in C_c(\mathbb{R})$ are non-negative, it is enough to show (3.4) for non-negative functions; therefore we will assume $f$ is non-negative. Next, using the inequality $|e^{\iota x} - 1| \le |x|$ (pulling out one of the terms inside the modulus leaves us with an expression of the form $|e^{\iota x} - 1|$), we only have to show
$$\lim_{L \to \infty} \mathbb{E}^{\omega}\Big[\Big| \mu^{\omega,\lambda}_{E,L}(f) - \sum_{x : d(0,x)=l_L} \eta^{\omega,\lambda}_{E,L,x}(f) \Big|\Big] = 0.$$
Finally, since linear combinations of the functions $\Im(\cdot - z)^{-1}$, $z \in \mathbb{C}^+$, are dense in $C_0(\mathbb{R})$, it is enough to show
$$\lim_{L \to \infty} \mathbb{E}^{\omega}\Big[\Big| \mu^{\omega,\lambda}_{E,L}\big(\Im(\cdot - z)^{-1}\big) - \sum_{x : d(0,x)=l_L} \eta^{\omega,\lambda}_{E,L,x}\big(\Im(\cdot - z)^{-1}\big) \Big|\Big] = 0. \tag{3.5}$$
Denote $H^{\omega}_{\lambda,L,x} := \chi_{\Lambda'_{L-N}(x)}\, H^{\omega}_{\lambda,L}\, \chi_{\Lambda'_{L-N}(x)}$, write $z_L := E + |\Lambda_L|^{-1} z$, and write $G_L(\cdot,\cdot\,; z_L)$, $G_{L,x}(\cdot,\cdot\,; z_L)$ for the Green's functions of $H^{\omega}_{\lambda,L}$ and $H^{\omega}_{\lambda,L,x}$, respectively. Then
$$\Big| \mu^{\omega,\lambda}_{E,L}\big(\Im(\cdot - z)^{-1}\big) - \sum_{x:d(0,x)=l_L} \eta^{\omega,\lambda}_{E,L,x}\big(\Im(\cdot - z)^{-1}\big) \Big| = \frac{1}{|\Lambda_L|}\Big| \sum_{y \in \Lambda_L} \Im G_L(y,y;z_L) - \sum_{x:d(0,x)=l_L}\ \sum_{y \in \Lambda'_L(x)} \Im G_{L,x}(y,y;z_L) \Big|.$$
The vertices $y$ with $d(0,y) < l_L$ contribute, after taking expectation, at most (using the Wegner estimate)
$$\frac{C}{|\Lambda_L|}\, \#\{y : d(0,y) < l_L\} = O\big(K^{-(1-\alpha)L}\big) \xrightarrow{L \to \infty} 0.$$
Similarly, the vertices $y \in \Lambda'_L(x)$ with $d(x,y) < l_L$ contribute at most $O\big(K^{(2\alpha-1)L}\big)$, which goes to zero whenever $\alpha < 1/2$ (which we assume from now on). For the third term, consisting of the vertices $y \in \Lambda'_L(x)$ with $d(x,y) \ge l_L$, we use the resolvent equation and get ($P_x$ denotes the neighbouring vertex such that $d(0,x) = d(0,P_x) + 1$; i.e., the vertex previous to $x$):
$$\frac{1}{|\Lambda_L|} \sum_{x:d(0,x)=l_L}\ \sum_{\substack{y \in \Lambda'_L(x) \\ d(x,y) \ge l_L}} \mathbb{E}\big[|G_L(y,y;z_L) - G_{L,x}(y,y;z_L)|\big] = \frac{1}{|\Lambda_L|} \sum_{x:d(0,x)=l_L}\ \sum_{\substack{y \in \Lambda'_L(x) \\ d(x,y) \ge l_L}} \mathbb{E}\big[|G_L(y,P_x;z_L)\, G_{L,x}(x,y;z_L)|\big]$$
$$\le \frac{1}{|\Lambda_L|} \sum_{x:d(0,x)=l_L}\ \sum_{\substack{y \in \Lambda'_L(x) \\ d(x,y) \ge l_L}} |\Lambda_L|^{2-s} (\Im z)^{-(2-s)}\, \mathbb{E}\big[|G_{L,x}(x,y;z_L)|^s\big] \le |\Lambda_L|^{1-s}\, (K+1)K^{l_L-1}\, (\Im z)^{-(2-s)} \sum_{n=l_L}^{\infty} K^n e^{n(\tilde C - \frac{s}{m}\ln\lambda)} = O\Big(e^{(1-s+2\alpha)L\ln K + \alpha L(\tilde C - \frac{s}{m}\ln\lambda)}\Big), \tag{3.6}$$
where we used $|G_L(x,y;z_L)| \le |\Lambda_L|(\Im z)^{-1}$, and the last expression comes from the proof of Theorem 1.1 (see (3.1)). For
$$\frac{m\big((1-s+2\alpha)\ln K + \alpha \tilde C\big)}{s\alpha} < \ln\lambda,$$
the bound (3.6) goes to zero as $L \to \infty$. This completes the proof of (3.5), and hence the lemma. $\square$

Before attempting to prove the main result, we need to establish some results on the limit of the process.

Lemma 3.2. Let $L_n = (m+1)n$ for $n \in \mathbb{N}$. Then, for any bounded interval $I$,
$$\lim_{n \to \infty} \mathbb{E}^{\omega}\big[\mu^{\omega,\lambda}_{E,L_n}(\chi_I)\big] = K\, n_{\mathcal{C},\lambda}(E)\, |I|. \tag{3.7}$$
Moreover, given any bounded interval $I$, there exists a sub-sequence $\{\tilde L_m\}_m$ of $\{L_n\}_n$ such that $\big\{\sum_{d(0,x)=l_{\tilde L_m}} \mathbb{P}\big[\eta^{\omega,\lambda}_{E,\tilde L_m,x}(\chi_I) = k\big]\big\}_{m \in \mathbb{N}}$ converges for each $1 \le k \le M$. Here $l_L$ is the sequence defined in Lemma 3.1.

Proof. Using the Wegner estimate (2.9) on (1.5), we have $\mathbb{E}^{\omega}[\mu^{\omega,\lambda}_{E,L}(\chi_I)] \le C|I|$ for any bounded interval $I$. So the measure associated to the linear functional $\mathbb{E}^{\omega}[\mu^{\omega,\lambda}_{E,L}(\cdot)]$ is absolutely continuous with bounded density. From the definition of $\mu^{\omega,\lambda}_{E,L}$ (assume $L$ is divisible by $m+1$),
$$\mathbb{E}^{\omega}\big[\mu^{\omega,\lambda}_{E,L}(\chi_I)\big] = \mathbb{E}\big[\mathrm{Tr}\big(E_{H^{\omega}_{\lambda,L}}\big(E + |\Lambda_L|^{-1} I\big)\big)\big] = \sum_{x \in \Lambda_L} \mathbb{E}\big[\big\langle \delta_x, E_{H^{\omega}_{\lambda,L}}\big(E + |\Lambda_L|^{-1} I\big)\, \delta_x \big\rangle\big]$$
$$= \sum_{n=0}^{L-1} (K+1)K^{L-n-1}\, \mathbb{E}\big[\big\langle \delta_{x_n}, E_{H^{\omega}_{\lambda,L}}\big(E + |\Lambda_L|^{-1} I\big)\, \delta_{x_n} \big\rangle\big] + \mathbb{E}\big[\big\langle \delta_0, E_{H^{\omega}_{\lambda,L}}\big(E + |\Lambda_L|^{-1} I\big)\, \delta_0 \big\rangle\big], \tag{3.8}$$
where $x_n \in \Lambda_L$ is a vertex such that $d(x_n, \partial\Lambda_L) = n$ (for any $n$ there are multiple such vertices).
Using (2.8) on (3.8) for $n > L_0$, where $L_0 = \lfloor \alpha L \rfloor$ for some $0 < \alpha < 1$, we get
$$\mathbb{E}^{\omega}\big[\mu^{\omega,\lambda}_{E,L}(\chi_I)\big] = \sum_{n=0}^{L_0} (K+1)K^{L-n-1}\, \mathbb{E}\big[\big\langle \delta_{x_n}, E_{H^{\omega}_{\lambda,L}}\big(E + |\Lambda_L|^{-1} I\big)\, \delta_{x_n} \big\rangle\big] + O\big(K^{-L_0}|I|\big). \tag{3.9}$$
Therefore, to obtain the limit (3.7), we only need to compute the limit of the RHS above as $L \to \infty$. Using the density of linear combinations of $\Im(\cdot - z)^{-1}$, $z \in \mathbb{C}^+$, in $C_0(\mathbb{R})$, we only have to find the limit of
$$\frac{K-1}{K} \sum_{n=0}^{L_0} K^{-n}\, \mathbb{E}\big[\Im \big\langle \delta_{x_n}, (H^{\omega}_{\lambda,L} - z_L)^{-1} \delta_{x_n} \big\rangle\big], \qquad z_L = E + |\Lambda_L|^{-1} z, \tag{3.10}$$
as $L \to \infty$, since as $L \to \infty$ we can replace $\frac{(K+1)K^{L-n-1}}{|\Lambda_L|}$ by $\frac{K-1}{K} K^{-n}$. This can be done because the expectation is bounded, which follows from the proof of Lemma 2.1.

Now, we continue as in the proof of Theorem 1.2. Define
$$\tilde H^{\omega}_{\lambda,L} = \chi_{\Lambda_m(0)}\, H^{\omega}_{\lambda,L}\, \chi_{\Lambda_m(0)} + \sum_{d(0,y)=m+1} \chi_{\Lambda_{L,y}}\, H^{\omega}_{\lambda,L}\, \chi_{\Lambda_{L,y}},$$
where $\Lambda_{L,y} = \{x \in \Lambda_L : y \prec x\}$. Using the resolvent equation between $H^{\omega}_{\lambda,L}$ and $\tilde H^{\omega}_{\lambda,L}$, for any $x$ with $d(x, \partial\Lambda_L) \le L_0$,
$$\big\langle \delta_x, (H^{\omega}_{\lambda,L} - z_L)^{-1} \delta_x \big\rangle = \big\langle \delta_x, (H^{\omega}_{\lambda,L,A_x} - z_L)^{-1} \delta_x \big\rangle - \big\langle \delta_x, (H^{\omega}_{\lambda,L} - z_L)^{-1} \delta_{P_{A_x}} \big\rangle \big\langle \delta_{A_x}, (H^{\omega}_{\lambda,L,A_x} - z_L)^{-1} \delta_x \big\rangle, \tag{3.11}$$
where $A_x$ is the unique vertex satisfying $d(0, A_x) = m+1$ and $x \in \Lambda_{L,A_x}$, $H^{\omega}_{\lambda,L,y} = \chi_{\Lambda_{L,y}}\, H^{\omega}_{\lambda,L}\, \chi_{\Lambda_{L,y}}$, and $P_{A_x} \in \Lambda_m(0)$ is such that $d(A_x, P_{A_x}) = 1$. Using (3.11) in (3.10) gives
$$\frac{K-1}{K} \sum_{n=0}^{L_0} K^{-n}\, \mathbb{E}\big[\Im \big\langle \delta_{x_n}, (H^{\omega}_{\lambda,L} - z_L)^{-1} \delta_{x_n} \big\rangle\big] = \frac{K-1}{K} \sum_{n=0}^{L_0} K^{-n}\, \mathbb{E}\big[\Im \big\langle \delta_{x_n}, (H^{\omega}_{\lambda,L,A_{x_n}} - z_L)^{-1} \delta_{x_n} \big\rangle\big] - \frac{K-1}{K} \sum_{n=0}^{L_0} K^{-n}\, \mathbb{E}\big[\Im\big(\big\langle \delta_{x_n}, (H^{\omega}_{\lambda,L} - z_L)^{-1} \delta_{P_{A_{x_n}}} \big\rangle \big\langle \delta_{A_{x_n}}, (H^{\omega}_{\lambda,L,A_{x_n}} - z_L)^{-1} \delta_{x_n} \big\rangle\big)\big]. \tag{3.12}$$
Embedding the graph $\Lambda_{L,A_x}$ into $\mathcal{C}$ and using the resolvent identity, we get
$$\big\langle \delta_x, (H^{\omega}_{\lambda,L,A_x} - z_L)^{-1} \delta_x \big\rangle = \big\langle \delta_x, (H^{\omega}_{\mathcal{C},\lambda} - z_L)^{-1} \delta_x \big\rangle + \big\langle \delta_x, (H^{\omega}_{\lambda,L,A_x} - z_L)^{-1} \delta_{A_x} \big\rangle \big\langle \delta_{P_{A_x}}, (H^{\omega}_{\mathcal{C},\lambda} - z_L)^{-1} \delta_x \big\rangle,$$
so that
$$\frac{K-1}{K} \sum_{n=0}^{L_0} K^{-n}\, \mathbb{E}\big[\Im \big\langle \delta_{x_n}, (H^{\omega}_{\lambda,L} - z_L)^{-1} \delta_{x_n} \big\rangle\big] = \frac{K-1}{K} \sum_{n=0}^{L_0} K^{-n}\, \mathbb{E}\big[\Im \big\langle \delta_{x_n}, (H^{\omega}_{\mathcal{C},\lambda} - z_L)^{-1} \delta_{x_n} \big\rangle\big] + \frac{K-1}{K} \sum_{n=0}^{L_0} K^{-n}\, \mathbb{E}\big[\Im\big(\big\langle \delta_{x_n}, (H^{\omega}_{\lambda,L,A_{x_n}} - z_L)^{-1} \delta_{A_{x_n}} \big\rangle \big\langle \delta_{P_{A_{x_n}}}, (H^{\omega}_{\mathcal{C},\lambda} - z_L)^{-1} \delta_{x_n} \big\rangle\big)\big] - \frac{K-1}{K} \sum_{n=0}^{L_0} K^{-n}\, \mathbb{E}\big[\Im\big(\big\langle \delta_{x_n}, (H^{\omega}_{\lambda,L} - z_L)^{-1} \delta_{P_{A_{x_n}}} \big\rangle \big\langle \delta_{A_{x_n}}, (H^{\omega}_{\lambda,L,A_{x_n}} - z_L)^{-1} \delta_{x_n} \big\rangle\big)\big].$$
Using the exponential decay estimate (3.1), the second and third terms are bounded by
$$\frac{K-1}{K} \sum_{n=0}^{L_0} K^{-n} \Big(\frac{\tilde C_{m,\mu}}{\lambda^s}\Big)^{\frac{L-L_0}{m}} |\Lambda_L|^{1-s} (\Im z)^{-(2-s)} \le O\Big(e^{\frac{L}{m}\left(m(2-s)\ln K + (1-\alpha)(\ln \tilde C_{m,\mu} - s\ln\lambda)\right)}\Big).$$
Hence, for
$$\frac{m(2-s)\ln K + (1-\alpha)\ln \tilde C_{m,\mu}}{s(1-\alpha)} < \ln\lambda,$$
we get
$$\lim_{L \to \infty} \mathbb{E}\big[\mu^{\omega,\lambda}_{E,L}\big(\Im(\cdot - z)^{-1}\big)\big] = \lim_{L \to \infty} \frac{K-1}{K} \sum_{n=0}^{\lfloor \alpha L \rfloor} K^{-n}\, \mathbb{E}\big[\Im \big\langle \delta_{x_n}, (H^{\omega}_{\mathcal{C},\lambda} - z_L)^{-1} \delta_{x_n} \big\rangle\big].$$
But the RHS converges to $K\, n_{\mathcal{C},\lambda}(E)$, where $n_{\mathcal{C},\lambda}(E)$ is the density of the measure $n_{\mathcal{C},\lambda}$ (the density of states measure; see Theorem 1.2 for the definition) at $E$.

For the second assertion of the lemma, using Lemma 3.1 and (3.7), we have
$$\lim_{n \to \infty} \sum_{d(0,x)=l_{L_n}} \mathbb{E}\big[\eta^{\omega,\lambda}_{E,L_n,x}(\chi_I)\big] = \lim_{n \to \infty} \mathbb{E}\big[\mu^{\omega,\lambda}_{E,L_n}(\chi_I)\big] = K\, n_{\mathcal{C},\lambda}(E)\, |I|.$$
Using
$$0 \le \sum_{d(0,x)=l_L} \mathbb{P}\big[\eta^{\omega,\lambda}_{E,L,x}(\chi_I) = k\big] \le \frac{1}{k} \sum_{d(0,x)=l_L} \mathbb{E}\big[\eta^{\omega,\lambda}_{E,L,x}(\chi_I)\big],$$
we can find a subsequence $\{\tilde L_m\}_m$ of $\{L_n\}_{n \in \mathbb{N}}$ such that $\big\{\sum_{d(0,x)=l_{\tilde L_m}} \mathbb{P}\big[\eta^{\omega,\lambda}_{E,\tilde L_m,x}(\chi_I) = k\big]\big\}_m$ converges, with limit in the interval $\big[0, \frac{K}{k}\, n_{\mathcal{C},\lambda}(E)\, |I|\big]$. $\square$

Proof of Theorem 1.3. To prove the theorem, all we need to do is to compute
$$\lim_{L \to \infty} \mathbb{E}^{\omega}\big[e^{\iota t \mu^{\omega,\lambda}_{E,L}(\chi_I)}\big],$$
where $\chi_I$ is the characteristic function of a bounded interval $I \subset \mathbb{R}$. Notice that, for $\chi_I$, the random variables $\mu^{\omega,\lambda}_{E,L}(\chi_I)$ and $\{\eta^{\omega,\lambda}_{E,L,x}(\chi_I)\}_x$ are integer valued. Using Lemma 3.1, we have
$$\lim_{L \to \infty} \mathbb{E}^{\omega}\big[e^{\iota t \mu^{\omega,\lambda}_{E,L}(\chi_I)}\big] = \lim_{L \to \infty} \mathbb{E}^{\omega}\big[e^{\iota t \sum_{d(0,x)=l_L} \eta^{\omega,\lambda}_{E,L,x}(\chi_I)}\big] = \lim_{L \to \infty} \prod_{d(0,x)=l_L} \mathbb{E}^{\omega}\big[e^{\iota t \eta^{\omega,\lambda}_{E,L,x}(\chi_I)}\big] = \lim_{L \to \infty} \exp\Big(\sum_{d(0,x)=l_L} \ln\Big(\mathbb{E}^{\omega}\big[e^{\iota t \eta^{\omega,\lambda}_{E,L,x}(\chi_I)} - 1\big] + 1\Big)\Big). \tag{3.13}$$
The second equality follows from the independence of $\{\eta^{\omega,\lambda}_{E,L,x}\}_{d(0,x)=l_L}$ (this is where the fact that $l_L$ is a multiple of $m+1$ is used, so that each of the projections $P_{p_j}$ has support in at most one of the $\{\Lambda'_{L-l_L}(x)\}_x$). Using $|e^{\iota x} - 1| \le |x|$ for real $x$, we have
$$\Big|\mathbb{E}^{\omega}\big[e^{\iota t \eta^{\omega,\lambda}_{E,L,x}(\chi_I)} - 1\big]\Big| \le \mathbb{E}^{\omega}\Big[\big|e^{\iota t \eta^{\omega,\lambda}_{E,L,x}(\chi_I)} - 1\big|\Big] \le |t|\, \mathbb{E}^{\omega}\big[\eta^{\omega,\lambda}_{E,L,x}(\chi_I)\big], \tag{3.14}$$
and using the Wegner estimate (2.9) for the operator $\chi_{\Lambda'_{L-l_L}(x)}\, H^{\omega}_{\lambda,L}\, \chi_{\Lambda'_{L-l_L}(x)}$ (the measure $\eta^{\omega,\lambda}_{E,L,x}$ is defined using this operator; see (3.3)) we get
$$\Big|\mathbb{E}^{\omega}\big[e^{\iota t \eta^{\omega,\lambda}_{E,L,x}(\chi_I)} - 1\big]\Big| \le |t|\, \mathbb{E}^{\omega}\big[\eta^{\omega,\lambda}_{E,L,x}(\chi_I)\big] \le C|t||I|\, \frac{|\Lambda'_{L-l_L}(x)|}{|\Lambda_L|} \xrightarrow{L \to \infty} 0.$$
(3.15)

From the expressions (3.14) and (3.15), and the fact that $|\ln(1+w) - w| \le C|w|^2$ for $|w| \ll 1$, we have
$$\ln\Big(\mathbb{E}^{\omega}\big[e^{\iota t \eta^{\omega,\lambda}_{E,L,x}(\chi_I)} - 1\big] + 1\Big) = \mathbb{E}^{\omega}\big[e^{\iota t \eta^{\omega,\lambda}_{E,L,x}(\chi_I)} - 1\big] + O\Big(\Big|\mathbb{E}^{\omega}\big[e^{\iota t \eta^{\omega,\lambda}_{E,L,x}(\chi_I)} - 1\big]\Big|^2\Big) = \mathbb{E}^{\omega}\big[e^{\iota t \eta^{\omega,\lambda}_{E,L,x}(\chi_I)} - 1\big] + O\Big(|t|^2 |I|^2 \Big(\frac{|\Lambda'_{L-l_L}(x)|}{|\Lambda_L|}\Big)^2\Big). \tag{3.16}$$
Using (3.16) and the fact that $\sum_{d(0,x)=l_L} \big(\frac{|\Lambda'_{L-l_L}(x)|}{|\Lambda_L|}\big)^2 \xrightarrow{L \to \infty} 0$, we get
$$\lim_{L \to \infty} \mathbb{E}^{\omega}\big[e^{\iota t \mu^{\omega,\lambda}_{E,L}(\chi_I)}\big] = \lim_{L \to \infty} \exp\Big(\sum_{d(0,x)=l_L} \ln\Big(\mathbb{E}^{\omega}\big[e^{\iota t \eta^{\omega,\lambda}_{E,L,x}(\chi_I)} - 1\big] + 1\Big)\Big) = \lim_{L \to \infty} \exp\Big(\sum_{d(0,x)=l_L} \mathbb{E}^{\omega}\big[e^{\iota t \eta^{\omega,\lambda}_{E,L,x}(\chi_I)} - 1\big]\Big). \tag{3.17}$$
Focusing on the exponent in the last expression, we have
$$\sum_{d(0,x)=l_L} \mathbb{E}^{\omega}\big[e^{\iota t \eta^{\omega,\lambda}_{E,L,x}(\chi_I)} - 1\big] = \sum_{d(0,x)=l_L} \sum_{k=1}^{\infty} \big(e^{\iota t k} - 1\big)\, \mathbb{P}\big[\eta^{\omega,\lambda}_{E,L,x}(\chi_I) = k\big] = \sum_{k=1}^{M} \big(e^{\iota t k} - 1\big)\Big(\sum_{d(0,x)=l_L} \mathbb{P}\big[\eta^{\omega,\lambda}_{E,L,x}(\chi_I) = k\big]\Big) + R(L), \tag{3.18}$$
where
$$R(L) = \sum_{d(0,x)=l_L} \sum_{k=M+1}^{\infty} \big(e^{\iota t k} - 1\big)\, \mathbb{P}\big[\eta^{\omega,\lambda}_{E,L,x}(\chi_I) = k\big],$$
and
$$|R(L)| \le 2 \sum_{d(0,x)=l_L} \sum_{k=M+1}^{\infty} \mathbb{P}\big[\eta^{\omega,\lambda}_{E,L,x}(\chi_I) = k\big] \le \big(\pi \|\rho\|_{\infty} |I|\big)^2 \sum_{d(0,x)=l_L} \Big(\frac{|\Lambda'_{L-l_L}(x)|}{|\Lambda_L|}\Big)^2 \xrightarrow{L \to \infty} 0. \tag{3.19}$$
The last bound follows from the Minami estimate (2.12). From Lemma 3.2, we have a sequence $\{L_n\}_{n \in \mathbb{N}}$ such that, for $k \le M$,
$$\lim_{n \to \infty} \sum_{d(0,x)=l_{L_n}} \mathbb{P}\big[\eta^{\omega,\lambda}_{E,L_n,x}(\chi_I) = k\big] = p_k(I). \tag{3.20}$$
Combining (3.20), (3.19), and (3.17) gives
$$\lim_{n \to \infty} \mathbb{E}^{\omega}\big[e^{\iota t \mu^{\omega,\lambda}_{E,L_n}(\chi_I)}\big] = \exp\Big(\sum_{k=1}^{M} \big(e^{\iota t k} - 1\big)\, p_k(I)\Big). \tag{3.21}$$
This completes the proof of the theorem. $\square$

Acknowledgement

I thank my guide Prof. M. Krishna for the guidance and support he has given me.
I thank Dhriti Ranjan Dolai for being the understanding and encouraging person he is, and Anish Mallick for all the help he has given at every stage of this work; it would have been impossible to complete this work without him.

References

[1] Michael Aizenman and Stanislav Molchanov. Localization at large disorder and at extreme energies: an elementary derivation. Communications in Mathematical Physics, 157(2):245–278, 1993.

[2] Michael Aizenman, Jeffrey H. Schenker, Roland M. Friedrich, and Dirk Hundertmark. Finite-volume fractional-moment criteria for Anderson localization. Communications in Mathematical Physics, 224(1):219–253, 2001.

[3] Michael Aizenman, Robert Sims, and Simone Warzel. Stability of the absolutely continuous spectrum of random Schrödinger operators on tree graphs. Probability Theory and Related Fields, 136(3):363–394, 2006.

[4] Michael Aizenman and Simone Warzel. The canopy graph and level statistics for random operators on trees. Mathematical Physics, Analysis and Geometry, 9(4):291–333, 2006.

[5] Artur Avila, Yoram Last, and Barry Simon. Bulk universality and clock spacing of zeros for ergodic Jacobi matrices with absolutely continuous spectrum. Analysis & PDE, 3(1):81–108, 2010.

[6] John J. Benedetto and Wojciech Czaja. Integration and Modern Analysis. Springer Science & Business Media, 2010.

[7] Jean-Michel Combes, François Germinet, and Abel Klein. Generalized eigenvalue-counting estimates for the Anderson model. Journal of Statistical Physics, 135(2):201, 2009.

[8] Jean-Michel Combes, François Germinet, and Abel Klein. Poisson statistics for eigenvalues of continuum random Schrödinger operators. Analysis & PDE, 3(1):49–80, 2010.

[9] Jean-Michel Combes, Peter D. Hislop, and Frédéric Klopp. An optimal Wegner estimate and its application to the global continuity of the integrated density of states for random Schrödinger operators. Duke Mathematical Journal, 140(3):469–498, 2007.

[10] J.-M. Combes and P. D. Hislop.
Localization for some continuous, random Hamiltonians in d-dimensions. Journal of Functional Analysis, 124(1):149–180, 1994.

[11] Dhriti Ranjan Dolai and M. Krishna. Poisson statistics for Anderson model with singular randomness. J. Ramanujan Math. Soc., 30(3):251–266, 2015.

[12] Richard Froese, David Hasler, and Wolfgang Spitzer. Absolutely continuous spectrum for the Anderson model on a tree: a geometric proof of Klein's theorem. Communications in Mathematical Physics, 269(1):239–257, 2007.

[13] Jürg Fröhlich, Fabio Martinelli, Elisabetta Scoppola, and Thomas Spencer. Constructive proof of localization in the Anderson tight binding model. Communications in Mathematical Physics, 101(1):21–46, 1985.

[14] Leander Geisinger. Poisson eigenvalue statistics for random Schrödinger operators on regular graphs. Annales Henri Poincaré, 16:1779–1806, 2015.

[15] François Germinet and Frédéric Klopp. Spectral statistics for the discrete Anderson model in the localized regime. arXiv preprint arXiv:1006.4427, 2010.

[16] Peter D. Hislop and M. Krishna. Eigenvalue statistics for random Schrödinger operators with non rank one perturbations. Communications in Mathematical Physics, 340(1):125–143, 2015.

[17] Rowan Killip and Fumihiko Nakano. Eigenfunction statistics in the localized Anderson model. Annales Henri Poincaré, 8:27–36, 2007.

[18] Abel Klein. Extended states in the Anderson model on the Bethe lattice. Advances in Mathematics, 133(1):163–184, 1998.

[19] Shinichi Kotani and Fumihiko Nakano. Level statistics of one-dimensional Schrödinger operators with random decaying potential. Preprint, 2012.

[20] Anish Mallick and Dhriti Ranjan Dolai. Spectral statistics for one dimensional Anderson model with unbounded but decaying potential. arXiv preprint arXiv:1602.02986, 2016.

[21] Nariyuki Minami. Local fluctuation of the spectrum of a multidimensional Anderson tight binding model.
Communications in Mathematical Physics, 177(3):709–725, 1996.

[22] S. A. Molčanov. The local structure of the spectrum of the one-dimensional Schrödinger operator. Communications in Mathematical Physics, 78(3):429–446, 1981.

[23] Michael Reed and Barry Simon.