Local elliptic law
Johannes Alt ∗ University of Geneva, Courant Institute
Torben Krüger † University of Copenhagen
Abstract
The empirical eigenvalue distribution of the elliptic random matrix ensemble tends to the uniform measure on an ellipse in the complex plane as its dimension tends to infinity. We show this convergence on all mesoscopic scales slightly above the typical eigenvalue spacing in the bulk spectrum with an optimal convergence rate. As a corollary we obtain complete delocalisation for the corresponding eigenvectors in any basis.
1 Introduction

The empirical spectral distribution (ESD) of an n × n random matrix X = (x_ij) is typically well approximated by a deterministic measure as n tends to infinity. For Hermitian matrices this measure is supported on the real line. Wigner matrices with independent and identically distributed (i.i.d.) entries above the diagonal and x_ij = x̄_ji are the basic representatives of this symmetry type. Their asymptotic spectral density is the celebrated semicircle law [57]. In contrast, the spectrum of non-Hermitian random matrices concentrates on an area of the complex plane. Their representatives are matrices whose entries x_ij are i.i.d. without any symmetry constraints. The circular law [10, 32, 54] asserts that the limiting density for such matrices is the uniform distribution on a disk.

The elliptic ensemble, for which all entry pairs (x_ij, x_ji) with i < j are i.i.d., naturally interpolates between the representatives of these two symmetry types. In particular, any linear combination of independent Wigner matrices and non-Hermitian matrices with i.i.d. entries has this property. As the name suggests, the ESD of such matrices converges to the uniform distribution on an ellipse in the complex plane. This elliptic law was first established by Girko under a bounded density assumption on the entries [33, 34]. It was extended by Naumov in [47], assuming only finite fourth moments, and subsequently by Nguyen and O'Rourke in [49], relying solely on finite variances of the entries. In case the entries of the elliptic random matrix X are Gaussian, its joint eigenvalue distribution and limiting ESD are explicitly computable [14, 39, 41].

Elliptic random matrices are commonly used to model the dynamics of large neural networks [21, 45], where the reciprocal connection between neurons, modelled by Cov(x_ij, x_ji) ≠ 0, is overrepresented [53, 56]. They also play a similar role in the analysis of the stability-complexity relationship within complex ecosystems [31].
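For orientation, the elliptic law described above is easy to observe with a few lines of code. The following sketch is our illustration, not part of the paper's argument: it uses the real Gaussian case, built (as suggested by the interpolation remark above) as a linear combination of a symmetric and an antisymmetric Ginibre matrix, and it checks the entry correlation E[x_ij x_ji] = ̺/n as well as that the spectrum fills the ellipse with semi-axes 1 + ̺ and 1 − ̺. The dimension n = 300, the value ̺ = 0.5 and the tolerance factor 1.1 are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 300, 0.5
g, h = rng.standard_normal((n, n)), rng.standard_normal((n, n))
w_sym = (g + g.T) / np.sqrt(2)    # Wigner part:        E[w_ij w_ji] = +1
w_asym = (h - h.T) / np.sqrt(2)   # antisymmetric part: E[w_ij w_ji] = -1
# mixing weights give E[x_ij x_ji] = rho/n and E[x_ij^2] = 1/n
X = (np.sqrt((1 + rho) / 2) * w_sym + np.sqrt((1 - rho) / 2) * w_asym) / np.sqrt(n)

off = ~np.eye(n, dtype=bool)
emp_cov = (X * X.T)[off].mean()   # empirical E[x_ij x_ji], should be ~ rho/n
ev = np.linalg.eigvals(X)
# fraction of eigenvalues inside a slightly enlarged ellipse E_rho
inside = ev.real**2 / (1 + rho)**2 + ev.imag**2 / (1 - rho)**2 <= 1.1
print(emp_cov * n, inside.mean())
```

At these sizes essentially all eigenvalues already lie inside the (slightly enlarged) ellipse, in line with the elliptic law.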
Universal power law decay for the long time asymptotics of a system of differential equations, critically coupled through a random elliptic connectivity matrix, has been shown in [46] for the i.i.d. Gaussian model and in [26] for the general non-Gaussian model without assuming identical distributions.

Date: February 8, 2021
Keywords: local law, eigenvector delocalisation, elliptic ensemble.
MSC2010 Subject Classifications: 60B20, 15B52.
∗ This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 895698, from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 715539 RandMat) and from the Swiss National Science Foundation through the NCCR SwissMAP grant. These are gratefully acknowledged. Email: [email protected]
† Financial support from Novo Nordisk Fonden Project Grant 0064428 & VILLUM FONDEN via the QMATH Centre of Excellence (Grant No. 10059) and Young Investigator Award (Grant No. 29369) is gratefully acknowledged. Email: [email protected]

It is a hallmark of random matrix theory that the approximation of the eigenvalue process by its limiting density remains valid on all mesoscopic scales. The number of eigenvalues in a ball concentrates around the value predicted by the density as long as its radius stays well above the typical eigenvalue spacing. Such a local law is a signature of the logarithmic repulsion between eigenvalues and is in stark contrast to the behaviour of the Poisson point process, for which the independence between point positions leads to much larger fluctuations.
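This mesoscopic counting statement can be probed numerically. The sketch below is our illustration, not from the paper: it samples a real Gaussian elliptic matrix (same construction as above, an assumption), counts eigenvalues in a small disk of radius n^{−1/4} around a bulk point, which is well above the typical two-dimensional spacing n^{−1/2}, and compares with the prediction n σ_̺(ζ_0) πr² from the uniform density 1/(π(1 − ̺²)) on the ellipse. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho = 1000, 0.5
g, h = rng.standard_normal((n, n)), rng.standard_normal((n, n))
X = (np.sqrt((1 + rho) / 2) * (g + g.T) / np.sqrt(2)
     + np.sqrt((1 - rho) / 2) * (h - h.T) / np.sqrt(2)) / np.sqrt(n)
ev = np.linalg.eigvals(X)

zeta0 = 0.2 + 0.1j     # bulk point, well inside the ellipse
r = n ** -0.25         # mesoscopic radius: n^{-1/2} << r << 1
count = int(np.sum(np.abs(ev - zeta0) < r))
# uniform density on the ellipse: sigma_rho = 1/(pi (1 - rho^2))
expected = n * (np.pi * r**2) / (np.pi * (1 - rho**2))
print(count, round(expected, 1))
```

The observed count concentrates tightly around the prediction, much more tightly than for i.i.d. (Poisson) points with the same intensity.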
The first local laws on the optimal spectral scale in the bulk appeared in the context of Wigner matrices [28, 30] (see [29] for a historical account). Since then they have been established for a wide range of self-adjoint models, including random matrices with deterministic deformations [37, 42], variance profiles [3], general decaying correlations [4, 27], heavy tailed entries [1, 16], band matrices [19, 22], sparse random graphs [12, 13, 23], invariant ensembles [17] as well as specific polynomials [9, 25] and rational functions [24] in several random matrices.

In the non-Hermitian setup local laws were first established for matrices with i.i.d. entries [18, 59]. They were extended to ensembles with independent non-identically distributed entries [5, 6], ensembles with decaying correlations [8], products of independent matrices [36, 48] and to matrices that are either multiplied by independent Haar unitaries on the left and right [11] or by a matrix with i.i.d. entries on one side [58]. In all these cases the spectral density becomes radially symmetric in the limit of large dimension. Control on the fluctuation of the ESD down to local scales is also a vital ingredient in spectral universality proofs [20, 55].

In this work we establish the local law in the spectral bulk of elliptic n × n random matrices X. The spectral resolution and convergence rates are optimal, up to factors n^{o(1)}. As a corollary we show complete delocalisation of the eigenvectors of X in any basis. For matrices with independent entries similar results have been proved in [43, 44, 51], for matrices with decaying correlations in [8] and for a model with elliptic type correlations and non-random imaginary part in [52].

Our proof relies on Girko's Hermitization trick, i.e. on expressing the spectral distribution of X in terms of the eigenvalues of the ζ-dependent family of Hermitian matrices

    H^ζ := [[ 0 , X − ζ ], [ (X − ζ)^* , 0 ]],   ζ ∈ C,    (1.1)

at the origin. Optimal control on the resolvent G(ζ, η) := (H^ζ − iη)^{−1} for η ≫ n^{−1}, in combination with a bound on the smallest singular value of X − ζ, which we import from [49], allows us to infer the local law for X. The Hermitization H^ζ belongs to a general class of self-adjoint random matrices with correlated entries, for which it was shown in [27] that G = G(ζ, η) satisfies the non-linear matrix Dyson equation (cf. (5.7)) with solution M = M(ζ, η) in the limit n → ∞. In contrast to [27], however, the equation associated to H^ζ has a structural instability on local scales η ≪ 1. We treat this instability by projecting the Dyson equation onto a codimension one subspace to conclude G − M → 0. A similar strategy has been used in [8]. In contrast to previous works we establish the convergence ⟨x, (G − M)y⟩ → 0 uniformly in deterministic vectors x, y without performing an elaborate separate step (see e.g. [15, 40]) and without tracking a family of high probability sets associated to a large number of vectors for which the convergence holds (see e.g. [27, 37]). Compared to the general settings in the latter two references, here this simplification is achievable because of the simple structure within both the correlations and the expectation values of the entries of H^ζ. We reduce the stability analysis of the matrix Dyson equation to a finite dimensional problem by taking partial traces of G − M and at the same time treat all directions simultaneously by working with isotropic L^p-norms directly on the space of random matrices. This strategy streamlines the proof of the local law for H^ζ and is generalisable to random matrices with block correlation structures. More precisely, it is applicable to Kronecker random matrices (see [7] for a definition) with constant variance and expectation profiles, i.e. for a_i = a and s^µ_ij = s^µ, t^µ_ij = t^µ in [7, eq. (2.1) and eq. (2.3)], respectively.

2 Main results

In this section, we state our assumptions and the main results.

2.1 Assumptions
Throughout the paper, let ̺ ∈ (−1, 1) and µ ∈ [0, 1] be fixed. Let ξ_1, ξ_2 and ξ_3 be complex random variables such that

    E ξ_j = 0,   E |ξ_j|^ν < ∞,   E |ξ_3|² = 1,
    E (Re ξ_k)² = µ,   E (Im ξ_k)² = 1 − µ,
    E [Re ξ_1 Re ξ_2] = µ̺,   E [Im ξ_1 Im ξ_2] = −(1 − µ)̺,   E [Re ξ_k Im ξ_l] = 0

for all k, l ∈ {1, 2}, j ∈ {1, 2, 3} and ν ∈ N. Let X = (x_ij)_{i,j=1}^n ∈ C^{n×n} be a random matrix such that
• {(x_ij, x_ji) : i, j ∈ ⟦n⟧, i < j} ∪ {x_ii : i ∈ ⟦n⟧} consists of independent random variables,
• {x_ii : i ∈ ⟦n⟧} are independent copies of n^{−1/2} ξ_3,
• {(x_ij, x_ji) : i, j ∈ ⟦n⟧, i < j} are independent copies of n^{−1/2} (ξ_1, ξ_2).
Here, we used the notation ⟦n⟧ := {1, . . . , n}.

We introduce σ_̺ : C → [0, ∞), the uniform probability density on the ellipse E_̺, defined through

    σ_̺(ζ) = (π(1 − ̺²))^{−1} 1(ζ ∈ E_̺),   E_̺ := E_{̺,0},   E_{̺,δ} := { ζ ∈ C : (Re ζ)²/(1 + ̺)² + (Im ζ)²/(1 − ̺)² ≤ 1 − δ }    (2.1)

with δ ∈ [0, 1). Our main result, Theorem 2.1 below, establishes the convergence of the ESD of X on all mesoscopic scales. Such a scale is probed by the observables f_{ζ_0,α} defined through

    f_{ζ_0,α} : C → C,   f_{ζ_0,α}(ζ) := n^{2α} f(n^α(ζ − ζ_0)),    (2.2)

where f : C → C is an arbitrary function, ζ_0 ∈ C and α > 0.

Theorem 2.1 (Local elliptic law). Let δ ∈ (0, 1] and α ∈ (0, 1/2). Then, for any ε > 0 and ν ∈ N, there is C > 0 such that

    P( | (1/n) Σ_{ζ∈Spec X} f_{ζ_0,α}(ζ) − ∫_C f_{ζ_0,α}(ζ) σ_̺(ζ) dζ | ≤ n^{2α−1+ε} ‖Δf‖_{L¹} ) ≥ 1 − C n^{−ν}    (2.3)

uniformly for all n ∈ N, ζ_0 ∈ E_{̺,δ}, and every f ∈ C²(C) satisfying f(ζ) = 0 for all ζ ∈ C with |ζ| ≥ ϕ and ‖Δf‖_{L^{1+a}} ≤ n^D ‖Δf‖_{L¹} with some constants ϕ, a > 0 and D ∈ N.

Theorem 2.1 will be proved at the end of Section 3 below. We also obtain complete delocalisation of eigenvectors of X in any basis.

Corollary 2.2 (Isotropic eigenvector delocalisation). Let δ ∈ (0, 1] and denote by

    V_δ := { u ∈ C^n \ {0} : Xu = ζu for some ζ ∈ E_{̺,δ} }

the set of eigenvectors of X with eigenvalue in E_{̺,δ}.
Then, for any ε > 0 and ν > 0, there is C > 0 such that

    P( |⟨w, u⟩| ≤ n^{−1/2+ε} ‖w‖ ‖u‖ for all u ∈ V_δ ) ≥ 1 − C n^{−ν}

for all w ∈ C^n and all n ∈ N. Here, ⟨w, u⟩ denotes the Euclidean scalar product of w and u.

Corollary 2.2 is a consequence of the local law for the Hermitization H^ζ of X from (1.1), which is the main input in the proof of Theorem 2.1. Its proof will be given in Section 5.1 below.

Remark 2.3 (Weaker assumptions for Corollary 2.2, exclusion of eigenvalues away from E_̺).
(i) For Corollary 2.2, Theorem 5.1 and their proofs given in Section 5, it suffices to assume that the entries of X satisfy
• The collection {x_ii : i ∈ ⟦n⟧} ∪ {(x_ij, x_ji) : i, j ∈ ⟦n⟧, i < j} consists of independent random variables.
• The random variables have mean zero and variance 1/n, i.e. E x_ij = 0 and E |x_ij|² = 1/n for all i, j ∈ ⟦n⟧, and they satisfy E [x_ik x_ki] = ̺/n for all i, k ∈ ⟦n⟧ with i < k.
• For any ν ∈ N, there is a constant C_ν > 0 such that E |x_ij|^ν ≤ C_ν n^{−ν/2} for all i, j ∈ ⟦n⟧.
For simplicity of the presentation, we stated Theorem 2.1 and Corollary 2.2 under the same assumptions. The stronger assumptions are solely needed to control the smallest singular value via a result by Nguyen–O'Rourke (see Theorem 3.4 below).
(ii) A sufficiently strong local law also implies that there are no eigenvalues away from the support of the limiting eigenvalue density. This is also correct in the present setup. However, for elliptic random matrices the absence of eigenvalues away from E_̺ has already been shown in [50, Theorem 2.2] for real matrices. Therefore, we refrain from reproving such a result here. Moreover, for real and complex matrices, exclusion of eigenvalues can also be derived from [26] by combining [26, Lemma 4.8] and the displayed equation after [26, eq. (7.2)].

We now give an overview of the remainder of this paper. In the next subsection, we explain a few notations and conventions used throughout this work.
Section 3 summarises the main ingredients and contains the proof of Theorem 2.1 given these ingredients. In Section 4, we analyse the Dyson equation and its solution, which is a deterministic approximation of the resolvent of the Hermitization of X. In Section 5, this analysis is used to prove a local law for the Hermitization, a key input for the proof of the local law for X.

2.2 Notations and conventions

Here, we introduce and collect a few notations used throughout the paper. We first recall the definition ⟦n⟧ := {1, . . . , n} for n ∈ N. For r > 0, we denote the disk of radius r in the complex plane centred at the origin by D_r := {z ∈ C : |z| < r}. For functions of ζ ∈ C, we write ∂ and ∂̄ for their derivatives with respect to ζ and ζ̄, respectively, i.e. ∂ = ½(∂_{Re ζ} − i ∂_{Im ζ}) and ∂̄ = ½(∂_{Re ζ} + i ∂_{Im ζ}). We remark that, in matrix equations, scalars are identified with the corresponding multiple of the identity matrix, e.g. in H^ζ − iη. For a matrix A ∈ C^{l×l}, we denote its normalised trace by ⟨A⟩ = l^{−1} Tr A.

Throughout the paper, we use the convention that all generic constants are denoted by c and C and may always depend on the distribution of ξ := (ξ_1, ξ_2, ξ_3). These constants are uniform in all other parameters, e.g. n, ζ, etc., within specified parameter sets.

If f and g are two real scalars then we write f ≲ g and g ≳ f if there is a constant C > 0 such that f ≤ Cg. If f ≲ g and f ≳ g then we write f ∼ g. In case the constant C depends on a parameter δ, we write ≲_δ, ≳_δ and ∼_δ, respectively. The same notation is also used for Hermitian matrices f and g, where f ≤ Cg is interpreted in a quadratic form sense. If f is complex and g ≥ 0 then we write f = O(g) if |f| ≲ g. Similarly, f = O_δ(g) if |f| ≲_δ g. We omit the subscripts from ≲ and O for the parameters 1 − |̺| and α from (2.2) as well as ϕ, D and a from Theorem 2.1.

3 Proof of Theorem 2.1

In this section, we prove Theorem 2.1. To that end, we first collect the ingredients of its proof and then use them to conclude the local elliptic law. The novel ingredients will be proved in the following sections.

The first ingredient is a basic formula due to Girko [32] expressing the averaged linear statistics of X in a more tractable way. Indeed, since (2π)^{−1} log|·| is the fundamental solution of the Laplace equation in two dimensions, we have

    (1/n) Σ_{ξ∈Spec X} f(ξ) = (1/(2πn)) Σ_{ξ∈Spec X} ∫_C Δf(ζ) log|ζ − ξ| dζ = (1/(4πn)) ∫_C Δf(ζ) log|det H^ζ| dζ,    (3.1)

where we introduced the Hermitization

    H^ζ := [[ 0 , X − ζ ], [ (X − ζ)^* , 0 ]].
    (3.2)

The log-determinant of H^ζ can easily be obtained from the resolvent G(ζ, η) := (H^ζ − iη)^{−1} through the well-known identity

    log|det H^ζ| = −2n ∫_0^T ⟨Im G(ζ, η)⟩ dη + log|det(H^ζ − iT)|    (3.3)

for any T > 0. Here, ⟨R⟩ denotes the normalised trace of a matrix R ∈ C^{2n×2n}.

When n becomes large, ⟨G⟩ is well approximated by a deterministic function which we introduce next. For each ζ ∈ C and η > 0, let M ≡ M(ζ, η) ∈ C^{2×2} be the unique solution of

    −M^{−1} = [[ iη , ζ ], [ ζ̄ , iη ]] + S[M],   S[ [[a_11 , a_12], [a_21 , a_22]] ] := [[ a_22 , ̺ a_21 ], [ ̺ a_12 , a_11 ]],    (3.4)

whose imaginary part Im M = (M − M^*)/(2i) is positive definite. The existence and uniqueness of M is shown e.g. in [38]. In (3.4), a_11, a_12, a_21, a_22 ∈ C. The relation in (3.4) is called Dyson equation and the linear map S : C^{2×2} → C^{2×2} is the self-energy operator or self-energy. It is easy to see that the unique solution M of (3.4) satisfies

    M = [[ iv , b̄ ], [ b , iv ]]    (3.5)

for each ζ ∈ C and η > 0, with some v ≡ v(ζ, η) ∈ (0, ∞) and b ≡ b(ζ, η) ∈ C. The following proposition states that ⟨G⟩ is well approximated by iv on all scales η ∈ [n^{γ−1}, n^{100}] slightly above the typical eigenvalue spacing of H^ζ around zero, which is of order n^{−1} in the bulk, i.e. for ζ ∈ E_{̺,δ}. This proposition will be proved in Section 5.1 below.

Proposition 3.1 (Local law for H^ζ, averaged version). Let v be as in (3.5), γ > 0 and δ ∈ (0, 1]. Then, for any ε > 0 and ν > 0, there is C_{ε,ν} > 0 such that

    P( | ⟨G(ζ, η)⟩ − iv(ζ, η) | ≤ n^ε/(nη) ) ≥ 1 − C_{ε,ν} n^{−ν}

uniformly for all η ∈ [n^{γ−1}, n^{100}], ζ ∈ E_{̺,δ} and n ∈ N.

The previous proposition directly implies the bound on the number of small singular values of X − ζ in the next lemma. The singular values of X − ζ coincide with the moduli of the eigenvalues of H^ζ. The latter are denoted by {λ_1(ζ), . . . , λ_{2n}(ζ)} = Spec H^ζ in the following.

Lemma 3.2 (Number of small singular values of X − ζ). Let δ ∈ (0, 1] and γ > 0. Then, for each ν > 0, there is a constant C_ν > 0 such that

    P( #{ i ∈ ⟦2n⟧ : |λ_i(ζ)| ≤ η } ≲ nη ) ≥ 1 − C_ν n^{−ν}

uniformly for all η ∈ [n^{γ−1}, n^{100}], ζ ∈ E_{̺,δ} and n ∈ N.

As we have seen in the formulation of Proposition 3.1 and Lemma 3.2, the following notion of high probability events is useful. It will be used extensively in the proofs of Lemma 3.2 and Theorem 2.1 below.

Definition 3.3 (With very high probability). The (sequence of) events (A_n)_{n∈N} occur with very high probability if for every ν > 0 there is C_ν > 0 such that, for all n ∈ N, P(A_n) ≥ 1 − C_ν n^{−ν}.

Proof of Lemma 3.2.
From the first bound in (3.7) in Lemma 3.5 below, we conclude that the trace of Im G is bounded by n, i.e. Im Tr G(ζ, η) ≲ n, with very high probability due to Proposition 3.1. Since

    (1/(2η)) #Σ_η ≤ Σ_{i∈Σ_η} η/(η² + λ_i(ζ)²) ≤ Im Tr G(ζ, η) ≲ n,

where Σ_η := { i ∈ ⟦2n⟧ : |λ_i(ζ)| ≤ η }, this proves Lemma 3.2.

The smallest singular value of X − ζ, which coincides with min_{j∈⟦2n⟧} |λ_j(ζ)|, is controlled in the following result from [49, Theorem 1.9]. See also [35] for a related result.

Theorem 3.4 (Smallest singular value of X − ζ). Let a > 0. Then, for any B > 0, there are A > 0 and C > 0 such that

    P( min_{j∈⟦2n⟧} |λ_j(ζ)| ≤ n^{−A} ) ≤ C n^{−B}

uniformly for all n ∈ N and ζ ∈ C with |ζ| ≤ n^a.

The next lemma relates v from (3.5) with the elliptic law σ_̺ defined in (2.1) and will be proved in Section 4.3 below. The relation is expected due to the identities in (3.1) and (3.3).

Lemma 3.5 (σ_̺ as distributional derivative). For every ψ ∈ C²(C) with supp ψ ⊂ E_{̺,δ} for some δ ∈ (0, 1], we have

    (1/(2π)) ∫_C Δψ(ζ) L(ζ) dζ = ∫_C ψ(ζ) σ_̺(ζ) dζ,   L(ζ) := −∫_0^∞ ( v(ζ, η) − (1 + η)^{−1} ) dη,    (3.6)

where the integral in the definition of L exists in the Lebesgue sense for all ζ ∈ C. Moreover, uniformly for all η ∈ (0, ∞), ζ ∈ D_10 and T ∈ [1, ∞), we have

    v(ζ, η) ≤ (1 + η)^{−1},   ∫_0^T | v(ζ, η) − (1 + η)^{−1} | dη ≲ 1,   ∫_T^∞ | v(ζ, η) − (1 + η)^{−1} | dη ≲ T^{−1}.    (3.7)

In order to use Proposition 3.1 when ⟨G⟩ is integrated with respect to ζ, we will approximate such integrals by an average of evaluations of the integrand at uniformly distributed points. This approximation is controlled by the next lemma, which is a simplified version of [55, Lemma 36] used in a similar context.

Lemma 3.6 (Monte Carlo sampling). Let Ω ⊂ C be bounded and of positive Lebesgue measure. Let µ be the normalised Lebesgue measure on Ω and F : Ω → C square-integrable with respect to µ. For m ∈ N, let ξ_1, . . . , ξ_m be independent random variables distributed according to µ. Then, for any δ > 0, we have

    P( | (1/m) Σ_{i=1}^m F(ξ_i) − ∫_Ω F dµ | ≤ (mδ)^{−1/2} ( ∫_Ω | F − ∫_Ω F dµ |² dµ )^{1/2} ) ≥ 1 − δ.

Proof.
The i.i.d. random variables F(ξ_1), . . . , F(ξ_m) have mean ∫_Ω F dµ and variance ∫_Ω |F − ∫_Ω F dµ|² dµ. Thus, Markov's inequality implies Lemma 3.6.

We now combine the results introduced above in order to prove Theorem 2.1. The argument is very similar to the proof of [8, Theorem 2.7]. However, we detail it here for the convenience of the reader.

Proof of Theorem 2.1. Set Ω := E_{̺,δ/2}. Let f ∈ C²(C) with supp f ⊂ D_ϕ and ‖Δf‖_{L^{1+a}} ≤ n^D ‖Δf‖_{L¹}. Owing to the definition of f_{ζ_0,α} in (2.2) and supp f ⊂ D_ϕ, we have supp f_{ζ_0,α} ⊂ Ω if n is sufficiently large. Since Ω ⊂ D_10, combining the first step in (3.1), (3.6) and the last estimate in (3.7) yields

    (1/n) Σ_{ξ∈Spec X} f_{ζ_0,α}(ξ) − ∫_C f_{ζ_0,α}(ζ) σ_̺(ζ) dζ = ∫_Ω F(ζ) dµ(ζ) + O( T^{−1} ‖Δf‖_{L¹} n^{2α} ).    (3.8)

Here, we denoted by µ the normalised Lebesgue measure on Ω and by F the function defined through

    F(ζ) := (|Ω|/(2π)) Δf_{ζ_0,α}(ζ) h(ζ),   h(ζ) := (1/n) Σ_{ξ∈Spec X} log|ξ − ζ| + ∫_0^T ( v(ζ, η) − (1 + η)^{−1} ) dη.

The rest of this proof is devoted to estimating ∫_Ω F dµ, to which we will apply Lemma 3.6. The function ζ ↦ log|ξ − ζ| is in L^p(Ω) for all p ∈ [1, ∞). Hence, the second bound in (3.7) implies that, for any p ∈ [1, ∞), we have ‖h‖_{L^p(Ω)} ≲_p 1 uniformly for T ≥ 1. In particular, F is square-integrable on Ω.

For any ν > 0, we apply Lemma 3.6 with δ = n^{−ν} and m = n^{ν+2D+20} to get

    | ∫_Ω F(ζ) dµ(ζ) − (1/m) Σ_{i=1}^m F(ξ_i) | ≲ n^{−D−10+4α} ‖Δf‖_{L^{1+a}}    (3.9)

with probability at least 1 − n^{−ν}. Here, ξ_1, . . . , ξ_m are uniformly distributed on Ω. Together with ‖Δf‖_{L^{1+a}} ≤ n^D ‖Δf‖_{L¹} and α < 1/2, the right-hand side of (3.9) is bounded by n^{−8} ‖Δf‖_{L¹}.

We now choose T := n and show that, for all ε > 0, we have

    |F(ζ)| ≤ n^ε n^{−1} |Δf_{ζ_0,α}(ζ)|    (3.10)

with very high probability for every ζ ∈ Ω.

First, we decompose h. To that end, we introduce

    h_1(ζ) := ∫_{n^{−1+ε}}^{T} ( v(ζ, η) − ⟨Im G(ζ, η)⟩ ) dη,   h_2(ζ) := −∫_0^{n^{−1+ε}} ⟨Im G(ζ, η)⟩ dη,
    h_3(ζ) := (1/(4n)) Σ_{i∈⟦2n⟧} log( 1 + λ_i(ζ)²/T² ) − log( 1 + 1/T ),   h_4(ζ) := ∫_0^{n^{−1+ε}} v(ζ, η) dη.

Using ∫_0^T (1 + η)^{−1} dη = log(1 + T), the second step in (3.1) and (3.3), we obtain h = h_1 + h_2 + h_3 + h_4. Therefore, in order to prove (3.10), it suffices to show that, for each i ∈ ⟦4⟧ and every ε > 0, we have |h_i(ζ)| ≲ n^{−1+2ε} log n with very high probability, as |Ω| ≤ 2π and ε > 0 is arbitrary in (3.10).

Using the Lipschitz-continuity of v(ζ, η) − ⟨Im G(ζ, η)⟩ in η and a grid argument in η, we conclude from Proposition 3.1 with γ = ε that |h_1(ζ)| ≲ n^{−1+ε} log n with very high probability.

The spectral theorem for H^ζ shows that, using the short-hand λ_j ≡ λ_j(ζ), we have

    −h_2(ζ) = (1/(4n)) Σ_{j∈⟦2n⟧} log( 1 + n^{−2+2ε}/λ_j² ).

We decompose the sum into the three regimes |λ_j| < n^{−1+ε}, |λ_j| ∈ [n^{−1+ε}, n^{−1/2}] and |λ_j| > n^{−1/2}, and estimate it separately in each of them. Owing to Lemma 3.2 with η = n^{−1+ε}, the first regime contains at most O(n^ε) summands with very high probability, and each of them is of order log n with very high probability by Theorem 3.4; after inserting the prefactor (4n)^{−1} this yields a contribution of order n^{−1+ε} log n. In the two remaining regimes we use log(1 + x) ≤ x together with Lemma 3.2, applied at dyadic scales in the middle regime, and obtain a contribution of order n^{−1+2ε}. Hence, |h_2(ζ)| ≲ n^{−1+2ε} log n with very high probability. Moreover, |h_4(ζ)| ≤ n^{−1+ε} since 0 < v ≤ 1 by (3.7), and |h_3(ζ)| ≲ n^{−1} with very high probability by the choice T = n, log(1 + x) ≤ x and Σ_i λ_i(ζ)² = Tr (H^ζ)² ≲ n² with very high probability.

Combining these estimates proves (3.10). On the intersection of the corresponding very high probability events at the sampling points ξ_1, . . . , ξ_m, the bound (3.10) and a further application of Lemma 3.6, now to |Δf_{ζ_0,α}|, yield |(1/m) Σ_{i=1}^m F(ξ_i)| ≲ n^{2α−1+ε} ‖Δf‖_{L¹}. Together with (3.8), (3.9) and T = n, this implies (2.3) and completes the proof of Theorem 2.1.
4 The Dyson equation

4.1 Properties of the solution

Lemma 4.1 (Bounds on M). The solution M of (3.4) satisfies

    ‖M‖ ≲ (1 + η)^{−1},    (4.1a)
    ‖M^{−1}‖ ≲ 1 + η + |ζ|    (4.1b)

uniformly for all ζ ∈ C and η > 0. Moreover, for every r > 0, uniformly for all ζ ∈ D_r and η > 0, we have

    ‖M + i? — more precisely, ‖M − i(1 + η)^{−1}‖ ≲_r (1 + η)^{−2},    (4.2a)
    M^*M ≳_r (1 + η)^{−2},    (4.2b)
    Im M ≳_r η (1 + η)^{−2}.    (4.2c)

Proof. Computing the entries of −M^{−1} explicitly from (3.5) and comparing with (3.4) yields, from the diagonal entries, the scalar relation

    v² + |b|² = v/(η + v).    (4.3)

In particular, ‖M‖² = v² + |b|² ≤ 1. From (3.4), we conclude −Im M^{−1} ≥ η and, hence, ‖M‖ ≤ η^{−1}. Therefore, ‖M‖ ≤ min{1, η^{−1}} ≤ 2(1 + η)^{−1}, which is (4.1a). For (4.1b), we apply ‖·‖ to (3.4) and use that ‖S[M]‖ ≲ ‖M‖ ≤ 1.

Let r > 0. Then there is C_r > 0 such that η^{−1}(|ζ| + ‖M‖) ≤ 1/2 whenever η ≥ C_r and ζ ∈ D_r. If η ∈ (0, C_r] then (4.2a) holds trivially. If η ≥ C_r then we take the inverse of (3.4), expand the right-hand side of the result around (iη)^{−1} and obtain

    −M = (iη)^{−1} ( 1 + (iη)^{−1} ( [[0, ζ], [ζ̄, 0]] + S[M] ) )^{−1} = −iη^{−1} + O_r(η^{−2}).

Here, we used in the last step that ‖S[M]‖ ≲ ‖M‖ and η^{−1}(|ζ| + ‖M‖) ≤ 1/2 for η ≥ C_r, as well as ‖M‖ ≲ (1 + η)^{−1} by (4.1a). This proves (4.2a) in the missing regime. For (4.2b), we note that M^*M ≥ ‖M^{−1}‖^{−2}. Hence, the upper bound in (4.1b) implies (4.2b) as |ζ| ≤ r. Since Im M = M^*(−Im M^{−1})M ≥ η M^*M by (3.4), the bound (4.2c) follows from (4.2b).

The next lemma shows that, for ζ ∈ E_{̺,δ}, the origin is asymptotically in the bulk spectrum of H^ζ.

Lemma 4.2 (Scaling of v on E_{̺,δ}). Let δ ∈ (0, 1]. Then, uniformly for ζ ∈ E_{̺,δ} and η ∈ (0, 1], we have

    v(ζ, η) ∼_δ 1.    (4.4)

Moreover, for any ζ ∈ int(E_̺), we have

    lim_{η↓0} v(ζ, η)² = 1 − (Re ζ)²/(1 + ̺)² − (Im ζ)²/(1 − ̺)².    (4.5)

We need further relations following from (3.4). Left-multiplying (3.4) by M and computing the upper-right entry of the result using (3.5) yields −b̄(η + v) = v(ζ + ̺b). Hence, by applying (4.3), we obtain

    −b̄ = (v² + |b|²)(ζ + ̺b).    (4.6)

Moreover, we use (4.6) and (4.3) to conclude −(1 + η/v) b̄ = ζ + ̺b. By taking the real and imaginary parts of this relation separately, we obtain

    b = −Re ζ/(1 + ̺ + η/v) + i Im ζ/(1 − ̺ + η/v) = −((ζ + ζ̄)/2)(1 + ̺ + η/v)^{−1} + ((ζ − ζ̄)/2)(1 − ̺ + η/v)^{−1}.    (4.7)

Employing (4.3) again, we arrive at

    (Re ζ)²/(1 + η/v + ̺)² + (Im ζ)²/(1 + η/v − ̺)² = |b|² = (1 + η/v)^{−1} − v².    (4.8)

Proof of Lemma 4.2.
We now show that there is c ∼_δ 1 such that if v(ζ, η) ≤ c for some ζ ∈ D_10 and η ∈ (0, 1] then ζ ∉ E_{̺,δ}. The previous statement implies (4.4), as v ≤ 1 always holds by (3.7).

Suppose that v(ζ, η) ≤ c. First, we choose c ∼ 1 small enough such that |b| ≳ 1. This is possible since v + |b| ≳ 1 for ζ ∈ D_10 and η ∈ (0, 1] by (4.2b); moreover, |b| ≤ 1 by (4.3). Solving (4.6) for Re b and Im b, respectively, and combining the results into a relation for τ := |b| yield

    (Re ζ)²/(τ^{−1} + ̺τ)² + (Im ζ)²/(τ^{−1} − ̺τ)² = 1 + O(v),    (4.9)

where we also used τ = |b| ∼ 1. Owing to τ^{−1} + ̺τ ≥ 1 + ̺ and τ^{−1} − ̺τ ≥ 1 − ̺ for τ ∈ (0, 1], we obtain

    (Re ζ)²/(1 + ̺)² + (Im ζ)²/(1 − ̺)² ≥ 1 + O(v).

Therefore, ζ ∉ E_{̺,δ} if c ∼_δ 1 is chosen sufficiently small, depending on δ. This proves (4.4).

For the proof of (4.5), let ζ ∈ int(E_̺). Hence, ζ ∈ E_{̺,δ} for some δ ∈ (0, 1] and, thus, v(ζ, η) ∼_δ 1 for all η ∈ (0, 1] by (4.4). In particular, η/v → 0 as η ↓ 0, and sending η ↓ 0 in (4.8) yields (4.5).

4.2 Stability

For the following analysis of the stability operator of the Dyson equation, (3.4), we introduce the matrix

    E_− := [[ 1 , 0 ], [ 0 , −1 ]] ∈ C^{2×2}.

We will work on the linear subspace E_−^⊥ ⊂ C^{2×2}, where the orthogonality is understood with respect to the Hilbert–Schmidt scalar product on C^{2×2}. The next proposition proves a precise bound on the inverse of the stability operator L : C^{2×2} → C^{2×2} of the Dyson equation, (3.4). The operator L is defined through

    L[R] := R − M S[R] M    (4.10)

for any R ∈ C^{2×2}. This proposition will be a crucial ingredient in the proof of Proposition 3.1 and its generalisation, Theorem 5.1 below.

Proposition 4.3 (Linear stability estimate). For any ζ ∈ C and η > 0, the operator L leaves E_−^⊥ invariant and is invertible on E_−^⊥. Moreover, the inverse of the restriction to E_−^⊥ satisfies

    ‖ L^{−1}|_{E_−^⊥} ‖ ≲ ( v³ + ηv )^{−1}    (4.11)

uniformly for all ζ ∈ D_10 and η ∈ (0, 1].

In (4.11) and the following, we write L^{−1}|_{E_−^⊥} for the inverse of the restriction of L to E_−^⊥, i.e. the operator is first restricted to E_−^⊥ and then inverted.

Corollary 4.4 (Linear stability estimate in bulk). Let δ ∈ (0, 1]. Then, uniformly for ζ ∈ E_{̺,δ} and η ∈ (0, 1], we have ‖ L^{−1}|_{E_−^⊥} ‖ ≲_δ 1.

Proof of Corollary 4.4.
The claimed bound follows directly from (4.11) and (4.4).
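Both the Dyson equation (3.4) and the stability statement just used can be explored numerically. The sketch below is our illustration: the damped fixed-point iteration is a standard numerical scheme for Dyson equations and is not taken from this paper, and the parameter values are arbitrary bulk choices. It computes M, checks the ansatz (3.5) together with the scalar relation v² + |b|² = v/(η + v), and verifies that the stability operator L[R] = R − M S[R] M from (4.10) leaves the orthogonal complement of E_− invariant and is invertible there.

```python
import numpy as np

rho, zeta, eta = 0.4, 0.3 + 0.2j, 0.2

A = np.array([[1j * eta, zeta], [np.conj(zeta), 1j * eta]])
S = lambda R: np.array([[R[1, 1], rho * R[1, 0]], [rho * R[0, 1], R[0, 0]]])

# damped fixed-point iteration for  -M^{-1} = A + S[M], started in the
# upper half-plane (Im M > 0), which the iteration preserves
M = 1j * np.eye(2)
for _ in range(2000):
    M = 0.5 * M + 0.5 * (-np.linalg.inv(A + S(M)))

v, b = M[0, 0].imag, M[1, 0]
residual = np.linalg.norm(np.linalg.inv(-M) - (A + S(M)))
ansatz_err = abs(M[1, 1] - 1j * v) + abs(M[0, 0].real) + abs(M[0, 1] - np.conj(b))
scalar_err = abs(v**2 + abs(b)**2 - v / (eta + v))

# stability operator as a 4x4 matrix in the row-major matrix-unit basis
E = [np.array([[1, 0], [0, 0]]), np.array([[0, 1], [0, 0]]),
     np.array([[0, 0], [1, 0]]), np.array([[0, 0], [0, 1]])]
Lmat = np.array([[np.trace(F.conj().T @ (R - M @ S(R) @ M)) for R in E] for F in E])

# E_- = diag(1, -1); orthonormal basis of its complement: identity, sigma_x, sigma_y
e = np.array([1, 0, 0, -1]) / np.sqrt(2)
B = np.array([[1, 0, 0], [0, 1, -1j], [0, 1, 1j], [1, 0, 0]]) / np.sqrt(2)
inv_err = np.abs(e.conj() @ Lmat @ B).max()   # invariance of the complement
min_sv = np.linalg.svd(B.conj().T @ Lmat @ B, compute_uv=False).min()
```

Since ζ lies well inside the ellipse and η is of order one, the restricted operator is comfortably invertible, as Corollary 4.4 predicts.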
Proof of Proposition 4.3.
The main tool of this proof is the following lemma, which is proved in [2, Lemma 5.8] and provides a bound on the inverse of operators of the type U − T, where U is unitary and T is self-adjoint. The lemma uses the notion of the spectral gap of a self-adjoint operator T on C^k. The spectral gap Gap(T) of T is the difference of the two largest eigenvalues of |T| (cf. [2, Definition 5.4]). In the next lemma and for the rest of this proof, ‖·‖ is the operator norm on C^{k×k} induced by the Euclidean norm on C^k.

Lemma 4.5 (Rotation-Inversion). Let T be a self-adjoint and U a unitary operator on C^k. Suppose that Gap(T) > 0 and ‖T‖ ≤ 1. Then there is a universal constant C > 0 such that

    ‖ (U − T)^{−1} ‖ ≤ C ( Gap(T) | 1 − ‖T‖ ⟨h, U h⟩ | )^{−1},

where h is the normalised eigenvector of T corresponding to the nondegenerate eigenvalue ‖T‖.

We now explain how L can be represented as U − T for some U and T introduced next. From (3.5), we conclude that M is normal. Let M = |M|U = U|M| be its polar decomposition. Owing to (4.3), we have

    |M|² = M^*M = ‖M‖²,   ‖M‖² = v² + |b|² = v/(η + v),   U = (v² + |b|²)^{−1/2} M.

Consequently, L = U^*(U − T) with the definitions T := ‖M‖² S and U[R] := U^* R U^* for any R ∈ C^{2×2}. Note that S, U and U^*, and, thus, L leave E_−^⊥ invariant. Moreover, S is self-adjoint and U is unitary on E_−^⊥. In particular, ‖M‖² S is self-adjoint on E_−^⊥.

We now apply Lemma 4.5 by identifying E_−^⊥ with C³ and the choices of T and U made above. It is easily seen that Spec(S|_{E_−^⊥}) = {1, ̺, −̺} with the identity matrix, the first and the second Pauli matrix being the respective eigenvectors. Hence, ‖T‖ = ‖M‖² ‖S|_{E_−^⊥}‖ = ‖M‖² = v/(η + v) ≤ 1 and Gap(T) = ‖M‖²(1 − |̺|). Moreover, the eigenvector h from Lemma 4.5 is the normalised identity matrix. Thus,

    1 − ‖T‖ ⟨h, U h⟩ = 1 − ⟨M²⟩ = 1 − |b|² + v² = η/(η + v) + 2v²

by (3.5) and (4.3). Therefore, Lemma 4.5 implies

    ‖ L^{−1}|_{E_−^⊥} ‖ ≤ C ( (1 − |̺|) ‖M‖² ( η/(η + v) + 2v² ) )^{−1},    (4.12)

where we identified E_−^⊥ with C³ and ‖·‖ denotes the operator norm induced by the Hilbert–Schmidt scalar product on E_−^⊥ ⊂ C^{2×2}. Since ‖M‖ ≳ v and v ≳ η for η ∈ (0, 1] and ζ ∈ D_10 by (4.2b) and (4.2c), respectively, and the norms on the space of operators on C^{2×2} are equivalent (with constants ∼ 1), the claim (4.11) follows from (4.12).

Corollary 4.6 (Bounded derivatives). The function M(ζ, η) is continuously differentiable with respect to ζ, ζ̄ and η at any η > 0 and ζ ∈ C. Moreover, for any δ ∈ (0, 1], we have

    ‖∂M(ζ, η)‖ + ‖∂̄M(ζ, η)‖ + ‖∂_η M(ζ, η)‖ ≲_δ 1

uniformly for η ∈ (0, 1] and ζ ∈ E_{̺,δ}.

Proof. Since (3.4) is an equation on E_−^⊥ and the linear stability operator L is invertible on E_−^⊥ for any η > 0 and ζ ∈ C, the implicit function theorem implies the continuous differentiability with respect to ζ, ζ̄ and η. The bound on the derivatives follows from differentiating (3.4) and using Corollary 4.4.
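The scaling (4.4) and the limit (4.5) can be cross-checked numerically from the scalar equation (4.8) alone, which determines v once b has been eliminated. The sketch below is our illustration: bisection, the bracketing interval (which assumes a single sign change for these parameters) and the parameter values are assumptions, not part of the paper. It solves (4.8) at a small η and compares v² with the right-hand side of (4.5).

```python
import numpy as np

rho, zeta, eta = 0.4, 0.3 + 0.2j, 1e-4   # bulk point, nearly at eta = 0

def g(v):
    # residual of (4.8): (1/(1+u) - v^2) minus the left-hand side, u = eta/v
    u = eta / v
    lhs = zeta.real**2 / (1 + u + rho)**2 + zeta.imag**2 / (1 + u - rho)**2
    return (1 / (1 + u) - v**2) - lhs

# bisection on (0, 1): g > 0 for small v and g < 0 near v = 1 here
lo, hi = 1e-6, 1.0 - 1e-6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
v = 0.5 * (lo + hi)

# limiting value from (4.5)
limit = 1 - zeta.real**2 / (1 + rho)**2 - zeta.imag**2 / (1 - rho)**2
print(v**2, limit)
```

For this bulk point v is of order one, consistent with (4.4), and v² agrees with (4.5) up to an O(η) correction.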
Note that the integral in the definition of L exists in the Lebesgue sense since,for every r >
0, we have (cid:12)(cid:12) v ( ζ, η ) − (1 + η ) − (cid:12)(cid:12) . r (1 + η ) − (4.13)uniformly for all η > ζ ∈ D r by (4.1a) and (4.2a).The first inequality in (3.7) follows directly from (4.1a). The second and the third bound in (3.7)are immediate consequences of (4.13) with r = 10.The main ingredients of the proof of (3.6) are the identity (4.14) and the limit (4.15) below whichwe will show in the following. For each ε >
0, we have L ε ∈ C ( C ) and2 ∂L ε ( ζ ) = − b ( ζ, ε ) , L ε ( ζ ) .. = − Z ∞ ε (cid:18) v ( ζ, η ) −
11 + η (cid:19) d η. (4.14)Moreover, b ( ζ, η ) is differentiable with respect to ¯ ζ for η > δ ∈ (0 , − π lim η ↓ ¯ ∂b ( ζ, η ) = σ ̺ ( ζ ) (4.15)uniformly on E ̺,δ .We now deduce (3.6) from (4.14) and (4.15). Let ψ ∈ C ( C ) such that supp ψ ⊂ E ̺,δ for some δ ∈ (0 , π ¯ ∂ψ and integrate the result over E ̺,δ .Thus, integrating by parts on both sides and using ∆ = 4 ∂ ¯ ∂ yields12 π Z E ̺,δ ∆ ψ ( ζ ) L ε ( ζ )d ζ = − π Z E ̺,δ ψ ( ζ ) ¯ ∂b ( ζ, ε )d ζ. (4.16)From (4.13), we conclude that L is uniformly bounded in ζ and that L ε → L uniformly on D for ε ↓
0. Therefore, sending ε ↓ ∂ to (4.7), send η ↓ η ↓ ¯ ∂ ( η/v ) = lim η ↓ η/v = 0on E ̺,δ as v ∼ | ¯ ∂v | . E ̺,δ by Lemma 4.2 and Corollary 4.6, respectively. This proves(4.15) due to the definition of σ ̺ in (2.1). 11hat remains is proving L ε ∈ C ( C ) and (4.14). We first observe that v is differentiable withrespect to η and ζ by Corollary 4.6 and positive for η > M . Hence, differentiating (4.8) with respect to η and ζ yield (cid:18) v η + 2(Re ζ ) (1 + ̺ + η/v ) + 2(Im ζ ) (1 − ̺ + η/v ) − η/v ) (cid:19) ∂ η (cid:18) ηv (cid:19) = 2 v η , (cid:18) v η + 2(Re ζ ) (1 + ̺ + η/v ) + 2(Im ζ ) (1 − ̺ + η/v ) − η/v ) (cid:19) ∂ (cid:18) ηv (cid:19) = Re ζ (1 + ̺ + η/v ) − iIm ζ (1 − ̺ + η/v ) . (4.17)Since ∂ ( η/v ) = − ηv − ∂v and v = η − + O ( η − ) by (4.2a), the second identity in (4.17) implies | ∂v | . η − for large η . As ∂v is a continuous function in η on (0 , ∞ ) by Corollary 4.6, ∂v is Lebesgue-integrable in η on ( ε, ∞ ) for any ε >
0. Thus, L ε ∈ C ( C ), ∂L ε ( ζ ) = − R ∞ ε ∂v ( ζ, η )d η for all ε > ε →∞ ∂L ε ( ζ ) = 0. Therefore, ∂L ε ( ζ ) is differentiable in η and (4.14) is equivalent to2 ∂ ε ∂L ε ( ζ ) = − ∂ ε b ( ζ, ε ) since lim ε →∞ ∂L ε ( ζ ) = 0 and lim ε →∞ b ( ζ, ε ) = 0 by (4.1a). Owing to (4.7),the identity 2 ∂ ε ∂L ε ( ζ ) = − ∂ ε b ( ζ, ε ) is equivalent to2 v η ∂ (cid:18) ηv (cid:19) = (cid:18) Re ζ (1 + ̺ + η/v ) − iIm ζ (1 − ̺ + η/v ) (cid:19) ∂ η (cid:18) ηv (cid:19) . (4.18)Since (4.18) holds due to (4.17), this proves (4.14) and, thus, completes the proof of Lemma 3.5. In this section we prove Proposition 3.1 and Corollary 2.2, see Section 5.1 below. The main tool isthe local law for H = H ζ from (3.2), Theorem 5.1 below, which states that as n tends to infinity theresolvent G = G ( ζ, η ) = ( H ζ − i η ) − converges to the deterministic matrix M = M ( ζ, η ) ∈ C n × n defined as M ( ζ, η ) := i v ( ζ, η ) b ( ζ, η ) b ( ζ, η ) i v ( ζ, η ) ! , (5.1)where every entry in this 2 × C n × n , i.e. M = M ⊗ ∈ C × ⊗ C n × n . We recall that M and v as well as b were defined in (3.4) and (3.5),respectively.To express the convergence of G to M , we introduce appropriate norms. For any random matrix A ∈ C l × l in dimension l ∈ N we define the p -norms k A k p := k A k iso p := sup k x k , k y k≤ (cid:0) E |h x , Ay i| p (cid:1) /p , k A k av p := sup k B k≤ (cid:0) E |h BA i| p (cid:1) /p , (5.2)where the supremum is taken over x, y ∈ C l and B ∈ C l × l , respectively. In (5.2) and in the following, h · , · i denotes the Euclidean scalar product on C l and h · i the normalised trace on C l × l . We also allow p = ∞ by setting k A k ∞ := lim p ↑∞ k A k p for , av. In particular, k α k p denotes the standard p -norm for a scalar random variable α . Note that k A k iso p ∼ l k A k av p ∼ l kk A kk p (5.3)for all A ∈ C l × l and p ∈ N , i.e. if l does not depend on n all these norms are comparable.With these definitions we state the local law for H ζ . 
Theorem 5.1 (Local law for $H_\zeta$). Let $\gamma \in (0,1)$ and $p \in \mathbb{N}$. Uniformly for all $\eta \in [n^{-1+\gamma}, n]$ and $\zeta \in E_{\varrho,\gamma}$, the following local law holds:
\[
\|G - M\|_p \lesssim_{p,\gamma} \frac{n^\gamma}{\sqrt{n\eta}}, \qquad \|G - M\|_p^{\mathrm{av}} \lesssim_{p,\gamma} \frac{n^\gamma}{n\eta}. \tag{5.4}
\]

The proof of Theorem 5.1 will be presented in Section 5.2 below. For $\eta \in [n^{-\varepsilon}, n]$ with a very small $\varepsilon > 0$, it will directly follow from [27, Theorem 2.1]. As it stands, equation (3.4) is unstable in the local regime $\eta \le n^{-\varepsilon}$. Thus, the results from [27] do not directly extend to such small $\eta$. We will use the refined stability from Proposition 4.3, orthogonal to the unstable direction $E_-$, to show (5.4) in the local regime. We now conclude Proposition 3.1 and Corollary 2.2 from Theorem 5.1.
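For orientation, the ensemble behind Theorem 5.1 is easy to simulate (an illustration only; the matrix size, seed and correlation $\varrho = 0.5$ are arbitrary choices, and the usual normalisation $\mathbb{E}|x_{ij}|^2 = 1/n$ is assumed). With $\mathbb{E}\,x_{ij}x_{ji} = \varrho/n$, almost all eigenvalues fall into the ellipse with half-axes $1+\varrho$ and $1-\varrho$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 500, 0.5

# Correlated-pair construction: x_ij = a*w_ij + b*v_ij with W symmetric and
# V antisymmetric Gaussian gives Var(x_ij) = 1/n and E x_ij x_ji = rho/n.
a, b = np.sqrt((1 + rho) / 2), np.sqrt((1 - rho) / 2)
Wg = rng.normal(size=(n, n)) / np.sqrt(n)
Vg = rng.normal(size=(n, n)) / np.sqrt(n)
W = (Wg + Wg.T) / np.sqrt(2)   # symmetric part
V = (Vg - Vg.T) / np.sqrt(2)   # antisymmetric part
X = a * W + b * V

ev = np.linalg.eigvals(X)
# Fraction of eigenvalues inside a slightly enlarged ellipse with half-axes
# (1+rho, 1-rho); by the elliptic law this fraction tends to 1 as n grows.
inside = (ev.real / (1.05 * (1 + rho))) ** 2 + (ev.imag / (1.05 * (1 - rho))) ** 2 <= 1
print(inside.mean())
```

At $n = 500$ the bulk of the spectrum already fills the ellipse; the small enlargement factor absorbs finite-$n$ fluctuations of the spectral radius.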
Proof of Proposition 3.1.
We first note that $\langle M\rangle = \mathrm{i}v$ by the definition of $M$ in (5.1). Thus, Proposition 3.1 follows directly from Theorem 5.1 with suitably chosen $\gamma$ and $p$, as
\[
\mathbb{P}\Big( \big|\langle G(\zeta,\eta)\rangle - \mathrm{i}v(\zeta,\eta)\big| \ge \frac{n^\varepsilon}{n\eta} \Big) \le \frac{n^p\eta^p}{n^{\varepsilon p}}\,\mathbb{E}\,\big|\langle G(\zeta,\eta) - M(\zeta,\eta)\rangle\big|^p \le \frac{n^p\eta^p}{n^{\varepsilon p}}\,\big(\|G(\zeta,\eta) - M(\zeta,\eta)\|_p^{\mathrm{av}}\big)^p
\]
due to Markov's inequality.

We use a standard argument, adjusted to our setting, to obtain eigenvector delocalisation from the resolvent control in the local law, see e.g. [28] for an early version of this argument.

Proof of Corollary 2.2.
Fix $w \in \mathbb{C}^n$ and $\varepsilon > 0$. Let $u \in V_\delta$ and $\zeta \in E_{\varrho,\delta}$ such that $Xu = \zeta u$. Thus, $H_\zeta (0,u)^t = 0$. We extend $(0,u)^t/\|(0,u)^t\|$ to an orthonormal basis $(0,u)^t/\|(0,u)^t\|, v_2, \ldots, v_{2n}$ of $\mathbb{C}^{2n}$ consisting of eigenvectors of $H_\zeta$ with corresponding eigenvalues $\lambda_1(\zeta) = 0$, $\lambda_2(\zeta), \ldots, \lambda_{2n}(\zeta)$. Hence, for any $x \in \mathbb{C}^{2n}$ and $\eta > 0$, the spectral theorem for $H_\zeta$ yields
\[
\operatorname{Im}\langle x, G(\zeta,\eta)x\rangle = \frac{|\langle x, (0,u)^t\rangle|^2}{\|(0,u)^t\|^2\,\eta} + \sum_{i=2}^{2n} \frac{\eta\,|\langle x, v_i\rangle|^2}{\lambda_i(\zeta)^2 + \eta^2} \ge \frac{|\langle w, u\rangle|^2}{\eta\,\|u\|^2}, \tag{5.5}
\]
where we chose $x = (0,w)^t$ in the last step. Owing to (5.5), for any $\eta > 0$, we have the inclusion of events
\[
\big\{\exists\, u \in V_\delta : |\langle w, u\rangle| \ge n^{-1/2+\varepsilon}\|w\|\|u\|\big\} \subset \big\{\exists\, \zeta \in E_{\varrho,\delta} : \eta\,|\langle x, G(\zeta,\eta)x\rangle| \ge n^{-1+2\varepsilon}\|x\|^2\big\}, \tag{5.6}
\]
where $x := (0,w)^t$.

We now show that $|\langle x, G(\zeta,\eta)x\rangle|$ is bounded even on small scales $\eta$. Since $\|M\| \lesssim 1$, the bound on $\|G - M\|_p$ in (5.4) with suitably chosen $\gamma$ and $p$ as well as Markov's inequality imply that, for each $\gamma \in (0,1)$, the bound $|\langle x, G(\zeta,\eta)x\rangle| \lesssim \|x\|^2$ holds with very high probability uniformly for all $\zeta \in E_{\varrho,\gamma}$, where $\eta = n^{-1+\gamma}$. As $\zeta \mapsto \langle x, G(\zeta,\eta)x\rangle$ is Lipschitz-continuous with Lipschitz constant $\lesssim n^2$ for $\eta \ge n^{-1}$, a grid- and continuity-argument in $\zeta$ yields that, for any $\gamma \in (0,1)$, the bound $\sup_{\zeta\in E_{\varrho,\gamma}}|\langle x, G(\zeta,\eta)x\rangle| \lesssim \|x\|^2$ holds with very high probability for $\eta = n^{-1+\gamma}$. This proves Corollary 2.2 due to the inclusion (5.6) with $\eta = n^{-1+\gamma}$ and sufficiently small $\gamma > 0$.

In the regime $\eta \ge$
$1$, Theorem 5.1 directly follows from [27, Theorem 2.1] (see also Proposition 5.3 below). Therefore, we focus on the regime $\eta \le 1$ in the following. The resolvent $G$ approximately satisfies the matrix Dyson equation
\[
1 + (\mathrm{i}\eta + Z + \mathcal{S}[M])M = 0, \tag{5.7}
\]
where $Z = -\mathbb{E}H \in \mathbb{C}^{2n\times 2n}$ and $\mathcal{S}$ is the natural extension of $\mathcal{S}$ from (3.4) to $\mathbb{C}^{2n\times 2n}$, i.e.
\[
Z = \begin{pmatrix} 0 & \zeta \\ \bar\zeta & 0 \end{pmatrix}, \qquad \mathcal{S}[A] = \begin{pmatrix} \langle A_{22}\rangle & \varrho\langle A_{12}\rangle \\ \varrho\langle A_{21}\rangle & \langle A_{11}\rangle \end{pmatrix}, \qquad A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}. \tag{5.8}
\]
The matrix $M$ defined in (5.1) solves (5.7).

For consistency with the presentation in [27] we introduce the self-energy operator $\widehat{\mathcal{S}}[A] := \mathbb{E}(H+Z)A(H+Z)$, which is a slight modification of $\mathcal{S}$. Indeed, one easily verifies that
\[
\widehat{\mathcal{S}}[A] = \mathcal{S}[A] + \begin{pmatrix} P\odot A_{22}^t & Q\odot A_{21}^t \\ Q^*\odot A_{12}^t & P\odot A_{11}^t \end{pmatrix}, \tag{5.9}
\]
where $P = (p_{ij})_{i,j=1}^n$ and $Q = (q_{ij})_{i,j=1}^n$ are the $n\times n$-matrices with entries
\[
p_{ij} := \mathbb{E}\,x_{ij}x_{ji}, \quad p_{ii} := 0, \qquad q_{ij} := \mathbb{E}\,x_{ij}^2, \quad q_{ii} := 0 \tag{5.10}
\]
for $i, j \in [\![n]\!]$ with $i \ne j$, and $\odot$ denotes the entrywise Hadamard product. From the definition of the resolvent $G$ we see that
\[
1 + (\mathrm{i}\eta + Z + \widehat{\mathcal{S}}[G])G = D, \qquad D := (H + Z + \widehat{\mathcal{S}}[G])G. \tag{5.11}
\]
We interpret this equation as a perturbation of (5.7) with error matrix $D$. This point of view is justified by the following proposition that we import from [27, Theorem 4.1].

Proposition 5.2 (Bound on error matrix). Let $\varepsilon > 0$ and $p \in \mathbb{N}$. Then there is $C_* > 0$ such that, uniformly for $\eta \in [n^{-1}, 1]$ and $\zeta \in D$, we have the following bounds on the error matrix:
\[
\|D\|_p \lesssim_{p,\varepsilon} n^\varepsilon \sqrt{\frac{\|\operatorname{Im}G\|_q}{n\eta}}\,\big(1+\|G\|_q\big)^{C_*}\big(1+n^{-1/2}\|G\|_q\big)^{C_*p}, \tag{5.12}
\]
\[
\|D\|_p^{\mathrm{av}} \lesssim_{p,\varepsilon} n^\varepsilon\,\frac{\|\operatorname{Im}G\|_q}{n\eta}\,\big(1+\|G\|_q\big)^{C_*}\big(1+n^{-1/2}\|G\|_q\big)^{C_*p} \tag{5.13}
\]
with $q = C_*p/\varepsilon$ (choose $\mu = 1/2$ in [27, Theorem 4.1]).

In the remainder of this section we will infer the bounds (5.4) for $\Delta := G - M$ from the bounds on the error matrix in Proposition 5.2. We subtract (5.7) from (5.11) to find
\[
\Delta - M(\mathcal{S}[\Delta])M = M(\mathcal{S}[\Delta])\Delta - M\big(D + ((\widehat{\mathcal{S}}-\mathcal{S})[G])G\big). \tag{5.14}
\]
This equation is equivalent to the last equation in the proof of [27, Theorem 5.2] with the translation $\Delta \to V$, $\widehat{\mathcal{S}} \to \mathcal{S}$, $M \to M$.
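The resolvent identities underlying this perturbation theory can be checked numerically in a standalone sketch (a generic Hermitian test matrix with arbitrary size and spectral parameter, not the specific Hermitisation $H_\zeta$); in particular the Ward identity $G^*G = GG^* = \operatorname{Im}G/\eta$ used below is an exact algebraic identity:

```python
import numpy as np

rng = np.random.default_rng(2)
n, eta = 50, 0.3

B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (B + B.conj().T) / 2                      # Hermitian test matrix
G = np.linalg.inv(H - 1j * eta * np.eye(n))   # resolvent G = (H - i*eta)^{-1}

ImG = (G - G.conj().T) / 2j                   # Im G = (G - G*)/(2i)
# Ward identity: G* G = G G* = Im G / eta.
print(np.allclose(G.conj().T @ G, ImG / eta))  # True
print(np.allclose(G @ G.conj().T, ImG / eta))  # True
```

The identity follows from $G - G^* = G\big((H+\mathrm{i}\eta)-(H-\mathrm{i}\eta)\big)G^* = 2\mathrm{i}\eta\,GG^*$, so the check succeeds for every Hermitian $H$ and $\eta > 0$, up to floating-point error.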
However, in (5.14) we have written the linear and the quadratic term in $\Delta$ in terms of $\mathcal{S}$ instead of $\widehat{\mathcal{S}}$ and have, thus, added the additional error term $((\widehat{\mathcal{S}}-\mathcal{S})[G])G$. Note that $M$ also satisfies (5.7) with $\mathcal{S}$ replaced by $\widehat{\mathcal{S}}$ since $\widehat{\mathcal{S}}[M] = \mathcal{S}[M]$ due to the diagonal structure of the blocks of $M$ in (5.1). The purpose of writing (5.14) in this fashion will become apparent when we take partial traces inside the $2\times 2$ block structure below. The linear map $\Delta \mapsto \Delta - M(\mathcal{S}[\Delta])M$ acting on $\Delta$ on the left-hand side of (5.14) has a non-trivial kernel. This instability also prevents us from using [27, Theorem 5.2] directly to show stability of (5.14). The bounded invertibility of the stability operator in [27] was ensured by [27, Assumption (E)], which is violated by $\widehat{\mathcal{S}}$ as well as $\mathcal{S}$.

We will now show how stability of (5.14) is nevertheless achieved when the equation is restricted to a codimension one subspace. First we reduce (5.14) to a matrix equation on $\mathbb{C}^{2\times 2}$ through the partial trace operation $\mathbb{C}^{2n\times 2n} \to \mathbb{C}^{2\times 2}$, $A \mapsto \mathring{A}$, defined via
\[
\mathring{A} := \begin{pmatrix} \langle A_{11}\rangle & \langle A_{12}\rangle \\ \langle A_{21}\rangle & \langle A_{22}\rangle \end{pmatrix}, \qquad A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix},
\]
with $A_{ij} \in \mathbb{C}^{n\times n}$ for all $i, j = 1, 2$. We apply this operation to (5.14) and find
\[
\mathcal{L}[\mathring{\Delta}] = \mathring{M}(\mathcal{S}[\mathring{\Delta}])\mathring{\Delta} - \mathring{M}\,\mathring{\widetilde{D}}, \qquad \widetilde{D} := D + ((\widehat{\mathcal{S}}-\mathcal{S})[G])G, \tag{5.15}
\]
where $\mathring{M}$ is the $2\times 2$ matrix from (5.1) and $\mathcal{L} : \mathbb{C}^{2\times 2} \to \mathbb{C}^{2\times 2}$ is the stability operator of the Dyson equation, (3.4), defined in (4.10). Recall from Proposition 4.3 that $\mathcal{L}$ leaves $E_-^\perp$ invariant and that
\[
\kappa_\delta := \sup_{\zeta\in E_{\varrho,\delta}} \big\|\mathcal{L}^{-1}|_{E_-^\perp}\big\| \lesssim_\delta 1. \tag{5.16}
\]
Moreover, $\mathring{\Delta} \perp E_-$ because $\langle G_{11}\rangle = \langle G_{22}\rangle$ (cf. [8, Lemma B.5]) as well as $\langle M_{11}\rangle = \mathrm{i}v = \langle M_{22}\rangle$ (cf. (5.1)). Thus, taking the $\|\cdot\|_p$-norm from (5.2) after inverting $\mathcal{L}$ on $E_-^\perp$ in (5.15) and multiplying with $\mathbb{1}(\|\mathring{\Delta}\| \le c/\kappa_\delta)$ with some small enough constant $c > 0$ yields
\[
\big\|\mathring{\Delta}\,\mathbb{1}(\|\mathring{\Delta}\| \le c/\kappa_\delta)\big\|_p \lesssim \kappa_\delta\,\|\mathring{\widetilde{D}}\|_p, \tag{5.17}
\]
uniformly for $\zeta \in E_{\varrho,\delta}$. Recall the comparability of the norms from (5.3) for $l = 2$. Here, $\|\mathring{\widetilde{D}}\|_p$ satisfies the bound
\[
\|\mathring{\widetilde{D}}\|_p \lesssim n^{-1}\|G^*G\|_p + \|D\|_p^{\mathrm{av}} = \frac{\|\operatorname{Im}G\|_p}{n\eta} + \|D\|_p^{\mathrm{av}}. \tag{5.18}
\]
The first inequality in (5.18) holds due to the definition of $\widetilde{D}$ in (5.15) and
\[
\big\|((\widehat{\mathcal{S}}-\mathcal{S})[G])G\big\|_p^{\mathrm{av}} \lesssim n^{-1}\|G^*G\|_p, \tag{5.19}
\]
which itself follows from the expression for $\widehat{\mathcal{S}}-\mathcal{S}$ in (5.9), from the fact that the matrices $Q$ and $P$ from (5.10) satisfy
\[
|p_{ij}| + |q_{ij}| \lesssim n^{-1}, \tag{5.20}
\]
and from the general inequality
\[
\big(\|AB\|_p^{\mathrm{av}}\big)^2 \le \|A^*A\|_p^{\mathrm{av}}\,\|B^*B\|_p^{\mathrm{av}} \le \|A^*A\|_p\,\|B^*B\|_p
\]
for any random matrices $A$, $B$. For the equality in (5.18) we used the Ward identity
\[
G^*G = GG^* = \frac{\operatorname{Im}G}{\eta}. \tag{5.21}
\]
We will now use (5.17) together with (5.18) to show that $\|\Delta\|_p \ll 1$: a bound $\|\Delta\|_{2p} \lesssim 1$ for all $p \in \mathbb{N}$ implies $\|\mathring{\widetilde{D}}\|_p \ll 1$ for all $p \in \mathbb{N}$ because $\|G\|_p \le \|M\|_p + \|\Delta\|_p \lesssim 1$ and $n\eta \gg$
$1$. This, in turn, implies $\|\mathring{\Delta}\|_p \ll 1$ because of (5.17). Finally, we estimate $\|\Delta\|_p$ in terms of $\|\mathring{\Delta}\|_p$ to get $\|\Delta\|_p \ll 1$. Altogether this argument shows that $\|\Delta\|_{2p} \lesssim 1$ implies $\|\Delta\|_p \ll 1$ on $A_{\delta,\gamma}$ for any $\gamma > 0$, where we introduced the parameter set $A_{\delta,\gamma} := E_{\varrho,\delta} \times [n^{-1+\gamma}, 1]$. This implication can be bootstrapped from a regime far away from the local regime $\eta \sim n^{-1}$, i.e. from a global law with $\eta \sim 1$.

The global law is imported from [27, Theorem 2.1]. Using $\|G\| \le \eta^{-1} \le n$ on $A_{\delta,\gamma}$ in combination with [27, Lemma 5.4 (i)], we obtain the following version of [27, Theorem 2.1] since $M$ in [27] coincides with $M$ from (5.1) due to $\widehat{\mathcal{S}}[M] = \mathcal{S}[M]$.

Proposition 5.3 (Global law). There is a universal constant $c > 0$ such that for any $\varepsilon > 0$ the global law
\[
\|\Delta\|_p \lesssim_{p,\varepsilon} \frac{n^\varepsilon}{\sqrt{n\eta}}, \qquad \|\Delta\|_p^{\mathrm{av}} \lesssim_{p,\varepsilon} \frac{n^\varepsilon}{n\eta}
\]
holds for $\zeta \in D$ and $\eta \in [n^{-c\varepsilon}, n]$.

Before we formalise the bootstrapping in Lemma 5.4 we require a few preparations. Through (5.14) and the definition of $\mathcal{S}$ in (5.8) in terms of partial traces we bound $\|\Delta\|_p$ in terms of $\|\mathring{\Delta}\|_p$ (the norm of the partial trace of $\Delta$) and the error matrix, namely
\[
\|\Delta\|_p^{\#} \lesssim \|\mathring{\Delta}\|_{2p} + \|\mathring{\Delta}\|_{2p}\,\|\Delta\|_{2p}^{\#} + \|D\|_p^{\#} + \big\|((\widehat{\mathcal{S}}-\mathcal{S})[G])G\big\|_p^{\#}, \tag{5.22}
\]
where $\# = \mathrm{iso}, \mathrm{av}$, and we used $\|M\| \lesssim 1$ and $\|\mathring{\Delta}\|_p \sim \|\mathring{\Delta}\|_p^{\mathrm{av}}$ because $\mathring{\Delta} \in \mathbb{C}^{2\times 2}$. The additional error term in (5.22) which is not covered by Proposition 5.2 satisfies the bounds
\[
\big\|((\widehat{\mathcal{S}}-\mathcal{S})[G])G\big\|_p \lesssim n^{-1/2}\,\|G\|_{2p}\,\|GG^*\|_{2p}^{1/2} \le \|G\|_{2p}\Big(\frac{\|\operatorname{Im}G\|_{2p}}{n\eta}\Big)^{1/2}. \tag{5.23}
\]
The first inequality in (5.23) holds because for any normalised $x, y \in \mathbb{C}^{2n}$ we have
\[
\mathbb{E}\,\big|\langle x, ((\widehat{\mathcal{S}}-\mathcal{S})[G])Gy\rangle\big|^p \le \mathbb{E}\,\big\|((\widehat{\mathcal{S}}-\mathcal{S})[G])^*x\big\|^p\,\|Gy\|^p \lesssim n^{-p/2}\,\|G^*G\|_{2p}^{p/2}\,\|G\|_{2p}^p.
\]
Here, in the last step we used (5.9), (5.20) and
\[
\mathbb{E}\,\|(R\odot A)x\|^{2p} = \mathbb{E}\Big(\sum_{i=1}^{2n} |\langle e_i, A\widetilde{x}_i\rangle|^2\Big)^p \le (2n)^p\,\mathbb{E}\big[\|A\|^{2p}\big]\,\max_i \|\widetilde{x}_i\|^{2p} \le (2n)^{2p}\,\big(\|A^*A\|_p^{\mathrm{av}}\big)^p\,\max_i \|\widetilde{x}_i\|^{2p}
\]
for any random matrix $A \in \mathbb{C}^{2n\times 2n}$ and deterministic $R \in \mathbb{C}^{2n\times 2n}$, with $\widetilde{x}_i := (r_{ij}x_j)_j$ and $e_i = (\delta_{ij})_j$. The second inequality in (5.23) follows from the Ward identity, (5.21).

Finally, for any random matrix $A \in \mathbb{C}^{l\times l}$ we define a norm that tracks several $p$-norms simultaneously through
\[
\|A\|_{\star,K}^{\#} := \sum_{k=0}^{K} n^{-k/K}\,\|A\|_{2^k}^{\#} + n^{-1}\|A\|_\infty^{\#}, \tag{5.24}
\]
for any fixed $K \in \mathbb{N}$, and $\|A\|_{\star,K} := \|A\|_{\star,K}^{\mathrm{iso}}$. This norm is convenient in the following to express inequalities such as (5.22), that estimate the $p$-norm of $\Delta$ in terms of its $2p$-norm, in a fixed norm.
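The mechanism behind the $\star$-norm can be isolated in a toy computation (purely illustrative numbers, unrelated to the actual constants of the proof): an inequality that controls each level by the next dyadic level closes after finitely many steps, because the trivial a priori bound at the top level enters only with a tiny weight.

```python
# Toy model of the self-improving dyadic estimate behind the *-norm (5.24):
# suppose a_p <= eps * (1 + a_{2p}) for p = 1, 2, 4, ..., with only the trivial
# bound a_p <= A at the top level. Unwinding K levels gives
#   a_1 <= eps + eps^2 + ... + eps^K + eps^K * A,
# which is O(eps) once K is large enough that eps^K * A <= eps.
eps, A, K = 1e-2, 1e6, 10

a = A  # trivial a priori bound at level p = 2^K
for _ in range(K):
    a = eps * (1 + a)  # apply the self-improving inequality once per level

print(a <= 2 * eps)  # True: final bound is of order eps
```

With these illustrative values, $\varepsilon^K A = 10^{-14}$ is negligible, so the huge starting bound has no influence on the final estimate; this is exactly the role the weights $n^{-k/K}$ play in (5.24).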
The $\star$-norm and the $p$-norms are bounded in terms of each other through
\[
n^{-k/K}\,\|A\|_{2^k} \le \|A\|_{\star,K} \le \big(1 - n^{-1/K}\big)^{-1}\|A\|_{2^K} + n^{-1}\|A\|_\infty. \tag{5.25}
\]
With these preparations we now formalise the bootstrapping step in which a rough bound on $\Delta$ is transported from the almost global regime $A_{\delta,\gamma}$ with $\gamma$ close to $1$ to the local regime $A_{\delta,\gamma}$ with $\gamma \ll 1$.

Lemma 5.4 (Bootstrapping). There is a constant $c_* > 0$ depending only on the distribution of the matrix entries such that $\|\Delta\|_p \lesssim_{p,\delta,\gamma} n^{-\gamma/4}$ for all $p \in \mathbb{N}$ on $A_{\delta,\gamma}$ implies $\|\Delta\|_p \lesssim_{p,\delta,\gamma} n^{-(1-c_*)\gamma/4}$ for all $p \in \mathbb{N}$ on $A_{\delta,(1-c_*)\gamma}$.

Proof. For transparency of the argument we write $\delta_* := c_*\gamma$, where we will choose the constant $c_* > 0$ sufficiently small below. We omit $\delta$ in the notation for the remainder of this proof. In particular, all constants implicit in the comparison relation $\lesssim$ may depend on $\delta$ and we write $A_\gamma = A_{\delta,\gamma}$.

We assume that $\|\Delta\|_p \lesssim_{p,\gamma} n^{-\gamma/4}$ holds for all $p$ on $A_\gamma$. Since $\|M\| \lesssim 1$, this implies that $G$ is bounded on $A_\gamma$, i.e. that $\|G\|_p \lesssim_{p,\gamma} 1$. The function $\eta \mapsto \eta\,\|G(\zeta,\eta)\|_p$ is monotonically increasing, which is seen by checking positivity of its derivative. In particular, $\|G(\zeta, n^{-\delta_*}\eta)\|_p \le n^{\delta_*}\|G(\zeta,\eta)\|_p$. We conclude that $\|G\|_p \lesssim_{p,\gamma} n^{\delta_*}$ holds on $A_{\gamma-\delta_*}$. This simplifies the bounds on the error matrix on $A_{\gamma-\delta_*}$ from Proposition 5.2 to
\[
\|D\|_p \lesssim_{p,\varepsilon,\gamma} \frac{n^{\varepsilon + C_*\delta_* + \delta_*/2}}{\sqrt{n\eta}} \lesssim_\gamma n^{-\gamma/4}, \qquad \|D\|_p^{\mathrm{av}} \lesssim_{p,\varepsilon,\gamma} \frac{n^{\varepsilon + C_*\delta_* + \delta_*}}{n\eta} \lesssim_\gamma n^{-\gamma/2}. \tag{5.26}
\]
For the final bounds we used that $n\eta \ge n^{\gamma-\delta_*}$ by the definition of $A_{\gamma-\delta_*}$ and we choose $\varepsilon = \delta_* = c_*\gamma$ for sufficiently small $c_* > 0$, depending on the distribution of the matrix entries through $C_*$.

From (5.17) we find that for every $p \in \mathbb{N}$ the bound
\[
\big\|\mathring{\Delta}\,\mathbb{1}(\|\mathring{\Delta}\| \le c/\kappa_\delta)\big\|_p \lesssim_{p,\gamma} n^{-\gamma/2} \tag{5.27}
\]
holds on $A_{\gamma-\delta_*}$, where we used (5.18), (5.26) and $\|G\|_p \lesssim_{p,\gamma} n^{\delta_*}$. By Markov's inequality and (5.3), the inequality (5.27) implies
\[
\mathbb{P}\big( n^{-\gamma/4} \le \|\mathring{\Delta}\| \le c/\kappa_\delta \big) \lesssim_{p,\gamma} n^{-\gamma p/4}.
\]
We take a union bound over the points of an $n^{-3}$-grid in $A_{\gamma-\delta_*}$ and use the Lipschitz continuity $|\partial_\zeta\|\mathring{\Delta}\|| + |\partial_\eta\|\mathring{\Delta}\|| \lesssim \eta^{-2} \le n^2$ for every realisation of $\|\mathring{\Delta}\|$ to conclude
\[
\mathbb{P}\big( \exists\,(\zeta,\eta) \in A_{\gamma-\delta_*} : n^{-\gamma/4} \le \|\mathring{\Delta}\| \le n^{-\gamma/8} \big) \lesssim_{p,\gamma} n^{-p} \tag{5.28}
\]
for every $p \in \mathbb{N}$. The Lipschitz-continuity of $M$ follows from Corollary 4.6. By using Markov's inequality again in an analogous argument we infer
\[
\mathbb{P}\big( \exists\,(\zeta,\eta) \in A_{\gamma} : \|\mathring{\Delta}\| \ge n^{-\gamma/8} \big) \lesssim_{p,\gamma} n^{-p} \tag{5.29}
\]
from the assumption $\|\Delta\|_p \lesssim_{p,\gamma} n^{-\gamma/4}$ for all $p \in \mathbb{N}$ on $A_\gamma$. Since $\|\mathring{\Delta}\|$ is continuous in $\eta$, the combination of (5.28) and (5.29) implies
\[
\mathbb{P}\big( \exists\,(\zeta,\eta) \in A_{\gamma-\delta_*} : \|\mathring{\Delta}\| \ge n^{-\gamma/4} \big) \lesssim_{p,\gamma} n^{-p}.
\]
This, in turn, allows us to estimate the $p$-norm of $\mathring{\Delta}$ via
\[
\|\mathring{\Delta}\|_p \le n^{-\gamma/4} + 2\eta^{-1}\,\mathbb{P}\big( \|\mathring{\Delta}\| \ge n^{-\gamma/4} \big)^{1/p} \lesssim_{p,\gamma} n^{-\gamma/4},
\]
where we also used $\|\mathring{\Delta}\| \lesssim \eta^{-1}$.

We use this bound, as well as (5.23) and $\|G\|_p \lesssim_{p,\gamma} n^{\delta_*}$, in (5.22) to see the first inequality in
\[
\|\Delta\|_p \lesssim_{p,\gamma} n^{-\gamma/4} + n^{-\gamma/4}\,\|\Delta\|_{2p} + \|D\|_p + \frac{n^{3\delta_*/2}}{\sqrt{n\eta}} \lesssim_{p,\gamma} n^{-\gamma/4} + n^{-\gamma/4}\,\|\Delta\|_{2p} + n^{2c_*\gamma - \gamma/2} \lesssim n^{-\gamma/4}\big(1 + \|\Delta\|_{2p}\big) \tag{5.30}
\]
for all $p \in \mathbb{N}$ with $c_* \le 1/10$. For the second inequality we used (5.26) and $n\eta \ge n^{\gamma-\delta_*} = n^{(1-c_*)\gamma}$. In (5.30) the $p$-norm of $\Delta$ is bounded in terms of the $2p$-norm. The $\star$-norm from (5.24) is designed to handle this problem. Summing up (5.30) over $p = 2^k$ implies
\[
\|\Delta\|_{\star,K} \lesssim_{K,\gamma} n^{-\gamma/4} + n^{-\gamma/4 + 1/K}\,\|\Delta\|_{\star,K} + n^{-1}\|\Delta\|_\infty.
\]
For $K \ge 8/\gamma$ and with $\|\Delta\|_\infty \lesssim \eta^{-1} \le n$ we infer $\|\Delta\|_{\star,K} \lesssim_{K,\gamma} n^{-\gamma/4}$. This finishes the proof since
\[
\|\Delta\|_{2^k} \le n^{k/K}\,\|\Delta\|_{\star,K} \lesssim_{K,\gamma} n^{k/K - \gamma/4} \lesssim_{k,\gamma} n^{-(1-c_*)\gamma/4}
\]
holds by choosing $K$ sufficiently large, depending on $k$ and $\gamma$.

By the isotropic global law from Proposition 5.3 we see that the bound $\|\Delta\|_p \lesssim_{p,\delta,\gamma} n^{-\gamma/4}$ is satisfied on $A_{\delta,\gamma}$ for some $\gamma$ close to $1$. Thus, repeated application of Lemma 5.4 implies the following corollary.

Corollary 5.5 (Weak isotropic local law). For any $\gamma > 0$ the bound $\|\Delta\|_p \lesssim_{p,\gamma} n^{-\gamma/4}$ holds on $A_{\gamma,\gamma}$.

Armed with this rough bound that holds down to all mesoscopic scales, i.e. on $A_{\gamma,\gamma}$ for all $\gamma > 0$, we now prove the local law for $H_\zeta$.

Proof of Theorem 5.1.
By Corollary 5.5 and $\|M\| \lesssim 1$, we have a uniform bound on the resolvent on all of $A_{\gamma,\gamma}$, i.e. $\|G\|_p \lesssim_{p,\gamma} 1$. Proposition 5.2 then implies
\[
\|D\|_p \lesssim_{p,\varepsilon,\gamma} n^\varepsilon \sqrt{\frac{\|\operatorname{Im}G\|_q}{n\eta}}, \qquad \|D\|_p^{\mathrm{av}} \lesssim_{p,\varepsilon,\gamma} n^\varepsilon\,\frac{\|\operatorname{Im}G\|_q}{n\eta}. \tag{5.31}
\]
Together with (5.18) and (5.31), we use (5.15) and (5.16) to see
\[
\|\mathring{\Delta}\|_p \lesssim_{p,\varepsilon,\gamma} \|\mathring{\Delta}\|_{2p}^2 + n^\varepsilon\,\frac{\|\operatorname{Im}G\|_q}{n\eta} \lesssim_{p,\gamma} n^{-\gamma/4}\,\|\mathring{\Delta}\|_{2p} + n^\varepsilon\,\frac{\|\operatorname{Im}G\|_q}{n\eta}, \tag{5.32}
\]
where we used the rough bound from Corollary 5.5. We sum up over $p = 2^k$ and use $\|\mathring{\Delta}\|_\infty \lesssim \eta^{-1}$ to get an inequality for the $\star$-norm defined in (5.24), namely
\[
\|\mathring{\Delta}\|_{\star,K} \lesssim_{K,\varepsilon,\gamma} n^{-\gamma/4 + 1/K}\,\|\mathring{\Delta}\|_{\star,K} + \frac{n^\varepsilon}{n\eta}.
\]
By choosing $\varepsilon > 0$ small and $K \in \mathbb{N}$ large enough, we conclude
\[
\|\mathring{\Delta}\|_{\star,K} \lesssim_{K,\gamma} \frac{n^{\gamma/4}}{n\eta}. \tag{5.33}
\]
Now we use (5.32), (5.31), (5.23) and (5.19) in (5.22) to find
\[
\|\Delta\|_p \lesssim_{p,\varepsilon,\gamma} \|\mathring{\Delta}\|_{2p} + \|\mathring{\Delta}\|_{2p}\,\|\Delta\|_{2p} + n^\varepsilon \sqrt{\frac{\|\operatorname{Im}G\|_q}{n\eta}}, \qquad \|\Delta\|_p^{\mathrm{av}} \lesssim_{p,\varepsilon,\gamma} \|\mathring{\Delta}\|_{2p} + \|\mathring{\Delta}\|_{2p}\,\|\Delta\|_{2p}^{\mathrm{av}} + n^\varepsilon\,\frac{\|\operatorname{Im}G\|_q}{n\eta}.
\]
We use (5.33), (5.25), sum up over $p = 2^k$ and employ $\|\Delta\|_\infty \lesssim \eta^{-1}$ to translate these bounds to the $\star$-norm, and conclude (5.4) with (5.25).

References

[1] A. Aggarwal, P. Lopatto, and H.-T. Yau,
GOE statistics for Lévy matrices, preprint (2019), arXiv:1806.07363.
[2] O. H. Ajanki, L. Erdős, and T. Krüger, Singularities of solutions to quadratic vector equations on the complex upper half-plane, Comm. Pure Appl. Math. (2017), no. 9, 1672–1705. MR 3684307
[3] , Universality for general Wigner-type matrices, Probab. Theory Related Fields (2017), no. 3-4, 667–727. MR 3719056
[4] , Stability of the matrix Dyson equation and random matrices with correlations, Probab. Theory Related Fields (2019), no. 1-2, 293–373. MR 3916109
[5] J. Alt, L. Erdős, and T. Krüger, Local inhomogeneous circular law, Ann. Appl. Probab. (2018), no. 1, 148–203. MR 3770875
[6] , Spectral radius of random matrices with independent entries, to appear in Probab. Math. Phys. (2021), arXiv:1907.13631.
[7] J. Alt, L. Erdős, T. Krüger, and Yu. Nemish, Location of the spectrum of Kronecker random matrices, Ann. Inst. H. Poincaré Probab. Statist. (2019), no. 2, 661–696.
[8] J. Alt and T. Krüger, Inhomogeneous circular law for correlated matrices, preprint (2020), arXiv:2005.13533.
[9] G. W. Anderson, A local limit law for the empirical spectral distribution of the anticommutator of independent Wigner matrices, Ann. Inst. Henri Poincaré Probab. Stat. (2015), no. 3, 809–841. MR 3365962
[10] Z. D. Bai, Circular law, Ann. Probab. (1997), no. 1, 494–529. MR 1428519
[11] Z. Bao, L. Erdős, and K. Schnelli, Local single ring theorem on optimal scale, Ann. Probab. (2019), no. 3, 1270–1334. MR 3945747
[12] R. Bauerschmidt, J. Huang, and H.-T. Yau, Local Kesten-McKay law for random regular graphs, Comm. Math. Phys. (2019), no. 2, 523–636. MR 3962004
[13] R. Bauerschmidt, A. Knowles, and H.-T. Yau, Local semicircle law for random regular graphs, Comm. Pure Appl. Math. (2017), no. 10, 1898–1960. MR 3688032
[14] M. Bender, Edge scaling limits for a family of non-Hermitian random matrix ensembles, Probab. Theory Related Fields (2010), no. 1-2, 241–271. MR 2594353
[15] A. Bloemendal, L. Erdős, A. Knowles, H.-T. Yau, and J. Yin, Isotropic local laws for sample covariance and generalized Wigner matrices, Electron. J. Probab. (2014), no. 33, 53. MR 3183577
[16] C. Bordenave and A. Guionnet, Localization and delocalization of eigenvectors for heavy-tailed random matrices, Probab. Theory Related Fields (2013), no. 3-4, 885–953. MR 3129806
[17] P. Bourgade, L. Erdős, and H.-T. Yau, Universality of general β-ensembles, Duke Math. J. (2014), no. 6, 1127–1190. MR 3192527
[18] P. Bourgade, H.-T. Yau, and J. Yin, Local circular law for random matrices, Probab. Theory Related Fields (2014), no. 3-4, 545–595. MR 3230002
[19] , Random band matrices in the delocalized phase I: Quantum unique ergodicity and universality, Comm. Pure Appl. Math. (2020), no. 7, 1526–1596. MR 4156609
[20] G. Cipolloni, L. Erdős, and D. Schröder, Edge universality for non-Hermitian random matrices, Probab. Theory Related Fields (2020), available at https://doi.org/10.1007/s00440-020-01003-7.
[21] A. Crisanti and H. Sompolinsky, Dynamics of spin systems with randomly asymmetric bonds: Langevin dynamics and a spherical model, Phys. Rev. A (3) (1987), no. 10, 4922–4939. MR 918489
[22] L. Erdős, A. Knowles, H.-T. Yau, and J. Yin, Delocalization and diffusion profile for random band matrices, Comm. Math. Phys. (2013), no. 1, 367–416. MR 3085669
[23] , Spectral statistics of Erdős-Rényi graphs I: Local semicircle law, Ann. Probab. (2013), no. 3B, 2279–2375. MR 3098073
[24] L. Erdős, T. Krüger, and Yu. Nemish, Scattering in quantum dots via noncommutative rational functions, preprint (2019), arXiv:1911.05112.
[25] , Local laws for polynomials of Wigner matrices, J. Funct. Anal. (2020), no. 12, 108507, 59. MR 4078529
[26] L. Erdős, T. Krüger, and D. Renfrew, Randomly coupled differential equations with elliptic correlations, preprint (2019), arXiv:1908.05178.
[27] L. Erdős, T. Krüger, and D. Schröder, Random matrices with slow correlation decay, Forum Math. Sigma (2019), Paper No. e8, 89. MR 3941370
[28] L. Erdős, B. Schlein, and H.-T. Yau, Local semicircle law and complete delocalization for Wigner random matrices, Comm. Math. Phys. (2009), no. 2, 641–655. MR 2481753
[29] L. Erdős and H.-T. Yau, A dynamical approach to random matrix theory, Courant Lecture Notes in Mathematics, vol. 28, Courant Institute of Mathematical Sciences, New York; American Mathematical Society, Providence, RI, 2017. MR 3699468
[30] L. Erdős, H.-T. Yau, and J. Yin, Universality for generalized Wigner matrices with Bernoulli distribution, J. Comb. (2011), no. 1, 15–81. MR 2847916
[31] Y. V. Fyodorov and B. A. Khoruzhenko, Nonlinear analogue of the May-Wigner instability transition, Proc. Natl. Acad. Sci. USA (2016), no. 25, 6827–6832. MR 3521630
[32] V. L. Girko, Circular law, Theory Probab. Appl. (1985), no. 4, 694–706.
[33] , An elliptic law, Dokl. Akad. Nauk Ukrain. SSR Ser. A (1985), no. 1, 56–59. MR 781018
[34] , Strong elliptic law, Random Oper. Stochastic Equations (1997), no. 3, 269–306. MR 1483014
[35] F. Götze, A. Naumov, and A. Tikhomirov, On minimal singular values of random matrices with correlated entries, Random Matrices Theory Appl. (2015), no. 2, 1550006, 30. MR 3356884
[36] , Local laws for non-Hermitian random matrices and their products, Random Matrices Theory Appl. (2020), no. 4, 2150004, 53. MR 4133077
[37] Y. He, A. Knowles, and R. Rosenthal, Isotropic self-consistent equations for mean-field random matrices, Probab. Theory Related Fields (2018), no. 1-2, 203–249. MR 3800833
[38] J. W. Helton, R. Rashidi Far, and R. Speicher, Operator-valued semicircular elements: solving a quadratic matrix equation with positivity constraints, Int. Math. Res. Not. IMRN (2007), no. 22, Art. ID rnm086, 15. MR 2376207
[39] K. Johansson, From Gumbel to Tracy-Widom, Probab. Theory Related Fields (2007), no. 1-2, 75–112. MR 2288065
[40] A. Knowles and J. Yin, The isotropic semicircle law and deformation of Wigner matrices, Comm. Pure Appl. Math. (2013), no. 11, 1663–1750. MR 3103909
[41] M. Ledoux, Complex Hermite polynomials: from the semi-circular law to the circular law, Commun. Stoch. Anal. (2008), no. 1, 27–32. MR 2446909
[42] J. O. Lee, K. Schnelli, B. Stetler, and H.-T. Yau, Bulk universality for deformed Wigner matrices, Ann. Probab. (2016), no. 3, 2349–2425. MR 3502606
[43] K. Luh and S. O'Rourke, Eigenvector delocalization for non-Hermitian random matrices and applications, Random Structures Algorithms (2020), no. 1, 169–210. MR 4120597
[44] A. Lytova and K. Tikhomirov, On delocalization of eigenvectors of random non-Hermitian matrices, Probab. Theory Related Fields (2020), no. 1-2, 465–524. MR 4095020
[45] D. Martí, N. Brunel, and S. Ostojic, Correlations between synapses in pairs of neurons slow down dynamics in randomly connected neural networks, Phys. Rev. E (2018), 062314.
[46] B. Mehlig and J. T. Chalker, Statistical properties of eigenvectors in non-Hermitian Gaussian random matrix ensembles, J. Math. Phys. (2000), no. 5, 3233–3256. MR 1755501
[47] A. Naumov, The elliptic law for random matrices, Vestnik Moskov. Univ. Ser. XV Vychisl. Mat. Kibernet. (2013), no. 1, 31–38, 48. MR 3114382
[48] Yu. Nemish, Local law for the product of independent non-Hermitian random matrices with independent entries, Electron. J. Probab. (2017), Paper No. 22, 35. MR 3622892
[49] H. H. Nguyen and S. O'Rourke, The elliptic law, Int. Math. Res. Not. IMRN (2015), no. 17, 7620–7689. MR 3403996
[50] S. O'Rourke and D. Renfrew, Low rank perturbations of large elliptic random matrices, Electron. J. Probab. (2014), no. 43, 65. MR 3210544
[51] M. Rudelson and R. Vershynin, Delocalization of eigenvectors of random matrices with independent entries, Duke Math. J. (2015), no. 13, 2507–2538. MR 3405592
[52] , No-gaps delocalization for general random matrices, Geom. Funct. Anal. (2016), no. 6, 1716–1776. MR 3579707
[53] S. Song, P. J. Sjöström, M. Reigl, S. Nelson, and D. B. Chklovskii, Highly nonrandom features of synaptic connectivity in local cortical circuits, PLOS Biol. (2005), no. 3.
[54] T. Tao and V. Vu, Random matrices: universality of ESDs and the circular law, Ann. Probab. (2010), no. 5, 2023–2065, With an appendix by Manjunath Krishnapur. MR 2722794
[55] , Random matrices: universality of local spectral statistics of non-Hermitian matrices, Ann. Probab. (2015), no. 2, 782–874. MR 3306005
[56] Y. Wang, H. Markram, P. H. Goodman, T. K. Berger, J. Ma, and P. S. Goldman-Rakic, Heterogeneity in the pyramidal network of the medial prefrontal cortex, Nat. Neurosci. (2006), no. 4, 534.
[57] E. P. Wigner, Characteristic vectors of bordered matrices with infinite dimensions, Ann. of Math. (1955), 548–564. MR 77805
[58] H. Xi, F. Yang, and J. Yin, Local circular law for the product of a deterministic matrix with a random matrix, Electron. J. Probab. (2017), Paper No. 60, 77. MR 3683369
[59] J. Yin, The local circular law III: general case, Probab. Theory Related Fields (2014), no. 3-4, 679–732. MR 3278919