On local laws for non-Hermitian random matrices and their products
F. GÖTZE, A. NAUMOV, AND A. TIKHOMIROV
Abstract.
The aim of this paper is to prove a local version of the circular law for non-Hermitian random matrices and its generalization to products of non-Hermitian random matrices under weak moment conditions. More precisely, we assume that the entries $X^{(q)}_{jk}$ of the non-Hermitian random matrices $X^{(q)}$, $1 \le j,k \le n$, $q = 1,\dots,m$, $m \ge 1$, satisfy $\mathbb E X^{(q)}_{jk} = 0$, $\mathbb E|X^{(q)}_{jk}|^2 = 1$ and $\mathbb E|X^{(q)}_{jk}|^{4+\delta} < \infty$ for some $\delta > 0$. It is shown that the local law holds on the optimal scale $n^{-a}$, $0 < a < 1/2$, up to some logarithmic factor. We further develop a Stein type method to estimate the perturbation of the equations for the Stieltjes transform of the limiting distribution. We also generalize the recent results [8], [47] and [37].

1. Introduction and main result
One of the main questions of Random Matrix Theory (RMT) is to investigate the limiting behaviour of the spectra of random matrices from different ensembles. In the current paper we study the case of products of non-Hermitian random matrices. More precisely, we consider a set of random non-Hermitian matrices $X^{(q)} = [X^{(q)}_{jk}]_{j,k=1}^n$, $q = 1,\dots,m$, $m \in \mathbb N$. Assume that $X^{(q)}_{jk}$, $1 \le j,k \le n$, $q = 1,\dots,m$, are independent random variables (r.v.) with zero mean. Note that the distribution of $X^{(q)}_{jk}$ may depend on $n$. Denote by $(\lambda_1(X),\dots,\lambda_n(X))$ the eigenvalues of the matrix
$$X := \frac{1}{n^{m/2}}\prod_{q=1}^m X^{(q)}.$$
For any set $B \in \mathcal B(\mathbb C)$ we introduce the counting function of the eigenvalues in $B$:
$$\mathcal N_B := \mathcal N_B(X) := \#\{1 \le k \le n : \lambda_k(X) \in B\}.$$
It is also convenient to denote by $\mu_n(\cdot)$ the empirical spectral distribution of $X$:
$$\mu_n(B) := \frac{1}{n}\mathcal N_B, \quad B \in \mathcal B(\mathbb C).$$
We first assume that $m = 1$. Denote
$$p^{(1)}(z) := \frac{1}{\pi}\,\mathbb 1[|z| \le 1], \quad z \in \mathbb C, \qquad (1.1)$$
and let $A(\cdot)$ be the Lebesgue measure on $\mathbb C$. By $\xrightarrow{w}$ we denote the weak convergence of probability measures. Then the following result is the well-known circular law.

Theorem 1.1 (Macroscopic circular law). Let $X_{jk}$, $1 \le j,k \le n$, be i.i.d. complex r.v. with $\mathbb E X_{jk} = 0$, $\mathbb E|X_{jk}|^2 = 1$. Then $\mu_n \xrightarrow{w} \mu^{(1)}$ a.s. as $n$ tends to infinity, where $d\mu^{(1)}(z) = p^{(1)}(z)\,dA(z)$.

Date: December 10, 2018.
Key words and phrases.
Random matrices, local circular law, product of non-Hermitian random matrices, Stieltjes transform, logarithmic potential, Stein's method.
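Theorem 1.1 can be illustrated numerically. The following sketch (not part of the paper's argument; the matrix size and radius are arbitrary choices) samples a real i.i.d. matrix and checks that the ESD of $X/\sqrt n$ assigns mass approximately $r^2$ to the disc $B(0,r)$, as the uniform law on the unit disc predicts.

```python
import numpy as np

# Monte Carlo illustration of the circular law (illustrative sketch):
# the eigenvalues of X/sqrt(n) for an i.i.d. matrix fill the unit disc
# uniformly, so mu_n(B(0, r)) -> r^2 for 0 < r <= 1.
rng = np.random.default_rng(0)
n = 500
X = rng.standard_normal((n, n))
eigs = np.linalg.eigvals(X / np.sqrt(n))

r = 0.7
frac = np.mean(np.abs(eigs) <= r)
print(frac)  # close to r**2 = 0.49
```

The same experiment with a product of several such matrices exhibits the radial profile of the density (1.2) introduced below.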
The circular law was first proven by Ginibre [21] in 1965 in the case when the $X_{jk}$ are standard complex Gaussian r.v. His proof was based on the joint density of $(\lambda_1(X),\dots,\lambda_n(X))$. If the $X_{jk}$ are complex (real) Gaussian r.v. we say that $X$ belongs to the complex (resp. real) Ginibre ensemble of random matrices. Here we also refer to the book of M. Mehta [35]. Later on the circular law was extended to more general classes of random entries by V. Girko [22]. Therefore the circular law is often referred to as the Girko–Ginibre circular law. It has been further extended in a number of papers, for instance in [5], [31], [32], [42], [45], [46]. In particular, F. Götze and A. Tikhomirov, see [32], established the circular law under the assumption that $\max_{j,k}\mathbb E|X_{jk}|^2\log^{1+\eta}(1+|X_{jk}|) < \infty$ for any $\eta > 0$. They also generalized it to the case of sparse random matrices. That is, let us define $X_\varepsilon := (np)^{-1/2}[X^{(1)}_{jk}\varepsilon_{jk}]_{j,k=1}^n$, where the $\varepsilon_{jk}$ are i.i.d. Bernoulli r.v. with parameter $p$. It follows from [32] that one may take $p \ge c\log n/n$ for some $c > 0$. A result with optimal moment conditions, see Theorem 1.1, was established by T. Tao and V. Vu in [46]. The progress made in [32], [42], [45], [46] was based on bounds for the least singular value of the shifted matrices $X - z\mathbb I$, $z \in \mathbb C$, due to M. Rudelson and R. Vershynin, see e.g. [44]. For a detailed account we refer the interested reader to the overview [7]. In applications the case of the non-homogeneous circular law is of considerable interest, which means dropping the assumption of identical distribution of the entries, while still assuming that $\mathbb E|X^{(1)}_{jk}|^2 = \sigma_{jk}^2$. In particular, the papers [32], [46] already deal with non-i.i.d. entries, but under the additional assumption that all $\sigma_{jk} = 1$. An extended model would require some appropriate conditions on the matrix $\Sigma := [\sigma_{jk}]_{j,k=1}^n$. For example, one may assume that $\Sigma$ is doubly stochastic. See e.g. [1], [12], [4]. The circular law may be further generalized to the case of dependent r.v. A typical example here is the case of matrices from Girko's elliptic ensemble. Here the pairs $(X_{jk}, X_{kj})$, $1 \le j < k \le n$, are i.i.d. random vectors with $\mathbb E X_{jk}X_{kj} = \rho$ for some $\rho$ with $|\rho| \le 1$. The global limiting distribution for the spectra of elliptic random matrices is given by the uniform law on the ellipse with semi-axes $1+\rho$ and $1-\rho$ respectively. We refer the interested reader to the papers [23], [36], [38], [27]. In the case $\rho = 1$ we get Wigner's semicircle law, [49]. In the case $\rho = 0$, under the additional assumption that $X$ belongs to Ginibre's ensemble, we again arrive at the circular law. For other models of non-Hermitian random matrices with dependent entries see, for instance, [1], [6], [2]. The main emphasis of the current paper is the generalization of the circular law to the case of arbitrary $m \ge 1$. We denote
$$p^{(m)}(z) := \frac{1}{\pi m}|z|^{\frac{2}{m}-2}\,\mathbb 1[|z| \le 1]. \qquad (1.2)$$
It is straightforward to check that $p^{(m)}(z)$ is the density of the $m$-th power of a random variable uniformly distributed on the unit disc. We state the following theorem on the macroscopic scale.

Theorem 1.2 (Products of random matrices, macroscopic regime). Let $m \in \mathbb N$ and let $X^{(q)}_{jk}$, $1 \le j,k \le n$, $q = 1,\dots,m$, be i.i.d. complex r.v. with $\mathbb E X^{(q)}_{jk} = 0$, $\mathbb E|X^{(q)}_{jk}|^2 = 1$. Then $\mu_n \xrightarrow{w} \mu^{(m)}$ in probability as $n$ tends to infinity, where $d\mu^{(m)}(z) = p^{(m)}(z)\,dA(z)$.

We refer here to the results of F. Götze and A. Tikhomirov [29] and S. O'Rourke and A. Soshnikov [41]. For products of Girko's elliptic random matrices, see [39] and [26]. The circular law and its generalisation to products of random matrices are valid, in particular, for all discs $B(z_0,r)$ with centre at $z_0$ and radius $r \gg n^{-1/2}$. Such sets typically contain a macroscopically large number of eigenvalues, that is, a number of order $n$. In particular, the statements of Theorems 1.1–1.2 may be formulated as follows:
$$\frac{1}{nr^2}\mathbb E\,\mathcal N_{B(z_0,r)} = \frac{1}{r^2}\int_{B(z_0,r)}p^{(m)}(z)\,dA(z) + \frac{R_n}{r^2}, \qquad (1.3)$$
where
$$\lim_{n\to\infty}R_n = 0. \qquad (1.4)$$
(A similar statement may be formulated for $\mathcal N_{B(z_0,r)}$ itself.) Unfortunately, for smaller radii, when $r$ tends to zero as $n$ goes to infinity, the number of eigenvalues ceases to be macroscopically large. In this case it is essential to describe the second term in (1.3) more precisely than in (1.4). We say that the local law holds if the second term in (1.3) tends to zero as $r = r(n)$ tends to zero. A series of results in this direction was recently proved by P. Bourgade, H.-T. Yau and J. Yin [8], [9], [51] and T. Tao and V. Vu [47] in the case $m = 1$. They derived the local version of Theorem 1.1 up to the optimal scale $n^{-a}$, $0 < a < 1/2$. In [8], [9], [51] the local circular law was proved under the assumption of sub-exponential tails for the distribution of the entries (or assuming finite moments of all orders). In [47] it was proved under similar assumptions by means of the so-called four moment theorem, which requires that the first four moments of $X_{jk}$ match the corresponding moments of the standard Gaussian distribution. We also refer to the recent results [4] and [50]. The general case of $m \ge 1$ is treated in the current paper under the weak moment assumption that the moments $\mathbb E|X^{(q)}_{jk}|^{4+\delta}$ are finite for some $\delta > 0$. See the following Section 2 for precise statements. This work continues the previous results of the authors [24], [25], where the local semicircle law for Hermitian random matrices was proved under similar moment conditions. We continue to use Stein type methods for the estimation of the perturbations of the equation for the Stieltjes transforms of the limiting distribution, since they turn out to be very flexible and useful. In this context we provide a general result, i.e. Lemma 6.2, which may be of independent interest. In particular, as a consequence of this lemma one may derive, among others, a Rosenthal type inequality for moments of linear forms (e.g. [43][Theorem 3] and [33][Inequality (A)]) and an inequality for moments of quadratic forms (e.g. [20][Proposition 2.4] or [30][Lemma A.1]) with precise values of all constants involved. We also jointly apply the additive descent method introduced by L. Erdős, B. Schlein, H.-T. Yau et al., see [17], [16], [18], [13], [14], [15], [34] among others, together with the multiplicative descent method introduced in [11] and further developed in [24], [28]. See Lemma 4.2 for details. We finish this section discussing some related results. In particular, we have already mentioned the local semicircle law. Significant progress in studying the local semicircle law was made in a series of papers by L. Erdős, B. Schlein, H.-T. Yau et al., [17], [16], [18], [13], [14], [15], [34]. We also refer to the more recent results [11], [24], [25]. An extension to the elliptic random matrix ensembles, which generalize both ensembles considered above, would be of interest. This applies as well to local versions of the elliptic law and its extension to products of such matrices. See [39] and [26] for the limiting behaviour in the macroscopic regime. In particular, it would be interesting to study the so-called weak non-Hermiticity limit, i.e. the case where $\rho$ tends to one, see [19] and the recent result [3].

1.1. Notations.
Throughout the paper we will use the following notations. We assume that all random variables are defined on a common probability space $(\Omega, \mathcal F, \mathbb P)$, writing $\mathbb E$ for the mathematical expectation with respect to $\mathbb P$. We denote by $\mathbb R$ and $\mathbb C$ the sets of all real and complex numbers. We also introduce $\mathbb C_+ := \{z \in \mathbb C : \operatorname{Im} z > 0\}$.
(1) We denote by $\mathbb 1[A]$ the indicator function of the set $A$.
(2) By $C$ and $c$ we denote some positive constants. If we write that $C$ depends on $\delta$ we mean that $C = C(\delta, \mu_{4+\delta})$.
(3) For an arbitrary square matrix $A$ taking values in $\mathbb C^{n\times n}$ (or $\mathbb R^{n\times n}$) we define the operator norm by $\|A\| := \sup_{x\in\mathbb R^n:\|x\|_2=1}\|Ax\|_2$, where $\|x\|_2 := (\sum_{j=1}^n|x_j|^2)^{1/2}$. We use the Hilbert–Schmidt (Frobenius) norm given by $\|A\|_2 := \operatorname{Tr}^{1/2}AA^* = (\sum_{j,k=1}^n|A_{jk}|^2)^{1/2}$.
(4) For a vector $x = (x_1,\dots,x_n)^T$ we denote $|x|_\infty := \max_{1\le k\le n}|x_k|$.
(5) For an arbitrary function from the $L_1(\mathbb C)$-space we denote $\|f\|_{L_1} := \int_{\mathbb C}|f(z)|\,dA(z)$.
(6) For an arbitrary function $f$ we denote $\|f\|_\infty := \sup_{z\in\mathbb C}|f(z)|$.
(7) Define the Laplace operator in two dimensions as $\Delta := \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$.
(8) We write $f \sim g$ if there exist positive constants $c_1, c_2$ such that $c_1|g| \le |f| \le c_2|g|$. We shall often write $f \lesssim g$, which means that there exists a positive constant $c$ such that $|f| \le c|g|$.

2. Main result
Without loss of generality we will assume in what follows that the $X^{(q)}$ are real non-symmetric matrices. Our results proven below apply to the case of complex matrices as well. Here we may additionally assume for simplicity that $\operatorname{Re}X^{(q)}_{jk}$ and $\operatorname{Im}X^{(q)}_{jk}$ are independent r.v. for all $1 \le j,k \le n$, $q = 1,\dots,m$. Otherwise one needs to extend the moment inequalities for linear and quadratic forms in complex r.v. (see [24][Theorems A.1–A.2]) to the case of dependent real and imaginary parts, the details of which we omit. We will often refer to the following conditions.

Definition 2.1 (Conditions (C0)). We say that conditions (C0) hold if:
• $X^{(q)}_{jk}$, $1 \le j,k \le n$, $q = 1,\dots,m$, are independent real random variables;
• $\mathbb E X^{(q)}_{jk} = a^{(q)}_{jk}$, $\mathbb E|X^{(q)}_{jk}|^2 = [\sigma^{(q)}_{jk}]^2$;
• $\max_{j,k,q,n}\mathbb E|X^{(q)}_{jk}|^{4+\delta} := \mu_{4+\delta} < \infty$ for some $\delta > 0$ independent of $n$;
• $|a^{(q)}_{jk}| \le n^{-1-\delta_1}$, $|1 - [\sigma^{(q)}_{jk}]^2| \le n^{-1-\delta_1}$ for some $\delta_1 > 0$ independent of $n$.

Definition 2.2 (Conditions (C1)). We say that conditions (C1) hold if:
• (C0) hold;
• there exists $\phi = \phi(\delta) > 0$ such that $|X^{(q)}_{jk}| \le Dn^{1/2-\phi}$ for all $1 \le j,k \le n$ and some $D > 0$. Here one may take $0 < \phi \le \delta/(2(4+\delta))$.

Let $f(z)$ be a smooth non-negative function with compact support such that $\|f\|_\infty \le C$ and $\|f'\|_\infty \le n^C$ for some constant $C$ independent of $n$. Following [8], we define for any $a \in (0,1/2)$ and $z_0 \in \mathbb C$ the function $f_{z_0}(z) := n^{2a}f((z - z_0)n^a)$ ($f_{z_0}$ is a smoothed delta-function at the point $z_0$). The main result of the current paper is the following theorem, which provides a local version of Theorem 1.1 and Theorem 1.2 under the weak moment conditions (C1).

Theorem 2.3 (Local regime). Assume that conditions (C1) hold. Let $z_0$ satisfy $||z_0| - 1| \ge \tau > 0$. Then for any $Q > 0$ there exists a constant $c = c(\delta,\delta_1) > 0$ such that with probability at least $1 - n^{-Q}$
$$\left|\frac{1}{n}\sum_{j=1}^n f_{z_0}(\lambda_j) - \int f_{z_0}(z)\,d\mu^{(m)}(z)\right| \le q(n)\,n^{-1+2a}\,\|\Delta f\|_{L_1}, \qquad (2.1)$$
where $q(n) \le c\log^3 n$.

An immediate corollary of the main theorem is the following statement.
Corollary 2.4. Assume that conditions (C0) hold. Then the inequality (2.1) holds with probability at least $1 - n^{-c(\delta)}$, where $c(\delta)$ is some positive constant.

Proof of Corollary 2.4. Let $\widehat X^{(q)}$ be $X^{(q)}$ with $X^{(q)}_{jk}$ replaced by $X^{(q)}_{jk}\mathbb 1[|X^{(q)}_{jk}| \le Dn^{1/2-\phi'}]$, where $0 < \phi' < \phi$. Applying Markov's inequality we obtain
$$\mathbb P(X^{(q)} \ne \widehat X^{(q)}) \le \sum_{j,k=1}^n\mathbb P(|X^{(q)}_{jk}| \ge Dn^{1/2-\phi'}) \le n^{-c(\delta)}.$$
This inequality implies the statement of Corollary 2.4. $\square$
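The exponent bookkeeping behind this union bound can be checked by a short computation. The sketch below (illustrative; exact rational arithmetic is used only for transparency) evaluates the exponent $e = 2 - (4+\delta)(1/2 - \phi')$ coming from $n^2\,\mathbb P(|X_{jk}| \ge Dn^{1/2-\phi'}) \le C n^{e}$, and confirms that it is negative precisely below the threshold $\phi = \delta/(2(4+\delta))$ from condition (C1).

```python
from fractions import Fraction

# Markov/union-bound exponent for the truncation step (illustrative sketch):
# n^2 * P(|X_jk| >= D n^(1/2 - phi')) <= C * n^e with
# e = 2 - (4 + delta) * (1/2 - phi').
def exponent(delta, phi):
    return 2 - (4 + delta) * (Fraction(1, 2) - phi)

delta = Fraction(1, 2)                # assume (4 + delta) moments with delta = 1/2
phi_max = delta / (2 * (4 + delta))   # threshold from condition (C1)

print(exponent(delta, phi_max))       # boundary value: exactly 0
print(exponent(delta, phi_max / 2))   # strictly negative below the threshold
```

At the boundary $\phi' = \delta/(2(4+\delta))$ the exponent vanishes, which is why the corollary requires a strict inequality $\phi' < \phi$.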
Remark. One challenging open problem remains, namely extending the bounds so as to weaken the moment condition to $\delta = 0$. Furthermore, it is not unlikely that the power of the logarithmic factor in the upper bound for $q(n)$ may be reduced. It seems that the bound $n^{-c(\delta)}$ of Corollary 2.4 cannot be improved in general. The main difficulty here is to estimate the least singular value, see (3.4). The required bound should decay faster than any polynomial. The proof of such a bound is based on the result of [44] and requires control of the largest singular value, see (3.3). Unfortunately, this requires the assumption of high finite moments of the matrix entries. Another way is to assume that the matrix entries have absolutely continuous and bounded densities, see [4]. It is possible to consider the case when $z_0$ is near the edge of the unit circle and extend the results of [9], [51], but this topic is beyond the scope of the current paper. We finish this section by comparing our result with [8] in the case $m = 1$ and with [37] for $m > 1$. In these papers the authors assume, instead of condition (3) in (C0), that the uniform sub-exponential decay condition is satisfied:
$$\exists\,\theta > 0:\quad \max_{1\le q\le m}\max_{1\le j,k\le n}\mathbb P(|X^{(q)}_{jk}| \ge t) \le \theta^{-1}e^{-t^\theta}.$$
They also extended the latter to the case of finite moments of all orders. Another difference is in the upper bound for $q(n)$ in (2.1). There it was proved that $q(n) \le n^\varepsilon$ for any small $\varepsilon > 0$. In the case $m = 1$, in the paper [4], conditions (1) and (2) were replaced by the assumption that the $X^{(1)}_{jk}$ may be non-i.i.d. with $c_1 \le \mathbb E|X^{(1)}_{jk}|^2 \le c_2$ for some $c_1, c_2 > 0$, but one needs to assume that $\mathbb E|X^{(q)}_{jk}|^l \le \mu_l < \infty$ for all $l \in \mathbb N$ and that the $X^{(q)}_{jk}$ have bounded densities.

3. Proof of Theorem 2.3
3.1. Linearization.
We linearize the problem by considering the following block matrix (see e.g. [10]):
$$\mathbf W := \frac{1}{\sqrt n}\begin{bmatrix} O & X^{(1)} & O & \dots & O\\ O & O & X^{(2)} & \dots & O\\ \vdots & & & \ddots & \vdots\\ O & O & O & \dots & X^{(m-1)}\\ X^{(m)} & O & O & \dots & O\end{bmatrix}.$$
It is straightforward to check that the eigenvalues of $\mathbf W^m$ are $\lambda_1(X),\dots,\lambda_n(X)$, each with multiplicity $m$. Hence, the following identity holds:
$$\frac{1}{n}\sum_{j=1}^n f_{z_0}(\lambda_j(X)) - \int f_{z_0}(z)\,\mu^{(m)}(dz) = \frac{1}{nm}\sum_{j=1}^{nm}f_{z_0}(\lambda_j^m(\mathbf W)) - \int f_{z_0}(z^m)\,d\mu^{(1)}(z) = \frac{1}{nm}\sum_{j=1}^{nm}\widetilde f(\lambda_j(\mathbf W)) - \int\widetilde f(z)\,d\mu^{(1)}(z), \qquad (3.1)$$
where $\widetilde f(z) := f_{z_0}(z^m)$. Let us consider a r.v. $\zeta$ uniformly distributed on the unit disc and independent of all other r.v. Then for any $r > 0$ the eigenvalues of $\mathbf W - r\zeta\mathbb I$ are $\lambda_j(\mathbf W) - r\zeta$. We denote the counting measure of $\lambda_j(\mathbf W) - r\zeta$, $j = 1,\dots,nm$, by $\mu_n(r,\cdot)$. It follows that $\mu_n(0,\cdot) = \mu_n$. Since $\|f'\|_\infty \le n^C$ we get the following bound:
$$\frac{1}{n}\sum_{j=1}^n f_{z_0}(\lambda_j(X)) - \int f_{z_0}(z)\,\mu^{(m)}(dz) = \frac{1}{nm}\sum_{j=1}^{nm}\widetilde f(\lambda_j(\mathbf W) - r\zeta) - \int\widetilde f(z)\,d\mu^{(1)}(z) + R_n(r),$$
where $|R_n(r)| \le rn^C$. Choosing $r$ small enough, the term $R_n(r)$ will be negligible. In what follows we assume that $r := n^{-c\log n}$.
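The linearization can be verified directly in small dimensions. The following sketch (illustrative only; $n$ and $m$ are arbitrary small values) builds the block matrix $\mathbf W$ for $m = 3$ and checks that the eigenvalues of $\mathbf W^m$ coincide with those of $X = n^{-m/2}X^{(1)}\cdots X^{(m)}$, each repeated $m$ times.

```python
import numpy as np

# Sanity check of the linearization (illustrative sketch): eigenvalues of
# W^m equal the eigenvalues of X, each with multiplicity m.
rng = np.random.default_rng(2)
n, m = 30, 3
Xq = [rng.standard_normal((n, n)) for _ in range(m)]

W = np.zeros((n * m, n * m))
for q in range(m - 1):                      # X^(q+1) on the q-th superdiagonal block
    W[q * n:(q + 1) * n, (q + 1) * n:(q + 2) * n] = Xq[q]
W[(m - 1) * n:, :n] = Xq[m - 1]             # X^(m) in the lower-left corner
W /= np.sqrt(n)

X = np.linalg.multi_dot(Xq) / n ** (m / 2)
ev_W = np.linalg.eigvals(np.linalg.matrix_power(W, m))
ev_X = np.repeat(np.linalg.eigvals(X), m)   # each eigenvalue with multiplicity m

# two-sided Hausdorff distance between the two spectra
D = np.abs(ev_W[:, None] - ev_X[None, :])
d = max(D.min(axis=0).max(), D.min(axis=1).max())
print(d)  # numerically zero
```

The distance-based comparison avoids the ordering ambiguities of sorting complex spectra.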
Together with the eigenvalues of $\mathbf W - r\zeta\mathbb I$ we will also be interested in the singular values of the shifted matrices $\mathbf W(z,r) := \mathbf W - r\zeta\mathbb I - z\mathbb I$, $z \in \mathbb C$. Let $s_j(z,r) := s_j(\mathbf W(z,r))$, $j = 1,\dots,nm$, be the singular values of $\mathbf W(z,r)$ arranged in non-increasing order, i.e. $s_1(z,r) \ge s_2(z,r) \ge \dots \ge s_{nm}(z,r)$. We shall consider as well the following matrix
$$\mathbf V(z,r) := \begin{bmatrix} O & \mathbf W(z,r)\\ \mathbf W^*(z,r) & O\end{bmatrix}.$$
It is easy to check that $\pm s_j(z,r)$, $j = 1,\dots,nm$, are the eigenvalues of $\mathbf V(z,r)$. Introduce the empirical spectral distribution (ESD) of $\mathbf V(z,r)$:
$$\mathcal F_n(z,x,r) := \frac{1}{2nm}\sum_{j=1}^{nm}\mathbb 1[s_j(z,r) \le x] + \frac{1}{2nm}\sum_{j=1}^{nm}\mathbb 1[-s_j(z,r) \le x].$$
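The symmetrization of the singular values is easy to confirm numerically. The sketch below (illustrative; a single $n\times n$ shift is used in place of the full $nm\times nm$ block matrix) checks that the spectrum of the Hermitian block matrix built from a shifted matrix is exactly $\{\pm s_j\}$.

```python
import numpy as np

# The hermitization [[O, W(z)], [W(z)^T, O]] has eigenvalues ±s_j(z)
# (illustrative sketch with a plain shifted i.i.d. matrix).
rng = np.random.default_rng(6)
n, z = 50, 0.3
Wz = rng.standard_normal((n, n)) / np.sqrt(n) - z * np.eye(n)

V = np.block([[np.zeros((n, n)), Wz], [Wz.T, np.zeros((n, n))]])
ev = np.sort(np.linalg.eigvalsh(V))
sv = np.linalg.svd(Wz, compute_uv=False)
sym = np.sort(np.concatenate([sv, -sv]))
print(np.max(np.abs(ev - sym)))  # numerically zero
```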
A common tool to deal with non-Hermitian randommatrices is the logarithmic potential, which is defined as follows. Let ν be an arbitrary (probability)measure on C . Then the logarithmic potential of ν is given by U ν ( z ) def = − Z C log | z − w | dν ( w ) . For any f ∈ C ( C ) we have Z f ( z ) dν ( z ) = 12 π Z ∆ f ( z ) U ν ( z ) dA ( z ) . (3.2)Applying (3.2), we obtain1 n n X j =1 f z ( λ j ( X )) − Z f z ( z ) dµ ( m ) ( z ) = 12 π Z ∆ e f ( z )[ U ( r ) n ( z ) − U µ (1) ( z )] dA ( z ) + R n ( r ) , where U ( r ) n , U µ (1) are the logarithmic potentials of µ ( r ) n , µ (1) respectively.We observe that e f ( z ) = 0 for all z ∈ M def = { z : | z m − z | < Cn − a } . For any z ∈ M we introducethe following event Ω n def = Ω n ( z ) def = { ω ∈ Ω : s nm ( z, r ) ≥ n − c log n , k W k ≤ K } (3.3)for some large K . It follows from [44] (see also [29][Lemma 5.1], [40][Theorem 31]) and [48] (seealso [25][Lemma A.1]) that P (Ω cn ) ≤ n − c log n . (3.4)We rewrite U ( r ) n ( z ) as follows U ( r ) n = − nm nm X j =1 log | λ j ( W ) − rζ − z | [Ω n ] − nm nm X j =1 log | λ j ( W ) − rζ − z | [Ω cn ] def = U ( r ) n ( z ) + b U ( r ) n ( z ) . Let us investigate the difference U ( r ) n ( z ) − U µ (1) ( z ). Following Girko [22] we use his hermitization trick and rewrite U ( r ) n as the logarithmic moment of F n ( z, x, r ): U ( r ) n ( z ) = − nm log | det W ( z, r ) | = − nm log | det V ( z, r ) | = − Z ∞−∞ log | x | dF n ( z, x, r ) . Moreover, it was proved in [32] that there exists a distribution function G ( z, x ) such that U µ (1) = − Z ∞−∞ log | x | dG ( z, x ) . OCAL LAWS FOR NON-HERMITIAN RANDOM MATRICES 7
These equations imply
$$|\overline U^{(r)}_n(z) - U_{\mu^{(1)}}(z)| \le I_1 + I_2 + I_3, \qquad (3.5)$$
where
$$I_1 := \left|\int_{|x|\le n^{-c\log n}}\log|x|\,dG(z,x)\right|,\quad I_2 := \left|\int_{n^{-c\log n}\le|x|\le K}\log|x|\,d(\mathcal F_n(z,x,r) - G(z,x))\right|,\quad I_3 := \left|\int_{|x|\ge K}\log|x|\,dG(z,x)\right|.$$
We recall some properties of the limiting distribution and introduce additional notation. Let us denote $\alpha := \sqrt{1+8|z|^2}$. Define
$$w_{1,2} := \frac{(\alpha\pm 3)^3}{8(\alpha\pm 1)}, \quad\text{and}\quad \lambda_+ := |w_1|^{1/2},\quad \lambda_- := |w_2|^{1/2}.$$
Moreover, let
$$J(z) := \begin{cases}\{x \in \mathbb R : x \in [-\lambda_+,-\lambda_-]\cup[\lambda_-,\lambda_+]\}, & \text{if } |z| > 1,\\ \{x \in \mathbb R : x \in [-\lambda_+,\lambda_+]\}, & \text{if } |z| < 1.\end{cases} \qquad (3.6)$$
It is known (e.g. [32]) that $J(z)$ is the support of $G(z,\cdot)$. Moreover, $G(z,\cdot)$ has an absolutely continuous symmetric density $g(z,x)$, which is bounded, and at the endpoints $\pm\lambda_\pm$ of the support $J(z)$ it behaves as $g(z,x) \sim \sqrt{\gamma(x)}$, where
$$\gamma(u) := \begin{cases}\min(||u|-\lambda_+|,\ ||u|-\lambda_-|), & \text{if } |z| > 1,\\ ||u|-\lambda_+|, & \text{if } |z| < 1.\end{cases}$$
Returning to $I_1$ and $I_3$ we may conclude that
$$I_1 \lesssim n^{-1} \quad\text{and}\quad I_3 = 0. \qquad (3.7)$$
Let us consider the second term $I_2$. Applying integration by parts we obtain
$$I_2 \lesssim \Delta_n^*(z,r)\log n, \qquad (3.8)$$
where $\Delta_n^*(z,r) := \sup_{x\in\mathbb R}|\mathcal F_n(z,x,r) - G(z,x)|$. It is easy to check that
$$\Delta_n^*(z,r) \le \Delta_n^*(z,0) + Cr.$$
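The upper edge $\lambda_+$ of the support $J(z)$ admits a closed form, and its correctness can be probed against the largest singular value of a shifted i.i.d. matrix. In the sketch below (illustrative; the matrix size is an arbitrary choice, and the closed form is the one stated above, which reduces to the familiar value $\lambda_+ = 2$ at $z = 0$), the empirical $s_{\max}(X - z\mathbb I)$ is compared with $\lambda_+$.

```python
import numpy as np

# Numerical check of the support edge lambda_+ (illustrative sketch):
# the largest singular value of X - zI approaches lambda_+ for large n.
def lam_plus(z):
    alpha = np.sqrt(1 + 8 * abs(z) ** 2)
    return np.sqrt((alpha + 3) ** 3 / (8 * (alpha + 1)))

rng = np.random.default_rng(4)
n, z = 1000, 0.5
X = rng.standard_normal((n, n)) / np.sqrt(n)
s_max = np.linalg.svd(X - z * np.eye(n), compute_uv=False)[0]
print(s_max, lam_plus(z))  # the two values are close for large n
```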
We proceed by applying the smoothing inequality of Corollary B.3. Let us denote $J_\varepsilon := \{x \in J : \gamma(x) \ge \varepsilon\}$ and introduce the following region in $\mathbb C_+$:
$$\mathcal D(z) := \{w = u+iv \in \mathbb C_+ : u \in J_{\varepsilon/2}(z),\ v_0/\sqrt{\gamma(u)} \le v \le V\}, \qquad (3.9)$$
where
$$v_0 := A_0 n^{-1}\log n \qquad (3.10)$$
and $V \ge 1$, $A_0 > 0$. Denote the Stieltjes transform of $\mathcal F_n(z,x) := \mathcal F_n(z,x,0)$ by $m_n(z,w)$. It is known that under the conditions of Theorem 2.3 the Stieltjes transform $m_n(z,w)$ converges a.s. to the Stieltjes transform $s(z,w)$, which is a solution of the following cubic equation:
$$s(z,w) = -\frac{1}{w + s(z,w) - \dfrac{|z|^2}{w + s(z,w)}}, \qquad (3.11)$$
see, for instance, [32]. Moreover, $s(z,w)$ is the Stieltjes transform of the distribution function $G(z,x)$. For detailed properties of $s(z,w)$ we refer to [8][Lemmas 4.1, 4.2]. Let us denote
$$\Lambda_n(z,u+iv) := m_n(z,u+iv) - s(z,u+iv).$$
We may conclude from Theorem 4.1 below that there exists $C > 0$ such that
$$\mathbb P\Big(\bigcap_{z\in\mathcal M}\bigcap_{w\in\mathcal D}\Big\{|\Lambda_n(z,w)| \le \frac{C\log n}{nv}\Big\}\Big) \ge 1 - n^{-Q}. \qquad (3.12)$$
Applying the smoothing inequality, Corollary B.3, to $\Delta_n^*$ we get the following bound:
$$\Delta_n^*(z,0) \le C_1\int_{-\infty}^\infty|\Lambda_n(z,u+iV)|\,du + C_2\sup_{x\in J_{\varepsilon/2}}\left|\int_{v'}^V\Lambda_n(z,x+iv)\,dv\right| + C_3 v_0 + C_4\varepsilon^{3/2}, \qquad (3.13)$$
where $v' = v_0/\sqrt{\gamma(x)}$. The proof of this inequality repeats the proof of its analogue in the case of the semicircular law (see [30][Corollary 2.3]). For the reader's convenience we include the arguments in the appendix. Let us take in this inequality $\varepsilon := (2v_0)^{2/3}$. Then $C_3 v_0 + C_4\varepsilon^{3/2} \le Cn^{-1}\log n$. It follows from (3.5), (3.7), (3.8) and (3.13) that
$$\int|\Delta\widetilde f(z)||\overline U^{(r)}_n(z) - U_{\mu^{(1)}}(z)|\,dA(z) \le C\log n\int|\Delta\widetilde f(z)|\int_{-\infty}^\infty|\Lambda_n(z,u+iV)|\,du\,dA(z) + C\log n\int|\Delta\widetilde f(z)|\sup_{x\in J_{\varepsilon/2}}\left|\int_{v'}^V\Lambda_n(z,x+iv)\,dv\right|dA(z) + C\|\Delta\widetilde f\|_{L_1}n^{-1}\log n. \qquad (3.14)$$
Inequality (3.12) implies that with probability at least $1 - n^{-Q}$
$$\sup_{z\in\mathcal M}\sup_{x\in J_{\varepsilon/2}}\left|\int_{v'}^V\Lambda_n(z,x+iv)\,dv\right| \lesssim n^{-1}\log^2 n.$$
Hence,
$$\int|\Delta\widetilde f(z)|\sup_{x\in J_{\varepsilon/2}}\left|\int_{v'}^V\Lambda_n(z,x+iv)\,dv\right|dA(z) \lesssim \|\Delta\widetilde f\|_{L_1}n^{-1}\log^2 n \qquad (3.15)$$
with probability at least $1 - n^{-Q}$. We conclude from Lemma 4.4 that
$$\mathbb E^{1/p}|\Lambda_n(z,u+iV)|^p \le \frac{Cp\,|s(z,u+iV)|^{(p+1)/p}}{n},$$
which holds for all $w = u+iV$, $u \in \mathbb R$. Hence,
$$\mathbb E^{1/p}\left[\int_{-\infty}^\infty|\Lambda_n(z,u+iV)|\,du\right]^p \le \int_{-\infty}^\infty\mathbb E^{1/p}|\Lambda_n(z,u+iV)|^p\,du \le \frac{Cp}{n}\int_{-\infty}^\infty\left(\int_{-\infty}^\infty\frac{dG(z,x)}{(x-u)^2+V^2}\right)^{\frac{p+1}{2p}}du \lesssim n^{-1}\log n.$$
It is straightforward to check that
$$\mathbb E\left[\int|\Delta\widetilde f(z)|\int_{-\infty}^\infty|\Lambda_n(z,u+iV)|\,du\,dA(z)\right]^p \le \|\Delta\widetilde f\|_{L_1}^p\sup_{z\in\mathcal M}\mathbb E\left[\int_{-\infty}^\infty|\Lambda_n(z,u+iV)|\,du\right]^p \le \|\Delta\widetilde f\|_{L_1}^p\,n^{-p}\log^p n.$$
Markov's inequality implies that with probability at least $1 - n^{-Q}$
$$\int|\Delta\widetilde f(z)|\int_{-\infty}^\infty|\Lambda_n(z,u+iV)|\,du\,dA(z) \lesssim \|\Delta\widetilde f\|_{L_1}n^{-1}\log n. \qquad (3.16)$$
Combining now (3.14), (3.15) and (3.16) we conclude that with probability at least $1 - n^{-Q}$
$$\int|\Delta\widetilde f(z)||\overline U^{(r)}_n(z) - U_{\mu^{(1)}}(z)|\,dA(z) \lesssim \|\Delta\widetilde f\|_{L_1}n^{-1}\log^3 n. \qquad (3.17)$$
It remains to estimate
$$\int|\Delta\widetilde f(z)||\widehat U^{(r)}_n(z)|\,dA(z).$$
Let us consider $\widehat U^{(r)}_n(z)$. Applying the Cauchy–Schwarz inequality we get
$$\mathbb E|\widehat U^{(r)}_n(z)|^p \le \frac{1}{nm}\sum_{j=1}^{nm}\mathbb E^{1/2}\big[\log^{2p}|\lambda_j - r\zeta - z|\big]\,\mathbb P^{1/2}(\Omega_n^c).$$
We fix $j = 1,\dots,nm$ and write
$$\frac{1}{\pi}\int_{|\zeta|\le 1}\log^{2p}|\lambda_j - r\zeta - z|\,dA(\zeta) \le J_1 + J_2 + J_3,$$
where
$$J_1 = \frac{1}{\pi}\int_{|\zeta|\le 1,\,|\lambda_j - r\zeta - z|\le\varepsilon_1}\log^{2p}|\lambda_j - r\zeta - z|\,dA(\zeta),$$
$$J_2 = \frac{1}{\pi}\int_{|\zeta|\le 1,\,\varepsilon_1<|\lambda_j - r\zeta - z|\le 1/\varepsilon_1}\log^{2p}|\lambda_j - r\zeta - z|\,dA(\zeta),$$
$$J_3 = \frac{1}{\pi}\int_{|\zeta|\le 1,\,|\lambda_j - r\zeta - z|\ge 1/\varepsilon_1}\log^{2p}|\lambda_j - r\zeta - z|\,dA(\zeta).$$
It is easy to see that $J_2 \le \log^{2p}(1/\varepsilon_1)$. To estimate $J_1$ we first note that for any $b > 0$ the function $-u^b\log u$ is non-decreasing on the interval $0 < u < e^{-1/b}$. Hence, for any $0 < u \le \varepsilon_1 < e^{-1/b}$ we obtain
$$-\log u \le \varepsilon_1^b u^{-b}\log(1/\varepsilon_1).$$
We take $b$ such that $2bp = 1$. Then
$$J_1 \le \frac{\varepsilon_1^{2bp}\log^{2p}(1/\varepsilon_1)}{\pi r^2}\int_{|u|\le\varepsilon_1}|u|^{-2bp}\,dA(u) \le 2\log^{2p}(1/\varepsilon_1)\,\varepsilon_1^2 r^{-2}.$$
Choosing $\varepsilon_1 = r$ we arrive at the inequality $J_1 \le 2\log^{2p}(1/\varepsilon_1)$. It remains to estimate $J_3$. It is straightforward to check that $\log^{2p}u \le \varepsilon_1^2 u^2\log^{2p}(1/\varepsilon_1)$ for $u \ge 1/\varepsilon_1$ and $p$ of order $\log n$ (we recall that $\varepsilon_1 = n^{-c\log n}$). Hence,
$$\frac{1}{nm}\sum_{j=1}^{nm}\mathbb E\,J_3 \le \varepsilon_1^2\,(2 + r + |z|)^2\log^{2p}(1/\varepsilon_1).$$
These bounds together imply that for $p$ of order $\log n$
$$\sup_{z\in\mathcal M}\mathbb E|\widehat U^{(r)}_n(z)|^p \le n^{-c\log n}.$$
Repeating the same arguments as in the proof of (3.16) we conclude the estimate
$$\int|\Delta\widetilde f(z)||\widehat U^{(r)}_n(z)|\,dA(z) \lesssim \|\Delta\widetilde f\|_{L_1}n^{-1}, \qquad (3.18)$$
which holds with probability at least $1 - n^{-Q}$. Combining (3.17) and (3.18), and taking into account that $\|\Delta\widetilde f\|_{L_1} \le Cn^{2a}\|\Delta f\|_{L_1}$, we come to the following bound:
$$\left|\frac{1}{2\pi}\int\Delta\widetilde f(z)[U^{(r)}_n(z) - U_{\mu^{(1)}}(z)]\,dA(z)\right| \lesssim q(n)\,\|\Delta f\|_{L_1}\,n^{-1+2a},$$
which holds with probability at least $1 - n^{-Q}$. The last inequality implies the claim of Theorem 2.3.

4. Local law for shifted matrices
The following theorem provides the estimate for $\Lambda_n(z,w)$ up to the optimal scale $v_0$ (see definition (3.10)). The proof of this result will be given later, in Section 5.

Theorem 4.1 (Local law for eigenvalues of $\mathbf V(z)$). Assume that conditions (C0) hold. Let $Q > 0$ be an arbitrary number. There exists a constant $C > 0$ such that
$$\mathbb P\Big(\bigcap_{z\in\mathcal M}\bigcap_{w\in\mathcal D}\Big\{|\Lambda_n(z,w)| \le \frac{C\log n}{nv}\Big\}\Big) \ge 1 - n^{-Q}.$$
By standard truncation arguments (see [28][Lemmas D.1–D.3]) in what follows we may assume that conditions (C1) hold and $a^{(q)}_{jk} = 0$ for all $j,k = 1,\dots,n$, $q = 1,\dots,m$. For simplicity we will also assume that the $X^{(q)}_{jk}$, $j,k = 1,\dots,n$, $q = 1,\dots,m$, are i.i.d. r.v. In this case one may also show (see [28][Lemmas D.1–D.3]) that it is possible to assume that $[\sigma^{(q)}_{jk}]^2 = 1$ for all $j,k = 1,\dots,n$, $q = 1,\dots,m$. The proof in the non-i.i.d. case is the same. One needs to add an additional $\varepsilon_j$ term in (4.8), which will be small due to the assumption that $|1 - [\sigma^{(q)}_{jk}]^2| \le n^{-1-\delta_1}$.

4.1. Bound for the distance between Stieltjes transforms.
We start with a general lemma, which is motivated by the additive descent approach introduced and further developed by L. Erdős, B. Schlein, H.-T. Yau et al., see [17], [16], [18], [13], [14], [15], [34] among others. Recall that
$$\Lambda_n(z,w) := m_n(z,w) - s(z,w). \qquad (4.1)$$
For $w = u+iv \in \mathbb C_+$ we define $\mathbf R(z,w) := (\mathbf V(z) - w\mathbb I)^{-1}$. It is easy to see that $m_n(z,w) = \frac{1}{2nm}\operatorname{Tr}\mathbf R(z,w)$. Denote $j_\alpha := (\alpha-1)n + j$. Introduce the partial traces of the resolvent, $m_n^{(\alpha)}(z,w) := \frac{1}{n}\sum_{j=1}^n\mathbf R_{j_\alpha j_\alpha}$, and
$$\boldsymbol\Lambda := (\Lambda_n^{(1)},\dots,\Lambda_n^{(2m)})^T, \qquad \Lambda_n^{(\alpha)} = m_n^{(\alpha)}(z,w) - s(z,w).$$
It is easy to check that
$$\Lambda_n = \frac{1}{m}\sum_{\alpha=1}^m\Lambda_n^{(\alpha)} = \frac{1}{m}\sum_{\alpha=1}^m\Lambda_n^{(m+\alpha)}.$$
Moreover, $|\Lambda_n(z,w)| \le |\boldsymbol\Lambda(z,w)|_\infty$. Let $w = u+iv \in \mathcal D$ and define
$$\mathbb I(v) := \mathbb I_\tau(z,u+iv) := \prod_{k=0}^{K_v}\mathbb 1\big[|\Lambda_n(z,u+ivs_0^k)| \le \tau\operatorname{Im}s(z,u+ivs_0^k)\big], \qquad (4.2)$$
where $K_v := \min\{l : vs_0^l \ge V\}$ and $s_0 \ge 1$. The exact value of $s_0$ will be defined later, in Section 5. Let $C_0$ be a positive constant. We take $\tau$ sufficiently small and $A_0$, $A_1$ such that
$$\frac{C_0 p}{nv} \le \tau\operatorname{Im}s(z,w) \qquad (4.3)$$
for any $w = u+iv \in \mathcal D$ and $1 \le p \le A_1\log n$. The exact values of $\tau, A_0, A_1$ will be defined later, in Section 5. The next lemma is crucial for the proof of Theorem 4.1.
Lemma 4.2. Let $w \in \mathcal D$ and let $\tau$ be some fixed number. Assume that for all $v \ge v_0/\sqrt{\gamma(u)}$ and $1 \le p \le A_1\log n$
$$\mathbb E[|\Lambda_n(z,u+iv)|^p\,\mathbb I(v)] \le \frac{C_0^p p^p}{(nv)^p}, \qquad (4.4)$$
and
$$\mathbb P\big(|\Lambda_n(z,u+iV)| \ge \tau\operatorname{Im}s(z,u+iV)\big) \le \frac{C}{n^Q}. \qquad (4.5)$$
Then for any $v_0/\sqrt{\gamma(u)} \le v \le V$
$$\mathbb P\big(|\Lambda_n(z,u+iv)| \ge \tau\operatorname{Im}s(z,u+iv)\big) \le \frac{C}{n^Q}. \qquad (4.6)$$

Proof. Let $\kappa = \kappa_n$ be such that
$$|\Lambda_n(z,u+iv) - \Lambda_n(z,u+i(v+\kappa))| \le \frac{\tau}{2}\operatorname{Im}s(z,u+iv). \qquad (4.7)$$
It is easy to check that one may take, for example, $\kappa_n = n^{-3}$. Denote $v' = v_0/\sqrt{\gamma(u)}$. We split $[v',V]$ into $N = (V-v')/\kappa$ intervals and denote $v_k = v' + k\kappa$. Assume that we have already proved (4.6) for all $v_k \le v \le V$; we prove it for any $v$ down to $v_{k-1}$. For example, for $v = v_N = V$ it follows from (4.5). We fix $v$ with $v_{k-1} \le v < v_k$. Taking $p = A_1\log n$ and $K$ with $K^{-p} \le Cn^{-Q}$ we get
$$\mathbb P\Big(|\Lambda_n(z,u+iv_k)| \ge \frac{KC_0p}{nv_k}\Big) \le \mathbb E\,\mathbb 1\Big[|\Lambda_n(z,u+iv_k)| \ge \frac{KC_0p}{nv_k}\Big]\mathbb I(v_k) + \sum_{l=0}^{K_{v_k}}\mathbb P\big(|\Lambda_n(z,u+iv_ks_0^l)| \ge \tau\operatorname{Im}s(z,u+iv_ks_0^l)\big) \le \Big(\frac{KC_0p}{nv_k}\Big)^{-p}\mathbb E[|\Lambda_n(z,u+iv_k)|^p\,\mathbb I(v_k)] + \frac{C}{n^Q} \le \frac{C}{n^Q}.$$
Here we also used (4.4). Since $v_k \ge v \ge v'$ we get that $\frac{KC_0p}{nv_k} \le \frac{\tau}{2}\operatorname{Im}s(z,u+iv)$. Hence, using (4.7) we obtain
$$\mathbb P\big(|\Lambda_n(z,u+iv)| \ge \tau\operatorname{Im}s(z,u+iv)\big) \le \frac{C}{n^Q}. \qquad \square$$
It follows from this lemma that we need to check conditions (4.4)–(4.5).
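The step counts behind the two descent strategies can be made concrete. The sketch below (illustrative; $\kappa_n = n^{-3}$ and the grid ratio $s_0 = 2$ are assumed example choices, as is the constant in the optimal scale $v_0$) compares walking from $V$ down to $v_0$ in additive increments $\kappa_n$, as in the proof above, against the multiplicative grid $v\,s_0^k$ used later in Section 5.

```python
import math

# Additive vs multiplicative descent step counts (illustrative sketch):
# the additive walk needs polynomially many steps, the multiplicative
# grid only logarithmically many.
n, V = 10_000, 1.0
v0 = math.log(n) / n            # optimal scale, up to constants
kappa = n ** -3.0               # assumed additive increment kappa_n = n^(-3)
s0 = 2.0                        # assumed multiplicative grid ratio

additive_steps = math.ceil((V - v0) / kappa)
multiplicative_steps = math.ceil(math.log(V / v0) / math.log(s0))
print(additive_steps, multiplicative_steps)
```

This is precisely the sense in which the multiplicative descent of Section 5 is cheaper: logarithmically many moment conditions need to be propagated instead of polynomially many.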
4.2. Stieltjes transform and self-consistent equations. In this section we investigate $m_n(z,w)$ and show that it satisfies a cubic equation (see (4.9) below), which is a perturbation of the corresponding equation (3.11) for $s(z,w)$. Let $\mathbf R^{(j_\alpha)}$ (resp. $\mathbf R^{(j_\alpha,j_{\alpha+m})}$) be the resolvent matrix of $\mathbf V(z)$ with the $j_\alpha$-th row and column deleted (resp. the $j_\alpha$-th and $j_{\alpha+m}$-th rows and columns deleted). Applying Schur's inversion formula, we may write, for all $j = 1,\dots,n$ and $\alpha = 1,\dots,m$, that
$$\mathbf R_{j_\alpha j_\alpha} = -\frac{1}{w + m_n^{([\alpha+1]+m)}(z,w) - \dfrac{|z|^2}{w + m_n^{([\alpha-1])}(z,w)}}\big(1 - \varepsilon_{j_\alpha}\mathbf R_{j_\alpha j_\alpha}\big), \qquad (4.8)$$
where $\mathbb T := \{1,\dots,n\}\setminus\{j\}$ and
$$\varepsilon_{j_\alpha} := \widetilde\varepsilon_{j_\alpha} + \frac{|z|^2}{w + m_n^{([\alpha-1])}(z,w)}\,\mathbf R^{(j_\alpha)}_{j_{\alpha+m},j_{\alpha+m}}\,\widehat\varepsilon_{j_\alpha}.$$
Here $\widetilde\varepsilon_{j_\alpha} := \widetilde\varepsilon_{j_\alpha,1} + \dots + \widetilde\varepsilon_{j_\alpha,4}$,
$$\widetilde\varepsilon_{j_\alpha,1} := \frac{1}{n}\sum_{k\in\mathbb T}\mathbf R_{k_{[\alpha+1]+m},k_{[\alpha+1]+m}} - \frac{1}{n}\sum_{k\in\mathbb T}\mathbf R^{(j_\alpha)}_{k_{[\alpha+1]+m},k_{[\alpha+1]+m}},$$
$$\widetilde\varepsilon_{j_\alpha,2} := -\frac{1}{n}\sum_{l\ne k\in\mathbb T}X^{(\alpha)}_{jk}X^{(\alpha)}_{jl}\mathbf R^{(j_\alpha)}_{l_{[\alpha+1]+m},k_{[\alpha+1]+m}},$$
$$\widetilde\varepsilon_{j_\alpha,3} := -\frac{1}{n}\sum_{l\in\mathbb T}\big([X^{(\alpha)}_{jl}]^2 - 1\big)\mathbf R^{(j_\alpha)}_{l_{[\alpha+1]+m},l_{[\alpha+1]+m}},$$
$$\widetilde\varepsilon_{j_\alpha,4} := \frac{z+\bar z}{\sqrt n}\sum_{l\in\mathbb T}X^{(\alpha)}_{jl}\mathbf R^{(j_\alpha)}_{j_{[\alpha+1]+m},l_{[\alpha+1]+m}},$$
and $\widehat\varepsilon_{j_\alpha} := \widehat\varepsilon_{j_\alpha,1} + \widehat\varepsilon_{j_\alpha,2} + \widehat\varepsilon_{j_\alpha,3}$, where
$$\widehat\varepsilon_{j_\alpha,1} := \frac{1}{n}\sum_{l\in\mathbb T}\mathbf R_{l_{[\alpha-1]},l_{[\alpha-1]}} - \frac{1}{n}\sum_{l\in\mathbb T}\mathbf R^{(j_\alpha,j_{\alpha+m})}_{l_{[\alpha-1]},l_{[\alpha-1]}},$$
$$\widehat\varepsilon_{j_\alpha,2} := -\frac{1}{n}\sum_{k\ne l\in\mathbb T}X^{([\alpha-1])}_{kj}X^{([\alpha-1])}_{lj}\mathbf R^{(j_\alpha,j_{\alpha+m})}_{k_{[\alpha-1]},l_{[\alpha-1]}},$$
$$\widehat\varepsilon_{j_\alpha,3} := -\frac{1}{n}\sum_{l\in\mathbb T}\big([X^{([\alpha-1])}_{lj}]^2 - 1\big)\mathbf R^{(j_\alpha,j_{\alpha+m})}_{l_{[\alpha-1]},l_{[\alpha-1]}}.$$
Summing up equality (4.8) over $j = 1,\dots,n$ for fixed $\alpha = 1,\dots,m$ we get
$$m_n^{(\alpha)}(z,w) = -\frac{1 - T_n^{(\alpha)}}{w + m_n^{([\alpha+1]+m)}(z,w) - \dfrac{|z|^2}{w + m_n^{([\alpha-1])}(z,w)}}, \qquad (4.9)$$
$$m_n^{(m+\alpha)}(z,w) = -\frac{1 - T_n^{(m+\alpha)}}{w + m_n^{([\alpha-1])}(z,w) - \dfrac{|z|^2}{w + m_n^{([\alpha+1]+m)}(z,w)}}, \qquad (4.10)$$
where
$$T_n^{(\alpha)} := \frac{1}{n}\sum_{j=1}^n\varepsilon_{j_\alpha}\mathbf R_{j_\alpha j_\alpha}$$
for $\alpha = 1,\dots,m$. It follows from the form of equations (4.9)–(4.10) and (3.11) that, to bound the distance between $m_n^{(\alpha)}(z,w)$ and $s(z,w)$, it is crucial to estimate the perturbation $T_n^{(\alpha)}$.
Introduce the following block matrix
$$\mathbf A := \begin{bmatrix}\mathbf A_1 & \mathbf A_2\\ \mathbf A_2^T & \mathbf A_1^T\end{bmatrix}, \qquad (4.11)$$
where
$$\mathbf A_1 := \begin{bmatrix} a & 0 & \dots & 0 & b\\ b & a & \dots & 0 & 0\\ 0 & b & a & \dots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ 0 & 0 & \dots & b & a\end{bmatrix}, \qquad \mathbf A_2 := O.$$
Here
$$a := -s^{-2}(z,w), \qquad b := \frac{|z|^2}{(w + s(z,w))^2}.$$
Subtracting $s(z,w)$ from both sides of equations (4.9)–(4.10) we arrive at the following linear system:
$$\mathbf A\boldsymbol\Lambda_n = \mathbf r_n + s^{-1}\mathbf T_n, \qquad (4.12)$$
where
$$\|\mathbf r_n\|_\infty \le |\boldsymbol\Lambda_n|_\infty^2\left(\frac{|z|^2}{|w+s|^2}\Big(\sum_{\alpha=1}^{2m}\frac{1}{|w+m_n^{(\alpha)}|}\Big)\Big(1 + \frac{|\boldsymbol\Lambda_n|_\infty}{|s|}\Big) + \frac{1}{|s|}\Big(1 + \frac{|z|^2}{|w+s|^2}\Big)\right). \qquad (4.13)$$
Permuting the rows and columns of $\mathbf A$ one arrives at the matrix from [37][Equation (5.18)].

4.3. Validity of condition (4.4). Define
$$\mathcal A(z,v,q) := \max_{(\mathbb J,\mathbb K)\in\mathcal J}\ \max_{\alpha=1,\dots,m}\ \max_{l_\alpha\in\mathbb T^{\mathbb J}_\alpha}\ \mathbb E^{1/q}\operatorname{Im}^q\mathbf R^{(\mathbb J,\mathbb K)}_{l_\alpha,l_\alpha}(z,w)\,\mathbb I(v), \qquad (4.14)$$
$$\mathcal E(q) := \operatorname{Im}s(z,w) + \mathcal A(z,v,q)\Big(\sum_{k=0}^{K_v}s_0^k\,\mathcal A(z,v_k,q)\operatorname{Im}s(z,u+iv_k)\Big), \qquad (4.15)$$
where $v_k := vs_0^k$. See (5.7) for the definitions of $\mathbb J, \mathbb K, \mathcal J, \mathbb T^{\mathbb J}_\alpha$. The next lemma shows that condition (4.4) holds.

Lemma 4.3.
Assume that for all $w = u+iv \in D$ and $1 \le p \le A_1\log n$
$$\mathbb E\bigl[|T_n(z,u+iv)|^p I(v)\bigr] \lesssim \frac{C^p p^p E^p(\kappa p)}{(nv)^p} \tag{4.16}$$
and
$$A_0^p(\kappa p) \le C^p \operatorname{Im}^p s(z,w) \tag{4.17}$$
for some $\kappa > 0$. Then
$$\mathbb E\,|\Lambda_n(z,u+iv)|^p I(v) \lesssim \Bigl(\frac{Cp}{nv}\Bigr)^p.$$
We apply Stein's method and the "leave one out" idea to estimate $T_n$. Here we follow the ideas introduced in [30] and further developed in [24]. It is clear that the bound for $T_n$ requires estimating the moments of $R_{jj}$. We do this in Section 5, where we also introduce a general principle for estimating the moments of so-called $k$-descent functions (see Definition 5.1). Moreover, in that section we show that (4.17) holds. Proof of Lemma 4.3.
We may rewrite (4.12) as
$$\boldsymbol\Lambda_n = \mathbf A^{-1}\mathbf r_n + s^{-2}\mathbf A^{-1}\mathbf T_n.$$
It follows that
$$\|\boldsymbol\Lambda_n\| \le \|\mathbf A^{-1}\|\,\|\mathbf r_n\| + |s|^{-2}\|\mathbf A^{-1}\|\,\|\mathbf T_n\|. \tag{4.18}$$
We may write
$$|w + m_n^{(\alpha)}(z,w)|\, I(v) \ge \bigl(|w + s(z,w)| - |\Lambda_n^{(\alpha)}|\bigr) I(v) \ge (1-\tau)\,|w + s(z,w)|\, I(v). \tag{4.19}$$
Moreover, using the definition of $I(v)$ we obtain $|\Lambda_n|\, I(v) \le \tau \operatorname{Im} s(z,w)\, I(v)$. Taking into account the last two inequalities and the definition (4.13) of $\mathbf r_n$ we obtain
$$\|\mathbf r_n\|\, I(v) \lesssim \tau \operatorname{Im} s(z,w)\, |\Lambda_n|\, I(v).$$
It follows from [37, Proposition 5.5] that $\|\mathbf A^{-1}\| \lesssim \operatorname{Im}^{-1} s(z,w)$. Taking expectations on both sides of (4.18) and applying (4.16), (4.17) we get the claim of this lemma. □

4.4. Validity of condition (4.5). This condition is a consequence of the next lemma.
Lemma 4.4.
For any $w = u + iV$, $u \in \mathbb R$, $V \ge 1$ and all $p \ge 1$ the following bound holds:
$$\mathbb E^{\frac1p}|\Lambda_n(z,u+iV)|^p \le \frac{C\,p\,|s(z,u+iV)|^{\frac{p+1}{p}}}{n}.$$
Proof.
The proof is similar to the proof of the analogous inequality [25, Inequality 2.8] in the semicircle law case. □
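For orientation, the way moment bounds of this type are converted into probability bounds (as in the proof of Theorem 4.1 below) is the standard high-moment Markov step. Schematically, if $\mathbb E|\Lambda_n|^p \le (Cp/(nv))^p$, then for $p = A\log n$ and any $K \ge 1$,
$$\mathbb P\Bigl(|\Lambda_n| \ge \frac{eKCp}{nv}\Bigr) \le \Bigl(\frac{eKCp}{nv}\Bigr)^{-p}\,\mathbb E|\Lambda_n|^p \le (eK)^{-p} \le e^{-p} = n^{-A},$$
so a moment bound at the scale $p \asymp \log n$ yields a polynomially small exception probability.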
Proof of Theorem 4.1.
The proof is a direct corollary of Lemmas 4.2 and 4.3. Indeed, taking $p = A\log n$ we may write
$$\mathbb P\Bigl(|\Lambda_n(z,u+iv)| \ge \frac{KCp}{nv}\Bigr) \le \mathbb E\Bigl[\mathbb I\Bigl\{|\Lambda_n(z,u+iv)| \ge \frac{KCp}{nv}\Bigr\}\, I(v)\Bigr] + \sum_{k=0}^{K_v}\mathbb P\bigl(|\Lambda_n(z,u+ivs^k)| \ge \tau \operatorname{Im} s(z,u+ivs^k)\bigr)$$
$$\le \Bigl(\frac{CKp}{nv}\Bigr)^{-p}\,\mathbb E\bigl[|\Lambda_n|^p I(v)\bigr] + \frac{C}{n^Q} \le \frac{C}{n^Q}.$$
This inequality implies the claim of the theorem. □

5. Bounds for functions with the $k$-descent property

As was already mentioned, the estimation of $T_n$ requires bounds on the high moments of $R_{jj}(z,u+iv)$ and $\operatorname{Im} R_{jj}(z,u+iv)$, $j = 1, \dots, nm$, down to the optimal value $v_0$ of $v$. Here we apply the multiplicative descent method introduced in [11] and further developed by the authors in the series of papers [24], [28]. This method requires a small number of steps, usually of logarithmic order. One may compare this with the additive descent method, Lemma 4.2, where one needs a polynomial number of steps.

5.1. Class of descent functions.
We start from a rather general definition and a proposition which are essential for the multiplicative descent. Let us introduce the following class of functions.
Definition 5.1.
Let $k \ge 1$. We say that a function $f(w)$, $w = u+iv \in \mathbb C_+$, satisfies the $k$-descent property if for any $v > 0$
$$\Bigl|\frac{\partial}{\partial v}\log f(u+iv)\Bigr| \le \frac kv.$$
Let us denote $\mathcal{D}escent(k) := \{f : \mathbb C_+ \to \mathbb C : f \text{ satisfies the } k\text{-descent property}\}$. The following statement collects the main properties of $k$-descent functions. Proposition 5.2.
The following statements hold: (1) if $f \in \mathcal{D}escent(k)$ then $f^{-1} \in \mathcal{D}escent(k)$; (2) if $f \in \mathcal{D}escent(k)$ and $g \in \mathcal{D}escent(l)$ then $fg \in \mathcal{D}escent(k+l)$; (3) for any $f \in \mathcal{D}escent(k)$ and any $s \ge 1$,
$$|f(u+iv/s)| \le s^k|f(u+iv)| \quad\text{and}\quad |f(u+iv)| \le s^k|f(u+iv/s)|.$$
Proof.
The proofs of (1) and (2) are immediate. To prove (3) it is enough to note that $|\log f(u+iv/s) - \log f(u+iv)| \le k\log s$. □

It is easy to check that $|R_{jj}(z,w)|$ and $\operatorname{Im} R_{jj}(z,w)$ are examples of functions with the 1-descent property w.r.t. $w$.

5.2. Bounds for moments of some functions of the resolvent matrix.
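Before turning to the moment bounds, here is a quick numerical illustration (a sanity check only, on a small generic Hermitian matrix rather than the matrix $\mathbf V(z)$ of the paper) of the 1-descent property of $\operatorname{Im} R_{jj}$ and the two-sided bound of Proposition 5.2(3):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
G = rng.standard_normal((n, n))
H = (G + G.T) / np.sqrt(2 * n)            # Hermitian test matrix

def im_R00(u, v):
    # Im R_{00}(u + iv) for the resolvent (H - w I)^{-1}; positive for v > 0
    R = np.linalg.inv(H - (u + 1j * v) * np.eye(n))
    return R[0, 0].imag

u, k = 0.2, 1.0                            # Im R_jj is a 1-descent function
for v in [0.05, 0.2, 1.0]:
    for s in [2.0, 5.0]:
        a, b = im_R00(u, v / s), im_R00(u, v)
        # Proposition 5.2(3): f(u+iv/s) <= s^k f(u+iv) and vice versa
        assert a <= s**k * b + 1e-12 and b <= s**k * a + 1e-12
```

The same two-sided inequality is what lets one transfer moment bounds from scale $sv$ down to scale $v$ in a single multiplicative step.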
Recall the definition of $I_\tau(w)$ for $w = u+iv \in D$:
$$I(w) := I_\tau(z,u+iv) := \prod_{k=0}^{K_v}\mathbb I\bigl[|\Lambda_n(z,u+ivs^k)| \le \tau\operatorname{Im} s(z,u+ivs^k)\bigr],$$
where $K_v := \min\{l : vs^l \ge V\}$. Here $V \ge 1$ and $s \ge 1$. Note that
$$I(u+iv) \le I(u+isv). \tag{5.1}$$
In what follows, for simplicity, we shall often omit $w = u+iv$ from the notation and write only the imaginary part $v$. Lemma 5.3.
Let $V$ be some fixed number. There exist a positive constant $C$ depending on $V, z$ and positive constants $A_1, A_2, \tau$ depending on $C$ such that
$$\max_{j=1,\dots,nm}\mathbb E\,|R_{jj}(z,u+iv)|^p\, I(u+iv) \le C^p, \tag{5.2}$$
$$\max_{j=1,\dots,nm}\mathbb E\,\operatorname{Im}^p R_{jj}(z,u+iv)\, I(u+iv) \le C^p\operatorname{Im}^p s(z,u+iv), \tag{5.3}$$
for all $u+iv \in D$ and $1 \le p \le A_1\log n$. Lemma 5.4.
Let $V$ be some fixed number. There exist a positive constant $H$ depending on $V$ and a positive constant $\tau$ depending on $H$ such that the following inequality holds:
$$\max\Biggl(\frac{1}{|w + m_n^{(\alpha)}(z,w)|},\ \Bigl|w + m_n^{([\alpha+1]+m)}(z,w) - \frac{|z|^2}{w + m_n^{([\alpha-1])}(z,w)}\Bigr|^{-1}\Biggr)\, I(v) \le H \tag{5.4}$$
for all $w = u+iv \in D$ and $\alpha = 1, \dots, m$.

Remark. The statement of the lemma remains valid if one replaces $m_n^{(\alpha)}(z,w)$ by $m_n^{(\alpha,J,K)}(z,w)$. Proof of Lemma 5.4.
Assume that $I(v) = 1$; in the opposite case the claim is trivial. Then
$$\frac{1}{|w + m_n^{(\alpha)}(z,w)|} \le \frac{1}{|w + s(z,w)|} + \frac{|\Lambda_n^{(\alpha)}|}{|w + s(z,w)|\,|w + m_n^{(\alpha)}(z,w)|} \le c + \frac{c\tau}{|w + m_n^{(\alpha)}(z,w)|},$$
where $c$ depends on $z$ and $V$. The last inequality implies
$$\frac{1}{|w + m_n^{(\alpha)}(z,w)|} \le \frac{c}{1 - c\tau}.$$
If we take, say, $H \ge 2c$, then we may find sufficiently small $\tau$ such that the r.h.s. of the previous inequality is bounded by $H$. The second bound in (5.4) is proved similarly. □ Proof of Lemma 5.3.
The proof is more involved than the proof of the previous lemma. The general idea follows the multiplicative descent approach of [11], developed further in [24]. We briefly discuss these ideas for the bound (5.2) on $\mathbb E|R_{jj}(z,v)|^p I(v)$ (the same applies to (5.3)).

We prove below in Lemma 5.5 that the bound
$$\max_{j\in T}\mathbb E\,|R_{jj}(v)|^p\, I(v) \le C^p \tag{5.5}$$
holds for all $w \in D$ and $1 \le p \le A_2(nv)^{(1-\alpha)/2}$. We are interested in $p$ of order $\log n$. Denote $v_1 := n^{-1}\log^{2/(1-\alpha)} n$ and take $p = A_1\log n$ (it is sufficient to consider only such values of $p$: for all $1 \le q \le p$ we may apply Lyapunov's inequality). It is easy to see that (5.5) holds for all $v$ with $v_1 \le v \le V$ and $p = A_1\log n$. Let us fix $v$ with $v_0 \le v \le v_1$ and let $l_0 := \min\{l \ge 1 : vs^l \ge v_1\}$. We take $s_1 := s^{l_0}$. It is clear that $s_1 \lesssim \log^{\frac{\alpha}{1-\alpha}} n$. Applying Proposition 5.2, (5.5) and (5.1) we may show that for all $v \ge v_0$
$$\max_{j\in T}\mathbb E\,|R_{jj}(v)|^p\, I(v) \le C^p\log^{\frac{\alpha}{1-\alpha}p} n.$$
It remains to remove the logarithmic factor on the right-hand side. To this end we adopt the moment matching technique which has been successfully used recently by Lee and Yin in [34] (see Lemma 5.2 and Lemma 5.3 there). We denote by $Y_{jk}$, $1 \le j \le k \le n$, a triangular array of random variables such that $|Y_{jk}| \le D$, for some $D$ chosen later, and $\mathbb E X_{jk}^s = \mathbb E Y_{jk}^s$ for $s = 1, \dots, 4$. It follows from [34, Lemma 5.2] that such random variables exist. Let us denote $\mathbf W^y := \frac{1}{\sqrt n}\mathbf Y$, $\mathbf R^y := (\mathbf W^y - z\mathbf I)^{-1}$ and $m_n^y(z) := \frac1n\operatorname{Tr}\mathbf R^y(z)$. Then, repeating the proof of [28, Lemma 3.5], we show that for all $v \ge v_0$ and $5 \le p \le A_1\log n$ there exist positive constants $C_1, C_2$ such that
$$\mathbb E\,|R_{jj}(v)|^p\, I(v) \le C_1^p + C_2\,\mathbb E\,|R^y_{jj}(v)|^p\, I(v). \tag{5.6}$$
It is easy to see that the $Y_{jk}$ are sub-Gaussian random variables. Repeating the proof of Lemma 5.5 below for sub-Gaussian random variables one may show that
$$\mathbb E\,|R^y_{jj}(v)|^p\, I(v) \le C^p$$
for all $w \in D$ and $1 \le p \le A_2 nv$. Here one needs to replace Lemmas A.1–A.4 by the Hanson–Wright inequality (see, for example, [28, Lemmas A.4–A.7]). For details see the proof of the corresponding result in [28, Lemma 4.1]. □ Lemma 5.5.
Let $V$ be some fixed number. There exist a positive constant $C$ depending on $V, z$ and positive constants $A_1, A_2$ depending on $C$ such that
$$\max_{j,k=1,\dots,nm}\mathbb E\,|R_{jk}(z,u+iv)|^p\, I(u+iv) \le C^p$$
for all $A_0 n^{-1} \le v \le V$, $u \in J_\varepsilon$ and $1 \le p \le A_2(nv)^{1-\alpha}$.

Remark. In Lemma 5.5 we bound the off-diagonal entries as well. We use the bound for the off-diagonal entries to show that (5.6) holds; see [28, Lemma 3.5] for details.

Let us define $J_\alpha$, $\alpha = 1, \dots, m$, as arbitrary subsets of $T$. Here $J_\alpha$ will correspond to the indices of rows deleted from $\mathbf X^{(\alpha)}$. Similarly we define $K_\alpha$, $\alpha = 1, \dots, m$, as the indices of columns deleted from $\mathbf X^{(\alpha)}$. Moreover, let $|J_\alpha\setminus K_\alpha| = 0$ or $1$. For a particular choice of these sets we define
$$J := \{J_\alpha \subset T,\ \alpha = 1, \dots, m\}, \qquad K := \{K_\alpha \subset T,\ \alpha = 1, \dots, m\}. \tag{5.7}$$
Allowing now all possible $J_\alpha, K_\alpha$, $\alpha = 1, \dots, m$, we define $\mathcal J_L$ as follows:
$$\mathcal J_L := \{(J,K) : |J| \le L,\ |J\setminus K| \le 1\}, \quad L \ge 1.$$
We also define $T_{J_\alpha} := (\alpha-1)n + (T\setminus J_\alpha)$. Similarly we may define $T_{K_\alpha}$. Let $\mathbf X^{(\alpha, J_\alpha, K_\alpha)}$ be the sub-matrix of $\mathbf X^{(\alpha)}$ with entries $X^{(\alpha)}_{jk}$, $j \in T_{J_\alpha}$, $k \in T_{K_\alpha}$. Then we may define $\mathbf W^{(J,K)}$ as $\mathbf W$ with all $\mathbf X^{(\alpha)}$ replaced by $\mathbf X^{(\alpha, J_\alpha, K_\alpha)}$. Similarly we define $\mathbf V^{(J,K)}$, $\mathbf R^{(J,K)}$ and all other quantities. Denote
$$I^{(J,K)}_\tau(v) := \prod_{k=1}^{K_v}\mathbb I\bigl[|\Lambda^{(J,K)}_n(u+is^kv)| \le \tau\operatorname{Im} s(z,u+is^kv)\bigr].$$
It is easy to see that
$$I^{(J,K)}_{\tau_1}(v) \le I^{(J,K)}_{\tau_2}(sv) \tag{5.8}$$
for any $\tau_2 \ge \tau_1$. Lemma 5.6.
Assume that conditions (C1) hold. Let $C_0$ and $s_0$ be arbitrary numbers such that $C_0 \ge \max(1/V, H)$, $s_0 \ge 1/\kappa$. There exist sufficiently large $A_1$ and small $A_2$, depending on $C_0, s_0, V$ only, such that the following statement holds. Fix some $\tilde v$ with $v_0 s_0/\sqrt{\gamma(u)} \le \tilde v \le V$. Suppose that for some integer $K \ge 0$ and all $u, v', q$ such that $\tilde v \le v' \le V$, $u \in J_\varepsilon$, $1 \le q \le A_2(nv')^{1-\alpha}$,
$$\max_{(J,K)\in\mathcal J_{K+1}}\,\max_{\alpha,\beta=1,\dots,m}\,\max_{l_\alpha\in T_{J_\alpha},\,k_\beta\in T_{K_\beta}}\mathbb E\,|R^{(J,K)}_{l_\alpha k_\beta}(v')|^q\, I^{(J,K)}_{2\tau}(v') \le C_0^q. \tag{5.9}$$
Then for all $u, v, q$ such that $\tilde v/s_0 \le v \le V$, $u \in J_\varepsilon$, $1 \le q \le A_2(nv)^{1-\alpha}$,
$$\max_{(J,K)\in\mathcal J_K}\,\max_{\alpha,\beta=1,\dots,m}\,\max_{l_\alpha\in T_{J_\alpha},\,k_\beta\in T_{K_\beta}}\mathbb E\,|R^{(J,K)}_{l_\alpha k_\beta}(v)|^q\, I^{(J,K)}_{2\tau}(v) \le C_0^q.$$
Proof.
Let us fix $\alpha \in \{1, \dots, m\}$. Without loss of generality we assume that $J = K$. We fix some $j \in T\setminus J_\alpha$ and denote $\tilde J := \{J_\beta,\ \beta \ne \alpha;\ J_\alpha\cup\{j\}\}$. Similarly we define $\tilde K$. We first consider the diagonal entries. Applying Schur's inverse formula, we may write
$$R^{(J,K)}_{j_\alpha j_\alpha} = \frac{-1}{-w - m_n^{([\alpha+1]+m, J, K)} + \dfrac{|z|^2}{w + m_n^{([\alpha-1], J, K)}(z,w)}}\bigl(1 - \varepsilon^{(J,K)}_{j_\alpha}R^{(J,K)}_{j_\alpha j_\alpha}\bigr),$$
where
$$\varepsilon^{(J,K)}_{j_\alpha} := \tilde\varepsilon^{(J,K)}_{j_\alpha} + \frac{|z|^2}{w + m_n^{([\alpha-1], J, K)}(z,w)}\,R^{(\tilde J, K)}_{j_{\alpha+m},\,j_{\alpha+m}}\,\hat\varepsilon^{(J,K)}_{j_\alpha}.$$
Here $\tilde\varepsilon^{(J,K)}_{j_\alpha} := \tilde\varepsilon^{(J,K)}_{j_\alpha,1} + \dots + \tilde\varepsilon^{(J,K)}_{j_\alpha,4}$,
$$\tilde\varepsilon^{(J,K)}_{j_\alpha,1} := \frac1n\sum_{k\in T\setminus K_{[\alpha+1]}}R^{(J,K)}_{k_{[\alpha+1]+m},\,k_{[\alpha+1]+m}} - \frac1n\sum_{k\in T\setminus K_{[\alpha+1]}}R^{(\tilde J,K)}_{k_{[\alpha+1]+m},\,k_{[\alpha+1]+m}},$$
$$\tilde\varepsilon^{(J,K)}_{j_\alpha,2} := \frac1n\sum_{l\ne k\in T\setminus K_{[\alpha+1]}}X^{(\alpha)}_{jk}X^{(\alpha)}_{jl}\,R^{(\tilde J,K)}_{l_{[\alpha+1]+m},\,k_{[\alpha+1]+m}},\qquad
\tilde\varepsilon^{(J,K)}_{j_\alpha,3} := \frac1n\sum_{l\in T\setminus K_{[\alpha+1]}}\bigl([X^{(\alpha)}_{jl}]^2 - 1\bigr)R^{(\tilde J,K)}_{l_{[\alpha+1]+m},\,l_{[\alpha+1]+m}},$$
$$\tilde\varepsilon^{(J,K)}_{j_\alpha,4} := \frac{z}{\sqrt n}\sum_{l\in T\setminus K_{[\alpha+1]}}X^{(\alpha)}_{jl}\,R^{(\tilde J,K)}_{j_{[\alpha+1]+m},\,l_{[\alpha+1]+m}},$$
and $\hat\varepsilon^{(J,K)}_{j_\alpha} := \hat\varepsilon^{(J,K)}_{j_\alpha,1} + \hat\varepsilon^{(J,K)}_{j_\alpha,2} + \hat\varepsilon^{(J,K)}_{j_\alpha,3}$, where
$$\hat\varepsilon^{(J,K)}_{j_\alpha,1} := \frac1n\sum_{l\in T\setminus J_{[\alpha-1]}}R^{(J,K)}_{l_{[\alpha-1]},\,l_{[\alpha-1]}} - \frac1n\sum_{l\in T\setminus J_{[\alpha-1]}}R^{(\tilde J,\tilde K)}_{l_{[\alpha-1]},\,l_{[\alpha-1]}},$$
$$\hat\varepsilon^{(J,K)}_{j_\alpha,2} := \frac1n\sum_{k\ne l\in T\setminus J_{[\alpha-1]}}X^{([\alpha-1])}_{kj}X^{([\alpha-1])}_{lj}\,R^{(\tilde J,\tilde K)}_{k_{[\alpha-1]},\,l_{[\alpha-1]}},\qquad
\hat\varepsilon^{(J,K)}_{j_\alpha,3} := \frac1n\sum_{l\in T\setminus J_{[\alpha-1]}}\bigl([X^{([\alpha-1])}_{lj}]^2 - 1\bigr)R^{(\tilde J,\tilde K)}_{l_{[\alpha-1]},\,l_{[\alpha-1]}}.$$
We conclude from (4.8) and Lemma 5.4 that there exist a positive constant $H$ depending on $u, V, z$ and a positive constant $A_1$ depending on $H$ such that the following inequality holds:
$$|R^{(J,K)}_{j_\alpha j_\alpha}(v)|\,I^{(J,K)}_\tau(v) \le H\bigl(1 + |\varepsilon^{(J,K)}_{j_\alpha}R^{(J,K)}_{j_\alpha j_\alpha}|\bigr)I^{(J,K)}_\tau(v).$$
Hence,
$$\mathbb E\bigl[|R^{(J,K)}_{j_\alpha j_\alpha}|^q I^{(J,K)}_\tau(v)\bigr] \le 2^qH^q\Bigl(1 + \mathbb E^{\frac12}\bigl[|\varepsilon^{(J,K)}_{j_\alpha}|^{2q}I^{(J,K)}_\tau(v)\bigr]\,\mathbb E^{\frac12}\bigl[|R^{(J,K)}_{j_\alpha j_\alpha}|^{2q}I^{(J,K)}_\tau(v)\bigr]\Bigr).$$
It follows from Proposition 5.2, (5.8) and (5.9) that
$$\mathbb E\bigl[|R^{(J,K)}_{j_\alpha j_\alpha}(v)|^{2q}I^{(J,K)}_\tau(v)\bigr] \le s_0^{2q}C_0^{2q}. \tag{5.10}$$
The Cauchy–Schwarz inequality and Lemma 5.4 imply
$$\mathbb E\bigl[|\varepsilon^{(J,K)}_{j_\alpha}|^{2q}I^{(J,K)}_\tau\bigr] \le 2^{2q}\,\mathbb E\bigl[|\tilde\varepsilon^{(J,K)}_{j_\alpha}|^{2q}I^{(J,K)}_\tau\bigr] + 2^{2q}H^{2q}|z|^{4q}\,\mathbb E^{\frac12}\bigl[|\hat\varepsilon^{(J,K)}_{j_\alpha}|^{4q}I^{(J,K)}_\tau\bigr]\,\mathbb E^{\frac12}\bigl[|R^{(\tilde J,K)}_{j_{\alpha+m},j_{\alpha+m}}|^{4q}I^{(J,K)}_\tau\bigr].$$
Similarly to (5.10),
$$\mathbb E\bigl[|R^{(\tilde J,K)}_{j_{\alpha+m},j_{\alpha+m}}(v)|^qI^{(J,K)}_\tau(v)\bigr] \le s_0^q\,\mathbb E\bigl[|R^{(\tilde J,K)}_{j_{\alpha+m},j_{\alpha+m}}(s_0v)|^qI^{(\tilde J,K)}_{2\tau}(s_0v)\bigr] \le (C_0s_0)^q. \tag{5.11}$$
Applying (5.8) we obtain
$$\mathbb E\bigl[|\tilde\varepsilon^{(J,K)}_{j_\alpha}|^qI^{(J,K)}_\tau\bigr] \le \mathbb E\bigl[|\tilde\varepsilon^{(J,K)}_{j_\alpha}|^qI^{(\tilde J,K)}_{3\tau/2}(s_0v)\bigr].$$
It is easy to see from Lemmas A.1–A.4 in the appendix that the moment bounds for $\tilde\varepsilon^{(J,K)}_{j_\alpha,2}$ and $\tilde\varepsilon^{(J,K)}_{j_\alpha,4}$ depend on the moments of the off-diagonal entries of the resolvent, which are not $k$-descent functions. Here we may use
$$\mathbf R(w_1) - \mathbf R(w_2) = (w_1 - w_2)\,\mathbf R(w_1)\mathbf R(w_2), \quad w_1, w_2 \in \mathbb C_+, \tag{5.12}$$
which gives
$$|R^{(\tilde J,K)}_{j_{\alpha'}k_{\beta'}}(z,v)| \le |R^{(\tilde J,K)}_{j_{\alpha'}k_{\beta'}}(z,sv)| + s\,\operatorname{Im}^{\frac12}R^{(\tilde J,K)}_{j_{\alpha'}j_{\alpha'}}(z,sv)\,\operatorname{Im}^{\frac12}R^{(\tilde J,K)}_{k_{\beta'}k_{\beta'}}(z,v).$$
Now the desired bound follows from Proposition 5.2 and assumption (5.9). Lemmas A.1–A.4 in the appendix imply
$$\mathbb E\bigl[|\tilde\varepsilon^{(J,K)}_{j_\alpha}|^qI^{(\tilde J,K)}_{3\tau/2}(s_0v)\bigr] \le \Bigl(\frac{CC_0s_0q}{(nv)^{1-\alpha}}\Bigr)^q.$$
Here $C$ depends on $z$ as well. Similarly, applying Lemmas A.5–A.7 from the appendix we may estimate
$$\mathbb E\bigl[|\hat\varepsilon^{(J,K)}_{j_\alpha}|^qI^{(\tilde J,\tilde K)}_{2\tau}(s_0v)\bigr] \le \Bigl(\frac{CC_0s_0q}{(nv)^{1-\alpha}}\Bigr)^q.$$
The last two inequalities yield the following bound:
$$\mathbb E\bigl[|\varepsilon^{(J,K)}_{j_\alpha}|^qI^{(J,K)}_\tau\bigr] \le \Bigl(\frac{CC_0s_0q}{(nv)^{1-\alpha}}\Bigr)^q.$$
Hence, choosing $A_1$ sufficiently large and $A_2$ sufficiently small, we may show that
$$\mathbb E\bigl[|R^{(J,K)}_{j_\alpha j_\alpha}(v)|^qI^{(J,K)}_\tau\bigr] \le C_0^q.$$
To deal with the off-diagonal entries we use the following representation:
$$R^{(J,K)}_{j_\alpha,k_\beta} = -\frac{1}{\sqrt n}\sum_{l\in T\setminus K_{[\alpha+1]}}X^{(\alpha)}_{jl}\,R^{(\tilde J,K)}_{l_{[\alpha+1]+m},\,k_\beta}\,R^{(J,K)}_{j_\alpha,j_\alpha} - \frac{z}{\sqrt n}\sum_{l\in T\setminus J_{[\alpha-1]}}X^{([\alpha-1])}_{lj}\,R^{(\tilde J,\tilde K)}_{l_{[\alpha-1]},\,k_\beta}\,R^{(\tilde J,K)}_{j_{\alpha+m},\,j_{\alpha+m}}\,R^{(J,K)}_{j_\alpha j_\alpha}.$$
Applying now Rosenthal's inequality (e.g. [43, Theorem 3] and [33, Inequality (A)]) and assumption (5.9) we may show that one may choose $A_1$ sufficiently large and $A_2$ sufficiently small such that
$$\mathbb E\,|R^{(J,K)}_{j_\alpha k_\beta}(v)|^q\,I^{(J,K)}_\tau \le C_0^q. \qquad\square$$
Proof of Lemma 5.3.
Let us choose some sufficiently large constant $C_0 > \max(1/V, H)$ and fix $s_0 := 2^{1-\alpha}$. Here $H$ is defined in Lemma 5.4. We also choose $A_1$ and $A_2$ as in Lemma 5.6. We fix $u \in J_\varepsilon$. Let $L := \bigl[\log_{s_0}\bigl(V\sqrt{\gamma(u)}/v_0\bigr)\bigr] + 1$. Since $\|\mathbf R^{(J)}(V)\| \le V^{-1}$ we may write
$$\max_{(J,K)\in\mathcal J_L}\,\max_{\alpha,\beta=1,\dots,m}\,\max_{l_\alpha\in T_{J_\alpha},\,k_\beta\in T_{K_\beta}}\mathbb E\,|R^{(J,K)}_{l_\alpha k_\beta}(V)|^q\,I^{(J,K)}_{(L+1)\tau}(V) \le C_0^q$$
for all $1 \le p \le A_2(nV)^{1-\alpha}$. Fix arbitrary $v$ with $V/s_0 \le v \le V$ and $p$ with $1 \le p \le A_2(nv)^{1-\alpha}$. Lemma 5.6 yields
$$\max_{(J,K)\in\mathcal J_{L-1}}\,\max_{\alpha,\beta=1,\dots,m}\,\max_{l_\alpha\in T_{J_\alpha},\,k_\beta\in T_{K_\beta}}\mathbb E\,|R^{(J,K)}_{l_\alpha k_\beta}(v)|^q\,I^{(J,K)}_{L\tau}(v) \le C_0^q$$
for $1 \le p \le A_2(nV/s_0)^{1-\alpha}$, $v \ge V/s_0$. We may repeat this procedure $L$ times and finally obtain
$$\max_{l,k=1,\dots,nm}\mathbb E\,|R_{lk}(v)|^p\,I_\tau(v) \le C^p$$
for $1 \le p \le A_2(nV/s_0^L)^{1-\alpha} \le A_2(n\tilde v)^{1-\alpha}$ and $v \ge v_0/\sqrt{\gamma(u)}$. □

6. Estimation of $T_n$

In this section we prove the following theorem.
Theorem 6.1.
For any $w \in D$ and all $1 \le p \le A_1\log n$,
$$\max_{1\le\alpha\le m}\mathbb E\,|T^{(\alpha)}_n|^p \le \frac{C^pp^pE^p(\kappa p)}{(nv)^p},$$
where $E(q)$ is defined in (4.15) and $\kappa$ is some positive constant depending on $\delta$ only.

We shall proceed as in [24], applying Stein's method.

6.1.
Framework for moment bounds of some statistics of r.v.
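Before the formal statement, the basic decoupling mechanism behind this framework is that a factor with zero conditional mean given the other variables, multiplied by any function of those other variables, has zero expectation. The following toy computation (exhaustive over $\pm1$ variables; all names are illustrative, not the paper's objects) checks this exactly:

```python
import itertools

# Exhaustive check of the decoupling used in the framework below:
# if E_j(xi_j) = 0 and fhat_j is M^{(j)}-measurable (does not depend on X_j),
# then E[xi_j * fhat_j] = 0.
n = 4
total = 0.0
for xs in itertools.product([-1.0, 1.0], repeat=n):   # X_j = +-1 with prob 1/2
    for j in range(n):
        xi_j = xs[j]                                  # zero conditional mean given the rest
        fhat_j = sum(x for k, x in enumerate(xs) if k != j)  # depends only on the others
        total += xi_j * fhat_j
total /= 2 ** n                                       # exact expectation over the hypercube
assert total == 0.0
```

In the lemma below this is exactly why the leading term (with $f_j$ replaced by its $\mathcal M^{(j)}$-measurable approximation $\hat f_j$) vanishes, leaving only difference terms to estimate.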
We start from the following lemma, which provides a framework for estimating the moments of certain statistics of independent random variables. Let $X_1, \dots, X_n$ be independent r.v. and denote
$$\mathcal M := \sigma\{X_1, \dots, X_n\}, \qquad \mathcal M^{(j)} := \sigma\{X_1, \dots, X_{j-1}, X_{j+1}, \dots, X_n\}.$$
For simplicity we introduce $\mathbb E_j(\cdot) := \mathbb E(\cdot\,|\,\mathcal M^{(j)})$. Assume that $\xi_j, f_j$, $j = 1, \dots, n$, are $\mathcal M$-measurable r.v. and
$$\mathbb E_j(\xi_j) = 0. \tag{6.1}$$
We consider the following statistic:
$$T^*_n := \sum_{j=1}^n\xi_jf_j + \mathcal R,$$
where $\mathcal R$ is some $\mathcal M$-measurable function. Moreover, let $\hat f_j$ be an arbitrary $\mathcal M^{(j)}$-measurable r.v. and $\tilde T^{(j)}_n := \mathbb E_j(T^*_n)$. Lemma 6.2.
For all $p \ge 2$ there exists an absolute constant $C$ such that
$$\mathbb E|T^*_n|^p \le C^p\bigl(\mathcal A^p + p^p\mathcal B^p + p^{p-1}\mathcal C + p^{p-1}\mathcal D + \mathbb E|\mathcal R|^p\bigr),$$
where
$$\mathcal A := \mathbb E^{\frac1p}\Bigl(\sum_{j=1}^n\mathbb E_j|\xi_j(f_j - \hat f_j)|\Bigr)^p,\qquad \mathcal B := \mathbb E^{\frac1p}\Bigl(\sum_{j=1}^n\mathbb E_j\bigl(|\xi_j(T^*_n - \tilde T^{(j)}_n)|\bigr)|\hat f_j|\Bigr)^{\frac p2},$$
$$\mathcal C := \sum_{j=1}^n\mathbb E\,|\xi_j|\,|T^*_n - \tilde T^{(j)}_n|^{p-2}\,|\hat f_j|,\qquad \mathcal D := \sum_{j=1}^n\mathbb E\,|\xi_j|\,|f_j - \hat f_j|\,|T^*_n - \tilde T^{(j)}_n|^{p-1}.$$
Remark.
We conclude the statement of the last lemma with several remarks. (1) It follows from the definitions of $\mathcal A, \mathcal B, \mathcal C, \mathcal D$ that instead of estimating high moments of $\xi_j$ one only needs to estimate conditional expectations $\mathbb E_j|\xi_j|^\alpha$ for some small $\alpha$; typically $\alpha \le 4$. (2) One needs to choose an appropriate $\mathcal M^{(j)}$-measurable approximation $\hat f_j$ of $f_j$ and estimate $T^*_n - \tilde T^{(j)}_n$. (3) This lemma may be generalized as follows. We may assume that
$$T^*_n := \sum_{\nu=1}^m\sum_{j=1}^n\xi_{j\nu}f_{j\nu} + \mathcal R, \tag{6.2}$$
where $\xi_{j\nu}, f_{j\nu}$, $j = 1, \dots, n$, $\nu = 1, \dots, m$, are $\mathcal M$-measurable r.v. such that $\mathbb E_j(\xi_{j\nu}) = 0$, $\nu = 1, \dots, m$. Repeating the previous calculations we obtain
$$\mathbb E|T^*_n|^p \le C^p\Bigl(\sum_{\nu=1}^m\bigl(\mathcal A_\nu^p + p^p\mathcal B_\nu^p + p^{p-1}\mathcal C_\nu\bigr) + \mathbb E|\mathcal R|^p\Bigr),$$
where $\mathcal A_\nu, \mathcal B_\nu, \mathcal C_\nu$ are defined similarly to the corresponding quantities in Lemma 6.2. Proof of Lemma 6.2.
Let us introduce the following function:
$$\varphi(\zeta) := \zeta|\zeta|^{p-2}. \tag{6.3}$$
In this notation $\mathbb E|T^*_n|^p$ may be rewritten as
$$\mathbb E|T^*_n|^p = \mathbb E\,T^*_n\varphi(T^*_n) = \sum_{j=1}^n\mathbb E\,\xi_jf_j\varphi(T^*_n) + \mathbb E\,\mathcal R\varphi(T^*_n) = \sum_{l=1}^4\mathcal A_l,$$
where
$$\mathcal A_1 := \sum_{j=1}^n\mathbb E\,\xi_j\hat f_j\varphi(\tilde T^{(j)}_n),\qquad \mathcal A_2 := \sum_{j=1}^n\mathbb E\,\xi_j\hat f_j\bigl(\varphi(T^*_n) - \varphi(\tilde T^{(j)}_n)\bigr),$$
$$\mathcal A_3 := \sum_{j=1}^n\mathbb E\,\xi_j(f_j - \hat f_j)\varphi(T^*_n),\qquad \mathcal A_4 := \mathbb E\,\mathcal R\varphi(T^*_n).$$
It follows from (6.1) that $\mathcal A_1 = 0$. Applying the following useful inequality,
$$(x+y)^q \le ex^q + (q+1)^qy^q, \quad x, y > 0,\ q \ge 1, \tag{6.4}$$
we estimate $\mathcal A_3$ by the sum of the following terms:
$$\mathcal A_{31} := e\sum_{j=1}^n\mathbb E\,|\xi_j||f_j - \hat f_j||\tilde T^{(j)}_n|^{p-1},\qquad \mathcal A_{32} := p^{p-1}\sum_{j=1}^n\mathbb E\,|\xi_j||f_j - \hat f_j||T^*_n - \tilde T^{(j)}_n|^{p-1}.$$
The term $\mathcal A_{32}$ we leave unchanged; it will appear in the final bound. Hölder's, Jensen's and Young's inequalities imply
$$\mathcal A_{31} \le \mathbb E^{\frac{p-1}p}|T^*_n|^p\,\mathbb E^{\frac1p}\Bigl(\sum_{j=1}^n\mathbb E_j|\xi_j||f_j - \hat f_j|\Bigr)^p \le \rho\,\mathbb E|T^*_n|^p + C_\rho\,\mathbb E\Bigl(\sum_{j=1}^n\mathbb E_j|\xi_j||f_j - \hat f_j|\Bigr)^p. \tag{6.5}$$
It follows from the Taylor formula that
$$\mathcal A_2 = \sum_{j=1}^n\mathbb E\,\xi_j\hat f_j(T^*_n - \tilde T^{(j)}_n)\,\varphi'\bigl(\tilde T^{(j)}_n + \theta(T^*_n - \tilde T^{(j)}_n)\bigr),$$
where $\theta$ is a r.v. uniformly distributed on $[0,1]$, independent of $X_j$, $j = 1, \dots, n$. Taking absolute values and using (6.4) we get $|\mathcal A_2| \le \mathcal A_{21} + \mathcal A_{22}$, where
$$\mathcal A_{21} := ep\sum_{j=1}^n\mathbb E\,\mathbb E_j\bigl(|\xi_j(T^*_n - \tilde T^{(j)}_n)|\bigr)|\hat f_j||\tilde T^{(j)}_n|^{p-2},\qquad \mathcal A_{22} := p^{p-1}\sum_{j=1}^n\mathbb E\,\mathbb E_j\bigl(|\xi_j||T^*_n - \tilde T^{(j)}_n|^{p-1}\bigr)|\hat f_j|.$$
Applying Hölder's and Jensen's inequalities we obtain
$$\mathcal A_{21} \le Cp\,\mathbb E^{\frac2p}\Bigl(\sum_{j=1}^n\mathbb E_j\bigl(|\xi_j(T^*_n - \tilde T^{(j)}_n)|\bigr)|\hat f_j|\Bigr)^{\frac p2}\,\mathbb E^{\frac{p-2}p}|T^*_n|^p.$$
Now Young's inequality implies
$$\mathcal A_{21} \le C^pp^p\,\mathbb E\Bigl(\sum_{j=1}^n\mathbb E_j\bigl(|\xi_j(T^*_n - \tilde T^{(j)}_n)|\bigr)|\hat f_j|\Bigr)^{\frac p2} + \rho\,\mathbb E|T^*_n|^p. \tag{6.6}$$
Finally, for the term $\mathcal A_4$ we may write
$$\mathcal A_4 \le C^p\,\mathbb E|\mathcal R|^p + \rho\,\mathbb E|T^*_n|^p. \tag{6.7}$$
Inequalities (6.5), (6.6) and (6.7) yield the claim of the lemma. □ Proof of Theorem 6.1.
We consider the case $\alpha = 1$ only. For simplicity we shall write $T_n = T^{(1)}_n$. First we note that $T_n$ is of the form (6.2). Indeed, here
$$\xi_{j1} = \tilde\varepsilon_{j,1} + \dots + \tilde\varepsilon_{j,4},\qquad \xi_{j2} = \hat\varepsilon_{j,1} + \hat\varepsilon_{j,2},\qquad f_{j1} = R_{jj},\qquad f_{j2} = -\frac{|z|^2R_{jj}R^{(j)}_{j_{m+1},j_{m+1}}}{w + m^{(m)}_n(z,w)},$$
$$\mathcal R = \frac1n\sum_{j=1}^n\tilde\varepsilon_{j,3}f_{j1} + \frac1n\sum_{j=1}^n\hat\varepsilon_{j,3}f_{j2}.$$
We introduce the following smoothed version of $I(v)$. We denote
$$h_{\alpha,\beta}(x) := \begin{cases}1, & 0 < x < \alpha\operatorname{Im}s(z,w),\\ 1 - \dfrac{x - \alpha\operatorname{Im}s(z,w)}{(\beta-\alpha)\operatorname{Im}s(z,w)}, & \alpha\operatorname{Im}s(z,w) < x < \beta\operatorname{Im}s(z,w),\\ 0, & \text{otherwise},\end{cases}$$
and write $H(v) := \prod_{k=1}^{K_v}\prod_{\alpha=1}^m h_{\tau,2\tau}\bigl(|\Lambda^{(\alpha)}_n(v_k)|\bigr)$. It is easy to see that
$$I(v) \le H(v) \le I_{3\tau/2}(v) \le I^{(j)}_{2\tau}(v). \tag{6.8}$$
To simplify the notation below we shall often omit the index $\tau$ from $I_\tau(v)$ and all its counterparts. We will also write $I(v) \le I^{(j)}(v)$, meaning that $I_\tau(v) \le I^{(j)}_{\tau'}(v)$ for some fixed $\tau' > \tau$. Using the notation $H(v)$ we write
$$\mathbb E|T_n|^pI(v) \le \mathbb E|T_n|^pH^p(v).$$
For simplicity we set $T_{n,h} := T_nH(v)$. Recall the definition (6.3) of $\varphi(\zeta)$. We rewrite the r.h.s. of the previous inequality as
$$\mathbb E|T_{n,h}|^p = \frac1n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,\xi_{j\nu}f_{j\nu}H(v)\varphi(T_{n,h}) + \mathbb E\,\mathcal RH(v)\varphi(T_{n,h}).$$
We denote
$$\mathcal A := \frac1n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,\xi_{j\nu}f_{j\nu}H(v)\varphi(T_{n,h}),$$
so that $\mathbb E|T_{n,h}|^p = \mathcal A + \mathbb E\,\mathcal RH(v)\varphi(T_{n,h})$.

Estimate of $\mathbb E\,\mathcal RH(v)\varphi(T_{n,h})$. Simple calculations imply
$$\mathcal R = \frac1{n^2}\sum_{j=1}^n\sum_{l=1}^n[R_{jl}]^2 - \frac{|z|^2}{n^2(w + m^{(m)}_n(z,w))}\sum_{j=1}^n\sum_{l=1}^n[R_{l_m,j}]^2R^{(j)}_{j_{m+1},j_{m+1}} - \frac{|z|^2}{n^2(w + m^{(m)}_n(z,w))}\sum_{j=1}^n\sum_{l=1}^n[R_{l_m,j_{m+1}}]^2R_{jj}.$$
Applying (6.8), Lemma 5.4 and Lemma A.12 in the appendix we conclude that
$$|\mathcal R|H(v) \lesssim \frac{\operatorname{Im}s(z,w)}{nv} + \frac{|z|^2}{n^2v}\sum_{j=1}^n\operatorname{Im}R_{jj}\,|R^{(j)}_{j_{m+1},j_{m+1}}|\,I(v) + \frac{|z|^2}{n^2v}\sum_{j=1}^n\operatorname{Im}R_{j_{m+1},j_{m+1}}\,|R_{jj}|\,I(v).$$
Using now Hölder's inequality and Young's inequality we arrive at
$$\mathbb E\,\mathcal RH(v)\varphi(T_{n,h}) \le C^p\,\mathbb E|\mathcal R|^pH^p(v) + \rho\,\mathbb E|T_{n,h}|^p \le \frac{C^p(1+|z|^2)^pA_0^p(2p)}{(nv)^p} + \rho\,\mathbb E|T_{n,h}|^p.$$
Estimate of $\mathcal A$. The estimation of $\mathcal A$ is more involved. Let us introduce the conditional expectations $\mathbb E_j(\cdot) := \mathbb E(\cdot\,|\,\mathcal M^{(j)})$ (resp. $\mathbb E_{j,j}(\cdot) := \mathbb E(\cdot\,|\,\mathcal M^{(j,j)})$) with respect to the $\sigma$-algebras $\mathcal M^{(j)}$ (resp. $\mathcal M^{(j,j)}$). Here $\mathcal M^{(j)}$ (resp. $\mathcal M^{(j,j)}$) is generated by all $X^{(\alpha)}_{lk}$, $l, k = 1, \dots, n$, $\alpha = 1, \dots, m$, except $X^{(1)}_{jk}$, $k = 1, \dots, n$ (resp. except $X^{(1)}_{jk}, X^{(1)}_{lj}$, $k, l = 1, \dots, n$). Moreover, we introduce the notations
$$\tilde T^{(j)}_{n,h} := \mathbb E(T_{n,h}\,|\,\mathcal M^{(j)}),\quad \tilde T^{(j,j)}_{n,h} := \mathbb E(T_{n,h}\,|\,\mathcal M^{(j,j)}),\quad \tilde\Lambda^{(j)}_n := \mathbb E(\Lambda_n\,|\,\mathcal M^{(j)}),\quad \tilde\Lambda^{(j,j)}_n := \mathbb E(\Lambda_n\,|\,\mathcal M^{(j,j)}).$$
We rewrite $\mathcal A$ as $\mathcal A = \mathcal A_1 + \dots + \mathcal A_4$, where
$$\mathcal A_1 := \frac1n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,\xi_{j\nu}\tilde f_{j\nu}\tilde H^{(j,j)}(v)\varphi(\tilde T^{(j,j)}_{n,h}),\qquad \mathcal A_2 := \frac1n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,\xi_{j\nu}\tilde f_{j\nu}\bigl[H(v) - \tilde H^{(j,j)}(v)\bigr]\varphi(\tilde T^{(j,j)}_{n,h}),$$
$$\mathcal A_3 := \frac1n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,\xi_{j\nu}\tilde f_{j\nu}H(v)\bigl[\varphi(T_{n,h}) - \varphi(\tilde T^{(j,j)}_{n,h})\bigr],\qquad \mathcal A_4 := \frac1n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,\xi_{j\nu}\bigl[f_{j\nu} - \tilde f_{j\nu}\bigr]H(v)\varphi(T_{n,h}).$$
Moreover, it is easy to check that $\mathcal A_1 = 0$.

6.4.1. Bound for $\mathcal A_2$. Taking conditional expectation and applying Hölder's inequality it is straightforward to check that
$$\mathcal A_2 \le \mathbb E^{\frac{p-1}p}|T_{n,h}|^p\,\mathbb E^{\frac1p}\Bigl(\frac1n\sum_{j=1}^n\mathbb E_{j,j}\bigl(|\xi_{j\nu}\tilde f_{j\nu}||H(v) - \tilde H^{(j,j)}(v)|\bigr)\Bigr)^p.$$
Moreover, Young's inequality implies that
$$\mathcal A_2 \le \rho\,\mathbb E|T_{n,h}|^p + C^p\,\mathbb E\Bigl(\frac1n\sum_{j=1}^n\mathbb E_{j,j}\bigl(|\xi_{j\nu}\tilde f_{j\nu}||H(v) - \tilde H^{(j,j)}(v)|\bigr)\Bigr)^p. \tag{6.9}$$
Let us denote for simplicity
$$\mathcal B := \frac1n\sum_{j=1}^n\mathbb E\bigl[\mathbb E_{j,j}\bigl(|\xi_{j\nu}\tilde f_{j\nu}||H(v) - \tilde H^{(j,j)}(v)|\bigr)\bigr]^p.$$
To estimate the r.h.s. of (6.9) it is enough to bound $\mathcal B$.
We may use Lemma 6.3 to estimate the difference $H(v) - \tilde H^{(j,j)}(v)$. We get
$$\mathcal B \le \frac{C^pp^p}{n}\sum_{j=1}^n\sum_{\alpha=1}^m\sum_{k=1}^{K_v}\frac{1}{\operatorname{Im}^ps(z,v_k)}\,\mathbb E\bigl[\mathbb E_{j,j}\bigl(|\xi_{j\nu}||\Lambda^{(\alpha)}_n(v_k) - \tilde\Lambda^{(\alpha,j,j)}_n(v_k)|\bigr)I(v)\bigr]^p, \tag{6.10}$$
where $v_k := vs^k$, $k \ge 0$. We also used the fact that $K_v^p \le p^p$ and that $\tilde f_{j\nu}$ is $\mathcal M^{(j,j)}$-measurable. We fix $j, \alpha$ and $k$ and study
$$\mathbb E\bigl[\mathbb E_{j,j}\bigl(|\xi_{j\nu}||\Lambda^{(\alpha)}_n(v_k) - \tilde\Lambda^{(\alpha,j,j)}_n(v_k)|\bigr)I(v)\bigr]^p.$$
Applying Lemma 6.4 we get
$$\mathbb E\bigl[\mathbb E_{j,j}\bigl(|\xi_{j\nu}||\Lambda^{(\alpha)}_n(v_k) - \tilde\Lambda^{(\alpha,j,j)}_n(v_k)|\bigr)I(v)\bigr]^p \le \frac{C^pE^p(\kappa p)}{(nv)^{2p}}.$$
Since $\operatorname{Im}s(z,v) \ge (nv)^{-1}$ for $w \in D$, the last inequality and (6.10) imply
$$\mathcal A_2 \le \rho\,\mathbb E|T_{n,h}|^p + \frac{C^pp^pE^p(\kappa p)}{(nv)^p}.$$
Bound for $\mathcal A_3$. Applying Taylor's formula,
$$|\varphi(T_{n,h}) - \varphi(\tilde T^{(j,j)}_{n,h})| \le p\,|\tilde T^{(j,j)}_{n,h} + \theta(T_{n,h} - \tilde T^{(j,j)}_{n,h})|^{p-2}\,|T_{n,h} - \tilde T^{(j,j)}_{n,h}|.$$
It is easy to check that
$$\tilde T^{(j,j)}_{n,h} = \tilde T^{(j,j)}_n\tilde H^{(j,j)} - \mathbb E_{j,j}\bigl[(T_n - \tilde T^{(j,j)}_n)H\bigr] \tag{6.11}$$
and
$$T_{n,h} - \tilde T^{(j,j)}_n\tilde H^{(j,j)} = (T_n - \tilde T^{(j,j)}_n)\tilde H^{(j,j)} + T_n(H - \tilde H^{(j,j)}). \tag{6.12}$$
Hence,
$$|\varphi(T_{n,h}) - \varphi(\tilde T^{(j,j)}_{n,h})| \le p\,|\tilde T^{(j,j)}_{n,h}|^{p-2}|T_{n,h} - \tilde T^{(j,j)}_{n,h}| + p^{p-1}|T_n|^{p-2}|H - \tilde H^{(j,j)}|^{p-2}|T_{n,h} - \tilde T^{(j,j)}_{n,h}|$$
$$+ p^{p-1}|T_n - \tilde T^{(j,j)}_n|^{p-2}|T_{n,h} - \tilde T^{(j,j)}_{n,h}|H^{p-2}(v) + p^{p-1}\mathbb E^{p-2}_{j,j}\bigl[|T_n - \tilde T^{(j,j)}_n|H\bigr]|T_{n,h} - \tilde T^{(j,j)}_{n,h}|.$$
We obtain $\mathcal A_3 \lesssim \mathcal A_{31} + \dots + \mathcal A_{34}$, where
$$\mathcal A_{31} := \frac pn\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||\tilde T^{(j,j)}_{n,h}|^{p-2}|T_{n,h} - \tilde T^{(j,j)}_{n,h}|H(v),$$
$$\mathcal A_{32} := \frac{p^{p-1}}n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||T_n|^{p-2}|H - \tilde H^{(j,j)}|^{p-2}|T_{n,h} - \tilde T^{(j,j)}_{n,h}|H(v),$$
$$\mathcal A_{33} := \frac{p^{p-1}}n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||T_n - \tilde T^{(j,j)}_n|^{p-2}|T_{n,h} - \tilde T^{(j,j)}_{n,h}|H^{p-2}(v),$$
$$\mathcal A_{34} := \frac{p^{p-1}}n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}|\,\mathbb E^{p-2}_{j,j}\bigl[|T_n - \tilde T^{(j,j)}_n|H\bigr]\,|T_{n,h} - \tilde T^{(j,j)}_{n,h}|H(v).$$
Using (6.11)–(6.12), it follows from these representations that $\mathcal A_{31} \le \mathcal A_{311} + \mathcal A_{312} + \mathcal A_{313}$, where
$$\mathcal A_{311} := \frac pn\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||\tilde T^{(j,j)}_{n,h}|^{p-2}|T_n - \tilde T^{(j,j)}_n|I(v),$$
$$\mathcal A_{312} := \frac pn\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||\tilde T^{(j,j)}_{n,h}|^{p-2}|T_{n,h}||H - \tilde H^{(j,j)}|,$$
$$\mathcal A_{313} := \frac pn\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||\tilde T^{(j,j)}_{n,h}|^{p-2}\,\bigl|\mathbb E_{j,j}\bigl[(T_n - \tilde T^{(j,j)}_n)H\bigr]\bigr|\,I(v).$$
Let us consider the term $\mathcal A_{311}$. We may apply Lemma 6.5 and bound this term by the sum of three terms:
$$\mathcal A_{311} \le \frac{p\operatorname{Im}s}{n}\sum_{\alpha=1}^{2m}\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||\tilde T_{n,h}|^{p-2}|\Lambda_n - \tilde\Lambda^{(\alpha,j,j)}_n|I(v)$$
$$+ \frac{p}{n(nv)}\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||\tilde T_{n,h}|^{p-2}\Bigl[\operatorname{Im}R_{jj}|R_{jj}| + \operatorname{Im}R^{(j)}_{j_{m+1},j_{m+1}}|R^{(j)}_{j_{m+1},j_{m+1}}|\Bigr]I(v)$$
$$+ \frac{p}{n(nv)}\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||\tilde T_{n,h}|^{p-2}\,\mathbb E_{j,j}\Bigl[\operatorname{Im}R_{jj}|R_{jj}| + \operatorname{Im}R^{(j)}_{j_{m+1},j_{m+1}}|R^{(j)}_{j_{m+1},j_{m+1}}|\Bigr]I(v)$$
$$=: \mathcal A^{(1)}_{311} + \mathcal A^{(2)}_{311} + \mathcal A^{(3)}_{311}.$$
The last two terms, $\mathcal A^{(2)}_{311}$ and $\mathcal A^{(3)}_{311}$, may easily be bounded as follows:
$$\mathcal A^{(j)}_{311} \le \frac{C^pp^pE^p(\kappa p)}{(nv)^p} + \rho\,\mathbb E|T_{n,h}|^p, \quad j = 2, 3.$$
Indeed, one may apply Hölder's inequality and Young's inequality. For the estimation of $\mathcal A^{(1)}_{311}$ we first use Young's inequality and get
$$\mathcal A^{(1)}_{311} \le \rho\,\mathbb E|T_{n,h}|^p + \frac{C^pp^p\operatorname{Im}^ps}{n}\sum_{j=1}^n\sum_{\alpha=1}^{2m}\sum_{\nu=1}^2\mathbb E\bigl[\mathbb E_{j,j}\bigl(|\xi_{j\nu}||\Lambda^{(\alpha)}_n - \tilde\Lambda^{(\alpha,j,j)}_n|I(v)\bigr)\bigr]^p.$$
Applying Lemma 6.4 we get
$$\mathcal A^{(1)}_{311} \le \rho\,\mathbb E|T_{n,h}|^p + \frac{C^pp^pE^p(\kappa p)}{(nv)^p}.$$
It is easy to see that the term $\mathcal A_{312}$ may be estimated similarly. To finish the estimation of $\mathcal A_{31}$ it remains to estimate $\mathcal A_{313}$. Applying Lemma 6.3 we obtain
$$\mathcal A_{313} \lesssim \frac pn\sum_{j=1}^n\sum_{\nu=1}^2\sum_{\alpha=1}^{2m}\sum_{k=0}^{K_v}\frac{1}{\operatorname{Im}s(z,v_k)}\,\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||\tilde T^{(j,j)}_{n,h}|^{p-2}|T_{n,h}||\Lambda^{(\alpha)}_n(v_k) - \tilde\Lambda^{(\alpha,j,j)}_n(v_k)|I(v).$$
Let us denote
$$I_{j,k,\alpha,\nu} := \mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||\tilde T^{(j,j)}_{n,h}|^{p-2}|T_{n,h}||\Lambda^{(\alpha)}_n(v_k) - \tilde\Lambda^{(\alpha,j,j)}_n(v_k)|I(v).$$
Using the Cauchy–Schwarz inequality we obtain
$$I_{j,k,\alpha,\nu} \le \mathbb E\,|\tilde T^{(j,j)}_{n,h}|^{p-2}|\tilde f_{j\nu}|I(v)\,\mathbb E_{j,j}\bigl[|\xi_{j\nu}||T_{n,h}||\Lambda^{(\alpha)}_n(v_k) - \tilde\Lambda^{(\alpha,j,j)}_n(v_k)|I(v)\bigr]$$
$$\le \mathbb E^{\frac{p-2}p}|T_{n,h}|^p\;\mathbb E^{\frac1p}\,\mathbb E^p_{j,j}\bigl[|\xi_{j\nu}|I(v)\bigr]\;\mathbb E^{\frac1p}\,\mathbb E^p_{j,j}\bigl[|\Lambda^{(\alpha)}_n(v_k) - \tilde\Lambda^{(\alpha,j,j)}_n(v_k)|I(v)\bigr].$$
It is easy to show that
$$|\Lambda^{(\alpha)}_n(v_k) - \Lambda^{(\alpha,j,j)}_n(v_k)| \le \frac1n\Bigl[\delta_{\alpha,1}|R_{jj}(v_k)| + \delta_{\alpha,m+1}|R^{(j)}_{j_{m+1}j_{m+1}}(v_k)| + \frac1{v_k}\operatorname{Im}R_{jj}(v_k)|R_{jj}(v_k)| + \frac1{v_k}\operatorname{Im}R^{(j)}_{j_{m+1}j_{m+1}}(v_k)|R^{(j)}_{j_{m+1}j_{m+1}}(v_k)|\Bigr]. \tag{6.13}$$
We may use Lemmas A.2–A.4 and Young's inequality to get
$$\mathcal A_{31} \le \rho\,\mathbb E|T_{n,h}|^p + \frac{C^pp^pE^p(\kappa p)}{(nv)^p}.$$
Let us consider $\mathcal A_{32}$. Applying (6.11)–(6.12) we may estimate it by the sum of the following terms:
$$\mathcal A_{321} := \frac{p^{p-1}}n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||T_n|^{p-2}|H - \tilde H^{(j,j)}|^{p-2}|T_n - \tilde T^{(j,j)}_n|\tilde H^{(j,j)}H(v),$$
$$\mathcal A_{322} := \frac{p^{p-1}}n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||T_n|^{p-1}|H - \tilde H^{(j,j)}|^{p-1}H(v),$$
$$\mathcal A_{323} := \frac{p^{p-1}}n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||T_n|^{p-2}|H - \tilde H^{(j,j)}|^{p-2}\,\bigl|\mathbb E_{j,j}\bigl[(T_n - \tilde T^{(j,j)}_n)H\bigr]\bigr|\,H(v).$$
All three terms may be bounded similarly; we turn our attention to the first term only. To deal with it we use the inequality
$$|T_n|^{p-2}I(v) \le C^p\operatorname{Im}^{p-2}s(z,w),$$
which may be deduced from equation (4.12). Hence,
$$\mathcal A_{321} \le \frac{C^pp^{p-1}\operatorname{Im}^{p-2}s(v)}{n}\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||H - \tilde H^{(j,j)}|^{p-2}|T_n - \tilde T^{(j,j)}_n|\tilde H^{(j,j)}H(v).$$
Using Lemma 6.3 we get
$$\mathcal A_{321} \le \frac{C^pp^{p-1}\operatorname{Im}^{p-2}s(v)}{n}\sum_{j=1}^n\sum_{\nu=1}^2\sum_{\alpha=1}^{2m}\sum_{k=0}^{K_v}\frac{\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||\Lambda^{(\alpha)}_n(v_k) - \tilde\Lambda^{(\alpha,j,j)}_n(v_k)|^{p-2}|T_n - \tilde T^{(j,j)}_n|I(v)}{s^k\operatorname{Im}^{p-2}s(v_k)}.$$
It follows from (6.13) and Lemma 6.4 that
$$\mathcal A_{32} \le \frac{C^pp^pE^p(\kappa p)}{(nv)^p}.$$
Let us consider $\mathcal A_{33}$.
Applying (6.11)–(6.12) we get $\mathcal A_{33} \le \mathcal A_{331} + \mathcal A_{332} + \mathcal A_{333}$, where
$$\mathcal A_{331} := \frac{p^{p-1}}n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||T_n - \tilde T^{(j,j)}_n|^{p-1}H(v),$$
$$\mathcal A_{332} := \frac{p^{p-1}}n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||T_n - \tilde T^{(j,j)}_n|^{p-2}|H - \tilde H^{(j,j)}||T_{n,h}|,$$
$$\mathcal A_{333} := \frac{p^{p-1}}n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||T_n - \tilde T^{(j,j)}_n|^{p-2}\,\bigl|\mathbb E_{j,j}\bigl[(T_n - \tilde T^{(j,j)}_n)H\bigr]\bigr|\,H(v).$$
We estimate $\mathcal A_{331}$ only; all other terms may be bounded similarly. We get
$$\mathcal A_{331} \le \frac{C^pp^{p-1}\operatorname{Im}^{p-2}s(v)}{n(nv)^{p-2}}\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}\tilde f_{j\nu}||T_n - \tilde T^{(j,j)}_n|H(v).$$
Applying Lemmas 6.4 and 6.5 we get
$$\mathcal A_{331} \le \frac{C^pp^pE^p(\kappa p)}{(nv)^p}.$$
The term $\mathcal A_{34}$ may be estimated similarly to $\mathcal A_{33}$; we omit the details. Collecting all the bounds we obtain the following estimate for $\mathcal A_3$:
$$\mathcal A_3 \le \rho\,\mathbb E|T_{n,h}|^p + \frac{C^pp^pE^p(\kappa p)}{(nv)^p}.$$
Bound for $\mathcal A_4$. Recall that
$$\mathcal A_4 = \frac1n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,\xi_{j\nu}\bigl[f_{j\nu} - \tilde f_{j\nu}\bigr]H(v)\varphi(T_{n,h}).$$
We may estimate it by $\mathcal A_{41} + \mathcal A_{42}$, where
$$\mathcal A_{41} := \frac en\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}||f_{j\nu} - \tilde f_{j\nu}||\tilde T^{(j,j)}_{n,h}|^{p-1}H(v),\qquad
\mathcal A_{42} := \frac{p^{p-1}}n\sum_{j=1}^n\sum_{\nu=1}^2\mathbb E\,|\xi_{j\nu}||f_{j\nu} - \tilde f_{j\nu}||T_{n,h} - \tilde T^{(j,j)}_{n,h}|^{p-1}H(v).$$
We may choose
$$\tilde f_{j1} := \frac{-1}{-w - m^{(m+2,j,j)}_n(z,w) + \dfrac{|z|^2}{w + m^{(m,j,j)}_n(z,w)}},\qquad \tilde f_{j2} := -\frac{|z|^2\tilde f_{j1}}{w + m^{(m,j,j)}_n(z,w)}.$$
To estimate the difference $f_{j\nu} - \tilde f_{j\nu}$ we may apply the representation (4.8) and inequality (6.13). Repeating all the arguments from the previous subsection we conclude that
$$\mathcal A_4 \le \rho\,\mathbb E|T_{n,h}|^p + \frac{C^pp^pE^p(\kappa p)}{(nv)^p}.$$
Collecting now all the bounds above we get the claim of the theorem. □
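The elementary inequality (6.4), used repeatedly in this section to split $(x+y)^q$ into a harmless main term and a heavily weighted remainder, can be checked numerically over a grid (a sanity check only, not a proof):

```python
import itertools
import math

# Numerical check of inequality (6.4): (x+y)^q <= e*x^q + (q+1)^q * y^q for x, y > 0, q >= 1.
# The case split behind it: if y <= x/q the left side is at most x^q (1 + 1/q)^q <= e x^q;
# otherwise x < qy and (x+y)^q <= ((q+1)y)^q.
for x, y, q in itertools.product([0.1, 1.0, 7.0], [0.1, 1.0, 7.0], [1, 2, 5, 10]):
    assert (x + y) ** q <= math.e * x ** q + (q + 1) ** q * y ** q + 1e-9
```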
Auxiliary lemmas.
We finish this section with several important lemmas.
Lemma 6.3.
Let $v_k = vs^k$, $k \ge 0$. The following inequality holds:
$$|H(\Lambda_n) - H(\tilde\Lambda^{(j_\alpha,j_\alpha)}_n)| \le \frac1\tau\sum_{\beta=1}^m\sum_{k=0}^{K_v}\frac{|\Lambda^{(\beta)}_n(v_k) - \tilde\Lambda^{(\beta,j_\alpha,j_\alpha)}_n(v_k)|}{\operatorname{Im}s(z,v_k)}\,I(v).$$
The same is true if one replaces $\tilde\Lambda^{(j_\alpha,j_\alpha)}_n$ by $\Lambda^{(j_\alpha,j_\alpha)}_n$.

Proof. The proof follows from the simple inequality $|\prod_{j=1}^na_j - \prod_{j=1}^nb_j| \le \sum_{j=1}^n|a_j - b_j|$, valid for $|a_j|, |b_j| \le 1$, and direct calculations. □

Denote $I(v,v') := I(v)I(v')$. Lemma 6.4.
Let $w = u+iv \in D$, $w' = u+iv' \in D$, and assume that $v' \ge v$. Let $g_{1j_\alpha}(w,w')$, $g_{2j_\alpha}(w,w')$ be positive r.v. such that $\mathbb E|g_{kj_\alpha}|^q < \infty$, $k = 1, 2$, for $1 \le q \le C\log n$. Then for any $\alpha = 1, \dots, m$ and $j = 1, \dots, n$,
$$\max_{1\le\beta\le 2m}\mathbb E_{j_\alpha,j_\alpha}\bigl[|\xi_{j_\alpha\nu}(v)||\Lambda^{(\beta)}_n(v') - \tilde\Lambda^{(\beta,j_\alpha,j_\alpha)}_n(v')|\,g_{1j_\alpha}(v,v')\,I(v,v')\bigr] \lesssim \frac{A^{1/2}_{j_\alpha}(v)\,B^{1/2}_{j_\alpha}(v')\,g_{2j_\alpha}(v,v')}{(nv)^{1/2}(nv')^{1/2}},$$
where
$$A_{j_\alpha}(v) := \max\Bigl\{\operatorname{Im}s(v),\ \mathbb E_{j_\alpha,j_\alpha}\bigl[\operatorname{Im}R^{(j_\alpha)}_{j_{m+[\alpha+1]},j_{m+[\alpha+1]}}I(v)\bigr]\Bigr\},$$
$$B_{j_\alpha}(v') := \max\Bigl\{\operatorname{Im}s(v'),\ \mathbb E^{1/\beta}_{j_\alpha,j_\alpha}\bigl[\operatorname{Im}^\beta R^{(j_\alpha)}_{j_{m+\alpha},j_{m+\alpha}}I(v')\bigr],\ \mathbb E^{1/\beta}_{j_\alpha,j_\alpha}\bigl[\operatorname{Im}^\beta R^{(j_\alpha)}_{j_{m+[\alpha+1]},j_{m+[\alpha+1]}}I(v')\bigr],$$
$$\mathbb E^{1/\beta}_{j_\alpha,j_\alpha}\bigl[\operatorname{Im}^\beta R^{(j_{m+\alpha})}_{j_{[\alpha-1]},j_{[\alpha-1]}}I(v')\bigr],\ \mathbb E^{1/\beta}_{j_\alpha,j_\alpha}\bigl[\operatorname{Im}^\beta R^{(j_{m+\alpha})}_{j_\alpha j_\alpha}I(v')\bigr]\Bigr\}.$$
Proof.
We start from the representation for Λ ( β ) n − e Λ ( β,j α ,j α ) n . We rewrite it as followsΛ ( β ) n − e Λ ( β,j α ,j α ) n = Λ ( β ) n − e Λ ( β,j α ) n + e Λ ( β,j α ) n − e Λ ( β,j α ,j α ) n . Let us introduce the following notations: I def = E j α ,j α (cid:2) | ξ j α ν ( v ) || Λ ( β ) n ( v ′ ) − e Λ ( β,j α ) n ( v ′ ) || g j α ( v, v ′ ) | I ( v, v ′ ) (cid:3) ,I def = E j α ,j α (cid:2) | ξ j α ν ( v ) || e Λ ( β,j α ) n ( v ′ ) − e Λ ( β,j α ,j α n ( v ′ ) || g j α ( v, v ′ ) | I ( v, v ′ ) (cid:3) . We start from I . We first mention that Λ ( β ) n − e Λ ( β,j α ) n = m ( β ) n − m ( β,j α ) n − E j α ( m ( β ) n − m ( β,j α ) n ). Letus consider m ( α ) n − m ( α,j ) n and rewrite it as follows m ( β ) n − m ( β,j α ) n = 1 n (cid:0) δ α,β R j α j α + X l ∈ T \ J β ( R l β l β − R ( j α ) l β l β ) (cid:1) . Writing down the decomposition for the diagonal entries of resolvent we get m ( β ) n − m ( β,j α ) n = 1 n (cid:0) δ βα R j α j α + X l ∈ T \ J β R − j α j α [ R j α l β ] (cid:1) = 1 n (cid:18) δ βα + X l ∈ T \ J β (cid:20) √ n n X k =1 X ( α ) jk R ( j α ) k m +[ α +1] ,l β − z R ( j α ) j m +1 α ,l β (cid:21) (cid:19) R j α j α = 1 n ( δ βα + η j α ) R j α j α , where η j α def = η j α + . . . + η j α and η j α def = z X l ∈ T \ J β [ R ( j α ) j m + α ,l β ] − n n X k =1 R ( j α ) kk , η j α def = 1 n X k = k ′ X ( α ) jk X ( α ) jk ′ R ( j α ) kk ′ ,η j α def = 1 n n X k =1 [ X (1) jk ] − R ( j α ) kk , η j α def = − zn n X k =1 X ( α ) jk R ( j α ) kj . Here R ( j α ) k,k ′ def = P l ∈ T \ J β R ( j α ) k m +[ α +1] ,l β R ( j α ) k ′ m +[ α +1] ,l β . Using this representation we writeΛ ( β ) n − e Λ ( β,j α ) n = 1 n ( δ βα + η j α )[ R j α j α − E j α ( R j α j α )] + 1 n e η j α R j α j α − n E j α ( e η j α R j α j α ) , where e η j α def = η j α + . . . + η j α and g ,j α . 
Moreover, it is straightforward to check that 1 n ( δ βα + | η j α ( v ′ ) | ) I ( v ′ ) ≤ Cnv ′ (Im s ( z, v ′ ) + Im R ( j α ) j m + α ,j m + α ( v ′ )) I ( v ′ ) . Introduce the following approximation for R j α j α : Q j α def = − − w ′ − m ( m +[ α +1] ,j α ) n ( z, v ′ ) + | z | w ′ + m ([ α − ,jα ) n ( z,v ′ ) .
Then R j α j α − Q j α = ξ j α f j α + ξ j α f j α . We conclude from the previous facts that I ≤ n E j α ,j α [ ( δ α + | η j α ( v ′ ) | ) | ξ j α ν ( v ) ξ j α ( v ′ ) | g ′ j α ( v, v ′ ) I ( v, v ′ ) ] + 1 n E j α ,j α [ ( δ α + | η j α ( v ′ ) | ) | ξ j α ν ( v ) ξ j α ( v ′ ) | g ′ j α ( v, v ′ ) I ( v, v ′ ) ] + 1 n E j α ,j α [ ( δ α + | η j α ( v ′ ) | ) | ξ j α ν ( v ) | E j α ( | ξ j α ( v ′ ) f j α ( v ′ ) | ) g ′ j α ( v, v ′ ) I ( v, v ′ ) ] + 1 n E j α ,j α [ ( δ α + | η j α ( v ′ ) | ) | ξ j α ν ( v ) | E j α ( | ξ j α ( v ′ ) f j α ( v ′ ) | ) g ′ j α ( v, v ′ ) I ( v, v ′ ) ] + 1 n E j α ,j α [ | ξ j α ν ( v ) e η j α ( v ′ ) | g ′ j α ( v, v ′ ) I ( v, v ′ ) ] + 1 n E j α ,j α [ | ξ j α ν ( v ) | E j α ( | e η j α ( v ′ ) R j α j α ( v ′ ) | ) g j α ( v, v ′ ) I ( v, v ′ ) ] . Here g ,j α is some positive function with bounded moments up to the order C log n . All terms in the upper bound for I may be estimated directly. We show how to deal with one term only; all other terms may be estimated similarly. For example, we estimate the second-to-last term. Hölder’s inequality implies 1 n E j α ,j α [ | ξ j α ν ( v ) e η j α ( v ′ ) | g ′ j α ( v, v ′ ) I ( v, v ′ ) ] ≤ n E j α ,j α ( | ξ j α ν ( v ) | I ( v )) E β j α ,j α ( | e η j α ( v ′ ) | β I ( v ′ )) × E β − β j α ,j α ( | g ′ j α ( v, v ′ ) | ββ − I ( v, v ′ )) . Here, β def = (4 + δ ) /
4. It remains to apply Lemmas A.9–A.11 to get the desired bound as stated inlemma.Let us consider I . It is easy to check that e Λ ( β,j α ) n − e Λ ( β,j α ,j α n = E j α ( m ( β ) n − m ( β,j m + α ) n )) − E j α ,j α ( m ( β ) n − m ( β,j m + α ) n )Similarly to (6.14) we may show that m ( β ) n − m ( β,j m + α ) n = 1 n (cid:0) δ β,m + α R j m + α j m + α + X l ∈ T \ J β R − j m + α ,j m + α [ R j m + α l β ] (cid:1) = 1 n (cid:18) δ β,m + α + X l ∈ T \ J β (cid:20) √ n n X k =1 X ([ α − jk R ( j m + α ) k [ α − ,l β − z R ( j m + α ) j α ,l β (cid:21) (cid:19) R j m + α ,j m + α = 1 n (cid:18) δ β,m + α + X l ∈ T \ J β (cid:20) √ n n X k =1 X ([ α − jk R ( j α ,j α ) k [ α − ,l β − z R ( j m + α ) j α ,l β (cid:21) (cid:19) R j m + α ,j m + α + 1 n X l ∈ T \ J β Θ l R j m + α ,j m + α = b η j α + 1 n X l ∈ T \ J β Θ l R j m + α ,j m + α . Here, b η j m + α def = b η j m + α , + . . . + b η j m + α , b η j m + α , def = z X l ∈ T \ J β E j α [ R ( j m + α ) j α l β ] − n n X k =1 R ( j α j α ) kk , b η j m + α , def = z X l ∈ T \ J β [[ R ( j m + α ) j α l β ] − E j α [ R ( j m + α ) j α l β ] ] , b η j m + α , def = 1 n X k = k ′ X ([ α − jk X ([ α − jk ′ R ( j α j α ) kk ′ , b η j m + α , def = 1 n n X k =1 [ X ([ α − jk ] − R ( j α ,j α ) kk , b η j m + α , def = − zn / n X k =1 X ([ α − jk X l ∈ T \ J β R ( j α ,j α ) k [ α − ,l β R ( j m + α ) j α l β . and R ( j α ) kk ′ def = P l ∈ T \ J β R ( j α ,j α ) k [ α − ,l β R ( j α ,j α ) k ′ [ α − ,l β . Moreover,Θ l def = (cid:20) √ n n X k =1 X ([ α − jk [ R ( j m + α ) k [ α − ,l β − R ( j α ,j α ) k [ α − ,l β ] (cid:21) + 2 (cid:20) √ n n X k =1 X ([ α − jk [ R ( j m + α ) k [ α − ,l β − R ( j α ,j α ) k [ α − ,l β ] (cid:21)(cid:20) √ n n X k =1 X ([ α − jk R ( j α ,j α ) k [ α − ,l β − z R ( j m + α ) j α ,l β (cid:21) . It is easy to see that b η j m + α , is M ( j α ,j α ) -measurable. 
Hence, e Λ ( β,j α ) n − e Λ ( β,j α ,j α ) n = 1 n ( δ β,m + α + b η j m + α , )[ E j α R j m + α ,j m + α − E j α ,j α R j m + α j m + α ]+ 1 n E j α η j m + α R j m + α ,j m + α − n E j α ,j α ( η j m + α R j m + α ,j m + α )+ 1 n X l ∈ T \ J β E j α Θ l R j m + α ,j m + α − n X l ∈ T \ J β E j α ,j α Θ l R j m + α ,j m + α . Here η j m + α def = b η j m + α , + . . . + b η j m + α , . All these facts imply the following bound for I : I ≤ n E j α ,j α [ | ξ j α ν ( v ) | ( δ β,m + α + | b η j m + α , ( v ′ ) | ) E j α ( | ξ j m + α , ( v ′ ) f j m + α , ( v ′ ) | ) g ′ j α ( v, v ′ ) I ( v, v ′ ) ] + 1 n E j α ,j α [ | ξ j α ν ( v ) | ( δ β,m + α + | b η j m + α , ( v ′ ) | ) E j α ( | ξ j m + α , ( v ′ ) f j m + α , ( v ′ ) | ) g ′ j α ( v, v ′ ) I ( v, v ′ ) ] + 1 n E j α ,j α [ | ξ j α ν ( v ) | ( δ β,m + α + | b η j m + α , ( v ′ ) | ) g ′ j α ( v, v ′ ) I ( v, v ′ ) ] E j α ,j α [ | ξ j m + α , ( v ′ ) f j m + α , ( v ′ ) | I ( v ′ ) ] + 1 n E j α ,j α [ | ξ j α ν ( v ) | ( δ β,m + α + | b η j m + α , ( v ′ ) | ) g ′ j α ( v, v ′ ) I ( v, v ′ ) ] E j α ,j α [ | ξ j m + α , ( v ′ ) f j m + α , ( v ′ ) | I ( v ′ ) ] + 1 n E j α ,j α [ | ξ j α ν ( v ) | E j α ( | η j m + α ( v ′ ) R j m + α ,j m + α ( v ′ ) | ) g ′ j α ( v, v ′ ) I ( v, v ′ ) ] + 1 n E j α ,j α [ | ξ j α ν ( v ) | g j α ( v, v ′ ) I ( v, v ′ ) ] E j α ,j α [ | η j m + α ( v ′ ) R j m + α ,j m + α ( v ′ ) | I ( v ′ ) ] + 1 n X l ∈ T \ J β E j α ,j α [ | ξ j α ν ( v ) | E j α ( | Θ l ( v ′ ) R j m + α ,j m + α ( v ′ ) | ) g j α ( v, v ′ ) I ( v, v ′ ) ] + 1 n X l ∈ T \ J β E j α ,j α [ | ξ j α ν ( v ) | g j α ( v, v ′ ) I ( v, v ′ ) ] E j α ,j α [ | Θ l ( v ′ ) R j m + α ,j m + α ( v ′ ) | I ( v ′ ) ] . One may proceed similarly to the estimate of I . We only consider the second-to-last term.
It is straightforward to check that | Θ l | ≤ [ √ n n X k =1 X ([ α − jk R ( j m + α ) k [ α − ,j α ] | R ( j m + α ) j α l β | | R ( j m + α ) j α j α | − + 2 | √ n n X k =1 X ([ α − jk R ( j m + α ) k [ α − ,j α | | √ n n X k =1 X ([ α − jk R ( j α ,j α ) k [ α − ,l β − z R ( j m + α ) j α ,l β | | R ( j m + α ) j α ,l β || R ( j m + α ) j m + α j α | − . Let us introduce the following notations: ζ j α , def = 1 √ n n X k =1 X ([ α − jk R ( j m + α ) k [ α − ,j α , ζ j α , def = 1 √ n n X k =1 X ([ α − jk R ( j α ,j α ) k [ α − ,l β .
Using these notations we may obtain1 n X l ∈ T \ J β | Θ l | ≤ | ζ j α , | nv Im R ( j m + α ) j α j α | R ( j m + α ) j α j α | − + 2 | ζ j α , |√ nv (cid:12)(cid:12)(cid:12)(cid:12) n X l ∈ T \ J β | ζ j α , | (cid:12)(cid:12)(cid:12)(cid:12) / Im / R ( j m + α ) j α ,j α | R ( j m + α ) j α j α | − + 2 | z || ζ j α , | nv Im / R ( j m + α ) j [ α − ,j [ α − Im / R ( j m + α ) j α j α | R ( j m + α ) j α j α | − . Now we may estimate the second last term in the bound for I . Using the previous inequality it maybe estimated as the sum of three terms. We will estimate the second term only. It is easy to see that ζ j α , is M ( j α ) -measurable. Hence,2 E j α ,j α (cid:20) | ξ j α ν ( v ) | (cid:0) n X l ∈ T \ J β | ζ j α , | (cid:1) / × E j α [ | ζ j α , | Im / R ( j m + α ) j α ,j α | R ( j m + α ) j α j α | − | R j m + α ,j m + α | ] | g j α ( v, v ′ ) | I ( v, v ′ ) (cid:21) ≤ E / βj α ,j α (cid:2) | ξ j α ν ( v ) | I ( v ) (cid:3) β E β − β j α ,j α (cid:2) | g ′ j α ( v, v ′ ) | β β − I ( v, v ′ ) (cid:3) × E / j α ,j α (cid:20) n X l ∈ T \ J β | ζ j α , ( v ′ ) | E j α (cid:0) | ζ j α , ( v ′ ) | (cid:1) E / j α (cid:0) Im R ( j m + α ) j α j α ( v ′ ) (cid:1) E / j α (cid:0) g ′ ( v, v ′ ) (cid:1) I ( v ′ ) (cid:21) ≤ √ nv ′ E / βj α ,j α (cid:2) | ξ j α ν ( v ) I ( v ) | β (cid:3) E / j α ,j α (cid:2) | ζ j α , ( v ′ ) | I ( v ′ ) (cid:3) E / βj α ,j α (cid:2) | ζ j α , ( v ′ ) | β I ( v ′ ) (cid:3) g j α ( v, v ′ ) . Applying now Rosenthal’s inequality to ζ j α ,k , k = 1 ,
2, we conclude the bound as required by the statement of the lemma. □
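For convenience we recall Rosenthal's inequality [43] in the form in which it is applied here; the splitting into a p^{p/2} term and a p^p term matches the shape of the bounds above (C denotes an absolute constant):

```latex
% Rosenthal's inequality: X_1,...,X_n independent with E X_k = 0, and p >= 2:
\mathbb{E}\Big|\sum_{k=1}^{n} X_k\Big|^{p}
  \le C^{p}\bigg( p^{p/2}\Big(\sum_{k=1}^{n}\mathbb{E}|X_k|^{2}\Big)^{p/2}
      + p^{p}\sum_{k=1}^{n}\mathbb{E}|X_k|^{p}\bigg).
```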
Lemma 6.5.
For any w ∈ D the following inequality holds: max ≤ α ≤ m | T ( α ) n − T ( α,j α ,j α ) n | I ( v ) . Im snv . Moreover, max ≤ α ≤ m | T ( α ) n − e T ( α,j α ,j α ) n | I ( v ) . Im s ( z, w ) max ≤ β ≤ m | Λ ( β ) n − e Λ ( β,j α ,j α ) n | I ( v )+ 1( nv ) max ≤ α ≤ m (cid:20) Im R j α j α | R j α j α | + Im R ( j α ) j m + α ,j m + α | R ( j α ) j m + α ,j m + α | (cid:21) I ( v )+ 1( nv ) E j α ,j α (cid:20) max ≤ α ≤ m (cid:18) Im R j α j α | R j α j α | + Im R ( j α ) j m + α ,j m + α | R ( j α ) j m + α ,j m + α | (cid:19) I ( v ) (cid:21) . Proof.
Both statements are consequences of the equation for Λ n . □

Acknowledgement
A. Naumov and A. Tikhomirov would like to thank László Erdős for his hospitality at IST Austria and helpful discussions on the subject of this paper. F. Götze was supported by CRC 1283 “Taming uncertainty and profiting from randomness and low regularity in analysis, stochastics and their applications”. A. Naumov and A. Tikhomirov were supported by the Russian Academic Excellence Project 5-100 and Russian Science Foundation grant 18-11-00132 (the results of Section 3 were obtained under support of the grant).
Appendix A. Inequalities for linear and quadratic forms
In this section we present some inequalities for linear and quadratic forms.
A.1.
Estimates of ε j α ,ν for j ∈ T . In this section we estimate the moments of ε ( J , K ) j α ,ν for J , K ∈ T and ν = 1 , . . . , as well as of η ( J , K ) νj , ν = 1 , . . . ,
5. In what follows, for any J ⊂ T and j ∈ T \ J we denote e J def = J ∪ { j } . We also introduce the σ -algebra M ( J , K ) def = σ { X kl , k ∈ J , l ∈ K } , J , K ⊂ T , and denote E ∗ ( · ) def = E ( · | M ( e J , K ) ) . (A.1)
For any j ∈ T \ J α | e ε ( J , K ) j | ≤ ( nv ) − . Proof.
Since R ( J , K ) k [ α +1]+ m,k [ α +1]+ m, − R ( e J , K ) k [ α +1]+ m,k [ α +1]+ m, = [ R ( J , K ) k [ α +1]+ m,k [ α +1]+ m, ] / R ( J , K ) j α j α we get that | e ε ( J , K ) j | ≤ nv Im R ( J , K ) j α j α | R ( J , K ) j α j α | − ≤ ( nv ) − . Thus Lemma A.1 is proved. □
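The resolvent bound used in this proof (and again in Lemma A.12 below) is an instance of the standard Ward identity; for a Hermitian matrix V, the resolvent R = (V − w)^{−1} and w = u + iv with v > 0, one has:

```latex
% Since R - R^* = R((V-\bar w)-(V-w))R^* = (w-\bar w)\,R R^* and w-\bar w = 2iv:
\sum_{k} |R_{jk}|^{2} \;=\; (R R^{*})_{jj}
  \;=\; \Big(\frac{R - R^{*}}{2iv}\Big)_{jj}
  \;=\; \frac{\operatorname{Im} R_{jj}}{v}.
```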
Lemma A.2.
There exists a positive constant C such that for any j ∈ T \ J α and all p ≥ E ∗ | e ε ( J , K ) j α , | p ≤ C p p p ( nv ) p Im p m ( e J , K ) n + µ p C p p p n p v p X l ∈ T \ K [ α +1] Im p R ( e J , K ) l [ α +1]+ m ,k [ α +1]+ m + µ p C p p p n p X l,k ∈ T \ K [ α +1] | R ( e J , K ) l [ α +1]+ m ,k [ α +1]+ m | p .
Applying the moment inequality for quadratic forms, e.g. [20, Proposition 2.4] or [30, Lemma A.1], and Lemma A.12 we get the proof. □
Lemma A.3.
There exists a positive constant C such that for any j ∈ T \ J α and all p ≥ E ∗ | e ε ( J , K ) j α , | p ≤ C p p p n p ( n X l ∈ T \ K [ α +1] | R ( e J , K ) l [ α +1]+ m ,l [ α +1]+ m | ) p + µ p C p p p n p X l ∈ T \ K [ α +1] | R ( e J , K ) l [ α +1]+ m ,l [ α +1]+ m | p .
The proof follows from the Rosenthal type inequality for linear forms, e.g. [43, Theorem 3] and [33, Inequality (A)], and Lemma A.12. □
Lemma A.4.
There exists a positive constant C such that for any j ∈ T \ J α and all p ≥ E ∗ | e ε ( J , K ) j α , | p ≤ C p | z | p p p ( nv ) p Im p R ( e J , K ) l [ α +1]+ m ,l [ α +1]+ m + µ p C p | z | p p p n p X l ∈ T \ K [ α +1] | R ( e J , K ) l [ α +1]+ m ,l [ α +1]+ m | p .
The proof is similar to the proof of the previous lemma. □
We also estimate the moments of ε ( e J , K ) j + n,ν for j ∈ T \ K , ν = 1 , 2 ,
3. Similarly to (A.1) we define E ∗∗ ( · ) def = E ( · (cid:12)(cid:12) M ( e J , e K ) ) . Lemma A.5.
For any j ∈ T \ J α | b ε ( e J , K ) j α | p ≤ ( nv ) − .
Repeating the arguments of the proof of Lemma A.1 one gets the statement of this lemma. (cid:3)
Lemma A.6.
There exists a positive constant C such that for any j ∈ T \ J α and all p ≥ E ∗∗ | b ε ( J , K ) j α | p ≤ C p p p ( nv ) p Im p m ( e J , e K ) n + µ p C p p p n p v p X l ∈ T \ J [ α − Im p R ( e J , e K ) ll + µ p C p p p n p X l,k ∈ T \ J [ α − | R ( e J , e K ) kl | p .
Repeating the arguments of the proof of Lemma A.2 one gets the statement of this lemma. (cid:3)
Lemma A.7.
There exists a positive constant C such that for any j ∈ T \ J α and all p ≥ E ∗∗ | b ε ( e J , K ) j α | p ≤ C p p p n p ( n X l ∈ T \ J [ α − | R ( e J , e K ) ll | ) p + µ p C p p p n p X l ∈ T \ J [ α − | R ( e J , e K ) ll | p .
Repeating the arguments of the proof of Lemma A.3 one gets the statement of this lemma. (cid:3)
Lemma A.8.
For any j ∈ T | η j α | . v (Im m ( j α ) n + | z | Im R ( j α ) j m + α ,j m + α ) . Proof.
The proof follows from Lemma A.12. (cid:3)
Lemma A.9.
Under conditions ( C0 ) for any j ∈ T and all p : 2 ≤ p ≤ E ∗ | η j α | p ≲ ( nv ) − p Im p m ( j α ) n .
Applying the moment inequality for quadratic forms, e.g. [20, Proposition 2.4] or [30, Lemma A.1], and Lemma A.12 we get the proof. □
Lemma A.10.
Under conditions ( C0 ) for any j ∈ T and all p : 2 ≤ p ≤ α E ∗ | η j α , | p ≲ ( nv ) − p Im p m ( j α ) n .
The proof follows from the Rosenthal type inequality for linear forms, e.g. [43, Theorem 3] and [33, Inequality (A)], and Lemma A.12. □
Lemma A.11.
Under conditions ( C0 ) for any j ∈ T and all p : 2 ≤ p ≤ E ∗ | η j α | p . | z | p Im p R ( j α ) j m + α ,j m + α n p v p Im p m ( j α ) n . Proof.
Similar to the proof of the previous lemma. □
A.2.
Inequalities for resolvent matrices. Lemma A.12.
Let 0 ≤ K < n . For all ( J , K ) ∈ J K 1 mn Tr | R ( J , K ) | ≤ v Im m ( J , K ) n + | J | − | K | mnv . (A.2) For all j = 1 , . . . , mn such that j / ∈ J X k ∗ | R ( J , K ) jk | ≤ v Im R ( J , K ) jj , (A.3) where ∑ k ∗ is the sum over all k = 1 , . . . , mn such that k / ∈ J ∪ K . Proof. The proof of (A.2) follows from the following inequality: 1 mn Tr | R ( J , K ) | = 1 mnv Im Tr R ( J , K ) ≤ v Im m ( J , K ) n + | J | − | K | mnv . The bound (A.3) follows from the eigenvalue decomposition of V ( z ). □

Appendix B. Bounds for the Kolmogorov distance between distribution functions via Stieltjes transforms
We reformulate the following smoothing inequality proved in [30, Corollary 2.3], which allows one to relate distribution functions to their Stieltjes transforms. Let G ( x ) be an arbitrary distribution function whose support is an interval or a union of non-intersecting intervals, say J = supp G ( x ) = ∪ mα =1 J α and J α = [ a α , b α ]. Additionally we assume that G ( x ) has an absolutely continuous density which is bounded and for any end-point c of the support J it behaves as g ( x ) ∼ ( x − c ) . For any x ∈ J we define γ ( x ) def = min α {| x − a α | , | b α − x |} . Given min α { b α − a α } > ε > 0 we define J ( ε ) α = { x ∈ [ a α , b α ] : γ ( x ) ≥ ε } and J ′ ε = ∪ mα =1 J ( ε/ α . For a distribution function F denote by S F ( z ) its Stieltjes transform, S F ( z ) def = ∫ ∞−∞ dF ( x ) / ( x − z ) . We also denote a def = √ . (B.1)
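As a purely numerical illustration of the Stieltjes transforms just defined (a sketch under simplifying assumptions: a Wigner matrix and the semicircle law stand in for F and G, and the matrix size and spectral parameter are arbitrary choices):

```python
import numpy as np

def semicircle_stieltjes(z):
    """Stieltjes transform s(z) = (-z + sqrt(z^2 - 4)) / 2 of the semicircle
    law on [-2, 2]; the branch is chosen so that Im s(z) > 0 for Im z > 0,
    as must hold for the Stieltjes transform of any probability measure."""
    w = np.sqrt(z * z - 4)
    s = (-z + w) / 2
    return s if s.imag > 0 else (-z - w) / 2

# Empirical Stieltjes transform S_F of a Wigner (GOE-type) matrix.
rng = np.random.default_rng(1)
n = 1000
A = rng.standard_normal((n, n))
W = (A + A.T) / np.sqrt(2 * n)        # symmetric, off-diagonal variance 1/n
lam = np.linalg.eigvalsh(W)

z = 0.5 + 0.5j                        # spectral parameter in the upper half-plane
s_emp = np.mean(1.0 / (lam - z))      # S_F(z) for the empirical spectral law
print(abs(s_emp - semicircle_stieltjes(z)))  # small for large n
```

The two transforms agree up to an error that shrinks with the matrix size, which is the mechanism the smoothing inequality below converts into a bound on the Kolmogorov distance.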
Let v > 0 and 1 > ε > 0 be positive numbers such that va ≤ ε / . (B.2) Denote v ′ = v/ √ γ . If G denotes the distribution function satisfying the conditions above, and F is any distribution function, there exist some absolute constants C and C such that ∆( F, G ) def = sup x | F ( x ) − G ( x ) | ≤ C sup x ∈ J ′ ε | Im ∫ x −∞ ( S F ( u + iv ′ ) − S G ( u + iv ′ ) ) du | + C v + C ε .
For any x ∈ J ε we have γ = γ ( x ) ≥ ε and according to condition (B.2), av √ γ ≤ ε . Lemma B.2.
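The Kolmogorov distance ∆(F, G) controlled by Proposition B.1 can also be probed directly; the sketch below (again with a Wigner matrix and the semicircle law standing in for F and G, an illustrative choice) evaluates ∆(F_n, G) at the jump points of the empirical distribution function:

```python
import numpy as np

def semicircle_cdf(x):
    """Distribution function of the semicircle law on [-2, 2]."""
    x = np.clip(x, -2.0, 2.0)
    return 0.5 + x * np.sqrt(4 - x * x) / (4 * np.pi) + np.arcsin(x / 2) / np.pi

rng = np.random.default_rng(2)
n = 1000
A = rng.standard_normal((n, n))
lam = np.sort(np.linalg.eigvalsh((A + A.T) / np.sqrt(2 * n)))

# Kolmogorov distance: the supremum of |F_n - G| is attained at a jump of
# F_n, so it suffices to check both one-sided limits at each eigenvalue.
F_hi = np.arange(1, n + 1) / n
F_lo = np.arange(0, n) / n
G = semicircle_cdf(lam)
delta = max(np.max(np.abs(F_hi - G)), np.max(np.abs(F_lo - G)))
print(delta)  # decays as n grows
```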
Let 0 < v ≤ ε / a and V > v . Denote v ′ def = v/ √ γ . The following inequality holds sup x ∈ J ′ ε | ∫ x −∞ Im( S F ( u + iv ′ ) − S G ( u + iv ′ )) du | ≤ ∫ ∞−∞ | S F ( u + iV ) − S G ( u + iV ) | du + sup x ∈ J ′ ε | ∫ V v ′ ( S F ( x + iu ) − S G ( x + iu )) du | .
Let x ∈ J ′ ε be fixed. Let γ = γ ( x ). Put z = u + iv ′ . Since v ′ = v/ √ γ ≤ ε /a , see (B.2), we may assume without loss of generality that v ′ ≤ x ∈ J ′ ε . Since the functions S F ( z ) and S G ( z ) are analytic in the upper half-plane, it is enough to use Cauchy’s theorem. We can write for x ∈ J ′ ε : ∫ x −∞ Im( S F ( z ) − S G ( z )) du = Im { lim L →∞ ∫ x − L ( S F ( u + iv ′ ) − S G ( u + iv ′ )) du } . By Cauchy’s integral formula, we have ∫ x − L ( S F ( z ) − S G ( z )) du = ∫ x − L ( S F ( u + iV ) − S G ( u + iV )) du + ∫ V v ′ ( S F ( − L + iu ) − S G ( − L + iu )) du − ∫ V v ′ ( S F ( x + iu ) − S G ( x + iu )) du . Denote by ξ (resp. η ) a random variable with distribution function F ( x ) (resp. G ( x )). Then we have | S F ( − L + iu ) | = | E ( ξ + L − iu ) − | ≤ ( v ′ ) − P ( | ξ | > L/
2) + 2 /L, for any v ′ ≤ u ≤ V . Similarly, | S G ( − L + iu ) | ≤ v ′− P ( | η | > L/
2) + 2 /L.
These inequalities imply that | ∫ V v ′ ( S F ( − L + iu ) − S G ( − L + iu )) du | → 0 as L → ∞ , which completes the proof. □ Combining the results of Proposition B.1 and Lemma B.2, we get
Corollary B.3.
Under the conditions of Proposition B.1 the following inequality holds ∆( F, G ) ≤ ∫ ∞−∞ | S F ( u + iV ) − S G ( u + iV ) | du + C v + C ε + 2 sup x ∈ J ′ ε ∫ V v ′ | S F ( x + iu ) − S G ( x + iu ) | du, where v ′ = v/ √ γ and C , C > 0 denote absolute constants. B.1.
Proof of Bounds for the Kolmogorov Distance.
Proof of Proposition B.1 . The proof of Proposition B.1 is a straightforward adaptation of the proof from [30, Lemma 2.1]. We include it here for the sake of completeness. First we note that sup x | F ( x ) − G ( x ) | = sup x ∈ J | F ( x ) − G ( x ) | = max { sup x ∈ J ε | F ( x ) − G ( x ) | , sup x ∈ [ a α ,a α + ε ] | F ( x ) − G ( x ) | , sup x ∈ [ b α − ε,b α ] | F ( x ) − G ( x ) | , α = 1 , . . . , m } . Without loss of generality we shall assume that a ≤ b < a ≤ b < · · · ≤ a m ≤ b m . Considering x ∈ [ a , a + ε ] we have − G ( a α + ε ) ≤ F ( x ) − G ( x ) ≤ F ( a + ε ) − G ( a + ε ) + G ( a + ε ) ≤ sup x ∈ J ( ε )1 | F ( x ) − G ( x ) | + G ( a + ε ) . This inequality yields sup x ∈ [ a ,a + ε ] | F ( x ) − G ( x ) | ≤ sup x ∈ J ( ε )1 | F ( x ) − G ( x ) | + G ( a + ε ) . Let x ∈ [ a α , a α + ε ] for α = 2 , . . . , m . Note that G ( b α − ) = G ( a α ). We may write F ( b α − ) − G ( b α − ) − ( G ( a α + ε ) − G ( a α )) ≤ F ( x ) − G ( x ) ≤ ( F ( a α + ε ) − G ( a α + ε )) + ( G ( a α + ε ) − G ( a α )) . From here it follows that sup x ∈ [ a α ,a α + ε ] | F ( x ) − G ( x ) | ≤ sup x ∈ J ( ε ) α | F ( x ) − G ( x ) | + sup x ∈ J ( ε ) α − | F ( x ) − G ( x ) | + ( G ( a α + ε ) − G ( a α )) . By induction we get for any α = 1 , . . . , m sup x ∈ J α | F ( x ) − G ( x ) | ≤ m max ≤ i ≤ m sup x ∈ J ( ε ) i | F ( x ) − G ( x ) | + m max ≤ i ≤ m ( G ( a i + ε ) − G ( a i )) . Similarly we get sup x ∈ [ b α − ε,b α ] | F ( x ) − G ( x ) | ≤ m max ≤ i ≤ m sup x ∈ J ( ε ) i | F ( x ) − G ( x ) | + m max ≤ i ≤ m ( G ( b i + ε ) − G ( b i )) . Note that G ( a α + ε ) − G ( a α ) ≤ Cε / with some absolute constant C >
0. Combining all theserelations we get sup x | F ( x ) − G ( x ) | ≤ ∆ ε ( F, G ) + Cε / , (B.3)where ∆ ε ( F, G ) = sup x ∈ J ε | F ( x ) − G ( x ) | . We denote v ′ def = v/ √ γ . For any x ∈ J ′ ε (cid:12)(cid:12)(cid:12)(cid:12) π Im (cid:16) Z x −∞ ( S F ( u + iv ′ ) − S G ( u + iv ′ )) du (cid:17)(cid:12)(cid:12)(cid:12)(cid:12) ≥ π Im (cid:16) Z x −∞ ( S F ( u + iv ′ ) − S G ( u + iv ′ )) du (cid:17) = 1 π (cid:20)Z x −∞ Z ∞−∞ v ′ d ( F ( y ) − G ( y ))( y − u ) + v ′ du (cid:21) = 1 π Z x −∞ (cid:20)Z ∞−∞ v ′ ( y − u )( F ( y ) − G ( y )) dy (( y − u ) + v ′ ) (cid:21) = 1 π Z ∞−∞ ( F ( y ) − G ( y )) (cid:20)Z x −∞ v ′ ( y − u )(( y − u ) + v ′ ) du (cid:21) dy = 1 π Z ∞−∞ F ( x − v ′ y ) − G ( x − v ′ y ) y + 1 dy, by change of variables . (B.4)Furthermore, using the definition (B.1) of a and ∆( F, G ) we note that1 π Z | y | >a | F ( x − v ′ y ) − G ( x − v ′ y ) | y + 1 dy ≤ (1 − β )∆( F, G ) . (B.5)Since F is non decreasing, we have1 π Z | y |≤ a F ( x − v ′ y ) − G ( x − v ′ y ) y + 1 dy ≥ π Z | y |≤ a F ( x − v ′ a ) − G ( x − v ′ y ) y + 1 dy ≥ ( F ( x − v ′ a ) − G ( x − v ′ a )) β − π Z | y |≤ a | G ( x − v ′ y ) − G ( x − v ′ a ) | dy. These inequalities together imply (using a change of variables in the last step)1 π Z ∞−∞ F ( x − v ′ y ) − G ( x − v ′ y ) y + 1 dy ≥ β ( F ( x − v ′ a ) − G ( x − v ′ a )) − π Z | y |≤ a | G ( x − v ′ y ) − G ( x − v ′ a ) | dy − (1 − β )∆( F, G ) ≥ β ( F ( x − v ′ a ) − G ( x − v ′ a )) − v ′ π Z | y |≤ v ′ a | G ( x − y ) − G ( x − v ′ a ) | dy − (1 − β )∆( F, G ) . (B.6)Note that according to Remark B, x ± v ′ a ∈ J ′ ε for any x ∈ J ε . Assume first that x n ∈ J ε is a sequencesuch that F ( x n ) − G ( x n ) → ∆ ε ( F, G ). Then x ′ n def = x n + v ′ a ∈ J ′ ε . 
Using (B.4) and (B.6), we getsup x ∈ J ′ ε (cid:12)(cid:12)(cid:12)(cid:12) Im Z x −∞ ( S F ( u + iv ′ ) − S G ( u + iv ′ )) du (cid:12)(cid:12)(cid:12)(cid:12) ≥ Im Z x ′ n −∞ ( S F ( u + iv ′ ) − S G ( u + iv ′ )) du ≥ β ( F ( x ′ n − v ′ a ) − G ( x ′ n − v ′ a )) − πv sup x ∈ J ′ ε √ γ Z | y |≤ v ′ a | G ( x + y ) − G ( x ) | dy − (1 − β )∆( F, G )= β ( F ( x n ) − G ( x n )) − πv sup x ∈ J ′ ε √ γ Z | y | < v ′ a | G ( x + y ) − G ( x ) | dy − (1 − β )∆( F, G ) . (B.7)Assume for definiteness that y >
0. Recall that ε ≤ γ , for any x ∈ J ′ ε . By Remark B with ε/ ε , we have 0 < y ≤ v ′ a ≤ √ ε , for any x ∈ J ′ ε . By the conditions of the Proposition we have | G ( x + y ) − G ( x ) | ≤ y sup u ∈ [ x,x + y ] G ′ ( u ) ≤ yC √ γ + y ≤ Cy √ γ + 2 v ′ a ≤ Cy √ γ + ε ≤ Cy √ γ .
This yields after integrating in y 1 πv sup x ∈ J ′ ε √ γ ∫ ≤ y ≤ v ′ a | G ( x + y ) − G ( x ) | dy ≤ Cv sup x ∈ J ′ ε γv ′ ≤ Cv. (B.8) Similarly we get that 1 πv sup x ∈ J ′ ε √ γ ∫ ≥ y ≥− v ′ a | G ( x + y ) − G ( x ) | dy ≤ Cv sup x ∈ J ′ ε γv ′ ≤ Cv. (B.9) By inequality (B.3) ∆ ε ( F, G ) ≥ ∆( F, G ) − Cε . (B.10) The inequalities (B.7), (B.10) and (B.8), (B.9) together yield as n tends to infinity sup x ∈ J ′ ε | Im ∫ x −∞ ( S F ( u + iv ′ ) − S G ( u + iv ′ )) du | ≥ (2 β − 1)∆ ε ( F, G ) − Cv − Cε , (B.11) for some constant C >
0. Similar arguments may be used to prove this inequality in case there is a sequence x n ∈ J ε such that F ( x n ) − G ( x n ) → − ∆ ε ( F, G ). In view of (B.11) and 2 β − 1 > 0 this completes the proof. □

References

[1] R. Adamczak. On the Marchenko-Pastur and circular laws for some classes of random matrices with dependent entries.
Electron. J. Probab., 16:no. 37, 1068–1095, 2011.
[2] R. Adamczak, D. Chafaï, and P. Wolff. Circular law for random matrices with exchangeable entries. Random Structures Algorithms, 48(3):454–479, 2016.
[3] G. Akemann, M. Cikovic, and M. Venker. Universality at weak and strong non-Hermiticity beyond the elliptic Ginibre ensemble. arXiv:1610.06517, 2015.
[4] J. Alt, L. Erdős, and T. Krüger. Local inhomogeneous circular law. arXiv:1612.07776, 2016.
[5] Z. Bai and J. Silverstein. Spectral analysis of large dimensional random matrices. Springer, New York, second edition, 2010.
[6] C. Bordenave, P. Caputo, and D. Chafaï. Circular law theorem for random Markov matrices. Probab. Theory Related Fields, 152(3-4):751–779, 2012.
[7] C. Bordenave and D. Chafaï. Lecture notes on the circular law. In Modern aspects of random matrix theory, volume 72 of Proc. Sympos. Appl. Math., pages 1–34. Amer. Math. Soc., Providence, RI, 2014.
[8] P. Bourgade, H.-T. Yau, and J. Yin. Local circular law for random matrices. Probab. Theory Related Fields, 159(3-4):545–595, 2014.
[9] P. Bourgade, H.-T. Yau, and J. Yin. The local circular law II: the edge case. Probab. Theory Related Fields, 159(3-4):619–660, 2014.
[10] Z. Burda, R. Janik, and B. Waclaw. Spectrum of the product of independent random Gaussian matrices. Phys. Rev. E (3), 81(4):041132, 12, 2010.
[11] C. Cacciapuoti, A. Maltsev, and B. Schlein. Bounds for the Stieltjes transform and the density of states of Wigner matrices. Probab. Theory Related Fields, 163(1):1–59, 2015.
[12] N. Cook, W. Hachem, J. Najim, and D. Renfrew. Limiting spectral distribution for non-Hermitian random matrices with a variance profile. arXiv:1612.04428, 2016.
[13] L. Erdős, A. Knowles, H.-T. Yau, and J. Yin. Spectral statistics of Erdős–Rényi graphs II: Eigenvalue spacing and the extreme eigenvalues. Comm. Math. Phys., 314(3):587–640, 2012.
[14] L. Erdős, A. Knowles, H.-T. Yau, and J. Yin. The local semicircle law for a general class of random matrices. Electron. J. Probab., 18:no. 59, 58, 2013.
[15] L. Erdős, A. Knowles, H.-T. Yau, and J. Yin. Spectral statistics of Erdős–Rényi graphs I: Local semicircle law. Ann. Probab., 41(3B):2279–2375, 2013.
[16] L. Erdős, B. Schlein, and H.-T. Yau. Local semicircle law and complete delocalization for Wigner random matrices. Comm. Math. Phys., 287(2):641–655, 2009.
[17] L. Erdős, B. Schlein, and H.-T. Yau. Semicircle law on short scales and delocalization of eigenvectors for Wigner random matrices. Ann. Probab., 37(3):815–852, 2009.
[18] L. Erdős, B. Schlein, and H.-T. Yau. Wegner estimate and level repulsion for Wigner random matrices. Int. Math. Res. Not. IMRN, (3):436–479, 2010.
[19] Y. Fyodorov, B. Khoruzhenko, and H.-J. Sommers. Almost-Hermitian random matrices: eigenvalue density in the complex plane. Phys. Lett. A, 226(1-2):46–52, 1997.
[20] E. Giné, R. Latała, and J. Zinn. Exponential and moment inequalities for U-statistics. In High dimensional probability, II (Seattle, WA, 1999), volume 47 of Progr. Probab., pages 13–38. Birkhäuser Boston, Boston, MA, 2000.
[21] J. Ginibre. Statistical ensembles of complex, quaternion, and real matrices. J. Mathematical Phys., 6:440–449, 1965.
[22] V. Girko. The circular law. Teor. Veroyatnost. i Primenen., 29(4):669–679, 1984.
[23] V. Girko. The elliptic law. Teor. Veroyatnost. i Primenen., 30(4):640–651, 1985.
[24] F. Götze, A. Naumov, and A. Tikhomirov. Local semicircle law under moment conditions. Part I: The Stieltjes transform. arXiv:1510.07350, 2015.
[25] F. Götze, A. Naumov, and A. Tikhomirov. Local semicircle law under moment conditions. Part II: Localization and delocalization. arXiv:1511.00862, 2015.
[26] F. Götze, A. Naumov, and A. Tikhomirov. On a generalization of the elliptic law for random matrices. Acta Phys. Polon. B, 46(9):1737–1745, 2015.
[27] F. Götze, A. Naumov, and A. Tikhomirov. On minimal singular values of random matrices with correlated entries. Random Matrices Theory Appl., 4(2):1550006, 30, 2015.
[28] F. Götze, A. Naumov, A. Tikhomirov, and A. Timushev. On the local semicircle law for Wigner ensembles. arXiv:1602.03073, 2016.
[29] F. Götze and A. Tikhomirov. On the asymptotic spectrum of products of independent random matrices. arXiv:1012.2710.
[30] F. Götze and A. Tikhomirov. Rate of convergence to the semi-circular law. Probab. Theory Related Fields, 127(2):228–276, 2003.
[31] F. Götze and A. Tikhomirov. On the circular law. arXiv:math/0702386, 2007.
[32] F. Götze and A. Tikhomirov. The circular law for random matrices. Ann. Probab., 38(4):1444–1491, 2010.
[33] W. Johnson, G. Schechtman, and J. Zinn. Best constants in moment inequalities for linear combinations of independent and exchangeable random variables. Ann. Probab., 13(1):234–253, 1985.
[34] J. Lee and J. Yin. A necessary and sufficient condition for edge universality of Wigner matrices. Duke Math. J., 163(1):117–173, 2014.
[35] M. Mehta. Random matrices, volume 142 of Pure and Applied Mathematics (Amsterdam). Elsevier/Academic Press, Amsterdam, third edition, 2004.
[36] A. A. Naumov. Elliptic law for random matrices. Vestnik Moskov. Univ. Ser. XV Vychisl. Mat. Kibernet., (1):31–38, 2013.
[37] Y. Nemish. Local law for the product of independent non-Hermitian random matrices with independent entries. Electron. J. Probab., 22:35 pp., 2017.
[38] H. Nguyen and S. O'Rourke. The elliptic law. Int. Math. Res. Not. IMRN, (17):7620–7689, 2015.
[39] S. O'Rourke, D. Renfrew, A. Soshnikov, and V. Vu. Products of independent elliptic random matrices. J. Stat. Phys., 160(1):89–119, 2015.
[40] S. O'Rourke and A. Soshnikov. Products of independent non-Hermitian random matrices. arXiv:1012.4497.
[41] S. O'Rourke and A. Soshnikov. Products of independent non-Hermitian random matrices. Electron. J. Probab., 16:no. 81, 2219–2245, 2011.
[42] G. Pan and W. Zhou. Circular law, extreme singular values and potential theory. J. Multivariate Anal., 101(3):645–656, 2010.
[43] H. Rosenthal. On the subspaces of L^p (p > 2) spanned by sequences of independent random variables. Israel J. Math., 8:273–303, 1970.
[44] M. Rudelson and R. Vershynin. The Littlewood-Offord problem and invertibility of random matrices. Adv. Math., 218(2):600–633, 2008.
[45] T. Tao and V. Vu. Random matrices: the circular law. Commun. Contemp. Math., 10(2):261–307, 2008.
[46] T. Tao and V. Vu. Random matrices: universality of ESDs and the circular law. Ann. Probab., 38(5):2023–2065, 2010. With an appendix by Manjunath Krishnapur.
[47] T. Tao and V. Vu. Random matrices: universality of local spectral statistics of non-Hermitian matrices. Ann. Probab., 43(2):782–874, 2015.
[48] V. Vu. Spectral norm of random matrices. Combinatorica, 27(6):721–736, 2007.
[49] E. Wigner. Characteristic vectors of bordered matrices with infinite dimensions. Ann. of Math. (2), 62:548–564, 1955.
[50] H. Xi, F. Yang, and J. Yin. Local circular law for the product of a deterministic matrix with a random matrix. Electron. J. Probab., 22:77 pp., 2017.
[51] J. Yin. The local circular law III: general case. Probab. Theory Related Fields, 160(3-4):679–732, 2014.
Friedrich Götze, Faculty of Mathematics, Bielefeld University, Bielefeld, Germany
E-mail address : [email protected] Alexey A. Naumov, National Research University Higher School of Economics, Moscow, Russia
E-mail address : [email protected] Alexander N. Tikhomirov, Department of Mathematics, Komi Science Center of Ural Division of RAS,Syktyvkar, Russia; and National Research University Higher School of Economics, Moscow, Russia
E-mail address :