On the lower bound of the spectral norm of symmetric random matrices with independent entries
Sandrine Péché∗  Alexander Soshnikov†

November 4, 2018
Abstract
We show that the spectral radius of an N × N random symmetric matrix with i.i.d. bounded, centered, but non-symmetrically distributed entries is bounded from below by 2σ − o(N^{−6/11+ε}), where σ² is the variance of the matrix entries and ε is an arbitrarily small positive number. Combining this with our previous result from [7], we prove that for any ε > 0 one has ‖A_N‖ = 2σ + o(N^{−6/11+ε}) with probability going to 1 as N → ∞.

Wigner random matrices were introduced by E. Wigner about fifty years ago ([15], [16]) as a model to study the statistics of resonance levels for neutrons off heavy nuclei. Nowadays, there are many fruitful connections between Random Matrix Theory and Mathematical Physics, Probability Theory, Integrable Systems, Number Theory, Quantum Chaos, Theoretical Computer Science, Combinatorics, Statistics, and many other areas of science.

Let A_N be a sequence of real symmetric Wigner random matrices with non-symmetrically distributed entries. In other words, A_N = N^{−1/2} (a_{ij})_{i,j=1}^N, where the a_{ij}, i ≤ j, are i.i.d. random variables such that

E a_{ij} = 0,  E a_{ij}² = σ²,  E a_{ij}³ = µ₃,  and  |a_{ij}| ≤ C for all 1 ≤ i ≤ j ≤ N,  (1)

where C is some positive constant that does not depend on N. The common third moment µ₃ is not necessarily zero, which allows us to study the case when the marginal distribution of the matrix entries is not symmetric.

Let us denote by ‖A_N‖ the spectral norm of the matrix A_N, ‖A_N‖ = max_{1≤i≤N} |λ_i|, where λ_1, …, λ_N are the eigenvalues of A_N.

∗ Institut Fourier BP 74, 100 Rue des maths, 38402 Saint Martin d'Hères, France (permanent address) and Department of Mathematics, University of California at Davis, One Shields Ave., Davis, CA 95616, USA. E-mail: [email protected]

† Department of Mathematics, University of California at Davis, One Shields Ave., Davis, CA 95616, USA. E-mail: [email protected]. Research was supported in part by the NSF grants DMS-0405864 and DMS-0707145.
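To make the model (1) concrete, here is a small numerical sketch (our own illustration, not part of the paper): we sample a symmetric matrix with bounded, centered entries whose common distribution is skewed (nonzero third moment) and compare its spectral norm with 2σ. The particular two-point distribution and the size N = 1000 are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def wigner_matrix(N, rng):
    """Sample A_N = N^{-1/2} (a_ij) with bounded, centered, skewed entries.

    Entries take the values -1/2 and 3/2 with probabilities 3/4 and 1/4,
    so E a = 0, E a^2 = 3/4, and E a^3 = 3/4 != 0 (non-symmetric law).
    """
    a = rng.choice([-0.5, 1.5], size=(N, N), p=[0.75, 0.25])
    a = np.triu(a) + np.triu(a, 1).T          # symmetrize: a_ji = a_ij
    return a / np.sqrt(N)

N = 1000
A = wigner_matrix(N, rng)
spectral_norm = np.max(np.abs(np.linalg.eigvalsh(A)))
sigma = np.sqrt(0.75)                          # sigma^2 = Var(a_ij) = 3/4
# Theorem 1.2 predicts spectral_norm = 2*sigma + o(N^{-6/11+eps}).
print(spectral_norm, 2 * sigma)
```

For N = 1000 the computed norm is already within a few percent of 2σ ≈ 1.732.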
Clearly, the eigenvalues of A_N are real random variables. It was proved in [7] that for an arbitrarily small positive number ε > 0, the spectral norm of A_N is bounded as

‖A_N‖ ≤ 2σ + o(N^{−6/11+ε})  (2)

with probability going to 1. In this paper, we prove that 2σ − o(N^{−6/11+ε}) is also a lower bound for ‖A_N‖. The main result of the paper is the following
Theorem 1.1.
Let ‖A_N‖ denote the spectral norm of the matrix A_N, and let ε > 0. Then

‖A_N‖ ≥ 2σ − o(N^{−6/11+ε})  (3)

with probability going to 1 as N → ∞.

Combining the result of Theorem 1.1 with (2), we obtain
Theorem 1.2.
Let ‖A_N‖ denote the spectral norm of the matrix A_N, and let ε > 0. Then

‖A_N‖ = 2σ + o(N^{−6/11+ε})  (4)

with probability going to 1 as N → ∞.

Remark. In fact, one does not need the assumption that the matrix entries are identically distributed, as long as {a_{ij}, 1 ≤ i ≤ j ≤ N} are independent, uniformly bounded, centralized random variables with the same variance σ² off the diagonal. The proofs of the results of the present paper and of [7] still hold without any significant alterations, since we only use the upper bounds |E a_{ij}^k| ≤ C^k on the third and higher moments, i.e. for k ≥ 3, and not the exact values of these moments.

Remark. Similar results hold for Hermitian Wigner matrices as well. Since the proof is essentially the same, we will discuss only the real symmetric case in this paper.

We remark that 2σ is the right edge of the support of the Wigner semicircle law; therefore, it immediately follows from the classical result of Wigner ([15], [16], [2]) that for any fixed δ > 0, P(‖A_N‖ ≥ 2σ − δ) → 1 as N → ∞.

A standard way to obtain an upper bound on the spectral norm is to study the asymptotics of E[Tr A_N^{2s_N}] for integers s_N proportional to N^γ, γ > 0. If one can show that

E[Tr A_N^{2s_N}] ≤ Const₁ N^{γ₁} (2σ)^{2s_N},  (5)

where s_N = Const N^γ (1 + o(1)), and Const₁ and γ₁ depend only on Const and γ, one can prove that

‖A_N‖ ≤ 2σ + O(N^{−γ} log N)  (6)

with probability going to 1, by using the upper bound E[‖A_N‖^{2s_N}] ≤ E[Tr A_N^{2s_N}] and the Markov inequality. In particular, Füredi and Komlós in [3] were able to prove (6) for γ ≤ 1/6, and Vu [14] extended their result to γ ≤ 1/4. Both papers [3] and [14] treated the case when the matrix entries {a_{ij}} are uniformly bounded. In [7], we were able to prove that

E[Tr A_N^{2s_N}] = N/(π^{1/2} s_N^{3/2}) (2σ)^{2s_N} (1 + o(1)),  (7)

for s_N = O(N^{6/11−ε}) and any ε > 0, thus establishing (2). Again, we restricted our attention in [7] to the case of uniformly bounded entries.
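For completeness, the Markov-inequality step that leads from a trace bound of type (5) to the norm bound (6) can be sketched as follows (Const₁ and γ₁ are as in (5); the constant c in the choice of h is illustrative):

```latex
% For any h > 0, since \|A_N\|^{2s_N} \le \mathrm{Tr}\, A_N^{2s_N},
\mathbb{P}\left( \|A_N\| \ge 2\sigma + h \right)
  \le \frac{\mathbb{E}\left[ \mathrm{Tr}\, A_N^{2s_N} \right]}{(2\sigma + h)^{2s_N}}
  \le \mathrm{Const}_1\, N^{\gamma_1} \left( 1 + \frac{h}{2\sigma} \right)^{-2s_N}.
% With s_N \asymp N^{\gamma} and h = c\, N^{-\gamma} \log N, the right-hand side
% is at most \mathrm{Const}_1\, N^{\gamma_1} e^{-c' \log N} \to 0 for c (hence c')
% chosen large enough, which yields (6).
```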
The proof relies on combinatorial arguments going back to [8], [9], and [10].

More is known if the matrix entries of a Wigner matrix have a symmetric distribution (so, in particular, the odd moments of the matrix entries vanish). In the case of a symmetric marginal distribution of the matrix entries, one can relax the condition that the (a_{ij}) are uniformly bounded and assume only that the marginal distribution is sub-Gaussian. It was shown by Tracy and Widom in [11] in the Gaussian (GOE) case that the largest eigenvalue deviates from the soft edge 2σ on the order O(N^{−2/3}), and the limiting distribution of the rescaled largest eigenvalue obeys the Tracy–Widom law ([11]):

lim_{N→∞} P(λ_max ≤ 2σ + σ x N^{−2/3}) = exp(−(1/2) ∫_x^∞ q(t) + (t − x) q²(t) dt),

where q(x) is the solution of the Painlevé II differential equation q''(x) = x q(x) + 2 q³(x) with the asymptotics q(x) ∼ Ai(x) as x → +∞. It was shown in [10] that this behavior is universal for Wigner matrices with sub-Gaussian and symmetrically distributed entries. Similar results hold in the Hermitian case (see [13], [10]). It is reasonable to expect that in the non-symmetric case the largest eigenvalue will have the Tracy–Widom distribution in the limit as well.

The lower bounds on the spectral norm of a Wigner random matrix with non-symmetrically distributed entries were probably considered to be more difficult than the upper bounds. Let us again restrict our attention to the case when the matrix entries are uniformly bounded. It was claimed in [3] that an estimate of the type (5) for γ ≤ 1/6 implies the lower bound ‖A_N‖ ≥ 2σ − O(N^{−1/6} log N). As noted by Van Vu in [14], "We do not see any way to materialize this idea." We concur with this opinion. In the next section, we show that (5) implies only the rather weak estimate

P(‖A_N‖ ≥ 2σ − N^{−1/2+δ}) ≥ N^{−3/4+δ},  (8)

for small δ > 0 and sufficiently large N.
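The N^{−2/3} fluctuation scale of the largest eigenvalue discussed above can be seen in a quick simulation (again an illustration, not part of the paper; the sample counts are arbitrary). We use GOE matrices normalized so that σ = 1/2, i.e. the soft edge sits at 2σ = 1, and check that the sample fluctuations of λ_max shrink as N grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def lambda_max_goe(N, rng):
    """Largest eigenvalue of a GOE matrix scaled so that sigma = 1/2."""
    g = rng.standard_normal((N, N))
    h = (g + g.T) / (2 * np.sqrt(2))   # off-diagonal entry variance 1/4
    return np.linalg.eigvalsh(h / np.sqrt(N))[-1]

# lambda_max = 2*sigma + O(N^{-2/3}), so its spread should shrink roughly
# by a factor 4^{2/3} ~ 2.5 when N goes from 100 to 400.
std_small = np.std([lambda_max_goe(100, rng) for _ in range(40)])
std_large = np.std([lambda_max_goe(400, rng) for _ in range(40)])
print(std_small, std_large)
```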
Combining (8) with the concentration of measure inequalities for ‖A_N‖ (see [4], [1]), one then obtains that for Wigner matrices with uniformly bounded entries

P(‖A_N‖ ≥ 2σ(1 − C N^{−1/2} √(log N))) → 1 as N → ∞,  (9)

where C is the same constant as in (1). The proof of Theorem 1.1 will be given in Section 3, where we establish an analogue of the law of large numbers for Tr A_N^{2s_N} with s_N = O(N^{6/11−ε}), proving that

Tr A_N^{2s_N} = N/(π^{1/2} s_N^{3/2}) (2σ)^{2s_N} (1 + o(1))  (10)

with probability going to 1 as N → ∞.

Without loss of generality, we can assume σ = 1/2. This conveniently sets the right edge of the Wigner semicircle law to be 1. Let us fix 0 < δ < 1/2, and denote Ω_N = {‖A_N‖ > 1 − N^{−1/2+δ}}. Choose s_N to be an integer such that s_N = N^{1/2−ε}(1 + o(1)) and 2δ/3 < ε < δ. Let us denote by 1_Ω the indicator of the set Ω and by Ω^c the complement of Ω. Then

E(Tr A_N^{2s_N} 1_{Ω_N^c}) ≤ N(1 − N^{−1/2+δ})^{2s_N} ≤ N (exp(−N^{−1/2+δ}))^{2N^{1/2−ε}(1+o(1))} = O(N e^{−N^{δ−ε}}),  (11)

which is o(1) as N → ∞. Let us now partition Ω_N as the disjoint union Ω_N = Ω'_N ⊔ Ω''_N, where Ω'_N = {1 − N^{−1/2+δ} < ‖A_N‖ < 1 + N^{−1/2+ε}} and Ω''_N = {‖A_N‖ ≥ 1 + N^{−1/2+ε}}. Then

E(Tr A_N^{2s_N} 1_{Ω'_N}) ≤ N(1 + N^{−1/2+ε})^{2s_N} P(Ω'_N) ≤ N(e² + o(1)) P(Ω'_N).  (12)

As for E(Tr A_N^{2s_N} 1_{Ω''_N}), one can show that

E(Tr A_N^{2s_N} 1_{Ω''_N}) ≤ E(N ‖A_N‖^{2s_N} 1_{Ω''_N}) ≤ N(1 + N^{−1/2+ε})^{2s_N − 2[N^{1/2−ε/2}]} E(‖A_N‖^{2[N^{1/2−ε/2}]} 1_{Ω''_N}) ≤ N e^{−N^{ε/2}(1+o(1))} E(‖A_N‖^{2[N^{1/2−ε/2}]}) ≤ N² e^{−N^{ε/2}(1+o(1))},  (13)

where the second inequality holds since ‖A_N‖ ≥ 1 + N^{−1/2+ε} on Ω''_N and the exponent 2s_N − 2[N^{1/2−ε/2}] is negative, and where in the last inequality we used (7) (for σ = 1/2) to get

E(‖A_N‖^{2[N^{1/2−ε/2}]}) ≤ E(Tr A_N^{2[N^{1/2−ε/2}]}) = N/(π^{1/2} [N^{1/2−ε/2}]^{3/2}) (1 + o(1)) ≤ N

for sufficiently large N. Combining the above estimates and (7) (for σ = 1/2), we obtain for sufficiently large N that

N/(π^{1/2} s_N^{3/2}) (1 + o(1)) = E(Tr A_N^{2s_N}) ≤ O(N e^{−N^{δ−ε}}) + O(N² e^{−N^{ε/2}}) + P(Ω'_N) N (e² + o(1)).  (14)

Therefore,

P(‖A_N‖ > 1 − N^{−1/2+δ}) ≥ P(Ω'_N) ≥ e^{−2} π^{−1/2} s_N^{−3/2} (1 + o(1)) = N^{−3/4+3ε/2} (e^{−2} π^{−1/2} − o(1)) ≥ N^{−3/4+δ}  (15)

for sufficiently large N (depending on δ), since 3ε/2 > δ.

It was shown by Alon, Krivelevich, and Vu ([1]), and by Guionnet and Zeitouni ([4]), that for Wigner random matrices with bounded entries the spectral norm is strongly concentrated around its mean. Indeed, the spectral norm is a 1-Lipschitz function of the matrix entries, since

|‖A‖ − ‖B‖| ≤ ‖A − B‖ ≤ ‖A − B‖_HS = (Tr((A − B)(A − B)^t))^{1/2} = (Σ_{ij} (a_{ij} − b_{ij})²)^{1/2},

where ‖·‖_HS denotes the Hilbert–Schmidt norm. Therefore, one can apply the concentration of measure results ([12], [6], [5]). In particular (see Theorem 1 in [1]),

P(|‖A_N‖ − E‖A_N‖| > C t N^{−1/2}) ≤ e^{−t²/32},  (16)

uniformly in N and t, where the constant C is the same as in (1). Combining (15) and (16), we arrive at

E‖A_N‖ ≥ 1 − √24 C N^{−1/2} √(log N)  (17)

for sufficiently large N. The last inequality together with (16) then implies (9) (recall that we set σ = 1/2).

The main technical result of this section is the following analogue of the Law of Large Numbers for
Tr A_N^{2s_N}.

Proposition 3.1.
Let s_N = O(N^{6/11−ε}), where ε is an arbitrarily small positive constant. Then

Tr A_N^{2s_N} = E(Tr A_N^{2s_N}) (1 + δ_N),  (18)

where P(|δ_N| ≥ N^{−1/22}) → 0 as N → ∞. In particular,

Tr A_N^{2s_N} = N/(π^{1/2} s_N^{3/2}) (2σ)^{2s_N} (1 + o(1)),  (19)

with probability going to 1 as N → ∞. To make (19) more precise, we can say that the ratio of the l.h.s. and the r.h.s. of (19) goes to 1 in probability as N → ∞.

The main part of the proof of Proposition 3.1 is the following bound on the variance.
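Before turning to the variance bound, the law-of-large-numbers behavior in (18)-(19) can be checked by a quick Monte Carlo experiment (our own illustration; the parameters N = 400, s = 2, the 30-sample budget, and the skewed two-point entry distribution with σ² = 3/4 are all arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def trace_power(N, s, rng):
    """Tr A_N^{2s} for a Wigner matrix with skewed bounded entries (sigma^2 = 3/4)."""
    a = rng.choice([-0.5, 1.5], size=(N, N), p=[0.75, 0.25])
    a = np.triu(a) + np.triu(a, 1).T
    evals = np.linalg.eigvalsh(a / np.sqrt(N))
    return np.sum(evals ** (2 * s))

# The relative fluctuation of Tr A_N^{2s} should be small: by Lemma 3.1 the
# variance stays O(sqrt(s) (2 sigma)^{4s}) while the squared mean grows like N^2.
samples = np.array([trace_power(400, 2, rng) for _ in range(30)])
rel_std = samples.std() / samples.mean()
print(samples.mean(), rel_std)
```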
Lemma 3.1.
Let s_N = O(N^{6/11−ε}), where ε is an arbitrarily small positive constant. Then there exists Const > 0 such that

Var(Tr A_N^{2s_N}) ≤ Const √s_N (2σ)^{4s_N}.  (20)

The lemma is proven in the subsection below. Assuming Lemma 3.1, we obtain the proof of Proposition 3.1 via the Chebyshev inequality. Indeed, it follows from (20) and (7) that

Var(Tr A_N^{2s_N}) / (E[Tr A_N^{2s_N}])² ≤ Const √s_N (2σ)^{4s_N} / (N² (2σ)^{4s_N} / (π s_N³)) (1 + o(1)) = O(s_N^{7/2} N^{−2}) = O(N^{−1/11−7ε/2}).  (21)

To finish the proof of the main result of the paper, we fix an arbitrarily small positive constant δ and choose ε in such a way that 0 < ε < δ. Setting σ = 1/2, we scale the eigenvalues in such a way that the right edge of the Wigner semicircle law is equal to 1. Let us denote Ω_N = {‖A_N‖ > 1 − N^{−6/11+δ}}. Choosing s_N = N^{6/11−ε}(1 + o(1)), we note that on Ω_N^c

Tr A_N^{2s_N} 1_{Ω_N^c} ≤ N(1 − N^{−6/11+δ})^{2s_N} = O(N e^{−N^{δ−ε}}) = o(1).

At the same time, Proposition 3.1 implies (see (19)) that
Tr A_N^{2s_N} ≥ N^{2/11} with probability going to 1. Therefore,

P(‖A_N‖ ≤ 1 − N^{−6/11+δ}) → 0 as N → ∞.  (22)

Since δ > 0 is arbitrarily small, this proves Theorem 1.1.

We now turn our attention to the variance of the trace, which can be computed as follows. To express Var Tr A_N^{2s_N} in terms of the matrix entries, we first write Tr A_N^{2s_N} as a sum of products of matrix entries, namely we express Tr A_N^{2s_N} as the sum of the diagonal entries of the matrix A_N^{2s_N}. Therefore,

E Tr A_N^{2s_N} = N^{−s_N} Σ_{1 ≤ i_0,…,i_{2s_N−1} ≤ N} E Π_{0 ≤ k ≤ 2s_N−1} a_{i_k i_{k+1}},  (23)

where we assume that i_{2s_N} = i_0. We can then rewrite E Tr A_N^{2s_N} as a sum over the set of closed paths P = {i_0 → i_1 → … → i_{2s_N−1} → i_0} on the complete graph on the N vertices {1, 2, …, N} as

E Tr A_N^{2s_N} = N^{−s_N} Σ_P E Π_{(i_k i_{k+1}) ∈ P} a_{i_k i_{k+1}}.  (24)

In a similar fashion (again using the convention that i_{2s_N} = i_0 and j_{2s_N} = j_0), we can write

Var Tr A_N^{2s_N} = N^{−2s_N} Σ_{1 ≤ i_0,…,i_{2s_N−1} ≤ N} Σ_{1 ≤ j_0,…,j_{2s_N−1} ≤ N} [ E Π_{0≤k≤2s_N−1} Π_{0≤l≤2s_N−1} a_{i_k i_{k+1}} a_{j_l j_{l+1}} − E Π_{0≤k≤2s_N−1} a_{i_k i_{k+1}} E Π_{0≤l≤2s_N−1} a_{j_l j_{l+1}} ]
= N^{−2s_N} Σ_{P_1, P_2} [ E Π_{(i_k i_{k+1})∈P_1} Π_{(j_l j_{l+1})∈P_2} a_{i_k i_{k+1}} a_{j_l j_{l+1}} − E Π_{(i_k i_{k+1})∈P_1} a_{i_k i_{k+1}} E Π_{(j_l j_{l+1})∈P_2} a_{j_l j_{l+1}} ]
= N^{−2s_N} Σ*_{P_1, P_2} [ E Π_{(i_k i_{k+1})∈P_1} Π_{(j_l j_{l+1})∈P_2} a_{i_k i_{k+1}} a_{j_l j_{l+1}} − E Π_{(i_k i_{k+1})∈P_1} a_{i_k i_{k+1}} E Π_{(j_l j_{l+1})∈P_2} a_{j_l j_{l+1}} ],

where P_1 and P_2 are closed paths of length 2s_N, P_1 = {i_0 → i_1 → … → i_{2s_N−1} → i_0} and P_2 = {j_0 → j_1 → … → j_{2s_N−1} → j_0}. The starred summation symbol Σ*_{P_1,P_2} in the last line of the previous array of equations means that the summation is restricted to the set of pairs of closed paths P_1, P_2 of length 2s_N on the complete graph on N vertices {1, 2, …
, N} that satisfy the following two conditions:

(i) P_1 and P_2 have at least one edge in common;

(ii) each edge from the union of P_1 and P_2 appears at least twice in the union.

Indeed, if P_1 and P_2 do not satisfy conditions (i) and (ii), then the corresponding term in the expression for Var Tr A_N^{2s_N} vanishes due to the independence of the matrix entries on and above the diagonal and the fact that the matrix entries have zero mean. Paths P_1, P_2 that satisfy (i) and (ii) are called correlated paths (see [8], [9]).

To estimate from above the contribution of the pairs of correlated paths, we construct for each such pair a new path of length 4s_N − 2. Such a mapping sends the set of pairs of correlated paths of length 2s_N to the set of paths of length 4s_N − 2; it is defined as follows. Fix an ordered pair of correlated paths P_1 and P_2. Let us consider the first edge along P_1 which also belongs to P_2. We shall call such an edge the joint edge of the ordered pair of correlated paths P_1 and P_2. We are now ready to construct the corresponding path of length 4s_N − 2, which we denote by P_1 ∨ P_2. We choose the starting point of P_1 ∨ P_2 to coincide with the starting point of the path P_1. We begin walking along the first path until we reach the joint edge for the first time. At the left point of the joint edge we then switch to the second path. If the directions of the joint edge in P_1 and P_2 are opposite to each other, we walk along P_2 in the direction of P_2. If the directions of the joint edge in P_1 and P_2 coincide, we walk along P_2 in the direction opposite to P_2. In both cases, we make 2s_N − 1 steps along P_2. In other words, we pass all 2s_N edges of P_2 except for the joint edge and arrive at the right point of the joint edge. There, we switch back to the first path and finish it. It follows from the construction that the new path P_1 ∨ P_2 is closed, since it starts and ends at the starting point of P_1. Moreover, the length of P_1 ∨ P_2 is 4s_N − 2, since the joint edge is not traversed in P_1 ∨ P_2. We now estimate the contribution of the correlated pairs P_1, P_2 in terms of P_1 ∨ P_2.
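The gluing construction described above can be prototyped directly. Below is a schematic implementation (our own illustration, with closed paths stored as cyclic vertex lists; the tie-breaking conventions are simplified relative to the text): it locates the joint edge, traverses the second path so that the joint edge is skipped, and returns a closed path of length 4s_N − 2 in which the joint edge has lost two occurrences.

```python
from collections import Counter

def edge_counts(path):
    """Multiset of unordered edges of a closed path given as a cyclic vertex list."""
    L = len(path)
    return Counter(frozenset((path[k], path[(k + 1) % L])) for k in range(L))

def glue(p1, p2):
    """Construct the closed path p1 v p2 from an ordered pair of correlated paths."""
    L1, L2 = len(p1), len(p2)
    e2 = set(edge_counts(p2))
    # The joint edge: the first edge along p1 that also belongs to p2.
    k0 = next(k for k in range(L1)
              if frozenset((p1[k], p1[(k + 1) % L1])) in e2)
    u, v = p1[k0], p1[(k0 + 1) % L1]
    j0 = next(j for j in range(L2)
              if frozenset((p2[j], p2[(j + 1) % L2])) == frozenset((u, v)))
    if p2[j0] == u:
        # Same direction in p2: walk p2 backwards from u, skipping the joint edge.
        mid = [p2[(j0 - t) % L2] for t in range(1, L2)]
    else:
        # Opposite direction: walk p2 forwards from u, skipping the joint edge.
        mid = [p2[(j0 + 1 + t) % L2] for t in range(1, L2)]
    # mid makes 2s-1 steps along p2 and ends at v; then we finish p1.
    return p1[:k0 + 1] + mid[:-1] + p1[k0 + 1:]

p1 = [1, 2, 3, 4, 5, 2]        # closed path of length 6 (s = 3)
p2 = [2, 3, 6, 7, 8, 6]        # shares the edge (2, 3) with p1
g = glue(p1, p2)
print(g, len(g))               # length 4s - 2 = 10; edge (2, 3) erased twice
```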
Note that P_1 ∪ P_2 and P_1 ∨ P_2 have the same edges appearing the same number of times, save for one important exception. It follows from the construction of P_1 ∨ P_2 that the number of appearances of the joint edge in P_1 ∪ P_2 is bigger than the number of appearances of the joint edge in P_1 ∨ P_2 by two (in particular, if the joint edge appears only once in both P_1 and P_2, it does not appear at all in P_1 ∨ P_2). This observation will help us to determine the number of preimages P_1, P_2 of a given path P_1 ∨ P_2 and to relate the expectations associated to P_1 ∪ P_2 and P_1 ∨ P_2.

Assume first that P_1 ∨ P_2 is an even path. In this case, the arguments are identical to the ones used in [9] and [10]. For the convenience of the reader, we discuss the key steps below. To reconstruct P_1 and P_2 from P_1 ∨ P_2, it is enough to determine three things: (i) the moment of time t_s in P_1 ∨ P_2 where one switches from P_1 to P_2, (ii) the direction in which P_2 is read, and (iii) the origin of P_2. The reader can note that the joint edge is uniquely determined by the instant t_s, since the two endpoints of the joint edge are respectively given by the vertices occurring in P_1 ∨ P_2 at the moments t_s and t_s + 2s_N −
1. It was proved in [9] (see Proposition 3) that the typical number of moments t_s of a possible switch is of the order of √s_N (and not s_N). This follows from the fact that the random walk trajectory associated to P_1 ∨ P_2 does not descend below the level x(t_s) during a time interval of length at least 2s_N. Given t_s, there are at most 2 × 2s_N = 4s_N possible choices for the orientation and origin of P_2. From that, we deduce that the contribution of the correlated pairs P_1, P_2 for which P_1 ∨ P_2 is an even path is at most of the order of

√s_N · 4s_N · (1/N) · N/(π^{1/2} (2s_N − 1)^{3/2}) (2σ)^{4s_N−2} = O((2σ)^{4s_N}),

where the extra factor 1/N arises from the contribution of the erased joint edge. Clearly, this bound is negligible compared to the r.h.s. of (20).

We now consider the contribution of the correlated paths P_1, P_2 such that P_1 ∨ P_2 contains odd edges. To do so, we use the gluing procedure defined in [9]. Two cases can be encountered:

1. the joint edge of P_1 and P_2 appears in P_1 ∨ P_2 exactly once (i.e. it appears in the union of P_1 and P_2 exactly three times);

2. all the odd edges of P_1 ∨ P_2 are read at least three times.

In case 2, one can use the results established in [7] to estimate the contribution of paths P_1 ∨ P_2 admitting odd edges, all of which are read at least three times. Therein it is proved, by using the same combinatorial machinery that proved (7), that

N^{−(2s_N−1)} Σ*_P E Π_{(m_k m_{k+1}) ∈ P} |a_{m_k m_{k+1}}| ≤ N/(π^{1/2} s_N^{3/2}) (2σ)^{4s_N−2} (1 + o(1)),

where the starred sum is over the set of paths P of length 4s_N − 2 whose odd edges are all read at least three times. We first note that the number of preimages of the path P_1 ∨ P_2 under the described mapping is at most 8s_N². Indeed, to reconstruct the pair P_1, P_2, we first note that there are at most 2s_N choices for the left vertex of the joint edge of P_1 and P_2, as we select it among the vertices of P_1 ∨ P_2.
Once the left vertex of the joint edge is chosen, we recover the right vertex of the joint edge automatically, since all we have to do is to make 2s_N − 1 steps along P_1 ∨ P_2 to arrive at the right vertex of the joint edge. Once this is done, we completely recover P_1. To recover P_2, we have to choose the starting vertex of P_2 and its orientation. This can be done in at most 2s_N × 2 ways. Thus, we end up with the upper bound

(8s_N²/N) N^{−(2s_N−1)} Σ_P E Π_{(m_k m_{k+1}) ∈ P} |a_{m_k m_{k+1}}|,  (25)

where the sum is over the set of paths P (i.e. P = P_1 ∨ P_2) of length 4s_N − 2 whose odd edges are all read at least three times (i.e. P does not contain edges that appear only once). Using the results of [7], we can bound (25) from above by

(8s_N²/N) · N/(π^{1/2} s_N^{3/2}) (2σ)^{4s_N−2} (1 + o(1)) ≤ const √s_N (2σ)^{4s_N}.

Finally, we have to deal with case 1 (i.e. when the joint edge of P_1 and P_2 appears in P_1 ∨ P_2 exactly once). Thus we need to estimate the contribution

Σ**_P E Π_{(m_k m_{k+1}) ∈ P} |a_{m_k m_{k+1}}|,

where the two-starred sum is over the set of paths P of length 4s_N − 2 that contain exactly one single edge, all other odd edges being read at least three times. For this, we need to modify the arguments in [7] to include the case when there is one single edge in the path. We refer the reader to that paper for the notations we will use. As we have already pointed out, in case 1 the path P_1 ∨ P_2 has one single edge (ij), which determines two vertices of the path P. This edge serves as the joint edge of P_1 and P_2. We recall from the construction of P_1 ∨ P_2 that in this case the joint edge appears three times in the union of P_1 and P_2. In other words, it either appears twice in P_1 and once in P_2, or it appears once in P_1 and twice in P_2. Without loss of generality, we can assume that the joint edge appears once in P_1 and twice in P_2. Let us recall that in order to construct P_1 ∨ P_2, we first go along P_1, then switch to P_2 at the appropriate endpoint of the joint edge, then make 2s_N − 1 steps along P_2, and, finally, switch back to P_1 at the other endpoint of the joint edge.
Let the moment of the switch back to P_1 occur at time t in P_1 ∨ P_2. Call P̄ the path obtained from P_1 ∨ P_2 by adding at time t two successive occurrences of the (unordered) edge (ij), in such a way that P̄ is still a path. Note that P̄ constructed in such a way is a path of length 4s_N. Furthermore, it follows from the construction of P̄ and the definition of the joint edge that the last occurrence of (ij) in P̄ is an odd edge, and it necessarily starts a subsequence of odd edges (we refer the reader to the beginning of Section 2.1 in [7] for the full account of how we split the set of odd edges into disjoint subsequences S_i, i = 1, …, J, of odd edges). Assume that we are given a path P̄ with at least one edge read three times, where the last two occurrences of this edge take place in succession. The idea used in [7] is that a path P̄ with odd edges (seen at least three times) can be built from a path (or a succession of paths) with even edges by inserting at some moments the last occurrences of the odd edges. Given a path with even edges only, we first choose, as described in the Insertion Procedure in Sections 3 and 4 of [7], the set of edges that will be odd in P̄, and we choose for each of them the moment of time at which they are going to be inserted. To be more precise, we first recall that the set of odd edges can be viewed as a union of cycles. We then split these cycles into disjoint subsequences of odd edges to be inserted into the even path (or, in general, into the succession of paths). In [7], we used the (rough) estimate that there are at most s_N possible choices for the moment of insertion of each subsequence of odd edges. The expectation corresponding to such a path can be examined as in [7], up to the following modification.
One of the subsequences S_k of odd edges described in Section 2.1 of [7] begins with the joint edge (ij), and there are just two possible choices (instead of s_N) of where one can insert that particular subsequence of odd edges, since the moment of the insertion must follow the moment of the appearance of (ij). This follows from the fact that the edge (ij) appears exactly three times in the path P̄, and the last two appearances are successive. As in [7], let us denote the number of the odd edges of P̄ by 2l. Since (ij) is an odd edge of the path P̄, there are at most 2l ways to choose the edge (ij) from the odd edges of P̄. Once (ij) is chosen, the number of the preimages (P_1, P_2) is at most 4s_N. Indeed, we need at most 2s_N choices to select the starting vertex of P_2 and at most two choices to select the orientation of P_2. Combining these remarks, we obtain that the computations preceding Subsection 4.1.2 of [7] yield

N^{−2s_N} Σ'_{P_1,P_2} [ E Π_{(i_k i_{k+1})∈P_1} Π_{(j_k j_{k+1})∈P_2} |a_{i_k i_{k+1}} a_{j_k j_{k+1}}| ]  (26)
≤ N^{−2s_N} Σ_{l ≤ s_N} (2l)(4s_N) Σ_{P̄ with 2l odd edges} E Π_{(m_k m_{k+1})∈P̄} |a_{m_k m_{k+1}}|.

Since (ij) appears three times in P̄ and only once in P_1 ∨ P_2, the weight of P̄ is of the order of 1/N of the weight of P_1 ∨ P_2 (since each matrix entry is of the order of N^{−1/2}).

Consider the path P_1 ∨ P_2. Let ν_N be the maximal number of times a vertex occurs in the even path associated to P_1 ∨ P_2. In particular, if we know one of the endpoints of an edge (say, the left one), the number of all possible choices for the other endpoint is bounded from above by ν_N. Then the number of preimages of P_1 ∨ P_2 is at most ν_N × 4s_N. Indeed, since (ij) is the only single edge of P_1 ∨ P_2 (i.e. the only edge appearing in P_1 ∨ P_2 just once), there is no ambiguity in determining the joint edge (ij) in P_1 ∨ P_2.
Then, there are at most ν_N choices to determine the place of the erased edge, since we have to select one of the appearances of the vertex i in P_1 ∨ P_2, which can be done in at most ν_N ways. Finally, there are 2s_N choices for the starting vertex of P_2 and 2 choices for its orientation. As in [7], let us denote by P' the even path obtained from P_1 ∨ P_2 by the gluing procedure. The only modification from Subsection 4.1.2 of [7] is that the upper bound (39) on the number of ways to determine the cycles in the Insertion Procedure has to be multiplied by the factor ν_N/s_N. The reason for this modification is the following. In Subsection 4.1.2, we observed that the set of odd edges can be viewed as a union of cycles. In [7], these cycles repeat some edges of P'. We need to reconstruct the cycles in order to determine the set of odd edges. Note that to reconstruct a cycle we need to know only every other edge in the cycle. For example, if we know the first and the third edges of the cycle, this uniquely determines the second edge of the cycle, as the left endpoint of the second edge coincides with the right endpoint of the first edge, and the right endpoint of the second edge coincides with the left endpoint of the third edge, and so on. In [7], we used a trivial upper bound 2s_N on the number of ways to choose an edge in the cycle, since each such edge appears in P' and we have to choose it among the edges of P'. The difference with our situation is that one of the edges of P_1 ∨ P_2, namely the joint edge (ij), does not appear in P'. However, its endpoints i and j appear among the vertices of P'. Therefore, we have at most ν_N choices for such an edge instead of the standard bound 2s_N that we used in [7]. Once the cycles are determined, one can split these cycles into disjoint sequences of odd edges to be inserted into P'. The total number of possible ways to insert these sequences is unchanged from Subsection 4.1.2 of [7].
These considerations immediately imply that the contribution to the variance from the correlated paths P_1, P_2 is at most of the order of

(1/N) · (ν_N 4s_N) · (ν_N/s_N) · N s_N^{−3/2} (2σ)^{4s_N} = O(√s_N (2σ)^{4s_N}),

as long as ν_N < C s_N^{1/2}. The case where ν_N > C s_N^{1/2} gives a negligible contribution, as it is extremely unlikely for any given vertex to appear that many times in the path. We refer the reader to Subsection 4.1.2 of [7], where this case was analyzed.

Finally, one has to bound N^{−2s_N} Σ'_{P_1,P_2} [ E Π_{(i_k i_{k+1})∈P_1} a_{i_k i_{k+1}} · E Π_{(j_k j_{k+1})∈P_2} a_{j_k j_{k+1}} ], where the sum is over the correlated pairs of paths. This can be done in the same way as we treated N^{−2s_N} Σ'_{P_1,P_2} [ E Π_{(i_k i_{k+1})∈P_1} Π_{(j_k j_{k+1})∈P_2} a_{i_k i_{k+1}} a_{j_k j_{k+1}} ] above. This finishes the proof of the lemma and gives us the proof of the main result.

References

[1] N. Alon, M. Krivelevich, and V. Vu,
On the concentration of eigenvalues of random symmetric matrices. Israel J. Math. 131 (2002), 259–267.

[2] L. Arnold, On Wigner's semicircle law for the eigenvalues of random matrices. J. Math. Anal. Appl. 20 (1967), 262–268.

[3] Z. Füredi and J. Komlós, The eigenvalues of random symmetric matrices. Combinatorica 1 (1981), no. 3, 233–241.

[4] A. Guionnet and O. Zeitouni, Concentration of the spectral measure for large matrices. Electron. Comm. Probab. 5 (2000), 119–136.

[5] M. Ledoux, The Concentration of Measure Phenomenon. Mathematical Surveys and Monographs, v. 89, American Mathematical Society, 2001.

[6] M. Ledoux, Concentration of measure and logarithmic Sobolev inequalities. Séminaire de Probabilités XXXIII, Lecture Notes in Math. 1709, Springer (1999), 120–216.

[7] S. Péché and A. Soshnikov, Wigner random matrices with non-symmetrically distributed entries. J. Stat. Phys. 129 (2007), 857–883.

[8] Y. Sinai and A. Soshnikov, Central limit theorem for traces of large random symmetric matrices with independent matrix elements. Bol. Soc. Brasil. Mat. (N.S.) 29 (1998), no. 1, 1–24.

[9] Y. Sinai and A. Soshnikov, A refinement of Wigner's semicircle law in a neighborhood of the spectrum edge for random symmetric matrices. Funct. Anal. Appl. 32 (1998), 114–131.

[10] A. Soshnikov, Universality at the edge of the spectrum in Wigner random matrices. Commun. Math. Phys. 207 (1999), 697–733.

[11] C. Tracy and H. Widom, On orthogonal and symplectic matrix ensembles. Commun. Math. Phys. 177 (1996), 727–754.

[12] M. Talagrand, Concentration of measure and isoperimetric inequalities in product spaces. Publ. Math. Inst. Hautes Études Sci. 81 (1995), 73–205.

[13] C. Tracy and H. Widom, Level-spacing distributions and the Airy kernel. Commun. Math. Phys. 159 (1994), 151–174.

[14] V. H. Vu, Spectral norm of random matrices. STOC'05: Proceedings of the 37th Annual ACM Symposium on Theory of Computing (2005), 423–430.

[15] E. Wigner, Characteristic vectors of bordered matrices with infinite dimensions. Annals of Math. 62 (1955), 548–564.

[16] E. Wigner, On the distribution of the roots of certain symmetric matrices. Annals of Math. 67 (1958), 325–327.