Adjacency matrices of random digraphs: singularity and anti-concentration
Alexander E. Litvak, Anna Lytova, Konstantin Tikhomirov, Nicole Tomczak-Jaegermann, Pierre Youssef
Abstract
Let D_{n,d} be the set of all d-regular directed graphs on n vertices. Let G be a graph chosen uniformly at random from D_{n,d} and let M be its adjacency matrix. We show that M is invertible with probability at least 1 − C ln³d/√d for C ≤ d ≤ cn/ln²n, where c, C are positive absolute constants. To this end, we establish a few properties of d-regular directed graphs. One of them, a Littlewood–Offord type anti-concentration property, is of independent interest. Let J be a subset of vertices of G with |J| ≈ n/d. Let δ_i be the indicator of the event that the vertex i is connected to J and define δ = (δ_1, δ_2, ..., δ_n) ∈ {0,1}ⁿ. Then for every v ∈ {0,1}ⁿ the probability that δ = v is exponentially small. This property holds even if a part of the graph is “frozen.”

AMS 2010 Classification:
Keywords:
Adjacency matrices, anti-concentration, invertibility, Littlewood–Offord theory, random digraphs, random graphs, random matrices, regular graphs, singular probability, singularity, sparse matrices
1 Introduction

For 1 ≤ d ≤ n, an undirected (resp., directed) graph G is called d-regular if every vertex has exactly d neighbors (resp., d in-neighbors and d out-neighbors). In this definition we allow graphs to have loops and, for directed graphs, opposite (anti-parallel) edges, but no multiple edges. Thus directed graphs (digraphs) can be viewed as bipartite graphs with both parts of size n. For a digraph G with n vertices, its adjacency matrix (µ_ij)_{i,j≤n} is defined by

µ_ij = 1 if there is an edge from i to j, and µ_ij = 0 otherwise.

For an undirected graph G its adjacency matrix is defined in a similar way (in the latter case the matrix is symmetric). We denote the sets of all undirected (resp., directed) d-regular graphs by G_{n,d} and D_{n,d}, respectively, and the corresponding sets of adjacency matrices by S_{n,d} and M_{n,d}. Clearly S_{n,d} ⊂ M_{n,d}, and M_{n,d} coincides with the set of n × n matrices with 0/1 entries having exactly d ones in every row and every column. By the probability on G_{n,d}, D_{n,d}, S_{n,d}, and M_{n,d} we always mean the normalized counting measure.

Spectral properties of adjacency matrices of random d-regular graphs have attracted considerable attention in recent years. Among others, we refer the reader to [2], [3], [12], [14], [26], and [35] for results dealing with the eigenvalue distribution. At the same time, much less is known about the singular values of these matrices.

The present work is motivated by related general questions on singular probability. One problem was mentioned by Vu in his survey [37, Problem 8.4] (see also the 2014 ICM talks by Frieze and Vu [15, Problem 7], [38, Conjecture 5.8]). It asks whether for 3 ≤ d ≤ n − 3 the probability that a random matrix uniformly distributed on S_{n,d} is singular goes to zero as n grows to infinity. Note that in the case d = 1 the matrix is a permutation matrix, hence non-singular, while in the case d = 2 the conjecture fails (see [37] and, for the directed case, [9]). Note also that M ∈ M_{n,d} is singular if and only if the “complementary” matrix M′ ∈ M_{n,n−d} obtained by interchanging zeros and ones is singular; thus the cases d = d₀ and d = n − d₀ are essentially the same. The corresponding question for non-symmetric adjacency matrices is the following (cf. [9, Conjecture 1.5]): Is it true that for every 3 ≤ d ≤ n − 3,

p_{n,d} := P_{M_{n,d}}({M ∈ M_{n,d} : M is singular}) → 0 as n → ∞?   (∗)

The main difficulty in such singularity questions stems from the restrictions on row- and column-sums, and from possible symmetry constraints for the entries. The question (∗) has recently been studied in [9] by Cook, who obtained the bound p_{n,d} ≤ d^{−c} for a small universal constant c > 0 and for d satisfying ω(ln²n) ≤ d ≤ n − ω(ln²n), where f ≥ ω(a_n) means f/a_n → ∞ as n → ∞.
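Although (∗) is an asymptotic question, the singularity probability can at least be probed numerically for small n and d. The following Python sketch is ours, not part of the paper: it samples matrices from M_{n,d} by superimposing d collision-free random permutation matrices. By the Birkhoff–König decomposition every matrix of M_{n,d} arises this way, but the induced distribution is only approximately uniform, so the output is a rough proxy for p_{n,d}.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_regular_matrix(n, d, max_tries=10_000):
    """Sample an n x n 0/1 matrix with all row and column sums equal to d,
    built as a union of d pairwise collision-free random permutation matrices.
    NOTE: not exactly uniform on M_{n,d}; adequate only as a rough proxy."""
    while True:
        M = np.zeros((n, n), dtype=int)
        placed = 0
        for _ in range(max_tries):
            p = rng.permutation(n)
            if M[np.arange(n), p].max() == 0:  # avoid creating double edges
                M[np.arange(n), p] = 1
                placed += 1
                if placed == d:
                    return M

def singular_fraction(n, d, trials=1000):
    """Fraction of sampled adjacency matrices that are singular."""
    return sum(
        round(np.linalg.det(sample_regular_matrix(n, d))) == 0
        for _ in range(trials)
    ) / trials

for d in (1, 2, 3, 5):
    print(d, singular_fraction(12, d))
# d = 1 gives permutation matrices (never singular), d = 2 shows a
# non-vanishing singular fraction, and the fraction drops as d grows.
```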
The main result of our paper is the following theorem.

Theorem A. There are absolute positive constants c, C such that for C ≤ d ≤ cn/ln²n one has

p_{n,d} ≤ C ln³d/√d.

Thus we prove that p_{n,d} → 0 as d → ∞, which in particular verifies (∗) whenever d grows to infinity with n, without any restrictions on the rate of convergence. (Recall that the proof in [9] requires d ≥ ω(ln²n).) We would also like to note that even for the range ω(ln²n) ≤ d ≤ cn/ln²n, our bound on the probability in Theorem A is better than the one in [9]. Of course, it would be desirable to obtain a bound going to zero with n and not with d, and to treat very sparse matrices. Our proof brings several new ingredients. In particular, we introduce the notion of almost constant vectors and show how to eliminate matrices having almost constant null vectors; we develop a new approximation argument dealing with tails of properly rescaled vectors; we prove an anti-concentration property for graphs, which is of independent interest; and we provide a more delicate version of the so-called “shuffling” technique.

This paper can be naturally split into two distinct parts. In the first one we establish certain properties of random d-regular digraphs. In the second part we use them (or, to be more precise, their “matrix” equivalents) to deal with the singularity of adjacency matrices. However, in the introduction we reverse this order and discuss first the “matrix” part, as it provides a general perspective and motivation for the graph results.

Singularity of random square matrices is a subject with a long history and many results. In [21] (see also [22]) Komlós proved that a random n × n matrix with independent ±1 entries (a Bernoulli matrix) is singular with probability tending to zero as n → ∞. Upper bounds for the singular probability of random Bernoulli matrices were successively improved: first to cⁿ for some c ∈ (0, 1), then to (3/4 + o(1))ⁿ in [34], and to (1/√2 + o(1))ⁿ in [6]. Recall that the conjectured bound is (1/2 + o(1))ⁿ. The corresponding problem for symmetric Bernoulli matrices was considered in [11], [27], [36]. Recently, matrices with independent rows and with row-sums constrained to be equal to zero were studied in [28].

In all these works, a fundamental role is played by what is nowadays called the Littlewood–Offord theory. In its classical form, established by Erdős [13], the Littlewood–Offord inequality states that for every fixed z ∈ R, a fixed vector a = (a_1, a_2, ..., a_n) ∈ Rⁿ with non-zero coordinates, and for independent random signs r_k, k ≤ n, the probability P{∑_{k=1}^n r_k a_k = z} is bounded from above by Cn^{−1/2}.
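A quick numerical illustration (ours, not from the paper): for the all-ones coefficient vector, which is extremal in Erdős' theorem, the largest atom of ∑_k r_k a_k indeed decays like n^{−1/2}.

```python
import numpy as np

rng = np.random.default_rng(1)

def max_atom(a, samples=100_000):
    """Monte Carlo estimate of max_z P{ sum_k r_k a_k = z } for random signs r_k."""
    signs = rng.choice((-1, 1), size=(samples, len(a)))
    sums = signs @ np.asarray(a)
    _, counts = np.unique(sums, return_counts=True)
    return counts.max() / samples

for n in (10, 20, 40, 80):
    print(n, round(max_atom([1] * n), 4), "vs", round(n ** -0.5, 4))
# the empirical maximal atom tracks n^{-1/2} up to a constant factor,
# in line with the Erdos-Littlewood-Offord bound
```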
This combinatorial result has been substantially strengthened and generalized in subsequent years, leading to a much better understanding of the interrelationship between the law of the sum ∑_{k=1}^n r_k a_k and the arithmetic structure of the vector a. For more information and further references, we refer the reader to [32], [33, Section 3], and [29, Section 4]. The use of the Littlewood–Offord theory in the context of random matrices can be illustrated as follows. Given an n × n matrix A with i.i.d. entries, A is non-singular if and only if the inner product of a normal vector to the span of any n − 1 columns of A with the remaining column is non-zero. Thus, knowing the “typical” arithmetic structure of the random normal vectors and conditioning on their realization, one can estimate the probability that A is singular. Moreover, a variant of this approach allows one to obtain sharp quantitative estimates for the smallest singular value of a matrix with independent subgaussian entries [30].

Similarly to the aforementioned works, the Littlewood–Offord inequality plays a crucial role in the proof of Theorem A. Note that if M is a random matrix uniformly distributed on M_{n,d} then every two entries/rows/columns of M are probabilistically dependent; moreover, a realization of the first n − 1 rows completely determines the last row of M. This makes a straightforward application of the Littlewood–Offord theory (as illustrated in the previous paragraph) impossible.

In [9], a sophisticated approach based on the “shuffling” of two rows was developed to deal with that problem. The shuffling consists in a random perturbation of two rows of a fixed matrix M ∈ M_{n,d} in such a way that the sum of the rows remains unchanged. We discuss this procedure in more detail in Section 4.3. It can also be defined in terms of the “switching” discussed below. The proof in [9] can be divided into two steps: at the first step, one proves that the event that a random matrix M does not have any (left or right) null vectors with many (≥ Cnd^{−c}) equal coordinates has probability close to one, provided that d ≥ ω(ln²n). Then one shows that, conditioned on this event, a random matrix M is non-singular with large probability.

In our paper, we expand on some of the techniques developed in [9] by adding new crucial ingredients. In the first step, in Section 4.1, we show that for C ≤ d ≤ cn, with probability going to one with n, a random matrix M does not have any null vectors having at least n(1 − 1/ln d) equal coordinates (we call such vectors almost constant). Note that we rule out a much smaller set of null vectors. This allows us to drop the lower bound on d, but requires a delicate adjustment of the second step. Key elements of the first step consist of a new anti-concentration property of random graphs and their adjacency matrices, as well as the use of a special form of an ε-net built from the “tails” of appropriately rescaled vectors x ∈ Rⁿ. Then, conditioning on the event that M does not have almost constant null vectors, we show in Section 4.3 that a random matrix M is non-singular with high probability. This relies on a somewhat modified and simplified version of the shuffling procedure for the matrix rows. As the shuffling involves supports of only two rows, the probability bound we get at this step converges with d and not with n. We would like to emphasize that this is the only step which does not allow convergence to zero with n.

We now turn our attention to Section 2, which deals with the set D_{n,d} of d-regular digraphs.
Our analysis is based on an operation called “the simple switching,” which is a standard tool for working with regular graphs. As an illustration, let G ∈ D_{n,d} and let i₁ ≠ i₂ and j₁ ≠ j₂ be vertices of G such that (i₁, j₁) and (i₂, j₂) are edges of G while (i₁, j₂) and (i₂, j₁) are not. Then the simple switching consists in replacing the edges (i₁, j₁), (i₂, j₂) with (i₁, j₂) and (i₂, j₁), while leaving all other edges unchanged. Note that the operation does not destroy d-regularity of the graph. The simple switching was introduced (for general graphs) by Senior [31] (in that paper it was called “transfusion”); in the context of d-regular graphs it was first applied by McKay [25]. As in [25], we use this operation to compare cardinalities of certain subsets of D_{n,d}. We note that one could use the configuration model, introduced by Bollobás [4] in the context of random regular graphs, to prove our results for sparse graphs. We prefer to use the switching method in order to have a unified proof for all ranges of d.

As in the matrix counterpart, we work with a random graph G uniformly distributed on D_{n,d}. For a finite set S, we denote by |S| its cardinality. For a positive integer n we denote by [n] the set {1, 2, ..., n}. For every subset S ⊂ [n], let N^in_G(S) be the set of all vertices of G which are in-neighbors of some vertex in S. Further, for every two subsets I, J of [n], we denote by E_G(I, J) the collection of edges of G starting from a vertex in I and ending at a vertex in J. In a simplified form, our first statement about graphs (Theorem 2.2 in Section 2.2) can be formulated as follows:

Let 1 ≤ d ≤ n, ε ∈ (0, 1/4), and k ≥ 2. Assume that ε ≥ max{√(ln d/d), √(C/d)} for a sufficiently large absolute constant C, and that k ≤ cεn/d for a sufficiently small absolute positive constant c. Then

P{∃ S ⊆ [n], |S| = k, such that |N^in_G(S)| ≤ (1 − ε)d|S|} ≤ exp( −(ε²dk/8) ln(ecεn/(kd)) ).

Note that |S| ≤ |N^in_G(S)| ≤ d|S|. Thus, roughly speaking, our result says that “typically,” whenever a set S is not too large, the set of all in-neighbors of S has cardinality close to the maximal possible one. In the case of undirected graphs such results are known (see e.g. [1] and references therein). We note that in fact we prove a more general statement, in which we estimate the probability conditioned on a “partial” realization of a random graph G, when a certain subset of its edges is fixed (see Theorem 2.2).

In our second result, we estimate the probability that E_G(I, J) is empty for large sets I and J (see Theorem 2.6 in Section 2.3):

There exist absolute positive constants c, C such that the following holds. Let C ≤ d ≤ n/32 and Cn ln d/d ≤ ℓ ≤ r ≤ n/4. Then

P{E_G(I, J) = ∅ for some I, J ⊂ [n] with |I| ≥ ℓ, |J| ≥ r} ≤ exp(−crℓd/n).

Note that the first statement can be reformulated in terms of the sets E_G(I, J) (however, the range of cardinalities for I and J will be different compared to the second result). These statements can be seen as manifestations of a general phenomenon: with large probability a random graph G has good regularity properties. Let us also note that analogous statements for the Erdős–Rényi graphs (in this random model an edge between every two vertices is included in or excluded from the graph independently of all other edges) follow from standard Bernstein-type inequalities.
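To make the simple switching described above concrete, here is a small sketch (ours, not from the paper) that performs one random switching on an adjacency matrix and checks that d-regularity is preserved; the starting graph is a fixed circulant, used only as a convenient d-regular example.

```python
import numpy as np

rng = np.random.default_rng(2)

def simple_switching(M):
    """One simple switching, in place: find i1, i2, j1, j2 with
    M[i1, j1] = M[i2, j2] = 1 and M[i1, j2] = M[i2, j1] = 0, then replace
    the edges (i1, j1), (i2, j2) by (i1, j2), (i2, j1)."""
    n = M.shape[0]
    while True:
        i1, i2 = rng.choice(n, size=2, replace=False)
        j1 = rng.choice(np.flatnonzero(M[i1]))
        j2 = rng.choice(np.flatnonzero(M[i2]))
        if j1 != j2 and M[i1, j2] == 0 and M[i2, j1] == 0:
            M[i1, j1] = M[i2, j2] = 0
            M[i1, j2] = M[i2, j1] = 1
            return

n, d = 8, 3
M = sum(np.roll(np.eye(n, dtype=int), s, axis=1) for s in range(1, d + 1))
simple_switching(M)
# the switching preserves all row and column sums, i.e. d-regularity
assert M.sum(axis=0).tolist() == [d] * n and M.sum(axis=1).tolist() == [d] * n
```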
For related results on d-regular random graphs, we refer the reader to [23], where concentration properties of co-degrees were established in the undirected setting, and to [8] for concentration of co-degrees and of the “edge counts” |E_G(I, J)| for digraphs. In the paper [8], which serves as a basis for the main theorem of [9] mentioned above, rather strong concentration properties of |E_G(I, J)| are established; however, the results provided in that paper are valid only for d ≥ ω(ln²n). The proof in [9] is based on the method of exchangeable pairs, introduced by Stein and developed for concentration inequalities by Chatterjee (see the survey [7] for more information and references). In contrast, our proof of the aforementioned statements is simpler, completely self-contained, and works for d ≥ C. As we mentioned above, we use the following Littlewood–Offord type anti-concentration result, which matches the anti-concentration properties of weighted sums of independent random variables or vectors studied in the Littlewood–Offord theory. This result is of independent interest, and we formulate it here as a theorem (see also Theorem 2.15 in Section 2.4). For every J ⊂ [n] and i ∈ [n] we define δ^J_i(G) ∈ {0,1} as the indicator of the event {i ∈ N^in_G(J)} and denote δ^J(G) := (δ^J_1(G), ..., δ^J_n(G)) ∈ {0,1}ⁿ.
Theorem B. There are two positive absolute constants c₀ and c₁ such that the following holds. Let 1 ≤ d ≤ c₀n and let I, J be disjoint subsets of [n] such that |I| ≤ d|J|/32 and 8 ≤ |J| ≤ c₀n/d. Let vectors a_i ∈ {0,1}ⁿ, i ∈ I, be such that the event

E := {N^in_G(i) = supp a_i for all i ∈ I}

has non-zero probability (if I = ∅ we set E = D_{n,d}). Then for every v ∈ {0,1}ⁿ one has

P{δ^J(G) = v | E} ≤ exp( −c₁ d|J| ln(n/(d|J|)) ).

We note that the probability estimate in this statement matches the one for the corresponding quantity δ^J in the Erdős–Rényi model.
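In the unconditional case I = ∅, the statement of Theorem B can be watched numerically: sampling (approximately uniform) d-regular digraphs and recording the pattern δ^J(G), no single v ∈ {0,1}ⁿ captures more than a small fraction of the mass. The sketch below is ours; the sampler is the approximate one used in the earlier snippets.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

def sample_regular_matrix(n, d, max_tries=10_000):
    """Approximate sampler of M_{n,d} (union of d collision-free permutations)."""
    while True:
        M = np.zeros((n, n), dtype=int)
        placed = 0
        for _ in range(max_tries):
            p = rng.permutation(n)
            if M[np.arange(n), p].max() == 0:
                M[np.arange(n), p] = 1
                placed += 1
                if placed == d:
                    return M

def delta_J(M, J):
    """delta^J(G): entry i is 1 iff row i of M has a one in some column of J,
    i.e. iff vertex i is an in-neighbor of J."""
    return tuple(M[:, J].max(axis=1))

n, d, J, trials = 16, 3, [0, 1], 4000
counts = Counter(delta_J(sample_regular_matrix(n, d), J) for _ in range(trials))
print("patterns seen:", len(counts),
      " largest point mass:", max(counts.values()) / trials)
# the mass spreads over many patterns; no v is hit with appreciable probability
```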
The paper is organized as follows. Section 2 deals with all results related to graphs. Section 3 provides links between the graph results of Section 2 and the matrix results used in Section 4. Finally, Section 4 presents the proof of the main theorem, including a number of auxiliary combinatorial lemmas.

In this paper the letters c, C, c₀, C₀, c₁, C₁, ... always denote absolute positive constants (i.e., independent of any parameters), whose precise values may be different from line to line. Main results of this paper were announced in [24].

Acknowledgment. This work was conducted while the second named author was a Research Associate at the University of Alberta, the third named author was a graduate student and held the PIMS Graduate Scholarship, and the last named author was a CNRS/PIMS PDF at the same university. They all would like to thank the Pacific Institute and the University of Alberta for the support. A part of this work was also done when the first four authors took part in activities of the annual program “On Discrete Structures: Analysis and Applications” at the Institute for Mathematics and its Applications (IMA), Minneapolis, MN, USA. These authors would like to thank the IMA for the support and excellent working conditions. All authors would like to thank Michael Krivelevich for many helpful comments on the “graph” part of this paper. We would also like to thank Justin Salez for helpful comments.
2 Expansion and anti-concentration for random digraphs
For a real number x, we denote by ⌊x⌋ the largest integer smaller than or equal to x and by ⌈x⌉ the smallest integer larger than or equal to x. Further, for every a ≥ 1, we denote by [a] the set {i ∈ N : 1 ≤ i ≤ ⌊a⌋}.
Let d ≤ n be positive integers. A d-regular directed graph (or d-regular digraph) on n (labeled) vertices is a graph in which every vertex has exactly d in-neighbors and d out-neighbors. We allow the graphs to have loops and opposite (anti-parallel) edges but do not allow multiple edges. Thus this set coincides with the set of d-regular bipartite graphs with both parts of size n. The set of vertices of such graphs is always identified with [n]. The set of all these graphs is denoted by D_{n,d}. When n and d are clear from the context, we will use the one-letter notation D. Note that the set of adjacency matrices of graphs in D coincides with the set of n × n matrices with 0/1 entries having exactly d ones in every row and every column. By a random graph on D we always mean a graph uniformly distributed on D (that is, with respect to the normalized counting measure).

Let G = ([n], E) be an element of D, where E is the set of its directed edges. Thus (i, j) ∈ E, i, j ≤ n, means that there is an edge going from vertex i to vertex j. We denote the adjacency matrix of G by M = M(G); its rows and columns are denoted by R_i = R_i(M) = R_i(G) and X_i = X_i(M) = X_i(G), i ≤ n, respectively.

Given a graph G ∈ D and a subset S ⊂ [n] of its vertices, let

N^out(S) = N^out_G(S) := {v ≤ n : ∃ i ∈ S with (i, v) ∈ E} = ∪_{i∈S} supp R_i,
N^in(S) = N^in_G(S) := {v ≤ n : ∃ i ∈ S with (v, i) ∈ E} = ∪_{i∈S} supp X_i.

Similarly, we define the out-edges and the in-edges as follows:

E^out_G(S) := {e ∈ E : e = (i, j) for some i ∈ S and j ≤ n},
E^in_G(S) := {e ∈ E : e = (i, j) for some i ≤ n and j ∈ S}.

For one-element subsets of [n] we use the lighter notations N^out_G(i), N^in_G(i), E^out_G(i), E^in_G(i) instead of N^out_G({i}), N^in_G({i}), E^out_G({i}), E^in_G({i}).

Given a graph G = ([n], E), for every I, J ⊂ [n] the set of all edges departing from I and landing in J is denoted by

E_G(I × J) = E_G(I, J) = {e ∈ E : e = (i, j) for some i ∈ I and j ∈ J}.

Further, we let

D⁰(I, J) = {G ∈ D : E_G(I, J) = ∅}.

Note that D⁰(I, J) is the set of all graphs whose adjacency matrices have zero I × J minor, hence the superscript “0”.

Given G ∈ D, for u, v ≤ n the sets of common out-neighbors and common in-neighbors are denoted by

C^out_G(u, v) = {j ≤ n : (u, j), (v, j) ∈ E} = supp R_u ∩ supp R_v,
C^in_G(u, v) = {i ≤ n : (i, u), (i, v) ∈ E} = supp X_u ∩ supp X_v.

For every S ⊂ [n] and F ⊂ [n] × [n], we define

D(S, F) = {G ∈ D : E^in_G(S) = F}.

Informally speaking, D(S, F) is the subset of d-regular graphs for which the in-edges of S are “frozen” and, as a set, coincide with F. Note that a necessary (but not sufficient) condition for D(S, F) to be non-empty is

∀ i ≤ n: |{ℓ ∈ [n] : (i, ℓ) ∈ F}| ≤ d and ∀ j ∈ S: |{ℓ ∈ [n] : (ℓ, j) ∈ F}| = d.

For every ε ∈ (0, 1) we set

D_co(ε) = {G ∈ D : ∀ i ≠ j ≤ n, |C^out_G(i, j)| ≤ εd} = ∩_{i<j≤n} {G ∈ D : |C^out_G(i, j)| ≤ εd}.

We will repeatedly use the following elementary double-counting claim.

Claim 2.1. Let s, t > 0. Let R be a relation between two finite sets A and B such that for every a ∈ A and every b ∈ B one has |R(a)| ≥ s and |R⁻¹(b)| ≤ t. Then s|A| ≤ t|B|.

Proof. Without loss of generality we assume that A = [k] and B = [m] for some positive integers k and m. For i ≤ k and j ≤ m, we set r_ij = 1 if (i, j) ∈ R and r_ij = 0 otherwise. Counting the number of ones in every row and every column of the matrix {r_ij}_{ij}, we obtain

∑_{i=1}^k ∑_{j=1}^m r_ij = ∑_{i=1}^k |R(i)| ≥ sk = s|A| and ∑_{j=1}^m ∑_{i=1}^k r_ij = ∑_{j=1}^m |R⁻¹(j)| ≤ tm = t|B|,

which implies the desired estimate. ✷
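For experimentation, the notation of this subsection translates directly into code; the helpers below (ours, for illustration only, with 0-based vertices) compute N^out, N^in, E_G(I, J) and common out-neighbors from an adjacency matrix.

```python
import numpy as np

def N_out(M, S):
    """N^out_G(S): union of the supports of the rows R_i, i in S."""
    return set(np.flatnonzero(M[list(S), :].max(axis=0)))

def N_in(M, S):
    """N^in_G(S): union of the supports of the columns X_j, j in S."""
    return set(np.flatnonzero(M[:, list(S)].max(axis=1)))

def E(M, I, J):
    """E_G(I, J): edges departing from I and landing in J."""
    return {(i, j) for i in I for j in J if M[i, j]}

def C_out(M, u, v):
    """C^out_G(u, v) = supp R_u intersected with supp R_v."""
    return set(np.flatnonzero(M[u] * M[v]))

# sanity check on a single directed cycle (a 1-regular digraph)
M = np.roll(np.eye(5, dtype=int), 1, axis=1)
assert N_out(M, {0}) == {1} and N_in(M, {1}) == {0}
assert E(M, {0}, {1}) == {(0, 1)} and C_out(M, 0, 1) == set()
```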
2.2 An expansion property of random digraphs

In this section, we establish certain expansion properties of random graphs uniformly distributed on D, which can roughly be described as follows: given a subset S ⊂ [n] of cardinality |S| ≤ cn/d, with high probability the number of in-neighbors of S is of order d|S|. Besides its own interest, this result is used in the proof of the anti-concentration property for graphs, given in Section 2.4. In fact we will need a statement in which we control the number of in-neighbors of a subset of vertices while “freezing” (i.e., conditioning on a realization of) a set of edges inside the graph.

Theorem 2.2. Let 1 ≤ d ≤ n, ε ∈ (0, 1/4), and k ≥ 2. Assume that

ε ≥ max{√(ln d/d), √(C/d)}

for a sufficiently large absolute constant C, and that k ≤ cεn/d for a sufficiently small absolute positive constant c. Let I ⊂ [n] be of cardinality at most n/8. Define

Γ_k = {G ∈ D : ∃ S ⊆ I^c, |S| = k, such that |N^in_G(S)| ≤ (1 − ε)d|S|}

and

Γ = {G ∈ D : ∃ S ⊆ I^c, 2 ≤ |S| ≤ cεn/d, such that |N^in_G(S)| ≤ (1 − ε)d|S|} = ∪_{ℓ=2}^{⌊cεn/d⌋} Γ_ℓ.

Then for every F ⊂ [n] × [n] with D(I, F) ≠ ∅ we have

P(Γ_k | D(I, F)) ≤ exp( −(ε²dk/8) ln(ecεn/(kd)) ).

In particular,

P(Γ | D(I, F)) ≤ exp( −(ε²d/4) ln(ecεn/(2d)) ).

Let us describe the idea of the proof of Theorem 2.2. Suppose we are given a set of vertices S of an appropriate size. Since |E^in_G(S)| = d|S|, we always have |S| ≤ |N^in_G(S)| ≤ d|S|. We want to prove that the number of graphs satisfying |N^in_G(S)| ≤ (1 − ε)d|S| is rather small. In order to estimate the number of in-neighbors of S, our strategy is to build S by adding one vertex at a time and to trace how the number of in-neighbors changes. Namely, if S = {v_i}_{i≤s}, then to build S we start by setting S_1 := {v_1} — a set which we know has exactly d in-neighbors. Now we add the vertex v_2 to S_1 to get S_2 := {v_1, v_2}. We need to trace how the number of in-neighbors of S_2 changed compared to that of S_1. More precisely, we need to count the number of graphs for which the number of in-neighbors has increased by at most (1 − ε/2)d. To this end, we count the number of graphs having the property that the number of common in-neighbors of v_1 and v_2 is at least εd/2. We count such graphs by applying the simple switching. One should be careful here to switch the edges without interfering with the frozen area of the graph. We continue in a similar manner, adding one vertex at a time and controlling the number of common in-neighbors between the added vertex and the existing ones. Now, note that the condition |N^in_G(S)| ≤ (1 − ε)d|S| implies that for a large proportion of the vertices added, the number of common in-neighbors with the existing vertices is at least εd/2. We use this together with the cardinality estimates obtained via the simple switching at each step to get the required result.
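The expansion phenomenon of Theorem 2.2 can be eyeballed numerically (our sketch, reusing the approximate sampler from the introduction): for small sets S, the ratio |N^in_G(S)|/(d|S|) stays close to 1.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)

def sample_regular_matrix(n, d, max_tries=10_000):
    """Approximate sampler of M_{n,d} (union of d collision-free permutations)."""
    while True:
        M = np.zeros((n, n), dtype=int)
        placed = 0
        for _ in range(max_tries):
            p = rng.permutation(n)
            if M[np.arange(n), p].max() == 0:
                M[np.arange(n), p] = 1
                placed += 1
                if placed == d:
                    return M

n, d, k = 60, 4, 3
worst = 1.0
for _ in range(100):
    M = sample_regular_matrix(n, d)
    for S in combinations(range(10), k):  # a few fixed k-sets are enough here
        n_in = len(set(np.flatnonzero(M[:, list(S)].max(axis=1))))
        worst = min(worst, n_in / (d * k))
print("smallest observed |N_in(S)| / (d|S|):", round(worst, 3))
# typically close to 1, i.e. small sets have nearly d|S| in-neighbors
```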
We use the following notation. Given S ⊂ [n] and δ ∈ (0, 1), set Γ(S, ∅) = D and, for a non-empty J ⊂ [n], let

Γ(S, J) = Γ(S, J, δ) = {G ∈ D : ∀ j ∈ J one has |(∪_{i∈S, i<j} supp X_i) ∩ supp X_j| ≥ δd}.

Lemma 2.3. Let δ ∈ (0, 1), 1 ≤ d ≤ n/4, 1 ≤ k ≤ δn/(4ed), and F, H ⊂ [n] × [n]. For every I ⊂ [k + d]^c satisfying |I| ≤ n/8, one has

|Γ([k], k + 1) ∩ D([k], F) ∩ D(I, H)| ≤ γ_k |D([k], F) ∩ D(I, H)|, where γ_k = (4ekd/(δn))^{δd}.

Less formally, the above statement asserts that, considering a subset of D with prescribed (frozen) sets of in-edges for [k] and I, for a vast majority of such graphs the (k + 1)-st vertex has a small number of common in-neighbors with the first k vertices.

Proof. We assume that the intersection D([k], F) ∩ D(I, H) is non-empty. Then, viewing F as a relation, F([n]) = [k] and F⁻¹([k]) = N^in_G([k]) for every G in this intersection (recall the notation for images and preimages of a relation). Without loss of generality, N^in_G([k]) = [n₀]^c for some n₀ ≤ n. Note that k ≤ |N^in_G([k])| ≤ kd, hence n − kd ≤ n₀ ≤ n − k.

For 0 ≤ q ≤ d denote

Q(q) := {G ∈ D([k], F) ∩ D(I, H) : |N^in_G([k]) ∩ N^in_G(k + 1)| = q}

and

Q := Γ([k], k + 1) ∩ D([k], F) ∩ D(I, H) = ∪_{q=⌈δd⌉}^{d} Q(q).

We proceed by comparing the cardinalities of Q(q) and Q(q − 1) for every 1 ≤ q ≤ d. To this end, we define a relation R_q ⊂ Q(q) × Q(q − 1). Let G ∈ Q(q). Then there exist n₀ < i₁ < ... < i_q such that for every ℓ ≤ q we have

i_ℓ ∈ N^in_G([k]) ∩ N^in_G(k + 1).

For every ℓ ≤ q, there are at most d² edges inside E_G([n₀], N^out_G(i_ℓ)). Further, there are (n₀ − (d − q))d edges in E^out_G([n₀] \ N^in_G(k + 1)) and at most d|I| edges in E_G([n₀] \ N^in_G(k + 1), I). Therefore, for every ℓ ≤ q, the cardinality of the set

E_ℓ := E_G([n₀] \ N^in_G(k + 1), I^c \ N^out_G(i_ℓ))

can be estimated as

|E_ℓ| ≥ (n₀ − (d − q) − |I|)d − d² ≥ (7n/8 − kd − d)d − d² ≥ nd/2

(we used |I| ≤ n/8 and n₀ ≥ n − kd together with the restrictions on k and d). Now we turn to constructing the relation R_q. We let a pair (G, G′) belong to R_q for some G′ ∈ Q(q − 1) if G′ can be obtained from G in the following way. First we choose ℓ ≤ q and an edge (i, j) ∈ E_ℓ. We destroy the edge (i_ℓ, k + 1) to form the edge (i, k + 1), then destroy the edge (i, j) to form the edge (i_ℓ, j) (in other words, we perform the simple switching on the vertices i, i_ℓ, j, k + 1). Note that the conditions i ∉ N^in_G(k + 1) and j ∉ N^out_G(i_ℓ), which are implied by the definition of E_ℓ, guarantee that the simple switching does not create multiple edges, and we obtain a valid graph in Q(q − 1). The construction of R_q implies that for every G ∈ Q(q) one has

|R_q(G)| ≥ ∑_{ℓ=1}^q |E_ℓ| ≥ qnd/2.  (1)

Now we estimate the cardinalities of preimages. Let G′ ∈ R_q(Q(q)). In order to reconstruct a graph G for which (G, G′) ∈ R_q, we need to perform a simple switching which destroys an edge from E_{G′}([n₀], k + 1) and adds an edge to E_{G′}([n₀]^c, k + 1). There are at most d − q + 1 choices of an edge to destroy in E_{G′}([n₀], k + 1), and at most n − n₀ ≤ kd possibilities to create an edge connecting [n₀]^c with the (k + 1)-st vertex. Assume that we destroyed an edge (v, k + 1) and added an edge (u, k + 1).
The second part of the simple switching is to destroy an excessive out-edge of u and to create a corresponding edge (with the same end-point) for v. It is easy to see that we have at most d possibilities for the second part of the switching. Therefore,

|R_q⁻¹(G′)| ≤ (d − q + 1) · kd · d ≤ kd³.

Using this bound, (1), and Claim 2.1, we obtain

|Q(q)| ≤ (2kd²/(qn)) |Q(q − 1)|

and, applying the estimate successively,

|Q(q)| ≤ (2kd²/n)^q (1/q!) |Q(0)|.

Since q! ≥ (q/e)^q and 2ekd/(δn) ≤ 1/2, this implies

|Q| = ∑_{q=⌈δd⌉}^d |Q(q)| ≤ ∑_{q=⌈δd⌉}^d (2ekd/(δn))^q |Q(0)| ≤ 2(2ekd/(δn))^{δd} |Q(0)| ≤ γ_k |Q(0)|.

Using that Q(0) ⊂ D([k], F) ∩ D(I, H), we obtain the desired result. ✷

Now, we iterate the last lemma to obtain the following statement.

Corollary 2.4. Let δ, n, d, k and γ_k be as in Lemma 2.3 and let ℓ ≤ k. Further, let I ⊂ [n] satisfy |I| ≤ n/8 and let H ⊂ [n] × [n]. Then for all subsets J ⊂ S ⊂ I^c such that |S| = k and |J| = ℓ, one has

|Γ(S, J) ∩ D(I, H)| ≤ γ_k^ℓ |D(I, H)|.

Proof. Without loss of generality we assume that the intersection Γ(S, J) ∩ D(I, H) is non-empty, that S = [k], and that I ⊂ [k + d]^c. Write J = {j_1, ..., j_ℓ} for some j_1 < ... < j_ℓ. For 1 ≤ s ≤ ℓ denote J_s = {j_1, ..., j_s}, J_0 = ∅, and let k_s = j_s − 1. Note that for every 1 ≤ s ≤ ℓ we have Γ(S, J_s) = Γ([k_s], J_s). Note also that

Γ(S, J_s) = Γ([k_s], k_s + 1) ∩ Γ(S, J_{s−1}).  (2)

Clearly,

|Γ(S, J) ∩ D(I, H)| = |D(I, H)| ∏_{s=1}^ℓ ( |Γ(S, J_s) ∩ D(I, H)| / |Γ(S, J_{s−1}) ∩ D(I, H)| ).  (3)

For 1 ≤ s ≤ ℓ define

F_s = {F ⊂ [n] × [n] : D([k_s], F) ∩ D(I, H) ⊂ Γ(S, J_{s−1})}.

Then by (2) we have

Γ(S, J_s) ∩ D(I, H) = ⊔_{F∈F_s} Γ([k_s], k_s + 1) ∩ D([k_s], F) ∩ D(I, H).

Therefore, by Lemma 2.3,

|Γ(S, J_s) ∩ D(I, H)| = ∑_{F∈F_s} |Γ([k_s], k_s + 1) ∩ D([k_s], F) ∩ D(I, H)| ≤ γ_{k_s} ∑_{F∈F_s} |D([k_s], F) ∩ D(I, H)| ≤ γ_k |Γ(S, J_{s−1}) ∩ D(I, H)|,

where the last inequality follows from the definition of F_s and k_s ≤ k. This and (3) imply the result. ✷

We are now ready to prove Theorem 2.2. In the proof we use Corollary 2.4, together with the easy observation that the condition |N^in_G(S)| ≤ (1 − ε)d|S| for an (ordered) subset S of vertices implies that proportionally many vertices in S have at least εd/2 common in-neighbors with the preceding vertices.

Proof of Theorem 2.2. Let G ∈ Γ_k and let S ⊆ I^c, |S| = k, be as in the definition of Γ_k. For j ∈ S consider

A_j = (∪_{i∈S, i<j} supp X_i) ∩ supp X_j.

Adding the vertices of S one by one, each j ∈ S contributes at least d − |A_j| new in-neighbors, so |N^in_G(S)| ≥ dk − ∑_{j∈S} |A_j|. Hence the condition |N^in_G(S)| ≤ (1 − ε)dk implies ∑_{j∈S} |A_j| ≥ εdk, and since |A_j| ≤ d for every j, more than εk/2 vertices j ∈ S satisfy |A_j| ≥ εd/2. In other words, G ∈ Γ(S, J, ε/2) for some J ⊂ S with |J| = ℓ := ⌈εk/2⌉. We assume that εk ≥ 2 (the case εk < 2, in which ℓ = 1, is treated similarly). Counting all possible choices of S and J and applying Corollary 2.4 with δ = ε/2 (so that γ_k = (8ekd/(εn))^{εd/2}), we get

P(Γ_k | D(I, F)) ≤ (en/k)^k (ek/ℓ)^ℓ (8ekd/(εn))^{εdℓ/2}.

Using ε ≥ max{√(ln d/d), √(C/d)}, by direct calculation we observe

(en/k)^k (ek/ℓ)^ℓ (8ekd/(εn))^{εdℓ/2} ≤ (en/k)^k (2e/ε)^{εk/2} (8ekd/(εn))^{ε²dk/4} ≤ (C₁kd/(εn))^{ε²dk/8}

for a sufficiently large absolute constant C₁ > 0. Taking c ≤ 1/(3eC₁), we obtain the desired estimate for Γ_k. The second assertion of the theorem regarding Γ follows immediately. ✷
As we have already noted, Theorem 2.2 essentially postulates that a random d-regular digraph typically has good expansion properties, in the sense that every sufficiently small subset S of its vertices has almost d|S| in-neighbors and d|S| out-neighbors. In the undirected setting, expansion properties of graphs are a subject of very intense research (see, in particular, [17] and references therein). To conclude this subsection, we recall some of the known expansion properties of undirected random graphs and compare them with the main result of this part of our paper.

Let G = (V, E) be an undirected graph on n vertices. Given a subset U ⊂ V, by ∂_V U we denote the set of all vertices adjacent to the set U but not in U, i.e.

∂_V U := {i ∉ U : ∃ j ∈ U with (i, j) ∈ E} = N^in_G(U) \ U.

Similarly, let ∂_E U be the set of all edges of G with exactly one endpoint in U. For every λ ∈ (0, 1] define the λ-vertex isoperimetric number

i_{λ,V}(G) := min_{|U|≤λn} |∂_V U| / |U|,

and, for every λ ∈ (0, 1/2], the λ-edge isoperimetric number

i_{λ,E}(G) := min_{|U|≤λn} |∂_E U| / |U|.

For λ = 1/2, the above quantities are simply called the vertex and the edge isoperimetric numbers, denoted by i_V(G) and i_E(G). Since |∂_V U| ≤ |∂_E U| ≤ d|∂_V U|, for every λ ∈ (0, 1/2] we have

i_{λ,V}(G) ≤ i_{λ,E}(G) ≤ d · i_{λ,V}(G).  (4)

Now, let G be a d-regular graph uniformly distributed on the set G_{n,d}. In [5] it was shown that for large enough fixed d,

i_E(G) ≥ d/2 − √(d ln 2),  (5)

with probability going to one with n. This result was generalized in [20], where it was shown that

i_{λ,E}(G) ≥ d(1 − λ + o(1))

with probability going to one with n, where o(1) depends on d and can be made arbitrarily small by increasing d. Note that relation (4) together with the results from [5, 20] immediately implies

i_{λ,V}(G) ≥ 1 − λ + o(1)

(where the bound should be interpreted in the same way as before); however, this bound is far from optimal. An estimate for the second eigenvalue of G proved in [14] implies that for a fixed d, with large probability (going to one with n),

i_V(G) ≥ 1 − C/d + O(1/d²).

Moreover, for every d and δ > 0 there is λ = λ(d, δ) > 0 such that i_{λ,V} (corresponding to expansion of small subsets of V) can be estimated as

i_{λ,V}(G) ≥ d − 2 − δ

(see [17, Theorem 4.16]).

Our main result of this subsection can be interpreted as an expansion property of regular digraphs for small vertex subsets. We define the vertex isoperimetric number i_{λ,V} for digraphs by the same formula as for undirected graphs. Theorem 2.2 has the following consequence, which, in particular, provides quantitative estimates of i_{λ,V} for d growing together with n to infinity.

Corollary 2.5. Let 1 ≤ d ≤ n and ε ∈ (0, 1/4). Assume that ε ≥ max{√(ln d/d), √(C/d)} for a sufficiently large absolute constant C, that d ≤ cεn, and set λ(ε) := cε/d. Further, let G be uniformly distributed on D. Then

i_{λ(ε),V}(G) ≥ (1 − ε)d − 1

with probability at least 1 − exp( −(ε²d/4) ln(ecεn/(2d)) ).
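For very small n, the digraph vertex isoperimetric number can be computed exactly by brute force; the sketch below (ours) uses the boundary ∂_V U = N^in_G(U) \ U, matching the convention above, on a fixed circulant 3-regular digraph.

```python
import numpy as np
from itertools import combinations

def i_lambda_V(M, lam):
    """Exact lambda-vertex isoperimetric number of a digraph, with boundary
    taken to be N^in(U) \\ U as in the text (brute force, tiny n only)."""
    n = M.shape[0]
    best = float("inf")
    for size in range(1, int(lam * n) + 1):
        for U in combinations(range(n), size):
            boundary = set(np.flatnonzero(M[:, list(U)].max(axis=1))) - set(U)
            best = min(best, len(boundary) / size)
    return best

n, d = 12, 3
M = sum(np.roll(np.eye(n, dtype=int), s, axis=1) for s in range(1, d + 1))
print(i_lambda_V(M, lam=0.25))
# a circulant is a poor expander, so the value is near 1; Corollary 2.5 says
# typical random d-regular digraphs achieve nearly (1 - eps) d - 1 instead
```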
In this part, we consider the following problem. Let G be uniformly distributed on D and let I and J be two (large enough) subsets of [n]. We want to estimate the probability that G has no edges connecting a vertex from I to a vertex from J. The main result of the subsection is the following theorem.

Theorem 2.6. There exist absolute positive constants c and C such that the following holds. Let C ≤ d ≤ n/32 and let natural numbers ℓ and r satisfy

n/4 ≥ r ≥ ℓ ≥ Cn ln(en/r)/d.

Then

P{∪ D⁰(I, J)} ≤ exp(−crℓd/n),

where the union is taken over all I, J ⊂ [n] with |I| ≥ ℓ and |J| ≥ r.

Remark 2.7. Obviously, the roles of ℓ and r in this theorem are interchangeable, and the assumptions on ℓ and r imply that ℓ ≥ Cn/d and r ≥ C₁n ln d/d.

Remark 2.8. We note that adding the assumption ℓ ≥ d in this theorem would allow us to simplify the proof (we would not need the quite technical Lemma 2.12 below).

Remark 2.9. The statement of the theorem can be related to known results on the independence number of random undirected graphs. Recall that the independence number α(G) of a graph G is the cardinality of the largest subset of its vertices such that no two vertices of the subset are adjacent. Suppose now that G is uniformly distributed on G_{n,d}. For d → ∞ with d ≤ n^θ for some fixed θ < 1, it was shown in [16] and [10] that, as n goes to infinity, the ratio α(G)/(2nd⁻¹ ln d) converges to 1 in probability. Moreover, in [23] it was verified that in the range n^θ ≤ d ≤ 0.9n (for a sufficiently large θ < 1) a typical value of α(G) is 2 ln d / ln(n/(n − d)), which is equivalent to 2n ln d/d when d/n is small. Taking I = J in Theorem 2.6, we observe a bound of the same order for random digraphs, which can be interpreted as a large deviation estimate for the independence number as follows.

Corollary 2.10. There exist absolute positive constants c, C such that for every C ≤ d ≤ n/32 and a random digraph G uniformly distributed on D one has

P{α(G) > C n ln d/d} ≤ exp(−cn ln²d/d).

We first give an idea of the proof of Theorem 2.6. Fix two sets of vertices I and J of sizes ℓ and r. Our strategy is to start with two small subsets of I, J and to arrive at I, J by adding one vertex at a time. Suppose that I₀ ⊂ I and J₀ ⊂ J and that S is a subset of graphs from D with no edges departing from I₀ and landing in J₀. We add a vertex from J \ J₀ to J₀ to form a set J₁ and check whether the property of having no edges connecting I₀ to J₁ is preserved, using the simple switching. More precisely, conditioning on the set of graphs S, we estimate the proportion of graphs in S such that there are no edges departing from I₀ to the vertex added. We perform an analogous procedure adding a vertex to I₀, and continue until the whole sets I and J are reconstructed.

Note that a similar argument can be applied in the undirected setting to estimate the probability of large deviations for the independence number (the sets I and J shall be equal in this situation). We omit the proof in the undirected case as it is a simple adaptation of the argument of Theorem 2.6 and is not of interest in the present paper.

We start with a lemma which can be described as follows: given two sets of vertices [p] and [k], among graphs having no edges departing from [p] to [k], we count how many have no edges departing from [p] to the vertex k + 1. The proof of Theorem 2.6 will then follow by iterating this lemma.
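Before turning to the lemmas, a rough empirical companion to Corollary 2.10 (ours, not from the paper): compute α(G) by brute force for a tiny n and compare with the n ln d/d scale. The corollary's constants are unspecified, so this only checks the order of magnitude.

```python
import numpy as np
from itertools import combinations
from math import log

rng = np.random.default_rng(6)

def sample_regular_matrix(n, d, max_tries=10_000):
    """Approximate sampler of M_{n,d} (union of d collision-free permutations)."""
    while True:
        M = np.zeros((n, n), dtype=int)
        placed = 0
        for _ in range(max_tries):
            p = rng.permutation(n)
            if M[np.arange(n), p].max() == 0:
                M[np.arange(n), p] = 1
                placed += 1
                if placed == d:
                    return M

def alpha(M):
    """Independence number: largest S with E_G(S, S) empty (loops ignored)."""
    n = M.shape[0]
    for size in range(n, 0, -1):
        for S in combinations(range(n), size):
            sub = M[np.ix_(S, S)].copy()
            np.fill_diagonal(sub, 0)
            if sub.max() == 0:
                return size
    return 0

n, d = 14, 3
vals = [alpha(sample_regular_matrix(n, d)) for _ in range(10)]
print("alpha values:", vals, " n*ln(d)/d =", round(n * log(d) / d, 1))
```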
Lemma 2.11. Let 1 ≤ d ≤ n/32 and e²n/d ≤ p, k ≤ n/4. Then

max{ |D⁰([p], [k + 1])|, |D⁰([p + 1], [k])| } ≤ exp(−pd/(4e²n)) |D⁰([p], [k])|.

To prove this lemma we need the following rather technical statement, which shows that, among the graphs under consideration, for most graphs every two vertices have a relatively small number of common out-neighbors. For the reader's convenience we postpone its proof to the end of this section.

Lemma 2.12. Let ε ∈ (0, 1), 0 ≤ k ≤ n/4, 0 ≤ p ≤ n, and d ≤ εn/16. Then

|D⁰([p], [k]) \ D_co(ε)| ≤ n² (2ed/(εn))^{εd} |D⁰([p], [k])|,

where D⁰([p], [k]) = D if p = 0 or k = 0.

Proof of Lemma 2.11. We prove the bound for D⁰([p], [k + 1]); the other bound is obtained by passing to the transpose graph.

Fix q := ⌈pd/(2e²n)⌉. Denote

Q := D⁰([p], [k + 1]) ∩ D_co(1/2) and Q(q) := {G ∈ D⁰([p], [k]) : |E_G([p], k + 1)| = q}.

To estimate cardinalities we construct a relation R between Q and Q(q). We say that (G, G′) ∈ R for some G ∈ Q and G′ ∈ Q(q) if G′ can be obtained from G using simple switchings as follows. First choose q vertices 1 ≤ v_1 < v_2 < ... < v_q ≤ p. There are (p choose q) such choices. Then choose q edges in E_G([p]^c, k + 1), say (w_i, k + 1), i ≤ q, with p < w_1 < w_2 < ... < w_q ≤ n. There are (d choose q) such choices. Finally, for every i ≤ q choose

j(i) ∈ N^out_G(v_i) \ N^out_G(w_i).

Since G ∈ D_co(1/2), for every i ≤ q there are at least d/2 choices of j(i). For every i ≤ q we destroy the edges (w_i, k + 1), (v_i, j(i)) and create the edges (v_i, k + 1), (w_i, j(i)). We have

|R(G)| ≥ (p choose q)(d choose q)(d/2)^q ≥ (pd²/(2q²))^q.  (6)

Now we estimate the cardinalities of preimages. Let G′ ∈ R(Q); we bound |R⁻¹(G′)| from above. To reconstruct a possible G ∈ Q with (G, G′) ∈ R, we perform simple switchings as follows. Write E_{G′}([p], k + 1) as (v_1, k + 1), ..., (v_q, k + 1) with 1 ≤ v_1 < ... < v_q ≤ p. Choose q vertices p < w_1 < ... < w_q ≤ n such that

w_i ∈ [p]^c \ N^in_{G′}(k + 1)

for all i ≤ q. There are at most

( (n − p − (d − q)) choose q ) ≤ (en/q)^q

such choices.
(7)Setting D ([0] , [0]) = D and using D ([ k ] , [ k ]) ⊃ D ([ k + 1] , [ k + 1]), we get (cid:12)(cid:12) D ([ ℓ ] , [ r ]) (cid:12)(cid:12)(cid:12)(cid:12) D (cid:12)(cid:12) = ℓ − Y k =0 (cid:12)(cid:12) D ([ k + 1] , [ k + 1]) (cid:12)(cid:12)(cid:12)(cid:12) D ([ k ] , [ k ]) (cid:12)(cid:12) r − Y k = ℓ (cid:12)(cid:12) D ([ ℓ ] , [ k + 1]) (cid:12)(cid:12)(cid:12)(cid:12) D ([ ℓ ] , [ k ]) (cid:12)(cid:12) ≤ ℓ − Y k = ⌈ ℓ/ ⌉ (cid:12)(cid:12) D ([ k + 1] , [ k + 1]) (cid:12)(cid:12)(cid:12)(cid:12) D ([ k ] , [ k ]) (cid:12)(cid:12) r − Y k = ℓ (cid:12)(cid:12) D ([ ℓ ] , [ k + 1]) (cid:12)(cid:12)(cid:12)(cid:12) D ([ ℓ ] , [ k ]) (cid:12)(cid:12) . (8)Further, we write (cid:12)(cid:12) D ([ k + 1] , [ k + 1]) (cid:12)(cid:12)(cid:12)(cid:12) D ([ k ] , [ k ]) (cid:12)(cid:12) = (cid:12)(cid:12) D ([ k + 1] , [ k + 1]) (cid:12)(cid:12)(cid:12)(cid:12) D ([ k ] , [ k + 1]) (cid:12)(cid:12) · (cid:12)(cid:12) D ([ k ] , [ k + 1]) (cid:12)(cid:12)(cid:12)(cid:12) D ([ k ] , [ k ]) (cid:12)(cid:12) , ⌈ ℓ/ ⌉ ≤ k ≤ ℓ − (cid:12)(cid:12) D ([ k + 1] , [ k + 1]) (cid:12)(cid:12)(cid:12)(cid:12) D ([ k ] , [ k ]) (cid:12)(cid:12) ≤ exp (cid:18) − kd e n (cid:19) , and for every ℓ ≤ k ≤ r − (cid:12)(cid:12) D ([ ℓ ] , [ k + 1]) (cid:12)(cid:12)(cid:12)(cid:12) D ([ ℓ ] , [ k ]) (cid:12)(cid:12) ≤ exp (cid:18) − ℓd e n (cid:19) . Thus (8) implies (cid:12)(cid:12) D ([ ℓ ] , [ r ]) (cid:12)(cid:12)(cid:12)(cid:12) D (cid:12)(cid:12) ≤ exp (cid:18) − ℓrd e n (cid:19) . Combining this bound and (7) and using that ℓ ≥ Cn ln( en/r ) /d we complete the proof. ✷ Proof of Lemma 2.12. Clearly, (cid:12)(cid:12) D ([ p ] , [ k ]) \ D co ( ε ) (cid:12)(cid:12) ≤ X i 1) in the following way.Let G ∈ Q ( q ). Then there exist j < ... < j q such that for every ℓ ≤ qj ℓ ∈ N outG ( i ) ∩ N outG ( j ) . Note that for every ℓ ≤ q , there are d edges inside E outG (cid:0) N inG ( j ℓ ) (cid:1) . Also, there are at least( n − k − (2 d − q )) d edges in E inG (cid:0) [ k ] c \ N outG ( { i, j } ) (cid:1) . Therefore, for ℓ ≤ q , the set E ℓ := E G (cid:0) [ n ] \ N inG ( j ℓ ) , [ k ] c \ N outG ( { i, j } ) (cid:1) is of cardinality at least | E ℓ | ≥ ( n − k − (2 d − q )) d − d ≥ nd/ . We say that ( G, G ′ ) ∈ R q for some G ′ ∈ Q ( q − 1) if G ′ can be obtained from G in thefollowing way. First we choose ℓ ≤ q and an edge ( u, v ) ∈ E ℓ . Note v ∈ [ k ] c and u = i . Since v 6∈ N outG ( j ) then we can destroy the edge ( j, j ℓ ) and create the edge ( j, v ). Since u 6∈ N inG ( j ℓ )19hen we can destroy the edge ( u, v ) and create the edge ( u, j ℓ ). Thus, we obtain G ′ by thesimple switching on vertices u, v, j, j ℓ . It is not difficult to see that we have not created anyedges between [ p ] and [ k ], hence G ′ indeed belongs to Q ( q − G ∈ Q ( q ), | R q ( G ) | ≥ qnd . (9)Now we estimate the cardinalities of preimages. Let G ′ ∈ R q (cid:0) Q ( q ) (cid:1) . In order to recon-struct a possible G for which ( G, G ′ ) ∈ R q , we need to perform the simple switching whichremoves an edge ( j, v ) with v 6∈ N outG ′ ( i ) and recreates an edge ( j, w ) for some w ∈ N outG ′ ( i ) \ N outG ′ ( j ) . There are at most d − q + 1 choices for such v and at most d − q + 1 choices for such w . Forthe second part of the switching, we have at most d possible choices. Therefore, (cid:12)(cid:12) R − q ( G ′ ) (cid:12)(cid:12) ≤ d ( d − q + 1) ≤ d . 
Using this bound, (9), and Claim 2.1, we obtain

|Q(q)| ≤ (2d²/(qn)) |Q(q − 1)|

and, applying this successively,

|Q(q)| ≤ (2d²/n)^q (1/q!) |Q(0)|.

Since q! ≥ (q/e)^q and 2ed/(εn) ≤ 1/2, this implies

|Q| = ∑_{q=⌊εd⌋+1}^d |Q(q)| ≤ ∑_{q=⌊εd⌋+1}^d (2ed/(εn))^q |Q(0)| ≤ 2(2ed/(εn))^{εd} |Q(0)|.

Using that Q(0) ⊂ D⁰([p], [k]) and that there are n(n − 1)/2 pairs i < j, we obtain the desired result. ✷

For every G ∈ D, J ⊂ [n] and i ∈ [n], we define δ^J_i ∈ {0, 1} by

δ^J_i = δ^J_i(G) := 1 if i ∈ N^in_G(J), and 0 otherwise.

Denote δ^J := (δ^J_1, ..., δ^J_n) ∈ {0,1}ⁿ. The vector δ^J can be regarded as an indicator of the vertices that are connected to J, without specifying how many edges connect a vertex to J.

Taking v ∈ {0,1}ⁿ and conditioning on the realization δ^J = v, we obtain a class of graphs with a particular arrangement of the edges. Namely, if a vertex i of a graph in the class is not connected to J, then all graphs in the class have the same property. In this section we estimate the cardinalities of such classes generated by vertices of the cube, under the additional assumption that a part of the graph is “frozen.” We show that if the size of the set J is at most cn/d, then a large proportion of such classes are “approximately” of the same size. In other words, we prove that the distribution of δ^J is similar to that of a random vector uniformly distributed on the discrete cube {0,1}ⁿ, in the sense that for each fixed v ∈ {0,1}ⁿ the probability that δ^J = v is exponentially small. This makes a link to the anti-concentration results in the Littlewood–Offord theory. We start with a simplified version of this result, when there is no “frozen” part. In this case it is a rather straightforward consequence of Theorem 2.2.

Proposition 2.13. Let 1 ≤ d ≤ n and J ⊂ [n]. Let v ∈ {0,1}ⁿ and m := |supp v|. Then

P{δ^J = v} ≤ (n choose m)⁻¹ ≤ exp(−m ln(n/m)).

Moreover, if |J| ≤ cn/d, then

P{δ^J = v} ≤ exp( −c′d|J| ln(cn/(d|J|)) ),

where c′ is an absolute positive constant.

Remark 2.14. Note that max{d, |J|} ≤ |supp δ^J| ≤ d|J|, therefore P{δ^J = v} = 0 unless max{d, |J|} ≤ m ≤ d|J|.

Proof. Without loss of generality we assume that max{d, |J|} ≤ m ≤ d|J|. Consider the following subset of the discrete cube:

T = {w ∈ {0,1}ⁿ : |supp w| = m}.

Clearly, every w ∈ T can be obtained by a permutation of the coordinates of v. Since the distribution of a random graph is invariant under permutations of the vertices, P{δ^J = v} = P{δ^J = w} for every w ∈ T. Therefore,

P{δ^J = v} ≤ |T|⁻¹ = (n choose m)⁻¹ ≤ exp(−m ln(n/m)),

which proves the first bound, and the “moreover” part in the case m ≥ d|J|/2. Assume now that |J| ≤ cn/d and m ≤ d|J|/2. Applying Theorem 2.2 with S = J, I = ∅, and ε = 1/2, we observe

P{|supp δ^J| ≤ d|J|/2} ≤ exp( −c′d|J| ln(cn/(d|J|)) ),

and since the event {δ^J = v} with m ≤ d|J|/2 is contained in the event on the left-hand side, the proof is complete. ✷

Now we turn to the main theorem of this section, which will play a key role in the “matrix” part of our paper. We obtain an anti-concentration property for the vector δ^J even under the assumption that a part of the edges is “frozen.” This requires a more delicate argument.
Theorem 2.15. There exist two absolute positive constants c and c̃ such that the following holds. Let 1 ≤ d ≤ cn and let I, J be disjoint subsets of [n] such that

|I| ≤ d|J|/32 and 8 ≤ |J| ≤ cn/d.

Let F ⊂ [n] × [n] be such that D(I, F) ≠ ∅ and let v ∈ {0,1}ⁿ. Then

P{δ^J = v | D(I, F)} ≤ exp( −c̃ d|J| ln(n/(d|J|)) ).

To prove this theorem we first estimate the size of the class of graphs given by a realization of a subset of coordinates of δ^J. More precisely, restricted to a subset of graphs compatible with a realization of the first i − 1 coordinates of δ^J, we count the number of graphs for which the vertex i is connected to J. In other words, conditioning on the realization of the first i − 1 coordinates of δ^J, we estimate the probability that δ^J_i = 1. In Lemma 2.16 below we show that this probability is of order d|J|/n for a wide range of i's. In a sense, this shows that the sets of out-edges restricted to J for different vertices behave as if they were independent. Indeed, in the Erdős–Rényi model, where the edges are drawn independently with probability d/n each, the probability that a vertex i is connected to J is of order d|J|/n.

We need the following notation. For ε ∈ (0, 1) and J ⊂ [n] let

Λ(ε, J) = {G ∈ D : |N^in_G(J)| ≥ (1 − ε)d|J|}.

Lemma 2.16. Let 1 ≤ d ≤ n/8. Let F ⊂ [n] × [n] and let I, J be disjoint subsets of [n] satisfying

|I| ≤ d|J|/32 and 8 ≤ |J| ≤ n/(16d).  (10)

Then there exists a permutation σ ∈ Π_n such that for every 2|I| ≤ i₁ < i₂ < ... < i_{⌈d|J|/16⌉}, every s ≤ d|J|/16, and every collection H of subsets of σ([2|I|] ∪ {i₁, ..., i_{s−1}}) × I^c for which

Γ̃ := {G ∈ D(I, F) ∩ Λ(1/2, J) : E_G(σ([2|I|] ∪ {i₁, ..., i_{s−1}}), I^c) ∈ H} ≠ ∅,

one has

d|J|/(9n) ≤ P{δ^J_{σ(i_s)} = 1 | Γ̃} ≤ 2d|J|/n.

As the proof of the lemma is rather technical, we postpone it to the end of this section.

Proof of Theorem 2.15. Fix F ⊂ [n] × [n] with D(I, F) ≠ ∅ and v ∈ {0,1}ⁿ. Let σ be a permutation given by Lemma 2.16. Denote

B := D(I, F) ∩ Λ(1/2, J).

Since J ⊂ I^c, applying Theorem 2.2 with ε = 1/4 and k = |J|, we get that for an appropriate constant c̃,

P{Λ(1/2, J) | D(I, F)} ≥ 1 − exp( −c̃ d|J| ln(n/(d|J|)) ),

which in particular implies that B is non-empty. Using this we have

P{δ^J = v | D(I, F)} ≤ P{δ^J = v | D(I, F) ∩ Λ(1/2, J)} + P{Λ^c(1/2, J) | D(I, F)}
≤ P{δ^J = v | B} + exp( −c̃ d|J| ln(n/(d|J|)) ).

Therefore, it is enough to estimate the first term in the previous inequality. Note that if |supp v| < d|J|/2 then

P{δ^J = v | D(I, F) ∩ Λ(1/2, J)} = 0.

Assume that |supp v| ≥ d|J|/2 and let m = ⌈d|J|/16⌉. Since 2|I| + m ≤ |supp v|, there exist 2|I| ≤ i₁ < i₂ < ... < i_m such that for every s ≤ m one has

v_{σ(i_s)} = 1.

Let Q₁ = [2|I|]. For every 2 ≤ s ≤ m + 1, define Q_s := Q₁ ∪ {i₁, ..., i_{s−1}} and

H_s := {H ⊂ σ(Q_s) × I^c : ∀ ℓ ∈ Q_s, “v_{σ(ℓ)} = 0” ⇔ “∀ j ∈ J, (σ(ℓ), j) ∉ H”}.

In words, H_s is the collection of all possible realizations of configurations of edges connecting σ(Q_s) to I^c such that σ(ℓ) is not connected to J ⊂ I^c if and only if v_{σ(ℓ)} = 0 (ℓ ∈ Q_s). Note that

A_s := {G ∈ D : ∀ ℓ ∈ Q_s, δ^J_{σ(ℓ)} = v_{σ(ℓ)}} = {G ∈ D : E_G(σ(Q_s), I^c) ∈ H_s}.
Denote B_s := {G ∈ D : δ^J_{σ(i_s)} = 1}. Since v_{σ(i_s)} = 1 for every s ≤ m, we have A_{s+1} = B_s ∩ A_s and

P{A_{s+1} | B} = P{B_s ∩ A_s | B} = P{B_s | B ∩ A_s} P{A_s | B}.

Therefore,

P{δ^J = v | B} ≤ P{A_{m+1} | B} ≤ ∏_{s=1}^m P{B_s | B ∩ A_s}.

By the assumptions of the theorem and Lemma 2.16, for every s ≤ m we have

P{B_s | B ∩ A_s} ≤ 2d|J|/n,

which implies

P{δ^J = v | B} ≤ (2d|J|/n)^m ≤ exp( −(d|J|/16) ln(n/(2d|J|)) ).

This completes the proof. ✷

It remains to prove Lemma 2.16. To get the lower bound, we apply the simple switching to graphs whose i-th vertex is not connected to J and transform them into graphs with the i-th vertex connected to J. To get the upper bound, we do the opposite trick to transform graphs with only one edge relating the vertex i to J into graphs with no connections from i to J. Then we show that if i is connected to J, it is more likely that the number of the corresponding out-edges is small. This is very natural if we keep in mind the result proven in Theorem 2.2. Indeed, if the vertices of a graph had a large number of out-edges connecting them to J, then the number of in-neighbors of J would be small, while Theorem 2.2 states that N^in_G(J) is rather large.

Proof of Lemma 2.16. Let σ be a permutation such that the sequence {|N^out_G(σ(ℓ)) ∩ I|}_{ℓ=1}^n is non-increasing. Note that σ depends only on F when G ∈ D(I, F).

First we note that for every G ∈ D(I, F),

∀ i ≥ 2|I|: |N^out_G(σ(i)) ∩ I^c| ≥ d/2.  (11)

Indeed, otherwise there would exist G ∈ D(I, F) and i ≥ 2|I| such that |N^out_G(σ(i)) ∩ I| > d/2. Since {|N^out_G(σ(ℓ)) ∩ I|}_{ℓ≤n} is non-increasing, for every ℓ ≤ i we would have |N^out_G(σ(ℓ)) ∩ I| > d/2. This would imply |E_G(σ([i]), I)| > id/2 ≥ d|I|, which is impossible.

Fix s ≤ d|J|/16. For 0 ≤ k ≤ p := min{d, |J|} denote

Γ̃_k := {G ∈ Γ̃ : |E_G(σ(i_s), J)| = k}.

Clearly, Γ̃ = ⊔_{k≤p} Γ̃_k. We are going to prove that

(d|J|/(9n)) |Γ̃| ≤ |Γ̃ \ Γ̃₀| ≤ (2d|J|/n) |Γ̃|.  (12)

We first show that

|Γ̃₀| ≤ (8n/(d|J|)) |Γ̃₁|.  (13)

Note that (13) implies the left-hand side of (12). Indeed, since Γ̃₁ ⊆ Γ̃ \ Γ̃₀, (13) yields

|Γ̃₀| ≤ (8n/(d|J|)) |Γ̃ \ Γ̃₀|.

Adding |Γ̃ \ Γ̃₀| to both sides and using d|J| ≤ n, we obtain the left-hand side of (12).

In order to prove (13), we define a relation R between the sets Γ̃₀ and Γ̃₁. Let G ∈ Γ̃₀. Since G ∈ Λ(1/2, J) and 2|I| + s ≤ d|J|/8, we have

|N^in_G(J) \ σ([2|I|] ∪ {i₁, ..., i_{s−1}})| ≥ 3d|J|/8.  (14)

Denote

T := (N^in_G(J) \ σ([2|I|] ∪ {i₁, ..., i_{s−1}})) × (N^out_G(σ(i_s)) ∩ I^c).

Since G ∈ Γ̃₀, that is, |E_G(σ(i_s), J)| = 0, we have

|E_G(T)| ≤ (d − 1) |N^out_G(σ(i_s)) ∩ I^c|.
This together with (11), (14), and |J| ≥ 8 implies that the set S := T \ E_G(T) satisfies

|S| ≥ (3d|J|/8 − d + 1) · |N^out_G(σ(i_s)) ∩ I^c| ≥ d²|J|/8.  (15)

We say that (G, G′) ∈ R for some G′ ∈ Γ̃₁ if G′ can be obtained from G in the following way. First choose (v, j) ∈ S. Since j ∈ N^out_G(σ(i_s)) ∩ I^c and (v, j) ∉ E_G(T), we can destroy the edge (σ(i_s), j) and create the edge (v, j). Since v ∈ N^in_G(J), there is j′ ∈ J such that (v, j′) is an edge of G. Since G ∈ Γ̃₀, (σ(i_s), j′) is not an edge of G. Thus we can destroy the edge (v, j′) and create the edge (σ(i_s), j′), completing the simple switching. By (15) we get

|R(G)| ≥ d²|J|/8.

Note that the above transformation of G does not decrease |N^in_G(J)|, which guarantees that G′ ∈ Λ(1/2, J).

Now we estimate the cardinalities of preimages. Let G′ ∈ R(Γ̃₀). In order to reconstruct a possible G for which (G, G′) ∈ R, we destroy the only edge (σ(i_s), j′) in E_{G′}(σ(i_s), J) and create an edge (ℓ, j′) for some ℓ ∉ σ([2|I|] ∪ {i₁, ..., i_{s−1}}). There are at most n − 2|I| − (s − 1) ≤ n possible choices at this step. To complete the simple switching, we destroy one of the edges in E_{G′}(ℓ, J^c ∩ I^c) and create an edge connecting σ(i_s) to J^c ∩ I^c. There are at most d possible choices here. Therefore,

|R⁻¹(G′)| ≤ nd.

By Claim 2.1, this implies inequality (13).

We now show that for every k ∈ {1, ..., p} one has

|Γ̃_k| ≤ (2d|J|/(kn)) |Γ̃_{k−1}|.  (16)

Note that (16) implies the right-hand side of (12). Indeed, by (16),

|Γ̃_k| ≤ (2d|J|/n)^k (1/k!) |Γ̃₀|,

hence

|Γ̃| = |Γ̃₀| + ∑_{k=1}^p |Γ̃_k| ≤ exp(2d|J|/n) |Γ̃₀|,

which implies

|Γ̃ \ Γ̃₀| ≤ |Γ̃| − exp(−2d|J|/n) |Γ̃| ≤ (2d|J|/n) |Γ̃|.

In order to prove (16) for every k ∈ {1, ..., p}, we construct a relation R_k between the sets Γ̃_k and Γ̃_{k−1}. Let G ∈ Γ̃_k. Note that

|σ([2|I|] ∪ {i₁, ..., i_s}) ∪ N^in_G(J)| ≤ 2|I| + s + d|J| ≤ 2d|J|.  (17)

By (10), we get

|I^c ∩ J^c \ N^out_G(σ(i_s))| ≥ n − |I| − |J| − d ≥ n − 3d|J|.  (18)

Denote

S_k := E_G( σ([2|I|]^c \ {i₁, ..., i_s}) \ N^in_G(J), I^c ∩ J^c \ N^out_G(σ(i_s)) ).

Using (17) and (18), we observe that

|S_k| ≥ d(n − 2d|J| − 3d|J|) ≥ nd/2.  (19)

We say that (G, G′) ∈ R_k for some G′ ∈ Γ̃_{k−1} if G′ can be obtained from G in the following way. Let (σ(i_s), j₀) be one of the k edges in E_G(σ(i_s), J) and choose an edge (v, w) ∈ S_k. Since w ∉ N^out_G(σ(i_s)), we can destroy the edge (v, w) and create the edge (σ(i_s), w). Since v ∉ N^in_G(J), the pair (v, j₀) is not an edge of G, so we can destroy the edge (σ(i_s), j₀) and create the edge (v, j₀), thus completing the simple switching. Therefore, by (19) we get

|R_k(G)| ≥ knd/2.

Note that the above transformation of G does not decrease |N^in_G(J)|, which guarantees that G′ ∈ Λ(1/2, J).

Now we estimate the cardinalities of preimages. Let G′ ∈ R_k(Γ̃_k). In order to reconstruct a possible G for which (G, G′) ∈ R_k, we destroy an edge (v, j₀) from E_{G′}(σ([2|I|]^c \ {i₁, ..., i_s}), J) and create the edge (σ(i_s), j₀) for j₀ ∈ J.
There are at most $d|J|$ such choices. To complete the simple switching, we destroy an edge $(\sigma(i_s), j)$ in $E_{G'}(\sigma(i_s), I^c \cap J^c)$ and create the edge $(v, j)$. There are at most $d$ possible choices here. Therefore,
$$|R_k^{-1}(G')| \le d^2|J|.$$
Claim 2.1 implies inequality (16), and completes the proof. ✷

In this section we continue to study density properties of random $d$-regular directed (rrd) graphs. We interpret the results obtained in the previous section in terms of adjacency matrices and provide consequences of the anti-concentration property, Theorem 2.15, needed to investigate the invertibility of adjacency matrices.

For $1 \le d \le n$ we denote by $M_{n,d}$ the set of $n \times n$ matrices with 0/1 entries such that every row and every column contains exactly $d$ ones. By a random matrix on $M_{n,d}$ we understand a matrix uniformly distributed on $M_{n,d}$; in other words, the probability on $M_{n,d}$ is given by the normalized counting measure. Whenever it is clear from the context, we use the same letter $M$ for an element of $M_{n,d}$ and for a random matrix.

For $I \subset [n]$ we denote by $P_I$ the orthogonal projection on the coordinate subspace $\mathbb{R}^I$ and set $I^c := [n] \setminus I$. For a matrix $M \in M_{n,d}$ we say that a non-zero vector $x$ is a null-vector of $M$ if either $Mx = 0$ (a right null-vector) or $x^T M = 0$ (a left null-vector).

Let $M = \{\mu_{ij}\} \in M_{n,d}$. The $i$-th row of $M$ is denoted by $R_i = R_i(M)$ and the $i$-th column by $X_i = X_i(M)$. For $j \le n$, we denote $\mathrm{supp}\,X_j = \{i \le n : \mu_{ij} = 1\}$, and for every subset $J \subset [n]$ we let
$$S_J := \bigcup_{j \in J} \mathrm{supp}\,X_j.$$
Clearly, $|J| \le |S_J| \le d|J|$ and $n - d|J| \le |(S_J)^c| \le n - |J|$.

For $x \in \mathbb{R}^n$ we denote its coordinates by $x_i$, $i \le n$, and its $\ell_\infty$-norm by $\|x\|_\infty = \max_i |x_i|$; for a linear operator $U$ from $X$ to $Y$, by $\|U : X \to Y\|$ we denote its operator norm.

3.2 Maximizing columns support

In this section we reformulate Theorem 2.2 in terms of adjacency matrices. It corresponds to bounding from below the number of rows which are non-zero on a given set of columns. More precisely, for every subset $J \subset [n]$ we have $|S_J| \le d|J|$. We prove that for almost all matrices in $M_{n,d}$ this inequality is close to being sharp whenever $J$ is of the appropriate size (less than some proportion of $n/d$). This means that $S_J$ is of almost maximal size.

Theorem 3.1. Let $1 \le d \le n$ and $\varepsilon \in (0, 1)$ satisfy $\varepsilon \ge \max\{1, \ln d\}/d$. Define
$$\Omega_\varepsilon = \Bigl\{M \in M_{n,d} : \forall J \subset [n],\ |J| \le \frac{c_0\varepsilon n}{d},\ \text{one has } |S_J| \ge (1 - \varepsilon)d|J|\Bigr\},$$
where $c_0$ is a sufficiently small absolute positive constant. Then
$$\mathbb{P}(\Omega_\varepsilon) \ge 1 - \exp\Bigl(-\varepsilon d \ln\Bigl(\frac{e c_0 \varepsilon n}{d}\Bigr)\Bigr).$$

Remark 3.2. In fact, Theorem 2.2 gives slightly more, namely the corresponding estimates when $|J| = k$ for a fixed $k \le c_0\varepsilon n/d$. However, we do not use this below.

The following proposition is a direct consequence of Lemma 2.12 (applied with $2\varepsilon$ instead of $\varepsilon$ and with $p = k = 0$). It shows that for a big proportion of matrices in $M_{n,d}$, every two rows have almost disjoint supports.

Proposition 3.3. Let $\varepsilon \in (0, 1)$ and $1 \le d \le \varepsilon n/2$. Define
$$\Omega_\varepsilon' = \bigl\{M \in M_{n,d} : \forall i, j \in [n] \quad |\mathrm{supp}(R_i + R_j)| \ge 2(1 - \varepsilon)d\bigr\}.$$
Then
$$\mathbb{P}(\Omega_\varepsilon') \ge 1 - n^2\left(\frac{ed}{\varepsilon n}\right)^{\varepsilon d}.$$

In this section we reformulate Theorem 2.6 in terms of adjacency matrices. It states that almost all matrices in $M_{n,d}$ do not contain large zero minors. Given $0 \le \alpha, \beta \le 1$, define
$$\Omega_0(\alpha, \beta) = \{M \in M_{n,d} : \exists I, J \subset [n] \text{ such that } |I| \ge \alpha n,\ |J| \ge \beta n, \text{ and } \forall i \in I\ \forall j \in J\ \ \mu_{ij} = 0\}. \quad (20)$$
In other terms, the elements of $\Omega_0(\alpha, \beta)$ are the matrices in $M_{n,d}$ having a zero submatrix of size at least $\alpha n \times \beta n$.
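The two objects just introduced are closely related computationally: the rows outside $S_J$ are exactly the rows of a zero submatrix sitting on the columns $J$. The sketch below probes Theorem 3.1 numerically. It samples from the permutation model — a superposition of $d$ random permutation matrices, which we use only as a convenient stand-in for the uniform measure on $M_{n,d}$ (the two laws differ); the parameter values are illustrative choices of ours.

```python
import random

def sample_regular_01(n, d, tries=100000):
    """0/1 matrix with all row and column sums equal to d, built by
    superposing d permutation matrices, each resampled until it avoids
    the ones already placed.  This "permutation model" is a stand-in for
    the uniform measure on M_{n,d}: degrees are exact, the law is not."""
    rows = [set() for _ in range(n)]
    for _ in range(d):
        for _ in range(tries):
            perm = random.sample(range(n), n)
            if all(perm[i] not in rows[i] for i in range(n)):
                for i in range(n):
                    rows[i].add(perm[i])
                break
        else:
            raise RuntimeError("resampling failed; lower d or raise tries")
    return rows  # rows[i] = supp R_i

n, d, eps = 200, 5, 0.5
rows = sample_regular_01(n, d)
k = max(2, int(0.5 * eps * n / d))  # |J| a small multiple of n/d
worst = 1.0
for _ in range(300):
    J = set(random.sample(range(n), k))
    S_J = sum(1 for r in rows if r & J)          # |S_J|
    worst = min(worst, S_J / (d * k))
    # the n - |S_J| rows missing J form a zero submatrix on the columns J
print(f"min |S_J|/(d|J|): {worst:.3f}  (Theorem 3.1 predicts >= {1 - eps})")
```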
Theorem 2.6, reformulated below, shows that this set is small whenever $\alpha$ and $\beta$ are not very small.

Theorem 3.4. There exist absolute positive constants $c, C$ such that the following holds. Let $1 \le d \le n/2$ and $0 < \alpha \le \beta \le 1/2$. Assume that
$$\alpha \ge \frac{C \ln(e/\beta)}{d}.$$
Then
$$\mathbb{P}\bigl(\Omega_0(\alpha, \beta)\bigr) \le \exp(-c\,\alpha\beta dn).$$

Remark 3.5. We usually apply this theorem with the following choice of parameters: $\alpha = p/(2q)$, $\beta = p/2$, where $q = \bar c\, p d$ for a sufficiently small absolute positive constant $\bar c$. Then we have
$$\mathbb{P}\Bigl(\Omega_0\Bigl(\frac{p}{2q}, \frac{p}{2}\Bigr)\Bigr) \le \exp(-c_1 n). \quad (21)$$

We will also need the following simple lemma.

Lemma 3.6. Let $1 \le d \le n$ and $0 < \alpha, \beta < 1$. Let
$$\Omega_{\alpha,\beta} = \Bigl\{M \in M_{n,d} : \forall J,\ |J| \ge \beta n, \text{ one has } \bigl|\{i : |\mathrm{supp}\,R_i \cap J| \ge \beta/2\alpha\}\bigr| \ge (1 - \alpha)n\Bigr\}.$$
Then, provided that $\alpha n$ is an integer, we have $\bigl(\Omega_0(\alpha, \beta/2)\bigr)^c \subset \Omega_{\alpha,\beta}$.

Proof. Let $M \in \Omega_{\alpha,\beta}^c$. Then there exist $J \subset [n]$ with $|J| \ge \beta n$ and $I \subset [n]$ with $|I| = \alpha n$ such that
$$\forall i \in I \quad |\mathrm{supp}\,R_i \cap J| < \beta/2\alpha.$$
This shows that the minor $\{\mu_{ij} : i \in I, j \in J\}$ has strictly less than $\beta n/2$ ones, hence at least $\beta n/2$ of its columns are zero columns, that is,
$$\exists I \subset [n],\ |I| = \alpha n,\ \exists J' \subset J,\ |J'| \ge \beta n/2,\ \forall i \in I,\ \forall j \in J' : \ \mu_{ij} = 0.$$
In other words, there is a zero minor of size $\alpha n \times \beta n/2$. This proves the lemma. ✷

For every $F \subset [n] \times [n]$ and $I \subset [n]$, let
$$M_{n,d}(I, F) = \{M = \{\mu_{ij}\} \in M_{n,d} : \text{for } j \in I,\ \mu_{ij} = 1 \text{ if and only if } (i, j) \in F\}.$$
Thus matrices in $M_{n,d}(I, F)$ have the same columns indexed by $I$, and the places of the ones in these columns are given by $F \cap ([n] \times I)$. Of course, this set can be empty.

For every $M \in M_{n,d}$, $J \subset [n]$, and $i \le n$, we define $\delta^J_i \in \{0, 1\}$ as follows:
$$\delta^J_i = \delta^J_i(M) := \begin{cases} 0, & \text{if } \mathrm{supp}\,R_i \cap J = \emptyset, \\ 1, & \text{otherwise,} \end{cases} \qquad \delta^J := (\delta^J_1, \dots, \delta^J_n) \in \{0, 1\}^n.$$
The vector $\delta^J$ indicates the rows whose supports intersect $J$, i.e. the rows that have at least one 1 in the columns indexed by $J$. The following is a reformulation of Theorem 2.15, concerning the anti-concentration property of graphs, in terms of adjacency matrices.

Theorem 3.7. There are absolute positive constants $c, \tilde c$ such that the following holds. Let $1 \le d \le cn$ and let $I, J$ be disjoint subsets of $[n]$ such that
$$|I| \le d|J| \quad \text{and} \quad 1 \le |J| \le \frac{cn}{d}. \quad (22)$$
Let $F \subset [n] \times [n]$ be such that $M_{n,d}(I, F) \ne \emptyset$ and let $v \in \{0, 1\}^n$. Then
$$\mathbb{P}\{\delta^J = v \mid M_{n,d}(I, F)\} \le \exp\Bigl(-\tilde c\, d|J| \ln\Bigl(\frac{n}{d|J|}\Bigr)\Bigr).$$

This theorem has the following consequence.

Proposition 3.8. There are absolute positive constants $c, c'$ such that the following holds. Let $1 \le d \le cn$, $\lambda \in \mathbb{R}$, $a > 0$, and let $I, J, J_\lambda$ be a partition of $[n]$ satisfying (22). Let $q \le n/2$ be such that
$$2^{q+1} \le \exp\Bigl(c'\, d|J| \ln\Bigl(\frac{n}{d|J|}\Bigr)\Bigr) \quad (23)$$
and let $y$ be a vector in $\mathbb{R}^n$ satisfying
$$\forall \ell \in J_\lambda \ \ y_\ell = \lambda \quad \text{and} \quad \forall j \in J \ \ y_j - \lambda \ge 2a. \quad (24)$$
Then for every $S \subset [n]$ with $|S| \ge n - q$, one has
$$\mathbb{P}\{\|P_S M y\|_\infty < a\} \le \exp\Bigl(-c'\, d|J| \ln\Bigl(\frac{n}{d|J|}\Bigr)\Bigr). \quad (25)$$

Remark 3.9. The above statement, with essentially the same proof, holds when (24) is replaced by
$$\forall \ell \in J_\lambda \ \ y_\ell = \lambda \quad \text{and} \quad \forall j \in J \ \ \lambda - y_j \ge 2a.$$
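Theorem 3.7 is also easy to probe empirically. The following minimal sketch estimates the largest atom $\max_v \mathbb{P}\{\delta^J = v\}$ by sampling; it again uses a permutation-model sampler as a stand-in for the uniform measure (an assumption for illustration only), and takes the frozen set $I$ empty, so no conditioning on $M_{n,d}(I, F)$ is involved.

```python
import random
from collections import Counter

def delta_J_sample(n, d, J):
    """One sample of delta^J(M) under the permutation model:
    supp R_i is the union of the images of i under d independent uniform
    permutations (multi-edges collapsed, so this only approximates
    d-regularity)."""
    perms = [random.sample(range(n), n) for _ in range(d)]
    return tuple(1 if any(p[i] in J for p in perms) else 0 for i in range(n))

n, d, N = 120, 4, 2000
J = set(range(10))        # |J| of order n/d; the frozen set I is empty here
counts = Counter(delta_J_sample(n, d, J) for _ in range(N))
print("largest empirical atom of delta^J:", counts.most_common(1)[0][1] / N)
# Theorem 3.7 predicts atoms of size exp(-c~ d|J| ln(n/(d|J|))),
# exponentially small in d|J|; the mode frequency should be near 1/N.
```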
To prove Proposition 3.8 we need the following lemma.

Lemma 3.10. Let $\lambda \in \mathbb{R}$, $a > 0$, and let $I, J, J_\lambda$ be a partition of $[n]$ satisfying (22). Let $y$ be a vector in $\mathbb{R}^n$ satisfying (24). Then for every $i \le n$ and every $F \subset [n] \times [n]$ there exists $v_i \in \{0, 1\}$ such that
$$\{M \in M_{n,d}(I, F) : \delta^J_i(M) = v_i\} \subseteq \{M \in M_{n,d}(I, F) : |(My)_i| \ge a\}.$$

Proof. Fix $i \in [n]$ and $F \subset [n] \times [n]$. We argue by contradiction. Assume that the above inclusion is violated in both cases, $v_i = 0$ and $v_i = 1$. Then there exist two matrices $M_1, M_2 \in M_{n,d}(I, F)$ such that
$$A_1 := \mathrm{supp}\,R^1_i \cap J \ne \emptyset, \quad \mathrm{supp}\,R^2_i \cap J = \emptyset, \quad |(M_1 y)_i| < a, \ \text{ and } \ |(M_2 y)_i| < a,$$
where $R^j_i = R_i(M_j)$ denotes the $i$-th row of $M_j$, $j = 1, 2$. Note that since $M_1, M_2 \in M_{n,d}(I, F)$, then $\mathrm{supp}\,R^1_i \cap I = \mathrm{supp}\,R^2_i \cap I =: A_0$. Let $s_1 := |A_1|$ and $s_0 := |A_0|$. Using (24), we observe
$$(M_1 y)_i = \sum_{j \in A_1} y_j + \sum_{j \in A_0} y_j + \lambda(d - s_1 - s_0) \quad \text{and} \quad (M_2 y)_i = \sum_{j \in A_0} y_j + \lambda(d - s_0).$$
Therefore,
$$(M_1 y)_i - (M_2 y)_i = \sum_{j \in A_1} (y_j - \lambda) \ge 2 s_1 a \ge 2a,$$
which is impossible, as $|(M_1 y)_i| < a$ and $|(M_2 y)_i| < a$. ✷

Proof of Proposition 3.8. Since (24) and (25) are homogeneous in $y$, without loss of generality we assume that $a = 1$. Fix $S \subset [n]$ with $|S| \ge n - q$. Let $\mathcal{F}$ be the set of all $F \subset [n] \times [n]$ such that $M_{n,d}(I, F)$ is not empty. Note that $\{M_{n,d}(I, F)\}_{F \in \mathcal{F}}$ forms a partition of $M_{n,d}$. Therefore it is enough to prove that for every $F \in \mathcal{F}$,
$$p_0 := \mathbb{P}\{\|P_S M y\|_\infty < 1 \mid M_{n,d}(I, F)\} \le \exp\Bigl(-c'\, d|J| \ln\Bigl(\frac{n}{d|J|}\Bigr)\Bigr).$$
Fix $F \in \mathcal{F}$. Let $v_1, \dots, v_n \in \{0, 1\}$ be given by Lemma 3.10. Note that
$$\|P_S M y\|_\infty < 1 \iff \forall i \in S \ \ |(My)_i| < 1,$$
therefore if $\|P_S M y\|_\infty < 1$ then $\{i : \delta^J_i(M) = v_i\} \subseteq S^c$. Thus
$$p_0 \le \mathbb{P}\{\{i : \delta^J_i(M) = v_i\} \subseteq S^c \mid M_{n,d}(I, F)\}.$$
Now for every $K \subset [n]$, define $v^K \in \{0, 1\}^n$ by
$$v^K_i := \begin{cases} v_i & \text{if } i \in K, \\ 1 - v_i & \text{otherwise.} \end{cases}$$
Since $m := |S^c| \le q$, by Theorem 3.7 we obtain
$$p_0 \le \sum_{\ell=0}^{m} \mathbb{P}\{\exists K \subset S^c : |K| = \ell \text{ and } \delta^J(M) = v^K \mid M_{n,d}(I, F)\} \le \sum_{\ell=0}^{m} \binom{m}{\ell} \max_{|K| = \ell} \mathbb{P}\{\delta^J(M) = v^K \mid M_{n,d}(I, F)\} \le 2^{q+1} \exp\Bigl(-\tilde c\, d|J| \ln\Bigl(\frac{n}{d|J|}\Bigr)\Bigr).$$
Taking $c' = \tilde c/2$ and using (23) completes the proof. ✷

4 Invertibility of adjacency matrices

In this section we investigate the invertibility of adjacency matrices $M \in M_{n,d}$ of random $d$-regular directed graphs and prove Theorem A. We say that a non-zero vector is "almost constant" if, for some $0 < p < 1/3$, at least $(1 - p)n$ of its coordinates are equal to each other. Formally, for $0 < p < 1/3$ set
$$AC(p) = \{x \in \mathbb{R}^n \setminus \{0\} : \exists \lambda_x \in \mathbb{R} \ \ |\{i : x_i = \lambda_x\}| \ge (1 - p)n\}. \quad (26)$$
In this section we estimate the probability of the event
$$E_{AC}(p) := \{M \in M_{n,d} : \forall x \in AC(p) \quad Mx \ne 0 \text{ and } x^T M \ne 0\}, \quad (27)$$
which relates almost constant vectors to null-vectors of $M$. We show that this probability is close to one; in other words, with high probability a matrix $M \in M_{n,d}$ cannot have almost constant null-vectors. This will be used in the proof of the main theorem, allowing one to restrict the proof to the event $E_{AC}(p)$. More precisely, we prove the following theorem.

Theorem 4.1. There are absolute positive constants $C, c$ such that for $C \le d \le cn$ and $p \le c/\ln d$ one has
$$\mathbb{P}\bigl(E_{AC}(p)\bigr) \ge 1 - \left(\frac{Cd}{n}\right)^{cd}. \quad (28)$$

We start with some comments on the structure of almost constant vectors. Since $p < 1/3$, if $x \in AC(p)$ then only one real number $\lambda_x$ satisfies (26). For every $x \in AC(p)$ we fix such a $\lambda_x \in \mathbb{R}$. We set
$$AC^+(p) = \{x \in AC(p) : \lambda_x \ge 0\}.$$
Note that $\lambda_{-x} = -\lambda_x$, therefore
$$E_{AC}(p) = \{M \in M_{n,d} : \forall x \in AC^+(p) \quad Mx \ne 0 \text{ and } x^T M \ne 0\}.$$
Moreover, since $(x^T M)^T = M^T x$ and $M^T$ has the same distribution as $M$,
$$\mathbb{P}(\{M \in M_{n,d} : \forall x \in AC^+(p)\ \ Mx \ne 0\}) = \mathbb{P}(\{M \in M_{n,d} : \forall x \in AC^+(p)\ \ x^T M \ne 0\}).$$
Therefore it is enough to consider the event
$$E_{AC^+}(p) = \{M \in M_{n,d} : \forall x \in AC^+(p) \quad Mx \ne 0\}$$
and to prove that
$$\mathbb{P}\bigl(E_{AC^+}(p)\bigr) \ge 1 - \left(\frac{Cd}{n}\right)^{cd}.$$
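Before carrying out this program, note that definition (26) is straightforward to operationalize. A minimal sketch (illustration only): it tests membership in $AC(p)$ and recovers $\lambda_x$ when it exists — exactly the dichotomy exploited by the two lemmas below.

```python
from collections import Counter

def almost_constant_level(x, p):
    """Return lambda_x if x is in AC(p), i.e. some value is taken by at
    least (1-p)*n coordinates of the nonzero vector x; else return None.
    For p < 1/3 such a value is unique whenever it exists."""
    n = len(x)
    value, count = Counter(x).most_common(1)[0]
    return value if count >= (1 - p) * n else None

x = [0.0] * 95 + [1.0] * 5           # 95% of coordinates share the value 0
print(almost_constant_level(x, p=0.1))    # -> 0.0
print(almost_constant_level(x, p=0.01))   # -> None (needs >= 99 equal coords)
```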
To this end we split $AC^+(p)$ into two complementary sets and treat them separately in two lemmas.

For a vector $x = (x_i)_i \in \mathbb{R}^n$ we define the rearrangement $x^\star = (x^\star_i)_i$ as follows: $x^\star_i = x_{\pi(i)}$, where $\pi : [n] \to [n]$ is a permutation of $[n]$ such that $(|x^\star_i|)_i$ is a non-increasing sequence, that is, $|x_{\pi(1)}| \ge |x_{\pi(2)}| \ge \dots \ge |x_{\pi(n)}|$. Contrary to the usual decreasing rearrangement of the absolute values of a sequence, here the values $x^\star_i$ can be negative.

In the proof, we choose an appropriate positive integer $m$ and consider a certain subset of $AC^+(p)$. For a vector $x$ in this subset we "ignore" its first $m$ coordinates $x^\star_i$, i.e. we consider $P_I x^\star$ with $I = [m]^c$. Then we show that this vector can be split into a sum of two vectors with disjoint supports such that the second vector has equal coordinates on its support. To approximate such vectors in the $\ell_\infty$-metric we construct a net in the following way. Let $\eta > 0$. Given $H \subset [n]$ of cardinality $k_1 := pn - m$ (we choose $p$ so that $pn$ is an integer), fix an $\eta$-net $\Delta_H$ in the cube $P_H([-1, 1]^n)$. Such a $\Delta_H$ can be chosen with $|\Delta_H| \le (1/\eta)^{k_1}$. Given $L \subset [n]$ of cardinality $k_2 := (1 - p)n$, consider the one-dimensional space generated by the vector $v^L$ with $\mathrm{supp}\,v^L = L$ and all coordinates on $L$ equal to one. Fix an $\eta$-net $\Lambda_L$ in the segment $[-v^L, v^L]$. Clearly, $\Lambda_L$ can be chosen with $|\Lambda_L| = 1/\eta$. Note also that for every $z \in \Lambda_L$ one has $\mathrm{supp}\,z = L$ and $z_i = z_j$ whenever $i, j \in L$; that is, $z \in AC(p)$ and $z_i = \lambda_z$ for $i \in L$. Given disjoint subsets $H, L$ of $[n]$ of cardinalities $k_1$ and $k_2$ respectively, consider
$$\Delta_H \oplus \Lambda_L = \{w + z : w \in \Delta_H,\ z \in \Lambda_L\}.$$
Then $\Delta_H \oplus \Lambda_L \subset AC(p)$ and
$$|\Delta_H \oplus \Lambda_L| \le (1/\eta)^{k_1 + 1} \le (1/\eta)^{pn}.$$
Finally, we observe that the vector $P_I x^\star$ can be approximated by the vectors in the union of $\Delta_H \oplus \Lambda_L$ over all such choices of $H$ and $L$.

In fact we will use only a subset of this union. Fix a parameter $a > 0$ and a positive integer $r$. For $H, L$ as above consider
$$\Gamma(H, L) = \Gamma_{a,r}(H, L) := \{y \in \Delta_H \oplus \Lambda_L : \exists J \subset H,\ |J| = r, \text{ such that } \forall i \in J \ \ y_i - \lambda_y \ge 2a\}.$$
Clearly, $|\Gamma(H, L)| \le (1/\eta)^{pn}$. Finally, set
$$\mathcal{N} = \mathcal{N}_{a,r} := \bigcup_{|L| = k_2,\ |H| = k_1} \Gamma(H, L),$$
where the union is taken over all disjoint subsets $H$ and $L$ of $[n]$ of cardinalities $k_1$ and $k_2$ correspondingly. Then
$$|\mathcal{N}| \le \binom{n}{k_2}\binom{n - k_2}{k_1}\left(\frac{1}{\eta}\right)^{pn} \le \left(\frac{2e}{\eta p}\right)^{pn}. \quad (29)$$
We are now ready to prove the two lemmas needed for Theorem 4.1. In both of them we use the following set associated with $x \in AC^+(p)$ and a given $m$:
$$J_x = J_x(m) := \{i > m : |x^\star_i - \lambda_x| \ge 1/(2d)\}.$$

Lemma 4.2. There are absolute positive constants $c_1$ and $c_2$ such that the following holds. Let $1 \le d \le cn$, and let $m \ge 1$ and $r \ge 1$ be integers. Let $p \in (0, 1/3)$ be such that $pn$ is an integer. Assume that
$$m \le c_2\, r \ln\Bigl(\frac{n}{dr}\Bigr), \quad r \le \frac{cn}{d}, \quad p \le \frac{dr}{n}, \quad \text{and} \quad p\bigl(\ln(e/p) + \ln(18d^2)\bigr) \le c_2\,\frac{dr}{n} \ln\Bigl(\frac{n}{dr}\Bigr). \quad (30)$$
Consider the following subset of almost constant vectors
$$T_1 = \{x \in AC^+(p) : |x^\star_m| = 1 \text{ and } |J_x| \ge r\}$$
and the corresponding event
$$E_{T_1} = \{M \in M_{n,d} : \forall x \in T_1 \quad Mx \ne 0\}.$$
Then
$$\mathbb{P}(E_{T_1}) \ge 1 - \exp\Bigl(-c_1\, dr \ln\Bigl(\frac{n}{dr}\Bigr)\Bigr).$$

Remark 4.3. We apply this lemma with $r$ and $m$ both of order $n/d$, so that the probability is exponentially (in $n$) close to one.
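For completeness, we record the elementary counting behind (29); it uses only $\binom{n}{k} \le (en/k)^k$ and $\binom{pn}{m} \le 2^{pn}$ (the explicit constant $2e$ is our choice, made to keep the bound concrete): with $k_2 = (1-p)n$ and $k_1 = pn - m$,
$$\binom{n}{k_2}\binom{n - k_2}{k_1} = \binom{n}{pn}\binom{pn}{m} \le \Bigl(\frac{en}{pn}\Bigr)^{pn} 2^{pn} = \Bigl(\frac{2e}{p}\Bigr)^{pn}, \qquad \text{hence} \qquad |\mathcal{N}| \le \Bigl(\frac{2e}{p}\Bigr)^{pn}\Bigl(\frac{1}{\eta}\Bigr)^{pn} = \Bigl(\frac{2e}{\eta p}\Bigr)^{pn}.$$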
Remark 4.4. In fact we show a stronger estimate, which could be of independent interest, namely
$$\mathbb{P}\bigl(\{M \in M_{n,d} : \exists x \in T_1 \text{ such that } \|Mx\|_\infty < 1/(8d)\}\bigr) \le \exp\Bigl(-c_1\, dr \ln\Bigl(\frac{n}{dr}\Bigr)\Bigr).$$

Proof of Lemma 4.2. We prove the stronger bound from Remark 4.4. We start with a few general comments on the strategy behind the proof. By the construction of $T_1$, for $x = (x_i)_i \in T_1$ we have
$$\max\bigl\{|\{i > m : x^\star_i - \lambda_x \ge 1/(2d)\}|,\ |\{i > m : \lambda_x - x^\star_i \ge 1/(2d)\}|\bigr\} \ge r.$$
Therefore, denoting
$$T^{+1} := \{x \in T_1 : |\{i > m : x^\star_i - \lambda_x \ge 1/(2d)\}| \ge r\} \quad \text{and} \quad T^{-1} := \{x \in T_1 : |\{i > m : \lambda_x - x^\star_i \ge 1/(2d)\}| \ge r\},$$
we have $T_1 \subseteq T^{+1} \cup T^{-1}$. Thus it is sufficient to show that
$$p_0 := \mathbb{P}\bigl(\{M \in M_{n,d} : \exists x \in T^{+1} \ \ \|Mx\|_\infty < 1/(8d)\}\bigr) \le \exp\Bigl(-c_1\, dr \ln\Bigl(\frac{n}{dr}\Bigr)\Bigr) \quad (31)$$
and similarly for $T^{-1}$. Below we prove (31) only; its counterpart for $T^{-1}$ follows the same lines, one just needs to apply Proposition 3.8 with Remark 3.9 (with a slight modification of the net constructed above).

To prove (31) we first approximate vectors in $T^{+1}$ by elements of the net $\mathcal{N}$ constructed above. By the union bound, this reduces (31) to estimates on the net. Then, applying Proposition 3.8, we obtain a probability bound for a fixed vector from the net. As usual, the balance between the probability bound and the size of the net plays the crucial role.

Fix two parameters $\eta := 1/(9d^2)$ and $a := 1/(4d) - \eta$, and take $k_1 = pn - m$, $k_2 = (1 - p)n$ as in the construction of the net $\mathcal{N}$ above. We start by showing how an element of $T^{+1}$ is approximated by an element from $\mathcal{N}$. Let $x \in T^{+1}$ and assume for simplicity that $|x_1| \ge |x_2| \ge \dots \ge |x_n|$ (that is, $x = x^\star$). Recall that $\lambda_x$ is the unique real number satisfying (26). By the definition of $T^{+1}$ it is easy to see that there exists a partition $J, J_0, I_0$ of $[n]$ such that
$$|J| = r, \quad |J_0| = k_2, \quad |I_0| = n - r - k_2,$$
$$\forall i \in J_0 \ \ x_i = \lambda_x \ \text{ with } \ \lambda_x \ge 0, \qquad \forall j \in J \ \ j > m \ \text{ and } \ x_j \ge \lambda_x + 1/(2d).$$
Since $|x_m| = 1$ and there is $i > m$ such that $x_i \ge \lambda_x + 1/(2d)$, we observe that $\lambda_x < 1$. Hence for every $i \le m$ we have either $x_i \ge 1$ or $x_i \le -1$, so $J_0 \cap I_1 = \emptyset$, where $I_1 = [m]$. Note also that $J \cap I_1 = \emptyset$, hence $I_1 \subset I_0$. Set $H = J \cup (I_0 \setminus I_1)$ and $L = J_0$. Then $|H| = k_1$, $|L| = k_2$, and $A := I_1^c = H \cup L$. By the definition of $\Delta_H$ and $\Lambda_L$ there exist $y' \in \Delta_H$ and $y'' \in \Lambda_L$ such that
$$\|P_H x - y'\|_\infty \le \eta \quad \text{and} \quad \|P_L x - y''\|_\infty \le \eta.$$
Therefore $y := y' + y'' \in \Delta_H \oplus \Lambda_L$ satisfies $\|P_A x - P_A y\|_\infty \le \eta$. Moreover, by the construction of the net, $y \in AC(p)$,
$$\forall i \in L \ \ y_i = y''_i = \lambda_y \quad \text{and} \quad \forall i \in J \ \ y_i - \lambda_y \ge x_i - \lambda_x - 2\eta \ge \frac{1}{2d} - 2\eta = 2a.$$
Thus we showed that for every $x \in T^{+1}$ there exist $H, L \subset [n]$ with $|H| = pn - m$, $|L| = (1 - p)n$, and $y \in \Gamma(H, L) = \Gamma_{a,r}(H, L)$ such that $\|P_A x - P_A y\|_\infty \le \eta$. Note also that, given $H$ and $L$, one can "reconstruct" $I_1$ as $I_1 = [n] \setminus (H \cup L)$. Moreover, denoting
$$S := S^c_{I_1} = [n] \setminus \bigcup_{i \in I_1} \mathrm{supp}\,X_i$$
and observing that $P_S M P_{I_1} = 0$ (indeed, for every $i \in S$ and $j \in I_1$ one has $\mu_{ij} = 0$), we get
$$\|P_S M y\|_\infty = \|P_S M x + P_S M P_A (y - x)\|_\infty \le \|P_S M x\|_\infty + \|P_S M P_A (y - x)\|_\infty < \|Mx\|_\infty + \|M : \ell_\infty \to \ell_\infty\|\,\eta \le \frac{1}{8d} + \eta d < a,$$
provided that $\|Mx\|_\infty \le 1/(8d)$ and $d \ge 9$. Thus, by the union bound, we obtain
$$p_0 \le \sum_{y \in \mathcal{N}} \mathbb{P}\bigl(\{M \in M_{n,d} : \|P_S M y\|_\infty < a\}\bigr),$$
where $S = S(y) = S^c_{I_1}$ and $I_1 = I_1(y) = [n] \setminus (H \cup L)$ whenever $y \in \Gamma(H, L)$. Finally we estimate the probabilities in the sum.
Let $H, L \subset [n]$ be such that $|H| = pn - m$, $|L| = (1 - p)n$, let $y \in \Gamma(H, L)$, let $J$ be the set from the definition of $\Gamma(H, L)$, and let $S$ be as above. Let $I = [n] \setminus (J \cup L)$. Then $I, J, L$ form a partition of $[n]$ with $|J| = r$ and $|I| = pn - r$. By the assumptions of the lemma, this partition satisfies (22). Note also that the assumptions on $m$ and $r$ imply $md \le n/2$, hence $|S| \ge n - md > 0$, and (23) is satisfied with $q := md$ (provided that $c_2 \le c'/2$). Moreover, for $y \in \Gamma(H, L)$ the vector $y$ satisfies
$$\forall i \in L \ \ y_i = \lambda_y \quad \text{and} \quad \forall i \in J \ \ y_i - \lambda_y \ge 2a.$$
Applying Proposition 3.8 with the partition $\{I, J, L\}$, the vector $y$, and the set $S$, we obtain
$$\mathbb{P}\bigl(\{M \in M_{n,d} : \|P_S M y\|_\infty < a\}\bigr) \le \exp\Bigl(-c'\, dr \ln\Bigl(\frac{n}{dr}\Bigr)\Bigr).$$
Since $\eta = 1/(9d^2)$, by (29) and (30) this implies
$$p_0 \le |\mathcal{N}| \exp\Bigl(-c'\, dr \ln\Bigl(\frac{n}{dr}\Bigr)\Bigr) \le \left(\frac{18ed^2}{p}\right)^{pn} \exp\Bigl(-c'\, dr \ln\Bigl(\frac{n}{dr}\Bigr)\Bigr) \le \exp\Bigl(-c_1\, dr \ln\Bigl(\frac{n}{dr}\Bigr)\Bigr),$$
which completes the proof. ✷

In the next lemma we prove an analogous statement for the set which is complementary to $T_1$. Recall that $\Omega_\varepsilon$ was introduced in Theorem 3.1; let $c_0$ be the same constant as in that theorem.

Lemma 4.5. Let $p \in (0, 1/3)$ and $\varepsilon \in (0, 1/4)$. Let $m_0, m_1, r$ be positive integers such that
$$m_1 = m_0 + 2r < \min\{2(1 - \varepsilon)m_0,\ c_0\varepsilon n/d\}.$$
Consider the following subset of almost constant vectors
$$T_2 = \{x \in AC^+(p) : \text{either } |x^\star_{m_0}| = 0 \text{ or } (|x^\star_{m_0}| = 1 \text{ and } |J_x| < r)\},$$
where $J_x = J_x(m_0)$. Then
$$\Omega_\varepsilon \subset E_{T_2} := \{M \in M_{n,d} : \forall x \in T_2 \quad Mx \ne 0\}.$$

To prove the lemma we need the following simple observation.

Claim 4.6. Let $\varepsilon \in (0, 1/4)$, $1 \le d \le n$, and $1 \le m \le c_0\varepsilon n/d$, where $c_0$ is the constant from the definition of $\Omega_\varepsilon$. Let $M \in \Omega_\varepsilon$ and let $I$ be the set of indices corresponding to rows having exactly one 1 in the columns indexed by $[m]$, i.e.
$$I = \{i \in S_{[m]} : |\mathrm{supp}\,R_i \cap [m]| = 1\}.$$
Then $|I| \ge (1 - 2\varepsilon)dm > 0$.

Proof. Since $M \in \Omega_\varepsilon$,
$$|S_{[m]}| \ge (1 - \varepsilon)dm.$$
Since the rows $R_i$, $i \in I$, have exactly one 1 on $[m]$, while the other rows indexed by $S_{[m]}$ have at least two ones on $[m]$, we observe
$$|I| + 2(|S_{[m]}| - |I|) \le dm.$$
This implies the desired result. ✷

Proof of Lemma 4.5. Let $M \in \Omega_\varepsilon$ and $x \in T_2$. For simplicity assume that $x = x^\star$, so that $|x_1| \ge |x_2| \ge \dots \ge |x_n|$. Our proof consists of the following three cases.

Case 1: $|x_{m_0}| = 0$. Let $m_x$ be the largest index such that $|x_{m_x}| \ne 0$; clearly $1 \le m_x < m_0$. Let $I_x$ be the set of indices corresponding to rows having exactly one 1 in the columns indexed by $[m_x]$. By Claim 4.6, $I_x \ne \emptyset$. Thus there exist a row $R_i$, $i \in I_x$, and a unique $j \in [m_x]$ such that $\mu_{ij} = 1$. This implies
$$(Mx)_i = \langle R_i, x\rangle = x_j \ne 0.$$

Case 2: $|x_{m_0}| = 1$, $|J_x| < r$, and $\lambda_x < 1/(2d)$. In this case the set $\{i > m_0 : |x_i| \ge \lambda_x + 1/(2d)\}$ is contained in $J_x$, hence has fewer than $r$ elements. Since $m_1 - m_0 = 2r > r$, we have
$$|x_{m_1}| < \lambda_x + 1/(2d) < 1/d.$$
Let $J_j = [m_j]$, $j = 0, 1$. We first show that there is a row $R_i$ such that
$$|\mathrm{supp}\,R_i \cap J_0| = 1 \quad \text{and} \quad |\mathrm{supp}\,R_i \cap (J_1 \setminus J_0)| = 0. \quad (32)$$
Let $I$ be the set of indices corresponding to rows having exactly one 1 in $J_0$. By Claim 4.6, $|I| \ge (1 - 2\varepsilon)dm_0$. Since the number of rows that are nonzero on $J_1 \setminus J_0$ is at most $d|J_1 \setminus J_0|$, the number of rows satisfying (32) is at least
$$(1 - 2\varepsilon)dm_0 - d(m_1 - m_0) = d\,(2m_0 - 2\varepsilon m_0 - m_1) > 0$$
by the assumption $m_1 < 2(1 - \varepsilon)m_0$; that is, there exists a row $R_i$ satisfying (32). Denote by $j_0 \in J_0$ the only coordinate of $P_{J_0} R_i$ which is equal to one, i.e. $\mu_{ij_0} = 1$ and $\mu_{ij} = 0$ for every $j \in J_1 \setminus \{j_0\}$. Therefore, if $j \in \mathrm{supp}\,R_i$ and $j \ne j_0$, then $j > m_1$ and $|x_j| \le |x_{m_1}|$.
Since $|x_{m_1}| < 1/d$ and $|x_{j_0}| \ge |x_{m_0}| = 1$, we observe
$$|(Mx)_i| = |\langle R_i, x\rangle| \ge |x_{j_0}| - \sum_{j \in \mathrm{supp}\,R_i,\ j \ne j_0} |x_j| \ge |x_{m_0}| - (d - 1)|x_{m_1}| \ge 1 - \frac{d - 1}{d} > 0.$$

Case 3: $|x_{m_0}| = 1$, $|J_x| < r$, and $\lambda_x \ge 1/(2d)$. Consider the set
$$J := \{i \le n : 0 < x_i < \lambda_x + 1/(2d)\}.$$
Then $A := J^c \subset [m_0] \cup J_x$, hence $|S_A| \le (m_0 + 2r)d = m_1 d < n$. Therefore, there exists a row $R_j$, $j \notin S_A$, such that $\mathrm{supp}\,R_j \subset J$. This implies
$$(Mx)_j = \langle R_j, x\rangle = \sum_{j' \in \mathrm{supp}\,R_j} x_{j'} > 0.$$
Thus in all cases $Mx \ne 0$, which completes the proof. ✷

Proof of Theorem 4.1. Recall that, as we mentioned after the theorem, it is enough to bound the probability of the event $E_{AC^+}$. Let $c_0$, $c_1$, $c_2$ be the constants from Theorem 3.1 and Lemma 4.2. We choose an absolute constant $\varepsilon_0 > 0$ small enough, $m_0 = \lfloor c_0\varepsilon_0 n/(2d)\rfloor$ and $r = \lfloor m_0/8\rfloor$, so that the assumptions of Lemma 4.2 (applied with $m = m_0$) and of Lemma 4.5 on $m_0$, $m_1 = m_0 + 2r$, and $r$ are satisfied. Finally, for a sufficiently small absolute positive constant $c_3$ we choose the biggest $p \le c_3/\ln d$ such that $pn$ is an integer. Then the assumptions of Lemma 4.2 on $p$ are also satisfied (note that it is enough to prove the theorem with the biggest possible $p$). Therefore, applying these lemmas together with Theorem 3.1, we have
$$\mathbb{P}(E_{T_1}) \ge 1 - \exp(-c_4 n) \quad \text{and} \quad \mathbb{P}(E_{T_2}) \ge \mathbb{P}(\Omega_{\varepsilon_0}) \ge 1 - \exp\Bigl(-c_5\, d \ln\Bigl(\frac{c_6 n}{d}\Bigr)\Bigr),$$
where $T_1, T_2$ are the sets from the lemmas and the $c_i$'s are absolute positive constants. Since $E_{AC^+} \supseteq E_{T_1} \cap E_{T_2}$, we obtain the desired result by adjusting the absolute constants. ✷

We will need the two following simple facts.

Claim 4.7. Let $p \in (0, 1/3]$ and $x \in \mathbb{R}^n$. Assume that
$$\forall \lambda \in \mathbb{R} \quad |\{i : x_i = \lambda\}| \le (1 - p)n.$$
Then there exists $J \subset [n]$ such that $pn \le |J| \le (1 - p)n$ and
$$\forall i \in J \ \forall j \notin J \quad x_i \ne x_j.$$

Remark 4.8. We apply this claim twice, once in the following form. Let $m \le n$ and $\ell \le m/3$. Let $S \subset [n]$, $|S| = m$, and let $v \in \mathbb{R}^n$ satisfy $\forall \lambda \in \mathbb{R}\ \ |\{i \in S : v_i \ne \lambda\}| \ge \ell$. Then there exists $J \subset S$ such that $\ell \le |J| \le m - \ell$ and $\forall i \in J\ \forall j \in S \setminus J\ \ v_i \ne v_j$.

Proof of Claim 4.7. Let $\{\lambda_1, \dots, \lambda_k\}$ be the set of all distinct values of the coordinates of $x$. For $j \le k$, let $I_j = \{i : x_i = \lambda_j\}$ and $m_j = |I_j|$. Clearly, $m_j \le (1 - p)n$ for every $j$. By relabeling, assume that $m_1 \ge m_2 \ge \dots \ge m_k$. If $m_1 \ge pn$, choose $J = I_1$. Otherwise set $J = I_1 \cup \dots \cup I_\ell$, where $\ell$ is the smallest number satisfying $m := |J| = m_1 + \dots + m_\ell \ge pn$. Since $m_j \le m_1 < pn$ for $j \le k$, then $m < 2pn$, and this implies
$$pn \le |J| < 2pn \le (1 - p)n. \qquad ✷$$

We say that sets $A_1, A_2, \dots, A_m$ form a $k$-fold covering of $A$ if every $x \in A$ belongs to at least $k$ of the sets $A_i$.

The proof of the following fact uses a standard argument in measure theory, so we omit it.

Claim 4.9. Let $(X, \mu)$ be a measure space and let $A, A_1, A_2, \dots, A_m$ be subsets of $X$ such that $\{A_i\}_i$ forms a $k$-fold covering of $A$. Then
$$k\,\mu(A) \le \sum_{i=1}^{m} \mu(A_i).$$

In this section we prove a Littlewood–Offord type result, which will be one of the key steps in the shuffling procedure. Consider the probability space $\Omega_0 = \{B \subset [2d] : |B| = d\}$ with the probability given by the normalized counting measure. For a vector $v \in \mathbb{R}^{2d}$ and $B \in \Omega_0$ denote
$$v_B = \sum_{i \in B} v_i.$$

Proposition 4.10. Let $1 \le k \le d$. Let $v \in \mathbb{R}^{2d}$ and $a \in \mathbb{R}$. Assume there exists $J \subset [2d]$ such that $|J| = k$ and for every $i \in J$ and every $j \notin J$ one has $v_i \ne v_j$. Then
$$\mathbb{P}(v_B = a) \le \frac{C}{\sqrt{k}},$$
where $C$ is an absolute positive constant.

To prove Proposition 4.10 we need two combinatorial lemmas. We start with the so-called anti-concentration Littlewood–Offord type lemma ([13], see also [19]). Usually it is stated for $\pm 1$ random variables; we will use the following version.
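Before turning to the two lemmas, we note that Proposition 4.10 can be sanity-checked exactly in the special case where $v$ is the indicator vector of $J$: then $v_B = |B \cap J|$ is hypergeometric, and its largest atom is available in closed form. A minimal sketch (illustration only; the comparison against $1/\sqrt{k}$ is our choice of normalization):

```python
from math import comb, sqrt

def largest_atom(d, k):
    """v = indicator of a k-set J in [2d]; B a uniform d-subset of [2d].
    Then v_B = |B ∩ J| is hypergeometric, and its largest atom equals
    max_l C(k,l) * C(2d-k, d-l) / C(2d, d)."""
    total = comb(2 * d, d)
    return max(comb(k, l) * comb(2 * d - k, d - l) for l in range(k + 1)) / total

d = 500
for k in (10, 50, 200, 500):
    print(f"k={k:4d}  max_a P(v_B=a) = {largest_atom(d, k):.4f}"
          f"  vs 1/sqrt(k) = {1/sqrt(k):.4f}")
# The ratio stays bounded, matching the C/sqrt(k) bound of Proposition 4.10.
```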
Lemma 4.11. Let $\xi_1, \xi_2, \dots, \xi_m$ be independent two-valued random variables and let $x \in \mathbb{R}^m$. Then
$$\sup_{a \in \mathbb{R}} \mathbb{P}\Bigl(\sum_{i=1}^{m} \xi_i x_i = a\Bigr) \le |\mathrm{supp}\,x|^{-1/2}.$$

Let $\Pi_{2d}$ be the permutation group on $[2d]$ with the probability given by the normalized counting measure, denoted by $\mathbb{P}_{\Pi_{2d}}$. By $\pi$ we denote a random permutation. The proof of the next lemma is rather straightforward; we postpone it to the end of the section.

Lemma 4.12. Let $1 \le k \le d$. Let $x \in \mathbb{R}^{2d}$ and $J \subset [2d]$ be such that $|J| = k$ and for every $i \in J$ and every $j \notin J$ one has $x_i \ne x_j$. For $\pi \in \Pi_{2d}$ let
$$E = E(\pi) = \{(x_{\pi(i)}, x_{\pi(i+d)}) : i \le d,\ x_{\pi(i)} \ne x_{\pi(i+d)}\}.$$
Then
$$\mathbb{P}_{\Pi_{2d}}\Bigl(|E| \le \frac{k}{50}\Bigr) \le \left(\frac{k}{50d}\right)^{k/100}.$$

Proof of Proposition 4.10. Let $B$ be a (set-valued) random variable on $\Omega_0$. Let $\delta = (\delta_1, \dots, \delta_d)$ be a vector of independent Bernoulli 0/1 random variables with $\mathbb{P}(\delta_i = 1) = 1/2$ ($i \le d$), and let $\Omega_1$ denote the corresponding probability space. Consider the random (on $\Pi_{2d} \times \Omega_1$) set of indices
$$A(\delta, \pi) = \{\pi(i) : \delta_i = 1\} \cup \{\pi(i + d) : \delta_i = 0\} \subset [2d].$$
Note that $|A(\delta, \pi)| = d$. It is not difficult to see that for every fixed $\delta$, $A(\delta, \pi)$ has the same distribution as $B$. Therefore, $A(\delta, \pi)$ on $\Pi_{2d} \times \Omega_1$ has the same distribution as $B$ on $\Omega_0$. Thus, given $v \in \mathbb{R}^{2d}$, the random variable $v_B$ has the same distribution as $v_{A(\delta,\pi)}$. Now we introduce the following random variables on $\Pi_{2d} \times \Omega_1$:
$$\xi_i = \xi^v_i = \delta_i v_{\pi(i)} + (1 - \delta_i) v_{\pi(i+d)}.$$
Note that $\mathbb{P}(\xi_i = v_{\pi(i)}) = \mathbb{P}(\xi_i = v_{\pi(i+d)}) = 1/2$ and
$$v_{A(\delta,\pi)} = \sum_{i \in A(\delta,\pi)} v_i = \sum_{i=1}^{d} \xi_i.$$
Moreover, if we condition on $\pi$, the random variables $\bar\xi_i = \xi_i|\pi$ are independent, hence we can apply Lemma 4.11. Denote by $m(\pi)$ the number of two-valued $\bar\xi_i$'s. Then
$$\mathbb{P}_{\Omega_1}\Bigl(\sum_{i=1}^{d} \bar\xi_i = a\Bigr) \le \frac{1}{\sqrt{m(\pi)}}.$$
Finally, we note that by Lemma 4.12 there are many permutations with large $m(\pi)$, namely
$$\mathbb{P}_{\Pi_{2d}}\bigl(m(\pi) \le k/50\bigr) \le \left(\frac{k}{50d}\right)^{k/100}.$$
Therefore,
$$\mathbb{P}\Bigl(\sum_{i=1}^{d} \xi_i = a\Bigr) \le \sqrt{\frac{50}{k}} + \left(\frac{k}{50d}\right)^{k/100}.$$
This implies the desired result. ✷

Proof of Lemma 4.12. Without loss of generality we can assume that $x_i = 1$ for $i \le k$ and $x_i = 0$ for $i > k$. Let $\pi$ be a random permutation uniformly distributed over $\Pi_{2d}$. The basic idea of the proof is to condition on a realization of the set $\{i \le d : x_{\pi(i)} = 1\}$ and show that the conditional probability of the event $|E| < k/50$ is small regardless of that realization.

Let $A = \{i \le d : x_{\pi(i)} = 1\}$ be a random subset of $[d]$. Fix a subset $A_0 \subset [d]$ with $|A_0| \le k$ (the event $A = A_0$ then has non-zero probability). Denote $m := |A_0|$. Further, define a random subset $E_1 = \{i \in A_0 : x_{\pi(i+d)} = 1\}$. Clearly, we have
$$|E| = (m - |E_1|) + (k - m - |E_1|) = k - 2|E_1|$$
everywhere on the event $\{A = A_0\}$. Let a parameter $\beta_1 \in (0, 0.1)$ be chosen later. Consider three cases.

Case 1: $m \le (1 - \beta_1)k/2$. Then clearly we have $|E| \ge k - 2m \ge \beta_1 k$ everywhere on the event $\{A = A_0\}$.

Case 2: $m \ge (1 + \beta_1)k/2$. Since $|E_1| + m \le k$ (deterministically), we have $|E| \ge 2m - k \ge \beta_1 k$ everywhere on $\{A = A_0\}$.

Case 3: $(1 - \beta_1)k/2 \le m \le (1 + \beta_1)k/2$. Note that the set $\{\pi(d+1), \pi(d+2), \dots, \pi(2d)\}$ contains $k - m$ ones and $d - k + m$ zeros. Thus, for every $\ell \le k - m$ we have
$$p_\ell := \mathbb{P}(|E_1| = \ell \mid A = A_0) = \binom{m}{\ell}\binom{d - m}{k - m - \ell}\binom{d}{k - m}^{-1}$$
, where the first factor counts the choices of the set $E_1 \subset A_0$ of cardinality $\ell$, the second factor counts the possible choices of the set $\{d + 1 \le i \le 2d : x_{\pi(i)} = 1 \text{ and } x_{\pi(i-d)} = 0\}$ given that $|E_1| = \ell$, and the factors $(k - m)!$ and $(d - k + m)!$, counting the orderings of the ones and zeros in the last $d$ positions, cancel out. Equivalently,
$$p_\ell = \binom{d}{m}^{-1}\binom{k - m}{\ell}\binom{d - k + m}{m - \ell}.$$
We choose $\beta_2 \in (1/2, 1)$, depending only on $\beta_1$, so that $2/(1 - \beta_1) - 1 - \beta_1 \le 2\beta_2$, and set $\beta_3 = 1 - \beta_2$. Using Chernoff-type bounds for binomial coefficients, we observe
$$\sum_{\ell \ge \beta_2 m} \binom{d - k + m}{m - \ell} \le \sum_{\ell' \le \beta_3 m} \binom{d - k + m}{\ell'} \le \left(\frac{e(d - k + m)}{\beta_3 m}\right)^{\beta_3 m} \le \left(\frac{ed}{\beta_3 m}\right)^{\beta_3 m}.$$
Since $k \le 2m/(1 - \beta_1)$ and $2/(1 - \beta_1) - 1 - \beta_1 \le 2\beta_2$, for $\ell \ge \beta_2 m$ we get
$$\binom{k - m}{\ell} = \binom{k - m}{k - m - \ell} \le \left(\frac{e(k - m)}{k - m - \beta_2 m}\right)^{k - m - \beta_2 m} \le \left(\frac{e(1 + \beta_1)}{2\beta_3}\right)^{\beta_3 m}.$$
Thus, using $\binom{d}{m}^{-1} \le (m/d)^m$,
$$\sum_{\ell \ge \beta_2 m} p_\ell \le \left(\frac{m}{d}\right)^{m}\left(\frac{e(1 + \beta_1)}{2\beta_3}\right)^{\beta_3 m}\left(\frac{ed}{\beta_3 m}\right)^{\beta_3 m} \le \left(\left(\frac{m}{d}\right)^{(1 - \beta_3)/\beta_3}\frac{e^2}{\beta_3^2}\right)^{\beta_3 m}.$$
Choosing $\beta_1 = 1/50$ (and $\beta_2, \beta_3$ accordingly) we obtain
$$\sum_{\ell \ge \beta_2 m} p_\ell \le \left(\frac{k}{50d}\right)^{k/100}.$$
Since $|E| = k - 2|E_1|$ on $\{A = A_0\}$, the event $\{|E| \le \beta_1 k\}$ forces $|E_1| \ge (1 - \beta_1)k/2 \ge \beta_2 m$ (by the choice of $\beta_2$ and since $m \le (1 + \beta_1)k/2$), hence
$$\mathbb{P}(|E| \le \beta_1 k \mid A = A_0) \le \sum_{\ell \ge \beta_2 m} p_\ell \le \left(\frac{k}{50d}\right)^{k/100},$$
which completes the proof. ✷

In this section, we complete the proof of the main result for adjacency matrices. The general scheme is similar to the one in [9, Section 4]. The main idea of the proof of Theorem A can be roughly described as follows: after discarding all small "bad" events (namely, the existence of almost constant null-vectors, big zero minors, and rows with largely overlapping supports) we split the remaining singular matrices from $M_{n,d}$ into the set of matrices of rank $n - 1$ and the set of matrices of rank at most $n - 2$. Then, combining linear-algebraic arguments (Lemmas 4.16 and 4.17) with the shuffling procedure (Lemma 4.14), we show that these two sets have a small proportion inside the sets $M_{n,d}$ and $\{M \in M_{n,d} : \mathrm{rk}\,M \le n - 1\}$, respectively. This implies that their union has small probability.

The argument is rather technical and involves various events and "linear-algebraic" objects. To make the reading more convenient, we first group the notation used in this section. For every $k \le n$, let
$$E_k = \{M \in M_{n,d} : \mathrm{rk}\,M \le k\}.$$
Our purpose is to estimate the probability of the event $E_{n-1}$.

Let $M$ be a matrix from $M_{n,d}$ with rows $R_i$, $i \le n$. For every $i \in [n]$, we denote by $M_i$ the $(n - 1) \times n$ minor of $M$ obtained by removing the row $R_i$. Further, take a pair of distinct indices $(i, j)$, $i \ne j \le n$. By $M_{i,j}$ we denote the $(n - 2) \times n$ minor of $M$ obtained by removing the rows $R_i, R_j$. Additionally, define
$$V_{i,j} = V_{i,j}(M) = \mathrm{span}\{R_k,\ k \ne i, j\} \quad \text{and} \quad F_{i,j} = F_{i,j}(M) = \mathrm{span}\{V_{i,j},\ R_i + R_j\}.$$
In what follows, we write simply $V_{i,j}$ and $F_{i,j}$ instead of $V_{i,j}(M)$ and $F_{i,j}(M)$, as the matrix $M$ will always be clear from the context. Note that the random vector $R_i + R_j$ and the random subspaces $V_{i,j}$ and $F_{i,j}$ are fully determined by the $(n - 2) \times n$ minor $M_{i,j}$ (indeed, since all column sums equal $d$, we have $R_i + R_j = (d, \dots, d) - \sum_{k \ne i,j} R_k$). As we will see later, to be able to successfully apply the aforementioned shuffling to a pair of rows $R_i, R_j$, we will need at our disposal a vector orthogonal to the subspace $F_{i,j}$ whose restriction to the union of the supports of $R_i$ and $R_j$ has many pairs of distinct coordinates.
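In small cases such a vector can be found directly by linear algebra. The sketch below (illustration only: a toy random 0/1 matrix stands in for an adjacency matrix from $M_{n,d}$, and the tolerance is ours) computes an orthonormal basis of $F_{1,2}^{\perp}$ and reports, for one basis vector $v$, the smallest number of coordinates of $v$ on $\mathrm{supp}(R_1 + R_2)$ that differ from any fixed value — the quantity that the event $E^{i,j}(q)$ defined next requires to be at least $q$.

```python
import numpy as np
from scipy.linalg import null_space
from collections import Counter

rng = np.random.default_rng(0)

def perp_of_F(M, i, j):
    """Orthonormal basis (columns) of F_{i,j}^perp, where F_{i,j} is
    spanned by the rows of M other than i, j together with R_i + R_j."""
    others = np.delete(M, [i, j], axis=0)
    A = np.vstack([others, M[i] + M[j]])
    return null_space(A)

M = rng.integers(0, 2, size=(8, 8)).astype(float)  # toy stand-in matrix
V = perp_of_F(M, 0, 1)
if V.shape[1] > 0:
    v = np.round(V[:, 0], 8)
    supp = np.flatnonzero(M[0] + M[1])
    counts = Counter(v[supp].tolist())
    s = len(supp)
    # for each value lambda, #{k in supp : v_k != lambda} = s - multiplicity
    print("min over lambda of #{k in supp : v_k != lambda}:",
          s - max(counts.values(), default=0))
```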
Of course, such a vector may not exist for some matrices $M \in M_{n,d}$. We start by defining, for every $q \in [n]$ and $i \ne j \le n$, a "good" subset of $M_{n,d}$ as follows:
$$E^{i,j}(q) = \bigl\{M \in M_{n,d} : \exists v \perp F_{i,j} \text{ such that } \forall \lambda \in \mathbb{R}\ \ |\{k \in \mathrm{supp}(R_i + R_j) : v_k \ne \lambda\}| \ge q\bigr\}. \quad (33)$$
For a matrix in this set we fix one vector satisfying (33); in fact, we define it as a function of the matrix. The crucial fact for our proof is that, since $F_{i,j}$ and $R_i + R_j$ are uniquely determined by $M_{i,j}$, we can fix such a vector for the whole class of matrices "sharing" the same minor $M_{i,j}$.

Definition 4.13. Given $M \in E^{i,j}(q)$, consider the equivalence class
$$H^{i,j}_M(q) = \{\widetilde M \in E^{i,j}(q) : \widetilde M_{i,j} = M_{i,j}\}.$$
For every equivalence class $H^{i,j}_M(q)$ fix one vector $v = v(M, q, i, j)$ satisfying
$$v \perp F_{i,j} \quad \text{and} \quad \forall \lambda \in \mathbb{R} \ \ |\{k \in \mathrm{supp}(R_i + R_j) : v_k \ne \lambda\}| \ge q. \quad (34)$$
Whenever $q$ and the indices $i, j$ are clear from the context, we write $v(M)$ instead of $v(M, q, i, j)$.

One of the key ideas of the proof of Theorem A is to show that for most matrices $M$ in $H^{1,2}_M(q)$, the vector $v(M)$ does not belong to the kernel of $M$. To this end we introduce a subset of $E^{1,2}(q)$,
$$K^{1,2}(q) = \{M \in E^{1,2}(q) : v(M) \in \ker M\}.$$
In Lemma 4.14 below we show that the ratio $|K^{1,2}(q)|/|M_{n,d}|$ goes to zero as $d \to \infty$.

As we mentioned above, in the proof we essentially restrict ourselves to the set of matrices which have no almost constant null-vectors, no big zero minors, and no rows with largely overlapping supports. To define this set formally, let $0 < p \le 1/3$, $2 \le q \le d/2$, and $\varepsilon \in (0, 1)$, and set
$$\Theta = \Theta(p, q, \varepsilon) := E_{AC}(p) \cap \Omega_\varepsilon' \setminus \Omega_0\bigl(p/2q,\ p/2\bigr),$$
where $\Omega_\varepsilon'$, $\Omega_0(p/2q, p/2)$, and $E_{AC}(p)$ were introduced in Proposition 3.3, (20), and (27), respectively. By Proposition 3.3, Theorem 4.1, and (21) we have
$$\mathbb{P}\bigl(\Theta^c\bigr) \le n^2\left(\frac{ed}{\varepsilon n}\right)^{\varepsilon d} + \left(\frac{Cd}{n}\right)^{cd} + e^{-c_1 n} \le 2n^2\left(\frac{ed}{\varepsilon n}\right)^{\varepsilon d} \le \left(\frac{ed}{\varepsilon n}\right)^{\varepsilon d/2} \quad (35)$$
provided that $p \le c/\ln d$, $q = \bar c\, p d$, $C_1/\varepsilon \le d \le \varepsilon n/2$, and $\varepsilon$ is small enough.

Further we will need two more auxiliary events dealing with the $(n - 2) \times n$ minors $M_{i,j}$ of $M$. For $i \ne j$, introduce
$$E^{i,j}_{n-2} = \{M \in M_{n,d} : \mathrm{rk}\,M_{i,j} = n - 2 \ \text{ and } \ R_i + R_j \notin V_{i,j}\}$$
and
$$E^{i,j}_{rk} = \{M \in M_{n,d} : R_i, R_j \in V_{i,j}\}.$$
Note that for every $M \in E^{i,j}_{rk}$ we have $\mathrm{rk}\,M = \mathrm{rk}\,M_{i,j}$. In what follows we prove several statements involving the events $E^{i,j}_{n-2}$, $E^{i,j}_{rk}$, and $K^{1,2}(q)$.

The next lemma encapsulates the shuffling procedure. Recall that $\Omega_\varepsilon'$ was defined in Proposition 3.3.

Lemma 4.14. Let $\varepsilon \in (0, 1)$ and $2\varepsilon d < q \le d/2$. Then
$$\mathbb{P}\bigl(K^{1,2}(q) \,\big|\, E^{1,2}(q) \cap \Omega_\varepsilon'\bigr) \le \frac{C}{\sqrt{q - 2\varepsilon d}}.$$

Proof. Note that
$$K^{1,2}(q) = \{M \in E^{1,2}(q) : \langle v(M), R_1\rangle = 0\}.$$
Let $M \in E^{1,2}(q) \cap \Omega_\varepsilon'$. Denote
$$S_{1,2} = S_{1,2}(M) = \mathrm{supp}\,R_1 \cup \mathrm{supp}\,R_2, \qquad s_{1,2} = s_{1,2}(M) = \mathrm{supp}\,R_1 \cap \mathrm{supp}\,R_2,$$
and set $S = S(M) = S_{1,2} \setminus s_{1,2}$, $m_1 = |S_{1,2}|$, $m_2 = |s_{1,2}|$, and $m = |S|$. Note that $m_2 = 2d - m_1$ and $m = m_1 - m_2 = 2(d - m_2)$. By the definition of $\Omega_\varepsilon'$, we have
$$m_1 \ge 2(1 - \varepsilon)d \quad \text{and} \quad m_2 \le 2\varepsilon d.$$
By (34), the vector $v := v(M)$ satisfies
$$\forall \lambda \in \mathbb{R} \quad |\{i \in S : v_i \ne \lambda\}| \ge q - m_2.$$
Since $q - m_2 \le m/3$, by Claim 4.7 (see Remark 4.8) there exists $J \subset S$ such that
$$q - m_2 \le |J| \le m - (q - m_2) \quad \text{and} \quad \forall i \in J \ \forall j \in S \setminus J \ \ v_i \ne v_j. \quad (36)$$
We compute the desired probability as follows. For every (fixed) $M \in E^{1,2}(q) \cap \Omega_\varepsilon'$ consider the equivalence class
$$\mathcal{F}_M := H^{1,2}_M(q) \cap \Omega_\varepsilon' = \bigl\{\widetilde M \in E^{1,2}(q) \cap \Omega_\varepsilon' : \widetilde M_{1,2} = M_{1,2}\bigr\}.$$
Note that by construction $S_{1,2}(\widetilde M) = S_{1,2}(M) = S_{1,2}$, $s_{1,2}(\widetilde M) = s_{1,2}(M) = s_{1,2}$, and $v(\widetilde M) = v(M) = v$ for every matrix $\widetilde M$ in $\mathcal{F}_M$.
Therefore it is enough to show that the proportion of matrices $\widetilde M \in \mathcal{F}_M$ satisfying $\langle v, R_1(\widetilde M)\rangle = 0$ is small. Every matrix $\widetilde M \in \mathcal{F}_M$ is determined by its minor $M_{1,2}$ (which is fixed on $\mathcal{F}_M$) and its first row $R_1(\widetilde M)$. Thus, to determine a matrix in $\mathcal{F}_M$ it is enough to choose the support of the first row, which is a subset of $S_{1,2}$. Since the $m_2$ elements of $s_{1,2}$ belong to $\mathrm{supp}\,R_1$ and are fixed, we have to calculate how many $(d - m_2)$-element subsets $B$ of the $m$-element set $S$ exist such that
$$\langle v, R_1\rangle = \sum_{i \in B \cup s_{1,2}} v_i = 0, \quad \text{that is,} \quad \sum_{i \in B} v_i = a := -\sum_{i \in s_{1,2}} v_i.$$
For vectors $v = v(M)$ satisfying (36) this was estimated in Proposition 4.10 (note that $m = 2(d - m_2)$, that $a$ is independent of $B \subset S$, and apply the proposition with $q - m_2$ and $m/2$ playing the roles of $k$ and $d$). Applying this for all classes $\mathcal{F}_M$, we obtain the desired bound. ✷

In what follows, we show that, up to intersection with $\Theta \cap E^{1,2}(q) \cap E^{1,2}_{n-2}$ (resp., $\Theta \cap E^{1,2}(q) \cap E^{1,2}_{rk}$), the event $E_{n-1} \setminus E_{n-2}$ (resp., $E_{n-2}$) is a subset of $K^{1,2}(q)$, hence has a small probability. Our treatment of singular matrices $M$ with $\mathrm{rk}\,M = n - 1$ differs from the case $\mathrm{rk}\,M \le n - 2$. In the case $\mathrm{rk}\,M = n - 1$, we fix a left null-vector $y = y(M)$ and a right null-vector $x = x(M)$ and choose a row $R_i$ such that $\mathrm{rk}\,M_i = \mathrm{rk}\,M$ and $x$ has many distinct coordinates on $\mathrm{supp}\,R_i$. In the next step, we choose a second row $R_j$ so that the minor $M_{i,j}$ is of maximal rank, that is, $\mathrm{rk}\,M_{i,j} = n - 2$ in the case $\mathrm{rk}\,M = n - 1$, and $\mathrm{rk}\,M_{i,j} = \mathrm{rk}\,M$ in the case $\mathrm{rk}\,M \le n - 2$. We also show that there are many choices for such $i$ and $j$. Finally, using the shuffling, we show, in a sense, that we can increase the rank of a matrix by "playing" with rows $i$ and $j$ only, i.e. that the events $E_{n-2}$ and $E_{n-1}$ are small inside $E_{n-1}$ and $M_{n,d}$, respectively. The next lemma describes the set of "good" $i$'s for the first step.

Lemma 4.15. Let $0 < p \le 1/3$ and $q \ge 2$ be such that $pn/2q$ is an integer. Further, let $M \in E_{n-1} \cap E_{AC}(p) \setminus \Omega_0\bigl(p/2q,\ p/2\bigr)$ and $x \in \ker M \setminus \{0\}$, $y \in \ker M^T \setminus \{0\}$. Consider the set of indices
$$I_M(x, y) = \{i \in \mathrm{supp}\,y : \forall \lambda \in \mathbb{R} \ \ |\{j \in \mathrm{supp}\,R_i : x_j \ne \lambda\}| \ge q\}.$$
Then $|I_M(x, y)| \ge pn/2$.

Note that for $y \in \ker M^T$ we have $\sum_i y_i R_i = 0$ and $I_M(x, y) \subset \mathrm{supp}\,y$. Therefore, removing a row $R_i$ with $i \in I_M(x, y)$ does not decrease the rank of $M$; that is, $\mathrm{rk}\,M_i = \mathrm{rk}\,M$.

Proof of Lemma 4.15. Since $x \notin AC(p)$ and $p \le 1/3$, by Claim 4.7 there exists a subset $J_x \subset [n]$ such that
$$pn \le |J_x| \le (1 - p)n \quad \text{and} \quad \forall i \in J_x \ \forall j \notin J_x \ \ x_i \ne x_j.$$
Now we compute how many rows have at least $q$ ones in $J_x$ and at least $q$ ones in $J_x^c$. Since $M \notin \Omega_0(p/2q, p/2)$, applying Lemma 3.6 with $\alpha = p/(2q)$ and $\beta = p$, we get
$$|\{i : |\mathrm{supp}\,R_i \cap J_x| \ge q\}| \ge (1 - p/2q)n \quad \text{and} \quad |\{i : |\mathrm{supp}\,R_i \cap J_x^c| \ge q\}| \ge (1 - p/2q)n.$$
Therefore,
$$|\{i : |\mathrm{supp}\,R_i \cap J_x| \ge q \ \text{and} \ |\mathrm{supp}\,R_i \cap J_x^c| \ge q\}| \ge (1 - p/q)n.$$
By the construction of the set $J_x$, this implies that the set
$$I := \{i : \forall \lambda \in \mathbb{R} \ \ |\{j \in \mathrm{supp}\,R_i : x_j \ne \lambda\}| \ge q\}$$
has cardinality at least $(1 - p/q)n$. Finally, since $y \notin AC(p)$, we have $|\mathrm{supp}\,y| \ge pn$, which implies
$$|I_M(x, y)| = |I \cap \mathrm{supp}\,y| \ge pn - pn/q \ge pn/2$$
and completes the proof. ✷
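Lemma 4.15 can also be checked directly on explicit data. A minimal sketch (illustration only: a generic real rank-$(n-1)$ matrix stands in for an adjacency matrix, and the kernel vectors are computed numerically):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
n, q = 12, 3

# build a rank-(n-1) matrix: last row is a combination of the others
A = rng.standard_normal((n - 1, n))
M = np.vstack([A, rng.standard_normal(n - 1) @ A])

x = null_space(M)[:, 0]       # right null-vector
y = null_space(M.T)[:, 0]     # left null-vector
supp_y = np.flatnonzero(np.abs(y) > 1e-9)

def distinct_enough(row, x, q, tol=1e-9):
    """True if for every value lambda, at least q coordinates of x on
    supp(row) differ from lambda (the condition defining I_M(x, y))."""
    vals = x[np.abs(row) > tol]
    if len(vals) < q:
        return False
    # the worst lambda is the most frequent value of x on the support
    top = max((np.sum(np.abs(vals - v) <= tol) for v in vals), default=0)
    return len(vals) - top >= q

I_M = [i for i in supp_y if distinct_enough(M[i], x, q)]
print(f"|I_M(x,y)| = {len(I_M)} out of n = {n}")
```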
Now we consider the set of matrices $M \in \Theta$ with $\mathrm{rk}\,M \le n - 2$.

Lemma 4.16. Let $p, q$ satisfy the assumptions of Lemma 4.15, let $\varepsilon \in (0, 1)$, and let $E_2' = E_{n-2} \cap \Theta$ with $\Theta = \Theta(p, q, \varepsilon)$. Then
$$\mathbb{P}\bigl(E_2'\bigr) \le 4\,p^{-2}\,\mathbb{P}\bigl(E^{1,2}_{rk} \cap E^{1,2}(q) \cap E_2'\bigr).$$

Proof. Fix $M \in E_2'$. There exist $x \in \ker M \setminus \{0\}$ and $y \in \ker M^T \setminus \{0\}$, that is,
$$\forall i \le n \ \ x \perp R_i \quad \text{and} \quad \sum_{i \in \mathrm{supp}\,y} y_i R_i = 0.$$
Note that by the definition of $\Theta$ we have $x, y \notin AC(p)$. We compute how many ordered pairs $(i, j)$, $i \ne j$, satisfy
$$R_i, R_j \in V_{i,j} \quad \text{and} \quad \forall \lambda \in \mathbb{R} \ \ |\{k \in \mathrm{supp}(R_i + R_j) : x_k \ne \lambda\}| \ge q.$$
By Lemma 4.15, the set $I_M(x, y)$ satisfies $|I_M(x, y)| \ge pn/2$, and for every $i \in I_M(x, y)$ we have $\mathrm{rk}\,M_i = \mathrm{rk}\,M$. Next, since $\mathrm{rk}\,M_i < n - 1$, the set $\ker (M_i)^T \setminus \{0\}$ is non-empty, i.e.
$$\exists\, y^{(i)} \in \mathbb{R}^n \setminus \{0\} \ \text{ such that } \ y^{(i)}_i = 0, \quad \sum_{j=1}^{n} y^{(i)}_j R_j = 0.$$
In particular, $y^{(i)} \in \ker M^T \setminus \{0\}$, and, since $M \in E_{AC}(p)$, $y^{(i)}$ has at least $pn$ non-zero coordinates; moreover, for every $j \le n$ such that $y^{(i)}_j \ne 0$ one has $R_j \in V_{i,j}$ (and hence also $R_i \in V_{i,j}$). This shows that for every $M \in E_2'$ there are at least $(pn)^2/4$ ordered pairs $(i, j)$ with $i \ne j$ satisfying $R_i, R_j \in V_{i,j}$. Obviously $x \perp F_{i,j}$ for every $i, j \le n$. Hence for each such pair $(i, j)$ we have
$$R_i, R_j \in V_{i,j}, \quad x \perp F_{i,j}, \quad \text{and} \quad \forall \lambda \in \mathbb{R} \ \ |\{k \in \mathrm{supp}(R_i + R_j) : x_k \ne \lambda\}| \ge q,$$
implying that $M$ belongs to at least $(pn)^2/4$ of the events $\{E^{i,j}_{rk} \cap E^{i,j}(q) \cap E_2'\}_{i \ne j}$. Since by symmetry all these events have the same probability, Claim 4.9 gives
$$\frac{(pn)^2}{4}\,\mathbb{P}(E_2') \le n^2\,\mathbb{P}\bigl(E^{1,2}_{rk} \cap E^{1,2}(q) \cap E_2'\bigr),$$
which proves the lemma. ✷

Lemma 4.17. Let $p, q$ satisfy the conditions of Lemma 4.15, let $\varepsilon \in (0, 1)$, and let $E_1' = (E_{n-1} \setminus E_{n-2}) \cap \Theta$ with $\Theta = \Theta(p, q, \varepsilon)$. Then
$$\mathbb{P}\bigl(E_1'\bigr) \le 4\,p^{-2}\,\mathbb{P}\bigl(E^{1,2}_{n-2} \cap E^{1,2}(q) \cap E_1'\bigr).$$

Proof. Repeating the first step of the proof of Lemma 4.16, we fix $M \in E_1'$, $x \in \ker M \setminus \{0\}$, $y \in \ker M^T \setminus \{0\}$. Then by Lemma 4.15 the set of indices $I_M(x, y)$ has cardinality $|I_M(x, y)| \ge pn/2$, and for every $i \in I_M(x, y)$ the $(n - 1) \times n$ minor $M_i$ satisfies $\mathrm{rk}\,M_i = \mathrm{rk}\,M$. Now we calculate how many ordered pairs $(i, j)$, $i \ne j$, exist such that
$$\mathrm{rk}\,M_{i,j} = n - 2 \quad \text{and} \quad R_i + R_j \notin V_{i,j}.$$
Let $i \in I_M(x, y)$. Since $y \notin AC(p)$, there are at least $pn$ choices of $j$ such that $y_j \ne y_i$. Fix such a $j$. We claim that then $R_i + R_j \notin V_{i,j}$. Indeed, otherwise
$$R_i + R_j = \sum_{\ell \ne i,j} z_\ell R_\ell$$
for some $z_\ell \in \mathbb{R}$; hence there exists $w \in \ker M^T \setminus \{0\}$ such that $w_i = w_j$. Since the dimension of $\ker M^T$ is one, we have $y = \lambda w$ for some $\lambda \in \mathbb{R}$, which contradicts the condition $y_i \ne y_j$. Therefore, there are at least $(pn)^2/4$ ordered pairs $(i, j)$ with $i \ne j$ satisfying
$$\mathrm{rk}\,M_{i,j} = n - 2, \quad R_i + R_j \notin V_{i,j}, \quad x \perp F_{i,j}, \quad \forall \lambda \in \mathbb{R} \ \ |\{k \in \mathrm{supp}(R_i + R_j) : x_k \ne \lambda\}| \ge q.$$
In other words, the matrix $M$ belongs to at least $(pn)^2/4$ of the events $E^{i,j}_{n-2} \cap E^{i,j}(q) \cap E_1'$. Thus, as in the proof of Lemma 4.16, the claim follows from Claim 4.9. ✷

Proof of Theorem A. Let $p$, $q$, $\varepsilon$ be chosen later to satisfy the assumptions of the corresponding statements. By Lemmas 4.16, 4.17 and (35) we obtain
$$\mathbb{P}(E_{n-1}) \le \mathbb{P}\bigl(E_{n-2} \cap \Theta\bigr) + \mathbb{P}\bigl((E_{n-1} \setminus E_{n-2}) \cap \Theta\bigr) + \left(\frac{ed}{\varepsilon n}\right)^{\varepsilon d/2} \le 4\,p^{-2}\bigl(\mathbb{P}(A) + \mathbb{P}(B)\bigr) + \left(\frac{ed}{\varepsilon n}\right)^{\varepsilon d/2},$$
where
$$A = E^{1,2}_{rk} \cap E^{1,2}(q) \cap E_{n-2} \cap \Theta \quad \text{and} \quad B = E^{1,2}_{n-2} \cap E^{1,2}(q) \cap (E_{n-1} \setminus E_{n-2}) \cap \Theta.$$
We now show that
$$A \cup B \subset K^{1,2}(q) \cap E^{1,2}(q) \cap \Omega_\varepsilon'.$$
In other words, we verify that for a matrix $M \in A \cup B$ the vector $v(M) \in F^{\perp}_{1,2}$ (see Definition 4.13) belongs to $\ker M$. Indeed, if $M \in A$, then $R_1, R_2 \in V_{1,2}$. Using the condition $v(M) \in F^{\perp}_{1,2}$, we immediately get $v(M) \in \ker M$. If $M \in B$, then $\mathrm{rk}\,M = n - 1$, $\mathrm{rk}\,M_{1,2} = n - 2$, and $R_1 + R_2 \notin V_{1,2}$. Therefore $\dim F_{1,2} = n - 1$. Since $\ker M \subseteq F^{\perp}_{1,2}$ and $\dim \ker M = \dim F^{\perp}_{1,2} = 1$, we infer $\ker M = F^{\perp}_{1,2}$ and thus $v(M) \in \ker M$. Finally, note that $A$ and $B$ are disjoint, so $\mathbb{P}(A) + \mathbb{P}(B) = \mathbb{P}(A \cup B)$. Applying Lemma 4.14, we obtain
$$\mathbb{P}(E_{n-1}) \le 4\,p^{-2}\,\mathbb{P}\bigl(K^{1,2}(q) \cap E^{1,2}(q) \cap \Omega_\varepsilon'\bigr) + \left(\frac{ed}{\varepsilon n}\right)^{\varepsilon d/2} \le \frac{4\,C\,p^{-2}}{\sqrt{q - 2\varepsilon d}} + \left(\frac{ed}{\varepsilon n}\right)^{\varepsilon d/2}.$$
Finally we choose the parameters.
Let $c_1, c_2$ be sufficiently small positive absolute constants. Choose $p = c_1/\ln d$ and $q$ to be the largest integer not exceeding $\bar c\, p d = c_1 \bar c\, d/\ln d$ (we slightly adjust $c_1, \bar c$ so that $pn/2q$ is also an integer). Let $\varepsilon = q/(4d) \approx 1/\ln d$ (note that then the condition $C_1/\varepsilon \le d \le \varepsilon n/2$ is satisfied in the range $C \le d \le cn/\ln n$ of Theorem A). ✷

Remark 4.18. We could choose $q \approx -c\,dp/\ln p \approx d/((\ln\ln d)\ln d)$ and then $\varepsilon = q/(4d) \approx 1/((\ln\ln d)\ln d)$. This would lead to the restriction $d \le cn/((\ln\ln n)\ln n)$ instead of $d \le cn/\ln n$ in Theorem A.

References

[1] N. Alon and J.H. Spencer, The probabilistic method, third edition, Wiley-Interscience Series in Discrete Mathematics and Optimization, John Wiley & Sons, Inc., Hoboken, NJ, 2008.
[2] R. Bauerschmidt, J. Huang, A. Knowles, and H.-T. Yau, Bulk eigenvalue statistics for random regular graphs, arXiv:1505.06700.
[3] R. Bauerschmidt, A. Knowles, and H.-T. Yau, Local semicircle law for random regular graphs, arXiv:1503.08702.
[4] B. Bollobás, A probabilistic proof of an asymptotic formula for the number of labelled regular graphs, European J. Combin. 1 (1980), 311–316.
[5] B. Bollobás, The isoperimetric number of random regular graphs, European J. Combin. 9 (1988), 241–244.
[6] J. Bourgain, V.H. Vu, and P.M. Wood, On the singularity probability of discrete random matrices, J. Funct. Anal. 258 (2010), no. 2, 559–603.
[7] S. Chatterjee, A short survey of Stein's method, Proceedings ICM, Vol. 4, 2014, 1–24.
[8] N.A. Cook, Discrepancy properties for random regular digraphs, Random Structures Algorithms, to appear.
[9] N.A. Cook, On the singularity of adjacency matrices for random regular digraphs, Probab. Theory Related Fields, to appear.
[10] C. Cooper, A. Frieze, B. Reed, and O. Riordan, Random regular graphs of non-constant degree: independence and chromatic number, Combin. Probab. Comput. 11 (2002), 323–341.
[11] K.P. Costello, T. Tao, and V. Vu, Random symmetric matrices are almost surely nonsingular, Duke Math. J. 135 (2006), no. 2, 395–413.
[12] I. Dumitriu and S. Pal, Sparse regular random graphs: spectral density and eigenvectors, Ann. Probab. 40 (2012), 2197–2235.
[13] P. Erdős, On a lemma of Littlewood and Offord, Bull. Amer. Math. Soc. 51 (1945), 898–902.
[14] J. Friedman, A proof of Alon's second eigenvalue conjecture and related problems, Mem. Amer. Math. Soc. 195 (2008), no. 910.
[15] A. Frieze, Random structures and algorithms, Proceedings ICM, Vol. 1, 2014, 311–340.
[16] A.M. Frieze and T. Łuczak, On the independence and chromatic numbers of random regular graphs, J. Combin. Theory Ser. B 54 (1992), 123–132.
[17] S. Hoory, N. Linial, and A. Wigderson, Expander graphs and their applications, Bull. Amer. Math. Soc. (N.S.) 43 (2006), no. 4, 439–561.
[18] J. Kahn, J. Komlós, and E. Szemerédi, On the probability that a random ±1-matrix is singular, J. Amer. Math. Soc. 8 (1995), no. 1, 223–240.
[19] D.J. Kleitman, On a lemma of Littlewood and Offord on the distributions of linear combinations of vectors, Adv. Math. 5 (1970), 155–157.
[20] B. Kolesnik and N. Wormald, Lower bounds for the isoperimetric numbers of random regular graphs, SIAM J. Discrete Math. 28 (2014), 553–575.
[21] J. Komlós, On the determinant of (0,1) matrices, Studia Sci. Math. Hungar. 2 (1967), 7–21.
[22] J. Komlós, circulated manuscript, available online at http://math.rutgers.edu/~komlos/01short.pdf.
[23] M. Krivelevich, B. Sudakov, V.H. Vu, and N.C. Wormald, Random regular graphs of high degree, Random Structures Algorithms 18 (2001), 346–363.
[24] A.E. Litvak, A. Lytova, K. Tikhomirov, N. Tomczak-Jaegermann, and P. Youssef, Anti-concentration property for random digraphs and invertibility of their adjacency matrices, C. R. Math. Acad. Sci. Paris 354 (2016), 121–124.
[25] B.D. McKay, Subgraphs of random graphs with specified degrees, Congr. Numer. 33 (1981), 213–223.
[26] B.D. McKay, The expected eigenvalue distribution of a large regular graph, Linear Algebra Appl. 40 (1981), 203–216.
[27] H.H. Nguyen, Inverse Littlewood–Offord problems and the singularity of random symmetric matrices, Duke Math. J. 161 (2012), 545–586.
[28] H.H. Nguyen, On the singularity of random combinatorial matrices, SIAM J. Discrete Math. 27 (2013), 447–458.
[29] M. Rudelson and R. Vershynin, Non-asymptotic theory of random matrices: extreme singular values, Proceedings ICM, Vol. 3, 2010, 1576–1602.
[30] M. Rudelson and R. Vershynin, The Littlewood–Offord problem and invertibility of random matrices, Adv. Math. 218 (2008), 600–633.
[31] J.K. Senior, Partitions and their representative graphs, Amer. J. Math. 73 (1951), 663–689.
[32] T. Tao and V. Vu, Inverse Littlewood–Offord theorems and the condition number of random discrete matrices, Ann. of Math. 169 (2009), 595–632.
[33] T. Tao and V. Vu, From the Littlewood–Offord problem to the circular law: universality of the spectral distribution of random matrices, Bull. Amer. Math. Soc. (N.S.) 46 (2009), 377–396.
[34] T. Tao and V. Vu, On the singularity probability of random Bernoulli matrices, J. Amer. Math. Soc. 20 (2007), 603–628.
[35] L.V. Tran, V.H. Vu, and K. Wang, Sparse random graphs: eigenvalues and eigenvectors, Random Structures Algorithms 42 (2013), 110–134.
[36] R. Vershynin, Invertibility of symmetric random matrices, Random Structures Algorithms 44 (2014), 135–182.
[37] V. Vu, Random discrete matrices, in: Horizons of Combinatorics, Bolyai Soc. Math. Stud. 17, Springer, Berlin, 2008, 257–280.
[38] V.H. Vu, Combinatorial problems in random matrix theory, Proceedings ICM, Vol. 4, 2014, 489–508.

Alexander E. Litvak, Anna Lytova, Konstantin Tikhomirov, and Nicole Tomczak-Jaegermann,
Dept. of Math. and Stat. Sciences, University of Alberta, Edmonton, Alberta, Canada, T6G 2G1.
e-mail: [email protected], [email protected], [email protected], [email protected]

Pierre Youssef,
Université Paris Diderot, Laboratoire de Probabilités et de modèles aléatoires, 75013 Paris, France.
e-mail: [email protected]