Agreement testing theorems on layered set systems
Yotam Dikstein ∗ Irit Dinur † September 4, 2019
Abstract
We introduce a framework of layered subsets, and give a sufficient condition for when a set system supports an agreement test. Agreement testing is a certain type of property testing that generalizes PCP tests such as the plane vs. plane test. Previous work has shown that high dimensional expansion is useful for agreement tests. We extend these results to more general families of subsets, beyond simplicial complexes. These include:

– Agreement tests for set systems whose sets are faces of high dimensional expanders. Our new tests apply to all dimensions of complexes, both in the case of two-sided expansion and in the case of one-sided partite expansion. This improves and extends an earlier work of Dinur and Kaufman (FOCS 2017) and applies to matroids, and potentially many additional complexes.

– Agreement tests for set systems whose sets are neighborhoods of vertices in a high dimensional expander. This family resembles the expander neighborhood family used in the gap-amplification proof of the PCP theorem. This set system is quite natural yet does not sit in a simplicial complex, and demonstrates some versatility in our proof technique.

– Agreement tests on families of subspaces (also known as the Grassmann poset). This extends the classical low degree agreement tests beyond the setting of low degree polynomials.

Our analysis relies on a new random walk on simplicial complexes which we call the "complement random walk" and which may be of independent interest. This random walk generalizes the non-lazy random walk on a graph to higher dimensions, and has significantly better expansion than previously-studied random walks on simplicial complexes.

∗ Weizmann Institute of Science, ISRAEL. email: y [email protected].
† Weizmann Institute of Science, ISRAEL. email: i [email protected].

Contents

References
A Standard Definitions and Claims
 A.1 Expander graphs
  A.1.1 Bipartite Graphs and Bipartite Expanders
 A.2 Properties of Expander Graphs
 A.3 Simplicial Complexes and high dimensional expanders
B From Independent Choice to Expanding Choice
C List of Abbreviations for STAV-Structures
D List of Results
 D.1 Main Theorem
 D.2 Applications of Main Theorem
 D.3 Analysis of the Complement Walk
 D.4 High Dimensional Expander Mixing Lemma
Introduction
Agreement testing is a certain type of property testing. The first agreement testing theorems are the line versus line or plane versus plane low degree agreement tests [RS96, AS97, RS97] that play an important part in various PCP constructions. We discuss the history and evolution of these tests further below.

Abstractly, an agreement test is the following. Let V be a ground set and let S be a family of subsets of V. The object being tested is an ensemble of local functions {f_s ∈ Σ^s | s ∈ S}, with one function per set s ∈ S. The domain of f_s is s itself. A perfect ensemble is an ensemble that comes from a global function g : V → Σ whose domain is the entire vertex set. In a perfect ensemble the local function at s is the restriction of g to the set s, that is, f_s = g↾s for all s ∈ S.

We let G be the set of all perfect ensembles. An agreement test is a property tester for G. It is specified by a distribution over pairs of intersecting subsets s₁, s₂ ∈ S, and the test accepts if the respective local functions agree on the intersection: f_{s₁}↾t = f_{s₂}↾t, where t = s₁ ∩ s₂. A perfect ensemble is clearly accepted with probability 1. The test is c-sound if

    dist(f, G) ≤ c · P_{s₁,s₂}[f_{s₁}↾t ≠ f_{s₂}↾t].    (1.1)

Here the distance dist(f, G) is the minimal fraction of sets s ∈ S that we need to change in f in order to get an ensemble in G.

It is well known (see Example 2.3) that in some cases exact soundness is impossible and we must allow a slightly weaker notion, called γ-approximate soundness. The γ-approximate distance between two ensembles f and g, denoted dist_γ(f, g), is the fraction of sets s in which dist(f_s, g_s) > γ. An agreement test is γ-approximately c-sound if

    dist_γ(f, G) ≤ c · P_{s₁,s₂}[f_{s₁}↾t ≠ f_{s₂}↾t].    (1.2)

This means that if the test succeeds with probability 1 − ε, there must be a global function g : V → Σ such that for all but c · ε of the sets s, dist(f_s, g↾s) ≤ γ.

Why study agreement tests.
The original motivation for agreement tests comes from PCP proof composition: a key step in this construction is to combine many small proofs into one global proof, but without knowing whether the small proofs are consistent with each other. The agreement test ensures that they can be combined together coherently. Indeed, agreement tests are the basis of the "inner verifier" constructed in recent works on unique and 2-to-2 games [DKK+18, BKS19, KMS18].

Recent work [DFH19] used agreement tests in a different context, for proving structure theorems for Boolean functions. The idea is to prove structure for small restrictions of the function, often an easier task, and then apply an agreement testing theorem to combine these structures together.

Agreement tests are a natural family of tests that seems interesting in its own right. This work makes a step towards developing a theory that explains which set systems have agreement tests.
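Concretely, the two-query agreement test is easy to state in code. The following is a minimal illustrative sketch of ours (not from the paper): ensembles are dictionaries mapping each set to its local function, and the test checks agreement on the intersection.

```python
def restriction(g, s):
    """Restrict a global function g (a dict v -> symbol) to the set s."""
    return {v: g[v] for v in s}

def agreement_test(f, pairs):
    """Run the two-query agreement test on the ensemble f = {s: f_s}.

    `pairs` is a list of (s1, s2) sampled from the test distribution D;
    each s is a frozenset of vertices and f[s] maps each v in s to a symbol.
    Returns the empirical acceptance probability.
    """
    accepted = 0
    for s1, s2 in pairs:
        t = s1 & s2  # the intersection on which the two local functions must agree
        if all(f[s1][v] == f[s2][v] for v in t):
            accepted += 1
    return accepted / len(pairs)

# A perfect ensemble, obtained by restricting one global function,
# is accepted with probability 1.
V = range(6)
g = {v: v % 2 for v in V}
S = [frozenset({0, 1, 2}), frozenset({1, 2, 3}), frozenset({2, 3, 4, 5})]
f = {s: restriction(g, s) for s in S}
print(agreement_test(f, [(S[0], S[1]), (S[1], S[2])]))  # 1.0
```

The interesting direction, which the theorems below address, is the converse: if the empirical acceptance probability is close to 1, the ensemble must be close to a perfect one.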
The STAV layered set system
We describe a three layered set system which we call a STAV. Looking closely at agreement tests, we can always model them with three layers: the vertices (V), the sets (S) and the possible intersections between sets (T).¹ The STAV has an additional so-called "Amplification" layer (A) that captures an amplification property that occurs in many interesting settings. A STAV consists of the layers (S, T, A, V) together with the following three distributions:

– The STAV distribution - a distribution over (s, t, a, v) with s ⊃ t ⊃ a ∪̇ {v}.

– The STS distribution - a distribution over s₁, t, s₂ that gives the agreement testing distribution and in addition a subset t ⊆ s₁ ∩ s₂.

– The VASA distribution - a distribution over v, a₁, s, a₂ whose role will be made clear in the analysis.

A STAV is called γ-good if these distributions (and some local views of them) satisfy certain spectral conditions.

¹ In some cases the test can query more than two subsets, as in the so-called Z-test of [IKW12], but in this paper we restrict attention only to two query tests.

The surprise parameter.
Based on the STAV structure, it is natural to define a parameter which we call the surprise. This parameter depends both on the ensemble f = {f_s} and on the STAV, and in some cases it can be bounded independently of f (this is the case for simplicial complexes). The surprise parameter is a measure of how much amplification the A layer gives us. It is the probability that two intersecting sets agree on a given that they disagree on t (see Definition 2.17). This parameter gives a unified way to address different agreement scenarios.

Main Results
Our main technical theorem (Theorem 2.26) says that every set system that supports a γ-good STAV must support a sound agreement test. This reduces the task of proving an agreement test to the much simpler task of uncovering a STAV underneath the set system.

We list here a few applications of this theorem, starting with agreement tests for high dimensional expanders. Introducing high dimensional expanders is beyond the current scope and we refer the reader to Section A.3 for more introductory definitions.

Theorem 1.1 (Agreement for two-sided HDX - short version of Theorem 4.1). There exists a constant c > 0 such that for every d-dimensional simplicial complex X the following holds. If X is a λ-two-sided d-dimensional HDX for a sufficiently small λ, then X(d) supports a c-sound agreement test.

In Section 4 we describe some corollaries of this theorem for matroids. The only known constructions of sparse two-sided HDXs are by truncating one-sided HDXs; see the Ramanujan complexes of [LSV05a] as well as the construction of HDXs due to [KO18a]. It is natural to study agreement tests for the (non-truncated) one-sided HDX itself. The following theorem gives such a result in the special case that the complex is also (d + 1)-partite.

Theorem 1.2 (Agreement for partite one-sided HDX - short version of Theorem 4.4). There exists a constant c > 0 such that the following holds. Suppose X is a (d + 1)-partite complex that is a λ-one-sided HDX for a sufficiently small λ. Then X(d) supports a c-sound agreement test.

Our next agreement theorem is for a family of subsets that is derived from a high dimensional expander, although itself it does not sit inside a simplicial complex. The subsets in this family are balls, or neighborhoods, of a vertex or a higher dimensional face in a simplicial complex that is a HDX. This construction resembles the set system underlying the gap-amplification based proof of the PCP theorem [Din07], in which an agreement theorem underlies the argument somewhat implicitly.
Theorem 1.3 (Agreement on neighborhoods - short version of Theorem 5.3). There exists a constant c > 0 such that the following holds. Let X be a λ-two-sided high dimensional expander for a sufficiently small λ. For each vertex z ∈ X(0) let B_z be the set of neighbors of z, and let S = {B_z | z ∈ X(0)}. Then S supports a γ-approximately c-sound agreement test for some small γ.

Finally, our last agreement theorem is for a family of subspaces of a vector space, also called the Grassmann. Such families were studied in PCP constructions for special ensembles whose local functions belong to some code. Such ensembles are guaranteed to have the following property: for all s₁, t, s₂, if f_{s₁}↾t ≠ f_{s₂}↾t then dist(f_{s₁}↾t, f_{s₂}↾t) ≥ δ. We call such ensembles δ-ensembles and prove,

Theorem 1.4 (Agreement on subspaces - informal, see Theorem 6.2). There exists a constant c > 0 such that the following holds. Let F^n be a vector space over a finite field F = F_q and let S have a set for every affine subspace of dimension d. Then S supports a q^{-Ω(d)}-approximately c-sound agreement test for δ-ensembles.

For the benefit of the reader we added in Section D a list of theorems proven in this work.
Overview of the proof of our main theorem (Theorem 2.26)
Our main agreement theorem on STAV structures has two parts, as in many previous works. The first part of the proof uses the amplification given by the surprise parameter to construct a family of functions, one for each a ∈ A, that is g = {g_a : reach_a → Σ | a ∈ A}. The reach of a is the set of all vertices v such that {v} ∪̇ a ⊂ s for some s ∈ S. The value g_a(v) is defined by popularity of f_s(v) over all s ⊃ a. This part is standard and occurs in many agreement test analyses.

The second part of the proof is our main new technical contribution. In this step one constructs a global G : V → Σ from the pieces g_a. This is done by showing sufficient agreement between the different g_a's. We consider a graph connecting a pair a₁, a₂ when they sit together inside some s. In earlier works this graph is dense and has very low diameter (2 typically). This can only happen when the functions g_a are defined on a pretty large part of the vertex set (as in [DS14, BDL17, DFH19, RS97]), unlike our context where each reach_a is quite tiny (its size can be a constant, far smaller than |V|). When the diameter is small and reach_a is huge it is easy to stitch the different g_a's together, even when the agreement between the g_a's is rather crude, by taking a very short random walk from a₁ to a₂ to a₃.

In contrast, in our case the diameter is logarithmic and we cannot afford a random walk because the error would build up badly. Instead, we construct the global function G : V → Σ by

    G(v) = pop{g_a(v) | a ∈ reach_v},

i.e. the most popular opinion of the g_a's on v. We show that it has the desired properties. This argument relies on the fact that the VASA random walk (in particular, moving from a₁ to s to a₂) is a very strong expander.
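The plurality decoding used in this second step can be sketched as follows (an illustrative sketch of ours; the pieces g_a are plain dictionaries and we ignore the weighting of the underlying distribution).

```python
from collections import Counter

def popularity_decode(local_pieces, V):
    """Decode a global function G: V -> Sigma by plurality vote.

    `local_pieces` maps each set a to its local function g_a
    (a dict from the vertices in reach(a) to symbols).  For each
    vertex v we take the most popular opinion among all g_a that
    are defined on v (ties are broken arbitrarily by Counter).
    """
    G = {}
    for v in V:
        votes = Counter(g_a[v] for g_a in local_pieces.values() if v in g_a)
        if votes:
            G[v] = votes.most_common(1)[0][0]
    return G

# Three overlapping local pieces; the plurality opinion wins at each vertex.
pieces = {
    'a1': {1: 'x', 2: 'x'},
    'a2': {1: 'x', 2: 'y', 3: 'y'},
    'a3': {2: 'x', 3: 'y'},
}
print(popularity_decode(pieces, [1, 2, 3]))  # {1: 'x', 2: 'x', 3: 'y'}
```

The content of the proof is, of course, not the decoding itself but showing that this G disagrees with few of the f_s; that is where the expansion of the VASA walk enters.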
That such VASA distributions are available is proven through a new type of random walk which we call the complement random walk, and which is discussed separately below.

The only previous work that analyzed an agreement test on a sparse set system (where this "large diameter" problem appears) was [DK17]. Their solution circumvented this problem by reducing to the dense case in a certain way. That reduction is ad-hoc and required an additional external layer of sets above S, which limited the generality of the theorem. The current proof is more direct and works without this technical caveat.

The complement random walk in high dimensional expanders

Several previous works [KM17, DK17, KO18b] analyzed random walks on high dimensional expanders. In this work we study a new type of random walk which we call the complement random walk. Interestingly, independent recent work of Alev, Jeronimo, and Tulsiani [AJT19] studies the same walk, where it is called the "swap walk". The authors use this walk for analyzing an algorithm that solves constraint satisfaction problems (CSPs) on high dimensional expanders.

The complement walk goes from i-face to i-face via a shared j-face, just like the upper and lower random walks previously studied. However it has significantly better expansion, and is hence much more useful for us. We construct with it γ-good STAVs in many of our applications. The problem with many of the previously studied random walks is that they have an inherent "laziness" built in: starting from an i-face, walking to a j-face, and then back to another i-face, there is a substantial probability of landing on a face that intersects, or even equals, the face we started from. In the complement walk, an i-face a₁ moves up to a j-face b ⊃ a₁ and then moves down to another i-face a₂ ⊂ b, conditioned on a₁, a₂ being disjoint (of course we need j ≥ 2i + 1; note that any choice of such j would give the exact same random walk). It turns out (see Theorem 7.1) that this walk has great expansion. This can be seen by examining, for example, the case of i = 0, where the complement walk is just the non-lazy random walk on the underlying graph.

Garland's method. This method proves global properties of simplicial complexes from properties of the links. Originally developed by Garland in [Gar73], it is used in many works such as [EK16, DK17, Opp18a].

We believe these random walks are interesting on their own account. These walks generalize the non-lazy adjacency operator in a graph, and the bipartite adjacency operator in a bipartite graph, to high dimensions. As a bonus we show an immediate application for these walks: a new high dimensional expander mixing lemma for sets in all dimensions (see Lemma 7.14 and Lemma 7.15), extending the work of [LGE15, Opp18b].
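One step of the complement walk can be sketched as follows (our own illustration, not code from the paper; for simplicity, faces are sampled uniformly rather than according to the complex's distribution).

```python
import random
from itertools import combinations

def complement_walk_step(a1, faces_j, i):
    """One step of the complement walk on i-faces (faces of size i + 1).

    `faces_j` lists the j-faces (as frozensets) of the complex, with
    j >= 2i + 1, so that a disjoint i-face exists inside every b ⊇ a1.
    Go up to a random j-face b containing a1, then down to a random
    i-face a2 ⊂ b that is disjoint from a1.
    """
    b = random.choice([b for b in faces_j if a1 <= b])
    candidates = [frozenset(c) for c in combinations(b - a1, i + 1)]
    return random.choice(candidates)

# For i = 0 and j = 1 this is exactly the non-lazy random walk on the
# underlying graph: from a vertex, move to the *other* endpoint of a
# random incident edge.
edges = [frozenset(e) for e in [(0, 1), (1, 2), (2, 0)]]
v = frozenset({0})
w = complement_walk_step(v, edges, 0)
print(w != v)  # True
```

Note that by sampling a₂ only from the faces disjoint from a₁, the laziness of the usual up-down walk is removed by construction.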
More background and context
As mentioned earlier, the first agreement testing theorems are the line versus line or plane versus plane low degree agreement tests [RS96, AS97, RS97] that play an important part in various PCP constructions. Combinatorial analogs of these theorems were subsequently dubbed "direct product tests" and studied in a sequence of works [GS97, DR06, DG08, IKW12, DS14, DL17]. For a long while there were only two prototypical set systems for which agreement tests were known:

– All k-dimensional subspaces of some vector space

– All k-element subsets of an underlying ground set

Each of these has several variants (varying the field size and ambient dimension, deciding whether the sets are ordered or not, etc.).

The study of agreement tests initially came as a part of a PCP construction, as in the case of the low degree agreement tests, and later in works leading towards combinatorial proofs for the PCP theorem, as started in [GS97] and continued in [DR06, Din07].

In this section we assume familiarity with high dimensional link expansion; see Section A.3 for formal definitions.
We begin with the definition of an agreement expander, similar to that of [DK17]. Let S be a family of subsets of a ground set V. An ensemble of local functions is a collection {f_s : s → Σ | s ∈ S} consisting, for each subset s ∈ S, of a function whose domain is s. A perfect ensemble is one that comes from a global function g : V → Σ, namely f_s = g↾s for all s ∈ S. We denote the set of all perfect ensembles by

    G(V; Σ) = { {g↾s}_{s∈S} | g : V → Σ }.

An agreement test is given by a distribution D over pairs of intersecting subsets:

– Input: An ensemble of local functions {f_s : s → Σ | s ∈ S}

– Test: Choose a random edge {s₁, s₂} according to the distribution D, let t = s₁ ∩ s₂, and accept iff f_{s₁}↾t = f_{s₂}↾t.

We denote by rej_D(f) the probability that the agreement test rejects a given ensemble f = {f_s}. A perfect ensemble is clearly accepted with probability 1. We say that the test is sound if it is a sound test for the property G(V; Σ) in the standard property testing sense, namely,

Definition 2.1 (Sound agreement test). An agreement test is c-sound if every ensemble f = {f_s} satisfies dist(f, G) ≤ c · rej_D(f).

Finally we can define an agreement expander,

Definition 2.2 (c-agreement expander). An agreement expander is a family S of subsets of a ground set V that supports a c-sound agreement test.

The reason for the term "agreement expander" is the similarity to a Rayleigh quotient, given by

    1/c = inf_{f ∉ G} rej_D(f) / dist(f, G),

where the numerator counts the number of rejecting edges and the denominator measures the distance from the property. See [KL14] for a more detailed analogy between expansion and property testing.
Approximate versus exact agreement

For some agreement tests one cannot expect a conclusion as strong as in Definition 2.2. For example, suppose that the testing distribution D selects pairs s₁, s₂ that typically intersect on an η ≪ 1 fraction of s₁ (and of s₂). In such a case consider the following ensemble,

Example 2.3. Construct an ensemble f = {f_s} at random as follows. For all s set f_s ≡ 0, and then for each s, with probability α, change one bit of f_s at random. This ensemble passes the test with probability 1 − O(αη) while being roughly α-far from G. Since η can be arbitrarily small, the ratio between the distance and the rejection probability is unbounded, so no constant c can satisfy Definition 2.1.

This motivates the relaxed notion of approximate agreement. Let us denote by dist_γ(f¹, f²) the fraction of sets s on which f¹_s, f²_s differ on more than a γ fraction of s. Namely,

    dist_γ(f¹, f²) = P_s[dist(f¹_s, f²_s) > γ].

Definition 2.4 (γ-approximate soundness). An agreement test is γ-approximately c-sound if every ensemble f = {f_s} satisfies dist_γ(f, G) ≤ c · rej_D(f).

When γ < 1/|s| we recover the previous notion of soundness, which we now call exact soundness. So a test is c-sound, or exactly c-sound, if it is γ-approximately c-sound for some γ < 1/|s|.

A STAV structure introduces two additional layers of subsets of V: layer T and layer A. These come in addition to the top layer S that we already have in the definition of an agreement expander. The layer T represents the intersections of pairs of subsets s₁, s₂ ∈ S, and is implicit in the definition of the agreement test distribution. The layer A is new and sits below T. It provides a certain amplification needed for the analysis.

[Figure 1: The STAV, STS, and VASA distributions]
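For intuition, the STAV and STS distributions can be sampled concretely in the simplest "direct product" setting, where V = [n] and S consists of all k-element subsets. This is a sketch of ours; the parameter choice ℓ = k/3 and the conditioning are one natural option among several.

```python
import random

def sample_stav(n, k):
    """Sample (s, t, a, v) with s ⊃ t ⊃ a ∪ {v} in the direct product setting.

    Here V = [n], S = k-element subsets, T = ell-element subsets and
    A = (ell-1)-element subsets, with ell = k // 3 (one natural choice).
    """
    ell = k // 3
    s = frozenset(random.sample(range(n), k))      # uniform k-element set
    t = frozenset(random.sample(sorted(s), ell))   # uniform ell-subset of s
    v = random.choice(sorted(t))                   # split t into a and {v}
    a = t - {v}
    return s, t, a, v

def sample_sts(n, k):
    """Sample (s1, t, s2): a random t and two independent supersets of it."""
    ell = k // 3
    t = frozenset(random.sample(range(n), ell))
    rest = [x for x in range(n) if x not in t]
    s1 = t | frozenset(random.sample(rest, k - ell))
    s2 = t | frozenset(random.sample(rest, k - ell))
    return s1, t, s2

s, t, a, v = sample_stav(30, 9)
print(a < t < s and v in t - a)  # True
```

The VASA distribution is sampled similarly: choose s and then three disjoint pieces a₁, a₂, v inside it.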
Definition 2.5 (STAV structure). A STAV structure is a tuple X = (S, T, A, V; D_stav) consisting of a ground set V and three layers of subsets A, T, S ⊂ P(V), together with a stochastic process D_stav that samples (s, t, a, v), with s ⊃ t ⊃ a ∪̇ {v}, as follows.

– Choose s

– Choose t conditioned on s

– Choose a, v conditioned on t (but not dependent on s)

The distributions in which the above are chosen are not restricted, except for assuming that the marginal of this process is uniform over v and that the probability to choose any given vertex or set is never zero. The STAV comes with two distributions,

– STS distribution:
A distribution over triples (s₁, t, s₂) that is symmetric with respect to s₁, s₂ and satisfies that the marginal of (s₁, t) (and therefore of (s₂, t)) is identical to the marginal of D_stav.

– VASA distribution:
A distribution D_vasa over tuples (v, a₁, s, a₂) that is symmetric with respect to a₁, a₂ and satisfies that the marginal of (v, a₁, s) (and therefore of (v, a₂, s)) is identical to the marginal of D_stav.

Notation: Throughout this paper we use the letters s, t, a, v to denote elements in S, T, A and V respectively, without specifically mentioning this. So for example, fixing a, {s ⊃ a} stands for all elements of S that contain a ∈ A. Unless specified otherwise, all random choices are with respect to the distribution D_stav or the STS or VASA distributions.

Before we continue to define what a "good" STAV is, let us mention a couple of examples that might be useful to keep in mind.
Example 2.6 (The direct product test STAV). Fix k and let ℓ = k/3. We construct the following family of STAVs for all n ≫ k, n → ∞. Let V = [n], let S be all k-element subsets of [n], T all ℓ-element subsets, and A all (ℓ−1)-element subsets. The STAV distribution is choosing a k-element set uniformly, then an ℓ-element subset t of it, and then splitting t randomly into a and v. A possible STS distribution is to choose a random t and then two independent s₁, s₂ ⊃ t. Another possibility is to choose s₁, s₂ ⊃ t so that their intersection is exactly t. The VASA distribution is to choose s uniformly and in it a₁, a₂, v uniformly so that they are all disjoint. An agreement test for this example appears in [DS14] under the name direct product test.

Example 2.7 (HDX simplicial complexes, generalizing Example 2.6). Fix k and let ℓ = k/3. We construct the following family of STAVs for infinitely many n ≫ k. Suppose X is a high dimensional expander on n vertices. Let V = X(0), let S = X(k), T = X(ℓ) and A = X(ℓ−1). The STAV distribution is choosing a random s from the distribution of X, then a uniform t ⊂ s, and then splitting t randomly into a and v. A possible STS distribution is to choose a random t and then two independent s₁, s₂ ⊃ t. Another possibility is to choose s₁, s₂ ⊃ t so that s₁ \ t and s₂ \ t are disjoint. The VASA distribution is to choose s according to the X distribution and in it a₁, a₂, v uniformly so that they are all disjoint. Agreement tests for this example were analyzed in [DK17] for certain complexes X and certain bounds on the dimension k.

Example 2.8 (Subspaces STAV). Fix m > d > ℓ. We construct the following family of STAVs for all finite fields F = F_q, q → ∞. Let V = F^m, let S be all d-dimensional subspaces of V, let T be all ℓ-dimensional subspaces of V, and let A be all (ℓ−1)-dimensional subspaces of V. The STAV distribution is choosing s uniformly, t ⊂ s uniformly, then a ⊂ t uniformly, then v uniformly from t \ a. A possible STS distribution is to choose a random t and then two uniform s₁, s₂ ⊃ t. The VASA distribution is to choose s uniformly and in it a₁, a₂, v uniformly so that they are all disjoint. This example generalizes the plane vs. plane low degree agreement test. An agreement test for it is proved in [RS97] for ensembles whose local functions are low degree functions, and in [IKW12] for general ensembles (in both cases the focus was on a different parameter regime).

We now define several graphs that arise as local views of the STS and VASA distributions. The first of these is the bipartite graph obtained by the marginal of D_stav on A and V,

Definition 2.9 (The AV-Graph (reach graph)).
The AV-graph, or reach graph, is a bipartite graph (V, A, E) where the probability of choosing an edge (v, a) is given by the marginal of D_stav on V × A, namely,

    Pr[(v, a)] = P_{s,t ∼ D_stav}[(s, t, a, v)].

We denote by reach_a ⊂ V the set of neighbors of a in this graph, and by reach_v ⊂ A the set of neighbors of v in this graph.

Definition 2.10 (The local reach graphs). Let X be a STAV-structure, and fix s ∈ S. The s-local reach graph, or AV_s-graph, is a bipartite graph where

    L = {a | a ⊂ s},  R = {v | v ∈ s},  E = {(a, v) | v ∈ reach_a}.

The probability of choosing an edge (a, v) is the probability of choosing (a, v) in the STAV distribution given that we chose s.

The STS graph and its local views
The STS distribution is conveniently viewed as a graph whose vertex set is S and whose edges are labeled by elements of T, with the weight of the edge from s₁ to s₂ labeled by t given by the probability of (s₁, t, s₂). The graph is undirected since the STS distribution is symmetric with respect to s₁, s₂. We consider "local views" of the STS graph, obtained by inducing it on a smaller set of vertices.

Definition 2.11 (STS_a-Graph). For a fixed a, the STS_a-graph has vertex set {s | s ⊃ a}, and the probability of choosing an edge {s₁, s₂}_t is given by 2 P_sts[(s₁, t, s₂) | t ⊃ a].

Definition 2.12 (STS_{a,v}-Graph). For fixed a, v, the STS_{a,v}-graph has vertex set {s | s ⊃ a ∪ {v}}, and the probability of choosing an edge {s₁, s₂}_t is given by 2 P_sts[(s₁, t, s₂) | t ⊃ a ∪ {v}].

Local views of the VASA distribution
When fixing one of the four terms in (v, a₁, s, a₂), we can define the following two graphs by the marginal:

Definition 2.13 (vASA-Graph). For a fixed v, the vASA-graph is the graph whose vertex set is reach_v, and whose labeled edges are

    E = { {a₁, a₂}_s | a₁, a₂ ∈ A, s ∈ S, {v} ∪ a₁ ∪ a₂ ⊂ s }.

The probability to choose an edge {a₁, a₂}_s is given by P_{D_vasa}[(v', a₁, s, a₂) | v' = v].

Definition 2.14 (Bipartite VAS_a-Graph). For a fixed a, the VAS_a-graph is the bipartite graph (L, R, E) where

    L = reach_a,
    R = { (a', s) | ∃v ∈ L, (v, a', s, a) ∈ Supp(D_vasa) },
    E = { (v, (a', s)) | (v, a', s, a) ∈ Supp(D_vasa) }.

The probability of choosing an edge (v, (a', s)) is given by P_{D_vasa}[(v, a', s, a₂) | a₂ = a].

Good STAV-Structures

Having defined all the relevant graphs, we come to the requirements for a good STAV:
Definition 2.15 (A good STAV-Structure). Let X be a STAV structure and γ < 1. X is γ-good if assumptions (A1)-(A3) and one of (A4(r)) or (A4) below hold for X:

(A1) The reach graph is a √γ-bipartite expander.

(A2) (a) For all a ∈ A, the STS_a-graph is an edge expander with constant edge expansion.
     (b) For all a ∈ A and v ∈ reach_a, the STS_{a,v}-graph is a γ-two-sided spectral expander.

(A3) (a) For all v ∈ V, the vASA-graph is either a γ-bipartite expander or a γ-two-sided spectral expander.
     (b) For all a ∈ A, the VAS_a-graph is a √γ-bipartite expander.

(A4(r)) For all s ∈ S, the AV_s-graph is an rγ-sampler graph, where r > 0 is a parameter; an rγ-sampler graph is defined in Definition A.5.

(A4) For every pair a, s with a ⊂ s, the set reach_a covers a large fraction of s, that is, P_{v∼D}[v ∈ reach_a | v ∈ s] ≥ 1/2.
Remark 2.16. The constants here are arbitrary. In addition, in the proof of the main theorem we will use the fact that the graphs in Assumption (A3) and Assumption (A2)(b) are edge expanders with constant expansion. By the famous Cheeger inequality, for a small enough γ, if the graphs above are γ-spectral expanders then they are also constant edge expanders.

Let f = {f_s}_{s∈S} be an ensemble. In this section we discuss an additional parameter of f and the underlying STAV structure X that influences the agreement theorem. This is the so-called surprise parameter. This parameter measures how surprised we are when f_{s₁} and f_{s₂} agree on a, given that we already know that they disagree on t, where t ⊃ a. If this probability is small, we get strong amplification. This idea played an important role in several previous works and it seems useful to consider this parameter explicitly.

Definition 2.17 (Surprise of an ensemble). Let X be a STAV structure. The surprise of a given ensemble f = {f_s} with respect to X is

    ξ(X, f) = P_{s₁,s₂,t,a,v}[ f_{s₁}↾a = f_{s₂}↾a and f_{s₁}(v) ≠ f_{s₂}(v) | f_{s₁}↾t ≠ f_{s₂}↾t ],

where the probability is over choosing s₁, t, s₂ from the STS distribution and then choosing (a, v) conditioned on t. Note that both s₁, t, a, v and s₂, t, a, v are distributed as in D_stav.

It is sometimes natural to restrict attention to a sub-family of ensembles which we call δ-ensembles.

Definition 2.18 (δ-ensemble). An ensemble f is a δ-ensemble if for every labeled edge (s₁, t, s₂) in the STS graph,

    f_{s₁}↾t ≠ f_{s₂}↾t  ⟹  dist(f_{s₁}↾t, f_{s₂}↾t) > δ

(where dist(·, ·) stands for relative Hamming distance).

Remark 2.19. Note that every ensemble is a (1/|t|)-ensemble.

Remark 2.20. Agreement theorems are often considered for special ensembles where each f_s belongs to an error correcting code, such as the Reed-Muller code in the case of low degree tests.
Furthermore, in the low degree test examples, for all t ⊂ s, f_s↾t itself belongs to an error correcting code with some distance δ. Clearly, such ensembles are automatically δ-ensembles.

In some important cases the STAV structure itself implies a non-trivial surprise parameter for all possible ensembles. We are thus led to define the surprise of the STAV as the supremum over all possible ensembles,

Definition 2.21 (Global surprise). Let X be a STAV structure. The surprise of X is ξ(X) = sup_f ξ(X, f).

While the agreement of f is a property of the ensemble f, the surprise is influenced by the STAV structure itself. For this, the following graphs play a role:

Definition 2.22 (T-Lower Graph). Fix t ∈ T. The T-lower graph of t is a bipartite graph where

    L = {v | v ∈ t},  R = {a | a ⊂ t},  E = {(a, v) | v ∈ a}.

Notice that here we require v ∈ a, and not v ∈ reach_a as we required in the STAV structure. The probability to choose an edge (a, v) ∈ E is the probability of choosing a given that a ⊂ t, and then choosing v at random inside a.

A priori, the T-lower graphs need not be good expanders, as in the STAV structures defined for Theorem 4.4. However, when they are, we can use their expansion properties to establish the "surprise". We can give the following easy bound on the surprise parameter,

Lemma 2.23.
Let X be a STAV structure such that for every t ∈ T, the T-lower graph is an η-bipartite expander. Then for any δ-ensemble f,

    ξ(X, f) ≤ O(η²/δ).

Before proving the lemma let us give a couple of examples demonstrating its usefulness.
Example 2.24 (HDX simplicial complexes, continued). Consider the STAV from Example 2.7. For any t ∈ X(ℓ), the T-lower graph of t is the graph where R is the vertices of t and L are the subsets of t of size |t| − 1, where the edges denote containment. The reader may calculate that this graph is an η-bipartite expander with η = O(1/ℓ). Plugging in δ = 1/ℓ we get ξ(X) ≤ O(η²/δ) = O(1/ℓ).

Example 2.25 (The Grassmann Poset). Let F be a finite field, and let X be a STAV structure where V = F^n, T is the set of ℓ-dimensional linear subspaces of F^n, and A is the set of (ℓ−1)-dimensional subspaces. For any t ∈ T, the T-lower graph of t is the graph where R are the 1-dimensional subspaces of t, and L are the (ℓ−1)-dimensional subspaces of t, where the edges denote containment. The reader may calculate that this graph is an O(√(q/q^ℓ))-bipartite expander. One is often interested in agreement theorems on the Grassmann poset where the local functions are promised to come from some error correcting code. In this case the ensemble f will be a δ-ensemble for constant δ, and therefore we bound the surprise by ξ(X) ≤ O(1/q^{ℓ−1}).

Proof of Lemma 2.23.
It suffices to show that

    P[ f_{s₁}↾a = f_{s₂}↾a | f_{s₁}↾t ≠ f_{s₂}↾t ] = O(η²/δ).

Denote B = {v ∈ t | f_{s₁}(v) ≠ f_{s₂}(v)}. By our assumption on the distance, we are promised that P[B] ≥ δ. We can therefore invoke the sampler lemma, Lemma A.9, and get that the probability that a sees no vertex of B is O(η²/δ). □

Main theorem: agreement on STAV structures

We are now ready to state our main technical theorem. Recall that for a given distribution D over pairs s₁, s₂ we denoted by rej_D(f) the probability that f_{s₁}↾t ≠ f_{s₂}↾t when choosing s₁, s₂ ∼ D and setting t = s₁ ∩ s₂. For a given STAV X we extend this notation to rej_X(f), understanding that the sets s₁, t, s₂ are now chosen via the STS distribution that comes with X.

Theorem 2.26 (STAV Agreement Theorem). Let Σ be some finite alphabet (for example Σ = {0, 1}). Let X = (S, T, A, V) be a γ-good STAV structure for some sufficiently small γ. Let f = {f_s : s → Σ | s ∈ S} be an ensemble such that

1. Agreement: rej_X(f) ≤ ε,    (2.1)

2. Surprise: ξ(X, f) ≤ O(γ).    (2.2)

Then, assuming either Assumption (A4(r)) for r = 1 or Assumption (A4), dist_γ(f, G) ≤ O(ε). More explicitly, there exists a global function G : V → Σ such that

    P_{s∈S}[ f_s ≠_γ G↾s ] := P_{s∈S}[ P_{v∈V}[f_s(v) ≠ G(v) | v ∈ s] ≥ γ ] = O(ε).

Moreover, for any r > 0, if either Assumption (A4(r)) or Assumption (A4) holds, then

    P_{s∈S}[ f_s ≠_{rγ} G↾s ] = O((1 + 1/r) ε).    (2.3)

The O notation does not depend on any parameter, including γ, ε, the size of the alphabet, the sizes |S|, |T|, |A|, |V|, and the size of any s ∈ S.

In this section we prove our main theorem, Theorem 2.26. We first give a direct proof for the case of two-sided high dimensional expanders, which follows the same lines as the general proof. Afterwards we prove the theorem in full generality.
In this section we give a direct proof of a special case of our main theorem: a sound agreement test on set systems coming from a two-sided high dimensional expander.

We recall that a simplicial complex X is a family of subsets that is closed downwards under containment, i.e. if s ∈ X and t ⊂ s then t ∈ X. We denote by X(ℓ) all subsets (also called faces) of size ℓ + 1. We identify X(0) with the set of vertices. A complex is d-dimensional if the largest faces have size d + 1. Our test is the following:
Definition 3.1 (d,ℓ-agreement distribution). Let X be a d-dimensional simplicial complex and let ℓ < d be a positive integer. We define the distribution D_{d,ℓ} by the following random process:
1. Sample t ∈ X(ℓ).
2. Sample s₁, s₂ ∈ X(d) independently, given that t ⊂ s₁, s₂.
The d,ℓ-agreement test is the test associated with the d,ℓ-agreement distribution on this family.

Theorem 3.2 (Agreement for High Dimensional Expanders). There exists a constant c > 0 such that for every d > 0 the following holds. Suppose that X is a λ-two-sided d-dimensional HDX for sufficiently small λ (polynomially small in 1/d), and ℓ = ⌊d/2⌋. Then the d,ℓ-agreement test is c-sound.

This theorem holds for a wider range of parameters. Also, in this section we will assume that the alphabet is binary, namely that the local functions are f_s : s → {0, 1}. The full theorem, Theorem 4.1, is discussed and proven in Section 4.

The proof of the theorem goes through some auxiliary functions:
Definition 3.3 (local popularity function). For every a ∈ X(ℓ−1) define h_a : a → Σ by popularity, i.e. h_a = pop_{s⊃a}{f_s↾a}. The notation pop refers to the value f_s↾a with highest probability over s ⊃ a; ties are broken arbitrarily.

Definition 3.4 (the reach function). For every a ∈ X(ℓ−1) define g_a : X_a(0) → Σ by the popularity conditioned on f_s↾a = h_a, i.e. g_a(v) = pop{f_s(v) : s ⊃ a, f_s↾a = h_a}. Ties are broken arbitrarily.

First, we will prove the following lemma on the local popularity functions:

Lemma 3.5.
For any a ∈ X(ℓ−1), let h_a be as in Definition 3.3. Denote by ε_a the disagreement probability given that the intersection t ∈ X(ℓ) contains a, that is, ε_a = P[f_{s₁}↾t ≠ f_{s₂}↾t | a ⊂ t]. Then for every a ∈ X(ℓ−1):

P_{s∈X(d)}[f_s↾a ≠ h_a | s ⊃ a] = O(ε_a).

Next, we move towards showing that when f_s↾a = h_a, then for a typical a, f_s(v) = g_a(v) occurs with probability 1 − O(ε/d).

Consider the distribution (a₁, s, a₂) ∼ D_comp, where we choose s ∈ X(d) and then two a₁, a₂ ⊂ s uniformly at random given that they are disjoint. We say that a triple (a₁, s, a₂) is bad if f_s↾_{a₁} ≠ h_{a₁} or f_s↾_{a₂} ≠ h_{a₂}. It is easy to see from Lemma 3.5 that there are at most O(ε) bad triples.

We use the bad triples to define the set of globally bad elements in X(ℓ−1). These are all a ∈ X(ℓ−1) with many bad triples touching them:

A* = { a ∈ X(ℓ−1) | P_{(s,a₂)}[(a, s, a₂) is a bad triple] ≥ 1/20 }.

We shall use this set A* to filter out and disregard certain a ∈ X(ℓ−1) that ruin the probability of agreeing with the {g_a}_{a∈X(ℓ−1)}, and later on with the global function. The constant 1/20 is arbitrary, and once it is fixed, we can say that P[A*] = O(ε) by Markov's inequality.

Lemma 3.6 (agreement with link function). Let (a, s, v) ∼ D be the distribution where we choose s ∈ X(d) and from it a, v uniformly at random so that v ∉ a. Then

P_{(a,v,s)∼D}[f_s(v) ≠ g_a(v) and f_s↾a = h_a and a ∉ A*] = O(ε/d). (3.1)

Finally, our goal is to stitch the functions g_a together into one global function. Lemma 3.6 motivates us to define the global function as the popularity vote on g_a(v) over all a ∈ X_v(ℓ−1) that see few bad triples when conditioned on v.
However, in order to properly define the global function, we need to define another process that takes into account the agreement of two functions g_{a₁}, g_{a₂}. For this we need to look at each vertex v ∈ X(0) separately. To do so, we define the following graph:

Definition 3.7 (Local Complement Graph). Fix any v ∈ X(0). The local complement graph H_v is the graph whose vertices are V = X_v(ℓ−1). Its labeled edges are chosen as follows: given that we are at an element a₁, we traverse to a₂ via edge s, by choosing some s ⊃ a₁ ∪· {v} and then choosing some a₂ ⊂ s given that a₁ ∩ a₂ = ∅.

For v ∈ V, we say a ∈ X_v(ℓ−1) is locally bad for v if

P_{(a₁,s,a₂)∈E(H_v)}[(a₁, s, a₂) is bad | a₁ = a] > 1/20.

The constant here is also arbitrary. Finally, for every v ∈ V, we define A*_v to be the set of all a ∈ X_v(ℓ−1) that are either globally bad, or locally bad for v.

We show, using the sampler lemma, Lemma A.9, that if a ∈ X(ℓ−1) is not globally bad, then the probability over v ∈ V that it is locally bad for v is small, i.e.

Claim 3.8. P_{a∈X(ℓ−1), v∈X_a(0)}[a ∈ A*_v and a ∉ A*] = O(ε/d).

Now we can define our global function G : V → Σ as follows:

G(v) = pop{ g_a(v) | a ∈ X_v(ℓ−1), a ∉ A*_v };

as usual, ties are broken arbitrarily. In words, we remove a small number of bad a ∈ X(ℓ−1), where many of the functions f_s don't agree with the g_a's, and take the popular vote of the remainder.

Using the local complement graph and Claim 3.8, we can now prove:

Lemma 3.9 (agreement with global function). P_{a∈X(ℓ−1), v∈X_a(0)}[g_a(v) ≠ G(v) and a ∉ A*_v] = O(ε/d).

Given the lemmata above, we prove the theorem.

Proof of Theorem 3.2.
We note that it is enough to show

P_{s∈X(d), a∈X(ℓ−1), a⊂s}[ f_s↾_{s\a} ≠ G↾_{s\a} ] = O(ε). (3.2)

This is due to the fact that |s \ a| ≥ |s|/2; thus if f_s ≠ G↾s, then f_s↾_{s\a} ≠ G↾_{s\a} for at least half of the possible a ⊂ s.

Next, we prove (3.2). We define the following events, when we choose (a, s, v) in the simplicial complex:
1. E₁ – the event that f_s↾a ≠ h_a.
2. E₂ – the event that a ∈ A*, i.e. the chosen a has many bad edges.

Define a random variable Z that samples s, a and outputs

Z(s, a) = P_{v∈s\a}[f_s(v) ≠ G(v)], (3.3)

i.e. the fraction of vertices v ∈ s \ a so that f_s(v) ≠ G(v).

The probability of E₁ ∨ E₂ is O(ε) by Lemma 3.5 and Markov's inequality. If ¬(E₁ ∨ E₂), yet a vertex v contributes to the probability in (3.3), then one of the following three must occur:
1. a ∈ A*_v.
2. f_s(v) ≠ g_a(v) and a ∉ A*_v.
3. a ∉ A*_v but f_s(v) = g_a(v) ≠ G(v).

The first event occurs with probability O(ε/d) by Claim 3.8. The second occurs with probability O(ε/d) by Lemma 3.6. The third occurs with probability O(ε/d) by Lemma 3.9. Thus the expectation of Z given ¬(E₁ ∨ E₂) is O(ε/d). By Markov's inequality,

P_{s∈X(d), a∈X(ℓ−1), a⊂s}[ f_s↾_{s\a} ≠ G↾_{s\a} | ¬(E₁∨E₂) ] = P[ Z ≥ 1/|s\a| | ¬(E₁∨E₂) ] ≤ |s\a| · O(ε/d) = O(ε).

In conclusion,

P_{s∈X(d)}[f_s ≠ G↾s] ≤ P[E₁∨E₂] + P_{s∈X(d), a∈X(ℓ−1), a⊂s}[ f_s↾_{s\a} ≠ G↾_{s\a} | ¬(E₁∨E₂) ] = O(ε). □

Lemma (Restatement of Lemma 3.5). For any a ∈ X(ℓ−1), let h_a be as in Definition 3.3.
Denote by ε_a the disagreement probability given that the intersection t ∈ X(ℓ) contains a. That is, ε_a = P[f_{s₁}↾t ≠ f_{s₂}↾t | a ⊂ t]. Then for every a ∈ X(ℓ−1):

P_{s∈X(d)}[f_s↾a ≠ h_a | s ⊃ a] = O(ε_a).

Proof of Lemma 3.5.
Fix a ∈ X(ℓ−1). If ε_a is at least some small constant we are trivially done, so assume otherwise. Consider the following graph:
1. The vertices of the graph are all s ⊃ a.
2. We connect two vertices s₁, s₂ whenever there exists some t ∈ X(ℓ), t ⊃ a, so that s₁ ∩ s₂ ⊃ t.

The random walk on this graph, given s₁, traverses to s₂ by the d,ℓ-agreement test's distribution, conditioned on the intersection containing a. By Theorem 4.6, this graph is a very good spectral expander; in particular, it is an edge expander with constant edge expansion when d is sufficiently large.

We color the vertices of this graph according to their value at a. Denote the color classes by S¹, S², ..., where S¹ is the largest; namely, S¹ consists of all s so that f_s↾a = h_a. Denote S^i = {s : f_s↾a = h^i_a}. We need to show that the measure of S¹ = {s : f_s↾a = h_a} (the largest of all the S^i) is 1 − O(ε_a).

The quantity ε_a, i.e. the fraction of edges between distinct S^i's, is by assumption less than a small constant. By Claim A.6, using the fact that the graph is an edge expander with constant expansion and the fact that the fraction of edges between the S^i's is small, we get that P[S¹] ≥ 1/2. Furthermore, by the edge-expander property, P[(S¹)^c] ≤ O(P[E(S¹, (S¹)^c)]) ≤ O(ε_a). □

Corollary 3.10. P[A*] = O(ε).

Proof of Corollary 3.10. Each a ∈ X(ℓ−1) contributes to A* if the fraction of bad triples that a participates in is ≥ 1/20. The total fraction of bad triples is O(ε) by Lemma 3.5. Thus by Markov's inequality P[A*] = O(ε). □

We move to Claim 3.8.
Claim (Restatement of Claim 3.8). P_{a∈X(ℓ−1), v∈X_a(0)}[a ∈ A*_v and a ∉ A*] = O(ε/d).

Proof of Claim 3.8.
Fix some a ∉ A*. Consider the following bipartite graph:
– L = { (a₂, s) : a ∪· a₂ ⊂ s }.
– R = X_a(0).
– E = { (v, (a₂, s)) : {v} ∪· a ∪· a₂ ⊂ s }.
The probability of choosing each edge is given by the following distribution in the link X_a:
1. Sample v ∈ X_a(0).
2. Sample s \ a ∈ X_a(d − ℓ) so that v ∈ s.
3. Sample a₂ ∈ X_a(ℓ−1) so that a₂ ⊂ s \ {v}.
Note that the probability of (a₂, s) on the left side is precisely the probability of choosing the triple (a, s, a₂) ∼ D_comp, given that the first element is a.

Denote by B ⊂ L the set that consists of all (s, a₂) s.t. (a, s, a₂) is a bad triple. If a ∉ A* then P[B] < 1/20. By Proposition 4.13, this graph is an O(1/√d)-bipartite expander.

Define the set

V* = { v ∈ reach a | P_{(s,a₂)}[B | v ∼ (s, a₂)] ≥ 1/20 },

the set of v ∈ reach a so that the probability of a bad edge is larger than 1/20, namely, such that a is locally bad for v.

In the sampler lemma, Lemma A.9, we see that bipartite expanders are good samplers. We use Lemma A.9 to get that P[V*] = O(1/d)·P[B]. Taking expectation over all a ∈ X(ℓ−1) we get that

P_{a∈X(ℓ−1), v∈X_a(0)}[a ∈ A*_v and a ∉ A*] = P[a ∉ A*] · E_{a∉A*}[P[V*]]
= P[a ∉ A*] · O(1/d) · E_{a∉A*}[P[B]] ≤ O(1/d) · E_a[P[B]] = O(ε/d).

The inequality is due to the fact that the expectation conditioned on a ∉ A* is at most the expectation over all a (by definition, when a ∈ A* we have P[B] ≥ 1/20, and when a ∉ A*, P[B] < 1/20). The last equality is since E_a[P[B]] = O(ε) by Lemma 3.5. □

We move towards proving Lemma 3.6. We shall use the following "surprise" claim.
Claim 3.11. Let D̂ denote the distribution where we sample:
1. a ∈ X(ℓ−1).
2. v ∈ X_a(0).
3. s₁, s₂ ∈ X(d) independently, given that they contain t = a ∪· {v}.
Then

P_{D̂}[f_{s₁}(v) ≠ f_{s₂}(v) and f_{s₁}↾a = f_{s₂}↾a] = O(ε/d).

This claim is given in full generality in Section A.3. To keep this section self-contained, we give an elementary proof:

Proof of Claim 3.11. The distribution D̂ can be described as first choosing s₁, s₂, t and then partitioning t = a ∪· {v}. So from the law of total probability we obtain:

P_{D̂}[f_{s₁}(v) ≠ f_{s₂}(v) and f_{s₁}↾a = f_{s₂}↾a] = E_{(t,s₁,s₂)}[ P_{v∈t, a=t\{v}}[f_{s₁}(v) ≠ f_{s₂}(v) and f_{s₁}↾a = f_{s₂}↾a] ].

Notice that for every t ∈ X(ℓ), the pairs {s₁, s₂} that contribute to the probability above are the ones that fail the test (but do so on exactly one vertex). By the agreement assumption, there is at most an ε-fraction of such pairs. Given that we choose such a pair, their contribution to the expectation is 1/(ℓ+1) = O(1/d), since that is the probability of choosing the single v ∈ t s.t. f_{s₁}(v) ≠ f_{s₂}(v). □

Now we are ready to prove Lemma 3.6.
Lemma (Restatement of Lemma 3.6). Let (a, s, v) ∼ D be the distribution where we choose s ∈ X(d) and from it a, v uniformly at random so that v ∉ a. Then

P_{(a,v,s)∼D}[f_s(v) ≠ g_a(v) and f_s↾a = h_a and a ∉ A*] = O(ε/d).

The proof we give here relies on the fact that the alphabet is binary, or at least of size O(1). It is possible to prove this for an alphabet of unbounded size, as we do in the main proof.

Proof of Lemma 3.6. First, note that by Claim 3.8, (3.1) is at most

P[a ∉ A* and a ∈ A*_v] + P_{(a,v,s)∼D}[f_s(v) ≠ g_a(v) and f_s↾a = h_a and a ∉ A*_v]
= O(ε/d) + P_{(a,v,s)∼D}[f_s(v) ≠ g_a(v) and f_s↾a = h_a and a ∉ A*_v].

Thus we focus on bounding

P_{(a,v,s)∼D}[f_s(v) ≠ g_a(v) and f_s↾a = h_a and a ∉ A*_v]. (3.4)

We write the expression we want to bound in (3.4) as

E_{a,v}[ P_s[f_s(v) ≠ g_a(v) and f_s↾a = h_a and a ∉ A*_v] ].

We denote the expression inside the expectation by

p_{a,v} = P_s[f_s(v) ≠ g_a(v) and f_s↾a = h_a and a ∉ A*_v].

Thus we want to show that E_{a,v}[p_{a,v}] = O(ε/d).

By Claim 3.11, we got that

P_{(a,v,s₁,s₂)∼D̂}[f_{s₁}(v) ≠ f_{s₂}(v) and f_{s₁}↾a = f_{s₂}↾a] = O(ε/d).

We can write this also as an expectation over a, v:

E_{a,v}[ P_{(s₁,s₂)}[f_{s₁}(v) ≠ f_{s₂}(v) and f_{s₁}↾a = f_{s₂}↾a] ] = O(ε/d).

We denote the expression inside the expectation by

q_{a,v} = P_{(s₁,s₂)}[f_{s₁}(v) ≠ f_{s₂}(v) and f_{s₁}↾a = f_{s₂}↾a].

Our goal is to relate the two quantities, namely, to show that p_{a,v} = O(q_{a,v}). This will show that

E_{a,v}[p_{a,v}] = O(E_{a,v}[q_{a,v}]) = O(ε/d).

Fix some a ∈ X(ℓ−1) and v ∈ X_a(0). If a ∈ A*_v then p_{a,v} = 0, so assume a ∉ A*_v. Denote by H the set of all s ⊃ t = a ∪· {v}. In the sampling process for p_{a,v} we choose some s ∈ H, and in the sampling process for q_{a,v} we choose s₁, s₂ ∈ H independently.

We can partition H as H = G ∪· B, where G contains all s ∈ H so that f_s↾a = h_a, and B is all s ∈ H so that f_s↾a ≠ h_a. Since a ∉ A*_v,

P_{s∈H}[B] < 1/20, or P_{s∈H}[G] > 19/20.

Conditioning on G doesn't change the probability q_{a,v} significantly, namely

P_{s₁,s₂}[f_{s₁}↾a = f_{s₂}↾a and f_{s₁}(v) ≠ f_{s₂}(v) | s₁, s₂ ∈ G] ≤ 2·q_{a,v}.

The first equality in the probability, f_{s₁}↾a = f_{s₂}↾a, is immediately satisfied on this set, since if s₁, s₂ ∈ G then f_{s₁}↾a = h_a = f_{s₂}↾a. So we get

P_{s₁,s₂}[f_{s₁}(v) ≠ f_{s₂}(v) | s₁, s₂ ∈ G] ≤ 2·q_{a,v}.

Because s₁, s₂ are chosen independently, we can say that

P_{s₁,s₂}[f_{s₁}(v) ≠ f_{s₂}(v) | s₁, s₂ ∈ G] ≥ P_{s₁}[f_{s₁}(v) = g_a(v) | s₁ ∈ G] · P_{s₂}[f_{s₂}(v) ≠ g_a(v) | s₂ ∈ G].

The definition of g_a(v) is taking the majority of f_s(v) over the relevant s ∈ G, thus P_s[f_s(v) = g_a(v) | s ∈ G] ≥ 1/2. Hence

P_{s₁,s₂}[f_{s₁}(v) ≠ f_{s₂}(v) | s₁, s₂ ∈ G] ≥ (1/2)·P_s[f_s(v) ≠ g_a(v) | s ∈ G] ≥ (1/2)·p_{a,v},

where the last inequality is by the definition of p_{a,v}. In conclusion, p_{a,v} ≤ 4·q_{a,v} and we are done. □

We state this immediate corollary:
Corollary 3.12.
Consider the following distribution (v, a₁, s, a₂) ∼ D_vasa, where (a₁, s, a₂) are chosen by D_comp and v is sampled from s \ (a₁ ∪· a₂) uniformly at random. Then

P_{(v,a₁,s,a₂)∼D_vasa}[ f_s↾_{a_i} = h_{a_i} and g_{a₁}(v) ≠ g_{a₂}(v) and a_i ∉ A*_v for i = 1, 2 ] = O(ε/d).

The proof of this corollary is just applying Lemma 3.6 for each a_i and using a union bound. □

It remains to prove Lemma 3.9.

Lemma (Restatement of Lemma 3.9). P_{a∈X(ℓ−1), v∈X_a(0)}[g_a(v) ≠ G(v) and a ∉ A*_v] = O(ε/d).

For the proof of the lemma, we'll need the following property of expander graphs. In an expander graph, the number of outgoing edges from some A ⊂ V is an approximation of the size of A or of V \ A. The following claim generalizes this fact to the setting where we count only outgoing edges from A to a (large) set B ⊂ V \ A.

Claim (Restatement of Claim A.10). Let G = (V, E) be a λ-two-sided spectral expander. Let V = A ∪· B ∪· C, s.t. P[A] ≤ P[B]. Then

P[A] ≤ 1/((1−λ)·P[B]) · (P[E(A, B)] + λ·P[C]). (3.5)

In particular, if P[B], 1−λ = Ω(1), then P[A] = O(P[E(A, B)] + λ·P[C]).

We are using the fact that the f_s's are binary.

Proof of Lemma 3.9. Fix some v ∈ X(0). We view the local complement graph H_v from Definition 3.7. The walk on this graph is the (ℓ−1),(ℓ−1)-complement walk in the link of v. By Theorem 7.1, which we will show in Section 7, this graph is an O(1/d)-two-sided spectral expander.

Consider the following sets in this graph:
M_v = { a ∈ X_v(ℓ−1) \ A*_v | g_a(v) = G(v) }, the popular vote,
N_v = { a ∈ X_v(ℓ−1) \ A*_v | g_a(v) ≠ G(v) }, the other votes,
C_v = A*_v.
The a ∈ N_v are exactly those where g_a(v) ≠ G(v) and a ∉ A*_v. Hence we need to bound E_v[P[N_v]]. We invoke Claim A.10 for N_v, M_v, C_v and get that

P[N_v] ≤ 1/((1 − O(1/d))·P[M_v]) · (P[E(N_v, M_v)] + O(1/d)·P[C_v]). (3.6)

The proof now has two steps:
1. We show that P[M_v] ≥ 2/5 for all but O(ε/d) of the vertices v (the constant is arbitrary).
This will imply that the denominator in (3.6) is larger than some constant (say 1/4).
2. We show that the right hand side of (3.6) is O(ε/d) in expectation.

To show step 1:
We will need to show that for all but O(ε/d) of the v, the measure of C_v is smaller than 1/5; namely,

P_v[ P[A*_v] > 1/5 ] = O(ε/d). (3.7)

Assuming that P[C_v] ≤ 1/5, it is immediate that P[M_v] ≥ 2/5, using the fact that the alphabet is binary in this special case, so that M_v is the larger of the two sets M_v, N_v.

To show (3.7), consider the complement graph between X(0) and X(ℓ−1), where the edges are all (v, a) so that {v} ∪· a ∈ X(ℓ). This is the 0,(ℓ−1)-complement walk. The set of vertices we need to bound is the set of v's with large P[A*_v] > 1/5. There are two types of such vertices v:
– those with P[A*_v ∩ A*] ≤ 1/10 (and hence P[A*_v \ A*] > 1/10),
– those with P[A*_v ∩ A*] > 1/10.

By Claim 3.8, P_{(a,v)}[a ∈ A*_v and a ∉ A*] = O(ε/d). Thus by Markov's inequality, only O(ε/d) of the vertices can see a 1/10-fraction of neighbors a ∈ A*_v \ A*, thus bounding by O(ε/d) the fraction of v's of the first type.

To bound the vertices of the second type, note that these are vertices that have a large (> 1/10) fraction of neighbors in A*. By Corollary 3.10, P[A*] = O(ε). According to Theorem 7.1, our graph is an O(1/√d)-bipartite expander. Thus by the sampler lemma, Lemma A.9, the set of vertices v ∈ X(0) that have more than a 1/10-fraction of neighbours in A* has measure O(ε/d).

As for step 2:
Taking expectation of (3.6) we have that

E[P[N_v]] ≤ E[ 1/((1 − O(1/d))·P[M_v]) · P[E(N_v, M_v)] ] + O(1/d)·E[P[C_v]]
≤ P_v[ P[A*_v] > 1/5 ] + O(E[P[E(N_v, M_v)]]) + O(1/d)·E[P[C_v]], (3.8)

where the second inequality is due to the fact that when P[A*_v] ≤ 1/5 the factor 1/((1 − O(1/d))·P[M_v]) is bounded by a constant, while P_v[P[A*_v] > 1/5] = O(ε/d) by (3.7).

By Corollary 3.10 and Claim 3.8,

O(1/d)·E_v[P[C_v]] = O(1/d)·E_v[P[A*_v]] = O(ε/d).

We continue to bound P[E(N_v, M_v)] in expectation. Every edge counted in E(N_v, M_v) is either a bad triple (i.e. an edge (a₁, s, a₂) s.t. f_s↾_{a_i} ≠ h_{a_i} for i = 1 or i = 2), or a non-bad edge, on which f_s↾_{a_i} = h_{a_i} for both i, g_{a₁}(v) ≠ g_{a₂}(v) and a₁, a₂ ∉ A*_v; by Corollary 3.12 there are at most O(ε/d) non-bad edges in the cut.

As for the bad edges, notice that a ∈ N_v is not a member of A*_v, thus the fraction of bad edges connected to a is at most a 1/20-fraction of the edges connected to a (by definition). Thus the measure of bad edges is bounded by (1/20)·P[N_v], and

P[E(N_v, M_v)] ≤ O(ε/d) + (1/20)·P[N_v].

By summing up the bounds we get that E[P[N_v]] ≤ O(ε/d) + c·E[P[N_v]] for a constant c < 1, hence E[P[N_v]] = O(ε/d). □

Next we prove Theorem 2.26 in full generality. The proof of the theorem goes through these auxiliary functions:
Definition 3.13 (local popularity function). For every a ∈ A define h_a : a → Σ by popularity, i.e. h_a = pop_{s⊃a} f_s↾a. The notation pop refers to the value f_s↾a with highest probability over s ⊃ a; ties are broken arbitrarily.

Definition 3.14 (the reach function). For every a ∈ A define g_a : reach a → Σ by the popularity conditioned on f_s↾a = h_a, i.e. g_a(v) = pop{f_s(v) : a ⊂ s, f_s↾a = h_a}. Ties are broken arbitrarily.

First, we will prove the following lemma on the local popularity functions:

Lemma 3.15.
For any a ∈ A, let h_a be as in Definition 3.13. Denote by ε_a the disagreement probability given that t ⊃ a, i.e. ε_a = P[f_{s₁}↾t ≠ f_{s₂}↾t | t ⊃ a]. Then for every a ∈ A:

P_{s∈{s⊃a}}[f_s↾a ≠ h_a] = O(ε_a).

Next, we move towards showing that when f_s↾a = h_a, then for a typical a, f_s(v) = g_a(v) occurs with probability 1 − O(γε).

We consider the VASA-distribution. We say that a triple (a₁, s, a₂) is bad if f_s↾_{a₁} ≠ h_{a₁} or f_s↾_{a₂} ≠ h_{a₂}; in the context of the vASA-graphs defined in Section 2.2, we call these triples bad edges, since the edges of the vASA-graphs correspond to triples (a₁, s, a₂). It is easy to see from Lemma 3.15 that there are at most O(ε) bad edges.

We use the bad triples to define the set of globally bad elements in A, to be all a ∈ A with many bad triples touching them:

A* = { a ∈ A | P_{(s,a₂)}[(a, s, a₂) is a bad edge] ≥ 1/20 },

namely, all the a ∈ A so that the probability in Lemma 3.15, given that we chose this fixed a ∈ A, is larger than a constant. We shall use this set A* to filter out and disregard certain a ∈ A that ruin the probability of agreeing with the {g_a}_{a∈A}, and later on with the global function. The constant 1/20 is arbitrary, and once it is fixed, we can say that P[A*] = O(ε) by Markov's inequality.

Lemma 3.16 (agreement with link function). Let D be a distribution over (a, s, v) ∈ A × S × V defined by the STAV-structure, that is:
1. Choose some (a, v) where v ∈ reach a.
2. Choose some s ⊃ (a, v) (where we mean {v}, a ⊂ s).
Then

P_{(a,v,s)∼D}[f_s(v) ≠ g_a(v) and f_s↾a = h_a and a ∉ A*_v] = O(γε). (3.9)

Finally, our goal is to stitch the functions g_a together into one global function. Lemma 3.16 motivates us to define the global function as the popularity vote on g_a(v) over all a ∉ A*_v such that v ∈ reach a. However, in order to properly define the global function, we need to define another process that takes into account the agreement of two functions g_{a₁}, g_{a₂}. For this we use the vASA-graphs described in Assumption (A3)a.

For v ∈ V, we say a ∈ reach v is locally bad for v if

P_{(a₁,s,a₂)∈E(vASA)}[(a₁, s, a₂) is bad | a₁ = a] > 1/20.

The constant here is also arbitrary. Finally, for every v ∈ V, we define A*_v to be the set of all a ∈ reach v that are either globally bad, or locally bad for v.

We show, using the sampler lemma, Lemma A.9, that if a ∈ A is not globally bad, then the probability over v ∈ V that it is locally bad for v is small, i.e.

Claim 3.17. P_{a∈A, v∈reach a}[a ∈ A*_v and a ∉ A*] = O(γε).

Now we can define our global function G : V → Σ as follows:

G(v) = pop{ g_a(v) | a ∈ reach v, a ∉ A*_v };

as usual, ties are broken arbitrarily. In words, we remove a small number of bad a ∈ A, where many of the functions f_s don't agree with the g_a's, and take the popular vote of the remainder.

We can now prove:

Lemma 3.18 (agreement with global function). P_{a∈A, v∈reach a}[g_a(v) ≠ G(v) and a ∉ A*_v] = O(γε).

Given the lemmata above, we prove the theorem for STAV-structures.

Proof of Theorem 2.26.
We first show that, based on Assumption (A4) or Assumption (A4(r)), it is enough to prove

P_{s∈S, a∈s}[ f_s↾_{s∩reach a} ≠_{rγ} G↾_{s∩reach a} ] = O((1 + 1/r)·ε). (3.10)

– Indeed, if Assumption (A4) holds and f_s ≠_{rγ} G↾s, then f_s↾_{s∩reach a} ≠_{rγ} G↾_{s∩reach a} for all a ⊂ s. Thus there cannot be more than O((1 + 1/r)·ε) of the s ∈ S as above.
– If Assumption (A4(r)) holds for rγ, then for any s with f_s ≠_{rγ} G↾s, it is true by the assumption that a constant fraction of the a ⊂ s have the property that f_s↾_{s∩reach a} ≠_{rγ} G↾_{s∩reach a}.

Hence

P_s[ f_s ≠_{rγ} G↾s ] ≤ O( P_{s∈S, a∈s}[ f_s↾_{s∩reach a} ≠_{rγ} G↾_{s∩reach a} ] ) = O((1 + 1/r)·ε),

and we are done.

Next, we prove (3.10). We define the following events:
1. E₁ – the event that f_s↾a ≠ h_a.
2. E₂ – the event that a ∈ A*, i.e. the chosen a has many bad edges.

Define a random variable Z that samples s, a and outputs

P_{v∈s∩reach a}[f_s(v) ≠ G(v)]. (3.11)

The probability of E₁ ∨ E₂ is O(ε) by Lemma 3.15 and Markov's inequality. If ¬(E₁ ∨ E₂), yet a vertex v contributes to the probability in (3.11), then one of the following three must occur:
1. a ∈ A*_v.
2. f_s(v) ≠ g_a(v) and a ∉ A*_v.
3. a ∉ A*_v but f_s(v) = g_a(v) ≠ G(v).

The first event occurs with probability O(γε) by Claim 3.17. The second occurs with probability O(γε) by Lemma 3.16. The third occurs with probability O(γε) by Lemma 3.18. Thus the expectation of Z given ¬(E₁ ∨ E₂) is O(γε).
By Markov's inequality, for any r > 0,

P_{s∈S, a∈s}[ f_s↾_{s∩reach a} ≠_{rγ} G↾_{s∩reach a} | ¬(E₁∨E₂) ] = O(ε/r).

In conclusion,

P_{s∈S, a∈s}[ f_s↾_{s∩reach a} ≠_{rγ} G↾_{s∩reach a} ] ≤ P[E₁∨E₂] + P_{s∈S, a∈s}[ f_s↾_{s∩reach a} ≠_{rγ} G↾_{s∩reach a} | ¬(E₁∨E₂) ] = O((1 + 1/r)·ε). □

In a special case, we can say something even stronger.
Theorem 3.19 (Extension of Theorem 2.26). Let X, f be as in Theorem 2.26. Suppose that we have a distribution (v, b, a, s) over sets b with v ∈ b ⊂ s ∩ reach a, and suppose that the marginal (v, a, s) is the marginal of V × A × S in D_stav. Then the following holds:

P_{s∈S, a∈s, b⊂s\a}[ f_s↾b ≠_{rγ} G↾b ] = O((1 + 1/r)·ε). (3.12)

We will need Theorem 3.19 for some of our applications.

Proof of Theorem 3.19.
The case discussed in (3.12) has a proof similar to that of Theorem 2.26. We define the random variable Z that samples (s, a, b) and outputs P_{v∈b}[f_s(v) ≠ G(v)]. Consider the same events as in the proof of Theorem 2.26. Since:
1. E₁ ∨ E₂ occurs with the same probability, and
2. the expectation of Z given ¬(E₁ ∨ E₂) is still O(γε),
by Markov's inequality, for any r > 0, (3.12) holds. □

3.3 Proof of the Lemmata

Lemma (Restatement of Lemma 3.15). For any a ∈ A, let h_a be as in Definition 3.13. Denote by ε_a the disagreement probability given that t ⊃ a, i.e. ε_a = P[f_{s₁}↾t ≠ f_{s₂}↾t | t ⊃ a]. Then for every a ∈ A:

P_{s∈{s⊃a}}[f_s↾a ≠ h_a] = O(ε_a).

Proof of Lemma 3.15.
Fix a ∈ A, and denote by ε_a the probability to fail the STS-test given that s₁, s₂, t ⊃ a. If ε_a is at least some small constant we are trivially done, so assume otherwise. Denote by {h^i_a}_i all possible functions from a to Σ, where h¹_a = h_a. Consider the STS_a-graph. According to Assumption (A2)a, this is an edge expander with constant edge expansion.

Denote S^i = {s : f_s↾a = h^i_a}. We need to show that the measure of S¹ = {s : f_s↾a = h_a} (the largest of all the S^i) is 1 − O(ε_a).

The quantity ε_a, i.e. the fraction of edges between distinct S^i's, is by assumption less than a small constant. It is a known fact that if we partition the vertices of an edge-expander graph, and there are few outgoing edges between the parts, then one of the parts in the partition is large. This fact is formulated in Claim A.6. We invoke Claim A.6, using the fact that the graph is an edge expander with constant expansion and the fact that the fraction of edges between the S^i's is small, and get that P[S¹] ≥ 1/2. By the edge-expander property, P[(S¹)^c] ≤ O(P[E(S¹, (S¹)^c)]) ≤ O(ε_a). □

Corollary 3.20. P[A*] = O(ε).

Proof of Corollary 3.20. Each a ∈ A contributes to A* if the fraction of bad edges that a participates in is ≥ 1/20. The total fraction of bad edges is O(ε) by Lemma 3.15. Thus by Markov's inequality P[A*] = O(ε). □

We move to Claim 3.17.
Claim (Restatement of Claim 3.17). P_{a∈A, v∈reach a}[a ∈ A*_v and a ∉ A*] = O(γε).

Proof of Claim 3.17.
Fix some a ∉ A*. Consider the VAS_a-graph for this a. This is the bipartite graph where

L = reach a,
R = { (a₂, s) | ∃v ∈ L, (v, a, s, a₂) ∈ Supp(D) },
E = { (v, (a₂, s)) : (v, a, s, a₂) ∈ Supp(D) }.

The probability of choosing an edge (v, (a₂, s)) is given by P_D[(v, a₁, s, a₂) | a₁ = a]. Denote by B ⊂ R the set that consists of all (s, a₂) s.t. (a, s, a₂) is bad. If a ∉ A* then P[B] < 1/20. From Assumption (A3)b, this graph is a √γ-bipartite expander.

Define the set

V* = { v ∈ reach a | P_{(s,a₂)}[B | v ∼ (s, a₂)] ≥ 1/20 },

the set of v ∈ reach a so that the probability of a bad edge is larger than 1/20, namely, such that a is locally bad for v. In the sampler lemma, Lemma A.9, we see that bipartite expanders are good samplers. We use Lemma A.9 to get that P[V*] = O(γ)·P[B]. Taking expectation over all a ∈ A we get that

P_{a∈A, v∈reach a}[a ∈ A*_v and a ∉ A*] = P[a ∉ A*] · E_{a∉A*}[P[V*]]
= P[a ∉ A*] · O(γ) · E_{a∉A*}[P[B]] ≤ O(γ) · E_{a∈A}[P[B]] = O(γε).

The inequality is due to the fact that the expectation conditioned on a ∉ A* is at most the expectation over all of A (by definition, when a ∈ A* we have P[B] ≥ 1/20, and when a ∉ A*, P[B] < 1/20). The last equality is since E_{a∈A}[P[B]] = O(ε) by Lemma 3.15. □

Moving on to Lemma 3.16:
Lemma (Restatement of Lemma 3.16). Let D be a distribution over (a, s, v) ∈ A × S × V defined by the STAV-structure, that is:
1. Choose some (a, v) where v ∈ reach_a.
2. Choose some s ⊃ (a, v) (where we mean {v}, a ⊂ s).
Then P_{(a,v,s)∼D}[f_s(v) ≠ g_a(v) and f_s ↾ a = h_a and a ∉ A*_v] = O(γε).

For the proof of the lemma, we'll need the following property of expander graphs. In an expander graph, the number of outgoing edges from some A ⊂ V approximates the size of A or of V \ A. The following claim generalizes this fact to the setting where we count only outgoing edges from A to a (large) set B ⊂ V \ A.

Claim (Restatement of Claim A.10). Let G = (V, E) be a λ-two-sided spectral expander. Let V = A ·∪ B ·∪ C, s.t. P[A] ≤ P[B]. Then
P[A] ≤ 1/((1 − λ)P[B]) · (P[E(A, B)] + λP[C]). (3.13)
In particular, if P[B], 1 − λ = Ω(1) then P[A] = O(P[E(A, B)] + λP[C]).

Proof of Lemma 3.16.
For a fixed (a, v) we consider the conditioned STS_{a,v}-graph. Recall that the vertices in this graph are all the s ⊃ (a, v), and the edges are triples (s_1, t, s_2) so that t ⊃ (a, v). We partition this graph into three sets:
M_{a,v} = { s | f_s ↾ a = h_a, f_s(v) = g_a(v) },
N_{a,v} = { s | f_s ↾ a = h_a, f_s(v) ≠ g_a(v) },
C_{a,v} = { s | f_s ↾ a ≠ h_a }.
For (a, v) so that a ∉ A*_v, the elements s ∈ N_{a,v} are exactly those that contribute to the probability in (3.9). Thus the probability in (3.9) is
P_{(a,v)}[a ∉ A*_v] · E_{(a,v) : a ∉ A*_v}[P[N_{a,v}]].
We also denote by H_{a,v} the set of edges (s_1, t, s_2) in the STS_{a,v}-graph so that f_{s_1}(v) ≠ f_{s_2}(v) and f_{s_1} ↾ a = f_{s_2} ↾ a. Note that any edge between N_{a,v} and M_{a,v} is an edge of H_{a,v}. By (2.2), ξ(f) = γ. Thus in particular
P_{(s_1,s_2,t,a,v)}[f_{s_1}(v) ≠ f_{s_2}(v) and f_{s_1} ↾ a = f_{s_2} ↾ a] ≤ P_{(s_1,s_2,t,a,v)}[f_{s_1} ↾ t ≠ f_{s_2} ↾ t and f_{s_1} ↾ a = f_{s_2} ↾ a] = P[f_{s_1} ↾ t ≠ f_{s_2} ↾ t] · ξ(f) = γε.
Thus E_{(a,v)}[P[H_{a,v}]] = P[f_{s_1}(v) ≠ f_{s_2}(v) and f_{s_1} ↾ a = f_{s_2} ↾ a] = O(γε).
According to Assumption (A2)b, the STS_{a,v}-graph is a γ-two-sided spectral expander, thus we can use the almost cut approximation property, Claim A.10, to show that
(1 − γ)P[M_{a,v}] · P[N_{a,v}] = O(P[E(N_{a,v}, M_{a,v})] + γP[C_{a,v}]). (3.14)
To conclude the proof we first show that the right hand side of (3.14) is bounded by O(γε) in expectation over (a, v). Then we show that for all but O(γε) of the (a, v),
P[M_{a,v}] ≥ 1/2. (3.15)
Indeed, as E(N_{a,v}, M_{a,v}) ⊂ H_{a,v}, we get that
P_{(a,v)}[a ∉ A*_v] · E_{(a,v) : a ∉ A*_v}[P[E(N_{a,v}, M_{a,v})]] ≤ E_{(a,v)}[P[H_{a,v}]] = O(γε).
Furthermore, note that
P_{(a,v)}[a ∉ A*_v] · E_{(a,v) : a ∉ A*_v}[γP[C_{a,v}]] ≤ γ E_{(a,v)}[P[C_{a,v}]] = γ P[f_s ↾ a ≠ h_a].
This is bounded by O(γε), since P[f_s ↾ a ≠ h_a] = O(ε) by Lemma 3.15. Hence the right hand side of (3.14) is bounded by O(γε) in expectation over (a, v).
To complete the proof, we turn to showing (3.15) for all but O(γε) of the (a, v). For this, we use the edge expander partition property, Claim A.6. Partition the vertices of the STS_{a,v}-graph into B_0, B_1, ..., B_{n+1}, where B_0 = C_{a,v}, B_1 = M_{a,v}, and N_{a,v} = B_2 ·∪ ... ·∪ B_{n+1}, where each B_j is the set of s so that f_s(v) = σ_j, for the σ_j ∈ Σ other than g_a(v).
We have already seen that P[C_{a,v}] is small, hence E(B_0, B_0^c) = E(C_{a,v}, C_{a,v}^c) is small as well. From (2.2), E_{(a,v)}[P[H_{a,v}]] = O(γε). By Markov's inequality, P[H_{a,v}] is small for all but O(γε) of the (a, v). When this occurs, the fraction of edges between the partition parts is a small constant. From the edge expander partition property, Claim A.6, we get that one of the partition sets has probability ≥ 1/2. This is not B_0 = C_{a,v}, as its probability is small. Thus P[M_{a,v}] ≥ 1/2.
Thus, using the fact that γ < 1/2, for all but O(γε) of the (a, v), P[N_{a,v}] = O(P[E(N_{a,v}, M_{a,v})] + γP[C_{a,v}]). Hence E_{(a,v)}[P[N_{a,v}]] = O(γε). □

Corollary 3.21.
Consider the
VASA-distribution promised for us in Assumption (A3).
P_{(v,a_1,s,a_2)}[ f_s ↾ a_i = h_{a_i} ∧ g_{a_1}(v) ≠ g_{a_2}(v) and a_i ∉ A*_v for i = 1, 2 ] = O(γε).

Proof of Corollary 3.21.
The probability is bounded by the sum of twice the probability bounded in Lemma 3.16 and the probability bounded in Claim 3.17. □
We restate Lemma 3.18:
Lemma (Restatement of Lemma 3.18). P_{a ∈ A, v ∈ reach_a}[g_a(v) ≠ G(v) and a ∉ A*_v] = O(γε).

Proof of Lemma 3.18.
Fix some v ∈ V and consider its vASA-graph defined in Section 2.2, namely the graph whose vertices are reach_v and whose edges are all the (a_1, s, a_2) so that (v, a_1, s, a_2) is in the support of our VASA-distribution. Consider the following sets in this graph:
M_v = { a ∈ reach_v \ A*_v | g_a(v) = G(v) }, the popular vote,
N_v = { a ∈ reach_v \ A*_v | g_a(v) ≠ G(v) }, the other votes,
C_v = A*_v.
The a ∈ N_v are exactly those where g_a(v) ≠ G(v) and a ∉ A*_v. Hence we need to bound E_v[P[N_v]]. By Assumption (A3)a this graph is either a γ-bipartite expander or a γ-two-sided spectral expander. Claim A.11, the bipartite almost cut approximation property, is the analogue of Claim A.10 for bipartite graphs. We invoke either Claim A.11 or Claim A.10 for N_v, M_v, C_v and get that
(1 − γ)P[M_v] · P[N_v] ≤ P[E(N_v, M_v)] + γP[C_v],
or
P[N_v] ≤ 1/((1 − γ)P[M_v]) · (P[E(N_v, M_v)] + γP[C_v]). (3.16)
The proof now has two steps:
1. We show that P[M_v] ≥ 1/2 for all but O(γε) of the vertices v.
2. We show that the right hand side of (3.16) is O(γε) in expectation.
To show step 1 we will need to show that for all but O(γε) of the v, the measure of C_v is small:
P_v[ P[A*_v] is larger than a small constant ] = O(γε). (3.17)
Assuming that P[C_v] is small, we show that P[M_v] ≥ 1/2 using the edge expander partition property, Claim A.6. By Assumption (A3)a, the vASA-graph is either a γ-bipartite expander or a γ-two-sided spectral expander for small γ, thus it is also an edge expander with constant edge expansion. We intend to invoke Claim A.6. Partition the vertices into:
– B_0 = A*_v = C_v.
– B_1 = M_v.
– B_2, ..., B_n: the sets of elements a with g_a(v) = σ_i, for each σ_i ∈ Σ that is not the majority value. Note that reach_v = B_0 ·∪ ... ·∪ B_n.
By (3.17), the set B_0 = A*_v has small measure for all but O(γε) of the v's.
When this occurs, E(B_0, B_0^c) = E(C_v, C_v^c) is small. We bound the number of edges between the B_i's other than B_0. We can divide these edges into bad edges and edges that are not bad. The bad edges between the B_i's account for only a small fraction, as for every i and every a ∈ B_i, the fraction of bad edges connected to a is small (since a ∉ A*_v). Finally, from Corollary 3.21 and Markov's inequality, there are at most O(γε) of the v's where the fraction of edges between different B_i's that are not bad is large. Thus for all but O(γε) of the v's, the fraction of edges between parts of this partition is a small constant. Invoking Claim A.6, we get that one of the sets above must have probability at least 1/2. This must be B_1 = M_v: by the definition of G(v) as the plurality value it is the largest among B_1, ..., B_n, and B_0 = C_v has small measure.
We move to showing that (3.17) is true for all but O(γε) of the vertices v ∈ V. Consider the graph between the STAV-parts V and A where we choose a pair (a, v) according to the probability of choosing them in the STAV-structure. The set of vertices we need to bound is the set of v's with large P[A*_v]. There are two types of such vertices v:
– vertices with P[A*_v ∩ A*] below a constant threshold, so that A*_v \ A* is large;
– vertices with P[A*_v ∩ A*] above it.
By Claim 3.17, P_{(a,v)}[a ∈ A*_v and a ∉ A*] = O(γε). Thus by Markov's inequality, only O(γε) of the vertices can see a constant fraction of neighbors a ∈ A*_v \ A*, bounding by O(γε) the fraction of v's of the first type. To bound the vertices of the second type, note that these are vertices that have a large constant fraction of their neighbors in A*. By Corollary 3.20, P[A*] = O(ε). According to Assumption (A1), our graph is a √γ-bipartite expander. Thus by the sampler lemma, Lemma A.9, the set of vertices v that have more than a constant fraction of their neighbours in A* has measure O(γ)P[A*] = O(γε). As for step 2:
Taking expectation on (3.16) we get that E [ P [ N v ]] (cid:54) E [ ( − γ ) P [ M v ] P [ E ( N v , M v )]] + γ E [ P [ C v ]] P v (cid:20) P [ A ∗ v ] > (cid:21) + E [ P [ E ( N v , M v )]] + γ E [ P [ C v ]] , (3.18)where the second inequality is due to the fact that we assumed that γ < andthat when P v (cid:2) P [ A ∗ v ] (cid:54) (cid:3) then P [ M v ] (cid:62) , hence1 ( − γ ) P [ M v ] (cid:54) P v (cid:20) P [ A ∗ v ] > (cid:21) = O ( γε ) .By Corollary 3.20 and Claim 3.174 γ E v [ P [ C v ]] = γ E v [ P [ A ∗ v ]] = O ( γε ) .We continue to bound P [ E ( N v , M v )] in expectation. Every edge counted in E ( N v , M v ) is either a bad triple (i.e. and edge ( a , s , a ) s.t. f s (cid:22) a i (cid:44) h a i for i = O (cid:0) εd (cid:1) non-bad edges in the cut.As for the bad edges, notice that a ∈ N v is not a member of A ∗ v , thus the amountof bad edges that are connected to a is at most -fraction of the edges connected to a (by definition). Thus the amount of bad edges is bounded by P [ N v ] , and P [ E ( N v , M v )] (cid:54) O ( γε ) + P [ N v ] .By summing up the bounds we get that E [ P [ N v ]] (cid:54) O ( γε ) + E [ P [ N v ]] hence E [ P [ N v ]] = O ( γε ) . (cid:3) In the next three sections we derive several different agreement theorems from ourSTAV agreement theorem, Theorem 2.26.The first two agreement testing theorems, Theorem 4.1 and Theorem 4.4, improveand extend the agreement tests from [DK17] and from [DS14]. In both theoremsthe ground set are the vertices of a simplicial complex and the subsets are the faces.In the first theorem the complex is a two-sided high dimensional expander, and inthe second theorem it is a one-sided high dimensional expander with a d + S = X ( d ) whose ground set is X ( ) . Ourtest is the d , ‘ -agreement test: Definition (Restatement of Definition 3.1) . Let X be a d -dimensional simplicialcomplex and ‘ < d be a positive integer. We define the distribution D d , ‘ by thefollowing random process 31. 
Sample t ∈ X(ℓ).
2. Sample s_1, s_2 ∈ X(d) independently, given that t ⊂ s_1, s_2.
The d,ℓ-agreement test is the test associated with the d,ℓ-agreement distribution on this family.

We show that the d,ℓ-agreement test is sound, as long as X is a two-sided high dimensional expander (HDX).

Theorem 4.1 (Agreement for High Dimensional Expanders). There exists a constant c > 0 such that for every two natural numbers d > ℓ with d − ℓ = Ω(d) the following holds. Suppose that X is a λ-two-sided d-dimensional HDX, for λ sufficiently small as a function of d and ℓ. Then for every r > 0 the d,ℓ-agreement test is (r/ℓ)-approximately c(1 + 1/r)-sound. In particular, if ℓ = Ω(d), then the test is exactly c-sound.

The theorem in [DS14] says that the d-dimensional complete complex supports a c-sound agreement test for some constant c > 0. The complete complex is the complex that contains all possible sets of size ≤ d + 1. This is a special case of Theorem 4.1, but even this case is not trivial. Building on [DS14], the main theorem in [DK17] shows that the √d-dimensional skeleton S = X(√d) of a d-dimensional two-sided high-dimensional expander supports a c-sound agreement test for some constant c > 0. This gave the first agreement test on a sparse system of sets, that is, one where every vertex is contained in a constant number of sets. Why go to a √d-dimensional skeleton? This was due to a technical step in the proof, and we show in Theorem 4.1 that it is unnecessary. In fact, all levels of a two-sided high dimensional expander give rise to a sound agreement test.
A subtle and not very important difference between our theorem and the theorem in [DK17] is the agreement distribution. The two distributions are slightly different (one is based on an upper walk and one is based on a lower walk), but the difference is unimportant because once you have proven the result with one of these distributions, it implies the same for the other. For a further discussion on this matter, see Section B.
This theorem has some implications for matroids. Let X be a simplicial complex whose faces are the independent sets of a fixed matroid whose rank is r (i.e. the largest independent set in this matroid has size r). In an exciting recent work [ALGV18] it was proven that this complex is a 0-one-sided HDX. Oppenheim [Opp18b] proved that if we truncate this complex by keeping only faces of dimensions 0 ≤ i ≤ d then it becomes a 1/(r − d − 1)-two-sided HDX. We reach the following conclusion:

Corollary 4.2 (Truncated matroids). For any matroid of rank r and any d ≤ √r, the collection of independent sets in the matroid whose size is d supports a sound agreement test. □

Furthermore, some matroids are themselves (without truncation) two-sided high dimensional expanders. For example, the matroid of linear bases of a vector space F_q^n can easily be shown to be a 1/q-two-sided HDX. When n ≤ q we can deduce:

Corollary 4.3 (Linear bases matroid). Let S be the collection of all linear bases of a vector space F_q^n. If n ≤ q then this set system supports a sound agreement test.
□ If simplicial complexes are high dimensional analogues of graphs, then (d + 1)-partite complexes are analogues of bipartite graphs. In a (d + 1)-partite complex the ground set is partitioned into V_1, ..., V_{d+1}, so that every set of size d + 1 contains exactly one vertex from each part V_i. Our second theorem shows that the d,ℓ-agreement test is sound when X is a (d + 1)-partite one-sided high dimensional expander (HDX). One-sided HDX are the high dimensional analogue of bipartite expanders. They are formally defined in Section 4.3.

Theorem 4.4 (Agreement for (d + 1)-Partite High Dimensional Expanders). There exists a constant c > 0 such that for every two natural numbers k, ℓ with k > ℓ the following holds. Suppose X is a k-dimensional skeleton of a (d + 1)-partite λ-one-sided HDX (including k = d), for λ sufficiently small as a function of k and ℓ. Then for every r > 0 the d,ℓ-agreement test is (r/ℓ)-approximately c(1 + 1/r)-sound. In particular, if ℓ = Ω(k), then the test is exactly c-sound.

Interestingly, in the known one-sided (d + 1)-partite HDX constructions, the distribution over S = X(d) is uniform. Thus this theorem gives us a sparse, uniformly distributed set system with a sound agreement test. This is unlike the known constructions for two-sided high-dimensional expanders, which come from truncating one-sided high-dimensional expanders and for which the distribution of the test over S = X(d) is not uniform.

Organization of this section.
This section is a bit long so let us quickly explainits contents. In Section 4.1 we describe random walks on simplicial complexes, boththe well-known “containment” random walks as well as the new complement randomwalks. In Section 4.2 we prove Theorem 4.1 by showing that any two sided HDXsupports a STAV structure. In Section 4.3 we prove Theorem 4.4. The proof of thistheorem is more intricate, as we don’t only find one STAV structure but rather manydifferent STAVs. We apply our main technical theorem on each and then combinethe outcomes together.
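Before the formal development, the d,ℓ-agreement test of Definition 3.1 can be sketched concretely on the complete complex, where every (d + 1)-subset of the ground set is a face and all distributions are uniform. This is a toy illustration of ours, not code from the paper; the estimator name and the Monte Carlo setup are our own.

```python
import random

def d_ell_agreement_reject_rate(ground_set, d, ell, f, trials=2000, seed=0):
    """Estimate the rejection probability of the d,ell-agreement test on the
    complete complex: sample t in X(ell), then two independent d-faces
    s1, s2 containing t, and check that f_{s1}, f_{s2} agree on t.
    (An i-face has i+1 vertices; f(s, v) plays the role of f_s(v).)"""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        t = frozenset(rng.sample(ground_set, ell + 1))          # t in X(ell)
        rest = [v for v in ground_set if v not in t]
        s1 = t | frozenset(rng.sample(rest, d - ell))           # s1 in X(d), s1 ⊃ t
        s2 = t | frozenset(rng.sample(rest, d - ell))           # s2 in X(d), s2 ⊃ t
        if any(f(s1, v) != f(s2, v) for v in t):                # compare restrictions to t
            rejections += 1
    return rejections / trials

ground = list(range(30))
consistent = lambda s, v: v % 2    # restrictions of one global function: never rejected
print(d_ell_agreement_reject_rate(ground, d=9, ell=2, f=consistent))  # 0.0
```

An ensemble whose local functions genuinely depend on the face s (rather than being restrictions of a single global function) is rejected with positive probability, which is exactly the quantity ε that the soundness theorems relate to global structure.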
We refer to the definition of a weighted simplicial complex and High DimensionalExpanders in Section A.3.
The Containment Walk.
On a d-dimensional simplicial complex we can define the k,ℓ-lower random walk, for ℓ < k ≤ d:

Definition 4.5 (The lower walk). Given s_1 ∈ X(k) we choose s_2 ∈ X(k) by:
– Choose t ∈ X(ℓ) given that t ⊂ s_1.
– Choose s_2 ∈ X(k) given that t ⊂ s_2.
One can also define the ℓ,k-upper walk on X(ℓ), where given t_1 ∈ X(ℓ) we choose s ⊃ t_1 in X(k), and then choose t_2 ⊂ s.
This random walk is in fact two independent steps in the k,ℓ-containment graph:
L = X(k), R = X(ℓ), E = {(s, t) | t ⊂ s}.
We denote the bipartite operator of this graph by D_{k,ℓ}. Note that D_{k,ℓ} = D_{ℓ+1,ℓ} D_{ℓ+2,ℓ+1} ··· D_{k,k−1}.
This random walk has been studied in [KM17, KO18b, DK17, DDFH18] and elsewhere. In particular, [KO18b] proved the following theorem:

Theorem 4.6. Let X be a λ-one-sided link expander. Then λ(D_{k+1,k}), the second largest eigenvalue of the upper walk, is at most √((k + 1)/(k + 2)) + O(kλ).

Theorem 4.6 immediately implies the following useful corollary:
Corollary 4.7.
Let X be a λ-one-sided link expander. Then λ(D_{k,ℓ}) ≤ √((ℓ + 1)/(k + 1)) + O_k(λ). □

A k-skeleton of a d-dimensional simplicial complex Y is X = {s ∈ Y | |s| ≤ k + 1}.
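A single step of the lower walk of Definition 4.5 can be sketched on the complete complex, where all the conditional choices are uniform. This is a toy sketch of ours; on a general weighted complex each choice would follow the complex's distribution rather than be uniform.

```python
import random

def lower_walk_step(s, ground_set, ell, rng):
    """One step of the k,ell-lower walk on the complete complex:
    from a k-face s (with k+1 vertices), pick t ⊂ s with |t| = ell+1,
    then a fresh k-face s' ⊃ t, uniformly among k-faces containing t."""
    size = len(s)                                    # k + 1 vertices
    t = frozenset(rng.sample(sorted(s), ell + 1))    # t in X(ell), t ⊂ s
    rest = [v for v in ground_set if v not in t]
    return t | frozenset(rng.sample(rest, size - (ell + 1)))  # s' ⊃ t, s' in X(k)

rng = random.Random(1)
s0 = frozenset(range(6))                   # a 5-face (6 vertices)
s1 = lower_walk_step(s0, list(range(20)), ell=1, rng=rng)
```

By construction the walk stays on X(k), and consecutive faces always share the intermediate ℓ-face t; iterating `lower_walk_step` gives the Markov chain whose spectrum Theorem 4.6 and Corollary 4.7 control.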
V ASA -distribution on two-sided high dimensional expanders. The spectralgap of this walk needed to be O (cid:0) ‘ (cid:1) . Unfortunately, the lower walk, or its dual,the upper walk, had spectral gap of approximately ‘ + k + . This is a constant when ‘ = Ω ( k ) .The complement walk, is a walk between X ( k ) and X ( ‘ ) , where we go from s ∈ X ( k ) to t ∈ X ( ‘ ) by t ·∪ s ∈ X ( k + ‘ + ) . Definition 4.8 (The Complement Walk) . Let X be a d -dimensional simplicialcomplex. Let k , ‘ be integers s.t. k + ‘ + (cid:54) d . The k , ‘ -complement walk is thebipartite graph with edges ( L , R , E ) :– The vertices are L = X ( k ) , R = X ( ‘ ) .– The edges are E = { ( s , t ) | s ·∪ t ∈ X ( k + ‘ + ) } .The probability of choosing an edge ( s , t ) is the probability of choosing s ·∪ t ∈ X ( k + ‘ + ) and then choosing s ∈ X ( k ) , given that we chose s ·∪ t .We will show that in a λ -two-sided spectral expander, this walk has spectralgap proportionate to ‘ , k and λ . More formally, we will prove the following claim(Theorem 7.1, item 1): Claim . Let X be a λ -two-sided link expander. ‘ , ‘ integers so that ‘ + ‘ + (cid:54) d .Denote by M ‘ , ‘ , the bipartite operator of the ‘ , ‘ -complement walk. Then λ ( M ‘ , ‘ ) (cid:54) ( ‘ + )( ‘ + ) λ . Colored Walks in d + -Partite Simplicial Complexes. On one-sided highdimensional expanders, the complement walk may not be a good expander. However,in the d + colored walk . Fortwo colors I , J , this walk goes from t ∈ X [ I ] to s ∈ X [ J ] by a face in X [ I ·∪ J ] . Definition 4.10 (The Colored Walk) . Let X be a d -dimensional d + I , J ⊂ [ d ] be two disjoint sets of colors. The I , J -colored walkis the bipartite graph with edges ( L , R , E ) :– The vertices are L = X [ I ] , R = X [ J ] .– The edges are E = { ( s , t ) | s ·∪ t ∈ X [ I ·∪ J ] } .The probability of choosing an edge ( s , t ) is the probability of choosing s ·∪ t ∈ X [ I ·∪ J ] .Denote the bipartite adjacency operator of this walk by M I , J . 
We show that if X is a d + λ -one sided link expander, λ ( M I , J ) is proportionate to | I || J | and λ . We state the following claim that bounds the spectral gap of the colored walks(Theorem 7.1, item 2): Claim . Let X be a d + λ ( d + ) λ + -one-sided link expander, where λ < .Let I , J ⊂ [ d ] be two disjoint colors. Denote by M I , J the I , J -colored walk. Then λ ( M I , J ) (cid:54) | I || J | λ . Proof of Theorem 4.1.
First, note that when d is small, the theorem is true by asimple union bound. Thus we may assume d >> f andshow that if rej ( f ) = ε then there exists a global function G : X ( ) → Σ such that P s ∼ D d , ‘ " f s r‘ (cid:44) G (cid:22) s = c (cid:18) + r (cid:19) ε ,for some constant c .The STAV structure we examine for this agreement test is as follows: S = X ( d ) , T = X ( t ) ; A = X ( t − ) ; V = X ( ) .Our distribution is1. Choosing s ∈ S according to the distribution of the simplicial complex.2. Choosing t ⊆ s uniformly at random.3. Choosing ( v , a ) by choosing v ∈ t uniformly at random and setting a = t \ { v } .The ST S -test of this structure is the ( d , ‘ ) -agreement test. The V ASA -distributionis the following:1. Choose s ∈ X ( d ) .2. Choose a , a , v so that a ·∪ a ·∪ { v } ⊂ s .This distribution is obviously symmetric in a and a . Furthermore, the choice of ( v , a i , s ) is identical to the marginal in the STAV-structure.First, we claim that for any simplicial complex, the STAV-structure above has ξ ( X ) = O (cid:0) ‘ (cid:1) . Claim . Let X be any simplicial complex, then the surprise of a STAV-structurewith T = X ( ‘ ) , A = X ( ‘ − ) ; V = X ( ) has ξ ( X ) = O (cid:0) ‘ (cid:1) . Proof of Claim 4.12.
Let f be any ensemble of local functions. We need to show that P ( s , s , t , a ) [ f s (cid:22) a = f s (cid:22) a | f s (cid:22) t = f s (cid:22) t ] .To do so, we want to invoke Lemma 2.23. For every t ∈ T , the T -lower graph is thecontainment graph where on one side we have L = (cid:18) t‘ − (cid:19) ,and on the other we have R = { v ∈ t } .It is a well known fact that this graph is a ‘ -bipartite expander. Trivially, if f s (cid:22) t (cid:44) f s (cid:22) t , they differ on at least ‘ + -fraction of the vertices (namely, one vertex).By Lemma 2.23, we get a surprise of ‘ − ( ‘ + ) − = O (cid:0) ‘ (cid:1) . (cid:3) If we show STAV-structure defined above theorem is O ( γ ) -good as in Defini-tion 2.15, for γ = ‘ , we could invoke Theorem 2.26 and conclude. Hence we need,that it fulfils the assumptions in Definition 2.15:1. We begin with the proof of Assumption (A4) . We require that the probabilityof choosing some v ∈ s so that v ∈ reach a is greater or equal to for any a ⊂ s .as ‘ < d , and s ∩ reach a = s \ a for any a ⊂ s , P [ v ∈ reach a | v ∈ s ] = | s \ a || s | (cid:62)
12 .35. Assumption (A1) : The graph described in this assumption is the 0, ‘ − X . By our assumption X is a d ‘ -two-sided HDX.Thus from Claim 4.9, this graph is a O (cid:0) ‘ (cid:1) -bipartite expander.3. Assumption (A2) a: Fix a ∈ A . The conditioned ST S -graph,
ST S a is the graphwhose vertices are all { s ⊃ a } . Traversing from s to s is going by t = a ·∪ { v } .We are to show that this graph is a O (cid:0) ‘ (cid:1) -two-sided spectral expander. Indeedthis graph is (isomorphic to) the graph obtained whose vertices are X a ( d − ‘ ) ,and s \ a , s \ a are connected by an edge if their intersection contains a vertexin X a ( ) . d − ‘ = Ω ( ‘ ) thus by Theorem 4.6, and the fact that X is an ‘d -HDX, this graph is a η -two-sided spectral expander, for η = d − ‘ + O (cid:18) dd ‘ (cid:19) = O (cid:18) ‘ (cid:19) .In particular, it is an -edge-expander (for a large enough d ).4. Assumption (A2) b: We are to bound the spectral gap in the conditioned ST S -graph, namely
ST S a , v , whose vertices are s ⊃ ( a , v ) , and edges are ( s , t , s ) where t ⊃ ( a , v ) . (When we say for instance s ⊃ ( a , v ) we mean of course that s ⊃ a , v .)In the context of the STAV-structure above, conditioning on a ∈ A , v ∈ X ( ) isthe same as conditioning on t = a ·∪ { v } ∈ T . In this case, the choices of s , s are independent - i.e. the graph we get is a clicque with self loops. This graphis a 0-two-sided spectral expander.5. Assumption (A3) a: Fix some v ∈ V . The v ASA -graph is the graph whose ver-tices are all a , a so that ( v , a , s , a ) are in the support of the V ASA -distribution.In this case these are exactly X v ( ‘ − ) . We go from a to a by choosing ( v , a , s , a ) in the V ASA -distribution. Thus in this case we go from a to a ifthey are disjoint and share a face s \ { v } ∈ X v ( d − ) .The graph we just described is the graph whose double cover is the ( ‘ − ) , ( ‘ − ) -complement walk graph in X v , the link of v . X is a O (cid:0) ‘d (cid:1) -two-sided HDX, thusby Claim 4.9, this graph is a ‘ O (cid:0) ‘d (cid:1) = O (cid:0) ‘ (cid:1) -two sided spectral expanderexpander.6. Assumption (A3) b: This is the only part of the proof that is not immediate.Fix some a ∈ A . The graph in this assumption is the V AS a -graph, the bipartitegraph where L = reach a , R = { ( a , s ) | ∃ v ∈ L ( v , a , s , a ) ∈ Supp ( D ) } , E = { ( v , ( a , s )) : ( v , a , s , a ) ∈ Supp ( D ) } .The probability of choosing an edge ( v , ( a , s )) is given by P D [( v , a , s , a ) | a = a ] .We describe the graph in this case explicitly in this following proposition, thatsays this graph is a q ‘ -bipartite expander. Proposition 4.13.
Fix some a ∈ A , and consider the following graph– L = { ( a , s ) : a ·∪ a ⊂ s } . – R = reach a = X a ( ) . – E = { ( v , ( a , s )) : { v } ·∪ a ·∪ a ⊂ s } , and the probability to choose eachedge is given by the distribution that chooses ( s , a , v ) in the link of a . 𝑎 (cid:4593) , 𝑠) 𝑎 (cid:4593) 𝑣 𝑣 ∪ 𝑎 (cid:4593) ∈ 𝑋 (cid:3028) 𝑎 Figure 2: A triangle in Y . The graph described above is an O (cid:16) √ ‘ (cid:17) -bipartite expander. To prove this proposition, we state Lemma 4.14. The proof of this lemma usesGarland’s method, so we postpone its proof to Section 7.
Lemma 4.14.
Let Y be a -dimensional -partite complex, and denote itsparts by X ( ) = X [ ] ·∪ X [ ] ·∪ X [ ] . Suppose that for every v ∈ X [ ] , X v is a η -bipartite expander. Denote by A , A and A the bipartite walks between ( V , V ) ( V , V ) and ( V , V ) respectively. Then λ ( A ) (cid:54) η + λ ( A ) λ ( A ) . Proof of Proposition 4.13.
Consider the following 2-dimensional 3-partite sim-plicial complex:– The parts of the complex are Y [ ] = X a ( t − ) , Y [ ] = X a ( ) , Y [ ] = { ( a , s ) ∈ X ( t − ) × X ( d ) | a ·∪ a ⊂ s } .– We connect ( a , v , ( a , s )) ∈ Y ( ) if a = a and { v } ·∪ a ·∪ a ⊂ s . Theprobability of choosing some triangle ( a , v , ( a , s )) is the probability ofchoosing s given a , and then choosing a , v (given that they are disjointfrom a ): P X [ s | a ] P X (cid:2) a (cid:12)(cid:12) s \ a (cid:3) P X (cid:2) v (cid:12)(cid:12) s \ ( a ·∪ a ) (cid:3) .We notice the following:(a) A is the bipartite operator of the bipartite walk between L , R in the wedefined in the proposition.(b) A is the bipartite operator of the complement walk in the link of a , andfrom Claim 4.9 λ ( A ) (cid:54) d λ ( X a ) = O (cid:0) ‘ (cid:1) .(c) for every a ∈ Y [ ] , the bipartite operator of the link of a is the containmentwalk between X a ·∪ a ( ) and X a ·∪ a ( d − ‘ ) . Recall that d − ‘ = Ω ( d ) ,thus d − ‘ = Ω ( d ) . Hence this walk is also an O (cid:16) √ ‘ (cid:17) expander.Hence we can apply Lemma 4.14 and conclude that λ ( A ) (cid:54) O (cid:18) √ ‘ (cid:19) + O (cid:18) ‘ (cid:19) = O (cid:18) √ ‘ (cid:19) . (cid:3)(cid:3) .3 Agreement for One-Sided Partite High Dimensional Ex-panders We continue and prove an agreement theorem on one-sided partite high dimensionalexpanders. For a definition of partite simplicial complexes, and other terminology,see Section A.3.In the proof of the two-sided case Theorem 4.1, we used a single STAV-structurederived from the sets X ( k ) , X ( ‘ ) , X ( ‘ − ) , X ( ) . In the one-sided case the STAVdefined above is not γ -good, so we need to work a little harder. As it turns out whenthe one-sided spectral expander is also ( d + ) -partite, we can use the colored walksto substitute for the complement walk. Details follow. Proof of Theorem 4.4
As in Theorem 4.1, we are given an ensemble f with rej D d , ‘ ( f ) = ε . We need to finda global function G : X ( ) → Σ so that P s (cid:20) f s γ (cid:44) G (cid:22) s (cid:21) = O ( ε ) .Without loss of generality, ‘ >
1. For any two disjoint colors I , J of size ‘ , wedefine the I , J -STAV-structure as follows:1. S ( I , J ) = { s ∈ X ( k ) | col ( s ) ⊃ I ·∪ J } .2. T ( I , J ) = { t ∈ X ( ‘ ) | col ( t ) ∩ ( I ·∪ J ) ∈ { I , J }} , i.e. t so that it’s color contains I and is disjoint from J , or vice versa.3. A ( I , J ) = { a ∈ X ( ‘ − ) | col ( a ) = I or col ( a ) = J } .4. V ( I , J ) = { v ∈ X ( v ) | col ( v ) (cid:60) I ·∪ J } .The sts -test associated with the STAV-structure is:1. Choose t ∈ X ( ‘ ) given that col ( t ) either contains I and is disjoint from J , orcontains J and is disjoint from I .2. Choose s , s ⊃ t independently given that col ( s i ) ⊃ I ·∪ J for i =
1, 2.We denote the test associated with this STAV-structure as the I , J -STAV-test .The I , J -STAV distribution is choosing s , t as above, and then setting a ⊂ t sothat col ( a ) = I or col ( a ) = J , and { v } = t \ a . We denote the I , J -STAV distributionby D I , J . These STAV-structures come with V ASA -distributions that are choosing v , s as in the I , J -STAV distribution, and taking a , a to be the subsets of s of colors I , J respectively.We denote the surprise of f in the I , J -STAV structure by ξ ( I , J ) ( f ) , and therejection probability by rej I , J ( f ) . Lemma 4.15.
For any two disjoint colors I , J , each of size ‘ , the STAV-structureabove is γ = O (cid:0) ‘ (cid:1) -good. For a pair of disjoint ( I , J ) -we would like to define a global functions G I , J , thatwill be defined on all vertices so that col ( v ) (cid:60) I , J (using Theorem 2.26). After that,we would like to stitch the G I , J ’s together. In fact, we only need two such globalfunctions, to cover vertices of all colors.However, in order to invoke Theorem 2.26, we need that both rej I , J ( f ) = O ( ε ) and ξ ( I , J ) ( f ) = O (cid:0) ‘ (cid:1) . Furthermore, we will need to use the ( d , ‘ ) -agreement test tostitch the two global functions together.We define an additional agreement test. This test will be used to stitch the G I , J ’stogether. We call it the ( I , J ) -in-one-set test :38. Choose t ∈ X ( ‘ ) (with no conditioning on the color).2. Choose s , s ⊃ t independently given that col ( s ) ⊃ I , J .We denote the rejection probability of this test by rej − setI , J ( f ) .The following lemma states formally what we require from the I , J -STAV distri-butions: Lemma 4.16.
There exists four disjoint colors I , J , I , J where for i =
1, 2 :1. rej I i , J i ( f ) = O ( ε ) .2. ξ ( I i , J i ) ( f ) = O (cid:0) ‘ (cid:1) .Furthermore, we can require from I i , J i that3. rej − setI i , J i ( f ) = O ( ε ) . Given the first two items in the lemma above, we can invoke Theorem 2.26 to geta global function G I i , J i : V ( I , J ) → Σ so that for i =
1, 2 P s ∼ D Ii , Ji (cid:20) f s γ (cid:44) G I i , J i (cid:22) s (cid:21) = O ( ε ) .We glue these two functions to one global function G : X ( ) → Σ : G ( v ) = ( G I , J ( v ) col ( v ) (cid:60) I ·∪ I . G I , J ( v ) otherwise .Here’s a short and informal overview the proof of the theorem given Lemma 4.16:We will choose ( s , t ) as in the d , ‘ -agreement test. Then we will choose an additional t ⊂ s , s , so that s ⊃ I ·∪ J and s ⊃ I ·∪ J .On the one hand, for i =
1, 2 the choice of ( s , t , s i ) is done as in the ( I , J ) -in-one-set distribution. By the third item of Lemma 4.16, f s (cid:22) t (cid:44) f s i (cid:22) t with probability O ( ε ) .On the other hand, by the first two items in Lemma 4.16, P (cid:20) f s i (cid:22) t γ (cid:44) G (cid:22) t (cid:21) = O ( ε ) .By union bound, we will get our theorem. Details follow. Proof of Theorem 4.4.
First, we show that in order to prove P s (cid:20) f s γ (cid:44) G (cid:22) s (cid:21) = O ( ε ) it is enough to prove that P s ∈ x ( d ) , t ⊂ s , t ∈ X ( ‘ ) " f s (cid:22) t γ (cid:44) G (cid:22) t = O ( ε ) . (4.1)Denote by H = (cid:26) s : P v ∈ s (cid:20) f s ( v ) γ (cid:44) G ( v ) (cid:21) (cid:62) ‘ (cid:27) .We need to show that given (4.1), P [ H ] = O ( ε ) . Fix some s ∈ H , i.e. f s γ (cid:44) G (cid:22) s .Consider the following containment graph for s : L = s − the vertices in s , R = (cid:18) s‘ (cid:19) , the subsets of s of size ‘ .39his graph is a √ ‘ -bipartite expander by Theorem 4.6. By Lemma A.9, this graph isa O (cid:0) ‘ (cid:1) -sampler graph . Hence if P v ∈ s (cid:20) f s ( v ) γ (cid:44) G ( v ) (cid:21) (cid:62) ‘ then the set T ∗ s = (cid:26) t ∈ (cid:18) s‘ (cid:19) (cid:12)(cid:12)(cid:12)(cid:12) P v ∈ t [ f s ( v ) (cid:44) G ( v )] < γ (cid:27) has probability of at most . In other words, at least of the t ∈ ( s‘ ) have thatproperty that f s (cid:22) t γ (cid:44) G (cid:22) t .Hence O ( ε ) (cid:62) P [ H ] , and we conclude that there may be on O ( ε ) of s ’s so that f s γ (cid:44) G (cid:22) s .We move to showing (4.1). Observe the following distribution:1. Choose s ∈ X ( k ) and t ⊂ s according to the probability of the simplicialcomplex.2. Choose ∆ ∈ X ( d ) given that t ⊂ ∆ .3. Choose two s , s ⊂ ∆ given that they also contain t , and so that I ·∪ J ⊂ col ( s ) and I ·∪ J ⊂ col ( s ) .In a simplicial complex, a k -face s (respectively s ) is chosen by choosing a d -face ∆ ∈ X ( d ) and choosing s ⊂ ∆ . 
Thus in this distribution (s, t, s₁, s₂) are chosen so that the marginals (s, t, s_i) are distributed according to the (I_i, J_i)-in-one-set test. By the last item of Lemma 4.16, f_s disagrees on t with s₁ or s₂ with probability O(ε). Denote t₁ = { v ∈ t | col(v) ∉ I₁ ·∪ J₁ } and t₂ = { v ∈ t | col(v) ∉ I₂ ·∪ J₂ }; clearly t = t₁ ∪ t₂, and some vertices may appear in both sets. We would like to invoke Theorem 3.19, the extension of Theorem 2.26, to get that

P [ f_{s_i}↾t_i ≠_γ G↾t_i ] = O(ε),

since if |t_i| ⩽ ℓ + 1, this implies that f_{s_i}↾t_i = G↾t_i. Indeed, by Lemma 4.16 we know that rej_{I_i,J_i}(f) = O(ε) in the I_i, J_i-STAV-structure, and that ξ(X(I_i, J_i), f) = O(1/ℓ). Consider the sampling of (v, a, s_i, t_i) where a ⊂ s_i is of color I_i or J_i, and v ∈ t_i is chosen uniformly at random. It holds that (v, a, s_i) is chosen as in the I_i, J_i-STAV-structure, hence by Theorem 3.19,

P [ f_{s_i}↾t_i ≠_γ G↾t_i ] = O(ε)

by the statement (3.12) of Theorem 2.26. Hence we bound the probability in (4.1) by

Σ_{i=1,2} ( rej_{I_i,J_i}(f) + P [ f_{s_i}↾t_i ≠_γ G↾t_i ] ),

which is O(ε) by Lemma 4.16. □

Proof of Lemma 4.16.
For this lemma, we consider the uniform distribution on 4-tuples of pairwise disjoint colors I₁, J₁, I₂, J₂, each of size ℓ. To show that there exist four disjoint colors I₁, J₁, I₂, J₂ with the properties in the lemma statement, we show that each property is satisfied separately with large probability; thus their intersection has non-zero probability as well. We do this by an expectation argument, and then use Markov's inequality.

Step 1: more than 0.8 of the color 4-tuples I₁, J₁, I₂, J₂ satisfy the second item in Lemma 4.16. That is, we show that the "surprise" ξ_{(I_i,J_i)}(f) = O(1/ℓ). For this, note that for any color J,

P_{(s₁,s₂,a,v)} [ f_{s₁}↾t ≠ f_{s₂}↾t | f_{s₁}↾a = f_{s₂}↾a and B_J ] = O(1/ℓ),

where B_J is the event that col(s_i \ (a ·∪ {v})) ⊇ J for i = 1, 2. This is due to the same argument as in Claim 4.12. Hence

E_{I₁,J₁,I₂,J₂} [ Σ_{i=1,2} P_{(s₁,s₂,a,v)} [ f_{s₁}(v) ≠ f_{s₂}(v) | f_{s₁}↾a = f_{s₂}↾a and B_{J_i} and col(a) = I_i ] ] = O(1/ℓ).

By Markov's inequality, 0.8 of the 4-tuples I₁, J₁, I₂, J₂ satisfy

P_{(s₁,s₂,a,v)} [ f_{s₁}(v) ≠ f_{s₂}(v) | f_{s₁}↾a = f_{s₂}↾a and B_{J_i} and col(a) = I_i ] = O(1/ℓ).

Step 2: more than 0.8 of the 4-tuples of disjoint colors I₁, J₁, I₂, J₂ satisfy the first item in Lemma 4.16. That is, when we choose (s₁, t, s₂) according to the I_i, J_i-STAV distribution, the rejection probability is O(ε). First recall that by our assumption,

rej_{D_{d,ℓ}}(f) = P_{(s₁,t,s₂) ∼ D_{d,ℓ}} [ f_{s₁} ≠ f_{s₂} ] = ε.

We can condition this test on col(t) = I ∪ {p} or J ∪ {p} for p ∉ I, J, and on col(s₁) ⊇ I, J (but with no conditioning on s₂). This conditioning is different from the I, J-STAV-structure STS-test, since we don't condition on col(s₂) ⊇ I, J. It is also different from the I, J-in-one-set test, since we do condition on col(t) = I ∪ {p} or J ∪ {p}. Denote the rejection probability of this conditioned agreement test by rej*_{I,J}(f). We know that

E_{I,J} [ rej*_{I,J}(f) ] = rej_{D_{d,ℓ}}(f) = ε.

Hence by Markov's inequality, 0.8 of the disjoint 4-tuples I₁, J₁, I₂, J₂ satisfy rej*_{I_i,J_i}(f) ⩽ O(ε). For a pair I_i, J_i, we consider the following experiment (s₁, t, s₂, s̃₂):
1. Choose (s₁, t, s₂) as in the conditioned test above, i.e. col(t) = I_i ∪ {p} or J_i ∪ {p} for p ∉ I_i, J_i, and col(s₁) ⊇ I_i, J_i.
2. Choose s̃₂ ⊃ t, conditioning on col(s̃₂) ⊇ I_i, J_i.
Observe the following:
1. The marginal (s₁, t, s̃₂) is distributed according to the agreement test in the I_i, J_i-STAV-structure STS-test.
2. The marginals (s₁, t, s₂) and (s̃₂, t, s₂) are distributed according to the rej*_{I_i,J_i}(f) test.
If rej*_{I_i,J_i}(f) = O(ε), then by a union bound we get that

P [ f_{s₁}↾t ≠ f_{s̃₂}↾t ] ⩽ P [ f_{s₁}↾t ≠ f_{s₂}↾t ] + P [ f_{s₂}↾t ≠ f_{s̃₂}↾t ] = O(ε).

Step 3: more than 0.8 of the color 4-tuples I₁, J₁, I₂, J₂ satisfy the third item in Lemma 4.16. That is, when we choose (s₁, t, s₂) by the I_i, J_i-in-one-set distribution, rej^{1-set}_{I_i,J_i}(f) = O(ε). This step follows the same reasoning as in steps 1 and 2: the rejection probability of the d, ℓ-agreement test is ε, hence by Markov's inequality 0.8 of the 4-tuples I₁, J₁, I₂, J₂ have the property that rej^{1-set}_{I_i,J_i}(f) = O(ε) in the I_i, J_i-in-one-set distribution test.

From the three steps above, the probability of the intersection of the three properties is lower bounded by 1 − 3 · 0.2 = 0.4 by a union bound. In particular it is not empty. □

We move towards proving Lemma 4.15. We need the following proposition, stating that containment walks in one-sided high dimensional expanders have a spectral gap even when conditioning on color:
Proposition 4.17.
Let X be a γ-one-sided (d + 1)-partite high dimensional expander. Let J be a color set of size ℓ. Consider the following graph:
– L = { v ∈ X(0) | col(v) ∉ J }.
– R = { s ∈ X(k) | J ⊆ col(s) }.
– E = { (v, s) : v ⊂ s }, where the probability of an edge (v, s) is given by choosing s ∈ X(k) conditioned on J ⊆ col(s), and then choosing v ∈ s uniformly at random conditioned on col(v) ∉ J.
Then this graph is an O(√(1/(k − ℓ))) + kγ-bipartite expander.

We prove this proposition after the proof of Lemma 4.15.
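Bipartite expansion of the kind asserted in Proposition 4.17 is a bound on the second singular value of the bipartite walk operator, and on small examples it can be checked numerically by applying the two-step operator MM^T to a function orthogonal to the constants. The sketch below is our own toy example (the plain containment walk between vertices and edges of the complete complex on 6 vertices, not the color-conditioned graph of the proposition):

```python
from itertools import combinations

# Toy bipartite containment walk: L = vertices, R = edges of the complete
# complex on n vertices.  M goes from a vertex to a uniformly random edge
# containing it; M^T goes from an edge to a uniformly random endpoint.
n = 6
L = list(range(n))
R = list(combinations(L, 2))

def two_step(x):
    """Apply M M^T to a function x on the vertices."""
    y = [0.0] * n
    for v in L:
        edges = [e for e in R if v in e]
        for e in edges:
            for u in e:
                y[u] += x[v] / len(edges) / 2
    return y

# For this walk, every function orthogonal to the constants contracts by
# exactly sigma_2^2 = 1/2 - 1/(2*(n-1)) = 0.4, so sigma_2 = sqrt(0.4).
x = [1.0, -1.0] + [0.0] * (n - 2)
y = two_step(x)
print(y[0] / x[0])  # contraction factor, approximately 0.4
```

In general one would iterate the two-step operator and take Rayleigh quotients; here the spectrum below the top eigenvalue is flat, so a single application already reveals the contraction.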
Proof of Lemma 4.15.
Again, we may assume that ℓ ≫ 1. Fix some disjoint colors I, J, and consider the I, J-STAV structure. We show that the five assumptions in Definition 2.15 hold for γ = O(1/ℓ):

1. Assumption (A4): We need to show that P_v[ v ∈ reach_a | v ∈ s ] is bounded below by a constant. This assumption holds trivially, because in these STAV-structures col(a) ∈ {I, J} and the vertices v have colors outside I, J, and given s we can choose all possible pairs (a, v) with these colors. Hence when choosing v ∈ s it is always in reach_a.

2. Assumption (A1): We need to show that the global graph between A and V, where choosing an edge means choosing a pair (a, v) in the STAV-distribution, is an O(1/ℓ)-bipartite expander. In this case, this graph is the graph of all (v, a) where col(a) ∈ {I, J} and col(v) ∉ I, J. Note that we can decompose this random walk into a convex combination of colored walks M_{I,c}, M_{J,c} for colors c ∉ I ·∪ J. For each c, this colored walk is an O(1/ℓ)-bipartite expander by Claim 4.11. Hence, the combination of walks is also an O(1/ℓ)-bipartite expander.

3. Assumption (A2)a: Fix some a ∈ A. The STS_a-graph is the graph where we choose (s₁, t, s₂) given that they all contain a. This graph is (isomorphic to) the graph whose vertices are the faces s ∈ X_a(d − ℓ), where we connect s₁, s₂ if they share a vertex in X_a(0) whose color is not in J. Taking a step in this graph is like taking two steps in the graph described in Proposition 4.17, if we begin with some s₁. Hence by Proposition 4.17, this is an O(√(1/(k − ℓ)) + kγ) = O(1/ℓ)-two-sided spectral expander. In particular, for ℓ large enough, this is a ½-edge expander.

4. Assumption (A2)b: We are to bound the spectral gap of the conditioned STS-graph, namely STS_{a,v}, whose vertices are the faces s ⊇ a ·∪ {v}, and whose edges are (s₁, t, s₂) where t ⊇ a ·∪ {v}. In the context of the I, J-STAV-structures, conditioning on a ∈ A, v ∈ X(0) is the same as conditioning on t = a ·∪ {v} ∈ T. In this case, the choices of s₁, s₂ are independent, i.e. the graph we get is a clique with self loops. This graph is a 0-two-sided spectral expander.

5. Assumption (A3): We define the following VASA-distribution:
(a) Choose s ∈ S (i.e. s ∈ X(k) so that col(s) ⊇ I, J).
(b) Set a₁, a₂ ⊂ s so that col(a₁) = I, col(a₂) = J.
(c) Choose some v ∈ s so that col(v) ∉ I, J.
(d) Output either (v, a₁, s, a₂) or (v, a₂, s, a₁) with equal probability.
This distribution is symmetric with respect to a₁, a₂. Furthermore, when we restrict to one of the marginals (v, a₁, s) or (v, s, a₂), this is precisely the distribution in the STAV-structure. Hence this is indeed a VASA-distribution.

6. Assumption (A3)a: Fix some v ∈ V and consider the vASA-graph. In the case of the I, J-STAV structure, this is the bipartite graph where L = X_v[I], R = X_v[J], and we connect a₁, a₂ if they share some s ∈ X_v(k − 1). This is the (I, J)-colored walk in the link of v. By Claim 4.11, this is an O(1/ℓ)-bipartite expander.

7. Assumption (A3)b: Fix some a ∈ A, and assume without loss of generality that its color is I. The graph in this assumption is the VAS_a-graph, the bipartite graph where
L = reach_a, R = { (a₂, s) | ∃ v ∈ L, (v, a, s, a₂) ∈ Supp(D) }, E = { (v, (a₂, s)) | (v, a, s, a₂) ∈ Supp(D) }.
The probability of choosing an edge (v, (a₂, s)) is given by P_D[ (v, a₁, s, a₂) | a₁ = a ]. In this case our graph is the graph where L = { v ∈ X_a(0) | col(v) ∉ J } and R = { (a₂, s) | col(a₂) = J, a₂ ⊂ s }. Notice that s has exactly one subset of color J, hence R is (isomorphic to) the set { s ∈ X_a(d − ℓ) | J ⊆ col(s) }. This is the graph we described in Proposition 4.17, hence by that proposition it is an O(√(1/(d − ℓ))) = O(√(1/ℓ))-bipartite expander. □
This proof is similar to the proof of Proposition 4.13. We build a 3-partite complex in which the bipartite graph in question is a walk between two of its sides, and use Lemma 4.14. Consider the following 2-dimensional 3-partite simplicial complex:
– The parts of the complex are Y[0] = X[J], Y[1] = { v ∈ X(0) | col(v) ∉ J }, Y[2] = { s ∈ X(k) | J ⊆ col(s) }.
– We connect (a, v, s) ∈ Y(2) if a ·∪ {v} ⊂ s. The probability of choosing some triangle (a, v, s) is the probability of choosing a ∈ X[J], and then choosing v ⊂ s \ a from the link of a: P_{X[J]}[a] · P_X[s | a] · P_X[v | a, s].
We notice the following:
1. A_{1,2} is the bipartite operator of the bipartite walk between L, R in the graph we defined in the proposition.
2. A_{0,1} is the convex combination of the bipartite operators of the colored walks M_{J,i} for all i ∉ J. From Claim 4.11, λ(A_{0,1}) ⩽ kγ.
3. For every a ∈ Y[0], the bipartite operator of the link of a is the containment walk between X_a(0) and X_a(k − ℓ). Hence this walk is also an O(√(1/(k − ℓ))) expander.
Hence we can apply Lemma 4.14 and conclude that λ(A_{1,2}) ⩽ O(√(1/(k − ℓ))) + kγ. □

Agreement on Neighborhoods

In this section we consider a number of new set systems. The sets in these set systems consist of the neighbors of a given vertex (or higher dimensional face). This resembles the set system underlying the gap-amplification based proof of the PCP theorem [Din07], in which an agreement theorem underlies the soundness proof somewhat implicitly. Given a simplicial complex X, for a vertex z ∈ X(0) we denote by Ball_z the set of vertices adjacent to z (recall that even if X has high dimensional faces, it must also have edges). More generally, for a face z ∈ X(k) we let Ball_z = { v ∈ X(0) \ z | v ∪ z ∈ X } (that is, Ball_z is just the set of vertices in the link of z). Our next agreement testing theorem is for the family S = { Ball_z | z ∈ X(k) }, whose ground set is V = X(0).
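For concreteness, Ball_z is straightforward to compute from a list of facets. The following sketch uses a toy 2-dimensional complex of our own choosing (the helper names faces and ball are ours, not from the text), and illustrates both cases of the definition, a single vertex z and a higher dimensional face z:

```python
from itertools import combinations

def faces(facets):
    """Downward closure: all nonempty faces of a complex given by its facets."""
    out = set()
    for f in facets:
        for r in range(1, len(f) + 1):
            out.update(combinations(sorted(f), r))
    return out

def ball(facets, z):
    """Ball_z: vertices v outside the face z such that z together with v is a face."""
    all_faces = faces(facets)
    vertices = {f[0] for f in all_faces if len(f) == 1}
    zset = set(z)
    return {v for v in vertices - zset
            if tuple(sorted(zset | {v})) in all_faces}

# A toy 2-dimensional complex given by its triangles.
facets = [(1, 2, 3), (2, 3, 4), (4, 5, 6)]
print(ball(facets, (2,)))    # neighbors of the vertex 2
print(ball(facets, (2, 3)))  # vertices in the link of the edge {2, 3}
```

On this complex, Ball_{2} = {1, 3, 4} while Ball_{{2,3}} = {1, 4}: conditioning on a larger face shrinks the neighborhood, exactly as in the definition above.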
In this section we abuse notation and refer to f_{Ball_z} by f_z. We describe a couple of test distributions on such an ensemble:

Definition 5.1 (Neighborhood independent agreement distribution). Let X be a d-dimensional simplicial complex, and let ℓ, k be non-negative integers so that ℓ + k + 1 ⩽ d. We define the distribution NID_{ℓ,k} by the following random process:
1. Sample t ∈ X(ℓ).
2. Sample z₁, z₂ ∈ X_t(k) independently.

Definition 5.2 (Neighborhood complement agreement distribution). Let X be a d-dimensional simplicial complex, and let ℓ, k be non-negative integers so that ℓ + 2k + 2 ⩽ d. We define the distribution NCD_{ℓ,k} by the following random process:
1. Sample t ∈ X(ℓ).
2. Sample z₁, z₂ ∈ X_t(k) by the k, k-complement walk in X_t.
Note that z₁, z₂ are distributed as in the k, k-complement walk distribution.

Whereas usually an agreement test selects two subsets s₁, s₂ and checks whether f_{s₁} agrees with f_{s₂} on their entire intersection, it sometimes makes sense to choose a random t ⊂ s₁ ∩ s₂ and check that f_{s₁} and f_{s₂} agree only on t. For this section we call such tests weak, and define two agreement tests of this form:
1. In the weak independent agreement test we sample (t, z₁, z₂) ∼ NID_{ℓ,k} and accept if f_{z₁}↾t = f_{z₂}↾t.
2. In the weak complement agreement test we sample (t, z₁, z₂) ∼ NCD_{ℓ,k} and accept if f_{z₁}↾t = f_{z₂}↾t.

If our simplicial complex is a two-sided high dimensional expander, then we can show that these agreement tests have some soundness, even in their weak variant:

Theorem 5.3 (Agreement on neighborhoods). There exists a constant c > 0 such that for all non-negative integers ℓ, k, d with 4 ⩽ ℓ ⩽ d − 1 and ℓ + 2k + 2 ⩽ d, the following holds. Let X be a d-dimensional 1/(ℓ²(k + ℓ))-two-sided high dimensional expander. Then the ℓ, k-weak independent agreement test and the ℓ, k-weak complement agreement test are both 1/ℓ-approximately c-sound.
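As an illustration of the weak independent test of Definition 5.1, the following sketch samples the distribution NID_{ℓ,k} on the complete complex on n vertices, which is our own toy stand-in for a high dimensional expander (the helper names sample_nid and run_test are ours). An ensemble obtained by restricting a single global function G is perfectly consistent, so the weak test accepts with probability 1:

```python
import random

def sample_nid(n, l, k, rng):
    """Sample (t, z1, z2) ~ NID_{l,k} on the complete complex on n vertices."""
    t = rng.sample(range(n), l + 1)          # t in X(l)
    rest = [v for v in range(n) if v not in t]
    z1 = rng.sample(rest, k + 1)             # z1 in X_t(k)
    z2 = rng.sample(rest, k + 1)             # z2 in X_t(k), independent of z1
    return t, z1, z2

def run_test(n, l, k, local_fn, trials, seed=0):
    """Acceptance rate of the weak test: f_{z1} and f_{z2} must agree on all of t."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        t, z1, z2 = sample_nid(n, l, k, rng)
        f1, f2 = local_fn(z1), local_fn(z2)  # f_z is defined on Ball_z
        ok += all(f1[v] == f2[v] for v in t)
    return ok / trials

# A perfect ensemble: every f_z is the restriction of a single global G
# (in the complete complex, Ball_z is everything outside z).
G = {v: v % 3 for v in range(20)}
perfect = lambda z: {v: G[v] for v in range(20) if v not in z}
rate = run_test(20, 2, 1, perfect, trials=500)
print(rate)  # 1.0
```

Theorem 5.3 is the converse direction: if the acceptance rate is 1 − ε on a sufficiently good expander, the ensemble must be O(ε)-close to such a perfect ensemble.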
Clearly if f_{z₁}↾I = f_{z₂}↾I for I = Ball_{z₁} ∩ Ball_{z₂}, then f_{z₁}↾t = f_{z₂}↾t since t ⊂ I. Therefore, the theorem also holds if we perform the stronger agreement test that checks agreement on the entire intersection. The current statement is stronger because it begins from a weaker assumption. However, it could be that if we perform the stronger test, we can reach an even stronger conclusion in terms of the closeness of the ensemble to a perfect ensemble. This is an interesting direction for further study.

Proof of Theorem 5.3.
As in the proof of Theorem 4.1, we have an ensemble of functions f with rej(f) = ε under either the independent agreement distribution or the complement agreement distribution. We need to find a global function G so that

P_s [ f_s ≠_{1/ℓ} G↾s ] = O(ε).

We do so using Theorem 2.26. For both distributions our STAV-structure is the following:
1. S = { Ball_z | z ∈ X(k) }.
2. T = X(ℓ).
3. A = X(ℓ − 1).
4. V = X(0).
As noted before, whenever we choose z, we always mean that we choose Ball_z ∈ S. The STAV-structure's distribution is (z, t, a, v) where z ∈ X(k), t ∈ X_z(ℓ), and t = a ·∪ {v} for a partition chosen uniformly at random. Note that (z, t) are chosen as the marginal of both the weak independent agreement test and the weak complement agreement test.

Given any fixed t, the independent agreement distribution samples z₁, z₂ ∈ X_t(k) independently. The complement agreement distribution does not sample z₁, z₂ independently, but according to an expanding random walk. In Claim B.2 we prove that in this case rej_{NID_{ℓ,k}}(f) = Θ(rej_{NCD_{ℓ,k}}(f)). Thus it is enough to prove the theorem for the independent agreement distribution.

By Claim 4.12 we know that ξ(f) = O(1/ℓ). If we show that this STAV-structure is O(1/ℓ)-good, we can directly obtain the theorem by invoking Theorem 2.26. We check that this STAV-structure fulfils the assumptions:

1. Assumption (A1): The graph between A and V whose edges are (a, v) such that a ·∪ {v} ∈ X(ℓ) is the 0, ℓ−1-complement walk. This graph is an O(1/(ℓ(ℓ + k))) = O(1/ℓ)-bipartite expander, by Claim 4.9.

2. Assumption (A2)a: The STS_a-graph here is the graph where we choose v ∈ X_a(0) and then independently choose two faces z₁ and z₂ adjacent to v, and output z₁, z₂. This is just taking two steps in the 0, k-complement walk in X_a; thus, by Claim 4.9, this is an O(1/ℓ)-two-sided spectral expander. As ℓ ⩾ 4, this is in particular a ½-edge expander.

3. Assumption (A2)b: The STS_{a,v}-graph here is the graph obtained by choosing two k-faces in the link of a ·∪ {v} independently. As in the previous items in this section, this is a 0-two-sided spectral expander.

4. Assumption (A3): Consider the following VASA-distribution:
(a) Choose z ∈ X(k) and v ∈ X_z(0).
(b) Choose a₁, a₂ by the complement walk in the link of z ·∪ {v}.
(c) Output either (v, a₁, z, a₂) or (v, a₂, z, a₁) with probability ½.
This is symmetric in a₁, a₂. It is easy to verify that the marginals (v, a₁, z) and (v, z, a₂) are the same as choosing according to the STAV-distribution.

5. Assumption (A3)a: For each v ∈ X(0), the vASA-graph here is just the ℓ−1, ℓ−1-complement walk in X_v. Hence, by Claim 4.9, this is an O(1/ℓ)-two-sided spectral expander.

6. Assumption (A3)b: Finally, given a, the VAS_a-graph is the graph where
L = X_a(0), R = { (z, a₂) | a ·∪ z ·∪ a₂ ∈ X }.
We connect (v, (z, a₂)) if {v} ·∪ a ·∪ z ·∪ a₂ ∈ X. We can decompose this graph into two independent steps in two bipartite graphs. Denote M = X_a(ℓ), and consider the complement walk between L and M, together with the graph between M and R in which every t is connected to the pairs (z, a₂) such that t = {z} ·∪ a₂. It is easy to see that a step from L to R is two independent steps, one between L and M and then one between M and R. By Claim 4.9, the step between L and M is an O(√(1/ℓ))-expander, and thus the VAS_a-graph is an O(√(1/ℓ))-expander.

7. Assumption (A4(r)): We show that for every z, the AV_z-graph is a 1/ℓ-sampler graph. In this case the AV_z-graph is a bipartite graph where L = X_z(ℓ − 1), R = X_z(0), and the edges are (a, v) such that a ·∪ {v} ∈ X_z, i.e. the 0, ℓ−1-complement walk in X_z. We need to show that this graph is a 1/ℓ-sampler, namely that if C ⊂ R has P[C] ⩾ 1/ℓ, then the set

T = { a ∈ L | P_{v ∈ R}[ v ∈ C | v ∼ a ] ⩾ P[C]/2 }

has probability P[T] ⩾ 1/2. Indeed, the complement set L \ T is contained in the set of all a ∈ L so that |P[v ∈ C | v ∼ a] − P[C]| ⩾ P[C]/2. This walk is a λ-bipartite expander for λ = 1/(ℓ(k + ℓ)). By the sampler lemma, Lemma A.9,

1 − P[T] = P[L \ T] ⩽ 4λ²/P[C] ⩽ 4ℓλ² ⩽ 1/ℓ.

The statement follows for ℓ ⩾ 2. □

The Grassmann Poset
Finally, the fourth agreement testing theorem, Theorem 6.3, gives new agreement tests on the Grassmann poset. Such agreement tests are well studied in the PCP literature, but other than [IKW12] we are not aware of works that study the general question outside the context of Reed–Muller codes. This part can be viewed as extending [IKW12] to a broader parameter regime (our focus here is on the 99% soundness regime whereas in [IKW12] the focus was on 1% soundness).

Let F be the finite field of size q. The Affine Grassmann Poset X = Gr_aff(F^n, d) is the set of all affine subspaces of dimension ⩽ d. We order the subspaces by containment, and denote by X(k) all subspaces of dimension k. Similarly, we can restrict ourselves to linear subspaces. We denote by Y = Gr_lin(F^n, d) the set of all linear subspaces s ⊂ F^n of dimension ⩽ d + 1. We order the subspaces by containment, and the convention here is to denote by Y(k) all subspaces of dimension exactly k + 1.

Definition 6.1 (The Grassmann d, ℓ-distribution). Let ℓ < d. We define the distribution AGD_{d,ℓ} on the Affine Grassmann Poset and a distribution LGD_{d,ℓ} on the Linear Grassmann Poset, by the following random process:
1. Sample t ∈ X(ℓ) (respectively in Y(ℓ)).
2. Sample s₁, s₂ ∈ X(d) (respectively in Y(d)) given that t ⊂ s₁, s₂.

The ground set in the Affine Grassmann agreement test is V_aff = X(0), the set of points in F^n. Our sets S_aff = X(d) are the d-dimensional affine subspaces. In the Linear Grassmann agreement test, our ground set V_lin = Y(0) is the set of one-dimensional subspaces. Our sets are S_lin = { [s] = { v ⊂ s } | s ∈ Y(d) }. Namely, for each (d + 1)-dimensional subspace s ∈ Y(d), the set [s] ∈ S_lin is the collection of all the one-dimensional subspaces that are contained in s.

We are ready to state our main theorems for Grassmann posets:

Theorem 6.2 (Agreement on the Affine Grassmann Poset). There exists a constant c > 0 such that for every prime power q, every r, δ > 0, and integers ℓ, d, n with ℓ + 1 < d ⩽ n, the following holds. The d, ℓ-Grassmann agreement test on X = Gr_aff(F^n, d) is q^{−ℓ}rδ-approximately c(1 + 1/r)-sound for δ-ensembles.

Theorem 6.3 (Agreement on the Linear Grassmann Poset). There exists a constant c > 0 such that for every prime power q, every r, δ > 0, and integers ℓ, d, n with ℓ + 1 < d ⩽ n, the following holds. The d, ℓ-Grassmann agreement test on Y = Gr_lin(F^n, d) is q^{−ℓ+1}rδ-approximately c(1 + 1/r)-sound for δ-ensembles.

For the proofs of these theorems, we use the spectral gaps of the containment walk and of the complement walk in the Grassmann poset. In particular:
Claim 6.4.
1. The following 0, k-containment walk is a √(1/q^k)-bipartite expander in the Affine Grassmann Poset, for any k ⩽ d: L = X(0), R = X(k), and (v, a) ∈ E if v ⊂ a.
2. The following 0, k-containment walk is a √(1/q^k)-bipartite expander in the Linear Grassmann Poset, for any k ⩽ d: L = Y(0), R = Y(k), and (v, a) ∈ E if v ⊂ a.

We can define the complement walk for the Grassmann posets as well. In the Affine Grassmann complement walk, we traverse from w₁ to w₂ if dim(span(w₁, w₂)) = dim(w₁) + dim(w₂) + 1. Here span(w₁, w₂) is the smallest affine space that contains w₁ ∪ w₂. In the Linear Grassmann complement walk, we traverse from w₁ to w₂ if dim(span(w₁, w₂)) = dim(w₁) + dim(w₂); equivalently, if the intersection of w₁ and w₂ is trivial. It will be useful to examine these walks when we also condition on being independent with respect to a fixed subspace u.

Definition 6.5 (Conditioned Complement Walk in the Affine Grassmann Poset). Let X = Gr_aff(F^n, d), and let ℓ₁, ℓ₂, ℓ₃ ⩽ d be such that ℓ₁ + ℓ₂ + ℓ₃ + 2 ⩽ n. Fix some u ∈ X(ℓ₃). The u-conditioned ℓ₁, ℓ₂-complement walk in X is the walk where
L = { v ∈ X(ℓ₁) | dim(span(u, v)) = ℓ₁ + ℓ₃ + 1 },
R = { w ∈ X(ℓ₂) | dim(span(u, w)) = ℓ₂ + ℓ₃ + 1 },
E = { (v, w) | v ∈ X(ℓ₁), w ∈ X(ℓ₂), dim(span(v, w, u)) = ℓ₁ + ℓ₂ + ℓ₃ + 2 }.
We choose an edge (v, w) uniformly at random.

Definition 6.6 (Conditioned Complement Walk in the Linear Grassmann Poset). Let Y = Gr_lin(F^n, d), and let ℓ₁, ℓ₂, ℓ₃ ⩽ d be such that ℓ₁ + ℓ₂ + ℓ₃ + 3 ⩽ n. Fix some u ∈ Y(ℓ₃). The u-conditioned ℓ₁, ℓ₂-complement walk in Y is the walk where
L = { v ∈ Y(ℓ₁) | u ∩ v = {0} },
R = { w ∈ Y(ℓ₂) | u ∩ w = {0} },
E = { (v, w) | v ∈ Y(ℓ₁), w ∈ Y(ℓ₂), v ⊕ w ⊕ u ∈ Y(ℓ₁ + ℓ₂ + ℓ₃ + 2) }.
Here ⊕ denotes direct sum. Requiring that the sum is direct is equivalent to requiring the dimension of the sum to be the sum of the dimensions of v, w and u. We choose an edge (v, w) uniformly at random.

Claim 6.7.
1. Let X = Gr_aff(F^n, d) be an Affine Grassmann Poset. Let ℓ₁, ℓ₂, ℓ₃ ⩽ d be such that ℓ₁ + ℓ₂ + ℓ₃ + 2 ⩽ n, and fix some u ∈ X(ℓ₃). Then the u-conditioned ℓ₁, ℓ₂-complement walk in the Grassmann Poset is a q^{−(n − ℓ₁ − ℓ₂ − ℓ₃ − 2)}-bipartite expander.
2. Let Y = Gr_lin(F^n, d) be a Linear Grassmann Poset. Let ℓ₁, ℓ₂, ℓ₃ ⩽ d be such that ℓ₁ + ℓ₂ + ℓ₃ + 3 ⩽ n, and fix some u ∈ Y(ℓ₃). Then the u-conditioned ℓ₁, ℓ₂-complement walk in the Grassmann Poset is a q^{−(n − ℓ₁ − ℓ₂ − ℓ₃ − 3)}-bipartite expander.

We prove this claim in Section 7.3.

Proof of Theorem 6.2.
As in the proof of Theorem 4.1, we have an ensemble of functions f with rej_{AGD_{d,ℓ}}(f) = ε. We need to find a global function G so that

P_s [ f_s ≠_{rδq^{−ℓ}} G↾s ] = O((1 + 1/r)ε).

We do so using Theorem 2.26. Consider the following STAV-structure:
S = S_aff = X(d), T = X(ℓ), A = X(ℓ − 1), V = X(0).
The distribution is given by choosing:
1. s ∈ X(d) uniformly at random.
2. t ∈ X(ℓ) given that t ⊂ s.
3. A pair (a, v) given that span(a, v) = t.
By Lemma 2.23, and the fact that our T-lower graph is the containment graph in the Grassmann, which is an O(√(q^{−ℓ}))-bipartite expander, any (ℓ, δ)-distance ensemble has ξ(f) = O(δq^{−ℓ}).

Next, we show that the STAV-structure defined above is O(γ)-good, for γ = q^{−ℓ}. Namely, that it fulfils the assumptions in Definition 2.15:

1. Assumption (A1): The bipartite graph whose edges are the pairs (a, v) in the Affine Grassmann Poset is the complement walk graph in the Affine Grassmann poset. By Claim 6.7, this graph is an O(q^{−(n−ℓ−1)}) = O(q^{−ℓ})-bipartite expander.

2. Assumption (A2)a: Note that for any a ∈ X, the collection of subspaces that contain a is isomorphic to the Grassmann Poset of the quotient F^n/a, where (after translation) a is a linear subspace of dimension ℓ. Thus, the STS_a-graph is the two-step version of the 0, d−ℓ-containment walk, hence an O(q^{−(d−ℓ)})-two-sided spectral expander. This is in particular a ½-edge expander.

3. Assumption (A2)b: As in the simplicial complex case, once we condition on (a, v), there is only one space t = span(a, v) ∈ T that contains both a and v. Thus the graph in the assumption is a clique with self loops, and in particular has q^{−ℓ}-spectral expansion.

4. Assumption (A3): Consider the following VASA-distribution:
(a) Choose s ∈ S.
(b) Choose a₁, a₂ ⊂ s so that dim(span(a₁, a₂)) = 2ℓ − 1.
(c) Choose v ∈ s so that dim(span(v, a₁, a₂)) = 2ℓ.
(d) Output (v, a₁, s, a₂) or (v, a₂, s, a₁) with probability ½.
This distribution is symmetric in a₁, a₂, and its marginal is exactly the choice of (v, a, s) in the STAV-structure above.

5. Assumption (A3)a: Fix some v ∈ V. The vASA-graph in the Affine Grassmann Poset is the graph whose double cover is the v-conditioned ℓ−1, ℓ−1-complement walk. By Claim 6.7 it is an O(q^{−(n−2ℓ)}) = O(q^{−ℓ})-two-sided spectral expander.

6. Assumption (A3)b: Fix a ∈ A. The VAS_a-graph is the following graph:
L = { v ∈ V | v ∉ a },
R = { (a₂, s) | a, a₂ ⊂ s and dim(span(a, a₂)) = 2ℓ − 1 },
E = { (v, (a₂, s)) | {v}, a, a₂ ⊂ s and dim(span(v, a, a₂)) = 2ℓ }.
The probabilities of the edges are uniform. We prove below that this graph is an O(√(q^{−ℓ}))-bipartite expander using Lemma 4.14. Consider the following 2-dimensional 3-partite simplicial complex:
– The parts of the complex are
Y[0] = { a₂ ∈ A | dim(span(a, a₂)) = 2ℓ − 1 }, Y[1] = { v ∈ V | v ∉ a }, Y[2] = { (a₂, s) | a, a₂ ⊂ s and dim(span(a, a₂)) = 2ℓ − 1 }.
– We connect (a₂, v, (a₂′, s)) ∈ Y(2) if a₂ = a₂′, {v}, a, a₂ ⊂ s and dim(span(v, a, a₂)) = 2ℓ. The probability of choosing some triangle (a₂, v, (a₂, s)) is uniform, but we view it as follows: choose s given that a ⊂ s, and then choose a₂, v given that {v}, a, a₂ ⊂ s and dim(span(v, a, a₂)) = 2ℓ.
Denote by A_{i,j} the bipartite walks between Y[i] and Y[j]. We notice the following:
(a) A_{1,2} is the bipartite operator of the bipartite walk between L and R in the VAS_a-graph.
(b) A_{0,1} is the a-conditioned 0, ℓ−1-complement walk in the Grassmann. It is an O(q^{−(n−2ℓ)}) = O(q^{−ℓ})-bipartite expander.
(c) Assume without loss of generality that a is linear. For every a₂ ∈ Y[0], the link of a₂ is the following bipartite graph:
L = { v ∈ V | dim(span(v, a, a₂)) = 2ℓ }, R ≅ { s ∈ S | a, a₂ ⊂ s }.
This walk is similar to the 0, d−ℓ-containment walk in Gr_lin(F^n/span(a, a₂), d) (but instead of a single line [v] we have a set of points v₁, ..., v_j whose projections to F^n/span(a, a₂) go into the one-dimensional space [v]). Hence this walk is an O(√(q^{−(d−ℓ−1)})) = O(√(q^{−ℓ}))-bipartite expander.
Hence we can apply Lemma 4.14 and conclude that
λ(A_{1,2}) ⩽ O(√(q^{−ℓ})) + O(√(q^{−ℓ})) = O(√(q^{−ℓ})).

7. Assumption (A4): In the Grassmann, reach_a consists of all the points v ∉ a. For d ⩾ ℓ + 1, the probability of choosing a point that is not contained in a is ≈ 1 − 1/q > ½.

Thus by Theorem 2.26, we are promised a function G : X(0) → Σ so that

P [ f_s ≠_{rq^{−ℓ}δ} G↾s ] = O((1 + 1/r)ε). □

The proof in the linear case is very similar.

Proof of Theorem 6.3.
As in the proof of Theorem 4.1, we have an ensemble of functions f with rej_{LGD_{d,ℓ}}(f) = ε. We need to find a global function G so that

P_s [ f_{[s]} ≠_{rδq^{−ℓ+1}} G↾[s] ] = O((1 + 1/r)ε).

We do so using Theorem 2.26. Consider the following STAV-structure:
S = S_lin ≅ Y(d), T = Y(ℓ), A = Y(ℓ − 1), V = Y(0),
where we abuse notation and identify [s] and f_{[s]} with s ∈ Y(d) and f_s. The distribution is given by choosing:
1. s ∈ Y(d) uniformly at random.
2. t ∈ Y(ℓ) given that t ⊂ s.
3. A pair (a, v) given that a ⊕ v = t (here ⊕ denotes direct sum).
By Lemma 2.23, and the fact that our T-lower graph is the containment graph in the Grassmann, which is an O(√(q^{−ℓ+1}))-bipartite expander, any (ℓ, δ)-distance ensemble has ξ(f) = O(δq^{−ℓ+1}).

Next, we show that the STAV-structure defined above is O(γ)-good, for γ = q^{−(ℓ−1)}. Namely, that it fulfils the assumptions in Definition 2.15:

1. Assumption (A1): The bipartite graph whose edges are the pairs (a, v) in the Grassmann Poset is the complement walk graph in the Grassmann poset. By Claim 6.7, this graph is an O(q^{−(n−ℓ−2)}) = O(q^{−(ℓ−1)})-bipartite expander.

2. Assumption (A2)a: Consider the Grassmann Poset of the quotient Y′ = Gr_lin(F^n/a, d − ℓ + 1). The STS_a-graph for the Grassmann is (isomorphic to) the graph whose vertices are Y′(d − ℓ + 1), where two subspaces s₁, s₂ share an edge if they intersect on a 1-dimensional subspace. This graph is the two-step version of the 0, d−ℓ+1-containment walk, hence an O(q^{−(d−ℓ)})-two-sided spectral expander. In particular it is a ½-edge expander.

3. Assumption (A2)b: As in the simplicial complex case, once we condition on (a, v), there is only one space t = a ⊕ v ∈ T that contains both a and v. Thus the graph in the assumption is a clique with self loops, and in particular has q^{−(ℓ−1)}-spectral expansion.

4. Assumption (A3): Consider the following VASA-distribution:
(a) Choose s ∈ S.
(b) Choose a₁, a₂ ⊂ s so that a₁ ⊕ a₂ ⊂ s.
(c) Choose v ⊂ s so that v ∩ (a₁ ⊕ a₂) = {0}.
(d) Output (v, a₁, s, a₂) or (v, a₂, s, a₁) with probability ½.
This distribution is symmetric in a₁, a₂, and its marginal is exactly the choice of (v, a, s) in the STAV-structure above.

5. Assumption (A3)a: Fix some v ∈ V. The vASA-graph in the Linear Grassmann Poset is the graph whose double cover is the v-conditioned ℓ−1, ℓ−1-complement walk. By Claim 6.7 it is an O(q^{−(n−2ℓ−1)}) = O(q^{−(ℓ−1)})-two-sided expander.

6. Assumption (A3)b: Fix a ∈ A. The VAS_a-graph is the following graph:
L = { v ∈ V | a ∩ v = {0} },
R = { (a₂, s) | a ⊕ a₂ ⊂ s },
E = { (v, (a₂, s)) | v ⊕ a ⊕ a₂ ⊂ s }.
The probabilities of the edges are uniform. We prove below that this graph is an O(√(q^{−(ℓ−1)}))-bipartite expander using Lemma 4.14. Consider the following 2-dimensional 3-partite simplicial complex:
– The parts of the complex are
Y[0] = { a₂ ∈ A | a ∩ a₂ = {0} }, Y[1] = { v ∈ V | a ∩ v = {0} }, Y[2] = { (a₂, s) | a ⊕ a₂ ⊂ s }.
– We connect (a₂, v, (a₂′, s)) ∈ Y(2) if a₂ = a₂′ and v ⊕ a ⊕ a₂ ⊂ s. The probability of choosing some triangle (a₂, v, (a₂, s)) is uniform, but we view it as follows: choose s given that a ⊂ s, and then choose a₂, v given that a ⊕ a₂ ⊕ v ⊂ s.
Denote by A_{i,j} the bipartite walks between Y[i] and Y[j]. We notice the following:
(a) A_{1,2} is the bipartite operator of the bipartite walk between L and R in the VAS_a-graph.
(b) A_{0,1} is the a-conditioned 0, ℓ−1-complement walk in the Grassmann. It is an O(q^{−(n−2ℓ−1)}) = O(q^{−(ℓ−1)})-bipartite expander.
(c) For every a₂ ∈ Y[0], the link of a₂ is the following bipartite graph:
L = { v ∈ V | v ∩ (a ⊕ a₂) = {0} }, R ≅ { s ∈ S | a ⊕ a₂ ⊂ s }.
This walk is similar to the 0, d−ℓ-containment walk in Gr_lin(F^n/(a ⊕ a₂), d) (but instead of a single vertex [v] we have a set of vertices v₁, ..., v_j whose projection to F^n/(a ⊕ a₂) is [v]). Hence this walk is an O(√(q^{−(d−ℓ−1)})) = O(√(q^{−(ℓ−1)}))-bipartite expander.
Hence we can apply Lemma 4.14 and conclude that
λ(A_{1,2}) ⩽ O(√(q^{−(ℓ−1)})) + O(√(q^{−(ℓ−1)})) = O(√(q^{−(ℓ−1)})).

7. Assumption (A4): In the Grassmann, reach_a consists of all the one-dimensional subspaces v that are not contained in a. For d ⩾ ℓ + 1, the probability of choosing a subspace that is not contained in a is ≈ 1 − 1/q > ½.

Thus by Theorem 2.26, we are promised a function G : Y(0) → Σ so that

P [ f_s ≠_{rq^{−ℓ+1}δ} G↾s ] = O((1 + 1/r)ε). □

The Complement Random Walk
This section is dedicated to the so-called complement random walk, described in Definition 4.8, which we repeat now for ease of reading:

Definition (Restatement of Definition 4.8). Let X be a d-dimensional simplicial complex. Let k, ℓ be integers such that k + ℓ + 1 ≤ d. The k, ℓ-complement walk is the bipartite graph (L, R, E) where:
– The vertices are L = X(k), R = X(ℓ).
– The edges are E = {(s, t) | s ∪̇ t ∈ X(k + ℓ + 1)}.
The probability of choosing an edge (s, t) is the probability of choosing s ∪̇ t ∈ X(k + ℓ + 1) and then choosing s ∈ X(k), given that we chose s ∪̇ t.

Theorem 7.1.
1. Let X be a λ-two-sided d-dimensional link expander. Let ℓ1, ℓ2 be integers so that ℓ1 + ℓ2 + 1 ≤ d. Denote by M_{ℓ1,ℓ2} the bipartite operator of the ℓ1, ℓ2-complement walk. Then λ(M_{ℓ1,ℓ2}) ≤ (ℓ1 + 1)(ℓ2 + 1)λ.
2. Let X be a (d+1)-partite λ/((d+1)λ+1)-one-sided link expander, where λ < 1. Let I, J ⊆ [d+1] be two disjoint sets of colors. Denote by M_{I,J} the I, J-colored walk. Then λ(M_{I,J}) ≤ |I||J|λ.

In Section 7.1 we give some additional definitions and preliminaries for this section. In Section 7.2 we prove the expansion of the two-sided complement walk and of the (d+1)-partite colored walk.

7.1 Preliminaries

In a finite measured space we have an inner product on the space of real-valued functions: for any f, g ∈ ℓ2(X(k)),
⟨f, g⟩ = E_{s ∈ X(k)}[f(s)g(s)].
In addition, we define two sets of operators that connect the different levels of functions by averaging.

Definition 7.2 (Up and Down Operators). Define the up operator U_k^{k+1} : ℓ2(X(k)) → ℓ2(X(k+1)) and the down operator D_{k+1}^{k} : ℓ2(X(k+1)) → ℓ2(X(k)) by
U_k^{k+1} f(s) = E_{t ⊂ s; t ∈ X(k)}[f(t)],   D_{k+1}^{k} g(t) = E_{s ⊃ t; s ∈ X(k+1)}[g(s)].
One can show that D_{k+1}^{k} = (U_k^{k+1})*, the adjoint with respect to the inner product above. Recall the (k+1, k)-lower walk defined in Section 4; D_{k+1}^{k} is its bipartite operator.

7.1.1 Localization

Given a function f : X(k) → ℝ, there are two natural operations that give us a function on the link.

Definition 7.3 (Localization). Let ℓ ≤ k be two integers, f ∈ C^k(X) and s ∈ X(ℓ). The localization of f, denoted f_s : X_s(k − ℓ − 1) → ℝ, is defined by f_s(t) = f(s ∪̇ t).

Definition 7.4 (Restriction). Let ℓ, k be two integers such that ℓ + k + 1 ≤ d, f ∈ C^k(X) and s ∈ X(ℓ). The restriction of f, denoted f^s : X_s(k) → ℝ, is defined by f^s(t) = f(t).

First we prove Theorem 7.1. Our main technical tool is Lemma 4.14, which was already stated in Section 4. We restate it here:
Lemma (Restatement of Lemma 4.14). Let Y be a 2-dimensional 3-partite complex, and denote its parts by X(0) = X[0] ∪̇ X[1] ∪̇ X[2]. Suppose that for every v ∈ X[1], X_v is an η-bipartite expander. Denote by A_{0,1}, A_{1,2} and A_{0,2} the bipartite walks between (X[0], X[1]), (X[1], X[2]) and (X[0], X[2]) respectively. Then
λ(A_{0,2}) ≤ η + λ(A_{0,1})λ(A_{1,2}).

Proof of Theorem 7.1, item 1. We begin with the two-sided case, and prove the statement by induction on ℓ1 + ℓ2 = k. The base case is ℓ1 + ℓ2 = 0, i.e. ℓ1 = ℓ2 = 0, in which the complement walk is the underlying graph of X; this graph is a λ-two-sided spectral expander since X is a λ-two-sided link expander.

Assume the statement holds for all ℓ1, ℓ2 such that ℓ1 + ℓ2 ≤ k, and consider the operator of the complement walk graph M_{ℓ1,ℓ2+1} : ℝ^{X(ℓ1)} → ℝ^{X(ℓ2+1)}, for some ℓ1, ℓ2 such that (ℓ2 + 1) + ℓ1 = k + 1. We need to prove that λ(M_{ℓ1,ℓ2+1}) ≤ (ℓ1 + 1)(ℓ2 + 2)λ. Note that it is enough to prove this when the second index is the one increased, since M_{ℓ2+1,ℓ1} is the adjoint of M_{ℓ1,ℓ2+1}. It may be helpful to keep in mind the first non-trivial case, where ℓ2 + 1 = ℓ1 = 1.

Consider the following 2-dimensional 3-partite simplicial complex Y:
– The vertices are Y[0] = X(ℓ1), Y[1] = X(ℓ2), Y[2] = X(ℓ2 + 1).
– We connect (y0, y1, y2) ∈ Y(2) if y1 ⊂ y2 and y0 ∪̇ y2 ∈ X(ℓ1 + ℓ2 + 2). The probability of choosing some (y0, y1, y2) is the probability of choosing the edge y0 ∪̇ y2 and then choosing y1 ⊂ y2. In other words,
P_Y[(y0, y1, y2)] = P_{X(ℓ1+ℓ2+2)}[y0 ∪̇ y2] · P_X[y2 | y0 ∪̇ y2] · P_X[y1 | y2].

We notice the following:
1. A_{0,2} is the bipartite operator of the bipartite walk between X(ℓ1) and X(ℓ2 + 1), which is exactly the operator M_{ℓ1,ℓ2+1} we wish to bound.
2. A_{0,1} is the bipartite operator of the bipartite walk between X(ℓ1) and X(ℓ2), i.e. the ℓ1, ℓ2-complement walk. By induction, λ(A_{0,1}) ≤ (ℓ1 + 1)(ℓ2 + 1)λ.
3. For every s ∈ Y[1], the bipartite operator of the link of s is the complement walk for ℓ1′ = ℓ1, ℓ2′ = 0 in the link X_s. As ℓ1 + 0 < (ℓ2 + 1) + ℓ1, we may use the induction hypothesis to conclude that λ(M_s) ≤ (ℓ1 + 1)λ.

Hence we can apply Lemma 4.14 (with η = (ℓ1 + 1)λ and λ(A_{1,2}) ≤ 1) and conclude that
λ(M_{ℓ1,ℓ2+1}) ≤ (ℓ1 + 1)λ + (ℓ1 + 1)(ℓ2 + 1)λ = (ℓ1 + 1)(ℓ2 + 2)λ. □

Towards proving the second item in Theorem 7.1, we need the following lemma, which we shall prove in Section 7.2.2:
Lemma 7.5. Let X be a (d+1)-partite simplicial complex, and suppose that for all v ∈ X(0), the underlying graph of X_v is a λ-one-sided d-partite expander, for λ < 1. Suppose further that the underlying graph of X is connected. Then for every pair of colors {i}, {j} ⊆ [d+1], the bipartite graph between X[i] and X[j] is a λ/(1−λ)-bipartite expander.

The links of s ∈ X(d−2) in a d-partite simplicial complex are bipartite graphs. Thus by iterating this lemma we get the following corollary:

Corollary 7.6. Let λ < 1. Let X be a simplicial complex such that every link of X is connected, and such that for every s ∈ X(d−2), X_s is a λ/((d−1)λ+1)-bipartite expander. Then for every two colors {i}, {j}, and every s ∈ X with i, j ∉ col(s), the graph M^s_{{i},{j}} between the two colors is a λ-bipartite expander.

Proof of Theorem 7.1, item 2. The proof of the colored version is similar to the two-sided case, and is done by induction on k := |I1| + |I2|. The base case is |I1| + |I2| = 2, i.e. |I1| = |I2| = 1, which holds by Corollary 7.6. Take some disjoint color sets I1, I2 such that |I1| + |I2| = k + 1, and suppose without loss of generality that I2 = J ∪̇ {i}, where J is non-empty.

Consider the following 2-dimensional 3-partite simplicial complex Y:
– The vertices are Y[0] = X[I1], Y[1] = X[J], Y[2] = X[I2].
– We connect (y0, y1, y2) ∈ Y(2) if y1 ⊂ y2 and y0 ∪̇ y2 ∈ X[I1 ∪̇ I2]. The probability of choosing some (y0, y1, y2) is the probability of choosing the edge y0 ∪̇ y2 and then choosing y1 ⊂ y2. In other words,
P_Y[(y0, y1, y2)] = P_X[y0 ∪̇ y2] · P_X[y1 | y2].

We notice the following:
1. A_{0,2} is the bipartite operator of the bipartite walk between X[I1] and X[I2], which is exactly the operator M_{I1,I2}.
2. A_{0,1} is the bipartite operator of the bipartite walk between X[J] and X[I1]. By induction, λ(A_{0,1}) ≤ |J||I1|λ.
3. For every s ∈ Y[1], the bipartite operator of the link of s is the colored walk for {i}, I1 in the link X_s. As |{i}| + |I1| < |I1| + |I2|, we may use the induction hypothesis to conclude that λ(M_s) ≤ |I1|λ.

Hence we can apply Lemma 4.14 (with η = |I1|λ and λ(A_{1,2}) ≤ 1) and conclude that
λ(M_{I1,I2}) ≤ |I1|λ + λ|J||I1| = |I1||I2|λ. □

7.2.1 Proof of Lemma 4.14

Proof of Lemma 4.14.
Consider two functions f : X[0] → ℝ and g : X[2] → ℝ such that f, g are orthogonal to the space of constant functions, and such that ‖f‖ = ‖g‖ = 1. We need to prove that
⟨A_{0,2} f, g⟩ ≤ η + λ(A_{0,1})λ(A_{1,2}).
The following claim allows us to calculate the inner product in a simplicial complex locally.

Claim 7.7. Let X be a (d+1)-partite complex and let I1, I2, I3 be disjoint color sets. Then for any f : X[I1] → ℝ and g : X[I2] → ℝ,
⟨M_{I1,I2} f, g⟩ = E_{r ∈ X[I3]}[⟨M^r_{I1,I2} f_r, g_r⟩],
where M^r_{I1,I2} is the bipartite operator for I1, I2 in the link of r.

For every v ∈ X[1], denote its bipartite operator by A_v. By Claim 7.7,
⟨A_{0,2} f, g⟩ = E_{v ∈ X[1]}[⟨A_v f_v, g_v⟩].
We decompose f_v = f_v^0 + f_v^⊥, where f_v^0 is constant and f_v^⊥ is orthogonal to it, and similarly g_v = g_v^0 + g_v^⊥. Note that A_v f_v^0 is also constant and A_v f_v^⊥ is also orthogonal to the constant part, because A_v is an averaging operator. Thus

E_{v ∈ X[1]}[⟨A_v f_v, g_v⟩] = E_{v ∈ X[1]}[⟨A_v f_v^0, g_v^0⟩] + E_{v ∈ X[1]}[⟨A_v f_v^⊥, g_v^⊥⟩].   (7.1)

We bound each part of the right-hand side of (7.1) separately.
– From Cauchy-Schwarz,
E_{v ∈ X[1]}[⟨A_v f_v^⊥, g_v^⊥⟩] ≤ E_{v ∈ X[1]}[λ(A_v)·‖f_v^⊥‖·‖g_v^⊥‖].
By the assumption, λ(A_v) ≤ η for every v ∈ X[1]; thus
E_{v ∈ X[1]}[λ(A_v)·‖f_v^⊥‖·‖g_v^⊥‖] ≤ η E_{v ∈ X[1]}[‖f_v^⊥‖·‖g_v^⊥‖] ≤ η E_{v ∈ X[1]}[(‖f_v^⊥‖² + ‖g_v^⊥‖²)/2] ≤ η,
where the second inequality is obtained by replacing the geometric mean with the arithmetic mean.
– Next we bound E_{v ∈ X[1]}[⟨A_v f_v^0, g_v^0⟩]. Notice that
f_v^0 ≡ E_{u ∈ X_v[0]}[f_v(u)] = A_{0,1}f(v),   g_v^0 ≡ E_{u ∈ X_v[2]}[g_v(u)] = A_{2,1}g(v).
Hence
E_{v ∈ X[1]}[⟨A_v f_v^0, g_v^0⟩] = E_{v ∈ X[1]}[A_{0,1}f(v)·A_{2,1}g(v)] = ⟨A_{0,1}f, A_{2,1}g⟩.
From Cauchy-Schwarz,
⟨A_{0,1}f, A_{2,1}g⟩ ≤ λ(A_{0,1})λ(A_{1,2})·‖f‖·‖g‖ = λ(A_{0,1})λ(A_{1,2}).
Summing up the two terms, we get that the operator is bounded by η + λ(A_{0,1})λ(A_{1,2}). □

Proof of Claim 7.7.
⟨M_{I1,I2} f, g⟩ = E_{t ∈ X[I2]}[g(t)·E_{s ∈ X_t[I1]}[f(s)]] = E_{s ∪̇ t ∈ X[I1 ∪̇ I2]}[f(s)g(t)],
where the last expectation is over choosing two faces according to the random walk defined by M_{I1,I2}. We condition on choosing some r ∈ X[I3]:
= E_{r ∈ X[I3]}[E_{s ∪̇ t ∈ X_r[I1 ∪̇ I2]}[f(s)g(t)]] = E_{r ∈ X[I3]}[E_{s ∪̇ t ∈ X_r[I1 ∪̇ I2]}[f_r(s)g_r(t)]].
Following the previous steps in every link, we conclude that this equals
E_{r ∈ X[I3]}[⟨M^r_{I1,I2} f_r, g_r⟩]. □

7.2.2 Proof of Lemma 7.5

We now turn to proving Lemma 7.5, since its corollary, Corollary 7.6, is the base case for proving Theorem 7.1, item 2. This lemma is an adaptation of a theorem in [Opp18a], where the author proved the following:
Theorem 7.8 (Theorem 5.2 in [Opp18a]). Let X be a simplicial complex of dimension at least 1 whose underlying graph is connected, and suppose that for every vertex v ∈ X(0), the underlying graph of the link X_v is connected and is a λ-one-sided spectral expander, for some λ < 1. Then the underlying graph of X is a λ/(1−λ)-one-sided spectral expander.

We also need the following generalization of Claim 7.7:

Claim 7.9. Let X be any (d+1)-partite complex, let I1, I2 be disjoint color sets, and let I3 ⊊ I2. Let f ∈ ℝ^{X[I1]} and g ∈ ℝ^{X[I2]}. Then
⟨M_{I1,I2} f, g⟩ = E_{r ∈ X[I3]}[⟨M^r_{I1, I2\I3} f_r, g_r⟩],
where M^r_{I1, I2\I3} is the colored complement walk in the link of r. The proof is similar to the proof of Claim 7.7 and is therefore omitted.

Proof of Lemma 7.5. Fix two colors i, j such that λ({i},{j}) is maximal, and fix some k ≠ i, j. Take two functions f : X[i] → ℝ and g : X[j] → ℝ such that
E[f] = E[g] = 0,  ‖f‖ = ‖g‖ = 1,  ‖M_{{i},{j}}‖ = ⟨M_{{i},{j}} f, g⟩.
For every v ∈ X[k] we decompose f_v, g_v into their constant part and the part perpendicular to the constant functions:
f_v = (f_v)^0 + (f_v)^⊥;  g_v = (g_v)^0 + (g_v)^⊥.
Thus by Claim 7.9:
⟨M_{{i},{j}} f, g⟩ = E_{v ∈ X[k]}[⟨M^v_{{i},{j}} f_v, g_v⟩]
= E_{v ∈ X[k]}[⟨M^v_{{i},{j}} (f_v)^0, (g_v)^0⟩ + ⟨M^v_{{i},{j}} (f_v)^⊥, (g_v)^⊥⟩]
≤ E_{v ∈ X[k]}[⟨M^v_{{i},{j}} (f_v)^0, (g_v)^0⟩ + λ·(‖(f_v)^⊥‖² + ‖(g_v)^⊥‖²)/2]
= E_{v ∈ X[k]}[⟨M^v_{{i},{j}} (f_v)^0, (g_v)^0⟩ + λ·(‖f_v‖² + ‖g_v‖²)/2 − λ·(‖(f_v)^0‖² + ‖(g_v)^0‖²)/2]
≤ E_{v ∈ X[k]}[⟨M^v_{{i},{j}} (f_v)^0, (g_v)^0⟩ + λ·(‖f_v‖² + ‖g_v‖²)/2 − λ·‖(f_v)^0‖·‖(g_v)^0‖]
≤ (1 − λ)·E_{v ∈ X[k]}[⟨M^v_{{i},{j}} (f_v)^0, (g_v)^0⟩] + λ.
The last inequality is by Cauchy-Schwarz. Notice that the constant value in all the entries of (f_v)^0 is exactly M_{{i},{k}} f(v), and similarly the entries of (g_v)^0 equal M_{{j},{k}} g(v). Hence the above is equal to
(1 − λ)·⟨M_{{i},{k}} f, M_{{j},{k}} g⟩ + λ ≤ (1 − λ)·‖M_{{i},{k}}‖·‖M_{{j},{k}}‖ + λ,
and since ‖M_{{i},{j}}‖ is maximal,
≤ (1 − λ)·‖M_{{i},{j}}‖² + λ.
The inequality ‖M_{{i},{j}}‖ ≤ (1 − λ)‖M_{{i},{j}}‖² + λ indicates that either ‖M_{{i},{j}}‖ ≥ 1 or ‖M_{{i},{j}}‖ ≤ λ/(1−λ). If we show that the walk is connected, then ‖M_{{i},{j}}‖ < 1, and as an immediate conclusion ‖M_{{i},{j}}‖ ≤ λ/(1−λ). We separate the proof that the walk is connected into the following claim:

Claim 7.10. Let X be a d-partite simplicial complex such that every link of X is connected. Then for every i, j ∈ {1, ..., d}, the induced graph between vertices of color i and vertices of color j is connected.

Modulo this claim, the lemma follows. □

Proof of Claim 7.10. We prove this by induction on d, the number of parts. The base case of two parts is clear. Assume the claim for d parts, and let X have d + 1 parts. Take v ∈ X[i], u ∈ X[j]. As we already assumed that the whole complex is connected, there is a walk v = w_0, w_1, ..., w_t, w_{t+1} = u. We now show that whenever w_q ∈ X[i] ∪̇ X[j] and w_{q+1} ∉ X[i] ∪̇ X[j], we can substitute the steps through w_{q+1} with a walk from w_q to w_{q+2} in which all vertices except possibly w_{q+2} are in X[i] ∪̇ X[j]. Each edge {w_{q+1}, w_{q+2}} is contained in some top face s ∈ X(d). We denote by w^i_{q+1}, w^j_{q+1} the vertices of s that are in X[i] and X[j] respectively. Assume without loss of generality that w_q ∈ X[i]. The link of w_{q+1} is a d-partite complex; by the induction hypothesis it is color connected, i.e. there is a walk between any two vertices of colors i, j in the link. Specifically, we can walk from w_q to w^j_{q+1}. Also, as w^j_{q+1} and w_{q+2} share a top face, they also share an edge. Thus the walk from w_q to w^j_{q+1}, followed by the edge {w^j_{q+1}, w_{q+2}}, is a walk from w_q to w_{q+2} in which all vertices except (maybe) w_{q+2} are in X[i] ∪̇ X[j]. □

7.3 The Complement Walk in the Grassmann Poset

In this subsection, we prove that the complement walk in the Grassmann poset has a good spectral gap, as stated in Claim 6.7.
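The proofs below hinge on a counting argument: when points are chosen sequentially at random, each new point avoids the span of a fixed subspace together with everything chosen so far, except with probability roughly q^{dim}/q^n. As a quick sanity check (not from the paper), the following script verifies the resulting union bound by exhaustive enumeration over F_2^6; the parameters (q = 2, n = 6, a fixed 1-dimensional subspace u, three random vectors) are illustrative assumptions only.

```python
from itertools import product

def gf2_rank(vectors):
    """Rank over F_2 of a list of vectors encoded as integer bitmasks,
    computed by Gaussian elimination on the lowest set bit of each pivot."""
    rank = 0
    rows = list(vectors)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot  # pivot bit
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

n = 6          # ambient dimension, F_2^6
q = 2
u_basis = [0b000001]   # u = span{e1}, so dim(u) = 1
k = 3          # number of sequentially chosen random vectors

# Count the tuples (v1, v2, v3) in general position relative to u,
# i.e. dim span(u, v1, v2, v3) = 1 + 3.
count = 0
for vs in product(range(q ** n), repeat=k):
    if gf2_rank(u_basis + list(vs)) == 1 + k:
        count += 1
total = (q ** n) ** k
e = count / total

# Union bound as in the proof: e >= 1 - sum_j q^(dim(u)+j-1) / q^n
bound = 1 - sum(q ** (1 + j - 1) for j in range(1, k + 1)) / q ** n
print(count, e, bound)
```

The exact count is 208320 = 62·60·56, matching the sequential computation (each new vector must avoid a span of size 2, 4, 8 respectively), and the empirical probability e ≈ 0.795 indeed dominates the union bound 1 − 14/64 = 0.78125.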
We believe that the notion of complement walks could be generalized to many other posets; in this paper, however, we only study the complement walk of the Grassmann poset.

Claim (Restatement of Claim 6.7).
1. Let X = Gr_aff(F^n, d) be an affine Grassmann poset. Let ℓ1, ℓ2, ℓ3 ≤ d be such that ℓ1 + ℓ2 + ℓ3 + 1 ≤ n, and fix some u ∈ X(ℓ3). Then the u-conditioned ℓ1, ℓ2-complement walk in the Grassmann poset is an O(1/q^{n−ℓ1−ℓ2−ℓ3−1})-bipartite expander.
2. Let Y = Gr_lin(F^n, d) be a linear Grassmann poset. Let ℓ1, ℓ2, ℓ3 ≤ d be such that ℓ1 + ℓ2 + ℓ3 + 1 ≤ n, and fix some u ∈ Y(ℓ3). Then the u-conditioned ℓ1, ℓ2-complement walk in the Grassmann poset is an O(1/q^{n−ℓ1−ℓ2−ℓ3−1})-bipartite expander.

Proof of the affine case. Let u be of dimension ℓ3. Denote by A the bipartite operator of the u-conditioned ℓ1, ℓ2-affine-complement walk, and by J the bipartite operator of choosing w1, w2 independently. Denote by E the event that dim(span(w1, w2, u)) = ℓ1 + ℓ2 + ℓ3 + 2. We can write
J = eA + (1 − e)M,
where e is the probability that E occurs when w1, w2 are chosen independently, and M is the operator conditioned on E not occurring. Since the spectral norm of J is 0 when we restrict to the space of functions with expectation 0, we obtain that
‖A‖ ≤ ((1 − e)/e)·‖M‖ ≤ (1 − e)/e.
We calculate a lower bound on e. Consider the following process, where we choose ℓ1 + ℓ2 + 2 points (p_1, ..., p_{ℓ1+ℓ2+2}) sequentially, so that the first ℓ1 + 1 points span w1 and the remaining ℓ2 + 1 points span w2. If we choose these points so that at the j-th step p_j ∉ span(u, p_1, ..., p_{j−1}), then E occurs. For every j, if we chose p_1, ..., p_{j−1} so that span(u, p_1, ..., p_{j−1}) is of maximal dimension, then the probability of choosing p_j ∈ span(u, p_1, ..., p_{j−1}) is
q^{ℓ3+j−1}/q^n = 1/q^{n−ℓ3−j+1}.
By a union bound, we get that
e ≥ 1 − Σ_{j=1}^{ℓ1+ℓ2+2} 1/q^{n−ℓ3−j+1}.
Rearranging and extending the geometric sum to infinity, this is at least
e ≥ 1 − (1/q^{n−ℓ1−ℓ2−ℓ3−1})·(q/(q−1)) ≥ 1 − 2/q^{n−ℓ1−ℓ2−ℓ3−1}.
Hence the expansion is bounded by (1 − e)/e = O(1/q^{n−ℓ1−ℓ2−ℓ3−1}). □

Proof of the linear case. Similarly to the affine case, let u be given. We denote by A the bipartite operator of the u-conditioned ℓ1, ℓ2-complement walk and by J the bipartite operator of choosing w1, w2 independently. Denote by E the event that dim(w1 ⊕ w2 ⊕ u) = ℓ1 + ℓ2 + ℓ3 + 1. As before, we obtain that
‖A‖ ≤ (1 − e)/e,
where e is the probability of choosing w1, w2 independently so that E occurs. We calculate a lower bound on e. Consider the following process, where we choose ℓ1 + ℓ2 lines (r_1, ..., r_{ℓ1+ℓ2}) sequentially, so that the first ℓ1 lines span w1 and the remaining ℓ2 lines span w2. If we choose these lines so that at the j-th step r_j ⊄ span(u, r_1, ..., r_{j−1}), then E occurs. For every j, if we chose r_1, ..., r_{j−1} so that span(u, r_1, ..., r_{j−1}) is of maximal dimension, then the probability of choosing r_j ⊆ span(u, r_1, ..., r_{j−1}) is
(q^{ℓ3+j} − 1)/(q^n − 1) ≤ 1/q^{n−ℓ3−j}.
Similarly to the previous case, a union bound gives
e ≥ 1 − Σ_{j=1}^{ℓ1+ℓ2} 1/q^{n−ℓ3−j}.
Rearranging and extending the geometric sum to infinity, this is at least
e ≥ 1 − (1/q^{n−ℓ1−ℓ2−ℓ3})·(q/(q−1)) ≥ 1 − 2/q^{n−ℓ1−ℓ2−ℓ3}.
Hence the expansion is bounded by O(1/q^{n−ℓ1−ℓ2−ℓ3−1}). □

As a generalization of the complement walk, we can also define a random walk where we go from t1 ∈ X(ℓ) to t2 ∈ X(ℓ) if their union has size ℓ + 1 + j, for some fixed j > 0.

Definition 7.11 (Fixed Union Size Walk).
Let X be a d-dimensional simplicial complex. Let ℓ ≥ 0 and 1 ≤ j ≤ ℓ + 1 be such that ℓ + j + 1 ≤ d. The ℓ, j-fixed union walk is a random walk on X(ℓ), where given t1 ∈ X(ℓ) we:
1. Choose s ∈ X(ℓ + j) given that t1 ⊂ s.
2. Choose t2 ∈ X(ℓ) given that t1 ∪ t2 = s.
Equivalently, we can require that t2 ⊂ s and that |t1 ∩ t2| = ℓ + 1 − j.

For example, if j = ℓ + 1, this walk is the complement walk. If j = 1, this is the non-lazy upper walk, where we connect t1, t2 if they are contained in some s ∈ X(ℓ + 1).

In [DDFH18], the authors proved that in a λ-two-sided high dimensional expander, the difference between the non-lazy upper walk and the ℓ, ℓ−1-lower walk is at most λ in spectral norm.

Lemma 7.12 ([DDFH18], Theorem 5.5, item 1). Let X be a λ-two-sided spectral expander. Then ‖A − L‖ ≤ λ, where A is the adjacency operator of the non-lazy ℓ, ℓ+1-upper walk, and L is the adjacency operator of the ℓ, ℓ−1-lower walk. □

We generalize this result, and show that the difference between the ℓ, j-fixed union walk and the ℓ, ℓ−j-lower walk is bounded by the spectral norm of the (j−1, j−1)-complement walk. In particular, by Theorem 7.1, the latter is bounded by j²λ for any λ-two-sided high dimensional expander.

Corollary 7.13. Let X be a λ-two-sided high dimensional expander. Fix some ℓ and 1 ≤ j ≤ ℓ + 1 so that ℓ + j + 1 ≤ d. Denote by A the adjacency operator of the ℓ, j-fixed union walk, and by L the adjacency operator of the ℓ, ℓ−j-lower walk. Then
‖A − L‖ ≤ j²λ.
In particular, λ(A) ≤ (ℓ + 1 − j)/(ℓ + 1) + O(ℓ²λ).

Proof of Corollary 7.13. The last part, λ(A) ≤ (ℓ + 1 − j)/(ℓ + 1) + O(ℓ²λ), follows from the first part of the corollary together with Theorem 4.6, from which we obtain that
λ(L) = (ℓ + 1 − j)/(ℓ + 1) + O(ℓλ).
As for the first part, consider two functions f, g : X(ℓ) → ℝ with ‖f‖ = ‖g‖ = 1. Then
⟨Af, g⟩ = E_{a ∈ X(ℓ−j)}[⟨M^a_{j−1,j−1} f_a, g_a⟩],
where M^a_{j−1,j−1} is the (j−1, j−1)-complement walk in X_a. This is true since choosing t1, t2 by the ℓ, j-fixed union walk is the same as choosing the intersection t1 ∩ t2 = a ∈ X(ℓ − j), and then choosing t1 \ a, t2 \ a by the complement walk on X_a(j−1). For each a ∈ X(ℓ−j) we write
f_a = f_{a,0} + f_{a,⊥};  g_a = g_{a,0} + g_{a,⊥},
where f_{a,0} is constant and f_{a,⊥} is perpendicular to the constant part (and the same for g). Then
E_a[⟨M^a_{j−1,j−1} f_a, g_a⟩] = E_a[⟨f_{a,0}, g_{a,0}⟩] + E_a[⟨M^a_{j−1,j−1} f_{a,⊥}, g_{a,⊥}⟩].
1. |E_a[⟨M^a_{j−1,j−1} f_{a,⊥}, g_{a,⊥}⟩]| ≤ j²λ by Theorem 7.1, since this is the complement walk in X_a applied to functions perpendicular to the constants.
2. The constant part is
f_{a,0} = E_{p ∈ X_a(j−1)}[f_a(p)] = E_{a ⊂ t ∈ X(ℓ)}[f(t)],
and by definition this is D^{ℓ}_{ℓ−j} f(a) (and the same for g). Thus
E_a[⟨f_{a,0}, g_{a,0}⟩] = E_a[D^{ℓ}_{ℓ−j} f(a) · D^{ℓ}_{ℓ−j} g(a)] = ⟨D^{ℓ}_{ℓ−j} f, D^{ℓ}_{ℓ−j} g⟩.
By the definition of the lower walk,
⟨D^{ℓ}_{ℓ−j} f, D^{ℓ}_{ℓ−j} g⟩ = ⟨(D^{ℓ}_{ℓ−j})* D^{ℓ}_{ℓ−j} f, g⟩ = ⟨Lf, g⟩.
Combining the two items above, we get that for every f, g as above,
|⟨Af, g⟩ − ⟨Lf, g⟩| ≤ j²λ,
i.e. ‖A − L‖ ≤ j²λ. □

We can use our newly constructed complement walks and colored walks to prove high dimensional versions of the expander mixing lemma. Let A1 ⊆ X(j1), ..., A_m ⊆ X(j_m), and denote k = Σ_{t=1}^m j_t + m − 1. We denote
F(A1, ..., A_m) := {s ∈ X(k) | ∀i ∃s_i ∈ A_i, s_i ⊂ s},
i.e. all k-faces that contain a subface from each A_i. For example, when m = 2 and j1 = j2 = 0, F(A1, A2) is the set of all edges between A1 and A2 in the underlying graph of X.

Lemma 7.14 (High dimensional expander mixing lemma, two-sided version). Let X be a d-dimensional λ-two-sided link expander.
Let j1, ..., j_m ≤ d, and let A1 ⊆ X(j1), ..., A_m ⊆ X(j_m) be such that for any ℓ1 ≠ ℓ2 and any s ∈ A_{ℓ1}, t ∈ A_{ℓ2}, we have s ∩ t = ∅. Then
| P[F(A1, ..., A_m)] − (k+1 choose j1+1, ..., j_m+1) · Π_{i=1}^m P[A_i] | ≤ Cλ · (Π_{i=1}^m P[A_i])^{1/m},
where C depends on m and d only.

Lemma 7.15 (High dimensional expander mixing lemma, one-sided (d+1)-partite version). Let X be a λ-one-sided (d+1)-partite link expander. Let I1, ..., I_m ⊆ [d+1] be pairwise disjoint color sets, and let A1 ⊆ X[I1], ..., A_m ⊆ X[I_m]. Then
| P[F(A1, ..., A_m)] − Π_{i=1}^m P[A_i | X[I_i]] | ≤ Cλ · (Π_{i=1}^m P[A_i | X[I_i]])^{1/m},
where C depends on m and d only.

Comparison with previous results. There are other expander mixing lemmas for high dimensional expanders. For example, the lemma in [Opp18b] states that in a λ-two-sided high dimensional expander, for A1, ..., A_m ⊆ X(0),
| P[F(A1, ..., A_m)] − (k+1)! · Π_{i=1}^m P[A_i] | ≤ Cλ · √(min_{i≠j} P[A_i]P[A_j]).
The lemma in [LGE15] had a similar statement for a special case of Ramanujan complexes. Our lemma generalizes these results: it deals with faces of all dimensions, and not only vertices. This shows that link expanders have pseudorandom behavior at all levels of the complex. We give the proof for the two-sided case; the one-sided case's proof is similar.

Proof of Lemma 7.14. The proof is by induction on m. The base case m = 1 is clear. Let X and A1 ⊆ X(j1), ..., A_{m+1} ⊆ X(j_{m+1}) be as above. It is enough to prove that for every i,
| P[F(A1, ..., A_{m+1})] − Π_{j=1}^{m+1} P[A_j] | ≤ Cλ · √( P[A_i] · (Π_{j≠i} P[A_j])^{1/m} )
(here and below we absorb the multinomial normalization into P[F]; restoring it yields the statement of the lemma), because the geometric mean of the right-hand sides over i = 1, ..., m+1 is
Π_{i=1}^{m+1} ( Cλ √( P[A_i] · (Π_{j≠i} P[A_j])^{1/m} ) )^{1/(m+1)} = Cλ · (Π_{j=1}^{m+1} P[A_j])^{1/(m+1)}.
We prove this for i = m+1; the other cases are symmetric. Denote by 1_{F(A1,...,A_m)} and 1_{A_{m+1}} the indicator functions of F(A1, ..., A_m) and A_{m+1} respectively. Consider the expression ⟨M 1_{A_{m+1}}, 1_{F(A1,...,A_m)}⟩, where M := M_{j_{m+1}, k−j_{m+1}−1} is the complement walk operator, and note that (k+1 choose j_{m+1}+1) is the number of ways to partition a set of size k+1 into parts of sizes j_{m+1}+1 and k−j_{m+1}. As we can see,
⟨M 1_{A_{m+1}}, 1_{F(A1,...,A_m)}⟩ = E_{s1 ∈ X(j_{m+1}), s2 ∈ X(k−j_{m+1}−1); s1 ∪̇ s2 ∈ X(k)}[ 1_{A_{m+1}}(s1) · 1_{F(A1,...,A_m)}(s2) ] = P[F(A1, ..., A_{m+1})] / (k+1 choose j_{m+1}+1),
as this is exactly the probability of getting a face t ∈ F(A1, ..., A_{m+1}) and partitioning it into (s1, s2) with s1 ∈ A_{m+1} and s2 ∈ F(A1, ..., A_m); there is only one such partition, because of the mutual disjointness property of the A_i's.
On the other hand, we can decompose
1_{A_{m+1}} = 1^0_{A_{m+1}} + 1^⊥_{A_{m+1}} and 1_{F(A1,...,A_m)} = 1^0_{F(A1,...,A_m)} + 1^⊥_{F(A1,...,A_m)}
into the constant part and the part perpendicular to it. Thus
⟨M 1_{A_{m+1}}, 1_{F(A1,...,A_m)}⟩ = ⟨M 1^0_{A_{m+1}}, 1^0_{F(A1,...,A_m)}⟩ + ⟨M 1^⊥_{A_{m+1}}, 1^⊥_{F(A1,...,A_m)}⟩,
and from Cauchy-Schwarz,
| ⟨M 1_{A_{m+1}}, 1_{F(A1,...,A_m)}⟩ − ⟨M 1^0_{A_{m+1}}, 1^0_{F(A1,...,A_m)}⟩ | ≤ λ(M) · ‖1^⊥_{A_{m+1}}‖ · ‖1^⊥_{F(A1,...,A_m)}‖.
The product of the constant parts is equal to the product of the probabilities:
⟨M 1^0_{A_{m+1}}, 1^0_{F(A1,...,A_m)}⟩ = P[A_{m+1}] · P[F(A1, ..., A_m)].
Thus
| ⟨M 1_{A_{m+1}}, 1_{F(A1,...,A_m)}⟩ − P[A_{m+1}] · P[F(A1, ..., A_m)] | ≤ λ(M) · ‖1^⊥_{A_{m+1}}‖ · ‖1^⊥_{F(A1,...,A_m)}‖.
By the triangle inequality and the induction hypothesis,
| ⟨M 1_{A_{m+1}}, 1_{F(A1,...,A_m)}⟩ − Π_{j=1}^{m+1} P[A_j] | ≤ λ(M) · ‖1^⊥_{A_{m+1}}‖ · ‖1^⊥_{F(A1,...,A_m)}‖ + | P[A_{m+1}] · P[F(A1, ..., A_m)] − Π_{j=1}^{m+1} P[A_j] | ≤ Cλ · √( P[A_{m+1}] · (Π_{j=1}^m P[A_j])^{1/m} ),
using ‖1^⊥_{A_{m+1}}‖ ≤ √(P[A_{m+1}]) and the bound λ(M) = O(λ) with a constant depending on m and d, given by Theorem 7.1. □

Acknowledgement

We wish to thank Prahladh Harsha for many helpful discussions.

References

[AJT19] Vedat Levi Alev, Fernando Granha Jeronimo, and Madhur Tulsiani. Approximating constraint satisfaction problems on high-dimensional expanders. In Proc. 60th IEEE Symp. on Foundations of Computer Science (FOCS), 2019.
[ALGV18] Nima Anari, Kuikui Liu, Shayan Oveis Gharan, and Cynthia Vinzant. Log-concave polynomials II: high-dimensional walks and an FPRAS for counting bases of a matroid. CoRR, abs/1811.01816, 2018.
[AS97] Sanjeev Arora and Madhu Sudan. Improved low degree testing and its applications. In Proc. 29th ACM Symp. on Theory of Computing (STOC), pages 485–495, El Paso, Texas, May 1997.
[BDL17] Amey Bhangale, Irit Dinur, and Inbal Livni Navon. Cube vs. cube low degree test. In Proc. 8th Innovations in Theoretical Computer Science (ITCS), pages 40:1–40:31, 2017.
[BKS19] Boaz Barak, Pravesh K.
Kothari, and David Steurer. Small-set expansion in shortcode graph and the 2-to-2 conjecture. In Proc. 10th Innovations in Theoretical Computer Science (ITCS), pages 9:1–9:12, 2019.
[DDFH18] Yotam Dikstein, Irit Dinur, Yuval Filmus, and Prahladh Harsha. Boolean function analysis on high-dimensional expanders. In Proc. 22nd International Workshop on Randomization and Computation (RANDOM), volume 116 of LIPIcs, 2018.
[DFH19] Irit Dinur, Yuval Filmus, and Prahladh Harsha. Analyzing boolean functions on the biased hypercube via higher-dimensional agreement tests (extended abstract). In Proc. 30th ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2124–2133, 2019.
[DG08] Irit Dinur and Elazar Goldenberg. Locally testing direct products in the low error range. In Proc. 49th IEEE Symp. on Foundations of Computer Science (FOCS), 2008.
[Din07] Irit Dinur. The PCP theorem by gap amplification. Journal of the ACM, 54(3), 2007.
[DK17] Irit Dinur and Tali Kaufman. High dimensional expanders imply agreement expanders. In Proc. 58th IEEE Symp. on Foundations of Computer Science (FOCS), pages 974–985, 2017.
[DKK+18] Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra. Towards a proof of the 2-to-1 games conjecture? In Proc. 50th ACM Symp. on Theory of Computing (STOC), 2018.
[DL17] Irit Dinur and Inbal Livni Navon. Exponentially small soundness for the direct product z-test. In Proc. 32nd Computational Complexity Conference (CCC), pages 29:1–29:50, 2017.
[DR06] Irit Dinur and Omer Reingold. Assignment testers: Towards combinatorial proofs of the PCP theorem. SIAM Journal on Computing, 36(4):975–1024, 2006. Special issue on Randomness and Computation.
[DS14] Irit Dinur and David Steurer. Direct product testing. In Proc. 29th Computational Complexity Conference (CCC), pages 188–196, June 2014.
[EK16] Shai Evra and Tali Kaufman. Bounded degree cosystolic expanders of every dimension. In Proc. 48th ACM Symp. on Theory of Computing (STOC), pages 36–48, 2016.
[Gar73] Howard Garland. p-adic curvature and the cohomology of discrete subgroups of p-adic groups. Ann. of Math., 97(3):375–423, 1973.
[GS97] Oded Goldreich and Shmuel Safra. A combinatorial consistency lemma with application to proving the PCP theorem. In RANDOM: International Workshop on Randomization and Approximation Techniques in Computer Science, LNCS, 1997.
[IKW12] Russell Impagliazzo, Valentine Kabanets, and Avi Wigderson. New direct-product testers and 2-query PCPs. SIAM J. Comput., 41(6):1722–1768, 2012.
[KL14] Tali Kaufman and Alexander Lubotzky. High dimensional expanders and property testing. In Proc. 5th Innovations in Theoretical Computer Science (ITCS), pages 501–506, 2014.
[KM17] Tali Kaufman and David Mass. High dimensional random walks and colorful expansion. In Proc. 8th Innovations in Theoretical Computer Science (ITCS), pages 4:1–4:27, 2017.
[KMS17] Subhash Khot, Dor Minzer, and Muli Safra. On independent sets, 2-to-2 games, and Grassmann graphs. In Proc. 49th ACM Symp. on Theory of Computing (STOC), pages 576–589, 2017.
[KMS18] Subhash Khot, Dor Minzer, and Muli Safra. Pseudorandom sets in Grassmann graph have near-perfect expansion. In Proc. 59th IEEE Symp. on Foundations of Computer Science (FOCS), pages 592–601, 2018.
[KO18a] Tali Kaufman and Izhar Oppenheim. Construction of new local spectral high dimensional expanders. In Proc. 50th ACM Symp. on Theory of Computing (STOC), pages 773–786, 2018.
[KO18b] Tali Kaufman and Izhar Oppenheim. High order random walks: Beyond spectral gap. In Proc. 22nd International Workshop on Randomization and Computation (RANDOM), volume 116 of LIPIcs. Schloss Dagstuhl, 2018.
[LGE15] Alexander Lubotzky, Konstantin Golubev, and Shai Evra. Mixing properties and the chromatic number of Ramanujan complexes. International Mathematics Research Notices, 2015(22):11520–11548, 2015.
[LSV05a] Alexander Lubotzky, Beth Samuels, and Uzi Vishne. Explicit constructions of Ramanujan complexes of type Ã_d. Eur. J. Comb., 26(6):965–993, 2005.
[LSV05b] Alexander Lubotzky, Beth Samuels, and Uzi Vishne. Ramanujan complexes of type Ã_d. Israel J. Math., 149(1):267–299, 2005.
[Opp18a] Izhar Oppenheim. Local spectral expansion approach to high dimensional expanders part I: Descent of spectral gaps. Discrete Comput. Geom., 59(2):293–330, 2018.
[Opp18b] Izhar Oppenheim. Local spectral expansion approach to high dimensional expanders part II: Mixing and geometrical overlapping, 2018.
[RS96] Ronitt Rubinfeld and Madhu Sudan. Robust characterizations of polynomials with applications to program testing. SIAM J. Comput., 25(2):252–271, 1996.
[RS97] Ran Raz and Shmuel Safra. A sub-constant error-probability low-degree test, and a sub-constant error-probability PCP characterization of NP. In Proc. 29th ACM Symp. on Theory of Computing (STOC), pages 475–484, New York, NY, USA, 1997. ACM.

A Standard Definitions and Claims

In this appendix we give the necessary background and conventions we use throughout the paper. Most results and claims in this section are standard, and are thus given without proof.

A.1 Expander graphs

Every weighted undirected graph induces a random walk on its vertices. Let G = (V, E) be a finite weighted graph with a probability weight function μ : E → [0, 1]. The transition probability from v to u is
μ({u, v}) / Σ_{w ∼ v} μ({v, w}).
Denote by A = A(G) the Markov operator associated with this random walk. We call this operator the adjacency operator. A is an operator on real-valued functions on the vertices, where for all v ∈ V,
Af(v) = E_{u ∼ v}[f(u)].
The expectation is taken with respect to the graph's probability on vertices, conditioned on being adjacent to v. A's eigenvalues lie in the interval [−1, 1]. We denote its eigenvalues by λ1 ≥ λ2 ≥ ... ≥ λn (with multiplicities). The largest eigenvalue is always λ1 = 1, and it is obtained by the constant function. The second eigenvalue is strictly less than 1 if and only if the graph is connected.

Definition A.1 (spectral expanders).
Let G be a graph and let 0 ≤ λ < 1. G is a λ-one-sided spectral expander if λ_2 ≤ λ. G is a λ-two-sided spectral expander if max(|λ_2|, |λ_n|) ≤ λ.

There is another notion of graph expansion that we will need in this paper, called edge expansion. Intuitively, an edge expander is a graph in which every set of vertices has a large number of outgoing edges.

Definition A.2 (edge expansion). Let G be a weighted graph. The edge expansion of G is

    Φ(G) = min { P[E(S, V \ S)] / P[S] : S ⊂ V, 0 < P[S] ≤ 1/2 },

where E(S, V \ S) is the set of all edges between S and V \ S.

There is a connection between spectral expansion and edge expansion:

Theorem A.3 (Cheeger's inequality). Let G be any weighted graph. Then

    (1 − λ_2)/2 ≤ Φ(G) ≤ √(2(1 − λ_2)).  □

A.1.1  Bipartite Graphs and Bipartite Expanders

A bipartite graph is a graph whose vertex set can be partitioned into two independent sets V = L ∪ R, called sides. Bipartite graphs are sometimes easier to analyze than general graphs, and they arise naturally when studying STAV-structures.

The Bipartite Adjacency Operator. In a bipartite graph we view each side as a separate probability space, where for any v ∈ L (resp. R), P[v] = Σ_{w ∼ v} µ({v, w}). We define the bipartite adjacency operator B : ℓ_2(L) → ℓ_2(R) by

    Bf(v) = E_{u ∼ v}[f(u)]   for all f ∈ ℓ_2(L), v ∈ R,

where the expectation is taken with respect to the probability space L, conditioned on being adjacent to v.

We denote by λ(B) the spectral norm of B restricted to 1^⊥ ⊆ ℓ_2(L), the orthogonal complement of the constant functions (with respect to the inner product the measure induces). Namely,

    λ(B) = sup { ⟨Bf, g⟩ : f ∈ 1^⊥, ‖f‖ = ‖g‖ = 1 }.

Definition A.4 (Bipartite Expander). Let G be a bipartite graph and let λ < 1. We say G is a λ-bipartite expander if λ(B) ≤ λ.

Sampling Graph.
We also define sampling graphs, a notion closely related to expanders.

Definition A.5 (Sampling Graph). Let G = (L, R, E) be a bipartite graph and let δ < 1. We say that G has the δ-sampling property if the following holds: for any set C ⊂ R with P[C] ≥ δ, the set T = { a ∈ L : P_{v∈R}[v ∈ C | v ∈ reach_a] ≥ δ } has probability at least 1/2.

A.2  Properties of Expander Graphs

In this subsection we collect the properties of expander graphs that we will need in Section 3.

Edge-Expander Partition Property. The following claim is also useful in the proof of the main theorem. It says that if we partition the vertices and there are few edges between the partition's parts, then one set in the partition has probability at least 1/2.

Claim A.6 (Edge-Expander Partition Property). Let G = (V, E) be a c-edge expander. Let V = B_1 ∪ ... ∪ B_n be a partition of the vertices, and suppose that there are few edges between parts of the partition, namely

    (1/2) Σ_{i=1}^n P[E(B_i, B_i^c)] < c/2.

Then there exists i such that P[B_i] ≥ 1/2.

Proof of Claim A.6. Assume towards contradiction that P[B_i] < 1/2 for all 1 ≤ i ≤ n. From our assumption, there are few edges between parts of the partition, namely

    c/2 > (1/2) Σ_{i=1}^n P[E(B_i, B_i^c)] ≥ (c/2) Σ_{i=1}^n P[B_i],

where the second inequality is by edge expansion (applicable since each P[B_i] < 1/2). The B_i's are a partition of the vertices, so Σ_{i=1}^n P[B_i] = 1, a contradiction. □

Expander Mixing Lemma. A classical result on expander graphs is the expander mixing lemma, which intuitively says that the weight of the edges between any two vertex sets S, T ⊂ V is proportional to the probabilities of S and T.

Lemma A.7 (Expander Mixing Lemma). Let G = (V, E) be a λ-two-sided spectral expander. Then for any S, T ⊂ V,

    | P[E(S, T)] − P[S] P[T] | ≤ λ √( P[S] P[T] (1 − P[S])(1 − P[T]) ).  □

Bipartite graphs have their own version of the expander mixing lemma:

Lemma A.8 (Bipartite Expander Mixing Lemma).
Let G = (L, R, E) be a bipartite λ-one-sided spectral expander. Then for any S ⊂ L, T ⊂ R,

    | P[E(S, T)] − P_{v∈L}[v ∈ S] P_{w∈R}[w ∈ T] | ≤ λ √( P[S] P[T] (1 − P[S])(1 − P[T]) ).  □

Expander Sampler Property. In [DK17] the authors showed that a bipartite λ-one-sided spectral expander has the following useful sampler property.

Lemma A.9 (Sampler Property, [DK17]). Let G = (L, R, E) be a bipartite λ-one-sided spectral expander. Let S ⊂ R be any set of vertices and let c > 0. Then the set T = { v ∈ L : | P_{w∈R}[w ∈ S | w ∼ v] − P[S] | > c } of vertices whose view of S deviates from its measure satisfies

    P[T] ≤ (λ²/c²) P[S].

Almost Cut Approximation Property. As a corollary of the expander mixing lemma we get the following useful approximation property. In an expander graph, the number of outgoing edges from a set A ⊂ V approximates the size of A (or of V \ A). The following claim generalizes this fact to the setting where we count only the outgoing edges from A to a (large) set B ⊂ V \ A.

Claim A.10 (Almost Cut Approximation Property). Let G = (V, E) be a λ-two-sided spectral expander. Let V = A ∪ B ∪ C be a partition such that P[A] ≤ P[B]. Then

    P[A] ≤ (1 / ((1 − λ) P[B])) ( P[E(A, B)] + λ P[C] ).   (A.1)

In particular, if P[B] and 1 − λ are Ω(1), then P[A] = O( P[E(A, B)] + λ P[C] ).

For bipartite expanders we have an analogous almost cut approximation property, similar to Claim A.10.

Claim A.11 (Almost Cut Approximation Property, bipartite expanders). Let G = (L, R, E) be a λ-bipartite expander for λ < 1. Let V = A ∪ B ∪ C be a partition such that P[A] ≤ P[B] (where the probabilities are taken over the whole graph). Then

    P[A] ≤ (1 / ((1 − λ) P[B])) ( P[E(A, B)] + λ P[C] ).   (A.2)

In particular, if P[B] and 1 − λ are Ω(1), then P[A] = O( P[E(A, B)] + λ P[C] ).

Proof of Claim A.10.
By the expander mixing lemma,

    P[A] P[B] ≤ P[E(A, B)] + λ √( P[A] P[B] (1 − P[A])(1 − P[B]) ).

The expression inside the square root equals P[A] P[B] (P[C] + P[A] P[B]), since P[C] = 1 − P[A] − P[B]. Using √(xy) ≤ (x + y)/2 we may therefore write

    P[A] P[B] ≤ P[E(A, B)] + λ ( P[A] P[B] + P[C] ).

Rearranging gives (1 − λ) P[A] P[B] ≤ P[E(A, B)] + λ P[C], and dividing by (1 − λ) P[B] yields the claim. □

Proof of Claim A.11. Denote the restriction of a set X to L or R by X_L or X_R respectively, and write a_L = P_L[A_L], and similarly b_L, c_L, a_R, b_R, c_R. By the bipartite expander mixing lemma,

    a_L b_R ≤ P[E(A_L, B_R)] + λ √( a_L b_R (1 − a_L − b_R + a_L b_R) ),

and

    a_R b_L ≤ P[E(A_R, B_L)] + λ √( a_R b_L (1 − a_R − b_L + a_R b_L) ).

The expressions inside both square roots are at most

    (a_R b_L + a_L b_R) ( (1 − a_R − b_R) + (1 − a_L − b_L) + (a_R b_L + a_L b_R) ),

so by √(xy) ≤ (x + y)/2 each square root is at most half of

    (a_R b_L + a_L b_R) + (1 − a_R − b_R) + (1 − a_L − b_L).

Notice that (1 − a_R − b_R) + (1 − a_L − b_L) = c_L + c_R = 2 P[C]. Thus, summing both inequalities, we obtain

    (1 − λ)(a_R b_L + a_L b_R) ≤ P[E(A, B)] + 2λ P[C].

Wlog a_L ≥ a_R (so that a_L ≥ P[A]), and thus

    a_R b_L + a_L b_R ≥ a_L (b_L + b_R) = 2 a_L P[B] ≥ 2 P[A] P[B].

Combining the last two displays,

    P[A] ≤ (1 / ((1 − λ) P[B])) ( P[E(A, B)] + λ P[C] ).  □

A.3  Simplicial Complexes and high dimensional expanders

We include here the basic definitions needed for our results. For a more comprehensive introduction to this topic we refer the reader to [DK17] and the references therein.

A simplicial complex is a hypergraph that is closed downward with respect to containment. It is called d-dimensional if the largest hyperedge has size d + 1. We refer to X(ℓ) as the hyperedges (also called faces) of size ℓ + 1; X(0) are the vertices.

We now define weighted simplicial complexes.
Suppose we have a d-dimensional simplicial complex X and a probability distribution µ : X(d) → [0, 1]. We consider the following probabilistic process for choosing lower-dimensional faces:

1. Choose some d-face s_d ∈ X(d) with probability µ(s_d).
2. Given the choice of s_d, choose sequentially a uniform chain of faces contained in s_d, (∅ ⊂ s_0 ⊂ s_1 ⊂ ... ⊂ s_d), where s_i ∈ X(i).

For any s ∈ X(k) we denote by P[s] the probability that the chain passes through s, namely P[s] = P[(∅ ⊂ s_0 ⊂ ... ⊂ s_d) : s_k = s]. For all s_k ∈ X(k), s_ℓ ∈ X(ℓ), we write P[s_k | s_ℓ] for the probability that the k-face in the chain is s_k, given that the ℓ-face is s_ℓ.

From here on, whenever we refer to a simplicial complex X, we always assume that there is a probability measure on it constructed as above.

A link of a face in a simplicial complex is a generalization of the neighbourhood of a vertex in a graph:

Definition A.12 (link of a face). Let s ∈ X(k) be some k-face. The link of s is a (d − (k + 1))-dimensional simplicial complex defined by

    X_s = { t \ s : s ⊆ t ∈ X }.

The associated probability measure P_{X_s} of the link of s is defined by

    P_{X_s}[t] = P_X[t ∪ s | s],

where P_X is the measure defined on X.

Definition A.13 (underlying graph). The underlying graph of a simplicial complex X, with a probability measure as defined above, is the graph whose vertices are X(0) and whose edges are X(1), with (the restriction of) the probability measure of X on the vertices and edges.

We are ready to define our notion of high dimensional expanders: one-sided and two-sided link expanders.

Definition A.14 (one-sided and two-sided link expander). Let 0 ≤ λ < 1.
A simplicial complex X is a λ-two-sided link expander (or λ-two-sided HDX) if for every −1 ≤ k ≤ d − 2 and every s ∈ X(k), the underlying graph of the link X_s is a λ-two-sided spectral expander. Similarly, X is a λ-one-sided link expander (or λ-one-sided HDX) if for every −1 ≤ k ≤ d − 2 and every s ∈ X(k), the underlying graph of the link X_s is a λ-one-sided spectral expander.

When X is a graph, this definition coincides with the definition of a spectral expander.

We remark that it is a deep theorem that there exist good one-sided and two-sided high dimensional expanders with bounded degree [LSV05b].

(d + 1)-partite simplicial complexes. A d-dimensional simplicial complex is (d + 1)-partite if we can partition the vertex set V = V_0 ∪ V_1 ∪ ... ∪ V_d such that every d-face s ∈ X(d) contains exactly one vertex from each V_i, i.e. |s ∩ V_i| = 1. The color of a k-face s ∈ X(k) is the set of all indexes of the V_i's that intersect s, i.e.

    col(s) = { j ∈ [d] : |s ∩ V_j| = 1 }.

For any J ⊂ [d] we denote X[J] = { s ∈ X : col(s) = J }. When J = {i}, we abuse notation and write X[i] instead of X[{i}] (not to be confused with X(i)).

B  From Independent Choice to Expanding Choice

In Section 4, Section 5 and Section 6 we showed that a number of agreement tests are sound. The agreement tests' distributions had the following property in common: given the choice of the intersection t, we chose the sets s_1, s_2 independently. This property is very helpful in analyzing the expansion of the conditioned STS_{a,v}-graph, as required when showing that Assumption (A2)b holds. In this appendix we show that if the choice of s_1, s_2 given t is done according to an expanding graph, then we can obtain a similar result.

Definition B.1 (STS_t-graph). Let X = (S, T, A, V) be any STAV-structure. For a fixed t ∈ T, the sts_t-graph has vertex set { s ⊃ t }, and the probability of choosing an edge {s_1, s_2} is given by 2 P_{STS}[(s_1, s_2) | s_1, s_2 ⊃ t].
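To make the face-sampling process of Section A.3 concrete, the following short sketch (our own toy illustration; the complex and its weights are invented, not taken from the paper) computes the induced measure P[s] on a small 2-dimensional complex in two ways: by the closed form Σ_{t ⊇ s} µ(t) / C(d + 1, k + 1) that the uniform-chain process induces, and by enumerating all chains explicitly, and checks that the two agree:

```python
import itertools
from fractions import Fraction
from math import comb

# Toy 2-dimensional complex: top faces (triangles) with a probability
# weight mu. The complex and the weights are invented for illustration.
mu = {
    frozenset({1, 2, 3}): Fraction(1, 2),
    frozenset({2, 3, 4}): Fraction(1, 4),
    frozenset({1, 3, 4}): Fraction(1, 4),
}
d = 2
vertices = {1, 2, 3, 4}

def face_prob(s, k):
    """P[s] for s in X(k): choose a d-face by mu, then a uniform chain in it.
    A uniform chain makes every (k+1)-subset of the d-face equally likely,
    so P[s] = sum of mu(t) over t containing s, divided by C(d+1, k+1)."""
    s = frozenset(s)
    return sum((w for t, w in mu.items() if s <= t), Fraction(0)) / comb(d + 1, k + 1)

def vertex_prob_by_chains(v):
    """P[v] computed by enumerating all chains (s0 in s1 in s2) explicitly."""
    total = Fraction(0)
    for t, w in mu.items():
        for s1 in itertools.combinations(sorted(t), 2):   # s_1 uniform in t
            for s0 in s1:                                 # s_0 uniform in s_1
                if s0 == v:
                    total += w * Fraction(1, 3) * Fraction(1, 2)
    return total

# The closed form and the chain enumeration agree on every vertex,
# and the induced measure on each level sums to 1.
assert all(face_prob({v}, 0) == vertex_prob_by_chains(v) for v in vertices)
assert sum(face_prob({v}, 0) for v in vertices) == 1
```

For instance, face_prob({1}, 0) = (1/2 + 1/4)/3 = 1/4: the chain passes through vertex 1 exactly when the chosen triangle contains it, and a uniform chain then lands on each of the triangle's three vertices with probability 1/3.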
Claim B.2. Let X = (S, T, A, V) be any STAV-structure. Let D_1, D_2 be two STS-distributions on X such that for every t ∈ T:

1. The choice of s_1, s_2 ∼ D_1 given t is independent.
2. The sts_t-graph for D_2 is a 1/2-two-sided spectral expander.

Denote by ε_i = rej_{D_i}(f), namely the probability of sampling (t, s_1, s_2) ∼ D_i so that f_{s_1}↾t ≠ f_{s_2}↾t. Then

    (1/16) ε_2 ≤ ε_1 ≤ 16 ε_2.

The constant 1/2 is arbitrary; any expansion constant bounded away from 1 will suffice.

As a corollary to this claim:

Corollary B.3. Let X = (S, T, A, V) be any STAV-structure. Let D_1, D_2 be two STS-distributions on X so that for all t ∈ T, the sts_t-graphs for both D_1 and D_2 are 1/2-two-sided spectral expanders. Denote by ε_i = rej_{D_i}(f), namely the probability of sampling (t, s_1, s_2) ∼ D_i so that f_{s_1}↾t ≠ f_{s_2}↾t. Then ε_1 = O(ε_2). In particular, D_1 yields a γ-approximate c-sound agreement test if and only if D_2 yields a γ-approximate O(c)-sound agreement test (including the exact case where γ = 0). □

The proof of the corollary consists of two applications of the claim above, comparing each D_i with the intermediate distribution that chooses s_1, s_2 independently given t. We leave the details to the reader.

Example B.4 (Simplicial Complexes). Recall that for a simplicial complex X we can define agreement tests for the ground set V = X(0) and S = X(d). Previously we defined the D_{d,ℓ} distribution, where we choose s_1, s_2 independently given that they contain some ℓ-face t ∈ X(ℓ).

Consider the following test distribution Up_{k,2k} for a 2k-dimensional simplicial complex:

1. Sample r ∈ X(2k) and t ∈ X(k/2) with t ⊂ r.
2. Sample s_1, s_2 ∈ X(k), given that t ⊂ s_1, s_2 ⊂ r.

Given any t ∈ X(k/2), the sts_t-graph above is two steps of the (k, 2k)-containment walk, and is thus an edge expander. By Claim B.2 we immediately obtain that

    rej_{Up_{k,2k}}(f) = O( rej_{D_{k,k/2}}(f) ).

By Theorem 4.1 this agreement test is exactly c-sound.

We can take this argument one step further.
Consider the following test distribution Up_k, where we only condition on s_1, s_2 ⊂ r, namely:

1. Sample r ∈ X(2k).
2. Sample s_1, s_2 ∈ X(k), given that s_1, s_2 ⊂ r.

This distribution was the main distribution analyzed in the agreement theorem of [DK17].

In expectation, s_1 and s_2 intersect on a set of size k/2. Thus, by a simple Markov argument, P_{s_1,s_2 ∼ Up_k}[ |s_1 ∩ s_2| ≥ k/2 ] = Ω(1). Thus if rej_{Up_k}(f) ≤ ε, then conditioned on intersecting on a set of size k/2, the rejection probability is still O(ε). In conclusion, we get that

    rej_{Up_k}(f) = O( rej_{Up_{k,2k}}(f) ) = O( rej_{D_{k,k/2}}(f) ).

By Theorem 4.1, we obtain a new proof of the theorem of [DK17] that this distribution gives rise to a c-sound agreement test, for a good enough two-sided spectral expander.

Proof of Claim B.2. For any t ∈ T and i = 1, 2, denote by ε_{i,t} the probability of sampling s_1, s_2 ⊃ t that disagree on t. It is easy to see that E_t[ε_{i,t}] = ε_i, so it suffices to show that (1/16) ε_{2,t} ≤ ε_{1,t} ≤ 16 ε_{2,t} for every t ∈ T.

We begin by showing that ε_{1,t} ≤ 16 ε_{2,t}. If ε_{2,t} ≥ 1/16, then ε_{1,t} ≤ 1 ≤ 16 ε_{2,t}. Otherwise, consider the partition of { s ⊃ t } into V_1, ..., V_n, where V_i = { s : f_s↾t = h_i }, for all possible assignments h_i : t → Σ. The edges of the sts_t-graph of D_2 that cross between parts of the partition are exactly the disagreeing pairs, so they have probability ε_{2,t} < 1/16. By Cheeger's inequality (Theorem A.3) this graph is a c-edge expander for c = 1/4, so by the edge-expander partition property (Claim A.6) there is a set V_i such that P[V_i] ≥ 1/2; without loss of generality it is V_1. By edge expansion we get that

    P[V_1^c] ≤ (1/c) P[E(V_1, V_1^c)] ≤ 4 ε_{2,t}.

Observe that the (s, t)-marginals of D_1 and D_2 are identical, since both are STS-test distributions of the same STAV.
Thus, in particular, when we write P[V_i] it does not matter whether we sample s according to D_1 or according to D_2.

Returning to the sts_t-graph of D_1: since s_1, s_2 are chosen independently given t, the probability of choosing s_1, s_2 ∈ V_1 according to D_1 is just P[V_1]² ≥ (1 − 4ε_{2,t})² ≥ 1 − 8ε_{2,t}. If we choose s_1, s_2 ∼ D_1 that disagree, then at least one of them is not in the majority set, hence ε_{1,t} ≤ 8 ε_{2,t} ≤ 16 ε_{2,t}.

Next we show that ε_{2,t} ≤ 16 ε_{1,t}. If ε_{1,t} ≥ 1/16, then ε_{2,t} ≤ 1 ≤ 16 ε_{1,t}, so assume otherwise. Consider again V_1, the set of all s for which f_s↾t agrees with the most popular assignment. From independence,

    P[V_1] P[V_1^c] = P[s_1 ∈ V_1, s_2 ∉ V_1] ≤ P_{s_1,s_2}[ f_{s_1}↾t ≠ f_{s_2}↾t ] = ε_{1,t}.

The graph where we sample s_1, s_2 independently is also a 1/2-edge expander, so by the same argument as in the other direction we get that P[V_1] ≥ 1/2, and thus P[V_1^c] ≤ 2 ε_{1,t}. Recall that this inequality also holds when sampling s according to D_2. If we choose s_1, s_2 ∼ D_2 that disagree, then at least one of them is in V_1^c. Thus ε_{2,t} ≤ 2 P[V_1^c] ≤ 4 ε_{1,t} ≤ 16 ε_{1,t}. □

C  List of Abbreviations for STAV-Structures

STAV-structure (Definition 2.5): A system of sets with four layers: S (sets), T (intersections), A (amplification), V (vertices). It is accompanied by a distribution (s, t, (a, v)) ∼ D_{stav}.

STS-distribution (Definition 2.5): A distribution where we sample t ∈ T, and then s_1, s_2 ∈ S so that s_1 ∩ s_2 ⊃ t. The marginal (s_i, t) is the same as the marginal in D_{stav}.

VASA-distribution (Definition 2.5): A distribution (v, a_1, s, a_2) ∼ D_{vasa} where the marginals (v, a_1, s) and (v, a_2, s) are the same as in D_{stav}.

Reach graph: The bipartite graph between V and A where we choose an edge (v, a) according to the STAV-distribution. We denote by reach_a or reach_v the neighbours of a or v in this graph, respectively.
(Definition 2.9.)

Local reach graph (AV_s-graph) (Definition 2.10): For a fixed s ∈ S, the AV_s-graph is a bipartite graph where L = { a : a ⊂ s } and R = { v : v ∈ s }. The edges are chosen according to the STAV-distribution, conditioned on the sampled set being s.

sts_a-graph (Definition 2.11): For a fixed a ∈ A, the sts_a-graph is a graph whose elements are { s : s ⊃ a }. We connect s_1, s_2 when there exists t ∈ T so that a ⊂ t ⊂ s_1 ∩ s_2.

sts_{a,v}-graph (Definition 2.12): For a fixed a ∈ A and v ∈ reach_a, the sts_{a,v}-graph is a graph whose elements are { s : s ⊃ (a, v) }. We connect s_1, s_2 when there exists t ∈ T so that (a, v) ⊂ t ⊂ s_1 ∩ s_2.

vASA-graph (Definition 2.13): For a fixed v ∈ V, the vASA-graph is a graph whose elements are a ∈ reach_v. We connect a_1, a_2 with a labeled edge (a_1, s, a_2) if (v, a_1, s, a_2) is in the support of D_{vasa}.

Bipartite VAS_a-graph (Definition 2.14): For a fixed a ∈ A, the VAS_a-graph is a bipartite graph where one side is L = reach_a. The other side is the set of pairs (s, a') so that (a, s, a') is in the support of the marginal of D_{vasa}. We sample an edge in this graph by sampling (v, a_1, s, a_2) given that a_1 = a.

Surprise (Definition 2.17): Let { f_s }_{s∈S} be some local ensemble. The surprise of the ensemble is the probability that f_{s_1}↾a = f_{s_2}↾a but f_{s_1}(v) ≠ f_{s_2}(v).

D  List of Results

D.1  Main Theorem

Theorem D.1 (Restatement of Theorem 2.26). Let Σ be some finite alphabet (for example Σ = {0, 1}). Let X = (S, T, A, V) be a γ-good STAV-structure for some γ < 1. Let f = { f_s : s → Σ | s ∈ S } be an ensemble such that:

1. Agreement: rej_X(f) ≤ ε,
2. Surprise: ξ(X, f) ≤ O(γ).   (D.1)

Then, assuming either Assumption (A4(r)) for r = 1 or Assumption (A4), dist_γ(f, G) ≤ O(ε). More explicitly, there exists a global function G : V → Σ such that
    P_{s∈S}[ f_s ≠_γ G↾s ]  :=  P_{s∈S}[ P_{v∈V}[ f_s(v) ≠ G↾s(v) | v ∈ s ] ≥ γ ]  =  O(ε).

Moreover, for any r > 1, if either Assumption (A4(r)) or Assumption (A4) holds, then

    P_{s∈S}[ f_s ≠_{rγ} G↾s ] = O( (1 + 1/r) ε ).

The O(·) notation does not depend on any parameter, including γ, ε, the size of the alphabet, the sizes |S|, |T|, |A|, |V|, and the size of any s ∈ S.

D.2  Applications of Main Theorem

1. Agreement tests on two-sided HDX.

Theorem (Restatement of Theorem 4.1). There exists a constant c > 0 such that for every two natural numbers d > ℓ with d − ℓ = Ω(d) the following holds. Suppose that X is a (1/(dℓ))-two-sided d-dimensional HDX. Then for every r > 1 the d, ℓ-agreement test is (r/ℓ)-approximately c(1 + 1/r)-sound. In particular, if ℓ = Ω(d), then the test is exactly c-sound.

2. Agreement tests on one-sided HDX.

Theorem (Restatement of Theorem 4.4). There exists a constant c > 0 such that for every two natural numbers k, ℓ with k ≥ ℓ + 1 the following holds. Suppose X is a k-dimensional skeleton of a (d + 1)-partite (1/(kℓ))-one-sided HDX (including the case k = d). Then for every r > 1 the d, ℓ-agreement test is (r/ℓ)-approximately c(1 + 1/r)-sound. In particular, if ℓ = Ω(k), then the test is exactly c-sound.

3. Agreement tests on vertex neighbourhoods.

Theorem (Restatement of Theorem 5.3). There exists a constant c > 0 such that for all non-negative integers ℓ, k, d with 0 ≤ ℓ ≤ d − 1 and ℓ + k + 1 ≤ d, the following holds. Let X be a d-dimensional (1/(ℓ(k + ℓ)))-two-sided high dimensional expander. Then the ℓ, k-weak independent agreement test and the ℓ, k-weak complement agreement test are both (1/ℓ)-approximately c-sound.

4. Agreement tests on the Affine and Linear Grassmann Posets:

Theorem (Restatement of Theorem 6.2).
There exists a constant c > 0 such that for every prime power q, every r, δ > 0, and all integers ℓ, d, n with ℓ + 1 < d ≤ n, the following holds. The d, ℓ-Grassmann agreement test on X = Gr_aff(F_q^n, d) is (q^{−ℓ} r δ)-approximately c(1 + 1/r)-sound for δ-ensembles.

Theorem (Restatement of Theorem 6.3). There exists a constant c > 0 such that for every prime power q, every r, δ > 0, and all integers ℓ, d, n with ℓ + 1 < d ≤ n, the following holds. The d, ℓ-Grassmann agreement test on X = Gr_lin(F_q^n, d) is (q^{−ℓ+1} r δ)-approximately c(1 + 1/r)-sound for δ-ensembles.

(A k-dimensional skeleton of a d-dimensional simplicial complex Y is X = { s ∈ Y : |s| ≤ k + 1 }.)

D.3  Analysis of the Complement Walk

Theorem (Restatement of Theorem 7.1).

1. Let X be a λ-two-sided d-dimensional link expander. Let ℓ_1, ℓ_2 be integers such that ℓ_1 + ℓ_2 + 1 ≤ d. Denote by M_{ℓ_1,ℓ_2} the bipartite operator of the ℓ_1, ℓ_2-complement walk. Then

    λ(M_{ℓ_1,ℓ_2}) ≤ (ℓ_1 + 1)(ℓ_2 + 1) λ.

2. Let X be a (d + 1)-partite (λ/((d + 1)λ + 1))-one-sided link expander, where λ < 1. Let I, J ⊂ [d] be two disjoint sets of colors. Denote by M_{I,J} the I, J-colored walk. Then

    λ(M_{I,J}) ≤ |I| |J| λ.

D.4  High Dimensional Expander Mixing Lemma

1. Two-sided case:

Theorem (Restatement of Lemma 7.14). Let X be a d-dimensional λ-two-sided link expander. Let j_1, j_2, ..., j_m ≤ d, and let A_1 ⊂ X(j_1), A_2 ⊂ X(j_2), ..., A_m ⊂ X(j_m) be such that for any ℓ ≠ ℓ′ and any s ∈ A_ℓ, t ∈ A_{ℓ′}, we have s ∩ t = ∅. Then

    | P[F(A_1, A_2, ..., A_m)] − (k + 1 choose j_1 + 1, j_2 + 1, ..., j_m + 1) Π_{j=1}^m P[A_j] | ≤ C λ ( Π_{j=1}^m P[A_j] )^{1/m},

where C depends only on m and d.

2. One-sided partite case:

Theorem (Restatement of Lemma 7.15).
Let I_1, ..., I_m ⊂ [d + 1] be pairwise disjoint sets of colors, and let A_1 ⊂ X[I_1], ..., A_m ⊂ X[I_m]. Then

    | P[F(A_1, A_2, ..., A_m)] − Π_{j=1}^m P[A_j | X[I_j]] | ≤ C λ ( Π_{j=1}^m P[A_j | X[I_j]] )^{1/m},

where C depends only on m and d.

Here (k + 1 choose j_1 + 1, j_2 + 1, ..., j_m + 1) is the multinomial coefficient: the number of ways to partition a set of size k + 1 = (j_1 + 1) + ... + (j_m + 1) into sets of sizes j_1 + 1, ..., j_m + 1.
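The m = 2 case of these mixing lemmas is just the classical expander mixing lemma for graphs (Lemma A.7). As a quick numerical sanity check, the following sketch (our own toy illustration, not from the paper) verifies that inequality exhaustively over all pairs of vertex sets of the complete graph K_6, whose normalized adjacency operator has second eigenvalue λ = 1/(n − 1):

```python
import itertools
from math import sqrt

# Sanity check of the two-sided expander mixing lemma (Lemma A.7) in the
# simplest case: the complete graph K_n, whose normalized adjacency operator
# has second eigenvalue 1/(n-1). Toy illustration, not from the paper.
n = 6
lam = 1 / (n - 1)
V = range(n)

def P_edge(S, T):
    # Probability that a uniformly random directed edge (u, v) of K_n
    # has u in S and v in T.
    e = sum(1 for u in S for v in T if u != v)
    return e / (n * (n - 1))

def mixing_gap(S, T):
    # Right-hand side minus left-hand side of the mixing inequality;
    # the lemma asserts this is non-negative.
    pS, pT = len(S) / n, len(T) / n
    lhs = abs(P_edge(S, T) - pS * pT)
    rhs = lam * sqrt(pS * pT * (1 - pS) * (1 - pT))
    return rhs - lhs

# Exhaustively check every pair of vertex subsets (64 x 64 pairs).
worst = min(
    mixing_gap(S, T)
    for a in range(n + 1) for S in itertools.combinations(V, a)
    for b in range(n + 1) for T in itertools.combinations(V, b)
)
assert worst >= -1e-12  # holds up to floating-point rounding
```

Equality is attained when S = T (for example singletons), so the worst slack is compared against a small negative tolerance that absorbs floating-point rounding.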