The Corruption Bound, Log Rank, and Communication Complexity
Adi Shraibman
The School of Computer Science
The Academic College of Tel Aviv-Yaffo
[email protected]
Abstract
We prove upper bounds on deterministic communication complexity in terms of the log of the rank and simple versions of the corruption bound. Our bounds are a simplified version of the results of Gavinsky and Lovett [8], using the same set of tools. We also give an elementary proof for the upper bound on communication complexity in terms of rank proved by Lovett [18].
Introduction

The notion of communication complexity was introduced by Yao [24] as a discrete variation of a model of Abelson [1] concerning information transfer in distributed computations. In the basic model, two players, Alice and Bob, wish to compute together a boolean function f : I × J → {0, 1}. The players are assumed to have unlimited computational power, and their goal is to minimize the communication between them during the computation. Alice and Bob first agree on a communication protocol and then use this protocol to compute the value of the function f on any given pair of feasible inputs (i, j) ∈ I × J. The inputs are presented to the players so that Alice sees only i and Bob sees only j. They then take turns writing bits (0 or 1) on a blackboard, until both players know the value of f(i, j). The cost of a protocol is the maximal number of bits the players write on the blackboard during the computation of f(i, j), over all choices of inputs (i, j) ∈ I × J. The deterministic communication complexity of f, denoted D(f), is the minimal cost of a protocol for f.

It is sometimes convenient to consider the function f : I × J → {0, 1} as a sign matrix A, where the rows of A correspond to i ∈ I and the columns correspond to j ∈ J. The entries of A satisfy A_{i,j} = 1 if f(i, j) = 0 and A_{i,j} = -1 if f(i, j) = 1. We use this matrix notation.

Many variants of the basic communication complexity model of Yao have been defined. The models differ in the type of communication allowed (e.g.
deterministic, randomized, nondeterministic, etc.), the number of players, the type of function computed, and more. The interested reader can see the book of Kushilevitz and Nisan [13] for a thorough exposition and discussion of the basic models of communication complexity.

The communication complexity literature is mainly concerned with proving lower bounds, and indeed communication complexity lower bounds are used in various areas of theoretical computer science, such as decision tree complexity, VLSI circuits, time-space tradeoffs for Turing machines, and more. Many techniques were developed to prove lower bounds in communication complexity. One of the early lower bound techniques is the rank lower bound of Mehlhorn and Schmidt [19]: let A be a sign matrix, and denote by rank(A) the rank of A over the reals; then log rank(A) ≤ D(A).

Since there are many variants of communication complexity and various lower bounds, it is interesting to fully understand the relation between the different measures of complexity. For this purpose we need to prove upper bounds as well as lower bounds. Some of the major open questions in communication complexity are of this type, and in particular the log-rank conjecture. The log-rank conjecture [17, 20] states that the deterministic communication complexity of a sign matrix A and the log of the rank of A are polynomially related. Namely, the conjecture is that there is a constant c such that every sign matrix A satisfies D(A) ≤ (log rank(A))^c. A fairly simple upper bound is D(A) ≤ rank(A), and that was the best known for quite a while. Drawing ideas and tools from [6, 8, 14, 15], Lovett [18] proved the bound D(A) ≤ O(√rank(A) · log rank(A)), achieving a first significant breakthrough at this end. On the other end, it is known that the constant c in the log-rank conjecture must be at least two [9].
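As a quick sanity check of the Mehlhorn-Schmidt bound log rank(A) ≤ D(A), consider the equality function on an N-element domain. The following sketch is our own illustration (not from the paper); the matrix and variable names are ours.

```python
import numpy as np

# Sign-matrix convention from above: A[i, j] = 1 when f(i, j) = 0 and
# A[i, j] = -1 when f(i, j) = 1. For equality, f(i, j) = 1 iff i == j,
# so A is the all-ones matrix with -1 on the diagonal: A = J - 2I.
N = 8
A = np.ones((N, N), dtype=int) - 2 * np.eye(N, dtype=int)

# J - 2I has eigenvalues N - 2 and -2, all nonzero for N > 2, so A has
# full rank and the Mehlhorn-Schmidt bound gives D(A) >= log2(N).
r = int(np.linalg.matrix_rank(A))
print(r, np.log2(r))  # → 8 3.0
```

Here the bound is essentially tight: the trivial protocol in which Alice sends her whole input costs log2(N) + 1 bits.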
The latter result is also a recent breakthrough, continuing a line of work [3, 22, 21, 20] which gradually constructed matrices with larger gaps between the log of the rank and the deterministic communication complexity.

Although the gap in our knowledge regarding the log-rank conjecture is very wide, there are interesting upper bounds on D(A), which include:

1. D(A) ≤ (N^0(A) + 1)(N^1(A) + 1) [2],
2. D(A) ≤ log rank(A) · log rank^+(A) [16],
3. D(A) ≤ log(rank(A) + 1)(min{N^0(A), N^1(A)} + 1) [16],
4. D(A) ≤ fs(A)(N^1(A) + 1) [16].

Here, N^1(A) and N^0(A) are the nondeterministic and co-nondeterministic communication complexity of A respectively, rank^+(A) is the positive rank of A, and fs(A) is the maximal size of a fooling set for A. All these complexity measures are known lower bounds on deterministic communication complexity. See [16] and also [13] for a comprehensive survey of these complexity measures and bounds. See [7] for a comparison between fs(A) and rank(A), and also an extension of the fooling set bound which is polynomially tight up to a logarithmic additive factor.

In the above bounds, roughly speaking, fs(A) or rank(A) serves as a potential function, while N^0(A) or N^1(A) serves as a pool of monochromatic rectangles. Another upper bound, similar in nature, is given by Nisan and Wigderson [20]. The statement, as it was phrased in [8], is

Theorem 1 ([20, 8, 18])
Let A be a sign matrix and let rank(A) = r. Assume that every submatrix B of A contains a monochromatic rectangle of size at least 2^{-q}|B|, where |B| is the size of B (i.e., the number of entries). Then D(A) ≤ O(log^2 r + q log r).

In the protocol of Nisan and Wigderson, rank serves, like before, as a kind of potential function. N^0(A) and N^1(A), on the other hand, are replaced by the size of a largest monochromatic rectangle, which appears in their bound as an independent complexity measure.

Gavinsky and Lovett [8] augmented the above repertoire of upper bounds on deterministic communication complexity. They proved that the deterministic communication complexity of a sign matrix A is at most O(CC(A) log rank(A)), where CC(A) is either the randomized communication complexity, information complexity, or zero-communication complexity of A. Thus, when the rank of the matrix is low, an efficient nondeterministic protocol or a randomized protocol implies an efficient deterministic protocol.

The core upper bound of [8] is in terms of extended discrepancy, and it therefore implies additional results to those listed above. In fact, as observed by Göös and Watson [10], extended discrepancy corresponds to a fractional version of approximate majority covers (see [12] for details). Thus, for example, the bounds of [8] are also valid for the Merlin-Arthur (MA) complexity of A, with error 1/3. In this model the players first make a nondeterministic guess and then perform a randomized protocol. This has the nice interpretation that when the rank is low, there is an efficient deterministic protocol, even compared with protocols combining the power of nondeterminism and randomization.

The proofs of [8] are based on the protocol of Nisan and Wigderson [20] (Theorem 1). In addition, they use a simple and clever lemma relating the size of almost monochromatic rectangles, under some conditions, to the size of monochromatic ones.

Theorem 2 ([8, 18])
Let A be an m × n sign matrix with rank(A) = r. Assume that the fraction of 1's or the fraction of -1's in A is at most 1/(8r). Then A contains a monochromatic rectangle R such that |R| ≥ mn/4.

Theorem 1 gives an upper bound on D(A) in terms of the size of monochromatic rectangles. Theorem 2 enables us to replace the notion of monochromatic rectangles in this bound by a more relaxed one.

Our contribution is to point out that the language of corruption bounds is perfectly suited for the line of proof described above. This enables us to give simple and elementary proofs for the results of [8] and [18]. Furthermore, the corruption bound, which is a central lower bound technique in (randomized) communication complexity, has proved relations with: randomized communication complexity, information complexity, zero-communication complexity, nondeterministic communication complexity, MA complexity, and positive rank. The corruption bound therefore provides a uniform view of previous upper bounds, as well as a natural link to the upper bound of Nisan and Wigderson, which is based on the size of monochromatic rectangles.

The heart of the matter is the following definition:
Definition 3
Let A be an m × n sign matrix, and denote by u the uniform distribution on [m] × [n]. Let v ∈ {-1, 1} be such that u(v) ≤ 1/2 (if u(1) = u(-1) then we let v = -1). For 0 ≤ ρ ≤ 1 define

mono_ρ(A) = log ( 1 / max_R { u(R) : u(v|R) ≤ ρ · u(v) } ),

where the maximum is over all combinatorial rectangles R. Let hmono_ρ(A) be the maximum of mono_ρ(B) over all submatrices B of A.

Denote the distribution of -1's and 1's in A by (α, 1 - α), and assume without loss of generality that α ≤ 1/2. Then, roughly speaking, mono_ρ(A) quantifies the relative size of a largest submatrix of A in which the frequency changes to (ρα, 1 - ρα) or is even more biased. That is, we seek a submatrix in which the frequency of -1's is at most a ρ fraction of their frequency in A. In particular, mono_0(A) quantifies the relative size of a largest submatrix of A containing only 1-entries.

The quantity hmono_ρ(A) is a hereditary version of mono_ρ(A), which is obviously needed if we wish to show a strong relation with communication complexity, which is itself hereditary. It is an easy exercise to show that hmono_0(A) ≤ D(A) + 1; this follows from the fact that every c-bit deterministic communication protocol for A partitions the matrix A into at most 2^c monochromatic combinatorial rectangles. The protocol of Nisan and Wigderson [20] implies that D(A) ≤ O(log^2 r + hmono_0(A) log r).

Therefore, the log-rank conjecture is true if and only if hmono_0(A) ≤ (log rank(A))^c for some constant c. This kind of relation (if true), between the rank of a matrix and the size of a monochromatic submatrix, is very hard to capture. The contribution of Theorem 2 is that instead of monochromatic submatrices we can consider the more relaxed notion hmono_{1/2}(A); that is:

Theorem 4
For every sign matrix A with r = rank(A) it holds that D(A) ≤ O(hmono_{1/2}(A) · log^2 r).

Clearly, mono_{ρ1}(A) ≤ mono_{ρ2}(A) whenever ρ1 ≥ ρ2; thus the above upper bound via hmono_{1/2}(A) is tighter than the previous bound in terms of hmono_0(A), ignoring the log-rank factors. The more important advantage, though, is that the nature of hmono_{1/2}(A) makes it easier to relate it to other complexity measures such as randomized communication complexity, information complexity, zero-communication complexity, and more [8]. This enhances the variety of upper bounds on deterministic communication complexity that are applicable when the rank is small. All these upper bounds follow from the relation between hmono_{1/2}(A) and the corruption bound, explained in Section 4.

Another complexity measure of a sign matrix A that can be tied to hmono_{1/2}(A) is the discrepancy of A, denoted disc(A), which is defined as follows: let σ be a distribution on the entries of A. The discrepancy with respect to σ is

max_R |σ(1, R) - σ(-1, R)|,

where the maximum is over all combinatorial rectangles in A. The discrepancy of A is the minimal discrepancy with respect to σ, over all probability distributions σ.

Discrepancy is often used to lower bound communication complexity in different models, and it is also equivalent (up to a constant) to the reciprocal of margin complexity. See [14, 15] for the definitions and a proof of the equivalence of these measures. This equivalence was used in [15] to prove that 1/disc(A) ≤ O(√rank(A)).

For a sign matrix A, let d = 1/disc(A) and r = rank(A). Lovett [18] proved that A contains a monochromatic rectangle of size 2^{-O(d log r)}|A|. Combined with the protocol of Nisan and Wigderson (Theorem 1) and the relation between discrepancy and rank, this proves that deterministic communication complexity is bounded by the square root of the rank, up to log factors.
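The inner maximization in the definition of discrepancy can be checked by brute force on tiny matrices. The sketch below is our own illustration (the matrix and function names are not from the paper): it computes the discrepancy of a 4 × 4 Hadamard-type sign matrix with respect to the uniform distribution. Since disc(A) is a minimum over all σ, any single σ only yields an upper bound on disc(A).

```python
import itertools
import numpy as np

def disc_wrt_sigma(A, sigma):
    """Max over combinatorial rectangles R of |sigma(1, R) - sigma(-1, R)|.

    A is a +/-1 matrix, sigma a probability distribution on its entries.
    Brute force over all row subsets S and column subsets T, so this is
    only feasible for very small matrices.
    """
    m, n = A.shape
    W = sigma * A  # entrywise; summing W over R gives sigma(1, R) - sigma(-1, R)
    best = 0.0
    for k in range(1, m + 1):
        for S in itertools.combinations(range(m), k):
            col_sums = W[list(S), :].sum(axis=0)  # restrict to rows in S
            for l in range(1, n + 1):
                for T in itertools.combinations(range(n), l):
                    best = max(best, abs(col_sums[list(T)].sum()))
    return best

# 4x4 Hadamard-type sign matrix (inner product mod 2), uniform sigma.
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])
sigma = np.full((4, 4), 1 / 16)
print(disc_wrt_sigma(H, sigma))  # → 0.3125, attained by the top-left 3x3 submatrix
```

Hadamard-type matrices are a standard example of low discrepancy; their maximal rectangle imbalance is bounded by Lindsey's lemma, which is why the value above is well below the trivial bound of 1.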
We give in Section 5 an elementary proof of a slightly different statement, hmono_{1/2}(A) ≤ O(d log d), which gives the same result up to a log factor.

Preliminaries

Let A be an m × n sign matrix, and let µ be a probability distribution on [m] × [n]. For a set of entries E ⊆ [m] × [n], let µ(E) denote the sum Σ_{(i,j)∈E} µ(i, j). A combinatorial rectangle is a subset S × T of entries, where S ⊆ [m] and T ⊆ [n]. That is, a combinatorial rectangle corresponds to a submatrix of A. With a slight abuse of notation, for v ∈ {±1} and a combinatorial rectangle R, we denote µ(v) = µ({(i, j) | A_{i,j} = v}) and µ(v, R) = µ({(i, j) ∈ R | A_{i,j} = v}). We also write µ(v|R) for the probability that A_{i,j} = v conditioned on (i, j) ∈ R, which is equal to µ(v, R)/µ(R).

We call a distribution µ on [m] × [n] uniformly-balanced for A if it satisfies:

• The set of entries (i, j) for which µ(i, j) > 0 is a combinatorial rectangle.
• A_{i,j} = A_{x,y} implies that µ(i, j) = µ(x, y), if both are nonzero.
• µ(1) = µ(-
1) = 1/2.

We use uniformly-balanced distributions in Section 4 to define a simple version of the corruption bound and relate it to hmono_ρ(A).

The upper bound
To prove Theorem 4 we need a slight variation of Theorem 2.
Claim 5
Let A be an m × n sign matrix with rank(A) = r, and let v ∈ {-1, 1}. Assume that the fraction of -v's in A is at most 1/(8r). Then A contains a v-monochromatic rectangle R such that |R| ≥ mn/4.

Proof
The claim follows from Theorem 2, by observing that the monochromatic rectangle R given by Theorem 2 is too big to contain only -v entries, as the fraction of -v's in A is at most 1/(8r) ≤ 1/8.

Proof [of Theorem 4] First, recall that D(A) ≤ O(log^2 r + hmono_0(A) log r) by Theorem 1. Second, the bound hmono_0(A) ≤ O(hmono_{1/(8r)}(A)) follows from Claim 5. Finally, observe that for every 0 ≤ ρ1, ρ2 ≤ 1 it holds that

hmono_{ρ1 ρ2}(A) ≤ hmono_{ρ1}(A) + hmono_{ρ2}(A).

The proof of the above inequality is straightforward from the definition. It makes amplification possible, and in particular shows that hmono_{1/(8r)}(A) ≤ O(log r · hmono_{1/2}(A)). Combining these inequalities gives the proof.

The corruption bound

We would like to show that hmono_ρ(A) is a lower bound on randomized communication complexity, information complexity, zero-communication complexity, nondeterministic communication complexity, and positive rank. The simplest way to do that is to relate it to the corruption/rectangle bound (defined in the sequel), which was proved to be a lower bound on all these complexity measures, and more.

In a way, hmono_ρ(A) is a very simple version of the corruption bound. When Yao first used the corruption bound as a lower bound on randomized communication complexity, his definition was in the spirit of mono_ρ(A) (see Lemma 3 in [25]). This first bound did not take into account the fact that one can choose any probability distribution over the entries of the matrix, and not only the uniform distribution. It is easy to make mono_ρ(A) small even for matrices with high randomized communication complexity, for example by planting a large monochromatic submatrix in a random sign matrix. This can be fixed by considering the worst probability distribution over the entries of the matrix, which can, for example, give zero weight to the large monochromatic submatrix. The corruption bound is usually defined as follows [4, 23, 12, 5, 11]:

Definition 6
Let A be a sign matrix, 0 ≤ ε ≤ 1, and v ∈ {-1, 1}. For a probability distribution µ on the entries of A, define

size^{(v)}_ε(A, µ) = max_R { µ(R) : µ(-v|R) ≤ ε },

where the maximum is over all combinatorial rectangles R. Define

corr^{(v)}_ε(A) = max_µ log ( 1 / size^{(v)}_ε(A, µ) ),

where µ runs over all balanced distributions for A (a balanced distribution is a distribution under which the probability of -1's and the probability of 1's are both bounded from below by a constant). Finally, define

corr_ε(A) = max { corr^{(1)}_ε(A), corr^{(-1)}_ε(A) }.

We use a simple version of the corruption bound which is similar to the above, only we maximize over the family of uniformly-balanced distributions (defined in Section 1.2) instead of all balanced distributions. We denote this variant of the corruption bound by ubc_ε(A). Obviously ubc_ε(A) ≤ corr_ε(A) for every sign matrix A and 0 ≤ ε ≤ 1, as we maximize over a subfamily of probability distributions.

As we show next, ubc_ε(A) is closely related to hmono_{2ε}(A).

Lemma 7
Let A be an m × n sign matrix. Then, for every 0 ≤ ε ≤ 1/2 it holds that hmono_{2ε}(A) ≤ ubc_ε(A).

Proof
Let u be the uniform distribution on [m] × [n], and denote by µ the uniformly-balanced distribution for A supported on all of [m] × [n]. For every entry (i, j) it holds that

A_{i,j} = -1 ⇒ µ(i, j) = u(i, j) / (2u(-1)),   (1)
A_{i,j} = 1 ⇒ µ(i, j) = u(i, j) / (2u(1)).

The proof is very simple, but technical. We therefore first give the basic intuition. Consider the extreme case where u(1) = u(-
1) = 1/2. In this case µ = u, and seeking a combinatorial rectangle with µ(-1|R) ≤ ε and large µ(R) is equivalent to seeking a combinatorial rectangle with u(-1|R) ≤ 2ε · u(-1) and large u(R). Thus, in this case, the quantities of interest for ubc_ε(A) and for hmono_{2ε}(A) coincide. Since we consider uniformly-balanced distributions, and normalize the distribution so that the probabilities of 1's and -1's are equal, the general case is similar. In the general case there is an advantage towards hmono_{2ε}(A) that increases as the imbalance between the number of 1's and -1's increases. The proof essentially shows that the rectangle R found for ubc_ε(A) is also good for hmono_{2ε}(A). In the first part of the proof below we show that for this rectangle u(R) ≥ µ(R), and in the second part that u(-1|R) ≤ 2ε · u(-1).

Assume without loss of generality that u(-1) ≤ 1/2, and let ubc_ε(A) = k. Then there is a rectangle R such that µ(R) ≥ 2^{-k} and µ(-1|R) ≤ ε. Thus

u(R) = Σ_{(i,j)∈R} u(i, j)
     = Σ_{A_{i,j}=-1, (i,j)∈R} u(i, j) + Σ_{A_{i,j}=1, (i,j)∈R} u(i, j)
     = Σ_{A_{i,j}=-1, (i,j)∈R} 2u(-1)µ(i, j) + Σ_{A_{i,j}=1, (i,j)∈R} 2u(1)µ(i, j)
     = 2u(-1)µ(-1, R) + 2u(1)µ(1, R)
     = 2µ(R) [ u(-1)µ(-1|R) + (1 - u(-1))(1 - µ(-1|R)) ]
     = 2µ(R) [ 1 - u(-1) - µ(-1|R) + 2u(-1)µ(-1|R) ]
     ≥ µ(R).

The last step follows from the fact that the function f(x, y) = 1 - x - y + 2xy satisfies f(x, y) ≥ 1/2 for x, y ∈ [0, 1/2]. Recall that u(-1) ≤ 1/2 and µ(-1|R) ≤ ε ≤ 1/2. Also,

u(-1, R) = Σ_{A_{i,j}=-1, (i,j)∈R} u(i, j)
         = Σ_{A_{i,j}=-1, (i,j)∈R} 2u(-1)µ(i, j)
         = 2u(-1)µ(-1, R)
         ≤ 2u(-1) · ε · µ(R)
         ≤ 2ε · u(-1) · u(R).

This shows that mono_{2ε}(A) ≤ k, since u(R) ≥ µ(R) ≥ 2^{-k} and u(-1|R) ≤ 2ε · u(-1). Arguing similarly for every submatrix B of A gives hmono_{2ε}(A) ≤ k.

Theorem 8 ([18])
Let A be a sign matrix, and let disc(A) = 1/d. Then hmono_{1/2}(A) ≤ O(d log d).

Proof
The proof essentially observes that discrepancy corresponds to the error in corruption. Suppose that ubc_{1/2 - 1/(6d)}(A) = O(log d); then by Lemma 7 this implies that hmono_{1 - 1/(3d)}(A) ≤ O(log d), and therefore, by amplification, hmono_{1/2}(A) ≤ O(d log d).

We now prove that ubc_{1/2 - 1/(6d)}(A) = O(log d). Let µ be a uniformly-balanced distribution for A. Since disc(A) = disc(-A), it is enough to prove the existence of a large combinatorial rectangle satisfying µ(-1|R) ≤ 1/2 - 1/(6d). By definition of the discrepancy, there is a combinatorial rectangle R such that

| Σ_{(i,j)∈R} µ(i, j) A_{i,j} | ≥ 1/d.   (2)

Observe that we can assume that

Σ_{(i,j)∈R} µ(i, j) A_{i,j} ≥ 1/(3d).

Otherwise, the sum in Equation (2) is negative, and since µ is uniformly-balanced,

Σ_{(i,j)∈ R̄} µ(i, j) A_{i,j} ≥ 1/d,

where R̄ is the complement of R. But R̄ can be partitioned into three combinatorial rectangles, and thus there is a rectangle R′ such that

Σ_{(i,j)∈R′} µ(i, j) A_{i,j} ≥ 1/(3d).

Now, Σ_{(i,j)∈R} µ(i, j) A_{i,j} = µ(1, R) - µ(-1, R), and µ(R) = µ(1, R) + µ(-1, R). Therefore,

µ(-1, R) = (1/2) ( µ(R) - Σ_{(i,j)∈R} µ(i, j) A_{i,j} )
         ≤ (1/2) ( µ(R) - 1/(3d) )
         = ( 1/2 - 1/(6d·µ(R)) ) µ(R)
         ≤ ( 1/2 - 1/(6d) ) µ(R).

This concludes the proof, as obviously µ(R) ≥ 1/(3d).

Acknowledgements

I thank Troy Lee and Michal Parnas for their help in writing this manuscript.
References

[1] H. Abelson. Lower bounds on information transfer in distributed computations. In Proceedings of the 19th IEEE Symposium on Foundations of Computer Science, pages 151–158. IEEE, 1978.
[2] A. Aho, J. Ullman, and M. Yannakakis. On notions of information transfer in VLSI circuits. In Proceedings of the 15th ACM Symposium on the Theory of Computing, pages 133–139. ACM, 1983.
[3] N. Alon and P. Seymour. A counterexample to the rank-coloring conjecture. Journal of Graph Theory, 13(4):523–525, 1989.
[4] L. Babai, P. Frankl, and J. Simon. Complexity classes in communication complexity theory. In Proceedings of the 27th IEEE Symposium on Foundations of Computer Science. IEEE, 1986.
[5] P. Beame, T. Pitassi, N. Segerlind, and A. Wigderson. A strong direct product lemma for corruption and the NOF complexity of disjointness. Computational Complexity, 15(4):391–432, 2006.
[6] E. Ben-Sasson, S. Lovett, and N. Ron-Zewi. An additive combinatorics approach relating rank to communication complexity. In Foundations of Computer Science (FOCS), 2012 IEEE 53rd Annual Symposium on, pages 177–186, 2012.
[7] M. Dietzfelbinger, J. Hromkovič, and G. Schnitger. A comparison of two lower-bound methods for communication complexity. Theoretical Computer Science, 168(1):39–51, 1996.
[8] D. Gavinsky and S. Lovett. En route to the log-rank conjecture: New reductions and equivalent formulations. Electronic Colloquium on Computational Complexity (ECCC), vol. 20, p. 80, 2013.
[9] M. Göös, T. Pitassi, and T. Watson. Deterministic communication vs. partition number. In Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on, pages 1077–1088. IEEE, 2015.
[10] M. Göös and T. Watson. Private communication.
[11] R. Jain and H. Klauck. The partition bound for classical communication complexity and query complexity. CoRR, abs/0910.4266, 2009.
[12] H. Klauck. Rectangle size bounds and threshold covers in communication complexity. In Proceedings of the 18th IEEE Conference on Computational Complexity. IEEE, 2003.
[13] E. Kushilevitz and N. Nisan. Communication Complexity. Cambridge University Press, 1997.
[14] N. Linial, S. Mendelson, G. Schechtman, and A. Shraibman. Complexity measures of sign matrices. Combinatorica, 27(4):439–463, 2007.
[15] N. Linial and A. Shraibman. Learning complexity versus communication complexity. In Proceedings of the 23rd IEEE Conference on Computational Complexity, pages 384–393. IEEE, 2008.
[16] L. Lovász. Communication complexity: A survey. In B. Korte, L. Lovász, H. Prömel, and A. Schrijver, editors, Paths, Flows, and VLSI-Layout, pages 235–265. Springer-Verlag, 1990.
[17] L. Lovász and M. Saks. Communication complexity and combinatorial lattice theory. Journal of Computer and System Sciences, 47:322–349, 1993.
[18] S. Lovett. Communication is bounded by root of rank. Technical Report arXiv:1306.1877, arXiv, 2013.
[19] K. Mehlhorn and E. Schmidt. Las Vegas is better than determinism in VLSI and distributed computing. In Proceedings of the 14th ACM Symposium on the Theory of Computing, pages 330–337. ACM, 1982.
[20] N. Nisan and A. Wigderson. A note on rank vs. communication complexity. Combinatorica, 15(4):557–566, 1995.
[21] R. Raz and B. Spieker. On the log rank conjecture in communication complexity. Combinatorica, 15(4):567–588, 1995.
[22] A. Razborov. The gap between the chromatic number of a graph and the rank of its adjacency matrix is superlinear. Discrete Mathematics, 108:393–396, 1992.
[23] A. Razborov. On the distributional complexity of disjointness. Theoretical Computer Science, 106:385–390, 1992.
[24] A. Yao. Some complexity questions related to distributive computing. In Proceedings of the 11th ACM Symposium on the Theory of Computing, pages 209–213. ACM, 1979.
[25] A. Yao. Lower bounds by probabilistic arguments. In Proceedings of the 24th IEEE Symposium on Foundations of Computer Science, pages 420–428. IEEE, 1983.