An application of communication complexity, Kolmogorov complexity and extremal combinatorics to parity games
AAn application of communication complexity,Kolmogorov complexity and extremal combinatoricsto parity games
Alexander Kozachinskiy ∗ and Mikhail Vyalyi † National Research University Higher School of Economics, RussianFederation Moscow Institute of Physics and Technology, Russian Federation Dorodnicyn Computing Centre, FRC CSC RAS, Russian Federation
Abstract
So-called separation automata are in the core of several recently invented quasi-polynomial time algorithms for parity games. An explicit q -state separation au-tomaton implies an algorithm for parity games with running time polynomial in q . It is open whether a polynomial-state separation automaton exists. A positiveanswer will lead to a polynomial-time algorithm for parity games, while a negativeanswer will at least demonstrate impossibility to construct such an algorithm usingseparation approach.In this work we prove exponential lower bound for a restricted class of sep-aration automata. Our technique combines communication complexity and Kol-mogorov complexity. One of our technical contributions belongs to extremal com-binatorics. Namely, we prove a new upper bound on the product of sizes of twofamilies of sets with small pairwise intersection. Applications of Communication Complexity (CC) in Formal Language Theory (FLT)are well-known. Apparently, the most important one is obtaining lower bounds on statecomplexity of non-deterministic automata (NFA) (see, e.g., the monograph [15]). CCis also applied to analysis of nondeterminism measures in finite automata [16] and for ∗ [email protected] † [email protected] a r X i v : . [ c s . F L ] S e p number of other problems in FLT (e.g., see [1] for bounds on nondeterministic com-munication complexity of regular languages). Also, it is worth to mention that lowerbounds on memory used in streaming algorithms, an another important application ofCC (see the book [26]), can be viewed as lower bounds on size of probabilistic automataof specific form. Note that most of these applications have equivalent combinatorialcounterparts (see discussion in [12]).In this paper we extend the applications of CC to separation problems for safetyautomata . These automata accept or reject infinite words. They appeared in recentdevelopments in algorithmic game theory. More exactly, safety automata play an im-portant role in analysis of quasi-polynomial algorithms for solving parity games [7] (seedetails below).We are interested in state complexity of deterministic safety automata separating apair of languages (in the sequel, separation automata ). There are no convenient tools forthis task. We propose an approach based on time restriction . Our technique gives lowerbounds on state complexity of separation automata that accept a word after readinga sufficiently short prefix of an infinite word.These lower bounds are based on lower bounds for multi-party nondeterministic com-munication complexity in the number-in-hand model. But, in contrast with the previousworks, we do not give a direct way to convert a small separation automaton to a protocolsolving an appropriate communication problem. Our approach uses also the ideas of thefooling set technique. We conclude from lower bounds for a communication problemthat a small automata cannot separate a specific family of pairs of finite languages. Inthe definition of this family we use Kolmogorov complexity to control the size of com-munication protocols. Next step is to use this family to construct a pair of words foolinga small separation automaton. The family is used multiple times, and each time we haveto manage Kolmogorov complexity by exploiting the fact that the automaton has fewstates.We hope that the approach presented in this paper has a potential to get more strongbounds for separation automata as well as to be applied for other problems in FLT.To present our results in more details, we need a brief introduction to the area ofparity games and to separation approach in solving parity games. For a game with two competitive players one can consider a problem of deciding whichplayer has a winning strategy. Solving parity games is a classical example when thisproblem lies in NP ∩ coNP yet for which no polynomial-time algorithm is known. Tospecify an instance of a parity game one needs to specify: • n -node directed graph in which any node has at least one outgoing-edge; • indicated initial node; • labeling of edges by integers from { , , . . . , d } ( priorities );2 partition of nodes into two parts, V and V .There are two players named Player 0 (Even) and
Player 1 (Odd). A position of agame is specified by a node of a graph. It is possible to move from node u to node v ifand only if ( u, v ) is an edge of a graph. For each node u it is predetermined which playermakes a move in u . Namely, Player 0 makes a move in V and Player 1 makes a move in V . Since all nodes have out-going edges, a play can always last for infinite number ofmoves. In this way we obtain an infinite sequence of nodes { v k } ∞ k =1 visited by players.We can also look at the sequence of corresponding priorities. Namely, let l k be a priorityof an edge ( v k , v k +1 ). Winning conditions in parity game are the following: Player i winsif and only if lim sup k →∞ l k ≡ i (mod 2) . Such a winning condition is Borel, which means due to Martin’s theorem ([22]) thateither Player 0 or Player 1 has a winning strategy. Moreover, it turns out that a playerhaving a winning strategy in a parity game has also a memoryless winning strategy, i.e.one in which every move depends only on a current node ([8, 24]). This fact means alot for the complexity of
ParityGames , a problem of determining the winner of a paritygame. Namely, due to this fact
ParityGame is in NP ∩ coNP (a short certificate for a playeris his/her memoryless winning strategy). More involved argument shows that actually ParityGames is in UP ∩ coUP ([18]).All this leaves a hope that ParityGames is solvable in polynomial time. Yet this isstill an open problem. A lot of work was done to improve an obvious n n -time algorithmchecking all memoryless strategies (see, e.g., [25, 20, 23, 27]). This finally led in 2017 toa quasi-polynomial time algorithm for ParityGames : Theorem 1 (Calude et. al., [5]) . ParityGames with n nodes and d priorities can be solvedin n O (log d ) time. Although we made no assumptions on d , it is clear that we can always reduce a giveninstance of parity game to one in which d is linear in the number of edges. Thus in aworst case the algorithm of Calude et. al. takes n O (log n ) time. Since the paper of Calude et. al., several other quasi-polynomial time algorithms wereinvented for
ParityGames ([19, 10, 21]). The paper of Czerwiński et. al. ([7]) arguesthat all these works follow so-called separation approach . Let us briefly summarize thisapproach.The main idea is to reduce
ParityGames to reachability games . To specify a reacha-bility game one needs to specify a graph and mark some of its nodes as winning. Thegoal of one player is to visit one of winning nodes at least once. Correspondingly, thegoal of the other player is to avoid winning nodes.3 standard analysis of complete information games works also for reachability games,which leads to a polynomial-time algorithm for the these games. In separation approach,a parity game on a graph G with n nodes is reduced to a reachability game on a product of G and the transition graph of some specific deterministic finite automaton A .The input alphabet of A is a set { , . . . , n } × { , . . . , d } (pairs of the form h a node of G , a priority i ). We use this alphabet to encode infinite paths in n -node labeled graphswith d priorities. Namely, assume that an infinite path starts with node v , then goes to v , then to v and so on. Moreover, assume that the priority of the first edge in a pathis p , the priority of the second edge in a path is p etc. Then this path corresponds tothe infinite sequence ( v , p )( v , p )( v , p ) . . . over the alphabet { , . . . , n } × { , . . . , d } .In what follows by saying that A does something on an infinite path we mean that A does something on the input sequence corresponding to this path.To make a reduction correct, we impose the following requirement on A . Thereshould be a state q accept of A with the following properties: • A reaches q accept on all paths produced by memoryless strategies of Player 0 whichare winning for some n -node graph with d priorities. • A never reaches q accept on any path produced by a memoryless strategy of Player1 which is winning for some n -node graph with d priorities.Automata satisfying the above requirements are called separation automata . It fol-lows immediately from definition that a memoryless winning strategy in a parity gameon G yields a winning strategy in a reachability game on a product of G and A , where A is a separation automaton and winning nodes correspond to a state q accept . Thus indeedto solve a parity game on G it is enough to solve such a reachability game, and this takestime polynomial in the number of states of A .It is possible to simplify a little bit a definition of separation automata. A graph iscalled even ( odd ) if the maximum of priorities along a cycle is even (odd) for all cycles.Take any winning positional strategy of Player i in a parity game on G . Notice that ifwe remove from G all edges contradicting this strategy, then we obtain, depending on i ,either odd or even graph.In [7], Czerwiński et. al. define the following two languages consisting of infinite wordsover { , . . . , n } × { , . . . , d } . Denote by EvenCycles n,d ⊆ ( { , . . . , n } × { , . . . , d } ) N theset of all inputs sequences to A which correspond to some infinite path in an evengraph with at most n nodes and d priorities. Define OddCycles n,d similarly. Now fromthe observation above it follows that we can define separation automata equivalently asfollows: A should reach q accept on sequences from EvenCycles n,d and should avoid q accept on sequences from OddCycles n,d .As far as we know, before [7] a formalization of separation approach appearsin a textbook of Bojańczyk and Czerwiński [3]. However, instead of EvenCycles n,d and OddCycles n,d , they used another two languages, EvenLoops n,d and OddLoops n,d .Namely, EvenLoops n,d (OddLoops n,d ) consists of all infinite paths in which the maximum Actually, [3] contains no name for these two languages and we use a terminology of [7].
4f priorities between any two visits of a same node is always even (odd). It is clear thatEvenCycles n,d (cid:40)
EvenLoops n,d and OddCycles n,d (cid:40)
OddLoops n,d . Thus it is easier toconstruct separation automata in a sense of [7] than in a sense of [3]. Correspondingly, itis easier to obtain lower bounds against the latter than against the former. We stress thatin this paper we follow the approach of [7], i.e., we use EvenCycles n,d and OddCycles n,d .To describe the main lower bound of [7] we shall introduce two more sets of infinitesequences from ( { , . . . , n } × { , . . . , d } ) N . Namely, let LimSupEven n,d be the set ofall sequences ( v , p )( v , p )( v , p ) ∈ ( { , . . . , n } × { , . . . , d } ) N satisfying lim sup i →∞ p i ≡ n,d similarly. Again, it is clear thatEvenCycles n,d (cid:40) LimSupEven n,d , OddCycles n,d (cid:40)
LimSupOdd n,d . First Czerwiński et. al. demonstrate that actually all quasi-polynomial time algorithmsfor parity games listed above provide a quasi-polynomial-state automaton separatingLimSupEven n,d from OddCycles n,d (in the same sense of separation as above — an au-tomaton should reach q accept on sequences from the first set an avoid q accept from sequencesof the second set). It is more than required in separation approach — however, no quasi-polynomial state automaton doing “no more than required” is known.On the other hand Czerwiński et. al. show that any automaton separatingLimSupEven n,d from OddCycles n,d has n Ω(log d ) number of states. This exactly matchesknown constructions. To obtain such a lower bound, they introduce a combinatorialobject called “universal trees” and show that automata separating LimSupEven n,d fromOddCycles n,d should contain a universal tree within the set of its states. Then they provea quasi-polynomial lower bound on universal trees.It is not clear how to generalize this technique to separation EvenCycles n,d fromOddCycles n,d (for which no better lower bound that just n is known). One of theobstacles is that the lower bound based on universal trees works also for non-deterministicautomata. At the same time separation of EvenCycles n,d and OddCycles n,d is veryeasy with non-determinism allowed — just guess a node appearing more then once andcompute the maximum between two occurrences of this node. We attack the question of obtaining lower bound on automata separating EvenCycles n,d and OddCycles n,d . To do so we first relax a notion of separation automata by introducingan additional parameter t . Namely, recall that for any w ∈ EvenCycles n,d a separationautomaton should reach an accepting state on some finite prefix on w . The length ofsuch prefix is not anyhow bounded. We suggest to simplify the problem and study it forautomata in which such prefix is of length at most t .More specifically, we say that a deterministic finite automaton separatesEvenCycles n,d from OddCycles n,d in time t if for all w ∈ ( { , . . . , n } × { , . . . , d } ) N : • if w ∈ EvenCycles n,d , then an automaton reaches q accept while reading w w . . . w t and always stays in q accept after that; 5 if w ∈ OddCycles n,d , then an automaton never reaches q accept on w .A requirement that an automaton stays in q accept forever after reading w w . . . w t isnot essential because we can make q accept an absorbing state.It is easy to see that a deterministic automaton with q states separating EvenCycles n,d from OddCycles n,d necessarily does it in qn -time (for the sake of completeness we includethe proof in Appendix A). Thus a lower bound q on the size of separation automataworking in time t implies min { t/n, q } lower bound on the size of unrestricted separationautomata.Even super-linear lower bounds for unrestricted separation automata are not known.To obtain such bounds with our approach we first have to prove a good lower bound forsuper-quadratic t . Unfortunately, lower bounds we obtain in this paper are reasonableonly for t = O ( n / ). Theorem 2.
Any deterministic finite automaton separating
EvenCycles n, from OddCycles n, in time t has exp (Ω( n /t )) number of states. Notice that this theorem is true even for d = 2. The fact that our argument usesonly 2 priorities means that essentially new ideas are needed to obtain similar boundfor super-quadratic t . Indeed, there exists a simple O ( n )-state deterministic automaton,separating EvenCycles n, from OddCycles n, in O ( n )-time (namely, accept if and onlyif at least n + 1 priorities which are equal to 2 have been already seen). For our proof we define the following communication problem which is a variation ofDisjointness problem. Fix n, k ∈ N and γ >
0. There are k parties. The i th partyreceives a set X i ⊆ { , , . . . , n } of size b n/k c . It is promised that either X , X , . . . , X k are disjoint or ∀ i, i ∈ { , . . . , k } | X i X i | (cid:54) γ · b n/k c . The goal of parties is to output1 in the first case and 0 otherwise. We denote this problem by DISJ k,γ ( n ).We show the following lower bound on DISJ k,γ ( n ): Theorem 3.
For all large enough n and for all k ∈ { , . . . , n − } and γ ∈ (0 , satisfying kγ (cid:54) √ n the non-deterministic communication complexity of DISJ k,γ ( n ) is atleast γ n · k − ( n ) . A similar problem (without restrictions on sizes of input sets) in the two-party settingwas considered in [13]. We postpone proof of Theorem 3 to Section 5.To show Theorem 3 we prove the following result from extremal combinatorics whichis interesting on its own:
Theorem 4.
For all n, a, t ∈ N satisfying t < a < n the following holds. If F ⊆ (cid:16) [ n ] a (cid:17) and G ⊆ (cid:16) [ n ] a (cid:17) are such that | F ∩ G | (cid:54) t for all F ∈ F and G ∈ G , then |F | · |G| (cid:54) a ( n − a ) · e − ( a − t − / (20 a ) na ! .
6e postpone proof of Theorem 4 to Section 4. For a special case when t = Ω( n )and a − t = Ω( n ) this bound can be found in a classical work of Frankl and Rödl [11].Moreover, their result only requires that | F ∩ G | 6 = t + 1 for all F ∈ F , G ∈ G . However,the paper [11] does not contain a complete proof of this bound and it is unclear how torestore details omitted. Also, it is quite hard to turn a proof of Frankl and Rödl into anexplicit bound for sublinear a and t . We denote the set { , , . . . , n } by [ n ] and the set { a, a + 1 , . . . , b } by [ a, b ]. By 2 [ n ] wemean the set of all subsets of [ n ] and by (cid:16) [ n ] k (cid:17) we mean the set of all k -element subsetsof [ n ]. Notation X Y is used for the symmetric difference of two sets X , Y . Let Σ be a finite alphabet. For w ∈ Σ ∗ ∪ Σ N by | w | we denote the length of w . We assumethat subscripts enumerating letters of w start with 1, i.e., we write w = w w w . . . A deterministic finite automaton A over Σ is specified by a finite set Q of its states,an indicated initial state q start ∈ Q and a transition function δ A : Q × Σ → Q . As usual,we extend δ A to be a function of the form δ : Q × Σ ∗ → Q by setting δ A ( q, w . . . w p ) tobe a state reached by the automaton from q ∈ Q after reading w . . . w p ∈ Σ ∗ .For A, B ⊆ Σ N , A ∩ B = ∅ , we say that a deterministic finite automaton A separates A from B if there exists a state q accept ∈ Q such that for all w = w w . . . ∈ Σ N thefollowing holds: • if w ∈ A , then there exists i ∈ N such that δ A ( q start , w . . . w i ) = q accept for all i (cid:62) i ; • if w ∈ B , then for all i ∈ N it holds that δ A ( q start , w . . . w i ) = q accept .We say that an automaton separates A from B in time t if, instead of the first condition,the stronger one holds: if w ∈ A , then δ ( q , w . . . w i ) = q accept for all i (cid:62) t .A game graph with n nodes and d priorities is a pair G = h E, π i , where • E is a subset of { , . . . , n } satisfying the following condition: for all u ∈ { , . . . , n } there is v ∈ { , . . . , n } such that ( u, v ) ∈ E ; • π is a function of the form π : E → { , . . . , d } .I.e., we consider G as a directed graph in which nodes are elements of { , . . . , n } andedges are elements of E . Moreover, edge e has a label π ( e ) ∈ { , . . . , d } on it. Edgelabels are called priorities. A game graph should satisfy the following requirement: foreach node, there exists at least one out-going edge. We stress that we allow loops butdo not allow parallel edges . Our main lower bound holds for graphs without loops as well and the proof is easily adaptable. Tosimplify an exposition, we present a weaker result. G = h E, π i is called even (odd), if the maximum of π on everycycle of G is even (odd). More formally, G is called even (odd) if for all k (cid:62) v , . . . , v k ∈ { , . . . , n } satisfying:( v , v ) , ( v , v ) , . . . , ( v k − , v k ) , ( v k , v ) ∈ E, it holds that:max { π (( v , v )) , π (( v , v )) , . . . , π (( v k − , v k )) , π (( v k , v )) } is even (odd).Now let us define two sets (languages) consisting of infinite words over an alpha-bet Σ = { , . . . , n } × { , . . . , d } . These two languages will be called EvenCycles n,d and OddCycles n,d . Namely, an infinite sequence ( v , l )( v , l )( v , l ) . . . ∈ ( { , . . . , n } ×{ , . . . , d } ) N belongs to EvenCycles n,d if there exists an even game graph G = h E, π i with at most n nodes and d priorities such that for all i (cid:62) v i , v i +1 ) ∈ E and π (( v i , v i +1 )) = l i . I.e., we put ( v , l )( v , l )( v , l ) . . . into EvenCycles n,d if and onlyif this sequence can be realized as an infinite path in some even game graph with at n nodes and d priorities.If, instead of G being even, we require that G is odd, we obtain a definition ofOddCycles n,d . For our main lower bound we use non-deterministic communication complexity in the number-in-hand model, but let us start with the deterministic case. In the number-in-hand model there are k parties and their goal is to compute some (fixed in advance,possibly partial) function f : X × . . . × X k → { , } , where sets X , . . . , X k are finite.The i th party receives an element X i of X i on input. Parties have a shared blackboard onwhich they can write binary messages. Blackboard is seen by all parties. A deterministicprotocol specifies at each moment of time: • whose turn is to write on the blackboard (depending on what is already writtenthere); • a message of the corresponding party (which depends not only on what is writtenon the blackboard but also on the player’s input).In the end of the communication, parties output a single bit which is assumed to be thevalue of f on ( X , . . . , X k ). This bit is a function of the history of communication, i.e.it can be computed by an external observer who can see only the blackboard but doesnot see inputs of players. The communication complexity of a deterministic protocol π (denoted below by CC ( π )) is the maximal possible (over all inputs) number of bitswritten on the blackboard in π .Now let us switch to non-deterministic protocols. The most convenient definitionfor us is the following one. A non-deterministic protocol is a set P of deterministic8rotocols. A run of a non-deterministic protocol has two phases. At first phase partiesguess π ∈ P . The guess is public so all the parties have the same π . Then the partiesrun π on ( X , . . . , X k ). By the communication complexity of P we mean the followingexpression: CC ( P ) = d log ( |P|e ) + max π ∈P CC ( π ) . In particular, besides communication in π , the number of bits needed to specify π alsocounts. For brevity, we use a term “ c -bit protocol” for a protocol with the communicationcomplexity at most c .We say that P computes f if for all ( X , . . . , X k ) ∈ X × . . . × X k it holds that: • if f ( X , . . . , X k ) = 1, then there is π ∈ P such that π outputs 1 on ( X , . . . , X k ); • if f ( X , . . . , X k ) = 0, then for all π ∈ P it holds that π outputs 0 on ( X , . . . , X k ).Finally, by the non-deterministic communication complexity of f we mean the minimal c ∈ N such that there exists a c -bit non-deterministic communication protocol computing f . More formal introduction to the number-in-hand model can be found, for instance, in[17, Chapter 5]. For our lower bound we use only a very basic technique of monochromaticboxes . This technique is a generalization of a standard two-party monochromatic rectan-gle technique. A box is a set of the form F × . . . × F k for some F ⊆ X , . . . , F k ⊆ X k .We exploit the following feature of protocols: a c -bit non-deterministic protocol comput-ing f induces a cover of { ( X , . . . , X k ) ∈ X × . . . × X k : f ( X , . . . , X k ) = 1 } by at most2 c boxes such that each box in the cover does not contain a tuple on which f is definedand takes value 0. Consider two binary strings x and y . Informally speaking, the conditional Kolmogorovcomplexity of x given y is the minimal length of a program producing x from y (lengthis measured in bits). To define it formally, consider any partial computable function D : { , } ∗ × { , } ∗ → { , } ∗ . Let C D ( x | y ) denotemin {| p | : p ∈ { , } ∗ and D ( p, y ) = x } . (here, as above, | p | stands for the length of p ). So C D ( x | y ) can be viewed as a compressedsize of x given y with respect to “decompressor” D . Kolmogorov – Solomonoff theoremstates that there exists an “optimal” decompressor; more precisely, there is a partialcomputable function D : { , } ∗ ×{ , } ∗ → { , } ∗ such that for any partial computablefunction D : { , } ∗ × { , } ∗ → { , } ∗ there exists A > x, y ∈ { , } ∗ we have C D ( x | y ) (cid:54) C D ( x | y ) + A . We fix any such D and let C ( x | y ) = C D ( x | y ) bethe Kolmogorov complexity of x given y . We also define the unconditional Kolmogorovcomplexity of x as the Kolmogorov complexity of x given the empty word.Let us list some standard properties of Kolmogorov complexity which will be used inthis paper. Proofs of them can be found, for instance, in [28].9 roposition 5. For any z ∈ { , } ∗ the number of x ∈ { , } ∗ satisfying C ( x | z ) (cid:54) a isless than a +1 . Proposition 6.
For any computable function f ( · , · ) and for all x, y ∈ { , } ∗ the follow-ing holds: C ( f ( x, y ) | y ) (cid:54) C ( x | y ) + O (1) (constant hidden in O ( · ) depends only on f but not on x and y ). Proposition 7.
For all m ∈ N and for all x , . . . , x m , y ∈ { , } ∗ the following holds: C ( x , x , . . . , x m | y ) (cid:54) O (1) + m X i =1 (2 C ( x i | x , . . . , x i − , y ) + 2) (constant hidden in O ( · ) is absolute) . Kolmogorov Complexity can be defined not only for binary strings but for other“finite objects”, like tuples of strings, finite sets, graphs etc. To do so we have to fixsome encoding of these objects by binary strings. Different encodings lead to the samecomplexity up to O (1) additive term. We actually prove a more specified version of Theorem 2.
Theorem 8.
For all large enough n the following holds. If n (cid:54) t (cid:54) n / , then anydeterministic finite automaton separating EvenCycles n, from OddCycles n, in time t has more than n · t )4 states. Theorem 2, however, has no restrictions on t , unlike Theorem 8. Nevertheless, it iseasy to see that Theorem 8 implies Theorem 2. For t > n / the lower bound of Theorem 2is just constant, and the constant lower bound is obvious. Next, theorem 2 for n (cid:54) t < n follows from Theorem 8 for t = 8 n (with some constant loss in the exponent). Finally, weobserve that for t < n there is no deterministic finite automaton separating EvenCycles n, from OddCycles n, in time t at all. Indeed, a word (1 , , . . . ( n − ,
1) is a prefix ofa sequence from EvenCycles n, and also a prefix of a sequence from OddCycles n, .Now we proceed to a proof of Theorem 8. Assume for contradiction that for some n and 8 n (cid:54) t (cid:54) n / there exists a deterministic finite automaton A with at most Q statesseparating EvenCycles n, from OddCycles n, in time t . Here Q is defined as follows Q = 2 l n · t )4 m . (1) There is a more tight relation between the left and the right hand side known as “chain rule”.However, Proposition 7 is enough for our purposes.
10o obtain a contradiction we construct two words on which A comes into the samestate. One word is a prefix of a sequence from EvenCycles n, . Moreover, its length isat least t . The other word is a prefix of a sequence from OddCycles n, . This gives acontradiction with the fact that A separates EvenCycles n, from OddCycles n, in time t .To explain the construction let us introduce some notation. For a finite X ⊆ N denote: ( X,
1) = ( x , x , . . . ( x m , ∈ ( N × { } ) ∗ , where x , x , . . . , x m ∈ N aresuch that x < x < . . . < x m and X = { x , x , . . . , x m } . Next, for a word w =( v , . . . ( v m , ∈ ( N × { } ) ∗ denote v ( w ) = { v , . . . , v m } . I.e., w v ( w ) operation,loosely speaking, is the inverse to X ( X,
1) operation.Set n = d n/ e , k = 20 · (cid:22) tn (cid:23) , γ = 1 k , a = b n /k c , (2) D = ( X , . . . , X k ) ∈ [ n ] a ! k : X , X , . . . , X k are disjoint , I = ( Y , . . . , Y k ) ∈ [ n ] a ! k : ∀ i, i ∈ { , . . . , k } | Y i Y i | (cid:54) γa . (3)Note that DISJ k,γ ( n ) is a problem to output 1 on D and output 0 on I .For a tuple X = ( X , X , . . . , X k ) ∈ D let < X be the linear order on X ∪ X ∪ . . . ∪ X k drawn on Figure 1. X < X < . . . < X k Figure 1: < X orderFormally, we say that p < X q if at least one of the following two conditions holds: • p ∈ X i , q ∈ X i for some i, i ∈ [ k ] , i < i ; • p < q and p, q ∈ X i for some i ∈ [ k ].Next, given X = ( X , X , . . . , X k ) ∈ D , let us say that a word ( v , l ) . . . ( v m , l m ) ∈ ([ n ] × { , } ) ∗ is X -increasing if v , v , . . . , v m ∈ X ∪ X ∪ . . . ∪ X k and v < X v < X . . . < X v m .Finally, for r ∈ N let r denote a pair ( n + r, r only for r (cid:54) k/ k = O ( n / ). This means that for any r (cid:54) k/ r ∈ [ n ] × { , } , i.e., r belongs to the input alphabet of A .We are ready to formulate our main lemma. Lemma 9.
For some tuple X ∈ D there are words f , . . . , f k/ , g , . . . , g k/ ∈ ([ n ] ×{ } ) ∗ satisfying the following conditions v ( f ) , . . . , v ( f k/ ) are disjoint; • g , . . . , g k/ are X -increasing; • | g | (cid:62) n / , . . . , | g k/ | (cid:62) n / ; • δ A ( q start , f f . . . f k/ k/ ) = δ A ( q start , g g . . . g k/ k/ ) .Here q start is the initial state of A . Let us explain how Lemma 9 implies Theorem 8. Take X ∈ D and f , . . . , f k/ , g , . . . , g k/ ∈ ([ n ] × { } ) ∗ satisfying Lemma 9. To obtain a contradictionit is enough to show that f f . . . f k/ k/ is a prefix of a word from OddCycles n,d , (4) g g . . . g k/ k/ is a prefix of a word from EvenCycles n,d . (5)Indeed, define q = δ A ( q start , f f . . . f k/ k/ ) = δ A ( q start , g g . . . g k/ k/ ) . By (4) we have q = q accept . On the other hand the length of g g . . . g k/ k/ is atleast ( k/ · (4 n / tn − · n . In turn, fromthe formulation of Theorem 8 we know that t (cid:62) n . This implies that the length of g g . . . g k/ k/ is at least t . Due to (5) this means that q = q accept , contradiction.Let us at first show (4). Consider the following graph G odd (see Figure 2). v ( f ) ∪{ n + 1 } v ( f ) ∪{ n + 2 } . . . v ( f k/ ) ∪{ n + k/ } n + k/ Figure 2: A graph for f f . . . f k/ k/ .Nodes of this G odd are elements of v ( f ) ∪ v ( f ) ∪ . . . ∪ v ( f k/ ) ∪ { n + 1 , . . . , n + k/ } . By Lemma 9 sets v ( f ) , . . . , v ( f k/ ) are disjoint subsets of [ n ]. Let us specify edges of G odd . First of all, for each j ∈ [ k/
5] we draw all possible edges between nodes from v ( f j ) ∪ { n + j } (including loops), each with priority 1. Next, for all j < k/ v ( f j ) ∪{ n + j } and end at a node from v ( f j +1 ) ∪{ n + j +1 } ,each with priority 2. We also draw all edges that start at a node from v ( f k/ ) ∪{ n + k/ } { n + k/ } , each again with priority 2. Finally, draw a loop at n + k/ G odd has at least oneout-going edge).It is easy to see from the construction that G odd is an odd game graph with at most n nodes. Moreover, f f . . . f k/ k/ encodes a path in G odd . Indeed, we move forsome time in v ( f ), then through n + 1 we go to v ( f ) and so on. Thus (4) is proved.For (5) it is extremely important that for some tuple X ∈ D words g , . . . , g k/ areall X -increasing. To see why, consider any even game graph G with 2 priorities. Ifwe remove all edges of priority 2, we obtain an acyclic graph. Let T be a topologicalordering of the remaining graph. If we move in G using only edges of priority 1, thennodes we visit should increase in T . It is reflected in a fact that g g . . . g k/ k/ is split by , . . . , k/ into X -increasing words.Now, to show (5) we define another graph, G even (see Figure 3). Its nodes are elements { n + 1 , n + 2 , . . . , n + k/ } DAG representing < X (all edges have priority )2 1 Figure 3: A graph for g g . . . g k/ k/ .of X ∪ X ∪ . . . ∪ X k ∪ { n + 1 , . . . , n + k/ } , where X = ( X , X , . . . , X k ). Next, letus specify edges of G even . For all u, v ∈ X ∪ X ∪ . . . ∪ X k satisfying u < X v we add anedge with priority 1 from u to v . Moreover, we draw all edges between X ∪ X ∪ . . . ∪ X k and { n + 1 , . . . , n + k/ } (in both directions). In particular, this ensures that each nodeof G even has at least one out-going edge. We assign priority 1 to the edges starting at X ∪ X ∪ . . . ∪ X k and priority 2 to the edges starting in { n + 1 , . . . , n + k/ } .Note that once we delete all edges with priority 2 from G even , we obtain an acyclicgraph. Hence G even is an even game graph with at most n nodes. On the other hand, since g , g , . . . , g k/ are X -increasing, it is easy to see that g g . . . g k/ k/ correspondsto a path of G even . Indeed, each g i represents a path at the bottom of the Figure 3.Once we reach the end of g i , we go up with priority 1. Then after reading j , we godown. Thus (5) is proved. Here we give a proof sketch of Lemma 9. The proof is by induction. I.e., we firstconstruct f and g , then f and g and so on. A tuple X = ( X , . . . , X k ), for which13onditions of Lemma 9 hold, comes from the following Proposition 10.
There exists X = ( X , . . . , X k ) ∈ D such that for every state q of A and for every U ⊆ [ n ] satisfying C ( U | n, t, A ) (cid:54) k log ( Q ) , there exists ( Y , . . . , Y k ) ∈ I such that: δ A ( q , ( X \ U, . . . ( X k \ U, δ A ( q , ( Y \ U, . . . ( Y k \ U, . We derive this proposition from Theorem 3 (a lower bound for the problemDISJ k,γ ( n )).Now, assume that f , . . . , f r − , g , . . . , g r − satisfying Lemma 9 are already con-structed for some r (cid:54) k/
5. Note that f . . . f r − r − and g . . . g r − r − lead A into to the same state q . We shall construct f r , g r ∈ ([ n ] × { } ) ∗ satisfying thefollowing conditions: (a) v ( f r ) is disjoint with U = v ( f ) ∪ v ( f ) ∪ . . . ∪ v ( f r − ); (b) g r is long enough (more precisely, its length should be at least 4 n /
7) and g r is X -increasing; (c) δ A ( q , f r ) = δ ( q , g r ).To do so we apply Proposition 10 to q and U and set f r = ( Y \ U, . . . ( Y k \ U, , g r = ( X \ U, . . . ( X k \ U, , where ( Y , . . . , Y k ) ∈ I is such that δ A ( q , ( X \ U, . . . ( X k \ U, δ A ( q , ( Y \ U, . . . ( Y k \ U, . Now, (a), (c) and the second part of (b) immediately follow from the construction. Someexplanation is needed only for the first part of (b). Recall that ( Y , . . . , Y k ) ∈ I , whichmeans that Y , Y , . . . , Y k are highly intersecting. This implies that v ( f r ) is rather small,namely of size at most 2 n /k . I.e., each time we do an induction step, the size of U increases by at most 2 n /k . Since the number of increment steps is k/
5, the size of U is at most 2 n / g r = ( X \ U, . . . ( X k \ U,
1) and X , . . . , X k are disjoint b n /k c -elements subsets of [ n ]. This means that the length of g r is at least k · b n /k c − | U | (cid:62) n − k − n / > n / U . I.e., we have to ensure that the Kolmogorov complexity of U given n, t and A issmall.Note that U = v ( f ) ∪ v ( f ) ∪ . . . ∪ v ( f r − ) is a function of f , . . . , f r − . We will explainhow to add a new f r in such a way that complexity of U increases by approximatelylog ( Q ) bits. This guarantees that complexity of U is at most ≈ ( k/ · log ( Q ) at anymoment.So, we need a way to describe f r in just log ( Q ) bits assuming that f , . . . , f r − (andalso n, t, A ) are given. Recall how f r was constructed. Namely, note that f r is a function14f ( Y , . . . , Y k ) and U . In turn, U is a function of f , . . . , f r − , so we only have a problemwith ( Y , . . . , Y k ). If we knew X = ( X , . . . , X k ), satisfying Proposition 10, we couldfind ( Y , . . . , Y k ) just by the brute-force search over I . Indeed, first we compute q = δ A ( q start , f f . . . f r − r − ) (this yet does not require knowing ( X , . . . , X k )). Thenby emulating A we can find some ( Y , . . . , Y k ) ∈ I satisfying δ A ( q , ( X \ U, . . . ( X k \ U, δ A ( q , ( Y \ U, . . . ( Y k \ U, . However, it is unclear how to describe X in about log ( Q ) bits (even given n, t and A ).One could argue that X can also be found by a brute-force search over D . Nevertheless,this requires listing all U of small Kolmogorov complexity. Unfortunately, Kolmogorovcomplexity is not computable.The key observation here is that in the brute-force search algorithm for finding( Y , . . . , Y k ) described above we never used ( X , . . . , X k ) as a whole. Instead, we onlyused q = δ A ( q , ( X \ U, . . . ( X k \ U, Y , . . . , Y k ) ∈ I we check whether q = δ A ( q , ( Y \ U, . . . ( Y k \ U, ( Q )-bitdescription of q . In this way we get a conditional log ( Q )-bit description of ( Y , . . . , Y k )given f , . . . , f r − and n, t, A , as required.In the end of this subsection we provide more details of the proof of Proposition 10.We define the following non-deterministic protocol involving A . Description of the protocol P . In this protocol there are k parties and the i thparty receives a set X i ∈ (cid:16) [ n ] a (cid:17) . At the beginning parties non-deterministically guessa state q of A and a set U ⊆ [ n ] satisfying C ( U | n, t, A ) (cid:54) k log ( Q ). Then partiescommunicate in k stages. Stages are numbered from 1 to k . At the i th stage the i thparty writes log ( Q ) bits specifying a state of A on the blackboard. Namely,at the 1st stage the 1st party writes q = δ A ( q , ( X \ U, q = δ A ( q , ( X \ U, k th stage the k th party writes q k = δ A ( q k − , ( X k \ U, q k = δ A ( q , ( X \ U, X \ U, . . . ( X k \ U, . After performing these k stages parties finish communication. It remains to explainhow the output of the protocol P is computed. Parties output 1 if and only if there isno ( Y , . . . , Y k ) ∈ I such that q k = δ A ( q , ( Y \ U, Y \ U, . . . ( Y k \ U, . In other words, parties output 1 if and only if there is no input from I on which P produces the same q k for a guess ( q , U ). Description of the protocol is finished.
It is easy to bound CC ( P ). Parties communicate exactly k log ( Q ) bits. We shouldalso add the number of bits needed to specify a non-deterministic guess of P . For that we15nly need about ( k + 1) log ( Q ) bits — this is because the number of sets of complexityat most k log ( Q ) is smaller than 2 k log ( Q )+1 . After that some tedious calculations showthat with the choice of parameters as above CC ( P ) is smaller than the non-deterministiccommunication complexity of DISJ k,γ ( n ) (here we use the bound of Theorem 3). Thismeans that P does not compute DISJ k,γ ( n ). On the other hand, it is clear from theconstruction that P always outputs 0 on any input from I . Hence there should be atuple X ∈ D on which P outputs 0 for any possible non-deterministic guess. This isexactly what is needed from X in Proposition 10.We give a formal proof of Lemma 9 in the next subsection. To simplify the analysis below we need the following lower bound on separatingEvenCycles n, from OddCycles n, without any time restrictions. Proposition 11.
Any deterministic finite automaton separating
EvenCycles n, from OddCycles n, has at least n + 1 states.Proof. Assume that a deterministic finite automaton B separates EvenCycles n, fromOddCycles n, . For i = 0 , , . . . , n − q i = δ B ( q start , (1 , , . . . ( i, , where q start is the initial state of B . Note that (1 , , . . . ( n − ,
2) is a prefix of aword from OddCycles n, . Indeed, consider a graph which for i ∈ [ n −
1] has an edge from i to i + 1 with priority 2 and also has loop with priority 1 at node n . This means that q = q accept , q = q accept , . . . , q n − = q accept . Now assume that B has at most n states.Note that q , q , . . . , q n are distinct from q accept . It implies that there are at most n − q , q , . . . , q n − . Therefore there are i, j ∈ { , , . . . , n − } , i < j ,such that q i = q j . Consider a graph G with n nodes which has all possible directed edges(including loops) and all of them have priority 2. Obviously, G is an even game graph.Let C i,j be a cycle of G obtained by going from i + 1 to j and then back to i + 1 (inparticular if j = i + 1, then C i,j is a loop at j ). Consider an infinite path in G whichgoes from 1 to i and then stays on C i,j forever. By definition, B should reach q accept onthis path at some point. On the other hand, it is easy to see that the set of states visitedby B on this path is { q , q , . . . , q i , . . . , q j − } .Recall that A separates EvenCycles n, from OddCycles n, in time t and has at most Q states. From Proposition 11 we get Q (cid:62) n + 1 (6)(for the rest of the proof we only need the fact that Q is super-constant). From thehypotheses of Theorem 8 it is easy to derive the following bound: k = O ( n / ) . (7)Now let us prove Proposition 10. 16 roof of Proposition 10. Let P be a non-deterministic communication protocol definedon page 15. First let us establish that CC ( P ) is smaller than the non-deterministiccommunication complexity of DISJ k,γ ( n ).Let us start with the upper bound on the communication complexity of P . ByProposition 5 there are at most Q · k log ( Q )+1 possible non-deterministic guesses in P .After making a guess, parties communicate exactly k log ( Q ) bits. Therefore: CC ( P ) (cid:54) log ( Q ) + k log ( Q ) + 1 + k log ( Q ) = (2 k + 1) log ( Q ) + 1 . The last expression is at most 3 k log ( Q ). Indeed, k = 20 b t/n c by (2) and t (cid:62) n byhypotheses of Theorem 8. Hence k (cid:62)
160 and k (cid:62) k + 1. Note also that log ( Q ) issuper-constant by (6). Thus(2 k + 1) log ( Q ) + 1 (cid:54) k · log ( Q ) + 1 (cid:54) k log ( Q ) . In this way we conclude CC ( P ) (cid:54) k log ( Q ) . (8)Let us verify that kγ (cid:54) √ n . Indeed, again by (2) and by the hypotheses of Theorem8 we have: kγ = k (cid:54) · t n (cid:54) · n / n = 400 · √ · r n (cid:54) · √ · √ n < √ n . Hence by Theorem 3 the non-deterministic communication complexity of DISJ k,γ ( n ) isat least γ n · k − ( n ) (cid:62) γ n · · k − ( n ) (cid:62) γ n · · k . In the first inequality we use the definition of n (see (2)). The second inequality holdsbecause γ n/k = n/k = Ω( n / ) by (7).Thus by (8) it remains to show that:log ( Q ) < γ n · · k = n · · k . The right hand side by definition of k (see (2)) is at least n · · (cid:16) tn (cid:17) (cid:62) n · t . In turn, the left hand side by definition of Q (see (1)) is at mostlog ( Q ) = & n (10 · t ) ’ < n · t + 1 (cid:54) · n · t < n · t , t (cid:54) n due to the hypotheses of Theo-rem 8. Thus the fact that CC ( P ) is smaller than the non-deterministic communicationcomplexity of DISJ k,γ ( n ) is proved.This means that P does not compute DISJ k,γ ( n ). In turn, obviously P outputs 0 onany input from I for any possible guess. This means that there is X = ( X , . . . , X k ) ∈ D such that P outputs 0 on the input X for any guess. It is easy to see that this is equivalentto the statement of Proposition 10.To complete the proof of Lemma 9, we introduce the algorithm ALG .Description of ALG involves a lot of notation which resembles the one used above,but with subscript 1. This is to avoid confusion and to stress that ALG is independentof any other parameters. The latter is quite important due to our usage of Kolmogorovcomplexity.An input to ALG consists of two parts: • n , t ∈ N , a deterministic finite automaton A with input alphabet [ n ] × { , } and a tuple α = ( f , . . . , f j ), where f , . . . , f j ∈ ([ n ] × { } ) ∗ and j (cid:62) j = 0, we assume that α is empty); • a binary word q ∈ { , } log ( Q ) .Here n = d n / e , Q = 2 l n · t m (i.e., n and Q are defined in the same way as n and Q in (2) and (1)). The algorithm ALG also sets k = 20 j t n k , γ = 1 /k , a = b n /k c and I = ( Y , . . . , Y k ) ∈ [ n ] a ! k : ∀ i, i ∈ { , . . . , k } | Y i Y i | (cid:54) γ a . (this is similar to the definitions of k, γ, a and I in (2) and (3)).The algorithm ALG interprets q as a state of A (if there is more than Q statesin A , then ALG halts and outputs “not found”). The algorithm ALG computes U = v ( f ) ∪ v ( f ) ∪ . . . ∪ v ( f j ) , q = δ A ( q start, , f , . . . f j j, ) . Here q start, is the initial state of A and , = ( n + 1 , , . . . , j, = ( n + j, ALG tries to find ( Y , . . . , Y k ) ∈ I satisfying the following condition: q = δ A ( q , ( Y \ U, Y \ U, . . . ( Y k \ U, . Once any such ( Y , . . . , Y k ) is found, the algorithm ALG outputs a word f = ( Y \ U, . . . ( Y k \ U, Y , . . . , Y k ) at all, ALG halts and outputs “notfound”. Description of the algorithm ALG is finished .For the rest of the proof, we assume that X = ( X , X , . . . , X k ) is a tuple satisfyingthe conditions of Proposition 10. By Proposition 10 and by the definition of ALG weget: 18 roposition 12. Take any f , . . . , f j ∈ ([ n ] × { } ) ∗ . Define U = v ( f ) ∪ v ( f ) ∪ . . . ∪ v ( f j ) and q = δ A ( q start , f f . . . f j j ) , q = δ A ( q , ( X \ U, X \ U, . . . ( X k \ U, . Assume that C ( U | n, t, A ) (cid:54) k log ( Q ) . Then ALG (( n, t, A , ( f , . . . , f j )) , q ) = ( Y \ U, Y \ U, . . . ( Y k \ U, for some ( Y , . . . , Y k ) ∈ I satisfying q = δ A ( q , ( Y \ U, Y \ U, . . . ( Y k \ U, . To show Lemma 9 it is enough to show that for every r = 1 , . . . , k/ f , . . . , f r , g , . . . , g r ∈ ([ n ] × { } ) ∗ satisfying the following conditions: v ( f ) , . . . , v ( f r ) are disjoint and | v ( f ) | (cid:54) n /k, . . . , | v ( f r ) | (cid:54) n /k, (9) C ( f j | f , . . . , f j − , n, t, A ) (cid:54) ( Q ) for j = 1 , . . . , r, (10) | g | (cid:62) n / , . . . , | g r | (cid:62) n / , (11) g , . . . , g r are X -increasing , (12) δ A ( q start , f f . . . f r r ) = δ A ( q start , g g . . . g r r ) . (13)The proof is by induction on r . Induction base and induction step will be proved bythe same argument. Namely, assume that f , . . . , f r − , g , . . . , g r − satisfying (9–13) arealready constructed for some r (cid:54) k/ r = 1 corresponds to the induction base).Define U = v ( f ) ∪ v ( f ) ∪ . . . v ( f r − ) ,q = δ A ( q start , f f . . . f r − r − )(for r = 1 we have U = ∅ and q = q start ). Note that by (13) we also have q = δ A ( q start , g g . . . g r − r − ) . It is enough to construct f r , g r ∈ ([ n ] × { } ) ∗ satisfying: v ( f r ) ∩ U = ∅ and | v ( f r ) | (cid:54) n /k, (14) C ( f r | f , . . . , f r − , n, t, A ) (cid:54) ( Q ) , (15) | g r | (cid:62) n / , (16) g r is X -increasing , (17) δ A ( q , f r ) = δ A ( q , g r ) . (18)We define g r as follows: g r = ( X \ U, X \ U, . . . ( X k \ U, . At first, we derive (16) and (17). The latter is clear from construction. As for the former,recall that ( X , . . . , X k ) ∈ D , i.e., X , . . . , X k are disjoint. Hence | g r | = | ( X ∪ X ∪ . . . ∪ k ) \ U | . The last expression is at least k · b n /k c − | U | , By (9) and by definition of U itssize is at most ( k/ · (2 n /k ) = 2 n /
5. As k = O ( n / ) by (7), we obtain | g r | (cid:62) n / f r which is not yetdefined). For that we first have to establish that C ( U | n, t, A ) (cid:54) k log ( Q ). By applyingProposition 6 to a mapping, which takes a tuple of strings from ( N × { } ) ∗ , applies v tothem and takes the union, we get: C ( U | n, t, A ) (cid:54) C ( f , f , . . . , f r − | n, t, A ) + O (1) . By Proposition 7, the right hand side of the last inequality is upperbounded by O (1) + r − X j =1 (2 C ( f j | f , . . . , f j − , n, t, A ) + 2) . The last sum by (10) is at most ( k/ · (4 log ( Q ) + 2) + O (1) (cid:54) k log ( Q ). The lastinequality holds because k (cid:62)
160 (see the proof of Proposition 10) and Q is super-constant by (6).Set q = δ A ( q , g r ) = δ A ( q , ( X \ U, X \ U, . . . ( X k \ U, f r = ALG (( n, t, A , ( f , f , . . . , f r − )) , q ) . Since we have proved that C ( U | n, t, A ) (cid:54) k log ( Q ), from Proposition 12 we obtain that: f r = ( Y \ U, Y \ U, . . . ( Y k \ U, Y , . . . , Y k ) ∈ I satisfying q = δ A ( q , ( Y \ U, Y \ U, . . . ( Y k \ U, q = δ A ( q , g r ) by definition. On theother hand, δ A ( q , f r ) = δ A ( q , ( Y \ U, Y \ U, . . . ( Y k \ U, q .The first part of (14) is once again obvious because f r = ( Y \ U, Y \ U, . . . ( Y k \ U, v ( f r ) ⊆ Y ∪ Y ∪ . . . ∪ Y k . Hence | v ( f r ) | (cid:54) | Y | + | Y \ Y | + . . . + | Y k \ Y | (cid:54) n k + ( k − γn k (cid:54) n k . Here in the second inequality we use the fact that ( Y , Y , . . . , Y k ) ∈ I and in the thirdinequality we use the definition of γ (see (2)).Finally, to show (15) recall once again that f r = ALG (( n, t, A , ( f , f , . . . , f r − )) , q ) . Hence by the definition of conditional Kolmogorov complexity we have: C ( f r | f , . . . f r − , n, t, A ) (cid:54) | q | + O (1) = log ( Q ) + O (1) (cid:54) ( Q ) , where the last inequality is due to (6). 20 Proof of Theorem 4
Let us sketch our proof of Theorem 4. First of all, for the sake of brevity we say that twofamilies F , G ⊆ (cid:16) [ n ] a (cid:17) are t -far if | F ∩ G | ≤ t for all F ∈ F , G ∈ G (so that any memberof F is of Hamming distance at least 2 a − t from any member of G ). Step 1 . We use a classical shifting technique of [9] to define so-called left-compressed families. We show that it is enough to demonstrate Theorem 4 for the case when F isleft-compressed (Lemma 14). Step 2 . We observe (Proposition 17) that left-compressed families are ideals of aspecial partial order v a (see [2]) on a set (cid:16) [ n ] a (cid:17) . Step 3 . We give a necessary and sufficient condition for a family
G ⊆ (cid:16) [ n ] a (cid:17) to be t -far from an ideal F of v a (Lemma 18). Step 4 . Using this condition we give an upper bound on the probability that X ∈ F and Y ∈ G for two suitably chosen independent random variables X and Y (Lemma19). From that we easily deduce an upper bound on |F | · |G| . For every i, j ∈ [ n ] we define so-called shifting operations s ij and S ij . Namely, s ij is aunary operation on the set of all subsets of [ n ]. Given X ⊆ [ n ], the value of s ij ( X ) isdefined as follows: s ij ( X ) = ( ( X \ { j } ) ∪ { i } , if j ∈ X, i / ∈ X,X, otherwise.In turn, S ij is a unary operation on the set of all families of subsets of [ n ]. Given X ⊆ [ n ] , we define the value of S ij ( X ) as follows: S ij ( X ) = { s ij ( X ) : X ∈ X , s ij ( X ) / ∈ X } ∪ { X : X ∈ X , s ij ( X ) ∈ X } . Note that s ij preserves the size of a set, i.e., | X | = | s ij ( X ) | for all X ⊆ [ n ]. Henceif a family X consists only of a -element subsets of [ n ], then the same holds for S ij ( X ).It is also easy to see that S ij preserves the size of a family, i.e., |X | = | S ij ( X ) | for all X ⊆ [ n ] . Proposition 13 (Lemma 2.1 from [4]) . Assume that ≤ i < j ≤ n and F , G ⊆ (cid:16) [ n ] m (cid:17) are t -far. Then S ij ( F ) , S ji ( G ) are also t -far. A family
F ⊆ [ n ] is said to be left-compressed if S ij ( F ) = F for all i < j . Lemma 14. If F , G ⊆ (cid:16) [ n ] m (cid:17) are t -far, then there are F , G ⊆ (cid:16) [ n ] a (cid:17) satisfying the followingthree conditions: • F and G are t -far; • |F | = |F | and |G | = |G| ; F is left-compressed. It is easy to deduce the last lemma from Proposition 13. Indeed, apply S ij to F and S ji to G until S ij ( F ) = F for some i < j . To show that this can be done only finitenumber of times observe that X A ∈ S ij ( F ) X i ∈ A i < X A ∈F X i ∈ A i, whenever S ij ( F ) = F . The proof can also be found in [4] (see the last two paragraphsbefore Section 3). For X ⊆ [ n ] and 1 (cid:54) i (cid:54) | X | define m ( X, i ) to be the i th smallest element of X . Alsodefine m ( X,
0) = 0.For any l ∈ [ n ] we define the partial order v l on the set (cid:16) [ n ] l (cid:17) as follows (see [2]): X v l Y if m ( X, i ) (cid:54) m ( Y, i ) for all 1 (cid:54) i (cid:54) l . Proposition 15.
Let X = { x , . . . , x l } ∈ (cid:16) [ n ] l (cid:17) and Y ∈ (cid:16) [ n ] l (cid:17) be such that x i ≤ m ( Y, i ) for all ≤ i ≤ l . Then X v l Y . Note that x i in this proposition are not ordered. In other words, a smaller set w.r.t.this order can be produced by decreasing values of some elements of a set. Proof of Proposition 15.
Take any i ∈ [ l ]. Let j be the largest element of { , , . . . , l } satisfying m ( X, j ) (cid:54) m ( Y, i ). Note that j is equal to the size of X ∩ [1 , m ( Y, i )]. On theother hand, we have x (cid:54) m ( Y, , . . . , x i (cid:54) m ( Y, i ). Hence x , . . . , x i ∈ X ∩ [1 , m ( Y, i )],which means that j = | X ∩ [1 , m ( Y, i )] | (cid:62) i . Therefore m ( X, i ) (cid:54) m ( X, j ) (cid:54) m ( Y, i ). Proposition 16.
Let X ∈ (cid:16) [ n ] l (cid:17) and Y = { y , . . . , y l } ∈ (cid:16) [ n ] l (cid:17) be such that m ( X, i ) ≤ y i for all ≤ i ≤ l . Then X v l Y .Proof. Apply Proposition 15 to X = { n − y l + 1 , n − y l − + 1 , . . . , n − y + 1 } and Y = { n − j + 1 : j ∈ X } .Recall that an ideal A of a partially ordered set P is a downward-closed subset of P :if x ≤ P y and y ∈ A , then x ∈ A . Proposition 17 (Proposition 3 in [2]) . A left-compressed family
F ⊆ (cid:16) [ n ] a (cid:17) is an idealof the order v a . For reader’s convenience we also give here a proof sketch of Proposition 17. If F is notan ideal of v a , then for some B ∈ F there is A ∈ (cid:16) [ n ] a (cid:17) \ F immediately preceding B withrespect to v a . It is not hard to see that A can be obtained from B after decreasing someelement of B (say, i ) by one. Then s i − ,i ( B ) = A and hence F is not left-compressed.So, it suffice to prove Theorem 4 for a pair ( F , G ) in which F is an ideal of the order v a . 22 .3 Characterizing families which are t -far from ideals Define the j -left border L j ( X ) and the j -right border R j ( X ) of a set X ⊆ (cid:16) [ n ] a (cid:17) as L j ( X ) = { m ( X, i ) : 1 ≤ i ≤ j } ; R j ( X ) = { m ( X, i ) : a − j + 1 ≤ i ≤ a } . In other words, L j ( X ) consists of j smallest elements of X and R j ( X ) consists of j largest elements of X . Lemma 18.
Let
F ⊆ (cid:16) [ n ] a (cid:17) be an ideal of v a . Then for any G ⊆ (cid:16) [ n ] a (cid:17) the following twoconditions are equivalent: (a) F and G are t -far; (b) L t +1 ( G ) t +1 R t +1 ( F ) for all F ∈ F and G ∈ G .Proof. (b) = ⇒ (a) . Assume for contradiction that F and G are not t -far. Hence thereare F ∈ F and G ∈ G such that | F ∩ G | (cid:62) t + 1. Let X be any ( t + 1)-element subsetof F ∩ G . Then obviously we have that L t +1 ( G ) v t +1 X v t +1 R t +1 ( F ), contradiction. (a) = ⇒ (b) . Assume for contradiction that there are F ∈ F and G ∈ G such that L t +1 ( G ) v t +1 R t +1 ( F ). Define F = { F ∈ F : L t +1 ( G ) v t +1 R t +1 ( F ) } . By definition F ∈ F , i.e., F is non-empty. Let F be any minimal element of F withrespect to v a , i.e., assume there is no F ∈ F , F = F such that F v a F . To obtaina contradiction it is enough to show that | F ∩ G | (cid:62) t + 1 (this would mean that F and G are not t -far).Assume that | F ∩ G | < t + 1. Hence there is an element of L t +1 ( G ) which is not in F . Namely, there is i ∈ { , , . . . , t + 1 } such that m ( G, i ) / ∈ F . Define F = ( F \ { m ( F , a − t − i ) } ) ∪ { m ( G, i ) } . First of all, observe that | F | = | F | = a (this is because m ( F , a − t − i ) ∈ F and m ( G, i ) / ∈ F ). Let us check that the following three claims hold: F ∈ F (19) F = F (20) F v a F . (21)These three claims give a contradiction with minimality of F .The simplest one is (20) — observe that F contains m ( G, i ) and F does not.Now, let us show (21). Recall that F ∈ F , i.e., L t +1 ( G ) v t +1 R t +1 ( F ). Hence m ( G, i ) = m ( L t +1 ( G ) , i ) (cid:54) m ( R t +1 ( F ) , i ) = m ( F , a − t − i ), i.e., F is obtainedfrom F by removing a bigger element and adding a smaller element (which originallywas not in F ). Hence by Proposition 15 we have that F v a F .23o show (19) let us at first show that F ∈ F . Indeed, F is an ideal of v a and F ∈ F ⊆ F . Hence by (21) we have that F ∈ F . To show that actually F ∈ F wehave to prove that L t +1 ( G ) v t +1 R t +1 ( F ). Define X = ( R t +1 ( F ) \ { m ( F , a − t − i ) } ) ∪ { m ( G, i ) } . Observe that X is a ( t + 1)-element subset of F . Note that m ( G, i ) = m ( L t +1 ( G ) , i ) / ∈ R t +1 ( F ) and m ( F , a − t − i ) = m ( R t +1 ( F ) , i ) and recall once again that L t +1 ( G ) v t +1 R t +1 ( F ). Thus X is obtained from R t +1 ( F ) by removing the i th element of R t +1 ( F ) andadding the i th element of L t +1 ( G ). Hence by Proposition 16 we have that L t +1 ( G ) v t +1 X . On the other hand obviously X v t +1 R t +1 ( F ), which means that (19) is proved. To upperbound |F | · |G| , where F , G ⊆ (cid:16) [ n ] a (cid:17) are t -far and F is an ideal of the order v a ,we use an approach suggested in [11]. We introduce a probabilistic measure µ p on theset 2 [ n ] such that the probability of a subset X ∈ [ n ] is equal to p | X | (1 − p ) n −| X | . It iseasy to see that this measure is a product of Bernoulli measures: each point x belongsto a random set X with probability p and points are included in the set independently. Lemma 19.
Let F , G ⊆ (cid:16) [ n ] a (cid:17) be such that L t +1 ( G ) t +1 R t +1 ( F ) for all F ∈ F , G ∈ G .Define X and Y to be two independent random variables, both distributed according to µ a/n . Then Pr [ X ∈ F , Y ∈ G ] (cid:54) n · exp (cid:16) − ( a − t − / (20 a ) (cid:17) . We will use the following form of the Chernoff bound:
Proposition 20 ([14], Theorem 1) . Let Z , . . . , Z l be l independent Bernoulli randomvariables. Assume that each Z i takes value with probability p . Then for all ε (cid:62) : Pr " l X i =1 Z i (cid:62) ( p + ε ) l (cid:54) exp ( − D ( p + ε || p ) · l ) Pr " l X i =1 Z i (cid:54) ( p − ε ) l (cid:54) exp ( − D ( p − ε || p ) · l ) , where D ( x || y ) is the Kullback – Leibler divergence: D ( x || y ) = x ln xy ! + (1 − x ) ln − x − y ! . We also need the following lower bound on the Kulback – Leibler divergence:
Proposition 21 ([29]) . D ( x || y ) (cid:62) ( x − y ) x + y ) . From Propositions 20 and 21 we obtain:24 orollary 22.
Let Z , . . . , Z l be l independent Bernoulli random variables. Assume thateach Z i takes value with probability p . Then for all ε (cid:62) : Pr " l X i =1 Z i / ∈ [( p − ε ) l, ( p + ε ) l ] (cid:54) − ε · l p + 2 ε ! . Proof of Lemma 19.
Denote s = ( a − t − E be the event that for all r ∈ { , . . . , n } it holds that | X ∩ [1 , r ] | ∈ (cid:20) an · r − s/ , an · r + s/ (cid:21) and | Y ∩ [1 , r ] | ∈ (cid:20) an · r − s/ , an · r + s/ (cid:21) . Let us show that X ∈ F , Y ∈ G = ⇒ ¬ E . Indeed, assume for contradiction thatthere are X ∈ F and Y ∈ G such that event E holds for X = X , Y = Y . Note that L t +1 ( Y ) t +1 R t +1 ( X ). Hence, m ( Y, j ) > m ( X, a − t − j ) = m ( X, s + j ) for some j ∈ { , . . . , t + 1 } . Consider r = m ( X, s + j ). By definition there are exactly s + j elements of X in [1 , r ]. Since event E holds for X = X , Y = Y , we get: s + j (cid:54) an · r + s/ . (22)On the other hand, there are at most j − Y in [1 , m ( X, s + j )] = [1 , r ] (thisis because m ( Y, j ) > m ( X, s + j )). Hence an · r − s/ (cid:54) j − E holds for ( X, Y )). By adding (23) and (22) we get0 (cid:54) −
1. Thus an implication X ∈ F , Y ∈ G = ⇒ ¬ E is proved.In particular, we get: Pr [ X ∈ F , Y ∈ G ] (cid:54) Pr [ ¬ E ] . Hence it is enough to upper bound the probability of ¬ E . If ¬ E holds, then for some r ∈ { , . . . , n } we have: | X ∩ [1 , r ] | / ∈ (cid:20) an · r − s/ , an · r + s/ (cid:21) = (cid:20)(cid:18) an − s r (cid:19) r, (cid:18) an + s r (cid:19) r (cid:21) or | Y ∩ [1 , r ] | / ∈ (cid:20) an · r − s/ , an · r + s/ (cid:21) = (cid:20)(cid:18) an − s r (cid:19) r, (cid:18) an + s r (cid:19) r (cid:21) . By Corollary 22 both of these events have probability at most2 exp − (cid:16) s r (cid:17) · r · an + 2 · s r = 2 exp − s · arn + 4 s ! (cid:54) − s · ann + 4 s ! = 2 exp − s a + 4 s ! (cid:54) − s a ! a ∈ { , . . . , n } we get the required bound. Assume that
F ⊆ (cid:16) [ n ] a (cid:17) and G ⊆ (cid:16) [ n ] a (cid:17) are t -far. By Lemma 14 there are F , G ⊆ (cid:16) [ n ] m (cid:17) satisfying the following three conditions: • F and G are t -far; • |F | = |F | and |G | = | G | ; • F is left-compressed.By Proposition 17 we have that F is an ideal of v a . Then by Lemma 18 we get that L t +1 ( G ) t +1 R t +1 ( F ) for all F ∈ F and G ∈ G . Hence by Lemma 19 we have Pr [ X ∈ F , Y ∈ G ] (cid:54) n exp (cid:16) − ( a − t − / (20 a ) (cid:17) , (24)where X and Y are two independent random variables distributed according to µ a/n .The left hand side of (24) equals |F | · |G | · "(cid:18) an (cid:19) a · (cid:18) − an (cid:19) n − a . Finally, from the following lower bound on (cid:16) na (cid:17) (see [6, Lemma 2.4.2]) na ! (cid:62) vuut n · an · n − an · (cid:18) na (cid:19) a · (cid:18) nn − a (cid:19) n − a , we get: |F | · |G| = |F | · |G | (cid:54) "(cid:18) na (cid:19) a · (cid:18) nn − a (cid:19) n − a · n exp (cid:16) − ( a − t − / (20 a ) (cid:17) (cid:54) n · an · n − an · na ! · n exp (cid:16) − ( a − t − / (20 a ) (cid:17) = 32 a ( n − a ) · exp (cid:16) − ( a − t − / (20 a ) (cid:17) · na ! . Communication lower bound
Our proof of Theorem 3 relies on Theorem 4. Since we are dealing with k -party setting,we need the following k -dimensional generalization of Theorem 4. Fortunately, thisgeneralization can be obtained via a very simple induction argument. Lemma 23.
For all n, a, t, k ∈ N satisfying t < a < n the following holds. Assume that F , F , . . . , F k ⊆ (cid:16) [ n ] a (cid:17) are such that |F i | (cid:62) k − · q a ( n − a ) · exp − ( a − t − a ! · na ! + 2 k − for all i ∈ { , , . . . , k } . Then there are F ∈ F , F ∈ F , . . . , F k ∈ F k such that | F ∩ F i | (cid:62) t + 1 for all i ∈ { , . . . , k } .Proof. For t < a < n let A k,na,t be the minimal positive integer N such that for all F , . . . , F k ⊆ (cid:16) [ n ] a (cid:17) the following holds. If |F i | (cid:62) N for all i ∈ { , . . . , k } , then there are F ∈ F , F ∈ F , . . . , F k ∈ F k such that | F ∩ F i | (cid:62) t + 1 for all i ∈ { , . . . , k } .Let us verify that A k,na,t are non-decreasing in k , i.e.: A k,na,t (cid:54) A k +1 ,na,t (25)for all k (cid:62) t < a < n . Indeed, take F , F , . . . , F k ⊆ (cid:16) [ n ] a (cid:17) such that |F i | (cid:62) A k +1 ,na,t for all i ∈ { , . . . , k } . It is clear that A k +1 ,na,t (cid:54) (cid:16) na (cid:17) . So, from definition of A k +1 ,na,t applied to the families F , F , . . . , F k +1 , where F k +1 = (cid:16) [ n ] a (cid:17) , we conclude that there are F ∈ F , F ∈ F , . . . , F k +1 ∈ F k +1 satisfying | F ∩ F i | (cid:62) t + 1 for all i ∈ { , . . . , k + 1 } .Theorem 4 implies that: A ,na,t (cid:54) $q a ( n − a ) · exp − ( a − t − a ! · na !% + 1 . Indeed, assume that F , F ⊆ (cid:16) [ n ] a (cid:17) are such that: |F | , |F | (cid:62) $q a ( n − a ) · exp − ( a − t − a ! · na !% + 1 . Then |F | · |F | is strictly larger than 32 a ( n − a ) · exp (cid:16) − ( a − t − a (cid:17) · (cid:16) na (cid:17) . By Theorem 4this means that there are F ∈ F , F ∈ F such that | F ∩ F | (cid:62) t + 1.To show the lemma it is enough to demonstrate that A k +1 ,na,t (cid:54) · A k,na,t , for all k (cid:62) t < a < n . To do so, fix k + 1 families F , . . . , F k +1 ⊆ (cid:16) [ n ] a (cid:17) . Assumethat |F i | (cid:62) · A k,na,t for all i ∈ { , . . . , k + 1 } . Our goal is to show that there are F ∈ F , . . . , F k +1 ∈ F k +1 satisfying | F ∩ F i | (cid:62) t + 1 for all i ∈ { , . . . , k + 1 } . N = A k,na,t . We claim that there are N distinct G , . . . , G N ∈ F such that forevery j ∈ { , , . . . , N } there are F j ∈ F , . . . , F jk ∈ F k satisfying | G j ∩ F ji | (cid:62) t + 1 forall i ∈ { , . . . , k } .We construct such G , . . . , G N one by one. Assume that G , . . . , G j for some j ∈{ , . . . , N − } are already constructed. Notice that: |F \ { G , . . . , G j }| (cid:62) · A k,na,t − j (cid:62) · A k,na,t − N = A k,na,t , |F i | (cid:62) · A k,na,t (cid:62) A k,na,t , i = 2 , . . . , k. This means by definition of A k,na,t that there are G ∈ F \{ G , . . . , G j } , H ∈ F , . . . , H k ∈F k satisfying: | G ∩ H i | (cid:62) t + 1 for all i ∈ { , . . . , k } . Then we set G j +1 = G, F j +12 = H , . . . , F j +1 k = H k . Note that G j +1 is distinct from G , . . . , G j because G / ∈ { G , . . . , G j } .Finally, consider two families { G , . . . , G N } and F k +1 . These two families are bothof size at least N = A k,na,t (cid:62) A ,na,t (the last inequality here is by (25)). Hence there are j ∈ { , . . . , N } and H k +1 ∈ F k +1 such that | G j ∩ H k +1 | (cid:62) t + 1. To finish the proof set F = G j , F = F j , . . . , F k = F jk and F k +1 = H k +1 .We are now ready to prove Theorem 3. Proof of theorem 3.
Set a = b n/k c and t = b (1 − γ/ a c . Note that a − t (cid:62) γa (cid:62) γ ( n/k − n · kγ − γ (cid:62) √ n −
14 (26)Here we use the assumption that kγ (cid:54) √ n . In particular, (26) implies t < a for all largeenough n .Define D = ( X , . . . , X k ) ∈ [ n ] a ! k : X , X , . . . , X k are disjoint , I = ( F , . . . , F k ) ∈ [ n ] a ! k : | F i F i | (cid:54) γa for all i, i ∈ { , . . . , k } . Observe that: |D| = na ! · n − aa ! · . . . · n − ( k − · aa ! > . (27)Assume that there is a c -bit non-deterministic communication protocol forDISJ k,γ ( n ). Hence there is a cover of D by at most 2 c boxes which are disjoint with I . Among these boxes there is one which contains at least |D| / c elements of D . Letthis box be F × . . . × F k for some F , . . . , F k ⊆ (cid:16) [ n ] a (cid:17) .28et us show that for some i ∈ { , . . . , k } it holds that |F i | < k − · q a ( n − a ) · exp − ( a − t − a ! · na ! + 2 k − . (28)Indeed, assume that it is not true. Then, since t < a < n , we can apply Lemma 23 tofind F ∈ F , F ∈ F , . . . , F k ∈ F k such that | F ∩ F i | (cid:62) t + 1 for all i ∈ { , . . . , k } . Notealso that | F ∩ F | = | F | = a (cid:62) t + 1. From that for every i, i ∈ { , . . . , k } we obtain: | F i F i | (cid:54) | F i F | + | F i F | = | F i | + | F | − | F i ∩ F | + | F i | + | F | − | F i ∩ F | (cid:54) · a − · ( t + 1) (cid:54) γa. This means that F × F × . . . × F k intersects I , contradiction.Take any i ∈ { , , . . . , k } satisfying (28). Recall that by definition there are at least |D| / c elements of D in F × F × . . . × F k . On the other hand, notice that for any fixed X ∈ (cid:16) [ n ] a (cid:17) there are exactly n − aa ! · . . . · n − ( k − · aa ! . elements of D with the i th coordinate equals to X . Hence there are at most |F i | · n − aa ! · . . . · n − ( k − · aa ! elements of D in F × F × . . . × F k . By combining these two bounds we obtain: |D| / c (cid:54) |F i | · n − aa ! · . . . · n − ( k − · aa ! . By (27) this transforms to 2 c (cid:62) (cid:16) na (cid:17) |F i | . Recall that the size of F i satisfies (28). This gives us the following:2 c (cid:62) (cid:16) na (cid:17) k − · q a ( n − a ) · exp (cid:16) − ( a − t − a (cid:17) · (cid:16) na (cid:17) + 2 k − (cid:62)
12 min exp (cid:16) ( a − t − a (cid:17) k − q a ( n − a ) , (cid:16) na (cid:17) k − (cid:62)
12 min exp (cid:16) ( a − t − a (cid:17) k − √ · n , (cid:16) na (cid:17) k − . of the last inequality (bearing in mind that log ( e ) >
1) we obtain that c for all large enough n satisfies the following: c (cid:62) min ( ( a − t − a − k − . ( n ) , log na !! − k ) (here we subtract 0 . ( n ) from the first argument of min to compensate negativeconstant terms). It remains to demonstrate that both expressions in the minimumabove are at least γ n · k − ( n ) for all large enough n :( a − t − a − k − . ( n ) (cid:62) γ n · k − ( n ) , (29)log na !! − k (cid:62) γ n · k − ( n ) . (30)Let us start with (29). At first, note that a − t − (cid:62) γa − (cid:62) γa , where the last inequality is because γa is large enough: γa (cid:62) γ ( n/k − (cid:62) √ n − . (in the second inequality of the last line we use the assumption that kγ (cid:54) √ n ). Inparticular, a − t − a − t − a − k − . ( n ) (cid:62) γ a − k − . ( n ) (cid:62) γ n · k − k − ( n )(here once again we subtract 0 . ( n ) to compensate a negative constant term whichis due to rounding of a = b n/k c ). To prove (29) it remains to notice that k (cid:54) γ n · k because kγ (cid:54) √ n .To show (30) we will actually show that the left hand side of (30) is at least the lefthand side of (29). Indeed, log na !! (cid:62) a · log (cid:18) na (cid:19) (cid:62) a. The last inequality is because k (cid:62) a = b n/k c (cid:54) n/
2. But recall that a − t − a is at least ( a − t − a . Acknowledgment.
The article was prepared within the framework of the HSEUniversity Basic Research Program and funded by the Russian Academic ExcellenceProject ’5-100’. Mikhail Vyalyi is partially supported by RFBR grant 17–01-00300 andby the state assignment topic no. 0063-2016-0003.Authors are sincerely grateful to anonymous reviewers for valuable comments.30 eferences [1]
Ada, A.
On the non-deterministic communication complexity of regular languages.
International Journal of Foundations of Computer Science 21 , 4 (2010), 479–493.[2]
Bashov, M.
On minimisation of the double-sided shadow in the unit cube.
DiscreteMathematics and Applications 21 (2011), 517–535.[3]
Bojańczyk, M., and Czerwiński, W.
An au-tomata toolbox. A book of lecture notes, available at , 2018.[4]
Borg, P.
The maximum product of sizes of cross-t-intersecting uniform families.
Australasian J. Combinatorics 60 (2014), 69–78.[5]
Calude, C. S., Jain, S., Khoussainov, B., Li, W., and Stephan, F.
Decid-ing parity games in quasipolynomial time. In
Proceedings of the 49th Annual ACMSIGACT Symposium on Theory of Computing (2017), ACM, pp. 252–263.[6]
Cohen, G., Honkala, I., Litsyn, S., and Lobstein, A.
Covering codes ,vol. 54. Elsevier, 1997.[7]
Czerwiński, W., Daviaud, L., Fijalkow, N., Jurdziński, M., Lazić, R.,and Parys, P.
Universal trees grow inside separating automata: Quasi-polynomiallower bounds for parity games. In
Proceedings of the Thirtieth Annual ACM-SIAMSymposium on Discrete Algorithms (2019), SIAM, pp. 2333–2349.[8]
Emerson, E. A., and Jutla, C. S.
Tree automata, mu-calculus and determinacy.In
Foundations of Computer Science, 1991. Proceedings., 32nd Annual Symposiumon (1991), IEEE, pp. 368–377.[9]
Erdős, P., Ko, C., and Rado, R.
Intersection theorems for systems of finitesets.
The Quarterly Journal of Mathematics 12 (1961), 313–320.[10]
Fearnley, J., Jain, S., Schewe, S., Stephan, F., and Wojtczak, D.
Anordered approach to solving parity games in quasi polynomial time and quasi linearspace. In
Proceedings of the 24th ACM SIGSOFT International SPIN Symposiumon Model Checking of Software (2017), ACM, pp. 112–121.[11]
Frankl, P., and Rödl, V.
Forbidden intersections.
Transactions of the AmericanMathematical Society 300 , 1 (1987), 259–286.[12]
Gruber, H., and Holzer, M.
Finding lower bounds for nondeterministic statecomplexity is hard. In
Ibarra O.H., Dang Z. (eds) Developments in Language The-ory. DLT 2006. Lecture Notes in Computer Science, vol 4036. (2006), pp. 363–374.3113]
Gruska, J., Qiu, D., and Zheng, S.
Communication complexity of promiseproblems and their applications to finite automata. arXiv preprint arXiv:1309.7739 (2013).[14]
Hoeffding, W.
Probability inequalities for sums of bounded random variables.
J. Am. Stat. Associ. 58 , 301 (1963), 13–30.[15]
Hromkovič, J.
Communication complexity and parallel computing . Springer-Verlag, Berlin, Heidelberg, 1997.[16]
Hromkovič, J., Seibert, S., Karhumäki, J., Klauck, H., and Schnitger,G.
Communication complexity method for measuring nondeterminism in finiteautomata.
Information and Computation 172 , 2 (2002), 202–217.[17]
Jukna, S.
Boolean function complexity: advances and frontiers , vol. 27. SpringerScience & Business Media, 2012.[18]
Jurdziński, M.
Deciding the winner in parity games is in UP ∩ Co-UP.
Informa-tion Processing Letters 68 , 3 (1998), 119–124.[19]
Jurdziński, M., and Lazić, R.
Succinct progress measures for solving paritygames. In (2017), IEEE.[20]
Jurdziński, M., Paterson, M., and Zwick, U.
A deterministic subexponentialalgorithm for solving parity games.
SIAM Journal on Computing 38 , 4 (2008), 1519–1532.[21]
Lehtinen, K.
A modal µ perspective on solving parity games in quasipolynomialtime. In (2018), IEEE.[22] Martin, D. A.
A purely inductive proof of borel determinacy. In
RecursionTheory, Proceedings of Symposia in Pure Mathematics (1985), vol. 42, AmericanMathematical Society, pp. 303–308.[23]
McNaughton, R.
Infinite games played on finite graphs.
Annals of Pure andApplied Logic 65 , 2 (1993), 149–184.[24]
Mostowski, A. W.
Games with forbidden positions. Tech. Rep. 78, UniwersytetGdánski, Instytut Matematyki, 1991.[25]
Petersson, V., and Vorobyov, S. G.
A randomized subexponential algorithmfor parity games.
Nordic Journal of Computing 8 , 3 (2001), 324–345.[26]
Rao, A., and Yehudayoff, A.
Communication Complexity and Applications .Cambridge University Press, 2019. 3227]
Schewe, S.
Solving parity games in big steps. In
International Conferenceon Foundations of Software Technology and Theoretical Computer Science (2007),Springer, pp. 449–460.[28]
Shen, A., Uspensky, V. A., and Vereshchagin, N.
Kolmogorov complexityand algorithmic randomness , vol. 220. American Mathematical Soc., 2017.[29]
Topsoe, F.
Some inequalities for information divergence and related measures ofdiscrimination.
IEEE Transactions on information theory 46 , 4 (2000), 1602–1609.
A Reduction to finite time
Proposition 24.
Assume that a deterministic finite automaton A with q states separates EvenCycles n,d from
OddCycles n,d . Then A separates EvenCycles n,d from
OddCycles n,d in time qn .Proof. Let Q be the set of states of A and let q start be the initial state of A . Withoutloss of generality we may assume that q accept is an absorbing state of A , i.e., δ A ( q accept , a ) = q accept for all a ∈ [ n ] × { , , . . . , d } . Thus it is enough to show that for every w ∈ EvenCycles n,d there exists i ∈ { , , . . . , qn } such that δ A ( q start , w . . . w i ) = q accept . Assume that for some w = ( v , l )( v , l )( v , l ) . . . ∈ EvenCycles n,d this is false. Let G be an even game graph with at most n nodes which has an infinite path correspondingto w . Define a mapping φ : [ qn + 1] → [ n ] × Q as follows: φ ( i ) = ( v i , δ A ( q start , w . . . w i − )) . By the pigeonhole principle there are i, j ∈ [ qn + 1], i < j such that φ ( i ) = φ ( j ) . I.e., v i = v j and δ A ( q start , w . . . w i − ) = δ A ( q start , w . . . w j − ). Consider the followinginfinite path w of G . This path starts at v and goes to v i by edges encoded in w . . . w i − .Then it stays forever on a cycle starting at v i = v j and formed by edges encoded in w i . . . w j − . It is easy to see that the only states A reaches on w are: q start , δ A ( q start , w ) , . . . , δ A ( q start , w w . . . w j − ) . By our assumption δ A ( q start , w ) , . . . , δ A ( q start , w w . . . w j − ) are all different from q accept (and q start is obviously too, because otherwise A reaches q accept on every word). Onthe other hand w is an infinite path of an even game graph with at most nn