Consensus-Halving: Does It Ever Get Easier?

Aris Filos-Ratsikas, University of Liverpool, United Kingdom ([email protected])
Alexandros Hollender, University of Oxford, United Kingdom ([email protected])
Katerina Sotiraki, Massachusetts Institute of Technology, USA ([email protected])
Manolis Zampetakis, Massachusetts Institute of Technology, USA ([email protected])
Abstract
In the ε-Consensus-Halving problem, a fundamental problem in fair division, there are n agents with valuations over the interval [0, 1], and the goal is to divide the interval into pieces and assign a label "+" or "−" to each piece, such that every agent values the total amount of "+" and the total amount of "−" almost equally. The problem was recently proven by Filos-Ratsikas and Goldberg [2018, 2019] to be the first "natural" complete problem for the computational class PPA, answering a decade-old open question.

In this paper, we examine the extent to which the problem becomes easy to solve if one restricts the class of valuation functions. To this end, we provide the following contributions. First, we obtain a strengthening of the PPA-hardness result of [Filos-Ratsikas and Goldberg, 2019], to the case when agents have piecewise uniform valuations with only two blocks. We obtain this result via a new reduction, which is in fact conceptually much simpler than the corresponding one in [Filos-Ratsikas and Goldberg, 2019]. Then, we consider the case of single-block (uniform) valuations and provide a parameterized polynomial-time algorithm for solving ε-Consensus-Halving for any ε, as well as a polynomial-time algorithm for ε = 1/2. Finally, an application of our new reduction ideas yields the first hardness result for the generalization of the problem known as the Consensus-1/k-Division problem [Simmons and Su, 2003]. In particular, we prove that ε-Consensus-1/3-Division is PPAD-hard.

1 Introduction

The topic of fair division has been in the focus of research in economics and mathematics since the late 1940s and the pioneering works of Banach, Knaster and Steinhaus [Steinhaus, 1948], who developed the associated theory. The related literature contains many interesting problems, with the most celebrated perhaps being the problems of envy-free cake-cutting and equitable cake-cutting, for which a plethora of results have been obtained.
More recently, the computer science literature has made a significant contribution in studying the computational complexity of these problems, and in attempting to design efficient algorithms for several of their variants [Aziz and Mackenzie, 2016a,b; Deng et al., 2012; Arunachaleswaran et al., 2019].

Another classical problem in fair division, whose study originates back to as early as the 1940s and the work of Neyman [1946], is the Consensus-Halving problem [Simmons and Su, 2003]. In this problem, there is a set of n agents with valuation functions over the interval I = [0, 1]. The goal is to divide the interval into pieces using at most n cuts, and assign a label from {+, −} to each piece, such that every agent values the total amount of I labeled "+" and the total amount of I labeled "−" equally. Similarly to other well-known problems in fair division, the existence of a solution to the Consensus-Halving problem is always guaranteed, and can be proven via the application of a fixed-point theorem; here the Borsuk-Ulam theorem [Borsuk, 1933]. As a matter of fact, the problem is a continuous analogue of the well-known Necklace Splitting problem [Goldberg and West, 1985; Alon, 1987], whose existence of a solution is typically established via an existence proof for the continuous version.

The Consensus-Halving problem attracted attention in the computer science literature recently, due to the breakthrough results of Filos-Ratsikas and Goldberg [2018, 2019], who studied the computational complexity of the approximate version, in which there is a small allowable discrepancy ε between the values of the two portions. First, in [Filos-Ratsikas and Goldberg, 2018], the authors proved that ε-Consensus-Halving for inverse-exponential ε is complete for the computational class PPA, defined by Papadimitriou [1994]. This was the first PPA-completeness result for a "natural" problem, i.e., a computational problem that does not have a polynomial-sized circuit explicitly in its definition, answering an open question from Papadimitriou [1994], reiterated multiple times over the years [Grigni, 2001; Aisenberg et al., 2020].
Then, in [Filos-Ratsikas and Goldberg, 2019], the authors strengthened their hardness result to the case of inverse-polynomial ε, which also established the PPA-completeness of the Necklace Splitting problem for 2 thieves.

Despite the aforementioned results, the complexity of the problem is not yet well understood. Does the problem remain hard if one restricts attention to classes of simple valuation functions? Note that the reduction of [Filos-Ratsikas and Goldberg, 2018, 2019] uses instances with piecewise constant valuation functions with polynomially many pieces. On the opposite side, are there efficient algorithms for solving special cases of the problem? What if we allow a larger number of cuts?

1.1 Our Results

Towards understanding the complexity of Consensus-Halving, we present the following results.

• We prove that ε-Consensus-Halving is PPA-complete, even when the agents have two-block uniform valuations, i.e., valuation functions which are piecewise uniform over the interval and assign non-zero value on at most two pieces. This result holds even when ε is inverse-polynomial, and extends to the case where the number of allowable cuts is n + n^{1−δ}, for some constant δ > 0.

Footnote: The name "Consensus-Halving" is attributed to Simmons and Su [2003], although the problem has been studied under different names in the past. For example, it is also known as the Hobby-Rice theorem [Hobby and Rice, 1965], or continuous necklace splitting [Alon, 1987].

• We study the case of single-block valuations and provide the first algorithmic results for the problem. Specifically, we present:
  - an algorithm for any ε, whose running time is polynomial in 1/ε and a parameter d related to the maximum number of overlapping blocks;
  - a polynomial-time algorithm for 1/2-Consensus-Halving.
  We complement our main results with a simple algorithm based on linear programming, which solves the problem for single-block valuations in polynomial time, if one is allowed to use 2n − ℓ cuts, for any constant ℓ.

• As an application of the new ideas developed in our reduction, we obtain the first hardness result for a generalization of ε-Consensus-Halving, known as ε-Consensus-k-Division, for k ≥
3. Specifically, we prove that ε-Consensus-1/3-Division is PPAD-hard, when ε is inverse-exponential.

1.2 Related Work

The study of the Consensus-Halving problem originates back to the early 1940s and the work of Neyman [1946]. The first proof of existence for n cuts can be traced back to the 1965 theorem of Hobby and Rice [Hobby and Rice, 1965]. The problem was famously studied in the context of Necklace Splitting, being a continuous analogue of the latter problem; in fact, most known proofs for Necklace Splitting go via the continuous version [Goldberg and West, 1985; Alon and West, 1986]. The name Consensus-Halving is attributed to Simmons and Su [2003], who studied the continuous problem independently, and came up with a constructive proof of existence. Their construction, although yielding an exponential-time algorithm, was later adapted by Filos-Ratsikas et al. [2018] to prove that the problem lies in the computational class PPA.

The class PPA was defined by Papadimitriou [1994] in his seminal paper, in which he also defined several other important subclasses of TFNP [Megiddo and Papadimitriou, 1991], the class of Total Search Problems in NP, i.e., problems that always have solutions which are efficiently verifiable. Among those classes, the class PPAD has been very successful in capturing the complexity of many interesting computational problems [Mehta, 2014; Garg et al., 2018; Goldberg and Hollender, 2019; Chen et al., 2013], highlighted by the celebrated result of Daskalakis et al. [2009] and Chen et al. [2009] about the PPAD-completeness of computing a Nash equilibrium. On the contrary, since the definition of the class, PPA was not known to contain any natural complete problems, but rather mostly versions of PPAD-complete problems of a topological nature, defined on non-orientable spaces [Deng et al., 2016; Grigni, 2001]. In 2015, Aisenberg et al. [2020] showed that the computational version of Tucker's Lemma [Tucker, 1945], already shown to be in PPA by Papadimitriou [1994], is actually complete for the class.

Using the latter result as a starting point, Filos-Ratsikas and Goldberg [2018] proved that ε-Consensus-Halving is PPA-complete when ε is inverse-exponential. This was a breakthrough result in the following sense: it was the first PPA-completeness result for a "natural" computational problem, where the term "natural" takes the specific meaning of a problem that does not have a polynomial-sized circuit explicitly in its definition. This was later strengthened to the PPA-completeness of the ε-Consensus-Halving problem and of the well-known Necklace Splitting problem of Alon [1987] for 2 thieves [Goldberg and West, 1985; Alon and West, 1986], when ε is inverse-polynomial.

Footnote: To be precise, we provide the first such results for the version of the problem with n agents and n cuts. For a large number of cuts, Brams and Taylor [1996] present algorithms for ε-approximate solutions. Crucially, these algorithms require a number of cuts which grows as ε decreases, while our results for more than n cuts are not dependent on ε.

Footnote: This is true for the case of 2 thieves. For k thieves, the proofs go via the Consensus-1/k-Division problem instead.
In [Filos-Ratsikas and Goldberg, 2019], the authors strengthened their result to ε being inverse-polynomial, which, together with the aforementioned result from [Filos-Ratsikas and Goldberg, 2018], also provided a proof for the PPA-completeness of Necklace Splitting. As we mentioned earlier, besides being a strengthening, our PPA-hardness proof for ε-Consensus-Halving is a notable simplification over that of [Filos-Ratsikas and Goldberg, 2019], and importantly, it holds for ε which is inverse-polynomial. Therefore, we also obtain a new, simplified proof of PPA-hardness for Necklace Splitting with 2 thieves.

For constant ε, the only hardness result that we know is the PPAD-hardness of Filos-Ratsikas et al. [2018]. A subsequent work studied exact Consensus-Halving and showed that the problem is FIXP-hard. Interestingly, the authors also introduced a new computational class, called BU (for Borsuk-Ulam), and showed that the problem lies in that class, leaving open the question of whether it is BU-complete.

If we generalize the number of labels to {
1, 2, . . . , k} rather than {+, −}, and we allow (k − 1)n cuts rather than only n, then we obtain a generalization of the Consensus-Halving problem which was referred to as Consensus-k-Division in [Simmons and Su, 2003]. The existence of a solution for this problem can be proved via fixed-point theorems that generalize the Borsuk-Ulam theorem [Bárány et al., 1981; Alon, 1987]; however, very little is known about its complexity. One might feel inclined to believe that Consensus-1/k-Division is a harder problem than Consensus-Halving; however, note that in the former problem, we have more cuts at our disposal. In fact, Filos-Ratsikas and Goldberg [2019] conjectured that the complexities of the problems for different values of k are incomparable, and are characterized by different complexity classes. The complexity classes that are believed to be the most related are called PPA-k, defined also by Papadimitriou [1994] in his original paper; we refer the reader to the recent papers of [Göös et al., 2020; Hollender, 2019] for a more detailed discussion of these classes. Before our paper, virtually nothing was known about the hardness of the problem when k ≥ 3. Our new reduction ideas apply to the case k = 3, which enables us to prove our PPAD-hardness result.

2 Preliminaries

We start with the definition of the ε-approximate version of the Consensus-Halving problem.

Definition 1 (ε-Consensus-Halving). We are given some ε > 0 and a collection of continuous probability measures μ_1, . . . , μ_n on I = [0, 1]. The probability measures are given by their density functions on I. The goal is to partition the unit interval into 2 (not necessarily connected) pieces I^+ and I^− using at most n cuts, such that |μ_j(I^+) − μ_j(I^−)| ≤ ε for all agents j ∈ {1, . . . , n}.

We will refer to the probability measures μ_1, . . . , μ_n as valuation functions or simply valuations. While the existence and PPA-membership results hold more generally, in this paper we will restrict our attention to the case when the valuation functions are piecewise constant. These can be represented explicitly in the input as endpoints and heights of value blocks.

Definition 2 (Piecewise constant valuation functions). A valuation function μ_i is piecewise constant over an interval I, if the domain can be partitioned into a finite set of intervals such that the density of μ_i is constant over each interval. Piecewise constant functions are often referred to as step functions.

Definition 3 (Uniform valuation functions). We will consider the following subclasses of piecewise constant valuation functions.
- Piecewise Uniform: The domain can be partitioned into a finite set of intervals such that the density of μ_i is either v_i or 0 over each interval, for some constant v_i.
- d-block Uniform: The domain can be partitioned into a finite set of intervals, such that in at most d of those the density of μ_i is v_i and everywhere else it is 0, for some constant v_i.
- 2-block Uniform: d-block uniform valuations for d = 2.
- Single-block: d-block uniform valuations for d = 1. Here we omit the term "uniform", as there is only a single value block.

Obviously, piecewise constant ⊇ piecewise uniform ⊇ 2-block uniform ⊇ single-block.

2.1 Computational Classes

As we mentioned in the introduction, Consensus-Halving is a Total Search Problem in NP, i.e., a problem with a guaranteed solution which is verifiable in polynomial time. The corresponding class is the class TFNP [Megiddo and Papadimitriou, 1991]. Formally, a binary relation P(x, y) is in the class TFNP if for every x, there exists a y of size bounded by a polynomial in |x| such that P(x, y) holds, and P(x, y) can be verified in polynomial time. The problem is, given x, to find such a y in polynomial time.

The subclasses of TFNP that will be relevant for this paper are PPAD and PPA [Papadimitriou, 1994]. These are defined via their canonical problems, End-of-Line and Leaf respectively.

Definition 4 (End-of-Line). The input to the End-of-Line problem consists of two Boolean circuits S (for successor) and P (for predecessor) with n inputs and n outputs such that P(0^n) = 0^n ≠ S(0^n), and the goal is to find a vertex x such that P(S(x)) ≠ x or S(P(x)) ≠ x ≠ 0^n.

A problem is in PPAD if it is polynomial-time reducible to End-of-Line, and it is PPAD-complete if End-of-Line reduces to it in polynomial time. Intuitively, PPAD is defined with respect to a directed graph of exponential size, which is given implicitly as input, via the use of the predecessor and successor circuits defined above. PPAD is a subclass of PPA, which is defined similarly, but with respect to an undirected graph and a circuit that outputs the neighbours of a vertex. Its canonical computational problem is called Leaf, which is defined below.

Definition 5 (Leaf). The input to the Leaf problem is a Boolean circuit C with n inputs and at most 2n outputs, outputting the set N(y) of (at most two) neighbors of a vertex y, such that |N(0^n)| =
1, and the goal is to find a vertex x such that x ≠ 0^n and |N(x)| = 1. A problem is in PPA if it is polynomial-time reducible to Leaf, and it is PPA-complete if Leaf reduces to it in polynomial time.

2.2 High-dimensional Tucker

Our reduction in Section 3 will start from the following problem, which is an N-dimensional variant of the 2D-Tucker problem [Papadimitriou, 1994; Aisenberg et al., 2020].

Definition 6 (high-D-Tucker). An instance of high-D-Tucker consists of a labeling λ : [8]^N → {±1, . . . , ±N} computed by a Boolean circuit. We further assume that the labeling is antipodally anti-symmetric (i.e., for all x on the boundary of [8]^N it holds that λ(x̄) = −λ(x), where x̄_i = 9 − x_i for all i), which can be enforced syntactically. A solution consists of two points x, y ∈ [8]^N with λ(x) = −λ(y) and ‖x − y‖_∞ ≤ 1.

The known PPA-hardness result for this problem uses the domain [7]^N instead of [8]^N. We adapt the hardness to the case of Definition 6 in the theorem below.

Theorem 1. high-D-Tucker is PPA-complete.

Proof.
Papadimitriou [1994] has shown that the problem lies in PPA. In order to show PPA-hardness, we use the fact that Filos-Ratsikas and Goldberg [2019] have proved that the problem is PPA-hard on the domain [7]^N (instead of [8]^N), by using a standard snake-embedding technique [Chen et al., 2009; Deng et al., 2017].

Let λ be an instance of high-D-Tucker but on the domain [7]^N instead of [8]^N. We will reduce this to an instance λ' of high-D-Tucker (on our standard domain [8]^N). In the two-dimensional case (N = 2), this amounts to taking the grid [7]^N and duplicating the central vertical and horizontal lines of the grid (thus also duplicating the labels at these grid points).

Formally, we proceed as follows. Define the operator ·̂ such that for any r ∈ [8]:

  r̂ := r − 1 if r ≥ 5,   and   r̂ := r if r ≤ 4.

For x = (x_1, . . . , x_N) ∈ [8]^N, let x̂ = (x̂_1, . . . , x̂_N) ∈ [7]^N. Now define λ' such that for all x ∈ [8]^N, λ'(x) := λ(x̂). This is well-defined, and given a circuit that computes λ, we can construct a circuit for λ' in polynomial time.

Let us first show that if λ is antipodally anti-symmetric on [7]^N, then λ' is antipodally anti-symmetric on [8]^N. Consider any x ∈ ∂([8]^N), i.e., there exists j ∈ [N] such that x_j ∈ {1, 8}. Note that we then have x̂ ∈ ∂([7]^N), because x̂_j ∈ {1, 7}. Thus, we know that λ(8 − x̂_1, . . . , 8 − x̂_N) = −λ(x̂_1, . . . , x̂_N). Using the key observation that the operator ·̂ maps 9 − x_i to 8 − x̂_i for all i ∈ [N], we obtain that

  λ'(9 − x_1, . . . , 9 − x_N) = λ(8 − x̂_1, . . . , 8 − x̂_N) = −λ(x̂_1, . . . , x̂_N) = −λ'(x_1, . . . , x_N).

It remains to show that given any solution to λ', we can retrieve in polynomial time a solution to λ. Let x, y ∈ [8]^N be such that λ'(x) = −λ'(y) and ‖x − y‖_∞ ≤ 1. Then, we immediately obtain that λ(x̂) = −λ(ŷ), and it remains to show that ‖x̂ − ŷ‖_∞ ≤ 1. Consider any i ∈ [N]. If x_i, y_i ≥ 5 or x_i, y_i ≤ 4, then in both cases |x̂_i − ŷ_i| = |x_i − y_i| ≤ 1. If x_i ≥ 5 and y_i ≤ 4, then |x_i − y_i| ≤ 1 implies x_i = 5 and y_i = 4, so that |x̂_i − ŷ_i| = |4 − 4| = 0 ≤ 1. The remaining case is analogous. Thus, we have shown that x̂, ŷ form a solution to λ. ∎

3 PPA-Hardness of Consensus-Halving

In this section, we present our first result, regarding the PPA-hardness of Consensus-Halving.

Theorem 2. ε-Consensus-Halving is PPA-hard, when ε is inverse-polynomial and the agents have two-block uniform valuations.

As we mentioned in the Introduction, Theorem 2 is a strengthening of the result of Filos-Ratsikas and Goldberg [2019], which requires the valuation functions to have a polynomial number of value blocks, and which is seemingly very difficult to extend to two-block uniform valuations. To achieve this stronger result, we have to develop new gadgetry, based on a new interpretation of the cut positions with respect to the positions of points in the domain of high-D-Tucker. As it turns out, this new interpretation allows us to obtain a new proof of the main theorem of Filos-Ratsikas and Goldberg [2019], one which is conceptually much simpler, even though it actually applies to more restricted valuations.

Before we proceed, we first remark the following. In [Filos-Ratsikas et al., 2018] (where the PPAD-hardness of ε-Consensus-Halving was proven for constant ε), the authors presented a simple argument that allowed them to extend their hardness result to n + c cuts, where c is some constant. The idea is to make c + 1 completely disjoint copies of the instance of ε-Consensus-Halving, and solve it using n + c cuts. One of the copies would have to be solved using at most n cuts, which is a PPAD-hard problem. We observe that the same principle applies generically (beyond PPAD-hardness, and also to the results of Filos-Ratsikas and Goldberg [2018, 2019]), and in fact extends to n + n^{1−δ} cuts, where δ > 0 is some constant.

Corollary 3. ε-Consensus-Halving is PPA-hard, when ε is inverse-polynomial and the agents have two-block uniform valuations, even when one is allowed to use n + n^{1−δ} cuts, for constant δ > 0.

We are now ready to prove Theorem 2.
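Before moving on, the embedding used in the proof of Theorem 1 can be sanity-checked computationally. The script below is an illustrative sketch, not part of the formal proof: the choice N = 2 and the randomly generated labeling are our own toy assumptions. It builds an antipodally anti-symmetric instance λ on [7]^2, constructs λ' on [8]^2 via the hat operator, verifies the boundary condition, and maps a brute-force solution of λ' back to a solution of λ:

```python
import itertools
import random

random.seed(0)
N = 2               # toy dimension (illustrative choice)
SMALL, BIG = 7, 8   # lambda lives on [7]^N, lambda' on [8]^N

def hat(r):
    """Collapse [8] onto [7]: coordinates 4 and 5 both map to 4."""
    return r - 1 if r >= 5 else r

# Build a random antipodally anti-symmetric labeling lam on [7]^N:
# on the boundary we enforce lam(8 - x) = -lam(x) by mirroring.
lam = {}
labels = [s * c for c in range(1, N + 1) for s in (+1, -1)]
for x in itertools.product(range(1, SMALL + 1), repeat=N):
    if x in lam:
        continue
    lam[x] = random.choice(labels)
    if any(xi in (1, SMALL) for xi in x):            # boundary point
        anti = tuple(SMALL + 1 - xi for xi in x)     # antipode on [7]^N
        lam[anti] = -lam[x]

def lam_prime(x):
    """The instance on [8]^N obtained in the proof of Theorem 1."""
    return lam[tuple(hat(xi) for xi in x)]

# lam' inherits antipodal anti-symmetry on [8]^N.
for x in itertools.product(range(1, BIG + 1), repeat=N):
    if any(xi in (1, BIG) for xi in x):
        anti = tuple(BIG + 1 - xi for xi in x)
        assert lam_prime(anti) == -lam_prime(x)

# Brute-force a solution of lam' (one exists by Tucker's lemma) and
# map it back through the hat operator, as in the proof.
points = list(itertools.product(range(1, BIG + 1), repeat=N))
x, y = next((x, y) for x in points for y in points
            if lam_prime(x) == -lam_prime(y)
            and max(abs(a - b) for a, b in zip(x, y)) <= 1)
hx = tuple(hat(a) for a in x)
hy = tuple(hat(a) for a in y)
assert lam[hx] == -lam[hy]
assert max(abs(a - b) for a, b in zip(hx, hy)) <= 1
```

The assertions at the end mirror exactly the two properties established in the proof: the mapped pair has complementary labels and remains at ℓ∞-distance at most 1.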
We first provide an overview of the reduction and we highlight the main simplifications over the proof of Filos-Ratsikas and Goldberg [2019]. Then we proceed to formally present the proof of Theorem 2.
We are given an instance of high-D-Tucker, namely a labeling λ : [8]^N → {±1, . . . , ±N} computed by a Boolean circuit. We will show how to construct an instance of Consensus-Halving in polynomial time, such that any ε-approximate solution yields a solution to the high-D-Tucker instance (for some inverse-polynomial ε). The complexity will be measured with respect to the representation size of the high-D-Tucker instance, i.e., the size of the circuit λ (which is also at least N).

For clarity and convenience, the instance of Consensus-Halving we will construct will not be defined on the domain [0, 1], but instead on some interval [0, M], where M is bounded by a polynomial in the size of the high-D-Tucker circuit λ. It is easy to transform this into an instance on [0, 1] by just re-scaling the valuation functions, namely scaling down the positions of the blocks by M and scaling up the heights of the blocks by M.

Overview.
Let us first provide a very high-level description of the instance we construct. Similarly to [Filos-Ratsikas and Goldberg, 2019], the left-most end of the instance will be the Coordinate-Encoding region. In any solution S to the instance, the way in which this region is divided amongst the labels + and − will represent a point x ∈ [−1, 1]^N. A circuit-simulator C will read in the coordinates of x, perform some computations (including a simulation of λ), and output N values [C(x)]_1, . . . , [C(x)]_N ∈ [−1, 1]. The circuit-simulator will consist of a set of agents, and each agent will implement one gate/operation of the circuit. Unfortunately, the circuit-simulator can sometimes fail to perform the desired computation, so instead of one circuit-simulator C we will actually have a polynomial number p(N) of circuit-simulators C_1, . . . , C_{p(N)}. Each of these circuit-simulators will be performing (almost) the same computation. Finally, we will introduce a Feedback region, where N feedback agents f_1, . . . , f_N will implement the feedback mechanism. For each i ∈ {1, . . . , N}, feedback agent f_i will ensure that (1/p(N)) · Σ_{j=1}^{p(N)} [C_j(x)]_i ≈ 0. Namely, it will ensure that the average of the outputs in dimension i is close to zero. We will show that from any solution S to the Consensus-Halving instance, we obtain a solution to the original high-D-Tucker instance.

Encoding of a value in [−1, 1]. Given any solution S of our instance, every interval I of length 1 of the domain encodes a value in [−1, 1] as follows. Let I^+ and I^− denote the subsets of I labeled respectively + and − in the solution S. Then the value encoded by I, v_S(I), is given by μ(I^+) − μ(I^−), where μ is the Lebesgue measure on ℝ. Since there are at most n cuts (where n is the number of agents in the instance), I^+ is the union of at most n + 1 sub-intervals of I, and μ(I^+) is simply the sum of the lengths of these intervals (and the same holds for I^−). It is easy to see that v_S(I) = 0 corresponds to I being perfectly shared between + and − in S, whereas v_S(I) = +1 corresponds to all of I being labeled +. We will drop the subscript S and just use v(I) in the remainder of this exposition.
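For concreteness, the encoding v(I) can be computed directly from the cut positions and the label of the leftmost piece. The helper below is a hypothetical illustration; the function name and input conventions are our own, not from the paper:

```python
from typing import List, Tuple

def encoded_value(I: Tuple[float, float], cuts: List[float], first_label: int) -> float:
    """Compute v(I) = mu(I+) - mu(I-) for a unit-length interval I.

    `cuts` holds the cut positions over the whole domain in increasing order,
    and `first_label` (+1 or -1) is the label of the piece to the left of the
    first cut; labels alternate from piece to piece.
    """
    a, b = I
    boundaries = [a] + [c for c in cuts if a < c < b] + [b]
    # Label of the piece containing the left endpoint of I:
    # it flips once for every cut at or before a.
    label = first_label * (-1) ** sum(1 for c in cuts if c <= a)
    value = 0.0
    for left, right in zip(boundaries, boundaries[1:]):
        value += label * (right - left)
        label = -label
    return value
```

For instance, with cuts at 0.5 and 1.75 and the leftmost piece labeled +, the unit interval [0, 1] is split evenly between the two labels (v = 0), while the interval [1, 2] encodes the value −0.5.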
Coordinate-Encoding region.

The sub-interval [0, N] of the domain is called the Coordinate-Encoding region. Indeed, the way in which this region is subdivided amongst the + and − labels in a solution S will encode the coordinates of a point x ∈ [−1, 1]^N. In more detail, x_1 ∈ [−1, 1] will be given by v([0, 1]), i.e., the value encoded by the interval [0, 1]. Similarly, x_2 ∈ [−1, 1] will be given by v([1, 2]), x_3 ∈ [−1, 1] by v([2, 3]), etc.

Constant-Creation region.
The sub-interval [N, N + p(N)] of the domain is called the Constant-Creation region. This region will be used to create the constants that the circuit-simulators need. The circuit-simulator C_1 will read in the value v([N, N + 1]) =: const_1 and will assume that it corresponds to the value +1. Note that given the constant +1, the circuit-simulator can create any constant ζ ∈ [−1, 1] by using a ×ζ-gate (multiplication by the constant ζ). Similarly, the circuit-simulator C_2 will read in the value v([N + 1, N + 2]) =: const_2 and use it as the constant +1, and so on for C_3, C_4, . . . , C_{p(N)}.

If S is a solution such that the Constant-Creation region does not contain any cut, then the whole region will have the same label, and without loss of generality we can assume that this label is +. Thus, in such a solution S, all the circuit-simulators will indeed read in the constant +1, i.e., const_j = +1 for j = 1, . . . , p(N).

Circuit-Simulation regions.
For each j ∈ {1, 2, . . . , p(N)}, the sub-interval [N + p(N) + (j − 1)q, N + p(N) + jq] of the domain will be used by the circuit-simulator C_j. The length q used by every circuit-simulator will be upper-bounded by some polynomial in N and the size of the circuit λ. Every circuit-simulator C_j will read in the coordinates x_1, . . . , x_N ∈ [−1, 1] of the point x from the Coordinate-Encoding region, as well as the value const_j ∈ [−1, 1] from the Constant-Creation region (and assume that it corresponds to the constant +1). Then, C_j will perform some computations, including a simulation of the Boolean circuit λ, and finally output N values [C_j(x, const_j)]_1, . . . , [C_j(x, const_j)]_N ∈ [−1, 1] into the Feedback region.

Feedback region.

The Feedback region is located at the right end of the domain and is subdivided into N intervals F_1, . . . , F_N of length p(N) each. For every j ∈ [p(N)], let F_i(j) denote the j-th sub-interval of length 1 of F_i. The i-th output of circuit-simulator C_j will be located in sub-interval F_i(j). In other words, v(F_i(j)) = [C_j(x, const_j)]_i. Every interval F_i will have a corresponding feedback agent f_i, who will ensure that the average of all the outputs in interval F_i is close to zero. In more detail, agent f_i will have a single block of value that covers interval F_i. As a result, this agent will be satisfied only if (1/p(N)) · Σ_{j=1}^{p(N)} v(F_i(j)) ∈ [−ε, ε].
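Putting the pieces together, the positions of the various regions on the domain can be sketched as follows. This is a hypothetical layout helper: the placement of the Feedback region immediately after the simulator regions, and the resulting total length M, are our own illustrative assumptions based on the description above:

```python
def region_layout(N, p, q):
    """Sub-intervals of the construction's domain [0, M] (illustrative).

    N: dimension of the Tucker instance; p: number of circuit-simulators
    p(N); q: length of each circuit-simulation region.
    """
    layout = {
        "coordinate_encoding": (0, N),
        "constant_creation": (N, N + p),
    }
    start = N + p
    layout["circuit_simulators"] = [
        (start + (j - 1) * q, start + j * q) for j in range(1, p + 1)
    ]
    fb = start + p * q
    # Feedback region: N intervals F_1, ..., F_N of length p each.
    layout["feedback"] = [(fb + (i - 1) * p, fb + i * p) for i in range(1, N + 1)]
    layout["M"] = fb + N * p
    return layout
```

For example, region_layout(2, 3, 5) places the Coordinate-Encoding region at [0, 2], the Constant-Creation region at [2, 5], three simulator regions of length 5 each, and two feedback intervals of length 3, for a total length M = 26.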
Stray Cuts.

Any agent belonging to a circuit-simulator performs a gate-operation. In Section 3.3, we introduce the different types of gates and how they are implemented by agents. One important feature of the agents implementing the gates is that every such agent ensures that at least one cut must lie in a specific interval J of the domain (in any solution S). By construction, we will make sure that these intervals are pairwise disjoint for different agents. Thus, every agent introduced as part of a circuit-simulator will force one cut to lie in a specific interval.

The only agents that are not part of a circuit-simulator are the feedback agents f_1, . . . , f_N. Since the number of cuts in any solution is at most the number of agents, there are at most N cuts that are not constrained to lie in some specific interval. We call these the free cuts. The free cuts can theoretically "go" anywhere in the domain and interfere with the correct functioning of the circuit-simulators or the Constant-Creation region. The expected behavior of these N free cuts is that they should lie in the Coordinate-Encoding region. As such, any of the free cuts that lies outside the Coordinate-Encoding region will be called a stray cut (following [Filos-Ratsikas and Goldberg, 2019]).

Observation 1. If there is at least one stray cut, then the point x ∈ [−1, 1]^N encoded by the Coordinate-Encoding region lies on the boundary of [−1, 1]^N (i.e., there exists i such that |x_i| = 1).

Stray Cut interference.
There are two ways for a stray cut to cause trouble:

1. It can corrupt a circuit, i.e., interfere with the correct functioning of the gates of a circuit-simulator. If the cut lies in the region of circuit-simulator C_j, then it can make a gate output the wrong result (i.e., not perform the desired operation). If the cut lies in the Constant-Creation region and intersects the interval that is used by circuit-simulator C_j to read in the constant const_j, then it can have an effect such that |const_j| ≠ 1. However, in any case, a single stray cut can only interfere with one circuit-simulator in this way. Thus, at most N circuit-simulators can suffer from this kind of interference. We will choose p(N) large enough so that these corrupted circuit-simulators have a very limited influence.

2. It can interfere with the sign of const_j for many circuit-simulators C_j. Indeed, even a single stray cut can ensure that half of our circuit-simulators read in the constant +1 and the other half the constant −1. We will show that this is actually not a problem, and that it does not produce bogus solutions. Since stray cuts can only occur when x lies on the boundary of [−1, 1]^N (Observation 1), the Tucker boundary conditions will be important for this.

Stray cuts that end up in the Feedback region do not have any effect. Indeed, the feedback agents f_1, . . . , f_N are immune to stray cuts. They always ensure that the average of the outputs is close to zero. Thus, a stray cut can only influence the outputs that a feedback agent sees (as detailed above), but not its functionality.

Figure 1: An overview of the different regions defined in the reduction. The regions corresponding to different Circuit-Simulators are color-coded. An arrow indicates that the region where it is pointing receives inputs from the region from where it is originating. On the left, the different types of agents are shown, namely the feedback agents, as well as the agents corresponding to the different Circuit-Simulators. The Coordinate-Encoding region and the Feedback region are divided into sub-intervals, indicated by vertical gray lines, as detailed in Section 3.1.

Circuit-Simulator failure.
There are two ways in which a circuit-simulator can fail to have the desired output:

1. It is corrupted by a stray cut. This can happen to at most N circuit-simulators.

2. It can fail in extracting the binary bits from (a point close to) x. We will ensure that this can happen to at most N circuit-simulators.

Thus, at most 2N circuit-simulators fail, i.e., at least p(N) − 2N circuit-simulators have the desired output.

A much cleaner domain.
The PPA-hardness of high-D-Tucker was already established in [Filos-Ratsikas and Goldberg, 2019], and our version can be obtained from that one using minor modifications; see Theorem 1. The corresponding result of [Filos-Ratsikas and Goldberg, 2019] is a standard application of the "snake-embedding" technique developed in [Chen et al., 2009]. However, the reduction in [Filos-Ratsikas and Goldberg, 2019] requires (a) a further constraint on how the domain is colored, and more importantly (b) the embedding of the high-D-Tucker instance into a Möbius-type simplex domain, in which two facets have been "identified" with each other; one can envision a high-dimensional Möbius strip with an instance of high-D-Tucker in its center, embedded in a high-dimensional simplex. A key step in the reduction is the extension of the labeling of high-D-Tucker to the remainder of the domain, in a way that does not introduce any artificial solutions, and such that solutions to high-D-Tucker can be traced back from solutions on other points of the domain. For this purpose, the authors of [Filos-Ratsikas and Goldberg, 2019] develop a rather complicated coordinate transformation, applied to the inputs read from the positions of the cuts. They establish how to compute the transformation and its inverse in polynomial time, and how distances in the two coordinate systems (before and after the transformation) are polynomially related. In contrast, our reduction works with the rather clean domain of high-D-Tucker, avoiding all the unnecessary technical clutter of the domain used in [Filos-Ratsikas and Goldberg, 2019].
Simpler gadgetry.
Another complication of the proof in [Filos-Ratsikas and Goldberg, 2019] is the use of blanket-sensor agents, which constrain the positions of the cuts in the coordinate-encoding region, to ensure that solutions to ε-Consensus-Halving do not encode points that lie too far from a specific region in the "middle" of the domain, called the "significant region"; this is achieved via appropriate feedback provided by these agents to the coordinate-encoding agents. To make sure that the blanket-sensor agents do not "cancel" each other, extra care must be taken on how the feedback of these agents is designed, giving rise to a series of technical lemmas. Our reduction does not need to use any such agents and is therefore significantly simpler in that regard as well.

Label sequence robustness.
The reduction in [Filos-Ratsikas and Goldberg, 2019] requires knowledge of the label sequence, i.e., whether the first cut that occurs in the coordinate-encoding region has the label + or − on its left side. This is fundamental for the design of the gates, as they read the inputs as the distances from the left endpoints of the corresponding designated intervals, unlike our interpretation, which measures the difference between the value of the two labels. Thus, due to the disorientation of the domain and to deal with sign flips that happen because of the stray cuts, the authors of [Filos-Ratsikas and Goldberg, 2019] employ a pre-processing circuit that uses the first coordinate-detecting agent as a reference agent when performing computations. This is again not needed in our case; our equivariant gates ensure that even when the corresponding point lies on the boundary of the high-D-Tucker domain, the output is computed correctly in a much simpler way.

In this section we show how to construct gates which perform various operations on numbers in [−1, 1] with error at most g(ε), where ε is the error we allow in a Consensus-Halving solution. Some of these gates will be immune to "corruption" by a stray cut, while others might get corrupted and not work properly. Recall that in any solution S, any unit-length interval I of the domain represents a value v(I) ∈ [−1, 1]. We will now show how to perform computations with these values. We let T : R → [−1, 1], z ↦ max(−1, min(1, z)), i.e., T[z] is the truncation of z ∈ R into [−1, 1]. We will also abuse notation and use T[x] = (T[x_1], . . . , T[x_N]) for x ∈ R^N. At this stage, we assume that ε is sufficiently small for the gates to work.

Figure 2: A
Multiplication by −1 [G×(−1)] gate. The gray shaded regions are not part of the agent's valuation on I, but are shown for clarity. In the example, the input to the gate is negative. Two different sets of cuts are shown (top and bottom), to emphasize the fact that the gadgets are resilient to flips in the parity of the cut sequence. In both cases, the parity sequence on the left is +/−; on the top, the parity sequence on the right is also +/−, and the cut is placed in the rightmost half of O, whereas on the bottom, the parity sequence on the right is −/+, and the cut is placed in the leftmost half of O.

We will design basic gates, namely Multiplication by −1 [G×(−1)], Constant ζ ∈ [−1, 1] ∩ Q [G_ζ] and Addition [G+], and additional gates, namely Copy [G_copy], Multiplication by k ∈ N [G×k] and Boolean Gates: Negation [G¬], AND [G∧] and OR [G∨].

δ-Volume Gate [G_δ]: Let δ ∈ [ε, 1]. Let I and O be disjoint intervals of length 1. The agent for this gate has a block of length 1 − δ and height 1/(2 − δ) centered in interval I and a block of length 1 and (the same) height 1/(2 − δ) in interval O. It is easy to check that since δ ≥ ε, at least one cut must lie strictly within O in any solution. Furthermore, this gate has the notable property that it cannot be corrupted. From this construction, we obtain the following gates:

- Multiplication by −1 [G×(−1)]: Set δ = ε. Then, in any solution it holds that v(O) = T[−v(I) ± ε]. See Fig. 2 for an illustration.

- Constant ζ ∈ [−1, 1] ∩ Q [G_ζ]: To create a constant, we use an input interval I in the Constant-Creation region. Let j be such that this gate is part of circuit-simulator C_j. In this case, we use as input I the unit interval of the Constant-Creation region corresponding to const_j (so that v(I) = const_j).
  – For ζ ≤ 0, we let δ = max(1 + ζ, 2ε) and obtain that v(O) = T[ζ ± ε].
  – For ζ > 0, use G_{−ζ} and then G×(−1), for a total error of at most 8ε.
  Note that if |v(I)| = 1, then this gate can also be used to obtain T[v(I) × ζ ± ε]. See Fig. 3 for an illustration.

Figure 3: A Constant ζ ∈ [−1, 1] ∩ Q [G_ζ] gate. The gray shaded regions are not part of the agent's valuation on I, but are shown for clarity. I is part of the Constant-Creation region and therefore (in a well-behaved case) it is not intersected by any cuts. The value block of the agent on the left is labeled entirely by "+", and the cut on the right side assumes the corresponding position in favor of "−", to balance out the discrepancy. In the example shown, ζ is negative.

Addition [G+]: Let I_1 and I_2 be the two length-1 intervals encoding the two inputs. Let I′ be an interval of length 2 that is disjoint from I_1 and I_2. Let J be an interval of length 3 that is disjoint from I_1, I_2 and I′. We first use a G×(−1)-gate with input I_1 and output I′[0, 1], and another one with input I_2 and output I′[1, 2]. To compute addition we create a new agent with a valuation function that has height 1/5 in I′ and in J, and height 0 everywhere else. Note that since ε < 1/5, at least one cut must lie strictly within J. We say that the gate is corrupted if there are at least two cuts lying strictly in the interval J. The output of the gate will be in interval O = J[1, 2]. If the gate is not corrupted, then by construction we have v(O) = T[v(I_1) + v(I_2) ± ε]. See Fig. 4 for an illustration. Using the gates presented above, we can also implement the following operations.
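Before listing them, the value arithmetic established so far can be made concrete with a small numerical sketch (an illustration added here, not part of the construction): each gate is modeled as returning the truncated result up to an adversarial error of at most ε, mirroring v(O) = T[−v(I) ± ε] for the G×(−1) gate and v(O) = T[v(I_1) + v(I_2) ± ε] for the G+ gate.

```python
import random

EPS = 1e-3  # allowed Consensus-Halving error (assumed small)

def T(z):
    """Truncation of z into [-1, 1], as in the text."""
    return max(-1.0, min(1.0, z))

def noise(eps=EPS):
    """Adversarial per-gate error, modeled here as uniform in [-eps, eps]."""
    return random.uniform(-eps, eps)

def gate_neg(v):
    """Multiplication by -1: v(O) = T[-v(I) +/- eps]."""
    return T(-v + noise())

def gate_add(v1, v2):
    """Addition (when not corrupted): v(O) = T[v1 + v2 +/- eps]."""
    return T(v1 + v2 + noise())

def gate_copy(v):
    """Copy via two successive multiplication-by-(-1) gates (error up to 2*eps)."""
    return gate_neg(gate_neg(v))

# Composing gates accumulates error: copying v and adding its negation
# should give (a truncation of) roughly zero, up to a few multiples of EPS.
v = 0.6
out = gate_add(gate_copy(v), gate_neg(v))
assert abs(out) <= 5 * EPS
```

The per-gate errors add up along a chain, which is why the text tracks an error budget g(ε) and treats the k-fold gates separately.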
Copy [G_copy]: We can copy a value by using two successive G×(−1)-gates. The error will be at most 8ε. This gate cannot be corrupted.

Multiplication by k ∈ N [G×k]: This can be implemented by a chain of k − 1 addition gates, with total error at most (k − 1)g(ε).

Boolean Gates: For the Boolean gates, the bits {0, 1} will be represented by the values −1 and +1, respectively. We say that v(I) ∈ [−1, 1] is a perfect bit if v(I) ∈ {−1, 1}. The Boolean gates we will construct have the following very nice property: if all inputs are perfect bits, then the output is a perfect bit (and is the correct output).

- Negation Gate [G¬]: This can be done by first using a G×(−1)-gate and then a G×k-gate (for a suitable constant k). The error will be at most 20ε, so this will work as intended as long as ε is sufficiently small.

- AND Gate [G∧]: Let b_1 and b_2 be the two inputs. We perform the computation T[((b_1 + b_2) − 1) × k] (for a suitable constant k), using G+-gates, a G_{−1}-gate and a G×k-gate. The error is bounded by a constant multiple of ε, so the gate will work as intended as long as ε is sufficiently small.

Figure 4: An Addition [G+] gate. An arbitrary sequence of labels is shown. First, we use two G×(−1)-gates for the two inputs, and the outputs of those gates are "read together" in a single interval I′. The output of the G+ gate is read from O. In the example, the inputs sum to less than −1; the cut in J is placed appropriately to balance out the excess of "+" in I′, but the output of the gate is −1, i.e., the gate applies truncation to the output of the addition.

- OR Gate [G∨]: This can easily be obtained by using G∧ and G¬.

Note that all the arithmetic gates we have presented have error at most g(ε), with the exception of G×k, which has an error of (k − 1)g(ε). Thus, we have to be careful whenever we use this gate for some non-constant k.

Remark 1 (Equivariant Gates). Note that the operation performed by any gate is equivariant. Namely, if we flip the sign of all inputs, the same output is still valid, but with a flipped sign. For G×(−1) and G+ gates this is obvious. For G_ζ, we have to recall that const_j is the input to the gate. With this interpretation, the equivariance is once again obvious. For the G∧-gate the equivariance is a bit more subtle. Note that it uses a G_{−1}-gate, which uses const_j. Thus, in this case again, if we flip the sign of the inputs b_1, b_2 and const_j, the sign of the output is flipped too. This property of the gates is not a coincidence. It follows from the way we are encoding values. Note that in any solution S, if we swap the labels + and −, the solution remains valid. The only thing that has changed is that for any gate, the sign of all inputs and outputs has been flipped. It follows that any circuit that we construct out of these gates will be equivariant. Namely, the computation will still be valid if we flip the sign of all inputs (including const_j) and all outputs.

In this section we describe the functionality of the circuit-simulators and what it achieves. Recall that every circuit-simulator C_j reads in inputs x_1, . . . , x_N ∈ [−1, 1] from the Coordinate-Encoding region and const_j ∈ [−1, 1] from the Constant-Creation region. The circuit-simulator then performs some computations and outputs [C_j(x, const_j)]_1, . . . , [C_j(x, const_j)]_N ∈ [−1, 1] into the Feedback region. If the circuit-simulator C_j is corrupted (i.e., one of the stray cuts interferes with it), then we will not claim anything about the outputs of C_j. In that case, we will only use the fact that all the outputs must lie in [−1, 1] (which is guaranteed by the way values are represented). If the circuit-simulator C_j is not corrupted, then we know that all gates will perform correct computations, and we also know that const_j ∈ {−1, +1}. In our construction of C_j, we will be assuming that const_j = +1. However, we will show later that even if const_j = −1, C_j will output something useful.

Phase 1: equi-angle displacement.
In the first phase, C_j applies a small displacement to its input x. Namely, for every i ∈ [N], C_j computes x̂_i ≈ T[x_i + jα], where α > 0 is a suitably small (inverse-polynomial) step chosen so that p(N)α ≤ 1/4. This is achieved by using a G_{jα}-gate to create the constant jα (by using const_j), followed by a G+-gate to perform the addition x_i + jα. However, since ε > 0, the gates might make some error in the computations. Nevertheless, by construction of the gates, we immediately obtain:

Claim 1. Assume that the circuit-simulator C_j is not corrupted and const_j = +1. Then the equi-angle displacement phase outputs x̂_i = T[x_i + jα ± g(ε)] for all i.

Phase 2: bit extraction.
In the second phase, C_j extracts the three most significant bits from each x̂_i ∈ [−1, 1]. These three bits tell us where x̂_i lies in [−1, 1], namely in which of the eight possible standard intervals of length 1/4: [−1, −3/4], [−3/4, −1/2], [−1/2, −1/4], [−1/4, 0], [0, 1/4], [1/4, 1/2], [1/2, 3/4], [3/4, 1]. Instead of the usual {0, 1}, our bits will take values in {−1, +1}. The first bit b_1 ∈ {−1, +1} indicates whether x̂_i is positive or negative. If b_1 = +1, then x̂_i ∈ [0, 1]. If b_1 = −1, then x̂_i ∈ [−1, 0]. The second bit b_2 ∈ {−1, +1} then indicates in which half of that interval x̂_i lies. Thus, if b_1 = +1 and b_2 = −1, then x̂_i ∈ [0, 1/2]. Note that some of the bits are not well-defined if x̂_i ∈ B, where B = {0, ±1/4, ±1/2, ±3/4}. Thus, we cannot expect the bit extraction to succeed in this case. In fact, the bit extraction will fail if x̂_i is sufficiently close to any point in B.

The bit extraction for x̂_i is performed as follows, where K := ⌈1/g(ε)⌉:
1. b_1 ≈ T[x̂_i × K] (use a G×K-gate)
2. x̂′_i ≈ T[x̂_i − b_1/2] (use a G_{−1/2}-gate and a G+-gate)
3. b_2 ≈ T[x̂′_i × K]
4. x̂″_i ≈ T[x̂′_i − b_2/4]
5. b_3 ≈ T[x̂″_i × K]

Note that to compute −b_1/2 and −b_2/4 we just use the corresponding constant gate, namely G_{−1/2} and G_{−1/4} (with input b_1 or b_2 respectively, instead of const_j). The computation may be incorrect if b_1 or b_2 are not in {−1, 1}, but in that case the bit-extraction has already failed anyway. We can show that the bit-extraction succeeds if x̂_i is sufficiently far away from any point in B. Letting dist(t, B) = min_{p∈B} |t − p|, we obtain:

Claim 2. Assume that the circuit-simulator C_j is not corrupted and const_j = +1. If dist(T[x_i + jα], B) ≥ 8g(ε), then the bit-extraction phase for x̂_i outputs the correct bits for T[x_i + jα].

Proof. Since dist(T[x_i + jα], B) ≥ 8g(ε), it follows by Claim 1 that dist(x̂_i, B) ≥ 7g(ε), and in particular |x̂_i| ≥ 7g(ε). In step 1 we use a G×K-gate on input x̂_i. By construction of the gate, it follows that b_1 = T[x̂_i × K ± (K − 1)g(ε)] = T[x̂_i × K ± 1], where we used |(K − 1)g(ε)| ≤ 1. Since |x̂_i| ≥ 7g(ε), it holds that |x̂_i × K| ≥ 7 ≥ 2. Thus, b_1 ∈ {−1, +1} is the correct bit for x̂_i. In step 2 we compute x̂′_i = T[x̂_i − b_1/2 ± g(ε)]. Thus, we have dist(x̂′_i, B) ≥ 6g(ε), and in particular |x̂′_i| ≥ 6g(ε), which implies that b_2 = T[x̂′_i × K ± 1] ∈ {−1, +1} is the correct first bit for x̂′_i and the correct second bit for x̂_i. In step 4 we compute x̂″_i = T[x̂′_i − b_2/4 ± g(ε)]. Thus, |x̂″_i| ≥ 5g(ε), which implies that b_3 = T[x̂″_i × K ± 1] ∈ {−1, +1} is the correct first bit for x̂″_i, i.e., the correct second bit for x̂′_i and the correct third bit for x̂_i. Finally, note that since dist(T[x_i + jα], B) ≥ 8g(ε), T[x_i + jα] and x̂_i must lie in the same standard interval of length 1/4, i.e., they must have the same three bits.

Phase 3: simulation of λ. Recall that λ : [8]^N → {±
1, . . . , ±N} is the Boolean circuit computing the high-D-Tucker labeling. We interpret [8] as a subdivision of [−1, 1] into standard intervals of length 1/4. Namely, 1 corresponds to [−1, −3/4], 2 to [−3/4, −1/2], etc. Thus, [8]^N can be interpreted as a subdivision of [−1, 1]^N into hypercubes of side-length 1/4. With this in mind, we define λ : ([−1, 1] \ B)^N → {±1, . . . , ±N}, so that for any x ∈ ([−1, 1] \ B)^N, λ(x) is the label that λ assigns to the hypercube containing x.

We can assume that the inputs of λ consist of three bits each, such that the number represented by these three bits yields an element in [8] (by using [8] ≡ {0, 1, . . . , 7}). The three bits b_1, b_2, b_3 ∈ {−1, +1} extracted from T[x_i + jα] tell us exactly in which interval T[x_i + jα] lies. Note that if we were to map those bits to {0, 1} (where −1 ↦ 0 and +1 ↦ 1), the string b_1 b_2 b_3 would correspond to the number associated with the interval and would thus be the correct corresponding input to the circuit. We re-interpret the circuit λ as working on bits {−1, +1}, where −1 plays the role of 0, and simulate it inside C_j with our Boolean gates. As long as the inputs to every gate are perfect bits (i.e., in {−1, +1}), the output of the gate will also be a perfect bit, and will correspond to the result of the operation computed by the gate. The inputs to the circuit will be exactly the bits obtained in the bit extraction phase for each x̂_i. Thus, it follows that if the bit extraction phase succeeds, then the simulation of λ will always have a correct output. In other words, using Claim 2, we obtain:

Claim 3. Assume that the circuit-simulator C_j is not corrupted and const_j = +1. If dist(T[x_i + jα], B) ≥ 8g(ε) for all i ∈ [N], then the simulation phase outputs λ(T[x + jα]).

Phase 4: output into the Feedback region.
For convenience, we will assume that the output of the Boolean circuit λ is encoded in a particular way. It is easy to see that this is without loss of generality, since we can always modify λ so that it follows this encoding. The output of λ is an element in {±1, . . . , ±N}. The encoding we choose uses 2N bits y^a_1, y^b_1, y^a_2, y^b_2, . . . , y^a_N, y^b_N to encode such an element. The element +i is represented by y^a_i = y^b_i = 1 and y^a_ℓ = 1, y^b_ℓ = 0 for all ℓ ≠ i. The element −i is represented by y^a_i = y^b_i = 0 and y^a_ℓ = 0, y^b_ℓ = 1 for all ℓ ≠ i. Recall that in the simulation of λ inside C_j, we actually use the bits {−1, +1} instead of {0, 1}. Thus, the output +i is represented by y^a_i = y^b_i = +1 and y^a_ℓ = +1, y^b_ℓ = −1 for all ℓ ≠ i, whereas the element −i is represented by y^a_i = y^b_i = −1 and y^a_ℓ = −1, y^b_ℓ = +1 for all ℓ ≠ i. For any i ∈ [N] and any z ∈ ([−1, 1] \ B)^N define λ_i(z) to be:
• λ_i(z) = +1 if λ(z) = +i,
• λ_i(z) = −1 if λ(z) = −i,
• λ_i(z) = 0 otherwise.
With this encoding, T[y^a_i + y^b_i] = λ_i(T[x + jα]). In this last phase, for each i ∈ [N] we compute T[y^a_i + y^b_i] and copy this value into the Feedback region, namely into interval F_i(j) (recall that C_j is the current circuit-simulator). For this we first use a G+-gate and then a G_copy-gate. Thus, we immediately obtain:

Claim 4. Assume that the circuit-simulator C_j is not corrupted and const_j = +1. If dist(T[x_i + jα], B) ≥ 8g(ε) for all i ∈ [N], then [C_j(x, const_j)]_i := v(F_i(j)) = T[λ_i(T[x + jα]) ± 2g(ε)] for all i ∈ [N].

In this section we prove that the reduction works, i.e., from any solution to the Consensus-Halving instance, we can obtain a solution to the original high-D-Tucker instance in polynomial time. In order to do this, we consider two cases and show that we can retrieve a solution in both cases. The first case corresponds to a "well-behaved" solution where there are no stray cuts. The second case corresponds to a solution with stray cuts. We set p(N) to be a sufficiently large polynomial in N and pick ε inverse-polynomially small, such that 16g(ε) ≤ α.

Lemma 4.
Let S be any ε-approximate solution for the Consensus-Halving instance. If S does not have any stray cuts, then it yields a solution to the high-D-Tucker instance in polynomial time.

Proof. Since there are no stray cuts, none of the circuit-simulators is corrupted. In particular, there is no cut strictly inside the Constant-Creation region. Thus, without loss of generality we can assume that the whole Constant-Creation region is labeled +. This means that all circuit-simulators read in the constant +1, i.e., const_j = +1 for all j. Since the circuit-simulators are not corrupted, the only way in which they can fail is if the bit extraction phase fails. Let x ∈ [−1, 1]^N be the point represented by the Coordinate-Encoding region in S. Let z(j) = T[x + jα].

Consider any i ∈ [N]. First of all, note that if z(j)_i ∈ {−1, +1}, then the bit extraction will not fail (for dimension i). So, we ignore any such points. For all j ≠ ℓ such that z(j)_i, z(ℓ)_i ∉ {−1, +1} it holds that α ≤ |z(j)_i − z(ℓ)_i| ≤ p(N)α. Since p(N)α ≤ 1/4 and α ≥ 16g(ε), it follows that there exists at most one j* such that z(j*)_i lies too close to B = {0, ±1/4, ±1/2, ±3/4}. Namely, there exists j* ∈ [p(N)] such that for all j ∈ [p(N)] \ {j*}, we have dist(z(j)_i, B) ≥ 8g(ε). By Claim 2 it follows that the bit extraction phase in dimension i fails in at most one circuit-simulator.

We thus obtain that the bit extraction (in any dimension) fails in at most N circuit-simulators. Let X ⊆ [p(N)] be such that the bit extraction does not fail in C_j for all j ∈ X. Then, we have shown that |X| ≥ p(N) − N. By Claim 3, we get that for all j ∈ X, phase 3 of the circuit-simulator C_j outputs λ(z(j)), i.e., the correct label for z(j) (which lies in ([−1, 1] \ B)^N). Since ‖z(j) − z(ℓ)‖_∞ ≤ p(N)α ≤ 1/4 for all j, ℓ ∈ [p(N)], all the points z(j) lie in (pairwise) adjacent hypercubes of [−1, 1]^N (as defined in phase 3). Thus, in order to find a solution to the high-D-Tucker instance, it suffices to find j, ℓ ∈ X such that λ(z(j)) = −λ(z(ℓ)).

By Claim 4, we have that for every j ∈ X the circuit-simulator C_j outputs [C_j(x, const_j)]_1, . . . , [C_j(x, const_j)]_N such that for all i ∈ [N] we have [C_j(x, const_j)]_i ≈ λ_i(z(j)) (up to error 2g(ε)). It holds that for every j ∈ X, there exists i_j ∈ [N] such that |λ_{i_j}(z(j))| = 1. Thus, there exist i ∈ [N] and X_i ⊆ X such that |X_i| ≥ |X|/N, |λ_i(z(j))| = 1 for all j ∈ X_i and λ_i(z(j)) = 0 for all j ∈ X \ X_i.

If there exist j, ℓ ∈ X_i such that λ_i(z(j)) = −λ_i(z(ℓ)), then z(j) and z(ℓ) have opposite labels and we are done. Thus, assume that λ_i(z(j)) = +1 for all j ∈ X_i (the case with −1 is symmetric). Then [C_j(x, const_j)]_i ≥ 1 − 2g(ε) for all j ∈ X_i. Since we also have [C_j(x, const_j)]_i ≥ −2g(ε) for all j ∈ X \ X_i, we can write:

∑_{j=1}^{p(N)} [C_j(x, const_j)]_i = ∑_{j∈X} [C_j(x, const_j)]_i + ∑_{j∈[p(N)]\X} [C_j(x, const_j)]_i ≥ |X_i| − 2p(N)g(ε) − |[p(N)] \ X| ≥ (p(N) − N)/N − 2p(N)g(ε) − N > p(N)ε

for N sufficiently large, by our choice of p(N) and ε. However, recall that feedback agent f_i ensures that ∑_{j=1}^{p(N)} [C_j(x, const_j)]_i = ∑_{j=1}^{p(N)} v(F_i(j)) ∈ [−p(N)ε, p(N)ε]. So, we have obtained a contradiction.

Lemma 5.
Let S be any ε-approximate solution for the Consensus-Halving instance. If S has at least one stray cut, then it yields a solution to the high-D-Tucker instance in polynomial time.

Proof. As explained earlier, stray cuts can affect the circuit-simulators in two ways. First of all, a stray cut can corrupt a circuit-simulator. Since there are at most N stray cuts, at most N circuit-simulators can be corrupted. The other way in which the circuit-simulators can be affected is if there are stray cuts in the Constant-Creation region, so that const_j = −1 for some circuit-simulators C_j, and const_ℓ = +1 for other circuit-simulators C_ℓ.

Let us now take a look at what happens if a circuit-simulator C_j has const_j = −1. Since all the gates in C_j are equivariant (Remark 1), it follows that C_j with inputs x_1, . . . , x_N and const_j is also equivariant. This implies that [C_j(x, −1)]_i = −[C_j(−x, +1)]_i for all i ∈ [N]. Thus, the output of C_j is the negation of a valid output that C_j would have had if its inputs were −x and const_j = +1. Note that C_j can have multiple valid outputs for a single input (because every gate is allowed a small error). So, we are saying that the output of C_j on inputs x, −1 is the negation of some valid output of C_j on inputs −x, +1. Thus, we immediately obtain that the analogous statements of Claims 1, 2, 3 and 4 also hold in this case. In other words, we get that if circuit-simulator C_j is not corrupted, and if dist(T[x_i + jα], B) ≥ 8g(ε) for all i, then [C_j(x, +1)]_i = λ_i(T[x + jα]) ± 2g(ε) and [C_j(x, −1)]_i = −λ_i(T[−x + jα]) ± 2g(ε).

Consider any i ∈ [N]. Using the same argument as in the proof of Lemma 4, it is easy to show that at most one of the points T[x_i + α], . . . , T[x_i + p(N)α] and at most one of the points T[x_i − α], . . . , T[x_i − p(N)α] lies within distance at most 8g(ε) of B. It follows that at most one of the points T[x_i + α], . . . , T[x_i + p(N)α] and T[−x_i + α], . . . , T[−x_i + p(N)α] lies within distance 8g(ε) of B. Thus, we again have that the bit extraction phase fails in at most N circuit-simulators.

Let X ⊆ [p(N)] be such that for all j ∈ X, C_j is not corrupted and the bit extraction does not fail in C_j. Then we have shown that |X| ≥ p(N) − 2N. The same argument as in the proof of Lemma 4 yields that there must exist i ∈ [N] and j, ℓ ∈ X such that [C_j(x, const_j)]_i = +1 ± 2g(ε) and [C_ℓ(x, const_ℓ)]_i = −1 ± 2g(ε). If const_j = const_ℓ = +1, then it follows that λ_i(T[x + jα]) = +1 and λ_i(T[x + ℓα]) = −1, and we are done; similarly if const_j = const_ℓ = −1. It remains to consider the case const_j = −const_ℓ. Without loss of generality assume that const_j = +1 and const_ℓ = −1. Then, we obtain that λ(T[x + jα]) = λ(T[−x + ℓα]).

Since there is at least one stray cut in the solution S, by Observation 1 we know that the point x ∈ [−1, 1]^N encoded by the Coordinate-Encoding region must lie on the boundary of [−1, 1]^N. Thus, since p(N)α ≤ 1/4, u = T[x + jα] and v = T[−x + ℓα] must lie in hypercubes on the boundary. Since 2p(N)α ≤ 1/2, u and −v lie in adjacent hypercubes. However, by the boundary conditions of the high-D-Tucker instance λ, we have λ(u) = λ(v) = −λ(−v). Thus, u and −v yield adjacent hypercubes with opposite labels, and thus a solution to the high-D-Tucker instance. Note that u and −v cannot lie in the same hypercube, because of the boundary conditions of λ.

In the previous section, we proved that even when we have 2-block uniform valuations, the problem remains PPA-hard (even when we are allowed to use more than n cuts). In this section, we turn to single-block (uniform) valuations and present (a) an algorithm for ε-Consensus-Halving (where 1/ε appears polynomially in the running time), which is parameterized by the maximum number of intersections between the blocks of different agents, and (b) a polynomial-time algorithm for 1/2-Consensus-Halving. The latter algorithm generalizes to the case of d-block uniform valuations (in fact, even to the case of piecewise constant valuations with d blocks) if one is allowed to use d·n cuts instead of n. Towards the end of the section, we also present a simple idea based on linear programming, which allows us to solve the Consensus-Halving problem in polynomial time, when we are allowed to use 2n − ℓ cuts, for any ℓ which is constant.

ε-Consensus-Halving

We start with the algorithm for solving ε-Consensus-Halving that is parameterized by the maximum intersection between the probability measures. In particular, if d is the maximum number of measures with positive density at any point x ∈ [
0, 1] and the maximum value of the densities is at most M, then we provide a dynamic programming algorithm that computes an ε-approximate solution in time O((M/ε)^d · poly(M/ε, n, d)). We start with the formal definition of the maximum intersection quantity. In this section we denote by f_i the probability density function of the probability measure µ_i.

Definition 7 (Maximum Intersection & Maximum Value). We say that a single-block instance of ε-Consensus-Halving has maximum intersection d if for every x ∈ [0, 1] it holds that the set R(x) = {i ∈ [n] | f_i(x) > 0} has cardinality |R(x)| ≤ d. We also say that the instance has maximum value M if for every i ∈ [n] and every x ∈ [0, 1] it holds that f_i(x) ≤ M. This is equivalent to b_i − a_i ≥ 1/M.

For the rest of the section, we often refer to the following quantity.

Definition 8 (Value of Balance). For any ε-Consensus-Halving instance C = {µ_1, . . . , µ_n}, any vector of cuts ⃗s and any z ∈ [0, 1], let b_i(⃗s; z) be the mass with respect to µ_i of the part of the interval [z, 1] labeled "+" minus the mass of the part of the interval labeled "−", when split with the cuts ⃗s. Formally, b_i(⃗s; z) = µ_i([z, 1]_+) − µ_i([z, 1]_−), given the set of cuts ⃗s.

The first step of the algorithm is to discretize the interval [0, 1] into intervals of length ε/M. Hence, we split the interval [0, 1] into m = M/ε equal subintervals of the form p_ℓ = [(ℓ − 1)/m, ℓ/m], where ℓ ∈ [m]. The following claim shows the sufficiency of our discretization and follows very easily from the above definitions.

Claim 5. Let Q_m = {(ℓ − 1)/m | ℓ ∈ [m]}, where m = M/ε′, and let C = {µ_1, . . . , µ_n} be a single-block ε-Consensus-Halving instance with maximum value M. We define the rounded single-block instance C′ = {µ′_1, . . . , µ′_n}, where a′_i and b′_i are equal to the point of Q_m that is closest to a_i and b_i respectively. Then every ε-Consensus-Halving solution of C′ is an (ε + ε′)-Consensus-Halving solution of C. Additionally, there exists a solution to the ε′-Consensus-Halving problem where the positions of the cuts lie in the set Q_m.

Because of Claim 5 we will assume for the rest of this section that we are working with C′ and we will focus on finding an (ε′ = ε/2)-Consensus-Halving solution with cuts in Q_m, where m = 2M/ε. This will give us an ε-approximate solution for the single-block instance C. We also need the following definitions:

▷ for any z ∈ [0, 1] and any instance C = {µ_1, . . . , µ_n} we define the set of measures that have positive mass in [z, 1] as follows: U(z; C) = {i ∈ [n] | µ_i ∈ C ∧ µ_i([z, 1]) > 0}, where we might drop C when it is clear from the context, see also Figure 5,

▷ for any z ∈ [0, 1] and any instance C = {µ_1, . . . , µ_n} we define the set of measures that have positive density at z as follows: R(z; C) = {i ∈ [n] | µ_i ∈ C ∧ f_i(z) > 0}, where we might drop C when it is clear from the context, see also Figure 5.

We also define Q̄_m = {z | z ∈ Q_m ∨ −z ∈ Q_m}. Now we are ready to define our main recursive relation for solving the instance C′. For this we define the function α(q_1, . . . , q_d, z, t), where q_j ∈ Q̄_m, z ∈ Q_m, and t ∈ [n], i.e., α : Q̄_m^d × Q_m × [n] → {0, 1}. The intuitive explanation of the value of α is the following: α(q_1, . . . , q_d, z, t) denotes whether it is possible to find t cuts ⃗s that split the interval [z, 1] such that: (1) if i is the j-th element of R(z), it holds that |b_i(⃗s; z) − q_j| ≤ ε, and (2) for every i ∈ U(z) \ R(z) it holds that |b_i(⃗s; 0)| = |b_i(⃗s; z)| ≤ ε.

The value of α(q_1, . . . , q_d, z, t) can be recursively computed via the following procedure.

▶ for every r ∈ Q_m, with r > z do
  ▷ if there exists i ∈ R(z) such that i is the j-th element of R(z) but i ∉ R(r), and after adding the cut r to the set of cuts the condition |b_i(r; z) − q_j| > ε holds, then continue,
  ▷ if there exists i ∈ U(z) \ R(z) but i ∉ U(r) \ R(r), then continue,
  ▷ else for every i ∈ R(r), where i is the j-th element of R(r):
    · if i ∉ R(z) then we set q′_j = (−1)^t µ_i([z, r]),
    · if i is the ℓ-th element of R(z) then we set q′_j = (−1)^t µ_i([z, r]) + q_ℓ,
    · call the function α(q′_1, . . . , q′_d, r, t − 1).
▶ return true if at least one of the recursive calls is successful, and false otherwise.

This procedure evaluates the binary function α, but our goal is to solve the search problem that finds a set of cuts that form a solution to ε′-Consensus-Halving. This can be easily done by storing one possible solution whenever α = true. We call this possible solution β(q_1, . . . , q_d, z, t). More formally, the algorithm is shown in Algorithm 1. The entry of interest for the dynamic programming algorithm that computes α(q_1, . . . , q_d, z, t) is the one with t = n, z = 0, and |q_j| ≤ ε for all j.

Theorem 6.
There exists a dynamic programming algorithm that, for any single-block instance C of ε-Consensus-Halving with maximum value M and maximum intersection d, computes a set of cuts that define a solution. The running time of the algorithm is O((M/ε)^{d+1} · n(n + d)).

1/2-Consensus-Halving

In this subsection, we present a polynomial-time algorithm for 1/2-Consensus-Halving for the case of single-block valuations. We state the main theorem, which will be proven in this subsection. Theorem 7.
There is a polynomial-time algorithm for 1/2-Consensus-Halving, when agents have single-block valuations.

High-level description:
The high-level idea of the algorithm is the following greedy strategy. We will consider the agents in non-increasing order of the heights of their valuation blocks. Since each agent has a single block of value, this means that for two agents i and j with i < j that have their value blocks in intervals I_i and I_j, agent i will be considered before agent j. For each agent,

ALGORITHM 1:
Recursive Computation of α, β
Input: leftover balances q_1, . . . , q_d, position of last cut z, leftover number of cuts t, required accuracy ε.
Output: value of (α(q_1, . . . , q_d, z, t), β(q_1, . . . , q_d, z, t)) as described above.
if t = 0 then
  if q_j = 0 for all j ∈ [d] then return (true, {})
  return (false, {})
end
for every r ∈ Q_m with r > z do
  for j ∈ [d] do
    Let i be the j-th element of R(z)
    if i ∉ R(r) and |b_i(r; z) − q_j| > ε then continue to next value of r
  end
  for i ∈ U(z) \ R(z) do
    if i ∉ U(r) \ R(r) then return (false, {})
  end
  for j ∈ [d] do
    Let i be the j-th element of R(r)
    if i ∉ R(z) then q′_j ← (−1)^t µ_i([z, r])
    if i is the ℓ-th element of R(z) then q′_j ← (−1)^t µ_i([z, r]) + q_ℓ
  end
  res ← (α(q′_1, . . . , q′_d, r, t − 1), β(q′_1, . . . , q′_d, r, t − 1))
  if the first component of res is true then return (true, β(q′_1, . . . , q′_d, r, t − 1) ∪ {r})
end
return (false, {})

we will attempt to “reserve” a large enough sub-interval of her value block (of total value at least 1/2 for the agent) and split it in half (into two parts of equal value, at least 1/4 each, for the agent) using a cut, assigning one half to “+” and the other to “−”. At that point, this split will ensure that |µ_i(I_i^+) − µ_i(I_i^−)| ≤ 1/2. A reserved sub-interval, or reserved region (RR), will never be intersected by any subsequent cut. This ensures that the guarantee |µ_i(I_i^+) − µ_i(I_i^−)| ≤ 1/2 for every agent i previously considered will continue to hold. Reserving a large enough region for the first considered agent is straightforward. For any subsequent agent i, part of her value block interval I_i might already be “covered” by regions that were reserved in previous steps.
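To make the underlying search problem concrete, the following small Python sketch (our own illustration with hypothetical names; a naive brute-force baseline, not the dynamic program of Algorithm 1) enumerates placements of at most n cuts on the grid Q_m = {1/m, . . . , (m−1)/m}, labels the resulting pieces alternately starting with “+”, and checks the ε-balance condition for single-block (uniform) agents:

```python
from itertools import combinations

def signed_balance(block, cuts):
    """Signed ('+' minus '-') value of a single uniform block [a, b]
    under the alternating labeling induced by the cuts, '+' leftmost."""
    a, b = block
    pts = [0.0] + sorted(cuts) + [1.0]
    total, sign = 0.0, 1
    for lo, hi in zip(pts, pts[1:]):
        total += sign * max(0.0, min(hi, b) - max(lo, a)) / (b - a)
        sign = -sign
    return total

def brute_force_ch(blocks, n_cuts, m, eps):
    """Try every set of at most n_cuts cut positions on the grid {i/m}
    and return the first set that eps-satisfies every agent."""
    grid = [i / m for i in range(1, m)]
    for k in range(n_cuts + 1):
        for cuts in combinations(grid, k):
            if all(abs(signed_balance(b, cuts)) <= eps for b in blocks):
                return list(cuts)
    return None
```

For example, two agents with blocks [0, 1/2] and [1/2, 1] on the grid with m = 4 are exactly balanced by cuts at 1/4 and 3/4. This enumeration is exponential in the number of cuts; the dynamic program avoids it by memoizing the states (q_1, . . . , q_d, z, t).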
Before inserting any new cut, we will expand some of the RRs which are contained in I_i (those that exhibit a different label on each of their endpoints), until we either ensure that the agent is approximately satisfied with the imbalance of labels that she sees, or these RRs cannot be expanded any longer. In the latter case, we will place the cut corresponding to agent i in I_i, at the midpoint of the “virtual” interval U_i ⊆ I_i, consisting of all the intervals of I_i not covered by RRs, “glued” together. Then, we will create a new RR, which will potentially also contain some of the RRs already present in I_i, and which will be such that (a) the total value covered by RRs for agent i is 1/2, and (b) the agent is approximately satisfied (up to 1/2) with the value she has for RRs in I_i.

Figure 6: Internal and boundary reserved regions. The areas colored green have been assigned to one of the labels (e.g., “+”) and the areas colored red have been assigned to the other (e.g., “−”). The areas of the interval I_i which are not yet assigned to any label are colored blue. The parity of the RRs can also be seen: the boundary RRs and the internal RR in the middle of the interval have odd parity, whereas the other two internal RRs (which are sub-regions of the boundary RRs) have even parity.

First, we will use the following terminology: A reserved region (RR) R is a designated interval in which no further cuts are allowed to be placed, other than those that already lie in the region. We will define the parity of R to be odd if the left-hand side of the leftmost cut intersecting R and the right-hand side of the rightmost cut intersecting R have different labels; otherwise, we will say that the parity of R is even. We will let I_i denote the (single) interval in which agent i has positive value, and we will let R_i = ∪_{R_j is an RR} (R_j ∩ I_i) be the part of I_i that is “covered” by RRs.
We will also let U_i = I_i \ R_i denote the unreserved part of I_i. Finally, we will say that an RR R is an internal RR of I_i if it is entirely contained in I_i (including the case where an endpoint of the RR is an endpoint of the interval); otherwise, we will say that R is a boundary RR. Note that since RRs can be contained in other RRs, it is possible that an internal RR is contained in a boundary RR (see Fig. 6). An overview of the algorithm is given in Algorithm 2.
The algorithm considers the agents in non-increasing order of the heights of their value blocks. We will say that an agent i is 1/2-satisfied at step j if, after considering agent j, it holds that |µ_i(I^+) − µ_i(I^−)| ≤ 1/2, where I^+ and I^− denote the parts of the interval assigned to “+” and “−”. Let step i be the step at which agent i is considered. At step i the algorithm does the following.
1. Expand internal RRs of odd parity, until agent i is 1/2-satisfied, or until the internal RRs of odd parity can no longer be extended. See Section 4.2.2 below.
2. Place the cut at the midpoint of U_i (where U_i is obtained by considering U_i as a continuous interval, disregarding R_i). See Section 4.2.3 below.
3. Create a new RR, by considering only U_i, possibly merging the intervals of the new RR with some RRs in R_i. See Section 4.2.4 below.

Figure 7: An expansion of an internal reserved region of odd parity. The areas colored green have been assigned to one of the labels (e.g., “+”) and the areas colored red have been assigned to the other (e.g., “−”). The areas of the interval I_i which are not yet assigned to any label are colored blue. The reserved region R is expanded symmetrically on both sides, until it meets the left endpoint of the boundary region of I_i on the left, in which case the expansion stops (Condition (b) in Section 4.2.2). This results in the creation of new RRs on the left (a boundary RR and an internal RR contained in the boundary RR) that contain both the original RR R and its expansion R′.
Crucially, the area of I_i that is covered by RRs has increased, while maintaining the same balance between the two labels.

Consider step i, when we are considering agent i and her corresponding value block interval I_i. We will consider the internal RRs of odd parity in I_i. For any such RR, we extend its endpoints at the same rate towards the left and the right, until we either:
(a) reach a point where the agent is 1/2-satisfied, or
(b) reach the endpoint of one or two other RRs, or
(c) reach one of the endpoints of I_i.
If Condition (a) above is satisfied, then we do not consider any other RR and continue with the next agent. Otherwise, we continue with the next internal RR of odd parity, if any. Note that after this part of the algorithm, the cut corresponding to agent i has not been placed yet. See Fig. 7.

After we have considered all internal RRs of odd parity, and if Condition (a) above has not been satisfied yet, we will use the agent’s corresponding cut to balance out the remaining value block I_i, to ensure that the agent becomes 1/2-satisfied. To do that, we consider agent i’s valuation in I_i restricted to U_i, and we place the cut at the midpoint y of U_i; one can envision this operation as considering U_i as a continuous interval (merging any sub-intervals separated by RRs in R_i) and splitting this interval in half via the placement of the cut. See Fig. 8.

After the cut has been placed, we need to reserve a corresponding region for agent i, to ensure that she is 1/2-satisfied from the updated union of reserved regions R_i in I_i. To do that, we reserve an equal portion of U_i to the left and to the right of the position y of the cut, until the agent becomes 1/2-satisfied. We argue in the proof of Lemma 10 that this is always possible. Intuitively, one can visualize that we extend the region from y to the left and to the right at the same rate, “jumping over” RRs, until we have reserved enough area to 1/2-satisfy the agent. The effect of this process is a new RR, possibly containing previously reserved RRs and the newly reserved regions. See Fig. 8.

Figure 8: The placement of the cut in U_i and the creation of the new region. The parts of U_i are shown in blue, and the midpoint of U_i, where the cut is placed, is shown with a red dashed line. After the cut is placed, an equal amount of green (e.g., “+”) and of red (e.g., “−”) will be reserved to the left and to the right of the position of the cut, “jumping over” RRs of even parity. The newly added parts of U_i (which are now part of R_i instead) are shown in slightly different shades of their respective colors, for clarity. After the operation, the agent is 1/2-satisfied, which is guaranteed by her value in the union of reserved regions R_i.

Below, we argue about the correctness of the algorithm. We will prove the correctness via a series of lemmas.
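Before the lemmas, a quick sanity check of the greedy idea in the degenerate case where the value blocks are pairwise disjoint: there, no reserved region of one agent ever intersects another agent's block, every cut other than agent i's own falls outside I_i, and placing each cut at the midpoint of its agent's block splits that agent's value exactly in half. The Python sketch below is our own toy illustration (hypothetical names; it does not implement the RR machinery, which exists precisely to handle overlapping blocks):

```python
def signed_balance(block, cuts):
    """Signed ('+' minus '-') value of a uniform block [a, b] under the
    alternating labeling induced by the sorted cuts, '+' leftmost."""
    a, b = block
    pts = [0.0] + sorted(cuts) + [1.0]
    total, sign = 0.0, 1
    for lo, hi in zip(pts, pts[1:]):
        total += sign * max(0.0, min(hi, b) - max(lo, a)) / (b - a)
        sign = -sign
    return total

def midpoint_cuts(disjoint_blocks):
    """One cut per agent, at the midpoint of her (disjoint) value block.
    For pairwise disjoint blocks this gives an exact Consensus-Halving."""
    return [(a + b) / 2 for a, b in sorted(disjoint_blocks)]
```

With overlapping blocks, a later cut may land inside an earlier agent's block and destroy her balance, which is exactly what the reserved regions prevent.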
Lemma 8.
After step i of the algorithm, the value of agent i for any internal RR in R_i is at most 1/2.

Proof. We will prove the lemma using induction. For i =
1, the statement is clearly true; there are no pre-existing RRs, so we place a cut at the midpoint y of I_1 and reserve a region of total value 1/4 for the agent to the left of y, and a region of total value 1/4 to the right of y, which together form the first RR R. This is the only internal RR in I_1 and hence the lemma follows. Next, we will argue that the lemma holds for any 1 < j ≤ n. Consider agent j and its value interval I_j. Since we consider agents in non-increasing order of the heights of their valuation blocks, from the induction hypothesis it holds that the value of agent j for any RR which is internal for I_i, for any i < j, is at most 1/2 as well. Therefore, it is not possible for agent j to value any RR which is internal for I_j, at the beginning of step j, at more than 1/2. During step j, the algorithm might either expand existing internal RRs, create a new RR, or both, but in any case, by construction, the union of these RRs will never amount to more than 1/2 of agent j’s value. Lemma 9.
Let R_b be any boundary RR in I_i at step i. Then, it holds that |µ_i(R_b^+) − µ_i(R_b^−)| ≤ 1/4.

Proof. First, note that since R_b is a boundary RR in I_i, it is part of some larger RR; let R be that RR, i.e., R_b ⊆ R. Also, note that R is “balanced”, meaning that the total length of R labeled “+” and the total length of R labeled “−” are equal; this follows from the fact that in any of the steps (1) and (3) of the algorithm (Section 4.2.2 and Section 4.2.4) where RRs are created or expanded, these operations are only done symmetrically with respect to the position of some cut. In other words, if R is internal for some agent j, it holds that µ_j(R^+) = µ_j(R^−). Assume by contradiction that |µ_i(R_b^+) − µ_i(R_b^−)| > 1/4, and assume without loss of generality an excess of “+” over “−”. This in particular means that more than 1/4 of the value of agent i in R_b is labeled “+”, which means that R_b contains an interval of length larger than 1/(4 v_i) that is labeled “+”, where v_i is the height of agent i’s valuation block. Since R is balanced, this means that R must have length at least 1/(2 v_i). However, consider the step j at which R was last modified (either created or expanded) before step i, and observe that R was an internal RR for I_j at that point. This means that agent j had value larger than v_j · 1/(2 v_i) ≥ 1/2 for R at that point, contradicting Lemma 8.

ALGORITHM 2: Polynomial-time algorithm for 1/2-Consensus-Halving
Input: The value blocks of the agents, as pairs (I_i, v_i), such that
- I_i = [a_i, b_i] is the interval,
- v_i is the height of the value block.
Output: The set of cuts c_1, . . . , c_n, which satisfies |µ_i(I^+) − µ_i(I^−)| ≤ 1/2 for every agent i.
Order the agents in non-increasing order of v_i and let w.l.o.g. {1, . . . , n} be any such ordering.
for i = 1 to n do
  /* Step (1): expanding internal RRs of odd parity (see Section 4.2.2). */
  Let R_odd be the set of internal RRs of odd parity in I_i that can be expanded.
  while R_odd ≠ ∅ do
    Pick R in R_odd.
    Expand R until it can no longer be expanded, i.e., until:
    - the agent becomes 1/2-satisfied, or
    - the expansion reaches the endpoint of some other RR, or
    - the expansion reaches an endpoint of I_i.
    if agent i is 1/2-satisfied then
      Break and move on to the next agent.
    end
    Generate new RRs (if the expansion of R is merged with some other RR).
    Update R_odd.
  end
  /* Step (2): placing the cut in U_i (see Section 4.2.3). */
  Let U_i be the interval obtained by joining the intervals of U_i.
  Let y be the midpoint of U_i.
  Place cut c_i at y.
  /* Step (3): creating the new RR (see Section 4.2.4). */
  Consider some x ∈ ℝ and the intervals [y − x, y], [y, y + x] ⊆ U_i.
  Pick x to satisfy v_i · x + v_i · x + µ_i(R_i) = 1/2, and let y_1 and y_2 be the points in U_i corresponding to y − x and y + x, respectively.
  Create a new RR R′ = [y_1, y_2] (possibly containing other RRs of even parity).
end

Lemma 10. After step (3) of the algorithm (Section 4.2.4), agent i is 1/2-satisfied.

Proof.
Consider the imbalance between the portions of I_i assigned to the two labels after step (1) of the algorithm, and assume without loss of generality that there is an excess of the label “+” over the label “−”, i.e., µ_i(I_i^+) > µ_i(I_i^−), as otherwise the agent is perfectly satisfied and we are done. Since, by definition, there can be at most 2 boundary RRs for I_i, and since all internal RRs have a perfect balance of “+” and “−” for agent i, it follows from Lemma 9 that µ_i(R_i^+) − µ_i(R_i^−) ≤ 1/2, i.e., the imbalance that agent i sees within the reserved part of I_i is at most 1/2. If µ_i(R_i) ≥ 1/2, no further area needs to be reserved. Otherwise, we place the cut at the midpoint y of U_i = I_i \ R_i and we reserve an equal portion of I_i to the left and to the right of y, until we establish that µ_i(R_i) = 1/2. It remains to argue that the creation of the new RR R in step (3) of the algorithm that will make the agent 1/2-satisfied is possible. First, observe that R will only strictly contain RRs that are of even parity, as all RRs of odd parity were considered in step (1) of the algorithm and were either (a) transformed into RRs of even parity via merging, or (b) expanded until they reached some endpoint of the interval, and therefore cannot be strictly contained in R. From this, it follows that every sub-interval of U_i that will be included in R on the left side of the cut at y will receive the same label (e.g., “+”), and every sub-interval of U_i that will be included in R on the right side of the cut at y will receive the opposite label (e.g., “−”).
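The reservation in step (3) has a simple closed form in the single-block case: if v_i is the height of agent i's block and µ_i(R_i) < 1/2 is the value already reserved, then reserving [y − x, y] and [y, y + x] inside the glued unreserved interval adds value 2·v_i·x, so solving v_i·x + v_i·x + µ_i(R_i) = 1/2 gives x = (1/2 − µ_i(R_i)) / (2 v_i). A hedged Python helper (our own naming; it assumes, as the lemma argues is always possible, that enough unreserved length exists on both sides of y):

```python
def reservation_halfwidth(v_i, reserved_value):
    """Half-width x of the new reserved region [y - x, y + x], measured
    inside the glued unreserved interval, chosen so that agent i's total
    value in reserved regions reaches exactly 1/2:
        2 * v_i * x + reserved_value = 1/2.
    """
    assert v_i > 0.0 and 0.0 <= reserved_value < 0.5
    return (0.5 - reserved_value) / (2.0 * v_i)
```

For a block of length 1/2 (height v_i = 2) with nothing reserved yet, x = 1/8, i.e. a region of length 1/4 and value 1/2, matching the base case in the proof of Lemma 8.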
After any step j ≥ i, agent i is 1/2-satisfied.

Proof. First, it follows straightforwardly from Lemma 10 that agent i is 1/2-satisfied after step i. For any j > i, let R_i(k) be the union of the RRs of I_i after step k, and consider R_i(i) and R_i(j). By the way RRs are constructed, and since they are never intersected by a new cut, it follows that R_i(i) ⊆ R_i(j) and, in particular, the balance of labels in R_i(i) has remained unchanged in R_i(j) (although the labels themselves might have the opposite parity). It follows that the agent is 1/2-satisfied after step j.
The correctness of the algorithm follows by applying Lemma 11 for j = n. Before we conclude the subsection, we mention that the algorithm can actually be applied to instances with piecewise constant valuations with d blocks, as long as we have d · n cuts at our disposal. The idea is quite simple: any agent that has (at most) d value blocks will be replaced by (at most) d agents with single-block valuations. This requires appropriately scaling the heights of the blocks, to make sure that the functions are still probability measures. Then, we will run the algorithm on the resulting n′ ≤ d · n agents, and we will obtain a solution using n′ cuts. This set of cuts will also be a solution to the original instance, since the parameter ε is constant (1/2) in both cases. We obtain the following corollary. Corollary 12.
There exists a polynomial-time algorithm for 1/2-Consensus-Halving, when the agents have piecewise constant valuations with d blocks and we are allowed to use d · n cuts.

Exact Consensus-Halving with 2n − ℓ cuts

We conclude the section with the following theorem, stating that there is a polynomial-time algorithm for solving the exact Consensus-Halving problem with single-block valuations, if we are allowed to use 2n − ℓ cuts, for any constant ℓ. Theorem 13.
Let ℓ be an integer constant. There exists a polynomial-time algorithm that, given any single-block instance, either:
• returns an exact Consensus-Halving solution that uses at most 2n − ℓ cuts, or
• correctly outputs that no such solution exists.
Given the single-block valuations of the n agents, we can determine numbers 0 ≤ a_1 < a_2 < · · · < a_m < a_{m+1} ≤ 1, with m ∈ [2n − 1], such that for every agent i ∈ [n]:
• µ_i([0, a_1]) = µ_i([a_{m+1}, 1]) =
0, and
• for any j ∈ [m], the density of µ_i in the interval (a_j, a_{j+1}) is constant.
The upper bound m ≤ 2n − 1 follows since every agent contributes at most two breakpoints. One possible solution is to place one cut at the midpoint (with respect to value) of each interval [a_j, a_{j+1}], j ∈ [m], and to alternate the labels + and − between the cuts. Then, the value of every agent within every interval is split in half, which implies that the same also holds for their value over the whole interval. Thus, this yields an exact Consensus-Halving that uses m cuts. Recall that ℓ is any integer constant. If m ≤ 2n − ℓ, the solution above uses at most 2n − ℓ cuts and we are done. Thus, it remains to solve the case where 2n − ℓ < m. Let k = 2n − ℓ. The algorithm proceeds by picking a subset S ⊆ [m] of size |S| = k, and checking whether there exists a solution with one cut in each interval [a_j, a_{j+1}], j ∈ S, by using a linear programming subroutine. Since the number of such subsets S is (m choose k) = (m choose m − k) ≤ O(n^ℓ), where ℓ is a constant, the algorithm can try all possible subsets S of size k.
Let us now describe how the linear programming subroutine for a given subset S works. We are going to put one cut in each interval [a_j, a_{j+1}] for j ∈ S, and alternate the labels + and − between the cuts. In other words, whenever we “cross a cut”, the label changes. We assume that the left-most label is +. First of all, note that for any j ∈ [m] \ S, the whole interval [a_j, a_{j+1}] will be assigned to the same label, and, in fact, we can determine which label, since we know the exact number of cuts to the left of that interval. Similarly, we also know to which label the intervals [0, a_1] and [a_{m+1}, 1] will be assigned. Thus, let A^+ denote the union of all intervals for which we already know that they will be assigned to label +. Define A^− analogously for label −. Note that [
0, 1] \ (A^+ ∪ A^−) = ∪_{j∈S} [a_j, a_{j+1}] (where we ignore endpoints of intervals, since they do not matter for the valuations). The LP will have one variable x_j for each j ∈ S, and x_j will represent the position of the cut in the interval [a_j, a_{j+1}]. Since we will alternate labels between cuts, we can partition S = S^+ ∪ S^−, such that for any j ∈ S^+, the interval [a_j, x_j] is labeled + and the interval [x_j, a_{j+1}] is labeled −; for any j ∈ S^−, the interval [a_j, x_j] is labeled − and the interval [x_j, a_{j+1}] is labeled +. We can now write down the LP, which, in fact, does not even have an objective function (any feasible solution will do).

µ_i(A^+) + Σ_{j∈S^+} µ_i([a_j, x_j]) + Σ_{j∈S^−} µ_i([x_j, a_{j+1}]) = µ_i(A^−) + Σ_{j∈S^+} µ_i([x_j, a_{j+1}]) + Σ_{j∈S^−} µ_i([a_j, x_j])   for all i ∈ [n]   (1)
x_j ∈ [a_j, a_{j+1}]   for all j ∈ S

Note that µ_i(A^+) and µ_i(A^−) are constants, while µ_i([a_j, x_j]) and µ_i([x_j, a_{j+1}]) can be expressed as affine linear functions of x_j, since every agent has constant density within [a_j, a_{j+1}]. Thus, all of the constraints can indeed be written as linear equations/inequalities. Note that the LP (1) can be constructed for any non-empty subset S ⊆ [m]. It is easy to see that any feasible solution (x_j)_{j∈S} immediately yields a Consensus-Halving solution that uses at most |S| cuts. In particular, if |S| = k, then this yields a Consensus-Halving solution that uses at most k = 2n − ℓ cuts. The correctness of the algorithm follows from the claim below. Claim 6.
Let c ∈ [m]. If for all subsets S ⊆ [m] with |S| = c, the LP (1) has no feasible solution, then the Consensus-Halving instance does not admit a solution with at most c cuts.

Proof. Let c_min denote the minimum number of cuts c such that there exists a solution to the Consensus-Halving instance that uses c cuts. Note that c_min ≤ n. The statement in the claim clearly holds for all c < c_min, since there indeed is no Consensus-Halving solution that uses at most c cuts. Furthermore, the statement also holds for c = m, since the (in that case, single) LP (1) admits a feasible solution, as explained above. We begin by proving that the statement holds for c = c_min. Then, we show that if it holds for some c ≥ c_min, then it also holds for c +
1, as long as c + 1 ≤ m. By induction, this proves the claim.
Base case c = c_min: Let I^+, I^− denote an exact solution to the Consensus-Halving instance that uses the minimum number of cuts possible for this instance, i.e., c_min cuts. First of all, note that the intervals [0, a_1] and [a_{m+1}, 1] cannot contain any cut. Indeed, since all agents have value 0 for these intervals, which lie at the edge of the cake, we could remove any such cut without destroying the perfect balance between + and −. Since c_min is the minimum number of cuts in any solution, this is impossible. Furthermore, all the cuts have to lie at distinct positions and the labels have to alternate between + and −. Otherwise, it is easy to see that we can again remove cuts without destroying the perfect balance.
At the beginning, all cuts are at the positions indicated by I^+, I^−, and they are unclaimed. For all j ∈ [m], starting with j = 1 and ending with j = m, we perform the following operations to construct S and (x_j)_{j∈S}. Consider the interval [a_j, a_{j+1}].
• If the interval does not contain any unclaimed cut, then move on to the next j.
• If the interval contains exactly one unclaimed cut, at some position s, then mark the cut as claimed, add j to the set S, let x_j := s, and move on to the next j.
• If the interval contains exactly two unclaimed cuts, at positions s_1 < s_2, then move the first cut to a_{j+1} − (s_2 − s_1) and claim it, and move the second cut to a_{j+1}. Then, add j to the set S, let x_j := a_{j+1} − (s_2 − s_1), and move on to the next j. Note that the distance between the two cuts is still s_2 − s_1, and since both cuts also still lie in [a_j, a_{j+1}], where all agents have constant density, we keep the perfect balance between + and −.
No other case can occur.
In particular, it is not possible that s_1 = s_2 in the third bullet point, since that would mean that the two cuts can be removed without destroying the perfect balance. Furthermore, it is not possible for the interval to contain more than two cuts (claimed or unclaimed), since then we can definitely remove at least two cuts without destroying the perfect balance. Indeed, if there are three cuts, then it is easy to see that they can be replaced by a single cut. If there are four cuts, then they can be replaced by two cuts. When we reach j = m, the interval [a_m, a_{m+1}] can contain at most one cut. If it is unclaimed, it will be claimed by x_m and we will add m to the set S. If the interval contained more than one cut, then using the same arguments as above we could either remove one cut, or, if there were two cuts, we could shift them to the right until the second cut reaches a_{m+1}, but then this cut can again be removed.
Note that after we have finished our pass, all cuts must have been claimed. Thus, we obtain a set S ⊆ [m] of size |S| = c_min and values (x_j)_{j∈S} that satisfy the LP (1). This proves that the statement of the claim holds for c = c_min.
From c to c + 1: Consider any c ≥ c_min such that the statement of the claim holds and c + 1 ≤ m. Since the statement holds for c and, by definition of c_min, there exists a solution to the Consensus-Halving instance that uses at most c cuts, it follows that there exists a set S ⊆ [m] with |S| = c, and values (x_j)_{j∈S} that satisfy the LP (1). We show how to construct a set S′ ⊆ [m] with |S′| = c +
1, and values (x_j)_{j∈S′} that satisfy the LP (1). It immediately then follows that the statement of the claim also holds for c + 1. At the beginning, all cuts are at the positions (x_j)_{j∈S} and they are claimed. We add an additional unclaimed cut at position a_1. Note that we now have c + 1 cuts, which still form a solution to the Consensus-Halving instance. For all j ∈ [m], starting with j = 1 and ending with j = m, we perform the following operations. Consider the interval [a_j, a_{j+1}]. The unclaimed cut is at position a_j.
• If j ∉ S, then let S′ = S ∪ {j}, set x_j := a_j, and terminate.
• If j ∈ S, then update x_j := a_j + (a_{j+1} − x_j), put the unclaimed cut at position a_{j+1}, and move on to the next j. Note that the cuts still form a solution to the Consensus-Halving instance, since the distance between the two cuts in the interval has not changed.
Since c ≤ m −
1, there exists j ∈ [m] \ S, and the procedure will terminate when it reaches the smallest such j. It is easy to see that S′ and (x_j)_{j∈S′} satisfy the LP (1).

Consensus-1/3-Division is PPAD-hard

As we mentioned in the Introduction, our newly developed tools, which allowed us to obtain a strengthening of the PPA-completeness result for Consensus-Halving, turn out to be very useful for proving a hardness result for a more general version of the problem, the Consensus-1/k-Division problem, for k =
3. We provide the definition below.
Definition 9 (ε-Consensus-1/k-Division). Let k ≥
2. We are given ε > 0 and n probability measures µ_1, . . . , µ_n on [
0, 1]. The probability measures are given by their density functions on [
0, 1]. The goal is to partition the unit interval into k (not necessarily connected) pieces A_1, . . . , A_k using at most (k − 1)n cuts, such that |µ_j(A_i) − µ_j(A_ℓ)| ≤ ε for all i, j, ℓ.
For ε-Consensus-1/3-Division, for ease of notation, we will use the labels A/B/C instead of 1/2/3. We state the main theorem of the section.

Theorem 14. ε-Consensus-1/3-Division is PPAD-hard, for inverse-exponential ε.

As we discussed in the Introduction, the problem for k ≥ 3 could conceivably be easier than for k =
2, because we have more cuts at our disposal. Before we present the proof, we highlight some fundamental challenges that arise when one moves from k = 2 to larger values of k. First, when moving to k ≥
3, we move from a simple +/− parity to a more general setting with at least 3 labels. This is already severely problematic when it comes to the reduction of [Filos-Ratsikas and Goldberg, 2019], which is highly dependent on the solution having two labels. Indeed, the interpretation of the inputs in [Filos-Ratsikas and Goldberg, 2019] and the corresponding design of the gates needs to know which label will appear on the left side of the cut, and special “parity-flip” gadgets are used throughout the reduction to ensure this. On the contrary, with our new value interpretation, we design gadgets which take the label sequence “internally” into account, by adjusting the position of the cut accordingly.
The sequence of the labels, however, gives rise to a second and more challenging issue: while in the case of k = 2 we may assume that the labels alternate between “+” and “−”, we cannot make any such assumptions even when k =
3. In fact, it is known that if one restricts the solution to exhibit a cyclic sequence of labels A/B/C, then the problem is no longer a total search problem [Pálvölgyi, 2009]. This seems to be a fundamental obstacle to the design of gates for the case of k ≥
3. For k =
3, we manage to side-step this obstacle by using a clever “trick”: we make sure that the intervals of the Consensus-1/3-Division instance where we read the two inputs (of the 2-dimensional PPAD-complete problem that we reduce from, see Definition 11) are placed next to each other, therefore fixing the position of one of the three labels. We prove PPAD-hardness for the exact version of the problem (in which we are looking for a perfect balance of the labels, with no allowable discrepancy ε), which guarantees that in a solution, the value of this label will be fixed to 1/3 throughout the instance. Since our instance is constructed to have piecewise constant valuation functions, the result can be extended to the case of inverse-exponential ε using the following lemma, which is based on an argument of Etessami and Yannakakis [2010]. Lemma 15.
Let k ≥ 2. For piecewise constant valuation functions, exact Consensus-1/k-Division reduces in polynomial time to ε-Consensus-1/k-Division with inverse-exponential ε.

Proof. We use the proof idea introduced by Etessami and Yannakakis [2010]. Given an instance of Consensus-1/k-Division with piecewise constant valuations that we want to solve exactly, we first find an ε-approximate solution S for some sufficiently small ε. Then, using S, we construct an LP in polynomial time. Finally, we show that any solution to this LP yields an exact solution to the Consensus-1/k-Division instance. Let I be an instance of Consensus-1/k-Division with agents {
1, 2, . . . , n} that have piecewise constant valuations. Then, we can efficiently find positions 0 = p_0 < p_1 < · · · < p_{m−1} < p_m = 1, such that for all i ∈ [n] and all j ∈ [m], the density of the valuation function µ_i in the interval [p_{j−1}, p_j] is constant. Denote that constant by f_{i,j}. Note that m and the bit-length of f_{i,j} are polynomially upper bounded by the size of the instance I.
Let S = (x̂, label) be a candidate solution, where the cut positions are 0 = x̂_0 ≤ x̂_1 ≤ x̂_2 ≤ · · · ≤ x̂_{(k−1)n} ≤ x̂_{(k−1)n+1} = 1 and, for t ∈ [(k−1)n + 1], label(t) is the label in [k] assigned to the interval [x̂_{t−1}, x̂_t]. We construct an LP that has variables x_0, . . . , x_{(k−1)n+1} and a variable z. We denote it by LP(I, S). In the LP we minimize z under the constraints:
• x_0 = 0 and x_{(k−1)n+1} = 1,
• for every t ∈ [(k−1)n]: cut x_t must lie between cuts x_{t−1} and x_{t+1},
• for every t ∈ [(k−1)n]: for every j such that x̂_t ∈ [p_{j−1}, p_j] (at most two such j exist), x_t must also lie in [p_{j−1}, p_j],
• let A_1, . . . , A_k denote the partition of the domain [
0, 1] given by the cuts x_0, . . . , x_{(k−1)n+1} and the labeling label. Then the constraint is: for every agent i ∈ [n]: max_{ℓ_1, ℓ_2} |µ_i(A_{ℓ_1}) − µ_i(A_{ℓ_2})| ≤ z.
We could also add a constraint z ≥
0, but this is already implicitly enforced by the existing constraints.
Apart from the last constraint type, it is clear that all the other constraints can be expressed linearly. For the last type of constraint, note that it can be broken down into the constraints µ_i(A_{ℓ_1}) − µ_i(A_{ℓ_2}) ≤ z and µ_i(A_{ℓ_2}) − µ_i(A_{ℓ_1}) ≤ z for all ℓ_1, ℓ_2. Thus, it remains to show that µ_i(A_{ℓ_1}) and µ_i(A_{ℓ_2}) can be written as linear functions of x_0, . . . , x_{(k−1)n+1}. To see this, first note that the labeling is fixed, i.e., for any interval [x_t, x_{t+1}] we know which label it is assigned to. Furthermore, for every t ∈ [(k−1)n] the cut x_t is constrained to lie in some interval [p_{j−1}, p_j], and it is not allowed to cross from one interval [p_{j−1}, p_j] to another. Thus, for any agent i ∈ [n] and for any interval [p_{j−1}, p_j], we can express the amount of value of agent i in [p_{j−1}, p_j] going to each of the labels {
1, 2, . . . , k } as a linear expression. If [ p j − , p j ] does notcontain any cut x t , then all of it is assigned to some label (cid:96) (which is fixed, i.e. only depends on S ).Thus, the value assigned to label (cid:96) is f ij ( p j − p j − ) , and the value assigned to all other labels is 0.If [ p j − , p j ] contains the cuts x t ≤ · · · ≤ x t s , then: • interval [ p j − , x t ] yields value f ij ( x t − p j − ) to label ( t ) (and value 0 to all other labels) • interval [ x t p , x t p + ] (for 1 ≤ p ≤ s −
1) yields value f ij ( x t p + − x t p ) to label ( t p + ) • interval [ x t s , p j ] yields value f ij ( p j − x t s ) to label ( t s + ) Note that any feasible solution to this LP with z = k -Division instance I . Furthermore, note that there exist polynomials p and p such that for anycandidate solution S , the number of equations in LP ( I , S ) is at most p ( | I | ) and the bit-size of anynumber appearing in LP ( I , S ) is at most p ( | I | ) . Here | I | denotes the representation size of theinput I . Thus, there exists some polynomial q such that we can efficiently compute an optimalsolution of LP ( I , S ) where the bit-size of the variables is at most q ( | I | ) . Note that this does notdepend on S .Now assume that we have picked ε < q ( | I | ) and obtain a solution S = ( (cid:98) x , label ) to ε -Consensus-1/ k -Division( I ). We then construct LP ( I , S ) and compute an optimal solution ( x ∗ , z ∗ ) .Note that ( (cid:98) x , ε ) is a feasible solution to LP ( I , S ) . Thus, it must hold that z ∗ ≤ ε < q ( | I | ) . Butsince the bit-size of z ∗ is at most q ( | I | ) and z ∗ ≥
0, it must hold that z ∗ =
0. Thus, S (cid:48) = ( x ∗ , label ) is an exact solution to our Consensus-1/ k -Division instance I . We start with the following problem, proven to be PPAD-complete by Mehta [2014].
Definition 10 (2D-Linear-FIXP [Mehta, 2014]). The problem 2D-Linear-FIXP is defined as follows. We are given a circuit C using gates {+, ×ζ, max} and rational constants, that computes a function F_C : [0, 1]² → [0, 1]². The goal is to find x ∈ [0, 1]² such that F_C(x) = x.

Theorem 16 ([Mehta, 2014]). 2D-Linear-FIXP is PPAD-complete.
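For concreteness, a toy 2D-Linear-FIXP instance can be sketched as follows (the particular circuit and its fixed point are illustrative assumptions, not taken from [Mehta, 2014]). Exact rational arithmetic reflects the rational constants allowed in the definition:

```python
from fractions import Fraction

def F(x, y):
    # A small circuit over the gates {+, x_zeta, max} and rational constants,
    # computing a function [0,1]^2 -> [0,1]^2:
    out1 = Fraction(1, 2) * (x + y)   # a +-gate followed by a x_{1/2}-gate
    out2 = max(x, Fraction(1, 4))     # a max-gate fed by a constant-gate 1/4
    return out1, out2

# (1/2, 1/2) is a fixed point of F: both coordinates map to themselves.
print(F(Fraction(1, 2), Fraction(1, 2)))
```

A solution to the 2D-Linear-FIXP instance given by this circuit is exactly such a fixed point; the hardness result says that finding one is PPAD-complete in general.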
In order to prove PPAD-hardness of Consensus-1/3-Division, we will use a slightly modified version of 2D-Linear-FIXP, that we call 2D-Truncated-Linear-FIXP. The first difference is that the domain is [−1, 1]² instead of [0, 1]². Furthermore, instead of the gates {+, ×ζ, max} and rational constants in Q, the circuit will only be allowed to use the gates {+_T, ×_T ζ} and rational constants in [−1, 1] ∩ Q. The gate +_T corresponds to truncated addition and the gate ×_T ζ corresponds to truncated multiplication by ζ. For any x, y ∈ [−1, 1] and any ζ ∈ Q, we have x +_T y = T[x + y] and x ×_T ζ = T[x × ζ], where T is the truncation operator in [−1, 1] as defined in Section 3.

Definition 11 (2D-Truncated-Linear-FIXP). The problem 2D-Truncated-Linear-FIXP is defined as follows. We are given a circuit C using gates {+_T, ×_T ζ} and rational constants in [−1, 1], that computes a function F_C : [−1, 1]² → [−1, 1]². The goal is to find x ∈ [−1, 1]² such that F_C(x) = x.

By applying some simple modifications to any 2D-Linear-FIXP circuit, we are able to show the following theorem.

Theorem 17. 2D-Truncated-Linear-FIXP is PPAD-complete.

Proof.
Containment in PPAD follows from the containment of the more general problem Linear-FIXP in PPAD [Etessami and Yannakakis, 2010]. In particular, note that the truncated gates can be simulated by using their non-truncated versions, along with max and constant-gates.

To show PPAD-hardness, we reduce from 2D-Linear-FIXP. Let C be a circuit that computes a function F_C : [0, 1]² → [0, 1]² that uses gates {+, max, ×ζ} and rational constants. First, let us change the domain to [−1, 1]². We modify the circuit C into Ĉ by applying the following transformation (also used in [Mehta, 2014]) to every output: x ↦ max{min{x, 1}, 0} = max{−1 × max{−1, −1 × x}, 0}. Then, for any input (x, y) ∈ [−1, 1]², we must have that F_Ĉ(x, y) ∈ [0, 1]² and for (x, y) ∈ [0, 1]² we have F_Ĉ(x, y) = F_C(x, y). It follows that Ĉ computes a function F_Ĉ : [−1, 1]² → [−1, 1]² and any fixed-point of F_Ĉ must be in [0, 1]² and thus also a fixed-point of F_C. This means that without loss of generality, we can assume that the domain is [−1, 1]² instead of [0, 1]².

Next, let us show how to replace the gates {+, ×ζ} by {+_T, ×_T ζ}, and ensure that the constant-gates all have value in [−1, 1]. Let c ≥ 2 be an upper bound on the absolute values of all the constants and of all the multipliers ζ of the ×ζ-gates. Note that c has bit representation length that is polynomial in the size of C (since the size of C includes the representation length of all constants). Let n be the number of gates {+, max, ×ζ} in C.

Begin with the circuit C_0 that only contains the two input gates of C and all the constant-gates of C. Pick an arbitrary gate in C with input(s) in C_0 and add it to C_0 to obtain circuit C_1. Then, pick an arbitrary gate in C (and not yet in C_1) with input(s) in C_1 and add it to C_1 to obtain C_2. If we repeat this n times, we obtain an incremental sequence of circuits C_0, C_1, . . . , C_n = C, where a single gate is added at each step.

For i ∈ {0, 1, . . . , n}, let v(C_i) denote the maximum absolute value of any gate in C_i, over all inputs in [−1, 1]². Clearly, we have v(C_0) ≤ max{c, 1} = c. To obtain C_{i+1} from C_i, we have added a single gate. If this gate is a +-gate, then the maximum absolute value appearing in the circuit is at most multiplied by 2 ≤ c. If it is a max-gate, then the maximum absolute value remains the same. Finally, if it is a ×ζ-gate, the maximum absolute value is at most multiplied by |ζ| ≤ c. Thus, in any case, we have v(C_{i+1}) ≤ c·v(C_i). By induction it follows that v(C) = v(C_n) ≤ c^n·v(C_0) = c^{n+1}. Note that M := c^{n+1} has representation length that is polynomial with respect to the size of C.

From the arguments above, it follows that if the input is in [−
1, 1]², all the intermediate values in the computation of the circuit are upper bounded by M in absolute value. Now modify C to obtain C′ as follows. For every constant-gate, replace its constant ζ by ζ/M. For every input, introduce a ×(1/M)-gate that multiplies it by 1/M and then use the output of that gate as the corresponding input for C. By induction, it is easy to see that on any input (x, y) ∈ [−1, 1]², C′ computes F_C(x, y)/M. Importantly, all the intermediate values in the computation of the circuit are now upper bounded by 1 in absolute value. In particular, all the constant-gates have value in [−1, 1].

For any (x, y) ∈ [−1, 1]², we know that F_C(x, y) ∈ [−1, 1]², i.e. F_C(x, y)/M ∈ [−1/M, 1/M]². Obtain circuit C″ by taking C′ and multiplying all the outputs by M (by using ×M-gates). Then, on any input (x, y) ∈ [−1, 1]², C″ outputs F_C(x, y) and all intermediate values in the computation of the circuit are upper bounded by 1 in absolute value.

Thus, since all intermediate values lie in [−1, 1], replacing each gate by its truncated version does not change the computed function. We have transformed C into a circuit that computes the same function F_C : [−1, 1]² → [−1, 1]², but only uses gates {+_T, ×_T ζ, max}, and all the constant-gates have value in [−1, 1].

The last step is to get rid of max-gates. To do this, observe that for any x, y ∈ [−1, 1], we have

max{x, y} = ((1/2 ×_T x) +_T max{(1/2 ×_T y) +_T ((−1/2) ×_T x), 0}) ×_T 2  and  max{x, 0} = (x +_T (−1)) +_T 1.

Thus, every max-gate can be simulated by using the gates {+_T, ×_T ζ} and rational constants in [−
1, 1]. Putting everything together, we have provided a reduction from 2D-Linear-FIXP to 2D-Truncated-Linear-FIXP.

In order to show that Consensus-1/3-Division is PPAD-hard, we reduce from 2D-Truncated-Linear-FIXP. Namely, given a 2D-Truncated-Linear-FIXP circuit C, we will construct an instance I_C of Consensus-1/3-Division, such that any solution of I_C yields a solution to C (i.e., a fixed-point of F_C : [−1, 1]² → [−1, 1]²). As before, we will construct a Consensus-1/3-Division instance on some domain [0, M] for some polynomial M, which is easy to transform into an equivalent instance on domain [0, 1].

First, let us give a high-level description of the ideal reduction that we would like to construct. First, we would show how any interval of the Consensus-1/3-Division domain encodes a value in [−1, 1]. Namely, in any solution S to instance I_C, for every interval I, v_S(I) ∈ [−1, 1] would be the value encoded by interval I. Then, we would construct agents that implement the arithmetic gates needed by 2D-Truncated-Linear-FIXP, namely {+_T, ×_T ζ} and rational constants in [−1, 1]. These agents read some value(s) in [−1, 1] from one or two intervals and output the result of the gate-operation into some other interval of the domain.

With these gates we could implement the circuit C inside our Consensus-1/3-Division instance. In particular, we would have two intervals In_1 and In_2 representing the two inputs, and two intervals Out_1 and Out_2 representing the two outputs. These intervals are pairwise disjoint. In the final step we would then “connect” the outputs to the inputs. Namely, we would introduce an agent implementing a ×_T 1-gate with input Out_1 and output In_1, and a second agent implementing a ×_T 1-gate with input Out_2 and output In_2. This ensures that from any solution of the Consensus-1/3-Division instance we can extract a fixed-point of F_C.

If we could do all this, then this reduction would be very similar to the reduction of Filos-Ratsikas et al. [2018] showing that ε-Consensus-Halving is PPAD-hard for constant ε. Unfortunately, there is a significant obstacle. Namely, we don’t know how to find an encoding of values in intervals such that we can implement arithmetic gates that always work. Because we don’t know in what order the labels A, B and C will appear in any given interval, implementing arithmetic gates is actually much harder than in the case of Consensus-Halving. Thus, the gates we are able to implement only work if the input interval encodes a value in a very specific way. In this case, we say that the interval is a valid encoding of a value. Not all intervals will be a valid encoding of a value. In general, it is very hard to enforce valid encodings. This is the reason why our reduction does not seem to generalize to yield hardness for inversely polynomial ε, or to Consensus-1/k-Division with k > 3.

If interval I is a valid encoding (in a solution S), then let v_S(I) ∈ [−1, 1] denote the value encoded by I. We will drop the subscript S in the remainder of the exposition. The following two lemmas are crucial. They are proved in Section 5.3, where the proof of Theorem 14 is detailed.

Lemma 18.
In the instance I_C we construct, it holds that:

• the two intervals In_1 and In_2 are valid encodings, and

• if Out_1 and Out_2 are valid encodings, then v(Out_1) = v(In_1) and v(Out_2) = v(In_2).

Lemma 19. In instance I_C, we can implement arithmetic gates {G_{+_T}, G_{×_T ζ}, G_ζ} for the operations {+_T, ×_T ζ} and constant-gates respectively, such that:

• G_ζ outputs a valid encoding of ζ ∈ [−1, 1] ∩ Q

• if the input to G_{×_T ζ} is a valid encoding of a value x ∈ [−1, 1], then the gate outputs a valid encoding of x ×_T ζ

• if the two inputs to G_{+_T} are valid encodings of x, y ∈ [−1, 1], then the gate outputs a valid encoding of x +_T y.

If these two lemmas indeed hold, then the reduction is correct. First of all, by Lemma 18, the two inputs to the circuit are valid encodings. Thus, using Lemma 19, it follows by induction that all the gates in the circuit perform their operation correctly and output a valid encoding. In particular, the two outputs of the circuit are valid encodings. Thus, by Lemma 18, we get that each of the two outputs is equal to the corresponding input. As a result, we have identified a fixed-point of F_C : [−1, 1]² → [−1, 1]².

We construct an instance with a set of agents {
1, 2, . . . , n} for some n polynomial in the size of C. For every i ∈ [n], agent i will have an output interval O_i of length 9 on the domain, with the following properties:

• for all j ≠ i it holds that O_j ∩ O_i = ∅ (output intervals are pairwise disjoint)

• agent i’s valuation function has density 3/10 in O_i[1, 2] ∪ O_i[4, 5] ∪ O_i[7, 8].

From this it follows that in any solution S, for each i ∈ [n], O_i must contain at least two cuts. Since the intervals O_i are pairwise disjoint, all 2n cuts are accounted for and thus for each i ∈ [n], O_i contains exactly two cuts. But then it follows that O_i must be well-cut: one cut lies in O_i[2, 4] and the other one in O_i[5, 7]. In particular, there are no stray cuts in this reduction.

Representation of a value. In any solution S, every interval I encodes a value as follows. Let X(I) = I[0, 1] ∪ I[2, 4] ∪ I[5, 7] ∪ I[8, 9]. Let X(I)_A, X(I)_B and X(I)_C denote the subsets of X(I) allocated by the solution S to labels A, B and C respectively. Let μ denote the Lebesgue measure on R. We say that I is a valid encoding of a value if μ(X(I)_C) = 2 and I is well-cut. If I is a valid encoding, then the encoded value is v(I) = (μ(X(I)_A) − μ(X(I)_B))/2 ∈ [−1, 1].

In the next section, we show how to construct the arithmetic gates. With these gates, we can implement the circuit C in our instance I_C. We will implement the circuit such that the first output of the circuit is encoded in the left-most interval of the domain Out_1 = [0, 9], and the second output of the circuit is encoded in the next interval, i.e. Out_2 = [10, 19]. We will ensure that the next two intervals are not used by any gate of the circuit, namely Temp_1 = [20, 29] and Temp_2 = [30, 39] are available. Finally, the next two intervals correspond to the two inputs of the circuit, i.e. In_1 = [40, 49] and In_2 = [50, 59].

By construction, we know that Out_1 is well-cut, because it is the output interval of some agent. In particular, it contains exactly two cuts. By convention, we will assume that the labels appear in order A, B, C from left to right in Out_1. Given any solution S of I_C, we can always ensure that this holds by renaming the labels. Since Out_1 is well-cut, we know that interval Out_1[0, 1/2] is labeled A, interval Out_1[4, 5] is labeled B, and interval Out_1[7, 9] is labeled C. Furthermore, since interval Out_2 is also well-cut and right next to Out_1, we know that Out_2[0, 1/2] is labeled C. However, we do not know the labels of Out_2[4, 5] and Out_2[7, 9], except that one of them is A and the other B (but not which one is which).

Projection Agents.
In order to ensure that Lemma 18 holds, we are going to introduce two special agents, that we call projection agents. Let I := Out_1 and O := Temp_1. The first projection agent’s valuation function has height 3/10 in O[1, 2] ∪ O[4, 5] ∪ O[7, 8], height 1/120 in X(O), height 1/30 in I[17/2, 9], height 1/120 in I[2, 4], height 1/60 in I[0, 1/2] ∪ I[·], and height 0 everywhere else. Since Out_1 is well-cut and we know the labels of the pieces, this agent ensures that:

• interval Temp_1 is a valid encoding

• if interval Out_1 is a valid encoding, then v(Temp_1) = −v(Out_1).

The first property holds because exactly 1/3 of the projection agent’s value in interval Out_1 goes to label C. Thus, label C also needs to get exactly 1/3 of the agent’s value in interval Temp_1. By construction of the valuation function in Temp_1, it follows that C must get exactly 1/3 of X(Temp_1). To show the second property, consider the three possible cases:

• label C is the right-most label in Temp_1: then there is a cut at position Temp_1[6]. In this case, it is easy to see that the other cut in Temp_1 will exactly reflect what the first cut in Out_1 does.

• label C is the middle label in Temp_1: since C gets 1/3 of X(Temp_1), it follows that the cuts in Temp_1 have distance exactly 3 between them. From this, it follows that both cuts reflect what the first cut in Out_1 does.

• label C is the left-most label in Temp_1: then there is a cut at position Temp_1[3]. Again, as in the first case, the other cut in Temp_1 will exactly reflect what the first cut in Out_1 does. (Note that this case is actually not possible here, but we have included it, because it might occur for the second projection agent.)

In order to show that Lemma 18 holds for In_1 and Out_1, it suffices to use a G_{×_T(−1)}-gate (see next section) with input Temp_1 and output In_1.

Now let I := Out_2 and O := Temp_2. The second projection agent’s valuation function has height 3/10 in O[1, 2] ∪ O[4, 5] ∪ O[7, 8], height 1/120 in X(O), height 1/30 in I[0, 1/2], height 1/120 in I[5, 7], height 1/60 in I[17/2, 9] ∪ I[·], and height 0 everywhere else. In other words, the second projection agent’s valuation function in interval Out_2 is the same as the first projection agent’s in Out_1, except that it is mirrored. By using the same arguments we used for the first projection agent, we get that Lemma 18 also holds for In_2 and Out_2.

In this section, we prove Lemma 19, by providing an explicit construction of the arithmetic gates. As before, let T : R → [−
1, 1] denote the truncation operator. When constructing the agents implementing the gates, we can assume that their output interval is well-cut, since this holds for all agents by construction.

Multiplication by a constant ζ ∈ Q [G_{×_T ζ}]: Let I and O be two disjoint intervals of length 9. First consider the case ζ ≤ 0. We create an agent with the following valuation function. The agent’s density function has height 3/10 in O[1, 2] ∪ O[4, 5] ∪ O[7, 8], height |ζ|/(60(|ζ| + 1)) in X(I), height 1/(60(|ζ| + 1)) in X(O), and height 0 everywhere else. If I is a valid encoding, then O is a valid encoding of the value v(O) = T[v(I) × ζ]. If ζ > 0, then use a G_{×_T(−ζ)}-gate and then a G_{×_T(−1)}-gate.

Let us see why this works. Since I encodes a valid value, exactly 1/3 of the agent’s value in interval I goes to label C. It follows that exactly 1/3 of the agent’s value in interval O must go to label C. By construction of the valuation function in O, it follows that 1/3 of X(O) must go to label C. As a result, O is also a valid encoding. Now, if label C gets the left-most piece in O, then there is a cut at position O[3]. The other cut, which is located in O[5, 7], will thus encode the value of interval O, and the truncation will work similarly to our Consensus-Halving gadgets. The analogous argument also holds if C gets the right-most piece in O. If label C gets the middle piece in O, then the distance between the two cuts will be exactly 3. Thus, the two cuts “move together” and will touch a big block at the same time. As a result, the truncation works here as well.

Addition [G_{+_T}]: Let I_1, I_2 and O be three pairwise disjoint intervals of length 9. The agent’s density function has height 3/10 in O[1, 2] ∪ O[4, 5] ∪ O[7, 8], height 1/180 in X(I_1) ∪ X(I_2) ∪ X(O), and height 0 everywhere else. If I_1 and I_2 are valid encodings, then O is a valid encoding of the value v(O) = −T[v(I_1) + v(I_2)]. To obtain G_{+_T}, just apply a G_{×_T(−1)}-gate on the output. A similar reasoning to the case of G_{×_T ζ} proves the correctness of this gate too.

Constant ζ ∈ [−1, 1] ∩ Q [G_ζ]: For this we use the left-most interval of length 9 of the domain, namely Out_1. We know that interval I := Out_1 is the output interval of some gate, so it is well-cut. Furthermore, we know that the labels appear in order A, B, C from left to right (by convention). Thus, we know that interval I[0, 1/2] is labeled A, interval I[4, 5] is labeled B, and interval I[7, 9] is labeled C.

Introduce an agent with output interval O of length 9. The agent’s density function has height 3/10 in O[1, 2] ∪ O[4, 5] ∪ O[7, 8], height 1/120 in X(O), height 1/30 in I[·], height (1 − ζ)/230 in I[0, 1/2], height (1 + ζ)/230 in I[·], and height 0 everywhere else. From this construction, it follows that interval O is a valid encoding of the value v(O) = ζ.

The main technical question is whether ε-Consensus-Halving for single-block valuations is PPA-hard or polynomially solvable, or perhaps even complete for some other class. Another interesting direction is to extend the PPA-hardness result of Theorem 2 (or even for a larger number of blocks) to constant ε; such a result however would seemingly require radically new ideas, namely an averaging argument over a constant set of outputs that is robust to stray cuts. In a slightly different direction, Deligkas et al. [2020] very recently showed that the problem for a constant number of agents is PPA-complete, if we allow agents to have more general valuations, in particular non-additive. This leaves open the fundamental question of showing PPA-hardness of ε-Consensus-Halving for a constant number of agents with additive valuations.

Finally, it would be interesting to study the complexity of the Consensus-1/k-Division problem when k ≥ 3. To this end, we have recently shown [Filos-Ratsikas et al., 2020] that the problem is in PPA-k, for any k which is a prime power. This begs the question whether Consensus-1/k-Division (and consequently Necklace Splitting with k thieves [Filos-Ratsikas and Goldberg, 2018]) is actually complete for PPA-k.

Acknowledgments
Alexandros Hollender is supported by an EPSRC doctoral studentship (Reference 1892947). Katerina Sotiraki is supported in part by NSF/BSF grant
References
James Aisenberg, Maria Luisa Bonet, and Sam Buss. 2-D Tucker is PPA complete. Journal of Computer and System Sciences, 108:92–103, 2020. doi:10.1016/j.jcss.2019.09.002.

Noga Alon. Splitting necklaces. Advances in Mathematics, 63(3):247–253, 1987. doi:10.1016/0001-8708(87)90055-7.

Noga Alon and Douglas B. West. The Borsuk-Ulam Theorem and Bisection of Necklaces. Proceedings of the American Mathematical Society, 98(4):623–628, 1986. doi:10.2307/2045739.

Eshwar Ram Arunachaleswaran, Siddharth Barman, Rachitesh Kumar, and Nidhi Rathi. Fair and efficient cake division with connected pieces. In Proceedings of the 15th Conference on Web and Internet Economics (WINE), pages 57–70. Springer, 2019. doi:10.1007/978-3-030-35389-6_5.

Haris Aziz and Simon Mackenzie. A discrete and bounded envy-free cake cutting protocol for any number of agents. In Proceedings of the 57th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 416–427. IEEE, 2016a. doi:10.1109/FOCS.2016.52.

Haris Aziz and Simon Mackenzie. A discrete and bounded envy-free cake cutting protocol for four agents. In Proceedings of the 48th Annual ACM Symposium on Theory of Computing (STOC), pages 454–464, 2016b. doi:10.1145/2897518.2897522.

Imre Bárány, Senya B. Shlosman, and András Szűcs. On a topological generalization of a theorem of Tverberg. Journal of the London Mathematical Society, 2(1):158–164, 1981. doi:10.1112/jlms/s2-23.1.158.

Karol Borsuk. Drei Sätze über die n-dimensionale euklidische Sphäre. Fundamenta Mathematicae, 1933. ISSN 0016-2736. doi:10.4064/fm-20-1-177-190.

Steven J. Brams and Alan D. Taylor. Fair Division: From cake-cutting to dispute resolution. Cambridge University Press, 1996. doi:10.1017/CBO9780511598975.

Xi Chen, Xiaotie Deng, and Shang-Hua Teng. Settling the complexity of computing two-player Nash equilibria. Journal of the ACM, 56(3):14, 2009. doi:10.1145/1516512.1516516.

Xi Chen, Dimitris Paparas, and Mihalis Yannakakis. The complexity of non-monotone markets. In Proceedings of the 45th Annual ACM Symposium on the Theory of Computing (STOC), pages 181–190, 2013. doi:10.1145/2488608.2488632.

Constantinos Daskalakis, Paul W. Goldberg, and Christos H. Papadimitriou. The complexity of computing a Nash equilibrium. SIAM Journal on Computing, 39:195–259, 2009. doi:10.1137/070699652.

Argyrios Deligkas, John Fearnley, Themistoklis Melissourgos, and Paul G. Spirakis. Computing exact solutions of consensus halving and the Borsuk-Ulam theorem. In Proceedings of the 46th International Colloquium on Automata, Languages and Programming (ICALP). Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPIcs.ICALP.2019.138.

Argyrios Deligkas, Aris Filos-Ratsikas, and Alexandros Hollender. Two’s Company, Three’s a Crowd: Consensus-Halving for a Constant Number of Agents. arXiv preprint, 2020. URL https://arxiv.org/abs/2007.15125.

Xiaotie Deng, Qi Qi, and Amin Saberi. Algorithmic solutions for envy-free cake cutting. Operations Research, 60(6):1461–1476, 2012. doi:10.1287/opre.1120.1116.

Xiaotie Deng, Jack R. Edmonds, Zhe Feng, Zhengyang Liu, Qi Qi, and Zeying Xu. Understanding PPA-completeness. In Proceedings of the 31st Conference on Computational Complexity (CCC). Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2016. doi:10.4230/LIPIcs.CCC.2016.23.

Xiaotie Deng, Zhe Feng, and Rucha Kulkarni. Octahedral Tucker is PPA-Complete. In Electronic Colloquium on Computational Complexity (ECCC), volume 24, page 118, 2017. URL https://eccc.weizmann.ac.il/report/2017/118/.

Kousha Etessami and Mihalis Yannakakis. On the complexity of Nash equilibria and other fixed points. SIAM Journal on Computing, 39(6):2531–2597, 2010. doi:10.1137/080720826.

Aris Filos-Ratsikas and Paul W. Goldberg. Consensus Halving is PPA-complete. In Proceedings of the 50th Annual ACM Symposium on Theory of Computing (STOC), pages 51–64. ACM, 2018. doi:10.1145/3188745.3188880.

Aris Filos-Ratsikas and Paul W. Goldberg. The Complexity of Splitting Necklaces and Bisecting Ham Sandwiches. In Proceedings of the 51st Annual ACM Symposium on Theory of Computing (STOC), pages 638–649. ACM, 2019. doi:10.1145/3313276.3316334.

Aris Filos-Ratsikas, Søren Kristoffer Still Frederiksen, Paul W. Goldberg, and Jie Zhang. Hardness Results for Consensus-Halving. In Proceedings of the 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS), pages 24:1–24:16. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2018. doi:10.4230/LIPIcs.MFCS.2018.24.

Aris Filos-Ratsikas, Alexandros Hollender, Katerina Sotiraki, and Manolis Zampetakis. A topological characterization of modulo-p arguments and implications for necklace splitting. arXiv preprint, 2020. URL https://arxiv.org/abs/arXiv:2003.11974.

Jugal Garg, Ruta Mehta, and Vijay V. Vazirani. Substitution with satiation: A new class of utility functions and a complementary pivot algorithm. Mathematics of Operations Research, 43(3):996–1024, 2018. doi:10.1287/moor.2017.0892.

Charles H. Goldberg and Douglas B. West. Bisection of Circle Colorings. SIAM Journal on Algebraic Discrete Methods, 1985. ISSN 0196-5212. doi:10.1137/0606010.

Paul W. Goldberg and Alexandros Hollender. The Hairy Ball Problem is PPAD-Complete. In Proceedings of the 46th International Colloquium on Automata, Languages and Programming (ICALP), pages 65:1–65:14. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPIcs.ICALP.2019.65.

Mika Göös, Pritish Kamath, Katerina Sotiraki, and Manolis Zampetakis. On the Complexity of Modulo-q Arguments and the Chevalley-Warning Theorem. In Proceedings of the 35th Computational Complexity Conference (CCC), pages 19:1–19:42. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPIcs.CCC.2020.19.

Michelangelo Grigni. A Sperner lemma complete for PPA. Information Processing Letters, 2001. doi:10.1016/S0020-0190(00)00152-6.

Charles R. Hobby and John R. Rice. A moment problem in L1 approximation. Proceedings of the American Mathematical Society, 16(4):665–670, 1965. doi:10.2307/2033900.

Alexandros Hollender. The Classes PPA-k: Existence from Arguments Modulo k. In Proceedings of the 15th Conference on Web and Internet Economics (WINE), pages 214–227. Springer, 2019. doi:10.1007/978-3-030-35389-6_16.

Nimrod Megiddo and Christos H. Papadimitriou. On total functions, existence theorems and computational complexity. Theoretical Computer Science, 1991. doi:10.1016/0304-3975(91)90200-L.

Ruta Mehta. Constant rank bimatrix games are PPAD-hard. In Proceedings of the 46th Annual ACM Symposium on the Theory of Computing (STOC), pages 545–554, 2014. doi:10.1145/2591796.2591835.

Jerzy Neyman. Un théorème d’existence. CR Acad. Sci. Paris, 222:843–845, 1946.

Dömötör Pálvölgyi. Combinatorial Necklace Splitting. The Electronic Journal of Combinatorics, 16(1)(R79):1–8, 2009. doi:10.37236/168.

Christos H. Papadimitriou. On the complexity of the parity argument and other inefficient proofs of existence. Journal of Computer and System Sciences, 48(3):498–532, 1994. doi:10.1016/S0022-0000(05)80063-7.

Forest W. Simmons and Francis E. Su. Consensus-halving via theorems of Borsuk-Ulam and Tucker. Mathematical Social Sciences, 45(1):15–25, 2003. doi:10.1016/S0165-4896(02)00087-2.

Hugo Steinhaus. The Problem of Fair Division. Econometrica, 16:101–104, 1948.

Albert W. Tucker. Some Topological Properties of Disk and Sphere. In