Economic Efficiency Requires Interaction
Shahar Dobzinski∗    Noam Nisan†    Sigal Oren‡

Abstract

We study the necessity of interaction between individuals for obtaining approximately efficient economic allocations. We view this as a formalization of Hayek's classic point of view that focuses on the information transfer advantages that markets have relative to centralized planning. We study two settings: combinatorial auctions with unit demand bidders (bipartite matching) and combinatorial auctions with subadditive bidders. In both settings we prove that non-interactive protocols require exponentially larger communication costs than do interactive ones, even ones that only use a modest amount of interaction.

∗ Weizmann Institute of Science. Email: [email protected]. Incumbent of the Lilian and George Lyttle Career Development Chair. Supported in part by the I-CORE program of the planning and budgeting committee and the Israel Science Foundation 4/11 and by EU CIG grant 618128.
† Hebrew University and Microsoft Research. Email: [email protected].
‡ Hebrew University and Microsoft Research. Email: [email protected]. Supported by an I-CORE fellowship and by a grant from the Israel Science Foundation. Part of the work was done while at Cornell University where the author was supported in part by NSF grant CCF-0910940 and a Microsoft Research Fellowship.
Introduction
The most basic economic question in a social system is arguably how to determine an efficient allocation of the economy's resources. This challenge was named the economic calculation problem by von Mises, who argued that markets do a better job than centralized systems can. In his classic paper [17], Hayek claimed that the heart of the matter is the distributed nature of "input", i.e. that the central planner does not have the information regarding the costs and utilities of the different parties:

    knowledge of the circumstances of which we must make use never exists in concentrated or integrated form but solely as the dispersed bits of ... knowledge which all the separate individuals possess.
Hayek therefore proposes that the question of which economic system is better (market-based or centrally-planned) hinges on which of them is able to better transfer the information needed for economic efficiency:

    which of these systems is likely to be more efficient depends ... on whether we are more likely to succeed in putting at the disposal of a single central authority all the knowledge which ... is initially dispersed among many different individuals, or in conveying to the individuals such additional knowledge as they need in order to enable them to fit their plans with those of others.
When Hurwicz [20] formalized a notion of protocols for transfer of information in an economy, the basic examples were a Walrasian-Tatonnement protocol modeling a free market and a command process that entails a full report of information to a centralized planner. He noted that:
    The language of the command process is much larger than that of the Walrasian process. We must remember, however, that the pure command process is finished after only two exchanges of information while the tatonnement may go on for a long time.
This paper follows Hurwicz's approach of formalizing Hayek's question by technically studying the amount of information exchange needed for obtaining economic efficiency. We consider the main distinction – in terms of information transfer – between a centralized system and a distributed market-based one to be that of interaction: in a centralized system all individuals send information to the central planner who must then determine an efficient allocation, while market-based systems are by nature interactive.

Our main results support Hayek's point of view. We exhibit situations where interaction allows exponential savings in information transfer, making the economic calculation problem tractable for interactive markets even when it is intractable for a centralized planner. We have two conceptually similar, but technically disjoint, sets of results along this line. The first set of results considers the classic simple setting of unit-demand bidders, essentially a model of matching in bipartite graphs. The second and more complicated setting concerns combinatorial auctions with subadditive bidders. In both settings we show that non-interactive protocols have exponentially larger communication costs than interactive ones.

(Footnote: Interactive versions of centralized planning such as the "market socialism" models of Lange and Lerner [25] have also been suggested. These can essentially simulate free markets and so become indistinguishable from market mechanisms in terms of their informational abilities. The distinction between them and true market models is, thus, outside the scope of the discussion in this paper. It should perhaps also be noted here that from a more modern point of view, the revelation principle states that centralized systems can also simulate any incentives that a distributed one may have.)

Non-interactive systems are modeled as simultaneous communication protocols, where all agents simultaneously send messages to a central planner who must decide on the allocation based on these messages alone.
Interactive systems may use multiple rounds of communication, and we measure the amount of interactiveness of a system by the number of communication rounds. In both of our settings we prove lower bounds on the simultaneous communication complexity of finding an approximately efficient allocation, as well as exponentially smaller upper bounds for protocols that use a modest number of rounds of interaction. We now elaborate in more detail on the two settings that we consider and describe our results. We begin with the technically simpler setting of bipartite matching.
In this simple matching scenario there are n players and n goods. Each player i is interested in acquiring a single item from some privately known subset S_i of the goods, and our goal is to allocate the items to the players in a way that maximizes the number of players who get an item from their desired set. This is of course a classic problem in economics (matching among unit demand bidders) as well as in computer science (bipartite matching).

We first consider simultaneous protocols. Each of the players is allowed to send a small amount of information, l bits with l << n, to the centralized planner who must then output a matching.

Theorem:
• Every deterministic simultaneous protocol where each player sends at most n^ε bits of communication cannot approximate the size of the maximum matching to within a factor better than O(n^{1−ε}).
• Any randomized simultaneous protocol where each player sends at most n^ε bits of communication cannot approximate the size of the maximum matching to within a factor better than O(n^{1/2−ε}).

Both our bounds are essentially tight. For deterministic protocols, one can trivially obtain an approximation ratio of n with message length O(log n): each player sends the index of one arbitrary item that he is interested in. If randomization is allowed, it is not hard to see that when each player sends the index of a random item he is interested in, we get an approximation ratio of O(√n). We have therefore established a gap between randomized and deterministic protocols. We also note that the randomized lower bound can in fact be obtained from more general recent results of [19] in a stronger model. For completeness we present a significantly simpler direct proof for our setting.

On the positive side, we show that a few communication rounds suffice to get an almost efficient allocation.
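To make the one-round protocols above concrete, the following is a small Python simulation of the randomized simultaneous protocol: each player reports one random item from his demand set, and each reported item is given to an arbitrary player who reported it. This is only an illustrative sketch; the function name and the toy instance are ours, not from the paper.

```python
import random

def simultaneous_random_protocol(neighbor_sets, rng):
    """One-round protocol: each player reports one random item he is interested in;
    each reported item is allocated to an arbitrary player that reported it."""
    reports = {}  # item -> the player it is allocated to
    for i, S in enumerate(neighbor_sets):
        if S:
            # first reporter of an item keeps it (an arbitrary tie-breaking rule)
            reports.setdefault(rng.choice(sorted(S)), i)
    return {player: item for item, player in reports.items()}

# toy instance: player i is interested in items {i, (i+1) mod n}
n = 8
neighbor_sets = [{i, (i + 1) % n} for i in range(n)]
matching = simultaneous_random_protocol(neighbor_sets, random.Random(0))

# sanity checks: every player gets an item from his set, and items are distinct
assert all(item in neighbor_sets[p] for p, item in matching.items())
assert len(set(matching.values())) == len(matching)
```

On adversarial instances this one-round protocol only guarantees an O(√n)-approximation in expectation, matching the discussion above.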
Our algorithm is a specific instantiation of the well-known class of "auction algorithms". This class of algorithms has its roots in [8] and has been extensively studied from an economic point of view (e.g. [32]) as well as from a computational point of view (starting with [5]).

(Footnote: While Hurwicz and other economists employed models that allowed communicating real numbers, we employ the standard, modern, notions from computer science (see [24]) that count bits. This distinction does not seem to have any conceptual significance – see [31] for a discussion.)

(Footnote: Formally, the algorithm works in the so-called "blackboard model" – see Appendix A for a definition.)

The standard ascending auction algorithm for this setting begins by setting the price of each item to be 0. Initially, all bidders are "unallocated". Then, in an arbitrary order, each unallocated player i reports an index of an item that maximizes his profit at the current prices (his "demand"). The price of that item increases by ε, the item is reallocated to player i, and the process continues with another unallocated player. It is well known that if the maximum that a player is willing to pay for an item is 1, then this process terminates after at most O(n/ε) steps, and it is not hard to construct examples where this is tight. We show that if in each round every unallocated player reports, simultaneously with the others, an index of a random item that maximizes his profit at the current prices (O(log n) bits of information), and each reported item is re-allocated to an arbitrary player that reported it, then the process terminates in logarithmically many rounds. We are not aware of any other scenario where natural market dynamics provably converge (approximately) to an equilibrium in time that is sub-linear in the market size.

Theorem:
Fix ε > 0. After O(log n / ε) rounds the randomized algorithm provides in expectation a (1 + ε)-approximation to the bipartite matching problem.

We then quantify the tradeoff between the amount of interaction and economic efficiency. We show that for every k ≥ 1 one can obtain an O(n^{1/(k+1)})-approximation in k rounds, where at each round each player sends O(log n) bits of information.

In passing we note that the communication complexity of the exact problem, i.e. of finding an exact perfect matching, when it exists, remains a very interesting open problem. Moreover, we believe that it may shed some light on basic algorithmic challenges of finding a perfect matching in near-linear time as well as deterministically in parallel. We briefly present this direction in Appendix A.

Our second set of results concerns a setting where we are selling m items to n bidders in a combinatorial auction. Here each player i has a valuation v_i that specifies his value for every possible subset S of items. The goal is to maximize the "social welfare" Σ_i v_i(A_i), where A_i is the set of goods that is allocated to player i. The communication requirements in such settings have received much attention, and it is known that, for arbitrary valuations, an exponential amount of communication is required to achieve even an m^{1/2−ε}-approximation of the optimal welfare [31]. However, it is also known that if the valuations are subadditive, v_i(S ∪ T) ≤ v_i(S) + v_i(T), then constant factor approximations can be achieved using only polynomial communication [12, 13, 33, 10, 26]. Can this level of approximate welfare be achieved by a direct mechanism, without interaction?

Two recent lines of research touch on this issue. On one hand, several recent papers show that valuations cannot be "compressed", even approximately, and that any polynomial-length description of subadditive valuations (or even the more restricted XOS valuations) must lose a factor of Θ(√m) in precision [3, 2].
Similar, but somewhat weaker, non-approximation results are also known for the far more restricted subclass of "gross-substitutes" valuations [4], for which exact optimization is possible with polynomial communication. Thus the natural approach for a direct mechanism, where each player sends a succinctly specified approximate version of his valuation (a "sketch") to the central planner, cannot lead to a better than O(√m) approximation. This does not, however, rule out other approaches for non-interactive allocation that do not require approximating the whole valuation. Indeed we show that one can do better:

Theorem:
There exists a deterministic communication protocol, in which each player holds a subadditive valuation and sends (simultaneously with the others) polynomially many bits of communication to the central planner, that guarantees an Õ(√m)-approximation to the optimal allocation.

Another line of relevant research considers bidders with such valuations being put in a game where they can only bid on each item separately [7, 6, 16, 14]. In such games the message of each bidder is by definition only O(m) real numbers, each of which can be specified in sufficient precision with logarithmically many bits. Surprisingly, it turns out that sometimes this suffices to get a constant factor approximation of the social welfare. Specifically, one such result [14] considers a situation where the valuation v_i of each player i is drawn independently from a commonly known distribution D_i on subadditive valuations. In such a case, every player i can calculate bids on the items – requiring O(m log m) bits of communication – based only on his valuation v_i and the distributions D_j of the others (but not their valuations v_j). By allocating each item to the highest bidder we get a 2-approximation to the social welfare, in expectation over the distribution of valuations. This is a non-interactive protocol that comes tantalizingly close to what we desire: all that remains is for the 2-approximation to hold for every input rather than in expectation. Using Yao's principle, this would follow (at least for a randomized protocol) if we could get an approximation in expectation for every distribution on the inputs, not just the product distribution where the valuations are chosen independently. While the approximation guarantees of [7, 6, 16, 14] do not hold for correlated distributions on the valuations, there are other settings where similar approximation results do hold even for correlated distributions [27, 1].
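The allocation rule underlying these item-bidding games is simple enough to state in code: given each player's m item bids, every item goes to a highest bidder. A minimal sketch (the function name and the toy bid matrix are illustrative, not from [14]):

```python
def allocate_by_item_bids(bids):
    """bids[i][j] = player i's bid on item j; each item goes to a highest bidder
    (ties broken toward the lowest player index). Returns one item set per player."""
    n, m = len(bids), len(bids[0])
    allocation = [set() for _ in range(n)]
    for j in range(m):
        winner = max(range(n), key=lambda i: bids[i][j])
        allocation[winner].add(j)
    return allocation

# 2 players, 3 items: player 0 outbids on item 0, player 1 on items 1 and 2
bids = [[3, 1, 0], [2, 2, 2]]
assert allocate_by_item_bids(bids) == [{0}, {1, 2}]
```

Note that each player's message is just his row of m numbers, which is what makes this a non-interactive (one-round) protocol.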
Would this be possible here too? Our main technical construction proves a negative answer and shows that interaction is essential for obtaining an approximately optimal allocation among subadditive valuations (even for the more restricted XOS valuations):

Theorem:
No (deterministic or randomized) protocol in which each player holds an XOS valuation and, simultaneously with the others, sends sub-exponentially many bits of communication to a central planner can guarantee an m^{1/4−ε}-approximation.

Again, this is in contrast to interactive protocols that can achieve a factor 2 approximation [12] (with polynomially many rounds of communication). The lower bound shows that interaction is necessary to solve the economic calculation problem in combinatorial auctions. We show that if a small amount of interaction is allowed then one can get significantly better results:

Theorem:
For every k ≥ 1 there exists a protocol that obtains an O(k · m^{1/(k+1)})-approximation in k rounds, where at each round each player sends poly(m, n) bits. In particular, after k = log m rounds we get a poly-logarithmic approximation to the welfare.

Open Questions.
In our opinion the most intriguing open question is to determine the possible approximation ratio achievable by simultaneous combinatorial auctions with submodular or even gross-substitutes players that are allowed to send poly(m, n) bits. Our Õ(√m)-algorithm is clearly applicable in both settings. We do know that exactly solving the problem for gross-substitutes valuations requires exponential communication (see Section B), although when interaction is allowed polynomial communication suffices.

Another natural open question is to prove lower bounds on the approximation ratio achievable by k-round protocols. Our bounds only hold for k = 1. Furthermore, how good is the approximation ratio that can be guaranteed when incentives are taken into account? Can a truthful algorithm guarantee a poly-logarithmic approximation in O(log n) rounds for XOS valuations?

For the bipartite matching setting we leave open the question of developing algorithms for weighted bipartite matching. In addition, our k-round algorithms are randomized; developing deterministic k-round algorithms even for the unweighted case is also of interest. In Appendix A we further discuss more open questions related to the communication complexity of bipartite matching and its relation to the computational complexity of bipartite matching.

Finally, we studied the matching problem in the framework of simultaneous communication complexity. A fascinating future direction is to study other classic combinatorial optimization problems (e.g., minimum cut, packing and covering problems, etc.) through the lens of simultaneous communication complexity.

Combinatorial Auctions.
In a combinatorial auction we have a set N of players (|N| = n) and a set M of items (|M| = m). Each player i has a valuation function v_i : 2^M → R. Each v_i is assumed to be normalized (v_i(∅) = 0) and non-decreasing. The goal is to maximize the social welfare, that is, to find an allocation of the items to the players (A_1, ..., A_n) that maximizes the welfare Σ_i v_i(A_i).

A valuation function v is subadditive if for every two bundles S and T, v(S) + v(T) ≥ v(S ∪ T). A valuation v is additive if for every bundle S we have that v(S) = Σ_{j ∈ S} v({j}). A valuation v is XOS if there exist additive valuations a_1, ..., a_t such that for every bundle S, v(S) = max_r a_r(S). Each a_r is a clause of v. If a ∈ arg max_r a_r(S) then a is a maximizing clause of S.

Matching.
Here the goal is to find a maximum matching in an undirected bipartite graph G = (V_1, V_2, E), |V_1| = |V_2| = n. Each player i corresponds to vertex i ∈ V_1 and is only aware of edges of the form (i, j) (with j ∈ V_2, since the graph is bipartite). The neighbor set of i is S_i = {j | (i, j) ∈ E}. The goal is to maximize the number of matched pairs. When convenient we will refer to the vertices on the left as unit demand bidders and the vertices on the right as goods. Under this interpretation the neighbor set of player i is simply the set of goods that he is interested in.

Chernoff Bounds.
Let X be a random variable with expectation μ. Then, for any δ > 0,

    Pr[X > (1 + δ)μ] < (e^δ / (1 + δ)^{(1+δ)})^μ = (e^δ / e^{(1+δ) ln(1+δ)})^μ = (1 / e^{(1+δ) ln(1+δ) − δ})^μ.

For δ > e^2 we can loosely bound this expression by (1/e^δ)^μ.

In this section we state lower bounds on the power of simultaneous algorithms for bipartite matching. The first one deals with the power of deterministic algorithms (proof in Subsection 3.1):
Theorem 3.1 (lower bound for deterministic algorithms)
The approximation ratio of any deterministic simultaneous algorithm for matching that uses at most l bits per player is no better than n/(4l + 4 log(n)). In particular, for any fixed ε > 0 and l = n^ε the approximation ratio is Ω(n^{1−ε}).

The second theorem gives a lower bound on the power of randomized algorithms (proof in Subsection 3.2):
Theorem 3.2 (lower bound for randomized algorithms)
Fix ε > 0. The approximation ratio of every algorithm in which each player sends a message of size l ≤ n^{1/2−α−ε} is no better than n^α, for every α ≤ 1/2 − ε.

The next proposition shows that both lower bounds are essentially tight. In particular, this implies a proven gap between the power of deterministic and randomized algorithms.

Proposition 3.3
1. There exists a deterministic simultaneous algorithm that uses l bits per player and provides an approximation ratio of max(2, n log n / l).

2. There exists a simultaneous randomized algorithm that provides an expected approximation ratio of O(√n).

Proof:
The randomized algorithm is obtained as a corollary of the k-round algorithm of Subsection 4.2 (with k = 1). We now describe the deterministic algorithm. Let l' = l / log n. We consider the algorithm where each player reports the indices of some arbitrary l' vertices in his neighbor set. The algorithm matches as many reported vertices as possible.

We now analyze the approximation ratio of the algorithm. Consider some optimal matching. We distinguish between two cases. The first one is when at least half of the players that are matched in the optimal solution have at most l' neighbors (call this set S). In this case, vertices in S can report their full neighbor sets. The algorithm will consider in particular the matching that matches the vertices in S as in the optimal solution, and thus we get a 2-approximation.

In the second case, most of the players that are matched in the optimal solution have more than l' neighbors (call this set T). Consider the matching that the algorithm outputs. If all vertices in T are matched, then we get a 2-approximation. Otherwise, there is a player i ∈ T that is not matched by the algorithm. This implies that all the l' vertices that he reported are already matched, so the algorithm outputs at least l' matches. The proposition follows since the optimal matching makes at most n matches, giving a ratio of at most n/l' = n log n / l.

In the proof of Theorem 3.1 we assume that our graphs are w-uniform; that is, the size of the neighbor set S_i of every player i is w. Notice that this assumption only makes our hardness result stronger. For this proof, we fix a specific simultaneous algorithm and analyze its properties.

We begin by considering a specific random construction of graphs that we name w-random. In a w-random graph we choose a set U of |U| = w vertices from V_2 uniformly at random and let the neighbor set of each one of the players be S_i = U. Let (a_1, ..., a_n) be the output of the algorithm on this instance.
As the optimal matching includes exactly w matched players, it is clear that the solution output by the algorithm matches at most w players. The crux of the proof is constructing a "fooling instance" in which all players send the same messages; hence the algorithm cannot distinguish the fooling instance from the original instance and outputs the same allocation. We will construct the fooling instance so that, on one hand, for almost every player i, a_i is not in the neighbor set of player i (this will be true for approximately n − w players). This implies that the value of the matching that the algorithm outputs in the fooling instance is low. On the other hand, the size of the optimal matching in the fooling instance will be α = Θ(n).

Let g_i(V) denote the message that player i sends when his neighbor set is V. Let G_i(m) = {V | g_i(V) = m} and G_i(V) = G_i(g_i(V)). The main challenge in the proof is constructing a fooling instance with a large optimal matching. The next definition provides us with the machinery required for proving this:

Definition 3.4
For a player i and neighbor set S_i, we say that S_i is α-unsafe for a vertex k ∉ S_i if |∪_{U ∈ G_i(S_i), k ∉ U} U| ≥ α. Otherwise, we say that S_i is α-safe for k. For a w-random graph we say that player i is α-adaptable if the neighbor set S_i is α-unsafe for a_i.

We will first formalize the discussion above by constructing a fooling instance given that there are |P| α-adaptable players. Later, we will show that there exists an instance in which |P| ≈ n − w players are α-adaptable.

Lemma 3.5
Consider a w-random graph in which a set P of the players are α-adaptable. Then there exists a fooling instance in which at least min(|P|, α) vertices can be matched, but the algorithm returns a solution with at most w matched vertices.

Proof:
The fooling instance is constructed as follows: for each player i ∈ P, let S'_i be some neighbor set such that a_i ∉ S'_i and S'_i ∈ G_i(S_i). Such an S'_i exists since by assumption player i is α-adaptable, which implies that S_i is α-unsafe for a_i. For any such choice of S'_i we have that the players in P are still not matched by the algorithm. The size of the matching the algorithm outputs in the fooling instance is therefore at most w. Notice that this argument holds for any set of S'_i's chosen as above.

To guarantee that at least min(|P|, α) vertices can be matched in the new instance, we have to be more careful in our choice of the S'_i's. We say that player i with neighbor set S_i is interested in vertex j ∈ V_2 if there exists a neighbor set S'_i where a_i ∉ S'_i, S'_i ∈ G_i(S_i), and j ∈ S'_i. Since for every i ∈ P we know that S_i is α-unsafe for a_i, there exist at least α vertices in which player i is interested. Thus by Hall's marriage theorem there exists a matching of at least min(|P|, α) vertices among the players in P where each player is matched to a vertex that he is interested in. This implies that there exists a choice of neighbor sets S'_i as above in which at least min(|P|, α) vertices can be matched in the fooling instance.

We now show that there exists a w-random graph in which the number of α-adaptable players is large:

Lemma 3.6
Let p = 2^l · n · (α choose w) / (n choose w). There exists a w-random graph in which at least (1 − p) · n − w of the players are α-adaptable.

Proof:
We first show that for each player i the number of neighbor sets that are α-safe for at least one vertex is small.

Claim 3.7
For each player i there are at most 2^l · n · (α choose w) possible neighbor sets (of size w) that are α-safe for at least one vertex.

Proof:
Consider a message m and a vertex k ∈ V_2. Observe that by definition, every set S ∈ G_i(m) which is α-safe for k satisfies S ⊆ ∪_{U ∈ G_i(m), k ∉ U} U and |∪_{U ∈ G_i(m), k ∉ U} U| ≤ α. This immediately implies that there are at most (α choose w) neighbor sets in G_i(m) that are α-safe for k. This is true for each of the n vertices in V_2, and hence there are at most n · (α choose w) neighbor sets in G_i(m) that are α-safe for at least one vertex. The proof is completed by observing that there are at most 2^l different messages.

We are now ready to show that there exists a w-random graph with the required number of α-adaptable players. Observe that in a w-random graph, for each player i the probability that S_i is α-safe for some vertex k ∉ S_i is at most 2^l · n · (α choose w) / (n choose w) = p. This is simply because by construction the neighbor set S_i of player i is chosen uniformly at random from all possible neighbor sets (even though the neighbor sets of any two players are correlated).

We now show that there exists a w-random graph in which there is a set P', |P'| ≥ (1 − p) · n, where for each i ∈ P' we have that S_i is α-unsafe for every k ∉ S_i. To see why this is the case, for each player i let n_i be a random variable that gets a value of 1 if S_i is α-unsafe for every k ∉ S_i and a value of 0 otherwise. Let n' = Σ_i n_i. By the first part, for every player i, E[n_i] ≥ 1 − p. Using linearity of expectation, E[n'] ≥ n · (1 − p). The claim follows since there must be at least one instance I where n' ≥ E[n'].

To conclude the proof, observe that in the instance I any player i ∈ P' for which a_i ∉ S_i is an α-adaptable player. Since there are at most w players for which a_i ∈ S_i, we have that at least |P'| − w = (1 − p) · n − w players are α-adaptable, as required.

Finally, we compute the values of our parameters and show that the theorem indeed holds.

Lemma 3.8
For α = n/2 and w = 2l + 2 log(n), the approximation ratio of a simultaneous algorithm in which each message length is at most l bits is no better than n/(4l + 4 log(n)).

Proof:
We first compute a lower bound on the number of α-adaptable players. By Lemma 3.6, the number of α-adaptable players is at least (1 − p) · n − w. By Claim 3.9 below we have that 1 − p ≥ 1 − (1/2)^l, and thus (1 − p) · n − w > α. Therefore the approximation ratio of the algorithm is no better than α/w = (n/2)/(2l + 2 log(n)) = n/(4l + 4 log(n)).

Claim 3.9
For α = n/2 and w = 2l + 2 log(n): p = 2^l · n · (α choose w) / (n choose w) ≤ (1/2)^l.

Proof:

    2^l · n · (α choose w) / (n choose w) = 2^l · n · (α! / (α − w)!) · ((n − w)! / n!) = 2^l · n · Π_{i=1}^{w} (α − w + i) / Π_{i=1}^{w} (n − w + i) ≤ 2^l · n · (α/n)^w.

By plugging in the values of α and w we get:

    2^l · n · (α/n)^w = 2^l · n · (1/2)^{2l + 2 log(n)} = n · (1/2)^{l + 2 log(n)} ≤ (1/2)^l.

We now consider a bipartite graph (V_1, V_2, E) with n vertices on each side. As usual, the left-side vertices are the players. We prove a lower bound for randomized algorithms in this setting. By Yao's principle, it is enough to prove a lower bound on the power of deterministic mechanisms on some distribution.

The hard distribution on which we will prove the lower bound is the following:

1. The size of the neighbor set S_i of each player i is exactly k + 1, where k = √n.
2. The neighbor sets S_i are chosen, in a correlated way, as follows: a set T of size exactly 2k is chosen uniformly at random, and each S_i is obtained by independently taking a random subset of size exactly k of T and another single random element from T^c (the complement of T).
3. The players do not know T, nor do they know which of their elements is the one from T^c.

We will prove the lower bound by reducing the matching problem to the following two-player problem.

3.2.1 A 2-Player Problem: The Hidden Item

In the hidden item problem there are two players (Alice and Bob) and n items. In this problem Alice holds a subset T of the items of size exactly 2k (k = √n). Bob holds a set S of size exactly k + 1. The guarantee is that |S ∩ T| = k. Bob sends a message of length l to Alice, who must output an item x (based only on the message that she got and T). Alice and Bob win if x ∈ S − T.

We will analyze the power of deterministic mechanisms on the following distribution:

1. T is selected uniformly at random among all subsets that consist of exactly 2k items.
2.
S is selected in a correlated way, by taking a random subset of size k from T plus a random extra element from T^c (Bob does not know which of his elements is which).

Lemma 3.10
If the inputs are drawn from the above distribution, then in any deterministic algorithm the probability that Alice and Bob win is at most O(n^{−α}), for any α and l such that n^α · l = o(√n).

The lemma will be proved in Subsection 3.2.2. We first show why the lemma implies Theorem 3.2.
Lemma 3.11
Let α < 1/2 − ε. If there exists a deterministic algorithm for the hard distribution of the matching problem that provides an approximation ratio of n^α, where each player sends a message of length l, then there exists an algorithm for the hard distribution of the hidden item problem where Bob sends a message of length l and the probability of success is Ω(n^{−α}).

Proof:
Assume a deterministic algorithm for the hard matching distribution achieving an approximation ratio better than n^α. Observe that the optimal social welfare is at least n/2 with very high probability. Thus an expected social welfare of at least n^{1−α}/2 is required for achieving this approximation ratio. Clearly at most 2k = 2√n < n^{1/2+ε} < n^{1−α}/20 of this expected social welfare comes from items in T (for big enough n, since α < 1/2 − ε). Thus the expected social welfare obtained just from items outside T is at least n^{1−α}/4.

This implies that there exists some player, without loss of generality player 1, whose expected value, not counting any item in T, is at least n^{−α}/4. We will use this algorithm to construct the two-player protocol, with Bob simulating player 1 and Alice simulating all the other players combined. When Alice and Bob get their inputs S and T for the hidden item problem, Alice uses T to choose S_2, ..., S_n at random so as to fit the distribution of the n-player problem, and Bob sets S_1 = S. Bob sends to Alice the message player 1 sends in the n-player algorithm for matching. Alice first simulates the messages of all the other players and then calculates the outcome (a_1, ..., a_n) of the n-player matching. Notice that whenever player 1 in the n-player protocol gets utility 1 from an item outside of T, we have that x = a_1 ∈ S − T. Thus Alice and Bob win with probability at least n^{−α}/4 = Ω(n^{−α}).

Theorem 3.2 now follows: by Lemma 3.10 (applied with a slightly larger exponent α + ε/2), for l ≤ n^{1/2−α−ε} and α ≤ 1/2 − ε the probability of success in the hidden item problem is O(n^{−α−ε/2}) = o(n^{−α}). Together with Lemma 3.11, this implies that there cannot be an algorithm for matching achieving an approximation ratio better than n^α using l ≤ n^{1/2−α−ε} bits, for α ≤ 1/2 − ε.

Assume that a protocol with a winning probability better than O(n^{−α}) exists. We will use random self-reducibility to get a randomized (public coin) protocol that works for all pairs of sets S, T with |S| = k + 1, |T| = 2k, |S − T| = 1. This is obtained by the two players jointly choosing a random permutation of the n items and running the protocol on their permuted items.
Notice that this randomized reduction maps any original input to exactly the distribution on which we assumed the original protocol works well, and thus the winning probability, for any fixed input of the specified form, is now Ω(n^{−α}). We run this protocol O(n^α) times in parallel (with independent random choices) to get a situation where Bob sends O(n^α · l) bits and Alice outputs a set of size O(n^α) that with high probability contains the element of S − T. We will use the known hardness of the two-player disjointness problem in communication complexity: Theorem 3.12 (Razborov)
Assume that Alice holds a subset S and Bob holds a subset R of a universe of size m, where |S| = m/4 and |R| = m/4. Distinguishing between the case that S ∩ R = ∅ and the case that |S ∩ R| = 1 requires Ω(m) randomized (multiple round, constant error) communication. Corollary 3.13
By a simple padding argument, for m = 4k and a setting where Alice holds a set S of size k + 1 from a universe of size n and Bob holds a set R of size n − 2k from the same universe, distinguishing between S ∩ R = ∅ and |S ∩ R| = 1 requires Ω(m) randomized communication.

We now show how to use an algorithm for the hidden item problem to solve the hard problem described in the corollary: let T = R^c and use the protocol constructed above, in which Alice outputs a set of size O(n^α) that with high probability contains the element of S − T = S ∩ R; she now sends this whole list back to Bob, who reports which of these elements is in S. We have thus obtained a 2-round protocol (Bob → Alice → Bob) that uses O(n^α · l) bits of communication and finds the element of S ∩ R with high probability, if such an element exists. Otherwise, no such element is found, so we have distinguished the two possibilities. To reach a contradiction, our protocol has to use fewer than Ω(m) bits; thus it must hold that n^α · l = o(m) = o(k).

We provide two algorithms that guarantee significantly better approximation ratios using a small number of rounds. We first show that O(log n/δ²) rounds suffice to get a (1 + δ)-approximation. In Subsection 4.2 we present an algorithm that provides an approximation ratio of O(n^{1/(k+1)}) in k rounds. This shows that even a constant number of rounds suffices to get much better approximation ratios than what can be achieved by simultaneous algorithms.

4.1 (1 + δ)-Approximation for Bipartite Matching in O(log n/δ²) Rounds
The algorithm is based on an auction in which each player competes at every point for one item that he demands the most at the current prices. Therefore, it will be easier to imagine the players as having valuations. Specifically, each player i is a unit demand bidder with v_i(j) = 1 if j ∈ S_i and v_i(j) = 0 otherwise. The Algorithm
1. For every item j let p_j = 0.
2. Let N_1 be the set of all players.
3. In every round r = 1, ..., R, where R = O(log n/δ²):
(a) For each player i ∈ N_r, let the demand of i be D_i = argmin_{j ∈ S_i : p_j < 1} p_j. This is the subset of S_i for which the price of each item is minimal and smaller than 1.
(b) Each player i ∈ N_r selects uniformly at random an item j_i ∈ D_i and reports its index.
(c) Go over the players in N_r in a fixed arbitrary order. If item j_i was not yet allocated in this round, player i receives it and the price p_{j_i} is increased by δ. In this case we say that player i is committed to item j_i. A player i′ that was committed to j_i in a previous round (if such exists) now becomes uncommitted.
(d) Let N_{r+1} be the set of uncommitted players at the end of round r.

Our algorithm is very similar to the classical auction algorithms except for two seemingly small changes. However, quite surprisingly, these changes allow us to substantially reduce the communication cost. The first change is to ask all the players to report an item from their demand sets simultaneously (instead of sequentially). This change alone is not enough, as in the worst case many players might report the same item and hence the number of rounds might still be Ω(n/δ). Hence we ask each player to report a random item of his demand set instead. Theorem 4.1
After O(log n/δ²) rounds the algorithm above provides an approximation ratio of (1 + δ). Proof:
Fix some optimal solution (o_1, ..., o_n) (every player receives at most one item). Let N′, |N′| = n′, be the set of players that receive an item in the optimal solution. Definition 4.2
A player i is called satisfied if he is either allocated an item or D_i = ∅.

Let END be the random variable that denotes the number of rounds until the first time that (1 − δ)n′ players in N′ are satisfied. We will prove the following lemma: Lemma 4.3 E[END] = O(log n/δ²). Proof:
The heart of the proof is the definition of two budgets: one for demand-shrinking steps of players in N′ and one for price increments. We show that in expectation after at most O(log n/δ²) rounds at least one of these budgets is exhausted, and hence the number of unsatisfied players in N′ is at most δn′. Consider the demand set D_i of some player i at some round. Observe that all items in D_i have the same price, which we denote p_{D_i}. For a price p, let D_i^p = S_i ∩ {j | p_j = p}. We will use the following claim: Claim 4.4
Consider some round r and suppose that at least t players are unsatisfied at the beginning of that round. Then, either the expected increase in Σ_j p_j in the round is at least t · δ/4, or for at least half of the unsatisfied players D_i^{p_{D_i}} shrinks by at least a factor of 2. Proof:
Consider some player i that is not satisfied. When it is i's turn to be considered in Step 3c, either at least half the items in D_i were taken by previous players in the order, or not. If at least half the items in D_i were taken by previous players, then D_i^{p_{D_i}} has shrunk by a factor of at least 2. Otherwise, since player i selects j_i at random from D_i, with probability at least 1/2 the item j_i was not taken by any previous player; in this case the price p_{j_i} is increased by δ, so the expected price increase due to i is at least δ/2. This implies that either for at least t/2 of the unsatisfied players D_i^{p_{D_i}} has shrunk by at least a factor of 2, or the expected increase in Σ_j p_j is at least (t/2) · (δ/2) = t · δ/4.

Now, notice that the price of each item can be increased at most 1/δ times (the price increases in increments of δ, and no player demands an item whose price has reached 1). Since an item that was allocated stays allocated and at most n′ items can be allocated, the maximal number of increments that the algorithm can make is n′/δ. In addition, there are n items and the price of an item can only increase, so each D_i^p can shrink by a factor of 2 at most log n times; as previously argued, p can take only 1/δ different values. The total number of shrinkage steps with respect to players in N′ is therefore at most n′ · log n/δ. To complete the proof, recall that at every round prior to END at least δn′ players in N′ are unsatisfied. By the claim, in each round either the expected number of increments is at least δn′/4 or the expected number of shrinkage steps with respect to players in N′ is at least δn′/2. In either case, after O(log n/δ²) rounds we expect that there are no more increments or shrinkage steps with respect to players in N′ left to make. Lemma 4.5
If at least (1 − δ)n′ of the players in N′ are satisfied, then the welfare of the algorithm is at least (1 − 2δ)n′, i.e., the approximation ratio is at most 1/(1 − 2δ) = 1 + O(δ). Proof:
We use a variant of the first welfare theorem to prove the lemma. Consider a player i that received an item j_i. The player receives an item that maximizes his profit (up to δ), and thus the profit from j_i is at least the profit from the item o_i he gets in the optimal solution (up to δ; we allow o_i = ∅): v_i(j_i) − p_{j_i} ≥ v_i(o_i) − p_{o_i} − δ. For each satisfied player i that did not receive any item we have 0 ≥ v_i(o_i) − p_{o_i}. Denote by N_s the set of satisfied players. Summing over all satisfied players we get:

Σ_{i ∈ N_s} (v_i(j_i) − p_{j_i}) ≥ Σ_{i ∈ N_s allocated} (v_i(o_i) − p_{o_i} − δ) + Σ_{i ∈ N_s unallocated} (v_i(o_i) − p_{o_i})
≥ Σ_{i ∈ N_s} (v_i(o_i) − p_{o_i}) − δn′

ALG − Σ_j p_j ≥ OPT − δn′ − Σ_{i ∈ N_s} p_{o_i} − n′δ

ALG ≥ OPT − 2n′δ = (1 − 2δ)n′,

where in the third transition we used the facts that items that are unallocated by the algorithm have a price of 0 (so that Σ_{i ∈ N_s} p_{o_i} ≤ Σ_j p_j) and that |N′ ∩ N_s| ≥ (1 − δ)|N′| (so that Σ_{i ∈ N_s} v_i(o_i) ≥ OPT − δn′).

It is worth noting a different version of the auction algorithm, which was discussed in [8]. In that version, at every round each player reports his entire demand set (simultaneously with the other players); then a minimal set of over-demanded items is computed and only their prices are increased. While the number of rounds of that algorithm might be small, the communication cost of each round can be linear in n.

4.2 k-Round Algorithm for Matching

Fix some optimal solution (o_1, ..., o_n) (every player receives at most one item). Let N′ be the set of players that receive a nonempty bundle in the optimal solution. The following algorithm achieves an approximation ratio of O(n^{1/(k+1)}) in k rounds. The Algorithm
1. Let N_1 be the set of all players and U_1 the set of all items.
2. In every round r = 1, ..., k:
(a) Each player i ∈ N_r selects uniformly at random an item j_i ∈ U_r that he demands.
(b) Go over the players in N_r in a fixed arbitrary order. Player i receives j_i if this item was not allocated yet.
(c) Let N_{r+1} ⊆ N_r be the set of players that were not allocated items at round r or before.
(d) Let U_{r+1} ⊆ U_r be the set of items that were not allocated at round r or before. Theorem 4.6
For every k ≤ log n, the approximation ratio of the algorithm is O(n^{1/(k+1)}). In particular, when k = 1 the approximation ratio is O(√n), and when k = O(log n) the approximation ratio is O(1). Proof:
Consider a run of the algorithm. Let D_{r,i} ⊆ U_r be the set of items that player i demands and that are still available immediately before round r starts. Let X_{r,i} be the set of items that were allocated to other players before i's turn in step (2b) of round r. Player i is said to be easy to satisfy if in some round r we have |D_{r,i} − X_{r,i}| ≥ |D_{r,i}|/n^{1/(k+1)}. Let S be the event that at least half of the players in N′ are easy to satisfy. We will show that E[ALG | S] = Ω(OPT/n^{1/(k+1)}) and that E[ALG | ¬S] = Ω(OPT/n^{1/(k+1)}), where ALG is the random variable that denotes the value of the solution that the algorithm outputs. Together this implies that E[ALG] = Ω(OPT/n^{1/(k+1)}). Each one of the next two lemmas handles one of these cases. Lemma 4.7 E[ALG | S] = Ω(OPT/n^{1/(k+1)}). Proof:
Let C_{i,r} be the indicator random variable for the event that player i is allocated an item at round r. Observe that if player i is easy to satisfy then for some round r we have E[C_{i,r}] ≥ n^{−1/(k+1)}: player i selects j_i uniformly at random from D_{r,i}, and at least an n^{−1/(k+1)} fraction of D_{r,i} is still untaken at his turn. Let P denote the set of easy-to-satisfy players. The expected number of easy-to-satisfy players that are allocated an item is at least E[Σ_{i ∈ P} C_{i,r}] = Σ_{i ∈ P} E[C_{i,r}] ≥ |P| · n^{−1/(k+1)} ≥ (n′/2) · n^{−1/(k+1)}. The required approximation ratio follows since the value of the optimal solution is n′. Lemma 4.8 E[ALG | ¬S] = Ω(OPT/n^{1/(k+1)}). Proof:
Consider a player i that is not easy to satisfy. Observe that for every such player i and round r, if i ∈ N_r ∩ N_{r+1} then |D_{r,i}| ≥ n^{1/(k+1)} · |D_{r+1,i}|. This is true since, i not being easy to satisfy, all but at most an n^{−1/(k+1)} fraction of the available items that he demands are taken by other players during round r, so the set of available items that he demands shrinks by at least this factor. Therefore, for every player i that was not allocated anything by the end of round k, we have |D_{k+1,i}| ≤ |D_{1,i}|/n^{k/(k+1)} ≤ n^{1/(k+1)}. If there exists such a player i with |D_{k+1,i}| ≥ 1, then |D_{1,i}| ≥ n^{k/(k+1)}. This implies that at least n^{k/(k+1)} − n^{1/(k+1)} of the items were allocated, and hence the approximation ratio is O(n^{1/(k+1)}) (there are n items, so the value of the optimal solution is at most n). In any other case, for every such player i that was not allocated anything we have D_{k+1,i} = ∅; in particular, the item o_i that he receives in the optimal solution was allocated. Since there are at least n′/2 such players, this implies that at least n′/2 items were allocated, and proves the claimed approximation bound in this case as well.

5 A Lower Bound for Subadditive Combinatorial Auctions
We now move to discuss combinatorial auctions with subadditive bidders. In particular, in this section we prove our most technical result:
Theorem 5.1
No randomized simultaneous protocol for combinatorial auctions with subadditive bidders in which each bidder sends sub-exponentially many bits can approximate the social welfare to within a factor of m^{1/3−ǫ}, for every constant ǫ > 0.

We will actually prove the lower bound using only XOS valuations, a subclass of subadditive valuations. We present a distribution over the inputs and show that any deterministic algorithm in which each bidder sends sub-exponentially many bits cannot approximate the expected social welfare to within a factor of m^{1/3−ǫ} for this specific distribution (where the expectation is taken over the distribution). This implies, by Yao's principle, that there is no randomized algorithm that achieves an approximation ratio of m^{1/3−ǫ}, for every constant ǫ > 0. The distribution D is the following:

• n = k² players, m = k³ + k² items.
• Each player i gets a family F_i of size t = e^{k^{2ǫ}} of sets of k items. The valuation of player i is defined to be v_i(S) = max_{T ∈ F_i} |T ∩ S|. Observe that this is an XOS valuation in which each set T ∈ F_i defines a clause where all items in T have value 1 and the rest of the items have value 0.
• The families F_i are chosen, in a correlated way, as follows: first, a center C of size k² is chosen at random; then, for each player i, a petal P_i of size k² is chosen at random from the complement of C. Now, for each player i, the family F_i is chosen as follows: one set T_i of size k is chosen at random from P_i, and t − 1 more sets of size k are chosen at random from C ∪ P_i.
• The players know neither C nor which of their sets was chosen from P_i. We may assume without loss of generality that each player i knows the set C ∪ P_i.

Each player sends, deterministically, simultaneously with the others, at most l bits of communication, based only on his input F_i. A referee that sees all the messages chooses an allocation A_1, ..., A_n of the m items to the n players (based only on the messages), with the A_i's being disjoint sets.
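To make the sunflower structure of D concrete, the following is an illustrative Python sampler (all names are ours; the sizes follow the construction above, and a small t should be used for experiments, since the construction's t is exponential in k):

```python
import random

def sample_D(k, t):
    """Illustrative sampler for the distribution D above: a hidden center C of
    size k**2, a petal P_i of size k**2 per player, and for each player one
    special set T_i drawn from P_i hidden among t-1 sets drawn from C ∪ P_i."""
    items = range(k**3 + k**2)
    C = set(random.sample(items, k**2))
    outside = [j for j in items if j not in C]
    players = []
    for _ in range(k**2):                       # n = k^2 players
        P_i = set(random.sample(outside, k**2))
        T_i = set(random.sample(sorted(P_i), k))
        F_i = [T_i] + [set(random.sample(sorted(C | P_i), k))
                       for _ in range(t - 1)]
        random.shuffle(F_i)                     # the player cannot tell T_i apart
        players.append((P_i, T_i, F_i))
    return C, players

def xos_value(F_i, S):
    """The XOS valuation v_i(S) = max_{T in F_i} |T ∩ S|."""
    return max(len(T & S) for T in F_i)
```

Allocating T_i to each player, as in Lemma 5.2 below, already extracts welfare from the petals; the sampler only visualizes why a single player cannot tell C from P_i.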
We assume without loss of generality that all items are allocated. In order to prove that no deterministic algorithm can obtain a good approximation for instances drawn from D, we show that to get a good approximation one must identify T_i for almost all of the players. This would have been easy had each player been able to distinguish between the items in C and those in P_i, but this information is missing. We show that for the central planner to successfully identify even a single T_i, player i has to send exponentially many bits. Formally, this is done by reducing the two-player "set seeking" problem that we define below to the multi-player combinatorial auction problem. The main technical challenge is to prove the hardness of the set seeking problem. The next couple of lemmas together give us Theorem 5.1. The proof of the first lemma is easy, but the second lemma is the heart of the lower bound. Lemma 5.2
With very high probability over this distribution (D), there exists an allocation with social welfare Σ_i v_i(A_i) = Θ(k³). Proof:
Consider allocating each player i the set T_i (an item j that is in multiple T_i's is allocated to some single player i such that j ∈ T_i). The social welfare of this allocation is |∪_i T_i|. We show that with high probability the social welfare is Θ(k³). This follows since each T_i is of size k and is practically selected uniformly at random from a set of size k³. Thus, the probability that a given item is in T_i is 1/k². Since there are k² players, an item is in ∪_i T_i with constant probability. Next, since the items are chosen (essentially) independently, we can use Chernoff bounds to get that with high probability a constant fraction of all items is in ∪_i T_i. Lemma 5.3
Every deterministic protocol for the combinatorial auction problem with l < t^ǫ produces an allocation with Σ_i v_i(A_i) = k^{2+O(ǫ)}, in expectation (over the distribution). To prove Lemma 5.3 we first define a two-player "set seeking" problem and show its hardness (Subsection 5.1). Next, we reduce the two-player set seeking problem to our multi-player combinatorial auction problem (Subsection 5.2).
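As a concrete illustration of the set seeking problem analyzed next, the sketch below (our naming; items are assumed to be integers, and the universe of 2k² items is passed in explicitly) samples a correlated keeper/seeker instance and evaluates the seeker's objective max_{T∈F} |A ∩ T|/|A|:

```python
import random

def sample_D_prime(k, t, universe):
    """Illustrative sampler for the set-seeking distribution: the seeker's set
    P of size k**2, and the keeper's family F hiding one special set T_P ⊆ P
    among t-1 sets of size k drawn from the whole universe."""
    P = set(random.sample(sorted(universe), k**2))
    T_P = set(random.sample(sorted(P), k))
    F = [T_P] + [set(random.sample(sorted(universe), k)) for _ in range(t - 1)]
    random.shuffle(F)
    return P, T_P, F

def seeking_value(F, A):
    """The objective the seeker maximizes: max_{T in F} |A ∩ T| / |A|."""
    return max(len(T & A) for T in F) / len(A)
```

With full information the seeker could output A = T_P and obtain value 1; the hardness result below says that with a short advice message his expected value is far smaller.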
The "Set Seeking" problem involves two players and x = k² + k² = 2k² items. One of the players plays the role of the keeper and gets as input a family F of t = e^{k^{2ǫ}} sets, all of size k. The other player plays the role of the seeker and gets a set P of size k². In this problem, first the keeper sends a message of at most l bits (advice). Next, based on this message, the seeker outputs some set A ⊆ P with |A| ≥ k. The goal is to maximize max_{T ∈ F} |A ∩ T|/|A|.

We will analyze the performance of deterministic algorithms on a specific distribution (D′) for this problem, which we now define. The two inputs F and P are chosen in correlation, as follows:
1. A set P of size k² is chosen uniformly at random from all items.
2. The set F is constructed by choosing uniformly at random a special set T_P of size k from P, and t − 1 additional sets of size k from all items.

Lemma 5.4 If l ≤ t^ǫ then no protocol for the set seeking problem achieves an expected approximation ratio better than k^{1−ǫ} on D′. Proof:
Fix a message m and let A_m(P) denote the set A that the seeker returns when his input is P and the keeper sends the message m. Definition 5.5
Fix a message m and a set P, |P| = k². A set S is (m, P)-compatible if |A_m(P) ∩ S| ≥ |A_m(P)|/k^{1−ǫ}. Claim 5.6
Fix a message m and a set P, |P| = k². The probability that a set S of size k chosen uniformly at random from P is (m, P)-compatible is at most 2e^{−k^ǫ}. Proof:
We fix a set A_m(P) = A ⊆ P and compute the probability that the intersection of this set with a set S of size k, chosen uniformly at random out of the k² items of P, is at least |A|/k^{1−ǫ}. The probability of each element of P to be in S is exactly k/k² = 1/k, so we expect that |A ∩ S| ≈ |A|/k. We now use Chernoff bounds to make this precise.

Consider constructing the following random set T: k items are selected, each chosen uniformly at random among the k² items of P. T is similar to the way S is constructed, except that it possibly contains fewer than k distinct items, as there is some positive chance that an item of P is selected twice. We conservatively assume that every item that was selected twice is in A. Thus, if we bound the probability that |A ∩ T| is too large, we also bound the probability that |A ∩ S| is too large.

We first bound the probability that more than k^ǫ items are selected at least twice for T. Since there are at most k items in T, when each item is selected, the probability of choosing an already-selected item is at most (k − 1)/k² ≤ 1/k. By Chernoff bounds, the probability that more than k^ǫ items are selected twice for T is at most e^{−k^ǫ}.

To bound the expected intersection with A, consider k independent variables, where each variable y_i is true with probability |A|/k² and false with probability 1 − |A|/k². Observe that Y = Σ_i y_i is distributed exactly as the number of selections that land in A. The expectation of Y is |A|/k. Now, by Chernoff bounds we get that Pr[Y > k^ǫ · |A|/k] ≤ (e/k^ǫ)^{k^ǫ · |A|/k} ≤ e^{−k^ǫ}. The last transition holds by the assumption that |A| ≥ k.

Combining the two bounds, with probability at most 2e^{−k^ǫ} we have that |A ∩ T| ≥ |A|/k^{1−ǫ}; by our discussion above, with at most the same probability we have |A ∩ S| ≥ |A|/k^{1−ǫ}, i.e., S is (m, P)-compatible. Definition 5.7
Let F be a family of sets of size k, |F| = t = e^{k^{2ǫ}}. We say that a message m is F-good if Pr[|A_m(P) ∩ T| ≥ |A_m(P)|/k^{1−ǫ}] ≥ 4e^{−k^ǫ}, where T is chosen uniformly at random from F, and P, |P| = k², contains T and k² − k items chosen uniformly at random from the rest of the items. Claim 5.8
For every message m, the probability that m is F-good is at most p′ = e^{−(2/3)e^{k^{2ǫ}−k^ǫ}}, where the sets in F are chosen uniformly at random. Proof:
Fix a message m and consider F where the sets in F are chosen uniformly at random. We first compute the probability that for a single T ∈ F we have |A_m(P) ∩ T| ≥ |A_m(P)|/k^{1−ǫ}. Observe that every set T ∈ F in this setting can be thought of as chosen uniformly at random from a fixed set P. Thus this probability is the same as the probability that T is (m, P)-compatible, which is at most 2e^{−k^ǫ} by Claim 5.6.

Now we compute the probability that m is F-good, that is, the probability that there exist at least 4e^{−k^ǫ} · |F| = 4e^{k^{2ǫ}−k^ǫ} sets T ∈ F such that |A_m(P) ∩ T| ≥ |A_m(P)|/k^{1−ǫ}. The expected number of such sets is at most 2e^{−k^ǫ} · |F| = 2e^{k^{2ǫ}−k^ǫ}. By the Chernoff bounds (µ = 2e^{k^{2ǫ}−k^ǫ}, δ = 1), this probability is at most p′ = e^{−µ/3}.

We can now finish the proof. Choose F at random. For every m, the probability that m is F-good is at most p′. The message length is l, and the total number of messages is therefore at most 2^l. Thus, by the union bound, the probability that there exists some message m which is F-good is at most 2^l · p′ ≤ 2^{t^ǫ} · p′ = 2^{e^{ǫk^{2ǫ}}} · p′ < e^{−e^{k^{2ǫ}−k^ǫ}/3}, where the transition before the last uses the fact that ǫ < 1. Hence, with probability at least 1 − e^{−e^{k^{2ǫ}−k^ǫ}/3}, no message m is F-good for the randomly chosen F. This in turn implies that for such a family F and for every message m, the probability, for P and T chosen as in Definition 5.7, that T is (m, P)-compatible is at most 4e^{−k^ǫ}. Thus, with probability at least 1 − e^{−e^{k^{2ǫ}−k^ǫ}/3} − 4e^{−k^ǫ} we have |A_m(P) ∩ T| ≤ |A_m(P)|/k^{1−ǫ}, so the attained value max_{T ∈ F} |A ∩ T|/|A| is at most k^{ǫ−1}. Even if with the remaining small probability the attained value is 1, the expected value is O(k^{ǫ−1}); since the optimum is 1 (take A = T_P), the expected approximation ratio is no better than k^{1−ǫ}.

5.2 The Reduction (Set Seeking → Combinatorial Auctions with XOS Bidders)
Given the hardness of the set seeking problem, we can derive our result for combinatorial auctions using the following reduction:
Lemma 5.9
Any protocol for combinatorial auctions on distribution D that achieves an approximation ratio better than m^{1/3−ǫ}, where the message length of each player is l, can be converted into a protocol for the two-player set seeking problem on distribution D′ that achieves an approximation ratio of k^{1−ǫ} with the same message length l. Proof:
In the proof we fix an algorithm for the multi-player combinatorial auction problem and analyze its properties.
Definition 5.10
Fix an algorithm for the XOS problem and consider the distribution D. We say that player i is good if E[|A_i ∩ T_i|] ≥ max{|A_i|/k^{1−ǫ}, k^ǫ}.

To prove the lemma we first show that if none of the players is good, then the algorithm's approximation ratio is no better than m^{1/3−ǫ}. Otherwise, there exists at least one good player; in this case we show how the algorithm for combinatorial auctions can be used to get a good approximation ratio for the set seeking problem. Claim 5.11
If none of the players is good, then the expected approximation ratio of the algorithm is at least k^{1−2ǫ} ≥ m^{1/3−ǫ}. Proof:
To give an upper bound on the expected social welfare, we assume that the k² items in the center are always allocated to players that demand them. We now compute an upper bound on the contribution of the remaining k³ items to the expected social welfare. Observe that since none of the players is good, each player contributes at most |A_i|/k^{1−ǫ} + k^ǫ to the expected social welfare from these k³ items. Hence the expected social welfare achieved by the algorithm is at most Σ_i (|A_i|/k^{1−ǫ} + k^ǫ) + |C| ≤ k³/k^{1−ǫ} + k² · k^ǫ + k² ≤ 3k^{2+ǫ}. Since the optimal welfare is Θ(k³) by Lemma 5.2, the expected approximation ratio of the algorithm is at least k^{1−2ǫ} ≥ m^{1/3−ǫ}. Claim 5.12
If there exists a good player, then there exists an algorithm for the two-player set seeking problem that guarantees an approximation ratio of k^{1−ǫ} with the same message length. Proof:
Let player i be the good player. Recall that D is the distribution for the multi-player combinatorial auction problem and that D′ is the distribution defined for the set seeking problem. We denote by E_D[·] and E_{D′}[·] expectations taken over the distributions D and D′, respectively. We show that there exists an algorithm for the set seeking problem achieving an expected approximation ratio of k^{1−ǫ} on D′.

Let the keeper take the role of player i in the multi-player algorithm, and let the seeker play the roles of the rest of the players and of the referee. The keeper computes player i's message and sends it to the seeker. This is possible as the input of the keeper is identical to the input of player i in the multi-player problem. Next, the seeker simulates the messages of the remaining players and runs the algorithm internally. This simulation is possible by the assumption that in the multi-player algorithm all messages are sent simultaneously. The number of items in the combinatorial auction will be k³ + k², where the x = 2k² items of the set seeking problem correspond to some set X of size x of items in the combinatorial auction. We first show that, given that the inputs of the seeker and the keeper are drawn in a correlated way from D′, they have enough information to simulate the correlated distribution D.

The input of the keeper is translated in a straightforward way, where each set in F defines a clause in the XOS valuation; all items in these clauses are subsets of X. The seeker constructs the valuations of the other n − 1 players as follows: the k² items in X \ P form the center. Next, the seeker chooses uniformly at random, for each other player j, a petal P_j of size k² out of the k³ items not in the center, a set T_j ⊆ P_j of size k, and t − 1 additional sets of size k from C ∪ P_j. Observe that the distribution of valuations constructed this way is identical to D. The inherent reason for this is that for player i the distribution of P_i, T_i and F_i is identical to the distribution of P, T and F in D′: in both cases P and the t − 1 other sets of F (respectively, P_i and F_i) are chosen uniformly at random from a set of size k² + k², and T (respectively, T_i) is chosen uniformly at random from P (respectively, P_i). In other words, the distribution D′ on F, P and T is identical to the distribution D projected on F_i, P_i and T_i.

We now observe that, since i is a good player, we have E_D[|A_i ∩ T_i|] ≥ max{|A_i|/k^{1−ǫ}, k^ǫ}. We show that this implies an algorithm achieving an expected approximation ratio of k^{1−ǫ} for the two-player set seeking problem. The algorithm works as follows: we first perform the reduction above, and therefore the distribution we are analyzing is D′. Now, if player i was assigned a bundle A_i of size at least k, the algorithm returns A = A_i. Otherwise, the algorithm returns a bundle A that contains A_i and k − |A_i| additional arbitrary items. Thus, we have E_{D′}[|A ∩ T|] ≥ |A|/k^{1−ǫ}: we made sure that |A| ≥ k, and when A was padded to size exactly k the guarantee E[|A_i ∩ T_i|] ≥ k^ǫ matches |A|/k^{1−ǫ} = k^ǫ.

We claim that the expected approximation ratio of the algorithm on D′ is k^{1−ǫ}. As the distribution D′ on F, P and T is identical to the distribution D projected on F_i, P_i and T_i, we have E_{D′}[|A ∩ T|] ≥ |A|/k^{1−ǫ}. Thus, E_{D′}[|A ∩ T|/|A|] ≥ k^{ǫ−1}. This in turn implies that, in expectation, max_{S ∈ F} |A ∩ S|/|A| ≥ k^{ǫ−1}, and since the optimal solution has a value of 1, the expected approximation ratio is k^{1−ǫ}, contradicting Lemma 5.4.

From the last two claims we get that either the algorithm for the combinatorial auction problem does not guarantee a good approximation ratio, or we have constructed an efficient protocol for the set seeking problem.
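The last step of the reduction — returning player i's bundle, padded up to size k when it is too small — can be sketched as follows (illustrative only; names are ours, and we assume A_i ⊆ P as in the reduction):

```python
def seeker_output(A_i, P, k):
    """Final step of the reduction above: if player i's bundle A_i has size at
    least k, return it as-is; otherwise pad it with arbitrary items of the
    seeker's set P so that the output has size exactly k."""
    A = set(A_i)
    if len(A) >= k:
        return A
    padding = [j for j in sorted(P) if j not in A]
    return A | set(padding[:k - len(A)])
```

The padding is what lets the guarantee E[|A_i ∩ T_i|] ≥ k^ǫ of a good player translate into the per-size bound E[|A ∩ T|] ≥ |A|/k^{1−ǫ} used in the proof.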
We design algorithms for a restricted special case of "t-restricted" instances (see the definition below). We will show, however, that the existence of a simultaneous algorithm for t-restricted instances implies a simultaneous approximation algorithm for subadditive bidders with almost the same approximation ratio. Definition 6.1
Consider an XOS valuation v(S) = max_r a_r(S), where each a_r is an additive valuation. v is called binary if for every a_r and item j we have a_r({j}) ∈ {0, µ}, for some µ.

Definition 6.2 An instance of combinatorial auctions with binary XOS valuations (all with the same µ; for simplicity and without loss of generality µ = 1) is called t-restricted if there exists an allocation (A_1, ..., A_n) such that all the following conditions hold:
1. For every i, v_i(A_i) = |A_i|.
2. For every i, either |A_i| = t or |A_i| = 0.
3. t is a power of 2.
4. Σ_i v_i(A_i) ≥ OPT/(4 log m).

Proposition 6.3
If there exists a simultaneous algorithm for t-restricted instances that provides an approximation ratio of α where each bidder sends a message of length l, then there exists a simultaneous algorithm for subadditive bidders that provides an approximation ratio of O(α · log³ m) where each bidder sends a message of length O(l · log² m). Proof:
The proposition follows from the following three lemmas.
Lemma 6.4
If there exists a simultaneous algorithm for XOS bidders that provides an approximation ratio of α where each bidder sends a message of length l, then there exists a simultaneous algorithm for subadditive bidders that provides an approximation ratio of O(α · log m) where each bidder sends a message of length l. Proof:
For every subadditive valuation there is an XOS valuation that O(log m)-approximates it [9]; thus, if each player computes this XOS valuation and proceeds as in the algorithm for XOS valuations, we get an algorithm for subadditive valuations, losing only an O(log m) factor in the approximation ratio. Lemma 6.5
If there exists a simultaneous algorithm for binary XOS bidders (all with the same µ) that provides an approximation ratio of α where each bidder sends a message of length l, then there exists a simultaneous algorithm for XOS bidders that provides an approximation ratio of O(α · log m) where each bidder sends a message of length O(l · log m). Proof:
We will move from general XOS valuations to binary XOS valuations using the following notion of projections:
Definition 6.6 A µ-projection of an additive valuation a′ is the following additive valuation a: a({j}) = µ if 2µ > a′({j}) ≥ µ, and a({j}) = 0 otherwise. A µ-projection of an XOS valuation v is the XOS valuation v^µ that consists of exactly all the µ-projections of the additive valuations (clauses) that define v.

Let v_max be max_i v_i(M) rounded down to the nearest power of 2. Let M = {v_max/m², 2v_max/m², ..., v_max/2, v_max}. The next claim shows that there exists some µ ∈ M such that the value of the optimal solution with respect to the µ-projections v_i^µ is only a logarithmic factor away from the value of the optimal solution with respect to the v_i's. Given the claim below and an algorithm A for binary XOS valuations, we can construct the following algorithm for general XOS valuations: each player i computes, for every µ ∈ {MAX_i/m², 2MAX_i/m², ..., MAX_i/2, MAX_i}, where MAX_i equals v_i(M) rounded down to the nearest power of 2, his µ-projection, and sends both µ and the message of length l that he would have sent in the algorithm A if his valuation were v_i^µ. The new algorithm now computes up to |M| different allocations by running A once for each µ ∈ M, and outputs the allocation with the best value. By the claim below, the approximation ratio is O(α · log m). The length of the message that each bidder sends is O(l · log m). (We say that a valuation v α-approximates a valuation u if for every S we have u(S) ≥ v(S) ≥ u(S)/α.) Claim 6.7
Let OPT be the value of the optimal solution with respect to the v_i's. For every µ ∈ M, let OPT_µ be the value of the optimal solution with respect to the µ-projections v_i^µ. There exists some µ ∈ M such that OPT_µ ≥ OPT/(8 log m). Proof:
Fix some optimal solution (O_1, ..., O_n). For each player i let a_i be the maximizing clause for the bundle O_i in v_i. Now, for every player i and every item j ∈ O_i, put item j into bin x, where x is a power of 2, if and only if 2x > a_i({j}) ≥ x. Let M_x be the set of items in bin x. We claim that there exists a bin µ, µ ≥ v_max/m², for which it holds that Σ_i Σ_{j ∈ O_i ∩ M_µ} a_i({j}) ≥ Σ_i v_i(O_i)/(4 log m). To see this, first let L be the set of "small" items that are in any of the bins x with x < v_max/m². It holds that Σ_i Σ_{j ∈ O_i ∩ L} a_i({j}) ≤ m · v_max/m² = v_max/m ≤ Σ_i v_i(O_i)/2. Thus, we have that Σ_i a_i(O_i ∩ ∪_{x ∈ M} M_x) ≥ Σ_i v_i(O_i)/2. Now observe that the number of bins x with x ≥ v_max/m² is bounded from above by 2 log m. Therefore, there exists a bin µ such that Σ_i Σ_{j ∈ O_i ∩ M_µ} a_i({j}) ≥ Σ_i v_i(O_i)/(4 log m). The proof is completed by observing that Σ_i v_i^µ(O_i ∩ M_µ) ≥ Σ_i Σ_{j ∈ O_i ∩ M_µ} a_i({j})/2, as the µ-projection cuts the value of each item by at most half. Lemma 6.8
If there exists a simultaneous algorithm for t -restricted instances that provides anapproximation ratio of α where each bidder sends a message of length l , then there exists a simul-taneous algorithm for binary XOS bidders (with the same µ ) that provides an approximation ratioof O ( α · log m ) where each bidder sends a message of length O ( l · log m ) . Proof:
We will show that for every instance of combinatorial auctions with binary XOS bidders there exists some t for which this instance is (essentially) t-restricted. Given algorithms A_t for t-restricted instances, we can construct an algorithm for binary XOS valuations as follows: for each t ∈ {1, 2, 4, ..., m} each bidder sends the same message as in A_t. Now compute the log m + 1 allocations, one for each value of t, and output the allocation with the highest value.

We now show the existence of one "good" t as above. Let (O_1, ..., O_n) be some optimal solution. Put the players into log m bins, where player i is in bin r if 2r > |O_i| ≥ r, for r ∈ {1, 2, 4, ..., m}. Let N_r be the set of players in bin r. Let t = argmax_r Σ_{i ∈ N_r} v_i(O_i). For each i let A_i be an arbitrary subset of O_i of size t if i ∈ N_t, and A_i = ∅ otherwise. We have that Σ_i v_i(A_i) ≥ Σ_{i ∈ N_t} v_i(O_i)/2 ≥ Σ_i v_i(O_i)/(4 log m).

In the next two subsections we design two algorithms for t-restricted instances. We will use the proposition to claim that these algorithms can be extended, with a small loss in the approximation factor, to the general case as well. (Observe that if player i does not report a valuation for some value of µ ∈ M, then the algorithm may assume that his valuation for this µ is 0 for every bundle.)

6.1 A Simultaneous Õ(√m)-Approximation

We show that simultaneous algorithms can achieve better approximation ratios than those that can be obtained by sketching the valuations. Specifically, we prove that:
Theorem 6.9 There is a deterministic simultaneous algorithm for combinatorial auctions with subadditive bidders where each player sends poly(m, n) bits that guarantees an approximation ratio of Õ(√m).

Given Proposition 6.3, we may focus only on designing algorithms for t-restricted instances. The algorithm for t-restricted instances is simple:

1. Each player i reports a maximal set of disjoint bundles S_i such that for every bundle S ∈ S_i: |S| = t/2 and v_i(S) = |S|.

2. For each i, let v′_i be the following XOS valuation: v′_i(S) = max_{T ∈ S_i} |T ∩ S|.

3. Output (T_1, ..., T_n), the best allocation with respect to the v′_i's.

Notice that the size of the message that each player sends is poly(m). Furthermore, for each bundle S and bidder i, v_i(S) ≥ v′_i(S). We will show that the best allocation with respect to the v′_i's provides a good approximation with respect to the original valuations v_i. I.e., Σ_i v_i(A_i) / Σ_i v′_i(T_i) = Õ(√m). The proof considers three different allocations and shows that each allocation provides a good approximation for a different regime of parameters.

• The best allocation (with respect to the v′_i's) in which each player receives at most one item. We show that this provides an O(t) approximation with respect to the v_i's.

• Each player i is allocated the portion of A_i covered by the bundle T ∈ S_i that maximizes |T ∩ A_i|. We show that this allocation guarantees an approximation ratio of Õ(l), for some l related to the sizes of the S_i's.

• The third allocation is constructed randomly (even though our algorithm is deterministic): each player i chooses at random a bundle from S_i, and each item j is allocated to some player whose randomly selected bundle contains j, if such a player exists. Let n′ be the number of nonempty bundles in (A_1, ..., A_n). We show that the expected approximation of this allocation is Õ(n′/l), for the same l as above.

More formally:

Lemma 6.10
Let (A_1, ..., A_n) be the allocation that is guaranteed by the t-restrictedness of the instance. Let (T_1, ..., T_n) be the allocation that the algorithm outputs. Then Σ_i v_i(A_i) / Σ_i v_i(T_i) ≤ Õ(√m).

Proof: We first show that every player i reports a large fraction of A_i (though the items of A_i might be split among different reported bundles).

Claim 6.11
For each player i, let Φ_i = A_i ∩ (∪_{T ∈ S_i} T). We have that |Φ_i| ≥ t/2.

(As stated, this algorithm uses polynomial communication but may not run in polynomial time, since finding the optimal solution with explicitly given XOS valuations is NP-hard. If one requires polynomial communication and time, an approximate solution may be computed using any of the known constant-ratio approximation algorithms. The analysis remains essentially the same, with a constant-factor loss in the approximation ratio.)

Proof: Suppose that there exists some player i such that |Φ_i| < t/2, and let S = A_i − (∪_{T ∈ S_i} T). Since |A_i| = t, we have that |S| > t/2. But then the set S_i is not maximal, since any subset of the bundle S of size t/2 has value equal to its size and is disjoint from every bundle in S_i, contradicting the maximality of S_i.

Corollary 6.12 Σ_i v_i(Φ_i) ≥ Σ_i v_i(A_i)/2.

We now divide the players into log m bins so that player i is in bin r if 2r > |S_i| ≥ r, for r a power of 2. Let the set of players in bin r be N_r. Observe that there has to be some bin l with Σ_{i ∈ N_l} v_i(Φ_i) ≥ Σ_i v_i(Φ_i)/log m. Denote its size by n_l = |N_l| ≤ n′. For the rest of this proof we will only consider the players in N_l; this will result in a loss of at most log m in the approximation factor.

Claim 6.13 (Õ(t)-approximation) Σ_{i ∈ N_l} v′_i(T_i) ≥ Σ_i v_i(Φ_i) / (t · log m).

Proof:
We show that there exists an allocation (B_1, ..., B_n), |B_i| ≤ 1, with Σ_{i ∈ N_l} v′_i(B_i) ≥ Σ_{i ∈ N_l} v_i(Φ_i)/t. For each player i ∈ N_l, let B_i be the set that contains exactly one item j_i from Φ_i if Φ_i ≠ ∅, and B_i = ∅ otherwise. Notice that the j_i's are distinct, since for every i ≠ i′ we have that Φ_i ∩ Φ_{i′} = ∅. The claim follows since v′_i({j_i}) = 1, while v_i(Φ_i) ≤ |Φ_i| ≤ t.

Claim 6.14 (Õ(l)-approximation) Σ_{i ∈ N_l} v′_i(T_i) ≥ Σ_i v_i(Φ_i) / (2l · log m).

Proof:
For each player i ∈ N_l, let B_i ⊆ Φ_i be the bundle of maximal size such that v′_i(B_i) = |B_i|. Note that |B_i| ≥ |Φ_i|/(2l): by construction v′_i has at most 2l clauses, and by definition each item in Φ_i is contained in one of the |S_i| < 2l clauses of v′_i. Thus we have that Σ_{i ∈ N_l} v′_i(B_i) ≥ Σ_{i ∈ N_l} v_i(Φ_i)/(2l). The claim follows as we already observed that Σ_{i ∈ N_l} v_i(Φ_i) ≥ Σ_i v_i(Φ_i)/log m.

Claim 6.15 (Õ(n′/l)-approximation) Σ_{i ∈ N_l} v′_i(T_i) ≥ (l/(8n_l)) · Σ_i v_i(Φ_i)/log m.

Proof:
Consider the following experiment. Each player in N_l chooses uniformly at random a bundle to compete on among the bundles in the set S_i (recall that l ≤ |S_i| < 2l). Now allocate each item j uniformly at random to one of the players that are competing on bundles that contain item j.

There are n_l players in this experiment, so there are at most n_l potential competitors for each item. Each player has at least l bundles to choose from, so each competitor appears on a given item with probability at most 1/l. Hence, when player i is competing on a bundle, the expected competition on each of the items he competes on is at most 2n_l/l + 1. Therefore, the expected contribution of each item that player i is competing on is at least l/(4n_l). The claim follows by applying linearity of expectation on the (at least) t/2 items that player i is competing on. Therefore we have that Σ_{i ∈ N_l} v′_i(T_i) ≥ (l/(4n_l)) · (t/2) · n_l = l·t/8. Recall that Σ_i v_i(Φ_i)/log m ≤ Σ_{i ∈ N_l} v_i(Φ_i) ≤ t · n_l. Thus we have that l·t/8 ≥ (l/(8n_l)) · Σ_i v_i(Φ_i)/log m, as required.

By choosing the best of these three allocations we get the desired approximation ratio:

Claim 6.16
Suppose that we have three allocations B¹, B², and B³ such that: Σ_i v_i(A_i)/Σ_i v′_i(B¹_i) = O(t), Σ_i v_i(A_i)/Σ_i v′_i(B²_i) = Õ(l), and Σ_i v_i(A_i)/Σ_i v′_i(B³_i) = Õ(n′/l). Let B be the allocation with the highest welfare among the three. Then Σ_i v_i(A_i)/Σ_i v′_i(B_i) = Õ(√m).

Proof: By the first two claims, we get an approximation ratio of Õ(√m) whenever l ≤ √m or t ≤ √m. Hence, we now assume that l, t ≥ √m. Now observe that when t ≥ √m we have n′ ≤ √m, since there can be at most m/t ≤ √m players that receive non-empty (disjoint) bundles in any allocation where the size of each non-empty bundle is t. We therefore have that n′/l ≤ √m, and the lemma follows by the third claim.

6.2 A k-Round Algorithm

We now develop an algorithm that guarantees an approximation ratio of O(k · m^{1/(k+1)}) for combinatorial auctions with subadditive valuations in k rounds. In each of the rounds each player sends poly(m) bits. We provide an algorithm for t-restricted instances (see Section 6.1 for a definition). By Proposition 6.3 this implies an algorithm with almost the same approximation ratio for general subadditive valuations.

The Algorithm (for t-restricted instances)
1. Let N_1 = N, U_1 = M, and U_{1,i} = M for every player i.

2. In every round r = 1, ..., k:

(a) Each player i ∈ N_r reports a maximal set of disjoint bundles S_{r,i} such that for every bundle S ∈ S_{r,i}: S ⊆ U_{r,i}, |S| = t/(8k), and v_i(S) = |S|.

(b) Go over the players in N_r in an arbitrary order. For every player i for which there exists a bundle S ∈ S_{r,i} such that at least a 1/m^{1/(k+1)} fraction of its items were not allocated yet, allocate to player i the remaining unallocated items of S.

(c) Let N_{r+1} ⊆ N_r be the set of players that were not allocated items at round r or before.

(d) Let U_{r+1} ⊆ U_r be the set of items that were not allocated at round r or before.

(e) Let U_{r+1,i} = (∪_{S ∈ S_{r,i}} S) ∩ U_{r+1}.

Theorem 6.17
For every k ≤ log m, there exists an algorithm for t-restricted instances that provides an approximation ratio of O(k · m^{1/(k+1)}) in k rounds, where each player sends poly(m, n) bits. In particular, when k = O(log m) the approximation ratio is O(log m). As a corollary, there exists a k-round approximation algorithm for subadditive valuations that provides an approximation ratio of O(k · m^{1/(k+1)} · log m).

Proof: In the analysis we fix some optimal solution (O_1, ..., O_n). We break the proof of the theorem into two lemmas.

Lemma 6.18 At the end of the algorithm, either for every player i that did not receive any bundle we have that U_{k+1,i} = ∅, or the approximation ratio of the algorithm is O(m^{1/(k+1)}).

Proof:
Observe that for every player i and round r, if i ∈ N_r ∩ N_{r+1} then |∪_{S ∈ S_{r,i}} S| ≥ m^{1/(k+1)} · |∪_{S ∈ S_{r+1,i}} S|. This is true since if player i was not allocated any items at round r, then for every bundle S ∈ S_{r,i} that he reported, at least a (1 − 1/m^{1/(k+1)}) fraction of the items was allocated. Thus, m^{1/(k+1)} · |U_{r+1,i}| ≤ |∪_{S ∈ S_{r,i}} S|. Recall that by definition we have that (∪_{S ∈ S_{r+1,i}} S) ⊆ U_{r+1,i}. Therefore, for every player i that was not allocated anything by the end of round k, we have that |U_{k+1,i}| ≤ |∪_{S ∈ S_{1,i}} S|/m^{k/(k+1)} ≤ m^{1/(k+1)}. If there exists a player i such that |U_{k+1,i}| > 0, then |∪_{S ∈ S_{1,i}} S| ≥ m^{k/(k+1)}. This implies that at least m^{k/(k+1)} − m^{(k−1)/(k+1)} of the items were allocated, and hence the approximation ratio is O(m^{1/(k+1)}) (there are m items, so the value of the optimal solution is at most m). In any other case, for every player that was not allocated any bundle we have that U_{k+1,i} = ∅.

Lemma 6.19 Suppose that for every player i that did not receive any bundle we have that U_{k+1,i} = ∅. Then the approximation ratio of the algorithm is O(k · m^{1/(k+1)}).

Proof:
Suppose first that at least n′/2 of the players received items from the algorithm. Each such player was allocated at least t/(8k · m^{1/(k+1)}) items, each of value 1 to him. Thus, in this case the algorithm achieves an O(k · m^{1/(k+1)})-approximation (recall that the value of the optimal solution is at most n′ · t). Else, assume that at most n′/2 of the players received items, and consider the players i in N′ that did not receive any items (there are at least n′/2 such players). We will show that for every such player i, |O_i ∩ U_{k+1}| ≤ t/4. In other words, we will show that for every such player i, the algorithm allocates at least 3t/4 of the items that this player receives in the optimal solution. This will prove an approximation ratio of O(k · m^{1/(k+1)}).

Thus assume towards contradiction that there exists some i ∈ N′ that did not receive any items and for which |O_i ∩ U_{k+1}| ≥ t/4. In this case, by Claim 6.20 (below) we have that |O_i ∩ U_{k+1,i}| ≥ t/4 − k · t/(8k) = t/8. In particular this implies that |U_{k+1,i}| ≥ t/8. This is a contradiction to the assertion of Lemma 6.18 that U_{k+1,i} = ∅ for every player that was not allocated any bundle.

Claim 6.20 For every i ∈ N_r ∩ N′, |O_i ∩ U_r| − |O_i ∩ (∪_{S ∈ S_{r,i}} S)| ≤ r · t/(8k).

Proof:
We prove the claim by induction. For the base case r = 1, observe that since player i reports a maximal set of disjoint bundles of size t/(8k), at most t/(8k) items of his optimal bundle may not be reported by him; otherwise, those items could be bundled together and added to S_{1,i}, contradicting maximality. We now assume correctness for r and prove the claim for r + 1. We have that |O_i ∩ U_r| − |O_i ∩ (∪_{S ∈ S_{r,i}} S)| ≤ r · t/(8k). Denote the items allocated at round r by A^r. We have that |O_i ∩ (U_r \ A^r)| − |O_i ∩ ((∪_{S ∈ S_{r,i}} S) \ A^r)| ≤ r · t/(8k), since (∪_{S ∈ S_{r,i}} S) ⊆ U_r. By definition, this implies that |O_i ∩ U_{r+1}| − |O_i ∩ U_{r+1,i}| ≤ r · t/(8k). The proof is completed by observing that |O_i ∩ U_{r+1,i}| − |O_i ∩ (∪_{S ∈ S_{r+1,i}} S)| ≤ t/(8k): since in every round player i reports a maximal disjoint set of bundles of size t/(8k), at every round the player can leave out of his reported set at most t/(8k) items from the optimal solution (i.e., if the player leaves out more items, then he could have reported those items, contradicting maximality).

References

[1] Moshe Babaioff, Brendan Lucier, Noam Nisan, and Renato Paes Leme. On the efficiency of the Walrasian mechanism.
CoRR , abs/1311.0924, 2013.[2] Ashwinkumar Badanidiyuru, Shahar Dobzinski, Hu Fu, Robert Kleinberg, Noam Nisan, andTim Roughgarden. Sketching valuation functions. In
SODA , 2012.[3] Maria-Florina Balcan, Florin Constantin, Satoru Iwata, and Lei Wang. Learning valuationfunctions.
Journal of Machine Learning Research - Proceedings Track, 23:4.1–4.24, 2012. [4] Maria-Florina Balcan and Nicholas J. A. Harvey. Learning submodular functions. In
STOC ,2011.[5] Dimitri P Bertsekas. The auction algorithm: A distributed relaxation method for the assign-ment problem.
Annals of operations research , 14(1):105–123, 1988.[6] Kshipra Bhawalkar and Tim Roughgarden. Welfare guarantees for combinatorial auctions withitem bidding. In
SODA, 2011. [7] G. Christodoulou, A. Kovács, and M. Schapira. Bayesian combinatorial auctions.
Automata,Languages and Programming , pages 820–832, 2010.[8] Gabrielle Demange, David Gale, and Marilda Sotomayor. Multi-item auctions.
The Journalof Political Economy , pages 863–872, 1986.[9] Shahar Dobzinski. Two randomized mechanisms for combinatorial auctions. In
APPROX ,2007.[10] Shahar Dobzinski and Michael Schapira. An improved approximation algorithm for combina-torial auctions with submodular bidders. In
SODA, 2006. [11] Danny Dolev and Tomás Feder. Multiparty communication complexity. In
Foundations of Computer Science, 30th Annual Symposium on, pages 428–433. IEEE, 1989. [12] Uriel Feige. On maximizing welfare where the utility functions are subadditive. In
STOC, 2006. [13] Uriel Feige and Jan Vondrák. Approximation algorithms for allocation problems: Improving the factor of 1-1/e. In
FOCS , 2006.[14] Michal Feldman, Hu Fu, Nick Gravin, and Brendan Lucier. Simultaneous auctions are (almost)efficient. In
Proceedings of the 45th annual ACM Symposium on theory of computing , pages201–210. ACM, 2013.[15] Ashish Goel, Michael Kapralov, and Sanjeev Khanna. On the communication and streamingcomplexity of maximum bipartite matching. In
Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms , pages 468–485. SIAM, 2012.[16] Avinatan Hassidim, Haim Kaplan, Yishay Mansour, and Noam Nisan. Non-price equilibria inmarkets of discrete goods. In
ACM Conference on Electronic Commerce , pages 295–296, 2011.[17] F.A. Hayek. The use of knowledge in society.
American Economic Review, 35(4):519–530, 1945. [18] John E. Hopcroft and Richard M. Karp. An n^{5/2} algorithm for maximum matchings in bipartite graphs.
SIAM Journal on computing , 2(4):225–231, 1973.[19] Zengfeng Huang, Bozidar Radunovic, Milan Vojnovic, and Qin Zhang. Communication com-plexity of approximate maximum matching in distributed graph data.
Microsoft TechnicalReport, MSR-TR-2013-35 , 2013.[20] Leonid Hurwicz. The design of mechanisms for resource allocation.
American Economic Review, 63(2):1–30, 1973. [21] Michael Kapralov. Better bounds for matchings in the streaming model. arXiv preprint arXiv:1206.2269, 2012. [22] Richard M. Karp, Eli Upfal, and Avi Wigderson. Constructing a perfect matching is in random NC. In
Proceedings of the seventeenth annual ACM symposium on Theory of computing , pages22–32. ACM, 1985.[23] Richard M Karp, Umesh V Vazirani, and Vijay V Vazirani. An optimal algorithm for on-linebipartite matching. In
Proceedings of the twenty-second annual ACM symposium on Theoryof computing , pages 352–358. ACM, 1990.[24] Eyal Kushilevitz and Noam Nisan.
Communication complexity . Cambridge university press,2006.[25] O. Lange and F. Taylor.
On the Economic Theory of Socialism . University of Minnesota Press,1938.[26] B. Lehmann, D. Lehmann, and N. Nisan. Combinatorial auctions with decreasing marginalutilities.
Games and Economic Behavior, 55(2):270–296, 2006. [27] Renato Paes Leme, Vasilis Syrgkanis, and Éva Tardos. The curse of simultaneity. In
ITCS ,pages 60–67, 2012.[28] Zvi Lotker, Boaz Patt-Shamir, and Seth Pettie. Improved distributed approximate matching.In
Proceedings of the twentieth annual symposium on Parallelism in algorithms and architec-tures , pages 129–136. ACM, 2008.[29] Marcin Mucha and Piotr Sankowski. Maximum matchings via gaussian elimination. In
FOCS ,pages 248–255, 2004.[30] Ketan Mulmuley, Umesh V Vazirani, and Vijay V Vazirani. Matching is as easy as matrixinversion. In
Proceedings of the nineteenth annual ACM symposium on Theory of computing ,pages 345–354. ACM, 1987.[31] Noam Nisan and Ilya Segal. The communication requirements of efficient allocations andsupporting prices.
Journal of Economic Theory , 129(1):192–224, 2006.[32] Alvin E Roth and Marilda A Oliveira Sotomayor.
Two-sided matching: A study in game-theoretic modeling and analysis. Number 18. Cambridge University Press, 1992. [33] Jan Vondrák. Optimal approximation for the submodular welfare problem in the value oracle model. In
STOC , 2008.[34] Raphael Yuster. Maximum matching in regular and almost regular graphs.
Algorithmica ,66(1):87–92, 2013.
A Communication Complexity of Bipartite Matching
In this appendix we discuss a proposed communication complexity investigation of the bipartite matching problem. This model is essentially the same as that used in our investigation of bipartite matching in the rest of the paper, but it focuses on the exact problem rather than on approximations, and proposes the study of communication in its own right rather than merely as an abstraction of market processes. We focus on the open problem(s) and briefly mention some related models in which communication bottlenecks for matching have been investigated, giving a few pointers to different such threads where the interested reader may find many more references.
A.1 The Model and Problem
There are n items and n players. Each player i holds a subset S_i of the items that he is interested in. I.e., we have a bipartite graph with n left vertices (players) and n right vertices (items), and we have a player in our model for each left vertex, a player that knows the set of neighbors of his vertex in the bipartite graph (but there are no players associated with the items). The goal of these players is to find a maximum matching between items and players, i.e., each player i is assigned a single item j_i ∈ S_i, with no item assigned to multiple players: j_i ≠ j_{i′} for i ≠ i′.

Communication Model
The players engage in a fixed communication protocol using broadcast messages. Formally, they take turns writing on a common "blackboard". At every step in the protocol, the identity of the next player i to write a bit on the blackboard must be completely determined by the contents of the blackboard, and the message written by this player i must be determined by the contents of the blackboard as well as his own private input S_i. Whether the protocol terminates at a given point must be completely determined by the contents of the blackboard, and at this point the output matching must be solely determined by the contents of the blackboard. This model is completely equivalent to a decision tree in which each query can be an arbitrary function depending only on a single player's information S_i. The measure of complexity here is the total number of bits communicated.

Rounds
The communication model above allows an arbitrary order of communication. Of interest are also various limited orders: oblivious (the order of speaking is fixed independently of the input), and the simplest special case of it, one-way communication, where the players speak in the fixed order of player 1, player 2, etc. We will focus on speaking in rounds: in each "round" each of the n players writes a message on the blackboard, a message that may depend on his own input as well as the messages of the others in previous rounds (i.e., on the contents of the blackboard after the previous round). The measures of complexity here are the number of rounds and the total number of bits communicated. The special case of a single round is called a simultaneous protocol.

Open Problems
How much communication is needed for finding a maximum matching? What if we are limited to r rounds? These questions apply both to deterministic and to randomized protocols. The tradeoff between communication and approximation is of course also natural to explore.

What is Known
The trivial upper bound for communication is n², since the players can all simultaneously send their full input. The trivial lower bound is Ω(n log n), as this is the number of bits necessary to represent the output matching (and every matching may need to be given as output).

Significantly, the non-deterministic (and co-non-deterministic) communication complexity is also O(n log n): to verify that a given matching is of maximum size it suffices to add a Hall-theorem blocking set, or alternatively a solution for the dual. Specifically, a specification of a set of "high-price" items, so that (1) only allocated items are high-price, and (2) all players that are not allocated a low-price item are only interested in high-price ones. The fact that the non-deterministic complexity is low means that "easy" lower bound techniques such as fooling sets or cover-size bounds will not suffice for giving good lower bounds.

Interestingly, an O(n^{1.5} log n) upper bound can be obtained by adapting known algorithms to this framework: First, the auction algorithm described in Section 4 gives a (1 − δ)-approximation using O(n log n/δ) communication. When we choose δ = 1/√n this means that we get a matching that is smaller than the optimal one by at most an additive √n. We can thus perform √n more augmenting-path calculations to get an optimal matching. Each augmenting-path calculation requires only O(n log n) bits of communication: it requires finding a path in a graph on the players that has a directed edge between player i and player i′ whenever i is interested in an item that is currently allocated to i′. The goal here is to find a path from any player that is not allocated an item to any player that is interested in an unallocated item. A breadth-first search, with the blackboard serving as the queue, requires writing every vertex at most once on the blackboard, at most O(n log n) communication.

We do not know any better upper bound, nor do we know a better than O(n²) upper bound for even n rounds. Our lower bounds for matching provide an Ω(n²) lower bound for simultaneous protocols, and an n^{1+Ω(1/log log n)} lower bound for one-way communication follows from [15]. We don't know any lower bound better than Ω(n log n) for general protocols, or even for 2-round protocols.

Algorithmic Implications
We believe that studying bipartite matching under this model may be a productive way of understanding the general algorithmic complexity of the problem. A major open problem is whether bipartite matching has a (nearly-)linear time algorithm: n^{2+o(1)} time for dense graphs (and maybe m^{1+o(1)} for graphs with m edges). The best deterministic running time known (for the dense case) is the 40-year-old O(n^{2.5}) algorithm of [18], with a somewhat better randomized O(n^ω) algorithm known [29] (where ω = 2.37... is the matrix multiplication exponent). For special cases like regular or near-regular graphs nearly linear times are known (e.g. [34]). In parallel computation, a major open problem is whether bipartite matching can be solved in parallel poly-logarithmic time (with a polynomial amount of processors). Randomized parallel algorithms for the problem [30, 22] have been known for over 25 years.

On the positive side, it is "likely" that any communication protocol for bipartite matching that improves on the currently known O(n^{1.5} log n) complexity will imply a faster than the currently known O(n^{2.5}) algorithm. This is not a theorem; however, the computational complexity needed to send a single bit in a communication protocol is rarely more than linear in the input held by the player sending the bit. Most often each bit is given by a very simple computation, in which case this holds trivially, but sometimes clever data structures will be needed for this to be so. If this is the case in the communication protocol in question, then the improved algorithm is implied. A similar phenomenon should occur if a deterministic communication protocol that uses poly-logarithmically many rounds is found, since most likely each bit sent by each player is determined by a simple computation that can be computed in parallel logarithmic time.

On the negative side, lower bounds in a communication model do not imply algorithmic lower bounds; however, they can direct the search for algorithms by highlighting which approaches cannot work. This is the great strength of concrete models of computation, where lower bounds are possible to prove.

A.2 Related Models
The bipartite matching problem has been studied in various models that focus on communication.
Point to Point Communication
In our model players communicate using a "blackboard": any bit sent by a player is seen by all. A weaker model, more natural for capturing realities in distributed systems, considers the case where each message is sent to a single recipient. Such models are also called "message passing" or "private channels". One must be slightly careful in defining such protocols so as to ensure that no communication is "smuggled" through the timing of messages, and the standard way of doing so is using the essentially equivalent coordinator model of [11]. It turns out that bipartite matching is even harder to approximate without broadcast, and the results of [19] give an Ω(α²n²) lower bound for even finding an α-approximation (even using randomization). Note that this model is trivially more general than that of simultaneous protocols, hence this lower bound gives the randomized lower bound from Theorem 3.2. We also note that our proof for this theorem actually also applies to the model of interactive protocols with private channels.

Multiple Vertices per Processor
In our model, the problem for (n + n)-vertex graphs is handled by n players. A model that is more appropriate for the current scales of distributed systems is to use k processors where k ≪ n. There are various options for partitioning the n² bits of input to the processors where, most generally, each of the k processors can hold an arbitrary part. This is the usual model of interest for distributed systems and was e.g. used in [19]. Here again one might distinguish between broadcast and point-to-point communication models, where the gap between the models can be no larger than a factor of k. To convert an auction-based algorithm to run in this framework one must be able to calculate the demand of a vertex. Usually this can be easily done with O(k log n) communication, but in many cases it is possible to pay an overhead of O(1/δ) instead. In particular, the basic auction algorithm that obtains a (1 − δ)-approximation can be run in this model using O(n log n/δ) communication in the blackboard model, and thus O(kn log n/δ) with point-to-point communication.

Two Players
When we are down to k = 2 players we are back to the standard two-player communication complexity model of Yao. Two variants regarding the partition of the input to the two players are natural here: (a) each player i holds an arbitrary subset E_i of the edges and the graph in question is just the union E_1 ∪ E_2; (b) each player holds the edges adjacent to n/2 of the player-vertices, as in our n-player model. As in our model, the complexity of bipartite matching is completely open, and in particular the communication complexity of the decision problem of whether the input graph has a perfect matching is open, with no known non-trivial, ω(n), lower bounds or non-trivial, o(n²), upper bounds.

Streaming and Semi-streaming
One of the main applications of communication complexity is to serve as lower bounds for "streaming" algorithms; these are algorithms that go over the input sequentially in a single pass (or in a few passes), while using only a modest amount of space. The model of communication complexity required for such lower bounds is that of a one-way single-round private-channel protocol where in step i player i sends a message to player i + 1. (For r-pass variants of streaming algorithms, we will have r such rounds of one-way communication.) The lower bounds mentioned above thus imply that no streaming algorithm that uses o(n) space can get even a constant-factor approximation of the maximum matching, even with O(1) rounds. A greedy algorithm gets a 1/2-approximation using O(n) space, and slight improvements in the approximation factor using linear space are possible, e.g. using the online matching algorithm of [23]. In r passes and nearly-linear space, [21] gets a 1 − O(1/√r) approximation. Streaming algorithms that use linear or near-linear space are usually called semi-streaming algorithms, and lower bounds for them are usually derived by looking at the information transfer between the "first half" and the "second half" of the input data and proving a significantly super-linear lower bound on the one-way two-party communication. This was done in [15], which gives an n^{1+Ω(1/log log n)} lower bound for improving upon the 2/3-approximation.

Distributed Computing
In this model the input graph is also the communication network. I.e., players can communicate with each other only over links that are edges in the input graph, and the interest is in the number of rounds needed. For this to make sense in a bipartite graph we need to also have processors for the right vertices of the graph (and thus every edge is known by the two processors it connects). It is not hard to see that to get a perfectly maximal matching in this model, transfer of information across the whole diameter of the graph may be needed, which may require Ω(n) rounds of communication, but in [28] a protocol is exhibited that gives a (1 − δ)-approximation in O(log n/δ) rounds.

B Gross Substitutes Valuations
Proposition B.1
Any exact simultaneous algorithm for combinatorial auctions with gross substitutes valuations (even for just two players) requires super-polynomial message length.
Proof:
Suppose towards a contradiction that there exists an exact algorithm A for combinatorial auctions with gross substitutes valuations where the length of each message is polynomial. We will show that given such an algorithm A we can construct an exact sketch of any gross substitutes valuation using only polynomial space. We get a contradiction since gross substitutes valuations do not admit polynomial sketches [4]. Since the construction of [4] contains only integer-valued functions between 1 and poly(m), it suffices to show how to sketch these functions only.

Given such an algorithm A, the sketch that we construct for a valuation v is simply the message L_v that a player with valuation v sends in the algorithm A, and in addition the value v(M). We now show how to compute the value of any bundle v(S) with no additional communication cost. For this it suffices to show that for any T ⊂ M and any item j ∉ T we can compute the marginal value v(j | T). To compute the marginal value v(j | T), we construct the following family of additive valuations:

v_x^{j|T}({j′}) = x if j′ = j; 0 if j′ ∈ T; v(M) otherwise.

Consider an instance with two players, one with valuation v and the other with valuation v_x^{j|T}. Observe that in any efficient allocation of this instance the player with valuation v should always receive the bundle T. He will also receive item j if v(j | T) > x, and will not receive it if v(j | T) < x. Thus, we can use A to determine whether v(j | T) > x (with no additional communication cost) by giving it as input L_v and L_{v_x^{j|T}} for increasing values of x ∈ {1/2, 3/2, ..., v(M) − 1/2}. This allows us to determine the maximal value x* of x for which the v-player receives j. We now have that v(j | T) = x* + 1/2.
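As a sanity check, the marginal-value extraction described in this proof can be sketched in code. The sketch below is illustrative only: it replaces the black-box algorithm A with a brute-force two-player welfare maximizer over explicitly given valuations (the names `efficient_allocation` and `marginal_via_prices` are ours, not the paper's), and it scans the half-integer prices x exactly as in the proof.

```python
from itertools import chain, combinations

def powerset(items):
    s = list(items)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def efficient_allocation(v, w, items):
    """Brute-force welfare maximizer for two players; a stand-in for A.
    Returns the bundle given to the v-player in a welfare-maximizing allocation."""
    best = max(powerset(items),
               key=lambda S: v(S) + w(set(items) - set(S)))
    return set(best)

def marginal_via_prices(v, items, T, j):
    """Recover v(j | T) by scanning half-integer prices x, as in the proof."""
    v_M = v(items)

    def adversary(x):
        # additive valuation v_x^{j|T}: x on item j, 0 on T, v(M) elsewhere
        return lambda S: sum(x if k == j else (0 if k in T else v_M) for k in S)

    x_star = None
    x = 0.5
    while x <= v_M - 0.5:
        if j in efficient_allocation(v, adversary(x), items):
            x_star = x  # the v-player still wins item j at price x
        x += 1.0
    return x_star + 0.5 if x_star is not None else 0.0
```

For example, with the additive (hence gross substitutes) valuation giving values 1, 2, 3 to items a, b, c, the scan recovers v(b | {a}) = 2 and v(c | ∅) = 3. In the actual argument, of course, only the messages L_v and L_{v_x^{j|T}} are fed to A; no brute-force computation over v is available.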