Approximately Optimal Mechanism Design via Differential Privacy
Kobbi Nissim† · Rann Smorodinsky‡ · Moshe Tennenholtz§

Abstract
In this paper we study the implementation challenge in an abstract interdependent values model and an arbitrary objective function. We design a mechanism that allows for approximate optimal implementation of insensitive objective functions in ex-post Nash equilibrium. If, furthermore, values are private then the same mechanism is strategy proof. We cast our results onto two specific models: pricing and facility location. The mechanism we design is optimal up to an additive factor of the order of magnitude of one over the square root of the number of agents and involves no utility transfers.

Underlying our mechanism is a lottery between two auxiliary mechanisms: with high probability we actuate a mechanism that reduces players' influence on the choice of the social alternative, while choosing the optimal outcome with high probability. This is where the recent notion of differential privacy is employed. With the complementary probability we actuate a mechanism that is typically far from optimal but is incentive compatible. The joint mechanism inherits the desired properties from both.

∗We thank Amos Fiat and Haim Kaplan for discussions at an early stage of this research. We thank Frank McSherry and Kunal Talwar for helping to clarify issues related to the constructions in [22]. Finally, we thank Jason Hartline, James Schummer, Roberto Serrano and Asher Wolinsky for their valuable comments.

†Microsoft Audience Intelligence, Israel, and Department of Computer Science, Ben-Gurion University. Research partly supported by the Israel Science Foundation (grant No. 860/06). [email protected].

‡Faculty of Industrial Engineering and Management, Technion – Israel Institute of Technology, Haifa 32000, Israel. This work was supported by Technion VPR grants and the Bernard M. Gordon Center for Systems Engineering at the Technion. [email protected].
§Microsoft Israel R&D Center and the Faculty of Industrial Engineering and Management, Technion – Israel Institute of Technology, Haifa 32000, Israel. [email protected].

1 Introduction
Mechanism design deals with the implementation of desired outcomes in a multi-agent system with asymmetric information. The outcome of a mechanism may be a price for a good, an allocation of goods to the agents, the decision on a provision of a public good, locating public facilities, etc. The quality of the outcome is measured by some objective function. In many instances the literature is concerned with the sum of the agents' valuations for an outcome, but the objective function can take many other forms, such as the revenue of a seller in an auction setting, the social inequality in a market setting and more. The reader is referred to Mas-Colell, Whinston and Green [19] for a broader introduction. The holy grail of the mechanism design challenge is to design mechanisms which exhibit dominant strategies for the players, and furthermore, once players play their dominant strategies the outcome of the mechanism coincides with maximizing the objective function. Broadly speaking, this challenge is equivalent to designing optimal direct mechanisms that are truthful.

As it turns out, such powerful mechanisms do not exist in general. The famous Gibbard-Satterthwaite theorem (Gibbard [14] and Satterthwaite [29]) tells us that for non-restricted settings any non-trivial truthful mechanism is dictatorial. However, if we restrict attention to the objective function that is simply the sum of the agents' valuations, then this problem can be overcome by introducing monetary payments. Indeed, in such cases the celebrated Vickrey-Clarke-Groves mechanisms, discovered by Vickrey [37] and generalized by Clarke [8] and Groves [16], guarantee that being truthful is a dominant strategy and the outcome is optimal. Unfortunately, Roberts [26] showed that a similar mechanism cannot be obtained for other objective functions. This cul-de-sac induced researchers to 'lower the bar' for mechanism design.
One possibility for lowering the bar is to replace the solution concept with a weaker one, and a large body of literature on Bayes-Nash implementation has developed (the reader is referred to Mas-Colell et al. [19] for further reading). Another direction is that of approximate implementation, where the quest replaces accurate implementation with approximate implementation, while keeping the approximation inaccuracy as low as possible. The latter research agenda turned out to be fruitful and yielded many positive results. A sequence of papers on virtual implementation, initiated by Matsushima [20] and Abreu and Sen [2], provides general conditions for approximate implementation where the approximation inaccuracy in a fixed model can be made arbitrarily small. On the other hand, the recent literature emerging from the algorithmic mechanism design community looks at approximation inaccuracies which are a function of the size of the model (measured, e.g., by the number of agents).

Interestingly, no general techniques are known for designing mechanisms that are approximately optimal for arbitrary social welfare functions. To demonstrate this, consider the facility location problem, where a social planner needs to locate some facilities, based on agents' reports of their own location. This problem has received extensive attention recently, yet small changes in the model result in different techniques which seem tightly tailored to the specific model assumptions (see Alon et al. [5], Procaccia and Tennenholtz [25] and Wang et al. [38]).

Another line of research, initiated by Moulin [23], is that on mechanism design without money. Moulin, and later Schummer and Vohra [32, 33], characterized functions that are truthfully implementable without payments and studied domains in which non-dictatorial functions can be implemented.
More recently, Procaccia and Tennenholtz [25] studied a relaxation of this notion – approximate mechanism design without money.

Our work presents a general methodology for designing approximately optimal mechanisms for a broad range of models, including the facility location problem. A feature of our constructions is that the resulting mechanisms do not involve monetary transfers.
We introduce an abstract mechanism design model where agents have interdependent values and provide a generic technique for approximate implementation of an arbitrary objective function. More precisely, we bound the worst case difference between the optimal outcome ('first best') and the expected outcome of our generic mechanism by $O\left(\sqrt{\frac{\ln n}{n}}\right)$, where n is the population size. In addition, our generic construction does not involve utility transfer.

Our construction combines two very different random mechanisms:

• With high probability we deploy a mechanism that chooses social alternatives with a probability that is proportional to (the exponent of) the outcome of the objective function, assuming players are truthful. This mechanism exhibits two important properties. First, agents have small influence on the outcome of the mechanism and consequently have little influence on their own utility. As a result all strategies, including truthfulness, are ǫ-dominant. Second, under the assumption that players are truthful, alternatives which are nearly optimal are most likely to be chosen. The concrete construction we use follows the Exponential Mechanism presented by McSherry and Talwar [22].

• With vanishing probability we deploy a mechanism which is designed with the goal of eliciting agents' private information, while ignoring the objective function.

Our technique is developed for settings where the agents' type spaces as well as the set of social alternatives are finite. In more concrete settings, however, our techniques extend to 'large' type sets. We demonstrate our results in two specific settings: (1) Facility location problems, where the social planner is tasked with the optimal location of K facilities in the most efficient way. In this setting we focus on minimizing the social cost, which is the sum of agents' distances from the nearest facility.
(2) The digital goods pricing model, where a monopolist needs to determine the price for a digital good (goods with zero marginal cost for production) in order to maximize revenue.

Another contribution of our work is an extension of the classical social choice model. In the classical model agents' utilities are expressed as a function of the private information and a social alternative, a modeling that abstracts away the issue of how agents exploit the social choice made. We explicitly model this by extending the standard model by an additional stage, following the choice of the social alternative, where agents take an action to exploit the social alternative and determine their utility (hereinafter 'reaction'). We motivate this extension to the standard model with the following examples: (1) In a facility location problem agents react to the mechanism's outcome by choosing one of the facilities (e.g., choose which school to attend). (2) A monopolist posts a price based on agents' input. Agents react by either buying the good or not. (3) In an exchange economy agents react to the price vector (viewed as the outcome of the invisible hand mechanism) by demanding specific bundles. (4) In a public good problem, where a set of substitutable goods is supplied, each agent must choose her favorite good. (5) Finally, consider a network design problem, where each agent must choose the path it will use along the network created by the society. These examples demonstrate the prevalence of 'reactions' in a typical design problem. With this addendum to the model one can enrich the notion of a mechanism; in addition to determining a social choice the mechanism can also restrict the set of reactions available to an agent. For example, in the context of school location, the central planner can choose where to build new schools and, in addition, impose the specific school assigned to each student.
We refer to this aspect of mechanisms as imposition. We demonstrate the notion of imposition with the following illustrative example:
Example 1
In time of depression the government proposes to subsidize some retraining programs. There are three possible programs from which the government must choose two due to budget constraints. Once a pair of programs is chosen each agent is allocated to her favorite program. For simplicity, assume each candidate for retraining has a strict preference over the three programs, with utilities equal , and . Assume the government wants to maximize the social welfare subject to its budget constraint. A naive approach in which the government chooses the pair that maximizes the overall grade is clearly manipulable (there may be settings where an agent will falsely downgrade his 2nd choice to the third place in order to ensure his first choice makes it). An alternative methodology is for the government to choose a pair randomly, where the probability assigned to each pair is an increasing function of its induced welfare (the specific nature of the function will be made clear in the sequel). In addition, with a vanishing probability, a random pair will be chosen and in that case each agent will be assigned her preferred program according to her announcement. It turns out that this scheme is not manipulable and agents' optimal strategy is to report truthfully. If the population is large enough then the probability of choosing the truly optimal pair can be made arbitrarily close to one.

Formally, the introduction of reactions only generalizes the model. In fact, if we assume that the set of reactions is a singleton then we are back to the classical model. Additionally, it could be argued that reactions can be modeled as part of the set of social alternatives, S. For the analysis and mechanism we propose, the distinction between the set S and the reactions is important.

1.2 Related Work

Virtual implementation.
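The randomized scheme of Example 1 can be sketched in code. The following is our own illustration, not the paper's: since the example elides the exact utility values, the utilities 1, 2/3 and 1/3 for an agent's first, second and third choice are an assumption, as is the exponential weighting (made precise in the sequel).

```python
import math
import random
from itertools import combinations

# Hypothetical utilities for an agent's 1st/2nd/3rd choice; the example
# elides the exact values, so these are an assumption for illustration.
UTILS = [1.0, 2 / 3, 1 / 3]

def welfare(pair, rankings):
    """Average utility when each agent attends her favorite program in `pair`.

    `rankings` lists, per agent, the three programs ordered best to worst.
    """
    total = 0.0
    for ranking in rankings:
        best_rank = min(ranking.index(p) for p in pair)
        total += UTILS[best_rank]
    return total / len(rankings)

def choose_pair(rankings, eps):
    """Choose a pair of programs with probability increasing in its welfare,
    here proportional to exp(n * eps * welfare)."""
    n = len(rankings)
    pairs = list(combinations(range(3), 2))
    weights = [math.exp(n * eps * welfare(p, rankings)) for p in pairs]
    return random.choices(pairs, weights=weights)[0]
```

As the population grows, the welfare-maximizing pair is chosen with probability approaching one, while any single agent's report barely moves the distribution.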
The most closely related body of work is the literature on 'virtual implementation' with incomplete information, derived from earlier work on virtual implementation with complete information which was initiated by Matsushima [20] and Abreu and Sen [2]. A social choice function is virtually implementable if for any ǫ > 0 there exists a mechanism whose equilibria result in outcomes that ǫ-approximate the function. Results due to Abreu and Matsushima [1], Duggan [9] and Serrano and Vohra [34, 35] provide necessary and sufficient conditions for functions to be virtually implementable in various environments with private information. A common thread throughout the results on virtual implementation under incomplete information is the incentive compatibility requirement over the social choice function, in addition to some form of type diversity. Compared with our contribution, the above mentioned work provides positive results in environments with small populations, whereas we require large populations in order to have a meaningful approximation. On the other hand, the solution concepts we focus on are ex-post Nash equilibrium, undominated strategies, and strict dominance (for the private values setting), compared with iterated deletion of dominated strategies or Bayes-Nash equilibria, provided in the above mentioned papers. In addition, the virtual implementation results apply to functions that are incentive compatible from the outset, whereas our technique applies to arbitrary objective functions. In both cases the mechanisms proposed do not require transfers but do require some kind of type diversity.

Influence and Approximate Efficiency.
The basic driving force underlying our construction is ensuring that each agent has a vanishing influence on the outcome of the mechanism as the population grows. In the limit, if players are non-influential, then they might as well be truthful. This idea is not new and has been used by various authors to provide mechanisms that approximate efficient outcomes when the population of players is large. Some examples of work that hinge on a similar principle for large, yet finite populations, are Swinkels [36] who studies auctions, Satterthwaite and Williams [30] and Rustichini, Satterthwaite and Williams [28] who study double auctions, and Al-Najjar and Smorodinsky [4] who study an exchange market. The same principle is even more pronounced in models with a continuum of players, where each agent has no influence on the joint outcome (e.g., Roberts and Postlewaite [27] who study an exchange economy). The mechanisms provided in these papers are designed for maximizing the sum of agents' valuations, and provide no value for alternative objective functions. In contrast, our results hold for a wide range of objective functions and are generic in nature. Interestingly, a similar argument, hinging on players' lack of influence, is instrumental to show inefficiency in large population models (for example, Mailath and Postlewaite [18] demonstrate 'free-riding' in the context of public goods, which eventually leads to inefficiency).

A formal statement of 'influence' in an abstract setting appears in Levine and Pesendorfer [17] and Al-Najjar and Smorodinsky [3]. Beyond the formalization of influence these works provide bounds on aggregate measures of influence, such as the average influence or the number of influential agents. McLean and Postlewaite [21] introduce the notion of informational smallness, formalizing settings where one player's information is insignificant with respect to the aggregated information.
Differential Privacy.
The notion of differential privacy, recently introduced by Dwork, McSherry, Nissim and Smith [12], captures a measure of (lack of) privacy by the impact of a single agent's input on the outcome of a joint computation. A small impact suggests that the agent's privacy cannot be significantly jeopardized. In the limit, if an agent has no impact then nothing can be learned about the agent from the outcome of the computation. More accurately, differential privacy stipulates that the influence of any contributor to the computation is bounded in a very strict sense: any change in the input contributed by an individual translates to at most a near-one multiplicative factor in the probability distribution over the set of outcomes. The scope of computations that were shown to be computable in a differentially private manner has grown significantly since the introduction of the concept, and the reader is referred to Dwork [11] for a recent survey.

McSherry and Talwar [22] establish an inspiring connection between differential privacy and mechanism design, where differential privacy is used as a tool for constructing efficient mechanisms. They observe that participants (players) that contribute private information to ǫ-differentially private computations have limited influence on the outcome of the computation, and hence have a limited incentive to lie, even if their utility is derived from the joint outcome. Consequently, truth-telling is approximately dominant in mechanisms that are ǫ-differentially private, regardless of the agents' utility functions. McSherry and Talwar introduce the exponential mechanism as a generic ǫ-differentially private mechanism. In addition, they show that whenever agents are truthful the exponential mechanism chooses a social alternative which almost optimizes the objective function.
They go on to demonstrate the power of this mechanism in the context of Unlimited Supply Auctions, Attribute Auctions, and Constrained Pricing.

The contribution of McSherry and Talwar leaves much to be desired in terms of mechanism design: (1) It is not clear how to set the value of ǫ. Lower values of ǫ imply higher compatibility with incentives, on the one hand, but deteriorate the approximation results on the other hand. The model and results of McSherry and Talwar do not provide a framework for analyzing these countervailing forces. (2) Truth telling is approximately dominant, but, in fact, in the mechanisms they design all strategies are approximately dominant, which suggests that truth telling may have no intrinsic advantage over any other strategy in their mechanism. (3) Furthermore, one can demonstrate that misreporting one's private information can actually dominate other strategies, truth-telling included. To make things worse, such dominant strategies may lead to inferior results for the social planner. This is demonstrated in Example 2, in the context of monopoly pricing.

The measure of 'impact' underlying differential privacy is the analog of 'influence' à la Levine and Pesendorfer [17] and Al-Najjar and Smorodinsky [3] in a non-Bayesian framework, with worst-case considerations. Schummer [31] also studies approximately dominant strategies, in the context of exchange economies.

Facility Location.

One of the concrete examples we investigate is the optimal location of facilities. The facility location problem has already been tackled in the context of approximate mechanism design without money, and turned out to lead to interesting challenges. While the single facility location problem exhibits preferences that are single-peaked and can be solved optimally by selecting the median declaration, the 2-facility problem turns out to be non-trivial.
Most recently, Wang et al. [38] introduce a randomized (multiplicative) approximation truthful mechanism for the facility location problem. The techniques introduced here provide much better approximations; in particular, we provide an additive $\tilde{O}(n^{-1/2})$ approximation to the average optimal distance between the agents and the facilities.

Following our formalization of reactions and of imposition and its applicability to facility location, Fotakis and Tzamos [13] provide 'imposing' versions of previously known mechanisms to improve implementation accuracy. They provide constant multiplicative approximation or logarithmic multiplicative approximation, albeit with fully imposing mechanisms.
Non-discriminatory Pricing of Digital Goods.
Another concrete setting where we demonstrate our generic results is a pricing application, where a monopolist sets a single price for goods with zero marginal costs ("digital goods") in order to maximize revenues. We consider environments where the potential buyers have interdependent valuations for the good. Pricing mechanisms for the private values case have been studied by Goldberg et al. [15] and Balcan et al. [7]. They consider settings where agents' valuations are not necessarily restricted to a finite set and achieve $O(\sqrt{n})$-implementation (where n is the population size). Whereas our mechanism provides a similar bound, it is limited to settings with finitely many possible prices. However, it is derived from general principles and therefore more robust. In addition, our mechanism is applicable beyond the private values setting.

2 The Model

Let N denote a set of n agents, let S denote a finite set of social alternatives, and let $T_i$, $i = 1, \ldots, n$, be a finite type space for agent i. We denote by $T = \times_{i=1}^n T_i$ the set of type tuples and write $T_{-i} = \times_{j \neq i} T_j$, with generic element $t_{-i}$. Agent i's type, $t_i \in T_i$, is her private information. Let $R_i$ be the set of reactions available to i. Typically, once a social alternative, $s \in S$, is determined, agents choose a reaction $r_i \in R_i$. The utility of an agent i is therefore a function of the vector of types, the social alternative and her own reaction, $u_i : T \times S \times R_i \to [0,1]$.

A tuple $(T, S, R, u)$, where $R = \times_{i=1}^n R_i$ and $u = (u_1, \ldots, u_n)$, is called an environment. We will use $r_i(t, s)$ to denote an arbitrary optimal reaction for agent i (i.e., $r_i(t, s)$ is an arbitrary function whose image is in the set $\mathrm{argmax}_{r_i \in R_i} u_i(t, s, r_i)$).

We say that an agent has private reactions if her optimal reaction depends only on her type and the social alternative.

The notation $\tilde{O}(n^{-1/2})$ is used to denote convergence to zero at a rate $\sqrt{\ln(n)/n}$, compared with $O(n^{-1/2})$, which denotes convergence to zero at a rate $1/\sqrt{n}$.
Formally, agent i has private reactions if $\mathrm{argmax}_{r_i \in R_i} u_i((t_i, t_{-i}), s, r_i) = \mathrm{argmax}_{r_i \in R_i} u_i((t_i, t'_{-i}), s, r_i)$, for all $s, i, t_i, t_{-i}$ and $t'_{-i}$. To emphasize that $r_i(t, s)$ does not depend on $t_{-i}$ we will in this case use the notation $r_i(t_i, s)$ to denote an arbitrary optimal reaction for agent i.

We say that an agent has private values if she has private reactions and furthermore her utility depends only on her type, social alternative and reaction, i.e., $u_i((t_i, t_{-i}), s, r_i) = u_i((t_i, t'_{-i}), s, r_i)$ for all $s, i, t_i, t_{-i}$ and $t'_{-i}$. In this case we will use the notation $u_i(t_i, s, r_i)$ to denote the agent's utility, to emphasize that it does not depend on $t_{-i}$. In the more general setting, where the utility $u_i$ and the optimal reaction $r_i$ may depend on $t_{-i}$, we say that agents have interdependent values.

An environment is non-trivial if for any pair of types there exists a social alternative for which the optimal reactions are distinct. Formally, for all $i$, $t_i \neq \hat{t}_i \in T_i$ and $t_{-i}$ there exists $s \in S$, denoted $s(t_i, \hat{t}_i, t_{-i})$, such that $\mathrm{argmax}_{r_i \in R_i} u_i((t_i, t_{-i}), s, r_i) \cap \mathrm{argmax}_{r_i \in R_i} u_i((\hat{t}_i, t_{-i}), s, r_i) = \emptyset$. We say that $s(t_i, \hat{t}_i, t_{-i})$ separates between $t_i$ and $\hat{t}_i$ at $t_{-i}$. A set of social alternatives, $\tilde{S} \subset S$, is called separating if for any $i$, $t_i \neq \hat{t}_i$ and $t_{-i}$, there exists some $s(t_i, \hat{t}_i, t_{-i}) \in \tilde{S}$ that separates between $t_i$ and $\hat{t}_i$ at $t_{-i}$.

A social planner, not knowing the vector of types, wants to maximize an arbitrary objective function (sometimes termed social welfare function), $F : T \times S \to [0,1]$. We focus our attention on a class of functions for which individual agents have a diminishing impact as the population size grows:
Definition 1 (Sensitivity)
The objective function $F : T \times S \to [0,1]$ is d-sensitive if for all $i$, $t_i \neq \hat{t}_i$, $t_{-i}$ and $s \in S$, $|F((t_i, t_{-i}), s) - F((\hat{t}_i, t_{-i}), s)| \leq \frac{d}{n}$, where n is the population size.

Note that this definition refers to unilateral changes in announcements, while keeping the social alternative fixed. In particular, d-sensitivity does not exclude the possibility of a radical change in the value of F when the social alternative changes.

Utilities are assumed to be bounded in the unit interval. This is without loss of generality, as long as there is some uniform bound on the utility.

In fact, one can consider objective functions of the form $F : T \times S \times R \to [0,1]$. Our results go through if for any $t$ and $s$ and any $i$ and $r_{-i}$ the functions $F(t, s, (r_{-i}, \cdot)) : R_i \to [0,1]$ and $u_i(t, s, \cdot) : R_i \to [0,1]$ are co-monotonic. In words, as long as the objective function's outcome (weakly) increases whenever a change in reaction increases an agent's utility.

In the definition of sensitivity one can replace the constant d with a function $d = d(n)$ that depends on the population size. Our results go through for the more general case as long as $\lim_{n \to \infty} \frac{d(n)}{n} = 0$.

An example of a 1-sensitive function is the average utility, $F(t, s) = \frac{\sum_i u_i(t, s, r_i(t, s))}{n}$. Note that a d-sensitive function eliminates situations where any single agent has an overwhelming impact on the value of the objective function, for a fixed social alternative s. In fact, if an objective function is not d-sensitive, for any d, then in a large population this function could be susceptible to minor faults in the system (e.g., noisy communication channels).

Denote by $\mathcal{R}_i = 2^{R_i} \setminus \{\emptyset\}$ the set of all subsets of $R_i$, except for the empty set, and let $\mathcal{R} = \times_i \mathcal{R}_i$. A (direct) mechanism randomly chooses, for any vector of inputs t, a social alternative, and for each agent i a subset of available reactions. Formally:

Definition 2 (Mechanism)
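As a sanity check on Definition 1, the following sketch (ours, with a made-up private-values utility table) verifies by brute force that the average-utility objective is 1-sensitive: for every fixed alternative, changing a single agent's type moves F by at most 1/n.

```python
import itertools
import random

random.seed(0)
n, types, alts = 4, [0, 1, 2], [0, 1]
# A made-up private-values utility table u[(type, alternative)] in [0, 1].
u = {(ti, s): random.random() for ti in types for s in alts}

def F(t, s):
    """Average utility; with private values the optimal reaction is folded into u."""
    return sum(u[(ti, s)] for ti in t) / len(t)

# Brute-force the worst unilateral change |F((t_i, t_-i), s) - F((t̂_i, t_-i), s)|.
worst = 0.0
for t in itertools.product(types, repeat=n):
    for i in range(n):
        for ti_hat in types:
            for s in alts:
                t_hat = t[:i] + (ti_hat,) + t[i + 1:]
                worst = max(worst, abs(F(t, s) - F(t_hat, s)))

assert worst <= 1 / n  # average utility is d-sensitive with d = 1
```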
A (direct) mechanism is a function $M : T \to \Delta(S \times \mathcal{R})$. In addition, the mechanism discloses the vector of agents' announcements, and agents can use this information to choose a reaction.

We denote by $M^S(t)$ the marginal distribution of $M(t)$ on $S$ and by $M^i(t)$ the marginal distribution on $\mathcal{R}_i$. We say that the mechanism M is non-imposing if $M^i(t)(R_i) = 1$. That is, the probability assigned to the grand set of reactions is one, for all $i$ and $t \in T$. Put differently, the mechanism never restricts the set of available reactions. M is ǫ-imposing if $M^i(t)(R_i) \geq 1 - \epsilon$ for all $i$ and $t \in T$. In words, with probability exceeding $1 - \epsilon$ the mechanism imposes no restrictions.

A mechanism induces the following game with incomplete information. In the first phase agents announce their types simultaneously to the mechanism. Then the mechanism chooses a social alternative and, for each agent, a subset of reactions.

An example of a function that is not d-sensitive, for any d, is the following: set $F = 1$ ($F = 0$) if there is an even number of agents whose utility exceeds some threshold and the social alternative is A (B), and $F = 0$ ($F = 1$) otherwise.

If, however, all agents have private reactions then this information is useless to the agents and we do not require such a public disclosure of the agents' announcements.

Let $W_i : T_i \to T_i$ denote the announcement function of agent i, given his type, and let $W = (W_i)_{i=1}^n$. Upon the announcement of the social alternative s, the vector of opponents' announcements, $t_{-i}$, and a subset of reactions, $\hat{R}_i \subset R_i$, the rational agent will choose an arbitrary optimal reaction, $r_i((t_i, W_{-i}^{-1}(t_{-i})), s, \hat{R}_i)$, where $W_{-i}^{-1}(t_{-i})$ denotes the pre-image of $W_{-i}$ at the vector of announcements $t_{-i}$.

Thus, given a mechanism and a vector of announcement functions, $(W_i)_{i=1}^n$, the agents' reactions are uniquely defined. Therefore, we can view $(W_i)_{i=1}^n$ as the agents' strategies, without an explicit reference to the choice of reactions.
Given a vector of types, t, and a strategy tuple W, the mechanism M induces a probability distribution, $M(W(t))$, over the set of social alternatives and reaction tuples. The expected utility of i, at a vector of types t, is $E_{M(W(t))} u_i(t, s, r_i)$, where $r_i$ is shorthand for the optimal reaction, which itself is determined by M and W. In fact, hereinafter we suppress the reference to the reactions in our notation and write $E_{M(W(t))} u_i(t, s)$ instead of $E_{M(W(t))} u_i(t, s, r_i)$.

A strategy $W_i$ is dominant for the mechanism M if for any vector of types $t \in T$, any alternative strategy $\hat{W}_i$ of i and any strategy profile $\bar{W}_{-i}$ of i's opponents,

$E_{M((W_i(t_i), \bar{W}_{-i}(t_{-i})))} u_i(t, s) \geq E_{M((\hat{W}_i(t_i), \bar{W}_{-i}(t_{-i})))} u_i(t, s).$   (1)

In words, $W_i$ is a strategy that maximizes the expected payoff of i for any vector of types and any strategy used by her opponents. If for all i the strategy $W_i(t_i) = t_i$ is dominant then M is called truthful (or strategyproof). A strategy $W_i$ is strictly dominant if it is dominant and furthermore whenever $W_i(t_i) \neq \hat{W}_i(t_i)$ a strict inequality holds in Equation (1). If $W_i(t_i) = t_i$ is strictly dominant for all i then M is strictly truthful.

A strategy $W_i$ is dominated for the mechanism M if there exists an alternative strategy $\hat{W}_i$, such that for any vector of types $t \in T$ and any strategy profile $\bar{W}_{-i}$ of i's opponents, the following holds: $E_{M((W_i(t_i), \bar{W}_{-i}(t_{-i})))} u_i(t, s) \leq E_{M((\hat{W}_i(t_i), \bar{W}_{-i}(t_{-i})))} u_i(t, s)$, with a strict inequality holding for at least one type vector t.

Finally, a strategy tuple W is an ex-post Nash equilibrium if for all i and $t \in T$ and for any strategy $\hat{W}_i$ of player i, $E_{M(W(t))} u_i(t, s) \geq E_{M((\hat{W}_i(t_i), W_{-i}(t_{-i})))} u_i(t, s)$. If $\{W_i(t_i) = t_i\}_{i=1}^n$ is an ex-post Nash equilibrium then M is ex-post Nash truthful.
We slightly abuse notation, as $W_{-i}^{-1}(t_{-i})$ may not be a singleton but a subset of type vectors, in which case the optimal reaction is not well defined. More accurate notation must involve considering another primitive of the model, the prior belief of i over $T_{-i}$. With such a prior, $r_i((t_i, W_{-i}^{-1}(t_{-i})), s, \hat{R}_i)$ denotes the reaction in $\hat{R}_i$ that maximizes the expected utility with respect to the prior belief, conditional on the subset $W_{-i}^{-1}(t_{-i})$.

Note we do not require a strict inequality to hold on any instance.

2.5 Implementation

Given a vector of types, t, the expected value of the objective function, F, at the strategy tuple W is $E_{M(W(t))}[F(t, s)]$.

Definition 3 (β-implementation) We say that the mechanism
M β-implements F in (strictly) dominant strategies, for β > 0, if for any (strictly) dominant strategy tuple, W, and for any $t \in T$, $E_{M(W(t))}[F(t, s)] \geq \max_{s \in S} F(t, s) - \beta$.

A mechanism M β-implements F in an ex-post Nash equilibrium if for some ex-post Nash equilibrium strategy tuple, W, for any $t \in T$, $E_{M(W(t))}[F(t, s)] \geq \max_{s \in S} F(t, s) - \beta$.

A mechanism M β-implements F in undominated strategies if for any tuple of strategies, W, that are not dominated, and for any $t \in T$, $E_{M(W(t))}[F(t, s)] \geq \max_{s \in S} F(t, s) - \beta$.

Main Theorem (informal statement):
For any d-sensitive function F and $1 > \beta > 0$ there exists a number $n_0$ and a mechanism M which β-implements F in an ex-post Nash equilibrium whenever the population has more than $n_0$ agents. If, in addition, reactions are private then M β-implements F in strictly dominant strategies.

In this section we present a general scheme for implementing arbitrary objective functions in large societies. The convergence rate we demonstrate is of an order of magnitude of $\sqrt{\frac{\ln(n)}{n}}$. Our scheme involves a lottery between two mechanisms: (1) The Exponential Mechanism, a non-imposing differentially private mechanism that randomly selects a social alternative, s. The probability of choosing s is proportional to (an exponent of) the value it induces on F; and (2) The Commitment Mechanism, where imposition is used to commit agents to take a reaction that complies with their announced type.
Consider the following non-imposing mechanism, which we refer to as the Exponential Mechanism, originally introduced by McSherry and Talwar [22]:

$$M^\epsilon(t)(s) = \frac{e^{n\epsilon F(t,s)}}{\sum_{\bar s \in S} e^{n\epsilon F(t,\bar s)}}.$$

This mechanism has two appealing properties. First, it preserves $\epsilon$-differential privacy, i.e., for all $i$ it is insensitive to a change in $t_i$. Second, it chooses an alternative $s$ that almost maximizes $F(t,s)$. We follow Dwork et al. [12] and define:

Definition 4 ($\epsilon$-differential privacy) A mechanism $M$ provides $\epsilon$-differential privacy if it is non-imposing and for any $s \in S$ and any pair of type vectors $t, \hat t \in T$ which differ only on a single coordinate, $M(t)(s) \le e^\epsilon \cdot M(\hat t)(s)$.

In words, a mechanism preserves $\epsilon$-differential privacy if, for any vector of announcements, a unilateral deviation changes the probability assigned to any social choice $s \in S$ by at most a (multiplicative) factor of $e^\epsilon$, which approaches $1$ as $\epsilon$ approaches zero.

Lemma 1 (McSherry and Talwar [22]) If $F$ is $d$-sensitive then $M^{\epsilon/2d}(t)$ preserves $\epsilon$-differential privacy.

The proof is simple, and is provided for completeness:
Proof: Let $t$ and $\hat t$ be two type vectors that differ on a single coordinate. Then for any $s \in S$, $F(t,s) - \frac dn \le F(\hat t,s) \le F(t,s) + \frac dn$, hence

$$\frac{M^{\epsilon/2d}(t)(s)}{M^{\epsilon/2d}(\hat t)(s)} = \frac{\;e^{\frac{n\epsilon F(t,s)}{2d}}\big/\sum_{\bar s\in S} e^{\frac{n\epsilon F(t,\bar s)}{2d}}\;}{\;e^{\frac{n\epsilon F(\hat t,s)}{2d}}\big/\sum_{\bar s\in S} e^{\frac{n\epsilon F(\hat t,\bar s)}{2d}}\;} \le \frac{\;e^{\frac{n\epsilon F(t,s)}{2d}}\big/\sum_{\bar s\in S} e^{\frac{n\epsilon F(t,\bar s)}{2d}}\;}{\;e^{\frac{n\epsilon (F(t,s)-d/n)}{2d}}\big/\sum_{\bar s\in S} e^{\frac{n\epsilon (F(t,\bar s)+d/n)}{2d}}\;} = e^{\epsilon}.$$

QED
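As a sanity check, the mechanism $M^{\epsilon/2d}$ and the privacy guarantee of Lemma 1 can be simulated in a few lines of Python (an illustrative sketch, not code from the paper; the revenue-style objective and all numeric values are invented):

```python
import math

def exp_mechanism_probs(t, S, F, eps, d):
    """Probabilities of M^{eps/2d}: Pr[s] is proportional to exp(n*eps*F(t,s)/(2*d)).

    t : tuple of reported types, S : list of alternatives,
    F : objective mapping (t, s) into [0, 1], assumed d-sensitive.
    """
    n = len(t)
    weights = [math.exp(n * eps * F(t, s) / (2 * d)) for s in S]
    total = sum(weights)
    return [w / total for w in weights]

# Illustrative 1-sensitive objective: average revenue per agent at price s,
# when every agent whose type is at least s buys (types and prices in [0, 1]).
def avg_revenue(t, s):
    return s * sum(1 for ti in t if ti >= s) / len(t)

prices = [0.25, 0.5, 0.75, 1.0]
t = tuple(i / 100 for i in range(100))   # 100 reported types
t_dev = (0.99,) + t[1:]                  # agent 0 unilaterally deviates
eps = 0.4

p = exp_mechanism_probs(t, prices, avg_revenue, eps, d=1)
p_dev = exp_mechanism_probs(t_dev, prices, avg_revenue, eps, d=1)

# Lemma 1: a unilateral deviation changes each probability by a factor <= e^eps.
ratio = max(max(a / b, b / a) for a, b in zip(p, p_dev))
```

By Lemma 1 the computed ratio can never exceed $e^\epsilon$, no matter which single report is changed.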
The appeal of mechanisms that provide $\epsilon$-differential privacy is that they induce near indifference among all strategies, in the following sense:

Lemma 2 If $M$ is non-imposing and provides $\epsilon$-differential privacy, for some $\epsilon \le 1$, then for any agent $i$, any type tuple $t$, any strategy tuple $W$, and any alternative strategy $\hat W_i$ for $i$, the following holds: $\big|E_{M(W(t))}[u_i(t,s)] - E_{M(\hat W_i(t_i),W_{-i}(t_{-i}))}[u_i(t,s)]\big| \le 2\epsilon$.

(For non-discrete sets of alternatives the definition requires that $\frac{M(t)(\hat S)}{M(\hat t)(\hat S)} \le e^\epsilon$ for all $\hat S \subset S$.)

(The motivation underlying this definition of $\epsilon$-differential privacy is that if a single agent's input to a database changes, then a query on that database returns distributionally similar results. This, in turn, suggests that it is difficult to learn new information about the agent from the query, thus preserving her privacy.)

Proof: Let $W$ and $\hat W$ be two strategy vectors that differ on the $i$'th coordinate. Then for every $t \in T$, $s \in S$, $r_i \in R_i$ and $u_i : T \times S \times R_i \to [0,1]$ we have

$$E_{M(W(t))}[u_i(t,s)] = \sum_{s\in S} M(W(t))(s)\cdot u_i(t,s) \le \sum_{s\in S} e^\epsilon\cdot M(\hat W_i(t_i),W_{-i}(t_{-i}))(s)\cdot u_i(t,s) = e^\epsilon\cdot E_{M(\hat W_i(t_i),W_{-i}(t_{-i}))}[u_i(t,s)],$$

where the inequality follows since $M$ provides $\epsilon$-differential privacy and $u_i$ is non-negative. A similar analysis gives $E_{M(\hat W_i(t_i),W_{-i}(t_{-i}))}[u_i(t,s)] \le e^\epsilon\cdot E_{M(W(t))}[u_i(t,s)]$. Hence we get:

$$E_{M(W(t))}[u_i(t,s)] - E_{M(\hat W_i(t_i),W_{-i}(t_{-i}))}[u_i(t,s)] \le (e^\epsilon - 1)\cdot E_{M(\hat W_i(t_i),W_{-i}(t_{-i}))}[u_i(t,s)] \le e^\epsilon - 1,$$

where the last inequality holds because $u_i$ returns values in $[0,1]$. Similarly, $E_{M(\hat W_i(t_i),W_{-i}(t_{-i}))}[u_i(t,s)] - E_{M(W(t))}[u_i(t,s)] \le e^\epsilon - 1$. To conclude the lemma, note that $e^\epsilon - 1 \le 2\epsilon$ for $0 \le \epsilon \le 1$. QED
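The near-indifference bound of Lemma 2 can likewise be checked numerically (a sketch with invented numbers; the mechanism is the exponential mechanism $M^{\epsilon/2d}$ with $d=1$, and `u0` is a hypothetical utility table for agent 0):

```python
import math

def probs(t, S, F, eps, d=1):
    """Probabilities of the exponential mechanism M^{eps/2d}."""
    n = len(t)
    w = [math.exp(n * eps * F(t, s) / (2 * d)) for s in S]
    return [x / sum(w) for x in w]

# A 1-sensitive objective and a utility for agent 0, both valued in [0, 1].
F = lambda t, s: sum(min(ti, s) for ti in t) / len(t)
u0 = {0.25: 0.9, 0.5: 0.4, 0.75: 0.1, 1.0: 0.0}

S = [0.25, 0.5, 0.75, 1.0]
t = (0.3, 0.6, 0.8, 0.2, 0.9)
t_dev = (1.0,) + t[1:]        # agent 0 misreports
eps = 0.2

e_true = sum(p * u0[s] for p, s in zip(probs(t, S, F, eps), S))
e_dev = sum(p * u0[s] for p, s in zip(probs(t_dev, S, F, eps), S))
diff = abs(e_true - e_dev)    # Lemma 2 guarantees diff <= 2*eps
```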
McSherry and Talwar [22] note in particular that in the case of private values truthfulness is $\epsilon$-dominant, which is an immediate corollary of Lemma 2. They combine this with the following observation to conclude that exponential mechanisms approximately implement $F$ in $\epsilon$-dominant strategies:

Lemma 3 (McSherry and Talwar [22])
Let $F : T^n \times S \to [0,1]$ be an arbitrary $d$-sensitive objective function and let $n > \frac{ed}{\epsilon}|S|$. Then for any $t$,

$$E_{M^{\epsilon/2d}(t)}[F(t,s)] \ge \max_s F(t,s) - \frac{3d}{n\epsilon}\ln\left(\frac{n\epsilon|S|}{d}\right).$$

Proof: Let $\delta = \frac{2d}{n\epsilon}\ln\left(\frac{n\epsilon|S|}{d}\right)$. As $n > \frac{ed}{\epsilon}|S|$ we conclude that $\ln(\frac{n\epsilon|S|}{d}) > \ln e > 0$ and, in particular, $\delta > 0$.

Fix a vector of types $t$ and denote $\hat S = \{\hat s \in S : F(t,\hat s) < \max_s F(t,s) - \delta\}$. For any $\hat s \in \hat S$ the following holds:

$$M^{\epsilon/2d}(t)(\hat s) = \frac{e^{\frac{n\epsilon F(t,\hat s)}{2d}}}{\sum_{s'\in S} e^{\frac{n\epsilon F(t,s')}{2d}}} \le \frac{e^{\frac{n\epsilon(\max_s F(t,s)-\delta)}{2d}}}{e^{\frac{n\epsilon \max_s F(t,s)}{2d}}} = e^{-\frac{n\epsilon\delta}{2d}}.$$

Therefore, $M^{\epsilon/2d}(t)(\hat S) = \sum_{\hat s\in\hat S} M^{\epsilon/2d}(t)(\hat s) \le |\hat S|\, e^{-\frac{n\epsilon\delta}{2d}} \le |S|\, e^{-\frac{n\epsilon\delta}{2d}}$, which, in turn, implies:

$$E_{M^{\epsilon/2d}(t)}[F(t,s)] \ge (\max_s F(t,s) - \delta)\left(1 - |S|e^{-\frac{n\epsilon\delta}{2d}}\right) \ge \max_s F(t,s) - \delta - |S|e^{-\frac{n\epsilon\delta}{2d}}.$$

Substituting for $\delta$ we get that

$$E_{M^{\epsilon/2d}(t)}[F(t,s)] \ge \max_s F(t,s) - \frac{2d}{n\epsilon}\ln\left(\frac{n\epsilon|S|}{d}\right) - \frac{d}{n\epsilon}.$$

In addition, $n > \frac{ed}{\epsilon}|S|$ implies $\ln(\frac{n\epsilon|S|}{d}) > \ln(e) = 1$, and hence $\frac{d}{n\epsilon} \le \frac{d}{n\epsilon}\ln(\frac{n\epsilon|S|}{d})$. Plugging this into the previous inequality yields $E_{M^{\epsilon/2d}(t)}[F(t,s)] \ge \max_s F(t,s) - \frac{3d}{n\epsilon}\ln(\frac{n\epsilon|S|}{d})$, as desired. QED
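A quick numeric illustration of Lemma 3 (a sketch; the objective values, $n$, $\epsilon$ and $|S|$ below are arbitrary choices satisfying the lemma's population condition, with $d = 1$):

```python
import math

n, eps, d = 400, 0.5, 1.0
S = [0, 1, 2, 3]                             # |S| = 4 alternatives
F = {0: 0.20, 1: 0.55, 2: 0.90, 3: 0.85}     # F(t, s) for one fixed type vector t

assert n > math.e * d * len(S) / eps         # population-size condition of Lemma 3

# Expected value of F under the exponential mechanism M^{eps/2d}.
weights = {s: math.exp(n * eps * F[s] / (2 * d)) for s in S}
total = sum(weights.values())
expected_F = sum(F[s] * w / total for s, w in weights.items())

best = max(F.values())
bound = best - (3 * d / (n * eps)) * math.log(n * eps * len(S) / d)
```

Here `expected_F` lands far closer to the maximum than the (rather loose) bound requires.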
Note that $\lim_{n\to\infty} \frac{3d}{n\epsilon}\ln\left(\frac{n\epsilon|S|}{d}\right) = 0$ whenever the parameters $d, \epsilon$ and $|S|$ are held fixed. Therefore, the exponential mechanism is almost optimal for a large and truthful population.
Remark:
There are other mechanisms which exhibit properties similar to those of the Exponential Mechanism, namely 'almost indifference' and 'approximate optimality'. The literature on differential privacy is rich in techniques for establishing mechanisms with such properties. Some techniques for converting computations into $\epsilon$-differentially private computations without jeopardizing the accuracy too much are the addition of noise calibrated to global sensitivity by Dwork et al. [12], and the addition of noise calibrated to smooth sensitivity and the sample-and-aggregate framework by Nissim et al. [24]. The reader is further referred to the recent survey of Dwork [11]. Any of these mechanisms can replace the exponential mechanism in the following analysis.

(This limit also approaches zero if $d, \epsilon, |S|$ depend on $n$, as long as $d/\epsilon$ is sublinear in $n$ and $|S|$ is subexponential in $n$.)

The Commitment Mechanism

We now consider an imposing mechanism that chooses $s \in S$ randomly, while ignoring agents' announcements. Once $s$ is chosen the mechanism restricts the allowable reactions of $i$ to those that are optimal assuming all agents are truthful. Formally, if $s$ is chosen according to the probability distribution $P$, let $M^P$ denote the following mechanism: $M^P_S(t)(s) = P(s)$ and $M^P_i(t)(r_i(t,s) \mid s) = 1$.

Players do not influence the choice of $s$ in $M^P$ and so they are (weakly) better off being truthful. We define the gap of the environment, $\gamma = g(T,S,A,u)$, as:

$$\gamma = g(T,S,A,u) = \min_{i,\ t_i \ne b_i,\ t_{-i}}\ \max_{s\in S}\big(u_i(t,s,r_i(t,s)) - u_i(t,s,r_i((b_i,t_{-i}),s))\big).$$

In words, $\gamma$ is a lower bound on the loss incurred by misreporting in case of an adversarial choice of $s \in S$. In non-trivial environments $\gamma > 0$. We say that a distribution $P$ is separating if there exists a separating set $\tilde S \subset S$ such that $P(\tilde s) > 0$ for all $\tilde s \in \tilde S$. In this case we also say that $M^P$ is a separating mechanism. In particular, let $\tilde p = \min_{s\in\tilde S} P(s)$.
Clearly one can choose $P$ such that $\tilde p \ge \frac{1}{|S|}$. The following is straightforward:

Lemma 4
If the environment $(T,S,A,u)$ is non-trivial and $P$ is a separating distribution over $S$ then for all $b_i \ne t_i$ and $t_{-i}$, $E_{M^P(t_i,t_{-i})}[u_i(t,s,r_i(t,s))] \ge E_{M^P(b_i,t_{-i})}[u_i(t,s,r_i((b_i,t_{-i}),s))] + \tilde p\gamma$. If, in addition, reactions are private, then for any $i$, $b_i \ne t_i$, $t_{-i}$ and $b_{-i}$:

$$E_{M^P(t_i,b_{-i})}[u_i(t,s,r_i(t_i,s))] \ge E_{M^P(b_i,b_{-i})}[u_i(t,s,r_i(b_i,s))] + \tilde p\gamma.$$

Proof: For any pair $b_i \ne t_i$ and for any $s \in S$, $u_i(t,s,r_i(t,s)) \ge u_i(t,s,r_i((b_i,t_{-i}),s))$. In addition, there exists some $\hat s = s(t_i,b_i)$, satisfying $P(\hat s) \ge \tilde p$, for which $u_i(t,\hat s,r_i(t,\hat s)) \ge u_i(t,\hat s,r_i((b_i,t_{-i}),\hat s)) + \gamma$. Therefore, for any $i$, $b_i \ne t_i \in T_i$ and for any $t_{-i}$, $E_{M^P(t_i,t_{-i})}[u_i(t,s,r_i(t,s))] \ge E_{M^P(b_i,t_{-i})}[u_i(t,s,r_i((b_i,t_{-i}),s))] + \tilde p\gamma$, as claimed.

Recall that if reactions are private then $r_i(t,s) = r_i(t_i,s)$, namely the optimal reaction of an agent, given some social alternative $s$, depends only on the agent's own type. Therefore we derive the result for private reactions by replacing $r_i((t_i,t_{-i}),s)$ with $r_i(t_i,s)$ on the left-hand side of the last inequality and $r_i((b_i,t_{-i}),s)$ with $r_i(b_i,s)$ on the right-hand side. QED

The following is an immediate corollary:

Corollary 1 If the environment $(T,S,A,u)$ is non-trivial and $P$ is a separating distribution over $S$ then:
1. Truthfulness is an ex-post Nash equilibrium of $M^P$.
2. If agent $i$ has private reactions then truthfulness is a strictly dominant strategy for $i$ in $M^P$.

An alternative natural imposing mechanism is that of a random dictator, where a random agent is chosen to dictate the social outcome.
Similarly, agents will be truthful in such a mechanism. However, the loss from misreporting can only be bounded below by $\frac{\gamma}{n}$, whereas the commitment mechanism gives a lower bound of $\gamma\tilde p \ge \frac{\gamma}{|S|}$, which is independent of the population size.

Fix a non-trivial environment $(T,S,A,u)$ with gap $\gamma$, separating set $\tilde S$, a $d$-sensitive objective function $F$ and a separating commitment mechanism $M^P$, with $\tilde p = \min_{s\in\tilde S} P(s)$. Set

$$\bar M^\epsilon_q(t) = (1-q)M^{\epsilon/2d}(t) + qM^P(t).$$

Theorem 1 If $q\tilde p\gamma \ge 2\epsilon$ then the mechanism $\bar M^\epsilon_q$ is ex-post Nash truthful. Furthermore, if agents have private reactions then $\bar M^\epsilon_q$ is strictly truthful.

Proof: Follows immediately from Lemmas 2 (set $W_i(t_i) = t_i$) and 4. QED
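The lottery $\bar M^\epsilon_q$ can be sketched as follows (illustrative Python, not the paper's implementation; the toy single-facility environment, the uniform $P$ and the helper names are assumptions):

```python
import math
import random

def combined_mechanism(reports, S, F, eps, d, q, P, optimal_reaction):
    """With probability 1-q run the exponential mechanism (non-imposing);
    with probability q draw s from P, ignoring the reports, and impose on
    each agent the reaction that is optimal for her *reported* type."""
    if random.random() < q:
        s = random.choices(S, weights=[P[x] for x in S])[0]    # commitment branch
        reactions = [optimal_reaction(b, s) for b in reports]  # imposed reactions
        return s, reactions
    n = len(reports)
    w = [math.exp(n * eps * F(reports, s) / (2 * d)) for s in S]
    s = random.choices(S, weights=w)[0]                        # exponential branch
    return s, None                                             # agents react freely

# Toy facility-location flavour: an alternative is a single facility position,
# and the imposed "reaction" is simply that facility.
S = [0.0, 0.25, 0.5, 0.75, 1.0]
P = {s: 1 / len(S) for s in S}                            # uniform, so p~ = 1/|S|
F = lambda t, s: -sum(abs(ti - s) for ti in t) / len(t)   # 1-sensitive objective
react = lambda b, s: s

random.seed(0)
s, reactions = combined_mechanism([0.1, 0.2, 0.9], S, F,
                                  eps=0.3, d=1, q=0.05, P=P, optimal_reaction=react)
```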
Set the parameters of the mechanism $\bar M^\epsilon_q(t)$ as follows:

• $\epsilon = \sqrt{\frac{3\tilde p\gamma d}{2n}\ln\left(\frac{n\tilde p\gamma|S|}{d}\right)}$;

• $q = \frac{2\epsilon}{\tilde p\gamma}$;

and consider populations of size $n > n_0$, where $n_0$ is the minimal integer satisfying $n_0 \ge \max\left\{\frac{12d}{\tilde p\gamma}\ln\left(\frac{\tilde p\gamma|S|}{d}\right),\ \frac{e^2d|S|^2}{\tilde p\gamma}\right\}$ and $\frac{n_0}{\ln(n_0)} \ge \frac{12d}{\tilde p\gamma}$.

Lemma 5 If $n > n_0$ then:

1. $q = \frac{2\epsilon}{\tilde p\gamma} < 1$.
2. $\epsilon < \tilde p\gamma$.
3. $n > \frac{ed}{\epsilon}|S|$.

Proof: Part (1): $\frac{n}{\ln(n)} > \frac{n_0}{\ln(n_0)} \ge \frac{12d}{\tilde p\gamma}$, which implies $\frac n2 > \frac{6d}{\tilde p\gamma}\ln(n)$. In addition, $n > n_0 \ge \frac{12d}{\tilde p\gamma}\ln(\frac{\tilde p\gamma|S|}{d})$, so $\frac n2 > \frac{6d}{\tilde p\gamma}\ln(\frac{\tilde p\gamma|S|}{d})$. Summing the two, $n > \frac{6d}{\tilde p\gamma}\ln\left(\frac{\tilde p\gamma|S|n}{d}\right)$, which implies $\left(\frac{\tilde p\gamma}{2}\right)^2 > \frac{3\tilde p\gamma d}{2n}\ln\left(\frac{\tilde p\gamma|S|n}{d}\right)$. Taking the square root and substituting for $\epsilon$ on the right-hand side yields $\frac{\tilde p\gamma}{2} > \epsilon$, and the claim follows.

Part (2) follows directly from part (1).

Part (3): $n > n_0 \ge \frac{e^2d|S|^2}{\tilde p\gamma}$ implies $\sqrt n > \frac{e\sqrt d\,|S|}{\sqrt{\tilde p\gamma}}$. In addition, $n > \frac{e^2d|S|^2}{\tilde p\gamma} \ge \frac{ed}{\tilde p\gamma|S|}$, which implies $1 < \ln\left(\frac{\tilde p\gamma|S|n}{d}\right)$. Combining these two inequalities we get $\sqrt n > \frac{e\sqrt d\,|S|}{\sqrt{\tilde p\gamma\ln(\tilde p\gamma|S|n/d)}}$. Multiplying both sides by $\sqrt n$ implies $n > \frac{e\sqrt d\,|S|\sqrt n}{\sqrt{\tilde p\gamma\ln(\tilde p\gamma|S|n/d)}} \ge \frac{ed}{\epsilon}|S|$. QED
Using these parameters we set $\hat M(t) = \bar M^\epsilon_q(t)$. Our main result is:

Theorem 2 (Main Theorem)
The mechanism $\hat M(t)$ is ex-post Nash truthful and, in addition, it $2\sqrt6\sqrt{\frac{d}{\tilde p\gamma n}\ln\left(\frac{n\tilde p\gamma|S|}{d}\right)}$-implements $F$ in ex-post Nash equilibrium, for $n > n_0$. If agents have private reactions the mechanism is strictly truthful and $2\sqrt6\sqrt{\frac{d}{\tilde p\gamma n}\ln\left(\frac{n\tilde p\gamma|S|}{d}\right)}$-implements $F$ in strictly dominant strategies.

Recall that for ex-post Nash implementation we only need to show that one ex-post Nash equilibrium yields the desired outcome.
Proof: Given the choice of parameters $\epsilon$ and $q$, Theorem 1 guarantees that $\hat M(t)$ is ex-post Nash truthful (and strictly truthful whenever reactions are private). Therefore, it is sufficient to show that for any type vector $t$,

$$E_{\hat M(t)}[F(t,s)] \ge \max_s F(t,s) - 2\sqrt6\sqrt{\frac{d}{\tilde p\gamma n}\ln\left(\frac{n\tilde p\gamma|S|}{d}\right)}.$$

As $F$ is positive, $E_{M^P(t)}[F(t,s)] \ge 0$ and so $E_{\hat M(t)}[F(t,s)] \ge (1-q)E_{M^{\epsilon/2d}(t)}[F(t,s)]$. By part (3) of Lemma 5 we are guaranteed that the condition on the size of the population in Lemma 3 holds, and so we can apply Lemma 3 to conclude that:

$$E_{\hat M(t)}[F(t,s)] \ge (1-q)\left(\max_s F(t,s) - \frac{3d}{n\epsilon}\ln\left(\frac{n\epsilon|S|}{d}\right)\right).$$

We substitute $q = \frac{2\epsilon}{\tilde p\gamma}$ and recall that $\max_s F(t,s) \le 1$. In addition, part (1) of Lemma 5 asserts that $\frac{2\epsilon}{\tilde p\gamma} < 1$. Therefore

$$E_{\hat M(t)}[F(t,s)] \ge \max_s F(t,s) - \frac{2\epsilon}{\tilde p\gamma} - \frac{3d}{n\epsilon}\ln\left(\frac{n\epsilon|S|}{d}\right) \ge \max_s F(t,s) - \frac{2\epsilon}{\tilde p\gamma} - \frac{3d}{n\epsilon}\ln\left(\frac{n\tilde p\gamma|S|}{d}\right),$$

where the last inequality is based on the fact that $\epsilon < \tilde p\gamma$, which is guaranteed by part (2) of Lemma 5. Substituting $\epsilon = \sqrt{\frac{3\tilde p\gamma d}{2n}\ln\left(\frac{n\tilde p\gamma|S|}{d}\right)}$ we conclude that

$$E_{\hat M(t)}[F(t,s)] \ge \max_s F(t,s) - \sqrt6\sqrt{\frac{d}{\tilde p\gamma n}\ln\left(\frac{n\tilde p\gamma|S|}{d}\right)} - \sqrt6\sqrt{\frac{d}{\tilde p\gamma n}\ln\left(\frac{n\tilde p\gamma|S|}{d}\right)}$$

and the result follows. QED
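The parameter choice behind Theorem 2 is mechanical enough to wrap in a helper (a sketch; the numeric inputs are invented, and the function assumes $n$ exceeds the threshold $n_0$ of Lemma 5):

```python
import math

def theorem2_parameters(n, d, gamma, p_min, S_size):
    """epsilon, q and the additive error bound for a population of size n,
    a d-sensitive F, gap gamma, separating probability p_min (= p~) and
    |S| = S_size. Sketch only; assumes n is above the threshold n_0."""
    pg = p_min * gamma
    log_term = math.log(n * pg * S_size / d)
    eps = math.sqrt(1.5 * pg * d / n * log_term)
    q = 2 * eps / pg
    error = 2 * math.sqrt(6) * math.sqrt(d / (pg * n) * log_term)
    return eps, q, error

# Invented environment: d = 1, gamma = 0.1, p~ = 0.25, |S| = 4, ten million agents.
eps, q, err = theorem2_parameters(n=10**7, d=1, gamma=0.1, p_min=0.25, S_size=4)
```

For a large enough population both the commitment probability `q` and the additive error shrink toward zero.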
One particular case of interest is the commitment mechanism $M^U$, where $U$ is the uniform distribution over the set $S$:

Corollary 2
Let $n_0$ be the minimal integer satisfying $n_0 \ge \max\left\{\frac{12d|S|}{\gamma}\ln\left(\frac{\gamma}{d}\right),\ \frac{e^2d|S|^3}{\gamma}\right\}$ and $\frac{n_0}{\ln(n_0)} \ge \frac{12d|S|}{\gamma}$. Then the mechanism $\hat M^U(t)$ $2\sqrt6\sqrt{\frac{d|S|}{\gamma n}\ln\left(\frac{n\gamma}{d}\right)}$-implements $F$ in ex-post Nash equilibrium for all $n > n_0$. If agents have private reactions, the mechanism $\hat M^U(t)$ $2\sqrt6\sqrt{\frac{d|S|}{\gamma n}\ln\left(\frac{n\gamma}{d}\right)}$-implements $F$ in strictly dominant strategies.

Proof: $P = U$ implies that the minimal probability is $\tilde p = \frac{1}{|S|}$. Plugging this into Theorem 2 gives the result. QED
Holding the parameters of the environment $d, \gamma, |S|$ fixed, the approximation inaccuracy of our mechanism converges to zero at a rate of $\sqrt{\frac{\ln(n)}{n}}$.

In summary, by concatenating the exponential mechanism, where truthfulness is $\epsilon$-dominant, with the commitment mechanism, we obtain a strictly truthful mechanism. In fact, this would hold true for any mechanism where truthfulness is $\epsilon$-dominant, not only the exponential mechanism.

Applications
We now turn to demonstrate the generic results in two concrete applications.
A monopolist producing digital goods, for which the marginal cost of production is zero, faces a set of indistinguishable buyers. Each buyer has a unit demand with a valuation in the unit interval. Agents are arranged in (mutually exclusive) cohorts and the valuations of cohort members are correlated. Each agent receives a private signal and her valuation is uniquely determined by the signals of all her cohort members. The monopolist wants to set a uniform price in order to maximize her average revenue per buyer.

Assume there are $N \cdot D$ agents, with agents labeled $(n,d)$ (the $d$-th agent in the $n$-th cohort). Agent $(n,d)$ receives a signal $X_{nd} \in \mathbb{R}$ and we denote a cohort's vector of signals by $X_n = \{X_{nd}\}_{d=1}^D$. We assume that the valuation of an agent, $V_{nd}$, is uniquely determined by the signals of her cohort members: $V_{nd} = V_{nd}(X_n)$. We assume that each agent's signal is informative in the sense that $V_{nd}(X_n) > V_{nd}(\hat X_n)$ whenever $X_n > \hat X_n$ (in each coordinate a weak inequality holds and for at least one coordinate a strict inequality holds). That is, whenever an individual's signal increases, the valuation of each of her cohort members increases.

Let $R_{(n,d)} = \{\text{'Buy'},\text{'Not buy'}\}$ be the set of reactions for agent $(n,d)$. The utility of $(n,d)$, given the vector of signals $X = \{X_n\}_{n=1}^N = \{\{X_{nd}\}_{d=1}^D\}_{n=1}^N$ and the price $p$, is

$$u_{(n,d)}(X,p,r_{(n,d)}) = \begin{cases} V_{nd}(X_n) - p & \text{if } r_{(n,d)} = \text{'Buy'},\\ 0 & \text{if } r_{(n,d)} = \text{'Not buy'.}\end{cases}$$

We assume that all valuations are restricted to the unit interval, prices are restricted to some finite grid $S = S_m = \{0, \frac1m, \frac2m, \ldots, 1\}$ (hence $|S| = m+1$), and $X_{nd}$ takes on only finitely many values. We assume the price grid is fine enough so that for any two vectors $X_n > \hat X_n$ there exists some price $p \in S$ such that $E(V_n \mid X_n) > p + \frac{1}{2m} > p > E(V_n \mid \hat X_n)$. Therefore, for any vector of announcements there exists a maximal price for which the optimal reaction is 'Buy'.
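The objective of this pricing environment, average revenue per buyer at a uniform grid price, can be sketched as follows (illustrative Python; the valuations are invented and cohorts are singletons, i.e., $D = 1$):

```python
def avg_revenue(valuations, p):
    """F(t, p): average revenue per buyer at the uniform price p."""
    return p * sum(1 for v in valuations if v > p) / len(valuations)

m = 10
grid = [j / m for j in range(m + 1)]            # S = {0, 1/m, ..., 1}
vals = [0.31, 0.48, 0.55, 0.72, 0.72, 0.95]     # one agent per cohort (D = 1)

best_price = max(grid, key=lambda p: avg_revenue(vals, p))
best_rev = avg_revenue(vals, best_price)
```

Restricting prices to the grid is what creates the gap $\gamma$ exploited by the commitment mechanism.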
For that price, if an agent announces a lower value then the best reaction would be 'Not buy', which will yield a loss of at least $\frac{1}{2m}$. Similarly, there exists a lowest price for which the optimal reaction is 'Not buy'; announcing a higher value there also yields a loss of at least $\frac{1}{2m}$. We conclude that the gap is $\gamma = \frac{1}{2m}$.

(To make this more concrete, one can think of the challenge of pricing a fire insurance policy for apartment owners. Each apartment building is a cohort that shares the same risk, and once the risk is determined, via aggregation of the agents' signals, each agent has a private valuation for the insurance.)

The monopolist wants to maximize $F(t,p) = \frac{p}{ND}\cdot|\{(n,d) : V_{nd}(X_n) > p\}|$, the average revenue per buyer. Note that a unilateral change in the type of one agent may change at most the buying behavior of the $D$ members of her cohort, resulting in a change of at most $\frac{pD}{ND} \le \frac{D}{ND}$ in the average revenue. As the population size is $ND$, we conclude that $F$ is $D$-sensitive. Let $M_{dg}$ be a mechanism as in Corollary 2, where the uniform commitment mechanism is used:

Corollary 3
For any $D$ there exists some $N_0$ such that for all $N > N_0$ the mechanism $M_{dg}$ $O\left(\sqrt{\frac{m^2}{N}\ln\left(\frac{N}{m}\right)}\right)$-implements $F$ in ex-post Nash equilibrium.

The literature on optimal pricing in this setting has so far concentrated on the private values case and has provided better approximations. For example, Balcan et al. [7], using sampling techniques from machine learning, provide a mechanism that $O(\frac{1}{\sqrt n})$-implements the maximal revenue without any restriction to a grid.

A Multi-Parameter Extension:
In the above setting we assumed a simple single-parameter type space. However, the technique provided does not hinge on this. In particular, it extends to more complex settings where agents have a multi-parameter type space. More concretely, consider a monopolist that produces $G$ types of digital goods, each with zero marginal cost of production. There are $N$ buyers, where each buyer assigns a value, in some bounded interval, to each subset of the $G$ goods (agents want at most a single unit of each good). The monopolist sets $G$ prices, one for each good, and once prices are set each agent chooses her optimal bundle. The challenge for the monopolist is to maximize the average revenue per buyer. In this model types are sufficiently diverse; in fact, for any two types there exists a price vector that yields different optimal consumptions. Therefore, the scheme we provide applies just as well to this setting.

The Facility Location Problem

Consider a population of $n$ agents located on the unit interval. An agent's location is her private information and a social planner needs to locate $K$ similar facilities in order to minimize the average distance agents travel to the nearest facility. We assume each agent wants to minimize her distance to the facility that services her. In particular, this entails that values (and reactions) are private. We furthermore assume that agent and facility locations are all restricted to a fixed finite grid $L = L(m) = \{0, \frac1m, \frac2m, \ldots, 1\}$. (For expositional reasons we restrict attention to the unit interval and to the average travel distance. Similar results can be obtained for other sets in $\mathbb{R}^k$ and for other metrics, such as distance squared.) Using the notation of the previous sections, let $T_i = L$, $S = L^K$, and let $R_i = L$. The utility of agent $i$ is

$$u_i(t_i,s,r_i) = \begin{cases} -|t_i - r_i| & \text{if } r_i \in s,\\ -1 & \text{otherwise.}\end{cases}$$

Hence, $r_i(b_i,s)$ is the facility in $s$ closest to $b_i$.
Let $F(t,s) = \frac1n\sum_{i=1}^n u_i(t_i,s,r_i(t_i,s))$ be the social utility function, which is $1$-sensitive (i.e., $d = 1$). First, consider the mechanism $\hat M^{LOC}_1$, based on the uniform commitment mechanism $\hat M^U$, which uses the uniform distribution over $S$, as in Corollary 2:

Corollary 4 $\exists n_0$ such that $\forall n > n_0$ the mechanism $\hat M^{LOC}_1$ $2\sqrt6\sqrt{\frac{m(m+1)^K}{n}\ln\left(\frac{n}{m}\right)}$-implements the optimal location in strictly dominant strategies.

Proof: Note that $\gamma = \frac1m$ and $|S| = (m+1)^K$, and the proof follows immediately from Theorem 2. QED
Now consider an alternative commitment mechanism. Consider the distribution $P$ over $S = L^K$ which chooses uniformly among the following $m$ alternatives: placing one facility at location $\frac jm$ and the remaining $K-1$ facilities at location $\frac{j+1}{m}$, for $j = 0,\ldots,m-1$. Note that for any $i$, any pair $b_i \ne t_i$ is separated by at least one alternative in this set. For this mechanism $\tilde p = \frac1m$. Now consider the mechanism $\hat M^{LOC}_2$, based on the commitment mechanism $M^P$:

Corollary 5 $\exists n_0$ such that $\forall n > n_0$ the mechanism $\hat M^{LOC}_2$ $\frac{2\sqrt6\,m}{\sqrt n}\sqrt{\ln\left(\frac{n(m+1)^K}{m^2}\right)}$-implements the optimal location in strictly dominant strategies.

Proof:
In analogy to the proof of Theorem 2, setting $\epsilon = \frac1m\sqrt{\frac{3}{2n}\ln\left(\frac{n(m+1)^K}{m^2}\right)}$ and $q = 2m^2\epsilon$. QED
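The separating distribution of Corollary 5 is easy to write down and to check on a small grid (an illustrative sketch; the grid size and the brute-force separation check are assumptions of the example):

```python
def alternatives(m, K):
    """The m alternatives of the separating distribution P: one facility at j/m,
    the remaining K-1 facilities at (j+1)/m, for j = 0, ..., m-1."""
    return [tuple([j / m] + [(j + 1) / m] * (K - 1)) for j in range(m)]

def nearest(x, s):
    return min(s, key=lambda f: abs(x - f))

m, K = 8, 3
S_sep = alternatives(m, K)
grid = [j / m for j in range(m + 1)]

# Every pair of distinct grid types is separated by at least one alternative,
# i.e., the nearest facility (the imposed reaction) differs for the two reports.
def separated(t, b):
    return any(nearest(t, s) != nearest(b, s) for s in S_sep)

all_sep = all(separated(t, b) for t in grid for b in grid if t != b)
```

Because only $m$ alternatives are needed, $\tilde p = 1/m$ instead of $1/(m+1)^K$, which is what buys the better rate in Corollary 5.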
For both mechanisms the approximation error converges to zero at a rate proportional to $1/\sqrt n$ as society grows. In addition, the approximation error of both mechanisms grows as the grid size $m$ grows; however, in the second mechanism the approximation deteriorates at a substantially slower rate.

Large Type Sets
The arguments underlying the generic approximately optimal mechanism for a finite number of social alternatives do not generally extend to models where the type set is large. However, in concrete models, where additional structure is assumed, such an extension may be possible. We demonstrate this in the facility location problem introduced in the previous section.

As before, we assume that each player is located on the unit interval and that her location is her private information. Formally, set $T = [0,1]$. A mechanism must (randomly) decide on the location of $K$ facilities in the unit interval. Let $S = [0,1]^K$ and consider the standard Borel $\sigma$-algebra, which we denote $\mathcal S$. The objective of the designer is to minimize the average distance a player must travel to a facility. Formally, the designer seeks to maximize

$$F(t,s) = -\frac1n\sum_{i=1}^n |t_i - r_i(t_i,s)|,$$

where $r_i(t_i,s)$ denotes the facility in $s$ that is closest to $t_i$. We use a continuous version of the Exponential Mechanism, where the probability of any event $\hat S \in \mathcal S$ is given by:

$$M^\epsilon(t)(\hat S) = \frac{\int_{\hat S} e^{n\epsilon F(t,s)}\,ds}{\int_S e^{n\epsilon F(t,s)}\,ds} \quad \forall \hat S \in \mathcal S.$$

We say that a mechanism $M$ provides $\epsilon$-differential privacy if $\frac{M(t)(\hat S)}{M(\hat t)(\hat S)} \le e^\epsilon$ for all $\hat S \in \mathcal S$ and for any pair of type tuples $t$ and $\hat t$ that differ on a single entry. McSherry and Talwar [22] prove the following (which is analogous to Lemma 1):

Lemma 6 (McSherry and Talwar [22]) If $F$ is $d$-sensitive then $M^{\epsilon/2d}(t)$ preserves $\epsilon$-differential privacy.

The proof is identical to that of Lemma 1 and is therefore omitted.
The solution concept we pursue in this section is deletion of dominated strategies. In fact, what we show in the sequel is that being truthful dominates significantly misreporting one's type. Thus, deletion of dominated strategies implies that agents resort to strategies that are 'almost' truthful. Consequently, we turn to study the approximation accuracy of the Exponential Mechanism whenever agents slightly misreport their types. We begin by considering truthful agents.

To state the next lemma we introduce the following notation. For $0 \le \alpha \le 1$ let $S_\alpha = S_\alpha(t) = \{\bar s \in S : F(t,\bar s) \ge \max_s F(t,s) - \alpha\}$, and $\bar S_\alpha = \bar S_\alpha(t) = S \setminus S_\alpha$. Let $\mu$ denote the uniform probability measure over $S$.

Lemma 7 (McSherry and Talwar [22]) If $\alpha \ge \frac{2d}{n\epsilon}\ln\left(\frac{\max_s F(t,s)}{\alpha\,\mu(S_\alpha)}\right)$ then $E_{M^{\epsilon/2d}(t)}[F(t,s)] \ge \max_s F(t,s) - 3\alpha$.

We include the proof for completeness.
Proof: Note first that

$$M^{\epsilon/2d}(t)(\bar S_{2\alpha}) \le \frac{M^{\epsilon/2d}(t)(\bar S_{2\alpha})}{M^{\epsilon/2d}(t)(S_\alpha)} = \frac{\int_{\bar S_{2\alpha}} e^{\frac{n\epsilon F(t,\bar s)}{2d}}\,d\bar s}{\int_{S_\alpha} e^{\frac{n\epsilon F(t,\bar s)}{2d}}\,d\bar s} \le \frac{\int_{\bar S_{2\alpha}} e^{\frac{n\epsilon(\max_s F(t,s)-2\alpha)}{2d}}\,d\bar s}{\int_{S_\alpha} e^{\frac{n\epsilon(\max_s F(t,s)-\alpha)}{2d}}\,d\bar s} = e^{-\frac{n\epsilon\alpha}{2d}}\cdot\frac{\mu(\bar S_{2\alpha})}{\mu(S_\alpha)} \le \frac{e^{-\frac{n\epsilon\alpha}{2d}}}{\mu(S_\alpha)},$$

where the first inequality follows from $M^{\epsilon/2d}(t)(S_\alpha) \le 1$, the second inequality follows from the definition of $\bar S_{2\alpha}$ and $S_\alpha$, and the third inequality follows from $\mu(\bar S_{2\alpha}) \le 1$. Hence $M^{\epsilon/2d}(t)$ returns $s \in S_{2\alpha}$ with probability at least $1 - \frac{e^{-\frac{n\epsilon\alpha}{2d}}}{\mu(S_\alpha)} \ge 1 - \frac{\alpha}{\max_s F(t,s)}$. Hence,

$$E_{M^{\epsilon/2d}(t)}[F(t,s)] \ge (\max_s F(t,s) - 2\alpha)\left(1 - \frac{\alpha}{\max_s F(t,s)}\right) \ge \max_s F(t,s) - 3\alpha.$$

QED
This result enables us to prove the following:
Corollary 6 $E_{M^\epsilon(t)}[F(t,s)] \ge \max_s F(t,s) - \frac{6}{n\epsilon}\ln\left(e + (n\epsilon)^{K+1}\right)$.

Proof:
Fix a tuple of players' locations $t \in T^n$ and let $s^*$ denote the alternative in $S$ that maximizes $F(t,\cdot)$. For any $\alpha > 0$, if $\hat s \in [0,1]^K$ satisfies $\max_k|\hat s_k - s^*_k| < \alpha$ then $\hat s \in S_\alpha$. To see this, note that for every $i$ the facility of $\hat s$ corresponding to $r_i(t_i,s^*)$ lies within distance $\alpha$ of it, hence

$$F(t,\hat s) = \frac1n\sum_{i=1}^n u_i(t_i,\hat s,r_i(t_i,\hat s)) \ge \frac1n\sum_{i=1}^n \big(u_i(t_i,s^*,r_i(t_i,s^*)) - \alpha\big) = F(t,s^*) - \alpha.$$

Consequently, provided $\alpha \le \frac12$, $\mu(S_\alpha) \ge \alpha^K$.

Set $\alpha = \frac{2}{n\epsilon}\ln\left(e + (n\epsilon)^{K+1}\right)$. We argue that $\alpha \ge \frac{2}{n\epsilon}\ln\left(\frac{\max_s F(t,s)}{\alpha\,\mu(S_\alpha)}\right)$, which implies that we can apply Lemma 7 (with privacy parameter $2\epsilon$, recalling that $M^\epsilon = M^{2\epsilon/2d}$ for $d = 1$). To see this, recall that $\max_s F(t,s) \le 1$, and using our bound on $\mu(S_\alpha)$ it suffices to show that $e + (n\epsilon)^{K+1} \ge 1/\alpha^{K+1}$, which indeed is the case as $\alpha \ge \frac{1}{n\epsilon}$. By Lemma 7,

$$E_{M^\epsilon(t)}[F(t,s)] \ge \max_s F(t,s) - 3\alpha = \max_s F(t,s) - \frac{6}{n\epsilon}\ln\left(e + (n\epsilon)^{K+1}\right),$$

as required. QED
We now turn to analyze the case where agents misreport their location.
Lemma 8 For every $s$:
• $|F(b_i,t_{-i},s) - F(t_i,t_{-i},s)| \le \frac1n|t_i - b_i|$; and
• $|F(b,s) - F(t,s)| \le \max_i|t_i - b_i|$.

Proof:
To derive the first part note that

$$-u_i(b_i,s,r_i(b_i,s)) = |b_i - r_i(b_i,s)| \le |b_i - r_i(t_i,s)| = |b_i - t_i + t_i - r_i(t_i,s)| \le |b_i - t_i| + |t_i - r_i(t_i,s)| = |b_i - t_i| - u_i(t_i,s,r_i(t_i,s)),$$

and (by a similar analysis) $-u_i(t_i,s,r_i(t_i,s)) \le |b_i - t_i| - u_i(b_i,s,r_i(b_i,s))$. Hence $|u_i(b_i,s,r_i(b_i,s)) - u_i(t_i,s,r_i(t_i,s))| \le |b_i - t_i|$ and we get that

$$|F(b_i,t_{-i},s) - F(t_i,t_{-i},s)| = \frac1n|u_i(b_i,s,r_i(b_i,s)) - u_i(t_i,s,r_i(t_i,s))| \le \frac1n|t_i - b_i|.$$

The second part follows by iteratively applying the first part $n$ times. QED

Lemma 9 If $|b_i - t_i| \le \beta$ for all $i$ then $|\max_s F(t,s) - \max_s F(b,s)| \le \beta$.

Proof:
Let $s^t \in \operatorname{argmax}_s\{F(t,s)\}$ and $s^b \in \operatorname{argmax}_s\{F(b,s)\}$. Using the triangle inequality, $|t_i - b_i| + |b_i - r_i(b_i,s^b)| \ge |t_i - r_i(b_i,s^b)|$, and noting that $|t_i - r_i(b_i,s^b)| \ge |t_i - r_i(t_i,s^b)|$, we get that $|t_i - b_i| + |b_i - r_i(b_i,s^b)| \ge |t_i - r_i(t_i,s^b)|$. Hence

$$\frac1n\sum_{i=1}^n -\big(|t_i - b_i| + |b_i - r_i(b_i,s^b)|\big) \le \frac1n\sum_{i=1}^n -|t_i - r_i(t_i,s^b)| = F(t,s^b) \le F(t,s^t).$$

Noting that $\frac1n\sum_{i=1}^n -|b_i - r_i(b_i,s^b)| = F(b,s^b)$ we get that

$$F(b,s^b) - F(t,s^t) \le \frac1n\sum_{i=1}^n |t_i - b_i| \le \beta. \qquad (2)$$

A similar argument yields

$$F(t,s^t) - F(b,s^b) \le \beta. \qquad (3)$$

Combining inequalities (2) and (3) we conclude that $|\max_s F(t,s) - \max_s F(b,s)| = |F(t,s^t) - F(b,s^b)| \le \beta$, as claimed. QED

Lemma 10 If $|b_i - t_i| \le \beta$ for all $i$ then $E_{M^\epsilon(b)}[F(t,s)] \ge \max_s F(t,s) - 2\beta - \frac{6}{n\epsilon}\ln\left(e + (n\epsilon)^{K+1}\right)$.

Proof:
For any finite set of locations $s \subset [0,1]$, traveling from $t_i$ to the point closest to $t_i$ in $s$ is not longer than taking a detour via $b_i$ and then traveling from $b_i$ to the point closest to $b_i$ in $s$, i.e., $|t_i - r_i(t_i,s)| \le |t_i - b_i| + |b_i - r_i(b_i,s)|$. Therefore,

$$E_{M^\epsilon(b)}[F(t,s)] \ge E_{M^\epsilon(b)}[F(b,s)] - \frac1n\sum_{i=1}^n|t_i - b_i| \ge E_{M^\epsilon(b)}[F(b,s)] - \beta.$$

By Corollary 6, $E_{M^\epsilon(b)}[F(b,s)] \ge \max_s F(b,s) - \frac{6}{n\epsilon}\ln\left(e + (n\epsilon)^{K+1}\right)$. By Lemma 9, $\max_s F(b,s) \ge \max_s F(t,s) - \beta$. Combining the three inequalities above gives

$$E_{M^\epsilon(b)}[F(t,s)] \ge \max_s F(t,s) - 2\beta - \frac{6}{n\epsilon}\ln\left(e + (n\epsilon)^{K+1}\right),$$

as claimed. QED

Deviations from Truthfulness in the Exponential Mechanism

We bound the potential gain of an agent located at $t_i$ who reports $b_i$:

Lemma 11
Using the Exponential Mechanism for the facility location problem, if $\epsilon \le \frac12$ then for any $i$, any $b_i, t_i \in T_i$ and any $t_{-i} \in T_{-i}$,

$$E_{M^\epsilon(b_i,t_{-i})}[u_i(t_i,s,r_i(t_i,s))] - E_{M^\epsilon(t_i,t_{-i})}[u_i(t_i,s,r_i(t_i,s))] \le 4\epsilon|t_i - b_i|.$$

Proof: By Lemma 8, $|F(b_i,t_{-i},s) - F(t_i,t_{-i},s)| \le \frac1n|t_i - b_i|$. Plugging this into the definition of the Exponential Mechanism we get (without loss of generality $u_i \ge 0$; otherwise apply the argument to the shifted utility $1 + u_i$, which leaves the difference of expectations unchanged):

$$E_{M^\epsilon(b_i,t_{-i})}[u_i(t_i,s,r_i(t_i,s))] = \int_{s\in S} u_i(t_i,s,r_i(t_i,s))\,dM^\epsilon(b_i,t_{-i})(s) = \int_{s\in S} u_i(t_i,s,r_i(t_i,s))\,\frac{e^{n\epsilon F(b_i,t_{-i},s)}}{\int_{s'\in S} e^{n\epsilon F(b_i,t_{-i},s')}\,ds'}\,ds$$

$$\le \int_{s\in S} u_i(t_i,s,r_i(t_i,s))\,\frac{e^{n\epsilon\left(F(t_i,t_{-i},s)+\frac{|t_i-b_i|}{n}\right)}}{\int_{s'\in S} e^{n\epsilon\left(F(t_i,t_{-i},s')-\frac{|t_i-b_i|}{n}\right)}\,ds'}\,ds = e^{2\epsilon|t_i-b_i|}\int_{s\in S} u_i(t_i,s,r_i(t_i,s))\,dM^\epsilon(t_i,t_{-i})(s) = e^{2\epsilon|t_i-b_i|}\,E_{M^\epsilon(t_i,t_{-i})}[u_i(t_i,s,r_i(t_i,s))].$$

The proof is completed by noting that, as $|t_i - b_i| \le 1$ and $\epsilon \le \frac12$, $e^{2\epsilon|t_i-b_i|} - 1 \le 4\epsilon|t_i-b_i|$. QED
Consider a commitment mechanism induced by the following distribution $P$ over the set $S = [0,1]^K$: First, choose a random integer $X$ uniformly from $\{1, 2, 3, \ldots, \bar m\}$, where the parameter $\bar m$ will be set below. Next, choose a number $Y$, randomly and uniformly, from the interval $[0, 2^X - 1]$. Now let $s$ be the alternative where one facility is located at $\frac{Y}{2^X}$ and the other $K-1$ facilities at $\frac{Y+1}{2^X}$.

Lemma 12 If $|b_i - t_i| \ge 2^{-(\bar m - 1)}$ then $E_{M^P(t_i,t_{-i})}[u_i(t_i,s)] \ge E_{M^P(b_i,t_{-i})}[u_i(t_i,s)] + \frac{|t_i - b_i|^2}{8\bar m}$.

Proof: We first consider the case $b_i \le t_i - 2^{-(\bar m - 1)}$. Assume $X$ and $Y$ are chosen such that $2^{-X} < |t_i - b_i| \le 2^{-X+1}$ and $\frac{Y}{2^X} \in [b_i, \frac{b_i + t_i}{2}]$. As a result, the facility assigned to $i$ whenever she announces $b_i$ is located at $\frac{Y}{2^X}$. However, if she announces her true location $t_i$, she is assigned a facility located at $\frac{Y+1}{2^X}$. Consequently, $u_i(t_i,s,r_i(t_i,s)) \ge u_i(t_i,s,r_i(b_i,s)) + 2^{-X-1} \ge u_i(t_i,s,r_i(b_i,s)) + \frac{t_i - b_i}{4}$. In words, for this specific choice of $X$ and $Y$, misreporting one's type leads to a loss exceeding $\frac{t_i - b_i}{4}$.

The probability of choosing the unique $X$ satisfying $2^{-X} < |t_i - b_i| \le 2^{-X+1}$ is $\frac{1}{\bar m}$. Conditional on this event, the probability of choosing $Y$ satisfying $\frac{Y}{2^X} \in [b_i, \frac{b_i+t_i}{2}]$ is at least $\frac{t_i - b_i}{2}$. Since the mechanism is imposing, for an arbitrary choice of $X$ and $Y$ misreporting is not profitable. Therefore, the expected loss from misreporting exceeds $\frac{|t_i - b_i|^2}{8\bar m}$.

The proof of the complementary case, $b_i \ge t_i + 2^{-(\bar m - 1)}$, uses similar arguments and is omitted. QED
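A Monte-Carlo sketch of this distribution and of the loss a misreporting agent suffers under the induced commitment mechanism (illustrative Python; the locations $t$, $b$, the seed and the sample size are arbitrary):

```python
import random

def sample_P(m_bar, K):
    """One facility at Y/2^X, the other K-1 at (Y+1)/2^X, with X uniform on
    {1, ..., m_bar} and Y uniform on [0, 2^X - 1]."""
    X = random.randint(1, m_bar)
    Y = random.uniform(0, 2 ** X - 1)
    return [Y / 2 ** X] + [(Y + 1) / 2 ** X] * (K - 1)

def nearest(x, s):
    return min(s, key=lambda f: abs(x - f))

random.seed(1)
m_bar, K = 5, 2
t, b = 0.7, 0.2                 # |t - b| = 0.5 >= 2^{-(m_bar - 1)} = 0.0625

trials = 50_000
truth = lie = 0.0
for _ in range(trials):
    s = sample_P(m_bar, K)
    truth += -abs(t - nearest(t, s))   # utility of reporting truthfully
    lie += -abs(t - nearest(b, s))     # utility when the imposed reaction is
                                       # the facility nearest the lie b
gap = (truth - lie) / trials           # estimated expected loss from misreporting
```

Pointwise the truthful reaction is never worse, so the estimated gap is nonnegative, and for this pair it is bounded away from zero at every scale the distribution probes.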
As in the generic construction, let $\bar M^\epsilon_q(t) = (1-q)M^\epsilon(t) + qM^P(t)$. Note that whenever $q\frac{|t_i-b_i|^2}{8\bar m} \ge 4\epsilon|t_i - b_i|$, being truthful dominates any announcement satisfying $|b_i - t_i| \ge 2^{-(\bar m - 1)}$. In particular, this holds whenever $q \ge 16\epsilon\bar m 2^{\bar m}$.

Set $\epsilon = \frac{\sqrt{K+1}}{\sqrt n}$, $\bar m = \left\lceil \log_2\left(\frac{n^{1/4}}{\sqrt{K+1}\ln n}\right)\right\rceil$, and $q = 16\epsilon\bar m 2^{\bar m}$, and denote by $\hat M^{LOC}$ the mechanism $\bar M^\epsilon_q$ for this choice of parameters.

Theorem 3
There exists $n_0 = n_0(K)$ such that $\hat M^{LOC}$ $32\sqrt{K+1}\frac{\ln n}{n^{1/4}}$-implements $F$ in undominated strategies for all $n > n_0$.

Proof:
We first observe that there exists $n_q = n_q(K)$ such that $q < 1$ for all $n > n_q$, and hence the mechanism is well defined. This also implies that for agent $i$, reporting $b_i$ such that $|b_i - t_i| \ge 2^{-(\bar m - 1)}$ is dominated by reporting $t_i$. Similarly, there exists $n_\alpha = n_\alpha(K)$ such that $\alpha = \frac{2}{n\epsilon} \ln \big( e + (n\epsilon)^{K+1} \big) \le 0.25$ for all $n > n_\alpha$, as required in the proof of Corollary 6. Finally, there exists $n_{\bar m} = n_{\bar m}(K)$ such that $\bar m \le \ln n$ for all $n > n_{\bar m}$. In the following we assume that $n > \max(n_q, n_\alpha, n_{\bar m})$.

There are two sources for the additive error of $\hat M_{LOC}$:

1. The commitment mechanism introduces an additive error of at most $2q = 32 \epsilon \bar m\, 2^{\bar m}$. Noting that $2^{\bar m} \le \frac{2 n^{1/3}}{\sqrt{K+1} \ln n}$ and $\bar m \le \ln n$, and substituting for $\epsilon$, we get that $2q \le \frac{64}{n^{1/3}} \le 2\sqrt{K+1}\, n^{-1/3} \ln n$ for all large enough $n$.

2. The exponential mechanism introduces an additive error of $2^{-(\bar m - 1)} + \frac{2}{n\epsilon} \ln \big( e + (n\epsilon)^{K+1} \big)$ (see Lemma 10). Note that $\frac{2}{n\epsilon} \ln \big( e + (n\epsilon)^{K+1} \big) \le \frac{2(K+1)}{n\epsilon} \ln (e + n\epsilon)$, and substituting for $\epsilon$ we get that there exists $n_2 = n_2(K)$ such that for all $n > n_2$ this additive error is bounded by $2^{-(\bar m - 1)} + 28\sqrt{K+1}\, n^{-1/3} \ln n$. In addition, $2^{\bar m - 1} = \frac{1}{2} \cdot 2^{\bar m} \ge \frac{n^{1/3}}{2\sqrt{K+1} \ln n}$, which implies that the error is bounded by $2\sqrt{K+1}\, n^{-1/3} \ln n + 28\sqrt{K+1}\, n^{-1/3} \ln n = 30\sqrt{K+1}\, n^{-1/3} \ln n$.

Setting $n_0 = \max(n_q, n_\alpha, n_{\bar m}, n_2)$, we get that for all $n > n_0$ the total additive error is bounded by $2\sqrt{K+1}\, n^{-1/3} \ln n + 30\sqrt{K+1}\, n^{-1/3} \ln n = 32\sqrt{K+1}\, n^{-1/3} \ln n$. QED
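The shape of the combined mechanism, a lottery between a differentially private component and an imposing commitment component, can be sketched as follows. This is an illustrative sketch: the alternative set, quality function, and weight scaling are placeholders, not the exact objects analyzed above.

```python
import math
import random

def exponential_mechanism(quality, alternatives, eps, rng):
    """McSherry-Talwar exponential mechanism: sample an alternative s
    with probability proportional to exp(eps * quality(s) / 2), assuming
    quality changes by at most 1 when one agent's report changes."""
    weights = [math.exp(eps * quality(s) / 2) for s in alternatives]
    r = rng.uniform(0.0, sum(weights))
    acc = 0.0
    for s, w in zip(alternatives, weights):
        acc += w
        if r <= acc:
            return s
    return alternatives[-1]   # guard against floating-point round-off

def combined_mechanism(quality, alternatives, sample_commitment, eps, q, rng):
    """With probability 1 - q run the differentially private component;
    with probability q run the imposing commitment component."""
    if rng.random() < q:
        return sample_commitment(rng)
    return exponential_mechanism(quality, alternatives, eps, rng)
```

Under the first branch a misreport gains an agent at most roughly $2\epsilon$, while under the second branch Lemma 12 makes any substantial misreport costly in expectation; choosing $q$ large enough relative to $\epsilon$ lets the loss dominate the gain, which is the balancing act behind the choice of parameters above.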
The mechanisms proposed in this paper are based on two pillars – a differentially private mechanism on the one hand and an imposing mechanism on the other. In the following we discuss the importance of each of these pillars for the results obtained. In addition, we discuss some of the limitations of our results.
McSherry and Talwar [22] observed that differential privacy is sufficient to yield approximate implementation in $\epsilon$-dominant strategies. However, as we show below, differential privacy does not generally imply implementation with a stronger solution concept.

Our example is a pricing mechanism that utilizes the exponential mechanism and hence yields an $\epsilon$-dominant implementation that (assuming parties act truthfully) approximates the optimal revenue well. However, there are dominant strategies in the example that involve misrepresentation and lead to a significantly inferior revenue.

Example 2
Consider a monopolist producing a digital good in unlimited supply who faces $n$ buyers, each having unit demand at a valuation that is either $0.5\mu$ or $\mu$, where $0 < \mu \le 1$. The monopolist cannot distinguish among buyers and is restricted to choosing a price in the set $\{0.5\mu, \mu\}$. Assume the monopolist is interested in maximizing the average revenue per buyer. The optimal outcome for the auctioneer is hence
$$OPT(\bar t) = \max_{s \in \{0.5\mu,\, \mu\}} \frac{s \cdot |\{i : t_i \ge s\}|}{n}.$$

(We consider the average revenue per buyer as the objective function, instead of the total revenue, in order to comply with the requirement that the value of the objective function is restricted to the unit interval.)

If the monopolist uses the appropriate exponential mechanism then it is $\epsilon$-dominant for agents to announce their valuations truthfully, resulting in an almost optimal revenue. However, one should note that the probability that the exponential mechanism chooses the lower of the two prices increases with the number of buyers that announce $0.5\mu$. Hence, it is dominant for buyers to announce $0.5\mu$. This may lead to inferior results. In particular, whenever all agents value the good at $\mu$ but announce $0.5\mu$, the mechanism will choose the price $0.5\mu$ with high probability, leading to an average revenue of $0.5\mu$ per buyer, which is half the optimal revenue per buyer.

It is tempting to think that our notion of imposition trivializes the result, i.e., that, regardless of the use of a differentially private mechanism, the ability to force agents to react sub-optimally according to their announced types already inflicts sufficient disutility to deter untruthful announcements. The next example demonstrates that such naive imposition is generally insufficient. Intuitively, the reason is that inducing both truthfulness and efficiency requires a strong bound on an agent's benefit from misreporting: the utility from misreporting should be smaller than the disutility from being committed to a sub-optimal reaction.
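The incentive failure in Example 2 is easy to see numerically. The sketch below computes the exponential mechanism's distribution over the two prices when all buyers announce the low value; the weight scaling and the value $\mu = 0.8$ are assumptions of this illustration.

```python
import math

def price_distribution(announcements, prices, eps):
    """Exponential mechanism over prices, with quality equal to the
    average revenue per buyer; the weight of price s is
    exp(eps * n * quality(s) / 2), a standard scaling when quality has
    sensitivity 1/n in a single announcement."""
    n = len(announcements)
    def quality(s):
        return s * sum(1 for v in announcements if v >= s) / n
    weights = {s: math.exp(eps * n * quality(s) / 2) for s in prices}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

mu = 0.8
prices = [0.5 * mu, mu]
# All ten buyers value the good at mu but announce 0.5 * mu:
dist = price_distribution([0.5 * mu] * 10, prices, eps=2.0)
```

Here the low price receives almost all of the probability mass, so the average revenue is close to $0.5\mu$ rather than the optimal $\mu$: announcing low never hurts a buyer and sometimes lowers the price she pays, which is why it is dominant.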
Example 3
Consider a digital-goods pricing problem with $n$ agents, where the valuation of each agent is either $1/n$ or $\mu$, and the possible prices are $1/n$ and $\mu$. In this example the optimal price is $\mu$ whenever sufficiently many agents are of type $\mu$ (in particular, whenever all agents are).

Consider the following mechanism: with high probability it implements the optimal price with respect to the announcements, and with low probability it uses an imposing mechanism. Note that the strategy profile in which every agent announces a valuation of $1/n$ is a Nash equilibrium. This announcement is clearly optimal if an agent's valuation is indeed $1/n$. If an agent's valuation, on the other hand, is $\mu$, then complying with this strategy yields a utility of almost $\mu$, whereas deviating to a truthful announcement results in a price of $\mu$ with high probability, and hence a utility of $0$.

Therefore, the monopolist's average revenue per buyer is always $1/n$. This is substantially inferior to the optimal outcome, which could be as high as $\mu$, whenever all agents are of the high type.

The Nash equilibrium from Example 3 survives even if we modify the mechanism to be fully imposing (i.e., it always imposes the optimal reaction). Thus, the above-mentioned sub-optimality holds.

We believe that the notion of imposition is natural in many settings, and that to some extent imposition is already implicitly integrated into the mechanism design literature. In fact, any mechanism that is not ex-post individually rational imposes its outcome on the players: it imposes participation and ignores the possibility players have to 'walk away' once the results are known. Moreover, models that involve transfers treat these as imposed reactions: once the social choice and transfers are determined, players must comply (consider taxation and auction payments as an example).

Model Limitations
There are three overarching limitations to the technique we present: (1) the generic mechanism only works for objective functions that are insensitive; (2) we consider settings where the reaction set of agents is rich enough that any pair of types can be separated by the optimal reaction on at least one social alternative; and (3) the size of the set of social alternatives cannot grow too fast as the set of agents grows. We discuss these below.
Many objective functions of interest are actually insensitive and comply with our requirements. Revenue in the setting of digital goods, and social welfare (i.e., the sum of agents' valuations), are typical examples. We note that although we focused our attention on social functions whose sensitivity is constant (independent of $n$), one can apply Theorem 2 also in the case where $d = d(n)$, as long as $\frac{d(n)}{n} \to 0$ as $n \to \infty$.

There are, however, important settings where the objective function is sensitive and hence our techniques cannot be applied. An important example is that of revenue maximization in a single-unit auction – it is easy to come up with extreme settings where a change in the type of a single agent can drastically change the revenue outcome. Consider, e.g., the case where all agents value the good at zero, resulting in a maximal revenue of zero. A unilateral change in the valuation of any single agent from zero to one will change the maximal revenue from zero to one as well.

However, even in this case the domain of type profiles (valuation profiles) that demonstrate sensitivity is quite small – for instance, if agents' valuations are drawn uniformly from $[0,1]$, then although the worst-case sensitivity of the maximal revenue is $1$, the 'typical' sensitivity would be of order $1/n$. In this case, the work of Nissim et al. [24] may turn out to be applicable, as it yields differentially private mechanisms where the deviation from the maximum depends on a local notion of sensitivity called smooth sensitivity. We leave the examination of this approach to future work.

The second limitation is the requirement that agents' reaction sets are sufficiently rich. In fact, what we need for the results to hold is that for any pair of types of an agent there exists some social alternative for which the set of optimal reactions for the first type is disjoint from the set of optimal reactions for the second type.
For example, in an auction setting, we require that for each pair of agents' valuations the auctioneer can propose a price such that one type will buy the good, while the other will refuse.

Small number of social alternatives
The approximation accuracy we achieve in Theorem 2 is proportional to $\sqrt{\frac{\ln(n \tilde p |S|)}{\tilde p n}}$. Note that $\tilde p \ge 1/|S|$. A naive use of the theorem yields accuracy $O\big(\sqrt{|S| \ln n / n}\big)$, yielding a meaningful approximation as long as $|S|$ (as a function of $n$) grows slower than $n / \ln n$.

As we have demonstrated in Section 4.2, one can sometimes design a commitment mechanism realizing a much bigger $\tilde p$, ideally independent of $|S|$. If that is the case, then Theorem 2 yields an approximation error of $O\big(\sqrt{\frac{\ln(n|S|)}{n}}\big)$, allowing the number of social alternatives to be as high as an exponential function of the number of agents. For larger $S$, the approximation error may not vanish as $n$ increases. Two interesting examples of such settings are matching problems, where each social alternative specifies a list of pairs, and multi-unit auctions where the number of goods is half the number of bidders.

The framework we presented combines a differentially private mechanism with an imposing one. Our general results refer to a 'universal' construction of an imposing mechanism (the uniform one), yet the specific examples we analyze demonstrate that imposing mechanisms tailor-made to the specific setting can improve upon the results.

Similarly, it is not imperative to use the exponential mechanism as the first component, and other differentially private mechanisms may be adequate. In fact, the literature on differential privacy provides various alternatives that may outperform the exponential mechanism in a specific context. Some examples can be found in Dwork et al. [12], where the mechanism has a noisy component that is calibrated to global sensitivity, or in Nissim et al. [24], where a similar noisy component is calibrated to smooth sensitivity. The latter work also uses random sampling to achieve similar properties. To learn more, the reader is referred to the recent survey of Dwork [11].
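As an illustration of the first alternative, the noisy component of Dwork et al. [12] adds Laplace noise with scale equal to the global sensitivity divided by $\epsilon$. A minimal sketch, with function names that are ours, follows.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) by inverting the CDF of a uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(value, sensitivity, eps, rng):
    """Release a numeric statistic with eps-differential privacy, where
    `sensitivity` bounds how much the statistic can change when a single
    agent's data changes (the global sensitivity of Dwork et al. [12])."""
    return value + laplace_noise(sensitivity / eps, rng)
```

For a statistic such as average revenue, which changes by at most $1/n$ when one buyer's report changes, the noise scale is $\frac{1}{n\epsilon}$, so the released value concentrates around the true one as the number of agents grows.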
References

[1] Dilip Abreu and Hitoshi Matsushima. "Virtual Implementation in Iteratively Undominated Strategies: Incomplete Information." Mimeo, Princeton University, 1992.
[2] Dilip Abreu and Arunava Sen. "Subgame Perfect Implementation: A Necessary and Almost Sufficient Condition." Journal of Economic Theory, 50:285–299, 1990.
[3] Nabil Al-Najjar and Rann Smorodinsky. "Pivotal Players and the Characterization of Influence." Journal of Economic Theory, 92(2):318–342, 2000.
[4] Nabil Al-Najjar and Rann Smorodinsky. "The Efficiency of Competitive Mechanisms under Private Information." Journal of Economic Theory, 137:383–403, 2007.
[5] Noga Alon, Michal Feldman, Ariel D. Procaccia, and Moshe Tennenholtz. "Strategyproof Approximation of the Minimax on Networks." Mathematics of Operations Research, 35(3):513–526, 2010.
[6] Moshe Babaioff, Ron Lavi, and Elan Pavlov. "Single-Value Combinatorial Auctions and Algorithmic Implementation in Undominated Strategies." Journal of the ACM, 56(1), 2009.
[7] Maria-Florina Balcan, Avrim Blum, Jason D. Hartline, and Yishay Mansour. "Mechanism Design via Machine Learning." In FOCS, pages 605–614. IEEE Computer Society, 2005.
[8] Edward H. Clarke. "Multipart Pricing of Public Goods." Public Choice, 18:19–33, 1971.
[9] John Duggan. "Virtual Bayesian Implementation." Econometrica, 65:1175–1199, 1997.
[10] Cynthia Dwork. "Differential Privacy." In Michele Bugliesi, Bart Preneel, Vladimiro Sassone, and Ingo Wegener, editors, ICALP (2), volume 4052 of Lecture Notes in Computer Science, pages 1–12. Springer, 2006.
[11] Cynthia Dwork. "The Differential Privacy Frontier (Extended Abstract)." In Omer Reingold, editor, TCC, volume 5444 of Lecture Notes in Computer Science, pages 496–502. Springer, 2009.
[12] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. "Calibrating Noise to Sensitivity in Private Data Analysis." In Shai Halevi and Tal Rabin, editors, TCC, volume 3876 of Lecture Notes in Computer Science, pages 265–284. Springer, 2006.
[13] Dimitris Fotakis and Christos Tzamos. "Winner-Imposing Strategyproof Mechanisms for Multiple Facility Location Games." In Workshop on Internet and Network Economics (WINE), 2010.
[14] Allan Gibbard. "Manipulation of Voting Schemes: A General Result." Econometrica, 41:587–601, 1973.
[15] Andrew V. Goldberg, Jason D. Hartline, Anna R. Karlin, Michael Saks, and Andrew Wright. "Competitive Auctions." Games and Economic Behavior, 55(2):242–269, 2006.
[16] Theodore F. Groves. "Incentives in Teams." Econometrica, 41:617–631, 1973.
[17] David K. Levine and Wolfgang Pesendorfer. "When Are Agents Negligible?" The American Economic Review, 85(5):1160–1170, 1995.
[18] George J. Mailath and Andrew Postlewaite. "Asymmetric Information Bargaining Problems with Many Agents." The Review of Economic Studies, 57(3):351–367, 1990.
[19] Andreu Mas-Colell, Michael D. Whinston, and Jerry R. Green. Microeconomic Theory. Oxford University Press, 1995.
[20] Hitoshi Matsushima. "A New Approach to the Implementation Problem." Journal of Economic Theory, 45:128–144, 1988.
[21] Richard McLean and Andrew Postlewaite. "Informational Size and Incentive Compatibility." Econometrica, 70(6):2421–2453, 2002.
[22] Frank McSherry and Kunal Talwar. "Mechanism Design via Differential Privacy." In FOCS, pages 94–103, 2007.
[23] Herve Moulin. "On Strategy-Proofness and Single-Peakedness." Public Choice, 35:437–455, 1980.
[24] Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. "Smooth Sensitivity and Sampling in Private Data Analysis." In David S. Johnson and Uriel Feige, editors, STOC, pages 75–84. ACM, 2007.
[25] Ariel D. Procaccia and Moshe Tennenholtz. "Approximate Mechanism Design without Money." In ACM Conference on Electronic Commerce, pages 177–186, 2009.
[26] Kevin Roberts. "The Characterization of Implementable Choice Rules." In Jean-Jacques Laffont, editor, Aggregation and Revelation of Preferences. Papers presented at the 1st European Summer Workshop of the Econometric Society, pages 321–349, 1979.
[27] Donald J. Roberts and Andrew Postlewaite. "The Incentives for Price-Taking Behavior in Large Exchange Economies." Econometrica, 44(1):115–127, 1976.
[28] Mark A. Satterthwaite, Aldo Rustichini, and Steven R. Williams. "Convergence to Efficiency in a Simple Market with Incomplete Information." Econometrica, 62(1):1041–1063, 1994.
[29] Mark A. Satterthwaite. "Strategy-Proofness and Arrow's Conditions: Existence and Correspondence Theorems for Voting Procedures and Social Welfare Functions." Journal of Economic Theory, 10:187–217, 1975.
[30] Mark A. Satterthwaite and Steven R. Williams. "The Rate of Convergence to Efficiency in the Buyer's Bid Double Auction as the Market Becomes Large." Review of Economic Studies, 56:477–498, 1989.
[31] James Schummer. "Almost-Dominant Strategy Implementation." Games and Economic Behavior, 48(1):154–170, 2004.
[32] James Schummer and Rakesh V. Vohra. "Strategy-Proof Location on a Network." Journal of Economic Theory, 104(2):405–428, 2004.
[33] James Schummer and Rakesh V. Vohra. "Mechanism Design without Money." In N. Nisan, T. Roughgarden, É. Tardos, and V. Vazirani, editors, Algorithmic Game Theory, chapter 10. Cambridge University Press, 2007.
[34] Roberto Serrano and Rajiv Vohra. "Some Limitations of Virtual Bayesian Implementation." Econometrica, 69:785–792, 2001.
[35] Roberto Serrano and Rajiv Vohra. "A Characterization of Virtual Bayesian Implementation." Games and Economic Behavior, 50:312–331, 2005.
[36] Jeroen Swinkels. "Efficiency of Large Private Value Auctions." Econometrica, 69(1):37–68, 2001.
[37] William S. Vickrey. "Counterspeculations, Auctions, and Competitive Sealed Tenders." Journal of Finance, 16:15–27, 1961.
[38] Pinyan Lu, Xiaorui Sun, Yajun Wang, and Zeyuan Zhu. "Asymptotically Optimal Strategy-Proof Mechanisms for Two-Facility Games." In