Impartial selection with prior information
IOANNIS CARAGIANNIS,
Aarhus University, Denmark
GEORGE CHRISTODOULOU,
University of Liverpool, United Kingdom
NICOS PROTOPAPAS,
University of Liverpool, United Kingdom

We study the problem of impartial selection, a topic that lies at the intersection of computational social choice and mechanism design. The goal is to select the most popular individual among a set of community members. The input can be modeled as a directed graph, where each node represents an individual, and a directed edge indicates the nomination or approval of one community member by another. An impartial mechanism is robust to potential selfish behavior of the individuals and provides appropriate incentives to voters to report their true preferences by ensuring that the chance of a node becoming a winner does not depend on its outgoing edges. The goal is to design impartial mechanisms that select a node with an in-degree that is as close as possible to the highest in-degree. We measure the efficiency of such a mechanism by the difference of these in-degrees, known as its additive approximation.

Following the success in the design of auction and posted pricing mechanisms with good approximation guarantees for welfare and profit maximization, we study the extent to which prior information on voters' preferences could be useful in the design of efficient deterministic impartial selection mechanisms with good additive approximation guarantees. We consider three models of prior information, which we call the opinion poll, the a priori popularity, and the uniform model. We analyze the performance of a natural selection mechanism that we call approval voting with default (AVD) and show that it achieves an
O(√(𝑛 ln 𝑛)) additive guarantee for opinion poll inputs and an O(ln² 𝑛) guarantee for a priori popularity inputs, where 𝑛 is the number of individuals. We consider this polylogarithmic bound as our main technical contribution. We complement this last result by showing that our analysis is close to tight, giving an Ω(ln 𝑛) lower bound. This lower bound holds even in the uniform model, which is the simplest among the three models.

1 INTRODUCTION
We study the problem of impartial selection, which has recently attracted a lot of attention from a social choice theory and mechanism design point of view. The goal is to select the most popular individual among a set of community members. The input can be modeled as a directed graph, where each node represents an individual, and a directed edge indicates the nomination or approval of one community member by another. A selection mechanism takes a graph as input and returns a single node as the winner. This would be a trivial task from the algorithmic point of view, but the challenge here is that the true preferences of the individuals are private information known only to them. In settings where each individual is simultaneously a voter and a candidate, and therefore also has a personal interest in becoming a winner, she may manipulate the mechanism and misreport her true preferences if this could increase her chance to win. An impartial mechanism is robust to such behavior and provides appropriate incentives to voters to report their true preferences by ensuring that the chance of a node becoming a winner does not depend on its outgoing edges. Unfortunately, it is well known that the obvious selection mechanism that always returns the highest in-degree node as the winner suffers from possible manipulation, i.e., it is not impartial. The challenge is to design an impartial selection mechanism that selects a winner with an in-degree that approximates well the highest in-degree.

Impartial selection was introduced independently by Holzman and Moulin [19] and Alon et al. [1].
The former work considered minimal axiomatic properties that impartial selection rules should satisfy, while the latter quantified the efficiency loss with the notion of the approximation ratio, defined as the worst-case ratio of the maximum in-degree over the in-degree of the node which is selected by the mechanism. This line of research concluded with the work of Fischer and Klimm [15], who proposed randomized impartial mechanisms with optimal approximation ratio. Caragiannis et al. [10] argued that additive approximation may be a more appropriate measure to evaluate impartial mechanisms, and supported this view by providing mechanisms with sublinear additive approximation guarantees.

The most natural and well-studied selection rule is the approval voting rule (AV), which has received much attention in social choice theory [22]. In our context, AV always returns the node with the highest in-degree. Unfortunately, as already mentioned, this mechanism is not impartial. The reason is that, in case of a tie at the maximum degree, some of the nodes involved in the tie may have an incentive to vote non-truthfully. Fortunately, there is a simple fix of this deficiency, which is inspired by the simpler plurality with default mechanism by Holzman and Moulin [19]: in case of a tie, select as winner a predetermined/default node. We refer to this modified version of AV as approval voting with default (AVD). Although (a careful implementation of) this tweak re-establishes impartiality, the modification comes at a cost, as the preselection should be independent of the input graph. Imagine a scenario where there is a tie between two nodes with the maximum degree. In the unfortunate situation where the default node receives only a small number of votes, this might lead to a poor additive approximation, linear in the number of nodes.

Most of the previous work considers randomized mechanisms, hence the efficiency is measured in expectation.
However, in the design of selection mechanisms, determinism is arguably more desirable. Unfortunately, all the known deterministic mechanisms have very poor, linear additive approximation, and it is wide open whether substantially better mechanisms exist.

In this work, we take a different route: we study the extent to which prior information on the preferences of the voters could allow the design of deterministic impartial selection mechanisms with good additive approximation guarantees. Our focus is on the analysis of AVD, for which our design choice boils down to an effective choice of the default node, with the help of the prior information. We assume that the preferences are drawn from a probability distribution that is known to the mechanism. We assume throughout voter independence, that is, the random choice of preferences for each voter is independent of those of the others.

We propose different models that capture several aspects of the problem. In the opinion poll model, we assume that the prior information concerns the preferences of different (types of) voters. The designer has access to the probability 𝑝_𝑖(𝑆) with which voter 𝑖 (or all voters of type 𝑖) would approve a subset 𝑆 of candidates. The a priori popularity model assumes that the designer has prior information about the popularity of each candidate 𝑗, which is summarized by a scalar 𝑝_𝑗. We assume that each candidate 𝑗 receives independently a vote from each voter with probability 𝑝_𝑗. As a special case, we also study the uniform model, in which every candidate 𝑗 has the same popularity 𝑝_𝑗 = 𝑝.

Note that these models capture different information scenarios; the former assumes that the designer has access to opinion poll statistics for each (type of) voter, while the latter assumes that the designer has access only to aggregate information about the popularity of a candidate.
This aggregation is over the whole population of voters, as the actual information may be sanitized to preserve the anonymity of those (types) who participated in the poll. Note that popularity may measure other forms of biases over specific individuals. For example, consider the situation in which a PC

We should note that, with correlated distributions, there is not much one can achieve (see Example 1 in the appendix).
Our main focus is the analysis of the AVD mechanism. We begin with the opinion poll model and, as a warm-up, in Section 3, we present a simple mechanism that ignores the edges of the graph and selects as winner a pre-selected node of maximum expected in-degree. We call this mechanism the constant mechanism and show that it is Θ(√(𝑛 ln 𝑛))-additive (Theorems 5 and 6). The AVD mechanism that selects as default the node of highest expected in-degree can only perform better than the constant mechanism. Our main and most technically involved result shows that this version of AVD is O(ln² 𝑛)-additive in the a priori popularity model (Theorem 7). We complement this result by showing that our analysis is tight, up to a logarithmic factor: even for uniform inputs where all candidates are a priori equally popular, there is a class of instances for which AVD has additive approximation Ω(ln 𝑛), for any choice of the default node (Theorem 13).

The analysis of the constant mechanism serves multiple purposes. It illustrates that, when prior information is available, a low expected additive approximation is achievable even by simple deterministic mechanisms, using the simplest statistic of the prior, namely the expected in-degree. This is in sharp contrast to the no-prior case, where deterministic mechanisms have very poor performance for both additive [10] and multiplicative approximation [1]. Second, the analysis of the constant mechanism is quite simple; e.g., the upper bound follows by a simple application of the Hoeffding bound. However, it introduces some of the techniques (such as tail inequalities and reverse Chernoff bounds) that our strongest results in Sections 4 and 5 use.
Finally, the upper bound on the expected additive approximation of the constant mechanism serves as a benchmark of efficiency for all impartial mechanisms with priors.

The analysis of AVD is considerably more involved than the analysis of the constant mechanism. Roughly speaking, the important quantity that affects the additive approximation is the difference between the maximum in-degree and the in-degree of the default node when two or more nodes are tied with the highest in-degree, times the probability of this tie. In the a priori popularity model, the in-degree 𝑑 of a node is a random variable following the binomial probability distribution with parameters 𝑛 (the number of trials) and 𝑝 (the probability that a trial is successful). Furthermore, the in-degrees of different nodes are independent. Hence, bounding the probability of a tie at the maximum in-degree is related to (but more demanding than) bounding the probability that two out of many independent binomial random variables take the same maximum value.

Unfortunately, even though problems of this kind have been studied in the literature of applied probability and statistics (e.g., see [9, 12, 13]), the existing results have not proved useful for our purposes. When the difference between the maximum in-degree and the in-degree of the default node is large, Chernoff bounds can unsurprisingly be used to show that the probability of a tie at maximum is negligible and, hence, the contribution to the expectation of the quantity of interest is negligible as well. The real challenge is when the difference of the two in-degrees is small. In this regime, it turns out that we need sharp bounds on the ratio Pr[𝑑 = 𝑥]/Pr[𝑑 ≥ 𝑥] (also called the hazard function) for a binomial random variable 𝑑 and a value 𝑥 that is close to the expectation 𝜇 = 𝑝𝑛 of 𝑑 (i.e., such that Pr[𝑑 ≥ 𝑥] is only polynomially small in terms of 𝑛).
As we show in Lemma 12, en route to proving Theorem 7, the ratio Pr[𝑑 = 𝑥]/Pr[𝑑 ≥ 𝑥] is at most O(√(ln 𝑛 / min{𝜇, 𝑛 − 𝜇})) in this case. We believe that this technical tool can be of independent interest and could find applications elsewhere. The bound is asymptotically tight; its tightness for 𝑝 = 1/2 can be established using statements that bound from below the probability that a binomial random variable is far from its expectation. These statements are less popular than Chernoff bounds but rather standard.

Impartial selection was introduced independently by Alon et al. [1] and Holzman and Moulin [19]. Alon et al. [1] proposed the approximation ratio as the fraction between the highest in-degree and the (expected) in-degree of the winner. They provided a simple, 4-approximate randomized mechanism and noted that no impartial mechanism can achieve an approximation ratio less than 2, even with randomization. If randomization is not allowed, however, the approximation ratio can be arbitrarily large. Later on, Fischer and Klimm [15] introduced a 2-approximate randomized mechanism, closing that gap. Bousquet et al. [8] proposed a randomized mechanism with an arbitrarily close to optimal approximation ratio, provided that the maximum in-degree of the graph is large enough.

Holzman and Moulin [19] considered various mechanisms under the more restricted family of graphs where each node has out-degree equal to 1. Among others, they proposed the plurality with default mechanism, which can be seen as a version of the AVD mechanism tailored to that family of inputs. They also came up with an important impossibility result regarding the quality of impartial mechanisms: any deterministic impartial mechanism can guarantee either to never select 0 in-degree nodes or to always select a unanimously nominated node, but never both.
Variations of the problem are studied in [6, 11, 23, 28, 29].

Additive approximation was first studied by Caragiannis et al. [10]. Therein, they propose simple randomized mechanisms with sub-linear additive approximation guarantees. They also show that a specific class of deterministic mechanisms cannot achieve additive approximation lower than 𝑛 − Ω(√𝑛). For general deterministic mechanisms, however, they only show a lower bound of 2, while no deterministic mechanism is known with additive approximation smaller than 𝑛 − 1. Closing this gap remains a tantalizing open question.

Impartiality is encountered in various domains. In the AI literature, a related application is peer-reviewing [3, 20, 21, 25]. In another direction, Babichenko et al. [4, 5] present impartial mechanisms for the selection of the most influential node in a network. The main difference with our setting is that the influence of a specific node does not depend merely on its in-degree, but also on all the paths leading to that node. Mackenzie [24] analyses the papal conclave through the lens of impartiality.

Our motivation for considering prior information comes from its successful application to auction and posted pricing mechanisms. An excellent survey of related work can be found in [17]. We should also note that there is an interesting connection of the techniques needed for the analysis in the a priori popularity model with the literature on random graphs [7, 16] and, in particular, results regarding the multiplicity of the highest in-degree in 𝐺_{𝑛,𝑝} graphs. Unfortunately, such results have a focus on asymptotics: for example, en route to proving bounds on the chromatic number, Erdős and Wilson [14] showed that the maximum degree is unique with probability 1 − 𝑜(1) in 𝐺_{𝑛,1/2} graphs. Instead, for proving our approximation guarantees, we need sharp estimations of the hidden 𝑜(1) term. So, such results are not directly applicable to our analysis.

The rest of the paper is structured as follows. We begin with preliminary definitions and tail inequality statements in Section 2. Section 3 is devoted to the analysis of the constant mechanism in the opinion poll model. Our polylogarithmic additive approximation for AVD is presented in Section 4 and the logarithmic lower bound in Section 5. We conclude with open problems in Section 6. Two additional observations are given in the appendix.
2 PRELIMINARIES
We denote by 𝑁 the set of individuals (or agents). For a set 𝑆 ⊂ 𝑁, we use 𝑁_𝑆 as an abbreviation of 𝑁 \ 𝑆 and write 𝑁_{𝑖,...,𝑗} instead of 𝑁_{{𝑖,...,𝑗}} for simplicity. A nomination profile 𝐺 = (𝑁, 𝐸) is a directed graph without self-loops that has the agents of 𝑁 as nodes. Each directed edge (𝑖, 𝑗) ∈ 𝐸 represents a nomination from agent 𝑖 to agent 𝑗. Occasionally, we refer to the outgoing edges as votes. We define as 𝑥_𝑖 = {(𝑖, 𝑗) ∈ 𝐸} the set of outgoing edges from node 𝑖 ∈ 𝑁 and use the tuple x = (𝑥_1, ..., 𝑥_𝑛) as an alternative representation of 𝐺. We use x_{−𝑖} to denote the graph (𝑁, 𝐸 \ ({𝑖} × 𝑁)).

Denoting by G the set of all nomination profiles over the agents of 𝑁, a (deterministic) selection mechanism is simply a function 𝑓 : G → 𝑁 which maps each nomination profile to a single node (the winner). A deterministic selection mechanism 𝑓 is impartial when, for any agent 𝑖 ∈ 𝑁, any graph x ∈ G, and any set 𝑥′_𝑖 of outgoing edges from node 𝑖, it is 𝑓(x) = 𝑖 if and only if 𝑓(𝑥′_𝑖, x_{−𝑖}) = 𝑖. In other words, no agent (node) has any incentive to misreport her preferences (its outgoing edges).

We use 𝑑_𝑗(𝑆, x) to denote the in-degree of node 𝑗 ∈ 𝑁, taking into account only the incoming edges from nodes of set 𝑆, given the profile x, i.e., 𝑑_𝑗(𝑆, x) = |{𝑖 ∈ 𝑆 : (𝑖, 𝑗) ∈ 𝐸}|. We use the simplified notation 𝑑_𝑗(x) when 𝑆 = 𝑁_𝑗. Δ(x) denotes the maximum in-degree of the profile x, i.e., Δ(x) = max_{𝑗∈𝑁} 𝑑_𝑗(x). Following the work of Caragiannis et al. [10], we evaluate the performance of a mechanism 𝑓 on a nomination profile x using the additive approximation Δ(x) − 𝑑_{𝑓(x)}(x), i.e., the difference between the maximum in-degree over all nodes and the in-degree of the winner returned by mechanism 𝑓.

We assume that the input is a random nomination profile (among the agents of 𝑁), selected according to a probability distribution P over all such profiles.
We assume voter independence, which means that the distribution P is a product Π_{𝑖∈𝑁} P_𝑖 of independent distributions, where P_𝑖 denotes the distribution according to which node 𝑖 selects its set of outgoing edges. We examine a hierarchy of three families of distributions, giving rise to opinion poll, a priori popularity, and uniform instances (or models), respectively:

• In the opinion poll model, each node 𝑖 ∈ 𝑁 selects its set of outgoing edges among all possible edges to nodes of 𝑁_𝑖, according to the probability distribution P_𝑖. Due to voter independence, the in-degree 𝑑_𝑗(x) of each node 𝑗 is equal to the sum Σ_{𝑖∈𝑁_𝑗} 𝑥_{𝑖𝑗} of independent Bernoulli random variables, each denoting whether the directed edge from node 𝑖 to node 𝑗 exists in the nomination profile (𝑥_{𝑖𝑗} = 1) or not (𝑥_{𝑖𝑗} = 0). For simplicity of notation, we assume the set 𝑁 to have 𝑛 + 1 nodes, so that a nomination profile consists of 𝑛² + 𝑛 independent random variables.

• The a priori popularity model is the special case of opinion poll where each node 𝑗 has a popularity 𝑝_𝑗 ∈ [0, 1] and the directed edge (𝑖, 𝑗) exists in the nomination profile with probability 𝑝_𝑗, independently of all other edges. In this case, the in-degree of node 𝑗 follows the binomial distribution B(𝑛, 𝑝_𝑗), where 𝑛 denotes the number of trials and 𝑝_𝑗 is the success probability for each trial.

• We call uniform the special case of the a priori popularity model with 𝑝_𝑗 = 𝑝 for every agent 𝑗.

We assume that prior information about the underlying probability distributions is known in advance. Hence, we examine selection mechanisms that are defined using this information and evaluate them in terms of their expected additive approximation E_{x∼P}[Δ(x) − 𝑑_{𝑓(x)}(x)]. We use the term 𝛼-additive to refer to a selection mechanism with expected additive approximation at most 𝛼. Our aim is to design deterministic impartial selection mechanisms that have as low as possible expected additive approximation for any distribution from the above classes. Our positive results apply to opinion poll or to a priori popularity distributions; our proofs of negative results use the simplest, uniform ones.

We include some tail bounds here that will be very useful later in our analysis.
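As a concrete illustration of these definitions (a minimal sketch, not part of the paper's formal development; the helper names and encoding are our own choices), the a priori popularity model and the additive approximation measure can be expressed as:

```python
import random

def sample_profile(num_nodes, popularity, rng):
    """A priori popularity model: the directed edge (i, j) exists with
    probability popularity[j], independently for all pairs (no self-loops)."""
    return {(i, j) for i in range(num_nodes) for j in range(num_nodes)
            if i != j and rng.random() < popularity[j]}

def in_degree(j, edges):
    """d_j(x): number of incoming edges of node j."""
    return sum(1 for (_, k) in edges if k == j)

def additive_approximation(winner, edges, num_nodes):
    """Delta(x) - d_winner(x): the quantity a selection mechanism keeps small."""
    max_deg = max(in_degree(j, edges) for j in range(num_nodes))
    return max_deg - in_degree(winner, edges)
```

The uniform model corresponds to a constant popularity vector, while the opinion poll model would replace `sample_profile` by per-voter distributions over sets of outgoing edges.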
Lemma 1 (Hoeffding [18]).
Let 𝑋_1, 𝑋_2, ..., 𝑋_𝑛 be independent random variables such that Pr[𝑎_𝑗 ≤ 𝑋_𝑗 ≤ 𝑏_𝑗] = 1 for every 𝑗. Then, the expectation of the random variable 𝑋 = Σ_{𝑗=1}^{𝑛} 𝑋_𝑗 is E[𝑋] = Σ_{𝑗=1}^{𝑛} E[𝑋_𝑗] and, furthermore, for every 𝜈 ≥ 0,

Pr[|𝑋 − E[𝑋]| ≥ 𝜈] ≤ 2 exp(−2𝜈² / Σ_{𝑗=1}^{𝑛} (𝑏_𝑗 − 𝑎_𝑗)²).

Lemma 2 (Chernoff bounds).
Let 𝐵 ∼ B(𝑛, 𝑝) and 𝜇 = 𝑛𝑝. Then, the following inequalities hold:

• Let 𝑥 ≥ 𝜇. Then
Pr[𝐵 ≥ 𝑥] ≤ exp(−(𝑥 − 𝜇)² 𝑛 / (2𝜇(𝑛 − 𝜇)))  (1)
if 𝜇 ≥ 𝑛/2, and
Pr[𝐵 ≥ 𝑥] ≤ exp(−(𝑥 − 𝜇)² / (3𝜇))  (2)
if 𝜇 < 𝑛/2 and, furthermore, 𝑥 ≤ 2𝜇.

• Let 𝑥 ≤ 𝜇. Then,
Pr[𝐵 ≤ 𝑥] ≤ exp(−(𝜇 − 𝑥)² 𝑛 / (2𝜇(𝑛 − 𝜇)))  (3)
if 𝜇 ≤ 𝑛/2, and
Pr[𝐵 ≤ 𝑥] ≤ exp(−(𝜇 − 𝑥)² / (3(𝑛 − 𝜇)))  (4)
if 𝜇 > 𝑛/2 and, furthermore, 𝑥 ≥ 2𝜇 − 𝑛.

Inequalities (2) and (4) are the standard Chernoff bounds; e.g., see [26]. Inequalities (1) and (3) are due to Okamoto [27]. The following lemma (see [2, Lemma 4.7.2, page 116]) indicates that Chernoff bounds are asymptotically tight.
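As a quick numerical sanity check of a bound of this type, the standard Chernoff inequality (2) can be compared with the exact binomial tail computed from the probability mass function (a small illustrative sketch; the parameter values below are our own):

```python
from math import comb, exp

def binom_tail(n, p, x):
    """Exact Pr[B >= x] for B ~ B(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x, n + 1))

def chernoff_bound_2(n, p, x):
    """Inequality (2): exp(-(x - mu)^2 / (3 mu)); valid when mu < n/2 and x <= 2*mu."""
    mu = n * p
    return exp(-((x - mu) ** 2) / (3 * mu))
```

For instance, with 𝑛 = 100 and 𝑝 = 0.3 (so 𝜇 = 30), the bound dominates the exact tail for every 𝑥 in {31, ..., 60}.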
Lemma 3.
Let 𝐵 ∼ B(𝑛, 𝑝) and 𝛿 ∈ [0, 1 − 𝑝). Then,

Pr[𝐵 ≥ 𝑛(𝑝 + 𝛿)] ≥ (1 / √(8𝑛(𝑝 + 𝛿)(1 − 𝑝 − 𝛿))) · ((𝑝/(𝑝 + 𝛿))^{𝑝+𝛿} ((1 − 𝑝)/(1 − 𝑝 − 𝛿))^{1−𝑝−𝛿})^{𝑛}.

Corollary 4 (Inverse Chernoff bound).
Let 𝐵 ∼ B(𝑛, 1/2) and 𝛿 ∈ [0, 1/4]. Then,

Pr[𝐵 ≥ 𝑛(1/2 + 𝛿)] ≥ (1/√(2𝑛)) exp(−8𝛿²𝑛).

Proof.
By applying Lemma 3 to the random variable 𝐵, we have

Pr[𝐵 ≥ 𝑛(1/2 + 𝛿)] ≥ (1/√(2𝑛)) · (((1/2)/(1/2 + 𝛿))^{1/2+𝛿} ((1/2)/(1/2 − 𝛿))^{1/2−𝛿})^{𝑛}
= (1/√(2𝑛)) · ((1/√(1 − 4𝛿²)) · ((1/2 − 𝛿)/(1/2 + 𝛿))^{𝛿})^{𝑛}
≥ (1/√(2𝑛)) exp(−(4𝛿²/(1 − 2𝛿))𝑛)
≥ (1/√(2𝑛)) exp(−8𝛿²𝑛).

The first inequality follows by Lemma 3 and since (𝑝 + 𝛿)(1 − 𝑝 − 𝛿) is at most 1/4. The second inequality follows since 1/√(1 − 4𝛿²) ≥ 1 and by the inequality 𝑒^𝑧 ≥ 1 + 𝑧 for 𝑧 ∈ R, which implies that (1/2 + 𝛿)/(1/2 − 𝛿) ≤ exp(4𝛿/(1 − 2𝛿)). The third inequality follows since 𝛿 ≤ 1/4. □

3 THE CONSTANT MECHANISM
We first consider a simple mechanism, which we call the constant mechanism. This mechanism ignores all edges and awards a particular preselected node, which we call the default winner (or default node). The selection of the default winner depends only on the prior. For example, the criterion that we consider here is to select as default winner a node of maximum expected in-degree, i.e., 𝑓_co ∈ argmax_{𝑣∈𝑁} E[𝑑_𝑣(x)]. Our first statement is an upper bound on the additive approximation of the constant mechanism; its proof follows by a simple application of the Hoeffding bound (Lemma 1).
Theorem 5.
For opinion poll inputs, the constant mechanism that uses the maximum expected in-degree node as the default winner has expected additive approximation O(√(𝑛 ln 𝑛)).

Proof.
Recall that, in the opinion poll model, the in-degree of node 𝑣 is the sum of 𝑛 independent Bernoulli random variables, i.e., 𝑑_𝑣(x) = Σ_{𝑢∈𝑁_𝑣} 𝑥_{𝑢𝑣}. Then, a simple application of the Hoeffding bound (Lemma 1) yields

Pr[𝑑_𝑣(x) ≥ E[𝑑_𝑣(x)] + √(𝑛 ln 𝑛)] ≤ Pr[|𝑑_𝑣(x) − E[𝑑_𝑣(x)]| ≥ √(𝑛 ln 𝑛)] ≤ 2𝑛^{−2}.

Hence, the probability that some node has in-degree at least E[𝑑_{𝑓_co}(x)] + √(𝑛 ln 𝑛) is at most the probability that some node 𝑣 has in-degree at least E[𝑑_𝑣(x)] + √(𝑛 ln 𝑛). By the inequality above and the union bound, this probability is at most 2𝑛^{−2} · (𝑛 + 1) ≤ 3/𝑛. Thus, the expected maximum in-degree is

E[Δ(x)] ≤ E[𝑑_{𝑓_co}(x)] + √(𝑛 ln 𝑛) + 𝑛 · (3/𝑛) ≤ E[𝑑_{𝑓_co}(x)] + 3 + √(𝑛 ln 𝑛),

and the expected additive approximation E[Δ(x) − 𝑑_{𝑓_co}(x)] is no more than 3 + √(𝑛 ln 𝑛). □

Our next statement shows that this analysis is tight; the lower bound construction uses uniform instances with 𝑝 = 1/2 and, consequently, it holds for any selection of the default winner. The proof exploits the reverse Chernoff bound (Corollary 4).
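Before turning to the matching lower bound, a quick Monte Carlo sketch makes the √(𝑛 ln 𝑛) scale of the constant mechanism's gap concrete (the popularity distribution, parameter values, and the slack constant 5 are illustrative choices, not from the paper):

```python
import math
import random

def constant_mechanism_gap(n, trials, seed=1):
    """Estimate E[Delta(x) - d_default(x)] for the constant mechanism, where
    node j's in-degree is Binomial(n, p_j) and the default winner is a node
    of maximum expected in-degree (i.e., of maximum p_j)."""
    rng = random.Random(seed)
    p = [rng.uniform(0.2, 0.8) for _ in range(n)]
    default = max(range(n), key=lambda j: p[j])
    total = 0
    for _ in range(trials):
        degrees = [sum(rng.random() < pj for _ in range(n)) for pj in p]
        total += max(degrees) - degrees[default]
    return total / trials
```

On such inputs the estimated gap stays within a small multiple of √(𝑛 ln 𝑛), in line with Theorems 5 and 6.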
Theorem 6.
The constant mechanism has expected additive approximation Ω(√(𝑛 ln 𝑛)), even when applied to uniform inputs.

Proof.
Consider a uniform prior with 𝑝 = 1/2 over a set of 𝑛 + 1 nodes, where 𝑛 is large, e.g., 𝑛 ≥ 80. Then, the in-degree of any node 𝑢 is a random variable following the binomial probability distribution B(𝑛, 1/2). Let 𝑢* = 𝑓_co be the node returned by the constant mechanism; clearly, E[𝑑_{𝑢*}(x)] = 𝑛/2. Denote by E the event that some node different than 𝑢* has in-degree at least 𝑛/2 + (1/4)√(𝑛 ln 𝑛). By applying Corollary 4 with 𝛿 = √(ln 𝑛 / (16𝑛)) (the fact that 𝑛 is large guarantees that 𝛿 ≤ 1/4) to the in-degree 𝑑_𝑢(x), we have

Pr[𝑑_𝑢(x) ≥ 𝑛/2 + (1/4)√(𝑛 ln 𝑛)] ≥ 1/(𝑛√2)

for every node 𝑢 ≠ 𝑢* and, hence,

Pr[E] ≥ 1 − (1 − 1/(𝑛√2))^{𝑛} ≥ 1 − 𝑒^{−1/√2} ≥ √2 − 1,

where the second inequality follows by the inequality (1 − 𝑟/𝑛)^{𝑛} ≤ 𝑒^{−𝑟} and the third one by the inequality 𝑒^𝑧 ≥ 1 + 𝑧 (and, thus, 𝑒^{1/√2} ≥ 1 + 1/√2). Denoting by Ē the complement of E and observing that E does not depend on the in-degree of 𝑢*, we have

E[Δ(x)] ≥ E[max_{𝑢≠𝑢*} 𝑑_𝑢(x) · 1{E}] + E[𝑑_{𝑢*}(x) · 1{Ē}]
≥ (𝑛/2 + (1/4)√(𝑛 ln 𝑛)) Pr[E] + E[𝑑_{𝑢*}(x)] Pr[Ē]
= E[𝑑_{𝑢*}(x)] + (1/4)√(𝑛 ln 𝑛) · Pr[E]
≥ E[𝑑_{𝑢*}(x)] + (1/10)√(𝑛 ln 𝑛),

and the desired lower bound on the expected additive approximation E[Δ(x) − 𝑑_{𝑢*}(x)] follows. □

4 THE AVD MECHANISM
We devote this section to the AVD mechanism and its analysis on a priori popularity instances. AVD uses a preselected node 𝑡 as the default winner. To give a formal definition of the mechanism, we say that a non-default node 𝑘 beats another non-default node 𝑗 in the nomination profile x if 𝑑_𝑘(𝑁_{𝑗,𝑘,𝑡}, x) > 𝑑_𝑗(𝑁_{𝑗,𝑘,𝑡}, x), i.e., if node 𝑘 has higher in-degree than node 𝑗 when ignoring incoming edges from nodes 𝑗, 𝑘, and the default node 𝑡. Node 𝑘 beats (respectively, is beaten by) the default node 𝑡 if 𝑑_𝑘(𝑁_{𝑘,𝑡}, x) > 𝑑_𝑡(𝑁_{𝑘,𝑡}, x) (respectively, 𝑑_𝑘(𝑁_{𝑘,𝑡}, x) < 𝑑_𝑡(𝑁_{𝑘,𝑡}, x)). When applied on the nomination profile x, AVD returns as the winner 𝑤 the node that beats every other node, or the default node if no node that beats every other node exists. We remark that the default node is not prohibited from winning by beating every other node.

Notice that, by misreporting its outgoing edges, a node cannot affect the set of other nodes it beats. Hence, AVD is clearly impartial. In addition, the above formal definition allows us to observe that the in-degree of the winner returned by AVD is never lower than the in-degree of the default node 𝑡.
Indeed, when the default node is not the winner, it is beaten by the winner, who has an at least as high in-degree. Hence, the upper bound of O(√(𝑛 ln 𝑛)) on the expected additive approximation of the constant mechanism on opinion poll instances carries over to the AVD mechanism when the node of maximum expected in-degree is used as the default node.

The case in which no node beats every other node refines the notion of a tie that we informally used in Section 1.
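The formal definition of AVD above translates directly into code; a compact sketch (the graph encoding and helper names are our own choices):

```python
def avd(nodes, edges, t):
    """Approval voting with default: return the node that beats every other
    node, or the default node t if no such node exists.  `edges` is a set of
    directed pairs (voter, nominee)."""
    def deg(j, ignored_voters):
        # In-degree of j, not counting votes cast by nodes in `ignored_voters`.
        return sum(1 for (i, k) in edges if k == j and i not in ignored_voters)

    def beats(a, b):
        # Votes cast by a, b and the default t are always ignored in the
        # comparison, so no node can influence whether it beats another node
        # by changing its own outgoing edges -- this is what gives impartiality.
        ignored = {a, b, t}
        return deg(a, ignored) > deg(b, ignored)

    for w in nodes:
        if all(beats(w, j) for j in nodes if j != w):
            return w
    return t
```

When a node is nominated by everyone else it wins outright; when no node beats every other node (the refined notion of a tie), the default prevails.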
Theorem 7.
The AVD mechanism that uses the node of highest expected in-degree as the default node has an expected additive approximation of O(ln² 𝑛) when applied on a priori popularity instances.

Again, for simplicity of notation, in our analysis of AVD, we consider profiles with 𝑛 + 1 nodes and assume that 𝑛 is sufficiently large (otherwise, Theorem 7 holds trivially). Let 𝑝_𝑘 be the popularity of node 𝑘 and recall that the in-degree 𝑑_𝑘(x) of node 𝑘 is a random variable taking values between 0 and 𝑛 following the binomial distribution B(𝑛, 𝑝_𝑘), which is also independent of the in-degrees of the other nodes. Also, let 𝜇_𝑘 = E[𝑑_𝑘(x)] = 𝑝_𝑘 𝑛 and 𝜉_𝑘 = min{𝜇_𝑘, 𝑛 − 𝜇_𝑘}.

We first consider the case 𝜉_𝑡 < 8200 ln 𝑛. This means that the expected in-degree of the default node is either very high, i.e., 𝜇_𝑡 > 𝑛 − 8200 ln 𝑛, or very low, i.e., 𝜇_𝑡 < 8200 ln 𝑛. When 𝜇_𝑡 > 𝑛 − 8200 ln 𝑛, the expected degree of the winner (be it the default node or not; recall the argument above that compares AVD with the constant mechanism) is more than 𝑛 − 8200 ln 𝑛. As 𝑑_𝑤 ≤ 𝑛, the expected additive approximation is less than 8200 ln 𝑛. In the case 𝜇_𝑡 < 8200 ln 𝑛, a simple application of a Chernoff bound (i.e., the tail inequality (2) from Lemma 2) yields that Pr[𝑑_𝑘(x) ≥ 16400 ln 𝑛] ≤ 𝑛^{−2} for every node 𝑘 (recall that 𝜇_𝑘 ≤ 𝜇_𝑡) and, hence, the expected additive approximation is at most 𝑛^{−2} · 𝑛 + (1 − 𝑛^{−2}) · 16400 ln 𝑛 ≤ 1 + 16400 ln 𝑛.

So, in the following, we analyze the AVD mechanism assuming that 𝜉_𝑡 ≥ 8200 ln 𝑛. Let ℎ be the highest in-degree among all nodes. Denote by 𝐴 the event that there is no node that beats every other node and the default node is beaten by a non-default node of degree ℎ. The following lemma addresses the simplest case, where the event 𝐴 is not true.

Lemma 8. E[(ℎ − 𝑑_𝑤(x)) · 1{Ā}] ≤ 1, where Ā denotes the complement of the event 𝐴 and 1{·} the indicator function.

Proof.
If the event 𝐴 does not hold, then either there is a node that beats every other node, or the default node is not beaten by any node of degree ℎ.

So, first, assume that there is a node 𝑤 that beats every other node. The lemma follows if this node has degree ℎ. Otherwise, let 𝑖 be a node of degree ℎ. Since 𝑤 beats 𝑖, we have 𝑑_𝑤(x) ≥ 𝑑_𝑤(𝑁_{𝑖,𝑤,𝑡}, x) ≥ 𝑑_𝑖(𝑁_{𝑖,𝑤,𝑡}, x) + 1 ≥ 𝑑_𝑖(x) − 1 = ℎ − 1 if 𝑤 ≠ 𝑡, and 𝑑_𝑤(x) ≥ 𝑑_𝑡(𝑁_{𝑖,𝑡}, x) ≥ 𝑑_𝑖(𝑁_{𝑖,𝑡}, x) + 1 ≥ 𝑑_𝑖(x) = ℎ if 𝑤 = 𝑡.

Now, assume that the default node is not beaten by any node of in-degree ℎ and there is no node that beats every other node. In this case, the winner will be the default node 𝑡. The lemma clearly follows if 𝑡 has in-degree ℎ. Otherwise, since 𝑡 is not beaten by a node 𝑖 of degree ℎ, we have 𝑑_𝑡(x) ≥ 𝑑_𝑡(𝑁_{𝑖,𝑡}, x) ≥ 𝑑_𝑖(𝑁_{𝑖,𝑡}, x) ≥ 𝑑_𝑖(x) − 1 = ℎ − 1. □

We will now bound E[(ℎ − 𝑑_𝑤(x)) · 1{𝐴}]; to do so, we will use a structural lemma.

Lemma 9.
Assume that 𝐴 is true and let 𝑖 be a node of highest in-degree ℎ that beats the default node 𝑡. Then, there is a node 𝑗, different than 𝑖 and 𝑡, that has degree either ℎ, or ℎ − 1, or ℎ − 2.

Proof.
Since node 𝑖 does not beat every other node, there must be some node 𝑗 that is not beaten by 𝑖 (clearly, 𝑗 is different than 𝑡). Then, 𝑑_𝑗(x) ≥ 𝑑_𝑗(𝑁_{𝑖,𝑗,𝑡}, x) ≥ 𝑑_𝑖(𝑁_{𝑖,𝑗,𝑡}, x) ≥ 𝑑_𝑖(x) − 2 = ℎ − 2. As ℎ is the highest in-degree, 𝑑_𝑗(x) ≤ ℎ as well, and the lemma follows. □

By Lemma 9, we can bound E[(ℎ − 𝑑_𝑤(x)) · 1{𝐴}] by the expected value of the difference ℎ − 𝑑_𝑡(x), accounting for all possible values of the maximum degree ℎ, all possibilities for an agent 𝑖 ≠ 𝑡 having degree ℎ and an agent 𝑗 ≠ 𝑖, 𝑡 having degree either ℎ, ℎ − 1, or ℎ − 2, with the degree of the default node ranging from 0 to ℎ and the degrees of all other nodes ranging from 0 to ℎ as well. We have

E[(ℎ − 𝑑_𝑤(x)) · 1{𝐴}] ≤ Σ_{ℎ=0}^{𝑛} Σ_{𝑔=0}^{ℎ} (ℎ − 𝑔) · Pr[𝑑_𝑡(x) = 𝑔] Σ_{𝑖∈𝑁_𝑡} Pr[𝑑_𝑖(x) = ℎ] · Σ_{𝑗∈𝑁_{𝑖,𝑡}} Pr[max{0, ℎ − 2} ≤ 𝑑_𝑗(x) ≤ ℎ] Π_{𝑘∈𝑁_{𝑖,𝑗,𝑡}} Pr[𝑑_𝑘(x) ≤ ℎ].  (5)

For every 𝑘 ∈ {1, ..., 𝑛 + 1}, define the comfort zone 𝑍_𝑘 of agent 𝑘 to be the set of integers {𝐿_𝑘, ..., 𝑈_𝑘}, with the boundaries satisfying 𝐿_𝑘 ≤ 𝜇_𝑘 ≤ 𝑈_𝑘 and being defined as follows. The lower boundary 𝐿_𝑘 is equal to the highest integer 𝑐 such that Pr[𝑑_𝑘(x) < 𝑐] ≤ 𝑛^{−5.3}, or 0 if no such 𝑐 exists. The upper boundary 𝑈_𝑘 is equal to the lowest integer 𝑐 such that Pr[𝑑_𝑘(x) > 𝑐] ≤ 𝑛^{−5.3}, or 𝑛 if no such 𝑐 exists. We use the terms "above 𝑍_𝑘" and "below 𝑍_𝑘" to denote the ranges of integers (if any) {𝑈_𝑘 + 1, ..., 𝑛} and {0, ..., 𝐿_𝑘 − 1}, respectively.

Now, by simple properties of the binomial distribution and the fact that node 𝑡 has maximum expected in-degree, we observe that if ℎ lies above the comfort zone of agent 𝑡, it also lies above the comfort zone of agent 𝑖 and, hence, Pr[𝑑_𝑖(x) = ℎ] ≤ 𝑛^{−5.3}. Also, if 𝑔 lies below the comfort zone 𝑍_𝑡, it holds that Pr[𝑑_𝑡(x) = 𝑔] ≤ 𝑛^{−5.3}. Furthermore, if ℎ − 2 lies above 𝑍_𝑗, then Pr[max{0, ℎ − 2} ≤ 𝑑_𝑗(x) ≤ ℎ] ≤ Pr[𝑑_𝑗(x) > 𝑈_𝑗] ≤ 𝑛^{−5.3} as well. Since, trivially, ℎ − 𝑑_𝑡(x) ≤ 𝑛, the contribution of the at most 𝑛⁴ terms of the sum in which either ℎ or 𝑔 does not belong to the comfort zone 𝑍_𝑡 or ℎ − 2 lies above 𝑍_𝑗 is at most 𝑛⁴ · 𝑛 · 𝑛^{−5.3} < 1. Hence, equation (5) becomes

E[(ℎ − 𝑑_𝑤(x)) · 1{𝐴}] ≤ 1 + Σ_{ℎ=𝐿_𝑡}^{𝑈_𝑡} Σ_{𝑔=𝐿_𝑡}^{ℎ} (ℎ − 𝑔) · Pr[𝑑_𝑡(x) = 𝑔] Σ_{𝑖∈𝑁_𝑡 : ℎ∈𝑍_𝑖} Pr[𝑑_𝑖(x) = ℎ] · Σ_{𝑗∈𝑁_{𝑖,𝑡} : ℎ−2∈𝑍_𝑗} Pr[max{0, ℎ − 2} ≤ 𝑑_𝑗(x) ≤ ℎ] Π_{𝑘∈𝑁_{𝑖,𝑗,𝑡}} Pr[𝑑_𝑘(x) ≤ ℎ].  (6)

Our aim in the following is to evaluate each term in the sum at the RHS of (6). To do so, we will need three auxiliary technical lemmas. The proofs of the first two follow easily by applying Chernoff bounds.

Lemma 10.
For the boundaries of the comfort zone 𝑍 𝑡 we have 𝑈 𝑡 ≤ 𝜇 𝑡 + p 𝜉 𝑡 ln 𝑛 and 𝐿 𝑡 ≥ 𝜇 𝑡 − p 𝜉 𝑡 ln 𝑛 . Proof. If 𝜇 𝑡 ≥ 𝑛 /
2, by applying the tail inequality (1) from Lemma 2, we getPr [ 𝑑 𝑡 ( x ) ≥ 𝜇 𝑡 + p 𝜉 𝑡 ln 𝑛 ] ≤ exp − ( p 𝜉 𝑡 ln 𝑛 ) 𝑛 𝜉 𝑡 ( 𝑛 − 𝜉 𝑡 ) ! ≤ 𝑛 − . If 𝜇 𝑡 < 𝑛 /
2, observe that 4 p 𝜉 𝑡 ln 𝑛 ≤ 𝜇 𝑡 , by our assumption 𝜉 𝑡 ≥ 𝑛 . Hence, by applying thetail inequality (2) from Lemma 2, we getPr [ 𝑑 𝑡 ( x ) ≥ 𝜇 𝑡 + p 𝜉 𝑡 ln 𝑛 ] ≤ exp − ( p 𝜉 𝑡 ln 𝑛 ) 𝜇 𝑡 ! ≤ 𝑛 − . . The bounds on 𝑈 𝑡 follows by its definition.Similarly, if 𝜇 𝑡 ≤ 𝑛 /
2, by applying the tail inequality (3) from Lemma 2, we getPr [ 𝑑 𝑡 ( x ) ≤ 𝜇 𝑡 − p 𝜉 𝑡 ln 𝑛 ] ≤ exp − ( p 𝜉 𝑡 ln 𝑛 ) 𝑛 𝜉 𝑡 ( 𝑛 − 𝜉 𝑡 ) ! ≤ 𝑛 − . If 𝜇 𝑡 > 𝑛 /
2, observe that 𝜇 𝑡 − p 𝜉 𝑡 ln 𝑛 ≥ 𝜇 𝑡 − 𝑛 , by our assumption 𝜉 𝑡 = 𝑛 − 𝜇 𝑡 ≥ 𝑛 . Hence,by applying the tail inequality (4) from Lemma 2, we getPr [ 𝑑 𝑡 ( x ) ≤ 𝜇 𝑡 − p 𝜉 𝑡 ln 𝑛 ] ≤ exp − ( p 𝜉 𝑡 ln 𝑛 ) 𝜇 𝑡 ! ≤ 𝑛 − . . oannis Caragiannis, George Christodoulou, and Nicos Protopapas 𝐿 𝑘 follows by its definition. (cid:3) We will say that the comfort zones 𝑍 𝑘 and 𝑍 𝑘 ′ almost intersect if 𝐿 𝑘 ′ − 𝑈 𝑘 ≤ 𝐿 𝑘 − 𝑈 𝑘 ′ ≤ ℎ ∈ 𝑍 𝑡 and ℎ − ∈ 𝑍 𝑗 , the two comfort zones 𝑍 𝑡 and 𝑍 𝑗 almost intersect. Lemma 11.
If two comfort zones 𝑍 𝑘 and 𝑍 𝑘 ′ almost intersect, then 𝜇 𝑘 ≤ 𝜇 𝑘 ′ ≤ 𝜇 𝑘 and 𝜉 𝑘 ≤ 𝜉 𝑘 ′ ≤ 𝜉 𝑘 . Proof.
Without loss of generality, assume that 𝜇 𝑘 ≤ 𝜇 𝑘 ′ ; the other case is completely symmetric.Then, 𝐿 𝑘 ′ − 𝑈 𝑘 ≤
2, which, using the facts 𝜉 𝑘 , 𝜉 𝑘 ′ ≤ 𝜇 𝑘 ′ and 𝜇 𝑘 ′ ≥ 𝑛 as well as Lemma 10,implies that 𝜇 𝑘 ′ ≤ + 𝜇 𝑘 + p 𝜉 𝑘 ln 𝑛 + p 𝜉 𝑘 ′ ln 𝑛 ≤ + 𝜇 𝑘 + p 𝜇 𝑘 ′ ln 𝑛 ≤ 𝜇 𝑘 (cid:18) + (cid:19) + 𝜇 𝑘 ′ √ , which clearly implies that 𝜇 𝑘 ′ ≤ 𝜇 𝑘 .Also, observe thatmax { 𝜉 𝑘 ′ , 𝜉 𝑘 } − min { 𝜉 𝑘 ′ , 𝜉 𝑘 } ≤ 𝜇 𝑘 ′ − 𝜇 𝑘 ≤ + p 𝜉 𝑘 ln 𝑛 + p 𝜉 𝑘 ′ ln 𝑛 ≤ + p max { 𝜉 𝑘 , 𝜉 𝑘 ′ } ln 𝑛 ≤ (cid:18) + √ (cid:19) max { 𝜉 𝑘 , 𝜉 𝑘 ′ } , which implies that max { 𝜉 𝑘 ′ , 𝜉 𝑘 } ≤ min { 𝜉 𝑘 ′ , 𝜉 𝑘 } as desired. (cid:3) We now prove the most important technical lemma in our analysis.
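As an aside, the comfort zones used throughout this argument are easy to compute exactly for a given binomial in-degree distribution. The Python sketch below leaves the tail threshold as a parameter `tau` (the exact inverse-polynomial threshold from the definition above is not reproduced here) and computes the boundaries L and U:

```python
import math

def binom_pmf(n, p, k):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def comfort_zone(n, p, tau):
    """Boundaries (L, U) of the comfort zone of a Binomial(n, p) in-degree:
    L is the highest integer c with Pr[d < c] <= tau (0 if none exists),
    U is the lowest integer c with Pr[d > c] <= tau (n if none exists)."""
    pmf = [binom_pmf(n, p, k) for k in range(n + 1)]
    low, below = 0, 0.0          # below tracks Pr[d < c]
    for c in range(n + 1):
        if below <= tau:
            low = c
        below += pmf[c]
    up, above = n, 1.0           # above tracks Pr[d > c]
    for c in range(n + 1):
        above -= pmf[c]
        if above <= tau:
            up = c
            break
    return low, up
```

Two zones Z_k and Z_{k'} can then be tested for "almost intersection" directly from these boundaries.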
Lemma 12.
Let ℓ ∈ { , , } and ℎ be such that ℎ ∈ 𝑍 𝑡 and ℎ − ∈ 𝑍 𝑘 for some agent 𝑘 . Then, Pr [ 𝑑 𝑘 ( x ) = ℎ − ℓ ] ≤ 𝑒 s ln 𝑛𝜉 𝑡 · Pr [ 𝑑 𝑘 ( x ) > ℎ ] . Proof.
By the definition of the binomial distribution, we havePr [ 𝑑 𝑘 ( x ) = 𝑧 ] = (cid:18) 𝑛𝑧 (cid:19) 𝑝 𝑧𝑘 ( − 𝑝 𝑘 ) 𝑛 − 𝑧 for every integer 𝑧 with 0 ≤ 𝑧 ≤ 𝑛 . Let 𝑥 be any positive integer with 𝑥 ≤ 𝜇 𝑘 + p 𝜉 𝑘 ln 𝑛 . For everyinteger 𝑦 > 𝑥 , we havePr [ 𝑑 𝑘 ( x ) = 𝑥 ] Pr [ 𝑑 𝑘 ( x ) = 𝑦 ] = (cid:0) 𝑛𝑥 (cid:1) 𝑝 𝑥𝑘 ( − 𝑝 𝑘 ) 𝑛 − 𝑥 (cid:0) 𝑛𝑦 (cid:1) 𝑝 𝑦𝑘 ( − 𝑝 𝑘 ) 𝑛 − 𝑦 = ( 𝑥 + ) · ( 𝑥 + ) · ... · 𝑦 ( 𝑛 − 𝑦 + ) · ( 𝑛 − 𝑦 + ) · ... · ( 𝑛 − 𝑥 ) · ( − 𝑝 𝑘 ) 𝑦 − 𝑥 𝑝 𝑦 − 𝑥𝑘 = (cid:16) + 𝑥 − 𝜇 𝑘 + 𝜇 𝑘 (cid:17) · (cid:16) + 𝑥 − 𝜇 𝑘 + 𝜇 𝑘 (cid:17) · ... · (cid:16) + 𝑦 − 𝜇 𝑘 𝜇 𝑘 (cid:17)(cid:16) − 𝑦 − 𝜇 𝑘 − 𝑛 − 𝜇 𝑘 (cid:17) · (cid:16) − 𝑦 − 𝜇 𝑘 𝑛 − 𝜇 𝑘 (cid:17) · ... · (cid:16) − 𝑥 − 𝜇 𝑘 𝑛 − 𝜇 𝑘 (cid:17) ≤ (cid:16) + 𝑦 − 𝜇 𝑘 𝜇 𝑘 (cid:17) 𝑦 − 𝑥 (cid:16) − 𝑦 − 𝜇 𝑘 − 𝑛 − 𝜇 𝑘 (cid:17) 𝑦 − 𝑥 oannis Caragiannis, George Christodoulou, and Nicos Protopapas ≤ exp (cid:18) ( 𝑦 − 𝜇 𝑘 ) ( 𝑦 − 𝑥 ) 𝜇 𝑘 + ( 𝑦 − 𝜇 𝑘 + ) ( 𝑦 − 𝑥 ) 𝑛 − 𝑦 + (cid:19) (7)The first inequality follows since 𝑥 < 𝑦 . In the second inequality, we have used the properties1 + 𝑧 ≤ 𝑒 𝑧 for 𝑧 ∈ R and, consequently, − 𝑧 = + 𝑧 − 𝑧 ≤ exp ( 𝑧 − 𝑧 ) for 𝑧 ≠ 𝑦 such that 𝑦 > ℎ − ℓ and ( 𝑦 − 𝜇 𝑘 + ) ( 𝑦 − ℎ + ℓ ) ≤ ( 𝜉 𝑡 − 𝑦 + 𝜇 𝑘 ) , (8)we get Pr [ 𝑑 𝑘 ( x ) = ℎ − ℓ ] Pr [ 𝑑 𝑘 ( x ) = 𝑦 ] ≤ 𝑒. (9)Recall that 𝑍 𝑘 and 𝑍 𝑡 almost intersect. Hence, we have 𝜇 𝑘 ≥ 𝜇 𝑡 / ( 𝑦 − 𝜇 𝑘 ) ( 𝑦 − ℎ + ℓ ) 𝜇 𝑘 ≤ · 𝜉 𝑡 − 𝑦 + 𝜇 𝑘 𝜇 𝑘 ≤
411 2 𝜇 𝑡 − 𝑦𝜇 𝑡 ≤ . (10)Furthermore, using again (8), and the inequalities 𝜇 𝑘 ≤ 𝜇 𝑡 and 𝜉 𝑡 ≤ 𝑛 − 𝜇 𝑡 , we get ( 𝑦 − 𝜇 𝑘 + ) ( 𝑦 − 𝑑 + ℓ ) 𝑛 − 𝑦 + ≤ · 𝜉 𝑡 − 𝑦 + 𝜇 𝑘 𝑛 − 𝑦 + ≤ · 𝜉 𝑡 − 𝑦 + 𝜇 𝑡 𝑛 − 𝑦 + ≤ · 𝑛 − 𝑦𝑛 − 𝑦 + ≤ . (11)Inequality (9) now follows by inequalities (7), (10), and (11).Solving inequality (8), we get that the range of values for 𝑦 so that (9) is true satisfies ℎ − ℓ < 𝑦 ≤ ℎ − ℓ + 𝜇 𝑘 − + q ( ℎ − ℓ − 𝜇 𝑘 ) + ( ℎ − ℓ − 𝜇 ) + 𝜉 𝑡 + . Hence, the number of integer values for 𝑦 so that 𝑦 > ℎ − ℓ and (8) holds is at least ℎ − ℓ + 𝜇 𝑘 − + q ( ℎ − ℓ − 𝜇 𝑘 ) + ( 𝑑 − ℓ − 𝜇 𝑘 ) + 𝜉 𝑡 + − ℎ + ℓ − = q ( ℎ − ℓ − 𝜇 𝑘 ) + ( 𝑑 − ℓ − 𝜇 𝑘 ) + 𝜉 𝑡 + − ( ℎ − ℓ − 𝜇 𝑘 + / ) + . (12)The derivative of the quantity at the RHS of (12) with respect to ℎ is2 ( ℎ − ℓ − 𝜇 𝑘 ) + q ( ℎ − ℓ − 𝜇 𝑘 ) + ( ℎ − ℓ − 𝜇 𝑘 ) + 𝜉 𝑡 + − < , i.e., it is decreasing. Since ℎ − ℓ ∈ 𝑍 𝑘 and 𝑍 𝑘 and 𝑍 𝑡 almost intersect, using Lemmas 10 and 11 wehave ℎ − ℓ − 𝜇 𝑘 ≤ p 𝜉 𝑘 ln 𝑛 ≤ p 𝜉 𝑡 ln 𝑛 . Hence, we can bound the RHS of (12) as follows: q ( ℎ − ℓ − 𝜇 𝑘 ) + ( 𝑑 − ℓ − 𝜇 𝑘 ) + 𝜉 𝑡 + − (cid:0) ℎ − ℓ − 𝜇 𝑘 + (cid:1) + = ( ℎ − ℓ − 𝜇 𝑘 ) + ( 𝑑 − ℓ − 𝜇 𝑘 ) + 𝜉 𝑡 + − (cid:0) ℎ − ℓ − 𝜇 𝑘 + (cid:1) ( q ( ℎ − ℓ − 𝜇 𝑘 ) + ( 𝑑 − ℓ − 𝜇 𝑘 ) + 𝜉 𝑡 + + ( ℎ − ℓ − 𝜇 𝑘 + / )) + = 𝜉 𝑡 − ( ℎ − ℓ − 𝜇 𝑘 ) − (cid:18)q ( ℎ − ℓ − 𝜇 𝑘 ) + ( 𝑑 − ℓ − 𝜇 𝑘 ) + 𝜉 𝑡 + + ℎ − ℓ − 𝜇 𝑘 + / (cid:19) + oannis Caragiannis, George Christodoulou, and Nicos Protopapas ≥ 𝜉 𝑡 − p 𝜉 𝑡 ln 𝑛 − (cid:18)q 𝜉 𝑡 ln 𝑛 + p 𝜉 𝑡 ln 𝑛 + 𝜉 𝑡 + + p 𝜉 𝑡 ln 𝑛 + (cid:19) ≥ r 𝜉 𝑡 ln 𝑛 + . In the second inequality, we have used 905 p 𝜉 𝑡 ln 𝑛 ≤ 𝜉 𝑡 and 820 ≤ 𝜉 𝑡 to bound the numerator by 𝜉 𝑡 (recall that 𝜉 𝑡 ≥ 𝑛 ), while the parenthesis in the denominator is clearly at most 12 p 𝜉 𝑡 ln 𝑛 .Now, let ℓ ∈ { , , } and 𝑟 = (cid:24) q 𝜉 𝑡 ln 𝑛 (cid:25) +
2. By the discussion above, for 𝑥 = ℎ − ℓ we havePr [ 𝑑 𝑘 ( x ) = ℎ − ℓ ] ≤ 𝑒 Pr [ 𝑑 𝑘 ( x ) = 𝑦 ] for 𝑦 = ℎ − ℓ + , ℎ − ℓ + , ..., ℎ − ℓ + 𝑟 +
2. By summing these inequalities for 𝑦 = ℎ + , ..., ℎ + 𝑟 , weget 𝑟 Pr [ 𝑑 𝑘 ( x ) = ℎ − ℓ ] ≤ 𝑒 𝑟 Õ 𝑦 = ℎ + Pr [ 𝑑 𝑘 ( x ) = 𝑦 ] ≤ 𝑒 Pr [ 𝑑 𝑘 ( x ) > ℎ ] and, equivalently,Pr [ 𝑑 𝑘 ( x ) = ℎ − ℓ ] ≤ 𝑒𝑟 Pr [ 𝑑 𝑘 ( x ) > ℎ ] ≤ 𝑒 s ln 𝑛𝜉 𝑡 Pr [ 𝑑 𝑘 ( x ) > ℎ ] . The lemma follows. (cid:3)
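As a numerical sanity check on the flavor of Lemma 12, the snippet below compares a point probability slightly above the mean of a binomial in-degree with the corresponding upper tail. The constant 12 in the bound is an arbitrary hedge standing in for the constant of the lemma, which is not reproduced here:

```python
import math

def pmf(n, p, k):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def tail(n, p, h):
    # Pr[d > h] for a Binomial(n, p) in-degree
    return sum(pmf(n, p, k) for k in range(h + 1, n + 1))

n, p = 400, 0.5
xi = min(n * p, n * (1 - p))
# a point inside the comfort zone, about sqrt(xi * ln n) above the mean
h = int(n * p + math.sqrt(xi * math.log(n)))
ratio = pmf(n, p, h) / tail(n, p, h)
# hedged version of the lemma's bound: Pr[d = h] <= C * sqrt(ln n / xi) * Pr[d > h]
bound = 12 * math.e * math.sqrt(math.log(n) / xi)
assert ratio <= bound
```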
We are ready to complete the proof of Theorem 7. For h ∈ Z_t, using Lemma 10, we have
∑_{h∈Z_t} ∑_{g∈Z_t} (h − g)·Pr[d_t(x) = g] ≤ ∑_{h∈Z_t} O(√(ξ_t ln n)) · ∑_{g∈Z_t} Pr[d_t(x) = g] ≤ O(ξ_t ln n),  (13)
since the width of Z_t is O(√(ξ_t ln n)) by Lemma 10. Furthermore, Lemma 12 yields
∑_{i∈N_t} ∑_{j∈N_{i,t}} Pr[d_i(x) = h]·Pr[max{0, h−1} ≤ d_j(x) ≤ h] · ∏_{k∈N_{i,j,t}} Pr[d_k(x) ≤ h]
≤ O(ln n/ξ_t) · ∑_{i∈N_t} ∑_{j∈N_{i,t}} Pr[d_i(x) > h]·Pr[d_j(x) > h] · ∏_{k∈N_{i,j,t}} Pr[d_k(x) ≤ h] ≤ O(ln n/ξ_t),  (14)
since the last double sum is simply the probability that exactly two agents have degree higher than h (and, hence, has value at most 1). Using (13) and (14), equation (6) yields E[(h − d_w(x))·1{A}] ≤ 1 + O(ln n) and the proof of Theorem 7 is now complete. □
In this section, we prove the following lower bound for the uniform domain.
Theorem 13.
When applied on uniform instances with p = 1/2, the AVD mechanism has expected additive approximation Ω(ln n).

With uniform instances, the in-degree of each node follows the binomial distribution. In the proof of Theorem 13, we use the random variables B and B′ following the distributions B(n, 1/2) and B(n − 1, 1/2), respectively. We also assume that n is sufficiently large. Let U be the lowest integer c such that Pr[B > c] ≤ 1/(en). Similarly, let L be the lowest integer c such that Pr[B > c] < 1/(3n). Consider the following event D:
• the default node has in-degree at most n/2,
• two non-default nodes (called the potential winners) have the same in-degree d ∈ [L + 1, U], without counting the edges between them,
• the remaining non-default nodes (called the losers) have in-degree at most d − 1.
We will show that E[(L − n/2)·1{D}] is Ω(ln n), proving the theorem. In particular, we will use the inequality
E[(L − n/2)·1{D}] ≥ (1/2)·(L − n/2)·∑_{d=L+1}^{U} C(n,2)·Pr[B′ = d]²·Pr[B ≤ d − 1]^{n−2}.  (15)
The RHS of equation (15) is the product of the lower bound L − n/2 on the additive approximation when D happens, with the probability 1/2 that the default node has in-degree at most n/2, and with the probability Pr[B′ = d]² that the two potential winners have degree d (ignoring the edges between them) and the probability Pr[B ≤ d − 1]^{n−2} that the losers have degree at most d − 1, for all the C(n,2) selections of the two potential winners. We will make use of a series of lemmas to bound the several quantities that appear in the RHS of equation (15). Lemma 14. L ≥ n/2 + √(n ln n/32). Proof.
By applying Corollary 4 to the random variable 𝐵 ∼ B ( 𝑛, / ) with 𝛿 = q ln 𝑛 𝑛 (observethat 𝛿 ≤ /
10 since 𝑛 is large), we have Pr (cid:20) 𝐵 ≥ 𝑛 + q 𝑛 ln 𝑛 (cid:21) ≥ 𝑛 √ for the random variable 𝐵 ∼ B ( 𝑛, / ) . The lemma follows by the definition of 𝐿 . (cid:3) Lemma 15. 𝑈 ≤ 𝑛 + √ 𝑛 ln 𝑛 . Proof.
A simple application of the Chernoff bound (inequality (1) from Lemma 2) to the bino-mial random variable 𝐵 ∼ B ( 𝑛, / ) yields Pr h 𝐵 ≥ 𝑛 + √ 𝑛 ln 𝑛 i ≤ 𝑛 ≤ 𝑒 √ . The lemma thenfollows by the definition of 𝑈 . (cid:3) Lemma 16.
For the random variable B ∼ B(n, 1/2), it holds that Pr[B = x] ≥ ((2L − n)/n)·Pr[B ≥ x] for every integer x ≥ L. Proof.
Consider integers x, y with L ≤ x ≤ y. By the definition of the binomial distribution B(n, 1/2), we have
Pr[B = y]/Pr[B = x] = C(n, y)/C(n, x) = (x!(n − x)!)/(y!(n − y)!) ≤ ((n − x)/x)^{y−x} ≤ ((n − L)/L)^{y−x}.
Hence,
Pr[B ≥ x] = ∑_{y=x}^{n} Pr[B = y] ≤ Pr[B = x]·∑_{y=x}^{n} ((n − L)/L)^{y−x} ≤ (L/(2L − n))·Pr[B = x] ≤ (n/(2L − n))·Pr[B = x],
and the lemma follows by rearranging. □ Claim 17.
For the random variable B ∼ B(n, 1/2) and integers x and y with x ≤ y, it holds that Pr[B = x] ≤ (y/(n − y))^{y−x}·Pr[B = y]. Proof.
By the definition of the binomial distribution B(n, 1/2), we have
Pr[B = x]/Pr[B = y] = C(n, x)/C(n, y) = (y!(n − y)!)/(x!(n − x)!) ≤ (y/(n − y))^{y−x}. □
Lemma 18.
For the random variable 𝐵 ∼ B ( 𝑛, / ) , it holds that Pr [ 𝐵 = 𝑈 ] ≤ 𝑒 √ √ ln 𝑛𝑛 / . Proof.
By Claim 17, we havePr [ 𝐵 = 𝑈 ] Pr [ 𝐵 = 𝑦 ] ≤ (cid:18) 𝑦𝑛 − 𝑦 (cid:19) 𝑦 − 𝑈 ≤ exp (cid:18) ( 𝑦 − 𝑈 ) ( 𝑦 − 𝑛 ) 𝑛 − 𝑦 (cid:19) (16)for every integer 𝑦 > 𝑈 . The second inequality follows since 𝑒 𝑧 ≥ + 𝑧 for 𝑧 ∈ R . By selecting 𝑦 such that ( 𝑦 − 𝑈 ) ( 𝑦 − 𝑛 ) 𝑛 − 𝑦 ≤ , (17)we get Pr [ 𝐵 = 𝑈 ] Pr [ 𝐵 = 𝑦 ] ≤ 𝑒. (18)Solving inequality (17), we get that the range of values for 𝑦 so that (18) is true satisfies 𝑈 < 𝑦 ≤ 𝑈 + 𝑛 − + p ( 𝑈 − 𝑛 ) + 𝑛 − 𝑈 + . Hence, the number of integer values for 𝑦 so that 𝑦 > 𝑈 and (17) is satisfied is at least2 𝑈 + 𝑛 − + p ( 𝑈 − 𝑛 ) + 𝑛 − 𝑈 + − 𝑈 − = p ( 𝑈 − 𝑛 ) + 𝑛 − 𝑈 + − 𝑈 + 𝑛 − . (19)Now observe that the quantity at the RHS of (19) is non-increasing with respect to 𝑈 since itsderivative 2 𝑈 − 𝑛 − p ( 𝑈 − 𝑛 ) + 𝑛 − 𝑈 + − 𝑈 fromLemma 15. We get p ( 𝑈 − 𝑛 ) + 𝑛 − 𝑈 + − 𝑈 + 𝑛 − oannis Caragiannis, George Christodoulou, and Nicos Protopapas = 𝑛 − 𝑈 − (cid:16)p ( 𝑈 − 𝑛 ) + 𝑛 − 𝑈 + + 𝑈 − 𝑛 + (cid:17) ≥ 𝑛 − √ 𝑛 ln 𝑛 − (cid:16)p 𝑛 ln 𝑛 + 𝑛 − √ 𝑛 ln 𝑛 + + √ 𝑛 ln 𝑛 − 𝑛 + (cid:17) ≥ r 𝑛 ln 𝑛 . In the last inequality, we have used 24 √ 𝑛 ln 𝑛 + ≤ 𝑛 to lower-bound the numerator by 3 𝑛 and5 ≤ √ 𝑛 ln 𝑛 to upper-bound the parenthesis in the denominator by 6 √ 𝑛 ln 𝑛 .Now, let 𝑟 = (cid:6) p 𝑛 ln 𝑛 (cid:7) . Multiplying inequality (18) by 1 / 𝑟 and summing these inequalities for 𝑦 = 𝑈 + , ..., 𝑈 + 𝑟 , we havePr [ 𝐵 = 𝑈 ] ≤ 𝑒𝑟 𝑈 + 𝑟 Õ 𝑦 = 𝑈 + Pr [ 𝐵 = 𝑦 ] ≤ 𝑒𝑟 · Pr [ 𝐵 > 𝑈 ] ≤ 𝑒 √ √ ln 𝑛𝑛 / , as desired. The last inequality follows by the definition of 𝑈 . (cid:3) Lemma 19. 𝑈 − 𝐿 ≥ p 𝑛 ln 𝑛 . Proof.
Let 𝐵 ∼ B ( 𝑛, / ) . Using the definition of 𝑈 and 𝐿 , we have23 𝑛 ≤ 𝑛 √ − 𝑒 𝑛 √ ≤ Pr [ 𝐿 ≤ 𝐵 ≤ 𝑈 ]≤ 𝑈 Õ 𝑥 = 𝐿 (cid:18) 𝑈𝑛 − 𝑈 (cid:19) 𝑈 − 𝑥 · Pr [ 𝐵 = 𝑈 ]≤ (cid:18) 𝑈𝑛 − 𝑈 (cid:19) 𝑈 − 𝐿 + · 𝑛 − 𝑈 𝑈 − 𝑛 · 𝑒 √ √ ln 𝑛𝑛 / ≤ exp (cid:18) 𝑈 − 𝑛𝑛 − 𝑈 ( 𝑈 − 𝐿 + ) (cid:19) · 𝑛 − 𝐿 𝐿 − 𝑛 · 𝑒 √ √ ln 𝑛𝑛 / ≤ exp (cid:18) 𝑈 − 𝑛𝑛 − 𝑈 ( 𝑈 − 𝐿 + ) (cid:19) · 𝑒𝑛 . (20)The first inequality is obvious, while the second one uses the definition of 𝑈 and 𝐿 (recall thatPr [ 𝐵 ≥ 𝐿 ] ≥ 𝑛 √ and Pr [ 𝐵 > 𝑈 ] ≤ 𝑒 𝑛 √ ). The third inequality follows by Claim 17. The fourthinequality follows by Lemma 18, the fifth one follows since 𝑈 ≥ 𝐿 and by the definition of 𝑈 , andthe sixth one follows by Lemma 14 and the fact 𝐿 ≥ 𝑛 / 𝑛 , 2 𝑛 − 𝑈 ≥ 𝑛 /
3. Hence, inequality (20) implies that 𝑈 − 𝐿 ≥ 𝑛 − 𝑈 𝑈 − 𝑛 ≥ r 𝑛 ln 𝑛 , as desired. (cid:3) Lemma 20.
Let B ∼ B(n, 1/2) and B′ ∼ B(n − 1, 1/2). For every integer x ∈ [L + 1, U], it holds that Pr[B′ = x] ≥ (2/3)·Pr[B = x].
Proof. By the definition of the binomial distribution, Lemma 15, and the facts that x ≤ U and that n is large (the last two imply that x ≤ 2n/3), we have Pr[B′ = x] = C(n − 1, x)·2^{−n+1} = (2(n − x)/n)·C(n, x)·2^{−n} ≥
23 Pr [ 𝐵 = 𝑥 ] . (cid:3) We are now ready to bound E [( 𝐿 − 𝑛 / ) { 𝐷 }] from below. Using equation (15), and the lemmasabove, we have E ( 𝐿 − 𝑛 / ) { 𝐷 } ≥ (cid:16) 𝐿 − 𝑛 (cid:17) · 𝑈 Õ 𝑑 = 𝐿 + (cid:18) 𝑛 (cid:19) Pr [ 𝐵 ′ = 𝑑 ] Pr [ 𝐵 ≤ 𝑑 − ] 𝑛 − ≥ (cid:16) 𝐿 − 𝑛 (cid:17) · 𝑈 Õ 𝑑 = 𝐿 + (cid:18) 𝑛 (cid:19) Pr [ 𝐵 = 𝑑 ] Pr [ 𝐵 ≤ 𝑑 − ] 𝑛 − ≥ 𝑛 (cid:16) 𝐿 − 𝑛 (cid:17) 𝑈 Õ 𝑑 = 𝐿 + (cid:18) 𝑛 (cid:19) Pr [ 𝐵 ≥ 𝑑 ] Pr [ 𝐵 ≤ 𝑑 − ] 𝑛 − ≥ 𝑛 (cid:16) 𝐿 − 𝑛 (cid:17) ( 𝑈 − 𝐿 ) (cid:18) 𝑛 (cid:19) (cid:18) 𝑒 𝑛 √ (cid:19) (cid:18) − 𝑛 √ (cid:19) 𝑛 − ≥ 𝑒 √ · ln 𝑛. The second inequality follows by Lemma 20. The third inequality follows by Lemma 16. The fourthinequality follows by the definition of 𝐿 and 𝑈 . Finally, the fifth inequality follows by Lemma 14,Lemma 19, and the fact (cid:16) − 𝑛 √ (cid:17) 𝑛 − ≥ / 𝑒 . Theorem 13 follows. (cid:3) Our polylogarithmic upper bound in Section 4 shows that prior information can yield dramaticimprovements on the performance of simple impartial selection mechanisms. It also gives hopethat AVD could be similarly efficient for the more general opinion poll instances. Unfortunately,this is not true as the following counter-example indicates.Indeed, starting from a uniform instance with 𝑛 + 𝑝 = /
2, we add a new copy j′ for each node j. Also, for every edge (i, j) realized in the original instance, we add the edge (i, j′). In this way, we construct opinion poll instances with 2(n + 1) nodes, where no node can ever beat all the other nodes. Hence, in such instances, AVD will behave as the constant mechanism in the original instance and will always return the default node as winner. By applying Theorem 6 we obtain the following negative result for AVD. Theorem 21.
When applied on opinion poll instances, the AVD mechanism has expected additive approximation Ω(√(n ln n)). We should note that the above construction is fragile, in the sense that it exploits a very specific aspect of the mechanism. So, still, the quest of designing deterministic mechanisms that achieve polylogarithmic additive approximation in the opinion poll model is very important and challenging. A starting step could be to restrict our attention to the instances considered in [19], in which every voter approves exactly one other candidate. Finally, throughout the paper, we have assumed that the prior information is reliable. This should not be expected to be the case in practice. We expect that our results on the constant mechanism still hold if we have a rough estimate of the highest in-degree. Highest accuracy seems to
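The duplication trick behind Theorem 21 can be illustrated concretely. The sketch below (the instance size and the uniform sampling are illustrative assumptions, not the paper's exact construction) builds a duplicated instance and checks that no node is approved by all others, so AVD falls back to the default node:

```python
import random

def duplicated_instance(n, p, rng):
    """Realize a uniform instance on nodes 0..n-1, then give every node j a
    copy j' = j + n that receives a parallel edge (i, j') for each realized
    edge (i, j).  Copies cast no votes."""
    edges = set()
    for i in range(n):
        for j in range(n):
            if i != j and rng.random() < p:
                edges.add((i, j))
                edges.add((i, j + n))  # parallel edge to the copy j'
    return edges

rng = random.Random(0)
n = 20
edges = duplicated_instance(n, 0.5, rng)
num_nodes = 2 * n
for j in range(num_nodes):
    approvers = {i for (i, k) in edges if k == j}
    # copies receive no votes and originals are never approved by copies,
    # so no node is ever approved by all of the other 2n - 1 nodes
    assert len(approvers) < num_nodes - 1
```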
ACKNOWLEDGEMENTS
This work was partially supported by COST Action 16228 “European Network for Game Theory”.
REFERENCES
[1] Noga Alon, Felix Fischer, Ariel Procaccia, and Moshe Tennenholtz. Sum of us: Strategyproof selection from the selectors. In Proceedings of the 13th Conference on Theoretical Aspects of Rationality and Knowledge (TARK), pages 101–110, 2011.
[2] Robert B. Ash. Information Theory. Courier Corporation, 1990.
[3] Haris Aziz, Omer Lev, Nicholas Mattei, Jeffrey S. Rosenschein, and Toby Walsh. Strategyproof peer selection using randomization, partitioning, and apportionment. Artificial Intelligence, 275:295–309, 2019.
[4] Yakov Babichenko, Oren Dean, and Moshe Tennenholtz. Incentive-compatible diffusion. In Proceedings of the 27th International Conference on World Wide Web (WWW), pages 1379–1388, 2018.
[5] Yakov Babichenko, Oren Dean, and Moshe Tennenholtz. Incentive-compatible selection mechanisms for forests. In Proceedings of the 21st ACM Conference on Economics and Computation (EC), pages 111–131, 2020.
[6] Antje Bjelde, Felix Fischer, and Max Klimm. Impartial selection and the power of up to two choices. ACM Transactions on Economics and Computation, 5(4):21, 2017.
[7] Béla Bollobás. Random Graphs. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2nd edition, 2001.
[8] Nicolas Bousquet, Sergey Norin, and Adrian Vetta. A near-optimal mechanism for impartial selection. In Proceedings of the 10th International Conference on Web and Internet Economics (WINE), pages 133–146, 2014.
[9] J.J.A.M. Brands, F.W. Steutel, and R.J.G. Wilms. On the number of maxima in a discrete sample. Statistics and Probability Letters, 20(3):209–217, 1994.
[10] Ioannis Caragiannis, George Christodoulou, and Nicos Protopapas. Impartial selection with additive approximation guarantees. In Proceedings of the 12th International Symposium on Algorithmic Game Theory (SAGT), pages 269–283, 2019.
[11] Geoffroy de Clippel, Hervé Moulin, and Nicolaus Tideman. Impartial division of a dollar. Journal of Economic Theory, 139(1):176–191, 2008.
[12] Bennett Eisenberg and Gilbert Stengle. Minimizing the probability of a tie for first place. Journal of Mathematical Analysis and Applications, 198(2):458–472, 1996.
[13] Bennett Eisenberg, Gilbert Stengle, and Gilbert Strang. The asymptotic probability of a tie for first place. The Annals of Applied Probability, 3(3):731–745, 1993.
[14] Paul Erdős and Robin J. Wilson. On the chromatic index of almost all graphs. Journal of Combinatorial Theory, Series B, 23(2-3):255–257, 1977.
[15] Felix Fischer and Max Klimm. Optimal impartial selection. SIAM Journal on Computing, 44(5):1263–1285, 2015.
[16] Alan Frieze and Michał Karoński. Introduction to Random Graphs. Cambridge University Press, 2016.
[17] Jason D. Hartline. Bayesian Mechanism Design. Foundations and Trends in Theoretical Computer Science. Now Publishers, 2013.
[18] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, 1963.
[19] Ron Holzman and Hervé Moulin. Impartial nominations for a prize. Econometrica, 81(1):173–196, 2013.
[20] Anson Kahng, Yasmine Kotturi, Chinmay Kulkarni, David Kurokawa, and Ariel D. Procaccia. Ranking wily people who rank each other. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), pages 1087–1094, 2018.
[21] David Kurokawa, Omer Lev, Jamie Morgenstern, and Ariel D. Procaccia. Impartial peer review. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), pages 582–588, 2015.
[22] Jean-Francois Laslier and M. Remzi Sanver, editors. Handbook on Approval Voting. Springer, 2010.
[23] Andrew Mackenzie. Symmetry and impartial lotteries. Games and Economic Behavior, 94:15–28, 2015.
[24] Andrew Mackenzie. An axiomatic analysis of the papal conclave. Economic Theory, 69:713–743, 2020.
[25] Nicholas Mattei, Paolo Turrini, and Stanislav Zhydkov. PeerNomination: Relaxing exactness for increased accuracy in peer selection. In Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI), pages 393–399, 2020.
[26] Rajeev Motwani and Prabhakar Raghavan. Randomized Algorithms. Cambridge University Press, 1995.
[27] Masashi Okamoto. Some inequalities relating to the partial sum of binomial probabilities. Annals of the Institute of Statistical Mathematics, 10(1):29–35, 1958.
[28] Shohei Tamura. Characterizing minimal impartial rules for awarding prizes. Games and Economic Behavior, 95:41–46, 2016.
[29] Shohei Tamura and Shinji Ohseto. Impartial nomination correspondences. Social Choice and Welfare, 43(1):47–54, 2014.
A APPENDIX: MULTIPLICATIVE APPROXIMATION AND VOTER CORRELATION
In the following, we briefly justify two main decisions that we have taken. First, we show that knowing the prior cannot help us improve the approximation ratio of 2 that is best possible for worst-case inputs. This explains why we have completely ignored the study of multiplicative approximations when prior information is available. We extend the approximation ratio ρ of a mechanism f against a prior P as follows:
ρ = E_{x∼P}[Δ(x)] / E_{x∼P}[d_{f(x)}(x)].
Theorem 22.
For every ε > 0, no impartial selection mechanism has approximation ratio better than 2 − ε against all uniform priors. Proof.
Consider uniform instances with two nodes u and v of popularity p. Clearly, E[Δ(x)] = 1 − (1 − p)² = 2p − p². We show that, for every impartial mechanism f, it holds that E[d_{f(x)}(x)] ≤ p; the theorem then follows by taking p to be sufficiently small. Indeed, consider the profile consisting of the two directed edges between u and v and let q_u and q_v be the probabilities that the winner is node u and node v, respectively. Impartiality means that node u is the winner with probability q_u at the profile consisting only of the directed edge from v to u, and node v is the winner with probability q_v at the profile consisting only of the directed edge from u to v. Overall,
E[d_{f(x)}(x)] = (q_u + q_v)·p² + q_u·p·(1 − p) + q_v·(1 − p)·p = (q_u + q_v)·p ≤ p.
Notice that our argument includes randomized mechanisms that may return no winner with positive probability at some profiles. □
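The accounting in the two-node argument is simple enough to verify numerically. The snippet below fixes q_u = q_v = 1/2 as one illustrative impartial mechanism (any impartial f with q_u + q_v ≤ 1 obeys the same bound) and checks that the winner's expected degree stays at most p while the benchmark is 2p − p²:

```python
# Numerical companion to the two-node argument; q_u = q_v = 1/2 is an
# illustrative choice, not the only impartial mechanism.
def expected_max_degree(p):
    # E[Delta(x)] = Pr[at least one approval] = 1 - (1 - p)^2 = 2p - p^2
    return 2 * p - p * p

def expected_winner_degree(p, q_u=0.5, q_v=0.5):
    # both edges present: the winner has in-degree 1;
    # a single edge present: impartiality fixes each node's win probability
    return (q_u + q_v) * p * p + q_u * p * (1 - p) + q_v * (1 - p) * p

for p in (0.5, 0.1, 0.01):
    assert expected_winner_degree(p) <= p + 1e-12  # E[d_f] <= p
# the ratio E[Delta]/E[d_f] equals 2 - p, approaching 2 as p -> 0
assert expected_max_degree(0.001) / expected_winner_degree(0.001) > 1.99
```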
Second, we show that our assumption about voter independence is crucial since, otherwise, evenour most appealing AVD mechanism has linear additive approximation.
Example 1.
Consider the following instance with 8k + 2 nodes: two sets A and B of 4k nodes each and two additional nodes a and b. Node a is approved by no node with probability 1/2 and by all 4k nodes of set A with probability 1/2 (that is, the approvals of the nodes of A are perfectly correlated). Similarly, and independently from the approvals to node a, node b is approved by no node with probability 1/2 and by all nodes of B with probability 1/2. Notice that there is always a tie and hence AVD always selects the default node, which cannot have expected in-degree higher than 2k. The highest in-degree is 4k with probability 3/4 and 0 with probability 1/4, i.e., the expected highest in-degree is 3k. Hence, the additive approximation is at least k, which is linear in the number of nodes.
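The arithmetic of this example can be checked by simulation. The sketch below samples only the in-degrees of a and b (in the construction no other node receives approvals, and the tie-breaking details of the mechanism are abstracted away as an assumption):

```python
import random

def sample_max_degree(k, rng):
    """One draw of the highest in-degree: a gets 4k approvals (all of A)
    or none, each with probability 1/2, and independently for b with B."""
    deg_a = 4 * k if rng.random() < 0.5 else 0
    deg_b = 4 * k if rng.random() < 0.5 else 0
    return max(deg_a, deg_b)

rng = random.Random(1)
k, trials = 50, 20000
avg = sum(sample_max_degree(k, rng) for _ in range(trials)) / trials
# E[max in-degree] = 4k * (3/4) = 3k, while the default node's expected
# in-degree is at most 2k, so the additive gap is Omega(k)
assert abs(avg - 3 * k) < 0.1 * k
```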