Vote Delegation with Uncertain Number of Voters
Vote Delegation Favors Minority
Hans Gersbach
CER-ETH and CEPR, Zürichbergstrasse 18, 8092 Zurich, [email protected]
Akaki Mamageishvili
CER-ETH, Zürichbergstrasse 18, 8092 Zurich, [email protected]
Manvir Schneider
CER-ETH, Zürichbergstrasse 18, 8092 Zurich, [email protected]
Last updated: February 18, 2021
Abstract
We examine vote delegation when delegators do not know the preferences of representatives. We show that free delegation favors minorities, that is, alternatives that have a lower chance of winning ex ante. The same occurs, to a lesser degree, if the number of voting rights actual voters can hold is capped. However, when the fraction of delegators increases, the probability that the ex-ante minority wins under free and capped delegation converges to the one under conventional voting, albeit non-monotonically. Finally, when the total number of voters converges to infinity with a fixed fraction of majority voters, all three probabilities converge to one, no matter the number of delegators. Therefore, vote delegation is safe on a large scale.
Introduction
While a particular type of vote delegation, in the form of proxy voting in shareholder meetings, has a long tradition, whether and how vote delegation should be allowed in other environments has become a prominent theme in scientific and public discussions. "Liquid democracy", for instance, is a concept entailing the right for citizens to delegate voting rights to other citizens. It has been advocated by several political parties and is used internally by the German Pirate party and the Demoex party in Sweden. It is implemented in several online platforms such as LiquidFeedback and Google Votes; a further reference is Escoffier B. (2019). It is also an important way to govern blockchains, see Goodman (2014) and Damgård et al. (2020). A priori, it is not obvious how liquid voting compares to standard voting procedures such as voluntary voting. It is thus useful to compare liquid democracy to standard voting in specific environments.

In this paper, we examine how voting outcomes are affected when vote delegation is allowed but delegators do not know the preferences of the proxy to whom they delegate. In particular, we study the following problem. A polity, say a jurisdiction or a blockchain community, decides between two alternatives A and B. A fraction of individuals does not want to participate in the voting. Henceforth, these individuals are called "delegators". They abstain in traditional voting and delegate their vote in liquid democracy. There are many reasons why individuals do not want to participate in voting: they may want to avoid the costs of informing themselves, or they may not want to incur the cost of voting.
In the context of blockchains and proof-of-stake, individuals may have the option to delegate their stake to other individuals, thereby participating in the rewards from validating transactions without having to run the validation software themselves.

Our central assumption is that delegators do not know the preferences of the individuals who will vote, that is, whether these individuals favor A or B. We call individuals willing to vote "voters", individuals favoring A (B) "A-voters" ("B-voters"), and the collection of A-voters (B-voters) party A (party B). From the perspective of delegators, of voters themselves, and of any outsider who might want to design vote delegation, the probability that a voter favors A is some number p (0 < p < 1). If p < 1/2, the chance of alternative A to win under standard voting is positive but below 1/2. We call party A the "ex-ante minority" in such cases. In the context of blockchains, this could also mean that individuals are honest or dishonest when they validate transactions, for instance.

We compare three potential voting schemes. First, under standard voting, delegation is not allowed and thus delegators abstain. Second, with free vote delegation, arbitrary delegation is allowed. Since delegators do not know the preferences of voters, all voters are alike from the delegators' perspective, and every voter has the same chance to obtain a vote from delegators. We thus model free delegation as a random process in which delegated voting rights are randomly and uniformly assigned to voters. This can be achieved in one step where delegators know the pool of voters, or by transitive delegation, in which each individual can delegate his/her voting right to some other individual, who, in turn, can delegate the accumulated voting rights to a next individual. This process stops when an individual decides to use the received votes and exert his/her voting right. Third, we introduce capped vote delegation.
With capped vote delegation, the number of votes an individual can receive and cast is limited. Essentially, if a voter has reached the cap of voting rights through delegation, s/he drops out of the pool of voters that can receive delegated votes. Thus, only voters who have not reached the cap yet will be able to receive further voting rights.
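To make the three schemes concrete before turning to the formal model, here is a small Monte Carlo sketch (our own illustration, not part of the paper; the sequential uniform assignment of delegated votes, the fair-coin treatment of ties, and all parameter values are assumptions matching the verbal description above):

```python
import math
import random

def poisson(lam, rng):
    # Knuth's method; adequate for the small expected values used here.
    threshold = math.exp(-lam)
    k, prod = 0, rng.random()
    while prod > threshold:
        k += 1
        prod *= rng.random()
    return k

def simulate(n, p, m, cap=None, trials=20000, seed=7):
    """Estimate the probability that alternative A wins.

    n: expected number of voters (Poisson), p: probability that a voter
    is an A-voter, m: number of delegators, cap: maximum votes per voter
    (None = free delegation).  m = 0 reproduces conventional voting.
    """
    rng = random.Random(seed)
    wins = 0.0
    for _ in range(trials):
        k = poisson(n * p, rng)            # A-voters
        l = poisson(n * (1 - p), rng)      # B-voters
        counts = [1] * (k + l)             # every voter starts with one vote
        for _ in range(m):
            pool = [i for i, v in enumerate(counts) if cap is None or v < cap]
            if not pool:
                break                      # superfluous voting rights are discarded
            counts[rng.choice(pool)] += 1  # uniform over the remaining pool
        a, b = sum(counts[:k]), sum(counts[k:])
        wins += 1.0 if a > b else 0.5 if a == b else 0.0
    return wins / trials
```

For instance, `simulate(20, 0.6, 0)` estimates the conventional-voting probability that A wins, while `simulate(20, 0.6, 10)` and `simulate(20, 0.6, 10, cap=2)` estimate the free and capped variants; the formal results below show that the exact probabilities are ordered free < capped < conventional for p > 1/2.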
We establish three main results. First, compared to standard voting, free delegation favors the minority. That is, if p < 1/2, and thus alternative A has a lower chance to win than alternative B under standard voting, free delegation strictly increases the probability of A to win. This result, which requires an elaborate proof, shows that vote delegation can not only reverse voting outcomes but is a chance for the minority to win. Second, the same occurs with capped delegation, albeit to a lesser degree. That is, capped delegation also increases the minority's probability to win, but the increase is smaller than under free delegation. Third, when the fraction of delegators increases compared to voters, the probability that minorities win under free or capped delegation converges to the probability under standard voting. However, the convergence is not monotonic. The results show that outcomes with delegation may significantly differ from standard election outcomes and probabilistically violate the "principe majoritaire", i.e. a majority of citizens who are willing to cast a vote for a particular alternative may lose. This result might caution blockchain companies against implementing vote delegation, as it might raise the risk that dishonest agents become a majority through vote delegation.

A couple of recent papers shed new light on how vote delegation impacts outcomes. Important studies examining vote delegation from an algorithmic and AI perspective have developed several delegation rules that allow examining how many votes a representative can and should obtain, and whether delegating votes to neighbours in the network may deliver more desirable outcomes than direct voting. The result of Kahng et al. (2018) is similar to our result in spirit: the authors show that even with delegation from a less informed to a better informed voter, the probability of implementing the right alternative in a generic network is decreasing.
The setting studied in that paper is different, though, as it focuses on the information acquisition aspect of voting, with voters having the same preferences. Delegation games on networks were studied by Bloembergen et al. (2019) and Escoffier B. (2019). Bloembergen et al. (2019) identify conditions for the existence of Nash equilibria of such games, while Escoffier B. (2019) shows that in more general setups, no Nash equilibrium may exist, and it may even be NP-complete to decide whether one exists at all. We adopt a collective decision framework and study how free delegation or capped delegation impacts outcomes when delegators do not know the preferences of the individuals to whom they delegate. We model the number of voters as a Poisson random variable. This assumption is a standard tool to make the analysis of voting outcomes analytically tractable. It was introduced in Myerson (1998).
Model

We consider a large polity (a society or a blockchain community) that faces a binary choice between two alternatives A and B. There is a group of m ∈ N⁺ individuals who do not want to vote and either abstain under conventional voting or delegate their voting rights under vote delegation. The remaining population votes; these individuals are thus called "voters". They have private information about their preference for A or B. Hence, when individuals delegate their voting rights to a voter, this is equivalent to uniformly and randomly delegating a voting right to one voter. From the perspective of other individuals, a voter prefers A (B) with probability p (1 − p), where 0 < p < 1. (See Gölz et al. (2018) and Kotsialou and Riley (2020); see also Kahng et al. (2018).) Without loss of generality, we assume p > 1/2. Voters favoring A (B) are called "A-voters" ("B-voters") and the respective group is called "A-party" ("B-party"). We assume that the number of voters can be described by a Poisson random variable with expected value n ∈ N.

We compare three voting processes:

• Conventional voting: Each of the n voters casts one vote.
• Free delegation: The m delegators randomly delegate their voting rights. Each voter casts all votes corresponding to his/her voting rights.
• Capped delegation: The m delegators delegate their voting rights randomly to voters. If a voter has reached the cap, all further voting rights are distributed among the remaining voters up to the cap. If all voters have reached their cap, superfluous voting rights are eliminated.

We start with conventional voting and denote by P(n, p) the probability that A wins. It is calculated by the following formula:

P(n, p) = \sum_{k=0}^{\infty} \sum_{l=0}^{\infty} \frac{(np)^k}{k!} e^{-np} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} g(k, l),   (1)

where g(k, l) is the probability that A-voters are in the majority if there are k A-voters and l B-voters; it is given by g(k, l) := 1 if k > l, 1/2 if k = l, and 0 if k < l.

In the case of free vote delegation, the probability that A wins, denoted by P(n, p, m), is equal to:

P(n, p, m) = \sum_{k=0}^{\infty} \sum_{l=0}^{\infty} \frac{(np)^k}{k!} e^{-np} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} G(k, l, m),   (2)

where G(k, l, m) denotes the probability that A-voters win if there are k A-voters, l B-voters, and m individuals delegate their votes. It is calculated in the following way:

G(k, l, m) = \sum_{h=0}^{m} \binom{m}{h} \left(\frac{k}{k+l}\right)^{h} \left(\frac{l}{k+l}\right)^{m-h} g(k + h, l + m - h).

We now consider the delegation procedure when delegated votes are distributed randomly among the other voters, with the restriction that no voter obtains more than c votes.
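The double sums in (1) and (2) are easy to evaluate numerically by truncating the Poisson tails; the following sketch is our own illustration (the truncation bound K and the tie-breaking convention for an empty electorate are our assumptions):

```python
from math import comb, exp, factorial

def g(k, l):
    # Majority indicator with fair tie-breaking: 1, 1/2 or 0.
    return 1.0 if k > l else 0.5 if k == l else 0.0

def G(k, l, m):
    # Probability that the A-party wins when each of m delegated votes
    # independently reaches an A-voter with probability k/(k+l).
    if k + l == 0:
        return 0.5  # empty-electorate convention (our assumption)
    q = k / (k + l)
    return sum(comb(m, h) * q ** h * (1 - q) ** (m - h) * g(k + h, l + m - h)
               for h in range(m + 1))

def P(n, p, m=0, K=60):
    # Equations (1) and (2) with both Poisson sums truncated at K terms;
    # m = 0 gives conventional voting.
    return sum(exp(-n * p) * (n * p) ** k / factorial(k)
               * exp(-n * (1 - p)) * (n * (1 - p)) ** l / factorial(l)
               * G(k, l, m)
               for k in range(K) for l in range(K))
```

For instance, `P(20, 0.6)` gives the conventional-voting probability that A wins, and `P(20, 0.6, 10)` comes out strictly smaller, in line with the formal results below.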
If a voter already has c votes, s/he is not allowed to receive more and, therefore, leaves the pool of receivers. We invalidate the rest of the delegated votes, if there are any. With cap c, the probability that A-voters have a majority, denoted by P_c(n, p, m), is equal to

P_c(n, p, m) = \sum_{k=0}^{\infty} \sum_{l=0}^{\infty} \frac{(np)^k}{k!} e^{-np} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} G_c(k, l, m),   (3)

where G_c(k, l, m) denotes the probability that A-voters win if there are k A-voters, l B-voters and m voters who delegate their votes, with an individual voter being allowed to have c votes at most. The value is calculated in the following way:

G_c(k, l, m) = \sum_{h=0}^{m} \binom{m}{h} \left(\frac{k}{k+l}\right)^{h} \left(\frac{l}{k+l}\right)^{m-h} g(\min\{k + h, ck\}, \min\{l + m - h, cl\}).

Since ck is the maximum number of votes the majority voters can have, we take \min\{k + h, ck\} to calculate the total number of votes for majority voters. Similarly, cl is the maximum number of votes the minority voters can have, so we take \min\{l + m - h, cl\} to calculate the total number of votes minority voters can have.

For notational convenience, we denote a binomial random variable with parameters n and p by Bin(n, p), for any n ∈ N and p ∈ [0, 1].

To gain an intuition for the formal results, we start with numerical examples.

Table 1: Probabilities of A winning under conventional voting P(n, p), free delegation P(n, p, m), and capped delegation P_c(n, p, m) for cap c = 2.

Table 1 reveals that the likelihood of A winning is smaller with free delegation than under conventional voting. The same occurs with capped delegation but is slightly less pronounced than with free delegation. We also observe that when m increases, both the winning probabilities for free delegation and for capped delegation first decline and then start to converge to the probability under conventional voting.
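Analogously, G_c and P_c from (3) can be evaluated numerically (again our own sketch; K is an arbitrary truncation bound):

```python
from math import comb, exp, factorial

def g(k, l):
    return 1.0 if k > l else 0.5 if k == l else 0.0

def G_cap(k, l, m, c):
    # Equation (3): party totals are truncated at c*k and c*l votes;
    # surplus delegated votes are discarded.
    if k + l == 0:
        return 0.5  # empty-electorate convention (our assumption)
    q = k / (k + l)
    return sum(comb(m, h) * q ** h * (1 - q) ** (m - h)
               * g(min(k + h, c * k), min(l + m - h, c * l))
               for h in range(m + 1))

def P_cap(n, p, m, c, K=60):
    # Truncated Poisson double sum for P_c(n, p, m).
    return sum(exp(-n * p) * (n * p) ** k / factorial(k)
               * exp(-n * (1 - p)) * (n * (1 - p)) ** l / factorial(l)
               * G_cap(k, l, m, c)
               for k in range(K) for l in range(K))
```

Setting m = 0 recovers conventional voting, and a very large cap approximates free delegation, so the qualitative pattern reported in Table 1 (free < capped < conventional for p > 1/2) can be reproduced, e.g. with n = 20 and p = 0.6.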
The pattern in Table 1 repeats for different values of n and p.

Results

In the following, we prove formal results. We start with free delegation and show the following:
Theorem 1. P(n, p, m) < P(n, p) for any m > 0 and p > 1/2.

In other words, Theorem 1 says that free delegation probabilistically favors the ex-ante minority, since the probability that the minority wins under vote delegation is 1 − P(n, p, m), while the probability that the minority wins under conventional voting is equal to 1 − P(n, p). Before proving the theorem, we show the following lemma:

Lemma 1.
For any l, t, m ∈ N, we have G(l + t, l, m) + G(l, l + t, m) = 1.

Proof of Lemma 1. First, we note:

G(l + t, l, m) = \sum_{h=\lfloor (m-t)/2 \rfloor + 1}^{m} \binom{m}{h} \left(\frac{l+t}{2l+t}\right)^{h} \left(\frac{l}{2l+t}\right)^{m-h} + \frac{1}{2} \binom{m}{(m-t)/2} \left(\frac{l+t}{2l+t}\right)^{(m-t)/2} \left(\frac{l}{2l+t}\right)^{(m+t)/2}

and

G(l, l + t, m) = \sum_{h=\lfloor (m+t)/2 \rfloor + 1}^{m} \binom{m}{h} \left(\frac{l}{2l+t}\right)^{h} \left(\frac{l+t}{2l+t}\right)^{m-h} + \frac{1}{2} \binom{m}{(m+t)/2} \left(\frac{l}{2l+t}\right)^{(m+t)/2} \left(\frac{l+t}{2l+t}\right)^{(m-t)/2}.

We define a binomial random variable X with parameters m and (l + t)/(2l + t). Then,

G(l + t, l, m) = P[X > (m − t)/2] + (1/2) P[X = (m − t)/2],   (4)

where the second summand vanishes if m − t is odd. We define Y as Bin(m, l/(2l + t)). It follows that

G(l, l + t, m) = P[Y > (m + t)/2] + (1/2) P[Y = (m + t)/2].

We use the symmetry property of the binomial distribution: for Z ∼ Bin(m, q) and Z' = m − Z, we have Z' ∼ Bin(m, 1 − q). Hence, we may take Y = m − X. Then, it follows:

G(l + t, l, m) + G(l, l + t, m)
= P[X > (m − t)/2] + (1/2) P[X = (m − t)/2] + P[Y > (m + t)/2] + (1/2) P[Y = (m + t)/2]
= P[X > (m − t)/2] + (1/2) P[X = (m − t)/2] + P[m − X > (m + t)/2] + (1/2) P[m − X = (m + t)/2]
= P[X > (m − t)/2] + P[X = (m − t)/2] + P[X < (m − t)/2]
= P[X ≥ 0] = 1.

With this lemma, we proceed to prove Theorem 1.
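The identity of Lemma 1 can be confirmed numerically; the following check (ours, not part of the formal argument) evaluates G directly from its definition in (2):

```python
from math import comb

def g(k, l):
    return 1.0 if k > l else 0.5 if k == l else 0.0

def G(k, l, m):
    # Free-delegation winning probability of the A-party, equation (2).
    q = k / (k + l)
    return sum(comb(m, h) * q ** h * (1 - q) ** (m - h) * g(k + h, l + m - h)
               for h in range(m + 1))

# Lemma 1: G(l + t, l, m) + G(l, l + t, m) = 1, checked on a small grid.
for l in range(5):
    for t in range(5):
        if l + t == 0:
            continue  # G is undefined for an empty electorate
        for m in range(7):
            assert abs(G(l + t, l, m) + G(l, l + t, m) - 1.0) < 1e-12
```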
Proof of Theorem 1.
To prove that P(n, p, m) is smaller than P(n, p) for m > 0 and p > 1/2, we compare P(n, p, m) and P(n, p), given in (2) and (1), respectively. We consider different cases of k, l and compare the summands of P(n, p, m) and P(n, p). Recall the definition of g(k, l). For fixed k > l, the summand in P(n, p) is equal to

\frac{(np)^k}{k!} e^{-np} \frac{(n(1-p))^l}{l!} e^{-n(1-p)},

whereas the summand in P(n, p, m) is

\frac{(np)^k}{k!} e^{-np} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} G(k, l, m) ≤ \frac{(np)^k}{k!} e^{-np} \frac{(n(1-p))^l}{l!} e^{-n(1-p)},

since G(k, l, m) ≤ 1. That is, the summand in P(n, p, m) is smaller than (or equal to) the summand in P(n, p) for k > l. Their difference is

\frac{(np)^k}{k!} e^{-np} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} (1 − G(k, l, m)) ≥ 0.

For k = l, both g(k, l) and G(k, l, m) are equal to 1/2 and, consequently, the corresponding summands in P(n, p, m) and P(n, p) are equal. The only critical case is fixed k < l. In this case, the summand in P(n, p) is zero, whereas the summand in P(n, p, m) is greater than or equal to zero. That is, the summand in P(n, p, m) is greater than (or equal to) the summand in P(n, p) for k < l. The difference is

\frac{(np)^k}{k!} e^{-np} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} (0 − G(k, l, m)) = −\frac{(np)^k}{k!} e^{-np} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} G(k, l, m).

We compare the differences in the cases k > l and k < l. If the difference in the case k > l is greater than the difference in absolute value in the case k < l, then overall, P(n, p, m) < P(n, p). For given m, we consider the following cases: If k ≥ l + m + 1, then g(k, l) = G(k, l, m) = 1. If k ≤ l − (m + 1), then g(k, l) = G(k, l, m) = 0. If k = l, then g(k, l) = G(k, l, m) = 1/2. The 2m relevant cases are k = l ± 1, k = l ± 2, ..., k = l ± m.

Let t ∈ {1, 2, ..., m}. If k = l + t, then g(l + t, l) = 1, and if k = l − t, then g(l − t, l) = 0. By Lemma 1, we have

G(l + t, l, m) + G(l, l + t, m) = 1.   (5)

Let x denote np and y denote n(1 − p). We subtract from the terms in P(n, p) − P(n, p, m) where k = l + t the terms in P(n, p) − P(n, p, m) where k = l − t, that is,

\frac{1}{e^n} \sum_{l=0}^{\infty} \frac{y^l}{l!} \frac{x^k}{k!} (g(k, l) − G(k, l, m)) \big|_{k=l+t} − \frac{1}{e^n} \sum_{l=0}^{\infty} \frac{y^l}{l!} \frac{x^k}{k!} |g(k, l) − G(k, l, m)| \big|_{k=l-t}.

Because g(l + t, l) = 1 and g(l, l + t) = 0, we obtain the following:

\frac{1}{e^n} \sum_{l=0}^{\infty} \frac{y^l}{l!} \frac{x^{l+t}}{(l+t)!} (1 − G(l + t, l, m)) − \frac{1}{e^n} \sum_{l=0}^{\infty} \frac{y^l}{l!} \frac{x^{l-t}}{(l-t)!} G(l − t, l, m) =: f_t(x, y).

We simplify f_t(x, y) by rearranging terms and using (5). In the first part of the sum, we only use (5); in the second part, we note that the first nonzero terms are for l = t, as 1/(l − t)! = 0 for l < t. That is, for the second part of f_t(x, y), we change the variable l → l + t and obtain

f_t(x, y) = \frac{1}{e^n} x^t \sum_{l=0}^{\infty} \frac{y^l}{l!} \frac{x^l}{(l+t)!} G(l, l + t, m) − \frac{1}{e^n} y^t \sum_{l=0}^{\infty} \frac{y^l}{l!} \frac{x^l}{(l+t)!} G(l, l + t, m) = (x^t − y^t) r > 0,

where r > 0 is given by

r := \frac{1}{e^n} \sum_{l=0}^{\infty} \frac{y^l}{l!} \frac{x^l}{(l+t)!} G(l, l + t, m),

and x^t > y^t since p > 1/2. This holds for all relevant cases (t ∈ {1, 2, ..., m}). As the difference to 1 in the cases where k is larger than l in P(n, p, m) is greater than the difference to 0 in the cases where k is smaller than l, it follows that P(n, p, m) < P(n, p) for m > 0 and p > 1/2.

We establish the following results in the case of capped delegation:
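The key decomposition f_t(x, y) = (x^t − y^t) r in the proof can be checked numerically with truncated sums (our own sanity check; the truncation bound L is arbitrary):

```python
from math import comb, exp, factorial

def g(k, l):
    return 1.0 if k > l else 0.5 if k == l else 0.0

def G(k, l, m):
    # Free-delegation winning probability of the A-party, equation (2).
    q = k / (k + l)
    return sum(comb(m, h) * q ** h * (1 - q) ** (m - h) * g(k + h, l + m - h)
               for h in range(m + 1))

def f_t(x, y, t, m, n, L=60):
    # Net difference between the k = l + t and k = l - t terms of
    # P(n, p) - P(n, p, m), as defined in the proof of Theorem 1.
    s1 = sum(y ** l / factorial(l) * x ** (l + t) / factorial(l + t)
             * (1.0 - G(l + t, l, m)) for l in range(L))
    s2 = sum(y ** l / factorial(l) * x ** (l - t) / factorial(l - t)
             * G(l - t, l, m) for l in range(t, L))
    return exp(-n) * (s1 - s2)

def r_t(x, y, t, m, n, L=60):
    # The positive factor r from the proof.
    return exp(-n) * sum(y ** l * x ** l / (factorial(l) * factorial(l + t))
                         * G(l, l + t, m) for l in range(L))

# With x = np > y = n(1-p) (p > 1/2), f_t = (x^t - y^t) r > 0.
n, p, m = 10, 0.7, 5
x, y = n * p, n * (1 - p)
for t in range(1, m + 1):
    assert abs(f_t(x, y, t, m, n) - (x ** t - y ** t) * r_t(x, y, t, m, n)) < 1e-9
    assert f_t(x, y, t, m, n) > 0.0
```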
Theorem 2. P_c(n, p, m) < P(n, p) for any m > 0, c > 1 and p > 1/2.

Theorem 3. P(n, p, m) < P_c(n, p, m) for any m > 0, c > 1 and p > 1/2.

Theorem 2 shows that capped delegation probabilistically favors the ex-ante minority. On the other hand, Theorem 3 shows that capped delegation is better for the ex-ante majority than free delegation. Together, they imply that capped delegation lies in between standard voting (no delegation) and free delegation with respect to the probability of the majority winning. The latter also confirms the observation obtained in the numerical example. Before proving the theorems, we need to prove the following two lemmata:
Lemma 2
For c ≥ 2, l, m ∈ N and t ∈ {1, ..., m}: G_c(l + t, l, m) + G_c(l, l + t, m) = 1.

Proof of Lemma 2.
First, we consider G_c(l + t, l, m):

G_c(l + t, l, m) = \sum_{h=0}^{m} \binom{m}{h} \left(\frac{l+t}{2l+t}\right)^{h} \left(\frac{l}{2l+t}\right)^{m-h} g(\min(l + t + h, c(l + t)), \min(l + m - h, cl)).

We have to distinguish between the following four cases, which can arise because of the min-functions in the arguments of g:

1. l + t + m ≤ c(l + t) and l + m ≤ cl,
2. l + t + m ≤ c(l + t) and l + m > cl,
3. l + t + m > c(l + t) and l + m ≤ cl,
4. l + t + m > c(l + t) and l + m > cl.

Note that the second and third cases are impossible, because if one condition holds, then the other does not. Hence, we focus on Cases 1 and 4.

Case 1: l + t + m ≤ c(l + t) and l + m ≤ cl. In this case, the value g(l + t + h, l + m − h) is 1 if h > (m − t)/2 and 1/2 if h = (m − t)/2. Hence,

G_c(l + t, l, m) = P[X > (m − t)/2] + (1/2) P[X = (m − t)/2],

where X ∼ Bin(m, (l + t)/(2l + t)). Similarly, in this case,

G_c(l, l + t, m) = P[Y > (m + t)/2] + (1/2) P[Y = (m + t)/2],

where Y = m − X is a binomial random variable with parameters m and l/(2l + t). Taken together, in Case 1:

G_c(l + t, l, m) + G_c(l, l + t, m) = P[X > (m − t)/2] + (1/2) P[X = (m − t)/2] + P[(m − t)/2 > X] + (1/2) P[X = (m − t)/2] = P[X ≥ 0] = 1.

Case 4: l + t + m > c(l + t) and l + m > cl. As l + t < c(l + t) and l < cl, there exist h₁, h₂ ∈ {0, ..., m − 1} such that l + t + h₁ = c(l + t) and l + h₂ = cl. We note that h₂ < h₁, because h₁ = (c − 1)(l + t) and h₂ = (c − 1)l. Then there are two cases: either h₁ ≤ m − h₂ or h₁ > m − h₂.

Subcase 4.1: If h₁ ≤ m − h₂, then we have the following:

g(\min(l + t + h, c(l + t)), \min(l + m - h, cl)) =
  g(l + t + h, cl), when h ∈ {0, ..., h₁},
  g(c(l + t), cl), when h ∈ {h₁ + 1, ..., m − h₂},
  g(c(l + t), l + m − h), when h ∈ {m − h₂ + 1, ..., m}.

Let X ∼ Bin(m, (l + t)/(2l + t)) and note that, from the expression above:

G_c(l + t, l, m) = P[h₁ ≥ X > (c − 1)l − t] + (1/2) P[X = (c − 1)l − t] + P[m − h₂ ≥ X ≥ h₁ + 1] + P[X ≥ m − h₂ + 1]
= P[h₁ ≥ X > h₂ − t] + (1/2) P[X = h₂ − t] + P[m − h₂ ≥ X ≥ h₁ + 1] + P[X ≥ m − h₂ + 1]
= P[X > h₂ − t] + (1/2) P[X = h₂ − t].

In the same setting, we have for G_c(l, l + t, m):

g(\min(l + h, cl), \min(l + t + m - h, c(l + t))) =
  g(l + h, c(l + t)), when h ∈ {0, ..., h₂},
  g(cl, c(l + t)), when h ∈ {h₂ + 1, ..., m − h₁},
  g(cl, l + t + m − h), when h ∈ {m − h₁ + 1, ..., m}.

In the first interval, g is non-zero if h ≥ (c − 1)l + ct = h₂ + ct, but the right endpoint of the first interval is smaller than this. Hence, g is zero on the interval [0, h₂]. Clearly, g is zero on the second interval [h₂ + 1, m − h₁], as cl < c(l + t). On the third interval, we have g = 1 if h > m + t − (c − 1)l = m + t − h₂, and g = 1/2 in case of equality. Hence, for Y = m − X ∼ Bin(m, l/(2l + t)):

G_c(l, l + t, m) = P[Y > m + t − h₂] + (1/2) P[Y = m + t − h₂] = P[m − X > m + t − h₂] + (1/2) P[m − X = m + t − h₂] = P[h₂ − t > X] + (1/2) P[X = h₂ − t].

Hence, in Subcase 4.1:

G_c(l + t, l, m) + G_c(l, l + t, m) = P[X > h₂ − t] + (1/2) P[X = h₂ − t] + P[h₂ − t > X] + (1/2) P[X = h₂ − t] = 1.

Subcase 4.2: If h₁ > m − h₂, then we have the following:

g(\min(l + t + h, c(l + t)), \min(l + m - h, cl)) =
  g(l + t + h, cl), when h ∈ {0, ..., m − h₂},
  g(l + t + h, l + m − h), when h ∈ {m − h₂ + 1, ..., h₁},
  g(c(l + t), l + m − h), when h ∈ {h₁ + 1, ..., m}.

In what follows, we obtain expressions of the form P[I₁ ∩ I₂] + (1/2) P[X = a], where I₁, I₂ are two discrete intervals. Note that the term (1/2) P[X = a] vanishes if a ∉ I₁. Further note that P[X = (m − t)/2] vanishes if (m − t)/2 ∉ N. We have:

G_c(l + t, l, m) = P[(X ∈ [0, m − h₂]) ∩ (X > h₂ − t)] + (1/2) P[X = h₂ − t] 1{h₂ − t ∈ [0, m − h₂]}
+ P[(X ∈ [m − h₂ + 1, h₁]) ∩ (X > (m − t)/2)] + (1/2) P[X = (m − t)/2] 1{(m − t)/2 ∈ [m − h₂ + 1, h₁]}
+ P[(X ∈ [h₁ + 1, m]) ∩ (X > m − h₂ − ct)] + (1/2) P[X = m − h₂ − ct] 1{m − h₂ − ct ∈ [h₁ + 1, m]}
= P[(X ∈ [0, m − h₂]) ∩ (X > h₂ − t)] + (1/2) P[X = h₂ − t] 1{h₂ − t ∈ [0, m − h₂]}
+ P[(X ∈ [m − h₂ + 1, h₁]) ∩ (X > (m − t)/2)] + (1/2) P[X = (m − t)/2] 1{(m − t)/2 ∈ [m − h₂ + 1, h₁]}
+ P[X ∈ [h₁ + 1, m]],

where the last simplification uses m − h₂ − ct = m − t − h₁ < h₁ + 1, which follows from m − h₂ < h₁, our assumption in this subcase.

In the same setting, to calculate G_c(l, l + t, m), we first observe the following:

g(\min(l + h, cl), \min(l + t + m - h, c(l + t))) =
  g(l + h, c(l + t)), when h ∈ [0, m − h₁],
  g(l + h, l + t + m − h), when h ∈ [m − h₁ + 1, h₂],
  g(cl, l + t + m − h), when h ∈ [h₂ + 1, m].

Hence, for Y = m − X ∼ Bin(m, l/(2l + t)):

G_c(l, l + t, m) = P[(Y ∈ [0, m − h₁]) ∩ (Y > h₂ + ct)] + (1/2) P[Y = h₂ + ct] 1{h₂ + ct ∈ [0, m − h₁]}
+ P[(Y ∈ [m − h₁ + 1, h₂]) ∩ (Y > (m + t)/2)] + (1/2) P[Y = (m + t)/2] 1{(m + t)/2 ∈ [m − h₁ + 1, h₂]}
+ P[(Y ∈ [h₂ + 1, m]) ∩ (Y > m + t − h₂)] + (1/2) P[Y = m + t − h₂] 1{m + t − h₂ ∈ [h₂ + 1, m]}
= P[(X ∈ [m − h₂, h₁ − 1]) ∩ (X < (m − t)/2)] + (1/2) P[X = (m − t)/2] 1{(m − t)/2 ∈ [m − h₂, h₁ − 1]}
+ P[(X ≤ m − h₂ − 1) ∩ (X < h₂ − t)] + (1/2) P[X = h₂ − t] 1{h₂ − t ∈ [0, m − h₂ − 1]}.

In the last equation, the first summand disappeared because m − h₂ < h₁, by assumption. Hence, in Subcase 4.2 we have:

G_c(l + t, l, m) + G_c(l, l + t, m)
= P[(X ∈ [0, m − h₂]) ∩ (X > h₂ − t)] + (1/2) P[X = h₂ − t] 1{h₂ − t ∈ [0, m − h₂]}
+ P[(X ∈ [m − h₂ + 1, h₁]) ∩ (X > (m − t)/2)] + (1/2) P[X = (m − t)/2] 1{(m − t)/2 ∈ [m − h₂ + 1, h₁]}
+ P[X ∈ [h₁ + 1, m]]
+ P[(X ∈ [m − h₂, h₁ − 1]) ∩ (X < (m − t)/2)] + (1/2) P[X = (m − t)/2] 1{(m − t)/2 ∈ [m − h₂, h₁ − 1]}
+ P[(X ≤ m − h₂ − 1) ∩ (X < h₂ − t)] + (1/2) P[X = h₂ − t] 1{h₂ − t ∈ [0, m − h₂ − 1]}.

We have the following subcases:

• If h₂ − t < m − h₂ (i.e. h₂ < (m + t)/2, so that (m − t)/2 < m − h₂):

G_c(l + t, l, m) + G_c(l, l + t, m) = P[X ∈ (h₂ − t, m − h₂]] + (1/2) P[X = h₂ − t] + P[X ∈ [m − h₂ + 1, h₁]] + P[X ∈ [h₁ + 1, m]] + P[X < h₂ − t] + (1/2) P[X = h₂ − t] = 1.

• If h₂ − t > m − h₂ (i.e. h₂ > (m + t)/2, so that (m − t)/2 > m − h₂):

G_c(l + t, l, m) + G_c(l, l + t, m) = P[X ∈ ((m − t)/2, h₁]] + (1/2) P[X = (m − t)/2] + P[X ∈ [h₁ + 1, m]] + P[X ∈ [m − h₂, (m − t)/2)] + (1/2) P[X = (m − t)/2] + P[X ≤ m − h₂ − 1] = 1.

• If h₂ − t = m − h₂ (i.e. h₂ = (m + t)/2, so that (m − t)/2 = m − h₂):

G_c(l + t, l, m) + G_c(l, l + t, m) = (1/2) P[X = h₂ − t] + P[X ∈ [m − h₂ + 1, h₁]] + P[X ∈ [h₁ + 1, m]] + (1/2) P[X = (m − t)/2] + P[X ≤ m − h₂ − 1] = 1.

Lemma 3. G_c(l, l, m) = 1/2 for any c, l, m ∈ N.

Proof of Lemma 3.
Note that all intervals in this proof are discrete, that is, [a, b] stands for the discrete interval {a, ..., b}. We have:

G_c(l, l, m) = \sum_{h=0}^{m} \binom{m}{h} \left(\frac{1}{2}\right)^{m} g(\min(l + h, cl), \min(l + m - h, cl)).

We have to consider two cases. First, if l + m ≤ cl, then

G_c(l, l, m) = \sum_{h=0}^{m} \binom{m}{h} \left(\frac{1}{2}\right)^{m} g(l + h, l + m - h) = P[X > m/2] + (1/2) P[X = m/2],

where X ∼ Bin(m, 1/2) and the last summand vanishes if m/2 ∉ N. By the symmetry of X, it follows that G_c(l, l, m) = 1/2.

Second, if l + m > cl, then there is some h₀ such that l + h₀ = cl. Hence, if h₀ ≤ m − h₀, we have

g(\min(l + h, cl), \min(l + m - h, cl)) =
  g(l + h, cl), when h ∈ {0, ..., h₀},
  g(cl, cl), when h ∈ {h₀ + 1, ..., m − h₀},
  g(cl, l + m − h), when h ∈ {m − h₀ + 1, ..., m}.

It follows that

G_c(l, l, m) = P[X ∈ ((c − 1)l, h₀]] + (1/2) P[X = (c − 1)l] 1{(c − 1)l ∈ [0, h₀]} + (1/2) P[X ∈ [h₀ + 1, m − h₀]] + P[X ∈ (m − (c − 1)l, m]] + (1/2) P[X = m − (c − 1)l] 1{m − (c − 1)l ∈ [m − h₀ + 1, m]}.

By the symmetry of the binomial distribution of X, we have

P[X ∈ ((c − 1)l, h₀]] = P[X ∈ (m − h₀, m − (c − 1)l]] and (1/2) P[X = (c − 1)l] = (1/2) P[X = m − (c − 1)l].

Hence, G_c(l, l, m) = 1/2.

If h₀ > m − h₀, then we have:

g(\min(l + h, cl), \min(l + m - h, cl)) =
  g(l + h, cl), when h ∈ {0, ..., m − h₀},
  g(l + h, l + m − h), when h ∈ {m − h₀ + 1, ..., h₀},
  g(cl, l + m − h), when h ∈ {h₀ + 1, ..., m}.

It follows that

G_c(l, l, m) = P[X ∈ ((c − 1)l, m − h₀]] + (1/2) P[X = (c − 1)l] 1{(c − 1)l ∈ [0, m − h₀]} + P[X ∈ (m/2, h₀]] + (1/2) P[X = m/2] 1{m/2 ∈ [m − h₀ + 1, h₀]} + P[X ∈ (m − (c − 1)l, m]] + (1/2) P[X = m − (c − 1)l] 1{m − (c − 1)l ∈ [h₀ + 1, m]}.

Again, by the symmetry of the binomial distribution of X, we have

P[X ∈ ((c − 1)l, m − h₀]] = P[X ∈ (h₀, m − (c − 1)l]] and (1/2) P[X = (c − 1)l] = (1/2) P[X = m − (c − 1)l].

Hence, G_c(l, l, m) = 1/2.

Next, we prove Theorem 2.

Proof of Theorem 2.
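As with Lemma 1, the identities of Lemmas 2 and 3 are easy to confirm numerically (our own check, using the party-level truncation from equation (3)):

```python
from math import comb

def g(k, l):
    return 1.0 if k > l else 0.5 if k == l else 0.0

def G_cap(k, l, m, c):
    # Capped-delegation winning probability of the A-party, equation (3).
    q = k / (k + l)
    return sum(comb(m, h) * q ** h * (1 - q) ** (m - h)
               * g(min(k + h, c * k), min(l + m - h, c * l))
               for h in range(m + 1))

for c in (2, 3):
    for l in range(1, 5):
        # Lemma 2: the two capped winning probabilities are complementary.
        for t in range(1, 4):
            for m in range(t, 7):
                assert abs(G_cap(l + t, l, m, c) + G_cap(l, l + t, m, c) - 1.0) < 1e-12
        # Lemma 3: with equally sized parties, the probability is exactly 1/2.
        for m in range(7):
            assert abs(G_cap(l, l, m, c) - 0.5) < 1e-12
```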
As for the proof of Theorem 1, the key lies in Lemma 2, and we proceed as in that proof. Recall that g(k, l) = 1 if k > l, g(k, l) = 1/2 if k = l, and g(k, l) = 0 if k < l. That is why, for fixed k > l, a representative summand in P(n, p) is equal to

\frac{(np)^k}{k!} e^{-np} \frac{(n(1-p))^l}{l!} e^{-n(1-p)},

whereas a representative summand of P_c(n, p, m), in (3), is

\frac{(np)^k}{k!} e^{-np} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} G_c(k, l, m) ≤ \frac{(np)^k}{k!} e^{-np} \frac{(n(1-p))^l}{l!} e^{-n(1-p)},

since G_c(k, l, m) ≤ 1. Therefore, the summand in P_c(n, p, m) is smaller than (or equal to) the summand in P(n, p) for k > l. The difference is

\frac{(np)^k}{k!} e^{-np} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} (1 − G_c(k, l, m)) ≥ 0.

For fixed k = l, both g(k, l) = G_c(k, l, m) = 1/2 by Lemma 3. Consequently, the summands of both P_c(n, p, m) and P(n, p) are equal. The only critical case is fixed k < l. In that case, the summand in P(n, p) is zero, whereas the summand in P_c(n, p, m) is greater than or equal to zero. So the summand in P_c(n, p, m) is greater than (or equal to) the summand in P(n, p) for k < l. The difference of the two summands is equal to

\frac{(np)^k}{k!} e^{-np} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} (0 − G_c(k, l, m)) = −\frac{(np)^k}{k!} e^{-np} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} G_c(k, l, m).

We have to compare the differences in the cases where k > l and k < l. If the difference in the case where k > l is greater than the difference in absolute value in the case where k < l, then overall, P_c(n, p, m) < P(n, p). For given m, we note the following: If k ≥ l + m + 1, then g(k, l) = G_c(k, l, m) = 1. If k ≤ l − (m + 1), then g(k, l) = G_c(k, l, m) = 0. If k = l, then g(k, l) = G_c(k, l, m) = 1/2. The 2m relevant cases are k = l ± 1, l ± 2, ..., l ± m.

Let t ∈ {1, 2, ..., m}. If k = l + t, then g(l + t, l) = 1, and if k = l − t, then g(l − t, l) = 0. By Lemma 2 we have

G_c(l + t, l, m) + G_c(l, l + t, m) = 1.   (6)

Let x denote np and y denote n(1 − p). From the terms in P(n, p) − P_c(n, p, m) where k = l + t, we now subtract the terms in P(n, p) − P_c(n, p, m) where k = l − t. That is,

\frac{1}{e^n} \sum_{l=0}^{\infty} \frac{y^l}{l!} \frac{x^k}{k!} (g(k, l) − G_c(k, l, m)) \big|_{k=l+t} − \frac{1}{e^n} \sum_{l=0}^{\infty} \frac{y^l}{l!} \frac{x^k}{k!} |g(k, l) − G_c(k, l, m)| \big|_{k=l-t}.

Because g(l + t, l) = 1 and g(l, l + t) = 0, we obtain the following:

\frac{1}{e^n} \sum_{l=0}^{\infty} \frac{y^l}{l!} \frac{x^{l+t}}{(l+t)!} (1 − G_c(l + t, l, m)) − \frac{1}{e^n} \sum_{l=0}^{\infty} \frac{y^l}{l!} \frac{x^{l-t}}{(l-t)!} G_c(l − t, l, m) =: f_t(x, y).

We simplify f_t(x, y) by rearranging terms and using (6). In the first part of the sum, we only use (6); in the second part, we note that the first nonzero terms are for l = t, as 1/(l − t)! = 0 for l < t. This is why, for the second part of f_t(x, y), we change the variables, i.e. l → l + t, and obtain

f_t(x, y) = \frac{1}{e^n} x^t \sum_{l=0}^{\infty} \frac{y^l}{l!} \frac{x^l}{(l+t)!} G_c(l, l + t, m) − \frac{1}{e^n} y^t \sum_{l=0}^{\infty} \frac{y^l}{l!} \frac{x^l}{(l+t)!} G_c(l, l + t, m) = (x^t − y^t) r > 0,

where r > 0 is given by

r := \frac{1}{e^n} \sum_{l=0}^{\infty} \frac{y^l}{l!} \frac{x^l}{(l+t)!} G_c(l, l + t, m).

This holds for all relevant cases (t ∈ {1, 2, ..., m}). As the difference to 1 in the cases where k is larger than l in P_c(n, p, m) is greater than the difference to 0 in the cases where k is smaller than l, it follows that P_c(n, p, m) < P(n, p) for m > 0 and p > 1/2.

Proof of Theorem 3.
The proof is analogous to the proof of Theorem 2. The only difference is that we have to consider more cases.
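Taken together, Theorems 1-3 give the ordering P(n, p, m) < P_c(n, p, m) < P(n, p) for p > 1/2, which can be verified on examples (our own sketch with truncated sums; the parameter values are arbitrary):

```python
from math import comb, exp, factorial

def g(k, l):
    return 1.0 if k > l else 0.5 if k == l else 0.0

def G_cap(k, l, m, c=None):
    # Winning probability of the A-party with m delegated votes;
    # c = None means free delegation, otherwise party totals are
    # truncated at c*k and c*l as in equation (3).
    if k + l == 0:
        return 0.5  # empty-electorate convention
    q = k / (k + l)
    total = 0.0
    for h in range(m + 1):
        a = k + h if c is None else min(k + h, c * k)
        b = l + m - h if c is None else min(l + m - h, c * l)
        total += comb(m, h) * q ** h * (1 - q) ** (m - h) * g(a, b)
    return total

def win_prob(n, p, m, c=None, K=60):
    # Truncated Poisson double sum, equations (1)-(3).
    return sum(exp(-n * p) * (n * p) ** k / factorial(k)
               * exp(-n * (1 - p)) * (n * (1 - p)) ** l / factorial(l)
               * G_cap(k, l, m, c)
               for k in range(K) for l in range(K))

n, p, m = 20, 0.6, 20
conventional = win_prob(n, p, 0)
free = win_prob(n, p, m)
capped = win_prob(n, p, m, c=2)
assert free < capped < conventional  # Theorems 3 and 2, respectively
```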
Asymptotic Behavior

In this section, we consider three cases of asymptotic behavior of delegation. In the first, the probability that a voter is in the majority is large. In the second, the average total number of voters is large. Both cases describe typical assumptions in real-world situations. In the first, one assumes there are very few voters in the minority, for example malicious voters in the context of blockchains. In the second, we look at a large electorate with fixed shares of majority and minority voters. In the third case, we show the convergence of both delegation rules towards conventional voting as the number of delegators goes to infinity.

First, we obtain a rather straightforward proposition:
Proposition 1. lim_{p→1} P(n, p, m) = 1 for any fixed n and m.

Proof of Proposition 1.
We can verify that P(n, 1, m) = 1:

P(n, 1, m) = \sum_{k=0}^{\infty} \frac{n^k e^{-n}}{k!} \binom{m}{m} \left(\frac{k}{k}\right)^{m} g(k + m, 0) = 1.

Indeed, only the terms where l = 0 and h = m survive. Since the function P(n, p, m) is uniformly continuous in p, the proposition is proved.

Next, we consider a large electorate. It is straightforward that with a constant number of delegators, the probability that the majority wins converges to one as the total population size converges to infinity. We show that the same holds even if there are arbitrarily many delegators.

Proposition 2. lim_{n→∞} P(n, p, m) = 1 for any fixed p > 1/2 and any m, where m can even depend on n.

Proof of Proposition 2. Let p > 1/2 and fix ε ∈ (0, p − 1/2). Define two Poisson random variables: K, with parameter np, and L, with parameter n(1 − p). The probability that A-voters have at least n(1/2 + ε) votes can be bounded by using the Poisson concentration inequalities from Mitzenmacher and Upfal (2005):

P(X ≤ x) ≤ \frac{e^{-\lambda} (e\lambda)^x}{x^x}   (7)

and

P(X ≥ x) ≤ \frac{e^{-\lambda} (e\lambda)^x}{x^x}.   (8)

Using (7) for x = n(1/2 + ε) and λ = np, we obtain:

P[K < n(1/2 + ε)] ≤ \frac{(enp)^{n(1/2+ε)}}{e^{np} (n(1/2+ε))^{n(1/2+ε)}} → 0 as n → ∞.

Hence, the probability that A-voters have at least n(1/2 + ε) votes satisfies

P[K ≥ n(1/2 + ε)] = 1 − P[K < n(1/2 + ε)] → 1 as n → ∞.

At the same time, applying (8) for x = n(1/2 − ε) and λ = n(1 − p), we obtain:

P[L > n(1/2 − ε)] ≤ \frac{(en(1-p))^{n(1/2-ε)}}{e^{n(1-p)} (n(1/2-ε))^{n(1/2-ε)}} → 0 as n → ∞.

Hence, the probability that B-voters have at most n(1/2 − ε) votes satisfies

P[L ≤ n(1/2 − ε)] = 1 − P[L > n(1/2 − ε)] → 1 as n → ∞.

These two results and the fact that K and L are independent yield:

P[K ≥ n(1/2 + ε) and L ≤ n(1/2 − ε)] = P[K ≥ n(1/2 + ε)] · P[L ≤ n(1/2 − ε)] → 1 as n → ∞.

It follows that for m < εn, A-voters are in the majority with probability converging to 1 as n goes to infinity. Next, we consider m > εn. From above, we know that for high enough n, with probability converging to 1, K ≥ n(1/2 + ε) and L ≤ n(1/2 − ε). Hence, K/L ≥ (1/2 + ε)/(1/2 − ε) > 1. We define the i.i.d. random variables

X_i := +1 with probability 1/2 + ε, and −1 with probability 1/2 − ε, for i = 1, ..., m.

The surplus of delegated votes to the A-party is the sum of the X_i. We know that E[X_i] = 2ε > 0, and by the law of large numbers (1/m) \sum_{i=1}^{m} X_i → 2ε > 0. As m > εn, n → ∞ implies m → ∞.

We show the same result with capped delegation.

Proposition 3. lim_{n→∞} P_c(n, p, m) = 1 for any fixed p > 1/2, any c ∈ N and any m, where m can even depend on n.

Proof of Proposition 3.
The proof is a direct corollary of Proposition 2 and Theorem 3.

Next, we show that the probability that $A$ wins with free delegation converges to the probability that $A$ wins with conventional voting as $m$ goes to infinity.

Proposition 4. $\lim_{m \to \infty} P(n, p, m) = P(n, p)$ for any $n > 0$ and $p \in (0, 1)$.

Proof of Proposition 4.
First, using Hoeffding's inequality (see Hoeffding (1963)), we show that $\lim_{m \to \infty} G(k, l, m) = g(k, l)$ for any fixed $k$ and $l$.

• Consider a fixed pair $k, l$ with $k = l$. Then, with $q := \frac{k}{k+l} = \frac{1}{2}$, we have

\[ G(k, l, m) = \sum_{h=\lfloor \frac{m}{2} \rfloor + 1}^{m} \binom{m}{h} q^h (1-q)^{m-h} + \frac{1}{2} \binom{m}{\frac{m}{2}} q^{\frac{m}{2}} (1-q)^{\frac{m}{2}} = \left(\frac{1}{2}\right)^{m} \sum_{h=\lfloor \frac{m}{2} \rfloor + 1}^{m} \binom{m}{h} + \left(\frac{1}{2}\right)^{m+1} \binom{m}{\frac{m}{2}}. \]

Note that the latter summand is zero if $m$ is odd.

– If $m$ is odd: $G(k, l, m) = \left(\frac{1}{2}\right)^{m} \sum_{h=\frac{m+1}{2}}^{m} \binom{m}{h} = \left(\frac{1}{2}\right)^{m} 2^{m-1} = \frac{1}{2}$.

– If $m$ is even: $G(k, l, m) = \left(\frac{1}{2}\right)^{m} \sum_{h=\frac{m}{2}+1}^{m} \binom{m}{h} + \left(\frac{1}{2}\right)^{m+1} \binom{m}{\frac{m}{2}} = \left(\frac{1}{2}\right)^{m} 2^{m-1} = \frac{1}{2}$.

Hence, for $k = l$, we have $G(k, l, m) = g(k, l) = \frac{1}{2}$.

• Consider a fixed pair $k, l$ with $k < l$. Then, with $q := \frac{k}{k+l} < \frac{1}{2}$, we have

\[ G(k, l, m) = \sum_{h=\lfloor \frac{l-k+m}{2} \rfloor + 1}^{m} \binom{m}{h} q^h (1-q)^{m-h} + \frac{1}{2} \binom{m}{\frac{l-k+m}{2}} q^{\frac{l-k+m}{2}} (1-q)^{m - \frac{l-k+m}{2}}. \]

Define $a := \frac{l-k+m}{2}$. The first summand above is equal to the probability $P[X > a]$ for a random variable $X \sim \mathrm{Bin}(m, q)$. We consider $P[X \ge a]$, which is $G(k, l, m)$ plus a non-negative term (the second term of $G(k, l, m)$), and show that this converges to 0; hence also $G(k, l, m)$ converges to 0.

Hoeffding's inequality yields:

\[ P[X \ge m(q + \epsilon)] \le \exp(-2\epsilon^2 m). \]

We calculate $\epsilon$ by solving $m(q + \epsilon) = a$:

\[ \epsilon = \frac{l-k+m}{2m} - q. \]

Let $t := l - k$. Then, by Hoeffding's inequality:

\[ P[X \ge a] \le \exp\left(-2\left(\frac{l-k+m}{2m} - q\right)^2 m\right) = \exp\left(-m\left(2q^2 - 2q + \tfrac{1}{2}\right) - \frac{t^2}{2m} - t + 2qt\right). \]

The right-hand side goes to 0 for $m \to \infty$ if $2q^2 - 2q + \frac{1}{2} > 0$. This inequality is true for any $q \ne \frac{1}{2}$, since $2q^2 - 2q + \frac{1}{2} = 2(q - \frac{1}{2})^2$. As $q < \frac{1}{2}$, the RHS goes to 0 for $m \to \infty$. That is,

\[ \lim_{m \to \infty} P[X \ge a] = 0. \]

Hence, for $k < l$, we have $\lim_{m \to \infty} G(k, l, m) = g(k, l) = 0$.

• Consider a fixed pair $k, l$ with $k > l$. Then, with $q := \frac{k}{k+l} > \frac{1}{2}$, we have

\[ G(k, l, m) = \sum_{h=\lfloor \frac{l-k+m}{2} \rfloor + 1}^{m} \binom{m}{h} q^h (1-q)^{m-h} + \frac{1}{2} \binom{m}{\frac{l-k+m}{2}} q^{\frac{l-k+m}{2}} (1-q)^{m - \frac{l-k+m}{2}}. \]

Define $a := \frac{l-k+m}{2}$. The first summand above is equal to the probability $P[X > a]$ for a random variable $X \sim \mathrm{Bin}(m, q)$. As before, we consider the value $P[X \ge a]$.

Hoeffding's inequality yields:

\[ P[X < m(q - \epsilon)] \le \exp(-2\epsilon^2 m). \]

We calculate $\epsilon$ by solving $m(q - \epsilon) = a$:

\[ \epsilon = q - \frac{l-k+m}{2m}. \]

Let $t := |l - k|$. Then, by Hoeffding's inequality:

\[ P[X < a] \le \exp\left(-2\left(q - \frac{l-k+m}{2m}\right)^2 m\right) = \exp\left(-m\left(2q^2 - 2q + \tfrac{1}{2}\right) - \frac{t^2}{2m} + t - 2qt\right). \]

The RHS of the latter converges to 0 for $m \to \infty$ if $2q^2 - 2q + \frac{1}{2} > 0$, which, as above, holds for any $q \ne \frac{1}{2}$. As $q > \frac{1}{2}$, the RHS converges to 0 for $m \to \infty$. That is,

\[ \lim_{m \to \infty} P[X < a] = 0. \]

Therefore,

\[ \lim_{m \to \infty} P[X \ge a] = \lim_{m \to \infty} \left(1 - P[X < a]\right) = 1. \]

Hence, for $k > l$, we have $\lim_{m \to \infty} G(k, l, m) = g(k, l) = 1$.

It follows that $\lim_{m \to \infty} G(k, l, m) = g(k, l)$ for any fixed $k$ and $l$.

In the next step, we show that $\lim_{m \to \infty} P(n, p, m) = P(n, p)$. Define

\[ a_k(m) := \frac{(np)^k}{k!} e^{-np} \sum_{l=0}^{\infty} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} G(k, l, m). \]

Then,

\[ P(n, p, m) = \sum_{k=0}^{\infty} a_k(m). \]

Note that, as $G(k, l, m) \le 1$,

\[ |a_k(m)| \le \frac{(np)^k}{k!} e^{-np} \sum_{l=0}^{\infty} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} = \frac{(np)^k}{k!} e^{-np} =: M_k, \]

and $\sum_{k=0}^{\infty} M_k = 1 < \infty$. Then, by the dominated convergence theorem,

\[ \lim_{m \to \infty} P(n, p, m) = \sum_{k=0}^{\infty} \lim_{m \to \infty} a_k(m) = \sum_{k=0}^{\infty} \frac{(np)^k}{k!} e^{-np} \lim_{m \to \infty} \sum_{l=0}^{\infty} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} G(k, l, m). \]

Define

\[ b_l(m) := \frac{(n(1-p))^l}{l!} e^{-n(1-p)} G(k, l, m). \]

Note that $|b_l(m)| \le \frac{(n(1-p))^l}{l!} e^{-n(1-p)} =: M'_l$ and $\sum_{l=0}^{\infty} M'_l = 1 < \infty$. Hence, by the dominated convergence theorem,

\[ \lim_{m \to \infty} \sum_{l=0}^{\infty} b_l(m) = \sum_{l=0}^{\infty} \lim_{m \to \infty} b_l(m) = \sum_{l=0}^{\infty} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} \lim_{m \to \infty} G(k, l, m). \]

Taken together, by applying the dominated convergence theorem twice, we obtain:

\[ \lim_{m \to \infty} P(n, p, m) = \sum_{k=0}^{\infty} \frac{(np)^k}{k!} e^{-np} \sum_{l=0}^{\infty} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} \lim_{m \to \infty} G(k, l, m) = \sum_{k=0}^{\infty} \frac{(np)^k}{k!} e^{-np} \sum_{l=0}^{\infty} \frac{(n(1-p))^l}{l!} e^{-n(1-p)} g(k, l) = P(n, p). \]

Similarly, for capped delegation, we show:
Proposition 5. $\lim_{m \to \infty} P_c(n, p, m) = P(n, p)$ for any $n, c \in \mathbb{N}$ and $p \in (0, 1)$.

Proof of Proposition 5. The proof is an immediate corollary of Proposition 4 and Theorem 3.
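The convergence $\lim_{m \to \infty} G(k, l, m) = g(k, l)$ and the behavior of $P(n, p, m)$ are easy to illustrate numerically. The following sketch is our own illustration, not code from the paper: function names, the truncation cutoff for the Poisson sums, and the coin-flip treatment of the degenerate case $k = l = 0$ are our choices; the formulas for $g$, $G$ and $P$ follow the definitions above.

```python
import math

def g(k, l):
    """Conventional voting: A wins if k > l; ties are broken by a fair coin."""
    return 1.0 if k > l else (0.5 if k == l else 0.0)

def G(k, l, m):
    """Win probability of A with k A-voters, l B-voters and m delegators,
    each delegated vote going to a uniformly chosen active voter."""
    if k + l == 0:
        return 0.5  # no active voters to delegate to: coin flip (our assumption)
    q = k / (k + l)  # probability a single delegated vote lands on an A-voter
    total = 0.0
    for h in range(m + 1):  # h = number of delegated votes received by A-voters
        prob = math.comb(m, h) * q**h * (1.0 - q)**(m - h)
        total += prob * g(k + h, l + m - h)  # g supplies the 1/2 tie term
    return total

def P(n, p, m, cutoff=40):
    """Truncated double Poisson sum: K ~ Poisson(np) A-voters,
    L ~ Poisson(n(1-p)) B-voters, m delegators."""
    total = 0.0
    for k in range(cutoff):
        pk = (n * p)**k / math.factorial(k) * math.exp(-n * p)
        for l in range(cutoff):
            pl = (n * (1.0 - p))**l / math.factorial(l) * math.exp(-n * (1.0 - p))
            total += pk * pl * G(k, l, m)
    return total

if __name__ == "__main__":
    print(G(3, 3, 50))    # k = l: exactly 1/2 for every m
    print(G(2, 5, 200))   # k < l: close to 0 for large m
    print(G(5, 2, 200))   # k > l: close to 1 for large m
    print(P(5, 0.6, 10))  # majority win probability with 10 delegators
```

The cutoff of 40 keeps the neglected Poisson mass far below numerical noise for the small $n$ used here; for larger $n$ it must grow with $n$. Evaluating $P(n, p, m)$ over a range of $m$ in this way is also how the non-monotone convergence conjectured below can be explored.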
Conclusion

We showed that the introduction of vote delegation lowers the winning probability of the ex-ante majority. Capped delegation leads to higher winning probabilities for the majority than free delegation. However, both delegation processes lead to lower probabilities than conventional voting. These results are particularly important in blockchain governance if one does not want dishonest agents to increase their probability of winning. Although the setting we analyze in this paper is as simple as possible, we already obtain non-trivial observations.

In addition to the formal results, we conjecture, based on numerical evidence, that $P(n, p, m)$ is first decreasing and then increasing in $m$. This conjecture is interesting in the context of delegation as a strategic decision. For the delegators, whether or not they belong to the ex-ante majority, choosing between delegating and abstaining becomes a strategic decision. If we prove only one part of the conjecture, namely that the winning probability is increasing from some point on, we obtain that in equilibrium all majority voters delegate once the number of delegators $m$ is large enough. First, note that nobody delegating cannot be an equilibrium, because minority voters prefer to delegate. Once some of them delegate, the probability that the majority wins is smaller than under conventional voting. Using Proposition 1, we conclude that with a large number of delegators, all majority voters delegating is sustainable in equilibrium. We leave this challenging issue to future research.

References
Bloembergen, D., Grossi, D., and Lackner, M. (2019). On Rational Delegations in Liquid Democracy. Proceedings of the AAAI Conference on Artificial Intelligence, 33(1):1796-1803.

Damgård, I., Gersbach, H., Maurer, U., Nielsen, J. B., Orlandi, C., and Pedersen, T. P. (2020). Concordium White Paper.

Escoffier, B., Gilbert, H., and Pass-Lanneau, A. (2019). The Convergence of Iterative Delegations in Liquid Democracy in a Social Network. In: Fotakis, D. and Markakis, E. (eds), Algorithmic Game Theory. SAGT 2019. Lecture Notes in Computer Science, 11801:284-297.

Gölz, P., Kahng, A., Mackenzie, S., and Procaccia, A. D. (2018). The Fluid Mechanics of Liquid Democracy. In: Christodoulou, G. and Harks, T. (eds), Web and Internet Economics - 14th International Conference, WINE 2018, Proceedings. Lecture Notes in Computer Science, 11316:188-202.

Goodman, L. (2014). Tezos - Self-amending Crypto-ledger. White Paper.

Hoeffding, W. (1963). Probability Inequalities for Sums of Bounded Random Variables. Journal of the American Statistical Association, 58(301):13-30.

Kahng, A., Mackenzie, S., and Procaccia, A. (2018). Liquid Democracy: An Algorithmic Perspective. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1):1095-1102.

Kotsialou, G. and Riley, L. (2020). Incentivising Participation in Liquid Democracy with Breadth-First Delegation. Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, pages 638-644.

Mitzenmacher, M. and Upfal, E. (2005). Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, page 97.

Myerson, R. (1998). Population Uncertainty and Poisson Games.