Public Evidence from Secret Ballots

Matthew Bernhard (University of Michigan), Josh Benaloh (Microsoft Research), J. Alex Halderman (University of Michigan), Ronald L. Rivest (Massachusetts Institute of Technology), Peter Y. A. Ryan (University of Luxembourg), Philip B. Stark (University of California at Berkeley), Vanessa Teague (University of Melbourne), Poorvi L. Vora (George Washington University), Dan S. Wallach (Rice University)
Abstract
Elections seem simple—aren't they just counting? But they have a unique, challenging combination of security and privacy requirements. The stakes are high; the context is adversarial; the electorate needs to be convinced that the results are correct; and the secrecy of the ballot must be ensured. And they have practical constraints: time is of the essence, and voting systems need to be affordable and maintainable, and usable by voters, election officials, and pollworkers. It is thus not surprising that voting is a rich research area spanning theory, applied cryptography, practical systems analysis, usable security, and statistics. Election integrity involves two key concepts: convincing evidence that outcomes are correct, and privacy, which amounts to convincing assurance that there is no evidence about how any given person voted. These are obviously in tension. We examine how current systems walk this tightrope.
1. Introduction: What is the evidence?
"The Russians did three things ... The third is that they tried, and they were not successful, but they still tried, to get access to voting machines and vote counting software, to play with the results."
Former CIA Acting Director Michael Morell, Mar. 15, 2017

"These are baseless allegations substantiated with nothing, done on a rather amateurish, emotional level."
Kremlin spokesman Dmitry Peskov, Jan. 9, 2017

"It would take an army to hack into our voting system."
Tom Hicks, EAC Commissioner, Oct. 6, 2016
It is not enough for an election to produce the correct outcome. The electorate must also be convinced that the announced result reflects the will of the people. And for a rational person to be convinced requires evidence.

Modern technology—computer and communications systems—is fragile and vulnerable to programming errors and undetectable manipulation. No current system that relies on electronic technology alone to capture and tally votes can provide convincing evidence that election results are accurate without endangering or sacrificing the anonymity of votes. Moreover, the systems that come closest are not readily usable by a typical voter.

Paper ballots, on the other hand, have some very helpful security properties: they are readable (and countable, and re-countable) by humans; they are relatively durable; and they are tamper-evident. Votes cast on paper can be counted using electronic technology; then the accuracy of the count can be checked manually to ensure that the technology functioned adequately well. Statistical methods allow the accuracy of the count to be assessed by examining only a fraction of the ballots manually, often a very small fraction. If there is also convincing evidence that the collection of ballots has been conserved (no ballots added, lost, or modified), then this combination—voter-verifiable paper ballots, a mechanized count, and a manual check of the accuracy of that count—can provide convincing evidence that announced electoral outcomes are correct.

Conversely, absent convincing evidence that the paper trail has been conserved, a manual double-check of electronic results against the paper trail will not be convincing. If the paper trail has been conserved adequately, then a full manual tally of the ballots can correct the electronic count if the electronic count is incorrect. These considerations have led many election integrity advocates to push for a voter-verifiable paper audit trail (VVPAT).
In the 2016 presidential election, about three quarters of Americans voted using systems that generated voter-verifiable paper records. The aftermath of the election proved that even if 100% of voters had used such systems, it would not have sufficed to provide convincing evidence that the reported results are accurate.

• No state has (or had) adequate laws or regulations to ensure that the paper trail is conserved adequately and to provide evidence to that effect.
• No state had laws or regulations that provided adequate manual scrutiny of the paper to ensure that the electronically generated results are correct; most still do not.
• Many states that have a paper trail also have laws that make it hard for anyone to check the results using the paper trail—even candidates with war chests for litigation. Not only can other candidates fight attempts to check the results, the states themselves can fight such attempts. This treats the paper as a nuisance rather than a safeguard.

The bottom line is that the paper trail is not worth the paper it's printed on. Clearly this must change. Other techniques like software independence and end-to-end verifiability can offer far greater assurance in the accuracy of an election's outcome, but these methods have not been broadly applied. Voter-marked paper ballots, or ballots marked using a ballot-marking device, are preferable to VVPAT, a cash-register-style printout that the voter cannot touch.

1.1 Why so hard?

Several factors make it difficult to generate convincing evidence that reported results are correct. The first is the trust model.
No one is trusted
In any significant election, voters, election officials, and equipment and software cannot necessarily be trusted by anyone with a stake in the outcome. Voters, operators, system designers, manufacturers, and external parties are all potential adversaries.
The need for evidence
Because officials and equipment may not be trustworthy, elections should be evidence-based. Any observer should be able to verify the reported results based on trustworthy evidence from the voting system. Many in-person voting systems fail to provide sufficient evidence; and, as we shall see, Internet systems scarcely provide any at all.
The secret ballot
Perhaps the most distinctive element of elections is the secret ballot, a critical safeguard that defends against vote selling and voter coercion. In practical terms, voters should not be able to prove how they voted to anyone, even if they wish to do so. This restricts the types of evidence that can be produced by the voting system. Encryption alone is not sufficient, since voters may choose to reveal their selections in response to bribery or coercion.

The challenge of voting is thus to use fragile technology to produce trustworthy, convincing evidence of the correctness of the outcome while protecting voter privacy in a world where no person or machine may be trusted. The resulting voting system and its security features must also be usable by regular voters.

The aim of this paper is to explain the important requirements of secure elections and the solutions already available from e-voting research, then to identify the most important directions for research. Before delving into our discussion, we need to make a distinction in terminology.
Pollsite voting systems are those in which voters record and cast ballots at predetermined locations, often in public areas with strict monitoring.
Remote voting refers to a system in which voters fill out ballots anywhere and then send them to a central location to cast them, either physically mailing them in the case of vote-by-mail or sending them over the Internet in the case of Internet voting.

The next section defines the requirements, beginning with notions of election evidence, then considering privacy, and concluding with more general usability and security requirements. Section 3 describes the cryptographic, statistical, and engineering tools that have been developed for designing voting systems with verifiably correct election outcomes. Section 4 discusses the challenge of satisfying our security requirements using these tools in real-world election systems. Section 5 concludes with the promise and problems associated with Internet voting.
2. Requirements for Secure Voting
Trustworthiness before trust
Onora O’Neill
For an election to be accepted as legitimate, the outcome should be convincing to all—and in particular to the losers—leaving no valid grounds to challenge the outcome. Whether elections are conducted by counting paper ballots by hand or using computer technology, the possibility of error or fraud necessitates assurances of the accuracy of the outcome. It is clear that a naive introduction of computers into voting introduces the possibility of wholesale and largely undetectable fraud. If we can't detect it, how can we prevent it?
Statistical post-election audits provide assurance that a reported outcome is correct by examining some or all of an audit trail consisting of durable, tamper-evident, voter-verifiable records. Typically the audit trail consists of paper ballots.

The outcome of an election is the set of winners. An outcome is incorrect if it differs from the set of winners output by a perfectly accurate manual tabulation of the audit trail.
Definition 1. An audit of an election contest is a risk-limiting audit (RLA) with risk limit α if it has the following two properties:

1. If the reported contest outcome under audit is incorrect, the probability that the audit leads to correcting the outcome is at least 1 − α.
2. The audit never indicates a need to alter a reported outcome that is correct.

(In this context, "correct" means "what a full manual tally of the paper trail would show." If the paper trail is unreliable, an RLA in general cannot detect that. RLAs should be preceded by "compliance audits" that check whether the audit trail itself is adequately reliable to determine who won.)

Together, these two properties imply that post-RLA, either the reported set of winners is the set that a perfectly accurate hand count of the audit trail would show, or an event with probability no larger than α has occurred. (That event is that the outcome was incorrect, but the RLA did not lead to correcting the outcome.) RLAs amount to a limited form of probabilistic error correction: by relying on appropriate random sampling of the audit trail and hypothesis tests, they have a known minimum probability of correcting the outcome. They are not designed to ensure that the reported numerical tally is correct, only that the outcome is correct.

The following procedure is a trivial RLA: with probability 1 − α, perform a full manual tally of the audit trail, and amend the outcome to match the set of winners the full hand count shows if that set is different. The art in constructing RLAs consists of maintaining the risk limit while performing less work than a full hand count when the outcome is correct. Typically, this involves framing the audit as a sequential test of the statistical hypothesis that the outcome is incorrect. To reject that hypothesis is to conclude that the outcome is correct.
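The sequential-test idea can be sketched in code. The following is a simplified ballot-polling audit in the spirit of the BRAVO method (an illustrative sketch, not any jurisdiction's actual procedure): ballots are examined in random order, and a likelihood ratio comparing the reported winner's share against an exact tie is updated until the evidence exceeds 1/α or the ballots run out.

```python
import random

def ballot_polling_audit(ballots, reported_winner, reported_loser,
                         reported_share, alpha=0.05, rng=None):
    """Sequential ballot-polling audit sketch. `reported_share` is the
    reported fraction of the winner's votes among winner+loser votes
    (must exceed 0.5). Returns 'confirm' when the risk limit alpha is
    met, or 'full hand count' if the sample never provides enough
    evidence and escalation is required."""
    rng = rng or random.Random(0)
    t = 1.0  # likelihood ratio: reported share vs. a tie at 0.5
    order = list(range(len(ballots)))
    rng.shuffle(order)  # examine ballots in uniformly random order
    for i in order:
        vote = ballots[i]
        if vote == reported_winner:
            t *= reported_share / 0.5
        elif vote == reported_loser:
            t *= (1 - reported_share) / 0.5
        # ballots for other candidates do not update the ratio
        if t >= 1 / alpha:
            return "confirm"       # strong evidence the outcome is correct
    return "full hand count"       # escalate: audit did not confirm
```

When the reported outcome is wrong, the likelihood ratio drifts downward and the audit escalates to a full hand count, which is exactly the error-correction behavior Definition 1 requires.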
RLAs have been developed for majority contests, plurality contests, and vote-for-k contests, and for complex social choice functions including D'Hondt and other proportional representation rules—see below. RLAs have also been devised to check more than one election contest simultaneously [99].

Rivest and Wack introduced a definition targeted specifically at detecting misbehavior in computer-based elections:
Definition 2. [81] A voting system is software independent if an undetected change or error in its software cannot cause an undetectable change or error in an election outcome.

Software independence clearly expresses that it should not be necessary to trust software to determine election outcomes, but it does not say what procedures or types of evidence should be trusted instead. A system that is not software independent cannot produce a convincing evidence trail, but neither can a paper-based system that does not ensure that the paper trail is complete and intact, a cryptographic voting system that relies on an invalid cryptographic assumption, or a system that relies on audit procedures but lacks a means of assuring that those procedures are properly followed. We could likewise demand independence of many other kinds of trust assumptions: hardware, paper chain-of-custody, cryptographic setup, computational hardness, procedures, good randomness generation, etc.
Rivest and Wack also define a stronger form of the property that includes error recovery:
Definition 3. [81] A voting system is strongly software independent if it is software independent and a detected change or error in an election outcome (due to the software) can be corrected without rerunning the election.

A strongly software-independent system can recover from software errors or bugs, but that recovery in turn is generally based on some other trail of evidence. A software-independent system can be viewed as a form of tamper-evident system: a material software problem leaves a detectable trace.
Strongly software-independent systems are resilient: not only do material software problems leave a trace, but the overall election system can recover from a detected problem.

One mechanism to provide software independence is to record votes on a paper record that provides physical evidence of the voter's intent, can be inspected by the voter prior to casting the vote, and—if preserved intact—can later be manually audited to check the election outcome. Risk-limiting audits (see Section 3.2) can then achieve a pre-specified level of assurance that results are correct; machine-assisted risk-limiting audits [23] can help minimize the amount of labor required for legacy systems that do not provide a cast-vote record for every ballot, linked to the corresponding ballot.
Open problems:
• How can systems handle errors in the event that elections don't verify? Can they recover?
Concern regarding fraud and the desire for transparency have motivated the security and crypto communities to develop another approach to voting system assurance: end-to-end verifiability (E2E-V). An election that is end-to-end verifiable achieves software independence together with the analogous notion of hardware independence, as well as independence from the actions of election personnel and vendors. Rather than attempting to verify thousands of lines of code or closely monitor all of the many processes in an election, E2E-V focuses on providing a means to detect errors or fraud in the process of voting and counting. The idea behind E2E-V is to enable voters themselves to monitor the integrity of the election: democracy for the people by the people, as it were. This is challenging because total transparency is not possible without undermining the secret ballot, hence the mechanisms to generate such evidence have to be carefully designed.
Definition 4. (adapted from [16]) A voting system is end-to-end verifiable if it has the following three kinds of verifiability:

• Cast as intended: Voters can independently verify that their selections are correctly recorded.
• Collected as cast: Voters can independently verify that the representation of their vote is correctly collected in the tally.
• Tallied as collected: Anyone can verify that every well-formed, collected vote is correctly included in the tally.

If verification relies on trusting entities, software, or hardware, the voter and/or auditor should be able to choose them freely. Trusted procedures, if there are any, must be open to meaningful observation by every voter.

Note that the above definition allows each voter to check that her vote is correctly collected, thus ensuring that attempts to change or delete cast votes are detected. In addition, it should also be possible to check the list of voters who cast ballots, to ensure that votes are not added to the collection (i.e., to prevent ballot-box stuffing). This is called eligibility verifiability [65, 96].
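One common way to support the "collected as cast" check is to post encrypted ballots on a public bulletin board and let each voter verify that her ballot is included under a published digest. The sketch below uses a Merkle tree for this inclusion check; the tree layout and duplicate-last-node padding are illustrative assumptions, not a description of any specific deployed design.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256, used here for both leaves and interior nodes."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root over the leaf hashes, duplicating the last node at odd
    levels (one common convention; deployed designs differ)."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_path(leaves, index):
    """Sibling hashes a voter needs to recompute the root from her leaf."""
    path, level, i = [], list(leaves), index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = i ^ 1
        path.append((level[sibling], sibling < i))  # (hash, sibling-is-left)
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return path

def verify_inclusion(leaf, path, root):
    """The voter-side check: hash up the path and compare to the root."""
    acc = leaf
    for sibling, is_left in path:
        acc = h(sibling + acc) if is_left else h(acc + sibling)
    return acc == root
```

A voter who holds her leaf hash and path can run `verify_inclusion` against the published root without trusting the bulletin-board operator; any removed or altered ballot changes the root.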
In an E2E-V election protocol, voters can check whether their votes have been properly counted, but if they discover a problem, there may not be adequate evidence to correct it. An election system that is collection-accountable provides voters with evidence of any failure to collect their votes.
Definition 5. An election system is collection accountable if any voter who detects that her vote has not been collected has, as part of the vote-casting protocol, convincing evidence that can be presented to an independent party to demonstrate that the vote has not been collected.

Another form of evidence involves providing each voter with a code representing her votes, such that knowledge of a correct code is evidence of casting a particular vote [31]. Yet another mechanism is a suitable paper receipt; forensic analysis may provide evidence that this receipt was not forged by a voter [13, 9].
Open problems:
• Can independently verifiable evidence be provided by the voting system for incorrect ballot casting?
While accountability helps secure the election process, it is not very useful if there is no way to handle disputes. If a voter claims, on the basis of accountability checks provided by a system, that something has gone wrong, there needs to be a mechanism to address this. This is known as dispute resolution:

Definition 6. [58] A voting system is said to have dispute resolution if, when there is a dispute between two participants regarding honest participation, a third party can correctly resolve the dispute.

An alternative to dispute resolution is dispute-freeness:
Definition 7. [62] A dispute-free voting system has built-in prevention mechanisms that eliminate disputes among the active participants; any third party can check whether an active participant has cheated.
Open problems:
• Can effective dispute resolution for all classes of possible errors exist in a given system?
• Are there other reasonable definitions and mechanisms for dispute resolution?
• Can a system offer complete dispute resolution capabilities in which every dispute can be adjudicated using evidence produced by the election system?

2.1.6 From Verifiable to Verified
Constructing a voting system that creates sufficient evidence to reveal problems is not enough on its own. That evidence must actually be used—and used appropriately—to ensure the accuracy of election outcomes. An election result may not be verified, even if it is generated by an end-to-end verifiable voting system. For verification of the result, we need several further conditions to be satisfied:

• Enough voters and observers must be sufficiently diligent in performing the appropriate checks.
• Random audits (including those initiated by voters) must be sufficiently extensive and unpredictable that changes that affect election outcomes have a high chance of being detected.
• If checks fail, this must be reported to the authorities who, in turn, must take appropriate action.

These issues involve complex human factors, including voters' incentives to participate in verification. Little work has been done on this aspect of the problem. An E2E-V system might give an individual voter assurance that her vote has not been tampered with if that voter performs certain checks. However, sufficiently many voters must do this in order to provide evidence that the election outcome as a whole is correct. Combining risk-limiting audits with E2E-V systems can provide a valuable layer of protection in the case that an insufficient number of voters participate in verification.

Finally, another critical verification problem that has received little attention to date is how to make schemes that are recoverable in the face of errors. We do not want to have to abort and rerun an election every time a check fails. Certain levels of detected errors can be shown to be highly unlikely if the outcome is incorrect, and hence can be tolerated. Other types and patterns of error cast doubt on the outcome and may require either full inspection or retabulation of the paper trail or, if the paper trail cannot be relied upon, a new election.

Both Küsters et al. [67] and Kiayias et al. [64] model voter-initiated auditing [12] and its implications for detection of an incorrect election result. Both definitions turn uncertainty about voter-initiated auditing into a bound on the probability of detecting deviations of the announced election result from the truth.

Open problems:
• Can systems be designed so that the extent and diligence of checks performed can be measured?
• Can verification checks be abstracted away from voters, either by embedding them in election processes or automating them?
A significant challenge for election systems is the credentialing of voters to ensure that all eligible voters, and no one else, can cast votes. This presents numerous questions: What kinds of credentials should be used? How should they be issued? Can they be revoked or de-activated? Are credentials good for a single election or for an extended period? How difficult are they to share, transfer, steal, or forge? Can the ability to create genuine-looking forgeries help prevent coercion? These questions must be answered carefully, and until they are answered satisfactorily for remote voting, pollsite voting is the only robust way to address them—and even then, in-person credentialing is subject to forgery, distribution, and revocation concerns (for instance, the Dominican Republic recently held a pollsite election in which voters openly sold their credentials [47]). In the U.S., there is concern that requiring in-person credentialing, in the form of voter ID, disenfranchises legitimate voters.
Open problems:
• Is there a sufficiently secure way to credential Internet voting?
• Can a traditional PKI ensure eligibility for remote voting?
• How does use of a PKI change coercion assumptions?
In most security applications, privacy and confidentiality are synonymous. In elections, however, privacy has numerous components that go well beyond typical confidentiality. Individual privacy can be compromised by "normal" election processes, such as a unanimous result. Voters may be coerced if they can produce a proof of how they voted, even if they have to work to do so. Privacy for votes is a means to an end: if voters don't express their true preferences, then the election may not produce the right outcome. This section gives an overview of increasingly strong definitions of what it means for voters to be free of coercion.
We will take ballot privacy to mean that the election does not leak any information about how any voter voted beyond what can be deduced from the announced results. Confidentiality is not the only privacy requirement in elections, but even simple confidentiality poses significant challenges. It is remarkable how many deployed e-voting systems have been shown to lack even the most basic confidentiality properties (e.g., [54, 46, 27, 24, 71]).

Perhaps more discouraging to basic privacy is the fact that remote voting systems (both paper and electronic) inherently allow voters to eschew confidentiality. Because remote systems enable voters to fill out their ballots outside a controlled environment, anyone can watch over the voter's shoulder while she fills out her ballot.

In an election—unlike, say, in a financial transaction—even the candidate receiving an encrypted vote should not be able to decrypt it. Instead, an encrypted (or otherwise shrouded) vote must remain confidential to keep votes from being directly visible to election authorities. Some systems, such as code voting [30] and the Norwegian and Swiss Internet voting schemes, defend privacy against an attacker who controls the computer used for voting; however, this relies on assumptions about the privacy and integrity of the code sheet. Some schemes, such as JCJ/Civitas [57], obscure who has voted while providing a proof that only eligible votes were included in the tally. Several works [40, 67], following Benaloh [18], formalize the notion of privacy as preventing an attacker from noticing when two parties swap their votes.
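The code-voting idea can be sketched very simply. In this toy version, each voter receives a code sheet mapping candidates to random codes; the voter's (possibly compromised) computer transmits only a code, which the election side translates back. The function names and parameters are hypothetical, and real schemes add return codes, secure printing, and out-of-band delivery of the sheets.

```python
import secrets

def make_code_sheet(candidates, code_len=8):
    """One voter's code sheet: an independent random code per candidate.
    (Illustrative sketch only; see caveats in the text above.)"""
    return {c: secrets.token_hex(code_len // 2) for c in candidates}

def cast(code_sheet, candidate):
    """What the voter types into her (possibly compromised) computer."""
    return code_sheet[candidate]

def interpret(code_sheet, submitted_code):
    """Election-side translation of a submitted code back to a candidate.
    The voting computer only ever sees the code, not the candidate."""
    for candidate, code in code_sheet.items():
        if secrets.compare_digest(code, submitted_code):
            return candidate
    return None  # not a valid code from this voter's sheet
```

The privacy argument rests entirely on the code sheet staying secret from the voter's computer, which is exactly the assumption about the code sheet noted above.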
Open problems:
• Can we develop more effective, verifiable forms of assurance that vote privacy is preserved?
• Can we build means of privacy for remote voting through computer-based systems?
Moran and Naor expressed concern over what might happen to encrypted votes that can still be linked to their voter's name some decades into the future, and hence decrypted by superior technology. They define a requirement to prevent this:
Definition 8. [72] A voting scheme has everlasting privacy if its privacy does not depend on assumptions of cryptographic hardness.

Their solution uses perfectly hiding commitments to the votes, which are aggregated homomorphically. Instead of privacy depending upon a cryptographic hardness assumption, it is the integrity of the election that depends upon a hardness assumption; and only a real-time compromise of the assumption can have an impact.
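The perfectly hiding, homomorphically aggregated commitments can be illustrated with Pedersen commitments. The group parameters below are toy values chosen only so the arithmetic is easy to follow (real systems use large standardized primes or elliptic curves); the point is that the product of the per-vote commitments is itself a commitment to the tally, which is the only value ever opened.

```python
import secrets

# Toy group parameters for illustration only: p = 2q + 1 with q prime,
# and g, h generators of the order-q subgroup with unknown log_g(h).
p, q = 2039, 1019
g, h = 4, 9  # 2^2 and 3^2: quadratic residues, hence in the order-q subgroup

def commit(value, r):
    """Pedersen commitment g^value * h^r mod p: perfectly hiding
    (r is uniform), computationally binding (under discrete log)."""
    return (pow(g, value, p) * pow(h, r, p)) % p

# Each yes/no vote (1 or 0) is committed individually...
votes = [1, 0, 1, 1]
rs = [secrets.randbelow(q) for _ in votes]
commitments = [commit(v, r) for v, r in zip(votes, rs)]

# ...and the commitments are aggregated homomorphically: their product
# is a commitment to the sum of the votes.
aggregate = 1
for c in commitments:
    aggregate = (aggregate * c) % p

# Trustees open only the aggregate: they reveal the tally and the
# combined randomness, never any individual vote.
tally, R = sum(votes), sum(rs) % q
assert aggregate == commit(tally, R)
```

Because each commitment is perfectly hiding, even an adversary with unlimited future computing power learns nothing about an individual vote from the transcript; only the binding property, and hence integrity, rests on a hardness assumption.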
We generally accept that, without further information, a voter is more likely to have voted for a candidate who has received more votes, but additional data is commonly released which can further erode voter privacy. Even if we exclude privacy compromises, there are other privacy risks which must be managed. If voters achieve privacy by encrypting their selections, the holders of decryption keys can view their votes. If voters make their selections on devices out of their immediate control (e.g., official election equipment), then it is difficult to assure them that these devices are not retaining information that could later compromise their privacy. If voters make their selections on their own devices, then there is an even greater risk that these devices could be infected with malware that records (and perhaps even alters) their selections (see, for instance, the Estonian system [97]).
Open problems:
• Are there ways to quantify systemic privacy loss?
• Can elections minimize privacy loss?
• Can elections provide verifiable integrity while minimizing privacy loss?
Preventing coercion and vote-selling was considered solved with the introduction of the Australian ballot: the process of voting privately within a public environment where privacy can be monitored and enforced prevents improper influence. Recent systems have complicated this notion, however. If a voting protocol provides a receipt but is not carefully designed, the receipt can be a channel for information to a coercive adversary. Benaloh and Tuinstra [17] pointed out that passive privacy is insufficient for resisting coercion in elections:
Definition 9. A voting system is receipt free if a voter is unable to prove how she voted even if she actively colludes with a coercer and deviates from the protocol in order to try to produce a proof.

Traditional elections may fail receipt-freeness too. In general, if a vote consists of a long list of choices, the number of possible votes may be much larger than the number of likely voters. This is sometimes called (a failure of) the short ballot assumption [84]. Prior to each election, coercers assign a particular voting pattern to each voter. When the individual votes are made public, any voter who did not cast their pattern can then be found out. This is sometimes called the Italian attack, after a once prevalent practice in Sicily. It can be easily mitigated when a vote can be broken up, but is difficult to mitigate in systems like IRV in which the vote is complex but must be kept together. Mitigations are discussed in Sections 3.2.1 and 3.3.5.
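A quick calculation shows why the short ballot assumption fails for full rankings: the number of distinct preference orderings grows factorially, so with even a moderate number of candidates a coercer-assigned ordering is almost surely unique among the published ballots. The electorate size below is an arbitrary round number used only for comparison.

```python
from math import factorial

def distinct_rankings(num_candidates: int) -> int:
    """Number of full preference orderings over the candidates."""
    return factorial(num_candidates)

# With an electorate of a few million, a 13-candidate ranked ballot
# already has more possible orderings than there are voters, so a
# coercer-assigned ordering is almost surely unique when published.
assert distinct_rankings(10) == 3_628_800        # still below millions of voters
assert distinct_rankings(13) == 6_227_020_800    # exceeds any electorate
```

This is why the attack targets complex ballots that must be published whole: splitting a vote into independently published pieces collapses the space of observable patterns.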
Incoercibility has been defined and examined in the universally composable framework in the context of general multiparty computation [25, 105]. These definitions sidestep the question of whether the voting function itself allows coercion (by publishing individual complex ballots, or by revealing a unanimous result, for example); they examine whether the protocol introduces additional opportunities for coercion. With some exceptions (such as [7]), they usually focus on a passive notion of receipt-freeness, which is not strong enough for voting.
Schemes can be receipt-free but not entirely resistant to coercion. Schemes like Prêt à Voter [87] that rely on randomization for receipt-freeness can be susceptible to forced randomization, where a coercer forces a voter to always choose the first choice on the ballot. Due to randomized candidate order, the resulting vote will be randomly distributed. If a specific group of voters is coerced in this way, it can have a disproportionate impact on the election outcome.

If voting rolls are public and voting is not mandatory, a coercer can also enforce forced abstention, forbidding a voter to vote and checking the rolls for compliance. Schemes that rely on credentialing are also susceptible to coercion by forced surrender of credentials. One way to fully resist forced abstention is to obscure who voted. However, this is difficult to reconcile with the opportunity to verify that only eligible voters have voted (eligibility verifiability), though some schemes achieve both [53].

Moran and Naor [72] provide a strong definition of receipt-freeness in which a voter may deviate actively from the protocol in order to convince a coercer that she obeyed. Their model accommodates forced randomization. A scheme is resistant to coercion if the voter can always pretend to have obeyed while actually voting as she likes.
Definition 10. A voting scheme is coercion resistant if there exists a way for a coerced voter to cast her vote such that her coercer cannot distinguish whether or not she followed the coercer's instructions.

Coercion resistance is defined in [57] to include receipt-freeness and defence against forced randomization, forced abstention, and the forced surrender of credentials. More general definitions include [68], which incorporates all these attacks along with Moran and Naor's notion of a coercion-resistance strategy. Note that if the coercer can monitor the voter throughout the vote-casting period, then resistance is futile. For in-person voting, we assume that the voter is isolated from any coercer while she is in the booth (although this is questionable in the era of mobile phones). For remote voting, we need to assume that voters will have some time when they can interact with the voting system (or the credential-granting system) unobserved.
Some authors have tried to provide some protection against coercion without achieving full coercion resistance. Caveat coercitor [51] proposes the notion of coercion evidence and allows voters to cast multiple votes using the same credential.
Open problem:
• Can we design usable, verifiable, coercion-resistant voting for a remote setting?

2.4 Availability
Denial-of-service (DoS) is an ever-present threat to elections which can be mitigated but never fully eliminated. A simple service outage can disenfranchise voters, and the threat of attack from foreign state-level adversaries is a pressing concern. Indeed, one of the countries that regularly uses Internet voting, Estonia, has been subject to malicious outages [104].

A variant of DoS specific to the context of elections is selective DoS, which presents a fundamentally different threat than general DoS. Voting populations are rarely homogeneous, and disruption of service in, for instance, urban (or rural) areas can skew results and potentially change election outcomes. If DoS cannot be entirely eliminated, can service standards be prescribed so that if an outcome falls below the standards it is vacated? Should these standards depend on the reported margin of victory? What, if any, recovery methods are possible? Because elections are more vulnerable to minor perturbations than most other settings, selective DoS is a concern which cannot be ignored.
A voting system must be usable by voters, poll workers, election officials, observers, and so on. Voters who may not be computer literate—and sometimes not literate at all—should be able to vote with very low error rates. Although some error is regarded as inevitable, it is also critical that the interface not drive errors in a particular direction. For instance, a list of candidates that crosses a page boundary could cause the candidates on the second page to be missed. Whatever security mechanisms we add to the voting process should operate without degrading usability; otherwise the resulting system will likely be unacceptable. A full treatment of usability in voting is beyond the scope of this paper. However, we note that E2E-V systems (and I-voting systems, even when not E2E-V) add additional processes for voters and poll workers to follow. If verification processes can't be used properly by real voters, the outcome will not be properly verified. One great advantage of statistical audits is to shift complexity from voters to auditors.
Open problems:
• How effectively can usability be integrated into the design process of a voting system?
• How can we ensure full E2E-V, coercion resistance, etc., in a usable fashion?
A variety of other mechanical requirements are often imposed by legal requirements that vary among jurisdictions. For example:
• Allowing voters to cast a "write-in" vote for a candidate not listed on the ballot.
• Mandating the use of paper ballots (in some states without unique identifying marks or serial numbers; in other states requiring such marks).
• Mandating the use of certain social choice functions (see 2.6.1 Complex Election Methods below).
• Supporting absentee voting.
• Requiring or forbidding that "ballot rotation" be used (listing the candidates in different orders in different jurisdictions).
• Requiring that voting equipment be certified under government guidelines.

Newer electronic and I-voting systems raise important policy challenges for real-world adoption. For example, in STAR-Vote [9], there will be multiple copies of every vote record: mostly electronic records, but also paper records. There may be instances where one is damaged or destroyed and the other is all that remains. When laws speak to retention of "the ballot", that term is no longer well-defined. Such requirements may need to be adapted to newer voting systems.
Many countries allow voters to select, score, or rank candidates or parties. Votes can then be tallied in a variety of complex ways [21, 90]. None of the requirements for privacy, coercion resistance, or the provision of verifiable evidence change. However, many tools that achieve these properties for traditional "first-past-the-post" elections need to be redesigned.

An election method might be complex at the voting or the tallying end. For example, party-list methods such as D'Hondt and Sainte-Laguë have simple voting, in which voters select their candidate or party, but complex proportional seat allocation. Borda, Range Voting, and Approval Voting allow votes to be quite expressive but are simple to tally by addition. Condorcet's method and related functions [94, 103] can be arbitrarily complex, as they can combine with any social choice function. Instant Runoff Voting (IRV) and the Single Transferable Vote (STV) are both expressive and complicated to tally. This makes for several challenges.
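The contrast between simple voting and complex tallying in party-list methods can be made concrete with a short sketch of the D'Hondt highest-averages rule (the party names and vote totals below are hypothetical, purely for illustration):

```python
def dhondt(votes: dict[str, int], seats: int) -> dict[str, int]:
    """Allocate seats by the D'Hondt highest-averages method:
    each party's total is divided by 1, 2, 3, ..., and seats are
    awarded to the highest quotients."""
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        # The next seat goes to the party with the highest current quotient.
        winner = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[winner] += 1
    return alloc

# Hypothetical example: 8 seats among three parties.
print(dhondt({"A": 100_000, "B": 80_000, "C": 30_000}, 8))  # -> {'A': 4, 'B': 3, 'C': 1}
```

The voter's task is a single selection; all the complexity sits in the allocation loop, which is the part an audit must check.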
Open problems:
• Which methods for cast-as-intended verification (e.g., code voting [30]) work for complex voting schemes?
• How can we apply risk-limiting audits to complex schemes? See Section 3.2.1 for more detail.
• How can failures of the short ballot assumption [84] be mitigated with complex ballots?
• Can we achieve everlasting privacy for complex elections?
3. How can we secure voting?
These truths are self-evident but not self-enforcing
Barack Obama
The goal of this section and the next is to provide a state-of-the-art picture of current solutions to voting problems and ongoing voting research, to motivate further work on open problems, and to define clear directions both in research and election policy.
Following security problems with direct-recording electronic voting systems (DREs) [71, 24, 46, 107], many parts of the USA returned to the use of paper ballots. If secure custody of the paper ballots is assumed, paper provides the durable evidence required to determine the correctness of the election outcome. For this reason, when humans vote from untrusted computers, cryptographic voting system specifications often use paper for security, included in the notions of dispute-freeness, dispute resolution, collection accountability, and accountability [66] (all as defined in Section 2.1). Note that the standard approach to dispute resolution, based on non-repudiation, cannot be applied to the voting problem in the standard fashion, because the human voter does not have the ability to check digital signatures or digitally sign the vote (or other messages that may be part of the protocol) unassisted.

Dispute-freeness and accountability are often achieved in a polling place through the use of cast paper ballots and the evidence of their chain of custody (e.g., wet-ink signatures). Paper provides an interface for data entry for the voter—not simply to enter the vote, but also to enter other messages that the protocol might require—and data on unforgeable paper serves many of the purposes of digitally signed data. Thus, for example, when a voter marks a Prêt à Voter [87] or Scantegrity [31] ballot, she is providing an instruction that the voting system cannot pretend was something else. The resulting vote encryption has been physically committed to by the voting system, by the mere act of printing the ballot, before the voter "casts" her vote.

Physical ceremony, such as can be witnessed while the election is ongoing, also supports verifiable cryptographic election protocols (see Section 3.3.2). Such ceremonies include the verification of voter credentials, any generation of randomness required for the choice between cast and audit, any vote-encryption verification performed by election officials, etc. The key aspect of these ceremonies is the chance for observers to see that they are properly conducted.
Open problem:
• Can we achieve dispute resolution or dispute-freeness without the use of paper and physical ceremony?
Two types of risk-limiting audits have been devised: ballot-polling and comparison [69, 14, 98]. Both types continue to examine random samples of ballots until either there is strong statistical evidence that the outcome is correct, or there has been a complete manual tally. "Strong statistical evidence" means that the p-value of the hypothesis that the outcome is incorrect is at most α, the tolerable risk.

Both methods rely on the existence of a ballot manifest that describes how the audit trail is stored. Selecting the random sample can include a public ceremony in which observers contribute by rolling dice to seed a PRNG [36].

Ballot-polling audits examine random samples of individual ballots. They demand almost nothing of the voting technology other than the reported outcome. When the reported outcome is correct, the expected number of ballots a ballot-polling audit inspects is approximately quadratic in the reciprocal of the (true) margin of victory, resulting in large expected sample sizes for small margins.
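The core of a ballot-polling audit is a sequential likelihood-ratio test in the style of BRAVO [69]. The following is a simplified two-candidate sketch (function and parameter names are ours, and we assume every sampled ballot shows a vote for one of the two candidates), not a normative implementation:

```python
def ballot_polling_audit(sample, reported_winner_share, alpha=0.05):
    """Sequential ballot-polling test, BRAVO-style, for two candidates.

    `sample` is an iterable of ballots, each "W" (vote for the reported
    winner) or "L".  The statistic t is a likelihood ratio comparing the
    reported winner share against a tie; the audit stops as soon as the
    risk limit `alpha` is met.  Returns the number of ballots inspected,
    or None if the sample is exhausted first (in practice the audit
    would keep sampling or escalate to a full hand count).
    """
    assert reported_winner_share > 0.5
    t = 1.0
    for i, ballot in enumerate(sample, start=1):
        if ballot == "W":
            t *= reported_winner_share / 0.5
        else:
            t *= (1 - reported_winner_share) / 0.5
        if t >= 1 / alpha:
            return i  # strong evidence the reported outcome is correct
    return None
```

As the true margin shrinks toward a tie, each sampled ballot moves t less, and the expected number of ballots needed grows roughly as the inverse square of the margin, matching the quadratic behavior noted above.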
Comparison audits compare reported results for randomly selected subsets of ballots to manual tallies of those ballots. They require the voting system to commit to tallies of subsets of ballots ("clusters") corresponding to identifiable physical subsets of the audit trail. Comparison audits have two parts: confirm that the outcome computed from the commitment matches the reported outcome, and check the accuracy of randomly selected clusters by manually inspecting the corresponding subsets of the audit trail. When the reported cluster tallies are correct, the number of clusters a comparison audit inspects is approximately linear in the reciprocal of the reported margin. The efficiency of comparison audits also depends approximately linearly on the size of the clusters. Efficiency is highest for clusters consisting of individual ballots: individual cast vote records. Auditing at the level of individual ballots requires the voting system to commit to the interpretation of each ballot in a way that is linked to the corresponding element of the audit trail.

In addition to RLAs, auditing methods have been proposed with Bayesian [83] or heuristic [82] justifications. All post-election audits implicitly assume that the audit trail is sufficiently complete and accurate that a full manual count would reflect the correct contest outcome.
Compliance audits are designed to determine whether there is convincing evidence that the audit trail was curated well, by checking ballot accounting, registration records, pollbooks, election procedures, physical security of the audit trail, chain-of-custody logs, and so on.
Evidence-based elections [101] combine compliance audits and risk-limiting audits to determine whether the audit trail is adequately accurate, and if so, whether the reported outcome is correct. If there is not convincing evidence that the audit trail is adequately accurate and complete, there cannot be convincing evidence that the outcome is correct.
Generally, in traditional and complex elections, whenever an election margin is known and the infrastructure for a comparison audit is available, it is possible to conduct a rigorous risk-limiting comparison audit. This has motivated much work on practical margin computation for IRV [70, 28, 93, 20]. However, such an audit for a complex election may not be efficient, which motivates the extension of Stark's sharper discrepancy measure to D'Hondt and related schemes [100]. For Schulze and some related schemes, neither efficient margin computation nor any other form of RLA is known (see [55]); a Bayesian audit [83, 33] may nonetheless be used when one is able to specify suitable priors.
Open problems:
• Can comparison audits for complex ballots be performed without exposing voters to "Italian" attacks?
• Can RLAs or other sound statistical audits be developed for systems too complex to compute margins efficiently?
• Can the notion of RLAs be extended to situations where physical evidence is not available (i.e., Internet voting)?
Typically E2E-V involves providing each voter with a protected receipt, an encrypted or encoded version of her vote, at the time the vote is cast. The voter can later use her receipt to check whether her vote is included correctly in the tabulation process. Furthermore, given the set of encrypted votes (as well as other relevant information, like the public keys), the tabulation is universally verifiable: anyone can check whether it is correct. To achieve this, most E2E-V systems rely on a public bulletin board, where the set of encrypted ballots is published in an append-only fashion. The votes can then be turned into a tally in one of two main ways.
Homomorphic encryption schemes [35, 18] allow the tally to be produced on encrypted votes.
Verifiable shuffling transforms a list of encrypted votes into a shuffled list that can be decrypted without the input votes being linked to the (decrypted) output. There are efficient ways to prove that the input list exactly matches the output [91, 76, 102, 8, 52].
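The homomorphic route can be illustrated with a toy exponential-ElGamal sketch, in which multiplying ciphertexts adds the underlying votes, so a yes/no tally can be computed without decrypting any single ballot. The parameters below are deliberately tiny and offer no real security; this is purely an illustration of the algebra:

```python
import random

# Toy parameters: p = 2q + 1 with q prime; g = 4 generates the
# order-q subgroup of quadratic residues mod p.  Far too small
# for real use -- illustration only.
p, q, g = 10007, 5003, 4

def keygen():
    x = random.randrange(1, q)          # secret key
    return x, pow(g, x, p)              # (secret x, public h = g^x)

def encrypt(h, m):
    """Exponential ElGamal: Enc(m) = (g^r, g^m * h^r)."""
    r = random.randrange(1, q)
    return pow(g, r, p), (pow(g, m, p) * pow(h, r, p)) % p

def add(c1, c2):
    """Component-wise product of ciphertexts adds the plaintexts."""
    return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p

def decrypt(x, c, max_m=1000):
    """Recover g^m, then brute-force the (small) tally m."""
    gm = (c[1] * pow(c[0], q - x, p)) % p   # cancels h^r, since g has order q
    for m in range(max_m + 1):
        if pow(g, m, p) == gm:
            return m
    raise ValueError("tally out of range")

x, h = keygen()
# Three voters cast 1 (for) or 0 (against); the tally is computed
# on ciphertexts and only the aggregate is ever decrypted.
total = encrypt(h, 1)
for vote in (0, 1):
    total = add(total, encrypt(h, vote))
print(decrypt(x, total))  # -> 2
```

Real systems add zero-knowledge proofs that each ciphertext encrypts 0 or 1, and distribute the secret key among several trustees; neither is shown here.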
How can a voter verify that her cast vote is the one she wanted?
Code voting, first introduced by Chaum [30], gives each voter a sheet of codes for each candidate. Assuming the code sheet is valid, the voter can cast a vote on an untrusted machine by entering the code corresponding to her chosen candidate and waiting to receive the correct confirmation code. Modern interpretations of code voting include [108, 56, 86]. Code voting only provides assurance that the correct voting code reached the server; it does not of itself provide any guarantee that the code will subsequently be correctly counted. A scheme that improves on this is Pretty Good Democracy [89], where knowledge of the codes is threshold-shared in such a way that receipt of the correct confirmation code provides assurance that the voting code has been registered on the bulletin board by a threshold set of trustees, and hence subsequently counted.

The alternative is to ask the machine to encrypt a vote directly, but verify that it does so correctly. Benaloh [11] developed a simple protocol to enable vote encryption on an untrusted voting machine. A voter uses a voting machine to encrypt any number of votes, and casts only one of these encrypted votes. All the other votes may be "audited" by the voter. If the encryption is audited, the voting system provides a proof that it encrypted the vote correctly, and the proof is public. The corresponding ballot cannot be cast, as the correspondence between the encryption and the ballot is now public and the vote is no longer secret. Voters take home receipts corresponding to the encryptions of their cast ballots as well as any ballots that are to be audited. They may check the presence of these on a bulletin board, and the correctness proofs of the audited encryptions, using software obtained from any of several sources. However, even the most diligent voters need only check that their receipts match the public record and that any ballots selected for audit display correct candidate selections.
The correctness proofs are part of the public record and can be checked by any individual or observer verifying correct tallying.
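The cast-or-audit logic can be sketched in a few lines. This is our own simplification: a hash commitment stands in for the actual vote encryption, and the function names are hypothetical. The point it illustrates is that the machine must commit before learning whether the ballot will be cast or audited, so any cheating risks public exposure:

```python
import hashlib
import secrets

def commit(vote: str) -> tuple[str, bytes]:
    """The machine commits to its encoding of the vote (here a hash
    commitment stands in for encryption).  It must produce this
    BEFORE learning whether the voter will cast or audit."""
    r = secrets.token_hex(16)   # commitment randomness
    digest = hashlib.sha256(f"{vote}|{r}".encode()).digest()
    return r, digest

def audit_checks_out(vote: str, r: str, digest: bytes) -> bool:
    """Anyone can recompute an audited ballot from the opened values
    and compare against the machine's published commitment."""
    return hashlib.sha256(f"{vote}|{r}".encode()).digest() == digest

# The voter's intent is "alice"; a cheating machine commits to "bob".
r, digest = commit("bob")
# If the voter chooses to audit, the fraud is publicly visible:
print(audit_checks_out("alice", r, digest))  # -> False
# An honest machine passes the audit:
r, digest = commit("alice")
print(audit_checks_out("alice", r, digest))  # -> True
```

Because the machine cannot predict which ballots will be audited, any strategy that alters votes with non-negligible probability is caught with probability growing in the number of audits performed.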
In addition to the work of Adida on assisted-human interactive proofs (AHIPs, see [1]), there has been some work on a rigorous understanding of one or more properties of single protocols, including the work of Moran and Naor [74, 73] and Küsters et al. [66]. There have also been formalizations of voting protocols with human participants, such as by Moran and Naor [73] (for a polling protocol using tamper-evident seals on envelopes) and Kiayias et al. [63]. However, there is no one model that is sufficient for the rigorous understanding of the prominent protocols used or proposed for use in real elections. The absence of proofs has led to vulnerabilities in the protocols being overlooked in the past; see [59, 61, 60, 50].

Many systems use a combination of paper, cryptography, and auditing to achieve E2E-V in the polling place, including MarkPledge [77, 4], Wombat [85, 10], Demos [64], Prêt à Voter [87], STAR-Vote [9], and Moran and Naor's scheme [72]. Their properties are summarised more thoroughly in the following section. The cryptographic literature has numerous constructions of end-to-end verifiable election schemes (e.g., [48, 78, 87, 26, 84, 77, 85, 92, 9, 56]). There are also detailed descriptions of what it means to verify the correctness of the output of E2E-V systems (e.g., [64, 17, 72]). Others have attempted to define alternative forms of the E2E-V properties [79, 37, 66]. There are also less technical explanations of E2E-V intended for voters and election officials [16, 106].
Open problems:
• Can we develop a rigorous model for humans and the use of paper and ceremonies in cryptographic voting protocols?
• Can we rigorously examine the combination of statistical and cryptographic methods for election verification?
Some simple approaches to coercion resistance have been suggested in the literature. These include allowing multiple votes with only the last counting, and allowing in-person voting to override remotely cast votes (both used in Estonian, Norwegian, and Utah elections [97, 49, 19]). It is not clear that this mitigates coercion at all. Alarm codes can also be provided to voters: seemingly real but actually fake election credentials, along with the ability for voters to create their own fake credentials. Any such approach can be considered a partial solution at best, particularly given the usability challenges.

One voting system, Civitas [34], based on a protocol by Juels, Catalano, and Jakobsson [57], allows voters to vote with fake credentials to lead the coercive adversary into believing the desired vote was cast. Note that the protocol must enable universal verification of the tally from a list of votes cast with both genuine and fake credentials, proving to the verifier that only the ones with genuine credentials were tallied, without identifying which ones they were.
Open problem:
• Can we develop cryptographic techniques that provide fully coercion-resistant remote voting?
Cast-as-intended verification based on creating and then challenging a vote (e.g., Benaloh challenges) works regardless of the scheme. Cut-and-choose based schemes such as Prêt à Voter and Scantegrity II need to be modified to work. Both uses of end-to-end verifiable voting schemes in government elections, the Takoma Park run of Scantegrity II and the Victorian run of Prêt à Voter, used IRV (and one used STV). Verifiable IRV/STV counting that doesn't expose individual votes to the Italian attack has been considered [15], but may not be efficient enough for use in large elections in practice, and was not employed in either practical implementation.
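For reference, the IRV social choice function itself is short to state in code; the difficulty lies in margin computation and auditing, not in tallying. A minimal sketch with hypothetical candidates and ballots:

```python
from collections import Counter

def irv_winner(ballots):
    """Instant Runoff Voting: repeatedly eliminate the candidate with
    the fewest first preferences, transferring those ballots to their
    next surviving choice, until a candidate holds a majority."""
    remaining = {c for b in ballots for c in b}
    while True:
        counts = Counter({c: 0 for c in remaining})
        for b in ballots:
            for c in b:                 # first surviving preference
                if c in remaining:
                    counts[c] += 1
                    break
        leader, votes = counts.most_common(1)[0]
        if votes * 2 > sum(counts.values()) or len(remaining) == 1:
            return leader
        remaining.discard(min(counts, key=counts.get))

# Hypothetical three-candidate contest: A leads on first preferences,
# but C's elimination transfers votes to B, who wins.
ballots = [("A", "C")] * 4 + [("B", "C")] * 3 + [("C", "B")] * 2
print(irv_winner(ballots))  # -> B
```

Note how the outcome depends on the order of eliminations; this path dependence is exactly what makes margins, and hence risk-limiting audits, hard to compute for IRV.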
Open problem:
• Is usable cast-as-intended verification for complex voting methods possible?
Blockchains provide an unexpectedly effective answer to a long-standing problem in computer science: how to form a consistent public ledger in a dynamic and fully distributed environment in which there is no leader and participants may join and leave at any time [75]. In fact, the blockchain process effectively selects a "random" leader at each step to move things forward, so this seems at first to be a natural fit for elections: citizens post their preferences onto a blockchain, and everyone can see and agree upon the outcome of the election.

However, blockchains and elections differ in significant ways. Elections typically already have a central authority to play the leadership role, an entity that administers the election: what will be voted on, when, who is allowed to vote, etc. This authority can also be tasked with publishing a public ledger of events. Note that (as with blockchains) there need be no special trust in a central authority, as these tasks are all publicly observable. So to begin with, by simply posting something on a (digitally signed) web page, an election office can do in a single step what blockchains do with a cumbersome protocol involving huge amounts of computation.

Blockchains are inherently unaccountable. Blockchain miners are individually free to include or reject any transactions they desire; this is considered a feature. To function properly in elections, a blockchain needs a mechanism to ensure all legitimate votes are included in the ledger, which leads to another problem: there is also no certainty in traditional blockchain schemes. Disputes are typically resolved with a "longest chain wins" rule. Miners may have inconsistent views of the contents of blockchains, but the incentives are structured so that the less widely held views eventually fade away (usually). This lack of certainty is not a desirable property in elections.

In addition to lacking certainty and accountability, blockchains also lack anonymity. While modifications can be made to blockchain protocols to add anonymity, certainty, and accountability, balancing these modifications on top of the additional constraints of voting is difficult, and simpler solutions already exist, as we discuss. In short, blockchains do not address any of the fundamental problems in elections, and their use actually makes things worse.
4. Current Solutions
I am committed to helping Ohio deliver its electoral votes to the president next year.
Walden O’Dell, Diebold CEO, 2003
Below we provide a brief analysis of several real-world voting systems developed by the scientific community. These systems use the properties discussed in Sections 2 and 3. We include both poll-site and remote systems. This collection is by no means exhaustive, but hopefully the abundance of verifiable, evidence-based voting systems will convince the reader that there are significant technological improvements that can greatly improve election security. Our analysis is graphically represented in Table 1.
The systems below were developed specifically with the requirements from Section 2 in mind. As such, all satisfy the end-to-end verifiability criteria from Section 2.1.3, and to a varying degree provide collection accountability, receipt-freeness, and coercion resistance.

Figure 1: Marked ballots in Prêt-à-Voter [87] (left) and Scantegrity [31] (right).

Prêt à Voter [87] ballots list the candidates in a pseudo-random order, and the position of the voter's mark serves as an encryption of the vote. The ballot also carries an encryption of the candidate ordering, which can be used, with the mark position, to obtain the vote. Voters can audit ballots to check that the random candidate order they are shown matches the encrypted values on their ballot.

vVote. In the 2014 state election, the Australian state of Victoria conducted a small trial of end-to-end verifiable poll-site voting, using a system called vVote derived from Prêt à Voter [38].

The Scantegrity [31, 32] voter marks ballots that are very similar to optical-scan ballots, with a single important difference. Each oval has printed on it, in invisible ink, a confirmation code—the encryption corresponding to this vote choice. When voters fill the oval with a special pen, the confirmation number becomes visible. The same functionality can be achieved through the use of scratch-off surfaces.
Scantegrity II was used by the City of Takoma Park for its municipal elections in 2009 and 2011 [26], the first secret-ballot election for public office known to use an E2E voting system within the U.S.
VeriScan [13], like Scantegrity, uses optical-scan ballots. But the ballots are ordinary, using regular ink, and are filled in by voters using ordinary pens. Optical scanners used by VeriScan are augmented to hold the ballot deposited by a voter and to print a receipt consisting of an encryption of the selections made by the voter (or a hash thereof).

Once the receipt has been given to the voter by the scanner, the voter can instruct the scanner to either retain the ballot or return the ballot to the voter. A returned ballot should be automatically marked as no longer suitable for casting and effectively becomes a challenge ballot as in STAR-Vote (below).

All encrypted ballots, whether cast or retained by a voter, are posted to a public web page where they can be checked against voter receipts. The cast ballots are listed only in encrypted form, but the retained ballots are listed in both encrypted and decrypted form so that voters can check the decryptions against their own copies of the ballots.
STAR-Vote [9] is an E2E-V, in-person voting system designed jointly with Travis County (Austin), Texas, and is scheduled for widespread deployment in 2018. STAR-Vote is a DRE-style touchscreen system, which prints a human-readable paper ballot that is deposited into a ballot box. The system also prints a receipt that can be taken home. These two printouts serve as evidence for audits. STAR-Vote encodes a Benaloh-style cast-or-spoil question [11] as the depositing of the ballot into the ballot box. Each voting machine must commit to the voter's ballot without knowing if it will be deposited and counted or spoiled and thereby challenged. STAR-Vote posts threshold-encrypted cast and spoiled ballots to a web bulletin board. Voters can then check that their cast ballots were included in the tally, or that the system correctly recorded their vote, by decrypting their challenged ballots. STAR-Vote is collection accountable only to the extent that paper receipts and ballot summaries are resistant to forgery. It is coercion resistant and software independent, and allows for audits of its paper records.
While many of the above schemes provide most of the required properties laid out in Section 2, most do not account for everlasting privacy. However, by integrating the Perfectly Private Audit Trail (PPAT) [39], many of the previously discussed systems can attain everlasting privacy. Notably, PPAT can be implemented both with mixnet schemes like Scantegrity [31] and Helios [2] and with homomorphic schemes like that used in STAR-Vote [9].
Table 1: Applying our threat model to fielded and proposed voting schemes. Note that certain features like credentialing and availability are excluded, as these factors impact all systems in roughly equivalent ways. The Utah system has not been made available for rigorous security analysis, and is excluded.

Columns, left to right: fielded; coercion resistance; everlasting privacy; software independence; take-home evidence; ballot cast assurance; collection accountable; verifiably cast-as-intended; verifiably collected-as-cast; verifiably counted-as-collected; paper/electronic/hybrid (p/e/h); write-ins supported; preferential ballots supported.
● = provides property; ○ = does not provide property; ◐ = provides property with provisions.

Poll-site techniques in widespread use
Hand-counted in-person paper          ● ● ● ● ○ ○ ○ ● ○ ● p ● ●
Optical-scan in-person paper          ● ● ● ● ○ ○ ○ ● ○ ● h ● ●
DRE (with paper audit trail)          ● ● ○ ● ○ ○ ○ ◐ ● ● h ● ●
Paperless DRE                         ● ● ○ ○ ○ ○ ○ ○ ○ ○ e ● ●

Poll-site systems from research
Prêt-à-Voter [87]                     ● ● ○ ● ● ◐ ● ● ● ● h ○ ●
Scantegrity [31]                      ● ● ○ ● ● ◐ ● ● ● ● h ◐ ◐
STAR-Vote [9]                         ◐ ● ◐ ● ● ◐ ◐ ● ● ● h ○ ○
Wombat [85]                           ◐ ● ◐ ● ● ◐ ◐ ● ● ● h ○ ○
VeriScan [13]                         ○ ● ◐ ● ● ◐ ◐ ● ● ● h ○ ◐
Scratch and Vote [5]                  ○ ● ○ ● ● ◐ ● ● ● ● h ◐ ◐
MarkPledge [77]                       ○ ● ◐ ● ● ● ● ● ● ● e ○ ○
ThreeBallot [80]                      ○ ● ● ● ● ◐ ● ● ◐ ● h ○ ○

Remote voting systems and techniques
Helios [2]                            ◐ ○ ◐ ● ● ◐ ◐ ● ● ● e ○ ○
Remotegrity [108]                     ◐ ○ ◐ ● ● ○ ● ● ● ● h ○ ◐
Civitas [34]                          ◐ ● ◐ ○ ● ○ ◐ ● ● ● e ◐ ●
Selene [88]                           ○ ● ◐ ● ○ ◐ ● ● ● ● e ○ ●
Norway [49]                           ● ○ ○ ○ ○ ○ ◐ ○ ○ ◐ e ● ●
Estonia [97]                          ● ○ ○ ○ ◐ ○ ○ ○ ○ ○ e ○ ○
iVote [54]                            ● ○ ○ ○ ○ ○ ○ ◐ ◐ ○ e ○ ●
Paper ballots returned by postal mail ● ○ ● ● ○ ○ ○ ○ ○ ● p ● ●

Table footnotes: used in small trial elections; pending deployment; used in private-sector elections; absentee voting only; allows multiple voting; possible with PPAT; with sufficient auditing; receipts sent by email; temporary email receipt; queryable (phone system); queryable (code sheets); enhanced with pre- and post-election auditing; enhanced with auditing during elections; to the extent the paper resists forgery.

The Remotegrity [108] voting system specification provides a layer over local coded voting system specifications to enable their use in a remote setting. It is the only known specification that enables the voter to detect and prove attempts by adversaries to change the remote vote.

Voters are mailed a package containing a coded-vote ballot and a credential sheet. The sheet contains authorization codes and lock-in codes under scratch-offs, and a return code. To vote, voters scratch off an authorization code at random and use it as a credential to enter the candidate code. The election website displays the entered information and the return code, which indicates to the voter that the vote was received. If the website displays the correct information, the voter locks it in with a random lock-in code. If not, the voter uses another computer to vote, scratching off another authorization code. For voter verifiability, voters may receive multiple ballots, one of which is voted on and the others audited.

The credential authority (an insider adversary) can use the credentials to vote instead of the voter. If this happens, the voter can show the unscratched surface to prove the existence of a problem. Remotegrity thus achieves E2E-V, collection accountability, and software independence. Since there is no secret-ballot guarantee, there is no coercion resistance.
Remotegrity was made available to absentee voters in the 2011 election of the City of Takoma Park, alongside in-person voting provided by Scantegrity.

Helios [2, 3] is an E2E-V Internet voting system. Voters visit a web page "voting booth" to enter their selections. After voters review their ballots, each ballot is encrypted using a threshold key generated during election setup. Voters cast a ballot by entering credentials supplied for the election. Alternatively, voters can anonymously spoil their ballots to decrypt them, to show that their selections were accurately recorded. Voters can cast multiple ballots, with only the last one retained, as a weak means of coercion mitigation. When the election closes, the cast votes are verifiably tallied, either using homomorphic tallying or a mixnet. Independent verifiers have been written to check the tallying and the decryptions of each spoiled ballot. Confirmation that the vote has been received is then emailed to the voter. Helios is used for elections by a variety of universities and professional societies, including the Association for Computing Machinery and the International Association for Cryptologic Research. Helios lacks collection accountability, but is still E2E-V and software independent through its spoil function.
Selene [88] is a remote E2E-V system that revisits the tracker numbers of Scantegrity, but with novel cryptographic constructs to counter the drawbacks. Voters are notified of their tracker after the vote/tracker pairs have been posted to the web bulletin board, which allows coerced voters to identify an alternative tracker pointing to the coercer's required vote. Voter verification is much more transparent and intuitive, and voters are not required to check the presence of an encrypted receipt. For the same reasons as Remotegrity, Selene is software independent and provides collection accountability.
Open problems:
• Is there a cast-as-intended method that voters can execute successfully without instructions from pollworkers?
• Is it possible to make E2E-V protocols simpler for election officials and pollworkers to understand and administer?
5. Internet Voting

"People of Dulsford," began Boris, "I want to assure you that as your newly elected mayor I will not just represent the people who voted for me ..."
"That's good," said Derrick, "because no-one voted for him."
"But the people who didn't vote for me as well," said Boris.
There was a smattering of half-hearted clapping from the crowd.
R. A. Spratt,
Nanny Piggins and the Race to Power
In this section we present the challenges of secure Internet voting through a set of (possibly contradictory) requirements. No system has addressed the challenges sufficiently so far, and whether it is possible to do so remains an open problem. We begin by introducing prominent contemporary instances of I-voting as case studies. Then we examine the Internet voting threat model, along the way showing how these Internet systems have failed to adequately defend themselves. We look at voter authentication, verification of the correctness of a voting system's output, voter privacy and coercion resistance, protections against denial of service, and finally the usability and regulatory constraints faced by voting systems.

One major roadblock faced exclusively by I-voting is the underlying infrastructure of the Internet. The primary security mechanism for Internet communication is Transport Layer Security (TLS), which is constantly evolving in response to vulnerabilities. For instance, the website used in the iVote system was vulnerable to the TLS FREAK [41] and Logjam [6] vulnerabilities. Researchers discovered this during the election period and demonstrated that they could exploit it to steal votes [54]. At the time, Logjam had not been publicly disclosed, highlighting the risk to I-voting from zero-day vulnerabilities. Internet voting systems must find ways to rely on properties like software independence and E2E-V before they can be considered trusted.

In 2015, the U.S. Vote Foundation issued a report on the viability of using E2E-verifiability for Internet voting [106]. The first two conclusions of the report were as follows:
1. Any public elections conducted over the Internet must be end-to-end verifiable.
2. No Internet voting system of any kind should be used for public elections before end-to-end verifiable in-person voting systems have been widely deployed and experience has been gained from their use.

Many of the possible attacks on I-voting systems could be performed on postal voting systems too. The main difference is the likelihood that a very small number of people could automate the manipulation of a very large number of votes, or a carefully chosen few important votes, without detection.
Estonia [43]
Estonia’s I-voting deployment—the largest in the world by fraction of the electorate—was used to cast nearly a third of all votes in recent national elections [42]. The Estonian system uses public key cryptography to provide a digital analog of the “double envelope” ballots often used for absentee voting [43]. It uses a national PKI system to authenticate voters, who encrypt and digitally sign their votes via client-side software. Voters can verify that their votes were correctly received using a smartphone app, but the tallying process is only protected by procedural controls [44]. (We use the term “verify” loosely in this subsection; these systems provide no guarantee that what is shown when voters “verify” their votes proves anything about the correctness of vote recording and processing; see 2.1.) The voting system does not provide evidence of a correct tally, nor does it provide evidence that the vote was correctly recorded if the client is dishonest. A 2013 study showed that the Estonian system is vulnerable to vote manipulation by state-level attackers and client-side malware, and revealed significant shortcomings in officials’ operational security [97].

iVote [22]
The largest online voting trial by absolute number of votes occurred in 2015 in New South Wales, Australia, using a web-based system called iVote. It received 280,000 votes out of a total electorate of over 4 million. The system included a telephone-based vote verification service that allowed voters to dial in and hear their votes read back in the clear. A limited server-side auditing process was performed only by auditors selected by the electoral authority. Thus no evidence was provided that received votes were correctly included in the tally. At election time, the electoral commission declared that “1.7% of electors who voted using iVote also used the verification service and none of them identified any anomalies with their vote.” It emerged more than a year later that 10% of verification attempts had failed to retrieve any vote at all. This error rate, extrapolated to all 280,000 votes, would have been enough to change at least one seat.
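The scale of that extrapolation is easy to check with back-of-envelope arithmetic. This is a sketch, and it rests on the same assumption the argument above makes: that the failure rate among the small fraction of voters who happened to verify is representative of all votes cast.

```python
# Back-of-envelope check of the iVote figures quoted above.
# Assumption (for illustration only): the failure rate seen among voters
# who used the verification service is representative of all votes cast.
total_votes = 280_000
verified_fraction = 0.017   # "1.7% of electors ... used the verification service"
failure_fraction = 0.10     # 10% of verification attempts retrieved no vote

verifiers = round(total_votes * verified_fraction)    # voters who checked
failed_checks = round(verifiers * failure_fraction)   # checks that found no vote
extrapolated = round(total_votes * failure_fraction)  # if the rate held election-wide

print(verifiers, failed_checks, extrapolated)  # 4760 476 28000
```

On this crude estimate, roughly 28,000 votes could have been affected, which is why a 10% verification failure rate is consistent with the claim that at least one seat could have changed.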
Norway [49]
In 2011 and 2013 Norway ran trials of an I-voting system. In the 2013 trial, approximately 250,000 voters (7% of the Norwegian electorate) were able to submit ballots online [95]. Voters are given precomputed encrypted return codes for the various candidates they can vote for. Upon submitting a ballot, the voter receives an SMS message with the return code computed for the voter’s selections. In principle, if the return codes are kept private by the election server, the voter knows the server correctly received her vote. This also means that ballots must be associated with the identity of those who cast them, enabling election officials to possibly coerce or selectively deny service to voters. The voting system did not provide publicly verifiable evidence of a correct tally.
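The return-code idea can be sketched with a toy computation. The real Norwegian protocol derives the codes cryptographically from the encrypted ballot using key material split across servers [49]; the single-key HMAC below, with all names and values invented by us, only illustrates the voter-facing check:

```python
import hashlib
import hmac

def return_code(voter_key: bytes, candidate: str) -> str:
    """Toy 4-digit return code: a PRF of the candidate under a per-voter key."""
    digest = hmac.new(voter_key, candidate.encode(), hashlib.sha256).digest()
    return format(int.from_bytes(digest[:4], "big") % 10_000, "04d")

# Before the election, each voter's poll card is printed with one code per
# candidate, derived from that voter's secret key.
voter_key = b"example per-voter secret"
poll_card = {c: return_code(voter_key, c) for c in ("Alice", "Bob", "Carol")}

# After the voter submits a ballot, the server computes the code for the
# choice it received and sends it by SMS; the voter compares it to her card.
sms = return_code(voter_key, "Bob")
print(sms == poll_card["Bob"])  # True: the server saw the intended choice
```

The check only tells the voter what the server claims to have received; as the paragraph above notes, it gives no publicly verifiable evidence about the tally, and anyone holding the code-generation key can forge correct-looking codes.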
Switzerland [29]
In Switzerland, the Federal Chancellery has produced a clear set of requirements. More stringent verifiability properties come into force as a larger fraction of the votes are carried over the Internet. Many aspects of this approach are admirable. However, the final systems depend on a code-verification system, and hence integrity depends on the proper and secret printing of the code sheets. If the code-printing authorities collude with compromised devices, the right verification codes can be returned even when the votes are wrong.
Utah [19]
In March 2016 the Utah Republican party held its caucus, running pollsite voting in addition to an online system. Voters could register through a third-party website and have a voting credential sent to their phone via SMS or email. Any registered voter could receive a credential, but because the site was unauthenticated, anyone with a voter roll (that is, a publicly available list of registered voters, their party affiliations, home addresses, and other relevant information) could submit any registered voter’s information and receive that person’s credential. On the day of the election, these credentials were used to log onto the website ivotingcenter.us to fill out and submit ballots. The system provided voters with a receipt code that they could check on the election website. It did not provide evidence that a vote was correctly recorded if the client was dishonest, nor did it provide evidence of a correct tally. On election day many voters failed to receive their voting credentials or could not reach the website at all, forcing as many as 13,000 of the 40,000 who attempted to register to vote online to either vote in person or not vote at all [45].

All of these systems place significant trust in unverifiable processes, at both the client and server sides, leading to serious weaknesses in privacy and integrity. Their faults demonstrate the importance of a clear and careful trust model that makes explicit who does and does not have power over the votes of others, and reinforce the importance of providing convincing evidence of an accurate election outcome.
Internet voting presents numerous challenges that have not been adequately addressed. First among these is the coercion problem, which is shared with other remote voting systems in widespread use today (such as vote-by-mail). However, I-voting exacerbates the problem by making coercion and vote-selling a simple matter of a voter providing credentials to another individual.

Client malware poses another significant obstacle. While E2E-verifiability mitigates the malware risks by providing voters with alternate means to ensure that their votes have been properly recorded and counted, many voters will not avail themselves of these capabilities. We could therefore have a situation where a large-scale fraud is observed by a relatively small number of voters. In an election that provides collection accountability, the detection of even a small number of instances of malfeasance can bring the election to a halt, but the required evidence can be far more fleeting and difficult to validate in an Internet setting. An election should not be overturned by a small number of complaints if there is no substantive evidence to support them.

Targeted denial-of-service is another serious unresolved threat to I-voting. Ordinary denial-of-service (DoS) is a common threat on the Internet, and means have been deployed to mitigate, although not eliminate, these threats. The unique aspect in elections is that while ordinary DoS can slow commerce or block access to a web site for a period, the effects of a targeted DoS attack on an election can be far more severe. Since voting patterns are far from homogeneous, an attacker can launch a targeted DoS attack against populations and regions that are likely to favor a particular candidate or position. By merely making it more difficult for people in targeted populations to vote, the result of an election can be altered. As yet, we have no effective mitigations for such attacks.

Finally, as was observed in the U.S. Vote Foundation study [106], we simply don’t yet have much experience with large-scale deployments of E2E-verifiable election systems in the simpler and more manageable setting of in-person voting. It would be dangerous to jump directly to the far more challenging setting of Internet voting with a heavy dependence on a technology that has not previously been deployed at scale.
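The danger of targeted denial-of-service described above can be made concrete with a toy two-region election. All numbers here are invented for illustration; the point is only that suppressing turnout where one candidate is strong flips the outcome without altering a single recorded vote.

```python
# Toy model of a targeted DoS attack on turnout. Invented numbers.
regions = {
    # name: (eligible voters, fraction favoring candidate A)
    "city":    (100_000, 0.60),   # leans toward A
    "suburbs": (100_000, 0.44),   # leans toward B
}

def tally(turnout):
    """Expected vote totals for A and B given per-region turnout rates."""
    votes_a = votes_b = 0.0
    for name, (voters, pref_a) in regions.items():
        cast = voters * turnout[name]
        votes_a += cast * pref_a
        votes_b += cast * (1 - pref_a)
    return votes_a, votes_b

# Normal election: 60% turnout everywhere -> A wins (62,400 to 57,600).
a, b = tally({"city": 0.6, "suburbs": 0.6})
print(a > b)  # True

# Attack: DoS halves turnout in the A-leaning city only -> B wins.
a, b = tally({"city": 0.3, "suburbs": 0.6})
print(a > b)  # False
```

A uniform outage would shrink both totals proportionally and leave the winner unchanged; it is the targeting that does the damage, which is why conventional DoS mitigations do not fully answer this threat.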
There are numerous alternatives to Internet voting that can help enfranchise voters who cannot easily access a poll site on the day of an election.

Early voting is in widespread use throughout the U.S. By extending the voting window from a single day to as much as three weeks, voters who may be away or busy on the date of an election can be afforded an opportunity to vote in person, at their convenience, at a poll site with traditional safeguards. Early voting also mitigates many of the risks of traditional systems since, for example, an equipment failure ten days prior to the close of an election is far less serious than one that takes place during a single day of voting.

Some U.S. jurisdictions have adopted a vote center system in which voters may vote in person outside of their home precincts. This option has been facilitated by the use of electronic poll books, and it allows voters to, for instance, vote during a lunch break from work if they will be away from their homes during voting hours. The vote center model could potentially be extended from the current model of voters away from their home precincts but still within their home counties by allowing voters to use any poll site in the state or country. It would even be possible to establish remote voting kiosks overseas in embassies, consulates, or other official sites, and roving voting kiosks could be established with as few as two poll workers and a laptop computer. Security and accountability in all of these non-local voting scenarios can be greatly enhanced by the use of E2E-verifiability.

Blank-ballot electronic delivery is another option that has gained in popularity. While there are numerous risks in using the Internet for casting of ballots, the risks are far less in simply providing blank ballots to voters. Electronic delivery of blank ballots can save half of the round-trip time that is typical in absentee voting, and traditional methods of ballot return can be used, which are less susceptible to the large-scale attacks that are possible with full Internet voting.
6. A Look Ahead
There is no remedy now to a process that was so opaque that it could have been manipulated at any stage
Michael Meyer-Resende and Mirjam Kunkler, on the 2009 Iranian presidential election
Voting has always used available technology, whether pebbles dropped in an urn or marked paper put in a ballot box; it now uses computers, networks, and cryptography. The core requirement, to provide public evidence of the right result from secret ballots, hasn’t changed in 2500 years.

Computers can improve convenience and accessibility over plain paper and manual counting. In the polling place there are good solutions, including risk-limiting audits and end-to-end verifiable systems. These must be more widely deployed, and their options for verifying the election result must actually be used.

Many of the open problems described in this paper—usable and accessible voting systems, dispute resolution, incoercibility—come together in the challenge of a remote voting system that is verifiable and usable without supervision. The open problem of a system specification that (a) does not use any paper at all and (b) is based on a simple procedure for voters and poll workers will motivate researchers for a long time. Perhaps a better goal is a hybrid system combining paper evidence with some auditing or cryptographic verification.

Research in voting brings together knowledge in many fields—cryptography, systems security, statistics, usability and accessibility, software verification, elections, law and policy, to name a few—to address a critical real-world problem.

The peaceful transfer of power depends on confidence in the electoral process. That confidence should not automatically be given to any outcome that seems plausible—it must be earned by producing evidence that the election result is what the people chose. Insisting on evidence reduces the opportunities for fraud, hence bringing greater security to citizens the world over.
Acknowledgments
This work was supported in part by the U.S. National Science Foundation awards CNS-1345254, CNS-1409505, CNS-1518888, CNS-1409401, CNS-1314492, and 1421373; the Center for Science of Information STC (CSoI), an NSF Science and Technology Center, under grant agreement CCF-0939370; the Maryland Procurement Office under contract H98230-14-C-0127; and FNR Luxembourg under the PETRVS Mobility grant.
7. References

[1] B. Adida. Advances in Cryptographic Voting Systems. PhD thesis, MIT, July 2006.
[2] B. Adida. Helios: Web-based open-audit voting. 17th USENIX Security Symposium, Aug. 2008. https://vote.heliosvoting.org.
[3] B. Adida, O. de Marneffe, O. Pereira, and J.-J. Quisquater. Electing a university president using open-audit voting: Analysis of real-world use of Helios. In Electronic Voting Technology Workshop / Workshop on Trustworthy Elections, EVT/WOTE ’09, Aug. 2009.
[4] B. Adida and C. A. Neff. Efficient receipt-free ballot casting resistant to covert channels. IACR Cryptology ePrint Archive, 2008:207, 2008.
[5] B. Adida and R. L. Rivest. Scratch and Vote: Self-contained paper-based cryptographic voting. In ACM Workshop on Privacy in the Electronic Society, WPES ’06, pages 29–40, 2006.
[6] D. Adrian, K. Bhargavan, Z. Durumeric, P. Gaudry, M. Green, J. A. Halderman, N. Heninger, D. Springall, E. Thomé, L. Valenta, B. VanderSloot, E. Wustrow, S. Zanella-Béguelin, and P. Zimmermann. Imperfect forward secrecy: How Diffie-Hellman fails in practice. In CCS ’15, Oct. 2015.
[7] J. Alwen, R. Ostrovsky, H.-S. Zhou, and V. Zikas. Incoercible multi-party computation and universally composable receipt-free voting. In Advances in Cryptology—CRYPTO 2015, pages 763–780. Springer, 2015.
[8] S. Bayer and J. Groth. Efficient zero-knowledge argument for correctness of a shuffle. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 263–280. Springer, 2012.
[9] S. Bell, J. Benaloh, M. D. Byrne, D. DeBeauvoir, B. Eakin, G. Fisher, P. Kortum, N. McBurnett, J. Montoya, M. Parker, O. Pereira, P. B. Stark, D. S. Wallach, and M. Winn. STAR-Vote: A secure, transparent, auditable, and reliable voting system. USENIX Journal of Election Technology and Systems, 1(1), Aug. 2013.
[10] J. Ben-Nun, N. Fahri, M. Llewellyn, B. Riva, A. Rosen, A. Ta-Shma, and D. Wikström. A new implementation of a dual (paper and cryptographic) voting system, 2012.
[11] J. Benaloh. Simple verifiable elections. In USENIX/ACCURATE Electronic Voting Technology Workshop, EVT ’06, Aug. 2006.
[12] J. Benaloh. Ballot casting assurance via voter-initiated poll station auditing. In USENIX/ACCURATE Electronic Voting Technology Workshop, EVT ’07, Aug. 2007.
[13] J. Benaloh. Administrative and public verifiability: Can we have both? In USENIX/ACCURATE Electronic Voting Technology Workshop, EVT ’08, Aug. 2008.
[14] J. Benaloh, D. Jones, E. Lazarus, M. Lindeman, and P. B. Stark. SOBA: Secrecy-preserving observable ballot-level audit. In Proc. USENIX/ACCURATE Electronic Voting Technology Workshop, 2011.
[15] J. Benaloh, T. Moran, L. Naish, K. Ramchen, and V. Teague. Shuffle-sum: Coercion-resistant verifiable tallying for STV voting. IEEE Transactions on Information Forensics and Security, 4(4):685–698, 2009.
[16] J. Benaloh, R. Rivest, P. Y. Ryan, P. Stark, V. Teague, and P. Vora. End-to-end verifiability, 2015. arXiv:1504.03778.
[17] J. Benaloh and D. Tuinstra. Receipt-free secret-ballot elections. In STOC ’94, pages 544–553, 1994.
[18] J. D. C. Benaloh. Verifiable Secret-ballot Elections. PhD thesis, Yale, 1987. AAI8809191.
[19] M. Bernhard. What happened in the Utah GOP caucus. https://mbernhard.com/Utahvoting.pdf.
[20] M. Blom, P. J. Stuckey, V. J. Teague, and R. Tidhar. Efficient computation of exact IRV margins, 2015. arXiv:1508.04885.
[21] S. Brams. Mathematics and democracy.
[23] J. A. Calandrino, J. A. Halderman, and E. W. Felten. Machine-assisted election auditing. In USENIX/ACCURATE Electronic Voting Technology Workshop.
FOCS ’96, pages 504–513, 1996.
[26] R. Carback, D. Chaum, J. Clark, J. Conway, A. Essex, P. S. Herrnson, T. Mayberry, S. Popoveniuc, R. L. Rivest, E. Shen, A. T. Sherman, and P. L. Vora. Scantegrity II municipal election at Takoma Park: The first E2E binding governmental election with ballot privacy. In USENIX/ACCURATE Electronic Voting Technology Workshop / Workshop on Trustworthy Elections.
IAVoSS Workshop on Trustworthy Elections, WOTE ’01, 2001.
[31] D. Chaum, R. Carback, J. Clark, A. Essex, S. Popoveniuc, R. L. Rivest, P. Y. A. Ryan, E. Shen, and A. T. Sherman. Scantegrity II: End-to-end verifiability for optical scan election systems using invisible ink confirmation codes. In USENIX/ACCURATE Electronic Voting Workshop, EVT ’08, Aug. 2008.
[32] D. Chaum, R. Carback, J. Clark, A. Essex, S. Popoveniuc, R. L. Rivest, P. Y. A. Ryan, E. Shen, A. T. Sherman, and P. L. Vora. Scantegrity II: End-to-end verifiability by voters of optical scan elections through confirmation codes. IEEE Transactions on Information Forensics and Security, 4(4):611–627, 2009.
[33] B. Chilingirian, Z. Perumal, R. L. Rivest, G. Bowland, A. Conway, P. B. Stark, M. Blom, C. Culnane, and V. Teague. Auditing Australian Senate ballots. arXiv preprint arXiv:1610.00127.
FOCS ’85, pages 372–382, 1985.
[36] A. Cordero, D. Wagner, and D. Dill. The role of dice in election audits – extended abstract. In IAVoSS Workshop On Trustworthy Elections (WOTE 2006), 2006.
[37] V. Cortier, D. Galindo, R. Küsters, J. Müller, and T. Truderung. Verifiability notions for e-voting protocols. Technical Report 2016/287, Cryptology ePrint Archive, 2016. http://eprint.iacr.org/2016/287.
[38] C. Culnane, P. Y. A. Ryan, S. Schneider, and V. Teague. vVote: A verifiable voting system. ACM Transactions on Information and System Security, 18(1), 2015. Tech report on arXiv: arXiv:1404.6822.
[39] E. Cuvelier, O. Pereira, and T. Peters. Election verifiability or ballot privacy: Do we need to choose? In ESORICS ’13, Sept. 2013.
[40] S. Delaune, S. Kremer, and M. Ryan. Verifying privacy-type properties of electronic voting protocols: A taster. In Towards Trustworthy Elections.
USENIX/ACCURATE Electronic Voting Technology Workshop.
IAVoSS Workshop on Trustworthy Elections, WOTE ’06, 2006.
[49] K. Gjøsteen. The Norwegian Internet voting protocol. In VoteID ’11, 2011.
[50] M. Gogolewski, M. Klonowski, P. Kubiak, M. Kutylowski, A. Lauks, and F. Zagórski. Kleptographic attacks on e-voting schemes. In International Conference on Emerging Trends in Information and Communication Security, pages 494–508, 2006.
[51] G. S. Grewal, M. D. Ryan, S. Bursuc, and P. Y. Ryan. Caveat coercitor: Coercion-evidence in electronic voting, pages 367–381, 2013.
[52] J. Groth. A verifiable secret shuffle of homomorphic encryptions. Journal of Cryptology, 23(4):546–579, 2010.
[53] R. Haenni and O. Spycher. Secure internet voting on limited devices with anonymized DSA public keys. EVT/WOTE, 11, 2011.
[54] J. A. Halderman and V. Teague. The New South Wales iVote system: Security failures and verification flaws in a live online election. In VoteID ’15, Aug. 2015.
[55] L. A. Hemaspaandra, R. Lavaee, and C. Menton. Schulze and ranked-pairs voting are fixed-parameter tractable to bribe, manipulate, and control. In International Conference on Autonomous Agents and Multiagent Systems, pages 1345–1346, 2013.
[56] R. Joaquim, C. Ribeiro, and P. Ferreira. VeryVote: A voter verifiable code voting system. In International Conference on E-Voting and Identity, pages 106–121. Springer, 2009.
[57] A. Juels, D. Catalano, and M. Jakobsson. Coercion-resistant electronic elections. In ACM Workshop on Privacy in the Electronic Society, WPES ’05, pages 61–70, Nov. 2005.
[58] T. Kaczmarek, J. Wittrock, R. Carback, A. Florescu, J. Rubio, N. Runyan, P. L. Vora, and F. Zagórski. Dispute resolution in accessible voting systems: The design and use of Audiotegrity. In J. Heather, S. A. Schneider, and V. Teague, editors, E-Voting and Identity – 4th International Conference, Vote-ID 2013, Guildford, UK, July 17–19, 2013, Proceedings, volume 7985 of Lecture Notes in Computer Science, pages 127–141. Springer, 2013.
[59] C. Karlof, N. Sastry, and D. Wagner. Cryptographic voting protocols: A systems perspective, pages 33–49, Aug. 2005.
[60] J. Kelsey, A. Regenscheid, T. Moran, and D. Chaum. Attacking paper-based E2E voting systems. In Towards Trustworthy Elections, New Directions in Electronic Voting, pages 370–387, 2010.
[61] S. Khazaei and D. Wikström. Randomized partial checking revisited. In Topics in Cryptology, CT-RSA 2013, pages 115–128. Springer, 2013.
[62] A. Kiayias and M. Yung. Self-tallying elections and perfect ballot secrecy. In PKC ’02, pages 141–158, 2002.
[63] A. Kiayias, T. Zacharias, and B. Zhang. Ceremonies for end-to-end verifiable elections. IACR Cryptology ePrint Archive, 2015:1166, 2015.
[64] A. Kiayias, T. Zacharias, and B. Zhang. End-to-end verifiable elections in the standard model. In Advances in Cryptology—EUROCRYPT 2015, pages 468–498. Springer, 2015.
[65] S. Kremer, M. Ryan, and B. Smyth. Election verifiability in electronic voting protocols. In WISSEC ’09, Nov. 2009.
[66] R. Küsters, T. Truderung, and A. Vogt. Accountability: Definition and relationship to verifiability. In CCS ’10, pages 526–535, 2010.
[67] R. Küsters, T. Truderung, and A. Vogt. Verifiability, privacy, and coercion-resistance: New insights from a case study, pages 538–553, 2011.
[68] R. Küsters, T. Truderung, and A. Vogt. A game-based definition of coercion resistance and its applications. Journal of Computer Security, 20(6):709–764, 2012.
[69] M. Lindeman, P. B. Stark, and V. S. Yates. BRAVO: Ballot-polling risk-limiting audits to verify outcomes. In USENIX Electronic Voting Technology Workshop / Workshop on Trustworthy Elections, EVT/WOTE ’12, Aug. 2012.
[70] T. R. Magrino, R. L. Rivest, E. Shen, and D. Wagner. Computing the margin of victory in IRV elections. In USENIX Electronic Voting Technology Workshop / Workshop on Trustworthy Elections.
Advances in Cryptology—CRYPTO 2006, pages 373–392. Springer, 2006.
[73] T. Moran and M. Naor. Basing cryptographic protocols on tamper-evident seals. Theoretical Computer Science, 411:1283–1310, March 2010.
[74] T. Moran and M. Naor. Split-ballot voting: Everlasting privacy with distributed trust. ACM Transactions on Information and System Security, 13(2):16, 2010.
[75] S. Nakamoto. Bitcoin: A peer-to-peer electronic cash system, 2008.
[76] C. A. Neff. A verifiable secret shuffle and its application to e-voting. In ACM Conference on Computer and Communications Security.
IAVoSS Workshop on Trustworthy Elections, WOTE ’06, 2006.
[79] S. Popoveniuc, J. Kelsey, A. Regenscheid, and P. L. Vora. Performance requirements for end-to-end verifiable elections. In USENIX Electronic Voting Technology Workshop / Workshop on Trustworthy Elections, EVT/WOTE ’10, Aug. 2010.
[80] R. L. Rivest. The ThreeBallot voting system, 2006. https://people.csail.mit.edu/rivest/Rivest-TheThreeBallotVotingSystem.pdf.
[81] R. L. Rivest. On the notion of “software independence” in voting systems. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 366(1881):3759–3767, 2008.
[82] R. L. Rivest. DiffSum: A simple post-election risk-limiting audit. CoRR abs/1509.00127, 2015.
[83] R. L. Rivest and E. Shen. A Bayesian method for auditing elections. In USENIX Electronic Voting Technology Workshop / Workshop on Trustworthy Elections, EVT/WOTE ’12, Aug. 2012.
[84] R. L. Rivest and W. D. Smith. Three voting protocols: ThreeBallot, VAV, and Twin. In USENIX/ACCURATE Electronic Voting Technology Workshop, 2011.
[87] P. Y. A. Ryan, D. Bismark, J. Heather, S. Schneider, and Z. Xia. Prêt à Voter: A voter-verifiable voting system. IEEE Transactions on Information Forensics and Security, 4(4):662–673, 2009.
[88] P. Y. A. Ryan, P. B. Roenne, and V. Iovino. Selene: Voting with transparent verifiability and coercion-mitigation. Cryptology ePrint Archive, Report 2015/1105, 2015. http://eprint.iacr.org/.
[89] P. Y. A. Ryan and V. Teague. Pretty Good Democracy. In Proceedings of the Seventeenth International Workshop on Security Protocols 2009, 2013.
[90] D. G. Saari. Geometry of Voting. Springer, 2012.
[91] K. Sako and J. Killian. Receipt-free mix-type voting scheme: A practical solution to the implementation of a voting booth. In Advances in Cryptology—EUROCRYPT ’95, pages 393–403, 1995.
[92] D. R. Sandler, K. Derr, and D. S. Wallach. VoteBox: A tamper-evident, verifiable electronic voting system, July 2008.
[93] A. D. Sarwate, S. Checkoway, and H. Shacham. Risk-limiting audits and the margin of victory in nonplurality elections. Statistics, Politics and Policy, 4(1):29–64, 2013.
[94] M. Schulze. A new monotonic, clone-independent, reversal symmetric, and Condorcet-consistent single-winner election method. Social Choice and Welfare, 36(2):267–303, 2011.
[95] C. M. Sehat. Internet Voting Pilot: Norway’s 2013 Parliamentary Elections. Technical report, The Carter Center, March 2014.
[96] B. Smyth, M. Ryan, S. Kremer, and M. Kourjieh. Towards automatic analysis of election verifiability properties. In Automated Reasoning for Security Protocol Analysis and Issues in the Theory of Security, pages 146–163. Springer, 2010.
[97] D. Springall, T. Finkenauer, Z. Durumeric, J. Kitcat, H. Hursti, M. MacAlpine, and J. A. Halderman. Security analysis of the Estonian Internet voting system. In CCS ’14, pages 703–715, 2014.
[98] P. Stark. Conservative statistical post-election audits. Annals of Applied Statistics, 2008.
[99] P. B. Stark. Super-simple simultaneous single-ballot risk-limiting audits. In Proceedings of the 2010 International Conference on Electronic Voting Technology / Workshop on Trustworthy Elections, EVT/WOTE ’10, pages 1–16, Berkeley, CA, USA, 2010. USENIX Association.
[100] P. B. Stark and V. Teague. Verifiable European elections: Risk-limiting audits for D’Hondt and its relatives. USENIX Journal of Election Technology and Systems, 3(1), 2014.
[101] P. B. Stark and D. A. Wagner. Evidence-based elections. IEEE Security and Privacy Magazine, 10(5):33–41, Sep.–Oct. 2012.
[102] B. Terelius and D. Wikström. Proofs of restricted shuffles. In Progress in Cryptology—AFRICACRYPT 2010, pages 100–113. Springer, 2010.
[103] T. N. Tideman. Independence of clones as a criterion for voting rules. Social Choice and Welfare.
Advances in Cryptology—CRYPTO 2010.
Expert Report in Flores v. Lopez, 2006.
[108] F. Zagórski, R. Carback, D. Chaum, J. Clark, A. Essex, and P. L. Vora. Remotegrity: Design and use of an end-to-end verifiable remote voting system. In ANCS ’13, pages 441–457, 2013.