Natural Strategic Abilities in Voting Protocols
Wojciech Jamroga, Damian Kurpiewski, and Vadim Malvone
SnT, University of Luxembourg
Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland
Université d'Évry, France
Abstract.
Security properties are often focused on the technological side of the system. One implicitly assumes that the users will behave in the right way to preserve the property at hand. In real life, this cannot be taken for granted. In particular, security mechanisms that are difficult and costly to use are often ignored by the users, and do not really defend the system against possible attacks.
Here, we propose a graded notion of security based on the complexity of the user's strategic behavior. More precisely, we suggest that the level to which a security property ϕ is satisfied can be defined in terms of (a) the complexity of the strategy that the voter needs to execute to make ϕ true, and (b) the resources that the user must employ on the way. The simpler and cheaper it is to obtain ϕ, the higher the degree of security.
We demonstrate how the idea works in a case study based on an electronic voting scenario. To this end, we model the vVote implementation of the Prêt à Voter voting protocol for coercion-resistant and voter-verifiable elections. Then, we identify "natural" strategies for the voter to obtain receipt-freeness, and measure the voter's effort that they require. We also look at how hard it is for the coercer to compromise the election through a randomization attack.

Keywords: electronic voting, coercion resistance, natural strategies, multi-agent models, graded security
1 Introduction

Security analysis often focuses on the technological side of the system. It implicitly assumes that the users will duly follow the sequence of steps that the designer of the protocol prescribed for them. However, such behavior of human participants seldom happens in real life. In particular, mechanisms that are difficult and costly to use are often ignored by the users, even if they are there to defend those very users from possible attacks.

For example, protocols for electronic voting are usually expected to satisfy receipt-freeness (the voter should be given no certificate that can be used to break the anonymity of her vote) and the related property of coercion-resistance (the voter should be able to deceive the potential coercer and cast her vote in accordance with her preferences) [7, 26, 17, 27]. More recently, significant progress has been made in the development of voting systems that would be coercion-resistant and at the same time voter-verifiable, i.e., would allow the voter to verify her part
of the election outcome [31, 12]. The idea is to partly "crowdsource" an audit of the election to the voters, and see if they detect any irregularities. Examples include the Prêt à Voter protocol [32] and its implementation vVote [13] that was used in the 2014 election in the Australian state of Victoria.

However, the fact that a voting system includes a mechanism for voter-verifiability does not immediately imply that it is more secure and trustworthy. This crucially depends on how many voters will actually verify their ballots [37], which in turn depends on how understandable and easy to use the mechanism is. The same applies to mechanisms for coercion-resistance and receipt-freeness, and in fact to any optional security mechanism. If the users find the mechanism complicated and tiresome, and they can avoid it, they will avoid it.

Thus, the right question is often not if but how much security is obtained by the given mechanism. In this paper, we propose a graded notion of practical security based on the complexity of the strategic behavior expected from the user if a given security property is to be achieved. More precisely, we suggest that the level to which property ϕ is "practically" satisfied can be defined in terms of (a) the complexity of the strategy that the user needs to execute to make ϕ true, and (b) the resources that the user must employ on the way. The simpler and cheaper it is to obtain ϕ, the higher the degree of security.

Obviously, the devil is in the detail. A general idea often works best when developed with concrete examples in mind. Here, we take the first step and look at how the level of coercion-resistance and voter-verifiability can be assessed in vVote and Prêt à Voter. To this end, we come up with a multi-agent model of vVote, inspired by interpreted systems [18]. We consider three main types of agents participating in the voting process: the election system, a voter, and a potential coercer.
Then, we identify strategies for the voter to use the voter-verifiability mechanism, and estimate the voter's effort that they require. We also look at how difficult it is for the coercer to compromise the election through a randomization attack [26]. The strategic reasoning and its complexity are formalized by means of so-called natural strategies, proposed in [24, 25] and consistent with psychological evidence on how humans use symbolic concepts [8, 19].

To create the models, we use the Uppaal model checker for distributed and multi-agent systems [4], with its flexible modeling language and intuitive GUI. This additionally allows us to use the Uppaal verification functionality and check that our natural strategies indeed achieve the goals for which they are proposed.
Related work.
Formal analysis of security that takes a more human-centered approach has been done in a number of papers, for example with respect to insider threats [21]. A more systematic approach, based on the idea of security ceremonies, was proposed and used in [10, 5, 6, 29], and applied to formal analysis of voting protocols [28]. Here, we build on a different modeling tradition, namely the framework of multi-agent systems. This modeling approach was previously used only in [22], where a preliminary verification of the Selene voting protocol was conducted. Moreover, to the best of our knowledge, the idea of measuring the security level by the complexity of strategies needed to preserve a given security requirement is entirely new.
Fig. 1.
Voter model
Other (somewhat) related works include socio-technical modeling of attacks with timed automata [15] and especially game-theoretic analysis of voting procedures [9, 14, 2, 23]. Also, strategies for human users to obtain simple security requirements were investigated in [3]. Finally, specification of coercion-resistance and receipt-freeness in logics of strategic ability was attempted in [36].
2 Methodology

The main goal of this paper is to propose a framework for analyzing the security and usability of voting protocols, based on how easy it is for the participants to use the functionality of the protocol and avoid a breach of security. Dually, we can also look at how difficult it is for the attacker to compromise the system. In this section we explain the methodology.
The first step is to divide the description of the protocol into loosely coupled components, called agents. The partition is often straightforward: in our case, it includes the voter, the election infrastructure, the teller, etc.

For each agent we define its local model, which consists of locations (i.e., the local states of the agent) and labeled edges between locations (i.e., local transitions). A transition corresponds to an action performed by the agent. An example model of the voter can be seen in Figure 1. When the voter has scanned her ballot and is in the state scanning, she can perform action enter_vote, thus moving to the state voted. This model, as well as the others, has been created using the modeling interface of the Uppaal model checker [4]. The locations in Uppaal are graphically represented as circles, with initial locations marked by a double circle. The edges are annotated by colored labels: guards (green), synchronizations (teal), and updates (blue). The syntax of expressions is like that of C/C++. A guard enables a transition if and only if the guard condition evaluates to true. Synchronizations allow processes to synchronize over a common channel. Update expressions are evaluated when the transition is taken.

The global model of the whole system consists of a set of concurrent processes, i.e., the local models of the agents. The combination of the local models unfolds into a global model, where each global state represents a possible configuration of the local states of the agents.
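The unfolding of local models into a global model can be illustrated with a small sketch. This is a simplified illustration, not Uppaal's actual semantics: we assume plain interleaving of local transitions and ignore synchronization channels and variable updates; the location and action names loosely follow Figure 1.

```python
# Sketch: composing local agent models into a global model by interleaving.
# Simplifying assumption: no synchronization channels, no variable updates.

# A local model maps each location to a list of (action, next_location) edges.
voter = {
    "start":      [("check_auth", "printing")],
    "printing":   [("print_ballot", "has_ballot")],
    "has_ballot": [("scan_ballot", "scanning")],
    "scanning":   [("enter_vote", "voted")],
}
printer = {
    "idle":  [("print_request", "print")],
    "print": [("return", "idle")],
}

def global_successors(state, agents):
    """A global state is a tuple of local locations, one per agent.
    Under interleaving semantics, one agent moves at a time."""
    successors = []
    for i, model in enumerate(agents):
        for action, target in model.get(state[i], []):
            nxt = list(state)
            nxt[i] = target
            successors.append((action, tuple(nxt)))
    return successors

# From the initial configuration, either the voter authenticates
# or the printer receives a print request.
print(global_successors(("start", "idle"), [voter, printer]))
```

The reachable global state space is then obtained by iterating `global_successors` from the initial configuration; its size grows with the product of the local state spaces, which is why the later analysis keeps the local models small.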
Many relevant properties of multi-agent systems refer to the strategic abilities of agents and their groups. For example, voter-verifiability can be understood as the ability of the voter to check if her vote was registered and tallied correctly. Similarly, receipt-freeness can be understood as the inability of the coercer, typically with help from the voter, to obtain evidence of how the voter has voted [36]. Logics of strategic reasoning, such as ATL and Strategy Logic, provide neat languages to express properties of agents' behavior and its dynamics, driven by individual and collective goals of the agents [1, 11, 30]. For example, the ATL formula ⟨⟨cust⟩⟩F ticket may be used to express that the customer cust can ensure that he will eventually obtain a ticket, regardless of the actions of the other agents. Semantically, the specification holds if cust has a strategy whose every execution path satisfies ticket at some point in the future. Strategies in a multi-agent system are understood as conditional plans, and play a central role in reasoning about purposeful agents [1, 35]. Formally, strategies are defined as functions from states to actions. However, real-life processes often have millions or even billions of possible states, which allows for terribly complicated strategies, and humans are notoriously bad at handling combinatorially complex objects.

To better model the way human agents strategize, the authors of [24, 25] proposed a more human-friendly representation of strategies, based on lists of condition-action rules. The conditions are given by Boolean formulas. Moreover, it was postulated that only those strategies should be considered whose complexity does not exceed a given bound. This is consistent with classical approaches to commonsense reasoning [16] and planning [20], as well as empirical results on how humans learn and use concepts [8, 19].
Natural strategies.
Let B(Prop_a) be the set of Boolean formulas over atomic propositions Prop_a observable by agent a. In our case, Prop_a consists of all the references to the local variables of agent a, as well as the global variables in the model. We represent natural strategies of agent a by lists of guarded actions, i.e., sequences of pairs (φ_i, α_i) such that: (1) φ_i ∈ B(Prop_a), and (2) α_i is an action available to agent a in every state where φ_i holds. Moreover, we assume that the last pair on the list is (⊤, α) for some action α, i.e., the last rule is guarded by a condition that is always satisfied. A collective natural strategy for a group of agents A = {a_1, ..., a_|A|} is a tuple of individual natural strategies s_A = (s_{a_1}, ..., s_{a_|A|}). The set of such strategies is denoted by Σ_A.

The "outcome" function out(q, s_A) returns the set of all paths (i.e., all maximal traces) that can occur when coalition A executes strategy s_A from state q onward, and the agents outside A are free to act in an arbitrary way.

Complexity of strategies.
We will use the following complexity metric for strategies: compl(s_A) = Σ_{(φ,α) ∈ s_A} |φ|, with |φ| being the number of symbols in φ, without parentheses. That is, compl(s_A) simply counts the total length of the guards in s_A. Intuitively, the complexity of a strategy is understood as its level of sophistication. It corresponds to the mental effort needed to come up with the strategy, memorize it, and execute it.

To reason about natural strategic ability, the logic NatATL was introduced in [24, 25], with the following syntax:
ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | ⟨⟨A⟩⟩^{≤k} Xϕ | ⟨⟨A⟩⟩^{≤k} Fϕ | ⟨⟨A⟩⟩^{≤k} Gϕ | ⟨⟨A⟩⟩^{≤k} ϕ U ϕ,
where A is a group of agents and k ∈ N is a complexity bound. Intuitively, ⟨⟨A⟩⟩^{≤k} γ reads as "coalition A has a collective strategy of size less than or equal to k to enforce the property γ." The formulas of NatATL make use of the classical temporal operators: "X" ("in the next state"), "G" ("always from now on"), "F" ("now or sometime in the future"), and "U" (strong "until"). For example, the formula ⟨⟨cust⟩⟩^{≤10} F ticket expresses that the customer can obtain a ticket by a strategy of complexity at most 10. This seems more appropriate as a functionality requirement than to require the existence of an arbitrary function from states to actions. The path quantifier "for all paths" can be defined as Aγ ≡ ⟨⟨∅⟩⟩^{≤k} γ (for any bound k).

We will use NatATL to specify requirements on the voting system. For example, voter-verifiability captures the ability of the voter to verify her vote after the election. In our case, this can be represented by the formula ⟨⟨voter⟩⟩^{≤k} F(check4_ok ∨ error): the voter has a strategy of size at most k to either successfully perform check4 or report a problem. A weaker variant only requires the check to terminate: ⟨⟨voter⟩⟩^{≤k} F(check4_ok ∨ check4_fail). Moreover, we can use the formula AG(check4_fail → ⟨⟨voter⟩⟩^{≤k} F error) to capture dispute resolution.

The conceptual structure of receipt-freeness is different.
In that case, we want to say that the voter has no way of proving how she has voted, i.e., that the coercer (or a potential vote-buyer) has no strategy that allows him to learn the value of the vote. Crucially, this refers to the knowledge of the coercer. To capture the requirement, we extend NatATL with knowledge operators K_a, with K_a ϕ expressing that agent a knows that ϕ holds. For instance, K_coerc voted_i says that the coercer knows that the voter has voted for candidate i. Then, receipt-freeness can be formalized as:
⋀_{i ∈ Cand} ¬⟨⟨coerc, voter⟩⟩^{≤k} G(end → (K_coerc voted_i ∨ K_coerc ¬voted_i)).
That means that the coercer and the voter together have no strategy of complexity at most k to learn, after the election is finished, whether the voter has voted for i or not. Note that this is only one of the possible formalizations of the requirement. For example, one may argue that, to violate receipt-freeness, it suffices that the coercer can detect whenever the voter has not obeyed; he does not have to learn the exact value of her vote. This can be captured by the following formula:
⋀_{i ∈ Cand} ¬⟨⟨coerc, voter⟩⟩^{≤k} G((end ∧ ¬voted_i) → K_coerc ¬voted_i).

The focus of this work is on modeling and specification; the formal analysis is done mainly by hand. However, having the models specified in Uppaal suggests that we can also benefit from its model checking functionality. Unfortunately, the requirement specification language of Uppaal is very limited, and allows for neither strategic operators nor knowledge modalities. Still, we can use it to verify concrete strategies if we carefully modify the input formula and the model. We will show how to do it in Section 6.
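Before moving on, the complexity metric compl(s_A) is easy to operationalize. The sketch below is an illustration under one assumption: guards are represented as flat lists of symbols (atoms and connectives), which matches the counting convention above, where parentheses are ignored.

```python
# Sketch of the complexity metric compl(s) for natural strategies.
# Assumption: a guard is a list of symbols (atoms and Boolean connectives);
# parentheses are not represented, so they are not counted.

TOP = ["T"]  # the always-true guard, of length 1

# A natural strategy: an ordered list of (guard, action) pairs,
# ending with the T-guarded rule (fragment inspired by Section 5).
strategy = [
    (["has_ballot"], "scan_ballot"),
    (["scanning"], "enter_vote"),
    (["check2_ok", "or", "check2_fail", "or", "out"], "move_next"),
    (TOP, "*"),
]

def compl(strategy):
    """Total length of all guards: the sum of |phi| over the rules."""
    return sum(len(guard) for guard, _action in strategy)

print(compl(strategy))  # 1 + 1 + 5 + 1 = 8
```

The third guard contributes 5 (three atoms plus two disjunctions), exactly as in the symbol-counting arguments used for Natural Strategies 1-4 below.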
3 The vVote Protocol

Secure and verifiable voting is becoming more and more important for democracy to function correctly. In this paper, we analyze the vVote implementation of Prêt à Voter, which was used for remote voting and voting of handicapped persons in the Victorian elections in November 2014 [13]. The main idea of the Prêt à Voter protocol is to encode the vote using a randomized candidate list. In this protocol the ballot consists of two parts: the randomized order of candidates (left part) and the list of empty checkboxes along with the number encoding the order of the candidates (right part). The voter casts her vote in the usual way, by placing a cross in the right-hand column against the candidate of her choice. After that she tears the ballot in two parts, destroys the left part, casts the right one, and takes a copy of it as her receipt. After the election her vote appears on the Web Bulletin Board as the pair of the encoding number and the marked box, which can be compared with the receipt for verification. We look at the whole process, from the voter entering the polling station, to the verification of her vote on the public Web Bulletin Board (WBB).

After entering the polling station, the Poll Worker (PW) authenticates the voter (using the method prescribed by the appropriate regulations), and sends a print request to the Print On Demand (PON) device, specifying the district/region of the voter. If the authentication is valid (state printing), then the PON retrieves and prints an appropriate ballot for the voter, including a Serial Number (SN) and the district, with a signature from the Private Web Bulletin Board (PWBB). The PWBB is a robust secure database which receives messages, performs basic validity checks, and returns signatures. After that, the voter may choose to check and confirm the ballot. This involves demanding a proof that the ballot is properly formed, i.e., that the permuted candidate list corresponds correctly to the cipher-texts on the public WBB for that serial number. The
Fig. 2.
Voter refinement: phase check4
WBB is an authenticated public broadcast channel with memory. If the ballot has a confirmation check, the voter returns to the printing step for a new ballot (transition from the check state back to printing).

Having obtained and possibly checked her ballot (state has_ballot), the voter can scan it by showing the ballot bar code to the Electronic Ballot Marker (EBM). Then, she enters her vote (state scanning) via the EBM interface. The EBM prints the voter's receipt on a separate sheet, with the following information: (i) the electoral district, (ii) the Serial Number, (iii) the voter's vote, permuted appropriately to match the Prêt à Voter ballot, and (iv) a QR code with this data and the PWBB signature.

Further, the voter must check the printed vote against the printed candidate list. In particular, she checks that the district is correct and the Serial Number matches the one on the ballot form. If all is well, she can optionally check the PWBB signature, which covers only the data visible to the voter. Note that, if either check fails, the voter can raise an error.
4 Models

In this section we present models of a simplified version of vVote, focusing on the steps that are important from the voter's perspective. We use Uppaal as the modeling tool because of its flexible modeling language and user-friendly GUI.
The model already presented in Figure 1 captures the voter's actions from entering the polling station to casting her vote, going back home, and verifying her receipt on the web bulletin board. As shown in the model, some actions, i.e., the additional checks, are optional for the voter. Furthermore, to simulate human behavior we added some actions not described in the protocol itself. For example, the voter can skip even obligatory steps, like check2.
Fig. 3.
Serial number check
Fig. 4.
Preferences order check

This is especially important, as skipping a check may lead to the error state. The state represents communication with the election authority, signaling that the voter could not cast her vote or a machine malfunction was detected.
The model shown in Figure 1 is relatively abstract. For example, each check phase is represented there by just a few transitions; below, we refine the most interesting phases into more detailed sub-models.

Check4 phase.
Recall that this is the last phase in the protocol, and it is optional. Here, the voter can check that the printed receipt matches her intended vote on the WBB. This includes checking that the serial numbers match (action check_serial), and that the printed preference order matches the one displayed on the WBB (action check_preferences). If both steps succeed, the voter reaches state check4_ok. The whole model for this phase is presented in Figure 2. Other check phases can be refined in a similar way.

Serial number phase.
In some cases the model shown in Figure 2 may still be too general. Depending on the length of the serial number, the comparison can be more or less difficult: comparing two alphanumeric sequences of length 2 is easier than comparing such sequences of length 10. To express this, we split the step into atomic actions: check_serial1(i) for checking the i-th symbol on the WBB, and check_serial2(i) for checking the i-th symbol on the receipt. The resulting model is shown in Figure 3, where n is the length of the serial number.

Preferences order phase.
Similarly to comparing the two serial numbers, verifying the printed preferences can also be troublesome for the voter. In order to make sure that her receipt matches the entry on the WBB, the voter must check each number showing her preference. Actions check_number1(j) and check_number2(j) mean checking the number on the WBB and on the receipt, respectively. This is shown in the model in Figure 4, where m is the number of candidates on the ballot.

Fig. 5. Public WBB

Fig. 6. Private WBB

Fig. 7. Cancel station

Fig. 8. Print-on-demand printer

Fig. 9. Electronic Ballot Marker (EBM)
The voter is not the only entity taking part in the election procedure. The election infrastructure and the electronic devices associated with it constitute a significant part of the procedure. Since there are several components involved in the voting process, we model each component as a separate agent. The models of the Public WBB, the Private WBB, the cancel station, the print-on-demand printer, and the EBM are shown in Figures 5-9.
To model the coercer, we first need to determine his exact capabilities. Is he able to interact with the voter, or only with the system? Should he have full control over the network, like the Dolev-Yao attacker, or do we want the agent to represent implicit coercion, where relatives or subordinates are forced to vote for a specified candidate? There are many possibilities, and we have concluded that a single model of the Coercer agent is not enough. Because of that, and due to lack of space, we omit the details and only describe the possible actions of the coercer:
• Coerce(v, ca): the coercer coerces voter v to vote for his candidate ca;
• ModifyBallot(v, ca): the coercer modifies the ballot of v by setting ca;
• RequestVote(v): the coercer requests a vote from v;
• Punish(v): the coercer punishes v;
• Infect: the coercer infects the voting machine with malicious code;
• Listen(v): the coercer listens to the vote of v from the voting machine;
• Replace(v, ca): the coercer replaces the vote of v with ca.
Some of the actions may depend on each other. For example, the Listen and Replace actions should be executed only after the Infect action has succeeded, as the coercer needs some kind of access to the voting machine.
5 Strategies and Their Complexity

There are many possible objectives for the participants of a voting procedure. A voter's goal could be to just cast her vote; another one could be to make sure that her vote was correctly counted; and yet another, to verify the election results. The same goes for the coercer: he may just want to make his family vote the "correct" way, or to change the outcome of the election. In order to define different objectives, we can use formulas of NatATL and look for appropriate natural strategies, as described in Section 2. More precisely, we can fix a subset of the participants and their objective with a formula of NatATL, find the smallest strategy that achieves the objective, and compute its size. The size of the strategy indicates how hard it is to make sure that the objective is achieved.

An example of a specification that the voter may want to achieve is the verifiability of her vote. Given the model in Figure 1, we can use the formula ⟨⟨v⟩⟩^{≤k} F check4_ok to check whether the voter has a natural strategy of size less than or equal to k to verify that her receipt shows the same information as displayed on the public WBB. If we only want to check whether the voter has a natural strategy to verify her receipt, i.e., we also want to consider the case in which the voter's receipt and the information on the public WBB differ, we can consider the formula ⟨⟨v⟩⟩^{≤k} F(check4_ok ∨ check4_fail), as discussed in Section 2.3.

Note that it is essential to fix the granularity level of the modeling right. When shifting the level of abstraction, we obtain significantly different "measurements" of strategic complexity. This is why we proposed several variants of the voter model in Section 4. In this section, we will show how it affects the outcome of the analysis. In the following, we take another look at the previously defined models and list possible strategies for the participants.

5.1 Strategies for voter-verifiability

In this section we focus on natural strategies for voter-verifiability.
Given the model in Figure 1, we can analyze an example of a natural strategy that can be used by the voter to reach the end of the voting procedure, i.e., to satisfy the NatATL formula ϕ = ⟨⟨v⟩⟩^{≤k} F end. Clearly, ϕ specifies that the voter has a natural strategy of size less than or equal to k (captured by ⟨⟨v⟩⟩^{≤k}) to reach sooner or later (captured by the eventually operator F) the end of the procedure (i.e., the state labeled with atom end).

Natural Strategy 1
A strategy for the voter is:
1. has_ballot ▷ scan_ballot
2. scanning ▷ enter_vote
3. voted ▷ check2
4. check2_ok ∨ check2_fail ∨ out ▷ move_next
5. vote_ok ▷ shred_ballot
6. shred ▷ leave
7. check4 ▷ check4
8. check4_ok ∨ check4_fail ▷ finish
9. ⊤ ▷ ⋆

Recall that the above is an ordered sequence of guarded commands. The first condition (guard) that evaluates to true determines the action of the voter. Thus, if the voter has the ballot and she has not scanned it yet (proposition has_ballot), she scans the ballot. If has_ballot is false and scanning is true, then she enters her vote, and so on. If all the preconditions except ⊤ are false, then she executes an arbitrary available action (represented by the wildcard ⋆). For example, the voter will do print_ballot at the state printing, where she needs to wait while the Poll Worker identifies her and generates a new ballot.

In Natural Strategy 1, we have 9 guarded commands, in which command (4) costs 5, since its condition contains five symbols (three atoms plus two disjunctions), command (8) costs 3, since its condition contains three symbols (two atoms plus a disjunction), while the other guarded commands cost 1 each. So the total complexity is 7 · 1 + 5 + 3 = 15, and ϕ is true with any k of 15 or more. Further, by Natural Strategy 1, starting from the state has_ballot, the voter needs 9 steps to reach the state end.

Note that Natural Strategy 1 can also be used to demonstrate that the formula ψ = ⟨⟨v⟩⟩^{≤k} F(check4_ok ∨ check4_fail) holds. In that case, we can reduce the size of the strategy by removing guarded command (8). Thus, ψ is satisfied even for k ≥ 12.
Next, suppose that the voter also wants to perform the optional checks check1 and check3, i.e., we want to satisfy the formula ϕ = ⟨⟨v⟩⟩^{≤k} F(checked1 ∧ checked3 ∧ end). In particular, ϕ checks whether there exists a natural strategy for the voter such that sooner or later she performs check1 and check3, and ends the whole voting process. Note that, apart from the standard propositions like check1, we also add their persistent versions like checked1, i.e., once it becomes true, it remains true forever.
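The execution rule behind these listings (the first guard that holds determines the action) can be sketched as follows. Two simplifying assumptions of this illustration: a state is abstracted as the set of propositions true in it, and a disjunctive guard is written as a list of atoms.

```python
# Sketch: executing a natural strategy.
# Assumptions: a state is the set of propositions currently true;
# a guard is a list of atoms read as a disjunction, with "T" always true.

def holds(guard, state):
    return guard == ["T"] or any(atom in state for atom in guard)

def choose_action(strategy, state):
    """Return the action of the first rule whose guard holds."""
    for guard, action in strategy:
        if holds(guard, state):
            return action

# A fragment of Natural Strategy 1:
ns1 = [
    (["has_ballot"], "scan_ballot"),
    (["scanning"], "enter_vote"),
    (["voted"], "check2"),
    (["check2_ok", "check2_fail", "out"], "move_next"),
    (["T"], "*"),  # wildcard: any available action
]

print(choose_action(ns1, {"scanning"}))     # enter_vote
print(choose_action(ns1, {"check2_fail"}))  # move_next
print(choose_action(ns1, {"printing"}))     # * (the wildcard rule fires)
```

Because the list is ordered, a voter in a state satisfying several guards still acts deterministically, which is what makes such strategies easy to memorize and execute.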
Natural Strategy 2
A strategy for the voter that considers the optional phases check1 and check3 is:
1. has_ballot ∧ counter == 0 ▷ check_ballot
2. has_ballot ▷ scan_ballot
3. scanning ▷ enter_vote
4. voted ▷ check2
5. check2_ok ∨ check2_fail ▷ check3
6. check1 ∨ check3 ∨ out ▷ move_next
7. vote_ok ▷ shred_ballot
8. shred ▷ leave
9. check4 ▷ check4
10. check4_ok ∨ check4_fail ▷ finish
11. ⊤ ▷ ⋆

In Natural Strategy 2, we introduce the verification of check1 and check3. Commands (1), (5), and (10) cost 3 each, command (6) costs 5, and the other guarded commands cost 1 each, so the total complexity is 7 · 1 + 3 · 3 + 5 = 21. Hence, ϕ is true for any k ≥ 21. Further, by Natural Strategy 2, starting from the state has_ballot, the voter needs 13 steps to reach the state end.

An important aspect to evaluate concerns the detailed analysis of check4. Some interesting questions could be: how does the voter perform check4? How does she compare the printed preferences with the information on the public WBB? These questions open up several scenarios, both from a strategic point of view and in terms of the model to be used. From a strategic point of view, we can consider a refinement of Natural Strategy 1, in which the action check4 is split into its atomic sub-steps.
Natural Strategy 3
A strategy for the voter that refines check4 is:
1. has_ballot ▷ scan_ballot
2. scanning ▷ enter_vote
3. voted ▷ check2
4. check2_ok ∨ check2_fail ∨ out ▷ move_next
5. vote_ok ▷ shred_ballot
6. shred ▷ leave
7. check4 ▷ check_serial
8. check4_1 ▷ check_preferences
9. check4_2 ▷ check4
10. check4_ok ∨ check4_fail ▷ finish
11. ⊤ ▷ ⋆

In Natural Strategy 3, we have 11 guarded commands in which all the conditions consist of a single atom, except (4), which is a disjunction of three atoms, and (10), which is a disjunction of two atoms. So, the total complexity is 9 · 1 + 5 + 3 = 17.

To verify the refined check4, we need to provide a formula that refers to atoms check4, check4_1, and check4_2. To do this in NatATL, we use the formula ϕ = ⟨⟨v⟩⟩^{≤k} F(checked4 ∧ checked4_1 ∧ checked4_2). Note that ϕ is true for any k ≥ 17; one can use Natural Strategy 3 to demonstrate that.

In addition, to increase the level of detail, one can consider that the voter checks the preferences and the serial number symbol by symbol, in an ordered fashion. So, given the models in Figures 3 and 4, we can consider a formula that checks whether the voter has a strategy with which, sooner or later, she enters the check4 phase, checks the serial number on the WBB and on the receipt, and then checks the preference order on the WBB and on the receipt:
ϕ = ⟨⟨v⟩⟩^{≤k} F(checked4 ∧ wbb_checked_sn ∧ receipt_checked_sn ∧ checked4_1 ∧ wbb_checked_pr ∧ receipt_checked_pr ∧ checked4_2).
So, we can define a natural strategy that satisfies ϕ as follows.

Natural Strategy 4
A strategy for the voter that still refines check is:1. has ballot scan ballot scanning enter vote voted check vote ok shred ballot shred leave out ∨ check2 ok ∨ check2 fail ∨ receipt check sn ∨ receipt check pr move next check4 check serial wbb check sn check serial receipt check sn ∧ i == n end f irst check4 1 check number wbb check pr check number receipt check pr ∧ j == m end second check4 2 check check4 ok ∨ check4 fail f inish ⊤ ⋆ To conclude, the above natural strategy has 15 guarded commands in whichthe conditions in (9), (12), and (14) are conjunctions of two atoms, the conditionin (6) is a disjunction of five atoms, and all the other conditions are defined witha single atom. Therefore, the complexity of Natural Strategy 4 is 1 · · · ϕ is true with k ≥ So far, we have measured the effort of the voter by how complex strategies shemust execute. This helps to estimate the mental difficulty related, e.g., to voter-verifiability. However, this is not the only source of effort that the voter has toinvest. Verifying one’s vote might require money (for example, if the voter needsto buy special software or a dedicated device), computational power, and, mostof all, time. Here, we briefly concentrate on the latter factor.For a voter’s task expressed by the NatATL formula hh v ii ≤ k F ϕ and a naturalstrategy s v for the voter, we can estimate the time spent on the task by thenumber of transitions necessary to reach ϕ . That is, we take all the paths in out ( q, s v ), where q is the initial state of the procedure. On each path, ϕ mustoccur at some point. We look for the path where the first occurrence of ϕ happens latest , and count the number of steps to ϕ on that path. We will demonstratehow it works on the last two strategies from Section 5.1.For Natural Strategy 3, starting from the starting state, the voter needs of9 + 2 = 11 steps to achieve check4 ∧ wbb check sn ∧ receipt check sn ∧ check4 1 ∧ wbb check pr ∧ receipt check pr ∧ check4 2 . 
More precisely, 9 steps are needed to achieve check4, plus one step for each of the two remaining checks. For Natural Strategy 4, starting from the start state, the voter needs 9 + ((2·n) + 1) + ((2·m) + 1) steps to achieve her goals, where n and m are the sizes of the serial number and the list of preferences, respectively. In particular, 9 steps are needed to achieve check4, ((2·n) + 1) steps to achieve check4_1, and ((2·m) + 1) steps to achieve check4_2.

For the coercer, we can start the analysis by considering the basic setting in which he requests the vote; if the voter does not show him the ballot, or the vote differs from the one he imposed, then he punishes the voter. This reasoning is captured by the following natural strategy.

1. ¬coerced_v ▷ Coerce(v, ca)
2. coerced_v ∧ ¬requested_v ▷ RequestVote(v)
3. coerced_v ∧ requested_v ∧ ¬punished_v ∧ (ca_v ≠ ca ∨ not_show_v) ▷ Punish(v)

The total complexity of the above natural strategy is 16, since (1) has 2 symbols (atom + negation), (2) has 4 symbols (two atoms + conjunction + negation), and (3) has 10 symbols (five atoms + three conjunctions + negation + disjunction).

Another intervention of the coercer concerns the possibility of infecting one or more machines and replacing the vote of a voter.

1. ¬infected ▷ Infect
2. infected ∧ ¬replaced_v ▷ Replace(v, ca)

In this case the complexity is 6, since we have three atoms, two negations, and a conjunction.

We can extend the above by considering that the coercer can infect the machine and then coerce the voter. If the coercer sees a different vote on the machine, he can punish the voter. Note that this setting is interesting in the case where the coercer has infected a machine that only displays information, without having the power to modify the vote.

1. ¬infected ▷ Infect
2. ¬coerced_v ▷ Coerce(v, ca)
3. infected ∧ listen_v ≠ ca ▷ Punish(v)

In the above, the complexity is 7, where (1) and (2) have complexity 2 each, and (3) has complexity 3.

In this section we explain how the model checking functionality of
Uppaal can be used for an automated verification of the strategies presented in Section 5. To verify selected formulas and the corresponding natural strategies, we need to modify several things: (i) the formula, (ii) the natural strategy, and finally (iii) the model. We explain the required modifications step by step.
Formula.
To specify the required properties of the protocol, we have used a variant of strategic logic, as it is one of the logics best suited to specifying properties of agents in an intuitive way. However, Uppaal does not support NatATL, so the formula needs to be modified accordingly. In the formula, we replace the strategic operator ⟨⟨A⟩⟩≤k with the universal path quantifier A. For example, we can consider the formula ϕ used in Section 5. In this context, we produce the CTL formula ϕ′ = AF end.

Natural Strategy.
The natural strategy needs to be modified so that all the guard conditions are mutually exclusive. To this end, we go through the preconditions from top to bottom, and refine them by adding the negated preconditions from all the previous guarded commands. For example, Natural Strategy 1 is modified as follows:

1. has_ballot ▷ scan_ballot
2. ¬has_ballot ∧ scanning ▷ enter_vote
3. ¬has_ballot ∧ ¬scanning ∧ voted ▷ check
4. ¬has_ballot ∧ ¬scanning ∧ ¬voted ∧ (check2_ok ∨ check2_fail ∨ out) ▷ move_next
5. ¬has_ballot ∧ ¬scanning ∧ ¬voted ∧ ¬(check2_ok ∨ check2_fail ∨ out) ∧ vote_ok ▷ shred_ballot
6. ¬has_ballot ∧ ¬scanning ∧ ¬voted ∧ ¬(check2_ok ∨ check2_fail ∨ out) ∧ ¬vote_ok ∧ shred ▷ leave
7. ¬has_ballot ∧ ¬scanning ∧ ¬voted ∧ ¬(check2_ok ∨ check2_fail ∨ out) ∧ ¬vote_ok ∧ ¬shred ∧ (check4_ok ∨ check4_fail) ▷ finish
8. ⊤ ▷ ⋆

Model.
To verify the selected strategy, we fix it in the model by adding the preconditions of the guards to the preconditions of the corresponding local transitions in the voter's model. Thus, we effectively remove all transitions that are not in accordance with the strategy. This way, only the paths that are consistent with the strategy will be considered by the model checker.
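A minimal sketch of this pruning step, under a toy encoding of the voter's transition relation (states as sets of true atoms, guards as sets of atoms) — this is our own illustration, not the actual Uppaal machinery:

```python
# Toy illustration of fixing a strategy in the model: a transition survives
# only if its action is the one prescribed by the (mutually exclusive)
# guarded commands in the state where the transition originates.
# The set-based encoding of states and guards is an assumption of ours.

def chosen_action(strategy, state):
    """Return the action of the first guarded command whose guard holds."""
    for guard, action in strategy:
        if guard <= state:             # every atom of the guard is true in the state
            return action
    return None

def prune(transitions, strategy):
    """Drop every transition that is not consistent with the strategy."""
    return [(s, a, t) for (s, a, t) in transitions
            if chosen_action(strategy, s) == a]

strategy = [(frozenset({"has_ballot"}), "scan_ballot"),
            (frozenset({"scanning"}), "enter_vote")]
transitions = [
    (frozenset({"has_ballot"}), "scan_ballot", frozenset({"scanning"})),
    (frozenset({"has_ballot"}), "leave", frozenset()),        # not prescribed: removed
    (frozenset({"scanning"}), "enter_vote", frozenset({"voted"})),
]
assert len(prune(transitions, strategy)) == 2
```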
Levels of granularity.
As we showed in Section 4, it is often important to have variants of the model for different levels of abstraction. To handle those in Uppaal, we have used synchronization edges. For example, to obtain a more detailed version of the check4 phase, we added synchronization edges in the voter model (Figure 1) and in the check4 model. This way, Uppaal will proceed to the more detailed model and come back after getting to its final state.
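The effect of such synchronization edges can be pictured with a small sketch: an abstract edge is replaced by a detour through the detailed sub-model, entered at its initial state and left from its final state. The edge-list encoding of automata and all state names below are our own illustrative assumptions, not Uppaal's input format.

```python
# Sketch of refining an abstract edge by a detailed sub-model: control is
# handed over to the sub-model and comes back after its final state,
# mimicking the role of the synchronization edges described above.
# The edge-list encoding and the state names are assumptions of ours.

def refine(edges, abstract_edge, sub_edges, sub_init, sub_final):
    """Replace `abstract_edge` = (s, t) by a detour through the sub-model."""
    s, t = abstract_edge
    refined = [e for e in edges if e != abstract_edge]
    refined.append((s, sub_init))      # hand control over to the sub-model
    refined.extend(sub_edges)          # the detailed behaviour
    refined.append((sub_final, t))     # come back after its final state
    return refined

# Coarse voter model with an abstract check step, and a two-state detail:
coarse = [("voted", "check4_done"), ("check4_done", "end")]
detail = [("cmp_serial", "cmp_prefs")]
flat = refine(coarse, ("voted", "check4_done"), detail, "cmp_serial", "cmp_prefs")
assert ("voted", "cmp_serial") in flat and ("cmp_prefs", "check4_done") in flat
```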
Running the verification.
We have modified the models, formulas, and strategies from Sections 4 and 5 following the above steps. Then, we used Uppaal to verify that Natural Strategies 1–4 indeed enforce the prescribed properties. The tool reported that each formula holds in the corresponding model. The execution time was always at most a few seconds.

In the analysis of a voting protocol, it is important to make sure that the voter has a strategy to use the functionality of the protocol. That is, she has a strategy to fill in and cast her ballot, verify her vote on the bulletin board, etc. However, this is not enough: it is also essential to see how hard that strategy is. In this paper, we propose a methodology that can be used to this end. One can assume a natural representation of the voter's strategy, and measure its complexity as the size of the representation.

We mainly focus on one aspect of the voter's effort, namely the mental effort needed to produce, memorize, and execute the required actions. We also indicate that there are other important factors, like the time needed to execute the strategy or its financial cost. This may lead to tradeoffs where optimizing the costs with respect to one resource leads to higher costs in terms of another resource. Moreover, resources can vary in their importance for different agents. For example, time may be more important for the voter, while money may be more relevant when we analyze the strategy of the coercer. We leave a closer study of such tradeoffs for future work.

Another interesting extension would be to further analyze the parts of the protocol where the voter compares two numbers, tables, etc. As the voter is a human being, it is natural for her to make a mistake. Furthermore, the probability of making a mistake at each step can be added to the model to analyze the overall probability of successfully comparing two data sets by the voter.

Finally, we point out that the methodology proposed in this paper can be applied outside the e-voting domain.
For example, one can use it to study the usability of policies for social distancing in the current epidemic situation, and whether they are likely to achieve the expected results.
References
1. R. Alur, T. A. Henzinger, and O. Kupferman. Alternating-time Temporal Logic. Journal of the ACM, 49:672–713, 2002.
2. David A. Basin, Hans Gersbach, Akaki Mamageishvili, Lara Schmid, and Oriol Tejada. Election security and economics: It's all about Eve. In Proceedings of E-Vote-ID, pages 1–20, 2017.
3. David A. Basin, Sasa Radomirovic, and Lara Schmid. Modeling human errors in security protocols. In Computer Security Foundations Symposium, CSF, pages 325–340. IEEE Computer Society, 2016.
4. G. Behrmann, A. David, and K.G. Larsen. A tutorial on Uppaal. In Formal Methods for the Design of Real-Time Systems: SFM-RT, number 3185 in LNCS, pages 200–236. Springer, 2004.
5. Giampaolo Bella, Paul Curzon, Rosario Giustolisi, and Gabriele Lenzini. A socio-technical methodology for the security and privacy analysis of services. In COMPSAC Workshops, pages 401–406. IEEE Computer Society, 2014.
6. Giampaolo Bella, Paul Curzon, and Gabriele Lenzini. Service security and privacy as a socio-technical problem. J. Comput. Secur., 23(5):563–585, 2015.
7. J. Benaloh and D. Tuinstra. Receipt-free secret-ballot elections. In Proceedings of the twenty-sixth annual ACM symposium on Theory of Computing, pages 544–553. ACM, 1994.
8. L. E. Bourne. Knowing and using concepts. Psychol. Rev., 77:546–556, 1970.
9. Ahto Buldas and Triinu Mägi. Practical security analysis of e-voting systems. In Proceedings of IWSEC, volume 4752 of Lecture Notes in Computer Science, pages 320–335. Springer, 2007.
10. Marcelo Carlomagno Carlos, Jean Everson Martina, Geraint Price, and Ricardo Felipe Custódio. A proposed framework for analysing security ceremonies. In SECRYPT, pages 440–445. SciTePress, 2012.
11. K. Chatterjee, T.A. Henzinger, and N. Piterman. Strategy Logic. Information and Computation, 208(6):677–693, 2010.
12. V. Cortier, D. Galindo, R. Küsters, J. Müller, and T. Truderung. SoK: Verifiability notions for e-voting protocols. In IEEE Symposium on Security and Privacy, pages 779–798, 2016.
13. C. Culnane, P.Y.A. Ryan, S.A. Schneider, and V. Teague. vVote: A verifiable voting system. ACM Trans. Inf. Syst. Secur., 18(1):3:1–3:30, 2015.
14. Chris Culnane and Vanessa Teague. Strategies for voter-initiated election audits. In Decision and Game Theory for Security: Proceedings of GameSec, volume 9996 of Lecture Notes in Computer Science, pages 235–247. Springer, 2016.
15. Nicolas David, Alexandre David, René Rydhof Hansen, Kim Guldstrand Larsen, Axel Legay, Mads Chr. Olesen, and Christian W. Probst. Modelling social-technical attacks with timed automata. In Proceedings of International Workshop on Managing Insider Security Threats, MIST, pages 21–28. ACM, 2015.
16. E. Davis and G. Marcus. Commonsense reasoning. Communications of the ACM, 58(9):92–103, 2015.
17. S. Delaune, S. Kremer, and M. Ryan. Coercion-resistance and receipt-freeness in electronic voting. In Computer Security Foundations Workshop, 2006. 19th IEEE, pages 12–pp. IEEE, 2006.
18. R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi. Reasoning about Knowledge. MIT Press, 1995.
19. J. Feldman. Minimization of Boolean complexity in human concept learning. Nature, 407:630–633, 2000.
20. M. Ghallab, D. Nau, and P. Traverso. Automated Planning: Theory and Practice. Morgan Kaufmann, 2004.
21. Jeffrey Hunker and Christian W. Probst. Insiders and insider threats – an overview of definitions and mitigation techniques. J. Wirel. Mob. Networks Ubiquitous Comput. Dependable Appl., 2(1):4–27, 2011.
22. W. Jamroga, M. Knapik, and D. Kurpiewski. Model checking the SELENE e-voting protocol in multi-agent logics. In Proceedings of the 3rd International Joint Conference on Electronic Voting (E-VOTE-ID), volume 11143 of Lecture Notes in Computer Science, pages 100–116. Springer, 2018.
23. W. Jamroga and M. Tabatabaei. Preventing coercion in e-voting: Be open and commit. In Electronic Voting: Proceedings of E-Vote-ID 2016, volume 10141 of Lecture Notes in Computer Science, pages 1–17. Springer, 2017.
24. Wojciech Jamroga, Vadim Malvone, and Aniello Murano. Natural strategic ability. Artificial Intelligence, 277, 2019.
25. Wojciech Jamroga, Vadim Malvone, and Aniello Murano. Natural strategic ability under imperfect information. In Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems AAMAS 2019, pages 962–970. IFAAMAS, 2019.
26. A. Juels, D. Catalano, and M. Jakobsson. Coercion-resistant electronic elections. In Proceedings of the 2005 ACM workshop on Privacy in the electronic society, pages 61–70. ACM, 2005.
27. R. Küsters, T. Truderung, and A. Vogt. A game-based definition of coercion-resistance and its applications. In Proceedings of the 2010 23rd IEEE Computer Security Foundations Symposium, pages 122–136. IEEE Computer Society, 2010.
28. T. Martimiano, E. Dos Santos, M. Olembo, and J.E. Martina. Ceremony analysis meets verifiable voting: Individual verifiability in Helios. In SECURWARE, 2015.
29. Taciane Martimiano and Jean Everson Martina. Threat modelling service security as a security ceremony. In , pages 195–204. IEEE Computer Society, 2016.
30. F. Mogavero, A. Murano, G. Perelli, and M.Y. Vardi. Reasoning about strategies: On the model-checking problem. ACM Transactions on Computational Logic, 15(4):1–42, 2014.
31. Peter Y. A. Ryan, Steve A. Schneider, and Vanessa Teague. End-to-end verifiability in voting systems, from theory to practice. IEEE Security & Privacy, 13(3):59–62, 2015.
32. P.Y.A. Ryan. The computer ate my vote. In Formal Methods: State of the Art and New Directions, pages 147–184. Springer, 2010.
33. F.P. Santos. Dynamics of Reputation and the Self-organization of Cooperation. PhD thesis, University of Lisbon, 2018.
34. F.P. Santos, F.C. Santos, and J.M. Pacheco. Social norm complexity and past reputations in the evolution of cooperation. Nature, 555:242–245, 2018.
35. Y. Shoham and K. Leyton-Brown. Multiagent Systems – Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press, 2009.
36. M. Tabatabaei, W. Jamroga, and Peter Y. A. Ryan. Expressing receipt-freeness and coercion-resistance in logics of strategic ability: Preliminary attempt. In