Egalitarian Judgment Aggregation
Sirin Botan
University of Amsterdam, The Netherlands
[email protected]
Ronald de Haan
University of Amsterdam, The Netherlands
[email protected]
Marija Slavkovik
University of Bergen, Norway
[email protected]
Zoi Terzopoulou
University of Amsterdam, The Netherlands
[email protected]
ABSTRACT
Egalitarian considerations play a central role in many areas of social choice theory. Applications of egalitarian principles range from ensuring everyone gets an equal share of a cake when deciding how to divide it, to guaranteeing balance with respect to gender or ethnicity in committee elections. Yet, the egalitarian approach has received little attention in judgment aggregation—a powerful framework for aggregating logically interconnected issues. We make the first steps towards filling that gap. We introduce axioms capturing two classical interpretations of egalitarianism in judgment aggregation and situate these within the context of existing axioms in the pertinent framework of belief merging. We then explore the relationship between these axioms and several notions of strategyproofness from social choice theory at large. Finally, a novel egalitarian judgment aggregation rule stems from our analysis; we present complexity results concerning both outcome determination and strategic manipulation for that rule.
KEYWORDS
Social Choice Theory, Judgment Aggregation, Egalitarianism, Strategic Manipulation, Computational Complexity
ACM Reference Format:
Sirin Botan, Ronald de Haan, Marija Slavkovik, and Zoi Terzopoulou. 2021. Egalitarian Judgment Aggregation. In Proc. of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021), Online, May 3–7, 2021, IFAAMAS, 14 pages.
1 INTRODUCTION

Judgment aggregation is an area of social choice theory concerned with turning the individual binary judgments of a group of agents over logically related issues into a collective judgment [23]. Being a flexible and widely applicable framework, judgment aggregation provides the foundations for collective decision making settings in various disciplines, like philosophy, economics, legal theory, and artificial intelligence [39]. The purpose of judgment aggregation methods (rules) is to find those collective judgments that best represent the group as a whole. Following the utilitarian approach in social choice, an “ideal” such collective judgment has traditionally been considered the will of the majority. In this paper we challenge this perspective, introducing a more egalitarian point of view.

In economic theory, utilitarian approaches are often contrasted with egalitarian ones [55]. In the context of judgment aggregation,
an egalitarian rule must take into account whether the collective outcome achieves equally distributed satisfaction among agents and ensure that agents enjoy equal consideration. A rapidly growing application domain of egalitarian judgment aggregation (one that also concerns multiagent systems, with practical implications for instance in the construction of self-driving cars) is the aggregation of moral choices [15], where utilitarian approaches do not always offer appropriate solutions [4, 57]. One of the drawbacks of majoritarianism is that a strong enough majority can cancel out the views of a minority, which is questionable on several occasions.

For example, suppose that the president of a student union has secured some budget for the decoration of the union's office and she asks her colleagues for their opinions on which paintings to buy (perhaps imposing some constraints on the combinations of paintings that can be simultaneously selected, due to clashes of style). If the members of the union largely consist of pop-art enthusiasts whom the president tries to satisfy, then a few members with diverging taste will find themselves in an office that they detest; an arguably more viable strategy would be to ensure that—as much as possible—no-one is strongly dissatisfied. But then, consider a similar situation in which a kindergarten teacher needs to decide which toys to complement the existing playground with.
In that case, the teacher's goal is to select toys that equally (dis)satisfy all kids involved, so that no extra tension is created due to envy, which the teacher would have to resolve—if the kids disagree a lot, then the teacher may end up choosing toys that none of them really likes.

In order to formally capture scenarios like the above, this paper introduces two fundamental properties (also known as axioms) of egalitarianism to judgment aggregation, inspired by the theory of justice. The first captures the idea behind the so-called veil of ignorance of Rawls [60], while the second speaks about how happy agents are with the collective outcome relative to each other.

Our axioms closely mirror properties in other areas of social choice theory. In belief merging, egalitarian axioms and merging operators have been studied by Everaere et al. [29]. The nature of their axioms is in line with the interpretation of egalitarianism in this paper, although the two main properties they study are logically weaker than ours, as we further discuss in Section 3.1. In resource allocation, fairness has been interpreted both as maximising the share of the worst off agent [11] as well as eliminating envy between agents [32]. In multiwinner elections, egalitarianism is present in notions of diversity [22] and of proportional representation [2, 20].

Unfortunately, egalitarian considerations often come at a cost. A central concern in many areas of social choice theory, of which judgment aggregation does not constitute an exception, is that agents may have incentives to manipulate, i.e., to misrepresent their judgments aiming for a more preferred outcome [18]. Frequently, it is impossible to simultaneously be fair and avoid strategic manipulation. For both variants of fairness in resource allocation, rules satisfying them are usually susceptible to strategic manipulation [1, 9, 13, 54]. The same type of results have recently been obtained for multiwinner elections [49, 58].
It is not easy to be egalitarian while disincentivising agents from taking advantage of it.

Inspired by notions of manipulation stemming from voting theory, we explore how our egalitarian axioms affect the agents' strategic behaviour within judgment aggregation. Our most important result in this vein is showing that the two properties of egalitarianism defined in this paper clearly differ in terms of strategyproofness.

Our axioms give rise to two concrete egalitarian rules—one that has been previously studied, and one that is new to the literature. For the latter, we are interested in exploring how computationally complex its use is in the worst-case scenario. This kind of question, first addressed by Endriss et al. [28], is regularly asked in the literature of judgment aggregation [5, 25, 51]. As Endriss et al. [26] wrote recently, the problem of determining the collective outcome of a given judgment aggregation rule is “the most fundamental algorithmic challenge in this context”.

The remainder of this paper is organised as follows. Section 2 reviews the basic model of judgment aggregation, while Section 3 introduces our two original axioms of egalitarianism and the rules they induce. Section 4 analyses the relationship between egalitarianism and strategic manipulation in judgment aggregation, and Section 5 focuses on relevant computational aspects: although the general problems of outcome determination and of strategic manipulation are proven to be very difficult, we propose a way to confront them with the tools of
Answer Set Programming [36].
2 THE MODEL

Our framework relies on the standard formula-based model of judgment aggregation [52], but for simplicity we also use notation commonly employed in binary aggregation [38].

Let N denote the (countably infinite) set of all agents that can potentially participate in a judgment aggregation setting. In every specific such setting, a finite set of agents 𝑁 ⊂ N of size 𝑛 ≥ 2 expresses judgments on a set of 𝑚 issues (formulas in propositional logic) Φ = {𝜑1, . . . , 𝜑𝑚}, called the agenda. J(Φ) ⊆ {0, 1}^𝑚 denotes the set of all admissible opinions on Φ. Then, a judgment 𝐽 is a vector in J(Φ), with 1 (respectively 0) in position 𝑘 meaning that the issue 𝜑𝑘 is accepted (rejected). 𝐽̄ is the antipodal judgment of 𝐽: for all 𝜑 ∈ Φ, 𝜑 is accepted in 𝐽̄ if and only if it is rejected in 𝐽.

A profile 𝑱 = (𝐽1, . . . , 𝐽𝑛) ∈ J(Φ)^𝑛 is a vector of individual judgments, one for each agent in a group 𝑁. We write 𝑱′ =−𝑖 𝑱 when the profiles 𝑱 and 𝑱′ are the same, besides the judgment of agent 𝑖. We write 𝑱−𝑖 to denote the profile 𝑱 with agent 𝑖's judgment removed, and (𝑱, 𝐽) ∈ J(Φ)^(𝑛+1) to denote the profile 𝑱 with judgment 𝐽 added. A judgment aggregation rule 𝐹 is a function that maps every possible profile 𝑱 ∈ J(Φ)^𝑛, for every group 𝑁 and agenda Φ, to a nonempty set 𝐹(𝑱) of collective judgments in J(Φ). Note that a judgment aggregation rule is defined over groups and agendas of variable size, and may return several, tied, collective judgments.

The agents that participate in a judgment aggregation scenario will naturally have preferences over the outcome produced by the aggregation rule. First, given an agent 𝑖's truthful judgment 𝐽𝑖, we need to determine when agent 𝑖 would prefer a judgment 𝐽 over a different judgment 𝐽′.
The most prevalent type of such preferences considered in the judgment aggregation literature is that of Hamming distance preferences [6–8, 64]. The Hamming distance between two judgments 𝐽 and 𝐽′ equals the number of issues on which these judgments disagree—concretely, it is defined as 𝐻(𝐽, 𝐽′) = Σ_{𝜑∈Φ} |𝐽(𝜑) − 𝐽′(𝜑)|, where 𝐽(𝜑) denotes the binary value in the position of 𝜑 in 𝐽. For example, 𝐻(110, 011) = 2. Then, the (weak, and analogously strict) preference of agent 𝑖 over judgments is defined by the relation ≽𝑖 (where 𝐽 ≽𝑖 𝐽′ means that 𝑖's utility from 𝐽 is at least as high as that from 𝐽′):

𝐽 ≽𝑖 𝐽′ if and only if 𝐻(𝐽𝑖, 𝐽) ≤ 𝐻(𝐽𝑖, 𝐽′).

But an aggregation rule often outputs more than one judgment, and thus we also need to determine agents' preferences over sets of judgments. We define two requirements guaranteeing that the preferences of the agents over sets of judgments are consistent with their preferences over single judgments. To that end, let ≽̊𝑖 (with strict part ≻̊𝑖) denote agent 𝑖's preferences over sets 𝑋, 𝑌 ⊆ J(Φ). We require that ≽̊𝑖 is related to ≽𝑖 as follows:

• 𝐽 ≽𝑖 𝐽′ if and only if {𝐽} ≽̊𝑖 {𝐽′}, for any 𝐽, 𝐽′ ∈ J(Φ);
• 𝑋 ≻̊𝑖 𝑌 implies that there exist some 𝐽 ∈ 𝑋 and 𝐽′ ∈ 𝑌 such that 𝐽 ≻𝑖 𝐽′ and {𝐽, 𝐽′} ⊈ 𝑋 ∩ 𝑌.

The above conditions hold for almost all well-known preference extensions. For example, they hold for the pessimistic preference (𝑋 ≻pess 𝑌 if and only if there exists 𝐽′ ∈ 𝑌 such that 𝐽 ≻ 𝐽′ for all 𝐽 ∈ 𝑋) and the optimistic preference (𝑋 ≻opt 𝑌 if and only if there exists 𝐽 ∈ 𝑋 such that 𝐽 ≻ 𝐽′ for all 𝐽′ ∈ 𝑌) of Duggan and Schwartz [19], as well as the preference extensions of Gärdenfors [33] and Kelly [43]. The results provided in this paper abstract away from specific preference extensions.

3 EGALITARIAN AXIOMS AND RULES

This section focuses on two axioms of egalitarianism in judgment aggregation. We examine them in relation to each other and to existing properties from belief merging, as well as to the standard majority property defined below. Most of the well-known judgment aggregation rules return the majority opinion, when that opinion is logically consistent [24]. Let 𝑚(𝑱) be the judgment that accepts exactly those issues accepted by a strict majority of agents in 𝑱.
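The basic notions just defined are easy to make concrete. The following is a minimal sketch in Python (our own illustration, with our own function names; judgments are represented as 0/1 tuples):

```python
# A small sketch of the basic ingredients: judgments as 0/1 tuples,
# the Hamming distance H, and the majority judgment m(J) of a profile.

def hamming(j1, j2):
    """Number of issues on which two judgments disagree."""
    return sum(a != b for a, b in zip(j1, j2))

def majority(profile):
    """m(J): accept exactly the issues accepted by a strict majority."""
    n = len(profile)
    return tuple(int(sum(j[k] for j in profile) > n / 2)
                 for k in range(len(profile[0])))

def weakly_prefers(truthful, j, j_prime):
    """Hamming preferences: agent i weakly prefers j over j_prime
    whenever j is weakly closer to i's truthful judgment."""
    return hamming(truthful, j) <= hamming(truthful, j_prime)
```

For instance, hamming((1,1,0), (0,1,1)) evaluates to 2, and majority recovers 𝑚(𝑱) issue by issue; note that the majority judgment need not be admissible.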
A rule 𝐹 is majoritarian when for all profiles 𝑱, 𝑚(𝑱) ∈ J(Φ) implies that 𝐹(𝑱) = {𝑚(𝑱)}.

Our first axiom with an egalitarian flavour is the maximin property, suggesting that we should aim at maximising the utility of those agents that will be worst off in the outcome. Assuming that everyone submits their truthful judgment during the aggregation process, this means that we should try to minimise the distance of the agents that are furthest away from the outcome. Formally:

◮ A rule 𝐹 satisfies the maximin property if for all profiles 𝑱 ∈ J(Φ)^𝑛 and judgments 𝐽 ∈ 𝐹(𝑱), there do not exist a judgment 𝐽′ ∈ J(Φ) and an agent 𝑗 ∈ 𝑁 such that 𝐻(𝐽𝑖, 𝐽′) < 𝐻(𝐽𝑗, 𝐽) for all 𝑖 ∈ 𝑁.

(Various approaches have been taken within the area of social choice theory in order to extend preferences over objects to preferences over sets of objects; see Barberà et al. [3] for a review. A central problem in judgment aggregation concerns the fact that the issue-wise majority is not always logically consistent [52].)
Although the maximin property is quite convincing, there are settings like those motivated in the Introduction where it does not offer sufficient egalitarian guarantees. We thus consider a different property next, which we call the equity property. This axiom requires that the gaps in the agents' satisfaction be minimised. In other words, no two agents should find themselves at very different distances from the collective outcome. Formally:

◮ A rule 𝐹 satisfies the equity property if for all profiles 𝑱 ∈ J(Φ)^𝑛 and judgments 𝐽 ∈ 𝐹(𝑱), there do not exist a judgment 𝐽′ ∈ J(Φ) and agents 𝑖′, 𝑗′ ∈ 𝑁 such that |𝐻(𝐽𝑖, 𝐽′) − 𝐻(𝐽𝑗, 𝐽′)| < |𝐻(𝐽𝑖′, 𝐽) − 𝐻(𝐽𝑗′, 𝐽)| for all 𝑖, 𝑗 ∈ 𝑁.
No rule that satisfies either the maximin or the equity property can be majoritarian. As an illustration, in a profile of only two agents who disagree on some issues, any egalitarian rule will try to reach a compromise, and this compromise will not be affected if any agents holding one of the two initial judgments are added to the profile—in contrast, a majoritarian rule will simply conform to the crowd. Proposition 1 shows that it is also impossible for the maximin property and the equity property to hold simultaneously. Therefore, we have established the logical independence of all three axioms discussed so far: maximin, equity, and majoritarianism.
Proposition 1.
No judgment aggregation rule can satisfy both the maximin property and the equity property.
Proof.
Take an agenda Φ where J(Φ) consists of the four judgments below, and consider the profile 𝑱 = (𝐽1, 𝐽2):

𝐽1: 110000    𝐽2: 001100    𝐽3: 010000    𝐽′: 111111

The relevant Hamming distances are 𝐻(𝐽1, 𝐽3) = 1, 𝐻(𝐽2, 𝐽3) = 3, 𝐻(𝐽1, 𝐽2) = 4, and 𝐻(𝐽1, 𝐽′) = 𝐻(𝐽2, 𝐽′) = 4. Every aggregation rule satisfying the maximin property will return {𝐽3}, as this judgment maximises the utility of the worst off agent—in this case, agent 2. However, every rule satisfying the equity property will return {𝐽′}, as this judgment minimises the difference in utility between the best off and worst off agents. Thus, there is no rule that can satisfy the two properties at the same time. □

From Proposition 1, we also know now that the two properties of egalitarianism generate two disjoint classes of aggregation rules. In particular, in this paper we focus on the maximal rule that meets each property: a rule 𝐹 is the maximal one of a given class if, for every profile 𝑱, the outcomes obtained by any other rule in that class are always outcomes of 𝐹 too. (Popular rules like the median rule [56]—also known as the distance-based rule [59], the Kemeny rule [24], or the prototype rule [53]—are majoritarian in this sense.) The maximal rule satisfying the maximin property is the rule
MaxHam (see, e.g., Lang et al., 2011). For all profiles 𝑱 ∈ J(Φ)^𝑛,

MaxHam(𝑱) = argmin_{𝐽∈J(Φ)} max_{𝑖∈𝑁} 𝐻(𝐽𝑖, 𝐽).

Analogously, we define a rule new to the judgment aggregation literature, which is the maximal one satisfying the equity property. For all profiles 𝑱 ∈ J(Φ)^𝑛,

MaxEq(𝑱) = argmin_{𝐽∈J(Φ)} max_{𝑖,𝑗∈𝑁} |𝐻(𝐽𝑖, 𝐽) − 𝐻(𝐽𝑗, 𝐽)|.

To better understand these rules, consider an agenda with six issues: 𝑝, 𝑞, 𝑟 ≡ 𝑝 ∧ 𝑞, and their negations, writing judgments compactly as acceptance vectors over (𝑝, 𝑞, 𝑟), so that J(Φ) = {(1,1,1), (1,0,0), (0,1,0), (0,0,0)}. Suppose that there are only two agents in a profile 𝑱, holding judgments 𝐽1 = (1,1,1) and 𝐽2 = (0,1,0). Then, we have that MaxHam(𝑱) = {(1,1,1), (0,1,0), (1,0,0)}, while MaxEq(𝑱) = {(1,0,0)}. In this example, the difference in spirit between the two rules of our interest is evident. Although the MaxHam rule is able to fully satisfy exactly one of the agents without causing much harm to the other, it still creates greater unbalance than the MaxEq rule, which ensures that the two agents are equally happy with the outcome (under Hamming-distance preferences). In that sense, MaxEq is better suited for a group of agents that do not want any of them to feel particularly put upon, while MaxHam seems more desirable when a minimum level of happiness is asked for.

MaxHam generalises minimax approval voting [10], which is the special case without logical constraint on the judgments, meaning agents may approve any subset of issues. Brams et al. [10] show that MaxHam remains manipulable in this special case. As finding the outcome of minimax is computationally hard, Caragiannis et al. [12] provide approximation algorithms that circumvent this problem. They also demonstrate the interplay between manipulability and lower bounds for the approximation algorithm—establishing strategyproofness results for approximations of minimax.

3.1 Egalitarianism in Belief Merging

A framework closely related to ours is that of belief merging [45], which is concerned with how to aggregate several (possibly inconsistent) sets of beliefs into one consistent belief set.
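Returning to the two rules defined above: both are straightforward to prototype by brute force over the admissible set. The sketch below is our own illustration (not code from the paper); it uses the agenda 𝑝, 𝑞, 𝑟 ≡ 𝑝 ∧ 𝑞, with judgments written as acceptance vectors over (𝑝, 𝑞, 𝑟):

```python
# Brute-force sketches of MaxHam and MaxEq, assuming the full set of
# admissible judgments J(Phi) can be enumerated. Names are ours.

def hamming(j1, j2):
    return sum(a != b for a, b in zip(j1, j2))

def maxham(profile, admissible):
    """Minimise the distance of the worst off agent."""
    score = lambda j: max(hamming(ji, j) for ji in profile)
    best = min(score(j) for j in admissible)
    return {j for j in admissible if score(j) == best}

def maxeq(profile, admissible):
    """Minimise the largest gap in distance between any two agents."""
    gap = lambda j: max(abs(hamming(ji, j) - hamming(jj, j))
                        for ji in profile for jj in profile)
    best = min(gap(j) for j in admissible)
    return {j for j in admissible if gap(j) == best}

# Agenda p, q, r with the constraint r <-> (p and q): four admissible
# judgments, written as acceptance vectors over (p, q, r).
admissible = {(1, 1, 1), (1, 0, 0), (0, 1, 0), (0, 0, 0)}
profile = [(1, 1, 1), (0, 1, 0)]
```

On this two-agent profile, maxham returns the three judgments whose worst-case distance is 2, including each agent's own judgment, while maxeq uniquely returns the balanced compromise (1, 0, 0), which lies at distance 2 from both agents.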
Egalitarian belief merging is studied by Everaere et al. [29], who examine interpretations of the Sen-Hammond equity condition [63] and the Pigou-Dalton transfer principle [17]—two properties that are logically incomparable. We situate our egalitarian axioms within the context of these egalitarian axioms from belief merging; to this end, we reformulate the latter in our framework. (We refer to Everaere et al. [30] for a detailed comparison of the two frameworks. Another egalitarian property in belief merging is the arbitration postulate; we do not go into detail on it here, but refer the reader to Konieczny and Pérez [45].)

(Of course, several natural refinements of the rules MaxHam and MaxEq can be defined, with respect to various other axiomatic properties that we may find desirable. Identifying and studying such rules is an interesting direction for future research.)

◮ Fix an arbitrary profile 𝑱, agents 𝑖, 𝑗 ∈ 𝑁, and any two judgment sets 𝐽, 𝐽′ ∈ J(Φ). An aggregation rule 𝐹 satisfies the Sen-Hammond equity property if whenever 𝐻(𝐽𝑖, 𝐽) < 𝐻(𝐽𝑖, 𝐽′) < 𝐻(𝐽𝑗, 𝐽′) < 𝐻(𝐽𝑗, 𝐽) and 𝐻(𝐽𝑖′, 𝐽) = 𝐻(𝐽𝑖′, 𝐽′) for all other agents 𝑖′ ∈ 𝑁 \ {𝑖, 𝑗}, then 𝐽 ∈ 𝐹(𝑱) implies 𝐽′ ∈ 𝐹(𝑱).

Proposition 2.
If a rule satisfies either the maximin property or the equity property, then it also satisfies the Sen-Hammond equity property.
Proof (sketch).
Let 𝑱 = (𝐽𝑖, 𝐽𝑗) be a profile such that 𝐻(𝐽, 𝐽𝑖) < 𝐻(𝐽′, 𝐽𝑖) < 𝐻(𝐽′, 𝐽𝑗) < 𝐻(𝐽, 𝐽𝑗), and 𝐻(𝐽𝑖′, 𝐽) = 𝐻(𝐽𝑖′, 𝐽′) for all other agents 𝑖′ ∈ 𝑁 \ {𝑖, 𝑗}. Suppose 𝐹 satisfies the equity property. If there is some agent 𝑖′ such that |𝐻(𝐽𝑖, 𝐽) − 𝐻(𝐽𝑖′, 𝐽)| > |𝐻(𝐽𝑖, 𝐽) − 𝐻(𝐽𝑗, 𝐽)|, then 𝐽 ∈ 𝐹(𝑱) if and only if 𝐽′ ∈ 𝐹(𝑱), as the maximal difference in distance will be the same for the two judgments. If this is not the case, then agents 𝑖 and 𝑗 determine the outcome regarding 𝐽 and 𝐽′, so clearly 𝐽 ∈ 𝐹(𝑱) implies 𝐽′ ∈ 𝐹(𝑱). The argument for other cases proceeds similarly.

If 𝐹 satisfies the maximin property, then a similar argument tells us that if membership of 𝐽 and 𝐽′ in the outcome is determined by an agent other than 𝑖 or 𝑗, we will either have both or neither. If 𝑖 and 𝑗 are the determining factor, then 𝐽 ∈ 𝐹(𝑱) implies 𝐽′ ∈ 𝐹(𝑱). □

◮ Given a profile 𝑱 = (𝐽1, . . . , 𝐽𝑛) and agents 𝑖 and 𝑗 such that:
– 𝐻(𝐽𝑖, 𝐽) < 𝐻(𝐽𝑖, 𝐽′) ≤ 𝐻(𝐽𝑗, 𝐽′) < 𝐻(𝐽𝑗, 𝐽),
– 𝐻(𝐽𝑖, 𝐽′) − 𝐻(𝐽𝑖, 𝐽) = 𝐻(𝐽𝑗, 𝐽) − 𝐻(𝐽𝑗, 𝐽′), and
– 𝐻(𝐽𝑖∗, 𝐽) = 𝐻(𝐽𝑖∗, 𝐽′) for all other agents 𝑖∗ ∈ 𝑁 \ {𝑖, 𝑗},
𝐹 satisfies the Pigou-Dalton transfer principle if 𝐽′ ∈ 𝐹(𝑱) implies 𝐽 ∉ 𝐹(𝑱).

We refer to these axioms simply as Sen-Hammond and
Pigou-Dalton. Note that Pigou-Dalton is also a weaker version of our equity property, as it stipulates that the difference in utility between agents should be lessened under certain conditions, while the equity property always aims to minimise this difference. While we can find a rule that satisfies both the equity property and a weakening of the maximin property, namely Sen-Hammond, we cannot do the same by weakening the equity property.
Proposition 3.
No judgment aggregation rule can satisfy both the maximin property and Pigou-Dalton.
Proof.
Consider a domain J(Φ) = {𝐽1, 𝐽2, 𝐽3, 𝐽4, 𝐽′} and the profile 𝑱 = (𝐽1, 𝐽2, 𝐽3), with the following Hamming distances between the agents' judgments and the two candidate outcomes 𝐽4 and 𝐽′:

        𝐽4    𝐽′
𝐽1       1     3
𝐽2       6     4
𝐽3       7     7

(One such domain would be the following, over 13 issues: 𝐽1 = 0010000000000, 𝐽2 = 1101111000000, 𝐽3 = 1000000111111, 𝐽4 = 0000000000000, and 𝐽′ = 1100000000000.)

Each of 𝐽1, 𝐽2, and 𝐽3, taken as a candidate outcome, leaves some agent at distance at least 8, so 𝐽4 and 𝐽′ are exactly the judgments minimising the distance of the worst off agent (namely 7, attained by agent 3). Hence, if 𝐹 satisfies the maximin property, {𝐽4, 𝐽′} ⊆ 𝐹(𝑱). At the same time, 𝐻(𝐽1, 𝐽4) < 𝐻(𝐽1, 𝐽′) ≤ 𝐻(𝐽2, 𝐽′) < 𝐻(𝐽2, 𝐽4), the two differences both equal 2, and agent 3 is equidistant from 𝐽4 and 𝐽′. This means Pigou-Dalton is violated in this profile, as 𝐽′ ∈ 𝐹(𝑱) should imply 𝐽4 ∉ 𝐹(𝑱). □

We summarise the observations of this section in Figure 1.
Figure 1: Relationships among Equity, Pigou-Dalton, Maximin, and Sen-Hammond: dashed lines denote incompatibility, dotted lines incomparability, and arrows implication relations.

4 STRATEGIC MANIPULATION

This section provides an account of strategic manipulation with respect to the egalitarian axioms defined in Section 3. We start off by presenting the most general notion of strategic manipulation in judgment aggregation, introduced by Dietrich and List [18]. We assume Hamming preferences throughout this section.
Definition 1.
A rule 𝐹 is susceptible to manipulation by agent 𝑖 in profile 𝑱 if there exists a profile 𝑱′ =−𝑖 𝑱 such that 𝐹(𝑱′) ≻̊𝑖 𝐹(𝑱). We say that 𝐹 is strategyproof in case 𝐹 is not manipulable by any agent 𝑖 ∈ 𝑁 in any profile 𝑱 ∈ J(Φ)^𝑛.

Proposition 4 shows an important fact: in judgment aggregation, egalitarianism is incompatible with strategyproofness.

Proposition 4.
If an aggregation rule is strategyproof, it cannot satisfy the maximin property or the equity property.
Proof.
We show the contrapositive. Let Φ be an agenda with ten issues such that J(Φ) = {1111000000, 1100111100, 1100000000, 1110000000, 0011100010}. Consider the profile 𝑱 = (𝐽𝑖, 𝐽𝑗) with 𝐽𝑖 = 1111000000 and 𝐽𝑗 = 1100111100, and the profile 𝑱′ = (𝑱−𝑖, 𝐽′𝑖) with 𝐽′𝑖 = 0011100010.

In profile 𝑱, both the maximin and the equity properties prescribe that 1100000000 be returned as the single outcome: its distances to the two agents are 2 and 4, while every other admissible judgment yields both a larger worst-case distance and a larger difference in distances. In profile 𝑱′, the two properties agree on 1110000000 as the single outcome: it is at distance 5 from each of the two reported judgments, while every other admissible judgment again yields a larger worst-case distance and a larger difference. Because 𝑱′ = (𝑱−𝑖, 𝐽′𝑖) and 𝐻(𝐽𝑖, 1110000000) = 1 < 2 = 𝐻(𝐽𝑖, 1100000000), we have 1110000000 ≻𝑖 1100000000. Hence, if 𝐹 satisfies the maximin or the equity property, it fails strategyproofness. □
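Manipulations of this kind can be replayed computationally. The following sketch is our own instantiation (the concrete judgment vectors below are illustrative, not taken from the paper): a five-judgment domain over ten issues on which both rules move from one singleton outcome to a strictly better one for agent 𝑖 after a misreport.

```python
# Brute-force check that MaxHam and MaxEq are both manipulable on a
# concrete five-judgment domain (our own illustrative vectors).

def hamming(j1, j2):
    return sum(a != b for a, b in zip(j1, j2))

def maxham(profile, admissible):
    score = lambda j: max(hamming(ji, j) for ji in profile)
    best = min(map(score, admissible))
    return {j for j in admissible if score(j) == best}

def maxeq(profile, admissible):
    gap = lambda j: max(abs(hamming(a, j) - hamming(b, j))
                        for a in profile for b in profile)
    best = min(map(gap, admissible))
    return {j for j in admissible if gap(j) == best}

def bits(s):
    return tuple(int(c) for c in s)

A = bits("1111000000")   # agent i's truthful judgment
B = bits("1100111100")   # agent j's judgment
X = bits("1100000000")   # outcome under truthful reporting
Y = bits("1110000000")   # outcome after agent i's misreport
L = bits("0011100010")   # agent i's insincere report
domain = {A, B, X, Y, L}
```

Both rules select {X} on the truthful profile (A, B) and {Y} on the manipulated profile (L, B), and hamming(A, Y) = 1 < 2 = hamming(A, X), so the lie pays off for agent 𝑖.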
Definition 2.
A rule 𝐹 is susceptible to no-show manipulation by agent 𝑖 in profile 𝑱 if 𝐹(𝑱−𝑖) ≻̊𝑖 𝐹(𝑱). We say that 𝐹 satisfies participation if it is not susceptible to no-show manipulation by any agent 𝑖 ∈ 𝑁 in any profile. (Cf. the no-show paradox in voting [31].)

(The original definition of Dietrich and List [18] concerned single-judgment collective outcomes, and a type of preferences that covers Hamming-distance ones. This is in line with Brams et al.'s work on the minimax rule in approval voting.)

Second, antipodal strategyproofness poses another barrier against manipulation, by stipulating that an agent cannot change the outcome towards a better one for herself by reporting a totally untruthful judgment. This is a strictly weaker requirement than full strategyproofness, serving as a protection against excessive lying.

Definition 3.
A rule 𝐹 is susceptible to antipodal manipulation by agent 𝑖 in profile 𝑱 if 𝐹(𝑱−𝑖, 𝐽̄𝑖) ≻̊𝑖 𝐹(𝑱). We say that 𝐹 satisfies antipodal strategyproofness if it is not susceptible to antipodal manipulation by any agent 𝑖 ∈ 𝑁 in any profile.

As is the case for participation, antipodal strategyproofness is a weaker notion of strategyproofness as far as the MaxHam and the MaxEq rules are concerned. In voting theory, Sanver and Zwicker [61] show that participation implies antipodal strategyproofness (or half-way monotonicity, as it is called in that framework) for rules that output a single winning alternative. Notably, this is not always the case in our model (see Example 1). This is not surprising, as obtaining such a result independently of the preference extension would be significantly stronger than the result by Sanver and Zwicker [61]. We are, however, able to reproduce this relationship between participation and strategyproofness in Theorem 1, for a specific type of preferences.

Example 1.
We present a rule that satisfies participation but violates antipodal strategyproofness. The other direction admits a similar example, and is thus omitted. Note that the rule demonstrated is quite unnatural, for the sake of simplicity of presentation.

Consider an agenda Φ with J(Φ) = {0000, 1111, 1000}, noting that 0000 and 1111 are antipodal. We construct an anonymous rule 𝐹 that is only sensitive to which judgments are submitted and not to their quantity:

𝐹(0000) = 𝐹(0000, 1000) = {0000, 1111};
𝐹(1111) = {1111};
𝐹(1000) = {1111, 1000};
𝐹(0000, 1111) = 𝐹(1111, 1000) = 𝐹(0000, 1111, 1000) = {1000}.

(For other agendas we can simply take the rule to be constant.) For the pessimistic preference, no agent can be strictly better off by abstaining. However, compare the profiles (1000, 0000) and (1000, 1111): agent 2 with truthful judgment 0000 can move from outcome {0000, 1111} to outcome {1000}, which is strictly better for her, since the worst judgment she risks drops from 1111 (at distance 4) to 1000 (at distance 1).

While the two axioms are independent in the general case, participation implies antipodal strategyproofness (Theorem 1) if we stipulate that

• 𝑋 ≻̊𝑖 𝑌 if and only if there exist some 𝐽 ∈ 𝑋 and 𝐽′ ∈ 𝑌 such that 𝐽 ≻𝑖 𝐽′ and {𝐽, 𝐽′} ⊈ 𝑋 ∩ 𝑌.

If a preference satisfies the above condition, we say that it is decisive. This condition gives rise to a preference extension equivalent to the large preference extension of Kruger and Terzopoulou [48]. Note that a decisive preference is not necessarily acyclic—in fact, it may even be symmetric. The interpretation of such a preference extension is slightly different than the usual one; when we say that a rule is strategyproof for a decisive preference where both 𝐽 ≻̊ 𝐽′ and 𝐽′ ≻̊ 𝐽 hold, we mean that no agent 𝑖 with 𝐽 ≻̊𝑖 𝐽′ and no agent 𝑗 ≠ 𝑖 with 𝐽′ ≻̊𝑗 𝐽 will ever have an incentive to manipulate.

Using Lemma 1, we can now prove a result analogous to the one in voting theory, to give a complete picture of how these axioms relate to each other in judgment aggregation.
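Rules of this support-based kind can be checked exhaustively. The sketch below is our own verified instance in the spirit of Example 1 (the judgment vectors and the outcome table are ours): it encodes a rule as a table from supports to outcome sets, verifies participation for pessimistic agents on all profiles of up to four agents (which suffices, since the rule depends only on the support), and exhibits an antipodal manipulation.

```python
from itertools import combinations_with_replacement

def hamming(j1, j2):
    return sum(a != b for a, b in zip(j1, j2))

# Three admissible judgments; J2 is the antipodal judgment of J1.
J1, J2, J3 = (0, 0, 0, 0), (1, 1, 1, 1), (1, 0, 0, 0)

# The rule: a table from the SET of submitted judgments to the outcome.
F = {
    frozenset({J1}): {J1, J2},
    frozenset({J2}): {J2},
    frozenset({J3}): {J2, J3},
    frozenset({J1, J2}): {J3},
    frozenset({J1, J3}): {J1, J2},
    frozenset({J2, J3}): {J3},
    frozenset({J1, J2, J3}): {J3},
}

def worst(agent, outcome):
    """Pessimistic agents evaluate a set by its worst member."""
    return max(hamming(agent, j) for j in outcome)

def satisfies_participation():
    """No pessimistic agent ever strictly gains by abstaining."""
    for n in (2, 3, 4):
        for profile in combinations_with_replacement([J1, J2, J3], n):
            for i, truthful in enumerate(profile):
                rest = profile[:i] + profile[i + 1:]
                if worst(truthful, F[frozenset(rest)]) < \
                        worst(truthful, F[frozenset(profile)]):
                    return False
    return True

# Antipodal manipulation: in profile (J3, J1), the agent with truthful
# judgment J1 benefits by reporting the antipodal judgment J2 instead.
before = F[frozenset({J3, J1})]   # {J1, J2}: worst case at distance 4
after = F[frozenset({J3, J2})]    # {J3}: at distance 1 from J1
```

The rule passes the participation check, yet the final comparison shows a pessimistic agent strictly gaining from an antipodal lie, so participation does not imply antipodal strategyproofness in general.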
Lemma 1.
For judgment sets
𝐽, 𝐽′, and 𝐽′′: 𝐻(𝐽, 𝐽′) > 𝐻(𝐽, 𝐽′′) if and only if 𝐻(𝐽̄, 𝐽′) < 𝐻(𝐽̄, 𝐽′′).
For judgment sets
𝐽, 𝐽′ ∈ J(Φ), we have 𝐻(𝐽̄, 𝐽′) = 𝑚 − 𝐻(𝐽, 𝐽′). Suppose 𝐻(𝐽, 𝐽′) > 𝐻(𝐽, 𝐽′′). Then 𝐻(𝐽̄, 𝐽′) = 𝑚 − 𝐻(𝐽, 𝐽′) < 𝑚 − 𝐻(𝐽, 𝐽′′) = 𝐻(𝐽̄, 𝐽′′). The other direction is analogous. □
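Lemma 1 rests on the identity 𝐻(𝐽̄, 𝐽′) = 𝑚 − 𝐻(𝐽, 𝐽′), which can be sanity-checked by brute force (a small sketch of our own):

```python
from itertools import product

def hamming(j1, j2):
    return sum(a != b for a, b in zip(j1, j2))

def antipodal(j):
    """Flip every entry of a 0/1 judgment vector."""
    return tuple(1 - x for x in j)

m = 4  # check all pairs of length-4 binary vectors
identity_holds = all(
    hamming(antipodal(j), jp) == m - hamming(j, jp)
    for j in product((0, 1), repeat=m)
    for jp in product((0, 1), repeat=m)
)
```

Since 𝐻(𝐽̄, ·) equals 𝑚 minus 𝐻(𝐽, ·), every strict comparison of distances is reversed when 𝐽 is replaced by its antipodal judgment, which is exactly what the lemma states.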
For decisive preferences over sets of judgments, participation implies antipodal strategyproofness.
Proof.
Working on the contrapositive, suppose that 𝐹 is susceptible to antipodal manipulation. We will prove that 𝐹 is susceptible to no-show manipulation too. We know that there exist an agent 𝑖 ∈ 𝑁 and a profile 𝑱 such that 𝐹(𝑱−𝑖, 𝐽̄𝑖) ≻̊𝑖 𝐹(𝑱−𝑖, 𝐽𝑖). This means that there exist 𝐽′ ∈ 𝐹(𝑱−𝑖, 𝐽̄𝑖) and 𝐽 ∈ 𝐹(𝑱−𝑖, 𝐽𝑖) with 𝐽′ ≻𝑖 𝐽. Equivalently,

𝐻(𝐽𝑖, 𝐽′) < 𝐻(𝐽𝑖, 𝐽).   (1)

Next, consider a judgment 𝐽′′ ∈ 𝐹(𝑱−𝑖). If 𝐻(𝐽𝑖, 𝐽′′) < 𝐻(𝐽𝑖, 𝐽), then 𝐹 is susceptible to no-show manipulation by agent 𝑖 in the profile (𝑱−𝑖, 𝐽𝑖). Otherwise, 𝐻(𝐽𝑖, 𝐽) ≤ 𝐻(𝐽𝑖, 𝐽′′); together with Inequality (1), this gives 𝐻(𝐽𝑖, 𝐽′) < 𝐻(𝐽𝑖, 𝐽′′), and Lemma 1 then implies that 𝐻(𝐽̄𝑖, 𝐽′′) < 𝐻(𝐽̄𝑖, 𝐽′). This means that 𝐹 is susceptible to no-show manipulation by agent 𝑖 in the profile (𝑱−𝑖, 𝐽̄𝑖). (In both cases, decisiveness lifts the strict comparison between individual judgments to a strict preference between the outcome sets.) □

We next prove that any rule satisfying the maximin property is immune to both no-show manipulation and antipodal manipulation (Theorem 2), while this is not true for the equity property (Proposition 5). We emphasise that the theorem holds for all preference extensions. These results—holding for two independent notions of strategyproofness—are significant for two reasons. First, they bring to light the conditions under which we can have our cake and eat it too, simultaneously satisfying an egalitarian property and a degree of strategyproofness. In addition, they provide a further way to distinguish between the properties of maximin and equity: the former is better suited in contexts where we may worry about the agents' strategic behaviour.
Theorem 2.
The maximin property implies participation and antipodal strategyproofness.
Proof.
We prove the participation case; the proof for antipodal strategyproofness is analogous, and utilises Lemma 1.

Suppose for contradiction that 𝐹 is a rule that satisfies the maximin property but violates participation. Then there must exist an agent 𝑖 ∈ 𝑁 and a profile 𝑱, where 𝐽𝑖 is agent 𝑖's truthful judgment, such that 𝐹(𝑱−𝑖) ≻̊𝑖 𝐹(𝑱). This means there must exist judgments 𝐽 ∈ 𝐹(𝑱) and 𝐽′ ∈ 𝐹(𝑱−𝑖) such that 𝐽′ ≻𝑖 𝐽 and {𝐽, 𝐽′} ⊈ 𝐹(𝑱) ∩ 𝐹(𝑱−𝑖). Because agent 𝑖 strictly prefers 𝐽′ to 𝐽, we have 𝐻(𝐽𝑖, 𝐽) > 𝐻(𝐽𝑖, 𝐽′). We consider two cases.

Case 1: Suppose that 𝐽′ ∉ 𝐹(𝑱). Let 𝑘 be the distance between the worst off agent's judgment in 𝑱 and any judgment in 𝐹(𝑱). Then,

𝐻(𝐽𝑗′, 𝐽) ≤ 𝑘 for all 𝑗′ ∈ 𝑁.   (2)

We know that 𝐻(𝐽𝑖, 𝐽′) < 𝑘 because 𝐻(𝐽𝑖, 𝐽) ≤ 𝑘 and agent 𝑖 strictly prefers 𝐽′ to 𝐽. From Inequality (2), this means that if 𝐽′ is not among the outcomes in 𝐹(𝑱), there has to be some 𝑗 ∈ 𝑁 \ {𝑖} such that 𝐻(𝐽𝑗, 𝐽′) > 𝑘. But all judgments submitted in the profile 𝑱−𝑖 by agents in 𝑁 \ {𝑖} are at distance at most 𝑘 from 𝐽 by Inequality (2), so any rule satisfying the maximin property would select 𝐽 as an outcome of 𝐹(𝑱−𝑖) instead of 𝐽′—a contradiction.

Case 2: Suppose that 𝐽′ ∈ 𝐹(𝑱), meaning that 𝐽 ∉ 𝐹(𝑱−𝑖). Analogously to the first case, let 𝑘′ be the distance between the worst off agent's judgment in 𝑱−𝑖 and any judgment in 𝐹(𝑱−𝑖). Then,

𝐻(𝐽𝑗′, 𝐽′) ≤ 𝑘′ for all 𝑗′ ∈ 𝑁 \ {𝑖}.   (3)

Moreover, since 𝐽 ∉ 𝐹(𝑱−𝑖), it is the case that

𝐻(𝐽𝑗, 𝐽) > 𝑘′ for some 𝑗 ≠ 𝑖.   (4)

In profile 𝑱, Inequalities (3) and (4) still hold. In addition, we have that 𝐻(𝐽𝑖, 𝐽) > 𝐻(𝐽𝑖, 𝐽′) because agent 𝑖 strictly prefers 𝐽′ to 𝐽. So, for any rule satisfying the maximin property, judgment 𝐽′ will be better as an outcome of 𝐹(𝑱) than 𝐽—a contradiction. □

(Note that antipodal strategyproofness is not so weak a requirement that it is immediately satisfied by all "utilitarian" aggregation rules. For example, the Copeland voting rule fails the analogous axiom of half-way monotonicity [66].)

Corollary 1.
The rule MaxHam satisfies antipodal strategyproofness and participation.
Proposition 5.
No rule that satisfies the equity property can satisfy participation or antipodal strategyproofness.
Proof.
The following is a counterexample for antipodal strategyproofness; a similar one exists for participation.

Let J(Φ) = {00000, 01110, 11111, 11011, 10000}, and consider the profiles 𝑱 = (𝐽𝑖, 𝐽𝑗) and 𝑱′ = (𝑱−𝑖, 𝐽̄𝑖), where 𝐽𝑖 = 00000, 𝐽𝑗 = 01110, and hence 𝐽̄𝑖 = 11111. Let 𝐹 be an arbitrary rule that satisfies the equity property. In profile 𝑱, the judgment 11011 is the unique one minimising the difference in distances to the two agents (|4 − 3| = 1, while every other admissible judgment yields a difference of 3), so 𝐹(𝑱) = {11011}. In profile 𝑱′, the judgment 10000 is equidistant from the two reported judgments (|4 − 4| = 0, while every other admissible judgment yields a difference of at least 2), so 𝐹(𝑱′) = {10000}. It is clear that agent 𝑖 benefits from her antipodal manipulation, as her true judgment is much closer to the singleton outcome in 𝑱′ (at distance 1) than to the singleton outcome in 𝑱 (at distance 4). □
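Counterexamples of this kind can be verified mechanically. The sketch below is our own code (the five-judgment domain is an illustrative instantiation), brute-forcing the equity-optimal outcome in both profiles:

```python
# Brute-force check that an equity-respecting rule is antipodally
# manipulable on a concrete five-judgment domain (illustrative vectors).

def hamming(j1, j2):
    return sum(a != b for a, b in zip(j1, j2))

def maxeq(profile, admissible):
    """All judgments minimising the largest gap in agents' distances."""
    gap = lambda j: max(abs(hamming(a, j) - hamming(b, j))
                        for a in profile for b in profile)
    best = min(map(gap, admissible))
    return {j for j in admissible if gap(j) == best}

def bits(s):
    return tuple(int(c) for c in s)

domain = {bits(s) for s in ["00000", "01110", "11111", "11011", "10000"]}
J_i, J_j = bits("00000"), bits("01110")
J_i_bar = bits("11111")                   # agent i's antipodal report

truthful = maxeq([J_i, J_j], domain)      # unique outcome, far from J_i
lied = maxeq([J_i_bar, J_j], domain)      # unique outcome, close to J_i
```

On this domain the truthful profile yields the singleton {11011}, at distance 4 from agent 𝑖, while the antipodal lie yields {10000}, at distance 1, so the manipulation strictly pays off.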
The rule MaxEq does not satisfy participation or antipodal strategyproofness.
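For contrast with the MaxHam sketch, a brute-force MaxEq computation can be sketched analogously (again an illustration under the assumption of logically independent issues): the rule selects the judgment sets minimizing the inequity, i.e., the largest difference between any two agents' Hamming distances to the outcome. The two-agent profile below reuses the judgments 00000 and 01110 from the counterexample above.

```python
from itertools import product

def hamming(a, b):
    """Number of issues on which two judgment sets disagree."""
    return sum(x != y for x, y in zip(a, b))

def inequity(j, profile):
    """Largest difference between any two agents' Hamming distances to j."""
    dists = [hamming(j, v) for v in profile]
    return max(dists) - min(dists)

def maxeq(profile, m):
    """Brute-force MaxEq for m logically independent issues: select the
    judgment sets minimizing the inequity."""
    candidates = list(product((0, 1), repeat=m))
    best = min(inequity(j, profile) for j in candidates)
    return [j for j in candidates if inequity(j, profile) == best]

# The two judgments are at Hamming distance 3, an odd number, so no
# judgment set is exactly equidistant from both; the optimal inequity is 1.
profile = [(0, 0, 0, 0, 0), (0, 1, 1, 1, 0)]
outcomes = maxeq(profile, 5)
```

Note the parity effect visible in this toy example: when two agents are at odd Hamming distance from each other, perfectly equitable outcomes (inequity 0) do not exist even over independent issues.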
We have discussed two aggregation rules that reflect desirable egalitarian principles—the MaxHam and MaxEq rules—and examined whether they give agents incentives to misrepresent their truthful judgments. In this section we consider how complex it is, computationally, to employ these rules, as well as the complexity of determining whether an agent can manipulate the collective outcome. The MaxHam rule has been considered from a computational perspective before [40–42]. Here, we extend this analysis to the MaxEq rule, and we compare the two rules with each other on their computational properties. Concretely, we primarily establish some computational complexity results; motivated by these results, we then illustrate how some computational problems related to these rules can be solved using the paradigm of Answer Set Programming.
We investigate some computational complexity aspects of the judgment aggregation rules that we have considered. Due to space constraints, we will only describe the main lines of these results—for full details, we refer to the accompanying Appendix.

Consider the problem of outcome determination (for a rule 𝐹). This is most naturally modelled as a search problem, where the input consists of an agenda Φ and a profile 𝑱 = (𝐽₁, …, 𝐽ₙ) ∈ J(Φ)ⁿ. The problem is to produce some judgment set 𝐽* ∈ 𝐹(𝑱). We will show that for the MaxEq rule, this problem can be solved in polynomial time with a logarithmic number of calls to an oracle for NP search problems (where the oracle also produces a witness for yes-answers—also called an FNP witness oracle). Said differently, the outcome determination problem for the MaxEq rule lies in the complexity class FP^NP[log,wit]. We also show that the problem is complete for this class (using the standard type of reductions used for search problems: polynomial-time Levin reductions).

Theorem 3.
The outcome determination problem for the MaxEq rule is FP^NP[log,wit]-complete under polynomial-time Levin reductions.

Proof (sketch).
Membership in FP^NP[log,wit] can be shown by giving a polynomial-time algorithm that solves the problem by querying an FNP witness oracle a logarithmic number of times. The algorithm first finds the minimum value 𝑘 of max_{𝐽′,𝐽′′ ∈ 𝑱} |𝐻(𝐽, 𝐽′) − 𝐻(𝐽, 𝐽′′)| by means of binary search—requiring a logarithmic number of oracle queries. Then, with one additional oracle query, the algorithm can produce some 𝐽* ∈ J(Φ) with max_{𝐽′,𝐽′′ ∈ 𝑱} |𝐻(𝐽*, 𝐽′) − 𝐻(𝐽*, 𝐽′′)| = 𝑘.

To show FP^NP[log,wit]-hardness, we reduce from the problem of finding a satisfying assignment of a (satisfiable) propositional formula 𝜓 that sets a maximum number of variables to true [14, 44]. This reduction works roughly as follows. Firstly, we produce 3CNF formulas 𝜓₁, …, 𝜓_𝑣, where each 𝜓ᵢ is 1-in-3-satisfiable if and only if there exists a satisfying assignment of 𝜓 that sets at least 𝑖 variables to true. Then, for each 𝑖, we transform 𝜓ᵢ into an agenda Φᵢ and a profile 𝑱ᵢ such that there is a judgment set with equal Hamming distance to each 𝐽 ∈ 𝑱ᵢ if and only if 𝜓ᵢ is 1-in-3-satisfiable. Finally, we put the agendas Φᵢ and profiles 𝑱ᵢ together into a single agenda Φ and a single profile 𝑱 such that we can—from the outcomes selected by the MaxEq rule—read off the largest 𝑖 for which 𝜓ᵢ is 1-in-3-satisfiable, and thus the maximum number of variables set to true in any truth assignment satisfying 𝜓. This last step involves duplicating issues in Φ₁, …, Φ_𝑣 different numbers of times, and creating logical dependencies between them. Moreover, we do this in such a way that from any outcome selected by the MaxEq rule, we can reconstruct a truth assignment satisfying 𝜓 that sets a maximum number of variables to true. □

The result of Theorem 3 means that the computational complexity of computing outcomes for the MaxEq rule lies at the Θ^p_2 level of the Polynomial Hierarchy.
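The binary-search structure of the membership argument can be sketched in Python. In this illustrative sketch (not from the paper), a brute-force enumeration over independent issues stands in for the FNP witness oracle—in the actual argument the oracle is an NP computation, e.g., realised by a SAT solver.

```python
from itertools import product

def hamming(a, b):
    """Number of issues on which two judgment sets disagree."""
    return sum(x != y for x, y in zip(a, b))

def witness_oracle(profile, m, k):
    """Stand-in for the FNP witness oracle: return some judgment set whose
    inequity with respect to the profile is at most k, or None if none exists."""
    for j in product((0, 1), repeat=m):
        dists = [hamming(j, v) for v in profile]
        if max(dists) - min(dists) <= k:
            return j
    return None

def maxeq_outcome(profile, m):
    """Binary search for the minimum inequity k -- O(log m) oracle queries --
    followed by one final query producing a witnessing MaxEq outcome."""
    lo, hi = 0, m
    while lo < hi:
        mid = (lo + hi) // 2
        if witness_oracle(profile, m, mid) is not None:
            hi = mid
        else:
            lo = mid + 1
    return lo, witness_oracle(profile, m, lo)
```

Only the number of oracle calls matters for the complexity classification; the stand-in oracle itself is exponential here, which is exactly the part the FNP oracle abstracts away.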
This is in line with previous results on the computational complexity of the outcome determination problem for the MaxHam rule—De Haan and Slavkovik [41] showed that a decision variant of the outcome determination problem for the MaxHam rule is Θ^p_2-complete. Notably, our proof (presented in detail in the Appendix) brings out an intriguing fact about a problem that is at first glance simpler than outcome determination for MaxEq: given an agenda Φ and a profile 𝑱, deciding whether the minimum value of max_{𝑖,𝑗 ∈ 𝑁} |𝐻(𝐽ᵢ, 𝐽) − 𝐻(𝐽ⱼ, 𝐽)| for 𝐽 ∈ J(Φ)—the value that the MaxEq rule minimizes—is divisible by 4, is Θ^p_2-complete (Proposition 6). Intuitively, merely computing the minimum value that is relevant for MaxEq is Θ^p_2-hard.

Proposition 6.
Given an agenda Φ and a profile 𝑱, deciding whether the minimal value of max_{𝐽′,𝐽′′ ∈ 𝑱} |𝐻(𝐽*, 𝐽′) − 𝐻(𝐽*, 𝐽′′)| for 𝐽* ∈ J(Φ) is divisible by 4 is a Θ^p_2-complete problem.

Interestingly, we found that the problem of deciding if there exists a judgment set 𝐽* ∈ J(Φ) that has the exact same Hamming distance to each judgment set in the profile is NP-hard, even when the agenda consists of logically independent issues.

Proposition 7.
Given an agenda Φ and a profile 𝑱, the problem of deciding whether there is some 𝐽* ∈ J(Φ) with max_{𝐽′,𝐽′′ ∈ 𝑱} |𝐻(𝐽*, 𝐽′) − 𝐻(𝐽*, 𝐽′′)| = 0 is NP-complete. Moreover, NP-hardness holds even for the case where Φ consists of logically independent issues—i.e., the case where J(Φ) = {0, 1}^𝑚 for some 𝑚.

This is also in line with previous results for the MaxHam rule—De Haan [40] showed that computing outcomes for the MaxHam rule is computationally intractable even when the agenda consists of logically independent issues.

Next, we turn our attention to the problem of strategic manipulation. Specifically, we show that—for the case of decisive preferences over sets of judgment sets—the problem of deciding if an agent 𝑖 can strategically manipulate is in the complexity class Σ^p_2.

Proposition 8.
Let ≽ be a preference relation over judgment sets that is polynomial-time computable, and let ˚≽ be a decisive extension over sets of judgment sets. Then the problem of deciding if a given agent 𝑖 can strategically manipulate under the MaxEq rule—i.e., given Φ and 𝑱, deciding if there exists some 𝑱′ =₋ᵢ 𝑱 (a profile agreeing with 𝑱 on all agents except 𝑖) with MaxEq(𝑱′) ˚≻ᵢ MaxEq(𝑱)—is in the complexity class Σ^p_2.

Proof (sketch).
To show membership in Σ^p_2 = NP^NP, we describe a nondeterministic polynomial-time algorithm with access to an NP oracle that solves the problem. The algorithm firstly guesses a new judgment set 𝐽′ᵢ for agent 𝑖 in the new profile 𝑱′, and guesses a truth assignment witnessing that 𝐽′ᵢ is consistent. Then, using the NP oracle, it computes the values 𝑘 = min_{𝐽 ∈ J(Φ)} max_{𝐽′,𝐽′′ ∈ 𝑱} |𝐻(𝐽, 𝐽′) − 𝐻(𝐽, 𝐽′′)| and 𝑘′ = min_{𝐽 ∈ J(Φ)} max_{𝐽′,𝐽′′ ∈ 𝑱′} |𝐻(𝐽, 𝐽′) − 𝐻(𝐽, 𝐽′′)|. Finally, it guesses some 𝐽, 𝐽′ ∈ J(Φ), together with truth assignments witnessing consistency, and it verifies that 𝐽′ ≻ᵢ 𝐽, that 𝐽′ ∈ MaxEq(𝑱′), that 𝐽 ∈ MaxEq(𝑱), and that {𝐽, 𝐽′} ⊈ MaxEq(𝑱) ∩ MaxEq(𝑱′). Since these final checks can all be done in polynomial time—using the previously guessed and computed information—one can verify that this procedure can be implemented by an NP^NP algorithm. □

This Σ^p_2-membership result can straightforwardly be extended to other variants of the manipulation problem (e.g., no-show manipulation and antipodal manipulation) and to other preferences, as well as to the MaxHam rule. Due to space constraints, we omit further details on this. Still, we shall mention that results demonstrating that strategic manipulation is very complex are generally more welcome than analogous ones regarding outcome determination. If manipulation is considered a negative side-effect of the agents' strategic behaviour, knowing that it is hard for the agents to materialise it is good news. In Section 5.2 we will revisit these concerns from a different angle.
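The guess-and-check structure of the proof sketch can be mirrored by an (exponential-time) brute-force manipulation test. The following is an illustrative Python sketch under two assumptions made here for concreteness: issues are logically independent, and agent 𝑖's preference is Hamming-based (she prefers outcomes closer to her truthful judgment); the success condition is the decisive one used in the proof.

```python
from itertools import product

def hamming(a, b):
    """Number of issues on which two judgment sets disagree."""
    return sum(x != y for x, y in zip(a, b))

def maxeq(profile, m):
    """Brute-force MaxEq over m logically independent issues."""
    cands = list(product((0, 1), repeat=m))
    def ineq(j):
        d = [hamming(j, v) for v in profile]
        return max(d) - min(d)
    best = min(ineq(j) for j in cands)
    return {j for j in cands if ineq(j) == best}

def can_manipulate(profile, i, m):
    """Try every insincere report for agent i and check whether some pair of
    outcomes J (truthful profile) and J' (manipulated profile) witnesses a
    successful manipulation: J' is strictly closer to i's truthful judgment,
    and {J, J'} is not contained in the intersection of the two outcome sets."""
    truthful = maxeq(profile, m)
    for report in product((0, 1), repeat=m):
        manipulated = list(profile)
        manipulated[i] = report
        outcomes = maxeq(manipulated, m)
        for j_new in outcomes:
            for j_old in truthful:
                if (hamming(profile[i], j_new) < hamming(profile[i], j_old)
                        and not {j_old, j_new} <= (truthful & outcomes)):
                    return True
    return False
```

On the two-agent profile (00000, 01110) this test confirms that agent 0 can manipulate MaxEq, consistent with Corollary 2, while a single-agent profile admits no manipulation.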
The complexity results in Section 5.1 leave no doubt that applying our egalitarian rules is computationally difficult. Nevertheless, they also indicate that a useful approach for computing outcomes of the MaxEq rule in practice would be to encode this problem into the paradigm of Answer Set Programming (ASP) [36], and to use ASP solving algorithms. ASP offers an expressive automated reasoning framework that typically works well for problems at the Θ^p_2 level of the Polynomial Hierarchy. In this section, we will show how this encoding can be done—similarly to an ASP encoding for the MaxHam rule [42]. Due to space restrictions, we refer to the literature for details on the syntax and semantics of ASP—e.g., [34, 36].

We use the same basic setup that De Haan and Slavkovik [42] use to represent judgment aggregation scenarios—with some simplifications and modifications for the sake of readability. In particular, we use the predicate voter/1 to represent individuals, we use issue/1 to represent issues in the agenda, and we use js/2 to represent judgment sets—both for the individual voters and for a dedicated agent col that represents the outcome of the rule.

With this encoding of judgment aggregation scenarios, one can add further constraints on the predicate js/2 that express which judgment sets are consistent, based on the logical relations between the issues in the agenda Φ—as done by De Haan and Slavkovik [42]. We refer to their work for further details on how this can be done.

Now, we show how to encode the MaxEq rule into ASP, similarly to the encoding of the MaxHam rule by De Haan and Slavkovik [42]. We begin by defining a predicate dist/2 to capture the Hamming distance D between the outcome and the judgment set of an agent A.
dist(A,D) :- voter(A), D = #count { X : issue(X), js(col,X), not js(A,X) ; X : issue(X), js(A,X), not js(col,X) }.

Then, we define predicates maxdist/1, mindist/1 and inequity/1 that capture the maximum Hamming distance from the outcome to any judgment set in the profile, the minimum such Hamming distance, and the difference between the maximum and minimum (or inequity), respectively.

maxdist(Max) :- Max = #max { D : dist(A,D) }.
mindist(Min) :- Min = #min { D : dist(A,D) }.
inequity(Max-Min) :- maxdist(Max), mindist(Min).

Finally, we add an optimization constraint that states that only outcomes should be selected that minimize the inequity.

#minimize { E@30 : inequity(E) }.

(Note though that hardness results regarding manipulation of our egalitarian rules remain an open question. The expression "@30" in Line 5 indicates the priority level of this optimization statement; we used the arbitrary value of 30, and priority levels are considered lexicographically.)

For any answer set program that encodes a judgment aggregation setting, combined with Lines 1–5, it then holds that the optimal answer sets are in one-to-one correspondence with the outcomes selected by the MaxEq rule.

Interestingly, we can readily modify this encoding to capture refinements of the MaxEq rule. An example of this is the refinement that selects (among the outcomes of the MaxEq rule) the outcomes that minimize the maximum Hamming distance to any judgment set in the profile. We can encode this example refinement by adding the following optimization statement, which works at a lower priority level than the optimization in Line 5.

#minimize { D@20 : maxdist(D) }.

We now show how to encode the problem of strategic manipulation into ASP. The value of this section's contribution should be viewed from the perspective of the modeller rather than from that of the agents. That is, even if we do not wish for the agents to be able to easily check whether they can be better off by lying, it may be reasonable, given a profile of judgments, to externally determine whether a certain agent can benefit from being untruthful.

The simplest way to achieve this is with the meta-programming techniques developed by Gebser et al. [35].
Their meta-programming approach allows one to additionally express optimization statements that are based on subset-minimality, and to transform programs with this extended expressivity into standard (disjunctive) answer set programs. We use this to encode the problem of strategic manipulation.

Due to space reasons, we will not spell out the full ASP encoding needed to do so. Instead, we will highlight the main steps, and describe how these fit together. We will use the example of MaxEq, but the exact same approach would work for any other judgment aggregation rule that can be expressed in ASP efficiently using regular (cardinality) optimization constraints—in other words, for all rules for which the outcome determination problem lies at the Θ^p_2 level of the Polynomial Hierarchy. Moreover, we will use the example of a decisive preference ˚≻ over sets of judgment sets that is based on a polynomial-time computable preference ≻ over judgment sets. The approach can be modified to work with other preferences as well.

We begin by guessing a new judgment set 𝐽′ᵢ for the individual 𝑖 that is trying to manipulate—and we assume, w.l.o.g., that 𝑖 = 1, representing the guessed judgment set with a dedicated agent declared by voter(prime(1)).

Then, we express the outcomes of the MaxEq rule, both for the non-manipulated profile 𝑱 and for the manipulated profile 𝑱′, using the dedicated agents col (for 𝑱) and prime(col) (for 𝑱′). This is done exactly as in the encoding of the problem of outcome determination (so, for the case of MaxEq, as described in Section 5.2)—with the difference that optimization is expressed in the right format for the meta-programming method of Gebser et al. [35].

We express the following subset-minimality minimization statement (at a higher priority level than all other optimization constraints used so far). This will ensure that every possible judgment set 𝐽′ᵢ will be considered as a subset-minimal solution.

_criteria(40,1,js(prime(1),X)) :- js(prime(1),X).
_optimize(40,1,incl).
To encode whether or not the guessed manipulation was successful, we have to define a predicate successful/0 that is true if and only if (i) 𝐽′ ≻ᵢ 𝐽 and (ii) 𝐽 and 𝐽′ are not both selected as outcomes by the MaxEq rule for both 𝑱 and 𝑱′, where 𝐽′ is the outcome encoded by the statements js(prime(col),X) and 𝐽 is the outcome encoded by the statements js(col,X). Since we assume that ≻ᵢ is computable in polynomial time, and since we can efficiently check using statements in the answer set whether 𝐽 and 𝐽′ are selected by the MaxEq rule for 𝑱 and 𝑱′, we know that we can define the predicate successful/0 correctly and succinctly in our encoding. For space reasons, we omit further details on how to do this.

Then, we express another minimization statement (at a lower priority level than all other optimization statements used so far), which states that we should make successful true whenever possible. Intuitively, we will use this to filter out guessed manipulations that are unsuccessful.

unsuccessful :- not successful.
successful :- not unsuccessful.
_criteria(10,1,unsuccessful) :- unsuccessful.
_optimize(10,1,card).

Finally, we feed the answer set program 𝑃 that we have constructed so far into the meta-programming method, resulting in a new (disjunctive) answer set program 𝑃′ that uses no optimization statements at all, and whose answer sets correspond exactly to the (lexicographically) optimized answer sets of our program 𝑃. Since the new program 𝑃′ does not use optimization, we can add additional constraints to 𝑃′ to remove some of the answer sets. In particular, we will filter out those answer sets that correspond to an unsuccessful manipulation—i.e., those containing the statement unsuccessful. Effectively, we add the following constraint to 𝑃′:

:- unsuccessful.
As a result, the only answer sets of 𝑃′ that remain correspond exactly to successful manipulations 𝐽′ᵢ for agent 𝑖.

The meta-programming technique that we use relies on the full disjunctive answer set programming language. For this full language, finding answer sets is a Σ^p_2-complete problem [21]. This is in line with our result of Proposition 8, where we show that the problem of strategic manipulation is in Σ^p_2.

The encoding that we described can straightforwardly be modified for various variants of strategic manipulation (e.g., antipodal manipulation). To make this work, one needs to express additional constraints on the choice of the judgment set 𝐽′ᵢ. To adapt the encoding for other preference relations ˚≻, one needs to adapt the definition of successful/0, expressing under what conditions an act of manipulation is successful.

Our encoding using meta-programming is relatively easily understandable, since we do not need to tinker with the encoding of complex optimization constraints in full disjunctive answer set programming ourselves—this we outsource to the meta-programming method. If one were to do this manually, there is more space for tailor-made optimizations, which might lead to a better performance of ASP solving algorithms for the problem of strategic manipulation. It is an interesting topic for future research to investigate this, and possibly to experimentally test the performance of different encodings, when combined with ASP solving algorithms.

CONCLUSION
We have introduced the concept of egalitarianism into the framework of judgment aggregation and have presented how egalitarian and strategyproofness axioms interact in this setting. Importantly, we have shown that the two main interpretations of egalitarianism give rise to rules with differing levels of protection against manipulation. In addition, we have looked into various computational aspects of the egalitarian rules that arise from our axioms, in a twofold manner: first, we have provided worst-case complexity results; second, we have shown how to solve the relevant hard problems using Answer Set Programming.

While we have axiomatised two prominent egalitarian principles, it remains to be seen whether other egalitarian axioms can provide stronger barriers against manipulation. For example, in parallel to majoritarian rules, one could define rules that minimise the distance to some egalitarian ideal. Moreover, as was until recently the case in judgment aggregation, there is an obvious lack of voting rules designed with egalitarian principles in mind. We hope this paper opens the door for similar explorations in voting theory.
A APPENDIX
In this appendix, we will show that the outcome determination problem for the MaxEq rule boils down to a Θ^p_2-complete problem. In particular, we will show that the outcome determination problem for the MaxEq rule, when seen as a search problem, is complete (under polynomial-time Levin reductions) for FP^NP[log,wit]—which is a search variant of Θ^p_2. Then, we show that the problem of finding the minimum difference between two agents' satisfaction, and deciding if this value is divisible by 4, is a Θ^p_2-complete problem. Along the way, we show that deciding if there is a judgment set 𝐽* ∈ J(Φ) that has the same Hamming distance to each judgment set in a given profile 𝑱 is NP-hard, even in the case where the agenda consists of logically independent issues.

We begin, in Section A.1, by recalling some notions from computational complexity theory—in particular, notions related to search problems. Then, in Section A.2, we establish the computational complexity results mentioned above.

A.1 Additional complexity-theoretic preliminaries
We will consider search problems. Let Σ be an alphabet. A search problem is a binary relation 𝑅 over strings in Σ*. For any input string 𝑥 ∈ Σ*, we let 𝑅(𝑥) = { 𝑦 ∈ Σ* | (𝑥, 𝑦) ∈ 𝑅 } denote the set of solutions for 𝑥. We say that a Turing machine 𝑇 solves 𝑅 if on input 𝑥 ∈ Σ* the following holds: if there exists at least one 𝑦 such that (𝑥, 𝑦) ∈ 𝑅, then 𝑇 accepts 𝑥 and outputs some 𝑦 such that (𝑥, 𝑦) ∈ 𝑅; otherwise, 𝑇 rejects 𝑥. With any search problem 𝑅 we associate a decision problem 𝑆_𝑅, defined by 𝑆_𝑅 = { 𝑥 ∈ Σ* | there exists some 𝑦 ∈ Σ* such that (𝑥, 𝑦) ∈ 𝑅 }. We will use the following notion of reductions for search problems. A polynomial-time Levin reduction from one search problem 𝑅₁ to another search problem 𝑅₂ is a pair of polynomial-time computable functions (𝑔₁, 𝑔₂) such that:

• the function 𝑔₁ is a many-one reduction from 𝑆_{𝑅₁} to 𝑆_{𝑅₂}, i.e., for every 𝑥 ∈ Σ* it holds that 𝑥 ∈ 𝑆_{𝑅₁} if and only if 𝑔₁(𝑥) ∈ 𝑆_{𝑅₂}; and
• for every string 𝑥 ∈ 𝑆_{𝑅₁} and every solution 𝑦 ∈ 𝑅₂(𝑔₁(𝑥)), it holds that (𝑥, 𝑔₂(𝑥, 𝑦)) ∈ 𝑅₁.

One could also consider other types of reductions for search problems, such as Cook reductions (an algorithm that solves 𝑅₁ by making one or more queries to an oracle that solves the search problem 𝑅₂). For more details, we refer to textbooks on the topic—e.g., [37].

We will use complexity classes that are based on Turing machines that have access to an oracle. Let 𝐶 be a complexity class of decision problems. A Turing machine 𝑇 with access to a yes-no 𝐶 oracle is a Turing machine with a dedicated oracle tape and dedicated states 𝑞_oracle, 𝑞_yes and 𝑞_no. Whenever 𝑇 is in the state 𝑞_oracle, it does not proceed according to the transition relation, but instead it transitions into the state 𝑞_yes if the oracle tape contains a string 𝑥 that is a yes-instance for the problem 𝐶, i.e., if 𝑥 ∈ 𝐶, and it transitions into the state 𝑞_no if 𝑥 ∉ 𝐶. Let 𝐶 be a complexity class of search problems.
Similarly, a Turing machine 𝑇 with access to a witness 𝐶 oracle has a dedicated oracle tape and dedicated states 𝑞_oracle, 𝑞_yes and 𝑞_no. Whenever 𝑇 is in the state 𝑞_oracle, it transitions into the state 𝑞_yes if the oracle tape contains a string 𝑥 such that there exists some 𝑦 with (𝑥, 𝑦) ∈ 𝐶, and in addition the contents of the oracle tape are replaced by (the encoding of) such a 𝑦; it transitions into the state 𝑞_no if there exists no 𝑦 such that (𝑥, 𝑦) ∈ 𝐶. Such transitions are called oracle queries.

We consider the following complexity classes that are based on oracle machines.

• The class P^NP[log] consists of all decision problems that can be decided by a deterministic polynomial-time Turing machine that has access to a yes-no NP oracle, and on any input of length 𝑛 queries the oracle at most 𝑂(log 𝑛) many times. This class coincides with the class P^NP_|| (spoken: "parallel access to NP"), and is also known as Θ^p_2. Incidentally, allowing the algorithms access to a witness FNP oracle instead of access to a yes-no NP oracle leads to the same class of problems, i.e., the class P^NP[log,wit] coincides with P^NP[log] (cf. [46, Corollary 6.3.5]).

• The class FP^NP[log,wit] consists of all search problems that can be solved by a deterministic polynomial-time Turing machine that has access to a witness FNP oracle, and on any input of length 𝑛 queries the oracle at most 𝑂(log 𝑛) many times. In a sense, it is the search variant of P^NP[log]. This complexity class happens to coincide with the class FNP//OptP[log], which is defined as the set of all search problems that are solvable by a nondeterministic polynomial-time Turing machine that receives as advice the answer to one "NP optimization" computation [14, 44].
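The hardness arguments in Section A.2 revolve around 1-in-3-satisfiability: a 3CNF formula is 1-in-3-satisfiable if some truth assignment satisfies exactly one literal in every clause. As a small self-contained illustration of this notion (not part of the reductions themselves), the following Python sketch checks it by brute force, with clauses encoded as DIMACS-style integer triples—an encoding chosen here purely for convenience.

```python
from itertools import product

def one_in_three_satisfiable(clauses, n):
    """Brute-force 1-in-3-satisfiability over n variables. A literal is a
    nonzero integer; a negative integer denotes a negated variable."""
    for assignment in product((False, True), repeat=n):
        def true_lit(l):
            value = assignment[abs(l) - 1]
            return value if l > 0 else not value
        # Exactly one literal per clause must be satisfied.
        if all(sum(true_lit(l) for l in clause) == 1 for clause in clauses):
            return True
    return False
```

For instance, a single clause (x1 ∨ x2 ∨ x3) is 1-in-3-satisfiable, whereas requiring exactly one variable true and, simultaneously, exactly one variable false among three variables is not.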
A.2 Complexity proofs for the MaxEq rule
We define the search problem of outcome determination for a judgment aggregation rule 𝐹 as follows. The input for this problem consists of an agenda Φ and a profile 𝑱 = (𝐽₁, …, 𝐽ₙ) ∈ J(Φ)ⁿ. The problem is to output some judgment set 𝐽* ∈ 𝐹(𝑱). In other words, the problem is the relation 𝑅 that consists of all pairs ((Φ, 𝑱), 𝐽*) such that 𝑱 ∈ J(Φ)ⁿ is a profile and 𝐽* ∈ 𝐹(𝑱). We will show that the outcome determination problem for the MaxEq rule is complete for the complexity class FP^NP[log,wit] under polynomial-time Levin reductions.

In order to do so, we begin by establishing a lemma that will be useful for the FP^NP[log,wit]-hardness proof. This lemma uses the notion of 1-in-3-satisfiability. Let 𝜓 be a propositional logic formula in 3CNF, i.e., 𝜓 = 𝑐₁ ∧ ⋯ ∧ 𝑐_𝑚, where each 𝑐ᵢ is a clause containing exactly three literals. Then 𝜓 is 1-in-3-satisfiable if there exists a truth assignment 𝛼 that satisfies exactly one of the three literals in each clause 𝑐ᵢ.

Lemma A.1.
Let 𝜓 be a 3CNF formula with clauses 𝑐₁, …, 𝑐_𝑏 that are all of size exactly 3 and with 𝑛 variables 𝑥₁, …, 𝑥ₙ, such that (1) no clause of 𝜓 contains complementary literals, and (2) there exist some 𝑥* ∈ {𝑥₁, …, 𝑥ₙ} and a partial truth assignment 𝛽 : {𝑥₁, …, 𝑥ₙ} \ {𝑥*} → {0, 1} that satisfies exactly one literal in each clause where neither 𝑥* nor ¬𝑥* occurs, and satisfies no literal in each clause where 𝑥* or ¬𝑥* occurs. We can, in polynomial time given 𝜓, construct an agenda Φ on 𝑚 = 2𝑛 + 5 issues such that J(Φ) = {0, 1}^𝑚, and a profile 𝑱 over Φ, such that:

• Φ = { 𝑦ᵢ, 𝑦′ᵢ | 1 ≤ 𝑖 ≤ 𝑛 } ∪ { 𝑧₁, …, 𝑧₅ };
• there exists a judgment set 𝐽 ∈ J(Φ) that has the same Hamming distance to each 𝐽′ ∈ 𝑱 if and only if 𝜓 is 1-in-3-satisfiable;
• if 𝜓 is not 1-in-3-satisfiable, then for each judgment set 𝐽 ∈ J(Φ) it holds that max_{𝐽′,𝐽′′ ∈ 𝑱} |𝐻(𝐽, 𝐽′) − 𝐻(𝐽, 𝐽′′)| ≥ 2, and there exists some 𝐽 ∈ J(Φ) such that max_{𝐽′,𝐽′′ ∈ 𝑱} |𝐻(𝐽, 𝐽′) − 𝐻(𝐽, 𝐽′′)| = 2;
• the above two properties hold also when restricted to judgment sets 𝐽 that contain exactly one of 𝑦ᵢ and 𝑦′ᵢ for each 1 ≤ 𝑖 ≤ 𝑛, and that contain ¬𝑧₁, …, ¬𝑧₄, 𝑧₅; and
• the number of judgment sets in the profile 𝑱 only depends on 𝑛 and 𝑏.

Proof.
We let the agenda Φ consist of the 2𝑛 + 5 issues 𝑦₁, …, 𝑦ₙ, 𝑦′₁, …, 𝑦′ₙ, 𝑧₁, …, 𝑧₅. It follows directly that J(Φ) = {0, 1}^{2𝑛+5}. Then, we start by constructing 2𝑛 judgment sets 𝐽₁, …, 𝐽_{2𝑛} over these issues, defined as depicted in Figure 2.

[Figure 2: Construction of the judgment sets 𝐽₁, …, 𝐽_{2𝑛} in the proof of Lemma A.1, given as a (0,1)-matrix with rows 𝐽₁, …, 𝐽ₙ, 𝐽_{𝑛+1}, …, 𝐽_{2𝑛} and columns 𝑦₁, …, 𝑦ₙ, 𝑦′₁, …, 𝑦′ₙ, 𝑧₁, …, 𝑧₅.]

Then, for each clause 𝑐ₖ of 𝜓, we introduce three judgment sets 𝐽_{𝑘,1}, 𝐽_{𝑘,2}, and 𝐽_{𝑘,3} that are defined as follows. The judgment set 𝐽_{𝑘,1} contains 𝑦ᵢ, ¬𝑦′ᵢ for each positive literal 𝑥ᵢ occurring in 𝑐ₖ, and contains 𝑦′ᵢ, ¬𝑦ᵢ for each negative literal ¬𝑥ᵢ occurring in 𝑐ₖ. Conversely, the judgment sets 𝐽_{𝑘,2}, 𝐽_{𝑘,3} contain 𝑦′ᵢ, ¬𝑦ᵢ for each positive literal 𝑥ᵢ occurring in 𝑐ₖ, and contain 𝑦ᵢ, ¬𝑦′ᵢ for each negative literal ¬𝑥ᵢ occurring in 𝑐ₖ. For each variable 𝑥ⱼ that does not occur in 𝑐ₖ, all three of 𝐽_{𝑘,1}, 𝐽_{𝑘,2}, 𝐽_{𝑘,3} contain ¬𝑦ⱼ, ¬𝑦′ⱼ. Finally, the judgment set 𝐽_{𝑘,1} contains ¬𝑧₁, …, ¬𝑧₄, 𝑧₅, the judgment set 𝐽_{𝑘,2} contains ¬𝑧₁, ¬𝑧₂, 𝑧₃, 𝑧₄, 𝑧₅, and the judgment set 𝐽_{𝑘,3} contains 𝑧₁, 𝑧₂, ¬𝑧₃, ¬𝑧₄, 𝑧₅. This is illustrated in Figure 3 for the example clause (𝑥₁ ∨ ¬𝑥₂ ∨ 𝑥₃).

[Figure 3: Illustration of the construction of the judgment sets 𝐽_{𝑘,1}, 𝐽_{𝑘,2}, 𝐽_{𝑘,3} for the example clause 𝑐ₖ = (𝑥₁ ∨ ¬𝑥₂ ∨ 𝑥₃) in the proof of Lemma A.1.]

The profile 𝑱 then consists of the judgment sets 𝐽₁, …, 𝐽_{2𝑛}, as well as the judgment sets 𝐽_{𝑘,1}, 𝐽_{𝑘,2}, 𝐽_{𝑘,3} for each 1 ≤ 𝑘 ≤ 𝑏. In order to prove that 𝑱 has the required properties, we consider the following observations and claims (and prove the claims).

Observation 1.
If a judgment set 𝐽 contains exactly one of 𝑦ᵢ and 𝑦′ᵢ for each 𝑖, then the Hamming distance from 𝐽 to each of 𝐽₁, …, 𝐽_{2𝑛}—restricted to the issues 𝑦₁, …, 𝑦ₙ, 𝑦′₁, …, 𝑦′ₙ—is exactly 𝑛.

Claim 2.
If a judgment set 𝐽 contains both 𝑦ᵢ and 𝑦′ᵢ for some 𝑖, or neither 𝑦ᵢ nor 𝑦′ᵢ for some 𝑖, then there are at least two judgment sets among 𝐽₁, …, 𝐽_{2𝑛} such that the Hamming distances from 𝐽 to these two judgment sets differ by at least 2.

Proof of Claim 2.
We argue that this is the case for 𝑦₁ and 𝑦′₁, i.e., for the case of 𝑖 = 1. For other values of 𝑖, an entirely similar argument works.

Picking both 𝑦₁ and 𝑦′₁ to be part of a judgment set 𝐽 adds to the Hamming distances between 𝐽, on the one hand, and 𝐽₁, …, 𝐽_{2𝑛}, on the other hand, according to the vector

𝑣⁺₁ = (0, 2, …, 2, 2, 0, …, 0),

where the first block of 2's and the final block of 0's each have length 𝑛 − 1. Picking both ¬𝑦₁ and ¬𝑦′₁ to be part of the set 𝐽 adds to the Hamming distances between 𝐽 and 𝐽₁, …, 𝐽_{2𝑛} according to the vector

𝑣⁻₁ = (2, 0, …, 0, 0, 2, …, 2),

with blocks of the same lengths. Picking exactly one of 𝑦₁ and 𝑦′₁ (and the negation of the other) to be part of 𝐽 corresponds to the all-ones vector 𝟙 = (1, …, 1). More generally, picking both 𝑦ᵢ and 𝑦′ᵢ to be part of 𝐽 adds to the Hamming distances according to the vector 𝑣⁺ᵢ whose first 𝑛 entries are 2 except for a 0 in position 𝑖, and whose last 𝑛 entries are 0 except for a 2 in position 𝑖; picking both ¬𝑦ᵢ and ¬𝑦′ᵢ corresponds to the vector 𝑣⁻ᵢ whose first 𝑛 entries are 0 except for a 2 in position 𝑖, and whose last 𝑛 entries are 2 except for a 0 in position 𝑖; and picking exactly one of 𝑦ᵢ and 𝑦′ᵢ corresponds to the all-ones vector 𝟙. For each 1 ≤ 𝑖 ≤ 𝑛, the vectors 𝑣⁻₁, …, 𝑣⁻ₙ and 𝑣⁺ᵢ are linearly independent, and the vectors 𝑣⁺₁, …, 𝑣⁺ₙ and 𝑣⁻ᵢ are linearly independent.

Suppose now that we pick 𝐽 to contain both 𝑦₁ and 𝑦′₁. Suppose, moreover, to derive a contradiction, that the Hamming distance from 𝐽 to each of the judgment sets 𝐽₁, …, 𝐽_{2𝑛} is the same. This means that there is some way of choosing 𝑠₂, …, 𝑠ₙ ∈ {+, −} such that 𝑣⁺₁ + Σ_{1 < 𝑖 ≤ 𝑛} 𝑣^{𝑠ᵢ}ᵢ is a constant vector, which contradicts the fact that 𝑣⁺₁, …, 𝑣⁺ₙ and 𝑣⁻ᵢ are linearly independent—since each 𝑣⁻ⱼ can be expressed as 𝑣⁻ⱼ = 𝑣⁺ᵢ + 𝑣⁻ᵢ − 𝑣⁺ⱼ. Thus, we can conclude that there exist at least two judgment sets among 𝐽₁, …, 𝐽_{2𝑛} such that the Hamming distances from 𝐽 to these two judgment sets differ.
Moreover, since all these vectors contain only even numbers, and the coefficients in the sum are integers (in fact, either 0 or 1), we know that the difference must be even and thus at least 2. An entirely similar argument works for the case where we pick 𝐽 to contain both ¬𝑦₁ and ¬𝑦′₁. ⊣

Observation 3.
Let 𝛼 : {𝑥₁, …, 𝑥ₙ} → {0, 1} be a truth assignment that satisfies exactly one literal in each clause of 𝜓. Consider the judgment set 𝐽_𝛼 = { 𝑦ᵢ, ¬𝑦′ᵢ | 1 ≤ 𝑖 ≤ 𝑛, 𝛼(𝑥ᵢ) = 1 } ∪ { 𝑦′ᵢ, ¬𝑦ᵢ | 1 ≤ 𝑖 ≤ 𝑛, 𝛼(𝑥ᵢ) = 0 } ∪ { ¬𝑧₁, …, ¬𝑧₄, 𝑧₅ }. Then the Hamming distance from 𝐽_𝛼 to each judgment set in the profile 𝑱 is exactly 𝑛 + 1.

Claim 4.
Suppose that 𝜓 is not 1-in-3-satisfiable. Then for each judgment set 𝐽 that contains exactly one of 𝑦ᵢ and 𝑦′ᵢ for each 𝑖, there is some clause 𝑐ₖ of 𝜓 such that the difference in Hamming distance from 𝐽 to (two of) 𝐽_{𝑘,1}, 𝐽_{𝑘,2}, 𝐽_{𝑘,3} is at least 2.

Proof of Claim 4.
Take an arbitrary judgment set 𝐽 that contains exactly one of 𝑦ᵢ and 𝑦′ᵢ for each 𝑖. This judgment set 𝐽 corresponds to the truth assignment 𝛼_𝐽 : {𝑥₁, …, 𝑥ₙ} → {0, 1} defined such that for each 1 ≤ 𝑖 ≤ 𝑛 it is the case that 𝛼_𝐽(𝑥ᵢ) = 1 if 𝑦ᵢ ∈ 𝐽 and 𝛼_𝐽(𝑥ᵢ) = 0 if 𝑦ᵢ ∉ 𝐽. Since 𝜓 is not 1-in-3-satisfiable, we know that there exists some clause 𝑐ₖ such that 𝛼_𝐽 does not satisfy exactly one literal in 𝑐ₖ. We distinguish several cases: either (i) 𝛼_𝐽 satisfies no literals in 𝑐ₖ, or (ii) 𝛼_𝐽 satisfies two literals in 𝑐ₖ, or (iii) 𝛼_𝐽 satisfies three literals in 𝑐ₖ. In each case, the Hamming distances from 𝐽, on the one hand, to 𝐽_{𝑘,1}, 𝐽_{𝑘,2}, 𝐽_{𝑘,3}, on the other hand, must differ by at least 2. This can be verified case by case—and we omit a further detailed case-by-case verification of this. ⊣

Claim 5. If 𝜓 is not 1-in-3-satisfiable, then there exists a judgment set 𝐽 such that max_{𝐽′,𝐽′′ ∈ 𝑱} |𝐻(𝐽, 𝐽′) − 𝐻(𝐽, 𝐽′′)| = 2.

Proof of Claim 5.
Suppose that 𝜓 is not 1-in-3-satisfiable. We know that there exist a variable 𝑥* ∈ {𝑥₁, …, 𝑥ₙ} and a partial truth assignment 𝛽 : {𝑥₁, …, 𝑥ₙ} \ {𝑥*} → {0, 1} that satisfies exactly one literal in each clause where neither 𝑥* nor ¬𝑥* occurs, and satisfies no literal in each clause where 𝑥* or ¬𝑥* occurs. Without loss of generality, suppose that 𝑥* = 𝑥₁. Now consider the judgment set 𝐽_𝛽 = { ¬𝑦₁, ¬𝑦′₁ } ∪ { 𝑦ᵢ, ¬𝑦′ᵢ | 1 < 𝑖 ≤ 𝑛, 𝛽(𝑥ᵢ) = 1 } ∪ { 𝑦′ᵢ, ¬𝑦ᵢ | 1 < 𝑖 ≤ 𝑛, 𝛽(𝑥ᵢ) = 0 } ∪ { ¬𝑧₁, …, ¬𝑧₄, 𝑧₅ }. One can verify that the Hamming distances from 𝐽_𝛽, on the one hand, to the judgment sets 𝐽′ ∈ 𝑱, on the other hand, differ by at most 2—and for some 𝐽′, 𝐽′′ ∈ 𝑱 it holds that |𝐻(𝐽_𝛽, 𝐽′) − 𝐻(𝐽_𝛽, 𝐽′′)| = 2. ⊣

We now use the above observations and claims to show that Φ and 𝑱 have the required properties. If 𝜓 is 1-in-3-satisfiable, then by Observation 3 there is some 𝐽 ∈ J(Φ) that has the same Hamming distance to each 𝐽′ ∈ 𝑱. Suppose, conversely, that 𝜓 is not 1-in-3-satisfiable. Then by Claims 2 and 4, for every 𝐽 ∈ J(Φ) there exist two judgment sets 𝐽′, 𝐽′′ ∈ 𝑱 such that 𝐻(𝐽, 𝐽′) and 𝐻(𝐽, 𝐽′′) differ (by at least 2). Thus, 𝜓 is 1-in-3-satisfiable if and only if there exists a judgment set 𝐽 ∈ J(Φ) that has the same Hamming distance to each judgment set in the profile 𝑱.

Suppose that 𝜓 is not 1-in-3-satisfiable. Then by Claims 2 and 4, we know that for each 𝐽 ∈ J(Φ) it holds that max_{𝐽′,𝐽′′ ∈ 𝑱} |𝐻(𝐽, 𝐽′) − 𝐻(𝐽, 𝐽′′)| ≥
2. Moreover, by Claim 5, there exists a judgment set 𝐽 ∈ J(Φ) such that max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽, 𝐽′) − 𝐻(𝐽, 𝐽′′)| = 2. Furthermore, the issues in Φ are logically independent, so J(Φ) = {0, 1}^𝑚 for some 𝑚. Moreover, one can straightforwardly verify that the statements after the first two bullet points in the statement of the lemma also hold when restricted to judgment sets 𝐽 that contain exactly one of 𝑦_𝑖 and 𝑦′_𝑖 for each 1 ≤ 𝑖 ≤ 𝑛 and that contain ¬𝑧_1, . . . , ¬𝑧_{𝑞−1}, 𝑧_𝑞. This concludes the proof of the lemma. □

Now that we have established the lemma, we continue with the FP^NP[log,wit]-completeness proof.

Theorem 3.
The problem of outcome determination for the MaxEq rule is FP^NP[log,wit]-complete under polynomial-time Levin reductions.

Proof.
To show membership in FP^NP[log,wit], we describe an algorithm with access to a witness FNP oracle that solves the problem in polynomial time by making at most a logarithmic number of oracle queries. This algorithm will use an oracle for the following
FNP problem: given some 𝑘 ∈ ℕ, the agenda Φ, and the profile 𝑱, compute a judgment set 𝐽 ∈ J(Φ) such that max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽, 𝐽′) − 𝐻(𝐽, 𝐽′′)| ≤ 𝑘, if such a 𝐽 exists, and return “none” otherwise. By using 𝑂(log |Φ|) queries to this oracle, one can compute the minimum value 𝑘_min of max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽, 𝐽′) − 𝐻(𝐽, 𝐽′′)|, where the minimum is taken over all judgment sets 𝐽 ∈ J(Φ). Then, with a final query to the oracle, using 𝑘 = 𝑘_min, one can use the oracle to produce a judgment set 𝐽∗ ∈ MaxEq(𝑱).

We will show FP^NP[log,wit]-hardness by giving a polynomial-time Levin reduction from the FP^NP[log,wit]-complete problem of finding a satisfying assignment for a (satisfiable) propositional formula 𝜓 that sets a maximum number of variables to true (among all satisfying assignments of 𝜓) [14, 44, 47, 65]. Let 𝜓 be an arbitrary satisfiable propositional logic formula with 𝑣 variables. Without loss of generality, assume that 𝑣 is even and that there is a satisfying truth assignment for 𝜓 that sets at least one variable to true. We will construct an agenda Φ and a profile 𝑱 such that the minimum value of max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽∗, 𝐽′) − 𝐻(𝐽∗, 𝐽′′)|, over all 𝐽∗ ∈ J(Φ), is divisible by 4 if and only if the maximum number of variables set to true in any satisfying assignment of 𝜓 is odd. Moreover, we will construct Φ and 𝑱 in such a way that from any 𝐽∗ ∈ MaxEq(𝑱) we can construct, in polynomial time, a satisfying assignment of 𝜓 that sets a maximum number of variables to true. This—together with the fact that Φ and 𝑱 can be constructed in polynomial time—suffices to exhibit a polynomial-time Levin reduction, and thus to show FP^NP[log,wit]-hardness. We proceed in several stages (i–iv).

(i) We begin, in the first stage, by constructing a 3CNF formula 𝜓_𝑖 with certain properties, for each 1 ≤ 𝑖 ≤ 𝑣.

Claim 6.
We can construct in polynomial time, for each 1 ≤ 𝑖 ≤ 𝑣, a 3CNF formula 𝜓_𝑖 that is 1-in-3-satisfiable if and only if there is a truth assignment that satisfies 𝜓 and that sets at least 𝑖 variables in 𝜓 to true. Moreover, we construct these formulas 𝜓_𝑖 in such a way that they all contain exactly the same variables 𝑥_1, . . . , 𝑥_𝑛 and exactly the same number 𝑏 of clauses, and such that each formula 𝜓_𝑖 has the following properties:
• it contains no clause with complementary literals, and
• it contains a variable 𝑥∗ and admits a partial truth assignment 𝛽 : {𝑥_1, . . . , 𝑥_𝑛} \ {𝑥∗} → {0, 1} that satisfies exactly one literal in each clause where 𝑥∗ or ¬𝑥∗ does not occur, and satisfies no literal in each clause where 𝑥∗ or ¬𝑥∗ occurs.

Proof of Claim 6.
Consider the problems of deciding whether a given truth assignment 𝛼 to the variables in 𝜓 satisfies 𝜓, and of deciding whether 𝛼 sets at least 𝑖 variables to true, for some 1 ≤ 𝑖 ≤ 𝑣. These problems are both polynomial-time solvable. Therefore, by using standard techniques from the proof of the Cook–Levin Theorem [16], we can construct in polynomial time a propositional formula 𝜒 in 3CNF containing (among others) the variables 𝑡_1, . . . , 𝑡_𝑣, the variable 𝑥† and the variables in 𝜓, such that any truth assignment to the variables 𝑥†, 𝑡_1, . . . , 𝑡_𝑣 and the variables in 𝜓 can be extended to a satisfying truth assignment for 𝜒 if and only if either (i) it sets 𝑥† to true, or (ii) it satisfies 𝜓 and for each 1 ≤ 𝑖 ≤ 𝑣 it sets 𝑡_𝑖 to false if and only if it sets at least 𝑖 variables among the variables in 𝜓 to true. Then, we can transform this 3CNF formula 𝜒 into another 3CNF formula 𝜒′ with a similar property—namely, that any truth assignment to the variables 𝑥†, 𝑡_1, . . . , 𝑡_𝑣 and the variables in 𝜓 can be extended to a truth assignment that satisfies exactly one literal in each clause of 𝜒′ if and only if either (i) it sets 𝑥† to true, or (ii) it satisfies 𝜓 and for each 1 ≤ 𝑖 ≤ 𝑣 it sets 𝑡_𝑖 to false if and only if it sets at least 𝑖 variables among the variables in 𝜓 to true. We do so by using the polynomial-time reduction from 3SAT to 1-IN-3-SAT given by Schaefer [62].

Then, for each particular value of 𝑖, we add two clauses that intuitively serve to ensure that the variable 𝑡_𝑖 must be set to false in any 1-in-3-satisfying truth assignment. We add (𝑠_1 ∨ 𝑠_2 ∨ 𝑡_𝑖) and (¬𝑠_1 ∨ ¬𝑠_2 ∨ 𝑡_𝑖), where 𝑠_1 and 𝑠_2 are fresh variables—the only way to satisfy exactly one literal in both of these clauses is to set exactly one of 𝑠_1 and 𝑠_2 to true, and to set 𝑡_𝑖 to false.

Moreover, we add the clauses (𝑟_1 ∨ 𝑟_2 ∨ 𝑥†), (¬𝑟_1 ∨ 𝑟_2 ∨ ¬𝑥†), (𝑥∗ ∨ 𝑟_3 ∨ 𝑟_4), and (¬𝑥∗ ∨ 𝑟_3 ∨ 𝑟_4), where 𝑟_1, . . . , 𝑟_4 and 𝑥∗ are fresh variables.
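The forcing behaviour of gadgets of this kind can be sanity-checked by brute force. The following sketch is our own illustration (not part of the construction): it enumerates all assignments and confirms that the two-clause gadget (𝑠_1 ∨ 𝑠_2 ∨ 𝑡_𝑖), (¬𝑠_1 ∨ ¬𝑠_2 ∨ 𝑡_𝑖) admits a 1-in-3 solution only with 𝑡_𝑖 set to false.

```python
from itertools import product

def one_in_three(literals):
    # A clause is 1-in-3-satisfied when exactly one of its literals is true.
    return sum(literals) == 1

def gadget_ok(s1, s2, t):
    # The two gadget clauses (s1 v s2 v t) and (-s1 v -s2 v t).
    return (one_in_three([s1, s2, t])
            and one_in_three([not s1, not s2, t]))

solutions = [(s1, s2, t)
             for s1, s2, t in product([False, True], repeat=3)
             if gadget_ok(s1, s2, t)]

# The gadget is satisfiable, and every 1-in-3 solution forces t to false
# (by setting exactly one of s1, s2 to true).
assert solutions and all(not t for (_, _, t) in solutions)
```

The same enumeration idea can be used to check any small fixed clause gadget.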
These four clauses serve to ensure the property that there always exists a partial truth assignment 𝛽 : {𝑥_1, . . . , 𝑥_𝑛} \ {𝑥∗} → {0, 1} that satisfies exactly one literal in each clause where 𝑥∗ or ¬𝑥∗ does not occur, and satisfies no literal in each clause where 𝑥∗ or ¬𝑥∗ occurs. Moreover, these added clauses preserve 1-in-3-satisfiability if and only if there exists a 1-in-3-satisfying truth assignment for the formula without these added clauses that sets 𝑥† to true.

Putting all this together, we have constructed a 3CNF formula 𝜒′′—consisting of 𝜒′ with the addition of the six clauses mentioned in the above two paragraphs—that has the properties mentioned in the statement of the claim. In particular, 𝜒′′ is 1-in-3-satisfiable if and only if there is a truth assignment that satisfies 𝜓 and that sets at least 𝑖 variables in 𝜓 to true. Moreover, the constructed formula 𝜒′′ has the same variables and the same number of clauses, regardless of the value of 𝑖 chosen in the construction. ⊣

Since the formulas 𝜓_𝑖, as described in Claim 6, satisfy the requirements of Lemma A.1, we can construct agendas Φ_1, . . . , Φ_𝑣 and profiles 𝑱_1, . . . , 𝑱_𝑣 such that for each 1 ≤ 𝑖 ≤ 𝑣 the agenda Φ_𝑖 and the profile 𝑱_𝑖 satisfy the conditions mentioned in the statement of Lemma A.1. Moreover, we can construct the agendas Φ_1, . . . , Φ_𝑣 in such a way that they are pairwise disjoint. For each 1 ≤ 𝑖 ≤ 𝑣, let 𝑦_{𝑖,1}, . . . , 𝑦_{𝑖,𝑛}, 𝑦′_{𝑖,1}, . . . , 𝑦′_{𝑖,𝑛}, 𝑧_{𝑖,1}, . . . , 𝑧_{𝑖,𝑞} denote the issues in Φ_𝑖, and let 𝑱_𝑖 = (𝐽_{𝑖,1}, . . . , 𝐽_{𝑖,𝑢}).

(ii) Then, in the second stage, we use the agendas Φ_1, . . . , Φ_𝑣 and the profiles 𝑱_1, . . . , 𝑱_𝑣 to construct a single agenda Φ and a single profile 𝑱. We let Φ contain the issues 𝑦_{𝑖,𝑗}, 𝑦′_{𝑖,𝑗} and 𝑧_{𝑖,1}, . . . , 𝑧_{𝑖,𝑞}, for each 1 ≤ 𝑖 ≤ 𝑣 and each 1 ≤ 𝑗 ≤ 𝑛, as well as issues 𝑤_{𝑖,ℓ,𝑘} for each 1 ≤ 𝑖 ≤ 𝑣, each 1 ≤ ℓ ≤ 𝑢, and each 1 ≤ 𝑘 ≤ 𝑛.
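For intuition, the objective that MaxEq minimizes throughout this proof—the spread max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽, 𝐽′) − 𝐻(𝐽, 𝐽′′)| of Hamming distances—can be evaluated by brute force on small instances with logically independent issues. The encoding below (judgment sets as 0/1 tuples) is our own illustrative sketch, not part of the reduction.

```python
from itertools import product

def hamming(a, b):
    # Hamming distance between two judgment sets encoded as 0/1 tuples.
    return sum(x != y for x, y in zip(a, b))

def maxeq_value(profile, m):
    # Smallest achievable spread of Hamming distances to the profile,
    # over all judgment sets in {0,1}^m (logically independent issues).
    # Note: max over pairs |H(J,J') - H(J,J'')| equals max(dists) - min(dists).
    best = None
    for j in product([0, 1], repeat=m):
        dists = [hamming(j, jp) for jp in profile]
        spread = max(dists) - min(dists)
        if best is None or spread < best:
            best = spread
    return best

# The set (0, 1) is equidistant from (0, 0) and (1, 1), so spread 0 is
# achievable for this profile.
print(maxeq_value([(0, 0), (1, 1)], 2))  # prints 0
```

Outcome determination for MaxEq asks for a judgment set attaining this minimum, which is what the hardness argument below shows to be intractable in general.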
We let 𝑱 contain judgment sets 𝐽′_{𝑖,ℓ} for each 1 ≤ 𝑖 ≤ 𝑣 and each 1 ≤ ℓ ≤ 𝑢, which we define below. Intuitively, for each 𝑖, the sets 𝐽′_{𝑖,ℓ} will encode the judgment sets 𝐽_{𝑖,1}, . . . , 𝐽_{𝑖,𝑢} from the profile 𝑱_𝑖.

Take an arbitrary 1 ≤ 𝑖 ≤ 𝑣 and an arbitrary 1 ≤ ℓ ≤ 𝑢. We let 𝐽′_{𝑖,ℓ} agree with 𝐽_{𝑖,ℓ} (from 𝑱_𝑖) on all issues from Φ_𝑖—i.e., the issues 𝑦_{𝑖,𝑗}, 𝑦′_{𝑖,𝑗} for each 1 ≤ 𝑗 ≤ 𝑛 and 𝑧_{𝑖,1}, . . . , 𝑧_{𝑖,𝑞}. On all issues 𝜑 from each Φ_{𝑖′}, for 1 ≤ 𝑖′ ≤ 𝑣 with 𝑖′ ≠ 𝑖, we let 𝐽′_{𝑖,ℓ}(𝜑) =
0. Then, we let 𝐽′_{𝑖,ℓ}(𝑤_{𝑖′,ℓ′,𝑘}) = 1 if and only if 𝑖 = 𝑖′ and ℓ = ℓ′. In other words, 𝐽′_{𝑖,ℓ} agrees with 𝐽_{𝑖,ℓ} on the issues from Φ_𝑖, it sets every issue from each other Φ_{𝑖′} to false, it sets all the issues 𝑤_{𝑖,ℓ,𝑘} to true, and it sets all other issues 𝑤_{𝑖′,ℓ′,𝑘} to false.

(iii) In the third stage, we replace the logically independent issues in Φ by other issues, in order to place restrictions on the different judgment sets that are allowed. We start by describing a constraint—in the form of a propositional logic formula Γ on the original (logically independent) issues in Φ—and then we describe how this constraint can be used to produce replacement formulas for the issues in Φ. We define Γ = Γ_1 ∨ Γ_2 by:

Γ_1 = ⋁_{1≤𝑖≤𝑣, 1≤ℓ≤𝑢} ( ⋀_{1≤𝑘≤𝑛} 𝑤_{𝑖,ℓ,𝑘} ∧ ⋀_{1≤𝑖′≤𝑣, 1≤ℓ′≤𝑢, 1≤𝑘≤𝑛, (𝑖,ℓ)≠(𝑖′,ℓ′)} ¬𝑤_{𝑖′,ℓ′,𝑘} )

Γ_2 = ( ⋀_{1≤𝑖≤𝑣, 1≤ℓ≤𝑢, 1≤𝑘≤𝑛} ¬𝑤_{𝑖,ℓ,𝑘} ) ∧ ( ⋀_{1≤𝑖≤𝑣, 1≤𝑗≤𝑛} (𝑦_{𝑖,𝑗} ↔ ¬𝑦′_{𝑖,𝑗}) )

In other words, Γ requires that either (1) for some 𝑖, ℓ, all issues 𝑤_{𝑖,ℓ,𝑘} are set to true and all other issues 𝑤_{𝑖′,ℓ′,𝑘} are set to false, or (2) all issues 𝑤_{𝑖,ℓ,𝑘} are set to false and for each 𝑖, 𝑗 exactly one of 𝑦_{𝑖,𝑗} and 𝑦′_{𝑖,𝑗} is set to true. Because we can in polynomial time compute a satisfying truth assignment for Γ, we know that we can also in polynomial time compute a replacement 𝜑′ for each 𝜑 ∈ Φ, resulting in an agenda Φ′, so that the logically consistent judgment sets 𝐽 ∈ J(Φ′) correspond exactly to the judgment sets 𝐽 ∈ J(Φ) that satisfy the constraint Γ [27, Proposition 3].
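The constraint Γ = Γ_1 ∨ Γ_2 can be read directly as a predicate on truth assignments to the 𝑤- and 𝑦-issues. The following sketch is our own illustration; the nested-list encoding (with 0-based indices standing in for the 1-based indices above) is an assumption of the sketch, not the paper's notation.

```python
def satisfies_gamma(w, y, yp):
    # w[i][l][k] encodes issue w_{i,l,k}; y[i][j] and yp[i][j] encode the
    # issues y_{i,j} and y'_{i,j}, all as booleans.
    cells = [(i, l) for i in range(len(w)) for l in range(len(w[i]))]

    def gamma1():
        # Gamma_1: for some block (i, l), all issues w_{i,l,k} are true
        # and every other w-issue is false.
        return any(
            all(w[i][l])
            and all(not x
                    for (i2, l2) in cells if (i2, l2) != (i, l)
                    for x in w[i2][l2])
            for (i, l) in cells)

    def gamma2():
        # Gamma_2: every w-issue is false, and exactly one of y_{i,j}
        # and y'_{i,j} is true for each pair.
        return (all(not x for (i, l) in cells for x in w[i][l])
                and all(y[i][j] != yp[i][j]
                        for i in range(len(y))
                        for j in range(len(y[i]))))

    return gamma1() or gamma2()
```

This mirrors the prose reading of Γ: disjunct (1) is gamma1 and disjunct (2) is gamma2.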
In the remainder of this proof, we will use Φ (with the restriction that judgment sets should satisfy Γ) interchangeably with Φ′.

(iv) Finally, in the fourth stage, we duplicate some issues 𝜑 ∈ Φ a certain number of times, by adding semantically equivalent (yet syntactically different) issues to the agenda Φ, and by updating the judgment sets in the profile 𝑱 accordingly. For each 1 ≤ 𝑖 ≤ 𝑣, we will define a number 𝑐_𝑖 and will make sure that there are 𝑐_𝑖 (syntactically different, logically equivalent) copies of each issue in Φ that originated from the agenda Φ_𝑖. In other words, we duplicate each agenda Φ_𝑖 a certain number of times, by adding 𝑐_𝑖 − 1 additional copies of each issue in Φ_𝑖. For each 1 ≤ 𝑖 ≤ 𝑣, we let 𝑐_𝑖 = (𝑣 − 𝑖).

This concludes the description of our reduction—i.e., of the agenda Φ and the profile 𝑱. What remains is to show that the minimum value of max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽∗, 𝐽′) − 𝐻(𝐽∗, 𝐽′′)|, over all 𝐽∗ ∈ J(Φ), is divisible by 4 if and only if the maximum number of variables set to true in any satisfying assignment of 𝜓 is odd. To do so, we begin with stating and proving the following claims.

Claim 7.
For any judgment set 𝐽∗ ∈ J(Φ) that satisfies Γ_1, the value of max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽∗, 𝐽′) − 𝐻(𝐽∗, 𝐽′′)| is at least 2𝑛.

Proof of Claim 7.
Take some 𝐽∗ that satisfies Γ_1. Then there must exist some 𝑖, ℓ such that 𝐽∗ sets all 𝑤_{𝑖,ℓ,𝑘} to true and all other 𝑤_{𝑖′,ℓ′,𝑘} to false. Therefore, |𝐻(𝐽∗, 𝐽′_{𝑖,ℓ}) − 𝐻(𝐽∗, 𝐽′_{𝑖′,ℓ′})| is at least 2𝑛, for any 𝑖′, ℓ′ such that (𝑖, ℓ) ≠ (𝑖′, ℓ′). ⊣

Claim 8.
For any judgment set 𝐽∗ ∈ J(Φ) that satisfies Γ_2, the value of max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽∗, 𝐽′) − 𝐻(𝐽∗, 𝐽′′)| is strictly less than 2𝑣.

Proof of Claim 8.
Take some 𝐽∗ that satisfies Γ_2. Then 𝐽∗ sets each 𝑤_{𝑖,ℓ,𝑘} to false, and for each 1 ≤ 𝑖 ≤ 𝑣 the judgment set 𝐽∗ contains ¬𝑧_{𝑖,1}, . . . , ¬𝑧_{𝑖,𝑞−1}, 𝑧_{𝑖,𝑞} and contains exactly one of 𝑦_{𝑖,𝑗} and 𝑦′_{𝑖,𝑗} for each 1 ≤ 𝑗 ≤ 𝑛. Moreover, without loss of generality, we may assume that 𝐽∗, for each 1 ≤ 𝑖 ≤ 𝑣, assigns truth values to the issues originating from Φ_𝑖 in a way that corresponds either (a) to a truth assignment to the variables in 𝜓_𝑖 witnessing that 𝜓_𝑖 is 1-in-3-satisfiable, or (b) to a partial truth assignment 𝛽 : var(𝜓_𝑖) \ {𝑥∗} → {0, 1} that satisfies exactly one literal in each clause where 𝑥∗ or ¬𝑥∗ does not occur, and satisfies no literal in each clause where 𝑥∗ or ¬𝑥∗ occurs. If this were not the case, we could consider another 𝐽∗ instead that does satisfy these properties and that has a value of max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽∗, 𝐽′) − 𝐻(𝐽∗, 𝐽′′)| that is at least as small.

Then, by the construction of 𝑱—and of the profiles 𝑱_𝑖 used in this construction—for each 𝐽′_{𝑖,ℓ} it holds that 𝐻(𝐽∗, 𝐽′_{𝑖,ℓ}) = Σ_{1≤𝑖′≤𝑣} 𝑐_{𝑖′}(𝑛 + 1) + 𝑛 ± 𝑑_𝑖, where 𝑑_𝑖 = 0 if 𝜓_𝑖 is 1-in-3-satisfiable and 𝑑_𝑖 = (𝑣 − 𝑖) otherwise. From this it follows that the value of max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽∗, 𝐽′) − 𝐻(𝐽∗, 𝐽′′)| is strictly less than 2𝑣, since 𝜓_1 is 1-in-3-satisfiable. ⊣

By Claims 7 and 8, and because we may assume without loss of generality that 𝑛 ≥ 𝑣, we know that any judgment set 𝐽∗ ∈ J(Φ) that minimizes max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽∗, 𝐽′) − 𝐻(𝐽∗, 𝐽′′)| must satisfy Γ_2. Moreover, by a straightforward modification of the arguments in the proof of Claim 8, we know that the minimal value, over judgment sets 𝐽∗ ∈ J(Φ), of max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽∗, 𝐽′) − 𝐻(𝐽∗, 𝐽′′)| is 2(𝑣 − 𝑖) for the smallest value of 𝑖 such that 𝜓_𝑖 is not 1-in-3-satisfiable, which coincides with 2(𝑣 − 𝑖 − 1) for the largest value of 𝑖 such that 𝜓_𝑖 is 1-in-3-satisfiable.
Since 𝑣 is even (and thus 𝑣 − 1 is odd), 2(𝑣 − 𝑖 − 1) is divisible by 4 if and only if 𝑖 is odd. Therefore, the minimal value of max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽∗, 𝐽′) − 𝐻(𝐽∗, 𝐽′′)| is divisible by 4 if and only if the maximum number of variables set to true in any satisfying assignment of 𝜓 is odd.

Moreover, it is straightforward to show that from any 𝐽∗ ∈ MaxEq(𝑱) we can construct, in polynomial time, a satisfying assignment of 𝜓 that sets a maximum number of variables to true. This concludes our description and analysis of the polynomial-time Levin reduction, and thus our hardness proof. □

Proposition 5.
Given an agenda Φ and a profile 𝑱, deciding whether the minimal value of max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽∗, 𝐽′) − 𝐻(𝐽∗, 𝐽′′)|, for 𝐽∗ ∈ J(Φ), is divisible by 4 is a Θ^p_2-complete problem.

Proof.
This follows from the proof of Theorem 3. The proof of membership in FP^NP[log,wit] for the outcome determination problem for the MaxEq rule can directly be used to show membership in Θ^p_2 for this problem. Moreover, the polynomial-time Levin reduction used in the FP^NP[log,wit]-hardness proof can be seen as a polynomial-time (many-to-one) reduction from the Θ^p_2-complete problem of deciding whether the maximum number of variables set to true in any satisfying assignment of a (satisfiable) propositional formula 𝜓 is odd [47, 65]. □

Proposition 6.
Given an agenda Φ and a profile 𝑱, the problem of deciding whether there is some 𝐽∗ ∈ J(Φ) with max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽∗, 𝐽′) − 𝐻(𝐽∗, 𝐽′′)| = 0 is NP-complete. Moreover, NP-hardness holds even for the case where Φ consists of logically independent issues—i.e., the case where J(Φ) = {0, 1}^𝑚 for some 𝑚.

Proof (sketch).
The problem of deciding whether there exists a truth assignment that satisfies a given 3CNF formula 𝜓 and that sets at least, say, 2 of the variables of 𝜓 to true is an NP-complete problem. Then, by combining Claim 6 in the proof of Theorem 3 with Lemma A.1, we directly get that the problem of deciding whether there is some 𝐽∗ ∈ J(Φ) with max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽∗, 𝐽′) − 𝐻(𝐽∗, 𝐽′′)| =
0, for a given agenda Φ and a given profile 𝑱, is NP-hard, even for the case where Φ consists of logically independent issues. Membership in NP follows from the fact that one can guess a set 𝐽∗ (together with a truth assignment witnessing that it is consistent) in polynomial time, after which checking whether 𝐽∗ ∈ J(Φ)—i.e., whether it is indeed consistent—and checking whether max_{𝐽′,𝐽′′∈𝑱} |𝐻(𝐽∗, 𝐽′) − 𝐻(𝐽∗, 𝐽′′)| = 0 can be done in polynomial time. □

REFERENCES
[1] Georgios Amanatidis, Georgios Birmpas, and Evangelos Markakis. 2016. On Truthful Mechanisms for Maximin Share Allocations. In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI).
[2] Haris Aziz, Markus Brill, Vincent Conitzer, Edith Elkind, Rupert Freeman, and Toby Walsh. 2017. Justified Representation in Approval-based Committee Voting. Social Choice and Welfare 48, 2 (2017), 461–485.
[3] Salvador Barberà, Walter Bossert, and Prasanta K. Pattanaik. 2004. Ranking Sets of Objects. In Handbook of Utility Theory. Springer, 893–977.
[4] Seth D. Baum. 2017. Social Choice Ethics in Artificial Intelligence. AI & Society (2017), 1–12.
[5] Dorothea Baumeister, Gábor Erdélyi, Olivia Johanna Erdélyi, and Jörg Rothe. 2013. Computational Aspects of Manipulation and Control in Judgment Aggregation. In Proceedings of the 3rd International Conference on Algorithmic Decision Theory (ADT).
[6] Dorothea Baumeister, Gábor Erdélyi, Olivia J. Erdélyi, and Jörg Rothe. 2015. Complexity of Manipulation and Bribery in Judgment Aggregation for Uniform Premise-Based Quota Rules. Mathematical Social Sciences 76 (2015), 19–30.
[7] Dorothea Baumeister, Jörg Rothe, and Ann-Kathrin Selker. 2017. Strategic Behavior in Judgment Aggregation. In Trends in Computational Social Choice. Lulu.com, 145–168.
[8] Sirin Botan and Ulle Endriss. 2020. Majority-Strategyproofness in Judgment Aggregation. In Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
[9] Steven J. Brams, Michael A. Jones, and Christian Klamler. 2008. Proportional Pie-Cutting. International Journal of Game Theory 36, 3-4 (2008), 353–367.
[10] Steven J. Brams, D. Marc Kilgour, and M. Remzi Sanver. 2007. A Minimax Procedure for Electing Committees. Public Choice.
[11] Journal of Political Economy.
[12] In Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI), Vol. 24.
[13] Yiling Chen, John K. Lai, David C. Parkes, and Ariel D. Procaccia. 2013. Truth, Justice, and Cake Cutting. Games and Economic Behavior 77, 1 (2013), 284–297.
[14] Zhi-Zhong Chen and Seinosuke Toda. 1995. The Complexity of Selecting Maximal Solutions. Information and Computation 119, 2 (1995), 231–239.
[15] Vincent Conitzer, Walter Sinnott-Armstrong, Jana Schaich Borg, Yuan Deng, and Max Kramer. 2017. Moral Decision Making Frameworks for Artificial Intelligence. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI).
[16] Stephen A. Cook. 1971. The Complexity of Theorem-Proving Procedures. In Proceedings of the 3rd Annual ACM Symposium on Theory of Computing. Shaker Heights, Ohio, 151–158.
[17] H. Dalton. 1920. The Measurement of the Inequality of Incomes. Economic Journal 30, 119 (1920), 348–461.
[18] Franz Dietrich and Christian List. 2007. Strategy-proof Judgment Aggregation. Economics & Philosophy 23, 3 (2007), 269–300.
[19] John Duggan and Thomas Schwartz. 2000. Strategic Manipulability without Resoluteness or Shared Beliefs: Gibbard-Satterthwaite Generalized. Social Choice and Welfare 17, 1 (2000), 85–93.
[20] Michael Dummett. 1984. Voting Procedures. Oxford University Press.
[21] Thomas Eiter and Georg Gottlob. 1995. On the Computational Cost of Disjunctive Logic Programming: Propositional Case. Annals of Mathematics and Artificial Intelligence 15, 3-4 (1995), 289–323.
[22] Edith Elkind, Piotr Faliszewski, Piotr Skowron, and Arkadii Slinko. 2017. Properties of Multiwinner Voting Rules. Social Choice and Welfare 48, 3 (2017), 599–632.
[23] Ulle Endriss. 2016. Judgment Aggregation. In Handbook of Computational Social Choice, F. Brandt, V. Conitzer, U. Endriss, J. Lang, and A. D. Procaccia (Eds.). Cambridge University Press.
[24] Ulle Endriss. 2016. Judgment Aggregation. In Handbook of Computational Social Choice, F. Brandt, V. Conitzer, U. Endriss, J. Lang, and A. D. Procaccia (Eds.). Cambridge University Press.
[25] Ulle Endriss and Ronald de Haan. 2015. Complexity of the Winner Determination Problem in Judgment Aggregation: Kemeny, Slater, Tideman, Young. In Proceedings of the 14th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
[26] Ulle Endriss, Ronald de Haan, Jérôme Lang, and Marija Slavkovik. 2020. The Complexity Landscape of Outcome Determination in Judgment Aggregation. Journal of Artificial Intelligence Research (2020).
[27] Ulle Endriss, Umberto Grandi, Ronald de Haan, and Jérôme Lang. 2016. Succinctness of Languages for Judgment Aggregation. In Proceedings of the 15th International Conference on the Principles of Knowledge Representation and Reasoning (KR).
[28] Ulle Endriss, Umberto Grandi, and Daniele Porello. 2012. Complexity of Judgment Aggregation. Journal of Artificial Intelligence Research 45 (2012), 481–514.
[29] Patricia Everaere, Sébastien Konieczny, and Pierre Marquis. 2014. On Egalitarian Belief Merging. In Proceedings of the 14th International Conference on the Principles of Knowledge Representation and Reasoning (KR).
[30] Patricia Everaere, Sébastien Konieczny, and Pierre Marquis. 2017. Belief Merging and its Links with Judgment Aggregation. In Trends in Computational Social Choice, Ulle Endriss (Ed.). AI Access Foundation, 123–143.
[31] Peter C. Fishburn and Steven J. Brams. 1983. Paradoxes of Preferential Voting. Mathematics Magazine 56, 4 (1983), 207–214.
[32] Duncan K. Foley. 1967. Resource Allocation and the Public Sector. Yale Economic Essays 7, 1 (1967), 45–98.
[33] Peter Gärdenfors. 1976. Manipulation of Social Choice Functions. Journal of Economic Theory 13, 2 (1976), 217–228.
[34] Martin Gebser, Roland Kaminski, Benjamin Kaufmann, and Torsten Schaub. 2012. Answer Set Solving in Practice. Morgan & Claypool Publishers.
[35] Martin Gebser, Roland Kaminski, and Torsten Schaub. 2011. Complex Optimization in Answer Set Programming. Theory and Practice of Logic Programming.
[36] In Handbook of Knowledge Representation, Frank van Harmelen, Vladimir Lifschitz, and Bruce Porter (Eds.). Elsevier.
[37] Oded Goldreich. 2010. P, NP, and NP-Completeness: The Basics of Complexity Theory. Cambridge University Press.
[38] Umberto Grandi and Ulle Endriss. 2010. Lifting Rationality Assumptions in Binary Aggregation. In Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI).
[39] Davide Grossi and Gabriella Pigozzi. 2014. Judgment Aggregation: A Primer. Synthesis Lectures on Artificial Intelligence and Machine Learning, Vol. 8. Morgan & Claypool Publishers. 1–151 pages.
[40] Ronald de Haan. 2018. Hunting for Tractable Languages for Judgment Aggregation. In Proceedings of the 16th International Conference on the Principles of Knowledge Representation and Reasoning (KR).
[41] Ronald de Haan and Marija Slavkovik. 2017. Complexity Results for Aggregating Judgments using Scoring or Distance-Based Procedures. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems (AAMAS).
[42] Ronald de Haan and Marija Slavkovik. 2019. Answer Set Programming for Judgment Aggregation. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI).
[43] Jerry S. Kelly. 1977. Strategy-proofness and Social Choice Functions without Singlevaluedness. Econometrica 45, 2 (1977), 439–446.
[44] Johannes Köbler and Thomas Thierauf. 1990. Complexity Classes with Advice. In Proceedings of the 5th Annual Structure in Complexity Theory Conference.
[45] Sébastien Konieczny and Ramón Pino Pérez. 2011. Logic Based Merging. Journal of Philosophical Logic 40, 2 (2011), 239–270.
[46] Jan Krajicek. 1995. Bounded Arithmetic, Propositional Logic and Complexity Theory. Cambridge University Press.
[47] Mark W. Krentel. 1988. The Complexity of Optimization Problems. J. Comput. System Sci. 36, 3 (1988), 490–509.
[48] Justin Kruger and Zoi Terzopoulou. 2020. Strategic Manipulation with Incomplete Preferences: Possibilities and Impossibilities for Positional Scoring Rules. In Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
[49] Martin Lackner and Piotr Skowron. 2018. Approval-Based Multi-Winner Rules and Strategic Voting. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI).
[50] Jérôme Lang, Gabriella Pigozzi, Marija Slavkovik, and Leendert van der Torre. 2011. Judgment Aggregation Rules Based on Minimization. In Proceedings of the 13th Conference on Theoretical Aspects of Rationality and Knowledge (TARK).
[51] Jérôme Lang and Marija Slavkovik. 2014. How Hard is it to Compute Majority-Preserving Judgment Aggregation Rules? In Proceedings of the 21st European Conference on Artificial Intelligence (ECAI).
[52] Christian List and Philip Pettit. 2002. Aggregating Sets of Judgments: An Impossibility Result. Economics and Philosophy 18, 1 (2002), 89–110.
[53] Michael K. Miller and Daniel Osherson. 2009. Methods for Distance-based Judgment Aggregation. Social Choice and Welfare 32, 4 (2009), 575–601.
[54] Elchanan Mossel and Omer Tamuz. 2010. Truthful Fair Division. In Proceedings of the 3rd International Symposium on Algorithmic Game Theory (SAGT).
[55] Hervé Moulin. 1988. Axioms of Cooperative Decision Making. Econometric Society Monographs, Vol. 15. Cambridge University Press.
[56] Klaus Nehring, Marcus Pivato, and Clemens Puppe. 2014. The Condorcet Set: Majority Voting over Interconnected Propositions. Journal of Economic Theory 151 (2014), 268–303.
[57] Ritesh Noothigattu, Snehalkumar S. Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel D. Procaccia. 2018. A Voting-Based System for Ethical Decision Making. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI).
[58] Dominik Peters. 2018. Proportionality and Strategyproofness in Multiwinner Elections. In Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
[59] Gabriella Pigozzi. 2006. Belief Merging and the Discursive Dilemma: An Argument-based Account to Paradoxes of Judgment Aggregation. Synthese.
[60] John Rawls. A Theory of Justice. Belknap Press.
[61] M. Remzi Sanver and William S. Zwicker. 2009. One-way Monotonicity as a Form of Strategy-proofness. International Journal of Game Theory 38, 4 (2009), 553–574.
[62] Thomas J. Schaefer. 1978. The Complexity of Satisfiability Problems. In Conference Record of the 10th Annual ACM Symposium on Theory of Computing. ACM.
[63] Amartya Sen. 1997. Choice, Welfare and Measurement. Harvard University Press.
[64] Zoi Terzopoulou and Ulle Endriss. 2018. Modelling Iterative Judgment Aggregation. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI).
[65] Klaus W. Wagner. 1990. Bounded Query Classes. SIAM J. Comput. 19, 5 (1990), 833–846.
[66] William S. Zwicker. 2016. Introduction to the Theory of Voting. In