A Logic for Conditional Local Strategic Reasoning
Valentin Goranko, Department of Philosophy, Stockholm University, Sweden ([email protected])
Fengkui Ju, Faculty of Philosophy, The John Paul II Catholic University of Lublin, Poland ([email protected])
February 12, 2021
Abstract
We consider systems of rational agents who act and interact in pursuit of their individual and collective objectives. We study and formalise the reasoning of an agent, or of an external observer, about the expected choices of action of the other agents based on their objectives, in order to assess the reasoner's ability, or expectation, to achieve their own objective. To formalize such reasoning we extend Pauly's Coalition Logic with three new modal operators of conditional strategic reasoning, thus introducing the Logic for Local Conditional Strategic Reasoning ConStR. We provide formal semantics for the new conditional strategic operators in concurrent game models, introduce the matching notion of bisimulation for each of them, prove bisimulation invariance and the Hennessy-Milner property for each of them, and briefly discuss and compare their expressiveness. Finally, we also propose systems of axioms for each of the basic operators of ConStR and for the full logic.
Keywords:
Conditional strategic reasoning; concurrent games; Coalition Logic; proactive and reactive abilities; bisimulations; expressiveness
1 Introduction

Consider the following scenario. Alice and Bob are students at DownTown University. Alice is coming to campus today and has some agenda to complete. Bob wants to meet Alice somewhere on campus today. She does not know that (maybe she does not even know Bob) and they have no communication. Bob may, or may not, know what Alice is going to do on campus, or where and at what time she will go during the day. Using his knowledge of what, where, and when Alice intends to do today, Bob wants to come up with a plan of how (where and when) to meet her.

From a more general perspective, we consider a scenario of agents acting independently, and possibly concurrently, in pursuit of their individual and collective goals, and we analyse the reasoning of an agent (or observer) about the possible local actions (at the current state only) of the other agents and their effect on realising or enabling the outcome of interest for the reasoner.

∗ This paper is a revised and extended version of [12].

Related work and motivation. The kind of strategic reasoning discussed here is within the conceptual thrust motivating the research on logic-based strategic reasoning over the past two decades, starting with Coalition Logic CL ([19], [20]), its temporal extension, the alternating-time temporal logic ATL ([5]), its epistemic extension ATEL ([14]), and gradually evolving towards increasingly expressive formalisms, such as Strategy Logic SL [16] (cf. [17]). See [7] and [4] for overviews of the area. Most of these logical systems (except SL, where the agents' strategies are explicitly named in the language) assume arbitrary or adversarial behaviour of the agents outside of the proponent coalitions in CL, ATL, and ATEL. Also, the knowledge of the agents involved in ATEL refers to truths (of formulae in the language) at the current state, rather than to their knowledge about each other's objectives and available actions.
Thus, these logics formalise absolute/unconstrained strategic reasoning – usually, by an external observer – about the unconditional strategic abilities of agents and coalitions to achieve their goals. However, such unconstrained strategic reasoning is seldom applicable in practice, except in purely adversarial zero-sum games. Usually, all agents acting in the system (except for the environment, or an absolute adversary) have their own goals and act in pursuit of their fulfilment, rather than just to prevent the proponents from achieving their goals. This calls for more refined strategic reasoning, conditional on the agents' knowledge of the opponents' goals and the possible available actions to achieve them, which is the proposal of the present paper. It should be noted that there is a recent line of research on rational synthesis [9], [15] and rational verification [21], [13], which does take into account all agents' goals, but aims at designing stable strategy profiles (Nash equilibria) that only guarantee the satisfaction of the goal of one special agent (the proponent, representing the system), whereas all others are supposed to act rationally and accept the proposed solution, whether it satisfies their own goals or not. Thus, our work takes an essentially different perspective and has quite different objectives. We are aware of few other works that deal more directly and explicitly with conditional strategic reasoning in a sense akin to the present paper. Besides the earlier, conference version [12] of this work, perhaps the closest to it in spirit is the recent [11], to which the present work relates both conceptually and technically, as well as the conceptually related work [18], which presents a logic that can express statements of the type: "The coalition B has a strategy to achieve their goal once they know the strategy of the coalition A, no matter what that strategy is".
If the epistemic ingredient in it is considered implicit, this statement is expressible by our operator for 'reactive strategic ability' O_β. We also note that an axiomatic system is proposed and proved complete in [18], which shares some basic axioms with our axiom system for O_β presented in Section 6.3, but differs from it on others.

Our contributions.
In this work we identify several patterns of conditional strategic reasoning of an observer or an active agent, depending on his/her knowledge about the objectives and possible actions of the other agents. To formalize such reasoning we extend Coalition Logic ([19], [20]) with three new modal operators of conditional strategic reasoning, thus introducing the Logic for Local Conditional Strategic Reasoning ConStR. We provide formal semantics for the new conditional strategic operators, introduce the matching notion of bisimulation for each of them, and briefly discuss and compare their expressiveness. We then also propose systems of axioms for each of the basic operators of ConStR and for the whole logic, without yet stating completeness claims (for lack of space, these are left to future work).

Structure of the paper.
Section 2 provides some preliminaries on concurrent game models and the coalition logic CL. Then, Section 3 presents an informal discussion of conditional strategic reasoning, motivating the further technical work. Section 4 introduces three modal operators formalising patterns of conditional strategic reasoning and the new logic ConStR as an extension of Coalition Logic with these operators. Section 5 introduces the matching notion of bisimulation for that logic and briefly discusses its expressiveness. In Section 6 we propose systems of axioms for each of the basic operators of ConStR and for the full logic. We end with brief concluding remarks in Section 7.
2 Preliminaries

Multi-agent game models.
We fix a finite set of agents Agt = {a_1, ..., a_n} and a set of atomic propositions Π. Subsets of Agt will also be called coalitions.

Definition 2.1. A game model for Agt and Π is a tuple M = (S, {Σ_a}_{a∈Agt}, g, V) where S is a non-empty set of states; each Σ_a is a non-empty set of possible actions of agent a; V : Π → P(S) is a valuation of the atomic propositions from Π in S; and g is a game map that assigns to each s ∈ S a strategic game form g(s) = (Σ^s_{a_1}, ..., Σ^s_{a_n}, o_s), where each Σ^s_{a_i} ⊆ Σ_{a_i} is a non-empty set of actions available to player a_i at s, and o_s : Σ^s_{a_1} × ... × Σ^s_{a_n} → S is a local outcome function assigning to any action profile σ ∈ Σ^s_{a_1} × ... × Σ^s_{a_n} the outcome state o_s(σ) produced by σ when applied at s ∈ S. The set Σ^s_{a_1} × ... × Σ^s_{a_n} of action profiles available at s will be denoted by Act_s.

Now, the global outcome function in M is the partial mapping O : S × Σ_{a_1} × ... × Σ_{a_n} ⇀ S defined by O(s, σ) = o_s(σ), whenever σ ∈ Act_s.

Given a coalition C ⊆ Agt, a joint action for C in the model M is a tuple of individual actions σ_C ∈ ∏_{a∈C} Σ_a. For any such joint action σ_C that is available at s ∈ S, we define the set of outcome states from σ_C at s:

Out[s, σ_C] = { u ∈ S | ∃σ ∈ Act_s : σ|_C = σ_C and o_s(σ) = u }

where σ|_C is the restriction of σ to C. Note that the empty tuple σ_∅ is the only available joint action for the empty coalition ∅ at any state.

These game models are essentially equivalent to the concurrent game models used in [5].

The basic logic for coalitional strategic reasoning CL. The Coalition Logic CL was introduced in [19], cf. also [20]. CL extends classical propositional logic with coalitional strategic modal operators [C], for any coalition of agents C. The formulae of CL are defined as follows:

φ ::= p | ¬φ | φ ∨ φ | [C]φ

We will write [i] instead of [{i}]. The intuitive reading of [C]φ is: "The coalition C
has a joint action that ensures an outcome (state) satisfying φ, regardless of how all other agents act."

The semantics of CL is defined in terms of the notion of truth of a CL-formula ψ at a state s of a game model M, denoted M, s ⊨ ψ, by induction on formulae, via the key clause:

M, s ⊨ [C]φ ⇔ there exists a joint action σ_C available at s, such that M, u ⊨ φ for each u ∈ Out[s, σ_C]

Thus, [C]φ formalises a claim of the ability of the agent/coalition C to choose a suitable (joint) action to ensure achieving the goal φ regardless of how all other agents choose to act, and therefore without assuming that the agents in C know the goal(s) of the remaining agents and their available actions to achieve these goals.

The notion of bisimulation that guarantees truth invariance of all CL-formulae was first defined in [6] for the so-called 'alternating transition systems' (equivalent to a special case of concurrent game models), then independently in [19] for the abstract game models defined there, and later in [2] for concurrent game models, the definition for which we give here.

Definition 2.2 (CL-bisimulation). Let M = (S, {Σ_a}_{a∈Agt}, g, V) be a concurrent game model. A binary relation R ⊆ S × S is a CL-bisimulation in M if it satisfies the following conditions for every pair of states (s_1, s_2) such that s_1 R s_2 and for every coalition C:

Atom equivalence: For every p ∈ Π: s_1 ∈ V(p) iff s_2 ∈ V(p).

Forth: For every joint action σ_1 of C at s_1, there is a joint action σ_2 of C at s_2 such that for every u_2 ∈ Out[s_2, σ_2] there exists u_1 ∈ Out[s_1, σ_1] such that u_1 R u_2.

Back: Like Forth, but with s_1 and s_2 swapped.

CL-bisimulation is defined here within a model. It readily extends to CL-bisimulation between models, by treating both as parts of a single model.

The Alternating-time Temporal Logic ATL*.
The Alternating-time Temporal Logic ATL*, proposed by [5], is an extension of CL with temporal operators. Its featured operator is ⟨⟨C⟩⟩φ, denoting the claim that C has a joint strategy that guarantees the satisfaction of φ, where φ is a 'temporal objective', i.e. a (path) formula beginning with a temporal operator 'nexttime' X, 'always' G, or 'until' U. The logic CL embeds into ATL* as the fragment extending propositional logic only with combinations of strategic and temporal operators of the type ⟨⟨C⟩⟩X, cf. [10]. We only mention ATL* here for the sake of some further references; the present paper will not make any essential use of that logic, and no familiarity with it, nor even with its fragment ATL, is required.
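The ingredients of Definition 2.1 and the CL truth clause above can be made concrete in code. The following is a hedged, illustrative sketch (the encoding and all names are ours, not the paper's): joint actions are represented as partial maps from agents to actions, `out` computes Out[s, σ_C], and `forces` evaluates the ∃∀ pattern of M, s ⊨ [C]φ.

```python
from itertools import product

# Illustrative encoding of a concurrent game model M = (S, {Sigma_a}, g, V)
# together with the CL truth clause: M, s |= [C]phi iff some joint action of C
# forces an outcome satisfying phi, regardless of how the other agents act.
class GameModel:
    def __init__(self, agents, actions, outcome, valuation):
        self.agents = tuple(agents)   # Agt, in a fixed order
        self.actions = actions        # actions[s][a]: actions of agent a at s
        self.outcome = outcome        # outcome[(s, profile)] = o_s(profile)
        self.valuation = valuation    # valuation[p]: set of states where p holds

    def out(self, s, joint):
        """Out[s, sigma_C]: outcomes of all action profiles at s that extend the
        coalition's joint action `joint`, given as a dict agent -> action."""
        profiles = product(*(self.actions[s][a] for a in self.agents))
        return {self.outcome[(s, p)] for p in profiles
                if all(p[self.agents.index(a)] == x for a, x in joint.items())}

    def forces(self, s, coalition, goal):
        """M, s |= [C]phi, where `goal` is the set of states satisfying phi."""
        joint_actions = product(*(self.actions[s][a] for a in coalition))
        return any(self.out(s, dict(zip(coalition, j))) <= goal
                   for j in joint_actions)
```

In particular, `forces(s, (), goal)` captures [∅]φ: the empty coalition's single (empty) joint action forces φ iff φ holds at every successor of s, matching the inevitability reading used informally in Section 3.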
3 Conditional strategic reasoning: an informal discussion

(The reader who is only interested in the technical part of the paper can skip this section without essential loss. In this paper we focus on local reasoning, about once-off actions, but in this section the word 'action' can be conceived in a wider sense, and may mean either a once-off action or a global strategy guiding the long-term behaviour of the agent.)

Recall the scenario with the students Alice and Bob. Suppose that Alice has an objective γ_A to achieve – say, to meet with her supervisor Carl on campus today. Suppose also that Alice has several possible choices of an action (or a 'strategy') that would possibly, or certainly, guarantee the achievement of her objective. In our example, suppose these choices are: meeting in the supervisor's office, or in the library, or at the campus café.

3.1 Conditional strategic reasoning of an observer

Let us first consider the case where Bob is just an observer who is not acting, but only reasoning about the consequences of Alice's possible actions with respect to the occurrence of another – intended or not – outcome event γ_B. For instance, suppose that Bob is interested in meeting with Alice on campus today – let us call that event γ_B – and is sitting in the campus café and reasoning about whether Alice will happen to come to the café, thus enabling the event γ_B (recall that Alice may not know about Bob expecting her there, or at all). More generally, we can also assume that there are other agents, besides Alice, also acting in pursuit of their own goals, and Bob is reasoning about their individual and collective choices of action and the consequences of these choices. This leads to an observer's conditional strategic reasoning about claims of the type:

"Some/every action of Alice that guarantees achievement of γ_A also guarantees/enables occurrence of the outcome γ_B."

Depending on Bob's knowledge about Alice's objective and of her expected choices of action, there can be several possible cases for Bob's reasoning about the expected occurrence of the outcome γ_B.

3.1.1 Observer's reasoning, case 1: Bob does not know Alice's objective

Suppose that Bob does not know Alice's objective γ_A, and therefore has no a priori expectations about her choice of action. In our example, suppose that Bob only knows that Alice is coming to campus today, but not why and where on campus she is going. Then, Bob can only claim for sure that the outcome γ_B will occur if γ_B is inevitable, regardless of how Alice (and all others) will act. For instance, if Bob knows that Alice is coming to campus and he is standing by the only entrance of the campus, then he will know for sure that he is going to meet Alice (γ_B will occur), no matter what she will do there. This claim can be expressed in Coalition Logic CL simply as [∅]γ_B.

3.1.2 Observer's reasoning, case 2: Bob knows Alice's objective, but not her possible actions

Suppose now that Bob does know Alice's objective and knows that Alice can guarantee the achievement of that objective and will act towards that, but Bob does not know how exactly Alice might act. E.g., Bob knows that Alice is coming to campus to meet with her supervisor. Then, Bob can claim that γ_B will occur for sure if γ_B is true on every possible course of events ("play") on which γ_A is true. For instance, if Bob knows that Alice's supervisor will be working in his office for the whole day, and he is sitting in the corridor, next to Carl's office, then he knows that he will meet with Alice (γ_B will occur) no matter when Alice comes to meet with Carl (i.e. no matter how γ_A occurs). This can be expressed in CL simply as [∅](γ_A → γ_B) and reflects the case when γ_A can occur in various, possibly unintended ways, but its occurrence always implies the occurrence of γ_B (e.g. if Bob is with Carl throughout the day, then even if Alice bumps into Carl accidentally, Bob will still meet her).

3.1.3 Observer's reasoning, case 3: Bob knows Alice's objective and her possible actions

Suppose now that Bob not only knows Alice's objective, but also knows all possible actions (or strategies) of Alice that can ensure the satisfaction of her objective γ_A, and knows that Alice will perform one of them, but does not know which one. (E.g., Bob knows that Alice, who is coming to campus to meet with her supervisor, can meet with him either in his office, or in the library, or in the café.) Now, for Bob to claim that the outcome γ_B will occur for sure, it suffices to know that each action of Alice that guarantees γ_A will also guarantee γ_B. (E.g., suppose that all possible meeting places for Alice and her supervisor are in the main building and Bob is waiting at the only entrance of the main building.) Here the conditional "If γ_A then γ_B" has a suitably constrained context, specifying that γ_A can occur only because the agent (Alice) takes a deliberate action to bring about γ_A. This can no longer be expressed in CL and requires introducing a new strategic operator.

3.1.4 Observer's reasoning, case 4: Bob knows Alice's chosen action

Lastly, suppose that Bob also knows the specific action which Alice is taking in order to guarantee the achievement of her goal. Then, Bob can claim that the outcome γ_B will occur for sure, as long as that specific action of Alice guarantees the satisfaction of γ_B. To formalise that one needs explicit names for actions, but in our logic we will be able to state something stronger, viz. that every specific action of Alice that guarantees γ_A will also bring about γ_B.

3.2 Conditional strategic reasoning of an acting agent

Suppose now that Bob is not just a passive observer, but an acting agent, who has the outcome γ_B as his own goal. There may be other agents, besides Alice and Bob, also acting in pursuit of their own goals, and Bob is reasoning about their expected choices of action and the consequences of these choices. Now, Bob is to decide – based on his reasoning about Alice's (and other agents') possible choices of actions – on his own action in pursuit of γ_B. This calls for an agent's conditional strategic reasoning about statements of the type:

"For some/every action of Alice that guarantees achievement of γ_A, Bob has an action of his own to guarantee achievement of his objective γ_B."

We call this local conditional strategic reasoning, as it only refers to the immediate actions of the agents, not to their global strategies.
Respectively, the outcomes of the local action profiles are just successor states, while in the general case they are (finite or possibly infinite) plays. The global conditional strategic reasoning will be treated in a follow-up work.

Each of the cases considered in Section 3.1 accordingly applies here, too, with the only difference being that now Bob is to choose a suitable action of his own. Besides, there are several additional cases to consider regarding the possible choice of action of Bob.

3.2.1 Agent Bob's reasoning, case 1: assuming Alice's cooperation

First, suppose, in addition, that Alice also knows Bob's objective and can choose to cooperate with Bob by selecting a suitable action σ_A that would not only guarantee achievement of her objective but would also enable Bob to supplement σ_A with an action σ_B which would then guarantee achievement of his objective, too. (So, we also assume that Alice knows enough about Bob's possible actions.) We refer the reader to Example 4.1 for a formal model illustrating the agent's ability assuming cooperation from the other agent.

3.2.2 Agent Bob's reasoning, case 2: not assuming Alice's cooperation

Now, suppose Bob cannot count on Alice's cooperation. Still, the statement:
"Whichever way Alice acts towards achieving the objective γ_A, Bob can act so as to bring about achievement of his objective γ_B"

admits two different readings, which we respectively call 'proactive ability' and 'reactive ability', and which we discuss below.

3.2.3 Reactive ability

In this case, for every action of Alice that ensures γ_A, Bob is to choose reactively an action of his, generally dependent on Alice's action, that would also ensure the occurrence of γ_B (possibly in different ways for the different actions). This essentially corresponds to the case where Bob knows Alice's action at the time of choosing his own action, and for every choice of action of Alice that guarantees γ_A, Bob's respective choice will also bring about γ_B. For instance, in our running example, if Bob knows where Alice is going to meet with her supervisor, he can choose respectively where to go and wait for her. More formally, each of Alice's actions that would guarantee γ_A generates a set of possible outcome states (plays), and for each such set Bob is looking for a respective action that will bring about γ_B on that set of outcome states.

3.2.4 Proactive ability

The case of proactive ability is when Bob only knows that Alice has committed to act so as to achieve her goal γ_A (to meet with Carl at one of the three possible meeting places), but does not know the action that Alice has chosen, and her choice will remain unknown to Bob at the time when he is to choose his action aiming at satisfying γ_B (meeting Alice). In this case Bob must consider all possible courses of events (plays) that can occur as a result of Alice acting towards achieving γ_A, and reason about whether he can choose proactively and uniformly one action that would bring about γ_B regardless of which action Alice may choose to apply in order to achieve her goal γ_A. For instance, in our running story, assuming that all meeting places are in the main building, Bob can choose to wait for Alice at the only entrance of that building. Formally speaking, in this case, based on his knowledge, Bob considers the set of states in the model which is the union of all sets of outcome states enabled by the specific actions of Alice that would guarantee γ_A, and is looking for an action that will bring about γ_B on each of these outcome states. (In [12] these were called respectively 'ability de re' and 'ability de dicto'.)

We refer the reader to Example 4.2 for a formal model illustrating the concepts of proactive and reactive abilities of an agent and their difference.

To sum up, the proactive–reactive ability distinction applies to Bob depending on whether or not he knows Alice's choice at the time when he is to make his own choice of action. If he knows Alice's choice at that time, his reasoning is about reactive ability; else it is proactive ability reasoning. We note that the notions of proactive and reactive ability respectively correspond to the notions of α-effectivity and β-effectivity in game theory (cf. e.g. [1]).

Lastly, an important point: even though the knowledge of the agent about the others' goals and possible actions is essential, it will not feature in our formal logical language, nor in the formal semantics, but only in the external reasoner's analysis of which case of conditional strategic ability applies.
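In the one-step setting, the proactive–reactive contrast boils down to quantifier order. A hedged LaTeX sketch (our own rendering, not the paper's notation: γ_A, γ_B are read here as sets of outcome states satisfying the objectives, and σ_A ⊎ σ_B denotes the combined joint action, made precise in the next section):

```latex
% Proactive ability (alpha-effectivity): Bob commits first, uniformly.
\exists \sigma_B \,\forall \sigma_A \,\big(\, Out[s,\sigma_A] \subseteq \gamma_A
   \;\Rightarrow\; Out[s,\sigma_A \uplus \sigma_B] \subseteq \gamma_B \,\big)

% Reactive ability (beta-effectivity): Bob responds to Alice's known choice.
\forall \sigma_A \,\big(\, Out[s,\sigma_A] \subseteq \gamma_A
   \;\Rightarrow\; \exists \sigma_B \, Out[s,\sigma_A \uplus \sigma_B] \subseteq \gamma_B \,\big)
```

Proactive ability implies reactive ability, but not conversely, as Example 4.2 illustrates.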
4 The logic for conditional local strategic reasoning ConStR

Given coalitions A and B and joint actions σ_A for A and σ_B for B, we say that σ_B is consistent with σ_A if σ_B coincides with σ_A on A ∩ B. We now introduce new operators for conditional strategic reasoning, for any coalitions A and B, with intuitive semantics corresponding to the three reasoning cases in Section 3.2, as follows.

(O_c) ⟨A⟩_c(φ; ⟨B⟩ψ) says that A has a joint action σ_A which, when applied, guarantees the truth of φ and enables B to apply a joint action σ_B that is consistent with σ_A and guarantees ψ when additionally applied by B, in the sense that all agents in A act according to σ_A and those in B \ A act according to σ_B. This operator formalises the agent's reasoning Case 1 discussed in Section 3.2, where A knows the objective of B and can choose to cooperate with B by selecting a suitable action. (We note that ⟨A⟩_c(φ; ⟨B⟩ψ) is equivalent to ⟨⟨A⟩⟩(φ ∧ ⟨⟨B \ A⟩⟩ψ) in ATL*.)

(O_α) [A]_α(φ; ⟨B⟩ψ) says that the coalition B \ A has an action σ_{B\A} such that if A applies any action that guarantees the truth of φ, then B \ A can guarantee the truth of ψ by additionally applying the action σ_{B\A}. This operator formalises a claim of the ability of the agent/coalition B to choose a suitable (joint) action so as to achieve the goal ψ assuming that A acts so as to achieve the goal φ, if B is to choose their (joint) action before A chooses their (joint) action, or before B learns the action of A. This corresponds to the notion of an agent's proactive ability discussed in Section 3.2.4, respectively to the game-theoretic notion of α-effectivity, hence the notation.

(O_β) [A]_β(φ; ⟨B⟩ψ) says that for any joint action σ_A of A that guarantees the truth of φ, when applied by A there is an action σ_B that is consistent with σ_A and guarantees ψ when additionally applied by B. This operator formalises a claim of the ability of the agent/coalition B to choose a suitable (joint) action so as to achieve the goal ψ assuming that A acts so as to achieve the goal φ, if B is to choose their (joint) action after B learns the action of A. This corresponds to the notion of an agent's reactive ability discussed in Section 3.2.3, respectively to the game-theoretic notion of β-effectivity, hence the notation.

(Note that [A]_β(⊥; ⟨B⟩ψ) is vacuously true for any A, B, and ψ, as then there cannot be any joint actions σ_A that enable satisfying ⊥. This may sound odd, but it is no special phenomenon in ConStR, as the same effect occurs in FOL with universal quantification over an empty set of objects.)

The language of ConStR
We fix a finite nonempty set of agents Agt and a countable set of atomic propositions Π. The formulae of ConStR, where p ∈ Π and A, B ⊆ Agt, are defined as follows:

φ ::= p | ⊤ | ¬φ | (φ ∧ φ) | ⟨A⟩_c(φ; ⟨B⟩φ) | [A]_α(φ; ⟨B⟩φ) | [A]_β(φ; ⟨B⟩φ)

Some definable operators in ConStR
The following can be easily seen from the informal semantics above, and can also be easily verified with the formal semantics introduced further.

• The coalitional operator from CL is definable as a special case of each of O_c, O_α, O_β, as follows:

(O_c) [A]φ := ⟨A⟩_c(φ; ⟨A⟩φ), or [A]φ := ⟨A⟩_c(φ; ⟨A⟩⊤);

(O_α) [A]φ := [∅]_α(⊤; ⟨A⟩φ). This expresses the case when B is not informed about the goal of A and has to choose proactively a joint action, before A has chosen their action. Thus, it indeed claims an unconditional ability of B to choose an action that guarantees φ.

(O_β) [A]φ := [∅]_β(⊤; ⟨A⟩φ), or [A]φ := [A]_β(⊤; ⟨A⟩φ) (the empty coalition has only one strategy, and it guarantees the satisfaction of ⊤).

Thus, the cases of observer's reasoning discussed in Sections 3.1.1 and 3.1.2 are readily formalisable in ConStR.

• The dual of the O_c operator, ¬⟨A⟩_c(φ; ⟨B⟩¬ψ), says that every joint action of A that, when applied, guarantees the truth of φ would prevent B from acting additionally so as to guarantee ψ. This formalises the conditional reasoning scenario where the goals of A and B are conflicting, and where Bob can establish that whichever way A acts towards their goal, that would block B from acting to guarantee achievement of its goal.

• [A]_β(⊤; ⟨B⟩ψ) essentially formalises the case when B is not informed about the goal of A, but has to choose their action after learning the action of A.

• On the other hand, [A]_c(φ | ψ) := [A]_β(φ; ⟨∅⟩ψ), also equivalent to [A]_α(φ; ⟨∅⟩ψ), says that for any joint strategy of A, if it guarantees φ to be true, then it guarantees ψ to be true, too. That formalises the case in Section 3.1.3 of the reasoning of an observer, who knows both the goal φ and the possible actions of A, about the occurrence of the outcome ψ.

• ⟨A⟩_c(φ | ψ) := ¬[A]_c(φ | ¬ψ) says that there is a joint strategy of A that guarantees φ to be true and enables ψ to be true, too. Note that it is equivalent to a special case of the "socially friendly coalitional operator" SF, [C](φ; ψ_1, ..., ψ_k), introduced in [11], viz. ⟨A⟩_c(φ | ψ) ≡ [A](φ; ψ). Moreover, ⟨A⟩_c(φ | ψ) is also definable as ⟨A⟩_c(φ; ⟨Ā⟩ψ), where Ā = Agt \ A.

• The coalitional operator [A] from CL is a special case of the above: [A]φ := ⟨A⟩_c(φ | ⊤), meaning "A has a joint action to ensure the truth of φ".

• ⟨A⟩_c(φ; ⟨B⟩ψ) is definable in terms of the "group protecting coalitional operator" GIP, introduced in [11]: ⟨A⟩_c(φ; ⟨B⟩ψ) ≡ ⟨[A ⊲ φ, A ∪ B ⊲ ψ]⟩. Nevertheless, it now has a different motivation and intuitive interpretation.
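One of the definability claims above, [A]φ := [∅]_α(⊤; ⟨A⟩φ), can be spot-checked by brute force on toy one-state models. The encoding below is our own illustrative sketch (not the paper's): two agents with actions {0, 1}, joint actions as dicts, and the combined joint action taken so that A's choices win on overlaps.

```python
from itertools import product
import random

AGENTS = ('a', 'b')            # two agents, each with actions {0, 1} at state s
STATES = {'u', 'v', 'w'}       # possible successor states

def outcomes(out, joint):
    """Out[s, sigma_C] for a joint action given as a dict agent -> action."""
    return {out[p] for p in product((0, 1), (0, 1))
            if all(p[AGENTS.index(a)] == x for a, x in joint.items())}

def joints(coalition):
    """All joint actions of a coalition, as dicts agent -> action."""
    return [dict(zip(coalition, j))
            for j in product((0, 1), repeat=len(coalition))]

def cl_force(out, C, goal):
    """CL: [C]goal -- some joint action of C forces an outcome in `goal`."""
    return any(outcomes(out, j) <= goal for j in joints(C))

def alpha(out, A, phi, B, psi):
    """[A]_alpha(phi; <B>psi): B has one uniform joint action working against
    every phi-guaranteeing joint action of A (A's choices win on overlaps)."""
    return any(all(not outcomes(out, sa) <= phi
                   or outcomes(out, {**sb, **sa}) <= psi
                   for sa in joints(A))
               for sb in joints(B))
```

Since the empty coalition has only the empty joint action, which trivially guarantees ⊤, instantiating `alpha` with A = ∅ and φ = the set of all states collapses to the CL clause, as the assertions confirm on random models.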
Formal semantics of ConStR

Given coalitions A, B ⊆ Agt and joint actions σ_A for A and σ_B for B, we define σ_A ⊎ σ_B to be the joint action for A ∪ B which equals σ_A when restricted to A and equals σ_B|_{B\A} when restricted to B \ A. Thus, in particular, σ_A ⊎ σ_B = σ_A for any B ⊆ A ⊆ Agt.

Now, let M = (S, {Σ_a}_{a∈Agt}, g, V) be a game model. The formal semantics of ConStR extends that of CL to the new operators as follows:

M, s ⊨ ⟨A⟩_c(φ; ⟨B⟩ψ) ⇔ A has a joint action σ_A such that M, u ⊨ φ for every u ∈ Out[s, σ_A], and B has a joint action σ_B such that M, u ⊨ ψ for every u ∈ Out[s, σ_A ⊎ σ_B].

M, s ⊨ [A]_α(φ; ⟨B⟩ψ) ⇔ B has a joint action σ_B such that for every joint action σ_A of A, if M, u ⊨ φ for every u ∈ Out[s, σ_A], then M, u ⊨ ψ for every u ∈ Out[s, σ_A ⊎ σ_B].

M, s ⊨ [A]_β(φ; ⟨B⟩ψ) ⇔ for every joint action σ_A of A such that M, u ⊨ φ for every u ∈ Out[s, σ_A], B has a joint action σ_B (generally, dependent on σ_A) such that M, u ⊨ ψ for every u ∈ Out[s, σ_A ⊎ σ_B].

Remark:
The semantics of each of the operators above can be re-stated to consider joint actions for B \ A rather than for the whole of B. For instance, it can be easily verified for the latter operator that M, s ⊨ [A]_α(φ; ⟨B⟩ψ) iff B \ A has a joint action σ_{B\A} such that for every joint action σ_A of A, if M, u ⊨ φ for every u ∈ Out[s, σ_A], then M, u ⊨ ψ for every u ∈ Out[s, σ_A ⊎ σ_{B\A}].

Here we provide a few simple examples illustrating the semantics of ConStR.

Example 4.1.
The game model M below has two players, a and b. Each has two actions at state s: a_1, a_2, resp. b_1, b_2.

(Figure: the transition diagram of M is lost in the extraction; its successor states carry the valuations {p}, {p,q}, {q}, {p}, and the transitions from s are labelled by the action profiles (a_i, b_j).)

(NB: We have preserved the box-like notation for [A] from CL, even though it is not consistent with ours.)

It is easy to see that M, s ⊨ ⟨a⟩_c(p; ⟨b⟩q), while M, s ⊭ [b]q. Thus, an agent may have only conditional ability to achieve its goal.

Example 4.2. The game model M below has two players, a and b. The agent a has 3 actions at state s: a_1, a_2, a_3, and b has 2 actions: b_1, b_2.

(Figure: the transition diagram of M is lost in the extraction; its successor states carry the valuations {p}, {p}, {p,q}, {p}, {p,q}, {}, and the transitions from s are labelled by the action profiles (a_i, b_j).)

Note that:

• M, s ⊨ [a]_β(p; ⟨b⟩q). Indeed, agent a has two actions at state s that ensure p: a_1 and a_2. For each of them, b has an action to ensure q: one choice of b works in response to a_1 and the other in response to a_2.

• M, s ⊭ [a]_α(p; ⟨b⟩q). Indeed, neither b_1 nor b_2 ensures q against both choices a_1 and a_2 of a. Thus, b does not have a uniform action to ensure q against any action of a that ensures p. Therefore, [a]_α(p; ⟨b⟩q) and [a]_β(p; ⟨b⟩q) are semantically different.

• However, if the outcomes of two suitable action profiles are swapped, then [a]_α(p; ⟨b⟩q) becomes true at s in the resulting model.

• (M, s) does not satisfy the ATL* formula [[a]](Xp → ⟨⟨b⟩⟩Xq) (where [[C]]φ := ¬⟨⟨C⟩⟩¬φ), hence the latter is not equivalent to [a]_β(p; ⟨b⟩q).
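The three truth clauses can be rendered directly executable. The sketch below is our own illustrative encoding (not the paper's), and the test model mirrors the shape of Example 4.2: agent a has two actions guaranteeing p, but b has only action-by-action responses ensuring q, so β holds while α fails.

```python
from itertools import product

AGENTS = ('a', 'b')

def outcomes(model, s, joint):
    """Out[s, sigma]: outcomes of all profiles extending the joint action."""
    acts, out = model
    return {out[(s, p)] for p in product(*(acts[s][ag] for ag in AGENTS))
            if all(p[AGENTS.index(ag)] == x for ag, x in joint.items())}

def joints(model, s, C):
    """All joint actions of coalition C at s, as dicts agent -> action."""
    acts, _ = model
    return [dict(zip(C, j)) for j in product(*(acts[s][ag] for ag in C))]

def merge(sa, sb):
    """sigma_A combined with sigma_B: agrees with sigma_A on A, with sigma_B elsewhere."""
    return {**sb, **sa}

def sem_c(model, s, A, phi, B, psi):
    """M, s |= <A>_c(phi; <B>psi)."""
    return any(outcomes(model, s, sa) <= phi and
               any(outcomes(model, s, merge(sa, sb)) <= psi
                   for sb in joints(model, s, B))
               for sa in joints(model, s, A))

def sem_alpha(model, s, A, phi, B, psi):
    """M, s |= [A]_alpha(phi; <B>psi): one uniform sigma_B for all sigma_A."""
    return any(all(not outcomes(model, s, sa) <= phi or
                   outcomes(model, s, merge(sa, sb)) <= psi
                   for sa in joints(model, s, A))
               for sb in joints(model, s, B))

def sem_beta(model, s, A, phi, B, psi):
    """M, s |= [A]_beta(phi; <B>psi): sigma_B may depend on sigma_A."""
    return all(not outcomes(model, s, sa) <= phi or
               any(outcomes(model, s, merge(sa, sb)) <= psi
                   for sb in joints(model, s, B))
               for sa in joints(model, s, A))
```

On the test model, the α-check fails precisely because no single b-action works against both a-actions that ensure p, matching the α/β contrast of Example 4.2.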
5 Bisimulations for ConStR

The definition of ConStR-bisimulation involves, besides atom equivalence, three nested Forth and Back conditions, one for each of the respective new operators O_c, O_α, and O_β. As with the definition of CL-bisimulation given in Section 2, we only define ConStR-bisimulation within a game model; this generalises easily to ConStR-bisimulation between game models. Note that the nested back-and-forth conditions are needed because of the patterns of quantification in the semantic definitions of the new strategic operators: first quantification over the actions of A and B, and then over the outcomes generated by these actions. (Each of these conditions is a respective variation of the bisimulation conditions for the basic strategic operators in the logics SFCL and GPCL defined in [11].)

Definition 5.1 (ConStR-bisimulation). Let M = (S, {Σ_a}_{a∈Agt}, g, V) be a game model. A binary relation R ⊆ S × S is a ConStR-bisimulation in M if it satisfies the following conditions for every pair of states (s_1, s_2) such that s_1 R s_2 and for all coalitions A and B:

Atom equivalence: For every p ∈ Π: s_1 ∈ V(p) iff s_2 ∈ V(p).

O_c-bisimulation: (For illustration, see Figure 1.)

A-Forth_c: For every joint action σ_1 of A at s_1 there is a joint action σ_2 of A at s_2, such that:

A-LocalBack_c: For every u_2 ∈ Out[s_2, σ_2] there exists u_1 ∈ Out[s_1, σ_1] such that u_1 R u_2.

B-Forth_c: For every joint action σ'_1 of B at s_1 there is a joint action σ'_2 of B at s_2, such that:

(A⊎B)-LocalBack_c: For every v_2 ∈ Out[s_2, σ_2 ⊎ σ'_2] there exists v_1 ∈ Out[s_1, σ_1 ⊎ σ'_1] such that v_1 R v_2.

A-Back_c: Like A-Forth_c, but with s_1 and s_2 swapped.

(Figure 1: The A-Forth_c half of O_c-bisimulation; diagram lost in extraction.)

O_α-bisimulation: (For illustration, see Figure 2.)

B-Forth_α: For every joint action σ'_1 of B at s_1 there is a joint action σ'_2 of B at s_2, such that:

A-Back_α: For every joint action σ_2 of A at s_2 there is a joint action σ_1 of A at s_1, such that:

A-LocalForth_α: For every u_1 ∈ Out[s_1, σ_1] there exists u_2 ∈ Out[s_2, σ_2] such that u_1 R u_2.

(A⊎B)-LocalBack_α: For every v_2 ∈ Out[s_2, σ_2 ⊎ σ'_2] there exists v_1 ∈ Out[s_1, σ_1 ⊎ σ'_1] such that v_1 R v_2.

B-Back_α: Like B-Forth_α, but with s_1 and s_2 swapped.

(Figure 2: The B-Forth_α half of O_α-bisimulation; diagram lost in extraction.)

O_β-bisimulation: (For illustration, see Figure 3.)

A-Forth_β: For every joint action σ_1 of A at s_1 there is a joint action σ_2 of A at s_2, such that:

A-LocalBack_β: For every u_2 ∈ Out[s_2, σ_2] there exists u_1 ∈ Out[s_1, σ_1] such that u_1 R u_2.

B-Back_β: For every joint action σ'_2 of B at s_2 there is a joint action σ'_1 of B at s_1, such that:

(A⊎B)-LocalForth_β: For every v_1 ∈ Out[s_1, σ_1 ⊎ σ'_1] there exists v_2 ∈ Out[s_2, σ_2 ⊎ σ'_2] such that v_1 R v_2.

(Figure 3: The A-Forth_β half of O_β-bisimulation; diagram lost in extraction.)
A -Back β : Like A -Forth , but with and swapped.States s , s ∈ M are ConStR -bisimulation equivalent , or just
ConStR -bisimilar if thereis a
ConStR -bisimulation R in M such that s R s . Proposition 5.2 ( ConStR -bisimulation invariance) . Let R be a ConStR -bisimulation in agame model M . Then for every ConStR -formula θ and a pair s , s ∈ M such that s R s : M , s | = θ iff M , s | = θ . roof. Induction on θ . All boolean cases are straightforward. The cases for the 3 strate-gic operators are similar, but we will nevertheless check each of them, to ensure that thebisimulation conditions above are correctly defined. (Case O c ) Let θ = h A i c ( φ ; h B i ψ ), assuming that the claim holds for φ and ψ .Suppose, M , s | = θ . Then A has a joint action σ at s such that, when applied,it guarantees φ and enables B to adopt a joint action σ B that is consistent with σ A andguarantees ψ when additionally applied by B. By A -Forth c , there is a joint action σ of Aat s , such that, by A -LocalBack c , for each u ∈ Out [ s , σ ] there exists u ∈ Out [ s , σ ]such that u R u . By the choice of σ , M , u | = φ for each u ∈ Out [ s , σ ]. It follows, bythe inductive hypothesis applied to φ , that M , u | = φ for each u ∈ Out [ s , σ ]. Moreover,B has a joint action σ at s such that, when applied by B, in addition to A applying σ , itguarantees ψ , i.e. M , u | = ψ for each u ∈ Out [ s , σ ⊎ σ ]. By condition B -Forth c , there isa joint action σ of B at s , such that, by (A ⊎ B )-LocalBack c , for every u ∈ Out [ s , σ ⊎ σ ]there exists u ∈ Out [ s , σ ⊎ σ ] such that u R u . Therefore, by the inductive hypothesisapplied to ψ , M , u | = ψ for each u ∈ Out [ s , σ ⊎ σ ]. Thus, M , s | = θ .The converse is similar, using A -Back c . (Case O α ) Let θ = [A] α ( φ ; h B i ψ ), assuming the claim holds for φ and ψ .Suppose, M , s | = θ . Let σ be a joint action of B at s satisfying the truth conditionof θ . By B -Forth α , there is a joint action σ of B at s , such that A -Back α holds. Now,take any joint action σ of A at s such that M , u | = φ for each u ∈ Out [ s , σ ]. 
Then,by A -Back α , there is a joint action σ of A at s such that, by ( A )-LocalForth α , for every u ∈ Out [ s , σ ] there exists u ∈ Out [ s , σ ] such that u R u . Then, by the inductivehypothesis applied to φ , it follows that M , u | = φ for each u ∈ Out [ s , σ ]. By the choiceof σ , this implies M , u | = ψ for each u ∈ Out [ s , σ ⊎ σ ]. By (A ⊎ B )-LocalBack α , forevery u ∈ Out [ s , σ ⊎ σ ] there exists u ∈ Out [ s , σ ⊎ σ ] such that u R u . Therefore,by the inductive hypothesis applied to ψ , we have M , u | = ψ for each u ∈ Out [ s , σ ⊎ σ ].Thus, σ satisfies the truth condition of θ at s . Hence, M , s | = θ .The converse direction is analogous, using B -Back α . (Case O β ) Let θ = [A] β ( φ ; h B i ψ ), assuming the claim holds for φ and ψ .Suppose, M , s | = θ . Then, consider any joint action σ of A at s such that, when appliedby A, it guarantees φ . (If no such joint action exist at s , then M , s | = θ is vacuously true.)Then, by A -Forth β , there is a joint action σ of A at s , such that, by A -LocalBack β , forevery u ∈ Out [ s , σ ] there exists u ∈ Out [ s , σ ] such that u R u . Then, by the inductivehypothesis applied to φ , it follows that M , u | = φ for each u ∈ Out [ s , σ ]. Therefore, bythe assumption for the truth of θ at s , B has a joint action σ at s such that, when appliedby B, in addition to A applying σ , it guarantees the truth of ψ , i.e. M , u | = ψ for each u ∈ Out [ s , σ ⊎ σ ]. Then, by B -Back β , there is a joint action σ of B at s , such that, by(A ⊎ B )-LocalForth β , for every u ∈ Out [ s , σ ⊎ σ ] there exists u ∈ Out [ s , σ ⊎ σ ] suchthat u R u . Therefore, M , u | = ψ for each u ∈ Out [ s , σ ⊎ σ ]. Thus, M , s | = θ .The converse is analogous, using A -Back β .We also obtain the Hennessy-Milner property for ConStR -bisimulations:
Proposition 5.3 (Hennessy-Milner property). For any finite game model M, the relation of ConStR-equivalence (satisfaction of the same ConStR-formulae) between states in M is a ConStR-bisimulation in M.

Proof. (Sketch) One direction follows from Prop. 5.2. We now prove the converse. Since M is finite, there is a mapping χ from M to the formulae of ConStR that assigns to each state s in M its characteristic formula χ(s), such that s₁, s₂ are ConStR-equivalent if and only if s₁ satisfies χ(s₂) (and vice versa), iff χ(s₁) ≡ χ(s₂). Furthermore, χ(s₁) ∧ χ(s₂) ≡ ⊥ whenever s₁ and s₂ are not ConStR-equivalent. Now, for any set of states Z in M we define χ(Z) := ⋁_{z ∈ Z} χ(z). The crucial observation for proving the claim is that every state s ∈ M satisfies each of the following formulae, enabling the verification of the respective ConStR-bisimulation conditions:

(1) ⋀_{A,B ⊆ Agt} ⋀ { ⟨A⟩_c(χ(Z); ⟨B⟩χ(Y)) | ∃σ ∈ Act_s : Out[s, σ|_A] = Z and Out[s, σ|_{A∪B}] = Y }

(2) ⋀_{A,B ⊆ Agt} ⋀ { [A]_α(χ(Z); ⟨B⟩χ(Y)) | ∃σ ∈ Act_s : ∀σ′ ∈ Act_s, if Out[s, σ′|_A] ⊆ Z and σ′|_{B∖A} = σ|_{B∖A} then Out[s, σ′|_{A∪B}] ⊆ Y }

(3) ⋀_{A,B ⊆ Agt} ⋀ { [A]_β(χ(Z); ⟨B⟩χ(Y)) | ∀σ ∈ Act_s : Out[s, σ|_A] ⊆ Z implies Out[s, σ′|_{A∪B}] ⊆ Y for some σ′ ∈ Act_s such that σ′|_A = σ|_A }

Proposition 5.4.
Let a, b be different agents and p, q be different atomic propositions. Then the following hold.

1. The implication [a]_β(p; ⟨b⟩q) → [a]_α(p; ⟨b⟩q) is not valid.

2. ⟨a⟩_c(p; ⟨b⟩q) is not definable in CL.

3. [a]_c(p | q) (and, consequently, [a]_β(p; ⟨∅⟩q)) is not definable in CL.

4. [b]_α(q; ⟨a⟩p) is not definable in CL.

Proof. The claims follow respectively from Examples 4.2, 5.5, 5.6 and 5.7.

The results above generalise straightforwardly to pairs of coalitions. Even though we state the non-definability claims for CL, they apply likewise even to ATL* with its standard, memory-based semantics, because all formulae of ATL* are invariant under CL-bisimulations with respect to that semantics.

Example 5.5. The game models M₁, M₂ below involve three players: a, b, c.

[Diagrams of M₁ and M₂ omitted: each has states s₀, s₁, s₂, s₃ (respectively t₀, t₁, t₂, t₃) with valuations ∅, {p}, {p,q}, {q}; the transition labels are not recoverable from the source.]

Note that (1) the relation R = {(s_i, t_i) | i = 0, 1, 2, 3} is a CL-bisimulation between M₁ and M₂; (2) M₁, s₀ ⊨ ⟨a⟩_c(p; ⟨b⟩q) but M₂, t₀ ⊭ ⟨a⟩_c(p; ⟨b⟩q).

Example 5.6.
The game models M₁ and M₂ below involve two players: a and b.

[Diagrams of M₁ and M₂ omitted: each has states s₀, s₁, s₂, s₃ (respectively t₀, t₁, t₂, t₃) with valuations ∅, {p}, {p,q}, {q}; the transition labels are not recoverable from the source.]

Note that (1) the relation R = {(s_i, t_i) | i = 0, 1, 2, 3} is a CL-bisimulation between M₁ and M₂; (2) M₁, s₀ ⊨ [a]_c(p | q) but M₂, t₀ ⊭ [a]_c(p | q).

Example 5.7.
The game models M₁, M₂ below involve three players: a, b, c.

[Diagrams of M₁ and M₂ omitted: each has states s₀, s₁, s₂, s₃ (respectively t₀, t₁, t₂, t₃) with valuations ∅, {p}, {p,q}, {q}; the transition labels are not recoverable from the source.]

Note that (1) the relation R = {(s_i, t_i) | i = 0, 1, 2, 3} is a CL-bisimulation between M₁ and M₂; (2) M₁, s₀ ⊨ [b]_α(q; ⟨a⟩p) but M₂, t₀ ⊭ [b]_α(q; ⟨a⟩p).

Axiomatic system for ConStR
Here we propose systems of axiom schemes for each of the basic operators of
ConStR and for the whole logic, without stating completeness claims; these are left for a follow-up work. Some of these axiom schemes are adapted from the axiomatic systems for
SFCL and
GPCL presented in [11].
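Before listing the axiom schemes, the truth conditions of the three conditional strategic operators can be made concrete with a small brute-force evaluator. The sketch below is our own illustrative encoding (model, state names, and function names are assumptions, not from the paper); it transcribes the truth conditions as we read them, with formulas given extensionally as sets of states and coalitions A and B assumed disjoint.

```python
from itertools import product

# A two-agent concurrent game model; s0 plays a matching-pennies round,
# s1 and s2 are sinks. Same actions at every state, for simplicity.
AGENTS = ['a', 'b']
ACTIONS = {'a': [0, 1], 'b': [0, 1]}
TRANS = {                                  # transition function g(state, profile)
    ('s0', (0, 0)): 's1', ('s0', (0, 1)): 's2',
    ('s0', (1, 0)): 's2', ('s0', (1, 1)): 's1',
    ('s1', (0, 0)): 's1', ('s1', (0, 1)): 's1',
    ('s1', (1, 0)): 's1', ('s1', (1, 1)): 's1',
    ('s2', (0, 0)): 's2', ('s2', (0, 1)): 's2',
    ('s2', (1, 0)): 's2', ('s2', (1, 1)): 's2',
}

def joint_actions(coalition):
    """All joint actions of a coalition, as dicts agent -> action."""
    return [dict(zip(coalition, acts))
            for acts in product(*(ACTIONS[ag] for ag in coalition))]

def out(state, sigma):
    """Out[state, sigma]: outcomes under all completions of the partial profile sigma."""
    return {TRANS[(state, tuple(full[ag] for ag in AGENTS))]
            for full in joint_actions(AGENTS)
            if all(full[ag] == act for ag, act in sigma.items())}

def O_c(s, A, phi, B, psi):
    """<A>_c(phi; <B>psi): some A-action guarantees phi and B can,
    in addition, guarantee psi against it."""
    return any(out(s, sA) <= phi and
               any(out(s, {**sA, **sB}) <= psi for sB in joint_actions(B))
               for sA in joint_actions(A))

def O_alpha(s, A, phi, B, psi):
    """[A]_alpha(phi; <B>psi): one B-action works against every A-action
    that guarantees phi."""
    return any(all(out(s, {**sA, **sB}) <= psi
                   for sA in joint_actions(A) if out(s, sA) <= phi)
               for sB in joint_actions(B))

def O_beta(s, A, phi, B, psi):
    """[A]_beta(phi; <B>psi): for every A-action guaranteeing phi, some
    (possibly different) B-action guarantees psi."""
    return all(any(out(s, {**sA, **sB}) <= psi for sB in joint_actions(B))
               for sA in joint_actions(A) if out(s, sA) <= phi)
```

For instance, at s0 with φ = {s1, s2} and ψ = {s1}, O_beta holds (b can answer each a-action separately) while O_alpha fails (no single b-action works against both a-actions), which illustrates why the implication in the direction converse to axiom (ConStR 1) below should not be expected to be valid.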
ConStR
The axiomatic system Ax_ConStR builds on the axiom schemes and rules of the complete axiomatic system for Coalition Logic, Ax_CL, given in [19] and [20]. Analogues of these axiom schemes and rules are added for each of the conditional strategic operators occurring in the fragment of ConStR that is to be axiomatized. (Recall that the coalitional operator of CL is a special case of each of O_c, O_α, O_β.) Some of these common axiom schemes and rules will turn out derivable from the special ones added below, but we are not concerned now with minimality of our system.

Axiom schemes for O_c:

(O_c 1) Monotonicity w.r.t. A: ⟨A⟩_c(φ; ⟨B⟩ψ) → ⟨A ∪ C⟩_c(φ; ⟨B⟩ψ), for any C ⊆ Agt

(O_c 2) Monotonicity w.r.t. B: ⟨A⟩_c(φ; ⟨B⟩ψ) → ⟨A⟩_c(φ; ⟨B ∪ C⟩ψ), for any C ⊆ Agt

(O_c 3) ⟨A⟩_c(φ; ⟨B⟩ψ) → ⟨A ∪ B⟩_c((φ ∧ ψ); ⟨∅⟩⊤)

(O_c 4) ⟨A⟩_c(φ; ⟨∅⟩ψ) ↔ ⟨A⟩_c((φ ∧ ψ); ⟨∅⟩⊤) (NB: the direction → follows from (O_c 3))

(O_c 5) ¬⟨A⟩_c(⊥; ⟨B⟩ψ)

(O_c 6) ⟨A⟩_c(φ; ⟨B⟩ψ) ↔ ⟨A⟩_c(φ; ⟨B ∖ A⟩ψ)

(O_c 7) ⟨A⟩_c(φ; ⟨B⟩ψ) ↔ ⟨A⟩_c(φ; ⟨B⟩(φ ∧ ψ))

Rule of inference, O_c-Monotonicity (O_c-Mon): from φ → φ′ and ψ → ψ′, infer ⟨A⟩_c(φ; ⟨B⟩ψ) → ⟨A⟩_c(φ′; ⟨B⟩ψ′).

Axiom schemes for O_β:

(O_β 1) Monotonicity w.r.t. B: [A]_β(φ; ⟨B⟩ψ) → [A]_β(φ; ⟨B ∪ C⟩ψ), for any C ⊆ Agt

(O_β 2) [A]_β(φ; ⟨∅⟩φ)

(O_β 3) [A]_β(⊥; ⟨∅⟩ψ)

(O_β 4) [∅]_β(⊤; ⟨A⟩φ) → ¬[A]_β(φ; ⟨B⟩⊥)

(O_β 5) [A]_β(φ; ⟨B⟩ψ) ↔ [A]_β(φ; ⟨B ∖ A⟩ψ)

(O_β 6) [A]_β(φ; ⟨B⟩ψ) ↔ [A]_β(φ; ⟨B⟩(φ ∧ ψ))

Rule of inference, O_β-Monotonicity (O_β-Mon): from φ′ → φ and ψ → ψ′, infer [A]_β(φ; ⟨B⟩ψ) → [A]_β(φ′; ⟨B⟩ψ′).

Axiom schemes for O_α: all axioms (O_β 1)–(O_β 6), with O_β replaced by O_α. In addition:

(O_α *) Anti-monotonicity w.r.t. A: [A ∪ C]_α(φ; ⟨B⟩ψ) → [A]_α(φ; ⟨B⟩ψ), for any C ⊆ Agt

Rule of inference, O_α-Monotonicity (O_α-Mon): from φ′ → φ and ψ → ψ′, infer [A]_α(φ; ⟨B⟩ψ) → [A]_α(φ′; ⟨B⟩ψ′).

Axiom schemes for ConStR:

(ConStR 1) [A]_α(φ; ⟨B⟩ψ) → [A]_β(φ; ⟨B⟩ψ)

(ConStR 2) [∅]_β(⊤; ⟨A⟩φ) ∧ [A]_β(φ; ⟨B⟩ψ) → ⟨A⟩_c(φ; ⟨B⟩ψ)

Proposition 6.1 (Soundness). The following hold for the system Ax_ConStR.

1. All axiom schemes are valid in the formal semantics of
ConStR.

2. The analogue of the anti-monotonicity axiom with respect to A for O_β, namely [A ∪ C]_β(φ; ⟨B⟩ψ) → [A]_β(φ; ⟨B⟩ψ) for any A, B, C ⊆ Agt, is not valid.

Proof. 1. Checking the soundness of most of the axioms is a routine application of the formal semantics and we leave the details to the reader. (Note also that an analogue of (ConStR 2) for O_α is easily derivable from (ConStR 1) and (ConStR 2).) We will only verify here the anti-monotonicity axiom scheme (O_α *), which is less straightforward and plays the special role of distinguishing O_β and O_α.

It suffices to prove the validity of the following instance: [A ∪ C]_α(p; ⟨B⟩q) → [A]_α(p; ⟨B⟩q), for any A, B, C ⊆ Agt. Consider any CGM M and a state s in it. For any formula φ we denote ‖φ‖_M := {w ∈ M | M, w ⊨ φ}.

Suppose M, s ⊨ [A ∪ C]_α(p; ⟨B⟩q). Fix a joint action σ_B witnessing the truth of that antecedent. Thus, for every joint action σ_{A∪C} such that Out[s, σ_{A∪C}] ⊆ ‖p‖_M we have that Out[s, σ_{A∪C} ⊎ σ_B] ⊆ ‖q‖_M.

To show that M, s ⊨ [A]_α(p; ⟨B⟩q), we use the same joint action σ_B. Consider any joint action σ_A such that Out[s, σ_A] ⊆ ‖p‖_M. That implies that for any additional joint action σ_{C∖A}, we have for the resulting joint action σ_{A∪C} := σ_A ⊎ σ_{C∖A} that Out[s, σ_{A∪C}] ⊆ ‖p‖_M. Then, by the assumption above, we have that Out[s, σ_{A∪C} ⊎ σ_B] ⊆ ‖q‖_M.

Now, note that every extension of σ_A ⊎ σ_B to a full strategy profile σ′ can be obtained by first selecting a joint action σ′_C which coincides with σ_A ⊎ σ_B on A ∪ B and is defined according to σ′ for the agents in C ∖ (A ∪ B). Equivalently, every such strategy profile σ′ can be generated by first selecting a joint action σ_{A∪C} which extends σ_A with the joint action σ_{C∖A} obtained by restricting σ′_C to C ∖ A, then adding the actions of the agents from B ∖ (A ∪ C) according to σ_A ⊎ σ_B, and then adding the remaining actions according to σ′. Thus, σ′ can be constructed as an extension of σ_{A∪C} ⊎ σ_B, where σ_{A∪C} extends σ_A. Hence, the outcome from s of every such σ′ is in ‖q‖_M.

Therefore Out[s, σ_A ⊎ σ_B] ⊆ ‖q‖_M. Hence, M, s ⊨ [A]_α(p; ⟨B⟩q). Thus, we have shown that M, s ⊨ [A ∪ C]_α(p; ⟨B⟩q) → [A]_α(p; ⟨B⟩q), whence the validity of that formula.

2. The instance [{a, c}]_β(p; ⟨b⟩q) → [a]_β(p; ⟨b⟩q) of the anti-monotonicity principle w.r.t. A for O_β is falsified in the example below of a game model M with three players, a, b and c.

[Diagram of M omitted: it has an initial state with valuation ∅ and successor states with valuations {p,q}, {p}, {p,q}, {p}; the transition labels are not recoverable from the source.]

It is easy to verify that M, s₀ ⊨ [{a, c}]_β(p; ⟨b⟩q) but M, s₀ ⊭ [a]_β(p; ⟨b⟩q). Hence, M, s₀ ⊭ [{a, c}]_β(p; ⟨b⟩q) → [a]_β(p; ⟨b⟩q).

Thus, the anti-monotonicity principle with respect to A is (at least so far) the only axiom scheme in our system that distinguishes O_α from O_β.
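The failure of β-anti-monotonicity can also be checked mechanically. The sketch below is our own hand-rolled countermodel, in the spirit of (but not identical to) the model in the proof of Proposition 6.1(2): player a controls whether p is reached, while q additionally requires b to match c's action, so b can answer a fixed {a,c}-action but not an action of a alone.

```python
from itertools import product

# Illustrative 3-player countermodel (encoding and state names are ours).
AGENTS = ['a', 'b', 'c']
ACTS = [0, 1]

def trans(s, prof):
    a_act, b_act, c_act = prof
    if s != 's0':
        return s                          # all other states are sinks
    if a_act == 1:
        return 's_none'                   # a = 1 forfeits p
    return 's_pq' if b_act == c_act else 's_p'

P = {'s_p', 's_pq'}                       # extension of p
Q = {'s_pq'}                              # extension of q

def joint(coal):
    """All joint actions of a coalition, as dicts agent -> action."""
    return [dict(zip(coal, v)) for v in product(ACTS, repeat=len(coal))]

def out(s, sigma):
    """Out[s, sigma]: outcomes under all completions of the partial profile sigma."""
    return {trans(s, tuple(full[ag] for ag in AGENTS))
            for full in joint(AGENTS)
            if all(full[ag] == v for ag, v in sigma.items())}

def O_alpha(s, A, phi, B, psi):           # [A]_alpha(phi; <B>psi)
    return any(all(out(s, {**sA, **sB}) <= psi
                   for sA in joint(A) if out(s, sA) <= phi)
               for sB in joint(B))

def O_beta(s, A, phi, B, psi):            # [A]_beta(phi; <B>psi)
    return all(any(out(s, {**sA, **sB}) <= psi for sB in joint(B))
               for sA in joint(A) if out(s, sA) <= phi)
```

Here O_beta('s0', ['a','c'], P, ['b'], Q) holds (each {a,c}-action guaranteeing p fixes c, so b can match it) while O_beta('s0', ['a'], P, ['b'], Q) fails (c's action remains free, so no single b-response secures q), falsifying β-anti-monotonicity; the corresponding α-instance of (O_α *) holds at every state of the same model, consistently with Proposition 6.1(1).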
Concluding remarks

First, we note that, while the new strategic operators introduced here can be expressed in a suitable version of Strategy Logic (cf. [17]), we choose, for both conceptual and computational reasons, to stay within a purely modal framework where actions and strategies are not explicitly referred to and quantified over in the language, but are only present in the semantics. We regard this work as a step towards developing a rich technical framework for logic-based conditional strategic reasoning of rational agents. The major further steps and directions include:

1. Completeness proofs of the proposed axiomatizations of the three main fragments and of the entire logic ConStR are currently under construction.

2. We also claim that, for general reasons, ConStR has the finite tree-model property, and is therefore decidable (to be proved in a follow-up paper). We further conjecture that its satisfiability problem is PSPACE-complete, a major argument in favour of the modal approach to formalising conditional strategic reasoning advocated here, as opposed to one based on a version of Strategy Logic. We also leave the question of the precise complexity of model checking subject to further investigation, but conjecture that it will still be tractable in the size of the model, as in CL and ATL.

3. Extension of the framework to full-fledged, long-term conditional strategic reasoning, by extending the language with standard temporal operators, to produce an ATL-like extension of ConStR.

4. Long-term conditional strategic reasoning naturally requires considerations about strategic commitments and model updates (cf. [2] and [3]) and, more generally, requires involving strategy contexts in the semantics ([8]).

5. Adding knowledge in the semantics, and explicitly in the language, by assuming that the agents reason and act under imperfect information.

6. Last, but most important, the long-term objective of this project is to model and capture, by a semantically richer logic-based formalism, the mutually conditional strategic reasoning, where all agents reason about their strategic choices, conditional on the others' strategic choices, conditional on the reasoners' choices, etc., recursively.
References

[1] Abdou, J., Keiding, H.: Effectivity Functions in Social Choice. Springer (1991)

[2] Ågotnes, T., Goranko, V., Jamroga, W.: Alternating-time temporal logics with irrevocable strategies. In: D. Samet (ed.) Proceedings of TARK XI, pp. 15–24 (2007)

[3] Ågotnes, T., Goranko, V., Jamroga, W.: Strategic commitment and release in logics for multi-agent systems (Extended abstract). Tech. Rep. IfI-08-01, Clausthal University of Technology (2008)

[4] Ågotnes, T., Goranko, V., Jamroga, W., Wooldridge, M.: Knowledge and ability. In: H. van Ditmarsch, J. Halpern, W. van der Hoek, B. Kooi (eds.) Handbook of Epistemic Logic, pp. 543–589. College Publications (2015)

[5] Alur, R., Henzinger, T.A., Kupferman, O.: Alternating-time temporal logic. J. ACM 49(5), 672–713 (2002)

[6] Alur, R., Henzinger, T.A., Kupferman, O., Vardi, M.: Alternating refinement relations. In: D. Sangiorgi, R. de Simone (eds.) Proc. of CONCUR'98. Springer LNCS 1466 (1998)

[7] van Benthem, J., Ghosh, S., Verbrugge, R. (eds.): Models of Strategic Reasoning - Logics, Games, and Communities, LNCS, vol. 8972. Springer (2015)

[8] Brihaye, T., Lopes, A.D.C., Laroussinie, F., Markey, N.: ATL with strategy contexts and bounded memory. In: S. Artëmov, A. Nerode (eds.) Proceedings of LFCS'2009, LNCS, vol. 5407, pp. 92–106. Springer (2009)

[9] Fisman, D., Kupferman, O., Lustig, Y.: Rational synthesis. In: Proc. of TACAS 2010, pp. 190–204 (2010)

[10] Goranko, V.: Coalition games and alternating temporal logics. In: J. van Benthem (ed.) Proceedings of TARK VIII, pp. 259–272. Morgan Kaufmann (2001)

[11] Goranko, V., Enqvist, S.: Socially friendly and group protecting coalition logics. In: Proceedings of AAMAS 2018, pp. 372–380 (2018)

[12] Goranko, V., Ju, F.: Towards a logic for conditional local strategic reasoning. In: P. Blackburn, E. Lorini, M. Guo (eds.) Logic, Rationality, and Interaction - 7th International Workshop, LORI 2019, Chongqing, China, October 18-21, 2019, Proceedings, Lecture Notes in Computer Science, vol. 11813, pp. 112–125. Springer (2019)

[13] Gutierrez, J., Harrenstein, P., Wooldridge, M.J.: From model checking to equilibrium checking: Reactive modules for rational verification. Artif. Intel. 248, 123–157 (2017)

[14] van der Hoek, W., Wooldridge, M.: Cooperation, knowledge, and time: Alternating-time temporal epistemic logic and its applications. Studia Logica 75(1), 125–157 (2004)

[15] Kupferman, O., Perelli, G., Vardi, M.Y.: Synthesis with rational environments. Annals of Mathematics and Artificial Intelligence 78(1), 3–20 (2016)

[16] Mogavero, F., Murano, A., Perelli, G., Vardi, M.Y.: Reasoning about strategies: On the model-checking problem. ACM Trans. Comput. Log. 15(4), 34:1–34:47 (2014)

[17] Mogavero, F., Murano, A., Perelli, G., Vardi, M.Y.: Reasoning about strategies: On the satisfiability problem. Logical Methods in Computer Science 13(1) (2017)

[18] Naumov, P., Yuan, Y.: Intelligence in strategic games. CoRR abs/1910.07298 (2019)

[19] Pauly, M.: Logic for social software. Ph.D. thesis, University of Amsterdam (2001)

[20] Pauly, M.: A modal logic for coalitional power in games. Journal of Logic and Computation 12(1), 149–166 (2002)