Intelligence in Strategic Games
Pavel Naumov
Claremont McKenna College, Claremont, California, USA
Yuan Yuan
Vassar College, Poughkeepsie, New York, USA
Abstract
The article considers strategies of coalitions that are based on intelligence: information about moves of some of the other agents. The main technical result is a sound and complete logical system that describes the interplay between the coalition power modality with intelligence and the distributed knowledge modality in games with imperfect information.
1. Introduction
Email addresses: [email protected] (Pavel Naumov), [email protected] (Yuan Yuan)

The Battle of the Atlantic was a classical example of the matching pennies game. British (and American) admirals were choosing the routes of the allied convoys and the Germans picked the routes of their U-boats. If their trajectories crossed, the Germans scored a win; if not, the allies did. Neither of the players appeared to have a strategy that would guarantee victory. The truth, however, was that during most of the battle one of the sides had exactly such a strategy. First, it was the British who broke the German Enigma cipher in summer 1941. Although the Germans did not know about the British success, they changed the codebook and added a fourth wheel to Enigma in February 1942, thus preventing the British from decoding German messages. The very next month, in March 1942, the German navy cryptography unit, B-Dienst, broke the allied code and got access to convoy route information. The Germans lost their ability to read allied communication in December 1942 due to a routine change in the allied codebook. The same month, the British were able to read German communication as a result of capturing a codebook from a U-boat in the Mediterranean. In March 1943, the Germans changed their codebook again and, unknowingly, disabled the British ability to read German messages. Simultaneously, the Germans caught up and started to decipher British transmissions again [1, 2].

At almost any moment during these two years, one of the sides was able to read the communications of the other side. However, neither of them was able to figure out that its own code was insecure, because the two sides were never able to read each other's messages at the same time and thus notice that the other side knew more than it should have. Finally, in May 1943, with the help of the US Navy, the British cracked German messages while the Germans were still reading British ones. It was the first time the allies understood that their code was insecure. A new convoy cipher was immediately introduced and the Germans were never able to break it again, while the allies continued reading Enigma-encrypted transmissions till the end of the war [1].

In this article we study coalition power in strategic games assuming that the coalition has intelligence information about the moves of all or some of its opponents. We write [C]_I ϕ if coalition C has a strategy to achieve outcome ϕ as long as the coalition knows what the move of each agent in set I will be. For example,

[British]_Germans (Convoy is saved).

Modality [C]_∅ ϕ is the coalitional power modality proposed by Marc Pauly [3, 4]. He gave a sound and complete axiomatization of this modality in the case of perfect information strategic games. Various extensions of his logic have been studied before [5, 6, 7, 8, 9, 10, 11, 12, 13]. The strategic power modality with intelligence [a_1, ..., a_n]_{i_1,...,i_k} ϕ can be expressed in Strategy Logic [14, 15] as

∀t_1 ... ∀t_k ∃s_1 ... ∃s_n (a_1, s_1) ... (a_n, s_n)(i_1, t_1) ... (i_k, t_k) Xϕ.
The literature on strategy logic covers model checking [16], synthesis [17], decidability [18, 19], and bisimulation [20]. We are not aware of any completeness results for a strategy logic with quantifiers over strategies. At the same time, our approach is different from the one in Alternating-time Temporal Logic with Explicit Strategies (ATLES) [21]. There, modality ⟨⟨C⟩⟩_ρ denotes the existence of a strategy of coalition C for a fixed commitment ρ of some of the other agents. Unlike ATLES, our modality [C]_B denotes the existence of a strategy of coalition C for any commitment of coalition B, as long as it is known to C. Goranko and Ju proposed several versions of the strategic power with intelligence modality, gave formal semantics of these modalities, and discussed a matching notion of bisimulation [22]. They do not suggest any axioms for these modalities.

An important example of intelligence in strategic games comes from Stackelberg security games. These are two-player games between a defender and an intruder. The defender is using a mixed strategy to assign available resources to targets and the intruder is using a pure strategy to attack one of the targets. The distinctive property of security games is the assumption that the intruder knows the probabilities with which the defender assigns resources to different targets. The intruder uses this information to plan the attack that is likely to bring the most damage. In other words, it is assumed that the intruder has intelligence about the mixed strategy deployed by the defender. Security games have been used by the U.S. Transportation Security Administration, the U.S. Federal Air Marshal Service, the U.S. Coast Guard, and others [23].

Recently, logics of coalition power were generalized to imperfect information games.
Unlike in perfect information strategic games, the outcome of an imperfect information game might depend on the initial state of the game, which could be unknown to the players. For example, consider a hypothetical setting in which an allied convoy and a German U-boat have to choose between three routes from point A to point B: route 1, route 2, or route 3, see Figure 1. Let us furthermore assume that it is known to both sides that one of these routes is blocked by Russian naval mines. Although the mines are located along route 1, neither the allies nor the Germans know this. If the allies have access to intelligence about German U-boats, then, in theory, they have a strategy to save the convoy. For example, if the Germans use route 2, then the allies can use route 3. However, since the allies do not know the location of the Russian mines, even after they receive information about German plans, they still would not know how to save the convoy.
Figure 1: Three routes from point A to point B.

It has been suggested in several recent works that, in the case of games with imperfect information, the strategic power modality in Marc Pauly's logic should be restricted to the existence of know-how strategies [29, 30, 31, 32, 33, 34]. That is, modality [C]ϕ should stand for "coalition C has a strategy, it knows that it has a strategy, and it knows what the strategy is". In this article we adopt this approach to strategic power with intelligence. For example, in the imperfect information setting depicted in Figure 1, after receiving the intelligence report, the British have a strategy, they know that they have a strategy, but they do not know what the strategy is:

¬[British]_Germans (Convoy is saved).

At the same time, since the Russians presumably know the location of their mines,

[British, Russians]_Germans (Convoy is saved).

The main contribution of this article is a complete logical system that describes the interplay between the coalition power with intelligence modality [C]_I and the distributed knowledge modality K_C in an imperfect information setting. The most interesting axiom of our system is a generalized version of Marc Pauly's [3, 4] Cooperation axiom that connects the intelligence I and coalition C parameters of the modality [C]_I. Our proof of the completeness is significantly different from the existing proofs of completeness for games with imperfect information [29, 30, 32, 33, 34]. We highlight these differences at the beginning of Section 6.
2. Outline
The rest of the article is organized as follows. In the next section we introduce the syntax and the formal semantics of our logical system. In Section 4, we list the axioms and the inference rules of the system, compare them to related axioms in previous works, and give two examples of formal proofs in our system. In Section 5 and Section 6 we prove the soundness and the completeness of our logical system, respectively. Section 7 concludes.

Know-how strategies were studied before under different names. While Jamroga and Ågotnes talked about "knowledge to identify and execute a strategy" [24], Jamroga and van der Hoek discussed the "difference between an agent knowing that he has a suitable strategy and knowing the strategy itself" [25]. Van Benthem called such strategies "uniform" [26]. Wang gave a complete axiomatization of "knowing how" as a binary modality [27, 28], but his logical system does not include the knowledge modality.

3. Syntax and Semantics

In this section we define the syntax and the semantics of our formal system. Throughout the article we assume a fixed set of propositional variables and a fixed set of agents A. By a coalition we mean any finite subset of A. Finiteness of coalitions will be important for the proof of the completeness.

Definition 1.
Let Φ be the minimal set of formulae such that

1. p ∈ Φ for each propositional variable p,
2. ϕ → ψ, ¬ϕ ∈ Φ for all ϕ, ψ ∈ Φ,
3. K_C ϕ ∈ Φ for each formula ϕ ∈ Φ and each coalition C ⊆ A,
4. [C]_B ϕ ∈ Φ for each formula ϕ ∈ Φ and all disjoint coalitions B, C ⊆ A.

In other words, the language of our logical system is defined by the grammar:

ϕ := p | ¬ϕ | ϕ → ϕ | K_C ϕ | [C]_B ϕ.

Formula K_C ϕ stands for "coalition C distributively knows ϕ" and formula [C]_B ϕ stands for "coalition C distributively knows a strategy to achieve ϕ as long as it gets intelligence on the actions of coalition B".

For any sets X and Y, by X^Y we mean the set of all functions from Y to X.

Definition 2.
A tuple (W, {∼_a}_{a∈A}, ∆, M, π) is called a game if

1. W is a set of states,
2. ∼_a is an "indistinguishability" equivalence relation on set W for each agent a ∈ A,
3. ∆ is a nonempty set, called the "domain of actions",
4. relation M ⊆ W × ∆^A × W is an "aggregation mechanism",
5. function π maps propositional variables to subsets of W.

An element δ of set ∆^A is called a complete action profile.

Figure 2 depicts a diagram of the Battle of the Atlantic game with imperfect information, as described in the introduction. For the sake of simplicity, we treat the British, the Germans, and the Russians as single agents, not groups of agents. The game has five states: 1, 2, 3, s, and d. States 1, 2, and 3 are three "initial" states that correspond to the possible locations of the Russian mines along route 1, route 2, or route 3. Neither the British nor the Germans can distinguish these states, which is shown in the diagram by labels on the dashed lines connecting these three states. The Russians know the location of the mines and, thus, can distinguish these states. The other two states are "final" states s and d that describe whether the convoy made it safely (s) or was destroyed (d) by either a U-boat or a mine. The designation of some states as "initial" and others as "final" is specific to the Battle of the Atlantic game. In general, our Definition 2 does not distinguish between such states and we allow games to take multiple consecutive transitions from one state to another.
Figure 2: Battle of the Atlantic with imperfect information.
The domain of actions ∆ in this game is {1, 2, 3}. For the British and the Germans, actions represent the choice of routes that they make for their convoys and U-boats respectively. The Russians are passive players in this game. Their action does not affect the outcome of the game. Technically, a complete action profile is a function δ from set {British, Germans, Russians} into set {1, 2, 3}. Since there are only three players in the Battle of the Atlantic game, it is more convenient to represent function δ by a triple bgr ∈ {1, 2, 3}^3, where b is the action of the British, g is the action of the Germans, and r is the action of the Russians.

The mechanism M of the Battle of the Atlantic game is captured by the directed edges in Figure 2 labeled by complete action profiles. Since the value r in a profile bgr does not affect the outcome, it is omitted on the diagram. For example, the directed edge from state 1 to state s is labeled with 23 and 32. This means that the mechanism M contains the triples (1, 231, s), (1, 232, s), (1, 233, s), (1, 321, s), (1, 322, s), and (1, 323, s).

The definition of a game that we use here is more general than the one used in Marc Pauly's original semantics of the logic of coalition power. Namely, we assume that the mechanism is a relation, not a function. On one hand, this allows us to talk about nondeterministic games where for each initial state and each complete action profile there might be more than one outcome. On the other hand, this also allows, for some combinations of the initial state and the complete action profile, there to be no outcome at all. In other words, we do not exclude games in which agents might in some situations have an ability to terminate the game without reaching an outcome. If needed, such games can be excluded and an additional axiom ¬[C]_∅ ⊥ added to the logical system. The proof of the completeness would remain mostly unchanged. We also introduce an indistinguishability relation on states to capture the imperfect information.
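The relational encoding of the mechanism above can be sketched in code. The following is a minimal sketch, assuming a hypothetical encoding in which the mines of initial state w lie on route w and the convoy is saved exactly when the British route avoids both the German route and the mines; the outcome rule and the handling of final states are illustrative assumptions, not part of the formal definition.

```python
from itertools import product

ACTIONS = (1, 2, 3)

# Hypothetical reconstruction of the Figure 2 mechanism as a relation:
# from initial state w (mines on route w), British route b and German
# route g lead to "s" (saved) iff b avoids both g and the mines; the
# Russian action r is ignored. Final states "s" and "d" have no
# outgoing triples, which is allowed since M is a relation, not a function.
M = {(w, (b, g, r), "s" if b != g and b != w else "d")
     for w in (1, 2, 3)
     for b, g, r in product(ACTIONS, repeat=3)}

# The edge from state 1 to state s labeled 23 and 32 corresponds to the
# six triples (1, 23r, s) and (1, 32r, s) with r ranging over {1, 2, 3}.
example = {t for t in M if t[0] == 1 and t[2] == "s" and set(t[1][:2]) == {2, 3}}
assert len(example) == 6
```

The set-comprehension makes the partiality point from the text concrete: membership of a triple (w, δ, u) in M is all there is to the mechanism, and nothing forces every state/profile pair to have an outcome.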
We do it in the same way as it has been done in the previous works on the logics of coalition power with imperfect information cited earlier.

Definition 3.
For any states w, w′ ∈ W and any coalition C, let w ∼_C w′ if w ∼_a w′ for each agent a ∈ C. In particular, w ∼_∅ w′ for any two states of the game.

Lemma 1.
For any coalition C, relation ∼_C is an equivalence relation on set W. □

By an action profile of a coalition C we mean any element of set ∆^C. For any two functions f, g, we write f =_X g if f(x) = g(x) for each x ∈ X. Next is the key definition of this article. Its part 5 gives the semantics of modality [C]_B. This part uses state w′ to capture the fact that the strategy succeeds in each state indistinguishable by coalition C from the current state w. In other words, coalition C knows that this strategy will succeed. Except for the addition of coalition B and its action profile β, this is essentially the same definition as the one used in [29, 30, 31, 35, 33, 32, 34].

Definition 4.
For any game (W, {∼_a}_{a∈A}, ∆, M, π), any state w ∈ W, and any formula ϕ ∈ Φ, let the satisfiability relation w ⊩ ϕ be defined as follows:

1. w ⊩ p if w ∈ π(p), where p is a propositional variable,
2. w ⊩ ¬ϕ if w ⊮ ϕ,
3. w ⊩ ϕ → ψ if w ⊮ ϕ or w ⊩ ψ,
4. w ⊩ K_C ϕ if w′ ⊩ ϕ for each w′ ∈ W such that w ∼_C w′,
5. w ⊩ [C]_B ϕ if for any action profile β ∈ ∆^B of coalition B there is an action profile γ ∈ ∆^C of coalition C such that for any complete action profile δ ∈ ∆^A and any states w′, u ∈ W, if β =_B δ, γ =_C δ, w ∼_C w′, and (w′, δ, u) ∈ M, then u ⊩ ϕ.

As an example, in the game depicted in Figure 2,

1 ⊩ [British, Russians]_Germans (Convoy is saved).

Indeed, statement 1 ∼_{British, Russians} w′ is true only for one state w′ ∈ W, namely state 1 itself. Then, for any action profile β ∈ {1, 2, 3}^{Germans} of the single-member coalition {Germans} we can define the action profile γ ∈ {1, 2, 3}^{British, Russians} as, for example,

γ(a) = 3, if a = British and β(Germans) = 2,
γ(a) = 2, if a = British and β(Germans) = 3,
γ(a) = 1, if a = Russians.

In other words, if profile β assigns the Germans route 2, then profile γ assigns the British route 3 and vice versa. The assignment of an action to the Russians is not important. This way, no matter what the Germans' action is, the British convoy will avoid both the German U-boat and the Russian mines in the game that starts from state w′ = 1. At the same time,

1 ⊩ ¬[British]_Germans (Convoy is saved),

because without the Russians the British cannot distinguish states 1, 2, and 3. In other words, 1 ∼_British w′ for any state w′ ∈ {1, 2, 3}. Thus, for each action profile β ∈ {1, 2, 3}^{Germans} we would need a single action profile γ ∈ {1, 2, 3}^{British, Russians} that brings the convoy to state s from any of the states 1, 2, and 3.
Such a profile γ does not exist because, even if the British know where the German U-boat will be, there is no single uniform strategy to choose a path that would avoid the Russian mines from all three indistinguishable states 1, 2, and 3.
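Clause 5 of Definition 4 can be checked mechanically on the three-route game. The sketch below is a brute-force model checker written under illustrative assumptions (the agent and state names, and the rule that the convoy is saved iff the British route avoids both the German route and the mines, which in initial state w lie on route w); it is only an aid to the example, not a construction used in the paper.

```python
from itertools import product

AGENTS = ("British", "Germans", "Russians")   # illustrative names
ACTIONS = (1, 2, 3)
STATES = (1, 2, 3, "s", "d")

# Mechanism: from initial state w (mines on route w), British route b
# and German route g lead to "s" iff b avoids both g and the mines.
M = {(w, (b, g, r), "s" if b != g and b != w else "d")
     for w in (1, 2, 3) for b, g, r in product(ACTIONS, repeat=3)}

def indist(agent, w, v):
    """w ~_agent v: only the Russians can tell the initial states apart."""
    both_initial = w in (1, 2, 3) and v in (1, 2, 3)
    return w == v or (agent != "Russians" and both_initial)

def agrees(delta, coalition, profile):
    """delta =_coalition profile: the complete profile extends the partial one."""
    prof = dict(zip(AGENTS, delta))
    return all(prof[a] == profile[i] for i, a in enumerate(coalition))

def sat_intel(w, C, B, goal):
    """w |= [C]_B goal, following clause 5 of Definition 4."""
    for beta in product(ACTIONS, repeat=len(B)):        # intelligence on B
        found = False
        for gamma in product(ACTIONS, repeat=len(C)):   # candidate C-profile
            if all(goal(u)
                   for delta in product(ACTIONS, repeat=len(AGENTS))
                   if agrees(delta, B, beta) and agrees(delta, C, gamma)
                   for v in STATES
                   if all(indist(a, w, v) for a in C)   # w ~_C v
                   for (w1, d, u) in M
                   if w1 == v and d == delta):
                found = True
                break
        if not found:
            return False
    return True

saved = lambda u: u == "s"
print(sat_intel(1, ("British", "Russians"), ("Germans",), saved))  # True
print(sat_intel(1, ("British",), ("Germans",), saved))             # False
```

Under these assumptions, the checker confirms both claims of the example: in state 1, intelligence about the Germans suffices for the coalition {British, Russians}, but not for {British} alone, since no single British route is safe from all three states that the British cannot distinguish.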
4. Axioms
In addition to the propositional tautologies in language Φ, our logical system consists of the following axioms:

1. Truth: K_C ϕ → ϕ,
2. Distributivity: K_C(ϕ → ψ) → (K_C ϕ → K_C ψ),
3. Negative Introspection: ¬K_C ϕ → K_C ¬K_C ϕ,
4. Epistemic Monotonicity: K_C ϕ → K_D ϕ, where C ⊆ D,
5. Strategic Introspection: [C]_B ϕ → K_C [C]_B ϕ,
6. Empty Coalition: K_∅ ϕ → [∅]_∅ ϕ,
7. Cooperation: [C]_B(ϕ → ψ) → ([D]_{B,C} ϕ → [C,D]_B ψ), where sets B, C, and D are pairwise disjoint,
8. Intelligence Monotonicity: [C]_B ϕ → [C]_{B′} ϕ, where B ⊆ B′,
9. None to Analyze: [∅]_B ϕ → [∅]_∅ ϕ.

Note that in the Cooperation axiom above, and often throughout the rest of the article, we abbreviate B ∪ C as B, C. However, we keep writing B ∪ C when the notation B, C could be confusing.

The Truth, the Distributivity, the Negative Introspection, and the Epistemic Monotonicity axioms are the standard axioms of the epistemic logic of distributed knowledge [36]. The Strategic Introspection axiom states that if a coalition C has a "know-how" strategy, then it knows that it has such a strategy. A version of this axiom without intelligence was first introduced in [29]. The Empty Coalition axiom says that if statement ϕ is satisfied in each state of the model, then the empty coalition has a strategy to achieve it. This axiom first appeared in [35]. The Cooperation axiom for strategies without intelligence:

[C](ϕ → ψ) → ([D]ϕ → [C,D]ψ),

where sets C and D are disjoint, was introduced in [3, 4]. This is the signature axiom that appears in all subsequent works on logics of coalition power. The version of this axiom with intelligence is one of the key contributions of the current article.
Our version states that if coalition C knows how to achieve ϕ → ψ assuming it has intelligence about the actions of coalition B, and coalition D knows how to achieve ϕ assuming it has intelligence about the actions of coalitions B and C, then coalitions C and D know how together they can achieve ψ if they have intelligence about the actions of coalition B. We prove the soundness of this axiom in Section 5. The remaining two axioms are original to this article. The Intelligence Monotonicity axiom states that if coalition C has a strategy based on intelligence about the actions of coalition B, then coalition C has such a strategy based on intelligence about any larger coalition. The other form of monotonicity for modality [C]_B, monotonicity on coalition C, is also true. It is not listed among our axioms because it is provable in our system, see Lemma 2. The None to Analyze axiom says that if there is no one to interpret the intelligence information about coalition B, then this intelligence might as well not exist.

We write ⊢ ϕ if formula ϕ is provable from the above axioms using the Modus Ponens, the Epistemic Necessitation, and the Strategic Necessitation inference rules:

ϕ, ϕ → ψ / ψ,    ϕ / K_C ϕ,    ϕ / [C]_B ϕ.

We write X ⊢ ϕ if formula ϕ ∈ Φ is provable from the theorems of our logical system and an additional set of axioms X using only the Modus Ponens inference rule. Note that if set X is empty, then statement X ⊢ ϕ is equivalent to ⊢ ϕ. We say that set X is consistent if X ⊬ ⊥.

The next lemma gives an example of a formal proof in our logical system. This example will be used later in the proof of the completeness.

Lemma 2. ⊢ [C]_B ϕ → [C′]_B ϕ, where C ⊆ C′.

Proof.
Formula ϕ → ϕ is a tautology. Thus, ⊢ [C′ \ C]_B(ϕ → ϕ) by the Strategic Necessitation inference rule. Note that the following formula: [C′ \ C]_B(ϕ → ϕ) → ([C]_B ϕ → [(C′ \ C) ∪ C]_B ϕ) is an instance of the Cooperation axiom. Thus, ⊢ [C]_B ϕ → [(C′ \ C) ∪ C]_B ϕ by the Modus Ponens inference rule. Note also that (C′ \ C) ∪ C = C′ because of the assumption C ⊆ C′. Therefore, ⊢ [C]_B ϕ → [C′]_B ϕ. □

The following lemma states the well-known Positive Introspection principle for distributed knowledge.
Lemma 3. ⊢ K_C ϕ → K_C K_C ϕ.

Proof.
Formula K_C ¬K_C ϕ → ¬K_C ϕ is an instance of the Truth axiom. Thus, ⊢ K_C ϕ → ¬K_C ¬K_C ϕ by contraposition. Hence, taking into account the following instance of the Negative Introspection axiom: ¬K_C ¬K_C ϕ → K_C ¬K_C ¬K_C ϕ, we have

⊢ K_C ϕ → K_C ¬K_C ¬K_C ϕ.   (1)

At the same time, ¬K_C ϕ → K_C ¬K_C ϕ is an instance of the Negative Introspection axiom. Thus, ⊢ ¬K_C ¬K_C ϕ → K_C ϕ by the law of contraposition in propositional logic. Hence, by the Necessitation inference rule, ⊢ K_C(¬K_C ¬K_C ϕ → K_C ϕ). Thus, by the Distributivity axiom and the Modus Ponens inference rule, ⊢ K_C ¬K_C ¬K_C ϕ → K_C K_C ϕ. The latter, together with statement (1), implies the statement of the lemma by propositional reasoning. □
We conclude this section by stating two standard lemmas about our deduction system. These lemmas will be used later in the proof of the completeness.
Lemma 4 (deduction). If X, ϕ ⊢ ψ, then X ⊢ ϕ → ψ.

Proof.
Since
X, ϕ ⊢ ψ refers to provability without the use of the Epistemic Necessitation and the Strategic Necessitation inference rules, the standard proof of the deduction lemma for propositional logic [37, Proposition 1.9] applies to our system as well. □

Lemma 5 (Lindenbaum).
Any consistent set of formulae can be extended to a maximal consistent set of formulae.
Proof.
The standard proof of Lindenbaum's lemma [37, Proposition 2.14] applies here too. □
5. Soundness
In this section we prove the soundness of the axioms of our logical system with respect to the semantics given in Section 3.
Theorem 1 (soundness). If ⊢ ϕ, then w ⊩ ϕ for each state w of each game.

As usual, the soundness of the Truth, the Distributivity, the Negative Introspection, and the Epistemic Monotonicity axioms follows from the assumption that ∼_a is an equivalence relation [36]. Below we prove the soundness of each of the remaining axioms as a separate lemma.

Lemma 6. If w ⊩ [C]_B ϕ, then w ⊩ K_C [C]_B ϕ.

Proof. Consider any state w′ ∈ W such that w ∼_C w′. By Definition 4, it suffices to show that w′ ⊩ [C]_B ϕ. Indeed, consider any action profile β ∈ ∆^B of coalition B. By the same Definition 4, it suffices to show that there is an action profile γ ∈ ∆^C of coalition C such that for any complete action profile δ ∈ ∆^A and any states w′′, u ∈ W, if β =_B δ, γ =_C δ, w′ ∼_C w′′, and (w′′, δ, u) ∈ M, then u ⊩ ϕ.

Since w ∼_C w′, by Lemma 1, it suffices to show that there is an action profile γ ∈ ∆^C of coalition C such that for any complete action profile δ ∈ ∆^A and any states w′′, u ∈ W, if β =_B δ, γ =_C δ, w ∼_C w′′, and (w′′, δ, u) ∈ M, then u ⊩ ϕ. The last statement is true by Definition 4 and the assumption w ⊩ [C]_B ϕ. □

Lemma 7. If w ⊩ K_∅ ϕ, then w ⊩ [∅]_∅ ϕ.

Proof.
Let β ∈ ∆^∅ be an action profile of the empty coalition. By Definition 4, it suffices to show that there is an action profile γ ∈ ∆^∅ of the empty coalition such that for any complete action profile δ ∈ ∆^A and all states w′, u ∈ W, if β =_∅ δ, γ =_∅ δ, w ∼_∅ w′, and (w′, δ, u) ∈ M, then u ⊩ ϕ. Indeed, let γ = β. Thus, it suffices to prove that u ⊩ ϕ for each state u ∈ W. The last statement follows from the assumption w ⊩ K_∅ ϕ by Definition 4. □

Lemma 8. If w ⊩ [C]_B(ϕ → ψ), w ⊩ [D]_{B,C} ϕ, and sets B, C, and D are pairwise disjoint, then w ⊩ [C,D]_B ψ.

Proof.
Consider any action profile β ∈ ∆^B of coalition B. By Definition 4, it suffices to show that there is an action profile γ ∈ ∆^{C∪D} of coalition C ∪ D such that for any complete action profile δ ∈ ∆^A and any states w′, u ∈ W, if β =_B δ, γ =_{C,D} δ, w ∼_{C,D} w′, and (w′, δ, u) ∈ M, then u ⊩ ψ.

Assumption w ⊩ [C]_B(ϕ → ψ), by Definition 4, implies that there is an action profile γ₁ ∈ ∆^C of coalition C such that for any complete action profile δ ∈ ∆^A and any states w′, u ∈ W, if β =_B δ, γ₁ =_C δ, w ∼_C w′, and (w′, δ, u) ∈ M, then u ⊩ ϕ → ψ. (Such an action profile is unique, but this is not important for our proof.)

Define action profile β₁ ∈ ∆^{B∪C} of coalition B ∪ C as follows:

β₁(a) = β(a), if a ∈ B,
β₁(a) = γ₁(a), if a ∈ C.

Action profile β₁ is well-defined because sets B and C are disjoint.

Assumption w ⊩ [D]_{B,C} ϕ, by Definition 4, implies that there is an action profile γ₂ ∈ ∆^D of coalition D such that for any complete action profile δ ∈ ∆^A and any states w′, u ∈ W, if β₁ =_{B,C} δ, γ₂ =_D δ, w ∼_D w′, and (w′, δ, u) ∈ M, then u ⊩ ϕ.

Define action profile γ ∈ ∆^{C∪D} of coalition C ∪ D as follows:

γ(a) = γ₁(a), if a ∈ C,
γ(a) = γ₂(a), if a ∈ D.

Action profile γ is well-defined because sets C and D are disjoint.

Consider any complete action profile δ ∈ ∆^A and any states w′, u ∈ W such that β =_B δ, γ =_{C∪D} δ, w ∼_{C∪D} w′, and (w′, δ, u) ∈ M. Recall from the first paragraph of this proof that it suffices to show that u ⊩ ψ. Note that β =_B δ, γ₁ =_C γ =_C δ, w ∼_C w′, and (w′, δ, u) ∈ M. Thus, u ⊩ ϕ → ψ by the choice of the action profile γ₁. Similarly, β₁ =_B β =_B δ and β₁ =_C γ₁ =_C γ =_C δ. Hence, β₁ =_{B∪C} δ. Also, γ₂ =_D γ =_D δ, w ∼_D w′, and (w′, δ, u) ∈ M. Thus, u ⊩ ϕ by the choice of the action profile γ₂. Therefore, u ⊩ ψ by Definition 4 because u ⊩ ϕ → ψ and u ⊩ ϕ. □

Lemma 9. If w ⊩ [C]_B ϕ, B ⊆ B′, and sets B′ and C are disjoint, then w ⊩ [C]_{B′} ϕ.

Proof.
Consider any action profile β′ ∈ ∆^{B′} of coalition B′. By Definition 4, it suffices to show that there is an action profile γ ∈ ∆^C of coalition C such that for any complete action profile δ ∈ ∆^A and any states w′, u ∈ W, if β′ =_{B′} δ, γ =_C δ, w ∼_C w′, and (w′, δ, u) ∈ M, then u ⊩ ϕ.

Define action profile β ∈ ∆^B of coalition B to be such that β(a) = β′(a) for each agent a ∈ B. Action profile β is well-defined due to the assumption B ⊆ B′ of the lemma. By Definition 4, assumption w ⊩ [C]_B ϕ implies that there is an action profile γ ∈ ∆^C of coalition C such that for any complete action profile δ ∈ ∆^A and any states w′, u ∈ W, if β =_B δ, γ =_C δ, w ∼_C w′, and (w′, δ, u) ∈ M, then u ⊩ ϕ. Note that β =_B β′ by the choice of action profile β. Therefore, there is an action profile γ ∈ ∆^C of coalition C such that for any complete action profile δ ∈ ∆^A and any states w′, u ∈ W, if β′ =_B δ, γ =_C δ, w ∼_C w′, and (w′, δ, u) ∈ M, then u ⊩ ϕ. Since B ⊆ B′, the same holds whenever β′ =_{B′} δ. □

Lemma 10. If w ⊩ [∅]_B ϕ, then w ⊩ [∅]_∅ ϕ.

Proof.
By Definition 4, assumption w ⊩ [∅]_B ϕ implies that for any action profile β ∈ ∆^B of coalition B there is an action profile γ ∈ ∆^∅ of the empty coalition such that for any complete action profile δ ∈ ∆^A and any states w′, u ∈ W, if β =_B δ, γ =_∅ δ, w ∼_∅ w′, and (w′, δ, u) ∈ M, then u ⊩ ϕ. Thus, for any action profile β ∈ ∆^B of coalition B, any complete action profile δ ∈ ∆^A, and any states w′, u ∈ W, if β =_B δ and (w′, δ, u) ∈ M, then u ⊩ ϕ. Hence, for any complete action profile δ ∈ ∆^A and any states w′, u ∈ W, if (w′, δ, u) ∈ M, then u ⊩ ϕ.

Then, for any action profile β ∈ ∆^∅ of the empty coalition there is an action profile γ ∈ ∆^∅ of the empty coalition such that for any complete action profile δ ∈ ∆^A and any states w′, u ∈ W, if β =_∅ δ, γ =_∅ δ, w ∼_∅ w′, and (w′, δ, u) ∈ M, then u ⊩ ϕ.

Therefore, w ⊩ [∅]_∅ ϕ by Definition 4. □
6. Completeness
In this section we prove the completeness of our logical system. We start this proof by fixing a maximal consistent set of formulae X₀ and defining the canonical game G(X₀) = (W, {∼_a}_{a∈A}, ∆, M, π).

There are two major challenges that we need to overcome while defining the canonical model. The first of them is a well-known complication related to the presence of the distributed knowledge modality in our logical system. The second is a challenge specific to strategies with intelligence. To understand the first challenge, recall that in the case of individual knowledge, states are usually defined as maximal consistent sets. Two such sets are ∼_a-equivalent if the sets contain the same K_a formulae. Unfortunately, this construction cannot be easily adapted to distributed knowledge because if two sets share K_a and K_b formulae, then they do not necessarily share K_{a,b} formulae. To overcome this challenge we use a "tree" construction in which each state is a node of a labeled tree. Nodes of the tree are labeled with maximal consistent sets and edges are labeled with coalitions. This construction has been used in logics of know-how with distributed knowledge before [30, 35, 33, 32, 34].

To understand the second challenge, let us first recall the way the canonical game is usually constructed for the logics of coalition power. The commonly used construction defines the domain of actions to be the set of all formulae. Informally, it means that each agent "votes" for a formula that the agent wants to be true in the next state. Of course, not all requests of agents are granted. The canonical game mechanism specifies which requests are granted and which are ignored. There also are canonical game constructions in which a voting ballot, in addition to a formula, must also contain some additional information that acts as a "key" verifying that the voting agent has certain information [33, 34].
So, it is natural to assume that in the case of formula [C]_B ϕ, coalition C should vote for formula ϕ and provide the vote of coalition B as a key. This approach, however, turns out to be problematic. Indeed, in order to satisfy some other formula, say [B]_D ψ, the vote of coalition B would need to include the vote of coalition D as a key. Thus, it appears that the vote of C would need to include the vote of D as well. The situation is further complicated by mutual recursion when one attempts to satisfy formulae [C]_B ϕ and [B]_C χ simultaneously. The solution that we propose in this article avoids this recursion. It turns out that it is not necessary for the key to contain the complete intelligence information. Namely, we assume that each agent votes for a formula and signs her vote with a random integer key. To satisfy formula [C]_B ϕ, the mechanism will guarantee that if all members of coalition C vote for ϕ and sign with integer keys that are larger than the keys of all members in coalition B, then ϕ will be true in the next state. This idea is formalized later in Definition 7.

Definition 5.
A sequence X₀, C₁, X₁, C₂, ..., Cₙ, Xₙ is a state of the canonical game if

1. n ≥ 0,
2. X₀, ..., Xₙ are maximal consistent sets of formulae,
3. C₁, ..., Cₙ are coalitions of agents,
4. {ϕ ∈ Φ | K_{C_k} ϕ ∈ X_{k−1}} ⊆ X_k for each integer k such that 1 ≤ k ≤ n.

We say that sequence w = X₀, C₁, X₁, ..., C_{n−1}, X_{n−1} and sequence u = X₀, C₁, X₁, ..., Cₙ, Xₙ are adjacent. The adjacency relation defines an undirected labeled graph whose nodes are elements of set W and whose edges are specified by the adjacency relation. The node u is labeled by set Xₙ and the edge (w, u) is labeled by each agent in set Cₙ, see Figure 3. Note that this graph has no cycles and thus is a tree. For any agent a ∈ A and any nodes v, v′ ∈ W, we say that
Lemma 11. If w ∼_C w′, then K_C ϕ ∈ hd(w) iff K_C ϕ ∈ hd(w′).

Proof. The assumption w ∼_C w′ implies that each edge along the unique simple path between nodes w and w′ is labeled with all agents in coalition C. Thus, it suffices to show that K_C ϕ ∈ hd(w) iff K_C ϕ ∈ hd(w′) for any two adjacent nodes along this path. Indeed, without loss of generality, let

w = X_0, C_1, X_1, ..., C_{n−1}, X_{n−1},
w′ = X_0, C_1, X_1, ..., C_{n−1}, X_{n−1}, C_n, X_n.

The assumption that the edge between w and w′ is labeled with all agents in coalition C implies that C ⊆ C_n. Next, we show that K_C ϕ ∈ hd(w) iff K_C ϕ ∈ hd(w′).

(⇒): Suppose that K_C ϕ ∈ hd(w) = X_{n−1}. Thus, X_{n−1} ⊢ K_C K_C ϕ by Lemma 3. Hence, X_{n−1} ⊢ K_{C_n} K_C ϕ by the Epistemic Monotonicity axiom and because C ⊆ C_n. Hence, K_{C_n} K_C ϕ ∈ X_{n−1} because set X_{n−1} is maximal. Then, K_C ϕ ∈ X_n = hd(w′) by Definition 5.

(⇐): Suppose that K_C ϕ ∉ hd(w) = X_{n−1}. Thus, ¬K_C ϕ ∈ X_{n−1} because set X_{n−1} is maximal. Hence, X_{n−1} ⊢ K_C ¬K_C ϕ by the Negative Introspection axiom. Hence, X_{n−1} ⊢ K_{C_n} ¬K_C ϕ by the Epistemic Monotonicity axiom and because C ⊆ C_n. Hence, K_{C_n} ¬K_C ϕ ∈ X_{n−1} because set X_{n−1} is maximal. Then, ¬K_C ϕ ∈ X_n by Definition 5. Therefore, K_C ϕ ∉ X_n = hd(w′) because set X_n is consistent. □

This defines the states of the canonical game G(X) and the indistinguishability relations {∼_a}_{a∈A} on these states. We will now define the domain of actions and the mechanism of the canonical game.

Definition 6. Δ is the set of all pairs (ϕ, z) such that ϕ ∈ Φ is a formula and z ∈ ℤ is an integer. If u is a pair (x, y), then by pr_1(u) and pr_2(u) we mean elements x and y respectively.

Definition 7.
Mechanism M is the set of triples (w, δ, u) such that, for any formula [C]_B ϕ ∈ hd(w), if

pr_1(δ(c)) = ϕ for each c ∈ C, and
pr_2(δ(b)) < pr_2(δ(c)) for each b ∈ B and each c ∈ C,

then ϕ ∈ hd(u).

[Figure 4: Battle of the Atlantic Mechanism.]

Figure 4 describes a Battle of the Atlantic inspired example that illustrates the definition of the canonical mechanism. Here set hd(w) contains formulae [British]_German(saved) and [German]_British(¬saved). Thus, the mechanism enables both the British and the Germans to achieve their goal as long as they have intelligence about the move of the other party. The British, the Germans, and the Russians have chosen actions (saved, 17), (¬saved, 23), and (saved, 29) respectively. Note that pr_2(δ(British)) = 17 < 23 = pr_2(δ(German)). Then, according to Definition 7, statement ¬saved (where "saved" is short for "Convoy is saved") will belong to set hd(u), where u is the outcome state of the game. Note that although pr_2(δ(German)) = 23 < 29 = pr_2(δ(Russian)), the Russian action did not save the convoy because statement [Russian]_German(saved) does not belong to set hd(w).

Definition 8. π(p) = {w ∈ W | p ∈ hd(w)}.

This concludes the definition of the canonical game G(X) = (W, {∼_a}_{a∈A}, Δ, M, π). The next important milestone in the proof of completeness is what is sometimes called the "truth" lemma, which connects the syntax and semantics sides of the canonical game construction. In our case, this is Lemma 14. Before that lemma, however, we state and prove two auxiliary statements that will be used in the induction step of the proof of Lemma 14.

Lemma 12. If ¬K_C ϕ ∈ hd(w), then there is a state u ∈ W such that w ∼_C u and ¬ϕ ∈ hd(u).

Proof.
Consider the set of formulae X = {¬ϕ} ∪ {ψ | K_C ψ ∈ hd(w)}. First, we prove that set X is consistent. Suppose the opposite. Thus, there are formulae K_C ψ_1, ..., K_C ψ_n ∈ hd(w) such that

ψ_1, ..., ψ_n ⊢ ϕ.

Hence, by applying Lemma 4 n times,

⊢ ψ_1 → (ψ_2 → ... (ψ_n → ϕ) ...).

Then, by the Epistemic Necessitation inference rule,

⊢ K_C(ψ_1 → (ψ_2 → ... (ψ_n → ϕ) ...)).

Hence, by the distributivity of modality K_C over implication,

⊢ K_C ψ_1 → K_C(ψ_2 → ... (ψ_n → ϕ) ...).

Recall that K_C ψ_1 ∈ hd(w) by the choice of formula K_C ψ_1. Hence, by the Modus Ponens inference rule,

hd(w) ⊢ K_C(ψ_2 → ... (ψ_n → ϕ) ...).

By repeating the previous step n − 1 more times,

hd(w) ⊢ K_C ϕ.

Hence, ¬K_C ϕ ∉ hd(w) due to the consistency of set hd(w). This contradicts the assumption of the lemma. Therefore, set X is consistent. By Lemma 5, there is a maximal consistent extension X̂ of set X. Let u be the sequence w :: C :: X̂. Note that u ∈ W by Definition 5 and the choice of set X, set X̂, and sequence u. Furthermore, w ∼_C u by the definition of relation ∼_a on set W. Finally, ¬ϕ ∈ X ⊆ X̂ = hd(u), again by the choice of set X, set X̂, and sequence u. □

Lemma 13. If ¬[C]_B ϕ ∈ hd(w), then there exists an action profile β ∈ Δ^B of coalition B such that for each action profile γ ∈ Δ^C of coalition C there is a complete action profile δ ∈ Δ^A and states w′, u ∈ W such that β =_B δ, γ =_C δ, w ∼_C w′, (w′, δ, u) ∈ M, and ¬ϕ ∈ hd(u).

Proof.
Let action profile β ∈ Δ^B of coalition B be such that β(b) = (⊤, 0) for each b ∈ B. Consider any action profile γ ∈ Δ^C of coalition C. Choose an integer z such that for each a ∈ C,

pr_2(γ(a)) < z. (2)

Such z exists because coalition C is a finite set of agents. Define the complete action profile δ ∈ Δ^A as follows:

δ(a) = β(a), if a ∈ B; γ(a), if a ∈ C; (⊤, z), otherwise. (3)

Note that ¬[C]_B ϕ ∈ hd(w) ⊆ Φ by the assumption of the lemma. Thus, sets B and C are disjoint by Definition 1. Thus, complete action profile δ is well-defined.

Consider set X such that

X = {¬ϕ} ∪ {σ | [∅]_∅ σ ∈ hd(w)} (4)
∪ {ψ | [P]_Q ψ ∈ hd(w), P ≠ ∅, ∀p ∈ P (pr_1(δ(p)) = ψ), ∀q ∈ Q ∀p ∈ P (pr_2(δ(q)) < pr_2(δ(p)))}.

Next we show that set X is consistent. Suppose the opposite. Thus,

σ_1, ..., σ_m, ψ_1, ψ_2, ψ_3, ..., ψ_n ⊢ ϕ (5)

for some formulae

[∅]_∅ σ_1, ..., [∅]_∅ σ_m ∈ hd(w) (6)

and some formulae

[P_1]_{Q_1} ψ_1, ..., [P_n]_{Q_n} ψ_n ∈ hd(w) (7)

such that

pr_2(δ(q)) < pr_2(δ(p)), (8)
P_i ≠ ∅, (9)

and

pr_1(δ(p)) = ψ_i (10)

for each i ≤ n, each q ∈ Q_i, and each p ∈ P_i. Without loss of generality, we can assume that formulae ψ_1, ψ_2, ψ_3, ..., ψ_n are distinct and none of them is equal to ⊤:

ψ_i ≠ ψ_j, (11)
ψ_i ≠ ⊤ (12)

for each i ≤ n and each j ≠ i.

Claim 1.
Sets P_1, ..., P_n are pairwise disjoint.

Proof.
Consider any agent a ∈ P_i ∩ P_j, where i ≠ j. Then, ψ_i = pr_1(δ(a)) = ψ_j by equation (10), which contradicts assumption (11). □

Claim 2. P_i ⊆ C for each i ≤ n.

Proof.
Consider any agent a ∈ P_i. Suppose that a ∉ C. Then, pr_1(δ(a)) = ⊤ by equation (3). At the same time, pr_1(δ(a)) = ψ_i by equality (10) because a ∈ P_i. Hence, ψ_i = ⊤, which contradicts assumption (12). □

Claim 3. Q_i ⊆ B ∪ C.

Proof.
Consider any agent q ∈ Q_i. Statement (9) implies that there is at least one agent p ∈ P_i. Then, p ∈ C by Claim 2. Thus, pr_2(δ(p)) = pr_2(γ(p)) < z due to equality (3) and inequality (2). Hence, pr_2(δ(q)) < z by inequality (8). Therefore, q ∈ B ∪ C due to equality (3). □

For any nonempty finite set of agents P ⊆ A, let

rank(P) = min_{p ∈ P} pr_2(δ(p)). (13)

Sets P_1, ..., P_n are nonempty by statement (9). Thus, rank(P_i) is defined for each i ≤ n. Without loss of generality, we can assume that, see Figure 5,

rank(P_1) ≤ rank(P_2) ≤ ... ≤ rank(P_n). (14)

Claim 4.
Sets Q_i and P_j are disjoint for 1 ≤ i ≤ j ≤ n.

Proof.
For any q ∈ Q_i and any p ∈ P_j, by inequality (8) and definition (13); assumption (14); and again definition (13),

pr_2(δ(q)) < rank(P_i) ≤ rank(P_j) ≤ pr_2(δ(p)).

Therefore, sets Q_i and P_j are disjoint. □

Let

R = C \ (P_1 ∪ ... ∪ P_n). (15)

Claim 5.
Sets B, R, P_1, ..., P_n are pairwise disjoint.

Proof.
The assumption ¬[C]_B ϕ ∈ hd(w) of the lemma implies that ¬[C]_B ϕ ∈ Φ. Thus, sets B and C are disjoint by Definition 1. Hence, sets B and R are disjoint because of equation (15), and set B is disjoint with each of the sets P_1, ..., P_n by Claim 2 and because sets B and C are disjoint. Also, set R is disjoint with each of the sets P_1, ..., P_n by equation (15). Finally, sets P_1, ..., P_n are pairwise disjoint by Claim 1. □

Claim 6. Q_i ⊆ B ∪ R ∪ P_1 ∪ ... ∪ P_{i−1} for 1 ≤ i ≤ n.

Proof.
Consider any agent q ∈ Q_i such that q ∉ B ∪ R. It suffices to show that q ∈ P_1 ∪ ... ∪ P_{i−1}. Indeed, assumptions q ∈ Q_i and q ∉ B imply that q ∈ C by Claim 3. Thus, q ∈ P_1 ∪ ... ∪ P_n by the assumption q ∉ R and the definition of set R. Therefore, q ∈ P_1 ∪ ... ∪ P_{i−1} by Claim 4 because q ∈ Q_i. □

Let us now return to the proof of the lemma. Statement (5), by Lemma 4 applied m + n times, implies that

⊢ σ_1 → (... (σ_m → (ψ_1 → ... (ψ_n → ϕ) ...)) ...).

Hence, by the Strategic Necessitation inference rule,

⊢ [∅]_∅ (σ_1 → (... (σ_m → (ψ_1 → ... (ψ_n → ϕ) ...)) ...)).

Thus, by the Cooperation axiom (where B = C = D = ∅) and the Modus Ponens inference rule,

⊢ [∅]_∅ σ_1 → [∅]_∅ (σ_2 → (... (σ_m → (ψ_1 → ... (ψ_n → ϕ) ...)) ...)).

Then, by the Modus Ponens inference rule and assumption (6),

hd(w) ⊢ [∅]_∅ (σ_2 → (... (σ_m → (ψ_1 → ... (ψ_n → ϕ) ...)) ...)).
By repeating the previous step m − 1 more times,

hd(w) ⊢ [∅]_∅ (ψ_1 → (ψ_2 → (ψ_3 → ... (ψ_n → ϕ) ...))).

Hence, by the Intelligence Monotonicity axiom and the Modus Ponens inference rule,

hd(w) ⊢ [∅]_B (ψ_1 → (ψ_2 → (ψ_3 → ... (ψ_n → ϕ) ...))).

Thus, by Lemma 2, Claim 5, and the Modus Ponens inference rule,

hd(w) ⊢ [R]_B (ψ_1 → (ψ_2 → (ψ_3 → ... (ψ_n → ϕ) ...))).

Then, by the Cooperation axiom, Claim 5, and the Modus Ponens inference rule,

hd(w) ⊢ [P_1]_{B,R} ψ_1 → [R, P_1]_B (ψ_2 → (ψ_3 → ... (ψ_n → ϕ) ...)).

At the same time, recall that [P_1]_{Q_1} ψ_1 ∈ hd(w) by assumption (7). Thus, hd(w) ⊢ [P_1]_{B,R} ψ_1 by the Intelligence Monotonicity axiom and because Q_1 ⊆ B ∪ R due to Claim 6. Hence, by the Modus Ponens inference rule,

hd(w) ⊢ [R, P_1]_B (ψ_2 → (ψ_3 → ... (ψ_n → ϕ) ...)).

Then, by the Cooperation axiom, Claim 5, and the Modus Ponens inference rule,

hd(w) ⊢ [P_2]_{B,R,P_1} ψ_2 → [R, P_1, P_2]_B (ψ_3 → ... (ψ_n → ϕ) ...).

At the same time, recall that [P_2]_{Q_2} ψ_2 ∈ hd(w) by assumption (7). Thus, hd(w) ⊢ [P_2]_{B,R,P_1} ψ_2 by the Intelligence Monotonicity axiom and because Q_2 ⊆ B ∪ R ∪ P_1 due to Claim 6. Hence, by the Modus Ponens inference rule,

hd(w) ⊢ [R, P_1, P_2]_B (ψ_3 → ... (ψ_n → ϕ) ...).

By repeating the previous step n − 2 more times,

hd(w) ⊢ [R, P_1, P_2, ..., P_n]_B ϕ.

Equation (15) implies that R ⊆ C. Thus, R ∪ P_1 ∪ P_2 ∪ ... ∪ P_n ⊆ C by Claim 2. Then, hd(w) ⊢ [C]_B ϕ by Lemma 2, which contradicts the assumption ¬[C]_B ϕ ∈ hd(w) and the consistency of set hd(w). Therefore, set X, as defined by equation (4), is consistent. By Lemma 5, there exists a maximal consistent extension X̂ of set X.

Let w′ be state w and let u be the sequence w :: ∅ :: X̂.

Claim 7. u ∈ W.

Proof.
By Definition 5, it suffices to prove that

{ϕ ∈ Φ | K_∅ ϕ ∈ hd(w)} ⊆ hd(u).

Indeed, let K_∅ ϕ ∈ hd(w). Thus, hd(w) ⊢ [∅]_∅ ϕ by the Empty Coalition axiom. Hence, [∅]_∅ ϕ ∈ hd(w) due to the maximality of set hd(w). Then, ϕ ∈ X by equation (4). Thus, ϕ ∈ X̂ by the choice of set X̂. Therefore, ϕ ∈ hd(u) by the choice of sequence u. □

Note that β =_B δ because of equation (3) and the assumption β ∈ Δ^B of the lemma. Similarly, γ =_C δ. Also, w ∼_C w′ by Lemma 1 because w′ = w. Additionally, ¬ϕ ∈ hd(u) because, due to equation (4), we have ¬ϕ ∈ X ⊆ X̂ = hd(u).

Since w = w′, to finish the proof of the lemma, we need to show that (w, δ, u) ∈ M. Consider any formula [P]_Q ψ ∈ hd(w) such that pr_1(δ(p)) = ψ for each agent p ∈ P and pr_2(δ(q)) < pr_2(δ(p)) for each agent q ∈ Q and each agent p ∈ P. By Definition 7, it suffices to prove that ψ ∈ hd(u). We consider the following two cases separately.

Case I: P ≠ ∅. Thus, ψ ∈ X by equation (4). Therefore, ψ ∈ X ⊆ X̂ = hd(u).

Case II: P = ∅. Then, assumption [P]_Q ψ ∈ hd(w) can be rewritten as [∅]_Q ψ ∈ hd(w). Thus, hd(w) ⊢ [∅]_∅ ψ by the None to Analyze axiom. Hence, [∅]_∅ ψ ∈ hd(w) because of the maximality of set hd(w). Thus, ψ ∈ X by equation (4). Therefore, ψ ∈ X ⊆ X̂ = hd(u).

This concludes the proof of Lemma 13. □

We are now ready to state and prove the main induction lemma of the proof of completeness, which is sometimes also referred to as the truth lemma.
Lemma 14. w ⊩ ϕ iff ϕ ∈ hd(w) for each formula ϕ ∈ Φ.

Proof. We prove the lemma by induction on the structural complexity of formula ϕ. The case of propositional variables follows from Definition 4 and Definition 8. The case when formula ϕ is a negation or an implication follows from Definition 4 and the assumption that set hd(w) is a maximal consistent set of formulae in the standard way.

Suppose that formula ϕ has the form K_C ψ.

(⇒): Suppose that K_C ψ ∉ hd(w). Thus, ¬K_C ψ ∈ hd(w) due to the maximality of set hd(w). Hence, by Lemma 12, there is a state u ∈ W such that w ∼_C u and ¬ψ ∈ hd(u). Then, ψ ∉ hd(u) due to the consistency of set hd(u). Thus, u ⊮ ψ by the induction hypothesis. Therefore, w ⊮ K_C ψ by Definition 4.

(⇐): Assume that K_C ψ ∈ hd(w). Consider any state u ∈ W such that w ∼_C u. By Definition 4, it suffices to show that u ⊩ ψ. Indeed, by Lemma 11, assumptions K_C ψ ∈ hd(w) and w ∼_C u imply that K_C ψ ∈ hd(u). Hence, hd(u) ⊢ ψ by the Truth axiom and the Modus Ponens inference rule. Thus, ψ ∈ hd(u) due to the maximality of set hd(u). Therefore, u ⊩ ψ by the induction hypothesis.

Suppose that formula ϕ has the form [C]_B ψ.

(⇒): Assume that [C]_B ψ ∉ hd(w). Hence, ¬[C]_B ψ ∈ hd(w) due to the maximality of set hd(w). Thus, by Lemma 13, there exists an action profile β ∈ Δ^B of coalition B such that for each action profile γ ∈ Δ^C of coalition C there is a complete action profile δ ∈ Δ^A and states w′, u ∈ W such that β =_B δ, γ =_C δ, w ∼_C w′, (w′, δ, u) ∈ M, and ¬ψ ∈ hd(u).
Note that ¬ψ ∈ hd(u) implies ψ ∉ hd(u) due to the consistency of set hd(u), which in turn implies u ⊮ ψ by the induction hypothesis.

Thus, there exists an action profile β ∈ Δ^B of coalition B such that for each action profile γ ∈ Δ^C of coalition C there is a complete action profile δ ∈ Δ^A and states w′, u ∈ W such that β =_B δ, γ =_C δ, w ∼_C w′, (w′, δ, u) ∈ M, and u ⊮ ψ. Therefore, w ⊮ [C]_B ψ by Definition 4.

(⇐): Assume that [C]_B ψ ∈ hd(w). Consider any action profile β ∈ Δ^B of coalition B. Set B is finite by Definition 1. Let z be any integer such that pr_2(β(b)) < z for each agent b ∈ B. Define action profile γ ∈ Δ^C as γ(c) = (ψ, z) for each c ∈ C. Consider any complete action profile δ ∈ Δ^A, any state w′ ∈ W, and any state u ∈ W such that β =_B δ, γ =_C δ, w ∼_C w′, and (w′, δ, u) ∈ M. By Definition 4, it suffices to show that u ⊩ ψ.

Assumption [C]_B ψ ∈ hd(w) implies that hd(w) ⊢ K_C [C]_B ψ by the Strategic Introspection axiom and the Modus Ponens inference rule. Thus, K_C [C]_B ψ ∈ hd(w) by the maximality of set hd(w). Hence, K_C [C]_B ψ ∈ hd(w′) by the assumption w ∼_C w′ and Lemma 11. Then, hd(w′) ⊢ [C]_B ψ by the Truth axiom and the Modus Ponens inference rule. Hence, [C]_B ψ ∈ hd(w′) due to the maximality of set hd(w′).

By the choice of action profile γ and the assumption γ =_C δ, for each c ∈ C we have pr_1(δ(c)) = pr_1(γ(c)) = ψ. At the same time, by the assumption β =_B δ, the choice of integer z, the choice of action profile γ, and the assumption γ =_C δ,

pr_2(δ(b)) = pr_2(β(b)) < z = pr_2(γ(c)) = pr_2(δ(c))

for each agent b ∈ B and each agent c ∈ C.

Thus, ψ ∈ hd(u) by Definition 7 and the assumption (w′, δ, u) ∈ M. Therefore, u ⊩ ψ by the induction hypothesis.
□

Now we are ready to state and prove the strong completeness of our logical system.
Theorem 2. If Y ⊬ ϕ, then there is a state w of a game such that w ⊩ χ for each χ ∈ Y and w ⊮ ϕ.

Proof.
Suppose that Y ⊬ ϕ. By Lemma 5, there exists a maximal consistent set of formulae X such that Y ∪ {¬ϕ} ⊆ X. Let w be the single-element sequence X. By Definition 5, sequence w is a state of the canonical game G(X). Note that χ ∈ X = hd(w) for each formula χ ∈ Y and ¬ϕ ∈ X = hd(w) by the choice of set X and the choice of sequence w. Thus, w ⊩ χ for each χ ∈ Y and w ⊩ ¬ϕ by Lemma 14. Therefore, w ⊮ ϕ by Definition 4. □
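To summarize the construction that drives the proofs above, the key-based canonical mechanism of Definition 7 can be sketched in code. This is our own illustration, not part of the formal development; the function name `successor_formulas`, the encoding of hd(w) as (C, B, φ) triples, and the concrete numbers (which mirror the Battle of the Atlantic example of Figure 4) are all assumptions.

```python
# Illustrative sketch (not from the paper) of the canonical mechanism of
# Definition 7. An action is a pair (formula, integer key): pr1 is the
# formula component, pr2 the key. For every formula [C]_B phi in hd(w):
# if every member of C voted for phi and every key used in C exceeds every
# key used in B, then phi must hold in the successor state.

def successor_formulas(hd_w, delta):
    """hd_w: triples (C, B, phi) for the formulas [C]_B phi in hd(w).
    delta: dict agent -> (formula, key). Returns the granted formulas."""
    granted = set()
    for C, B, phi in hd_w:
        votes_ok = all(delta[c][0] == phi for c in C)
        keys_ok = all(delta[b][1] < delta[c][1] for b in B for c in C)
        if votes_ok and keys_ok:
            granted.add(phi)
    return granted

# Battle-of-the-Atlantic-style scenario in the spirit of Figure 4:
hd_w = [
    (["British"], ["German"], "saved"),      # [British]_German(saved)
    (["German"], ["British"], "not saved"),  # [German]_British(not saved)
]
delta = {
    "British": ("saved", 17),
    "German": ("not saved", 23),
    "Russian": ("saved", 29),  # highest key, but no supporting formula
}
print(successor_formulas(hd_w, delta))  # {'not saved'}
```

Only the German request fires: the German key 23 exceeds the British key 17, while the Russian key 29, although largest, supports no formula of hd(w). Note also that a fixed integer key always admits a larger one, which is exactly why this construction needs an infinite action domain, as discussed in the conclusion.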
7. Conclusion
In this article we proposed the notion of a strategy with intelligence and gave a sound and complete axiomatization of a bimodal logic that describes the interplay between the strategic power with intelligence and the distributed knowledge modalities in the setting of strategic games with imperfect information. A natural question is the decidability of the proposed logical system. Unfortunately, the standard filtration technique [38] cannot be easily applied here to produce a finite model. Indeed, it is crucial for the proof of completeness (see Definition 7) that for each action there is another action with a higher value of the second component. Thus, for the proposed construction to work, the domain of choices must be infinite. One perhaps might be able to overcome this by changing the second component of the action from an infinite linearly ordered set to a finite circularly "ordered" set as in the rock-paper-scissors game.

Another possible extension of this work is to consider a modality K^B_C that captures the knowledge of coalition C after coalition B disclosed to C the actions that it intends to take.

References

[1] S. Budiansky, German vs. allied codebreakers in the battle of the Atlantic, International Journal of Naval History 1 (1).
[2] J. P. M. Showell, German Naval Code Breakers, 1st Edition, Naval Inst Pr, 2003.
[3] M. Pauly, Logic for social software, Ph.D. thesis, Institute for Logic, Language, and Computation (2001).
[4] M. Pauly, A modal logic for coalitional power in games, Journal of Logic and Computation 12 (1) (2002) 149–166. doi:10.1093/logcom/12.1.149.
[5] V. Goranko, Coalition games and alternating temporal logics, in: Proceedings of the 8th conference on Theoretical aspects of rationality and knowledge, Morgan Kaufmann Publishers Inc., 2001, pp. 259–272.
[6] W. van der Hoek, M. Wooldridge, On the logic of cooperation and propositional control, Artificial Intelligence 164 (1) (2005) 81–119.
[7] S.
Borgo, Coalitions in action logic, in: 20th International Joint Conference on Artificial Intelligence, 2007, pp. 1822–1827.
[8] L. Sauro, J. Gerbrandy, W. van der Hoek, M. Wooldridge, Reasoning about action and cooperation, in: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS '06, ACM, New York, NY, USA, 2006, pp. 185–192. doi:10.1145/1160633.1160663.
[9] T. Ågotnes, P. Balbiani, H. van Ditmarsch, P. Seban, Group announcement logic, Journal of Applied Logic 8 (1) (2010) 62–81. doi:10.1016/j.jal.2008.12.002.
[10] T. Ågotnes, W. van der Hoek, M. Wooldridge, Reasoning about coalitional games, Artificial Intelligence 173 (1) (2009) 45–79. doi:10.1016/j.artint.2008.08.004.
[11] F. Belardinelli, Reasoning about knowledge and strategies: Epistemic strategy logic, in: Proceedings 2nd International Workshop on Strategic Reasoning, SR 2014, Grenoble, France, April 5-6, 2014, Vol. 146 of EPTCS, 2014, pp. 27–33.
[12] V. Goranko, W. Jamroga, P. Turrini, Strategic games and truly playable effectivity functions, Autonomous Agents and Multi-Agent Systems 26 (2) (2013) 288–314. doi:10.1007/s10458-012-9192-y.
[13] V. Goranko, S. Enqvist, Socially friendly and group protecting coalition logics, in: Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems, International Foundation for Autonomous Agents and Multiagent Systems, 2018, pp. 372–380.
[14] K. Chatterjee, T. A. Henzinger, N. Piterman, Strategy logic, Information and Computation 208 (6) (2010) 677–693.
[15] F. Mogavero, A. Murano, G. Perelli, M. Y. Vardi, Reasoning about strategies: On the model-checking problem, ACM Transactions on Computational Logic (TOCL) 15 (4) (2014) 34.
[16] R. Berthon, B. Maubert, A. Murano, S. Rubin, M. Y. Vardi, Strategy logic with imperfect information, in: Logic in Computer Science (LICS), 2017 32nd Annual ACM/IEEE Symposium on, IEEE, 2017, pp. 1–12.
[17] P. Čermák, A. Lomuscio, A.
Murano, Verifying and synthesising multi-agent systems against one-goal strategy logic specifications, in: Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
[18] F. Mogavero, A. Murano, G. Perelli, M. Y. Vardi, What makes ATL* decidable? A decidable fragment of strategy logic, in: International Conference on Concurrency Theory, Springer, 2012, pp. 193–208.
[19] M. Y. Vardi, G. Perelli, A. Murano, F. Mogavero, Reasoning about strategies: on the satisfiability problem, Logical Methods in Computer Science 13.
[20] F. Belardinelli, C. Dima, A. Murano, Bisimulations for logics of strategies: A study in expressiveness and verification, in: Proceedings of the 16th International Conference on Principles of Knowledge Representation and Reasoning, 2018, pp. 425–434.
[21] D. Walther, W. van der Hoek, M. Wooldridge, Alternating-time temporal logic with explicit strategies, in: Proceedings of the 11th conference on Theoretical aspects of rationality and knowledge, ACM, 2007, pp. 269–278.
[22] V. Goranko, F. Ju, Towards a logic for conditional local strategic reasoning, in: International Workshop on Logic, Rationality and Interaction, Springer, 2019, pp. 112–125.
[23] A. Sinha, F. Fang, B. An, C. Kiekintveld, M. Tambe, Stackelberg security games: Looking beyond a decade of success, in: IJCAI, 2018, pp. 5494–5501.
[24] W. Jamroga, T. Ågotnes, Constructive knowledge: what agents can achieve under imperfect information, Journal of Applied Non-Classical Logics 17 (4) (2007) 423–475. doi:10.3166/jancl.17.423-475.
[25] W. Jamroga, W. van der Hoek, Agents that know how to play, Fundamenta Informaticae 63 (2-3) (2004) 185–219.
[26] J. van Benthem, Games in dynamic-epistemic logic, Bulletin of Economic Research 53 (4) (2001) 219–248. doi:10.1111/1467-8586.00133.
[27] Y. Wang, A logic of knowing how, in: Logic, Rationality, and Interaction, Springer, 2015, pp. 392–405.
[28] Y. Wang, A logic of goal-directed knowing how, Synthese (2016) 1–21.
[29] T. Ågotnes, N.
Alechina, Coalition logic with individual, distributed and common knowledge, Journal of Logic and Computation. doi:10.1093/logcom/exv085.
[30] P. Naumov, J. Tao, Coalition power in epistemic transition systems, in: Proceedings of the 2017 International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2017, pp. 723–731.
[31] R. Fervari, A. Herzig, Y. Li, Y. Wang, Strategically knowing how, in: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, 2017, pp. 1031–1038.
[32] P. Naumov, J. Tao, Strategic coalitions with perfect recall, in: Proceedings of Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[33] P. Naumov, J. Tao, Together we know how to achieve: An epistemic logic of know-how, Artificial Intelligence 262 (2018) 279–300. doi:10.1016/j.artint.2018.06.007