Argument Schemes and Dialogue for Explainable Planning
Quratul-ain Mahesar
Department of Informatics, King's College London, London, [email protected]
Simon Parsons
School of Computing Science, University of Lincoln, Lincoln, [email protected]
ABSTRACT
Artificial Intelligence (AI) is being increasingly deployed in practical applications. However, there is a major concern about whether AI systems will be trusted by humans. In order to establish trust in AI systems, users need to understand the reasoning behind their solutions. Therefore, systems should be able to explain and justify their output. In this paper, we propose an argument scheme-based approach to provide explanations in the domain of AI planning. We present novel argument schemes to create arguments that explain a plan and its key elements, and a set of critical questions that allow interaction between the arguments and enable the user to obtain further information regarding the key elements of the plan. Furthermore, we present a novel dialogue system using the argument schemes and critical questions for providing interactive dialectical explanations.
KEYWORDS
Argument Schemes, Dialogue, Explanation, Planning
1 INTRODUCTION
Artificial intelligence (AI) researchers are increasingly concerned about whether the systems they build will be trusted by humans. One mechanism for increasing trust is to make AI systems capable of explaining their reasoning. In this paper we provide a mechanism for explaining the output of an AI planning system. Automated planning [12] is one of the subfields of AI that focuses on developing techniques to create efficient plans, i.e., sequences of actions that should be performed in order to achieve a set of goals. In practical applications, this set of actions can be passed to a robot, or a manufacturing system, that can follow the plan and produce the desired result.

Explainable AI Planning (XAIP) [11] is a field that involves explaining AI planning systems to a user. The main goal of a plan explanation is to help humans understand the reasoning behind the plans produced by the planners. Approaches to this problem include explaining planner decision-making processes as well as forming explanations from the models. Previous work on model-based explanations of plans includes [11, 22].

To provide explanations for plans, we make use of argumentation. Argumentation [21] is a logical model of reasoning that is aimed at the evaluation of possible conclusions or claims by considering reasons for and against them. These reasons, i.e., arguments and counter-arguments, provide support for and against the conclusions or claims, through a combination of dialectical and logical reasoning. Argumentation is connected to the idea of establishing trust in AI systems by explaining the results and processes of the computation of a solution or decision, and has been used in many applications in multi-agent planning [25] and practical reasoning [1]. Argumentation has also been used in explanation dialogues.
A dialogue system [3] for argumentation and explanation consists of a communication language that defines the speech acts, and protocols that allow transitions in the dialogue. This allows the explainee to challenge and interrogate the given explanations to gain further understanding.

In this paper, we show how to use argumentation to generate explanations in the domain of AI planning, to answer questions such as 'Why a?', where a is an action in the plan, or 'How g?', where g is a goal. Questions like these are inherently based upon definitions held in the domain related to a particular problem and solution. Furthermore, questions regarding particular state information may arise, such as 'Why a here?'. To answer such questions, it is necessary to extract relevant information about actions, states and goals from the planning model. This information allows us to provide supporting evidence to draw conclusions within the explanation process. In addition, it allows us to create a summary explanation of the whole plan and, through dialogue and questioning, extract further information regarding the elements of the plan.

Our approach is built around a set of argument schemes [26] which create arguments that explain and justify a plan and its key elements (i.e., actions, states and goals). We call such arguments plan explanation arguments. These schemes are complemented by critical questions that allow a user to seek further information regarding the plan elements, and allow interaction between different arguments.
Plan explanation arguments can be constructed through the instantiation of argument schemes, and can be questioned with a given scheme's associated critical questions (CQs). Given a plan explanation argument A instantiating an argument scheme, CQs are possible counter-arguments to A, and they question the premises, i.e., presumptions of A about the key elements of the plan, and so shift the burden of proof such that further arguments must be put forward to argue for the plan elements in the premises questioned. Until this burden of proof is met, a plan explanation argument A cannot be said to be justified. The plan explanation arguments enable a planning system to answer such questions at different levels of granularity. In addition, we present a dialogue system utilizing the argument schemes and critical questions for providing an interactive approach to explanation and query answering, and give algorithms that describe the mechanization of a dialogue conversation between the planner and user in the dialogue system. To make our argumentation-based explanations for the planning study concrete, we use a version of the classic blocks world.

2 RELATED WORK
Our research is inspired by work in practical reasoning and argumentation for multi-agent planning. However, our argument scheme-based approach generates explanations for a plan created by an AI planner, which we assume to be a single entity. One of the most well-known scheme-based approaches in practical reasoning is presented in [1], which is accompanied by a set of critical questions that allow agents to evaluate the outcomes on the basis of the social values highlighted by the arguments. [14] is an extension of [1] specifically designed for multi-agent planning, where agents can discuss the suitability of plans based on an argumentation scheme and associated critical questions.
Furthermore, in [24], a model for arguments is presented that contributes to deliberative dialogues based on argumentation schemes for arguing about norms and actions in a multi-agent system. [17] has proposed a similar scheme-based approach for normative practical reasoning where arguments are constructed for a sequence of actions. [20] propose a framework that integrates both the reasoning and dialectical aspects of argumentation to perform normative practical reasoning, enabling an agent to act in a normative environment under conflicting goals and norms and to generate explanations for agent behaviour. [2] explore the use of situation calculus as a language to present arguments about a common plan in a multi-agent system, and [23] present an argumentation-based approach to deliberation, the process by which two or more agents reach a consensus on a course of action. [19] propose a formal model of argumentative dialogues for multi-agent planning, with a focus on cooperative planning, and [10] present a practical solution for multi-agent planning based upon an argumentation-based defeasible planning framework for ambient intelligence applications.

The works that are closest to our research on generating plan explanations using argumentation are given in [5, 18] and [8]. In [5, 18], a dialectical proof based on the grounded semantics [4] is created to justify the actions executed in a plan. More recently, in [8], an assumption-based argumentation framework (ABA) [7] is used to model the planning problem and generate explanations using the related admissible semantics [9]. Our work differs from both, since we present argument schemes to generate the explanation arguments for all the key elements of the plan, and critical questions to allow interaction between the arguments.
Whilst previous research provides a static explanation, in our approach, a dialogue system is presented that allows the user to engage in a dialogue conversation with the AI planner to challenge and interrogate the planner's explanations.
3 ARGUMENTATION BACKGROUND
In this section, we describe plan explanation arguments and their interactions at the level of abstract argumentation [6], together with a set of critical questions that are suitable for arguing over plan explanations, as we will later show. An argumentation framework is simply a set of arguments and a binary attack relation among them. Given an argumentation framework, argumentation theory allows us to identify the sets of arguments that can survive the conflicts expressed in the framework.
Definition 3.1. (Abstract Argumentation Framework [6]) An abstract argumentation framework (AAF) is a pair AAF = (A, R), where A is a set of arguments and R is an attack relation (R ⊆ A × A). The notation (A, B) ∈ R, where A, B ∈ A, denotes that argument A attacks argument B.

Dung [6] originally introduced an extension approach to define the acceptability of arguments, i.e., semantics for an abstract argumentation framework. An extension is a subset of A that represents a set of arguments that can be accepted together. For an AAF = (A, R): (1) A set E ⊆ A is said to be conflict free if and only if there are no A, B ∈ E such that (A, B) ∈ R. (2) A set E ⊆ A is said to be admissible if and only if it is conflict free and defends all its arguments. E defends A if and only if for every argument B ∈ A, if we have (B, A) ∈ R then there exists C ∈ E such that (C, B) ∈ R. (3) A set E ⊆ A is a complete extension if and only if E is an admissible set which contains all the arguments it defends. (4) A set E ⊆ A is a grounded extension if and only if E is a minimal (for set inclusion) complete extension.

Below, we will show how to formulate the explanation of a plan (and its key elements) as an argument in such a way that the argument is only acceptable if the plan is valid. We then adopt the grounded semantics to establish acceptability, so that the explanation argument will only be acceptable if none of the objections, established using critical questions, are supported by the planning model.

4 PLANNING MODEL
In this section, we introduce the planning model that we use. This is based on an instance of the most widely used planning representation, PDDL [13]. The main components are:
Definition 4.1. (Planning Problem) A planning problem is a tuple P = ⟨O, Pr, Δ_I, Δ_G, A, Σ, G⟩, where: (1) O is a set of objects; (2) Pr is a set of predicates; (3) Δ_I ⊆ Pr is the initial state; (4) Δ_G ⊆ Pr is the goal state, and G is the set of goals; (5) A is a finite, non-empty set of actions; (6) Σ is the state transition system.

Definition 4.2. (Predicates) Pr is a set of domain predicates, i.e., properties of objects that we are interested in, that can be true or false. For a state s ⊆ Pr, s+ are the predicates considered true, and s− = Pr \ s+. A state s satisfies predicate pr, denoted s ⊨ pr, if pr ∈ s, and satisfies predicate ¬pr, denoted s ⊨ ¬pr, if pr ∉ s.

Definition 4.3. (Action) An action a = ⟨pre, post⟩ is composed of sets of predicates pre, post that represent a's pre- and postconditions respectively. Given an action a = ⟨pre, post⟩, we write pre(a) and post(a) for pre and post. Postconditions are divided into add (post(a)+) and delete (post(a)−) postcondition sets. An action a can be executed in state s iff the state satisfies its preconditions. The postconditions of an action are applied in the state s at which the action ends, by adding post(a)+ and deleting post(a)−.

Definition 4.4. (State Transition System) The state transition system is denoted by Σ = (S, A, γ), where:
• S is the set of states.
• A is a finite, non-empty set of actions.
• γ : S × A → S, where:
– γ(S, a) → (S \ post(a)−) ∪ post(a)+, if a is applicable in S;
– γ(S, a) → undefined otherwise;
– S is closed under γ.

Figure 1: Blocks World Example (initial state: A on B on C on the table; goal state: C on A, with A and B on the table)

Definition 4.5. (Goal) A goal achieves a certain state of affairs. Each g ∈ G is a set of predicates g = {r_1, ..., r_n}, known as goal requirements (denoted r_i), that should be satisfied in the state to satisfy the goal. We then define a plan as follows.

Definition 4.6. (Plan) A plan π is a sequence of actions ⟨a_1, ..., a_n⟩. A plan π is a solution to a planning problem P, i.e., plan π is valid, iff: (1) only the predicates in Δ_I hold in the initial state: S_0 = Δ_I; (2) the preconditions of action a_i hold at state S_{i−1}, where i = 1, 2, ..., n; (3) γ(S_0, π) satisfies the set of goals G; (4) the set of goals satisfied by plan π is a non-empty (G_π ≠ ∅) consistent subset of goals. Finally, we define the state transitions associated with a plan.

Definition 4.7. (Extended State Transition System) The extended state transition function for a plan is defined as follows:
• γ(S, π) → S if |π| = 0 (π is empty);
• γ(S, π) → γ(γ(S, a_1), ⟨a_2, ..., a_n⟩) if |π| > 0 and a_1 is applicable in S;
• γ(S, π) → undefined otherwise.

Each action in the plan can be performed in the state that results from the application of the previous action in the sequence. After performing the final action, the set of goals G_π will be true. We present the following blocks world example to illustrate.

Example 4.8.
A classic blocks world consists of the following: a flat surface such as a tabletop; an adequate set of identical blocks which are identified by letters; and the blocks can be stacked one on one to form towers of unlimited height. We have three predicates to capture the domain: On(X,Y), block X is on block Y; Ontable(X), block X is on the table; and Clear(X), block X has nothing on it. We have two actions, a_1 and a_2:
(1) a_1: Unstack(X,Y) – pick up clear block X from block Y;
• pre(a_1): {Clear(X), On(X,Y)}
• post(a_1)+: {Ontable(X), Clear(Y)}
• post(a_1)−: {On(X,Y)}
(2) a_2: Stack(X,Y) – place block X onto clear block Y;
• pre(a_2): {Ontable(X), Clear(X), Clear(Y)}
• post(a_2)+: {On(X,Y)}
• post(a_2)−: {Ontable(X), Clear(Y)}
The initial and goal states of the blocks world problem are shown in Figure 1. The initial state Δ_I is given by {Ontable(C), On(B,C), On(A,B), Clear(A)} and the goal state Δ_G is given by {On(C,A), Ontable(A), Ontable(B), Clear(C), Clear(B)}. The action sequence ⟨Unstack(A,B), Unstack(B,C), Stack(C,A)⟩ is a valid plan. □

5 ARGUMENT SCHEMES
In scheme-based approaches [26], arguments are expressed in natural language and a set of critical questions is associated with each scheme, identifying how the scheme can be attacked. Below, we introduce a set of argument schemes for explaining a plan and its key elements, i.e., action, state and goal. The set of critical questions allows a user to ask for a summary explanation for the plan and consequently interrogate the elements of the plan by questioning the premises of the arguments put forward by the planner. The explanation arguments constructed using the argument schemes allow the planner to answer any user questions.
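The planning model just defined can be made concrete in code. The following sketch (ours, not the authors' implementation; helper names such as make_action and is_valid_plan are our own) implements the transition function γ of Definition 4.4 and the validity check of Definition 4.6, instantiated with the blocks world of Example 4.8:

```python
# An executable rendering of gamma (Definition 4.4) and plan validity
# (Definition 4.6), using the blocks world of Example 4.8. States are sets
# of ground predicates; an action carries pre, post+ and post- sets.

def make_action(name, pre, post_add, post_del):
    return {"name": name, "pre": set(pre), "add": set(post_add), "del": set(post_del)}

def gamma(state, action):
    """gamma(S, a): defined only if a is applicable in S (pre(a) holds in S)."""
    if not action["pre"] <= state:
        return None  # undefined
    return (state - action["del"]) | action["add"]

def is_valid_plan(initial, goals, plan):
    """Definition 4.6: each action applicable in turn, final state satisfies G."""
    state = set(initial)
    for a in plan:
        state = gamma(state, a)
        if state is None:
            return False
    return goals <= state

# Ground instances of Unstack and Stack used by the plan of Example 4.8:
unstack_AB = make_action("Unstack(A,B)", {"Clear(A)", "On(A,B)"},
                         {"Ontable(A)", "Clear(B)"}, {"On(A,B)"})
unstack_BC = make_action("Unstack(B,C)", {"Clear(B)", "On(B,C)"},
                         {"Ontable(B)", "Clear(C)"}, {"On(B,C)"})
stack_CA = make_action("Stack(C,A)", {"Ontable(C)", "Clear(C)", "Clear(A)"},
                       {"On(C,A)"}, {"Ontable(C)", "Clear(A)"})

delta_I = {"Ontable(C)", "On(B,C)", "On(A,B)", "Clear(A)"}
delta_G = {"On(C,A)", "Ontable(A)", "Ontable(B)", "Clear(C)", "Clear(B)"}

print(is_valid_plan(delta_I, delta_G, [unstack_AB, unstack_BC, stack_CA]))  # True
```

Note that, as in Definition 4.4, γ returns "undefined" (here None) for an inapplicable action, which is what the validity check detects.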
Definition 5.1.
Given a planning problem P:
• HoldPrecondition(pre(a), S) denotes that the precondition pre(a) of action a holds at the state S.
• HoldGoal(g, S) denotes that the goal g holds at the state S.
• HoldGoals(G, Δ_G) denotes that all the goals in the set of goals G hold at the goal state Δ_G.
• ExecuteAction(a, S) denotes that action a is executed at state S.
• AchieveGoal(a, g) denotes that action a achieves goal g.
• AchieveGoals(π, G) denotes that the sequence of actions π achieves the set of goals G.
• Solution(π, P) denotes that π is a solution to the planning problem P.

Definition 5.2. (Action Argument Scheme
Arg_a) An action argument Arg_a explains how it is possible to execute an action a:
• Premise: HoldPrecondition(pre(a), S). In the current state S, the precondition pre(a) of action a holds.
• Conclusion: ExecuteAction(a, S). Therefore, we can execute action a in the current state S.

Example 5.3.
We consider the blocks world of Example 4.8. The explanation argument for the first action Unstack(A,B) is shown as follows, where:
• pre(Unstack(A,B)) = {Clear(A), On(A,B)}.
• S_0 = {Ontable(C), On(B,C), On(A,B), Clear(A)}.
Premise: HoldPrecondition(pre(Unstack(A,B)), S_0). In the current state {Ontable(C), On(B,C), On(A,B), Clear(A)}, the precondition {Clear(A), On(A,B)} of action Unstack(A,B) holds.
Conclusion: ExecuteAction(Unstack(A,B), S_0). Therefore, we can execute action Unstack(A,B) in the current state {Ontable(C), On(B,C), On(A,B), Clear(A)}. □

Definition 5.4. (State Argument Scheme Arg_S) A state argument Arg_S explains how a state S_i becomes true:
• Premise: γ(S_{i−1}, a) → ((S_{i−1} \ post(a)−) ∪ post(a)+) = S_i. In the current state S_{i−1}, we can execute the action a ∈ π, after which the negative postconditions post(a)− do not hold and the positive postconditions post(a)+ hold, which results in the state S_i.
• Conclusion: Therefore, the state S_i is true.

Example 5.5.
The state argument Arg_S for the state S_1 = {On(B,C), Clear(A), Clear(B), Ontable(A), Ontable(C)} in Example 4.8 is shown as follows, where:
• a_1 = Unstack(A,B).
• post(a_1)− = {On(A,B)}.
• post(a_1)+ = {Ontable(A), Clear(B)}.
• S_0 = {Ontable(C), On(B,C), On(A,B), Clear(A)}.
Premise: γ(S_0, Unstack(A,B)) → ((S_0 \ post(a_1)−) ∪ post(a_1)+) = S_1. In the current state {Ontable(C), On(B,C), On(A,B), Clear(A)}, we can execute the action Unstack(A,B), after which the negative postconditions {On(A,B)} do not hold and the positive postconditions {Ontable(A), Clear(B)} hold, which results in the state {On(B,C), Clear(A), Clear(B), Ontable(A), Ontable(C)}.
Conclusion: Therefore, the state {On(B,C), Clear(A), Clear(B), Ontable(A), Ontable(C)} is true. □

Definition 5.6. (Goal Argument Scheme Arg_g) A goal argument Arg_g explains how a goal is achieved by an action in the plan:
• Premise: γ(S_{i−1}, a) → S_i. In the current state S_{i−1}, we can execute the action a ∈ π, which results in the next state S_i.
• Premise: HoldGoal(g, S_i). In the next state S_i, the goal g holds.
• Conclusion: AchieveGoal(a, g). Therefore, the action a achieves the goal g.

Example 5.7.
The goal argument Arg_g for the goal g = Ontable(A) in Example 4.8 is shown as follows, where:
• a_1 = Unstack(A,B).
• S_0 = {Ontable(C), On(B,C), On(A,B), Clear(A)}.
• S_1 = {Ontable(C), On(B,C), Clear(A), Clear(B), Ontable(A)}.
Premise: γ(S_0, Unstack(A,B)) → S_1. In the current state {Ontable(C), On(B,C), On(A,B), Clear(A)}, we can execute the action Unstack(A,B), which results in the next state {Ontable(C), On(B,C), Clear(A), Clear(B), Ontable(A)}.
Premise: HoldGoal(Ontable(A), S_1). In the next state {Ontable(C), On(B,C), Clear(A), Clear(B), Ontable(A)}, the goal Ontable(A) holds.
Conclusion: AchieveGoal(Unstack(A,B), Ontable(A)). Therefore, the action Unstack(A,B) achieves the goal Ontable(A). □

(Note: the state argument scheme does not apply to the initial state Δ_I; we assume that the user knows the initial state is true by default.)

Definition 5.8. (Plan Summary Argument Scheme
Arg_π) A plan summary argument Arg_π explains that a proposed sequence of actions π = ⟨a_1, a_2, ..., a_n⟩ is a solution to the planning problem P because it achieves a set of goals G:
• Premise: γ(S_0, a_1) → S_1, γ(S_1, a_2) → S_2, ..., γ(S_{n−1}, a_n) → S_n. In the initial state S_0 = Δ_I, we can execute the first action a_1 in the sequence of actions π, which results in the next state S_1; we then execute the next action a_2 in the sequence in the state S_1, which results in the next state S_2; and we carry on until the last action a_n in the sequence is executed in the state S_{n−1}, which results in the goal state S_n = Δ_G.
• Premise: HoldGoals(G, Δ_G). In the goal state Δ_G, all the goals in the set of goals G hold.
• Premise: AchieveGoals(π, G). The sequence of actions π achieves the set of all goals G.
• Conclusion: Solution(π, P). Therefore, π is a solution to the planning problem P.

Example 5.9.
The plan summary argument Arg_π for the solution plan given in Example 4.8 is shown as follows, where:
• S_0 = Δ_I = {Ontable(C), On(B,C), On(A,B), Clear(A)}
• S_1 = {On(B,C), Clear(A), Clear(B), Ontable(A), Ontable(C)}
• S_2 = {Clear(A), Clear(B), Clear(C), Ontable(A), Ontable(B), Ontable(C)}
• S_3 = Δ_G = {On(C,A), Ontable(A), Ontable(B), Clear(C), Clear(B)}
• G = {On(C,A), Ontable(A), Ontable(B), Clear(C), Clear(B)}
• π = ⟨Unstack(A,B), Unstack(B,C), Stack(C,A)⟩
Premise: γ(S_0, Unstack(A,B)) → S_1. In the initial state {Ontable(C), On(B,C), On(A,B), Clear(A)}, we can execute the action Unstack(A,B), which results in the next state {On(B,C), Clear(A), Clear(B), Ontable(A), Ontable(C)}. γ(S_1, Unstack(B,C)) → S_2. In the state {On(B,C), Clear(A), Clear(B), Ontable(A), Ontable(C)}, we can execute the action Unstack(B,C), which results in the next state {Clear(A), Clear(B), Clear(C), Ontable(A), Ontable(B), Ontable(C)}. γ(S_2, Stack(C,A)) → S_3. In the state {Clear(A), Clear(B), Clear(C), Ontable(A), Ontable(B), Ontable(C)}, we can execute the action Stack(C,A), which results in the goal state {On(C,A), Ontable(A), Ontable(B), Clear(C), Clear(B)}.
Premise: HoldGoals(G, Δ_G). In the goal state {On(C,A), Ontable(A), Ontable(B), Clear(C), Clear(B)}, all the goals in the set of goals {On(C,A), Ontable(A), Ontable(B), Clear(C), Clear(B)} hold.
Premise: AchieveGoals(π, G). The sequence of actions ⟨Unstack(A,B), Unstack(B,C), Stack(C,A)⟩ achieves the set of all goals {On(C,A), Ontable(A), Ontable(B), Clear(C), Clear(B)}.
Conclusion: Solution(π, P). Therefore, ⟨Unstack(A,B), Unstack(B,C), Stack(C,A)⟩ is a solution to the planning problem P. □

Having described the schemes and shown how they are used, we turn to the critical questions (CQs). The four CQs given below describe the ways in which the arguments built using the argument schemes can interact with each other. These CQs are associated with (i.e., attack) one or more premises of the arguments constructed using the argument schemes and are in turn answered (i.e., attacked) by the other arguments, which are listed in the description.
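As an aside, the plan summary argument of Example 5.9 can be generated mechanically from the model, and each generated premise is precisely what a CQ may question. A sketch (ours, not the paper's code; helper names are illustrative, with actions as (name, pre, post+, post−) tuples):

```python
# Instantiating the plan summary argument scheme Arg_pi (Definition 5.8):
# chain the transition function over the plan, record one premise per step,
# and conclude Solution(pi, P) only if every premise can be established.

def gamma(state, action):
    name, pre, add, delete = action
    return None if not pre <= state else (state - delete) | add

def plan_summary_argument(delta_I, goals, plan):
    """Return the premises and conclusion of Arg_pi, or None when some
    premise cannot be established (so no summary argument exists)."""
    premises, state = [], set(delta_I)
    for action in plan:
        nxt = gamma(state, action)
        if nxt is None:
            return None
        premises.append(f"gamma(S, {action[0]}) -> S': we can execute {action[0]} here.")
        state = nxt
    if not goals <= state:                        # HoldGoals(G, Delta_G) premise
        return None
    premises.append("HoldGoals(G, Delta_G): all goals hold in the final state.")
    premises.append("AchieveGoals(pi, G): the action sequence achieves G.")
    return premises + ["Solution(pi, P): therefore pi is a solution to P."]

plan = [
    ("Unstack(A,B)", {"Clear(A)", "On(A,B)"}, {"Ontable(A)", "Clear(B)"}, {"On(A,B)"}),
    ("Unstack(B,C)", {"Clear(B)", "On(B,C)"}, {"Ontable(B)", "Clear(C)"}, {"On(B,C)"}),
    ("Stack(C,A)", {"Ontable(C)", "Clear(C)", "Clear(A)"}, {"On(C,A)"}, {"Ontable(C)", "Clear(A)"}),
]
arg = plan_summary_argument(
    {"Ontable(C)", "On(B,C)", "On(A,B)", "Clear(A)"},
    {"On(C,A)", "Ontable(A)", "Ontable(B)", "Clear(C)", "Clear(B)"}, plan)
print(arg[-1])  # Solution(pi, P): therefore pi is a solution to P.
```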
CQ1: Is it possible for the plan π to be a solution? This CQ begins the dialogue with the user, and it is the first question that the user asks when presented with a solution plan π. The argument scheme Arg_π answers the CQ by constructing the summary argument for the plan π.

CQ2: Is it possible to execute the action a? This CQ is associated with the following argument schemes: Arg_π, Arg_S, Arg_g. The argument scheme Arg_a answers the CQ by constructing the explanation argument for the action a.

CQ3: Is it possible to have the state S? This CQ is associated with the following argument schemes: Arg_π, Arg_a, Arg_g. The argument scheme Arg_S answers the CQ by constructing the explanation argument for the state S.

CQ4: Is it possible to achieve the goal g? This CQ is associated with the argument scheme Arg_π. The argument scheme Arg_g answers the CQ by constructing the explanation argument for the goal g.

We organise the arguments and their interactions by mapping them into a Dung abstract argumentation framework [6] denoted by AAF = (A, R), where A is a set of arguments and R is an attack relation (R ⊆ A × A). Args ⊂ A and CQs ⊂ A, where Args = {Arg_π, Arg_a, Arg_S, Arg_g} and CQs = {CQ1, CQ2, CQ3, CQ4}. Given the way that the plan explanation arguments were constructed, they will be acceptable under the grounded semantics if the plan is valid. We present the properties of the plan explanation arguments as follows.

Property 5.1.
For a valid plan π, the set of arguments Args is complete, in that, if a CQ ∈ CQs exists, then it will be answered (i.e., attacked) by an Arg ∈ Args.

Proof. Since (Arg_π, CQ1) ∈ R, (Arg_a, CQ2) ∈ R, (Arg_S, CQ3) ∈ R, and (Arg_g, CQ4) ∈ R, a unique Arg ∈ Args exists that attacks each CQ ∈ CQs. Thus, Args is complete. □
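One plausible encoding of this attack relation (our own, for illustration) makes the property directly checkable: each CQ attacks the schemes whose premises it questions, and each CQ is answered (attacked back) by exactly one scheme.

```python
# An encoding of the Args/CQs attack relation described by CQ1-CQ4 and the
# proof of Property 5.1. The exact set of CQ-to-scheme attacks is our
# reading of the CQ "associations" above.

ARGS = {"Arg_pi", "Arg_a", "Arg_S", "Arg_g"}
CQS = {"CQ1", "CQ2", "CQ3", "CQ4"}
R = {
    # CQs questioning premises of the schemes they are associated with
    ("CQ2", "Arg_pi"), ("CQ2", "Arg_S"), ("CQ2", "Arg_g"),
    ("CQ3", "Arg_pi"), ("CQ3", "Arg_a"), ("CQ3", "Arg_g"),
    ("CQ4", "Arg_pi"),
    # each scheme answers (attacks) its CQ, as in the proof of Property 5.1
    ("Arg_pi", "CQ1"), ("Arg_a", "CQ2"), ("Arg_S", "CQ3"), ("Arg_g", "CQ4"),
}

# Property 5.1: every CQ is answered by some argument in Args.
answered = {cq: [a for a in ARGS if (a, cq) in R] for cq in CQS}
assert all(answered[cq] for cq in CQS)
print(answered["CQ1"])  # ['Arg_pi']
```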
In other words, if the plan is valid, then all the objections that can be put forward regarding the plan and its elements do not hold. In particular:
Property 5.2. For a valid plan π, Arg_π ∈ Gr iff CQ ∉ Gr for every CQ ∈ CQs such that (CQ, Arg_π) ∈ R.

Proof. Follows from Property 5.1. Since any CQ that attacks Arg_π is in turn attacked by an Arg ∈ Args, CQ ∉ Gr. Thus, Arg_π ∈ Gr. □

In a very similar way we can show the following:
Property 5.3. For a valid plan π, Arg_π ∈ Gr iff ∀g ∈ G, Arg_g ∈ Gr.

Property 5.4. For a valid plan π, Arg_π ∈ Gr iff ∀a ∈ A, Arg_a ∈ Gr.

Property 5.5. For a valid plan π, Arg_π ∈ Gr iff ∀S_i ∈ S, Arg_{S_i} ∈ Gr.

In other words, all the objections of the user regarding the set of goals G, the actions A, and the states S (where each action step has an associated state S_i ∈ S) that are related to the plan π do not hold.

Property 5.6.
For a plan π, Arg_π ∈ Gr iff plan π is valid. In other words, all the objections regarding the plan π and its elements do not hold; therefore, plan π is valid.

Property 5.7. For a plan π′, Arg_π′ ∉ Gr iff plan π′ is invalid.

Both properties follow immediately from Properties 5.1–5.5. Collectively, these properties align the notion of a valid plan with an acceptable plan explanation argument. If we assume, as we do going forward, that a user is rational in the sense of (1) accepting the arguments in the grounded extension of a Dung framework, and (2) only holding true those facts in the planning model, then if a plan is valid, that user will accept that the plan explanation argument holds. It is in that sense that we consider the argument to be a suitable explanation: it can justify all the plan elements by providing arguments that are grounded in the planning model.
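The grounded semantics relied on throughout can be computed by iterating Dung's characteristic function F(E), the set of arguments defended by E, to a least fixpoint. A minimal sketch (our own, shown on a toy framework rather than the Args/CQs framework itself):

```python
# Computing the grounded extension of an abstract argumentation framework
# (Definition 3.1) by iterating the characteristic function to a fixpoint.

def grounded_extension(arguments, attacks):
    """arguments: set of names; attacks: set of (attacker, target) pairs."""
    def defended_by(E):
        out = set()
        for a in arguments:
            attackers = {b for (b, t) in attacks if t == a}
            # a is defended if every attacker is counter-attacked from E
            if all(any((c, b) in attacks for c in E) for b in attackers):
                out.add(a)
        return out
    E = set()
    while True:
        nxt = defended_by(E)
        if nxt == E:
            return E
        E = nxt

# C is unattacked, and C defends A against B, so the grounded extension is {A, C}:
print(sorted(grounded_extension({"A", "B", "C"}, {("B", "A"), ("C", "B")})))  # ['A', 'C']
```

The first iteration collects the unattacked arguments; subsequent iterations add whatever they defend, which matches the minimal complete extension of Definition 3.1.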
6 DIALOGUE SYSTEM
In this section, we present a system for a formal dialogue between planner and user that allows the user to explore the plan explanation arguments, raising objections and having them answered. It provides an interactive process that recursively unpacks the explanation arguments and allows the user to assure themselves of their acceptability (and hence of the validity of the plan). The dialogue takes place between two participants: (1) planner and (2) user. The communication language consists of the legal dialogue moves by the two participants. The moves that the planner and user can use in a dialogue are defined as follows.
Definition 6.1. (Planner Moves) Planner moves consist of the following: (1) Arg_π, plan summary argument; (2) Arg_a, action argument; (3) Arg_S, state argument; (4) Arg_g, goal argument.

Definition 6.2. (User Moves) User moves consist of the following: (1) CQ1, is it possible for the plan π to be a solution? (2) CQ2, is it possible to execute the action a? (3) CQ3, is it possible to have the state S? (4) CQ4, is it possible to achieve the goal g?

A joint commitment store is used for holding the planner and user moves (i.e., arguments) used within a dialogue.

Definition 6.3. (Commitment Store) A commitment store, denoted by CS ⊆ (Args ∪ CQs), holds all the arguments (i.e., planner moves and user moves) which the planner and user are dialectically committed to. CS(pl) denotes all the arguments of the planner and CS(us) denotes all the arguments of the user.

To ensure that the dialogue conversation ends, we define the termination conditions for the dialogue as follows.

Definition 6.4. (Termination Conditions) The dialogue terminates when any one of the following three conditions holds:
• T1: PlannerMove = null. When the planner is unable to generate the argument, because one of the premises of the argument is not true.
• T2: UserMove = null. When the user has exhaustively asked all CQs regarding the components of the plan.
• T3: UserMove = none. When the user does not want to ask any more questions.

Termination of the dialogue conversation results in an outcome.

Definition 6.5. (Dialogue Outcomes) There are three possible outcomes of the dialogue:
• O1 = "Plan is invalid and explanation is unacceptable".
• O2 = "Plan is valid and explanation is acceptable".
• O3 = "Explanation is acceptable".

The user and planner both have to follow rules.

Definition 6.6. (User Move Rules) The moves that are allowed for the user to put forward depend on the previous moves of the planner.
The allowed user moves in response to the planner moves are given below, where the first move does not require any previous planner moves. (1) CQ1; (2) Arg_a: CQ3; (3) Arg_S: CQ2; (4) Arg_g: CQ2, CQ3; (5) Arg_π: CQ2, CQ3, CQ4.

Definition 6.7. (Planner Move Rules) The moves that are allowed for the planner to put forward depend on the previous move by the user. The allowed planner moves in response to the user moves are given below. (1) CQ1: Arg_π; (2) CQ2: Arg_a; (3) CQ3: Arg_S; (4) CQ4: Arg_g.

The dialogue has to follow certain rules (i.e., a protocol).
Definition 6.8. (Dialogue Rules) The following are the rules of the dialogue, denoted by DR: (1) The first move in the dialogue is made by the user, which is CQ1. (2) Both players, i.e., planner and user, can put forward a single move at a given step in response to each other. (3) Once a move is put forward, it is stored in the commitment store CS. (4) The user cannot put forward a move, i.e., an argument, already present in the commitment store CS for a plan component, and the same goes for the planner. (5) Each user move has to follow the user move rules given in Definition 6.6. (6) Each planner move has to follow the planner move rules given in Definition 6.7. (7) The dialogue ends when any one of the termination conditions T1, T2, or T3 holds.

The dialogue between the planner and user is then defined.

Definition 6.9. (Dialogue) We define a dialogue to be a sequence of moves D = [M_1, M_2, ..., M_n]. The dialogue takes place between the two participants, i.e., planner and user. Each dialogue participant must follow the dialogue rules DR for making moves. Each move put forward by both participants is recorded and stored in the commitment store CS. The user can select any one of the previous moves put forward by the planner; we assume that the planner has already presented the plan π to the user and do not consider this a move. Since a previously asked user CQ for a plan component is not allowed, we assume that the planner will not repeat the same argument. The dialogue terminates when any one of the termination conditions T1, T2, or T3 holds. Based on the termination condition, the outcome of the dialogue can be:
• If the dialogue terminates with T1, then the outcome is O1 = "Plan is invalid and explanation is unacceptable".
• If the dialogue terminates with T2, then the outcome is O2 = "Plan is valid and explanation is acceptable".
• If the dialogue terminates with T3, then the outcome is O3 = "Explanation is acceptable".

In order to describe the dialectical process, we introduce Algorithms 1–4.
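The protocol these algorithms mechanize can be sketched as follows (an illustrative simplification of our own, not the paper's Algorithms 1–4): move legality is a lookup table, and the loop alternates user CQs with planner arguments until a termination condition T1, T2 or T3 holds.

```python
# A simplified sketch of the dialogue of Definitions 6.6-6.9. All helper
# names are ours; instantiated arguments are represented by scheme names.

USER_MOVES = {            # Definition 6.6: last planner move -> CQs it licenses
    None: ["CQ1"],        # the first move needs no prior planner move
    "Arg_a": ["CQ3"],
    "Arg_S": ["CQ2"],
    "Arg_g": ["CQ2", "CQ3"],
    "Arg_pi": ["CQ2", "CQ3", "CQ4"],
}
PLANNER_MOVES = {         # Definition 6.7: CQ -> the scheme that answers it
    "CQ1": "Arg_pi", "CQ2": "Arg_a", "CQ3": "Arg_S", "CQ4": "Arg_g",
}

def dialogue(user_picks, planner_answers):
    """user_picks: iterable of chosen CQs, with None meaning 'no more
    questions' (T3); planner_answers: maps a CQ to an instantiated argument,
    or to None when no argument can be constructed (T1). An exhausted
    iterable is treated as the user having asked everything (T2)."""
    commitment_store, last = [], None
    for cq in user_picks:
        if cq is None:                               # T3
            return commitment_store, "O3: Explanation is acceptable"
        assert cq in USER_MOVES[last] and cq not in commitment_store
        commitment_store.append(cq)
        arg = planner_answers.get(cq)
        if arg is None:                              # T1
            return commitment_store, "O1: Plan is invalid and explanation is unacceptable"
        assert arg == PLANNER_MOVES[cq]              # planner move rules
        commitment_store.append(arg)
        last = arg
    return commitment_store, "O2: Plan is valid and explanation is acceptable"  # T2

store, outcome = dialogue(["CQ1", "CQ4", None], {"CQ1": "Arg_pi", "CQ4": "Arg_g"})
print(outcome)  # O3: Explanation is acceptable
```

The asserts enforce the move rules of Definitions 6.6 and 6.7 and rule (4) of Definition 6.8 (no repeated moves in the commitment store).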
Algorithm 4 presents an operational description of the dialogue D in the dialogue system specified above, which allows the planner and user to find moves, i.e., arguments to put forward in a dialogue, based on the counterarguments put forward by the other party. The dialogue starts when the planner presents a plan π to the user and the user asks the first critical question, CQ1. During the dialogue, the planner is either able to construct an appropriate argument via the argument schemes, or the argument returned is null, indicating that it is unable to construct the argument, i.e., T1 holds, which terminates the dialogue. Similarly, the user either has the option of asking a critical question regarding the premises of any one of the previous arguments put forward by the planner, or the user has no more questions to ask, i.e., T3 holds, which terminates the dialogue. Furthermore, the user may exhaustively ask all the critical questions (and the planner successfully answers them), i.e., T2 holds, and thus the dialogue terminates. The dialogue finishes with one of the three possible outcomes O1, O2, or O3.

Algorithms 1 and 2 provide an operational description of the ways that a rational user may behave. We can think of these as both a mechanism for proving that the dialogue works as desired, and as a mechanism for generating allowable moves in an implementation that walks the user through an exploration of the plan. In Algorithm 1, the user is able to choose a previous planner argument in the dialogue and find all the allowed moves that she can put forward in response to it. The input of Algorithm 1 is the set of all previous planner moves, i.e., arguments generated via the argument schemes, and the output is a set of user moves, i.e., CQs, from which the user can further choose a single CQ. In Algorithm 2, the user can select a particular move from the set of user moves or choose not to ask any more questions.
Thus, the output of Algorithm 2 is either the chosen CQ or none. In Algorithm 3, the planner can find the relevant move (i.e., the argument generated via the argument schemes) to answer the user CQ. The input of Algorithm 3 is the previous user move, i.e., a CQ, and the output is an appropriate planner move corresponding to the user CQ. If the planner is unable to generate the argument via the argument schemes, then we assume the output planner move to be null.

A dialogue D generated in the dialogue system has the following properties. In these results, Args_D denotes all the moves, i.e., arguments, used by the planner in a dialogue D, where Args_D ⊆ Args. CQs_D denotes all the moves, i.e., CQs, used by the user in a dialogue D, where CQs_D ⊆ CQs.

The first four properties align the validity of a plan with the successful defence of the explanation argument.

Algorithm 1 Find User Moves
Require: Args_P, set of previous arguments put forward by the planner
Ensure: UserMoves, set of all allowed user moves
1: function FindUserMoves(Args_P)
2:   if Args_P ≠ ∅ then
3:     Arg ← user chooses Arg s.t. Arg ∈ Args_P
4:     if Arg = Arg_a then            ⊲ if Arg is the action argument
5:       UserMoves ← {CQ3}
6:     else if Arg = Arg_S then       ⊲ if Arg is the state argument
7:       UserMoves ← {CQ2}
8:     else if Arg = Arg_g then       ⊲ if Arg is the goal argument
9:       UserMoves ← {CQ2, CQ3}
10:    else if Arg = Arg_π then       ⊲ if Arg is the plan summary argument
11:      UserMoves ← {CQ2, CQ3, CQ4}
12:    end if
13:  else                             ⊲ if the plan π is presented to the user
14:    UserMoves ← {CQ1}
15:  end if
16:  return UserMoves
17: end function

Algorithm 2
Select User Move
Require: UserMoves, set of all allowed user moves
Require: CQs_U, set of previous arguments put forward by the user
Ensure: UserMove, a selected user move
1: function SelectUserMove(UserMoves, CQs_U)
2:   if (user wants to question further) && (UserMoves ⊈ CQs_U) then
3:     UserMove ← user chooses u s.t. u ∈ UserMoves && u ∉ CQs_U
4:   else                             ⊲ user does not have any more questions
5:     UserMove ← none
6:   end if
7:   return UserMove
8: end function

Algorithm 3 Find Planner Move
Require: UserMove, critical question put forward by the user
Ensure: PlannerMove, allowed planner move
1: function FindPlannerMove(UserMove)
2:   if UserMove = CQ1 then
3:     PlannerMove ← Arg_π
4:   else if UserMove = CQ2 then
5:     PlannerMove ← Arg_a
6:   else if UserMove = CQ3 then
7:     PlannerMove ← Arg_S
8:   else if UserMove = CQ4 then
9:     PlannerMove ← Arg_g
10:  end if
11:  return PlannerMove
12: end function

Algorithm 4
Dialogue
Require: CQs, set of all critical questions for plan π
Ensure: outcome, outcome of the dialogue
1: function Dialogue(CQs)
2:   PlannerMove ← null, UserMoves ← ∅
3:   UserMove ← null, CS ← ∅
4:   do
5:     if CS(us) ≠ CQs then           ⊲ user has not exhaustively asked all CQs
6:       UserMoves ← FindUserMoves(CS(pl))
7:       UserMove ← SelectUserMove(UserMoves, CS(us))
8:       if UserMove ≠ none then
9:         CS ← CS ∪ {UserMove}
10:        PlannerMove ← FindPlannerMove(UserMove)
11:        if PlannerMove ≠ null then
12:          CS ← CS ∪ {PlannerMove}
13:        end if
14:      end if
15:    else                           ⊲ user has exhaustively asked all CQs
16:      UserMove ← null
17:    end if
18:  while ((UserMove ≠ null) && (UserMove ≠ none) && (PlannerMove ≠ null))
19:  if PlannerMove = null then
20:    outcome ← "Plan is invalid and explanation is unacceptable."
21:  else if UserMove = null then
22:    outcome ← "Plan is valid and explanation is acceptable."
23:  else if UserMove = none then
24:    outcome ← "Explanation is acceptable."
25:  end if
26:  return outcome
27: end function

Property 6.1. For a given dialogue D, plan π is valid and its explanation is acceptable iff the planner has exhaustively answered all CQs regarding the plan and its elements, i.e., CQs_D = CQs.

Property 6.2.
For a given dialogue D, plan π's explanation is acceptable iff the planner has answered all CQs asked by the user and the user does not have any more CQs to ask.

Property 6.3.
For a given dialogue D, plan π′ is invalid and its explanation is unacceptable to the user iff the planner is unable to answer, i.e., to generate an appropriate argument via the argument schemes for, at least one CQ ∈ CQs_D.

Property 6.4.
A dialogue D for a valid plan 𝜋 results in an expla-nation that is acceptable. Next we consider termination of the dialogue.
Property 6.5.
A dialogue D for a plan π always terminates.

The proofs of Properties 6.1–6.4 follow from Properties 5.1, 6.1, 5.7, and 5.6, respectively.

Proof. The three termination conditions T1, T2, and T3 ensure that the dialogue D always terminates. We prove this by considering all three conditions. (1) T1: this arises when the planner is unable to construct an appropriate argument in response to a user question; whenever this happens, the dialogue D terminates. (2) T2: this arises when the user has asked all possible CQs and the planner has successfully answered them; whenever this happens, the dialogue D terminates, and the user is not able to put forward any more questions. (3) T3: this arises when the user has no further questions to ask; whenever this happens, the dialogue D terminates.

For a valid plan π, the condition T2 ensures that the dialogue terminates once the user has exhaustively asked all CQs regarding the elements of the plan (the worst case), and the dialogue rules DR ensure that the user cannot ask the same CQ (for the same plan element) again; therefore, the dialogue D for a valid plan π always terminates.

For an invalid plan π′, the condition T1 will always arise before T2, and we have already proved above that the dialogue terminates in either case; therefore, the dialogue D for an invalid plan π′ always terminates. □

Next we show that there is a sense in which the dialogue is sound and complete with respect to the arguments that are expressed.
Property 6.6.
A dialogue D for a valid plan π is complete, in that the planner has an argument for each element of the plan and the user has a CQ for each element of the plan. Furthermore, if the user has a CQ ∈ CQs_D, then the planner has a corresponding argument Arg ∈ Args_D to respond with.

Proof. Follows from Property 5.6. Lines 2–15 of Algorithm 1 help the user in finding a CQ for each plan element corresponding to the planner argument (or the plan). Similarly, Lines 2–10 of Algorithm 3 help the planner in finding an argument for each plan element (or the plan summary) corresponding to a user CQ. □
Property 6.7.
A dialogue D for a valid plan π is sound, in that ∀CQ ∈ CQs_D, each user move CQ is correct; similarly, ∀Arg ∈ Args_D, each planner move Arg is correct.

Proof. Both participants of the dialogue D, i.e., the planner and the user, must follow the dialogue rules DR. This ensures that each of them always picks a move that is legally allowed, i.e., correct, at every single step in the dialogue D. Since Algorithm 1 finds the set of all allowed user moves following the dialogue rules DR, as given in Lines 2–15, ∀CQ ∈ CQs_D each user move CQ is correct. Similarly, Algorithm 3 finds a planner move following the dialogue rules DR, as given in Lines 2–10, so ∀Arg ∈ Args_D each planner move Arg is correct. □
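To make the interplay of Algorithms 1–4 concrete, the following self-contained Python sketch runs the dialogue loop with a scripted user. The CQ numbering, the scheme names, and the scripted answers are assumptions made for this illustration; the real algorithms construct the arguments from the planning model rather than from a lookup of answerable CQs.

```python
# Illustrative simulation of the dialogue loop (Algorithm 4) together
# with the move rules (Algorithms 1-3). The CQ numbering and scheme
# names are assumptions made for this sketch.

USER_MOVES = {          # Definition 6.6: CQs licensed by each planner argument
    "Arg_a": {"CQ3"}, "Arg_S": {"CQ2"},
    "Arg_g": {"CQ2", "CQ3"}, "Arg_pi": {"CQ2", "CQ3", "CQ4"},
}
PLANNER_MOVES = {"CQ1": "Arg_pi", "CQ2": "Arg_a", "CQ3": "Arg_S", "CQ4": "Arg_g"}

def find_user_moves(cs_planner):
    """Algorithm 1: all CQs the user may still raise, taken over every
    planner argument currently in the commitment store."""
    if not cs_planner:                       # only the plan has been presented
        return {"CQ1"}
    moves = set()
    for arg in cs_planner:
        moves |= USER_MOVES[arg]
    return moves

def dialogue(user_script, planner_can_answer):
    """Algorithm 4 with a scripted user; returns the outcome string."""
    cs_user, cs_planner = set(), set()       # commitment store CS
    for cq in user_script:
        if cq is None:                       # T3: user has no more questions
            return "Explanation is acceptable"
        assert cq in find_user_moves(cs_planner) - cs_user, "illegal move"
        cs_user.add(cq)
        if cq not in planner_can_answer:     # T1: no argument can be built
            return "Plan is invalid and explanation is unacceptable"
        cs_planner.add(PLANNER_MOVES[cq])    # Algorithm 3
        if cs_user == set(PLANNER_MOVES):    # T2: all CQs asked and answered
            return "Plan is valid and explanation is acceptable"
    return "Explanation is acceptable"

# The example dialogue of this section: CQ1, then CQ2, then stop.
print(dialogue(["CQ1", "CQ2", None], {"CQ1", "CQ2", "CQ3", "CQ4"}))
# prints: Explanation is acceptable
```

Note that `find_user_moves` ranges over all previous planner arguments, matching the rule that the user may question any earlier move, not only the most recent one.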
The following is a dialogue generated by the dialogue system given the blocks world of Example 4.8.

1) The planner presents a plan π, i.e., a sequence of actions, π = ⟨Unstack(A, B), Unstack(B, C), Stack(C, A)⟩, to the user.

2) Algorithm 1 is called by the user to find a set of possible moves, and then Algorithm 2 is called by the user to select a particular move (or end the dialogue if the user has no questions). Since π is not an argument, it is assumed that the planner has presented the plan π. The user asks, CQ1: Is it possible for the plan π to be a solution?

3) Algorithm 3 is called by the planner to find a suitable move, i.e., Arg_π, which is the plan summary argument. The planner presents the argument Arg_π, which is given in Example 5.9 and not repeated here.

4) Algorithm 1 is called by the user, which returns a set of CQs, {CQ2, CQ3, CQ4}, and after that, Algorithm 2 is called by the user to select one of the CQs or end the dialogue. The user asks, CQ2: Is it possible to execute the action a = Unstack(A, B)?

5) Algorithm 3 is called by the planner to find a suitable move, i.e., Arg_a, which is the action argument for the action a = Unstack(A, B), as given in Example 5.3 and not repeated here.

6) Algorithm 1 is called by the user, where the user chooses the previous planner argument Arg_a to question (at this point the user can choose any previous planner argument, including the plan summary argument Arg_π, to question further), which returns a set of CQs, {CQ3}. After that, Algorithm 2 is called by the user to select a CQ or terminate the dialogue. The user decides to terminate the dialogue, and thus finds the explanation acceptable.

Figure 2: Argumentation graph of the example dialogue.

Figure 2 presents the argumentation graph of the above dialogue, where the grounded extension Gr = {Arg_a, Arg_π}. The outcome of the dialogue is "Explanation is acceptable", since the user has no further questions to ask and the plan π has not been proven to be invalid. On the other hand, if at any point the planner were unable to construct a suitable argument, then the explanation would be considered unacceptable, and this would imply that the plan is invalid. Furthermore, if the user had exhaustively asked all CQs and the planner had successfully answered them, then the explanation would be considered acceptable and the plan valid.

We have presented a novel argument scheme-based approach for generating interactive explanations in the domain of AI planning. The novelty of our approach compared to previous research is: (1) we have presented novel argument schemes to generate the arguments that directly provide an explanation; (2) we have used the concept of critical questions to allow interaction between the arguments; (3) we have presented a novel dialogue system using the argument schemes and critical questions to provide dialectical interaction between the planner and the user; and (4) our approach helps in determining the validity of the plans and whether the explanation is acceptable to a rational user. Note that our approach to generating explanation arguments is planner independent and, therefore, can work on a wide range of input plans in classical planning; in the future, we intend to extend this to richer planning formalisms such as partial order and temporal planning.

In the future, we aim to develop algorithms based on the argument schemes to automatically extract the arguments from the input planning model. Furthermore, we aim to extend the argument schemes and CQs to generate contrastive explanations [15], and to explore dialogue strategies [14] for finding good moves in a dialogue, for instance, to help the user find relevant information about the plan with a minimum number of moves. Another avenue of future research is to model user trust [16] during a dialogue conversation and determine how the user's trust ratings of the planner's explanation arguments should affect the outcome of the dialogue.

REFERENCES
[1] Katie Atkinson and Trevor J. M. Bench-Capon. 2007. Practical reasoning as presumptive argumentation using action based alternating transition systems. Artif. Intell.
[2] Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2010), Toronto, Canada, May 10-14, 2010, Volume 1-3. IFAAMAS, 765–772.
[3] Floris Bex and Douglas Walton. 2016. Combining explanation and argumentation in dialogue. Argument & Computation 7, 1 (2016), 55–68.
[4] Martin Caminada and Mikolaj Podlaszewski. 2012. Grounded Semantics as Persuasion Dialogue. In Computational Models of Argument - Proceedings of COMMA 2012, Vienna, Austria, September 10-12, 2012 (Frontiers in Artificial Intelligence and Applications, Vol. 245). IOS Press, 478–485.
[5] Martin W. A. Caminada, Roman Kutlák, Nir Oren, and Wamberto Weber Vasconcelos. 2014. Scrutable plan enactment via argumentation and natural language generation. In International Conference on Autonomous Agents and Multi-Agent Systems, AAMAS '14, Paris, France, May 5-9, 2014. IFAAMAS/ACM, 1625–1626.
[6] P. M. Dung. 1995. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence 77 (1995), 321–357.
[7] Phan Minh Dung, Robert A. Kowalski, and Francesca Toni. 2009. Assumption-Based Argumentation. Springer US, 199–218.
[8] Xiuyi Fan. 2018. On Generating Explainable Plans with Assumption-Based Argumentation. In Proceedings of the 21st International Conference on Principles and Practice of Multi-Agent Systems PRIMA (Lecture Notes in Computer Science, Vol. 11224). Springer, 344–361.
[9] Xiuyi Fan and Francesca Toni. 2015. On Computing Explanations in Argumentation. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA. AAAI Press, 1496–1502.
[10] Sergio Pajares Ferrando and Eva Onaindia. 2012. Defeasible argumentation for multi-agent planning in ambient intelligence applications. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 1. 509–516.
[11] Maria Fox, Derek Long, and Daniele Magazzeni. 2017. Explainable Planning. In IJCAI Workshop on Explainable AI. https://arxiv.org/abs/1709.10256
[12] Malik Ghallab, Dana S. Nau, and Paolo Traverso. 2004. Automated Planning - Theory and Practice. Elsevier.
[13] Patrik Haslum, Nir Lipovetzky, Daniele Magazzeni, and Christian Muise. 2019. An Introduction to the Planning Domain Definition Language (2nd ed.). Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan and Claypool Publishers, 1–169.
[14] Rolando Medellin-Gasque, Katie Atkinson, Trevor J. M. Bench-Capon, and Peter McBurney. 2013. Strategies for question selection in argumentative dialogues about plans. Argument & Computation 4, 2 (2013), 151–179.
[15] Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 267 (2019), 1–38.
[16] Gideon Ogunniye, Alice Toniolo, and Nir Oren. 2017. A Dynamic Model of Trust in Dialogues. In Theory and Applications of Formal Argumentation - 4th International Workshop, TAFA 2017, Melbourne, VIC, Australia, August 19-20, 2017, Revised Selected Papers (Lecture Notes in Computer Science, Vol. 10757). Springer, 211–226.
[17] Nir Oren. 2013. Argument Schemes for Normative Practical Reasoning. In Theory and Applications of Formal Argumentation - Second International Workshop, TAFA 2013, Beijing, China, August 3-5, 2013, Revised Selected Papers (Lecture Notes in Computer Science, Vol. 8306). Springer, 63–78.
[18] Nir Oren, Kees van Deemter, and Wamberto Weber Vasconcelos. 2020. Argument-Based Plan Explanation. In Knowledge Engineering Tools and Techniques for AI Planning. Springer, 173–188.
[19] Pere Pardo, Sergio Pajares, Eva Onaindia, Lluís Godo, and Pilar Dellunde. 2011. Multiagent argumentation for cooperative planning in DeLP-POP. In Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 3. 971–978.
[20] Zohreh Shams, Marina De Vos, Nir Oren, and Julian A. Padget. 2020. Argumentation-Based Reasoning about Plans, Maintenance Goals, and Norms. ACM Trans. Auton. Adapt. Syst. 14, 3 (2020), 9:1–9:39.
[21] Guillermo Ricardo Simari and Iyad Rahwan (Eds.). 2009. Argumentation in Artificial Intelligence. Springer.
[22] D. E. Smith. 2012. Planning as an iterative process. In Proceedings of the National Conference on Artificial Intelligence.
[23] Proceedings of the 4th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2005), July 25-29, 2005, Utrecht, The Netherlands. ACM, 552–559.
[24] Alice Toniolo, Timothy J. Norman, and Katia P. Sycara. 2011. Argumentation Schemes for Collaborative Planning. In Proceedings of the 14th International Conference on Principles and Practice of Multi-Agent Systems PRIMA (Lecture Notes in Computer Science, Vol. 7047). Springer, 323–335.
[25] Alejandro Torreño, Eva Onaindia, Antonín Komenda, and Michal Štolba. 2018. Cooperative Multi-Agent Planning. Comput. Surveys 50, 6 (2018), 1–32.
[26] D. N. Walton. 1996.