A System for Explainable Answer Set Programming
F. Ricca, A. Russo et al. (Eds.): Proc. 36th International Conference on Logic Programming (Technical Communications) 2020 (ICLP 2020), EPTCS 325, 2020, pp. 124–136, doi:10.4204/EPTCS.325.19
© P. Cabalar, J. Fandinno, B. Muñiz. This work is licensed under the Creative Commons Attribution License.
A System for Explainable Answer Set Programming∗

Pedro Cabalar
University of Corunna, Spain
[email protected]

Jorge Fandinno
University of Potsdam, Germany
[email protected]

Brais Muñiz*
University of Corunna, Spain
[email protected]
We present xclingo, a tool for generating explanations from ASP programs annotated with text and labels. These annotations allow tracing the application of rules or the atoms derived by them. The input of xclingo is a markup language written as ASP comment lines, so the programs annotated in this way can still be accepted by a standard ASP solver. xclingo translates the annotations into additional predicates and rules and uses the ASP solver clingo to obtain the extension of those auxiliary predicates. This information is used afterwards to construct derivation trees containing textual explanations. The language allows selecting which atoms to explain and, in turn, which atoms or rules to include in those explanations. We illustrate the basic features through a diagnosis problem from the literature.
KEYWORDS: Answer Set Programming, Causal justifications, Non-Monotonic Reasoning, ASP debugging, Diagnosis.
Answer Set Programming (ASP) [13, 12, 4] is a successful paradigm for Knowledge Representation and problem solving. Under this paradigm, the programmer represents a problem as a logic program formed by a set of rules and obtains solutions to that problem in terms of models of the program called answer sets. Thanks to the availability of efficient solvers, ASP is nowadays applied in a wide variety of areas including robotics, bioinformatics, music composition [7, 5, 3], and many more.

An ASP program does not contain information about the method to obtain the answer sets, something that is completely delegated to the ASP solver. This, of course, has the advantage of making ASP a fully declarative language, where the programmer must concentrate on specification rather than on the design of search algorithms. However, when it comes to explainability of the obtained results, the information provided by answer sets themselves is usually scarce. There exist several approaches for obtaining justifications for answer sets: for a recent review, see [9]. Some of them are more oriented to debugging of ASP programs, while others are interested in the causal nature of the justifications themselves. What these approaches generally do is offer some kind of insight into the derivation process of the rules that finally led to including (or not) some literal in an answer set.

Justifying the result of an ASP program is not only interesting for the programmer but also has other implications, especially in the context of explainable Artificial Intelligence. For instance, since the approval of the General Data Protection Regulation (GDPR) by the European Union, every system that makes automatic decisions affecting persons must offer some kind of explanation of the logic involved in the decision-making process, obviously in a human-readable way.

In this paper, we present xclingo, a tool for generating explanations of annotated programs for the ASP solver clingo [10].
The input accepted by xclingo is a markup language that introduces annotations as ASP comments.∗ xclingo generates derivation trees following the framework of causal graph justifications introduced in [6]. Under that framework, answer sets are multi-valued interpretations where the value for each true atom is an algebraic expression constructed with labels associated to program rules. Each expression is an alternative of multiple derivation trees that have been proved to correspond to the set of minimal proofs for the atom built with the Horn clauses in the program reduct. For now, xclingo just guarantees correctness of the obtained derivations, but not their minimality, which is planned to become a future optional feature. The tool uses the clingo Python API to translate the annotations into additional rules with auxiliary predicates and computes the explanations from the information obtained from these predicates.

Since causal graphs show possible derivations for the conclusions, it is difficult to avoid that, as the size of a rule system increases, the readability and comprehension of the explanations become more difficult. A large explanation may be tractable by a computer but not too useful for a human reader, who is normally more concerned about relevant pieces of information. Because of that, xclingo puts an extra effort on providing a flexible way to adjust the detail level, style and size of these explanations. Although its main purpose is the explanation of ASP program conclusions, we also find xclingo helpful for program debugging, again because of the flexibility of its explanation configuration system.

In order to show its features and to demonstrate its usefulness, we use a diagnosis problem example from the literature as a guide.

∗Partially supported by MINECO, Spain, grant TIN2017-84453-P and CITIC Research Center, Xunta de Galicia, Spain and ERDF (ED431G 2019/01). The second author is funded by the Alexander von Humboldt Foundation, Germany.
We choose this example because we find the nature of xclingo's explanations very helpful in diagnosis.

The rest of the paper is organised as follows. First, we introduce our diagnosis running example. Then, the input language and the features of xclingo are described. Next, we describe how the translation of the input into a standard ASP program works. Afterwards, we describe how the explanations are computed from the solution of the translated program. Finally, we comment on related work and conclude the paper.
We consider an example from [2] (Fig. 1), where an analog AC circuit is presented. In it, an agent can close a switch that should ultimately cause a bulb to turn on. However, there are exogenous actions that can modify the environment and prevent the bulb from turning on when the switch is closed. Our goal is to develop a diagnostic system that can identify the reasons why the light does not turn on and present them to the user in the form of readable explanations.

Figure 1: A circuit with a bulb b, a relay r and two switches, s1 and s2.
Example 1 (From Balduccini and Gelfond 2003)
Consider a system S consisting of an agent operating an analog circuit AC from Fig. 1. We assume that switches s1 and s2 are mechanical components which cannot become damaged. Relay r is a magnetic coil. If not damaged, it is activated when s1 is closed, causing s2 to close. Undamaged bulb b emits light if s2 is closed. For simplicity of presentation, we consider the agent capable of performing only one action, close(s1). The environment can be represented by two damaging exogenous actions: brk, which causes b to become faulty, and srg (power surge), which damages r and also b, assuming that b is not protected. Suppose that the agent operating this device is given the goal of lighting the bulb. He realizes that this can be achieved by closing the first switch, performs the operation, and discovers that the bulb is not lit.

Our ASP implementation of this example (we call it program P, in Listings 1 and 2) follows the one presented in [2] with the addition of c/3 and a few other predicates for improving the explanation results. At first sight, the encoding may seem too involved for our small example, but this is because the representation is general enough to cover a whole family of similar diagnosis problems. Listing 1 contains the basic type definitions. The predicate names are self-explanatory, except perhaps lines 15–24. This is because we allow arbitrary fluent domains that can be specified explicitly through predicate value(F,V), meaning that fluent F may have value V. When no value has been specified in that way, fluents are assumed Boolean by default. Finally, predicate domain(F,V) collects all the domain values for F, regardless of whether they are defined explicitly or by default. In our example, fluents relay and light can take values on and off, and switches s1 and s2 the values open and closed (to make them more readable), whereas the rest of the fluents are Boolean.

plength(1).
time(0..L) :- plength(L).
step(1..L) :- plength(L).
switch(s1). switch(s2).
component(relay). component(bulb).
fluent(relay). fluent(light). fluent(b_prot).
fluent(S) :- switch(S).
abfluent(ab(C)) :- component(C).
fluent(F) :- abfluent(F).
value(relay,on). value(relay,off).
value(light,on). value(light,off).
value(S,open) :- switch(S).
value(S,closed) :- switch(S).
hasvalue(F) :- value(F,V).
% Fluents are boolean by default
domain(F,true) :- fluent(F), not hasvalue(F).
domain(F,false) :- fluent(F), not hasvalue(F).
% otherwise, they take the specified values
domain(F,V) :- value(F,V).
agent(close(s1)).
exog(break). exog(surge).
action(Y) :- exog(Y).
action(Y) :- agent(Y).

Listing 1: Type predicates for program P.

Listing 2 contains the description of the problem. Given any action A, fluent F, value V and time point I, we use the following predicates:

h(F,V,I)     = F holds value V at I
obs_h(F,V,I) = F was observed to hold value V at I
c(F,V,I)     = F's value was caused to be V at I
c(F,I)       = F's value was caused at I
o(A,I)       = A occurred at I
obs_o(A,I)   = A was observed to occur at I

% Inertia
h(F,V,I) :- h(F,V,I-1), not c(F,I), step(I).
% Axioms for caused
h(F,V,J) :- c(F,V,J).
c(F,J) :- c(F,V,J).
% Direct effects
c(s1,closed,I) :- o(close(s1),I), step(I).
% Indirect effects
c(relay,on,J) :- h(s1,closed,J), h(ab(relay),false,J), time(J).
c(relay,off,J) :- h(s1,open,J), time(J).
c(relay,off,J) :- h(ab(relay),true,J), time(J).
c(s2,closed,J) :- h(relay,on,J), time(J).
c(light,on,J) :- h(s2,closed,J), h(ab(bulb),false,J), time(J).
c(light,off,J) :- h(s2,open,J), time(J).
c(light,off,J) :- h(ab(bulb),true,J), time(J).
% Malfunctioning
c(ab(bulb),true,I) :- o(break,I), step(I).
c(ab(relay),true,I) :- o(surge,I), step(I).
c(ab(bulb),true,I) :- o(surge,I), not h(b_prot,true,I-1), step(I).
% Executability
:- o(close(S),I), h(S,closed,I-1), step(I).
% Something happening actually occurs
o(A,I) :- obs_o(A,I), step(I).
% Check that observations hold
:- obs_h(F,V,J), not h(F,V,J).
% Completing the initial state
h(F,V,0) :- domain(F,V), not -h(F,V,0).
-h(F,V,0) :- h(F,W,0), domain(F,V), W!=V.
% A history
obs_h(s1,open,0). obs_h(s2,open,0).
obs_h(b_prot,true,0).
obs_h(ab(bulb),false,0). obs_h(ab(relay),false,0).
obs_o(close(s1),1).
% Something went wrong
obs_h(light,off,1).
% Diagnostic module: generate exogenous actions
o(Z,I) :- step(I), exog(Z), not no(Z,I).
no(Z,I) :- step(I), exog(Z), not o(Z,I).

Listing 2: Program P describing Example 1.

As usual in diagnosis problems, we differentiate between what happens in the real world, with predicates h/3 and o/2, and the partial observations we have about that world, with predicates obs_h/3 and obs_o/2, respectively. If we execute clingo on this code, we obtain the three answer sets that correspond to the possible diagnoses: one including an exogenous action o(break,1); a second one with the exogenous action o(surge,1); and, finally, a third, non-minimal diagnosis where both exogenous actions occur. Of course, in the original work by Balduccini and Gelfond, diagnoses were additionally minimised to avoid the unnecessary addition of exogenous actions, but for the purpose of this paper we consider the three answer sets of program P as equally interesting for generating explanations. At the moment, xclingo cannot properly deal with minimisation clauses in clingo yet.

In the rest of the paper, we use this code as a running example. We will complete it using different xclingo features in order to get the diagnoses in a fully readable and understandable way.

The xclingo system

To understand the purpose of the different types of annotations in xclingo, it is perhaps better to start by illustrating the kind of output we expect to achieve. For instance, Fig. 2 shows the result we obtain for the program P encoding from Example 1 once it is annotated (we will see later on what these annotations look like). As we can see, the programmer has requested explanations for fluents light and relay. Each explanation must be understood as follows. Below each explained (true) atom (preceded by >>) we get a list of trees that correspond to alternative (and equally effective) causes for the atom. Then, in each tree, two explanations at the same level must be understood as a joint cause (the two effects act together), and each time we jump into a lower level, we can read this as a "because" relation.

Fig. 2 shows the same three answer sets we obtain with plain clingo. Answer set 1 corresponds to the case in which both a power surge occurred and something broke the bulb. The explanation for the relay being off (lines 2–6) can be read as follows: "the relay is not working at 1 because it has been damaged because there has been a power surge." We have started all explanations for exogenous actions with the word Hypothesis to clarify that these are assumptions added to explain the observations. As we can see, in this first answer set, there are two alternative valid causes for the light being off. The first one (Fig. 2, lines 9–12) is that the bulb was damaged because something broke it. The three lines in the explanation respectively come from the annotations (we will see later) for lines 18, 21 and 52 in Listing 2. The second cause (lines 14–16) is that the light was already off in the initial state because switch s2 was initially open: in this case, because of the activation of the rules in lines 17 and 5 in Listing 2. Answer set 2 (lines 19–29) corresponds to the case in which we just had a power surge. When this happens, the relay is not working (as in answer set 1) and the light simply remains off, since s2 was initially open.
In this case, we do not get the additional reason for having the light off, since the bulb is not broken. Finally, answer set 3 shows the case where something breaks the bulb but there is no power surge. In this case, we can see that the relay eventually worked because the agent closed switch 1 and the relay was not initially damaged. As nothing else happens, the relay is still undamaged in state 1. Notice that these two things (the agent closing the switch and the relay being initially undamaged) constitute a joint cause altogether: lines 36 and 37 share the same parent in the explanation tree.

Answer: 1
>> h(relay,off,1) [1]
* |__"The relay is not working at 1"
| |__"The relay has been damaged at 1"
| | |__"Hypothesis: there has been a power surge at 1"
>> h(light,off,1) [2]
* |__"The light is off at 1"
| |__"The bulb has been damaged at 1"
| | |__"Hypothesis: something has broken the bulb at 1"
* |__"The light is off at 1"
| |__"s2 was initially open"
Answer: 2
>> h(relay,off,1) [1]
* |__"The relay is not working at 1"
| |__"The relay has been damaged at 1"
| | |__"Hypothesis: there has been a power surge at 1"
>> h(light,off,1) [1]
* |__"The light is off at 1"
| |__"s2 was initially open"
Answer: 3
>> h(relay,on,1) [1]
* |__"The relay is working at 1"
| |__"The agent has closed switch s1 at 1"
| |__"Initially, the relay was not damaged"
>> h(light,off,1) [1]
* |__"The light is off at 1"
| |__"The bulb has been damaged at 1"
| | |__"Hypothesis: something has broken the bulb at 1"

Figure 2: Explanations obtained for the annotated version of P.

Let us proceed now to describe how these explanations are generated, including the markup language accepted by xclingo. Each explanation is a tree that shows the derivation proof for an atom. In these trees, each node is a trace label that corresponds to a string associated to some fired rule or to some derived atom. Trace labels can be created manually through annotations or can be automatically generated by xclingo. For instance, Fig. 3 displays the explanation for h(light,off,1) under the "auto-tracing" mode, where every rule is automatically traced using the rule head as a label. As we can see, we obtain a complete derivation tree for h(light,off,1) following the positive part of the program.

>> h(light,off,1) [1]
* |__h(light,off,1)
| |__c(light,off,1)
| | |__h(ab(bulb),true,1)
| | | |__c(ab(bulb),true,1)
| | | | |__o(break,1)
| | | | | |__step(1)
| | | | | | |__plength(1)
| | | | | |__exog(break)
| | | | |__step(1)
| | | | | |__plength(1)
| | |__time(1)
| | | |__plength(1)

Figure 3: Explanation using automatically generated trace labels.

Note the difference with respect to Fig. 2, where we used manually defined (textual) trace labels. In that case, the tool skips any intermediate node in the derivation tree that has no explicit trace label. In this way, knowledge engineers may decide the detail to be shown: either automatically tracing all possible rule applications, which may be helpful for debugging, or selecting the relevant information, something more interesting for explanation. In the latter case, the explanation design, that is, selecting the right amount of information, may become a non-trivial Knowledge Representation effort in itself, but would easily become a problem instead if the tool did not offer such a possibility. For instance, one important decision is to avoid tracing the inertia rule (line 2 of Listing 2). In that way, if we ask for the explanation of h(light,off,20) in answer set 2 of Fig. 2, we still get the same derivation tree (replacing time stamp 1 by 20) because the switch was initially off and nothing else changed that in the whole interval. Another important feature that helps avoid irrelevant information is that a negative literal not p is never used in derivations, since it is understood as "there is no cause for p." If this were not done in this way, then the explanation for h(light,off,20) in an answer set where no real action occurred would include the negation of all combinations of actions that could have changed the light value along the way from 1 to 20 (and there are too many, even in this simple example). This information may be relevant for answering why the light was not turned on, but is irrelevant for explaining why it has simply remained off, which is the purpose of xclingo.

For adding custom trace labels to a program, the programmer has to make use of xclingo's markup language. It works by adding annotations to the program that start with %!, so they are just treated as comments by a plain ASP solver like clingo. There exist two different types of annotations for writing custom trace labels.
The first, trace_rule, allows the user to write a custom trace label and to associate it with a specific rule in the program. Listing 3 shows how we have modified lines 11–23 from Listing 2 to associate some custom trace labels with those rules.

%%%%%% Indirect effects
%!trace_rule {"The relay is working at %",J}
c(relay,on,J) :- h(s1,closed,J), h(ab(relay),false,J), time(J).
%!trace_rule {"The relay is not working at %",J}
c(relay,off,J) :- h(s1,open,J), time(J).
%!trace_rule {"The relay is not working at %",J}
c(relay,off,J) :- h(ab(relay),true,J), time(J).
c(s2,closed,J) :- h(relay,on,J), time(J).
%!trace_rule {"The light is on at %",J}
c(light,on,J) :- h(s2,closed,J), h(ab(bulb),false,J), time(J).
%!trace_rule {"The light is off at %",J}
c(light,off,J) :- h(s2,open,J), time(J).
%!trace_rule {"The light is off at %",J}
c(light,off,J) :- h(ab(bulb),true,J), time(J).
%%%%%% Malfunctioning
%!trace_rule {"The bulb has been damaged at %",I}
c(ab(bulb),true,I) :- o(break,I), step(I).
%!trace_rule {"The relay has been damaged at %",I}
c(ab(relay),true,I) :- o(surge,I), step(I).
%!trace_rule {"The bulb has been damaged at %",I}
c(ab(bulb),true,I) :- o(surge,I), not h(b_prot,true,I-1), step(I).

Listing 3: Adding trace labels to specific rules with trace_rule.

(Including "why not" queries is an interesting topic for future study.)

trace_rule annotations are associated to the rule they precede. Inside the braces, the first argument is mandatory and must be a string enclosed in quotes. The rest of the arguments are optional and must be variable names used either in the head or in the body of the rule.
The % placeholders are special characters that will be replaced by the values of the variables after solving, according to the order in which the variables are listed after the first argument.

Trying to write trace labels only using trace_rule annotations soon makes the code larger, redundant and harder to maintain. For those cases, trace annotations are more suitable instead: they allow a permanent association of a label with an atom, regardless of which rules have triggered it. Thus, their information is less specific but allows other interesting general features. For instance, since trace annotations can be stored separately from the base code, multiple versions of the same trace labels could be written depending on the context: different languages, different users or different detail levels. Lines 1–8 in Listing 4 show the trace annotations added for obtaining the output from Fig. 2. For the part between braces, the syntax works the same as in trace_rule annotations but, instead of being followed by a rule, they are followed by a conditional atom defining the set of atoms affected by the trace label. Finally, the show_trace annotations (lines 10–11 in Listing 4) work in a similar way to the #show directives in clingo, choosing which atoms are displayed in each answer set, but in this case asking for their explanation. Again, show_trace annotations allow conditional atoms, as happened with trace.

%!trace {"Hypothesis: there has been a power surge at %",J} o(surge,J).
%!trace {"Hypothesis: something has broken the bulb at %",J} o(break,J).
%!trace {"The agent has closed switch s1 at %",J} o(close(s1),J).
%!trace {"The % was initially damaged",C} h(ab(C),true,0).
%!trace {"Initially, the % was not damaged",C} h(ab(C),false,0).
%!trace {"% was initially %",F,V} h(F,V,0) : not abfluent(F).
%!show_trace h(light,V,1).
%!show_trace h(relay,V,1).

Listing 4: Tracing atoms through trace annotations for Program P.
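The placeholder mechanism can be pictured with a few lines of Python. This is only an illustrative sketch of the substitution described above, not xclingo's actual code; the helper name expand_label is our own:

```python
def expand_label(label, values):
    """Replace each '%' placeholder in a trace label by the next value,
    following the order in which the variables were listed after the
    first argument of the annotation."""
    parts = label.split('%')
    if len(parts) - 1 != len(values):
        raise ValueError("placeholder/value count mismatch")
    out = parts[0]
    for value, tail in zip(values, parts[1:]):
        out += str(value) + tail
    return out

# For the annotation %!trace {"% was initially %",F,V} with F=s2, V=open:
print(expand_label("% was initially %", ["s2", "open"]))  # s2 was initially open
```

With a single trailing placeholder, as in most labels of Listing 3, the variable value is simply appended: expand_label("The relay is working at %", [1]) yields "The relay is working at 1".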
The tool xclingo performs two main tasks: (1) a translation of the annotated program P into a logic program P′; and (2) a construction of derivation trees by decoding the answer sets of P′. Program P′, built in the translation phase, is equivalent to the non-annotated version of P but includes auxiliary predicates and clingo theory atoms to keep track of the rules that have been fired. The translation is further divided into two steps. In the first step, trace_rule and trace annotations become clingo theory atoms without much transformation. These atoms are accepted by the grounder and can be handled after solving through the clingo Python API. The show_trace annotations are just transformed into traditional rules for an auxiliary head predicate show_all_p for each predicate p to be shown. Listing 5 shows the result of this first step when applied to some of the annotations presented before.

%% Translation of lines 2,3 in Listing 3
c(relay,on,J) :- h(s1,closed,J), h(ab(relay),false,J), time(J),
                 &trace{"The relay is working at %",J}.
%% Translation of line 1 in Listing 4
&trace_all{o(surge,J),"Hypothesis: there has been a power surge at %",J :} :- o(surge,J).
%% Translation of line 10 in Listing 4
show_all_h(light,V,J) :- h(light,V,J).

Listing 5: Transforming annotations into theory atoms and auxiliary predicates.

In the second step, each predicate of the original program P is prefixed with holds_ and assigned a numeric identifier N. After that, each rule of the form H :- B, &trace{label} is split into the following rules:

fired_N(X1, ..., Xn) :- B.
H :- fired_N(X1, ..., Xn).
&trace{N, H, label} :- fired_N(X1, ..., Xn).

where X1, ..., Xn also include the free variables in the body B. If the original rule body does not contain any trace label, then the last rule is not generated.
As an example, suppose that the rule in lines 2–3 from Listing 5 is assigned identifier 33. Then, its translation is shown in Listing 6.

fired_33(relay,on,J) :- holds_h(s1,closed,J), holds_h(ab(relay),false,J), holds_time(J).
holds_c(Aux0,Aux1,Aux2) :- fired_33(Aux0,Aux1,Aux2).
&trace{33, c(relay,on,J), "The relay is working at %", J} :- fired_33(relay,on,J).

Listing 6: Translation of lines 2–3 from Listing 5.

During this translation process, some additional rule information is stored: (1) the original head of the rule; (2) the original body of the rule; and (3) a list with all the variable names used in the fired_ rule. This information is used to reconstruct the derivation proof of the atoms after solving.

Once the translated program P′ is generated, xclingo makes a call to clingo's solve function to retrieve its answer sets and, for each one, proceeds to construct the explanations from the information retrieved in the answer set. To do so, the first step consists in collecting all the theory atoms &trace and &trace_all and replacing the % placeholders in their strings by the actual values. Once processed, the trace labels are stored in a dictionary, where they can be retrieved either by their associated fired_ identifier or by their atom. Then, xclingo identifies which rules have been fired in each model by finding the fired_ atoms in it. For each fired rule, we save the different sets of values the rule was fired with in a dictionary indexed by the fired_ identifier. With this information, together with the information about the original rules that was stored during translation (the original head, the original body, and all the variable names), we build the derivation proofs and print the explanations. The construction of the derivation proof is made with a "causes table" that has a row per rule and includes the trace labels and the bodies that fired those labels.
Once the causes table is obtained, the next step is filtering those atoms affected by show_trace annotations. All the atoms in the model that start with the show_all_ prefix are retrieved and stored in a list after removing the prefix. If that list is empty, then all the atoms in the model are explained. Finally, the explanation of each atom in that list has to be built and printed. Listing 7 shows the pseudocode for the recursive function that builds the explanation graph for a given atom and a given causes table. When building the explanations of an atom A, we can find that one atom a in its fired body has multiple explanations. In that case, A has a different explanation for each explanation of a. Function combine in line 10 performs this combination. Lines 14 to 21 manage the trace labels, which are the roots of the tree graphs. The atom has one explanation for each different trace label it is associated with. If an atom has no trace labels, nothing is added as root of the explanations (lines 20–21).

def build_explanations(atom, causes_table, stack):
    for row in causes_table.find_by_fired_head(atom):
        if not empty(row['fbody']):
            entry_expls = []
            for atom a in row['fbody']:
                if atom a not in stack:
                    stack.push(a)
                    a_expls = build_explanations(a, causes_table, stack)
                    stack.pop(a)
                    entry_expls = combine(entry_expls, a_expls)
        else:
            entry_expls = [{}]

        if not empty(row['traces']):
            explanations = []
            for t in row['traces']:
                for e in entry_expls:
                    explanations.append({t: e})

        else:
            explanations = entry_expls
    return explanations

Listing 7: Pseudocode for the build_explanations function.
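The recursion can be made concrete with a small runnable sketch. The code below is our own approximation, not xclingo's implementation: the causes table is simplified to a dictionary mapping each atom to a list of (trace labels, fired body) pairs, explanations are nested dictionaries, and render is a hypothetical pretty-printer reproducing the tree layout of Fig. 2:

```python
from itertools import product

def combine(entry_expls, atom_expls):
    # Each partial joint cause is extended with each alternative
    # explanation of the new body atom (Cartesian product).
    if not entry_expls:
        return [dict(e) for e in atom_expls]
    return [{**left, **right} for left, right in product(entry_expls, atom_expls)]

def build_explanations(atom, causes_table, stack=None):
    # Return all alternative explanation trees (nested dicts) for atom.
    stack = [] if stack is None else stack
    explanations = []
    for traces, fbody in causes_table.get(atom, []):
        entry_expls = [{}]
        for a in fbody:
            if a in stack:          # guard against cyclic derivations
                continue
            stack.append(a)
            a_expls = build_explanations(a, causes_table, stack)
            stack.pop()
            entry_expls = combine(entry_expls, a_expls)
        if traces:                  # trace labels become the tree roots
            explanations.extend({t: e} for t in traces for e in entry_expls)
        else:                       # no label: the level is skipped
            explanations.extend(entry_expls)
    return explanations

def render(tree, depth=0):
    # Lay out a nested-dict explanation in the "|__" style of Fig. 2.
    lines = []
    for label, sub in tree.items():
        lead = '* ' if depth == 0 else '| ' * depth
        lines.append(f'{lead}|__"{label}"')
        lines.extend(render(sub, depth + 1))
    return lines

# A toy causes table for one branch of answer set 1:
causes = {
    'h(light,off,1)': [(['The light is off at 1'], ['c(light,off,1)'])],
    'c(light,off,1)': [(['The bulb has been damaged at 1'], ['o(break,1)'])],
    'o(break,1)': [(['Hypothesis: something has broken the bulb at 1'], [])],
}
for expl in build_explanations('h(light,off,1)', causes):
    print('\n'.join(render(expl)))
```

Two trace labels on the same atom, or two fired rows for the same head, both surface as additional entries in the returned list, which is how the alternative causes of Fig. 2 arise.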
As a consequence, one level is skipped in that subtree.

Given that atoms may have multiple alternative explanations, the size of the set of explanations, when expressed as a set of trees, can grow exponentially in the worst case. To see why, just consider the program in Listing 8 for some constant value n. Here, the explanation for atom p(n) is a chain formed by n labels "a(n)", ..., "a(1)". If we add a second trace for p(X) using "b(%)", then each atom p(X) now has two alternative labels a(X) and b(X), and so the number of alternative derivations for p(n) becomes 2^n.

p(1).
p(X+1) :- p(X), X<=n.
%!trace {"a(%)",X} p(X).

Listing 8: A sequential chain for predicate p.

We have tested the performance of xclingo on a simple encoding of the well-known blocks world domain, varying the size of the scenario in terms of the number of blocks, but also the size of the explanations, by adding more trace labels per atom.

>> unclear(9,14) [1]
* |__"Block 9 is finally unclear"
| |__"Block 1 is finally on 9"
| | |__"Block 1 was moved on top of block 9 at t=11"

Figure 4: Explanation answering why block 9 is unclear at final step 14.

Figure 5: Execution time vs labels per atom.
Figure 6: Execution time vs number of atoms.

We asked xclingo to explain the actions performed, the final location of each block and the final value of predicate unclear(B), which states whether a block B has anything on top (see the example in Fig. 4). We measured the time spent in five different steps of the process: (1) the translation; (2) the execution of clingo for solving the planning problem; (3) the construction of the causes table and the dependencies among labels; (4) the expansion of all derivation trees; and finally, (5) their printing. Fig. 5 shows how the different times evolve when we fix the number of atoms requested but progressively add new trace labels per atom.
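The blow-up caused by extra labels is easy to reproduce with a toy count (our own illustration, not part of xclingo): with k alternative labels per atom, the chain of Listing 8 yields k^n alternative derivations for p(n).

```python
def count_derivations(n, labels_per_atom=2):
    """Count the alternative derivation trees for p(n) in the chain
    program of Listing 8: every atom p(X) in the derivation offers an
    independent choice among its labels, so the counts multiply."""
    count = 1
    for _ in range(n):
        count *= labels_per_atom
    return count

print(count_derivations(10))  # 1024 alternatives with labels a(%) and b(%)
print(count_derivations(10, labels_per_atom=1))  # a single derivation chain
```

This is the same growth curve observed in the experiments when adding trace labels per atom.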
As we explained before, increasing the number of labels per atom causes an exponential growth in the number of alternative explanations that is well reflected in the graphic. As future work, we plan to include an execution mode for generating a maximum number of explanations per atom instead of expanding all of them. Fig. 6 shows how, when we fix the number of trace labels per atom, the time spent in the construction of the causes table and its dependencies increases linearly as we add more atoms. We also plan to explore alternatives for the construction of the explanations, like performing a tabled evaluation or even using clingo itself to solve this part of the problem.

The explanations of xclingo correspond to causal justifications from [6], except that xclingo does not guarantee that the derivation trees are minimal, although it performs some minor simplifications. In the survey [9], different approaches to the problem of answering the "why" in ASP are reviewed, including the two related ones called off-line justifications [14] and argumentative explanations [15]. An important difference with respect to these approaches is that, as we explained, xclingo does not derive information from negative literals, which are understood, under the xclingo semantics, as the absence of cause. As a result, explanations only provide information when defaults are broken (like inertia), rather than including all possible events that could have changed the outcome although they did not happen. The latter can be interesting for answering "why not" queries, but the size of the explanations may simply become unmanageable. Another difference is that, thanks to the labelling system inherited from causal justifications, xclingo allows selecting in a fine-grained way which information can be used in the justification trees.

Other systems that provide explanations for logic programs are
ErgoAI (http://coherentknowledge.com/product-overview-ergoai-platform/), for Datalog programs, and s(ASP) [1], for non-ground ASP programs. An advantage of these two systems is that their explanations can be generated without resorting to a full grounding of the program. Besides, ErgoAI shows some similarities to xclingo, like selecting the information included in the explanations (although, in this case, we must explicitly define which information must be omitted), or allowing text labels and variables in the justifications. However, both
ErgoAI and s(ASP) include information about negated literals, unlike xclingo.

We presented xclingo, an ASP extension interpreter that allows obtaining justifications of the literals in the answer sets of logic programs. At the moment, the tool is a partial implementation of the logic programming extension described by causal justifications [6]. The Python source code of xclingo and instructions for its usage are available at github (https://github.com/bramucas/xclingo). It requires Python 3 and the following Python modules to be installed: clingo (https://potassco.org/clingo/python-api/5.4/), pandas (https://pandas.pydata.org/) and more_itertools (https://github.com/more-itertools/more-itertools). We illustrated, on a diagnostic reasoning example, how xclingo's features can be used for obtaining readable explanations, which can even be expressed in natural language. Even if explanations are not needed, xclingo can also be used as a complement to clingo for debugging purposes, since the annotations are included as comments and do not make the code incompatible.

The current version of xclingo is still in a preliminary stage and will be augmented with new features. Immediate future work includes the treatment of minimization directives (something essential, for instance, to obtain minimal solutions for diagnosis problems), and other usual clingo constructs like choice rules or pooling, which are not accepted yet. Another extension we plan to include in the future is the addition of trace labels for constraints, so that those that are labelled become weak constraints. Other extensions for the long term include the use of causal literals as in [8] or the use of verification annotations to be combined with a similar theorem proving technique as done in [11].

References

[1] Joaquín Arias, Manuel Carro, Elmer Salazar, Kyle Marple & Gopal Gupta (2018):
Constraint Answer Set Programming without Grounding. Theory and Practice of Logic Programming.

[2] Marcello Balduccini & Michael Gelfond (2003): Diagnostic reasoning with A-Prolog. Theory and Practice of Logic Programming.

[3] Georg Boenn, Martin Brain, Marina De Vos & John ffitch (2011): Automatic Music Composition using Answer Set Programming. Theory and Practice of Logic Programming 11, doi:10.1017/S1471068410000530.

[4] Gerhard Brewka, Thomas Eiter & Miroslaw Truszczynski (2011): Answer set programming at a glance. Commun. ACM.

[5] Daniel R. Brooks, Esra Erdem, Selim T. Erdoğan, James W. Minett & Donald Ringe (2007): Inferring Phylogenetic Trees Using Answer Set Programming. Journal of Automated Reasoning 39, pp. 471–511, doi:10.1007/s10817-007-9082-1.

[6] Pedro Cabalar, Jorge Fandinno & Michael Fink (2014): Causal Graph Justifications of Logic Programs. Theory and Practice of Logic Programming.

[7] Esra Erdem, Erdi Aker & Volkan Patoglu (2012): Answer set programming for collaborative housekeeping robotics: Representation, reasoning, and execution. Intelligent Service Robotics 5, doi:10.1007/s11370-012-0119-x.

[8] Jorge Fandinno (2016): Deriving conclusions from non-monotonic cause-effect relations. Theory and Practice of Logic Programming.

[9] Jorge Fandinno & Claudia Schulz (2019): Answering the "why" in answer set programming – A survey of explanation approaches. Theory and Practice of Logic Programming.

[10] Martin Gebser, Roman Kaminski, Benjamin Kaufmann, Max Ostrowski, Torsten Schaub & Philipp Wanko (2016): Theory Solving Made Easy with Clingo 5. In M. Carro & A. King, editors: Technical Communications of the Thirty-second International Conference on Logic Programming (ICLP'16), OpenAccess Series in Informatics (OASIcs).

[11] Vladimir Lifschitz, Patrick Lühne & Torsten Schaub (2019): Verifying Strong Equivalence of Programs in the Input Language of gringo, pp. 270–283, doi:10.1007/978-3-030-20528-7_20.

[12] Victor W. Marek & Miroslaw Truszczyński (1999): Stable Models and an Alternative Logic Programming Paradigm. In Krzysztof R. Apt, Victor W. Marek, Mirosław Truszczyński & David S. Warren, editors: The Logic Programming Paradigm, Artificial Intelligence, Springer Berlin Heidelberg, pp. 375–398, doi:10.1007/978-3-642-60085-2_17.

[13] Ilkka Niemelä (1999): Logic Programs with Stable Model Semantics as a Constraint Programming Paradigm. Annals of Mathematics and Artificial Intelligence.

[14] Enrico Pontelli, Tran Cao Son & Omar Elkhatib (2009): Justifications for logic programs under answer set semantics. Theory and Practice of Logic Programming.