Non-Monotonic Reasoning and Story Comprehension
Irene-Anna Diakidoy, Antonis Kakas, Loizos Michael, Rob Miller
In Proceedings of the 15th International Workshop on Non-Monotonic Reasoning (NMR 2014), Vienna, 17–19 July, 2014
Irene-Anna Diakidoy
University of Cyprus
[email protected]

Antonis Kakas
University of Cyprus
[email protected]

Loizos Michael
Open University of Cyprus
[email protected]

Rob Miller
University College London
[email protected]
Abstract
This paper develops a Reasoning about Actions and Change framework integrated with Default Reasoning, suitable as a Knowledge Representation and Reasoning framework for Story Comprehension. The proposed framework, which is guided strongly by existing knowhow from the Psychology of Reading and Comprehension, is based on the theory of argumentation from AI. It uses argumentation to capture appropriate solutions to the frame, ramification and qualification problems and generalizations of these problems required for text comprehension. In this first part of the study the work concentrates on the central problem of integration (or elaboration) of the explicit information from the narrative in the text with the implicit (in the reader's mind) common sense world knowledge pertaining to the topic(s) of the story given in the text. We also report on our empirical efforts to gather background common sense world knowledge used by humans when reading a story and to evaluate, through a prototype system, the ability of our approach to capture both the majority and the variability of understanding of a story by the human readers in the experiments.
Introduction
Text comprehension has long been identified as a key test for Artificial Intelligence (AI). Aside from its central position in many forms of the Turing Test, it is clear that human computer interaction could benefit enormously from this and other forms of natural language processing. The rise of computing over the Internet, where so much data is in the form of textual information, has given even greater importance to this topic. This paper reports on a research program aiming to learn from the (extensive) study of text comprehension in Psychology in order to draw guidelines for developing frameworks for automating narrative text comprehension and, in particular, story comprehension (SC).

Our research program brings together knowhow from Psychology and AI, in particular, our understanding of Reasoning about Actions and Change and Argumentation in AI, to provide a formal framework of representation and a computational framework for SC that can be empirically evaluated and iteratively developed given the results of the evaluation. This empirical evaluation, which forms an important part of the program, is based on the following methodology: (i) set up a set of stories and a set of questions to test different aspects of story comprehension; (ii) harness the world knowledge on which human readers base their comprehension; (iii) use this world knowledge in our framework and automated system and compare its comprehension behaviour with that of the source of the world knowledge.

In this paper we will concentrate on the development of an appropriate Reasoning about Actions and Change and Default Reasoning framework for representing narratives extracted from stories, together with the background world knowledge needed for the underlying central process of story comprehension: synthesizing and elaborating the explicit text information with new inferences through the implicit world knowledge of the reader. In order to place this specific consideration in the overall process of story comprehension, we present here a brief summary of the problem of story comprehension from the psychological point of view.
A Psychological Account of Story Comprehension
Comprehending text entails the construction of a mental representation of the information contained in the text. However, no text specifies clearly and completely all the implications of text ideas or the relations between them. Therefore, comprehension depends on the ability to mentally represent the text-given information and to generate bridging and elaborative inferences that connect and elaborate text ideas, resulting in a mental or comprehension model of the story. Inference generation is necessary in order to comprehend any text as a whole, i.e., as a single network of interconnected propositions instead of as a series of isolated sentences, and to appreciate the suspense and surprise that characterize narrative texts or stories in particular (Brewer and Lichtenstein 1982; McNamara and Magliano 2009).

Although inference generation is based on the activation of background world knowledge, the process is constrained by text information. Concepts encountered in the text activate related conceptual knowledge in the readers' long-term memory (Kintsch 1988). In the case of stories, knowledge about mental states, emotions, and motivations is also relevant as the events depicted tend to revolve around them. Nevertheless, at any given point in the process, only a small subset of all the possible knowledge-based inferences remain activated and become part of the mental representation: those that connect and elaborate text information in a way that contributes to the coherence of the mental model (McNamara and Magliano 2009; Rapp and van den Broek 2005). Inference generation is a task-oriented process that follows the principle of cognitive economy enforced by a limited-resource cognitive system.

However, the results of this coherence-driven selection mechanism can easily exceed the limited working memory capacity of the human cognitive system. Therefore, coherence on a more global level is achieved through higher-level integration processes that operate to create macropropositions that generalize or subsume a number of text-encountered concepts and the inferences that connected them. In the process, previously selected information that maintains few connections to other information is dropped from the mental model. This results in a more consolidated network of propositions that serves as the new anchor for processing subsequent text information (Kintsch 1998).

Comprehension also requires an iterative general revision mechanism of the mental model that readers construct. The feelings of suspense and surprise that stories aim to create are achieved through discontinuities or changes (in settings, motivations, actions, or consequences) that are not predictable or are wrongly predictable solely on the basis of the mental model created so far. Knowledge about the structure and the function of stories leads readers to expect discontinuities and to use them as triggers to revise their mental model (Zwaan 1994). Therefore, a change in time or setting in the text may serve as a clue for revising parts of the mental model while other parts remain and are integrated with subsequent text information.

The interaction of bottom-up and top-down processes for the purposes of coherence carries the possibility of different but equally legitimate or successful comprehension outcomes. Qualitative and quantitative differences in conceptual and mental state knowledge can give rise to differences between the mental models constructed by different readers.
Nevertheless, comprehension is successful if these are primarily differences in elaboration but not in the level of coherence of the final mental model.

In this paper we will focus on the underlying lower-level task of constructing the possibly additional elements of the comprehension model and the process of revising these elements as the story unfolds, with only a limited concern for the global requirements of coherence and cognitive economy. Our working hypothesis is that these higher-level features of comprehension can be tackled on top of the underlying framework that we are developing in this paper, either at the level of the representational structures and language or with additional computational processes on top of the underlying computational framework defined in this paper. We are also assuming as solved the orthogonal issue of correctly parsing the natural language of the text into some information-equivalent structured (e.g., logical) form that gives us the explicit narrative of the story. This is not to say that this issue is not an important element of narrative text comprehension. Indeed, it may need to be tackled in conjunction with the problems on which we are focusing, since, for example, the problem of de-referencing pronoun and article anaphora could depend on background world knowledge and hence possibly on the higher-level whole comprehension of the text (Levesque, Davis, and Morgenstern 2012).

In the next two sections we will develop an appropriate representation framework using preference-based argumentation that enables us to address all three major problems of frame, ramification and qualification and provide an associated revision process. The implementation of a system, discussed after this, shows how psychologically-inspired story comprehension can proceed as a sequence of elaboration and revision. The paper then presents, using the empirical methodology suggested by research in psychology, our initial efforts to evaluate how closely the inferences drawn by our framework and system match those given by humans engaged in a story comprehension task. The following story will be used as a running example.
Story:
It was the night of Christmas Eve. After feeding the animals and cleaning the barn, Papa Joe took his shotgun from above the fireplace and sat out on the porch cleaning it. He had had this shotgun since he was young, and it had never failed him, always making a loud noise when it fired.

Papa Joe woke up early at dawn, picked up his shotgun and went off to the forest. He walked for hours, until the sight of two turkeys in the distance made him stop suddenly. A bird on a tree nearby was cheerfully chirping away, building its nest. He aimed at the first turkey, and pulled the trigger. After a moment's thought, he opened his shotgun and saw there were no bullets in the shotgun's chamber. He loaded his shotgun, aimed at the turkey and pulled the trigger again. Undisturbed, the bird nearby continued to chirp and build its nest. Papa Joe was very confused. Would this be the first time that his shotgun had let him down?
The story above along with other stories and material used for the evaluation of our approach can be found at http://cognition.ouc.ac.cy/narrative/.

KRR for Story Comprehension
We will use methods and results from Argumentation Theory in AI (e.g., (Dung 1995; Modgil and Prakken 2012)) and its links to the area of Reasoning about Action and Change (RAC) with Default Reasoning on the static properties of domains (see (van Harmelen, Lifschitz, and Porter 2008) for an overview) to develop a Knowledge Representation and Reasoning (KRR) framework suitable for Story Comprehension (SC). Our central premise is that SC can be formalized in terms of argumentation accounting for the qualification and the revision of the inferences drawn as we read a story. The psychological research and understanding of SC will guide us in the way we exploit the knowhow from AI. The close link between human common sense reasoning, such as that for SC, and argumentation has been recently re-enforced by new psychological evidence (Mercier and Sperber 2011) suggesting that human reasoning is in its general form inherently argumentative. In our proposed approach of KRR for SC the reasoning to construct a comprehension model and its qualification at all levels as the story unfolds will be captured through a uniform acceptability requirement on the arguments that support the conclusions in the model.

The significance of this form of representation for SC is that it makes easy the elaboration of new inferences from the explicit information in the narrative that, as we discussed in the introduction, is crucially necessary for the successful comprehension of stories. On the other hand, this easy form of elaboration and the extreme form of qualification that it needs can be mitigated by the requirement, again given from the psychological perspective, that elaborative inferences need to be grounded on the narrative and sceptical in nature. In other words, the psychological perspective of SC, which also suggests that story comprehension is a process of "fast thinking", leads us to depart from a standard logical view of drawing conclusions based on truth in all (preferred) models. Instead, the emphasis is turned on building one grounded and well-founded model from a collection of solid or sceptical properties that are grounded on the text and follow as unqualified conclusions.

We use a typical RAC language of Fluents, Actions, Times, with an extra sort of
Actors. An actor-action pair is an event, and a fluent/event or its negation is a literal. For this paper it suffices to represent times as natural numbers (in general, abstract time-points, called scenes, are useful) and to assume that time-points are dense between story elements to allow for the realization of indirect effects. Arguments will be built from premises in the knowledge connected to any given story. We will have three types of such knowledge units as premises or basic units of arguments.
Definition 1. Let L be a fluent literal, X a fluent/event literal and S a set of fluent/event literals. A unit argument or premise has one of the following forms:
• a unit property argument pro(X, S) or prec(X, S);
• a unit causal argument cau(X, S);
• a unit persistence argument per(L, {L}) (which we sometimes write as per(L, ·)).
These three forms are called types of unit arguments. A unit argument of any type is denoted by arg(H, B).

The two forms of unit property arguments differ in that pro(X, S) relates properties to each other at the same time-point, whereas prec(X, S) aims to capture preconditions that hold at the time-point of an event, under which the event is blocked from bringing about its effects at the subsequent time-point. With abuse of terminology we will sometimes refer to these units of arguments simply as arguments.

The knowledge required for the comprehension of a story comprises two parts: the explicit knowledge of the narrative extracted from the text of the story and the implicit background knowledge that the reader uses along with the narrative for elaborative inferences about the story.

Definition 2. A world knowledge theory W is a set of unit property and causal arguments together with a (partial) irreflexive priority relation on them. A narrative N is a set of observations OBS(X, T) for a fluent/event literal X and a time-point T, together with a (possibly empty) set of (story-specific) property or causal unit arguments.

The priority relation in W would typically reflect the priority of specificity for properties, expressed by unit property arguments pro(X, S), or the priority of precondition properties, expressed by unit property arguments prec(X, S), over causal effects, expressed by unit causal arguments. This priority amongst these basic units of knowledge gives a form of non-monotonic reasoning (NMR) for deriving new properties that hold in the story. To formalize this NMR we use a form of preference-based argumentation uniformly to capture the static (default) inference of properties at a single time-point as well as inferences between different time-points, by extending the domain-specific priority relation to address the frame problem.
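To make the shape of these knowledge units concrete, the following is a minimal sketch, in the style of the Prolog encoding used by the prototype system described later, of how unit arguments and narratives could be laid down as facts. The predicate names (wk/2, obs/2, unit/2, complement/2) are illustrative choices, not the system's actual internals.

:- use_module(library(lists)).
:- dynamic wk/2, obs/2.

% Fluent/event literals: a fluent/event f, or its negation neg(f).
complement(neg(F), F) :- !.
complement(F, neg(F)).

% Definition 1: named property (pro), precondition (prec) and causal
% (cau) units are supplied as wk/2 facts; a persistence unit
% per(L, {L}) exists implicitly for every literal L.
unit(Name, Arg) :- wk(Name, Arg).
unit(per(L), per(L, [L])).

% Definition 2: a narrative is a set of timed observations,
% given as obs(Literal, Time) facts.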
Definition 3. A story representation SR = ⟨W, N, ≻⟩ comprises a world knowledge theory W, a narrative N, and a (partial) irreflexive priority relation ≻ extending the one in W so that: (i) cau(H, B) ≻ per(¬H, ·); (ii) per(H, ·) ≻ pro(¬H, B). The extended relation ≻ may also prioritize between arguments in N and those in W (typically the former over the latter).

The first priority condition, namely that causal arguments have priority over persistence arguments, encompasses a solution to the frame problem. When we need to reason with defeasible property information, such as default rules about the normal state of the world in which a story takes place, we are also faced with a generalized frame problem, where "a state of the world persists irrespective of the existence of general state laws". Hence, if we are told that the world is in fact in some exceptional state that violates a general (default) property, this will continue to be the case in the future, until we learn of (or derive) some causal information that returns the world into its normal state. The solution to this generalized frame problem is captured succinctly by the second general condition on the priority relation of a story representation and its combination with the first condition.

A representation SR of our example story (focusing on its ending) may include the following unit arguments in W and N (where pj is short for "Papa Joe"):

c1: cau(fired_at(pj, X), {aim(pj, X), pull_trigger(pj)})
c2: cau(¬alive(X), {fired_at(pj, X), alive(X)})
c3: cau(noise, {fired_at(pj, X)})
c4: cau(¬chirp(bird), {noise, nearby(bird)})
c5: cau(gun_loaded, {load_gun})
p1: prec(¬fired_at(pj, X), {¬gun_loaded})
p2: pro(¬fired_at(pj, X), {¬noise}) (story specific)

with p1 ≻ c1, p2 ≻ c1; and the following in N:

OBS(alive(turkey), 2), OBS(aim(pj, turkey), 2), OBS(pull_trigger(pj), 2), OBS(¬gun_loaded, 5), OBS(load_gun, 6), OBS(pull_trigger(pj), 7), OBS(chirp(bird), 10), OBS(nearby(bird), 10),

with the exact time-point choices being inconsequential.

As we can see in this example, the representation of common sense world knowledge has the form of simple associations between concepts in the language. This stems from a key observation in psychology that typically all world knowledge, irrespective of type, is inherently default. It is not in the form of an elaborate formal theory of detailed definitions of concepts, but rather is better regarded as a collection of relatively loose semantic associations between concepts, reflecting typical rather than absolute information. Thus knowledge need not be fully qualified at the representation level, since it can be qualified via the reasoning process by the relative strength of other (conflicting) associations in the knowledge. In particular, as we will see below, endogenous qualification will be tackled by the priority relation in the theory and exogenous qualification by this priority coupled with the requirement that explicit narrative information forms, in effect, non-defeasible arguments.
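Under the encoding sketched earlier, this fragment of SR could be written down as follows; the two generic clauses near the end are one way to realize conditions (i) and (ii) of Definition 3, again as a sketch with assumed predicate names, and the time-points are one consistent choice (since, as noted, the exact values are inconsequential).

% World knowledge units c1-c5, p1, p2 of the example.
wk(c1, cau(fired_at(pj, X), [aim(pj, X), pull_trigger(pj)])).
wk(c2, cau(neg(alive(X)), [fired_at(pj, X), alive(X)])).
wk(c3, cau(noise, [fired_at(pj, X)])).
wk(c4, cau(neg(chirp(bird)), [noise, nearby(bird)])).
wk(c5, cau(gun_loaded, [load_gun])).
wk(p1, prec(neg(fired_at(pj, X)), [neg(gun_loaded)])).
wk(p2, pro(neg(fired_at(pj, X)), [neg(noise)])).   % story specific

% Domain-specific priorities stated with the story.
prefer(p1, c1).
prefer(p2, c1).

% Definition 3(i): a causal unit dominates persistence of the
% complementary literal; 3(ii): persistence dominates a default
% property unit for the complementary literal.
prefer(N, per(L)) :- wk(N, cau(H, _)), complement(H, L).
prefer(per(L), N) :- wk(N, pro(H, _)), complement(H, L).

% The narrative.
obs(alive(turkey), 2).
obs(aim(pj, turkey), 2).
obs(pull_trigger(pj), 2).
obs(neg(gun_loaded), 5).
obs(load_gun, 6).
obs(pull_trigger(pj), 7).
obs(chirp(bird), 10).
obs(nearby(bird), 10).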
Argumentation Semantics for Stories

To give the semantics of any given story representation SR we will formulate a corresponding preference-based argumentation framework of the form ⟨Arguments, Disputes, Defences⟩. Arguments will be based on sets of timed unit arguments. Since we are required to reason about properties over time, it is necessary that arguments populate some connected subset of the time line.
Definition 4. Let SR = ⟨W, N, ≻⟩ be a story representation. A (unit) argument tuple has the form ⟨arg(H, B), T_h, d; (X, T)⟩, where arg(H, B) is a unit argument in SR, X is a fluent/event literal, d ∈ {F, B} is an inference type of either forwards derivation or backwards derivation by contradiction, and T_h, T are time-points. T_h refers to the time-point at which the head of the unit argument applies, while X and T refer to the conclusion drawn using the unit argument in the tuple. An interpretation ∆ of SR is then defined as a set of argument tuples. We say ∆ supports a fluent/event literal X at T if either ⟨arg(H, B), T_h, d; (X, T)⟩ ∈ ∆ or OBS(X, T) ∈ N. The notion of support is extended to hold on sets of timed literals.

The inference process of how an argument tuple supports a timed literal, and thus is allowed to belong to an interpretation, is made precise by the following definition.
Definition 5. Let ∆ be an interpretation and ⟨arg(H, B), T_h, d; (X, T)⟩ in ∆ with d = F. Then arg(H, B) applied at T_h forward derives X at T under ∆ iff X = H, T = T_h and ∆ supports B at T′. The set {⟨Y, T′⟩ | Y ∈ B} is called the activation condition for the derivation; T′ = T_h if arg(H, B) is of the form pro(H, B), and T′ = T_h − 1 for the other argument types.

When d = B, arg(H, B) applied at T_h backward derives X at T under ∆ iff ¬X ∈ B and ∆ supports {¬H} at T_h and B \ {¬X} at T. The set {⟨¬H, T_h⟩} ∪ {⟨Y, T⟩ | Y ∈ B \ {¬X}} is the activation condition; T = T_h if arg(H, B) is of the form pro(H, B), and T = T_h − 1 for the other argument types.

The framework thus includes reasoning by contradiction with the defeasible world knowledge. Although the psychological debate on the question of to what extent humans reason by contradiction, e.g., by contraposition, is still ongoing (see, e.g., (Johnson-Laird and Yang 2008; Rips 1994)), it is natural for a formal argumentation framework to capture this mode of indirect reasoning (see, e.g., (Kakas, Toni, and Mancarella 2013; Kakas and Mancarella 2013)). One of the main consequences of this is that it gives a form of backwards persistence, e.g., from an observation to support (but not necessarily conclude) that the observed property holds also at previous time-points. An argument tuple of the form ⟨per(L, ·), T + 1, B; (¬L, T)⟩ captures the backwards persistence of ¬L from time T + 1 to T, using by contraposition the unit argument of persistence of L from T to T + 1. We also note that the separation of the inference types (e.g., forwards and backwards) is known to be significant in preference-based argumentation (Modgil and Prakken 2012). This will be exploited when we consider the attacking relations between arguments: their disputes and defences.
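The two derivation modes of Definition 5 can be prototyped directly over the encoding of the earlier snippets. The sketch below assumes a tuple representation t(Name, Th, Dir, X, T) with the candidate time-point Th given, and the wk/2, obs/2, unit/2 and complement/2 predicates defined above.

% Delta is a list of tuples t(Name, Th, Dir, X, T).
supports(Delta, X, T) :- member(t(_, _, _, X, T), Delta).
supports(_, X, T) :- obs(X, T).

supports_all(_, [], _).
supports_all(Delta, [L | Ls], T) :-
    supports(Delta, L, T), supports_all(Delta, Ls, T).

% The body of a pro unit is evaluated at Th itself; prec, cau and
% per bodies are evaluated one time-point earlier.
body_time(pro, Th, Th).
body_time(Type, Th, Tb) :- member(Type, [prec, cau, per]), Tb is Th - 1.

% Forward derivation: conclude the head H at Th when the body holds.
derives(Delta, t(Name, Th, fwd, H, Th)) :-
    unit(Name, Unit), Unit =.. [Type, H, Body],
    body_time(Type, Th, Tb),
    supports_all(Delta, Body, Tb).

% Backward derivation (contraposition): the complement of the head
% holds at Th and the rest of the body at Tb, so some body literal
% must be contradicted at Tb.
derives(Delta, t(Name, Th, bwd, X, Tb)) :-
    unit(Name, Unit), Unit =.. [Type, H, Body],
    body_time(Type, Th, Tb),
    complement(H, NotH), supports(Delta, NotH, Th),
    select(NotX, Body, Rest), complement(NotX, X),
    supports_all(Delta, Rest, Tb).

% ?- derives([], t(c1, 3, fwd, H, T)).
% H = fired_at(pj, turkey), T = 3, via the observations at time 2.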
To reflect the suggestion by psychology that inferences drawn by readers are strongly tied to the story, we require that the activation conditions of argument tuples must eventually be traced back to the explicit information in the narrative of the story representation.

Definition 6. An interpretation ∆ is grounded on SR iff there is a total ordering of ∆ such that the activation condition of any tuple α ∈ ∆ is supported by the set of tuples that precede α in the ordering or by the narrative in SR.

Hence in a grounded interpretation there can be no cycles in the tuples that support their activation conditions, and so these will always end with tuples whose activation conditions are supported directly by the observations in the narrative of the story.
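A direct, if naive, way to check Definition 6 over this encoding is to search for an ordering in which every tuple's activation condition is supported by its predecessors or by the narrative alone; derives/2 is the predicate from the previous sketch.

% grounded(Delta): some total ordering of Delta activates each tuple
% from the tuples placed before it (or from observations alone).
grounded(Delta) :- grounded(Delta, []).

grounded([], _).
grounded(Pending, Before) :-
    select(Tuple, Pending, Rest),
    derives(Before, Tuple),
    grounded(Rest, [Tuple | Before]).

% e.g., grounded([t(c1, 3, fwd, fired_at(pj, turkey), 3)]) holds,
% since the tuple's body is supported by the observations at time 2.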
We can now define the argumentation framework corresponding to any given story representation. The central task is to capture through the argumentation semantics the non-monotonic reasoning of linking the narrative to the defeasible information in the world knowledge. In particular, the argumentation will need to capture the qualification problem, encompassed in this synthesis of the narrative with the world knowledge, both at the level of static reasoning at one time-point with default property arguments and at the level of temporal projection from one time-point to another.

Definition 7. Let SR be a story representation. Then the corresponding argumentation framework ⟨ARG_SR, DIS_SR, DEF_SR⟩ is defined as follows:
• An argument A in ARG_SR is any grounded interpretation of SR.
• An argument A is in conflict with SR iff there exists a tuple α = ⟨arg(H, B), T_h, d; (X, T)⟩ in A such that OBS(¬X, T) ∈ N of SR.
• Two arguments A1, A2 are in (direct) conflict with each other iff there exists a tuple α1 = ⟨arg1(H1, B1), T_h1, d1; (X1, T1)⟩ in A1 and a tuple α2 = ⟨arg2(H2, B2), T_h2, d2; (X2, T2)⟩ in A2 such that X1 = ¬X2 and T1 = T2. Two arguments A1, A2 are in indirect conflict with each other iff there exists a tuple α1 = ⟨arg1(H1, B1), T_h1, d1; (X1, T1)⟩ in A1 and a tuple α2 = ⟨arg2(H2, B2), T_h2, d2; (X2, T2)⟩ in A2 such that (d1 = B or d2 = B) and H1 = ¬H2, T_h1 = T_h2.
• A1 disputes A2, and hence (A1, A2) ∈ DIS_SR, iff A1 is in direct or indirect conflict with A2 and, in the case of indirect conflict, d2 = B holds in the definition of indirect conflict above.
• Argument A1 undercuts A2 iff A1, A2 are in direct or indirect conflict via α1 and α2, and
– when in direct conflict, there exists a tuple α1′ = ⟨arg1′(H1′, B1′), T_h1′, d1′; (X1′, T1′)⟩ in A1 and a tuple α2′ = ⟨arg2′(H2′, B2′), T_h2′, d2′; (X2′, T2′)⟩ in A2 such that arg1′(H1′, B1′) ≻ arg2′(H2′, B2′) and T1′ = T2′ or T_h1′ = T_h2′;
– when in indirect conflict, arg1(H1, B1) ≻ arg2(H2, B2), where arg1(H1, B1) and arg2(H2, B2) are the unit arguments in α1 and α2 respectively.
• Argument A1 defends against A2, and hence (A1, A2) ∈ DEF_SR, iff there exists a subset A2′ ⊆ A2 which is in minimal conflict with A1 (i.e., no proper subset of A2′ is in conflict with A1) and A1 undercuts A2′.

Several clarifying comments are in order. Arguments that are in dispute are arguments that support some contrary conclusion at the same time-point and hence form counter-arguments for each other. The use of contrapositive reasoning for backwards inference also means that it is possible to have arguments that support conclusions that are not contrary to each other but whose unit arguments have conflicting conclusions. For example, in our running example we can use the causal unit argument c1 to forward derive fired_at(pj, X) and the property argument p1 to backwards derive gun_loaded from fired_at(pj, X); despite the fact that the derived facts are not in conflict, the unit arguments used concern conflicting conclusions. Hence such arguments are also considered to be in conflict, but instead of a direct conflict we say we have an indirect conflict. Not all such indirect conflicts are important. A dispute that results from an indirect conflict of a unit argument used backwards on a unit argument that is used forwards does not have any effect. Such cases are excluded from giving rise to disputes.

This complication in the definitions of conflicts and disputes results from the defeasible nature of the world knowledge and the fact that we are allowing reasoning by contradiction on such defeasible information. These complications in fact stem from the fact that we are only approximating proof by contradiction reasoning, capturing this indirectly through contraposition.
The study of this is beyond the scope of this paper and the reader is referred to the newly formulated Argumentation Logic (Kakas, Toni, and Mancarella 2013).

Undercuts between arguments require that the undercutting argument does so through a stronger unit or premise argument than some unit argument in the argument that is undercut. The defence relation is built out of undercuts by applying an undercut on minimally conflicting subsets of the argument which we are defending against. Hence these two relations between arguments are asymmetric. Note also that the stronger premise from the undercutting argument does not necessarily need to come from the subset of the unit arguments that supports the conflicting conclusion. Instead, it can come from any part of the undercutting argument, to undercut at any point of the chain supporting the activation of the conflicting conclusion. This, as we shall illustrate below, is linked to how the framework addresses the ramification problem of reasoning with actions and change.
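The conflict notions of Definition 7 also have a compact rendering in this encoding. The sketch below covers direct and indirect conflict and the dispute relation; undercuts would additionally consult prefer/2 on the unit arguments involved. The placement of the backward/forward exclusion on the disputed tuple follows our reading of the discussion above.

% Heads of named units (persistence units carry their literal).
head_of(Name, H) :- unit(Name, Unit), arg(1, Unit, H).

% Direct conflict: contrary conclusions at the same time-point.
direct_conflict(A1, A2) :-
    member(t(_, _, _, X1, T), A1),
    member(t(_, _, _, X2, T), A2),
    complement(X1, X2).

% Indirect conflict: conflicting unit heads applied at the same
% time-point, with at least one unit used backwards.
indirect_conflict(A1, A2) :-
    member(t(N1, Th, D1, _, _), A1), head_of(N1, H1),
    member(t(N2, Th, D2, _, _), A2), head_of(N2, H2),
    (D1 = bwd ; D2 = bwd),
    complement(H1, H2).

% Disputes: any direct conflict, plus those indirect conflicts
% where the disputed tuple is the one used backwards.
disputes(A1, A2) :- direct_conflict(A1, A2).
disputes(A1, A2) :-
    member(t(N1, Th, _, _, _), A1), head_of(N1, H1),
    member(t(N2, Th, bwd, _, _), A2), head_of(N2, H2),
    complement(H1, H2).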
The semantics of a story representation is defined using the corresponding argumentation framework as follows.

Definition 8. Let SR be a story representation and ⟨ARG_SR, DIS_SR, DEF_SR⟩ its corresponding argumentation framework. An argument ∆ is acceptable in SR iff:
• ∆ is not in conflict with SR nor in direct conflict with itself.
• No argument A undercuts ∆.
• For any argument A that minimally disputes ∆, ∆ defends against A.
Acceptable arguments are called comprehension models of SR. Given a comprehension model ∆, a timed fluent literal (X, T) is entailed by SR iff it is supported by ∆.

The above definition of comprehension model and story entailment is of a sceptical form where, apart from the fact that all conclusions must be grounded on the narrative, they must also not be non-deterministic, in the sense that there cannot exist another comprehension model where the contrary conclusion is entailed. Separating disputes and undercuts and identifying defences with undercuts facilitates this sceptical form of entailment. Undercuts (see, e.g., (Modgil and Prakken 2012) for some recent discussion) are strong counter-claims whose existence means that the attacked set is inappropriate for sceptical conclusions, whereas disputes are weak counter-claims that could be defended against or invalidated by extending the argument to undercut them back. Also, the explicit condition that an acceptable argument should not be undercut even if it can undercut back means that this definition does not allow non-deterministic choices for arguments that can defend themselves.

To illustrate the formal framework, how arguments are constructed and how a comprehension of a story is formed through acceptable arguments, let us consider our example story starting from the end of the second paragraph, corresponding to time-points 2–5 in the example narrative. Note that the empty ∆ supports aim(pj, turkey) and pull_trigger(pj) at 2. Hence, c1 applied at 3 forward activates fired_at(pj, turkey) at 3 under the empty argument ∆0. We can thus populate ∆0 with ⟨c1, 3, F; (fired_at(pj, turkey), 3)⟩. Similarly, we can include ⟨per(alive(turkey), ·), 3, F; (alive(turkey), 3)⟩ in the new ∆0. Under this latter ∆0, c2 applied at 4 forward activates ¬alive(turkey) at 4, allowing us to further extend ∆0 with ⟨c2, 4, F; (¬alive(turkey), 4)⟩. The resulting ∆1 is a grounded interpretation that supports ¬alive(turkey) at 4. It is based on this inference that we expect readers to respond that the first turkey is dead when asked about its status at this point, since no other argument grounded on the narrative (thus far) can support a qualification argument to this inference. Note also that we can include in ∆1 the tuple ⟨p1, 3, B; (gun_loaded, 2)⟩ to support, using backwards (contrapositive) reasoning with p1, the conclusion that the gun was loaded when it was fired.

Reading the first sentence of the third paragraph, we learn that OBS(¬gun_loaded, 5). We now expect that this new piece of evidence will lead readers to revise their inferences, as now we have an argument to support the conclusion ¬fired_at(pj, turkey) based on the stronger (qualifying) unit argument p1. For this we need to support the activation condition of p1 at time 3, i.e., to support ¬gun_loaded at 2. To do this we can use the argument tuples:

⟨per(gun_loaded, ·), 5, B; (¬gun_loaded, 4)⟩
⟨per(gun_loaded, ·), 4, B; (¬gun_loaded, 3)⟩
⟨per(gun_loaded, ·), 3, B; (¬gun_loaded, 2)⟩

which support the conclusion that the gun was also unloaded before it was observed to be so.
This uses per(gun_loaded, ·) contrapositively to backward activate the unit argument of persistence: e.g., had the gun been loaded at 2, it would have been so at 5, which would contradict the story. Note that this backwards inference of ¬gun_loaded would be qualified by a causal argument for ¬gun_loaded at any time earlier than 5, e.g., if the world knowledge contained the unit argument c6: cau(¬gun_loaded, {pull_trigger(pj)}). This would then support an indirect conflict at time 3 with the forwards persistence of gun_loaded from 2 to 3 and, due to the stronger nature of unit causal over persistence arguments, the backwards inference of ¬gun_loaded would be undercut and so could not belong to an acceptable argument.

Assuming that c6 is absent, the argument ∆2 consisting of these three "persistence" tuples is in conflict on gun_loaded at 2 with the argument ∆1 above. Each argument disputes the other and in fact neither can form an acceptable argument. If we extend ∆2 with the tuple ⟨p1, 3, F; (¬fired_at(pj, turkey), 3)⟩ then this can now undercut and thus defend against ∆1, using the priority of p1 over c1. Therefore the extended ∆2 is acceptable and the conclusion ¬fired_at(pj, turkey) at 3 is drawn, revising the previous conclusions drawn from ∆1. The process of understanding our story may then proceed by extending ∆2 with ⟨per(alive(turkey), ·), T, F; (alive(turkey), T)⟩ for T = 3, 4, 5, resulting in a model that supports alive(turkey) at 5. It is based on this inference that we expect readers to respond that the first turkey is alive at 5.

Continuing with the story, after Papa Joe loads the gun and fires again, we can support by forward inferences that the gun fired, that noise was caused, and that the bird stopped chirping, through a chaining of the unit arguments c1, c3, c4. But OBS(chirp(bird), 10) supports disputes on all these through the repeated backwards use of the same unit arguments grounded on this observation. We thus have an exogenous qualification effect where these conclusions cannot be sceptical and so will not be supported by any comprehension model. But if we also consider the stronger (story-specific) information in p2, that this gun does not fire without a noise, together with the backwards inference of ¬noise, an argument that contains these can undercut the firing of the gun at time 8 and thus defend against disputes that are grounded on pull_trigger at 7 and the gun firing. As a result, we have the effect of blocking the ramification of the causation of noise, and so ¬noise (as well as ¬fired_at(pj, turkey)) are sceptically concluded. Readers indeed respond in this way.

With this latter part of the example story we see how our framework addresses the ramification problem and its non-trivial interaction with the qualification problem (Thielscher 2001). In fact, a generalized form of this problem is addressed where the ramifications are chained not only through causal laws but through any of the forms of inference we have in the framework (causal, property or persistence) and through any of the types of inference (forwards or backwards by contradiction).
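For concreteness, here are the two competing arguments of this walkthrough in the tuple encoding of the earlier sketches, with the time-points chosen above; the delta1/delta2 names are ours, and the query at the end shows the clash that drives the revision.

% Delta1: the gun fired and the turkey died; contrapositively, the
% gun must have been loaded at time-point 2.
delta1([t(c1, 3, fwd, fired_at(pj, turkey), 3),
        t(per(alive(turkey)), 3, fwd, alive(turkey), 3),
        t(c2, 4, fwd, neg(alive(turkey)), 4),
        t(p1, 3, bwd, gun_loaded, 2)]).

% Delta2: backward persistence of the later observation that the gun
% was unloaded, plus the forward use of the stronger unit p1.
delta2([t(per(gun_loaded), 5, bwd, neg(gun_loaded), 4),
        t(per(gun_loaded), 4, bwd, neg(gun_loaded), 3),
        t(per(gun_loaded), 3, bwd, neg(gun_loaded), 2),
        t(p1, 3, fwd, neg(fired_at(pj, turkey)), 3)]).

% ?- delta1(A1), delta2(A2), grounded(A1), grounded(A2),
%    direct_conflict(A2, A1).
% true: the arguments clash on gun_loaded at 2 (and on fired_at at 3);
% the priority of p1 over c1 then lets Delta2 undercut Delta1.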
A comprehension model can be tested, as is often done in psychology, through a series of multiple-choice questions.

Definition 9. Let M be a comprehension model of a story representation SR. A possible answer "X at T" to a question is accepted, respectively rejected, iff "X at T" (respectively "¬X at T") is supported by M. Otherwise, we say that the answer is allowed or possible by M.

In some cases, we may want to extend the notion of a comprehension model to allow some non-sceptical entailments. This is needed to reflect the situation where a reader cannot find a sceptical answer to a question and chooses between two or more allowed answers. This can be captured by allowing each such answer to be supported by a more general notion of acceptability, such as the admissibility criterion of argumentation semantics. For this, we can drop the condition that ∆ is not undercut by any argument and allow weaker defences, through disputes, to defend back against a dispute that is not at the same time an undercut.

Finally, we note that a comprehension model need not be complete, as it does not need to contain all possible sceptical conclusions that can be drawn from the narrative and the entire world knowledge. It is a subset of this, given by the subset of the available world knowledge that readers choose to use. This incompleteness of the comprehension model is required for important cognitive economy and coherence properties of comprehension, as trivially a "full model" is contrary to the notion of coherence.
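Question answering against a computed model then reduces to a simple lookup; a minimal sketch follows, with the model M assumed to be given as a list of supported timed literals of the form Literal-Time.

% answer_status(M, X, T, Status): accepted / rejected / possible,
% following Definition 9.
answer_status(M, X, T, accepted) :- member(X-T, M), !.
answer_status(M, X, T, rejected) :-
    complement(X, NX), member(NX-T, M), !.
answer_status(_, _, _, possible).

% ?- answer_status([neg(fired_at(pj, turkey))-3, alive(turkey)-5],
%                  fired_at(pj, turkey), 3, S).
% S = rejected.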
Computing Comprehension Models

The computational procedure below constructs a comprehension model by iteratively reading a new part of the story SR, retracting existing inferences that are no longer appropriate, and including new inferences that are triggered as a result of the new story part. Each part of the story may include more than one observation, much in the same way that human readers may be asked to read multiple sentences in the story before being asked to answer a question. We shall call each story part of interest a block, and shall assume that it is provided as input to the computational procedure.

At a high level the procedure proceeds as in Algorithm 1. The story is read one block at a time. After each block of SR is read, a directed acyclic graph G[b] is maintained which succinctly encodes all interpretations that are relevant for SR up to its b-th block. Starting from G[b−1], a new tuple is added as a vertex if it is possible to add a directed edge to each ⟨X, T⟩ in the tuple's condition from either an observation OBS(X, T) in the narrative of SR[b], or from a tuple ⟨arg(H, B), T_h, d; (X, T)⟩ already in G[b]. In effect, then, edges correspond to the notion of support from the preceding section, and the graph is the maximal grounded interpretation given the part of the story read.

Algorithm 1: Computing a Comprehension Model
input: story SR, partitioned in a list of k blocks, and a set of questions Q[b] associated with each SR block b.
  Set G[0] to be the empty graph.
  for every b = 1, 2, ..., k do
    Let SR[b] be the restriction of SR up to its b-th block.
    Let G[b] := graph(G[b−1], SR[b]) be the new graph.
    Let Π[b] := retract(∆[b−1], G[b], SR[b]).
    Let ∆[b] := elaborate(Π[b], G[b], SR[b]).
    Answer Q[b] with the comprehension model ∆[b].
  end for

Once graph G[b] is computed, it is used to revise the comprehension model ∆[b−1] so that it takes into account the observations in SR[b]. The revision proceeds in two steps.

In the first step, the tuples in ∆[b−1] are considered in the order in which they were added, and each one is checked to see whether it should remain in the comprehension model. Any tuple in ∆[b−1] that is undercut by the tuples in G[b], or disputed and cannot be defended, is retracted, and is not included in the provisional set Π[b]. As a result of a retraction, any tuple ⟨arg(H, B), T_h, d; (X, T)⟩ ∈ ∆[b−1] such that arg(H, B) no longer activates X at T under Π[b] is also retracted and is not included in Π[b]. This step guarantees that the argument Π[b] is trivially acceptable.

In the second step, the provisional set Π[b], which is itself a comprehension model (but likely a highly incomplete one), is elaborated with new inferences that follow. The elaboration process proceeds as in Algorithm 2. Since the provisional comprehension model Π effectively includes only unit arguments that are "strong" against the attacks from G, it is used to remove (only as part of the local computation of this procedure) any weak arguments from G itself (i.e., arguments that are undercut), and any arguments that depend on the former to activate their inferences. This step, then, ensures that all arguments (subsets of G) that are defended against are no longer part of the revised G, in effect accommodating the minimality condition for attacking sets. It then considers all arguments that activate their inferences in the provisional comprehension model.
The comprehension model is expanded with a new tuple from E if the tuple is not in conflict with the story nor in direct conflict with the current model ∆, and if, when "attacked" by arguments in G, these arguments do not undercut ∆ and ∆ undercuts back. Only arguments coming from the revised graph G are considered, as per the minimality criterion on considered attacks.

The elaboration process adds only "strong" arguments to the comprehension model, retaining its property of being a comprehension model. The discussion above forms the basis for the proof of the following theorem:
Theorem 1. Algorithm 1 runs in time that is polynomial in the size of SR and the number of time-points of interest, and returns a comprehension model of the story.
Algorithm 2: Elaborating a Comprehension Model
input: provisional comprehension model Π, graph G, story SR; all inputs possibly restricted up to some block.
  repeat
    Let G := retract(Π, G, SR).
    Let E include all tuples ⟨arg(H, B), T_h, d; (X, T)⟩ such that arg(H, B) activates X at T under Π.
    Let ∆ := Π.
    Let Π := expand(∆, E, G).
  until ∆ = Π
output: elaborated comprehension model ∆.

Proof sketch. Correctness follows from our earlier discussion. Regarding running time: the number of iterations of the top-level algorithm is at most linear in the relevant parameters. In constructing the graph G[b], each pair of elements (unit arguments or observations at some time-point) in SR[b] is considered once, for a constant number of operations. The same is the case for the retraction process in the subsequent step of the algorithm. Finally, the loop of the elaboration process repeats a number of times at most linear in the relevant parameters, since at least one new tuple is included in Π in every loop. Within each loop, each step considers each pair of elements (unit arguments or observations at some time-point) in SR[b] once, for a constant number of operations. The claim follows. QED

The computational processes presented above have been implemented using Prolog, along with an accompanying high-level language for representing narratives, background knowledge, and multiple-choice questions. Without going into details, the language allows the user to specify a sequence of sessions of the form session(s(B),Qs,Vs), where B is the next story block to read, Qs is the set of questions to be answered afterwards, and Vs is the set of fluents made visible in a comprehension model returned to the user. The narrative itself is represented by a sequence of statements of the form s(B) :: X at T, where B is the block in which the statement belongs (with possibly multiple statements belonging in the same block), X is a fluent or action, and T is the time-point at which it is observed. The background knowledge is represented by clauses of the form p(N) :: A, B, ..., C implies X or c(N) :: A, B, ..., C causes X, where p or c shows a property or causal clause, N is the name of the rule, A, B, ..., C is the rule's body, and X is the rule's head. Negations are represented by prefixing a fluent or action in the body or head with the minus symbol. Variables can be used in the fluents or actions to represent relational rules. Preferences between clauses are represented by statements of the form p(N1) >> c(N2), with the natural reading. Questions are represented by clauses of the form q(N) ?? (X1 at T1, ..., X2 at T2) ; ..., where N is the name of the question, (X1 at T1, ..., X2 at T2) is the first possible answer as a conjunction of fluents or actions that need to hold at their respective time-points, and ; separates the answers. The question is always the same: "Which of the following choices is the case?".

The implemented system demonstrates real modularity and elaboration tolerance, allowing as input any story narrative or background knowledge in the given syntax, always appropriately qualifying the given information to compute a comprehension model. The system is available at http://cognition.ouc.ac.cy/narrative/.
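As an illustration of this input language, here is a small hypothetical domain in the syntax just described; the fluent and rule names are invented for the example and are not part of the evaluation material.

session(s(1), [q(1)], _).

s(1) :: raining at 0.
s(1) :: outside(bob) at 0.

p(1) :: raining, outside(bob) implies wet(bob).
p(2) :: shelter(bob) implies -wet(bob).
c(1) :: open(umbrella) causes shelter(bob).
p(2) >> p(1).

q(1) ?? (wet(bob) at 0) ; (-wet(bob) at 0).

Reading block s(1) and answering q(1), the system would accept the first answer, since p(1) is activated by the two observations and nothing in the story activates the stronger p(2).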
Evaluation through Empirical Studies

In the first part of the evaluation of our approach we carried out a psychological study to ascertain the world knowledge that is activated to successfully comprehend example stories, such as our example story, on the basis of data obtained from human readers. We were interested both in the outcomes of successful comprehension and in the world knowledge that contributed to the human comprehension. We developed a set of inferential questions to follow the reading of pre-specified story segments. These assessed the extent to which readers connected, explained, and elaborated key story elements. Readers were instructed to answer each question and to justify their answers, using a "think-aloud" method of answering questions while reading in order to reveal the world knowledge that they had used.

The qualitative data from the readers was pooled together and analysed as to the frequencies of the types of responses, in conjunction with the information given in justifications and think-aloud protocols. For example, the data indicated that all readers considered Papa Joe to be living on a farm or in a village (q.01, "Where does Papa Joe live?") and that all readers attributed an intention of Papa Joe to hunt (q.06, "What was Papa Joe doing in the forest?"). An interesting example of variability occurred in the answers for the group of questions 07, 08, 10, 11, asking about the status of the turkeys at various stages in the story. The majority of participants followed a comprehension model which was revised between the first turkey being dead and alive. However, a minority of participants consistently answered that both turkeys were alive. These readers had defeated the causal arguments that supported the inference that the first turkey was dead, perhaps based on an expectation that the desire of the protagonist for turkey would be met with complications. We believe that such expectations can be generated from standard story knowledge in the same way as we draw other elaborative inferences from world knowledge (WK).
Evaluation of the System
Using the empirical data discussed above, we tested our framework's ability to capture the majority answers and account for their variability. The parts of our example story representation relevant to questions 01 and 06 are as follows:

s(1) :: night at 0.
s(1) :: xmasEve at 0.
s(1) :: clean(pj,barn) at 0.
s(2) :: xmasDay at 1.
s(2) :: gun(pjGun) at 1.
s(2) :: longWalk(pj) at 1.
s(2) :: animal(turkey1) at 2.
s(2) :: animal(turkey2) at 2.
s(2) :: alive(turkey1) at 2.
s(2) :: alive(turkey2) at 2.
s(2) :: chirp(bird) at 2.
s(2) :: nearby(bird) at 2.
s(2) :: aim(pjGun,turkey1) at 2.
s(2) :: pulltrigger(pjGun) at 2.
The two questions are answered after reading, respectively, the first and second blocks of the story above:

session(s(1),[q(01)],_).
session(s(2),[q(06)],_).

with their corresponding multiple-choice answers being:

q(01) ?? lives(pj,city) at 0 ;
         lives(pj,hotel) at 0 ;
         lives(pj,farm) at 0 ;
         lives(pj,village) at 0.
q(06) ?? motive(in(pj,forest),practiceShoot) at 3 ;
         motive(in(pj,forest),huntFor(food)) at 3 ;
         (motive(in(pj,forest),catch(turkey1)) at 3,
          motive(in(pj,forest),catch(turkey2)) at 3) ;
         motive(in(pj,forest),hearBirdsChirp) at 3.
To answer the first question, the system uses the following background knowledge:

p(11) :: has(home(pj),barn) implies lives(pj,countrySide).
p(12) :: true implies -lives(pj,hotel).
p(13) :: true implies lives(pj,city).
p(14) :: has(home(pj),barn) implies -lives(pj,city).
p(15) :: clean(pj,barn) implies at(pj,barn).
p(16) :: at(pj,home), at(pj,barn) implies has(home(pj),barn).
p(17) :: xmasEve, night implies at(pj,home).
p(18) :: working(pj) implies -at(pj,home).
p(111) :: lives(pj,countrySide) implies lives(pj,village).
p(112) :: lives(pj,countrySide) implies lives(pj,farm).
p(113) :: lives(pj,village) implies -lives(pj,farm).
p(114) :: lives(pj,farm) implies -lives(pj,village).
p(14) >> p(13). p(18) >> p(17).

By the story information, p(17) implies at(pj,home), without being attacked by p(18), since nothing is said in the story about Papa Joe working. Also by the story information, p(15) implies at(pj,barn). Combining the inferences from above, p(16) implies has(home(pj),barn), and p(11) implies lives(pj,countrySide). p(12) immediately dismisses the case of living in a hotel (as people usually do not), whereas p(14) overrides p(13) and dismisses the case of living in the city. Yet, the background knowledge cannot unambiguously derive one of the remaining two answers. In fact, p(111), p(112), p(113), p(114) give arguments for either of the two choices. This is in line with the variability in the empirical data in terms of human answers to the first question.

To answer the second question, the system uses the following background knowledge:

p(21) :: want(pj,foodFor(dinner)) implies motive(in(pj,forest),huntFor(food)).
p(22) :: hunter(pj) implies motive(in(pj,forest),huntFor(food)).
p(23) :: firedat(pjGun,X), animal(X) implies -motive(in(pj,forest),catch(X)).
p(24) :: firedat(pjGun,X), animal(X) implies -motive(in(pj,forest),hearBirdsChirp).
p(25) :: xmasDay implies want(pj,foodFor(dinner)).
p(26) :: longWalk(pj) implies -motive(in(pj,forest),practiceShooting).
p(27) :: xmasDay implies -motive(in(pj,forest),practiceShooting).
By the story information and parts of the background knowledge not shown above, we can derive that Papa Joe is a hunter, and that he has fired at a turkey. From the first inference, p(22) already implies that the motivation is to hunt for food. The same inference can be derived by p(25) and p(21), although for a different reason. At the same time, p(23) and p(24) dismiss the possibility of the motivation being to catch the two turkeys or to hear birds chirp, whereas story information along with either p(26) or p(27) dismisses also the possibility of the motivation being to practice shooting.

The background knowledge above follows evidence from the participant responses in our psychological study that the motives in the answers of the second question can be "derived" from higher-level desires or goals of the actor. Such high-level desires and intentions are examples of generalizations that contribute to the coherence of comprehension, and to the creation of expectations in readers about the course of action that the story might follow in relation to fulfilling desires and achieving intentions of the protagonists.
Related Work
Automated story understanding has been an ongoing field of AI research for the last forty years, starting with the planning and goal-oriented approaches of Schank, Abelson, Dyer and others (Schank and Abelson 1977; Dyer 1983); for a good overview see (Mueller 2002) and the website (Mueller 2013). Logic-related approaches have largely been concerned with the development of appropriate representations, translations or annotations of narratives, with the implicit or explicit assumption that standard deduction or logical reasoning techniques can subsequently be applied to these. For example, the work of Mueller (Mueller 2003), which in terms of story representation is most closely related to our approach, equates various modes of story understanding with the solving of satisfiability problems. (Niehaus and Young 2009) models understanding as partial order planning, and is also of interest here because of a methodology that includes a controlled comparison with human readers.

To our knowledge there has been very little work relating story comprehension with computational argumentation, an exception being (Bex and Verheij 2013), in which a case is made for combining narrative and argumentation techniques in the context of legal reasoning, and with which our argumentation framework shares important similarities. Argumentation for reasoning about actions and change, on which our formal framework builds, has been studied in (Vo and Foo 2005; Michael and Kakas 2009).

Many other authors have emphasized the importance of commonsense knowledge and reasoning in story comprehension (Silva and Montgomery 1977; Dahlgren, McDowell, and Stabler 1989; Riloff 1999; Mueller 2004; Mueller 2009; Verheij 2009; Elson and McKeown 2009; Michael 2010), and indeed how it can offer a basis for story comprehension tasks beyond question answering (Michael 2013b).
Conclusions and Future Work
We have set up a conceptual framework for story comprehension by fusing together knowhow from the psychology of text comprehension with established AI techniques and theory in the areas of Reasoning about Actions and Change and Argumentation. We have developed a proof-of-concept automated system to evaluate the applicability of our framework through a similar empirical process of evaluating human readers. We are currently carrying out psychological experiments with other stories to harness world knowledge and test our system against the human readers.

There are still several problems that we need to address to complete a fully automated approach to SC, over and above the problem of extracting through Natural Language Processing techniques the narrative from the free-format text. Two major such problems for our immediate future work are (a) to address further the computational aspects of the challenges of cognitive economy and coherence and (b) the systematic extraction or acquisition of common sense world knowledge. For the first of these we will investigate how this can be addressed by applying "computational heuristics" on top of (and without the need to reexamine) the solid semantic framework that we have developed thus far, drawing again from psychology to formulate such heuristics. In particular, we expect that the psychological studies will guide us in modularly introducing computational operators such as selection, dropping and generalization operators so that we can improve the coherence of the computed models.

For the problem of the systematic acquisition of world knowledge we aim to source this (semi-)automatically from the Web. For this we could build on lexical databases such as WordNet (Miller 1995), FrameNet (Baker, Fillmore, and Lowe 1998), and PropBank (Palmer, Gildea, and Kingsbury 2005), exploring the possibility of populating the world knowledge theories using archives of common sense knowledge (e.g., Cyc (Lenat 1995)) or through the automated extraction of commonsense knowledge from text using natural language processing (Michael and Valiant 2008), and appealing to textual entailment for the semantics of the extracted knowledge (Michael 2009; Michael 2013a).

We envisage that the strong inter-disciplinary nature of our work can provide a concrete and important test bed for evaluating the development of NMR frameworks in AI, while at the same time offering valuable feedback for Psychology.
References

[Baker, Fillmore, and Lowe 1998] Baker, C. F.; Fillmore, C. J.; and Lowe, J. B. 1998. The Berkeley FrameNet Project. In Proc. of 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, 86–90.
[Bex and Verheij 2013] Bex, F., and Verheij, B. 2013. Legal Stories and the Process of Proof. Artificial Intelligence and Law.
[Brewer and Lichtenstein 1982] Brewer, W. F., and Lichtenstein, E. H. 1982. Stories Are to Entertain: A Structural-Affect Theory of Stories. Journal of Pragmatics 6(5–6):473–486.
[Dahlgren, McDowell, and Stabler 1989] Dahlgren, K.; McDowell, J.; and Stabler, E. P. 1989. Knowledge Representation for Commonsense Reasoning with Text. Computational Linguistics.
[Dung 1995] Dung, P. M. 1995. On the Acceptability of Arguments and its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and n-Person Games. Artificial Intelligence 77(2):321–357.
[Dyer 1983] Dyer, M. G. 1983. In-Depth Understanding: A Computer Model of Integrated Processing for Narrative Comprehension. MIT Press, Cambridge, MA.
[Elson and McKeown 2009] Elson, D., and McKeown, K. 2009. Extending and Evaluating a Platform for Story Understanding. In Proc. of AAAI Symposium on Intelligent Narrative Technologies II.
[Johnson-Laird and Yang 2008] Johnson-Laird, P. N., and Yang, Y. 2008. Mental Logic, Mental Models, and Simulations of Human Deductive Reasoning. In Sun, R., ed., The Cambridge Handbook of Computational Psychology, 339–358.
[Kakas and Mancarella 2013] Kakas, A., and Mancarella, P. 2013. On the Semantics of Abstract Argumentation. Journal of Logic and Computation.
[Kakas, Toni, and Mancarella 2013] Kakas, A.; Toni, F.; and Mancarella, P. 2013. Argumentation Logic. In Proc. of 11th International Symposium on Logical Formalizations of Commonsense Reasoning.
[Kintsch 1988] Kintsch, W. 1988. The Role of Knowledge in Discourse Comprehension: A Construction-Integration Model. Psychological Review 95(2):163–182.
[Kintsch 1998] Kintsch, W. 1998. Comprehension: A Paradigm for Cognition. New York, NY: Cambridge University Press.
[Lenat 1995] Lenat, D. B. 1995. CYC: A Large-Scale Investment in Knowledge Infrastructure. Communications of the ACM 38(11):33–38.
[Levesque, Davis, and Morgenstern 2012] Levesque, H. J.; Davis, E.; and Morgenstern, L. 2012. The Winograd Schema Challenge. In Proc. of 13th International Conference on Principles of Knowledge Representation and Reasoning, 552–561.
[McNamara and Magliano 2009] McNamara, D. S., and Magliano, J. 2009. Toward a Comprehensive Model of Comprehension. The Psychology of Learning and Motivation 51:297–384.
[Mercier and Sperber 2011] Mercier, H., and Sperber, D. 2011. Why Do Humans Reason? Arguments for an Argumentative Theory. Behavioral and Brain Sciences 34(2):57–74.
[Michael and Kakas 2009] Michael, L., and Kakas, A. 2009. Knowledge Qualification through Argumentation. In Proc. of 10th International Conference on Logic Programming and Nonmonotonic Reasoning, 209–222.
[Michael and Valiant 2008] Michael, L., and Valiant, L. G. 2008. A First Experimental Demonstration of Massive Knowledge Infusion. In Proc. of 11th International Conference on Principles of Knowledge Representation and Reasoning, 378–389.
[Michael 2009] Michael, L. 2009. Reading Between the Lines. In Proc. of 21st International Joint Conference on Artificial Intelligence, 1525–1530.
[Michael 2010] Michael, L. 2010. Computability of Narrative. In Proc. of AAAI Symposium on Computational Models of Narrative.
[Michael 2013a] Michael, L. 2013a. Machines with Websense. In Proc. of 11th International Symposium on Logical Formalizations of Commonsense Reasoning.
[Michael 2013b] Michael, L. 2013b. Story Understanding... Calculemus! In Proc. of 11th International Symposium on Logical Formalizations of Commonsense Reasoning.
[Miller 1995] Miller, G. A. 1995. WordNet: A Lexical Database for English. Communications of the ACM 38(11):39–41.
[Modgil and Prakken 2012] Modgil, S., and Prakken, H. 2012. A General Account of Argumentation with Preferences. Artificial Intelligence.
[Mueller 2002] Mueller, E. 2002. Story Understanding. In Encyclopedia of Cognitive Science, volume 4, 238–246. London: Macmillan Reference.
[Mueller 2003] Mueller, E. 2003. Story Understanding through Multi-Representation Model Construction. In Hirst, G., and Nirenburg, S., eds., Proc. of the HLT-NAACL 2003 Workshop on Text Meaning, 46–53.
[Mueller 2004] Mueller, E. 2004. Understanding Script-Based Stories Using Commonsense Reasoning. Cognitive Systems Research 5(4):307–340.
[Mueller 2009] Mueller, E. 2009. In Proc. of Workshop on Advancing Computational Models of Narrative.
[Mueller 2013] Mueller, E. 2013. Story Understanding Resources. http://xenia.media.mit.edu/~mueller/storyund/storyres.html. Accessed February 28, 2013.
[Niehaus and Young 2009] Niehaus, J., and Young, R. M. 2009. A Computational Model of Inferencing in Narrative. In Proc. of AAAI Symposium on Intelligent Narrative Technologies II.
[Palmer, Gildea, and Kingsbury 2005] Palmer, M.; Gildea, D.; and Kingsbury, P. 2005. The Proposition Bank: An Annotated Corpus of Semantic Roles. Computational Linguistics 31(1):71–106.
[Rapp and van den Broek 2005] Rapp, D. N., and van den Broek, P. 2005. Dynamic Text Comprehension: An Integrative View of Reading. Current Directions in Psychological Science 14(5):276–279.
[Riloff 1999] Riloff, E. 1999. Information Extraction as a Stepping Stone toward Story Understanding. In Ram, A., and Moorman, K., eds., Understanding Language Understanding: Computational Models of Reading, 435–460. The MIT Press.
[Rips 1994] Rips, L. 1994. The Psychology of Proof. MIT Press.
[Schank and Abelson 1977] Schank, R. C., and Abelson, R. P. 1977. Scripts, Plans, Goals, and Understanding: An Inquiry into Human Knowledge Structures. Lawrence Erlbaum, Hillsdale, NJ.
[Silva and Montgomery 1977] Silva, G., and Montgomery, C. A. 1977. Knowledge Representation for Automated Understanding of Natural Language Discourse. Computers and the Humanities.
[Thielscher 2001] Thielscher, M. 2001. The Qualification Problem: A Solution to the Problem of Anomalous Models. Artificial Intelligence 131(1–2):1–37.
[van Harmelen, Lifschitz, and Porter 2008] van Harmelen, F.; Lifschitz, V.; and Porter, B., eds. 2008. Handbook of Knowledge Representation. Elsevier Science.
[Verheij 2009] Verheij, B. 2009. Argumentation Schemes, Stories and Legal Evidence. In Proc. of Workshop on Advancing Computational Models of Narrative.
[Vo and Foo 2005] Vo, Q. B., and Foo, N. Y. 2005. Reasoning about Action: An Argumentation-Theoretic Approach. Journal of Artificial Intelligence Research.
[Zwaan 1994] Zwaan, R. A. 1994. Effect of Genre Expectations on Text Comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition 20(4):920–933.