A lemma on closures and its application to modularity in logic programming semantics
Michael J. Maher
Reasoning Research Institute ∗ Canberra, Australia
Email: [email protected]
Written 1991, released 2020
Abstract
This note points out a lemma on closures of monotonic increasing functions and shows how it is applicable to decomposition and modularity for semantics defined as the least fixedpoint of some monotonic function. In particular it applies to numerous semantics of logic programs. An appendix addresses the fixedpoints of (possibly non-monotonic) functions that are sandwiched between functions with the same fixedpoints.
Note:
This is a cleaned up version of a draft, probably begun in 1990, and last revised in 1991 (before the cleaning-up). It has been cleaned up by: completing references (some were incomplete or the publication had not yet appeared), deleting notes to self, and adding a little structure (including section headings). The note is lacking introduction and motivational text, and a more detailed discussion of related work. The appendix dates from some later time.
∗ This is a technical report of the Reasoning Research Institute.

We assume a fixed domain of computation. It can be any constraint domain, but if it is a domain of finite (or rational or infinite or ...) trees then the set of function symbols is fixed in advance, and is independent of the program(s).

A module P is a pair ⟨R, S⟩ where R is a set of rules and S is a set of ground atoms, the set of ground atoms whose truth value R defines. To avoid some difficulties with arbitrary use of this definition, in this paper we assume that S is characterized by a set of predicate symbols (that is, if a predicate symbol p appears in S then every ground atom with predicate symbol p appears in S), and that all the predicate symbols of the heads of rules of R also occur in S. A program is a module such that every predicate which occurs in R also occurs in S. In what follows we will generally use P also to refer to the set of rules R, and def(P) to refer to S.

We write P >> Q if no predicate of Q depends on a predicate of P. That is, every predicate which appears in the body of a rule of Q does not appear in def(P). This includes the possibility that Q contains only unit rules. We can view this as saying that the module P might call the module Q, but never vice versa.

One simple example of P >> Q occurs when we wish to extend a statement about atoms which is directly expressible in terms of, say, lfp(f_Q) to a statement about goals. One technique that often works is to consider the program P ∪ Q where P is {Answer(x̃) ← Goal} and x̃ = vars(Goal). For example, {Goalθ} ⊆ lfp(f_Q) iff Answer(x̃)θ ∈ lfp(f_{P∪Q}). It is clear that P >> Q in this case.
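As an illustration, the module conditions above can be checked mechanically. The following sketch is not part of the original note: the representation of a rule by its head predicate and list of body predicates, and the function names, are illustrative assumptions.

```python
# A module is modelled as (rules, defs): rules is a list of
# (head_predicate, list_of_body_predicates) pairs, and defs is the set of
# predicate symbols characterizing def(P).

def well_formed(module):
    """All predicate symbols of rule heads must occur in def(P)."""
    rules, defs = module
    return all(head in defs for head, _ in rules)

def calls_but_not_called(p, q):
    """P >> Q: no predicate in the body of a rule of Q appears in def(P)."""
    _, p_defs = p
    q_rules, _ = q
    body_preds = {b for _, body in q_rules for b in body}
    return body_preds.isdisjoint(p_defs)

# P defines 'answer' by calling 'goal'; Q defines 'goal' by a unit rule.
P = ([("answer", ["goal"])], {"answer"})
Q = ([("goal", [])], {"goal"})

assert well_formed(P) and well_formed(Q)
assert calls_but_not_called(P, Q)      # P may call Q
assert not calls_but_not_called(Q, P)  # but not vice versa
```

Here P and Q mirror the Answer(x̃) ← Goal example: P >> Q holds because Q's rule bodies mention no predicate defined by P.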
(In [24], the notation >> is used for the same idea, but based on sets of ground atoms, rather than sets of predicates.)

A complete partial order is a partially ordered set (S, ≤) with a least element where the least upper bound of a chain of elements always exists. That is, for any X₁ ≤ · · · ≤ X_j ≤ · · ·, ⊔_i X_i is defined. Every complete lattice is a complete partial order.

Let f and g be functions on a complete partial order. We define (f + g)(X) = f(X) ⊔ g(X); f⁺(X) = f(X) ⊔ X; f^α denotes f applied α times in the usual way (α may be transfinite); and f ≤ g iff for every X, f(X) ≤ g(X). f is monotonic if X ≤ Y implies f(X) ≤ f(Y), for every X and Y. f is increasing if f(X) ≥ X for every X. Thus, for any function f, f⁺ is the smallest increasing function which is greater than f. X is a fixedpoint of f if f(X) = X. Every fixedpoint of f is also a fixedpoint of f⁺.

f* is the operation of closing under f, that is, f*(X) = ⊔_{β≤α} f^β(X) for some, possibly transfinite, ordinal α. We stipulate that f⁰(X) = X. If f is monotonic and X ≤ f(X) (in particular, if f is increasing) then f*(X) is the least fixedpoint of f greater than X, which we also denote by lfp(f, X). We also have f*(X) = lfp(f⁺, X). Clearly, f ≤ f⁺ ≤ f*. Furthermore, f* = f ∘ f*, f* = f* + f* = f* ∘ f*, and f* = (f*)*.

All expressions involving +, ∘, * and monotonic, increasing functions also represent monotonic, increasing functions. Also, if e₁ ≤ e₂ then both e₁ ∘ e₃ ≤ e₂ ∘ e₃ and e₃ ∘ e₁ ≤ e₃ ∘ e₂, for any expressions e₁, e₂ and e₃.

The operator * is monotonic on monotonic functions. That is, f ≤ g implies f* ≤ g*. Proof is by transfinite induction. Let X be an element of the domain. f⁰(X) = X = g⁰(X). For successor ordinals β + 1, f^{β+1}(X) = f(f^β(X)) ≤ f(g^β(X)) ≤ g(g^β(X)) = g^{β+1}(X), using monotonicity of f and f ≤ g. For limit ordinals α, f^α(X) = ⊔_{β<α} f^β(X) ≤ ⊔_{β<α} g^β(X) = g^α(X). Thus * is a closure operator on monotonic functions.

Lemma 1
Let f and g be monotonic, increasing functions on a complete partial order ordered by ≥.
1. (f + g)* = (f ∘ g)* = (g ∘ f)*.
2. If f* ∘ g ≥ g ∘ f* then (f + g)* = (f ∘ g)* = f* ∘ g*.
3. If g is continuous and f ∘ g ≥ g ∘ f then (f + g)* = (f ∘ g)* = f* ∘ g*.
4. If g is continuous and f ∘ g* ≥ g* ∘ f then (f + g)* = (f ∘ g)* = f* ∘ g*.

Proof:
Part 1. (f + g) ≤ (f ∘ g), so (f + g)* ≤ (f ∘ g)*. (f ∘ g) ≤ (f + g)², so (f ∘ g)* ≤ (f + g)*. Thus (f + g)* = (f ∘ g)*. By symmetry, we also have (f + g)* = (g ∘ f)*.

Part 2. (f + g)* = (f + g)* ∘ (f + g)* ≥ f* ∘ g*, using (f + g) ≥ f and (f + g) ≥ g. This inequality holds without the need for the hypothesis. For the other inequality, observe that g ∘ f* ∘ g* ≤ f* ∘ g ∘ g* = f* ∘ g*. Hence
(f + g) ∘ f* ∘ g* = f ∘ f* ∘ g* + g ∘ f* ∘ g* = f* ∘ g* + g ∘ f* ∘ g* = f* ∘ g*
using the above observation. Since f* ∘ g*(X) is closed under (f + g), we must have (f + g)* ≤ f* ∘ g*.

Part 3. We show that the hypotheses imply the hypothesis of part 2. We claim g ∘ f^β ≤ f^β ∘ g for every β. The proof is by transfinite induction. For β = 0, it reduces to g ≤ g. For a successor ordinal β + 1, we have g ∘ f^{β+1} = g ∘ f ∘ f^β ≤ f ∘ g ∘ f^β ≤ f ∘ f^β ∘ g = f^{β+1} ∘ g, using the second hypothesis of part 3. For a limit ordinal α,
g ∘ f^α = g ∘ (⊔_{β<α} f^β) = (⊔_{β<α} g ∘ f^β) ≤ (⊔_{β<α} f^β ∘ g) = (⊔_{β<α} f^β) ∘ g = f^α ∘ g.
In this derivation we use both hypotheses of part 3. Since f* is f^α, for some ordinal α, g ∘ f* ≤ f* ∘ g. Now, by part 2, (f + g)* = f* ∘ g*.

Part 4. We can apply part 3 with g* in place of g, provided we can prove that (f + g)* = (f + g*)*, and that g* is continuous. Now, (f + g)* = ((f + g)*)* = ((f + g)* + (f + g)*)* ≥ (f + g*)* ≥ (f + g)*, using monotonicity and straightforward inequalities. Hence, (f + g)* = (f + g*)*.

Let {X_i} be an increasing sequence of elements of the complete partial order. We prove g^α(⊔_i X_i) = ⊔_i g^α(X_i), for every ordinal α, by transfinite induction. For α = 1, g(⊔_i X_i) = ⊔_i g(X_i), since g is continuous. For a successor ordinal β + 1 we have g^{β+1}(⊔_i X_i) = g ∘ g^β(⊔_i X_i) = g(⊔_i g^β(X_i)) = ⊔_i g^{β+1}(X_i).
For a limit ordinal α we have g^α(⊔_i X_i) = ⊔_{β<α} g^β(⊔_i X_i) = ⊔_{β<α} ⊔_i g^β(X_i) = ⊔_i ⊔_{β<α} g^β(X_i) = ⊔_i g^α(X_i). Since g* = g^α, for some α, g* is continuous. ✷

The usefulness of this lemma comes from the following observations:
• if f is continuous then so is f⁺; if f is monotonic then so is f⁺;
• if f is monotonic and lfp(f, X) exists then f*(X) = lfp(f, X) = lfp(f⁺, X) = (f⁺)*(X);
• in the context of logic programs, there are many semantics defined as the least fixedpoint of a monotonic function f_P, dependent on a program P. In the context of modules, the semantics becomes the closure, under f_P, of the semantics of the modules on which P depends. Furthermore, we often have
– f_{P∪Q} = f_P + f_Q, or the weaker f_P + f_Q ≤ f_{P∪Q} ≤ f_P ∘ f_Q (perhaps provided that P >> Q). It is straightforward to show that (f + g)* = (f ∘ g)* = (g ∘ f)* when f and g are monotonic and increasing. Thus in these cases we have f*_{P∪Q} = (f_P + f_Q)*.
– if P >> Q then f*_P ∘ f_Q ≥ f_Q ∘ f*_P and/or f_P ∘ f_Q ≥ f_Q ∘ f_P. In fact, we often have f_P + f_Q = f_Q ∘ f_P, which can be easy to show. If P >> Q does not hold then generally these properties do not hold, although they do if g = c⁺ for some constant function c and in some other cases (see, for example, [20, 21]).

Applying the lemma in these cases, we have that (f_{P∪Q})* = f*_P ∘ f*_Q, so the structure of the closure semantics reflects the modular structure of the program P ∪ Q. For example, using T_P [8] for definite programs P, T*_P is [[P]] [20] and we have: if P >> Q then [[P ∪ Q]] = [[P]] ∘ [[Q]]. (We take f = T⁺_P = T_P + Id and g = T⁺_Q = T_Q + Id, where Id is the identity function.) In terms of fixedpoints, we get: if P >> Q then lfp(T_{P∪Q}, X) = lfp(T_P, lfp(T_Q, X)), provided lfp(T_{P∪Q}, X) exists.
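For the definite-program case, the fixedpoint decomposition can be observed concretely on a small propositional example. The following sketch is illustrative only (the encoding of programs and the helper names are assumptions, not from the note); it computes lfp(T_P, X) as the closure of T_P⁺ starting from X and checks lfp(T_{P∪Q}, ∅) = lfp(T_P, lfp(T_Q, ∅)) for a pair of modules with P >> Q.

```python
# A rule is (head_atom, frozenset_of_body_atoms); a program is a list of rules.

def t(program):
    """The immediate consequence operator T_P of a propositional program."""
    def tp(interp):
        return {head for head, body in program if body <= interp}
    return tp

def lfp_from(f, start):
    """Closure of f⁺ from `start`: the least fixedpoint of f above `start`
    (the domain is finite, so plain iteration reaches it)."""
    x = set(start)
    while True:
        nxt = f(x) | x          # f⁺(x) = f(x) ⊔ x
        if nxt == x:
            return x
        x = nxt

Q = [("q", frozenset()), ("r", frozenset({"q"}))]   # Q defines q and r
P = [("p", frozenset({"q", "r"}))]                  # P calls Q, so P >> Q

inner = lfp_from(t(Q), set())          # lfp(T_Q, ∅) = {q, r}
nested = lfp_from(t(P), inner)         # lfp(T_P, lfp(T_Q, ∅))
combined = lfp_from(t(P + Q), set())   # lfp(T_{P∪Q}, ∅)
assert nested == combined == {"p", "q", "r"}
```

The closure computed by `lfp_from` plays the role of T*_P: starting it from the inner module's semantics reproduces the semantics of the combined program.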
Just as quickly we get similar results for semantics based on sets of atoms possibly containing variables [10], and sets of clauses (possibly enhanced) [4, 5, 3], even when the function encodes a left-to-right selection rule. Equally, the technique applies to constraint logic programs [16], programs involving universal quantification [25], and weighted programs such as [28, 29]. The main points of this discussion are summarized in the following proposition.

Proposition 2
Let P and Q be programs, and f a semantic function over a completepartial order ordered by ≥ . If1. f P and f Q are monotonic,2. f + P + f + Q ≤ f + P ∪ Q ≤ f + P ◦ f + Q
3. Either • f Q is continuous, and either f + Q ◦ f + P ≤ f + P ◦ f + Q or f ∗ Q ◦ f + P ≤ f + P ◦ f ∗ Q , or • f + Q ◦ f ∗ P ≤ f ∗ P ◦ f + Q then f ∗ P ∪ Q = f ∗ P ◦ f ∗ Q .If, further,4. f P ∪ Q (or f P + f Q , or f P ◦ f Q ) has a fixedpoint greater than (or equal to) X then lf p ( f P ∪ Q , X ) = lf p ( f P , lf p ( f Q , X )) . Proof:
As noted earlier, the first two hypotheses imply that f*_{P∪Q} = (f_P + f_Q)*. By the previous lemma, (f_P + f_Q)* = f*_P ∘ f*_Q. Thus f*_{P∪Q} = f*_P ∘ f*_Q, that is, lfp(f⁺_{P∪Q}, X) = lfp(f⁺_P, lfp(f⁺_Q, X)), for every X. It is known that if a monotonic function has a fixedpoint greater than (or equal to) X, then it has a least such fixedpoint. It is straightforward to show that lfp(f_P + f_Q, X) = lfp(f_{P∪Q}, X) = lfp(f_P ∘ f_Q, X) if one of these exists, in which case they all do. Since lfp(f_{P∪Q}, X) exists, so also must lfp(f_P + f_Q, X), lfp(f_Q, X) and lfp(f_P, lfp(f_Q, X)). As observed above, since the fixedpoints exist, they are equal to the fixedpoints of the corresponding increasing functions, which completes the proof. ✷

Note that we can weaken the second hypothesis by replacing f⁺_P ∘ f⁺_Q by any expression involving f⁺_P, f⁺_Q, + and ∘, and the proposition will still hold.

The requirements of hypothesis 3 can often be tested syntactically. For example, for definite logic programs and the function T_P, the condition f⁺_Q ∘ f⁺_P ≤ f⁺_P ∘ f⁺_Q can be tested using unfolding and subsumption [21, 22]. In the case of Datalog programs, again using T_P, more tests are possible. For example, sometimes it is possible to test f⁺_Q ∘ f⁺_P ≤ f⁺_P ∘ f*_Q [27].

If we are simply interested in the combinations of functions (even if f_P and f_Q are not associated with programs), and not interested in f_{P∪Q}, then we have the following corollary of the above proof.

Corollary 3 Under hypotheses 1 and 3 of the above proposition, (f_P + f_Q)* = (f_P ∘ f_Q)* = f*_P ∘ f*_Q. Under hypotheses 1, 3 and 4 of the above proposition, lfp(f_P + f_Q, X) = lfp(f_P ∘ f_Q, X) = lfp(f_P, lfp(f_Q, X)).

If we have multiple modules, we might want to look at semantics in terms of common fixedpoints and/or chaotic iterations [7, 20, 21].
For increasing functions f and g, the common fixedpoints of f and g are exactly the fixedpoints of f ∘ g (or g ∘ f, or f + g, or any other composition of f's and g's). Formulating semantics in terms of f⁺_P makes the relationship with common fixedpoints and chaotic iterations easier.

There are also the dual results to the above, involving decreasing functions, downwards closures, greatest fixedpoints, etc., but these seem less useful since the dual of function addition occurs less often in practice, at least in the context we consider here. At the very least we have: if f and g are monotonic decreasing functions and f• ∘ g ≤ g ∘ f• then (f ∘ g)• = (g ∘ f)• = f• ∘ g•. Here f• denotes the downward closure of f. But, in general, (f + g)• ≠ f• ∘ g• when these conditions apply. For example, let C be the lattice of subsets of {a, b, c}. Define f(x) = {a, b} if x = {a, b, c} and ∅ otherwise, and define g(x) = {b, c} if x = {a, b, c} and ∅ otherwise. Clearly f and g are monotonic and decreasing and satisfy f• ∘ g ≤ g ∘ f•. However (f + g)•({a, b, c}) = {a, b, c} ≠ ∅ = (f• ∘ g•)({a, b, c}).

If we can express f_{P∪Q} as a functional expression of f_P and f_Q involving only function composition (for example, f_{P∪Q} = f_P ∘ f_Q) or, more generally, bound f_{P∪Q} between two such expressions, then we do get something.

Suppose the elements of the complete partial order can be viewed as (possibly infinite) programs that are their own semantics. That is, suppose that there is a mapping m which maps every X in the complete partial order to a program P_X such that lfp(f_{P_X}) = X. Such a mapping represents an evaluation of the program. In the following proposition we make the extra assumption that f_{P_X} is a constant function, that is, ∀Y f_{P_X}(Y) = X.

Proposition 4
Let P and Q be programs, and f a semantic function on a complete partial order ordered by ≥. If
1. for all P′, f_{P′} is monotonic,
2. for all programs P′ and Q′, f⁺_{P′} + f⁺_{Q′} ≤ f⁺_{P′∪Q′} ≤ f⁺_{P′} ∘ f⁺_{Q′},
3. either
• f_Q is continuous, and f⁺_Q ∘ f⁺_P ≤ f⁺_P ∘ f⁺_Q, or
• f⁺_Q ∘ f*_P ≤ f*_P ∘ f⁺_Q, and
4. f_{P∪Q} (or f_P + f_Q, or f_P ∘ f_Q) has a fixedpoint greater than (or equal to) X
then lfp(f_{P∪Q}, X) = lfp(f_{P∪Q_X}, X) = lfp(f_{P∪Q_X}), where Q_X is the program corresponding to lfp(f_Q, X), and f_{Q_X} is the constant function f_{Q_X}(Y) = lfp(f_Q, X).
(The dual of function addition is (f + g)(X) = f(X) ⊓ g(X).)

Proof:
We apply the previous proposition and reason
lfp(f_{P∪Q}, X) = lfp(f_P, lfp(f_Q, X)) = lfp(f_P, lfp(f_{Q_X}, X)) = lfp(f_{P∪Q_X}, X).
The last step is a second application of the previous proposition (involving P and Q_X) and needs some argument. Since f_{Q_X} is a constant, the third hypothesis of the proposition is satisfied. Let Z = lfp(f_P, lfp(f_{Q_X}, X)). Clearly f_P(Z) = Z, and f_{Q_X}(Z) ≤ Z since f_{Q_X}(Z) = lfp(f_Q, X) ≤ Z, so that f_P + f_{Q_X} has a fixedpoint Z greater than X. The last equality in the statement of the proposition holds since it is clear that any fixedpoint of f_{P∪Q_X} is greater than X. ✷

This proposition justifies a simple form of partial evaluation in which Q is partially evaluated wrt X, and the result is added to the program in the form of Q_X. Taking the example of definite logic programs and T_P, we can take m to be the identity function, so that Q_X = lfp(T_Q, X) and lfp(T_{P∪Q}, X) = lfp(T_{P∪Q_X}, X) = lfp(T_{P∪Q_X}).

In many cases of interest we can weaken the condition that f_{Q_X} be a constant function. If, instead, for every P and Q, P >> Q implies hypothesis 3 is satisfied, and we further assume that P >> Q and P >> m(X) implies P >> Q_X, then the conclusion of the above proposition holds, even if f_{Q_X} is not a constant.

The semantics of programs with negation generally make the implicit assumption that any predicate not defined in the program has empty extension (i.e., is false). To handle modules, these semantics must be modified slightly, by taking def(P) into account, so that a predicate intended to be defined in another module is not automatically given an empty extension in the semantics of P. Such a modification generally does not affect such properties as monotonicity and continuity of the function involved. Roughly speaking, we will be replacing a function f_P by f′_P where
f′_P(I) = f_P(I)|_{def(P)} ⊔ I|_{def(P)^c}
where def(P)^c denotes the complement of def(P), and X|_Y means something like X ∩ Y. That is, the effects of the application of f_P are restricted to def(P).

We examine Fitting's semantics [11] first. A partial interpretation I over the domain of computation is represented by the consistent set of ground literals which are consequences of I. The function Φ_P is modified to handle modules as follows.
Φ_P(I) = {A | A ← L₁, . . . , L_k ∈ gd(P), A ∈ def(P), I ⊨ L_i, i = 1, . . . , k}
∪ {¬A | for every A ← L₁, . . . , L_k ∈ gd(P), A ∈ def(P), I ⊨ ¬(L₁ ∧ . . . ∧ L_k)}.
It is straightforward to see that, if P >> Q, Φ_{P∪Q} = Φ_P + Φ_Q and Φ⁺_Q ∘ Φ⁺_P = Φ⁺_Q + Φ⁺_P. Here m_F(I) = {A : I ⊨ A} ∪ {A ← A : I ⊭ A, I ⊭ ¬A} and we have: if P >> Q then lfp(Φ_{P∪Q}, X) = lfp(Φ_P, lfp(Φ_Q, X)) = lfp(Φ_{P∪Q′}, X), where Q′ = m_F(lfp(Φ_Q)).

We now turn our attention to the well-founded semantics [13]. We define ¬S = {¬s | s ∈ S} and identify ¬¬s with s. Let T_P(I) = {A | A ← L₁, . . . , L_k ∈ gd(P), A ∈ def(P), I ⊨ L_i, i = 1, . . . , k}. A P, I-unfounded set is a set U ⊆ def(P) of atoms A such that for every rule A ← L₁, . . . , L_k in gd(P) there is some i such that either I ⊨ ¬L_i or L_i ∈ U. Let U_P(I) denote the greatest P, I-unfounded set and let W_P(I) = T_P(I) ∪ ¬U_P(I). Assume J is a partial interpretation defining only predicate symbols not in def(P). The least (under the definedness ordering) fixedpoint of W_P which is greater than J is a partial model of P, called the well-founded partial model of P extending J, denoted WF(P, J). For J = ∅ we write WF(P). If def(P) is the "Herbrand base" then this definition reduces to the usual definition of the well-founded partial model [13].

If P is {p ← p; p ← q} and Q is {q ← q}, where def(P) = {p} and def(Q) = {q}, then P >> Q. We have W_P(∅) = ∅ and W_Q(∅) = {¬q}. On the other hand, W_{P∪Q}(∅) = {¬p, ¬q} and thus W⁺_{P∪Q} ≠ W⁺_P + W⁺_Q. Nevertheless we are still able to satisfy hypothesis 2 of Proposition 4, as we now show.

It can be verified that U_P(I ∪ J) ⊇ U_P(I) whenever (J ∪ ¬J) ∩ def(P) = ∅. This is used in the penultimate step below. Clearly W⁺_P(I) ∪ W⁺_Q(I) ≤ W⁺_{P∪Q}(I). If P >> Q then
W⁺_{P∪Q}(I) = T_{P∪Q}(I) ∪ I ∪ ¬U_{P∪Q}(I)
= T_P(I) ∪ T_Q(I) ∪ I ∪ ¬U_Q(I) ∪ ¬U_P(I ∪ ¬U_Q(I))
= T_P(I) ∪ W⁺_Q(I) ∪ ¬U_P(I ∪ ¬U_Q(I))
≤ T_P(W⁺_Q(I)) ∪ W⁺_Q(I) ∪ ¬U_P(W⁺_Q(I))
= W⁺_P(W⁺_Q(I)).
Also in this case, we have that
(W⁺_Q ∘ W⁺_P)(I) = T_Q(W⁺_P(I)) ∪ W⁺_P(I) ∪ ¬U_Q(W⁺_P(I))
= T_Q(I) ∪ W⁺_P(I) ∪ ¬U_Q(I)
= (W⁺_Q + W⁺_P)(I)
≤ (W⁺_P ∘ W⁺_Q)(I).
Since W_P is known to be monotonic [13], applying the above proposition gives us: if P >> Q then
WF(P ∪ Q, X) = WF(P, WF(Q, X)) = WF(P ∪ Q′, X)
where m_{WF}(I) = {A : A ∈ I} ∪ {A ← A : ¬A ∈ I} ∪ {A ← ¬A : A ∉ I, ¬A ∉ I}, and Q′ = m_{WF}(WF(Q)). In particular, if P depends only on Q and Q is a program (i.e. only depends on itself) then WF(P ∪ Q) = WF(P ∪ m_{WF}(WF(Q))).

Closure operators are the natural semantics for modules when semantics is defined by least fixedpoints. Working with monotonic, increasing functions is more convenient than simply monotonic functions. Passing from f to f⁺ makes reasoning easier. Of course, this technique is dependent on an appropriate least fixedpoint characterization of the semantics. Thus it is not directly applicable to the Clark-completion semantics [6], Kunen's semantics [18], the stable model [14] and stable class semantics [1]. But perhaps Fages' semantics..... [9]

Acknowledgement
This work was conducted while the author was an employee of IBM.
References

[1] C. Baral & V.S. Subrahmanian, Stable and Extension Class Theory for Logic Programs and Default Logics, Journal of Automated Reasoning.
[2] Proc. POPL, 95–104, 1992.
[3] R. Barbuti, M. Codish, R. Giacobazzi & M. Maher, Oracle Semantics for Prolog, Proc. 3rd Int. Conf. Algebraic and Logic Programming, 100–114, 1992.
[4] A. Bossi & M. Menegus, Una Semantica Composizionale per Programmi Logici Aperti, Proc. 6th Italian Conf. on Logic Programming, 95–100, 1991.
[5] A. Bossi, M. Gabbrielli, G. Levi & M.C. Meo, Contributions to the Semantics of Open Logic Programs, Proc. Int. Conf. on Fifth Generation Computer Systems, 570–580, 1992.
[6] K. Clark, Negation as Failure, in: Logic and Databases, H. Gallaire & J. Minker (Eds), Plenum Press, 293–322, 1978.
[7] P. Cousot & R. Cousot, Constructive versions of Tarski's fixed point theorems, Pacific J. Math. 82, 1 (1979), 43–57.
[8] M.H. van Emden & R.A. Kowalski, The Semantics of Predicate Logic as a Programming Language, Journal of the ACM 23, 4 (1976), 733–742.
[9] F. Fages, A New Fixpoint Semantics for General Logic Programs Compared with the Well-Founded and the Stable Model Semantics, Proc. ICLP-7, 441–458, 1990.
[10] M. Falaschi, G. Levi, M. Martelli & C. Palamidessi, Declarative Modeling of the Operational Behavior of Logic Languages, Theoretical Computer Science, 69, 289–318, 1989.
[11] M. Fitting, A Kripke-Kleene Semantics for Logic Programs, Journal of Logic Programming, 4, 295–312, 1985.
[12] M. Fitting, Well-founded Semantics, Generalized, Proc. ILPS, 71–84, 1991.
[13] A. van Gelder, K. Ross & J.S. Schlipf, Unfounded Sets and Well-Founded Semantics for General Logic Programs, Proc. PODS'88, 221–230, 1988.
[14] M. Gelfond & V. Lifschitz, The Stable Model Semantics for Logic Programming, Proc. ICLP/SLP-5, 1070–1080, 1988.
[15] Y.E. Ioannidis & E. Wong, Towards an Algebraic Theory of Recursion, JACM.
[16] J. Jaffar & J-L. Lassez, Constraint Logic Programming, Proc. POPL, 111–119, 1987.
[17] K. Kanchanasut & P. Stuckey, Eliminating Negation from Normal Logic Programs, Proc. ALP'90, 217–231, 1990.
[18] K. Kunen, Negation in Logic Programming, Journal of Logic Programming, 4, 289–308, 1987.
[19] K. Kunen, Signed Data Dependencies in Logic Programs, Journal of Logic Programming, 7, 231–245, 1989.
[20] J-L. Lassez & M.J. Maher, Closures and Fairness in the Semantics of Logic Programs, Theoretical Computer Science, 29, 167–184, 1984.
[21] M.J. Maher, Semantics of Logic Programs, Ph.D. thesis, Technical Report TR85/14, Department of Computer Science, University of Melbourne, 1985.
[22] M.J. Maher, Equivalences of Logic Programs, in: Foundations of Deductive Databases and Logic Programming, J. Minker (Ed), Morgan Kaufmann, 627–658, 1988.
[23] M.J. Maher, A Transformation System for Deductive Database Modules with Perfect Model Semantics, Proc. FSTTCS, 89–98, 1989.
[24] M.J. Maher, Reasoning about Stable Models (and other Unstable Semantics), manuscript, 1990.
[25] J. Plaza, Fully Declarative Programming with Logic: Mathematical Foundations, Ph.D. thesis, City University of New York, 1990.
[26] H. Przymusinska & T. Przymusinski, Semantic Issues in Deductive Databases and Logic Programs, in: Formal Techniques in Artificial Intelligence, A. Banerji (Ed.), North-Holland, 321–367, 1990.
[27] R. Ramakrishnan, Y. Sagiv, J. Ullman & M. Vardi, Proof-tree Transformation Theorems and their Applications, Proc. PODS, 172–181, 1989.
[28] E.Y. Shapiro, Logic Programs With Uncertainties: A Tool for Implementing Rule-Based Systems, Proc. IJCAI, 529–532, 1983.
[29] V.S. Subrahmanian, On the Semantics of Quantitative Logic Programs, Proc. SLP, 173–182, 1987.
A Fixedpoints of a Sandwiched Function
We review some notions of fixedpoints of functions on a partially ordered set (S, ≤). X is a pre-fixedpoint of a function f if f(X) ≤ X; thus a pre-fixedpoint is closed under the action of f. X is a fixedpoint of f if f(X) = X. X is a post-fixedpoint of f if X ≤ f(X). Let PRE(f), POST(f) and FPT(f) denote, respectively, the set of pre-fixedpoints, post-fixedpoints, and fixedpoints of a function f. Note that every fixedpoint is also a pre-fixedpoint and a post-fixedpoint.

The following lemma shows that a function that is intermediate between two functions with the same pre-fixedpoints and fixedpoints has exactly the same pre-fixedpoints and fixedpoints as those functions. Note that this lemma does not require the functions to be monotonic, nor does it require any conditions on the partial order.

Lemma 5
Let f₁, f₂ and g be functions on a partially ordered set (S, ≤). Suppose for every X, f₁(X) ≤ g(X) ≤ f₂(X).
1. If PRE(f₁) = PRE(f₂) then PRE(g) = PRE(f₁) = PRE(f₂).
2. If POST(f₁) = POST(f₂) then POST(g) = POST(f₁) = POST(f₂).
3. If PRE(f₁) = PRE(f₂) and FPT(f₁) = FPT(f₂) then FPT(g) = FPT(f₁) = FPT(f₂).
4. If POST(f₁) = POST(f₂) and FPT(f₁) = FPT(f₂) then FPT(g) = FPT(f₁) = FPT(f₂).
5. If PRE(f₁) = PRE(f₂) and POST(f₁) = POST(f₂) then FPT(g) = FPT(f₁) = FPT(f₂).
6. If FPT(f₁) = FPT(f₂) then FPT(g) ⊇ FPT(f₁) = FPT(f₂).

Proof:
Part 1. Suppose PRE(f₁) = PRE(f₂). Now, let I be a pre-fixedpoint of g. Then f₁(I) ≤ g(I) ≤ I. Thus I is a pre-fixedpoint of f₁ and f₂. Conversely, let I be a pre-fixedpoint of f₁ and f₂. Then g(I) ≤ f₂(I) ≤ I. Hence I is a pre-fixedpoint of g. Thus PRE(g) = PRE(f₁) = PRE(f₂).

Part 2. By duality, the first part implies: if POST(f₁) = POST(f₂) then POST(g) = POST(f₁) = POST(f₂).

Part 3. Suppose PRE(f₁) = PRE(f₂), and FPT(f₁) = FPT(f₂). Let I be a fixedpoint of f₁ and f₂. Then I = f₁(I) ≤ g(I) ≤ f₂(I) = I. Hence g(I) = I. Conversely, let I be a fixedpoint of g. By part 1, PRE(g) = PRE(f₂) and hence f₂(I) ≤ I. Also, I = g(I) ≤ f₂(I). Thus I is also a fixedpoint of f₂ and, hence, FPT(g) = FPT(f₁) = FPT(f₂).

Part 4. This is the dual of part 3.

Part 5. If PRE(f₁) = PRE(f₂), and POST(f₁) = POST(f₂) then, by parts 1 and 2, PRE(g) = PRE(f₁) = PRE(f₂) and POST(g) = POST(f₁) = POST(f₂). The fixedpoints are those elements that are both a pre- and post-fixedpoint. Hence, FPT(g) = FPT(f₁) = FPT(f₂).

Part 6. Suppose FPT(f₁) = FPT(f₂), and let I ∈ FPT(f₁). Then I = f₁(I) ≤ g(I) ≤ f₂(I) = I, so I is also a fixedpoint of g. Hence FPT(f₁) = FPT(f₂) ⊆ FPT(g). ✷

It is tempting to assume that we could have: if FPT(f₁) = FPT(f₂) then FPT(g) = FPT(f₁) = FPT(f₂). However, this does not hold, in general, as the following example shows.

Example 1 Let S = {1, 2, 3} under the usual ordering. We define f₁ and f₂ as follows: f₁(1) = f₁(2) = 1 and f₁(3) = 3; f₂(1) = 1 and f₂(2) = f₂(3) = 3. Let g be the identity function. It is straightforward to verify that f₁, f₂ and g are monotonic functions, FPT(f₁) = FPT(f₂) = {1, 3}, and for all X, f₁(X) ≤ g(X) ≤ f₂(X). However, 2 is a fixedpoint of g, but not of f₁ or f₂.
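Example 1 is small enough to verify mechanically. The following sketch (illustrative only; not part of the original note) encodes f₁, f₂ and the identity g as tables and checks the sandwich condition and the fixedpoint sets:

```python
# The three-element chain 1 ≤ 2 ≤ 3, with f1, f2 and g given as lookup tables.
S = {1, 2, 3}
f1 = {1: 1, 2: 1, 3: 3}
f2 = {1: 1, 2: 3, 3: 3}
g = {x: x for x in S}  # the identity function

def fpt(f):
    """The set of fixedpoints of a function given as a table."""
    return {x for x in S if f[x] == x}

assert all(f1[x] <= g[x] <= f2[x] for x in S)  # f1 ≤ g ≤ f2 (sandwiched)
assert fpt(f1) == fpt(f2) == {1, 3}            # same fixedpoints
assert fpt(g) == {1, 2, 3}                     # yet g has the extra fixedpoint 2
```

This confirms that equality of FPT(f₁) and FPT(f₂) alone yields only containment FPT(g) ⊇ FPT(f₁), as part 6 of Lemma 5 states.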