International Journal of Artificial Intelligence and Applications (IJAIA), Vol.8, No.2, March 2017
Source-Sensitive Belief Change
Shahab Ebrahimi
Amirkabir University of Technology, Tehran
[email protected]
ABSTRACT
The AGM model is the most remarkable framework for modeling belief revision. However, it is not perfect in all aspects. Paraconsistent belief revision, multi-agent belief revision and non-prioritized belief revision are three different extensions of AGM that address three important criticisms applied to it. In this article, we propose a framework based on AGM that takes a position in each of these categories. Also, we discuss some features of our framework and study the satisfiability of the AGM postulates in this new context.
KEYWORDS
belief revision, the AGM model, multi-source belief revision, non-prioritized belief revision, paraconsistent belief revision, the logic PAC
INTRODUCTION

Belief revision is a scientific field of research at the intersection of epistemology, logic and artificial intelligence. The main goal of belief revision is to provide a logical framework for modeling the process of belief change of a rational agent. AGM ([1, 20, 15]) is the most popular model for this aim. In AGM, a knowledge state is represented by a logically closed set of propositions called a belief set. Given a proposition as an input, three types of change are possible: expansion, revision and contraction. Expansion and revision are both about adding a new belief to the belief set, but in the latter, keeping the consistency of the set is a consideration. Contraction, however, deals with retracting an old belief. Applying any of these changes to a belief set results in a new set that, according to AGM, should satisfy a set of postulates ordained with respect to several rationality criteria.

Beside all the advantages of AGM that have been explored by authors, several criticisms have been raised to show that it has some counterintuitive features. Thus, various extensions of AGM have been proposed to address such criticisms ([13]). Multi-agent belief revision, non-prioritized belief revision and paraconsistent belief revision are three extensions proposed for three different criticisms of AGM.

Multi-agent belief revision is introduced in order to enable AGM to handle belief changes in multi-agent environments, whether we face multiple agents that change their beliefs, or have multiple sources that utter new propositions for making belief changes. The latter case is called multi-source belief revision.

DOI : 10.5121/ijaia.2017.8205

The main goal of non-prioritized belief revision is different. In AGM, every input has higher priority than the existing beliefs. In other words, accepting a new proposition or retracting a non-tautological one can always be done without any obstacle.
However, this feature of AGM seems to be an idealized assumption, since we do not intuitively accept every piece of information passed to us by every source. All multi-source models can be considered non-prioritized, because it is impossible to differentiate between the sources if we accept all the inputs of all of them. However, the reverse is not true: there are several non-prioritized approaches that are defined in single-agent environments and are sensitive only to the input propositions.

In paraconsistent belief revision, the main problem is that it is impossible to have two different inconsistent belief sets, because by the rules of classical logic, every inconsistent belief set is equivalent to K_⊥, the set of all propositions. This feature seems counterintuitive too. Whether the propositions are perceived as rational beliefs or practical data, it seems plausible to be able to make meaningful deductions in case of inconsistencies. Handling such situations also seems reasonable in a multi-source context, because even fully self-consistent and rational sources can utter contradicting information, and in some cases there are not enough reasons to accept only one of them.

As is clear, these three different criticisms have some intersections in their motivations, and it is reasonable to study models that coalesce all of them together. In the following, we try to define a framework based on AGM that leverages the benefits of the three aforementioned extensions.

THE PROPOSED FRAMEWORK

In this section, we describe our approach for handling the mentioned criticisms.
The first change we apply is to use a paraconsistent logic as our background logic. Paraconsistent logics are a class of non-classical logics for which ECQ (ex contradictione quodlibet) fails, i.e., the relation q ∈ Cn({p, ¬p}) is not valid in those systems. Therefore, they allow a theory to contain inconsistencies while not deducing all propositions. There are many approaches for constructing such systems ([22]), and the question is which one performs best in the AGM model. Here, we use a three-valued paraconsistent logic called PAC ([5]). This logic is an enrichment of the logic LP ([21]) with an implication connective → for which modus ponens and the deduction theorem hold. In PAC, we have three truth values, denoted by 1, 0 and −1; the value set is V = {1, 0, −1} and the set of designated values is D = {1, 0}. The four connectives ∧, ∨, → and ¬ are defined respectively for "conjunction (and)", "disjunction (or)", "material implication (if...then)" and "negation (not)" by Table 1. Two constants ⊥ and ⊤ are also given, with the values "false" and "true".

[Table 1: truth tables of the PAC connectives ∧, ∨, → and ¬.]

The consequence relation ⊢ of PAC is defined as follows: {p₁, p₂, ..., pₙ} ⊢ p if and only if every valuation that assigns a designated value to each pᵢ also assigns a designated value to p. Modus ponens holds, i.e., {p, p → q} ⊢ q, and so does the deduction theorem: if A ∪ {p} ⊢ q, then A ⊢ p → q. On the other hand, there is a valuation v such that v(p) = v(¬p) = 0 and v(q) = −1. Therefore {p, ¬p} ⊬ q, and ECQ fails in this logic.

The Hilbert-style formulation of PAC is given by Table 2, in which the connective ↔ is defined as follows: p ↔ q := (p → q) ∧ (q → p).

Axioms:
• p → (q → p)
• (p → (q → r)) → ((p → q) → (p → r))
• ((p → q) → p) → p
• p ∧ q → p
• p ∧ q → q
• p → (q → p ∧ q)
• p → p ∨ q
• q → p ∨ q
• (p → r) → ((q → r) → (p ∨ q → r))
• ¬(p ∨ q) ↔ ¬p ∧ ¬q
• ¬(p ∧ q) ↔ ¬p ∨ ¬q
• ¬¬p ↔ p
• ¬(p → q) ↔ p ∧ ¬q
• p ∨ ¬p
Rule of inference:
• from p and p → q, infer q (modus ponens)

Table 2: Hilbert-style system of PAC.

PAC has many advantages that make it a suitable logic for our purposes. On the one hand, by the ideas of society semantics and multi-source epistemic logics introduced in [6], [8], [9] and [11], it can be shown that PAC is a reasonable logic for handling inconsistencies in multi-source environments with respect to epistemological aspects. On the other hand, the benefits of paraconsistent three-valued logics in handling inconsistent databases are discussed in [7]. Thus, PAC seems a suitable candidate for our framework both conceptually and practically. Furthermore, it is an ideal and natural paraconsistent logic, as discussed in [2], [3] and [4], which confirms that it behaves like classical logic in the majority of cases.

Given the consequence relation of PAC, we can define its induced logical consequence operator as follows:

Cn(A) = {p : A′ ⊢ p, for a finite A′ ⊆ A}   (1)

Consequently, the theorem below can be concluded:

Theorem 2.1. Cn satisfies the following conditions:
(Cn1) A ⊆ Cn(A). (inclusion)
(Cn2) Cn(Cn(A)) ⊆ Cn(A). (iteration)
(Cn3) If A ⊆ B, then Cn(A) ⊆ Cn(B). (monotony)
(Cn4) If p ∈ Cn(A), then p ∈ Cn(A′) for some finite subset A′ of A. (compactness)
(Cn5) If p ∈ Cn(A ∪ {q}) and p ∈ Cn(A ∪ {r}), then p ∈ Cn(A ∪ {q ∨ r}). (introduction of disjunction in premises)

Proof. By the definition of ⊢, it is easy to see that it satisfies the Tarskian conditions on consequence relations: reflexivity (if p ∈ A, then A ⊢ p), transitivity (if A ⊢ p and B ⊢ q for every q ∈ A, then B ⊢ p) and weakening (if A ⊢ p and A ⊆ B, then B ⊢ p). (Cn1)-(Cn3) are direct results of these conditions. The satisfiability of (Cn4) follows easily from the definition, because if p ∈ Cn(A), there must be at least one finite A′ ⊆ A such that A′ ⊢ p. Finally, by the semantics of PAC for ∨ and the definition of ⊢, (Cn5) can be easily deduced.
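For illustration, the three-valued semantics described above can be encoded directly in Python (a sketch: the encoding of formulas as functions on valuations and all names are ours; the tables follow the standard semantics of PAC, with ∧ and ∨ as minimum and maximum under −1 < 0 < 1, and a → b equal to b when a is designated and to 1 otherwise):

```python
from itertools import product

# Truth values of PAC: 1 (true), 0 (both true and false), -1 (false).
# Designated values, i.e., those that count as "holding":
DESIGNATED = {1, 0}

def neg(a):       # LP negation: swaps 1 and -1, fixes 0
    return -a

def conj(a, b):   # conjunction = minimum w.r.t. -1 < 0 < 1
    return min(a, b)

def disj(a, b):   # disjunction = maximum
    return max(a, b)

def impl(a, b):   # PAC implication: validates modus ponens
    return b if a in DESIGNATED else 1

def entails(premises, conclusion, atoms):
    """{p1,...,pn} |- p iff every valuation that designates all
    premises also designates the conclusion. A formula is encoded
    as a function from valuations (dicts atom -> value) to values."""
    for values in product([1, 0, -1], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(prem(v) in DESIGNATED for prem in premises):
            if conclusion(v) not in DESIGNATED:
                return False
    return True

p = lambda v: v['p']
q = lambda v: v['q']
not_p = lambda v: neg(v['p'])
p_impl_q = lambda v: impl(v['p'], v['q'])

# ECQ fails: the valuation v(p) = 0, v(q) = -1 is a countermodel.
print(entails([p, not_p], q, ['p', 'q']))      # False
# Modus ponens holds: {p, p -> q} |- q.
print(entails([p, p_impl_q], q, ['p', 'q']))   # True
```

The exhaustive search over valuations is feasible here because the language is propositional and the examples involve only two atoms.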
Like the original case, by a belief set K we mean a set of propositions that is closed under logical consequence, i.e., K = Cn(K). However, unlike AGM, the inputs are not represented by a single proposition. In the proposed approach, an input I is described by an ordered pair (p, s) such that the first element is a proposition and the second element is the source that utters p for a particular type of belief change. In the following, we will use P_I and S_I for referring to the proposition and the source of the input I.

Also, epistemic and reliability functions are utilized to specify the degree of believability of beliefs and inputs, respectively. In order to define these functions, a value set for the range of these functions is required. Each value set V needs at least three elements with a given total order ≤ on them, i.e., for every x, y, z ∈ V we have:
• x ≤ y or y ≤ x. (totality)
• If x ≤ y and y ≤ x, then x = y. (antisymmetry)
• If x ≤ y and y ≤ z, then x ≤ z. (transitivity)
Besides ≤, V has a minimum element b and a maximum element t. In the present article, V is taken as a fixed value set.

By adopting the notion of epistemic entrenchment proposed in [16], the notion of an epistemic function can be defined. The motivations and properties of epistemic entrenchment fit our approach well. There is only one critical difference in our framework: epistemic entrenchment is an order on propositions providing a qualitative relation between them, whereas here the epistemic function is a function on propositions whose range induces such a relation. If PROP is the set of all propositions of our language, then we have:
Definition 2.1.
For a given belief set K, the function E : PROP → V is an epistemic function if it satisfies:
(E1) If p ⊢ q, then E(p) ≤ E(q). (dominance)
(E2) For any p and q, E(p) ≤ E(p ∧ q) or E(q) ≤ E(p ∧ q). (conjunctiveness)
(E3) When K ≠ K_⊥, p ∉ K if and only if E(p) = b. (minimality)
(E4) If E(p) = t, then ⊢ p. (maximality)

The relation E(q) ≤ E(p) states that p is epistemologically more important than q, in the sense of being retained in K if one of them must be given up. The motivations for these conditions can be found in [16].

Beside the epistemic function, we assume another function on inputs that gives the reliabilities of inputs. The notions of reliability and trustworthiness are usually taken as basic factors in communication and the social sciences, and their effects on belief change have been discussed in many articles ([27, 19]). For an input (p, s), the reliability of the source s and the amount of its knowledge about p are both important for evaluating the degree of reliability. Most models for multi-source or non-prioritized belief revision relate such concepts only to the input proposition or only to the input source. However, many situations show that it is plausible to assume both are important for such an evaluation. For example, for two sources that are experts in two different scientific fields, the relation between their trustworthiness may not be the only important datum; the content of the information they utter may matter as well. Such evaluations play a central role in several fields of science, and we do not engage with their difficulties here: as with the epistemic entrenchment order, it is assumed that the information about reliabilities is given. We do not claim that computing reliabilities is an easy task; we simply devolve it to another field of research.

If INP is the set of all possible inputs, then we have:
Definition 2.2.
The function R : INP → V is a reliability function if it satisfies:
(R) If I₁ = (p, s) and I₂ = (q, s) and p ⊢ q, then R(I₁) ≤ R(I₂). (dominance)

By (R), if a single source utters two propositions such that one of them is a consequence of the other, then the reliability of the input with the deduced proposition must be at least as great as that of the other. This is because if p ⊢ q, then Cn(q) ⊆ Cn(p); hence p can be seen as a bigger claim than q for making a change in the beliefs. Intuitively, if the sources are not the same, this postulate need not hold. Several other conditions could be imposed on the reliability function, but we will continue with this general definition to keep the situation open for any possible extension in the future.

The properties of ≤ on V provide the prerequisites for comparing old beliefs with inputs in order to make a decision about a belief change. Most works in this area address non-priority only for revision functions. In [12], two taxonomies are given for classifying the non-prioritized functions proposed for this change: one based on their outputs and the other based on their construction process. In the present framework, the non-priority of every kind of belief change will be discussed. Our source-sensitive functions can be placed in these taxonomies as follows:
• Based on the output: the "all or nothing" approach; either the whole input is accepted or the belief set is left unchanged.
• Based on the construction: the "decision + action" approach; first we decide whether to accept the input or not, then we perform the change.
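A minimal instance of the two functions, with a hypothetical four-element value set and hand-picked values (nothing here is prescribed by the framework beyond the postulate instances being checked):

```python
from collections import namedtuple

# Toy value set V = {0, 1, 2, 3} with the usual order; b = 0, t = 3.
B, T = 0, 3

Input = namedtuple('Input', ['prop', 'source'])  # an input I = (P_I, S_I)

# Hypothetical epistemic values for a few formulas of a tiny language:
E = {'bot': B, 'p&q': 2, 'p': 2, 'q': 3, 'top': T}

# Hypothetical reliabilities: source s1 utters both p&q and p.
R = {Input('p&q', 's1'): 1, Input('p', 's1'): 2, Input('p', 's2'): 3}

# (E1) dominance instance: p&q |- p, hence E(p&q) <= E(p).
assert E['p&q'] <= E['p']
# (E2) conjunctiveness instance: E(p) <= E(p&q) or E(q) <= E(p&q).
assert E['p'] <= E['p&q'] or E['q'] <= E['p&q']
# (R) dominance instance: p&q |- p and the source is the same,
# so R((p&q, s1)) <= R((p, s1)).
assert R[Input('p&q', 's1')] <= R[Input('p', 's1')]
# For a different source, (R) imposes no relation:
print(R[Input('p', 's2')])  # independent of s1's reliabilities
```

The assertions simply confirm that the sample assignment respects (E1), (E2) and (R); they are consistency checks, not a construction of E and R.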
SOURCE-SENSITIVE BELIEF CHANGES

In the original manuscripts about AGM, contraction, expansion and revision are considered to be the three main types of belief change that happen to a belief state with respect to an input. Contraction is defined for eliminating an existing belief, and expansion together with revision are defined for adding a new belief. The difference between revision and expansion is that in the former the result must be consistent, whereas for expansion consistency is not a restriction. By changing our view from consistency to paraconsistency, the motivations for taking revision as a main kind of belief change no longer have reasonable support. As investigated by Tanaka ([26]), by using paraconsistent logics in the AGM model, revision collapses into expansion. It seems very admissible to accept this result in our multi-source environment as well. Consistency of beliefs in this framework can be considered an effective factor for evaluating reliability, but it is not the only important parameter. Hence, the comparison of R(I) and E(¬P_I) does not provide sufficient information for accepting or rejecting I. Therefore, there is no direct way to define revision, and we take expansion and contraction as the only main kinds of belief change.

By expanding a belief set K by an input I, we want to have P_I as a belief in the resulting belief set, in a non-prioritized way. Since no restriction is considered in the definition of expansion, all new information is welcome. The only condition we consider is b < R(I): the reliability of an acceptable input can be anything except exactly the minimum degree of reliability, which equals the epistemic value of ⊥. So, we define expansion as follows:

Definition 3.1.
For a belief set K and an input I, the function ˙+ is a source-sensitive expansion if and only if:

K ˙+ I =
  Cn(K ∪ {P_I})   if b < R(I)
  K               otherwise

The construction is simple. If b < R(I), we add P_I to K and close it under Cn, in order to accept the new logical consequences of K ∪ {P_I} too. However, unlike the original case, our background logic is PAC, and the change may be unsuccessful. If we want to discuss the properties of this function by studying the satisfiability of the AGM postulates, we must introduce modified versions of them that are compatible with our framework and its notation. The postulates corresponding to the AGM expansion postulates are:

(˙+1) K ˙+ I is a belief set. (closure)
(˙+2) P_I ∈ K ˙+ I or K ˙+ I = K. (relative success)
(˙+3) K ⊆ K ˙+ I. (inclusion)
(˙+4) If P_I ∈ K, then K ˙+ I = K. (vacuity)
(˙+5) If K₁ ⊆ K₂, then K₁ ˙+ I ⊆ K₂ ˙+ I. (monotony)
(˙+6) K ˙+ I is a subset of any set that satisfies (˙+1) and (˙+3)-(˙+5) and contains P_I. (minimality)

The only postulate that is not the direct correspondent of its AGM counterpart is (˙+2). Here, we use a weakening of the success postulate (the success postulate states that if + is an AGM expansion, then p ∈ K + p), reflecting the "all or nothing" approach of our non-prioritized expansion function. Also, in the original case, the last postulate specifies a limit on the size of the result; the important thing is to have control over the size of the expanded belief set. Therefore, we consider postulate (˙+6) only for the case that P_I is accepted in the result. Now, for this function we can show:

Theorem 3.1.
Source-sensitive expansion satisfies (˙+1)-(˙+6).

Proof. For (˙+1), if b < R(I), then by definition the result is a belief set. Otherwise, the result is K, which is our given belief set. For (˙+2), if b < R(I), then by (Cn1), P_I ∈ Cn(K ∪ {P_I}). If not b < R(I), then by definition K ˙+ I = K. For (˙+3), if b < R(I), then by (Cn1), K ⊆ Cn(K ∪ {P_I}). In the other case, the result is K and K ⊆ K. For (˙+4), suppose that b < R(I). From P_I ∈ K, it follows that K ˙+ I = Cn(K ∪ {P_I}) = Cn(K) = K. If not b < R(I), then the result is K by definition. For (˙+5), first assume b < R(I). Since K₁ ⊆ K₂, by (Cn3) we conclude that Cn(K₁ ∪ {P_I}) ⊆ Cn(K₂ ∪ {P_I}). So, K₁ ˙+ I ⊆ K₂ ˙+ I. For the other case, by definition K₁ ˙+ I = K₁ and K₂ ˙+ I = K₂; hence by assumption, K₁ ˙+ I ⊆ K₂ ˙+ I.

For (˙+6), assume that K∗ satisfies (˙+1) and (˙+3)-(˙+5) and contains P_I. Consider the case that b < R(I), and suppose that not K ˙+ I ⊆ K∗. This means there is a proposition p such that p ∈ K ˙+ I and p ∉ K∗. Since p ∈ K ˙+ I, by definition there is a finite K′ ⊆ K ∪ {P_I} such that K′ ⊢ p. We have two possible conditions: either P_I ∉ K′ or P_I ∈ K′. The former case yields K′ ⊆ K, and so p ∈ Cn(K); thus by inclusion, p ∈ K∗, which is a contradiction. Assume the latter case, i.e., P_I ∈ K′. We have (K′ \ {P_I}) ∪ {P_I} ⊢ p. By the deduction theorem, K′ \ {P_I} ⊢ P_I → p. Since K′ \ {P_I} ⊆ K, then P_I → p ∈ Cn(K). From inclusion, it follows that P_I → p ∈ K∗. Since by assumption K∗ contains P_I, by modus ponens and closure it is easy to show that p ∈ K∗, which is again in contradiction with the assumption. For the case that not b < R(I), by definition the result is K, and by inclusion K ⊆ K∗.

For contracting a given belief set K with respect to an input I, we aim to eliminate P_I from K by the claim of S_I. We expect this change to happen whenever I has a greater degree of reliability than the degree of believability of P_I in K. So, we require the constraint E(P_I) < R(I) to be satisfied for contraction. In [16], given a standard relation ≤_EE for epistemic entrenchment, a construction for AGM contraction called entrenchment-based contraction is defined as follows:

K − p =
  {q ∈ K : p <_EE p ∨ q}   if p ∉ Cn(∅)
  K                        otherwise   (2)

In the above definition, <_EE is defined as usual.
Since every epistemic function induces a standard epistemic entrenchment relation on propositions, we can define source-sensitive contraction based on equation (2) by changing its constraint.

Definition 3.2.
For a belief set K and an input I, the function ˙− is a source-sensitive contraction if and only if:

K ˙− I =
  {p ∈ K : E(P_I) < E(P_I ∨ p)}   if E(P_I) < R(I)
  K                               otherwise

Again, we should change some notation to obtain the correspondents of the AGM contraction postulates. The important points are that postulates (˙−7) and (˙−8) are considered only for inputs uttered by the same source, and that the success postulate (with respect to the success postulate for contraction: if − is an AGM contraction, then p ∉ K − p whenever p ∉ Cn(∅)) will be replaced by a weakening of it called relative success, which reflects the "all or nothing" approach of our non-prioritized contraction. The postulates corresponding to the AGM contraction postulates are:

(˙−1) K ˙− I is a belief set. (closure)
(˙−2) P_I ∉ K ˙− I or K ˙− I = K. (relative success)
(˙−3) K ˙− I ⊆ K. (inclusion)
(˙−4) If P_I ∉ K, then K ˙− I = K. (vacuity)
(˙−5) If S_I₁ = S_I₂ and Cn(P_I₁) = Cn(P_I₂), then K ˙− I₁ = K ˙− I₂. (extensionality)
(˙−6) K ⊆ Cn((K ˙− I) ∪ {P_I}). (recovery)
(˙−7) If S_I₁ = S_I₂ = S_I₃ and P_I₃ = P_I₁ ∧ P_I₂, then K ˙− I₁ ∩ K ˙− I₂ ⊆ K ˙− I₃. (conjunctive overlap)
(˙−8) If S_I₁ = S_I₂ = S_I₃ and P_I₃ = P_I₁ ∧ P_I₂ and P_I₁ ∉ K ˙− I₃, then K ˙− I₃ ⊆ K ˙− I₁. (conjunctive inclusion)

Now we can show the following theorem:

Theorem 3.2.
Source-sensitive contraction satisfies (˙−1)-(˙−8).

Proof. Again, we show the satisfiability of the postulates for both cases E(P_I) < R(I) and not E(P_I) < R(I).

For (˙−1), assume E(P_I) < R(I); we show that K ˙− I is closed under Cn. Take p ∈ Cn(K ˙− I). By (Cn4), there are p₁, ..., pₙ ∈ K ˙− I such that {p₁, ..., pₙ} ⊢ p; since K is a belief set, p ∈ K. For every pᵢ we have E(P_I) < E(P_I ∨ pᵢ), so by (E1) and (E2) it is easy to show that E(P_I) < E((P_I ∨ p₁) ∧ (P_I ∨ p₂) ∧ ... ∧ (P_I ∨ pₙ)). In PAC, ⊢ ((P_I ∨ p₁) ∧ (P_I ∨ p₂) ∧ ... ∧ (P_I ∨ pₙ)) ↔ (P_I ∨ (p₁ ∧ ... ∧ pₙ)). So by (E1), E(P_I) < E(P_I ∨ (p₁ ∧ ... ∧ pₙ)). Since p₁ ∧ ... ∧ pₙ ⊢ p, by the rules of PAC we have P_I ∨ (p₁ ∧ ... ∧ pₙ) ⊢ P_I ∨ p, and by (E1), E(P_I ∨ (p₁ ∧ ... ∧ pₙ)) ≤ E(P_I ∨ p). Therefore E(P_I) < E(P_I ∨ p), so p ∈ K ˙− I and we are done. For the case that not E(P_I) < R(I), the result is K, which is a belief set.

For (˙−2), assume E(P_I) < R(I). Since ⊢ P_I ↔ (P_I ∨ P_I), by (E1) clearly not E(P_I) < E(P_I ∨ P_I). So by definition, P_I ∉ K ˙− I. If it is not the case that E(P_I) < R(I), then we have K ˙− I = K.

For (˙−3), in both cases the result is a subset of K. Hence, K ˙− I ⊆ K.

For (˙−4), suppose that P_I ∉ K. By (E3), E(P_I) = b. Now if E(P_I) < R(I), then the result is {p ∈ K : E(P_I) < E(P_I ∨ p)}. By (E1) and (E3), for every p ∈ K we have E(P_I) < E(p) ≤ E(P_I ∨ p). So K ˙− I = K. If not E(P_I) < R(I), the result is obvious.

For (˙−5), since Cn(P_I₁) = Cn(P_I₂), from the reflexivity of ⊢ and the deduction theorem it follows that ⊢ P_I₁ ↔ P_I₂. Hence, by (E1) and (R) we conclude that E(P_I₁) = E(P_I₂) and R(I₁) = R(I₂). So, E(P_I₁) < R(I₁) if and only if E(P_I₂) < R(I₂). Assume that E(P_I₁) < R(I₁). So, K ˙− I₁ = {p ∈ K : E(P_I₁) < E(P_I₁ ∨ p)} and K ˙− I₂ = {p ∈ K : E(P_I₂) < E(P_I₂ ∨ p)}. Suppose that p ∈ K ˙− I₁. This means p ∈ K and E(P_I₁) < E(P_I₁ ∨ p). Since ⊢ P_I₁ ↔ P_I₂, then ⊢ (P_I₁ ∨ p) ↔ (P_I₂ ∨ p). Hence by (E1), E(P_I₂) < E(P_I₂ ∨ p). So, p ∈ K ˙− I₂ and K ˙− I₁ ⊆ K ˙− I₂. The reverse is similar. Therefore K ˙− I₁ = K ˙− I₂. Now assume not E(P_I₁) < R(I₁). It follows that not E(P_I₂) < R(I₂). In this case, by definition K ˙− I₁ = K ˙− I₂ = K.

For (˙−6), assume E(P_I) < R(I); by the definition of R, R(I) ≤ t, so E(P_I) < t. By definition, Cn((K ˙− I) ∪ {P_I}) = Cn({p ∈ K : E(P_I) < E(P_I ∨ p)} ∪ {P_I}). We want to show that K ⊆ Cn((K ˙− I) ∪ {P_I}); thus, taking p ∈ K, we show that p ∈ Cn((K ˙− I) ∪ {P_I}). Since in PAC for every p and q we have p ⊢ q → p, by the closure of K we conclude that P_I → p ∈ K. Since P_I ∨ (P_I → p) is a theorem in PAC and E(P_I) < t, by (E1) and (E4) we have E(P_I) < E(P_I ∨ (P_I → p)). Hence by definition, P_I → p ∈ K ˙− I, and by (Cn1), P_I → p ∈ Cn((K ˙− I) ∪ {P_I}). From (Cn1), P_I ∈ Cn((K ˙− I) ∪ {P_I}). So by modus ponens, p ∈ Cn((K ˙− I) ∪ {P_I}). Hence, K ⊆ Cn((K ˙− I) ∪ {P_I}). Now suppose that not E(P_I) < R(I). Then by definition, Cn((K ˙− I) ∪ {P_I}) = Cn(K ∪ {P_I}) and the result is obvious.

For (˙−7), assume E(P_I₃) < R(I₃). Independently, three conditions are possible for E(P_I₁) and E(P_I₂): E(P_I₁) < E(P_I₂), E(P_I₂) < E(P_I₁), or E(P_I₁) = E(P_I₂). Consider the first case. By (E1) and (E2), E(P_I₃) = E(P_I₁): dominance gives E(P_I₃) ≤ E(P_I₁), and conjunctiveness gives E(P_I₁) ≤ E(P_I₃) or E(P_I₂) ≤ E(P_I₃), the latter being impossible since E(P_I₃) ≤ E(P_I₁) < E(P_I₂). Now, since P_I₃ ⊢ P_I₁ and the sources are the same, (R) yields R(I₃) ≤ R(I₁). So we have E(P_I₁) < R(I₁). We want to show that K ˙− I₁ ⊆ K ˙− I₃. Suppose that p ∈ K ˙− I₁. This means p ∈ K and E(P_I₁) < E(P_I₁ ∨ p); since E(P_I₃) ≤ E(P_I₁), it follows that E(P_I₃) < E(P_I₁ ∨ p). Now, since E(P_I₁) < E(P_I₂), by (E1) we have E(P_I₃) ≤ E(P_I₁) < E(P_I₂) ≤ E(P_I₂ ∨ p). Hence by (E1) and (E2), E(P_I₃) < E((P_I₁ ∨ p) ∧ (P_I₂ ∨ p)). Since ⊢ (P_I₃ ∨ p) ↔ ((P_I₁ ∨ p) ∧ (P_I₂ ∨ p)), from (E1) we conclude E(P_I₃) < E(P_I₃ ∨ p). So by definition, p ∈ K ˙− I₃ and K ˙− I₁ ⊆ K ˙− I₃. Therefore, K ˙− I₁ ∩ K ˙− I₂ ⊆ K ˙− I₃. For the case that E(P_I₂) < E(P_I₁), the proof is similar. Now assume the third case, i.e., E(P_I₁) = E(P_I₂). Then E(P_I₃) = E(P_I₁) = E(P_I₂), and since E(P_I₃) < R(I₃) ≤ R(I₁), R(I₂), both constraints E(P_I₁) < R(I₁) and E(P_I₂) < R(I₂) are satisfied. Now take p ∈ K ˙− I₁ ∩ K ˙− I₂. This means p ∈ K and E(P_I₃) ≤ E(P_I₁) < E(P_I₁ ∨ p) and E(P_I₃) ≤ E(P_I₂) < E(P_I₂ ∨ p). Therefore, by (E2), E(P_I₃) < E((P_I₁ ∨ p) ∧ (P_I₂ ∨ p)). So, as we have seen, E(P_I₃) < E(P_I₃ ∨ p). Hence by definition, p ∈ K ˙− I₃ and K ˙− I₁ ∩ K ˙− I₂ ⊆ K ˙− I₃. The only remaining case is when not E(P_I₃) < R(I₃). In this case, by definition K ˙− I₃ = K. From (˙−3) it follows that K ˙− I₁ ⊆ K and K ˙− I₂ ⊆ K. Therefore again, K ˙− I₁ ∩ K ˙− I₂ ⊆ K ˙− I₃.

For (˙−8), first assume E(P_I₃) < R(I₃). From the assumption P_I₁ ∉ K ˙− I₃, either P_I₁ ∉ K, or P_I₁ ∈ K and not E(P_I₃) < E(P_I₃ ∨ P_I₁). In the first case, by (˙−4), K ˙− I₁ = K, and the postulate is satisfied since K ˙− I₃ ⊆ K by (˙−3). Now take the second case. We can show that if not E(P_I₃) < E(P_I₃ ∨ P_I₁), then E(P_I₁) ≤ E(P_I₃): for if E(P_I₃) < E(P_I₁), then since P_I₁ ⊢ P_I₃ ∨ P_I₁, by (E1) we would have E(P_I₃) < E(P_I₁) ≤ E(P_I₃ ∨ P_I₁), a contradiction. So, E(P_I₁) ≤ E(P_I₃). Now we must show that in this condition K ˙− I₃ ⊆ K ˙− I₁. Take p ∉ K ˙− I₁. This means either p ∉ K, or p ∈ K and E(P_I₁) < R(I₁) and not E(P_I₁) < E(P_I₁ ∨ p). The first case clearly yields p ∉ K ˙− I₃. Assume the second case. Since E(P_I₁) ≤ E(P_I₃) and ⊢ (P_I₃ ∨ p) ↔ ((P_I₁ ∨ p) ∧ (P_I₂ ∨ p)), by (E1) and (E2) and the assumption not E(P_I₁) < E(P_I₁ ∨ p), we conclude that E(P_I₃ ∨ p) ≤ E(P_I₁ ∨ p) ≤ E(P_I₁) ≤ E(P_I₃). We had E(P_I₃) < R(I₃); hence, by definition, p ∉ K ˙− I₃. The only remaining case is when not E(P_I₃) < R(I₃). In this case, by definition K ˙− I₃ = K; from the assumption P_I₁ ∉ K ˙− I₃ it follows that P_I₁ ∉ K, hence by (˙−4), K ˙− I₁ = K and therefore K ˙− I₃ ⊆ K ˙− I₁.
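To make Definitions 3.1 and 3.2 concrete, here is a toy Python sketch (Cn, E, R and the string-based formula syntax are illustrative stand-ins supplied by the caller; the numeric values are hypothetical):

```python
B = 0  # minimum element b of the value set V

def expand(K, I, Cn, R):
    """Source-sensitive expansion (Definition 3.1): all or nothing."""
    p, _s = I
    return Cn(K | {p}) if B < R(I) else K

def contract(K, I, E, R, disj):
    """Source-sensitive contraction (Definition 3.2)."""
    q, _s = I
    if E(q) < R(I):                       # constraint E(P_I) < R(I)
        return {p for p in K if E(q) < E(disj(q, p))}
    return K

# Toy instantiation: propositions are strings, Cn is the identity
# closure (a stand-in for real closure under PAC consequence).
Cn = lambda s: frozenset(s)
disj = lambda a, b: f'({a}|{b})'
E_vals = {'p': 2, 'q': 1, '(q|p)': 2, '(q|q)': 1}   # hypothetical
E = lambda x: E_vals[x]
R_hi = lambda I: 3    # a highly reliable input
R_min = lambda I: B   # reliability exactly b: the change is vacuous

K = {'p', 'q'}
print(contract(K, ('q', 's1'), E, R_hi, disj))  # {'p'}: q is retracted
print(expand(K, ('r', 's2'), Cn, R_hi))         # r is accepted
print(expand(K, ('r', 's2'), Cn, R_min))        # K unchanged
```

The first call retracts q because the input outweighs q's epistemic value while p survives the entrenchment test; the last call illustrates relative success: an input whose reliability is exactly b leaves K unchanged.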
As said before, revision should not be considered a main kind of belief change in the paraconsistent context. However, it is possible to define it as a change derived from expansion and contraction. The standard way of doing this is the Levi identity, i.e.:

K ∗_L p = Cn((K − ¬p) ∪ {p})   (3)

It is shown that if − is an AGM contraction function, then ∗_L is an AGM revision. The original form of the Levi identity is not suitable in our framework, for two reasons. First, in our multi-source environment there is no relation between an input with proposition p and another one with proposition ¬p. Second, since the last step of the Levi identity is an expansion, it almost disregards our non-prioritized approach. Thus, it should be modified in order to fit our proposed framework. For every input I = (p, s), we use the auxiliary notation Ī = (¬p, s). Now, the revision of K with respect to I consists of contracting it by Ī and then expanding it by I, where each change is performed whenever its respective restriction is satisfied. Therefore, one way to define the source-sensitive revision function, denoted by ˙∗, is as follows:

K ˙∗ I =
  (K ˙− Ī) ˙+ I   if E(P_Ī) < R(Ī) and b < R(I)
  K               otherwise   (4)

However, this is not the only way. Another approach is to modify the reverse Levi identity ([18]), that is:

K ∗_RL p = (K + p) − ¬p   (5)

In the original case, whenever ¬p ∈ K we have K + p = K_⊥; hence this definition does not fit the AGM model well. One way to avoid triviality is to use belief bases ([14, 17]) instead of belief sets. Belief bases are sets of propositions that are not closed under logical consequence. Since in our framework inconsistencies do not explode into triviality, this problem does not arise.
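As a sketch of the first construction, equation (4) can be realized by composing the two change functions (again with toy, caller-supplied stand-ins for Cn, E, R and the formula syntax):

```python
B = 0  # minimum element b of the value set V

def contract(K, I, E, R, disj):
    """Definition 3.2, as before."""
    q, _s = I
    return {p for p in K if E(q) < E(disj(q, p))} if E(q) < R(I) else K

def expand(K, I, Cn, R):
    """Definition 3.1, as before."""
    p, _s = I
    return Cn(K | {p}) if B < R(I) else K

def revise(K, I, Cn, E, R, disj, neg):
    """Equation (4): K *. I = (K -. I_bar) +. I when
    E(P_I_bar) < R(I_bar) and b < R(I); otherwise K."""
    p, s = I
    i_bar = (neg(p), s)                  # the auxiliary input (¬p, s)
    if E(neg(p)) < R(i_bar) and B < R(I):
        return expand(contract(K, i_bar, E, R, disj), I, Cn, R)
    return K

# Toy instantiation: revising {~p, q} by the input (p, s1).
neg = lambda x: x[1:] if x.startswith('~') else '~' + x
disj = lambda a, b: f'({a}|{b})'
E_vals = {'~p': 1, '(~p|~p)': 1, '(~p|q)': 2}   # hypothetical values
E = lambda x: E_vals[x]
R = lambda I: 3            # every input highly reliable
Cn = lambda s: set(s)      # identity closure, a stand-in for real Cn

print(revise({'~p', 'q'}, ('p', 's1'), Cn, E, R, disj, neg))
# the set {'p', 'q'}: ~p is retracted first, then p is accepted
```

When either restriction fails, the whole composite change is vacuous, so the "all or nothing" character of the two component functions carries over to ˙∗.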
Thus, with some necessary modifications, the reverse Levi identity can also be used for defining source-sensitive revision. However, by giving up the primacy of the input and the consistency criterion, many of the AGM revision postulates will be lost. For many applications of belief revision, finding the best way to perform this kind of change remains important.

CONCLUSION

In this paper, we introduced a new framework based on AGM, called source-sensitive belief change, to address three criticisms applied to the AGM model: the single-agent environment, the primacy of the input and the consistency criterion. We specified our position in each of the subjects that extend AGM with respect to those criticisms, namely multi-agent belief change, non-prioritized belief change and paraconsistent belief change. Naturally, there are some similarities and relations between our model and other proposals ([10], [23], [24] and [25]) in these fields, and there are also several motivations, features and properties that are exclusive to the present framework. As shown, the changes we applied in constructing our desired model resulted in preserving the AGM postulates as much as possible. However, some of the definitions we used are new and stated in very general terms. Hence, ways of improving them are open and need to be studied.

References

[1] Alchourrón, Carlos E., Peter Gärdenfors, and David Makinson. On the logic of theory change: Partial meet contraction and revision functions. The Journal of Symbolic Logic 50.02 (1985): 510-530.
[2] Arieli, Ofer, Arnon Avron, and Anna Zamansky. What Is an Ideal Logic for Reasoning with Inconsistency?. IJCAI. 2011.
[3] Arieli, O., A. Avron, and A. Zamansky. Ideal Paraconsistent Logics. Studia Logica (2011): 31-60.
[4] Arieli, Ofer, and Arnon Avron. Three-valued paraconsistent propositional logics. New Directions in Paraconsistent Logic. Springer India, 2015. 91-129.
[5] Avron, Arnon. Natural 3-valued logics: characterization and proof theory. The Journal of Symbolic Logic 56.01 (1991): 276-294.
[6] Carnielli, Walter A., and Mamede Lima-Marques. Society semantics and multiple-valued logics. Advances in Contemporary Logic and Computer Science 235 (1999): 33-52.
[7] Carnielli, Walter, João Marcos, and Sandra De Amo. Formal inconsistency and evolutionary databases. Logic and Logical Philosophy 8 (2004): 115-152.
[8] Ciucci, Davide, and Didier Dubois. Three-valued logics for incomplete information and epistemic logic. European Workshop on Logics in Artificial Intelligence. Springer Berlin Heidelberg, 2012.
[9] Ciucci, Davide, and Didier Dubois. From paraconsistent three-valued logics to multiple-source epistemic logic. 8th European Society for Fuzzy Logic and Technology Conference (EUSFLAT 2013). 2013.
[10] Dragoni, Aldo Franco, and Paolo Giorgini. Belief revision through the belief-function formalism in a multi-agent environment. International Workshop on Agent Theories, Architectures, and Languages. Springer Berlin Heidelberg, 1996.
[11] Dubois, Didier, and Henri Prade. Toward a unified view of logics of incomplete and conflicting information. Abstracts (2014): 49.
[12] Fermé, Eduardo. Revising the AGM postulates. Departamento de Computación, Universidad de Buenos Aires (1999).
[13] Fermé, Eduardo, and Sven Ove Hansson. AGM 25 years. Journal of Philosophical Logic 40.2 (2011): 295-331.
[14] Fuhrmann, André. Theory contraction through base contraction. Journal of Philosophical Logic 20.2 (1991): 175-203.
[15] Gärdenfors, Peter. Knowledge in flux: Modeling the dynamics of epistemic states. The MIT Press, 1988.
[16] Gärdenfors, Peter, and David Makinson. Revisions of knowledge systems using epistemic entrenchment. Proceedings of the 2nd Conference on Theoretical Aspects of Reasoning about Knowledge. Morgan Kaufmann Publishers Inc., 1988.
[17] Hansson, Sven Ove. Belief base dynamics. Acta Universitatis Upsaliensis, 1991.
[18] Hansson, Sven Ove. Reversing the Levi identity. Journal of Philosophical Logic 22.6 (1993): 637-669.
[19] Liu, Wei. A framework for multi-agent belief revision. Diss. University of Newcastle, 2002.
[20] Makinson, David. How to give it up: A survey of some formal aspects of the logic of theory change. Synthese 62.3 (1985): 347-363.
[21] Priest, Graham. The logic of paradox. Journal of Philosophical Logic 8.1 (1979): 219-241.
[22] Priest, Graham. Paraconsistent logic. Handbook of Philosophical Logic. Springer Netherlands, 2002. 287-393.
[23] Restall, Greg, and John K. Slaney. Realistic Belief Revision. WOCFAI. Vol. 95. 1995.
[24] Tamargo, Luciano H., et al. Modeling knowledge dynamics in multi-agent systems based on informants. The Knowledge Engineering Review 27.01 (2012): 87-114.
[25] Tamargo, Luciano H., et al. On the revision of informant credibility orders. Artificial Intelligence 212 (2014): 36-58.
[26] Tanaka, Koji. The AGM theory and inconsistent belief change. Logique et Analyse 48.189-192 (2005): 113-150.
[27] Wolf, Ann G., Susann Rieger, and Markus Knauff.