CoreDiag: Eliminating Redundancy in Constraint Sets
Alexander Felfernig, Christoph Zehentner, Paul Blazek
Graz University of Technology, Inffeldgasse 16b, 8010 Graz, [email protected]@ist.tugraz.at
cyLEDGE Media GmbH, Schottenfeldgasse 60, 1070 Vienna, [email protected]

ABSTRACT
Constraint-based environments such as configuration systems, recommender systems, and scheduling systems support users in different decision making scenarios. These environments exploit a knowledge base for determining solutions of interest for the user. The development and maintenance of such knowledge bases is an extremely time-consuming and error-prone task. Users often specify constraints which do not reflect the real world. For example, redundant constraints are specified which often increase both the effort for calculating a solution and the efforts related to knowledge base development and maintenance. In this paper we present a new algorithm (CoreDiag) which can be exploited for the determination of minimal cores (minimal non-redundant constraint sets). The algorithm is especially useful for distributed knowledge engineering scenarios where the degree of redundancy can become high. In order to show the applicability of our approach, we present an empirical study conducted with commercial configuration knowledge bases.
Keywords: redundant constraints, minimal cores.
1 INTRODUCTION

The central element of a constraint-based application is a knowledge base (constraint set). When developing and maintaining constraint sets, users often define faulty constraints (the system calculates solutions which are not allowed or, in the worst case, no solution can be found) (Bakker et al., 1993; Felfernig et al., 2004) or redundant constraints which are not needed to express the domain knowledge in a complete fashion (Sabin and Freuder, 1999; Piette, 2008; Fahad and Qadir, 2008; Levy and Sagiv, 1992). In this paper we focus on situations where users define redundant constraints which, when deleted from the constraint set (knowledge base), do not change the semantics of the remaining constraint set. More formally, if C = {c_1, c_2, ..., c_n} is the initial set of constraints defined for the knowledge base and one constraint c_i is redundant, then (C − {c_i}) ∪ ¬C is inconsistent (¬C denotes the negation of C).
Redundancy elimination in knowledge bases is a topic extensively investigated by AI research. The identification of redundant constraints plays a major role, for example, in the development and maintenance of configuration knowledge bases (see, e.g., (Sabin and Freuder, 1999)). The authors introduce concepts for the detection of redundant constraints in conditional constraint satisfaction problems (CCSPs). The approach is based on the idea of analyzing the solution space of the given problem (on the level of individual solutions) in order to detect different types of redundant constraints. (Piette, 2008) provides an in-depth discussion of the role of redundancy elimination in SAT solving and introduces an (incomplete) algorithm for the elimination of redundant clauses, showing its applicability on the basis of an empirical study.
The role of redundancies in ontology development is analyzed by (Fahad and Qadir, 2008). The authors point out the importance of redundancy elimination and discuss typical modeling errors that occur during ontology development and maintenance. (Grimm and Wissmann, 2011) introduce algorithms for redundancy elimination in OWL ontologies. The authors propose an algorithm that computes redundant axioms by exploiting prior knowledge of the dispensability of axioms. (Levy and Sagiv, 1992) analyze two types of redundancies in Datalog programs. First, they interpret redundancy in terms of reachability, i.e., rules and predicates are eliminated that are not part of any derivation tree. Second, redundancy is defined on the basis of the concept of minimal derivation trees which do not include any pair of identical atoms where one is the predecessor of the other one.
All the mentioned approaches focus on the identification of redundant constraints in centralized scenarios where a knowledge engineer is interested in identifying redundant constraints in the given knowledge base. In such scenarios it is assumed that only a small subset of the given constraints is redundant (this assumption is also denoted as the low redundancy assumption). In contrast, distributed knowledge engineering scenarios can lead to a larger number of redundant constraints due to the fact that different contributors add constraints which are related to the same topic (see, e.g., (Chklovski and Gil, 2005; Richardson and Domingos, 2003)); we denote the assumption of larger sets of redundant constraints the high redundancy assumption. For example, we envision a scenario where a large number of users propose constraints to be applied by a constraint-based configuration or recommendation engine (Felfernig and Burke, 2008) and the task of an underlying diagnosis algorithm is to identify minimal sets of constraints which retain the semantics of the original constraint set; we denote such constraint sets as minimal cores.
Note that the following discussions are based on the assumption of consistent constraint sets. Methods for consistency restoration are discussed in (Bakker et al., 1993; Felfernig et al., 2004; Friedrich and Shchekotykhin, 2005; Felfernig et al., 2011).
The major contributions of our paper are the following. First, we introduce a new algorithm which allows for a more efficient determination of redundant constraints, especially in the context of distributed (community-based) knowledge engineering scenarios. Second, we present the results of a performance analysis of our algorithm conducted with real-world configuration knowledge bases.
The remainder of this paper is organized as follows. In Section 2 we introduce a simple example configuration knowledge base from the automotive domain. In Section 3 we introduce a basic algorithm for the determination of redundant constraints in centralized settings (Sequential). In Section 4 we introduce the CoreDiag algorithm. Thereafter we report the results of a performance evaluation conducted with real-world configuration knowledge bases (Section 5). The paper is concluded with Section 6.
2 EXAMPLE: CAR CONFIGURATION

For illustration purposes we use a car configuration knowledge base throughout this paper. A configuration task can be defined as a basic constraint satisfaction problem (CSP) (Tsang, 1993) (see the following definition).
Definition (Configuration Task): A configuration task can be defined as a CSP (V, D, C). V = {v_1, v_2, ..., v_n} is a set of finite domain variables. D = {dom(v_1), dom(v_2), ..., dom(v_n)} is a set of corresponding domain definitions where dom(v_k) is the domain of the variable v_k. C = C_KB ∪ C_R, where C_KB = {c_1, c_2, ..., c_q} is a set of domain-specific constraints (the configuration knowledge base) and C_R = {c_{q+1}, c_{q+2}, ..., c_t} is a set of customer requirements (as well represented as constraints). Note that the presented concepts are also applicable to other types of knowledge representations such as SAT or description logics.
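To make the definition concrete, a configuration task (V, D, C) over finite domains can be represented directly as data, with constraints as predicates over complete assignments. The following sketch is our own illustration (the names `ConfigurationTask`, `solutions`, etc. are not from the paper), and brute-force enumeration is only feasible for tiny domains such as the examples used here:

```python
from itertools import product

class ConfigurationTask:
    """A CSP (V, D, C): V and D are given by `domains`, C = c_kb + c_r."""

    def __init__(self, domains, c_kb, c_r):
        self.domains = domains   # D: variable name -> list of domain values
        self.c_kb = c_kb         # domain-specific constraints (knowledge base)
        self.c_r = c_r           # customer requirements

    def assignments(self):
        """Enumerate all complete assignments (brute force)."""
        names = list(self.domains)
        for values in product(*(self.domains[n] for n in names)):
            yield dict(zip(names, values))

    def solutions(self):
        """Valid configurations: complete and consistent with C_KB u C_R."""
        constraints = self.c_kb + self.c_r
        return [a for a in self.assignments()
                if all(c(a) for c in constraints)]

# Tiny task: one variable, one knowledge-base constraint, one requirement.
task = ConfigurationTask(
    domains={"x": [0, 1, 2, 3]},
    c_kb=[lambda a: a["x"] < 3],
    c_r=[lambda a: a["x"] >= 1],
)
print(task.solutions())  # -> [{'x': 1}, {'x': 2}]
```

Real configurators would of course delegate the consistency checks to a constraint solver instead of enumerating the domain product.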
The following configuration task will be used as a working example throughout the paper. The variable type represents the type of the car, pdc is the park distance control feature, fuel represents the average fuel consumption per 100 kilometers, a skibag allows convenient ski stowage inside the car, and 4-wheel represents the actuation type (4-wheel drive supported or not supported). These variables represent the possible combinations of customer requirements. The set C_KB = {c_1, c_2, c_3, c_4, c_5} defines additional restrictions on the set of possible customer requirements C_R = {c_6, c_7, c_8, c_9, c_10}.
• V = {type, fuel, skibag, 4-wheel, pdc}
• D = {dom(type) = {city, limo, combi, xdrive}, dom(fuel) = {4l, 6l, 10l}, dom(skibag) = {yes, no}, dom(4-wheel) = {yes, no}, dom(pdc) = {yes, no}}
• C_KB = {c_1: 4-wheel = yes → type = xdrive, c_2: skibag = yes → type ≠ city, c_3: fuel = 4l → type = city, c_4: fuel = 6l → type = xdrive, c_5: type = city → fuel ≠ 10l}
• C_R = {c_6: 4-wheel = no, c_7: fuel = 4l, c_8: type = city, c_9: skibag = no, c_10: pdc = yes}
On the basis of this example configuration task we now give a definition of a corresponding configuration (solution).
Definition (Configuration): A configuration (solution) for a configuration task is an instantiation I = {v_1 = ins_1, v_2 = ins_2, ..., v_n = ins_n} where ins_k is an element of the domain of v_k. A configuration is consistent if the assignments in I are consistent with the constraints in C. A configuration is complete if all the variables are instantiated. Finally, a configuration is valid if it is both consistent and complete. A configuration for our example configuration task would be I = {4-wheel = no, fuel = 4l, type = city, skibag = no, pdc = yes}.

3 REDUNDANT CONSTRAINTS

Let us now consider a simple adaptation of the original set of constraints C_KB which we denote by C′_KB. C′_KB includes an additional constraint c_a which has been added by a knowledge engineer.
C′_KB = {c_a: skibag = yes → type = limo ∨ type = combi ∨ type = xdrive, c_1: 4-wheel = yes → type = xdrive, c_2: skibag = yes → type ≠ city, c_3: fuel = 4l → type = city, c_4: fuel = 6l → type = xdrive, c_5: type = city → fuel ≠ 10l}
c_a is redundant since it does not further restrict the solution space defined by the constraints C_KB = {c_1, c_2, c_3, c_4, c_5}. In order to discuss constraint redundancy on a more formal level, we introduce the following definitions.
Definition (Redundant Constraint): Let c_a be a constraint of the configuration knowledge base C_KB. c_a is called redundant iff C_KB − {c_a} |= c_a. If this condition is not fulfilled, c_a is said to be non-redundant. Redundancy can also be analyzed by checking (C_KB − {c_a}) ∪ ¬C_KB for consistency: if consistency is given, c_a is non-redundant.
Iterating over each constraint of C_KB, executing the non-redundancy check (C_KB − {c_a}) ∪ ¬C_KB, and deleting redundant constraints from C_KB results in a set of non-redundant constraints (the minimal core). If the non-redundancy check fails (no solution can be found), the constraint c_a is redundant and can be deleted from C_KB.
Otherwise (the non-redundancy check is successful), c_a is non-redundant.
Definition (Minimal Core): Let C_KB be a configuration knowledge base. C_KB is denoted as a minimal core iff ∀ c_i ∈ C_KB: (C_KB − {c_i}) ∪ ¬C_KB is consistent. Obviously, C_KB ∪ ¬C_KB |= ⊥.
The principle of the following algorithm (Sequential, Algorithm 1) is often used for determining such redundancies (see, e.g., (Piette, 2008; Grimm and Wissmann, 2011)).
Algorithm 1 Sequential(C_KB): ∆
{C_KB: configuration knowledge base}
{¬C_KB: the negation of C_KB}
{∆: set of redundant constraints}
C_KBt ← C_KB;
for all c_i in C_KBt do
  if isInconsistent((C_KBt − {c_i}) ∪ ¬C_KB) then
    C_KBt ← C_KBt − {c_i};
  end if
end for
∆ ← C_KB − C_KBt;
return ∆;
The approach of Sequential is straightforward: each individual constraint c_i is evaluated w.r.t. redundancy by checking whether (C_KBt − {c_i}) ∪ ¬C_KB is still inconsistent. If this is the case, c_i can be considered redundant. If C_KBt − {c_i} is consistent with ¬C_KB, c_i is a non-redundant constraint since its deletion induces consistency with ¬C_KB. Applying the algorithm Sequential to our example C′_KB results in ∆ = {c_a} since (C′_KB − {c_a}) ∪ ¬C′_KB is inconsistent and no further constraint c_i can be deleted from C′_KB such that (C′_KB − {c_a} − {c_i}) ∪ ¬C′_KB is still inconsistent.
The problem of checking whether a given constraint can be inferred from the remaining part of a constraint set has been shown to be co-NP-complete in the general case (Piette, 2008). The major goal of our work was to figure out whether there exist alternative algorithms that have a better runtime performance compared to Sequential in situations with a large amount of redundant constraints in C_KB. Large amounts of redundant constraints typically occur in distributed knowledge engineering scenarios where a large number of users specify rules that subsequently have to be aggregated into one consistent constraint set (see, e.g., (Chklovski and Gil, 2005)).
In the following section we introduce the CoreDiag algorithm, which is a valuable alternative to Sequential in situations with a large number of redundant constraints. After having introduced CoreDiag we will analyze the performance of both algorithms (Sequential and CoreDiag) on the basis of real-world configuration knowledge bases (Section 5).
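As a sanity check, Algorithm 1 can be sketched over the car example, with consistency decided by brute-force enumeration of the finite domains. The encoding below (constraints as Python predicates, implication p → q written as (not p) or q) is our own illustration, not the paper's implementation:

```python
from itertools import product

DOMAINS = {"type": ["city", "limo", "combi", "xdrive"],
           "fuel": ["4l", "6l", "10l"],
           "skibag": ["yes", "no"],
           "4-wheel": ["yes", "no"],
           "pdc": ["yes", "no"]}

def consistent(constraints):
    """A constraint set is consistent iff some complete assignment satisfies it."""
    names = list(DOMAINS)
    return any(all(c(dict(zip(names, vals))) for c in constraints)
               for vals in product(*DOMAINS.values()))

def sequential(c_kb):
    """Algorithm 1: delete c_i whenever (C_KBt - {c_i}) u (not C_KB) stays inconsistent."""
    # Negation of the ORIGINAL C_KB: at least one of its constraints is violated.
    neg = lambda a: not all(c(a) for c in c_kb)
    core = list(c_kb)
    for c in list(core):
        rest = [x for x in core if x is not c]
        if not consistent(rest + [neg]):   # still inconsistent -> c is redundant
            core = rest
    return [c for c in c_kb if c not in core]   # Delta: redundant constraints

# C'_KB from the paper: c_a plus c_1..c_5.
c_a = lambda a: a["skibag"] != "yes" or a["type"] in ("limo", "combi", "xdrive")
c_1 = lambda a: a["4-wheel"] != "yes" or a["type"] == "xdrive"
c_2 = lambda a: a["skibag"] != "yes" or a["type"] != "city"
c_3 = lambda a: a["fuel"] != "4l" or a["type"] == "city"
c_4 = lambda a: a["fuel"] != "6l" or a["type"] == "xdrive"
c_5 = lambda a: a["type"] != "city" or a["fuel"] != "10l"

delta = sequential([c_a, c_1, c_2, c_3, c_4, c_5])
print(delta == [c_a])  # -> True: c_a is the only redundant constraint
```

Running `sequential` on C_KB = {c_1, ..., c_5} alone returns an empty ∆, confirming that the original five constraints already form a minimal core.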
4 THE COREDIAG ALGORITHM

The CoreDiag algorithm (together with CoreD) is based on the principle of divide-and-conquer: whenever a set S which is a subset of C_KB is inconsistent with ¬C_KB, it is or contains a minimal core, i.e., a set of constraints which preserve the semantics of C_KB. In our implementation, CoreD is responsible for determining such minimal cores; CoreDiag returns the complement of a minimal core, which is a maximal set of redundant constraints in C_KB. CoreD is based on the principle of QuickXPlain (Junker, 2004); as a consequence, a minimal core (a minimal set of constraints that preserve the semantics of C_KB) can be interpreted as a minimal conflict, i.e., a minimal set of constraints that are inconsistent with ¬C_KB.
CoreD allows the determination of preferred minimal cores since the algorithm is based on the assumption of a strict lexicographical ordering of the constraints in C_KB. On an informal level, a preferred minimal core can be characterized as follows: if we have different options for choosing a minimal core, we would select the one with the most agreed-upon constraints. For more details on the role of strict lexicographical orderings of constraints we refer the reader to the work of (Junker, 2004) and (Felfernig et al., 2011).
The CoreDiag algorithm generates ¬C_KB from C_KB. It then activates CoreD (see Algorithm 3), which determines a minimal core on the basis of a divide-and-conquer strategy that divides the constraints in C into two subsets (C_1 and C_2) with the goal to figure out whether one of those subsets already contains a minimal core. If C_1 contains a minimal core, C_2 is not further taken into account. If C contains only one element (c_α) and B is still consistent, then c_α is part of the minimal core.

Algorithm 2 CoreDiag(C_KB): ∆
{C_KB = {c_1, c_2, ..., c_n}}
{¬C_KB: the negation of C_KB}
{∆: set of redundant constraints}
¬C_KB ← {¬c_1 ∨ ¬c_2 ∨ ... ∨ ¬c_n};
return (C_KB − CoreD(¬C_KB, ¬C_KB, C_KB));

Algorithm 3 CoreD(B, D, C = {c_1, c_2, ..., c_p}): ∆
{B: consideration set}
{D: constraints added to B}
{C: set of constraints to be checked for redundancy}
if D ≠ ∅ and inconsistent(B) then
  return ∅;
end if
if singleton(C) then
  return C;
end if
k ← ⌈p/2⌉;
C_1 ← {c_1, ..., c_k}; C_2 ← {c_{k+1}, ..., c_p};
∆_2 ← CoreD(B ∪ C_1, C_1, C_2);
∆_1 ← CoreD(B ∪ ∆_2, ∆_2, C_1);
return (∆_1 ∪ ∆_2);

5 EVALUATION

We now compare the performance of CoreDiag with the Sequential algorithm discussed in Section 3. The worst case complexity (and best case complexity) of Sequential in terms of the number of needed consistency checks is n (the number of constraints in C_KB). Worst case and best case complexity are identical since Sequential checks the redundancy of each individual constraint c_i with respect to ¬C_KB. In contrast, the worst case complexity of CoreDiag depends on the number of redundant constraints in C_KB. The worst case complexity of CoreDiag in terms of the number of needed consistency checks is 2c · log2(n/c) + 2c, where n is the number of constraints in C_KB and c is the minimal core size. The best case complexity in terms of the number of needed consistency checks can be achieved if all constraints of the minimal core are positioned in one branch of the CoreD search tree: log2(n/c) + 2c. Consequently, the performance of CoreDiag heavily relies on the number of constraints contained in the minimal core (the lower the number of constraints in the minimal core, the better the performance of CoreDiag).
Table 1 reflects the results of our analysis conducted with the knowledge bases of the configuration benchmark. The tests have been executed on a standard desktop computer (Intel Core 2 Quad Q9400 CPU) using the CLib library. We compared the performance of Sequential and CoreDiag for the different configuration knowledge bases. In order to show the advantages of CoreDiag that come along with an increasing number of redundant constraints, we generated three additional versions of the benchmark knowledge bases (see Table 1) that differ in their redundancy rate R (see Formula 1). The number of iterations per setting was set to 10; for each iteration we applied a randomized constraint ordering. Note that an evaluation of the individual properties of the used knowledge bases is within the scope of future work.

R(C_KB) = |redundant constraints in C_KB| / |constraints in C_KB|   (1)

In addition to the original version (redundancy rate ~0-10%) we generated three knowledge bases with the redundancy rates 50%, 75%, and 87.5%. For example, a knowledge base with redundancy rate 50% can be generated by simply duplicating each constraint of the original knowledge base. Starting with a redundancy rate of 50% we can observe a transition in the runtime performance (CoreDiag starts to perform better than Sequential) due to the increased number of redundant constraints (see the large2 configuration knowledge base in Figure 1). Another outcome of our analysis is that nearly each of the investigated configuration knowledge bases contains redundant constraints (see Table 1). The average runtime for determining configurations without the redundant constraints is lower compared to the runtime with the redundant constraints included (see Table 2; runtimes in ms needed for calculating a solution for a given configuration knowledge base version); for this evaluation as well, the number of iterations per setting was set to 10, and for each iteration we applied a randomized constraint ordering.

Table 1: Application of Sequential (S) (Algorithm 1) and CoreDiag (CD) (Algorithm 2) to the benchmark knowledge bases; cell format: #consistency checks / runtime (ms) / #redundant constraints.

KB (|C_KB|)   | Alg. | ~0-10%             | ~50%               | ~75%                | ~87.5%
Bike A (32)   | S    | 32.0 / 205.4 / 0   | 64.0 / 408.6 / 32  | 128.0 / 1209.0 / 96 | 256.0 / 4073.2 / 224
Bike A (32)   | CD   | 63.0 / 614.4 / 0   | 88.8 / 863.2 / 32  | 106.6 / 1352.0 / 96 | 107.4 / 1737.2 / 224
Bike B (35)   | S    | 35.0 / 256.8 / 1   | 70.0 / 616.4 / 36  | 140.0 / 1710.0 / 106 | 280.0 / 4854.0 / 246
Bike B (35)   | CD   | 68.6 / 693.4 / 1   | 94.0 / 960.8 / 36  | 109.6 / 1365.2 / 106 | 117.8 / 1893.0 / 246
Bike C (37)   | S    | 37.0 / 297.0 / 1   | 74.0 / 696.6 / 38  | 148.0 / 1824.8 / 112 | 296.0 / 5722.8 / 260
Bike C (37)   | CD   | 72.4 / 703.6 / 1   | 101.2 / 1091.2 / 38 | 114.8 / 1524.2 / 112 | 122.2 / 2115.4 / 260
Bike D (34)   | S    | 34.0 / 280.2 / 1   | 68.0 / 606.8 / 35  | 136.0 / 1672.0 / 103 | 272.0 / 5033.6 / 239
Bike D (34)   | CD   | 66.2 / 727.6 / 1   | 94.8 / 1031.8 / 35 | 104.8 / 1433.8 / 103 | 114.8 / 2000.8 / 239
Bike E (35)   | S    | 35.0 / 254.2 / 9   | 70.0 / 601.0 / 44  | 140.0 / 1628.6 / 114 | 280.0 / 5124.8 / 254
Bike E (35)   | CD   | 60.8 / 663.0 / 9   | 83.4 / 821.4 / 44  | 96.0 / 1182.6 / 114 | 103.6 / 1671.2 / 254
Bike F (33)   | S    | 33.0 / 274.0 / 1   | 66.0 / 601.8 / 34  | 132.0 / 1573.8 / 100 | 264.0 / 4525.0 / 232
Bike F (33)   | CD   | 64.6 / 632.8 / 1   | 88.6 / 931.2 / 34  | 108.2 / 1345.6 / 100 | 110.8 / 1822.2 / 232
Bike G (36)   | S    | 36.0 / 281.4 / 2   | 72.0 / 660.6 / 38  | 144.0 / 1729.8 / 110 | 288.0 / 5434.4 / 254
Bike G (36)   | CD   | 70.6 / 714.6 / 2   | 96.0 / 939.8 / 38  | 111.6 / 1409.6 / 110 | 122.4 / 2081.8 / 254
Bike H (24)   | S    | 24.0 / 194.2 / 0   | 48.0 / 398.8 / 24  | 96.0 / 1047.0 / 72  | 192.0 / 3010.0 / 168
Bike H (24)   | CD   | 47.0 / 443.4 / 0   | 63.0 / 587.4 / 24  | 77.2 / 869.8 / 72   | 80.0 / 1240.4 / 168
Bike I (35)   | S    | 35.0 / 268.4 / 1   | 70.0 / 647.0 / 36  | 140.0 / 1696.4 / 106 | 280.0 / 4976.4 / 246
Bike I (35)   | CD   | 68.4 / 708.6 / 1   | 93.6 / 985.0 / 36  | 112.2 / 1371.4 / 106 | 117.0 / 1897.0 / 246
Bike J (46)   | S    | 46.0 / 366.8 / 4   | 92.0 / 867.8 / 50  | 184.0 / 2309.8 / 142 | 368.0 / 7234.4 / 326
Bike J (46)   | CD   | 88.4 / 896.0 / 4   | 119.4 / 1258.8 / 50 | 139.8 / 1886.8 / 142 | 142.0 / 2413.6 / 326
Bike K (35)   | S    | 35.0 / 254.0 / 1   | 70.0 / 805.4 / 36  | 140.0 / 1852.8 / 106 | 280.0 / 5146.8 / 246
Bike K (35)   | CD   | 68.8 / 712.4 / 1   | 95.6 / 1021.6 / 36 | 108.8 / 1374.2 / 106 | 117.6 / 1945.4 / 246
Bike L (37)   | S    | 37.0 / 290.0 / 2   | 74.0 / 645.8 / 39  | 148.0 / 1822.0 / 113 | 296.0 / 5740.8 / 261
Bike L (37)   | CD   | 71.4 / 716.4 / 2   | 96.6 / 1001.6 / 39 | 113.2 / 1425.8 / 113 | 111.0 / 1829.8 / 261
Bike 2 (32)   | S    | 32.0 / 883.0 / 3   | 64.0 / 2386.4 / 35 | 128.0 / 8218.8 / 99 | 256.0 / 37784.4 / 227
Bike 2 (32)   | CD   | 61.2 / 2165.2 / 3  | 85.4 / 3749.6 / 35 | 97.2 / 5693.2 / 99  | 108.0 / 10276.8 / 227
esvs (21)     | S    | 21.0 / 340.0 / 0   | 42.0 / 870.8 / 21  | 84.0 / 2771.8 / 63  | 168.0 / 10231.8 / 147
esvs (21)     | CD   | 41.0 / 724.0 / 0   | 56.0 / 1170.6 / 21 | 65.6 / 1844.0 / 63  | 71.0 / 3296.4 / 147
fs (16)       | S    | 16.0 / 291.6 / 1   | 32.0 / 664.0 / 17  | 64.0 / 1989.2 / 49  | 128.0 / 7238.0 / 113
fs (16)       | CD   | 30.6 / 658.8 / 1   | 42.0 / 933.4 / 17  | 49.2 / 1504.2 / 49  | 52.2 / 2431.8 / 113
hypo (21)     | S    | 21.0 / 116.6 / 1   | 42.0 / 321.0 / 22  | 84.0 / 975.4 / 64   | 168.0 / 3297.6 / 148
hypo (21)     | CD   | 40.6 / 383.8 / 1   | 55.2 / 549.0 / 22  | 62.2 / 802.2 / 64   | 71.0 / 1293.0 / 148
large2 (185)  | S    | 130.0 / 2552.8 / 75 | 260.0 / 4721.8 / 260 | 520.0 / 7860.0 / 445 | 1040.0 / 15025.4 / 630
large2 (185)  | CD   | 76.8 / 1868.8 / 75 | 79.8 / 2085.6 / 260 | 96.6 / 2834.0 / 445 | 103.0 / 3870.4 / 630
6 CONCLUSIONS

The detection of redundant constraints plays a major role in the context of (configuration) knowledge base development and maintenance. In this paper we have discussed two algorithms which can be applied for the identification of minimal cores, i.e., minimal sets of constraints that preserve the semantics of the original knowledge base. The Sequential algorithm can be applied in settings where the number of redundant constraints in the knowledge base is low. The second algorithm (CoreDiag) is more efficient in settings where the knowledge base contains a large number of redundant constraints.
ACKNOWLEDGEMENTS

The work presented in this paper has been conducted within the scope of the research project ICONE (Intelligent Assistance for Configuration Knowledge Base Development and Maintenance) funded by the Austrian Research Promotion Agency (827587).
REFERENCES

(Bakker et al., 1993) R. Bakker, F. Dikker, F. Tempelman, and P. Wognum. Diagnosing and solving over-determined constraint satisfaction problems. In 13th International Joint Conference on Artificial Intelligence (IJCAI'93), pages 276–281, Chambery, France, 1993.
(Chklovski and Gil, 2005) T. Chklovski and Y. Gil. An analysis of knowledge collected from volunteer contributors. In 20th National Conference on Artificial Intelligence (AAAI'05), pages 564–571, Pittsburgh, PA, 2005.
(Fahad and Qadir, 2008) M. Fahad and M. Qadir. A framework for ontology evaluation. Pages 149–158, Toulouse, France, 2008.
(Felfernig and Burke, 2008) A. Felfernig and R. Burke. Constraint-based recommender systems: Technologies and research issues. In ACM International Conference on Electronic Commerce (ICEC'08), pages 17–26, Innsbruck, Austria, 2008.
(Felfernig et al., 2004) A. Felfernig, G. Friedrich, D. Jannach, and M. Stumptner. Consistency-based diagnosis of configuration knowledge bases. Artificial Intelligence, 152(2):213–234, 2004.
(Felfernig et al., 2011) A. Felfernig, M. Schubert, and C. Zehentner. An efficient diagnosis algorithm for inconsistent constraint sets. Artificial Intelligence for Engineering Design, Analysis, and Manufacturing (AIEDAM), 25(2):175–184, 2011.
(Friedrich and Shchekotykhin, 2005) G. Friedrich and K. Shchekotykhin. A general diagnosis method for ontologies. In 4th International Semantic Web Conference (ISWC 2005), number 3729 in Lecture Notes in Computer Science, pages 232–246, Galway, Ireland, 2005. Springer.
(Grimm and Wissmann, 2011) S. Grimm and J. Wissmann. Elimination of redundancy in ontologies. In Extended Semantic Web Conference (ESWC 2011), pages 260–274, Heraklion, Greece, 2011.
(Junker, 2004) U. Junker. QuickXPlain: Preferred explanations and relaxations for over-constrained problems. In 19th National Conference on Artificial Intelligence (AAAI'04), pages 167–172, San Jose, CA, 2004.
(Levy and Sagiv, 1992) A. Levy and Y. Sagiv. Constraints and redundancy in datalog. In 11th ACM Symposium on Principles of Database Systems (PODS'92), pages 67–80, San Diego, CA, 1992.
(Piette, 2008) C. Piette. Let the solver deal with redundancy. In 20th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'08), pages 67–73, Dayton, OH, 2008.
(Richardson and Domingos, 2003) M. Richardson and P. Domingos. Building large knowledge bases by mass collaboration. In 2nd International Conference on Knowledge Capture (K-CAP'03), pages 129–137, Sanibel Island, FL, 2003.
(Sabin and Freuder, 1999) M. Sabin and E. Freuder. Detecting and resolving inconsistency and redundancy in conditional constraint satisfaction problems. In AAAI 1999 Workshop on Configuration, pages 90–94, Orlando, FL, 1999.
(Tsang, 1993) E. Tsang. Foundations of Constraint Satisfaction. Academic Press, London, 1993.

22nd International Workshop on Principles of Diagnosis (DX 2011)