A dichotomy theorem for nonuniform CSPs simplified
Andrei A. Bulatov
Abstract
In a non-uniform Constraint Satisfaction problem CSP(Γ), where Γ is a set of relations on a finite set A, the goal is to find an assignment of values to variables subject to constraints imposed on specified sets of variables using the relations from Γ. The Dichotomy Conjecture for the non-uniform CSP states that for every constraint language Γ the problem CSP(Γ) is either solvable in polynomial time or is NP-complete. It was proposed by Feder and Vardi in their seminal 1993 paper. In this paper we confirm the Dichotomy Conjecture.
In a Constraint Satisfaction Problem (CSP) the question is to decide whether or not it is possible to satisfy a given set of constraints. One of the standard ways to specify a constraint is to require that a combination of values of a certain set of variables belongs to a given relation. If the constraints allowed in a problem have to come from some set Γ of relations, such a restricted problem is referred to as a nonuniform CSP and denoted CSP(Γ). The set Γ is then called a constraint language. Nonuniform CSPs not only provide a powerful framework ubiquitous across a wide range of disciplines from theoretical computer science to computer vision, but also admit natural and elegant reformulations, such as the homomorphism problem, and characterizations, in particular, as the class of problems equivalent to the logic class MMSNP. Many different versions of the CSP have been studied across various fields. These include CSPs over infinite sets, counting CSPs (and the related Holant problem and the problem of computing partition functions), several variants of optimization CSPs, valued CSPs, quantified CSPs, and numerous related problems. The reader is referred to the recent book [50] for a survey of the state of the art in some of these areas. In this paper, however, we focus on the decision nonuniform CSP and its complexity.

A systematic study of the complexity of nonuniform CSPs was started by Schaefer in 1978 [58], who showed that for every constraint language Γ over a 2-element set the problem CSP(Γ) is either solvable in polynomial time or is NP-complete. Schaefer also asked about the complexity of
CSP(Γ) for languages over larger sets. The next step in the study of nonuniform CSPs was made in the seminal paper by Feder and Vardi [35, 36], who, apart from considering numerous aspects of the problem, posed the Dichotomy Conjecture, which states that for every finite constraint language Γ over a finite set the problem CSP(Γ) is either solvable in polynomial time or is NP-complete. This conjecture has become a focal point of CSP research, and most of the effort in this area revolves to some extent around the Dichotomy Conjecture.

The complexity of the CSP in general and the Dichotomy Conjecture in particular have been studied by several research communities using a variety of methods, each contributing an important aspect of the problem. The CSP has been an established area in artificial intelligence for decades, and apart from developing efficient general methods of solving CSPs, researchers have tried to identify tractable fragments of the problem [34]. A very important special case of the CSP, the (Di)Graph Homomorphism problem and the H-Coloring problem, has been actively studied in the graph theory community, see, e.g., [41, 40] and subsequent works by Hell, Feder, Bang-Jensen, Rafiey and others. Homomorphism duality, introduced in these works, has been very useful in understanding the structure of constraint problems. The CSP plays a major role and has been successfully studied in database theory, logic and model theory [47, 48, 39], although the version of the problem mostly used there is not necessarily nonuniform. Logic games and strategies are now a standard tool in most CSP algorithms. An interesting approach to the Dichotomy Conjecture through long codes was suggested by Kun and Szegedy [51]. Brown-Cohen and Raghavendra proposed to study the conjecture using techniques based on decay of correlations [11]. In this paper we use the algebraic structure of the CSP, which is briefly discussed next.

The most effective approach to the study of the CSP turned out to be the algebraic approach that associates every constraint language with its (universal) algebra of polymorphisms.
This approach was first developed in a series of papers by Jeavons and coauthors [44, 45, 46] and then refined by Bulatov, Krokhin, Barto, Kozik, Maroti, Zhuk and others [5, 8, 6, 28, 16, 30, 54, 55, 60, 61]. While the complexity of CSP(Γ) had already been determined for some interesting classes of structures such as graphs [41], the algebraic approach allowed researchers to confirm the Dichotomy Conjecture in a number of more general cases: for languages over a set of size up to 7 [12, 17, 53, 61], so-called conservative languages [14, 18, 19, 3], and some classes of digraphs [7]. It also helped to design the main classes of CSP algorithms [6, 27, 21, 10, 43], and to refine the exact complexity of the CSP [1, 8, 33, 52]. In this paper we confirm the Dichotomy Conjecture for arbitrary languages over finite sets. More precisely, we prove the following
Theorem 1
For any finite constraint language Γ over a finite set the problem CSP(Γ) is either solvable in polynomial time or is NP-complete.
The same result has been independently obtained by Zhuk [62, 63, 64]. The proved criterion matches the algebraic form of the Dichotomy Conjecture suggested in [28]. The hardness part of the conjecture has been known for a long time. Therefore the main achievement of this paper is a polynomial time algorithm for problems satisfying the tractability condition from [28].

Using the algebraic language we can state the result in a stronger form. Let A be a finite idempotent algebra and let CSP(A) denote the union of problems CSP(Γ) such that every term operation of A is a polymorphism of Γ. The problem CSP(A) is no longer a nonuniform CSP, and Theorem 1 allows for problems CSP(Γ) ⊆ CSP(A) to have different solution algorithms even when A meets the tractability condition. We show that the solution algorithm only depends on the algebra A.

Theorem 2
For a finite idempotent algebra A that satisfies the conditions of the Dichotomy Conjecture there is a uniform solution algorithm for CSP(A).

An interesting question arising from Theorems 1 and 2 is known as the Meta-problem: given a constraint language or a finite algebra, decide whether or not it satisfies the conditions of the theorems. The answer to this question is not quite simple; for a thorough study of the Meta-problem see [32, 38].

We start with introducing the terminology and notation for CSPs that is used throughout the paper, and recall the basics of the algebraic approach. Then in Section 4 we introduce the key ingredients used in the algorithm: separation of congruences and centralizers. In Section 5 we apply these concepts to CSPs, first, to demonstrate how centralizers help to decompose an instance into smaller subinstances, and, second, to introduce a new kind of minimality condition for CSPs, block minimality. After that we state the main results used by the algorithm and describe the algorithm itself. The last part of the paper, Sections 6–9, is devoted to proving the technical results.
For a detailed introduction to the CSP and the algebraic approach to its structure the reader is referred to the recent survey by Barto et al. [9]. The basics of universal algebra can be learned from the textbook [31]. In the preliminaries to this paper we therefore focus on what is needed for our result.
The ‘AI’ formulation of the CSP best fits our purpose. Fix a finite set A and let Γ be a constraint language over A, that is, a set (not necessarily finite) of relations over A. The (nonuniform) Constraint Satisfaction Problem (CSP) associated with the language Γ is the problem CSP(Γ), in which an instance is a pair (V, C), where V is a set of variables, and C is a set of constraints, i.e. pairs ⟨s, R⟩, where s = (v_1, ..., v_k) is a tuple of variables from V, the constraint scope, and R ∈ Γ is the k-ary constraint relation. We always assume that relations are given explicitly by a list of tuples. The way constraints are represented does not matter if Γ is finite, but it may change the complexity of the problems for infinite languages. The goal is to find a solution, i.e., a mapping ϕ : V → A such that for every constraint ⟨s, R⟩ ∈ C, ϕ(s) ∈ R.

Jeavons et al. in [44, 45] were the first to observe that higher order symmetries of constraint languages, called polymorphisms, play a significant role in the study of the complexity of the CSP. A polymorphism of a relation R over A is an operation f(x_1, ..., x_k) on A such that for any choice of a_1, ..., a_k ∈ R we have f(a_1, ..., a_k) ∈ R. If this is the case we also say that f preserves R, or that R is invariant with respect to f. A polymorphism of a constraint language Γ is an operation that is a polymorphism of every R ∈ Γ.

Theorem 3 ([44, 45])
For constraint languages Γ, ∆, where Γ is finite, if every polymorphism of ∆ is also a polymorphism of Γ, then CSP(Γ) is polynomial time reducible to CSP(∆).

Listed below are several types of polymorphisms that occur frequently throughout the paper. The presence of each of these polymorphisms imposes strong restrictions on the structure of invariant relations that can be used in designing a solution algorithm. Some such results will be mentioned later.
– A semilattice operation is a binary operation f(x, y) such that f(x, x) = x, f(x, y) = f(y, x), and f(x, f(y, z)) = f(f(x, y), z) for all x, y, z ∈ A;
– a k-ary near-unanimity operation is a k-ary operation u(x_1, ..., x_k) such that u(y, x, ..., x) = u(x, y, x, ..., x) = ... = u(x, ..., x, y) = x for all x, y ∈ A; a ternary near-unanimity operation m is called a majority operation; it satisfies the equations m(y, x, x) = m(x, y, x) = m(x, x, y) = x;
– a Mal'tsev operation is a ternary operation h(x, y, z) satisfying the equations h(x, y, y) = h(y, y, x) = x for all x, y ∈ A; the affine operation x − y + z of an Abelian group is a special case of a Mal'tsev operation;
– a k-ary weak near-unanimity operation is a k-ary operation w that satisfies the same equations as a near-unanimity operation, w(y, x, ..., x) = ... = w(x, ..., x, y), except for the last one (= x).

To illustrate the effect of polymorphisms on the structure of invariant relations we give a few examples that involve the polymorphisms introduced above. First, we need some terminology and notation.

By [n] we denote the set {1, ..., n}. For sets A_1, ..., A_n, tuples from A_1 × ... × A_n are denoted in boldface, say, a; the i-th component of a is referred to as a[i]. An n-ary relation R over sets A_1, ..., A_n is any subset of A_1 × ... × A_n. For I = {i_1, ..., i_k} ⊆ [n], by pr_I a, pr_I R we denote the projections pr_I a = (a[i_1], ..., a[i_k]), pr_I R = {pr_I a | a ∈ R} of the tuple a and the relation R. If pr_i R = A_i for each i ∈ [n], the relation R is said to be a subdirect product of A_1 × ... × A_n. Sometimes it is convenient to label the coordinate positions of relations by elements of some set other than [n], e.g. by the variables of a CSP.

Example 1 (1) Let ∨ be the binary operation of disjunction on {0, 1}; as is easily seen, it is a semilattice operation.
The following property of relations invariant under ∨ helps solving the corresponding CSP: a relation R contains the tuple (1, ..., 1) whenever for each coordinate position R contains a tuple with a 1 in that position. Similarly, relations invariant under other semilattice operations on larger sets always contain a sort of 'maximal' tuple.
(2) By the results of [2] a tuple a belongs to an (n-ary) relation R invariant under a k-ary near-unanimity operation if and only if for every (k − 1)-element set I ⊆ [n] we have pr_I a ∈ pr_I R. In particular, if f is the majority operation on {0, 1} given by (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x), and R is a relation on {0, 1}, then a ∈ R if and only if (a[i], a[j]) ∈ pr_{ij} R for all i, j ∈ [n]. This property easily gives rise to a reduction of the corresponding CSP to 2-SAT. (Using the s-t-Connectivity algorithm of Reingold [57] this reduction can be improved to a log-space one.)
(3) If m(x, y, z) = x − y + z is the affine operation of, say, Z_p, p prime, then the relations invariant with respect to m are exactly those that can be represented as solution sets of systems of linear equations over Z_p, and the corresponding CSP can be solved by Gaussian Elimination. One direction is easy to see. If R = {x | x · M = d}, where M is the matrix of the system of equations, and a, b, c ∈ R, then (a − b + c) · M = a · M − b · M + c · M = d − d + d = d, implying m(a, b, c) ∈ R. The other direction is more involved. ⋄

The next step in discovering more structure behind nonuniform CSPs was made in [28], where universal algebras were brought into the picture. A (universal) algebra is a pair A = (A, F) consisting of a set A, the universe of A, and a set F of operations on A.
Operations from F (called basic) together with operations that can be obtained from them by means of composition are called the term operations of A. Algebras allow for a more general definition of CSPs than the one used above. Let CSP(A) denote the class of nonuniform CSPs {CSP(Γ) | Γ ⊆ Inv(F), Γ finite}, where Inv(F) denotes the set of all relations invariant with respect to all operations from F. Note that the tractability of CSP(A) can be understood in two ways: as the existence of a polynomial-time algorithm for every CSP(Γ) from this class, or as the existence of a uniform polynomial-time algorithm for all such problems. One of the implications of our results is that these two types of tractability are the same. From the formal standpoint we will use the stronger one.
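Whether a given operation is a polymorphism of a given relation, and hence whether a relation belongs to Inv(F), is a direct finite check. The sketch below (function names are ours, for illustration only) verifies two facts from Example 1: the majority operation (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x) preserves the 2-clause x ∨ y, and the affine operation x − y + z of Z_2 preserves the solution set of x_1 + x_2 + x_3 = 0.

```python
from itertools import product

def preserves(f, arity, relation):
    """Check that f is a polymorphism of `relation`: applying f
    coordinate-wise to any choice of `arity` tuples of the relation
    must again yield a tuple of the relation."""
    rel = set(relation)
    for rows in product(rel, repeat=arity):
        if tuple(f(*col) for col in zip(*rows)) not in rel:
            return False
    return True

def maj(x, y, z):
    # the majority operation on {0, 1}
    return (x & y) | (y & z) | (z & x)

def aff(x, y, z):
    # the affine operation x - y + z of Z_2
    return (x - y + z) % 2

# satisfying assignments of the clause (x or y)
clause = {(0, 1), (1, 0), (1, 1)}
# solution set of x1 + x2 + x3 = 0 over Z_2
parity = {t for t in product((0, 1), repeat=3) if sum(t) % 2 == 0}

print(preserves(maj, 3, clause))  # True
print(preserves(aff, 3, parity))  # True
```

The same brute-force check applies to any finite operation and relation, which is all the Meta-problem-style verification used in the examples of this paper requires.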
We use some structural elements of algebras, the main of which are subalgebras, congruences, and quotient algebras. For B ⊆ A and an operation f on A, by f|_B we denote the restriction of f to B. The algebra B = (B, {f|_B | f ∈ F}) is a subalgebra of A if f(b_1, ..., b_k) ∈ B for any b_1, ..., b_k ∈ B and any f ∈ F.

Congruences play a very significant role in our algorithm, and we discuss them in more detail. A congruence is an equivalence relation α ∈ Inv(F). This means that for any operation f ∈ F and any (a_1, b_1), ..., (a_k, b_k) ∈ α it holds that (f(a_1, ..., a_k), f(b_1, ..., b_k)) ∈ α. Hence one can define an algebra on A/α, the set of α-blocks, by setting f/α(a_1/α, ..., a_k/α) = (f(a_1, ..., a_k))/α for a_1, ..., a_k ∈ A, where a/α denotes the α-block containing a. The algebra A/α is called the quotient algebra modulo α. Often the fact that a, b are related by a congruence α is denoted a ≡_α b.

Example 2
The following are examples of congruences and quotient algebras.
(1) Let A be any algebra. Then the equality relation 0_A and the full binary relation 1_A on A are congruences of A. The quotient algebra A/0_A is A itself, while A/1_A is a 1-element algebra.
(2) Let L_n be an n-dimensional vector space and L′ its k-dimensional subspace, k ≤ n. The binary relation π given by: (a, b) ∈ π iff a, b have the same orthogonal projection on L′, is a congruence of L_n, and L_n/π is L′.
(3) The next example will be our running example throughout the paper. Let A = {0, 1, 2}, and let A_M be the algebra with universe A and two basic operations: a binary operation r such that r(0, 0) = r(0, 1) = r(2, 0) = r(0, 2) = r(2, 1) = 0, r(1, 1) = r(1, 0) = r(1, 2) = 1, r(2, 2) = 2; and a ternary operation t such that t(x, y, z) = x − y + z if x, y, z ∈ {0, 1}, where +, − are the operations of Z_2, t(2, 2, 2) = 2, and otherwise t(x, y, z) = t(x′, y′, z′), where x′ = x if x ∈ {0, 1} and x′ = 0 if x = 2; the values y′, z′ are obtained from y, z by the same rule. It is an easy exercise to verify the following facts: (a) B = ({0, 1}, r|_{0,1}, t|_{0,1}) and C = ({0, 2}, r|_{0,2}, t|_{0,2}) are subalgebras of A_M; (b) the partition {0, 1}, {2} is a congruence of A_M, let us denote it θ; (c) the algebra C is basically a semilattice, that is, a set with a semilattice operation, see Fig. 1(a).
The classes of the congruence θ are 0/θ = {0, 1} and 2/θ = {2}. The quotient algebra A_M/θ is also basically a semilattice, as r/θ(0/θ, 0/θ) = r/θ(0/θ, 2/θ) = r/θ(2/θ, 0/θ) = 0/θ and r/θ(2/θ, 2/θ) = 2/θ. ⋄

Figure 1: (a) Algebra A_M. (b) Algebra A_N. Dots represent elements, ovals represent subalgebras, and arrows represent semilattice edges (see Section 3.2).

Figure 2: (a) The congruence lattice of algebra A_M; (b) the congruence lattice of a subdirectly irreducible algebra.

The (ordered) set of all congruences of A is denoted by Con(A). This set is actually a lattice, that is, the operations of meet ∧ and join ∨ can be defined so that α ∧ β is the greatest lower bound of α, β ∈ Con(A) and α ∨ β is the least upper bound of α, β. Fig. 2(a) shows Con(A_M) for the algebra A_M from Example 2(3). By HS(A) we denote the set of all quotient algebras of all subalgebras of A.

The results of [28] reduce the dichotomy conjecture to idempotent algebras. An algebra A = (A, F) is said to be idempotent if every operation f ∈ F satisfies the equation f(x, ..., x) = x. If A is idempotent, then all the constant relations {(a)}, a ∈ A, are invariant under F. Therefore studying CSPs over idempotent algebras is the same as studying the CSPs that allow all constant relations. Another useful property of idempotent algebras is that every block of each of their congruences is a subalgebra.
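The facts claimed in Example 2(3) can be verified mechanically. The sketch below (operation tables transcribed from the example; helper names are ours) checks by brute force that the partition {0, 1}, {2} is a congruence of A_M.

```python
from itertools import product

A = (0, 1, 2)

def r(x, y):
    # the binary operation r of the running example A_M
    if (x, y) in {(0, 0), (0, 1), (0, 2), (2, 0), (2, 1)}:
        return 0
    return 1 if x == 1 else 2  # r(1, y) = 1, r(2, 2) = 2

def t(x, y, z):
    # the ternary operation t of A_M
    if (x, y, z) == (2, 2, 2):
        return 2
    x, y, z = (0 if v == 2 else v for v in (x, y, z))
    return (x - y + z) % 2  # affine operation of Z_2 on {0, 1}

theta = [{0, 1}, {2}]  # the candidate congruence: partition {0,1}, {2}

def blk(a):
    return next(i for i, b in enumerate(theta) if a in b)

def respects(f, arity):
    # theta is preserved iff theta-related argument tuples
    # always produce theta-related results
    for xs, ys in product(product(A, repeat=arity), repeat=2):
        if all(blk(u) == blk(v) for u, v in zip(xs, ys)):
            if blk(f(*xs)) != blk(f(*ys)):
                return False
    return True

print(respects(r, 2) and respects(t, 3))  # True: theta is a congruence
```

The same brute-force test, run over all partitions of A, would enumerate Con(A_M) and confirm that it consists of the three congruences 0, θ, 1 shown in Fig. 2(a).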
We can now state the algebraic version of the dichotomy theorem.

Theorem 4  For a finite idempotent algebra A the following are equivalent:
(1) CSP(A) is solvable in polynomial time;
(2) A has a weak near-unanimity term operation;
(3) every algebra from HS(A) has a nontrivial term operation (that is, one that is not a projection, an operation of the form f(x_1, ..., x_k) = x_i).
Otherwise CSP(A) is NP-complete.

The hardness part of this theorem is proved in [28]; the equivalence of (2) and (3) was proved in [13] and [56]. The equivalence of (1) to (2) and (3) is the main result of this paper. In the rest of the paper we assume all algebras to satisfy conditions (2), (3).

In fact, we will prove a slightly more general result. Let A be a finite class of finite idempotent similar algebras, that is, algebras whose basic operations have the same 'names' and the corresponding arities. One may assume that such a class is produced from a single algebra A by taking subalgebras, quotient algebras, and also retractions introduced in Section 5.5. Then CSP(A) denotes the class of CSP instances whose variables can have different domains belonging to A, see, e.g., [15]. We will design an algorithm for CSP(A) whenever there is a weak near-unanimity term for all algebras in A simultaneously.

Leaving aside occasional combinations thereof, there are only two standard types of algorithms solving the CSP. In this section we give a brief introduction to them.
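Condition (2) of Theorem 4 refers to a weak near-unanimity term. Finding such a term is the hard part; checking that a concrete operation satisfies the weak near-unanimity identities is a direct finite check, sketched below (function names are ours, for illustration only).

```python
from itertools import product

def is_wnu(f, k, domain):
    """Check the weak near-unanimity identities for a k-ary operation f:
    idempotency f(x,...,x) = x, and agreement of all placements of the
    odd argument, f(y,x,...,x) = f(x,y,x,...,x) = ... = f(x,...,x,y)."""
    if any(f(*([x] * k)) != x for x in domain):
        return False
    for x, y in product(domain, repeat=2):
        vals = {f(*(y if i == j else x for i in range(k))) for j in range(k)}
        if len(vals) > 1:
            return False
    return True

maj = lambda x, y, z: (x & y) | (y & z) | (z & x)  # majority on {0, 1}
proj = lambda x, y, z: x                           # first projection

print(is_wnu(maj, 3, (0, 1)))   # True
print(is_wnu(proj, 3, (0, 1)))  # False: placements of y disagree
```

Note that the identities do not require the common value to equal x, so, for instance, the affine operation x − y + z of Z_2 also passes this check, in line with Example 1(3) being tractable.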
Algorithms of the first kind are based on the idea of local propagation, which is formally described below. Let P = (V, C) be a CSP instance. For W ⊆ V, by P_W we denote the restriction of P onto W, that is, the instance (W, C_W), where for each C = ⟨s, R⟩ ∈ C the set C_W includes the constraint C_W = ⟨s ∩ W, pr_{s∩W} R⟩; here s ∩ W is the subtuple of s containing all the elements of W in s, say, s ∩ W = (i_1, ..., i_k), and pr_{s∩W} R stands for pr_{{i_1,...,i_k}} R. The set of solutions of P_W will be denoted by S_W.

Unary solutions, that is, solutions when |W| = 1, play a special role. As is easily seen, for v ∈ V the set S_v is just the intersection of the unary projections pr_v R of constraints whose scope contains v. The instance P is said to be 1-minimal if for every v ∈ V and every constraint C = ⟨s, R⟩ ∈ C such that v ∈ s, it holds that pr_v R = S_v. For a 1-minimal instance one may always assume that the set of allowed values for a variable v ∈ V is the set S_v. We call this set the domain of v and assume that CSP instances may have different domains, which nevertheless are always subalgebras or quotient algebras of the original algebra A. It will be convenient to denote the domain of v by A_v. The domain A_v may change as a result of transformations of the instance.

The instance P is said to be (2,3)-consistent if it has a (2,3)-strategy, that is, a collection of relations R_X, X ⊆ V, |X| ≤ 2, satisfying the following conditions (we use R_v, R_vw for R_{v}, R_{v,w}):
– for every X ⊆ V with |X| ≤ 2, R_X ⊆ S_X;
– for every X = {u, v} ⊆ V, any w ∈ V − X, and any (a, b) ∈ R_X, there is c ∈ A_w such that (a, c) ∈ R_uw and (b, c) ∈ R_vw.
Let the collection of relations R_X be denoted by R. A tuple a whose entries are indexed with elements of W ⊆ V and such that pr_X a ∈ R_X for any X ⊆ W, |X| = 2, will be called R-compatible.
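The propagation establishing 1-minimality can be sketched as follows (a minimal arc-consistency-style sketch; names and the representation of instances are ours): repeatedly drop tuples that use eliminated values and shrink each domain to the corresponding unary projection, until a fixed point is reached.

```python
def one_minimal(domains, constraints):
    """Enforce 1-minimality: iterate until for every variable v and every
    constraint whose scope contains v, the unary projection of the
    constraint relation onto v equals the domain A_v."""
    changed = True
    while changed:
        changed = False
        for scope, rel in constraints:
            # drop tuples using an eliminated value
            kept = {t for t in rel
                    if all(t[i] in domains[v] for i, v in enumerate(scope))}
            if kept != rel:
                rel.clear()
                rel.update(kept)
                changed = True
            # shrink each domain to the unary projection
            for i, v in enumerate(scope):
                proj = {t[i] for t in rel}
                if proj != domains[v]:
                    domains[v] &= proj
                    changed = True
    return domains

# toy instance: x < y and y < z over the domain {0, 1, 2}
lt = {(a, b) for a in range(3) for b in range(3) if a < b}
doms = {v: {0, 1, 2} for v in "xyz"}
cons = [(("x", "y"), set(lt)), (("y", "z"), set(lt))]
one_minimal(doms, cons)
print(doms)  # {'x': {0}, 'y': {1}, 'z': {2}}
```

As noted below, if the input relations are invariant under the operations of an algebra A, the tightened relations produced this way remain invariant, since they are obtained by intersections and projections of invariant relations.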
If a (2,3)-consistent instance P with a (2,3)-strategy R satisfies the additional condition
– for every constraint C = ⟨s, R⟩ of P every tuple a ∈ R is R-compatible,
it is called (2,3)-minimal. For k ∈ N, (k, k+1)-strategies, (k, k+1)-consistency, and (k, k+1)-minimality are defined in a similar way, replacing 2, 3 with k, k+1.

The instance P is said to be minimal (or globally minimal) if for every C = ⟨s, R⟩ ∈ C and every a ∈ R there is a solution ϕ ∈ S such that ϕ(s) = a. Similarly, P is said to be globally 1-minimal if for every v ∈ V and a ∈ A_v there is a solution ϕ with ϕ(v) = a.

Any instance can be transformed to a 1-minimal, (2,3)-consistent, or (2,3)-minimal instance in polynomial time using the standard constraint propagation algorithms (see, e.g., [34]). These algorithms work by changing the constraint relations and the domains of the variables, eliminating some tuples and elements from them. We call such a process tightening the instance. It is important to notice that if the original instance belongs to CSP(A) for some algebra A, that is, all its constraint relations are invariant under the basic operations of A, then the constraint relations obtained by propagation algorithms are also invariant under the basic operations of A, and so the resulting instance also belongs to CSP(A). Establishing minimality amounts to solving the problem and therefore cannot always be easily done.

If a constraint propagation algorithm solves a CSP, the problem is said to be of bounded width. More precisely, CSP(Γ) (or CSP(A)) is said to have bounded width if for some k every (k, k+1)-minimal instance from CSP(Γ) (or CSP(A)) has a solution. Problems of bounded width are very well studied, see an older survey [29] and a more recent paper [4].

Theorem 5 ([4, 21, 16, 49])
For an idempotent algebra A the following are equivalent:
(1) CSP(A) has bounded width;
(2) every (2,3)-minimal instance from CSP(A) has a solution;
(3) A has a weak near-unanimity term of arity k for every k ≥ 3;
(4) every algebra from HS(A) has a nontrivial operation, and none of them is equivalent to a module (in a certain precise sense).

The second type of CSP algorithms can be viewed as a generalization of Gaussian elimination, although it utilizes just one property also used by Gaussian elimination: the set of solutions of a system of linear equations or a CSP has a set of generators of size polynomial in the number of variables. The property that for every instance P of CSP(A) its solution space S has a set of generators of polynomial size is nontrivial: there are only exponentially many such generating sets, while, as is easily seen, CSPs may have up to double exponentially many different sets of solutions. Formally, an algebra A = (A, F) has few subpowers if for every n there are only exponentially many n-ary relations in Inv(F).

Algebras with few subpowers are well studied, and the CSP over such an algebra has a polynomial-time solution algorithm, see [10, 43]. In particular, such algebras admit a characterization in terms of the existence of a term operation with special properties, an edge term. We need only a subclass of algebras with few subpowers that appeared in [21, 25] and is defined as follows. A pair of elements a, b ∈ A is said to be a semilattice edge if there is a binary term operation f of A such that f(a, a) = a and f(a, b) = f(b, a) = f(b, b) = b, that is, f is a semilattice operation on {a, b}. For example, the set {0, 2} from Example 2(3) is a semilattice edge (with a = 2, b = 0), and the operation r of A_M witnesses that.

Proposition 6 ([21, 25])
If an idempotent algebra A has no semilattice edges, then it has few subpowers, and therefore CSP(A) is solvable in polynomial time.

Semilattice edges have other useful properties, including the following one, which we use for reducing a CSP to smaller problems.
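Whether a given binary operation witnesses a given pair as a semilattice edge is again a direct check; a sketch (names ours), applied to the operation r of the running example A_M:

```python
def is_semilattice_edge(f, a, b):
    """Check that the binary operation f witnesses {a, b} as a
    semilattice edge: f(a,a) = a and f(a,b) = f(b,a) = f(b,b) = b."""
    return f(a, a) == a and f(a, b) == f(b, a) == f(b, b) == b

def r(x, y):
    # the operation r of A_M from Example 2(3)
    if (x, y) in {(0, 0), (0, 1), (0, 2), (2, 0), (2, 1)}:
        return 0
    return 1 if x == 1 else 2

print(is_semilattice_edge(r, 2, 0))  # True: {0, 2} is a semilattice edge
print(is_semilattice_edge(r, 0, 1))  # False
```

Deciding whether an algebra has any semilattice edge at all requires quantifying over all binary term operations, not just the basic ones, so this check is only one direction of the story.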
Lemma 7 (Proposition 24, [23])
For any idempotent algebra A there is a binary term operation xy of A (think multiplication) such that xy is a semilattice operation on any semilattice edge, and for any a, b ∈ A either ab = a or {a, ab} is a semilattice edge.

Note that any semilattice operation satisfies the conditions of Lemma 7. The operation r of the algebra A_M from Example 2(3) is not a semilattice operation (for instance, it does not satisfy the equation r(x, y) = r(y, x)), but it satisfies the conditions of Lemma 7.

In this section we introduce an alternative definition of the centralizer operator on congruence lattices studied in commutator theory, and study its properties and its connection to decompositions of CSPs. Unlike the vast majority of the literature on the algebraic approach to the CSP, we use not only term operations, but also polynomial operations of an algebra. It should be noted, however, that the first to use polynomials for CSP algorithms was Maroti in [55]. We make use of some ideas from that paper in the next section.

Let f(x_1, ..., x_k, y_1, ..., y_ℓ) be a (k + ℓ)-ary term operation of an algebra A = (A, F) and let b_1, ..., b_ℓ ∈ A. The operation g(x_1, ..., x_k) = f(x_1, ..., x_k, b_1, ..., b_ℓ) is called a polynomial of A. The name 'polynomial' refers to usual polynomials. Indeed, if A is a ring, its polynomials as just defined are the same as polynomials in the regular sense. A polynomial that depends on only one variable, i.e. k = 1, is said to be a unary polynomial.

While polynomials of A do not have to be polymorphisms of relations from Inv(F), congruences and unary polynomials are in a special relationship. More precisely, it is a well known fact that an equivalence relation over A is a congruence if and only if it is preserved by all the unary polynomials of A. If α is a congruence and f is a unary polynomial, by f(α) we denote the set of pairs {(f(a), f(b)) | (a, b) ∈ α}.

Example 3
The unary polynomials of the algebra A_M from Example 2(3) include the following unary operations (these are the polynomials we will use; there are more unary polynomials of A_M): h_1(x) = r(x, 0) = r(x, 1), such that h_1(0) = h_1(2) = 0, h_1(1) = 1; h_2(x) = r(2, x), such that h_2(0) = h_2(1) = 0, h_2(2) = 2; and h_3(x) = r(0, x) = 0. The lattice Con(A_M) has 3 congruences: 0, θ, 1 (see Example 2(3)). As is easily seen, h_2(θ) ⊆ 0 and h_2(1) ⊄ θ, but h_1(1) ⊆ θ and h_3(1) ⊆ 0. ⋄

For an algebra A, a term operation f(x, y_1, ..., y_k), and a ∈ A^k, let f_a(x) = f(x, a). Let α, β ∈ Con(A), α ≤ β, and let (α : β) ⊆ A² denote the greatest congruence such that for any term operation f(x, y_1, ..., y_k) and any a, b ∈ A^k such that (a[i], b[i]) ∈ (α : β) for i ∈ [k], it holds that f_a(β) ⊆ α if and only if f_b(β) ⊆ α. Polynomials of the form f_a, f_b are often called twin polynomials. The congruence (α : β) will be called the centralizer of α, β. The following statement is one of the key ingredients of the algorithm.

Lemma 8 (Corollary 37 [26])
Let (α : β) = 1_A, a, b, c ∈ A, and b ≡_β c. Then (ab, ac) ∈ α, where multiplication is as in Lemma 7.

Example 4
In the algebra A_M, see Example 2(3), the centralizer acts as follows: (0 : θ) = 1 and (θ : 1) = θ. We start with the second centralizer. Since every polynomial preserves congruences, for any term operation h(x, y_1, ..., y_k) and any a, b ∈ A_M^k such that (a[i], b[i]) ∈ θ for i ∈ [k], we have (h_a(x), h_b(x)) ∈ θ for any x. This of course implies (θ : 1) ≥ θ. On the other hand, let f(x, y) = r(y, x). Then
f_0(x) = f(x, 0) = r(0, x) = h_3(x),
f_2(x) = f(x, 2) = r(2, x) = h_2(x),
and f_0(1) ⊆ θ, while f_2(1) ⊄ θ. This means that (0, 2) ∉ (θ : 1) and so (θ : 1) ⊂ 1.

For the first centralizer it suffices to demonstrate that the condition in the definition of the centralizer is satisfied for pairs of twin polynomials of the form (r(a, x), r(b, x)), (r(x, a), r(x, b)), (t(x, a_1, a_2), t(x, b_1, b_2)), (t(a_1, x, a_2), t(b_1, x, b_2)), (t(a_1, a_2, x), t(b_1, b_2, x)) for a, b, a_1, a_2, b_1, b_2 ∈ {0, 1, 2}, which can be verified directly.

Interestingly, Lemma 8 implies that changing the operation r in just one point has a profound effect on the centralizer (0 : θ). Let A_N be the same algebra as A_M with operations r′, t′ defined in the same way as r, t, except that r′(2, 1) = 1 replaces the value r(2, 1) = 0. In this case {1, 2} is also a semilattice edge, see Fig. 1(b). Let again f(x, y) = r′(y, x) and a = 0, b = 2. This time we have
f_0(x) = f(x, 0) = r′(0, x) = h′_3(x),
f_2(x) = f(x, 2) = r′(2, x) = h′_2(x),
where h′_3(x) = 0 for all x ∈ {0, 1, 2} and h′_2(0) = 0, h′_2(1) = 1, h′_2(2) = 2, showing that f_0(θ) ⊆ 0, while f_2(θ) ⊄ 0. ⋄

(Traditionally, the centralizer of two congruences is defined in a different way, see, e.g., [37]. The congruence (α : β) appeared in [42], but completely inconsequentially: its authors did not study it at all, and its relation to the standard notion of centralizer remained unknown. We used the current definition in [22] and called it the quasi-centralizer, again not completely aware of its connection to the standard centralizer. Later Willard [59] showed that the two concepts are equivalent, see [26, Proposition 33] for a proof, and we use 'centralizer' here rather than 'quasi-centralizer'.)

Fig. 3(a),(b) shows the effect of large centralizers (α : β) on the structure of an algebra A, which is a generalization of the phenomena observed in Example 4. Dots there represent α-blocks (assume α is the equality relation), ovals represent β-blocks, call them B and C, and suppose there is at least one semilattice edge between B and C. If (α : β) is the full relation, Lemmas 7 and 8 imply that for any a ∈ B and any b, c ∈ C we have ab = ac, and so ab is the only element of C such that {a, ab} is a semilattice edge (represented by arrows). In other words, we have a mapping from B to C that can also be shown to be injective. We will use this mapping to lift any solution with a value from B to a solution with a value from C.

Figure 3: (a) (α : β) is the full relation; (b) (α : β) is not the full relation.

Finally, we prove an easy corollary of Lemma 8.

Corollary 9
Let α, β ∈ Con(A), α ≤ β, be such that (α : β) ≥ β. Then for every β-block B, if a, b ∈ B and {a, b} is a semilattice edge, then a ≡_α b.

Proof: Let a, b ∈ B, a ≢_α b, form a semilattice edge, that is, ab = ba = b. However, since a ≡_{(α : β)} b, by Lemma 8 it must hold that aa ≡_α ab; as aa = a and ab = b, this gives a ≡_α b, a contradiction. ✷

In this section we introduce the reductions used in the algorithm, and then explain the algorithm itself. The reductions heavily use the algebraic structure of the domains of an instance, and the structure of the instance itself.
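Before turning to the reductions, note that the twin-polynomial computations of Example 4 can be verified mechanically. The sketch below (helper names ours) checks in A_M that the twins f_0(x) = r(0, x) and f_2(x) = r(2, x) behave differently with respect to θ, witnessing (0, 2) ∉ (θ : 1).

```python
from itertools import product

A = (0, 1, 2)

def r(x, y):
    # the operation r of A_M
    if (x, y) in {(0, 0), (0, 1), (0, 2), (2, 0), (2, 1)}:
        return 0
    return 1 if x == 1 else 2

# the congruence theta = {0,1},{2} and the full congruence 1, as sets of pairs
theta = {(a, b) for a, b in product(A, A) if {a, b} <= {0, 1} or a == b}
one = set(product(A, A))

def image(g, cong):
    # f(alpha) = {(f(a), f(b)) | (a, b) in alpha}
    return {(g(a), g(b)) for a, b in cong}

# twin unary polynomials of f(x, y) = r(y, x)
f0 = lambda x: r(0, x)  # f_0 = h_3, constant 0
f2 = lambda x: r(2, x)  # f_2 = h_2

# f_0(1) is contained in theta, f_2(1) is not
print(image(f0, one) <= theta, image(f2, one) <= theta)  # True False
```

Since f_0 and f_2 disagree on whether the image of the full congruence collapses into θ, the pair (0, 2) cannot lie in (θ : 1), exactly as argued in Example 4.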
We have seen in the previous section that large centralizers impose strong restrictions on the structure of an algebra. We start this section by showing that small centralizers imply certain properties of CSP instances as well.

Let R be a binary relation, a subdirect product of A × B, and let α ∈ Con(A), γ ∈ Con(B). The relation R is said to be αγ-aligned if, for any (a, c), (b, d) ∈ R, (a, b) ∈ α if and only if (c, d) ∈ γ. This means that if A_1, ..., A_k are the α-blocks of A, then there are also k γ-blocks of B, and they can be labeled B_1, ..., B_k in such a way that R = (R ∩ (A_1 × B_1)) ∪ ... ∪ (R ∩ (A_k × B_k)).

This definition provides a way to decompose CSP instances. Let P = (V, C) be a (2,3)-minimal instance from CSP(A). We will always assume that a (2,3)-consistent or (2,3)-minimal instance has a constraint C_X = ⟨X, R_X⟩, R_X = S_X, for every X ⊆ V, |X| ≤ 2. So, C contains a constraint C_vw = ⟨(v, w), R_vw⟩ for every v, w ∈ V, and these relations form a (2,3)-strategy for P. Recall that A_v denotes the domain of v ∈ V. Let W ⊆ V and α_v ∈ Con(A_v), v ∈ W, be such that for any v, w ∈ W the relation R_vw is α_vα_w-aligned. The set W is then called a strand of P. We will also say that P_W is α-aligned.

For a strand W and congruences α_v as above there is a one-to-one correspondence between the α_v- and α_w-blocks of A_v and A_w, v, w ∈ W. Moreover, by (2,3)-minimality these correspondences are consistent, that is, if u, v, w ∈ W and B_u, B_v, B_w are α_u-, α_v-, and α_w-blocks, respectively, such that R_uv ∩ (B_u × B_v) ≠ ∅ and R_vw ∩ (B_v × B_w) ≠ ∅, then R_uw ∩ (B_u × B_w) ≠ ∅. This means that P_W can be split into several instances whose domains are α_v-blocks.

Lemma 10
Let P, W, and α_v for each v ∈ W be as above. Then P_W can be decomposed into a collection of instances P_1, . . . , P_k, k constant, P_i = (W, C_i), such that every solution of P_W is a solution of one of the P_i, and for every v ∈ W its domain in P_i is an α_v-block.

Example 5
Let A_M be the algebra introduced in Example 2(3), and let R be the following ternary relation over A_M invariant under r, t, given by R = (…), where the triples, the elements of the relation, are written vertically. Consider the following simple CSP instance from CSP(A_M): P = (V = {v_1, v_2, v_3, v_4, v_5}, {C_1 = ⟨s_1 = (v_1, v_2, v_3), R_1⟩, C_2 = ⟨s_2 = (v_3, v_4, v_5), R_2⟩}), where R_1 = R_2 = R. To make the instance (2,3)-minimal we run the appropriate local propagation algorithm on it. First, such an algorithm adds new binary constraints C_{v_iv_j} = ⟨(v_i, v_j), R_{v_iv_j}⟩ for i, j ∈ [5], starting with R_{v_iv_j} = A_M × A_M. It then iteratively removes pairs from these relations that do not satisfy the (2,3)-minimality condition. Similarly, it tightens the original constraint relations if they violate the conditions of (2,3)-minimality. It is not hard to see that this algorithm does not change the constraints C_1, C_2, and that the new binary relations are as follows: R_{v_1v_3} = R_{v_1v_5} = R_{v_3v_5} = θ, R_{v_1v_2} = R_{v_2v_3} = R_{v_3v_4} = R_{v_4v_5} = Q, and R_{v_1v_4} = R_{v_2v_4} = R_{v_2v_5} = S, where Q = pr_{12} R = (…), S = (…). In order to distinguish elements and congruences of domains belonging to different variables, let the domain of v_i be denoted by A_i, its elements by 0_i, 1_i, 2_i, and the congruences of A_i by 0_i, θ_i, 1_i.

Figure 4: Instance P from Example 5

Let W = {v_1, v_3, v_5} and α_i = θ_i for v_i ∈ W. Then, since R_{v_1v_3} = R_{v_1v_5} = R_{v_3v_5} = θ and these relations are therefore α_iα_j-aligned, i, j ∈ {1, 3, 5}, W is a strand of P. Therefore the instance P_W = ({v_1, v_3, v_5}, {C_{W,1} = ⟨(v_1, v_3), pr_{v_1v_3} R_1⟩, C_{W,2} = ⟨(v_3, v_5), pr_{v_3v_5} R_2⟩}) can be decomposed into a disjoint union of two instances P_1 = ({v_1, v_3, v_5}, {⟨(v_1, v_3), Q_1⟩, ⟨(v_3, v_5), Q_2⟩}), P_2 = ({v_1, v_3, v_5}, {⟨(v_1, v_3), S_1⟩, ⟨(v_3, v_5), S_2⟩}), where Q_1 = {0, 1} × {0, 1}, Q_2 = {0, 1} × {0, 1}, S_1 = {(2, 2)}, S_2 = {(2, 2)}.
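The alignment condition used in this example is purely combinatorial and can be checked mechanically. The following sketch is an illustration only; the encoding of a congruence by the partition (list of blocks) it induces, and all names, are ours rather than the paper's.

```python
# Illustration only: α and γ are given by the partitions they induce;
# R is a set of pairs.

def block_index(partition, x):
    """Index of the block of `partition` containing x."""
    for i, blk in enumerate(partition):
        if x in blk:
            return i
    raise ValueError(f"{x} is not covered by the partition")

def is_aligned(R, alpha, gamma):
    """True iff for all (a,c),(b,d) in R: a,b α-equal  <=>  c,d γ-equal."""
    for (a, c) in R:
        for (b, d) in R:
            same_alpha = block_index(alpha, a) == block_index(alpha, b)
            same_gamma = block_index(gamma, c) == block_index(gamma, d)
            if same_alpha != same_gamma:
                return False
    return True

# Toy data: α-blocks {0,1},{2}; γ-blocks {"x"},{"y"}.
alpha = [{0, 1}, {2}]
gamma = [{"x"}, {"y"}]
R = {(0, "x"), (1, "x"), (2, "y")}      # respects the block pairing
assert is_aligned(R, alpha, gamma)
assert not is_aligned(R | {(2, "x")}, alpha, gamma)
```

Adding a single pair that crosses the block pairing destroys alignment, which is exactly what prevents the non-strand pairs of variables above from being αβ-aligned.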
⋄

In order to formulate the algorithm properly we need one more transformation of algebras. An algebra A is said to be subdirectly irreducible if the intersection of all its nontrivial (different from the equality relation) congruences is nontrivial. This smallest nontrivial congruence µ_A is called the monolith of A, see Fig. 2(b). For instance, the algebra A_M from Example 2(3) is subdirectly irreducible, because it has the smallest nontrivial congruence, θ. It is a folklore observation that any CSP instance can be transformed in polynomial time to an instance in which the domain of every variable is a subdirectly irreducible algebra. We will assume this property of all the instances we consider.

Using Lemma 10 we introduce a new type of consistency of a CSP instance, block-minimality, which will be crucial for our algorithm. In a certain sense it is similar to the standard local consistency notions, as it is also defined through a family of relations that have to be consistent in a certain way. However, block-minimality is not quite local, and is more difficult to establish, as it involves solving smaller CSP instances recursively. The definitions below are designed to allow for an efficient procedure to establish block-minimality. This is achieved either by allowing for decomposing a subinstance into instances over smaller domains as in Lemma 10, or by replacing large domains with their quotient algebras.

Let α_v be a congruence of A_v for v ∈ V. By P/α we denote the instance (V, C_α) constructed as follows: the domain of v ∈ V is A_v/α_v; for every constraint C = ⟨s, R⟩ ∈ C, s = (v_1, . . . , v_k), the set C_α includes the constraint ⟨s, R/α⟩, where R/α = {(a[v_1]/α_{v_1}, . . . , a[v_k]/α_{v_k}) | a ∈ R}.

Example 6
Consider the instance P from Example 5, and let α_{v_i} = θ_i for each i ∈ [5]. Then P/α is the instance over A_M/θ given by P/α = (V, {⟨s_1, R_1/α⟩, ⟨s_2, R_2/α⟩}), where R_1/α = R_2/α = (…). ⋄

Let P = (V, C) be a (2,3)-minimal instance, and for X ⊆ V, |X| ≤ 2, let C_X = ⟨X, R_X⟩ be the constraint in which R_X is the set of partial solutions on X.

Recall that an algebra A_v is said to be semilattice free if it does not contain semilattice edges. Let size(P) denote the maximal size of the domains of P that are not semilattice free, and let MAX(P) be the set of variables v ∈ V such that |A_v| = size(P) and A_v is not semilattice free. Finally, for Y ⊆ V let µ^Y_v = µ_v if v ∈ Y and µ^Y_v = 0_v otherwise. Instance P is said to be block-minimal if

(BM) for every strand U ⊆ V the problem P/U = P/µ^Y, where Y = MAX(P) − U, is minimal.

The definition of block-minimality is designed in such a way that block-minimality can be efficiently established. Observe that a strand can be large, even equal to V. However, P/U splits into a union of disjoint problems over smaller domains.

Example 7 Let us consider again the instance P from Example 5. In that example we found all its binary solutions, and now we use them to find strands and to verify that this instance is block-minimal. As we saw in Example 5, unless i, j ∈ {1, 3, 5}, the relation R_{v_iv_j} is not αβ-aligned for any congruences α, β except the full ones. This means that the only strands of P are W = {v_1, v_3, v_5} and all the 1-element sets of variables.

Now we check the condition (BM) for P. Consider W. For this strand we have Y = {v_2, v_4}, and so µ^Y_{v_1} = µ^Y_{v_3} = µ^Y_{v_5} = 0 and µ^Y_{v_2} = µ^Y_{v_4} = θ. The problem P/W is the following problem: (V, {C′_1, C′_2}), where C′_1 = ⟨s_1, R^θ_1⟩, C′_2 = ⟨s_2, R^θ_2⟩, and R^θ_1 = R^θ_2 = (…). Now, consider first C_1.
For any tuple (a_1, a_2, a_3) ∈ R^θ_1, that is, an assignment v_1 = a_1 ∈ A_M, v_3 = a_3 ∈ A_M, v_2 = a_2 ∈ A_M/θ, we can extend this assignment by setting v_5 = v_3 and v_4 = 0/θ to obtain a satisfying assignment of P/W. For C_2 the argument is the same.

For 1-element strands consider {v_1}. Then Y = {v_2, v_3, v_4, v_5}, and µ^Y_{v_2} = µ^Y_{v_3} = µ^Y_{v_4} = µ^Y_{v_5} = θ. We have P/{v_1} = (V, {C′′_1, C′′_2}), where C′′_1 = ⟨s_1, R^{θθ}_1⟩, C′′_2 = ⟨s_2, R^{θθ}_2⟩, and R^{θθ}_1 = (…), R^{θθ}_2 = (…). As is easily seen, any assignment to v_1, v_2, v_3 or to v_3, v_4, v_5 can be extended to a solution of P/{v_1}. ⋄

For an instance P we say that an instance P′ is strictly smaller than the instance P if size(P′) < size(P).

Lemma 11
Let P = (V, C) be a (2,3)-minimal instance. Then P can be transformed to an equivalent block-minimal instance P′ by solving a quadratic number of strictly smaller CSPs.

Proof:
To establish block-minimality of P, for every strand U ⊆ V we need to check whether the problem given in condition (BM) is minimal. If all of them are, then P is block-minimal; otherwise some tuples can be removed from some constraint relation R (the set of tuples that remain in R is always a subalgebra, as is easily seen), and the instance P is tightened, in which case we need to repeat the procedure with the tightened instance. Therefore we just need to show how to reduce solving those subproblems to solving strictly smaller CSPs.

By the definition of a strand there is a partition B_{w,1}, . . . , B_{w,ℓ} of A_w for w ∈ U such that for every constraint ⟨s, R⟩ ∈ C, for any w_1, w_2 ∈ s ∩ U, any b ∈ R, and any i ∈ [ℓ], it holds that b[w_1] ∈ B_{w_1,i} if and only if b[w_2] ∈ B_{w_2,i}. Then the problem P/U is a disjoint union of instances P_1, . . . , P_ℓ given by: P_i = (V, C_i), where for every constraint C = ⟨s, R⟩ ∈ C there is C_i = ⟨s, R_i⟩ ∈ C_i such that R_i = {a′ | a ∈ R, a[w] ∈ B_{w,i} for each w ∈ s ∩ U}, a′[u] = a[u]/µ^Y_u, Y = MAX(P) − U, for each u ∈ s. Clearly, size(P_i) < size(P) for each i ∈ [ℓ].

In order to establish the minimality of P/U it suffices to do the following. Take C = ⟨s, R⟩ ∈ C and a ∈ R. We need to check that a′ = a/µ^Y, Y = MAX(P) − U, extends to a solution of at least one of the problems P_1, . . . , P_ℓ. For i ∈ [ℓ] let P′_i be the problem obtained from P_i as follows: fix the values of the variables from s to those of a′, in other words, add the constraint ⟨(w), {a[w]/µ^Y_w}⟩ for each w ∈ s. Then a′ can be extended to a solution of P_i if and only if P′_i has a solution. ✷

We are now in a position to describe our solution algorithm. In the algorithm we distinguish three cases depending on the presence of semilattice edges and centralizers of the domains of variables. In each case we employ different methods of solving or reducing the instance to a strictly smaller one. Algorithm 1,
SolveCSP, gives a more formal description of the solution algorithm.

Let P = (V, C) be a subdirectly irreducible (2,3)-minimal instance. Let Center(P) denote the set of variables v ∈ V such that (0_v : µ_v) = 1_v. Let µ*_v = µ_v if v ∈ MAX(P) ∩ Center(P) and µ*_v = 0_v otherwise.

Semilattice free domains.
If all domains of P are semilattice free, then P can be solved in polynomial time using the few subpowers algorithm, as shown in [43, 21].

Small centralizers. If µ*_v = 0_v for all v ∈ V, then by Theorem 12 block-minimality guarantees that a solution exists, and we can use Lemma 11 to solve the instance.

Theorem 12 If P is subdirectly irreducible, (2,3)-minimal, block-minimal, and MAX(P) ∩ Center(P) = ∅, then P has a solution.

Large centralizers
Suppose that
MAX(P) ∩ Center(P) ≠ ∅. In this case the algorithm proceeds in three stages.

Stage 1.
Consider the problem P/µ*. We establish the global 1-minimality of this problem. If it is tightened in the process, we start solving the new problem from scratch. To check global 1-minimality, for each v ∈ V and every a ∈ A_v/µ*_v, we need to find a solution of the instance, or show that it does not exist. To this end, add the constraint ⟨(v), {a}⟩ to P/µ*. The resulting problem belongs to CSP(A), since A_v is idempotent, and hence {a} is a subalgebra of A_v/µ*_v. Then we establish (2,3)-minimality and block-minimality of the resulting problem. Let us denote it P′. There are two possibilities. First, if size(P′) < size(P), then P′ is a problem strictly smaller than P and can be solved by recursively calling Algorithm 1 on P′. If size(P′) = size(P) then, as all the domains A_v of maximal size for v ∈ Center(P) are replaced with their quotient algebras, there is w ̸∈ Center(P) such that |A_w| = size(P) and A_w is not semilattice free. Therefore for every u ∈ Center(P′), for the corresponding domain A′_u we have |A′_u| < size(P) = size(P′). Thus, MAX(P′) ∩ Center(P′) = ∅, and P′ has a solution by Theorem 12.

Stage 2.
For every v ∈ MAX(P) we find a solution ϕ of P/µ* such that there is a ∈ A_v such that {a, ϕ(v)} is a semilattice edge if µ*_v = 0_v, or, if µ*_v = µ_v, there is b ∈ ϕ(v) such that {a, b} is a semilattice edge. Take v ∈ MAX(P) and b ∈ A_v/µ*_v such that {a, b} is a semilattice edge in A_v/µ*_v for some a ∈ A_v/µ*_v. Such a semilattice edge exists, because A_v is not semilattice free. Also, if µ*_v = µ_v, then v ∈ Center(P) and (0_v : µ_v) = 1_v, and by Corollary 9 the semilattice edges of A_v all go between µ_v-blocks. Since P/µ* is globally 1-minimal, there is a solution ϕ_{v,b} such that ϕ_{v,b}(v) = b, and therefore ϕ_{v,b} satisfies the condition. Let MAX(P) = {v_1, . . . , v_ℓ} and b_1, . . . , b_ℓ be the values satisfying the requirements above.

Stage
3. We apply the transformation of P suggested by Maroti in [55]. For a solution ϕ of P/µ*, by P · ϕ we denote the instance (V, C_ϕ) given by the rule: for every C = ⟨s, R⟩ ∈ C the set C_ϕ contains a constraint ⟨s, R · ϕ⟩. To construct R · ϕ choose a tuple b ∈ R such that b[v]/µ*_v = ϕ(v) for all v ∈ s; this is possible because ϕ is a solution of P/µ*. Then set R · ϕ = {a · b | a ∈ R}. By the results of [55] and Lemma 8, the instance P · ϕ has a solution if and only if P does. We now use the solutions ϕ_{v_1,b_1}, . . . , ϕ_{v_ℓ,b_ℓ} to construct a new problem P_1 = (. . . ((P · ϕ_{v_1,b_1}) · ϕ_{v_2,b_2}) · . . .) · ϕ_{v_ℓ,b_ℓ}.

Note that the transformation of P above boils down to a collection of mappings p_v : A_v → A_v, v ∈ V, so called consistent mappings, see Section 5.5, that also satisfy some additional properties. If we now repeat the procedure above starting from P_1 and using the same solutions ϕ_{v_i,b_i}, we obtain an instance P_2, for which the corresponding collection of consistent mappings is p_v ◦ p_v, v ∈ V. More generally, P_{i+1} = (. . . ((P_i · ϕ_{v_1,b_1}) · ϕ_{v_2,b_2}) · . . .) · ϕ_{v_ℓ,b_ℓ}. There is k such that p^k_v is idempotent for every v ∈ V, that is, (p^k_v ◦ p^k_v)(x) = p^k_v(x) for all x. Set P† = P_k. We will show later that size(P†) < size(P).

This last case can be summarized as the following

Theorem 13 If P/µ* is globally 1-minimal, then P can be reduced in polynomial time to a strictly smaller instance over a class of algebras satisfying the conditions of the Dichotomy Conjecture.

We now illustrate the algorithm on our running example.
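The transformation R ↦ R · ϕ of Stage 3 is easy to state operationally. The following minimal sketch is an illustration only: tuples are Python tuples, and the binary term operation · is supplied as a function `dot`; the toy `dot` below merely mimics r(2, 0) = r(2, 1) = 0 from the running example and is not the paper's operation r.

```python
# Illustration only: the transformation R -> R·ϕ of Stage 3, with the
# binary term operation "·" passed in as the function `dot`.

def mult_transform(R, b, dot):
    """R·ϕ = { a·b | a in R }, where a·b is applied coordinate-wise."""
    return {tuple(dot(x, y) for x, y in zip(a, b)) for a in R}

# A toy commutative idempotent operation on {0,1,2} (assumption, mimicking
# r(2,0) = r(2,1) = 0 from the running example):
def dot(x, y):
    if x == y:
        return x
    if 2 in (x, y):
        return 0
    return min(x, y)

R = {(0, 0), (1, 1), (2, 2)}
b = (0, 0)                  # a tuple of R lying in the chosen µ*-classes
assert mult_transform(R, b, dot) == {(0, 0)}
```

Iterating the transformation with the same solutions, as in the construction of P_{i+1} above, corresponds to composing the induced mappings p_v with themselves.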
Example 8
We illustrate the algorithm
SolveCSP on the instance from Example 5. Recall that the domain of each variable is A_M, its monolith is θ, and (0 : θ) is the full relation. This means that size(P) = 3, MAX(P) = V and Center(P) = V, as well. Therefore we are in the case of large centralizers. Set µ*_{v_i} = θ_i for each i ∈ [5] and consider the problem P/µ* = (V, {C*_1 = ⟨s_1, R*⟩, C*_2 = ⟨s_2, R*⟩}), where R* = R_1/θ = R_2/θ is the quotient relation computed in Example 6.

Algorithm 1 Procedure
SolveCSP
Require: A CSP instance P = (V, C) over A
Ensure: A solution of P if one exists, 'NO' otherwise
1: if all the domains are semilattice free then
2:   Solve P using the few subpowers algorithm and RETURN the answer
3: end if
4: Transform P to a subdirectly irreducible, block-minimal and (2,3)-minimal instance
5: µ*_v = µ_v for v ∈ MAX(P) ∩ Center(P) and µ*_v = 0_v otherwise
6: P* = P/µ*
7: /* establish the global 1-minimality of P* */
8: for every v ∈ V and a ∈ A_v/µ*_v do
9:   P′ = P*_(v,a) /* add the constraint ⟨(v), {a}⟩ fixing the value of v to a */
10:  Transform P′ to a subdirectly irreducible, (2,3)-minimal instance P′′
11:  If size(P′′) < size(P), call SolveCSP on P′′ and flag a if P′′ has no solution
12:  Establish block-minimality of P′′; if the problem changes, return to Step 10
13:  If the resulting instance is empty, flag the element a
14: end for
15: If there are flagged values, tighten the instance by removing the flagged elements and start over
16: Use Theorem 13 to reduce P to an instance P† with size(P†) < size(P)
17: Call
SolveCSP on P† and RETURN the answer

It is an easy exercise to show that this instance is globally 1-minimal (every value 0/θ can be extended to the all-0/θ solution, and every value 2/θ can be extended to the all-2/θ solution). This completes Stage 1. For every variable v_i we choose b ∈ A_M/θ such that for some a ∈ A_M/θ the pair {a, b} is a semilattice edge. Since A_M/θ is a 2-element semilattice, setting b = 0/θ and a = 2/θ is the only choice. Therefore ϕ_{v_i,b_i} in our case can be chosen to be the same solution ϕ given by ϕ(v_i) = 0/θ, and Stage 2 is completed. For
Stage 3, first note that in A_M the operation r plays the role of the multiplication ·. Then for each of the constraints C_1, C_2 choose a representative a_1 ∈ R_1 ∩ (ϕ(v_1) × ϕ(v_2) × ϕ(v_3)) = R_1 ∩ {0, 1}^3, a_2 ∈ R_2 ∩ (ϕ(v_3) × ϕ(v_4) × ϕ(v_5)) = R_2 ∩ {0, 1}^3, and set P′ = ({v_1, . . . , v_5}, {C′_1 = ⟨(v_1, v_2, v_3), R′_1⟩, C′_2 = ⟨(v_3, v_4, v_5), R′_2⟩}), where R′_1 = r(R_1, a_1), R′_2 = r(R_2, a_2). Since r(2,
0) = r (2 ,
1) = 0, regardless of the choice of a_1, a_2, in our case R′_1 ⊆ R_1, R′_2 ⊆ R_2, and they are invariant with respect to the affine operation of Z_2. Therefore the instance P′ can be viewed as a system of linear equations over Z_2 (this system is actually empty in our case), and can be easily solved. ⋄

Using Lemma 11 and Theorems 12, 13 it is not difficult to see that the algorithm runs in polynomial time.
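The final reduction in the example ends with a system of linear equations over Z_2 (empty in this particular case). For illustration, such systems can be solved by Gaussian elimination over GF(2); the following self-contained sketch uses our own encoding, not anything from the paper.

```python
# Illustration only: Gaussian elimination over GF(2).  Each equation is a
# pair (coefficients, rhs) with entries in {0, 1}; returns one solution
# (as a 0/1 list) or None if the system is inconsistent.

def solve_gf2(eqs, n):
    eqs = [(list(a), b) for a, b in eqs]
    where = [-1] * n                     # where[col] = pivot row for col
    row = 0
    for col in range(n):
        sel = next((i for i in range(row, len(eqs)) if eqs[i][0][col]), None)
        if sel is None:
            continue
        eqs[row], eqs[sel] = eqs[sel], eqs[row]
        for i in range(len(eqs)):        # eliminate col everywhere else
            if i != row and eqs[i][0][col]:
                eqs[i] = ([x ^ y for x, y in zip(eqs[i][0], eqs[row][0])],
                          eqs[i][1] ^ eqs[row][1])
        where[col] = row
        row += 1
    if any(b for _, b in eqs[row:]):     # a leftover row 0 = 1: no solution
        return None
    return [eqs[where[c]][1] if where[c] >= 0 else 0 for c in range(n)]

# x+y = 1, y+z = 1, x+z = 0 over Z_2:
assert solve_gf2([([1, 1, 0], 1), ([0, 1, 1], 1), ([1, 0, 1], 0)], 3) == [0, 1, 0]
```

An inconsistent system, such as x = 1 together with x = 0, yields `None`.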
Theorem 14
Algorithm SolveCSP (Algorithm 1) correctly solves every instance from
CSP( A ) and runs in polynomial time. Proof:
By the results of [21, 25] the algorithm correctly solves the given instance P
in polynomial time if the conditions of Step 1 are true. Lemma 11 implies that Steps 4 and 12 can be completed by recursing to strictly smaller instances.

Next we show that the for-loop in Steps 8–14 checks whether P* = P/µ* is globally 1-minimal. For this we need to verify that a value a is flagged if and only if P* has no solution ϕ with ϕ(v) = a, and therefore, if no values are flagged, then P* is globally 1-minimal. If ϕ(v) = a for some solution ϕ of P*, then ϕ is a solution of the instance P′ constructed in Step 9. In this case Steps 11, 12 cannot result in an empty instance. Suppose a ∈ A_v/µ*_v is not flagged. If size(P′′) < size(P), this means that P′′, and therefore P′, has a solution. Otherwise, this means that establishing block-minimality of P′′ is successful. In this case P′′ has a solution by Theorem 12, because MAX(P′′) ∩ Center(P′′) = ∅. This in turn implies that P′ has a solution. Observe also that the set of unflagged values for each variable v ∈ V is a subalgebra of A_v/µ*_v.
Indeed, the set of solutions of P* is a subalgebra S* of ∏_{v ∈ V} A_v/µ*_v, and the set of unflagged values is the projection of S* on the coordinate position v.

Finally, if Steps 8–15 are completed without restarts, Steps 16, 17 can be completed by Theorem 13, and by recursing on P′ such that either size(P′) < size(P) or MAX(P′) ∩ Center(P′) = ∅.

To see that the algorithm runs in polynomial time it suffices to observe that
(1) the number of restarts in Steps 4 and 15 is at most linear, as the instance becomes smaller after every restart; therefore the number of times Steps 4–15 are executed together is at most linear;
(2) the number of iterations of the for-loop in Steps 8–14 is linear;
(3) the number of restarts in Steps 10 and 12 is at most linear, as the instance becomes smaller after every iteration;
(4) every call of SolveCSP made when establishing block-minimality in Steps 4 and 12 is on an instance strictly smaller than P, and therefore the depth of recursion in Steps 4, 11, 12, and 17 is bounded by size(P).
Thus a more thorough estimation gives a bound on the running time of O(n^k), where k is the maximal size of an algebra in A. ✷

Following [55], let P = (V, C) be an instance and p_v : A_v → A_v, v ∈ V. The mappings p_v, v ∈ V, are said to be consistent if for any ⟨s, R⟩ ∈ C, s = (v_1, . . . , v_k), and any tuple a ∈ R the tuple (p_{v_1}(a[1]), . . . , p_{v_k}(a[k])) belongs to R. It is easy to see that the composition of two families of consistent mappings is also a family of consistent mappings. For consistent idempotent mappings p_v, by p(P) we denote the retraction of P, that is, P restricted to the images of the p_v. In this case P has a solution if and only if p(P) has one, see [55].

Let ϕ be a solution of P/µ*. We define p^ϕ_v : A_v → A_v as follows: p^ϕ_v = q^k_v, where q_v(a) = a · b_v, element b_v is any element of ϕ(v), and k is such that q^k_v is idempotent for all v ∈ V.
Note that by Lemma 8 this mapping is properly defined even if µ*_v ≠ 0_v.

Lemma 15
Mappings p^ϕ_v, v ∈ V, are consistent.

Proof: Take any C = ⟨s, R⟩ ∈ C. Since ϕ is a solution of P/µ*, there is b ∈ R such that b[v] ∈ ϕ(v) for v ∈ s. Then for any a ∈ R, q(a) = a · b ∈ R, and this product does not depend on the choice of b, as follows from Lemma 8. Iterating this operation also produces a tuple from R. ✷

We are now in a position to prove Theorem 13.
Proof: [of Theorem 13] We need to show three properties of the problem P† constructed in Stage 3: (a) P has a solution if and only if P† does; (b) for every v ∈ MAX(P), |A†_v| < |A_v|, where A†_v is the domain of v in P†; and (c) every algebra A†_v has a weak near-unanimity term operation. We use the inductive definition of P† given in Stage 3.

Recall that MAX(P) = {v_1, . . . , v_ℓ}, and a_i, b_i ∈ A_{v_i} are such that a_i ≤ b_i and b_i ∈ ϕ_{v_i,b_i}(v_i), where ϕ_{v_i,b_i} is a solution of P/µ*. For v ∈ V let the mapping p^i_v : A_v → A_v be given by p^i_v(x) = (. . . (x · ϕ_{v_1,b_1}(v)) · . . .) · ϕ_{v_i,b_i}(v), where, if µ*_v = µ_v, by Lemma 8 the multiplication by ϕ_{v_j,b_j}(v) does not depend on the choice of a representative from ϕ_{v_j,b_j}(v). By Lemma 15, {p^i_v} for every i, and so {p^ℓ_v} and {(p^ℓ_v)^k}, are collections of consistent mappings. Now (a) follows from [55].

Next we show that for every j ≤ i ≤ ℓ it holds that |p^i_{v_j}(A_{v_j})| < |A_{v_j}|. Since applying mappings to a set does not increase its cardinality, this implies (b). If |p^{j−1}_{v_j}(A_{v_j})| < |A_{v_j}|, we have the desired inequality by the observation in the previous sentence. Otherwise a_j ∈ A_{v_j} = p^{j−1}_{v_j}(A_{v_j}), and it suffices to notice that a_j · ϕ_{v_j,b_j}(v_j) = b_j · ϕ_{v_j,b_j}(v_j) = b_j.

To prove (c) observe that if A_v is semilattice free then p^ϕ_v is the identity mapping for any ϕ by Lemma 7, and so A†_v = A_v. For the remaining domains let f be a weak near-unanimity term of the class A. Then for any idempotent mapping p the operation p ◦ f given by (p ◦ f)(x_1, . . . , x_n) = p(f(x_1, . . . , x_n)) is a weak near-unanimity term of p(A) = {p(A) | A ∈ A}. The result follows. ✷

The rest of the paper is dedicated to proving Theorem 12. This part assumes some familiarity with algebraic terminology. A brief review of the necessary facts from universal algebra can be found in [26].
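The notion of consistent mappings used in Lemma 15 and in the proof above can be verified directly on explicit instances. A small sketch, with an instance encoding that is ours and purely for illustration:

```python
# Illustration only: checking that a family of maps {p_v} is consistent
# with an instance, i.e. applying them coordinate-wise keeps every tuple of
# every constraint relation inside that relation (the notion of Lemma 15).

def is_consistent(constraints, p):
    for scope, rel in constraints:
        for t in rel:
            image = tuple(p[v](t[i]) for i, v in enumerate(scope))
            if image not in rel:
                return False
    return True

# Toy instance over {0,1,2}; the maps collapse 0 and 1 to 0 and fix 2.
cons = [(("x", "y"), {(0, 0), (1, 1), (2, 2), (1, 0)})]
collapse = {"x": lambda a: 0 if a != 2 else 2,
            "y": lambda a: 0 if a != 2 else 2}
assert is_consistent(cons, collapse)
assert not is_consistent([(("x",), {(1,)})], {"x": lambda a: 0})
```

The second check fails because the map sends the only allowed value 1 to 0, which lies outside the relation, exactly the situation consistency rules out.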
In this section we recall some results from [26] that are necessary for our proof.
In [16, 30] we introduced a local approach to the structure of finite algebras. As we use this approach in the proof of Theorem 12, we present the necessary elements of it here, see also [23, 24]. For the sake of the definitions below we slightly abuse terminology and by a module mean the full idempotent reduct of a module.

For an algebra A the graph G(A) is defined as follows. The vertex set is the universe A of A. A pair ab of vertices is an edge if and only if there exist a maximal congruence θ of Sg(a, b) and a term operation f of A such that either Sg(a, b)/θ is a module and f is an affine operation on it, or f is a semilattice operation on {a/θ, b/θ}, or f is a majority operation on {a/θ, b/θ}. (Note that we use the same operation symbol in this case.) If there are a maximal congruence θ and a term operation f of A such that f is a semilattice operation on {a/θ, b/θ}, then ab is said to have the semilattice type. An edge ab is of majority type if there are a maximal congruence θ and a term operation f such that f is a majority operation on {a/θ, b/θ} and there is no semilattice term operation on {a/θ, b/θ}. Finally, ab has the affine type if there are θ and f such that f is an affine operation on Sg(a, b)/θ and Sg(a, b)/θ is a module. Pairs of the form {a/θ, b/θ} will be referred to as thick edges.

Properties of G(A) are related to the properties of the algebra A.

Theorem 16 (Theorem 5 of [23])
Let A be an idempotent algebra such that var(A) omits type 1. Then
(1) any two elements of A are connected by a sequence of edges of the semilattice, majority, and affine types;
(2) var(A) omits types 1 and 2 if and only if G(A) satisfies the conditions of item (1) and contains no edges of the affine type.

We use the following refinement of this construction. Let A be a finite class of finite smooth algebras. A ternary term operation g′ of A is said to satisfy the majority condition for A if g′ is a majority operation on every thick majority edge of every algebra from A. A ternary term operation h′ is said to satisfy the minority condition for A if h′ is a Mal'tsev operation on every thick affine edge. Operations satisfying the majority and minority conditions always exist, as is proved in [23, Theorem 21]. Fix an operation h satisfying the minority condition; it can also be chosen to satisfy the equation h(h(x, y, y), y, y) = h(x, y, y). A pair of elements a, b ∈ A, A ∈ A, is said to be
(1) a semilattice edge if there is a term operation f such that f(a, b) = f(b, a) = b;
(2) a thin majority edge if for any term operation g′ satisfying the majority condition the subalgebras Sg(a, g′(a, b, b)), Sg(a, g′(b, a, b)), Sg(a, g′(b, b, a)) contain b;
(3) a thin affine edge if h(b, a, a) = b and b ∈ Sg(a, h′(a, a, b)) for any term operation h′ satisfying the minority condition.
Note that thin edges are directed, as a and b appear asymmetrically. By G′(A) we denote the graph whose vertices are the elements of A, and whose edges are the thin edges defined above. Theorem 21 from [23] also implies that there exists a binary term operation · of A that is a semilattice operation on every thin semilattice edge.

We distinguish several types of paths in G′(A) depending on the types of edges involved. A directed path in G′(A) is called an asm-path; if there is an asm-path from a to b we write a ⊑_asm b.
If all edges of this path are semilattice or affine, it is called an affine-semilattice path, or an as-path; if there is an as-path from a to b we write a ⊑_as b.

We consider strongly connected components of G′(A) with majority edges removed, and the natural partial order on such components. The maximal components will be called as-components, and the elements from as-components are called as-maximal; the set of all as-maximal elements of A is denoted by amax(A). An alternative way to define as-maximal elements is as follows: a is as-maximal if for every b ∈ A such that a ⊑_as b it also holds that b ⊑_as a. Finally, an element a ∈ A is said to be universally maximal (or u-maximal for short) if for every b ∈ A such that a ⊑_asm b it also holds that b ⊑_asm a. The set of all u-maximal elements of A is denoted umax(A).

U-maximality has additional useful properties.

Lemma 17 (Theorem 23, [24]; Lemma 12, [26]) (1) Any two u-maximal elements are connected with an asm-path.
(2) Let B be a subalgebra of A containing a u-maximal element of A. Then every element u-maximal in B is also u-maximal in A. In particular, if α is a congruence of A and B is a u-maximal α-block, that is, B is a u-maximal element in A/α, then umax(B) ⊆ umax(A).

Relations, or, more generally, subdirect products of algebras can be naturally endowed with a graph structure. Let R be a subdirect product of A_1 × · · · × A_n. A pair a, b ∈ R is a thin {semilattice, majority, affine} edge if for every i ∈ [n] the pair a[i], b[i] is a thin {semilattice, majority, affine} edge or a[i] = b[i] (in the latter case it will often be convenient to call a pair of equal elements a thin edge of whatever type we need). Paths and maximality can also be lifted to subdirect products.

Lemma 18 (The Maximality Lemma, Corollaries 18, 19, [24])
Let R be a subdirect product of A_1 × · · · × A_n, I ⊆ [n].
(1) For any a ∈ R and an as-path (asm-path) b_1, . . . , b_k ∈ pr_I R with pr_I a = b_1, there is an as-path (asm-path) b′_1, . . . , b′_ℓ ∈ R such that pr_I b′_ℓ = b_k.
(2) For any b ∈ amax(pr_I R) (b ∈ umax(pr_I R)) there is b′ ∈ amax(R) (b′ ∈ umax(R)) such that pr_I b′ = b.
(3) If a ∈ R is an as-maximal or u-maximal element, then so is pr_I a.

We complete this section with an auxiliary statement that will be needed later.
Lemma 19 (Lemma 15, [26]; Lemma 4.14, [42]) (1) Let α ≺ β, α, β ∈ Con(A), let B be a β-block, and typ(α, β) = 2. Then B/α is term equivalent to a module. In particular, every pair of elements of B/α is a thin affine edge in A/α.
(2) If (α : β) ≥ β, then typ(α, β) = 2.

We make use of the property of quasi-2-decomposability proved in [24].
Theorem 20 (The 2-Decomposition Theorem 30, [24]) If R is an n-ary relation, X ⊆ [n], and tuple a is such that pr_J a ∈ pr_J R for any J ⊆ [n], |J| = 2, and pr_X a ∈ amax(pr_X R), then there is a tuple b ∈ R with pr_J a ⊑_as pr_J b for any J ⊆ [n], |J| = 2, and pr_X b = pr_X a.

Let R be a subdirect product of A_1, A_2. By lk_1, lk_2 we denote the congruences of A_1, A_2, respectively, generated by the sets of pairs {(a, b) ∈ A_1^2 | there is c ∈ A_2 such that (a, c), (b, c) ∈ R} and {(a, b) ∈ A_2^2 | there is c ∈ A_1 such that (c, a), (c, b) ∈ R}, respectively. Congruences lk_1, lk_2 are called link congruences. Relation R is said to be linked if the link congruences are full congruences.

Proposition 21 (Corollary 28, [24])
Let R be a subdirect product of A_1 and A_2, lk_1, lk_2 the link congruences, and let B_1, B_2 be as-components of an lk_1-block and an lk_2-block, respectively, such that R ∩ (B_1 × B_2) ≠ ∅. Then B_1 × B_2 ⊆ R. In particular, if R is linked and B_1, B_2 are as-components of A_1, A_2, respectively, such that R ∩ (B_1 × B_2) ≠ ∅, then B_1 × B_2 ⊆ R.

Let A be a finite algebra and α, β ∈ Con(A). The pair α, β is said to be a prime interval, denoted α ≺ β, if α < β and for any γ ∈ Con(A) with α ≤ γ ≤ β either α = γ or β = γ. For α ≺ β, an (α, β)-minimal set is a set minimal with respect to inclusion among the sets of the form f(A), where f is a unary polynomial of A such that f(β) ̸⊆ α.

For an (α, β)-minimal set U and a β-block B such that β|_{U ∩ B} ≠ α|_{U ∩ B}, the set U ∩ B is said to be an (α, β)-trace. A 2-element set {a, b} ⊆ U ∩ B such that (a, b) ∈ β − α is called an (α, β)-subtrace.

Let α ≺ β and γ ≺ δ be prime intervals in Con(A). We say that (α, β) can be separated from (γ, δ) if there is a unary polynomial f of A such that f(β) ̸⊆ α, but f(δ) ⊆ γ. The polynomial f in this case is said to separate (α, β) from (γ, δ). In a similar way separation can be defined for prime intervals in different coordinate positions of a relation. Let R be a subdirect product of A_1 × · · · × A_n. Then R is also an algebra, and its polynomials can be defined in the same way as for a single algebra. Let i, j ∈ [n] and let α ≺ β, γ ≺ δ be prime intervals in Con(A_i) and Con(A_j), respectively. Interval (α, β) can be separated from (γ, δ) if there is a unary polynomial f of R such that f(β) ̸⊆ α but f(δ) ⊆ γ (note that the actions of f on A_i, A_j are polynomials of those algebras).

If A_1, . . . , A_n are algebras, B_1, . . . , B_n are their subsets, B_i ⊆ A_i, i ∈ [n], and α_1, . . . , α_n are congruences of the A_i's, it will be convenient to denote B_1 × · · · × B_n by B, and β_1 × · · · × β_n = {(a, b) ∈ (A_1 × · · · × A_n)^2 | a[i] ≡_{β_i} b[i], i ∈ [n]} by β.

By Cg_A(D), or just Cg(D) if A is clear from the context, we denote the congruence of A generated by a set D of pairs from A^2.

For an algebra A, a set U of unary polynomials, and B ⊆ A^2, we denote by Cg_{A,U}(B) the transitive-symmetric closure of the set T(B, U) = {(f(a), f(b)) | (a, b) ∈ B, f ∈ U}. Let also α, β ∈ Con(A), α ≤ β, and let D be a subuniverse of A such that β = Cg_A(α ∪ {(a, b)}) for some a, b ∈ D. We say that α and β are U-chained with respect to D if for any β-block B such that B′ = B ∩ umax(D) ≠ ∅ we have (umax(B′))^2 ⊆ Cg_{A,U}(α ∪ {(a, b)}).

Let β_i ∈ Con(A_i), let B_i be a β_i-block for i ∈ [n], and let R′ = R ∩ B, B′_i = pr_i R′. A unary polynomial f is said to be B-preserving if f(B) ⊆ B. We call an n-ary relation R chained with respect to β, B if
(Q1) for any I ⊆ [n] and α, β ∈ Con(pr_I R) such that α ≤ β ≤ β_I, the congruences α, β are U_B-chained with respect to pr_I R′, where U_B is the set of all B-preserving polynomials of R;
(Q2) for any α, β ∈ Con(pr_I R), γ, δ ∈ Con(A_j), j ∈ [n], such that α ≺ β ≤ β_I, γ ≺ δ ≤ β_j, and (α, β) can be separated from (γ, δ), the congruences α and β are U(γ, δ, B)-chained with respect to pr_I R′, where U(γ, δ, B) is the set of all B-preserving polynomials g of R such that g(δ) ⊆ γ.
The following lemma claims that the property of being chained is preserved under certain transformations of β and B.

Lemma 22 (Lemmas 44, 45, [26])
Let R be a subdirect product of A_1, . . . , A_n.
(1) Let β_i = 1_{A_i} and B_i = A_i for i ∈ [n]. Then R is chained with respect to β, B.
(2) Let β_i ∈ Con(A_i) and B_i a β_i-block, i ∈ [n], be such that R is chained with respect to β, B. Let R′ = R ∩ B and B′_i = pr_i R′. Fix i ∈ [n], β′_i ≺ β_i, and let D_i be a β′_i-block that is as-maximal in B′_i/β′_i. Let also β′_j = β_j and D_j = B_j for j ≠ i. Then R is chained with respect to β′, D.

Let again R be a subdirect product of A_1 × · · · × A_n and let W_R denote the set of triples (i, α, β), where i ∈ [n] and α, β ∈ Con(A_i), α ≺ β. We say that (i, α, β) cannot be separated from (j, γ, δ) if (α, β) cannot be separated from (γ, δ) in R. The relation 'cannot be separated' on W_R is clearly reflexive and transitive. The next lemma shows that it is to some extent symmetric.

Lemma 23 (Theorem 30, [26])
Let R be a subdirect product of A_1 × · · · × A_n; for each i ∈ [n], let β_i ∈ Con(A_i) and B_i be a β_i-block such that R is chained with respect to β, B; R′ = R ∩ B, B′_i = pr_i R′. Let also α ≺ β ≤ β_i and γ ≺ δ ≤ β_j, where α, β ∈ Con(A_i), γ, δ ∈ Con(A_j), for some i, j ∈ [n]. If B′_j/γ has a nontrivial as-component D and (α, β) can be separated from (γ, δ), then there is a B-preserving polynomial g such that g(β|_{B′_i}) ⊆ α and g(δ) ⊈ γ. Moreover, for any c, d ∈ D the polynomial g can be chosen such that g(c) = c and g(d) = d.

We also introduce polynomials that collapse all prime intervals in congruence lattices of factors of a subproduct, except for a set of intervals that cannot be separated from each other. Let R be a subdirect product of A_1 × · · · × A_n, and choose β_j ∈ Con(A_j), j ∈ [n]. Let also i ∈ [n], and α, β ∈ Con(A_i) be such that α ≺ β ≤ β_i; let also B_j be a β_j-block, j ∈ [n]. We call an idempotent unary polynomial f of R αβ-collapsing for β, B if
(a) f is B-preserving;
(b) f(A_i) is an (α, β)-minimal set, in particular f(β) ⊈ α;
(c) f(δ|_{B_j}) ⊆ γ|_{B_j} for every γ, δ ∈ Con(A_j), j ∈ [n], with γ ≺ δ ≤ β_j, and such that (α, β) can be separated from (γ, δ) or (γ, δ) can be separated from (α, β).

Lemma 24 (Theorem 40, [26])
Let R, i, α, β, and β_j, j ∈ [n], be as above and let R be chained with respect to β, B. Let also R′ = R ∩ B. Then if β = β_i and pr_i R′/α contains a nontrivial as-component, there exists an αβ-collapsing polynomial f for β, B. Moreover, f can be chosen to satisfy any one of the following conditions:
(d) for any (α, β)-subtrace {a, b} ⊆ amax(pr_i R′) with b ∈ as(a), the polynomial f can be chosen such that a, b ∈ f(A_i);
(e) if typ(α, β) ≠ 2, for any ā ∈ umax(R′) the polynomial f can be chosen such that f(ā) = ā;
(f) if typ(α, β) = 2, ā ∈ umax(R′′), where R′′ = {b̄ ∈ R | b̄[i] ≡_α ā[i]}, and {a, b} ⊆ amax(pr_i R′) is an (α, β)-subtrace such that ā[i] = a and b ∈ as(a), then the polynomial f can be chosen such that f(ā) = ā and a, b′ ∈ f(A_i) for some b′ ≡_α b.

This section contains a technical result, the Congruence Lemma 26, that will be used when proving Theorem 12. We start with introducing two closure properties of algebras and their subdirect products. Although we do not need as-closeness right now, it fits well with polynomial closeness.

Let R be a subdirect product of A_1, . . . , A_n and Q a subalgebra of R. We say that Q is polynomially closed in R if for any polynomial f of R the following condition holds: for any a, b ∈ umax(Q) such that f(a) = a and for any c ∈ Sg(a, f(b)) such that a ⊑_as c in Sg(a, f(b)), the tuple c belongs to Q. A subset S ⊆ Q is as-closed in Q if for any a, b ∈ Q with a ∈ umax(S) and a ⊑_as b in Q, it holds that b ∈ S. The set S is said to be weakly as-closed in Q if for any i ∈ [n], pr_i S is as-closed in pr_i Q. Polynomially closed subalgebras and as-closed subsets are well behaved with respect to some standard algebraic transformations.

Lemma 25 (Lemma 42, [26]) (1) For any R, R is polynomially closed in R and R is as-closed in R.
(2) Let Q_i be polynomially closed in R_i, i ∈ [k], and let R, Q be pp-defined through R_1, . . . , R_k and Q_1, . . .
, Q_k, respectively, by the same pp-formula ∃x Φ; that is, R = ∃x Φ(R_1, . . . , R_k) and Q = ∃x Φ(Q_1, . . . , Q_k). Let also R′ = Φ(R_1, . . . , R_k) and Q′ = Φ(Q_1, . . . , Q_k), and suppose that for every atom R_i(x_1, . . . , x_ℓ) and any a ∈ umax(R_i) there is b ∈ R′ with pr_{x_1,...,x_ℓ} b = a, and also umax(Q′) ∩ umax(R′) ≠ ∅. Then Q is polynomially closed in R. If also S_i ⊆ Q_i are as-closed in Q_i, then S = Φ(S_1, . . . , S_k) is as-closed in Q.
(3) Let R be a subdirect product of A_1, . . . , A_n, β_i ∈ Con(A_i), i ∈ [n], and let Q be polynomially closed in R. Then Q/β is polynomially closed in R/β. If S ⊆ Q is as-closed in Q then S/β is as-closed in Q/β.

We are now in a position to state the Congruence Lemma. Let R be a subdirect product of A_1 × A_2, β_1, β_2 congruences of A_1, A_2, and let B_1, B_2 be β_1- and β_2-blocks, respectively. Also, let R be chained with respect to (β_1, β_2), (B_1, B_2), and let R^∗ = R ∩ (B_1 × B_2), B^∗_1 = pr_1 R^∗, B^∗_2 = pr_2 R^∗. Let α ∈ Con(A_1) be such that α ≺ β_1.

Lemma 26 (The Congruence Lemma, Lemma 43, [26])
Suppose α = 0_1 and let R′ be a subalgebra of R^∗ polynomially closed in R and such that B′_1 = pr_1 R′ contains an as-component C of B^∗_1 and R′ ∩ umax(R^∗) ≠ ∅. Let β′ be the least congruence of A_2 such that umax(B′′), where B′′ = R′[C], is a subset of a β′-block. Then either
(1) C × umax(B′′) ⊆ R′, or
(2) there is η ∈ Con(A_2) with η ≺ β′ ≤ β_2 such that the intervals (α, β_1) and (η, β′) cannot be separated.
Moreover, in case (2) R′ ∩ (C × B′′) is the graph of a mapping ϕ: B′′ → C such that the kernel of ϕ is the restriction of η to B′′.

In this section we apply the machinery developed in the previous section to constraint satisfaction problems in order to prove Theorem 12.
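As a brief aside before this machinery is put to work: stripped of the algebraic structure (operations, as-components, u-maximality), the link congruences from the beginning of this section are a purely combinatorial object — two elements of A_1 are lk_1-related exactly when they are joined by an alternating path through R. The following toy sketch (our own code and naming, not part of the paper's formal development) computes the link classes of a finite binary relation and checks whether it is linked:

```python
def link_classes(R, A1, A2):
    """Link equivalences lk1, lk2 of a relation R <= A1 x A2:
    the connected components of the bipartite graph whose edges are
    the pairs of R.  Returns component labels for A1 and for A2."""
    adj = {('L', a): [] for a in A1}
    adj.update({('R', b): [] for b in A2})
    for a, b in R:
        adj[('L', a)].append(('R', b))
        adj[('R', b)].append(('L', a))
    label, comp = {}, 0
    for start in adj:                      # depth-first search per component
        if start in label:
            continue
        stack, label[start] = [start], comp
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in label:
                    label[w] = comp
                    stack.append(w)
        comp += 1
    lk1 = {a: label[('L', a)] for a in A1}
    lk2 = {b: label[('R', b)] for b in A2}
    return lk1, lk2

def is_linked(R, A1, A2):
    # R is linked if lk1 (equivalently lk2, for subdirect R) has one class
    lk1, _ = link_classes(R, A1, A2)
    return len(set(lk1.values())) <= 1
```

For instance, R = {(0,'x'), (0,'y'), (1,'y'), (2,'z')} has lk_1-classes {0, 1} and {2}, so it is not linked; the algebraic content of the lemma above (as-components, rectangularity) is of course not captured by this set-level computation.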
We begin by showing how separating congruence intervals and centralizers can be combined to obtain strands, and therefore useful decompositions of CSPs. The case of binary relations was settled in [26].
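The separation condition itself is directly checkable once the unary polynomials are given. A small sketch of the defining test (our own representation — congruences as maps from elements to class labels, unary polynomials as dict tables; enumerating the polynomials of an algebra is of course the nontrivial part, which this snippet does not attempt):

```python
def collapses(f, upper, lower):
    """True iff f(upper) <= lower: every upper-related pair is mapped
    by the table f into a single lower-class."""
    return all(lower[f[a]] == lower[f[b]]
               for a in f for b in f if upper[a] == upper[b])

def can_separate(polys, alpha, beta, gamma, delta):
    """Search the supplied unary polynomials for an f witnessing that
    (alpha, beta) can be separated from (gamma, delta):
    f(beta) is NOT contained in alpha, yet f(delta) <= gamma."""
    return any(not collapses(f, beta, alpha) and collapses(f, delta, gamma)
               for f in polys)
```

Here `alpha`, `beta`, `gamma`, `delta` are dicts sending each element to its block label, and `polys` is a list of dict tables; in the paper's setting f additionally has to be a polynomial of the algebra and the intervals have to be prime, neither of which is checked by this sketch.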
Lemma 27 (Lemma 34, [26])
Let R be a subdirect product of A_1 × A_2, α_i, β_i ∈ Con(A_i), α_i ≺ β_i, for i = 1, 2. If (α_1, β_1) and (α_2, β_2) cannot be separated from each other, then the coordinate positions 1, 2 are ζ_1ζ_2-aligned in R, where ζ_1 = (α_1 : β_1), ζ_2 = (α_2 : β_2).

Let P = (V, C) be a (2,3)-minimal instance and let β = (β_v | v ∈ V), β_v ∈ Con(A_v), be a collection of congruences. Let W^P(β) denote the set of triples (v, α, β) such that v ∈ V, α, β ∈ Con(A_v), and α ≺ β ≤ β_v. Also, W^P denotes W^P(β) when β_v = 1_v for all v ∈ V. We will omit the superscript P whenever it is clear from the context. Let also W′^P(β), W′^P, W′ denote the sets of triples (v, α, β) from W^P(β), W^P, W, respectively, for which (α : β) = 1_v. For every (v, α, β) ∈ W(β), let Z(v, α, β, β) denote the set of triples (w, γ, δ) ∈ W(β) such that (α, β) and (γ, δ) cannot be separated in R^{vw}. Slightly abusing the terminology we will also say that (α, β) and (γ, δ) cannot be separated in P. Then let W(v, α, β, β) = {w ∈ V | (w, γ, δ) ∈ Z(v, α, β, β) for some γ, δ ∈ Con(A_w)}. We will omit mentioning β whenever possible. Sets of the form W(v, α, β, β) will be called β-coherent sets, or just coherent sets if β is clear from the context. Also, if (α : β) ≠ 1_v then the corresponding coherent set is called non-central. The following statement is an easy corollary of Lemma 27.

Theorem 28 Let P = (V, C) be a (2,3)-minimal instance and (v, α, β) ∈ W. For w ∈ W(v, α, β, β), where β_v = 1_v for v ∈ V, let (w, γ, δ) ∈ W be such that (α, β) and (γ, δ) cannot be separated and ζ_w = (γ : δ). Then P_{W(v,α,β,β)} is ζ-aligned.

Theorem 28 relates domains with congruence intervals that cannot be separated with strands.
Corollary 29
Let P = (V, C) be a (2,3)-minimal instance and W a non-central coherent set. Then W is a subset of a strand.

For technical reasons we will also count the empty set as a non-central coherent set.
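Before moving on to tightening, it may help to recall what (2,3)-minimality amounts to combinatorially. Stripped of the algebraic closure conditions, the core of establishing a (2,3)-strategy is path-consistency-style propagation: a pair may survive in R_xy only if, for every third variable, it extends consistently. A simplified set-based sketch (representation and names are ours; the constructions in the paper additionally require the surviving sets to be subalgebras):

```python
from itertools import permutations

def establish_23_consistency(V, R):
    """V: list of variables; R: dict mapping each ordered pair (x, y),
    x != y, to a set of allowed value pairs (a, b), with R[y, x] the
    inverse of R[x, y].  Deletes (a, b) from R[x, y] whenever some third
    variable w admits no c with (a, c) in R[x, w] and (b, c) in R[y, w].
    Returns the largest such consistent family (sets may become empty)."""
    changed = True
    while changed:
        changed = False
        for x, y in permutations(V, 2):
            for w in V:
                if w in (x, y):
                    continue
                candidates = {c for (_, c) in R[x, w]}
                keep = {(a, b) for (a, b) in R[x, y]
                        if any((a, c) in R[x, w] and (b, c) in R[y, w]
                               for c in candidates)}
                if keep != R[x, y]:
                    R[x, y] = keep
                    R[y, x] = {(b, a) for (a, b) in keep}
                    changed = True
    return R
```

If the propagation empties some R_xy the instance has no solution; the converse, of course, fails in general, which is exactly why the far more delicate machinery of this paper is needed.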
In this section we define a way to tighten a block-minimal problem instance in such a way that it remains (similar to) block-minimal. More precisely, we introduce several properties of a subproblem of a CSP instance P that are preserved when the problem is restricted in a certain way. Let P = (V, C) be a (2,3)-minimal and block-minimal instance over A. Recall that for a strand W ⊆ V by P_{/W} we denote the problem P/µ^{/W}, where µ^{/W} = µ_Y and Y = MAX(P) − W. Let also S_{/W} denote the set of solutions of P_{/W}. If W is a non-central coherent set, the problem P_{/W} is defined in the same way.

Lemma 30
Let P be a (2,3)-minimal and block-minimal problem. Then for every non-central coherent set W the problem P_{/W} is minimal.

Proof:
By Corollary 29 there is a strand U ⊆ V such that W ⊆ U. It now suffices to observe that for every solution ϕ ∈ S_{/U} of P_{/U} the mapping ϕ/µ^{/W} is a solution of P_{/W}. ✷

Let β_v ∈ Con(A_v) and let B_v be a β_v-block, β = (β_v | v ∈ V), B = (B_v | v ∈ V). A problem instance P_† = (V, C_†), where ⟨s, R_†⟩ ∈ C_† if and only if ⟨s, R⟩ ∈ C, is said to be (β, B)-compressed from P if the following conditions hold:
(S1) for every ⟨s, R⟩ ∈ C the relation R_† is a nonempty subalgebra of R ∩ B;
(S2) the relations R^X_†, where R^X_† is obtained from R^X for X ⊆ V, |X| ≤ 2, form a nonempty (2,3)-strategy for P_†;
(S3) for every non-central coherent set W the problem P_{†/W} = P_†/µ^{/W} is minimal;
(S4) for every ⟨s, R⟩ ∈ C the relation R is chained with respect to β, B, and the relation S_{/W} is chained with respect to β, B for every non-central coherent set W ⊆ V;
(S5) for every ⟨s, R⟩ ∈ C the subalgebra R_† is polynomially closed in R;
(S6) for every ⟨s, R⟩ ∈ C the subalgebra R_† is weakly as-closed in R ∩ B.
Conditions (S1)–(S3) are the conditions we actually want to maintain when constructing a compressed instance, and these are the ones that provide the desired results. However, to prove that (S1)–(S3) are preserved under transformations of compressed instances we also need the more technical conditions (S4)–(S6).

We now show how we plan to use compressed instances. Let P be a subdirectly irreducible, (2,3)-minimal, and block-minimal instance, β_v = 1_v and B_v = A_v for v ∈ V. Then, as is easily seen, the instance P itself is (β, B)-compressed from P. Also, by (S1) a (γ, D)-compressed instance with γ_v = 0_v for all v ∈ V gives a solution of P. Our goal is therefore to show that a (β, B)-compressed instance for any β and an appropriate B can be 'reduced', that is, transformed to a (β′, B′)-compressed instance for some β′ < β. Note that this reduction of instances is where the condition MAX(P) ∩ Center(P) = ∅ is used.
Indeed, suppose that β_v = µ^∗_v (see Section 5.4). Then by conditions (S1)–(S6) we only have information about solutions to problems of the form P/µ^∗ or something very close to that. Therefore this barrier cannot be penetrated. We consider two cases.

Case
1. There are v ∈ V and α ≺ β_v nontrivial on B_v with typ(α, β_v) = 2. This case is considered in Section 8.

Case
2. For all v ∈ V and all α ≺ β_v nontrivial on B_v, typ(α, β_v) ∈ {3, 4, 5}. This case is considered in Section 9.

There is also the possibility that α|_{R^v_†} = β_v|_{R^v_†} for all α ≺ β_v. In this case we can replace β_v with a smaller congruence without violating any of the conditions (S1)–(S6).

In this section we consider Case 1 of tightening instances: there is α ∈ Con(A_v) for some v ∈ V such that α ≺ β_v and typ(α, β_v) = 2. Let P = (V, C) be a block-minimal instance with subdirectly irreducible domains, β = (β_v ∈ Con(A_v) | v ∈ V) and B = (B_v | B_v is a β_v-block, v ∈ V). Let W, W′ denote W^P(β), W′^P(β), respectively. Let also P_† = (V, C_†) be a (β, B)-compressed instance, and for C = ⟨s, R⟩ ∈ C let C_† = ⟨s, R_†⟩ ∈ C_† be the corresponding constraint. We select v ∈ V and α ∈ Con(A_v) with α ≺ β_v, typ(α, β_v) = 2, and an α-block B ∈ B_v/α. Note that since typ(α, β_v) = 2, B_v/α is a module, and therefore B is as-maximal in this set. In this section we show how P_† can be transformed to a (β′, B′)-compressed instance such that β′_w ≤ β_w, B′_w ⊆ B_w for w ∈ V, and β′_v = α, B′_v = B. Let also W = W(v, α, β_v, β), and let S^†_{/U} denote the set of solutions of P_{†/U} for a non-central coherent set U. We use P_{†/∅}, S^†_{/∅} to denote such a problem and its solution set for U = ∅. Let also S^†_{/U}(B) = {ϕ ∈ S^†_{/U} | ϕ(v) ∈ B/µ^{/U}_v}.

Let P_‡ = (V, C_‡) be the following instance.
(R1) For every C_† = ⟨s, R_†⟩ ∈ C_†, the set R′_‡ includes
(a) if (v, α, β_v)
∉ W′, every a ∈ umax(R_†) such that a/µ^{/W} extends to a solution ϕ ∈ umax(S^†_{/W}(B));
(b) if (v, α, β_v) ∈ W′, every a ∈ umax(R_†) such that a/µ^∅ extends to a solution ϕ ∈ umax(S^†_{/∅}(B)).
(R2) For every C_† = ⟨s, R_†⟩ ∈ C_†, there is C_‡ = ⟨s, R_‡⟩, where R_‡ = Sg_R(R′_‡).
The following two statements show how the relations R_‡ are related to R_†. They amount to saying that either R_‡ is (almost) the intersection of R_† with a block of a congruence of R, or umax(R_‡) = umax(R_†). Recall that for congruences β_w, w ∈ V, and U ⊆ V by β_U we denote the collection (β_w)_{w ∈ U}.

Lemma 31
Let C = ⟨s, R⟩ ∈ C, and let S^◦, S^◦_† be the sets of solutions of P_{/W} (respectively, P_{†/W}) if (v, α, β_v)
∉ W′, or the sets of solutions of P_{/∅} (respectively, P_{†/∅}) if (v, α, β_v) ∈ W′. There is a congruence τ_C of R satisfying the following conditions.
(a) Either umax(R_‡) = umax(R_†), or for a τ_C-block T it holds that R_‡ = R_† ∩ T.
(b) Either τ_C|_{R_†} = β_s|_{R_†}, or R_†/τ_C is isomorphic to R^v_†/α. Moreover, in the latter case τ_C ≺ β_s.
If, according to item (b) of the lemma, τ_C|_{R_†} = β_s|_{R_†}, we say that τ_C is the full congruence; if the latter option of item (b) holds we say that τ_C is a maximal congruence.

Proof: If v ∈ s then set τ_C to be β_s ∧ α̂, where α̂ is α viewed as a congruence of R_†, equal to α × ∏_{x ∈ s−{v}} 1_x. Otherwise consider Q = pr_{s∪{v}} S^◦ as a subdirect product of A_v and pr_s S^◦. This relation is chained with respect to β, B by (S4) for P_†, and pr_{s∪{v}} S^◦_† is polynomially closed in Q by (S5) for P_† and Lemma 25(2); we apply the Congruence Lemma 26 to it. Specifically, consider Q/α as a subdirect product of pr_s S^◦ and A_v/α. If the first option of the Congruence Lemma 26 holds, set τ_C = β_s. If the second option is the case, choose τ_C to be the congruence η of pr_s S^◦ identified in the Congruence Lemma 26. Note that in the latter case the restriction of τ_C to R_† is nontrivial, because tuples from a τ_C-block are related in Q only to elements from one α-block, while the domain of v in Q spans more than one α-block.
(a) In this case the result follows by the Congruence Lemma 26.
(b) If τ_C ≠ β_s, by construction R_†/τ_C is isomorphic to pr_v S^◦_†/α, which is isomorphic to R^v_†/α. To show that τ_C ≺ β_s, as β_s is the smallest congruence for which R_† is a subset of a β_s-block, it suffices to prove that for any a, b ∈ R_† with (a, b) ∉ τ_C, R_† lies in a single γ-block, where γ = Cg_R(τ_C ∪ {(a, b)}). Consider again the relation Q and let R′ = R_†/µ^◦, a′ = a/µ^◦, b′ = b/µ^◦. The tuples a, b can be chosen u-maximal in their τ_C-blocks.
Let also (a′, a), (b′, b) ∈ Q; then a ≡_α b and a can be chosen u-maximal in its α-block. Since α ≺ β_v and B_v/α is a module, for any α-block D ⊆ B_v there is c ∈ D such that {a, c} is an (α, β_v)-subtrace. By Lemma 24 there is a polynomial f of pr_{s∪{v}} S^◦ such that f(a′, a) = (a′, a) and f(b′, b) = (c′, c) for some c′ ∈ R′. Indeed, we start with any polynomial g that maps a/α, b/α to a/α, c/α and such that g(A_v) is an (α, β_v)-minimal set. Then by Lemma 24 it can be amended in such a way that g(a) = a and g(a′) = a′. Since a/α b/α is an affine edge, there is also (d′, d) ∈ Sg_Q((a′, a), (c′, c)) such that (a′, a)(d′, d) is a thin affine edge and d ≡_α c. Since Q is polynomially closed, (d′, d) ∈ Q. On the other hand, as (d′, d) ∈ Sg_Q((a′, a), (c′, c)), there is a term operation h such that (d′, d) = h((a′, a), (c′, c)). The polynomial h(f(a′, a), f(x)) maps (a′, a) to (a′, a) and (b′, b) to (d′, d), proving that any two τ_C-blocks of R_† are γ-related. ✷

Next we identify the variables w ∈ V for which β′_w has to be different from β_w. Since P is (2,3)-minimal, for every w ∈ V there is C_w = ⟨(w), R^w⟩ ∈ C. For w ∈ W there are two cases. In the first case, when τ_{C_w} is the full congruence, we set β′_w = β_w. Otherwise τ_{C_w} is a congruence of A_w with τ_{C_w} ≺ β_w in Con(A_w); set β′_w = τ_{C_w}. If β′_w ≠ β_w then there is a β′_w-block B′_w such that b ∈ B′_w whenever (a, b) ∈ R^{vw}_† and a ∈ B. For the remaining variables w we set B′_w = B_w.

Lemma 32
In the notation above:
(1) Let γ, δ ∈ Con(A_u), u ∈ U = s ∩ W, be such that (u, γ, δ) ∈ W and (α, β_v), (γ, δ) cannot be separated from each other. Then if τ_C is a maximal congruence, for any polynomial f of R, f(β_s) ⊆ τ_C if and only if f(δ) ⊆ γ. If γ, δ are considered as congruences of R, this condition means that (τ_C, β_s) and (γ, δ) cannot be separated.
(2) Assuming MAX(P) ∩ Center(P) = ∅, if (v, α, β_v) ∈ W′, then for any w ∈ MAX(P) the interval (0_w, µ_w) can be separated from (α, β_v) or the other way round, and therefore either (0_w, µ_w) can be separated from every (τ_C, β_s), where C ∈ C is such that τ_C is a maximal congruence, or the other way round.

Proof: (1) Let S^◦ be defined as in Lemma 31 and let τ_C be a maximal congruence. Take a polynomial f of R. Since P is a block-minimal instance, the polynomial f can be extended from a polynomial on R to a polynomial of S^◦, and, in particular, to a polynomial of pr_{s∪{v}} S^◦; we keep the notation f for these polynomials. Since τ_C is maximal, by the Congruence Lemma 26 the intervals (α, β_v) and (τ_C, β_s) in the congruence lattices of A_v and R, respectively, cannot be separated in pr_{s∪{v}} S^◦. Therefore f(β_v) ⊆ α if and only if f(β_s) ⊆ τ_C. Since (α, β_v) and (γ, δ) cannot be separated in P, the first inclusion holds if and only if f(δ) ⊆ γ, and we infer the result.
(2) Since (v, α, β_v) ∈ W′, the centralizer (α : β_v) = 1_v. On the other hand, if w ∈ MAX(P), then w ∉ Center(P) and (0_w : µ_w) ≠ 1_w. Therefore (α, β_v) can be separated from (0_w, µ_w) or the other way round, as follows from Lemma 27. ✷

Now we are in a position to prove that P_‡ is a (β′, B′)-compressed instance.

Theorem 33
In the notation above, P_‡ is a (β′, B′)-compressed instance.

Conditions (S1) and (S4)–(S6)

We start with conditions (S1) and (S4)–(S6). Condition (S1) is straightforward by construction, item (R2). Since B_v/α is a module, and therefore a nontrivial as-component, Lemma 22 immediately implies that condition (S4) for P_‡ holds. Condition (S5) is also fairly straightforward.

Lemma 34
Condition (S5) for P_‡ holds. That is, for every ⟨s, R⟩ ∈ C the relation R_‡ is polynomially closed in R.

Proof:
Let f be a polynomial of R, and let a, b ∈ umax(R_‡) be tuples satisfying the conditions of polynomial closeness. Let c ∈ Sg(a, f(b)) be such that a ⊑_as c in Sg(a, f(b)). By (S5) for P_†, c ∈ R_†. It suffices to show that c is in the same τ_C-block as a. However, this is straightforward: a ≡_{τ_C} b, and as f(a) = a, we also have a ≡_{τ_C} f(b). Since c ∈ Sg(a, f(b)), it follows that c ≡_{τ_C} a. ✷

Finally, condition (S6) also holds.
Lemma 35
Condition (S6) for P_‡ holds.

Proof:
Let C = ⟨s, R⟩ ∈ C. By Lemma 31(a) either umax(R_‡) = umax(R_†), in which case we are done, or R_‡ = R_† ∩ T, where T is a τ_C-block. If w ∈ s − W, then umax(pr_w R_‡) = umax(pr_w R_†) and the property of weak as-closeness holds for such variables. Otherwise, if s ∩ W ≠ ∅, then R_‡ = R_† ∩ B′. Moreover, for any a ∈ R_† and any w, u ∈ s ∩ W it holds that a[w] ∈ B′_w if and only if a[u] ∈ B′_u. Let a ∈ umax(pr_w R_‡) ⊆ umax(pr_w R_†) and b ∈ pr_w(R ∩ B′) be such that a ⊑_as b in pr_w(R ∩ B′). By (S6) for R_† there is a tuple b̄ ∈ R_† such that b̄[w] = b. Then, as we observed, b̄ ∈ R_† ∩ B′ = R_‡, as required. ✷

Property (S2) is more difficult to prove. We start with a construction similar to what we used before and that we will also use in the proof of (S3). Let µ^◦_z denote 0_z if z ∈ W and (v, α, β_v)
∉ W′, and µ^◦_z = µ^∅_z otherwise. In other words, µ^◦ is µ^{/W} if (v, α, β_v)
∉ W′, and µ^◦ is µ^∅ otherwise. Let S^◦ be the set of solutions of P^◦ = P/µ^◦. Then for C = ⟨s, R⟩ ∈ C we define Q_C to be the subalgebra of the product R × A_v/α that consists of all tuples (b, c′), b ∈ R, such that there is a solution ϕ ∈ S^◦ with b ∈ ϕ(s) and ϕ(v) ∈ c′. By the block-minimality of P the relation Q_C is indeed a subdirect product of R and A_v/α, and by (S3) for P_† the relation Q_C ∩ (R_† × R^v_†/α) is a subdirect product of R_† and R^v_†/α. Also, by Lemma 25(2,3) Q_C is polynomially closed.

Lemma 36
Condition (S2) for P_‡ holds. That is, the relations R^X_‡, where R^X_‡ is obtained from R^X_† as described in (R1), (R2) for X ⊆ V, |X| ≤ 2, form a nonempty (2,3)-strategy for P_‡.

Proof: By (S2) for P_† the relations R^X_†, X ⊆ V, |X| ≤ 2, constitute a (2,3)-strategy for P_†. As R^{xy}_‡ is generated by R′^{xy}_‡, it suffices to show that for any tuple (a, b) ∈ R′^{xy}_‡ and any w
∉ {x, y} there is c ∈ A_w such that (a, c) ∈ R^{xw}_‡, (b, c) ∈ R^{yw}_‡. By (R1), R′^{xw}_‡ ⊆ umax(R^{xw}_†), and so by (S2) for P_† there is d ∈ A_w such that (a, d) ∈ umax(R^{xw}_†), (b, d) ∈ umax(R^{yw}_†). Let Q_x = Q_{C_{xw}}, Q_y = Q_{C_{yw}}, as defined before Lemma 36. As we observed, Q_x is a subdirect product of R^{xw} × A_v/α, and by (S3) for P_† the relation Q_x ∩ (R^{xw}_† × R^v_†/α) is a subdirect product of R^{xw}_† and R^v_†/α. For the relation Q_y similar properties hold. Consider the relation
S(x, y, w, v_1, v_2) = R^{xy}(x, y) ∧ Q_x(x, w, v_1) ∧ Q_y(y, w, v_2),
and S′ = S ∩ B and S^∗ = S/µ^◦. It suffices to show that for some c ∈ R^w_† and e = B, such that (a, c) ∈ umax(R^{xw}_†) and (b, c) ∈ umax(R^{yw}_†), it holds that (a, b, c, e, e) ∈ S′. Indeed, by the definition of Q_x, Q_y this means that (a, c) ∈ R^{xw}_‡ and (b, c) ∈ R^{yw}_‡. As we observed above there is d ∈ R^w_† such that (a, d) ∈ R^{xw}_†, (b, d) ∈ R^{yw}_†, and the triple (a, b, d) extends to a tuple from S′. Note that as (a, b) ∈ umax(R^{xy}_‡), d can be chosen such that (a, b, d) ∈ umax(pr_{x,y,w} S′). Thus, for some e_1, e_2 ∈ B_v/α we have ā = (a, b, d, e_1, e_2) ∈ S′. Since B_v/α is a module and therefore as-connected, ā ∈ umax(S′). On the other hand, by (R1) there is a solution ϕ of P_†/µ^◦ such that a ∈ ϕ(x), b ∈ ϕ(y), and ϕ(v) ∈ e. In other words, there are (a′, c′) ∈ R^{xw}_‡ and (b′, c′′) ∈ R^{yw}_‡ with a′ ≡_{µ^◦_x} a, b′ ≡_{µ^◦_y} b, and c′ ≡_{µ^◦_w} c′′. This also means that (a′, c′, e) ∈ Q_x and (b′, c′′, e) ∈ Q_y. By the definition of the congruences µ^◦_z and Lemma 32(2), for every z ∈ V the interval (α, β_v) can be separated from (0_z, µ^◦_z) or the other way round.
Therefore, by Lemma 24 there exists an idempotent polynomial f of S satisfying the following conditions:
(a) f is B-preserving;
(b) f(A_v/α) is an (α, β_v)-minimal set;
(c) f(µ^◦_x|_{B_x}) ⊆ 0_x, f(µ^◦_y|_{B_y}) ⊆ 0_y, f(µ^◦_w|_{B_w}) ⊆ 0_w.
Since {e, e_1} is an (α, β_v)-subtrace of A_v/α, as B_v/α is a module, and as ā can be assumed to be from umax(S′′), S′′ = {b̄ ∈ S′ | b̄[v_1] = e_1, b̄[v_2] = e_2}, by Lemma 24 for S the polynomial f can be chosen such that
(d) f(e) = e, f(e_1) = e_1 in coordinate position v_1; and
(e) f(ā) = ā.
The appropriate restrictions of f are also polynomials of Q_x, Q_y. Therefore, applying f to (a′, c′, e) and (b′, c′′, e) we get (a, c^∗, e) ∈ Q_x, (b, c^∗, e′) ∈ Q_y, where c^∗ = f(c′) = f(c′′) and e′ = f(e) in the coordinate position v_2 (and so f(e) = e does not have to be true in v_2). Thus, b̄ = (a, b, c^∗, e, e′) ∈ S′. However, (a, c^∗), (b, c^∗) do not necessarily belong to R^{xw}_†, R^{yw}_†, respectively. To fix this let c̄ be a tuple in Sg_{S′}(ā, b̄) such that āc̄ is a thin affine edge and c̄[v_1] = e. As is easily seen, c̄ has the form (a, b, c^◦, e, e′′). As (a, c′) ∈ R^{xw}_†, (b, c′′) ∈ R^{yw}_†, and these relations are polynomially closed in R^{xw}, R^{yw}, respectively, (a, c^◦) ∈ R^{xw}_† and (b, c^◦) ∈ R^{yw}_† as well. Since (a, b, e, e′) ∈ umax(pr_{x,y,v_1,v_2} S′), we may assume c̄ ∈ umax(S′). Finally, repeating the same argument we find a polynomial g of S satisfying the conditions (a)–(e) with c̄ in place of ā and using the (α, β_v)-subtrace {e′, e} in coordinate position v_2 in place of {e, e_1}. Then we conclude that for some c^• ∈ Sg_{A_w}(c^◦, g(c′)), such that c^◦c^• is a thin affine edge, it holds that (a, b, c^•, e, e) ∈ S and (a, c^•) ∈ R^{xw}_†, (b, c^•) ∈ R^{yw}_†. ✷

In this section we prove that P_‡ satisfies condition (S3). As before, let W = W(v, α, β_v, β).
Recall also that for a coherent set U = W ( u, γ, δ, β ) , ( u, γ, δ )
∉ W′, by µ^{/U} we denote the collection of congruences µ′_w, w ∈ V, such that µ′_w = µ_w if w ∈ MAX(P) − U, and µ′_w = 0_w otherwise.

Lemma 37
The instance P_‡ satisfies (S3). That is, for every coherent set U the problem P_{‡/U} is minimal. More precisely, for every ⟨s, R_‡⟩ ∈ C_‡ and every a ∈ R_‡, there is a solution ϕ ∈ S^‡_{/U} such that ϕ(s) = a/µ^{/U}.

Proof:
For a coherent set U and a constraint C = ⟨s, R_‡⟩ it suffices to check only that the tuples a ∈ R′_‡ are extendable to solutions from S^‡_{/U}, because R_‡ is generated by R′_‡. For a constraint C′ = ⟨s′, R′⟩ ∈ C, let Q_{C′} denote the relation introduced before Lemma 36, and Q′_{C′} = Q_{C′}/µ^{/U}. Let C̄ ⊆ C be the set of all constraints C′ such that τ_{C′} is maximal. Let also V = {x_1, . . . , x_n}, v = x_i, s = (x_1, . . . , x_k), and C̄ = {C_1, . . . , C_ℓ}, C_j = ⟨s_j, R_j⟩. Consider the relation
T(x_1, . . . , x_n, v_1, . . . , v_ℓ) = S_{/U}(x_1, . . . , x_n) ∧ ⋀_{j=1}^{ℓ} Q′_{C_j}(s_j, v_j),
and T′ = T ∩ (B × (B_v/α)^ℓ). Let a ∈ R′_‡ and a′ = a/µ^{/U}. It suffices to show that for some c ∈ pr_{x_{k+1},...,x_n} S^†_{/U} and e = B such that (a′, c) ∈ umax(S^†_{/U}) it holds that (a′, c, e, . . . , e) ∈ T′. By construction there is a solution ϕ of P_†/µ^◦ (recall that this problem is P_{†/∅} if (v, α, β_v) ∈ W′, and is P_{†/W} if (v, α, β_v)
∉ W′) such that a/µ^◦ = ϕ(s) and ϕ(v) ∈ e. Since a/µ^◦ ∈ umax(R_†/µ^◦), ϕ can be chosen from umax(S^◦_†). The existence of ϕ also means that for any C^∗ = ⟨s^∗, R^∗⟩ ∈ C there is b_{C^∗} ∈ R^∗_‡ such that b_{C^∗}/µ^◦ = ϕ(s^∗). Again, b_{C^∗} can be chosen from umax(R^∗_†). We show that there exists a solution ψ ∈ S^†_{/U} such that ψ(s) = a′ and for every C^∗ = ⟨s^∗, R^∗⟩ ∈ C̄ it holds that
(ψ(s^∗), b′_{C^∗}) ∈ τ_{C^∗},   (1)
where we use b′_{C^∗} to denote b_{C^∗}/µ^{/U}. In other words, ψ ∈ S^‡_{/U}, as required. By the definition of Q_{C_j} there exists e_j ∈ B_v/α such that (b_{C_j}, e_j) ∈ Q_{C_j}, and so (b′_{C_j}, e_j) ∈ Q′_{C_j}.

By (S3) for P_† there is σ ∈ umax(S^†_{/U}) with σ(s) = a′. Choose one for which condition (1) is true for a maximal number of constraints from C̄. Suppose that (1) does not hold for C_j = ⟨s^∗, R^∗⟩ ∈ C̄. Using the solution ϕ of P_†/µ^◦ we will construct another solution σ̄ ∈ S^†_{/U} such that (1) for σ̄ is true for all constraints it is true for σ, and is also true for C_j. By the definition of the congruences µ^◦_z and Lemma 32(2), for every z ∈ V the interval (α, β_v) can be separated from (0_z, µ^◦_z) or the other way round. Therefore, by Lemma 24 there exists an idempotent polynomial f of T satisfying the following conditions:
(a) f is B-preserving;
(b) f(A_v/α) in the coordinate position v_j of T is an (α, β_v)-minimal set; and
(c) f(µ^◦_{x_q}|_{B_{x_q}}) ⊆ 0_{x_q} for q ∈ [n].
Since {e, e_j} is an (α, β_v)-subtrace of A_v/α, as B_v/α is a module, and (σ, e_1, . . . , e_ℓ) ∈ umax(T′), by Lemma 24 for T the polynomial f can be chosen such that
(d) f(e) = e, f(e_j) = e_j in coordinate position v_j; and
(e) f((σ, e_1, . . . , e_ℓ)) = (σ, e_1, . . . , e_ℓ).
The appropriate restrictions of f are also polynomials of Q′_{C_q} and R′ for each q ∈ [ℓ] and C′ = ⟨s′, R′⟩ ∈ C.
By (c), for any C^◦ = ⟨s^◦, R^◦⟩, C^• = ⟨s^•, R^•⟩ ∈ C̄ we have f(b′_{C^◦}[w]) = f(b′_{C^•}[w]) for each w ∈ s^◦ ∩ s^•. This means that σ̄ = f(ϕ) is properly defined by setting σ̄(w) = f(b′_{C^•}[w]) for any w ∈ V and C^• = ⟨s^•, R^•⟩ ∈ C̄ such that w ∈ s^•. Also, for any constraint C_q ∈ C̄ for which (1) holds for σ, it also holds for σ̄, as f(σ(s_q)) = σ(s_q) ≡_{τ_{C_q}} b′_{C_q} implies σ̄(s_q) = f(b′_{C_q}) ≡_{τ_{C_q}} f(σ(s_q)) ≡_{τ_{C_q}} b′_{C_q} in this case. By (e), σ̄(s) = a′. Finally, f(e) = e in the coordinate position v_j of Q, and so σ̄(s_j) ≡_{τ_{C_j}} b′_{C_j}, that is, (1) holds for C_j as well.

The mapping σ̄ satisfies many of the desired properties, and it is a solution of P_{/U} because σ̄(s^◦) ∈ R^◦ for each C^◦ = ⟨s^◦, R^◦⟩ ∈ C. However, it is not necessarily a solution of P_{†/U}, and so we need to make one more step. To convert σ̄ into a solution of P_{†/U} consider c̄ = (σ, e, . . . , e) and d̄ = (σ̄, f_1(e), . . . , f_ℓ(e)). Note that the actions of the polynomial f in the coordinate positions v_r of T may differ; we reflect this by using subscripts in the tuple d̄. In the subalgebra of T generated by c̄, d̄ take c̄′ = (ψ, e′_1, . . . , e′_ℓ) such that c̄c̄′ is a thin affine edge and c̄′[v_j] = e′_j = f_j(e) = e. For every C^◦ = ⟨s^◦, R^◦⟩ ∈ C the relation R^◦_† is polynomially closed in R^◦ by (S5). Since σ̄(s^◦)ψ(s^◦) is a thin affine edge in the subalgebra generated by σ(s^◦), σ̄(s^◦), and σ̄(s^◦) is the image of b′_{C^◦} ∈ R^◦_†/µ^{/U} under f, we get ψ(s^◦) ∈ R^◦_†/µ^{/U} as well. Thus, ψ is a solution of P_{†/U}. Since σ(s) = σ̄(s) = a′, the same holds for ψ(s). Also, for any constraint C_q ∈ C̄ for which σ satisfies (1) so does σ̄, and therefore ψ. Finally, by construction c̄′[v_j] = e, which means that (1) holds for C_j as well. This is a contradiction with the choice of σ. ✷

Proof of Theorem 12: non-affine factors
In this section we consider Case 2 of tightening instances: for every v ∈ V and every α ∈ Con(A_v) with α ≺ β_v it holds that typ(α, β_v) ≠ 2. Let P = (V, C) be a (2,3)-minimal and block-minimal instance with subdirectly irreducible domains, β = (β_v ∈ Con(A_v) | v ∈ V) and B = (B_v | B_v is a β_v-block, v ∈ V). Let also P_† = (V, C_†) be a (β, B)-compressed instance, and for C = ⟨s, R⟩ ∈ C there is C_† = ⟨s, R_†⟩ ∈ C_†. We select v ∈ V and α ∈ Con(A_v) with α ≺ β_v, typ(α, β_v) ≠ 2, and an α-block B ∈ B_v/α such that B is as-maximal in R^v_†/α. By (S6) for P_†, for any C = ⟨s, R⟩ ∈ C with v ∈ s the α-block B is also as-maximal in pr_v(R ∩ B)/α. In particular, it is as-maximal in B_v/α = (R^v ∩ B_v)/α. We show how P_† can be transformed to a (β′, B′)-compressed instance such that β′_w ≤ β_w, B′_w ⊆ B_w for w ∈ V, and β′_v = α, B′_v = B. By Lemma 23, if R^v_†/α contains a nontrivial as-component, there is a coherent set associated with the triple (v, α, β_v). Let W = W(v, α, β_v, β) in this case; note that (v, α, β_v)
∉ W′, because (α : β_v) ≠ 1_v by Lemma 19(2). Let also S^†_{/U} denote the set of solutions of P_{†/U} for a coherent set U.

Lemma 38 If B_v/α contains a nontrivial as-component, then for every w ∈ W there is a congruence α_w ∈ Con(A_w) with α_w < β_w, and such that R^{vw}_† is aligned with respect to (α, α_w); that is, for any (a_1, a_2), (b_1, b_2) ∈ R^{vw}_†, a_1 ≡_α b_1 if and only if a_2 ≡_{α_w} b_2.

Proof:
It suffices to show that the link congruences lk 1 , lk 2 of Q = R vw viewed as a subdirect product of A v × A w are such that β v ∧ lk 1 ≤ α and β w ∧ lk 2 < β w . Since w ∈ W there are γ, δ ∈ Con ( A w ) such that γ ≺ δ ≤ β w and ( α, β v ) and ( γ, δ ) cannot be separated. By Lemmas 19 and 27 it follows that β v ∧ lk 1 ≤ α and lk 2 ∧ δ ≤ γ . We set α w = β w ∧ lk 2 < β w . ✷
Let P ‡ = ( V, C ‡ ) be constructed as follows.
(R) Let P ′ be the problem obtained from P † by adding the extra constraint ⟨{ v } , B ⟩ . Let P ‡ be the problem obtained from P ′ by establishing (2,3)-minimality, and the minimality of P ‡ /U for every non-central coherent set U .
Set β ′ v = α , B ′ v = B . Let Z be the set of variables w such that there is a congruence α w < β w such that R wv † / α is the graph of a mapping π w : R w † → R v † / α and α w is its kernel. For instance, if B belongs to a nontrivial as-component, then Z = W . For w ∈ Z set β ′ w = α w , B ′ w = π w −1 ( B ) . For the remaining variables w set β ′ w = β w , B ′ w = B w .
Lemma 39 P ‡ satisfies condition (S5). In other words, for every C = ⟨ s , R ⟩ ∈ C , the relation R ‡ is polynomially closed in R .
Proof:
Condition (S5) holds for P † . The instance P ‡ is obtained from P † by adding an extra constraint (whose relation is polynomially closed in A v ) and establishing various sorts of minimality. This means that every R ‡ is obtained through a p-formula of polynomially closed relations. By Lemma 25 it is polynomially closed in R as well. ✷
Condition (S4) follows from Lemma 22 by the choice of β ′ v , B and (S6) for P † . The following two lemmas show that the constraints of P ‡ are not empty. We do this by identifying a set of tuples in every constraint relation that withstand the propagation algorithms. We start with constructing such sets for (2,3)-minimality. Set
Q x = { a ∈ amax ( R x † ) | there is d ∈ B such that ( d, a ) ∈ R vx † } .
Lemma 40
The collection of sets Q xy = R xy † ∩ ( Q x × Q y ) , x, y ∈ V , is a (2,3)-strategy for P ′ .
Proof:
We need to show that for any x, y, w ∈ V and ( a, b ) ∈ Q xy there is c ∈ R w † such that ( a, c ) ∈ Q xw , ( b, c ) ∈ Q yw . By (S2) for P † there is c with ( a, c ) ∈ R xw † , ( b, c ) ∈ R yw † . Let e = B . Consider the relation Q below:
Q ′ ( x, y, w, v ) = R xy † ( x, y ) ∧ R xw † ( x, w ) ∧ R yw † ( y, w ) ∧ R wv † / α ( w, v ) , (2)
and Q = pr xyv Q ′ . As is easily seen, it suffices to show that ( a, b, e ) ∈ Q . Condition (S2) for P † also implies that a = ( a, b, e ′ ) ∈ Q for some e ′ , and a can be chosen as-maximal in Q . We use the Quasi-2-Decomposition Theorem 20. The tuple a indicates that ( a, b ) ∈ pr xy Q . It is also easy to see that ( a, e ) ∈ pr xv Q and ( b, e ) ∈ pr yv Q . By Theorem 20, ( a, b, e ′′ ) ∈ Q for some e ′′ with e ⊑ as e ′′ . If e does not belong to a nontrivial as-component of B v / α , then e ′′ = e . So, suppose that e belongs to a nontrivial as-component E of B v / α .
Let c ∈ R w † be such that ( a, b, c, e ′′ ) ∈ Q ′ . If w ∉ W , then by the Congruence Lemma 26, ( c, e ) ∈ R wv † / α whenever c ∈ umax ( D ) , where D = { d ∈ R w † | ( d, e ∗ ) ∈ R wv † / α for some e ∗ ∈ E } . Since ( a, b, e ′′ ) ∈ amax ( Q ) , element c can be chosen from amax ( D ) . Therefore ( a, b, e ) ∈ Q . So, assume that w ∈ W . If x ∈ W or y ∈ W , then e ′ = e . Otherwise, as is easily seen, R xv † / α ⊆ pr xv Q , R yv † / α ⊆ pr yv Q , and ( α, β v ) can be separated from any ( γ x , δ x ) , ( γ y , δ y ) , where γ x ≺ δ x ≤ β x , γ y ≺ δ y ≤ β y , and γ x , δ x ∈ Con ( A x ) , γ y , δ y ∈ Con ( A y ) , or the other way round. Consider
S ( x, y, w, v ) = R xy ( x, y ) ∧ R xw ( x, w ) ∧ R yw ( y, w ) ∧ R wv / α ( w, v ) ;
by Lemma 22, S is chained with respect to β, B . Let { e 1 , e 2 } ∈ B v / α be an ( α, β v ) -subtrace. By Lemma 24 there is a B -preserving polynomial f of S such that f ( e 1 ) = e 1 , f ( e 2 ) = e 2 , and | f ( B x ) | = | f ( B y ) | = 1 . Therefore ( α, β v ) can be separated from every prime interval γ ≺ δ ≤ β x × β y in Con ( R xy ) .
Applying the Congruence Lemma 26 to Q we obtain umax ( F ) × E ⊆ Q , where F = { ( d 1 , d 2 ) | ( d 1 , d 2 , e ∗ ) ∈ Q ′ for some e ∗ ∈ E } . In particular, ( a, b, e ) ∈ Q . ✷
Let Q = { Q x | x ∈ V } . We say that a tuple a ∈ A v 1 × · · · × A v ℓ , v 1 , . . . , v ℓ ∈ V , is Q -compatible if a [ v i ] ∈ Q v i for any i ∈ [ ℓ ] .
Lemma 41 Let C = ⟨ s , R ⟩ ∈ C . Then for any non-central coherent set U and any Q -compatible tuple a ∈ amax ( R † ) there is a Q -compatible solution ϕ ∈ S † /U such that ϕ ( s ) = a / µ /U .
Proof:
The proof of this lemma follows the same lines as the proof of Lemma 40. We show by induction that for every I , s ⊆ I ⊆ V , there is ψ ∈ pr I S † /U such that a ′ = ψ ( s ) , where a ′ = a / µ /U and ψ ( w ) ∈ Q w for all w ∈ I . The base case, I = s , is given by (S3) for P † .
Suppose the claim is proved for some I , s ⊆ I ⊆ V , and w ∈ V − I . Let also ψ ∈ amax (pr I S † /U ) be a partial solution for this set, I = { x 1 , . . . , x k } , and I ′ = I ∪ { w } . Let e = B . Consider the following relation
Q ′ ( x 1 , . . . , x k , w, v ) = pr I ′ S † /U ( x 1 , . . . , x k , w ) ∧ R wv † / α ( w, v ) , (3)
and Q = pr I ∪{ v } Q ′ . As is easily seen, it suffices to show that ( ψ, e ) ∈ Q . Firstly, ψ ∈ pr I Q by the induction hypothesis, as any value of w can be extended to a pair from R wv † . For i ∈ [ k ] , as ( ψ ( x i ) , e ) ∈ R x i v † / ( µ /U x i × α ) , we have ( ψ ( x i ) , b ) ∈ R x i v † / µ /U x i for some b ∈ B . By (S3) for P † this pair can be extended to a solution from S † /U . This implies ( ψ ( x i ) , e ) ∈ pr x i v Q . By the Quasi-2-Decomposition Theorem 20, ( ψ, e ′ ) ∈ Q for some e ′ ∈ as ( e ) in B v / α . If e is in a trivial as-component of B v / α , we obtain e ′ = e . So, suppose that e belongs to a nontrivial as-component E of B v / α .
As ψ is as-maximal, there is as-maximal ϕ = ( ψ, e ′ ) ∈ Q . If w ∉ W , by the Congruence Lemma 26, ( c, e ) ∈ R wv † / α whenever c ∈ umax ( D ) , c satisfies the conditions of (3), and D = { d ∈ R w † | ( d, e ∗ ) ∈ R wv † / α for some e ∗ ∈ E } . Since ϕ ∈ umax ( Q ) , element c can be chosen from D . Therefore ( ψ, e ) ∈ Q . So, assume that w ∈ W . If I ∩ W = ∅ , then e ′ = e . Otherwise, as is easily seen, R x i v † / ( µ /U x i × α ) ⊆ pr x i v Q and α ≺ β v can be separated from any γ ≺ δ ≤ β x i , where γ, δ ∈ Con ( A x i ) , i ∈ [ k ] . Consider
S ( x 1 , . . . , x k , w, v ) = pr I ′ S /U ( x 1 , . . . , x k , w ) ∧ R wv / α ( w, v ) ;
by Lemma 22, S is chained with respect to β, B .
Similar to the proof of Lemma 40, let { e 1 , e 2 } ∈ B v / α be an ( α, β v ) -subtrace. By Lemma 24 there is a polynomial f of S such that f ( e 1 ) = e 1 , f ( e 2 ) = e 2 , and | f ( B x i ) | = 1 for i ∈ [ k ] . Therefore ( α, β v ) can be separated from every prime interval γ ≺ δ ≤ β I in Con (pr I S /U ) . Applying the Congruence Lemma 26 to S and Q we obtain umax ( F ) × E ⊆ Q , where F = { χ ∈ pr I S † /U | ( χ, e ∗ ) ∈ Q for some e ∗ ∈ E } . In particular, ( ψ, e ) ∈ Q . ✷
Conditions (S2), (S3) hold for P ‡ by construction, and P ‡ does not contain empty constraint relations by Lemmas 40 and 41, implying (S1). Finally, we verify condition (S6).
Lemma 42
Condition (S6) for P ‡ holds. Proof:
Similar to the sets Q x above we introduce
T x = { a ∈ R x † | there is d ∈ B such that ( a, d ) ∈ R xv † } .
Let C = ⟨ s , R ⟩ ∈ C . We make use of the following property of R † : for any w, u ∈ s ∩ Z and any a ∈ R † , if a [ w ] ∈ B ′ w then a [ u ] ∈ B ′ u . We prove the claim in three steps. First, we show that for every C = ⟨ s , R ⟩ ∈ C the relation R ′′ = R † ∩ ∏ x ∈ s T x is as-closed (not weakly as-closed!) in R ′ = R † ∩ B ′ . Second, we use Lemma 25 to conclude that R ‡ is as-closed in R ′′ . Third, we conclude that this implies that R ‡ is weakly as-closed in R ∩ B ′ .
For the first step, note that it suffices to show that T x is as-closed in pr x R ∩ B ′ x for x ∈ s . Depending on the case of the Congruence Lemma 26 that holds for R xv † / α and whether or not B belongs to a nontrivial as-component E , either umax ( T x ) = umax ( T ′ x ) , where T ′ x = { a ∈ R x † | there is d ∈ B v such that d/ α ∈ E and ( a, d ) ∈ R xv † } , or T x = B ′ x . In both cases the claim holds.
The second step is immediate by Lemma 25. For the third step, if a ∈ umax (pr w R ‡ ) ⊆ umax (pr w ( R ∩ B ′ )) and b ∈ pr w ( R ∩ B ′ ) are such that a ⊑ as b for w ∈ s , then take a tuple a ∈ R ‡ with a = a [ w ] . Since a ∈ R † , by (S6) for R † , b ∈ pr w R † , and therefore b = b [ w ] for some tuple b ∈ R † . As a ⊑ as b , the tuple b can be chosen such that a ⊑ as b in R † . Moreover, as we observed above, b ∈ R ′ . This means, by the second step, that b ∈ R ‡ , confirming the claim. ✷
References
[1] Eric Allender, Michael Bauland, Neil Immerman, Henning Schnoor, and Heribert Vollmer. The complexity of satisfiability problems: Refining Schaefer’s theorem. In
MFCS, pages 71–82, 2005.
[2] K.A. Baker and A.F. Pixley. Polynomial interpolation and the Chinese remainder theorem. Mathematische Zeitschrift, 143:165–174, 1975.
[3] Libor Barto. The dichotomy for conservative constraint satisfaction problems revisited. In LICS, pages 301–310, 2011.
[4] Libor Barto. The collapse of the bounded width hierarchy. J. Log. Comput., 26(3):923–943, 2016.
[5] Libor Barto and Marcin Kozik. Absorbing subalgebras, cyclic terms, and the constraint satisfaction problem. Logical Methods in Computer Science, 8(1), 2012.
[6] Libor Barto and Marcin Kozik. Constraint satisfaction problems solvable by local consistency methods. J. ACM, 61(1):3:1–3:19, 2014.
[7] Libor Barto, Marcin Kozik, and Todd Niven. The CSP dichotomy holds for digraphs with no sources and no sinks (A positive answer to a conjecture of Bang-Jensen and Hell). SIAM J. Comput., 38(5):1782–1802, 2009.
[8] Libor Barto, Marcin Kozik, and Ross Willard. Near unanimity constraints have bounded pathwidth duality. In LICS, pages 125–134, 2012.
[9] Libor Barto, Andrei A. Krokhin, and Ross Willard. Polymorphisms, and how to use them. In The Constraint Satisfaction Problem: Complexity and Approximability, pages 1–44, 2017.
[10] Joel Berman, Paweł Idziak, Petar Marković, Ralph McKenzie, Matthew Valeriote, and Ross Willard. Varieties with few subalgebras of powers. Trans. Amer. Math. Soc., 362(3):1445–1473, 2010.
[11] Jonah Brown-Cohen and Prasad Raghavendra. Correlation decay and tractability of CSPs. In ICALP, volume 55 of LIPIcs, pages 79:1–79:13. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2016.
[12] Andrei A. Bulatov. A dichotomy theorem for constraints on a three-element set. In FOCS’02, pages 649–658, 2002.
[13] Andrei A. Bulatov and Peter G. Jeavons. Algebraic structures in combinatorial problems. Technical Report MATH-AL-4-2001, Technische Universität Dresden, Dresden, Germany, 2001. Available at http://web.comlab.ox.ac.uk/oucl/research/areas/constraints/publications/index.html.
[14] Andrei A. Bulatov. Tractable conservative constraint satisfaction problems. In LICS, page 321, 2003.
[15] Andrei A. Bulatov and Peter Jeavons. An Algebraic Approach to Multi-sorted Constraints. In CP, volume 2833 of Lecture Notes in Computer Science, pages 183–198, 2003.
[16] Andrei A. Bulatov. A graph of a relational structure and constraint satisfaction problems. In LICS, pages 448–457, 2004.
[17] Andrei A. Bulatov. A dichotomy theorem for constraint satisfaction problems on a 3-element set. J. ACM, 53(1):66–120, 2006.
[18] Andrei A. Bulatov. Complexity of conservative constraint satisfaction problems. ACM Trans. Comput. Log., 12(4):24, 2011.
[19] Andrei A. Bulatov. Conservative constraint satisfaction re-revisited. Journal of Computer and System Sciences, 82(2):347–356, 2016.
[20] Andrei A. Bulatov. Graphs of finite algebras, edges, and connectivity. CoRR, abs/1601.07403, 2016.
[21] Andrei A. Bulatov. Graphs of relational structures: restricted types. In LICS, 2016.
[22] Andrei A. Bulatov. A dichotomy theorem for nonuniform CSPs. In FOCS, pages 319–330, 2017.
[23] Andrei A. Bulatov. Local structure of idempotent algebras I. CoRR, abs/2006.09599, 2020.
[24] Andrei A. Bulatov. Local structure of idempotent algebras II. CoRR, abs/2006.10239, 2020.
[25] Andrei A. Bulatov. Graphs of relational structures: restricted types. CoRR, abs/2006.11713, 2020.
[26] Andrei A. Bulatov. Separation of congruence intervals and implications. CoRR, abs/2007.07237, 2020.
[27] Andrei A. Bulatov and Víctor Dalmau. A simple algorithm for Mal’tsev constraints. SIAM J. Comput., 36(1):16–27, 2006.
[28] Andrei A. Bulatov, Peter Jeavons, and Andrei A. Krokhin. Classifying the complexity of constraints using finite algebras. SIAM J. Comput., 34(3):720–742, 2005.
[29] Andrei A. Bulatov, Andrei A. Krokhin, and Benoît Larose. Dualities for constraint satisfaction problems. In Complexity of Constraints - An Overview of Current Research Themes [Result of a Dagstuhl Seminar], pages 93–124, 2008.
[30] Andrei A. Bulatov and Matthew Valeriote. Recent results on the algebraic approach to the CSP. In Complexity of Constraints - An Overview of Current Research Themes [Result of a Dagstuhl Seminar], pages 68–92, 2008.
[31] Stanley Burris and H.P. Sankappanavar. A course in universal algebra, volume 78 of Graduate Texts in Mathematics. Springer-Verlag, New York-Berlin, 1981.
[32] Hubie Chen and Benoît Larose. Asking the metaquestions in constraint tractability. TOCT, 9(3):11:1–11:27, 2017.
[33] Víctor Dalmau and Andrei A. Krokhin. Majority constraints have bounded pathwidth duality. Eur. J. Comb., 29(4):821–837, 2008.
[34] Rina Dechter. Constraint processing. Morgan Kaufmann Publishers, 2003.
[35] Tomas Feder and Moshe Vardi. Monotone monadic SNP and constraint satisfaction. In STOC, pages 612–622, 1993.
[36] Tomas Feder and Moshe Vardi. The computational structure of monotone monadic SNP and constraint satisfaction: A study through datalog and group theory. SIAM Journal on Computing, 28:57–104, 1998.
[37] Ralph Freese and Ralph McKenzie. Commutator theory for congruence modular varieties, volume 125 of London Math. Soc. Lecture Notes. London, 1987.
[38] Ralph Freese and Matthew Valeriote. On the complexity of some Maltsev conditions. IJAC, 19(1):41–77, 2009.
[39] Georg Gottlob, Gianluigi Greco, and Francesco Scarcello. Treewidth and hypertree width. In Tractability: Practical Approaches to Hard Problems, pages 3–38. Cambridge University Press, 2014.
[40] Pavol Hell and Jaroslav Nešetřil. Graphs and homomorphisms, volume 28 of Oxford Lecture Series in Mathematics and its Applications. Oxford University Press, 2004.
[41] Pavol Hell and Jaroslav Nešetřil. On the complexity of H-coloring. Journal of Combinatorial Theory, Ser. B, 48:92–110, 1990.
[42] David Hobby and Ralph McKenzie. The Structure of Finite Algebras, volume 76 of Contemporary Mathematics. American Mathematical Society, Providence, R.I., 1988.
[43] Paweł M. Idziak, Petar Marković, Ralph McKenzie, Matthew Valeriote, and Ross Willard. Tractability and learnability arising from algebras with few subpowers. SIAM J. Comput., 39(7):3023–3037, 2010.
[44] Peter G. Jeavons, David A. Cohen, and Marc Gyssens. Closure properties of constraints. J. ACM, 44(4):527–548, 1997.
[45] Peter G. Jeavons. On the algebraic structure of combinatorial problems. Theoretical Computer Science, 200:185–204, 1998.
[46] Peter G. Jeavons, David A. Cohen, and Martin C. Cooper. Constraints, consistency and closure. Artificial Intelligence, 101(1-2):251–265, 1998.
[47] Phokion G. Kolaitis. Constraint satisfaction, databases, and logic. In IJCAI’03, pages 1587–1595, 2003.
[48] Phokion G. Kolaitis and Moshe Y. Vardi. A game-theoretic approach to constraint satisfaction. In AAAI’00, pages 175–181, 2000.
[49] Marcin Kozik. Weak consistency notions for all the CSPs of bounded width. In LICS, pages 633–641, 2016.
[50] Andrei A. Krokhin and Stanislav Zivny, editors. The Constraint Satisfaction Problem: Complexity and Approximability, volume 7 of Dagstuhl Follow-Ups. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017.
[51] Gábor Kun and Mario Szegedy. A new line of attack on the dichotomy conjecture. Eur. J. Comb., 52:338–367, 2016.
[52] Benoît Larose, Cynthia Loten, and Claude Tardif. A characterisation of first-order constraint satisfaction problems. Logical Methods in Computer Science, 3(4), 2007.
[53] Petar Marković. The complexity of CSPs on a 4-element set. Personal communication, 2011.
[54] Miklós Maróti. Malcev on top. Manuscript, available at ∼mmaroti/pdf/200x%20Maltsev%20on%20top.pdf, 2011.
[55] Miklós Maróti. Tree on top of Malcev. Manuscript, available at ∼mmaroti/pdf/200x%20Tree%20on%20top%20of%20Maltsev.pdf, 2011.
[56] Miklós Maróti and Ralph McKenzie. Existence theorems for weakly symmetric operations. Algebra Universalis, 59(3-4):463–489, 2008.
[57] Omer Reingold. Undirected connectivity in log-space. J. ACM, 55(4):17:1–17:24, 2008.
[58] Thomas J. Schaefer. The complexity of satisfiability problems. In STOC’78, pages 216–226, 1978.
[59] Ross Willard. Personal communication, 2019.
[60] Dmitriy Zhuk. On key relations preserved by a weak near-unanimity function. In ISMVL, pages 61–66, 2014.
[61] Dmitriy Zhuk. On CSP dichotomy conjecture. In Arbeitstagung Allgemeine Algebra AAA’92, page 32, 2016.
[62] Dmitriy Zhuk. A proof of CSP dichotomy conjecture. In FOCS, pages 331–342, 2017.
[63] Dmitriy Zhuk. The proof of CSP dichotomy conjecture. CoRR, abs/1704.01914, 2017.
[64] Dmitriy Zhuk. A modification of the CSP algorithm for infinite languages.