Algebraic Theory of Promise Constraint Satisfaction Problems, First Steps
Libor Barto∗
Charles University, Faculty of Mathematics and Physics
Department of Algebra, Sokolovská 83, 186 75 Praha 8, Czechia
email: [email protected]
31, 2019
Abstract
What makes a computational problem easy (e.g., in P, that is, solvable in polynomial time) or hard (e.g., NP-hard)? This fundamental question now has a satisfactory answer for a quite broad class of computational problems, so called fixed-template constraint satisfaction problems (CSPs) – it has turned out that their complexity is captured by a certain specific form of symmetry. This paper explains an extension of this theory to a much broader class of computational problems, the promise CSPs, which includes relaxed versions of CSPs such as the problem of finding a 137-coloring of a 3-colorable graph.
In Computational Complexity we often try to place a given computational problem into some familiar complexity class, such as P, NP-complete, etc. In other words, we try to determine the image of a computational problem under the following mapping Φ:

Φ : computational problems → complexity classes
        problem ↦ its complexity class

When we try to achieve this goal for a whole class of computational problems, say S, it is a natural idea to look for some intermediate collection I of "invariants" and a decomposition of Φ through I:

S →_Ψ I → complexity classes

The elements of I are then objects that exactly capture the computational complexity of problems in S. The larger S is and the more objects Ψ glues together, the better such a decomposition is.

This idea proved greatly useful for an interesting class of problems, so called fixed-template constraint satisfaction problems (CSPs), and eventually led to a full complexity classification result [17, 30]. In a decomposition, suggested in [20] and proved in [24], Ψ assigns to a CSP a certain algebraic object that describes, informally, the high dimensional symmetries of the CSP. This basic insight of the so called algebraic approach to CSPs was later twice improved [16, 7], giving us a chain

CSPs →_Ψ I_1 → I_2 → I_3 → complexity classes.

The basics of the algebraic theory can be adapted and applied in various generalizations and variants of the fixed-template CSPs, see surveys in [26]. One particularly exciting direction is a recently proposed significant generalization of CSPs, so called promise CSPs (PCSPs) [3, 14]. This framework is substantially richer, both on the algorithmic and the hardness side, and a full complexity classification is wide open even in very restricted subclasses.

∗ Libor Barto has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 771005).
On the other hand, the algebraic basics can be generalized from CSP to PCSP and, moreover, one of the results in [18] not only gives such a generalization but also provides an additional insight and simplifies the algebraic theory of CSPs.

The aim of this paper is to explain this result (here Theorem 3.8) and the development in CSPs leading to it (Theorems 2.10, 2.15, 2.18). The most recent material comes from the conference papers [18] and [4], which will be merged and expanded in [5]. Very little preliminary knowledge is assumed, but an interested reader may find an in-depth introduction to the fixed-template CSP and its variants in [26].
For the purposes of this paper, we define a finite relational structure as a tuple A = (A; R_1, . . . , R_n), where A is a finite set, called the domain of A, and each R_i is a relation on A of some arity, that is, R_i ⊆ A^{ar(R_i)}, where ar(R_i) is a natural number.

A primitive positive formula (pp-formula) over A is a formula that uses only existential quantification, conjunction, relations in A, and the equality relation. We will work only with formulas in prenex normal form.

Definition 2.1.
Fix a finite relational structure A. The CSP over A, written CSP(A), is the problem of deciding whether a given pp-sentence over A is true. In this context, A is called the template for CSP(A).

For example, if A = (A; R, S) and both R and S are binary, then an input to CSP(A) is, e.g.,

(∃x_1 ∃x_2 . . . ∃x_5) R(x_1, x_2) ∧ S(x_2, x_3) ∧ R(x_4, x_5).

This sentence is true if and only if there exists a satisfying assignment, that is, a mapping f : {x_1, . . . , x_5} → A such that (f(x_1), f(x_2)) ∈ R, (f(x_2), f(x_3)) ∈ S, and (f(x_4), f(x_5)) ∈ R. Each conjunct thus can be thought of as a constraint limiting f, and the goal is to decide whether there is an assignment satisfying each constraint.

Clearly, CSP(A) is always in NP.

The CSP over A can be also defined as a search problem where the goal is to find a satisfying assignment when it exists. It has turned out that the search problem is no harder than the decision problem presented in Definition 2.1 [16].

Typical problems covered by the fixed-template CSP framework are satisfiability problems, (hyper)graph coloring problems, and equation solvability problems. Let us look at several examples. We use here the notation E_k = {0, 1, . . . , k − 1}.

Example 2.2.
Let 3SAT = (E_2; R_000, R_001, . . . , R_111), where R_abc = E_2^3 \ {(a, b, c)} for all a, b, c ∈ {0, 1}. An input to CSP(3SAT) is, e.g.,

(∃x_1 ∃x_2 . . .) R_001(x_1, x_2, x_3) ∧ R_110(x_2, x_4, x_5) ∧ R_000(x_1, x_3, x_4) ∧ . . . .

Observe that this sentence is true if and only if the propositional formula

(x_1 ∨ x_2 ∨ ¬x_3) ∧ (¬x_2 ∨ ¬x_4 ∨ x_5) ∧ (x_1 ∨ x_3 ∨ x_4) ∧ . . .

is satisfiable. Therefore CSP(3SAT) is essentially the same as the 3SAT problem, a well known NP-complete problem.

On the other hand, recall that the 2SAT problem, which is the CSP over 2SAT = (E_2; R_00, R_01, R_10, R_11), where R_ab = E_2^2 \ {(a, b)}, is in P.

Example 2.3.
Let K_3 = (E_3; N_3), where N_3 is the binary inequality relation, i.e., N_3 = {(a, b) ∈ E_3^2 : a ≠ b}. An input to CSP(K_3) is, e.g.,

(∃x_1 . . . ∃x_4) N(x_1, x_2) ∧ N(x_1, x_3) ∧ N(x_2, x_3) ∧ N(x_2, x_4) ∧ N(x_3, x_4).

Here an input can be drawn as a graph – its vertices are the variables, and vertices x, y are declared adjacent iff the input contains a conjunct N(x, y) or N(y, x). For example, the graph associated to the input above is the four-vertex graph obtained by merging two triangles along an edge. Clearly, an input sentence is true if and only if the vertices of the associated graph can be colored by colors 0, 1, and 2 so that adjacent vertices receive different colors. Therefore CSP(K_3) is essentially the same as the 3-coloring problem for graphs, another well known NP-complete problem.

More generally, CSP(K_k), where K_k = (E_k; N_k) and N_k is the inequality relation on E_k, is NP-complete for every k ≥ 3 (and in P for k ≤ 2).

Example 2.4.
Let 3NAE_k = (E_k; 3NAE_k), where 3NAE_k is the ternary not-all-equal relation, i.e.,

3NAE_k = E_k^3 \ {(a, a, a) : a ∈ E_k}.

Taking the viewpoint of Example 2.2, the CSP over 3NAE_2 is the positive not-all-equal 3SAT, where one is given a 3SAT formula without negations and the aim is to decide whether there is an assignment such that, in every clause, not all variables get the same value. This problem is NP-complete [29].

From the graph theoretical viewpoint, CSP(3NAE_k) is the problem of deciding whether a given 3-uniform hypergraph admits a coloring by k colors so that no hyperedge is monochromatic.

Example 2.5.
Let 1IN3 = (E_2; 1IN3), where 1IN3 is the ternary "one-in-three" relation 1IN3 = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. The CSP over 1IN3 is the positive 1-in-3SAT problem: given a 3SAT formula without negations, decide whether there is an assignment such that exactly one variable in each clause gets the value 1. This problem is NP-complete [29].

Example 2.6.
Let 3LIN_5 = (E_5; L_0000, L_0001, . . . , L_4444), where

L_abcd = {(x, y, z) ∈ E_5^3 : ax + by + cz = d (mod 5)}.

An input, such as

(∃x_1 ∃x_2 . . .) L_1234(x_1, x_2, x_3) ∧ L_1321(x_4, x_5, x_6) ∧ . . . ,

can be written as a system of linear equations over the 5-element field Z_5, such as

1x_1 + 2x_2 + 3x_3 = 4, x_4 + 3x_5 + 2x_6 = 1, . . . ,

therefore CSP(3LIN_5) is essentially the same problem as deciding whether a system of linear equations over Z_5 (with each equation containing 3 variables) has a solution. This problem is in P.

(A technical remark on Example 2.4: we should rather say a hypergraph whose hyperedges have size at most 3, because of conjuncts of the form 3NAE_k(x, x, y) or 3NAE_k(x, x, x). Let us ignore this minor technical imprecision.)

2.2 1st step: polymorphisms

The crucial concept for the algebraic approach to the CSP is a polymorphism, which is a homomorphism from a cartesian power of a structure to the structure:
Definition 2.7.
Let A = (A; R_1, . . . , R_n) be a relational structure. A k-ary (total) function f : A^k → A is a polymorphism of A if it is compatible with every relation R_i, that is, for all tuples r_1, . . . , r_k ∈ R_i, the tuple f(r_1, . . . , r_k) (where f is applied component-wise) is in R_i.

By Pol(A) we denote the set of all polymorphisms of A.

The compatibility condition is often stated as follows: for any (ar(R_i) × k)-matrix whose column vectors are in R_i, the vector obtained by applying f to its rows is in R_i as well.

Note that the unary polymorphisms of A are exactly the endomorphisms of A. One often thinks of endomorphisms (or just automorphisms) as symmetries of the structure. In this sense, polymorphisms can be thought of as higher dimensional symmetries.

For any domain A and any i ≤ k, the k-ary projection to the i-th coordinate, that is, the function π_i^k : A^k → A defined by π_i^k(x_1, . . . , x_k) = x_i, is a polymorphism of every structure with domain A. These are the trivial polymorphisms. The following examples show some nontrivial polymorphisms.

Example 2.8.
Consider the template 2SAT from Example 2.2. It is easy to verify that the ternary majority function maj : E_2^3 → E_2 given by

maj(x, x, y) = maj(x, y, x) = maj(y, x, x) = x for all x, y ∈ E_2

is a polymorphism of 2SAT.

In fact, whenever a relation R ⊆ E_2^m is compatible with maj, it can be pp-defined (that is, defined by a pp-formula) from relations in 2SAT (see e.g. [25]). Now for any template A = (E_2; R_1, . . . , R_n) with polymorphism maj, an input of CSP(A) can be easily rewritten to an equivalent input of CSP(2SAT), and therefore CSP(A) is in P.

Example 2.9.
Consider the template 3LIN_5 from Example 2.6. Each relation in this structure is an affine subspace of Z_5^3. Every affine subspace is closed under affine combinations, therefore, for every k and every t_1, . . . , t_k ∈ E_5 such that t_1 + · · · + t_k = 1 (mod 5), the k-ary function f_{t_1,...,t_k} : E_5^k → E_5 defined by

f_{t_1,...,t_k}(x_1, . . . , x_k) = t_1 x_1 + · · · + t_k x_k (mod 5)

is a polymorphism of 3LIN_5.

Conversely, every subset of E_5^m closed under affine combinations is an affine subspace of Z_5^m. It follows that if every f_{t_1,...,t_k} is a polymorphism of A = (E_5; R_1, . . . , R_n), then inputs to CSP(A) can be rewritten to systems of linear equations over Z_5 and thus CSP(A) is in P.

The above examples also illustrate that polymorphisms influence the computational complexity. The first step of the algebraic approach was to realize that this is by no means a coincidence.

Theorem 2.10 ([24]). The complexity of CSP(A) depends only on Pol(A). More precisely, if A and B are finite relational structures and Pol(A) ⊆ Pol(B), then CSP(B) is (log-space) reducible to CSP(A). In particular, if Pol(A) = Pol(B), then CSP(A) and CSP(B) have the same complexity.

Proof sketch. If Pol(A) ⊆ Pol(B), then relations in B can be pp-defined from relations in A by a classical result in Universal Algebra [21, 10, 11]. This gives a reduction from CSP(B) to CSP(A).

Theorem 2.10 can be used as a tool for proving NP-hardness: when A has only trivial polymorphisms (and has domain of size at least two), any CSP on the same domain can be reduced to CSP(A), and therefore CSP(A) is NP-complete. This NP-hardness criterion is not perfect: e.g., CSP(3NAE_2) is NP-complete, yet 3NAE_2 has the nontrivial endomorphism x ↦ 1 − x.

Theorem 2.10 shows that the set of polymorphisms determines the complexity of a CSP. What information do we really need to know about the polymorphisms to determine the complexity? It has turned out that it is sufficient to know which functional equations they solve. In the following definition we use a standard universal algebraic term for a functional equation, a strong Maltsev condition.
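The compatibility condition of Definition 2.7 can be checked by brute force over a finite template. The following sketch is my own illustration (the helper name `is_polymorphism` and the encoding of relations as sets of tuples are not notation from the paper); it verifies that the majority function is a polymorphism of 2SAT and that x ↦ 1 − x is a (unary) polymorphism, i.e. endomorphism, of 3NAE_2:

```python
from itertools import product

def is_polymorphism(f, k, relations):
    """Definition 2.7 by brute force: for every relation R and every
    choice of k tuples r_1, ..., r_k in R, applying f componentwise
    must yield a tuple of R again."""
    for R in relations:
        for rows in product(R, repeat=k):                 # rows = (r_1, ..., r_k)
            image = tuple(f(*col) for col in zip(*rows))  # f applied to each coordinate
            if image not in R:
                return False
    return True

# 2SAT template: R_ab = E_2^2 \ {(a, b)}
E2 = (0, 1)
two_sat = [{t for t in product(E2, repeat=2) if t != (a, b)}
           for a, b in product(E2, repeat=2)]

maj = lambda x, y, z: 1 if x + y + z >= 2 else 0          # ternary majority on E_2
print(is_polymorphism(maj, 3, two_sat))                   # True

# 3NAE_2 and its nontrivial endomorphism x |-> 1 - x
three_nae = [{t for t in product(E2, repeat=3) if len(set(t)) > 1}]
print(is_polymorphism(lambda x: 1 - x, 1, three_nae))     # True
```

By contrast, the binary AND is not a polymorphism of 2SAT (it breaks the relation R_00, i.e. the clause x ∨ y), so the check correctly rejects it.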
Definition 2.11. A strong Maltsev condition over a set of function symbols Σ is a finite set of equations of the form t = s, where t and s are terms built from variables and symbols in Σ.

Let M be a set of functions on a common domain. A strong Maltsev condition S is satisfied in M if the function symbols of Σ can be interpreted in M so that each equation in S is satisfied for every choice of variables.

Example 2.12.
A strong Maltsev condition over Σ = {f, g, h} (where f and g are binary symbols and h is ternary) is, e.g.,

f(g(f(x, y), y), z) = g(x, h(y, y, z))
f(x, y) = g(g(x, y), x).

This condition is satisfied in the set of all projections (on any domain) since, by interpreting f and g as π_1^2 and h as π_1^3, both equations are satisfied for every x, y, z in the domain – both sides are equal to x.

The strong Maltsev condition in the above example is not interesting for us since it is satisfied in every Pol(A). Such conditions are called trivial:

Definition 2.13.
A strong Maltsev condition is called trivial if it is satisfied in the set of all projections on a two-element set (equivalently, it is satisfied in Pol(A) for every A).

Two nontrivial strong Maltsev conditions are shown in the following example.

Example 2.14.
The strong Maltsev condition (over a single ternary symbol m)

m(x, x, y) = x
m(x, y, x) = x
m(y, x, x) = x

is nontrivial since each of the possible interpretations π_1^3, π_2^3, π_3^3 of m falsifies one of the equations. This condition is satisfied in Pol(2SAT) by interpreting m as the majority function, see Example 2.8.

The strong Maltsev condition

p(x, x, y) = y
p(y, x, x) = y

is also nontrivial. It is satisfied in Pol(3LIN_5) by interpreting p as x + 4y + z (mod 5).

In fact, if Pol(A) satisfies one of the strong Maltsev conditions in this example, then CSP(A) is in P (see e.g. [6]).

The following theorem is (a restatement of) the second crucial step of the algebraic approach.

Theorem 2.15 ([16], see also [9]). The complexity of CSP(A) depends only on strong Maltsev conditions satisfied by Pol(A). More precisely, if A and B are finite relational structures and each strong Maltsev condition satisfied in Pol(A) is satisfied in Pol(B), then CSP(B) is (log-space) reducible to CSP(A). In particular, if Pol(A) and Pol(B) satisfy the same strong Maltsev conditions, then CSP(A) and CSP(B) have the same complexity.

Proof sketch. The proof can be done in a similar way as for Theorem 2.10. Instead of pp-definitions one uses more general constructions called pp-interpretations and, on the algebraic side, the Birkhoff HSP theorem [8].

Theorem 2.15 gives us an improved tool for proving NP-hardness: if Pol(A) satisfies only trivial strong Maltsev conditions, then CSP(A) is NP-hard. This criterion is better, e.g., it can be applied to CSP(3NAE_2), but still not perfect, e.g., it cannot be applied to the CSP over the disjoint union of two copies of K_3.

Strong Maltsev conditions that appear naturally in the CSP theory or in Universal Algebra are often of an especially simple form: they involve no nesting of function symbols. The third step in the basics of the algebraic theory was to realize that this is also not a coincidence.

Definition 2.16.
A strong Maltsev condition is called a minor condition if each side of every equation contains exactly one occurrence of a function symbol. In other words, each equation in a minor condition is of the form "symbol(variables) = symbol(variables)".
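For minor conditions, triviality in the sense of Definition 2.13 is easy to test mechanically: a projection applied to variables returns one of those variables, and two variables agree under every assignment over a two-element set iff they are the same variable. The sketch below is my own illustration (the encoding of a condition as a list of (symbol, argument-tuple) pairs is invented, not notation from the paper):

```python
from itertools import product

# A minor condition is a list of equations; each side is a pair
# (symbol, tuple of variable names), e.g. ("f", ("x", "y")).
def is_trivial(condition):
    """Is the minor condition satisfied by projections on a two-element
    set?  Interpreting a symbol by a projection just selects one argument
    position, so we try all choices of one coordinate per symbol."""
    symbols = {}
    for lhs, rhs in condition:
        for sym, args in (lhs, rhs):
            symbols.setdefault(sym, len(args))
    names = sorted(symbols)
    for choice in product(*(range(symbols[s]) for s in names)):
        proj = dict(zip(names, choice))      # symbol -> selected coordinate
        # An equation holds universally iff both projections select
        # the same variable name.
        if all(l_args[proj[l_sym]] == r_args[proj[r_sym]]
               for (l_sym, l_args), (r_sym, r_args) in condition):
            return True
    return False

# f(x, y) = g(y, x) is trivial: take f = projection to the 1st argument,
# g = projection to the 2nd.
mixed = [(("f", ("x", "y")), ("g", ("y", "x")))]
# f(x, y) = f(y, x) is nontrivial: no projection is commutative.
symmetric = [(("f", ("x", "y")), ("f", ("y", "x")))]
print(is_trivial(mixed), is_trivial(symmetric))   # True False
```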
Example 2.17.
The condition in Example 2.12 is not a minor condition since, e.g., the left-hand side of the first equation involves three occurrences of function symbols.

The conditions in Example 2.14 are not minor conditions either, since the right-hand sides do not contain any function symbol. However, these conditions have close friends which are minor conditions. For instance, the friend of the second condition is the minor condition

p(x, x, y) = p(y, y, y)
p(y, x, x) = p(y, y, y).

Note that this condition is also satisfied in Pol(3LIN_5) by the same interpretation as in Example 2.14, that is, x + 4y + z (mod 5).

The following theorem is a strengthening of Theorem 2.15. We give only the informal part of the statement; the precise formulation is analogous to Theorem 2.15.

Theorem 2.18 ([7]). The complexity of
CSP(A) (for finite A) depends only on minor conditions satisfied by Pol(A).

Proof sketch. The proof again follows the same pattern, by further generalizing pp-interpretations (to so called pp-constructions) and the Birkhoff HSP theorem.
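The interpretation claimed in Example 2.17 can be verified exhaustively; the following quick sanity check is my own, not code from the paper:

```python
from itertools import product

E5 = range(5)
p = lambda x, y, z: (x + 4 * y + z) % 5   # the interpretation from Example 2.14

# The minor condition p(x, x, y) = p(y, y, y), p(y, x, x) = p(y, y, y)
# holds for every x, y in E_5: indeed p(x, x, y) = 5x + y = y (mod 5)
# and p(y, y, y) = 6y = y (mod 5).
assert all(p(x, x, y) == p(y, y, y) for x, y in product(E5, repeat=2))
assert all(p(y, x, x) == p(y, y, y) for x, y in product(E5, repeat=2))

# The condition is nontrivial: each ternary projection falsifies one of
# the two equations already on a two-element domain.
for proj in (lambda x, y, z: x, lambda x, y, z: y, lambda x, y, z: z):
    assert any(proj(x, x, y) != proj(y, y, y) or proj(y, x, x) != proj(y, y, y)
               for x, y in product((0, 1), repeat=2))
print("condition verified")
```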
Just like Theorems 2.10 and 2.15 give hardness criteria for CSPs, we get an improved sufficient condition for NP-hardness as a corollary of Theorem 2.18.
Corollary 2.19.
Let A be a finite relational structure such that Pol(A) satisfies only trivial minor conditions. Then CSP(A) is NP-complete.

Bulatov, Jeavons, and Krokhin have conjectured [16] that satisfying only trivial minor conditions is actually the only reason for hardness. (Their conjecture is equivalent but was, of course, originally stated in a different language – the significance of minor conditions in CSPs was identified much later.) Intensive efforts to prove this conjecture, called the tractability conjecture or the algebraic dichotomy conjecture, have recently culminated in two independent proofs by Bulatov and Zhuk:

Theorem 2.20 ([17], [30]). If the polymorphisms of a finite relational structure A satisfy a nontrivial minor condition, then CSP(A) is in P.
Many fixed-template CSPs, such as finding a 3-coloring of a graph or finding a satisfying assignment to a 3SAT formula, are hard computational problems. There are two ways to relax the requirement on the assignment in order to get a potentially simpler problem. The first one is to require only a specified fraction of the constraints to be satisfied. For example, given a satisfiable 3SAT input, is it easier to find an assignment satisfying at least 90% of the clauses? A celebrated result of Håstad [22], which strengthens the famous PCP Theorem [1, 2], proves that the answer is negative – it is still an NP-complete problem. (Actually, any fraction greater than 7/8 gives rise to an NP-complete problem, while the fraction 7/8 is achievable in polynomial time.)

The second type of relaxation, the one we consider in this paper, is to require that a specified weaker version of every constraint is satisfied. For example, we want to find a 100-coloring of a 3-colorable graph, or we want to find a valid CSP(3NAE_2) assignment to a true input of CSP(1IN3).

Definition 3.1.
Let A = (A; R_1^A, R_2^A, . . . , R_n^A) and B = (B; R_1^B, R_2^B, . . . , R_n^B) be two similar finite relational structures (that is, R_i^A and R_i^B have the same arity for each i), and assume that there exists a homomorphism A → B. Such a pair (A, B) is referred to as a PCSP template.

The PCSP over (A, B), denoted PCSP(A, B), is the problem to distinguish, given a pp-sentence φ over the relational symbols R_1, . . . , R_n, between the cases that φ is true in A (answer "Yes") and φ is not true in B (answer "No").

For example, consider A = (A; R^A, S^A) and B = (B; R^B, S^B), where all the relations are binary. An input to PCSP(A, B) is, e.g.,

(∃x_1 ∃x_2 . . . ∃x_5) R(x_1, x_2) ∧ S(x_2, x_3) ∧ R(x_4, x_5).

The algorithm should answer "Yes" if the sentence is true in A, i.e., the following sentence is true:

(∃x_1 ∃x_2 . . . ∃x_5) R^A(x_1, x_2) ∧ S^A(x_2, x_3) ∧ R^A(x_4, x_5),

and the algorithm should answer "No" if the sentence is not true in B. In case that neither of the cases takes place, we do not have any requirements on the algorithm. Alternatively, we can say that the algorithm is promised that the input satisfies either "Yes" or "No" and it is required to decide which of these two cases takes place.

Note that the assumption that A → B is necessary for the problem to make sense; otherwise, the "Yes" and "No" cases would not be disjoint. Also observe that CSP(A) is the same problem as PCSP(A, A).

The search version of PCSP(A, B) is perhaps a bit more natural problem: the goal is to find a B-satisfying assignment given an A-satisfiable input. Unlike in the CSP, it is not known whether the search version can be harder than the decision version presented in Definition 3.1.

The examples below show that PCSPs are richer than CSPs, both on the algorithmic and the hardness side.
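For instance, the pair (1IN3, 3NAE_2) from Examples 2.4 and 2.5 forms a PCSP template, since the identity map on E_2 is a homomorphism 1IN3 → 3NAE_2: every "exactly one 1" triple is in particular not-all-equal. A brute-force homomorphism test (my own sketch; the helper name is invented) makes this concrete:

```python
from itertools import product

def is_homomorphism(h, rels_a, rels_b):
    """h maps the domain of A to the domain of B; it is a homomorphism
    if the image of every tuple of R_i^A lies in R_i^B."""
    return all(tuple(h[x] for x in t) in Rb
               for Ra, Rb in zip(rels_a, rels_b) for t in Ra)

one_in_three = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
nae = {t for t in product((0, 1), repeat=3) if len(set(t)) > 1}

identity = {0: 0, 1: 1}
print(is_homomorphism(identity, [one_in_three], [nae]))   # True: valid template
```

A map that collapses everything to 0 fails, because the image (0, 0, 0) of any 1IN3 tuple is all-equal; so the "Yes" and "No" cases would not be disjoint without a homomorphism.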
Example 3.2.
Recall the structure K_k from Example 2.3 consisting of the inequality relation on a k-element set. For k ≤ l, the PCSP over (K_k, K_l) is the problem to distinguish between k-colorable graphs and graphs that are not even l-colorable (or, in the search version, the problem to find an l-coloring of a k-colorable graph).

Unlike for the case k = l, the complexity of this problem for 3 ≤ k < l is a notorious open question. It is conjectured that PCSP(K_k, K_l) is NP-hard for every k < l, but this conjecture was confirmed only in special cases: for l ≤ 2k − 2 [12] and l ≤ 2^{Ω(k^{1/3})} [23]. The algebraic development discussed in the next subsection helped in improving the former result to l ≤ 2k − 1 [18].

Example 3.3.
Recall the structure 3NAE_k from Example 2.4 consisting of the ternary not-all-equal relation on a k-element set. For k ≤ l, the PCSP over (3NAE_k, 3NAE_l) is essentially the problem to distinguish between k-colorable 3-uniform hypergraphs and 3-uniform hypergraphs that are not even l-colorable. This problem is NP-hard for every 2 ≤ k ≤ l [19]; the proof uses strong tools, the PCP theorem and Lovász's theorem on the chromatic number of Kneser graphs [27].

Example 3.4.
Recall from Example 2.5 the structure 1IN3 = (E_2; 1IN3) with the ternary "one-in-three" relation 1IN3. The PCSP over (1IN3, 3NAE_2) is the problem to distinguish between 3-uniform hypergraphs which admit a coloring by colors 0 and 1 so that exactly one vertex in each hyperedge receives the color 1, and 3-uniform hypergraphs that are not even 2-colorable.

This problem, even its search version, admits elegant polynomial time algorithms [14, 15] – one is based on solving linear equations over the integers, another one on linear programming. For this specific template, the algorithm can be further simplified as follows.

We are given a 3-uniform hypergraph which admits a coloring by colors 0 and 1 so that exactly one vertex in each hyperedge receives the color 1, and we want to find a 2-coloring. We create a system of linear equations over the rationals as follows: for each hyperedge {x, y, z} we introduce the equation x + y + z = 1. By the assumption on the input hypergraph, this system is solvable in {0, 1} ⊆ Q (in fact, {0, 1}-solutions are the same as valid 1IN3-colorings). Finding a solution in {0, 1} is hard, but it is possible to solve the system in Q \ {1/3} in polynomial time by a simple adjustment of Gaussian elimination. Now we assign 1 to a vertex x if x > 1/3 and 0 otherwise. Since x + y + z = 1 for each hyperedge, it cannot happen that all three values are greater than 1/3 (the sum would exceed 1), nor that all three are smaller than 1/3 (the sum would be below 1); therefore no hyperedge is monochromatic and we have found a valid 2-coloring.

Observe that, to solve the PCSP over (1IN3, 3NAE_2), the presented algorithm uses a CSP over an infinite structure, namely (Q \ {1/3}; R), where R = {(x, y, z) ∈ (Q \ {1/3})^3 : x + y + z = 1}. In fact, the infinity is necessary for this PCSP, see [4] for a formal statement and a proof.

After the introduction of the PCSP framework, it has quickly turned out that both the notion of a polymorphism and Theorem 2.10 have straightforward generalizations.
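The rounding step in the algorithm of Example 3.4 can be sanity-checked over a grid of rationals; this is my own test of the rounding argument, not the paper's adjusted Gaussian elimination:

```python
from fractions import Fraction

third = Fraction(1, 3)
grid = [Fraction(n, 12) for n in range(-24, 37)]   # sample rationals in [-2, 3]

for x in grid:
    for y in grid:
        z = 1 - x - y                               # forces x + y + z = 1
        if third in (x, y, z):
            continue                                # the algorithm avoids the value 1/3
        colors = {v > third for v in (x, y, z)}     # color 1 iff value > 1/3
        # Not all three can exceed 1/3 (sum would be > 1) and not all
        # three can be below 1/3 (sum would be < 1), so both colors occur.
        assert colors == {True, False}
print("rounding always yields a valid 2-coloring")
```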
Definition 3.5.
Let A = (A; R_1^A, . . . ) and B = (B; R_1^B, . . . ) be two similar relational structures. A k-ary (total) function f : A^k → B is a polymorphism of (A, B) if it is compatible with every pair (R_i^A, R_i^B), that is, for all tuples r_1, . . . , r_k ∈ R_i^A, the tuple f(r_1, . . . , r_k) is in R_i^B.

By Pol(A, B) we denote the set of all polymorphisms of (A, B).

Example 3.6.
For every k which is not divisible by 3, the k-ary "1/3-threshold" function f : E_2^k → E_2 defined by

f(x_1, . . . , x_k) = 1 if (x_1 + · · · + x_k)/k > 1/3, and 0 otherwise,

is a polymorphism of the PCSP template (1IN3, 3NAE_2) from Example 3.4. Any PCSP whose template (over the domains E_2 and E_2) admits all these polymorphisms is in P [14, 15].

Theorem 3.7 ([13]). The complexity of
PCSP(A, B) depends only on Pol(A, B).

Proof sketch. The proof is similar to that of Theorem 2.10, using [28] instead of [21, 10, 11].

Note that, in general, composition of polymorphisms of a PCSP template is not even well-defined. Therefore the second step, considering strong Maltsev conditions satisfied by polymorphisms, does not make sense for PCSPs. However, minor conditions make perfect sense and they do capture the complexity of PCSPs, as proved in [18]. Furthermore, the paper [18] also provides an alternative proof by directly relating a PCSP to a computational problem concerning minor conditions!

Theorem 3.8 ([18]). Let (A, B) be a PCSP template and M = Pol(A, B). The following computational problems are equivalent for every sufficiently large N.

• PCSP(A, B).
• Distinguish, given a minor condition C whose function symbols have arity at most N, between the cases that C is trivial and C is not satisfied in M.

Proof sketch. The reduction from PCSP(A, B) to the second problem works as follows. Given an input to the PCSP, we introduce one |A|-ary function symbol g_a for each variable a and one |R^A|-ary function symbol f_C for each conjunct R(. . . ). The way to build a minor condition is quite natural; for example, the input

(∃a ∃b ∃c ∃d) R(c, a, b) ∧ R(a, d, c) ∧ . . .

to PCSP(1IN3, 3NAE_2) is transformed to the minor condition

f_1(x_1, x_0, x_0) = g_c(x_0, x_1)
f_1(x_0, x_1, x_0) = g_a(x_0, x_1)
f_1(x_0, x_0, x_1) = g_b(x_0, x_1)
f_2(x_1, x_0, x_0) = g_a(x_0, x_1)
f_2(x_0, x_1, x_0) = g_d(x_0, x_1)
f_2(x_0, x_0, x_1) = g_c(x_0, x_1)
. . .

It is easy to see that a sentence that is true in A is transformed to a trivial minor condition. On the other hand, if the minor condition is satisfied in M, say by the functions denoted f'_1, f'_2, g'_a, . . .
, then the mapping a ↦ g'_a(0, 1), b ↦ g'_b(0, 1), etc., is a B-satisfying assignment of the sentence – this can be deduced from the fact that the f'_i and g'_a are polymorphisms.

The reduction in the other direction is based on the idea that the question "Is this minor condition satisfied by polymorphisms of A?" can be interpreted as an input to CSP(A). The main ingredient is to look at functions as tuples (their tables); then "f is a polymorphism" translates to a conjunction, and equations can be simulated by merging variables.

Theorem 3.8 implies Theorem 2.18 (and its generalization to PCSPs) since the computational problem in the second item clearly depends only on the minor conditions satisfied in M. The proof sketched above

• is simple and does not (explicitly) use any other results, such as the correspondence between polymorphisms and pp-definitions used in Theorem 2.10 or the Birkhoff HSP theorem used in Theorem 2.15, and
• is based on constructions which have already appeared, in some form, in several contexts; in particular, the second item is related to important problems in approximation, versions of the Label Cover problem (see [18, 5]).

The theorem and its proof are simple, but nevertheless very useful. For example, the hardness of PCSP(K_k, K_{2k−1}) mentioned in Example 3.2 was proved in [18] by showing that every minor condition satisfied in Pol(K_k, K_{2k−1}) is satisfied in Pol(3NAE_2, 3NAE_l) (for some l) and then using the NP-hardness of PCSP(3NAE_2, 3NAE_l) proved in [19] (see Example 3.3).

The PCSP framework is much richer than the CSP framework; on the other hand, the basics of the algebraic theory generalize from CSP to PCSP, as shown in Theorem 3.8.
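The first reduction in the proof sketch of Theorem 3.8 can be made concrete for the template (1IN3, 3NAE_2). The sketch below is my own (the encoding of equations as (symbol, argument-tuple) pairs is invented): it builds the minor condition from the conjuncts and checks that a satisfying 1IN3-assignment turns every symbol into a projection witnessing triviality, g_v selecting coordinate h(v) and f_i selecting the tuple of R hit by the conjunct:

```python
R = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]        # 1IN3, tuples in a fixed order

def minor_condition(conjuncts):
    """For each conjunct R(v_1, v_2, v_3) introduce a |R|-ary symbol f_i
    and, for each position j, the equation
        f_i(x_{r_j} for r in R) = g_{v_j}(x_0, x_1)."""
    eqs = []
    for i, variables in enumerate(conjuncts, start=1):
        for j, v in enumerate(variables):
            lhs = ("f%d" % i, tuple("x%d" % r[j] for r in R))
            eqs.append((lhs, ("g_" + v, ("x0", "x1"))))
    return eqs

conjuncts = [("c", "a", "b"), ("a", "d", "c")]
eqs = minor_condition(conjuncts)
print(eqs[0])   # f_1(x_1, x_0, x_0) = g_c(x_0, x_1), encoded as nested tuples

# A satisfying 1IN3-assignment (here a = 1, the rest 0) yields projections
# witnessing triviality: since projections applied to variables return
# variables, an equation holds universally iff the selected names agree.
h = {"a": 1, "b": 0, "c": 0, "d": 0}
proj = {"g_" + v: h[v] for v in h}
proj.update(("f%d" % i, R.index(tuple(h[v] for v in vs)))
            for i, vs in enumerate(conjuncts, start=1))
assert all(l_args[proj[l_sym]] == r_args[proj[r_sym]]
           for (l_sym, l_args), (r_sym, r_args) in eqs)
print("trivial: witnessed by projections")
```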
Strikingly, the computational problems in Theorem 3.8 are (promise and restricted versions of) two "similar" problems:

(i) Given a structure A and a first-order sentence φ over the same signature, decide whether A satisfies φ.
(ii) Given a structure A and a first-order sentence φ in a different signature, decide whether the symbols in φ can be interpreted in A so that A satisfies φ.

Indeed, CSP(A) is the problem (i) with A a fixed relational structure and φ a pp-sentence (and PCSP is a promise version of this problem), whereas a promise version of the problem (ii) restricted to a fixed A of purely functional signature and universally quantified conjunctive first-order sentences φ is the second item in Theorem 3.8. Variants of problem (i) appear in many contexts throughout Computer Science. What about problem (ii)?

References

[1] Sanjeev Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy. Proof verification and the hardness of approximation problems. J. ACM, 45(3):501–555, May 1998.
[2] Sanjeev Arora and Shmuel Safra. Probabilistic checking of proofs: A new characterization of NP. J. ACM, 45(1):70–122, January 1998.
[3] Per Austrin, Venkatesan Guruswami, and Johan Håstad. (2 + ε)-SAT is NP-hard. SIAM J. Comput., 46(5):1554–1573, 2017.
[4] Libor Barto. Promises make finite (constraint satisfaction) problems infinitary. To appear in LICS 2019, 2019.
[5] Libor Barto, Jakub Bulín, Andrei A. Krokhin, and Jakub Opršal. Algebraic approach to promise constraint satisfaction. In preparation, 2019.
[6] Libor Barto, Andrei Krokhin, and Ross Willard. Polymorphisms, and how to use them. In Andrei Krokhin and Stanislav Živný, editors, The Constraint Satisfaction Problem: Complexity and Approximability, volume 7 of Dagstuhl Follow-Ups, pages 1–44. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, Dagstuhl, Germany, 2017.
[7] Libor Barto, Jakub Opršal, and Michael Pinsker. The wonderland of reflections. Israel Journal of Mathematics, 223(1):363–398, Feb 2018.
[8] Garrett Birkhoff. On the structure of abstract algebras. Mathematical Proceedings of the Cambridge Philosophical Society, 31(4):433–454, 1935.
[9] Manuel Bodirsky. Constraint satisfaction problems with infinite templates. In Nadia Creignou, Phokion G. Kolaitis, and Heribert Vollmer, editors, Complexity of Constraints, volume 5250 of Lecture Notes in Computer Science, pages 196–228. Springer, 2008.
[10] V. G. Bodnarchuk, L. A. Kaluzhnin, V. N. Kotov, and B. A. Romov. Galois theory for Post algebras. I. Cybernetics, 5(3):243–252, May 1969.
[11] V. G. Bodnarchuk, L. A. Kaluzhnin, Viktor N. Kotov, and Boris A. Romov. Galois theory for Post algebras. II. Cybernetics, 5(5):531–539, 1969.
[12] Joshua Brakensiek and Venkatesan Guruswami. New hardness results for graph and hypergraph colorings. In Ran Raz, editor, 31st Conference on Computational Complexity (CCC 2016), volume 50 of Leibniz International Proceedings in Informatics (LIPIcs), pages 14:1–14:27, Dagstuhl, Germany, 2016. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
[13] Joshua Brakensiek and Venkatesan Guruswami. Promise constraint satisfaction: Algebraic structure and a symmetric boolean dichotomy. ECCC, Report No. 183, 2016.
[14] Joshua Brakensiek and Venkatesan Guruswami. Promise constraint satisfaction: Structure theory and a symmetric boolean dichotomy. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '18, pages 1782–1801, Philadelphia, PA, USA, 2018. Society for Industrial and Applied Mathematics.
[15] Joshua Brakensiek and Venkatesan Guruswami. An algorithmic blend of LPs and ring equations for promise CSPs. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6–9, 2019, pages 436–455, 2019.
[16] Andrei Bulatov, Peter Jeavons, and Andrei Krokhin. Classifying the complexity of constraints using finite algebras. SIAM J. Comput., 34(3):720–742, March 2005.
[17] Andrei A. Bulatov. A dichotomy theorem for nonuniform CSPs. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 319–330, October 2017.
[18] Jakub Bulín, Andrei Krokhin, and Jakub Opršal. Algebraic approach to promise constraint satisfaction. In Proceedings of the 51st Annual ACM SIGACT Symposium on the Theory of Computing (STOC '19), New York, NY, USA, 2019. ACM.
[19] Irit Dinur, Oded Regev, and Clifford Smyth. The hardness of 3-uniform hypergraph coloring. Combinatorica, 25(5):519–535, September 2005.
[20] Tomás Feder and Moshe Y. Vardi. The computational structure of monotone monadic SNP and constraint satisfaction: A study through datalog and group theory. SIAM J. Comput., 28(1):57–104, February 1998.
[21] David Geiger. Closed systems of functions and predicates. Pacific J. Math., 27:95–100, 1968.
[22] Johan Håstad. Some optimal inapproximability results. J. ACM, 48(4):798–859, July 2001.
[23] Sangxia Huang. Improved hardness of approximating chromatic number. In Prasad Raghavendra, Sofya Raskhodnikova, Klaus Jansen, and José D. P. Rolim, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques: 16th International Workshop, APPROX 2013, and 17th International Workshop, RANDOM 2013, Berkeley, CA, USA, August 21–23, 2013. Proceedings, pages 233–243, Berlin, Heidelberg, 2013. Springer.
[24] Peter Jeavons. On the algebraic structure of combinatorial problems. Theor. Comput. Sci., 200(1-2):185–204, 1998.
[25] Peter Jeavons, David Cohen, and Marc Gyssens. Closure properties of constraints. J. ACM, 44(4):527–548, July 1997.
[26] Andrei Krokhin and Stanislav Živný, editors. The Constraint Satisfaction Problem: Complexity and Approximability, volume 7 of Dagstuhl Follow-Ups. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2017.
[27] L. Lovász. Kneser's conjecture, chromatic number, and homotopy. J. Combin. Theory Ser. A, 25(3):319–324, 1978.
[28] Nicholas Pippenger. Galois theory for minors of finite functions. Discrete Mathematics, 254(1):405–419, 2002.
[29] Thomas J. Schaefer. The complexity of satisfiability problems. In Proceedings of the Tenth Annual ACM Symposium on Theory of Computing, STOC '78, pages 216–226, New York, NY, USA, 1978. ACM.
[30] Dmitriy Zhuk. A proof of CSP dichotomy conjecture. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 331–342, October 2017.