Optimization, Randomized Approximability, and Boolean Constraint Satisfaction Problems
Tomoyuki Yamakami∗

Abstract.
We give a unified treatment to optimization problems that can be expressed in the form of nonnegative-real-weighted Boolean constraint satisfaction problems. Creignou, Khanna, Sudan, Trevisan, and Williamson studied the complexity of approximating their optimal solutions whose optimality is measured by the sums of outcomes of constraints. To explore a wider range of optimization constraint satisfaction problems, following an early work of Marchetti-Spaccamela and Romano, we study the case where the optimality is measured by products of constraints' outcomes. We completely classify those problems into three categories: PO problems, NPO-hard problems, and intermediate problems that lie between the former two categories. To prove this trichotomy theorem, we analyze characteristics of nonnegative-real-weighted constraints using a variant of the notion of T-constructibility developed earlier for complex-weighted counting constraint satisfaction problems.

Keywords: optimization problem, approximation algorithm, constraint satisfaction problem, PO, APX, approximation-preserving reducibility
In the 1980s began extensive studies that have greatly improved our understanding of the exotic behaviors of various optimization problems within the scope of computational complexity theory. These studies have brought us deep insights into the approximability and inapproximability of optimization problems; however, many studies have targeted individual problems by cultivating different and independent methods for them. To push our insights deeper, we focus on a collection of "unified" optimization problems, whose foundations are all formed in terms of Boolean constraint satisfaction problems (or CSPs, in short). Creignou [3] was the first to give a formal treatment to maximization problems derived from CSPs. The maximization constraint satisfaction problems (or MAX-CSPs for succinctness) are, in general, optimization problems in which we seek a truth assignment σ of Boolean variables that maximizes an objective value† (or a measure) of σ, which equals the number of constraints satisfied at once. Creignou presented three criteria (0-validity, 1-validity, and 2-monotonicity) under which we can solve the MAX-CSPs in polynomial time; that is, the problems belong to PO.

Creignou's result was later reinforced by Khanna, Sudan, Trevisan, and Williamson [7], who gave a unified treatment to several types of CSP-based optimization problems, including MAX-CSP, MIN-CSP, and MAX-ONE-CSP. With constraints limited to "nonnegative" integer values, Khanna et al. defined MAX-CSP(F) as the maximization problem in which constraints are all taken from a constraint set F and the maximization is measured by the "sum" of the objective values of constraints; we refer to such a measure as an additive measure. More formally, MAX-CSP(F) is defined as:

MAX-CSP(F):
• Instance: a finite set H of elements of the form ⟨h, (x_{i_1}, ..., x_{i_k})⟩ on Boolean variables x_1, ..., x_n, where h ∈ F, {i_1, ..., i_k} ⊆ [n], and k is the arity of h.
• Solution: a truth assignment σ to the variables x_1, ..., x_n.
• Measure: the sum Σ_{⟨h,x'⟩∈H} h(σ(x_{i_1}), ..., σ(x_{i_k})), where x' = (x_{i_1}, ..., x_{i_k}).

∗ Present Affiliation: Department of Information Science, University of Fukui, 3-9-1 Bunkyo, Fukui 910-8507, Japan
† A function that associates an objective value (or a measure) to each solution is called an objective function (or a measure function).

For instance, MAX-CSP(XOR) coincides with the optimization problem MAX-CUT, which is known to be MAX-SNP-complete [9]. Khanna et al.
re-proved Creignou's classification theorem: for every set F of constraints, if F is 0-valid, 1-valid, or 2-monotone, then MAX-CSP(F) is in PO; otherwise, MAX-CSP(F) is APX-complete‡ under their approximation-preserving reducibility. This classification theorem was proven by an extensive use of a notion of strict/perfect implementation.

From a different and meaningful perspective, Marchetti-Spaccamela and Romano [8] made a general discussion on max NPSP problems whose maximization is measured by the "products" of the objective values of chosen (feasible) solutions. From their general theorem follows an early result of [1] that the maximization problem MAX-PROD-KNAPSACK has a fully polynomial(-time) approximation scheme (or FPTAS). This result can be compared with another result of Ibarra and Kim [5] that MAX-KNAPSACK (with additive measures) admits an FPTAS. The general theorem of [8] requires development of a new proof technique, called variable partitioning. A similar approach was taken in a study of linear multiplicative programming to minimize the "product" of two positive linear cost functions, subject to linear constraints [6]. In contrast with the previous additive measures, we call this different type of measures multiplicative measures, and we wish to study the behaviors of MAX-CSPs whose maximization is taken over multiplicative measures.

Our approach clearly fills a part of what Creignou [3] and Khanna et al. [7] left unexplored. Let us formalize our objectives for clarity. To differentiate our multiplicative measures from additive measures, we develop a new notation MAX-PROD-CSP(·), in which "PROD" refers to a use of the "product" of objective values.

MAX-PROD-CSP(F):
• Instance: a finite set H of elements of the form ⟨h, (x_{i_1}, ..., x_{i_k})⟩ on Boolean variables x_1, ..., x_n, where h ∈ F and {i_1, ..., i_k} ⊆ [n].
• Solution: a truth assignment σ to x_1, ..., x_n.
• Measure: the product Π_{⟨h,x'⟩∈H} h(σ(x_{i_1}), ..., σ(x_{i_k})), where x' = (x_{i_1}, ..., x_{i_k}).

There is a natural, straightforward way to relate MAX-CSP(F)'s to MAX-PROD-CSP(F)'s. For example, consider the problem MAX-CUT and its multiplicatively-measured counterpart, MAX-PROD-CUT. Given any cut for MAX-CUT, we set the weight of each vertex to be 2 if it belongs to the cut, and set the weight to be 1 otherwise. When the cardinality of a cut is maximized, the same cut incurs the maximum product value as well. In other words, an algorithm that "exactly" finds an optimal solution for MAX-CUT also computes an optimal solution for MAX-PROD-CUT. However, when an algorithm only tries to "approximate" such an optimal solution, its performance rates for MAX-CUT and for MAX-PROD-CUT significantly differ. In a bad case, the outcomes of some constraints h accidentally fall to zero and thus the entire product value vanishes. Thus, an approximation algorithm using an additive measure does not seem to lead to an approximation algorithm with a multiplicative measure. This circumstance indicates that multiplicative measures appear to endow their associated optimization problems with much higher approximation complexity than additive measures do. We then need to develop a quite different methodology for a study of the approximation complexity of multiplicatively-measured optimization problems.

‡ APX is the collection of optimization problems whose optimal solutions can be (deterministically) approximated to within a fixed constant in polynomial time.

Within the framework of MAX-PROD-CSPs, we can precisely express many maximization problems, such as MAX-PROD-CUT, MAX-PROD-SAT, MAX-PROD-IS (independent set), and MAX-PROD-BIS (bipartite independent set), which are naturally induced from their corresponding additively-measured maximization problems. Some of their formal definitions will be given in Section 2.2.
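The contrast between an additive and a multiplicative measure can be seen on a toy instance. The brute-force sketch below is ours, not from the paper: it scores every assignment of a small graph both by the number of cut edges (the additive MAX-CUT measure) and by a product contributing a factor 2 per cut edge (a variant of the vertex-weighting scheme described above, so the product equals 2 raised to the cut size). The particular graph and the factor 2 are illustrative assumptions.

```python
from itertools import product

# A 4-vertex graph: a cycle 0-1-2-3 plus the chord (0, 2).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

def cut_size(sigma):
    # additive measure: number of edges crossing the cut
    return sum(1 for (u, v) in edges if sigma[u] != sigma[v])

def prod_measure(sigma):
    # multiplicative measure: factor 2 per crossing edge, factor 1 otherwise
    m = 1
    for (u, v) in edges:
        m *= 2 if sigma[u] != sigma[v] else 1
    return m

best_add = max(product((0, 1), repeat=n), key=cut_size)
best_mul = max(product((0, 1), repeat=n), key=prod_measure)

# The product is a monotone function (2**cut_size) of the additive measure,
# so exact optimization coincides for both objectives.
assert cut_size(best_add) == cut_size(best_mul)
print(cut_size(best_add), prod_measure(best_mul))  # prints: 4 16
```

Exact optima coincide, but approximation guarantees do not transfer: a cut within additive ratio 2 of the optimum (cut size 2 here) attains product 4, only a 1/4 fraction of the optimal product 16, and the gap widens exponentially with the instance size.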
In the context of approximability, since multiplicative measures can provide a richer structure than additive measures, classification theorems for MAX-CSPs and for MAX-PROD-CSPs are essentially different. The classification theorem for MAX-PROD-CSPs, which is our main result, is formally stated as Theorem 1.1, in which we use an abbreviation, MAX-PROD-CSP∗(F), to mean a MAX-PROD-CSP whose unary constraints are provided for free§ and whose other nonnegative-real-weighted constraints are drawn from F.

Theorem 1.1 Let F be any set of constraints. If either F ⊆ AF or F ⊆ ED, then MAX-PROD-CSP∗(F) is in PO. Otherwise, if F ⊆ IM_opt, then MAX-PROD-CSP∗(F) is APT-reduced from MAX-PROD-BIS and is APT-reduced to MAX-PROD-FLOW. Otherwise,
MAX-PROD-CSP∗(F) is APT-reduced from MAX-PROD-IS.

Here, "APT" stands for "approximation-preserving Turing" in the sense of [4]. Moreover, AF is the set of "affine"-like constraints, ED is related to the binary equality and disequality constraints, and IM_opt is characterized by "implication"-like constraints. The problem MAX-PROD-FLOW is a maximization problem of finding a design that maximizes the amount of water flow in a given directed graph. See Sections 2.1–2.2 for their formal definitions.

§ Allowing a free use of arbitrary unary constraints is a commonly used assumption for decision CSPs and counting CSPs.

The purpose of the rest of this paper is to prove Theorem 1.1. For this purpose, we will introduce a new notion of T_max-constructibility, which is a variant of the notion of T-constructibility invented in [10]. This T_max-constructibility is proven to be a powerful tool in dealing with MAX-PROD-CSPs.

Let N denote the set of all natural numbers (i.e., nonnegative integers) and let R denote the set of all real numbers. For convenience, N⁺ expresses N − {0} and, for each n ∈ N⁺, [n] denotes the integer set {1, 2, ..., n}. Moreover, the notation R≥0 stands for the set {r ∈ R | r ≥ 0}.

Here, we use terminology given in [10]. A (nonnegative-real-weighted) constraint is a function f mapping {0,1}^k to R≥0, where k is the arity of f. Assuming the standard lexicographic order on {0,1}^k, we express f as a series of its output values. For instance, if k = 2, then f is (f(00), f(01), f(10), f(11)). We set EQ = (1,0,0,1), ∆_0 = (1,0), and ∆_1 = (0,1). A relation of arity k is a subset of {0,1}^k. Such a relation can also be viewed as a function mapping Boolean variables to {0,1} (i.e., x ∈ R iff R(x) = 1, for every x ∈ {0,1}^k) and it can be treated as a Boolean constraint. For instance, the logical relations OR, NAND, XOR, and Implies are all expressed as appropriate constraints in the following manner: OR = (0,1,1,1), NAND = (1,1,1,0), XOR = (0,1,1,0), and Implies = (1,1,0,1). A relation R is affine if it is expressed as the set of solutions to a certain system of linear equations over GF(2). The underlying relation R_f of a constraint f is defined as R_f = {x | f(x) ≠ 0}.

We introduce the following six special sets of constraints.
1. Let U denote the set of all unary constraints.
2. The notation NZ expresses the set of all non-zero constraints.
3. Denote by DG the set of all constraints that are expressed by products of unary constraints, each of which is applied to a different variable. Such a constraint is called degenerate.
4. Let ED denote the set of all constraints that are expressed as products of some of unary constraints, the equality EQ, and the disequality XOR.
5. The set AF is defined as the collection of all constraints of the form g(x_1, ..., x_k) Π_{j: j≠i} R_j(x_i, x_j) for a certain fixed index i ∈ [k], where g is in DG and each R_j is an affine relation.
6. Define IM_opt to be the collection of all constraints that are products of some of the following constraints: unary constraints and constraints of the form (1, 1, λ, 1) with 0 ≤ λ < 1. This is different from IM defined in [10].

Lemma 2.1
For any constraint f = (x, y, z, w) with x, y, z, w ∈ R≥0, if xw > yz, then f belongs to IM_opt.

A (combinatorial) optimization problem P = (I, sol, m) takes input instances of "admissible data" to the target problem. We often write I to denote the set of all such instances, and sol(x) denotes a set of (feasible) solutions associated with instance x. A measure function (or objective function) m associates a nonnegative real number to each solution y in sol(x); that is, m(x, y) is an objective value (or a measure) of the solution y on the instance x. We conveniently assume m(x, y) = 0 for any element y ∉ sol(x). The goal of the problem P is to find a solution y in sol(x) that has an optimum value (such a y is called an optimal solution), where the optimality is measured by either the maximization or minimization of the objective value m(x, y), taken over all solutions y ∈ sol(x). When y is an optimal solution, we set m∗(x) to be m(x, y).

Let NPO denote the class of all optimization problems P such that (1) input instances and solutions can be recognized in polynomial time; (2) solutions are polynomially bounded in input size; and (3) a measure function can be computed in polynomial time. Define PO as the class of all problems P in NPO such that there exists a deterministic algorithm that, for every instance x ∈ I, returns an optimal solution y in sol(x) in time polynomial in the size |x| of the instance x.

We say that, for a fixed real-valued function α with α(n) ≥ 1 for all n ∈ N, an algorithm A for an optimization problem P = (I, sol, m) is an α-approximation algorithm if, for every instance x ∈ I, A produces a solution y ∈ sol(x) satisfying 1/α(|x|) ≤ |m(x, y)/m∗(x)| ≤ α(|x|), except that, whenever m∗(x) = 0, we always demand that m(x, y) = 0. Such a y is called an α(n)-approximate solution for an input instance x of size n. The class APX (resp., exp-APX) consists of all problems P in NPO such that there are a constant r ≥ 1 (resp., a function α satisfying the condition¶) and a polynomial-time r-approximation (resp., α-approximation) algorithm for P.

Approximation algorithms are often randomized. A randomized approximation scheme for P is a randomized algorithm that takes a standard input instance x ∈ I together with an error tolerance parameter ε ∈ (0, 1), and outputs an ε-approximate solution y ∈ sol(x) with probability at least 3/4.

Dyer, Goldberg, Greenhill, and Jerrum [4] formulated approximation-preserving reductions between counting problems in terms of an oracle Turing machine. Since the purpose of Dyer et al. is to solve counting problems approximately, we need to modify their notion so that we can deal with optimization problems. Given two optimization problems P = (I, sol, m) and Q = (I′, sol′, m′), a polynomial-time (randomized) approximation-preserving Turing reduction (or APT-reduction, in short) from P to Q is a randomized algorithm N that takes a pair (x, ε) ∈ I × (0, 1) as input, uses an arbitrary randomized approximation scheme (not necessarily polynomial time-bounded) M for Q as oracle, and satisfies the following conditions: (i) N is a randomized approximation scheme for P for any choice of oracle M for Q; (ii) every oracle call made by N is of the form (w, δ) ∈ I′ × (0, 1) satisfying 1/δ ≤ p(|x|, 1/ε), where p is a certain absolute polynomial, and an oracle answer is an outcome of M on the input (w, δ); and (iii) the running time of N is bounded from above by a certain polynomial in (|x|, 1/ε), not depending on the choice of the oracle M. In this case, we write P ≤_APT Q and we also say that P is APT-reducible (or APT-reduced) to Q. Note that APT-reducibility composes. If P ≤_APT Q and Q ≤_APT P, then P and Q are said to be APT-equivalent and we use the notation P ≡_APT Q.

In the definition of MAX-PROD-CSP(F) given in Section 1, we also write h(x_{i_1}, ..., x_{i_k}) to mean ⟨h, (x_{i_1}, ..., x_{i_k})⟩ in H. For notational simplicity, we intend to write, for example, MAX-PROD-CSP(f, F, G) instead of MAX-PROD-CSP({f} ∪ F ∪ G). In addition, we abbreviate as MAX-PROD-CSP∗(F) the maximization problem MAX-PROD-CSP(F ∪ U).

For any optimization problem P and any class C of optimization problems, we write P ≤_APT C if there exists a problem Q ∈ C such that P ≤_APT Q. Our choice of APT-reducibility makes it possible to prove Lemma 2.2; however, it is not clear whether the lemma implies that MAX-PROD-CSP(F) ∈ exp-APX.

Lemma 2.2
MAX-PROD-CSP(F) ≤_APT exp-APX for any constraint set F.

Hereafter, we introduce the maximization problems stated in Theorem 1.1.

MAX-PROD-IS: (maximum product independent set)
• Instance: an undirected graph G = (V, E) and a series {w_x}_{x∈V} of vertex weights with w_x ∈ R≥0;
• Solution: an independent set A on G;
• Measure: the product Π_{x∈A} w_x.

This maximization problem MAX-PROD-IS literally coincides with MAX-PROD-CSP(NAND, G), where G = {[1, λ] | λ ≥ 0}, and it can be easily shown to be NPO-complete.

MAX-PROD-BIS: (maximum product bipartite independent set)
• In MAX-PROD-IS, all input graphs are limited to bipartite graphs.

¶ This function α must satisfy that there exists a positive polynomial p for which 1 ≤ α(n) ≤ 2^{p(n)} for any number n ∈ N.

Since the above two problems can be expressed in the form of MAX-PROD-CSP∗(·), we can draw the following important conclusion, which becomes part of the proof of the main theorem.

Lemma 2.3
1. MAX-PROD-IS ≤_APT MAX-PROD-CSP∗(OR).
2. MAX-PROD-BIS ≤_APT MAX-PROD-CSP∗(Implies).

Next, we introduce a special maximization problem, called MAX-PROD-FLOW, whose intuitive setting is explained as follows. Suppose that water flows from point u to point v through a one-way pipe at flow rate ρ_(u,v). A value σ(x) expresses an elevation (indicating either the bottom level 0 or the top level 1) of point x, so that water runs from point u to point v whenever σ(u) ≥ σ(v). More water is added at influx rate w_v at point v for which σ(v) = 1.

MAX-PROD-FLOW: (maximum product flow)
• Instance: a directed graph G = (V, E), a series {ρ_e}_{e∈E} of flow rates with ρ_e ≥ 1, and a series {w_x}_{x∈V} of influx rates with w_x ≥ 1;
• Solution: a Boolean assignment σ to V;
• Measure: the product (Π_{(x,y)∈E, σ(x)≥σ(y)} ρ_(x,y)) · (Π_{z∈V, σ(z)=1} w_z).

From the above definition, it is not difficult to prove the following statement.

Lemma 2.4
For any constraint set F ⊆ IM_opt, MAX-PROD-CSP∗(F) is APT-reducible to MAX-PROD-FLOW.

T_max-Constructibility

To pursue notational succinctness, we use the following notation. Let f be any arity-k constraint. For any two distinct indices i, j ∈ [k] and any bit c ∈ {0,1}, let f^{x_i=c} denote the function g satisfying g(x_1, ..., x_{i−1}, x_{i+1}, ..., x_k) = f(x_1, ..., x_{i−1}, c, x_{i+1}, ..., x_k), and let f^{x_i=x_j} be the function g defined as g(x_1, ..., x_{i−1}, x_{i+1}, ..., x_k) = f(x_1, ..., x_{i−1}, x_j, x_{i+1}, ..., x_k). Moreover, we denote by max_{y_1,...,y_d}(f) the function g defined as g(x_1, ..., x_k) = max_{(y_1,...,y_d)∈{0,1}^d} {f(x_1, ..., x_k, y_1, ..., y_d)}, where y_1, ..., y_d are all distinct and different from x_1, ..., x_k, and let λ·f denote the function satisfying (λ·f)(x_1, ..., x_k) = λf(x_1, ..., x_k).

A helpful tool invented in [10] for counting CSPs is the notion of T-constructibility. For our purpose of proving the main theorem, we wish to modify this notion and introduce a notion of T_max-constructibility. We say that an arity-k constraint f is T_max-constructible (or T_max-constructed) from a constraint set G if f can be obtained, initially from constraints in G, by recursively applying a finite number (possibly zero) of the seven functional operations described below.
1. Permutation: for two indices i, j ∈ [k] with i < j, by exchanging two columns x_i and x_j in (x_1, ..., x_i, ..., x_j, ..., x_k), transform g into g′, where g′ is defined as g′(x_1, ..., x_i, ..., x_j, ..., x_k) = g(x_1, ..., x_j, ..., x_i, ..., x_k).
2. Pinning: for an index i ∈ [k] and a bit c ∈ {0,1}, build g^{x_i=c} from g.
3. Linking: for two distinct indices i, j ∈ [k], build g^{x_i=x_j} from g.
4. Expansion: for an index i ∈ [k], introduce a new "free" variable, say, y, and transform g into g′, which is defined by g′(x_1, ..., x_i, y, x_{i+1}, ..., x_k) = g(x_1, ..., x_i, x_{i+1}, ..., x_k).
5. Multiplication: from two constraints g_1 and g_2 of arity k that share the same input variable series (x_1, ..., x_k), build the constraint g_1 · g_2, where (g_1 · g_2)(x_1, ..., x_k) = g_1(x_1, ..., x_k) g_2(x_1, ..., x_k).
6. Maximization: build max_{y_1,...,y_d}(g) from g, where y_1, ..., y_d are not shared with any constraint other than this particular constraint g.
7. Normalization: for a positive constant λ, build λ·g from g.

When f is T_max-constructible from G, we use the notation f ≤_maxcon G. In particular, when G is a singleton {g}, we also write f ≤_maxcon g instead of f ≤_maxcon {g}. T_max-constructibility between constraints guarantees APT-reducibility between their corresponding MAX-PROD-CSP∗(·)'s.

Lemma 2.5 If f ≤_maxcon G, then MAX-PROD-CSP∗(f, F) is APT-reducible to MAX-PROD-CSP∗(G, F) for any constraint set F.

Our main theorem, Theorem 1.1, states that all maximization problems of the form MAX-PROD-CSP∗(·) can be classified into three categories. This trichotomy theorem sheds a clear contrast with the dichotomy theorem of Khanna et al. [7] for MAX-CSPs. Hereafter, we present the proof of Theorem 1.1. We begin with the MAX-PROD-CSP∗(·)'s that can be solved in polynomial time.

Proposition 3.1
If either F ⊆ AF or F ⊆ ED, then MAX-PROD-CSP∗(F) belongs to PO.

Proof Sketch. For every target problem, as in the proof of [10, Lemma 6.1], we can greatly simplify the structure of each input instance so that it depends only on polynomially many solutions. By examining all such solutions deterministically, we surely find an optimal solution. Hence, the target problem belongs to PO.

It thus remains to deal with the case where F ⊈ AF and F ⊈ ED. In this case, we first make the following key claim, which leads to the main theorem.

Proposition 3.2
Let f be any constraint with f ∉ AF ∪ ED, and let F be any set of constraints.
1. If f ∈ IM_opt, then MAX-PROD-CSP∗(Implies, F) is APT-reducible to MAX-PROD-CSP∗(f, F).
2. If f ∉ IM_opt, then there exists a constraint g ∈ {OR, NAND} such that MAX-PROD-CSP∗(g, F) is APT-reducible to MAX-PROD-CSP∗(f, F).

We postpone the proof of the above proposition and, meanwhile, prove Theorem 1.1 using it.
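Before the proof, the pinning and linking operations from the T_max-constructibility toolbox can be made concrete on truth tables. The snippet below is a toy illustration under our own naming (the functions `pin` and `link` and the ternary constraint `f` are not from the paper); it merely mirrors the definitions g = f^{x_i=c} and g = f^{x_i=x_j} given earlier, with 0-based indices.

```python
from itertools import product

# Constraints of arity k are stored as dicts mapping {0,1}^k tuples to
# nonnegative reals. `pin` and `link` mirror the T_max operations
# f^{x_i = c} and f^{x_i = x_j}; indices are 0-based here.

def pin(f, k, i, c):
    # g = f^{x_i = c}: drop argument i and fix it to the bit c
    return {xs: f[xs[:i] + (c,) + xs[i:]]
            for xs in product((0, 1), repeat=k - 1)}

def link(f, k, i, j):
    # g = f^{x_i = x_j} (for i < j): drop argument i and reuse x_j there
    return {xs: f[xs[:i] + (xs[j - 1],) + xs[i:]]
            for xs in product((0, 1), repeat=k - 1)}

# an arbitrary ternary constraint, chosen so all 8 output values differ
f = {xs: 1.0 + xs[0] + 2 * xs[1] + 4 * xs[2]
     for xs in product((0, 1), repeat=3)}

g = pin(f, 3, 0, 1)    # g(y, z) = f(1, y, z)
h = link(f, 3, 0, 1)   # h(y, z) = f(y, y, z)
assert g[(0, 1)] == f[(1, 0, 1)]   # both equal 6.0
assert h[(1, 0)] == f[(1, 1, 0)]   # both equal 4.0
```

In the proof of Proposition 3.2 below, exactly such pinning steps are what extract a binary constraint p = (1, x, y, z) from a higher-arity constraint f.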
Proof of Theorem 1.1. If F ⊆ AF or F ⊆ ED, then Proposition 3.1 implies that MAX-PROD-CSP∗(F) belongs to PO. Henceforth, we assume that F ⊈ AF and F ⊈ ED. If F ⊆ IM_opt, then Lemma 2.4 helps APT-reduce MAX-PROD-CSP∗(F) to MAX-PROD-FLOW. Next, we choose a constraint f ∈ F for which f ∉ AF ∪ ED. Proposition 3.2(1) then yields an APT-reduction from MAX-PROD-CSP∗(Implies) to MAX-PROD-CSP∗(f). By Lemma 2.3(2), we obtain MAX-PROD-BIS ≤_APT MAX-PROD-CSP∗(Implies). Since MAX-PROD-CSP∗(f) ≤_APT MAX-PROD-CSP∗(F), it follows that MAX-PROD-BIS is APT-reducible to MAX-PROD-CSP∗(F).

Finally, we assume that F ⊈ IM_opt. Take a constraint f ∈ F satisfying f ∉ AF ∪ ED ∪ IM_opt. In this case, Proposition 3.2(2) yields an APT-reduction from MAX-PROD-CSP∗(OR) to MAX-PROD-CSP∗(f), since MAX-PROD-CSP∗(OR) ≡_APT MAX-PROD-CSP∗(NAND). From MAX-PROD-CSP∗(f) ≤_APT MAX-PROD-CSP∗(F), it immediately follows that MAX-PROD-CSP∗(OR) is APT-reducible to MAX-PROD-CSP∗(F). By Lemma 2.3(1), MAX-PROD-IS ≤_APT MAX-PROD-CSP∗(OR). Therefore, we conclude that MAX-PROD-IS is APT-reducible to MAX-PROD-CSP∗(F). ✷

To finish the proof of Theorem 1.1, we still need to prove Proposition 3.2. Proving this proposition requires three properties. To describe them, we first review two existing notions from [10]. We say that a constraint f has affine support if R_f is an affine relation, and that f has imp support if R_f is logically equivalent to a conjunction of a certain "positive" number of relations of the form ∆_0(x), ∆_1(x), and Implies(x, y). The notation AFFINE denotes the set of all affine relations. In the following three statements, F denotes an arbitrary set of constraints.

Lemma 3.3 If f is a non-degenerate constraint in IM_opt and has no imp support, then MAX-PROD-CSP∗(Implies, F) ≤_APT
MAX-PROD-CSP∗(f, F).

Proposition 3.4 Let f be any constraint having imp support. If either f has no affine support or f ∉ ED, then MAX-PROD-CSP∗(Implies, F) is APT-reducible to MAX-PROD-CSP∗(f, F).

Proposition 3.5 Let f ∉ NZ be any constraint. If f has neither affine support nor imp support, then there exists a constraint g ∈ {OR, NAND} such that MAX-PROD-CSP∗(g, F) ≤_APT MAX-PROD-CSP∗(f, F).

With the help of the above statements, we can prove Proposition 3.2 as follows.
Proof Sketch of Proposition 3.2. Let f ∉ AF ∪ ED be any constraint. We proceed by induction on the arity k of f. For both claims (1) and (2) of the proposition, the basis case k = 1 is trivial since ED contains all unary constraints. Next, we prove the induction step k ≥ 2. In the remainder of this proof, as our induction hypothesis, we assume that the proposition holds for any arity less than k. The claims (1) and (2) will be shown separately.

(1) Assume that f is in IM_opt. If f has imp support, since f ∉ ED, we can apply Proposition 3.4 and immediately obtain the desired APT-reduction MAX-PROD-CSP∗(Implies, F) ≤_APT MAX-PROD-CSP∗(f, F). Otherwise, by Lemma 3.3, we have the desired APT-reduction.

(2) Since f has no imp support, if R_f is not affine, then Proposition 3.5 implies that, for a certain g₀ ∈ {OR, NAND}, MAX-PROD-CSP∗(g₀, F) ≤_APT MAX-PROD-CSP∗(f, F); therefore, the desired result follows. To finish the proof, we hereafter assume the affine property of R_f.

[Case: f ∈ NZ] Recall that f ∉ ED and R_f ∈ AFFINE. Since f ∈ NZ, we have |R_f| = 2^k, and thus f should be in clean form (i.e., f contains no factor of the form ∆_0(x), ∆_1(x), or EQ(x, y)). As shown in [10, Lemma 7.5], there exists a constraint p = (1, x, y, z) ∉ ED with yz ≠ 0, z ≠ xy, and p ≤_maxcon f. When z < xy, we can prove that, for a certain g₀ ∈ {OR, NAND}, MAX-PROD-CSP∗(g₀, F) ≤_APT MAX-PROD-CSP∗(p, F). In the case where z > xy, Lemma 2.1 implies p ∈ IM_opt. Since p is obtained from f by pinning operations only, we conclude that f ∈ IM_opt, a contradiction. This finishes the induction step.

[Case: f ∉ NZ] We first claim that k ≥ 3. Assume otherwise that k = 2. Since R_f ∈ AFFINE, it is possible to write f in the form f(x_1, x_2) = ξ_A(x_1, x_2) g(x_2) after appropriately permuting variable indices, where ξ_A is an affine relation and g is unary. This places f within AF, a contradiction against the choice of f. Hence, k ≥ 3. We can then find a constraint g ∉ AF of arity m for which 2 ≤ m < k, g ≤_maxcon f, and either g ∈ NZ or R_g ∉ AFFINE. If we can show that (*) there exists a constraint g₀ ∈ {OR, NAND} satisfying MAX-PROD-CSP∗(g₀, F) ≤_APT MAX-PROD-CSP∗(g, F), then the proposition immediately follows from g ≤_maxcon f. The claim (*) is split into two cases: (i) R_g ∈ AFFINE and (ii) R_g ∉ AFFINE. For (i), we apply the induction hypothesis. For (ii), we apply Propositions 3.4–3.5. Thus, we have completed the induction step. ✷

In the end, we have completed the proof of the main theorem. The detailed proofs omitted in this extended abstract will be published shortly. We hope that our systematic treatment of MAX-PROD-CSP∗s will lead to a study of a far wider class of optimization problems.

References

[1] G. Ausiello, A. Marchetti-Spaccamela, and M. Protasi. Full approximability of a class of problems over power sets. In:
Proc. of CAAP '81, LNCS, vol. 112, pp. 76–87. Springer, Berlin (1981)
[2] S. Boyd, S. J. Kim, L. Vandenberghe, and A. Hassibi. A tutorial on geometric programming. Optim. Eng. 8, 67–127 (2007)
[3] N. Creignou. A dichotomy theorem for maximum generalized satisfiability problems. J. Comput. System Sci. 51, 511–522 (1995)
[4] M. Dyer, L. A. Goldberg, C. Greenhill, and M. Jerrum. The relative complexity of approximating counting problems. Algorithmica 38, 471–500 (2003)
[5] O. H. Ibarra and C. E. Kim. Fast approximation for the knapsack and sum of subset problems. J. ACM 22, 463–468 (1975)
[6] H. Konno and T. Kuno. Linear multiplicative programming. Math. Program. 56, 51–64 (1992)
[7] S. Khanna, M. Sudan, L. Trevisan, and D. P. Williamson. The approximability of constraint satisfaction problems. SIAM J. Comput. 30, 1863–1920 (2001)
[8] A. Marchetti-Spaccamela and S. Romano. On different approximation criteria for subset product problems. Inform. Process. Lett. 21, 213–218 (1985)
[9] C. Papadimitriou and M. Yannakakis. Optimization, approximation and complexity classes. J. Comput. System Sci. 43, 425–440 (1991)
[10] T. Yamakami. Approximate counting for complex-weighted Boolean constraint satisfaction problems. Available at arXiv:1007.0391. An older version appeared in Proc. of WAOA 2010, LNCS, vol. 6534, pp. 261–272. Springer, Heidelberg (2011)