Global Cardinality Constraints Make Approximating Some Max-2-CSPs Harder
Per Austrin∗
School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology
[email protected]

Aleksa Stanković∗
Department of Mathematics, KTH Royal Institute of Technology
[email protected]
September 19, 2019
Abstract
Assuming the Unique Games Conjecture, we show that existing approximation algorithms for some Boolean Max-2-CSPs with cardinality constraints are optimal. In particular, we prove that Max-Cut with cardinality constraints is UG-hard to approximate within ≈ 0.858, and that Max-2-Sat with cardinality constraints is UG-hard to approximate within ≈ 0.929 (compared to ≈ 0.878 for Max-Cut, and ≈ 0.940 for Max-2-Sat).

The hardness obtained for Max-2-Sat applies to monotone Max-2-Sat instances, meaning that we also obtain tight inapproximability for the Max-k-Vertex-Cover problem.

Introduction

Constraint satisfaction problems (CSPs) are among the most fundamental objects studied in complexity theory. An instance of a CSP has as input a set of variables taking values over a certain domain and a set of constraints on tuples of these variables. Probably the best known CSP is 3-Sat, in which the constraints are clauses, each clause is a disjunction of at most three literals, and each literal is either a variable or the negation of a variable. In the satisfiability version of CSP problems, we are interested in whether there is an assignment to the variables which satisfies all the constraints. Hardness of deciding satisfiability of CSPs is well understood, due to the dichotomy theorem [32] of Schaefer which shows that each CSP with variables taking values in a Boolean domain is either in P or NP-complete, and due to the more recent results of Bulatov [8] and Zhuk [35] which settle this question on general domains.

Another well-studied version is Max-CSP, the optimization version in which we are interested in maximizing the number of satisfied constraints. This type of problem is NP-hard in most cases, and we typically settle for finding a good estimate of the optimal solution, for which

∗Research supported by the Approximability and Proof Complexity project funded by the Knut and Alice Wallenberg Foundation.
we rely on approximation algorithms. A common example of a constraint satisfaction problem in this setting is Max-Cut, in which the input consists of a graph G, and the goal is to partition the vertices into two sets such that the number of edges between the two parts is maximized. Approximability of Max-CSPs has been a major research topic which inspired many influential breakthroughs. One of the first surprising results was an algorithm of Goemans and Williamson [13], which uses semidefinite programming (SDP) to approximate the optimal solution of Max-Cut to within a constant factor of α_GW ≈ 0.878; for Max-2-Sat, the best known algorithm achieves a ratio of α_LLZ ≈ 0.940. On the hardness side, these problems are NP-hard to approximate within 1 − δ for some universal constant δ > 0. By using the PCP theorem and parallel repetition [31] as a starting point, Håstad [17] proved optimal inapproximability for Max-3-Sat by showing that it cannot be approximated better than 7/8 + ε for any ε > 0. Assuming the Unique Games Conjecture (UGC), optimality of the α_GW-approximation algorithm for Max-Cut and the α_LLZ-approximation algorithm for Max-2-Sat was shown in [21, 26] and [3], respectively. The strength of semidefinite programming for approximating Max-CSPs was corroborated in a breakthrough result of Raghavendra [28], which showed that assuming the UGC, a certain SDP relaxation achieves optimal approximation ratios for all Max-CSPs.

Locality of the constraints has been of crucial importance in studying CSPs and Max-CSPs since their inception. It is therefore not a surprise that typical techniques fail when we work with CSPs for which feasible assignments need to satisfy some additional global constraints, and these problems almost always become harder. For example, while the satisfiability of a 2-Sat instance can be checked by a straightforward algorithm, Guruswami and Lee recently showed [14] that when the satisfying assignment needs to have exactly half of its variables set to true, this problem becomes NP-hard. Hardness of deciding satisfiability of CSPs in which we prescribe how many variables are assigned to certain values is well understood due to the dichotomy theorem of Bulatov and Marx [9], which shows that these problems are either NP-hard or in P, and gives a simple classification. Another type of global constraint is studied by Brakensiek et al. [7], who consider hardness of deciding CSPs in the presence of modular constraints, which restrict the cardinality of values in an assignment modulo a natural number M.

In this paper we are interested in optimization variants of CSPs with global cardinality constraints, i.e., constraints which specify the number of occurrences of each value from the domain in the assignment. We refer to these problems as CC-Max-CSPs.
It is not hard to see that these problems are at least as hard to approximate as their unconstrained counterparts. CC-Max-CSPs have been actively studied in the past. For example, the Max-Bisection problem, i.e., Max-Cut where the two parts need to be of the same size, has been of particular interest, with a series of papers [12], [34], [16], [11], [30] obtaining improved approximation algorithms, until the most recent result which achieves an approximation ratio of 0.8776, just below the UG-hardness bound α_GW ≈ 0.878. The state-of-the-art algorithm [30] for the more general CC-Max-Cut problem achieves an approximation ratio of α_cccut ≈ 0.858. Another well-studied problem of this kind is Max-k-VC (an abbreviation for maximum k-vertex cover, in which we are given a graph and the task is to select a subset of k vertices covering as many of the edges as possible), which can equivalently be viewed as monotone CC-Max-2-Sat. The best algorithm [30] to date for general CC-Max-2-Sat achieves an approximation ratio of α_ccsat ≈ 0.929, while the previously best known hardness result [25] showed that it is UG-hard to approximate Max-k-VC within a factor α_AKS ≈ 0.944 (note that this is slightly larger than the hardness of α_LLZ ≈ 0.940 for general Max-2-Sat).

Yet another well-studied CC-Max-CSP is the Densest k-Subgraph (Max-k-DS) problem, in which we are given a graph and the objective is to find a maximally dense induced subgraph on k vertices. Analogously to the Max-k-VC problem, Max-k-DS can be viewed as the monotone CC-Max-2-And problem. Max-k-DS is qualitatively very different from the previously discussed problems. It is not known to be approximable within a constant factor, and is in fact known to be hard to approximate to within almost polynomial factors assuming the Exponential Time Hypothesis [24], or to within any constant factor assuming the Small-Set Expansion Hypothesis [29].

Obtaining tight approximability results for CC-Max-CSPs presents an important research topic. Qualitatively, it is also interesting to determine whether adding a cardinality constraint to a non-trivial Max-CSP makes approximation strictly harder. For example, we know that CC-Max-2-Sat is at least as hard as Max-2-Sat, but it is still conceivable that they are equally hard. In particular, it would be interesting to answer the following question:

"Can CC-Max-2-Sat be approximated within α_LLZ?"

So far the only result in this direction comes from [4], which shows that the "bisection version" (where the cardinality constraint is that exactly half of the variables must be set to true) of CC-Max-2-Sat can be approximated within α_LLZ. However, the approach taken in that algorithm does not immediately extend to handle general cardinality constraints. A similar question arises for the CC-Max-Cut problem, but here even the basic Max-Bisection problem is not known to be approximable within the Max-Cut constant α_GW ≈ 0.878. (For the remaining two problems the question does not arise in this form, since unconstrained Max-k-VC is monotone Max-2-Sat, and unconstrained Max-k-DS is monotone Max-2-And, which are both trivial.)

Our results
In this paper, we answer the above question negatively, by giving improved UG-hardness results for CC-Max-Cut and Max-k-VC.

Theorem 1.1.
For every ε > 0, CC-Max-Cut is UG-hard to approximate within β_cccut + ε, where β_cccut ≈ 0.858.

Theorem 1.2.
For every ε > 0, Max-k-VC is UG-hard to approximate within β_ccvc + ε, where β_ccvc ≈ 0.929.

Note that since CC-Max-Cut and Max-k-VC are special cases of CC-Max-2-Lin and CC-Max-2-Sat respectively, the corresponding hardness results apply to the latter problems as well. The constants β_ccvc and β_cccut are calculated numerically, and their estimated values match the constants α_ccsat and α_cccut, which are the approximation ratios for the corresponding problems achieved by the algorithm of Raghavendra and Tan [30]. We provide even stronger evidence that these constants match each other, by showing that β_ccvc and β_cccut are calculated as minima of the same functions used for calculating their counterparts α_ccvc and α_cccut, but over a slightly more restricted domain.

Moreover, in Section 3 we give refined statements of Theorem 1.1 and Theorem 1.2 which describe inapproximability of these problems as a function of the cardinality constraint q ∈ (0, 1).

Figure 1: Hardness ratio, approximation ratio, and matching algorithm/hardness ratio as a function of the cardinality constraint q for the three problems: (a) CC-Max-Cut, (b) Max-k-VC, (c) CC-Max-2-Sat. [plots omitted]

Overview of proof ideas
The main observation behind the hardness results is that the reduction used to prove hardness of approximation for the Independent Set and Vertex Cover problems in bounded degree graphs [5] gives very strong soundness guarantees. In particular, it shows that in the "no" case of the reduction, all induced subgraphs of the graph contain many edges, which in turn gives useful upper bounds on the number of edges cut by a bipartition of a given size, or the number of edges covered by a subgraph. This is also how [25] obtained the previous hardness of ≈ 0.944 for Max-k-VC. Thus our results use essentially the same reduction as [5] (which is in turn similar to the reduction for Max-Cut [21]). Note however that even though the graph produced by that reduction has a small vertex cover in the "yes" case, using that small vertex cover is not necessarily the best solution for the Max-k-VC problem on the graph. In particular, for q < 1/2 it turns out to be better to use a large independent set as the Max-k-VC solution in the yes case (the intuition being that since it is independent, it covers many edges relative to its size).

Another difference is that we have somewhat greater flexibility in choosing the noise distribution of our "dictatorship test" (the key component of essentially all UG-hardness results). The reason is that for Independent Set/Vertex Cover, the reduction needs "perfect completeness", i.e., in the "yes" case it needs to produce graphs with large independent sets/small vertex covers, whereas for e.g. Max-k-VC we are perfectly happy with graphs where there are sets of size k covering many, but not necessarily all, edges. This increased flexibility turns out to improve the hardness ratios for some range of the cardinality constraint q. For example, for the CC-Max-Cut problem with q = 1/2, this allows us to recover the α_GW-hardness for the Max-Bisection problem using the same reduction. However, at q further away from 1/2, and in particular at the minima in Figure 1(a), it turns out that this flexibility does not help. E.g., in the global minimum at q ≈ 0.365 for Max-k-VC, the reduction does output a graph with a large independent set containing a q fraction of the vertices, and choosing that independent set is the optimal solution for the Max-k-VC instance. Similarly, in the part of the range q > 1/2, the best solution for the Max-k-VC instance in the yes case is to pick an actual vertex cover of size q, and the first point of the curve in this range corresponds exactly to the hardness of 0.944 from [25].
Organization
This paper is organized as follows. In Section 2 we fix the notation, recall some well-known facts, and formally introduce the problems of interest. In Section 3 we give our main hardness reduction and improved inapproximability results. In Section 4 we give a brief overview of the algorithm of Raghavendra and Tan [30] in order to observe that the hardness ratios we get match the approximation ratios of the algorithm. Finally, in Section 5 we propose some possible directions for future research.
Preliminaries
In this paper we work with undirected (multi)graphs G = (V, E). For a set S ⊆ V of vertices we use Sᶜ to denote its complement Sᶜ = V \ S, and write U ⊔ V for a disjoint union of sets U and V. The graphs are both edge and vertex weighted, and the weights of vertices and edges are given by functions w : V → [0, 1] and w : E → [0, 1]. For S ⊆ V and K ⊆ E we interpret w(S) and w(K) as the sum of weights of vertices contained in S and edges in K, respectively. Furthermore, weights are normalized so that w(V) = w(E) = 1, and the weight of each vertex equals half the weight of all edges adjacent to it. Therefore, the weights of edges and vertices can be interpreted as probability distributions, and sampling a vertex with probability equal to its weight is the same as sampling an edge and then sampling one of its endpoints with probability 1/2. For S, T ⊆ V, we write w(S, T) for the total weight of edges from E which have one endpoint in S and the other in T. Note that, since we work with undirected graphs, the order of endpoints is not important, and therefore w(S, T) = w(T, S). In other words, the weight of an edge e = (u, v) contributes to w(S, T) if either (u, v) ∈ T × S or (u, v) ∈ S × T. We also have the identity

w(S, V) = w(S) + (1/2) w(S, Sᶜ).   (1)

The set of all neighbours of a vertex v including v is denoted by N(v), and the set of all neighbours of a set S ⊆ V including S is denoted by N(S). Let us also introduce the following definition.

Definition 2.1.
A graph G is (q, ε)-dense if every subset S ⊆ V with w(S) = q satisfies w(S, S) ≥ ε.

We use φ(x) = (1/√(2π)) e^(−x²/2) to denote the density function of a standard normal random variable, and Φ(x) = ∫_{−∞}^{x} φ(y) dy to denote its cumulative distribution function (CDF). We also work with bivariate normal random variables, and to that end introduce the following function.

Definition 2.2.
Let ρ ∈ [−1, 1] and consider jointly normal random variables X, Y with mean 0 and covariance matrix

Cov(X, Y) = ( 1  ρ ; ρ  1 ).

We define Γ_ρ : [0, 1]² → [0, 1] as

Γ_ρ(x, y) = Pr[ X ≤ Φ⁻¹(x) ∧ Y ≤ Φ⁻¹(y) ].

We also write Γ_ρ(x) = Γ_ρ(x, x). We have the following basic lemma (for a proof see Appendix A of [4]).

Lemma 2.3.
For every ρ ∈ [−1, 1], and every x, y ∈ [0, 1], we have

Γ_ρ(x, y) = Γ_ρ(1 − x, 1 − y) − 1 + x + y.

This paper is concerned with the Max-Cut, Max-2-Lin, Max-2-Sat, and Max-k-VC problems with cardinality constraints. Let us now give the definitions of these problems as integer optimization programs. In these definitions, instead of {0, 1} we represent the Boolean domain as {−1, 1}, and for that reason instead of the cardinality constraint q we consider a balance constraint r = 1 − 2q.

Definition 2.4.
An instance F of the cardinality constrained Max-2-Lin (CC-Max-2-Lin) problem with balance constraint r ∈ (−1, 1) over variables X = {x_1, ..., x_n} taking values in {−1, 1} is given by the following integer optimization program:

max Σ_{(i,j)=e_ℓ ∈ E} (1 + P_ℓ x_i x_j)/2,   s.t. Σ_{i ∈ V} x_i = rn,

where P_ℓ ∈ {−1, 1}, and the term (1 + P_ℓ x_i x_j)/2 equals 1 precisely when x_i x_j = P_ℓ. In case P_ℓ = −1 for all ℓ, the integer optimization program is an instance of the CC-Max-Cut problem.
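As a concrete illustration of the program in Definition 2.4, the following brute-force sketch (function names are ours, for illustration only) enumerates all assignments satisfying the cardinality constraint and evaluates the CC-Max-Cut objective on a tiny instance:

```python
from itertools import combinations

def cc_max_cut_value(n, edges, r):
    """Brute-force OptVal for the CC-Max-2-Lin program of Definition 2.4
    with P_l = -1 for all l (i.e. CC-Max-Cut): maximize the number of
    edges (i, j) with x_i * x_j = -1, subject to sum(x) = r * n."""
    k = round(n * (1 - r) / 2)              # number of variables set to -1
    best = 0
    for minus in combinations(range(n), k):
        x = [1] * n
        for i in minus:
            x[i] = -1
        # each term (1 - x_i x_j) / 2 is 1 exactly when the edge is cut
        value = sum((1 - x[i] * x[j]) // 2 for i, j in edges)
        best = max(best, value)
    return best

# A 4-cycle with balance r = 0 (a bisection): all 4 edges can be cut.
print(cc_max_cut_value(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 0))  # -> 4
```

Of course, this exhaustive search takes exponential time; the point of the paper is precisely how well the optimum of such programs can be approximated in polynomial time.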
Definition 2.5.
An instance F of the cardinality constrained Max-2-Sat (CC-Max-2-Sat) problem with balance constraint r ∈ (−1, 1) over variables X = {x_1, ..., x_n} taking values in {−1, 1} is given by the following integer optimization program:

max Σ_{(i,j)=e_ℓ ∈ E} (3 + P_{ℓ,1} x_i + P_{ℓ,2} x_j + P_{ℓ,3} x_i x_j)/4,   s.t. Σ_{i ∈ V} x_i = rn,

where (P_{ℓ,1}, P_{ℓ,2}, P_{ℓ,3}) ∈ {(−1, −1, −1), (1, −1, 1), (−1, 1, 1), (1, 1, −1)} corresponds to one of the four possible clauses x_i ∨ x_j, ¬x_i ∨ x_j, x_i ∨ ¬x_j, ¬x_i ∨ ¬x_j. In case (P_{ℓ,1}, P_{ℓ,2}, P_{ℓ,3}) = (−1, −1, −1) for all ℓ, the integer optimization program is an instance of the Max-k-VC problem.

The objective in the problems given by Definitions 2.4 and 2.5 is to find an assignment z : X → {−1, 1} which satisfies a (hard) global cardinality constraint and maximizes the number of satisfied soft constraints represented by the objective function. For an assignment z that satisfies the global constraint of an instance F we use Val_z(F) to denote the value of the objective function under the assignment z. Furthermore, we use

OptVal(F) = max_{z : X → {−1,1}, Σ_{x ∈ X} z(x) = rn} Val_z(F)

to denote the maximum value of the objective function over all assignments z satisfying the cardinality constraint.

The starting point of the hardness results in this paper is the Unique Games problem, which is defined as follows.

Definition 2.6.
A Unique Games instance Λ = (U, V, E, Π, [L]) consists of an unweighted bipartite multigraph (U ⊔ V, E), a set Π = {π_e : [L] → [L] | e ∈ E and π_e is a bijection} of permutation constraints, and a set [L] of labels. The value of Λ under an assignment z : U ⊔ V → [L] is the fraction of satisfied edges, where an edge e = (u, v), u ∈ U, v ∈ V, is satisfied if π_e(z(u)) = z(v). We write Val_z(Λ) for the value of Λ under z, and Opt(Λ) for the maximum possible value over all assignments z.

The Unique Games Conjecture [20] can be formulated as follows ([22], Lemma 3.4).

Conjecture 2.7 (Unique Games Conjecture).
For every constant γ > 0 there is a sufficiently large L ∈ N, such that for a Unique Games instance Λ = (U, V, E, Π, [L]) with a regular bipartite graph (U ⊔ V, E), it is NP-hard to distinguish between

• Opt(Λ) ≥ 1 − γ,
• Opt(Λ) ≤ γ.

Analysis of Boolean Functions

One of the ubiquitous tools in the hardness of approximation area is Fourier analysis of Boolean functions. We now recall some of the well-known facts which are used in the paper. For a more detailed study, we refer to [27].

For q ∈ [0, 1] and n ∈ N we write π_q : {0, 1} → [0, 1] for the probability distribution given by π_q(1) = q, π_q(0) = 1 − q. We also write π_q^{⊗n} for the probability distribution on n-bit strings x ∈ {0, 1}^n in which each bit is distributed according to π_q, independently. We use L(π_q^{⊗n}) to denote the space of random variables f : {0, 1}^n → R over the probability space ({0, 1}^n, P({0, 1}^n), π_q^{⊗n}), and interpret E[f] and Var[f] as the expectation and variance of f(X) when X is drawn from π_q^{⊗n}. Depending on context, the elements of L(π_q^{⊗n}) will be interpreted as functions as well. Let us now introduce some of the common objects used in the study of Boolean functions.

Definition 2.8.
Consider a function f ∈ L(π_q^{⊗n}) and i ∈ {1, ..., n}. The influence Inf_i[f] of the i-th argument on f is defined as

Inf_i[f] = E_{x ∼ π_q^{⊗n}}[ Var_{x̃_i ∼ π_q}[ f(x_1, ..., x_{i−1}, x̃_i, x_{i+1}, ..., x_n) ] ].

The minimal correlation between two q-biased bits is max(−q/(1 − q), −(1 − q)/q). For notational convenience, let us introduce the function κ which assigns to each value q ∈ (0, 1) an interval I ⊆ (−1, 0) as

κ(q) = [−q/(1 − q), 0)   if q < 1/2,
κ(q) = (−1, 0)           if q = 1/2,
κ(q) = [−(1 − q)/q, 0)   if q > 1/2.

Definition 2.9.
For a fixed x ∈ {0, 1}^n, q ∈ (0, 1) and ρ ∈ κ(q), we write y ∼ N_ρ(x) to indicate that y is a ρ-correlated copy of x. In particular, each bit y_i is equal to 1 with probability q + ρ(1 − q) if x_i = 1, and with probability q − ρq if x_i = 0, independently.

Definition 2.10.
Consider q ∈ (0, 1) and ρ ∈ κ(q). The noise operator T_ρ : L(π_q^{⊗n}) → L(π_q^{⊗n}) is defined as

T_ρ f(x) = E_{y ∼ N_ρ(x)}[f(y)].

The following lemma gives a useful bound on the number of influential variables of T_ρ f.

Lemma 2.11.
Consider q ∈ (0, 1), a function f ∈ L(π_q^{⊗n}), and ρ ∈ κ(q). Then, for any τ > 0 we have that

|{ i ∈ [n] | Inf_i[T_ρ f] ≥ τ }| ≤ Var[f] / (τ e ln(1/|ρ|)).

For a proof we refer to Lemma 3.4 of [15]. We also need to introduce the notion of noise stability, defined as follows.
Definition 2.12.
Let q ∈ (0, 1), ρ ∈ κ(q) and f ∈ L(π_q^{⊗n}). The noise stability of the function f at ρ is defined as

S_ρ(f) = E[f · T_ρ f].

Let us also recall the following variant of the "Majority is Stablest" theorem in the form that appeared in [5], and which follows from Theorem 3.1 in [10].
Theorem 2.13.
Let q ∈ (0, 1) and ρ ∈ κ(q). Then for any ε > 0, there exist τ > 0 and δ > 0 such that for every function f ∈ L(π_q^{⊗n}), f : {0, 1}^n → [0, 1], that satisfies

max_{i ∈ [n]} Inf_i[T_{1−δ} f] ≤ τ,

we have

S_ρ(f) ≥ Γ_ρ(E[f]) − ε.

Hardness Reduction

In this section we give our main hardness reduction. As discussed in the introduction, it is a generalization of the reduction of Theorem III.1 from [5].
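Both the completeness and soundness guarantees of the reduction below are phrased in terms of Γ_ρ (Definition 2.2). As a quick numerical sanity check, Γ_ρ can be estimated by Monte Carlo over correlated Gaussians using only the Python standard library (the helper names here are ours), which also lets one verify the identity of Lemma 2.3:

```python
import math, random

def phi_inv(p):
    """Inverse standard normal CDF, via bisection on Phi(x) = (1 + erf(x/sqrt(2)))/2."""
    lo, hi = -8.0, 8.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if (1 + math.erf(mid / math.sqrt(2))) / 2 < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def gamma(rho, x, y, n=200_000, seed=1):
    """Monte Carlo estimate of Gamma_rho(x, y) = Pr[X <= Phi^-1(x), Y <= Phi^-1(y)]
    for standard normals X, Y with correlation rho (Definition 2.2)."""
    rng = random.Random(seed)
    a, b = phi_inv(x), phi_inv(y)
    c = math.sqrt(1 - rho * rho)
    hits = 0
    for _ in range(n):
        g1 = rng.gauss(0, 1)
        g2 = rho * g1 + c * rng.gauss(0, 1)   # (g1, g2) is rho-correlated
        hits += (g1 <= a) and (g2 <= b)
    return hits / n

# Lemma 2.3: Gamma_rho(x, y) = Gamma_rho(1 - x, 1 - y) - 1 + x + y
rho, x, y = -0.5, 0.3, 0.6
lhs = gamma(rho, x, y, seed=1)
rhs = gamma(rho, 1 - x, 1 - y, seed=2) - 1 + x + y
print(abs(lhs - rhs) < 0.01)  # -> True
```

In practice one would compute Γ_ρ to high precision with a bivariate normal CDF routine; the sampling version above is only meant to make the definition concrete.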
Theorem 3.1.
For every q ∈ (0, 1), ε > 0, and ρ ∈ κ(q), there exists a γ > 0 and a reduction from Unique Games instances Λ = (U, V, E, Π, [L]) to weighted multigraphs G = (V, E) with the following properties:

• Completeness: If Opt(Λ) ≥ 1 − γ, then there is a set S ⊆ V such that w(S) = q and w(S, Sᶜ) ≥ 2q(1 − q)(1 − ρ) − γ.

• Soundness: If Opt(Λ) ≤ γ, then for every r ∈ [0, 1], G is (r, Γ_ρ(r) − ε)-dense.

Moreover, the running time of the reduction is polynomial in |U|, |V|, |E|, and exponential in L.

Proof. Let ν : {0, 1}² → [0, 1] be the probability distribution over two ρ-correlated q-biased bits. In other words, letting t = (q − q²)(1 − ρ), we have

ν(0, 0) = 1 − q − t,  ν(0, 1) = ν(1, 0) = t,  ν(1, 1) = q − t.

Let us now describe how the multigraph G is constructed from Λ. We define the vertex set of G to be V = V × {0, 1}^L = {(v, x) | v ∈ V, x ∈ {0, 1}^L}. In particular, for every vertex v ∈ V we create 2^L vertices of G, which we identify with L-bit strings in {0, 1}^L. We also write v_x for a vertex (v, x) of the graph G. The weights of vertices in G are given by

w(v_x) = (1/|V|) π_q^{⊗L}(x).   (2)

The edges of G are constructed in the following way. For every u ∈ U, and for every two v_1, v_2 ∈ N(u), we create an edge between the vertices (v_1)_x, (v_2)_y with weight

(1/(|U| D²)) ν^{⊗L}(x ∘ π_{e_1}, y ∘ π_{e_2}),  where e_1 = (u, v_1), e_2 = (u, v_2),

and D denotes the degree of the regular bipartite graph of Λ. Expressed formally, the edge set E is

E = { ((v_1)_x, (v_2)_y) | e_1 = (u, v_1), e_2 = (u, v_2), u ∈ U, v_1, v_2 ∈ V, x, y ∈ {0, 1}^L }.

Since the marginal of the distribution ν over either the first or the second argument is the q-biased distribution on {0, 1}, the weight of all edges adjacent to a vertex v_x equals two times the weight of the vertex v_x. Furthermore, it is trivial to check that w(V) = w(E) = 1. The number of vertices in G is |V| 2^L, and the number of edges is |U| D² 4^L, so the construction is indeed polynomial in |U|, |V| and |E|. Let us now prove completeness and soundness.

Completeness:
Since Opt(Λ) ≥ 1 − γ, there is a labeling z : U ⊔ V → [L] such that Val_z(Λ) ≥ 1 − γ. Consider the set S given by

S = { v_x ∈ V | x_{z(v)} = 1 }.

The weight of the set S is obviously q. Let us consider the set consisting of pairs of edges in E which have a common vertex in U, i.e. the set

Ê = { (e_1, e_2) ∈ E × E | e_1 = (u, v_1), e_2 = (u, v_2), u ∈ U, v_1, v_2 ∈ V },

and its subset Ê_good consisting of edge pairs which are satisfied under the assignment z, or formally

Ê_good = { (e_1, e_2) ∈ Ê | e_1 = (u, v_1), e_2 = (u, v_2), z(u) = π_{e_1}⁻¹(z(v_1)) = π_{e_2}⁻¹(z(v_2)) }.

Since at least a fraction 1 − γ of the edges in E are satisfied under z, at least a fraction 1 − 2γ of the edge pairs in Ê are satisfied under z, i.e. |Ê_good| ≥ (1 − 2γ)|Ê|. For every (e_1, e_2) ∈ Ê_good, e_1 = (u, v_1), e_2 = (u, v_2), the edges between S and Sᶜ created through the pair of edges (e_1, e_2) have total weight

(1/(|U| D²)) Pr_{(x,y)∼ν^{⊗L}}[ (x ∘ π_{e_1}⁻¹)_{z(v_1)} ≠ (y ∘ π_{e_2}⁻¹)_{z(v_2)} ] = (1/(|U| D²)) Pr_{(x,y)∼ν^{⊗L}}[ x_{z(u)} ≠ y_{z(u)} ] = (1/(|U| D²)) (ν(0, 1) + ν(1, 0)) = 2t/(|U| D²).

Therefore, we have

w(S, Sᶜ) ≥ 2t(1 − 2γ) ≥ 2q(1 − q)(1 − ρ) − 4γ,

and the claimed bound follows by rescaling γ.

Soundness:
Let us assume towards a contradiction that G is not (r, Γ_ρ(r) − ε)-dense, and therefore that there is a set S ⊆ V of weight w(S) = r for which w(S, S) < Γ_ρ(r) − ε. For each v ∈ V, let us define a function S_v ∈ L(π_q^{⊗L}) to be the indicator function of S restricted to the vertex v. In particular, we have that S_v(x) = 1 if and only if v_x ∈ S. Furthermore, for all u ∈ U let us define S_u ∈ L(π_q^{⊗L}) as

S_u(x) = E_{e=(u,v), v ∈ N(u)}[ S_v(x ∘ π_e⁻¹) ].

We have that

w(S, S) = E_{u ∈ U, e_1=(u,v_1), e_2=(u,v_2), v_1,v_2 ∈ N(u)}[ E_{(x,y)∼ν^{⊗L}}[ S_{v_1}(x ∘ π_{e_1}⁻¹) S_{v_2}(y ∘ π_{e_2}⁻¹) ] ]
= E_{u ∈ U, (x,y)∼ν^{⊗L}}[ E_{e_1=(u,v_1), e_2=(u,v_2), v_1,v_2 ∈ N(u)}[ S_{v_1}(x ∘ π_{e_1}⁻¹) S_{v_2}(y ∘ π_{e_2}⁻¹) ] ]
= E_{u ∈ U}[ E_{(x,y)∼ν^{⊗L}}[ S_u(x) S_u(y) ] ]
= E_{u ∈ U}[ E_{x∼π_q^{⊗L}}[ S_u(x) T_ρ S_u(x) ] ]
= E_{u ∈ U}[ S_ρ(S_u) ].

Let us define µ_u = E_{x∼π_q^{⊗L}}[S_u(x)], and remark that due to the regularity of Λ we have E_{u∈U}[µ_u] = r. We claim that there is a set U′ ⊆ U, |U′| ≥ ε|U|/2, such that for all u ∈ U′ we have S_ρ(S_u) < Γ_ρ(µ_u) − ε/2. Otherwise, we reach a contradiction by noticing that

Γ_ρ(r) − ε > w(S, S) = E_{u∈U}[S_ρ(S_u)] ≥ (1 − ε/2)( E_{u∈U}[Γ_ρ(µ_u)] − ε/2 ) ≥ E_{u∈U}[Γ_ρ(µ_u)] − ε ≥ Γ_ρ(r) − ε,

where in the last inequality we used the fact that Γ_ρ is convex.

By Theorem 2.13 there are τ > 0 and δ > 0 such that for each u ∈ U′ there is a significant coordinate i ∈ [L] for which Inf_i[T_{1−δ} S_u] ≥ τ. For each u ∈ U′ and its significant coordinate i, by using the fact that Inf_i is convex and Markov's inequality, we conclude that for at least a τ/2 fraction of v ∈ N(u) we have Inf_{π_e(i)}[T_{1−δ} S_v] ≥ τ/2, where e = (u, v). For each v ∈ V let [L]_v ⊆ [L] denote the set of labels defined by

[L]_v = { i ∈ [L] | Inf_i[T_{1−δ} S_v] ≥ τ/2 }.

By Lemma 2.11 we have that |[L]_v| ≤ 2/(τ e ln(1/(1 − δ))). Let us now pick an assignment z : U ⊔ V → [L] of Λ using the following randomized procedure. For each v ∈ V, pick i ∈ [L]_v randomly, and set z(v) = i. If [L]_v = ∅, we pick i ∈ [L] randomly. Then, for each u ∈ U, we set z(u) = i for the i that maximizes the number of satisfied edges. From the previous discussion we conclude that this labeling satisfies an Ω(ετ² ln(1/(1 − δ))) fraction of the constraints of Λ in expectation. But since this constant does not depend on γ, this is a contradiction if we started with a sufficiently small γ.

Hardness for CC-Max-Cut

Now that we have proven Theorem 3.1, it is straightforward to prove the following theorem, which gives a hardness result for CC-Max-Cut.
Theorem 3.2.
For any q ∈ (0, 1) and ρ ∈ κ(q), it is UG-hard to approximate CC-Max-Cut with cardinality constraint q within β_cccut(q, ρ) + ε, where ε > 0 is arbitrarily small and β_cccut(q, ρ) is given by

β_cccut(q, ρ) = (1 − Γ_ρ(q) − Γ_ρ(1 − q)) / (2(q − q²)(1 − ρ)).

Proof. By Theorem 3.1 there exists a family of multigraphs G = (V, E) for which it is UG-hard to decide between the following two statements:

• There is a set S ⊆ V, w(S) = q, such that w(S, Sᶜ) ≥ 2q(1 − q)(1 − ρ) − γ.

• For every r ∈ [0, 1] and every set T ⊆ V, w(T) = r, we have w(T, T) ≥ Γ_ρ(r) − ε.

The second statement implies that for any S ⊆ V, w(S) = q, we have w(S, Sᶜ) = w(V, V) − w(S, S) − w(Sᶜ, Sᶜ) ≤ 1 − Γ_ρ(q) − Γ_ρ(1 − q) + 2ε. Therefore, by setting γ sufficiently small, this shows UG-hardness of approximating CC-Max-Cut with cardinality constraint q within

(1 − Γ_ρ(1 − q) − Γ_ρ(q)) / (2q(1 − q)(1 − ρ)) + 2ε,

where ε > 0 can be made arbitrarily small.

Hardness for Max-k-VC and CC-Max-2-Sat

Next we give the hardness result for Max-k-VC.

Theorem 3.3.
Consider q ∈ (0, 1) and let ρ ∈ κ(q). Then it is UG-hard to approximate Max-k-VC with cardinality constraint q within β_ccvc(q, ρ) + ε, where ε > 0 is arbitrarily small and β_ccvc(q, ρ) is given by

β_ccvc(q, ρ) = (1 − Γ_ρ(1 − q)) / (q(1 + (1 − q)(1 − ρ))).

Proof. As we have shown in Theorem 3.1, there is a family of multigraphs G = (V, E) for which it is UG-hard to decide between the following two statements:

• There is a set S ⊆ V, w(S) = q, such that w(S, Sᶜ) ≥ 2q(1 − q)(1 − ρ) − γ.

• For every r ∈ [0, 1] and every set T ⊆ V, w(T) = r, we have w(T, T) ≥ Γ_ρ(r) − ε.

By (1), the first statement implies that w(S, V) ≥ q(1 + (1 − q)(1 − ρ)) − γ. The second statement implies that for any S ⊆ V, w(S) = q, we have w(S, V) = w(V, V) − w(Sᶜ, Sᶜ) ≤ 1 − Γ_ρ(1 − q) + ε. Therefore, by letting γ → 0, it is UG-hard to approximate Max-k-VC with cardinality constraint q within

(1 − Γ_ρ(1 − q)) / (q(1 + (1 − q)(1 − ρ))) + ε,

where ε > 0 can be made arbitrarily small.

Figure 2: Hardness ratio as a function of the cardinality constraint q obtained by the arguments from Section 3.1 and Section 3.2, for (a) CC-Max-Cut, (b) Max-k-VC, (c) CC-Max-2-Sat. [plots omitted]

As we have concluded in Theorems 3.2 and 3.3, it is UG-hard to approximate CC-Max-Cut and Max-k-VC with cardinality constraint q ∈ (0, 1) to within

β_cccut(q) = inf_{ρ ∈ κ(q)} β_cccut(q, ρ),   β_ccvc(q) = inf_{ρ ∈ κ(q)} β_ccvc(q, ρ),

respectively. In Figure 2 we visualize these two hardness curves as well as the resulting hardness for CC-Max-2-Sat (obtained by taking the lower half of the Max-k-VC curve and mirroring it). For a fixed q it is not clear for which ρ the functions β_cccut(q, ·) and β_ccvc(q, ·) are minimized. For the plots of the inapproximability curves in Figure 2, the optimization over ρ was done numerically. Interestingly, numerical calculations show that the worst-case value of the cardinality constraint q < 1/2 (i.e., the value of q at which the hardness ratio meets the approximation ratio) is the same for Max-k-VC and CC-Max-Cut, and in particular its value is q* ≈ 0.365. Furthermore, the ρ for which this worst-case hardness is achieved is extremal, i.e., ρ = −q*/(1 − q*) ≈ −0.574. The behaviour for q > 1/2 differs between the two problems: for CC-Max-Cut the curve is symmetric around 1/2, so the worst case in this range occurs at 1 − q* ≈ 0.635, while for the Max-k-VC problem it occurs at a different value of q. For q ≤ q*, and also for q > 1/2, the ρ minimizing both β_cccut(q, ρ) and β_ccvc(q, ρ) is the minimum value of κ(q). On the other hand, when q is close to 1/2, the best choice of ρ does not equal min κ(q). For example, when q = 1/2, the hardness we obtain for CC-Max-Cut is the same as for the Max-Cut problem, attained using the value ρ ≈ −0.689.

The hardness results shown in Figure 2 obtained in the preceding sections can be improved by simple monotonicity arguments. The simplest case is Max-k-VC, for which the true approximability curve must be monotone.

Claim 3.4.
If there is an α-approximation algorithm for Max-k-VC with cardinality constraint q′, then there is also an α-approximation algorithm for Max-k-VC for all cardinality constraints q > q′.

Proof. Given a Max-k-VC instance G with cardinality constraint q, construct G′ by adding (q/q′ − 1)|V| isolated vertices to G. Observe that an optimal Max-k-VC solution to G′ with cardinality constraint q′ only uses vertices from G and has cardinality q′|V′| = q|V|.

Figure 3: Improving hardness using isolated vertices/dummy variables, for (a) CC-Max-Cut, (b) Max-k-VC, (c) CC-Max-2-Sat. [plots omitted]

Applying Claim 3.4 to the Max-k-VC curve in Figure 2(b), we obtain improvements for small q and for q around 1/2, as shown in Figure 3(b).

The direct analogue of Claim 3.4 is not obviously true for CC-Max-Cut and CC-Max-2-Sat, because the optimum value as a function of the cardinality constraint q is not necessarily monotone (so an optimal solution of G′ in the above reduction might select some of the newly added isolated vertices). However, note that in the soundness case of our reduction, the CC-Max-Cut value with cardinality constraint q is (up to ε error) 1 − Γ_ρ(q) − Γ_ρ(1 − q), and this is a monotonically increasing function for q ≤ 1/2. This implies that for these instances and q ≤ 1/2 the same argument applies, and by symmetry also for q ≥ 1/2, giving improvements for q bounded away from 1/2. For extreme values of q, we note that by adding |V|/min(q, 1 − q) dummy variables, the problem becomes as hard as unconstrained Max-2-Sat, giving further small improvements for q close to 0 and 1.

The Algorithm of Raghavendra and Tan

In this section we recall the algorithm of Raghavendra and Tan [30], somewhat reformulated in order to obtain explicit expressions for the approximation ratios that match the hardness results we obtain. We keep the exposition at a high level and skip over certain technical details, and refer the reader interested in the details to [30] or the follow-up work [4].

In order to find a good approximation for the NP-hard integer optimization problems given in Definitions 2.4 and 2.5, we use semidefinite programming (SDP) relaxations. In particular, we extend the domain of the variables {x_i}_{i=1}^n from {−1, 1} to vectors on an n-sphere, which we denote by v_i ∈ Sⁿ. We also introduce a vector v_0 ∈ Sⁿ which represents the value false (the corresponding value is 1 in the integer program). Then, we replace x_i by the scalar product ⟨v_0, v_i⟩ and x_i x_j by ⟨v_i, v_j⟩. For example, the semidefinite relaxation of the CC-Max-Cut program is given as

max Σ_{(i,j)=e_ℓ ∈ E} (1 − ⟨v_i, v_j⟩)/2,   s.t. Σ_{i ∈ V} ⟨v_0, v_i⟩ = rn.

Furthermore, since |x_i − x_j| ≤ |x_i − x_k| + |x_k − x_j|, we also demand that the vectors v_i satisfy the triangle inequalities

‖v_i − v_j‖² ≤ ‖v_i − v_k‖² + ‖v_k − v_j‖².

In order to relax the notation we
Then, the main challenge is finding a rounding algorithm which translates the vectors $\{v_i\}_{i=0}^n$ back to $\{-1, 1\}$ so that they satisfy the balance constraint, and such that the rounding does not incur a big loss in the objective value. Raghavendra and Tan used a randomized rounding procedure, which rounds the vectors $\{v_i\}_{i=0}^n$ to $\pm 1$ values $\{y_i\}_{i=1}^n$ in the following way. First, let us define $w_i = v_i - \mu_i v_0$, and let $\bar{w}_i = w_i / \|w_i\|$. Then, we draw a vector $g$ from the Gaussian distribution $N(0, I_{n+1})$ and set the values of $y_i$ as
\[
y_i = \begin{cases} 1 & \text{if } \langle g, \bar{w}_i \rangle \ge \Phi^{-1}\big(\tfrac{1-\mu_i}{2}\big), \\ -1 & \text{otherwise.} \end{cases}
\]
Note that $E[y_i] = \mu_i$, so we have $E\big[\sum_{i=1}^n y_i\big] = rn$, and therefore the solution $\{y_i\}_{i=1}^n$ satisfies the balance constraint in expectation. Furthermore, as shown in [30], using additional levels of the Lasserre hierarchy we can guarantee that with probability $1-\delta$ the sampled solution $\{y_i\}_{i=1}^n$ is only $O(\delta) n$-far from satisfying the balance constraint, where $\delta > 0$ is an arbitrarily small constant. We can then change $O(\delta) n$ of the variables $y_i$ to get a solution exactly satisfying the balance constraint, while losing only an additional small factor $O(\delta)$ in the objective value. Thus, it is sufficient to show that the objective value of the $y_i$'s is large.

Consider now the SDP relaxation of any of the integer programs $F$ given in either Definition 2.4 or Definition 2.5, and let $\mathrm{SDPVal}(F)$ be the optimal value of the SDP relaxation of the instance $F$. We have that $\mathrm{SDPVal}(F) \ge \mathrm{OptVal}(F)$. Finally, let us define $\mathrm{RndVal}(F)$ to be the expectation of the value of the objective function after the randomized rounding procedure. The analysis of the approximation ratio of the algorithm boils down to proving $\mathrm{RndVal}(F) \ge \alpha \cdot \mathrm{SDPVal}(F)$, where $\alpha$ is a constant that depends on the problem of interest. The way to calculate $\alpha$ is to look at the loss incurred by rounding at each constraint.
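To see that the threshold is calibrated so that $E[y_i] = \mu_i$, note that the projection $\langle g, \bar{w}_i \rangle$ is a standard Gaussian, so the rounding of a single vector can be simulated in isolation. A minimal sketch (the helper names and the value $\mu = 0.4$ are our own illustrative choices):

```python
import math
import random

def phi_inv(p):
    """Inverse of the standard normal CDF Phi, via bisection on erf."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def sample_y(mu, n, rng):
    """n independent rounding outcomes y_i for a vector with mu_i = mu:
    the projection <g, w_i-bar> is a standard Gaussian, thresholded at
    Phi^{-1}((1 - mu) / 2)."""
    t = phi_inv((1 - mu) / 2)
    return [1 if rng.gauss(0.0, 1.0) >= t else -1 for _ in range(n)]

rng = random.Random(0)
mu = 0.4  # illustrative value
ys = sample_y(mu, 200000, rng)
# sum(ys) / len(ys) is close to mu, i.e. E[y_i] = mu_i
```

Indeed, $y_i = 1$ with probability $1 - \Phi\big(\Phi^{-1}(\tfrac{1-\mu_i}{2})\big) = \tfrac{1+\mu_i}{2}$, giving $E[y_i] = \mu_i$.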
Let us now show how this can be done for the CC-Max-Cut problem.

The expected value of a constraint $\frac{1 - x_i x_j}{2}$ after rounding the SDP solution of the CC-Max-Cut problem is $\frac{1 - E[y_i y_j]}{2}$, and therefore at each constraint the loss factor incurred by rounding is given as
\[
\frac{1 - E[y_i y_j]}{2} \Big/ \frac{1 - \langle v_i, v_j \rangle}{2}.
\]
Thus, in order to calculate the approximation ratio, we need to bound this expression from below. Let us first note that
\[
E[y_1 y_2] = 4\,\Gamma_{\tilde{\rho}}\Big(\frac{1-\mu_1}{2}, \frac{1-\mu_2}{2}\Big) + \mu_1 + \mu_2 - 1,
\]
where $\tilde{\rho}$ is given as
\[
\tilde{\rho} = \frac{\rho - \mu_1 \mu_2}{\sqrt{1-\mu_1^2}\sqrt{1-\mu_2^2}}.
\]
(We assume that $\|w_i\| \neq 0$, since we can introduce a small perturbation to the vectors $v_i$ without affecting the objective value too much.) The approximation ratio is therefore $\alpha_{\mathrm{cccut}}$, defined as the solution of the optimization problem
\[
\alpha_{\mathrm{cccut}} = \min_{(\mu_1, \mu_2, \rho) \in \mathrm{Conf}} \frac{2 - 4\,\Gamma_{\tilde{\rho}}\big(\frac{1-\mu_1}{2}, \frac{1-\mu_2}{2}\big) - \mu_1 - \mu_2}{1 - \rho}.
\]
Computing $\alpha_{\mathrm{cccut}}$ is a hard global optimization problem, and therefore we resort to numerical computations to estimate it (we remark that the same approach is taken for a similar function in [23] and [3]). Extensive numerical experiments show that the minimum is attained at $\mu_1 = \mu_2 = \mu$, while $\rho$ lies on the boundary of the polytope $\mathrm{Conf}$, i.e., $\rho = -1 + 2|\mu|$. More precisely, the minimum is attained at $\mu \approx 0.27$, with $\rho = -1 + 2\mu \approx -0.46$.
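The formula for $E[y_1 y_2]$ can be verified numerically: the projections $\langle g, \bar{w}_1 \rangle$ and $\langle g, \bar{w}_2 \rangle$ are standard Gaussians with correlation $\tilde{\rho}$, so sampling them directly and thresholding should agree with $4\,\Gamma_{\tilde{\rho}}(\cdot,\cdot) + \mu_1 + \mu_2 - 1$, with $\Gamma$ computed by one-dimensional quadrature. A sketch, with an illustrative configuration of our choosing:

```python
import math
import random

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def phi_inv(p):
    """Inverse of Phi, via bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def Gamma(rho, a, b, steps=4000):
    """Gamma_rho(a, b) = Pr[X <= Phi^{-1}(a), Y <= Phi^{-1}(b)] for standard
    Gaussians X, Y with correlation rho: integrate, over x <= t_a,
    pdf(x) * Pr[Y <= t_b | X = x] using the trapezoidal rule."""
    ta, tb = phi_inv(a), phi_inv(b)
    s = math.sqrt(1 - rho * rho)
    lo = -10.0
    h = (ta - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        pdf = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
        total += w * pdf * Phi((tb - rho * x) / s)
    return total * h

# An illustrative configuration (mu1, mu2, rho) of our choosing.
mu1, mu2, rho = 0.3, -0.2, -0.5
rho_t = (rho - mu1 * mu2) / (math.sqrt(1 - mu1 ** 2) * math.sqrt(1 - mu2 ** 2))

predicted = 4 * Gamma(rho_t, (1 - mu1) / 2, (1 - mu2) / 2) + mu1 + mu2 - 1

# Monte Carlo: sample the two correlated projections and threshold them.
rng = random.Random(1)
t1, t2 = phi_inv((1 - mu1) / 2), phi_inv((1 - mu2) / 2)
n, acc = 200000, 0
for _ in range(n):
    g1 = rng.gauss(0.0, 1.0)
    g2 = rho_t * g1 + math.sqrt(1 - rho_t ** 2) * rng.gauss(0.0, 1.0)
    y1 = 1 if g1 >= t1 else -1
    y2 = 1 if g2 >= t2 else -1
    acc += y1 * y2
# acc / n agrees with `predicted` up to sampling error
```

The identity itself follows from $E[y_1 y_2] = 2(\Pr[\text{both } {+}1] + \Pr[\text{both } {-}1]) - 1$ together with $\Pr[\text{both } {-}1] = \Gamma_{\tilde{\rho}}\big(\tfrac{1-\mu_1}{2}, \tfrac{1-\mu_2}{2}\big)$.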
Assuming that the worst-case configurations are of the form $(\mu, \mu, -1+2\mu)$ with $\mu > 0$, $\alpha_{\mathrm{cccut}}$ can be found as the minimum of the function
\[
\frac{2 - 4\,\Gamma_{\tilde{\rho}}\big(\frac{1-\mu}{2}, \frac{1-\mu}{2}\big) - 2\mu}{2 - 2\mu},
\]
where $\mu \in (0, 1)$. Substituting $q = (1-\mu)/2$,
we can re-express this function as
\[
\alpha_{\mathrm{cccut}}(q) = \frac{2q - 2\,\Gamma_{\tilde{\rho}}(q)}{2q} = \frac{1 - \Gamma_{\tilde{\rho}}(q) - \Gamma_{\tilde{\rho}}(1-q)}{2q}, \quad q \in (0, 1/2],
\]
where in the last equality we used Lemma 2.3. Furthermore, $\tilde{\rho} = -q/(1-q)$. Similar analysis for CC-Max-2-Lin shows that the approximation ratio is the minimal value of the same function. Straightforward calculations show that $\beta_{\mathrm{cccut}}(q, -q/(1-q))$ from Theorem 3.2 equals the value of $\alpha_{\mathrm{cccut}}(q)$. Therefore, under the (mild) assumption that the worst-case configurations indeed take the special form explained above, our hardness result is sharp and the algorithm of Raghavendra and Tan is optimal on general instances of CC-Max-Cut / CC-Max-2-Lin.

In a completely analogous way, we can conclude that the approximation ratio for the CC-Max-2-Sat and Max-$k$-VC problems can be calculated as the minimum of the function
\[
\alpha_{\mathrm{ccsat}}(q) = \frac{1 - \Gamma_{\tilde{\rho}}(1-q)}{2q}, \quad q \in (0, 1/2],
\]
where $\tilde{\rho} = -q/(1-q)$. Numerical experiments show that the minimum is $\alpha_{\mathrm{ccsat}} \approx 0.929$. Furthermore, $\beta_{\mathrm{ccvc}}(q, -q/(1-q)) = \alpha_{\mathrm{ccsat}}(q)$, implying (under the assumption on worst-case configurations) that the algorithm for CC-Max-2-Sat of Raghavendra and Tan is optimal.

We studied cardinality-constrained 2-CSPs, and assuming the Unique Games Conjecture derived hardness results which show that the approximation ratios achieved by the algorithm described in [30] are optimal for CC-Max-2-Sat (and its special case Max-$k$-VC) and CC-Max-2-Lin (and its special case CC-Max-Cut). An obvious open question is to close the gap between the approximation ratio and hardness for all values of $q$ in Figure 1.

It would be interesting to derive UG-hardness for related CC-Max-CSPs of arity 2, most interestingly for the Max-$k$-DS problem. While super-constant hardness for Max-$k$-DS is currently known under the closely related Small-Set Expansion Hypothesis [29], it is not yet known whether the UGC implies hardness of Max-$k$-DS.
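Under the assumption on worst-case configurations, both curves are one-dimensional and easy to minimize numerically. The following sketch (grid resolution and integration cutoffs are our own choices) reproduces minima of roughly $0.858$ for $\alpha_{\mathrm{cccut}}$ and $0.929$ for $\alpha_{\mathrm{ccsat}}$:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def phi_inv(p):
    """Inverse of Phi, via bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def Gamma(rho, q, steps=2000):
    """Gamma_rho(q) = Pr[X <= t, Y <= t] with t = Phi^{-1}(q) and
    corr(X, Y) = rho, via trapezoidal quadrature."""
    t = phi_inv(q)
    s = math.sqrt(1 - rho * rho)
    lo = -10.0
    h = (t - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        pdf = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
        total += w * pdf * Phi((t - rho * x) / s)
    return total * h

def alpha_cccut(q):
    rho_t = -q / (1 - q)
    return (2 * q - 2 * Gamma(rho_t, q)) / (2 * q)

def alpha_ccsat(q):
    rho_t = -q / (1 - q)
    return (1 - Gamma(rho_t, 1 - q)) / (2 * q)

qs = [i / 1000 for i in range(10, 496, 5)]  # grid over q in (0, 1/2)
cut_min = min(alpha_cccut(q) for q in qs)
sat_min = min(alpha_ccsat(q) for q in qs)
# cut_min and sat_min come out near 0.858 and 0.929 respectively
# (up to grid resolution and quadrature error)
```

Both curves tend to $1$ as $q \to 0$ and as $q \to 1/2$ (where $\tilde{\rho} \to -1$), so the minima are attained at interior values of $q$.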
Another interesting CSP of arity 2 is the CC-Max-Di-Cut problem, which as far as we are aware has not been previously studied in the literature. It has a simple randomized approximation algorithm (pick $k/2$ vertices with highest out-degree, and $k/2$ of the remaining vertices at random) and is as hard as CC-Max-Cut, but beyond that we are not aware of any results.

Arguably, the simple trick of adding isolated vertices to achieve the flat parts of Figure 1 in Section 3.4 is somewhat dissatisfactory, and suggests that it may be more interesting to instead study the approximability of these problems on regular instances. The graphs produced by our main reduction are indeed regular, so the hardness curves in Figure 2 still apply to regular graphs. Furthermore, it is easy to see that in the regular case the best approximation ratio does tend to 1 as $q$ tends to 0 or 1. For instance, for CC-Max-Cut in a regular graph with cardinality constraint $q \le 1/2$,
picking a random set of $q|V|$ vertices gives an approximation ratio of $1 - q$, because it is expected to cut a $2q(1-q)$ fraction of all edges, and no cut can cut more than a $2q$ fraction of all edges. For small $q$ this matches the hardness result in Figure 2(a), which up to lower-order terms equals $1 - q$.

Another interesting direction would be to come up with hardness results for cardinality-constrained versions of some other well-known Max-CSPs like Max-3-Sat, or, even more ambitiously, to extend the results of Raghavendra [28] and obtain tight hardness for all cardinality-constrained Max-CSPs.

Acknowledgements
The authors thank Johan Håstad for helpful suggestions and comments on the manuscript. We also thank the anonymous reviewers for their helpful remarks.
References

[1] Sanjeev Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy. Proof verification and hardness of approximation problems. In 33rd Annual Symposium on Foundations of Computer Science, FOCS 1992, pages 14–23, 1992.

[2] Sanjeev Arora and Shmuel Safra. Probabilistic checking of proofs; a new characterization of NP. In 33rd Annual Symposium on Foundations of Computer Science, FOCS 1992, pages 2–13, 1992.

[3] Per Austrin. Balanced max 2-sat might not be the hardest. In
Proceedings of the 39th Annual ACM Symposium on Theory of Computing, San Diego, California, USA, June 11-13, 2007, pages 189–197, 2007.

[4] Per Austrin, Siavosh Benabbas, and Konstantinos Georgiou. Better balance by being biased: A 0.8776-approximation for max bisection. In
Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2013, New Orleans, Louisiana, USA, January 6-8, 2013, pages 277–294, 2013.

[5] Per Austrin, Subhash Khot, and Muli Safra. Inapproximability of vertex cover and independent set in bounded degree graphs.
Theory of Computing, 7(1):27–43, 2011.

[6] Markus Bläser and Bodo Manthey. Improved approximation algorithms for max-2sat with cardinality constraint. In
Algorithms and Computation, 13th International Symposium, ISAAC 2002, Vancouver, BC, Canada, November 21-23, 2002, Proceedings, pages 187–198, 2002.

[7] Joshua Brakensiek, Sivakanth Gopi, and Venkatesan Guruswami. CSPs with global modular constraints: Algorithms and hardness via polynomial representations.
Electronic Colloquium on Computational Complexity (ECCC), 26:13, 2019.

[8] Andrei A. Bulatov. A dichotomy theorem for nonuniform CSPs. In 58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017, pages 319–330, 2017.

[9] Andrei A. Bulatov and Dániel Marx. The complexity of global cardinality constraints.
Logical Methods in Computer Science, 6(4), 2010.

[10] Irit Dinur, Elchanan Mossel, and Oded Regev. Conditional hardness for approximate coloring.
SIAM J. Comput., 39(3):843–873, 2009.

[11] Uriel Feige and Michael Langberg. The RPR² rounding technique for semidefinite programs.
J. Algorithms, 60(1):1–23, 2006.

[12] Alan M. Frieze and Mark Jerrum. Improved approximation algorithms for MAX k-CUT and MAX BISECTION.
Algorithmica, 18(1):67–81, 1997.

[13] Michel X. Goemans and David P. Williamson. .879-approximation algorithms for MAX CUT and MAX 2SAT. In
Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing, 23-25 May 1994, Montréal, Québec, Canada, pages 422–431, 1994.

[14] Venkatesan Guruswami and Euiwoong Lee. Complexity of approximating CSP with balance/hard constraints.
Theory Comput. Syst., 59(1):76–98, 2016.

[15] Venkatesan Guruswami, Rajsekar Manokaran, and Prasad Raghavendra. Beating the random ordering is hard: Inapproximability of maximum acyclic subgraph. In 49th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2008, pages 573–582, 2008.

[16] Eran Halperin and Uri Zwick. A unified framework for obtaining improved approximation algorithms for maximum graph bisection problems.
Random Struct. Algorithms, 20(3):382–402, 2002.

[17] Johan Håstad. Some optimal inapproximability results.
J. ACM, 48(4):798–859, 2001.

[18] Thomas Hofmeister. An approximation algorithm for MAX-2-SAT with cardinality constraint. In
Algorithms - ESA 2003, 11th Annual European Symposium, Budapest, Hungary, September 16-19, 2003, Proceedings, pages 301–312, 2003.

[19] Howard J. Karloff and Uri Zwick. A 7/8-approximation algorithm for MAX 3SAT? In 38th Annual Symposium on Foundations of Computer Science, FOCS 1997, pages 406–415, 1997.

[20] Subhash Khot. On the power of unique 2-prover 1-round games. In
Proceedings on 34th Annual ACM Symposium on Theory of Computing, May 19-21, 2002, Montréal, Québec, Canada, pages 767–775, 2002.

[21] Subhash Khot, Guy Kindler, Elchanan Mossel, and Ryan O'Donnell. Optimal inapproximability results for MAX-CUT and other 2-variable CSPs?
SIAM J. Comput., 37(1):319–357, 2007.

[22] Subhash Khot and Oded Regev. Vertex cover might be hard to approximate to within 2−ε. In 18th Annual IEEE Conference on Computational Complexity, CCC 2003, page 379, 2003.

[23] Michael Lewin, Dror Livnat, and Uri Zwick. Improved rounding techniques for the MAX 2-SAT and MAX DI-CUT problems. In Integer Programming and Combinatorial Optimization, 9th International IPCO Conference, Cambridge, MA, USA, May 27-29, 2002, Proceedings, pages 67–82, 2002.

[24] Pasin Manurangsi. Almost-polynomial ratio ETH-hardness of approximating densest k-subgraph. In
Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pages 954–961, 2017.

[25] Pasin Manurangsi. A note on Max k-Vertex Cover: Faster FPT-AS, smaller approximate kernel and improved approximation. In 2nd Symposium on Simplicity in Algorithms, SOSA 2019, pages 15:1–15:21, 2019.

[26] Elchanan Mossel, Ryan O'Donnell, and Krzysztof Oleszkiewicz. Noise stability of functions with low influences: invariance and optimality.
Ann. of Math. (2), 171(1):295–341, 2010.

[27] Ryan O'Donnell.
Analysis of Boolean Functions. Cambridge University Press, 2014.

[28] Prasad Raghavendra. Optimal algorithms and inapproximability results for every CSP? In
Proceedings of the 40th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada, May 17-20, 2008, pages 245–254, 2008.

[29] Prasad Raghavendra and David Steurer. Graph expansion and the unique games conjecture. In
Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC 2010, Cambridge, Massachusetts, USA, 5-8 June 2010, pages 755–764, 2010.

[30] Prasad Raghavendra and Ning Tan. Approximating CSPs with global cardinality constraints using SDP hierarchies. In
Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012, Kyoto, Japan, January 17-19, 2012, pages 373–387, 2012.

[31] Ran Raz. A parallel repetition theorem.
SIAM J. Comput., 27(3):763–803, 1998.

[32] Thomas J. Schaefer. The complexity of satisfiability problems. In
Proceedings of the Tenth Annual ACM Symposium on Theory of Computing, STOC '78, pages 216–226, New York, NY, USA, 1978. ACM.

[33] Maxim Sviridenko. Best possible approximation algorithm for MAX SAT with cardinality constraint.
Algorithmica, 30(3):398–405, 2001.

[34] Yinyu Ye. A .699-approximation algorithm for max-bisection.
Math. Program., 90(1):101–111, 2001.

[35] Dmitriy Zhuk. A proof of CSP dichotomy conjecture. In 58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017, Berkeley, CA, USA, October 15-17, 2017, pages 331–342, 2017.