Sticky Brownian Rounding and its Applications to Constraint Satisfaction Problems
Sepehr Abbasi-Zadeh, Nikhil Bansal, Guru Guruganesh, Aleksandar Nikolov, Roy Schwartz, Mohit Singh
SSticky Brownian Rounding and its Applications to ConstraintSatisfaction Problems
Sepehr Abbasi-Zadeh ∗ Nikhil Bansal † Guru Guruganesh ‡ Aleksandar Nikolov § Roy Schwartz ¶ Mohit Singh (cid:107)
Abstract
Semidefinite programming is a powerful tool in the design and analysis of approximationalgorithms for combinatorial optimization problems. In particular, the random hyperplanerounding method of Goemans and Williamson [31] has been extensively studied for more thantwo decades, resulting in various extensions to the original technique and beautiful algorithmsfor a wide range of applications. Despite the fact that this approach yields tight approximationguarantees for some problems, e.g. , Max-Cut , for many others, e.g. , Max-SAT and
Max-DiCut , the tight approximation ratio is still unknown. One of the main reasons for this is thefact that very few techniques for rounding semi-definite relaxations are known.In this work, we present a new general and simple method for rounding semi-definite pro-grams, based on Brownian motion. Our approach is inspired by recent results in algorithmicdiscrepancy theory. We develop and present tools for analyzing our new rounding algorithms,utilizing mathematical machinery from the theory of Brownian motion, complex analysis, andpartial differential equations. Focusing on constraint satisfaction problems, we apply our methodto several classical problems, including
Max-Cut , Max-2SAT , and
Max-DiCut , and derivenew algorithms that are competitive with the best known results. To illustrate the versatilityand general applicability of our approach, we give new approximation algorithms for the
Max-Cut problem with side constraints that crucially utilizes measure concentration results for theSticky Brownian Motion, a feature missing from hyperplane rounding and its generalizations. ∗ University of Toronto. E-mail: [email protected] . † TU Eindhoven, and Centrum Wiskunde & Informatica. E-mail: [email protected] . ‡ Google Research. Email: [email protected] . § University of Toronto. Email: [email protected] . ¶ Technion. Email: [email protected] . (cid:107) Georgia Institute of Technology. Email: [email protected] . a r X i v : . [ c s . D S ] O c t Introduction
Semi-definite programming (SDP) is one of the most powerful tools in the design of approximationalgorithms for combinatorial optimization problems. Semi-definite programs can be viewed asrelaxed quadratic programs whose variables are allowed to be vectors instead of scalars and scalarmultiplication is replaced by inner products between the vectors. The prominent approach whendesigning SDP based approximation algorithms is rounding : (1) an SDP relaxation is formulatedfor the given problem, (2) the SDP relaxation is solved, and lastly (3) the fractional solution forthe SDP relaxation is transformed into a feasible integral solution to the original problem, hencethe term rounding .In their seminal work, Goemans and Williamson [31] presented an elegant and remarkablysimple rounding method for SDPs: a uniformly random hyperplane (through the origin) is chosen,and then each variable, which is a vector, is assigned to the side of the hyperplane it belongs to.This (binary) assignment is used to round the vectors and output an integral solution. For example,when considering
Max-Cut , each side of the hyperplane corresponds to a different side of the cut.Using the random hyperplane rounding, [31] gave the first non-trivial approximation guaranteesfor fundamental problems such as
Max-Cut , Max-2SAT , and
Max-DiCut . Perhaps the mostcelebrated result of [31] is the 0 .
878 approximation for
Max-Cut , which is known to be tight [39, 44]assuming Khot’s Unique Games Conjecture [38]. Since then, the random hyperplane method hasinspired, for more than two decades now, a large body of research, both in approximation algorithmsand in hardness of approximation. In particular, many extensions and generalizations of the randomhyperplane rounding method have been proposed and applied to a wide range of applications , e.g. , Max-DiCut and
Max-2SAT [25, 40],
Max-SAT [8, 13],
Max-Bisection [12, 50],
Max-Agreement in correlation clustering [22], the
Cut-Norm of a matrix [3].Despite this success and the significant work on variants and extensions of the random hy-perplane method, the best possible approximation ratios for many fundamental problems stillremain elusive. Several such examples include
Max-SAT , Max-Bisection , Max-2CSP , and
Max-DiCut . Perhaps the most crucial reason for the above failure is the fact that besides therandom hyperplane method and its variants, very few methods for rounding SDPs are known.A sequence of papers by Austrin [10], Raghavendra [48], Raghavendra and Steurer [49] hasshown that SDP rounding algorithms that are based on the random hyperplane method and itsextensions nearly match the Unique Games hardness of any
Max-CSP , as well as the integrality gapof a natural family of SDP relaxations. However, the universal rounding proposed by Raghavendraand Steurer is impractical, as it involves a brute-force search on a large constant-sized instance ofthe problem. Moreover, their methods only allow computing an ε additive approximation to theapproximation ratio in time double-exponential in 1 /ε . Our main contributions are (1) to propose a new SDP rounding technique that is based on diffusionprocesses, and, in particular, on Brownian motion; (2) to develop the needed tools for analyzing ournew SDP rounding technique by deploying a variety of mathematical techniques from probabilitytheory, complex analysis and partial differential equations (PDEs); (3) to show that this roundingtechnique has useful concentration of measure properties, not present in random hyperplane based1echniques, that can be used to obtain new approximation algorithms for a version of the
Max-Cut problem with multiple global side constraints.Our method is inspired by the recent success of Brownian motion based algorithms for construc-tive discrepancy minimization, where it was used to give the first constructive proofs of some of themost powerful results in discrepancy theory [14, 15, 16, 41]. The basic idea is to use the solutionto the semi-definite program to define the starting point and the covariance matrix of the diffusionprocess, and let the process evolve until it reaches an integral solution. As the process is forcedto stay inside the cube [ − , n (for Max-Cut ) or [0 , n (for Max-2SAT and other problems),and to stick to any face it reaches, we call the most basic version of our algorithm (without anyenhancements) the
Sticky Brownian Motion rounding. The algorithm is defined more formally inSection 1.2.1.
Sticky Brownian Motion.
Using the tools we introduce, we show that this algorithm is alreadycompetitive with the state of the art results for
Max-Cut , Max-2SAT , and
Max-DiCut . Theorem 1.
The basic Brownian rounding achieves an approximation ration of . for the Max-Cut problem. Moreover, when the
Max-Cut instance has value − ε , Sticky Brownian Motionachieves value − Ω( √ ε ) . In particular, using complex analysis and evaluating various elliptic integrals, we show that theseparation probability for any two unit vectors u and v separated by an angle θ , is given by acertain hypergeometric function of θ (see Theorem 4 for details). This precise characterization ofthe separation probability also proves that the Sticky Brownian Motion rounding is different fromthe random hyperplane rounding. The overview of the analysis is in Section 1.2.2 and Section 2has the details.We can also analytically show the following upper bound for Max-2SAT . Theorem 2.
The Sticky Brownian Motion rounding achieves approximation ratio of at least . for Max-2SAT . While the complex analysis methods also give exact results for
Max-2SAT , the explicit expres-sions are much harder to obtain as one has to consider all possible starting points for the diffusionprocess, while in the
Max-Cut case the process always starts at the origin. Because of this, inorder to prove Theorem 2 we introduce another method of analysis based on partial differentialequations (PDEs), and the maximum principle, which allows us to prove analytic bounds on PDEsolutions. Moreover, numerically solving the PDEs suggests the bound 0.921. The overview anddetails of the
Max-2SAT analysis are, respectively, in Sections 1.2.3 and 3. Section 5.1 has detailsabout numerical calculations for various problems.For comparison, the best known approximation ratio for
Max-Cut is the Goemans-Williamsonconstant α GW ≈ . Max-2SAT is 0 . Max-Cut instances of value 1 − ε is optimal up to constants [39], assuming theUnique Games Conjecture.We emphasize that our results above are achieved with a single algorithm “out of the box”,without any additional engineering. While the analysis uses sophisticated mathematical tools, thealgorithm itself is simple, efficient, and straightforward to implement.2lgorithm Max-Cut Max-2SAT Max-DiCut (cid:63)
Brownian Rounding 0.861 0.921 0 . . † . † indicates that for Max-Cut , the approximation for the slowed down walk differsfrom the GW bound only in the fourth decimal. For
Max-DiCut , the (cid:63) indicates that we onlyconsider the n + 1-dimensional walk. Extensions.
Next, we consider two different modifications of Sticky Brownian Motion that allowus to improve the approximation guarantees above, and show the flexibility of diffusion basedrounding algorithms. The first one is to smoothly slow down the process depending on how far itis from the boundaries of the cube. As a proof of concept, we show, numerically, that a simplemodification of this kind matches the Goemans-Williamson approximation of
Max-Cut up tothe first three digits after the decimal point. We also obtain significant improvements for otherproblems over the vanilla method.Second, we propose a variant of Sticky Brownian Motion running in n + 1 dimensions ratherthan n dimensions, and we analyze it for the Max-DiCut problem. The extra dimension is usedto determine whether the nodes labeled +1 or those labeled − .
79 for
Max-DiCut . Slowingdown the process further improves this approximation to 0 .
81. We give a summary of the obtainedresults in Table 1. An overview and details of the extensions are given, respectively, in Sections1.2.4 and 5.1. Recent Progress.
Very recently, in a beautiful result, Eldan and Naor [24] describe a slowdownprocess that exactly achieves the Goemans-Williamson (GW) bound of 0.878 for Max-Cut, an-swering an open question posed in an earlier version of this paper. This shows that our roundingtechniques are at least as powerful as the classical random hyperplane rounding, and are potentiallymore general and flexible.In general, given the dearth of techniques for rounding semidefinite programs, we expect thatrounding methods based on diffusion processes, together with the analysis techniques introducedin this paper, will find broader use, and, perhaps lead to improved results for Max-CSP problems.
Applications.
To further illustrate the versatility and general applicability of our approach, weconsider the
Max-Cut with Side Constraints problem, abbreviated
Max-Cut-SC , a generalizationof the
Max-Bisection problem which allows for multiple global constraints. In an instance of the
Max-Cut-SC problem, we are given an n -vertex graph G = ( V, E ), a collection F = { F , . . . , F k } of subsets of V , and cardinality bounds b , . . . , b k ∈ N . The goal is to find a subset S ⊂ V thatmaximizes the weight | δ ( S ) | of edges crossing the cut ( S, V \ S ), subject to having | S ∩ F i | = b i forall i ∈ [ k ]. Our numerical results are not obtained via simulating the random algorithm but solving a discretized version ofa PDE that analyzes the performance of the algorithm. Error analysis of such a discretization can allow us to provethe correctness of these bounds within a reasonable accuracy. We give the following result for the problem, using the Sticky BrownianMotion as a building tool.
Theorem 3.
There exists a O ( n poly(log( k ) /ε ) ) -time algorithm that on input a satisfiable instance G = ( V, E ) , F , and b , . . . , b k , as defined above, outputs a (0 . − ε, ε ) -approximation with highprobability. In the presence of a single side constraint, the problem is closely related to the
Max-Bisection problem [12, 50], and, more generally to
Max-Cut with a cardinality constraint. While our meth-ods use the stronger semi-definite programs considered in [50] and [12], the main new technicalingredient is showing that the Sticky Brownian Motion possesses concentration of measure prop-erties that allow us to approximately satisfy multiple constraints. By contrast, the hyperplanerounding and its generalizations that have been applied previously to the
Max-Cut and
Max-Bisection problems do not seem to allow for such strong concentration bounds. For this reason,the rounding and analysis used in [50] only give an O ( n poly( k/ε ) ) time algorithm for the Max-Cut-SC problem, which is trivial for k = Ω( n ), whereas our algorithm has non-trivial quasi-polynomialrunning time even in this regime. We expect that this concentration of measure property will findfurther applications, in particular to constraint satisfaction problems with global constraints. Remark.
We can achieve better results using Sticky Brownian Motion with slowdown. In par-ticular, in time O ( n poly(log( k ) /ε ) ) we can get a (0 . − ε, ε )-approximation with high probabilityfor any satisfiable instance. However, we focus on the basic Sticky Brownian Motion algorithm tosimplify exposition. Note that due to the recent work by Austrin and Stankovi´c [11], we know thatadding even a single global cardinality constraint to the Max-Cut problem makes it harder toapproximate. In particular, they show that subject to a single side constraint,
Max-Cut is UniqueGames-hard to approximate within a factor of approximately 0 . Max-Cut-SC problem is optimal up to smallnumerical errors. (We emphasize the possibility of numerical errors as both our result, and thehardness result in [11] are based on numerical calculations.)
Let us describe our basic algorithm in some detail. Recall that the Goemans-Williamson SDP for
Max-Cut is equivalent to the following vector program: given a graph G = ( V, E ), we writemax (cid:88) ( i,j ) ∈ E − w i · w j s.t. w i · w i = 1 ∀ i ∈ V We say that a set S ⊂ V is an ( α, ε )-approximation if (cid:12)(cid:12) | S ∩ F i | − b i (cid:12)(cid:12) ≤ εn for all i ∈ [ k ], and | δ ( S ) | ≥ α · | δ ( T ) | for all T ⊂ V such that | T ∩ F i | = b i for all i ∈ [ k ]. w i range over n dimensional real vectors ( n = | V | ). The Sticky Brownian Mo-tion rounding algorithm we propose maintains a sequence of random fractional solutions X , . . . , X T such that X = and X T ∈ {− , +1 } n is integral. Here, a vertex of the hypercube {− , +1 } n isnaturally identified with a cut, with vertices assigned +1 forming one side of the cut, and the onesassigned − A t be the random set of coordinates of X t − which are not equal to − t = 1 , . . . , T , the algorithm picks ∆X t sampled from theGaussian distribution with mean and covariance matrix W t , where ( W t ) ij = w i · w j if i, j ∈ A t ,and ( W t ) ij = 0 otherwise. The algorithm then takes a small step in the direction of ∆X t , i.e. sets X t = X t − + γ ∆X t for some small real number γ . If the i -th coordinate of X t is very close to − i , then it is rounded to either − γ and T are chosen so that the fractional solutions X t never leave the cube [ − , n , and so that the finalsolution X T is integral with high probability. As γ goes to 0, the trajectory of the i -th coordinateof X t closely approximates a Brownian motion started at 0, and stopped when it hits one of theboundary values {− , +1 } . Importantly, the trajectories of different coordinates are correlatedaccording to the SDP solution. A precise definition of the algorithm is given in Section 2.1.The algorithm for Max-2SAT (and
Max-DiCut ) is essentially the same, modulo using thecovariance matrix from the appropriate standard SDP relaxation, and starting the process at themarginals for the corresponding variables. We explain this in greater detail below.
In order to analyze this algorithm, it is sufficient to understand the probability that an edge ( i, j )is cut as a function of the angle θ between the vectors w i and w j . Thus, we can focus on theprojection (( X t ) i , ( X t ) j ) of X t . We observe that (( X t ) i , ( X t ) j ) behaves like a discretization ofcorrelated 2-dimensional Brownian motion started at (0 , τ when it hitsthe boundary of the square [ − , . After τ , (( X t ) i , ( X t ) j ) behaves like a discretization of a 1-dimensional Brownian motion restricted to one of the sides of the square. From now on we willtreat the process as being continuous, and ignore the discretization, which only adds an arbitrarilysmall error term in our analysis. It is convenient to apply a linear transformation to the correlated Brownian motion (( X t ) i , ( X t ) j ) so that it behaves like a standard B t started at (0 , − , to a rhombus S centered at with internal angle θ ; we can then think of τ as the first time B t hits the boundaryof S . After time τ , the transformed process is distributed like a 1-dimensional Brownian motionon the side of the rhombus that was first hit. To analyze this process, we need to understand theprobability distribution of B τ . The probability measure associated with this distribution is knownas the harmonic measure on the boundary ∂ S of S , with respect to the starting point . Thesetransformations and connections are explained in detail in Section 2.2.The harmonic measure has been extensively studied in probability theory and analysis. Thesimplest special case is the harmonic measure on the boundary of a disc centered at with respectto the starting point . Indeed, the central symmetry of the disc and the Brownian motion impliesthat it is just the uniform measure. A central fact we use is that harmonic measure in 2 dimensionsis preserved under conformal (i.e. angle-preserving) maps. Moreover, such maps between polygonsand the unit disc have been constructed explicitly using complex analysis, and, in particular, are5iven by the Schwarz-Christoffel formula [2]. Thus, the Schwarz-Christoffel formula gives us anexplicit formulation of sampling from the harmonic measure on the boundary ∂ S of the rhombus:it is equivalent to sampling a uniformly random point on the boundary of the unit disc D centeredat the origin, and mapping this point via a conformal map F that sends D to S . Using thisformulation, in Section 2.3 we show how to write the probability of cutting the edge ( i, j ) as anelliptic integral.Calculating the exact value of elliptic integrals is a challenging problem. Nevertheless, byexploiting the symmetry in the Max-Cut objective, we relate our particular elliptic integral tointegrals of the incomplete beta and hypergeometric functions. We further simplify these integralsand bring them into a tractable form using several key identities from the theory of special functions.Putting everything together, we get a precise closed form expression for the probability that theSticky Brownian Motion algorithm cuts a given edge in Theorem 4, and, as a consequence, weobtain the claimed guarantees for
Max-Cut in Theorems 1 and 7.
The algorithm for
Max-2SAT is almost identical to the
Max-Cut algorithm, except that the SDPsolution is asymmetric, in the following sense. We can think of the SDP as describing the meanand covariance of a “pseudo-distribution” over the assignments to the variables. In the case of
Max-Cut , we could assume that, without loss of generality, the mean of each variable (i.e. one-dimensional marginal) is 0 since S and S are equivalent solutions. However, this is not the case for Max-2SAT . We use this information, and instead of starting the diffusion process at the centerof the cube, we start it at the point given by the marginals. For convenience, and also respectingstandard convention, we work in the cube [0 , n rather than [ − , n . Here, in the final solution X T , if ( X T ) i = 0 we set the i -th variable to true and if ( X T ) i = 1, we set it to false . We againanalyze each clause C separately, which allows us to focus on the diffusion process projected to thecoordinates (( X t ) i , ( X t ) j ), where i and j are the variables appearing in C . However, the previousapproach of using the Schwarz-Christoffel formula to obtain precise bounds on the probability doesnot easily go through, since it relies heavily on the symmetry of the starting point of the Brownianmotion. It is not clear how to extend the analysis when we change the starting point to a pointother than the center, as the corresponding elliptic integrals appear to be intractable.Instead, we appeal to a classical connection between diffusion processes and partial differentialequations [47, Chapter 9]. Recall that we are focusing on a single clause C with variables i and j ,and the corresponding diffusion process (( X t ) i , ( X t ) j ) in the unit square [0 , starting at a pointgiven by the marginals and stopped at the first time τ when it hits the boundary of the square;after that time the process continues as a one-dimensional Brownian motion on the side of thesquare it first hit. For simplicity let us assume that both variables appear un-negated in C . Theprobability that C is satisfied then equals the probability that the process ends at one of the points(0 , ,
0) or (0 , u : [0 , → [0 ,
1] be the function which assigns to ( x, y ) the probabilitythat this happens when the process is started at ( x, y ). Since on the boundary ∂ [0 , of the squareour process is a one-dimensional martingale, the value of u ( x, y ) is easy to compute on ∂ [0 , , andin fact equals 1 − xy . Then, in the interior of the square, we have u ( x, y ) = E [ u (( X τ ) i , ( X τ ) j )].It turns out that this identifies u as the unique solution to an elliptic partial differential equation(PDE) L u = 0 with the Dirichlet boundary condition u ( x, y ) = 1 − xy ∀ ( x, y ) ∈ ∂ [0 , . In our6ase, the operator L just corresponds to Laplace’s operator L [ u ] = ∂ u∂x + ∂ u∂y after applying a lineartransformation to the variables and the domain. This connection between our rounding algorithmand PDEs is explained in Section 3.2.Unfortunately, it is still not straightforward to solve the obtained PDE analytically. We dealwith this difficulty using two natural approaches. First, we use the maximum principle of ellipticPDE’s [28], which allows us to bound the function u from below. In particular, if we can find afunction g such that g ( x, y ) ≤ u ( x, y ) = 1 − xy on the boundary of the square, and L g ≥ g ( x, y ) ≤ u ( x, y ) for all x, y in the square. Weexhibit simple low-degree polynomials which satisfy the boundary conditions by design, and usethe sum of squares proof system to certify non-negativity under the operator L . In Section 3.3, weuse this method to show that Sticky Brownian Motion rounding achieves approximation ratio atleast 0 . Max-2SAT . Recall that in the Sticky Brownian Motion roundingeach increment is proportional to ∆X t sampled from a Gaussian distribution with mean andcovariance matrix W t . The covariance is derived from the SDP: for example, in the case of Max-Cut , it is initially set to be the Gram matrix of the vectors produced by the SDP solution. Then,whenever a coordinate ( X t ) i reaches {− , +1 } , we simply zero-out the corresponding row andcolumn of W t . This process can be easily modified by varying how the covariance matrix W t evolves with time. Instead of zeroing out rows and columns of W t , we can smoothly scale thembased on how far ( X t − ) i is from the boundary values {− , } . A simple way to do this, in the caseof the Max-Cut problem, is to set( W t ) ij = (1 − ( X t − ) i ) α/ (1 − X t − ) j ) α/ w i · w j for a constant 0 ≤ α <
2. Effectively, this means that the process is slowed down smoothlyas it approaches the boundary of the cube [ − , +1] n . This modified diffusion process, which wecall Sticky Brownian Motion with Slowdown, still converges to {− , +1 } n in finite time. Onceagain, the probability of cutting an edge ( i, j ) of our input graph can be analyzed by focusingon the two-dimensional projection (( X t ) i , ( X t ) j ) of X t . Moreover, we can still use the generalconnection between diffusion processes and PDE’s mentioned above. That is, if we write u ( x, y ) :[ − , → [0 ,
1] for the probability that edge ( i, j ) is cut if the process is started at ( x, y ), then u can be characterized as the solution of an elliptic PDE with boundary conditions u ( x, y ) = − xy ∀ ( x, y ) ∈ ∂ [ − , . We solve this PDE numerically using the finite element method toestimate the approximation ratio for a fixed value of the parameter α , and then we optimize over α . At the value α =1.61 our numerical solution shows an approximation ratio that matches theGoemans-Williamson approximation of Max-Cut up to the first three digits after the decimalpoint. We also analyze an analogous algorithm for
Max-2SAT and show that for α =1.61 itachieves an approximation ratio of 0.929. The detailed analysis of the slowed down Sticky BrownianMotion rounding is given in Section 5.1. 7 higher-dimensional version. We also consider a higher-dimensional version of the StickyBrownian Motion rounding, in which the Brownian motion evolves in n + 1 dimensions rather than n . This rounding is useful for asymmetric problems like Max-DiCut in which the SDP producesnon-uniform marginals, as we discussed above in the context of Max-2SAT . Such an SDP has avector w in addition to w , . . . , w n , and the marginals are given by w · w i . Now, rather thanusing the marginals to obtain a different starting point, we consider the symmetric Sticky BrownianMotion process starting from the center but using all the n + 1 vectors w , . . . , w n . At the finalstep T of the process, in the case of Max-DiCut , the variables whose value is equal to ( X T ) areassigned to the left side of the cut, and the variables with the opposite value are assigned to theright side of the cut. Thus, for an edge i → j to be cut, it must be the case that ( X T ) i = ( X T ) and ( X T ) j = 1 − ( X T ) . While analyzing the probability that this happens is a question aboutBrownian motion in three rather than two dimensions, we reduce it to a two-dimensional questionvia the inclusion-exclusion principle. After this reduction, we can calculate the probability thatan edge is cut by using the exact formula proved earlier for the Max-Cut problem. Our analysis,which is given in Section 5.2, shows that this ( n + 1)-dimensional Sticky Brownian Motion achievesan approximation of 0 .
79 for
Max-DiCut . Moreover, combining the two ideas, of changing thecovariance matrix at each step, as well as performing the n +1-dimensional Sticky Brownian Motion,achieves a ratio of 0 . The starting point for our algorithm for the
Max-Cut-SC problem is a stronger SDP relaxationderived using the Sum of Squares (SoS) hierarchy. Similar relaxations were previously consideredin [12, 50] for the
Max-Bisection problem. In addition to giving marginal values and a covariancematrix for a “pseudo-distribution” over feasible solutions, the SoS SDP makes it possible to condi-tion on small sets of variables. The global correlation rounding method [17, 32] allows us to choosevariables to condition on so that, after the conditioning, the covariance matrix has small entries onaverage. Differing from previous works [12, 50], we then run the Sticky Brownian Motion roundingdefined by the resulting marginals and covariance matrix. We can analyze the weight of cut edgesusing the PDE approach outlined above. The main new challenge is to bound the amount by whichthe side constraints are violated. To do so, we show that Sticky Brownian Motion concentratestightly around its mean, and, in particular, it satisfies sub-Gaussian concentration in directionscorresponding to sets of vertices. Since the mean of the Sticky Brownian Motion is given by themarginals, which satisfy all side constraints, we can bound how much constraints are violated viathe concentration and a union bound. To show this key concentration property, we use the factthat the covariance that defines the diffusion has small entries, and that Brownian Motion is a mar-tingale. Then the concentration inequality follows, roughly, from a continuous analogue of Azuma’sinequality. The detailed analysis is given in Section 4. We again remark that such sub-Gaussianconcentration bounds are not known to hold for the random hyperplane rounding method or itsgeneralizations as considered in [12, 50]. The input for
Max-DiCut is a directed graph G = ( V, E ) and the goal is to find a cut S ⊆ V that maximizesthe number of edges going from S to S . .3 Related Work In their seminal work, Goemans and Williamson [31] presented the random hyperplane roundingmethod which yielded an approximation of 0.878 for
Max-Cut . For the closely related
Max-DiCut problem they presented an approximation of 0 . . . et. al. [40] who presentthe current best known approximation of 0 . Max-Cut . Another fundamental and closely related problem is
Max-Bisection . In their classicwork [27], Frieze and Jerrum present an approximation of 0 .
651 for this problem. Their result waslater improved to 0 .
699 by Ye [53], to 0 .
701 by Halperin and Zwick [34], and to 0 .
702 by Feigeand Langberg [26]. Using the sum of squares hierarchy, Raghavendra and Tan [50] gave a furtherimprovement to 0 .
85, and finally, Austrin et. al. [12] presented an almost tight approximationof 0 . / for Max-Cut (which implies the exact same hardness for
Max-Bisection ) and a hardness of / for Max-DiCut (both of these hardness results are assuming P (cid:54) = NP ). If one assumes the Unique GamesConjecture of Khot [38], then it is known that the random hyperplane rounding algorithm of [31]is tight [39, 44]. Thus, it is worth noting that though Max-Cut is settled conditional on theUnique Games conjecture, both
Max-DiCut and
Max-Bisection still remain unresolved, evenconditionally.Another fundamental class of closely related problems are
Max-SAT and its special cases
Max- k -SAT . For Max-2SAT
Goemans and Williamson [31], using random hyperplane rounding,presented an approximation of 0.878. This was subsequently improved in a sequence of works:Feige and Goemans [25] presented an approximation of 0 . . et. al. [40] presented the current best knownapproximation of 0 . Max-2SAT , assuming P (cid:54) = NP , H˚astad[35] presented a hardness of / . Assuming the Unique Games Conjecture Austrin [9] presenteda (virtually) tight hardness of 0 . Max-3SAT , Karloffand Zwick [37] and Zwick [54] presented an approximation factor of / based on the randomhyperplane method. The latter is known to be tight by the celebrated hardness result of H˚astad[35]. For Max-4SAT
Halperin and Zwick [33] presented an (almost) tight approximation guaranteeof 0 . Max-SAT in its full generality, a sequence of works [7, 8, 13] slowlyimproved the known approximation factor, where the current best one is achieved by Avidor et. al. [13] and equals 0 . For the general case of
Max-CSP a sequence of works [10, 48] culminatedin the work of Raghavendra and Steurer [49] who presented an algorithm that assuming the UniqueGames Conjecture matches the hardness result for any constraint satisfaction problem. However,as previously mentioned, this universal rounding is impractical as it involves a brute-force solutionto a large constant instance of the problem. Moreover, it only allows computing an ε additiveapproximation to the approximation ratio in time double-exponential in / ε .Many additional applications of random hyperplane rounding and its extensions exist. Somewell known examples include: [5, 20, 36], Max-Agreement in correlation clustering[22, 51], the maximization of quadratic forms [21], and the computation of the
Cut-Norm [3]. Avidor et. al. also present an algorithm with a conjectured approximation of 0 . et. al. [6] for the Sparsest-CUT problem. Though the approach of [6]uses random projections, it is based on different mathematical tools, e.g. , L´evy’s isoperimetricinequality. Moreover, the algorithmic machinery that was developed since the work of [6] has founduses for minimization problems, and in particular it is useful for minimization problems that relateto graph cuts and clustering.Brownian motion was first used for rounding SDPs in Bansal [14] in the context of constructivediscrepancy minimization. This approach has since proved itself very successful in this area, andhas led to new constructive proofs of several major results [15, 16, 41]. However, this line of workhas largely focused on improving logarithmic factors, and its methods are not precise enough toanalyze constant factor approximation ratios.
In this section, we use
Max-Cut as a case study for the method of rounding a semi-definiterelaxation via Sticky Brownian Motion. Recall, in an instance of the
Max-Cut problem we aregiven a graph G = ( V, E ) with edge weights a : E → R + and the goal is to find a subset S ⊂ V thatmaximizes the total weight of edges crossing the cut ( S, V \ S ), i.e., a ( δ ( S )) := (cid:80) { u,v }∈ E : u ∈ S,v / ∈ S a uv .We first introduce the standard semi-definite relaxation for the problem and introduce the stickyBrownian rounding algorithm. To analyze the algorithm, we use the invariance of Brownian motionwith respect to conformal maps, along with several identities of special functions. Before we proceed, we recall again the SDP formulation for the
Max-Cut problem, famouslystudied by Goemans and Williamson [31].max (cid:88) e =( i,j ) ∈ E a ( e ) (1 − w i · w j )2 s.t. w i · w i = 1 ∀ i = 1 , ..., n We now describe the Sticky Brownian Motion rounding algorithm specialized to the
Max-Cut problem. Let W denote the positive semi-definite correlation matrix defined by the vectors w , . . . , w n , i.e. , for every 1 ≤ i, j ≤ n we have that: W i,j = w i · w j . Given a solution W to the semi-definite program, we perform the following rounding process: start at the origin andperform a Brownian motion inside the [ − , n hypercube whose correlations are governed by W .Additionally, the random walk is sticky : once a coordinate reaches either − { X t } t ≥ as follows. We fix X = . Let { B t } t ≥ bestandard Brownian motion in R n started at the origin, and let τ = inf { t : x + W / B t (cid:54)∈ [ − , n } be the first time x + W / B t exits the cube. With probability 1, you can assume that τ is alsothe first time that the process lies on the boundary of the cube. Here W / is the principle squareroot of W . Then, for all 0 ≤ t ≤ τ we define X t = x + W / B t . This defines the process until the first time it hits a face of the cube. From this point on, we will forceit to stick to this face. Let A t = { i : ( X t ) i (cid:54) = ± } be the active coordinates of the process at time t ,and let F t = { x ∈ [ − , n : x i = ( X t ) i ∀ i ∈ A t } be the face of the cube on which X t lies at time t .With probability 1, F τ has dimension n −
1. We define the covariance matrix ( W t ) ij = W ij when i, j ∈ A t , and ( W t ) ij = 0 otherwise. Then we take τ = inf { t ≥ τ : X τ + W / τ ( B t − B τ ) (cid:54)∈ F τ } to be the first time that Brownian motion started at X τ with covariance given by W τ exits theface F τ . Again, with probability 1, we can assume that this is also the first time the process lieson the boundary of F τ . For all τ < t ≤ τ we define X t = X τ + W / τ ( B t − B τ ) . Again, with probability 1, dim F τ = n −
2. The process is defined analogously from here on. Ingeneral, τ i = inf { t ≥ τ i − : X τ i − + W / τ i − ( B t − B τ i − ) (cid:54)∈ F τ i − } is (with probability 1) the firsttime that the process hits a face of the cube of dimension n − i . Then for τ i − < t ≤ τ i we have X t = X τ i − + W / τ i − ( B t − B τ i − ). At time τ n , X τ n ∈ {− , } n , so the process remains fixed, i.e. forany t ≥ τ n , X t = X τ n . The output of the algorithm then corresponds to a cut S ⊆ V defined asfollows: S = { i ∈ V : ( X τ n ) i = 1 } . We say that a pair of nodes { i, j } is separated when | S ∩ { i, j }| = 1. Remark:
While we have defined the algorithm as a continuous diffusion process, driven by Brow-nian motion, a standard discretization will yield a polynomial time algorithm that achieves thesame guarantee up to an error that is polynomially small. Such a discretization was outlined in theIntroduction. An analysis of the error incurred by discretizing a continuous diffusion process in thisway can be found, for example, in [29] or the book [30]. More sophisticated discrete simulations ofsuch diffusion processes are also available, and can lead to better time complexity as a function ofthe error. One example is the Walk on Spheres algorithm analyzed by Binder and Braverman [19].This algorithm allows us to draw a sample X τ from the continuous diffusion process, stopped ata random time τ , such that X τ is within distance ε from the boundary of the cube [ − , n . Thetime necessary to sample X τ is polynomial in n and log(1 /ε ). We can then round X τ to the nearestpoint on the boundary of the cube, and continue the simulation starting from this rounded point.It is straightforward to show, using the fact that the probability to cut an edge is continuous inthe starting point of our process, that if we set ε = o ( n − ), then the approximation ratio achievedby this simulation is within an o (1) factor from the one achieved by the continuous process. In therest of the paper, we focus on the continuous process since our methods of analysis are naturallyamenable to it. We will always assume that a standard Brownian motion starts at the origin. See Appendix A for a precisedefinition. .2 Analysis of the Algorithm Our aim is to analyze the expected value of the cut output by the Sticky Brownian Motion roundingalgorithm. Following Goemans and Williamson [31], we aim to bound the probability an edgeis cut as compared to its contribution to the SDP objective. Theorem 4 below gives an exact characterization of the probability of separating a pair of vertices { i, j } in terms of the gammafunction and hypergeometric functions. We refer to Appendix B.1 for the definitions of thesefunctions and a detailed exposition of their basic properties. Theorem 4.
The probability that the Sticky Brownian Motion rounding algorithm will separate apair { i, j } of vertices for which θ = cos − ( w i · w j ) equals − Γ( a +12 )Γ( − a )Γ( a + 1) · F (cid:20) a , a , a a , a + 1 ; 1 (cid:21) where a = θ/π , Γ is the gamma function, and F is the hypergeometric function. Theorem 1 will now follow from the following corollary of Theorem 4. The corollary followsfrom numerical estimates of the gamma and hypergeometric functions.
Corollary 1.
For any pair { i, j } , the probability that the pair { i, j } is separated is at least 0.861 · − w i · w j . We now give an outline of the proof of Theorem 4. The plan is to first show that the desiredprobability can be obtained by analyzing the two-dimensional standard Brownian motion startingat the center of a rhombus. Moreover, the probability of separating i and j can be computed usingthe distribution of the first point on the boundary that is hit by the Brownian motion. Conformalmapping and, in particular, the Schwarz-Christoffel formula, allows us to obtain a precise expressionfor such a distribution and thus for the separation probability, as claimed in the theorem. We nowexpand on the above plan.First observe that to obtain the probability i and j are separated, it is enough to considerthe 2-dimensional process obtained by projecting to the i th and j th coordinates of the vector X t .Projecting the process onto these coordinates, we obtain a process ˜ X t ∈ R that can be equivalentlydefined as follows. Let ˜ W = (cid:18) θ )cos( θ ) 1 (cid:19) , where θ is the angle between w i and w j . Let B t be standard Brownian motion in R started at0, and let τ = inf { t : ˜ W / B t (cid:54)∈ [ − , t } be the first time the process hits the boundary of thesquare. Then for all 0 ≤ t ≤ τ we define ˜ X t = ˜ W / B t . Any coordinate k for which ( ˜ X τ ) k ∈ {± } remains fixed from then on, i.e. for all t > τ , ( ˜ X t ) k = ( ˜ X τ ) k . The coordinate (cid:96) that is not fixedat time τ (one exists with probability 1) continues to perform one-dimensional Brownian motionstarted from ( ˜ X τ ) (cid:96) until it hits − σ be the timethis happens; it is easy to show that σ < ∞ with probability 1, and, moreover, E [ σ ] < ∞ . We saythat the process { ˜ X t } t ≥ is absorbed at the vertex ˜ X σ ∈ {− , } . Observation 1.
The probability that the algorithm separates vertices i and j equals Pr (cid:104) { ˜ X t } t is absorbed in { (+1 , − , ( − , +1) } (cid:105) . X t by X t and ˜ W by W for the rest of the section whichis aimed at analyzing the above probability. We also denote by ρ = cos( θ ) the correlation betweenthe two coordinates of the random walk, and call the two-dimensional process just described a ρ -correlated walk. It is easier to bound the probability that i and j are separated by transforming the ρ -correlated walk inside [ − , into a standard Brownian motion inside an appropriately scaledrhombus. We do this by transforming { X t } t ≥ linearly into an auxiliary random process { Y t } t ≥ which will be sticky inside a rhombus (see Figures (1a)-(1b)). Formally, given the random process { X t } t ≥ , we consider the process Y t = O · W − / · X t , where O is a rotation matrix to be chosenshortly. Recalling that for 0 ≤ t ≤ τ the process { X t } ≤ t ≤ τ is distributed as (cid:8) W / B t (cid:9) ≤ t ≤ τ , wehave that, for all 0 ≤ t ≤ τ , Y t = O · B t ≡ B t . Above ≡ denotes equality in distribution, and follows from the invariance of Brownian motionunder rotation. Applying OW − / to the points inside [ − , , we get a rhombus S with vertices b , . . . , b , which are the images of the points (+1 , − , (+1 , +1) , ( − , +1) , ( − , − O so that b lies on the positive x -axis and b on the positive y -axis. Since OW − / isa linear transformation, it maps the interior of [ − , to the interior of S and the sides of [ − , to the sides of S . We have then that τ is the first time Y t hits the boundary of S , and that afterthis time Y t sticks to the side of S that it first hit and evolves as (a scaling of) one-dimensionalBrownian motion restricted to this side, and started at Y τ . The process then stops evolving at thetime σ when Y σ ∈ { b , . . . , b } . We say that { Y t } t ≥ is absorbed at Y σ .The following lemma, whose proof appears in the appendix, formalizes the main facts we useabout this transformation. Lemma 1.
Applying the transformation OW − / to { X t } t ≥ we get a new random process { Y t } t ≥ which has the following properties:1. If X t is in the interior/boundary/vertex of [ − , then Y t is in the interior/boundary/vertexof S , respectively.2. S is a rhombus whose internal angles at b and b are θ , and at b and b are π − θ . Thevertex b lies on the positive x -axis, and b , b , b are arranged counter-clockwise.3. The probability that the algorithm will separate the pair { i, j } is exactly Pr[ Y t is absorbed in b or b ] . In the following useful lemma we show that, in order to compute the probability that the process { Y t } t ≥ is absorbed in b or b , it suffices to determine the distribution of the first point Y τ onthe boundary ∂ S that the process { Y t } t ≥ hits. This distribution is a probability measure on ∂ S known in the literature as the harmonic measure (with respect to the starting point 0). We denoteit by µ ∂ S . The statement of the lemma follows. Lemma 2.
Pr[ Y t is absorbed in b or b ] = 4 · (cid:90) b b − (cid:107) p − b (cid:107)(cid:107) b − b (cid:107) dµ ∂ S ( p ) . Proof.
Since both S and Brownian motion are symmetric with respect to reflection around thecoordinate axes, we see that µ ∂ S is the same as we go from b to b or b , and as we go from b to b or b . Therefore,Pr[pair { i, j } is separated] = 4 · Pr[pair { i, j } is separated | Y τ lies on the segment [ b , b ]] . ,0 𝑎 𝑎 𝑎 𝑎 (a) { X t } t ≥ in [ − , square (cid:2870) (cid:2869)(cid:2871) (cid:2872) (cid:2016) (b) { Y t } t ≥ in S 𝜔 𝜔 𝜔 𝜔 (c) { B t } t ≥ in D Figure 1: Figure (a) depicts { X t } t ≥ in the [ − , square, Figure (b) depicts { Y t } t ≥ in therhombus S , and Figure (c) depicts { B t } t ≥ in the unit disc D . The linear transformation W − / transforms the [ − , square to S (Figure (a) to Figure (b)), whereas the conformal mapping F θ transforms D to S (Figure (c) to Figure (b)).The process { Y t } τ ≤ t ≤ σ is a one-dimensional martingale, so E [ Y σ | Y τ ] = Y τ by the optional stoppingtheorem [43, Proposition 2.4.2]. If we also condition on Y τ ∈ [ b , b ], we have that Y σ ∈ { b , b } .An easy calculation then shows that the probability of being absorbed in b conditional on Y τ andon the event Y τ ∈ [ b , b ] is exactly (cid:107) Y τ − b (cid:107)(cid:107) b − b (cid:107) = 1 − (cid:107) Y τ − b (cid:107)(cid:107) b − b (cid:107) . Then,Pr[pair { i, j } is separated | Y τ ∈ [ b , b ]] = E (cid:20) − (cid:107) Y τ − b (cid:107)(cid:107) b − b (cid:107) (cid:21) = (cid:90) b b − (cid:107) p − b (cid:107)(cid:107) b − b (cid:107) dµ ∂ S ( p ) . This proves the lemma.To obtain the harmonic measure directly for the rhombus S we appeal to conformal mappings.We use the fact that the harmonic measure can be defined for any simply connected region U inthe plane with 0 in its interior. More precisely, let B t be standard 2-dimensional Brownian motionstarted at 0, and τ ( U ) = inf { t : B t (cid:54)∈ U } be the first time it hits the boundary of U . Then µ ∂U denotes the probability measure induced by the distribution of B τ ( U ) , and is called the harmonicmeasure on ∂U (with respect to 0). When U is the unit disc centered at 0, the harmonic measureis uniform on its boundary because Brownian motion is invariant under rotation. Then the mainidea is to use conformal maps to relate harmonic measures on the different domains, namely thedisc and our rhombus S . Before we proceed further, it is best to transition to the language of complex numbers and identify R with the complex plane C . A complex function F : U → V where U, V ⊆ C is conformal if it isholomorphic (i.e. complex differentiable) and its derivative f (cid:48) ( x ) (cid:54) = 0 for all x ∈ U . The key fact weuse about conformal maps is that they preserve harmonic measure. Below we present this theoremfrom M¨orters and Peres [43] specialized to our setting. In what follows, D will be the unit disc in C centered at 0. Theorem 5. [43, p. 204, Theorem 7.23]. Suppose F θ is a conformal map from the unit disk D to S . Let µ ∂ D and µ ∂ S be the harmonic measures with respect to . Then µ ∂ D ◦ F − θ = µ ∂ S . S of the boundary of D is the same as the probability of thestandard Brownian motion first hitting its image under F θ , i.e. F θ ( S ) in ∂ S .To complete the picture, the Schwarz-Christoffel formula gives a conformal mapping from theunit disc D to S that we utilize. Theorem 6. [2, Theorem 5, Section 2.2.2] Define the function F θ ( ω ) by F θ ( ω ) = (cid:90) ωs =0 f θ ( s ) ds = (cid:90) ωs =0 (1 − s ) − (1 − θ/π ) (1 + s ) − (1 − θ/π ) ( s − i ) − θ/π ( s + i ) − θ/π ds. 
Then, for some real number c > , cF θ ( ω ) is a conformal map from the unit-disk D to the rhombus S . The conformal map has some important properties which will aid us in calculating the prob-abilities. We collect them in the following lemma, which follows from standard properties of theSchwarz-Christoffel integral [2], and is easily verified.
Lemma 3.
The conformal map cF θ ( ω ) has the following properties:1. The four points located at { , i, − , − i } map to the four vertices { b , . . . , b } of the rhombus S , respectively.2. The origin maps to the origin.3. The boundary of the unit-disk D maps to the boundary of S . Furthermore, the points in thearc from to i map to the segment [ b , b ] . Define the function r : [0 , π/ → R as r ( φ ) := | F θ ( e iφ ) − F θ (1) | . Lemma 4.
The probability that vertices { i, j } are separated, given that the angle between w i and w j is θ , is π (cid:90) π/ − r ( φ ) r ( π/ dφ Proof.
Rewriting the expression in Lemma 2 in complex number notation, we havePr[ { i, j } separated] = 4 · (cid:90) b b − | z − b || b − b | dµ ∂ S ( z ) = 4 · (cid:90) b b − | z − cF θ (1) | c | F θ ( i ) − F θ (1) | dµ ∂ S ( z ) . Since the conformal map F θ preserves the harmonic measure between the rhombus S and theunit-disk D (see Theorem 5) and by Lemma 3, the segment from b to b is the image of the arcfrom 1 to i under cF θ , we can rewrite the above as= 4 · (cid:90) π/ − | cF θ ( e iφ ) − cF θ (1) | c | F θ ( i ) − F θ (1) | dµ ∂ D ( e iφ ) . The harmonic measure µ ∂ D on the unit-disk is uniform due to the rotational symmetry ofBrownian motion. = 4 · (cid:90) π/ − | cF θ ( e iφ ) − cF θ (1) | c | F θ ( i ) − F θ (1) | dφ π . π · (cid:90) π/ − | F θ ( e iφ ) − F (1) || F θ ( i ) − F (1) | dφ = 2 π · (cid:90) π/ − r ( φ ) r ( π/ dφ. This completes the proof.To calculate the approximation ratio exactly, we will make use of the theory of special functions.While these calculations are technical, they are not trivial. To aid the reader, we give a brief primerin Appendix B.1 and refer them to the work of Andrews et al. [4], Beals and Wong [18] for a morethorough introduction.The proof of Theorem 4, will follow from the following key claims whose proofs appear in theappendix. Letting a = θ/π and b = 1 − a , we have Claim 1. r ( φ ) = 14 β sin φ ( a/ , b/ when φ ∈ [0 , π/ . Claim 2. · (cid:90) π/ r ( φ ) dφ = β ( a/ / , / a · F (cid:20) a , a , a a , a + 1 ; 1 (cid:21) θ close to π . We consider the case when the angle θ = (1 − (cid:15) ) · π as (cid:15) →
0. The hyperplane-rounding algorithmseparates such an edge by θ/π , and hence has a separation probability of 1 − (cid:15) . We show a similarasymptotic behaviour for the Brownian rounding algorithm, albeit with slightly worse constants.We defer the proof to the appendix. Theorem 7.
Given an edge { i, j } with cos − ( w Ti w j ) = θ = (1 − (cid:15) ) π , the Sticky Brownian Motionrounding will cut the edge with probability at least − (cid:0) π (cid:15) + O ( (cid:15) ) (cid:1) . In this section we use
Max-2SAT as a case study for extending the Sticky Brownian Motionrounding method to other constraint satisfaction problems besides
Max-Cut . In the
Max-2SAT problem we are given n variables z , . . . , z n and m clauses C , . . . , C m , where the j th clause is ofthe form y j ∨ y j ( y j is a literal of z j , i.e. , z j or z j ). The goal is to assign to each variable z i avalue of true or false so as to maximize the number of satisfied clauses.16 .1 Semi-definite Relaxation and Brownian Rounding Algorithm The standard SDP relaxation used for
Max-2SAT is the following:max m (cid:88) j =1 (1 − v j · v j ) s.t. v · v = 1 (1) v · v i = v i · v i ∀ i = − n, . . . , n (2) v i · v − i = 0 ∀ i = 1 , . . . , n (3) v · ( v i + v − i ) = 1 ∀ i = 1 , . . . , n (4)1 ≥ v · v i + v j · v − v i · v j ∀ i, j = − n, . . . , n (5) v i · v ≥ v i · v j ∀ i, j = − n, . . . , n (6) v i · v j ≥ ∀ i, j = − n. . . . , n (7)In the above v is a unit vector that denotes the false assignment (constraint 1), whereas a zerovector denotes the true assignment. We use the standard notation that v i denotes the literal z i and v − i denotes the literal z i . Therefore, v i · v − i = 0 for every i = 1 , . . . .n (constraints 3 and 4)since z i needs to be either true or false. The remainder of the constraints (constraints 5, 6 and 7)are equivalent to the (cid:96) triangle inequalities over all triples of vectors that include v .When trying to generalize the Brownian rounding algorithm for Max-Cut presented in Section2 to
Max-2SAT , there is a problem: unlike
Max-Cut the
Max-2SAT problem is not symmetric.Specifically, for
Max-Cut both S and S are equivalent solutions having the same objective value.However, for Max-2SAT an assignment to the variables z = α , . . . , z n = α n is not equivalent tothe assignment z = α , . . . , z n = α n (here α i ∈ { , } and α i = 1 ⊕ α i ). For example, if v i · v = 1then we would like the Brownian rounding algorithm to always assign z i to false. The Brownianrounding for Max-Cut cannot handle such a requirement. In order to tackle the above problemwe incorporate v into both the starting point of the Brownian motion and the covariance matrix.Let us now formally define the Brownian rounding algorithm for Max-2SAT . For simplicity ofpresentation denote for every i = 1 , . . . , n by x i the marginal value of z i , formally: x i := v i · v .Additionally, let w i be the (unique) unit vector in the direction of the projection of v i to thesubspace orthogonal to v , i.e. , w i satisfies v i = x i v + (cid:113) x i − x i w i . Similarly to
Max-Cut ,our Sticky Brownian Motion rounding algorithm performs a random walk in R n , where the i th coordinate corresponds to the variable z i . For simplicity of presentation, the random walk is definedin [0 , n as opposed to [ ± n , where 1 denotes false and 0 denotes true. Unlike
Max-Cut , thestarting point X is not the center of the cube. Instead, we use the marginals, and set ( X ) i := x i .The covariance matrix W is defined by W i,j := w i · w j for every i, j = 1 , . . . , n , and similarly to Max-Cut , let W / be the principle square root of W . Letting { B t } t ≥ denote standard Brownianmotion in R n , we define τ = inf { t : W / B t + X (cid:54)∈ [0 , n } to be the first time the process hitsthe boundary of [0 , n . Then, for all times 0 ≤ t ≤ τ , the process X t is defined as X t = W / B t + X . It is easy to see that x − i = 1 − x i and w − i = − w i for every i = 1 , . . . , n . We note that the Brownian rounding algorithm for
Max-2SAT can be equivalently defined in [ − , n , however,this will incur some overhead in the notations which we would like to avoid. τ , we force X t to stick to the face F hit at time τ : i.e. if ( X τ ) i ∈ { , } , then we fix itforever, by zeroing out the i -th row and column of the covariance matrix of W for all future timesteps. The rest of the process is defined analogously to the one for Max-Cut : whenever X t hits alower dimensional face of [0 , n , it is forced to stick to it until finally a vertex is reached, at whichpoint X t stops changing. We use τ i for the first time that X t hits a face of dimension n − i ; then, X τ n ∈ { , } n .The output of the algorithm corresponds to the collection of the variables assigned a value oftrue T ⊆ { , . . . , n } : T = { i : ( X τ n ) i = 0 } , whereas implicitly the collection of variables assigned a value of false are { i : ( X τ n ) i = 1 } . Our goal is to analyze the expected value of the assignment produced by the Sticky BrownianMotion rounding algorithm. Similarly to previous work, we aim to give a lower bound on theprobability that a fixed clause C is satisfied. Unfortunately, the conformal mapping approachdescribed in Section 2 does not seem to be easily applicable to the extended Sticky BrownianMotion rounding described above for Max-2SAT , because our calculations for
Max-Cut reliedheavily on the symmetry of the starting point of the random walk. We propose a different methodfor analyzing the Brownian rounding algorithm that is based on partial differential equations andthe maximum principle. We prove analytically the following theorem which gives a guarantee onthe performance of the algorithm. We also note that numerical calculations show that the algorithmin fact achieves the better approximation ratio of 0 .
921 (see Section 5.1 for details).
Theorem 8.
The Sticky Brownian Motion rounding algorithm for
Max-2SAT achieves an ap-proximation of at least . . As mentioned above, our analysis focuses on the probability that a single clause C with variables { z i , z j } is satisfied. We assume the variables are not negated. This is without loss of generality asthe algorithm and analysis are invariant to the sign of the variable in the clause.For simplicity of notation we denote by x the marginal value of z i and by y the marginal valueof z j . Thus, v i = x v + √ x − x w i and v j = y v + (cid:112) y − y w j . Projecting the random process { X } t ≥ on the i and j coordinates of the random process, we obtain a new process { ˜ X t } t ≥ where˜ X = ( x, y ). Let ˜ W = (cid:18) θ )cos( θ ) 1 (cid:19) , where θ is the angle between w i and w j . Then ˜ X t = ˜ X + ˜ W / B t for all 0 ≤ t ≤ τ , where τ = inf { t : ˜ X + ˜ W / B t (cid:54)∈ [0 , } is the first time the process hits the boundary of the square.After time τ , the process ˜ X t performs a one-dimensional standard Brownian motion on the firstside of the square it has hit, until it hits a vertex at some time σ . After time σ the process staysfixed. Almost surely σ < ∞ , and, moreover, it is easy to show that E σ < ∞ . We say that { ˜ X t } t ≥ is absorbed at ˜ X σ ∈ { , } . 18 bservation 2. The probability that the algorithm satisfies the clause { z i , z j } equals Pr (cid:104) ˜ X σ is absorbed in { (0 , , (0 , , (1 , } (cid:105) . We abuse notation slightly and denote ˜ X t by X t and ˜ W by W for the rest of the section whichis aimed at analyzing the above probability. We also denote ρ = cos( θ ).Our next step is fixing θ and analyzing the probability of satisfying the clause for all possiblevalues of marginals x and y . Indeed, for different x and y but the same θ , the analysis only needs toconsider the same random process with a different starting point. Observe that not all such x, y arenecessarily feasible for the SDP: we characterize which ones are feasible for a given θ in Lemma 7.But considering all x, y allows us to handle the probability in Observation 2 analytically.For any 0 ≤ x ≤ , ≤ y ≤
1, let u ( x, y ) denote the probability of starting the random walk atthe point ( x, y ) and ending at one of the corners (0 , ,
1) or (1 , x, y ) (and angle θ ). We can easilycalculate this probability exactly when either x or y are in the set { , } . We obtain the followingeasy lemma whose proof appears in the appendix. Lemma 5.
For φ ( x, y ) = 1 − xy , we have u ( x ) = φ ( x ) for all x ∈ ∂ [0 , (8) Moreover, for all x in the interior of the square [0 , , u ( x ) = E x [ φ ( X τ )] , where E x denotesexpectation with respect to starting the process at X = x . Next we use the fact that Brownian motion gives a solution to the Dirichlet boundary problem.While Brownian motion gives a solution to Laplace’s equation ([43] chapter 3), since our randomprocess is a diffusion process, we need a slightly more general result . We state the following resultfrom [47], specialized to our setting, that basically states that given a diffusion process in [0 , anda function φ on the boundary, the extension of the function defined on the interior by the expectedvalue of the function at the first hitting point on the boundary is characterized by an elliptic partialdifferential equation. Theorem 9 ([47] Theorem 9.2.14) . Let D = (0 , ⊆ R , Σ ∈ R × and let a , a , a , a bedefined as follows (cid:18) a a a a (cid:19) = 12 ΣΣ (cid:62) . For any x ∈ D , consider the process X t = X + ΣB t where B t is standard Brownian motion in R .Let τ = inf { t : X t (cid:54)∈ D } . Given a bounded continuous function φ : ∂D → R , define the function u : D → R such that u ( x ) = E x [ φ ( X τ )] , where E x denotes the expected value when X = x ∈ R . I.e., u ( x ) is the expected value of φ when first hitting ∂D conditioned on starting at point x . Consider the uniformly elliptic partialdifferential operator L in D defined by: L = (cid:88) i,j =1 a ij ∂ ∂x i ∂x j . This result can also be derived from Theorem 3.12 in [43] after applying a linear transformation to the variables. hen u ∈ C ( D ) is the unique solution to the partial differential equation : L u = 0 in D lim x → yx ∈ D u ( x ) = φ ( y ) for all y ∈ ∂D We instantiate our differential equation by choosing Σ = W / and thus a ij are the entries of W . It is important to note that all a ij s are independent of the starting point x ∈ [0 , . Thus, weobtain that u is the unique function satisfying the following partial differential equation: ∂ u∂x + ∂ u∂y + 2 ρ ∂ u∂x∂y = 0 ∀ ( x, y ) ∈ Int[0 , u ( x, y ) = (1 − xy ) ∀ ( x, y ) ∈ ∂ [0 , Above, and in the rest of the paper, we use Int D to denote the interior of a set D , and ∂D todenote its boundary.It remains to solve the above partial differential equation (PDE) that will allow us to calculate u ( x, y ) and give the probability of satisfying the clause. Finding closed form solutions general PDE’s is challenging and, there is no guarantee any solutionwould be expressible in terms of simple functions. However, to find a good approximation ratio, itsuffices for us to find good lower-bounds on the probability of satisfying the clause. I.e. we needto give a lower bound on the function u ( x, y ) from the previous section over those ( x, y ) that arefeasible. Since the PDE’s generated by our algorithm are elliptic (a particular kind of PDE), we willuse a property of elliptic PDE’s which will allow us to produce good lower-bounds on the solutionat any given point. More precisely, we use the following theorem from Gilbarg and Trudinger [28].Let L denote the operator L := (cid:88) ij a ij ∂ ∂ i ∂ j and we say that L is an elliptic operator if the coefficient matrix A = [ a ij ] i,j is positive semi-definite.We restate a version of Theorem 3.1 in Gilbarg and Trudinger [28] that shows how the maximumprinciple can be used to obtain lower bounds on u ( x, y ). Here ¯ D denotes the closure of D . Theorem 10 (Maximum Principle) . 
Let L be elliptic on a bounded domain D and suppose L [ g ]( x ) ≥ ∀ x ∈ D for some g ∈ C ( D ) ∩ C ( ¯ D ) . Then the maximum of g on D is achieved on ∂D , that is, sup x ∈ D g ( x ) = sup x ∈ ∂D g ( x )Theorem 10 has the following corollary that allows us to obtain lower bounds on u ( x, y ). Corollary 2.
Let L be elliptic on a bounded domain D and for some u, g ∈ C ( D ) ∩ C ( ¯ D ) . u ∈ C k ( D ) means that u has a continuous k th derivative over D , and u ∈ C ( D ) means that u is continuous. . L [ g ]( x ) ≥ L [ u ]( x ) ∀ x ∈ D g ( x ) ≤ u ( x ) ∀ x ∈ ∂D then g ( x ) ≤ u ( x ) ∀ x ∈ D . We refer the reader to [28] for a formal proof. Thus, it is enough to construct candidate functions g : [0 , → R such that ∂ g∂x + ∂ g∂y + 2 ρ ∂ g∂x∂y ≥ ∀ ( x, y ) ∈ Int[0 , (9) g ( x, y ) ≤ (1 − xy ) ∀ ( x, y ) ∈ ∂ [0 , (10)Then we obtain that g ( x, y ) ≤ u ( x, y ) for all ( x, y ) ∈ [0 , . In what follows we construct manydifferent such function each of which works for a different range of the parameter θ (equivalently, ρ ). We now construct feasible candidates to the maximum principle as described in Corollary 2. Wedefine the following functions:1. g ( x, y ) = 1 − xy − cos( θ ) √ x − x (cid:112) y − y .2. g ( x, y ) = 1 − xy − θ )( x − x )( y − y ).3. g ( x, y ) = 1 − xy − (1 + 5 cos( θ ))( x − x )( y − y )( x + y )(2 − x − y ).The following lemma shows that the above functions satisfy the conditions required for theapplication of the maximum principle (its proof appears in the appendix). Lemma 6.
Each of g , g , g satisfies the boundary conditions, i.e. g i ( x, y ) = u ( x, y ) for all x, y ∈ ∂ [0 , and for all values θ . Moreover, we have the following for each ( x, y ) ∈ [0 , :1. If ≥ cos( θ ) ≥ , then L g ≥ .2. If ≥ cos( θ ) ≥ − , then L g ≥ .3. If − ≥ cos( θ ) ≥ − , then L g ≥ . While some of these proofs are based on simple inequalities, proving others requires us to usesum of squares expressions. For example, to show L g ≥
0, we consider L g = p ( x, y, cos( θ )) as apolynomial in x, y and cos( θ ). Replacing z = cos( θ ), our aim is to show p ( x, y, z ) ≥ ≤ x, y ≤ − ≤ z ≤ − . Equivalently, we need to show p ( x, y, z ) ≥ r ( x, y, z ) := x − x ≥ r ( x, y, z ) := y − y ≥ r ( x, y, z ) := − ( z + ) ≥ r ( x, y, z ) := ( z + 1) ≥
0. We show21his by obtaining polynomials q i ( x, y, z ) for i = 0 , , , , q i is a sum of squarespolynomial of fixed degree and we have p ( x, y, z ) = q ( x, y, z ) + (cid:88) i =1 q i ( x, y, z ) r i ( x, y, z ) . Observe that the above polynomial equality proves the desired result by evaluating the RHS forevery 0 ≤ x, y ≤ − / ≥ z ≥ −
1. Clearly, the RHS is non-negative: each q i is non-negativesince it is a sum of squares and each r i is non-negative in the region we care about, by construction.We mention that we obtain these proofs via solving a semi-definite program of fixed degree (at most6) for each of the q i polynomials (missing details appear in the appendix).Let us now focus on the approximation guarantee that can be proved using the above func-tions g , g , and g . The following lemma compares the lower bounds on the probability ofsatisfying a clause, as given by g , g , and g , to the SDP objective. Recall that the contri-bution of any clause with marginals x and y and angle θ to the SDP’s objective is given by:1 − xy − cos( θ ) √ x − x (cid:112) y − y . We denote this contribution by SDP ( x, y, θ ). It is important tonote that not all triples ( x, y, θ ) are feasible (recall that θ is the angle between w i and w j ), due tothe triangle inequalities in the SDP. This is summarized in the following lemma. Lemma 7.
Let x, y, θ be as defined by a feasible pair of vectors v i and v j . Then they must satisfythe following constraints:1. ≤ x ≤ , ≤ y ≤ , ≤ θ ≤ π .2. cos( θ ) ≥ − (cid:113) xy (1 − x )(1 − y ) .3. cos( θ ) ≥ − (cid:113) (1 − x )(1 − y ) xy . Finally, we prove the following lemma which proves an approximation guarantee of 0.8749 for
Max-2SAT via the PDE and the maximum principle approach. As before, these proofs relyon explicitly obtaining sum of squares proofs as discussed above. We remark that these proofsessentially aim to obtain = 0 . − allow us to obtaina slightly worse bound using this methods. The details appear in the appendix. Lemma 8.
Consider any feasible triple ( x, y, θ ) satisfying the condition in Lemma 7. We have thefollowing.1. If ≥ cos( θ ) ≥ , then g ( x, y ) ≥ · SDP ( x, y, θ ) .2. If ≥ cos( θ ) ≥ − , then g ( x, y ) ≥ · SDP ( x, y, θ ) .3. If − ≥ cos( θ ) ≥ − , then g ( x, y ) ≥ · SDP ( x, y, θ ) . In this section we describe how to apply the Sticky Brownian Motion rounding and the framework ofRaghavendra and Tan [50] to the
Max-Cut-SC problem in order to give a bi-criteria approximationalgorithm whose running time is non-trivial even when the the number of constraints is large.22 .1 Problem Definition and Basics
Let us recall the relevant notation and definitions. An instance of the
Max-Cut-SC problem isgiven by an n -vertex graph G = ( V, E ) with edge weights a : E → R + , as well as a collection F = { F , . . . , F k } of subsets of V , and cardinality bounds b , . . . , b k ∈ N . For ease of notation, wewill assume that V = { , . . . , n } . Moreover, we denote the total edge weight by a ( E ) = (cid:80) e ∈ E a ( e ).The goal in the Max-Cut-SC problem is to find a subset S ⊂ V that maximizes the weight a ( δ ( S ))of edges crossing the cut ( S, V \ S ), subject to having | S ∩ F i | = b i for all i ∈ [ k ]. These cardinalityconstraints may not be simultaneously satisfiable, and moreover, when k grows with n , checkingsatisfiability is NP -hard [23]. For these reasons, we allow for approximately feasible solutions. Wewill say that a set of vertices S ⊆ V is an ( α, ε )-approximation to the Max-Cut-SC problem if (cid:12)(cid:12) | S ∩ F i | − b i (cid:12)(cid:12) ≤ εn for all i ∈ [ k ], and a ( δ ( S )) ≥ α · a ( δ ( T )) for all T ⊂ V such that | T ∩ F i | = b i for all i ∈ [ k ]. In the remainder of this section we assume that the instance given by G , F , and b is satisfiable, i.e. that there exists a set of vertices T such that | T ∩ F i | = b i for all i ∈ [ k ].Our algorithm may fail if this assumption is not satisfied. If this happens, then the algorithm willcertify that the instance was not satisfiable.We start with a simple baseline approximation algorithm, based on independent rounding. Thealgorithm outputs an approximately feasible solution which cuts a constant fraction of the totaledge weight. For this reason, it achieves a good bi-criteria approximation when the value of theoptimal solution OPT is much smaller than εa ( E ). This allows us to focus on the case in which OPT is bigger than εa ( E ) for our main rounding algorithm. The proof of the lemma, which followsfrom standard arguments, appears in the appendix. Lemma 9.
Suppose that n ≥ k/ε ) ε and ε ≤ . There exists a polynomial time algorithm thaton input a satisfiable instance G = ( V, E ) , F , and b , . . . , b k , as defined above, outputs a set S ⊆ V such that, with high probability, a ( δ ( S )) ≥ ε a ( E ) , and (cid:12)(cid:12) | S ∩ F i | − b i (cid:12)(cid:12) ≤ εn for all i ∈ [ k ] . Our main approximation algorithm is based on a semidefinite relaxation, and the sticky Brownianmotion. Let us suppose that we are given the optimal objective value
OPT of a feasible solution:this assumption can be removed by doing binary search for
OPT . We can then model the problemof finding an optimal feasible solution by the quadratic program (cid:88) e =( i,j ) ∈ E a ( e )( x i − x j ) ≥ OPT s.t. (cid:88) j ∈ F i x j = b i ∀ i = 1 , . . . , kx j (1 − x j ) = 0 ∀ j = 1 , . . . , k Let us denote this quadratic feasibility problem by Q . The Sum of Squares (Lasserre) hierarchygives a semidefinite program that relaxes Q . We denote by SoS (cid:96) ( Q ) the solutions to the level- (cid:96) Sumof Squares relaxations of Q . Any solution in SoS (cid:96) ( Q ) can be represented as a collection of vectors V = { v S : S ⊆ [ n ] , ≤ | S | ≤ (cid:96) } . To avoid overly cluttered notation, we write v i for v { i } ; we alsowrite v for v ∅ . We need the following properties of V , valid as long as (cid:96) ≥ v · v = 1.2. v S · v T = v S (cid:48) · v T (cid:48) for any S, S (cid:48) , T, T (cid:48) such that S ∪ T = S (cid:48) ∪ T (cid:48) and | S ∪ T | ≤ k . In particular, v i · v i = v i · v for any i .3. For any i and j the following inequalities hold:1 ≥ v · v i + v j · v − v i · v j (11) v i · v ≥ v i · v j (12) v i · v j ≥ (cid:80) e =( i,j ) ∈ E a ( e ) (cid:107) v i − v j (cid:107) ≥ OPT
5. For any i , there exist two solutions V i → and V i → in SoS (cid:96) − ( Q ) such that, if we denote thevectors in V i → by v S , and the vectors in V i → by v S , we have v S · v = (1 − v i · v ) v S · v + ( v i · v ) v S · v . Moreover, a solution
V ∈
SoS (cid:96) can be computed in time polynomial in n (cid:96) .Intuitively, we think of V as describing a pseudo-distribution over solutions to Q , and weinterpret v S · v T as the pseudo-probability that all variables in S ∪ T are set to one, or, equivalently, asthe pseudo-expectation of (cid:81) i ∈ S ∪ T x i . Usually we cannot expect that there is any true distributiongiving these probabilities. Nevertheless, the pseudo-probabilities and pseudo-expectations satisfysome of the properties of actual probabilities. For example, the transformation from V to V i → b corresponds to conditioning x i to b .We will denote by x S = v S · v the marginal value of set S . In particular, we will work withthe single-variable marginals x i = x { i } = v i · v , and will denote x = ( x , . . . , x n ). As before,it will be convenient to work with the component of v i which is orthogonal to v . We define (cid:101) w i = v i − x i v , and w i = (cid:107) (cid:101) w i (cid:107) (cid:101) w i . Note that, by the Pythagorean theorem, (cid:107) (cid:101) w i (cid:107) = x i − x i , and v i = x i v + (cid:113) x i − x i w i . We define the matrices (cid:102) W and W by (cid:102) W i,j := (cid:101) w i · (cid:101) w j and W i,j := w i · w j .We can think of (cid:102) W as the covariance matrix of the pseudodistribution corresponding to the SDPsolution. The following lemma, due to Barak, Raghavendra, and Steurer [17], and, independently,to Guruswami and Sinop [32], shows that any pseudodistribution can be conditioned so that thecovariances are small on average. Lemma 10.
For any ε ≤ , and any V ∈
SoS (cid:96) ( Q ) , where (cid:96) ≥ ε + 2 , there exists an efficientlycomputable V (cid:48) ∈ SoS (cid:96) − /ε ( Q ) , such that n (cid:88) i =1 n (cid:88) j =1 (cid:102) W i,j ≤ ε n . (14) In particular, V (cid:48) can be computed by conditioning V on ε variables. .3 Rounding Algorithm For our algorithm, we first solve a semidefinite program to compute a solution in SoS (cid:96) ( Q ), to whichwe apply Lemma 10 with parameter ε , which we will choose later. In order to be able to applythe lemma, we choose (cid:96) = (cid:108) ε (cid:109) + 2. The rounding algorithm itself is similar to the one we usedfor Max-2SAT . We perform a Sticky Brownian Motion with initial covariance W , starting at theinitial point X = x , i.e. at the marginals given by the SDP solution. As variables hit 0 or 1, wefreeze them, and delete the corresponding row and column of the covariance matrix. The maindifference from the Max-2SAT rounding is that we stop the process at time τ , where τ is anotherparameter that we will choose later. Then, independently for each i = 1 , . . . , n , we include vertex i in the final solution S with probability ( X τ ) i , and output S .The key property of this rounding that allows us to handle a large number of global constraintsis that, for any F i ∈ F , the value (cid:80) j ∈ F i ( X τ ) j that the fractional solution assigns to the set F i satisfies a sub-Gaussian concentration bound around b i . Note that (cid:80) j ∈ F i ( X t ) j is a martingalewith expectation equal to b i . Moreover, by Lemma 10, the entries of the covariance matrix (cid:102) W aresmall on average, which allows us to also bound the entries of the covariance matrix W , and, as aconsequence, bound how fast the variance of the martingale increases with time. The reason we stopthe walk at time τ is to make sure the variance doesn’t grow too large: this freedom, allowed by theSticky Brownian Motion rounding, is important for our analysis. The variance bound then impliesthe sub-Gaussian concentration of (cid:80) j ∈ F i ( X τ ) j around its mean b i , and using this concentrationwe can show that no constraint is violated by too much. This argument crucially uses the factthat our rounding is a random walk with small increments, and we do not expect similarly strongconcentration results for the random hyperplane rounding or its variants.The analysis of the objective function, as usual, reduces to analyzing the probability that wecut an edge. However, because we start the Sticky Brownian Motion at x , which may not be equalto , our analysis from Section 2 is not sufficient. Instead, we use the PDE based analysis fromSection 3, which easily extends to the Max-Cut objective. One detail to take care of is that,because we stop the walk early, edges incident on vertices that have not reached 0 or 1 by time τ may be cut with much smaller probability than their contribution to the SDP objective. Todeal with this, we choose the time τ when we stop the walk large enough, so that any vertex hasprobability at least 1 − poly( ε ) to have reached { , } by time τ . We show that this happens for τ = Θ(log(1 /ε )). This value of τ is small enough so that we can usefully bound the variance of (cid:80) j ∈ F i ( X τ ) i and prove the sub-Gaussian concentration we mentioned above.Let us recall some notation that will be useful in our analysis. We will use τ i for the firsttime t that X t hits a face of [0 , n of dimension n − i ; then, X τ n ∈ { , } n . 
We also use W t for the covariance used at time step t , which is equal to W with rows and columns indexed by { i : ( X t ) i ∈ { , }} zeroed out.As discussed, our analysis relies on a martingale concentration inequality, and the followinglemma, which is proved with the methods we used above for the Max-2SAT problem. A proofsketch can be found in the appendix.
Lemma 11.
For the SDP solution V and the Sticky Brownian Motion X t described above, and forany pair { i, j } of vertices Pr[( X τ n ) i (cid:54) = ( X τ n ) j ] ≥ . · (cid:107) v i − v j (cid:107) . t drops expo-nentially with t . We use this fact to argue that by time τ = Θ(log(1 /ε )) the endpoints of any edgehave probability at least 1 − poly( ε ) to be fixed, and, therefore, edges are cut with approximatelythe same probability as if we didn’t stop the random walk early, which allows us to use Lemma 11.The proof of this lemma, which is likely well-known, appears in the appendix. Lemma 12.
For any i , and any integer t ≥ , Pr[ ∀ s ≤ t : 0 < ( X s ) i < < − t . The following concentration inequality is our other key lemma. The statement is complicated bythe technical issue that the concentration properties of the random walk depend on the covariancematrix W , while Lemma 10 bounds the entries of (cid:102) W . When x i (1 − x i ) or x j (1 − x j ) is small, (cid:102) W i,j can be much smaller than W i,j . Because of this, we only prove our concentration bound for setsof vertices i for which x i (1 − x i ) is sufficiently large. For those i for which x i (1 − x i ) is small, wewill instead use the fact that such x i are already nearly integral to prove a simpler concentrationbound. Lemma 13.
Let ε , ε ∈ [0 , , and n ≥ ε τε . Define V >ε = { i : 2 x i (1 − x i ) > ε } . For any set F ⊆ V >ε , and any t ≥ , the random set S output by the rounding algorithm satisfies Pr (cid:34)(cid:12)(cid:12)(cid:12) | F ∩ S | − (cid:88) i ∈ F x i (cid:12)(cid:12)(cid:12) ≥ tε n (cid:35) ≤ (cid:18) − ε t τ (cid:19) . We give the proof of Lemma 13 after we finish the proof of Theorem 3, restated below forconvenience.
Theorem 3.
There exists a O ( n poly(log( k ) /ε ) ) -time algorithm that on input a satisfiable instance G = ( V, E ) , F , and b , . . . , b k , as defined above, outputs a (0 . − ε, ε ) -approximation with highprobability.Proof. The algorithm outputs either the set S output by the Sticky Brownian Rounding describedabove, or the one guaranteed by Lemma 9, depending on which one achieves a cut of larger totalweight. If OPT ≤ ε a ( E ), then Lemma 9 achieves the approximation we are aiming for. Therefore,for the rest of the proof, we may assume that OPT ≥ ε a ( E ), and that the algorithm outputs theset S computed by the Sticky Brownian Rounding. Then, it is enough to guarantee that, with highprobability, a ( δ ( S )) ≥ . · OPT − ε a ( E ) . (15)Let us set ε = ε ε , and define, as above, V >ε = { i : 2 x i (1 − x i ) > ε } and let V ≤ ε = { i : 2 x i (1 − x i ) ≤ ε } . Let Y be the indicator vector of the set S output by the algorithm.Observe that, for each i , since Y i is a Bernoulli random variable with expectation x i , we have E (cid:104)(cid:80) i ∈ V ≤ ε | Y i − x i | (cid:105) ≤ ε n , and, therefore,Pr (cid:88) i ∈ V ≤ ε | Y i − x i | ≥ ε n ≤ ε . F i ∈ F , by Lemma 13 applied to F i ∩ V >ε , we havePr (cid:12)(cid:12) | F i ∩ V >ε ∩ S | − (cid:88) i ∈ F ∩ V >ε x i (cid:12)(cid:12) ≥ (cid:114) τε ln 32 kε ε n ≤ ε k . Therefore, with probability at least 1 − ε , for all i ∈ [ k ] we have (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) | F i ∩ S | − (cid:88) i ∈ F x i (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) | F i ∩ V ≤ ε ∩ S | − (cid:88) i ∈ F ∩ V ≤ ε x i (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) | F i ∩ V >ε ∩ S | − (cid:88) i ∈ F ∩ V >ε x i (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ (cid:88) i ∈ F ∩ V ≤ ε | Y i − x i | + (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) | F i ∩ V >ε ∩ S | − (cid:88) i ∈ F ∩ V >ε x i (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ (cid:32)
16 + (cid:114) τε ε ln 32 kε (cid:33) ε n. This means that, with probability at least 1 − ε , S satisfies all the constraints up to additive error εn , as long as ε ≤ min (cid:40) ε , ε (cid:113) τ ln kε (cid:41) . It remains to argue about the objective function. For τ ≥ log √ ε , Lemma 12 implies that, forany vertex i , Pr[( X τ ) i (cid:54)∈ { , } ] ≤ − τ ≤ ε . By Lemma 11, any pair of vertices { i, j } is separatedwith probability Pr[( X τ n ) i (cid:54) = ( X τ n ) j ] ≥ . · (cid:107) v i − v j (cid:107) , where we recall that, for edge e = ( i, j ), a ( e ) (cid:107) v i − v j (cid:107) is the contribution of e to the objectivevalue. Then,Pr[( X τ ) i (cid:54) = ( X τ ) j ] ≥ Pr[( X τ n ) i (cid:54) = ( X τ n ) j , ( X τ ) i = ( X τ n ) i , ( X τ ) j = ( X τ n ) j ]= Pr[( X τ n ) i (cid:54) = ( X τ n ) j , ( X τ ) i ∈ { , } , ( X τ ) j ∈ { , } ] ≥ Pr[( X τ n ) i (cid:54) = ( X τ n ) j ] − ε ≥ . · (cid:107) v i − v j (cid:107) − ε . Therefore, E [ a ( δ ( S ))] ≥ . · OPT − ε a ( E ). By Markov’s inequality applied to a ( E ) − a ( δ ( S )),Pr (cid:104) a ( δ ( S )] < . · OPT − ε a ( E ) (cid:105) < ε ε < − ε . In conclusion, we have that with probability at least ε , (15) is satisfied, and all global constraintsare satisfied up to an additive error of εn . The probability can be made arbitrarily close to 1 byrepeating the entire algorithm O ( ε − ) times. To complete the proof of the theorem, we can verifythat the running time is dominated by the time required to find a solution in SoS (cid:96) ( Q ), which ispolynomial in n (cid:96) , where (cid:96) = O ( ε − ) = poly(log( k ) /ε ).27e finish this section with the proof of Lemma 13 Proof of Lemma 13.
Since each i is included in S independently with probability ( X τ ) i , by Hoeffd-ing’s inequality we havePr (cid:34)(cid:12)(cid:12)(cid:12) | F ∩ S | − (cid:88) i ∈ F ( X τ ) i (cid:12)(cid:12)(cid:12) ≥ tε n (cid:35) ≤ e − ε t n ≤ (cid:18) − ε t τ (cid:19) , where the final inequality follows by our assumption on n . Therefore, it is enough to establishPr (cid:34)(cid:12)(cid:12)(cid:12)(cid:88) i ∈ F ( X τ ) i − x i (cid:12)(cid:12)(cid:12) ≥ tε n (cid:35) ≤ (cid:18) − ε t τ (cid:19) . (16)Suppose y ∈ { , } n is the indicator vector of F so that y (cid:62) ( X τ − X ) = y (cid:62) ( X τ − x ) = (cid:88) i ∈ F (( X τ ) i − x i ) . A standard calculation using Itˆo’s lemma (see Exercise 4.4. in [47]) shows that, for any λ ≥
0, therandom process Y t = exp (cid:18) λ y (cid:62) ( X t − x ) − λ (cid:90) t ( y (cid:62) W s y ) ds (cid:19) is a martingale with starting state Y = 1. Since, for any s , W s equals W with some rowsand columns zeroed out, we have that W − W s is positive semidefinite, and y (cid:62) W s y ≤ y (cid:62) Wy .Therefore, E (cid:20) exp (cid:16) λ y (cid:62) ( X τ − x ) − t λ y (cid:62) Wy (cid:17)(cid:21) ≤ E [ Y τ ] = 1 . Rearranging, this gives us that, for all λ ≥ E [ e λ y (cid:62) ( X τ − x ) ] ≤ E [ e τλ y (cid:62) Wy / ] ≤ e τλ y (cid:62) Wy / . (17)We can bound y (cid:62) Wy using Cauchy-Schwarz, the assumption that 2 x i (1 − x i ) > ε for all i ∈ F ,and (14): y (cid:62) Wy = (cid:88) i ∈ F (cid:88) j ∈ F W i,j ≤ | F | (cid:88) i ∈ F (cid:88) j ∈ F W i,j / = | F | (cid:88) i ∈ F (cid:88) j ∈ F (cid:102) W i,j x i x j (1 − x i )(1 − x j ) / < nε (cid:88) i ∈ F (cid:88) j ∈ F (cid:102) W i,j / ≤ ε n ε . Plugging back into (17), we get E [ e λ y (cid:62) ( X τ − x ) ] ≤ e τλ ε n /ε . The standard exponential momentargument then implies (16). 28 Extensions of the Brownian Rounding
In this section, we consider two extensions of the Brownian rounding algorithm. We also present nu-merical results for these variants showing improved performance over the sticky Brownian Roundinganalyzed in previous sections.
As noted in section 2, the Sticky Brownian rounding algorithm does not achieve the optimal valuefor the
Max-Cut problem. A natural question is to ask if we can modify the algorithm to achievethe optimal constant. In this section, we will show that a simple modification achieves this ratioup to at least three decimals. Our results are computer-assisted as we solve partial differentialequations using finite element methods. These improvement indicate that variants of the BrownianRounding approach offer a direction to obtain optimal SDP rounding algorithms for
Max-Cut problem as well as other CSP problems.In the sticky Brownian motion, the covariance matrix W t is a constant, until some vertex’smarginals ( X t ) i becomes ±
1. At that point, we abruptly zero the i th row and column. In thissection, we analyze the algorithm where we gradually dampen the step size of the Brownian motionas it approaches the boundary of the hypercube, until it becomes 0 at the boundary. We call thisprocess a “Sticky Brownian Motion with Slowdown.”Let ( X t ) i denote the marginal value of vertex i at time t . Initially ( X ) i = 0. First, we describethe discrete algorithm which will provide intuition but will also be useful to those uncomfortablewith Brownian motion and diffusion processes. At each time step, we will take a step whose lengthis scaled by a factor of (1 − ( X t ) i ) α for some constant α . In particular, the marginals will evolveaccording to the equation:( X t + dt ) i = ( X t ) i + (1 − ( X t ) i ) ) α/ · (cid:0) w i · G t (cid:1) · √ dt. (18)where G t is distributed according to an n -dimensional Gaussian and dt is a small discrete step bywhich we advance the time variable. When X t is sufficiently close to − X s ) i = ( X t ) i for all s > t .More formally, X t is defined as an Itˆo diffusion process which satisfies the stochastic differentialequation d X t = A ( X t ) · W / · d B t (19)where B t is the standard Brownian motion in R n and A ( X t ) is the diagonal matrix with entries[ A ( X t )] ii = (1 − ( X t ) i ) α/ . Since this process is continuous, it becomes naturally sticky when somecoordinate ( X t ) i reaches {− , } .Once again, it suffices to restrict our attention to the two dimensional case where we analyzethe probability of cutting an edge ( i, j ) and we will assume that˜ W = (cid:18) θ )cos( θ ) 1 (cid:19) , θ is the angle between w i and w j .Let τ be the first time when X t hits the boundary ∂ [ − , . Since the walk slows down as itapproaches the boundary, it is worth asking if E [ τ ] is finite. In Lemma 21, we show that E [ τ ] isfinite for constant α .Let u ( x, y ) denote the probability of the Sticky Brownian Walk algorithm starting at ( x, y )cutting an edge, i.e. the walk is absorbed in either (+1 , −
1) or ( − , +1). It is easy to give a preciseformula for u at the boundary as the algorithm simplifies to a one-dimensional walk. Thus, u ( x, y )satisfies the boundary condition φ ( x, y ) = (1 − xy ) / x, y ) ∈ bd [ − , . For a given( x, y ) ∈ Int[ − , , we can say u ( x, y ) = E ( x,y ) [ φ ( ˜ X τ ( i ) , ˜ X τ ( j ))] , where E ( x,y ) denotes the expectation of diffusion process that begins at ( x, y ). Informally, u ( x, y ) isthe expected value of φ when first hitting ∂ [ − , conditioned on starting at point ( x, y ). Observethat the probability that the algorithm will cut an edge is given by u (0 , u ( x, y ) that we use is that it is the unique solution to a Dirichlet Problem,formalized in Lemma 14 below. Lemma 14.
Let L α denote the operator L α = (1 − x ) α ∂ ∂x + 2 cos( θ )(1 − x ) α/ (1 − y ) α/ ∂ ∂x∂y + (1 − y ) α ∂ ∂y , then the function u ( x, y ) is the unique solution to the Dirichlet Problem: L α [ u ]( x, y ) = 0 ∀ ( x, y ) ∈ Int([ − , )lim ( x,y ) → (˜ x, ˜ y ) , ( x,y ) ∈ Int([ − , ) u ( x, y ) = φ (˜ x, ˜ y ) ∀ (˜ x, ˜ y ) ∈ ∂ [ − , . The proof again uses [47, Theorem 9.2.14], however, the exact application is a little subtle andwe defer the details to Appendix E.
Numerical Results
The Dirichlet problem is parameterized by two variables: the slowdownparameter α and the angle between the vectors θ . We can numerically solve the above equationusing existing solvers for any given fixed α and angle θ ∈ [0 , π ]. We solve these problems for avariety of α between 0 and 2 and all values of θ in [0 , π ] discretized to a granularity of 0 . We observe that as we increase α from 0 to 2, the approximation ratio peaks around α ≈ . θ . In particular, when α = 1 .
61, the approximation ratio is 0 .
878 which matchesthe integrality gap for this relaxation up to three decimal points.The Brownian rounding with slowdown is a well-defined algorithm for any 2-CSP. We investigate3 different values of slowdown parameter, i.e., α , and show their relative approximation ratios. Weshow that with a slowdown of 1 .
61 we achieve an approximation ratio of 0 .
929 for
Max-2SAT . Welist these values below in Table 2. Our code, containing the details of the implementation, is available at [1].
Max-Cut problem, since we start the walk at the point (0 , , θ ] (Figure 2). In particular, we are able to achieve values that are comparable to theGoemans-Williamson bound. α Max-Cut Max-2SAT .
874 0.9271 .
61 0 .
878 0.929Table 2: Approximation ratio of Sticky Brownian Motion rounding with Slowdown for
Max-Cut and
Max-2SAT . (a) α = 0. (b) α = 1. (c) α = 1 . Figure 2: Comparing the performance of three values of the slowdown parameter for the
Max-Cut problem.
Our motivating example for considering the higher-dimension Brownian rounding is the
Max-DiCut problem: given a directed graph G = ( V, E ) equipped with non-negative edge weights a : E → R + we wish to find a cut S ⊆ V that maximizes the total weight of edges going out of S .The standard semi-definite relaxation for Max-DiCut is the following:max (cid:88) e =( i → j ) ∈ E a e · ( w + w i ) · ( w − w j )4 s.t. w i · w i = 1 ∀ i = 0 , , ..., n (cid:107) w i − w j (cid:107) + (cid:107) w j − w k (cid:107) ≥ (cid:107) w i − w k (cid:107) ∀ i, j, k = 0 , , ..., n In the above, the unit vector w denotes the cut S , whereas − w denotes S . We also include thetriangle inequalities which are valid for any valid relaxation.The sticky Brownian rounding algorithm for Max-DiCut fails to give a good performanceguarantee. Thus we design a high-dimensional variant of the algorithm that incorporates theinherent asymmetry of the problem. Let us now describe the high-dimensional Brownian roundingalgorithm. It is similar to the original Brownian rounding algorithm given for
Max-Cut , exceptthat it evolves in R n +1 with one additional dimension for w . Let W ∈ R ( n +1) × ( n +1) denote31he positive semi-definite correlation matrix defined by the vectors w , w , . . . , w n , i.e. , for every0 ≤ i, j ≤ n we have that: W i,j = w i · w j . The algorithm starts at the origin and perform a stickyBrownian motion inside the [ ± n +1 hypercube whose correlations are governed by W .As before, we achieve this by defining a random process { X t } t ≥ as follows: X t = W / B t , where { B t } t ≥ is the standard Brownian motion in R n +1 starting at the origin and W / is thesquare root matrix of W . Additionally, we force { X t } t ≥ to stick to the boundary of the [ ± n +1 hypercube, i.e. , once a coordinate of X t equals either 1 or − Max-Cut problem.Below we use σ for the time at which X σ ∈ {− , } n +1 , which has finite expectation.Unlike the Brownian rounding algorithm for Max-Cut , we need to take into consideration thevalue w was rounded to, i.e. , ( X σ ) , since the zero coordinate indicates S . Formally, the output S ⊆ V is defined as follows: S = { i ∈ V : ( X σ ) i = ( X σ ) } . To simplify the rest of the presentation, let us denote Z i := ( X σ ) i for every i = 0 , , . . . , n .The event that an edge ( i → j ) ∈ E is an outgoing edge from S , i.e. , i ∈ S and j ∈ S ,involves three random variables: Z i , Z j , and Z . Formally, the above event happens if and onlyif Z i = Z and Z j (cid:54) = Z . We now show how events on any triplet of the random variables Z , Z , . . . , Z n can be precisely calculated. To simplify the presentation, denote the following forevery i, j, k = 0 , , , . . . , n and α, β, γ ∈ {± } : p i ( α ) (cid:44) Pr [ Z i = α ] p ij ( α, β ) (cid:44) Pr [ Z i = α, Z j = β ] p ijk ( α, β, γ ) (cid:44) Pr [ Z i = α, Z j = β, Z k = γ ] Observation 3.
The following two hold:1. p i ( α ) = p i ( − α ) , p ij ( α, β ) = p ij ( − α, − β ) , and p ijk ( α, β, γ ) = p ijk ( − α, − β, − γ ) for every i, j, k = 0 , , , . . . , n and α, β, γ ∈ {± } .2. p i ( α ) = / for every i = 0 , , , . . . , n and α ∈ {± } . The proof of Observation 3 follows immediately from symmetry.The following lemmas proves that every conjunction event that depends on three variables from Z , Z , Z , . . . , Z n can be precisely calculated. Lemma 15.
For every i, j, k = 0 , , , . . . , n and α, β, γ ∈ {± } : p ijk ( α,β,γ ) = 12 (cid:20) p ij ( α, β ) + p ik ( α, γ ) + p jk ( β, γ ) − (cid:21) . roof. − p ijk ( α, β, γ ) = 1 − p ijk ( − α, − β, − γ )= Pr [ Z i = α ∨ Z j = β ∨ Z k = γ ]= p i ( α ) + p j ( β ) + p k ( γ ) − p ij ( α, β ) − p ik ( α, γ ) − p jk ( β, γ ) + p ijk ( α, β, γ ) . The first equality follows from property (1) of Observation 3. The second equality follows fromDe-Morgan’s law. The third equality follows from the inclusion and exclusion principle. Isolating p ijk ( α, β, γ ) above and using property (2) of Observation 3 concludes the proof.Let us now consider the case study problem Max-DiCut . One can verify that an edge ( i → j ) ∈ E is a forward edge crossing the cut S if and only if the following event happens: { Z i = Z (cid:54) = Z j } (recall that Z indicates S ). Thus, the value of the Brownian rounding algorithm, when consideringonly the edge ( i → j ), equals p ij (1 , , −
1) + p ij ( − , − , p ij ( α, β ) for every i, j = 0 , . . . , n and α, β ∈ {± } , then p ij (1 , , −
1) and p ij ( − , − ,
1) can be calculated (thus deriving the exact probability that ( i → j ) is a forward edgecrossing S ).How can we calculate p ij ( α, β ) for every i, j = 0 , . . . , n and α, β ∈ {± } ? Fix some i , j , α and β . We note that Theorem 4 can be used to calculate p ij ( α, β ). The reason is that: (1) Theorem 4provides the value of p ij ( − ,
1) + p ij (1 , − p ij ( − , −
1) + p ij ( − ,
1) + p ij (1 , −
1) + p ij (1 ,
1) = 1;and (3) p ij ( − , −
1) = p ij (1 ,
1) and p ij ( − ,
1) = p ij (1 , −
1) from symmetry. We conclude that usingTheorem 4 we can exactly calculate the probability that ( i → j ) is a forward edge crossing S , andobtain that this probability equals: 12 ( p j + p ij − p i ) , where p ij is the probability that i and j are separated as given by Theorem 4.Similarly to Max-2SAT , not all triplets of angles { θ i , θ j , θ ij } are possible due to the triangleinequality constraints (here θ ij indicates the angle between w i and w j ). Let us denote by F thecollection of all possible triplet of angles for the Max-DiCut problem. Then, we can lower boundthe approximation guarantee of the Brownian rounding algorithm as follows:min ( θ i ,θ j ,θ ij ) ∈F (cid:40) ( p j + p ij − p i ) (1 − cos( θ j ) + cos( θ i ) − cos( θ ij )) (cid:41) . This results in the following theorem.
Theorem 11.
The high dimensional Brownian rounding algorithm achieves an approximation ratioof 0.79 for the
Max-DiCut problem.
We also remark that we can introduce slowdown (as discussed in Section 5.1 to the high dimen-sional Brownian rounding algorithm. Numerically, we show that this improves the performance to0 . Acknowledgements
The authors are grateful to Assaf Naor and Ronen Eldan for sharing their manuscript with us. SAwould like to thank Gantumur Tsogtgerel for bringing the Maximum Principle to our attention,and Christina C. Christara for helping us compare various numerical PDE solvers. SA and ANthank Allan Borodin for useful discussions during the initial stages of this research. GG would liketo thank Anupam Gupta and Ian Tice for useful discussion, and for also pointing out the MaximumPrinciple.SA and AN were supported by an NSERC Discovery Grant (RGPIN-2016-06333). MS wassupported by NSF grant CCF-BSF:AF1717947. Some of this work was carried out while AN and NBwere visitors at the Simons Institute program on Bridging Discrete and Continuous Optimization,partially supported by NSF grant
References [1] Sepehr Abbasi-Zadeh, Nikhil Bansal, Guru Guruganesh, Aleksandar Nikolov, Roy Schwartz,and Mohit Singh. Code for PDE Solvability and Sum of Square Proofs. https://github.com/sabbasizadeh/brownian-rounding , 2018.[2] Lars V. Ahlfors.
Complex Analysis: An Introduction to the Theory of Analytic Functions ofOne Complex Variable . McGraw-Hill Book Company, second edition, 1966.[3] Noga Alon and Assaf Naor. Approximating the cut-norm via grothendieck’s inequality.
SIAMJ. Comput. , 35(4):787–803, 2006.[4] George E Andrews, Richard Askey, and Ranjan Roy.
Special Functions, volume 71 of Ency-clopedia of Mathematics and its Applications . Cambridge University Press, Cambridge, 1999.[5] Sanjeev Arora, Eden Chlamtac, and Moses Charikar. New approximation guarantee for chro-matic number. In
Proceedings of the Thirty-eighth Annual ACM Symposium on Theory ofComputing , STOC ’06, pages 215–224, New York, NY, USA, 2006. ACM.[6] Sanjeev Arora, Satish Rao, and Umesh Vazirani. Expander flows, geometric embeddings andgraph partitioning.
J. ACM , 56(2):5:1–5:37, April 2009.[7] Takao Asano. An improved analysis of goemans and williamson’s lp-relaxation for MAX SAT.
Theor. Comput. Sci. , 354(3):339–353, 2006.[8] Takao Asano and David P Williamson. Improved approximation algorithms for MAX SAT.
Journal of Algorithms , 42(1):173–202, 2002.[9] Per Austrin. Balanced Max 2-SAT might not be the hardest. In
STOC , pages 189–197. ACM,2007.[10] Per Austrin. Towards sharp inapproximability for any 2-csp.
SIAM J. Comput. , 39(6):2430–2463, 2010. 3411] Per Austrin and Aleksa Stankovic. Global cardinality constraints make approximating someMax-2-CSPs harder. In
APPROX-RANDOM , volume 145 of
LIPIcs , pages 24:1–24:17. SchlossDagstuhl - Leibniz-Zentrum f¨ur Informatik, 2019.[12] Per Austrin, Siavosh Benabbas, and Konstantinos Georgiou. Better balance by being biased:A 0.8776-approximation for Max Bisection.
ACM Trans. Algorithms , 13(1):2:1–2:27, 2016.[13] Adi Avidor, Ido Berkovitch, and Uri Zwick. Improved approximation algorithms for Max NAE-SAT and Max SAT. In
International Workshop on Approximation and Online Algorithms ,pages 27–40. Springer, 2005.[14] Nikhil Bansal. Constructive algorithms for discrepancy minimization. In
Foundations of Com-puter Science (FOCS), 2010 51st Annual IEEE Symposium on , pages 3–10. IEEE, 2010.[15] Nikhil Bansal, Daniel Dadush, and Shashwat Garg. An algorithm for koml´os conjecture match-ing banaszczyk’s bound. In
FOCS , pages 788–799. IEEE Computer Society, 2016.[16] Nikhil Bansal, Daniel Dadush, Shashwat Garg, and Shachar Lovett. The Gram-Schmidt walk:a cure for the banaszczyk blues. In
STOC , pages 587–597. ACM, 2018.[17] Boaz Barak, Prasad Raghavendra, and David Steurer. Rounding semidefinite programminghierarchies via global correlation. In
FOCS , pages 472–481. IEEE Computer Society, 2011.[18] Richard Beals and Roderick Wong.
Special functions: a graduate text , volume 126. CambridgeUniversity Press, 2010.[19] Ilia Binder and Mark Braverman. The rate of convergence of the walk on spheres algorithm.
Geom. Funct. Anal. , 22(3):558–587, 2012. ISSN 1016-443X. doi: 10.1007/s00039-012-0161-z.URL https://doi.org/10.1007/s00039-012-0161-z .[20] Avrim Blum and David Karger. An ˜ O (cid:0) n / (cid:1) -coloring algorithm for 3-colorable graphs. In-formation Processing Letters , 61(1):49 – 53, 1997.[21] Moses Charikar and Anthony Wirth. Maximizing quadratic programs: Extendinggrothendieck’s inequality. In
Proceedings of the 45th Annual IEEE Symposium on Founda-tions of Computer Science , FOCS ’04, pages 54–60, 2004.[22] Moses Charikar, Venkatesan Guruswami, and Anthony Wirth. Clustering with qualitativeinformation.
J. Comput. Syst. Sci. , 71(3):360–383, 2005.[23] Moses Charikar, Alantha Newman, and Aleksandar Nikolov. Tight hardness results for mini-mizing discrepancy. In
SODA 2011 , pages 1607–1614. SIAM, 2011.[24] Ronen Eldan and Assaf Naor. Krivine diffusions attain the goemans-williamson approximationratio.
CoRR , abs/1906.10615, 2019. URL http://arxiv.org/abs/1906.10615 .[25] Uriel Feige and Michel Goemans. Approximating the value of two power proof systems, withapplications to Max 2-SAT and Max DiCut. In istcs , page 0182. IEEE, 1995.[26] Uriel Feige and Michael Langberg. The rpr2 rounding technique for semidefinite programs. In
ICALP , volume 2076 of
Lecture Notes in Computer Science , pages 213–224. Springer, 2001.3527] A. Frieze and M. Jerrum. Improved approximation algorithms for MAX k-CUT and MAXBISECTION.
Algorithmica , 18(1):67–81, May 1997.[28] David Gilbarg and Neil S Trudinger.
Elliptic partial differential equations of second order .springer, 2015.[29] Emmanuel Gobet. Weak approximation of killed diffusion using Euler schemes.
StochasticProcess. Appl. , 87(2):167–197, 2000. ISSN 0304-4149. doi: 10.1016/S0304-4149(99)00109-X.URL https://doi.org/10.1016/S0304-4149(99)00109-X .[30] Emmanuel Gobet.
Monte-Carlo methods and stochastic processes . CRC Press, Boca Raton,FL, 2016. ISBN 978-1-4987-4622-9. From linear to non-linear.[31] Michel X Goemans and David P Williamson. Improved approximation algorithms for max-imum cut and satisfiability problems using semidefinite programming.
Journal of the ACM(JACM) , 42(6):1115–1145, 1995.[32] Venkatesan Guruswami and Ali Kemal Sinop. Lasserre hierarchy, higher eigenvalues, andapproximation schemes for graph partitioning and quadratic integer programming with PSDobjectives. In
FOCS , pages 482–491. IEEE Computer Society, 2011.[33] Eran Halperin and Uri Zwick. Approximation algorithms for MAX 4-SAT and rounding pro-cedures for semidefinite programs. In
Proceedings of the 7th International IPCO Conferenceon Integer Programming and Combinatorial Optimization , pages 202–217, 1999.[34] Eran Halperin and Uri Zwick. A unified framework for obtaining improved approximationalgorithms for maximum graph bisection problems.
Random Struct. Algorithms , 20(3):382–402, May 2002.[35] Johan H˚astad. Some optimal inapproximability results.
J. ACM , 48(4):798–859, July 2001.[36] David Karger, Rajeev Motwani, and Madhu Sudan. Approximate graph coloring by semidefi-nite programming.
J. ACM , 45(2):246–265, 1998.[37] Howard Karloff and Uri Zwick. A 7/8-approximation algorithm for MAX 3SAT? In focs , page406. IEEE, 1997.[38] Subhash Khot. On the power of unique 2-prover 1-round games. In
STOC , pages 767–775.ACM, 2002.[39] Subhash Khot, Guy Kindler, Elchanan Mossel, and Ryan O’Donnell. Optimal inapproxima-bility results for MAX-CUT and other 2-variable csps?
SIAM J. Comput. , 37(1):319–357,2007.[40] Michael Lewin, Dror Livnat, and Uri Zwick. Improved rounding techniques for the MAX 2-satand MAX DI-CUT problems. In
IPCO , volume 2337 of
Lecture Notes in Computer Science ,pages 67–82. Springer, 2002.[41] Shachar Lovett and Raghu Meka. Constructive discrepancy minimization by walking on theedges.
SIAM Journal on Computing , 44(5):1573–1582, 2015.3642] Tomomi Matsui and Shiro Matuura. 0.935-approximation randomized algorithm for Max2-SAT and its derandomization.
Department of Mathematical Engineering and InformationPhysics, University of Tokyo (Technical Report METR 2001-03 , 2001.[43] Peter M¨orters and Yuval Peres.
Brownian Motion . Cambridge Series in Statistical and Prob-abilistic Mathematics, 2010.[44] Elchanan Mossel, Ryan O’Donnell, and Krzysztof Oleszkiewicz. Noise stability of functionswith low influences: invariance and optimality. In
FOCS , pages 21–30. IEEE Computer Society,2005.[45] Y. Nesterov. Semidefinite relaxation and nonconvex quadratic optimization.
OptimizationMethods and Software , 9(1-3):141–160, 1998.[46] Ryan O’Donnell and Yi Wu. An optimal sdp algorithm for Max-Cut, and equally optimal longcode tests. In
Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing ,STOC ’08, pages 335–344, 2008.[47] Bernt Øksendal.
Stochastic differential equations . Universitext. Springer-Verlag, Berlin, fourthedition, 1995. ISBN 3-540-60243-7. doi: 10.1007/978-3-662-03185-8. URL https://doi.org/10.1007/978-3-662-03185-8 . An introduction with applications.[48] Prasad Raghavendra. Optimal algorithms and inapproximability results for every csp? In
STOC , pages 245–254. ACM, 2008.[49] Prasad Raghavendra and David Steurer. How to round any CSP. In
FOCS , pages 586–594.IEEE Computer Society, 2009.[50] Prasad Raghavendra and Ning Tan. Approximating csps with global cardinality constraintsusing SDP hierarchies. In
SODA , pages 373–387. SIAM, 2012.[51] Chaitanya Swamy. Correlation clustering: Maximizing agreements via semidefinite program-ming. In
Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms ,SODA ’04, pages 526–527, 2004.[52] Norbert Wiener. Differential-space.
Journal of Mathematics and Physics , 2(1-4):131–174,1923. doi: 10.1002/sapm192321131.[53] Yinyu Ye. A .699-approximation algorithm for Max-Bisection.
Mathematical Programming ,90(1):101–111, Mar 2001.[54] Uri Zwick. Approximation algorithms for constraint satisfaction problems involving at mostthree variables per constraint. In
Proceedings of the Ninth Annual ACM-SIAM Symposium onDiscrete Algorithms , SODA ’98, pages 201–210, 1998.[55] Uri Zwick. Outward rotations: a tool for rounding solutions of semidefinite programmingrelaxations, with applications to MAX CUT and other problems. In
Proceedings of the thirty-first annual ACM symposium on Theory of computing , pages 679–687. ACM, 1999.37
Definition of Brownian Motion
For completeness, we recall the definition of standard Brownian motion.
Definition 1.
A stochastic process { B t } t ≥ taking values in R n is called an n -dimensional Brow-nian motion started at x ∈ R n if • B = x , • the process has independent increments, i.e. for all N and all times ≤ t ≤ t ≤ . . . ≤ t N ,the increments B t N − B t N − , B t N − − B t N − , . . . , B t − B t are independent random variables, • for all t ≥ and all h > , the increment B t + h − B t is distributed as a Gaussian randomvariable with mean and covariance matrix equal to the identity I n , • the function f ( t ) = B t is almost surely continuous.The process { B t } t ≥ is called standard Brownian motion if x = 0 . The fact that this definition is not empty, i.e. that such a stochastic process exists, is non-trivial.The first rigorous proof of this fact was given by Wiener [52]. We refer the reader to the book [43]for a thorough introduction to Brownian motion and its properties.
B Omitted Proofs from Section 2
We start with a brief primer about special functions with an emphasis on the lemmas and identitiesthat will be useful for our analysis. We recommend the excellent introductions in Andrews et al.[4], Beals and Wong [18] for a thorough introduction.
B.1 Special Functions: A Primer
While there is no common definition of special functions, three basic functions, Γ, β and thehypergeometric functions p F q show up in nearly all treatments of the subject. We will define themand some useful relationships between them. Definition 2 (Gamma Function) . The gamma function is defined as Γ( z ) := (cid:90) ∞ x z − e − x dx. for all complex numbers z with non-negative real part, and analytically extended to all z (cid:54) = 0 , − , − , ... . Fact 1.
Recall that the gamma function satisfies the recurrence Γ( z + 1) = z Γ( z ) and it followseasily from the definition that Γ(1) = 1 . In particular, when n is a positive integer, Γ( n + 1) = n ! Definition 3 (Beta Function) . The beta function β ( a, b ) is defined for complex numbers a and b with Re( a ) > , Re( b ) > by β ( a, b ) = (cid:90) s a − (1 − s ) b − ds. β ( a, b ) = β ( b, a ). Setting s = u/ ( u + 1) gives the following alternate form. β ( a, b ) = (cid:90) ∞ u a − (cid:18)
11 + u (cid:19) a + b du Lemma 16. (Theorem 1.1.4 in [4]) The beta function can be expressed in terms of the gammafunction using the following identity: β ( a, b ) = Γ( a )Γ( b )Γ( a + b )We will use the following very useful fact. Lemma 17. ( Exercise 2.2 in [18]) (cid:90) π/ sin a − θ cos b − θdθ = 12 β ( a/ , b/ Definition 4 (Hypergeometric Function) . The hypergeometric function p F q (cid:104) a , ..., a p b , ..., b q ; z (cid:105) is definedas p F q (cid:20) a , . . . , a p b , . . . , b q ; z (cid:21) := ∞ (cid:88) n =0 ( a ) n · · · ( a p ) n ( b ) n · ( b q ) n z n n ! where the Pochhammer symbol (rising factorial) is defined inductively as ( a ) n := a ( a + 1) · · · ( a + n − and ( a ) = 1 . A very simple but useful way to write the binomial theorem using the Pochhammer symbol is(1 − x ) − a = ∞ (cid:88) n =0 ( a ) n n ! x n . The Pochhammer symbol also satisfies the formula ( a ) n = Γ( a + n ) / Γ( a ).A useful connection between the hypergoemetric function F and the gamma function is givenin the following lemma. Lemma 18 (Euler’s Integral Representation) . [Theorem 2.2.1 in [4]] If Re( c ) > Re( b ) > then F (cid:20) a, bc ; x (cid:21) = Γ( c )Γ( b )Γ( c − b ) (cid:90) t b − (1 − s ) c − b − (1 − xs ) − a ds where we assume that (1 − xs ) − a takes its principal value. Definition 5.
The incomplete beta integral is defined as β x ( a, b ) = (cid:90) x t a − (1 − t ) b − dt, and is well-defined for Re( a ) > and x / ∈ [1 , ∞ ) . Lemma 19. (cid:90) φ sin a − θ cos b − θdθ = 12 β sin φ ( a/ , b/ Proof.
Let sin θ = t , then cos θ = √ − t and (cos θ ) dθ = dt , and we get (cid:90) φ sin a − θ cos b − θdθ = (cid:90) sin( φ )0 t a − (1 − t ) ( b − / dt. Setting s = t gives (cid:90) sin ( φ )0 (1 / s ( a − / (1 − s ) ( b − / ds = (1 / β sin φ ( a/ , b/ . This completes the proof.The following identity relates the incomplete beta integral to hypergeometric functions.
Lemma 20 (Gauss’s Identity) . [Exercise 8.7 in [18]] β x ( a, b ) = (cid:90) x t a − (1 − t ) b − dt = x a a · F (cid:20) a, − ba + 1 ; x (cid:21) Proof.
It is natural to substitute t = sx , as we can now integrate from s = 0 to 1. This gives (cid:90) x a − s a − (1 − sx ) b − xds = x a (cid:90) s a − (1 − sx ) b − ds Using the integral form given in Lemma 18 with 1 − b in the place of a , a in the place of b , and a + 1 in the place of c , we get that the integral equals x a Γ( a )Γ(1)Γ( a + 1) · F (cid:20) − b, aa + 1 ; x (cid:21) = x a a · F (cid:20) − b, aa + 1 ; x (cid:21) By the symmetry in the definition of F with respect to the first two arguments, the resultfollows. B.2 Proof of Theorem 4
First, we will prove the claim, which expresses the function r ( φ ) in terms of the incomplete betafunction. Claim 1. r ( φ ) = 14 β sin φ ( a/ , b/ when φ ∈ [0 , π/ . roof. Recall r ( φ ) = | F θ ( e iφ ) − F θ (1) | . Furthermore, from Lemma 3, we know that the conformalmap maps the arc from 1 to i to an edge on the rhombus S . Hence we can write r ( φ ) as an integral r ( φ ) = (cid:90) φ | F (cid:48) θ ( e iφ ) | dφ = (cid:90) φ | f θ ( e iφ ) | dφ. Expanding f θ , and substituting a = θπ and b = 1 − a , we have= (cid:90) φ | (1 − e iφ ) a − · (1 + e iφ ) b − | dφ. Expanding this in terms of trigonometric functions, and simplifying using double angle formulas,we get = (cid:90) φ | (2 · e iφ · sin φ ) a − · (2 · e − iφ · cos φ ) b − | dφ = (cid:90) φ | a + b − | · | e iφ ( a − | · | sin φ a − | · | e − iφ ( b − | · | cos φ b − | dφ. Since | e iφ ( a − | = | e iφ ( b − | = 1 and the remaining terms are positive, we drop the norms.= (cid:90) φ
12 (sin φ ) a − (cos φ ) b − dφ = 14 β sin φ ( a/ , b/
2) by Lemma 19.By substituting φ = π/ Corollary 3.
The length of the side of rhombus is given by r = r ( π/
2) = / · β ( a/ , b/ . The claim below will characterize the integral of the incomplete beta function which will beimportant for us later.
Claim 2. · (cid:90) π/ r ( φ ) dφ = β ( a/ / , / a · F (cid:20) a , a , a a , a + 1 ; 1 (cid:21) Proof.
By Lemma 19, the left hand side equals (cid:90) π/ β sin φ (cid:18) a , b (cid:19) dφ = (cid:90) π/ (cid:0) sin φ (cid:1) a/ a F (cid:20) a , − b a + 1 ; sin φ (cid:21) dφ By Lemma 20= (cid:90) π/ φ ) a a F (cid:20) a , a +12 a + 1 ; sin φ (cid:21) dφ Substituting b = 1 − a = 2 a (cid:90) π/ (cid:16) ∞ (cid:88) n =0 ( a/ n ( a/ / n ( a/ n (sin φ ) n + a n ! (cid:17) dφ Expand using Definition 4= 2 a ∞ (cid:88) n =0 (cid:16) (cid:90) π/ (sin φ ) n + a dφ (cid:17) ( a/ n ( a/ / n ( a/ n · n ! (*)41e take a brief interlude to analyze the integral in the parenthesis above: (cid:90) π/ (sin φ ) n + a dφ = 12 β ( n + a/ / , /
2) By Lemma 17= Γ(1 / n + a/ / n + a/ / a + 1 / n Γ( a/ / a/ n Γ( a + 1)= β ( a/ / , / a/ / n ( a/ n . Going back and substituting the above result into the Equation (*), we get= β ( a/ / , / a (cid:32) ∞ (cid:88) n =0 ( a/ n ( a/ / n ( a/ / n n !( a/ n ( a + 1) n (cid:33) = β ( a/ / , / a · F (cid:20) a , a , a a + 1 , a + 1 ; 1 (cid:21) Armed with Claim 2 and Corollary 3, we can prove Theorem 4.
Theorem 4.
The probability that the Sticky Brownian Motion rounding algorithm will separate apair { i, j } of vertices for which θ = cos − ( w i · w j ) equals − Γ( a +12 )Γ( − a )Γ( a + 1) · F (cid:20) a , a , a a , a + 1 ; 1 (cid:21) where a = θ/π , Γ is the gamma function, and F is the hypergeometric function.Proof. Substituting r = r ( π/
2) below, by Lemma 4 we have that the probability of separating thevertices is 2 π (cid:90) π/ φ =0 − r ( φ ) r dφ, or equivalently, the probability of not separating them is2 π (cid:90) π/ φ =0 r ( φ ) r dφ Expanding the above using Claim 1, we get that this equals2 π β ( a/ , b/ (cid:90) π/ φ =0 β sin ψ ( a/ , (1 − a ) / dψ. π β ( a/ , b/ · β ( a/ / , / a · F (1 / a/ , / a/ , a/ a/ , a + 1; 1) . Using Lemma 16 and the fact that Γ(1 / = π we can simplify this to= Γ( a + 1 / / − a/ a/ · F (1 / a, / a/ , a/ a/ , a + 1; 1) . B.3 Proof of Theorem 7
First, we rewrite r in a form that will be useful later. Claim 3. (cid:90) π/ r ( φ ) dφ = (cid:90) π/ φ (sin φ ) b − (cos φ ) a − dφ Proof.
The left hand side equation can be written as2 (cid:90) π/ r ( φ ) dφ = (cid:90) π/ β sin φ ( a/ , b/ dφ = (cid:90) π/ (cid:18)(cid:90) φ (sin ψ ) a − (cos ψ ) b − dψ (cid:19) dφ Applying integration by parts: (cid:82) pdq = [ pq ] − (cid:82) qdp with q = π/ − φ and p = (cid:82) φ (sin ψ a − )(cos ψ ) b − dψ gives (cid:104) ( π/ − φ ) (cid:90) φ (sin ψ a − )(cos ψ ) b − dψ (cid:105) π/ + (cid:90) π/ ( π/ − φ ) ddφ (cid:90) φ (sin ψ a − )(cos ψ ) b − dψ The first term is 0, and using Fundamental Theorem of Calculus, the second term is (cid:90) π/ ( π/ − φ )(sin φ ) a − (cos φ ) b − dφ Substituting φ for π/ − φ gives (cid:90) π/ φ (sin φ ) b − (cos φ ) a − dφ. Next, we claim that
Claim 4. When $\theta = (1 - \epsilon)\pi$, we have
\[ \int_0^{\pi/2} r(\varphi)\, d\varphi \;\le\; \frac{\pi^2}{4} \cdot \left( 1 + O(\epsilon \log(1/\epsilon)) \right). \]

Proof. Using Claim 3, we can write
\[ \int_0^{\pi/2} r(\varphi)\, d\varphi = \int_0^{\pi/2} \varphi\, (\sin\varphi)^{b-1}(\cos\varphi)^{a-1}\, d\varphi = \int_0^{\pi/2} \frac{\varphi}{\sin\varphi}\, (\tan\varphi)^\epsilon\, d\varphi. \]
Since $x/\sin(x) \le \pi/2$ for $0 \le x \le \pi/2$, to prove the claim it suffices to show that
\[ \int_0^{\pi/2} \left( (\tan\varphi)^\epsilon - 1 \right) d\varphi = O(\epsilon \log(1/\epsilon)). \]
Let $\varphi_0 = \arctan(1/\epsilon)$. We will break the above integral into two parts and deal with each separately:
\[ \int_0^{\pi/2} ((\tan\varphi)^\epsilon - 1)\, d\varphi = \int_0^{\varphi_0} ((\tan\varphi)^\epsilon - 1)\, d\varphi + \int_{\varphi_0}^{\pi/2} ((\tan\varphi)^\epsilon - 1)\, d\varphi. \]

Case 1. For $\varphi \le \varphi_0$,
\[ (\tan\varphi)^\epsilon \le \left( \frac{1}{\epsilon} \right)^\epsilon = \exp(\epsilon \ln(1/\epsilon)) = 1 + O(\epsilon\log(1/\epsilon)), \]
so $\int_0^{\varphi_0} ((\tan\varphi)^\epsilon - 1)\, d\varphi = O(\epsilon\log(1/\epsilon))$.

Case 2. For $\varphi > \varphi_0$,
\[ \int_{\varphi_0}^{\pi/2} ((\tan\varphi)^\epsilon - 1)\, d\varphi \le \int_{\varphi_0}^{\pi/2} (1/\cos\varphi)^\epsilon\, d\varphi = \int_0^{\pi/2 - \varphi_0} (1/\sin\varphi)^\epsilon\, d\varphi \qquad \text{(since } \sin(x) = \cos(\pi/2 - x)) \]
\[ \le \int_0^{\pi/2 - \varphi_0} (2/\varphi)^\epsilon\, d\varphi \qquad \text{(since } 1 \le x/\sin(x) \le 2 \text{ for } 0 \le x \le \pi/2) \]
\[ = 2^\epsilon\, \frac{(\pi/2 - \varphi_0)^{1-\epsilon}}{1 - \epsilon} \le (\pi/2 - \varphi_0)(1 + O(\epsilon)). \]
Finally, we note that $\pi/2 - \varphi_0 \le \tan(\pi/2 - \varphi_0) = 1/\tan(\varphi_0) = \epsilon$, so this case contributes $O(\epsilon)$.
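As a quick numerical illustration of the estimate just proved (an illustrative sketch, with arbitrary values of $\epsilon$), the following evaluates the integral and divides by $\epsilon\log(1/\epsilon)$; the last column staying bounded is consistent with Claim 4.

    # Illustrative numeric check of Claim 4's key estimate.
    from mpmath import mp, quad, tan, log, pi

    mp.dps = 20
    for eps in ['0.1', '0.01', '0.001']:
        e = mp.mpf(eps)
        I = quad(lambda phi: tan(phi) ** e - 1, [0, pi / 2])
        print(e, I, I / (e * log(1 / e)))   # last column stays bounded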
Theorem 7. Given an edge $\{i,j\}$ with $\cos^{-1}(w_i^T w_j) = \theta = (1 - \epsilon)\pi$, the Sticky Brownian Motion rounding will cut the edge with probability at least $1 - \left( \frac{\pi}{2}\epsilon + O(\epsilon^2 \log(1/\epsilon)) \right)$.

Proof. Let $a = 1 - \epsilon$ and $b = \epsilon$. As discussed in Lemma 4, the non-separation probability is
\[ \frac{2}{\pi r_1} \int_0^{\pi/2} r(\varphi)\, d\varphi, \qquad \text{where } r_1 = r(\pi/2). \]
We estimate both $r_1 = r(\pi/2)$ and $\int_0^{\pi/2} r(\varphi)\, d\varphi$ as $\epsilon \to 0$. Recall that
\[ r_1 = \frac{1}{2}\,\beta(a/2,\, b/2) \qquad \text{(by Corollary 3)} \]
\[ = \frac{\Gamma((1-\epsilon)/2)\,\Gamma(\epsilon/2)}{2\,\Gamma(1/2)} \qquad \text{(by Lemma 16)}. \]
Using $\Gamma(\epsilon/2) = \frac{2}{\epsilon}\Gamma(1 + \epsilon/2)$ and $\Gamma(z + \epsilon) = \Gamma(z)(1 + O(\epsilon))$ for fixed $z > 0$, this equals
\[ \frac{1}{\epsilon\,\Gamma(1/2)}\left( \Gamma(1/2) + O(\epsilon) \right)\left( \Gamma(1) + O(\epsilon) \right) = \frac{1}{\epsilon} + O(1), \]
which implies that $1/r_1 = \epsilon + O(\epsilon^2)$. Using Claim 4, we know that $\int_0^{\pi/2} r(\varphi)\, d\varphi$ is at most $\frac{\pi^2}{4}\left( 1 + O(\epsilon\log(1/\epsilon)) \right)$. Combining the two, we get that the probability of non-separation is at most
\[ \frac{2}{\pi}\left( \epsilon + O(\epsilon^2) \right) \cdot \frac{\pi^2}{4}\left( 1 + O(\epsilon\log(1/\epsilon)) \right) = \frac{\pi}{2}\,\epsilon + O(\epsilon^2\log(1/\epsilon)) \approx 1.57\,\epsilon + O(\epsilon^2\log(1/\epsilon)). \]
B.4 Other Missing Proofs

Lemma 1. Applying the transformation $O W^{-1/2}$ to $\{X_t\}_{t \ge 0}$ we get a new random process $\{Y_t\}_{t \ge 0}$ which has the following properties:
1. If $X_t$ is in the interior/boundary/vertex of $[-1,1]^2$, then $Y_t$ is in the interior/boundary/vertex of $S$, respectively.
2. $S$ is a rhombus whose internal angles at $b_1$ and $b_3$ are $\theta$, and at $b_2$ and $b_4$ are $\pi - \theta$. The vertex $b_1$ lies on the positive $x$-axis, and $b_2, b_3, b_4$ are arranged counter-clockwise.
3. The probability that the algorithm will separate the pair $\{i,j\}$ is exactly $\Pr[Y_t \text{ is absorbed in } b_1 \text{ or } b_3]$.

Proof. Part 1 is immediate from the continuity and linearity of the map $O \cdot W^{-1/2}$.
To prove part 2, observe that $W^{1/2}$ is given explicitly by the matrix
\[ W^{1/2} = \frac{1}{\sqrt{2}} \begin{bmatrix} \cos(\theta/2) + \sin(\theta/2) & \cos(\theta/2) - \sin(\theta/2) \\ \cos(\theta/2) - \sin(\theta/2) & \cos(\theta/2) + \sin(\theta/2) \end{bmatrix}, \qquad W^{-1/2} = \frac{1}{2\sqrt{2}} \begin{bmatrix} \sec(\theta/2) + \csc(\theta/2) & \sec(\theta/2) - \csc(\theta/2) \\ \sec(\theta/2) - \csc(\theta/2) & \sec(\theta/2) + \csc(\theta/2) \end{bmatrix}. \]
Since $W^{-1/2}[-1,1]^2$ is the image of a parallelogram under a linear map, it must also be a parallelogram. Moreover, one can directly check that its diagonals are orthogonal to each other, so it must be a rhombus. It is easy to calculate the angle between the sides and see that it is exactly $\theta$ at the image of $(1,-1)$ and $\pi - \theta$ at the image of $(1,1)$.
Part 3 follows from part 1: whenever $X_t$ is on a side or a vertex of $[-1,1]^2$, $Y_t$ is on the corresponding side or vertex of $S$.
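The explicit matrices above can be verified mechanically. The following sketch (illustrative only; the assertions encode the properties claimed in the proof) checks that $W^{1/2}$ squares to the correlation matrix $W$ and that the images of the square's corners form a rhombus with interior angle $\theta$ at the image of $(1,-1)$.

    # Numeric check of the matrices in Lemma 1 (illustrative sketch).
    import numpy as np

    theta = 1.1
    W = np.array([[1.0, np.cos(theta)], [np.cos(theta), 1.0]])
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    W_half = np.array([[c + s, c - s], [c - s, c + s]]) / np.sqrt(2)
    assert np.allclose(W_half @ W_half, W)

    W_inv_half = np.linalg.inv(W_half)
    corners = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]]).T
    b = (W_inv_half @ corners).T                 # vertices of S
    sides = np.roll(b, -1, axis=0) - b
    lengths = np.linalg.norm(sides, axis=1)
    assert np.allclose(lengths, lengths[0])      # equal sides: a rhombus
    u, v = b[0] - b[3], b[2] - b[3]              # angle at image of (1, -1)
    ang = np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    print(ang, theta)                            # the two should match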
C Omitted Proofs from Section 3

Recall that, for $0 \le x \le 1$, $0 \le y \le 1$, the function $u(x,y)$ denotes the probability of a clause being satisfied when the random walk begins with marginals $(x,y)$ and angle $\theta$. Equivalently, $u(x,y)$ is the probability that the walk, started at $(x,y)$, ends at one of the corners $(0,0)$, $(0,1)$, or $(1,0)$.

Lemma 5.
For $\varphi(x,y) = 1 - xy$, we have
\[ u(x) = \varphi(x) \quad \text{for all } x \in \partial[0,1]^2. \tag{8} \]
Moreover, for all $x$ in the interior of the square $[0,1]^2$, $u(x) = \mathbb{E}_x[\varphi(X_\tau)]$, where $\mathbb{E}_x$ denotes expectation with respect to starting the process at $X_0 = x$.

Proof. Recall that $\tau$ is the first time when $X_t$ hits the boundary of $[0,1]^2$, and $\sigma$ is the first time when $X_t$ hits a vertex of $[0,1]^2$. The function $\varphi$ evaluates to 1 at the vertices $(0,0)$, $(0,1)$, $(1,0)$ and to 0 at $(1,1)$, so $u(x) = \mathbb{E}_x[\varphi(X_\sigma)]$.
Let us first consider the case when $x$ is on the boundary of $[0,1]^2$. Then one of the coordinates of $X_t$ remains fixed for the entire process. Since $\varphi$ is affine in each of its arguments, and $X_t$ is a martingale,
\[ \forall x \in \partial[0,1]^2: \quad u(x) = \mathbb{E}_x[\varphi(X_\sigma)] = \varphi(\mathbb{E}_x[X_\sigma]) = \varphi(x). \]
When $x$ is in the interior of $[0,1]^2$, we have, by the law of total expectation,
\[ \forall x \in \mathrm{Int}[0,1]^2: \quad u(x) = \mathbb{E}_x[\varphi(X_\sigma)] = \mathbb{E}_x\big[ \mathbb{E}_{X_\tau}[\varphi(X_\sigma)] \big] = \mathbb{E}_x[\varphi(X_\tau)]. \]
The final equality follows from the special case in which the starting point of the random walk is on the boundary of $[0,1]^2$. This proves the lemma.
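For intuition, $u(x,y)$ can also be estimated by direct simulation. The sketch below (an illustration with arbitrary step size and trial count, not the algorithm's actual implementation) runs a discretized version of the walk: correlated Gaussian steps with correlation $\cos\theta$, coordinates clipped to $[0,1]$ and frozen once they hit the boundary.

    # Monte Carlo sketch estimating u(x, y): the probability that the
    # correlated sticky walk ends at a corner other than (1, 1).
    import numpy as np

    def estimate_u(x, y, theta, trials=2000, dt=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        L = np.linalg.cholesky(np.array([[1.0, np.cos(theta)],
                                         [np.cos(theta), 1.0]]))
        hits = 0
        for _ in range(trials):
            z = np.array([x, y], dtype=float)
            free = np.array([True, True])
            while free.any():
                step = np.sqrt(dt) * (L @ rng.standard_normal(2))
                z[free] += step[free]
                z = np.clip(z, 0.0, 1.0)
                free &= (z > 0.0) & (z < 1.0)
            hits += int(not (z[0] == 1.0 and z[1] == 1.0))
        return hits / trials

    print(estimate_u(0.5, 0.5, 2 * np.pi / 3))   # one illustrative configuration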
Before we give an outline of the proofs of Lemma 6 and Lemma 8, we prove Lemma 7. This will allow us to give a completely analytic proof of a 3/4-approximation guarantee.

Lemma 7. Let $x, y, \theta$ be as defined by a feasible pair of vectors $v_i$ and $v_j$. Then they must satisfy the following constraints:
1. $0 \le x \le 1$, $0 \le y \le 1$, $0 \le \theta \le \pi$.
2. $\cos(\theta) \ge -\sqrt{\frac{xy}{(1-x)(1-y)}}$.
3. $\cos(\theta) \ge -\sqrt{\frac{(1-x)(1-y)}{xy}}$.

Proof. Clearly, the first set of constraints is obvious. We focus on the second and the third constraints. Recall that $v_i = x v_0 + \sqrt{x - x^2}\, w_i$ and $v_j = y v_0 + \sqrt{y - y^2}\, w_j$, where $w_i$ and $w_j$ are unit vectors orthogonal to $v_0$ with $\cos(\theta) = w_i \cdot w_j$. Thus we have
\[ v_i \cdot v_j = xy + \cos(\theta)\sqrt{x - x^2}\sqrt{y - y^2}. \]
But then we have the following valid constraint from the SDP: $v_i \cdot v_j \ge 0$, which gives
\[ \cos(\theta) \ge -\sqrt{\frac{xy}{(1-x)(1-y)}}, \]
proving the second inequality.
For the other inequality, observe that we have $v_{-i} = (1-x)v_0 - \sqrt{x - x^2}\, w_i$ and $v_{-j} = (1-y)v_0 - \sqrt{y - y^2}\, w_j$. Then we have
\[ v_{-i} \cdot v_{-j} = (1-x)(1-y) + \cos(\theta)\sqrt{x - x^2}\sqrt{y - y^2}. \]
But then we have the following valid constraint from the SDP: $v_{-i} \cdot v_{-j} \ge 0$, which gives
\[ \cos(\theta) \ge -\sqrt{\frac{(1-x)(1-y)}{xy}}, \]
proving the third inequality.

To ease the remainder of the presentation, we first prove that the Brownian rounding algorithm achieves an approximation of 3/4 for Max-2SAT via the maximum principle. In order to achieve that, we use the following two functions for different ranges of $\theta$:
• $g_1(x,y) = 1 - xy - \cos(\theta)\sqrt{x - x^2}\sqrt{y - y^2}$.
• $f(x,y) = 1 - xy$.
First consider the case when $0 \le \theta \le \pi/2$. In this case, we show that $g_1$ satisfies the requirements of Corollary 2 and gives an approximation factor of 1. The latter fact is trivially true since $g_1$ is exactly the SDP objective.
For the conditions of Corollary 2, we need to show that
\[ \frac{\partial^2 g_1}{\partial x^2} + \frac{\partial^2 g_1}{\partial y^2} + 2\cos(\theta)\frac{\partial^2 g_1}{\partial x \partial y} \ge 0 \quad \forall (x,y) \in \mathrm{Int}[0,1]^2, \qquad g_1(x,y) \le 1 - xy \quad \forall (x,y) \in \partial[0,1]^2. \]
Since $(x - x^2)(y - y^2) = 0$ on $\partial[0,1]^2$, we obtain that $g_1(x,y) = 1 - xy$ on $\partial[0,1]^2$, as required. It remains to show that
\[ L g_1 = \frac{\partial^2 g_1}{\partial x^2} + \frac{\partial^2 g_1}{\partial y^2} + 2\cos\theta\,\frac{\partial^2 g_1}{\partial x \partial y} \ge 0 \quad \text{for } (x,y) \in (0,1)^2. \]
Consider $h(x,y) := L g_1(x,y)$. To show that $h$ is non-negative, we perform the change of variables $x = \frac{1 + \sin(a)}{2}$ and $y = \frac{1 + \sin(b)}{2}$ for some $|a|, |b| \le \pi/2$. Such $a$ and $b$ exist since $0 \le x, y \le 1$.
Now simplifying, we obtain:
\[ h\!\left( \frac{1 + \sin(a)}{2},\, \frac{1 + \sin(b)}{2} \right) = \cos(\theta)\sec^3(a)\sec^3(b)\Big[ \left( \cos^2(a) - \cos^2(b) \right)^2 + 2\cos^2(a)\cos^2(b)\left( 1 - \cos(\theta)\sin(a)\sin(b) - \cos(a)\cos(b) \right) \Big]. \]
Since $|a|, |b| \le \pi/2$ and $0 \le \theta \le \pi/2$, we have that $\sec(a), \sec(b), \cos(\theta) \ge 0$. Thus, it is enough to show that
\[ 1 - \cos(\theta)\sin(a)\sin(b) - \cos(a)\cos(b) \ge 0. \]
Since the above expression is linear in $\cos(\theta)$, it is enough to check the extreme values of $\cos(\theta)$, which here takes values between 0 and 1. It is clearly true when $\cos(\theta) = 0$. For $\cos(\theta) = 1$, it equals $1 - \cos(a-b)$ and is thus non-negative.

Now consider $-1 \le \cos(\theta) \le 0$. We show that $f(x,y) = 1 - xy$ satisfies the condition of Corollary 2 and is at least $3/4$ times the value of the SDP objective for all feasible $(x, y, \theta)$. First let us focus on the condition of Corollary 2. Clearly, the boundary conditions are satisfied by construction. Note that $\frac{\partial^2 f}{\partial x^2} = 0$, $\frac{\partial^2 f}{\partial y^2} = 0$, and $\frac{\partial^2 f}{\partial x \partial y} = -1$. Thus $L f = -2\cos(\theta) \ge 0$, since $\cos(\theta) \le 0$.
Next we show that $f$ provides an approximation guarantee of $3/4$ in the case $\cos(\theta) < 0$. Recall that
\[ SDP(x,y,\theta) = 1 - xy - \cos(\theta)\sqrt{x - x^2}\sqrt{y - y^2} \]
is the contribution of a clause to the SDP's objective whose two variables $z_i$ and $z_j$ have marginal values $x$ and $y$, respectively, and where $\cos(\theta) = w_i \cdot w_j$. We prove the following claim, which implies that we obtain a 3/4-approximation.

Claim 5. For any $x, y, \theta$ that satisfy the feasibility conditions in Lemma 7 and $\cos(\theta) < 0$, we have $f(x,y) = 1 - xy \ge \frac{3}{4} \cdot SDP(x,y,\theta)$.

Proof.
From Lemma 7, we have
\[ -\cos(\theta) \le \min\left\{ \sqrt{\frac{xy}{(1-x)(1-y)}},\; \sqrt{\frac{(1-x)(1-y)}{xy}} \right\}. \]
Observe that we have $f(x,y) \ge \frac{3}{4} \cdot SDP(x,y,\theta)$ if
\[ (1 - xy) \ge -3\cos(\theta)\sqrt{(x - x^2)(y - y^2)}. \]
First, suppose $xy \le 1/4$. Then
\[ -3\cos(\theta)\sqrt{(x - x^2)(y - y^2)} \le 3\sqrt{\frac{xy}{(1-x)(1-y)}} \cdot \sqrt{(x - x^2)(y - y^2)} = 3xy \le 1 - xy. \]
Else, if $xy \ge 1/4$, then we have
\[ 1 - xy + 3\cos(\theta)\sqrt{(x - x^2)(y - y^2)} \ge 1 - xy - 3\sqrt{\frac{(1-x)(1-y)}{xy}} \cdot \sqrt{(x - x^2)(y - y^2)} = 1 - xy - 3(1-x)(1-y) = -2 + 3x + 3y - 4xy. \]
Over all $1 \ge x \ge 0$, $1 \ge y \ge 0$ with $xy \ge 1/4$, the quantity $-2 + 3x + 3y - 4xy$ is minimized when $x = y$. Since $xy \ge 1/4$, we must then have $x \ge 1/2$. But then it becomes $-2 + 6x - 4x^2 = -2(1 - 2x)(1 - x) \ge 0$ for $1/2 \le x \le 1$. This proves the 3/4-approximation.
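Claim 5 is also easy to probe numerically. The sketch below (illustrative only; the grid resolutions are arbitrary choices) sweeps feasible configurations with $\cos\theta < 0$ and reports the worst ratio $(1 - xy)/SDP(x,y,\theta)$, which should stay at or above $3/4$, with equality approached at $x = y = 1/2$, $\cos\theta = -1$.

    # Grid check of Claim 5 (illustrative, not a proof).
    import numpy as np

    xs = np.linspace(0.01, 0.99, 99)
    worst = np.inf
    for x in xs:
        for y in xs:
            lo = max(-np.sqrt(x * y / ((1 - x) * (1 - y))),
                     -np.sqrt((1 - x) * (1 - y) / (x * y)),
                     -1.0)
            for c in np.linspace(lo, 0.0, 25):     # c = cos(theta) <= 0
                sdp = 1 - x * y - c * np.sqrt(x - x * x) * np.sqrt(y - y * y)
                if sdp > 1e-9:
                    worst = min(worst, (1 - x * y) / sdp)
    print(worst)                                   # expected: about 0.75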
We now give a brief outline of the proofs of Lemma 6 and Lemma 8. The complete proofs involve long sum-of-squares expressions that are available at [1].

Lemma 6. Each of $g_1, g_2, g_3$ satisfies the boundary conditions, i.e., $g_i(x,y) = u(x,y)$ for all $(x,y) \in \partial[0,1]^2$ and for all values of $\theta$. Moreover, we have the following for each $(x,y) \in [0,1]^2$:
1. If $1 \ge \cos(\theta) \ge 0$, then $L g_1 \ge 0$.
2. If $0 \ge \cos(\theta) \ge -1/2$, then $L g_2 \ge 0$.
3. If $-1/2 \ge \cos(\theta) \ge -1$, then $L g_3 \ge 0$.

Proof. Feasibility of $g_1(x,y)$: this was already shown in the above proof of the 3/4-approximation.

Feasibility of $g_2(x,y)$: now we consider $g_2(x,y) = 1 - xy - 2\cos(\theta)(x - x^2)(y - y^2)$. Since $(x - x^2)(y - y^2) = 0$ on $\partial[0,1]^2$, we obtain that $g_2(x,y) = 1 - xy$ on $\partial[0,1]^2$, as required. It remains to show that
\[ L g_2 = \frac{\partial^2 g_2}{\partial x^2} + \frac{\partial^2 g_2}{\partial y^2} + 2\cos\theta\,\frac{\partial^2 g_2}{\partial x \partial y} \ge 0 \quad \text{for } (x,y) \in (0,1)^2 \]
for any $0 \ge \cos(\theta) \ge -1/2$. A simple calculation allows us to obtain that
\[ L g_2 = -2\cos(\theta)\left( 1 - 2x - 2y + 2x^2 + 2y^2 + 2\cos(\theta) - 4x\cos(\theta) - 4y\cos(\theta) + 8xy\cos(\theta) \right). \]
Since $-2\cos(\theta) \ge 0$, it is enough to show that for any $0 \le x \le 1$, $0 \le y \le 1$,
\[ h(x,y) = 1 - 2x - 2y + 2x^2 + 2y^2 + 2\cos(\theta) - 4x\cos(\theta) - 4y\cos(\theta) + 8xy\cos(\theta) \ge 0. \]
We now prove the above inequality. Since the expression is linear in $\cos(\theta)$, for any fixed $x, y$ the minimum is attained at either $\cos(\theta) = 0$ or $\cos(\theta) = -1/2$. First consider $\cos(\theta) = 0$. In this case, we obtain
\[ h(x,y) = 1 - 2x - 2y + 2x^2 + 2y^2 = \frac{1}{2}(1 - 2x)^2 + \frac{1}{2}(1 - 2y)^2 \ge 0. \]
For $\cos(\theta) = -1/2$, we obtain
\[ h(x,y) = 2x^2 + 2y^2 - 4xy = 2(x - y)^2 \ge 0. \]
Thus $L g_2 \ge 0$.

Feasibility of $g_3(x,y)$: now we consider $g_3(x,y) = 1 - xy - (1 + 5\cos(\theta))(x - x^2)(y - y^2)(x + y)(2 - x - y)$. Since $(x - x^2)(y - y^2) = 0$ on $\partial[0,1]^2$, we obtain that $g_3(x,y) = 1 - xy$ on $\partial[0,1]^2$, as required. It remains to show that $L g_3 \ge 0$ for $(x,y) \in (0,1)^2$ for any $-1/2 \ge \cos(\theta) \ge -1$.
To show $L g_3 \ge 0$, we consider $L g_3 = p(x, y, \cos(\theta))$ as a polynomial in $x$, $y$ and $\cos(\theta)$. Replacing $z = \cos(\theta)$, our aim is to show that $p(x,y,z) \ge 0$ for $0 \le x, y \le 1$ and $-1/2 \ge z \ge -1$. Equivalently, we need to show that $p(x,y,z) \ge 0$ whenever
\[ r_1(x,y,z) := x - x^2 \ge 0, \quad r_2(x,y,z) := y - y^2 \ge 0, \quad r_3(x,y,z) := -(z + 1/2) \ge 0, \quad r_4(x,y,z) := z + 1 \ge 0. \]
This we show by obtaining polynomials $q_i(x,y,z)$ for $i = 0, 1, 2, 3, 4$ such that each $q_i$ is a sum-of-squares polynomial of fixed degree and we have
\[ p(x,y,z) = q_0(x,y,z) + \sum_{i=1}^{4} q_i(x,y,z)\, r_i(x,y,z). \]
Observe that the above polynomial identity shows the desired inequality. Indeed, evaluate the identity for any $0 \le x, y \le 1$ and $-1/2 \ge z \ge -1$: the right hand side is non-negative, since each $q_i$ is non-negative (it is an SoS) and each $r_i$ is non-negative by construction. We mention that we obtain these proofs via solving a semi-definite program of fixed degree (4) for each of the $q_i$'s. We also remark that these SoS expressions are obtained with a small error of order $\delta$; this, formally, implies that the approximation factors are slightly worse than stated.
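The computation of $L g_2$ above can be reproduced symbolically. The following sketch (illustrative, assuming sympy is available) verifies that $L g_2 = -2\cos(\theta)\, h(x,y)$ for the polynomial $h$ used in the proof.

    # Symbolic check that L g2 = -2*z*h, with z = cos(theta).
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    g2 = 1 - x * y - 2 * z * (x - x ** 2) * (y - y ** 2)
    Lg2 = sp.diff(g2, x, 2) + sp.diff(g2, y, 2) + 2 * z * sp.diff(g2, x, y)
    h = 1 - 2 * x - 2 * y + 2 * x ** 2 + 2 * y ** 2 \
        + 2 * z * (1 - 2 * x) * (1 - 2 * y)
    print(sp.expand(Lg2 + 2 * z * h))   # expect 0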
Lemma 8. Consider any feasible triple $(x, y, \theta)$ satisfying the condition in Lemma 7. We have the following.
1. If $1 \ge \cos(\theta) \ge 0$, then $g_1(x,y) \ge \frac{7}{8} \cdot SDP(x,y,\theta)$.
2. If $0 \ge \cos(\theta) \ge -1/2$, then $g_2(x,y) \ge \frac{7}{8} \cdot SDP(x,y,\theta)$.
3. If $-1/2 \ge \cos(\theta) \ge -1$, then $g_3(x,y) \ge \frac{7}{8} \cdot SDP(x,y,\theta)$.

Proof. We prove the three inequalities. We also remark that the SoS expressions below are obtained with a small error of order $\delta$; this, formally, implies approximation factors slightly worse than stated.
1. If $1 \ge \cos(\theta) \ge 0$: observe that $g_1(x,y) = SDP(x,y,\theta)$, so the inequality holds.
2. If $0 \ge \cos(\theta) \ge -1/2$, then $g_2(x,y) \ge \frac{7}{8} \cdot SDP(x,y,\theta)$. We need to show that
\[ 1 - xy - 2\cos(\theta)(x - x^2)(y - y^2) \ge \frac{7}{8}\left( 1 - xy - \cos(\theta)\sqrt{x - x^2}\sqrt{y - y^2} \right), \]
which holds if
\[ 1 - xy - 16\cos(\theta)(x - x^2)(y - y^2) \ge -7\cos(\theta)\sqrt{x - x^2}\sqrt{y - y^2}. \]
Since both sides are non-negative ($1 - xy \ge 0$ and $\cos(\theta) \le 0$), squaring, it suffices to show that
\[ \left( 1 - xy - 16\cos(\theta)(x - x^2)(y - y^2) \right)^2 - 49\cos^2(\theta)(x - x^2)(y - y^2) \ge 0 \]
whenever
\[ r_1 := x - x^2 \ge 0, \quad r_2 := y - y^2 \ge 0, \quad r_3 := -\cos(\theta) \ge 0, \]
\[ r_4 := xy - (1-x)(1-y)\cos^2(\theta) \ge 0, \quad r_5 := (1-x)(1-y) - xy\cos^2(\theta) \ge 0. \]
We obtain SoS polynomials $q_i(x, y, \cos(\theta))$ for $0 \le i \le 5$ such that
\[ \left( 1 - xy - 16\cos(\theta)(x - x^2)(y - y^2) \right)^2 - 49\cos^2(\theta)(x - x^2)(y - y^2) = q_0(x,y,\cos\theta) + \sum_{i=1}^{5} q_i(x,y,\cos\theta)\, r_i(x,y,\cos\theta). \]
3. If $-1/2 \ge \cos(\theta) \ge -1$, then $g_3(x,y) \ge \frac{7}{8} \cdot SDP(x,y,\theta)$. A similar argument as above allows us to obtain SoS proofs. We omit the details.
D Omitted Proofs from Section 4

Baseline Approximation

Lemma 9. Suppose that $n = \Omega(\log(k/\varepsilon)/\varepsilon^2)$ and $\varepsilon \le 1/2$. There exists a polynomial time algorithm that, on input a satisfiable instance $G = (V,E)$, $F_1, \ldots, F_k$, and $b_1, \ldots, b_k$, as defined above, outputs a set $S \subseteq V$ such that, with high probability, $a(\delta(S)) \ge \frac{\varepsilon}{4}\, a(E)$, and $\big|\, |S \cap F_i| - b_i \,\big| \le \varepsilon n$ for all $i \in [k]$.

Proof. If the constraints specified by $F$ and $b$ are satisfiable, then surely the following linear program also has a solution:
\[ \sum_{j \in F_i} x_j = b_i \quad \forall i = 1, \ldots, k; \qquad 0 \le x_j \le 1 \quad \forall j = 1, \ldots, n. \]
We compute a solution $x \in \mathbb{R}^n$ to the program, and form a vector $y \in \mathbb{R}^n$ by defining $y_j = (1 - \varepsilon)x_j + \varepsilon/2$ for all $j \in [n]$. The vector $y$ still satisfies the constraints approximately, i.e., for all $i \in [k]$ we have
\[ \Big| \sum_{j \in F_i} y_j - b_i \Big| \le \frac{\varepsilon n}{2}. \tag{20} \]
We now apply standard randomized rounding to $y$: we form a set $S$ by independently including each $j \in [n]$ in $S$ with probability $y_j$. By (20), and a Hoeffding and a union bound,
\[ \Pr\Big[ \exists i : \big|\, |S \cap F_i| - b_i \,\big| > \varepsilon n \Big] \le 2k\, e^{-\varepsilon^2 n / 2}. \]
By the assumption we made on $n$, the right hand side is at most $\varepsilon/8$.
Next we analyze the weight $a(\delta(S))$ of the cut edges. Since every $y_j \in [\varepsilon/2,\, 1 - \varepsilon/2]$, any edge $e = (i,j)$ is cut with probability
\[ y_i + y_j - 2y_iy_j \ge \varepsilon(1 - \varepsilon/2) \ge \varepsilon/2. \]
Therefore, $\mathbb{E}[a(\delta(S))] \ge \frac{\varepsilon}{2}\, a(E)$. By Markov's inequality applied to $a(E) - a(\delta(S))$,
\[ \Pr\Big[ a(\delta(S)) < \frac{\varepsilon}{4}\, a(E) \Big] < \frac{1 - \varepsilon/2}{1 - \varepsilon/4} \le 1 - \frac{\varepsilon}{4}. \]
Therefore, the probability that $S$ satisfies every constraint up to an additive error of $\varepsilon n$, and $a(\delta(S)) \ge \frac{\varepsilon}{4}\, a(E)$, is at least $\varepsilon/4 - \varepsilon/8 = \varepsilon/8$. We get the high probability guarantee by repeating the entire rounding procedure a sufficient number of times.
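A minimal sketch of this baseline rounding follows (illustrative: the instance data F, b and the parameter choices are toy assumptions, and scipy's linprog is used only to find a feasible LP point).

    # Sketch of Lemma 9's rounding: feasibility LP, bias, then round.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, eps = 200, 0.2
    F = [rng.choice(n, size=50, replace=False)]    # one toy side constraint
    b = [20]

    A_eq = np.zeros((len(F), n))
    for i, Fi in enumerate(F):
        A_eq[i, Fi] = 1.0
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b, bounds=[(0, 1)] * n)
    y = (1 - eps) * res.x + eps / 2                # biased fractional solution
    S = rng.random(n) < y                          # independent rounding
    print(abs(S[F[0]].sum() - b[0]), eps * n)      # additive error vs. eps*n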
Approximation Ratio Analysis

Lemma 11. For the SDP solution $\{v_i\}$ and the Sticky Brownian Motion $X_t$ described above, and for any pair $\{i,j\}$ of vertices,
\[ \Pr[(X_{\tau_n})_i \ne (X_{\tau_n})_j] \ge 0.843 \cdot \|v_i - v_j\|^2. \]

Proof. Let us denote by $\theta_{ij}$ the angle between the unit vectors $w_i$ and $w_j$, i.e., $\theta_{ij} = \arccos(\langle w_i, w_j \rangle)$. Recall that, for any $i$, $\|v_i\|^2 = x_i$, and $v_i = x_i v_0 + \sqrt{x_i - x_i^2}\, w_i$, where $v_0$ and $w_i$ are orthogonal to each other. Therefore, for any pair $\{i,j\}$, $\|v_i - v_j\|^2$ is characterized entirely by the triple $(x_i, x_j, \theta_{ij})$, and is equal to
\[ \|v_i - v_j\|^2 = x_i + x_j - 2x_ix_j - 2\cos(\theta_{ij})\sqrt{x_i(1-x_i)x_j(1-x_j)}. \tag{21} \]
We will refer to triples $(x, y, \theta)$ as configurations, and will denote the expression on the right hand side of (21), with $x_i = x$, $x_j = y$, and $\theta_{ij} = \theta$, by $SDP(x,y,\theta)$.
To calculate $\Pr[(X_{\tau_n})_i \ne (X_{\tau_n})_j]$, we use the techniques introduced in Section 3. More concretely, let
\[ u_\theta(x,y) = \Pr\big[ (X_{\tau_n})_i \ne (X_{\tau_n})_j \,\big|\, ((X_0)_i, (X_0)_j) = (x,y) \big]. \]
As shown in Section 3, the function $u_\theta$ is the unique solution to the partial differential equations
\[ \frac{\partial^2 u_\theta}{\partial x^2} + \frac{\partial^2 u_\theta}{\partial y^2} + 2\cos(\theta)\frac{\partial^2 u_\theta}{\partial x \partial y} = 0 \qquad \forall (x,y) \in \mathrm{Int}[0,1]^2 \tag{22} \]
\[ u_\theta(x,y) = x + y - 2xy \qquad \forall (x,y) \in \partial[0,1]^2 \tag{23} \]
The above system is a Dirichlet problem and can be solved numerically for any configuration $(x,y,\theta)$.
To calculate the worst case approximation ratio of the Sticky Brownian Motion algorithm, it suffices to evaluate $\min_{x,y,\theta} \frac{u_\theta(x,y)}{SDP(x,y,\theta)}$. However, just taking a minimum over all $(x,y,\theta) \in [0,1]^2 \times [0,\pi]$ is too pessimistic, since there are many configurations $(x,y,\theta)$ which never arise as solutions to $SoS_\ell$ for any $\ell \ge 2$. It is therefore necessary to consider only configurations that may arise as solutions to some instance. In particular, we know that any vectors $v_0, v_i, v_j$ in the SDP solution satisfy the triangle inequalities (11)-(13). Translating these inequalities to inequalities involving $x_i$, $x_j$ and $\theta_{ij}$ gives
\[ \cos(\theta) \ge \max\left( -\sqrt{\frac{(1-x_i)(1-x_j)}{x_i x_j}},\; -\sqrt{\frac{x_i x_j}{(1-x_i)(1-x_j)}} \right) \]
\[ \cos(\theta) \le \min\left( \sqrt{\frac{x_i(1-x_j)}{(1-x_i)x_j}},\; \sqrt{\frac{(1-x_i)x_j}{x_i(1-x_j)}} \right) \]
To compute the worst case approximation ratio, we use numerical methods. In particular, we solve the Dirichlet problem (22)-(23) for all configurations $(x,y,\theta)$ satisfying the inequalities above, with a granularity of 0.02 in each coordinate. This numerical computation shows that the ratio $\frac{u_\theta(x,y)}{SDP(x,y,\theta)}$ is at least 0.843 for all valid configurations.
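The numerical computation described above can be reproduced with a standard finite-difference scheme. The sketch below (illustrative: the grid, iteration count, and plain fixed-point sweeps are simplifications, not necessarily the solver actually used) discretizes (22)-(23) on a grid of step 0.02, matching the granularity mentioned in the text.

    # Finite-difference sketch for the Dirichlet problem (22)-(23).
    import numpy as np

    def solve_dirichlet(theta, m=51, iters=20000):
        c = np.cos(theta)
        g = np.linspace(0, 1, m)                 # grid step 1/(m-1) = 0.02
        X, Y = np.meshgrid(g, g, indexing='ij')
        u = X + Y - 2 * X * Y                    # boundary data and initial guess
        for _ in range(iters):                   # Jacobi-style sweeps
            uxx = u[2:, 1:-1] + u[:-2, 1:-1]
            uyy = u[1:-1, 2:] + u[1:-1, :-2]
            uxy = (u[2:, 2:] + u[:-2, :-2] - u[2:, :-2] - u[:-2, 2:]) / 4
            u[1:-1, 1:-1] = (uxx + uyy + 2 * c * uxy) / 4
        return u

    u = solve_dirichlet(2 * np.pi / 3)
    print(u[25, 25])                             # u_theta(0.5, 0.5)

Sweeping such solutions over the feasible configurations and dividing by $SDP(x,y,\theta)$ reproduces the kind of ratio computation the proof describes.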
Hitting Time Analysis

Lemma 12. For any $i$, and any integer $t \ge 1$, $\Pr[\forall s \le t : 0 < (X_s)_i < 1] < 4^{-t}$.

Proof. We first make some observations about Brownian motion in $\mathbb{R}$. Let $Z_t$ be a standard one-dimensional Brownian motion started at $Z_0 = z \in [0,1]$, and let $\sigma = \inf\{t : Z_t \in \{0,1\}\}$ be the first time $Z_t$ exits the interval $[0,1]$. Then $\mathbb{E}[\sigma] = z(1-z) \le 1/4$. Therefore, by Markov's inequality, $\Pr[\sigma > 1] < 1/4$. Now observe that, by the Markov property of Brownian motion, for any integer $t \ge 1$,
\[ \Pr[\sigma > t] = \prod_{r=0}^{t-1} \Pr\big[ \forall s \in [r, r+1] : 0 < Z_s < 1 \,\big|\, 0 < Z_r < 1 \big]. \]
But, conditional on $Z_r$, the process $\{Z_s\}_{s \ge r}$ is a Brownian motion started at $Z_r$, and, as we observed above, each of the conditional probabilities on the right hand side is bounded by $1/4$. Therefore, we have $\Pr[\sigma > t] < 4^{-t}$.
To prove the lemma, we just notice that, until the first time $\sigma_i$ when $(X_t)_i$ reaches $\{0,1\}$, it is distributed like a one-dimensional Brownian motion started at $x_i$. This follows because, at any $t < \sigma_i$, the variance per unit time of $(X_t)_i$ is $(W_t)_{i,i} = W_{i,i} = 1$. Then, by the observations above, $\Pr[\sigma_i > t] \le 4^{-t}$.
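The tail bound of Lemma 12 can be eyeballed by simulation. The sketch below (illustrative parameters; discretization error is ignored) estimates $\Pr[\sigma > t]$ for a Brownian motion started at $1/2$ and compares it with $4^{-t}$; the empirical values sit far below the bound, which is quite loose.

    # Monte Carlo look at Lemma 12's tail bound (illustrative).
    import numpy as np

    rng = np.random.default_rng(1)
    trials, dt, T = 10000, 0.005, 2
    steps = int(T / dt)
    Z = 0.5 + np.sqrt(dt) * rng.standard_normal((trials, steps)).cumsum(axis=1)
    exited = (Z <= 0) | (Z >= 1)
    first = np.where(exited.any(axis=1), exited.argmax(axis=1) + 1, steps + 1) * dt
    for t in (1, 2):
        print(t, (first > t).mean(), 4.0 ** (-t))   # empirical vs. 4^{-t}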
E Omitted Proofs from Section 5

Hitting Times Analysis

Lemma 21. The expected hitting time $\mathbb{E}[\tau]$ for the diffusion process defined for the Brownian Walk Algorithm with Slowdown is bounded by a constant depending only on $\delta$ and $\alpha$, when the starting point $X_0 \in [-1+\delta,\, 1-\delta]^n$ and $\alpha$ is a constant.

While the hitting time is only defined for points away from the boundary, this is the region where the discrete algorithm runs. Therefore, this is sufficient for the analysis of our algorithm.

Proof Sketch. Without loss of generality, we assume the number of dimensions is 1. In the one-dimensional walk, the diffusion process satisfies the stochastic differential equation
\[ dX_t = (1 - X_t^2)^{\alpha/2}\, dB_t. \tag{24} \]
To show this we use Dynkin's equation to compute stopping times, which we present below specialized to the diffusion process in Equation (24).

Dynkin's Equation (Theorem 7.4.1 in [47]). Let $f \in C^2([-1+\delta,\, 1-\delta])$. Suppose $\mu$ is a finite stopping time; then
\[ \mathbb{E}_x[f(X_\mu)] = f(x) + \mathbb{E}_x\left[ \int_0^\mu \frac{(1 - X_s^2)^\alpha}{2} \cdot \frac{\partial^2 f}{\partial x^2}(X_s)\, ds \right]. \]
Let $f(x)$ denote the function
\[ f(x) = x^2 \cdot {}_2F_1\!\left( \tfrac12,\, \alpha;\; \tfrac32;\; x^2 \right) - \frac{(1 - x^2)^{1-\alpha}}{2(\alpha - 1)} + C_1 \cdot x + C_2, \]
where $C_1$ and $C_2$ are chosen so that $f(1 - \delta) = f(-1 + \delta) = 1$. Observe that for a fixed $\delta > 0$, $f$ is well-defined and finite in the domain $[-1+\delta,\, 1-\delta]$ and satisfies $-K_\delta \le f(x) \le K_\delta$, where $K_\delta$ is a constant depending only on $\delta$ and $\alpha$. Furthermore, $f$ satisfies the differential equation $(1 - x^2)^\alpha\, \frac{\partial^2 f}{\partial x^2} = 1$. We verify this using Mathematica.
Setting $\mu_j = \min(j, \tau)$ and applying Dynkin's equation, we get that
\[ \mathbb{E}_x[f(X_{\mu_j})] = f(x) + \mathbb{E}_x\left[ \int_0^{\mu_j} \frac12\, ds \right] = f(x) + \frac12\, \mathbb{E}_x[\mu_j]. \]
Simplifying the above, we get that $4K_\delta \ge \mathbb{E}_x[\mu_j]$ for all $j$. Since we know that $\mathbb{E}_x[\tau] = \lim_{j \to \infty} \mathbb{E}_x[\mu_j]$ almost surely, we can bound $4K_\delta \ge \mathbb{E}[\tau]$.
Observe that the proof does not work when $\alpha = 1$. For this case, we simply change
\[ f(x) = C_1 \cdot x + C_2 + \frac12\big[ (1+x)\log(1+x) + (1-x)\log(1-x) \big] \]
and the argument goes through verbatim.
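The one-dimensional diffusion (24) is easy to simulate, which gives a concrete feel for Lemma 21. The Euler-Maruyama sketch below (illustrative parameters; $\alpha = 2$ and $\delta = 0.01$ are arbitrary choices) estimates $\mathbb{E}[\tau]$ for the walk started at 0 and stopped at $\pm(1 - \delta)$.

    # Euler-Maruyama sketch of dX_t = (1 - X_t^2)^(alpha/2) dB_t.
    import numpy as np

    rng = np.random.default_rng(2)
    alpha, delta, dt, trials = 2.0, 0.01, 1e-3, 300
    times = []
    for _ in range(trials):
        x, t = 0.0, 0.0
        while abs(x) < 1 - delta:
            x += (1 - x * x) ** (alpha / 2) * np.sqrt(dt) * rng.standard_normal()
            t += dt
        times.append(t)
    print(np.mean(times))   # finite, in line with Lemma 21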
Remark about Lemma 14. The proof is largely similar to the ones described in Lemma 5 and Theorem 9, with two caveats:
1. Theorem 9 is stated with a fixed $\Sigma$. However, we can handle the general case, where $\Sigma$ is allowed to depend on the diffusion process (i.e., $\Sigma(X_t)$), by appealing to the more general Theorem 9.2.14 in [47].
2. To apply Theorem 9.2.14 from [47], we need the resulting matrix $\Sigma(X_t)\Sigma(X_t)^\top$ to have eigenvalues bounded away from zero. In our case, $\Sigma(X_t)\Sigma(X_t)^\top$ can have zero rows and columns on the boundary. To avoid this, we simply restrict our domain to be the hypercube scaled by a small value, $[-1+\delta,\, 1-\delta]^n$.